Glossary of AI terms

Whether you use a language model at work, or an algorithm helps you decide what to watch in the evening, you’re probably using artificial intelligence (AI) on a regular basis. It’s a fast-moving field, and keeping up with all the different aspects of it can be a challenge, but with its use becoming more and more prevalent, it’s vital to understand the basics. Whether you're brand new to the world of AI, or well-versed but missing some of the ever-evolving vocabulary, this glossary of AI terms is for you.

Artificial intelligence (AI)

AI, or artificial intelligence, is a branch of computer science that focuses on creating machines or software that can think, learn, and make decisions like humans.

AI can help with everything from managing schedules to improving healthcare and making businesses more efficient.

AI agent

An AI agent is a virtual helper that can observe its surroundings and take action to achieve a goal on its own.

Examples include virtual assistants (like Siri or Alexa), which listen to your requests and respond, and self-driving cars, which navigate roads by sensing their surroundings and making driving decisions. These software programs are set up by humans but take action by themselves.

AI ethics

AI ethics is the implementation of rules and guidelines that help people create and use AI technology responsibly. It's essentially a moral compass for machines and the people who make them.

AI ethics is vital for avoiding mistakes, building trust, and protecting people's rights. For example, it helps ensure that an AI-powered job application system doesn't favour one gender over another.

Algorithm

An algorithm in AI is a set of instructions that tell a computer how to solve a problem or complete a task.

Algorithms allow AI to learn from data, make predictions, recognise patterns, and make decisions.
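
As a toy illustration in Python (not taken from any real AI system), here's a tiny algorithm that counts the words in a piece of text and returns the most common one. Far more sophisticated versions of this kind of step-by-step recipe sit behind modern AI.

```python
# A tiny algorithm: count how often each word appears and return the most common one.
def most_common_word(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1   # tally each word
    return max(counts, key=counts.get)           # word with the highest count

print(most_common_word("the cat sat on the mat"))  # prints "the"
```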

Alignment

Alignment in AI refers to the training practices that ensure AI systems remain helpful and safe, preventing them from producing harmful outputs and steering them towards ethical principles.

For example, if you asked an aligned AI to make you a sandwich, it would understand that you want a safe, edible sandwich (rather than try to turn you into a sandwich). It might even consider your dietary requirements and preferences.

Augmented intelligence

Augmented intelligence is an approach to using AI that focuses on enhancing human intelligence rather than replacing it. 

For example, AI can assist doctors in diagnosing diseases by analysing medical images, but the final decision rests with the human doctor.

Bias

Bias in AI refers to when systems produce unfair or discriminatory results. This is often caused by the data used to train the AI: if the data doesn't represent everyone fairly, or if it contains inherent (and unhelpful) human biases, the system can end up reproducing them.

These biases can manifest in various ways, such as facial recognition working better for some skin tones than others or job application systems favouring certain types of candidates.

Chatbot

A chatbot is an AI-powered program that simulates conversation with users, making it easier to get information, support, or entertainment without needing to talk to a human directly. Because of their efficiency and 24/7 availability, they're often used for customer support or as personal assistants, like Siri or Google Assistant. 

They can be rule-based, meaning they produce scripted responses based on specific keywords or patterns, or they can be powered by an LLM (large language model), enabling them to respond more naturally.

ChatGPT

ChatGPT is an advanced AI chatbot developed by OpenAI and launched in 2022 that allows users to have conversations using natural language. 

See also Generative pre-trained transformers (GPT).

Completion

Completion refers to the output a language model produces in response to a given prompt or starting point.

If the prompt is a question, the completion is an answer; if the prompt is the beginning of a story, the completion continues it.

Data science

Data science is the practice of using data to understand and solve problems. It combines various fields like statistics, computer science, and domain knowledge to analyse information and extract valuable insights. 

The insights and patterns discovered through data science can be used to train AI systems, making them more intelligent and capable.

Deep learning

Deep learning is a type of machine learning that mimics how the human brain works, allowing it to learn and make decisions independently. It uses artificial neural networks with many layers to analyse data. 

As a result, it's effective at tasks that are easy for humans but hard for regular computers, like recognising faces or understanding speech.

Emergence/Emergent behaviour

Emergence in AI refers to unexpected abilities or behaviours in advanced AI systems, especially as they grow more complex. 

Emergent abilities are hard to foresee, even for the AI's creators. This could lead to AI systems becoming much more capable very quickly, which excites some researchers and concerns others.

Explainable AI (XAI)

XAI, or explainable AI, is an approach to building AI systems so that they can explain their decision-making in terms humans can understand.

It aims to get AI systems to show their working rather than just give the final answer. This makes it easier to spot errors and creates a more user-friendly experience, especially for non-experts.

Generative AI (GenAI)

Generative AI, or GenAI, is a type of AI that can create new content, such as images, text, code, music, and voices, on its own. 

It's trained on lots of existing data, like books, pictures, or songs, to learn patterns and styles so that it can respond creatively to prompts and produce outputs that can often be mistaken for human-made work.

See also Predictive AI.

Generative pre-trained transformers (GPT)

GPT, which stands for generative pre-trained transformer, is a family of particularly popular AI models developed by OpenAI that can understand and generate human-like text.

These models are trained on vast datasets, giving them a grasp of grammar, facts, and even some reasoning ability. They can be used for chatbots, content creation, language translation, and coding assistance.

See also ChatGPT.

Hallucination

An AI hallucination occurs when a chatbot makes up false information and presents it as if it were true. This could be in the form of imaginary facts, made-up quotes, or links that lead nowhere. 

Because hallucinations are often plausible and presented confidently, they can be hard to spot, which is why it's important to be vigilant when checking the information you’re presented with. Hallucinations are a significant challenge in AI development because they can undermine the reliability and trustworthiness of AI systems.

Hyperparameter

Hyperparameters are the settings you choose before the AI starts learning, like adjusting the dials on a machine before turning it on. They control how the AI learns rather than what it learns, such as how quickly it learns or how complex its "thinking" can be. 

Unlike regular parameters, which the AI figures out on its own, hyperparameters have to be set by a human. This is often done through trial and error to find the best outcome.
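
As a rough sketch in plain Python (no particular AI framework assumed), the learning rate and number of epochs below are hyperparameters chosen by a human, while the value w is a parameter the model works out for itself.

```python
# Hyperparameters: chosen by a human before training starts.
learning_rate = 0.01
epochs = 200

# Toy training data: y is roughly 3 times x.
data = [(1, 3.1), (2, 5.9), (3, 9.2), (4, 11.8)]

# Parameter: learned by the model during training.
w = 0.0
for _ in range(epochs):
    for x, y in data:
        error = (w * x) - y
        w -= learning_rate * error * x   # nudge w to reduce the error

print(round(w, 2))  # ends up close to 3.0
```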

Language model

A language model is an AI system that helps computers understand and generate human language. 

Language models come in various sizes and complexities, but generally, they can all predict which words come next in a sentence and help create sentences that sound natural.
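
As a heavily simplified sketch (real language models are vastly more sophisticated), the snippet below "learns" which word tends to follow another by counting word pairs in a small sample of text, then uses those counts to guess the next word.

```python
from collections import defaultdict, Counter

# Count which word follows which in a tiny sample of text.
text = "the cat sat on the mat and the cat slept"
words = text.split()
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

# Predict the most likely next word after "the".
print(following["the"].most_common(1)[0][0])  # prints "cat"
```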

Large language model (LLM)

LLMs, or large language models, are advanced AI systems that can understand and generate human-like text, making them a powerful tool for various language-related tasks. 

They're trained on massive amounts of text from books, websites, and other sources. Many popular AI chatbots, like ChatGPT, are powered by LLMs, enabling them to offer more natural, human-like conversations compared to traditional rule-based chatbots, which are limited to scripted responses.

Machine learning

Machine learning is what allows computers to improve their performance on a task through experience, without being explicitly programmed for every possible scenario, making them adaptable and capable of handling complex situations.

The more data and experience the system gets, the better it becomes at its task, just as people improve with practice.

See also Deep learning.

Natural language processing (NLP)

Natural language processing, or NLP, is the technology that allows computers to read, understand, and respond to human language meaningfully. It acts as a translator between humans and computers, helping machines grasp the meaning behind words, including context, tone, and intent. 

NLP enables computers to perform tasks like translating between languages, summarising long texts, and answering questions. It underpins almost every computer system that you interact with through language.

Neural network

A neural network is a computing system, loosely inspired by the human brain, that can recognise patterns and make decisions, improving its performance as it's exposed to more data. It's a fundamental building block of deep learning and of many modern AI systems.

Imagine a web of connected points, where each point (like a neuron) can send and receive information. The network learns by adjusting the strength of connections between these points based on the examples it's presented with. As data moves through the network, it learns to recognise patterns, much like how we recognise faces or objects. 
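
As a minimal sketch of the idea, the code below models a single artificial neuron rather than a full network: it combines its inputs using weights, adds a bias, and passes the result through an activation function to produce an output between 0 and 1.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # The activation function squashes the result into the range 0 to 1.
    return 1 / (1 + math.exp(-total))

# Two inputs with hand-picked weights and a bias; training would learn these values.
print(neuron([0.5, 0.8], [0.9, -0.3], bias=0.1))
```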

One-shot/Few-shot

One-shot or few-shot learning in AI is a way to teach computers to understand new things using only a small number of examples rather than needing thousands or even millions of samples. 

It aims to make AI more like human learners, who can often grasp new ideas from just a few instances. This approach is useful when it's difficult, expensive, or time-consuming to collect large amounts of data for training. 
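
With language models, few-shot learning often simply means including a handful of worked examples in the prompt itself. The sketch below shows what such a prompt might look like; the examples and format are purely illustrative and not tied to any particular model.

```python
# A few-shot prompt: a handful of labelled examples, then the new case to classify.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "Absolutely loved it, would buy again." -> Positive
Review: "Broke after two days, very disappointed." -> Negative
Review: "Great value for money and fast delivery." -> Positive

Review: "The instructions were confusing and support never replied." ->"""

print(few_shot_prompt)  # this text would be sent to a language model as the prompt
```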

Overfitting

Overfitting is when AI becomes very familiar with the specific data it was trained on but struggles with new, slightly different information, like a student who memorises specific maths problems but can't solve similar ones with different numbers.

The challenge in AI is to find the right balance between learning from data and maintaining the ability to generalise for new situations.
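
As a rough illustration using NumPy (assuming it's installed), the high-degree curve below passes through the five training points exactly but gives a poor prediction for a new value, while the simpler straight line generalises better.

```python
import numpy as np

# Five noisy training points that roughly follow y = 2x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

simple = np.polyfit(x, y, deg=1)   # straight line: likely to generalise well
wiggly = np.polyfit(x, y, deg=4)   # degree-4 curve through 5 points: fits them exactly

# Predict at a new point the models haven't seen.
new_x = 8.0
print(np.polyval(simple, new_x))   # close to 16, in line with the underlying trend
print(np.polyval(wiggly, new_x))   # roughly double that: a sign of overfitting
```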

See also Underfitting.

Parameter

In AI, a parameter is an internal setting that the AI model learns during its training process. In many AI models, especially neural networks, parameters typically include weights (which determine the importance of inputs) and biases (which allow the model to adjust its output). 

Well-tuned parameters lead to better performance and more accurate results.
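
As a minimal sketch, here's what learned parameters might look like once training has finished: a weight and a bias that together turn an input into a prediction. The numbers and the delivery-time scenario are made up purely for illustration.

```python
# Learned parameters of a made-up model that predicts a delivery time in minutes.
weight = 4.2   # learned importance of the input (distance in km)
bias = 12.0    # learned baseline adjustment

def predict_delivery_minutes(distance_km):
    return weight * distance_km + bias

print(predict_delivery_minutes(5))  # 4.2 * 5 + 12 = 33.0 minutes
```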

See also Hyperparameter.

Predictive AI 

Predictive AI is a type of AI that uses data from the past to make educated guesses about what might happen in the future. 

Predictive AI is helpful in a wide variety of fields, including finance, healthcare, marketing, and even weather forecasting. By recognising repeating patterns and trends, it acts as a kind of probability calculator, helping organisations make more informed decisions.

See also Generative AI.

Prompt

A prompt is a question or instruction you give to an AI to get it to generate a specific response. It's the way you communicate with the AI to guide its output. 

The quality and clarity of the prompt can significantly influence the relevance and accuracy of the AI's answer.

Prompt engineering

Prompt engineering is the art of crafting effective instructions for AI systems to get the best possible results. 

By carefully wording prompts, you can steer the AI towards more accurate, relevant, or creative responses. 
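
As an illustrative sketch (the wording is invented, not taken from any official guide), compare a vague prompt with an engineered one that spells out the audience, format, and length.

```python
# A vague prompt leaves the AI guessing about audience, format, and length.
vague_prompt = "Write about cybersecurity."

# An engineered prompt spells out exactly what a good answer looks like.
engineered_prompt = (
    "Write a 150-word summary of three cybersecurity habits for small-business "
    "employees with no technical background. Use a friendly tone and present "
    "the habits as a numbered list."
)

print(engineered_prompt)  # this is the text you'd send to the AI
```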

Prompt injection

Prompt injection is a security vulnerability in AI systems, particularly language models, where someone tricks the AI into doing something it's not supposed to do. 

Attackers attempt to bypass the AI's built-in restrictions to make it produce responses it normally wouldn't, such as revealing sensitive information or generating harmful content.

Reinforcement learning

Reinforcement learning is a training approach in which an AI learns by interacting with its environment and receiving feedback.

Examples of this in action could include an AI learning to play chess by playing many games and seeing which moves lead to winning, or a robot learning to walk by trying different movements and getting "rewards" for staying upright and moving forward. 
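
As a toy sketch of the idea rather than a full reinforcement learning algorithm, the simple agent below tries two actions at random, tracks the average reward each one earns, and ends up knowing which action is better.

```python
import random

# Two possible actions; action "B" secretly pays off more often.
rewards = {"A": 0.3, "B": 0.7}   # probability that each action earns a reward
totals = {"A": 0.0, "B": 0.0}    # total reward earned by each action
counts = {"A": 0, "B": 0}        # how many times each action was tried

random.seed(42)
for _ in range(1000):
    action = random.choice(["A", "B"])                 # explore by picking at random
    reward = 1 if random.random() < rewards[action] else 0
    totals[action] += reward
    counts[action] += 1

# The agent's learned estimate of how good each action is.
for action in ["A", "B"]:
    print(action, round(totals[action] / counts[action], 2))
# It ends up preferring "B", the action with the higher average reward.
```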

Reinforcement learning from human feedback (RLHF)

Reinforcement learning from human feedback (RLHF) is a way to train AI systems by giving them feedback on their performance to help them understand what we consider good or bad responses in various situations. The AI systems learn what humans like or don't like based on their feedback and try different approaches to improve over time. 

In RLHF, the feedback comes directly from humans, unlike other reinforcement learning techniques, where the feedback comes from a pre-programmed reward signal.

Retrieval-augmented generation (RAG)

RAG, or retrieval-augmented generation, is a way to make language models smarter and more accurate by giving them access to extra information, such as a company's knowledge base, a website, or an annual report.

This means that when you ask a question, the model doesn't just draw on what it learned during training; it also quickly searches those extra sources for relevant information to give you a more accurate answer.
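
At a high level, RAG boils down to "retrieve relevant text first, then generate an answer using it". The sketch below is hypothetical: retrieve_relevant_documents and generate_answer stand in for a real search index and a real language model.

```python
def retrieve_relevant_documents(question, knowledge_base):
    # Hypothetical retrieval step: keep documents that share words with the question.
    # A real system would use a proper search index or vector database.
    question_words = set(question.lower().split())
    return [doc for doc in knowledge_base if question_words & set(doc.lower().split())]

def generate_answer(question, documents):
    # Hypothetical generation step: a real system would send this combined text
    # to a language model and return its response.
    context = " ".join(documents)
    return f"Answer to '{question}' based on: {context}"

knowledge_base = [
    "Our refund policy allows returns within 30 days.",
    "Support hours: Monday to Friday, 9am to 5pm.",
]

question = "What is the refund policy?"
relevant = retrieve_relevant_documents(question, knowledge_base)
print(generate_answer(question, relevant))
```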

Supervised training

In supervised training, humans label data before giving it to the model for training. So, for example, the model would be taught to recognise cats by being shown thousands of pictures labelled "cat" or "not cat." 

The AI learns by studying many examples where the correct answer is provided, allowing it to recognise patterns and apply this knowledge to new situations. 
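
As a minimal sketch using scikit-learn (assuming it's installed), every training example below comes with a human-provided label; the model studies the labelled examples and then predicts the label for a new, unseen case.

```python
from sklearn.tree import DecisionTreeClassifier

# Each example is [weight in grams, has whiskers (1 = yes, 0 = no)].
examples = [[4000, 1], [4500, 1], [300, 0], [250, 0]]
labels = ["cat", "hamster"][0:1] * 2 + ["hamster"] * 2   # labels supplied by humans
labels = ["cat", "cat", "hamster", "hamster"]            # written out plainly

model = DecisionTreeClassifier()
model.fit(examples, labels)            # learn from the labelled examples

print(model.predict([[4200, 1]]))      # predicts ['cat'] for a new animal
```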

See also Unsupervised training.

Token

A token in AI, particularly in language models, is a small piece of text that the AI uses to understand language. It can be a single letter, a whole word, or even part of a word, depending on how the AI is set up. 

By breaking text into tokens, the AI can better understand the relationships between different parts of the text.

When you hear about AI models having limits (like "4000 tokens"), it refers to how much text they can handle at once. Some AI providers also charge based on the number of tokens used in prompts and responses.
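
As a simplified sketch (real tokenisers split text into subwords using learned rules, not just spaces), the snippet below shows the basic idea of breaking a sentence into tokens and counting them.

```python
# A simplified tokeniser: split on spaces. Real tokenisers also split words
# into smaller pieces, so "unbelievable" might become "un", "believ", "able".
text = "AI models read text as tokens"
tokens = text.split()

print(tokens)        # ['AI', 'models', 'read', 'text', 'as', 'tokens']
print(len(tokens))   # 6 tokens: model limits and pricing are counted this way
```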

Training data

Training data in AI is the information used to teach an artificial intelligence system how to perform its tasks. It's essentially the material an AI uses to learn about the world and how to do its job. 

By exposing the AI to lots of data, it learns to recognise patterns and make decisions based on those patterns. This data often comes from real-world sources and can include text, images, audio, and any other type of data that's relevant to the AI's purpose. 

The better and more diverse the training data, the more effective the AI becomes at its tasks.

Transfer learning

Transfer learning is the process of using what an AI has learned from one task to help it learn another more quickly than if it were starting from scratch. For example, training an AI that can already recognise cars to recognise trucks is a form of transfer learning.

Transfer learning is particularly useful when you don't have enough data or computing power to train a complex AI from scratch.

Transformer

A transformer in AI is a powerful type of neural network whose key feature, known as attention, is the ability to focus on the most important parts of the input, making it great at grasping the meaning of words based on their context.

While originally designed for language tasks, it's now used for images, audio, and even scientific data.

Underfitting

Underfitting in AI is when a machine learning model is too simple to capture the important patterns in the data it's learning from.

For example, an AI could be trying to predict house prices using only the number of bedrooms, ignoring important factors like location or size.

Unsupervised training

Unsupervised training in AI is where a system learns to find patterns in data without being given the answers. This means unexpected patterns may be identified, which may be positive or negative depending on your goal. 

This approach is particularly useful for discovering new insights that humans might not have noticed, as it avoids being limited by preconceived notions of what's important. 
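
As a minimal sketch using scikit-learn (assuming it's installed), the clustering example below is given no labels at all; it simply groups similar data points together and leaves it to a human to decide what the groups mean.

```python
from sklearn.cluster import KMeans

# Customer data: [number of purchases, average spend]. No labels are provided.
customers = [[2, 10], [3, 12], [2, 9], [40, 200], [38, 190], [42, 210]]

# Ask the algorithm to find 2 groups on its own.
model = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = model.fit_predict(customers)

print(groups)  # e.g. [0 0 0 1 1 1]: occasional buyers vs big spenders
```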

See also Supervised training.

Weight

In AI, weight refers to the importance or strength of connections within a neural network. As the AI learns, it adjusts these weights to improve its performance, giving more importance to relevant information and less to irrelevant details, allowing it to make better decisions. 

Weights are usually represented as numbers, where higher numbers mean stronger connections.

Ready to harness the power of AI?

Narus helps future-focused businesses use and manage generative AI securely, so you can give your teams the tools they need whilst giving your business the protection it deserves.

Try it free