Glossary

A comprehensive collection of AI terminology, sorted alphabetically.

A

Artificial Intelligence (AI)

The simulation of human intelligence processes by machines, especially computer systems. This includes learning, reasoning, problem-solving, perception, and language understanding.

C

Computer Vision

A field of AI focused on enabling machines to interpret and process visual information from the world, such as images and videos.

D

Deep Learning

A specialized subset of ML that uses multi-layered neural networks to model complex patterns in large datasets, often used in image and speech recognition.

E

Edge AI

AI processing that happens locally on hardware devices (like smartphones or IoT sensors) rather than in the cloud or on centralized servers.

Explainability (XAI)

Techniques and practices that make the behavior and decisions of AI models understandable to humans.

F

Fine-Tuning

The process of taking a pre-trained model and continuing its training on a new dataset to specialize it for a specific task.
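The idea can be sketched in miniature. Below, a one-parameter linear model is "pre-trained" on one dataset, then training simply continues on a new, smaller dataset starting from the learned weight instead of from scratch. This is an illustrative toy, not a real workflow; actual fine-tuning updates millions of weights in a framework such as PyTorch.

```python
# Toy sketch of fine-tuning a one-parameter linear model y = w * x.

def train(w, data, lr=0.01, epochs=200):
    """Gradient descent on mean squared error for y = w * x."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Pre-training: larger dataset where the true relationship is y = 2x.
pretrain_data = [(x, 2.0 * x) for x in range(1, 11)]
w = train(0.0, pretrain_data)          # w ends up near 2.0

# Fine-tuning: small dataset for a related task, y = 2.5x.
# Training resumes from the pre-trained weight rather than from zero.
finetune_data = [(1.0, 2.5), (2.0, 5.0), (3.0, 7.5)]
w = train(w, finetune_data)            # w shifts toward 2.5
print(round(w, 2))
```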

G

Generative AI

AI systems capable of producing new content such as text, images, audio, or code by learning patterns from existing data.

H

Hallucination

When an AI model generates information that appears plausible but is false or nonsensical, particularly in LLMs.

I

Inference

The phase in which an AI model applies what it has learned during training to new, unseen data in order to make predictions or decisions.

L

Large Language Model (LLM)

A type of deep learning model trained on massive text corpora to perform tasks like translation, summarization, question answering, and text generation.

M

Machine Learning (ML)

A subset of AI focused on building systems that can learn from and make decisions based on data without being explicitly programmed.

MCP (Model Context Protocol)

MCP is an open standard, introduced by Anthropic in 2024, that allows AI models to interact with external tools—like your calendar, CRM, Slack, or codebase—easily, reliably, and securely. Previously, developers had to write their own custom code for each new integration.

Model

An AI model is a program loosely inspired by the human brain: you give it some input (i.e. a prompt), it does some processing, and it generates a response. Like a child, a model "learns" by being exposed to many examples during training, gradually adjusting its internal parameters to capture the patterns in that data.

Model Weights

Numerical parameters within a neural network that determine how input data is transformed into output. They are adjusted during training.

N

Natural Language Processing (NLP)

A field of AI that enables machines to understand, interpret, and generate human language, including speech and text.

Neural Network

A series of algorithms modeled after the human brain that are designed to recognize patterns and interpret sensory data through machine perception, labeling, or clustering.
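The two entries above can be made concrete with a minimal forward pass in plain Python: two inputs, a hidden layer of two ReLU neurons, and a sigmoid output. The numbers in `w_hidden`, `b_hidden`, `w_out`, and `b_out` are the model weights; training would adjust them, while inference just applies them. All values here are arbitrary, chosen purely for illustration.

```python
import math

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: weighted sum of inputs, then a nonlinearity.
    hidden = [
        relu(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(w_hidden, b_hidden)
    ]
    # Output layer: weighted sum of hidden activations, squashed to (0, 1).
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

w_hidden = [[0.5, -0.6], [0.3, 0.8]]   # one weight per input, per neuron
b_hidden = [0.1, -0.2]
w_out = [1.2, -0.7]
b_out = 0.05
result = forward([1.0, 2.0], w_hidden, b_hidden, w_out, b_out)
print(result)
```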

O

Overfitting

A modeling error that occurs when an AI model learns the training data too well, including noise or irrelevant details, resulting in poor performance on new data.
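A small demonstration: a 1-nearest-neighbor "model" memorizes its training points, noise included, so it scores perfectly on the training set but stumbles on new inputs near the noisy point. The data here is synthetic, invented for illustration.

```python
def predict(x, train):
    # 1-NN: return the label of the single closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

train = [(1, 1), (2, 2), (3, 99), (4, 4)]   # (3, 99) is a noisy label
test = [(1.1, 1), (2.9, 3), (3.9, 4)]

train_err = sum(predict(x, train) != y for x, y in train)  # memorized: 0 errors
test_err = sum(predict(x, train) != y for x, y in test)    # noise misleads nearby inputs
print(train_err, test_err)
```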

P

Prompt Engineering

The process of crafting inputs (prompts) to guide LLMs or generative AI models to produce desired outputs.

R

RAG (Retrieval-Augmented Generation)

A technique that supplements a language model's prompt with relevant documents retrieved from an external knowledge source, such as a search index or vector database. Grounding the model's answer in retrieved text helps keep responses current and reduces hallucinations.
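A minimal sketch of retrieval-augmented generation: score a small document store against the user's question, then build a prompt that grounds the model's answer in the retrieved text. Real systems use embedding similarity and a vector database; the word-overlap scoring here is a toy stand-in, and all documents are invented for illustration.

```python
def retrieve(question, documents, k=1):
    """Return the k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

documents = [
    "The warranty covers manufacturing defects for 24 months.",
    "Shipping within the EU takes 3 to 5 business days.",
    "Returns are accepted within 30 days of delivery.",
]
question = "How long does the warranty last?"
context = "\n".join(retrieve(question, documents))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # an LLM would receive this grounded prompt
```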

Reinforcement Learning (RL)

A type of machine learning where agents learn optimal behaviors through rewards and penalties as they interact with an environment.

RLHF

RLHF (reinforcement learning from human feedback) is a post-training technique that goes beyond next-word prediction and fine-tuning by teaching AI models to behave the way humans want them to—making them safer, more helpful, and aligned with our intentions.

S

Supervised Learning

Supervised learning refers to when a model is trained on “labeled” data—meaning the correct answers are provided. For example, the model might be given thousands of emails labeled “spam” or “not spam” and, from that, learn to spot the patterns that distinguish the two.
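The spam example can be sketched as a toy supervised learner: it sees labeled emails, counts how often each word appears under each label, and labels a new email by which class its words fit better. A real system would use a proper model such as naive Bayes; the training emails below are invented, but the principle is the same: learn from provided answers.

```python
from collections import Counter

training_data = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting notes attached", "not spam"),
    ("lunch tomorrow with the team", "not spam"),
]

# "Training": count word occurrences per label.
counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    counts[label].update(text.split())

def classify(text):
    # Score each label by how many training-word hits the email gets.
    scores = {
        label: sum(counter[w] for w in text.split())
        for label, counter in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("claim your free prize"))
```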

T

Tokenization

The process of converting raw text into smaller pieces (tokens) such as words, subwords, or characters that a model can process.
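A toy illustration of the idea: split text into word and punctuation tokens, then map each token to an integer ID from a vocabulary built on the fly. Real LLM tokenizers use learned subword schemes such as BPE, but the shape is the same: text in, integer IDs out.

```python
import re

def tokenize(text):
    # Words become one token each; punctuation becomes its own token.
    return re.findall(r"\w+|[^\w\s]", text.lower())

vocab = {}

def encode(tokens):
    # Assign each unseen token the next free integer ID.
    return [vocab.setdefault(t, len(vocab)) for t in tokens]

tokens = tokenize("Tokenization splits text, then maps tokens to IDs.")
ids = encode(tokens)
print(tokens)
print(ids)
```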

Training Data

The dataset used to teach a machine learning model to recognize patterns or make decisions.

Training/Pre-training

Training is the process by which an AI model learns by analyzing massive amounts of data. This data might include large portions of the internet, every book ever published, audio recordings, movies, video games, etc. Training state-of-the-art models requires enormous amounts of data and computing power.

Transfer Learning

A technique where a pre-trained model is adapted to a new, related task, reducing the amount of data and compute required for training.

Transformer

The transformer architecture, developed by Google researchers in 2017, is the algorithmic discovery that made modern AI (and LLMs in particular) possible. Transformers introduced a mechanism called “attention,” where instead of only being able to read text one word at a time, the model can weigh how every word in a sequence relates to every other word at once. This lets it capture long-range context and makes training highly parallelizable.
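The attention mechanism itself is compact enough to sketch: every position computes similarity scores against every other position, softmaxes them into weights, and takes a weighted average of the value vectors. This is scaled dot-product attention on tiny hand-made vectors; real transformers learn separate query, key, and value projections and run many attention heads in parallel.

```python
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        outputs.append([
            sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))
        ])
    return outputs

# Three token positions with 2-dimensional vectors (made up for illustration).
vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(vecs, vecs, vecs)   # self-attention: q = k = v
print(out)
```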

U

Unsupervised Learning

Unsupervised learning is the opposite of supervised learning: the model is given data without any labels or answers. Its job is to discover patterns or structure on its own, like grouping similar news articles together or detecting unusual patterns in data.