AI & Agent Development Glossary

50 terms covering AI agents, LLMs, and developer infrastructure. Each definition is self-contained and quotable.

A

A/B Testing

A/B testing compares two or more variants of a system by randomly assigning users to groups and testing whether differences in predefined outcome metrics are statistically significant.

MLOps
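A minimal sketch of the statistics behind an A/B comparison: a two-proportion z-test on conversion counts, using only the standard library (the sample numbers are illustrative, not from any real experiment).

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant B converts 6.5% vs. A's 5.0% on 2,400 users each.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
```

Here p falls below the conventional 0.05 threshold, so the uplift would be called significant at that level; real platforms layer sequential-testing corrections on top of this basic test.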

Agent Harness

An agent harness is the runtime environment that manages an AI agent's execution loop, tool access, permission boundaries, memory persistence, and conversation state.

Developer Tools

Agent Loop

An agent loop is the iterative cycle of observe, reason, act, and evaluate that an AI agent repeats until it completes a task or reaches a termination condition.

AI Agent Development
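The cycle can be sketched as a short driver function; the `llm` callable, action dict shape, and tool interface here are hypothetical stand-ins, not any particular framework's API.

```python
def run_agent(task, tools, llm, max_steps=10):
    """Minimal observe -> reason -> act -> evaluate loop.  `llm` is any
    callable mapping the transcript to an action dict; `tools` maps
    names to callables."""
    transcript = [("task", task)]
    for _ in range(max_steps):
        action = llm(transcript)                     # reason over observations
        if action["type"] == "finish":               # termination condition
            return action["answer"]
        observation = tools[action["tool"]](*action.get("args", ()))  # act
        transcript.append(("observation", observation))               # observe
    return None                                      # step budget exhausted

# Scripted "LLM" for demonstration: call a tool once, then finish.
def scripted_llm(transcript):
    if len(transcript) == 1:
        return {"type": "tool", "tool": "add", "args": (2, 3)}
    return {"type": "finish", "answer": transcript[-1][1]}

result = run_agent("add 2 and 3", {"add": lambda a, b: a + b}, scripted_llm)
```

The `max_steps` budget is the simplest termination condition; production loops add cost limits, timeouts, and failure detection.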

Agent Orchestration

Agent orchestration is the coordination layer that manages how multiple AI agents communicate, share context, delegate tasks, and resolve conflicts within a system.

AI Agent Development

Agentic AI

Agentic AI refers to artificial intelligence systems that autonomously plan, execute, and adapt multi-step tasks toward a goal without requiring human intervention at each step.

AI Agent Development

AI Agent Memory

AI agent memory is the system that persists information across interactions, enabling agents to recall past context, learn from experience, and maintain continuity between sessions.

AI Agent Development

AI Alignment

AI alignment is the research field dedicated to ensuring artificial intelligence systems reliably pursue goals that match human intentions, values, and ethical principles.

AI Safety

AI Coding Agent

An AI coding agent is an autonomous software development assistant that can read codebases, write code, run tests, debug errors, and commit changes with minimal human direction.

Developer Tools

AI Guardrails

AI guardrails are programmatic constraints and validation layers that prevent AI systems from generating harmful, off-topic, or policy-violating outputs during production use.

AI Safety

Attention Mechanism

An attention mechanism allows neural networks to dynamically focus on relevant parts of the input when producing each element of the output, weighting information by learned importance.

LLM Architecture
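The weighting idea reduces to scaled dot-product attention; a pure-Python sketch of the core formula for a single query (real implementations are batched matrix operations):

```python
import math

def softmax(xs):
    m = max(xs)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over key/value lists."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)            # learned-importance weighting
    # Output is the weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query points at the first key, so the first value dominates.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
```

Because the softmax weights sum to one, the output is always a convex combination of the values, biased toward whichever keys the query matches.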

M

MCP Server

An MCP server is a lightweight program that exposes tools, resources, and prompts to AI applications through the Model Context Protocol's standardized client-server interface.

Developer Tools
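On the wire this is JSON-RPC 2.0. The method names `tools/list` and `tools/call` come from the MCP specification; the `get_weather` tool itself is a hypothetical example, shown here as Python dicts for readability.

```python
import json

# Client asks the server what tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server describes each tool with a name and a JSON Schema for its input.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_weather",                      # hypothetical tool
            "description": "Look up current weather for a city",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }]
    },
}

# Client invokes the tool with arguments matching that schema.
call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Paris"}},
}

wire = json.dumps(call_request)   # what actually crosses the transport
```

Publishing the input schema up front is what lets the AI application construct valid tool calls without server-specific code.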

Mixture of Experts

Mixture of Experts (MoE) is a neural network architecture that routes each input to a subset of specialized sub-networks, enabling massive model capacity with efficient per-token computation.

LLM Architecture
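The routing idea in miniature, assuming scalar "experts" and precomputed gate scores (real MoE layers learn the gate and run on tensors):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_scores, k=2):
    """Route input x to the top-k experts by gate score and combine their
    outputs, weighted by gate probabilities renormalized over the top-k."""
    top = sorted(range(len(experts)),
                 key=lambda i: gate_scores[i], reverse=True)[:k]
    weights = softmax([gate_scores[i] for i in top])
    # Only the selected experts run: compute scales with k, not len(experts).
    return sum(w * experts[i](x) for w, i in zip(weights, top))

experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
y = moe_forward(10.0, experts, gate_scores=[0.1, 2.0, 0.5], k=2)
```

With k fixed, adding more experts grows model capacity while per-token compute stays constant, which is the architecture's central trade.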

Model Context Protocol (MCP)

Model Context Protocol is an open standard that defines how AI applications connect to external data sources and tools through a unified client-server interface.

Developer Tools

Model Distillation

Model distillation transfers knowledge from a large teacher model to a smaller student model by training the student to match the teacher's output distributions rather than hard labels.

LLM Architecture
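The training signal can be written as a cross-entropy between temperature-softened distributions; a small sketch with made-up logits:

```python
import math

def softmax(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: the student learns relative class similarities ("dark
    knowledge"), not just the argmax label."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))

teacher = [4.0, 1.0, 0.1]
aligned = distillation_loss([4.0, 1.0, 0.1], teacher)    # student matches teacher
misaligned = distillation_loss([0.1, 1.0, 4.0], teacher) # student disagrees
```

Raising the temperature flattens both distributions, amplifying the gradient signal from the teacher's low-probability classes; practical recipes usually mix this term with the ordinary hard-label loss.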

Model Registry

A model registry is a centralized repository that stores, versions, and manages machine learning model artifacts along with their metadata, lineage, and deployment status.

MLOps
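A toy in-memory registry illustrating the core operations: versioned registration, stage promotion, and production lookup. The interface is invented for illustration, not any real registry's API.

```python
from dataclasses import dataclass, field

@dataclass
class ModelVersion:
    version: int
    artifact_uri: str
    stage: str = "staging"            # staging | production | archived
    metrics: dict = field(default_factory=dict)

class ModelRegistry:
    def __init__(self):
        self._models = {}             # name -> list of ModelVersion

    def register(self, name, artifact_uri, metrics=None):
        versions = self._models.setdefault(name, [])
        mv = ModelVersion(len(versions) + 1, artifact_uri, metrics=metrics or {})
        versions.append(mv)
        return mv

    def promote(self, name, version):
        # Exactly one production version: archive the previous one.
        for mv in self._models[name]:
            if mv.version == version:
                mv.stage = "production"
            elif mv.stage == "production":
                mv.stage = "archived"

    def production(self, name):
        return next(m for m in self._models[name] if m.stage == "production")

reg = ModelRegistry()
reg.register("ranker", "s3://models/ranker/1", {"auc": 0.81})
reg.register("ranker", "s3://models/ranker/2", {"auc": 0.84})
reg.promote("ranker", 2)
prod = reg.production("ranker")
```

Keeping metrics and artifact URIs alongside versions is what enables auditable rollbacks: promotion is a metadata change, not a redeploy of bytes.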

Model Serving

Model serving deploys trained machine learning models as production services that accept inference requests and return predictions with low latency and high availability.

MLOps

Multi-Agent System

A multi-agent system is an architecture where multiple specialized AI agents collaborate, communicate, and coordinate to solve problems that exceed any single agent's capabilities.

AI Agent Development
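The delegation pattern in its smallest form: a coordinator hands subtasks to role-specialized agents and chains their outputs. The roles and plain-function "agents" here are hypothetical placeholders for LLM-backed workers.

```python
def researcher(task):
    # Stand-in for an agent that gathers and summarizes information.
    return f"notes on {task}"

def writer(material):
    # Stand-in for an agent that turns notes into prose.
    return f"draft based on {material}"

def coordinator(goal, specialists):
    """Decompose a goal, delegate each subtask to the matching
    specialist, and pass intermediate results downstream."""
    notes = specialists["research"](goal)   # subtask 1
    draft = specialists["write"](notes)     # subtask 2 consumes subtask 1
    return draft

out = coordinator("MoE survey", {"research": researcher, "write": writer})
```

The value of the pattern is the explicit handoff: each specialist sees only the context it needs, and the coordinator owns sequencing and conflict resolution.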

Multimodal AI

Multimodal AI refers to systems that can process, understand, and generate content across multiple data types including text, images, audio, and video within a unified model.

Machine Learning