Compare Amazon Bedrock AgentCore and LangGraph for AI agent orchestration. Architecture, state management, deployment, and pricing differences explained with code examples.

TL;DR: AgentCore is a managed AWS runtime for deploying and operating AI agents with built-in scaling, memory, and security. LangGraph is an open-source graph-based framework for building stateful, controllable agent workflows with explicit state machines. Choose AgentCore when you need managed infrastructure and production operations on AWS. Choose LangGraph when you need fine-grained control over agent execution flow, branching logic, human-in-the-loop patterns, or complex multi-step orchestration. They are complementary -- you can orchestrate agent logic with LangGraph and deploy it on AgentCore Runtime.
AI agent orchestration in 2026 has split into two distinct camps: managed platforms that handle infrastructure, and open-source frameworks that give developers explicit control over agent behavior. Amazon Bedrock AgentCore and LangGraph represent the best of each approach, and developers building production agents need to understand when each tool is the right choice.
This comparison matters because agent orchestration is fundamentally harder than single-turn LLM calls. Real-world agents make decisions across multiple steps, maintain state between those steps, recover from failures, and sometimes need human approval before proceeding. How you handle that orchestration -- whether through a managed runtime or an explicit state graph -- determines your agent's reliability, debuggability, and operational cost.
AgentCore and LangGraph are not competing for the same layer. AgentCore is infrastructure. LangGraph is orchestration logic. But they overlap enough in developer mindshare that teams frequently ask which one to adopt, and the answer is often both.
AgentCore is Amazon Bedrock's managed runtime and infrastructure service for AI agents. It provides five integrated components: Memory for persistent context with semantic search, Runtime for auto-scaling agent hosting, Code Interpreter for sandboxed execution, Browser for cloud-based web automation, and Gateway for MCP-based tool integration. AgentCore abstracts away deployment, scaling, monitoring, and security, allowing developers to focus on agent logic. It is tightly coupled to the AWS ecosystem and requires an AWS account.
LangGraph is an open-source framework built by the LangChain team for creating stateful, multi-step agent applications. Released under the MIT license, LangGraph models agent workflows as directed graphs where nodes represent computation steps and edges define transitions based on state. Its key innovation is explicit state management: agents maintain a typed state object (using Python's TypedDict) that every node can read and write, with automatic checkpointing for persistence. LangGraph supports cycles, conditional branching, parallel execution, and human-in-the-loop patterns. It runs locally, on any cloud, or via LangGraph Cloud for managed deployment.
The architectural difference between AgentCore and LangGraph reflects fundamentally different answers to the question "what is an agent?"
AgentCore treats an agent as a service endpoint. You write agent logic, package it, and deploy it as a managed service that receives requests and returns responses. AgentCore handles everything around that logic: scaling, health checks, memory persistence, tool access, and security. The agent's internal decision-making process is a black box from AgentCore's perspective -- it manages the runtime, not the reasoning.
LangGraph treats an agent as a state machine. You define the agent's behavior as a graph where each node performs a computation (LLM call, tool execution, data transformation) and edges determine what happens next based on the current state. This makes the agent's decision flow explicit and inspectable. You can see exactly which node executed, what state was passed, and why a particular branch was taken.
The practical consequence: LangGraph gives you more control over agent behavior but requires you to design the graph. AgentCore gives you less control over orchestration but eliminates operational complexity.
State management is where LangGraph most clearly differentiates itself -- not just from AgentCore, but from nearly every other agent framework.
In LangGraph, state is a first-class concept. Every graph has a typed state schema (typically a TypedDict), and every node receives the current state and returns updates to it. State transitions are explicit and deterministic given the same inputs. LangGraph's checkpointer automatically persists state at every step, enabling pause/resume workflows, time-travel debugging (replaying from any checkpoint), and crash recovery. This is powerful for complex agents that need to maintain context across many steps.
AgentCore Memory provides persistent context through a different mechanism: it is a managed semantic memory service. Rather than checkpointing agent execution state, AgentCore Memory stores and retrieves contextual information using natural language queries. It supports hierarchical organization by actors and sessions, semantic search over stored memories, and custom memory strategies. This is well-suited for conversational memory and long-term user context, but it does not provide the step-by-step execution state that LangGraph's checkpointing offers.
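The usage pattern looks roughly like the sketch below, assuming the bedrock-agentcore Python SDK's `MemoryClient`. The memory id, actor and session ids, and namespace are placeholders, and running it requires an AWS account with an AgentCore Memory resource provisioned:

```python
# Sketch only: assumes the bedrock-agentcore SDK and valid AWS credentials.
from bedrock_agentcore.memory import MemoryClient

client = MemoryClient(region_name="us-east-1")

# Record a conversational turn under an actor/session hierarchy
client.create_event(
    memory_id="my-memory-id",            # placeholder memory resource id
    actor_id="user-123",
    session_id="session-1",
    messages=[("I prefer metric units", "USER")],
)

# Later, retrieve relevant context with a natural-language query
memories = client.retrieve_memories(
    memory_id="my-memory-id",
    namespace="/users/user-123",         # placeholder namespace
    query="unit preferences",
)
```

Note the contrast with LangGraph: there is no notion of "which step the agent is on" here, only stored and semantically retrievable context.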
For multi-step agents that need to pause, resume, branch, or replay from intermediate states, LangGraph's approach is more appropriate. For agents that need persistent conversational memory with semantic retrieval across sessions, AgentCore Memory is simpler to operate.
AgentCore provides a complete deployment pipeline. You write your agent, call runtime.launch(), and AgentCore handles containerization, ECR image creation, auto-scaling, health monitoring, and endpoint provisioning. The deployment target is always AWS, and you get IAM-based access control, CloudWatch integration, and managed networking out of the box.
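The launch flow looks roughly like this, assuming the bedrock-agentcore-starter-toolkit package; the entrypoint filename and configuration arguments are illustrative:

```python
# Sketch only: requires the bedrock-agentcore-starter-toolkit package
# and configured AWS credentials; arguments shown are illustrative.
from bedrock_agentcore_starter_toolkit import Runtime

runtime = Runtime()
runtime.configure(entrypoint="agent.py")  # the file containing your agent app
runtime.launch()                          # builds the container image, pushes it
                                          # to ECR, and provisions the endpoint
```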
LangGraph itself has no deployment mechanism -- it is a library you import into your Python application. You deploy LangGraph agents however you deploy Python applications: as a FastAPI service, a Lambda function, a Docker container, or any other packaging. This gives you full control but requires you to manage all operational concerns.
LangGraph Cloud bridges this gap by providing managed deployment specifically for LangGraph applications. It offers hosted endpoints, built-in persistence, streaming support, and a studio UI for debugging agent graphs. LangGraph Cloud is a paid service from LangChain Inc. and is the closest equivalent to AgentCore's managed deployment, though it is specifically optimized for graph-based agents rather than being a general-purpose agent runtime.
AgentCore uses AWS pay-as-you-go pricing across its components: compute time for Runtime, per-operation charges for Memory, execution duration for Code Interpreter, session minutes for Browser, and API calls for Gateway. Costs are predictable and scale with usage, but can accumulate for high-throughput applications.
LangGraph is free and open-source under the MIT license. Your costs are infrastructure (servers, databases) and LLM API calls. LangGraph Cloud pricing is separate and based on usage tiers. For teams with existing infrastructure, running LangGraph directly is typically more cost-effective. For teams without DevOps resources, the combined cost of LangGraph Cloud or AgentCore may be justified by reduced operational burden.
Choose AgentCore when:
- You want managed infrastructure on AWS: auto-scaling, health monitoring, IAM-based access control, and CloudWatch integration out of the box.
- You need its managed components: semantic Memory, sandboxed Code Interpreter, cloud Browser automation, or MCP-based tool access through Gateway.
- Your team lacks DevOps resources and would rather pay for managed operations than build and run them.
Choose LangGraph when:
- You need fine-grained control over execution flow: conditional branching, cycles, parallel steps, or human-in-the-loop approval gates.
- You need step-level execution state with checkpointing for pause/resume workflows, time-travel debugging, and crash recovery.
- You want an MIT-licensed framework that runs locally, on any cloud, or via LangGraph Cloud.
Can you use AgentCore and LangGraph together? Yes, and this is often the best approach for production systems.
LangGraph and AgentCore operate at different layers of the stack. LangGraph defines how your agent thinks -- the orchestration logic, state transitions, and decision flow. AgentCore defines how your agent runs -- the deployment, scaling, memory persistence, and security infrastructure.
A practical architecture combines them: build your agent's orchestration as a LangGraph state machine with explicit branching, human-in-the-loop gates, and checkpointed state. Then wrap that LangGraph agent in an AgentCore Runtime application for managed deployment on AWS. You get LangGraph's controllable orchestration with AgentCore's operational simplicity.
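A sketch of that wrapping, assuming the bedrock-agentcore SDK's `BedrockAgentCoreApp` entrypoint pattern; the payload key and response shape are illustrative, and deploying it requires the AWS setup described earlier:

```python
# Sketch only: requires the bedrock-agentcore SDK to run or deploy.
from typing import TypedDict

from bedrock_agentcore import BedrockAgentCoreApp
from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    question: str
    answer: str


def respond(state: State) -> dict:
    # Stand-in for the agent's real orchestration logic
    return {"answer": f"You asked: {state['question']}"}


builder = StateGraph(State)
builder.add_node("respond", respond)
builder.add_edge(START, "respond")
builder.add_edge("respond", END)
graph = builder.compile()

app = BedrockAgentCoreApp()


@app.entrypoint
def invoke(payload: dict) -> dict:
    # AgentCore Runtime calls this handler; LangGraph does the orchestration
    result = graph.invoke({"question": payload.get("prompt", ""), "answer": ""})
    return {"result": result["answer"]}


if __name__ == "__main__":
    app.run()  # serves the agent; deploy with the starter toolkit
```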
You can also use AgentCore Memory alongside LangGraph's checkpointing: use LangGraph checkpoints for execution state (which step the agent is on) and AgentCore Memory for long-term semantic memory (what the agent remembers about a user across sessions).
LangGraph is an open-source Python framework built by the LangChain team for creating stateful, multi-step agent applications using graph-based orchestration. Unlike linear chains or simple ReAct loops, LangGraph models agent workflows as directed graphs where nodes represent computation steps (LLM calls, tool use, data processing) and edges define transitions based on the current state. It supports cycles, conditional branching, parallel execution, and human-in-the-loop patterns. LangGraph's key innovation is explicit state management with automatic checkpointing, enabling agents that can pause, resume, replay, and recover from failures. It is MIT licensed and can run locally, on any cloud, or via LangGraph Cloud for managed hosting.
LangChain is a general-purpose framework for building LLM applications with composable abstractions for chains, agents, tools, and memory. LangGraph is a specialized library built on top of LangChain's primitives that adds graph-based orchestration with explicit state management. Think of LangChain as the building blocks (model wrappers, tool interfaces, prompt templates) and LangGraph as the orchestration layer that connects those blocks into controllable stateful workflows. LangChain's built-in agent executors use simple loops (like ReAct), while LangGraph lets you design arbitrary execution graphs with branching, cycles, and checkpointing. You typically use both together: LangChain components as the nodes in a LangGraph state machine.
Can you use LangGraph with Amazon Bedrock? Yes. LangGraph is model-agnostic and works with any LLM provider, including Amazon Bedrock. You use the langchain-aws package to connect LangGraph nodes to Bedrock models like Claude, Llama, or Mistral. You can also deploy a LangGraph agent on AgentCore Runtime for managed AWS hosting, or use AgentCore Memory for long-term context alongside LangGraph's execution checkpointing. The combination of LangGraph's orchestration with Bedrock's model access and AgentCore's infrastructure gives you a complete stack for production agents on AWS.
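As a sketch, a Bedrock-backed node might look like the following, assuming the langchain-aws package; the model id is an example, and invoking the graph requires AWS credentials with Bedrock model access:

```python
# Sketch only: requires langchain-aws and AWS credentials with Bedrock access.
from typing import TypedDict

from langchain_aws import ChatBedrockConverse
from langgraph.graph import StateGraph, START, END

# Example model id; substitute any Bedrock model you have access to
llm = ChatBedrockConverse(model="anthropic.claude-3-5-sonnet-20240620-v1:0")


class State(TypedDict):
    question: str
    answer: str


def call_bedrock(state: State) -> dict:
    reply = llm.invoke(state["question"])  # calls the Bedrock Converse API
    return {"answer": reply.content}


builder = StateGraph(State)
builder.add_node("llm", call_bedrock)
builder.add_edge(START, "llm")
builder.add_edge("llm", END)
graph = builder.compile()

# graph.invoke({"question": "...", "answer": ""}) runs against Bedrock
```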
For complex agents with branching logic, conditional paths, human approval gates, or cyclic reasoning, LangGraph is the stronger choice. Its graph-based architecture was specifically designed for these patterns, and its typed state management with checkpointing makes complex workflows debuggable and recoverable. AgentCore does not prescribe an orchestration pattern -- it provides the runtime and infrastructure for whatever agent logic you build. For complex orchestration deployed on AWS, the recommended approach is to use LangGraph for the agent's decision graph and AgentCore Runtime for managed production deployment. This gives you the best of both: controllable orchestration and operational simplicity.
Aaron is a senior software engineer and AI researcher specializing in generative AI, multimodal systems, and cloud-native AI infrastructure. He writes about cutting-edge AI developments, practical tutorials, and deep technical analysis at fp8.co.