Comprehensive overview of memory management patterns in AI systems, including multi-layered memory models and orchestration mechanisms.
The Swarm pattern implements a sophisticated memory hierarchy:
- Individual Agent Memory
- Shared Context Memory
- Swarm-Level Memory
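A minimal sketch of how these three layers are touched during a run. The accessor names (`add_context`, `node_id`, `executor.state`) follow the SwarmState and SharedContext structures shown later in this section; treat the exact method signatures as assumptions:

```python
from strands.multiagent import Swarm  # import path may vary by version

def inspect_memory_layers(swarm: Swarm) -> None:
    """Illustrative only; attribute names assumed from structures shown later."""
    current = swarm.state.current_node

    # 1. Individual agent memory: per-agent key-value state plus message history
    current.executor.state.set("working_note", "retry storm suspected")

    # 2. Shared context memory: knowledge contributed under the agent's namespace
    swarm.state.shared_context.add_context(current, "root_cause", "retry storm")

    # 3. Swarm-level memory: execution history across handoffs
    print([node.node_id for node in swarm.state.node_history])
```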
State Reset and Isolation
```python
def reset_executor_state(self) -> None:
    """Reset SwarmNode executor state to initial state when swarm was created."""
    self.executor.messages = copy.deepcopy(self._initial_messages)
    self.executor.state = AgentState(self._initial_state.get())
```
Context Building for Handoffs
The _build_node_input() method (src/strands/multiagent/swarm.py:416-488) creates rich context for each agent:
```python
def _build_node_input(self, target_node: SwarmNode) -> str:
    # Includes:
    # - Handoff messages from the previous agent
    # - The original user request
    # - The complete execution history
    # - Shared knowledge from all previous agents
    # - Available agent descriptions for collaboration
```
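A condensed sketch of what the assembled string might look like. Local names such as `state.task` and `shared_context.context` are assumptions; the real method at swarm.py:416-488 is considerably richer:

```python
def build_node_input_sketch(state) -> str:
    # Assumed shapes: state mirrors SwarmState; shared_context.context maps
    # agent name -> {key: value} contributions.
    parts = []
    if state.handoff_message:
        parts.append(f"Handoff message: {state.handoff_message}")
    parts.append(f"User request: {state.task}")
    parts.append("Previous agents: " + " -> ".join(n.node_id for n in state.node_history))
    for agent_name, contributions in state.shared_context.context.items():
        parts.append(f"Shared knowledge from {agent_name}: {contributions}")
    return "\n".join(parts)
```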
Tool-Based Coordination
```python
@tool
def handoff_to_agent(agent_name: str, message: str, context: dict[str, Any] | None = None) -> dict[str, Any]:
    """Transfer control to another agent with context sharing."""
```
JSON Serialization Enforcement
Both AgentState and SharedContext enforce JSON serializability:
```python
def _validate_json_serializable(self, value: Any) -> None:
    try:
        json.dumps(value)
    except (TypeError, ValueError) as e:
        raise ValueError(f"Value is not JSON serializable: {type(value).__name__}") from e
```
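Because writes are routed through this check, a non-serializable value fails fast instead of corrupting persisted state. A sketch of the observable behavior, assuming `AgentState.set` calls the validator as the text states:

```python
import socket

state = AgentState()
state.set("retries", 3)             # accepted: plain JSON types pass validation
state.set("conn", socket.socket())  # raises ValueError: not JSON serializable
```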
Session Management Restrictions
Execution Controls
Memory Validation
Key Insights
The Swarm pattern achieves collaborative intelligence through structured memory management. This design enables true multi-agent collaboration, where agents maintain their individual capabilities while sharing knowledge and coordinating effectively.
Based on my analysis of both systems, here are the core differences between Strands Agents' built-in memory management and Amazon's official AgentCore memory, along with integration opportunities:
Core Differences
Strands Agents: Multi-Layered, Agent-Centric
- Agent-level memory: a JSON-validated key-value store (AgentState) and conversation history
- Cross-agent memory: SharedContext with per-agent namespacing
- Swarm-level memory: execution history and metrics
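A short sketch of the agent-centric layers in use. `Agent`, `AgentState.set`/`get`, and `agent.messages` follow the Strands SDK surface; verify the details against your installed version:

```python
from strands import Agent

agent = Agent(system_prompt="You handle billing questions.")
agent.state.set("customer_tier", "premium")    # key-value store, JSON-validated
agent("Why did my bill increase this month?")  # appends to conversation history
print(agent.state.get("customer_tier"), len(agent.messages))
```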
Amazon AgentCore: Session-Centric, Long-Term
- Long-term retention configurable for up to 365 days
- Insights extracted and consolidated across interactions
- Context persists beyond current sessions
Strands Agents:
```
# File/S3-based structured storage
/
└── session_
    ├── session.json
    └── agents/
        └── agent_
            ├── agent.json
            └── messages/
                ├── message_
```
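Wiring an agent to this layout takes one session-manager object. `FileSessionManager` and its arguments follow the Strands docs (an S3-backed variant exists for the S3 case); treat the exact signature as an assumption:

```python
from strands import Agent
from strands.session.file_session_manager import FileSessionManager

# Persists the session/agents/messages tree shown above under ./sessions
session_manager = FileSessionManager(session_id="user-123", storage_dir="./sessions")
agent = Agent(session_manager=session_manager)
agent("Remember that my preferred region is us-west-2.")
```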
Amazon AgentCore:
Strands Agents:
Amazon AgentCore:
Integration and Complementary Opportunities
Long-Term + Short-Term Memory:
```python
class HybridMemoryAgent(Agent):
    def __init__(self, bedrock_memory_config, strands_session_manager):
        # Long-term memory via Bedrock
        self.bedrock_memory = BedrockMemoryStore(bedrock_memory_config)
        # Short-term structured memory via Strands
        super().__init__(session_manager=strands_session_manager)

    async def invoke_async(self, task):
        # Inject long-term context from Bedrock
        long_term_context = await self.bedrock_memory.get_summary()
        enriched_context = f"Previous context: {long_term_context}\n\nCurrent task: {task}"
        # Use Strands for execution-level memory
        result = await super().invoke_async(enriched_context)
        # Persist important outcomes to Bedrock
        await self.bedrock_memory.store_interaction(task, result)
        return result
```
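Hypothetical usage. `BedrockMemoryStore` and its config shape are assumptions carried over from the sketch above, not a published API:

```python
import asyncio
from strands.session.file_session_manager import FileSessionManager

async def main():
    agent = HybridMemoryAgent(
        bedrock_memory_config={"memory_id": "mem-123"},  # hypothetical config
        strands_session_manager=FileSessionManager(session_id="user-123"),
    )
    result = await agent.invoke_async("Summarize my open support tickets")
    print(result)

asyncio.run(main())
```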
Enhanced Swarm with Cross-Session Learning:
```python
class BedrockIntegratedSwarm(Swarm):
    def __init__(self, nodes, bedrock_session_id):
        super().__init__(nodes)
        self.bedrock_session_id = bedrock_session_id

    def _build_node_input(self, target_node):
        # Get traditional swarm context
        swarm_context = super()._build_node_input(target_node)
        # Enrich with long-term memory
        if hasattr(target_node.executor, 'bedrock_memory'):
            long_term_context = target_node.executor.bedrock_memory.get_relevant_memory()
            return f"Long-term memory: {long_term_context}\n\n{swarm_context}"
        return swarm_context
```
Bidirectional Integration:
```python
class MemoryBridge:
    def __init__(self, strands_session_manager, bedrock_memory_client):
        self.strands_sm = strands_session_manager
        self.bedrock_memory = bedrock_memory_client

    async def sync_to_bedrock(self, agent):
        # Extract structured insights from the Strands session
        session_data = self.strands_sm.read_session(agent.session_id)
        key_insights = self._extract_insights(session_data)
        # Store summarized insights in Bedrock for long-term retention
        await self.bedrock_memory.store_memory_summary(key_insights)

    async def enrich_from_bedrock(self, agent, context_window):
        # Retrieve relevant long-term context
        relevant_context = await self.bedrock_memory.query_memory(context_window)
        # Inject into agent state for the current execution
        agent.state.set("long_term_context", relevant_context)
```
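A hypothetical round trip with the bridge: enrich before the run, execute, then persist the distilled insights. All names besides `MemoryBridge` are stand-ins from the sketches above:

```python
# Inside an async function, with the stand-in clients constructed elsewhere.
bridge = MemoryBridge(strands_session_manager, bedrock_memory_client)

await bridge.enrich_from_bedrock(agent, context_window="customer 42, billing")
result = await agent.invoke_async("Resolve the duplicate-charge complaint")
await bridge.sync_to_bedrock(agent)  # retain insights for future sessions
```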
Scenario 1: Customer Service Agent Team
- A Strands swarm of specialists (billing, technical, account) handles the live conversation
- AgentCore retains the customer's history and preferences over months/years
- Each specialist draws on the customer's long-term profile while collaborating on the immediate issue
Scenario 2: Research and Analysis Pipeline
- Strands coordinates the agents working on the current project
- AgentCore accumulates domain knowledge and findings over time
Scenario 3: Personal Assistant Evolution
- AgentCore learns the user's long-term preferences across sessions
Key Benefits of Integration
- Tactical coordination (Strands) combined with strategic learning (AgentCore)
- Long-term relationship management across sessions
- Retention of cross-session insights
- Continuously deepening user understanding
The integration creates a comprehensive memory system where Strands Agents handle complex immediate coordination while AgentCore provides the continuity and learning that makes agents truly intelligent over time.
Based on my analysis, there are significant differences in memory management between the Workflows (GraphBuilder) and Swarm patterns in Strands Agents' multi-agent implementation:
Core Memory Architecture Differences
Swarm Pattern: Collaborative Shared Memory
- Agents transfer control dynamically via handoff tools
- Cross-agent knowledge accumulates in SharedContext
- Execution flow emerges at runtime from tool-based handoffs
- Agents choose when to collaborate and with whom
Graph Pattern: Deterministic Data Pipeline
- Execution order is fixed by the DAG structure
- Each node sees only its direct dependency outputs
- Nodes follow the pipeline with no runtime choice
Swarm Memory State (SwarmState):
```python
@dataclass
class SwarmState:
    current_node: SwarmNode        # Currently executing agent
    shared_context: SharedContext  # Cross-agent shared memory
    node_history: list[SwarmNode]  # Sequential execution history
    handoff_message: str | None    # Dynamic handoff communication
    # Agents can change execution flow at runtime
```
Graph Memory State (GraphState):
```python
@dataclass
class GraphState:
    completed_nodes: set[GraphNode]   # Completed dependency tracking
    execution_order: list[GraphNode]  # Deterministic execution sequence
    results: dict[str, NodeResult]    # Structured output storage
    # Static execution flow based on DAG topology
```
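The determinism falls straight out of this state: a node runs only once every one of its dependencies appears in completed_nodes. A minimal sketch of that scheduling rule (field names come from the dataclass above; the `dependencies` accessor is an assumption):

```python
def ready_nodes(state: GraphState, all_nodes: list[GraphNode]) -> list[GraphNode]:
    """Nodes eligible to run: not yet completed, with all dependencies completed."""
    return [
        node
        for node in all_nodes
        if node not in state.completed_nodes
        and all(dep in state.completed_nodes for dep in node.dependencies)  # assumed attribute
    ]
```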
Swarm Context Building (_build_node_input()):
```python
# Rich collaborative context with:
# - Handoff messages from the previous agent
# - Complete execution history
# - Shared knowledge from ALL previous agents
# - Available agent descriptions for future coordination
context_text = f"""
Handoff Message: {handoff_message}
User Request: {original_task}
Previous agents: {agent_history}
Shared knowledge: {shared_context_from_all_agents}
Other agents available: {all_other_agents}
"""
```
Graph Context Building (_build_node_input()):
```python
# Structured dependency-based context with:
# - The original task preserved
# - Outputs only from direct dependencies
# - Clear attribution per dependency
node_input = f"""
Original Task: {original_task}

Inputs from previous nodes:
From dependency_1: {output_1}
From dependency_2: {output_2}
"""
```
Swarm Memory:
Graph Memory:
- Node outputs accumulate in GraphState.results with per-node attribution
- Propagation between nodes is gated by edge conditions
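A sketch of pulling a dependency's output back out of GraphState.results when building downstream input. The node id `"analyze"` is illustrative, and the `.result` field on NodeResult is an assumption:

```python
# 'analyze' is an illustrative node id; .result is an assumed NodeResult field.
analysis = state.results["analyze"]
downstream_input = (
    f"Original Task: {state.task}\n"
    f"Inputs from previous nodes:\n"
    f"From analyze: {analysis.result}"
)
```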
Swarm Pattern Flow:
```
[Research Agent] --handoff--> [Analysis Agent] --handoff--> [Writer Agent]
       ↑                             ↑                            ↑
  Can access                   Can access all              Can access all
shared context                previous context            previous context
+ handoff msg                + new handoff msg           + new handoff msg
```
Graph Pattern Flow:
```
[Data Agent] ----output----> [Analysis Agent] ----output----> [Report Agent]
      ↑                             ↑                               ↑
Original task                Original task +                 Original task +
    only                    Data Agent output               all upstream outputs
```
Key Architectural Implications
- Memory Efficiency: Graph passes each node only its direct dependency outputs, while Swarm replicates the full shared context to every agent
- Execution Flexibility: Graph's flow is fixed at build time; Swarm trades that predictability for runtime flexibility
- Debugging and Traceability: Graph's deterministic execution order and per-node results are easier to trace than emergent handoff chains
- Scaling Characteristics: Swarm's shared context grows with every handoff; Graph's per-node inputs stay bounded by the DAG
- Use Case Optimization: Swarm is optimized for dynamic collaborative coordination, Graph for structured data-processing pipelines
Integration Possibilities
Both patterns can be combined effectively:
```python
# A Graph node containing a Swarm for collaborative sub-workflows
complex_analysis_swarm = Swarm([specialist1, specialist2, expert])

builder = GraphBuilder()
builder.add_node(data_collector, "collect")
builder.add_node(complex_analysis_swarm, "analyze")  # Swarm as a Graph node
builder.add_node(report_generator, "report")
builder.add_edge("collect", "analyze")
builder.add_edge("analyze", "report")
```
This creates a hybrid system where the swarm collaborates freely inside its node while its consolidated result flows downstream identically to graph outputs.
The choice between patterns depends on whether you need structured data transformation (Graph) or dynamic agent collaboration (Swarm) for your specific use case.