# Deployment Overview

Agent Kernel supports multiple deployment modes for different use cases.
## Deployment Modes

### Quick Comparison
| Mode | Best For | Scalability | Cold Start | Cost | 
|---|---|---|---|---|
| Local/CLI | Development, testing | N/A | Instant | Free | 
| REST API | Web apps, APIs | Manual scaling | Instant | Server costs | 
| AWS Lambda | Variable load | Auto-scaling | 1-3s | Pay per use | 
| AWS ECS | Consistent load | Auto-scaling | Instant | Running containers | 
| MCP Server | AI integrations | Manual | Instant | Server costs | 
| A2A Server | Agent networks | Manual | Instant | Server costs | 
### Local Development

Uses the `agentkernel.CLI` module.

```bash
python my_agent.py
```

- Interactive CLI
- Instant feedback
- No deployment needed
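To show where your agent code fits, here is a minimal sketch of a `my_agent.py`. Only the `agentkernel.CLI` module path comes from this page; the `CLI` class, its constructor, and the `run()` method are assumptions, so check the Agent Kernel API reference for the real interface.

```python
# my_agent.py -- illustrative sketch only; verify against the actual Agent Kernel API
from agentkernel.CLI import CLI  # assumed: a CLI class exported by the agentkernel.CLI module


def handle(prompt: str) -> str:
    """Toy agent logic used only to show where your code goes."""
    return f"Agent reply to: {prompt}"


if __name__ == "__main__":
    # Assumed interface: wrap the handler and start the interactive prompt loop
    CLI(handle).run()
```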
### REST API Server

Uses the `agentkernel.RESTAPI` module.

```bash
python my_agent.py
```

- HTTP endpoints
- Easy integration
- Self-hosted
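The same agent logic can be exposed over HTTP by swapping the wrapper. The sketch below assumes `agentkernel.RESTAPI` exports a `RESTAPI` class with a `run(host, port)` method; those names are not confirmed by this page.

```python
# my_agent.py -- illustrative sketch only; the RESTAPI class and run() arguments are assumptions
from agentkernel.RESTAPI import RESTAPI


def handle(prompt: str) -> str:
    """Toy agent logic; replace with your real agent."""
    return f"Agent reply to: {prompt}"


if __name__ == "__main__":
    # Assumed interface: expose the handler over HTTP on a local port
    RESTAPI(handle).run(host="0.0.0.0", port=8000)
```

Once running, clients would send prompts to the server's HTTP endpoints; the exact routes depend on the module.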
### AWS Serverless

Uses the Agent Kernel Terraform modules.

```bash
# Configure the modules and run
terraform init && terraform apply
```

- Lambda functions
- API Gateway
- Auto-scaling
- Pay per request
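The Terraform modules provision the Lambda function and API Gateway for you. To make the request path concrete, the sketch below shows a generic Python Lambda handler behind an API Gateway proxy integration; the agent call inside it is a placeholder, not the actual Agent Kernel entry point that the modules configure.

```python
# lambda_handler.py -- generic sketch of a Lambda entry point; Agent Kernel's real
# handler is wired up by the Terraform modules and may differ.
import json


def handler(event, context):
    # API Gateway (proxy integration) delivers the HTTP body as a JSON string
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "")

    # Placeholder for the real agent invocation
    reply = f"Agent reply to: {prompt}"

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"reply": reply}),
    }
```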
### AWS Containerized

Uses the Agent Kernel Terraform modules.

```bash
# Configure the modules and run
terraform init && terraform apply
```

- ECS Fargate
- Application Load Balancer
- Consistent performance
- Lower latency
## Choosing a Deployment Mode

- **Development** → Local/CLI: Fast iteration, no setup
- **Small web app** → REST API: Simple, self-hosted
- **Variable traffic** → AWS Lambda: Auto-scales, pay per use
- **High traffic** → AWS ECS: Consistent performance
- **AI integration** → MCP/A2A: Protocol-based integration