Version: 0.2.9

Introduction to Agent Kernel

Welcome to Agent Kernel - a versatile, framework-agnostic runtime for building and deploying AI agents.

What's New

🎯 Execution Hooks & Smart Memory - Take complete control of your agents with pre/post-execution hooks and intelligent caching. Read the announcement →

What is Agent Kernel?

Agent Kernel is a lightweight runtime and adapter layer for building and running AI agents across multiple frameworks within a unified execution environment. It provides the low-level scaffolding to build, test, and deploy your agents, MCP tools, and A2A servers quickly in many deployment configurations. The unified execution environment handles session and memory management seamlessly.

Migrate your existing agents to Agent Kernel and instantly use its pre-built execution and testing capabilities. It eliminates the complexity of framework development, allowing AI engineers to focus on agent development, and provides a consistent development experience regardless of the underlying AI agent framework.

It's not

  • a substitute for popular agent frameworks and SDKs such as LangGraph and the OpenAI Agents SDK
  • another heavy abstraction that you have to learn

It's a lightweight, simple, intuitive framework to make your life easy.

Why Agent Kernel?

Effortless Migration

Build agents using any AI agentic framework and migrate them to Agent Kernel to benefit from its execution framework capabilities. There is no need to build platform code from scratch to run your agents: you can focus on domain-specific agent development while Agent Kernel takes care of testing, deployment, and execution.

Ready-to-Use Execution

Agent Kernel provides pre-built execution capabilities:

  • CLI Testing Environment for local development
  • REST API Server for web integration
  • Built-in pluggable integrations with popular platforms, plus the ability to build custom integrations quickly
    • Slack
    • WhatsApp
    • Messenger
    • Telegram
    • Instagram
    • Gmail
  • AWS Serverless Deployment for scalable production
  • AWS Containerized Deployment for consistent workloads
  • MCP Server for Model Context Protocol tool publishing
  • A2A Server for Agent-to-Agent communication

Pluggable Architecture

Easily extend Agent Kernel with custom framework adapters, memory back-ends, and deployment profiles.

Enterprise-Ready Features

  • Session Management: Built-in conversational state tracking across multiple backends

  • Memory Management: Pluggable memory with smart caching

    • Backends: in-memory (development), Redis (production), DynamoDB (serverless)
    • Volatile Cache: Request-scoped temporary storage for RAG context, file content, and intermediate data
    • Non-Volatile Cache: Session-persistent storage for user preferences, metadata, and configurations

    Learn more about session management → | Advanced memory features →

  • Execution Hooks: Powerful pre and post-execution hooks for ultimate control

    • Pre-execution hooks: Guard rails, RAG context injection, input validation, authentication
    • Post-execution hooks: Response moderation, disclaimers, output filtering, analytics
    • Hook chaining: Compose multiple hooks in sequence for complex behaviors
    • Early termination: Pre-hooks can halt execution and return custom responses
  • Fault Tolerance: Production-grade resilience

    • Multi-AZ deployments for high availability
    • Automatic failure recovery and retry mechanisms
    • Health monitoring and auto-scaling (auto-scaling will be made available soon)
    • Persistent state across failures
  • Traceability: Track and audit all agent operations

    • LangFuse
    • OpenLLMetry
  • Multi-Agent Collaboration: Leverage the multi-agent hierarchies of supported agentic frameworks

  • Agent Testing Capability: Built-in agent test framework so you can write automated tests easily

  • Governance: Guard rails and human-in-the-loop capabilities are coming soon

Key Features

Unified API

from agentkernel.core import Agent, Runner, Session, Module, Runtime

All framework adapters expose the same core abstractions:

  • Agent: Framework-specific agent wrapped by Agent Kernel
  • Runner: Framework-specific execution strategy
  • Session: Shared conversational state
  • Module: Container for registering agents
  • Runtime: Global orchestrator

Execution Hooks

Powerful pre-execution and post-execution hooks give you surgical control over agent behavior:

  • Pre-hooks: Intercept prompts before agents see them
    • 🛡️ Guard rails and content filtering
    • 🧠 RAG context injection from knowledge bases
    • 🔍 Input validation and authentication
    • 📊 Request logging and analytics
  • Post-hooks: Transform responses after generation
    • ⚖️ Add disclaimers and compliance messages
    • 🔒 Output moderation and filtering
    • 📈 Response analytics and monitoring

Works with any framework - same hook code across OpenAI, CrewAI, LangGraph, and ADK.

Learn more in our blog post →
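
To make the pattern concrete, here is a minimal, framework-agnostic sketch of pre/post-hook chaining in plain Python. It is illustrative only and does not use the Agent Kernel hook API; every name in it is made up for the example - see the execution hooks documentation for the actual interfaces.

# Illustrative sketch only - not the Agent Kernel hook API.
# Pre-hooks may rewrite the prompt or terminate early; post-hooks rewrite the response.

def guard_rail(prompt, context):
    # Pre-hook: halt execution and return a custom response for disallowed input.
    if "forbidden topic" in prompt.lower():
        return prompt, "Sorry, I can't help with that request."
    return prompt, None

def inject_rag_context(prompt, context):
    # Pre-hook: prepend retrieved knowledge-base snippets to the prompt.
    snippets = context.get("retrieved_docs", [])
    if snippets:
        prompt = "Context:\n" + "\n".join(snippets) + "\n\n" + prompt
    return prompt, None

def add_disclaimer(response, context):
    # Post-hook: append a compliance disclaimer to the generated response.
    return response + "\n\nThis response was generated by an AI assistant."

def run_with_hooks(agent_fn, prompt, context, pre_hooks, post_hooks):
    # Hook chaining: pre-hooks run in order and may short-circuit execution.
    for hook in pre_hooks:
        prompt, early_response = hook(prompt, context)
        if early_response is not None:
            return early_response
    response = agent_fn(prompt)
    # Post-hooks transform the response in order.
    for hook in post_hooks:
        response = hook(response, context)
    return response

reply = run_with_hooks(
    agent_fn=lambda p: "Echo: " + p,
    prompt="Hello!",
    context={"retrieved_docs": ["Agent Kernel supports execution hooks."]},
    pre_hooks=[guard_rail, inject_rag_context],
    post_hooks=[add_disclaimer],
)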

Smart Memory Management

Two types of cache with identical APIs but different lifecycles:

  • Volatile Cache: Request-scoped temporary storage
    • Perfect for RAG context, file content, intermediate calculations
    • Auto-clears after request completion
    • Keeps prompts clean and reduces token usage
  • Non-Volatile Cache: Session-persistent storage
    • Store user preferences, metadata, configurations
    • Persists across multiple requests
    • Share data between hooks and tools

Multiple backends - swap between in-memory, Redis, or DynamoDB with just environment variables.

Read the advanced memory guide →
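
As a rough sketch of the two lifecycles (again illustrative plain Python, not the Agent Kernel cache API), both caches can expose the same get/set interface while only the volatile one is cleared when a request finishes:

# Illustrative sketch only - identical API, different lifecycles.

class Cache:
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def clear(self):
        self._data.clear()

volatile = Cache()      # request-scoped: RAG context, file content, intermediate data
non_volatile = Cache()  # session-scoped: user preferences, metadata, configurations

def handle_request(prompt):
    volatile.set("retrieved_docs", ["snippets fetched for this prompt"])
    non_volatile.set("preferred_language", "en")
    # ... hooks and tools read from both caches while the request runs ...
    volatile.clear()    # cleared after the request; session data survives

In Agent Kernel itself, the backing store for the real caches (in-memory, Redis, or DynamoDB) is selected through environment variables rather than code, as described in the memory guide.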

Multi-Framework Support

Agent Kernel currently supports:

  • OpenAI Agents SDK - Official OpenAI agents framework
  • CrewAI - Role-based multi-agent framework
  • LangGraph - Graph-based agent orchestration
  • Google ADK - Google's Agent Development Kit

Flexible Deployment

Run the same agent code locally through the built-in CLI, behind a REST API, on AWS Lambda or in containers, or as an MCP or A2A server.

Quick Example

Here's a simple agent built with Agent Kernel using CrewAI:

from crewai import Agent as CrewAgent
from agentkernel.cli import CLI
from agentkernel.crewai import CrewAIModule

# Define your agent
agent = CrewAgent(
    role="assistant",
    goal="Help users with their questions",
    backstory="You are a helpful AI assistant",
    verbose=False,
)

# Register with Agent Kernel
CrewAIModule([agent])

# Run with built-in CLI
if __name__ == "__main__":
    CLI.main()

You can:

  • Test locally with the CLI
  • Deploy to AWS Lambda with a one-line change
  • Expose as a REST API
  • Integrate with MCP or A2A protocols

All without changing your agent code!

Who Should Use Agent Kernel?

Agent Kernel is ideal for:

  • AI Engineers who want framework flexibility without vendor lock-in
  • Teams building production AI agent systems
  • Developers who need to migrate between frameworks
  • Organizations requiring enterprise-grade agent deployment
  • Researchers exploring different agent frameworks

Next Steps

Ready to get started? Here's what to do next:

  1. Install Agent Kernel - Get up and running in minutes
  2. Quick Start Guide - Build your first agent
  3. Core Concepts - Understand the architecture
  4. Execution Hooks - Add guard rails, RAG, and response control
  5. Session Management - Session configuration and storage
  6. Memory Management - Advanced caching and persistence
  7. Framework Integration - Choose your framework
  8. Deployment Guide - Deploy to production

Community & Support

License

Agent Kernel is released under the MIT License. See the LICENSE file for details.


Built with ❤️ by Yaala Labs
