Traceability and Observability
Track, monitor, and debug all agent operations with built-in tracing and observability features.
Overview
Agent Kernel provides comprehensive observability capabilities through integration with popular tracing platforms. Monitor agent execution, debug issues, and gain insights into your AI agent systems.
Supported Platforms
Agent Kernel supports the following observability platforms:
- Langfuse - Open-source LLM engineering platform for tracing, evaluating, and monitoring AI applications
- OpenLLMetry (Traceloop) - OpenTelemetry-based observability for LLM applications with support for multiple backends
Getting Started with Langfuse
Installation
Install Agent Kernel with Langfuse support:
pip install agentkernel[langfuse]
Or install it together with a framework integration:
# OpenAI with Langfuse
pip install agentkernel[openai,langfuse]
# LangGraph with Langfuse
pip install agentkernel[langgraph,langfuse]
# CrewAI with Langfuse
pip install agentkernel[crewai,langfuse]
# Google ADK with Langfuse
pip install agentkernel[adk,langfuse]
Configuration
Method 1: Configuration File
Create or update config.yaml:
trace:
  enabled: true
  type: langfuse
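Before starting your agents, you can sanity-check the file with a quick PyYAML read (a minimal sketch; assumes pyyaml is available in your environment):

# Hedged sketch: confirm config.yaml parses and the trace block is well formed.
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

assert cfg["trace"]["enabled"] is True
assert cfg["trace"]["type"] == "langfuse"
print("trace config OK:", cfg["trace"])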
Method 2: Environment Variables
export AK_TRACE__ENABLED=true
export AK_TRACE__TYPE=langfuse
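The double underscore separates nesting levels, so AK_TRACE__ENABLED maps to trace.enabled in the YAML above. The loader below is a hypothetical illustration of that convention, not Agent Kernel's actual implementation:

# Illustration only: how AK_-prefixed variables with "__" delimiters map
# onto the nested config structure. Hypothetical sketch, not the real loader.
import os

def env_to_config(prefix: str = "AK_") -> dict:
    config: dict = {}
    for key, value in os.environ.items():
        if not key.startswith(prefix):
            continue
        path = key[len(prefix):].lower().split("__")  # AK_TRACE__ENABLED -> ["trace", "enabled"]
        node = config
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return config

os.environ["AK_TRACE__ENABLED"] = "true"
os.environ["AK_TRACE__TYPE"] = "langfuse"
print(env_to_config())  # {'trace': {'enabled': 'true', 'type': 'langfuse'}}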
With tracing enabled in the configuration, all agent interactions are automatically traced to Langfuse.
Langfuse Credentials
Configure Langfuse credentials via environment variables:
export LANGFUSE_PUBLIC_KEY=pk-lf-...
export LANGFUSE_SECRET_KEY=sk-lf-...
export LANGFUSE_HOST=https://cloud.langfuse.com # or your self-hosted instance
Or add them to your .env file:
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=https://cloud.langfuse.com
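To confirm the .env file is actually picked up by your process, a quick check with python-dotenv helps (an assumption that you load .env via python-dotenv; install it separately):

# Hedged sketch: load .env and verify the Langfuse variables are visible.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
for var in ("LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY", "LANGFUSE_HOST"):
    print(var, "set" if os.getenv(var) else "MISSING")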
Getting Langfuse Credentials
- Sign up for a free account at https://cloud.langfuse.com
- Create a new project
- Navigate to Settings → API Keys
- Copy your Public Key and Secret Key
For self-hosted Langfuse, see the Langfuse documentation.
Getting Started with OpenLLMetry (Traceloop)
Installation
Install Agent Kernel with OpenLLMetry support:
pip install agentkernel[openllmetry]
Or install it together with a framework integration:
# OpenAI with OpenLLMetry
pip install agentkernel[openai,openllmetry]
# LangGraph with OpenLLMetry
pip install agentkernel[langgraph,openllmetry]
# CrewAI with OpenLLMetry
pip install agentkernel[crewai,openllmetry]
# Google ADK with OpenLLMetry
pip install agentkernel[adk,openllmetry]
Configuration
Method 1: Configuration File
Create or update config.yaml:
trace:
  enabled: true
  type: openllmetry
Method 2: Environment Variables
export AK_TRACE__ENABLED=true
export AK_TRACE__TYPE=openllmetry
OpenLLMetry Credentials
Configure Traceloop credentials via environment variables:
export TRACELOOP_API_KEY=your-api-key
# Optional: for self-hosted instances
export TRACELOOP_BASE_URL=https://api.traceloop.com
Or add them to your .env file:
TRACELOOP_API_KEY=your-api-key
TRACELOOP_BASE_URL=https://api.traceloop.com
Getting Traceloop Credentials
- Sign up for an account at https://www.traceloop.com
- Create a new project
- Navigate to Settings → API Keys
- Copy your API key
For self-hosted deployment or other backends (like Datadog, New Relic, Honeycomb), see the Traceloop documentation.
OpenLLMetry Features
OpenLLMetry provides:
- OpenTelemetry Standards: Industry-standard telemetry data
- Multiple Backends: Send traces to Traceloop, Datadog, New Relic, Honeycomb, and more
- Automatic Instrumentation: Zero-code instrumentation for popular LLM frameworks
- Performance Monitoring: Track latency, token usage, and costs
- Distributed Tracing: Follow requests across multiple services
What Gets Traced
When tracing is enabled, Agent Kernel automatically captures:
Execution Metrics
- Request/Response: Full input prompts and agent responses
- Latency: Execution time for each agent interaction
- Token Usage: Input and output tokens consumed
Agent Context
- Agent Name: Which agent handled the request
- Session ID: Conversation session identifier
- Metadata: Custom metadata and tags
LLM Interactions
- Model Calls: All LLM API calls with parameters
- Prompt Templates: Resolved prompts sent to models
- Completions: Model responses and reasoning
Tool Usage
- Tool Invocations: Which tools were called
- Tool Parameters: Arguments passed to tools
- Tool Results: Return values and execution status
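To make these fields concrete, here is a hypothetical record shaped after the categories above; the names are illustrative, and the real wire format depends on the tracing backend:

# Hypothetical shape of one captured interaction (illustrative only).
trace_record = {
    "agent_name": "support-agent",            # Agent Context
    "session_id": "sess-1234",
    "metadata": {"env": "staging"},
    "request": "What's my order status?",     # Execution Metrics
    "response": "Your order shipped today.",
    "latency_ms": 412,
    "tokens": {"input": 58, "output": 24},
    "llm_calls": [                            # LLM Interactions
        {"model": "gpt-4o", "prompt": "...", "completion": "..."},
    ],
    "tool_calls": [                           # Tool Usage
        {"name": "lookup_order", "args": {"order_id": 981}, "status": "ok"},
    ],
}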
Viewing Traces
In Langfuse
After running your agents with Langfuse tracing enabled:
- Log in to your Langfuse dashboard
- Navigate to Traces
- View detailed execution traces including:
  - Full conversation flow
  - LLM calls with prompts and completions
  - Tool invocations
  - Performance metrics
  - Token consumption
Langfuse Features:
- Search and Filter: Find specific traces by agent, session, or metadata (see the query sketch after this list)
- Timeline View: See execution flow and timing
- Cost Analysis: Track token usage and estimated costs
- Error Tracking: Debug failed executions
- Performance Metrics: Latency analysis and bottleneck identification
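Traces can also be pulled programmatically. A hedged sketch using the v2 Langfuse Python SDK (fetch_traces is a v2 method; the v3 SDK exposes a different query API, so check your installed version):

# Hedged sketch: query recent traces for one session via the v2 Python SDK.
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* variables from the environment
traces = langfuse.fetch_traces(session_id="sess-1234", limit=10)
for t in traces.data:
    print(t.id, t.name)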
In OpenLLMetry/Traceloop
After running your agents with OpenLLMetry tracing enabled:
- Log in to your Traceloop dashboard (or your configured backend)
- Navigate to Traces or Observability
- View detailed execution traces including:
  - Distributed trace timeline
  - LLM API calls with full context
  - Token usage and costs
  - Performance metrics
  - Error details
OpenLLMetry/Traceloop Features:
- OpenTelemetry Standards: Compatible with any OpenTelemetry backend
- Multi-Backend Support: View traces in Traceloop, Datadog, New Relic, etc.
- Distributed Tracing: Track requests across multiple services
- Custom Metrics: Add custom spans and metrics (see the sketch after this list)
- Real-time Monitoring: Live trace streaming and alerts
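As an example of custom spans, the traceloop-sdk ships workflow and task decorators (import path per the Traceloop docs; verify against your installed version):

# Sketch: wrap your own functions in spans with traceloop-sdk decorators.
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import task, workflow

Traceloop.init(app_name="agentkernel-demo")  # reads TRACELOOP_API_KEY from env

@task(name="summarize")
def summarize(text: str) -> str:
    # An LLM call would normally go here; the decorator records a span around it.
    return text[:80]

@workflow(name="handle_request")
def handle_request(text: str) -> str:
    return summarize(text)

print(handle_request("OpenLLMetry records this call tree as nested spans."))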
Troubleshooting
Langfuse Issues
Traces Not Appearing:
- Check Credentials: Verify LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST are set correctly
- Verify Configuration: Ensure trace.enabled is set to true and trace.type is langfuse
- Check Installation: Confirm agentkernel[langfuse] is installed
- Review Logs: Look for trace initialization messages in your application logs
Authentication Errors:
# Verify your Langfuse credentials
python -c "from langfuse import Langfuse; client = Langfuse(); print(client.auth_check())"
This should return True if credentials are valid.
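Short-lived scripts can also exit before the SDK's asynchronous batch uploader has sent anything; flushing before exit rules that out:

# Flush queued events before the process exits (Langfuse SDK method).
from langfuse import Langfuse

langfuse = Langfuse()
# ... run your traced agent code ...
langfuse.flush()  # blocks until queued trace events are sent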
OpenLLMetry Issues
Traces Not Appearing:
- Check Credentials: Verify TRACELOOP_API_KEY is set correctly
- Verify Configuration: Ensure trace.enabled is set to true and trace.type is openllmetry
- Check Installation: Confirm agentkernel[openllmetry] is installed
- Review Logs: Look for Traceloop initialization messages in your application logs
Connection Errors:
# Verify your Traceloop setup
python -c "from traceloop.sdk import Traceloop; Traceloop.init(app_name='test'); print('Success')"
Performance Impact
Tracing adds minimal overhead:
- Latency: < 50ms per request
- Memory: Negligible (~1-2MB)
- Network: Async batched uploads to minimize impact