Overview
Laminar is an open-source, OpenTelemetry-native observability platform for the OpenAI Agents SDK. Trace, debug, and monitor every agent turn, tool call, handoff, and sub-agent with a single `Laminar.initialize()` call. Self-host via Helm or use the managed cloud.
The OpenAI Agents SDK lets you build agent workflows in Python with `Agent`, `Runner.run`, tool functions, and handoffs between specialists. Laminar hooks into the SDK’s built-in `TracingProcessor` to mirror every agent workflow span into Laminar, and also attaches the agent’s system instructions so the full prompt is visible in the trace.
What Laminar captures:
- The root agent workflow, each `agents.task`, and every `agents.turn`, with the model, prompt, response, token counts, latency, and cost.
- Every `function_tool` invocation, with arguments and return value.
- Handoffs between agents, with the destination agent’s turns nested under the handoff span.
- Agent instructions prepended to the input messages on every LLM span.
Getting started
Initialize Laminar
`Laminar.initialize()` auto-instruments the OpenAI Agents SDK whenever `openai-agents` is importable; no wrapping call is needed. Wrapping your entry point in `@observe()` is optional but recommended: it creates a root span that captures inputs and outputs and makes the trace easy to find in the UI.

See what happened in a trace
Open the trace in Laminar and you land on the transcript view: each turn reads as a conversation, with the prompt, the model’s response, and any tool calls inline with their inputs and outputs. A tree of span names tells you the shape of the run; the transcript tells you what actually happened.
Multi-agent runs with handoffs
The OpenAI Agents SDK models specialization through handoffs: a triage agent routes the user to a specialist via `handoff(other_agent)`, and the SDK exposes each handoff as a `transfer_to_<agent>` tool call. In Laminar, the destination agent’s turns and tool calls nest under the handoff, so you can follow the full multi-agent conversation in one trace.


Track outcomes with Signals
Traces tell you what happened on one run. Signals turn that into structured outcomes: describe a behavior or failure in natural language (“the triage agent answered a cancellation itself instead of handing off”, “a tool returned an error”, “the agent exceeded three turns”) and Laminar extracts matching events across your history and every new trace. Route them to alerts or datasets.

Query across traces
- SQL editor for ad-hoc queries across traces, spans, signals, and evals.
- SQL API for programmatic access from scripts and pipelines.
- MCP server to query Laminar directly from Claude Code, Cursor, Codex, or any MCP-aware client.
Troubleshooting
I don't see any traces in Laminar
- Confirm `LMNR_PROJECT_API_KEY` is set in the same process that runs the SDK.
- `openai-agents` must be importable when `Laminar.initialize()` runs. Install it with `pip install openai-agents`.
- The integration requires `openai-agents >= 0.7.0` and `lmnr >= 0.7.48`.
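The first two conditions can be checked with a small stdlib-only preflight script (`check_laminar_prereqs` is a hypothetical helper for illustration, not part of `lmnr`):

```python
import importlib.util
import os

def check_laminar_prereqs() -> list[str]:
    """Return human-readable problems; an empty list means the basics look OK."""
    problems = []
    if not os.environ.get("LMNR_PROJECT_API_KEY"):
        problems.append("LMNR_PROJECT_API_KEY is not set in this process")
    # openai-agents installs under the importable package name "agents"
    if importlib.util.find_spec("agents") is None:
        problems.append("openai-agents is not importable (pip install openai-agents)")
    return problems

if __name__ == "__main__":
    for problem in check_laminar_prereqs():
        print("-", problem)
```

Run it in the same environment and process configuration as your agent code, since that is where the variables and imports must resolve.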
Handoffs aren't nested under the source agent
The SDK emits a `transfer_to_<agent>` tool call followed by an `agents.handoff` span, and the destination agent’s work lands as a sibling under the same parent task. Open the trace in tree view to see the full structure.
Self-hosting Laminar
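Initialization against a self-hosted instance might look like this sketch (the URL and port values are assumptions for a default local deployment; substitute the ones your instance actually exposes):

```python
from lmnr import Laminar

# base_url plus the HTTP/gRPC ports of the self-hosted instance.
# The values below assume a default local deployment and may differ for yours.
Laminar.initialize(
    base_url="http://localhost",
    http_port=8000,
    grpc_port=8001,
)
```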
Set `base_url` and the ports of your instance when initializing. For a local OSS deployment, pass your local URL and ports to `Laminar.initialize()`.

What’s next
- Viewing traces: read the transcript view, filter, and search across traces.
- Signals: extract structured outcomes and failures from your agent runs.
- SQL editor and MCP server: query traces programmatically.
- Tracing structure: sessions, metadata, and tags for deeper control.
- Using the OpenAI Python SDK directly (without Agents)? See the OpenAI integration page.
