Documentation Index
Fetch the complete documentation index at: https://laminar.sh/docs/llms.txt
Use this file to discover all available pages before exploring further.
Overview
Laminar is an open-source, OpenTelemetry-native observability platform for AI agents. Trace, debug, and monitor every LangChain deepagents run, subagent spawn, tool call, and model turn with a single `Laminar.initialize()` call. Self-host via Helm or use managed cloud.
deepagents is LangChain’s reference implementation of the Claude-Code-style “deep agent” loop: a main agent with a planner, a virtual filesystem (`write_todos`, `read_file`, `write_file`, `edit_file`), and a built-in `task` tool that spawns specialist subagents. Laminar injects middleware into every agent you build with `create_deep_agent`, wraps the compiled LangGraph at the entrypoint, and produces a clean flat trace: one root span per invocation, one TOOL span per tool call, with every subagent’s LLM and tool spans nested under the `task` call that spawned it.
What Laminar captures:
- The root `deep_agent` span per top-level `invoke`/`ainvoke`/`stream`/`astream` call, with the user prompt and the final assistant message.
- One TOOL span per tool call: built-in filesystem tools (`write_todos`, `read_file`, `write_file`, `edit_file`), the `task` subagent tool, and any custom tools you pass in.
- LLM turns (Anthropic, OpenAI, etc.) as children of the agent or subagent that made the call, with prompts, responses, token counts, latency, and cost.
- Subagents as collapsible cards in transcript view, grouped automatically from the spans nested under each `task` call.
Getting started
Install
Ensure you have `lmnr` version 0.7.50 or higher and `deepagents` version 0.5.0 or higher. Install a LangChain provider package for the model you want to drive the agent with.

Initialize Laminar
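A minimal end-to-end sketch, assuming `lmnr`, `deepagents`, and an Anthropic provider package are installed and `LMNR_PROJECT_API_KEY` is set in the environment; the `create_deep_agent` keyword names reflect recent deepagents releases and may differ in yours:

```python
# pip install lmnr deepagents langchain-anthropic   (assumed package set)
from lmnr import Laminar

# Initialize before importing create_deep_agent so the patch takes effect.
Laminar.initialize()

from deepagents import create_deep_agent


def get_weather(city: str) -> str:
    """Return a short weather report for a city."""  # docstring is required
    return f"It is sunny in {city}."


agent = create_deep_agent(
    tools=[get_weather],
    system_prompt="You are a concise research assistant.",  # "instructions" in older versions
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Paris?"}]}
)
```

Run this once and a `deep_agent` root span should appear in your project's traces.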
`Laminar.initialize()` auto-instruments deepagents when the package is importable. No wrapping call or middleware registration is needed: Laminar injects its own `AgentMiddleware` into every agent built via `create_deep_agent` and wraps the compiled graph.

Tool functions passed to deepagents must have a docstring. LangChain uses it as the tool description sent to the model.

See what happened in a trace
Open a deepagents trace in Laminar and you land on the transcript view: the user prompt at the top, each model turn as a conversation line, every tool call inline with its arguments and result, and every subagent collapsed into a card showing its own input and output. The span tree tells you the shape of the run; the transcript tells you what actually happened.
Subagents: the interesting part
The reason you reach for deepagents over plain LangChain is the built-in `task` tool and the subagent system. Subagents are specialist agents defined as a `SubAgent` spec with a name, a description, a system prompt, and optionally a scoped tool list or a different model. The main agent decides when to delegate based on the description and calls `task(subagent_type="...", description="...")` with its own prompt.

Laminar traces each `task` invocation as a TOOL span. The subagent’s own LLM turns and tool calls nest underneath automatically via OpenTelemetry context propagation, so the hierarchy reads top-down: main agent → task → subagent’s LLM → subagent’s tools → task returns → main agent continues.
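As a sketch, a subagent spec is a plain mapping; the field names follow the shape described above, though `system_prompt` may be named `prompt` in older deepagents versions:

```python
# A SubAgent spec as a plain dict; field names follow the shape described
# above ("system_prompt" may be "prompt" in older deepagents versions).
research_scout = {
    "name": "research-scout",
    "description": "Focused research on one agent framework at a time.",
    "system_prompt": "Research the assigned framework and return a dense, sourced summary.",
    "tools": [],  # an empty list still inherits the built-in filesystem tools
}

# Pass it via create_deep_agent(subagents=[research_scout], ...); the main
# agent then delegates with task(subagent_type="research-scout", description="...").
```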
Parallel subagents
A common deepagents pattern: the main agent farms out independent research jobs to several subagent instances in parallel, then synthesizes the results. Here, a coordinator compares three agent frameworks by dispatching two `research-scout` subagents (one covering LangGraph and CrewAI, the other covering the OpenAI Agents SDK) and finishes with a dedicated editor subagent that tightens the draft.
Each entry in `subagents[].tools` must be a Python callable (with a docstring) or a `BaseTool`, not a string name. Subagents inherit the built-in filesystem tools (`read_file`, `edit_file`, `write_file`, `write_todos`) from the middleware stack automatically, so an empty `"tools": []` still gives them the virtual filesystem.
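To illustrate the callable requirement, a hypothetical scoped tool (the function body is a stub; only the docstring and the callable-not-string rule come from the source):

```python
def search_docs(query: str) -> str:
    """Search framework documentation and return the top snippet."""
    # Hypothetical stub: a real tool would hit a search index or API.
    return f"Top result for {query!r}: ..."


research_scout = {
    "name": "research-scout",
    "description": "Focused research on one agent framework at a time.",
    "system_prompt": "Research the assigned framework and summarize findings.",
    "tools": [search_docs],  # a callable with a docstring, not the string "search_docs"
}
```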
Subagents surface in transcript view as collapsible cards because Laminar’s frontend groups the LLM and tool spans under each `task` TOOL span into a single subagent boundary. You do not need to add any subagent-specific instrumentation or span types; nesting the subagent run under the `task` tool span is enough.

Later in the same run
After the research phase, the coordinator delegates a final `task` call to the editor subagent to polish the draft. The subagent appears as a second collapsible card with its own input (the style instructions) and output (the critique), and the main agent keeps control of the run for the final summary.

Streaming
deepagents returns a compiled LangGraph, so you can stream the agent’s intermediate steps with `.stream()` / `.astream()`. Laminar wraps both. The root span is opened inside the returned generator, survives as long as you iterate, and ends when the generator closes (including on early break or exception).
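A streaming sketch, assuming `agent` was built with `create_deep_agent`; the `stream_mode` value is a standard LangGraph option, chosen here for illustration:

```python
# The root deep_agent span opens when iteration starts and ends when the
# generator closes -- including on an early break or an exception.
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Compare LangGraph and CrewAI."}]},
    stream_mode="updates",  # per-step updates; "values" streams full state
):
    print(chunk)
    # break  # even here, the root span still closes cleanly
```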
A single top-level call produces a single `deep_agent` root span even though `Pregel.invoke` delegates to `self.stream` internally. The instrumentation uses a `ContextVar` sentinel to collapse the nested call into one root.

Async
`agent.ainvoke` / `agent.astream` are traced identically to the sync paths.
Custom tools
Any callable you pass to `tools=` is wrapped in a TOOL span with its arguments as input and its return value as output. Give the tool a clear docstring: deepagents uses it as the tool’s description to the model, and the function name shows up as the span name in Laminar. Custom tools a subagent calls nest under its `task` span automatically, so the full delegation graph is visible end-to-end.
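A sketch of a custom tool; the function body is hypothetical, but the docstring requirement is real:

```python
def fetch_changelog(package: str) -> str:
    """Fetch the latest changelog entry for a PyPI package."""
    # Hypothetical stub: in Laminar this shows up as a TOOL span named
    # fetch_changelog, with {"package": ...} as input and the return as output.
    return f"{package}: no changes recorded."


# Passed via create_deep_agent(tools=[fetch_changelog], ...)
```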
Track outcomes with Signals
Traces answer what happened on this run. Signals answer the cross-trace question: how often does a subagent exceed its scoped tool list, when does the main agent forget to call `write_todos` before delegating, which runs loop through more than five `task` calls. A Signal pairs a plain-language prompt with a JSON output schema. Laminar runs it live on new traces (Triggers) or backfills it across history (Jobs) and records a structured event every time it matches. From there you query, cluster, and alert on events across every trace.
Every new project ships with a Failure Detector Signal that categorizes issues on any trace over 1000 tokens. Open it from the Signals sidebar to see events as soon as your deepagents traces arrive.
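A hypothetical Signal along the lines above: a plain-language prompt paired with a JSON output schema. The dict shape here is illustrative; the exact configuration fields in Laminar's UI/API may differ:

```python
import json

# Illustrative Signal definition: prompt + JSON output schema.
signal = {
    "prompt": (
        "Did the main agent call write_todos before its first task delegation? "
        "If not, set delegated_without_plan to true."
    ),
    "schema": {
        "type": "object",
        "properties": {"delegated_without_plan": {"type": "boolean"}},
        "required": ["delegated_without_plan"],
    },
}

print(json.dumps(signal, indent=2))
```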
Query across traces
- SQL editor for ad-hoc queries across traces, spans, signals, and evals.
- SQL API for programmatic access from scripts and pipelines.
- CLI (`lmnr-cli sql query`) for terminal-driven queries and piping JSON into shell tools or coding agents.
- MCP server to query Laminar directly from Claude Code, Cursor, or Codex.
Troubleshooting
I don't see any traces in Laminar
- Confirm `LMNR_PROJECT_API_KEY` is set in the same process that runs the agent.
- `deepagents` and `langchain` must both be importable when `Laminar.initialize()` runs. If only one is installed, the deepagents instrumentor is a silent no-op.
- The integration requires `lmnr >= 0.7.50` and `deepagents >= 0.5.0`.
Traces show `anthropic.chat` (or similar) as the root instead of `deep_agent`
Call `Laminar.initialize()` before `from deepagents import create_deep_agent`. The instrumentor patches the `deepagents.create_deep_agent` module attribute; a prior `from deepagents import ...` binds the local name to the unwrapped function, so the patch never takes effect in that script.
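The fix, as an ordering sketch:

```python
# Wrong: this binds the unwrapped function before the patch exists.
# from deepagents import create_deep_agent
# from lmnr import Laminar
# Laminar.initialize()

# Right: initialize first, then import.
from lmnr import Laminar

Laminar.initialize()

from deepagents import create_deep_agent
```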
Subagents aren't nested under the task call
- Make sure subagents are declared via the `subagents=[...]` argument to `create_deep_agent`. Without that list, the `task` tool has nothing to delegate to.
- If you build the agent by hand (without `create_deep_agent`), attach `LaminarMiddleware` yourself.

`LaminarMiddleware` is idempotent: `create_deep_agent` already injects one, so duplicates are deduplicated.
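If you do attach it by hand, a sketch; the `LaminarMiddleware` import path and the middleware-capable agent constructor are assumptions to verify against your installed `lmnr` and LangChain versions:

```python
# Assumptions: LaminarMiddleware is importable from lmnr, and the agent is
# built with LangChain's middleware-capable agent constructor.
from langchain.agents import create_agent
from lmnr import Laminar, LaminarMiddleware

Laminar.initialize()

agent = create_agent(
    model=chat_model,  # any LangChain chat model instance you have constructed
    middleware=[LaminarMiddleware()],
)
```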
I want LangChain / LangGraph spans alongside the `deep_agent` trace
Pass an explicit `instruments` set that includes all three. Note that the LangChain / LangGraph auto-instrumentors emit a LangSmith-style node-level span per graph step, which overlaps with what Laminar already captures at the agent boundary.
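For example, assuming these `Instruments` members exist in your `lmnr` version (`DEEPAGENTS` is documented below; `LANGCHAIN` and `LANGGRAPH` are inferred from the instrumentors named here):

```python
from lmnr import Instruments, Laminar

Laminar.initialize(
    instruments={
        Instruments.DEEPAGENTS,
        Instruments.LANGCHAIN,
        Instruments.LANGGRAPH,
    }
)
```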
I want to disable the deepagents integration
Pass `disabled_instruments={Instruments.DEEPAGENTS}` to `Laminar.initialize()`. The LangChain and LangGraph instrumentors will then auto-enable in its place.
Self-hosting Laminar
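A sketch for a local OSS deployment; the port values are the commonly documented self-hosted defaults and may differ in your setup:

```python
from lmnr import Laminar

Laminar.initialize(
    base_url="http://localhost",
    http_port=8000,  # assumption: default self-hosted HTTP/API port
    grpc_port=8001,  # assumption: default self-hosted gRPC ingest port
)
```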
Set `base_url` and the ports of your instance when initializing Laminar against a local OSS deployment.

What’s next
- Viewing traces: read the transcript view, filter, and search across traces.
- Signals: detect behaviors and failures across every run, then query, cluster, and alert on them.
- SQL editor and MCP server: query traces programmatically from the UI, API, or your IDE.
- LangChain / LangGraph: tracing raw LangChain chains or LangGraph without deepagents on top.
