Documentation Index
Fetch the complete documentation index at: https://laminar.sh/docs/llms.txt
Use this file to discover all available pages before exploring further.
Overview
Laminar is an open-source, OpenTelemetry-native observability platform for AI agents. Trace, debug, and monitor every Mastra agent run, model step, tool call, and sub-agent with a single MastraExporter wired into your Observability config. Self-host via Helm or use managed cloud.
Mastra’s agents run Agent.generate and Agent.stream through a multi-step loop: the model calls a tool, Mastra runs it, feeds the result back, and repeats until the stop condition trips. Laminar plugs in through Mastra’s ObservabilityExporter contract and ingests every span Mastra produces. When an agent exposes sub-agents as tools (coordinator calls specialist), the full sub-agent run nests under the parent tool call in one unified trace.
What Laminar captures:
- The root agent run and each model step with system prompt, prompt messages, response, tool calls, token counts, latency, and cost.
- Every tool invocation, with arguments and return value.
- Sub-agents invoked from a parent agent’s tools, nested under their parent span.
- Thinking tokens per step.

Getting started
Install
Requires @lmnr-ai/lmnr >= 0.8.21, @mastra/core >= 1.0.0, and @mastra/observability >= 1.0.0. Swap @ai-sdk/openai for any provider adapter you use.
Set environment variables
Set LMNR_PROJECT_API_KEY to your project API key, or pass the key directly in Laminar initialization. If not specified, Laminar will look for the key in the LMNR_PROJECT_API_KEY environment variable.
Initialize Laminar and wire the exporter
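A wiring sketch. The import paths and the exact shape of the Observability config are assumptions based on this page and common Mastra patterns, not verbatim API:

```typescript
import { Laminar, MastraExporter } from "@lmnr-ai/lmnr"; // import path assumed
import { Mastra } from "@mastra/core";

// Initialize Laminar first so MastraExporter can hook into its tracer provider.
// Reads LMNR_PROJECT_API_KEY from the environment when no key is passed.
Laminar.initialize();

export const mastra = new Mastra({
  // ...agents, workflows, etc.
  observability: {
    // Disable Mastra's built-in default exporters so spans aren't double-emitted.
    default: { enabled: false },
    configs: {
      laminar: {
        exporters: [new MastraExporter()],
      },
    },
  },
});
```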
Call Laminar.initialize() at the entry point of your app, then hand MastraExporter to Mastra’s Observability config. The exporter has no transport or batching config of its own: it sends spans through Laminar’s tracer provider. default: { enabled: false } turns off Mastra’s built-in default exporters so you don’t double-emit. Flush through observability.shutdown() before process exit: Mastra’s Observability has no synchronous flush() method.
Multi-agent runs with sub-agents
A common Mastra pattern: a coordinator agent exposes sub-agents as tools. The coordinator decides who should handle what, calls the tool, and the tool runs subAgent.generate(...) internally. Laminar nests the sub-agent’s full run (its own model steps and tool calls) directly underneath the parent tool span, so one trace tells the whole story.
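A sketch of the pattern. The agent names and the booking tool are hypothetical, and the Agent/createTool option shapes follow common Mastra patterns that may differ across versions:

```typescript
import { Agent } from "@mastra/core/agent";
import { createTool } from "@mastra/core/tools";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Hypothetical specialist sub-agent.
const bookingAgent = new Agent({
  name: "booking-agent",
  instructions: "Handle flight and hotel bookings.",
  model: openai("gpt-4o-mini"),
});

// Expose the sub-agent as a tool on the coordinator. The sub-agent's own
// model steps and tool calls nest under this tool span in the Laminar trace.
const delegateBooking = createTool({
  id: "delegate-booking",
  description: "Delegate booking requests to the booking specialist.",
  inputSchema: z.object({ request: z.string() }),
  execute: async ({ context }) => {
    const result = await bookingAgent.generate(context.request);
    return result.text;
  },
});

const coordinator = new Agent({
  name: "coordinator",
  instructions: "Route travel requests to the right specialist.",
  model: openai("gpt-4o"),
  tools: { delegateBooking },
});
```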
observe() gives you a single root span for the whole request. MastraExporter detects the active OpenTelemetry context and rewrites every Mastra span onto that trace, so your plan-trip root and the full Mastra subtree render together.
Tree view shows the full hierarchy when you want to see how the sub-agent nests:

Nest Mastra spans inside your own code
Wrap any Mastra call with observe() to group multiple agent runs under one trace, add metadata, or pin the trace to a session or user.
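For example, a sketch of an observe() wrapper. The option names (sessionId, userId) and the "./mastra" module are assumptions for illustration:

```typescript
import { Laminar, observe } from "@lmnr-ai/lmnr";
import { mastra } from "./mastra"; // hypothetical module exporting your Mastra instance

Laminar.initialize();

// One root span for the whole request; the Mastra subtree nests underneath it.
const result = await observe(
  { name: "plan-trip", sessionId: "session-123", userId: "user-456" },
  async () => {
    const agent = mastra.getAgent("coordinator");
    return agent.generate("Plan a 3-day trip to Lisbon.");
  },
);
```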
MastraExporter options
- realtime: forces a flush on every span end. Useful for scripts and serverless handlers that exit before the batch processor drains on its own. Leave off for long-running services.
- linkToActiveContext: default true. When a Mastra agent runs inside an active OpenTelemetry span (an observe() wrapper, a Next.js route instrumented with @vercel/otel, any other OTel-aware library), the exporter rewrites Mastra’s trace id onto the caller’s trace so the whole thing renders as one. Set to false to keep Mastra’s original trace id.
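For instance, a short-lived script might set both options. The option names come from this page; treat the constructor shape as a sketch:

```typescript
import { MastraExporter } from "@lmnr-ai/lmnr";

// Flush on every span end (script exits quickly) and keep Mastra's own
// trace ids instead of joining the caller's active OTel trace.
const exporter = new MastraExporter({
  realtime: true,
  linkToActiveContext: false,
});
```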
See what happened in a trace
Open the trace in Laminar and the default view is the transcript: each agent renders as a card with its auto-extracted input and final output, sub-agents collapse to the same card shape so you can see the delegation at a glance and expand the ones you care about, and every LLM turn has a one-line preview of the response. Switch to tree view when you want span-by-span structure. More on the trace UX: Viewing traces.
Track outcomes with Signals
Traces answer what happened on this run. Signals answer the cross-trace question: how often does the concierge skip delegation and answer a booking itself, when do sub-agent tool calls return errors, which runs exceed five model steps without a final answer. A Signal pairs a plain-language prompt with a JSON output schema. Laminar runs it live on new traces (Triggers) or backfills it across history (Jobs) and records a structured event every time it matches. From there you query, cluster, and alert on events across every trace.
Every new project ships with a Failure Detector Signal that categorizes issues on any trace over 1000 tokens. Open it from the Signals sidebar to see events as soon as your Mastra traces arrive.
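As an illustration, a Signal for the delegation-skipping question above would pair a prompt with a schema roughly like this. The shape is hypothetical, not Laminar’s exact format:

```typescript
// Hypothetical Signal definition: a plain-language prompt paired with a
// JSON output schema for the structured event recorded on each match.
const skipDelegationSignal = {
  prompt:
    "Did the coordinator answer a booking request itself instead of " +
    "delegating to the booking sub-agent?",
  outputSchema: {
    type: "object",
    properties: {
      skipped_delegation: { type: "boolean" },
      user_request: { type: "string" },
    },
    required: ["skipped_delegation"],
  },
};
```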
Query across traces
- SQL editor for ad-hoc queries across traces, spans, signals, and evals.
- SQL API for programmatic access from scripts and pipelines.
- CLI (lmnr-cli sql query) for terminal-driven queries and piping JSON into shell tools or coding agents.
- MCP server to query Laminar from Claude Code, Cursor, or Codex.
Troubleshooting
I don't see any traces in Laminar
- Confirm LMNR_PROJECT_API_KEY is set in the same process that runs Mastra.
- Laminar.initialize() must run before new Mastra({ ... }) so the exporter can hook into Laminar’s tracer provider.
- Call await observability.shutdown() before your process exits. There is no .flush() on Observability; shutdown is the flush path.
- If you pass default: { enabled: true } you’ll get Mastra’s default in-memory exporters alongside Laminar; turn it off unless you want both.
Mastra span appears as a separate trace from my observe() wrapper
By default, MastraExporter reparents Mastra spans onto the active OTel trace so they nest under your observe() span. If you see two traces instead of one:
- Make sure Laminar.initialize() runs before the observe() call.
- Check that you did not pass linkToActiveContext: false to MastraExporter.
Self-hosting Laminar
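A sketch for pointing the SDK at a self-hosted instance. The baseUrl value, the port numbers, and the httpPort/grpcPort option names are assumptions about a default local OSS deployment; check your instance’s actual configuration:

```typescript
import { Laminar } from "@lmnr-ai/lmnr";

// Point the SDK at a self-hosted Laminar instance instead of managed cloud.
// Values below assume a default local OSS deployment.
Laminar.initialize({
  baseUrl: "http://localhost",
  httpPort: 8000,
  grpcPort: 8001,
});
```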
Set baseUrl and the ports of your instance when initializing, for example when running a local OSS deployment.
What’s next
Viewing traces
Read the transcript view, filter, and search across traces.
Signals
Detect behaviors and failures across every run, then query, cluster, and alert on them.
SQL editor and MCP server
Query traces programmatically from the UI, API, or your IDE.
Tracing structure
Sessions, user IDs, metadata, and tags.
Vercel AI SDK
Mastra uses the AI SDK under the hood. Using it directly? Trace it here.
OpenAI Agents SDK
Python-first agent framework with similar multi-agent shape.
