
Documentation Index

Fetch the complete documentation index at: https://laminar.sh/docs/llms.txt

Use this file to discover all available pages before exploring further.

Overview

Laminar is an open-source, OpenTelemetry-native observability platform for AI agents. Trace, debug, and monitor every Mastra agent run, model step, tool call, and sub-agent with a single MastraExporter wired into your Observability config. Self-host via Helm or use managed cloud.

Mastra’s agents run Agent.generate and Agent.stream through a multi-step loop: the model calls a tool, Mastra runs it, feeds the result back, and repeats until the stop condition trips. Laminar plugs in through Mastra’s ObservabilityExporter contract and ingests every span Mastra produces. When an agent exposes sub-agents as tools (coordinator calls specialist), the full sub-agent run nests under the parent tool call in one unified trace.

What Laminar captures:
  • The root agent run and each model step with system prompt, prompt messages, response, tool calls, token counts, latency, and cost.
  • Every tool invocation, with arguments and return value.
  • Sub-agents invoked from a parent agent’s tools, nested under their parent span.
  • Thinking tokens per step.
[Image: Mastra trace in Laminar, transcript view]

Getting started

1. Install

Requires @lmnr-ai/lmnr >= 0.8.21, @mastra/core >= 1.0.0, and @mastra/observability >= 1.0.0.
npm install @lmnr-ai/lmnr @mastra/core @mastra/observability @ai-sdk/openai
Swap @ai-sdk/openai for any provider adapter you use.
2. Set environment variables

# .env
LMNR_PROJECT_API_KEY=your-laminar-project-api-key
OPENAI_API_KEY=your-openai-api-key
To get a project API key, open the Laminar dashboard, go to the project settings, and generate one. This works both in the cloud and in the self-hosted version of Laminar. Specify the key at Laminar initialization; if not specified, Laminar looks for it in the LMNR_PROJECT_API_KEY environment variable.
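If you want to pass the key explicitly rather than rely on the environment variable, hand it to Laminar.initialize(). A minimal sketch, assuming the projectApiKey option name from the @lmnr-ai/lmnr SDK:

```typescript
import { Laminar } from '@lmnr-ai/lmnr';

// An explicitly passed key wins; without it, Laminar falls back to
// the LMNR_PROJECT_API_KEY environment variable.
Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
});
```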
3. Initialize Laminar and wire the exporter

Call Laminar.initialize() at the entry point of your app, then hand MastraExporter to Mastra’s Observability config. The exporter has no transport or batching config of its own: it sends spans through Laminar’s tracer provider.
import { openai } from '@ai-sdk/openai';
import { Agent } from '@mastra/core/agent';
import { Mastra } from '@mastra/core/mastra';
import { Observability } from '@mastra/observability';
import { Laminar, MastraExporter } from '@lmnr-ai/lmnr';
import 'dotenv/config';

Laminar.initialize();

const agent = new Agent({
  id: 'assistant',
  name: 'assistant',
  instructions: 'You are a concise, friendly assistant.',
  model: openai('gpt-5-mini'),
});

const observability = new Observability({
  default: { enabled: false },
  configs: {
    laminar: {
      serviceName: 'my-mastra-app',
      exporters: [new MastraExporter()],
    },
  },
});

new Mastra({
  agents: { assistant: agent },
  observability,
});

const result = await agent.generate('Write a two-line haiku about tracing.');
console.log(result.text);

await observability.shutdown();
await Laminar.shutdown();
default: { enabled: false } turns off Mastra’s built-in default exporters so you don’t double-emit. Flush through observability.shutdown() before process exit: Mastra’s Observability has no synchronous flush() method.

Multi-agent runs with sub-agents

A common Mastra pattern: a coordinator agent exposes sub-agents as tools. The coordinator decides who should handle what, calls the tool, and the tool runs subAgent.generate(...) internally. Laminar nests the sub-agent’s full run (its own model steps and tool calls) directly underneath the parent tool span, so one trace tells the whole story.
import { openai } from '@ai-sdk/openai';
import { Agent } from '@mastra/core/agent';
import { Mastra } from '@mastra/core/mastra';
import { createTool } from '@mastra/core/tools';
import { Observability } from '@mastra/observability';
import { Laminar, MastraExporter, observe } from '@lmnr-ai/lmnr';
import { z } from 'zod';

Laminar.initialize();

const lookupFlights = createTool({
  id: 'lookup_flights',
  description: 'Return candidate flights between two cities on a date.',
  inputSchema: z.object({ from: z.string(), to: z.string(), date: z.string() }),
  execute: async (inputData) => {
    const { from, to, date } = inputData.context;
    return {
      flights: [
        { carrier: 'Delta', number: 'DL123', priceUsd: 342 },
        { carrier: 'United', number: 'UA456', priceUsd: 298 },
      ],
      route: `${from} -> ${to}`,
      date,
    };
  },
});

const flightAgent = new Agent({
  id: 'flight-agent',
  name: 'flight-agent',
  instructions:
    'You are a flight specialist. Call lookup_flights and recommend one option.',
  model: openai('gpt-5-mini'),
  tools: { lookupFlights },
});

const bookFlight = createTool({
  id: 'book_flight',
  description: 'Delegate to the flight specialist.',
  inputSchema: z.object({ from: z.string(), to: z.string(), date: z.string() }),
  execute: async (inputData) => {
    const { from, to, date } = inputData.context;
    const res = await flightAgent.generate(
      `Find a flight from ${from} to ${to} on ${date}. Recommend one option.`,
    );
    return { recommendation: res.text };
  },
});

const conciergeAgent = new Agent({
  id: 'concierge',
  name: 'concierge',
  instructions:
    'You are a travel concierge. Use book_flight to delegate, then summarize the plan.',
  model: openai('gpt-5'),
  tools: { bookFlight },
});

new Mastra({
  agents: { conciergeAgent, flightAgent },
  observability: new Observability({
    default: { enabled: false },
    configs: {
      laminar: {
        serviceName: 'trip-planner',
        exporters: [new MastraExporter()],
      },
    },
  }),
});

await observe({ name: 'plan-trip' }, async () => {
  const { text } = await conciergeAgent.generate(
    'Plan a flight from San Francisco to New York on 2026-06-15.',
    { stopWhen: ({ steps }) => (steps?.length ?? 0) >= 6 },
  );
  console.log(text);
});
Wrapping the coordinator call in observe() gives you a single root span for the whole request. MastraExporter detects the active OpenTelemetry context and rewrites every Mastra span onto that trace, so your plan-trip root and the full Mastra subtree render together. Tree view shows the full hierarchy when you want to see how the sub-agent nests:
[Image: Mastra multi-agent trace in Laminar, tree view]

Nest Mastra spans inside your own code

Wrap any Mastra call with observe() to group multiple agent runs under one trace, add metadata, or pin the trace to a session or user.
import { observe } from '@lmnr-ai/lmnr';

await observe(
  { name: 'user-request', sessionId: req.sessionId, userId: req.userId },
  async () => {
    const plan = await conciergeAgent.generate(req.prompt);
    const followUp = await conciergeAgent.generate(
      `Given the plan, suggest two dinner options near the hotel. Plan: ${plan.text}`,
    );
    return { plan: plan.text, followUp: followUp.text };
  },
);
See the full observe reference for session IDs, user IDs, metadata, and tags.
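Metadata and tags ride along on the same options object as session and user IDs. A sketch, assuming the metadata and tags option names described in the observe reference:

```typescript
import { observe } from '@lmnr-ai/lmnr';

await observe(
  {
    name: 'user-request',
    sessionId: 'session-123',
    userId: 'user-42',
    metadata: { plan: 'pro', region: 'us-east-1' }, // arbitrary key-value pairs on the trace
    tags: ['mastra', 'travel'],                     // labels you can filter on later
  },
  async () => {
    // conciergeAgent is the agent defined in the earlier example.
    const { text } = await conciergeAgent.generate('Plan a weekend in Lisbon.');
    return text;
  },
);
```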

MastraExporter options

new MastraExporter({
  realtime: true,            // Force-flush after each span end. Use in short-lived processes.
  linkToActiveContext: true, // Default. Rewrite Mastra spans onto the active OTel trace.
});
  • realtime: forces a flush on every span end. Useful for scripts and serverless handlers that exit before the batch processor drains on its own. Leave off for long-running services.
  • linkToActiveContext: default true. When a Mastra agent runs inside an active OpenTelemetry span (an observe() wrapper, a Next.js route instrumented with @vercel/otel, any other OTel-aware library), the exporter rewrites Mastra’s trace id onto the caller’s trace so the whole thing renders as one. Set to false to keep Mastra’s original trace id.

See what happened in a trace

Open a trace in Laminar and the default view is the transcript: each agent renders as a card with its auto-extracted input and final output. Sub-agents collapse to the same card shape, so you can see the delegation at a glance and expand only the runs you care about, and every LLM turn shows a one-line preview of the response. Switch to tree view when you want span-by-span structure. More on the trace UX: Viewing traces.

Track outcomes with Signals

Traces answer what happened on this run. Signals answer cross-trace questions: how often does the concierge skip delegation and answer a booking itself? When do sub-agent tool calls return errors? Which runs exceed five model steps without a final answer? A Signal pairs a plain-language prompt with a JSON output schema. Laminar runs it live on new traces (Triggers) or backfills it across history (Jobs), recording a structured event every time it matches. From there you can query, cluster, and alert on events across every trace.
Every new project ships with a Failure Detector Signal that categorizes issues on any trace over 1000 tokens. Open it from the Signals sidebar to see events as soon as your Mastra traces arrive.
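As an illustration, here is what the prompt/schema pair behind the delegation-skipping question above could look like. The shape and field names are hypothetical, not Laminar's Signal API; you define the real thing in the Signals UI:

```typescript
// Hypothetical Signal definition, for illustration only.
const skippedDelegation = {
  // Plain-language question the Signal asks of each trace.
  prompt:
    'Did the concierge answer a booking question itself instead of ' +
    'delegating via the book_flight tool?',
  // JSON output schema for the structured event recorded on a match.
  outputSchema: {
    type: 'object',
    properties: {
      skipped_delegation: { type: 'boolean' },
      reason: { type: 'string' },
    },
    required: ['skipped_delegation'],
  },
};
```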

Query across traces

  • SQL editor for ad-hoc queries across traces, spans, signals, and evals.
  • SQL API for programmatic access from scripts and pipelines.
  • CLI (lmnr-cli sql query) for terminal-driven queries and piping JSON into shell tools or coding agents.
  • MCP server to query Laminar from Claude Code, Cursor, or Codex.

Troubleshooting

  • Confirm LMNR_PROJECT_API_KEY is set in the same process that runs Mastra.
  • Laminar.initialize() must run before new Mastra({ ... }) so the exporter can hook into Laminar’s tracer provider.
  • Call await observability.shutdown() before your process exits. There is no .flush() on Observability; shutdown is the flush path.
  • If you pass default: { enabled: true } you’ll get Mastra’s default in-memory exporters alongside Laminar; turn it off unless you want both.
By default, MastraExporter reparents Mastra spans onto the active OTel trace so they nest under your observe() span. If you see two traces instead of one:
  • Make sure Laminar.initialize() runs before the observe() call.
  • Check that you did not pass linkToActiveContext: false to MastraExporter.
If you self-host Laminar, set baseUrl and the ports of your instance when initializing. For a local OSS deployment:
Laminar.initialize({
  baseUrl: 'http://localhost',
  httpPort: 8000,
  grpcPort: 8001,
});

What’s next

Viewing traces

Read the transcript view, filter, and search across traces.

Signals

Detect behaviors and failures across every run, then query, cluster, and alert on them.

SQL editor and MCP server

Query traces programmatically from the UI, API, or your IDE.

Tracing structure

Sessions, user IDs, metadata, and tags.

Vercel AI SDK

Mastra uses the AI SDK under the hood. Using it directly? Trace it here.

OpenAI Agents SDK

Python-first agent framework with similar multi-agent shape.