Helicone has announced it is joining Mintlify (March 2026) and will remain in maintenance mode, so you have time to migrate properly. If you picked Helicone because it was the simplest way to get LLM observability running, Laminar is the same story - one initialization call, no proxy, no routing changes. Helicone was both a gateway and an observability layer; Laminar replaces the observability side with deeper, agent-focused tracing. This guide covers two migration paths: let a coding agent handle it, or do it manually in about 15 minutes.
Fast Path: Let the Agent Migrate It
If you use Claude Code, Cursor, Codex, or another coding agent that supports skills, this is the fastest route. Run the `laminar-instrument-codebase` skill (see Skills setup) with the following prompt:

```
Use the laminar-instrument-codebase skill to migrate this repo from Helicone to Laminar.
Replace Helicone proxying and headers with Laminar.initialize, keep request/user/session metadata
as tags/metadata, and verify traces in Laminar.
```
What it will do:
- Install the Laminar SDK in the right package
- Add Laminar.initialize(...) at the earliest safe startup point
- Remove Helicone-specific headers or base URL overrides
- Move Helicone metadata into Laminar tags and trace metadata
- Verify traces show up in the Laminar UI
Manual Migration (15 Minutes)
1. Create a Laminar project and set `LMNR_PROJECT_API_KEY`.

2. Install the SDK (TypeScript: `npm add @lmnr-ai/lmnr`; Python: `pip install lmnr`). See Hosting Options if you are self-hosting.

3. Initialize Laminar as early as possible in your app entrypoint.

   TypeScript:

   ```typescript
   import { Laminar } from '@lmnr-ai/lmnr';

   Laminar.initialize({
     projectApiKey: process.env.LMNR_PROJECT_API_KEY!,
   });
   ```

   Python:

   ```python
   import os

   from lmnr import Laminar

   Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])
   ```

4. Remove Helicone proxying.

   ```typescript
   // Before: Helicone proxy
   const openai = new OpenAI({
     baseURL: 'https://oai.helicone.ai/v1',
     defaultHeaders: { 'Helicone-Auth': process.env.HELICONE_API_KEY },
   });

   // After: native endpoint, Laminar instruments automatically
   const openai = new OpenAI();
   ```

5. Move Helicone metadata into Laminar context (inside an active span).

   TypeScript:

   ```typescript
   // Before: Helicone headers
   headers: {
     'Helicone-User-Id': userId,
     'Helicone-Session-Id': sessionId,
     'Helicone-Property-Env': 'production',
   }

   // After: Laminar context (inside an active span)
   Laminar.setTraceUserId(userId);
   Laminar.setTraceSessionId(sessionId);
   Laminar.setTraceMetadata({ env: 'production' });
   ```

   Python:

   ```python
   # After: Laminar context (inside an active span)
   Laminar.set_trace_user_id(user_id)
   Laminar.set_trace_session_id(session_id)
   Laminar.set_trace_metadata({"env": "production"})
   ```

   "Active span" means you're already inside an `observe(...)` block or a Laminar span. For example:

   TypeScript:

   ```typescript
   import { Laminar, observe } from '@lmnr-ai/lmnr';

   await observe({ name: 'handle_request' }, async () => {
     Laminar.setTraceUserId(userId);
     Laminar.setTraceSessionId(sessionId);
     Laminar.setTraceMetadata({ env: 'production' });
     // ...rest of your request/agent logic
   });
   ```

   Python:

   ```python
   from lmnr import Laminar, observe

   @observe()
   def handle_request(user_id: str, session_id: str):
       Laminar.set_trace_user_id(user_id)
       Laminar.set_trace_session_id(session_id)
       Laminar.set_trace_metadata({"env": "production"})
       # ...rest of your request/agent logic
   ```

6. Verify in the UI. Run a single request. You should see a trace with LLM spans and a clear tree. Tool spans will appear if your tool layer is instrumented (e.g., LangChain/LlamaIndex) or if you wrap tool calls with `observe({ spanType: 'TOOL' })` (TypeScript) or `Laminar.start_as_current_span(..., span_type="TOOL")` (Python). If you only see a root span, wrap your agent entrypoint with `observe(...)` or `@observe()`. See Viewing Traces.
If You Already Have OpenTelemetry
Laminar is OTel-native. If you already emit OTel spans, keep your span structure and point the OTLP exporter to Laminar. OTLP/gRPC is recommended.
```typescript
import { Metadata } from '@grpc/grpc-js';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc';

const metadata = new Metadata();
metadata.set('authorization', `Bearer ${process.env.LMNR_PROJECT_API_KEY}`);

const exporter = new OTLPTraceExporter({
  url: 'https://api.lmnr.ai:8443/v1/traces',
  metadata,
});
```
See OpenTelemetry for HTTP exporters and Python examples.
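As a quick sketch, the Python OTLP/gRPC wiring mirrors the TypeScript exporter above. This assumes the standard `opentelemetry-sdk` and `opentelemetry-exporter-otlp-proto-grpc` packages; confirm the exact endpoint and header format against the Laminar OpenTelemetry docs before shipping.

```python
import os

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Sketch: export spans to Laminar over OTLP/gRPC, authenticated with the project API key.
exporter = OTLPSpanExporter(
    endpoint="https://api.lmnr.ai:8443",  # gRPC endpoint; gRPC ignores a URL path component
    headers={"authorization": f"Bearer {os.environ['LMNR_PROJECT_API_KEY']}"},
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```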
What Maps and What Doesn't
Laminar replaces Helicone's observability and tracing - and gives you significantly more depth with agent-focused tracing, browser session recordings, Signals, and SQL analysis. If you were also using Helicone as an AI gateway for caching, provider routing, or rate limiting, you'll need a separate solution for that layer (LiteLLM, Portkey, or your provider's native SDK).
| What You Used in Helicone | Laminar Equivalent | Notes |
|---|---|---|
| Request logging & cost tracking | Trace viewer + dashboards | Laminar auto-tracks tokens, latency, and cost per span for instrumented providers. |
| Session tracing | observe() + session IDs | Deeper agent trace trees, not just request logs. |
| Custom properties (headers) | Laminar.setTraceMetadata() | Same concept, SDK-based instead of header-based. |
| User/session tracking | Laminar.setTraceUserId() / Laminar.setTraceSessionId() | Direct mapping. |
| Prompt playground | Laminar Playground | Similar capability. See Playground. |
| Response caching | No equivalent | Use provider-native caching or a separate gateway. |
| Provider routing / fallbacks | No equivalent | Use LiteLLM or direct provider SDKs. |
| Rate limiting | No equivalent | Handle at the application or gateway layer. |
| Prompt versioning / management | No equivalent | Handle in code or use a prompt registry. |
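Caching and rate limiting move out of the gateway and into your application (or a dedicated gateway such as LiteLLM). As a rough stdlib-only Python illustration of what that layer can look like (hypothetical helper classes, not part of Laminar):

```python
import hashlib
import threading
import time
from collections import OrderedDict


class TokenBucket:
    """Minimal token-bucket rate limiter: refills `rate` tokens/second up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def allow(self) -> bool:
        with self.lock:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False


class ResponseCache:
    """Tiny TTL cache keyed by (model, prompt), standing in for gateway response caching."""

    def __init__(self, ttl_seconds: float = 300.0, max_entries: int = 1024):
        self.ttl = ttl_seconds
        self.max_entries = max_entries
        self.entries = OrderedDict()  # key -> (stored_at, response)

    def _key(self, model: str, prompt: str) -> str:
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def get(self, model: str, prompt: str):
        hit = self.entries.get(self._key(model, prompt))
        if hit and time.monotonic() - hit[0] < self.ttl:
            return hit[1]
        return None

    def put(self, model: str, prompt: str, response: str) -> None:
        key = self._key(model, prompt)
        self.entries[key] = (time.monotonic(), response)
        self.entries.move_to_end(key)
        while len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)  # evict oldest entry


# Usage sketch: check the cache and the bucket before calling the provider.
bucket = TokenBucket(rate=5.0, capacity=5)
cache = ResponseCache(ttl_seconds=60)
cache.put("gpt-4o", "hello", "hi there")
cached = cache.get("gpt-4o", "hello")
allowed = bucket.allow()
```

For provider fallbacks, the same pattern applies: wrap the provider call, catch errors, and retry against the next client in a list.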
What You Gain with Laminar
- Browser session recordings synced to traces
- Signals for natural language pattern detection across traces
- SQL access to all trace data
- Evals you can run locally or in CI
- Replay and rollout workflows from traced spans
Once you're set up, explore Signals, SQL analysis, and real-time agent tracing in the docs.
If anything doesn't map cleanly, drop into our Discord and we'll help you sort it out.