
Migrate from Helicone to Laminar

Mar 5, 2026 · Sam Komesarook

Helicone has announced it is joining Mintlify (March 2026) and will remain in maintenance mode, so you have time to migrate properly. If you picked Helicone because it was the simplest way to get LLM observability running, Laminar is the same story - one initialization call, no proxy, no routing changes. Helicone was both a gateway and an observability layer; Laminar replaces the observability side with deeper, agent-focused tracing. This guide covers two migration paths: let a coding agent handle it, or do it manually in about 15 minutes.

Fast Path: Let the Agent Migrate It

If you use Claude Code, Cursor, Codex, or another coding agent that supports skills, this is the fastest route. Run the laminar-instrument-codebase skill (see Skills setup) with the following prompt:

Use the laminar-instrument-codebase skill to migrate this repo from Helicone to Laminar.
Replace Helicone proxying and headers with Laminar.initialize, keep request/user/session metadata
as tags/metadata, and verify traces in Laminar.

What it will do:

  • Install the Laminar SDK in the right package
  • Add Laminar.initialize(...) at the earliest safe startup point
  • Remove Helicone-specific headers or base URL overrides
  • Move Helicone metadata into Laminar tags and trace metadata
  • Verify traces show up in the Laminar UI
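The header cleanup in the middle bullets is mechanical. As a rough sketch (a hypothetical helper, not part of either SDK), removing Helicone's request decoration amounts to:

```python
def strip_helicone_headers(headers: dict) -> dict:
    """Return a copy of the request headers with every Helicone-specific
    header (Helicone-Auth, Helicone-User-Id, Helicone-Property-*, ...)
    removed. Illustrative only: a real migration edits the client
    construction itself, as shown in the manual steps below."""
    return {
        name: value
        for name, value in headers.items()
        if not name.startswith("Helicone-")
    }
```

The agent applies the same idea at the source level: it deletes the header and base-URL overrides rather than filtering at request time.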

Manual Migration (15 Minutes)

  1. Create a Laminar project and set LMNR_PROJECT_API_KEY.

  2. Install the SDK (TypeScript: npm add @lmnr-ai/lmnr; Python: pip install lmnr). See Hosting Options if you are self-hosting.

  3. Initialize Laminar as early as possible in your app entrypoint.

    TypeScript

    import { Laminar } from '@lmnr-ai/lmnr';
    
    Laminar.initialize({
      projectApiKey: process.env.LMNR_PROJECT_API_KEY!,
    });
    
    Python

    import os
    from lmnr import Laminar
    
    Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])
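A fail-fast check before this initialize call can save debugging time later. A minimal sketch using only the standard library (the helper name and error message are our own, not part of the Laminar SDK):

```python
import os

def require_laminar_key() -> str:
    """Read LMNR_PROJECT_API_KEY and raise a clear error at startup if it
    is unset, instead of discovering missing traces later."""
    key = os.environ.get("LMNR_PROJECT_API_KEY")
    if not key:
        raise RuntimeError(
            "LMNR_PROJECT_API_KEY is not set; create a Laminar project "
            "and export its API key before initializing."
        )
    return key
```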
    
  4. Remove Helicone proxying.

    // Before: Helicone proxy
    const openai = new OpenAI({
      baseURL: 'https://oai.helicone.ai/v1',
      defaultHeaders: { 'Helicone-Auth': `Bearer ${process.env.HELICONE_API_KEY}` },
    });
    
    // After: native endpoint, Laminar instruments automatically
    const openai = new OpenAI();
    
  5. Move Helicone metadata into Laminar context (inside an active span).

    // Before: Helicone headers
    headers: {
      'Helicone-User-Id': userId,
      'Helicone-Session-Id': sessionId,
      'Helicone-Property-Env': 'production',
    }
    
    // After: Laminar context (inside an active span)
    Laminar.setTraceUserId(userId);
    Laminar.setTraceSessionId(sessionId);
    Laminar.setTraceMetadata({ env: 'production' });
    
    # After: Laminar context (inside an active span)
    Laminar.set_trace_user_id(user_id)
    Laminar.set_trace_session_id(session_id)
    Laminar.set_trace_metadata({"env": "production"})
    

    Active span means you're already inside an observe(...) block or a Laminar span. For example:

    TypeScript

    import { Laminar, observe } from '@lmnr-ai/lmnr';
    
    await observe({ name: 'handle_request' }, async () => {
      Laminar.setTraceUserId(userId);
      Laminar.setTraceSessionId(sessionId);
      Laminar.setTraceMetadata({ env: 'production' });
      // ...rest of your request/agent logic
    });
    

    Python

    from lmnr import Laminar, observe
    
    @observe()
    def handle_request(user_id: str, session_id: str):
        Laminar.set_trace_user_id(user_id)
        Laminar.set_trace_session_id(session_id)
        Laminar.set_trace_metadata({"env": "production"})
        # ...rest of your request/agent logic
    
  6. Verify in the UI. Run a single request. You should see a trace with LLM spans and a clear tree. Tool spans will appear if your tool layer is instrumented (e.g., LangChain/LlamaIndex) or if you wrap tool calls with observe({ spanType: 'TOOL' }) (TypeScript) or Laminar.start_as_current_span(..., span_type="TOOL") (Python). If you only see a root span, wrap your agent entrypoint with observe(...) or @observe(). See Viewing Traces.

If You Already Have OpenTelemetry

Laminar is OTel-native. If you already emit OTel spans, keep your span structure and point the OTLP exporter to Laminar. OTLP/gRPC is recommended.

import { Metadata } from '@grpc/grpc-js';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc';

const metadata = new Metadata();
metadata.set('authorization', `Bearer ${process.env.LMNR_PROJECT_API_KEY}`);

const exporter = new OTLPTraceExporter({
  url: 'https://api.lmnr.ai:8443/v1/traces',
  metadata,
});

See OpenTelemetry for HTTP exporters and Python examples.

What Maps and What Doesn't

Laminar replaces Helicone's observability and tracing - and gives you significantly more depth with agent-focused tracing, browser session recordings, Signals, and SQL analysis. If you were also using Helicone as an AI gateway for caching, provider routing, or rate limiting, you'll need a separate solution for that layer (LiteLLM, Portkey, or your provider's native SDK).

What You Used in Helicone | Laminar Equivalent | Notes
Request logging & cost tracking | Trace viewer + dashboards | Laminar auto-tracks tokens, latency, and cost per span for instrumented providers.
Session tracing | observe() + session IDs | Deeper agent trace trees, not just request logs.
Custom properties (headers) | Laminar.setTraceMetadata() | Same concept, SDK-based instead of header-based.
User/session tracking | Laminar.setTraceUserId() / Laminar.setTraceSessionId() | Direct mapping.
Prompt playground | Laminar Playground | Similar capability. See Playground.
Response caching | No equivalent | Use provider-native caching or a separate gateway.
Provider routing / fallbacks | No equivalent | Use LiteLLM or direct provider SDKs.
Rate limiting | No equivalent | Handle at the application or gateway layer.
Prompt versioning / management | No equivalent | Handle in code or use a prompt registry.
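If provider routing and fallbacks were part of your Helicone setup, that layer moves to a gateway. As a rough sketch of what the replacement could look like with LiteLLM's proxy (config shape per LiteLLM's docs; model names, deployment names, and env vars here are placeholders):

```yaml
model_list:
  - model_name: gpt-4o                 # alias your app requests
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: gpt-4o                 # second deployment under the same alias
    litellm_params:
      model: azure/my-gpt4o-deployment
      api_key: os.environ/AZURE_API_KEY
      api_base: os.environ/AZURE_API_BASE

litellm_settings:
  num_retries: 2                       # retry before failing over
```

Laminar still traces the LLM calls that pass through the gateway, so observability and routing stay cleanly separated.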

What You Gain with Laminar

Once you're set up, explore Signals, SQL analysis, and real-time agent tracing in the docs.

If anything doesn't map cleanly, drop into our Discord and we'll help you sort it out.