A ten-minute agent run produces a two-thousand-span trace. You open it and see a tree. The tree tells you nothing. You wanted to know what the agent was asked to do, what it did, what tools it called, and which subagent dropped the ball. That is what Laminar's transcript view shows by default.
Spans are the right data model for observability. They are a bad interface for reading what an agent did. A span tree packs every HTTP call, prompt render, and middleware hop at the same visual weight as the single LLM turn you actually care about. It is a flame graph for people who want a conversation log.
Transcript view is the fix. It reads top-to-bottom, renders each turn like a chat, inlines tool calls where they happened, and collapses subagents into named cards that show who was asked to do what. The hierarchy is still there; it is just the second tab.

Inputs to every agent and subagent, surfaced for free
What we care about most in transcript view is that you never have to instrument a trace to make it readable.
Laminar parses the span tree and automatically pulls out the input to the root call and the input to each subagent. No extra attribute to set. No wrapper span you forgot to add. No "prompt" field you meant to record and didn't. You send OpenTelemetry spans from the Claude Agent SDK (or any other framework we cover) and the transcript view already knows what the human asked the orchestrator, what the orchestrator asked each subagent, and what each subagent returned.
This is the part that disappears in span-tree observability tools. In a tree view, the input to a subagent is buried inside an attribute on the invoking LLM call, three expansions deep. In transcript view, it is the first line of the subagent's card.
The DX difference compounds. Every time you build a new multi-agent flow, every time a teammate opens a trace they did not write, every time you come back to a trace a week later: you do not have to remember or reverse-engineer where the prompt lives. It is just there.
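To make the extraction concrete, here is a toy sketch of the idea: walk a span tree and surface each subagent's input as a top-level entry. The span shape, the `span.type` attribute, and the field names are all hypothetical stand-ins, not Laminar's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    attributes: dict
    children: list["Span"] = field(default_factory=list)

def subagent_inputs(root: Span) -> dict[str, str]:
    """Walk the tree and collect each subagent's input, however deeply nested."""
    found = {}
    stack = [root]
    while stack:
        span = stack.pop()
        if span.attributes.get("span.type") == "subagent":
            found[span.name] = span.attributes.get("input", "")
        stack.extend(span.children)
    return found

# A miniature trace: an orchestrator whose LLM turn spawned one subagent,
# plus a second subagent invoked directly.
trace = Span("orchestrator", {"input": "Fix the failing test"}, [
    Span("llm.turn", {}, [
        Span("Code Researcher", {"span.type": "subagent",
                                 "input": "Find where fizzbuzz.py breaks"}),
    ]),
    Span("Code Reviewer", {"span.type": "subagent",
                           "input": "Review the proposed fix"}),
])

print(subagent_inputs(trace))
```

The point of the sketch is the contract, not the code: the reader never specifies where the prompts live; the tree walk finds them.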
Subagents are cards, not a flood of spans
A multi-agent run in a tree view is a wall. The orchestrator's LLM turn, the first subagent's LLM turn, the second's, the third's, their tool calls, their retries: everything is a row, everything is indented, everything is the same color as the one thing you actually want to see.
Transcript view refuses to do this. Laminar recognizes each subagent invocation, collapses the entire subagent subtree into a single card, and gives the card a name pulled from the invocation's intent. Code Researcher. Code Reviewer. Test Runner. You see three cards, not three trees.

When you want to dig in, click the card. It expands in place and shows only the spans that belong to that subagent: its LLM turns, its Read and Grep and Edit calls, its tool outputs. Nothing else in the trace moves. The other two subagents stay collapsed. The orchestrator's scroll position holds.
This is the flow: scan cards, open the one that looks off, read it like a chat, close it, move on. It is not a tree walk. It is not remembering where you were. The collapsed-by-default rule is what makes it work at the scale real agents run at, where fanning out to six or ten subagents per trace is ordinary.
Tree view is still there when you want it
Nothing about transcript view removes the tree. The view dropdown flips to Tree in one click and preserves your scroll position.

Look at the two screenshots side by side. The tree tells you the shape of what happened. It does not tell you what was asked or what came back, not without expansion. To answer "what did the reviewer subagent actually say," you open the reviewer's LLM span, you open Span Output, you scroll. To compare the three subagents' outputs, you open three panes in sequence.
Transcript view collapses that work into "look at three cards, open the one that looks off." The time-to-understanding drops. The time-to-signal drops. The cognitive cost of keeping a trace in your head while you debug drops.
You want the tree when you are debugging span nesting itself: a custom observe wrapping a server-side turn, a custom MCP tool with its own child spans, a batched background job inside a turn. You want the tree when you are building out an integration and want to confirm your spans are parented correctly. The rest of the time transcript is the answer. That is why it is the default. The viewing traces page walks through both.
Why this matters for agents specifically
Request/response observability tools assume a single linear flow: request comes in, stack does work, response goes out. Agents are not that. One agent run is:
- A turn that plans.
- A tool call.
- A turn that reacts to the tool call.
- Another tool call.
- A subagent with its own five turns and ten tool calls.
- A final turn that synthesizes everything.
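The reading order above can be modeled directly. This is an illustrative event list, not Laminar's data format: tuples stand in for spans, and the renderer emits the top-to-bottom, chat-like sequence transcript view shows.

```python
def render(events: list[tuple[str, str]]) -> list[str]:
    """Flatten an interleaved run into top-to-bottom transcript lines.
    Tool calls are indented under the turn that made them."""
    lines = []
    for kind, text in events:
        indent = "    " if kind == "tool" else ""
        lines.append(f"{indent}[{kind}] {text}")
    return lines

# Hypothetical run matching the shape described above.
events = [
    ("turn", "Plan: locate the failing function first"),
    ("tool", "Read /src/fizzbuzz.py"),
    ("turn", "The modulo order looks wrong; double-check it"),
    ("tool", "Grep 'def fizzbuzz'"),
    ("subagent", "Code Reviewer: confirm the fix is correct"),
    ("turn", "Synthesize: apply the fix and report"),
]

print("\n".join(render(events)))
```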
The shape of what happened is a conversation interleaved with side effects. A conversation interleaved with side effects is what the transcript view renders. Tree views can tell you an agent made 47 calls; transcript view tells you the agent asked itself "what file should I look at first," read a file, found a bug, asked a subagent to double-check, and wrote the fix.
When you are debugging an agent that did the wrong thing, the question is almost never "what span is slow." It is "at what turn did the plan go sideways." Transcript view puts that turn on a single scrollable page.
Inline tool previews
Every tool call and every LLM turn gets a one-line preview rendered next to it, generated automatically from the call arguments or model output. "Read /sandbox/scratch/cas_demo_target/fizzbuzz.py." "Edit fizzbuzz.py." "Grep def .*." You skim a 200-tool-call trace and read it like a log. No expanding, no scrolling sideways.
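A hypothetical sketch of how such previews could be derived from call arguments. Laminar generates its previews server-side and the real rule set differs; the tool names and argument keys below are assumptions for illustration.

```python
def preview(tool: str, args: dict) -> str:
    """Produce a one-line, skimmable preview from a tool call's arguments."""
    if tool == "Read":
        return f"Read {args['file_path']}"
    if tool == "Edit":
        # Show just the filename for edits; the full path adds noise.
        return f"Edit {args['file_path'].rsplit('/', 1)[-1]}"
    if tool == "Grep":
        return f"Grep {args['pattern']}"
    # Fallback: tool name plus a truncated argument dump.
    return f"{tool} {str(args)[:60]}"

print(preview("Read", {"file_path": "/sandbox/scratch/cas_demo_target/fizzbuzz.py"}))
print(preview("Edit", {"file_path": "/sandbox/scratch/cas_demo_target/fizzbuzz.py"}))
print(preview("Grep", {"pattern": "def .*"}))
```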

The previews are generated server-side and rendered inline. They work the same way for LLM output, tool use blocks, and nested subagent cards. For the full rule set, the viewing traces docs go through every row type transcript view renders.
Try it
- Send a trace. Any integration works; the Claude Agent SDK one takes four lines.
- Open the trace. Transcript is the default tab.
- Pair it with Signals, which turn descriptions of outcomes into structured events across every transcript. Together they take you from "this run failed in an interesting way" to "here are all 47 runs that failed the same way" in a query.
Laminar is open source and self-hostable (github.com/lmnr-ai/lmnr). Transcript view ships in both the managed cloud and the OSS build.