Every AI agent session produces three things:
- The work itself—code written, analysis performed, problems solved
- The decisions that shaped it—why this approach, not that one
- The context that informed it—what the agent learned along the way
Most teams capture #1. The commit lands. The report gets delivered. The ticket closes.
Almost nobody captures #2 or #3.
After months of running agent sessions while building zeros, I stopped thinking of context limits as a problem. They're a forcing function. Every session ends—gracefully or not. The question isn't how to extend the session. The question is what from this session needs to exist independently of it.
The Difference Between "Discussed" and "In the Graph"
Here's a scenario that plays out constantly:
During a session, the agent and I work through a problem. We decide that invoices over $10K need manager approval before sending. Good decision. Makes sense for this client's risk profile.
Session ends. Where does that decision live?
If it's in the session transcript, it's tribal knowledge. Someone would have to read thousands of lines of conversation to find it. In practice, nobody will.
If it's in the Decision Graph—explicitly captured, linked to the invoice workflow, tagged with the session that created it—it's institutional knowledge. Future sessions can query it. The Ontology Sage can surface it when someone touches invoicing. It persists.
Discussed = tribal knowledge. It happened, someone remembers, good luck finding it.
In the graph = institutional knowledge. It's queryable, traceable, and injected when relevant.
This is the institutionalization gap. The delta between what gets decided and what gets recorded in a way that survives the session.
Session Attribution: "Which Session Introduced This?"
Once you start capturing decisions systematically, a new question emerges: where did this come from?
We needed to trace decisions back to their origin. Not just "Marc decided this" but "this was introduced in session X, while working on feature Y, in the context of problem Z."
So we built session attribution into the Decision Graph. Every decision carries:
- The session ID that created it
- The entities it affects (code, workflows, policies)
- The rationale captured at decision time
- Links to related decisions (supersedes, conflicts with, depends on)
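A decision record like the one above can be sketched as a small data structure. This is a minimal illustration, not our actual schema—the field and ID names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    decision_id: str
    session_id: str  # the session that introduced this decision
    rationale: str   # captured at decision time, not reconstructed later
    affects: list[str] = field(default_factory=list)        # entities: code, workflows, policies
    supersedes: list[str] = field(default_factory=list)     # links to related decisions
    conflicts_with: list[str] = field(default_factory=list)
    depends_on: list[str] = field(default_factory=list)

# The invoice-approval decision from earlier, as a record:
d = Decision(
    decision_id="dec-0042",
    session_id="sess-2024-11-07-a",
    rationale="Invoices over $10K need manager approval before sending",
    affects=["workflow:invoicing"],
)
```

Because the session ID travels with the record, answering "which session introduced this rule?" is a lookup, not an archaeology project.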
Now I can ask: "Which session introduced the rule about manager approval for large invoices?" And get an answer. With context.
This matters for governance. When an auditor asks why something works the way it does, "we discussed it at some point" isn't an answer. A traceable decision with rationale and lineage is.
The Orphan Problem
Sessions don't always end cleanly. Context limits hit. Connections drop. Someone closes the terminal.
We call these orphaned sessions—sessions that started but never went through proper wrap-up. Whatever decisions were made, whatever patterns were discovered, they're stranded in the session logs.
This week we built orphan detection. Every session now records a snapshot at start. If wrap-up never happens, the system flags it. Recovery tools can reconstruct what was decided, even from abandoned sessions.
It's not glamorous infrastructure. But it closes the gap between "we worked on it" and "it's captured."
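The detection mechanism is simple in principle: record a snapshot at start, mark it at wrap-up, and flag anything that stays unmarked. A minimal sketch, assuming file-based session snapshots (the directory layout and field names here are illustrative, not our production design):

```python
import json
import time
from pathlib import Path

SESSIONS_DIR = Path("sessions")  # hypothetical snapshot location
SESSIONS_DIR.mkdir(exist_ok=True)

def start_session(session_id: str) -> None:
    """Record a snapshot the moment a session begins."""
    snapshot = {"session_id": session_id, "started_at": time.time(), "wrapped_up": False}
    (SESSIONS_DIR / f"{session_id}.json").write_text(json.dumps(snapshot))

def wrap_up(session_id: str) -> None:
    """Proper wrap-up marks the snapshot complete."""
    path = SESSIONS_DIR / f"{session_id}.json"
    snapshot = json.loads(path.read_text())
    snapshot["wrapped_up"] = True
    path.write_text(json.dumps(snapshot))

def find_orphans(max_age_seconds: float = 3600) -> list[str]:
    """Flag sessions that started but never wrapped up within the window."""
    now = time.time()
    orphans = []
    for path in SESSIONS_DIR.glob("*.json"):
        snap = json.loads(path.read_text())
        if not snap["wrapped_up"] and now - snap["started_at"] > max_age_seconds:
            orphans.append(snap["session_id"])
    return orphans
```

Recovery tooling then takes each flagged session ID and reconstructs decisions from its logs.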
The Ontology Sage
Capturing decisions is half the problem. Surfacing them at the right moment is the other half.
The Ontology Sage is our answer. It walks the Decision Graph continuously—finding patterns across sessions, surfacing decisions when they're relevant, flagging conflicts before they cause problems.
When I start a session touching the invoice workflow, the Sage injects the relevant decisions. Not all 500 decisions we've ever made—just the ones linked to what I'm working on. Just-in-time context, drawn from institutional memory.
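The filtering step can be sketched in a few lines: given the entities a new session touches, pull only the decisions linked to them. A simplified illustration—the graph here is a flat list and the entity names are hypothetical:

```python
def relevant_decisions(graph: list[dict], touched_entities: set[str]) -> list[dict]:
    """Return only the decisions linked to entities this session is working on."""
    return [d for d in graph if touched_entities & set(d["affects"])]

graph = [
    {"id": "dec-0042", "affects": ["workflow:invoicing"],
     "rationale": "Invoices over $10K need manager approval"},
    {"id": "dec-0007", "affects": ["workflow:onboarding"],
     "rationale": "New clients get a 14-day review window"},
]

# A session touching the invoice workflow gets one decision injected, not the whole graph.
context = relevant_decisions(graph, {"workflow:invoicing"})
```

In practice the traversal follows the graph's links rather than scanning a list, but the principle is the same: relevance comes from explicit edges, not from keyword search over transcripts.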
The agents don't need infinite context windows. They need access to everything that's been institutionalized.
What This Means for AI Operations
If you're running AI agents at any scale, you're generating decisions constantly. The question is whether you're capturing them.
Most teams treat agent sessions as ephemeral. The work product matters; the session that produced it doesn't.
But the session is the context. It's where the hard decisions get made, where edge cases get resolved, where institutional knowledge gets created—or lost.
Session boundaries aren't the problem. The institutionalization gap is.
Close the gap, and your AI operations compound knowledge instead of losing it.