AI coding assistants are powerful, but they have a fundamental limitation: no persistent memory.
Every session, they start from zero. They don't know that you decided to use bcrypt instead of MD5 six months ago. They don't know that the last three times someone skipped the regression test, a bug made it to production.
We've been building a system called zeros that solves the first problem. Decisions—recorded choices with rationale—get injected into sessions at the right moment. When the AI touches authentication code, it sees "DEC-2025-108: Use bcrypt for all password hashing."
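A minimal sketch of that injection mechanism, assuming a glob-based trigger; the `Decision` fields and matching logic are illustrative, not the real zeros schema:

```python
from dataclasses import dataclass, field
from fnmatch import fnmatch

@dataclass
class Decision:
    decision_id: str   # e.g. "DEC-2025-108"
    rule: str          # the recorded choice
    rationale: str     # why it was made
    path_globs: list = field(default_factory=list)  # files that trigger injection

DECISIONS = [
    Decision(
        decision_id="DEC-2025-108",
        rule="Use bcrypt for all password hashing",
        rationale="MD5 is broken for password storage",
        path_globs=["src/auth/*", "src/accounts/passwords*"],
    ),
]

def relevant_decisions(touched_path: str) -> list:
    """Return decisions whose trigger globs match the file being edited."""
    return [
        d for d in DECISIONS
        if any(fnmatch(touched_path, g) for g in d.path_globs)
    ]

def inject(touched_path: str) -> str:
    """Render matching decisions as context to prepend to the session."""
    return "\n".join(
        f"{d.decision_id}: {d.rule} ({d.rationale})"
        for d in relevant_decisions(touched_path)
    )
```

Touching `src/auth/login.py` surfaces the bcrypt decision; touching an unrelated file injects nothing, which keeps the context window lean.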
This works. But we noticed a gap.
The Missing Half
The AI would get the rule (write regression tests for bug fixes) but skip the process (observe → reproduce → hypothesize → trace → fix → verify). It knew what to do but not how to do it properly.
So we added a parallel system: Standard Operating Procedures.
- Decisions = what we decided (facts, constraints, rules)
- SOPs = how we do things (procedures, sequences, steps)
Both solve the same problem: institutional knowledge that doesn't survive context windows.
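One way to see the split is as two record types that differ only in payload shape, so a single injection pipeline can serve both. This is an illustrative sketch, not the real zeros schema:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    decision_id: str
    rule: str     # a fact or constraint: "Use bcrypt for password hashing"

@dataclass
class SOP:
    sop_id: str
    steps: list   # an ordered procedure: ["observe", "reproduce", ...]

def render(record) -> str:
    """Render either record type as injectable context."""
    if isinstance(record, Decision):
        return f"{record.decision_id}: {record.rule}"
    return f"{record.sop_id}: " + " -> ".join(record.steps)
```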
Guardrails, Not Workflows
Here's the insight that made this click: SOPs aren't user workflows. They're LLM control infrastructure.
The AI is the one that skips steps, makes assumptions, takes shortcuts. SOPs are guardrails that enforce the processes we've already figured out.
Our bug-fix SOP has 8 steps. Step 2 is "Reproduce"—and it's a gate. The AI can't proceed to hypothesizing until it's actually triggered the bug. This prevents the classic failure mode: confidently "fixing" something that was never actually broken.
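A gated runner like that can be sketched in a few lines. The first six step names come from the sequence above; the post mentions eight steps but names only six, so the last two here are assumptions, as is the gate mechanism itself:

```python
class GateError(Exception):
    pass

class SOPRunner:
    # First six steps from the bug-fix SOP; the last two are assumed.
    STEPS = ["observe", "reproduce", "hypothesize", "trace",
             "fix", "verify", "regression-test", "document"]

    def __init__(self):
        self.completed = set()

    def complete(self, step: str, evidence: str = ""):
        idx = self.STEPS.index(step)
        # Every earlier step must already be done: no skipping ahead.
        missing = [s for s in self.STEPS[:idx] if s not in self.completed]
        if missing:
            raise GateError(f"cannot do {step!r} before {missing}")
        # "reproduce" is a hard gate: it demands evidence the bug fired.
        if step == "reproduce" and not evidence:
            raise GateError("reproduce requires evidence the bug was triggered")
        self.completed.add(step)
```

The point of the `evidence` parameter is that the gate is checkable: an assistant that claims to have reproduced the bug must show the failing output before the runner unlocks "hypothesize".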
Injection, Not Instruction
The key is contextual injection. When the system detects bug-fixing work, it injects both the relevant decisions AND the bug-fix SOP. The AI gets the rules and the playbook.
This isn't just telling the AI what to do. It's giving the AI the same institutional knowledge that a senior team member would have—the accumulated wisdom of "here's how we do things around here."
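The rules-plus-playbook assembly can be sketched as a small dispatcher. The topic keys, the decision ID `DEC-2025-041`, and the lookup tables are all hypothetical:

```python
# Hypothetical stores keyed by detected work type.
DECISIONS_BY_TOPIC = {
    "bug-fix": ["DEC-2025-041: Every bug fix ships with a regression test"],
    "auth": ["DEC-2025-108: Use bcrypt for all password hashing"],
}

SOPS_BY_TOPIC = {
    "bug-fix": "observe -> reproduce -> hypothesize -> trace -> fix -> verify",
}

def build_context(topic: str) -> str:
    """Assemble both the rules AND the playbook for this kind of work."""
    parts = list(DECISIONS_BY_TOPIC.get(topic, []))
    sop = SOPS_BY_TOPIC.get(topic)
    if sop:
        parts.append(f"SOP[{topic}]: {sop}")
    return "\n".join(parts)
```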
Why This Matters Beyond Our Use Case
Any team using AI assistants faces the same challenge: how do you preserve institutional knowledge across sessions?
The answer is the same for both facts and processes: make them explicit, store them persistently, inject them contextually.
- Explicit: If it's not written down, the AI doesn't know it
- Persistent: Stored outside the context window, in a knowledge graph
- Contextual: Injected when relevant, not dumped all at once
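A toy sketch of the three properties together, under the assumption that a tagged JSON file stands in for the knowledge graph: records are explicit (written down), persistent (on disk, outside any context window), and contextual (fetched by tag, never dumped wholesale):

```python
import json
import os
import tempfile

def save(path: str, records: list):
    """Persist records outside the context window."""
    with open(path, "w") as f:
        json.dump(records, f)

def recall(path: str, tag: str) -> list:
    """Fetch only the records relevant to the current work."""
    with open(path) as f:
        return [r["text"] for r in json.load(f) if tag in r["tags"]]

path = os.path.join(tempfile.mkdtemp(), "knowledge.json")
save(path, [
    {"text": "Use bcrypt for password hashing", "tags": ["auth"]},
    {"text": "Bug fixes require a regression test", "tags": ["bug-fix"]},
])
```

A real knowledge graph adds relationships between records; the shape of the retrieval, filter by relevance rather than dump everything, stays the same.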
The AI becomes more capable not by becoming smarter, but by having access to your accumulated organizational wisdom.
What processes does your team follow that exist only in people's heads? Those are your SOPs waiting to be captured.