Colburn had a nine-month head start
I built an agent orchestrator and a learning academy in June 2025, before there was a shape for either. By the time the shape arrived, I was glad I hadn't waited.
15 April 2026 · 8 min read · colburn, agentic-ai, orchestration, builds
OpenClaw shipped its orchestration panel earlier this year. Three tabs across the top — Analyze, Plan, Execute — and a row of controls below: pause, resume, cancel. There's a specific kind of joke the universe saves for people who built early. You watch the keynote, you nod at the tabs, you notice the buttons, you realise you wrote that panel nine months ago in a folder called agent-server, on a project called Colburn. Colburn was a working name my brother and I had picked for something else when we were kids, and it had been sitting in my projects folder waiting for a use.
Colburn itself is a place in North Yorkshire my brother and I visited exactly once as kids, on some family drive neither of us remembers the purpose of. It's a garrison town next door to Catterick, and its name means something like "cold stream" in a mix of Old English and Old Norse — the beck there ran through a coal seam, which is how the place got christened and how my brother and I first encountered the word. Nine-year-olds collect words the way older relatives collect postcards, and Colburn was one of the words we brought home. That's the whole origin of the project name. Most project names are that, if we're honest. Half the infrastructure running production somewhere is named after someone's cat, someone's dog, or a pub someone's uncle used to drink in.
I was not the right person to build an agent orchestrator in June 2025. Then again, a lot of people weren't the right people, and a lot of us were writing one anyway. "Agent orchestration" was having a moment. Every Discord had a thread. Every demo video had two agents chatting on a green background. I wanted three. I wanted three agents that could pause and resume and cancel and — the part that kept me up — tell me afterwards who had decided what.
Nothing I could download did that. So I wrote it.
Colburn was two projects in a trench coat. The trench coat was TypeScript, held together by a refusal to start over. The front half was the orchestrator: a three-stage pipeline that took a sentence of English and turned it into a workflow you could watch run. The back half was a learning academy — every agent that ran got a file of its own, and the files accumulated into something like experience. I'll tell you the front half first, because that's the one that shipped.
The shape of the orchestrator fits in three endpoints.
```ts
// server/routes/orchestration.ts
fastify.post('/orchestration/analyze', ...) // intent → structure
fastify.post('/orchestration/plan', ...)    // structure → phases
fastify.post('/orchestration/execute', ...) // phases → run
```
Analyze parses the user's input into intent, domain, complexity, and requirements. Plan picks agents and builds phases with cost and time estimates, optionally generating alternatives. Execute runs the plan in one of three modes — dry-run, full, step-by-step — and streams progress over a WebSocket. Around the core there are pause/resume/cancel endpoints, a status endpoint that can return LLM request metrics on demand, an analysis history table, a small explainability route that converts a structured intent JSON blob into plain English for a stakeholder who doesn't want the JSON, and a dedicated LINGUIST path for deeper NLP analysis when the input deserves it.
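The data flow between the three stages can be sketched as plain types plus a driver. This is a minimal sketch, not Colburn's actual interfaces: the type names, the stub bodies, and the string events are all illustrative, and the real stages call out to LLMs and stream over WebSockets.

```typescript
// Illustrative shapes for the analyze → plan → execute pipeline.
// The real types carry more: alternatives, cost/time estimates, metrics.
type Analysis = { intent: string; domain: string; complexity: 'low' | 'medium' | 'high' }
type Phase = { name: string; agents: string[]; estimatedCostUsd: number }
type Plan = { phases: Phase[] }
type RunMode = 'dry-run' | 'full' | 'step-by-step'

function analyze(input: string): Analysis {
  // Stub: the real stage uses an LLM to extract intent, domain, complexity.
  return { intent: input.trim(), domain: 'general', complexity: 'low' }
}

function plan(analysis: Analysis): Plan {
  // Stub: the real stage selects agents and builds phased estimates.
  return { phases: [{ name: 'main', agents: ['worker-1'], estimatedCostUsd: 0.01 }] }
}

function execute(p: Plan, mode: RunMode): string[] {
  // Dry-run reports what would happen; full and step-by-step actually run.
  return p.phases.map(ph => `${mode}: ${ph.name} via ${ph.agents.join(', ')}`)
}

const events = execute(plan(analyze('summarise the quarterly report')), 'dry-run')
console.log(events) // one event line per phase
```

The point of the shape is that each stage's output is the next stage's input, so any stage can be re-run, inspected, or swapped without touching the others.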
None of that is clever in April 2026. It was not clever in June 2025 either. It was absent. That was the only reason to write it.
The one piece I'm still proud of is strategy selection. Given a set of agents and an analysis, the orchestrator picks between four modes — sequential, parallel, hub_spoke, and collaborative — and that choice is where the coordination actually lives.
```ts
// server/core/core-orchestrator.ts
private selectExecutionStrategy(
  agents: AgentSelection[],
  analysis: Analysis,
  request: WorkflowRequest,
  coordinationPlan?: CoordinationPlan,
  mixedTeamResult?: TeamIntegrationResult,
): 'sequential' | 'parallel' | 'hub_spoke' | 'collaborative' {
  if (agents.length === 1) return 'parallel' // direct execution
  // ...role hierarchy, escalation rules, mixed-team checks
}
```
A one-agent plan is trivially parallel — it just runs. Two agents with a clear producer/consumer relationship go sequential. A planner plus three workers go hub-spoke. Anything genuinely peer-to-peer — a code review by three agents with different biases, a debate between a security specialist and a performance specialist — is collaborative. The function itself is boring. The fact that there is a function — that "how do these agents talk to each other" is a choice the system makes at plan time, rather than a shape the prompt accidentally falls into — is the thing I would keep if I lost everything else.
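The four cases above reduce to a short decision function. This is a stand-in sketch of the heuristic as described, not Colburn's actual selector, which also weighs role hierarchy, escalation rules, and mixed teams; the `Agent` shape and `consumes` field are mine.

```typescript
// Minimal strategy selection mirroring the cases in the prose.
type Strategy = 'sequential' | 'parallel' | 'hub_spoke' | 'collaborative'
type Agent = { id: string; role: 'planner' | 'worker' | 'peer'; consumes?: string }

function selectStrategy(agents: Agent[]): Strategy {
  // One agent: no coordination needed, it just runs.
  if (agents.length === 1) return 'parallel'
  // Two agents with a clear producer/consumer link: run them in order.
  if (agents.length === 2 && agents[1].consumes === agents[0].id) return 'sequential'
  // One planner fanning work out to workers: hub and spokes.
  const planners = agents.filter(a => a.role === 'planner')
  if (planners.length === 1 && agents.every(a => a.role !== 'peer')) return 'hub_spoke'
  // Genuine peers (reviews, debates): collaborative.
  return 'collaborative'
}
```

A debate between a security peer and a performance peer falls through to `collaborative`; a planner plus two workers lands on `hub_spoke`.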
The other piece worth naming is the fallback. The intelligent agent selector starts by searching four pools: system agents, user-created agents, meta-created specialists, and a general pool. If nothing in any pool fits the task, it doesn't fail. It hands off to NEXUS — the meta layer — to create a new specialist.
```ts
// server/services/agent-selection-service.ts
const existingAgents = await this.findExistingAgents(analysis)
if (existingAgents.length > 0) {
  return existingAgents
}
// No existing agents found — hand off to NEXUS
return await this.requestNexusAgentCreation(analysis)
```
That five-line branch is the bit I kept coming back to. The pool isn't a list; it's a behaviour. If the match is weak enough, the system grows. Most orchestrators I looked at that summer treated their roster as fixed at boot. Colburn treated it as a function of what had already been asked of it.
Here's the one honest gap. Four pools is the design. Two pools is the data path today. discoverAgentsFromAllPools returns empty arrays for user_agents and general_agents, and populated arrays for the other two. The system works because the NEXUS fallback fills in the space the empty pools leave behind. It's a design that's true at the interface and incomplete at the storage layer, and I know which of those I'd rather have. The choice that's easier to undo is the one that's safe to leave half-built.
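The two-of-four-pools state can be sketched like this. The pool names follow the ones above, but the lookup bodies and the `selectOrCreate` wrapper are illustrative, and the real service is async where this sketch is synchronous.

```typescript
// Sketch of discovery across four pools, two of which are not yet
// wired to storage, and the NEXUS fallback that covers the gap.
type Pool = 'system_agents' | 'user_agents' | 'meta_specialists' | 'general_agents'

function discoverAgentsFromAllPools(task: string): Record<Pool, string[]> {
  return {
    system_agents: task.includes('review') ? ['REVIEWER'] : [], // wired
    meta_specialists: [],                                       // wired, empty here
    user_agents: [],    // interface exists, storage path doesn't yet
    general_agents: [], // same
  }
}

function selectOrCreate(task: string): string[] {
  const pools = discoverAgentsFromAllPools(task)
  const found = Object.values(pools).flat()
  if (found.length > 0) return found
  // Nothing matched in any pool: instead of failing the request,
  // hand off to the meta layer to create a new specialist.
  return ['NEXUS:new-specialist:' + task]
}
```

The behaviour at the interface is the same whether four pools or two are populated, which is exactly why the storage gap was safe to leave half-built.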
The orchestrator can pick between four strategies and four pools and keep a history of every analysis, and at the end of a run it knows more about the agents it used than it did at the start. The question is what to do with that knowledge.
I called the second half of Colburn the Agent Growth Academy, because "Agent Learning Academy" sounded like a Coursera subsidiary. Growth is only fractionally less corporate, but it was the best I had at midnight. The idea is old and slightly embarrassing to type out: if an agent is going to outlive the conversation that created it, it should get better at its job. Every run, the academy records milestones (what task was completed, how), adaptations (what behaviour changed as a result), and reflections (what the agent would do differently next time). A specialist's performance is compared to a baseline LLM on three axes — speed, quality, cost — and the gap is the thing the academy watches.
```ts
// server/routes/academy.ts
fastify.post('/academy/agents/:id/milestones', ...)
fastify.post('/academy/agents/:id/adaptations', ...)
fastify.post('/academy/agents/:id/reflections', ...)
fastify.get('/academy/agents/:id/performance', ...)
fastify.get('/academy/dashboard', ...)
```
The first four routes work. The fifth returns placeholder values. I had the service layer doing the right things — tracking milestones and adaptations, computing specialist performance against a baseline — before I had the dashboard that would make those values visible, and by the time I got to the dashboard the rest of the year had happened. The academy is the part of Colburn that is most honest about how new this problem is. We don't know what "my agent got 7% better this week" is supposed to look like on a page. We know how to store it. The rendering is still waiting for a shape.
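The three-axis comparison against a baseline is simple arithmetic. Here is a guess at its shape; the metric names, the sign convention, and the normalisation are mine, not Colburn's actual scoring.

```typescript
// A hypothetical gap computation on the academy's three axes.
// Convention: positive gap = the specialist beats the baseline.
type Perf = { speedSec: number; quality: number; costUsd: number } // quality in [0, 1]

function performanceGap(specialist: Perf, baseline: Perf) {
  return {
    // Fraction of baseline time saved (0.4 = 40% faster).
    speed: (baseline.speedSec - specialist.speedSec) / baseline.speedSec,
    // Absolute quality delta on the shared [0, 1] scale.
    quality: specialist.quality - baseline.quality,
    // Fraction of baseline cost saved.
    cost: (baseline.costUsd - specialist.costUsd) / baseline.costUsd,
  }
}

const gap = performanceGap(
  { speedSec: 12, quality: 0.82, costUsd: 0.03 }, // specialist run
  { speedSec: 20, quality: 0.75, costUsd: 0.05 }, // baseline LLM run
)
// gap.speed and gap.cost are both 40% savings; gap.quality is a small positive delta
```

Storing that per run is the easy part, which is the point the paragraph above makes: the numbers exist, and it's the page that renders them that still lacks a shape.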
I didn't finish Colburn. The market did. Over the nine months between June 2025 and April 2026, three teams shipped orchestration products that look like the orchestration product I was writing, and one of them is probably better than mine. I should feel scooped. I don't, or not mostly, and the reason is the only thing worth taking from this piece.
You cannot read your way to knowing what an orchestration API should look like. You cannot read your way to understanding why hub_spoke is a different animal from collaborative, or why the NEXUS fallback is the interesting part of agent pooling and pooling itself is the plumbing. You have to write the bad version first. Colburn was the bad version — or the first honest version, if I want to be kinder — and when OpenClaw's panel appeared on Tuesday I could look at it and immediately see three decisions they'd made differently and why, and two they'd made the same and probably for the same reasons. That isn't a consolation prize. It's the only way I know to have an opinion about a thing.
The demoscene has a phrase for the version of a demo nobody sees. Intro zero — the one you write to figure out what the tools even are, before you write the one people will watch. Mostly they're thrown out. Occasionally a trick from the zeroth version survives into the released intro and becomes the signature of the group. Colburn is, at best, the intro zero for whatever the agent year turns into. The NEXUS fallback is the trick I'd like to carry forward. The academy's milestone → adaptation → reflection cycle is the shape I haven't finished thinking about.
My brother used to buy magazines with type-in BASIC in the back. You'd spend a Saturday copying fifteen pages of DATA statements into the BBC Micro, run it, realise it was a worse version of a game that had already shipped on cartridge, and do it again the next month. He said the point wasn't the game. The point was that by the end of the year your hands knew what a game was shaped like.
I built Colburn because by June 2025 my hands knew what a two-agent demo was shaped like, and I wanted to know what a three-agent system was shaped like, and the only way to find out was to type it in. I'd do it again. I'd do it if OpenClaw had already existed. You don't build the thing that already exists because somebody is paying you to; you build it because you can't afford to have opinions about it otherwise.
Colburn is still sitting in agent-server. The academy dashboard still returns placeholders. The user and general pools are still empty. None of that bothers me more than it should. The hands know what it's shaped like now.