intuto.build()

We Turned Our Developer Workflow Into An Orchestration System

Written by Aaron Leggett | May 7, 2026 9:59:59 PM

This is Part 2 of a 3-part series. If you missed Part 1, start with How We Encoded 10 Years of Tribal Knowledge Into AI Instructions.

In the first part of this series, we discussed our copilot-instructions.md file. While that file serves as the "institutional brain" ensuring our patterns are enforced, a brain needs a nervous system to actually move the boat forward. For a long time, we used AI like most developers: as a reactive chatbot. You’d open a window, paste a block of code, and ask for a feature or bug fix. It’s a functional starting point, but in a 10-year-old codebase, you eventually hit a ceiling. As complexity grows, the context window fills with noise, architectural patterns begin to drift, and you find yourself constantly resetting the session just to keep the AI on the rails.

The natural progression was to move from "chatting" to orchestration. We realized that the real value wasn't in asking the AI to write code, but in using it to architect the process itself. By building specialized agents and templates tailored to our stack, we turned the AI from a simple assistant into a governed development pipeline; we stopped asking it to write features and started having it build its own agents and templates.

Our Current Approach

Before we dive into the mechanics, a disclaimer: this isn't a "how-to" manual or a claim that we’ve found the one true way to develop with AI. This is simply where we’ve landed for now. Our process is ever-adapting, and what works for our 10-year-old hybrid stack might look entirely different in your environment. This is a snapshot of our journey, not a destination.

Story-Driven Scoping

As we refined this system, we found that the most effective way to maintain precision was to focus on granular, actionable units of work. Instead of feeding a broad project scope into the system, we now write a specific scope.md for every story or task.

This story-driven approach eliminates context noise. By working through a project in these logical, isolated chunks, we ensure every piece of code is built with 100% focus and zero interference from unrelated features. It allows the agents to stay in their lane and keeps the developer’s review focused on a single, coherent change at a time.
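A per-story scope.md in this workflow might look something like the skeleton below. This is our guess at a reasonable shape, not Intuto's actual template; the section headings are illustrative.

```shell
# Write a hypothetical scope.md skeleton to a temp file; headings are illustrative.
scope_file="$(mktemp)"
cat > "$scope_file" <<'EOF'
# Scope: <story title>

## Goal
One or two sentences describing the single change this story delivers.

## In Scope
- The specific database/service/UI surfaces the agents may touch.

## Out of Scope
- Adjacent features the agents must not modify.

## Hidden Complexities (agent-discovered)
- Side effects in legacy services, event/audit nuances, etc.

## Acceptance Criteria
- Observable behaviour that marks the story as done.
EOF
```

Keeping an explicit "Out of Scope" section is what lets the agents "stay in their lane": anything not listed as in scope is off-limits by default.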

The 6-Step Orchestration Pipeline

This is not "Auto-Dev"—it is Augmented Development. We don't just review the final output; we watch every change the agent makes and review it before committing. Because context windows eventually reach a saturation point where the AI "loses" everything, we use specific commit breakpoints to lock in progress and reset the context window whenever the noise becomes too high:

  • 1. Scope Iteration: We write a specific scope.md for the current story. An agent with read-access to the codebase iterates on this to identify "hidden" complexities—like side effects in a legacy service—before we begin.
    Commit: scope.md. 

  • 2. The Planning Agent: This agent generates *-tasks.md files for specialized agents such as Database, Service, or UI. Architectural decisions are made here, not mid-code.
    Commit: Scaffolding.

  • 3. The Checkpoint Protocol (Execution): This is peer-programming at high speed. The agent presents an implementation plan; you approve or correct it; it then executes task-by-task. (Note: for teams practicing TDD, the Testing Agent can run first here, building the test suite for the implementation agents to solve against.)
    Commit: After each agent completes.

  • 4. Testing Agent: If not done first for TDD, a specialized agent builds unit and integration tests to verify the logic.
    Commit: Testing Agent.

  • 5. The Local Quality Gate: A Code Reviewer Agent validates the work locally against our standards. This catches errors before a human sees a PR and before it even hits our secondary AI reviewers in GitHub Actions.
    Commit: Review Agent.

  • 6. Archiving & Sync: An agent updates relevant documentation files to keep them in sync with the code. We then move the project artifacts into a dedicated folder.
    Commit: Archive.

Only after these six commits do we open the Pull Request.
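The commit breakpoints above can be sketched as a single script. The commit messages here are illustrative, the real workflow runs an interactive agent session between breakpoints, and the helper function only echoes so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Hypothetical sketch of the six commit breakpoints; not Intuto's actual tooling.
set -e

commit_breakpoint() {
    # In a real repo this would be: git add -A && git commit -m "$1"
    echo "commit breakpoint: $1"
}

commit_breakpoint "Scope: add scope.md for story"         # 1. Scope iteration
commit_breakpoint "Plan: scaffold *-tasks.md files"       # 2. Planning agent
commit_breakpoint "Implement: Database agent tasks"       # 3. One commit per
commit_breakpoint "Implement: Service agent tasks"        #    implementation agent
commit_breakpoint "Implement: UI agent tasks"
commit_breakpoint "Test: unit and integration coverage"   # 4. Testing agent
commit_breakpoint "Review: local quality gate findings"   # 5. Code reviewer agent
commit_breakpoint "Archive: move artifacts, sync docs"    # 6. Archiving & sync
```

Each breakpoint is also a natural place to reset a saturated context window: the committed files, not the chat history, carry the state forward.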

The Apprentice Model

We treat our agents like apprentices, not black boxes. Every time we intervene to correct a hallucination or steer an agent back toward a specific architectural pattern, we summarize that moment into a learnings.md file specific to that task. We then use an agent to update the specialized agent’s own template with these insights and "compact" it.

This ensures the system literally learns as it goes. For example, if our Service Agent attempts to use a semantically incorrect event for a deletion task—missing the nuance of our audit logs—we capture that correction. The next time any agent (or human) touches that domain, the "apprentice" already has that experience encoded into its identity. You aren't just shipping a feature; you are curating the institutional memory of your codebase.
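Mechanically, the simplest form of that feedback loop is appending the captured correction to the agent's template. This is a minimal sketch: the real workflow uses an agent to merge and "compact" the template, and service-agent.md is a hypothetical file name.

```shell
#!/bin/sh
# Sketch: fold a task's learnings.md back into a specialized agent's template.
set -e
work="$(mktemp -d)"

# A correction captured during the task (example content is illustrative).
cat > "$work/learnings.md" <<'EOF'
- Deletions must raise the audit-log deletion event, not a generic update event.
EOF

# Append the learnings under a dedicated section of the agent template.
printf '\n## Learned corrections\n' >> "$work/service-agent.md"
cat "$work/learnings.md" >> "$work/service-agent.md"
```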

Archiving for the Future

Once a story is validated and the code is ready for a Pull Request, we don't just delete our trail. We archive the scope.md, coordination.md, learnings.md, and the results from the local codereview.md into a dedicated project folder. While we purge the temporary agent task files to keep the working directory clean, we keep these core artifacts as a permanent record.

This creates a searchable, human-readable history of the intent behind the code. The next time you revisit a feature six months later, you aren't just looking at lines of C#; you're looking at the original scope, the hurdles encountered, and the specific lessons learned during its orchestration.
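A minimal sketch of that archiving step, assuming a hypothetical stories/<id>/ working folder and archive/ root (the layout and story id are our invention for illustration):

```shell
#!/bin/sh
# Sketch: keep core artifacts as a permanent record, purge temporary task files.
set -e
work="$(mktemp -d)"
story="$work/stories/1234"
archive="$work/archive/1234"

# Demo setup: a story folder containing core artifacts and temporary task files.
mkdir -p "$story" "$archive"
touch "$story/scope.md" "$story/coordination.md" \
      "$story/learnings.md" "$story/codereview.md" \
      "$story/database-tasks.md" "$story/ui-tasks.md"

# Move the core artifacts into the permanent archive...
for f in scope.md coordination.md learnings.md codereview.md; do
    mv "$story/$f" "$archive/$f"
done

# ...and purge the temporary agent task files from the working directory.
rm -f "$story"/*-tasks.md
```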

Upgrading the Work

This system hasn't replaced our seniority; it has amplified it. We spend less time on repetitive boilerplate and more time on high-level architecture and the "Checkpoint Protocol."

You are peer-programming with a system that understands your "tribal knowledge" because it helped you build the documentation for it. It’s a shift from being the person swinging the hammer to the architect ensuring every block is laid perfectly.

This is Part 2 of a 3-part series on AI Orchestration.

Now that we’ve covered the "brain" and the "nervous system," the final question is: How do people actually use this? In the final part, we’ll discuss the human factor—how to shift from "coder" to "orchestrator," how we got developers to trust the system, and why this workflow makes engineering more engaging, not less.

Part 3: Coming Soon

Are you still copy-pasting prompts, or have you started building your own orchestration? Tweet us on X with #intutobuild and tag us @intutohq!

Authored by Aaron Leggett, Lead Developer at Intuto. Photo by Daniil Komov.