How We Encoded our Tribal Knowledge Into AI Instructions

Let’s be honest: we suck at documentation.

We used to have documentation in Confluence. It was detailed, well-structured, and—like every other wiki—utterly useless because nobody ever read it. If you ask any developer on our team to spend two hours updating the wiki, you’ll get a wall of excuses. But if you ask them to fix a bug or ship a feature, they’re all over it.

The problem is that while we’re busy shipping, we’re also building up "tribal knowledge." That stuff only lives in our heads. We’re constantly fighting the "bus factor"—if one of us gets hit by a bus, we’re screwed. The knowledge isn’t in the codebase; it’s in the person. We end up with pattern drift—code that looks like it was written by five different people who never talked to each other.

This is Part 1 of a 3-part series on how our small SaaS team architected a full AI-driven dev workflow. We realized that if we wanted to scale our output without scaling our headcount, we had to stop relying on human memory and stop writing wikis nobody reads. We needed to encode our standards into something that actually enforces them.

The Fix: A Machine-Readable Constitution

We wanted our coding standards to be active, not static. We started putting our "rules of the road" into a file that the AI models actually read. Depending on your tool, this might be:

  • GitHub Copilot: A .github/copilot-instructions.md file.
  • Claude: A CLAUDE.md file or project-specific instructions.
  • Other Tools: Check your AI agent's documentation for specific "system prompt" or "custom instructions" files.

It’s not documentation; it’s a list of non-negotiable architectural constraints. It covers how we handle multi-tenancy, our DDD patterns, audit logging, and formatting. When our AI agent helps build a new service or reviews code, it’s not guessing; it’s reading the constitution. That keeps our code aligned with our patterns, catching drift before it even reaches a commit.
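To make this concrete, here's the shape such a file can take. This is a hypothetical excerpt, not our actual file; the specific tenant-scoping, DDD, and logging rules below are illustrative:

```markdown
# Coding constitution (hypothetical excerpt)

## Multi-tenancy
- Every database query MUST be scoped by the current tenant ID. Never query across tenants.

## Domain-driven design
- Business logic lives in domain entities and services, never in controllers.
- The application layer talks to repository interfaces, not to the data store directly.

## Audit logging
- Any mutation of customer data MUST emit an audit event with actor, action, and timestamp.

## Formatting
- Follow the repo's existing linter config; do not introduce new style rules inline.
```

The point is that each line is a constraint the agent can check code against, not an explanation it has to interpret.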

We Cheated (And You Should Too)

If you think we sat down and spent weeks drafting these files, you’re wrong. We didn't write them. We made the AI write them.

We pointed the AI at our repo and told it to reverse-engineer our patterns. We asked it to look at our code and tell us what we were doing, not just what we said we were doing. Then we iterated. Now, when we update our workflow, we don't just update the code; we tell the agent to update its own instructions to match.
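A reverse-engineering prompt doesn't need to be clever. Something like this sketch (our actual wording differed) is enough to get a first draft:

```text
Read through this repository and summarize the architectural patterns the
code actually follows: layering, naming conventions, how multi-tenancy is
enforced, how errors and audit logs are handled. Write the result as a
concise instructions file I can give to an AI coding agent. Flag any
places where the code contradicts itself.
```

The "flag contradictions" line matters: it surfaces the pattern drift you didn't know you had, which is exactly the tribal knowledge you're trying to capture.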

It’s meta. It works. And it saves us from having to manually maintain a doc that would quickly be outdated anyway.

The "Context Bloat" Trap

We learned a hard lesson early on: The AI isn't an encyclopedia.

Our first attempt was bloated. We stuffed the file full of rules, the AI started hallucinating, and the output quality tanked. Why? Because we were overloading the context window before the model had even read our prompt.

Learn to structure your instructions to point to where the context lives, not hold the context itself. Keep the instructions lean, keep them precise, and use them to direct the AI to the right files for specific logic.
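In practice, that means the instructions file holds pointers, not payloads. A hypothetical lean version (the file paths here are illustrative, not from our repo):

```markdown
## Where the context lives
- Multi-tenancy rules: see `docs/tenancy.md` and the tenant-scoped repository base class
- DDD conventions: follow the existing structure under `src/Domain/`
- Audit logging: copy the reference implementation in `src/Auditing/`

Only read these files when the task touches that area.
```

The agent pulls a file into context only when the task needs it, instead of paying for every rule on every request.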

Also, watch the bill. With LLM usage-based billing becoming the norm, filling the context window with your instructions file is a great way to burn your budget. Managing context isn't just a technical challenge anymore; it's a cost-saving one.

Just Start

Stop trying to design the perfect prompt-engineering strategy. You’re overthinking it.

Grab an hour, get your hands dirty, and ask the AI to map out your own patterns for you. You don't need a formal design session to get this running. You just need to start typing. The architecture will reveal itself while you iterate with the AI that is helping you build it.


What's your team's biggest "tribal knowledge" risk? The pattern only one person knows? The code everyone is terrified to touch? Tweet us on X with #intutobuild and tag us @intutohq!


Authored by Aaron Leggett, Lead Developer at Intuto.
