In an earlier blog post, I asked: Should you version control your prompts? It was a useful question at the time, but today I want to shift the focus. Perhaps it is not your prompts that most need version control. Instead, it may be the context.
Why Context Matters
When generating code with AI, the context is everything. Context determines what the system can see, what it can remember, and what it can base its reasoning on. At some point, though, context begins to slip out of control:
Projects grow too large or too complex.
There is too much vital information to fit in the available context window.
Business models of code generation platforms force context trade-offs (balancing cost per user with compute budgets).
For most casual users, this isn’t a problem. The platform delivers apps quickly, easily, and often with surprisingly good quality. But when building real-world applications with complex features, losing context becomes a critical issue.
The Case for Version Controlling Context
That is why we now believe it is the context, above all, that must be version controlled.
Storing context is not just about saving everything into one large repository. That would quickly become unmanageable. Instead, we need tools and methods to:
Summarize the context rather than duplicate it.
Keep track of what has been built and why.
Record design decisions and dependencies.
Capture the reasoning behind features, not just the features themselves.
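What might one such version-controlled unit of context look like? Here is a minimal sketch, purely hypothetical in its field names and format, of a structured log entry that records a decision, the reasoning behind it, and its dependencies, serialized so it diffs cleanly in version control:

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class ContextEntry:
    """One version-controlled unit of context: what was built, why,
    and what it depends on -- not the code itself."""
    summary: str                                      # what was built
    reasoning: str                                    # why it was built this way
    dependencies: list[str] = field(default_factory=list)
    decided_on: str = ""                              # ISO date, for traceability

entry = ContextEntry(
    summary="Moved user sessions to server-side storage",
    reasoning="Client-side tokens leaked state across tabs",
    dependencies=["auth-service", "redis"],
    decided_on=str(date(2024, 1, 15)),
)

# JSON keeps the entry human-readable and diff-friendly in a repository.
print(json.dumps(asdict(entry), indent=2))
```

The point of the sketch is the shape, not the schema: the reasoning travels with the summary, so a future session (human or AI) can see not just what exists but why.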
This is familiar to any traditional developer. It is basic planning and documentation. But with AI-assisted code generation, the prompts, the outputs, and the context provided to the AI become just as important as the code itself.
The Aim: Small, Tidy, and Useful Context
The goal should be to keep context as small and tidy as possible while still being useful. This helps in two ways:
Efficiency – A lean context saves tokens and improves performance.
Clarity – A clean summary avoids confusion, drift, and duplication.
But this always involves a trade-off: more context improves accuracy but raises cost, while less context reduces cost but increases the risk of drift.
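To make the trade-off concrete, here is a toy calculation with entirely made-up numbers (no platform's real pricing): the per-request cost of shipping a full context versus a lean summary.

```python
def context_cost(tokens: int, price_per_1k: float = 0.01) -> float:
    """Estimated cost of sending `tokens` of context with each request.
    The rate is a placeholder for illustration only."""
    return tokens / 1000 * price_per_1k

# A 100k-token full context vs. a 5k-token summary, per request:
full = context_cost(100_000)   # 1.00 in hypothetical units
lean = context_cost(5_000)     # 0.05
print(f"full: {full:.2f}, summarized: {lean:.2f}")
```

Because this cost recurs on every request, a twenty-fold difference in context size compounds quickly, which is exactly why the decision of how much to spend should arguably sit with the user.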
Today, most AI code generation platforms make this choice for you. They define the limits, and you work within them. We believe it would be more practical, efficient, and fair if the user could decide how much to invest in context quality.
More Than Just Repositories
The solution is not as simple as “just add it to the repo.” Context management will require:
Multiple agents working together to handle summarization, indexing, and retrieval.
Tools that make context usable, not just stored.
Memory systems that go beyond files, keeping track of the purpose and logic behind code, not just the code itself.
Only then can context serve its true role: helping users build the software they intend to build, not just the software the AI can remember at the moment.
A Problem We Must Solve
Context overflow is not a minor bug. It is one of the must-solve problems in AI-assisted code generation. Ignoring it will not make it disappear. At Gadlet, we are paying extra attention to this challenge because we see it as central to the future of AI-built software.
Your Experience?
Have you coded with AI? How did context behave for you?
Did the system stay on track?
Or did it drift and forget decisions you thought were settled?
We want to learn from your experiences. Is this just hype, or is context version control truly the problem we need to solve?
Great apps start with great context.