Jan 15, 2026 · 9 min · Artificial Intelligence, Public Sector, Digital Transformation, Governance
"Build or buy" in the public sector: AI is changing the rules of the game
Artificial intelligence reduces friction, shifts the bottleneck to governance, and reshapes the balance between internal teams and vendors
There is a dilemma that shows up almost every time we talk about technology in public administration: build internally or hire outside. For years, that question was framed as a strategic choice but, in reality, it was often simpler - internal teams with the capacity to build with speed, consistency, and security simply did not exist, so “buy” (or contract) was the natural route, even when that meant accepting poorly fitted solutions, prolonged dependencies, and a certain fatalism of “it is what it is.”
What is happening now, with artificial intelligence, is more disruptive than it looks at first glance precisely because it changes the premise that held the model together: the idea that building is always expensive, slow, and reserved for large, highly specialised teams. AI does not solve everything, does not replace engineering, and does not erase the need for governance, but it is reducing the cost of turning an idea into a prototype and a prototype into a functional MVP.
It is not only because code gets written faster. It is because, for the first time in a long while, “build” can move from a theoretical option to a practical one, with smaller teams - provided they have the right people.
The democratisation of building
When people say AI democratises development, some imagine a caricatured scenario where anyone writes a few sentences and a production system appears. That is not what is at stake, and I think that caricature hurts the discussion because it lets the topic be dismissed with a smile and “that is not realistic.”
What is realistic is that tools like Claude Code, OpenAI Codex, and others are dramatically shortening the distance between those who understand the problem and those who can materialise a minimally functional solution - even if only to validate hypotheses. This matters in every sector, but in the public sector it is especially relevant because a lot of domain knowledge is concentrated in people who do not have time for projects that consume countless hours, for endless “knowledge transfer” sessions with external teams, or for months of explaining, refining, re-explaining, going back, and restarting until someone translates it into requirements, user stories, specifications, or development.
AI does not remove the need for developers, because code still needs review, validation, tests, integration, and accountability, but it changes what is most expensive in a project: friction. The friction of translating knowledge to whoever will execute, the friction of waiting for development windows, the friction of depending on an endless backlog where “what matters” always competes with “what is urgent.” And, above all, the friction of only discovering whether the flow makes sense when the system is already heavy, expensive, and politically hard to change.
With AI, a prototype becomes a decision tool, and that has enormous value in a context where so many decisions are taken by intuition, memory, fear of risk, or simply because there is no time to measure and experiment.
What changes in the public sector: smaller teams, more design, more autonomy
If we had to summarise the shift in one point, I would say AI favours organisations that can work with smaller, more mature teams focused on what matters: solution design, architecture, integration, and service impact.
In my view, the most important nuance is that AI does not eliminate the need for technical profiles. On the contrary, it may increase the importance of certain roles because it accelerates production and, when production accelerates, errors and entropy accelerate too. The real “blocker” for many public entities becomes less “how many people can I have writing code” and more “can I have the right people to guide, design, integrate, and govern.”
Architects, solution designers, and other senior profiles are scarcer and more expensive; public administration, in its current form, struggles to compete for these resources. In other words, AI democratises building but moves the problem to a more demanding place - the quality of design and technical responsibility. An organisation can produce faster, but without an architectural and governance backbone it will produce faster solutions that do not hold up.
At the same time, the opportunity is real: for small, weakened, and dependent IT departments, it becomes plausible to imagine more agile internal teams - fewer people, but proactive and dynamic, able to build and evolve components autonomously, without having to “ask the market” for every iteration. The impact on vendor dependence is obvious: less vendor lock-in by inertia or lack of technical capacity, less conditioning, more informed decision-making.
And this is where the public sector can gain a lot, not because it will suddenly “do everything alone,” but because it can stop being hostage to a model where knowledge walks out the door whenever a contract ends and every improvement becomes a negotiation, an addendum, a new procurement cycle, a new continuity risk.
The uncomfortable part: many vendors will have to stop selling presence
For a long time, the dominant model of service delivery to the State was simple: put a team in place, start the meter running, and keep “doing” as the client asks. This is not necessarily bad faith; it is often a consequence of a contracting model that pays for effort and capacity, not for outcomes and impact. But that model depends on the client having no alternative: unable to execute internally, and unable to test or prototype without the vendor.
When a public entity can, with a small focused team, produce prototypes quickly, clarify requirements with less noise, accelerate feedback cycles, and even develop parts of the system with more autonomy, it becomes much harder to justify a vendor that sells only “labour” and “presence.” Not only because everything becomes faster but also because it becomes more evident when time is being lost in minutiae, in endless detail discussions, in refinement that does not unlock decisions, and in going in circles with the feeling of progress. And here the immediacy of AI is relentless: if the market can deliver faster, the client will demand value faster. It does not mean complex projects stop being complex; it means patience for poorly managed complexity will shrink.
What seems inevitable to me is a migration to models where the vendor’s value proposition is much more tied to responsibility: ensuring coherence, integration, security, and results, reducing real risk, and delivering measurable impact instead of just absorbing work. I will leave only a short note here, because I want to dedicate the next reflection to this in more detail: I have seen the idea of Results as a Service (RaaS) emerge as a reaction to this shift. More on that next time.
The biggest risk: everyone automating and no one building a system
There is, however, a risk that genuinely worries me and that enthusiasm for technology does not solve. The ease of building can lead to an explosion of “stray” solutions if each team, each person, each department starts creating small apps to automate mechanical tasks that are currently manual. That can generate quick local gains, but it can also increase global entropy, because those apps do not talk to each other, do not share data consistently, do not follow a common architecture, and do not build a real system.
The result can be paradoxical: we automate task execution but integration remains manual. We create more tools, but not more coherence. And without an overarching vision of technical, application, and business architecture, public administration risks trading bureaucracy for fragmentation.
Add to this the risk of blind trust in generated code, which can introduce security weaknesses, scalability issues, and technical decisions that are hard to maintain. Human programmers also make mistakes, but the difference is that AI can make mistakes at scale, quickly, and with an appearance of “correctness” that misleads those without review practice.
That is why, for me, the “build vs buy” discussion must evolve: it is no longer enough to discuss price, capacity, and timeline. We must discuss governance, architecture, maintainability, and technical responsibility, because building faster without a minimal structure to sustain evolution only creates bigger debt, sooner.
In the end, AI is shifting the centre of gravity. Public administration can gain autonomy, reduce dependency, and retain knowledge, but only if it has the courage to invest in the right profiles and in architecture, and only if it resists turning “ease of building” into fragmentation.
And vendors, if they are smart, will not fight this - they will adapt. Because the question that will start to appear more often, very bluntly, is this: if today it is possible to do more with less, why are we still paying as if it were impossible?
Related reflections
Mar 14, 2026 · 9 min
Migrating legacy systems in the public sector: what nobody warns you about before you start
Before replacing a system that has been running for 15 or 20 years, it is worth understanding what it solved. This reflection explores the risk of undervaluing legacy, the importance of listening to those who maintained it, and the tension between respecting what exists and knowing when to cut.
Feb 07, 2026 · 7 min
Reforming public procurement: accelerating with AI is not enough, we need to change what we buy
Introducing AI to speed up public procurement may make the problem more visible, but it does not solve it. The biggest friction lies in what gets contracted, how success is defined, and how delivered value is measured.
Dec 28, 2025 · 8 min
Why building information systems in the public sector is different—and what rarely gets discussed
Digitising public processes without simplifying them only computerises old problems. The real challenge is keeping a simple path for most users while giving dignified routes to exceptions, without killing the MVP or betraying universality.