
Feb 07, 2026·7 min·Public Procurement, Artificial Intelligence, Public Sector, Digital Transformation


Reforming public procurement: accelerating with AI is not enough, we need to change what we buy

Artificial intelligence can speed up the process, but the real problem is what gets contracted and how success is measured


In the previous reflection, I left a note about an idea that has been gaining weight - Results as a Service, or payment by results applied to technology contracting. For those who want to explore the concept further, this article by Syed Tahmid Alam is a good starting point. This reflection is about what that idea can, and cannot, change in how the State buys technology.

There is a broadly shared push gaining momentum in favour of using artificial intelligence to speed up public procurement. It is not hard to see why. Analysis, screening, document validation, proposal comparison - all of this consumes time and energy, and much of it is repetitive. Portugal's National AI Agenda itself identifies use cases under development in this area and points to the complexity of public innovation procurement as a structural barrier to AI adoption in government.

But the less discussed point is that introducing AI to accelerate the process may make the problem more visible without solving it. The biggest friction is not only in evaluating proposals. It lies in the fact that we keep contracting technology as if the world were stable, as if months were needed to reach a solution, and as if the best way to ensure control were to lock down the scope from day one.

In a time when technology moves fast and development is being democratised, keeping the underlying logic of public procurement untouched and simply inserting AI midway risks doing one thing: accelerating the execution of contracts that are misaligned with reality.

AI is already entering public procurement

This is no longer speculation. There are public initiatives aimed at using AI in the procurement cycle, from intelligent agents to support proposal evaluation to tools for automating document verification and reducing the operational burden on evaluation panels.

At the same time, institutional discourse is starting to acknowledge something that should be obvious but is rarely treated seriously: it makes no sense to digitise processes that are already inefficient. The priority, in several recent interventions, is to simplify first and only then apply AI, with interoperability and process reengineering as central pillars.

This matters because it is easy to fall into the trap of thinking that modernising procurement means putting AI to read PDFs and compare proposals. That may help with administrative steps, but it does not change what really matters - what is being bought, how success is defined, and how change is managed over time.

If AI cuts weeks off the initial phase and the contract keeps buying hours, or a fixed scope that is already outdated at birth, then AI merely shortens the road to the same destination.

Contracts designed for a slower world

On the ground, technology procurement tends to fall into one of two models, both with well-known problems.

The first is the capacity model - buying a team, profiles, full-time equivalents. Work advances at the pace of the backlog and funding follows effort. It is auditable and controllable, but it creates obvious incentives to prolong the engagement and confuse motion with progress.

The second is the fixed-scope model - define requirements, close the terms of reference, award, deliver. It is defensible because there is a list of deliverables, but it is slow to start and demands a prediction of the future that rarely exists. Reality changes, exceptions appear, and the contract ends up living on amendments or contortions to fit what was written months ago.

AI is accelerating development and reducing friction between those who understand the problem and those who can materialise a solution. That acceleration makes both models more fragile: the "taximeter" becomes harder to justify when faster iteration is possible, and the fixed scope becomes outdated more quickly when the market moves in short cycles.

There is a paradox here that I explored before: digitising is not simplifying. And procuring faster is not procuring better. If the contract does not keep pace with the speed of the problem, the process may be exemplary and the final result still mediocre.

Paying for results, not for teams or feature lists

The idea of payment by results is not new - it has existed for decades in areas such as healthcare, development aid, and public works. What is changing now is its application to technology and digital services, under the name Results as a Service (RaaS): linking funding to the achievement of pre-defined and measurable results, instead of paying only for effort or a list of deliverables.

In practice, instead of a contract that describes the final product in detail - an exercise that is time-consuming and often illusory - the problem itself is described and a set of measurable results is defined, with a payment component tied to value delivery.

This changes the conversation in several ways. It shifts the discussion to what actually matters: what will improve in the service, with what evidence, and over what horizon. The contract stops being a feature catalogue and becomes a commitment to an effect.

It also changes the pressure on the vendor. If payment depends on the result, selling "available team" stops being a value proposition and becomes a cost. The vendor must bring method, design, integration, and security, because the risk of not delivering is no longer merely reputational.

And it forces the client to do what many contracts avoid: define success and measure it. This aligns with good practice such as the GOV.UK Service Standard, which makes defining success and publishing performance data an explicit principle from the discovery phase, not at the end.

In the Portuguese context, we are not starting from scratch. The framework agreement model, for instance, is already used for software development procurement in public administration. ESPAP manages framework agreements that allow various public bodies to contract IT services based on pre-negotiated terms, and the Tax Authority itself used this type of mechanism for the development and evolution of its portal. A framework agreement is not a results-based model, but it is a step towards better structuring the contractual relationship. What payment by results adds is the direct link between funding and demonstrated impact, something that current framework agreements rarely require explicitly.

The intention is appealing. But the model only works when the organisation can govern metrics and exceptions. And that is not trivial.

Governing metrics is harder than it looks

When a performance indicator determines payment, it determines behaviour. Simple indicators are useful because they are auditable, but they are easily gamed. Richer indicators get closer to real impact, but they are harder to verify and generate interpretive conflict.

In a sector where universality is an obligation and not a slogan, optimising for "percentage of cases resolved automatically" can turn rare cases into statistical noise. The margins of the system are precisely where administrative justice tends to be most exposed, a point I explored in the reflection on exceptions and public services and which here becomes even more acute.
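To make the gaming risk concrete, here is a minimal sketch with entirely hypothetical numbers: an aggregate "percentage of cases resolved automatically" can look excellent while the rare cases - the margins where administrative justice is most exposed - are failing badly.

```python
# Hypothetical case mix: common cases dominate, rare cases sit at the margins.
cases = (
    [{"segment": "common", "auto_resolved": True}] * 950
    + [{"segment": "rare", "auto_resolved": True}] * 10
    + [{"segment": "rare", "auto_resolved": False}] * 40
)

def automation_rate(subset):
    """Share of cases resolved automatically in a given subset."""
    return sum(c["auto_resolved"] for c in subset) / len(subset)

overall = automation_rate(cases)
rare_only = automation_rate([c for c in cases if c["segment"] == "rare"])

print(f"overall automation rate: {overall:.1%}")    # 96.0% - looks excellent
print(f"rare-case automation:    {rare_only:.1%}")  # 20.0% - the margins fail
```

An indicator that only rewards the overall rate would pay out handsomely here; a segmented indicator would not. That is the whole argument for measuring rare cases separately.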

AI can make this worse because it accelerates the production of solutions and, at the same time, accelerates entropy if there is no architecture and governance. And the legal rigidity around contract modifications, such as that arising from Directive 2014/24/EU requiring revision clauses to be clear, precise, and unequivocal, makes the "let's award and figure it out later" approach even riskier.

Results-based models need to be designed with variation corridors and exception rules from the start. This is not something that gets resolved mid-contract.
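What a variation corridor means in contractual terms can be sketched in a few lines. This is an illustration under simplifying assumptions - a single indicator, a linear ramp, and invented names and amounts - not a prescription:

```python
def results_payment(measured, target, base_fee, at_risk, floor=0.8):
    """Payment for one indicator with a pre-agreed variation corridor.

    Below floor*target only the base fee is paid; between floor*target and
    the target the at-risk component scales linearly; at or above target it
    is paid in full. Every name and number here is illustrative.
    """
    ratio = measured / target
    share = min(1.0, max(0.0, (ratio - floor) / (1.0 - floor)))
    return base_fee + at_risk * share

# Hypothetical contract: 100k base fee, 50k at risk, target of 5,000 resolved cases.
print(results_payment(5_000, 5_000, 100_000, 50_000))  # 150000.0 - target met
print(results_payment(4_500, 5_000, 100_000, 50_000))  # 125000.0 - inside corridor
print(results_payment(3_500, 5_000, 100_000, 50_000))  # 100000.0 - below the floor
```

The point of agreeing the floor and the ramp up front is precisely that neither side can reinterpret them mid-contract - which is what Directive 2014/24/EU's demand for clear, precise, and unequivocal revision clauses is getting at.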

Starting without revolution

There is no painless leap from capacity contracts or fixed scope to a results-based model. And, in many cases, there should not be. There are services where impact is hard to attribute, areas where measurement is weak, and contexts where the priority is to stabilise.

Still, there is a pragmatic path. Invest in the capacity to measure before contracting for results: if there is no baseline, the first contract should be about instrumentation and data, not results. Design contracts with pre-defined adaptation corridors, taking advantage of the fact that the Directive allows revision clauses and modification mechanisms, but with the rigour needed to avoid turning them into a legally fragile "open door". And treat exceptions as part of the service rather than as noise, because a results-based model that only measures the general rule will fail where the State cannot fail - which means exception channels, thresholds, and specific indicators for rare cases, with intentional friction where it is necessary.

The public machine itself is trying to address part of this. The National AI Agenda foresees structures and guides to support AI procurement. But the risk is assuming that a guide solves the essentials. The essentials are deciding what to buy - whether it is means, deliverables, or results.

AI will accelerate public procurement. The question is whether it will accelerate value delivery or merely the road to the same kind of contracts as always.
