
Dec 28, 2025 · 8 min · Digital Transformation, Public Sector, Public Services, Service Design

Why building information systems in the public sector is different—and what rarely gets discussed

Digitising is not simplifying: balancing universality, exceptions, and responsibility in public services

There is a distinction that, in practice, decides the fate of many public projects: digitising is not simplifying. Digitising is putting a process on a screen. Simplifying is rethinking the goal and designing the most efficient path to reach it.

This is especially sensitive in the public sector because public service is not “for the majority”. By definition it is for everyone—including people who arrive with less common situations, more vulnerability, and more difficulty fitting a standard flow.

What rarely gets discussed is not the existence of laws, scrutiny, or risk. Everyone knows that. What rarely gets discussed is the practical effect this has on how we design systems: how to create a simple service for most people without betraying universality, how to stop the fear of exceptions from killing an MVP, and how to make principled decisions when you almost never have metrics.

The problem of recreating the physical world on a screen

The most common mistake is this: taking the physical workflow and restaging it digitally, step by step, as if the current order were “the process” and not just a historical solution to a set of risks and constraints that, often, have already changed.

The public sector is full of processes that exist for a reason. They are not foolish. They have been shaped by context, technical limitations, risks, case law, edge cases. The accumulated experience is valuable. The problem starts when we treat that experience as an absolute limit instead of a starting point for redesign.

Redesigning is not disrespecting. It is understanding why the process ended up that way and honestly asking: does this risk still exist? Does this control still make sense? Is this sequence really necessary? Are there better ways to mitigate the problem without forcing everyone through the same maze?

This is where transformation in the public sector stops being a “product” and becomes a broader project. Because if the goal is truly to simplify, it may require changing internal practices, redefining responsibilities, and, in some cases, evolving legislation. What makes no sense is using legislation as an excuse to keep bad flows. Legislation should focus on what is core and enduring, not on implementation details that age in months—for example, enshrining “the specific website” where something was published.

The exception loop and the MVP that never ships

For me, one point separates a project that advances from one that drags on: exception management.

In theory it is simple to say "we must consider exceptions; the service is for everyone." In practice the pattern repeats: we start by designing validations to protect the system and ensure compliance; then a specific need appears; then someone remembers another situation; then the law "allows" everything to be identifiable or handled, and suddenly the system stops being a product with a clear flow and becomes an attempt to anticipate the infinite.

The effect on delivery is devastating. It blocks releases, prevents an MVP, delays any testing with real users, and cuts off feedback on what will realistically be the path for most users. The project gets stuck in the most expensive phase of all: debating without data.

And there is a detail that makes this even harder in the public sector: many of these exceptions are not written down. They are in the heads of experienced people who have seen rare cases and therefore legitimately feel the system must be prepared. But without numbers the debate never ends. It is always possible to discover one more thing that “could happen”.

This creates a dangerous polarisation. Either we hyper-simplify and ignore relevant cases, or we become hyper-focused on exceptions and the system never becomes “good enough” to use. What is missing in the middle is not good intent but a decision method grounded in evidence.

What it means to decide when there are almost no metrics

Here lies the heart of the problem: in the public sector it is very hard to define a threshold because universality is not a slogan; it is a moral and practical obligation. And many exceptions belong precisely to minorities that already face limitations and/or difficulties. If we create a flow that “works for most” but pushes others into a dead end, we are failing public service.

At the same time, trying to convert every exception into business rules in the main flow is a recipe for paralysis. A system can never be designed around exceptions. If it is, it turns into an overly permissive mechanism, an “open door” that lets everything in, with weak validations, little predictability, and greater operational risk.

When balance exists, it is usually built on two simple ideas.

The first is to make the most common cases as automatic and simple as possible. Not to “escape” complexity but to free human capacity for what is genuinely complex. The day-to-day “froth” should be handled with smoothness, consistency, and automation. Human resources—ever scarcer—should focus on exceptions, complex cases, and situations that demand interpretation and accountability.

The second is to recognise that “service for everyone” does not mean “the same path for everyone”. It means ensuring no one is left without an answer. That may require a well-designed exception channel with triage, traceability, and clear criteria, instead of trying to force everything through the same funnel.
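These two ideas can be sketched as a small routing function. This is a hypothetical illustration, not the article's implementation: the field names and channel labels are invented, and real criteria would come from the service's own rules.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    """A hypothetical service request; fields are illustrative only."""
    applicant_id: str
    fits_standard_rules: bool
    flags: list[str] = field(default_factory=list)

def route(request: Request) -> str:
    """Automate the common case; never leave the rest without an answer.

    The exception channel is a designed path (triage, traceability,
    clear criteria), not a dead end or a rejection at the door.
    """
    if request.fits_standard_rules and not request.flags:
        return "automated-main-flow"
    # Anything that does not fit the standard rules is handed to a
    # human-operated channel with an audit trail.
    return "exception-channel"

print(route(Request("A-1", fits_standard_rules=True)))
print(route(Request("A-2", fits_standard_rules=False, flags=["edge-case"])))
```

The point of the sketch is the shape, not the rules: the main flow stays strict and predictable, while universality is guaranteed by the second return path rather than by loosening the first.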

This does not solve the metrics problem, but it changes the conversation. Instead of “let’s predict everything,” it becomes “let’s ensure most people have a simple journey and minorities have a dignified, safe, and operable journey.”

Responsibility: fewer slogans, more usable principles

When we talk about “designing responsibly” it is easy to drift into nice phrases. I prefer to look at concrete references because they force us out of vague discourse.

In Portugal, Mosaico (mosaico.gov.pt) aims to create a common model for designing and developing digital public services, with guiding principles and role-based guides. The underlying idea is to avoid each team inventing its own “State” and leaving services inconsistent with each other.

In the UK, the GOV.UK Service Standard is cited for a reason: it is not an abstract manifesto; it is a set of points that forces teams to think about users, accessibility, security, operations, continuous improvement, and even publishing performance data.

The least consensual part is how these standards are applied. When they become a checklist to “comply” with, they create new bureaucracy and little real improvement. When they work as a common language and decision tool, they help cut noise and give autonomy.

This links directly to the topic of exceptions. A service can be simple and still be responsible. Simple does not mean reducing everything to the minimum; it means removing friction where it protects no one and keeping friction where it is necessary to ensure fairness, security, and trust. Much of the work is deciding where friction is intentional and where it is mere legacy.

In the public sector, digitising without simplifying is the fastest way to computerise the problem. A public system cannot just be a physical workflow with screens. It must start from the outcome: what do we want to happen for the citizen, the business, the community? From there we rebuild the process, assess risks, design exceptions with dignity, and create space to test with real people instead of trying to anticipate infinity in a room.

Sometimes the solution sounds “simple”: gather data, measure, understand exceptions, and decide. In theory, yes. In practice, this is where the public sector pulls the rug.

Many systems were designed on the assumption that people would absorb variability. If an odd case came in, someone would assess it. If there was a doubt, experience resolved it. The service was universal because the "system" did not need to close everything at the entrance; there was human capacity to handle whatever escaped the rules. But there is a hidden cost: when triage is human and decisions are distributed, structured data is never captured. The exceptions exist, but they are diluted in poorly normalised records, notes, ad hoc decisions, and tacit knowledge.

And we arrive at today: fewer resources, more pressure, shrinking teams, experienced people leaving faster than replacements arrive, and a growing expectation that the system will be more automated, more consistent, and fairer. We start wanting to narrow the funnel—validate rules, prevent inconsistencies, stop “everything getting in”. That makes sense. But we stumble on the same starting point: the rules we need to automate are not always formalised in current systems, and the data that would let us define thresholds confidently is not as available as we would like.
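One pragmatic way out of this chicken-and-egg problem, sketched below as an assumption rather than anything the article prescribes, is to run new validations in an observe-only mode first: count violations instead of rejecting, so thresholds can later be set from measured reality. Rule names and record fields here are invented for illustration.

```python
from collections import Counter

# Hypothetical rules; names and record fields are illustrative.
RULES = {
    "postcode_present": lambda rec: bool(rec.get("postcode")),
    "single_owner": lambda rec: len(rec.get("owners", [])) == 1,
}

violation_counts: Counter = Counter()

def validate(record: dict, enforce: bool = False) -> bool:
    """Check a record against RULES.

    In observe mode (enforce=False) every record passes, but each
    failed rule is counted, building the missing metrics. Only once
    the counts justify it is a rule promoted to enforce mode.
    """
    failed = [name for name, rule in RULES.items() if not rule(record)]
    violation_counts.update(failed)
    return not (enforce and failed)

# Observe mode: the odd record still gets through, but is measured.
validate({"postcode": "", "owners": ["a", "b"]})
print(violation_counts)
```

The design choice is deliberate: instead of debating in a room how often an exception "could happen", the funnel is narrowed only after the observed counts show what actually happens.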

In the end, we return to the beginning. How do you decide what stays in the main flow and what moves to an exception channel when the past is not properly measured—precisely because, for years, universality was guaranteed by people and not by rules? I do not have a perfect answer. If I did, I would probably make fewer mistakes. What I know is that ignoring this reality only pushes us to two extremes: either we keep designing wide-open doors because “public service is for everyone”, or we try to close everything on principle and end up excluding those who need it most.

Digitising without simplifying is just putting an old problem on a new screen.
