Mar 14, 2026 · 9 min · Legacy Systems, Public Sector, Digital Transformation, Project Management
Migrating legacy systems in the public sector: what nobody warns you about before you start
The biggest risk in a migration is not the new system failing - it is not understanding what the old system solved
In digital transformation projects, when you look at a system that is 15 or 20 years old, the almost automatic reaction is to treat it as a problem to be solved. Obsolete technology, code that is hard to maintain, interfaces that nobody designed with the user in mind, integrations done "by hand", documentation that is non-existent or outdated. The temptation is to look at it and conclude that the way forward is to replace it - design from scratch, with modern architecture, best practices, agile methodologies, and everything the previous project lacked.
But that reading ignores something: the old system works. It works every day, with thousands of users, in scenarios nobody foresaw when it was designed. It works with patches, with exceptions, with logic that seems odd until you understand the context. And it works because someone - often a small team, with few resources and without the recognition they deserved - kept solving real problems over the years, even without following the "normal" development flow.
This reflection is about that: about the risk of undervaluing what the legacy contains, about the difference between replacing and understanding, and about a tension I still cannot fully resolve - the boundary between respecting what exists and knowing when it is time to cut.
The reflex of treating legacy as the problem
When people talk about legacy systems in the public sector, a vocabulary settles in quickly: technical debt, obsolescence, risk, dependency. All true, to varying degrees. But these systems are also the most complete repository of business rules the organisation has.
These rules are not written down in wikis, procedure manuals, or functional specifications - they live in the code, in the data, in the validations, in the exception flows. Every if/else that seems arbitrary, every field that makes no sense at first glance, every validation rule that contradicts what was written in the original terms of reference - all of that exists because, at some point, someone faced a concrete problem and found a solution. It may not have been the most elegant solution. It may not have followed best practices. But it solved the problem and the service kept running.
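To make this concrete, here is a minimal sketch of what such a rule can look like. Everything in it is invented for illustration - the field names, the dates, the "999" code - but the shape is familiar: a check that looks arbitrary until the comment explains the history, and a rule that survives only as a commented-out line.

```python
from datetime import date

def validate_submission(record: dict) -> list[str]:
    """Validate a new registry submission (hypothetical rules, for illustration)."""
    errors = []

    # Looks arbitrary out of context: activity code "999" was retired years
    # ago, but records migrated from paper in 2009 still carry it, so it is
    # rejected only on submissions created after the migration cutoff.
    if record.get("activity_code") == "999" and record["created"] >= date(2009, 6, 1):
        errors.append("activity_code 999 retired for new submissions")

    # Disabled in 2019 because it blocked a legitimate case that appeared
    # twice a year; this comment is the only surviving record of why.
    # if not record.get("fiscal_number"):
    #     errors.append("fiscal_number is required")

    return errors
```

Delete the comments and the first check becomes exactly the kind of "technical junk" a new team is tempted to drop - until the 2009 paper records reappear in testing.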
Treating this as "technical junk" means losing information. And losing information at the start of a migration is the kind of mistake you only discover months later, when the new system is already in testing and someone asks "but why doesn't this case work like before?"
The old system knows more than any document
There is a moment, in any migration, that repeats itself. The new team analyses the old system, identifies the main flows, maps the most common use cases, and moves on to designing the solution. Weeks or months later, during testing, situations start appearing that nobody anticipated - not because they are rare, but because they were coded into layers of the system that nobody analysed deeply enough.
This is not negligence. It is the consequence of a wrong premise: the idea that existing documentation, combined with a few requirements-gathering sessions, is enough to capture what the system does. In practice, the documentation of a 15 or 20-year-old system is fiction - or, to be fairer, a blurred photograph of a moment that no longer exists. The system evolved, the documentation did not keep up, and the distance between what is written and what is in production is greater than anyone wants to admit.
The only reliable documentation is the code and the data. And even those need to be read with context, because the code tells you what but rarely tells you why. The why is in the head of whoever wrote that patch at three in the morning to fix a problem that appeared in production and needed to be resolved before the next day.
This connects with something I already wrote in the reflection on exceptions in public services: many of the most important rules of a system are not formalised. They are diluted in poorly normalised records, in notes, in one-off decisions, in tacit knowledge. They were maintained by people, not by documentation.
The people who maintained the legacy are not the obstacle
This is the point that bothers me most. In many migration projects, the people who maintained the legacy system for years are treated as part of the problem. They are seen as resistant to change, attached to old processes, unable to think "outside the box". The new team arrives with budget, methodology, modern tools, and implicitly with the message that everything done before is not good enough.
This is, at the very least, ungrateful. But it is also a strategic mistake.
The people who maintained the system are the only ones who know how it actually works - not how it is documented, not how it should work according to the specifications, but how it works in practice. They are the ones who know that "that field cannot be deleted because there is a monthly report that depends on it", that "that validation was disabled in 2019 because it caused blocks in a specific case that appeared twice a year but, when it appeared, stopped everything", that "that flow has an extra step because the law changed in 2017 and the process was never redesigned from the ground up."
If those people are not heard at the beginning of the process, the new system will be born without the ability to handle the cases that matter most. And the worst part is that, most of the time, nobody will notice until it is too late - until the system is in production and situations start appearing that "should work" and do not.
The team building the new system often makes the same mistakes the original team made. Not out of incompetence, but because they did not understand why certain decisions were taken. They look at a business rule in the old system, decide it is unnecessary, remove it. Months later, the exception case that rule covered appears. It gets reimplemented, now with more urgency and less context, and the result is a worse solution than the original.
When to respect and when to cut
But recognising the value of the legacy cannot mean preserving everything. There are things in the old system that are genuinely wrong - decisions made under pressure that were never revisited, workarounds that solved the problem of the moment but introduced worse problems down the line, rules that no longer make sense because the legislation changed or the context disappeared.
The question is: how do you tell the difference? How do you look at a rule in the old system and decide "this is accumulated knowledge we should respect" versus "this is dead weight we should cut"?
I wish I could say there are clear signs. That there is a method, a checklist, a systematic way of separating what to preserve from what to eliminate. But the truth is that I do not have that answer, at least not in a clean form. What I have is an intuition built through trial and error: when someone can explain the why behind a rule - "this exists because in 2018 there was a case where..." - it is usually worth preserving, even if the implementation needs to be redone. When nobody knows why but "it has always been like this", it is a sign that it needs to be investigated before being kept or cut. And when the justification is "the law requires it" but nobody can point to the article, what usually exists is a defensive interpretation that may or may not be correct.
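If I had to caricature that intuition as code, it would look something like this. It is deliberately crude - the trigger phrases and categories are mine, not a real method - but it captures the asymmetry: an explainable why leans towards preserving, "always been like this" and unverified legal claims lean towards investigation.

```python
from enum import Enum

class Triage(Enum):
    PRESERVE = "preserve the intent, even if the implementation is redone"
    INVESTIGATE = "investigate before keeping or cutting"
    VERIFY_LEGAL_BASIS = "ask for the actual article before trusting the rule"

def triage_rule(justification: str) -> Triage:
    """Deliberately crude sketch of the three heuristics described above."""
    j = justification.lower()
    if "because in" in j or "there was a case" in j:
        # Someone can explain the why: usually worth preserving.
        return Triage.PRESERVE
    if "the law requires" in j:
        # Often a defensive interpretation; verify before relying on it.
        return Triage.VERIFY_LEGAL_BASIS
    # "It has always been like this", and everything else: investigate first.
    return Triage.INVESTIGATE
```

No real triage reduces to string matching, of course - the point is only that the default branch is "investigate", never "cut".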
None of this is linear. There are rules that seem arbitrary and that, when you dig deep, protect critical cases. And there are rules that seem fundamental and that, when you investigate, are inheritances from a context that no longer exists. The only thing I know with some confidence is that deciding without investigating - whether to keep or to cut - is almost always the most expensive path.
I have written about this in the context of AI and the build-or-buy dilemma: speed without understanding is not efficiency. If what is being built starts from an incomplete understanding of what existed, acceleration only produces mistakes faster.
The clean cut is almost always an illusion
There is an idea that persists in many migration programmes: the clean cut. A date is set, the old system is switched off, the new one is switched on.
In the public sector, this is almost always an illusion. Not only because the service cannot stop - a commercial registry, a civil registry, a licensing platform does not close for renovations - but because the data migration itself reveals problems nobody anticipated. Inconsistent data, duplicate records, fields that changed meaning over time, relationships between entities that are not documented. And each of those inconsistencies is, in fact, yet another piece of information about how the system was used in practice versus how it was designed in theory.
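This is why it pays to profile the legacy data long before any cutover date is set. A minimal sketch, assuming a plain CSV export with an invented layout - real exports are messier, but even this much surfaces duplicate keys and silently empty fields:

```python
import csv
from collections import Counter

def profile_export(path: str, key: str) -> dict:
    """Profile a legacy CSV export before migration (file layout is hypothetical).

    Surfaces the inconsistencies a clean-cut plan tends to discover too late:
    duplicate keys and fields that are empty far more often than expected.
    """
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    key_counts = Counter(r[key] for r in rows)
    duplicates = {k: n for k, n in key_counts.items() if n > 1}
    fields = rows[0].keys() if rows else []
    empty_by_field = {
        field: sum(1 for r in rows if not (r.get(field) or "").strip())
        for field in fields
    }
    return {"rows": len(rows), "duplicates": duplicates, "empty_by_field": empty_by_field}
```

Each anomaly the report turns up is a question for the people who maintained the system, not a defect to be silently cleaned.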
Progressive migration is harder to manage, harder to explain to those who pay, and harder to contract. But it allows you to learn from mistakes before they affect the entire system. Each migrated module is a real test, with real users, in real conditions. What this demands is patience - and patience is a scarce resource in programmes with rigid deadlines. But the alternative is to risk everything on a single date, and most public services cannot afford that luxury.
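In practice this often takes the shape of a strangler-style router: each module is cut over independently, so a problem in one migrated module never puts the whole service at risk. A minimal sketch, with invented handler and module names - a real setup would read the flags from configuration, not a hard-coded set:

```python
def legacy_handle(module: str, request: dict) -> str:
    # Stand-in for the legacy system (hypothetical).
    return f"legacy:{module}"

def new_handle(module: str, request: dict) -> str:
    # Stand-in for the new system (hypothetical).
    return f"new:{module}"

MIGRATED = {"licensing"}  # modules already cut over, one at a time

def route(module: str, request: dict) -> str:
    # Each migrated module becomes a real test with real users; everything
    # else keeps flowing through the legacy system untouched.
    handler = new_handle if module in MIGRATED else legacy_handle
    return handler(module, request)
```

The flag set grows one module at a time, and shrinking it back is the rollback plan - something a single cutover date simply does not have.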
Respecting does not mean preserving everything
What I am saying is simple: before replacing a system, it is worth understanding what it solved. Not to keep it as it is - that is not the argument - but to avoid rebuilding blindly. To avoid losing rules that took years to discover. To avoid treating as incompetence what was, many times, the best possible decision within constraints the new team will never experience.
The boundary between respecting and cutting probably does not have a formula. It depends on the context, the people, the capacity to investigate, the time available. What I know is that erring on the side of investigating too much costs weeks. Erring on the side of ignoring costs months - and it costs trust, which in the public sector is the hardest resource to recover.
The old system may be slow, it may be ugly, it may run on technology nobody wants to maintain. But before replacing it, it is worth understanding what it solved. Because the biggest risk in a migration is not the new system failing - it is losing what the old one knew.
Related reflections
Feb 07, 2026 · 7 min
Reforming public procurement: accelerating with AI is not enough, we need to change what we buy
Introducing AI to speed up public procurement may make the problem more visible, but it does not solve it. The biggest friction lies in what gets contracted, how success is defined, and how delivered value is measured.
Jan 15, 2026 · 9 min
"Build or buy" in the public sector: AI is changing the rules of the game
AI makes “build” a practical option in the public sector by reducing friction and dependency, while demanding stronger architecture, governance, and technical accountability.
Dec 28, 2025 · 8 min
Why building information systems in the public sector is different—and what rarely gets discussed
Digitising public processes without simplifying them only computerises old problems. The real challenge is keeping a simple path for most users while giving dignified routes to exceptions, without killing the MVP or betraying universality.