Much as it has exposed America’s aging physical infrastructure, the coronavirus crisis has revealed the decrepit state of the country’s digital infrastructure.
To fix these urgent problems, local, state and federal governments could turn to best practices used in the private sector to develop more reliable software.
Over the last few weeks, millions of Americans have found themselves repeatedly staring at a “webpage unavailable” screen as they unsuccessfully tried to navigate their state’s unemployment insurance portal.
Even worse, states have found their attempts to deploy emergency improvements and fixes hampered by decades-old technology and a shortage of programmers who still know the antiquated languages these websites are built on.
Yet, these obstacles should come as no surprise.
Numerous prior failures, such as the collapse of the Department of Veterans Affairs IT systems required to pay out GI Bill benefits in 2018, have signaled that governmental software applications of all types need an upgrade. Governments could begin to address these shortcomings by following three principles.
First, government agencies and departments could think of software as a set of platforms to build upon instead of as isolated applications that accomplish a single task in a single way.
Today, government software applications trap data inside silos where it cannot easily be shared with other applications or reused in an emergency for a new purpose. In contrast, America’s leading software corporations build their software as a network of small, interoperable components.
When a new opportunity arises, they identify the most relevant components and quickly adapt or add to them to deliver a new solution to their customers. Neither Amazon nor Google forces its customers to provide the same information over and over again; a modern IT layer could allow the government to do the same where appropriate.
Second, government acquisitions procedures should adequately fund the complete lifecycle of software applications.
Too often, government agencies resource software projects the same way they fund other purchases, with a significant up-front investment to deliver the initial product and greatly reduced appropriations to maintain the application over time.
However, a software application is not like a car — the owner needs to do more than change the oil every six months to keep it running. Instead, software projects continuously evolve as the data they ingest change, users discover new things they need the software to do, and errors and omissions in the original design crop up.
Software projects inevitably accumulate what is known as “technical debt” over time — needed improvements that will not add any new functionality but will reduce the complexity of the code and make the application easier to maintain over the long term.
When any organization fails to allocate time and resources to pay down this technical debt, it quickly finds that its software cannot be extended to support a new use or serve ever-larger numbers of users of its critical functionality.
As anyone who has tried to claim unemployment benefits in the last month now knows, failing to continuously maintain software code often results in a catastrophic failure at the worst possible time.
Finally, agency officials could consider how newly proposed uses and additional requirements will affect the complexity of the software that implements them.
Most people expect tasks to scale linearly; our intuition is that doing four things will be about twice as difficult as completing two. Instead, the complexity of software scales exponentially with the number of features requested.
While any individual ask may be simple, software features quickly begin to interact with and impact each other. If one software module has three potential outcomes and a second module has four, then implementing both creates 12 possible paths to complete the overall task.
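The multiplication described above can be sketched in a few lines of Python. The module outcome counts below are hypothetical, chosen only to illustrate how quickly the number of paths grows:

```python
from math import prod

def total_paths(outcomes_per_module):
    """Count the distinct execution paths through a chain of modules,
    where each module can finish in one of several ways."""
    return prod(outcomes_per_module)

# Two modules, as in the example above: 3 outcomes x 4 outcomes.
print(total_paths([3, 4]))              # 12 paths to verify

# Each added module multiplies the total; six small modules
# already demand over a thousand test cases for full coverage.
print(total_paths([3, 4, 2, 5, 3, 4]))  # 1440 paths
```

This is why testers rarely attempt exhaustive verification: the number of cases is the product of every module’s outcomes, not their sum.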
Adding module after module quickly makes comprehensively verifying that the software works correctly untenable; instead, most software programmers only check that the most common cases work correctly.
Under normal circumstances this works well enough, but when less commonly used software features suddenly become essential, they all too often do not work as intended.
Minimizing the number of corner cases and “nice-to-have” features is essential to making resilient software; special cases and carve-outs quickly lead to cost overruns and incorrect implementations.
If new requirements are proposed by legislators, executive branch agencies might consider how best to inform them of the likely effects before a bill is passed, allowing for a discussion of the tradeoffs involved, such as the additional funding needed to expand information system capacity and prevent a loss of functionality.
Legislators may have to rely on experts to help them navigate these tradeoffs; resurrecting organizations like the Office of Technology Assessment in Congress and recruiting experienced software engineers to staff it could help.
America’s digital infrastructure was in sad shape long before the coronavirus spread across the globe. The hope is that the deadly virus will at least shine a light on a problem in urgent need of repair.