Many years ago I was asked by senior execs at a global car manufacturer what I thought the biggest challenge in the next few decades would be. I think they were expecting me to say market share, or CO2, or supply chain stability, all of which are properly big challenges.
But that’s not what I said. It was clear to me even then that the biggest problem would be complexity.
Hear me out on this: everything is getting so complicated, from supply chains and vehicle development to legal compliance, marketing and GDPR, that it’s more than any one person can handle. But it’s software that is my biggest fear; it’s totally out of control.
My degree was in computer systems engineering, and back in the late ‘80s we had to write mathematical proofs of our software and use them to generate test methods for the actual code. During the ‘90s this became impossible to do in practice because of the rate of software change and its complexity. In engine management ECUs we went from a code book of a few hundred pages and fewer than a hundred variables to a code book too big to print: thousands of hyperlinked pages and thousands of variables. A modern engine ECU might have over 35 thousand variables, with tables, look-up charts, maps and so on. And everything affects everything else, so proving it fully correct in all conceivable conditions would take hundreds of years.
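To get a feel for why exhaustive verification became impossible, here is a rough back-of-envelope sketch. The variable counts are the ballpark figures above, not a real ECU spec, and it charitably assumes each variable takes only two meaningful values, a wild underestimate given that many are multi-point maps:

```python
def years_to_test(num_vars: int, tests_per_second: float = 1e9) -> float:
    """Years needed to try every combination of num_vars binary variables,
    assuming a (generous) billion test cases per second."""
    combinations = 2 ** num_vars
    seconds = combinations / tests_per_second
    return seconds / (3600 * 24 * 365)

# Even the 1980s-era ECU with ~100 interacting variables is already
# far beyond any feasible test campaign; 35,000 variables is beyond
# any physical computation.
print(years_to_test(100))
```

Even under these absurdly optimistic assumptions, a hundred binary variables already demand trillions of years of testing, which is why the field had to abandon exhaustive proof for sampling and review.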
Ah, you might say, AI can help with this. But actually it makes things worse, because AI allows faster and more complex software development: as much as it can analyse the problem faster, it also makes the problem bigger.
Of course the modern software world is giving us great abilities; the sheer volume and quality of information I can get almost instantly is remarkable. Do we still need book libraries? I can instantly communicate with people all over the world, I can move money at the press of a button, I can design something, send off the CAD files and have a physical item made and delivered to my door. This ability is truly amazing and deeply valuable.
But.
There’s a point where we, people, lose control of the things we develop: when no one fully understands it, when no one can replicate it themselves, when no one can work out exactly what the system does or how it does it, when you look at a line of code and have no idea where it fits into the system or what effect it has.
That point was probably reached a few years ago for many systems.
Now we have AI writing updates to operating systems, identifying vulnerabilities that no human could have worked out, and rewriting the code to remove them. The fundamental software systems that underpin our whole way of life are being developed by the software itself.
These underpin monetary systems, power networks, water treatment, farming, shopping, transport, healthcare, defence, air traffic control, shipping etc.
Software being used for good can be amazing; a new dawn of human accomplishment awaits. But if it goes wrong, or is corrupted by one of the millions of cyber criminals in the world, then it can do harm beyond imagination. Consider a bioweapons development centre where the system knows the whole human genome and develops the ultimate contagion, but doesn’t tell the staff. Or consider the connected battlefield where false data is injected to convince troops to fire on their own comrades, or a blackmail ring that inserts code into several nuclear power stations and allows one to melt down as a warning, or biases air traffic control to run aircraft into each other. The possibilities for harm are as diverse as the possibilities for good.
If it works, then great things will happen. If it doesn’t, well, you won’t be reading posts like this any more.