Friday, January 25, 2019
By Thomas Frey
The DaVinci Institute
Over the past couple of decades we’ve seen a number of spectacular tech failures. It’s easy to list things like MySpace, Google Glass, Pebble Watch, TiVo, Iridium, Napster, and even AOL as major business failures, but in each case it was a matter of failing forward. Each of these products influenced the next generation of technology and paved the way for far better efforts that followed.
Few people remember that the Panama Canal started out as a world-cringing disaster, with the French company spending over $287 million and causing more than 20,000 deaths before throwing in the towel and filing for bankruptcy. Again, this set the stage for a far more successful effort led by the U.S. that followed.
But what happens when we no longer fail forward? Or what if there are too many failures all at once?
We are more dependent on technology today than ever before in history. And it’s rather obvious, as this trend continues, that we will use more technology in the future than we do today.
As our reliance on technology increases, we are finding a parallel increase in the number of possible breaking points associated with it. So, more things can go wrong.
Technology today is lightyears ahead of any policy or laws designed to govern it. This means that we can’t rely on government to protect us.
Along with potential catastrophes, we are extending far beyond the safety nets of any well-governed society. This makes some form of impending disaster a near certainty.
Even though our ability to auto-manage and auto-govern our actions will improve, the error potential will grow even faster.
So what will a techno-disaster look like, and will it rise to the level of a “techno-apocalypse?” Perhaps!
Are we destined to face a techno-apocalypse?
Modern society is fixated on the concept of progress, where all tech advancements are first viewed through a positive lens even though the balance scale of plusses and minuses may tilt more in a harmful direction. This kind of tech-blindness may be helpful at times, but will likely mask the downside lurking behind the glad-handers and well-wishers.
Our ability to sense and monitor change should reduce the risk of things like global pandemics, ecological collapses, nuclear wars, major asteroid impacts, and climate change. But, in my mind, the greatest risk we face will be deviant human behavior. Only this time it will be turbo-charged with technology.
These deviant individuals will transition from criminals to super criminals overnight! Naturally this opens the door for things like:
- Large data-destroying viruses unleashed on today’s businesses.
- Near-certain odds of being threatened or blackmailed online.
- Major governmental systems disabled or destroyed.
And we still run the risk of disasters beyond our control such as:
- A large solar flare, with its associated EMP blast, that could bomb us back into the Stone Age.
Even with countless systems to protect against it, the biggest problem will be loss of control over our own money and wealth. As safety and peace of mind erode, we run the risk of deteriorating into a survivalist society where all semblance of trust is broken and we only care about ourselves and our families.
For me, it’s far easier to understand something when it’s framed out in short scenarios. Scenarios, in this context, are brief cause-and-effect stories about one possible version of the future.
To set the stage, these five scenarios focus on devious people causing failures at key inflection points. One failure will often cause a cascade effect that grows far beyond the initial problem.
- Extreme Privacy Failure
Radical transparency advocates live happily in a false meme world, believing that if we all know everything about everyone, we will create a much safer society. Nothing could be further from the truth.
When we know everything about our neighbors, it means we also know their credit card numbers, bank account numbers, and passwords. When this happens, we quickly lose our ability to “own things,” and ownership is a foundational right on which the modern world is based.
So when Cambridge Analytica used their psychometric scientists to rummage through people’s personal data via their Facebook accounts, they not only uncovered incriminating data, but also stealable assets and re-assignable forms of personal wealth.
For those truly intent on creating a new world order, perhaps using a Marxist form of wealth distribution, it becomes easy to imagine a backdoor approach like this to rewrite the “ownership code of humanity” which would lead to a very chaotic and dysfunctional world ahead.
- Global Airport System Collapse
The busiest airports in the world are Atlanta, Beijing, Dubai, and Tokyo. If a series of well-orchestrated tech incidents were planned at any of these airports, it would cause a huge ripple effect across the entire global air transportation network. Whether it starts as watering-hole attacks that alter traffic control systems, some form of Gatwick-like drone chaos, or a series of well-placed explosives, the shutdown of a single airport will have serious implications.
A confrontation like this could rise to techno-apocalypse level if the disruption is not easily remedied and if it has the potential to be duplicated quickly across multiple airports.
Air transportation is a complex global system based on multiple dependencies. Even though it’s by far the safest of all forms of transportation, at its core is a system based on trust. As you might imagine, trust is a hard thing to quantify and even harder to rebuild once it’s lost.
- Dismantling a Major Tech Company
The world has become very dependent upon a few key companies like Google, Amazon, Apple, Microsoft, and Alibaba, to name a few. It’s entirely possible for a well-focused effort, using an array of sabotage tactics, to disable or even destroy one of these linchpin companies.
We live in a world where virtually anyone is blackmailable. Since we all care about someone or something, the right threats made by the right entities, at the right time, can make nearly anyone vulnerable. This is especially true for corporate executives where great power can leave them exposed in unfortunate ways.
By focusing on uniquely positioned individuals and nuanced impact points, radical groups can start dismantling the digital services we’ve all become very dependent on, without anyone noticing until it’s too late.
- Dark Web Militia
Most of the major problems in the world today can be traced back to a few key decision-makers who are very wealthy and powerful. This is true for most of the world’s pollution problems, destruction of the rainforests, human trafficking, organized crime, and much more.
Using the dark web to recruit an army of super hackers, the Dark Web Militia could launch a series of relentless cyberattacks on these individuals.
Recruiting people for this cause will be relatively easy because they can remain anonymous and the organization’s goals are easy to rally behind. Using a promo campaign filled with righteous anger, vilifying each of these individuals to a point where they no longer seem human, a series of attacks gets staged to destroy the lives of each of these so-called bad actors.
In this situation, the unintended consequences of ruining these people’s lives become a Pyrrhic victory. In addition to taking down the culprits, a number of significant businesses will collapse, costing countless jobs, and the collateral damage will end up being far worse than the original problem the militia was attempting to solve.
- Fort Peck Incident
Twenty years ago I published a disaster scenario about a team of terrorists that blows up the huge rolled-earth dam at Fort Peck, Montana. As it collapses, the dam’s 23 billion cubic meters of water begin to barrel down the Missouri River valley, quickly overloading its capacity and setting the stage for it to wipe out five more major dams downstream.
In just a day and a half, a massive wall of water will leave an unbelievable trail of destruction over three thousand miles long as it rips through the center of the United States. Not only is there damage, but the country is also literally cut in half with virtually no ground transportation, data lines, or power lines remaining between the two halves. This trail of devastation will leave over 15 million people homeless and thousands missing and presumed dead. Major power plants will have been destroyed, and restoring power to the whole country will be a long time in coming.
With this single act of destruction, nearly every person on the face of the Earth is somehow affected. Five Federal Reserve banks will be destroyed. Thousands of major companies will have been demolished. The stock market, domestic and international, will be thrown into total turmoil. Many insurance companies will simply fold up because the losses are too great. World food supply systems are thrown into disarray, and critical water supplies, sewer systems, and a number of other essential services we take for granted will take years to repair.
Will a techno-apocalypse be this dismal?
Most theories on the techno-apocalypse tend to be based on some form of the singularity.
The technological singularity is the theory that exponential advancements will lead to the creation of an artificial super-intelligence, abruptly triggering runaway tech advancements and resulting in unfathomable changes to humanity.
According to this theory, a rapidly upgrading intelligent agent, such as a computer running software-based artificial general intelligence, will enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation upgrading even more frequently, causing an intelligence explosion, resulting in a powerful super-intelligence that will, qualitatively, far surpass all human intelligence.
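The “runaway reaction” described above can be captured in a toy model. This is purely illustrative, not a prediction: it simply assumes that each self-improvement cycle multiplies capability by some factor and shortens the next cycle by the same factor, which is what makes the growth curve “explode” in a finite window of time. The function name and parameter values here are made up for the sketch.

```python
# Toy model of a recursive self-improvement "runaway reaction" (illustrative only).
# Assumption: each cycle multiplies capability by `gain` and shortens the next
# cycle's duration by the same factor, so elapsed time converges to a finite
# limit even as capability grows without bound -- an "intelligence explosion."

def intelligence_explosion(gain=2.0, first_cycle_years=2.0, cycles=50):
    """Return (capability, elapsed_years) after the given number of cycles."""
    capability = 1.0              # human-baseline intelligence
    cycle_len = first_cycle_years
    elapsed = 0.0
    for _ in range(cycles):
        elapsed += cycle_len
        capability *= gain        # each generation is `gain` times smarter...
        cycle_len /= gain         # ...and builds its successor `gain` times faster
    return capability, elapsed

cap, years = intelligence_explosion()
print(f"capability x{cap:.3g} after {years:.4f} years")
```

With these made-up numbers, elapsed time is a geometric series (2 + 1 + 0.5 + …) that never exceeds 4 years, while capability doubles every cycle indefinitely. The hedge, of course, is the assumption that each generation really can improve itself faster than the last; the model says nothing about whether that premise holds.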
The idea of a “singularity” was first mentioned in the 1950s by John von Neumann, an early computer scientist, polymath, and physicist. Over time, the idea was taken up by some of the world’s top scientists and futurists, including Ray Kurzweil, Google’s Director of Engineering, in his book “The Singularity Is Near.”
More recently, a number of leading thinkers including Stephen Hawking, Bill Gates, and Elon Musk have issued dire warnings about the consequences of the singularity with a general AI shedding the bonds of human control and essentially destroying humanity.
As I read through these scenarios and their accompanying warnings, I’m still left with the fundamental question of “Why?”
Even though we, as humans, represent an infinitesimally small life force in the universe, we still operate with some overarching forms of logic, and the idea of AIs imbued with things like motivation, purpose, and intentions still doesn’t make sense to me.
Yes, I understand how the creation of a super A.I. technology can lead to one of us triggering the “mother of all mistakes.” But when technology takes over, then what?
As always, your comments and feedback are encouraged. Please let me know your thoughts.