Friday August 18, 2017
Of all the topics I’ve written about, this one scares me the most.
Yes, artificial intelligence, one of humanity’s greatest achievements, can also sow the seeds of our own destruction. Weaponized A.I. will range from relatively minor weapons designed to coerce a specific action all the way up to nation-vs-nation, full-blown A.I. wars.
Artificial intelligence, while still in its infancy, is growing up fast. A recent Cylance survey showed that 62% of security experts think we’ll see the first incidents of weaponized A.I. happening in less than a year.
Several aspects of artificial intelligence make its use as an offensive weapon different from anything we’ve encountered in the past.
Attacks can be highly individualized, carefully directed towards the greatest vulnerabilities of key individuals, formed around very specific threats, extortions, blackmails, and intimidations.
The British TV show Black Mirror does a particularly good job of demonstrating how a simple threat can spiral out of control with its “Shut Up and Dance” episode.
In the hands of a terrorist, weaponized A.I. can also be formed around an unpredictable chaos engine, whose sole purpose is to disrupt as many people, places, and things as possible.
Using next generation A.I. masking tools, the wrongdoers will maintain a far-distant relationship from the path of destruction they’ve created, hiding any direct ties to the actual puppet masters in the background.
Once a well-crafted A.I. weapon is launched, it can operate on its own creating devastation and mayhem for months, years, perhaps even decades into the future.
Ironically, the greatest tool for fighting an A.I. weapon is more A.I. This will likely become our next big arms race with the smart good guys trying to stay one step ahead of the smart bad guys.
A.I. weapons will range from students wanting a better grade in class, to terrorists threatening to destroy an entire country.
Some of the ideas that follow have the potential of unleashing unspeakable evil, and I’ve had to wrestle with whether or not to make these public. But after considerable reflection, I’ve concluded that anything I can think of, terrorists and evildoers are also capable of coming up with.
Starting with an Innocent Façade
Ayzenberg is an A.I. marketing company that leverages consumers’ social media activity by turning it into data that can be segmented to create incredibly specific marketing strategies. Using a series of machine-learning algorithms, it can analyze social speech, along with basically everything else that you see, post, and share across all social media platforms.
Over time, Ayzenberg will know you far better than you know yourself.
From a positive perspective, it will create more efficient systems for leveraging advertising dollars, and for you as the consumer, to only see products and deals that you’re interested in.
However, an A.I. system like this will be equally good at scoping out your main vulnerabilities, weaknesses, and liabilities.
In much the same way Google’s personalized marketing system delivers targeted ads, a weaponized intimidation engine will be capable of delivering highly targeted threats.
As A.I. cyber crimes escalate, we run the risk of our social structures deteriorating into invisible mafia-style communities, with blackmailers ruling the blackmailed and few, if any, capable of understanding the behind-the-scenes war zones.
Understanding the Targets
People who live in obscurity, eking out a living just to keep their families afloat, generally have less to worry about. But they can still find themselves as unwitting pawns in a much larger scheme.
Most primary targets will be the fame-seekers, those driven by accomplishment, status, and position. All the trappings of power and success make them the most vulnerable.
Virtually any person, put under a microscope, can be threatened with his or her own character flaws.
Perhaps the greatest danger comes from knowing personal weaknesses, and in most cases, that means the person or thing they care about most. For an A.I. seeking leverage, the quickest results will come from the greatest point of leverage, and whether it’s a child, a parent, a valuable possession, or someone’s reputation, one well-crafted threat can turn a mild concern into instant blackmail.
Virtually every situation presents an opportunity for weaponized A.I., but each will require different strategies, targets, and techniques. Once a clear objective is put into place, the A.I. will use a series of trial and error processes to find the optimal strategy.
A.I. tools will include incentives, pressures, threats, intimidation, accusations, theft, and blackmail. All can be applied in some fashion to targeted individuals as well as those close to them.
If a $100,000 reward is offered for kidnapping an eight-year-old girl, even some who pride themselves on being law-abiding citizens may jump at the chance, rationalizing that if they don’t do it, someone far less nice will, and trusting that the A.I. will “protect” them.
Each of these “games” will be played until a final outcome has been achieved. In reality, there is little difference between this type of game and an A.I. playing Go, Jeopardy, or chess.
1.) Stock Market Manipulation – There are only a small number of highly influential stock market analysts who do all the math for determining the true value of a stock. These people can be influenced without them ever knowing they’re being manipulated. Or they can be outright threatened. This kind of manipulation can be accomplished by making a few key stocks look better than normal and others look worse than normal. Most likely it will involve strategic people placing critical “buy” or “sell” orders at a specific time.
2.) Blackmailing a Judge – Judges will soon find themselves in a particularly vulnerable position. Even with juries present, judges remain the most critical influencer in any case’s outcome. Adjusting a particular A.I. weapon from 1-10 on the subtlety scale, the threat to a judge can range from a bedbug infestation in the jury’s hotel to a bomb threat at the school of the judge’s daughter. Even with the FBI watching, veiled threats and paranoia can become an insidious influencer.
3.) Threatening a Politician – Living in the U.S., where we have nearly 90,000 units of government (city, state, county, special taxing district, etc.), finding a politician to manipulate is relatively easy. With American-style democracy, an elected official who lives in the public eye under constant scrutiny can either be forced to “play ball” or find himself or herself replaced by someone who will. Quite often one or two people will control massive budgets, and many of our current checks-and-balances systems are largely window dressing for what’s really happening in the background.
4.) Hijacking a City – Every city is made up of interdependent systems that function symbiotically with their constituency. Stoplights, water, electricity, sewage, traffic control, garbage removal, tax assessment, tax collection, police, and fire departments are just a few of the obvious trigger points. To use one example, if a water treatment plant were crippled, stoplights shut down, and all the power for police and fire departments turned off, a city would be left nearly non-functional until those systems could be restored. Once an A.I. can disable a single city, the attack can easily be replicated to affect many more.
5.) Funding a Startup – Whether it’s corporate funding, venture capital, or angel investors, it all boils down to decision-makers. With the right set of circumstances, every funding situation can be turned into a bidding war, capturing the imagination of a much larger audience of potential users in the process.
6.) Hosting the Olympics – Every two years, cities around the world make bids to the International Olympic Committee to host the Olympic Games. Membership of the IOC consists of 95 active members and 43 honorary members. As with every decision-making group, there is an inner circle that wields far more influence, and these individuals can be swayed with aggressive A.I. tactics.
7.) Destroying a Religion – The quickest way to destroy a religion is through scandal and controversy, and while every religious organization already has its share, leveraging a series of videos with an incessant string of threats, confessions, and lies can drive a serious wedge between leaders and followers. This will cause a number of splinter groups to form. Other aggravating factors that can speed the demise include significant financial loss, claims of false doctrine, overt favoritism, and theft.
8.) Destroying a Country – At the core of every country are its financial systems. Turning a country into a game board, using currency values as the defining metric, weaponized A.I. could be directed to attack essential communication and power systems. Once those are disabled, the next wave of attacks could be focused on airports, banks, hospitals, grocery stores, and emergency services. Every system has its weakest link, and this kind of exploitative weaponry will be relentless until each point of failure is exploited and the currency goes into freefall.
Key Points of Intimidation
Throughout society there are “people of influence” who are critical for maintaining the systems, business operations, and processes that govern our lives. These individuals become the most “at risk” for becoming a target of weaponized A.I.
- Stock Analysts – The value of our entire stock market hinges on the assessments of a few key individuals.
- Politicians – Any elected official can be bullied into voting in favor of a specific bill or funding proposal.
- Judges – The outcome of most court cases is decided by a single judge.
- Newspaper Editors – These people decide what news is important and what makes the front page.
- Corporate CEOs – CEOs are a huge factor in determining the success or failure of a business.
- Medical Doctors – Doctors and physicians practice one of the most respected professions on the planet.
- Military Generals – Far beyond the field of war, military generals make far-reaching decisions on a daily basis.
- Insurance Company Executives – In many insurance coverage situations, they decide who lives and who dies.
- Venture Capitalists – Can a VC be coerced into producing a well-funded term sheet with favorable conditions?
- Angel Investors – For every VC there are potentially hundreds of angel investors.
- Bankers – Can bankers be forced to issue a huge loan?
- Corporate Investors – Since corporations are less personally accountable for investment decisions, their support may be easier to coerce.
- Accelerators – Winners and losers in an accelerator competition are often only a single vote apart.
- Grant-Makers – Every philanthropic process boils down to a few decision-makers.
- Foundations – Virtually every foundation grant has exceptions to its normal funding criteria. In these scenarios, it all boils down to the judgment calls of the gifting few.
- Sponsors – Many sponsor relationships are worth millions.
Landmark Decisions in the Future
Will our most important decisions in the future be made by well-informed individuals or by a heavily biased A.I.?
- Should cryptocurrencies replace national currencies?
- Should we have a single world leader?
- Should dying languages be allowed to live or die?
- How should life and death decisions be made in the future?
Every major system has the potential of being hijacked by an evil A.I. in the future. Either through the tech itself, the people that control it, or a combination of both, virtually all future systems will be vulnerable.
- Stock Exchanges
- Power Plants
- City Water Supply
- Security Systems
- Cloud Storage Systems
- Election Systems
As our equipment becomes more universally connected to the web, commandeered devices will become an ongoing concern. For example, the same drone that can deliver packages can also deliver bombs and poison, or spy on your kids.
- Flying Drones
- Driverless Cars
- IoT Devices
- Delivery Trucks
- Data Centers
- Smart Houses
Anyone who thought that privacy wasn’t all that important in the past will quickly come to an entirely different conclusion once weaponized A.I. touches them directly.
Privacy has a way of masking our personal foibles and overall weaknesses. Look for an entire new wave of privacy concerns and privacy demands to take center stage over the coming years.
Until recently I had largely dismissed the warnings of Elon Musk, Bill Gates, and Stephen Hawking about the dangers of A.I. Yes, the super-advanced A.I. that they’re talking about will be problematic on many levels, but we’re still many years away from that being a problem.
The part that I was missing was not artificial intelligence itself, but rather the sinister people capable of controlling it from the background.
Weaponized A.I. is coming. The first iteration will be crude and poorly implemented, but the second and third generations of this technology will be far more menacing.
Once again, the greatest tool for fighting weaponized A.I. is more A.I.
The only way to minimize the threat is by upping the ante and creating a more powerful A.I. to combat the dangerous stuff.
We cannot turn back the hands of time, or suddenly ban all further A.I. research. Progress will happen with or without our blessing.
Instead, we must navigate our way through the coming dicey years in the same fashion we’ve worked through other dangerous threats like nuclear weapons, chemical warfare, and suicide bombings.
It’s never easy, but in the end the benefits will far outweigh the penalties we must endure.
But please don’t think that I have all the answers. Let us know what you think. Will we survive the murky times ahead, or have we gotten ahead of our capabilities and now face a no-win situation?