Friday February 24, 2017
By Thomas Frey
The DaVinci Institute
I installed my Nest Thermostat a little over a year ago. This “learning” machine was billed as being able to study the habits of people and adjust the settings to optimize both temperature and energy usage.
But ever since then I’ve found myself in a constant battle with my thermostat. It’s cooling things down when I need heat, warming things up when I’d rather be cool, and the amount of energy it’s saved is far less than the loss of productivity I’ve experienced from being uncomfortable.
This is also true with my other “smart” devices.
My washing machine still doesn’t understand the fabrics it’s trying to wash. My smart door lock still doesn’t know who I am. And our home security system does a far better job of keeping the good guys in, instead of the bad guys out.
Much of the “smartness” we’ve added to our lives has been in meager doses, slightly better than before, but not much.
That said, the level of intelligence in our homes, cars, clothes, and offices is about to move quickly up the exponential learning curve as connected devices combine remote processing power with everything around us.
Our orange juice bottles, cans of soup, and boxes of crackers will all have a way of reordering themselves when inventories get low. Toasters will soon be toasting reminders onto the sides of our bread so we won’t forget birthdays and anniversaries.
Biometric coffee makers will know exactly how much caffeine to put into our coffee, and our fireplace will even know what color of flame we’re in the mood for.
If I’m feeling ill, not only will my devices know what’s wrong, they’ll be able to scan my home and give me a quick recipe for a cure.
“Add 2 oz of turpentine from the garage, 3 tablespoons of shoe polish, four capfuls of Listerine, and 2 cough drops to a cup of boiling water, and what floats to the top will fix your problem.”
I refer to this as “MacGyvering medicine.”
Our learning machines will pave the way for a hyper-individualized world where everything around us syncs perfectly with our personal needs and desires. But that’s the point where the train begins to derail, and all our best intentions start to work against us. Here’s why.
Some background on machine learning
Machine learning is an offshoot of the early work done on expert systems, neural networks, and artificial intelligence in the 1980s.
Since then we’ve figured out how to connect devices so that one can talk to another, and a pervasive Internet now attaches remote capabilities to embedded chips. Today’s machine learning has morphed into something far different from anything researchers dreamed of in the 1980s.
With algorithms that can “learn” from past data, machine learning uses sophisticated forms of predictive analysis and decision trees to closely simulate the human decision-making process.
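To make the idea of “learning from past data” concrete, here is a toy sketch of the simplest possible decision tree, a one-level “decision stump” that learns a temperature threshold from past thermostat behavior. All the data and names below are invented for illustration; no real thermostat works this simply.

```python
def learn_stump(samples):
    """samples: list of (outdoor_temp, user_turned_heat_on) pairs.
    Returns the temperature threshold that best separates the
    heat-on cases from the heat-off cases in the past data."""
    best_threshold, best_errors = None, len(samples) + 1
    candidates = sorted({temp for temp, _ in samples})
    for threshold in candidates:
        # Rule under test: predict "heat on" whenever temp < threshold.
        errors = sum(1 for temp, heat_on in samples
                     if (temp < threshold) != heat_on)
        if errors < best_errors:
            best_threshold, best_errors = threshold, errors
    return best_threshold

# Hypothetical history of what the user actually did:
history = [(30, True), (40, True), (55, False), (65, False), (70, False)]
print(learn_stump(history))  # the learned split point
```

A real system stacks many such splits into a full tree, and over many sensors rather than one, but the principle is the same: the rule is extracted from observed behavior rather than programmed in.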
As the number of sensors grows and the amount of data increases, the human-machine relationship will become more refined, and the line between personal decisions and machine decisions will become increasingly hard to draw.
At the same time, machine learning creates a number of quandaries or paradoxes for us to contend with.
Paradox #1 – Optimized humans will become less human
If every smart device were able to tap into the mood of people it came into contact with, it could easily make decisions for them, and in the process, optimize their performance.
I’ve always been drawn to the idea of walking into a building and having it recognize me. Parking spaces magically appear; the pathway to where I’m going lights up; music in the air perfectly matches my mood; temperature, humidity, and environmental conditions instantly sync with my body; and impeccably prepared food supernaturally appears whenever I’m the least bit hungry.
This utopian dream of living the easy life certainly has its appeal, but grossly oversimplifies our need for obstacles to overcome, problems to wrestle with, and adversarial challenges for us to tackle.
When life becomes too simple, we become less durable.
Without the need to struggle, we become less resilient. If we were to find ourselves living the soft, cushy life on easy street, every new danger would leave us cowering in fear, unable to muster a response to the hazards ahead.
Paradox #2 – Originality becomes impossible when all possible options can be machine generated
Humans place great value on creativity, originality, and discovery. History books are filled with talented people who figured out how to “zig left” when everyone else “zagged right.”
Recently, a company called Qentis offhandedly claimed its computers were in the process of generating every possible combination of words, and preemptively copyrighting all creative text.
The same approach could generate every possible combination of musical notes, letting such a company claim first rights to every “new” musical score.
Similarly, Cloem is a company that has developed software capable of linguistically manipulating the claims on a patent filing, substituting keywords with synonyms, reordering steps, and rephrasing core concepts in order to generate tens of thousands of potentially patentable “new” inventions.
In much the same way computers are capable of generating every possible combination of lottery numbers to guarantee a win, patent and copyright trolls will soon have the ability to play their game of “fleecing the innovators” at an entirely new level.
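It’s worth noting that, unlike a lottery’s few million tickets, the combinatorics of language make truly exhaustive generation infeasible. The vocabulary size and sentence length below are illustrative assumptions, not figures from Qentis, but the arithmetic makes the point:

```python
# Back-of-the-envelope arithmetic on "every possible combination" claims.
vocabulary = 10_000       # a modest working vocabulary (assumed)
sentence_length = 15      # words in one short sentence (assumed)

combinations = vocabulary ** sentence_length
print(f"{combinations:.2e} possible 15-word sentences")

# Even at a billion sentences per second, enumeration takes far longer
# than the age of the universe (~1.4e10 years).
years = combinations / 1e9 / 3.15e7
print(f"{years:.1e} years at one billion sentences per second")
```

So the real threat isn’t literal enumeration; it’s targeted generation around existing works, which is exactly the trolling strategy described above.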
More importantly, this practice muddies the concept of originality, and compromises an individual’s contribution if a version of every “new” idea already exists.
Paradox #3 – Perfection eliminates dependencies, removes our sense of purpose, and will destroy our economy
Humans are odd creatures. We have exceptions to every rule, we value intangible things based on our emotional connection to them, and our greatest strength is flawed logic.
One person’s deficiencies are counterbalanced by another person’s over-adequacies. Individually we’re all failures, but together we each represent the pixels on life’s great masterpiece.
Wherever we find insufficiencies, we create dependencies to help fill the gap, and every “need” produces economic value.
Using this line of thinking, humans cannot exist as self-sufficient organisms. We all pride ourselves on being rugged individualists, yet we have little chance of surviving without each other.
Machine learning comes with the promise that we’ll soon become stand-alone organisms, content in our surroundings, wielding off-the-chart levels of intelligence and capabilities exceeding our wildest imagination.
However, this is where the whole scenario begins to break down.
Self-sufficiency will lead to isolation and our need for each other will begin to vanish. Without needs and dependencies, there is no economy. And without the drive for fixing every insufficiency, our sense of purpose begins to vanish.
A super-intelligent machine is meaningless if there is nothing to apply its intelligence to. Like a perpetual motion machine that never gets used, it has little purpose for existing.
How do we make the best possible decision?
Yes, I love the idea of having a laundry soap dispenser that is connected to sensors in the washing machine and able to mix multiple channels of organic ingredients dynamically to suit the conditions of the wash and optimize the cleaning process.
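A dispenser like that would amount to a small set of learned or hand-tuned rules mapping sensor readings to ingredient amounts. Here is a purely hypothetical sketch; the sensor names, thresholds, units, and ingredients are all invented for illustration:

```python
def mix_for(load_kg, soil_level, water_hardness):
    """Return a dict of ml per ingredient channel for one wash.
    soil_level: 0 (clean) to 3 (heavily soiled), from an optical sensor.
    water_hardness: grains per gallon, from an inline hardness sensor."""
    mix = {"surfactant": 20.0 * load_kg}   # base dose scales with load weight
    mix["enzyme"] = 5.0 * soil_level       # more enzymes for heavier soil
    if water_hardness > 7:                 # hard water needs a softener boost
        mix["softener"] = 2.0 * (water_hardness - 7)
    return mix

print(mix_for(load_kg=4, soil_level=2, water_hardness=10))
```

The “smart” part is that the coefficients wouldn’t be fixed, as they are here, but adjusted over time from wash results, which is exactly the kind of feedback loop machine learning is suited for.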
I also love the idea of not having to make so many decisions. Until now, every new device seems to add more decision points to my daily routine, not fewer.
However, we need to be aware of the quandaries ahead. Not all changes are for the better, and many times simple little shifts will have far reaching ripple effects that force us to rethink our systems, our communities and our way of life.
Sometimes our best intentions are little more than a mirage that leads us somewhere we never intended to go.
Machine learning is neither good nor bad. It’s up to us to decide.