Eight human-versus-machine competitions: Where are we heading?
Friday April 2, 2021
By Thomas Frey
The Davinci Institute
How do we evaluate and communicate the evolving capabilities of artificial intelligence (AI)? One way to do it is by describing how closely AI matches up with “real” human intelligence.
We can’t seem to get enough of creating “head-to-head” human matchups with AI in games. Pitting our human champions against AI makes for great headlines and human-interest stories because it illustrates the capabilities of AI in ways that most people can easily relate to.
In the last 70 years, we’ve had some memorable moments where computers with increasingly sophisticated AI prevailed in some very complex games, humbling world champions in the process:
1. Tic-Tac-Toe, 1952
In his Ph.D. dissertation on Human-Computer Interaction, University of Cambridge doctoral candidate A.S. Douglas programmed a room-sized EDSAC computer mainframe to flawlessly compete in the game of “Noughts and Crosses.” There’s no record of the name of its first human “victim,” but whoever it was, they had to enter their move using a rotary telephone dial.
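Tic-tac-toe’s game tree is small enough that a modern program can do what Douglas’s machine did: play perfectly by searching every possible continuation. Below is a minimal sketch (my own illustration, not Douglas’s original EDSAC program) using plain minimax: X tries to maximize the outcome, O to minimize it, and perfect play from an empty board is provably a draw.

```python
# Minimax over the full tic-tac-toe game tree (a few hundred thousand
# positions — small enough to search exhaustively with no pruning).
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) for `player`: +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full, nobody won: draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '  # undo the trial move
        if best_score is None or \
           (player == 'X' and score > best_score) or \
           (player == 'O' and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# From the empty board, perfect play by both sides can only end in a draw:
score, first_move = minimax([' '] * 9, 'X')
print(score)  # 0
```

A program playing `best_move` at every turn can never lose — which is exactly what “flawless” tic-tac-toe means.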
2. Backgammon, 1979
A world-class chess player and AI/computer science instructor at Carnegie Mellon University, Hans Berliner was focused on developing an AI chess program. As an interim step, though, he devised BKG 9.8, a backgammon program that defeated Luigi Villa, the world champion, by a score of 7-1.
This was the first time a computer defeated a world champion in any gaming competition. Two years later, Berliner predicted that a chess AI program would defeat the world chess champion by 1990. He was off by just seven years.
3. Checkers, 1994
Marion Tinsley was a Ph.D. mathematician and university math professor at Florida A&M and Florida State … as well as the longtime world checkers champion. In 1992, he initially defeated Chinook, a checkers AI program devised by researchers at the University of Alberta in Canada.
Two years later, they matched up again, but Dr. Tinsley withdrew after several games for health reasons. He was diagnosed with cancer shortly thereafter. Given those circumstances, it’s a bit callous to call this a loss, but regardless, Chinook continued to improve from that point on. In 2007, its developers solved the game of checkers outright, proving that perfect play by both sides always ends in a draw.
4. Chess, 1997
For many of us, chess was long considered the epitome of gaming intellect and strategy. International championships were covered by mainstream media. World champs like Boris Spassky, Anatoly Karpov, and Bobby Fischer were household names.
Computer scientists, including Hans Berliner of backgammon programming fame, had worked on chess-playing computer programs since the 1950s. In 1989, IBM assembled an all-star team to develop its AI chess program, known as Deep Blue. In its first match against world champion Garry Kasparov in 1996, the human prevailed. But in their six-game rematch a year later, Deep Blue came out on top with two wins, one loss, and three draws.
5. Scrabble, 2007
In 2007, the AI Scrabble program Quackle defeated former Scrabble world champion David Boys. To earn the match against Boys, Quackle first had to defeat another AI Scrabble program, Maven. Both then and since, Quackle has been used by the media during world-class Scrabble tournaments to illustrate the possible plays confronting the competitors.
6. Jeopardy, 2011
Two of Jeopardy’s most successful contestants to date, Ken Jennings and Brad Rutter, agreed to play a three-game exhibition match against IBM’s Watson – a room-sized computer housed in a separate location due to its noisy operation and need for a cool environment. The humans had their moments, but in the end Watson was victorious, despite the fact that Jeopardy “answers” are known for their subtlety and wordplay.
7. Go, 2016
While its rules are simple, the Chinese game of Go is extremely complex. The number of possible board configurations is said to be greater than the number of atoms in the observable universe. Contributing to the complexity is the fact that, unlike board games like checkers and chess, where captures gradually thin the board, a Go board fills with stones as the game progresses, and the number of moves to consider at each turn dwarfs that of chess. In spite of these hurdles, in 2016, AlphaGo – a program built by DeepMind, a UK company purchased by Google two years earlier – defeated Go world champion Lee Sedol, winning four of the five games in their match.
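The “more atoms in the universe” claim is easy to sanity-check with arbitrary-precision integers. The figures below are my own back-of-the-envelope bounds, not from the article: a 19×19 board has 361 intersections, each empty, black, or white, giving at most 3^361 arrangements (an overcount, since it ignores Go’s legality rules), against a commonly cited estimate of roughly 10^80 atoms in the observable universe.

```python
# Upper bound on Go board arrangements: 3 states per intersection, 361 intersections.
upper_bound_positions = 3 ** 361      # ignores legality rules, so an overcount
atoms_estimate = 10 ** 80             # common rough estimate for the observable universe

print(len(str(upper_bound_positions)))            # 173 — i.e. roughly 10^172
print(upper_bound_positions > atoms_estimate)     # True, by ~92 orders of magnitude
```

Even this crude bound makes the point: brute-force enumeration, which worked for tic-tac-toe and (with decades of effort) checkers, is hopeless for Go — hence AlphaGo’s reliance on learned evaluation rather than exhaustive search.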
8. Texas Hold ‘Em, 2017
One year later, the Libratus AI system took on four of the world’s top poker players over a three-week period in “Heads-Up No-Limit Texas Hold ‘Em” … and beat them soundly. Developed by researchers at Carnegie Mellon University, Libratus represented a major step forward for AI in imperfect-information games, where players must pursue a long-term strategy rather than seizing on short-term wins and losses.
Why Do It?
AI is proving its capabilities more and more each year in the sciences, military, transportation, and business functions like Human Resources and e-commerce. One would think we’ve gone beyond the day when we ask AI to “prove” itself to us by beating us in board or computer games.
However, these exhibitions serve an important purpose. They demonstrate the capabilities – and hint at the possible perils – of machine thinking and AI decision-making in ways the average person can relate to.
Knowing that AI is powering stock picks and selecting job candidates might be uninspiring to many. But when Ken Jennings, tongue in cheek, appended the comment, “I, for one, welcome our new computer overlords,” to his Final Jeopardy answer during the Watson exhibition, it put AI into a relatable context for us all.
That’s why AI developers will and should continue to conduct these kinds of real-life demonstrations. I wrote last year about a recent demonstration of the human-like capabilities of the GPT-3 language prediction tool. This AI program made headlines by drafting essays that read and sounded almost human-like.
But as we said at the time, AI has a long way to go. GPT-3 requires inputs and prompts. It’s a language predictor that determines, usually correctly, the thought or statement that should logically come next given what it’s presented with.
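The idea of “predicting what should logically come next” can be illustrated with a toy model. The sketch below is only a conceptual illustration – GPT-3 itself is a massive neural network, nothing like this – but it shows the bare mechanics of next-word prediction: count which words tend to follow which in some training text, then pick the most frequent continuation. The tiny “corpus” here is made up for the example.

```python
from collections import Counter, defaultdict

# A made-up training corpus for illustration only.
corpus = ("the computer beat the champion and the computer "
          "made headlines").split()

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'computer' — it followed "the" twice, "champion" once
```

Scaled up from word-pair counts in one sentence to billions of parameters trained on much of the internet, this same “guess the next token” objective is what makes GPT-3’s output read almost human – and also why it needs a prompt to react to.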
And for all its fluency, GPT-3 hasn’t escaped the limits of the AI that preceded it. Whether it’s the rule-based systems of GOFAI, “good old-fashioned artificial intelligence,” or today’s statistical learners, we haven’t progressed much beyond systems based on programmed learning and representational information about the world, rather than intelligently perceived information – wisdom that’s accumulated just like the human brain would gather and process it.
In that sense, AI will still remain artificial until it can defeat a human being in the Game of Life -- and I don’t mean the Hasbro version!