Artificial Intelligence: Mastering Chess, Then Societal Challenges?

In May 1997, an IBM chess-playing computer called Deep Blue defeated reigning world champion Garry Kasparov in a match played under standard time controls, the first such victory in history. It took roughly four decades for chess programs and hardware to advance from the first chess-playing programs of the mid-1950s to besting a world champion. In the twenty-plus years since, however, chess programs running on relatively common hardware (like that found in smartphones) have routinely beaten even the best human players.

The strongest conventional chess programs use handcrafted evaluation functions, refined by human experts over many years, to assess positions and play more effectively. A team of computer scientists at DeepMind adopted a more general approach: a system that requires nothing beyond the rules of the game. Called AlphaZero, the system learns chess by playing against itself and training its neural networks on the outcomes of those games. More importantly, the same setup has also mastered shogi (a more complex game than chess) and the far more complex game of Go. By mastery, I mean this “self-taught” program matches or outplays the best programs designed specifically to play just one of these games.1
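
To make the self-play idea concrete, the sketch below shows the same learning pattern in Python, applied to a deliberately tiny game. It is an illustration of the concept only: AlphaZero uses deep neural networks guided by Monte Carlo tree search, whereas this toy uses a simple lookup table, and the game itself (a Nim-style pile of stones) is my own stand-in, not anything from the AlphaZero paper.

    import random
    from collections import defaultdict

    # Toy stand-in for a board game: a pile of stones. A move removes 1 or 2
    # stones, and whoever takes the last stone wins. Only the rules are
    # encoded here; no strategy is built in.
    PILE = 10

    def legal_moves(pile):
        return [m for m in (1, 2) if m <= pile]

    # Tabular "value network": estimated chance that the player to move wins
    # from a given pile size. (AlphaZero uses a deep neural network plus tree
    # search; a lookup table keeps the sketch short.)
    value = defaultdict(lambda: 0.5)

    def choose_move(pile, explore=0.1):
        if random.random() < explore:            # occasional random exploration
            return random.choice(legal_moves(pile))
        # Otherwise leave the opponent in the worst estimated position.
        return min(legal_moves(pile), key=lambda m: value[pile - m])

    def self_play_and_learn(games=20000, lr=0.05):
        for _ in range(games):
            pile, to_move, visited = PILE, 0, []
            while pile > 0:
                visited.append((pile, to_move))
                pile -= choose_move(pile)
                to_move ^= 1
            winner = to_move ^ 1                 # the side that took the last stone
            for p, side in visited:              # nudge estimates toward the outcome
                target = 1.0 if side == winner else 0.0
                value[p] += lr * (target - value[p])

    self_play_and_learn()
    print({p: round(value[p], 2) for p in range(1, PILE + 1)})

After enough games, the table “discovers” the known losing positions of this game (pile sizes divisible by 3 are bad for the player to move) without ever being told any strategy. That, in miniature, is what learning from self-play outcomes alone means.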

Keep in mind that all of these programs outperform the most advanced human players. Such mastery across different arenas by a single program raises two important questions: First, does AlphaZero represent an advance toward artificial general intelligence (AGI)? Second, if we could develop an AGI, would we actually listen to what it has to say?

AlphaZero and AGIs

AlphaZero does represent a significant step toward AGI, but it’s not clear that this step brings an actual AGI much closer. AlphaZero shows that a single system can master three separate tasks (playing chess, shogi, and Go). However, it still approaches those tasks separately. As far as I can tell, AlphaZero does not take the knowledge acquired while learning chess and distill it into a more general set of principles that it can then apply to shogi. Instead, AlphaZero trains itself to play chess, then starts from scratch and trains itself to play shogi, then repeats the process for Go.
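
In code terms, AlphaZero’s generality lives in the training algorithm rather than in any transferred knowledge. The outline below makes the point explicit; every name in it is a hypothetical stand-in of my own, not DeepMind’s actual interface.

    # Illustrative only: fresh_network and train_by_self_play are hypothetical
    # stand-ins for the real system's components.
    def fresh_network(game_name):
        # A brand-new, untrained "network" for each game.
        return {"game": game_name, "weights": "randomly initialized"}

    def train_by_self_play(network):
        print(f"training a fresh network for {network['game']}")

    # One general algorithm, three independent training runs: nothing learned
    # for chess carries over to shogi or Go.
    for name in ("chess", "shogi", "Go"):
        train_by_self_play(fresh_network(name))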

Critical features, such as well-defined rules and indisputable conditions for determining wins, losses, and draws, allow AlphaZero to master these games. However, those same features mean that every new game (with new rules or different goals) forces AlphaZero to start over from nothing. This process differs markedly from how humans approach a slight variant of a known game. A human starts from the accumulated knowledge of the original game and extrapolates how the rule changes affect that knowledge. Although AlphaZero demonstrates the effectiveness of this new programming approach, it remains to be seen whether any program can employ the kind of abstraction that humans routinely use to deal with new and unpredictable situations.

Would Humans Listen?

Shortly after Deep Blue and its successors bested human grandmasters, people largely quit playing competitive matches against chess programs because the programs played a far superior brand of chess. Instead, players began using the programs to learn how to play the game better, since the programs could explore ranges of play not yet charted by humans.

Now consider a scenario in which we develop an AI (either narrow or general) with the capacity to evaluate options on something far more consequential, such as climate change or healthcare. Chess is a game with a clear objective, but healthcare requires balancing competing interests among people who value different things. Would the AI need its own values, or would humans program them in? Would we adopt an economic value system (the most efficient use of physical resources), a utilitarian one (the greatest good for the greatest number), or one based on inherent human dignity (each and every human has value)? What if those values resulted in eliminating the AI itself? And an even more practical question: Would we actually listen to the AI, or would we tweak the inputs until we got the answer we wanted from the start?

AI Would Still Point to a Creator

At first glance, it seems that the development of an AGI would undermine central pillars of the Christian faith. An AGI would challenge the idea of human exceptionalism: that humans differ not just in degree but in kind from every other creature on Earth. And artificially created, sentient beings would challenge the basics of the gospel. However, my colleague Fuz Rana argues that, while AI will become increasingly sophisticated at mimicking human behavior, it will never become self-aware. I tend to agree. Moreover, the values humans possess stem from a moral awareness that separates us from any possible AGI.

A central tenet of Christianity is that God created everything. The Bible also clearly states that humans are made in God’s image (imago Dei). Just as our ability to produce beautiful art and music flows from the imago Dei, so the human capacity to create an AGI would reflect this same quality and point to someone who created us.

Endnotes
  1. David Silver et al., “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play,” Science 362, no. 6419 (December 7, 2018): 1140–44, doi:10.1126/science.aar6404.