AI Investigations: Philosophy Is Not Dead

Why can we think about the universe when the universe cannot think about us? This simple yet profound question leads to fascinating philosophical, theological, and scientific investigations. These same considerations converge in the arena of artificial intelligence research and its goal of artificial general intelligence. Artificial intelligence is a burgeoning field, but the research is not strictly a scientific endeavor. It necessarily involves philosophical reasoning that, I think, is reminiscent of the intelligent agency behind the origin of the universe.

The pursuit of artificial general intelligence (AGI) requires that scientists figure out how to build and program a machine with the capacity to contemplate things outside of itself, as well as its place within a larger context. Even more modest projects aiming for artificial narrow intelligence (ANI) that can work through open-ended problems mandate that scientists carefully consider how we think and reason.

AI conversations often highlight how fast computers perform calculations compared to humans. Clearly, programs compute faster and work through logical processes faster than we do. However, the salient question is how well computers can make decisions based on arguments derived from incomplete data that do not permit definitive conclusions. It turns out that philosophy, specifically the philosophy of argumentation, plays an integral role in describing and developing the processes for handling such situations.
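One formalism from this literature is the abstract argumentation framework: a set of arguments plus an “attacks” relation between them, from which a machine can compute which arguments ultimately hold up. The Python sketch below is my own minimal illustration of that idea; the argument names and attack relation are hypothetical, not drawn from the cited article.

```python
# Minimal sketch of a Dung-style abstract argumentation framework.
# Arguments are labels; attacks is a set of (attacker, target) pairs.
# The grounded extension (the cautiously acceptable arguments) is the
# least fixed point of the "defended" operation below.

def grounded_extension(arguments, attacks):
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(candidate, accepted):
        # An argument is defended if every one of its attackers is
        # itself attacked by something already accepted.
        return all(
            any((d, att) in attacks for d in accepted)
            for att in attackers[candidate]
        )

    accepted = set()
    while True:
        new = {a for a in arguments if defended(a, accepted)}
        if new == accepted:
            return accepted
        accepted = new

# Hypothetical example: B attacks A, and C attacks B.
# C is unattacked, so it defeats B, which in turn reinstates A.
args = {"A", "B", "C"}
atk = {("B", "A"), ("C", "B")}
print(grounded_extension(args, atk))  # {'A', 'C'} (set order may vary)
```

Even this toy example shows the philosophical flavor of the task: deciding which arguments to accept is a matter of defense and defeat, not of proof.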

Real-life Knowledge Is Usually Tentative

These AI scenarios differ from reasoning in the mathematical arena in that real-life knowledge is not monotonic. In formal (monotonic) logic, conclusions drawn from a set of basic axioms remain true no matter what else is learned, so the set of “known” things only grows. The nature of formal logic disallows the possibility of contradiction or revision when new information is acquired. But the conditions required for formal logic are rarely met in everyday scenarios. Consequently, one must assess the conditions, evaluate different options for explaining them, weigh different options for how to proceed, and ultimately decide on the best way forward.
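To see what non-monotonicity looks like in practice, here is a minimal sketch (my own illustration, not code from the cited article) of a default conclusion being withdrawn when a new fact arrives, something a strictly monotonic formal system cannot do.

```python
# Minimal sketch of non-monotonic (defeasible) reasoning: conclusions are
# drawn by default and retracted when new, conflicting facts are learned.

def conclusions(facts):
    known = set(facts)
    # Default rule: birds normally fly, unless an exception is known.
    if "bird" in known and "penguin" not in known:
        known.add("can_fly")
    return known

print(conclusions({"bird"}))             # includes 'can_fly'
print(conclusions({"bird", "penguin"}))  # 'can_fly' is no longer concluded
```

Adding the fact “penguin” shrinks the set of conclusions, which is exactly the behavior monotonic logic rules out.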

As the authors of one article state,1

. . . there are . . . a number of fundamental distinctions between the concepts “P is a formal proof that T holds” and “P is a persuasive argument for accepting T.”

Which Ethical System?

However, humans often disagree about whether an argument is persuasive or what the best course of action is. Any form of AI must figure out how to navigate the reality that most circumstances in life don’t have a single best answer and that any solution has benefits and consequences. Evaluating various benefits and consequences almost always involves some system of ethics and morals. This raises the question of what system the AI should use.

In our society, we seem to be moving in the direction that everyone determines what is right or wrong for themselves. Do we really want an AI (that might make decisions faster and respond more quickly than humans) making decisions with a subjective moral code? Suppose an AI makes a decision that results in someone’s injury or death. Whom do we hold accountable? Can the AI be held responsible, or would its creator be?

Maybe a Completely Rational, Incredibly Powerful Machine Is Not the Best Option

We assume that creating an AGI would work out something like Data on Star Trek: The Next Generation. Although Data was more capable than the crew of the Enterprise in almost every way (strength, intelligence, speed, etc.), he always seemed to know when to submit to the authority of his superiors. The creators of the Star Trek universe can write things however they like, but a powerful intelligence in real life would pose quite a dilemma. If Data has a genuine awareness of self, why would he choose to submit, especially when he knows his superiors’ decisions are incorrect? Human history is littered with examples of people who became powerful enough to impose their will on others (with great destruction resulting). The very thing we hope AI will do (perform human tasks with superhuman skill) also provides the platform to rain destruction upon us. This possibility brings us back to the previous point: How would we instill a set of values and ethics into such a machine, and how would we choose which values and ethics to use?

Here’s where I see a parallel between AI research and cosmology. Some scientists investigating the history and origin of the universe have claimed that philosophy is dead (or at least rather worthless) because only science has the proper tools to provide the answers we seek. But a brief look into the pursuit of AI reveals the naivety of such a statement. Well-established philosophical principles, including ethical considerations, are helping guide the development of basic AI capabilities (short of self-awareness), and the goal of AGI will require further philosophical and theological input. In this way, it seems to me that a wise pursuit of AGI provides an argument for the existence of God. The very questions researchers (physicists, astronomers) ask assume philosophical reasoning. Did the universe begin to exist? Has it existed forever? These are conceptual questions. A study of nature alone does not furnish them; rather, intelligent, nonartificial agents created in the image of a superintelligent Being are equipped to ask them.

Endnotes
  1. T. J. M. Bench-Capon and Paul E. Dunne, “Argumentation in Artificial Intelligence,” Artificial Intelligence 171, nos. 10–15 (2007): 619–41, doi:10.1016/j.artint.2007.05.001.