Saturday, August 19, 2023

Book Summary and Review III: The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do — Erik J. Larson

Introduction


In this paper, we will review Erik J. Larson’s book, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do.1 Larson’s observations on current AI algorithms and his discussion of the differences between human and machine intelligence prompt us to think more deeply about AI, human and machine intelligence, AI’s function in the techno-industrial system, and the conditions of its development. And we should think more deeply about these issues to see beyond the haze of utopian and dystopian sensationalist claims about AI. According to Larson, the myth of AI is the belief that the field is approaching a generally intelligent machine with the methods it currently uses. According to this myth, no conceptual breakthrough is necessary; we only need to further develop current algorithms and hardware to reach artificial general intelligence (AGI). Moreover, the myth purports that the coming of AGI is inevitable. The myth has another, related dimension: the human mind is a more developed version of current machine learning algorithms, not fundamentally different from them. Since there is no fundamental difference between the human mind and current AI, further developments in AI will eventually reach a human-like general intelligence. Larson says that this aspect of the myth denigrates the human mind, breeds distrust in its abilities, and might constitute the biggest obstacle to reaching AGI.

We agree with Larson that today’s AI algorithms and the human mind are fundamentally different. But we reach this conclusion from a different perspective. Larson has a humanistic view of human intelligence: he sees the human mind as a general-purpose cognitive organ. This view regards human intelligence as something unique, with a mysterious essence. Though Larson never explicitly says that the human mind has a mysterious essence that makes it unique, his observations about the human mind and his discussion of the differences between human and machine intelligence make it clear that his view is humanistic, and this view inevitably tends to regard the human mind as unique and mysterious. In contrast, we believe that the human mind is shaped by evolution. It is adapted to solve the reproductive problems our species faced during its evolutionary history. The human mind is fundamentally different from today’s AI (and will be fundamentally different from a possible future generally intelligent machine) because the evolutionary pressures humans faced were unique to our species. Today’s AI algorithms are developing under fundamentally different conditions, to solve different problems. That is why human intelligence is fundamentally different from machine intelligence, not because it has a mysterious capability that makes it a general-purpose cognitive organ.

We begin by discussing the differences between the humanistic and evolutionary views of the human mind. We then summarize the obstacles to AGI discussed by Larson. This discussion allows us to better understand where the field of AI stands today and the characteristics of today’s AI systems. We then discuss the aims of the techno-industrial system in developing AI. What do its developers expect from it? This question leads us to consider the role of AI in the system: to what problems and fields could it be applied, and what might be the consequences of its use?

Humanistic vs. Evolutionary View of the Human Mind

According to Larson, the human brain is a general-purpose cognitive organ. It is not narrow and task-oriented. It can learn and perform diverse tasks such as playing chess, driving a car, and speaking a language. Larson says that “intelligence of the sort we all display daily is not an algorithm running in our heads, but calls on the entire cultural, historical, and social context within which we think and act in the world.” He thinks that the human mind has a general “learning” ability: the brain does the same thing when it learns to speak, drive a car, solve mathematical equations, or detect cheaters in social situations. Larson’s conception of the human mind, the humanistic view, is prevalent in the social sciences. According to this view, the human mind consists of general-purpose, content-independent cognitive mechanisms. The human mind has an essence that makes it conscious, which gives it the ability to understand and learn. As a domain-independent, general-purpose information-processing system, the human mind would learn everything from scratch by observing its environment. Thus, it would be equally easy for it to learn to walk, to speak, and to engage in abstract logic.

Larson contrasts these purported abilities of the human mind with the narrowness of today’s AI systems. AI systems are not general-purpose; they are task-oriented. For example, an AI system that can play Go and beat the best human players cannot play chess. Larson claims that without a change in perspective and a conceptual and theoretical breakthrough, today’s approaches to artificial intelligence will not surmount the narrowness problem and reach human-like general intelligence. Larson’s critique of the AI field and his explanations of the shortcomings of current methods are illuminating for understanding where the field stands today. Some of his observations offer antidotes to the exaggerated claims of doom or boon about the inevitable coming of AGI and superintelligence. However, since his starting point is an erroneous conception of the human mind, he tends to underestimate the potential of the AI field.2 Despite emphasizing page after page the fundamental differences between human and machine intelligence, he falls into the trap of anthropomorphism. His only reference point for intelligence is the erroneous humanistic view of human intelligence. He cannot see that Darwinian selection could shape fundamentally different intelligences through different evolutionary pressures. To appreciate this fact, we should turn to evolutionary psychology and its findings about the human mind.

Larson divides the capabilities of intelligence into two domains: System X capabilities (narrow capabilities such as playing chess, performing complex arithmetic computations, and memorizing the whole Internet and coughing up this memorized data) and System Y capabilities (broad capabilities such as understanding, creating novel insights and ideas, and reaching a synthesis from available information). Larson says that we can build algorithms that are exceptionally intelligent in System X capabilities. But we cannot do the same for System Y capabilities, because we do not know how to simulate those capacities in code; we do not know how to reflect them in algorithms. And without System Y capabilities, we will not be able to design machines that have general intelligence.

According to Larson, we should design machines with System Y capabilities to create AGI, and he says that we are not able to simulate those capacities in code. But this is not an accurate picture of reality; the problem might be more fundamental than that. Not only can we not code those capabilities into algorithms, we do not know what these capacities are or how our brains manifest them. Understanding, learning, creativity, and the like are extremely broad concepts. They are inadequate to describe what really happens in our brains when we “learn,” “understand,” or “create.” For example, “learning” gives the impression that the human mind does the same thing when it learns different things, as if “learning” were a well-defined cognitive operation identical for every talent, so that learning to drive a car and learning to speak a language are the same. But this is not so. The neural mechanisms for learning a language and for driving are distinct, and they reside in different parts of the brain. In the case of language, we have specialized structures shaped by evolution to make us speak and understand spoken language: Wernicke’s and Broca’s areas. Therefore, learning a language is not the consequence of a general learning ability. It is the consequence of neural mechanisms that evolved to make us speak and understand what is spoken. Evolutionary psychology has demonstrated that the human mind is not a general-purpose cognitive organ but a bundle of specialized cognitive mechanisms that evolved to solve recurrent reproductive problems.

Larson criticizes today’s AI systems for being narrow. They are only good at solving specific tasks: an algorithm designed to play Go cannot recognize images; an algorithm designed to recognize images cannot chat with humans. Even for seemingly similar tasks like playing different board games, an algorithm designed to play Go cannot play chess. Larson contrasts this narrowness of AI with the general character of the human mind. We can learn diverse tasks and perform them reasonably well: the same person can play Go and chess, recognize images, drive a car, speak languages, and understand spoken words. But, as we said above, evolutionary psychology has demonstrated that we are good at many tasks not because we have a general-purpose brain, but because we have many special-purpose neural mechanisms that evolved to solve different recurrent evolutionary problems.

These mechanisms evolved to solve recurrent adaptive problems: survival and growth (food acquisition and selection, finding a place to live, combating predators and other environmental dangers, combating disease), mating (the short-term and long-term mating strategies of men and women), parenting, aiding genetic relatives, and so on. The human brain consists of layer upon layer of these ad hoc mechanisms. According to this view,

Rather than a single general intelligence, humans possess multiple intelligences. Rather than a general ability to reason, humans have many specialized abilities to reason, depending on the nature of the adaptive problems they were designed by selection to solve. Rather than general abilities to learn, imitate, calculate means-ends relationships, compute similarity, form concepts, remember things and compute representations, evolutionary psychology suggests that the human mind is filled with many problem-specific cognitive mechanisms, each designed to solve different adaptive problems.3

Some evidence shows that not even our logical reasoning is general or content-independent. Humans are better at reasoning about and solving evolutionarily relevant problems, those that strongly affect our reproductive success. One of the clearest demonstrations of this fact is the Wason selection task. When the question (whether a conditional rule of the form “If P then Q” has been violated) is formulated as an abstract problem, fewer than 25% of college students complete the task correctly. When the question is formulated so that participants must detect cheaters, performance increases dramatically: this time, more than 75% of subjects complete the task correctly. A short sketch of the shared logic appears below, followed by two Wason selection tasks that have identical logical structures:4
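To make that shared structure explicit, here is a minimal sketch in Python. The abstract card labels and the drinking-age framing are our own illustrative assumptions, not the tasks as Larson presents them; the point is only that under a rule of the form “If P then Q,” the only cards that can falsify the rule are the ones showing P and not-Q.

```python
# A minimal sketch of the logic shared by both framings of the Wason
# selection task. The rule under test has the form "If P then Q". A
# card can falsify the rule only if its hidden side could reveal a
# case of P combined with not-Q, so the only informative cards are
# the one showing P and the one showing not-Q.

def must_flip(role):
    """Return True if a card in this logical role can falsify 'If P then Q'."""
    return role in ("P", "not-Q")

# Abstract framing: four cards labeled by their logical role.
abstract_cards = ["P", "not-P", "Q", "not-Q"]
print([card for card in abstract_cards if must_flip(card)])
# -> ['P', 'not-Q']

# Cheater-detection framing (hypothetical content): "If a person
# drinks beer (P), they must be over 21 (Q)." Same logic, social content.
social_cards = {
    "drinking beer": "P",
    "drinking cola": "not-P",
    "25 years old": "Q",
    "16 years old": "not-Q",
}
print([card for card, role in social_cards.items() if must_flip(role)])
# -> ['drinking beer', '16 years old']
```

Both runs select the same logical roles; only the content differs. That identity of structure is what makes the performance gap between the abstract and cheater-detection framings so striking.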