The Thinking Machine

The development of computers is a staple of the technological gains our society has produced over the past century. Responses to this remarkable development have ranged across cultural, philosophical, industrial, and individual lines. Such variety is a natural result of the ubiquitous presence of new technologies, but as these technologies grow in complexity and capability, a more singular reaction has taken hold: fear. At the heart of that fear, as with most forms of fear, is a shared element of ignorance. The potential for new technologies to develop in ever more complex ways only compounds it, eliciting thoughts of power and control: fear as a function of not knowing the future. A common question is “Can a robot be created that has the ability to think?” This question appeals mostly to that fear; it is less a question of technological potential and more a question of personal threat. Central to the debate over robots “thinking” is the role of humans in their creation, and the evolutionary process by which humans themselves were created. Humans create robots, so robots can become increasingly human-like, but they will likely never be able to “think” as humans do.

The question of machines “thinking” was a central focus for Turing (1950) in his seminal paper on computing machinery. He identified it as an inherently human-centric problem, and so replaced the question “Can machines think?” with another question “which is closely related to it and is expressed in relatively unambiguous words” (Turing, 1). To accomplish this he devised an experimental design that would test whether a machine could fool a human judge. In the game, the judge communicates separately with a human subject and with a computer, and both contestants try to convince the judge that they are the human. If the judge cannot reliably tell them apart, the computer wins.
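
To make the structure of the test concrete, its protocol can be sketched in a few lines of code. This is only an illustrative outline of the game’s shape, not Turing’s own specification: the judge and the two contestants are placeholder functions, and all names here are invented for the example.

    import random

    def imitation_game(judge_ask, judge_guess, human, machine, rounds=5):
        # Sketch of the imitation game. All four arguments are
        # placeholder callables: judge_ask produces the next question
        # from the transcript so far, judge_guess names the label it
        # believes is human, and human/machine map questions to replies.

        # Hide the two contestants behind randomly assigned labels.
        labeled = dict(zip(random.sample(["A", "B"], 2), [human, machine]))

        transcript = []
        for _ in range(rounds):
            for label, contestant in labeled.items():
                question = judge_ask(label, transcript)
                transcript.append((label, question, contestant(question)))

        # The judge names the label it believes is the human;
        # the machine wins if that guess is wrong.
        return labeled[judge_guess(transcript)] is machine

What even this crude outline makes plain is that the test is purely behavioral: nothing in the protocol examines how either contestant produces its answers.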

A computer stumping a judge in Turing’s “imitation game” is a step toward settling the debate over whether a computer can think, but it is a very elementary step. The experiment shows only that a computer can behave in ways indistinguishable from the behavior that human thinking produces. That does not mean the computer was thinking, only that its output resembled the output of thought. Human actions are preceded by some level of thought, but computer actions are not preceded by any corresponding computer thought. They are instead the product of human thought, programmed into the computer by a human programmer with particular intentions and directions.

Surely, there is human influence in a human invention like a robot. Can a human, then, invoke consciousness in its invention? Turing acknowledged that there are mysteries in this debate, but concluded, “I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper” (Turing, 12). Successful performance in his imitation game was enough to convince him that a computer could potentially possess consciousness on some scale. Searle (1980) challenged this assumption, arguing that the intentional action of a human is produced by the causal features of the brain. Inherent to this argument is the understanding that the action of a computer is due not to its own intention, but to the intention of the human programmer who created it.

Artificial intelligence is just that, artificial, but the practical utility of synthetic entities has an established place in our world. Lycan (YEAR) defined artificial intelligence as the production of a machine that can perform a job that ordinarily requires intelligence and judgment. Lycan also defined intelligence as “a kind of flexibility, a responsiveness to contingencies” (Lycan, 1). The debate over whether a robot could ever “think” must address this kind of flexibility and responsiveness. It does not take much argument, though, to see that a robot could be programmed to respond flexibly to a large number of contingencies. Increased information storage capacity, synthetic communication channels, and other technological advances could provide a robot with human-like intelligence; indeed, a robot could store vastly more information than a human brain. The ability of such a robot to act with a flexible, responsive intellect, however, is still a product of human programming, and the level of situational responsiveness it could reach is bounded by the limits of human imagination. A robot could be prepared to read a book, cook a meal, or play a game of chess, but only because humans read books, cook meals, and play chess.
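
A minimal sketch can show what programmed “responsiveness to contingencies” amounts to in practice. This is an illustration of the essay’s point, not any real robot’s control software; the situations and responses listed are invented for the example.

    # Every contingency the robot can handle must be anticipated
    # and named by a human programmer in advance.
    HANDLERS = {
        "book on table": lambda: "read the book aloud",
        "ingredients ready": lambda: "cook the meal",
        "chessboard set up": lambda: "play a game of chess",
    }

    def respond(situation):
        handler = HANDLERS.get(situation)
        if handler is None:
            # A contingency the programmer never imagined: the robot
            # has no resources of its own to fall back on.
            return "no programmed response"
        return handler()

    print(respond("chessboard set up"))  # play a game of chess
    print(respond("house is on fire"))   # no programmed response

However large the table of contingencies grows, its entries are still an inventory of what humans have imagined.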

The human restrictions inherent in a robotic invention that could potentially “think” are many. Lycan cited two limitations on computer programs aimed at artificial intelligence. First, computer programs must be fed information. A program could be built with the ability to collect its own sensory information, but that program would still have to be initiated by a human. Second, the effectiveness of the work a program performs is limited by the intentions of its programmer. A program could be given the ability to run rampant through any number of random activities; Lycan cites unpredictable elements like physical defects, bugs in programming, randomizers, learning and analogy mechanisms, and reliable goal descriptions. The utility of those activities, however, would still be defined by human purposes. A robot accomplishing a task that once required intelligence and judgment does not mean the robot is exercising intelligence and judgment in that task, only that it is mirroring the human’s exercise of intelligence and judgment.

Cognitive ability, and the personal intention to act that flows from cognitive processes, are difficult to quantify, but Searle believed that simply building a machine that can do human things does not amount to creating a human. He stated, “for any program you like it is possible for something to instantiate that program and still not have any mental states” (Searle, 14). A consequence of this reasoning is that the brain produces intentional action in a way fundamentally different from the way a computer instantiates a program. The complex internal processes that result in intentional action cannot, then, be duplicated merely by simulating them. The biological production of causality and intentionality seems to be the limiting step in the production of a robot that could “think.”

With these premises in place, one can use biological evolution as a tool for understanding a robot’s potential ability to “think.” Darwinian evolution has long stood as a model for understanding the current state of organic life on Earth. Its basis is descent with modification: the change in the inherited characteristics of populations over time in response to external and internal pressures. Human intelligence is a product of that process. The variety of organic life present today is the result of millions of years of changes in the biological and morphological traits that allow an individual organism to thrive or die in its environment.

Artificial intelligence is not a product of this process; however, the current understanding of evolutionary mechanisms makes it possible to model evolutionary pressures on a computer program. Mechanisms such as genetic mutation produce changes in an individual, and those changes can be deleterious, useful, or neutral. If a change is deleterious, the individual’s fitness within its environment decreases, and its death removes the deleterious genetic material from the population. Useful changes increase fitness: the individual is better suited to its environment, remains an active part of the population, and passes the useful genetic material on to subsequent generations. A software model of this process, applied to a single computer program, would place the program on an evolutionary journey that could mirror the millions of years of organic evolution that produced “thinking” humans (see the sketch below). Evolutionary mechanisms like mutation could code for new functions within the program; the program could speciate, recover from viral issues, connect with other programs, compete with other programs, and survive traumatic problems.
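
A minimal version of such a software model is the genetic algorithm, sketched below. This is a standard textbook illustration rather than a proposal from any of the cited authors; the bit-string genome, the fitness function, and the parameter values are assumptions chosen for the example.

    import random

    # Descent with modification in software: a population of bit-string
    # "genomes" is mutated, selected by fitness, and reproduced.
    GENOME_LEN, POP_SIZE, MUTATION_RATE = 20, 30, 0.02

    def fitness(genome):
        # Stand-in for environmental pressure: more 1-bits = fitter.
        return sum(genome)

    def mutate(genome):
        # Each bit may flip: a deleterious, useful, or neutral change.
        return [b ^ (random.random() < MUTATION_RATE) for b in genome]

    def generation(population):
        # Selection: the less fit half "dies"; the fitter half passes
        # its genetic material, with mutations, to the offspring.
        survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        return [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(100):
        population = generation(population)
    print(max(fitness(g) for g in population))  # fitness climbs over time

Even in this toy model the essay’s point stands: the “environmental pressure” is a fitness function that a human wrote, so the program climbs only the gradient its programmer defined.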

While these evolutionary forces could be simulated within a computer program, the evolution of a computer program would likely not resemble that of organic life. The program would have to evolve independently of external programming pressures. The evolution of organic life is the result of an almost unthinkable number of individual organisms reacting to their environments. They react as a product of variability, with the ability to respond to pressures as they arise, and as they respond they learn. That learning is built into the genetic code, so subsequent generations retain the ability to unconsciously apply the lessons of the past to the pressures of the present. Even if a computer program were set on this path, it is highly unlikely that it would develop into a functionally “thinking” autonomous being.

The creation of a robot that can “think” is unlikely, despite the incredible technological advances of today and those anticipated for the future. A review of Turing’s essay on machine intelligence shows that there are limits to human imagination; human supposition, like Turing’s argument, is bounded by the knowledge of its day. The debate over whether robots can “think” is not grounded in a lack of technological potential. It is grounded in the simple fact that a robot is a human creation. Artificial intelligence, no matter how varied and complex it becomes, will always be a tool produced by humans, and the utility of that tool will be defined by humans; in the absence of humans, it will have no meaning. Therefore, robots will not be able to “think.” Even if a computer program could be set on the same evolutionary path that led to the modern human, it would encounter pressures specific to computer programs, and even if it overcame the statistical near-impossibility of gaining intelligence, consciousness, or the ability to “think,” the result would likely not resemble sentience in any humanly quantifiable way.