Computers will never gain consciousness

Any discussion of the possibility of computers gaining consciousness must begin with an understanding of what consciousness means. For this purpose, consciousness will be regarded as self-awareness: an understanding of what one's self is in contrast to one's surroundings. In this context, consciousness goes hand in hand with sentience, not just an ability to know but an ability to learn. In an interview with Free Inquiry deputy editor Matt Cherry, the philosopher John R. Searle gave a compelling argument against the likelihood of computers gaining consciousness. Computers, he argued, will never gain consciousness because their "processes are defined independently of consciousness, purely in terms of symbolic manipulation" (Searle, 1998, p. 316). What he meant is that a computer's purpose is given to it by outside programming. Nothing is self-derived, as a state of consciousness demands. Assuming that a method for computers to program themselves never arises, Searle was exactly right: computers will never gain consciousness.

Searle began his argument against computer consciousness by defining what computer processes are: "The implemented program consists purely of syntactical or symbolic processes, usually thought of as sequences of zeros and ones, but any symbols will do" (Searle, 1998, p. 315). This is compelling because it defines computer processes as antithetical to what is commonly regarded as consciousness. Humans are conscious; so are animals. Consciousness is a biological construct that takes root in nature and is exclusively tied to a biological body; it is not written by someone else but is shaped and defined by individual experience. Searle then differentiated consciousness from the potential of computers with a thought experiment, his "Chinese room argument." In it, Searle explained how a computer could not be considered conscious even if it passed the Turing test, a proposed test that would measure machine intelligence by probing whether or not a machine could communicate like a human ("The Turing Test," 2005). Searle's (1980) thought experiment granted that a computer could pass the Turing test by communicating with someone in Chinese. However, a man locked in a room with an English version of the computer's program and the database it uses could give exactly the answers the computer gave, thus passing the Turing test without any real knowledge of the Chinese language. The computer is only regurgitating symbols according to a preset formula; it holds no inherent knowledge of its own.
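To make the point concrete, consider a minimal sketch in Python of the kind of purely symbolic process Searle describes. This example is not drawn from Searle's writings; the rule table and function below are invented for illustration. The program pairs input symbols with scripted output symbols, and nothing in it "understands" what the symbols mean.

```python
# A hypothetical "Chinese room" in miniature: the program matches input
# symbols against a preset table supplied by its programmer and returns
# the paired output symbols. It can appear conversational without any
# grasp of what the symbols mean.

RULE_BOOK = {
    "你好": "你好！",           # "hello" -> "hello!"
    "你会说中文吗？": "会。",    # "do you speak Chinese?" -> "yes."
}


def respond(symbols: str) -> str:
    """Return the scripted reply for a string of input symbols.

    The lookup is pure symbol manipulation: the same code would work
    for any alphabet whatsoever, which is the sense of Searle's remark
    that "any symbols will do."
    """
    return RULE_BOOK.get(symbols, "……")  # no matching rule: placeholder


if __name__ == "__main__":
    # Prints 会。 -- yet nothing in this program "knows" Chinese.
    print(respond("你会说中文吗？"))
```

However convincing the replies might be made, the program's behavior is fully determined by the table its programmer supplied, which is precisely the sense in which the computer holds no inherent knowledge.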

Searle's arguments are compelling. By defining computers as a purely human construct, he effectively showed how limited the possibilities for computer consciousness truly are. At the same time, perhaps the only flaw in his argument is the assumption that computers will forever remain a purely human construct. His belief that "the relationship between the brain and consciousness is one of causation" was absolutely correct (Searle, 1998, p. 314), and it supports his conclusion that computers will not attain consciousness so long as they depend on a sequence of inputted symbols to function. However, if technology matures to the point where computers can program themselves, in effect creating their own symbols, then a causal relationship between a computer and its function seems attainable. In a world where one web browser can struggle to display pages optimized for another, this possibility can seem far-fetched. Then again, the computer age has not had the luxury of 100,000 years of evolution. So while such a development seems unlikely, it also seems premature to rule it out altogether.

Perhaps the best answer to the question of whether computers can attain consciousness is a simple one: not any time soon. By defining consciousness as something an individual itself causes, John R. Searle clearly showed that computers causing a consciousness for themselves is unforeseeable for present-day computers, as long as their function rests on outside programming. However, if programming ever develops to the point that computers could program themselves, they could evolve in ways Searle did not foresee. While this may be a possibility, it presently seems highly unlikely and, at best, a long way off. Thus, as computers are presently defined by outside forces, they will not gain consciousness.

References

Searle, John R. (1998, Fall). God, mind, and artificial intelligence: An interview with John Searle. Free Inquiry.

Searle, John R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-457. Retrieved April 10, 2008, from http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html

Stanford Encyclopedia of Philosophy. (2005, July 28). The Turing Test. Retrieved April 10, 2008, from http://plato.stanford.edu/entries/turing-test/