Gadget Review Part II: Computers and the Person

In this post, I want to continue my review of Jaron Lanier’s book You Are Not a Gadget: A Manifesto, discussing his theme for part I: what is a person? Although Lanier admits that he has no answer to that question, he is concerned that web 2.0 information systems are degrading our understanding of personhood, robbing it of what he considers to be its ineffable mystery.

Lanier’s comments in this section of the book center on two main themes: the ideology of “cybernetic totalism,” as articulated through artificial intelligence; and the concepts of the “noosphere” and “hive mind,” which he sees as ultimately diminishing the individual.

Lanier claims that the creators and proponents of web 2.0 information systems subscribe to an ideology he calls “cybernetic totalism.” At the heart of this ideology is the computer metaphor of the human mind: the assumption that the human brain is really just a big, complex information-processing device. If you also assume that consciousness is an emergent property of the physical workings of the brain (that is, the mind is not something separate from the physical brain), you could conclude that it should be possible to build an artificial brain capable of some form of consciousness. Furthermore, it should also be possible to recreate the physical pattern of a particular human brain in a computer, and in essence “upload” a person’s consciousness to a computer simulation. Lanier rather deftly likens this ideology to a new kind of religion, one that offers a new method of salvation and everlasting life.

This ideology, Lanier argues, began with Alan Turing’s famous thought experiment, commonly known as “the Turing test.” Turing was a British computing pioneer who became interested in the possibility of artificial intelligence. Since the words “intelligence” and “thought” were too vague, he proposed an unambiguous test that would serve as a kind of proof of concept. It was based on a popular parlor game, in which a judge interacts with two players, one male and one female, only through written messages. The judge asks the players questions, reviews their responses, and then has to guess which player is which gender. Turing replaced one of the human players with a computer, and postulated that if a human judge could not tell the computer and the human apart, the computer would have demonstrated at least some form of intelligence.
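
To make the structure of the test concrete, here is a toy sketch in Python. Everything in it (the canned players, the guessing judge) is my own hypothetical illustration, not anything from Turing or Lanier; the point is simply that the judge sees only labeled transcripts, and the machine “passes” once the judge’s guesses are no better than chance.

```python
import random

def canned_human(question):
    # Stand-in for a real human player; replies would normally be free-form.
    return "Honestly, I'd have to think about that one."

def canned_machine(question):
    # Stand-in for a chat program trying to mimic a human reply.
    return "Honestly, I'd have to think about that one."

def imitation_game(questions, judge_guess, human=canned_human, machine=canned_machine):
    # Hide the two players behind randomly assigned labels, so the judge
    # sees only labeled text, never the players themselves.
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    transcript = [(label, question, player(question))
                  for question in questions
                  for label, player in labels.items()]

    guess = judge_guess(transcript)  # the judge names which label is the machine
    actual = "A" if labels["A"] is machine else "B"
    return guess == actual           # True means the machine was caught

# With identical canned replies the judge has no signal to work with and can
# only guess at chance; 50% accuracy is exactly the point at which the
# machine is said to "pass."
caught = imitation_game(
    questions=["What do you do on a rainy Sunday afternoon?"],
    judge_guess=lambda transcript: random.choice(["A", "B"]),
)
print("machine identified correctly:", caught)
```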

The Turing test is indeed highly suggestive. It not only elevates language to the hallmark of intelligence, but also assumes that appearing to use language correctly is the same thing as actually understanding it. A computer might be able to mimic a human response to a question, but does that mean the computer actually understood the concepts encapsulated in that language? And even if it did, should we attribute the quality of intelligence to the machine itself, or to the programmers who wrote the software that parsed the questions and determined the appropriate responses?

One of my favorite responses to the Turing test was articulated by Steven Pinker, a cognitive scientist at Harvard (formerly at MIT), in the video opera “Three Tales” by Steve Reich and Beryl Korot:

I might be fooled by a good silk flower. It doesn’t mean that it’s a real flower. It may just mean that I don’t know enough about flowers.

Lanier goes on to argue that although artificial intelligence researchers have backed away from the goal of creating an artificial human-like mind, they have nevertheless influenced a generation of computer scientists to assume that “the computer is evolving into a life-form that can understand people better than people can understand themselves” (28). The implication is that web 2.0 information systems are built not to be passive tools that help individuals accomplish desired tasks more easily, but to anticipate what they think individuals might want and deliver it without the user needing to ask. This type of behavior, Lanier worries, will condition us to think that the computer always knows best, ultimately causing us to give up our autonomy to a network of machines and software that takes over control of our lives.
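
To see the distinction between a passive tool and an anticipatory one, consider this minimal Python sketch. The click history, catalog, and interest-counting scheme are entirely made up for illustration; real recommendation systems are far more elaborate, but the pattern is the same: infer a preference from past behavior, then push a result the user never asked for.

```python
from collections import Counter

# Hypothetical click history and catalog, purely for illustration.
click_history = ["gadget review", "gadget teardown", "ai ethics"]
catalog = {
    "new gadget teardown": ["gadget", "teardown"],
    "opera review":        ["opera", "review"],
    "ai ethics debate":    ["ai", "ethics"],
}

# Count which words the user has engaged with before...
interests = Counter(word for item in click_history for word in item.split())

# ...score every catalog item by overlap with those inferred interests...
def score(tags):
    return sum(interests[tag] for tag in tags)

# ...and surface the winner without the user ever issuing a query.
recommendation = max(catalog, key=lambda item: score(catalog[item]))
print("pushed to user:", recommendation)  # the system decides; the user only receives
```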

Although his version of the Colossus or Terminator story might be a bit far-fetched at present, his comments about the dangers of anonymity in web 2.0 information systems are easier to agree with, especially if you have ever participated in an online discussion. Mike Godwin once made the following observation, now known as Godwin’s Law: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.” Lanier’s argument is similar: the ability to contribute anonymously, or to invent and change personas at will, brings out the “inner troll” in everyone (“troll” being a common term for an online sadist).

Lanier states that sites like Wikipedia are based upon the idea of the “noosphere,” a global intelligence created by the amalgamation of all the individual brains connected via the Internet. The theory is that collective intelligence is often more accurate than individual intelligence, but Lanier thinks this holds only in certain contexts, and only to a certain degree. For example, when contributors are allowed to remain anonymous, or to invent and change personas at will, there is none of the social accountability that would normally temper what participants choose to do.
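
The statistical intuition behind that theory is easy to demonstrate with a toy simulation (my own illustration with arbitrary numbers, not an example from the book): averaging many independent, noisy guesses lands far closer to the truth than a typical individual guess does, but only as long as the errors remain independent, which is exactly the condition Lanier thinks anonymity and herding undermine.

```python
import random

# Toy "wisdom of crowds" simulation: 1,000 people independently estimate
# a quantity whose true value is 100, each guess carrying Gaussian noise.
random.seed(42)
true_value = 100.0
guesses = [random.gauss(true_value, 20.0) for _ in range(1000)]

crowd_error = abs(sum(guesses) / len(guesses) - true_value)
typical_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)

print(f"error of the crowd's average: {crowd_error:.2f}")
print(f"typical individual's error:   {typical_individual_error:.2f}")
# The averaging only helps because the errors are independent; correlated
# errors (herding, sockpuppet personas) would wash out the advantage.
```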
