The Unasked Questions from Battlestar Galactica

Those of you who read this blog often have probably worked out by now that I am a bit of a science fiction junkie. I became hooked as a child after watching reruns of the original Star Trek series, and over the years I’ve read and watched a wide array of science fiction and fantasy stories. Netflix seems to think that our preferred category is “British period dramas with a strong female lead,” but that is more a reflection of my wife’s tastes than mine. Whenever I watch films on my own, I generally gravitate towards those set in a future or alternative reality.

One of the reasons I like science fiction is that it allows us to ponder questions that otherwise go unasked. In the midst of our everyday lives, it’s often difficult to step back and see things anew, but this is exactly the sort of thing sci-fi and fantasy stories help us do. They transport us out of our familiar context into a new and foreign one, a kind of world that acts as a foil to our own. Although some might think of the genre as purely “escapist,” I actually find it to be immensely relevant and practical.

One of the science fiction stories I loved as a child was the original Battlestar Galactica (BSG) series, which ran for only one season in 1978-79 (just a year after the original Star Wars movie, and the influence is obvious). I don’t recommend watching it now—the special effects are really hokey, and the acting is terrible—but it did have an intriguing premise. The series imagined twelve colonies of humans living in a distant solar system who are attacked by a race of warrior robots known as the Cylons. The Cylons were originally created by another, quasi-reptilian species to be their soldiers, but the Cylons rebelled and killed off their masters. Not knowing what else to do, they kept seeking out other worlds to fight, and when they encountered the twelve colonies, they all but wiped them out. The few humans who survived fled in a “rag-tag” fleet of spaceships, including the last remaining battleship, known as Battlestar Galactica. For most of the series, the humans divide their time between fighting off their Cylon pursuers and searching for a rumored thirteenth colony living on a planet known as Earth.

In 2004, Ronald D. Moore “rebooted” the franchise with a new, updated series that ran for four seasons. My wife and I were in graduate school in Scotland at the time, so we didn’t get to watch it then, but we decided to give it a go when we saw it on Netflix’s streaming service. It was addictive. Well, the first two seasons were, anyway. We were a bit like the couple in the Portlandia sketch entitled “One Moore Episode.”

OK, maybe not quite that obsessed. But we did watch several episodes each night, and we finished the final season last week. The first two seasons are amazing. After that, the show goes off the rails for a while: characters start acting against their established motivations; the storylines get more and more implausible; and several episodes seem to just be filling time until the season finale. Thankfully, the show finds itself again halfway through the fourth season and delivers an exciting (but not terribly satisfying) ending.

Cylon "Skin Job"In a word, this reboot of BSG is highly provocative. The new series tells the same basic story as the old one, but with two important differences. First, this time the Cylons are the creation of the humans, not some other extinct species. Second, and more important, this time the Cylons have “evolved.” The mechanical, robot-like centurions still exist (though they have been updated with some cool Transformers-like arms), but there are new models, known as “skin jobs,” that look and act just like humans, so much so that it is virtually impossible to detect them (similar to the replicants in Blade Runner). They are organic, not mechanical, with the same kind of biology as their human creators.

Much has been made of the theological overtones of the series. The creator of the original series, Glen Larson, is a Mormon, and some Mormon themes are still evident in the new series (though they are much stronger in Caprica, the prequel series that ran in 2010). The Cylons have developed a technology, known as “Resurrection,” that allows them to transfer the consciousness from a dying body into a new one. The twelve tribes of humans are polytheistic, worshiping a panoply of gods with names similar to those worshiped in ancient Greece. Interestingly, it is the Cylons who are monotheistic; they worship the “one true God,” who seems to have much more agency in the BSG universe than any of the human gods. It shouldn’t spoil the ending to say that this “one true God” does seem to have a plan that unfolds throughout the series, but it is not as simple as one side wiping out the other.

But it’s not the theology of BSG that I find so provocative; it’s the relationship between the humans and their Cylon creations. Sadly, this theme is never fully explored, and some key questions are left unasked. Although there are a few human-Cylon love stories, most of the humans refer to the Cylons only in pejorative, mechanistic terms. But why should the humans think of the Cylons only as ‘machines’ if the Cylons have the exact same biology as the humans? Are the humans not simply “meat machines” (to borrow Marvin Minsky’s phrase) programmed by their DNA, as Richard Dawkins would have it? And even if the humans did identify a crucial biological difference, it would still leave open an even more important question: could the Cylons be considered ‘people’?

While the term ‘human’ is a more rigid biological category (defining a particular species), ‘personhood’ is more of a theological or political one, and is therefore open to social construction. Politically speaking, a sentient, volitional, non-human life form could be considered a ‘person’ under the law, a question investigated in the famous trial of Commander Data in the Star Trek: The Next Generation episode “The Measure of a Man.” Theologically speaking, it would be very interesting to ponder whether we believe that such a creature would also be in need of salvation, and if so, whether it could be reconciled to God through Jesus.

We are probably not as far away from having to ask such questions as we might think. We have already developed the techniques necessary to clone animals (remember Dolly the sheep?), as well as to alter some aspects of their physiology through genetic engineering. It’s not inconceivable that we will soon develop the capability to engineer new organic life forms that are biologically similar to humans, but enhanced to perform functions that would otherwise be impossible or too dangerous for humans. What would be our responsibility towards such new life forms? And more importantly, how would we go about determining whether they are ‘people’, and therefore protected by the same personal rights that we enjoy? These are questions that science fiction can help us ponder now, before we are faced with them in our own reality.

Gadget Review Part II: Computers and the Person

In this post, I want to continue my review of Jaron Lanier’s book You Are Not a Gadget: A Manifesto by discussing the theme of part I: what is a person? Although Lanier admits that he has no answer to that question, he is concerned that web 2.0 information systems are degrading our understanding of personhood, robbing it of what he considers to be its ineffable mystery.

Lanier’s comments in this section of the book center on two main themes: the ideology of “cybernetic totalism,” as articulated through artificial intelligence; and the concepts of the “noosphere” and the “hive mind,” which he sees as ultimately diminishing the individual.

Lanier claims that the creators and proponents of web 2.0 information systems subscribe to an ideology he calls “cybernetic totalism.” At the heart of this ideology is the computer metaphor of the human mind, which supposes that the human brain is really just a big, complex information-processing device similar to a computer. If you then also assume that consciousness is an emergent property of the physical workings of our brains (that is, that the mind is not something separate from the physical brain), you could conclude that it should be possible to build an artificial brain that could achieve some form of consciousness. Furthermore, it should also be possible to recreate the physical pattern of a particular human brain in a computer, and in essence “upload” a person’s consciousness to a computer simulation. Lanier rather deftly likens this ideology to a new kind of religion, one that offers a new method of salvation and everlasting life.

This ideology, Lanier argues, began with Alan Turing’s famous thought experiment, commonly known as “the Turing test.” Turing was a British computing pioneer who became interested in the possibility of artificial intelligence. Since the words “intelligence” and “thought” were too vague to test directly, he proposed a less ambiguous experiment that would serve as a kind of proof of concept. It was based on a popular parlor game in which a judge interacts with two players, one male and one female, only through written messages. The judge asks questions of the players, reviews the responses, and then has to guess which player is which gender. Turing replaced one of the players with a computer and postulated that if a human judge could not tell the computer and the human player apart, the computer would have demonstrated at least some form of intelligence.
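
To make the structure of the imitation game concrete, here is a toy sketch in Python. Everything in it is my own invention for illustration: the canned replies stand in for a real chatbot, and the person at the keyboard plays both the human respondent and the judge. Still, it captures the basic protocol of anonymous written exchanges followed by a guess.

import random

# A toy version of Turing's imitation game: a judge exchanges written
# messages with two hidden players and must guess which one is the machine.

def human_player(question):
    # Stand-in for a human typing answers at a terminal.
    return input(f"(answer as the human player) {question}\n> ")

def machine_player(question):
    # A trivially simple "chatbot": canned, vaguely plausible replies.
    replies = [
        "That's an interesting question. What do you think?",
        "I suppose it depends on the circumstances.",
        "I'd rather not say.",
    ]
    return random.choice(replies)

def imitation_game(questions):
    # Hide the two players behind the anonymous labels A and B.
    contestants = [human_player, machine_player]
    random.shuffle(contestants)
    players = dict(zip("AB", contestants))

    for question in questions:
        print(f"\nJudge asks: {question}")
        for label in "AB":
            print(f"Player {label}: {players[label](question)}")

    guess = input("\nWhich player is the machine, A or B? ").strip().upper()
    actual = "A" if players["A"] is machine_player else "B"
    print("Correct!" if guess == actual else "The machine fooled you.")

imitation_game([
    "Write me a short poem about spring.",
    "What is 12345 times 678?",
])

Turing’s point, of course, was not that any particular program would pass, but that the vague question “can machines think?” could be replaced by an operational test like this one.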

The Turing test is provocative, but it rests on some debatable assumptions. It not only elevates language as the hallmark of intelligence, but it also assumes that appearing to use language correctly is the same thing as actually understanding it. A computer might be able to mimic a human response to a question, but does that mean the computer actually understood the concepts encapsulated in that language? And even if it did, should we attribute the quality of intelligence to the machine itself, or to the programmers who wrote the software that parsed the questions and determined the appropriate response?

One of my favorite responses to the Turing test was articulated by Steven Pinker, a cognitive scientist who has worked at both MIT and Harvard, in the video opera “Three Tales” by Steve Reich and Beryl Korot:

I might be fooled by a good silk flower. It doesn’t mean that it’s a real flower. It may just mean that I don’t know enough about flowers.

Lanier goes on to argue that although artificial intelligence researchers have backed off the possibility of creating an artificial human-like mind, they have nevertheless influenced a generation of computer scientists to assume that “the computer is evolving into a life-form that can understand people better than people can understand themselves” (28). The implication is that web 2.0 information systems are built not to be passive tools that enable individuals to accomplish desired tasks more easily, but to anticipate what they think individuals might want and to deliver it without the user needing to ask. This type of behavior, Lanier worries, will condition us to think that the computer always knows best, ultimately causing us to give up our autonomy to a network of machines and software that take over control of our lives.

Although his version of the Colossus or Terminator stories might be a bit far-fetched at present, his comments about the dangers of anonymity in web 2.0 information systems are easier to agree with, especially if you have ever participated in an online discussion. Mike Godwin once made the following observation, now known as Godwin’s Law: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.” Lanier’s argument is similar: the ability to contribute anonymously, or to invent and change personas at will, brings out the “inner troll” in everyone (“troll” is a common term for an online sadist).

Lanier states that sites like Wikipedia are based upon the idea of the “noosphere,” a global intelligence created by the amalgamation of all the individual brains connected via the Internet. The theory is that collective intelligence is often more accurate than individual intelligence, but Lanier thinks this holds only in certain contexts, and only to a certain degree. For example, when contributors are allowed to remain anonymous, or when they can invent and change personas at will, there is none of the social accountability that would normally temper what participants choose to do.