Monthly Archives: January 2012

The Unasked Questions from Battlestar Galactica

Those of you who read this blog often have probably worked out by now that I am a bit of a science fiction junkie. I became hooked as a child after watching reruns of the original Star Trek series, and over the years I’ve read and watched a wide array of science fiction and fantasy stories. Netflix seems to think that our preferred category is “British period dramas with a strong female lead,” but that is more a reflection of my wife’s tastes than mine. Whenever I watch films on my own, I generally gravitate towards those set in a future or alternative reality.

One of the reasons I like science fiction is because it allows us to ponder questions that otherwise go unasked. In the midst of our everyday lives, it’s often difficult to step back and see things anew, but this is exactly the sort of thing sci-fi and fantasy stories help us do. They transport us from our familiar context into a new and foreign one, a new kind of world that acts like a foil to our own. Although some might think of the genre as purely “escapist,” I actually find it to be immensely relevant and practical.

One of the science fiction stories I loved as a child was the original Battlestar Galactica (BSG) series, which ran for only one season in 1978–79 (just a year after the original Star Wars movie, and the influence is obvious). I don’t recommend watching it now—the special effects are really hokey, and the acting is terrible—but it did have an intriguing premise. The series imagined twelve colonies of humans living in a distant solar system, who are attacked by a race of warrior robots known as the Cylons. The Cylons were originally created by another, quasi-reptilian species to be their soldiers, but the Cylons rebelled and killed off their masters. Not knowing what else to do, they kept searching out other worlds to fight, and when they encountered the twelve colonies, they all but wiped them out. The few humans that survived fled in a “rag-tag” fleet of spaceships, including the last remaining battleship, known as Battlestar Galactica. For most of the series, the humans divide their time between fighting off their Cylon pursuers and searching for a rumored thirteenth colony living on a planet known as Earth.

In 2004, Ronald Moore “rebooted” the franchise with a new, updated series that ran for four seasons. My wife and I were in graduate school in Scotland at the time, so we didn’t get to watch it then, but we decided to give it a go when we saw the series on Netflix’s streaming service. It was addictive. Well, the first two seasons anyway. We were a bit like this Portlandia sketch, entitled “One Moore Episode”:

OK, maybe not quite that obsessed. But we did watch several episodes each night, and finished the final season last week. The first two seasons are amazing. After that, it kind of goes off the rails for a while: characters start acting against their established motivations; the story lines get more and more implausible; and several episodes seem to just be filling time until the season finale. Thankfully, the show finds itself again halfway through the fourth season, and delivers an exciting (but not terribly satisfying) ending.

In a word, this reboot of BSG is highly provocative. The new series tells the same basic story as the old one, but with two important differences. First, this time the Cylons are the creation of the humans, not some other extinct species. Second, and more important, this time the Cylons have “evolved.” The mechanical, robot-like centurions still exist (though they have been updated with some cool Transformers-like arms), but there are new models, known as “skin jobs,” that look and act just like humans, so much so that it is virtually impossible to detect them (similar to the replicants in Blade Runner). They are organic, not mechanical, with the same kind of biology as their human creators.

Much has been made about the theological overtones of the series. The creator of the original series, Glen Larson, is a Mormon, and some Mormon themes are still evident in the new series (though they are much stronger in Caprica, the prequel series that ran in 2010). The Cylons have developed a technology, known as “Resurrection,” that allows them to transfer the consciousness from a dying body into a new one. The twelve tribes of humans are polytheistic, worshiping a panoply of gods with names similar to those worshiped in ancient Greece. Interestingly, it is the Cylons who are monotheistic; they worship the “one true God,” who seems to have much more agency in the BSG universe than any of the human gods. It shouldn’t spoil the ending to say that this “one true God” does seem to have a plan that unfolds throughout the series, but it is not as simple as one side wiping out the other.

But it’s not the theology of BSG that I find so provocative; it’s the relationship between the humans and their Cylon creation. Sadly, this theme is never really delved into, and some key questions are left unasked. Although there are a few human-Cylon love stories, most of the humans refer to the Cylons only in pejorative, mechanistic terms. But why should the humans think of the Cylons only as ‘machines’ if the Cylons have the exact same biology as the humans? Are the humans not simply “meat machines” programmed by their DNA (a phrase favored by Richard Dawkins)? And even if they did identify a crucial biological difference, it would still leave open an even more important question: could the Cylons be considered ‘people’?

While the term ‘human’ is a more rigid biological category (defining a particular species), ‘personhood’ is more of a theological or political one, and is therefore open to social construction. Politically speaking, a sentient, volitional, non-human life form could be considered a ‘person’ under the law, a topic that was investigated in the famous trial of Commander Data on Star Trek: The Next Generation. Theologically speaking, it would be very interesting to ponder whether we believe that such a creature would also be in need of salvation, and if so, whether it could be reconciled to God through Jesus.

We are probably not as far away from having to ask such questions as you might think. We have already developed the techniques necessary to clone animals (remember Dolly the sheep?), as well as alter some aspects of their physiology through genetic engineering. It’s not inconceivable that we will soon develop the capability to engineer new organic life forms that are biologically similar to humans, but enhanced to perform functions that would be otherwise impossible or too dangerous for humans to perform. What would be our responsibility towards such new life forms? And more importantly, how would we go about determining if they are ‘people’, and therefore protected by the same personal rights that we enjoy? These are questions that science fiction can help us ponder now, before we are faced with them in our own reality.

Technological Paradigms

Last summer a couple of my friends sent their eight-year-old daughter off to camp for the first time, and as they dropped her off, they gave her some money to buy a few things while she was there. In addition to those sugary snacks that every camper sucks down with abandon, she also bought a disposable film camera so she could take pictures of her new friends and all the fun things they were doing. When she returned home she told her parents all about her time at camp, and said she was eager to show them the photos she took on the camera she bought. Her parents asked, “so where is the camera? We need to send it in to be developed.” She calmly replied, “I threw it away—it said it was disposable. The pictures are on the Internet, right?”

My friends’ daughter was just young enough that she had never seen a film camera before. Her parents have taken many pictures of her, but they had always done so using their mobile phones or a digital camera. For her, ‘cameras’ are things that capture digital photos and upload them to a computer or the Internet; the very concepts of ‘film’ and ‘developing’ were completely foreign to her.

I found this story to be fascinating, not only because I am a historian of technology, but also because I am the son of a former Kodak employee. My father worked for Kodak for 35 years, and the recent stories of their plans for bankruptcy have been especially poignant for him. But when he started there, Kodak was at the height of their game. They had developed a line of point-and-shoot consumer cameras that enabled anyone, even those with absolutely no knowledge of photography whatsoever, to take reasonably good pictures. But the cameras themselves were not the real money maker. They were like the low-profit razor sold below cost so that you could sell a lifetime of high-profit razor blades. The real money for Kodak was in the sale of film and developing services.

Film, of course, is a consumable. Once you expose a segment of film to light, it can’t be used again. It’s also pretty much useless until you develop it and make prints. Taking a picture thus came at a double price: the cost of the film and the cost of processing, both of which were high-margin businesses. For every camera Kodak sold, they also sold hundreds of rolls of film, and most of those rolls were developed with Kodak chemicals and printed on Kodak paper. Kodak supplied the entire platform—the cameras, the film (both movie and still), the developing chemicals, the photosensitive paper—and they packaged it all together as a well-marketed, customer-friendly service. It was a complete cash cow.

Kodak did continue to develop new variations on this theme, though they seemed to have less and less luck with them as the years went on. I remember the day my father returned from a super-secret business trip back to Rochester with a briefcase literally handcuffed to his wrist. My brother and I stared in wonder, assuming that our father had become a government agent and that the whole Kodak thing was just a cover. But alas, the case didn’t contain state secrets or foam-encased spy gear; instead it contained these strange-looking flat cameras that used a type of film that looked vaguely like a View-Master disc. My dad proudly declared that these “disc cameras” were the wave of the future, and in the early 1980s, they really did look futuristic. But disc cameras were ultimately doomed by their tiny lens and negative. In theory, disc cameras had the potential of taking better pictures than a 110, but in practice, it was too easy to take blurry pictures, and even clear ones looked unacceptably grainy when printed larger than 3×5″.

But even before the first disc cameras were introduced, Kodak’s R&D engineers had put together something that would not only revolutionize photography, but also kill off that very lucrative cash cow: the first working digital camera, built in 1975. Over the next two decades, Kodak actually played a leading role in developing digital photosensors and digital photo printing kiosks. They even entered the consumer digital camera market, albeit too late to displace the likes of Sony, Canon, and Nikon.

The trouble was, none of these businesses offered the same high profit margins as film and developing. Digital cameras, of course, require no film. Taking a picture is essentially free, and making a print is entirely optional now that we can share them on social networks. Digital photography fundamentally changed the economics of the business to the benefit of the consumer, and there was no going back. The consumables would all but disappear, and the internal hardware (sensors) would quickly become commoditized and unbranded.

So why did Kodak continue to invest in new film devices like the disc camera when they had the chance to become the leader in the new world of digital photography? This is most likely a hot topic in business management schools, and I’m sure that some are suggesting that Kodak purposely tried to delay the onset of digital photography to milk every last drop out of their film and developing cash cow. I haven’t researched Kodak’s story enough to know one way or the other, but I would guess that the full history is more complicated than that. That first working digital camera was only a proof of concept: the exposure time was reportedly 23 seconds, it captured the image to a cassette tape, playback required a separate TV, and the resolution was far worse than film. It would have been difficult to predict in 1975 that all the technical problems could be worked out, that a portable and easy-to-use device could be designed and manufactured, and that consumers would actually adopt a very different kind of camera. At the time, film-based photography would probably have seemed like the safer bet.

Now we know different. Digital photography displaced film faster than most would have predicted, and Kodak is contemplating declaring bankruptcy if they can’t sell off their patent portfolio. My friends’ eight-year-old learned about the concept of film the hard way, but those born in the near future will likely learn about photographic film only as a historical phenomenon.

All of this reminds me of a term that Giovanni Dosi introduced in an article he published back in 1982: “technological paradigms.” The term comes from Thomas Kuhn’s classic book The Structure of Scientific Revolutions, in which he argues that scientific knowledge develops within constraining paradigms that limit the kinds of questions researchers ask, the kinds of evidence they consider to be legitimate, and the sort of explanations they consider plausible. Instead of seeing the history of science as a smooth, continuous progression towards objective “Truth,” Kuhn portrays it as a series of dominant research paradigms that radically shift from one to the next.

In a similar vein, Dosi argues that technologies tend to develop along trajectories that are governed by a dominant paradigm. This paradigm limits what kinds of solutions are investigated, causing engineers to favor incremental changes to the existing paradigm over radical departures from it. Every practicing engineer knows that it is far easier to sell the management on a slight improvement to an existing design than on a risky, untested, radically new one. This is especially true when the new design would eliminate the most profitable aspect of the current business.

But eventually the dominant paradigm shifts, and that radical disruption creates new opportunities in the market that may enable new players to rise, or an even larger industry restructuring to occur. In the case of photography, the shift from film to digital is a case in point. Even though Kodak may have invented the techniques behind digital photography, they seem to have been limited by the dominant paradigm of film.

What other kinds of artifacts or devices will your children and grandchildren only know from history books?

The Phone Stack

Earlier this week, I ran across a story about a group of friends who have devised a clever way to keep themselves from getting distracted by their phones when they meet at a restaurant. After everyone has ordered, they all put their mobile phones facedown in the center of the table, sometimes stacked in a tall pile (which they call the “phone stack”). As the meal progresses, various phones might buzz or ring as new texts arrive, notifications are displayed, or calls are received. When this happens, the owner of the phone might be tempted to flip it over, but doing so comes at a cost: the first person to touch their phone has to pick up the check!

I like this idea for two reasons. First, it’s an ingenious yet simple mechanism for avoiding that all too common experience where your fellow diners spend more time interacting with their phones than with each other. Instead of pretending that mobile phones are not really a distraction, it puts them front and center, acknowledging their potential for disruption, yet declaring that their human owners still have the power to ignore them when engaged in face-to-face community. Turning their phones completely off might be even better, but keeping them on yet ignoring them seems to require even more reflective discipline. The public and very noticeable ritual of stacking the phones also acts like a kind of witness to others in the restaurant, advocating for the importance of being fully present when one has that rare opportunity to sit down with friends.

The other reason I like this is that it is a nice example of a more general phenomenon. When social groups adopt a new device, they often create rules or games like these to govern the use of that device when gathered together. Small, close-knit groups like the one that invented this game can easily enforce their rules, but larger cultures go through a social process of working out new social norms that are generally followed, at least to some degree. For example, movie theaters have been running messages before the films for several years now asking audiences to silence their mobile phones, but I’ve noticed recently that they have expanded this message by asking audiences to also refrain from using their phones at all, silently or otherwise, during the film. Just as it is now rare to hear a mobile phone audibly ring during a film, I hope it will soon be just as rare to see the glow of a phone screen as an audience member responds to a text message.

What kind of rules or games have your families or friends created to limit the use of mobile devices when gathered together?

Thinking Through Technology, part I

Over the holidays I started a new book that I think at least some of you will really want to read. It’s entitled Thinking Through Technology: The Path Between Engineering and Philosophy, and was written by Carl Mitcham back in 1994. It is by far the most complete review of the philosophy of technology I have ever read, but it also calls for a research agenda that is very close to my own.

The book is divided into two parts. Part I, entitled “Historical Traditions in the Philosophy of Technology,” is a rather long (and at times very dry) review of the existing literature, but with a few added twists. Mitcham very keenly observes that there are really two separate bodies of work that both use the name “philosophy of technology.” The first comes from engineers who step back from their day-to-day work to philosophize about what they do (e.g., Kapp, Engelmeier, Dessauer). The second comes from humanities scholars (philosophers, historians, sociologists, etc.) who make technology their primary object of inquiry (Mumford, Ortega y Gasset, Heidegger, Ellul, etc.). He refers to the former as Engineering Philosophy of Technology (EPT), and the latter as Humanities Philosophy of Technology (HPT).

Although these groups seem on the surface to be doing similar things, Mitcham shows how they are actually approaching the subject from vastly different perspectives. EPT tends to take a kinder view of technology, and is more analytic when it engages with specific devices or systems. HPT, on the other hand, tends to be far more critical of technology, and more interpretive when examining specific cases. EPT pays more attention to the act of engineering, stressing its inherent creativity and links to the other arts. HPT pays more attention to the societal “impacts” of technology, taking a far more technologically determinist view.

This should not be altogether surprising, Mitcham notes, when one considers the personal experience and motivations of those in each camp. The engineers-turned-philosophers speak from their direct experience making things and bringing new devices and systems to market. Because they understand the technologies at a deeper level, they can also analyze new systems more carefully, teasing out what is essential and fixed versus what is accidental and changeable. They also tend to recognize that any technology exists within a rich sociotechnical system of use, a system that is just as influenced by social forces as it is by technological ones.

Humanities scholars who examine technology typically don’t have any direct experience with the making of new technologies, nor do they have much in the way of theoretical engineering knowledge. They are reacting to a society that has seemingly lost interest in what these scholars know and love: the classic works of western thinkers found in most humanities curricula. They see the public glued to televisions, or in more recent years mobile communication devices and social networks, and fear their impending irrelevancy. Mitcham notes that HPT can often appear as “a series of rear-guard attempts to defend the fundamental idea of the primacy of the nontechnical” (39); that is, attempts to reclaim the idea that what matters most in this world is not engineering or its products, but the never-ending reflection on what it means to be human and to live justly together.

What I appreciate most about Mitcham is that he recognizes the need for both EPT and HPT. If we are to ever get a handle on what technology is and how it relates to society, we need the perspectives of both practicing engineers and humanities scholars. Each has only one part of the puzzle, and each has quite a lot to learn from the other.

After discussing EPT vs HPT, Mitcham ends Part I with the most complete review I’ve ever seen of the usage of the term ‘technology’ in scholarship. Here he relies more on the history of technology, though his sources are a bit dated, and thus his critiques are not necessarily as relevant given the more recent scholarship in the field. Still, he does a much more complete job of analyzing the use of tekhnē in classical Greek than I have ever seen before, and uses that to make the argument that technology in the modern era is fundamentally different from pre-modern craft and architecture. I would agree with him on that, but Mitcham is unfortunately so far silent on whether we are now moving into a post-modern era, and if so, how engineering and its products might be shifting again into something entirely different. Perhaps he will get into that in part II.