Monthly Archives: March 2012

Self-Driving Cars

Imagine for a moment what your daily commute would be like if your car could drive itself. You’d get in, tell your car where you want to go, and then sit back and let it take you there. You could read, work, eat, talk, text, or even sleep during the journey. The computer driving your car would automatically choose the best available route, and perfectly pilot the car to maximize fuel economy. And if everyone had a self-driving car, maneuvers that require social negotiation, such as merging or navigating an all-way stop, would be accomplished smoothly and quickly, since the computers controlling the cars would follow the same rules, or might even communicate with each other.

It sounds utterly utopian, doesn’t it? Of course, visions like these typically revel in the positives while completely ignoring the possible consequences, but that is often necessary in the early stages in order to capture the imagination. It’s only later that the messy details rise to the surface, and we as a culture have to conduct a frank discussion about decidedly untechnical things like safety, responsibility, and liability.

A case in point is the promotion of Google’s self-driving car prototype. Google released a new video this week that was picked up by a few news outlets, in which they show a legally blind man using the car to get a taco and pick up his dry cleaning. Here’s the video:

Although Google is famous for their April Fool’s jokes, this isn’t one of them. Google has been testing their self-driving car for a while now, and this latest video is an attempt to show one possible use for such a product: restoring mobility and independence to those who can no longer drive. But this is really only the tip of the iceberg. What the creators of Google’s self-driving car want to do is far more profound. They want to revolutionize transportation for everyone. This video explains:

In many ways, the idea of a driverless transportation system is not really new. Various forms of driverless subways are already in operation in many parts of the world. In the 1970s, the French attempted to build a driverless transportation system that featured individual cars that could join together to form quasi-trains when they reached a major arterial (see Latour’s book Aramis, or the Love of Technology). One can now ride fully-automated “pod” cars between terminals at London’s Heathrow airport. And a few high-end luxury vehicles already feature the ability to parallel park automatically.

While Google’s self-driving car takes this vision much further, there is a basic assumption that underlies all of these projects: humans are fallible, so dangerous things like driving should be given over to computerized automation, which is assumed to be perfect. As the rhetoric goes, computers don’t get tired or distracted, and they always make the logical choice.

But this, of course, assumes that the humans who program those computers and design those automated systems do not make any mistakes either. Computers don’t do things on their own—they follow the explicit instructions given to them by a human programmer. Anyone who has worked in the software industry knows that programmers are just as fallible as anyone else. Programmers get tired, distracted, and make mistakes, just like drivers do. Even when the programmer is concentrating fully, it’s sometimes impossible to see all the ramifications of a small change made to an obscure part of the code. Even if you get all the code right, there’s no guarantee that the connection between the computerized controller and the actual mechanics won’t break down or malfunction. And even if all that is working properly, one still has to worry about purposeful malicious behavior; consider for a minute what would happen if someone managed to hack into a self-driving car’s control system.

When I was in graduate school, I participated in a research network that was investigating ways to make computer-based systems highly dependable. Some researchers reported on ways in which actual systems had failed in practice, helping us learn from our mistakes. Others studied systems that had managed to achieve a remarkable level of dependability, trying to discern what factors in particular led to that achievement. What became obvious rather quickly was that dependability required far more than just good technique and well-engineered artifacts. It also required a highly disciplined social organization to operate that technical infrastructure, keep it in good repair, and make sure it does what it’s supposed to do.

When I apply this to self-driving cars, it raises a number of questions for me. Who will verify that the control systems are correctly designed and implemented? If problems are detected after manufacture, how will they be updated, and how will those updates be tested? When the system starts to fail, either due to software problems or mechanical issues, will it fail gracefully, and will drivers know how and be ready to resume control? And when the first accident occurs involving a self-driven car, who will be found liable? The driver? The manufacturer? The software developers?

I’m not saying that these problems are insurmountable, only that we will be forced to consider them before any kind of widespread revolution in transport can occur. The airline industry has traveled this road before, and the auto industry will no doubt learn from their mistakes and achievements. In the meantime, buckle up, and watch out for those self-driving cars!

Update: Although the self-driving Prius is real, Google’s April Fool’s day joke this year takes it to a whole new level: a self-driving NASCAR.


Television in Fiji

This week I had the pleasure of attending a screening of the film Miss Representation, a documentary about the way women are portrayed in the media. I highly recommend watching it, especially with your daughters. Like all activist documentaries, it’s full of shocking statistics that are completely decontextualized and un-cited, but the overall argument of the film is one that would be pretty hard not to agree with, at least to some degree. One doesn’t really need statistics to notice that there are problems with the way women are portrayed in the media, and that those portrayals are going to have some negative effects on women, especially adolescent girls.

After the film, there was a panel discussion during which one of the panelists offhandedly mentioned something that piqued my curiosity: a study that had been done on how the introduction of television to a rural Fijian area had affected the way adolescent girls thought about their bodies and eating habits. I did some searching when I got home, and found the original study, as well as a followup article that delved more deeply into the links between television and body image.

It turns out that Fiji is a perfect place to investigate this. Unlike Europeans and Americans, Fijians traditionally encouraged and celebrated what the primary author describes as “robust appetites and body shapes” (Becker et al., 2002: 509). A bit of girth was conventionally associated with strength and hard work, not laziness, and attempts to purposely reshape one’s body through dieting or exercise were typically discouraged. Prior to the introduction of television, there had been only one reported case of anorexia in all of Fiji, and other kinds of eating disorders were almost non-existent.

Given this kind of cultural foundation, the researchers asked, what kind of influence would television have? Would Fijian women and girls become increasingly body conscious and develop eating disorders like their American and European counterparts? Or would they retain their traditional body aesthetic and eating habits?

I should also note at this point that it wasn’t just the general technology of television that was being introduced to Fiji—it was also television shows and advertisements created in the United States, New Zealand, and Australia. With the exception of a short local newscast, all the programming came from cultures where the definition of a beautiful body was thin, tall, and fit. The popular shows in Fiji were ones that many of us have also seen or at least heard of: Xena: Warrior Princess; Beverly Hills 90210; Melrose Place; and the Australian dramatic series Shortland Street. All of them starred men and women who had a distinctly different body type than the typical Fijian. And interspersed throughout these shows were advertisements, most of which featured…you guessed it…exercise equipment and diet formulas.

The researchers conducted two studies of adolescent Fijian girls, one in 1995 just after television was introduced, and one three years later. In addition to capturing basic body measurements and TV watching behavior, the participants also responded to an “eating attitudes test” that included questions about binging and purging. Those who indicated that they had binged or purged were then interviewed to learn more about why they did it, and whether exposure to these TV programs may have played a role.

The results are pretty shocking. In those short three years, the percentage of families with a TV rose from 41% to 70%, and the percentage of girls who had unhealthy scores on the eating attitudes test had also risen from about 12% to nearly 30%. The use of self-induced vomiting, which none had admitted to in the first study, had risen to just over 11%. A feeling that one should eat less was also significantly higher, and 74% of the girls reported that they now felt like they were “too fat.”

Now, we should also temper this with all the caveats, most of which are noted in the original study. The number of participants was relatively low, just over 60, and not exactly the same between the two measurements. The participants were also self-reporting, which can often skew results. Not all forms of disordered eating behaviors rose (the use of laxatives and diuretics did not change, nor did incidents of binge eating). Lastly, and perhaps most importantly, the introduction of television was not the only aspect of Fijian culture that was changing during this period. Fiji, like most places in the world, was rapidly changing due to globalization, and Fijian girls had already been exposed to American culture via magazines and other print media long before TV arrived.

Nevertheless, these rapid and significant changes in eating habits certainly require explanation, and this is where the in-depth interviews with girls who reported binging or purging came into play. It is pretty clear from the quotes highlighted in the second article that many of the girls were trying to lose weight because they wanted to emulate the characters they saw on TV, primarily because those characters seemed so powerful, successful, and confident. The girls assumed that these qualities naturally flowed from the actresses’ tall, thin, and fit body shapes, and that reshaping their own bodies to match would be necessary in order to be competitive and ultimately successful in a quickly globalizing economy.

Of particular interest was the girls’ desire to be more like Xena, whom they saw as a strong and powerful woman who could successfully compete in a man’s world. In this sense, Xena provided a powerful feminist role model, but again the assumption was that her power came from her thin, athletic body, and not from her intelligence or integrity.

So what should we take away from all of this? Should we blame the medium of television for these rather sudden changes in eating habit and body image? Or could that medium have been used to convey shows featuring other types of body shapes? For example, would the results have been different if the programming shown had been made in Polynesia or the UK? Or does television, as a predominantly visual medium, naturally favor actors who resemble the producing culture’s ideal body shape, and thus end up determining the content that comes across it?

Regardless of how you answer these questions for yourself, I encourage you to watch the film Miss Representation and talk about it with your daughters, nieces, and friends. We need to help young women see through the impossible standard of beauty that is paraded constantly before them, and find other kinds of powerful, confident women that they can turn to for role models.

Media and True Stories

When I was in grad school in Scotland, I used to tell my classmates that if they wanted to understand the culture of the United States, they should listen to the podcasts from two radio shows: A Prairie Home Companion with Garrison Keillor; and This American Life with Ira Glass. The former is an old-fashioned radio variety show that captures the essence of that quirky, somewhat innocent, but deeply hospitable, traditional culture of the heartland. The latter captures the stories of everyday Americans who are struggling through disenchantment to find a new source of meaning, goodness, beauty, and truth. By listening to the two, one can get a sense of the dual nature of American culture, and the tensions that currently animate it.

I still listen to both programs, and this week’s show on This American Life was electrifying. It was a little unusual, in that the whole show was devoted to a retraction of a story they had previously aired about the working conditions at the Chinese factories that build Apple’s most beloved gadgets. The original story was told by Mike Daisey, an actor and activist who wrote the monologue The Agony and Ecstasy of Steve Jobs, in which Daisey purportedly describes his own experiences of visiting these factories and talking with the workers.

The monologue, as well as the story he told on This American Life, lays out a number of shocking accusations: at the gates of the infamous Foxconn factory, he talked with a group of underaged workers who were 12-14 years old; he met with workers who had been poisoned by n-hexane, a powerful neurotoxin that is used to clean iPhone screens; he showed an iPad to a man whose hand had been destroyed by the machine used to make the case; he saw the inside of worker dormitories that had bunk beds stacked to the ceiling and cameras that observed their every move; and he saw the guards at the factory gates brandishing guns to keep prying eyes away. All of this was capped off with the chilling rhetorical question: “do you really think that Apple doesn’t know?”

Unfortunately for Mr Daisey, the Chinese correspondent for the popular business and finance show Marketplace, Rob Schmitz, heard this story and had a hard time reconciling these claims with what he had observed and reported on over the last few years. Yes, Apple’s Chinese suppliers had routinely violated Apple’s own labor practice standards, the working conditions were notoriously harsh, and there had been a few terrible accidents, including the n-hexane poisoning. But several of the details in Daisey’s story just didn’t seem probable. Only the police and military are allowed to have guns in China, so corporate security guards brandishing firearms would be highly unlikely, and Schmitz had never seen such a thing before. There have been problems with underage workers in Chinese factories, but Apple in particular had been fairly aggressive in stopping that practice at their suppliers, and it would be highly unlikely for an underage worker to openly admit to being so to a strange American with a Chinese interpreter.

After some quick searching, Schmitz found the interpreter that Daisey used while in China. Schmitz sent her Daisey’s monologue and asked her if she could corroborate the details. She replied that most of the details were at least exaggerated, if not completely fabricated. They had gone to the gates of Foxconn, but didn’t encounter any underage workers. They had met some disgruntled employees who were trying to form an illegal union, but there were only a couple of workers there, and none of them had the injuries he described. The guards at the gates did not have guns, and Daisey was never allowed in the dormitories, so he couldn’t have known what they looked like.

Schmitz and Ira Glass confronted Daisey about all of this, and to their dismay, Daisey admitted to representing various stories he had heard only second-hand as if he had seen or heard them himself. His reasoning was that it was all “true” and that he represented these events that way for theatrical purposes. He thought that relaying his experiences accurately would require “unpacking the complexities” in such a way that it would make the narrative arc more confusing and less effective.

The confrontation between Schmitz, Glass, and Daisey was certainly worth listening to, but the part of the show that I found most interesting was how Glass tried to grapple with Daisey’s claims that his story could be considered “true” in a theatrical context, but not in a journalistic one. Daisey admitted that he took “a few shortcuts in my passion to be heard” but that he was proud of his use of “the tools of the theater and memoir to achieve [the story’s] dramatic arc…because it made you care, Ira.”

In other words, Daisey is claiming that a “true” story in the theater is one that makes you care, not one that is accurate in a literal sense. Daisey then expressed regret because he brought that story into a journalistic context, a context where what counts as a “true” story is significantly different. Exasperated by this, Glass chided Daisey that he was kidding himself if he thought that his audience understood this distinction. Glass himself attended the show and concluded “I thought it was true because you were on stage saying ‘this happened to me.’ I took you at your word.”

All of this raises an interesting question: how, if at all, does a medium affect what is considered a “true” story? (The term ‘medium’ is notoriously slippery, but I’m using it here in the same sense that Daisey was using the term ‘context’.) Can a story be true in the medium of theater, and then become less or untrue when it is moved to the medium of journalism? Does what counts as a true story differ between journalism and history? Do you assay the truth of a story differently when you hear it in the theater, on film, in journalistic print, or in academic discourse?

Actor-Network Theory

In my last post, I drew a map to plot out the various positions authors take when theorizing the technology and society relationship. I did that primarily so that I could destroy that map in this post by describing another theory that challenges the core assumption underlying all those other perspectives: that ‘technology’ and ‘society’ are two separate spheres that interact with each other in some kind of way.

Actor-Network Theory (or ANT for short) was developed in the 1980s and 90s primarily by three scholars: the French polymath Bruno Latour; the French engineer turned sociologist Michel Callon; and the English sociologist John Law. All three were involved in the early sociological studies of science and technology, and like their colleagues who developed the SCOT and SST positions, they argued against the technological determinism that was dominant at that time. But unlike the SCOT and SST approaches, ANT scholars took a closer look at this concept of ‘society’ that was supposedly exerting a shaping force upon technological artifacts. What they concluded was something that promised not only to revolutionize the way people thought about technology and society, but also to shake the entire foundation of social theory down to its core.

The problem, as Bruno Latour articulates it in his book Reassembling the Social: An Introduction to Actor-Network Theory, is that social theorists have traditionally thought of society as a ‘thing’, an entity that has some kind of separate existence from its participants. This thing called society was assumed to be made only of “social stuff” (as opposed to material stuff) that was surprisingly durable and all-encompassing of its participants, much like the ‘aether’ of 19th-century physics. This allowed society to be an independent ‘force’ that could somehow influence not only its human participants, but also the techniques and material artifacts they produced. But it also had the effect of making society into something that was more like a mysterious force than a phenomenon that could be studied in a scientific way.

Actor-Network Theory, in contrast, argues that there is actually no such ‘thing’ as society. That is to say, society isn’t an entity that is separate from its participants. There’s no place you can point to and say “there is where society is.” Instead, society is more like an enactment, a condition that is constantly re-achieved every time a set of people interact with one another in a given pattern. Because it exists only in time, it is also fragile and prone to change; as soon as people start acting differently, a new kind of society becomes traceable (this is similar to various interactionist positions, and Latour himself comes out of ethnomethodology).

Latour, who was originally trained as an anthropologist, began thinking about this in the context of baboon societies. Baboons, like humans, create relatively complex social orders, but they do so using only direct bodily interactions. One baboon can try to enforce a particular dominance hierarchy over others, but as soon as that baboon is no longer physically present, the order starts to break down. Even when the dominant baboon is there, others will periodically test and challenge its dominance, leading to a recurring re-negotiation. Because baboons are limited to physical bodily interactions, their societies remain limited in size, and are subject to frequent reorganization.

In contrast, human societies seem to have no problem growing quite large in scale, spreading over wide geographic areas, and expressing features that remain relatively durable across many generations. Even pre-modern tribal societies seem to create networks of social links that persist even when participants are not present, and stay more or less intact throughout many generations. How is this achieved? If a society needs to be constantly re-enacted, how can it be extended and reinforced across space and time?

What Latour and his fellow ANT scholars concluded was that the very mechanism that enables us to extend and reinforce human societies across space and time is the thing we call ‘technology’. Material artifacts such as clothing, jewelry, houses, and cars don’t just reflect something called social class, they actually make it possible to assert such a concept in the first place and reinforce it over space and time. Communication media such as books, newspapers, telegraphs, telephones, television, and now the Internet don’t sit apart from something called culture and influence it from time to time, they are the very things that make it possible to create a geographically dispersed yet cohesive culture in the first place. Techniques such as money, corporations, the scientific method, engineering, and law are not just products of a modern culture, they are the very things that construct and reinforce that kind of culture we call “modern.”

In other words, technology and society are so deeply and inextricably intertwined that it would be misleading to talk about them as if they were two separate things that interact from time to time. They are mutually constitutive, each literally making the other, completely interdependent for their very meaning. Without technology, there would be no human society, and without that society, there would be no technology.

For ANT scholars, society is nothing more than a temporary assemblage of connections made between ‘actors’. Actors are “anything that makes a difference,” so they include both humans and non-human agencies/artifacts that influence in some way the connections that are being made. When analysts trace these connections, they reveal various ‘networks’ of which the actors (and now the analyst) are a part. These networks are often consciously constructed by a particular actor (called a ‘network builder’), who ‘enrolls’ other actors (human or otherwise) into the network by ‘translating’ them (literally “speaking for” them). Networks allow some actors to influence the behavior of others, but they are also quite fragile; as soon as those other actors resist or otherwise thwart the translation effort, that part of the network will fall apart.

If you take ANT seriously (which I do), it requires you to completely reorient the way you think about technology and society. A question like “is the technology destroying our society?” becomes almost meaningless since technology is the very thing that makes a geographically-dispersed, temporally-stable social order possible. We can still discuss how a given technology is developed and adopted, and whether the way it changes our social links is a good or bad thing, but the idea of a human society existing without technology just doesn’t make any sense. I’ll elaborate on ANT in future posts, and show you how it can be used to better understand the technology-society relationship.

A Map of Typical Positions on Technology and Culture

In this post, I want to step back a bit from historical details in order to do some broad-stroke theory. I want to build a map for you that should help give you some orientation when wading into various writing on the technology and culture relationship. Those of you who study this all the time will probably find this post a bit of a review, and if that’s the case, feel free to skip it. But if you tend to find yourself getting more and more perplexed when reading conflicting perspectives on technology, this post should help you get your bearings.

Let’s start our map by laying out a spectrum on the horizontal axis.

Whenever an author theorizes the technology and culture relationship, that author must deal with one of the most basic questions in the field: in what direction do the influences flow? That is, does technology “impact” culture, does culture shape technology, or do both happen simultaneously? How an author answers this question can be plotted on this spectrum.

At one extreme is the position of technological determinism. People who ascribe to this believe that technologies impact an adopting culture in a kind of one-way, deterministic relationship. Technologies are seen as powerful, non-neutral forces that carry with them moral consequences, and produce deterministic effects. Extreme technological determinists also tend to think of technology as an autonomous force that actually guides and determines its own development. As one of my professors used to say, a strong technological determinist believes that once someone invents the techniques for radar, it’s really only a matter of time before we get the microwavable burrito.

On the other extreme is the position of social determinism, which is sometimes called instrumentalism by philosophers of technology. Extreme social determinists see technologies as completely neutral artifacts that can be used for good or for evil depending on the desires of the adopting individual or culture. This kind of position is wonderfully summarized by that well-known motto of the National Rifle Association (NRA): “guns don’t kill people; people kill people.”

I’ve portrayed these positions as extreme ends of a spectrum because it’s important to realize that very few authors subscribe to either of these positions wholeheartedly. Some certainly lean farther to one side or the other, but we should avoid labeling any author as being strictly a technological determinist or a social determinist. Most sit somewhere in between the extremes, which leads us to that position at the center: the social-shaping perspective.

The social-shaping of technology (SST) perspective acknowledges what is obviously true about both of the more extreme positions: technologies certainly do affect an adopting culture in significant ways; but historical cases also show quite clearly that engineers and adopting cultures play important roles in reshaping those technologies to better fit with their existing social values. SST sees technology and culture as “mutually constitutive,” (MacKenzie & Wajcman 1999) each creating and shaping the other. In other words, “guns don’t kill people, but they sure make it a heck of a lot easier.”

To complete our map, we need to add a vertical dimension to our existing horizontal one:

This vertical axis represents the moral attitude an author takes towards technological change. At one extreme is techno-optimism, a belief that our technologies are making the world a better place. In its most extreme forms, techno-optimists elevate technology to the position of savior, the ultimate tool with which we can save ourselves and create a utopia on earth. This position is excited about the possibilities of new technologies and says “full steam ahead” to any and all technological development.

At the other extreme is techno-pessimism, a position that sees technology not as a savior, but as a destroyer. Techno-pessimists think that technology is making the world a worse place, and that it might just end up killing us all (think nuclear holocaust, genetic engineering gone awry, sentient robots that turn against us, etc). This position tends to pine for the simpler days before industrialization, and is sympathetic towards Romanticism.

As with the other axis, this is of course a spectrum and most authors situate themselves somewhere in between the two extremes. At the very middle is a position I’ve called “double-edged sword.” This position argues that every technological change brings with it a wide array of consequences, some of which can be considered ‘good’, others ‘bad’, depending on your perspective. The costs and benefits of an innovation are never equally distributed in a given society, so whether you think a given technology is making the world better or worse largely depends on whether you received more of its benefits and less of its costs, or vice-versa.

Putting it all together, we get a map that looks something like this:

Most critics of technology (Christian or secular) tend to sit somewhere in the lower-left quadrant. They lean towards technological determinism, and they are generally pessimistic about future technological change. Jacques Ellul seems the most pessimistic to me—his book The Technological Society is almost fatalistic. Neil Postman is closer to the double-edged sword position, but he is still overall more pessimistic than optimistic. Marshall McLuhan is an unapologetic technological determinist, but he is far less pessimistic than other Christian critics.

In the upper-left quadrant we find people like Ray Kurzweil, who is extremely excited about the potential for a full human-machine integration. His belief in the inevitability of the “singularity” puts him on the technological determinist side, but unlike McLuhan or Ellul, he sees technology as a potential savior of humanity.

At the extreme corner of the upper-right quadrant would be the NRA sentiment I discussed earlier. The Social Construction of Technology (SCOT) position is probably the most social determinist theory I know of, but it takes a very neutral view on whether technology is making the world better or worse. The Social Shaping of Technology (SST) position appears on the map twice because the first edition of MacKenzie & Wajcman’s book in 1985 was far more social determinist than their second edition in 1999, which took a much more balanced tone.

Interestingly, I don’t know yet of any author that would fit into the lower-right quadrant, probably because those who lean towards social determinism rarely have an overly pessimistic view of technology.

Does this help you navigate your way around the various positions you may have encountered? Where would you place your favorite authors on this map?