Narrative Science

In his novel Timequake, Kurt Vonnegut tells of an architect named Frank who encounters a software program named Palladio. The program promises to enable anyone, regardless of training, to design any kind of architectural structure, in any kind of style, simply by specifying a few basic project parameters. Frank doubts that the program could really replicate the skills and knowledge he has gained and honed over many years, so he decides to put it to the test. He tells Palladio to design a three-story parking garage in the style of Thomas Jefferson’s Monticello. To his amazement, the program doesn’t refuse or crash. Instead, it takes him through menu after menu of project parameters, explaining how local codes would alter this or that aspect of the structure. At the end, the program produces detailed building plans and cost estimates, and it even offers to generate alternative plans in the style of Michael Graves or I. M. Pei. In typical Vonnegut style, Frank is so shocked and filled with despair that he immediately goes home and shoots himself.

I was reminded of this scene in Vonnegut’s novel after reading an article about the company Narrative Science. They have produced a software program that can automatically write news stories, in human-like prose, about sporting events and routine financial reports. They are now branching out into other genres, like in-house managerial reports, restaurant guides, and summaries of gaming tournaments. Last year they generated 400,000 such stories, all without a single human journalist.

Well, not quite. Like any program of this kind, theirs has to be trained, not only in the rules of a particular domain, but also in how to write appropriate-sounding prose for the target audience. The former is done by statisticians and programmers, but the latter requires seasoned journalists, who provide templates and style guides. Theoretically, however, once those journalists train the program to sound like them, the program could generate millions of stories all on its own.

So far, this program has been used to generate stories about minor sporting events and routine financial reports that normally would not garner the attention of a real reporter. For example, parents can capture play-by-play data about their son’s Little League baseball game and submit that data to Narrative Science. In a few minutes, the program can analyze the data and generate a story that highlights pivotal moments in the game as well as the final outcome, all written in the flamboyant style of a veteran sports reporter. By looking at earlier games in the same or a previous season, the program can also comment on how the team or individual players performed relative to other games and similar match-ups.
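Out of curiosity about how such a system might work under the hood, here is a minimal sketch of the template-and-data approach described above, assuming a simple play-by-play record and a single journalist-authored template. Everything in it (the `Play` fields, the `pivotal_play` heuristic, the template wording) is my own hypothetical illustration, not Narrative Science’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Play:
    inning: int
    batter: str
    event: str          # e.g. "double", "home run"
    runs_scored: int

def pivotal_play(plays):
    """A crude heuristic: treat the play that scored the most runs
    as the game's turning point."""
    return max(plays, key=lambda p: p.runs_scored)

def game_recap(home, away, home_score, away_score, plays):
    """Fill a journalist-authored template with facts mined from the data.
    A real system would choose among many templates and vary the diction
    to match a house style."""
    winner, loser = (home, away) if home_score > away_score else (away, home)
    key = pivotal_play(plays)
    return (
        f"{winner} edged {loser} {max(home_score, away_score)}-"
        f"{min(home_score, away_score)}. The turning point came in "
        f"inning {key.inning}, when {key.batter}'s {key.event} "
        f"drove in {key.runs_scored} runs."
    )

plays = [
    Play(inning=3, batter="Jones", event="double", runs_scored=1),
    Play(inning=6, batter="Ramirez", event="home run", runs_scored=3),
]
print(game_recap("Tigers", "Hawks", 5, 4, plays))
# -> Tigers edged Hawks 5-4. The turning point came in inning 6,
#    when Ramirez's home run drove in 3 runs.
```

The same pattern would extend to the earnings stories discussed below: swap the play-by-play schema for quarterly figures, and the sports template for financial boilerplate.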

Similarly, most corporate earnings reports go unnoticed by journalists, but this program can quickly analyze the various numbers, compare them with other firms in the same industry, and generate a story for stock holders and other interested parties that highlights important changes in the company’s performance.

Narrative Science is proud of the fact that their program has not yet put any journalists out of work, and they believe that it will be used primarily to generate stories that would normally never have been written in the first place. But when asked how long it would take before one of their computer-generated stories wins a Pulitzer Prize, their CTO guessed that it would happen within five years.

I’m a bit dubious about that last prediction, but I do find their system very interesting. Narrative Science has essentially picked the low-hanging fruit of professional writing: those routine, boring, and generally formulaic stories that might as well be written by a computer. In one sense, their program is similar to a simple machine tool that constructs a particular kind of part over and over again, but in another sense, they have gone far beyond that. By combining data mining techniques with prose generation, they have created a system that can not only find new insights in large datasets, but also communicate those insights to a wide audience in a style that the audience will recognize and trust.

But before we start worrying about whether their program will soon put all journalists out of work, we need to realize that this kind of program only works in data-rich domains, and the kinds of insights it can generate are limited by the quantity and quality of the data it receives. It can generate insights from complex data sets that a human might not notice, but it can’t really understand those irrational and murky depths of human emotions, motivations, and desires. I have a hard time, for example, seeing how it could cover a complex public policy debate, or ask tough questions about how a certain dataset was collected and whether it might be skewed or biased in some way.

Kurt Vonnegut’s first novel, Player Piano, was written in 1952 after he saw an early machine tool quickly make a turbine part that a skilled machinist would have taken far longer to produce. In the novel, he imagined a dystopian future where blue-collar workers had nothing left to do, and the entire society was run by managerial technocrats. We now know that things didn’t quite turn out this way (see David Noble’s classic book Forces of Production). Similarly, I don’t think that newsroom management will ever be able to replace human reporters entirely. No doubt, some of the more routine and formulaic reporting will become automated, but the more idiosyncratic stories will still require a reporter who understands the human condition.

Is Technological Determinism Making Us Stupid?

In a recent interview I did with the Figure/Ground project, the interviewer asked me what I thought of Stephen Marche’s recent article in The Atlantic entitled “Is Facebook Making Us Lonely?” I had read the article when it first ran, so I replied that if you read it closely, this article doesn’t really argue for the position implied in the title and abstract. Although Marche starts with the assumption that Facebook is making people lonely, he arrives at a much more nuanced position by the end. After I explained what I meant by that, I concluded by saying, “the better question to ask is why are these kinds of articles so popular? Why are we seeing such a sudden rash of articles entitled ‘is pick-your-new-technology making us stupid/narcissistic/lonely/shallow/etc.?’”

Thankfully, the interviewer didn’t ask me to answer my own question. If he had, I’m not sure I could have given him a good answer at the time. These kinds of articles are, of course, nothing terribly new. I remember articles from my youth that asked if calculators were making us lazy, or if Sony Walkmans were making us socially isolated and possibly deaf. A trip through the newspaper archives would no doubt reveal similar articles surrounding the mass-adoption of just about any new technological device, especially those since the 1960s.

Instead of trying to engage the specific questions that these articles pose, I think it might be more interesting to ask, why are these authors framing their questions in this sort of yes/no, pro/con, good/bad way? And why does framing their questions in that way seem to attract a large number of readers and secondary commentary?

The economically minded answer would probably note that these kinds of headlines are more attention-grabbing, and that the ultimate goal of any publication funded by advertising is to grab attention. I wouldn’t doubt that this is a contributing factor, and I’m happy that at least in the case of Marche’s article, he nevertheless finds a more nuanced position.

But I also wonder if technological determinism has seeped so far into the popular collective consciousness that it is difficult for journalists and the public to think any other way about technology and society. This kind of framing tends to betray an underlying assumption that technology “impacts” society in a kind of one-way, deterministic relationship. Authors may debate whether those impacts are good or bad, but they tend to assume that those impacts are inevitable and irreversible.

In the introduction to the classic book Does Technology Drive History?, Merritt Roe Smith argues that Americans in particular have always been attracted to this way of thinking because our national identity has always been wrapped up with technology and the ideology of progress. Our greatest heroes have been inventors and industrialists, not artists or humanitarians, and we commonly attribute our current global hegemony to our technological prowess.

But Americans have also become more willing since the 1960s to question the supposed benefits of new innovations, and to inquire about the often undisclosed costs. Nevertheless, this seems to happen only after the innovation becomes mass-adopted. When Google first appeared on the scene, journalists praised it for its clean look, efficiency, and uncanny ability to find what it was you were really looking for. We rooted for them as the up-and-coming underdog, and we rejoiced in their algorithms’ abilities to bring some kind of order to the ever-growing morass of information on the web. But once it became so ubiquitous that it transmogrified into its own verb, we began to see articles like Nicholas Carr’s “Is Google Making Us Stupid?”

Why do we frame the questions in these ways? And why do articles that use this kind of framing generate such interest and secondary commentary? Do they poke at some deep-seated anxieties that we have about technological change? Let me know what you think.

Update: I just found a fantastic blog post by a social media researcher named Zeynep Tufekci that offers three possible answers:

  1. We actually have become more isolated (in terms of strong ties) during the same period that social media has arisen, so we assume that the latter has caused the former, even though evidence to the contrary is legion.
  2. Online socialization really can’t entirely replace face-to-face interaction, so we also assume that increased use of social networking causes increased feelings of isolation, even though people who are social online are also social offline.
  3. “Just like we convert text (visual) into language in our head (which is all oral in the brain), we need to convert mediated-interaction to that visceral kind of sociality in our brain. And not everyone can do this equally well [a condition she calls ‘cyberasociality’]. And people who are cyberasocial are driving this discussion.”

See her post for more details, including links to primary research that backs up what she is saying.

Facebook Fast

A few posts back, I mentioned that I was giving up Facebook for Lent this year. Now that Lent is over, and I’m back on Facebook, I thought I would reflect a bit on how this limited form of a “digital sabbath” worked out.

At the start, I was concerned that this little experiment of mine might prove to be too difficult, as I really felt that I had become a bit too addicted to Facebook of late. Most of my work right now consists of long-term research, writing, and conference planning projects, so I would often check Facebook whenever I was a little bored, distracted, or just wanting to avoid doing my work. I wondered if I would actually make it until Easter, or if I would just cave part way through.

I have to admit that for the first couple of days, I often found my mouse impulsively shooting up to where the bookmark used to be in my browser window, only to be reminded by its absence of my Lenten fast. This impulse subsided after a few days though, and abstaining from Facebook turned out to be much easier than I thought it would be. I did break the fast once, to self-promote a piece published on Bloomberg.com, but other than that, I stayed off until Easter.

So what did I do with all that extra time? Some productive things, but also some unproductive things. On the productive side, I managed to read a number of books and articles I’ve been meaning to read for quite some time, and because I knew that I couldn’t break away and check Facebook when I became distracted, I found that I was better able to follow longer and more complex arguments. I also spent more time going on walks, thinking through problems, praying, and seeking direction. And I even got my sorely neglected saxophone out of its case and did some practicing, which felt really good.

But if I’m being honest, I also spent quite a lot of time doing things on the web that were simply pale substitutes for checking Facebook. Instead of checking to see who interacted with my latest status update, I routinely checked the page view stats on my blog, hoping to get that same feeling of acceptance and legitimation. Instead of reading and seeing what my friends were up to, I compulsively read news sites, hoping to feel more in touch with what was going on. And instead of sharing interesting articles I came across with my Facebook friends, I tried tweeting them, but I don’t think anyone was listening.

So does Facebook cause me to be more distracted, or is it just a convenient tool for fulfilling my own desire to be distracted? Is it making me shallow and narcissistic, or is it just one of many places where I can feed my existing insecurities?

The answer is probably a bit of both. As I’ve argued before, each of us needs to be aware not only of our own personal vulnerabilities, but also of whether the ways in which we are using our technologies are connecting with those vulnerabilities. I could try to blame Facebook for my foibles, but it’s probably more accurate to say that the affordances of Facebook are very well aligned with some of my existing vulnerabilities. If Facebook didn’t exist, I would still have those vulnerabilities, but I also need to recognize that particular ways of using Facebook might be making them worse.

Now that Lent is over and I’m back on Facebook, I’ve been much more conscious of the ways in which it can often hit my vulnerabilities. I’ve decided to limit my usage not just in terms of time, but also in terms of what I am trying to get from it. I’ll still post things that I think others will find interesting, but I’m trying not to care how many “likes” I get, or how many comments they might elicit. I still enjoy reading what my friends are doing, but I will try not to compare myself to them and feel inadequate when I don’t measure up. In other words, I don’t simply need to use Facebook less—I need to use it differently.

In a word, I’m domesticating Facebook, altering my usage of it so that it fits better into my life and aligns better with my stated social values. Instead of knee-jerk reactions that decry how Facebook is ruining our youth, we need to be encouraging each other to do this hard work of self-examination, being honest with ourselves about our personal vulnerabilities and the ways in which the devices and systems we use might be exacerbating them. For some, Facebook might pose little problem, but for others, some changes are probably in order. Let’s get to it.

Self-Driving Cars

Imagine for a moment what your daily commute would be like if your car could drive itself. You’d get in, tell your car where you want to go, and then sit back and let it take you there. You could read, work, eat, talk, text, or even sleep during the journey. The computer driving your car would automatically choose the best available route, and perfectly pilot the car to maximize fuel economy. And if everyone had a self-driving car, maneuvers that require social negotiation, such as merging or navigating an all-way stop, would be accomplished smoothly and quickly, since the computers controlling the cars would follow the same rules, or might even communicate with each other.

It sounds utterly utopian, doesn’t it? Of course, visions like these typically revel in the positives while completely ignoring the possible consequences, but that is often necessary in the early stages in order to capture the imagination. It’s only later that the messy details rise to the surface, and we as a culture have to conduct a frank discussion about decidedly untechnical things like safety, responsibility, and liability.

A case in point is the promotion of Google’s self-driving car prototype. Google released a new video this week that was picked up by a few news outlets, in which they show a legally blind man using the car to get a taco and pick up his dry cleaning. Here’s the video:

Although Google is famous for their April Fool’s jokes, this isn’t one of them. Google has been testing their self-driving car for a while now, and this latest video is an attempt to show one possible use for such a product: restoring mobility and independence to those who can no longer drive. But this is really only the tip of the iceberg. What the creators of Google’s self-driving car want to do is far more profound. They want to revolutionize transportation for everyone. This video explains:

In many ways, the idea of a driverless transportation system is not really new. Various forms of driverless subways are already in operation in many parts of the world. In the 1970s, the French attempted to build a driverless transportation system that featured individual cars that could join together to form quasi-trains when they reached a major arterial (see Latour’s book Aramis, or the Love of Technology). One can now ride fully automated “pod” cars between terminals at London’s Heathrow airport. And a few high-end luxury vehicles already feature the ability to parallel park automatically.

While Google’s self-driving car takes this vision much further, there is a basic assumption that underlies all of these projects: humans are fallible, so dangerous things like driving should be given over to computerized automation, which is assumed to be perfect. As the rhetoric goes, computers don’t get tired or distracted, and they always make the logical choice.

But this, of course, assumes that the humans who program those computers and design those automated systems do not make any mistakes either. Computers don’t do things on their own—they follow the explicit instructions given to them by a human programmer. Anyone who has worked in the software industry knows that programmers are just as fallible as anyone else. Programmers get tired, distracted, and make mistakes, just like drivers do. Even when the programmer is concentrating fully, it’s sometimes impossible to see all the ramifications of a small change made to an obscure part of the code. Even if you get all the code right, there’s no guarantee that the connection between the computerized controller and the actual mechanics won’t break down or malfunction. And even if all that is working properly, one still has to worry about purposeful malicious behavior; consider for a minute what would happen if someone managed to hack into a self-driving car’s control system.

When I was in graduate school, I participated in a research network that was investigating ways to make computer-based systems highly dependable. Some researchers reported on ways in which actual systems had failed in practice, helping us learn from our mistakes. Others studied systems that had managed to achieve a remarkable level of dependability, trying to discern what factors in particular led to that achievement. What became obvious rather quickly was that dependability required far more than just good technique and well-engineered artifacts. It also required a highly disciplined social organization to operate that technical infrastructure, keep it in good repair, and make sure it does what it’s supposed to do.

When I apply this to self-driving cars, it raises a number of questions for me. Who will verify that the control systems are correctly designed and implemented? If problems are detected after manufacture, how will they be updated, and how will those updates be tested? When the system starts to fail, either due to software problems or mechanical issues, will it fail gracefully, and will drivers know how and be ready to resume control? And when the first accident occurs involving a self-driving car, who will be found liable? The driver? The manufacturer? The software developers?

I’m not saying that these problems are insurmountable, only that we will be forced to consider them before any kind of widespread revolution in transport can occur. The airline industry has traveled this road before, and the auto industry will no doubt learn from their mistakes and achievements. In the meantime, buckle up, and watch out for those self-driving cars!

Update: Although the self-driving Prius is real, Google’s April Fool’s day joke this year takes it to a whole new level: a self-driving NASCAR.

Television in Fiji

This week I had the pleasure of attending a screening of the film Miss Representation, a documentary about the way women are portrayed in the media. I highly recommend watching it, especially with your daughters. Like all activist documentaries, it’s full of shocking statistics that are completely decontextualized and un-cited, but the overall argument of the film is one that would be pretty hard not to agree with, at least to some degree. One doesn’t really need statistics to notice that there are problems with the way women are portrayed in the media, and that those portrayals are going to have some negative effects on women, especially adolescent girls.

After the film, there was a panel discussion during which one of the panelists offhandedly mentioned something that piqued my curiosity: a study that had been done on how the introduction of television to a rural Fijian area had affected the way adolescent girls thought about their bodies and eating habits. I did some searching when I got home, and found the original study, as well as a followup article that delved more deeply into the links between television and body image.

It turns out that Fiji is a perfect place to investigate this. Unlike Europeans and Americans, Fijians traditionally encouraged and celebrated what the primary author describes as “robust appetites and body shapes” (Becker et al., 2002: 509). A bit of girth was conventionally associated with strength and hard work, not laziness, and attempts to purposely reshape one’s body through dieting or exercise were typically discouraged. Prior to the introduction of television, there had been only one reported case of anorexia in all of Fiji, and other kinds of eating disorders were almost non-existent.

Given this kind of cultural foundation, the researchers asked, what kind of influence would television have? Would Fijian women and girls become increasingly body conscious and develop eating disorders like their American and European counterparts? Or would they retain their traditional body aesthetic and eating habits?

I should also note at this point that it wasn’t just the general technology of television that was being introduced to Fiji—it was also television shows and advertisements created in the United States, New Zealand, and Australia. With the exception of a short local newscast, all the programming came from cultures where the definition of a beautiful body was thin, tall, and fit. The popular shows in Fiji were ones that many of us have also seen or at least heard of: Xena: Warrior Princess; Beverly Hills 90210; Melrose Place; and the New Zealand dramatic series Shortland Street. All of them starred men and women who had a distinctly different body type than the typical Fijian. And interspersed throughout these shows were advertisements, most of which featured…you guessed it…exercise equipment and diet formulas.

The researchers conducted two studies of adolescent Fijian girls, one in 1995 just after television was introduced, and one three years later. In addition to providing basic body measurements and TV-watching data, the participants responded to an “eating attitudes test” that included questions about binging and purging. Those who indicated that they had binged or purged were then interviewed to learn more about why they did it, and whether exposure to these TV programs may have played a role.

The results are pretty shocking. In those three short years, the percentage of families with a TV rose from 41% to 70%, and the percentage of girls who had unhealthy scores on the eating attitudes test rose from about 12% to nearly 30%. The use of self-induced vomiting, which none had admitted to in the first study, had risen to just over 11%. A feeling that one should eat less was also significantly higher, and 74% of the girls reported that they now felt like they were “too fat.”

Now, we should also temper this with all the caveats, most of which are noted in the original study. The number of participants was relatively low, just over 60, and not exactly the same between the two measurements. The participants were also self-reporting, which can often skew results. Not all forms of disordered eating behaviors rose (the use of laxatives and diuretics did not change, nor did incidents of binge eating). Lastly, and perhaps most importantly, the introduction of television was not the only aspect of Fijian culture that was changing during this period. Fiji, like most places in the world, was rapidly changing due to globalization, and Fijian girls had already been exposed to American culture via magazines and other print media long before TV arrived.

Nevertheless, these rapid and significant changes in eating habits certainly require explanation, and this is where the in-depth interviews with girls who reported binging or purging came into play. It is pretty clear from the quotes highlighted in the second article that many of the girls were trying to lose weight because they wanted to emulate the characters they saw on TV, primarily because those characters seemed so powerful, successful, and confident. The girls assumed that these qualities naturally flowed from the actresses’ tall, thin, and fit body shapes, and that reshaping their own bodies to match would be necessary in order to be competitive and ultimately successful in a quickly globalizing economy.

Of particular interest was the girls’ desire to be more like Xena, who they saw as a strong and powerful woman who could successfully compete in a man’s world. In this sense, Xena provided a powerful feminist role model, but again the assumption was that her power came from her thin, athletic body, and not from her intelligence or integrity.

So what should we take away from all of this? Should we blame the medium of television for these rather sudden changes in eating habits and body image? Or could that medium have been used to convey shows featuring other types of body shapes? For example, would the results have been different if the programming shown had been made in Polynesia or the UK? Or does television, as a predominantly visual medium, naturally favor actors who resemble the producing culture’s ideal body shape, and thus end up determining the content that comes across it?

Regardless of how you answer these questions for yourself, I encourage you to watch the film Miss Representation and talk about it with your daughters, nieces, and friends. We need to help young women see through the impossible standard of beauty that is paraded constantly before them, and find other kinds of powerful, confident women that they can turn to for role models.

Media and True Stories

When I was in grad school in Scotland, I used to tell my classmates that if they wanted to understand the culture of the United States, they should listen to the podcasts from two radio shows: A Prairie Home Companion with Garrison Keillor and This American Life with Ira Glass. The former is an old-fashioned radio variety show that captures the essence of that quirky, somewhat innocent, but deeply hospitable, traditional culture of the heartland. The latter captures the stories of everyday Americans who are struggling through disenchantment to find a new source of meaning, goodness, beauty, and truth. By listening to the two, one can get a sense of the dual nature of American culture, and the tensions that currently animate it.

I still listen to both programs, and this week’s show on This American Life was electrifying. It was a little unusual, in that the whole show was devoted to a retraction of a story they had previously aired about the working conditions at the Chinese factories that build Apple’s most beloved gadgets. The original story was told by Mike Daisey, an actor and activist who wrote the monologue The Agony and Ecstasy of Steve Jobs, in which Daisey purportedly describes his own experiences of visiting these factories and talking with the workers.

The monologue, as well as the story he told on This American Life, lays out a number of shocking accusations: at the gates of the infamous Foxconn factory, he talked with a group of underage workers who were 12-14 years old; he met with workers who had been poisoned by n-hexane, a powerful neurotoxin that is used to clean iPhone screens; he showed an iPad to a man whose hand had been destroyed by the machine used to make the case; he saw the inside of worker dormitories that had bunk beds stacked to the ceiling and cameras that observed their every move; and he saw the guards at the factory gates brandishing guns to keep prying eyes away. All of this was capped off with the chilling rhetorical question: “do you really think that Apple doesn’t know?”

Unfortunately for Mr. Daisey, the Chinese correspondent for the popular business and finance show Marketplace, Rob Schmitz, heard this story and had a hard time reconciling these claims with what he had observed and reported on over the last few years. Yes, Apple’s Chinese suppliers had routinely violated Apple’s own labor practice standards, the working conditions are notoriously harsh, and there had been a few terrible accidents, including the n-hexane poisoning. But several of the details in Daisey’s story just didn’t seem probable. Only the police and military are allowed to have guns in China, so corporate security guards brandishing firearms would be highly unlikely, and Schmitz had never seen such a thing before. There have been problems with underage workers in Chinese factories, but Apple in particular had been fairly aggressive in stopping that practice at their suppliers, and it would be highly unlikely for an underage worker to openly admit to being so to a strange American with a Chinese interpreter.

After some quick searching, Schmitz found the interpreter that Daisey used while in China. Schmitz sent her Daisey’s monologue and asked her if she could corroborate the details. She replied that most of the details were at least exaggerated, if not completely fabricated. They had gone to the gates of Foxconn, but didn’t encounter any underage workers. They had met some disgruntled employees who were trying to form an illegal union, but there were only a couple of workers there, and none of them had the injuries he described. The guards at the gates did not have guns, and Daisey was never allowed in the dormitories, so he couldn’t have known what they looked like.

Schmitz and Ira Glass confronted Daisey about all of this, and to their dismay, Daisey admitted to representing various stories he had heard only second-hand as if he had seen or heard them himself. His reasoning was that it was all “true” and that he represented these events that way for theatrical purposes. He thought that relaying his experiences accurately would “unpack the complexities” in such a way that it would make the narrative arc more confusing and less effective.

The confrontation between Schmitz, Glass, and Daisey was certainly worth listening to, but the part of the show that I found most interesting was how Glass tried to grapple with Daisey’s claims that his story could be considered “true” in a theatrical context, but not in a journalistic one. Daisey admitted that he took “a few shortcuts in my passion to be heard” but that he was proud of his use of “the tools of the theater and memoir to achieve [the story’s] dramatic arc…because it made you care, Ira.”

In other words, Daisey is claiming that a “true” story in the theater is one that makes you care, not one that is accurate in a literal sense. Daisey then expressed regret because he brought that story into a journalistic context, a context where what counts as a “true” story is significantly different. Exasperated by this, Glass chided Daisey that he was kidding himself if he thought that his audience understood this distinction. Glass himself attended the show and concluded “I thought it was true because you were on stage saying ‘this happened to me.’ I took you at your word.”

All of this raises an interesting question: how, if at all, does a medium affect what is considered a “true” story? (The term ‘medium’ is notoriously slippery, but I’m using it here in the same sense that Daisey was using the term ‘context’.) Can a story be true in the medium of theater, and then become less or untrue when it is moved to the medium of journalism? Does what counts as a true story differ between journalism and history? Do you assay the truth of a story differently when you hear it in the theater, on film, in journalistic print, or in academic discourse?

Actor-Network Theory

In my last post, I drew a map to plot out the various positions authors take when theorizing the technology and society relationship. I did that primarily so that I could destroy that map in this post by describing another theory that challenges the core assumption underlying all those other perspectives: that ‘technology’ and ‘society’ are two separate spheres that interact with each other in some kind of way.

Actor-Network Theory (or ANT for short) was developed in the 1980s and 90s primarily by three scholars: the French polymath Bruno Latour; the French engineer turned sociologist Michel Callon; and the English sociologist John Law. All three were involved in the early sociological studies of science and technology, and like their colleagues who developed the SCOT and SST positions, they argued against the technological determinism that was dominant at that time. But unlike the SCOT and SST approaches, ANT scholars took a closer look at this concept of ‘society’ that was supposedly exerting a shaping force upon technological artifacts. What they concluded was something that promised not only to revolutionize the way people thought about technology and society, but also to shake the entire foundation of social theory down to its core.

The problem, as Bruno Latour articulates it in his book Reassembling the Social: An Introduction to Actor-Network Theory, is that social theorists have traditionally thought of society as a ‘thing’, an entity that has some kind of separate existence from its participants. This thing called society was assumed to be made only of “social stuff” (as opposed to material stuff) that was surprisingly durable and all-encompassing of its participants, much like the ‘aether’ of 19th-century physics. This allowed society to be an independent ‘force’ that could somehow influence not only its human participants, but also the techniques and material artifacts they produced. But it also had the effect of making society into something that was more like a mysterious force than a phenomenon that could be studied in a scientific way.

Actor-Network Theory, in contrast, argues that there is actually no such ‘thing’ as society. That is to say, society isn’t an entity that is separate from its participants. There’s no place you can point to and say “there is where society is.” Instead, society is more like an enactment, a condition that is constantly re-achieved every time a set of people interact with one another in a given pattern. Because it exists only in time, it is also fragile and prone to change; as soon as people start acting differently, a new kind of society becomes traceable (this is similar to various interactionist positions, and Latour himself comes out of ethnomethodology).

Latour, who was originally trained as an anthropologist, began thinking about this in the context of baboon societies. Baboons, like humans, create relatively complex social orders, but they do so using only direct bodily interactions. One baboon can try to enforce a particular dominance hierarchy over others, but as soon as that baboon is no longer physically present, the order starts to break down. Even when the dominant baboon is there, others will periodically test and challenge its dominance, leading to a recurring re-negotiation. Because baboons are limited to physical bodily interactions, their societies remain limited in size, and are subject to frequent reorganization.

In contrast, human societies seem to have no problem growing quite large in scale, spreading over wide geographic areas, and expressing features that remain relatively durable across many generations. Even pre-modern tribal societies seem to create networks of social links that persist even when participants are not present, and stay more or less intact throughout many generations. How is this achieved? If a society needs to be constantly re-enacted, how can it be extended and reinforced across space and time?

What Latour and his fellow ANT scholars concluded was that the very mechanism that enables us to extend and reinforce human societies across space and time is the thing we call ‘technology’. Material artifacts such as clothing, jewelry, houses, and cars don’t just reflect something called social class, they actually make it possible to assert such a concept in the first place and reinforce it over space and time. Communication media such as books, newspapers, telegraphs, telephones, television, and now the Internet don’t sit apart from something called culture and influence it from time to time, they are the very things that make it possible to create a geographically dispersed yet cohesive culture in the first place. Techniques such as money, corporations, the scientific method, engineering, and law are not just products of a modern culture, they are the very things that construct and reinforce that kind of culture we call “modern.”

In other words, technology and society are so deeply and inextricably intertwined that it would be misleading to talk about them as if they were two separate things that interact from time to time. They are mutually constitutive, each literally making the other, completely interdependent for their very meaning. Without technology, there would be no human society, and without that society, there would be no technology.

For ANT scholars, society is nothing more than a temporary assemblage of connections made between ‘actors’. Actors are “anything that makes a difference,” so they include both humans and non-human agencies/artifacts that influence in some way the connections that are being made. When analysts trace these connections, they reveal various ‘networks’ of which the actors (and now the analyst) are a part. These networks are often consciously constructed by a particular actor (called a ‘network builder’), who ‘enrolls’ other actors (human or otherwise) into the network by ‘translating’ them (literally “speaking for” them). Networks allow some actors to influence the behavior of others, but they are also quite fragile; as soon as those other actors resist or otherwise thwart the translation effort, that part of the network will fall apart.

If you take ANT seriously (which I do), it requires you to completely reorient the way you think about technology and society. A question like “is technology destroying our society?” becomes almost meaningless, since technology is the very thing that makes a geographically dispersed, temporally stable social order possible. We can still discuss how a given technology is developed and adopted, and whether the way it changes our social links is a good or bad thing, but the idea of a human society existing without technology just doesn’t make any sense. I’ll elaborate on ANT in future posts, and show you how it can be used to better understand the technology-society relationship.