Category Archives: News

Google Doodle for Bob Moog’s Birthday

Did you see the Google Doodle for today? It’s a functional model of an analog synthesizer in honor of what would have been Bob Moog’s 78th birthday. You can adjust the oscillator, filter, and envelope settings to create a wide range of sounds. It even has a recorder attached to it so you can capture your creations and share them with others!
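For the curious: the doodle’s signal chain (oscillator, then filter, then envelope) is the classic subtractive synthesis architecture that Moog pioneered. Here is a minimal Python sketch of one such voice; the filter design and parameter choices are my own illustrative assumptions, not anything taken from the doodle itself:

```python
import math

SAMPLE_RATE = 44100  # samples per second

def synth_voice(freq=440.0, duration=1.0, cutoff=2000.0,
                attack=0.05, release=0.3):
    """One note through the subtractive chain:
    oscillator -> low-pass filter -> amplitude envelope."""
    n = int(SAMPLE_RATE * duration)

    # Oscillator: a bright sawtooth wave, rich in harmonics
    osc = [2.0 * ((i * freq / SAMPLE_RATE) % 1.0) - 1.0 for i in range(n)]

    # Filter: a one-pole low-pass that tames the upper harmonics
    alpha = 1.0 / (1.0 + SAMPLE_RATE / (2 * math.pi * cutoff))
    out, prev = [], 0.0
    for s in osc:
        prev += alpha * (s - prev)
        out.append(prev)

    # Envelope: simple attack/release shaping of loudness over time
    for i in range(n):
        t = i / SAMPLE_RATE
        if t < duration - release:
            env = min(t / attack, 1.0)
        else:
            env = max((duration - t) / release, 0.0)
        out[i] *= env
    return out

samples = synth_voice()  # a filtered, enveloped 440 Hz sawtooth
```

Twisting the doodle’s knobs amounts to changing parameters like `cutoff` and `attack` here: a lower cutoff gives a darker, rounder tone, while a longer attack makes the note swell in rather than strike.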

Over a year ago now, I wrote a couple of posts about Moog (rhymes with ‘rogue’) and his synthesizer. The first was inspired by a documentary about Moog and his work. Here is a trailer for that, in which he discusses how people reacted to the synthesizer when it was first introduced:

Moog recounts how critics at the time really didn’t know what to make of his creation. For them, “real music” came only from strings, wood, brass, or skins. These new electronic synthesizers seemed more like sophisticated noise-makers, something useful for sound-effects engineers, but hardly something that could be categorized as a “musical instrument.” Moog’s most strident critics actually accused him of “destroying music” by introducing a most “unnatural” device.

The synthesizer’s shift from “noise-maker” to “musical instrument” is captured well in Pinch and Trocco’s book Analog Days, which was the subject of my second post on Moog. These authors trace the early days of the Moog, describing how it quickly became a staple feature for psychedelic rock bands of the late 1960s. But in the fall of 1968, a recording was released that completely changed how people thought about what the synthesizer was, and what it was good for. It was called Switched-On Bach, and as the title implies, it featured the works of Johann Sebastian Bach performed entirely on the synthesizer. The album was an instant hit, and was one of the first classical recordings ever to go platinum. That album inspired many other keyboardists to explore the potential of the synthesizer and integrate it into their creative work.

I think the history of the synthesizer is valuable for two reasons. First, it reminds us to be careful about conflating the concepts of “natural” and “traditional.” The synthesizer was certainly untraditional when it was introduced, but it was just as much an artifact, and therefore just as unnatural, as a violin or saxophone. And instead of destroying music, it opened up entirely new sonic possibilities that helped expand the creative potential of musicians. We need to be careful when making dire predictions about how this or that new device will destroy some aspect of our traditional culture—it may very well turn out to be quite the opposite.

Second, the synthesizer, like the iPad or the telephone, is the kind of device that requires a bit of “working out” before a culture decides what it actually is and what it’s good for. The synthesizer’s social meaning was underdetermined and somewhat flexible when it was first introduced, and the way it turned out was influenced just as much by its initial users as it was by those who designed, produced and marketed it. Early adopters often play key roles in redefining and reshaping new devices so that they better fit into the target culture.

OK, enough theorizing—now go make some music!

The Digital Society Conference


A little over a year ago, I attended a conference on technology, culture, and Christian spirituality down at Laity Lodge in Texas. That conference featured Albert Borgmann, the well-known philosopher of technology, as well as those who have found his work to be an inspiration for their own.

It was an engaging and fun conference, but my colleague Al Erisman and I returned from that trip feeling that something was missing from the discussion. Both of us felt that the practical experiences of those who design, develop, and direct technical projects were not yet integrated into the theoretical perspectives of the academics. I also felt that the insights from more recent science and technology studies could add more nuance and balance to the discussion.

In response, I started this blog, and Al started writing some pieces for his journal Ethix. We both spent some time working out our thoughts, and when we met again last fall, we decided to organize another conference, one that would continue the great work done down at Laity, but also build upon it and push the conversation forward in light of our current context.

I want to invite you to join us at this conference. We seek to gather a diverse set of people who are interested in rethinking the Christian commentary on technology for the digital era. Our aim is to start a new conversation that blends the theoretical perspectives from academia with the practical experiences of those who actively work with and on information technologies. Al and I, along with several of our speakers, have worked in both arenas, and know how valuable it is to have each of these perspectives inform the other.

The conference will be held this summer, June 22-23, on the Seattle Pacific University campus (Seattle, WA, USA). We have a fantastic set of keynote speakers, whose names regular readers of this blog will no doubt recognize:

They will be joined by several other panelists who will discuss the Christian commentary on technology thus far, how communities and individuals are flourishing (or withering) in online spaces, and how we can integrate our Christian faith with our engineering practice.

Space constraints require us to limit the size of this conference, so register early to guarantee your place!

If you know someone who would be interested in this conference, please forward this post to them, or send them a direct link to the conference web site: http://www.spu.edu/digitalsociety.

I hope to see many of you at the conference!

Narrative Science

In his novel Timequake, Kurt Vonnegut tells of an architect named Frank who encounters a software program named Palladio. The program promises to enable anyone, regardless of training, to design any kind of architectural structure, in any kind of style, simply by specifying a few basic project parameters. Frank doubts that the program could really replicate the skills and knowledge he has gained and honed over many years, so he decides to put it to the test. He tells Palladio to design a three-story parking garage in the style of Thomas Jefferson’s Monticello. To his amazement, the program doesn’t refuse or crash. Instead, it takes him through menu after menu of project parameters, explaining how local codes would alter this or that aspect of the structure. At the end, the program produces detailed building plans and cost estimates, and it even offers to generate alternative plans in the style of Michael Graves or I. M. Pei. In typical Vonnegut style, Frank is so shocked and filled with despair that he immediately goes home and shoots himself.

I was reminded of this scene in Vonnegut’s novel after reading an article about the company Narrative Science. They have produced a software program that can automatically write news stories, in human-like prose, about sporting events and routine financial reports. They are now branching out into other genres, like in-house managerial reports, restaurant guides, and summaries of gaming tournaments. Last year they generated 400,000 such stories, all without a single human journalist.

Well, not quite. Like all software programs, theirs has to be trained, not only in the rules of a particular domain, but also in how to write appropriate-sounding prose for the target audience. The former is done by statisticians and programmers, but the latter requires seasoned journalists, who provide templates and style guides. Theoretically, however, once those journalists train the program to sound like them, the program could generate millions of stories all on its own.

So far, this program has been used to generate stories about minor sporting events and routine financial reports that normally would not garner the attention of a real reporter. For example, parents can capture play-by-play data about their son’s Little League baseball game, and submit that to Narrative Science. In a few minutes, the program can analyze the data and generate a story that highlights pivotal moments in the game as well as the final outcome, all written in the flamboyant style of a veteran sports reporter. By looking at earlier games from the same or a previous season, the program can also comment on how the team or individual players performed relative to other games and similar match-ups.
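To make that division of labor concrete, here is a toy Python sketch of the template approach: a simple heuristic picks the pivotal play, and a journalist-authored template renders it as prose. Everything here (the field names, the data, the heuristic) is my own invention for illustration; Narrative Science’s actual system is surely far more sophisticated:

```python
import random

# Toy play-by-play records: (inning, player, event, score_after).
# This schema is illustrative, not Narrative Science's.
plays = [
    (3, "Sam", "two-run homer", (2, 0)),
    (5, "Riley", "bases-loaded strikeout", (2, 1)),
    (7, "Sam", "walk-off single", (3, 2)),
]

# Templates a journalist might author to set the house style
OPENERS = [
    "{player} was the hero on {day}, delivering a {event} in the {inning}th.",
    "It all came down to the {inning}th, when {player} ripped a {event}.",
]

def recap(plays, day="Saturday"):
    """Pick the pivotal play, then render it through a reporter-style
    template chosen at random for variety."""
    inning, player, event, score = plays[-1]  # toy heuristic: the final play decides it
    lead = random.choice(OPENERS).format(
        player=player, event=event, inning=inning, day=day)
    return f"{lead} Final score: {score[0]}-{score[1]}."

print(recap(plays))
# e.g. "Sam was the hero on Saturday, delivering a walk-off single
#       in the 7th. Final score: 3-2."
```

The real insight is in the split: statisticians encode what counts as “pivotal,” while journalists own the templates, which is exactly why the output can sound like a seasoned reporter.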

Similarly, most corporate earnings reports go unnoticed by journalists, but this program can quickly analyze the various numbers, compare them with other firms in the same industry, and generate a story for stockholders and other interested parties that highlights important changes in the company’s performance.

Narrative Science is proud of the fact that their program has not yet put any journalists out of work, and they believe that it will be used primarily to generate stories that would normally never have been written in the first place. But when asked how long they think it will take before one of their computer-generated stories would win a Pulitzer Prize, their CTO guessed that it would be within five years.

I’m a bit dubious about that last prediction, but I do find their system very interesting. Narrative Science has essentially picked the low-hanging fruit of professional writing: those routine, boring, and generally formulaic stories that might as well be written by a computer. In one sense, their program is similar to a simple machine tool that constructs one particular kind of part over and over again, but in another sense, they have gone far beyond that. By combining data mining techniques with prose generation, they have created a system that can not only find new insights in large datasets, but also communicate those insights to a wide audience in a style that the audience will recognize and trust.

But before we start worrying about whether their program will soon put all journalists out of work, we need to realize that this kind of program only works in data-rich domains, and the kinds of insights it can generate are limited by the quantity and quality of the data it receives. It can generate insights from complex data sets that a human might not notice, but it can’t really understand those irrational and murky depths of human emotions, motivations, and desires. I have a hard time, for example, seeing how it could cover a complex public policy debate, or ask tough questions about how a certain dataset was collected, and whether it might be skewed or biased in some way.

Kurt Vonnegut’s first novel, Player Piano, was written in 1952 after he saw an early machine tool quickly make a turbine part that used to require a skilled machinist much longer to accomplish. In the novel, he imagined a dystopian future where blue-collar workers had nothing left to do, and the entire society was run by managerial technocrats. We now know that things didn’t quite turn out this way (see David Noble’s classic book Forces of Production). Similarly, I don’t think that newsroom management will ever be able to replace human reporters entirely. No doubt, some of the more routine and formulaic reporting will become automated, but the more idiosyncratic stories will still require a reporter who understands the human condition.

Is Technological Determinism Making Us Stupid?

In a recent interview I did with the Figure/Ground project, the interviewer asked me what I thought of Stephen Marche’s recent article in The Atlantic entitled “Is Facebook Making Us Lonely?” I had read the article when it first ran, so I replied that if you read it closely, this article doesn’t really argue for the position implied in the title and abstract. Although Marche starts with the assumption that Facebook is making people lonely, he ends up articulating a much more nuanced position by the end. After I explained what I meant by that, I concluded by saying, “the better question to ask is why are these kinds of articles so popular? Why are we seeing such a sudden rash of articles entitled ‘is pick-your-new-technology making us stupid/narcissistic/lonely/shallow/etc.?'”

Thankfully, the interviewer didn’t ask me to answer my own question. If he had, I’m not sure I could have given him a good answer at the time. These kinds of articles are, of course, nothing terribly new. I remember articles from my youth that asked if calculators were making us lazy, or if Sony Walkmans were making us socially isolated and possibly deaf. A trip through the newspaper archives would no doubt reveal similar articles surrounding the mass-adoption of just about any new technological device, especially those since the 1960s.

Instead of trying to engage the specific questions that these articles pose, I think it might be more interesting to ask, why are these authors framing their questions in this sort of yes/no, pro/con, good/bad way? And why does framing their questions in that way seem to attract a large number of readers and secondary commentary?

The economically-minded answer would probably note that these kinds of headlines are more attention-grabbing, and that the ultimate goal of any publication funded by advertising is to grab attention. I wouldn’t doubt that this is a contributing factor, and I’m happy that at least in the case of Marche’s article, he nevertheless finds a more nuanced position.

But I also wonder if technological determinism has seeped so far into the popular collective consciousness that it is difficult for journalists and the public to think any other way about technology and society. This kind of framing tends to betray an underlying assumption that technology “impacts” society in a kind of one-way, deterministic relationship. Authors may debate whether those impacts are good or bad, but they tend to assume that those impacts will always be inevitable, deterministic, and irreversible.

In the introduction to the classic book Does Technology Drive History?, Merritt Roe Smith argues that Americans in particular have always been attracted to this way of thinking because our national identity has always been wrapped up with technology and the ideology of progress. Our greatest heroes have been inventors and industrialists, not artists or humanitarians, and we commonly attribute our current global hegemony to our technological prowess.

But Americans have also become more willing since the 1960s to question the supposed benefits of new innovations, and to inquire about the often undisclosed costs. Nevertheless, this seems to happen only after the innovation becomes mass-adopted. When Google first appeared on the scene, journalists praised it for its clean look, efficiency, and uncanny ability to find what you were really looking for. We rooted for them as the up-and-coming underdog, and we rejoiced in their algorithms’ abilities to bring some kind of order to the ever-growing morass of information on the web. But once it became so ubiquitous that it transmogrified into its own verb, we began to see articles like Nicholas Carr’s “Is Google Making Us Stupid?”

Why do we frame the questions in these ways? And why do articles that use this kind of framing generate such interest and secondary commentary? Do they poke at some deep-seated anxieties that we have about technological change? Let me know what you think.

Update: I just found a fantastic blog post by a social media researcher named Zeynep Tufekci that offers three possible answers:

  1. We actually have become more isolated (in terms of strong ties) during the same period that social media has arisen, so we assume that the latter has caused the former, even though evidence to the contrary is legion.
  2. Online socialization really can’t entirely replace face-to-face interaction, so we also assume that increased use of social networking causes increased feelings of isolation, even though people who are social online are also social offline.
  3. “Just like we convert text (visual) into language in our head (which is all oral in the brain), we need to convert mediated-interaction to that visceral kind of sociality in our brain. And not everyone can do this equally well [a condition she calls ‘cyberasociality’]. And people who are cyberasocial are driving this discussion.”

See her post for more details, including links to primary research that backs up what she is saying.

Self-Driving Cars

Imagine for a moment what your daily commute would be like if your car could drive itself. You’d get in, tell your car where you want to go, and then sit back and let it take you there. You could read, work, eat, talk, text, or even sleep during the journey. The computer driving your car would automatically choose the best available route, and perfectly pilot the car to maximize fuel economy. And if everyone had a self-driving car, maneuvers that require social negotiation, such as merging or navigating an all-way stop, would be accomplished smoothly and quickly, since the computers controlling the cars would follow the same rules, or might even communicate with each other.

It sounds utterly utopian, doesn’t it? Of course, visions like these typically revel in the positives while completely ignoring the potential downsides, but that is often necessary in the early stages in order to capture the imagination. It’s only later that the messy details rise to the surface, and we as a culture have to conduct a frank discussion about decidedly untechnical things like safety, responsibility, and liability.

A case in point is the promotion of Google’s self-driving car prototype. Google released a new video this week that was picked up by a few news outlets, in which they show a legally blind man using the car to get a taco and pick up his dry cleaning. Here’s the video:

Although Google is famous for their April Fool’s jokes, this isn’t one of them. Google has been testing their self-driving car for a while now, and this latest video is an attempt to show one possible use for such a product: restoring mobility and independence to those who can no longer drive. But this is really only the tip of the iceberg. What the creators of Google’s self-driving car want to do is far more profound. They want to revolutionize transportation for everyone. This video explains:

In many ways, the idea of a driverless transportation system is not really new. Various forms of driverless subways are already in operation in many parts of the world. In the 1970s, the French attempted to build a driverless transportation system that featured individual cars that could join together to form quasi-trains when they reached a major arterial (see Latour’s book Aramis, or the Love of Technology). One can now ride fully automated “pod” cars between terminals at London’s Heathrow airport. And a few high-end luxury vehicles already feature the ability to parallel park automatically.

While Google’s self-driving car takes this vision much further, there is a basic assumption that underlies all of these projects: humans are fallible, so dangerous things like driving should be given over to computerized automation, which is assumed to be perfect. As the rhetoric goes, computers don’t get tired or distracted, and they always make the logical choice.

But this, of course, assumes that the humans who program those computers and design those automated systems do not make any mistakes either. Computers don’t do things on their own—they follow the explicit instructions given to them by a human programmer. Anyone who has worked in the software industry knows that programmers are just as fallible as anyone else. Programmers get tired, distracted, and make mistakes, just like drivers do. Even when the programmer is concentrating fully, it’s sometimes impossible to see all the ramifications of a small change made to an obscure part of the code. Even if you get all the code right, there’s no guarantee that the connection between the computerized controller and the actual mechanics won’t break down or malfunction. And even if all that is working properly, one still has to worry about purposeful malicious behavior; consider for a minute what would happen if someone managed to hack into a self-driving car’s control system.

When I was in graduate school, I participated in a research network that was investigating ways to make computer-based systems highly dependable. Some researchers reported on ways in which actual systems had failed in practice, helping us learn from our mistakes. Others studied systems that had managed to achieve a remarkable level of dependability, trying to discern what factors in particular led to that achievement. What became obvious rather quickly was that dependability required far more than just good technique and well-engineered artifacts. It also required a highly disciplined social organization to operate that technical infrastructure, keep it in good repair, and make sure it does what it’s supposed to do.

When I apply this to self-driving cars, it raises a number of questions for me. Who will verify that the control systems are correctly designed and implemented? If problems are detected after manufacture, how will the software be updated, and how will those updates be tested? When the system starts to fail, either due to software problems or mechanical issues, will it fail gracefully, and will drivers know how and be ready to resume control? And when the first accident occurs involving a self-driven car, who will be found liable? The driver? The manufacturer? The software developers?
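On the graceful-failure question in particular, one common pattern from the dependability literature is the watchdog: the control software must continually prove it is healthy, and any lapse triggers a staged handover to the human driver. Here is a toy Python sketch of that idea; the states, thresholds, and interface are entirely hypothetical, not taken from any real vehicle:

```python
import time

HEARTBEAT_TIMEOUT = 0.5  # seconds of silence before we assume trouble
HANDOVER_GRACE = 10.0    # seconds the driver gets to take the wheel

class AutopilotWatchdog:
    """Toy supervisor: the autopilot must 'pet' the watchdog every
    control cycle; silence escalates from alerts to a controlled stop."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.alert_started = None

    def heartbeat(self):
        # Called by the control loop each cycle, while self-checks pass
        self.last_heartbeat = time.monotonic()
        self.alert_started = None

    def check(self):
        now = time.monotonic()
        if now - self.last_heartbeat < HEARTBEAT_TIMEOUT:
            return "NOMINAL"
        if self.alert_started is None:
            self.alert_started = now          # trouble just began
        if now - self.alert_started < HANDOVER_GRACE:
            return "ALERT_DRIVER"             # chime, flash, vibrate the wheel
        return "CONTROLLED_STOP"              # no response: slow down and pull over
```

Even this trivial sketch surfaces the hard questions above: who decides the thresholds, who verifies the watchdog itself, and what a “controlled stop” should mean on a crowded freeway.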

I’m not saying that these problems are insurmountable, only that we will be forced to consider them before any kind of widespread revolution in transport can occur. The airline industry has traveled this road before, and the auto industry will no doubt learn from their mistakes and achievements. In the meantime, buckle up, and watch out for those self-driving cars!

Update: Although the self-driving Prius is real, Google’s April Fool’s day joke this year takes it to a whole new level: a self-driving NASCAR.

Media and True Stories

When I was in grad school in Scotland, I used to tell my classmates that if they wanted to understand the culture of the United States, they should listen to the podcasts from two radio shows: A Prairie Home Companion with Garrison Keillor, and This American Life with Ira Glass. The former is an old-fashioned radio variety show that captures the essence of that quirky, somewhat innocent, but deeply hospitable, traditional culture of the heartland. The latter captures the stories of everyday Americans who are struggling through disenchantment to find a new source of meaning, goodness, beauty, and truth. By listening to the two, one can get a sense of the dual nature of American culture, and the tensions that currently animate it.

I still listen to both programs, and this week’s show on This American Life was electrifying. It was a little unusual, in that the whole show was devoted to a retraction of a story they had previously aired about the working conditions at the Chinese factories that build Apple’s most beloved gadgets. The original story was told by Mike Daisey, an actor and activist who wrote the monologue The Agony and the Ecstasy of Steve Jobs, in which Daisey purportedly describes his own experiences of visiting these factories and talking with the workers.

The monologue, as well as the story he told on This American Life, lays out a number of shocking accusations: at the gates of the infamous Foxconn factory, he talked with a group of underage workers who were 12-14 years old; he met with workers who had been poisoned by n-hexane, a powerful neurotoxin that is used to clean iPhone screens; he showed an iPad to a man whose hand had been destroyed by the machine used to make the case; he saw the inside of worker dormitories that had bunk beds stacked to the ceiling and cameras that observed their every move; and he saw the guards at the factory gates brandishing guns to keep prying eyes away. All of this was capped off with the chilling rhetorical question: “do you really think that Apple doesn’t know?”

Unfortunately for Mr. Daisey, the Chinese correspondent for the popular business and finance show Marketplace, Rob Schmitz, heard this story and had a hard time reconciling these claims with what he had observed and reported on over the last few years. Yes, Apple’s Chinese suppliers had routinely violated Apple’s own labor practice standards, the working conditions are notoriously harsh, and there had been a few terrible accidents, including the n-hexane poisoning. But several of the details in Daisey’s story just didn’t seem probable. Only the police and military are allowed to have guns in China, so corporate security guards brandishing firearms would be highly unlikely, and Schmitz had never seen such a thing before. There have been problems with underage workers in Chinese factories, but Apple in particular had been fairly aggressive in stopping that practice at their suppliers, and it would be highly unlikely for an underage worker to openly admit to being so to a strange American with a Chinese interpreter.

After some quick searching, Schmitz found the interpreter that Daisey used while in China. Schmitz sent her Daisey’s monologue and asked her if she could corroborate the details. She replied that most of the details were at least exaggerated, if not completely fabricated. They had gone to the gates of Foxconn, but didn’t encounter any underage workers. They had met some disgruntled employees who were trying to form an illegal union, but there were only a couple of workers there, and none of them had the injuries he described. The guards at the gates did not have guns, and Daisey was never allowed in the dormitories, so he couldn’t have known what they looked like.

Schmitz and Ira Glass confronted Daisey about all of this, and to their dismay, Daisey admitted to representing various stories he had heard only second-hand as if he had seen or heard them himself. His reasoning was that it was all “true” and that he represented these events that way for theatrical purposes. He thought that relaying his experiences accurately would “unpack the complexities” in such a way that it would make the narrative arc more confusing and less effective.

The confrontation between Schmitz, Glass, and Daisey was certainly worth listening to, but the part of the show that I found most interesting was how Glass tried to grapple with Daisey’s claims that his story could be considered “true” in a theatrical context, but not in a journalistic one. Daisey admitted that he took “a few shortcuts in my passion to be heard” but that he was proud of his use of “the tools of the theater and memoir to achieve [the story’s] dramatic arc…because it made you care, Ira.”

In other words, Daisey is claiming that a “true” story in the theater is one that makes you care, not one that is accurate in a literal sense. Daisey then expressed regret because he brought that story into a journalistic context, a context where what counts as a “true” story is significantly different. Exasperated by this, Glass chided Daisey that he was kidding himself if he thought that his audience understood this distinction. Glass himself attended the show and concluded “I thought it was true because you were on stage saying ‘this happened to me.’ I took you at your word.”

All of this raises an interesting question: how, if at all, does a medium affect what is considered a “true” story? (The term ‘medium’ is notoriously slippery, but I’m using it here in the same sense that Daisey was using the term ‘context’.) Can a story be true in the medium of theater, and then become less or untrue when it is moved to the medium of journalism? Does what counts as a true story differ between journalism and history? Do you assay the truth of a story differently when you hear it in the theater, on film, in journalistic print, or in academic discourse?

The Phone Stack

Earlier this week, I ran across a story about a group of friends who have devised a clever way to keep themselves from getting distracted by their phones when they meet at a restaurant. After everyone has ordered, they all put their mobile phones facedown in the center of the table, sometimes stacked in a tall pile (which they call the “phone stack”). As the meal progresses, various phones might buzz or ring as new texts arrive, notifications are displayed, or calls are received. When this happens, the owner of the phone might be tempted to flip it over, but doing so comes at a cost: the first person to touch their phone has to pick up the check!

I like this idea for two reasons. First, it’s an ingenious yet simple mechanism for avoiding that all too common experience where your fellow diners spend more time interacting with their phones than with each other. Instead of pretending that mobile phones are not really a distraction, it puts them front and center, acknowledging their potential for disruption, yet declaring that their human owners still have the power to ignore them when engaged in face-to-face community. Turning their phones completely off might be even better, but keeping them on yet ignoring them seems to require even more reflective discipline. The public and very noticeable ritual of stacking the phones also acts like a kind of witness to others in the restaurant, advocating for the importance of being fully present when one has that rare opportunity to sit down with friends.

The other reason I like this is that it is a nice example of a more general phenomenon. When social groups adopt a new device, they often create rules or games like these to govern the use of that device when gathered together. Small, close-knit groups like the one that invented this game can easily enforce their rules, but larger cultures go through a social process of working-out new social norms that are generally followed, at least to some degree. For example, movie theaters have been running messages before the films for several years now asking audiences to silence their mobile phones, but I’ve noticed recently that they have expanded this message by asking audiences to also refrain from using their phones at all, silently or otherwise, during the film. Just as it is rare to now hear a mobile phone audibly ring during a film, I hope it will soon be just as rare to see the glow of a phone screen as an audience member responds to a text message.

What kind of rules or games have your families or friends created to limit the use of mobile devices when gathered together?