
The Social Meaning of Technology

Sometime in the early 1980s, Kodak began using a sleek new voice messaging service that they called KMX, short for Kodak Message Exchange. It was pretty cool for the time; you could dial in from almost anywhere via a toll-free number, authenticate with a mailbox number and passcode, and exchange asynchronous voice messages with other employees. Although voicemail systems are completely normal to us now, most people at the time had never heard of such a thing. Home answering machines were just becoming popular, but the idea of dialing into a centralized system so that you could send voice messages to individuals and groups was still somewhat revolutionary.

As I’ve noted in earlier posts, my father worked for Kodak for his entire career. By the time they adopted KMX, he was an executive who spent most of his day coordinating his sales and marketing force, so he was on KMX constantly, both at work and at home. Most evenings after dinner, he would go up to his home office, dial into the system, listen to his new messages, and leave his responses. He could easily spend a few hours doing that, which of course meant that his colleagues had to spend a few more hours listening to the messages he sent them, replying to his questions, and so on. Today, we often complain that the ease of email has created a torrent of unnecessary messages, but at least one can visually scan email text; imagine if you had to listen to every rambling voice message, in real time, happily narrated by the sender!

By the late 1980s, my father also had a computer on his desk at work that was no doubt hooked into the company’s new email system, but I don’t think he ever turned it on, nor did he ever learn to type with any kind of proficiency (he now has a laptop, but my mother is the one who types the emails). I once visited his office around that time and noticed a thick layer of dust covering his beautiful IBM PS/2, which seemed like an absolute travesty to me. But my father was of an earlier generation of executives, a generation that came of age with dictaphones and secretaries who would type their recorded messages onto office memo sheets. He was much more comfortable using a system like KMX than email, as it was similar to what he already knew. KMX seemed like a big dictaphone in the sky; typing messages into a computer was a secretary’s job.

I tell this story to highlight that we often overlay complex social meanings upon new technologies that go far beyond their mere function. If we look only at the function of some new system, such as voicemail or email, we often miss the ways in which the adopting culture struggles to make sense of the new technology in terms of what they already know and do. The meanings we now ascribe to these technologies are often subtly different from the way people thought about them when they were first introduced. Our current meanings are the result of a dynamic interplay between the adopting culture’s attempts to fit the new technology into their existing categorizations and traditions, and the ways using that new technology alters their thoughts and perceptions, challenging those existing assumptions, categorizations, and rules.

America Calling

This phenomenon becomes more evident when we look at detailed historical case studies of technological adoption. Over the Christmas break, I got a chance to read one such account, Claude Fischer’s book America Calling: The Social History of the Telephone to 1940. I had read bits and pieces of it before, but never had the chance to read it all the way through, and I’m glad I did. Fischer’s account is fascinating and enlightening.

Fischer notes that the first generation of Bell executives came from the telegraph industry, so they tended to think of the telephone as a new kind of audible telegraph: a serious tool for serious (meaning “male”) business use. Bell’s designs and marketing reflected this assumption, and their sales efforts focused mostly on male urban professionals, who often saw the telephone as a convenient replacement for messenger boys.

Although Bell marketed the telephone as an urban business device, it was nevertheless eagerly adopted by rural farmers, especially the farm wives who saw the telephone as a very welcome tool for social interaction. Fischer recounts stories of farmers setting up their own exchanges and lines, often piggybacking on their existing barbed-wire fences, so that they could communicate with friends and family. Bell actively discouraged not only these private exchanges, but also the social use of the telephone, warning women not to tie up the lines with “idle gossip.”

The various companies that provided telephone service did eventually accept and then encourage this more social use of the telephone, but Fischer argues that this shift did not occur until a new generation of executives had come of age, a generation drawn from other industries where sociality was the norm. The first generation of executives was too conditioned by the dynamics of the telegraph industry, and was thus unable to see the ways in which consumers were transforming the social meaning of their new device.

If we accept this notion that the social meaning of a new technology is dynamically worked out over time, then we should also expect something similar to occur with today’s mobile phones and social media. How people 20 or 40 years from now will think of these may end up being quite different from the way we think of them now, primarily because they will have grown up in a world where these devices are not something new. In some ways we have already seen a shift in the meaning and usage of the mobile phone: we now use this device to send asynchronous text messages far more often than we make synchronous voice calls. Today’s “mobile phone” is really a misnomer; we are already starting to think of these devices as pocket-sized computers rather than telephones.

Actor-Network Theory

In my last post, I drew a map to plot out the various positions authors take when theorizing the technology and society relationship. I did that primarily so that I could destroy that map in this post by describing another theory that challenges the core assumption underlying all those other perspectives: that ‘technology’ and ‘society’ are two separate spheres that interact with each other in some way.

Actor-Network Theory (or ANT for short) was developed in the 1980s and 90s primarily by three scholars: the French polymath Bruno Latour; the French engineer turned sociologist Michel Callon; and the English sociologist John Law. All three were involved in the early sociological studies of science and technology, and like their colleagues who developed the SCOT and SST positions, they argued against the technological determinism that was dominant at that time. But unlike the SCOT and SST approaches, ANT scholars took a closer look at this concept of ‘society’ that was supposedly exerting a shaping force upon technological artifacts. What they concluded was something that promised not only to revolutionize the way people thought about technology and society, but also to shake the foundation of social theory to its core.

The problem, as Bruno Latour articulates it in his book Reassembling the Social: An Introduction to Actor-Network Theory, is that social theorists have traditionally thought of society as a ‘thing’, an entity that has some kind of separate existence from its participants. This thing called society was assumed to be made only of “social stuff” (as opposed to material stuff) that was surprisingly durable and all-encompassing of its participants, much like the ‘aether’ of 19th-century physics. This allowed society to be an independent ‘force’ that could somehow influence not only its human participants, but also the techniques and material artifacts they produced. But it also had the effect of making society into something more like a mysterious force than a phenomenon that could be studied in a scientific way.

Actor-Network Theory, in contrast, argues that there is actually no such ‘thing’ as society. That is to say, society isn’t an entity that is separate from its participants. There’s no place you can point to and say “there is where society is.” Instead, society is more like an enactment, a condition that is constantly re-achieved every time a set of people interact with one another in a given pattern. Because it exists only in time, it is also fragile and prone to change; as soon as people start acting differently, a new kind of society becomes traceable (this is similar to various interactionist positions, and Latour himself comes out of ethnomethodology).

Latour, who was originally trained as an anthropologist, began thinking about this in the context of baboon societies. Baboons, like humans, create relatively complex social orders, but they do so using only direct bodily interactions. One baboon can try to enforce a particular dominance hierarchy over others, but as soon as that baboon is no longer physically present, the order starts to break down. Even when the dominant baboon is there, others will periodically test and challenge its dominance, leading to a recurring re-negotiation. Because baboons are limited to physical bodily interactions, their societies remain limited in size, and are subject to frequent reorganization.

In contrast, human societies seem to have no problem growing quite large in scale, spreading over wide geographic areas, and expressing features that remain relatively durable across many generations. Even pre-modern tribal societies seem to create networks of social links that persist even when participants are not present, and stay more or less intact throughout many generations. How is this achieved? If a society needs to be constantly re-enacted, how can it be extended and reinforced across space and time?

What Latour and his fellow ANT scholars concluded was that the very mechanism that enables us to extend and reinforce human societies across space and time is the thing we call ‘technology’. Material artifacts such as clothing, jewelry, houses, and cars don’t just reflect something called social class, they actually make it possible to assert such a concept in the first place and reinforce it over space and time. Communication media such as books, newspapers, telegraphs, telephones, television, and now the Internet don’t sit apart from something called culture and influence it from time to time, they are the very things that make it possible to create a geographically dispersed yet cohesive culture in the first place. Techniques such as money, corporations, the scientific method, engineering, and law are not just products of a modern culture, they are the very things that construct and reinforce that kind of culture we call “modern.”

In other words, technology and society are so deeply and inextricably intertwined that it would be misleading to talk about them as if they were two separate things that interact from time to time. They are mutually constitutive, each literally making the other, completely interdependent for their very meaning. Without technology, there would be no human society, and without that society, there would be no technology.

For ANT scholars, society is nothing more than a temporary assemblage of connections made between ‘actors’. Actors are “anything that makes a difference,” so they include both humans and the non-human agencies and artifacts that influence the connections being made. When analysts trace these connections, they reveal the various ‘networks’ of which the actors (and now the analyst) are a part. These networks are often consciously constructed by a particular actor (called a ‘network builder’), who ‘enrolls’ other actors (human or otherwise) into the network by ‘translating’ them (literally “speaking for” them). Networks allow some actors to influence the behavior of others, but they are also quite fragile; as soon as those other actors resist or otherwise thwart the translation effort, that part of the network falls apart.
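For the programmers among my readers, here is a deliberately toy sketch of this vocabulary in Python. ANT is a qualitative tradition, not a formal model, so nothing below comes from the ANT literature itself; the class and method names are my own inventions, chosen only to echo the terms above.

```python
# A toy illustration only: ANT is a qualitative method, not a formalism,
# and every name here (Actor, Network, enroll, resist) is hypothetical.

class Actor:
    """Anything that makes a difference: human or non-human."""
    def __init__(self, name: str):
        self.name = name

class Network:
    """A fragile, temporary assemblage of connections between actors."""
    def __init__(self, builder: Actor):
        self.builder = builder  # the 'network builder'
        self.enrolled = []      # actors currently 'translated' into the network

    def enroll(self, actor: Actor) -> None:
        # The builder 'translates' (speaks for) the actor,
        # adding it to the assemblage.
        self.enrolled.append(actor)

    def resist(self, actor: Actor) -> None:
        # As soon as an actor thwarts the translation, that part of
        # the network falls apart.
        self.enrolled.remove(actor)

# Example: a builder enrolling both human and non-human actors.
builder = Actor("telephone company")
net = Network(builder)
for name in ["urban professionals", "farm wives", "switchboards", "barbed-wire fences"]:
    net.enroll(Actor(name))
```

The point of the sketch is the fragility: the network holds only as long as every enrolled actor keeps cooperating with the builder’s translation.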

If you take ANT seriously (which I do), it requires you to completely reorient the way you think about technology and society. A question like “is technology destroying our society?” becomes almost meaningless, since technology is the very thing that makes a geographically dispersed, temporally stable social order possible. We can still discuss how a given technology is developed and adopted, and whether the way it changes our social links is a good or bad thing, but the idea of a human society existing without technology just doesn’t make any sense. I’ll elaborate on ANT in future posts, and show you how it can be used to better understand the technology-society relationship.

A Map of Typical Positions on Technology and Culture

In this post, I want to step back a bit from historical details in order to do some broad-stroke theory. I want to build a map for you that should help give you some orientation when wading into various writing on the technology and culture relationship. Those of you who study this all the time will probably find this post a bit of a review, and if that’s the case, feel free to skip it. But if you tend to find yourself getting more and more perplexed when reading conflicting perspectives on technology, this post should help you get your bearings.

Let’s start our map by laying out a spectrum on the horizontal axis.

Whenever an author theorizes the technology and culture relationship, that author must deal with one of the most basic questions in the field: in what direction do the influences flow? That is, does technology “impact” culture, does culture shape technology, or do both happen simultaneously? How an author answers this question can be plotted on this spectrum.

At one extreme is the position of technological determinism. People who subscribe to this believe that technologies impact an adopting culture in a kind of one-way, deterministic relationship. Technologies are seen as powerful, non-neutral forces that carry with them moral consequences, and produce deterministic effects. Extreme technological determinists also tend to think of technology as an autonomous force that actually guides and determines its own development. As one of my professors used to say, a strong technological determinist believes that once someone invents the techniques for radar, it’s really only a matter of time before we get the microwavable burrito.

On the other extreme is the position of social determinism, which is sometimes called instrumentalism by philosophers of technology. Extreme social determinists see technologies as completely neutral artifacts that can be used for good or for evil depending on the desires of the adopting individual or culture. This kind of position is wonderfully summarized by that well-known motto of the National Rifle Association (NRA): “guns don’t kill people; people kill people.”

I’ve portrayed these positions as extreme ends of a spectrum because it’s important to realize that very few authors subscribe to either of these positions wholeheartedly. Some certainly lean farther to one side or the other, but we should avoid labeling any author as being strictly a technological determinist or a social determinist. Most sit somewhere in between the extremes, which leads us to that position at the center: the social-shaping perspective.

The social-shaping of technology (SST) perspective acknowledges what is obviously true about both of the more extreme positions: technologies certainly do affect an adopting culture in significant ways, but historical cases also show quite clearly that engineers and adopting cultures play important roles in reshaping those technologies to better fit with their existing social values. SST sees technology and culture as “mutually constitutive” (MacKenzie & Wajcman 1999), each creating and shaping the other. In other words, “guns don’t kill people, but they sure make it a heck of a lot easier.”

To complete our map, we need to add a vertical dimension to our existing horizontal one:

This vertical axis represents the moral attitude an author takes towards technological change. At one extreme is techno-optimism, a belief that our technologies are making the world a better place. In its most extreme forms, techno-optimists elevate technology to the position of savior, the ultimate tool with which we can save ourselves and create a utopia on earth. This position is excited about the possibilities of new technologies and says “full steam ahead” to any and all technological development.

At the other extreme is techno-pessimism, a position that sees technology not as a savior, but as a destroyer. Techno-pessimists think that technology is making the world a worse place, and that it might just end up killing us all (think nuclear holocaust, genetic engineering gone awry, sentient robots that turn against us, etc.). This position tends to pine for the simpler days before industrialization, and is sympathetic towards Romanticism.

As with the other axis, this is of course a spectrum, and most authors situate themselves somewhere in between the two extremes. At the very middle is a position I’ve called “double-edged sword.” This position argues that every technological change brings with it a wide array of consequences, some of which can be considered ‘good’, others ‘bad’, depending on your perspective. The costs and benefits of an innovation are never equally distributed in a given society, so whether you think a given technology is making the world better or worse largely depends on whether you received more of its benefits and less of its costs, or vice versa.

Putting it all together, we get a map that looks something like this:

Most critics of technology (Christian or secular) tend to sit somewhere in the lower-left quadrant. They lean towards technological determinism, and they are generally pessimistic about future technological change. Jacques Ellul seems the most pessimistic to me—his book The Technological Society is almost fatalistic. Neil Postman is closer to the double-edged sword position, but he is still overall more pessimistic than optimistic. Marshall McLuhan is an unapologetic technological determinist, but he is far less pessimistic than other Christian critics.

In the upper-left quadrant we find people like Ray Kurzweil, who is extremely excited about the potential for a full human-machine integration. His belief in the inevitability of the “singularity” puts him on the technological determinist side, but unlike McLuhan or Ellul, he sees technology as a potential savior of humanity.

At the extreme corner of the upper-right quadrant would be the NRA sentiment I discussed earlier. The Social Construction of Technology (SCOT) position is probably the most social determinist theory I know of, but it takes a very neutral view on whether technology is making the world better or worse. The Social Shaping of Technology (SST) position is on there twice because the first edition of MacKenzie & Wajcman’s book in 1985 was far more social determinist than their second edition in 1999, which took a much more balanced tone.

Interestingly, I don’t yet know of any author who would fit into the lower-right quadrant, probably because those who lean towards social determinism rarely have an overly pessimistic view of technology.
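For readers who want to experiment with the map itself, here is a minimal sketch in Python (using matplotlib) that plots the authors and positions discussed above onto the two axes. The coordinates are only my rough guesses at each quadrant placement, not measurements from any original figure, so adjust them as you see fit, or add your own favorite authors.

```python
# A rough, guessed reconstruction of the two-axis map described above.
import matplotlib.pyplot as plt

# x axis: -1 = technological determinism ... +1 = social determinism
# y axis: -1 = techno-pessimism          ... +1 = techno-optimism
positions = {
    "Ellul":      (-0.9, -0.9),
    "Postman":    (-0.5, -0.3),
    "McLuhan":    (-0.9, -0.1),
    "Kurzweil":   (-0.8,  0.9),
    "NRA slogan": ( 0.95, 0.9),
    "SCOT":       ( 0.85, 0.05),
    "SST (1985)": ( 0.5,  0.0),
    "SST (1999)": ( 0.1,  0.0),
}

fig, ax = plt.subplots(figsize=(6, 6))
ax.axhline(0, color="gray", linewidth=1)  # optimism/pessimism boundary
ax.axvline(0, color="gray", linewidth=1)  # determinism boundary
for name, (x, y) in positions.items():
    ax.plot(x, y, "o", color="tab:blue")
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(6, 4))
ax.set_xlim(-1.2, 1.2)
ax.set_ylim(-1.2, 1.2)
ax.set_xlabel("technological determinism  <->  social determinism")
ax.set_ylabel("techno-pessimism  <->  techno-optimism")
ax.set_title("Typical positions on technology and culture")
plt.show()
```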

Does this help you navigate your way around the various positions you may have encountered? Where would you place your favorite authors on this map?

What is an “Oral Culture?”

This week I ran across an interesting article entitled “The Theology of Sound: A Critique of Orality,” which offers an insightful critique of Walter Ong’s conception of oral vs. literate societies. It was written by Jonathan Sterne, who teaches art history, communications, and the philosophy of science at McGill University, and who is currently researching a history of sound in modern culture.

Sterne had become suspicious of what he calls “an aging fable” about the history of communications. The fable, as it is typically presented, takes place in three acts. Act one is “oral culture,” which is a strictly auditory culture where nothing can be externalized from the mind of the knower. Act two transitions to “literate culture,” which is a visual culture where subject and object are split apart, and large-scale activities become possible such as the rise of modern science and industrialization. Act three culminates in “electronic culture,” where the techniques of externalization from literate culture are combined with a return to an oral mindset driven by electronic, image-and-sound-based media.

This fable, Sterne notes, comes primarily from the work of Walter Ong, who was a student of the young Marshall McLuhan in the 1940s. Ong’s book Orality and Literacy (1982) remains one of the most cited works in this area, and is also a favorite of Christian popularizers, as Ong was a Catholic priest as well as a professor.

Sterne finds a number of problems with this narrative. First and perhaps foremost, it is overly simplistic, assuming that an entire society can be sorted cleanly into one of the oral, literate, or electronic categories. Human cultures are far more diverse and complicated, employing a wide range of techniques for externalizing information in addition to writing, such as painting, sculpture, architecture (especially monuments), and music. All of these techniques predate writing, and all but music rely on the visual sense that is supposed to dominate a literate culture.

But Sterne’s most interesting and insightful critique comes from a close examination of the motivations and assumptions that lie behind much of Ong’s thought, assumptions that are more readily apparent in Ong’s earlier works than in Orality and Literacy. Sterne writes that Ong’s motivation was “to better understand the conditions under which it was possible for people to hear the word of God in his age,” and that his use of the verb “hear” was no accident. Ong assumed that the human sense of hearing is closer to the divine than seeing. In Ong’s book The Presence of the Word, he concludes that “the mystery of sound is the one which in the ways suggested here is the most productive of understanding and unity, the most personally human, and in this sense closest to the divine” (324).

Ong’s privileging of hearing over seeing was influenced, Sterne argues, by a faulty understanding of the difference between Hebrew and Greek thought that was fashionable in Ong’s day. Hebrew culture was assumed to be primarily oral, in contrast to Greek culture, which was assumed to be based on the written word. Ong felt that the Hebrews were far more open to God’s presence than the Greeks, and that this openness was a direct result of their being an oral culture, which for Ong meant that their emphasis was on hearing the word spoken in act, not seeing it written on a page. Although Christian popularizers may not realize it, Ong was actually celebrating the return to orality that he saw in electronic media such as television, believing that this “second orality” would pave the way for a new flowering of God’s Spirit. The Protestant emphasis on reading the word of God for oneself had, in Ong’s estimation, distanced us from one another and silenced “man’s life-world,” thereby stifling God’s presence amongst his people.

Sterne warns that we must keep this in mind when we read Ong’s narrative:

It is in this suggestively messianic context that we need to read Ong’s sensory history. “Oral man,” dweller of a temporalized world of sound, gave way to “literate man,” who resided in the spatialized and externalized world of sight. Ong’s sensory history is the story of the fall from innocence and a possible future redemption. At the moment of Ong’s writing, he saw the construct of literacy giving way to a new electronic oral-aural consciousness consisting of a new kind of immediate co-presence. Only then might it be possible to find God again (219).

Although Christians might be more sympathetic to Ong’s motivations than secular communications scholars, we do need to be careful about taking Ong’s claims at face value without subjecting them to more recent scholarship. Here Sterne also takes issue with Ong, noting that “the evidence on which the orality-literacy split rests is thin and dated…[it] is based on 50- to 100-year-old interpretations of textual sources” (220). More recent history and anthropology have challenged many of Ong’s assumptions about oral cultures. Cultures that lack writing use many other kinds of techniques to achieve the same end of externalizing information (see above). Rationality and individuality, qualities assumed by Ong to exist only within literate cultures, are also identifiable in cultures that rely primarily on oral communication. And large-scale social organization, which supposedly required a “literate man,” was achieved many times over by the ancient Egyptians without the use of a phonetic alphabet or widespread literacy.

But Sterne is not out to completely dismiss Ong and McLuhan; instead Sterne is encouraging us to follow their lead and keep seeking out a better understanding of media based on the most current information:

These authors asked the right questions for their moments, but our moment is not theirs, and our world is not their world. We can honour their spirit by re-asking the central questions in their work and following them through to new conclusions (222).

May we endeavor to do so.

The Human-Machine Boundary

I recently had this rather strange revelation: my mom is slowly turning into the Bionic Woman. About a month ago she underwent cataract surgery, a procedure that replaces the natural lenses in her eyes with artificial ones. Her natural lenses had developed cataracts, essentially spots where the lens loses its transparency, which resulted in her vision becoming cloudy. The artificial lenses implanted by the surgeon not only corrected the cloudiness, they also corrected her vision for the most part. The new lenses were made with the same corrective curvature as the lenses in the glasses she had worn for most of her life. So she pretty much has bionic eyes now.

Well, OK, maybe they aren’t quite as cool as Steve Austin’s super-telescopic eyes in the Six Million Dollar Man, but they are a step in that direction. In the future, it might be possible for those artificial lenses to have additional features, like zooming, automatic shading in bright environments, or the ability to project text and graphics over the field of vision. When they do develop such things, people might not want to wait until they develop cataracts before they get their lenses replaced. Perhaps it will become just a standard corporeal upgrade, like getting braces to straighten your teeth, or plastic surgery to reshape some part of your body.

Interestingly, the ability to project text and graphics over the field of vision is already being worked on. Researchers at the University of Washington and Aalto University in Finland have developed a prototype contact lens, controlled by a wireless signal, that can project a single pixel into the eye. A single pixel might not sound terribly impressive, but it is an important proof of concept; it will likely be only a matter of time before they develop the techniques necessary to project enough pixels to fill your field of vision with relevant textual and graphical information. Imagine having a person’s name and relevant life details automatically appear next to their head when you meet them, or having the kind of information displayed today on your GPS navigation system’s screen merged with your view of the actual road.

Does this sound frightening to you, or at least a bit disturbing? Perhaps it should. But then again, haven’t we slowly been moving towards this sort of thing for a long time now? After all, what is the difference between wearing eye glasses, and replacing one’s natural eye lenses with artificial ones? In both cases, we are employing a bit of technology to correct someone’s eyesight. The former is worn while the latter is surgically implanted, but is that a difference of kind, or only of degree?

There are of course other kinds of bodily problems that we’ve been addressing for some time with surgically-implanted devices: artificial limbs and joints restore mobility; implants in the ear enable hearing again; stents reopen blocked arteries; pacemakers keep hearts beating at a normal rate; and in extreme cases like Dick Cheney, surgeons have implanted mechanical pumps to circulate the blood instead of relying on the patient’s beating heart muscle. In each of these cases, a bit of technology is added to the human body in order to correct a problem or flaw, blurring the boundary between human and machine.

One distinction we could try to draw here is between implanted devices that correct a problem, and those that would enhance what would be considered “normal” human ability. Today’s artificial lenses correct cataracts and myopia, but if those same lenses gained some super-human features, we could say that they enhance normal human eyesight. But even that distinction can be a bit fuzzy. If we were to acknowledge that it is “normal” for human eyesight to get worse with age, then even today’s artificial lenses could be considered an “enhancement.”

Another interesting case is that of Oscar Pistorius, the double-amputee sprinter who has flexible carbon-fiber blades for legs. He was initially disqualified from the 2008 Olympics because the committee felt that his springy artificial legs gave him an unfair advantage over able-bodied athletes. Although this decision was later overturned, it demonstrates the difficulty of drawing a clean distinction between “correction” and “enhancement.”

Of course, I don’t mean to suggest that able-bodied sprinters will soon elect to replace their own human legs with artificial ones, but there is a distinct possibility that in the future, people will choose to receive other kinds of surgical implants that are designed solely to enhance their abilities to super-human levels. Essentially, you might get the chance to become the bionic man or woman…without the need for that tragic test flight or parachute accident.

So if researchers were able to develop these kinds of super-human bodily enhancements, would you be interested? Which kinds of things would you entertain, and which would you rule out? Or would you be opposed to anything beyond corrective devices as a matter of principle?

Moog Documentary

I recently watched a fascinating documentary about Bob Moog, the inventor of the Moog synthesizer.

(If you are interested in watching this documentary, it is currently available via instant-play on Netflix, or you can watch it in segments on YouTube.)

I have to admit that as a documentary film, it wasn’t the best it could be, but I love the subject matter. The synthesizer is another one of those artifacts that, when introduced, caused quite a lot of angst in the surrounding culture. Avant-garde musicians loved it, sound-effects engineers eagerly embraced it, but the wider culture didn’t really know what to make of this thing. It looked far more like a telephone switchboard than it did a musical instrument.

The original Moog synthesizers were complicated beasts, with dozens of dials, switches, and patch cords. They had keyboards as well, but the synthesizer could produce only one note at a time, so the keyboard was really just a mechanism to set the initial pitch of the generated wave, which could then be bent and transformed by the various processing modules. Most avant-garde musicians actually had little use for the keyboard, preferring instead to generate new kinds of sounds and pitches that did not fit into the traditional tempered scale. Other synthesizer makers who were more influenced by these musicians (such as Don Buchla) omitted the keyboard entirely.
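For the technically inclined, here is a minimal sketch in Python (with NumPy) of that signal flow: a single oscillator whose pitch is set once, which was the keyboard’s only real job, patched through downstream modules that bend and transform the wave. This is only a model of the idea; it bears no relation to Moog’s actual analog circuitry, and every function name is my own invention.

```python
# A loose, hypothetical model of a monophonic modular signal chain:
# the keyboard sets the pitch, and modules transform the resulting wave.
import numpy as np

SAMPLE_RATE = 44_100  # samples per second

def oscillator(freq_hz: float, seconds: float) -> np.ndarray:
    """Generate a raw sawtooth wave at the keyboard-set pitch."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    return 2.0 * ((t * freq_hz) % 1.0) - 1.0

def waveshaper_module(signal: np.ndarray, drive: float = 3.0) -> np.ndarray:
    """A soft-clipping module that bends the wave into a new timbre."""
    return np.tanh(drive * signal)

def lowpass_module(signal: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """A crude one-pole low-pass filter module that darkens the timbre."""
    out = np.zeros_like(signal)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

# One monophonic 'patch': A440 for half a second, run through two modules.
note = lowpass_module(waveshaper_module(oscillator(440.0, 0.5)))
```

Chaining the functions mimics the patch cords: the keyboard contributes nothing after setting freq_hz, and all the interesting sound-shaping happens in the modules.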

Several progressive rock musicians also started using Moog’s synthesizers, most notably Keith Emerson of Emerson, Lake & Palmer. Because these groups toured, they asked for a more portable, self-contained version, and in 1970 Moog introduced what became his most iconic instrument, the Minimoog.

Sadly, critics accused Moog and his synthesizer performers of destroying music. For these critics, real musical sounds could originate only from strings, wood, brass, or skins. Electronically-produced sounds were simply not ‘natural’ and thus not music.

But is there anything really ‘natural’ about a violin, saxophone, or drum? Each one of these musical instruments is an artifact, something created by humans that does not exist apart from human agency. At some point in history, violins were invented, developed, adopted, and shaped into the instrument we know today. Violins are certainly old, and their sound can move the human heart, but they are hardly products of Nature.

We must be careful when we swing around that word ‘natural’; we too often use it as an unreflective synonym for ‘traditional’. The distinction between ‘natural’ and ‘artificial’ is rather hard and unyielding, but what is considered ‘traditional’ is malleable; it changes over time, adapting to new cultural developments.

Historical cases like the Moog synthesizer should teach us that the dire predictions of today’s cultural critics need to be taken with a large grain of salt. The synthesizer didn’t destroy music; quite the opposite occurred as musicians embraced the new sounds and techniques made possible by that new instrument. It would have been difficult in 1970 to foresee how the synthesizer would enable new approaches to music-making that we today take for granted.

So will mobile phone texting and Twitter be the death of writing? Will Facebook destroy ‘real’ community? It is unlikely that we can foresee now just what changes these systems will engender in our society. These systems will, no doubt, reshape our cultures in profound ways, but our cultures will also reshape these systems in return. The real question is: which social groups will be the predominant shapers of these systems as they evolve?