Category Archives: Social-Shaping

The Social Meaning of Technology

Sometime in the early 1980s, Kodak began using a sleek new voice messaging service that they called KMX, short for Kodak Message Exchange. It was pretty cool for the time; you could dial in from almost anywhere via a toll-free number, authenticate with a mailbox number and passcode, and exchange asynchronous voice messages with other employees. Although voicemail systems are completely normal to us now, most people at this time had never heard of such a thing. Home answering machines were just becoming popular, but the idea of dialing into a centralized system so that you could send voice messages to individuals and groups was still somewhat revolutionary.

As I’ve noted in earlier posts, my father worked for Kodak for his entire career. By the time they adopted KMX, my father was an executive who spent most of his time coordinating his sales and marketing force, so he spent a lot of time, both at work and at home, on KMX. Most evenings after dinner, he would go up to his home office, dial into the system, listen to his new messages, and leave his responses. He could easily spend a few hours doing that, which of course meant that his colleagues had to spend a few more hours listening to the messages he sent them, replying to his questions, and so on, and so on. Today, we often complain that the ease of email has created a torrent of unnecessary messages, but at least one can visually scan email text; imagine if you had to listen to every rambling voice message, in real time, happily narrated by the sender!

By the late 1980s, my father also had a computer on his desk at work that was no doubt hooked into the company’s new email system, but I don’t think he ever turned it on, nor did he ever learn to type with any kind of proficiency (he now has a laptop, but my mother is the one who types the emails). I once visited his office around that time and noticed a thick layer of dust covering his beautiful IBM PS/2, which seemed like an absolute travesty to me. But my father was of an earlier generation of executives, a generation that came of age with dictaphones and secretaries who would type their dictated messages onto office memo sheets. He was much more comfortable using a system like KMX than email, as it was similar to what he already knew. KMX seemed like a big dictaphone in the sky; typing messages into a computer was a secretary’s job.

I tell this story to highlight that we often overlay complex social meanings upon new technologies that go far beyond their mere function. If we look only at the function of some new system, such as voicemail or email, we often miss the ways in which the adopting culture struggles to make sense of the new technology in terms of what they already know and do. The meanings we now ascribe to these technologies are often subtly different from the way people thought about them when they were first introduced. Our current meanings are the result of a dynamic interplay between the adopting culture’s attempts to fit the new technology into their existing categorizations and traditions, and the ways in which using that new technology alters their thoughts and perceptions, challenging those existing assumptions, categorizations, and rules.

America Calling

This phenomenon becomes more evident when we look at detailed historical case studies of technological adoption. Over the Christmas break, I got a chance to read one such account, Claude Fischer’s book America Calling: The Social History of the Telephone to 1940. I had read bits and pieces of it before, but never had the chance to read it all the way through, and I’m glad I did. Fischer’s account is fascinating and enlightening.

Fischer notes that the first generation of Bell executives came from the telegraph industry, so they tended to think of the telephone as a new kind of audible telegraph: a serious tool for serious (meaning “male”) business use. Bell’s designs and marketing reflected this assumption, and their sales efforts focused mostly on male urban professionals, who often saw the telephone as a convenient replacement for messenger boys.

Although Bell marketed the telephone as an urban business device, it was nevertheless eagerly adopted by rural farmers, especially the farm wives who saw the telephone as a very welcome tool for social interaction. Fischer recounts stories of farmers setting up their own exchanges and lines, often piggy-backing on their existing barbed wire fences, so that they could communicate with friends and family. Bell actively discouraged not only these private exchanges, but also the social use of the telephone, warning women to not tie up the lines with “idle gossip.”

The various companies that provided telephone service did eventually accept and then encourage this more social use of the telephone, but Fischer argues that this shift did not happen until a new generation of executives had come of age, a generation that came from other industries where sociability was the norm. The first generation of executives was too conditioned by the dynamics of the telegraph industry, and was thus unable to see the ways in which consumers were transforming the social meaning of their new device.

If we accept this notion that the social meaning of a new technology is dynamically worked out over time, then we should also expect that something similar will occur with today’s mobile phones and social media. How people 20 or 40 years from now will think of these may end up being quite different from the way we think of them now, primarily because they will have grown up in a world where these devices are not something new. In some ways we have already seen a shift in the meaning and usage of the mobile phone: we now use this device to send asynchronous text messages far more often than we make synchronous voice calls. Today’s “mobile phone” is really a misnomer; we are already starting to think of these devices more like pocket-sized computers than telephones.

Google Doodle for Bob Moog’s Birthday

Did you see the Google Doodle for today? It’s a functional model of an analog synthesizer in honor of what would have been Bob Moog’s 78th birthday. You can adjust the oscillator, filter, and envelope settings to create a wide range of sounds. It even has a recorder attached to it so you can capture your creations and share them with others!
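
Under the hood, the doodle is a playable model of classic subtractive synthesis: an oscillator generates a harmonically rich waveform, a filter carves away part of its spectrum, and an envelope shapes its volume over time. Here is a minimal sketch of that same oscillator–filter–envelope chain in Python; the function names, parameter values, and output filename are all my own illustrative choices, not anything taken from the doodle or from a real synthesizer API.

```python
# A minimal sketch of a subtractive-synthesis voice (illustrative only):
# oscillator -> low-pass filter -> amplitude envelope.
import numpy as np
import wave

SAMPLE_RATE = 44100  # samples per second

def oscillator(freq, duration):
    """Generate a raw sawtooth wave; a keyboard's only job is picking freq."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    return 2.0 * ((t * freq) % 1.0) - 1.0

def lowpass(signal, cutoff):
    """One-pole low-pass filter: a higher cutoff lets more harmonics through."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / SAMPLE_RATE)
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

def envelope(signal, attack=0.05, release=0.4):
    """Shape loudness over time with a simple attack/release ramp."""
    env = np.ones(len(signal))
    a, r = int(attack * SAMPLE_RATE), int(release * SAMPLE_RATE)
    env[:a] = np.linspace(0.0, 1.0, a)
    env[-r:] = np.linspace(1.0, 0.0, r)
    return signal * env

# "Patch" the modules together, much like running cords between modules.
note = envelope(lowpass(oscillator(220.0, 1.0), cutoff=800.0))

# Write the result out as a 16-bit mono WAV file.
with wave.open("note.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes((note * 32767).astype(np.int16).tobytes())
```

Playing with the cutoff, attack, and release values here is a rough analogue of twiddling the doodle’s knobs: the oscillator merely fixes the pitch, while the filter and envelope do most of the work of defining the character of the sound.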

Over a year ago now, I wrote a couple of posts about Moog (rhymes with ‘rogue’) and his synthesizer. The first was inspired by a documentary about Moog and his work. Here is a trailer for that, in which he discusses how people reacted to the synthesizer when it was first introduced:

Moog recounts how critics at the time really didn’t know what to make of his creation. For them, “real music” came only from strings, wood, brass, or skins. These new electronic synthesizers seemed more like sophisticated noise-makers, something useful for sound-effects engineers, but hardly something that could be categorized as a “musical instrument.” Moog’s most strident critics actually accused him of “destroying music” by introducing a most “unnatural” device.

The synthesizer’s shift from “noise-maker” to “musical instrument” is captured well in Pinch and Trocco’s book Analog Days, which was the subject of my second post on Moog. These authors trace the early days of the Moog, describing how it quickly became a staple feature for psychedelic rock bands of the late 1960s. But in the fall of 1968, a recording was released that completely changed how people thought about what the synthesizer was and what it was good for. It was called Switched-On Bach, and as the title implies, it featured the works of Johann Sebastian Bach performed entirely on the synthesizer. The album was an instant hit, and was one of the first classical recordings ever to go platinum. That album inspired many other keyboardists to explore the potential of the synthesizer and integrate it into their creative work.

I think the history of the synthesizer is valuable for two reasons. First, it reminds us to be careful about conflating the concepts of “natural” and “traditional.” The synthesizer was certainly untraditional when it was introduced, but it was just as much an artifact, and therefore just as unnatural, as a violin or saxophone. And instead of destroying music, it opened up entirely new sonic possibilities that helped expand the creative potential of musicians. We need to be careful when making dire predictions about how this or that new device will destroy some aspect of our traditional culture—it may very well turn out to be quite the opposite.

Second, the synthesizer, like the iPad or the telephone, is the kind of device that requires a bit of “working out” before a culture decides what it actually is and what it’s good for. The synthesizer’s social meaning was underdetermined and somewhat flexible when it was first introduced, and the way it turned out was influenced just as much by its initial users as it was by those who designed, produced and marketed it. Early adopters often play key roles in redefining and reshaping new devices so that they better fit into the target culture.

OK, enough theorizing—now go make some music!

Technological Paradigms

Last summer a couple of my friends sent their eight-year-old daughter off to camp for the first time, and as they dropped her off, they gave her some money to buy a few things while she was there. In addition to those sugary snacks that every camper sucks down with abandon, she also bought a disposable film camera so she could take pictures of her new friends and all the fun things they were doing. When she returned home she told her parents all about her time at camp, and said she was eager to show them the photos she took on the camera she bought. Her parents asked, “so where is the camera? We need to send it in to be developed.” She calmly replied, “I threw it away—it said it was disposable. The pictures are on the Internet, right?”

My friends’ daughter was just young enough that she had never seen a film camera before. Her parents have taken many pictures of her, but they had always done so using their mobile phones or a digital camera. For her, ‘cameras’ are things that capture digital photos and upload them to a computer or the Internet; the very concepts of ‘film’ and ‘developing’ were completely foreign to her.

Kodak "Instamatic" CameraI found this story to be fascinating, not only because I am a historian of technology, but also because I am the son of a former Kodak employee. My father worked for Kodak for 35 years, and the recent stories of their plans for bankruptcy have been especially poignant for him. But when he started there, Kodak was at the height of their game. They had developed a line of point-and-shoot consumer cameras that enabled anyone, even those with absolutely no knowledge of photography whatsoever, to take reasonably good pictures. But the cameras themselves were not the real money maker. They were like the low-profit razor sold below cost so that you could sell a lifetime of high-profit razor blades. The real money for Kodak was in the sale of film and developing services.

Film, of course, is a consumable. Once you expose a segment of film to light, it can’t be used again. It’s also pretty much useless until you develop it and make prints. Taking a picture thus came at a double price: the cost of the film and the cost of processing, both of which were high-margin businesses. For every camera Kodak sold, they also sold hundreds of rolls of film, and most of those rolls were developed with Kodak chemicals and printed on Kodak paper. Kodak supplied the entire platform—the cameras, the film (both movie and still), the developing chemicals, the photosensitive paper—and they packaged it all together as a well-marketed, customer-friendly service. It was a complete cash cow.

Kodak did continue to develop new variations on this theme, though they seemed to have less and less luck with them as the years went on. I remember the day my father returned from a super-secret business trip back to Rochester with a briefcase literally handcuffed to his wrist. My brother and I stared in wonder, assuming that our father had become a government agent and that the whole Kodak thing was just a cover. But alas, the case didn’t contain state secrets or foam-encased spy gear; instead it contained these strange-looking flat cameras that used a type of film that looked vaguely like a View-Master disc. My dad proudly declared that these “disc cameras” were the wave of the future, and in the early 1980s, they really did look futuristic. But disc cameras were ultimately doomed by their tiny lenses and negatives. In theory, disc cameras had the potential of taking better pictures than a 110, but in practice, it was too easy to take blurry pictures, and even clear ones looked unacceptably grainy when printed larger than 3×5″.

But even before the first disc cameras were introduced, Kodak’s R&D engineers had put together something that would not only revolutionize photography, but also kill off that very lucrative cash cow: the first working digital camera, built in 1975. Over the next two decades, Kodak actually played a leading role in developing digital photosensors and digital photo printing kiosks. They even entered the consumer digital camera market, albeit too late to displace the likes of Sony, Canon, and Nikon.

The trouble was, none of these businesses offered the same high profit margins as film and developing. Digital cameras, of course, require no film. Taking a picture is essentially free, and making a print is entirely optional now that we can share photos on social networks. Digital photography fundamentally changed the economics of the business to the benefit of the consumer, and there was no going back. The consumables would all but disappear, and the internal hardware (sensors) would quickly become commoditized and unbranded.

So why did Kodak continue to invest in new film devices like the disc camera when they had the chance to become the leader in the new world of digital photography? This is most likely a hot topic in business management schools, and I’m sure that some are suggesting that Kodak purposely tried to delay the onset of digital photography to milk every last drop out of their film and developing cash cow. I haven’t researched Kodak’s story enough to know one way or the other, but I would guess that the full history is more complicated than that. That first working digital camera was only a proof of concept: the exposure time was reportedly 23 seconds, it captured the image to a cassette tape, playback required a separate TV, and the resolution was far worse than film. It would have been difficult to predict in 1975 that all the technical problems could be worked out, that a portable and easy-to-use device could be designed and manufactured, and that consumers would actually adopt a very different kind of camera. At the time, film-based photography would probably have seemed like the safer bet.

Now we know different. Digital photography displaced film faster than most would have predicted, and Kodak is contemplating declaring bankruptcy if they can’t sell off their patent portfolio. My friends’ eight-year-old learned about the concept of film the hard way, but those born in the near future will likely learn about photographic film only as a historical phenomenon.

All of this reminds me of a term that Giovanni Dosi introduced in an article he published back in 1982: “technological paradigms.” The term comes from Thomas Kuhn’s classic book The Structure of Scientific Revolutions, in which he argues that scientific knowledge develops within constraining paradigms that limit the kinds of questions researchers ask, the kinds of evidence they consider to be legitimate, and the sort of explanations they consider plausible. Instead of seeing the history of science as a smooth, continuous progression towards objective “Truth,” Kuhn portrays it as a series of dominant research paradigms that radically shift from one to the next.

In a similar vein, Dosi argues that technologies tend to develop along trajectories that are governed by a dominant paradigm. This paradigm limits what kinds of solutions are investigated, causing engineers to favor incremental changes to the existing paradigm over radical departures from it. Every practicing engineer knows that it is far easier to sell the management on a slight improvement to an existing design than on a risky, untested, radically new one. This is especially true when the new design would eliminate the most profitable aspect of the current business.

But eventually the dominant paradigm shifts, and that radical disruption creates new opportunities in the market that may enable new players to rise, or an even larger industry restructuring to occur. The shift from film to digital photography is a case in point: even though Kodak may have invented the techniques behind digital photography, they seem to have been limited by the dominant paradigm of film.

What other kinds of artifacts or devices will your children and grandchildren only know from history books?

The Phone Stack

Earlier this week, I ran across a story about a group of friends who have devised a clever way to keep themselves from getting distracted by their phones when they meet at a restaurant. After everyone has ordered, they all put their mobile phones facedown in the center of the table, sometimes stacked in a tall pile (which they call the “phone stack”). As the meal progresses, various phones might buzz or ring as new texts arrive, notifications are displayed, or calls are received. When this happens, the owner of the phone might be tempted to flip it over, but doing so comes at a cost: the first person to touch their phone has to pick up the check!

I like this idea for two reasons. First, it’s an ingenious yet simple mechanism for avoiding that all too common experience where your fellow diners spend more time interacting with their phones than with each other. Instead of pretending that mobile phones are not really a distraction, it puts them front and center, acknowledging their potential for disruption, yet declaring that their human owners still have the power to ignore them when engaged in face-to-face community. Turning their phones completely off might be even better, but keeping them on yet ignoring them seems to require even more reflective discipline. The public and very noticeable ritual of stacking the phones also acts like a kind of witness to others in the restaurant, advocating for the importance of being fully present when one has that rare opportunity to sit down with friends.

The other reason I like this is that it is a nice example of a more general phenomenon. When social groups adopt a new device, they often create rules or games like these to govern the use of that device when gathered together. Small, close-knit groups like the one that invented this game can easily enforce their rules, but larger cultures go through a social process of working out new norms that are generally followed, at least to some degree. For example, movie theaters have been running messages before films for several years now, asking audiences to silence their mobile phones, but I’ve noticed recently that they have expanded this message by asking audiences to refrain from using their phones at all, silently or otherwise, during the film. Just as it is now rare to hear a mobile phone audibly ring during a film, I hope it will soon be just as rare to see the glow of a phone screen as an audience member responds to a text message.

What kind of rules or games have your families or friends created to limit the use of mobile devices when gathered together?

The Struggle to Define a New Device: More on the Moog

I’ve been reading more about the Moog synthesizer, and in this post I want to talk about a story I ran across in Trevor Pinch and Frank Trocco’s wonderful book, Analog Days: The Invention and Impact of the Moog Synthesizer. The story concerns the early days of the synthesizer and a rather significant recording you might have heard of.

As I mentioned in my previous post on the Moog, it was not immediately obvious to everyone what exactly the early synthesizers were, much less what they were good for. The avant-garde musicians were excited by the new sonic possibilities created by the synthesizer, and sound effects engineers quickly embraced it for their work, but both of these early uses had the effect of defining the synthesizer as an ethereal noise-making device, and not an instrument capable of making “real” music. One reviewer criticized the early synthesizers as sounding like an “obnoxious mating of a catfight and a garbage compactor,” useful only for “cheesy, invader-from-Mars movies” (132).

So how did the synthesizer get redefined as the keyboard instrument we know today? In their book, Pinch and Trocco describe in detail how this occurred, but one crucial story seems to have been the turning point in the process. It’s a story that has all the elements you’d ever want: Johann Sebastian Bach, analog synthesizers, and one of the first openly transgender musical performers.

By 1968, the Moog synthesizer had already been featured on a few rock albums, but its use was still limited to creating ancillary, psychedelic sonic effects. Groups like The Byrds, The Doors, and even the Beatles had been enthusiastic adopters of the Moog (especially after they discovered the synesthesia-like effects its sounds often had for those high on LSD, a drug that was legal in the US until late 1968), but their use of it was limited to a narrow set of common sounds that each copied from the others. This made the Moog an important, almost required component of late 1960s rock music, but it was still “largely seen as a way to add an unusual psychedelic effect here and there,” as opposed to an instrument capable of carrying the melody or harmony (122).

This all changed in the fall of 1968 with the release of the album Switched-On Bach. The recording featured the works of Bach performed on a Moog synthesizer, which was quite a feat considering that the Moog could produce only one note at any given time, and changing between different sounds required the time-consuming shuffling of patch cords and adjustments to various knobs. The album was entirely a production of the studio, with countless splices and overdubs to create the required effects, but those effects were nothing short of redefining: for the first time, someone had created very recognizable keyboard, and at times orchestral, music using nothing but an analog synthesizer. The album was an instant hit, becoming one of the first classical albums to go platinum, eventually reaching the Billboard Top 10.

The performer, or “synthesist” as they were commonly known, was credited as Walter Carlos. A classically-trained pianist who also had a passion for electronics, Carlos studied music at Brown University, but actually majored in physics, and brought that technical expertise to a masters in music composition at Columbia. Uninterested in the compositional serialism that was dominant at that time, Carlos turned his attention to electronic music, meeting Bob Moog in 1964, and purchasing one of his modular synthesizers soon after.

Carlos and Moog got along famously. Carlos was demanding, and could translate what he wanted musically into Moog’s native language: electronics. Carlos pushed Moog to improve the touch response of the keyboard, and develop new modules that would allow him to better recreate the timbres of orchestral instruments. Carlos was a perfectionist, and the quality of music he was able to produce was beyond what anyone else had done with a Moog. In many ways, Carlos’s efforts reshaped the Moog from a sound-effects device into a keyboard instrument capable of playing Bach.

The relationship between Carlos and Moog provides us with a nice example of how “users” of technologies often turn out to have profound effects upon them. Many cultural critics tend to assume that the influence of technology on culture goes in only one direction; that technologies “impact” culture, and culture has little to no influence on those technologies in return. But when we examine historical cases like the Moog in detail, we often see examples where the early users profoundly shaped devices as they were being adopted. In fact, the line between inventor, producer, and user is often quite blurry and porous during the initial years of a new technological artifact or system.

The commercial success of Switched-On Bach spawned a litany of copy-cat albums: Switched-On Bacharach, Switched-On Gershwin, Switched-On Santa, and Chopin à la Moog, to name just a few. My personal favorite is The Plastic Cow that Goes MOOOOOG, a title which no doubt further cemented the common mispronunciation of Moog’s name; Moog is actually a Dutch name that rhymes with “rogue,” though most people (including myself before I heard otherwise) assume that it is pronounced like a cow’s “moo” with a “g” on the end.

The album’s success also made Carlos an overnight star, but sadly it was a fame that Carlos could not fully enjoy. During the making of Switched-On Bach, Walter Carlos was slowly becoming Wendy. Carlos began cross-dressing and taking hormone therapy during 1968, and was living “permanently as a woman by the middle of May 1969” (137). Carlos made a few public appearances as Walter, wearing a man’s wig and makeup to simulate sideburns and facial stubble, but eventually withdrew from public scrutiny to complete her transition. Since the music she created was impossible to play live, there was no demand for a tour, and Carlos returned to the studio to create more albums featuring the Moog.

The lesson here is that Switched-On Bach was a powerful resource in the struggle to define just what this new device was, and what it was good for. It demonstrated without a doubt that the Moog was a real instrument capable of producing not just psychedelic or ethereal sonic effects, but recognizable melody and harmony. Pinch and Trocco also note that this album was the reason many notable pop and rock keyboardists, such as Keith Emerson, Patrick Gleeson, Tomita, and Stevie Wonder, embraced the synthesizer as a new instrument, capable of playing the lead musical line (147).

iPad in da House

A few posts back, I noted that new artifacts are always to some extent “underdetermined”; that is, different groups will often have conflicting opinions as to what the new artifact actually is, and what it is good for. One current example of this phenomenon is some recent discourse surrounding Apple’s sexy new tablet: the iPad.

Earlier this week, a colleague and good friend of mine from the UK sent me a news story about a member of Parliament reading her speech from an iPad instead of a printed piece of paper. At first I was perplexed, as I couldn’t imagine why this was newsworthy, but the article explained that this was indeed the first instance of an MP using the new tablet device (instead of the more traditional printed paper) during a speech. It also explained that electronic devices like laptop computers have always been banned from the chamber. But the iPad posed a bit of a quandary: is it just a portable computer in a different form-factor, or is it more akin to electronic paper? How you answer that question determines whether the MP’s use of the iPad was appropriate or not.

As it turns out, the iPad has recently made a few appearances in other political assemblies as well. In June of last year, a similar incident happened in the German Parliament, where the use of laptops is also banned. In December of last year, US Representative Henry Cuellar (D-TX) was questioned over his use of an iPad during his speech. In his defense, he wittily replied “I’m not using it to play Angry Birds!”

It seems that the US House of Representatives, like many parliamentary bodies, has traditionally banned the use of electronic devices, especially those that can receive and transmit information over communication networks. Interestingly, the reason is not based on a concern for security; rather, it is a concern for “decorum” and the need for representatives to avoid outside distractions while in session. Laptop computers and mobile phones fall into this category of “distracting devices,” but the US House does currently allow “unobtrusive handheld electronic devices” such as BlackBerrys, and now, it would seem, iPads.

Why are iPads allowed while laptop computers are not? If you try to answer this from a technical perspective, you would just become frustrated. After all, an iPad is really just a somewhat-simplified laptop computer in a different form factor, and it can offer up just as many distractions (if not more) as a full-fledged laptop. To answer this, we need to remove our engineering caps, and don our sociological ones instead.

The difference between the two really lies in the social meanings this culture attaches to the respective devices. At some point in the past, these politicians achieved some degree of “closure” on their meaning of the laptop computer, characterizing it as a device that is too distracting for use within the chamber (for the concept of closure, see the SCOT framework developed by Pinch and Bijker). When the iPad was introduced, the culture was faced with the task of constructing a meaning for the new device, and as is typical, different groups within the culture have tried to characterize it in terms of categories they already knew. Some have argued that it should be banned because it is just like a laptop computer, while others have advocated that it should be allowed, because it is just an electronic version of the paper and pens they already condone.

Those of us in education will also soon face this same quandary (if you haven’t already). The iPad is a perfect medium for interactive textbooks, and it could become a decent note-taking device if Apple devises an easier method for silent text input (perhaps a chording keyboard?). Should we resist it, arguing that it is too much of a temptation toward distraction? Or should we embrace it and actively try to shape it into something beneficial for students? I think the latter is possible, but only through conscious, active engagement at this most critical stage of adoption.

Moog Documentary

I recently watched a fascinating documentary about Bob Moog, the inventor of the Moog synthesizer. Here is a trailer for it:

(If you are interested in watching this documentary, it is currently available via instant-play on Netflix, or you can watch it in segments on YouTube.)

I have to admit that as a documentary film, it wasn’t the best it could be, but I love the subject matter. The synthesizer is another one of those artifacts that, when introduced, caused quite a lot of angst in the surrounding culture. Avant-garde musicians loved it, sound-effects engineers eagerly embraced it, but the wider culture didn’t really know what to make of this thing. It looked far more like a telephone switchboard than it did a musical instrument.

The original Moog synthesizers were complicated beasts, with dozens of dials, switches, and patch cords. They had keyboards as well, but the synthesizer could produce only one note at a time, so the keyboard was really just a mechanism to set the initial pitch of the generated wave, which could then be bent and transformed by the various processing modules. Most avant-garde musicians actually had little use for the keyboard, preferring instead to generate new kinds of sounds and pitches that did not fit into the traditional tempered scale. Other synthesizer makers that were more influenced by these musicians (such as Don Buchla) omitted the keyboard entirely.

Several progressive rock musicians also started using Moog’s synthesizers, most notably Keith Emerson of Emerson, Lake & Palmer. Because these groups toured, they asked for a more portable, self-contained version, and in 1970 Moog introduced what became his most iconic instrument, the Minimoog.

Sadly, critics accused Moog and his synthesizer performers of destroying music. For these critics, real musical sounds could originate only from strings, wood, brass, or skins. Electronically-produced sounds were simply not ‘natural’ and thus not music.

But is there anything really ‘natural’ about a violin, saxophone, or drum? Each one of these musical instruments is an artifact, something created by humans that does not exist apart from human agency. At some point in history, violins were invented, developed, adopted, and shaped into the instrument we know today. Violins are certainly old, and their sound can move the human heart, but they are hardly products of Nature.

We must be careful when we swing around that word ‘natural’; we too often use it as an unreflective synonym for ‘traditional’. The distinction between ‘natural’ and ‘artificial’ is a rather hard and unyielding one, but what is considered ‘traditional’ is malleable; it changes over time, adapting to new cultural developments.

Historical cases like the Moog synthesizer should teach us that the dire predictions of today’s cultural critics need to be taken with a large grain of salt. The synthesizer didn’t destroy music; quite the opposite occurred as musicians embraced the new sounds and techniques made possible by that new instrument. It would have been difficult in 1970 to foresee how the synthesizer would enable new approaches to music-making that we today take for granted.

So will mobile phone texting and Twitter be the death of writing? Will Facebook destroy ‘real’ community? It is unlikely that we can foresee now just what changes these systems will engender in our society. These systems will, no doubt, reshape our cultures in profound ways, but our cultures will also reshape these systems in return. The real question is: which social groups will be the predominant shapers of these systems as they evolve?