
Gadget Part IV: The End

I recently finished Jaron Lanier’s book You Are Not a Gadget: A Manifesto. I have to admit that the book has grown on me a bit since I started reading it. At first, the book really frustrated me, as Lanier is not what you would call an analytical thinker–that is, he doesn’t proceed from one point to the next, linearly building an argument to prove a thesis. Rather, he tells stories and discusses networks of ideas as he swirls around a general topic he wishes to explore. That kind of thinking can often produce highly creative insights, but it is difficult to then summarize in a short review (which, I’m sure he would say, is precisely the problem with blogging!). I often liken his style of thinking to watching a pointillistic painting in progress: you watch the artist make a point here, a point there, but it’s not until the picture is finished that you can step back and understand it as a whole.

Section three of the book covers Lanier’s last big complaint about web 2.0 systems and the digital culture that surrounds them: despite their ability to harness the creativity of the hive mind, they can’t seem to produce anything truly innovative. Lanier argues that for the last two decades, digital culture denizens have simply rehashed old ideas, applying them to new contexts for sure, but without any substantial improvement.

He offers three chief examples. First, he discusses the open source operating system Linux, describing it as simply a port of a messy, difficult-to-use, 40-year-old operating system to the Intel PC. Despite the immense increase in processing power since the 1970s, Linux uses essentially the same design, and offers pretty much the same features as the early UNIX variants. Linux lovers usually say that there’s no need to improve on good design, but Lanier thinks that this lack of improvement is due to the “crowd” not having the same level of creative potential that singular people do.

I have used Linux myself, and have done research into the history and sociology of open-source software, and much of what Lanier says rings true. Linux is most certainly reliable and powerful, but it really is the kind of mess that only a balding man with a long beard and Birkenstocks could love. Much of the innovation in Linux these days is actually funded by commercial corporations that either sell related services, or rely on it to run their core business. Successful open source projects also tend to have strong leaders who set the agenda and overall design for the product.

Lanier’s second example is his familiar whipping boy, Wikipedia. There is of course nothing new about the idea of an encyclopedia, and although Wikipedia fundamentally changed the notion of authorship and the scope of entries, Lanier thinks that its contributions have not really improved on the earlier forms, much less created a radically new kind of knowledge source. He summarizes his disappointment with this rather scathing passage:

Let’s suppose that back in the 1980s I had said, “In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX!” It would have sounded utterly pathetic (121-122).

Later in the book, Lanier asks why the open source community has so far only managed to produce nicely-polished versions of antiques, while commercial industry has continued to produce game-changing innovations like the iPhone and iPad. His conclusion: people are creative, not crowds.

His third example is pop music, and the claim he makes here is perhaps his most provocative and certainly the most suspect. Lanier asserts that pop music has been essentially frozen in time over the last two decades, and that it would be difficult, if not impossible, to place a recent but unfamiliar pop song in its proper decade. Why? Because pop music is in a posture of nostalgia and dominated by inflexible digital models like MIDI. He also implies that homogenizing web 2.0 systems are dulling consumers’ appetites for new and creative expressions.

Lanier is most likely overstating his case in this section; one could probably find counterexamples that defy his general claims, but his underlying thesis–that people are creative, not crowds–is an intriguing one. It is a restatement of that old adage “too many cooks spoil the broth” and the joke that “a camel is a horse designed by a committee” (which is really unfair to camels). It may be true that the collective knowledge of the crowd is more reliable than the claims of any one person, but when that same logic is applied to creativity, the results tend to be conservative, nostalgic, or just plain messy.

Lanier ends the book with two sections on how he thinks digital technologies could be developed and used in ways that are more honoring of, and beneficial to, human beings. He focuses mostly on modeling and understanding human cognition, a field in which he is currently participating. He offers a different computational metaphor for the human brain, one based on incremental evolution, and explores the possibilities of using virtual reality to communicate in a fluid stream of concrete images.

On the whole, I recommend reading the book if you are interested in the issues surrounding web 2.0 information systems. But don’t expect a linear argument–instead, prepare yourself for a journey through ideas with a technologist who has been at the forefront of innovation for at least three decades.

I’ll leave you with a video he mentions in the closing chapter of the book. Watch it closely; the octopus being filmed can literally change the color and texture of its skin to match its environment, morphing itself much like what we see in science fiction films. Will we figure out how to introduce this trait into humans with genetic engineering?

Review of You Are Not A Gadget: Part I

I recently started reading a rather interesting new book: You Are Not a Gadget: A Manifesto, written by Jaron Lanier. I’ll post my reactions to the book as I read through it; this first post will introduce the author and review the opening of the book.

Jaron Lanier is an unlikely candidate to write a book that is critical of any kind of technology. In the 1980s he coined (or at least popularized) the term “virtual reality” and developed some of the first VR gear and software systems. After his company was acquired by Sun Microsystems, he joined a non-profit computer networking consortium, Internet2, as their Chief Scientist. His technical work has continued to focus on total-immersion computing, and he has held a number of research positions at major computer software and hardware firms.

Lanier is also an avid musician and composer, specializing in the playing of some rather unusual string and wind instruments. For those that have never worked in the computer industry, this combination of music and computers is not as strange as it might at first seem. During my years as a full-time software developer and program manager, I worked with numerous people who were also very accomplished musicians. Several had earned advanced musical degrees, and some even continued to play professionally as they had time. This, I think, attests to the highly-creative, artisanal nature of software programming.

In his book, Lanier brings both of these aspects of his personality to a critical examination of the so-called “web 2.0” information systems. He makes it clear in the introduction to the paperback edition that his perspective is “not antitechnology in any sense. It is prohuman” (ix). His concern is also not with the Internet as a whole, but with particular designs that “tend to pull us into life patterns that gradually degrade the ways in which each of us exists as an individual” (x). In the opening chapter, he states his position rather clearly: “the so-called web 2.0 ideas are stinkers, and we ought to reject them while we still can.”

For those that have never heard the term “web 2.0,” it refers to a style of web site that emphasizes user participation and sharing. The “content” of a web 2.0 site comes from the same people who use it: the posts and comments on a blog, the videos on YouTube, the reviews on Yelp, the information on Wikipedia, the posts on Facebook, and the tweets on Twitter. The advocates of web 2.0 celebrate the democratization of information these sites allow, but Lanier harbors some serious misgivings.

So what does Lanier think is so wrong with web 2.0? Honestly, it’s a bit difficult to grasp his argument in the opening chapter. It feels like a series of highly provocative statements, with little to no supporting evidence, strung together in a stream of consciousness. In the preface, he predicts that his words in this book will be “misrepresented by crowds of quick and sloppy readers into wikis and automatically aggregated wireless text message streams,” but he certainly isn’t making it easy in his first chapter for his readers to do otherwise. Perhaps that is his goal: to force people to read beyond the first chapter and struggle with his words to figure out what he’s really talking about.

From what I can tell, his primary concerns are articulated in the following paragraph:

When developers of digital technologies design a program that requires you to interact with a computer as if it were a person, they ask you to accept in some corner of your brain that you might also be conceived of as a program. When they design an internet service that is edited by a vast anonymous crowd, they are suggesting that a random crowd of humans is an organism with a legitimate point of view (4).

Lanier seems to be concerned that our understanding of personhood is being degraded in subtle ways by the assumptions that underlie the designs of current web 2.0 technologies. He uses the Musical Instrument Digital Interface (MIDI) as an example of what he means. MIDI is a standard for representing musical phenomena in a digital form so that a computer can process and manipulate them. It was invented by a friend of Lanier’s, Dave Smith, who also happened to be a keyboardist. Lanier makes the claim that MIDI was designed from a keyboardist’s perspective, and thus projects a keyboard-centric model that has difficulty representing the subtleties of wind and string instruments, much less the human voice. MIDI event messages are the analog of what happens on a keyboard: a key is pressed, a key is released. The note selected and the pressure of the strike are passed in the key-down message, and the performer can trigger additional events to bend the pitch while the note is still on, but all the other effects one can do with acoustic instruments have no representation in MIDI (note: I have never studied MIDI, so this is Lanier’s take on it, not mine). Thus, the wide range of possible acoustical musical phenomena are lost when we start to think of music only in the terms MIDI allows; Lanier argues that the model MIDI projects has cheapened what is possible in music, and if we are not careful, we will forget that it’s only a model, and a limited one at that.
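To make the keyboard-centric model concrete, here is a small Python sketch of the raw bytes of the three MIDI channel messages mentioned above. The byte layout (status byte, note number, velocity; a 14-bit pitch-bend value) follows the MIDI 1.0 standard; the helper function names are mine, purely for illustration:

```python
# A note in MIDI exists only as discrete keyboard-style events:
# a key goes down (with a strike velocity), a key comes up, and the
# sounding pitch can be bent while the key is held. Each message is
# a status byte (high nibble = message type, low nibble = channel)
# followed by one or two 7-bit data bytes.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Key pressed: status 0x90 | channel, note number, strike velocity."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int, velocity: int = 0) -> bytes:
    """Key released: status 0x80 | channel."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def pitch_bend(channel: int, value: int) -> bytes:
    """Bend the sounding pitch: 14-bit value split into two 7-bit bytes.
    8192 (0x2000) means 'no bend'."""
    return bytes([0xE0 | (channel & 0x0F), value & 0x7F, (value >> 7) & 0x7F])

# Middle C (note 60) struck hard on channel 0, bent upward, then released.
events = note_on(0, 60, 100) + pitch_bend(0, 12288) + note_off(0, 60)
```

Notice what Lanier is pointing at: everything between the key-down and the key-up is invisible to this model, except for a handful of controller values like the bend wheel. There is no byte for a saxophonist’s embouchure or a violinist’s bow pressure.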

By analogy, a blog presents a certain model of discourse that is by necessity more simplistic than face-to-face interaction. You can add your comments to this post, and most blogging software will let you do that anonymously, which encourages people to say things that they would never have said to an author’s face. It may also make it difficult for me as an author to fully understand your point, as I can’t hear your vocal inflections, nor see your body language.

I agree with Lanier that we must remember that human discourse, music, and personhood are far more complex than our digital models for them, but I’m not sure I agree with his concerns that we are somehow forgetting that these are only models. I’m sure there are those who have difficulty understanding the difference between someone’s Facebook profile and the real person, but let’s consider his main example: MIDI. In the 1980s, one of my favorite sax players, the late Michael Brecker, eagerly embraced the electronic wind instrument (EWI), which was a breath-activated MIDI controller. He made some amazingly creative music with that device, but he never abandoned his acoustic saxophone. He approached the EWI as a new kind of instrument, with unique characteristics and potentialities. He explored those fully, but also never forgot how to create passionate, expressive, and heart-wrenching music on his saxophone.

Perhaps it is up to the artists to keep reminding the public that our digital models are only just that: new models that augment, not replace, the phenomenon they are intended to represent.

To end, here’s an example of Michael Brecker playing the EWI:

The Struggle to Define a New Device: More on the Moog

I’ve been reading more about the Moog synthesizer, and in this post I want to talk about a story I ran across in Trevor Pinch and Frank Trocco’s wonderful book, Analog Days: The Invention and Impact of the Moog Synthesizer. The story concerns the early days of the synthesizer and a rather significant recording you might have heard of.

As I mentioned in my previous post on the Moog, it was not immediately obvious to everyone what exactly the early synthesizers were, much less what they were good for. The avant-garde musicians were excited by the new sonic possibilities created by the synthesizer, and sound effects engineers quickly embraced it for their work, but both of these early uses had the effect of defining the synthesizer as an ethereal noise-making device, and not an instrument capable of making “real” music. One reviewer criticized the early synthesizers as sounding like an “obnoxious mating of a catfight and a garbage compactor,” useful only for “cheesy, invader-from-Mars movies” (132).

So how did the synthesizer get redefined as the keyboard instrument we know today? In their book, Pinch and Trocco describe in detail how this occurred, but there was one crucial story that seemed to be the turning point in the process. It’s a story that has all the elements you’d ever want: Johann Sebastian Bach, analog synthesizers, and one of the first transgender musical performers.

By 1968, the Moog synthesizer had already been featured on a few rock albums, but its use was still limited to creating ancillary, psychedelic, sonic effects. Groups like The Byrds, The Doors, and even the Beatles had been enthusiastic adopters of the Moog (especially after they discovered the synesthesia-like effects its sounds often had for those high on LSD, a drug that was legal in the US until late 1968), but their use of it was limited to a narrow set of common sounds that each copied from the other. This made the Moog an important, almost required component of late 1960s rock music, but it was still “largely seen as a way to add an unusual psychedelic effect here and there,” as opposed to an instrument capable of carrying the melody or harmony (122).

This all changed in the fall of 1968 with the release of the album Switched-On Bach. The recording featured the works of Bach performed on a Moog synthesizer, which was quite a feat considering that the Moog could produce only one note at any given time, and changing between different sounds required the time-consuming shuffling of patch cords and adjustments to various knobs. The album was entirely a production of the studio, with countless splices and overdubs to create the required effects, but those effects were nothing short of redefining: for the first time, someone had created very recognizable keyboard, and at times orchestral, music using nothing but an analog synthesizer. The album was an instant hit, becoming one of the first classical albums to go platinum, eventually reaching the Billboard Top 10.

The performer, or “synthesist” as they were commonly known, was credited as Walter Carlos. A classically-trained pianist who also had a passion for electronics, Carlos studied music at Brown University, but actually majored in physics, and brought that technical expertise to a masters in music composition at Columbia. Uninterested in the compositional serialism that was dominant at that time, Carlos turned his attention to electronic music, meeting Bob Moog in 1964, and purchasing one of his modular synthesizers soon after.

Carlos and Moog got along famously. Carlos was demanding, and could translate what he wanted musically into Moog’s native language: electronics. Carlos pushed Moog to improve the touch response of the keyboard, and develop new modules that would allow him to better recreate the timbres of orchestral instruments. Carlos was a perfectionist, and the quality of music he was able to produce was beyond what anyone else had done with a Moog. In many ways, Carlos’s efforts reshaped the Moog from a sound-effects device into a keyboard instrument capable of playing Bach.

The relationship between Carlos and Moog provides us with a nice example of how “users” of technologies often turn out to have profound effects upon them. Many cultural critics tend to assume that the influence of technology on culture goes in only one direction; that technologies “impact” culture, and culture has little to no influence on those technologies in return. But when we examine historical cases like the Moog in detail, we often see examples where the early users profoundly shaped devices as they were being adopted. In fact, the line between inventor, producer, and user is often quite blurry and porous during the initial years of a new technological artifact or system.

The commercial success of Switched-On Bach spawned a litany of copy-cat albums: Switched-On Bacharach, Switched-On Gershwin, Switched-On Santa, and Chopin à la Moog, to name just a few. My personal favorite is The Plastic Cow that Goes MOOOOOG, a title which no doubt further cemented the common mispronunciation of Moog’s name; Moog is actually a Dutch name that rhymes with “rogue,” though most people (including myself before I heard otherwise) assume that it is pronounced like a cow’s “moo” with a “g” on the end.

The album’s success also made Carlos an overnight star, but sadly it was a fame that Carlos could not fully enjoy. During the making of Switched-On Bach, Walter Carlos was slowly becoming Wendy. Carlos began cross-dressing and taking hormone therapy during 1968, and was living “permanently as a woman by the middle of May 1969” (137). Carlos made a few public appearances as Walter, wearing a man’s wig and makeup to simulate sideburns and facial stubble, but eventually withdrew from public scrutiny to complete the metamorphosis. Since the music Carlos created was impossible to play live, there was no demand for a tour, and Carlos returned to the studio to create more albums featuring the Moog.

The lesson here is that Switched-On Bach was a powerful resource in the struggle to define just what this new device was, and what it was good for. It demonstrated without a doubt that the Moog was a real instrument capable of producing not just psychedelic or ethereal sonic effects, but recognizable melody and harmony. Pinch and Trocco also note that this album was the reason many notable pop and rock keyboardists, such as Keith Emerson, Patrick Gleeson, Tomita, and Stevie Wonder, embraced the synthesizer as a new instrument, capable of playing the lead musical line (147).

Moog Documentary

I recently watched a fascinating documentary about Bob Moog, the inventor of the Moog synthesizer. Here is a trailer for it:

(If you are interested in watching this documentary, it is currently available via instant-play on Netflix, or you can watch it in segments on YouTube.)

I have to admit that as a documentary film, it wasn’t the best it could be, but I love the subject matter. The synthesizer is another one of those artifacts that, when introduced, caused quite a lot of angst in the surrounding culture. Avant-garde musicians loved it, sound-effects engineers eagerly embraced it, but the wider culture didn’t really know what to make of this thing. It looked far more like a telephone switchboard than it did a musical instrument.

The original Moog synthesizers were complicated beasts, with dozens of dials, switches, and patch cords. They had keyboards as well, but the synthesizer could produce only one note at a time, so the keyboard was really just a mechanism to set the initial pitch of the generated wave, which could then be bent and transformed by the various processing modules. Most avant-garde musicians actually had little use for the keyboard, preferring instead to generate new kinds of sounds and pitches that did not fit into the traditional tempered scale. Other synthesizer makers that were more influenced by these musicians (such as Don Buchla) omitted the keyboard entirely.
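The “one voice” design can be made concrete with a crude Python sketch. Here the keyboard does nothing but select the frequency of a single oscillator (I’ve used the standard equal-tempered key-to-frequency mapping and a sawtooth wave as the illustrative choices; none of this is meant to model the Moog’s actual circuitry):

```python
SAMPLE_RATE = 44100

def key_to_freq(key: int) -> float:
    """Equal-tempered keyboard mapping: key 69 (A4) sounds at 440 Hz,
    and each key up or down multiplies the frequency by 2**(1/12)."""
    return 440.0 * 2 ** ((key - 69) / 12)

def sawtooth(freq: float, duration: float) -> list[float]:
    """One oscillator, one voice: generate samples for a single pitch.
    There is no way to sound two notes at once with a single oscillator,
    which is exactly the limitation of the original monophonic Moog."""
    n = int(duration * SAMPLE_RATE)
    return [2.0 * ((i * freq / SAMPLE_RATE) % 1.0) - 1.0 for i in range(n)]

# The keyboard merely selects the starting pitch of the one available
# voice; pressing a second key cannot add a second note.
samples = sawtooth(key_to_freq(60), 0.5)  # middle C for half a second
```

In the modular Moog, this raw waveform would then be routed through filter and envelope modules via patch cords, which is why the keyboard was, as noted above, really just a pitch selector rather than the heart of the instrument.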

Several progressive rock musicians also started using Moog’s synthesizers, most notably Keith Emerson of Emerson, Lake & Palmer. Because these groups toured, they asked for a more portable, self-contained version, and in 1970 Moog introduced what became his most iconic instrument, the Minimoog.

Sadly, critics accused Moog and his synthesizer performers of destroying music. For these critics, real musical sounds could originate only from strings, wood, brass, or skins. Electronically-produced sounds were simply not ‘natural’ and thus not music.

But is there anything really ‘natural’ about a violin, saxophone, or drum? Each one of these musical instruments is an artifact, something created by humans that does not exist apart from human agency. At some point in history, violins were invented, developed, adopted, and shaped into the instrument we know today. Violins are certainly old, and their sound can move the human heart, but they are hardly products of Nature.

We must be careful when we swing around that word ‘natural’; we too often use it as an unreflective synonym for ‘traditional’. The distinction between ‘natural’ and ‘artificial’ is a rather hard and unyielding one, but what is considered ‘traditional’ is malleable; it changes over time, adapting to new cultural developments.

Historical cases like the Moog synthesizer should teach us that the dire predictions of today’s cultural critics need to be taken with a large grain of salt. The synthesizer didn’t destroy music; quite the opposite occurred as musicians embraced the new sounds and techniques made possible by that new instrument. It would have been difficult in 1970 to foresee how the synthesizer would enable new approaches to music-making that we today take for granted.

So will mobile phone texting and Twitter be the death of writing? Will Facebook destroy ‘real’ community? It is unlikely that we can foresee now just what changes these systems will engender in our society. These systems will, no doubt, reshape our cultures in profound ways, but our cultures will also reshape these systems in return. The real question is: which social groups will be the predominant shapers of these systems as they evolve?

When a Device Becomes an Instrument

In my last post, I hinted that Borgmann’s “device paradigm” can become a bit problematic when we consider some historical cases, and in this post, I want to explain what I meant by that.

Borgmann illustrates his concept of the device paradigm in a few different examples, but the one that caught my attention was his contrast between playing music on an instrument, such as a violin, and playing music via a stereo. In the former, the musician is participating in the creation of something new that exists only in that place and in that time. In the latter, the stereo recreates a commodified recording of music, something that has been divorced from place and time. The player of the violin must have some degree of skill and familiarity with the material aspects of the instrument, built up over many hours of practice. The player of the stereo needs only enough money to purchase the stereo and some recorded music, and a basic understanding of which buttons to push to start the playback.

I found this example to be intriguing, for I too am a musician. I started playing clarinet at age nine, switched to saxophone about three years later and have played ever since. I have experienced that transcendent moment when the music produced by the ensemble achieves a quality that goes beyond the sum of the individual contributions of the players. It is a feeling that is fundamentally rooted in that time and place, and even if it is captured on a recording, the playback can never quite recapture that moment, especially if the listener was not there for the original performance.

As a musician, I am sympathetic to Borgmann’s example, but as a historian of technology, I find that it needs some clarification. The trouble with the example is that it seems to assume that a stereo has only one possible purpose or use. It assumes that the device has some kind of intrinsic meaning that is inherent in its design. But is the playback of commodified, pre-recorded music the only thing that one can do with a stereo? Can it, in fact, also be used as an instrument in its own right to create new music?

Consider the technique of “scratching” that was popularized by hip-hop artists, but has now spread to a number of other genres. The artifacts are all the same–the turntable, vinyl record, amplifier, and speakers–but the meaning ascribed to them has changed. They are no longer simply devices for playing commodified pre-recorded music. They have been transformed, by the hands of the artist, into a new instrument, capable of creating new sounds and new kinds of music. The ‘device’ has been reclaimed (or should we say ‘redeemed’?) as a new tool for human creativity.

This example, I think, points towards a deeper and rather important dynamic in the interaction between technology and culture. We often assume that artifacts have singular and stable social meanings that arise out of their very designs: a stereo is for playing pre-recorded music; a bicycle is for riding from one place to another; a plastic payment card is a vehicle for consumer credit. But when we look at the detailed historical contexts surrounding the development and adoption of these artifacts, we can see that what they were “good for” was not immediately obvious to everyone when they were first introduced. The social meanings we now attribute to those artifacts were the result of social negotiations between the inventors, producers, marketers, legislators, and consumers.

This process of “working out” the meaning and purpose of a new artifact or system is often referred to as the process of “domestication” (see the introduction of Oudshoorn and Pinch, How Users Matter). I love that word. It makes one think of new technologies as a kind of wild beast that eventually gets tamed by the culture that adopts it. It might ascribe a bit too much agency to artifacts, but it does make clear that the users of these new artifacts play an important role in deciding how these artifacts will be adopted and used. In some cases, they even play a significant role in (re)shaping the material aspects of an artifact to better suit their desired use (for example, the bicycle fractured into racing, pleasure, commuting, and eventually mountain varieties, each driven by a different social group ascribing a different meaning to the bicycle).

Of course, we can’t ascribe any sort of meaning we wish to a given artifact. Artifacts are stubborn things; they are obdurate. Some artifacts have only a few possible uses, and others seem to have a certain political meaning inscribed into them by their designers (see Winner, “Do Artifacts Have Politics?“). But this is not to say that the meaning of an artifact is deterministic; all artifacts, even the most stubborn, are still “underdetermined.” People can ascribe multiple different meanings to the object, and those meanings can change over time.

So where am I going with all of this? The trouble I see with Borgmann’s device paradigm is that it focuses our attention too much on the artifacts themselves, and not enough on our relationship with those artifacts. It tempts us to think that artifacts have intrinsic meanings and purposes, which further tempts us to brand some artifacts as inherently ‘bad’ and others as inherently ‘good’, regardless of the ways they might be reinterpreted by the adopting culture in the future. This leaves us no room to redeem these artifacts, to ascribe new meanings and purposes to them, and to relate to them in a different way. It robs us of our potential for “creative destruction,” the tearing down of that which we feel is unhealthy and oppressive in order to rebuild something more life-giving.

Ultimately, I agree with Borgmann about the evils of commodifying that which should remain sacred, but I think we need to be careful about where we place the blame for that activity. To play with a familiar phrase, “The fault, dear Brutus, lies not within our devices, but within ourselves.”