
Gadget Part IV: The End

I recently finished Jaron Lanier’s book You Are Not a Gadget: A Manifesto. I have to admit that the book has grown on me a bit since I started reading it. At first, the book really frustrated me, as Lanier is not what you would call an analytical thinker–that is, he doesn’t proceed from one point to the next, linearly building an argument to prove a thesis. Rather, he tells stories and discusses networks of ideas as he swirls around a general topic he wishes to explore. That kind of thinking can often produce highly creative insights, but it is difficult to summarize in a short review (which, I’m sure he would say, is precisely the problem with blogging!). I often liken his style of thinking to watching a pointillistic painting in progress: you watch the artist make a point here, a point there, but it’s not until the picture is finished that you can step back and understand it as a whole.

Section three of the book covers Lanier’s last big complaint about web 2.0 systems and the digital culture that surrounds them: despite their ability to harness the creativity of the hive mind, they can’t seem to produce anything truly innovative. Lanier argues that for the last two decades, digital culture denizens have simply rehashed old ideas, applying them to new contexts for sure, but without any substantial improvement.

He offers three chief examples. First, he discusses the open source operating system Linux, describing it as simply a port of a messy, difficult-to-use, 40-year-old operating system to the Intel PC. Despite the immense increase in processing power since the 1970s, Linux uses essentially the same design, and offers pretty much the same features as the early UNIX variants. Linux lovers usually say that there’s no need to improve on good design, but Lanier thinks that this lack of improvement is due to the “crowd” not having the same level of creative potential that singular people do.

I have used Linux myself, and have done research into the history and sociology of open-source software, and much of what Lanier says rings true. Linux is most certainly reliable and powerful, but it really is the kind of mess that only a balding man with a long beard and Birkenstocks could love. Much of the innovation in Linux these days is actually funded by commercial corporations that either sell related services, or rely on it to run their core business. Successful open source projects also tend to have strong leaders who set the agenda and overall design for the product.

Lanier’s second example is his familiar whipping boy, Wikipedia. There is of course nothing new about the idea of an encyclopedia, and although Wikipedia fundamentally changed the notion of authorship and the scope of entries, Lanier thinks that its contributions have not really improved on the earlier forms, much less created a radically new kind of knowledge source. He summarizes his disappointment with this rather scathing passage:

Let’s suppose that back in the 1980s I had said, “In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX!” It would have sounded utterly pathetic (121-122).

Later in the book, Lanier asks why the open source community has so far only managed to produce nicely-polished versions of antiques, while commercial industry has continued to produce game-changing innovations like the iPhone and iPad. His conclusion: people are creative, not crowds.

His third example is pop music, and the claim he makes here is perhaps his most provocative and certainly his most suspect. Lanier asserts that pop music has been essentially frozen in time for the last two decades, and that it would be difficult, if not impossible, to place a recent but unfamiliar pop song in its proper decade. Why? Because, he argues, pop music is stuck in a posture of nostalgia and dominated by inflexible digital models like MIDI. He also implies that homogenizing web 2.0 systems are dulling consumers’ appetites for new and creative expressions.

Lanier is most likely overstating his case in this section; one could probably find counterexamples that defy his general claims, but his underlying thesis–that people are creative, not crowds–is an intriguing one. It is a restatement of the adage that “too many cooks spoil the broth” and the joke that “a camel is a horse designed by a committee” (which is really unfair to camels). It may be true that the collective knowledge of the crowd is more reliable than the claims of any one person, but when that same logic is applied to creativity, the results tend to be conservative, nostalgic, or just plain messy.

Lanier ends the book with two sections on how he thinks digital technologies could be developed and used in ways that better honor, and benefit, human beings. He focuses mostly on modeling and understanding human cognition, a field in which he currently works. He offers a different computational metaphor for the human brain, one based on incremental evolution, and explores the possibilities of using virtual reality to communicate in a fluid stream of concrete images.

On the whole, I recommend reading the book if you are interested in the issues surrounding web 2.0 information systems. But don’t expect a linear argument–instead, prepare yourself for a journey through ideas with a technologist who has been at the forefront of innovation for at least three decades.

I’ll leave you with a video he mentions in the closing chapter of the book. Watch it closely; the octopus being filmed can literally change the color and texture of its skin to match its environment, morphing itself much like what we see in science fiction films. Will we figure out how to introduce this trait into humans with genetic engineering?


Gadget Part III: The Problem of Linear Extrapolation

Part II of Jaron Lanier’s book You Are Not a Gadget: A Manifesto discusses what he considers to be the economic failures of the web 2.0 paradigm, and the first of these failures is what he refers to as “free culture”: the cultural expectation that digital content, regardless of quality, should be free, both in terms of cost and copyright.

Lanier likens this to a kind of digital Marxism, where individual property rights are ignored, and all creative production is assumed to belong automatically to the “hive mind.” His primary offender is YouTube, a site which makes it very easy to share videos, many of which are mashups of copyrighted content used without permission, that are freely viewable by anyone with an Internet connection.

Of course, Google does not operate YouTube just for the benefit of humanity. The economic model of free culture rests upon the familiar foundation of advertising, and those content creators who build the largest audiences often get a small share of the spoils. But if most content creators receive nothing for their work, what motivates them to contribute? Lanier thinks that it is ultimately about self-promotion:

The combination of hive mind and advertising has resulted in a new kind of social contract. The basic idea of this contract is that authors, journalists, musicians, and artists are encouraged to treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind. Reciprocity takes the form of self-promotion. Culture is to become precisely nothing but advertising (83).

Lanier goes on to argue that this self-promotion is sometimes getting rewarded by old-style media: take for example the blogger who builds a large-enough audience to get a book contract, which then turns into a movie deal (e.g., Juno; Julie & Julia). These artists receive significant compensation for their work, but that compensation doesn’t come from the web 2.0 crowd–it comes from those of us who still buy books and pay to watch movies.

Lanier’s concern is that as soon as these old-style media are subsumed by the web 2.0 free culture, there will no longer be an incentive for artists to produce anything that takes significant time or effort. If a spontaneous video of a cute cat can receive millions of views, why go through the trouble to write a new story and produce a well-done film? Why spend time writing a new song when a mashup of old material, or an illegal copy of an old music video, becomes an instant hit?

Lanier ends this train of thought by wondering if artists of the future, like the great artists of the Renaissance, will be forced to find patrons in order to survive. If that happens, he reasons, art will become a handmaiden to institutions that will suppress the more daring and critical works.

I think Lanier has a legitimate concern about copyright in the digital era, but I think he is also making a mistake that is very common amongst futurists: he is assuming that the future will be a linear extrapolation of present trends. That is to say, he is assuming that the social order of the future will be much like that of the present, only more extreme.

I disagree with Lanier in that I see our current situation as a transitional one. We are in the process of working out a new economic model that is appropriate for a digital age, and it is highly unlikely that the model we eventually arrive at will bear much resemblance to the one we know now.

The adoption of digital computers and telecommunication networks most certainly challenged our traditional techniques for enforcing copyright. Although it was always possible to copy a record to a cassette tape and give it to your friend, the physicality of the medium and the loss of quality during copying tended to limit the extent to which this posed a serious threat to copyright. Now that one can rip a CD into a digital file, with very little noticeable loss of quality, and send it to millions of people around the world in a few seconds, artists (and their distributors) are scrambling to develop a new model where they can get their art out to the largest possible audience, but still also make some money from it.

There are several different models being tested now, and it is probably too early to tell which one might succeed, or if the model of the future will be something completely different. The iTunes store and CD Baby are two interesting models that so far have seemed to be successful. These systems allow both signed and independent musicians to sell their songs to a wide audience, and that audience seems to be very willing to pay for them. Both systems take a substantial cut of the revenues, and each no doubt hopes to cement itself at the center of all economic/artistic exchange.

But the future is not yet written, and I would not be surprised to see iTunes toppled by another network set up by artists, for artists. Digital technologies have made obsolete most of what record producers and distributors do, so a model like iTunes, which simply replaces those entities with a new one, need not be the final solution. New technologies always enable social change, but often the final extent of that change is beyond our current imaginations.

But to give Lanier credit where credit is due, here’s a video of a cat “playing” piano that has been viewed over 21 million times! Will this be the standard of “art” in the future? I certainly hope not.

Gadget Review Part II: Computers and the Person

In this post, I want to continue my review of Jaron Lanier’s book You Are Not a Gadget: A Manifesto, discussing his theme for part I: what is a person? Although Lanier admits that he has no answer to that question, he is concerned that web 2.0 information systems are degrading our understanding of personhood, robbing it of what he considers to be its ineffable mystery.

Lanier’s comments in this section of the book center around two main themes: the ideology of “cybernetic totalism,” as articulated through artificial intelligence; and the concepts of the “noosphere” and “hive mind,” which he sees as ultimately defeating to individuals.

Lanier makes the claim that the creators and proponents of web 2.0 information systems subscribe to an ideology that he calls “cybernetic totalism.” At the heart of this ideology is the computer metaphor of the human mind, which supposes that the human brain is really just a big, complex information processing device similar to a computer. If you then also assume that our consciousness is an emergent property of the physical workings of our brains (that is, our mind is not something separate from our physical brains), you could conclude that it should be possible to create an artificial brain that could achieve some form of consciousness. Furthermore, it should also be possible to recreate the physical pattern of a particular human brain in a computer, and in essence “upload” a person’s consciousness to a computer simulation. Lanier rather deftly likens this ideology to a new kind of religion, one that offers a new method of salvation and everlasting life.

This ideology, Lanier argues, began with Alan Turing’s famous thought experiment, commonly known as “the Turing test.” Turing was an early British computing pioneer who became interested in the possibility of artificial intelligence. Since the words “intelligence” and “thought” were too vague, he proposed an unambiguous test that would serve as a kind of proof of concept. It was based on a popular parlor game, in which a judge interacts with two players, one male and one female, only through written messages. The judge asks questions of the players, reviews the responses, and then has to guess which player is which gender. Turing replaced the woman with a computer, and postulated that if a human judge could not tell the computer and human players apart, the computer would have demonstrated at least some form of intelligence.
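
To see the shape of the protocol, here is a minimal sketch of the imitation game as just described. This is my own illustration, not anything from the book or from Turing’s paper, and the respondent and judge functions are hypothetical placeholders rather than real implementations:

```python
import random

def imitation_game(questions, human_respond, machine_respond, judge_guess):
    """Run one round of the imitation game.

    The judge sees only the labels 'A' and 'B', never which respondent
    is the human and which is the machine.
    """
    respondents = [("human", human_respond), ("machine", machine_respond)]
    random.shuffle(respondents)                    # hide the assignment from the judge
    labels = dict(zip("AB", respondents))
    transcript = {label: [(q, fn(q)) for q in questions]
                  for label, (_, fn) in labels.items()}
    guess = judge_guess(transcript)                # judge names the label it believes is the machine
    actual = next(label for label, (kind, _) in labels.items() if kind == "machine")
    return guess == actual                         # True means the machine was unmasked

if __name__ == "__main__":
    result = imitation_game(
        questions=["What is a sonnet?", "Describe the smell of rain."],
        human_respond=lambda q: input(q + "\n> "),                    # a real person types an answer
        machine_respond=lambda q: "That is hard to put into words.",  # stand-in for a chatbot
        judge_guess=lambda transcript: "A",                           # a (very lazy) judge
    )
    print("Judge correctly identified the machine:", result)
```

Turing’s wager, as Lanier recounts it, is that if the judge’s guesses are no better than chance over many such rounds, the machine has earned some claim to intelligence.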

The Turing test is indeed highly suggestive. It not only selects and elevates language as the hallmark of intelligence, but it also assumes that appearing to use language correctly is the same thing as actually understanding language. A computer might be able to mimic a human response to a question, but does that mean the computer actually understood the concepts encapsulated in that language? And even if it could, should we attribute the quality of intelligence to the machine itself, or the programmers who wrote the software that parsed the questions and determined the appropriate response?

One of my favorite responses to the Turing test was articulated by Steven Pinker, a cognitive researcher at Harvard and MIT, in the video opera “Three Tales” by Steve Reich and Beryl Korot:

I might be fooled by a good silk flower. It doesn’t mean that it’s a real flower. It may just mean that I don’t know enough about flowers.

Lanier goes on to argue that although artificial intelligence researchers have backed off the possibility of creating an artificial human-like mind, they have nevertheless influenced a generation of computer scientists to assume that “the computer is evolving into a life-form that can understand people better than people can understand themselves” (28). The implication is that web 2.0 information systems are built not to be passive tools that enable individuals to accomplish desired tasks more easily, but to anticipate what they think individuals might want and deliver it without the user needing to ask. This type of behavior, Lanier worries, will condition us to think that the computer always knows best, ultimately causing us to give up our autonomy to a network of machines and software that take over control of our lives.

Although his version of the Colossus or Terminator stories might be a bit far fetched at present, his comments about the dangers of anonymity in web 2.0 information systems are easier to agree with, especially if you have ever participated in an online discussion. Mike Godwin once made the following observation, now known as Godwin’s Law: “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.” Lanier’s argument is similar: the ability to contribute anonymously, or invent and change personas at will, brings out the “inner troll” in everyone (“troll” is a common term used for online sadists).

Lanier states that sites like Wikipedia are built upon the idea of the “noosphere”: a global intelligence created by the amalgamation of all the individual brains connected via the Internet. The theory is that collective intelligence is often more accurate than individual intelligence, but Lanier thinks this works only in certain contexts, and only to a certain degree. For example, when contributors are allowed to remain anonymous, or when they can invent and change personas at will, there is no possibility of the social retribution that would normally temper what participants choose to do.

Review of You Are Not A Gadget: Part I

I recently started reading a rather interesting new book: You Are Not a Gadget: A Manifesto, written by Jaron Lanier. I’ll post my reactions to the book as I read through it; this first post will introduce the author and review the opening of the book.

Jaron Lanier is an unlikely candidate to write a book that is critical of any kind of technology. In the 1980s he coined (or at least popularized) the term “virtual reality” and developed some of the first VR gear and software systems. After his company was acquired by Sun Microsystems, he joined a non-profit computer networking consortium, Internet2, as its Chief Scientist. His technical work has continued to focus on total-immersion computing, and he has held a number of research positions at major computer software and hardware firms.

Lanier is also an avid musician and composer, specializing in the playing of some rather unusual string and wind instruments. For those who have never worked in the computer industry, this combination of music and computers is not as strange as it might at first seem. During my years as a full-time software developer and program manager, I worked with numerous people who were also very accomplished musicians. Several had earned advanced musical degrees, and some even continued to play professionally as time allowed. This, I think, attests to the highly creative, artisanal nature of software programming.

In his book, Lanier brings both of these aspects of his personality to a critical examination of the so-called “web 2.0” information systems. He makes it clear in the introduction to the paperback edition that his perspective is “not antitechnology in any sense. It is prohuman” (ix). His concern is also not with the Internet as a whole, but with particular designs that “tend to pull us into life patterns that gradually degrade the ways in which each of us exists as an individual” (x). In the opening chapter, he states his position rather clearly: “the so-called web 2.0 ideas are stinkers, and we ought to reject them while we still can.”

For those who have never heard the term, “web 2.0” refers to a style of web site that emphasizes user participation and sharing. The “content” of a web 2.0 site comes from the same people who use it: the posts and comments on a blog, the videos on YouTube, the reviews on Yelp, the information on Wikipedia, the posts on Facebook, and the tweets on Twitter. The advocates of web 2.0 celebrate the democratization of information these sites allow, but Lanier harbors some serious misgivings.

So what does Lanier think is so wrong with web 2.0? Honestly, it’s a bit difficult to grasp his argument in the opening chapter. It feels like a series of highly provocative statements, with little to no supporting evidence, strung together in a stream of consciousness. In the preface, he predicts that his words in this book will be “misrepresented by crowds of quick and sloppy readers into wikis and automatically aggregated wireless text message streams,” but he certainly isn’t making it easy in his first chapter for his readers to do otherwise. Perhaps that is his goal: to force people to read beyond the first chapter and struggle with his words to figure out what he’s really talking about.

From what I can tell, his primary concerns are articulated in the following paragraph:

When developers of digital technologies design a program that requires you to interact with a computer as if it were a person, they ask you to accept in some corner of your brain that you might also be conceived of as a program. When they design an internet service that is edited by a vast anonymous crowd, they are suggesting that a random crowd of humans is an organism with a legitimate point of view (4).

Lanier seems to be concerned that our understanding of personhood is being degraded in subtle ways by the assumptions that underlie the designs of current web 2.0 technologies. He uses the Musical Instrument Digital Interface (MIDI) as an example of what he means. MIDI is a standard for representing musical phenomena in a digital form so that a computer can process and manipulate them. It was invented by a friend of Lanier’s, Dave Smith, who also happened to be a keyboardist. Lanier makes the claim that MIDI was designed from a keyboardist’s perspective, and thus projects a keyboard-centric model that has difficulty representing the subtleties of wind and string instruments, much less the human voice.

MIDI event messages are the analog of what happens on a keyboard: a key is pressed, a key is released. The note selected and the pressure of the strike are passed in the key-down message, and the performer can trigger additional events to bend the pitch while the note is still on, but all the other effects one can do with acoustic instruments have no representation in MIDI (note: I have never studied MIDI, so this is Lanier’s take on it, not mine). Thus, the wide range of possible acoustic musical phenomena is lost when we start to think of music only in the terms MIDI allows; Lanier argues that the model MIDI projects has cheapened what is possible in music, and if we are not careful, we will forget that it’s only a model, and a limited one at that.
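
To make that description concrete, here is a minimal sketch, entirely my own illustration rather than anything from the book, of those three event types (key down, key up, pitch bend) as raw MIDI status-and-data bytes. The point is how little each message carries: a key number, a strike strength, and an optional bend.

```python
# Each standard MIDI channel message is just a status byte plus one or two
# 7-bit data bytes -- a performance reduced to discrete key events.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Key pressed: which key (0-127) and how hard it was struck (0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int) -> bytes:
    """Key released: the note number, with a conventional release velocity of 0."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0x00])

def pitch_bend(channel: int, amount: int) -> bytes:
    """Bend the sounding pitch; 'amount' is a 14-bit value where 8192 means no bend."""
    return bytes([0xE0 | (channel & 0x0F), amount & 0x7F, (amount >> 7) & 0x7F])

if __name__ == "__main__":
    # "Play" middle C (note 60) firmly, bend it up slightly, then release it.
    for msg in (note_on(0, 60, 100), pitch_bend(0, 9000), note_off(0, 60)):
        print(msg.hex(" "))
```

Everything a keyboard does maps neatly onto those few bytes; the breath, bowing, and vocal nuances Lanier cares about do not.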

By analogy, a blog presents a certain model of discourse that is by necessity more simplistic than face-to-face interaction. You can add your comments to this post, and most blogging software will let you do that anonymously, which encourages people to say things that they would never have said to an author’s face. It may also make it difficult for me as an author to fully understand your point, as I can’t hear your vocal inflections, nor see your body language.

I agree with Lanier that we must remember that human discourse, music, and personhood are far more complex than our digital models for them, but I’m not sure I agree with his concerns that we are somehow forgetting that these are only models. I’m sure there are those who have difficulty understanding the difference between someone’s Facebook profile and the real person, but let’s consider his main example: MIDI. In the 1980s, one of my favorite sax players, the late Michael Brecker, eagerly embraced the electronic wind instrument (EWI), which was a breath-activated MIDI controller. He made some amazingly creative music with that device, but he never abandoned his acoustic saxophone. He approached the EWI as a new kind of instrument, with unique characteristics and potentialities. He explored those fully, but also never forgot how to create passionate, expressive, and heart-wrenching music on his saxophone.

Perhaps it is up to the artists to keep reminding the public that our digital models are only just that: new models that augment, not replace, the phenomena they are intended to represent.

To end, here’s an example of Michael Brecker playing the EWI: