I recently finished Jaron Lanier’s book You Are Not a Gadget: A Manifesto. I have to admit that the book has grown on me a bit since I started reading it. At first, the book really frustrated me, as Lanier is not what you would call an analytical thinker–that is, he doesn’t proceed from one point to the next, linearly building an argument to prove a thesis. Rather, he tells stories and discusses networks of ideas as he swirls around a general topic he wishes to explore. That kind of thinking can often produce highly creative insights, but it is difficult to summarize in a short review (which, I’m sure he would say, is precisely the problem with blogging!). I often liken his style of thinking to watching a pointillistic painting in progress: you watch the artist make a point here, a point there, but it’s not until the picture is finished that you can step back and understand it as a whole.
Section three of the book covers Lanier’s last big complaint about web 2.0 systems and the digital culture that surrounds them: despite their ability to harness the creativity of the hive mind, they can’t seem to produce anything truly innovative. Lanier argues that for the last two decades, digital culture denizens have simply rehashed old ideas, applying them to new contexts for sure, but without any substantial improvement.
He offers three chief examples. First, he discusses the open source operating system Linux, describing it as simply a port of a messy, difficult-to-use, 40-year-old operating system to the Intel PC. Despite the immense increase in processing power since the 1970s, Linux uses essentially the same design, and offers pretty much the same features as the early UNIX variants. Linux lovers usually say that there’s no need to improve on good design, but Lanier thinks that this lack of improvement is due to the “crowd” not having the same level of creative potential that singular people do.
I have used Linux myself, and have done research into the history and sociology of open-source software, and much of what Lanier says rings true. Linux is most certainly reliable and powerful, but it really is the kind of mess that only a balding man with a long beard and Birkenstocks could love. Much of the innovation in Linux these days is actually funded by commercial corporations that either sell related services or rely on it to run their core business. Successful open source projects also tend to have strong leaders who set the agenda and overall design for the product.
Lanier’s second example is his familiar whipping boy, Wikipedia. There is of course nothing new about the idea of an encyclopedia, and although Wikipedia fundamentally changed the notion of authorship and the scope of entries, Lanier thinks that these contributions have not really improved on the earlier forms, much less created a radically new kind of knowledge source. He summarizes his disappointment with this rather scathing passage:
Let’s suppose that back in the 1980s I had said, “In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX!” It would have sounded utterly pathetic (121-122).
Later in the book, Lanier asks why the open source community has so far only managed to produce nicely-polished versions of antiques, while commercial industry has continued to produce game-changing innovations like the iPhone and iPad. His conclusion: people are creative, not crowds.
His third example is pop music, and the claim he makes here is perhaps his most provocative and certainly the most suspect. Lanier asserts that pop music has been essentially frozen in time over the last two decades, and that it would be difficult, if not impossible, to place a recent but unfamiliar pop song in its proper decade. Why? Because pop music is in a posture of nostalgia and dominated by inflexible digital models like MIDI. He also implies that homogenizing web 2.0 systems are dulling consumers’ appetites for new and creative expressions.
Lanier is most likely overstating his case in this section; one could probably find counterexamples that defy his general claims, but his underlying thesis–that people are creative, not crowds–is an intriguing one. It is a restatement of the old adage “too many cooks spoil the broth” and the joke that “a camel is a horse designed by a committee” (which is really unfair to camels). It may be true that the collective knowledge of the crowd is more reliable than the claims of any one person, but when that same logic is applied to creativity, the results tend to be conservative, nostalgic, or just plain messy.
Lanier ends the book with two sections on how he thinks digital technologies could be developed and used in ways that are more honoring of, and beneficial to, human beings. He focuses mostly on modeling and understanding human cognition, a field in which he currently works. He offers a different computational metaphor for the human brain, one based on incremental evolution, and explores the possibilities of using virtual reality to communicate in a fluid stream of concrete images.
On the whole, I recommend reading the book if you are interested in the issues surrounding web 2.0 information systems. But don’t expect a linear argument–instead, prepare yourself for a journey through ideas with a technologist who has been at the forefront of innovation for at least three decades.
I’ll leave you with a video he mentions in the closing chapter of the book. Watch it closely; the octopus being filmed can literally change the color and texture of its skin to match its environment, morphing itself much as creatures do in science fiction films. Will we figure out how to introduce this trait into humans with genetic engineering?