Category Archives: Reflections

The Social Meaning of Technology

Sometime in the early 1980s, Kodak began using a sleek new voice messaging service that they called KMX, short for Kodak Message Exchange. It was pretty cool for the time; you could dial in from almost anywhere via a toll-free number, authenticate with a mailbox number and passcode, and exchange asynchronous voice messages with other employees. Although voicemail systems are completely normal to us now, most people at the time had never heard of such a thing. Home answering machines were just becoming popular, but the idea of dialing into a centralized system so that you could send voice messages to individuals and groups was still somewhat revolutionary.

As I’ve noted in earlier posts, my father worked for Kodak for his entire career. By the time they adopted KMX, my father was an executive who spent most of his time coordinating his sales and marketing force, so he spent a lot of time, both at work and at home, on KMX. Most evenings after dinner, he would go up to his home office, dial into the system, listen to his new messages, and leave his responses. He could easily spend a few hours doing that, which of course meant that his colleagues had to spend a few more hours listening to the messages he sent them, replying to his questions, and so on, and so on. Today, we often complain that the ease of email has created a torrent of unnecessary messages, but at least one can visually scan email text; imagine if you had to listen to every rambling voice message, in real time, happily narrated by the sender!

By the late 1980s, my father also had a computer on his desk at work that was no doubt hooked into the company’s new email system, but I don’t think he ever turned it on, nor did he ever learn to type with any kind of proficiency (he now has a laptop, but my mother is the one who types the emails). I once visited his office around that time and noticed a thick layer of dust covering his beautiful IBM PS/2, which seemed like an absolute travesty to me. But my father was of an earlier generation of executives, a generation that came of age with dictaphones and secretaries who would type his recorded messages onto office memo sheets. He was much more comfortable using a system like KMX than email, as it was similar to what he already knew. KMX seemed like a big dictaphone in the sky; typing messages into a computer was a secretary’s job.

I tell this story to highlight that we often overlay complex social meanings upon new technologies that go far beyond their mere function. If we look only at the function of some new system, such as voicemail or email, we often miss the ways in which the adopting culture struggles to make sense of the new technology in terms of what they already know and do. The meanings we now ascribe to these technologies are often subtly different from the way people thought about them when they were first introduced. Our current meanings are the result of a dynamic interplay between the adopting culture’s attempts to fit the new technology into their existing categorizations and traditions, and the ways using that new technology alters their thoughts and perceptions, challenging those existing assumptions, categorizations, and rules.

America Calling

This phenomenon becomes more evident when we look at detailed historical case studies of technological adoption. Over the Christmas break, I got a chance to read one such account, Claude Fischer’s book America Calling: The Social History of the Telephone to 1940. I had read bits and pieces of it before, but never had the chance to read it all the way through, and I’m glad I did. Fischer’s account is fascinating and enlightening.

Fischer notes that the first generation of Bell executives came from the telegraph industry, so they tended to think of the telephone as a new kind of audible telegraph: a serious tool for serious (meaning “male”) business use. Bell’s designs and marketing reflected this assumption, and their sales efforts focused mostly on male urban professionals, who often saw the telephone as a convenient replacement for messenger boys.

Although Bell marketed the telephone as an urban business device, it was nevertheless eagerly adopted by rural farmers, especially the farm wives who saw the telephone as a very welcome tool for social interaction. Fischer recounts stories of farmers setting up their own exchanges and lines, often piggy-backing on their existing barbed wire fences, so that they could communicate with friends and family. Bell actively discouraged not only these private exchanges, but also the social use of the telephone, warning women to not tie up the lines with “idle gossip.”

The various companies that provided telephone service did eventually accept and then encourage this more social use of the telephone, but Fischer argues that this did not happen until a new generation of executives had come of age, a generation that came from other industries where sociality was the norm. The first generation of executives had been too conditioned by the dynamics of the telegraph industry, and were thus unable to see the ways in which consumers were transforming the social meaning of their new device.

If we accept this notion that the social meaning of a new technology is dynamically worked out over time, then we should also expect that something similar will occur with today’s mobile phones and social media. How people 20 or 40 years from now will think of these may end up being quite different from the way we think of them now, primarily because they will have grown up in a world where these devices are not something new. In some ways we have already seen a shift in the meaning and usage of the mobile phone: we now use this device to send asynchronous text messages far more often than we make synchronous voice calls. Today’s “mobile phone” is really a misnomer; we are already starting to think of these devices more like pocket-sized computers than telephones.

Bringing Order out of Chaos

Summer has finally arrived in Seattle. We just finished a very typical cycle that we like to call “Juneuary,” when the weather feels much more like January than it does June. It happens pretty much every year, and every year Seattleites moan and complain about it. We often get a nice stretch of weather in late April or early May, just to tease us, and then it goes back to being cold and rainy for another six to eight weeks. But then something magical happens; sometime during the first or second week of July, we go from 50 degrees and raining to 70–80 degrees and sunny, and it generally stays that way through the first part of September. It’s as if someone just flips a switch, and Seattle becomes one of the most glorious places to be in the continental United States.

This is also the time when Seattleites return to their gardening in earnest. The soaking rains of June followed by a few days of warmth and sun seem to turbo-charge the growth of every kind of plant, including those that we’d rather not have: the dandelions, clover, and moss that thwart our attempts at a decent lawn; the insidious vines that creep over from the neighbor’s yard and wind their way around everything; and all manner of quick-growing weeds that seem to spring up from nowhere and fill the planter beds, crowding out the flowers and bushes that we so meticulously planted the year before.

I often joke that our yard is very “intertextual,” which is a nice way of saying that it’s really a complete mess. I was out in the garden yesterday, mowing and trying my best to bring some order to the chaos that is our yard. I’ve never been a talented gardener, so my efforts are mostly on the macro level: mowing down the lawns, pruning back the large bushes, ripping out clusters of weeds and dead plants, or digging up sections that are beyond the point of any kind of surgical redemption. I often think that I am really just holding back nature’s unending and rather effective efforts to reclaim our yard and house for the wilderness. What little pastoral tranquility we have achieved is the result of a constant struggle to bring order out of chaos.

Of course, gardeners are not the only people who bring order out of chaos. In many ways, this is what all artists do too; they creatively engage with unformed materials in order to bring about new forms of order. Gardeners create an ordered nature out of wilderness. Sculptors release enticing forms from the solid blocks of material that surround them. Potters caress beautiful and useful shapes out of formless clay. Dancers bring purpose and structure to otherwise random movement. And musical composers stitch together ordered melodies, harmonies, and rhythms from a cacophony of possible sounds.

It may sound odd, but I think engineers do this as well. This shouldn’t be all that surprising though, as engineering and art are far more connected than we typically assume. The term ‘artisan’ points towards this connection; we don’t use it all that much anymore, but it refers to someone who is skilled at making things that are both useful and beautiful. Engineering, and especially computer programming, is really an artisanal craft. It requires a creative engagement with the world, an engagement that seeks to bring order out of disorder.

To be a bit more accurate, artists and artisans participate in the ongoing, redemptive work of God to bring about order in creation. Now, I’m not a trained theologian, nor do I play one on TV, so you’ll have to excuse me if I inadvertently start spouting heresy here. But it seems to me that much of what God does in the world involves bringing about order out of chaos, or restoring that order whenever it begins to disintegrate. We as artists and engineers get to participate in that redeeming and sustaining action, and we do so through our creativity.

In his book Voicing Creation’s Praise, Jeremy Begbie argues that “human creativity is supremely about sharing through the Spirit in the creative purpose of the Father as he draws all things to himself through the Son” (179). He then goes on to describe how we get to participate in this work:

There needs to be an interaction with creation, a development, a bringing forth of new forms of order out of what we are given at the hand of the Creator. And there will be a redeeming of disorder, mirroring God’s redeeming work in Christ, a renewal of that which has been spoiled, a re-ordering of what is distorted. This redeeming activity will entail a penetration of the disorder of the world—human and non-human, just as the Son of God penetrated our twisted and warped existence. It will also entail judgement; an unmasking of disorder, a denunciation of that which disfigures the world, as at Golgotha. There will be a transformation, the forging of a new order out of the ugliness of disorder, as in Christ’s resurrection (179).

How do you participate in the redemption of disorder, the bringing of order out of chaos? What would the “unmasking of disorder” and the “denunciation of that which disfigures the world” look like in your vocation?

Becoming a Christian Engineer

In 1991, I was a fresh-faced, fairly naive information systems major who was about to graduate from college. A few months before the end of school, an alumnus who worked for Microsoft came to our seminar and showed us a video of a speech Bill Gates had made the year before at Comdex. The speech was entitled “Information at Your Fingertips” and it was Bill’s first attempt at articulating a vision for the future of the PC industry, a future where everyone would have instant and easy access to whatever information they could ever need or want (he gave another, better-known version of the speech in 1995). Watching it today, one can’t help but smile at Bill’s enormous glasses, bad haircut, and cheesy delivery, but at the time, his vision looked incredibly cool to me. I knew then that I desperately wanted to be a part of making it happen.

I jumped into the software industry shortly after graduation, and spent nearly a decade designing, building, and managing software that could deliver information to people’s fingertips. Although I had studied information systems, I did so at a small, integrative liberal arts college, so most of what I learned about the practice of software engineering was actually acquired on the job. I learned C, then C++, and a smattering of other higher-level languages. I became adept at relational databases and SQL. I read books on algorithms, object-oriented theory, design patterns, human-computer interaction, and obscure programming tricks. I learned to evaluate the efficiency of everything I did, to seek the optimal solution. I read Dilbert religiously. I watched a lot of sci-fi. I became an engineer.

As I acquired the technical skills of software programming, I also took on some of the more annoying behaviors that are often characteristic of engineers. I became quite arrogant, assuming that my computer skills were evidence of a broader intellect that enabled me to have the correct opinion on just about anything. I became easily frustrated when people chose what I deemed to be a suboptimal course of action. I figured that I was capable of solving just about any problem given the right set of tools and techniques. And by “any problem,” I meant any problem: automating sales reports was really just a special case of solving world hunger, homelessness, and the troubled Middle East. All that was needed, I naively assumed, was a bit of rational decision making, supported by better computer systems that could catalog and deliver the right information at the right time.

After a few years, however, I started to notice that with every set of problems we solved, a whole new set of problems seemed to emerge. We would start every project with the greatest ambitions and expectations, but by the end we were already starting to see its shortcomings and thinking “oh well, we’ll fix that in the next version” (and we always assumed there would be a “next version,” even though our customers would have probably preferred us to just fix the problems in the existing one). Throughout the 1990s, we did automate scores of routine tasks, and developed tools that could catalog and retrieve information in ways similar to Bill’s vision, but our greatest social problems still seemed as intractable as ever. In some ways, we may have actually made them worse.

By the late 1990s, I was starting to get pretty cynical about the software industry in particular, and technology in general, so one of my friends suggested that I read Neil Postman’s book Technopoly. It was just what I needed. I can still remember how the following passage completely stopped me in my tracks:

You need only ask yourself, What is the problem in the Middle East, or South Africa, or Northern Ireland? Is it lack of information that keeps these conflicts at fever pitch? Is it lack of information about how to grow food that keeps millions at starvation levels? Is it lack of information that brings soaring crime rates and physical decay to our cities? Is it lack of information that leads to high divorce rates and keeps the beds of mental institutions filled to overflowing? (60)

I stayed in the software industry for a few more years, but reading Technopoly eroded my faith in modern technology’s ability to solve our larger social problems. I channeled my inner grumpy old man, and started to wonder if modern technology was actually more a cause of our social ills than a solution to them. I read Thoreau and pined for the simpler life. We got rid of our TV and spent more time reading. We bought a dining table made from reclaimed factory floor boards. We replaced the overhead electric light with a candelabra that we diligently lit each night. I exchanged my power tools for manual ones. I replaced my GoreTex with wool. I bought a push mower. I became a Romantic.

Well, sort of. I’m a city-boy at heart, and I never really learned how to appreciate poetry, so I was never quite the card-carrying Romantic. Still, I became much more of a techno-pessimist and eagerly read all the prominent Christian critics of modern technology. I also began to wonder whether one could really be both an engineer and a sincere Christian. If, as Ellul and Borgmann claimed, industrialists and engineers were primarily responsible for the modern mindset, including all the social ills that it led to, how could a sincere Christian continue to do that kind of work?

Shortly thereafter, I left software to go back to graduate school, hoping to deepen my understanding of the ways in which modern technology had influenced our culture, and determine if my Christian and my engineering selves could really co-exist. I had never been much of a historian (business and computer science are perhaps some of the most ahistorical fields there are), but the critics I most admired seemed to be well-versed in the history of technology, so I thought I should pursue that as well. It turned out to be a good decision, but not for the reasons I originally thought.

As I began to study the history and sociology of technology, I discovered that most critics of technology, especially the ones who write for a popular audience, rely on a theory that is no longer supported by most historians. That theory, commonly known as “technological determinism,” posits that technologies have a kind of one-way, deterministic “impact” on any society that adopts them. The stronger forms of this theory also hold that technological innovations advance according to an internal logic that makes technological progression inevitable and unstoppable.

Although technological determinism was the dominant historical theory for the first half of the 20th century, most current historians consider it to be only half right. Technologies most certainly change the societies that adopt them, but those changes are rarely, if ever, deterministic. Instead, detailed historical cases show that consumers play very active roles in shaping our understanding of what a new device is and is good for. In some cases, they also instigate a physical or functional reshaping of the new device as they seek to make it fit better into their lives (for example, the Kosher mobile phone).

This discovery opened up the possibility that I, as a Christian who was also passionate about technology, could actively engage in the reshaping and redeeming of these new devices. When we think like technological determinists, we are left with a fairly bleak choice: adopt the new device and suffer the inevitable consequences, or completely reject it and hope we can convince others to do so as well. As Sherry Turkle has reminded us, this is the language of addiction—it’s similar to the way an addict thinks about his or her drugs. But when we realize that both engineers and consumers play active roles in the shaping of new technologies, a new possibility arises: the opportunity for a participatory redemption.

This realization also helped me see how I might reintegrate my Christian and engineering selves. If technologies did not have deterministic impacts and did not advance entirely according to their own logic, then it was dreadfully important for more Christians to be actively involved in not only the engineering of new devices and systems, but also their early adoption. If Christians aren’t there to inject their own values into the design, production, marketing, and adoption of new technologies, we really have no excuse if we don’t like how things turn out. Blaming deterministic outcomes just obscures what is really a lack of engagement.

I also began to realize that my Romantic reaction was just as short-sighted as the techno-optimism of my youth. It was certainly good to question the purported benefits of modern technology, and perhaps reject a few things that were really more of a distraction than a help, but to deny the flourishing I felt when designing and building software was to deny an important part of who I was made to be. Not all of us are made to be farmers or poets. Some of us are made to be engineers and artisans.

Are you a Christian involved in some kind of engineering practice? If so, how do you integrate your faith and your work? What makes a Christian engineer different from a secular one?

Sherlock, or Why Engineers Need to be Involved in the Christian Commentary on Technology

A while back, my wife and I were trawling the streaming options on Netflix when we came across what looked to be an interesting setting of Sir Arthur Conan Doyle’s Sherlock Holmes stories. This recent BBC series is simply titled “Sherlock,” and unlike the other versions we’ve seen, which had been set in Doyle’s original context of late 19th and early 20th century England, these episodes are set in the present day. Sherlock still solves perplexing crimes with his amazing powers of deduction, but now he uses a mobile phone instead of his usual network of street-savvy children. Watson still serves as his assistant, but he now reports their adventures via a blog instead of the newspaper.

The series is really fantastic, but what I find most fascinating about it is the way in which the writers had to sift and separate which elements of the original characters and stories were truly essential, and which were merely accidental, contextual, and contingent. In other words, they had to extract and maintain what made Sherlock truly Sherlock; the rest they could then update and play with to better fit our current context. Writers are, of course, the best equipped to do this kind of thing with stories, as they have the skills and sensitivities necessary to analyze the various components and ascertain which elements must remain, and which could be different.

In a similar way, engineers are the best equipped to do this same kind of work with technology. Engineers are trained to look inside the “black box” of a given device or system and separate which features are absolutely necessary to its function, and which are the products of relatively arbitrary decisions made by the original designers. In other words, engineers are uniquely equipped to look deep inside a given technology and highlight the aspects that could be changed without sacrificing the device’s core function.

This, I think, is one of the reasons why engineers need to get involved in the Christian commentary on technology. Too often, technological critics treat the targets of their ire as black boxes, failing to separate the things that are essential to the way something works from those things that could easily be modified and reshaped. In this kind of analysis, one is often left with the impression that the entire device must be resisted if any of its present behaviors are found to be undesirable. But if those undesirable behaviors are not really essential to the way the device functions, a new possibility emerges: we can domesticate the device by altering those accidental behaviors so that they better fit with our existing social values.

Let me try to make this more concrete with an example. In his book, The Shallows, Nicholas Carr argues that hypertext is inherently more difficult to read than traditional linear text because each hyperlink requires the extra cognitive task of deciding whether to follow the link or not (126-129). He supports this argument by citing a number of studies where researchers asked one group of students to read a story in a traditional printed form, and another group to read the same story decorated with hyperlinks that when clicked, took them to different parts of the narrative. Those who read the hyperlinked version tended to score lower on comprehension tests administered after reading, and several subjects complained that the story was hard to follow. Conclusion: hypertext is inherently distracting and harder to read.

I have a lot of sympathy for this conclusion, as I too have experienced my fair share of badly-designed hypertext that I found frustrating to read. But notice the way that Carr is treating “hypertext” as a black box. There is no discussion here of how the particular text was designed: how many links there were, whether the links took the reader to something related or helpful versus something tangential, and how the links themselves appeared and behaved on the screen. All of these things are actually quite flexible, and can be altered by the individual designer without losing the essential feature of hypertext. In order for hypertext to be hypertext it must contain links, but as any web developer knows, the design of those links can make an enormous difference in how effective the text is.

In the early days of the World Wide Web, developers actually had very little control over how hyperlinks were formatted on screen. Web browsers almost universally rendered them in bright blue, heavily underlined text, which made them stand out from the other text on the page (sadly, this is also the style used by this WordPress template, and writing this post has made me realize I need to change that). This kind of styling made the links not only highly noticeable, but also visually distracting, resulting in the kind of extra cognitive load that Carr describes. But starting in the mid-1990s, browsers began to support features that enable page developers to control the visual appearance of hyperlinks, allowing one to style links in more subtle and less visually distracting ways. One can even make links look very similar, or even identical, to the surrounding text, but then become more noticeable when the reader hovers the mouse pointer over the link. This sort of styling allows readers to generally ignore the links until they decide to interact with them. Browsers also added scripting features that have further enabled developers to alter the behavior of an activated link—I’ve seen several sites that display a definition for the word clicked upon in a small floating panel in the same page, so that the reader does not navigate away and lose context.
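For readers who haven’t done web development, the kind of subtle link styling I’m describing takes only a few lines of CSS. This is just a minimal sketch of the general approach, not the styling any particular site uses, and the color values here are arbitrary choices of mine:

```css
/* Links rest quietly: same color as the body text, no underline,
   just a faint dotted rule as a cue that something is clickable. */
a {
  color: inherit;
  text-decoration: none;
  border-bottom: 1px dotted #999;
}

/* The link reveals itself only when the reader chooses to
   interact with it by hovering the mouse pointer over it. */
a:hover {
  color: #0645ad;
  border-bottom-style: solid;
}
```

The dotted underline is itself a design judgment: one could drop it entirely to make links fully identical to the surrounding text, but then the reader has no way to discover them without sweeping the mouse across the page.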

The structure of a hypertext—how many links are used and what those links connect to—also makes a significant difference in how one experiences the content. Excessive use of links, or links that take the reader to seemingly unrelated pages, commonly lead to confusion and lack of comprehension. In the early 1990s, page designs tended to use hyperlinks like Visual Basic developers used 3D effects when they were first introduced—far too often and without consideration of whether the effect was actually improving usability or just creating unnecessary visual distraction. A more judicious use of subtly-styled links that connect to truly useful and related content would no doubt result in hypertexts that would fare better in the kinds of studies that Carr refers to.

After looking through Carr’s footnotes and doing some searching (which, I must say, would have been much easier had I been able to click on the footnote as a hyperlink, and then click on his citation to view the original paper), I found some of the studies he referred to, and as I suspected, their results were actually a bit more nuanced than what he portrays in his book. Although the stories the researchers tested were harder to read in hypertext than traditional linear form, the researchers also noted “Hypertexts that were structured to capitalize on the inherent organization of the domain (usually hierarchical structures for information content) often resulted in better comprehension, memory, and navigation” (DeStefano & LeFevre 2007, 1636). Extra markers that indicated the kind of content a given hyperlink would lead to also improved navigation and learning. Sadly, the researchers did not explore whether more visually-subtle link styles decreased distraction and improved comprehension, but one would assume that these kinds of links would require less cognitive load than highly-noticeable ones.

My point is really just this: when we critique new technologies, we need to distinguish between the elements that are truly essential to their functions and those that are more accidental, contextual, and contingent. In many cases, the latter can easily be changed so that the devices fit better into our lives. Engineers are well-equipped to make these kinds of distinctions, which is why, I think, more engineers need to get involved in the Christian commentary on technology. Additionally, if we fail to make these kinds of distinctions, those who do understand these technologies will no doubt find our critiques to be short-sighted, and therefore dismissible.

If you’re an engineer and you’re now convinced that you’d like to get involved in the Christian commentary on technology, there is an excellent opportunity to do so coming up very soon: The Digital Society Conference, which will be held June 22-23 on the Seattle Pacific University campus. You can read more about our motivations in my blog post about the conference, and get more details and register on the conference web site. Hope to see you there!

Is Technological Determinism Making Us Stupid?

In a recent interview I did with the Figure/Ground project, the interviewer asked me what I thought of Stephen Marche’s recent article in The Atlantic entitled “Is Facebook Making Us Lonely?” I had read the article when it first ran, so I replied that if you read it closely, this article doesn’t really argue for the position implied in the title and abstract. Although Marche starts with the assumption that Facebook is making people lonely, he ends up articulating a much more nuanced position by the end. After I explained what I meant by that, I concluded by saying, “the better question to ask is why are these kinds of articles so popular? Why are we seeing such a sudden rash of articles entitled ‘is pick-your-new-technology making us stupid/narcissistic/lonely/shallow/etc.?’”

Thankfully, the interviewer didn’t ask me to answer my own question. If he had, I’m not sure I could have given him a good answer at the time. These kinds of articles are, of course, nothing terribly new. I remember articles from my youth that asked if calculators were making us lazy, or if Sony Walkmans were making us socially isolated and possibly deaf. A trip through the newspaper archives would no doubt reveal similar articles surrounding the mass-adoption of just about any new technological device, especially those since the 1960s.

Instead of trying to engage the specific questions that these articles pose, I think it might be more interesting to ask, why are these authors framing their questions in this sort of yes/no, pro/con, good/bad way? And why does framing their questions in that way seem to attract a large number of readers and secondary commentary?

The economically-minded answer would probably note that these kinds of headlines are more attention-grabbing, and that the ultimate goal of any publication funded by advertising is to grab attention. I wouldn’t doubt that this is a contributing factor, and I’m happy that at least in the case of Marche’s article, he nevertheless finds a more nuanced position.

But I also wonder if technological determinism has seeped so far into the popular collective consciousness that it is difficult for journalists and the public to think any other way about technology and society. This kind of framing tends to betray an underlying assumption that technology “impacts” society in a kind of one-way, deterministic relationship. Authors may debate whether those impacts are good or bad, but they tend to assume that those impacts will always be inevitable, deterministic, and irreversible.

In the introduction to the classic book Does Technology Drive History?, Merritt Roe Smith argues that Americans in particular have always been attracted to this way of thinking because our national identity has always been wrapped up with technology and the ideology of progress. Our greatest heroes have been inventors and industrialists, not artists or humanitarians, and we commonly attribute our current global hegemony to our technological prowess.

But Americans have also become more willing since the 1960s to question the supposed benefits of new innovations, and to inquire about the often undisclosed costs. Nevertheless, this seems to happen only after the innovation becomes mass-adopted. When Google first appeared on the scene, journalists praised it for its clean look, efficiency, and uncanny ability to find what it was you were really looking for. We rooted for them as the up-and-coming underdog, and we rejoiced in their algorithms’ abilities to bring some kind of order to the ever-growing morass of information on the web. But once it became so ubiquitous that it transmogrified into its own verb, we began to see articles like Nicholas Carr’s “Is Google Making Us Stupid?”

Why do we frame the questions in these ways? And why do articles that use this kind of framing generate such interest and secondary commentary? Do they poke at some deep-seated anxieties that we have about technological change? Let me know what you think.

Update: I just found a fantastic blog post by a social media researcher named Zeynep Tufekci that offers three possible answers:

  1. We actually have become more isolated (in terms of strong ties) during the same period that social media has arisen, so we assume that the latter has caused the former, even though evidence to the contrary is legion.
  2. Online socialization really can’t entirely replace face-to-face interaction, so we also assume that increased use of social networking causes increased feelings of isolation, even though people who are social online are also social offline.
  3. “Just like we convert text (visual) into language in our head (which is all oral in the brain), we need to convert mediated-interaction to that visceral kind of sociality in our brain. And not everyone can do this equally well [a condition she calls ‘cyberasociality’]. And people who are cyberasocial are driving this discussion.”

See her post for more details, including links to primary research that backs up what she is saying.

Facebook Fast

A few posts back, I mentioned that I was giving up Facebook for Lent this year. Now that Lent is over, and I’m back on Facebook, I thought I would reflect a bit on how this limited form of a “digital sabbath” worked out.

At the start, I was concerned that this little experiment of mine might prove to be too difficult, as I really felt that I had become a bit too addicted to Facebook of late. Most of my work right now consists of long-term research, writing, and conference planning projects, so I would often check Facebook whenever I was a little bored, distracted, or just wanting to avoid doing my work. I wondered if I would actually make it until Easter, or if I would just cave part way through.

I have to admit that for the first couple of days, I often found my mouse impulsively shooting up to where the bookmark used to be in my browser window, only to be reminded by its absence of my Lenten fast. This impulse subsided after a few days though, and abstaining from Facebook turned out to be much easier than I thought it would be. I did break the fast once, to self-promote a recently published piece, but other than that, I stayed off until Easter.

So what did I do with all that extra time? Some productive things, but also some unproductive things. On the productive side, I managed to read a number of books and articles I’ve been meaning to read for quite some time, and because I knew that I couldn’t break away and check Facebook when I became distracted, I found that I was better able to follow longer and more complex arguments. I also spent more time going on walks, thinking through problems, praying, and seeking direction. And I even got my sorely-neglected saxophone out of its case and did some practicing, which felt really good.

But to be honest, I also spent quite a lot of time doing things on the web that were simply pale substitutes for checking Facebook. Instead of checking to see who interacted with my latest status update, I routinely checked the page view stats on my blog, hoping to get that same feeling of acceptance and legitimation. Instead of reading and seeing what my friends were up to, I compulsively read news sites, hoping to feel more in touch with what was going on. And instead of sharing interesting articles I came across with my Facebook friends, I tried tweeting them, but I don’t think anyone was listening.

So does Facebook cause me to be more distracted, or is it just a convenient tool for fulfilling my own desire to be distracted? Is it making me shallow and narcissistic, or is it just one of many places where I can feed my existing insecurities?

The answer is probably a bit of both. As I’ve argued before, each of us needs to be aware not only of our own personal vulnerabilities, but also whether the ways in which we are using our technologies are connecting with those vulnerabilities. I could try to blame Facebook for my foibles, but it’s probably more accurate to say that the affordances of Facebook align very well with some of my existing vulnerabilities. If Facebook didn’t exist, I would still have those vulnerabilities, but I also need to recognize that particular ways of using Facebook might be making them worse.

Now that Lent is over and I’m back on Facebook, I’ve been much more conscious of the ways in which it can often hit my vulnerabilities. I’ve decided to limit my usage not just in terms of time, but also in terms of what I am trying to get from it. I’ll still post things that I think others will find interesting, but I’m trying not to care how many “likes” a post gets, or how many comments it might elicit. I still enjoy reading what my friends are doing, but I will try not to compare myself to them and feel inadequate when I don’t measure up. In other words, I don’t simply need to use Facebook less—I need to use it differently.

In a word, I’m domesticating Facebook, altering my usage of it so that it fits better into my life, and aligns better with my stated social values. Instead of knee-jerk reactions that decry how Facebook is ruining our youth, we need to be encouraging each other to do this hard work of self-examination, being honest with ourselves about our personal vulnerabilities and the ways in which the devices and systems we use might be exacerbating them. For some, Facebook might pose little problem, but for others, some changes are probably in order. Let’s get to it.

The Human-Machine Boundary

I recently had this rather strange revelation: my mom is slowly turning into the Bionic Woman. About a month ago she underwent cataract surgery, a procedure that replaces the natural lenses in her eyes with artificial ones. Her natural lenses had developed cataracts, essentially spots where the lens loses its transparency, which resulted in her vision becoming cloudy. The artificial lenses implanted by the surgeon not only corrected the cloudiness, they also corrected her vision for the most part. The new lenses were made with the same corrective curvature as the lenses in the glasses she had worn for most of her life. So she pretty much has bionic eyes now.

Well, OK, maybe they aren’t quite as cool as Steve Austin’s super-telescopic eyes in the Six Million Dollar Man, but they are a step in that direction. In the future, it might be possible for those artificial lenses to have additional features, like zooming, automatic shading in bright environments, or the ability to project text and graphics over the field of vision. When they do develop such things, people might not want to wait until they develop cataracts before they get their lenses replaced. Perhaps it will become just a standard corporeal upgrade, like getting braces to straighten your teeth, or plastic surgery to reshape some part of your body.

Interestingly, the ability to project text and graphics over the field of vision is already being worked on. Researchers at the University of Washington and Aalto University in Finland have developed a prototype contact lens, controlled by a wireless signal, that can project a single pixel into the eye. A single pixel might not sound terribly impressive, but it is an important proof of concept—it will likely be only a matter of time before they develop the techniques necessary to project enough pixels to fill your field of vision with relevant textual and graphical information. Imagine having a person’s name and relevant life details automatically appear next to their head when you meet them; or having the kind of information displayed today on your GPS navigation system’s screen merged with your view of the actual road.

Does this sound frightening to you, or at least a bit disturbing? Perhaps it should. But then again, haven’t we slowly been moving towards this sort of thing for a long time now? After all, what is the difference between wearing eye glasses, and replacing one’s natural eye lenses with artificial ones? In both cases, we are employing a bit of technology to correct someone’s eyesight. The former is worn while the latter is surgically implanted, but is that a difference of kind, or only of degree?

There are of course other kinds of bodily problems that we’ve been addressing for some time with surgically-implanted devices: artificial limbs and joints restore mobility; implants in the ear enable hearing again; stents reopen blocked arteries; pacemakers keep hearts beating at a normal rate; and in extreme cases like Dick Cheney’s, surgeons have implanted mechanical pumps to circulate the blood instead of relying on the patient’s beating heart muscle. In each of these cases, a bit of technology is added to the human body in order to correct a problem or flaw, blurring the boundary between human and machine.

One distinction we could try to draw here is between implanted devices that correct a problem, and those that would enhance what would be considered “normal” human ability. Today’s artificial lenses correct cataracts and myopia, but if those same lenses gained some super-human features, we could say that they enhance normal human eyesight. But even that distinction can be a bit fuzzy. If we were to acknowledge that it is “normal” for human eyesight to get worse with age, then even today’s artificial lenses could be considered an “enhancement.”

Another interesting case is that of Oscar Pistorius, the double-amputee sprinter who has flexible carbon-fiber blades for legs. He was initially disqualified from the 2008 Olympics because the committee felt that his springy artificial legs gave him an unfair advantage over able-bodied athletes. Although this decision was later overturned, it demonstrates the difficulty of drawing a clean distinction between “correction” and “enhancement.”

Of course, I don’t mean to suggest that able-bodied sprinters will soon elect to replace their own human legs with artificial ones, but there is a distinct possibility that in the future, people will choose to receive other kinds of surgical implants that are designed solely to enhance their abilities to super-human levels. Essentially, you might get the chance to become the bionic man or woman…without the need for that tragic test flight or parachute accident.

So if researchers were able to develop these kinds of super-human bodily enhancements, would you be interested? Which kinds of things would you entertain, and which would you rule out? Or would you be opposed to anything beyond corrective devices as a matter of principle?

The Mixed Blessing of Shuffle

I have an embarrassing confession to make: my closet is full of shirts that are all some shade of solid blue or grey. I am a bit neurotic in this way; I have a hard time feeling comfortable wearing a patterned shirt, and I can’t seem to bring myself to buy reds, greens, or anything terribly far away from blue. This becomes most apparent on laundry day, when my stack of t-shirts oscillates within a very narrow spectrum of color, and my wife looks at me with that look of “how many blue shirts do you need? Are you allergic to other colors?”

This is, of course, slightly hypocritical coming from my wife, who owns something like fifty different versions of a black skirt. Whenever we go shopping, she pulls yet another black skirt off the rack, holds it up to her and says “what do you think?” My usual smart-aleck response goes something like “oh look—a black skirt; just like all those other black skirts you already have!” She then responds with “No, this is completely different…see?”

The truth is, we are both stuck in a rut when it comes to clothing. I keep buying solid blue or grey shirts (preferably a nice shade of blue-grey), and she keeps buying black skirts. We go to the store with all the best intentions of branching out into other colors, patterns, and styles, but we invariably keep buying the same outfit, over and over again.

Most of us tend to get stuck in ruts like these, buying the same outfits, cooking the same meals, walking or driving the exact same routes every day (I have a very particular path I take to the market each day, even though I could mix it up and go different ways, encountering different sights and people). It also seems to get worse with age; the older we get, the harder it seems to break out of our established patterns and try something different.

Interestingly, my wife and I have also been noticing lately that organizations tend to get stuck in ruts when it comes to hiring new people, and academic institutions seem to suffer from this quite acutely. They say they want to hire more interdisciplinary scholars, or more women, or more ethnic minorities, but when it comes down to it, they just buy the same outfit over and over again. They opt for the familiar, the person that looks and sounds most like what they are used to. Over time, the rut gets deeper and deeper, and the organization gets more and more entrenched. Eventually it becomes insular, inflexible, and irrelevant.

I’ve been thinking about all of this because I have recently started using the shuffle feature on my iPod. I have to admit that I resisted the whole digital music phenomenon for quite a while. My wife and I have a large stack of compact discs, and the thought of spending hours ripping them into iTunes seemed like far too much work. We were also suspicious of the playback quality, and wondered how iTunes would handle classical and live recordings, where the atomic unit was an entire symphony or album, not a single track.

But over the last few months we have slowly ripped most of our collection, and I bought a cable so that we could play our iPod through our amplified stereo. We have even put together specialized playlists, such as “mellow” music for weeknight dinners when we need to slough off the day’s stress. And then we started using the shuffle feature.

Ordinarily, I tend to avoid shuffle-type features because they are built upon the assumption that the song (or single track) is an isolated and independent entity. This is probably true for most pop and rock music, but really isn’t true for other genres like classical, live concerts, or concept albums. The movements of a symphony are often split into separate tracks on a recording, but they are meant to be played together and in order. They make little sense when separated from their sibling movements or played in a random order. Similarly, live concert recordings have a kind of emotional flow that is carefully planned by the artist, so shuffling the tracks around usually leads to jarring transitions.
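The distinction drawn above suggests an alternative design: shuffle at the level of the work rather than the track. Here is a minimal, hypothetical sketch of such an “album-aware” shuffle (the track dictionaries and their `album`/`title` fields are my own illustrative assumptions, not anything the iPod actually implemented):

```python
import random

def album_aware_shuffle(tracks):
    """Shuffle a playlist at the album level: the order of the albums is
    randomized, but the tracks within each album keep their original
    sequence, so symphony movements still play together and in order."""
    # Group tracks by album, preserving track order within each group.
    albums = {}
    for track in tracks:
        albums.setdefault(track["album"], []).append(track)

    # Shuffle the albums themselves, not the individual tracks.
    groups = list(albums.values())
    random.shuffle(groups)

    # Flatten the shuffled groups back into a single playlist.
    return [track for group in groups for track in group]

playlist = [
    {"album": "Symphony No. 9", "title": "I. Allegro"},
    {"album": "Symphony No. 9", "title": "II. Molto vivace"},
    {"album": "Greatest Hits", "title": "Single A"},
    {"album": "Greatest Hits", "title": "Single B"},
]
shuffled = album_aware_shuffle(playlist)
```

A pop playlist of one-track “albums” would still shuffle freely under this scheme, while a symphony would always arrive intact.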

But as I started experimenting with the shuffle feature on my iPod, I noticed something rather surprising: I started listening to music that I hadn’t listened to in years, and I was really enjoying it. I have several recordings that I bought, listened to once or twice, and then never listened to again. It wasn’t that I disliked the music; it was simply that the music was a little too different from the rut that I had gotten myself into. When I reached for a CD to play, I tended to select one from the same subset of CDs that I always selected from. I kept buying the same outfit. I was stuck in a rut. And the shuffle feature was starting to get me out of it.

My conclusion? Shuffle is a mixed blessing. Sometimes it’s not at all appropriate, but other times, it can act as a helpful mechanism that forces you out of your ruts. Like all technologies, it engenders several kinds of social changes all at once. Some of those changes will be intentional and obvious, while others will be more hidden and unexpected. And all of these changes can be seen as either negative or positive depending on the context.

Now my question to you is, do organizations, and especially academic institutions, need a kind of “shuffle feature” to force them out of their ruts? I’m not suggesting that they select candidates in a random fashion, nor that they purposely hire for difference over competency. What I am suggesting, however, is that these institutions need to think about how they may be seeing very qualified candidates as inadequate simply because those candidates don’t look or sound like the kind of person they are used to hiring. Organizations tend to create an archetype of their ideal candidate, and not surprisingly, the archetype looks a lot like those who are already working there. In other words, they keep buying the same outfit, over and over again, and don’t realize how homogenous and insular they are becoming.

Handwriting and Other Irrelevant Skills

When I was about ten years old, my family was living in a suburb far to the east of the San Francisco Bay area, nestled in the foothills of Mount Diablo. I had just learned that we were moving to Seattle, where my father had recently been transferred by his company, so I was practicing writing the name of my new town. Very carefully, probably with my tongue sticking out between my teeth, I drew the two stems for the double ‘t’, and figuring that it would be easier, I drew one single horizontal cross line through the two stems. My father took one look at it, chastised me for writing improperly, and made me write it 100 times with separate cross bars on the ‘t’s.

His reasoning (which tended to be a bit cloudy on matters like this) was that I would never get a job if I didn’t develop proper handwriting skills. Prospective employers would take one look at my handwritten job application and realize that I was either uneducated, lazy, or probably both.

Of course, my father, speaking to me in 1979, could have never anticipated how little that proper handwriting would matter to my future employers. My first résumés and cover letters were word-processed and laser-printed (cutting edge in 1991!) and my first job as a software developer was at a progressive company where most information was submitted electronically to computerized databases. Keyboarding skills were what counted to my employers, and I quickly learned to type far faster than I could ever write by hand. I have since lost the ability to write in cursive, and my printing is a sorry sight.

I tell this story not to highlight my father’s inability to see the future, but to remind us of the ways in which technological changes can quickly make particular skills that seem all-important today mostly irrelevant by tomorrow. Proper handwriting, once considered the mark of education and cultural refinement, is largely irrelevant to our digital culture. We are still surrounded by words, but those words no longer emanate from the tip of a pen or pencil controlled by a steady and practiced hand.

To be clear, I am not arguing that this was a ‘good’ or a ‘bad’ change—there are authors who argue that handwriting is essential to memory and intellectual development, and others who are ecstatic to see handwriting go the way of so many other skills that we no longer need to learn. One can argue this either way, but the larger point is that proper handwriting is no longer a skill that is necessary to survive and thrive in our society, and very few people would have foreseen that just 30 years ago.

What other skills are we demanding our children and students learn today that will become socially irrelevant within their lifetimes? Will voice-to-text (or thought-to-text) software eventually get to the point where we rarely need to even type anymore? If so, the need to spell correctly might also become unnecessary, as the software will do the spelling for you. What other “critical” skills might soon be made redundant?