Tag Archives: technological determinism

Becoming a Christian Engineer

In 1991, I was a fresh-faced, fairly naive information systems major who was about to graduate from college. A few months before the end of school, an alumnus who worked for Microsoft came to our seminar and showed us a video of a speech Bill Gates had made the year before at Comdex. The speech was entitled “Information at Your Fingertips,” and it was Bill’s first attempt at articulating a vision for the future of the PC industry, a future where everyone would have instant and easy access to whatever information they could ever need or want (he gave another, better-known version of the speech in 1995). Watching it today, one can’t help but smile at Bill’s enormous glasses, bad haircut, and cheesy delivery, but at the time, his vision looked incredibly cool to me. I knew then that I desperately wanted to be a part of making it happen.

I jumped into the software industry shortly after graduation, and spent nearly a decade designing, building, and managing software that could deliver information to people’s fingertips. Although I had studied information systems, I did so at a small, integrative liberal arts college, so most of what I learned about the practice of software engineering was actually acquired on the job. I learned C, then C++, and a smattering of other higher-level languages. I became adept at relational databases and SQL. I read books on algorithms, object-oriented theory, design patterns, human-computer interaction, and obscure programming tricks. I learned to evaluate the efficiency of everything I did, to seek the optimal solution. I read Dilbert religiously. I watched a lot of sci-fi. I became an engineer.

As I acquired the technical skills of software programming, I also took on some of the more annoying behaviors that often characterize engineers. I became quite arrogant, assuming that my computer skills were evidence of a broader intellect that enabled me to hold the correct opinion on just about anything. I became easily frustrated when people chose what I deemed to be a suboptimal course of action. I figured that I was capable of solving just about any problem given the right set of tools and techniques. And by “any problem,” I meant any problem: automating sales reports was really just a special case of solving world hunger, homelessness, and the conflicts of the Middle East. All that was needed, I naively assumed, was a bit of rational decision making, supported by better computer systems that could catalog and deliver the right information at the right time.

After a few years, however, I started to notice that with every set of problems we solved, a whole new set of problems seemed to emerge. We would start every project with the greatest ambitions and expectations, but by the end we were already starting to see its shortcomings and thinking “oh well, we’ll fix that in the next version” (and we always assumed there would be a “next version,” even though our customers would have probably preferred us to just fix the problems in the existing one). Throughout the 1990s, we did automate scores of routine tasks, and developed tools that could catalog and retrieve information in ways similar to Bill’s vision, but our greatest social problems still seemed as intractable as ever. In some ways, we may have actually made them worse.

By the late 1990s, I was starting to get pretty cynical about the software industry in particular, and technology in general, so one of my friends suggested that I read Neil Postman’s book Technopoly. It was just what I needed. I can still remember how the following passage completely stopped me in my tracks:

You need only ask yourself, What is the problem in the Middle East, or South Africa, or Northern Ireland? Is it lack of information that keeps these conflicts at fever pitch? Is it lack of information about how to grow food that keeps millions at starvation levels? Is it lack of information that brings soaring crime rates and physical decay to our cities? Is it lack of information that leads to high divorce rates and keeps the beds of mental institutions filled to overflowing? (60)

I stayed in the software industry for a few more years, but reading Technopoly eroded my faith in modern technology’s ability to solve our larger social problems. I channeled my inner grumpy old man, and started to wonder if modern technology was actually more a cause of our social ills than a solution to them. I read Thoreau and pined for the simpler life. We got rid of our TV and spent more time reading. We bought a dining table made from reclaimed factory floorboards. We replaced the overhead electric light with a candelabra that we diligently lit each night. I exchanged my power tools for manual ones. I replaced my Gore-Tex with wool. I bought a push mower. I became a Romantic.

Well, sort of. I’m a city boy at heart, and I never really learned how to appreciate poetry, so I was never quite a card-carrying Romantic. Still, I became much more of a techno-pessimist and eagerly read all the prominent Christian critics of modern technology. I also began to wonder whether one could really be both an engineer and a sincere Christian. If, as Ellul and Borgmann claimed, industrialists and engineers were primarily responsible for the modern mindset, including all the social ills it led to, how could a sincere Christian continue to do that kind of work?

Shortly thereafter, I left software to go back to graduate school, hoping to deepen my understanding of the ways in which modern technology had influenced our culture, and to determine whether my Christian and engineering selves could really coexist. I had never been much of a historian (business and computer science are perhaps two of the most ahistorical fields there are), but the critics I most admired seemed to be well-versed in the history of technology, so I thought I should pursue that as well. It turned out to be a good decision, but not for the reasons I originally thought.

As I began to study the history and sociology of technology, I discovered that most critics of technology, especially the ones who write for a popular audience, rely on a theory that is no longer supported by most historians. That theory, commonly known as “technological determinism,” posits that technologies have a kind of one-way, deterministic “impact” on any society that adopts them. The stronger forms of this theory also hold that technological innovations advance according to an internal logic that makes technological progression inevitable and unstoppable.

Although technological determinism was the dominant historical theory for the first half of the 20th century, most current historians consider it to be only half right. Technologies most certainly change the societies that adopt them, but those changes are rarely, if ever, deterministic. Instead, detailed historical cases show that consumers play very active roles in shaping our understanding of what a new device is and what it is good for. In some cases, they also instigate a physical or functional reshaping of the new device as they seek to make it fit better into their lives (for example, the kosher mobile phone).

This discovery opened up the possibility that I, as a Christian who was also passionate about technology, could actively engage in the reshaping and redeeming of these new devices. When we think as technological determinists, we are left with a fairly bleak choice: adopt the new device and suffer the inevitable consequences, or reject it completely and hope we can convince others to do the same. As Sherry Turkle has reminded us, this is the language of addiction: it’s similar to the way an addict thinks about his or her drugs. But when we realize that both engineers and consumers play active roles in the shaping of new technologies, a new possibility arises: the opportunity for a participatory redemption.

This realization also helped me see how I might reintegrate my Christian and engineering selves. If technologies did not have deterministic impacts and did not advance entirely according to their own logic, then it was dreadfully important for more Christians to be actively involved in not only the engineering of new devices and systems, but also their early adoption. If Christians aren’t there to inject their own values into the design, production, marketing, and adoption of new technologies, we really have no excuse if we don’t like how things turn out. Blaming deterministic outcomes just obscures what is really a lack of engagement.

I also began to realize that my Romantic reaction was just as short-sighted as the techno-optimism of my youth. It was certainly good to question the purported benefits of modern technology, and perhaps reject a few things that were really more of a distraction than a help, but to deny the flourishing I felt when designing and building software was to deny an important part of who I was made to be. Not all of us are made to be farmers or poets. Some of us are made to be engineers and artisans.

Are you a Christian involved in some kind of engineering practice? If so, how do you integrate your faith and your work? What makes a Christian engineer different from a secular one?


Is Technological Determinism Making Us Stupid?

In a recent interview I did with the Figure/Ground project, the interviewer asked me what I thought of Stephen Marche’s recent article in The Atlantic entitled “Is Facebook Making Us Lonely?” I had read the article when it first ran, so I replied that, read closely, it doesn’t really argue for the position implied by its title and abstract. Although Marche starts from the assumption that Facebook is making people lonely, he articulates a much more nuanced position by the end. After I explained what I meant by that, I concluded by saying, “the better question to ask is why are these kinds of articles so popular? Why are we seeing such a sudden rash of articles entitled ‘is pick-your-new-technology making us stupid/narcissistic/lonely/shallow/etc.?'”

Thankfully, the interviewer didn’t ask me to answer my own question. If he had, I’m not sure I could have given him a good answer at the time. These kinds of articles are, of course, nothing terribly new. I remember articles from my youth that asked if calculators were making us lazy, or if Sony Walkmans were making us socially isolated and possibly deaf. A trip through the newspaper archives would no doubt reveal similar articles surrounding the mass-adoption of just about any new technological device, especially those since the 1960s.

Instead of trying to engage the specific questions that these articles pose, I think it might be more interesting to ask, why are these authors framing their questions in this sort of yes/no, pro/con, good/bad way? And why does framing their questions in that way seem to attract a large number of readers and secondary commentary?

The economically minded answer would probably note that these kinds of headlines are more attention-grabbing, and that the ultimate goal of any publication funded by advertising is to grab attention. I don’t doubt that this is a contributing factor, and I’m glad that, at least in Marche’s case, the article nevertheless arrives at a more nuanced position.

But I also wonder if technological determinism has seeped so far into the popular collective consciousness that it is difficult for journalists and the public to think about technology and society in any other way. This kind of framing betrays an underlying assumption that technology “impacts” society in a one-way, deterministic relationship. Authors may debate whether those impacts are good or bad, but they tend to assume that the impacts themselves are inevitable and irreversible.

In the introduction to the classic book Does Technology Drive History?, Merritt Roe Smith argues that Americans in particular have always been attracted to this way of thinking because our national identity has always been wrapped up with technology and the ideology of progress. Our greatest heroes have been inventors and industrialists, not artists or humanitarians, and we commonly attribute our current global hegemony to our technological prowess.

But Americans have also become more willing since the 1960s to question the supposed benefits of new innovations, and to inquire about their often undisclosed costs. Nevertheless, this seems to happen only after an innovation becomes mass-adopted. When Google first appeared on the scene, journalists praised it for its clean look, its efficiency, and its uncanny ability to find what you were really looking for. We rooted for Google as the up-and-coming underdog, and we rejoiced in its algorithms’ ability to bring some kind of order to the ever-growing morass of information on the web. But once the company became so ubiquitous that its name transmogrified into a verb, we began to see articles like Nicholas Carr’s “Is Google Making Us Stupid?”

Why do we frame the questions in these ways? And why do articles that use this kind of framing generate such interest and secondary commentary? Do they poke at some deep-seated anxieties that we have about technological change? Let me know what you think.

Update: I just found a fantastic blog post by a social media researcher named Zeynep Tufekci that offers three possible answers:

  1. We actually have become more isolated (in terms of strong ties) during the same period that social media has arisen, so we assume that the latter has caused the former, even though evidence to the contrary is legion.
  2. Online socialization really can’t entirely replace face-to-face interaction, so we also assume that increased use of social networking causes increased feelings of isolation, even though people who are social online are also social offline.
  3. “Just like we convert text (visual) into language in our head (which is all oral in the brain), we need to convert mediated-interaction to that visceral kind of sociality in our brain. And not everyone can do this equally well [a condition she calls ‘cyberasociality’]. And people who are cyberasocial are driving this discussion.”

See her post for more details, including links to primary research that backs up what she is saying.

A Map of Typical Positions on Technology and Culture

In this post, I want to step back a bit from historical details in order to do some broad-stroke theory. I want to build a map for you that should help give you some orientation when wading into various writing on the technology and culture relationship. Those of you who study this all the time will probably find this post a bit of a review, and if that’s the case, feel free to skip it. But if you tend to find yourself getting more and more perplexed when reading conflicting perspectives on technology, this post should help you get your bearings.

Let’s start our map by laying out a spectrum on the horizontal axis.

Whenever an author theorizes the technology and culture relationship, that author must deal with one of the most basic questions in the field: in what direction do the influences flow? That is, does technology “impact” culture, does culture shape technology, or do both happen simultaneously? How an author answers this question can be plotted on this spectrum.

At one extreme is the position of technological determinism. People who subscribe to this position believe that technologies impact an adopting culture in a kind of one-way, deterministic relationship. Technologies are seen as powerful, non-neutral forces that carry moral consequences with them and produce deterministic effects. Extreme technological determinists also tend to think of technology as an autonomous force that guides and determines its own development. As one of my professors used to say, a strong technological determinist believes that once someone invents the techniques for radar, it’s really only a matter of time before we get the microwavable burrito.

On the other extreme is the position of social determinism, which philosophers of technology sometimes call instrumentalism. Extreme social determinists see technologies as completely neutral artifacts that can be used for good or for evil depending on the desires of the adopting individual or culture. This kind of position is wonderfully summarized by that well-known motto of the National Rifle Association (NRA): “guns don’t kill people; people kill people.”

I’ve portrayed these positions as extreme ends of a spectrum because it’s important to realize that very few authors subscribe to either of these positions wholeheartedly. Some certainly lean farther to one side or the other, but we should avoid labeling any author as being strictly a technological determinist or a social determinist. Most sit somewhere in between the extremes, which leads us to that position at the center: the social-shaping perspective.

The social-shaping of technology (SST) perspective acknowledges what is obviously true about both of the more extreme positions: technologies certainly do affect an adopting culture in significant ways; but historical cases also show quite clearly that engineers and adopting cultures play important roles in reshaping those technologies to better fit their existing social values. SST sees technology and culture as “mutually constitutive” (MacKenzie & Wajcman 1999), each creating and shaping the other. In other words, “guns don’t kill people, but they sure make it a heck of a lot easier.”

To complete our map, we need to add a vertical dimension to our existing horizontal one:

This vertical axis represents the moral attitude an author takes towards technological change. At one extreme is techno-optimism, the belief that our technologies are making the world a better place. In its most extreme forms, techno-optimism elevates technology to the position of savior, the ultimate tool with which we can save ourselves and create a utopia on earth. This position is excited about the possibilities of new technologies and says “full steam ahead” to any and all technological development.

At the other extreme is techno-pessimism, a position that sees technology not as a savior but as a destroyer. Techno-pessimists think that technology is making the world a worse place, and that it might just end up killing us all (think nuclear holocaust, genetic engineering gone awry, sentient robots that turn against us, etc.). This position tends to pine for the simpler days before industrialization, and is sympathetic towards Romanticism.

As with the other axis, this is a spectrum, and most authors situate themselves somewhere in between the two extremes. At the very middle is a position I’ve called the “double-edged sword.” This position argues that every technological change brings with it a wide array of consequences, some of which can be considered ‘good’ and others ‘bad’, depending on your perspective. The costs and benefits of an innovation are never equally distributed in a given society, so whether you think a given technology is making the world better or worse largely depends on whether you received more of its benefits than of its costs, or vice versa.

Putting it all together, we get a map that looks something like this:

Most critics of technology (Christian or secular) tend to sit somewhere in the lower-left quadrant. They lean towards technological determinism, and they are generally pessimistic about future technological change. Jacques Ellul seems the most pessimistic to me—his book The Technological Society is almost fatalistic. Neil Postman is closer to the double-edged sword position, but he is still overall more pessimistic than optimistic. Marshall McLuhan is an unapologetic technological determinist, but he is far less pessimistic than other Christian critics.

In the upper-left quadrant we find people like Ray Kurzweil, who is extremely excited about the potential for full human-machine integration. His belief in the inevitability of the “singularity” puts him on the technological determinist side, but unlike McLuhan or Ellul, he sees technology as a potential savior of humanity.

At the extreme corner of the upper-right quadrant would be the NRA sentiment I discussed earlier. The Social Construction of Technology (SCOT) position is probably the most social-determinist theory I know of, but it takes a very neutral view on whether technology is making the world better or worse. The Social Shaping of Technology (SST) position appears on the map twice because the first edition of MacKenzie & Wajcman’s book in 1985 was far more social determinist than the second edition in 1999, which took a much more balanced tone.

Interestingly, I don’t yet know of any author who would fit into the lower-right quadrant, probably because those who lean towards social determinism rarely take an overly pessimistic view of technology.
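If you like to think in code, here’s a toy Python sketch that makes the two axes concrete by placing the authors discussed above onto the map as (x, y) coordinates. The quadrant placements follow the descriptions in this post, but the specific numbers are just my rough guesses, not anything these authors would endorse:

```python
# A back-of-the-envelope rendering of the map described above.
# x axis: -1.0 = technological determinism ... +1.0 = social determinism
# y axis: -1.0 = techno-pessimism ............ +1.0 = techno-optimism
# Coordinates are rough guesses; only the quadrants come from the post.

positions = {
    "Ellul": (-0.9, -0.9),         # strong determinist, almost fatalistic
    "Postman": (-0.6, -0.4),       # determinist, near the double-edged sword
    "McLuhan": (-0.9, -0.2),       # unapologetic determinist, less pessimistic
    "Kurzweil": (-0.8, 0.9),       # determinist, technology as savior
    "SCOT": (0.9, 0.0),            # strongly social determinist, morally neutral
    "SST (1985 ed.)": (0.6, 0.0),  # leaned social determinist
    "SST (1999 ed.)": (0.1, 0.0),  # the 'mutually constitutive' middle ground
}

def quadrant(x: float, y: float) -> str:
    """Name the region of the map a given (x, y) position falls into."""
    horiz = "social determinist" if x > 0 else "technological determinist"
    vert = "optimist" if y > 0 else ("pessimist" if y < 0 else "neutral")
    return f"{horiz} / {vert}"

for name, (x, y) in positions.items():
    print(f"{name:>15}: ({x:+.1f}, {y:+.1f})  {quadrant(x, y)}")
```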

Does this help you navigate your way around the various positions you may have encountered? Where would you place your favorite authors on this map?

From the Garden to the City (A Review)


A few weeks ago, my friend and fellow blogger Rosie Perera recommended a book to me with a rather intriguing title: From the Garden to the City: The Redeeming and Corrupting Power of Technology by John Dyer. Let me just say at the outset that this is a great book, and any Christian interested in the topic of technology and culture should read it. As I read through it, I often found myself thinking “dang, this is good. I wish I had written it!” The book is certainly not perfect (what book ever is?), but it is the most articulate, balanced, and nuanced examination of technology written from a Christian perspective that I’ve read so far.

Dyer approaches the topic of technology not only as a practicing software developer, but also as a former youth pastor and seminary-trained theologian who has done quite a bit of reading in “media ecology,” a discipline that studies media as an element of a more complex sociotechnical “ecosystem.” These two sides of his personality allow him to have a much more balanced view of technology, one that can both deconstruct shortsighted critiques of the latest and most feared social media, and acknowledge the ways in which technology is rarely, if ever, neutral.

His stated purpose in the book is to “dismantle the concept of technology, examine it carefully, and then put it back together again” (17). While he does so, he reflects on the Biblical narrative, fitting technology into the four main movements of the Christian story: creation, fall, redemption, and restoration. His hope is that this will help Christians not only to reflect more deeply on the nature of technology, but also to imagine ways in which the negative consequences of particular technologies might be “redeemed” by new creative uses.

Dyer describes a few examples of technological redemption, but the one that stuck with me the most was a story about his former pastor who was diagnosed with stage-four colon cancer. A member of the congregation gave the pastor a beeper, a device usually critiqued as intrusive and community-destroying, and told the other members to call the associated number whenever they prayed for the pastor. As the pastor waited to undergo surgery, and all throughout his recovery, the constantly buzzing beeper was a tangible reminder of the prayers his parishioners were offering up, as well as the care and concern they had for him as a person. This creative repurposing, Dyer argues, redeemed the beeper, transforming it “into something that mediated an entirely different set of values” (99).

In the more philosophical parts of the book, Dyer defines the word ‘technology’ and outlines the typical stances one finds in discussions of how technology and culture interact. He interprets the word fairly broadly, noting its ancient connection with art and creativity, and offering up this concise definition: “the human activity of using tools to transform God’s creation for practical purposes” (65). He includes both physical artifacts and methods (techniques) in the term ‘tool’, but emphasizes that for something to be a tool, it must enable the transformation of creation. Oddly, he goes on to argue that art is not a tool since it exists for “its own sake” (66), but overlooking art’s often purposeful social influence seems strange given his earlier examination of the Greek root téchnē.

In another philosophical chapter, Dyer briefly outlines the stances of technological determinism and instrumentalism. The former sees technology as a separate sphere that “impacts” culture and advances according to its own logic, while the latter sees technology as a neutral tool that we can use for either good or evil (or, as the NRA bumper sticker puts it: “guns don’t kill people; people kill people”). Dyer nicely shows the problems with both of these extreme positions, and guides the reader to a more balanced understanding of the ways in which we both shape our devices and are shaped by them in return.

Despite these philosophical sections, the book is written for a general reader. Dyer’s prose is clear and approachable, and he gently guides the reader through difficult-to-grasp concepts. This book would make an excellent choice for a book group interested in the subject, or for pastors who want to present a more balanced and nuanced view of technology in their sermons. On the whole, I highly recommend it.