Tag Archives: Turkle

Affordances and Vulnerabilities

Have you ever noticed how some people seem to be completely “owned” by a device, like a mobile phone for instance, while other people seem to be able to integrate that same device into their lives in a much healthier way? I have friends who constantly check their phones, even when I am trying to have a conversation with them, and other friends who carry a phone but are happy to ignore text messages and even calls when they are having in-person meetings. This also doesn’t seem to be strictly a product of age. Amongst my nieces, nephews, and students, I see the same phenomenon: some are seemingly addicted to their phones, while others are able to treat them as useful tools that have an appropriate time and place.

In Sherry Turkle’s latest book, which I reviewed in an earlier post, she introduces a pair of concepts that I have found to be very useful in thinking about this phenomenon: technological affordances, and human vulnerabilities.

The term ‘affordances’ actually comes from Donald Norman, the cognitive psychologist who wrote the classic book The Design of Everyday Things (a must-read for anyone involved in designing user interfaces). In that book, he defined affordances as “those fundamental properties that determine just how the thing could possibly be used” (9). Affordances give us clues as to how a device should be used: a flat metal plate on a door suggests pushing, while a vertical handle suggests pulling; a button suggests pushing, while a short rod sticking out at a right angle suggests flipping.

The brilliance of Norman’s book is how he demonstrates these concepts on the completely mundane and often unnoticed things we use every day: doors, faucets, lights, stoves, teapots, etc. Once you read the book, you’ll never be able to look at these items in the same way again. You’ll also start to notice just how badly designed many of these things are. If a door needs a sign that says “push,” it’s a failure of design, not the users.

This same concept of affordances also works with more complicated devices. Just as the design of a door suggests a type of interaction, the design of a mobile phone (and its corresponding service) or a social networking site can also suggest one or more patterns of use. Designers “inscribe” these patterns into the physical artifacts, and systems behind them, through explicit design choices. Marketers then reinforce those by demonstrating particular patterns of use in their ads. Of course, users don’t have to follow these suggestions, and historical case studies are rife with examples of how consumers have adopted new technologies in ways that were contrary to those suggested by the manufacturer (for example, see the book How Users Matter: The Co-Construction of Users and Technology).

Turkle’s second and related concept is that of human vulnerabilities. Each one of us has particular needs, wants, or addictions that make us vulnerable in particular ways. For example, some people have deep seated insecurities that tend to make them vulnerable to anyone or anything that promises to make them feel more accepted and loved. Others struggle with an overwhelming need for interpersonal connection, and are thus vulnerable to anything that promises to satisfy that. Still others have a deep fear of chaos and are thus vulnerable to anything that allows them to exert control and order over their situation.

When the affordances of a device or system align well with a given person’s vulnerabilities, the results will often be unhealthy for that person. For example, someone with a high need for social interaction but a deep-seated fear of intimacy might find Facebook so alluring that it becomes almost addictive. A person with a fear of chaos and a high need for control will eagerly embrace a mobile smartphone and obsessively check email or the web.

The important point to note here is that this combination of affordances and vulnerabilities is personal and particular. There probably are some vulnerabilities that are truly universal to all humans, but most are not. Some people can walk into a casino, have a bit of fun gambling, and walk out without issue, while others will walk into that same casino and quickly fall into an addiction response. Similarly, some people can carry a mobile phone or use Facebook as helpful tools, while others fall into a pattern of use that enslaves them to the device or service. If affordances align with vulnerabilities, there’s a high likelihood that the relationship will be unhealthy, but if not, it may be perfectly fine.

I like these concepts because they offer a more nuanced way of investigating and critiquing new technologies. Too often we see shocking news articles about “on call” teens that imply this will be the fate of all teens who use a mobile phone. Or we hear a technological critic assert that “Facebook is making us shallow and narcissistic,” assuming that everyone is using it in the same way, and with the same results. These kinds of universal statements don’t represent the particular and variable relationships that people have with these systems. They also don’t really help potential users (or their parents) assess whether they will be able to adopt a new device or system in a healthy way or not.

In order to make that assessment, we need to uncover two things: the affordances (suggested, probable, and possible patterns of use) of the devices or systems in question; and our own particular vulnerabilities. The former is achieved by analyzing and deconstructing the design of the new device or system, and the latter is achieved only by reflection, introspection, and a large dose of self-knowledge and honesty. Both of these are hard to do, and the latter can often be painful, but if we truly desire a more healthy relationship with our technologies, we must endure.

Alone Together

When I recently travelled to a memorial service for a close friend, the program, on heavy cream-colored card stock, listed the afternoon’s speakers, told who would play what music, and displayed photographs of my friend as a young woman and in her prime. Several around me used the program’s stiff, protective wings to hide their cell phones as they sent text messages during the service. One of the texting mourners, a woman in her late sixties, came over to chat with me after the service. Matter-of-factly, she offered, ‘I couldn’t stand to sit that long without getting on my phone.’ The point of the service was to take a moment. This woman had been schooled by a technology she’d had for less than a decade to find this close to impossible (loc 5642).

Sherry Turkle’s new book Alone Together: Why We Expect More from Technology and Less from Each Other, is full of stories like this one. One of the reasons that Turkle’s books are so interesting is that she collects and tells the kind of stories that make you as the reader both scowl with judgement and cringe with self-recognition. Texting during a memorial service seems especially distasteful to me, but I know that I have done similar things, attempting to dissociate so that I did not have to be fully present in the place where I was, feeling the anxiety and sorrow that would be appropriate for the moment.

The first half of the book, reviewed in my last post, deals with social robotics, but the second half focuses on social networking technologies: not only the typical examples of Facebook and Myspace, but also mobile telephony, texting, instant messaging, simulations like Second Life, and confessional web sites (which are particularly interesting). For Turkle, social robotics and social networking are part of the same phenomenon; we are trying to use technology to mediate relationships so that we can control, or entirely avoid, their inherent risks. Turkle is concerned that we are trading away real human relationship for something that is shallow and ultimately narcissistic. It gives us the illusion of “being connected,” but we are left feeling alone. Like relational junk food, it satiates our immediate surface desires, but leaves our deeper relational needs malnourished.

Turkle’s critiques of Facebook and Myspace are similar to, but refreshingly different from, those of other authors. For example, Jaron Lanier, who worries that Facebook is causing adolescents to confuse their limited online profile with a fuller understanding of personhood, rarely quotes or cites interviews with real adolescent Facebook users to show that his concerns are genuine and not simply the projections of an older adult. Turkle, however, has spent her academic career talking with children and adolescents about identity formation online, and her extensive quotes show that most adolescents are fully aware that their Facebook profiles are just an avatar, a projection of who they would like to be, constructed for an audience.

Turkle reminds us that adolescents have always used artifacts to play with and project their developing identities. In the 1980s, we would decorate our cars, folders, book covers, and the inside of our locker doors with pictures, the names of cool bands, comics, or anything that would communicate a desired message about who we wanted others to perceive us to be. Today’s generation now does this same thing on Facebook or Myspace, but these new platforms are different in two important ways: they are always available, resulting in many adolescents feeling pressured to constantly perform on them; and those performances are very public and essentially permanent.

But Turkle is also quick to remind us that our use of these technologies is not determined by the systems themselves. Facebook’s wide availability, or the speed of text messaging, may afford constant performances and rapid responses, but it is we who require those patterns of use. This is not a pedantic distinction; to confuse the two is to leave us with a false dichotomy: play along, or leave the game. It keeps us from considering our third option: rewrite the rules.

Turkle notes that this kind of binary choice actually stems from the language of addiction, a language that many critics use when discussing the ills of social networking technologies, but one that is ultimately unhelpful. Turkle explains:

Talking about addiction subverts our best thinking because it suggests that if there are problems, there is only one solution. To combat the addiction, you have to discard the addicting substance. But we are not going to “get rid” of the Internet. We will not go “cold turkey” or forbid cell phones to our children…. The idea of addiction, with its one solution that we know we won’t take, makes us feel hopeless. We have to find a way to live with seductive technology and make it work to our purposes. This is hard and will take work. Simple love of technology is not going to help. Nor is a Luddite impulse (loc 5604).

Of course, those who are truly addicted to social networking technologies should seek help, and may need to discontinue using them, but for most of us, we should be suspicious of both triumphal praise of, and apocalyptic predictions about, these technologies. Finding the middle road towards a more healthy pattern of use will be difficult, but it can be done.

Turkle ends the book with an encouragement that we have not yet locked ourselves into a particular pattern of use:

It is too early to have reached such an impasse. Rather, I believe we have reached a point of inflection, where we can see the costs and start to take action. We will begin with very simple things. Some will seem like just reclaiming good manners. Talk to colleagues down the hall, no cell phones at dinner, on the playground, in the car, or in company. There will be more complicated things: to name only one, nascent efforts to reclaim privacy would be supported across the generations. And compassion is due to those of us–and there are many of us–who are so dependent on our devices that we cannot sit still for a funeral service or a lecture or a play…. Yet, no matter how difficult, it is time to look again toward the virtues of solitude, deliberateness, and living fully in the moment (loc 5647).

I couldn’t agree more.

Sherry Turkle’s Robotic Moment

I was fifteen years old when the film The Terminator was released, and it offered just about everything a fifteen-year-old boy in the 1980s could ever wish for in a film: a mind-bending, time-warping science fiction plot; a massively-powerful, ruthless, motorcycle-riding, robotic bad guy; a clever, heroic, and thoroughly-human good guy; and of course a hot, but resourceful, female love interest. Like most adolescent boys, I was unsure which male character I would rather be; you are supposed to identify with the hero, but there was something kind of alluring about being a robot who could not be hurt, either physically or emotionally.

But the terminator in the first film was pretty frightening. It was so…well…inhuman. Like a runaway computer program, it pursued its goal without concern and without emotion. Although it had adopted a humanoid form, one could not relate to it like a human. It had absolutely no empathy. The terminator was strictly an “it.”

When the second film came out in 1991, it was hailed for its cutting-edge computerized graphic effects, but the graphics were not the only thing that had significantly changed (spoiler alert!). In a clever twist, a terminator with the same humanoid form shows up, but this time it was sent to protect the young future leader from a new, more-advanced model. The young future leader quickly learns to trust the good terminator, and begins to relate to it as if it were a human. In classic Star Trek fashion, the terminator begins to express more human traits, including a form of empathy and self-sacrifice. The film concludes with a tearful farewell scene, where the young future leader cries over the destruction of the good terminator in the same way he would do so over a fallen human friend. The machine had become a kind-of person, a sort-of “thou.”

This shift in human-machine relations is emblematic of what Sherry Turkle discusses in the first section of her new book, Alone Together: Why We Expect More from Technology and Less from Each Other. Turkle is a psychologist by training who has spent the last thirty years or so investigating the way humans relate to technology, especially computerized communication media. In the first section of this book, however, she discusses various kinds of artificial intelligence and robotic toys: Tamagotchis, Furbies, AIBOs, and My Real Babies. She notes that children relate to these sociable toys in different ways than kids of my generation related to our Merlin, Simon, and Speak & Spell devices.

When children of my generation encountered these early “computational objects,” we were challenged to decide what exactly these new things were. Was the Speak & Spell just a noisy new kind of toy, or was it somehow intelligent or even “alive?” After all, it could ask me questions, respond to my answers, and beat me at spelling games, just like my mother could. But it was also a bit like the original terminator; its voice and mannerisms were highly mechanical, and it had a very limited repertoire of interaction. It didn’t take me long to feel no remorse when I turned it off and tossed it aside.

In the 1990s, children began to encounter a decidedly different sort of toy: one that not only seemed to think, but also to move and relate to them like a fellow creature. Despite the fact that Tamagotchis had only a digital manifestation, children felt real remorse when their Tamagotchis died, and would often “bury” them and buy new ones rather than simply reset their current one. When the Furby came out, the creature was given not only a physical manifestation, but also a voice, one that initially spoke “Furbish.” Children were encouraged to teach their Furby English, and amazingly the Furby seemed to respond to the teaching; in actuality, the Furby was pre-programmed to gradually shift to English no matter what happened, but the illusion helped the child bond with the toy in a way that went beyond the typical child-doll relationship. Children who “raised” these new social toys considered them to be “kind-of alive,” something more than a toy, perhaps closer to a pet.

Turkle observed this new classification first-hand through an experiment designed by Freedom Baird, a graduate of the MIT Media Lab. Baird developed a sort of Turing Test for the heart (see previous post), which was designed to determine “under what conditions a creature is deemed alive enough for people to experience an ethical dilemma if it is distressed” (loc 1062). Baird had her participants hold a Barbie doll, a Furby, and a biological gerbil upside-down for as long as their emotions would allow them to do so. None had trouble dangling Barbie upside down, and nearly all released the gerbil as soon as it showed signs of distress. When the participants flipped over the Furby, it began to whine and say that it was scared, causing most to feel guilty and turn it back upright within thirty seconds. The participants, many of whom were adults who fully understood that the Furby was just a robot, found it difficult to torment the toy because its cries made them think of it as a fellow creature.

But is there anything wrong with children, or even adults, relating to their technological devices like fellow creatures? Turkle thinks there is. In the opening of the book, she states her concerns clearly:

These days, insecure in our relationships and anxious about intimacy, we look to technology for ways to be in relationships and protect ourselves from them at the same time…. We bend to the inanimate with new solicitude. We fear the risks and disappointments of relationships with our fellow humans. We expect more from technology and less from each other (loc 153).

Turkle worries that these social robotic toys are only the beginning. What will happen when clever engineers develop “My Real Girlfriend?” Will socially-awkward men prefer the company of a robotic girlfriend that is programmed to assert its will only just-enough to keep up the illusion?

Even if robotic girlfriends never come to pass, Turkle’s quote hints at another technologically-mediated form of relationship with which most of us are already quite familiar: social network media. That is the focus of the second part of her book, and will likewise be the focus of my next post.

For now, here is the ending scene from Terminator 2: