The Man Who Lied to His Laptop: What We Can Learn About Ourselves from Our Machines

by Clifford Nass, Corina Yen

Paperback

$24.00 


Overview

Counterintuitive insights about building successful relationships, based on research into human-computer interaction.

Books like Predictably Irrational and Sway have revolutionized how we view human behavior. Now, Stanford professor Clifford Nass has discovered a set of rules for effective human relationships, drawn from an unlikely source: his study of our interactions with computers.

Based on his decades of research, Nass demonstrates that, although we might deny it, we treat computers and other devices like people: we empathize with them, argue with them, form bonds with them. We even lie to them to protect their feelings.

This fundamental revelation has led to groundbreaking research on how people should behave with one another. Nass's research shows that:
  • Mixing criticism and praise is a wildly ineffective method of evaluation
  • Flattery works, even when the recipient knows it's fake
  • Introverts and extroverts are each best at selling to one of their own
Nass's discoveries provide nothing less than a new blueprint for successful human relationships.

Product Details

ISBN-13: 9781617230042
Publisher: Penguin Publishing Group
Publication date: 06/26/2012
Pages: 256
Product dimensions: 5.50(w) x 8.30(h) x 0.70(d)
Age Range: 18 Years

About the Author

Clifford Nass is the Thomas M. Storke Professor at Stanford University and director of the Communication between Humans and Interactive Media (CHIMe) Lab. He is a popular designer, consultant, and keynote speaker, and is widely quoted by the media on issues such as the impact of multitasking on young minds. He lives in Silicon Valley.

Read an Excerpt

Introduction

Why I Study Computers to Uncover Social Strategies

When you work with people, you can usually tell whether things are going smoothly or are falling apart. It’s much harder to figure out why things are going wrong and how to improve them. People seem too complex for you to consistently make them happier or more cooperative, or to make them see you as more intelligent and persuasive.

Over the past twenty years, I have discovered that the social world is much less complicated than it appears. In fact, interactions between people are governed by simple rules and patterns. These truths aren’t vague generalities, such as advice from our grandparents (“nothing ventured, nothing gained”), pop psychologists (“follow your dreams”), or celebrities (“don’t take no for an answer”). Instead, in this book I present scientifically grounded findings on how to praise and criticize, how to work with different types of people, how to form teams, how to manage emotions, and how to persuade others.

I didn’t set out to discover ways to guide successful human relationships. As a professor in many departments—communication; computer science; education; science, technology, and society; sociology; and symbolic systems—and an industry consultant, I work at the intersection of social science and technology. My research at Stanford University and my collaborations with corporate teams had originally been focused on making computers and other technologies easier, more effective, and more pleasant for people to use. I didn’t know that I would be thrust into the world of successful human relationships until I encountered three peculiar problems: an obnoxious paper clip, a suspicious auditor, and an untrustworthy navigator.

In 1998, Microsoft asked me to provide evidence that it was possible to improve one of the worst software designs in computer history: Clippy, the animated paper clip in Microsoft Office. While I have often been asked by companies to make their interfaces easier to use, I had a real challenge on my hands with Clippy. The mere mention of his name to computer users brought on levels of hatred usually reserved for jilted lovers and mortal enemies. There were “I hate Clippy” Web sites, videos, and T-shirts in numerous languages. One of the first viral videos on the Internet—well before YouTube made posting videos common—depicted a person mangling a live version of Clippy, screaming, “I hate you, you lousy paper clip!”

One might think that the hostility toward Clippy emerged because grown-ups don’t like animated characters. But popular culture demonstrates that adults can indeed have rich relationships with cartoons. For many years, licensing for the animated California Raisins (originally developed as an advertising gimmick by the California Raisin Advisory Board) yielded higher revenues than the actual raisin industry. The campaign’s success in fact helped motivate Microsoft to deploy Clippy in the first place. (Bill Gates envisioned a future of Clippy mugs, T-shirts, and other merchandise.) Similarly, Homer Simpson, Fred Flintstone, and Bugs Bunny all have name recognition and star power equivalent to the most famous human celebrities. What about Clippy, then, aroused such animosity in people?

Around this same time, my second mystery appeared. A market-analysis firm asked me to explain why employees at some companies had started reporting dramatic increases in the approval ratings of all the software applications they were using.

I started my investigation by comparing the newly satisfied users with those who had experienced no change in satisfaction. Strangely, I found that the people in the satisfied and dissatisfied companies were relatively uniform with regard to their industries (banking versus retail), the types of computers being used (PCs versus Macs), the categories of software they worked with (programming versus word processing), and the technical skill levels of their employees (novice versus expert).

I then looked at how the researchers surveyed the companies (how often, by whom, how many times). The only difference I found was that the companies that had started reporting higher approval ratings had changed their procedure for obtaining the evaluation. Formerly, all of the companies had people evaluate software on a separate “evaluation” computer. Some companies later changed that procedure and had their employees evaluate the software on the same computer they normally worked with. Those companies subsequently reported higher approval ratings. Why would people give software higher ratings on one computer as compared to another identical computer?

My third problem concerned the navigation system BMW used in its Five Series car in Germany. BMW represents the pinnacle of German engineering excellence, and at the time its navigation system was arguably well ahead of competing systems in accuracy and functionality. Despite that fact, BMW was forced to recall the product. What was the problem? It turns out that the system had a female voice, and male German drivers refused to take directions from a woman! The service desk received numerous calls from agitated German men that went something like this:

Customer: I can’t use my navigation system.
Something wasn’t right, but the logic seemed impregnable (give or take).


How a Sock Rescued My Research

While these three dilemmas existed in vastly different products, industries, and domains, one critical insight allowed me to address all of them. My epiphany occurred while I was sitting in a hotel room, flipping through television channels. Suddenly, I saw Shari Lewis, the great puppeteer. She caught my attention for three reasons. First, instead of entertaining children, she was on C-SPAN testifying before Congress. Second, she had brought along her sock puppet Lamb Chop (not the first “puppet” to have appeared before Congress). Third, Lamb Chop was testifying in response to a congressman’s question.

In her childlike “Lamb Choppy” voice (very distinct from Lewis’s Bronx accent), Lamb Chop said, “Violence on television is very bad for children. It should be regulated.” The representative then asked, “Do you agree with Lamb Chop, Ms. Lewis?” It took the gallery 1.6 seconds to laugh, the other congressmen 3.5 seconds to laugh, and the congressman who asked the question an excruciating 7.4 seconds to realize the foolishness of his question.

The exchange, while leaving me concerned for the fate of democracy, also struck me as very natural: here was someone with a face and a voice, and here was someone else—albeit a sock—with its own face and voice. Why shouldn’t they be asked for their opinions individually? Perhaps the seemingly absolute line between how we perceive and treat other people and how we perceive and treat things such as puppets was fuzzier than commonly believed.

I had seen that, given the slightest encouragement, people will treat a sock like a person—in socially appropriate ways. I decided to apply this understanding to unraveling the seemingly illogical behaviors toward technology that I had previously observed. I started with the despised Clippy. If you think about people’s interaction with Clippy as a social relationship, how would you assess Clippy’s behavior? Abysmal, that’s how. He is utterly clueless and oblivious to the appropriate ways to treat people. Every time a user typed “Dear . . . ,” Clippy would dutifully propose, “I see you are writing a letter. Would you like some help?”—no matter how many times the user had rejected this offer in the past. Clippy would give unhelpful answers to questions, and when the user rephrased the question, Clippy would give the same unhelpful answers again. No matter how long users worked with Clippy, he never learned their names or preferences. Indeed, Clippy made it clear that he was not at all interested in getting to know them. If you think of Clippy as a person, of course he would evoke hatred and scorn.

To stop Clippy’s annoying habits or to have him learn about his users would have required advanced artificial-intelligence technology, resulting in a great deal of design and development time. To show Microsoft how a small change could make him popular, I needed an easier solution. I searched through the social science literature to find simple tactics that unpopular people use to make friends.

The most powerful strategy I found was to create a scapegoat. I therefore designed a new version of Clippy. After Clippy made a suggestion or answered a question, he would ask, “Was that helpful?” and then present buttons for “yes” and “no.” If the user clicked “no,” Clippy would say, “That gets me really angry! Let’s tell Microsoft how bad their help system is.” He would then pop up an e-mail to be sent to “Manager, Microsoft Support,” with the subject, “Your help system needs work!” After giving the user a couple of minutes to type a complaint, Clippy would say, “C’mon! You can be tougher than that. Let ’em have it!”
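
To make the redesign concrete, here is a minimal sketch of that scapegoat exchange rendered as a console dialogue. The prompts follow the wording in the description above; the function name and the length check that triggers the "be tougher" nudge are illustrative assumptions, not Microsoft's actual Clippy code.

```python
# Hypothetical sketch of the "scapegoat" flow described above, as a console
# dialogue. Prompts follow the text; the function name and the "timid
# complaint" length threshold are invented for illustration only.

def clippy_scapegoat_flow() -> None:
    answer = input("Clippy: Was that helpful? (yes/no) ").strip().lower()
    if answer == "yes":
        return  # the suggestion landed; nothing to repair

    # Redirect the user's frustration toward a third party (the scapegoat),
    # putting Clippy and the user on the same side.
    print("Clippy: That gets me really angry! "
          "Let's tell Microsoft how bad their help system is.")
    print('Drafting e-mail to "Manager, Microsoft Support", '
          'subject: "Your help system needs work!"')

    complaint = input("Your complaint: ")
    if len(complaint) < 40:  # arbitrary cutoff for a half-hearted complaint
        print("Clippy: C'mon! You can be tougher than that. Let 'em have it!")
        complaint += " " + input("More: ")
    print("(Complaint sent.)")

if __name__ == "__main__":
    clippy_scapegoat_flow()
```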

We showed this system to twenty-five computer users, and the results were unanimous: people fell in love with the new Clippy! A long-standing business user of Microsoft Office exclaimed, “Clippy is awesome!” An avowed Clippy hater said, “He’s so supportive!” And a user who despised “eye candy” in software said, “I wish all software was like this!” Virtually all of the users lauded Clippy 2.0 as a marvelous innovation.

Without any fundamental change in the software, the right social strategy rescued Clippy from the list of Most Hated Software of All Time; creating a scapegoat bonded Clippy and the user against a common enemy. Unfortunately, that enemy was Microsoft, and while impressed with our ability to make Clippy lovable, the company did not pursue our approach. When Microsoft retired Clippy in 2007, it invited people to shoot staples at him before his final burial.

Did the social approach also help explain users’ puzzling enthusiasm for their software when they gave feedback to the computer they had just worked with? Think about this as a social situation with a person rather than with a computer being evaluated. If you had just worked with someone and the person asked, “How did I do?” the polite thing to do would be to exaggerate the positive and downplay the negative. Meanwhile, if someone else asked you how that person did, you would be more honest. Similarly, the higher ratings of the software when it was evaluated on the same computer could have been due to users’ desire to be polite to the computer and their perception of the second computer as a neutral party. Did users feel a social pull when evaluating the computer they had worked with, hiding their true feelings and saying nicer things in order to avoid “hurting the computer’s feelings”?

To answer this question, I designed a study to re-create the typical scenarios in companies that evaluate their software. I had people work with a piece of software for thirty minutes and then asked them a series of questions concerning their feelings about the software, such as, “How likely would you be to buy this software?” and “How much did you enjoy using this software?” One group of users answered the questions on the computer they worked with; another group answered the questions on a separate but identical computer across the room.

In a result that still surprises me fifteen years later, users entered more positive responses on the computer that asked about itself than they did on the separate, “objective” computer. People gave different answers because they unconsciously felt that they had to be polite to the computer they were evaluating! When we questioned them after the experiment, every one of the participants insisted that she or he would never bother being polite to a computer.

What about BMW’s problem with its “female” navigation system? Could stereotypes be so powerful that people would apply them to technology even though notions of “male” and “female” are clearly irrelevant? I performed an experiment where we invited forty people to come to my laboratory to work with a computer to learn about two topics: love and relationships, a stereotypically female subject, and physics, a stereotypically male subject. Half of the participants heard a recorded female voice; the other half heard a recorded male voice.

After the participants had been tutored by the computer for about twenty minutes, we gave them a computer-based questionnaire (on a different computer, of course!) that asked how they felt about the tutoring with respect to the two topics.

Although every aspect of the interaction was identical except for the voice, participants who heard the female voice reported that the computer taught “love and relationships” more effectively, while participants with the male-voiced computer reported that it more effectively taught “technical subjects.” Male and female participants alike stereotyped the “gendered” computers. When we asked participants afterward whether the apparent gender of the voice made a difference, they uniformly said that it would be ludicrous to assign a gender to a computer. Furthermore, every participant denied harboring any gender stereotypes at all!

People’s tendencies with regard to scapegoating, politeness, and gender stereotypes are just a few of the social behaviors that appear in full force when people interact with technology. Hundreds of results from my laboratory, as summarized in two books (The Media Equation and Wired for Speech) and more than a hundred papers, show that people treat computers as if they were real people. These discoveries are not simply entries for “kids say the darndest things” or “stupid human tricks.” Although it might seem ludicrous, humans expect computers to act as though they were people and get annoyed when technology fails to respond in socially appropriate ways. In consulting with companies such as Microsoft, Sony, Toyota, Charles Schwab, Time Warner, Dell, Volkswagen, Nissan, Fidelity, and Philips, I have helped improve a range of interactive technologies, including computer software, Web sites, cars, and automated phone systems. Technologies have become more likable, persuasive, and compelling by ensuring that they behave the way people are supposed to behave. The language of human behaviors has entered the design vocabulary of software and hardware companies around the world.

Of course, this “Computers Are Social Actors” approach can only work if the engineers and designers know the appropriate rules. In many cases, this is not a problem: there are certain behaviors that virtually everyone knows are socially acceptable. On a banking Web site, for example, we all would agree that it is important that the site use polite and formal language, just as a bank teller would. For a humanoid robot, it doesn’t take an expert to know that the robot should not turn its back on a person when either is speaking.

What can design teams do when they don’t know the relevant rules? There are three common, though flawed, strategies. The simplest is to turn to adages or proverbs, collectively accepted social “truths.” Unfortunately, adages frequently conflict: for example, “absence makes the heart grow fonder” and “out of sight, out of mind”; and “many hands make light work” and “too many cooks spoil the broth.” Of course, each proverb could be good advice given particular people and particular contexts, but sayings don’t come with an instruction manual explaining when they should be applied. Even when following a single adage, ambiguity makes applying it a challenge. For example, absence may make the heart grow fonder, but never seeing your sweetheart again probably wouldn’t nourish your romance. Similarly, how many hands are “many” hands and how many cooks are “too many” cooks? This is reminiscent of the scene in Annie Hall in which Diane Keaton and Woody Allen both complain to their respective psychiatrists about how often they have sex. He says: “Hardly ever, maybe three times a week.” She says: “Constantly! I’d say three times a week.”

A second approach is to reflect on past experiences in order to learn from trial and error. Unfortunately, in design, as in life, you don’t get many opportunities to err and try again (unless you are in the movie Groundhog Day, in which Bill Murray’s character lives the same day over and over again until he gets it right). In addition to lacking opportunities for learning, it’s hard to know what lesson to learn. For example, my first dating experience lasted three dates before the girl broke it off. I decided to learn from the experience by thinking through everything that had happened during our brief relationship. I quickly became overwhelmed; I had made all kinds of decisions in that time, and I couldn’t tell which were effective and which weren’t. I deliberated for a while before coming up with the perfect solution. “Since you’ve dated before and I haven’t,” I said to her, “I’d really appreciate it if you could tell me what I did wrong so that I could learn from my mistakes.” Her expression mingled pity and disgust.

Last, people try to learn by example. Another dating disaster taught me the deficiencies of this strategy. When I was a teenager, a suave boy won the most beautiful girl at my middle school by drawing the following on the sidewalk outside her home: [picture of eye + heart + U]

When she came outside, he pointed at the drawing and said, “I did this for you!” She was immediately enthralled.

I decided that I would adopt the same strategy to entrance my lady love. I drew this, replacing “U” with a “ewe” to impress her with my wordplay: [picture of eye + heart + sheep]

When the girl came outside and saw me and my pictures, she ran back into her house screaming. She had concluded that I either wanted to alert her to my love for sheep or to cut out the eyes and heart of one in a bizarre ritual of devotion.

Imitating a charismatic person is difficult—even if you don’t try to “innovate” as I did—and it usually comes across as a pathetic attempt at mimicry. For example, when a charismatic person asks a series of questions about someone, it feels like sincere interest; when others do it, it can seem like stalking. Similarly, rigid imitation can become self-parody, as when one attempts to frequently use a person’s first name: “Hi, Cliff. It’s wonderful to have you visiting us, Cliff. Cliff, let me show you where everything is.”

If you try to avoid the pitfalls of imitation by directly asking people for the secrets to their success, you run into the problem that people frequently don’t know what makes them successful. For example, when one of the greatest chess masters of all time, José Raúl Capablanca, was asked why he was such a poor chess teacher even though his own play was impeccable, he answered: “I only see one move ahead . . . the right one.”

Although adages, learning from mistakes, and imitating others have their limitations, there is one foolproof method for discovering rigorous and effective social rules: science. Just as the Guinness Book of World Records or a Google search resolves sports debates, you can resolve social rule debates by turning to the relevant psychological, sociological, communication, or anthropological findings. For example, I was working with a design team on making an SAT tutoring system. We were trying to decide whether the teaching portions of the software should appear as a one-on-one session with a personal tutor avatar or as a classroom setting with avatars not only for the teacher but for the other students.

Some designers said that a solo tutor would encourage students to pay more attention and learn more. Others argued that being part of a class might make students feel less pressured because they would be just “another student” in the class and not the sole focus of the teacher. So I turned to the social science literature on how the presence of other people affects learning. As established in the classic paper on “social facilitation” by Robert Zajonc and much subsequent research, the effect of other students depends on how confident the student is. When you feel confident, having other people present improves how well you learn and perform. However, when you feel insecure, having other people around makes you nervous and pressured so you don’t learn as well. As a result, we decided to have the teaching environment be a virtual classroom but with a variable number of students.

When users were doing well on the practice tests, more students would appear at the desks, but when their practice test scores were low, there would be fewer students and more empty desks.
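
A small sketch can make that rule concrete. The seat count, score scale, and function name below are assumptions for illustration; the excerpt does not describe the tutoring software at this level of detail.

```python
# Minimal sketch of the adaptive-classroom rule described above: peer avatars
# fill the virtual room when recent practice-test scores are strong (when
# confident, an audience helps) and desks empty out when scores drop (when
# insecure, an audience hurts). Seat count and scale are assumed.

MAX_SEATS = 12  # assumed size of the virtual classroom

def visible_classmates(recent_scores: list[float]) -> int:
    """Map recent practice-test scores (0.0-1.0) to a number of peer avatars."""
    if not recent_scores:
        return MAX_SEATS // 2  # neutral default before any scores exist
    average = sum(recent_scores) / len(recent_scores)
    return round(average * MAX_SEATS)

# A student averaging 80% sees a fairly full room; one averaging 25% sees
# mostly empty desks.
print(visible_classmates([0.80, 0.85, 0.75]))  # -> 10
print(visible_classmates([0.30, 0.20, 0.25]))  # -> 3
```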

Because new technologies appear constantly and social science rules are numerous and difficult to nail down, I was kept busy for a number of years. As a researcher, I was the expert on the “Computers Are Social Actors” paradigm, formalizing social rules and making sure that they worked with interactive technologies. Happily, they virtually always did. I became well versed in the social science literature, uncovering more and more findings that I could “steal” and apply to computers. I often joked that I had the easiest job in the world: to make a discovery, I would find any conclusion by a social science researcher and change the sentence “People will do X when interacting with other people” to “People will do X when interacting with a computer.” I constantly challenged myself to uncover ever more unlikely social rules that applied to technology in defiance of all common sense. As Bill Gates described it, “Clifford Nass . . . showed us some amazing things.”

While I thought that research and consulting based on this “Computers Are Social Actors” paradigm would keep me excited and challenged for the rest of my career, eventually I became dissatisfied. I had become a researcher because I wanted to discover new things, not simply “borrow” and apply what others already knew. Furthermore, I had gotten very good at doing things I had become less interested in. Ironically, it was a seemingly trivial computer application that pushed me in a new direction.

I was working with a software company on improving its spell checker. Before the development of automatic spell correction, users would check their spelling after their document was complete. Thinking about it from a social perspective, as the spell checker went through the document, all it would ever say is “wrong! wrong! wrong!” Even when you were right—for example, when you typed in a proper name or used a word that wasn’t in the spell checker’s dictionary—it would say that you were wrong. And what did the spell checker do when it was wrong? It would simply ask you to “add the word to the dictionary” without even an apology. It was not surprising, then, that few pieces of software (other than Clippy, perhaps) created greater frustration.

So I brought together the usual cast of characters (programmers, designers, marketers, and so on) to resolve the problem. As we discussed how to improve the interface, I thought about the differences between a disparaging critic and an encouraging teacher. I felt that what users needed was a “kinder and gentler” spell checker. So I suggested that in addition to signaling errors, the system could commend users on difficult words that they had spelled correctly. For example, when it saw the word “onomatopoeia,” it could say, “Wow, that’s a really hard word to spell right!” “After all,” I argued, “it’s always nice to hear some praise.”

“That’s ridiculous!” one of the software engineers exclaimed. “Computers are supposed to get to the point. I don’t want my time wasted hearing about everything I do correctly. In fact,” she added in a scathing tone, “if you really think that’s a good idea, why doesn’t the computer go all the way: tell users that their spelling is improving, even if it’s actually lousy?”

While the engineer thought she was making a sarcastic recommendation, what our lead designer heard was a brilliant insight. “That’s fantastic!” he said. “Everyone loves a little flattery, and what’s the harm? It will make people feel better about checking their spelling. Users might even try harder to spell things right in order to get more praise!”

“Just what I always wanted,” the engineer replied. “An ass-kissing, brownnosing, bootlicking computer! Why the heck would I want a computer to falsely inflate my ego?”

Before they could grow even more polarized, I had the other team members chime in with what they thought about flattery. Do people like flatterers? Do flatterers seem insincere or insightful? Is flattery ignored or appreciated? As our initial conversation suggested, we found little agreement, so I decided to look at what the social science literature had to say.

When I searched, however, I couldn’t find anything close to a clear answer. There were isolated mentions of sincerity, kindness, honesty, and politeness in the social science literature, but nothing that tackled the question of flattery head-on. I decided to tap into my network of social science researchers to see if someone would conduct a study on flattery for me.

Although I was friendly with literally hundreds of social scientists around the world, I couldn’t find one person that would take on the research. When I asked them to explain their reluctance, most researchers told me that there was simply no way to properly study flattery. For an experiment to be clean and compelling, the researcher must keep everything else constant except the characteristic that she or he wants to study. In the case of flattery, the trickiest thing to keep constant is what people say and how they say it; after all, when two people communicate with each other, almost anything can happen! Thus, when experimenters want to ensure that each participant who comes into the lab has the same experience, they hire and train a “confederate,” a person whose behavior is directed by the experimenter but who is meant to appear as if she or he were just another participant in the experiment. For example, the experimenter could have the confederate and participant work together, and then the confederate could just “happen” to flatter, sincerely praise, or criticize the participant; the experimenter could then note the actual participant’s reactions.

To ensure a rigorous experiment, the confederate would have to behave the exact same way every time. This can be an insurmountable challenge. Imagine how difficult it would be to say the exact same words with the exact same facial expression, tone of voice, and body language whether speaking with a very attractive person, an ugly man covered in tattoos and piercings, an obnoxious jerk, a woman who looks like your mother, or a man who reminds you of a grade-school bully. Of course, the characteristics of the confederate could also matter: flattery means something different when it comes from a smiling versus a frowning person, a woman versus a man, or someone in a lab coat versus someone in street clothes.

In the case of flattery and other questions that involve conversation and social interaction, these inconsistencies make it extremely difficult to run a rigorous study. The problem of a fully reliable confederate also plagues such questions as how to criticize (chapter 1), whether people can effectively change manifestations of their personality (chapter 2), what happens when people become teammates (chapter 3), if misery loves company (chapter 4), and when rational arguments are more or less effective than emotional arguments (chapter 5).

The other reason my social scientist colleagues would not do the research was even more frustrating. They said that questions such as the effectiveness of flattery aren’t important despite how common they are in daily life. To a social scientist, “important” means addressing some fundamental question about the human brain or basic interactions among a group of humans, not helping people to have more successful relationships. It is also harder to get funding for “applied” questions than for abstract ones. For these scientists, how many people would value the information or how relevant it would be to daily life is irrelevant.

I was crushed. All I needed to make every computer user happier, more efficient, more comfortable, and more competent were answers to relatively straightforward questions about how people feel, behave, and think—the core of social science. I wasn’t worried about the theorists’ objections about importance because it was clear that numerous companies found my research interesting and would provide me with a great deal of money to do it; “applied” was actually a good word in many of the circles in which I traveled.

The real problem was finding a compelling confederate. I needed someone who was social but not “too” social. The confederate had to be able to carry on a constrained conversation without the participant finding it contrived. The confederate had to behave consistently in each experimental session, unaffected by who the participant was. Ideally, the confederate’s demographic or other characteristics would not affect the behavior of the participant. Above all, the interaction with the confederate had to feel natural. When framed this way, it became clear to me that human confederates were simply “too human.”

I am embarrassed to say how long it took me to realize that the answer to the problem was right in front of me: computers are the perfect research confederates! Computers, I knew, evoke a wide range of social responses similar to those elicited by people. Computers can do the same thing twenty-four hours a day, seven days a week, without deviation. They aren’t influenced by subconscious responses or unintended observations about their interaction partner. Without features such as a voice or a face that mark gender, age, or other demographic characteristics, one computer is very much the same as another. Ironically, I realized that just as studying interactions between people is the best way to discover how people interact with computers, people’s interactions with computers could be the best way to study how people interact with each other.

Eureka!


Experiment: Is Flattery Useful?

My exploration of flattery, then, became the first study in which I used computers to uncover social rules to guide how both successful people and successful computers should behave. Working with my Ph.D. student B. J. Fogg (now a consulting professor at Stanford), we started by programming a computer to play a version of the game Twenty Questions. The computer “thinks” of an animal. The participant then has to ask “yes” or “no” questions to narrow down the possibilities. After ten questions, the participant guesses the animal. At that point, rather than telling participants whether they are right or wrong, the computer simply tells the users how effective or ineffective their questions have been. The computer then “thinks” of another animal and the questions and feedback continue. We designed the game this way for a few reasons: the interaction was constrained and focused (avoiding the need for artificial intelligence), the rules were simple and easy to understand, and people typically play games like it with a computer.

Having created the basic scenario, we could now study flattery. When participants showed up at our laboratory, we sat them down in front of a computer and explained how the game worked. We told one group of participants that the feedback they would receive was highly accurate and based on years of research into the science of inquiry. We told a second group of participants that while the system would eventually be used to evaluate their question-asking prowess, the software hadn’t been written yet, so they would receive random comments that had nothing to do with the actual questions they asked. The participants in this condition, because we told them that the computer’s comments were intrinsically meaningless, would have every reason to simply ignore what the computer said. A third control group did not receive any feedback; they were just asked to move on to the next animal after asking ten questions.

The computer gave both sets of users who received feedback identical, glowing praise throughout the experiment. People’s answers were “ingenious,” “highly insightful,” “clever,” and so on; every round generated another positive comment. The sole difference between the two groups was that the first group of participants thought that they were receiving accurate praise, while the second group thought they were receiving flattery, with no connection to their actual performance. After participants went through the experiment, we asked them a number of questions about how much they liked the computer, how they felt about their own performance and the computer’s performance, and whether they enjoyed the task.
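
For readers who think in code, a hypothetical sketch of those three conditions follows. The condition names, praise phrases, and structure are assumptions drawn from the description above, not the original experiment's software; the point it illustrates is that the "accurate" and "flattery" groups saw identical praise and differed only in the cover story they were told.

```python
# Hypothetical sketch of the three feedback conditions described above.
# Names and phrases are illustrative assumptions, not the real study code.
import random
from typing import Optional

PRAISE = ["Ingenious question!", "Highly insightful.", "Very clever line of inquiry."]

def round_feedback(condition: str) -> Optional[str]:
    """Comment shown after each ten-question round of the animal-guessing game."""
    if condition == "control":
        return None  # no feedback; participants simply move on to the next animal
    # "accurate" and "flattery" differ only in what participants were told,
    # not in what the computer actually says.
    return random.choice(PRAISE)

for cond in ("accurate", "flattery", "control"):
    print(f"{cond}: {round_feedback(cond)}")
```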

If flattery was a bad strategy, we would find a strong dislike of the flatterer computer and its performance, and flattery would not affect how well participants thought they had done. But if flattery was effective, flattered participants would think that they had done very well and would have had a great time; they would also think well of the flatterer computer.

Results and Implications

Participants reported that they liked the flatterer computer (which gave random and generic feedback) as much as they liked the accurate computer. Why did people like the flatterer even though it was a “brownnoser”? Because participants happily accepted the flatterer’s praise: the questionnaires showed that positive feedback boosted users’ perceptions of their own performance regardless of whether the feedback was (seemingly) sincere or random. Participants even considered the flatterer computer as smart as the “accurate” computer, even though we told them that the former didn’t have any evaluation algorithms at all!

Did the flattered participants simply forget that the feedback was random? When asked whether they paid attention to the comments from the flatterer computer, participants uniformly responded “no.” One participant was so dismissive of this idea that in addition to answering “no” to the question, he wrote a note next to it saying, “Only an idiot would be influenced by comments that had nothing to do with their real performance.” Oddly, these influenced “idiots” were graduate students in computer science. Although they consciously knew that the feedback from the flatterer was meaningless, they automatically and unconsciously accepted the praise and admired the flatterer.

The results of this study suggest the following social rule: don’t hesitate to praise, even if you’re not sure the praise is accurate. Receivers of the praise will feel great and you will seem thoughtful and intelligent for noticing their marvelous qualities—whether they exist or not.

The rules and principles presented in this book have emerged from using the computer-as-confederate approach to make discoveries that previous social science approaches could never uncover. One cannot fail to see the irony here. Not only are computers associated with the most unsociable responses imaginable (e.g., “Your response is invalid. Try again”), but they are also stereotypically the domain of the most socially inept people. Nonetheless, computers’ “deficiencies” are what make them key to understanding social behavior and discovering successful social strategies.

The experiments that I now conduct uncover surprising and powerful social rules that apply to people (as well as to computers). Whenever a clear rule does not exist in the social science literature, I nail it down through experiments pairing people with computers. The experiments present people with the same contexts—collaboration, evaluation, learning, playing—and the same human roles or characteristics—praiser versus criticizer, male versus female voices, dominant versus submissive personalities, happy versus frowning faces. The experiments include traditional measures and metrics to assess people’s behaviors—standard questionnaires for personality and liking, memory tests, physiological measures of emotion. And I formalize the conclusions in terms of actionable rules that can create and support successful human relationships as well as advance the social sciences and user experience design.

This approach forces me to be ruthlessly direct and precise in the questions I ask and try to answer. A computer follows rigid steps and uses ironclad reasoning to reach exact, objective, and universal results. Thus, computer-derived rules are unambiguous, rigorous, and straightforward—making them readily usable in daily life. Because a computer is so obviously not a social presence—lacking a face, a body, emotions, and so on—if a social rule is effective for a computer, it will be even more effective when followed by a person, regardless of the situation, time, and place. For example, while a person flattered by another person might rationalize that somehow the flatterer was being sincere, the computer was obviously and unambiguously flattering: (seemingly) making random comments. Nonetheless, participants believed they did better because of it. The effectiveness of such blatant and irrelevant flattery suggests that these results are a conservative reflection of success you can attain in daily life by flattering others.

The rules I have uncovered and describe are so basic that any person (or computer) can apply them easily, and they are so broad and effective that every person (or computer) can become more persuasive, likeable, and socially successful. And while the rules are simple, they need not be followed mechanically: each rule is presented with the relevant underlying psychology so that you know how and when to apply it effectively.

I have long enjoyed the opportunity to work with designers and engineers to improve products and services, making cars safer, educational software more engaging, mobile phones more socially supportive, robots less frightening, and Web sites better able to close the deal. Now I also confer with social scientists about the “holes” in their understanding of people. In addition to improving products, I use my rigorous experiments with computers to help people evaluate others more effectively, work more smoothly with those different than themselves, manage their own and their colleagues’ frustrations, and better persuade others. Combining the theories and methods of social science and cutting-edge research with computers where social science is inadequate, the insights in The Man Who Lied to His Laptop will help you improve your professional and personal relationships.

The discoveries presented in this book are far-reaching. You will no longer use the “evaluation sandwich”—praise, then criticism, then praise again—after learning that it is neither helpful nor pleasant. You will identify the personalities of your customers and use that information to better persuade them. You will discover why team-building exercises don’t build teams, and what to do about it. You will leverage the “laws of emotion” to defuse heated situations and rally your colleagues. You will appreciate that even unintentional or meaningless inconsistencies carry great weight. The rules that emerge from the fascinating and sometimes bizarre ways that people treat computers like people will give you the tools you’ve always wanted to dramatically improve your day-to-day life. I invite you to join me as I move back and forth between the world of people and the world of technology, finding life-changing insight in both.

(Continues…)



Excerpted from "The Man Who Lied to His Laptop" by Clifford Nass.
Copyright © 2012 Clifford Nass.
Excerpted by permission of Penguin Publishing Group.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

Preface xi
Introduction: Why I Study Computers to Uncover Social Strategies 1
Chapter 1 Praise and Criticism 23
Chapter 2 Personality 57
Chapter 3 Teams and Team Building 81
Chapter 4 Emotion 115
Chapter 5 Persuasion 161
Epilogue 201
Acknowledgments 203
Bibliography 209
Index 221
