MTU Cork Library Catalogue


The truth about trust : how it determines success in life, love, learning, and more / David DeSteno, PhD.

By: DeSteno, David.
Material type: Book
Publisher: New York : Hudson Street Press, 2014
Description: xvii, 266 pages ; 24 cm.
Content type: text
Media type: unmediated
Carrier type: volume
ISBN: 1594631239
Subject(s): Trust | Trust -- Social aspects
DDC classification: 158.2
Summary: Draws on the latest research from a diverse range of fields to consider the role of trust in success, failure, and overall well-being, discussing how to recognize cues in order to discern the trustworthiness of others.

Enhanced descriptions from Syndetics:

Can I trust you?

Although it's an important question to answer, we usually fail to recognize just how often it confronts us. Most of us go through our days with little inkling that almost every interaction we face comes with this question attached. Of course we'd like to be able to predict whether we can count on others (or even ourselves), but few of us realize how deeply issues of trust pervade our lives or how powerfully they predict whether we'll flourish or fail.

In The Truth About Trust, renowned psychologist David DeSteno not only unveils new insights that help us gauge whether someone is trustworthy, he also brings together the latest research from fields as diverse as psychology, economics, biology, and robotics to create a compelling narrative that reveals some of the surprising ways trust matters. He shows us how trust influences us at every level and at every stage of life. Children, for example, learn and retain knowledge far better when their teacher is someone they trust as opposed to someone they like. If we don't quite trust the one percent, there is a good reason--power, even of a temporary sort, alters calculations for fairness. Trust also influences the long- and short-term considerations that tighten or loosen our romantic bonds. It even affects our ability to care for our own health and well-being (ever wonder why it's sometimes difficult to trust yourself to keep to your diet?). And as we move to an ever more wired world, capacities to use and abuse trust on the Internet to influence our behavior are becoming more sophisticated. To help address these challenges, DeSteno also takes the lid off new research conducted in his own lab that provides the first scientifically verified cues to help us 'read' the trustworthiness of others.

Appealing to readers of Dan Ariely, Daniel Gilbert, and David Eagleman, The Truth About Trust offers a new paradigm that will change not only how you think about trust, but also how you understand, communicate, and make decisions in every area of your life.

Advance praise for The Truth About Trust

'Smart, fun, and informative, The Truth About Trust describes the most frightening, most wonderful, and most human thing we do--putting our fates in someone else's hands. This one's worth reading. Trust me.' Daniel Gilbert, PhD, Edgar Pierce Professor of Psychology at Harvard and bestselling author of Stumbling on Happiness

'Trusting others puts us at risk. Yet failure to trust entails risk as well. The ability to navigate through this minefield successfully is one of life's most valuable assets. DeSteno provides by far the best account of what science has learned about how we do this. The Truth About Trust is also a terrific read.' Robert H. Frank, PhD, Henrietta Johnson Louis Professor of Management at Cornell University and bestselling author of The Economic Naturalist and The Darwin Economy

'The Truth About Trust tackles some of the most important and challenging issues in life. Psychologist David DeSteno takes a fresh look at fundamental questions, from gauging the trustworthiness of others to whether you can trust yourself.' Adam Grant, PhD, professor at the Wharton School of the University of Pennsylvania and bestselling author of Give and Take

Includes bibliographical references and index.


Table of contents provided by Syndetics

  • Preface (p. ix)
  • Chapter 1 Fundamentals, Foibles, and Fixes: What Is Trust, Anyway? (p. 1)
  • Chapter 2 Built to Trust? How Our Biology Determines Whom We Trust and Why (p. 35)
  • Chapter 3 In the Beginning: Learning to Trust and Trusting to Learn (p. 61)
  • Chapter 4 The Heart of the Matter: Trust in Romantic Relationships (p. 91)
  • Chapter 5 Power and Money: Trust Among the One Percent and Their Wannabes (p. 125)
  • Chapter 6 Can I Trust You? Unlocking the Signals of Trustworthiness (p. 147)
  • Chapter 7 Cybertrust: The Risks and Rewards of Trusting Those You Virtually Know (p. 183)
  • Chapter 8 Can You Trust Yourself? Why You May Not Know Yourself as Well as You Think You Do (p. 209)
  • Chapter 9 Trust or Dust? In the End, It's Usually One or the Other (p. 231)
  • Acknowledgments (p. 245)
  • Notes (p. 247)
  • Index (p. 261)

Excerpt provided by Syndetics

PREFACE

Can I trust you? This question--this set of four simple words--often occupies our minds to a degree few other concerns can. It's a question on which we exert a lot of mental effort--often without our even knowing it--as its answers have the potential to influence almost everything we do. Unlike many other puzzles we confront, questions of trust don't just involve attempting to grasp and analyze a perplexing concept. They all share another characteristic: risk.

So while it's true that we turn our attention to many complex problems throughout our lives, finding the answers to most doesn't usually involve navigating the treacherous landscape of our own and others' competing desires. When we're young, asking why the sky is blue or why pizza can't be for dinner every night, though sometimes seeming of equal cosmic importance, necessitates only the transmission of facts to answer. Wondering what exactly a Higgs boson is or whether anything out of the ordinary really happened at Roswell can, it's true, keep the gears of the mind whirring. For most of us, though, attempts to find answers to these questions won't keep us up at night. And while asking our financial advisor for the eighth time how to calculate compound interest might require stepping up our mental math, in and of itself, finding the answer is fairly formulaic.

Bring the word trust into the equation, however, and it suddenly becomes a whole different story. Trust implies a seeming unknowable--a bet of sorts, if you will. At its base is a delicate problem centered on the balance between two dynamic and often opposing desires--a desire for someone else to meet your needs and his desire to meet his own. Whether a child can trust her parents' answer to her question about the color of the sky requires estimating not only their scientific bona fides, but also their desire to appear smart even if they really don't know the answer. Whether she can trust them to make pizza for dinner, rather than simply ask why she can't have it every night, relies on divining her parents' willingness to uphold their promise to cook in the face of sudden needs to work late or to take an extra trip to the grocery store to refill an empty pantry. Whether you can trust scientists to tell you why searching for the Higgs or related subatomic particles is worth the huge taxpayer expense, rather than ask them to simply provide a definition for what the little particle is, means pitting everyone's desire to acquire knowledge that can lead to a better world against the scientists' related desires to pad their research budgets.

The same logic even applies to trusting yourself. Think about it. Whether you can trust that you'll invest your next paycheck for the long term as opposed to spending it immediately to purchase the newest iPad is quite different from figuring out how much money you'll have in twenty years if you do choose to invest it. Whether we're talking about money, fidelity, social support, business dealings, or secret-keeping, trust isn't just about the facts. It's about trying to predict what someone will do based on competing interests and capabilities. In short, it's about gambling on your ability to read someone's mind, even if that someone is your future self.

Like all gambles, though, assessing trustworthiness is an imperfect endeavor; there's always a chance you're going to come up short. Sure, most of us have theories about what signals whether people can be trusted. Do they stumble over their words or avert their gaze?
Do they seem too "smooth"? Did they "come through" last time? The problem, of course, is that most of us have also had the all-too-frequent experience of being surprised when our guesses turned out to be wrong. We're not alone, however; deception "experts" and security professionals haven't proved much better. Until very recently, there's been precious little evidence indicating that anyone can accurately determine if someone else can be trusted, especially if they don't know the individual well. Scientists have spent decades looking for markers of trustworthiness in the body, face, voice, penmanship, and the like, all to little avail.

Forget what you see on television; it's all science fiction. If polygraphs were foolproof, we wouldn't need juries. After all, the list of famous criminals who were found guilty based on polygraphs doesn't include the likes of CIA-spy-turned-traitor Aldrich Ames and "Green River Killer" Gary Ridgway, both of whom "passed" this physiological test. Likewise, there wouldn't be a long list of people who had to endure false accusations based on failed polygraph tests--people like Bill Wegerle of Wichita, Kansas, who was initially suspected of being the BTK killer. Entertaining movies and television shows aside, the same criticisms apply to the use of facial expressions. If a single smile or twitch could accurately predict who could be trusted, all negotiations would occur under a spotlight with video recordings.

Science, put simply, doesn't yet have all the answers to unlocking the mysteries of trust. Still, finding the keys is of such importance that the business community and the military spend millions of dollars a year trying to do just that. In fact, current knowledge has been so limited that the Intelligence Advanced Research Projects Activity (IARPA)--one of the central research units under the Director of National Intelligence--published a notice in 2009 specifically soliciting scientific proposals to develop new and more accurate methods to gauge a target's trustworthiness.

This state of affairs raises some questions, however: If the need to trust is so central to humans, why is it so difficult to figure out who is worthy of it? Why after millennia of evolutionary development and decades of scientific inquiry are answers only beginning to emerge? To my mind, there are two good reasons. The first, as I've hinted, is that unlike many forms of communication, issues of trust are often characterized by a competition or battle. As we'll see, it's not always an adaptive strategy to be an open book to others, or even to ourselves. Consequently, trying to discern if someone can be trusted is fundamentally different from trying to assess characteristics like mathematical ability. Aptitude in math can be estimated from answers to specific types of problems. Unless the person is a genius trying to pull the wool over your eyes, there shouldn't be any competing interests pushing her answers one way or another. As a result, her answers should, on average, serve as accurate indicators of her true abilities and be solid predictors of how she'll perform in the future. With trust, neither of these facts is necessarily true. As we'll see throughout this book, deciding to be trustworthy depends on the momentary balance between competing mental forces pushing us in opposite directions, and being able to predict which of those forces is going to prevail in any one instance is a complicated business.
The second reason why assessing trustworthiness remains something of an enigma is that, to put it bluntly, we've been going about it in precisely the wrong way. I don't say this lightly, as many great minds have been focused on this topic for decades. Yet it's also the case that this intense focus has led to a tunnel vision of sorts that often results in dead ends among the research community and simplistic expectations among the public. Everyone is looking for the one golden cue that predicts trustworthiness in all situations. Everyone assumes that trustworthiness is a fairly stable trait. Everyone believes that they know when and how issues of trust will affect them. The problem, though, is that they're mostly wrong; trust just doesn't work the way most people think.

How do I know? I could say, "Trust me," but that would defeat the whole point. I'm a scientist, so my goal is to convince you based on findings--not on opinions or testimonials. I should note that I haven't spent my life as a trust researcher, a security professional, or a science writer. To the contrary, I spend my days running a lab focused on one primary theme: how and why emotional states guide social and moral behavior. It's been an endeavor characterized by both great discoveries and never-ending questions. It's one that has allowed my research group to plumb the depths of the best and worst humanity has to offer. Whether we're uncovering the processes that give rise to dishonesty and hypocrisy or shedding light on the wonders of compassion and virtue, the task at hand always requires a lot of creativity and a willingness to go where the data lead. It's also a job that requires a bit of humility. The longer I do it, the more I realize that the best way to answer perennially difficult questions is not to go it alone, but rather to bring the best minds from many different fields together to look at old problems in new ways. This is exactly the perspective that my group brought to studying trust, and it's one that has allowed us to approach the issue in an entirely new way.

Why the interest in trust in the first place? Primarily because the more we examined vacillations in emotions and moral behavior, the more we realized that trust often played a central role. Whether it's wondering if a partner might cheat, needing to show that you recognize a responsibility to repay a debt, or desiring to signal that your abilities are up to the challenge, issues of trust rear their head. Jealousy and anger often stem from distrust of the loyalty of a partner. Showing gratitude stands as an efficient way to let people know you realize you owe them a favor. Quick flashes of pride can signal people that they can trust your competence. In short, much of human social life, and the emotions that revolve around it, invokes issues of trust in one way or another. Given this fact, my research group turned its lens on the dual aspects of trust--both how it works and whether and how people can accurately predict who is worthy of it. In so doing, we began an in-depth and novel investigation that traipsed across many traditionally separate fields of inquiry. In the end, what emerged are not only new insights into how to detect the trustworthiness of others, but also an entirely new way to think about how trust influences our lives, our success, and our interactions with those around us.
Still, of all the things I learned, one of the most profound--and the one I hope you'll take from this book--is that trust isn't only a concern that emerges at big moments in our lives. It's not relevant just to signing a contract, making a large purchase, or exchanging wedding vows. Yes, these events certainly affect our lives in important ways and depend on trust, but they're just the tip of the iceberg. Whether we realize it or not, issues of trust permeate our days from the time we're born to the time we die, and it's often what's below the surface of consciousness that can have the greatest influence on a life well lived.

Our minds didn't develop in a social vacuum. Humans evolved living in social groups, and that means the minds of our ancestors were sculpted by the challenges posed by living with others on whom they depended. Chief among those challenges was the need to solve dilemmas of trust correctly. And it's precisely because of this fact that the human mind constantly tries to ascertain the trustworthiness of others while also weighing the need to be trustworthy itself. Your conscious experience may not correspond with this fact, but again that's because many of the relevant computations are automatic and take place outside of awareness.

As you'll see in this book, trust influences more than most of us would have imagined. It affects how we learn, how we love, how we spend, how we take care of our health, and how we maximize our well-being. It not only affects our communication and comfort with others, but as our social worlds change from the physical to the virtual, the role of trust and its impact on our interactions will change as well. I invite you to come on the journey with me to find out exactly what we do and don't know about the role of trust in our lives. Along the way, I'll discuss not only work from my lab that bears on the issue, but also the work, views, and opinions of some of the best thinkers on the topic. From economists and computer scientists, to social media mavens and security officials, to physiologists and psychologists, it'll be a wide-ranging journey designed to put the pieces together.

To accomplish this goal, I've loosely divided the book into four parts. The first two chapters will set the stage by laying out the fundamentals--what trust is, why it matters, how it's physiologically embodied, and how we might profitably correct older ways of thinking about it. The next three chapters will explore the far-ranging ways trust impacts us--from how trust develops and influences children's morality and ability to learn, to the ways trust or lack thereof shapes relationships with those we love, to how and why power and money have the potential to alter loyalties. The sixth chapter turns the tables from an examination of how trust affects behavior to the age-old question of whether and how we can actually detect the trustworthiness of others. Here, I'll flip the old view on its head and open a whole new vista from which to explore trust detection. I'll also point out some bugs in the system, thereby arming you to avoid succumbing to them. From this base, the final section--chapters 7 and 8--will move in a slightly different though no less important direction. Here, I'll consider what all of the preceding means for two relatively novel realms when it comes to trust--realms where a partner isn't exactly who, or even what, you'd usually expect. Can you trust a virtual avatar? A robot? An unknown person on Facebook?
How trust works in a world of rapid technological advancement and virtual interaction--a world where the science of trust can be manipulated and used for good or ill with unprecedented precision--is the first theme I'll explore. Consideration of the second realm, however, will require adopting a different focus. Rather than looking outward to decide whom you can trust, I'll ask you to direct your gaze inward to ask what may be a more unsettling, yet in many ways a more fundamental, question for reaching your goals: Can you trust yourself? Although it's true that cooperation and vulnerability require two parties, no one ever said that the two parties had to be different people. To the contrary, the parties can be the same person at different times. Can the present you trust the future you not to cheat on your diet by bingeing on chocolate cake? Not to cheat on an exam? Not to cheat on your spouse? Not to go gambling again?

These last questions highlight a nuance it's important to remember as you proceed through this book. Each of us is never just an observer trying to ascertain whether someone else is to be trusted; we're also targets of observation ourselves. The same forces that determine whether someone else will be honest or loyal also impinge on our own minds. Assessing the trustworthiness of another and acting trustworthy ourselves, then, are simply two sides of the same coin. Understanding how to predict and control the flip of that coin is what this book is all about. And as we close in chapter 9, we'll see exactly why understanding trust matters as we explore the links between trust and resilience in an unvarnished way--a way that quite literally shows how trust, when used correctly, can be one of the most important tools to raise us all from ruin.

CHAPTER 1: FUNDAMENTALS, FOIBLES, AND FIXES
What Is Trust, Anyway?

At the most basic level, the need to trust implies one fundamental fact: you're vulnerable. The ability to satisfy your needs or obtain the outcomes you desire is not entirely under your control. Whether a business partner embezzles profits that doom your corporation, a spouse has an affair that wrecks your marriage, or a supposed confidant tweets a personal factoid that ruins your reputation, your well-being, like it or not, often depends on the cooperation of others. These others, of course, have needs of their own: needs to pay for a new car that might push them to skim profits and fix the books; needs to have a more charged love life that might lead them to acts of infidelity; or needs to be popular that might cause them to supply some juicy gossip to their friends at your expense.

It's precisely where your needs and theirs diverge that trust comes into play. If each person's goals were the same--in both nature and priority--there would be no potential conflict and thereby no need to trust. Such alignments of needs and desires only rarely occur, however. The social lives of humans are characterized by a never-ending struggle between different types of desires--desires favoring selfish versus selfless goals, desires focused on immediate gratification versus long-term benefit, desires stemming from the conscious versus unconscious minds. Only an overriding threat or an amazing confluence of random factors--what we'd otherwise call pure luck--can result in an exact mirroring of two people's needs and goals at all levels. Trust, then, is simply a bet, and like all bets, it contains an element of risk. Yet risk is something most of us could do without.
Decades of research have shown time and again that humans are generally risk-averse when it comes to making decisions, and with good reason. Risk, by definition, implies the potential for loss, and who likes to lose? In fact, the aversion to loss is so deeply ingrained that our minds have developed a sort of bias in calculating preferences. Losing X amount of something--whether X is dollars, cars, or cupcakes--hurts more in absolute terms than gaining the same amount of X feels pleasurable. Value, in other words, isn't absolute; it depends on whether we're winning or losing.

Given an innate risk aversion, the question of why humans trust in the first place is an intriguing one. Why do we take the risk? The short answer is that we have to. The potential benefits from trusting others considerably outweigh the potential losses, on average. The ever-increasing complexity and resources of human society--its technological advancement, interconnected social capital, and burgeoning economic resources--all depend on trust and cooperation.

Picture for a moment the familiar scene of NASA mission control during any shuttle launch or space-probe landing. It's a room filled with individuals, each hunched over a computer screen, working in concert to achieve what no single one of them could do alone. Each person, each link in the chain, has a small but central role to play, and each relies on the trustworthiness of the others to do their jobs. If a single individual fails to notice an important data point--whether it involves the pressure in a tank, atmospheric conditions, or the heart rate of an astronaut--the whole enterprise can be in peril. Everyone has to trust the others to do their jobs and do them well if the joint venture is to succeed.

Of course, it's not just amazing feats like space launches where trust plays a role. Trusting others also affects a majority of the everyday things we do, most of which we take for granted. We deposit our money in banks and let the bankers make decisions about how much and to whom they should lend it to help us earn interest. We let our kids go to school assuming that someone else will educate them so that we are freed to earn an income. We divide the labor in running the household so that we can accomplish much more than any one person could on his or her own. The examples are endless, but they all share a common thread: more can be achieved by working together than by working alone. That's why we trust--plain and simple. The need to increase resources--whether they be financial, physical, or social--often necessitates depending on others to cooperate.

As we know all too well, however, not every instance of trust is well placed. The financial crisis of 2008 is a case in point. People trusted the banks to invest their money wisely, but risky mortgage lending and credit-default swaps provided another classic reminder of the duality of human nature. The banks were taking incredible risks, even betting against the success of their own deals, with money from depositors--money they were entrusted to manage responsibly. The evening news regularly highlights breaches of trust in our schools ranging from administrators falsifying records to teachers abusing students. But here's where the "on average" part of the reason for trust comes into play. On average, more is to be gained by trusting others, as the aggregated benefits in the long term tend to outweigh the potential individual losses that come from misplaced trust.
But there's the catch: greater benefits on average don't mean much when you're the person who loses money, a spouse, or a solid and wholesome education for your child. Still, statistically speaking, trusting usually pays greater dividends in the long run. It's this dynamic tension between the opposing costs and benefits that has shaped how our minds solve the trust equation at different moments in our lives--with respect to both acting trustworthy ourselves and assessing the trustworthiness of others.

If you truly wanted to avoid the risks inherent in trusting other people while still benefiting from cooperation, there's really only one route: transparency. If you could actually verify the actions of another, the risks, by definition, become lower. In fact, if you think you can't trust a potential partner at all, transparency is the only way to go. Think of the classic image from the last crime drama you saw on television. Two criminals need to complete an exchange. What do they say? Usually it's some variant of, "Open the suitcases and we'll exchange them on the count of three." They each want to see--to know for sure--that the other has the money, drugs, kidnapped person, or similar valuable. They also want to make sure that they don't give up their prize without acquiring their desired object at the same time. In such cases, trust is completely out of the picture.

The problem, of course, is that the ability to verify actions isn't always possible--a limitation that can occur for two main reasons. The first involves effort. Verification is onerous; it takes time and energy. The Transportation Security Administration (TSA) has to verify that no one is boarding a plane with a weapon, hence long airport lines. The mortgage company has to verify that you can pay your bills, hence the mounds of paperwork. And that's just when we're considering one person at a time. Imagine how difficult and costly it would be to run a business if an employer had to verify every action taken by a subordinate. Imagine how much time you'd have to spend watching hidden Web cams in your home if you wanted to verify that your spouse wasn't cheating on you or your babysitter wasn't stealing from you. One reason, then, that true verification directly constrains resource accumulation is that it limits the time and energy that could be devoted to other endeavors.

The second reason verifiability isn't always feasible is that there can be a time lag--a delay between the exchanges. You invest money now expecting a future return. You help a friend move now expecting that she'll help you move when your lease is up. Needs don't always arise in tandem, which means that if people were only willing to act in a trustworthy manner when that trust was simultaneously repaid, nothing much would get done that required mutual support. Consequently, someone has to be willing to take the risk to be the first to invest money, time, or other resources, hoping that the partner will then keep up her end of the deal at a future point. As my friend and collaborator the economist Robert Frank often puts it, solving this commitment problem is one of the central dilemmas of human life. If no one were willing to trust and subsequently honor commitments, human society, as we know it, would cease to exist. Frank's focus on the challenges posed by delayed interactions is an important one for understanding how trust works. It clearly shows why complete transparency is often impractical.
Without delayed reciprocity--the process by which we reap rewards after initially extending ourselves to help others--cooperation would be hamstrung. We'd only help those who could help us back in the here and now--a situation that wouldn't be very efficient. Every time you needed help you'd have to find someone else who was also in need, to ensure that the mutual problems would be solved simultaneously. As a result, the age-old question of whether you could count on a person when you needed him would go right out the window. It's precisely because of such substantial, and sometimes impossible, constraints that trust becomes necessary. Without it, productive cooperation would be hard to come by.

So, we trust at times; we really don't have much of a choice. But once we leave the world of verifiability, we inevitably come across more selfish behavior and at the same time face greater difficulty in predicting who will show it. It's not the case that honesty and loyalty will forever disappear without transparency. As we'll see, a dynamic equilibrium between trustworthy and untrustworthy behavior will eventually result. Where that equilibrium settles, though, is flexible, and being able to predict it is what much of this book is about.

The Fundamentals: What's a Prisoner to Do?

Whether you're a head of state, CEO, or kid on the playground, situations involving trust share a common structure. Ultimately, your outcomes are intertwined with those of your partner, with success or failure often depending on each person's best guess as to what the other will do. Although it's surely true that the gravity of the objective consequences will vary, the fundamental nature--the underlying mathematics of the situation, if you will--remains the same. Different combinations of trustworthy and/or untrustworthy behavior can lead to different magnitudes of gains or losses in metrics ranging from quantities of nuclear arms to hours of detention at school.

Consider the following situation. Jack and Kate get sent to the principal's office for trying to steal their teacher's answer key. Although Jack and Kate did plot the theft, the evidence--though sufficient to get them in trouble--is still a bit murky with respect to who exactly did what. To get a better picture of culpability, the principal separates them and presents the same deal to each. Let's start with Jack. If he is willing to incriminate Kate by squealing while Kate continues to remain silent, he'll get a lighter detention sentence (one day) than will Kate (four days). If they both remain silent, the ability to decisively convict one or the other will be lessened; consequently, they'll both serve a moderate detention (two days). However, if both Jack and Kate implicate the other (remember, Kate is being offered the same deal), they'll each serve slightly less time than if only one is convicted (three days each), since they at least were willing to assist in the investigation.

What should Jack do? Mathematically, the answer is pretty clear: he should rat on Kate. To see why, take a look at the table below (detention days for each combination of choices):

                             Kate stays silent     Kate implicates Jack
    Jack stays silent        Jack: 2, Kate: 2      Jack: 4, Kate: 1
    Jack implicates Kate     Jack: 1, Kate: 4      Jack: 3, Kate: 3

If Jack implicates Kate (i.e., defects on Kate) and Kate keeps quiet (i.e., cooperates with Jack), he gets one day of detention; he would receive two days if he, too, holds his tongue. Now, if Kate defects, it still makes sense for Jack to rat on her. In this case, Jack would get three days of detention, as opposed to four if he remained loyal to her. Defecting, then, makes perfect sense.
It's what game theorists call a dominant strategy--one that always leads to the best results for an individual irrespective of what the other person does. There's one last aspect to consider, though. Kate is mulling over the same deal at the same time, and Jack knows it. This simple fact alters the whole picture. Although the strategy of defecting is the best one to follow from each individual's perspective, as it maximizes gain regardless of what one's partner decides, it doesn't always lead to the best outcome when fates are joined. If both defect, thereby following the strategy that is in each person's best interest individually, they both end up with a pretty bad outcome: three days in detention. If, however, they both cooperate with each other by remaining silent, they end up only serving two days apiece. And there's the rub. If you can trust a partner--if you know you'll each accept a small sacrifice on the other's behalf--both of you can end up better off than if you each followed a strategy to maximize your own self-interest without regard to the other.

The structure of this problem--referred to as the prisoner's dilemma (PD)--was first formalized by the game theorists Merrill Flood and Melvin Dresher at the RAND Corporation and later given its famous prisoner framing and name by the mathematician Albert Tucker. The interesting aspect of the dilemma--and one that accounts for its long-standing use--is that it captures the essence of the trade-offs inherent in many decisions to cooperate by showing how loyalty can lead to better outcomes than simple self-interest. Its popularity as a tool for scientific investigation also stems from its portability to the lab. Although it reflects the dynamics of decisions that can hold great costs, it can be transformed in a way that allows trust to be studied ethically. For example, the costs and benefits can be made to involve tens of dollars provided by an experimenter instead of thousands of dollars in profit from real business deals. Using variations on this theme, the PD has been utilized to study trust and cooperation in many, many realms.

The fundamental question, of course, is this: what strategy works best in life? To find out, the political scientist Robert Axelrod decided to compare different strategies for interacting in PDs that were iterative--situations, like real life, where you're given multiple opportunities to defect or cooperate with others who have been trustworthy or selfish in the past. There was a problem with this plan, though. How could he accomplish this goal when he not only needed players that differed in dispositions such as a willingness to forgive past transgressions, be vengeful, and be trustworthy, but also needed them to interact across hundreds of rounds? Finding the best strategy for the long run, after all, would require comparing outcomes across many, many instances. In a stroke of brilliance, Axelrod decided to conduct a tournament where the players would be computer programs designed to behave as different types of people might. He would then run simulations consisting of hundreds of trials each where the programs played against each other in a round-robin, all the while gaining or losing points determined by the structure of the PD. Axelrod was under no illusion that he had all the answers, so he invited different researchers to submit programs. The "contestants" varied widely in nature. Some programs were vengeful, never cooperating again with a partner who defected.
Others showed some level of forgiveness, only defecting on a partner after being cheated twice. Still others possessed even greater levels of complexity. At the end of the tournament, though, one fact became clear. The superior performing strategies all tended to share two properties. One was an initial willingness to be trustworthy; they never were the first to defect. Another was to be provokable; they were willing to respond to untrustworthy actions in kind.

Which instantiation of these guiding principles worked best? The answer, as well as the overall winner, was quite plain. It was an exceedingly simple strategy: tit-for-tat. As its name suggests, tit-for-tat (TFT) means just that: start out being fair but then copy your partner's actions. If she remains fair, so do you on your next turn. If she defects, then you defect on the next round. While it's true that TFT may not have beaten every other strategy in every round of the round-robin tournament, it fared the best overall; it was a consistent silver medalist in a sea of one-shot wonders. The analogue to human trust and cooperation is clear. If each different strategy represented a different person's tactics (e.g., hostile, cheating, forgiving), TFT provided the best benefits on average against the whole lot. Precisely because TFT allows a willingness to forgive and regain trust, it can avoid entering death spirals when used with partners who employ different strategies. Unlike a strategy that assumes a broken trust should always entail retribution, TFT allows for a partner to be redeemed through her willingness to cooperate again.

Based on Axelrod's simulations, then, playing the game of trust seems pretty simple. Be fair to start, but return favors in kind; in the end, you'll maximize your gains. In truth, though, life's not that simple. Advanced as these simulations were for their time, they differ from true human interaction in an important way. Computers are perfect and rational; humans are not. Sometimes we break a trust when we don't intend to do so, meaning that sometimes we slight others by mistake. We've all been there. We don't complete our part of a team assignment on time because we misremembered the due date, or we don't buy a lottery ticket from the neighbor's kid even though he remembers us saying we would (when actually he's got us confused with the other neighbor). Simply put, human social interaction leaves room for error. Our actions aren't always clear indicators of our intentions; it's a noisy system. And as it turns out, "noise" can cause some real problems.

Consider the following: two well-intentioned people adopt the TFT strategy for deciding whether to be cooperative with each other. All goes well for a while, but then one of these unintended slights occurs. Person A believes that Person B "defected" on her (whether defection here means intentionally revealing a secret, skimming profits, not working hard enough, etc.), when in actuality Person B's behavior was accidental (i.e., she didn't intend to act in an untrustworthy manner). Assuming they both adhere to TFT, the death spiral begins. While tit-for-tat can recover from defections when used against many strategies, this isn't the case when it's used against itself. The result is that noise in the system can doom what otherwise appeared to be the superior strategy.
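To make that dynamic concrete, here is a minimal Python sketch of my own (an illustration, not code from the book) of two tit-for-tat players using the Jack-and-Kate detention payoffs from the table above, with a noise parameter standing in for accidental defections:

    import random

    # Detention days from the Jack-and-Kate example (lower is better).
    # Key: (my move, partner's move); "C" = stay silent, "D" = implicate.
    DAYS = {("C", "C"): 2, ("C", "D"): 4, ("D", "C"): 1, ("D", "D"): 3}

    def tit_for_tat(partner_history):
        """Cooperate on the first round, then copy the partner's last move."""
        return partner_history[-1] if partner_history else "C"

    def play(rounds=20, noise=0.0, seed=1):
        """Two TFT players; with probability `noise`, an intended
        cooperation accidentally comes across as a defection."""
        rng = random.Random(seed)
        a_moves, b_moves = [], []  # what each player actually did
        a_total = b_total = 0
        for _ in range(rounds):
            a = tit_for_tat(b_moves)
            b = tit_for_tat(a_moves)
            if a == "C" and rng.random() < noise:
                a = "D"  # the unintended slight
            if b == "C" and rng.random() < noise:
                b = "D"
            a_moves.append(a)
            b_moves.append(b)
            a_total += DAYS[(a, b)]
            b_total += DAYS[(b, a)]
        return a_total, b_total

    print(play(noise=0.0))  # (40, 40): steady mutual cooperation, 2 days/round
    print(play(noise=0.1))  # slips trigger the alternating echo of retaliation

With noise set to zero the pair cooperates forever at two days per round; after a single slip, the echo of alternating retaliation costs each player an average of two and a half days per round, which is the penalty the author means when he says noise can doom TFT against itself.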
Among the first to recognize this problem was the physicist Robert May, whose work subsequently led the mathematicians Martin Nowak and Karl Sigmund to explore its significance for comparing cooperative strategies. Nowak and Sigmund, in ingenious fashion, decided to tweak Axelrod's simulation to bring it closer to a model of true human interaction and evolutionary development. They made two fundamental changes. First, they allowed for noise in the system by way of "mutations"; different contestant algorithms would come into being at random and choose to cooperate or defect with others based on probabilities. Second, they allowed these mutants to evolve; their simulation had a generational aspect. Following a basic law of natural selection, contestants that did better in earlier rounds propagated more in successive rounds. In this way, the team could model the dynamics of trust and cooperation in an evolutionary sense. The result forever altered understanding of how trust and fairness flow in a society.

Across Nowak and Sigmund's many simulations, a general pattern typically emerged. One strategy--always defect--initially took the lead. This fact is not entirely surprising, as I already noted that defection is the dominant strategy in an individual game. So, for about the first hundred generations of the simulations, the defectors ruled. They exploited the initial kindness of the more trusting tit-for-tatters and their kin and reaped the benefits of selfishness. Over time, however, the situation changed. While TFTs always performed more poorly against the untrustworthy defectors, as they always ended up getting conned before they learned not to trust, they always performed better when playing other TFTs, where the initial benefits of cooperation were smaller, but the relationship would remain loyal and stable. Over the long run, the TFTs--whose population had at first been driven dangerously low--would regroup and prosper, overtaking the cheating defectors.

What Nowak and Sigmund didn't expect at the outset, however, was that TFT wouldn't end up being the dominant strategy. That prize went to its cousin, a strategy the duo referred to as generous tit-for-tat (GTFT). As its name implies, GTFT was slightly more forgiving than TFT; it would choose to cooperate with some small probability even when facing defection. For example, it might choose to cooperate 25 percent of the time when facing an individual who had previously been untrustworthy. This extra bit of forgiveness functioned to overcome some of the noise mentioned earlier. Sure, sometimes being forgiving led to exploitation, but other times it allowed a loyal relationship to blossom--a relationship where the initial defection was a mistake.

Perhaps the most important point, though, to come from Nowak and Sigmund's simulations was the realization that even GTFT wasn't always best. Winners, at some point, almost always fall, and so did GTFT. The problem was that as GTFT continued to dominate, the population as a whole became more and more trustworthy. Once everyone is a saint, no one expects to be cheated; everyone cooperates. As a result, the situation becomes ripe for the dishonest. It's a con man's paradise; everyone trusts by default. When a random mutation favoring defection again emerges, it's initially unstoppable. The defectors propagate and gain dominance, pushing more cooperative strategies almost to extinction, only then to decline as the trusting and cooperative reemerge.
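The generous variant differs from plain tit-for-tat by a single probabilistic choice. A sketch under the same assumptions as the code above (the 25 percent figure is the illustrative probability from the text, not a derived optimum):

    import random

    rng = random.Random(2)

    def generous_tit_for_tat(partner_history, generosity=0.25):
        """Like tit-for-tat, but after a partner's defection still
        cooperate with probability `generosity`."""
        if not partner_history or partner_history[-1] == "C":
            return "C"
        return "C" if rng.random() < generosity else "D"

    # Against a defection, GTFT still extends the olive branch about
    # a quarter of the time:
    print(sum(generous_tit_for_tat(["D"]) == "C" for _ in range(10000)) / 10000)

Substituted for `tit_for_tat` in the `play` harness sketched earlier, two GTFT players escape the alternating echo as soon as one of them forgives, which is why the strategy overtook plain TFT once noise and evolution entered the simulations.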
The insight here is to realize that trust isn't about finding the perfect single strategy--there isn't one. It's about realizing that selfishness and cooperation, disloyalty and trustworthiness, exist in an ever-changing equilibrium. It's always been that way; it always will.

Angie Likes Him

Okay, I'll admit it; I hate contractors. When my elderly parents needed to have work done on their house, they asked me to help find someone to do it. Unfortunately for me, though, building and construction are not my forte. That means I was left trying to decide which contractor to hire not only without having dealt with any of them before, but also without a knowledge base with which to evaluate their claims. To this day, I have no idea what the difference is between drywall and blue board. As you can imagine, then, I had no idea which contractor would be the most trustworthy in terms of quoted price and completing the job on schedule. It's a matter of degree, of course, as these elements never go as planned in any construction job. Still, I wanted to help my parents do business with the person I felt was being the most honest. What did I do? Like anyone in the same position, I asked around.

This simple example--one that most of you have no doubt experienced--shines a light on one of the shortcomings of the models of trust and cooperation we've been discussing. TFT and the like all depend on direct experience with a partner. Yes, it's true that in cases where you don't know what the other person will do, the mathematical models suggest trusting at first. After all, while it's true that trusting someone you don't know can end up in a single loss, it's also true that not being willing to trust can prevent you from finding an honest partner who, over years of cooperation, could provide massive gains--gains that when aggregated often outweigh a single loss. But here again, these models aren't of much assistance when you really don't want to get taken advantage of in the here and now, or, for that matter, when you have several potential partners--or contractors--from which to choose. If I were going to make the best choice for my parents, I needed to resist the opportunity to trust the first contractor who showed up in favor of doing some homework. So, as I just noted, I checked into their reputations. I asked friends; I asked neighbors; I asked Angie's List.

As you might easily guess, the ability to predict if someone you don't know will be trustworthy offers immense benefits. It increases the odds that your decision to work with them will lead to certain gain over certain loss. It completely circumvents the problem posed even by relatively successful strategies like TFT: the possibility of a loss on the first encounter. It also solves the problems posed by complex societies where specialization and wider commerce are the norm. While I may not know whether a contractor is trying to cheat me by using an inferior product, the people whose kitchen and bath he fixed two years ago will. Likewise, while I can use the Internet to get bids from several potential contractors, I can also use the Internet to find reviews of their honesty from people I don't even know. Reputation, then, is often viewed as a prime method for solving problems of trust. It's a form of what is often termed indirect reciprocity--a mechanism by which one person can benefit from another's experiences. If a contractor acted in good faith with one person, this action can be construed as an indicator that he will do so with another.
Similarly, if he cut corners once, most believe he's likely to do it again. And if one assumes that accurate reputational information is available, choosing to always trust someone, even on the first encounter, suddenly becomes a less adaptive strategy.

Reputation also possesses a second benefit. It not only provides insight into whether you should trust another person, it increases the odds of trustworthy behavior in general. Everyone becomes subject to what economists call the shadow of the future. If you cheat someone, that reputation will precede you. Word will spread that you are not to be trusted, and your future gains, in terms of both economic and social capital, will rapidly diminish. And in the digital age we now inhabit, access to this information is becoming ever easier.

Foibles and Fixes: What's Right, What's Wrong, and How We Begin to Fix It

Excerpted from The Truth about Trust: How It Determines Success in Life, Love, Learning, and More by David DeSteno. All rights reserved by the original copyright owners. Excerpts are provided for display purposes only and may not be reproduced, reprinted or distributed without the written permission of the publisher.

Reviews provided by Syndetics

Kirkus Book Review

New research on the never-ending debate about trust. "Trust isn't only a concern that emerges at big moments in our lives," writes DeSteno (Psychology/Northeastern Univ.; Out of Character: Surprising Truths About the Liar, Cheat, Sinner (and Saint) Lurking in All of Us, 2013) in his fact-filled analysis of this age-old concept. "[I]ssues of trust permeate our days...and it's often what's below the surface of consciousness that can have the greatest influence on a life well lived." Through extensive research and chronicles of his progressive experiments, the author leads readers on a quest to identify not only what trust really means, but how it informs almost every decision a person makes throughout the day. Issues of morality, loyalty, competence, what's fair and what's not, and how to reliably follow your gut's reaction to a situation are just some of the aspects of this multidimensional behavior DeSteno examines. In concise prose backed by engaging stories, the author addresses the pros and cons of common issues such as trusting a business transaction, using trust in learning situations and the need for trust in personal relationships. DeSteno's studies extend beyond these parameters to consider the trust necessary to launch into social media and other cyber-related connections, where visual cues are not available but where trust is still an important factor in any kind of transaction. The author identifies the nuances of physical nonverbal signs that identify trustworthiness and also explains how an examination of the self can help readers know whether they can even trust themselves to behave with willpower and morality. DeSteno ends with six powerful and easy-to-remember rules regarding trust with the hope that the overall effect will be for the greater good of all. Fresh insight into a necessary part of everyday life. Copyright Kirkus Reviews, used with permission.

Author notes provided by Syndetics

DAVID DESTENO is a professor of psychology at Northeastern University, where he directs the Social Emotions Group. A fellow of the Association for Psychological Science and editor in chief of the American Psychological Association's journal Emotion, he is the author, with Piercarlo Valdesolo, of Out of Character. DeSteno earned his PhD from Yale University and has written for publications including the New York Times and the Boston Globe. He lives in Massachusetts.
