# Highlights: SuperCooperators by Roger Highfield ![rw-book-cover](https://readwise-assets.s3.amazonaws.com/static/images/default-book-icon-8.18caceaece2b.png) ## Metadata - Author:: [[Martin Nowak]] and [[Roger Highfield]] - Full Title: SuperCooperators - Category: #books - Date Created: [[2024-02-13]] ## Highlights >[!quote] > Legend has it that when asked how he had created David, his masterpiece, Michelangelo explained that he simply took away everything from the block of marble that was not David. A mathematician, when confronted by the awesome complexity of nature, also has to hack away at a wealth of observations and ideas until the very essence of the problem becomes clear, along with a mathematical idea of unparalleled beauty. (Page 36) >[!quote] > However, the implications of what happens when the Prisoner’s Dilemma is played over and over again were first described before Trivers’s analysis in 1965 by a smart double act: Albert Chammah, who had emigrated from Syria to the United States to study industrial engineering, and Anatol Rapoport, a remarkable Russian- born mathematician- psychologist who used game theory to explore the limits of purely rational thinking and would come to dedicate himself to the cause of global peace. In their book, Prisoner’s Dilemma, they gave an account of the many experiments in which the game had been played. (Page 53) - Note: #books >[!quote] > Robert Axelrod’s book The Evolution of Cooperation is now regarded as a classic in the field, and deservedly so. (Page 57) - Note: #books >[!quote] > After a few generations, evolution will settle on yet another strategy, which we called Generous Tit for Tat, where natural selection has tuned the optimum level of forgiveness: always meet cooperation with cooperation, and when facing defection, cooperate for one in every three encounters (the precise details actually depend on the value of the payoffs being used). So as not to let your opponent know exactly when you were going to be nice, which would be a mistake (John Maynard Smith’s Tit for Two Tats strategy could be easily exploited by alternating cooperation and defection), the recipe for forgiveness was probabilistic, so that the prospect of letting bygones be bygones after a bad move was a matter of chance, not a certainty. Generous Tit for Tat works in this way: never forget a good turn, but occasionally forgive a bad one. (Page 62) >[!quote] > If we have both cooperated in the last round, then I will cooperate once again. If we have both defected, then I will cooperate (with a certain probability). If you have cooperated and I have defected, then I will defect again. If you have defected and I have cooperated, then I will defect. Overall, this means whenever we have done the same, then I will cooperate; whenever we have done something different, then I will defect. In other words, this winning strategy does the following: If I am doing well, I will repeat my move. If I am doing badly, I will change what I am doing. I now became intrigued. My mood lifted. Back in Oxford I told the distinguished biologist John Krebs about the winning strategy when I bumped into him in the corridor of the Zoology Department. He recognized it instantly. “This sounds like Win Stay, Lose Shift, a strategy which is often considered by animal behavioralists.” The strategy was much loved by pigeons, rats, mice, and monkeys. It was used to train horses too. And it had been studied for a century. 
John was amazed at how the strategy had evolved by itself in a simple and idealized computer simulation of cooperation. So was I. (Page 70) >[!quote] > Our results were published in 1992, in the journal Nature. I can still see the look of joy on Bob’s face when our paper was accepted. To celebrate, I went to a little copy shop near New College and had T-shirts printed with the evocative patterns. I foolishly imagined that one day an industry of computer-generated art would be based on the machinations of my primordial pizza program. The entire world would end up crisscrossed with advancing waves of defectors and cooperators. Prestigious galleries such as the Museum of Modern Art in New York would have art inspired by this eternal struggle between light and darkness. Alas, almost two decades later, this particular field of computer-generated art has yet to realize its potential. There is one small consolation. Linux uses one of my patterns as a screen saver. (Page 112) - Note: I thought this was funny >[!quote] > We came up with the following simplified scenario, couched as ever in the language of cooperation and defection: When individuals interact with others in the same group, they get a payoff. The individuals can reproduce in proportion to payoff— so those who experience cooperation fare better than those who experience defection— and their offspring are added to the same group. Therefore cooperative groups grow faster. In the second component of our model, we assumed that groups can break up, rather like cells divide or rival factions in a society part company to go their separate ways. By this I mean that if a group becomes too large, it can split into two groups. Occasionally, splinter groups also become extinct because there is only a limited amount of space (a creature runs out of habitat) or other resources (companies go broke for want of credit, for example). Thus we added the limitation that when one group divides into two, another group dies in order to constrain the total number of groups. Run this model in a computer and you find that groups that contain fitter individuals reach the critical size faster and, therefore, split more often. What is satisfying is that this model leads to selection among groups, even though it is only the individuals that reproduce. The higher-level selection of groups emerges from reproduction at the lower level of the individual. (Page 123) >[!quote] > Playing with the math of our model, a simple and compelling result emerges. Group selection allows the evolution of cooperation, provided that one thing holds good: the ratio of the benefits to cost exceeds the value of one plus the ratio of group size to number of groups. Thus group selection works well if there are many small groups and not so well if there are a few large lumbering groups. (Page 124) - Note: This makes me draw parallels with how the price of growing a dominant group is greater splintering and death >[!quote] > We found that group selection still favored cooperation, provided that the benefit to cost ratio was boosted to compensate for migration. (Page 125) - Note: What does this tell us about the relationship between citizens and migrants? >[!quote] > One can easily imagine how altruistic behavior can help insure a group against the costs of combat. For example, a broken leg could be fatal in a selfish group, since the injured member would be unable to forage and be at risk of starving.
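As an aside on the Win Stay, Lose Shift rule from the Page 62 and Page 70 highlights above, here is a minimal Python sketch of the repeated Prisoner's Dilemma. The payoff values (R=3, T=5, P=1, S=0), the 5 percent execution noise, the opening move, and the probability of cooperating after mutual defection are all illustrative assumptions; the book's own payoffs and simulation details are not quoted in these highlights.

```python
import random

# Conventional illustrative Prisoner's Dilemma payoffs (assumed, not from the book):
# R = reward for mutual cooperation, S = sucker's payoff, T = temptation, P = punishment.
R, S, T, P = 3, 0, 5, 1

FORGIVE_AFTER_DD = 1.0   # the "certain probability" of the Page 70 quote; this value is a guess

def payoff(my_move, their_move):
    """My payoff for one round; moves are 'C' or 'D'."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(my_move, their_move)]

def win_stay_lose_shift(my_last, their_last, rng):
    """Page 70 rule: repeat my move if we both did the same thing, otherwise defect."""
    if my_last is None:
        return 'C'                                   # opening move: an assumption
    if my_last == their_last:
        if my_last == 'C':
            return 'C'                               # both cooperated: stay
        return 'C' if rng.random() < FORGIVE_AFTER_DD else 'D'   # both defected: shift
    return 'D'                                       # we did different things: defect

def generous_tit_for_tat(my_last, their_last, rng):
    """Page 62 rule: answer cooperation with cooperation, forgive a defection
    roughly one time in three (the book notes the exact rate depends on the payoffs)."""
    if their_last is None or their_last == 'C':
        return 'C'
    return 'C' if rng.random() < 1 / 3 else 'D'

def always_defect(my_last, their_last, rng):
    return 'D'

def play(strat_a, strat_b, rounds=200, noise=0.05, seed=0):
    """Average per-round payoffs when two strategies meet, with a small chance
    that an intended move is flipped by mistake."""
    rng = random.Random(seed)
    a_last = b_last = None
    a_total = b_total = 0
    for _ in range(rounds):
        a = strat_a(a_last, b_last, rng)
        b = strat_b(b_last, a_last, rng)
        if rng.random() < noise:
            a = 'D' if a == 'C' else 'C'
        if rng.random() < noise:
            b = 'D' if b == 'C' else 'C'
        a_total += payoff(a, b)
        b_total += payoff(b, a)
        a_last, b_last = a, b
    return a_total / rounds, b_total / rounds

if __name__ == '__main__':
    strategies = {'Win Stay, Lose Shift': win_stay_lose_shift,
                  'Generous Tit for Tat': generous_tit_for_tat,
                  'Always Defect': always_defect}
    for name_a, strat_a in strategies.items():
        for name_b, strat_b in strategies.items():
            pa, pb = play(strat_a, strat_b)
            print(f'{name_a:22s} vs {name_b:22s}: {pa:.2f} / {pb:.2f}')
```

Running it prints the average per-round payoff for every pairing; the point is only to show mechanically how "repeat my move when doing well, change it when doing badly" behaves, including how probabilistic forgiveness recovers cooperation after a noisy slip.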
But food sharing in an altruistic group would mean an injured member could survive, ultimately making it less risky for that group to go to war. (Page 126) - Note: Risk taking behaviour for the group can only emerge if everyone knows the group has their back when they go down >[!quote] > For group selection to occur we need competition between groups and some coherence of groups. Different groups have different fitnesses, depending on the proportion of altruists. If 80 percent of a group is altruistic, it does better than a group that is enriched with only 20 percent altruists. So while selection within groups favors selfishness, those groups with many altruists do better. But, of course, the extent of group selection depends on important details, such as migration and group coherence. With this caveat, natural selection can indeed operate on several levels, from gene to groups of kin to species and perhaps beyond. (Page 130) >[!quote] > There have been some influential books, such as The Selfish Gene by Richard Dawkins, that have brought this idea to a wide and appreciative audience. (Page 132) - Note: #books >[!quote] > Hamilton’s rule, which is written as r > c/ b, is meant to predict just how much cooperation should be expected as a function of the degree of genetic relatedness. If the cost (c) of acting altruistically divided by the benefit (b) to the recipient of cooperation is less than the coefficient of relatedness (r) of the two individuals (the probability that both individuals possess the gene in question), then genes for cooperation could evolve. (Page 138) >[!quote] > Returning to Haldane’s dilemma, if you are unable to distinguish a brother from a cousin then it is hard to see how kin selection can possibly operate at all. (Page 145) >[!quote] > It is often remarked that inclusive fitness theory is a gene- centered approach. In reality, however, it is centered on the individual. Let us consider, for example, an ant colony with queen and workers. This lifestyle is an example of eusociality: there are workers who do not reproduce but instead help the queen to reproduce. Inclusive fitness theory makes the worker the center of attention, asking why the worker behaves altruistically and raises the offspring of the queen. Why doesn’t the worker ant leave the colony, mate, and raise her own offspring? To answer this question, kin selectionists believe they have to go beyond natural selection and analyze the inclusive fitness of an individual’s action, such as a worker ant helping the queen raise her offspring. They believe that before the invention of inclusive fitness there was no satisfactory answer to this question. I beg to differ. If, instead, you develop a model that is truly gene- centered, you realize there is no need to use the concept of inclusive fitness at all. All we have to ask is whether a gene linked with social behavior wins out over one linked with solitary behavior. Now the central question becomes the following: Is that social gene favored or opposed by selection? The calculation that is needed to answer this question is entirely based on standard natural selection theory. (Page 149) >[!quote] > Epstein wanted me to organize a conference on the evolution of language. This event took some planning and would eventually take place a year later at the Institute for Advanced Study. Epstein himself turned up at the start of the gathering. 
His private jet was on standby at Princeton Airport and, even though he would make a quick getaway to Paris after a short while, the meeting seemed to whet his appetite for my research. Some time later, he invited me to visit him again. A female member of Jeffrey’s household rang to make the arrangements. There would be a ticket to fly me to San Juan, Puerto Rico. From there, I would be picked up by helicopter. She casually added that she would be the pilot. Now I felt like an extra in a James Bond movie. Not long after, I found myself overlooking warm, cobalt waters. I was sitting at a stone desk in a colonnaded courtyard on a speck of land in the Caribbean. Jeffrey’s tropical island consisted of only 110 acres in all and was ringed with reefs. There was a luxury retreat with patina- finished roofs, and a mile- long beach dotted with palm trees that had been imported from Florida. To warn off pirates, a star- spangled banner fluttered high in the wind. My guesthouse had blue shutters and the interior had been decorated by artists who had been flown in from France. Several of these little houses were arranged around a fountain, courtyard, and kidney- shaped pool. There were sofas and lounge chairs dotted about, but I always liked to work at the stone table. Every day I breakfasted with Jeffrey as the sun rose. We would have endless conversations about science, about my work, what it meant and where it was headed. Jeffrey was the perfect host. I asked a casual question about what it was like to dive in the warm, clear waters around the island. The very next day a scuba diving instructor turned up. When the British cosmologist Stephen Hawking came to visit, and remarked that he had never been underwater, Jeffrey rented a submarine for him. On the last day of my visit, Jeffrey said he would build an institute for me. (Page 156) - Note: Is this why so many people were enamoured by him? >[!quote] > In 2003, after negotiations between him and our then president Larry Summers, I was able to set up the Program for Evolutionary Dynamics, PED. Summers, who went on to become the chief economic adviser to President Obama, gave me some blunt advice on how to do this within limited resources: “Spend it. There will always be enough money.” We occupied the top floor of a smart new office block in Brattle Square in Cambridge, a central spot surrounded by restaurants, boutiques, shops, and during the summer, buskers and street performers. There I could be joined by a hand- picked group of crack mathematicians, biologists, or indeed anyone who wanted to explore the remarkable power of cooperation. Some students called my program “Nowakia.” For us, Nowakia was an academic paradise. One of Harvard’s most impressive graduate students, Erez Lieberman, joked that PED could also stand for “party every day.” Depending on visitors, I had between fifteen and twenty- five people at any one time investigating all kinds of cool problems. There were plenty of undergraduate students too. We called them “Romans,” because they were formally part of a program called Research Opportunities in Mathematical Evolution. Erez Lieberman styled himself as Decurio, the name given to the leader of a Roman squad of ten people, and used it as an excuse to crack toe- curling jokes. 
If too many students showed up to work with us, Erez would announce that “all roads lead to Rome.” If a student’s research project had run into difficulties, he would comfort them with the words: “Don’t worry, Rome was not built in a day.” My empire sounds like something out of an old Marx Brothers movie. I aspire to be its Rufus T. Firefly, the legendary dictator with the statesmanship of Gladstone, the humility of Lincoln, and the wisdom of Pericles. In fact, I have a simpler objective: I want the inhabitants of PED to be fulfilled in their quest to understand nature, and happy too. I want them to have options, not obligations. They don’t work for me. I work for them. (Page 158) - Note: Sounds like a fun place to work >[!quote] > On his island paradise, Jeffrey had plenty of time to think. Several years later as we both sat at the same stone desk, he returned to one of the biggest questions of all: “What is life?” Yet he put this Biggest of Big Questions in a more interesting way. He added: “Life is the solution. But what is the problem?” After all, the orbits of the planets around the sun are the answers to how these celestial bodies “solve” Isaac Newton’s equations of gravity. The movement of electrons around the nuclei of light atoms “solves” equations of quantum mechanics. Which equations does life “solve”? (Page 159) >[!quote] > It is easy to see why the idea of the quasispecies, a grouping of slightly different but related RNA molecules, is so powerful. You might think that the success of any single RNA replicator depends only on its own ability to make itself and thus on its own rate of replication. But this is not the case: it also depends on the replication rate of nearby mutants. The reason is that, after the right mutation, these nearby mutants can also generate the original RNA replicator. The different close-by sequences can “cooperate” with each other via mutation. In this way, natural selection picks the fittest cloud of RNAs (quasispecies) rather than the fittest RNA sequence itself. The consequences can be counterintuitive when it comes to the game of life. Imagine two RNA sequences, A and B. Let’s assume that A has a higher replication rate than B, thus it has a higher fitness value. The conventional view is that A should win the game of life. Correct? Not so fast. Let’s suppose that A is surrounded by mutants with very low fitness, so it is atop a sharp peak, while B is surrounded by mutants with relatively high fitness, more like a tabletop mountain. The tabletop mountain is at a lower altitude than A’s narrow peak. Play with the equations, as Peter Schuster did at the University of Vienna with Jörg Swetina, and you find that as the mutation rate increases A loses the competition with B. There is a critical mutation rate, below which A is the winner, but above which B and its neighbors are favored, and this can be worked out with the “quasispecies equation.” (Page 166) >[!quote] > This is a kinetic model, one that gives the broad brush picture of how the parts of the hypercycle work together. Each replicator is small enough not to run into the error limit and so can be made with high fidelity. Nor can any one replicator take over the brew: the ensemble is such that the RNA actors depend on one another for a successful performance. By establishing a hypercycle of individual RNA sequences, each one below the information error threshold, a larger genetic message could be stored, such as one capable of making proteins to check for errors and to repair them too. (Page 171) - Note: I see parallels with how big complex software systems are built reliably by building them in small highly specific functional pieces
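The Page 166 highlight above states the peak-versus-tabletop result without the equations. The sketch below iterates a discrete-time version of the quasispecies dynamics on an invented fitness landscape: sequence A sits on a sharp peak, sequence B on a tabletop, and a sweep over the per-bit mutation rate u should show the population shifting from A's peak to B's plateau somewhere in the swept range. Genome length, fitness values, and mutation rates are all made-up illustrative numbers, not taken from Schuster and Swetina's work.

```python
import itertools
import numpy as np

L = 8                                   # genome length (binary), 2**L sequences
seqs = np.array(list(itertools.product([0, 1], repeat=L)))
n = len(seqs)

# Invented fitness landscape (illustrative only):
#  - sequence A = 00000000 sits on a sharp peak (high fitness, unfit neighbors)
#  - sequence B = 11111111 sits on a "tabletop" (lower peak, fit neighbors)
dist_to_A = seqs.sum(axis=1)            # Hamming distance to all-zeros
dist_to_B = L - dist_to_A               # Hamming distance to all-ones
fitness = np.full(n, 0.5)
fitness[dist_to_B <= 2] = 1.8           # the tabletop around B
fitness[dist_to_B == 0] = 2.0           # B itself
fitness[dist_to_A == 1] = 0.05          # A's unfit shoulder makes the peak sharp
fitness[dist_to_A == 0] = 4.0           # A, the fitter sequence

def quasispecies_equilibrium(u):
    """Stationary cloud of the mutation-selection dynamics x' ∝ x·diag(f)·Q,
    where Q[i, j] = u^d (1-u)^(L-d) is the per-copy mutation matrix."""
    d = (seqs[:, None, :] != seqs[None, :, :]).sum(axis=2)
    Q = (u ** d) * ((1 - u) ** (L - d))
    x = np.full(n, 1.0 / n)
    for _ in range(2000):               # power iteration to the leading eigenvector
        x = (x * fitness) @ Q
        x /= x.sum()
    return x

for u in [0.001, 0.02, 0.05, 0.08, 0.12, 0.2]:
    x = quasispecies_equilibrium(u)
    a_region = x[dist_to_A <= 1].sum()  # mass on A and its shoulder
    b_region = x[dist_to_B <= 2].sum()  # mass on the tabletop
    winner = 'A (sharp peak)' if a_region > b_region else 'B (tabletop)'
    print(f'u = {u:5.3f}  A-region = {a_region:.3f}  B-region = {b_region:.3f}  -> {winner}')
```

The power iteration converges to the leading eigenvector of the mutation-selection matrix, which is the stationary "cloud" the highlight calls the fittest quasispecies; the printed sweep is meant to expose the critical mutation rate at which the winner flips.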
>[!quote] > If we assume that cells with the most successfully replicating— and thus cooperating— hypercycles divide most rapidly, then these cells would thrive relative to those that had hypercycles that are being challenged with cheats. By evoking a higher level of selection, acting between cells, hypercycles can shrug off parasites. This is a beautiful example of multilevel selection, where each cell acts as a group of replicators. Within cells, defectors might win. But cells without defectors can outcompete cells with defectors. Cooperation rules. (Page 175) - Note: #books >[!quote] > He described the evolution over the eons from genes to chromosomes to cells to language in the book The Major Transitions in Evolution. (Page 175) #### CHAPTER 7 Society of Cells >[!quote] > We are all cells in the same body of humanity.—Peace Pilgrim (Mildred Lisette Norman) (Page 180) ##### This awful disease is but one consequence of a collapse of cooperation, when our single-celled heritage reasserts itself. Cancer is the price that we pay for having complex bodies built by an extraordinary level of cooperation. >[!quote] > WHEN COOPERATION FAILS (Page 186) #### CHAPTER 8 The Lord of the Ants >[!quote] > The different cell types in multicellular organisms are analogous to the different castes in a beehive, with workers constituting the soma— body tissue— and the queens representing the germ line, eggs and sperm. And, just as the body has mechanisms to weed out sickly cells, by apoptosis, a bee colony can regulate the lifespan of its members. The genome of our bodies is “optimized” by natural selection to build a good level of cooperation between germ line cells and soma cells with the help of apoptosis and various other processes. The same goes for breeding “good” workers and “good” queens. When I say good, I mean that they successfully reproduce and cooperate. (Page 203) >[!quote] > Finally, it is fascinating to contrast two-legged and six-legged society. Both owe their success to cooperation and division of labor. Both rely on multilevel selection, where there has been competition between groups. But, of course, ants are ruled by instinct alone while, thanks to language, we also have swiftly evolving cultures. Before we feel too smug, we should remember that after only 200,000 years, we humans are in danger of overwhelming our planet while the ants have lived in harmony with it for 100 million years. Wilson likes to point out that both our civilization and that of the leaf-cutter owe their existence to agriculture. What is remarkable is that while our relationship with plants catapulted our species out of its hunter-gatherer lifestyle around 10,000 years ago, some social insects had achieved this transition 60 million years earlier. There may be parallels between the scenarios of animal and human eusocial evolution, and they are, we believe, well worth examining to shed light on how we made the step from wandering tribes of hunter-gatherers to hamlets, towns, and cities. Queen leaf-cutter, who rules over the greatest super-organism, still has much to teach us.
(Page 217) #### CHAPTER 9 The Gift of the Gab >[!quote] > That is because language provided a vast new stage upon which Darwin’s struggle for existence could play out, a novel mode of evolution and a remarkable spur for cooperation, even among people who are separated in time and in space. (Page 220) >[!quote] > Most creatures don’t elaborate in the remarkable way that we do because syntax comes at a heavy cost. It requires a big power- hungry brain and a degree of mental exertion to string words together in the right way. Natural selection can only see the advantage of this brain expenditure on grammar if the number of relevant communication topics (that affect fitness) are above a certain threshold. If there are only ten things to say to each other (“ water,” “enemy,” “sex?” and so on), there’s no point in going through all the fuss and bother of inventing syntax. Having syntax only pays if your ecology includes the complicated politics of indirect reciprocity and there is much to discuss, from boyfriends to iPods to quantitative easing. In a simple environment, where all there is to gossip about is the next banana and the occasional loitering lion, a few grunts and shrieks will do nicely. (Page 237) >[!quote] > In a population there is always a fog of conflicting information. People might use slightly different grammars. Some folks are cooler than others and inspire more imitators. Also the evolutionary model spans many generations of successive speakers. Like the language of DNA, spoken languages form, mutate, and compete with each other over many generations. To solve this problem, I extended my work on evolutionary game theory to language by devising the following model. (Page 242) #### CHAPTER 10 Public Goods >[!quote] > One fundamental conclusion drawn by Milinski and his team is that the public must be well informed about the risks of climate change. Ordinary people must have a reasonable understanding of what is going on with the global ecosystem. If the public is misled into thinking that the risk is small, then they will not cooperate. If people know that the risk is high, then they will be much more inclined to club together to curb climate change. The role of scientists must be to provide honest, reliable information. If they embellish and inflate the risk, then there is a danger that they will lose the confidence of the public. To cry wolf can turn out to be as damaging as underplaying the risks. There are many who feel that the dangers of BSE (“ mad cow” disease), AIDS, and swine flu were exaggerated (and, of course, there are many experts who rightly counter these arguments by pointing out that the death tolls would have been much worse if they had underplayed the risks). Like some other highly charged aspects of science, such as embryonic stem cell research, germ line gene therapy, and conservation, passionate advocates must take care not to spin and distort, even if they mean to back a good cause. They must accept the results of good quality research and peer- reviewed studies, even if they undermine their beliefs. They must focus on the positive effects of climate change as much as the negative. There is a related issue: public understanding of science. Many climate change predictions are couched in terms of risks and probabilities. They rest on making certain assumptions. 
When presenting this information to a public that is hazy about the difference between climate and weather, or finds it hard to work out a percentage, even a clear, carefully drafted message can be misinterpreted. There is evidence in Britain, for example, that careless presentation of seasonal forecasts harmed public confidence in the predictions. (Page 270) >[!quote] > One way that we can become more familiar with the Tragedy of the Commons is for us all to play the kinds of games devised by Milinski. Let’s do it at company retreats, at schools, and in the home. Let’s devise a fun version for the web. We all need to get the feel for being involved in a global- scaled “collective- risk social dilemma” and learn strategies for its solution. (Page 272) - Note: Something to do? >[!quote] > The decision to paint eyes that seem to see members of a tribe exploits the fact that the more people know that they are being watched, the more charitable they become. Cooperation kindled by indirect reciprocity has led to an arms race when it comes to establishing one’s own reputation and discerning the reputations of others. No wonder that George Orwell’s Big Brother, the dictator of Oceania, was always watching the citizens of the totalitarian state, or that religions contain the idea of an omnipresent God who “sees through everything.” Or indeed that the symbol of moral pressure is the ever- watchful eye in heaven. For millennia, this link between behavior and being observed has been used by religions to make traditional societies more honest and fair. They remind us that our actions have consequences. (Page 273) >[!quote] > Just the thought that we are being observed is very persuasive. One can even think of conscience, our inner sense of right and wrong, as a gauge of how we will be viewed by others. Even two eyespots on a computer screen background are enough to boost generosity. Indeed, the electrical activity recorded emanating from the scalp of normal subjects has been shown to register more activity in response to isolated eyes than it does to full faces. The effect was neatly illustrated by a little experiment carried out at Newcastle University, Newcastle upon Tyne in the UK. The common room in the university’s psychology department had an “honesty box” in which fifty students, staff, and academics were asked to pay for tea, coffee, and milk. The system had been operating for many years, so users had no reason to suspect they were being used as guinea pigs in an experiment. Over ten weeks, the researchers placed a sign on the door of the cupboard where the honesty box sat above the kettle and coffeemaker. Pictures of flowers alternated on a weekly basis with pictures of eyes— male or female, always looking directly at the observer. The expressions ranged from alert and watchful to manic. Every week the money collected in the honesty box was counted up. On weeks when the eyes image was shown, takings were almost three times more than during the flower weeks. The eye pictures were probably influential because they made the coffee drinkers fret about what others would think of them. There’s evidence that a robot with large, humanlike eyes can have the same effect. The eyes seem to make us more aware that if we advertise we are good, we improve our chances of being helped at some future date. (Page 274) >[!quote] > CHAPTER 11 Punish and Perish (Page 279) - Note: Hmmm... Sounds like religious people... 
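The Page 272 highlight above suggests building playable versions of Milinski's "collective-risk social dilemma". Below is a minimal Monte Carlo sketch of such a game, loosely modeled on the published climate-game experiments: everyone can pay into a common account over several rounds, and if the group misses the target, each player loses their remaining money with high probability. Group size, endowment, target, risk level, and the toy contribution policies are all invented for illustration.

```python
import random

def collective_risk_game(policies, rounds=10, endowment=40, target=120,
                         risk=0.9, seed=1, trials=2000):
    """Monte Carlo sketch of a collective-risk social dilemma: each round every
    player may pay into a common account; if the group total misses the target,
    everyone loses their remaining money with probability `risk`. All parameter
    values here are illustrative assumptions. Returns the average money each
    player walks away with."""
    rng = random.Random(seed)
    n = len(policies)
    totals = [0.0] * n
    for _ in range(trials):
        kept = [float(endowment)] * n
        pot = 0.0
        for r in range(rounds):
            for i, policy in enumerate(policies):
                give = min(policy(r, pot, target, rounds, n), kept[i])
                kept[i] -= give
                pot += give
        if pot < target and rng.random() < risk:
            kept = [0.0] * n            # simulated disaster wipes out all savings
        for i in range(n):
            totals[i] += kept[i]
    return [t / trials for t in totals]

# A few toy policies (hypothetical, for illustration only).
def fair_share(r, pot, target, rounds, n):
    return target / (rounds * n)        # steady contribution that exactly meets the target

def free_rider(r, pot, target, rounds, n):
    return 0

def panic_late(r, pot, target, rounds, n):
    return 0 if r < rounds - 2 else target / (2 * n)   # waits, then tries to catch up

if __name__ == '__main__':
    for mix, label in [([fair_share] * 6, 'six fair-sharers'),
                       ([fair_share] * 4 + [free_rider] * 2, 'four fair-sharers, two free riders'),
                       ([panic_late] * 6, 'six last-minute contributors')]:
        print(label, '->', [round(x, 1) for x in collective_risk_game(mix)])
```

With these invented numbers, six fair-sharers each walk away with about 20, while a group carrying two free riders usually loses everything, which is the dilemma in miniature.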
>[!quote] > If people are good only because they fear punishment, and hope for reward, then we are a sorry lot indeed.—Albert Einstein (Page 279) >[!quote] > The payoff matrix in the Dilemma is such that “defection” could mean one of three things: First, withholding reward. So instead of cooperating, I do nothing at all. Think of the student who refuses to cook dinner for his flatmates when one of them refuses to do the washing up. Second is stealing. Now I take something from you and, as a result, you suffer a loss and I have a gain. Third, there is costly punishment, where I suffer a loss but, as a result of this, you suffer a bigger one. Perhaps a neighbor opposes your planning application because you did not keep your dog quiet at night. The neighbor is prepared to suffer a cost— your opprobrium and his time— to inflict a much bigger one in terms of denying you your dream of building an extension on your house. (Page 282) >[!quote] > When in our evolutionary history had our ancestors encountered the peculiar situation seen in the Fehr- Gächter game, in which we know exactly who contributed what to the public goods game, but not who punished us in the subsequent round? (Page 285) - Note: Online rage trolls? >[!quote] > Because extraordinary circumstances can generate extraordinary behavior, David, Anna, and I decided to see if repeating encounters between players made a difference to the effectiveness of costly punishment. Working with Drew Fudenberg of Harvard, a leading expert in economic game theory, we asked 104 college students to take part in a variant of the repeated two- player Prisoner’s Dilemma. We made pairs of subjects play repeated games with three options. The two players had a choice between cooperation, defection, and costly punishment. “Cooperation” meant paying one dollar for the other person to receive three dollars. “Defection” meant taking away one dollar from the other person. “Punishment” meant paying one dollar for the other person to lose four. The important difference between our setup and earlier experiments was this: we allowed them to slake their thirst for revenge. If Alice punished Bob, then Bob had the opportunity to punish Alice in the next round. We monitored 1,230 repeated interactions between pairs of players, each lasting between one and nine rounds. It was fascinating to observe. In the second figure of the Nature paper that carried the results, we showed the effects of costly punishment. What was gratifying was that nice people finished first. A pair who cooperated over four rounds were equally ranked first in terms of payoff. Those who turned the other cheek did well too. When one person cooperated in two consecutive rounds, despite being matched with defection each time, he or she went on to cooperate and ended up being sixth overall. The defector, meanwhile, was converted to cooperation in the last three rounds and ended up nineteenth. A cooperator was so enraged by a defection that he or she responded with punishment. After five rounds, the defector had not cooperated in the face of punishment. They ended up twenty- fifth and twenty- second respectively. There was also the cooperator who when confronted with a defector responded with punishment. That triggered retaliatory punishment from the defector, then round after round of punishment and counterpunishment. The players of this particular game of mutually assured destruction were ranked thirtieth and twenty- fifth. In one case we had evidence of the consequences of preemptive strikes. 
After mutual cooperation, one person punished and this again triggered mutual defection. The punisher was ranked twenty- ninth and his fellow player twenty- fourth. What leapt out at us from the study was the clear link between punishment and poor results. The six best- performing people never used punishment. In contrast, the worst- performing people used punishment most often. Thus winners do not punish, losers do. (Page 288) ##### REWARD IS BETTER THAN PUNISHMENT >[!quote] > In all, 192 people took part in this next experiment. They were divided into groups of 4, with 16 control groups and 32 experimental groups. In effect, we followed the original Fehr- Gächter recipe but with an extra ingredient. We allowed repeated rather than anonymous encounters, so there was an opportunity to know who was punishing whom. We had three “treatments” and one control experiment. The control was a standard public goods game. Treatment one allowed punishment, treatment two allowed reward, and treatment three allowed punishment and reward. The costs were as follows: “Punishment” meant you paid something for someone to lose something. “Reward” meant you paid something for someone to gain something. Each group had a marathon session of fifty rounds. All three treatment groups saw cooperation emerge, unlike the control group, where it floundered in a traditional Tragedy of the Commons. But even though punishment led to cooperation, it was costly and the total payoff was as low as in the control group. The reward treatments also maintained high contributions in the public goods game, but, significantly, the total payoff in the reward session was much higher than in the control session. We found that if both reward and punishment are on offer, then the winning groups do not use punishment, which turns out to be both costly and ineffective. Rewards go further than punishment in both benefiting the public good and in building cooperation, despite the efforts of free riders. A simple idea would emerge from this experiment: the Tragedy of the Commons can be solved by linking the public goods game to games with targeted interactions. By this I mean that, rather than withdraw your cooperation, which affects all players in a traditional public goods game, you withdraw it only from those that defect and, even better, reward those that cooperate. Cooperators in the public goods game gain a reputation, which makes them more attractive partners for other cooperators in private— one to one— dealings, just as a company with good green credentials will find it easier to win business. This recipe for cooperation is simple and effective. When our paper appeared in the journal Science in 2009, it was entitled “Positive Interactions Promote Public Cooperation.” (Page 290) ##### ANTISOCIAL PUNISHMENT >[!quote] > Another experiment that cast doubt on the efficiency of costly punishment in promoting cooperation was performed by Benedikt Herrmann, Christian Thöni, and Simon Gächter. They studied the behavior of people in sixteen cities, from Boston and Bonn to Riyadh, Minsk, Nottingham, Seoul, and others, marking at that time the largest cross- cultural study of experimental games in the developed world. As before, they played a public goods game in which participants were given chips and told they could either keep them all for themselves or put them into a common “pot” that would yield extra interest that would be shared out equally among all players. 
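Looking back at the Page 288 highlight, it gives the exact money values of the three options in the repeated two-player game: cooperation costs me $1 and gives you $3, defection takes $1 from you, and costly punishment costs me $1 and takes $4 from you. A tiny payoff calculator makes the "winners don't punish" pattern easy to see; the example histories are invented, not the actual experimental transcripts.

```python
# Per-round money changes taken from the Page 288 description:
#   'C' (cooperate): I pay $1, the other person receives $3.
#   'D' (defect):    I take $1 away from the other person.
#   'P' (punish):    I pay $1, the other person loses $4.
COST = {'C': -1, 'D': +1, 'P': -1}      # what my own choice does to my balance
IMPACT = {'C': +3, 'D': -1, 'P': -4}    # what my choice does to the other player's balance

def score(history):
    """Total dollars for two players given a list of (move_a, move_b) rounds."""
    a = b = 0
    for move_a, move_b in history:
        a += COST[move_a] + IMPACT[move_b]
        b += COST[move_b] + IMPACT[move_a]
    return a, b

# Illustrative five-round histories (invented):
examples = {
    'mutual cooperation':      [('C', 'C')] * 5,
    'turning the other cheek': [('C', 'D'), ('C', 'D'), ('C', 'C'), ('C', 'C'), ('C', 'C')],
    'punishment feud':         [('C', 'D'), ('P', 'P'), ('P', 'P'), ('P', 'P'), ('P', 'P')],
}
for label, history in examples.items():
    print(f'{label:26s} -> payoffs {score(history)}')
```

Mutual cooperation earns both players $10 over five rounds, while the punishment feud leaves both deep in the red, echoing the mutually assured destruction described in the quote.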
Over ten rounds of the game, 1,120 middle- class college students in Boston and Copenhagen contributed about eighteen chips each; those in Athens, Riyadh, and Istanbul, only six. The most cooperative participants, who contributed up to 90 percent of their chips, contributed 3.1 times as much as the least cooperative, who had an average contribution of 29 percent of their chips. Behavior varied dramatically when the players were given the ability to punish another by taking tokens away. As the earlier work had shown, players were willing to part with a token of their own in order to punish the low investors or the freeloaders who had exploited others. But in the international version of this game, striking national differences would arise. When freeloaders were punished for putting their own interests ahead of the common good in countries such as the United States, Switzerland, and the United Kingdom, the freeloaders accepted their punishment and became more cooperative and the earnings in the game increased over time. However, in countries such as Greece and Russia, the freeloaders sought retribution. Because they used the same setup as the original Fehr- Gächter experiment, there was no possibility for targeted revenge. Instead they lashed out generally at cooperators. Presumably they reasoned that they could preempt their next punishment this way, or reasoned that they were the likeliest people to have punished them in a previous round. Perhaps free riders punished cooperators as a show of dominance, to say something like “These cooperators are fools, stupid and weak for not keeping everything for themselves, and I will punish them to show them who is boss.” The study seemed to confirm the stereotypes that the British have a sense of fair play, while the Greeks thirst for revenge. Players in Athens and Muscat had the highest level of revenge punishments, retaliating against the enforcers— punishing cooperators— about six times as often as did students in Seoul, Bonn, Nottingham, and other cities. Samarra, Minsk, Istanbul, and Riyadh were somewhere in the middle. What is also fascinating is that the thirst for retaliation and dominance seemed to track measures of the norms of civic cooperation and rule of law that had been made by social scientists in what is called the World Democracy Audit. These norms covered general attitudes toward the law— for example, whether or not citizens think it is acceptable to dodge taxes or flout rules— and weighed up political rights, civil liberties, press freedom, and corruption. In societies where public cooperation is ingrained and people trust the police and their law enforcement institutions, revenge is generally shunned. But in those societies where the rule of law is perceived to be ineffective— that is, if criminal acts frequently go unpunished— antisocial “revenge” punishment thrives where a defector punishes a cooperator. Cooperation is very strongly inhibited as a result, so there is an incentive to take a free ride and ignore civic- minded initiatives such as recycling, neighborhood watches, voting, maintaining the local environment, tackling climate change, and so on. Importantly, the work also revealed that punishment did not always increase cooperation in subsequent rounds of the international games. Cooperation in about half of the participant pools remained at the initial level, and the higher the level of antisocial punishment in a participant pool, the lower was the rate of increase in cooperation. 
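To see why the antisocial punishment reported by Herrmann, Thöni, and Gächter is so corrosive, here is a one-round sketch of a four-player public goods game with a punishment stage. The pot multiplier and the 1:3 cost-to-impact ratio of punishment are typical laboratory values used only for illustration; these highlights do not quote the exact parameters of the sixteen-city study.

```python
def public_goods_round(contributions, punishments, endowment=20, multiplier=1.6,
                       punish_cost=1, punish_impact=3):
    """One round of a four-player public goods game with a punishment stage.
    contributions[i] = chips player i puts into the common pot; the pot is
    multiplied and shared equally. punishments[i][j] = punishment points that
    player i assigns to player j (i pays punish_cost per point, j loses
    punish_impact per point). Parameter values are illustrative assumptions."""
    n = len(contributions)
    pot_share = multiplier * sum(contributions) / n
    payoffs = [endowment - contributions[i] + pot_share for i in range(n)]
    for i in range(n):
        for j in range(n):
            payoffs[i] -= punish_cost * punishments[i][j]
            payoffs[j] -= punish_impact * punishments[i][j]
    return [round(p, 1) for p in payoffs]

none = [[0] * 4 for _ in range(4)]

# Three cooperators (20 chips each) and one free rider (0 chips), no punishment:
print(public_goods_round([20, 20, 20, 0], none))

# Same round, but the free rider (player 3) is punished by the cooperators:
targeted = [[0, 0, 0, 2], [0, 0, 0, 2], [0, 0, 0, 2], [0, 0, 0, 0]]
print(public_goods_round([20, 20, 20, 0], targeted))

# Antisocial punishment: the free rider punishes one of the cooperators back:
antisocial = [[0, 0, 0, 2], [0, 0, 0, 2], [0, 0, 0, 2], [4, 0, 0, 0]]
print(public_goods_round([20, 20, 20, 0], antisocial))
```

In the last case the free rider still ends up level with two of the cooperators, while the cooperator who did the punishing is left worst off, which is roughly how revenge punishment suppresses cooperation in the quoted results.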
At best, “altruistic punishment” did not help people to cooperate very much. This seemed to me to capture something of the flavor of real life. (Page 292) - Note: I found this section quite interesting >[!quote] > There is another interesting corollary to the idea that a targeted response is more effective. Let’s say that you want a workforce of two hundred engineers to build a fast, luxurious automobile. Each of them plays a role in the production process and so, to make them more efficient, you decide to introduce a regime where the failure to add a component, say a screw, leads to a fine. I am sure that this form of punishment would make the engineers take more care when installing the components. However, I am also sure they will do the minimum necessary to fulfill their contract. But what if you reward them for success— for instance when you sell more automobiles? If you allow the engineers to take a share of the profits, you may well find that they are inspired to do much more than ensure the correct component is added at the right time. Instead of simply meeting their obligations, they might come up with new production processes and regimes. They might reorganize the flow of components, or find ways to fit more than one at a time. Reward leads to more creative forms of cooperation than punishment. Reward does more than make us work more effectively together— it stimulates creativity too. Reward, not necessity, is the true mother of invention. (Page 295) #### CHAPTER 12 How Many Friends Are Too Many? ##### BRIEF HISTORY OF NETWORKS >[!quote] > In one experiment, Milgram sent packages to 160 people randomly selected in Omaha, Nebraska, asking them to forward the package to a friend or acquaintance who they thought would help bring the package closer to a target person, a stockbroker who lived in Boston, Massachusetts. Amazingly, given the tens of millions of people in America, his experiment suggested that there tended to be just six people on average linking one person with any another— giving rise to the popular notion that we all may be connected by just six degrees of separation. (Page 301) - Note: Now I know the origin story >[!quote] > Erdős is also known for the idea of the Erdős number— which measures the “collaborative distance” in authoring papers. The lower the number, the closer a person is to Erdős. This is a matter of considerable pride among mathematicians. We start with his own number, which is 0. Erdős’s coauthors have an Erdős number of 1. People other than Erdős who have written a joint paper with someone with an Erdős number of 1 but not with Erdős himself have an Erdős number of 2. By the same reasoning, my own Erdős number is 3. If you count books, Roger Highfield’s is 4. And so on and so forth. If there is no chain of coauthorships connecting someone with the great Erdős, then that person’s Erdős number is said to be infinite. A similar exercise can be conducted for any other individual. (Page 302) >[!quote] > Albert-László Barabási, (Page 303) - Note: [[Albert Laszlo Barabasi]] ##### ONLY CONNECT >[!quote] > Another surprise came with indirect relationships. Again, while an individual becoming happy increases his friend’s chances of smiling, a friend of that friend experiences a nearly 10 percent chance of increased happiness, and a friend of that friend has around a 6 percent increased chance— a three-degree cascade of good humor.
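The Erdős number described in the Page 302 highlight is just a breadth-first search over the co-authorship graph, and Milgram's six degrees (Page 301) is the same computation on a friendship graph. A minimal sketch, using an invented set of papers; only the resulting numbers 3 and 4 echo the quote, the chain itself is hypothetical.

```python
from collections import deque

def erdos_numbers(coauthorships, root='Erdős'):
    """Breadth-first search over a co-authorship graph. The root gets 0, direct
    coauthors 1, their coauthors 2, and so on; anyone unreachable is left out,
    which corresponds to an 'infinite' Erdős number."""
    graph = {}
    for a, b in coauthorships:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    numbers = {root: 0}
    queue = deque([root])
    while queue:
        person = queue.popleft()
        for coauthor in graph.get(person, ()):
            if coauthor not in numbers:
                numbers[coauthor] = numbers[person] + 1
                queue.append(coauthor)
    return numbers

# Hypothetical papers, just to exercise the search:
papers = [('Erdős', 'A'), ('A', 'B'), ('B', 'Nowak'), ('Nowak', 'Highfield'),
          ('X', 'Y')]             # X and Y are cut off from Erdős entirely
print(erdos_numbers(papers))      # X and Y are absent: their number is infinite
```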
Thus, your actions and moods— whether blue or jolly— affect your friends, your friends’ friends, and your friends’ friends’ friends. Fowler and Christakis have even done experiments in which strangers are randomly assigned to interact with each other, and they have found that altruistic, cooperative behavior also spreads to three degrees. Thereafter your influence fades away from the network like the smile of the Cheshire Cat. “While all people are on average six degrees separated from each other, our ability to influence others appears to stretch to only three degrees,” says Christakis. “It’s the difference between the structure and function of social networks.” Alison Hill and Dave Rand have studied the data to determine if happiness and depression behave like infectious diseases. It turns out that they do. They also found an interesting twist: happiness is more infectious than depression, so that the average lifetime of a happiness “infection” is about a decade, compared with five years in the case of unhappiness. (Page 305) ##### EVOLVING NETWORKS >[!quote] > But in doing this, we also found amplifiers and suppressors of selection. Networks that act like amplifiers can augment the chances of advantageous mutants. That boosts their ability to take over a population. By the same token, suppressors reduce the chances of advantageous mutants. Depending on how they guide natural selection, these graphs have different anatomies. Amplifiers are often starlike structures. The World Wide Web is one potential example, where there are hubs of highly connected individuals. These hubs are hotspots of evolution. Another example of an amplifier network consists of a funnel, where one node is connected to three, then nine more, and so on until the structure wraps back to our first node. Or even a metafunnel, where many funnels sprout from a single node, or a superstar, which looks like a manypetaled daisy. Structures like the superstar and the metafunnel are supercharged amplifiers of selection. They virtually guarantee the fixation of any advantageous mutant. In such a population, a good idea can never be lost. Suppressors, on the other hand, tend to be organized into hierarchies. Small upstream populations that feed into large downstream populations act as suppressors of selection. In such populations, there is a good chance that innovations are overlooked. This kind of network can be found in the organization of tissues, for example. As I described earlier, in my work on stem cells, crypts, and cancer, there are many tissues in the human body with a population structure that damps down selection. A single stem cell divides and produces differentiated cells, which differentiate further until they give rise to terminally differentiated cells, which eventually die. All cells are the offspring of the stem cell but only the stem cell makes more of its own kind. In this way, we have evolved a tissue design that is meant to fight off cancer for as long as possible (in the context of a normal human lifespan). (Page 308) - Note: How does this relate to organisational structure and innovation? ##### NETWORK GAMES >[!quote] > There was a simple link between cooperation and the structure of a network. Overall, the game was easier for cooperators if each individual had fewer neighbors. The average number of neighbors is called the degree of the graph, k. For example, a cycle has a degree two, because each individual has two neighbors. 
What took us aback was that the computer simulations seemed to suggest the following simple rule held sway over all kinds of networks: if the benefit- to- cost ratio is greater than the degree of the graph, then cooperators are more abundant than defectors. We were surprised and excited that such an elegant rule might hold. (Page 313) - Note: How is this relevant to myself and to Excalibur? >[!quote] > As I had mentioned at the start of this chapter, defectors always beat cooperators when they encounter each other in a well- mixed population. But on a graph, cooperation can thrive when cooperators huddle together to form clusters. From Hisashi’s rule we can see that it is easier to form a cluster if each individual is only connected to a few others. The fewer the neighbors, k, means the smaller the benefit- to- cost ratio that is required for cooperation to thrive. (Page 313) >[!quote] > Once again, the update rule is very important because it specifies how players learn from each other. There are many different plausible update rules and it turns out that any given population structure can support evolution of cooperation for certain update rules but not for others. Cooperation can emerge if the update rule is extrovert and says: Which of my friends is doing well? Is he a cooperator or a defector? If the former, then cooperate. However, cooperation cannot thrive if the update rule is introvert and says: I compare myself with a friend; if I am doing better I stay with my strategy; if my friend is doing better I adopt her strategy. The reason for the difference resonates with the example I used at the start of the chapter. Let’s use an outgoing update strategy. I want to learn from my friends who seem to be fashionable, whether in terms of what they wear or the music they like to listen to. I look at what they like, then buy the same clothes and download the same tracks. This leads to cooperation. Now let’s use a myopic and self- centered update rule. What has made me successful so far? Choosing these particular clothes and songs. I decide to adopt this strategy, no matter what. An inevitable consequence of this update rule is that I erode cooperation in my network. (Page 313) #### CHAPTER 13 Game, Set, and Match >[!quote] > In fact we all belong to clubs. Even Groucho. Society is a vast and sprawling multidimensional tapestry of clubs. They don’t have to depend on formalities, such as having to wear the right tie. They can be allegiances. They can be friendships too. Or simply sets of people with aligned interests. Membership in the same organization becomes a good reason for friendship to bloom between two people and for a link to be forged between their respective social networks. (Page 316) ##### A NEW DAWN >[!quote] > Evolutionary Dynamics. (Page 320) - Note: #books ##### TRENDSETTING >[!quote] > One simple conclusion that we could draw from Corina’s work was that the more sets there are, the better it is for cooperation. The reason is that when there are more sets, cooperators have more opportunities to escape. It is easier for them to find sets that are free of the troublesome defectors that could exploit them. (Page 327) >[!quote] > individuals might only begin to interact with each other if they have several sets in common. When I realize that the person who has joined my tennis club also studies theoretical biology, for example, then I will be more likely to begin collaborating with her. 
(Page 327) ##### With intermediate mobility, cooperators have a chance to hang around for long enough to benefit each other, but they can also escape from defection by colonizing new sets. The process is guided by natural selection: if a few cooperators find a new set without any defectors then they do well and attract more cooperators. Only after some time one of them might switch to defection and, by doing this, destroy the happy situation that had once thrived there. Then the resident cooperators have to find a new set. Because it is harder for sets with defectors to attract new members, over time they dwindle and, eventually, become empty. >[!quote] > COOPERATIVE DILEMMAS (Page 328) >[!quote] > Overall, a cooperative dilemma is defined by R being greater than P— in other words mutual cooperation is greater than mutual defection— and at least one of the following temptations to defect: T is greater than R; P is greater than S; or T is greater than S. When T is greater than R, it means that if the other person cooperates it is better for me to defect; P greater than S means that if the other person defects, it is better for me to defect too; and T greater than S means that in a pair formed of one cooperator and one defector, it is better for me to be the defector. If none of the three incentives to defect are present, then the game is not a cooperative dilemma. In this case “cooperation” is just the best thing to do. It is a no- brainer. In his explorations of this world of cooperative dilemmas, Tibor Antal had come up with a beautiful result. Let us consider a well- mixed population, where every player is equally likely to interact with every other. Individuals play the game, accumulate payoff, and tend to imitate the strategy of other successful players. Hence there is natural selection between the two strategies that is proportional to payoff. But let us also introduce mutation between the two strategies, which means people can sometimes switch from cooperation to defection at random. Tibor showed that cooperators are on average more abundant than defectors if R + S is greater than T + P. What does this condition tell us? R + S is simply the average payoff that a cooperator receives if he is equally likely to encounter cooperators or defectors. Similarly, T + P is the average payoff that a defector receives if he is equally likely to encounter cooperators or defectors. (In both cases we have omitted the factor 1⁄2 because it cancels out.) The condition “R + S is greater than T + P ” means that the average payoff of a cooperator is greater than that of a defector. This inequality could never hold true for the Prisoner’s Dilemma. In a well- mixed population, if all players are in the grip of this Dilemma, then cooperators are always worse off than defectors. But for other cooperative dilemmas, the condition might hold. Then it might pay to cooperate, even in a well- mixed population. (Page 329) ##### GEOMETRY OF COOPERATION #### CHAPTER 14 Crescendo of Cooperation >[!quote] > Of all his works, one of the most striking is Symphony no. 8, (Page 334) ##### MECHANICS OF COOPERATION >[!quote] > 1. Repetition (direct reciprocity), (Page 336) >[!quote] > 2. Reputation (indirect reciprocity). (Page 337) >[!quote] > We found that indirect reciprocity can only promote cooperation if the probability of knowing someone’s reputation exceeds the cost- to- benefit ratio of the altruistic act; (Page 337) >[!quote] > 3. Spatial selection. 
(Page 338) >[!quote] > A surprisingly simple rule determines whether cooperation can bud and flower on graphs. The benefit- to- cost ratio must exceed the average number of neighbors per individual. (Page 338) >[!quote] > 4. Multilevel selection. (Page 338) >[!quote] > Multilevel (group) selection allows the evolution of cooperation, provided that one thing holds good: the ratio of the benefits to cost is greater than one plus the ratio of group size to number of groups. (Page 338) >[!quote] > 5. Kin selection. (Page 339) >[!quote] > What I find amazing is that these calculations show that despite nature’s competitive setting— based on natural selection— the winning strategies of direct and indirect reciprocity must have the following “charitable” attributes: be hopeful, generous, and forgiving. Hopeful here means that if I meet a newcomer then I hope that I can establish the basis of cooperation with him by making an effort to cooperate. Forgiving means that if someone defects, then I will work hard to reestablish a relationship based on cooperation. Generous means that in most of my interactions with other people, I do not adopt a myopic perspective. I do not moan about who is doing better than me and who is getting the bigger share of the pie. Instead, I am content with equal or even slightly smaller shares but enjoy many productive and helpful interactions overall; now many more pies get shared. (Page 339) ##### THE NEXT STEP FOR MANKIND >[!quote] > What we learned in chapter 11 was that by rewarding successful cooperation, rather than by punishing defection, we can propel this cooperative effort to being truly creative and innovative. Working together this way, we can achieve things as a society that no individual ever could. (Page 343) >[!quote] > I believe that intelligent life is fragile. I think that life has evolved in the universe often and has done so for the 13.7 billion years that our cosmos has been in existence. But, as far as we can see, we are alone. Intelligent life does not seem to stay around for long. This should give us pause for thought. (Page 346) ##### THE ETERNAL SYMPHONY >[!quote] > The story of humanity is one that rests on the never- ending creative tension between the dark pursuit of selfish short- term interests and the shining example of striving toward collective long- term goals. (Page 349) >[!quote] > We cannot expect cooperation to endure forever. But we can hope to prevent a drastic fall, or at least ensure that cooperation is more likely to prevail over longer periods of time and only suffer the occasional breakdown. We can work to quickly reestablish cooperation after each collapse. We need to place more faith in citizens than leaders. Cooperation has to come from the bottom up and not be imposed from the top down. (Page 352)
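As a closing aside, the cooperation conditions scattered through these highlights can be collected into one small checker: indirect reciprocity needs the probability q of knowing a reputation to exceed c/b (Page 337), spatial selection needs b/c to exceed the average number of neighbors k (Page 338), group selection needs b/c to exceed 1 + n/m (Pages 124 and 338), and kin selection needs Hamilton's r > c/b (Page 138). The direct-reciprocity condition w > c/b is not quoted in these highlights; it is the form given in Nowak's 2006 "Five Rules for the Evolution of Cooperation" paper and is included here for completeness.

```python
def five_rules(b, c, w=None, q=None, k=None, n=None, m=None, r=None):
    """Check the cooperation conditions collected in the book's closing chapter.
    b = benefit, c = cost; the other parameters belong to individual mechanisms.
    The direct-reciprocity condition (w > c/b) is not quoted in these highlights
    and comes from Nowak's 2006 'Five Rules' paper."""
    out = {}
    if w is not None:                   # w: probability of another encounter
        out['direct reciprocity (w > c/b)'] = w > c / b
    if q is not None:                   # q: probability of knowing a reputation (Page 337)
        out['indirect reciprocity (q > c/b)'] = q > c / b
    if k is not None:                   # k: average number of neighbors (Page 338)
        out['spatial selection (b/c > k)'] = b / c > k
    if n is not None and m is not None: # n: group size, m: number of groups (Page 338)
        out['group selection (b/c > 1 + n/m)'] = b / c > 1 + n / m
    if r is not None:                   # r: relatedness, Hamilton's rule (Page 138)
        out['kin selection (r > c/b)'] = r > c / b
    return out

# Example: a helpful act costing 1 and giving 5, in a few illustrative settings.
for rule, holds in five_rules(b=5, c=1, w=0.3, q=0.25, k=4,
                              n=10, m=5, r=0.125).items():
    print(f'{rule:38s} {"cooperation can evolve" if holds else "defection wins"}')
```

With a benefit of 5 and a cost of 1, the example shows cooperation passing every test except kin selection among cousins (r = 1/8), which is Haldane's point from the Page 145 highlight.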