The Challenge Network

When does "Economic Man" dominate social behaviour?

This note attempts to summarise a remarkable paper by Camerer C and Fehr E, Science 311, 47-52 (2006).

Summary: Each rational "economic man" is thought to deploy their best endeavours upon accessible information and understanding, so as to maximise individual advantage. However, much real-world and experimental evidence points to other kinds of outcome being not only common, but general. The paper shows that small groups of economically-rational individuals, following their best advantage, can nevertheless force a much more collaborative stance within a network of interaction. In distinct circumstances, however, the same individuals can crystallise group behaviour into their own model and away from collaboration. The dynamic of how these outcomes are arrived at depends on the competitive structure of the environment itself, not upon the motives of the individual agents.

Background: games, strategies and individual reputation.

There are two ways in which researchers have, traditionally, approached the issue of social co-operation. One model appeals to genetics, as to how evolution has shaped the ability of animals to act within social groups. In such a model, altruism - working for the good of the group, in defence or in child-rearing, for example - is explained in terms of the benefits which such collaboration confers upon relatives who share common genes. Inherited behaviour which assists the group will assist the spread of genes of those in the group, by a sort of surrogate parenthood.

The second approach, on which this paper dwells, is concerned with game theory. Game theory asserts that there are mechanisms that are connected with individual motivation, such that the interaction of two or more agents will always lead to predictable outcomes. The mechanisms which give rise to this predictability depend on each agent recognising their options and seeking out those which confer greatest advantage upon them as an individual, weighted for uncertainty and assessments of what other players may do. Such situations of competition or negotiation are often called "games", and they predict that individuals will behave selfishly, without regard to the common good. Indeed, this prediction is built into their fundamental assumptions. However, as is evident in both daily life and in the outcome of very many experimental trials, such predictions are not always matched by events.

Interactions can be reduced to a number of fundamental transactions: for example, a virtual guarantee of even-handed distribution of a good between two people is to allow one to "cut the cake" and the other to choose one of the two slices. The cutter - unless a very fond parent - will always divide the cake in half, for otherwise the person choosing will take the larger slice, leaving the cutter with less than they might otherwise enjoy.

Other such games include the "Prisoners' Dilemma" and the "Ultimatum Game". The Ultimatum Game is an extremely simple procedure that leads to complex outcomes. One agent has a desirable thing - let us say, is given a sum of money. The rule of the game is that he or she is to offer a part of this to another player, who knows the size of the total sum and who may choose to accept or reject - but not to bargain over - this offer. If the offer is rejected, then neither player gets anything. If it is accepted, then both players get, respectively, the offer and the remaining amount. Pure game theory would have the first player offer the second a minimal amount, which would be accepted because the accepting agent would get something, rather than nothing, and should be happy with that. In practice, however, almost all offers of under 30-40% of the total sum are rejected, with both players losing the entire sum.
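The Ultimatum Game's logic can be sketched in a few lines. This is a minimal illustration, not the paper's own protocol: the pot size, offers and the "fairness threshold" below which a responder rejects are all assumed numbers.

```python
# Minimal sketch of a one-shot Ultimatum Game (numbers are illustrative).
# The proposer offers part of the pot; the responder accepts only if the
# offer meets a personal fairness threshold, otherwise both get nothing.

def ultimatum_round(pot, offer, fairness_threshold):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if offer >= fairness_threshold * pot:
        return pot - offer, offer  # offer accepted: both are paid
    return 0, 0                    # offer rejected: both get nothing

# A purely "economic" responder accepts any positive offer...
print(ultimatum_round(100, 1, fairness_threshold=0.0))   # (99, 1)
# ...but a typical human responder rejects offers below ~30-40% of the pot.
print(ultimatum_round(100, 20, fairness_threshold=0.3))  # (0, 0)
print(ultimatum_round(100, 40, fairness_threshold=0.3))  # (60, 40)
```

The threshold is where the observed behaviour departs from pure game theory: a threshold of zero reproduces the textbook prediction, while the empirically observed 0.3-0.4 reproduces the rejections described above.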

This pattern is common across most such trials with humans, and even amongst animals such as monkeys that are exposed to similar tests. It appears that a recognition of equity is somehow innate to us, and that we - or at least, a substantial fraction of us - prefer to punish those who behave in ways which we perceive as unfair, even when this carries a considerable practical and immediate cost. Inverting this finding, it is possible to place a specific value on probity.

The Prisoners' Dilemma is a related game. In this, two agents are required to trust each other: in the original version of the game, both were in separate cells, and could be convicted only if the other confessed. Both gain substantially if this trust is upheld - if neither prisoner offers evidence against the other, for example - and each stands to gain a small amount (and the other to lose heavily) if either breaches trust. That is, if one prisoner confesses, the deal is that he is given a light sentence and his peer a heavy one.

Game theory suggests that each should defect, trying to seize advantage at the expense of the other. As a consequence both can be expected to suffer a penalty. This is precisely what happens when experimental situations are set up in which each individual has no knowledge of the other. For example, individuals are given a sum of money which they can place, anonymously, into a common pot. The pot is then augmented by 25%, and the result split evenly between the players. Because the pot is split evenly regardless of what each player put in, anyone who withholds keeps their own stake and still collects a share of the others' contributions, so the motivation to cheat is high. Such games invariably arrive at a state in which nobody contributes anything after a few rounds of play.
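The arithmetic of this public-goods game is easy to check directly. The sketch below uses the 25% augmentation from the text; the group size, stakes and the behavioural rule in the second part are illustrative assumptions.

```python
# A sketch of the public-goods game described above: contributions go into a
# common pot, the pot is increased by 25%, and the result is split evenly.
# Group size, stakes and the imitation rule are illustrative assumptions.

def payoffs(contributions, multiplier=1.25):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [share - c for c in contributions]  # even share minus own stake

# Per unit contributed, a player in a 4-person group gets back only
# 1.25 / 4 of it, so withholding dominates: the free-rider does best here.
print(payoffs([10, 10, 10, 0]))

# If players imitate the group average but shade their contribution down a
# little each round, contributions collapse toward zero within a few rounds.
contribs = [10.0, 10.0, 10.0, 0.0]
for _ in range(10):
    avg = sum(contribs) / len(contribs)
    contribs = [0.8 * avg] * len(contribs)
print(round(contribs[0], 2))  # close to zero
```

The key number is the per-unit return of 1.25/n: whenever the group has more than one or two members, each unit contributed comes back as less than a unit, so contributing is individually irrational even though universal contribution would leave everyone better off.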

By contrast, games persist in proportion to the amount of information that is made available to the players. That is, if individual contributions are occasionally made accessible to other players, then contributions to the central pot remain high and the game persists. A naive reading of this situation is that cheats are shamed into collaboration; but a moment's thought shows that there is no punishment whatsoever that is meted out to cheats. They have nothing direct to lose by acting in a way that maximises their short-term advantage provided, crucially, that they have no reason to believe in the long-term viability of the game. However, if they can see a way to maintain the game whilst also not taking the risk of being themselves cheated, then this offers a much preferable option. They need to be able to "read the minds" of their peers in the game.

These games are easily accessible to computer simulation, and can also be played by experimental subjects who have been instructed to behave in certain ways. The finding is that the strategy that offers the best return is 'tit for tat', or cautious reciprocity. In essence, each player has to feel out the disposition of their peer(s), collaborate if they seem inclined to do so, and defect quickly if the other appears prone to cheat. People who take up such a stance are following a strategy that optimises individual returns whilst generating a structure which creates common benefits. That is, "economic man" - individualist, essentially selfish - should nevertheless be a 'reciprocator' when the conditions are right for this. As the authors put it: "The mere belief that [the other player] is a reciprocator generates strong cooperation incentives even amongst purely self-regarding players to gain a reputation by mimicking the behaviour of strong reciprocators."
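The 'tit for tat' result is easy to reproduce in simulation. The sketch below uses the standard Prisoners' Dilemma payoffs (temptation 5, reward 3, punishment 1, sucker 0), which are conventional textbook values rather than figures from this paper.

```python
# A sketch of 'tit for tat' in a repeated Prisoners' Dilemma, using the
# conventional payoffs (T=5, R=3, P=1, S=0). 'C' = cooperate, 'D' = defect.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # quick retaliation: (9, 14)
```

Against a fellow reciprocator, tit for tat sustains cooperation indefinitely; against a habitual defector it loses only the first round before switching to defection itself, which is the "cautious reciprocity" the text describes.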

As we have seen, there are also naturally strong reciprocators present in all societies, people who seek fairness and who are prepared to suffer personal losses in punishing those who are perceived to be unfair. In addition, we generally act as though other agents whom we know well tend to stick with one strategy for considerable periods. That is, a person or company with a well-earned reputation for honesty tends to go on behaving in ways that preserve this. Reputation has a great influence on how individuals are treated by others in animal societies, amongst human groups and between organisations such as companies. There are huge (economic, rational) reasons to acquire and to husband such a reputation, even though gaining this has costs when compared to the gains from pursuing raw advantage. To further quote the authors: "Individuals who violate the assumptions of economics may create powerful economic incentives for [those who do not] to change their behaviour."

By contrast, such restraint is pointless for economically rational individuals in those situations where reputation counts for nothing, where information is concealed and where today's action has no effect on tomorrow's opportunities. Situations can, therefore, crystallise the overall flavour of an interchange either around collaboration or else around defection as the dominant mode of behaviour.

Two types of social strategy.

The Ultimatum Game has a typical outcome, which is that one-on-one games demand a payout of around 30-40% of the total prize. However, if more than one person can accept the hand-out, this number falls precipitously, to around 2-5% when something of the order of four such individuals are involved. Why should this be so? What changes the game from what is essentially collaborative to a 'selfish' mode?

In the situation where a single individual is involved in accepting or vetoing the offer, they have a monopoly over decision-taking. They know that they can punish their partner for making what they regard as an unfair offer, and they often prefer to forgo a reward in order to do this. However, when they have rivals, they must accept that there is a probability - and in the case where there are several rivals, a high probability - that there will be an agent who does not feel the incentive to 'punish the cheat'. Their attempt to punish will therefore be ineffectual, and so the rational outcome is to accept the small reward. Further, they will know that other players will be making the same calculation. It transpires that the relationship between the number of bidders involved and the assessed probability of there being a defector amongst them correlates well with the bids that are made. Ensemble estimates of the behaviour of others determine the strategy that is likely to be followed by the group.
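The probability calculation behind this is straightforward: if each responder independently rejects a low offer with probability q, punishment fails as soon as any one responder accepts. The rejection probability of 0.7 below is an illustrative assumption, not a figure from the paper.

```python
# If each responder independently rejects a low offer with probability q,
# the chance that *someone* accepts - so collective punishment fails - is
# 1 - q**n. The value q = 0.7 is an illustrative assumption.

def prob_low_offer_accepted(n_responders, q_reject=0.7):
    return 1 - q_reject ** n_responders

for n in (1, 2, 4, 8):
    print(n, round(prob_low_offer_accepted(n), 2))
```

The chance that punishment fails climbs quickly with group size, which is consistent with the observed collapse of demanded payouts from 30-40% down to a few percent once several rivals are competing to accept.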

The authors note two generic types of interaction which determine what kinds of social strategy will crystallise. They call these strategic substitutability and strategic complementarity.

Substitutable goods are those which can stand in for each other: coal and wood for a fire, for example. Complementary goods are those which are often used together, as with gloves and overcoats. All things being equal, more sales of one substitutable good means less of the other; whilst more sales of a complementary good means more of the other. Complementary pricing strategies mean that it is advantageous to match the prices of competitors, whilst pricing strategies which revolve around substitutability mean that differentiating the firm's prices from those of the competition provides advantage.
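The matching-versus-differentiating logic can be made concrete with a stylised payoff function. Everything in the sketch below - the binary actions, the cost of acting and the payoff structure - is an illustrative assumption, not a model from the paper.

```python
# Stylised sketch of the two interaction types (payoffs are assumptions).
# Acting costs 0.5; it pays off in proportion to the rival's action under
# complementarity, and to the rival's *inaction* under substitutability.

def payoff(my_action, rival_action, kind):
    benefit = rival_action if kind == 'complement' else (1 - rival_action)
    return my_action * (benefit - 0.5)

def best_response(rival_action, kind):
    # Choose the action (0 = hold back, 1 = act) that maximises payoff.
    return max((0, 1), key=lambda a: payoff(a, rival_action, kind))

print(best_response(1, 'complement'))  # 1: match an active rival
print(best_response(1, 'substitute'))  # 0: stand aside instead
print(best_response(0, 'substitute'))  # 1: act where the rival does not
```

Under complementarity the best response copies the rival, so behaviour converges on whatever the group is doing; under substitutability the best response is the opposite of the rival's action, so agents differentiate.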

These simple thoughts govern the generics of how firms interact in competition. However, they also have much to say to the issues of collaboration and defection. In conditions of substitutability, the economic rationalist should take advantage of the less rational competitors. In conditions of complementarity, they should adopt a mask of reciprocity and mimic the behaviour of the less rational agents. Strikingly, a small minority of economically-rational agents can force a situation of substitutability into what economic modelling would regard as 'rational' - that is, following the model of the many-player Ultimatum Game, above. Equally, however - and to the naive eye, extraordinarily - a situation of complementarity will be forced into an economically non-rational, collaborative outcome by the action of this same small group of economic rationalists.

It turns out that such models, applied to populations whose players have attitudes to assessment distributed much as discussed, closely simulate observed behaviour. That is, in a situation of substitutability, "market forces" arise whereby the first, second and indeed third order assessments allow various levels of player to 'feed' off the errors of those making less sophisticated assessments. Such mechanisms deploy the best possible information in the pursuit of profit. Market signals are often better predictors of issues which affect market outcomes than are formal predictions. For example, orange juice futures are better at predicting the probability of frosts in Florida than are formal weather forecasts. The Iowa Political Stock Market has outperformed opinion polls in hundreds of separate elections. Sixty days before the US Presidential elections, the forecast has an error of only 2%. Better informed traders essentially feed off - and quash - the views of less well-informed players.

To see how this works under situations of substitutability, consider the "beauty contest" to which Maynard Keynes likened picking a stock. One is looking not for an excellent stock, but for what an audience will ultimately come to see as a beautiful stock. That is, the calculation is less about what is true of the item traded than about how the other viewers of it will come to see it. Experiments based on such thoughts find that only a small number of valuers go through more than one iteration of valuation. Most look at a stock on its merits, rather than on how others may view those merits, or on how people looking at how others are likely to view the issue may themselves act. In practice, the mean 'tau' of such a group is 1.5, which is to say that the average level of iteration is slightly above unity - the raw value of the asset - and below two, or how others may see the value of that asset. Only 8% of assessors undertake three steps of such valuation: how the value will change as a result of people thinking about how the general run will come to value the stock.
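A standard laboratory version of this idea is the "guess 2/3 of the average" game, in which each step of iteration shrinks the guess by the same factor. The sketch below uses that conventional game; the 2/3 multiplier, the anchor of 50 and the population mix are illustrative assumptions - only the mean iteration depth of 1.5 comes from the text.

```python
# Level-k reasoning in the '2/3 of the average' beauty-contest game, a
# standard way of measuring iteration depth. The multiplier, anchor and
# population mix are illustrative; only the mean depth of 1.5 is from the text.

def level_k_guess(k, p=2/3, anchor=50.0):
    # Level 0 guesses the anchor; level k best-responds to level k-1.
    return anchor * p ** k

# A population whose mean iteration depth is 1.5 steps:
population = [0, 1, 1, 2, 2, 3]  # depths; mean = 1.5
guesses = [level_k_guess(k) for k in population]
print(round(sum(guesses) / len(guesses), 1))  # the population's average guess
```

Fully rational players iterating without limit would drive the guess to zero; a population that mostly stops after one or two steps settles well above it, which is what the experiments find.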

The authors conclude by saying that it is a caricature of the truth to say that the route to successful economies and institutions consists of enabling "economic man". Rather, the great success of markets and other self-assembling institutions lies in their ability to incorporate a far wider range of behaviour. It may well be that we can design even better structures once we have understood human responses better, and put our minds to the kinds of equilibria which we would like to emerge from these.

Note added subsequent to the original post:

A recent (23 Jan 2006) publication in Nature shows that the monkey equivalent of line managers - one layer from the top - act as policemen in the group, breaking up fights and asserting dominance in ways which prevent these. Removing the police causes fearfulness and reduced social interactions, more fights and dominance struggles in the group.

Another study, published in the same week in AAAS Science, looks at the parts of the human brain which are associated with empathy. (It may sound strange to assert that parts of the brain are connected to so vague a concept, but this is so, and relatively easy to demonstrate. Indeed, many issues of fine discrimination - "Is that person looking at me?" - are always and repeatedly associated with dedicated tissue in the human brain.)

In this study, subjects engaged in a real-money game of the sort discussed above. One group were exposed to individuals who were instructed always to cheat, whilst another was exposed to reciprocating partners. In both cases, the subjects were able to identify their partners clearly. In a second phase of the trial, with the areas of activation in their brains imaged, these subjects watched their game partners take part in another activity in which those partners were given apparently painful electric shocks. The empathy sections of the brain lit up for the reciprocators, but not the cheats. We extend our sympathies to those who have helped us, and feel less for those who have cheated us. We know this subjectively - love the tribe, ignore the Other; help the friend, drop the freeloader - but here we see the first demonstration that it is hard-wired into our behaviour.
