Experimental economics and making choices

How we choose is subject to several differing rationales. How we exercise choice differs, depending on the nature of the choice itself. Where different mechanisms are in play, mistakes get made. This is a brief note about the progress being made in exploring the roots and the implications of this.

Much progress is being made in the field which is becoming known as "experimental economics". Classical economics is, perhaps, chiefly concerned with what people ought to do, as rational agents, and with what they actually do when acting in large numbers. Experimental economics concerns itself with the evidence for the truth of this, and with what happens in the less perfect domain of the individual, the emotional and the specific.

The interesting feature of this discipline is that it uses everything from mathematical game theory to neural imaging to find experimental support for the issues which it tests. For example, Science 300, p. 1755, 13 June 2003 reports how people's mental processing reacts to economic choice.

The experimental set-up is as follows. A game between two players uses real money, which they retain at the end of the test. A pot of money is to be divided between the two players. One of them proposes a division to the other: perhaps that the other receives $4 out of a pot of $10. The proposer keeps the balance. Rejection of this offer leads to neither of them getting any money, whilst acceptance results in the pot being divided as agreed.

This game was played with the second player placed in a neural imaging system. This device is able to watch the activation of areas of the brain in near-real time. As the brain is highly specialised, with parts of it performing very specific tasks, this procedure allows the experimenters to follow which parts of the mind are engaged when particular problems are being solved.

The results of the brain imaging show how the player reacts to various types of offer, and that quite distinct parts of the brain are engaged when different kinds of decision are being made. Specifically, there are areas of the brain which are engaged when the issue of the perceived 'fairness' of the division becomes acute.

Game theory suggests that people who play this game should accept any non-zero offer. By contrast, experiment shows that the mean division is around 50/50, and that about half of all low offers are rejected. The regions associated with negative emotions and with conflicting emotions both become excited, and may over-ride the 'rational' or cortical processing, when low offers are made. The experimenters are able to detect a "f*** you" response with near-complete reliability. The lesson from this is that what we see as a single field - "choosing" - is in fact divided into quite dissimilar domains, within which we exercise choice differently.
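
A minimal sketch of this contrast, in Python, may help. The responder rules, the offer distribution and the 30% rejection threshold below are illustrative assumptions rather than figures from the paper; the point is simply that a fairness-sensitive responder forfeits money which a purely game-theoretic responder would pocket.

    import random

    POT = 10  # dollars to be divided, as in the example above

    def rational_responder(offer):
        # Game theory: any non-zero amount beats walking away with nothing.
        return offer > 0

    def fairness_responder(offer, threshold=0.3):
        # Illustrative assumption: offers below 30% of the pot are rejected.
        return offer >= threshold * POT

    def average_take(offers, responder):
        # Average earnings (proposer, responder) over a series of one-shot games.
        proposer = sum(POT - o for o in offers if responder(o))
        recipient = sum(o for o in offers if responder(o))
        n = len(offers)
        return round(proposer / n, 2), round(recipient / n, 2)

    random.seed(1)
    offers = [random.choice([1, 2, 3, 4, 5]) for _ in range(10_000)]

    print("rational responder :", average_take(offers, rational_responder))
    print("fairness responder :", average_take(offers, fairness_responder))
    # The fairness-sensitive responder walks away from low offers and so earns
    # less in this one-shot setting - precisely the 'irrational' rejection
    # which the imaging work picks up.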

To reinforce this point, consider the role of reputation. There is a so-called 'trust' game, which involves putting hidden sums of real money into a pot, which is then equally divided between the players, with an overall premium added to reward strong investment. The game theory equilibrium is to cheat: one should not put any money into the pot, but aim to get something back from the division. In this, it is identical to the "tragedy of the commons", in which it pays individuals to exploit natural resources and to invest nothing in their upkeep.

As a result, however, most such games peter out after a few rounds, with nobody investing. If, by contrast, the contributions of individuals are made known, then everyone invests to the hilt and the group prospers. Experiments show that occasional exposure is quite enough to keep the game going, particularly if players can choose with whom they play, such that reputation is both a source of information about likely behaviour, and a passport to a high-return world. Fascinating work has been done on the value of reputation, and on the amount of information needed to make a valid decision about participation in such a game.

However, for present purposes, it is worth noting that the development of 'bonding' correlates not only with factual information but also with the levels of the hormone oxytocin in the blood of the individuals. This compound is associated with many physiological processes - such as lactation in female mammals - but it is also closely linked to social group formation, trust development, conflict resolution and learning. We find that some of our most important social processes are, therefore, mediated not even by our unconscious minds, but by our deep biochemical plumbing.

Autistic adults, who have difficulties in understanding other people's inner lives, behave in trust games just as game theory would suggest. People from small-scale societies do much the same.

It follows that there seem to be four basic (and very different) domains within which we make decisions. None of these is of itself surprising, but each has different rules of participation and closure. They have to be handled differently. Conflict and mistakes may well arise where we have these categories mixed together, or where one group is acting upon one rationale whilst another is following a distinct set of processes.

All of these define choice, depending on which domain we are operating in. The world is not organised around rationality, but around assorted and incompatible rationales. Push a negotiator out of mode one and into mode two and the f*** you syndrome will dominate, even if it costs him or her the job.

There is much more being done in what is, in effect, a marriage of psychology with abstract economic concepts. Some of this work is done with humans, whilst other studies use animals. One animal example relates to an attempt to train a chimpanzee to urinate in a bowl. This was done in classical operant style, by offering a reward when she performed as required. The animal quickly learned the task. However, she also learned that she was rewarded per offering, not by volume. She therefore changed her behaviour, to pee frequently and a little rather than once, copiously. Any attempt at regulation will be met by arbitrage, it seems.

As a fascinating if faintly disgusting coda, the chimp tried to substitute both water, carried by hand, and her spit for urine when she was completely dried out, showing that she was thinking about the world in categories (liquids, or some such).

Under the heading "Monkeys reject unequal pay", Brosnan and de Waal have shown (Nature 425, pp. 297-299, 2003) that capuchin monkeys possess a sense of what we would call "fairness". Capuchins are a highly cooperative species and frequently share food. The experimenters taught the monkeys to trade grapes for cucumbers. Later, when the idea of exchange was understood, they learned to "buy" the much-favoured grapes from human experimenters using tokens, analogous to money. Pay rates were established, with the monkeys getting tokens for performing a task, which they could then use to buy grapes.

A striking observation was that if one monkey saw another getting more for their token than they themselves did, then they would refuse to work, even at the expense of not getting any grapes at all. Lower-paid animals often rejected their reward altogether, despite the relish that the monkeys had for grapes. The underpaid monkeys disrupted the system of work and pay. Tokens were taken and not handed back, thrown to the bottom of the cage or ejected altogether. This did not happen in three years of an equal pay regime, and it was only the low-paid monkeys which reacted in this way.

The experimenters noted that it was not the absolute rates of pay (how many grapes per token) that mattered to the monkeys, but whether they all got the same number of grapes for a token. The authors argue for a universal "sense of fairness" amongst the primates. Dog owners will note a similar phenomenon, as well as jealousy, guilt and other emotions. There is no argument that the higher mammals feel fear, excitement, hunger and other emotions much as we do, so it should come as no surprise that the concomitants of economic life are also shared. The invisible hand is, perhaps, a consequence of our genes, all but 3% of which we share with chimps. However, it does suggest that there may be limits to how far we can manipulate human behaviour, and considerable truth in the idea that things change, but people do not.

One very sweeping example is the measurement of the risk-return relationship - usually called the utility curve - in bees, ants and mice. Not only did the relationship actually exist, but it behaved as economists had speculated in response to generally secure or choppy environments.

There has since been a torrent of equivalent papers. Here are two human-focused examples.

Reputation

It has been known for some time that tacit institutions - and particularly those involving the common good - are valued by people, to the extent that they will make considerable sacrifices to protect them. (e.g. Ledyard, in the Handbook of Experimental Economics, Princeton University Press, 1995.)

Two new lines of evidence have been added to this. The first is concerned with reputation, where the urge to cheat has to be balanced against the effect which cheating might have in the future.

Freeloading and cheating lead to the "tragedy of the commons", whereby free goods - grazing, water - are overexploited because any one actor who holds back in order to protect them loses out to others who do not. The Nash equilibrium (subject of the current hit movie, "A Beautiful Mind") takes the system into pollution, desertification and the like. Thus, the theory goes, we need rules and Hobbesian rulers to impose them.

Simple experiments allow the scope of this to be investigated. Milinski et al (Nature 415, pp. 424-426, 2002) ran games with real money. Players contribute a variable amount to a pool, which is doubled and then returned to each, divided evenly. Thus, if each player put $5 in the pool, each would get $10 back. However, putting nothing in the pool would, if many others did in fact contribute, send the player home with a little under $15: the $5 retained, plus nearly $10 as a share of the doubled contributions of the others. Each player makes this calculation and, in the absence of information about the behaviour of others, contributions quickly fall to zero.
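
The arithmetic behind these figures can be made explicit. The sketch below assumes a group of twenty players, each with a $5 endowment, so that the free-rider's take comes to the "little under $15" quoted above; the group size in the published experiments may well differ.

    def payoff(my_contribution, others_contributions, endowment=5, multiplier=2):
        # Keep whatever was not contributed, plus an equal share of the
        # multiplied pool (the "doubled and divided evenly" rule above).
        pool = my_contribution + sum(others_contributions)
        n_players = 1 + len(others_contributions)
        return endowment - my_contribution + multiplier * pool / n_players

    others = [5] * 19  # nineteen co-players who each contribute their full $5

    print(payoff(5, others))     # 10.0 -- everyone contributes, everyone doubles up
    print(payoff(0, others))     # 14.5 -- the free-rider goes home with a little under $15
    print(payoff(0, [0] * 19))   # 5.0  -- but if all reason this way, nobody gains anything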

In the experiments described, individual behaviour was alternately revealed and concealed, or (unpublished) subject to random revelation. This completely changes the outcome of the game. Contributions are high, and the overall 'value added' for the players is also high. Collaboration rests on inferred models of others' conduct. We call such inferences 'reputation'. Where we can calibrate our model of others sufficiently well, we take the risk which is involved in collaboration. Where we cannot do this, we play for the safe lowest common denominator. Interestingly, when information was cut during these game series, collaboration swiftly ceased.
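
As a purely illustrative dynamic, the sketch below assumes players who contribute fully whenever their behaviour may be revealed, and who otherwise ratchet down towards the meanest contribution they have seen. The group size, number of rounds and behavioural rule are assumptions of the sketch, not features of the published design.

    def run_rounds(n_players=6, rounds=8, revealed=True, endowment=5):
        # Crude conditional cooperation: contribute fully while behaviour is
        # visible; otherwise undercut the lowest contribution seen so far.
        last_round = [endowment] * n_players      # optimistic first impressions
        totals = []
        for _ in range(rounds):
            if revealed:
                contributions = [endowment] * n_players
            else:
                floor = min(last_round)
                contributions = [max(floor - 1, 0)] * n_players
            last_round = contributions
            totals.append(sum(contributions))
        return totals

    print("behaviour revealed :", run_rounds(revealed=True))    # holds at 30 per round
    print("behaviour concealed:", run_rounds(revealed=False))   # collapses to zero
    # The concealed game 'peters out' within a few rounds, while the revealed
    # game sustains full contributions - the pattern described above.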

Solutions to the issues raised by the Nash equilibria are less Hobbesian than Lockean. That is, we do not need enforcement so much as enforced ownership.

We are not short of cows or fir trees, for these have owners with a direct interest in their renewal. By contrast, whales and some tropical forests have no owners, and it follows that there is no one body which has a defensible self-interest in the investment which leads to their sustainable cropping. If the UN World Maritime Organisation were to sell title to whales, for example, then the issue of conservation would reduce to one of the enforcement of ownership rights.

The cheapest and most effective form of policing is self-imposed. This works best, as we have seen, where the key penalty for defection is the loss of reputation and, as a consequence, the loss of the long-term ability to trade. It follows that a policy of making goods such as whales and forests tradable assets also requires a structure in which reputation and trust are of critical importance to long term success, and specifically important outside of national and cultural boundaries where environmental damage is tolerated or dissent is quashed.

Punishing defectors

A related major paper (Fehr and Gächter, Nature 415, pp. 137-140, 2002) looked at the policing of tacit institutions. A similar series of games was set up. However, individuals could now 'fine' defectors, at considerable financial penalty to themselves. The experiments showed that, as expected, collaboration fell to low levels in the absence of information and penalties for defection.

Where information was available, collaboration was stronger. However, the design of the experiment was intended to factor out the impact of reputation. Instead, players worked in novel pairs, and each had the ability to punish or not to punish their partner, at a considerable cost to themselves if they chose to exercise that option. Where such penalties were put in place, collaboration and the pursuit of group norms were strong and persistent. What came as a surprise was the degree to which collaborators were prepared to harm themselves in order to enforce these norms. The conclusion drawn from this is that "altruistic punishment" is a major force in human affairs (and in human evolution). We are prepared to incur considerable penalties in order to deter those who go against group interests.
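
A sketch of the payoff logic may make the point clearer. The figures below are illustrative assumptions rather than the stakes used by Fehr and Gächter: a defector starts out ahead of the cooperators, but one cooperator willing to pay a personal cost to impose a fine reverses that ranking, while ending up worse off than the cooperators who did not punish.

    def outcome(defector_gain=15, cooperator_gain=10, fine=9, cost_of_punishing=3):
        # Illustrative numbers only: one cooperator pays `cost_of_punishing`
        # to knock `fine` off the defector's earnings.
        defector = defector_gain - fine                  # defection no longer pays
        punisher = cooperator_gain - cost_of_punishing   # bears the cost personally
        bystander = cooperator_gain                      # free-rides on the punishment
        return {"defector": defector, "punisher": punisher, "bystander": bystander}

    print(outcome())   # {'defector': 6, 'punisher': 7, 'bystander': 10}
    # The punisher does worse than the bystanders - the act is altruistic -
    # yet the defector does worst of all, which is what sustains the norm.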

We will, it seems, go to considerable pains to seek out those with good reputations, and to punish those who default. We will - although this is not shown, it seems a logical consequence - go to equal pains to assure ourselves of free information flows in our societies and organisations. After all, perhaps a third of prime time broadcasting consists of news programmes and commentary, and we spend hours every day informing ourselves about our world through gossip and newspapers, radio and television.

The economics of impatience

We are all familiar with discount rates: we regard future rewards as less valuable than rewards which are reaped today. A considerable body of work now shows that this intuition is as true for animals as it is for humans. Our planning assumptions have, however, always been based on the notion that the rate at which we discount the future is constant, both with respect to how far in the future an event lies and to the interval between the alternatives which we might trade against one another. It turns out that these assumptions are wrong.

A pigeon, confronted with a crumb before it and a chunk of bread ten seconds' waddle away, will take the crumb. However, it will go after the larger of two chunks of bread that are both distant, even though the smaller chunk is closer. In this and similar ways, we can measure how animals make choices.

It turns out that humans, faced with alternative contingencies, act in much the same manner (e.g. Loewenstein at Carnegie Mellon). Work by Laibson at Harvard bases an explanatory model on the notion that there is an unresolved conflict between the discount rates that determine short- and long-term behaviour. This gives a far better model of actual consumer behaviour than does the constant discount rate model.
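
A worked contrast may help. The sketch below compares a constant (exponential) discount rate with a quasi-hyperbolic "beta-delta" form of the kind used in Laibson's models; the parameter values and the crumb-versus-chunk numbers are illustrative assumptions, not taken from the work cited.

    def exponential(value, delay, rate=0.9):
        # Constant discounting: the classical planning assumption.
        return value * rate ** delay

    def quasi_hyperbolic(value, delay, beta=0.6, delta=0.9):
        # Present bias: any delay at all takes an extra one-off hit (beta).
        return value if delay == 0 else beta * value * delta ** delay

    # The pigeon's choice: a crumb (1) right now versus a chunk (4) ten steps
    # away, and then the same pair with both rewards pushed twenty steps out.
    for name, discount in [("exponential", exponential),
                           ("quasi-hyperbolic", quasi_hyperbolic)]:
        chunk_now = discount(4, 10) > discount(1, 0)
        chunk_later = discount(4, 30) > discount(1, 20)
        print(f"{name:16s}  waits when immediate: {chunk_now}, when distant: {chunk_later}")
    # The exponential chooser gives the same answer in both cases; the
    # quasi-hyperbolic chooser grabs the crumb when it is immediate but waits
    # for the chunk when both rewards are distant - the preference reversal
    # described above.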

In a recent US survey, 75% said that they should be saving more for the future, but only 6% said that they were fully meeting this goal. A further group (Thaler at Chicago, Benartzi at UCLA) worked through the logical implications of this into a practical experiment. People are reluctant to save current income, but they should be happier to save from future income. A company was persuaded to offer its employees the opportunity to commit themselves to save an increased proportion of future salary increases. In something over two years, savings rose from 3.5% to 11.8% of staff salaries.
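
The mechanics of the scheme can be sketched very simply. The employee commits in advance to divert part of each future pay rise into savings, so that take-home pay never falls. The raise size and diverted share below are illustrative assumptions, chosen only to show how a 3.5% savings rate can climb towards the reported level.

    def save_more_tomorrow(start_rate=0.035, n_raises=4, raise_pct=0.03,
                           share_of_raise_saved=0.75):
        # Divert a fixed share of each future pay rise into savings.
        # All parameter values are illustrative, not those of the scheme.
        salary = 1.0                   # normalise the starting salary
        saved = start_rate * salary    # current savings, in salary units
        rates = [round(saved / salary, 3)]
        for _ in range(n_raises):
            rise = salary * raise_pct
            salary += rise
            saved += share_of_raise_saved * rise   # only new money is diverted
            rates.append(round(saved / salary, 3))
        return rates

    print(save_more_tomorrow())
    # [0.035, 0.056, 0.076, 0.096, 0.115] -- the savings rate ratchets upward
    # while take-home pay still rises at every step.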

Truth and consequences

One wonders what the implications of this are for the quarter-by-quarter earnings approach which seems to dominate capital markets, and what it means for long-term partnerships with peer organisations. It seems to reinforce one great C21st truth. We are all modular and multifaceted, whether we act as a firm, a nation or an individual. Our connections with other such modules are erratic and unpredictable. The interface - how we are interpreted, and what interpretations we make - crucially defines our joint behaviour. Further, such interfaces are defined by information, expressed in ways which we call insight, trust, reputation, oversight, governance and the like. We are treated in ways which reflect how we are seen, and the more that how we are seen and what we actually do are in line with each other, the stronger our long-term collaborative relationships are likely to be.
