A Quick Excursion into Behavioral Economics

It is always worth revisiting ideas that managed to leave the ivory tower of academia and infiltrate “mainstream” thinking. One of those ideas is behavioral economics, which gave us “Thinking, Fast and Slow”, “Nudge”, and a never-ending list of cognitive biases.

These ideas ultimately become untethered from their academic foundations and context. Here, I want to reconnect us to the original academic roots of this intellectual movement. Specifically, I want to talk about Kahneman’s work on alternative models of decision under uncertainty, and in particular the paper he wrote with Amos Tversky, “Prospect Theory: An Analysis of Decision under Risk” (Econometrica, Vol. 47, No. 2, March 1979). That paper laid out a new modeling framework for decision under uncertainty. The idea was that the new model was more accurate and could better represent the inconsistencies humans suffer from.

It is important to emphasize that these behavioral models are still very complex. Even though each deals with one kind of cognitive bias, they generally introduce further complexity. The academic community was willing to allow for that extra complexity if it got us closer to “real behavior”.

However, a little under twenty years later, Econometrica published two papers back to back in 1994 that assessed which models performed best. The two papers were:

  1. “Investigating generalizations of expected utility theory using experimental data” by Hey and Orme

  2. “The Predictive Utility of Generalized Expected Utility Theories” by Harless and Camerer

Together, these two papers came to a pretty unsatisfying conclusion: all the theories are pretty bad, but the least bad is the traditional rational framework. For that reason I give a more detailed exposition of the rational choice framework and then briefly discuss the implications of the two papers above.

For the curious

I give a brief description of the traditional rational framework here; Ken Binmore’s book “Rational Decisions” has a succinct treatment of it. The goal is simply this: we need a method of attaching a number to an uncertain object in order to value it.

  • Imagine you have an investment that yields $G (good) or $B (bad).

  • $G happens with probability p and $B with probability (1-p). With a tree diagram it might look like this

    ├── (p)
    │     └────> $G
    └── (1-p)
          └────> $B

  • Now, how should we value this uncertain investment? von Neumann’s idea was to first set a “temperature scale” for the individual: fix two reference prizes and measure everything against them.

  • We can’t simply look at the expected value of the lottery (which would be p*$G + (1-p)*$B) because people have preferences over risk. So we have to do some more work.

  • So for a person, imagine the best prize ever (W) and the worst prize ever (L).

  • The next step is to suggest that we can replace an object with a random “lottery” that gives you W and L with some probability. In fact, we can tune the probabilities for this lottery to make you indifferent between the object and the lottery.

  • So returning to our investment example above, we could replace $G with a lottery that gives you W with probability p_g, and we can replace $B with a lottery that pays out W with probability p_b.

  • Visually, for $G, we are going to replace it with a W/L lottery that gives you W (the best outcome) with probability p_g:

    $G
    ├── p_g
    │     └──> W
    └── 1-p_g
          └──> L

    and

    $B
    ├── p_b
    │     └──> W
    └── 1-p_b
          └──> L

    von Neumann then says we can just call the value of $G “p_g” and the value of $B “p_b”.

  • The BIG IDEA: in other words, call the value of $B the probability p_b of getting W that makes the individual indifferent between $B and the W/L lottery. Every person wants the highest possible probability of getting W, so you want to maximize the probability of W.

  • So now the investment is actually a lottery that pays out two different lotteries over the W/L prizes, which means we can substitute. Start with the original investment project

    ├── (p)
    │     └────> $G
    └── (1-p)
          └────> $B

    And then we can replace the prizes to get this “compound” lottery

    ├── p
    │    ├── p_g
    │    │     └──> W
    │    └── 1-p_g
    │          └──> L
    └── 1-p
         ├── p_b
         │     └──> W
         └── 1-p_b
               └──> L

  • And then von Neumann says we can simplify this further. We don’t need all these branches, since everything boils down to the two prizes W and L. So let’s just think about the total probability of getting W and of getting L. This leads to:

    ├── (p*p_g)+(1-p)*p_b
    │     └──> W
    └── (1-p)*(1-p_b)+p*(1-p_g)
          └──> L

  • Now recall the earlier trick of calling the value of an object the probability of getting W. The investment project is then simply:

    (p*p_g)+(1-p)*p_b

    and recall from before

    (p*p_g)+(1-p)*p_b = p*[Value of $G] + (1-p)*[Value of $B]

So that is the traditional method of creating a utility scale for measuring uncertain objects. This approach is called “expected utility”. Now:

  1. We can replace any object that is uncertain with W/L lotteries

  2. Do some math, and make sure to pick the object that has the highest probability of getting W.
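To make the reduction concrete, here is a small Python sketch (my own illustration, with made-up numbers and function names, not something from Binmore or von Neumann). It computes the total probability of ending up with W directly from the formula p*p_g + (1-p)*p_b and checks it against a Monte Carlo simulation of the two-stage tree.

    import random

    def reduced_probability_of_W(p, p_g, p_b):
        """Value of the investment = total probability of ending up with W
        once $G and $B are replaced by their equivalent W/L lotteries."""
        return p * p_g + (1 - p) * p_b

    def simulate(p, p_g, p_b, n=200_000):
        """Monte Carlo check: play the compound lottery n times, count how often W comes up."""
        wins = 0
        for _ in range(n):
            # Stage 1: does the investment pay $G (prob p) or $B (prob 1-p)?
            chance_of_W = p_g if random.random() < p else p_b
            # Stage 2: the prize is itself a W/L lottery.
            if random.random() < chance_of_W:
                wins += 1
        return wins / n

    # Illustrative numbers (my own choice): p = 0.6, p_g = 0.9, p_b = 0.3
    p, p_g, p_b = 0.6, 0.9, 0.3
    print(round(reduced_probability_of_W(p, p_g, p_b), 2))  # 0.66
    print(round(simulate(p, p_g, p_b), 2))                  # roughly 0.66, matching the formula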

Now, it is pretty tricky math to figure out what your own utility scale looks like, so of course errors can occur. Allais famously spotted Leonard Savage making inconsistent choices that contradicted the above framework, and this error is the starting point for Kahneman and Tversky.

So what is Allais’s example?

You have to choose between two lotteries

J
├── 0
│     └── $0
├── 1
│     └── $1m
└── 0
      └── $5m

So with probability 1 you get $1m. The other lottery is

K
├── 0.01
│     └── $0
├── 0.89
│     └── $1m
└── 0.10
      └── $5m

Which one would you pick? Let’s imagine you pick J. We will remember this for the end. Now you face a choice between

L
├── 0.89
│     └── $0
├── 0.11
│     └── $1m
└── 0
      └── $5m

and

M
├── 0.90
│     └── $0
├── 0
│     └── $1m
└── 0.10
      └── $5m

Which one would you prefer? Most people will say they like M more than L. So let us imagine someone said:

J is better than K

M is better than L

What’s the problem with that? Well, we can use the von Neumann strategy to find a set of values that rationalizes these choices. If you chose J over K, then the value attached to J has to be greater than the value attached to K. So we start by setting the temperature scale.

The biggest prize is $5m, so call v(5)=1, and the worst is $0, so v(0)=0. Now we need to find a value v(1) that can work for both choices. So let us compute the expected utilities:

E(J) = v(1) = x

We will call v(1), the value of $1m, “x” for now.

E(K) = 0.89*v(1) + 0.10*v(5) = 0.89x + 0.10

E(L) = 0.11*v(1) = 0.11x

E(M) = 0.10*v(5) = 0.10

We know it must be the case that

E(J) > E(K)  =>  x > 0.89x + 0.10  =>  0.11x > 0.10

But

E(M) > E(L)  =>  0.10 > 0.11x

That is a contradiction and this is Allais’s “paradox”.
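If you want to check the algebra mechanically, here is a small Python sketch (the function names are my own) that scans values of x = v(1) between 0 and 1 and looks for one that satisfies both expected-utility inequalities at once. It finds none, which is exactly the contradiction.

    def expected_utilities(x):
        """Expected utilities of the four lotteries, with v(0)=0, v(5)=1, v(1)=x."""
        return {
            "J": x,                  # $1m for sure
            "K": 0.89 * x + 0.10,    # 0.89 chance of $1m, 0.10 chance of $5m
            "L": 0.11 * x,           # 0.11 chance of $1m
            "M": 0.10,               # 0.10 chance of $5m
        }

    def rationalizes_choices(x):
        """True if a single value x makes both J > K and M > L hold."""
        eu = expected_utilities(x)
        return eu["J"] > eu["K"] and eu["M"] > eu["L"]

    # Scan a fine grid of candidate values for v(1); none of them work.
    candidates = [i / 10_000 for i in range(10_001)]
    print(any(rationalizes_choices(x) for x in candidates))  # False: no utility scale
                                                             # rationalizes both choices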

In their paper, they show experimentally (in pretty small samples ~80 people) that this error occurs frequently enough that they are motivated to build an alternative utility scale that can accommodate these inconsistencies.

The details of their theory involve changing the probability weights we took as given from the lotteries above and replacing them with decision weights that could be probabilities or could be something else. These weights do not have a direct interpretation other than allowing the utility functions to “fit the data better”. In other words, the new utility functions simply avoid the contradiction above. Essentially, they rigged their functions to ensure that the very specific error above falls away.
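To see how loosening the weights dissolves the contradiction, here is a toy Python sketch. The weighting function w below is entirely made up for illustration (it is not the decision-weight function from the 1979 paper): it keeps certainty at weight 1 and shaves every other stated probability down a little. With weights like that, a single value of v(1) can rank J above K and M above L at the same time.

    def w(p):
        """Toy decision weights (illustrative only, not Kahneman and Tversky's function):
        certainty keeps weight 1, every other stated probability is shaved down."""
        return 1.0 if p == 1.0 else 0.9 * p

    def weighted_values(x):
        """Values of the Allais lotteries with v(0)=0, v(5)=1, v(1)=x,
        using the decision weights w(p) in place of the raw probabilities."""
        return {
            "J": w(1.00) * x,
            "K": w(0.89) * x + w(0.10),
            "L": w(0.11) * x,
            "M": w(0.10),
        }

    x = 0.7  # one candidate value for v(1)
    v = weighted_values(x)
    print(v["J"] > v["K"])  # True: J ranked above K
    print(v["M"] > v["L"])  # True: M ranked above L, so no contradiction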

How do you know the model does better?

But what does that mean? There are no formal statistical tests in the original Kahneman and Tversky paper to determine the veracity of the claim that this model “does better” at explaining human choices. That is the key problem that gets resolved in 1994 with the two papers I mention above. Those two papers take seriously all the models proposed up to that point that are meant to “fit the data” better. And as I mentioned above, all the theories are pretty rubbish, but expected utility theory comes out least bad.

This is too big a topic to explore so briefly here, but it should give you a sense of the key ideas.

References

  1. Kahneman, Daniel, and Amos Tversky. "Prospect theory: An analysis of decision under risk." Econometrica 47, no. 2 (1979): 263-291.

  2. Hey, John D., and Chris Orme. "Investigating generalizations of expected utility theory using experimental data." Econometrica 62, no. 6 (1994): 1291-1326.

  3. Harless, David W., and Colin F. Camerer. "The predictive utility of generalized expected utility theories." Econometrica 62, no. 6 (1994): 1251-1289.

  4. Binmore, Ken. Rational Decisions. Princeton University Press, 2008.