Daniel Kahneman, who was, along with Elinor Ostrom, one of the very few non-economists to win the Economics Nobel award, has died, aged 90. There are lots of obituaries out there, so I won’t try to summarise his work. Rather, I’ll talk about how it influenced my own academic career.

When I was an undergraduate, in the late 1970s, economic analysis of decisions under uncertainty was dominated by the expected utility (EU) theory of von Neumann and Morgenstern. The mean-variance approach, still popular in finance, was regarded as, at best, a special case of the correct EU theory. Some early theoretical challenges, notably from French theorist Maurice Allais around 1950, had been thoroughly refuted, at least to the satisfaction of most in the field. (A more fundamental critique by Daniel Ellsberg (later famous for leaking the Pentagon Papers) had been shunted into the “too-hard” basket.)

The first big challenge to this consensus came in a 1974 paper by Kahneman and his long-time collaborator Amos Tversky (already a big name in the field of measurement theory) who found that judgements about probabilities were characterised by a variety of systematic biases, based on misleading heuristics. This set off a surge of interest in challenges to EU, including a revival of the criticisms made by Allais.

One of the key ideas here was that, rather than taking a probability-weighted average of the utilities yielded by the different possible outcomes of an uncertain prospect, people might place more weight on low-probability outcomes like winning the lottery or dying in a plane crash. Unfortunately, the obvious approach of transforming probabilities into weights doesn’t work. Think about a choice which yields lots of different outcomes, each with a small probability and a utility close to, but below, 1. Because each small probability is overweighted, the weights sum to more than 1, so the weighted average procedure yields a value greater than 1, implying that the choice would be preferred to getting utility 1 with certainty. This is obviously silly (the technical term is a violation of dominance).
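The dominance violation is easy to see numerically. Here is a minimal sketch, using an illustrative weighting function w(p) = √p (my choice for the example, not any particular published specification), which overweights small probabilities:

```python
import math

# Illustrative weighting function that overweights small
# probabilities: w(0.1) ~= 0.316, three times the raw probability.
def w(p):
    return math.sqrt(p)

# Ten outcomes, each with probability 0.1 and utility 0.99 --
# every outcome is strictly worse than utility 1 for certain.
probs = [0.1] * 10
utils = [0.99] * 10

# Naive weighting: transform each probability separately.
# The transformed weights sum to about 3.16, not 1, so the
# prospect is valued above the certain outcome that dominates it.
naive_value = sum(w(p) * u for p, u in zip(probs, utils))
print(naive_value)  # roughly 3.13, absurdly greater than 1
```

Any weighting function that overweights small probabilities produces this kind of absurdity when applied outcome by outcome, which is exactly the problem the cumulative approach solves.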

In 1979, while working on my undergraduate honours thesis, I came up with a solution to this problem. If the transformation is applied to the cumulative probability of getting an outcome less than or equal to some given value, rather than to individual probabilities, only the probabilities of extreme outcomes (like lottery wins and plane crashes) are overweighted, and violations of dominance are avoided. This approach is now called rank-dependent utility theory.

In the same year, Kahneman and Tversky published the first version of their generalization of EU, called prospect theory. Among other changes, Kahneman and Tversky used probability weighting in the problematic form described above. They avoided dominance violations in a rather ad hoc fashion, by “editing” out dominated prospects.

My own idea took the usual tortuous process to publication, eventually appearing in the (then new) Journal of Economic Behavior and Organization in 1982. It didn’t attract much attention at first, but eventually got noticed by some of the leading figures in the newly developing field of generalized expected utility theory, and even by Allais, who had returned to the topic after an absence of many decades. Finally, in 1992, Kahneman and Tversky incorporated the rank-dependent idea into their cumulative prospect theory, which became the standard version of prospect theory.

To the extent I have any fame as an economic theorist, it’s mostly due to this work. And, if you are going to engage in debate on policy issues, the credibility gained from having a (moderately) big name in economic theory makes it hard for rightwing economists to dismiss you.

So I owe a big debt to Kahneman (as well as Tversky). He will be missed.