Schedule - Parallel Session 4 - Non-EU Theory

WMG IDL 1st Floor Syndicate Room - 11:00 - 12:30

Harmonic Probability Weighting Functions and Mental States of Decision Makers Applied to Core Expected Utility Theory Plus Stochastic Error

G. Charles-Cadogan

Abstract

We introduce a harmonic probability weighting function (HPWF) that decomposes the class of generalized expected utility theory (EUT) models, e.g., regret theory, rank-dependent utility, Chew-MacCrimmon weighted utility, prospect reference theory, Gul’s disappointment aversion, and the Koszegi-Rabin hybrid reference-dependent preference model, into core EUT plus stochastic error. The upshot is that the HPWF comprises a linear anchor probability that supports EUT and a harmonic addend that is functionally equivalent to a probabilistic version of binary switching functions such as the Loomes-Sugden rejoice-regret function or the gain-loss utility function in the Koszegi-Rabin model. Our HPWF analysis proves that those functions mimic stochastic error in a core-EUT-plus-noise decomposition. This allows the HPWF to bring several seemingly different generalized EUT models under one probabilistic umbrella, and it provides theoretical support for the seminal Hey-Orme experiments, which found no statistical difference between core EUT and generalized EUT models.
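
In schematic form (an illustrative rendering of the verbal description above, with h(p) standing in for the harmonic addend; the paper’s exact specification may differ), the decomposition reads

    W(p) = p + h(p),
    V(L) = \sum_i W(p_i)\, u(x_i) = \sum_i p_i\, u(x_i) + \sum_i h(p_i)\, u(x_i),

where the first sum is the core EUT value and the second plays the role of the stochastic-error term.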

G. Charles-Cadogan

Lecturer, University of Cape Town

The Loss Attitude Function

Horst Zank

Abstract

A two-stage model is proposed for aggregating choice behavior over the gain component of a prospect with choice behavior over the corresponding loss component. Choice behavior among pure gain prospects, and likewise among pure loss prospects, is as in prospect theory, using corresponding probability weighting functions and a pure utility function for outcomes. A mixed prospect, which includes both gains and losses, is treated as a two-stage object of choice that gives a conditional pure loss prospect and a conditional pure gain prospect at stage two. Mixed prospects are evaluated recursively: prospect theory is applied to each conditional component, leading to a reduced one-stage simple prospect that gives positive probability to one gain and positive probability to one loss; this simple prospect receives a second application of prospect theory. The resulting model is a recursive extension of prospect theory that accounts, among other aspects of behavior, for attitudes towards the overall probabilities of losing and of gaining. Additionally, the loss attitude function is introduced, which formally models loss aversion by combining the pure utility for gains with the pure utility for losses when evaluating mixed prospects.
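
A minimal sketch of this kind of two-stage, recursive evaluation is given below. It only illustrates the mechanics described above: the power utilities, the one-parameter Prelec weighting function, and the single loss-aversion coefficient LAMBDA (standing in for the role the loss attitude function plays in the paper) are assumptions for illustration, not the paper’s specification.

    import math

    # Assumed functional forms (illustration only).
    ALPHA, BETA = 0.88, 0.88     # curvature of the pure utilities for gains and for losses
    LAMBDA = 2.25                # loss-aversion coefficient used at the second stage
    PRELEC_A = 0.65              # one-parameter Prelec weighting parameter

    def u_gain(x):               # pure utility for gains (x >= 0)
        return x ** ALPHA

    def u_loss(x):               # pure utility for losses (x <= 0), returned as a magnitude
        return (-x) ** BETA

    def w(p):                    # one-parameter Prelec probability weighting function
        return math.exp(-(-math.log(p)) ** PRELEC_A) if 0.0 < p < 1.0 else float(p)

    def pt_value_pure(outcomes, probs, util):
        """Prospect-theory (rank-dependent) value of a pure gain or pure loss prospect."""
        ranked = sorted(zip(outcomes, probs), key=lambda t: abs(t[0]), reverse=True)
        value, cum = 0.0, 0.0
        for x, p in ranked:
            value += (w(cum + p) - w(cum)) * util(x)   # decision weight times utility
            cum += p
        return value

    def recursive_pt_value(outcomes, probs):
        """Two-stage evaluation of a mixed prospect: condition on gaining vs. losing,
        apply prospect theory to each conditional pure prospect, then apply prospect
        theory again to the reduced simple prospect (one gain, one loss)."""
        gains = [(x, p) for x, p in zip(outcomes, probs) if x > 0]
        losses = [(x, p) for x, p in zip(outcomes, probs) if x < 0]
        p_gain = sum(p for _, p in gains)
        p_loss = sum(p for _, p in losses)

        # Stage two: prospect theory applied to each conditional component.
        v_gain = pt_value_pure([x for x, _ in gains], [p / p_gain for _, p in gains], u_gain)
        v_loss = pt_value_pure([x for x, _ in losses], [p / p_loss for _, p in losses], u_loss)

        # Second application of prospect theory to the reduced simple prospect;
        # LAMBDA stands in for the combination the loss attitude function formalizes.
        return w(p_gain) * v_gain - LAMBDA * w(p_loss) * v_loss

    # Example: a mixed prospect paying 100 or 40 (gains) and -10 or -50 (losses), equally likely.
    print(recursive_pt_value([100, 40, -10, -50], [0.25, 0.25, 0.25, 0.25]))

The second application of prospect theory is where attitudes towards the overall probabilities of gaining and of losing enter the evaluation.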

Horst Zank

Professor, University of Manchester

The Probability Discounting Model: A Primer for Economists

Andre Hofmeyr

Abstract

The probability discounting model is a popular framework for investigating people’s instantaneous or atemporal attitudes toward risk in experimental settings. It is particularly common in studies of addiction, where delay discounting data is also often obtained. The model was developed in a series of three papers, by Rachlin, Logue, Gibbon and Frankel (1986), Rachlin, Castrogiovanni and Cross (1987), and Rachlin, Raineri and Cross (1991), which have collectively been cited over 1,000 times. As its name suggests, the probability discounting model draws its inspiration from models of temporal or delay discounting, where the delay to a reward is replaced with the odds against receiving a reward. This paper explains the model in language familiar to decision theorists, statisticians and economists, and it shows how the model relates to standard theories of choice under risk like expected utility theory, prospect theory, rank-dependent utility theory, and rank-dependent expected value theory. It highlights the way in which researchers typically collect probability discounting data and how they analyse it statistically. The paper then uses data from a well-cited study to show that more flexible specifications of a probability weighting function outperform the probability discounting model’s implicit probability weighting function. It concludes by arguing that researchers who are interested in investigating choice under risk should adopt theoretical, methodological and statistical tools appropriate to the task, and should therefore abandon the probability discounting model.
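
For readers new to the model, a minimal sketch of its standard hyperbolic form and of the probability weighting function it implicitly embodies follows; the parameter names and the two-parameter Prelec comparison are illustrative additions, not taken from the paper.

    import math

    def odds_against(p):
        """Odds against receiving the reward: theta = (1 - p) / p."""
        return (1.0 - p) / p

    def probability_discounted_value(amount, p, h):
        """Hyperbolic probability discounting (as in Rachlin, Raineri and Cross, 1991):
        V = A / (1 + h * theta)."""
        return amount / (1.0 + h * odds_against(p))

    def implicit_weighting(p, h):
        """The weighting function implied by the model, w(p) = p / (p + h * (1 - p)),
        so that V = A * w(p), i.e. value is linear in the amount (rank-dependent expected value)."""
        return p / (p + h * (1.0 - p))

    def prelec_weighting(p, a, b):
        """Two-parameter Prelec function, one example of a more flexible specification:
        w(p) = exp(-b * (-ln p) ** a)."""
        return math.exp(-b * (-math.log(p)) ** a)

    # Example: a 30% chance of 100 with discounting parameter h = 1.5.
    p, A, h = 0.3, 100.0, 1.5
    print(probability_discounted_value(A, p, h))   # 22.22...
    print(A * implicit_weighting(p, h))            # identical by construction
    print(A * prelec_weighting(p, a=0.6, b=1.0))   # a more flexible alternative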

Andre Hofmeyr

Senior Lecturer, University of Cape Town

Can Imprecision Determine a Decision? Can Prelec's Function Be Discontinuous at p = 1?

Alexander Harin

Abstract

Dispersion is a common measure of imprecision and noise. An existence theorem has been proven for non-zero “forbidden zones” (bounds) for the expectation of a discrete random variable that takes on a finite set of possible values in a finite interval. These non-zero “forbidden zones” are proven to exist at the borders of the interval whenever the dispersion is non-zero, and a formula has been obtained for the dependence of the width of the “forbidden zones” on the dispersion (see, e.g., Harin, 2015). Due to the theorem, under non-zero imprecision and/or noise, the probability weighting (Prelec’s) function W(p) can be discontinuous at the borders of the probability scale, in particular at the probability p = 1. A discontinuity is a topological feature of a function and can therefore determine the function’s properties in its vicinity. Prelec’s function is a basic one in the field of utility and can determine subjects’ decisions; its discontinuity at p = 1 can therefore determine subjects’ decisions at p ~ 1. Note that this discontinuity can be hidden by a “certain–uncertain” inconsistency between the certain outcomes and the uncertain incentives of the usual experimental procedures in the field of utility (see, e.g., Harin, 2014). The well-known experiment of Starmer and Sugden (1991) is consistent with the existence of both this discontinuity and the “certain–uncertain” inconsistency. In sum, non-zero imprecision and/or noise can lead to a non-zero “forbidden zone” for the data expectation near p = 1 and to a discontinuity of Prelec’s function at p = 1, so imprecision and/or noise can determine a decision at probabilities p ~ 1.
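
A compact way to see why such bounds arise (a sketch of the standard argument, not the cited theorem’s exact formulation): for a random variable X taking values in [a, b] with mean \mu and variance \sigma^2 > 0,

    0 \le E[(X - a)(b - X)] = (\mu - a)(b - \mu) - \sigma^2,

so (\mu - a)(b - \mu) \ge \sigma^2, and hence

    \mu \ge a + \frac{\sigma^2}{b - a}, \qquad \mu \le b - \frac{\sigma^2}{b - a}.

The expectation thus cannot approach either border closer than \sigma^2 / (b - a): a “forbidden zone” whose width grows with the dispersion.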

Alexander Harin

Post Doc, Modern University for the Humanities