4 Psychreg Decision Uncertainty

Cognition & Uncertainty

Cognitive psychology has studied decision-making under uncertainty for decades, yet some of the best data on the topic has come from outside psychology departments – specifically from engineers developing algorithms to analyze how humans behave under uncertainty in strategic, multi-player games. Working in complex, multi-agent environments forced these engineers to formalize what humans actually do under uncertainty and, more importantly, what they consistently fail to do. That formalized knowledge has translated surprisingly directly into clinical and applied psychology.

This piece reviews what this large body of work has found, and what implications it holds for understanding human cognition under ambiguity.

Structural Differences Between Two Types of Uncertainty

Game-theoretic research has drawn a sharp distinction between aleatory uncertainty (the kind associated with rolling dice – i.e., the underlying probabilities are known but the specific outcome is not) and epistemic uncertainty (where the structure of the situation itself is unknown). A clinician estimating treatment response from a known base rate faces a different cognitive task than a negotiator trying to estimate another party's reservation price.
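The distinction can be made concrete in a few lines of code. Under aleatory uncertainty the probability is a known constant; under epistemic uncertainty it must itself be estimated. Below is a minimal sketch, with a standard Beta-Bernoulli model standing in for the epistemic case – the function names and parameter values are illustrative, not drawn from any particular study:

```python
import random

def aleatory_draw(p_success, rng):
    """Aleatory uncertainty: the probability is known; only the outcome is random."""
    return rng.random() < p_success

def epistemic_update(a, b, outcome):
    """Epistemic uncertainty: the probability itself is unknown, so we track a
    Beta(a, b) belief over it and revise it after each observation."""
    return (a + 1, b) if outcome else (a, b + 1)

rng = random.Random(0)
a, b = 1, 1                       # uniform prior over the unknown probability
for _ in range(20):
    a, b = epistemic_update(a, b, aleatory_draw(0.7, rng))
estimate = a / (a + b)            # posterior mean after 20 observations
```

The aleatory decision-maker can compute expected values directly from the known rate; the epistemic one must carry a belief state and keep revising it – exactly the step that, per the findings reviewed here, humans perform poorly.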

There is substantial evidence that humans perform differently under the two types. Under aleatory uncertainty, people generally calibrate their beliefs to observed frequencies quite accurately (as shown by Gigerenzer et al.). Under epistemic uncertainty (i.e., where what the other player knows, and how they will update on observed actions, is itself part of the unknown), performance degrades far more sharply, and in a characteristic pattern. People do not simply become less accurate. They systematically become overconfident, lean too heavily on early "signals", and fail to revise their beliefs when contradictory evidence arrives – the familiar signature of confirmation bias.

Beyond showing that failures under epistemic uncertainty are systematic, game research has supplied quantitative detail on how these failures unfold over repeated trials, not just within isolated experimental tasks. When models of human decision processes are compared against optimal solutions in adversarial environments, the gap exhibits a consistent signature: humans adopt exploitable strategies in predictable ways (for example, yielding too readily to some kinds of pressure while over-committing in response to others), and the degree of exploitability correlates with measures of personality and stress.
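A toy example makes "exploitable in predictable ways" concrete. In matching pennies, the equilibrium strategy mixes 50/50; any detectable bias hands a best-responding opponent a guaranteed expected profit. The ±1 payoffs below are the standard textbook choice, used here purely for illustration:

```python
def exploitability(p_heads):
    """Per-round expected loss, in matching pennies with +/-1 payoffs, of a
    player who plays heads with probability p_heads against an opponent who
    best-responds to that bias. The 50/50 equilibrium mix leaks nothing."""
    # Best response: always pick the side the biased player favours,
    # winning with probability max(p_heads, 1 - p_heads).
    return abs(2.0 * p_heads - 1.0)

# exploitability(0.5) == 0.0 (unexploitable);
# exploitability(0.7) is about 0.4 – a 70/30 bias leaks 0.4 points per round.
```

The same logic scales to richer games: the more predictable the bias, the more an adversary who has modelled it can extract per interaction.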

The Recursive-Modelling Ceiling

Another robust finding from this literature is that humans' ability to model recursively what other players believe extends only to low levels of recursion. People can reliably handle "what does the other player know?" (a first-order belief) and "what does the other player believe I believe?" (a second-order belief). At the third level ("what does the other player believe I think they believe?"), accuracy drops to little better than chance; by the fourth level ("what does the other player believe I think they believe I think…"), people are essentially guessing.
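The classic laboratory probe of this ceiling is the "guess 2/3 of the average" game, in which a level-k reasoner best-responds to a population assumed to reason at level k−1. A minimal sketch (the level-0 anchor of 50 is the conventional modelling assumption, not a measured quantity):

```python
def level_k_guess(k, target=2/3, level0=50.0):
    """Level-k reasoning in the 2/3-of-the-average game: level 0 guesses 50;
    each higher level best-responds to a population one level below it."""
    guess = level0
    for _ in range(k):
        guess *= target           # best response: target times the expected average
    return guess

# level 1 -> 33.33..., level 2 -> 22.22..., level 3 -> 14.81...
```

Observed guesses in such games tend to cluster around levels 1–2, which is one line of evidence for the two-level ceiling described above.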

This is not necessarily a shortcoming. Most real-world decisions require only two levels of belief modeling (e.g., knowing what someone else believes, and knowing what they believe about my beliefs), which frees cognitive capacity for other aspects of the environment. The issue arises when real-world decisions do demand higher-order recursion: high-stakes negotiations, competitive business planning, certain forms of medical risk assessment, and parenting adolescents. In these cases humans substitute heuristics for deeper recursive reasoning – and those heuristics are typically vulnerable to exploitation.

One key contribution of game-theoretic research has been demonstrating quantitatively that these heuristic substitutions follow predictable patterns. When deeper recursive reasoning is needed, people lean disproportionately on superficial indicators (e.g., tone of voice, recent behaviors). They tend to project their own preferences onto the opponent rather than maintaining a separate representation of the opponent's preferences. And they frequently convert ambiguous situations into probabilistic representations they can then reason about, even when those representations misframe the situation. All three substitution patterns have since been documented in the wider literature, but game-based studies have produced the clearest measurements of when each one occurs.
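The second substitution – projecting one's own preferences onto the opponent – can be written down as a one-line model. The blending weight below is purely illustrative, not an empirically estimated parameter:

```python
def projected_opponent_utility(own_utility, opponent_evidence, w_projection=0.7):
    """Social-projection heuristic: instead of keeping a separate model of the
    opponent, blend one's own preferences with the evidence actually observed
    about the opponent. w_projection is an illustrative, not fitted, weight."""
    return {
        option: w_projection * own_utility[option]
                + (1.0 - w_projection) * opponent_evidence[option]
        for option in own_utility
    }

# Own preferences favour A; every observation says the opponent favours B.
estimate = projected_opponent_utility({"A": 1.0, "B": 0.0},
                                      {"A": 0.0, "B": 1.0})
# With heavy projection, the model still predicts the opponent prefers A.
```

Against an opponent who actually prefers B, a player reasoning this way is predictably wrong in a direction an adversary can exploit.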

Temporal Dimension

Another contribution of game research has been quantitative detail on how decision-making under uncertainty degrades over the course of a decision. Classical psychology explored single-shot decisions under uncertainty extensively, but sequential decision-making – where each step requires adapting to new information – is far harder to study experimentally because the design space explodes. This is exactly where the development of game-playing systems created a natural laboratory.

One clinically important implication: a patient making a single decision about a treatment course is in a fundamentally different cognitive position than a patient adjusting medications sequentially over months, even when the underlying probabilities are equivalent. The sequential case shows accelerating drift towards whatever strategy was selected first, regardless of incoming evidence – a phenomenon adversarial-strategy researchers describe as strategic anchoring. In observational analyses of competitive decision-making this shows up as persistence with losing strategies well after the opposing player has visibly adjusted its approach; in clinical contexts it shows up as delays in switching treatments despite objectively measurable indicators of non-response.
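Strategic anchoring can be sketched as discounted Bayesian updating: each contradictory observation's log-likelihood ratio is multiplied by a weight below 1, so the switch away from the initial strategy is delayed. All numbers here are illustrative, not fitted to data:

```python
import math

def trials_to_switch(prior_committed=0.9, evidence_weight=1.0,
                     likelihood_ratio=0.5):
    """Number of contradictory observations before belief in the initial
    strategy drops to even odds. evidence_weight < 1 models anchoring:
    each observation's log-likelihood ratio is discounted."""
    log_odds = math.log(prior_committed / (1.0 - prior_committed))
    step = evidence_weight * math.log(likelihood_ratio)   # negative per trial
    trials = 0
    while log_odds > 0:           # belief still favours the initial strategy
        log_odds += step
        trials += 1
    return trials

# A full-weight updater abandons the strategy after 4 contradictory trials;
# an anchored updater at weight 0.3 needs 11.
```

The qualitative point survives any choice of parameters: the more evidence is discounted, the longer the commitment to the losing strategy, matching the delayed-switching pattern described above.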

Implications for Applied Psychology

The translational potential of this work lies in psychometric tools and quantitative baselines for assessing individual differences in handling uncertainty that existing questionnaire measures of risk tolerance and impulsivity may not capture. Behavioral measures derived from controlled strategic decision tasks correlate with – but carry information beyond – established assessments of risk tolerance and impulse control. Preliminary evidence suggests they predict real-world outcomes in domains where managing uncertainty has consequences, such as financial decision-making and occupational performance.

Perhaps the most immediately applicable takeaway for clinicians is the aleatory/epistemic distinction as a clinical framework. Many patients presenting with what appears to be generalized decisional anxiety may in fact perform well on aleatory tasks and poorly on epistemic ones, and the appropriate treatment differs accordingly. Identifying which structure a patient struggles with is a potentially valuable first step in formulating a treatment plan.

From a research perspective, perhaps the greatest contribution of this methodology is the prospect of large-scale datasets of strategic decisions under uncertainty, generated as a by-product of engineering programs building systems that simulate adversarial decision-making. As these datasets become available, the empirical basis for studying how individuals manage uncertainty in realistic decision scenarios will be far stronger than it was even five years ago.

The development of computational game theory in recent years has produced an extremely fertile area of interdisciplinary research: one that has clarified previously murky concepts in decision-making under uncertainty, provided tighter measurement than traditional human-subjects laboratory work allowed, and opened discussions about interventions to improve decision-making in high-stakes, high-uncertainty situations.

The author writes about the intersections between computational decision science and applied cognitive psychology.