1 Introduction

Opportunity costs are the foregone benefits associated with the option not chosen when making a decision between different alternatives (Thaler, 1980). For example, the opportunity cost of a college education is the income and experiences foregone by not working, and the opportunity cost of a public health program is the health foregone by spending less money elsewhere in the health sector. If opportunity costs are neglected or not fully attended to at the point of decision making, not all relevant consequences of the choices we make are considered, and as a result, good or even superior alternatives may be overlooked. Ultimately this leads to a loss in welfare, and potentially even in human lives (Fetherstonhaugh et al., 1997). Beyond the consumer domain especially, there are many cases where the misallocation of resources can lead to bad but preventable outcomes for society: from healthcare policies to decisions that affect climate change, or even the decision to pursue a war. A central mechanism behind opportunity cost neglect is that people tend to be selective in their use of information, focusing on the most prominent feature of a decision (Legrenzi et al., 1993), which implies a tendency to restrict attention to what is explicitly represented in one’s mental model of the decision, namely the focal alternative (Erlandsson et al., 2020; Persson et al., 2022; Slovic, 1975; Tversky et al., 1988).

In experimental settings, opportunity cost neglect is usually inferred from the observation that participants’ propensity to choose a given option (e.g., buy a cell phone, invest time and effort in education, donate to a charity) changes when the alternative use of a scarce resource is made salient. In the prototypical task introduced by Frederick et al. (2009, Study 1), the proportion of participants who stated a willingness to purchase a special-priced DVD for $14.99 declined from 75 to 55% when people were reminded about the opportunity cost, i.e., that not buying the DVD implied that the $14.99 could be used for other purchases. The study by Frederick et al. (2009) was the first to demonstrate opportunity cost neglect in a behavioural experiment, and it showed that the magnitude of the bias was substantial: simply reminding people that money has alternative uses decreased their willingness to purchase consumer products by almost 50% (from 73 to 37% in Study 2). Following Frederick et al. (2009), several studies have used similar experimental paradigms to explore the existence of opportunity cost neglect in different decision-making domains, such as public policy (see Aharoni et al., 2018, 2020 for an application to criminal punishment recommendations and Persson & Tinghög, 2020a, 2020b for the health domain), charitable giving (Moche et al., 2020), intertemporal choice (Read et al., 2017; Spiller, 2019) and consumer choice (Greenberg & Spiller, 2016; Moche et al., 2020; Plantinga et al., 2018; Read et al., 2017; Spiller, 2019; Weiss & Kivetz, 2019; Zhang et al., 2017). Nonetheless, arguments have been put forward that opportunity costs might be overestimated, rather than neglected, when making choices under external constraints (Weiss & Kivetz, 2019). This is because people may exaggerate the benefits associated with alternative non-salient options as a consequence of loss aversion. Moreover, some studies have found no opportunity cost neglect, or even the opposite effect. For example, Persson and Tinghög (2020b) found that older people were more likely to spend when reminded about the opportunity cost in consumer choice, suggesting that age might moderate the effect.

Despite the importance of opportunity cost neglect, both from a theoretical and a practical point of view, no study has to date systematically and critically addressed the strength of the effect, its occurrence across domains, and the boundary conditions that moderate it. We therefore carried out a systematic review and a meta-analysis to provide a critical and extensive overview of the phenomenon.

2 Methods

The study was preregistered. The preregistration, together with data and analysis code, can be accessed via the repository at https://osf.io/p4yjm/. Data collection started on 2020-08-04.

We carried out a systematic review, searching the literature for experimental studies investigating the effect on choice of making alternatives and foregone benefits explicit. We also made open calls and contacted researchers for unpublished studies. In the meta-analysis, we included studies that had two or more conditions equivalent in content, but where one condition made the alternative use of a scarce resource more salient. As specified in the preregistration, studies were only included in the meta-analysis if they used a between-subjects design and reported statistics for the strength of the effect. We had initially planned a sensitivity analysis to evaluate the appropriateness of within-subjects designs (for future studies, based on a comparison with our collected data from between-subjects studies), but the number of studies using this procedure was too small. Only studies in English were included.

2.1 The original experimental paradigm

Figure 1 shows the original paradigm to study opportunity cost neglect introduced by Frederick et al. (2009). In this paradigm, participants see a brief scenario that describes a possible purchase situation, and they are randomly assigned to either a control condition or an experimental condition. In the control condition, participants decide whether to buy a good or not. In the experimental condition, participants face the same decision, but opportunity costs are made more salient by adding “keep the money for other purchases” to the “do not buy” alternative.

Fig. 1

Original experimental paradigm. Scenario from Study 1A in Frederick et al. (2009)

Opportunity cost neglect is then commonly inferred if a lower proportion of participants say they would buy the good (by selecting alternative A) in the experimental condition than in the control condition. This paradigm thus operates under the assumption that if people fully consider opportunity costs at the point of decision-making, adding the reminder (in the experimental condition) should have no effect on decisions. In contrast, if opportunity costs are routinely neglected, adding the reminder should make some people less willing to buy the good, presumably because they consider alternative uses of the available money more carefully.
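To make this inference concrete, the minimal sketch below (Python; the counts are purely hypothetical, loosely echoing the proportions reported by Frederick et al., 2009) compares buy rates across conditions with a simple two-proportion z-test. If opportunity costs were fully considered, the reminder should leave the buy rate unchanged and z should be close to zero.

```python
import numpy as np
from scipy import stats

# Hypothetical counts of participants choosing "buy" in each condition
buy_ctrl, n_ctrl = 150, 200   # 75% buy without the reminder
buy_trt, n_trt = 110, 200     # 55% buy with the opportunity-cost reminder

p_ctrl, p_trt = buy_ctrl / n_ctrl, buy_trt / n_trt
p_pool = (buy_ctrl + buy_trt) / (n_ctrl + n_trt)

# Two-proportion z-test for the difference in buy rates
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_ctrl + 1 / n_trt))
z = (p_ctrl - p_trt) / se
p_value = 2 * stats.norm.sf(abs(z))
print(f"buy rate {p_ctrl:.0%} vs {p_trt:.0%}: z = {z:.2f}, p = {p_value:.4f}")
```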

Most subsequent studies on opportunity cost neglect have used close variants of this original paradigm. Typically, the good and context in question are adapted, but the main experimental manipulation (control vs. experimental condition via a money reminder) is retained. For example, Plantinga et al. (2018) used a tablet, a movie ticket, and a concert ticket (each in a separate yes/no decision).

2.2 Studies included in the meta-analysis

Figure 2 shows the procedure for the literature search and study inclusion. Based on our initial search, we identified 1614 articles to be screened for potential inclusion in the meta-analysis. Most of these were excluded, typically because they were topically unrelated to the concept of opportunity cost neglect. After the screening phase, 162 articles remained for eligibility assessment. Of these, 150 were excluded, most of them because they were non-experimental or because the experimental conditions differed on more than the one dimension we were interested in (salience of opportunity costs). In the end, 12 eligible articles satisfied all preregistered inclusion/exclusion criteria and were included in the meta-analysis.

Fig. 2

PRISMA flow diagram. Procedure for the literature search and study inclusion, with the number of studies at each stage

In total, the meta-analysis covers 12 studies (articles) involving 39 experiments with a total of 12,093 subjects across 5 countries (see Appendix Table 1 for study characteristics). Seven of the experiments came from unpublished sources: two from a conference summary (Chatterjee et al., 2011), two from a master’s thesis (Fritzell & Strand, 2018) and three from data provided by the authors (Persson & Tinghög, 2020a; Zhang, 2020). Some studies tested the same participant more than once (e.g., in two different domains), resulting in a total of N = 14,005 observations. Two-thirds of the experiments were conducted online with survey workers (e.g., M-Turk, Prolific). A few studies used general population samples or experts in certain areas; the rest used student samples.

Most of the included studies used the same or slight modifications of the original design by Frederick et al. (2009), either for replication purposes or to explore potential differences in opportunity cost neglect across domains or types of populations.

Zhang et al. (2017) compared two different populations (Canada and China) to test for effects of sociocultural differences on the consideration of opportunity costs, and Plantinga et al. (2018) tested whether participants with low socioeconomic status would be more prone to consider opportunity costs than their wealthier counterparts. Other studies used the paradigm to test the effect in different domains. Moche et al. (2020) focused on charitable giving, reminding participants about the opportunity costs of spending the donation money in other situations. Persson and Tinghög (2020b) tested opportunity cost neglect in general healthcare priority-setting decisions, using both participants from the general population and experts in healthcare decision-making. In this case, participants had to choose whether to invest in a new healthcare program, but in the experimental condition they were reminded that the money could be allocated to other programs. Persson and Tinghög (2020a) used a similar design and also focused on health care, but for more specific rationing decisions. Aharoni et al. (2018) investigated criminal punishment judgments, making the costs of convictions explicit both for the state as a whole and per taxpayer.

Some studies focused on opportunity cost considerations in standard intertemporal choices or in time allocation decisions. Fritzell and Strand (2018), Zhang (2020) and Chatterjee et al. (2011) reminded participants that time could be spent on other activities; the last of these (Chatterjee et al., 2011) additionally required participants to engage in trade-off thinking between time and money. Read et al. (2017) asked participants to choose between receiving a smaller, sooner amount of money or a larger, later one, and varied the extent to which participants were reminded of the foregone time-dated payments implied by the different options; e.g., choosing the larger, later amount means not getting the smaller, sooner amount. Finally, Greenberg and Spiller (2016) investigated how the salience of opportunity costs influences repeated evaluations (ratings) of different options, e.g., two nights in a luxury hotel. Their items included both experience goods and standard consumption goods.

2.3 Statistical analysis

For each included study, we quantified the effect as the logarithm of the odds ratio of choice rates between the experimental group (in which opportunity costs are made more salient) and the control group. A positive effect indicates that subjects became less willing to consume/choose the presented option when alternatives were made more salient, i.e., opportunity cost neglect. A negative effect, in contrast, indicates that people became more willing to consume/choose the presented option when alternatives were made salient. The unit of analysis was always individual participants at the experiment level. Thus, some participants contributed more than one observation to our sample, e.g., when the same participant was tested in two different domains (two experiments) in the same study.
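As an illustration of how this effect size is computed for a single experiment, the sketch below derives the log odds ratio and its conventional standard error from a 2×2 table of choices; the counts are hypothetical, and the sign convention follows the one described above.

```python
import math

def log_odds_ratio(chose_ctrl, n_ctrl, chose_trt, n_trt):
    """Log odds ratio of choosing the focal option, control vs. experimental.

    Positive values mean participants were less willing to choose the focal
    option once alternatives were made salient, i.e., opportunity cost neglect.
    """
    a, b = chose_ctrl, n_ctrl - chose_ctrl   # control: chose / did not choose
    c, d = chose_trt, n_trt - chose_trt      # experimental: chose / did not choose
    log_or = math.log((a / b) / (c / d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard SE for a 2x2 table
    return log_or, se

log_or, se = log_odds_ratio(150, 200, 110, 200)    # hypothetical counts
print(f"log OR = {log_or:.2f} (SE = {se:.2f}), OR = {math.exp(log_or):.2f}")
```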

Most meta-analyses are carried out using two levels that identify the sources of variability in the data at the participant level (Level 1) and at the study level (Level 2). Considering that, in our case, many papers (studies) included several experiments, we tested whether a multivariate three-level model, which accounts for the similarity of experiments coming from the same paper, would fit the data better than a canonical two-level random-effects model. The analysis showed that the three-level model did not improve model fit (p = 1.00). Therefore, a random-effects meta-analysis with the DerSimonian–Laird pooling method was carried out, using the natural logarithm of the odds ratio of choice rates as the effect size measure, together with its standard error.
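For readers who want the pooling arithmetic spelled out, here is a minimal numpy sketch of the DerSimonian–Laird estimator; the analysis code actually used is available in the OSF repository, so this is illustrative only.

```python
import numpy as np

def dersimonian_laird(y, se):
    """DerSimonian-Laird random-effects pooling of per-experiment effects.

    y  : effect sizes (here, log odds ratios)
    se : their standard errors
    Returns the pooled effect, its SE, the tau^2 estimate, and I^2 (%).
    """
    y, v = np.asarray(y, float), np.asarray(se, float) ** 2
    w = 1 / v                                # fixed-effect (inverse-variance) weights
    y_fix = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fix) ** 2)         # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-experiment variance
    w_re = 1 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, np.sqrt(1 / np.sum(w_re)), tau2, i2

# Example with made-up effects:
# pooled, se_p, tau2, i2 = dersimonian_laird([0.90, 0.20, 0.40], [0.22, 0.15, 0.30])
```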

3 Results

Figure 3 shows a forest plot of the 39 experiments included in the meta-analysis. The odds ratio is reported on the x axis, showing the change in the rate of choices. When opportunity costs are made more salient, the odds of choosing the positively highlighted option are, on average, 1.48 times the odds of choosing it in the control condition. Thus, the analysis showed a small effect, equivalent to a Cohen’s d of 0.22 (CI [0.15; 0.27]), that is statistically significant (log odds = 0.39, z = 6.02, p < 0.0001, CI [0.26; 0.52]). Between-study heterogeneity was moderate (I² = 63.11%; Q(38) = 103.01, p < 0.0001). In addition, the figure shows that (i) only one study showed a statistically significant effect opposite to the hypothesized direction, (ii) experiments with larger sample sizes, identifiable by the size of the squares, have effects closer to zero, and (iii) the experiments from the original study by Frederick et al. (2009) stand out.
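The reported numbers can be cross-checked with the standard logistic conversion between a log odds ratio and Cohen’s d, d = log(OR) · sqrt(3)/pi (Chinn, 2000); a quick sketch:

```python
import math

def log_or_to_d(log_or):
    """Convert a log odds ratio to Cohen's d (logistic approximation, Chinn 2000)."""
    return log_or * math.sqrt(3) / math.pi

print(f"OR = {math.exp(0.39):.2f}")      # ~1.48, the reported odds ratio
print(f"d  = {log_or_to_d(0.39):.2f}")   # ~0.22, the reported Cohen's d
```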

Fig. 3

Forest plot and summary of the effects in rate of change of the alternative. Squares and whiskers represent the mean effect and 95% CI found in each study, respectively. The rhombus depicts the summary of the effect, with its centre representing the average effect and its length the 95% CI. Unpublished studies are followed by an asterisk

To assess publication bias, that is, the tendency for significant results to be overrepresented in the scientific literature because they are easier to publish, we conducted a regression test and a rank correlation test for funnel plot asymmetry. Both the regression test (z = 2.56, p = 0.01) and the rank correlation test (Kendall's tau = 0.28, p = 0.01) yielded significant results, suggesting that studies with null effects, or with results in the direction opposite to the hypothesized one, are missing from the literature. This result is also visible in the sunset (power-enhanced) funnel plot (Fig. 4). The figure shows the distribution of effects for both published (in white) and unpublished (in black) studies as a function of their power. Studies with higher statistical power reported effects closer to zero, that is, an odds ratio of 1. Without publication bias, we would expect effects to be distributed symmetrically around the vertical dotted line depicting the summary effect of the meta-analysis. Instead, we find a skewed distribution, and only one study with an effect in the opposite direction.
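For concreteness, simplified versions of the two asymmetry tests can be sketched as follows (standard packages implement refinements, e.g., standardizing the effects against the pooled estimate in the rank test, so treat this only as a conceptual outline):

```python
import numpy as np
from scipy import stats

def egger_test(y, se):
    """Egger-style regression test: regress standardized effects on precision.

    An intercept that differs from zero indicates funnel plot asymmetry.
    """
    y, se = np.asarray(y, float), np.asarray(se, float)
    res = stats.linregress(1 / se, y / se)
    t = res.intercept / res.intercept_stderr
    return res.intercept, 2 * stats.t.sf(abs(t), len(y) - 2)

def rank_correlation_test(y, se):
    """Begg-style test: Kendall's tau between effect sizes and their variances."""
    return stats.kendalltau(np.asarray(y, float), np.asarray(se, float) ** 2)
```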

Fig. 4

Sunset (power-enhanced) funnel plot with both published (white dots) and unpublished (black dots) studies. The x axis shows the strength of the effect, the y axis shows the standard error of the effect, and colours depict the statistical power of the studies. Power is computed at the study level assuming the meta-analytic summary effect as the true effect. Contours represent 95% and 99% confidence intervals for an effect equal to zero, meaning that dots outside both contours, or inside the purple area, indicate an effect different from zero. The vertical dotted line depicts the main effect found in the meta-analysis. Medpower is the median power of all included studies. d33 and d66 indicate the true effect size necessary for the median power of the studies to have been 33% or 66%, respectively. E, O and pTES show the results of a test of excess significance (Ioannidis & Trikalinos, 2007), and R-Index denotes expected replicability

We also carried out a leave-one-out sensitivity analysis to assess the change in the overall effect when eliminating one study at a time (Fig. 5). This analysis showed that the studies from Frederick et al. (2009) differed from all other studies (p < 0.001). The heterogeneity of the pooled model decreased from I² = 79.68% to I² = 54.96% when the experiments from Frederick et al. (2009) were excluded.
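Conceptually, the leave-one-out analysis just re-runs the pooled model with each experiment removed in turn and records how the summary effect shifts; a minimal sketch, reusing the dersimonian_laird function defined in the Methods sketch above:

```python
import numpy as np

def leave_one_out(y, se, pool=dersimonian_laird):
    """Re-pool the effects with each experiment left out in turn."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    rows = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i         # drop experiment i
        pooled, se_p, tau2, i2 = pool(y[mask], se[mask])
        rows.append((i, pooled, i2))          # summary effect and I^2 without i
    return rows
```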

Fig. 5

Leave-one-out analysis. Sensitivity analysis showing the change in the effect eliminating single groups of pooled studies. Unpublished studies are followed by an asterisk

Thus, the experiments from Frederick et al. (2009) appear to be outliers that substantially bias the overall effect. Also supporting this interpretation, when the study was excluded from the main analysis, the regression test and rank correlation test for funnel plot asymmetry were no longer significant (z = 0.85, p = 0.40; Kendall's tau = 0.13, p = 0.29). The resulting main effect was reduced to a log odds ratio of 0.29 (z = 5.05; p < 0.0001; CI [0.177; 0.40]), which corresponds to an odds ratio of 1.34 (CI [1.21; 1.70]). The result is still significant but small, comparable to a Cohen’s d of 0.16 (CI [0.10; 0.22]).

3.1 Moderation models

We also examined the extent to which opportunity cost neglect could be moderated by factors discussed in the literature (domain, amount of money, age). Full results from these analyses can be found in the Supplementary materials. Starting with the domain in which choices were made, five main domains were identified: consumer choices, prosocial behaviour/donations, intertemporal choice, policy interventions and opportunity cost of time. Figure 6 shows the forest plot for this subgroup analysis. Note that the intertemporal choice and donation domains each contain experiments from a single study and should therefore be interpreted with caution. The effect for the consumer domain is the largest, with an estimated odds ratio of 1.75 when comparing choices across the experimental and control conditions. However, excluding the experiments from Frederick et al. (2009), the effect decreases to an odds ratio of 1.46 (CI [1.21; 1.77]), which is close to the overall effect.
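A bare-bones version of such a subgroup analysis simply pools the effects separately within each domain and compares the summaries on the odds-ratio scale; the sketch below (again reusing dersimonian_laird, with a hypothetical domain labelling) conveys the idea, whereas the published analysis used formal moderation models.

```python
import numpy as np
from collections import defaultdict

def subgroup_summaries(y, se, domain):
    """Pool effects separately within each domain (simple subgroup analysis)."""
    groups = defaultdict(list)
    for yi, sei, dom in zip(y, se, domain):
        groups[dom].append((yi, sei))
    out = {}
    for dom, pairs in groups.items():
        ys, ses = map(np.array, zip(*pairs))
        pooled, se_p, tau2, i2 = dersimonian_laird(ys, ses)
        out[dom] = (np.exp(pooled), pooled, se_p)  # odds ratio + log scale
    return out

# Hypothetical usage:
# subgroup_summaries([0.9, 0.4, 0.1], [0.2, 0.2, 0.3],
#                    ["consumer", "consumer", "policy"])
```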

Fig. 6

Forest plot with domain moderation analysis. Squares and whiskers represent the mean effect and 95% CI found in each study, respectively. The rhombus depicts the summary of the effect, with its centre representing the average effect and its length the 95% CI. Unpublished studies are followed by an asterisk

Comparing the different domains, the only domain that differed significantly from consumer choice was intertemporal choice (β = − 0.38, z = − 2.23, p = 0.026; see Supplementary materials Table S1 for all contrasts). However, it should be noted that the manipulation in these studies was slightly different from the others. Read and colleagues made the lack of gains salient by stating explicitly that nothing would be received at a given point in time, without expanding the set of alternatives by, e.g., adding “use the money difference for other purchases” or “use the money right away for something you need as soon as possible”. It could be argued that this manipulation made opportunity costs less explicit than in the other experiments, leaving participants to infer the implied alternatives rather than having them directly stated. When the experiments from Frederick et al. (2009) are excluded, the difference between consumer choice and intertemporal choice is no longer significant (z = − 0.2, p = 0.17).

Next, we investigated whether opportunity cost neglect was moderated by the mean amount of money involved in decisions made in the consumer domain (see Supplementary materials Table S2 and Fig. S1 for complete results from this analysis). Here, studies with alternatives between $500 and $1000 showed a significantly larger effect than choices involving less than $20 (z = 3.26, p = 0.001, β = 1.07). However, it should be noted that only studies from Frederick et al. (2009) were present in the $500–$1000 range. In fact, these experiments differed from all others (z = 4.96, p < 0.001, β = 0.94).

In our final moderation analysis, we investigated moderation by mean age of participants (full results are in Supplementary materials Table S3 and Fig. S2). Age groups were created for values lower than 30, between 30 and 40, between 40 and 50, and over 50 years old. Since only studies that reported mean age or made data available could be included, it should be noted that data from Frederick et al. (2009) were not present in this analysis. Results showed that no group differed from the group with mean age lower than 30 (p > 0.05); however, mean age may be too coarse an index. Persson and Tinghög (2020b) found a correlation between participants’ age and the strength of the effect, so using mean age alone could mask such a relation.

4 Discussion

For the first time, the phenomenon of opportunity cost neglect was quantitatively synthesized through a meta-analysis. We conducted an extensive search and identified 12 eligible studies, three of which were unpublished, consisting of 39 experiments with a total of 12,093 subjects.

The meta-analysis showed a small but robust mean effect (Cohen’s d = 0.22, p < 0.001). Taken together, our results indicate that the magnitude of opportunity cost neglect is likely much smaller than suggested by previous studies, including the seminal paper by Frederick et al. (2009), perhaps because the initial studies had relatively small samples and thus lower statistical power, which typically inflates the effect sizes of published findings (Button et al., 2013). Still, opportunity cost neglect appears to be a robust phenomenon with substantive potential to influence decision-making, even at the smaller effect sizes obtained in this meta-analysis. When we adjust for publication bias by excluding the outlying experiments from Frederick et al. (2009), the estimated mean effect is equivalent to a Cohen’s d of 0.16. This is not a negligible effect, keeping in mind the fundamental role of opportunity costs in decision-making, both in consumer choice and at the policy level.

Going forward, there are important aspects of opportunity cost neglect that have not yet been fully explored. We conducted a moderation analysis by domain, but due to the relatively low number of studies outside the consumer domain, no strong conclusions could be drawn from it. We generally expected opportunity cost neglect to be stronger in public policy decisions than in private spending, because opportunity costs are less salient in public spending (it is not out-of-pocket), and for some policy domains, like health care, people may be reluctant to engage in trade-off thinking at all because health is of special moral importance (e.g., sometimes seen as a ‘sacred value’; Persson et al., 2022; Tetlock, 2003). However, the moderation analysis indicated the opposite pattern, with a smaller mean effect for the policy domain than for the consumption domain, although the difference was not statistically significant (see Fig. 6 and Supplementary material Table S1). Moreover, there was some heterogeneity among the studies within the policy group: one study found no effect (Persson & Tinghög, 2020a), one found a moderate effect consistent with opportunity cost neglect (Persson & Tinghög, 2020b), and another found quite a large effect (Aharoni et al., 2018). There is clearly scope for more research on opportunity cost neglect in the public policy area.

Some additional limitations should be noted. First, the literature on opportunity cost neglect is based on experiments that are primarily hypothetical (with the exception of one of the experiments in Frederick et al., 2009), and the results of our meta-analysis should be interpreted with this in mind. Second, even though most of the included studies used similar designs, the items and contexts varied substantially between some of them. For example, the studies on health policy naturally considered interventions that could save lives and cost millions of dollars, whereas studies in the consumer domain concerned standard consumption items like a smartphone, a tablet, a laptop, a movie ticket, and so on. Moreover, more than half of the experiments in the meta-analysis were in the consumer domain (21 out of 39), with lower numbers in each of the other domains. Thus, a cautious interpretation is that the result for the consumer domain is robust and informative from a meta-analytic point of view, whereas for the other domains more caution is warranted.

In conclusion, this work highlights the importance of replication, preregistration, and meta-analysis for assessing the cumulative evidence for behavioural phenomena discovered in empirical work. In the wake of the replication crisis in psychology, there is now an increasing awareness of methodological challenges across the social sciences. Findings from laboratory experiments are particularly suited to meta-analysis because standardized protocols are commonly used and open data are increasingly available. Meta-analysis facilitates the aggregation of the fragmented information provided by single studies into a more structured scientific consensus. The concept of opportunity cost neglect paints a picture of human decision-making as limited, but also as leaving room for improvement. Taken at face value, if small manipulations of information can influence decision-making (as shown in the studies included in this meta-analysis), it should likewise be possible to boost or nudge people into making better decisions. This would primarily be relevant in the context of policymaking, using, for example, decision aids to facilitate a more structured decision process with the goal of improving the allocation of resources in society.