Introduction

Price gouging refers to the practice whereby, during times of emergency, sellers raise prices to levels far beyond what is considered reasonable. Many authorities regard the practice as illegitimate and have introduced anti-price-gouging laws, either explicitly or implicitly. Such a law places a cap on price increases (e.g. 10%) during a certain period under a declared state of emergency. Examples include the United Kingdom’s Competition Act 19981, Article 102 of the Treaty on the Functioning of the European Union2 and the anti-price-gouging laws of multiple states in the United States3. Such legislative controls on market prices have prompted heated debate across disciplines such as economics4,5,6, ethics7,8,9 and politics10,11, which demonstrates the complexity and multi-dimensionality of the issue.

The effectiveness and legitimacy of anti-price-gouging laws have long been debated from various perspectives, such as maximising welfare and promoting virtue10. For example, opponents of such laws argue that they hamper the recovery of supply chains by discouraging sellers from taking proactive actions to restore disrupted supply chains4,6,12. Another opposing argument is that artificially controlled prices encourage hoarding behaviour (i.e. people buying more than they need), which exacerbates the shortage of supplies after a disaster event13,14,15. Accordingly, in this view, the law is bound to have adverse effects on a community in spite of its original intention. On the other hand, proponents argue that by inhibiting excessive price increases, the law helps people who are already affected by a disaster event to purchase essential goods and repair damaged assets11,16,17. Another noteworthy argument is that the law can bolster community solidarity by signalling the authorities’ disapproval of selfish behaviour5,8,18.

Notably, as recognised by previous studies19, all of these arguments appear to hold some truth. If one accepts this observation (i.e. that they are all valid), a reasonable approach to decision-making is to compare the influences of these competing forces on the decision objectives. Since different objectives conflict with each other (e.g. an increased price alleviates a supply shortage but erodes people’s purchasing power), the analysis results should be used by multiple stakeholder groups to reach a consensus on an acceptable balance of the selected decision metrics. Such analysis can be performed by setting up a single model that simulates all the phenomena of interest together.

The necessity of such interdisciplinary analysis has long been recognised. For example, ref. 20 integrated earthquake engineering, economics and social sciences to more accurately quantify disaster impacts on people. Refs. 21,22,23 connected engineering analysis results to social inequality. Meanwhile, to facilitate interdisciplinary research, multiple frameworks have been developed, including multidisciplinary design research24 and validation of interdisciplinary modelling25. Another notable interdisciplinary development is the application of computational methods to analysing social phenomena, for which some of the most popular techniques are agent-based modelling26,27,28,29 and data analytics30,31,32. A more progressive perspective was proposed by Zajko33, who suggested using interdisciplinarity “to formulate new problems, rather than providing new solutions to existing problems” and proposed using mathematics to uncover solutions hidden in existing social problems.

However, to the best of the authors’ knowledge, the issue of anti-price-gouging laws has not been investigated from such an interdisciplinary and comprehensive perspective. That is, while individual aspects (e.g. the impact on supply chains and the presence of hoarding behaviour) have been investigated, all relevant aspects have not been considered simultaneously. Therefore, in this article, the effectiveness of anti-price-gouging laws is investigated by developing a probabilistic model that embodies the four phenomena most discussed in relation to price gouging: price gouging, recovery of the supply chain, donation and hoarding. We apply the model to an earthquake scenario in the San Francisco Bay Area in California, USA. The proposed model assesses two decision objectives concerning the affected households: continued access to basic goods (i.e. products essential for livelihoods, such as water, food and bills) and rapid repair of damaged assets. These two objectives are affected by the magnitude of asset damage and the extent of price increases. As causes of price increase, we consider increased demand following an earthquake event, price gouging and additional demand induced by hoarding.

The developed model characterises a new interdisciplinary approach to assessing a societal issue, based on which we propose the narrative numeric (NN) discourse method. The NN method connects narrations and numbers by translating narrative arguments into numerical models and translating numeric decision metrics back into narrations. We formulate the NN method as an 8-step framework. Then, we make the idea concrete by illustrating how each step is embodied in the proposed simulation model of anti-price-gouging laws. We also present noteworthy premises and corollaries of the NN method, which may be useful in applying the method to real-world social and political discourses.

Results: Effectiveness of anti-price-gouging laws

We perform two analyses, without and with consideration of donation and hoarding, presented in Figs. 1 and 2, respectively. The effectiveness of an anti-price-gouging law is examined by varying the price cap from 0% to 200% at 5% intervals, where 0% and 200%, respectively, represent the strictest scenario and the effective absence of the law. To support decision-making, we develop four complementary risk metrics: the shortage in basic goods, the time taken to repair damaged assets (first row of the figures), the well-being loss caused by a supply shortage and the well-being loss caused by price increases (second row). These metrics are defined to reflect three decision-making perspectives: the entire population under all possible disruption scenarios (first column), the entire population under the worst disruption scenarios (second column) and the worst-hit population under all disruption scenarios (third column).

Fig. 1: Analysis results of price-gouging.
figure 1

a–c Blue solid curves represent an average cumulative shortage of basic goods measured in USD per household (left y-axis), and red dashed curves represent an average number of weeks for complete repair (right y-axis). d–f The graphs illustrate an average cumulative well-being loss. Blue solid curves represent loss caused by supply shortage (left y-axis), and red dashed curves represent loss caused by price increase (right y-axis). a, d Averages of all households and samples. b, e Averages of 5%-percentile worst samples of all households. c, f Averages of the 5%-percentile worst household of all samples. a–f Shaded areas represent ± 1 standard deviation.

Fig. 2: Analysis results of price-gouging with donation and hoarding in consideration.
figure 2

Notations are the same as in Fig. 1.

In regard to the shortage in basic goods, represented by the blue solid curves in Figs. 1a–c and 2a–c, the shortage is minimised in all cases by a price cap between 30% and 35%. We find that this value coincides with the average price increase caused by the increases in production cost and demand, excluding the increase due to price gouging. This increase is evaluated from the samples to be around 33%, calculated as

$$\frac{1}{N}\mathop{\sum }\limits_{n=1}^{N}\frac{2}{3}\left(\Delta {\widetilde{P}}_{{\rm {b}}0,n}+{\widetilde{\alpha }}_{{\rm {b}},n}\cdot \Delta {\widetilde{Q}}_{{\rm {b}}0,n}\right),$$
(1)

where the factor 2/3 arises from the assumed linear decrease in price over time (see the “Methods” section). As for the well-being losses caused by a shortage in basic goods, illustrated in Figs. 1d–f and 2d–f, we observe a consistent trend across all cases: an increasing price cap reduces the well-being loss caused by supply shortages while increasing the loss induced by higher prices.
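
For concreteness, the quantity in Eq. (1) is a plain sample average. A minimal Python sketch (the distributions and values below are illustrative placeholders, not the study’s calibrated inputs) is:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # number of Monte Carlo samples

# Hypothetical stand-ins for the sampled post-disaster increases:
# dP_b0 ~ production cost increase, dQ_b0 ~ demand increase,
# alpha_b ~ demand-to-price coefficient.
dP_b0 = rng.normal(0.10, 0.03, N)
dQ_b0 = rng.normal(0.40, 0.10, N)
alpha_b = rng.normal(1.00, 0.20, N)

# Eq. (1): the factor 2/3 accounts for the assumed linear decay of
# disruptions over time (see "Methods").
cap_estimate = np.mean((2.0 / 3.0) * (dP_b0 + alpha_b * dQ_b0))
print(f"estimated shortage-minimising price cap: {cap_estimate:.0%}")
```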

The number of weeks until complete repair, illustrated by the red dashed curves in Figs. 1a–c and 2a–c, shows different trends depending on household profiles. From the perspective of the entire population, whether evaluated on all samples (Figs. 1a and 2a) or on the worst samples (Figs. 1b and 2b), a minimum is achieved by a price cap between 20% and 25%. We note that this is lower than the cap that minimises the shortage of basic goods. In contrast, for the worst-impacted households (Figs. 1c and 2c), the trend differs: the curve remains largely constant, with the minimum occurring at a 0% price cap. This implies that the worst-hit populations remain insensitive to anti-price-gouging laws and thus require a different set of support schemes.

Donation is simulated by assuming that households without any remaining repair costs donate 10% of what they have left after purchasing all necessary basic goods. We find that donation has a significant effect in reducing the time taken to repair assets, as the overall repair time is reduced by around 50%. Another notable observation is that donation changes the sensitivity of repair time to the price cap. When donation is absent (Fig. 1a, b), the repair time increases after reaching its minimum value, whereas with donation present (Fig. 2a, b), it remains constant after reaching the minimum. This indicates that a community with a high donation rate remains more resilient to post-disaster price increases.

We simulate hoarding by assuming a maximum demand increase of 30% (i.e. a household purchases up to 30% more than it needs when it can afford the excess), which, as expected, increases the well-being losses caused by supply shortage. By comparing the blue solid curves in Fig. 2d–f to those in Fig. 1d–f, we find that the average well-being losses increase by around 30%. This coincides with the assumed hoarding rate, indicating that hoarding incurs a linear increase in well-being losses.

Figures 1c, f and 2c, f illustrate the risk metrics for the 5%-quantile worst-hit households. We find that the curves represent different household profiles. To illustrate this, Table 1 presents two statistics, the mean and the coefficient of variation (c.o.v.) (i.e. standard deviation divided by mean), for three household profile metrics: weekly income, repair cost and the ratio of repair cost to weekly income. A lower c.o.v. indicates a higher influence on the decision metric. For each decision metric, the most explanatory profile metric (i.e. the one with the lowest c.o.v.) is marked in bold.

Table 1 Mean (c.o.v.) of profile metrics (columns) of 5%-worst households in terms of decision metrics (rows)

With respect to the shortage in basic goods (blue solid curves in Figs. 1c and 2c), we find that (weekly) income is the most relevant variable and that, compared to the median income of 1610 USD, the worst-affected households belong to a high-income group. This observation for absolute shortages in basic goods contrasts with that for well-being losses, which represent shortages relative to demand (cf. “Methods” section). As demand is positively correlated with income, the worst-hit households in terms of well-being loss are in a low-income group, earning less than the median value. This observation is consistent for losses induced both by price increases (red dashed curves in Figs. 1f and 2f) and by insufficient supply (blue solid curves in Figs. 1f and 2f). Notably, when donation and hoarding are present, the worst-affected households shift to an even higher-income group for the shortage in basic goods and to a lower-income group for well-being loss. Since access to basic goods is mostly affected by hoarding, this indicates that hoarding behaviour is more influential for the lowest- and highest-income groups than for middle-income groups.

On the other hand, in terms of repair time, the worst-affected households (red dashed curves in Figs. 1c and 2c) are those with the highest ratios of repair cost to income. We note that, given an earthquake scenario, there is a high variance in asset damage realisations, i.e. the same household may face a very different level of repair costs in each simulation. Therefore, households unfavoured by this randomness are confronted with particularly high asset damage relative to households with a similar profile. As noted above, one may consider implementing a separate support scheme for this particular group of households.

Finally, to understand the significance of donation and hoarding, Fig. 3 presents results for two additional settings: one with the donation rate reduced from 10% to 5% (Fig. 3a–c) and one with the maximum hoarding rate increased from 30% to 50% (Fig. 3d–f). In regard to donation, we observe that the difference in household repair time (red dashed curves in the figures) between the 5% rate and the 10% rate (Fig. 2) is much smaller than that between the 5% rate and no donation (Fig. 1). This is because, in both cases, the absolute amount of donated money constitutes only a small proportion of the total amount simulated in the model. The result again implies that donation has a significant impact on a community even when the sum is small. As for hoarding, we observe that the shortage of basic goods (blue solid curves in the figures) changes largely in proportion to the maximum hoarding rate.

Fig. 3: Analysis results of price-gouging with changed parameters representing donation and hoarding from Fig. 2.
figure 3

Notations are the same as in Fig. 2. a–c Donation rate is reduced to 5%. d–f Maximum hoarding rate is increased to 50%.

Proposition of the narrative numeric method to facilitate social discourses

The presented analysis results do not provide a single best solution but rather show conflicting trends across decision metrics, and thus additional discussions are required to settle on an agreeable compromise. Such a lack of a global optimum seems to hold for many important decision problems, for which various complementary viewpoints have been established throughout humankind’s history. Taking political philosophy as an example, Aristotle’s view emphasised the role of laws in cultivating good character in people34. Kant and Rawls maintained that the right should be put prior to the good35,36. Utilitarianism argues that laws should aim to maximise the overall happiness of a community, which its adherents believe can be objectively quantified37,38. For libertarianism, the most important value is the freedom of individuals39. Building on these developments, there have been persistent efforts to find a universal law that is not contingent on arbitrary conditions, but none has succeeded. For example, Kant argued for the existence of a universal law that “by itself commands absolutely and without any further motives”35. However, such an idea of universal laws has never materialised in a concrete form10. Another example is utilitarianism’s failure to summarise all types of happiness into a single metric10,38.

Such repeated failures strongly suggest that no universal solution exists. If this is the case, a rational strategy is to find a solution that reconciles opposing views by balancing conflicting values. However, this task remains challenging, primarily because of difficulties in communication across stakeholder groups, which raises the need for an effective discourse method. By generalising the proposed model of anti-price-gouging laws, we propose a new discourse approach, the narrative numeric (NN) discourse method, which facilitates intersectoral communication by running a narrative analysis and a quantitative analysis in parallel.

To mediate a social debate, the NN method consists of the eight steps listed in Table 2, which are iterated until a decision is concluded. Taking the debate over anti-price-gouging laws as an example, in Step 1, the decision objectives are identified: minimising the overall deficiency of basic goods (which would have to be covered by the government budget), minimising the repair duration of residential buildings (which impacts individual homeowners) and minimising disruptions to living standards (which impact the ordinary lives of individuals). These are translated into the decision metrics of the insufficient amount of basic goods, the repair duration, the well-being loss caused by an insufficient supply of basic goods and the well-being loss caused by price increases in basic goods. In Step 2, the candidate arguments are whether a price-gouging ban should be employed and, if so, what level of price cap would be appropriate. They are modelled by a decision variable representing different levels of a price cap, where a sufficiently high price cap is equivalent to the absence of a ban. In Step 3, the conflicting forces are an excessive price increase caused by price gouging and a decrease in supply quantity caused by a price cap. These are modelled by the transformation of a supply curve. In Step 4, local conditions include asset vulnerability, the increase in production cost, the increase in demand for basic goods, donation rates, hoarding effects, etc., which are represented by model parameters (cf. “Methods” section). Regarding Step 5, in this study the model parameters are determined by reviewing technical articles on price gouging and price caps13,17,40. If further data are collected, techniques such as Bayesian model updating can be employed to update the model itself or the model parameters (a minimal sketch is given below). The connections between the steps of the NN method and the proposed model for anti-price-gouging laws are summarised in Table 3.

Table 2 Eight steps of the proposed narrative numeric (NN) method
Table 3 Analysing the effectiveness of anti-price-gouging law following the steps in Table 2
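
As a minimal sketch of the Bayesian model updating mentioned in Step 5, consider a conjugate normal update of a single model parameter; the prior and the observations below are hypothetical, not data from this study:

```python
import numpy as np

def update_normal_mean(prior_mu, prior_sd, obs, obs_sd):
    """Conjugate Bayesian update of a normal mean (e.g. the mean
    post-disaster price increase P0) given new observations with
    known observation noise."""
    n = len(obs)
    post_prec = 1.0 / prior_sd**2 + n / obs_sd**2
    post_mu = (prior_mu / prior_sd**2 + np.sum(obs) / obs_sd**2) / post_prec
    return post_mu, np.sqrt(1.0 / post_prec)

# Hypothetical: a 10% prior mean increase, updated with three observed events.
print(update_normal_mean(0.10, 0.05, obs=np.array([0.18, 0.22, 0.15]), obs_sd=0.10))
```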

While the current study covers only Steps 1 to 5, further steps are required to complete a discourse. In Step 6, narrative data are collected and mapped onto decision metric values so that stakeholders can make sense of the quantitative analysis results. Such data can be collected, for example, from interviews, news articles and social media. In Step 7, this mapping is used to decide acceptable levels of the complementary decision metrics. Finally, in Step 8, a decision scenario is selected that keeps the probability of each decision metric taking an unacceptable value below a target risk level.

As an example of Steps 6–8, consider Hurricane Katrina, which hit New Orleans, US, in 2005. It was reported 5 years after the event that there were people “still living in trailers scattered across neighbouring Texas and beyond”41, who are likely to belong to low-income groups. This implies that while the simulation results include households undergoing repair for more than a year, such a long recovery period is likely to lead to a major divergence in recovery paths, meaning some people may fail to restore their previous standard of living. From this observation, one may conclude that a repair duration longer than one year is unacceptable. Another example is that a month after Hurricane Katrina, “hundreds of homeowners began returning to the suburbs of New Orleans, despite authorities urging residents to wait until facilities have fully been restored”42. These homeowners are likely to belong to mid- to high-income groups. Although they might be in a less troubled situation than displaced people, being forced to stay away from one’s hometown for a month still has a substantial impact on one’s life. Such mapping between narrative situations and numerical analysis results can be used to decide acceptable levels of the repair period (one of the decision metrics analysed in this article).

In the following, we enumerate noteworthy premises and corollaries of the NN method.

Discourses begin by clarifying a problem’s scope and conflicting views

A scope includes the decision-making objectives, candidate decision scenarios, the time-scale of consideration and the targeted population group.

Conflicting forces should be compared within a single model

To compare their conflicting effects, counterarguments and decision metrics should be analysed within a single model. Model parameters should be set to reflect local conditions, since different local conditions yield different analysis results.

Counterarguments are made by providing data that justify revising a model and/or model parameters

As multiple counterarguments are often simultaneously valid, debates based solely on logic are bound to be ineffective in drawing an agreement. Counterarguments need to be made in a way that demonstrates that the current model is overlooking or overestimating certain forces.

Social forces should be taken into account, not only as analysis outputs, but also as inputs

While scientific discourses often tend to focus on physical forces, there is ample evidence that social forces are equally substantial (this point resembles the question raised by Sandel10 of “why we should not bring our moral and religious convictions to bear in public discourse about justice and rights”). For example, in the case of a price-gouging ban, the analysis results are notably influenced by social forces such as people’s tendencies towards donation and hoarding. Since there are relatively fewer studies on the numerical modelling of social forces than of physical forces, more research and data collection are required in this direction.

Narrations and numbers can be connected by interdisciplinary investigation across soft science and hard science

It remains challenging to reconcile the narrative language common in the soft sciences with the numerical language dominant in the hard sciences. The NN method facilitates communication between the two through two strategies: running a narrative analysis and a quantitative analysis in parallel, and mapping narrations onto numerical analysis results.

Complexity in social phenomena can be accounted for by using multiple decision metrics

Concerns are often raised that numerical decision metrics may give an incomplete representation of complex phenomena. Such a risk can be mitigated by utilising a set of complementary decision metrics.

Probabilistic analysis is required to make risk-informed decisions

A real-world decision problem is bound to entail a high level of uncertainty, arising from both inherent variability and lack of knowledge. For example, in the case of anti-price-gouging laws, there are uncertainties in the occurrence of asset damage, the magnitude of supply chain disruption, people’s reactive behaviour as consumers, etc. Accordingly, a decision should be made by properly treating these uncertainties. This can be done by computing probability distributions of the decision metrics as analysis outcomes, rather than obtaining single values. A decision can then be made in a way that keeps the probability of unacceptable decision values below a target probability, as sketched below. We note that it is in general impossible to reduce the probability of undesirable events to zero, and thus a community requires a set of back-up measures to deal with unlikely but high-consequence scenarios. Such backup planning can also be facilitated by probabilistic analysis.
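
A minimal sketch of such a risk-controlled selection rule, assuming the decision-metric samples have already been simulated for each candidate price cap (the threshold and target risk below are illustrative, not values used in this study):

```python
import numpy as np

def admissible_caps(metric_samples, caps, threshold, target_risk):
    """Return the price caps whose probability of exceeding an
    unacceptable metric value stays below the target risk level.

    metric_samples: shape (n_caps, n_samples), e.g. repair duration in
    weeks per Monte Carlo sample for each candidate cap."""
    exceed_prob = np.mean(metric_samples > threshold, axis=1)
    return caps[exceed_prob <= target_risk]

# Hypothetical usage: accept at most a 5% chance of repairs exceeding 52 weeks.
caps = np.arange(0.0, 2.05, 0.05)
samples = np.random.default_rng(1).gamma(2.0, 20.0, size=(len(caps), 1000))
print(admissible_caps(samples, caps, threshold=52.0, target_risk=0.05))
```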

Mathematical models do not exempt stakeholders from responsibility

The use of mathematical models for policy-making is often criticised as being a black box and as exempting stakeholders from their responsibility. Neither is necessarily true. Since a decision problem involves a set of conflicting decision metrics, a decision remains a deliberate choice by stakeholders on the most desirable balance between multiple decision objectives.

The objective of debates is not to win counterparts but to draw an agreeable compromise across all parties

Most decision problems are continuous rather than binary. Accordingly, the purpose of the NN method lies not in selecting the best argument but in identifying the most agreeable compromise among conflicting decision metrics. In this context, debate skills should be appraised not by the ability to present the most persuasive arguments but by the capability to draw an effective compromise by collating conflicting arguments. Although emotions are crucial components of policy-making, which must be taken into account as both inputs and outputs of an analysis, emotional campaigns based on a few fragmented stories should not be the primary driver of social discourses. While this may appear an ideal but far-fetched aspiration, convening rationale-based social discourses (or at least minimising the influence of emotional campaigns) is possible by developing detailed numerical simulations, which have become affordable thanks to advances in computing technology.

Discussion

The effectiveness of anti-price-gouging laws has been debated from various perspectives, including economics, the social sciences and philosophy. This study builds on the fact that all the conflicting arguments are valid and that an informed decision can be made only by comparing their influences within a single model. Thereby, we propose a new model that includes the four most discussed issues: price gouging, recovery of the supply chain, donation and hoarding. The model embodies two decision objectives, access to basic goods and repair of damaged assets, while returning analysis results that concern the entire population, the worst scenarios for the community and the worst-hit households.

The analysis results indicate that, in terms of access to basic goods, the ideal price cap equals the market clearing price (i.e. the natural increase caused by increased production costs and increased post-disaster demand, but not by price gouging). Meanwhile, in terms of repairing damaged assets, the minimum average repair period is achieved by a price cap slightly lower than the market clearing price. We find that the presence of donations within the community has a significant influence on shortening repair periods, even when the amount donated is small. Hoarding, on the other hand, exacerbates the insufficiency of basic goods, with the maximum hoarding rate having a largely proportional impact. Moreover, the worst-hit households have different profiles depending on the decision metric. In terms of the absolute amount of insufficiency in basic goods, they belong to high-income groups, while in relative terms they belong to low-income groups. In terms of repair periods, they are those facing a particularly high repair cost relative to their income. Their repair periods remain insensitive to the level of the price cap, indicating that they need a different support scheme than an anti-price-gouging law.

Building on the proposed simulation model of anti-price-gouging laws, we propose a new social and political discourse method, the narrative numeric (NN) discourse method. To this end, an 8-step framework is proposed, with an illustration of how the proposed simulation model embodies each step. Further discussions of the NN method are provided through noteworthy premises and corollaries. The primary objective of the NN method is to connect narrations and numbers, which has become viable thanks to recent developments in computing technology and computational methods (as evidenced by the intensive calculations required by the proposed simulation model). We also note that the NN method places emphasis on assessing the collective influence of competing arguments rather than on judging which argument is more truthful. Thereby, it aims to identify a decision scenario that yields an agreeable compromise to (hopefully) all stakeholder groups.

The proposed model of anti-price-gouging laws has several limitations. First, apart from the simulation model of asset damage, there are few datasets or models directly applicable to deciding model parameters. Also, because of the authors’ background in engineering, the economic and social models introduced in the proposed model are basic. In this paper, this limitation is partially addressed by assuming high variances in the model variables, while a more accurate investigation could be performed with more data and models. Such data and models include post-disaster economic models that explain, for example, the increases in production cost and basic living consumption, and social data such as interview or observational data on people’s post-disaster lives. In addition, while multiple statistical tests have been performed to determine whether price gouging exists6,17, the proposed model would directly benefit from a quantitative measure or computational model of the price increase attributable to price gouging (in this analysis, it is assumed to be between 25% and 200%). Second, the model does not account for insurance subscriptions or personal savings. These factors may make high-income groups less vulnerable than they appear in the presented analysis. Third, the proposed model does not take into account the discomfort arising from an insufficient supply of essential goods, such as more frequent shopping trips, waiting time in queues and hiring costs for delegated purchases43. If this type of expense is to be considered, another decision metric can be defined.

In a more general context, the NN method involves various topics that require particular attention. First, some values are especially challenging to specify and quantify. For example, the legitimacy of anti-price-gouging laws is often debated from the perspective of freedom, whose definition is itself highly controversial10. Opponents of anti-price-gouging laws hold that the freedom to undertake economic activities should be preserved. In their view, such freedom is violated for both sellers and buyers, since a price cap prevents sellers from setting the prices of their own products, and the resulting insufficient supply prevents buyers from purchasing goods even if they are willing to pay higher prices. On the other hand, proponents argue that purchasing what one desires is not freedom, as poor people are likely deprived of such freedom, a deprivation that would be exacerbated by inflated prices. Second, the influence of a policy on people’s behaviour, which may be regarded as the morality of a community, needs to be modelled for a more accurate investigation. To this end, one may collect post-policy data. Third, mapping numerical analysis results onto real-world narrations is a critical but under-explored task, which will require dedicated interdisciplinary research.

Methods

Model structure

In the analysis of disaster events, which by definition rarely happen, a constant challenge is insufficient data and thus difficulty in validation. To address this issue, we follow a common path employed in catastrophe modelling and agent-based simulation: we collate elemental data and models and perform a simulation to find emerging collective phenomena44 (illustrated in the section “Results: Effectiveness of anti-price-gouging laws”). We further justify this approach by the fact that the purpose of the analysis is primarily comparative (e.g. what happens if a price cap is lower than the market clearing price) rather than absolute (e.g. exactly how much deficiency in basic goods occurs with a certain level of price cap).

The developed model for probabilistic analysis consists of five elements: household profile, post-disaster physical losses, post-disaster economic effect, post-disaster price and supply, and weekly household consumption. The scope is summarised in Fig. 4, with each of the five elements indicated by a grey box. In Fig. 5, the model is portrayed as a graphical representation of the causal relationships between model variables, where each element is indicated by a shaded area.

Fig. 4: Scope of the proposed probabilistic model.
figure 4

The framework consists of five elements: post-disaster economic effect, household profile, post-disaster physical losses, price & supply and household reaction each week. Each element is indicated by a grey box in the figure.

Fig. 5: Causal dependence and notations of the proposed probabilistic model.
figure 5

The five shaded areas correspond to each of the five model elements in Fig. 4. Each node represents a model variable: single-edge circles represent random variables, double-edge circles represent deterministic functions and squares denote decision variables. Deterministic parameters are denoted by filled circles, while they are also marked by an overhead bar in their notation. Directed edges illustrate a causal relationship between variables, where a causal node and a resultant node are, respectively, connected to the tail and the head of an edge. A box represents that the presented model is set up for each household, in which double-stroke edges indicate a connection between a set of variables.

While the other model variables are explained in the following sections, we note here that the model has a decision variable Pc representing the imposed level of the price cap. For the analyses presented in Figs. 1 and 2, Pc is varied from 0% to 200%. The presented analysis results are obtained by generating 1000 Monte Carlo (MC) samples. Since a disaster scenario is assumed to have occurred, the analysis does not deal with a rare event, and thus 1000 MC samples are enough to achieve convergence. The high variances in the analysis results do not arise from an insufficient number of samples but rather from the variances of the probability distributions in the proposed model.
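
The overall analysis loop can be sketched as follows. The two helper functions are placeholders standing in for the sampling of the disruption variables and for the week-by-week simulation described in the rest of this section; they are not the study’s implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_disruption():
    """Placeholder: draw one realisation of the disruption variables
    (repair costs R0, increases P0, Q0b, G0, h0b and duration Nw)."""
    return {"Nw": int(rng.integers(4, 27)), "P0": rng.normal(0.10, 0.03)}

def simulate_weeks(scenario, price_cap):
    """Placeholder for the week-by-week simulation of Eqs. (2)-(19);
    returns one decision-metric value, e.g. a repair time in weeks."""
    return scenario["Nw"] * (1.0 + max(0.0, scenario["P0"] - price_cap))

caps = np.arange(0.0, 2.05, 0.05)  # price caps 0%..200% at 5% intervals
N_MC = 1000                        # plain Monte Carlo: no rare event involved

results = {p_c: np.array([simulate_weeks(sample_disruption(), p_c)
                          for _ in range(N_MC)])
           for p_c in caps}
```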

Disruptions by an earthquake event

Disruptions are represented by the repair costs of damaged assets, increased production cost, increased demand for basic goods, the occurrence of price gouging and the occurrence of hoarding behaviour. The first factor is reflected by random variables (i.e. variables whose values are not deterministic but follow a probability distribution) R0,i representing the repair cost borne by household i ∈ {1, …, Np}, where Np is the number of households. The repair costs are simulated using the Regional Resilience Determination (R2D) tool developed by SimCenter45. To generate a realistic dataset representing strong earthquake damage in urban areas, the building profiles of the San Francisco area and a hazard scenario of a magnitude 7.2 earthquake on the San Andreas fault are used as simulation inputs (the simulation results can be reproduced through Example 1 of R2D’s documentation). The simulation results have previously been used in other studies for the numerical demonstration of post-disaster recovery models46,47,48. The model utilises probabilistic seismic hazard analysis45 to generate the spatial distribution of peak ground acceleration at sites and uses Hazus49 to estimate the repair costs. The other four disruption factors are taken into account by four random variables, P0, \({Q}_{0}^{{\rm {b}}}\), G0 and \({\bar{h}}_{0}^{{\rm {b}}}\), each of which represents a proportional increase after an earthquake. Then, by introducing another random variable Nw representing the number of weeks that disruptions persist, the disruption variables P, Qb, G and \({\bar{h}}^{{\rm {b}}}\) are assumed to decrease linearly in time t, i.e.

$${X}_{t}=\left\{\begin{array}{ll}(1-(t-1)/{N}^{{\rm {w}}})\cdot {X}_{0}\quad &{{{\rm{if}}}}\,1\le t\le {N}^{{\rm {w}}},\\ 0\quad &{{{\rm{if}}}}\,t\, > \,{N}^{{\rm {w}}},\end{array}\right.$$
(2)

where X stands for each of P, Qb, G and \({\bar{h}}^{{\rm {b}}}\).
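
Eq. (2) translates directly into a short function; a sketch:

```python
def disruption_at_week(x0, t, n_weeks):
    """Eq. (2): a disruption variable decays linearly from its initial
    value x0 to zero over n_weeks and stays at zero afterwards."""
    if 1 <= t <= n_weeks:
        return (1.0 - (t - 1) / n_weeks) * x0
    return 0.0

# Example: a 30% initial demand increase persisting for 10 weeks.
print([round(disruption_at_week(0.30, t, 10), 3) for t in (1, 5, 10, 11)])
```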

Demand of basic goods

For each household i ∈ {1, …, Np}, the quantity of basic goods consumed under normal conditions is set as an affine function of the weekly income \({\bar{i}}_{i}\), i.e.

$${\bar{q}}_{i}^{{\rm {b}}}=\min \left\{{\bar{q}}_{\max }^{{\rm {b}}},\,{\bar{q}}_{\min }^{{\rm {b}}}+\beta \cdot ({\bar{i}}_{i}-{\bar{i}}_{\min })\right\},$$
(3)

where \({\bar{q}}_{\min }^{{\rm {b}}}\) and \({\bar{i}}_{\min }\) are constants representing the minimum amount of basic goods consumption and the minimum weekly income, respectively, and \({\bar{q}}_{\max }^{{\rm {b}}}\) is the maximum amount of basic goods consumption. The constant β ∈ (0, 1) determines the rate of consumption increase with respect to an increase in weekly income and is set as 0.5 in the current study. Given an earthquake scenario, the consumption of household i at week t increases to

$${q}_{t,i}^{{\rm {b}}{\prime} }=(1+{Q}_{t}^{{\rm {b}}})\cdot {\bar{q}}_{i}^{{\rm {b}}}+{\bar{h}}_{t}^{{\rm {b}}}\cdot {\bar{q}}_{i}^{{\rm {b}}},$$
(4)

where the second term reflects the excess consumption arising from hoarding behaviour.
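
A minimal sketch of Eqs. (3) and (4) follows; the numerical arguments are illustrative only, not the calibrated values of Tables 4 and 5:

```python
def normal_demand(income, q_min, q_max, i_min, beta=0.5):
    """Eq. (3): weekly basic-goods consumption under normality, affine
    in income and capped at q_max (beta = 0.5 in this study)."""
    return min(q_max, q_min + beta * (income - i_min))

def post_disaster_demand(q_bar, Q_t, h_t):
    """Eq. (4): demand inflated by the general post-disaster increase
    Q_t plus the hoarding term h_t."""
    return (1.0 + Q_t) * q_bar + h_t * q_bar

q = normal_demand(income=1610.0, q_min=200.0, q_max=900.0, i_min=500.0)
print(q, post_disaster_demand(q, Q_t=0.30, h_t=0.10))
```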

Price, supply and distribution of basic goods

A normalised price increase of basic goods at time t is calculated as

$${P}_{t}^{{\rm {b}}{\prime} }=\min \{{P}_{{\rm {c}}},\,{\alpha }^{{\rm {b}}}{Q}_{t}^{{\rm {b}}}+{P}_{t}+{G}_{t}\},$$
(5)

where αb is a parameter relating the increase in demand to the price increase. The supply is then calculated from the capped price \({P}_{t}^{{\rm {b}}{\prime} }\) as

$${S}_{t}^{{\rm {b}}}=\left\{\begin{array}{ll}{S}_{\min }^{{\rm {b}}}\quad &{{{\rm{if}}}}\,{P}_{t}^{{\rm {b}}{\prime} }\, < \,{\alpha }^{{\rm {b}}}{S}_{\min }^{{\rm {b}}}+{P}_{t},\\ ({P}_{t}^{{\rm {b}}{\prime} }-{P}_{t})/{\alpha }^{{\rm {b}}}\quad &{{{\rm{if}}}}\,{P}_{t}^{{\rm {b}}{\prime} }\, < \,{\alpha }^{{\rm {b}}}{Q}_{t}^{{\rm {b}}}+{P}_{t},\\ {Q}_{t}^{{\rm {b}}}\quad &{{{\rm{otherwise}}}},\end{array}\right.$$
(6)

where \({S}_{\min }^{{\rm {b}}}\) is a parameter representing the minimum supply of basic goods. In the first two cases, where the price increase is less than the market clearing level (i.e. the natural increase caused by higher production costs and higher demand), the supply quantity \({S}_{t}^{{\rm {b}}}\) falls short of the demand quantity \({Q}_{t}^{{\rm {b}}}\).
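
Eqs. (5) and (6) can be sketched as follows; the values are illustrative and show how a cap below the market clearing increase triggers rationing:

```python
def price_increase(Q_t, P_t, G_t, alpha_b, P_cap):
    """Eq. (5): the uncapped increase (demand-driven plus production
    cost plus gouging) is truncated at the legal price cap P_cap."""
    return min(P_cap, alpha_b * Q_t + P_t + G_t)

def supply(P_b, Q_t, P_t, alpha_b, S_min):
    """Eq. (6): supply responds to the capped price; below the market
    clearing level, less than the demanded quantity is supplied."""
    if P_b < alpha_b * S_min + P_t:
        return S_min
    if P_b < alpha_b * Q_t + P_t:
        return (P_b - P_t) / alpha_b
    return Q_t

# A 10% cap below the ~40% clearing increase leads to the minimum supply.
P_b = price_increase(Q_t=0.30, P_t=0.10, G_t=0.50, alpha_b=1.0, P_cap=0.10)
print(P_b, supply(P_b, Q_t=0.30, P_t=0.10, alpha_b=1.0, S_min=0.05))
```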

Since the quantities above represent normalised increases, the actual supply quantity is \((1+{S}_{t}^{{\rm {b}}}){\sum }_{i}{\bar{q}}_{i}^{{\rm {b}}}\). This supply is then distributed to households, which determines the actual consumption of household i at week t, \({q}_{t,i}^{{\rm {b}}}\). We assume that basic goods are distributed in proportion to the desired demand \({q}_{t,i}^{{\rm {b}}{\prime} }\). In case a household cannot afford its allocated amount because of an insufficient budget, it receives only the amount it can afford, and the rest is redistributed to households whose desired quantities are not yet satisfied and who have the budget to purchase more (a sketch of this allocation rule is given at the end of this subsection). Once the \({q}_{t,i}^{{\rm {b}}}\) are decided, each household is left with a budget

$${B}_{i,t}^{{\rm {b}}}={B}_{i,t-1}+{\bar{i}}_{i}-(1+{P}_{t}^{{\rm {b}}{\prime} })\cdot {q}_{t,i}^{{\rm {b}}},$$
(7)

where Bi,t−1 is the budget left over from the previous week after paying for basic goods and repair costs.

We note that the proposed model does not consider a balancing procedure to reach an equilibrium between price and quantity. This reflects the fact that in a highly variable condition such as a post-seismic situation, such an equilibrium is not expected, as it is normally achieved through long-term interactions. The same assumption is employed for evaluating the repair of damaged assets, which is explained in the next section.
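
The proportional allocation with redistribution described above can be sketched as follows. This is one plausible reading of the rule; the loop structure and tolerances are assumptions of this sketch, not necessarily the study’s implementation.

```python
import numpy as np

def allocate(supply_qty, desired, budgets, price):
    """Distribute a limited supply in proportion to desired demand;
    quantities a household cannot afford are re-offered to households
    that still want more and can pay."""
    got = np.zeros_like(desired)
    remaining = supply_qty
    while remaining > 1e-9:
        want = desired - got
        affordable = np.maximum(budgets / price - got, 0.0)
        active = (want > 1e-9) & (affordable > 1e-9)
        if not active.any():
            break  # nobody can absorb the rest
        share = np.where(active, remaining * want / want[active].sum(), 0.0)
        take = np.minimum(np.minimum(share, want), affordable)
        got += take
        remaining -= take.sum()
        if np.allclose(take, 0.0):
            break
    return got

# Three households; the middle one is budget-constrained.
print(allocate(10.0, desired=np.array([6.0, 6.0, 6.0]),
               budgets=np.array([100.0, 2.0, 100.0]), price=1.0))
```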

Price, supply and distribution of repair resources

The repair process is simulated in a similar way to that of basic goods. That is, a normalised price increase is calculated as

$${P}_{t}^{{\rm {r}}{\prime} }=\min \{{P}_{{\rm {c}}},\,{\alpha }^{{\rm {r}}}\mathop{\sum}\limits_{i}{R}_{t-1,i}/\bar{R}+{P}_{t}+{G}_{t}\},$$
(8)

where Rt,i represents the remaining repair cost of household i in week t, and \(\bar{R}\) is a constant representing the demand under normality. The supply is then calculated from the capped price \({P}_{t}^{{\rm {r}}{\prime} }\) as

$${S}_{t}^{{\rm {r}}}=\left\{\begin{array}{ll}{S}_{\min }^{{\rm {r}}}\quad &{{{\rm{if}}}}\,{P}_{t}^{{\rm {r}}^{\prime} } \,< \,{\alpha }^{{\rm {r}}}{S}_{\min }^{{\rm {r}}}+{P}_{t},\\ ({P}_{t}^{{\rm {r}}^{\prime} }-{P}_{t})/{\alpha }^{{\rm {r}}}\quad &{{{\rm{if}}}}\,{P}_{t}^{{\rm {r}}^{\prime} }\, < \,{\alpha }^{{\rm {r}}}{\sum }_{i}{R}_{t-1,i}+{P}_{t},\\ {\sum }_{i}{R}_{t-1,i}\quad &{{{\rm{otherwise}}}},\end{array}\right.$$
(9)

where \({S}_{\min }^{{\rm {r}}}\) is a parameter representing a minimum amount of supply for repair.

The distribution of repair resources is simulated in the same way as that of basic goods: the supply is first distributed in proportion to each household’s demand, and resources that some households cannot afford are transferred to households that need and can afford more. This process determines the repair quantity \({q}_{t,i}^{{\rm {r}}}\), whose actual cost is \((1+{P}_{t}^{{\rm {r}}{\prime} })\cdot {q}_{t,i}^{{\rm {r}}}\).

Transfer to the next time window

After paying for basic goods and repair costs, a household is left with a budget amounting to

$${B}_{i,t}^{{\rm {r}}}={B}_{i,t}^{{\rm {b}}}-(1+{P}_{t}^{{\rm {r}}{\prime} })\cdot {q}_{t,i}^{\rm {{r}}}.$$
(10)

To simulate donation, we assume that donations are made by the set of households \({{{{\mathcal{I}}}}}_{{\rm {d}}}\subset \{1,\ldots ,{N}^{{\rm {p}}}\}\) that have no remaining repair costs and a positive remaining budget. The sum of donations is then distributed equally among the households that still have remaining repair costs. Accordingly, the final remaining budget of each household is calculated as

$${B}_{i,t}=\left\{\begin{array}{ll}{B}_{i,t}^{{\rm {r}}}\cdot (1-{\bar{d}})\quad &{{{\rm{for}}}}\,i\,\in \,{{{{\mathcal{I}}}}}_{{\rm {d}}},\\ {B}_{i,t}^{{\rm {r}}}+{d}_{t}\quad &{{{\rm{for}}}}\,i\,\notin \,{{{{\mathcal{I}}}}}_{{\rm {d}}},\end{array}\right.$$
(11)

where by construction the amount of donation that an eligible household receives is

$${d}_{t}=\frac{{\sum }_{i\in {{{{\mathcal{I}}}}}_{{\rm {d}}}}{B}_{i,t}^{{\rm {r}}}\cdot \bar{d}}{{N}^{{\rm {p}}}-| {{{{\mathcal{I}}}}}_{{\rm {d}}}| },$$
(12)

and \(\bar{d}\in [0,1]\) is a parameter representing the proportion of a household’s remaining budget (after purchasing basic goods and paying repair costs) that is donated. In the equation above, the numerator and the denominator represent the total sum of donations and the number of beneficiary households, respectively.
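
A minimal sketch of Eqs. (11) and (12), with hypothetical budgets and remaining repair costs:

```python
import numpy as np

def apply_donation(budgets, repair_left, d_bar=0.10):
    """Eqs. (11)-(12): households with no remaining repair cost (and a
    positive budget) donate the fraction d_bar of their budget; the
    pooled sum is split equally among the remaining households."""
    donors = (repair_left <= 0.0) & (budgets > 0.0)
    receivers = ~donors
    pool = np.sum(budgets[donors] * d_bar)   # total sum of donations
    out = budgets.copy()
    out[donors] -= budgets[donors] * d_bar   # donors keep (1 - d_bar)
    if receivers.any():
        out[receivers] += pool / receivers.sum()
    return out

print(apply_donation(np.array([500.0, 80.0, -20.0]),
                     repair_left=np.array([0.0, 300.0, 150.0])))
```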

Decision metrics

In the analysis results shown in Figs. 1 and 2, four decision metrics are presented: the shortage in basic goods, the repair time, the well-being loss incurred by a supply shortage and the well-being loss incurred by price increases. First, the average shortage of basic goods is defined as

$${L}^{{\rm {b}}}=\frac{{\sum }_{t}{\sum }_{i}({q}_{t,i}^{{\rm {b}}^{\prime} }-{q}_{t,i}^{{\rm {b}}})}{{N}^{{\rm {p}}}}.$$
(13)

On the other hand, the average number of weeks taken for repair is evaluated as

$${T}^{{\rm {r}}}=\frac{{\sum }_{i}{T}_{i}^{{\rm {r}}}}{{N}^{{\rm {p}}}},$$
(14)

where \({T}_{i}^{{\rm {r}}}\) is the first week that Rt,i becomes zero.

The total well-being loss caused by an insufficient provision of basic goods is evaluated by assuming the bi-linear well-being curve presented in Fig. 6. The bi-linear form reflects that well-being changes more sensitively when the minimum amount of provisions is not fulfilled20. In the figure, \({q}_{t,i}^{{\rm {b}}}\) and \({q}_{t,i}^{{\rm {b}}{\prime} }\) represent, respectively, the acquired and the desired amount of basic goods for household i in week t; \({q}_{t,\min }^{{\rm {b}}{\prime} }\) denotes the minimum requirement of basic goods (i.e. the demand of the lowest-income group); and w0 is a parameter for the well-being ratio taken up by the minimum requirement, set as 0.75 in this study. A mathematical representation of the well-being function for household i at time t is

$${w}_{i,t}({q}_{t,i}^{{\rm {b}}})=\left\{\begin{array}{ll}\frac{{w}_{0}\,{q}_{t,i}^{{\rm {b}}{\prime} }}{{q}_{t,\min }^{{\rm {b}}{\prime} }}\cdot \frac{{q}_{t,i}^{{\rm {b}}}}{{q}_{t,i}^{{\rm {b}}{\prime} }}\quad &{{{\rm{for}}}}\,{q}_{t,i}^{{\rm {b}}}\, < \,{q}_{t,\min }^{{\rm {b}}{\prime} },\\ \frac{1-{w}_{0}}{1-{q}_{t,\min }^{{\rm {b}}{\prime} }/{q}_{t,i}^{{\rm {b}}{\prime} }}\cdot \frac{{q}_{t,i}^{{\rm {b}}}-{q}_{t,\min }^{{\rm {b}}{\prime} }}{{q}_{t,i}^{{\rm {b}}{\prime} }}+{w}_{0}\quad &{{{\rm{for}}}}\,{q}_{t,\min }^{{\rm {b}}{\prime} }\,\le \,{q}_{t,i}^{{\rm {b}}}\, < \,{q}_{t,i}^{{\rm {b}}{\prime} },\\ 1\quad &{{{\rm{otherwise}}}},\end{array}\right.$$
(15)

from which a well-being loss of household i in week t is evaluated as

$${L}_{t,i}^{{\rm {w}}}=1-{w}_{i,t}({q}_{t,i}^{{\rm {b}}}).$$
(16)

Then, the well-being loss \({L}_{t,i}^{{\rm {w}}}\) is broken down into an amount attributed to insufficient supply, \({L}_{t,i,{\rm {s}}}^{{\rm {w}}}\), and an amount attributed to the price increase, \({L}_{t,i,{\rm {p}}}^{{\rm {w}}}\). The former is obtained by

$${L}_{t,i,{\rm {s}}}^{{\rm {w}}}={L}_{t,i}^{{\rm {w}}}\cdot \min \left(1,\,\frac{{B}_{t,i}^{{\rm {b}}}}{1+{P}_{t}^{{\rm {b}}{\prime} }}\cdot \frac{1}{{q}_{t,i}^{{\rm {b}}{\prime} }-{q}_{t,i}^{{\rm {b}}}}\right),$$
(17)

Recall that \({B}_{t,i}^{{\rm {b}}}\) represents the remaining budget after basic goods consumption; dividing it by the price gives the quantity that could have been additionally consumed had there been no supply shortage. Meanwhile,

$${L}_{t,i,{\rm {p}}}^{{\rm {w}}}={L}_{t,i}^{{\rm {w}}}-{L}_{t,i,{\rm {s}}}^{{\rm {w}}}.$$
(18)
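
Eqs. (15)–(18) can be sketched together as follows (illustrative arguments, in the same units as the demand variables):

```python
def wellbeing(q, q_desired, q_min, w0=0.75):
    """Eq. (15): bi-linear well-being, steeper below the minimum
    requirement q_min and saturating at 1 when demand is fully met."""
    if q < q_min:
        return (w0 * q_desired / q_min) * (q / q_desired)
    if q < q_desired:
        return (1.0 - w0) / (1.0 - q_min / q_desired) * (q - q_min) / q_desired + w0
    return 1.0

def loss_split(q, q_desired, q_min, budget, price):
    """Eqs. (16)-(18): split the loss into a supply-driven part (the
    household had budget left to buy more) and a price-driven part."""
    loss = 1.0 - wellbeing(q, q_desired, q_min)
    if q >= q_desired:
        return 0.0, 0.0
    frac_supply = min(1.0, budget / (1.0 + price) / (q_desired - q))
    return loss * frac_supply, loss * (1.0 - frac_supply)

# A household that desires 10 units (minimum 4) but acquires only 6.
print(loss_split(6.0, 10.0, 4.0, budget=2.0, price=0.30))
```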

Finally, the average cumulative well-being losses presented in Figs. 1 and 2 are calculated as

$${L}_{k}^{{\rm {w}}}=\frac{{\sum }_{t}{\sum }_{i}{L}_{t,i,k}^{\rm {{w}}}}{{N}^{{\rm {p}}}},$$
(19)

where k can be either s or p.
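
Finally, the three evaluation perspectives used in Figs. 1 and 2 (all samples, the 5% worst samples and the 5% worst households) amount to different aggregations of the same per-sample, per-household metric array. The sketch below shows one plausible reading of these aggregations:

```python
import numpy as np

def three_perspectives(metric, worst_frac=0.05):
    """Aggregate a (n_samples, n_households) metric array into: the
    population mean, the mean over the worst samples and the mean over
    the per-sample worst households (larger metric = worse)."""
    overall = metric.mean()
    sample_means = metric.mean(axis=1)
    k_s = max(1, int(worst_frac * metric.shape[0]))
    worst_samples = np.sort(sample_means)[-k_s:].mean()
    k_h = max(1, int(worst_frac * metric.shape[1]))
    worst_households = np.sort(metric, axis=1)[:, -k_h:].mean()
    return overall, worst_samples, worst_households

metric = np.random.default_rng(2).gamma(2.0, 10.0, size=(1000, 200))
print(three_perspectives(metric))
```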

Fig. 6: A well-being curve of household i in week t, \({w}_{i,t}({q}_{t,i}^{{\rm {b}}})\).
figure 6

In week t, household i demands basic goods of \({q}_{t,i}^{{\rm {b}}{\prime} }\) and acquires an amount of \({q}_{t,i}^{{\rm {b}}}\). The function takes a bi-linear form to account for the different impacts of not securing the minimum requirement and of not meeting the excess demand (which depends on the income level). The minimum requirement is set as the demand of the households with the lowest income, and w0 is assumed to be 0.75. The well-being loss of household i at time t, which is required to evaluate the risk metrics, is evaluated as \({L}_{t,i}^{{\rm {w}}}=1-{w}_{i,t}\).

Model parameter values

Tables 4 and 5 summarise the values selected in this study for the random variables and the parameters, respectively, which are illustrated in Figs. 4 and 5 and in the subsections above. The random variables are assumed to follow truncated normal distributions with the means, coefficients of variation (c.o.v.) and bounds listed in Table 4.

Table 4 Random variables of the model and their means, c.o.v.’s and bounds

To decide the probability distributions of P0, Nw and G0 in Table 4, we mostly refer to studies on price bubbles observed after disaster events (for example, refs. 5, 17, 50) and testimonies published as news articles (a good summary can be found in ref. 10). The references suggest that price bubbles often persist from 4 weeks to more than half a year, which has been taken into account in deciding the probability distribution of Nw. As for P0 and G0, we utilise the observation from the references that price increases of up to around 200% have been recorded. To decide on P0, we take into account the fact that most states do not permit more than a 10% increase under their anti-price-gouging laws, and this was set as its mean value. The range of G0 was then set to reflect the observed increases of up to 200%, with the mean value set by the authors’ judgement. As for αb and αr, by performing parametric studies, we chose values that lead to a repair time for the worst-hit households of roughly one year. Considering the recovery processes after major hazardous events such as the 2019 Ridgecrest earthquakes and Hurricane Katrina in 2005, we deem such a period most reasonable.

Regarding Table 5, as for hoarding, few studies have quantified the absolute magnitude of hoarding for general goods, while many have focused on its causes or presence (e.g. refs. 13, 51). Therefore, after reviewing data in previous studies that provide indirect information about magnitudes, the value was chosen by the authors’ judgement. As for the donation rate \(\bar{d}\), the analysis objective is to investigate the effect of donation. To make this investigation conservative (i.e. to probe whether meaningful impacts exist even with a low level of donation), the value was chosen to be relatively low, i.e. a low proportion of the remaining income (after completing repair works and buying all desired basic goods).

Table 5 Deterministic parameters and their values

While the parameters have been chosen mostly from indirect data and the authors’ judgement, we justify the choices from three perspectives. First, the analysis is intended to compare different levels of a price cap, rather than to predict the exact extent of damage. For instance, one of the primary queries is what happens when a price cap is lower than the increase in supply cost (i.e. a realisation of P0). The analysis results indicate that such a price cap has adverse effects on both the provision of basic goods and the repair time of households overall, but little effect on the repair time of the worst-hit households. Second, the analysis results do not appear to be sensitive to the parameters, showing consistent trends across different combinations of parameter values. For example, Fig. 7 illustrates the results of two further simulations in which the means and bounds of the variables in Table 4 are increased (indicating a worse economic condition, Fig. 7a–c) and reduced (a better condition, Fig. 7d–f), respectively, by 20%. All other settings remain the same as in Fig. 1a–c. Comparing Figs. 1 and 7, while there are some differences (e.g. the magnitudes of impacts), the primary trends noted earlier (e.g. the overall shapes of the curves) are consistent. Finally, this lack of data itself highlights the need for data collection, and the proposed model can be used to clarify what types of data need to be collected.

Fig. 7: Analysis results of price-gouging with changed means and bounds of random variables (Table 4) related to the economic model.
figure 7

Notations are the same as in Fig. 1. a–c Means and bounds are increased by 20% (i.e. a worse condition). d–f Means and bounds are reduced by 20%.