Introduction

The impact of the modern human-made climate crisis has inspired a florescence of research examining prehistoric human–environment interactions. Scholarship has centred on two patterns of system transformation, ‘collapse’ and ‘resilience’: two concepts that have taken on a colloquial meaning in the academic literature, referring to systems that either crumble under stress (and collapse) or persist (demonstrate resilience) (Butzer 2012; Middleton 2017; Carleton and Collard 2019; Rick and Sandweiss 2020; Degroot et al. 2021). Crucially, these models of human–environment dynamics view stress (the system’s response to stressors, both exogenous and endogenous) as a net negative. However, researchers across a range of disciplines have long observed seemingly paradoxical responses to stressors in nature: while typically resulting in ‘distress’, stressors can also elicit positive and even beneficial outcomes.

‘Eustress’ in psychology, natural selection in Darwinian evolution, the benefits of challenges for positive child development, and the strengthening of the immune system through exposure to disease offer examples in which stress, in limited doses, results in net-positive adaptations and developments (e.g., Le Fevre et al. 2003; Badyaev 2005; Sapolsky 2015; Lukianoff and Haidt 2018). Recently, Nassim Taleb’s antifragility framework (Taleb 2012) has theorized the role of positive stress for complex systems, such as financial markets. Here, we explore (1) the theoretical implications of an antifragility model of transformation for the study of human–environment systems and (2) how antifragility can be applied archaeologically. We argue that stressors such as highly variable environments and natural hazards were necessary for the emergence of antifragile floodwater infrastructure technology in pre-Hispanic coastal Peru, c. 900–1460 CE.

Ecology, equilibrium, and stress

The nature and dynamics of human cultures and the environments they inhabit have long attracted archaeological scholarship. Julian Steward’s (1955) cultural ecology placed the environment front-and-centre in theories of cultural change. By the 1950s and 1960s, North American archaeology increasingly emphasized ecology, culminating in the polemics of Processualism, which followed White (1949, p. 8) in viewing culture as humanity’s ‘extrasomatic means of adaptation’ (e.g., Binford 1965, p. 205). This ‘new archaeology’ attributed change and evolution to the interaction of a ‘living system and its environmental field’ (Binford 1972, p. 106). Material culture was a product of this long-term equilibration process, honed by the continual exchange between society and the environment: when the environment changed, and new stressors introduced energy contributions or drains, so too would the material culture change as communities scrambled to find a new steady state (Binford 1982).

Processualism’s framing of cultures as functional systems at equilibrium resonated with a contemporary ecological turn in social anthropology (notably Odum 1953). Societies were understood as ecosystems: closed units of interacting biotic and abiotic variables (Jochim 1984; Moran 1984). These variables were thought to exchange energy tending towards equilibrium, and, controlling for any outside disturbance, the ecosystem would be self-regulating. Biological concepts such as nutrient cycling, carrying capacity, and succession were adopted into discussions of subsistence patterns, group organization, and human–environment interaction (Harris 1968, 1974; Rappaport 1968; Vayda 1969, 1976). The dominance of the ecosystem concept and the centrality of equilibrium contributed to what some have characterized as the ‘equilibrium paradigm’, where stress and stressors were understood as disturbances to stable states (Sullivan 1996).

In archaeological circles, Post-processualism rejected, among other things, the environmental determinism of many Processualist approaches, instead adopting models that emphasized the role of agency and structure as sources of change (Hodder 1994). When these scholars studied human–environment interactions at all, they often framed them in reference to the subjectivities of past actors. For example, some took a phenomenological approach that posited how landscapes acted as generative agents in the development of social lifeways (e.g., Tilley 1994); others viewed environmental features, alongside other non-human actors, as entangled in a patchwork of evolving co-dependent relationships (e.g., Hodder 2012). While many, though certainly not all, archaeologists turned their attention away from past human–environment interaction (Kintigh et al. 2014; Davis 2020), paleoclimatologists developed methods to better collect and analyse samples and to fine-tune their chronological ranges. Making use of archaeological reconstructions of sociopolitical dynamics, they developed a common methodology that can be described as an attempt to find correlations between episodes of environmental change and particular ‘moments’ in a given culture sequence. That is, archaeological transitions (e.g., the overthrow of an imperial dynasty or the widespread abandonment of sites) are matched with known climate events (e.g., drought), and vice versa. In this way, episodes of stress are seen to interrupt the social equilibrium, often leading to collapse (examples are numerous; see Hodell et al. 1995; Cullen et al. 2000; Weiss and Bradley 2001; Diamond 2005; Zhang et al. 2007; Buckley et al. 2010).

Collapse and resilience

Within the equilibrium paradigm of human–environment studies, resilience and collapse have emerged as two ends of a spectrum of possible system responses to stress. Collapse has referred to wholesale regional depopulation as well as simple culture change. While some view collapse as the result of a catastrophic event, often sudden and unprecedented in scale (though not always inevitable, e.g., Diamond 2005), others have suggested a longer, cumulative buildup of successive challenging episodes that ends in system-wide collapse (notably Tainter 1988). This theoretical framework posits that when climatic variation or environmental change is perceived to be too great for an existing economic, political, or social system, that system is knocked out of equilibrium and collapses: administrative organizations flounder, trade networks unravel, familial structures break down, and geographic loci become uninhabitable as existing subsistence practices, such as rain-fed agriculture, are no longer sustainable (overviews of studies and theoretical approaches can be found in Middleton 2017; Davis 2020; Burke et al. 2021).

In contrast to collapse, resilient societies are those that can overcome and survive stress to remain in steady (enough) states. Resilience theory, which likewise originated in ecological studies, assumes change is inevitable and uses models of ‘panarchy’, or adaptive cycles of exploitation, accumulation, release, and reorganization, to predict patterns of change (Gunderson and Holling 2002; Brand and Jax 2007). Within archaeology, resilience theorists investigate how certain socio-ecological arrangements can prolong the phase of accumulation, while other arrangements lead to ‘small-and-fast’ phases of release and reorganization (Redman 2005; McAnany and Yoffee 2009; Thompson and Turck 2010; Butzer and Endfield 2012; Rosen and Rivera-Collazo 2012; Gronenborn et al. 2014; Faulseit 2016; Folke et al. 2016; Bradtmöller et al. 2017; Weiberg and Finné 2018; Jacobson 2022). Resilience theory also accounts for ‘collapse’ phases (equivalent to ‘release’) as a regular part of adaptive cycling.

Both resilience, as a quality of systems, and collapse, as a phase of system dynamics, offer valid and important explanatory frameworks for understanding human–environment interaction. However, these and other frameworks operating within an equilibrium paradigm view stress in a similar way: as a net negative that, if unchecked, disrupts or overwhelms systems operating in otherwise predictable ways. Here, we ask how ‘eustress’, a positive response to stressors, might be understood within human–environment dynamics and possibly enhance resilience models.

Antifragility: rethinking stress and stressors in human–environment interactions

As noted above, the collapse and resilience models of human–environment interactions view societies as having to mitigate or otherwise respond to disturbances or stressors. In doing so they share a view of stress as a net negative, with societies either failing (collapse) or prevailing (showing resilience) when faced with environmental stressors. Løvschal’s (2022, p. 200) overview finds that ‘The adoption of a resilience concept that was inherently derived from ecology has led to an overly apocalyptic emphasis on climate change and environmental crises as the principal drivers of innovation and societal change’; at the same time, ‘resilience can reveal the breadth of the diverse responses available to us as we deliberate on how to mitigate crises’.

We wish to explore a third model, inspired by antifragility, which posits a beneficial role for stress. Although the concept has a much older history, the term ‘antifragility’ was coined by Nassim Taleb to describe systems that benefit from stress and adversity (Taleb 2012). Indeed, homoeostatic models of organismic biology were perhaps the first to identify a positive role for stress. Stress theory, developed by the endocrinologist Hans Selye, conceives of an organism that seeks to maintain internal homoeostasis under changing external conditions (Selye 1936). In Selye’s terminology, an external stressor induces an internal response (stress). Selye recognized that organismic systems tended to wear themselves out in coping with too many or too-intense stressors; he referred to this state as ‘distress’. But he also recognized what he called ‘eustress’: a positive response to exposure to stressors in just the right dosage. However, he was not always consistent in his use of these terms, leading some to question the utility of the distress/eustress distinction (Bienertova-Vasku et al. 2020).

Taleb’s work was derived from his experience in the financial sector and what he viewed to be harmful investing strategies that avoided risk (the potential for stress) at all costs; he argued that risk-avoidance resulted in weaker (fragile) portfolios that inevitably failed. Organizational and management studies are increasingly making use of the concept to explain, for example, how certain sectors not only recovered after the shock of COVID (resilience) but actually profited from the pandemic (antifragility) (see Munoz et al. 2022). However, by Taleb’s definition, many of these studies are describing adaptive learning, not the quality of antifragility that emerges after long-term exposure to stressors (see also Hillson 2023).

Instead, Taleb’s most convincing demonstrations of ‘antifragile’ mechanisms refer to human and animal physiology. Muscles and bones strengthen when exposed to stress, and the immune system requires some level of exposure to pathogens to continue to develop; neither can improve without being repeatedly tested and subsequently adapting through those experiences. Antifragile systems are those that benefit from stressors. Controlled burns, for example, are crucial for the overall health of many forest environments (Turner et al. 2003; Fairman et al. 2019). Preventing fires altogether, Taleb notes, would deprive the system of its ability to reduce stress in a manageable way and, crucially, would promote the dangerous buildup of dry material capable of fuelling massive fires when they (inevitably) break out (Taleb 2012, p. 100–102). The recent, unprecedented forest fires in Australia have brought with them the recognition that replanting is not sufficient mitigation, and that Indigenous First Nations land management practices, including small-scale, controlled fires, could prevent the future devastating effects of megafires (Bowman et al. 2020; Lindenmayer and Taylor 2020). In other words, antifragility views stressors as crucial to a system’s ability to sustain itself and eventually develop: antifragile systems require a certain dosage of stressors in order to grow. The elimination of stressors, or, in human–environment contexts, the effort to reduce variability, will decrease a system’s ability to overcome adverse environmental inputs and, consequently, increase its fragility.

Antifragile systems differ significantly from resilient human–environment systems. Resilience requires actors capable of learning and evolving when subjected to challenges; however, that learning is directed towards adapting to, mitigating, or otherwise attempting to reduce stressors. For example, in agricultural settings characterized by variable climate, farmers adjust their mobility, herd size, and crop choice, and may even modify features of the landscape to conserve water (cisterns), reduce erosion (terraces, lithic mulching), and secure food supply (storage). Some landscape modifications have enduring system effects; a rich literature on landesque capital has demonstrated how past investments in the built environment can continue to mitigate risks, banking labour for future generations and forming part of a package of resilient behaviours (Blaikie and Brookfield 1987; Lentz 2000; Denevan 2001; Erickson 2006; Morrison 2014). Resilience describes those system dynamics that return to a steady state (although not necessarily the same steady state) after experiencing system shock. Antifragility, by contrast, predicts that stressors are imperative to system growth and improvement, so much so that the elimination of stressors will have negative consequences. But can antifragility be detected in the archaeological record?

Taleb (2012) describes antifragile systems as those that have emerged after a long period of trial-and-error with constant stressors, and they often have redundancy (including ‘overcompensation’) built into their designs (p. 44–45). Therefore, antifragile systems can be observed in two ways: (1) by identifying scenarios where the absence of stressors led to system degradation; or (2) by identifying scenarios where prolonged stressors and emergent systems with built-in redundancy intersect. This paper takes up the latter pathway, selecting pre-Hispanic coastal Peru and its floodwater-management systems as its case study. We posit that stressors, including climatic variability, resulted in the increased size and improved functionality of floodwater infrastructure technology in an agricultural system of arid, coastal Peru.

Antifragility in pre-Hispanic coastal floodwater technology

Coastal Peru is a hyper-arid environment where average precipitation can be as low as 12–40 mm per year. This same coastal plain has been the site of continuous canal-based irrigation agriculture beginning as early as 2000–1500 BCE (Pozorski 1987, p. 42). Hyper-arid environments provide useful testing grounds for agricultural systems under stress. The USGS defines arid environments as those receiving less than an average of 250 mm of precipitation annually (USGS 2022); when such environments do experience rainfall, however, the events tend to be highly variable in both intensity and timing.

The north coast of Peru is impacted by the El Niño phase of the El Niño Southern Oscillation (ENSO) phenomenon every 6–10 years. Archaeological and climate records point to a deep history of ENSO impacts on the coast, changing frequencies of events over time, and a diversity of ‘flavours’ of ENSO (Sandweiss et al. 1996, 2001, 2020; Thompson et al. 1984; Waylen and Caviedes 1986). Eastern Pacific (EP) El Niños are the most well-known event type; these are characterized by warming in the central Pacific and weakened trade winds as warm waters move eastward. EP events produce some of the most destructive rainfall and flooding on the coastal Peruvian landscape. Coastal El Niño events (COA) are similar in their effects to EP events (torrential rains and floods), but rather than migrating from the central Pacific eastward, warm waters appear directly off the western coast of South America. The most recent COA event took place in 2017 and has been referred to locally as the ‘El Niño Costero’. Finally, rare cyclones also impact the region; while climatologically distinct, their effects are largely indistinguishable from those of El Niño events. The most recent cyclone took place in 2023 and was called the ‘Yaku Cyclone’ (yaku meaning ‘water’ in Quechua). EP events have been present in the region since as early as 11 kya, at an unknown frequency. EP and ENSO activity was subdued or absent between 8 and 5.8 kya. Between 5.8 and 2.9 kya ENSO resumed, reaching the modern-day EP frequency (every 6–10 years) by 2.9 kya (Sandweiss et al. 2020; Leclerc 2023).

While multiproxy records converge on the large-scale patterning of ENSO in the early to mid-Holocene, local, long-term ENSO records for the north coast are more difficult to assemble. The most direct, local proxy data for a past El Niño event are alluvial deposits; however, whether and where floods create such deposits depends on a number of variables. Moreover, obtaining secure radiocarbon samples from such deposits is often challenging. In an effort to create a local El Niño record for the Chicama Valley, Billman and Huckleberry (2008) carried out controlled excavations in an ideal context: an area where a pre-Hispanic aqueduct dammed a ravine, creating a small basin upslope of the aqueduct where alluvial deposits would only occur under El Niño conditions. They identified 33 deposition events dating to between 1260 and 1998 CE. Even in this well-controlled context, flood runoff was recorded on average only once every 23 or so years, while climate records suggest El Niño events occur once every 6–10 years (Billman and Huckleberry 2008, p. 110). Billman and Huckleberry (2008) conclude that evidence for flooding is highly dependent on the timing and location of rainfall, which can vary from event to event (Tapley and Waylen 1990).
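For reference, the roughly 23-year average recurrence follows directly from the deposit record itself:

$$\frac{(1998 - 1260)\ \text{years}}{33\ \text{events}} \approx 22.4\ \text{years per event},$$

two to four times longer than the 6–10-year recurrence suggested by climate records.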

During strong El Niño (typically EP) events, precipitation can increase 170-fold, going from a few dozen millimetres to several metres of rainfall in the span of a few months (Morera et al. 2017). Generally, the rains cause flooding in two areas on the landscape: (1) along the river itself, causing it to spill over its banks, and (2) down the steep ravines of the Andean foothills in the form of flash floods. Depending on the location of precipitation and the type of surface the floodwaters contact (loose gravels, compacted dry sediment) (Huckleberry and Billman 2003; Billman and Huckleberry 2008), these flash floods can result in massive debris flows locally referred to as huaycos.
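As a rough consistency check, taking ‘a few dozen millimetres’ as approximately 20 mm (our assumption; Morera et al. 2017 report the 170-fold figure, not this baseline):

$$170 \times 20\ \text{mm} = 3400\ \text{mm} \approx 3.4\ \text{m},$$

in line with the ‘several metres’ of rainfall cited above.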

Huaycos are a source of extreme risk (potential for stress-induced system failure) for irrigated farming systems. Water and pressure gather behind the narrow point of the ravine, causing the energy of the flood to build until it can transport the sediments, vegetation, boulders, and large clasts of the ravine bed (Church and Jakob 2020). During debris floods, water transports suspended sediments and detritus, much-needed inputs for a desert farming landscape, but also destructive pebbles, cobbles, and boulders. Sediment and rock transport threatens the integrity of canals and fields, and high-energy flow destabilizes stream channels and causes erosion. During the 1982/1983 and 1997/1998 strong El Niño events, researchers recorded 26 and 44 Mt/y of sediment transport, respectively, while an ENSO-neutral year results in just 4.4 Mt/y of mobilized sediment: a roughly six- to ten-fold increase (Morera et al. 2017). Consequently, forms of debris-flood mitigation infrastructure are common features in arid agricultural systems.

Check-dams, in the ancient past and today, are effective forms of flood management infrastructure (Doolittle 1985; Nabhan 1986; Fish and Fish 1992; Logan 1999; Erickson 2000; Fish 2000; Lentz 2000). These barriers are found in a range of environmental zones with mountainous topography and are often placed in a series perpendicular to the piedmont slope. Typically small, low, simple constructions of piled rock, check-dams are designed not to stop flow but to attenuate it. Studies show that rock-piled check-dams can decrease the grain size of sediment in transport, increase the ‘time to peak’ of floodwaters (allowing more time for those impacted downstream to respond to the event), and reduce the peak discharge volume of floods (García-Ruiz et al. 2013; Yazdi et al. 2018; Yuan et al. 2022). The permeable matrix filters the floodwater of much of its suspended sediment and releases some of the pressure that builds behind the dam (Heede 1966). During low-intensity events, water will pond behind the dam and slowly drain into the layers of flood-transported sediments and, ultimately, the water table. The saturated sediments encourage vegetation growth, which further acts to prevent erosion and spread out floodwater.
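The attenuation effect described above can be illustrated with a minimal level-pool routing sketch, here approximating the permeable dam as a linear reservoir (outflow proportional to stored volume). This is not a hydrological model of the Peruvian features; the triangular inflow hydrograph and all parameter values are illustrative assumptions.

```python
import numpy as np

# Level-pool routing with the permeable check-dam treated as a linear
# reservoir: outflow Q = k * S, storage updated by dS/dt = I - Q.
# All values below are illustrative assumptions, not field measurements.

dt = 60.0                                   # time step (s)
t = np.arange(0.0, 6 * 3600.0, dt)          # six-hour window

# Triangular inflow hydrograph: a flash flood peaking at 30 m^3/s after 1 h
peak_q, t_peak, t_end = 30.0, 3600.0, 7200.0
inflow = np.where(
    t <= t_peak,
    peak_q * t / t_peak,
    np.clip(peak_q * (t_end - t) / (t_end - t_peak), 0.0, None),
)

k = 1.0 / 5400.0                            # reservoir constant (1/s); a leakier dam has larger k
storage = 0.0                               # volume stored behind the dam (m^3)
outflow = np.zeros_like(t)
for i in range(1, len(t)):
    storage += (inflow[i] - k * storage) * dt   # explicit Euler step of dS/dt = I - Q
    outflow[i] = k * storage

print(f"peak inflow : {inflow.max():5.1f} m^3/s at t = {t[inflow.argmax()] / 3600:.2f} h")
print(f"peak outflow: {outflow.max():5.1f} m^3/s at t = {t[outflow.argmax()] / 3600:.2f} h")
```

Run as written, the outflow peak is both lower and later than the inflow peak: the two signatures of reduced peak discharge and increased time to peak reported in the check-dam literature cited above.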

Check-dams in hyper-arid environments are built in anticipation of an acute risk (debris floods), and their design is adapted based on observable responses to periodic stressors, such as water pressure, sediment transport, and flow volume. Therefore, unlike terraces, ditches, retention dams, and other examples of landesque capital, whose design responds to medium- and long-term processes such as erosion, check-dams and other forms of floodwater technology developed through trial-and-error responses to discrete events over time.

Pre-Hispanic water management infrastructure, including earthworks, canals, reservoirs, and dams, has been transforming the slopes of the high Andes, the Amazonian basin, and the alluvial plains of the coast for millennia (Denevan 2001). Despite formal similarities, these technologies function in different ways across these different landscapes, according to the specific challenges, or stressors, of their environments. For example, while the southern Andean watersheds are fed by seasonal glacial melt, the north-central highlands depend on variable runoff. Check-dams appear in both regions: in the southern Andes, they were designed to prevent agriculturally induced erosion, while in the north-central Andes, they functioned in large part to store water (Zuccarelli et al. 2022; Lane 2014, 2017). Check-dams on the hyper-arid coast remain relatively unknown (see Dillehay et al. 2004), but where they do exist, their design likely responds to one main stressor: debris-flow hazards.

On the north coast of Peru, flood mitigation technology emerged over millennia of human–environment interaction (Eling 1987; Dillehay and Kolata 2004; Dillehay et al. 2004). Recent work in the Pampa de Mocan, a north coast piedmont desert in the Chicama Valley, has identified evidence of the early and long-term incorporation of floodwater into the irrigated farming system, which developed between 1100 BCE and 1460 CE (Caramanica et al. 2020). The farmed landscape was abandoned just after Spanish invasion and conquest, and today the landscape is extremely arid except when impacted by El Niño events. Here, the pre-Hispanic check-dams of the Pampa de Mocan–Ascope area, crucial components of the now-abandoned agricultural system, can be observed under stress. Assessing whether these constructions improved in their primary function of attenuating flow over time (over multiple exposures to El Niño floods) would require close measurement during multiple flood events, and these data do not currently exist. However, two proxy indicators are available: water ponding and vegetative growth. Using historical photography, modern-day Google Earth satellite imagery, and drone photography, it is possible to deduce the performance of pre-Hispanic coastal check-dam technology over the past 80 years and at least two flood events.
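The performance proxies here are read visually from imagery; for readers who wish to quantify such greening, a vegetation index computed from multispectral bands is one conventional route. The sketch below is a hypothetical illustration only: the synthetic arrays, the NDVI threshold of 0.3, and the band values are all our assumptions, not data from the Pampa de Mocan.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, bounded to [-1, 1]."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def vegetated_fraction(nir: np.ndarray, red: np.ndarray, threshold: float = 0.3) -> float:
    """Fraction of pixels whose NDVI exceeds a nominal vegetation threshold."""
    return float((ndvi(nir, red) > threshold).mean())

# Synthetic stand-ins for pre- and post-event red/NIR reflectance rasters
# clipped to a dam footprint; real bands would come from satellite scenes.
rng = np.random.default_rng(0)
pre_red = rng.uniform(0.2, 0.4, (100, 100))
pre_nir = rng.uniform(0.2, 0.4, (100, 100))
post_red = rng.uniform(0.1, 0.3, (100, 100))
post_nir = rng.uniform(0.3, 0.6, (100, 100))

print(f"vegetated fraction before event: {vegetated_fraction(pre_nir, pre_red):.2f}")
print(f"vegetated fraction after event : {vegetated_fraction(post_nir, post_red):.2f}")
```

Comparing the vegetated fraction within a dam footprint before and after an event would put numbers on the ‘invasion of green vegetation’ visible in Figs. 2 and 3.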

Two check-dam features (A1, A2) in the Pampa de Mocan–Ascope area provide an opportunity to track the long-term antifragility of this technology (Fig. 1). These features, visited by the author in 2019 and 2023, are constructed of piled cobblestone and are located in the piedmont of normally dry, steep ravines (Figs. 2 and 3). The timespan of available imagery includes the 2017 El Niño Costero (COA) event and the most recent event, the 2023 Yaku Cyclone.

Fig. 1: Plan view of the Chicama Valley, with modern towns, the Panamerican Highway, irrigation canals, and the major modern irrigation development project, CHAVIMOCHIC, labelled.

A1 and A2 features are labelled between the Pampa de Mocan and the town of Ascope.

Fig. 2: A1 check-dam feature located near Ascope, La Libertad, Peru.

(1) Google Earth imagery of the A1 feature in 2016, before the El Niño Costero event. Arrow indicates the check-dam feature; (a) an irrigation canal, likely of pre-Hispanic origin but modified in modern times; (b) point on the bajada where erosional gullies have formed in the past; (c) location of a Chimu burial ground. (2) Google Earth imagery of A1 in April 2017, just after the El Niño Costero rains. Note the bright water-soaked sediments between the berms of the check-dam and the invasion of green vegetation; (a) and (b) mark points where debris flooding formed and breached the canal. (3) Drone image of A1 taken in May 2023, commissioned by Ari Caramanica, just after the Yaku Cyclone flood events; the same points are labelled, and green is new vegetation. (4) Photograph of A1 by the author, taken in 2019; notebook in foreground for scale. Google Earth 2023 CNES/Airbus.

Fig. 3: A2 check-dam feature located just northwest of A1, near Ascope, La Libertad, Peru.

(1) 1943 aerial photograph of the feature from the Servicio Aerofotográfico Nacional del Perú; arrow indicates the A2 feature. (2) Google Earth imagery of the A2 feature in 2016, before the El Niño Costero event; (a) downslope pre-Hispanic field; (b) point of moisture accumulation. (3) Google Earth imagery of the A2 feature in April 2017, just after the El Niño Costero rains; (a) downslope pre-Hispanic field; (b) green vegetation clustered at the point of accumulation. (4a) Drone photograph taken in May 2023, commissioned by Ari Caramanica, just after the Yaku Cyclone flood event. (4b) Detail of an A2 segment with upslope vegetation clustered in the upper right corner and pre-Hispanic fields visible on the left side of the photo (downslope of the feature). (5) Photograph of A2 by Ari Caramanica, taken in 2019 from atop the upslope berm of the feature. Google Earth 2023 CNES/Airbus.

Neither check-dam feature conforms to known types in the Andes. Both are large-scale and are located much lower on the bajadas of their respective ravines than a typical check-dam system. A1 measures 32 m in length and 85 m at the widest point of the base; the spillway basin is 15 m in width. For comparison, most cross-channel terraces or check-dams in the Andes have stone retainer walls between 0.5 and 2 m high (Denevan 2001, p. 176). The presence of a grouping of Chimu (Late Intermediate Period, 900–1460 CE) burials downslope of the A1 dam indicates that the flood barrier was present and diverting flow by at least the time of the establishment of the gravesite. The earliest clear aerial images of A1 available to the authors are Google Earth imagery from 2003, 5 years after the mega-El Niño event of 1998. Vegetation is visible clustering between the upslope side of the dam and the end sill, or dam apron. This kind of clustering was observed in other silt-trapping mechanisms in the Pampa de Mocan; plant growth is supported by the water-retentive qualities of the finer sediments caught by the check-dam (Caramanica et al. 2020). In 2016, just before another major El Niño event, the A1 dam is largely unchanged; even the dry remains of desert scrub persist in the apron feature (Heede 1966). In April 2017, just after the rains of the El Niño Costero event, newly deposited wet sediments reflect brightly in the satellite images, and just weeks later, green growth begins invading the feature (Fig. 2). In 2019, our visit confirmed the presence of a layer of compact, fine sediment within the dam and a plant community beginning to colonize a large segment of the adjacent active channel.

Similarly, A2 is a massive check-dam feature, located approximately 5 km north of A1. Based on drone photography and surface mapping, this feature is unusually long (1.47 km) and high (9.18 m), and measures 25 m at the widest point of the visible base. A close look at cuts in the profile of this feature indicates that the construction matrix is almost entirely cobblestones, likely sourced from the surrounding bajada. On the downslope side of A2, pre-Hispanic E-shaped-furrow fields remain visible and preserved on the surface (Fig. 3).

The earliest images of A2 date to 1943, in which the feature already clearly marks a boundary between coarse upslope sediment and finer downslope sediment. Along the entire length of the feature, at least 14 erosional gullies can be identified upslope of the check-dam, while only 4 gullies are visible downslope in 1943. As with A1, the timing of Google Earth imagery for A2 only clearly captures the most recent El Niño event, the 2017 El Niño Costero. In 2016, before rainfall began, the feature contained some vegetation clustered along its length; in April 2017, ponded water is visible at several points along the upslope berm of the dam, and green growth feeding off the moisture retained in newly deposited fine sediment can be observed, in stark contrast to the surrounding bajada.

Both A1 and A2 were photographed with a camera-equipped UAV and observed from the ground in the weeks and months after the 2023 Yaku Cyclone event, the effects of which mimicked those of a COA event. The drone photos show a dramatic expansion of vegetation, and in both cases the growth is concentrated on the upslope side of the features, where sediments and water would collect and be stored. Active channels on the southern ends of both A1 and A2 show plant colonization, likely contributing to channel stabilization. Finally, in both cases, the archaeological features downslope of these check-dams continue to be preserved (see Figs. 2 and 3).

Does repeated exposure to stress (debris flows) improve the function (attenuated flow, ponding and water drainage, vegetative growth) of these pre-Hispanic, arid-zone check-dam features? The laying-down of finer sediments with each event works in at least two ways. First, as sediments fill the smaller pores of the check-dam matrix, these features become more water-retentive, both upslope of the cobble-piled barrier and in the apron. Second, this retention encourages the growth of vegetation and brush (Vining et al. 2022). Together, these processes improve the matrix’s capacity to attenuate flow and to catch silt and larger debris, while the associated plant communities also help to slow flow, reduce erosion, and stabilize upslope channels. It is clear from the imagery that between the 2017 and 2023 flood events vegetation was not merely held constant, nor did it demonstrate a controlled bounce-back; rather, it expanded dramatically, indicating that the effects of these features improve with successive exposure to stressors.
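The feedback loop implied here (deposition raises retention, retention supports vegetation, and both improve attenuation) can be caricatured in a few lines of code. This is a toy dose-response sketch, not the authors’ model: the functional forms, coefficients, and initial values are all invented for illustration.

```python
# Toy dose-response caricature: each flood deposits fines, raising water
# retention; retention supports vegetation; both raise attenuation capacity.
# Functional forms, coefficients, and initial values are invented for
# illustration and are not derived from the Pampa de Mocan features.

retention, vegetation = 0.10, 0.05          # initial state (fractions, 0-1)
for event in range(1, 6):                   # five successive flood events
    fines = 0.3 * (1.0 - retention)         # deposition saturates as pores fill
    retention = min(1.0, retention + 0.5 * fines)
    vegetation = min(1.0, vegetation + 0.4 * retention * (1.0 - vegetation))
    attenuation = 0.5 * (retention + vegetation)
    print(f"event {event}: retention={retention:.2f} "
          f"vegetation={vegetation:.2f} attenuation={attenuation:.2f}")
```

Each successive event leaves the system better at its job than before: the qualitative signature that distinguishes antifragility from mere persistence.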

Finally, the scale of construction of these features is redundant, or ‘overcompensating’. While their locations, construction material, and design largely conform to known check-dams in the broader Andean region, the size of both A1 and A2 suggests that they were designed in anticipation of an event whose magnitude would have far exceeded that of known events. The combination of these factors suggests that antifragility was incorporated into the infrastructure’s design.

Discussion: rethinking our relationship with stress and uncertainty in archaeological reconstructions

Environmental determinism has been met with increasing criticism. New voices have prompted scholars to rethink what changes: what, specifically, collapses, and what remains resilient. Does the collapse of political systems, economic strategies, or social hierarchies necessarily correspond to demographic collapse (Folke 2006; McAnany and Yoffee 2009; Barnes et al. 2013; Castree et al. 2014; Haldon et al. 2018; Sörlin and Lane 2018; Løvschal 2022)? On the other end of the spectrum, what is meant by resilience? What, specifically, is resilient in a social system, and how can it be measured in the past? Research has framed resilience as the ability to resist stress, as systems that can buffer or sacrifice portions of themselves before failing, as models of regeneration and regrowth leading to the eventual return to a system’s original state, and as flexibility, where some if not most of the system is eventually transformed (Redman 2005; McAnany and Yoffee 2009; Faulseit 2016; d’Alpoim Guedes et al. 2016; Bradtmöller et al. 2017; Middleton 2017; Løvschal 2022).

Without rejecting these approaches, we see value in resilience studies that address how learning, changing, and adapting can improve a system’s ability to deal with stressors (see Middleton 2017; Fitzhugh et al. 2019; Walker 2020; Jacobson 2022; Løvschal 2022). For example, Fitzhugh et al. (2019, p. 1081) note that when conceiving of resilience, the best models are ‘those that seek persistence in system relationships, rather than stability per se’. Similarly, in his analysis of the application of the term ‘resilience’, Walker (2020) notes that many approaches overlook the importance of repeated practice through trial-and-error for improvement, and that trying to immobilize a system actually diminishes its capacity for resilience. Such a framework matches Taleb’s contention that systems should be designed to enhance themselves through iterative testing. It also resonates with the Pampa de Mocan floodwater management evidence.

We rarely ask what would happen if a system were deprived of stressors. Even for those who define resilience as the ability to learn through trials and emphasize the role of systemic evolution, the answer would seem obvious: systems would continue to exist, unchanged. The categorical assumption that stress is, by default, exogenous and deleterious is perhaps why the equilibrium or steady-state paradigm has been so difficult to shed. It is perhaps why the prevailing approach in archaeological interpretations of paleoclimatic data is essentially ‘wiggle matching’ cultural trends to climate proxies. Under this premise, identifying an extreme event or stressor signals that one has also identified a system’s breaking point (see Stewart et al. 2022). This approach has remained dominant, certainly so in top-tier journals, even as numerous papers have questioned the underlying assumptions behind the practice of aligning climate proxies with archaeological data (Liu et al. 2007; Caseldine and Turney 2010; Blaauw 2012; Butzer and Endfield 2012; Lowe and Walker 2014; Marston 2015; Izdebski et al. 2016; Nelson et al. 2016; Middleton 2017; Kintigh and Ingram 2018; Weiberg and Finné 2018; Carleton and Collard 2019; Manning et al. 2020; and see discussions in Fitzhugh et al. 2019; Davis 2020; Degroot et al. 2021; Jaffe et al. 2021; Løvschal 2022).

A renewed examination of the so-called sudden collapse of the Liangzhu culture (~5300–4300 BP), for example, shows the limitations of identifying climate-record peaks and the potential harms of steady states, which leave a system untested for too long. Liangzhu’s demise is often attributed to an unprecedented wet anomaly, but a closer look reveals a more complicated story. The past two decades of investigation have uncovered a cluster of large-scale water management systems, established and maintained over several centuries (Liu et al. 2017, 2019). The water management system was situated near the modern-day city of Hangzhou in Zhejiang province, which is still one of the wettest areas of China: the climate is dominated by the East Asian monsoon, which brings heavy summer rains. The waterworks included a system of dams, canals, and channels that controlled water pressure, flow, and directionality, provided dependable transportation routes, irrigated fields, and, perhaps most importantly, prevented devastating flood events. The hydrologic system predates the construction of the associated settlement and was maintained throughout the early and middle periods of the Liangzhu culture. However, construction of new dams declined sharply at ~4700 BP, and minimal investment is documented from ~4450 BP until Liangzhu’s end (Long et al. 2014; Liu et al. 2017, 2019).

Given the environmental setting, the eventual demise of the Liangzhu culture has been linked, understandably, to extreme climate shifts, notably the ~4k BP event, when torrential rains and rising water levels are thought to have overwhelmed the water management system and led to its collapse (Liu et al. 2017; Renfrew and Liu 2018). A thick layer of yellowish, silty sediment has indeed been documented above the Liangzhu cultural layers; yet the fuller record complicates this narrative. Based on new carbon isotope data, Zhang et al. (2021) compare periods of increased investment in the water management system to argue that, while the first part of the Liangzhu culture is marked by variability between wet and dry episodes, the abandonment of dam construction and upkeep takes place during a longer dry spell of diminished precipitation. The existing system was sufficient to buffer the lower annual rainfall and flooding and thus may not have required continual upkeep or enlargement (Zhang et al. 2021; Fig. 4).

Fig. 4: Liangzhu hydraulic construction investment intensity and normalized δ13C records from central and south China (reconstructed from Zhang et al. 2021).

Most construction and upkeep took place during the first several centuries of the Liangzhu culture; only limited upkeep was pursued from ~4600 BP, and none is documented after ~4500 BP. Note that the wet phase at ~4300 BP, when the Liangzhu culture ends, is no ‘higher’ than peak wet periods in the past.

The abandonment of the Liangzhu centre, however, did not take place during an unprecedented wet anomaly, but at levels already experienced in the region prior to 4700 BP. In other words, the isotopic data do not translate into a marked spike in the climatic history when set in the broader context of the culture’s millennium-long tenure. What might have pushed the Liangzhu socio-hydraulic system beyond its tipping point, we would argue, was actually the lack of stress after an extended period of climatic variability. Initial climatic uncertainty, clearly seen in the oscillation between wet and dry periods in the Zhang et al. records, spurred Liangzhu society to constantly invest in the upkeep and improvement of its flood prevention systems. Later on, the more stable, and thus predictable, pattern of rare flooding events may have lulled the Liangzhu people into a false sense of security; when water levels that had been quite manageable in the past returned, the system was no longer able to cope.

Can we apply insights gained from the Liangzhu study to reassess and discover additional cases characterized by a long-term, stable relationship between climate and society that is then unexpectedly confronted with unprecedented environmental change? We leave this to our colleagues, but here we wish to reiterate that, by questioning the equilibrium paradigm, antifragility approaches ask us to rethink how systems successfully respond to stress (Winterhalder et al. 1999; Panter-Brick 2014; Diaz and Moore 2017; Tucker and Nelson 2017; Drennan et al. 2020). For example, archaeologists frequently reconstruct ancient agriculturalists as consequence-aware agents who consistently and intentionally strive to eliminate risk in the context of an unpredictable environment. Increasing the resilience of economic systems via storage and/or diversification (in the variety of crops sown, the location of fields, and the timing of planting) is often seen as the main way that farmers attempt to reduce and minimize the risk of crop failure and hunger (Marston 2011; Lentz et al. 2014; Kuijt 2015; Reed and Ryan 2019).

In truth, there are no perfect solutions or strategies to reduce risk across the board, only strategies that deal with stressors as they arise. A growing body of scholarship on pre-industrial farming communities underlines the fact that being a successful farmer requires relentless vigilance and a constant awareness of changing environmental conditions (Cooper and Sheets 2012; Halstead 2014; Nelson et al. 2016; Fisher 2020). Farmers are constantly engaged in risk management precisely because of the trade-offs of each risk-mitigating strategy, such as crop diversification or seed storage. Even farmers’ perception of ‘good weather’ is a product of the prior strategies they employed. For example, early vigorous growth of grain crops can lead to lodging, where the stems buckle under the weight of the plants, ruining the harvest. Therefore, spring rains, while a necessary condition for adequate yield, can also be a risk: too much water and rapid growth can indeed lead to lodging. This can be prevented by grazing sheep on the plants, but only enough to leave a sufficient length of stem intact. If the spring rains are weak, farmers who do nothing may benefit, as their crops will receive just enough water to grow. However, harvesting earlier in the season, to avoid having to predict the intensity of spring rains (or hedging one’s bets and trying to employ all strategies on a single field), still risks reducing the yield and seed size, which farmers may want to avoid (Fig. 5; a schematic encoding of these trade-offs follows the figure caption below).

Fig. 5: Coping strategies for early vigorous growth (from right to left).

(a) Doing nothing could result in a successful crop yield if spring rains are weak, or could lead to crop failure if rains are plentiful; (b) the opposite is the case if caprines are allowed to graze, where plentiful rains will allow the crops to recover but weak rains will not be sufficient for a successful crop; (c) harvesting early, before the rainy season, circumvents the input of rains later in the season but results in a small yield (note that trying to employ all three practices in a single field will result in lower yields as well, and the first two strategies depend on opposite spring-rain inputs to be successful).
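To make the ‘no dominant strategy’ point concrete, the outcomes described in Fig. 5 can be written out as a simple payoff table. The sketch below is an illustrative encoding of the caption’s qualitative outcomes; the labels are ours, not measured yields.

```python
# Qualitative payoff table for the coping strategies in Fig. 5. Outcome
# labels paraphrase the figure caption; they are not measured yields.
payoffs = {
    "do nothing":     {"weak rains": "full yield",  "plentiful rains": "failure (lodging)"},
    "graze caprines": {"weak rains": "failure",     "plentiful rains": "full yield (recovery)"},
    "harvest early":  {"weak rains": "small yield", "plentiful rains": "small yield"},
}
for strategy, outcomes in payoffs.items():
    row = ", ".join(f"{scenario}: {result}" for scenario, result in outcomes.items())
    print(f"{strategy:15s} -> {row}")
```

Because the first two rows invert under opposite rain inputs, the farmer’s choice is a bet on the weather, which is the ‘double-edged’ character of risk noted below.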

Thus, traditional farmers often ‘roll with the punches’: by addressing one problem at a time, every action taken will reduce some form of stress but will also have the potential of exposing the system to other stressors (risk is ‘double-edged’, as Chibnik (2011) notes). In fact, many of the works cited above highlight the volatility that is at the heart of pre-industrial agricultural systems. Successful farmers are antifragile precisely because they do not eschew risk but confront it head-on as an integral part of their agricultural lifeways.

Conclusion and suggestions for future studies

What are the main takeaways from engaging with antifragility frameworks?

  1. Stress is not a net negative. The most successful systems improve with exposure to stress, and most systems weaken when starved of adversity and, as a consequence, become more fragile.

  2. Systems are only ever temporarily in a steady state or at equilibrium, and they are ‘robust’ only if no change ever takes place; they are not intrinsically stable.

  3. Consequently, wiggle matching, by presuming that steady states are inherently stable, proves to be a significantly constrained method for evaluating the interplay between climate and societies. The practice precludes the capture of signals of dynamic human–environment interaction and neglects any enduring, interconnected relationships between the two.

  4. Contrary to both collapse- and resilience-based models of social systems, antifragile systems require exposure to stressors for positive growth.

  5. Even if stress could be temporarily eliminated for short-term gains, doing so would make for fragile systems in the long run.

In the present day, human-driven climate change has generally had negative, even catastrophic, results for human societies. Understandably, therefore, research has framed climatic and environmental stress as a hazard. The impacts of the unprecedented catastrophic climatic events of the modern era are undeniable, but environmental stochasticity can have a beneficial role as well (see a recent call in Burke et al. 2021). Here we have suggested that stress was crucial in developing antifragile systems in some past societies. In the Pampa de Mocan, rainfall variability was an integral part of farming life, which, when embraced as such, allowed for effective management and mitigation of flooding events. In the context of near-constant variability, check-dam design emerged as antifragile, capable of improving in its capacity to attenuate floodwater with each event. At Liangzhu, while a variable environment provided fertile ground for initial social and demographic expansions, over time environmental stability increased fragility. For agricultural societies, risk is challenging because environments are volatile, making flexibility and the avoidance of rigidity a desirable strategy for coping with unpredictable challenges.