Introduction

In recent years, social networks have positioned themselves as the preferred means of communication between citizens and governments (Jahng, 2021; Guzmán & Rodríguez-Cánovas, 2021; Lazer et al., 2018; Wang, 2006), as they facilitate, mediate and speed up interactions. This makes them a space for the massive circulation of information, in which ideas and opinions can be expressed and exchanged in a generalised manner (Carlo Bertot et al., 2012), breaking traditional paradigms of communication between states and stakeholders (e.g., citizens and businesses) by moving from a one-way to a two-way approach (Guzmán et al., 2020). This shift has influenced all state functions, including diplomacy, in a cross-cutting manner (Manor & Segev, 2020). Social media communication has been widely adopted in diplomacy, understood as a systematised process in which international actors seek to achieve foreign policy objectives (Cull, 2011), resulting in closer contact between the international sender and the local receiver of information and thereby providing individuals with the possibility of communicating with diplomatic actors (Graffy, 2009).

In this context, the potential of social networks as a communication channel for diplomacy has been recognised: they make it possible to build loyal communities by bringing senders and receivers closer together (Graffy, 2009); they enable effective and efficient communication with stakeholders (Gebhard, 2016); and they optimise budgets, as they involve lower costs and investments than traditional methods (Fjällhed, 2021). However, alongside this potential, some governments have used this channel and the direct online relationship with citizens to systematically propagate disinformation and thus meddle in the national affairs of other sovereign states, influencing the opinion of citizens in order to benefit their own interests and fulfil some of their foreign policy objectives (Lazer et al., 2018; Cull, 2016).

An example is the 2016 elections in the United States of America (USA), in which the Russian government, through its agencies, intermediaries, paid advertising campaigns, paid users, trolls and state-funded media, discredited the Democratic candidate Hillary Clinton in key election states. The Office of the Director of National Intelligence (2017) initially determined that Russian intervention had the potential to swing the election in favour of Donald Trump and Moscow’s interests; however, recent studies have indicated that disinformation did not have such an impact on this campaign, because the disinformation effort focused on the already disinformed population and not on other susceptible populations (Guess et al., 2020; Gunther et al., 2019). More recently, disinformation has continued to permeate social media for diplomatic purposes: Agarwal and Alsaeedi (2020) identified how the Russian media outlets RT and Sputnik initially accused NATO and the USA of creating the COVID-19 virus and using it to destabilise China’s economy. Hence, disinformation as a strategy of diplomacy has regained relevance in the field of international relations (Fjällhed, 2021) and has become one of the main problems for the defence of states, as it unfolds in a new scenario, social networks, in which information is disseminated at great speed and its origin is difficult to trace, and which adds message-dissemination mechanisms specific to this type of network (McGonagle, 2017; Pamment et al., 2017).

Thus, studies on the use of disinformation as a strategy of diplomacy on social networks have focused mainly on documenting cases, with the aim of understanding the elements involved in the dissemination of this type of information and its effects on citizens (e.g., Lanoszka, 2019). Many gaps remain in the understanding of this strategy, owing to the lack of previous experience in the field of international relations (Fjällhed, 2021), the lack of confirmation of its use by states, and the difficulty of finding declassified (uncensored) information from the governments concerned. Hence, authors such as La Cour (2020) recognise that, although progress has been made in understanding how this type of information spreads from other areas of knowledge, an approach directly related to diplomacy is needed, because local dynamics cannot fully explain how this information is disseminated at the international level, which involves monetary resources and actors beyond traditional disinformation campaigns. In addition, it is necessary to establish the patterns generated by disinformation as a strategy of diplomacy, based on the behaviour of individuals and the elements of the system itself, in order to design strategies to mitigate the effects of this phenomenon, which touches multiple aspects of citizens’ lives, from influencing their opinions and beliefs to generating disturbances (Fjällhed, 2021; Lanoszka, 2019; La Cour, 2020).

This article aimed to simulate the propagation of disinformation in social networks derived from the strategy of diplomacy, based on the elements of the system documented in the literature. Thus, from the approach of modelling and diplomacy, we sought to provide a first approximation to the answer to the following question: how do the elements of disinformation derived from the social media diplomacy strategy interact to affect a susceptible population? Answering this question yields an understanding of the dynamics and impact of disinformation generated through diplomacy on social media, focusing on how these elements influence people’s opinions and beliefs and generate disturbances in society. The answer also provides a comprehensive analysis of the mechanisms and consequences of disinformation in this context from a diplomatic perspective, offering a more complete view of how diplomatic actors strategically use social media to achieve their objectives. Furthermore, the following research questions were also addressed (with the main question above as RQ1):

  • RQ2: What impact do bots and trolls, as elements of the digital world, have on the spread of disinformation on social media as a strategy of diplomacy?

  • RQ3: What is the impact of social media in delaying the activation of disinformation mechanisms as a strategy of diplomacy?

  • RQ4: What are the effects of the echo chambers that social media algorithms foster on diplomacy-generated disinformation?

By fulfilling the aim and answering the research questions, two contributions are made to the study of disinformation on social media from the perspective of diplomacy. First, a simulation model based on system dynamics is presented, with which specialists in international relations can generate scenarios that approximate the way diplomacy agents use this medium to achieve their objectives, eliminating, to some extent, possible biases in their conclusions arising from not having all the information available in time. Second, a holistic approach to disinformation in social networks is presented, incorporating elements that interact simultaneously (for example, bots, trolls and paid campaigns to promote disinformation) and that had not been addressed in diplomacy-related studies, allowing a more realistic view of the behaviour of the disinformation system in social networks in which agents of diplomacy intervene.

Accordingly, this article is structured into four main sections. The first section conceptualises disinformation, the use of this strategy in social media diplomacy and the elements of the system involved in such a strategy; the second sets out the methodology used to develop the dynamic model and the corresponding simulations to answer the research questions; the third presents the model, together with the results of the computational simulation defined in the methodology; and the fourth presents the discussion and conclusions.

Theoretical framework and background

Conceptual delimitation of disinformation

The term disinformation has become common in journalistic contexts and political language in recent years (Rodríguez, 2018), where it is treated as a current phenomenon derived from web-based technologies; however, the term was conceptualised at the beginning of the 20th century, originating in the political sphere, when it was used by the French after the First World War to refer to actions directed from inside and outside the country to prevent the consolidation of the communist regime in France (Durandin, 1993; Jacquard, 1988) by discrediting its political and economic systems through the propagation of false information. Since then, the term has evolved to refer to any deviant information that has the intent and effect of distorting and misleading a target audience in a predetermined way (Innes, 2020).

It is necessary to clarify that disinformation, being a colloquial expression, is often misinterpreted by social actors, who assign it conceptualisations and characteristics that do not correspond to its scope (Fallis, 2015); hence the need for a conceptual delimitation of the term. The first delimitation relates to intentionality: such information is not the result of a mistake but is specifically intended to deceive (Fallis, 2015; Fallis, 2011), exerting influence and control over receivers to make them act according to the sender’s intentions; it is therefore clearly a deliberate phenomenon (Van Dijk, 2006). The second corresponds to the lack of truth, because disinformation can occur by commission, in which a falsehood is knowingly transmitted (Rodríguez, 2018; Durandin, 1993), or by omission, when relevant data is concealed so that the truth cannot be established (McGonagle, 2017). That said, disinformation operates by giving the appearance of truth to an event that is not true, so that the receiver trusts the information and takes it as real (McGonagle, 2017).

The third delimitation relates closely to the channels of communication, because the sender uses them to massify the disinformation (Agarwal & Alsaeedi, 2020); hence, the intention to misinform is not enough on its own, but effective intermediation is required to produce results in accordance with the point of view of the creator of the disinformation content (Rodríguez, 2018). While the emitters of disinformation once relied on traditional means of communication, which were widely documented at the time (Desantes-Guanter, 1976; Chiais, 2008), the internet, with its ability to disseminate both true and false facts, has changed the landscape, allowing communicators to reach users directly and amplify the message to a larger target group (Lazer et al., 2018). The fourth delimitation, and the point of intersection between intention, the creation of the message (lack of truth) and the communication channels, is the organisation with which the activities related to disinformation are planned and executed, ranging from the definition of the target audience to the evaluation of the efficiency of the disinformative message, represented in the opinions and actions created in the citizenry (Jacquard, 1998).

However, in the field of diplomacy, disinformation should not be confused with propaganda, given the fine line between the two concepts. Propaganda is associated with a message intended to keep the receiver under control, benefiting the sender in the medium and long term (Desantes-Guanter, 1976), as exemplified by dictatorial or absolutist regimes. Disinformation in diplomacy does not seek this type of control over the population, but rather seeks to unbalance one or several states in the short term.

Social media disinformation as a strategy for diplomacy

Disinformation as a strategy of diplomacy aims to spread false information to unbalance foreign states by confusing and misleading their citizens (Agarwal & Alsaeedi, 2020; Gerrits, 2018). In this way, the state sending the message benefits from the disagreement generated in society and from policy changes driven by citizens’ pressure on their governments, while also increasing its international presence and power and fulfilling its international policy objectives (Fjällhed, 2021; Cull, 2016).

In this context, the use of this strategy is not a recent development in diplomacy: the US and its allies, as well as the Soviet Union, began to broadcast disinformation about their rivals during the Cold War (Chiais, 2008; Gerrits, 2018), using traditional channels of communication such as television, radio and newspapers. However, like any strategy, it has evolved and incorporated new elements from a changing environment; hence disinformation has begun to spread on internet-based media channels such as social media. The digitalisation of disinformation and its transmission on this type of network has changed its potential: what is new is not the message or the change of channel, but the speed at which it spreads and the impact that false information disseminated in this medium can have on the population, hence the importance of analysing disinformation on this channel (Vériter et al., 2020).

Therefore, disinformation as a strategy of diplomacy has in recent years concentrated its efforts on social networks, owing to the mechanisms these networks provide for amplifying the message (e.g., echo chambers, bots, trolls), which allow a larger number of users to be exposed to disinformation (Bjola, 2018). Hence, there is growing interest in the study of this strategy by both governments and the academic community. Advances in diplomatic understanding have focused on documenting countries’ use of disinformation, concentrating on Russia and China (e.g., La Cour, 2020; Lupion, 2018; Mölder & Sazonov, 2018) because of their foreign policy towards Western countries, especially the US and those in Western and Southern Europe. These studies have shown the potential of disinformation to interfere in democratic processes such as elections (La Cour, 2020; Bayer et al., 2019); to polarise citizens’ opinions through the spread of conspiracy theories and the exacerbation of radical and supremacist (racist) thinking (Faris et al., 2017); and to diminish the credibility of traditional media and mainstream institutions (Bennett & Livingston, 2018).

Despite these advances, the analysis of disinformation as a strategy of diplomacy has been rather limited, focusing on case studies of the effects of the strategy’s implementation and on evaluating citizens’ perceptions. This is largely due to the difficulties involved in studying this strategy, especially in tracing the origin of disinformation, which makes it impossible to determine attribution and to study the strategy from the sender’s side (Gerrits, 2018). There is therefore a need to explore other aspects of disinformation and its use in diplomacy, such as its diffusion, building on existing theory and proposing models and new scenarios that allow insights not yet addressed.

Propagation of disinformation and elements of diplomacy’s use of this strategy in social networks

The propagation of disinformation is in many ways similar to the spread of an epidemic: a number of disinformed (infected) individuals seek to affect a susceptible population by transmitting a message containing false information. Models of the spread of disinformation are therefore based on the SIR (Susceptible-Infected-Recovered) model (e.g., Zhao & Wang, 2013a; Rapoport & Rebhun, 1952). Subsequent studies have complemented this base model by adding and removing elements, such as the SIRaRu model, which made it possible to understand the behaviour of disinformation in homogeneous and heterogeneous communities (Wang et al., 2014); the SEIR (Susceptible-Exposed-Infectious-Recovered) model, which established the possibility of quantifying the duration of a disinformation outbreak (Di et al., 2020); and the SIR model for complex social networks (Zhao & Wang, 2013a), among others.
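To make this base mechanism concrete, the following minimal sketch implements the discrete-time SIR logic that these propagation models build on. The parameter values are illustrative only and are not taken from any of the cited studies.

```python
# Minimal discrete-time SIR sketch: the epidemic logic underlying
# disinformation-propagation models. All values are illustrative.

def sir_step(S, I, R, beta, gamma, N):
    """One day of SIR dynamics: susceptibles become 'infected'
    (disinformed) at rate beta; the infected recover (are corrected)
    at rate gamma."""
    new_infections = beta * S * I / N
    new_recoveries = gamma * I
    return (S - new_infections,
            I + new_infections - new_recoveries,
            R + new_recoveries)

N = 1_000_000                  # population size (illustrative)
S, I, R = N - 1.0, 1.0, 0.0    # a single initial spreader
for day in range(180):
    S, I, R = sir_step(S, I, R, beta=0.3, gamma=0.1, N=N)
print(f"day 180: S={S:,.0f}, I={I:,.0f}, R={R:,.0f}")
```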

While the above models explain the spread of disinformation, they have generally focused on the mechanisms of traditional communication channels and therefore do not incorporate elements characteristic of social media, such as types of reach (organic, paid and by invitation) or level of engagement. Advances in models of the spread of disinformation in social networks are more recent, focusing on pattern detection and on incorporating context to predict dissemination behaviour (Bian et al., 2020; Ma et al., 2015), and on maximising user influence, whereby an individual with many followers can generate a massive disinformation cascade (Li et al., 2020).

In view of these developments, models of disinformation propagation have been developed in areas of knowledge not directly related to diplomacy, so they lack some of the elements involved in governments’ use of this strategy, which alters the overall behaviour of the propagation system. It is worth remembering that disinformation is intentional (Gerrits, 2018), so its use in diplomacy follows strategic planning that seeks to maximise the effects of the message on a population (Vosoughi et al., 2018). The social media profiles of the disinformation agent therefore seek to attract the largest possible target audience (Hollenbaugh & Ferris, 2014) and make use of organic, paid and invitation-based reach to attract the target population and convert it into a population susceptible to viewing the disinformation message (Buchanan & Benson, 2019).

Once the susceptible population is linked to the disinformation profiles, the process of sending the message through the various media begins, notably organic reach (Buchanan & Benson, 2019), paid reach (Bodine-Baron et al., 2016), bots (Helmus et al., 2018) and trolls (Starbird, 2019), exposing the message systematically to establish the disinformed population. However, this happens only once there is a consolidated susceptible population: there is a delay between the formation of the susceptible population and the moment when it is disinformed, as the disinformation agent seeks to amplify the effect of the disinformation by taking advantage of possible reactions and comments to the message sent. The delay in sending the disinformation is only justified if the agent wants to maximise organic reach in the first stage. Regarding the means available to the disinforming agent, organic and paid reach are typical of the dynamics of social networks, facilitated by the algorithm, and subject the disinforming message to the rules of the network; bots and trolls, by contrast, amplify the message in parallel to those dynamics. These last two elements were incorporated into Russia’s diplomatic disinformation strategy in the US elections (Helmus et al., 2018).

Under the systematic exposure of the biased message in which the disinformed population is involved, it has been shown that constant interaction with the message generates an echo chamber that reinforces it (Bessi et al., 2015; Garrett, 2009). This leads to a higher level of interaction of the disinformed population with the message (engagement level), which hinders exposure to truthful content and prevents the disinformed population from becoming informed (Quattrociocchi et al., 2016), thus achieving one of the ultimate goals of disinformation as a strategy of diplomacy. However, a final element is the ability of the disinformed population to seek additional information in media other than social networks, which translates into a correction rate that reduces the disinformed population (Chiang & Knight, 2011; Entman, 2007). In this scenario, the now-informed population must decide whether to stop following the disinforming agent’s profile(s) or to remain in contact with them and thus remain part of the susceptible population. Table 1 summarises the elements identified in the literature that relate to the strategy of disinformation in diplomacy.

Table 1 Elements of disinformation as a strategy of diplomacy.

Methodology

Design

In order to fulfil the proposed objective and answer the research questions, this article developed a computational simulation model whose main technique was System Dynamics, taking Bala et al. (2017), Forrester (2013) and Sterman (2012) as theoretical references. The choice of this computational modelling and simulation method rests on the recognition that disinformation propagation resulting from the diplomacy strategy is a complex system, involving multiple elements and exhibiting non-linear, multi-causal and time-lagged behaviour (Bala et al., 2017). For the development of the model, the elements identified in the literature (Table 1), which are employed in diplomacy to propagate disinformation, were used. With these elements, we proceeded to conceptualise the model and construct it formally, following the procedure suggested by Bala et al. (2017).

In this sense, the diagram of flows and levels of the model was constructed, understood as the underlying physical structure of the system, where stocks represent the state or condition of the system in a defined period and flows represent change as a function of the decisions taken in the system. In this phase, the variables that represent the system’s behaviour must be defined. Subsequently, the differential equations representing the cause-effect relationships between the variables were established. With these equations, the parameters were determined, assigning numerical values to each variable. The parameters were based on the US Senate Select Committee on Intelligence reports on Russian interference in the 2016 US presidential election and on previous studies of the system’s elements. In addition, estimates were made for the variables using disaggregation, aggregation and multiple-equation techniques. Finally, the internal consistency of the model was tested to establish that the representation of the system was adequate within the scope of the study’s purpose.

The proposed model

Figure 1 presents the proposed flows-and-levels model, based on the SIR model, on advances in other fields of knowledge related to the propagation of disinformation, and on the characteristics of this diplomacy strategy. The model was designed with seven levels: five measured in number of persons, one in number of bots (B) and one in number of trolls (T).

Fig. 1: Model of flows and levels of disinformation as a strategy of diplomacy.

The model also considered other variables in addition to those defined in Table 1 that are required for the functioning of the disinformation system as a diplomacy strategy, and which together regulate the levels of the model, as presented in Table 2.

Table 2 Other variables required for model development.

The structure of the model allowed us to understand how disinformation spreads as a strategy of diplomacy on the basis of three assumptions. The first was that PO was fixed, so it did not increase or decrease due to effects other than PS formation. The second was that cd was the same in both the susceptibility-adoption process and the disinformation process. The third was that the model defines the growth of B and T as exponential: their growth is assumed to depend not on the monetary resources of the disinforming agent but on the agent’s need to have as many B and T as possible to spread disinformation. To relax this assumption, the model can be adapted to the mechanism described by Guzmán et al. (2022). Under the technical conditions of non-negativity of the variables (i.e. their domain is restricted to 0 or positive numbers) and with \(t = 0,1,2 \ldots ,180\), the model was represented by the following system of differential equations.

Target population:

$$\begin{array}{l}PO_{\left( t \right)} = \Big[ PO_{\left( {t - 1} \right)} - \Big[ \left( {PO_{\left( {t - 1} \right)} \times i \times ei} \right)\\ \qquad \quad +\, \left( {PO_{\left( {t - 1} \right)} \times tao} \right) + \left( {CPM \times cd \times ec} \right) \Big] \Big]dt\end{array}$$
(1)

Susceptible population:

$$\begin{array}{l}PS_{\left( t \right)} = \big[ PS_{\left( {t - 1} \right)} + \big[ \left( {PO_{\left( {t - 1} \right)} \times i \times ei} \right) + \left( {PO_{\left( {t - 1} \right)} \times tao} \right)\\\qquad \quad +\, \left( {CPM \times cd \times ec} \right) + \left( {PIn_{\left( {t - 1} \right)} \times tr} \right) \big]\\ \qquad \quad -\, {\left[ {f\left( {x_t,x_{t - \tau },t} \right)dt;t \ge t_0} \right]} \big]dt\end{array}$$
(2)

It is worth noting that \(f\left( {x_t,x_{t - \tau },t} \right)dt;t \ge t_0\) mathematically describes the delay of an action, in our case the onset of disinformation propagation. The above applies to Eqs. 2 and 3, where \(x_t\) is equal to:

$$\begin{array}{l}x_t = \bigg[ \left( {PS_{\left( {t - 1} \right)} \times tao\_1} \right) + \left( {CPM \times cd \times ec} \right) \\ \quad +\, \left( {PS_{\left( {t - 1} \right)} \times tcb \times tao\_2 \times B} \right) + \left( \begin{array}{l}PS_{\left( {t - 1} \right)} \times tct\\ \times tao\_3 \times T\end{array} \right) \bigg]dt\end{array}$$
(2.1)

In turn:

$$B_{\left( t \right)} = \left[ {B_{\left( {t - 1} \right)} + \left( {B_{\left( {t - 1} \right)} \times tab} \right) - \left( {B_{\left( {t - 1} \right)} \times tdb} \right)} \right]dt$$
(2.1.1)
$$T_{\left( t \right)} = \left[ {T_{\left( {t - 1} \right)} + \left( {T_{\left( {t - 1} \right)} \times tcpt} \right) - \left( {T_{\left( {t - 1} \right)} \times tet} \right)} \right]dt$$
(2.1.2)
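To illustrate how the delay term can be read in code, the following sketch treats \(f\left( {x_t,x_{t - \tau },t} \right);t \ge t_0\) as a step-activation delay: the disinformation flow \(x_t\) is computed at every step but only begins draining PS into PD once the onset time \(t_0\) (set by the delay rd) is reached. This is one plausible reading; a material pipeline delay (such as Stella’s DELAY builtin) would instead shift the whole flow by \(\tau\) days.

```python
def delayed_flow(x_t: float, t: float, t0: float) -> float:
    """Disinformation flow that is active only from the onset day t0."""
    return x_t if t >= t0 else 0.0

# With onset at day 75, the flow is suppressed before t0 and passes
# through unchanged afterwards.
assert delayed_flow(100.0, 10.0, t0=75.0) == 0.0
assert delayed_flow(100.0, 80.0, t0=75.0) == 100.0
```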

Disinformed population:

$$PD_{\left( t \right)} = \left[ {PD_{\left( {t - 1} \right)} + \left[ {f\left( {x_t,x_{t - \tau },t} \right)dt;t \ge t_0} \right] - \left( {PD_{\left( {t - 1} \right)} \times ce} \right)} \right]dt$$
(3)

Informed population:

$$PIn_{\left( t \right)} = \left[ {PIn_{\left( {t - 1} \right)} + \left( {PD_{\left( {t - 1} \right)} \times ce} \right) - \left[ {\left( {PIn_{\left( {t - 1} \right)} \times td} \right) + \left( {PIn_{\left( {t - 1} \right)} \times tr} \right)} \right]} \right]dt$$
(4)

where tr is equal to:

$$tr = \left[ {1 - td} \right]dt$$
(4.1)

The value of ce depends on the value of ne, represented as a graphical function (see Table 3), as follows:

$$ce = f\left( {ne} \right)dt$$
(4.2)
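In system-dynamics tools, a graphical function such as ce = f(ne) is a piecewise-linear lookup table. A sketch of this idea is shown below; the breakpoints are hypothetical placeholders (the actual curve is defined in Table 3), chosen so that higher engagement lowers the correction rate, consistent with the echo-chamber effect described earlier.

```python
import numpy as np

# ce = f(ne): piecewise-linear lookup. Breakpoints are hypothetical
# placeholders, not the article's values (those are in Table 3).
ne_points = [0.00, 0.05, 0.15, 0.40]   # engagement level (assumed)
ce_points = [0.20, 0.12, 0.06, 0.02]   # correction rate (assumed)

def ce(ne: float) -> float:
    """Correction rate as a graphical function of engagement."""
    return float(np.interp(ne, ne_points, ce_points))

print(ce(0.10))  # linearly interpolated between the 0.05 and 0.15 points
```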

Unsubscribed population:

$$PU_{\left( t \right)} = \left[ {PU_{\left( {t - 1} \right)} + \left( {PIn_{\left( {t - 1} \right)} \times td} \right)} \right]dt$$
(5)
Table 3 Initial parameters of the model variables.

Having said that, the initial parameters of the dynamic model are presented in Table 3.
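As a complement to the Stella implementation, the following compact sketch integrates Eqs. (1)-(5) with the Euler method, using the Δt = 1/10 step and 180-day horizon reported in this methodology. Every numeric value in the parameter dictionary is a placeholder: the actual initial parameters are those in Table 3, which is not reproduced here.

```python
import numpy as np

dt, days = 0.1, 180   # Euler step and horizon, as in the Stella settings
# Placeholder parameters only; the article's values are in Table 3.
p = dict(PO0=1_000_000, i=0.01, ei=0.5, tao=0.005, CPM=1000, cd=10,
         ec=0.4, tao_1=0.001, tcb=0.02, tao_2=0.01, tct=0.03,
         tao_3=0.01, tab=0.04, tdb=0.01, tcpt=0.04, tet=0.02,
         td=0.6, ne=0.15, rd=75)

def ce(ne):
    """Graphical function ce = f(ne), Eq. (4.2); placeholder curve."""
    return float(np.interp(ne, [0.0, 0.4], [0.2, 0.02]))

def run_model(p):
    PO, PS, PD, PIn, PU, B, T = p["PO0"], 0.0, 0.0, 0.0, 0.0, 1.0, 1.0
    tr = 1 - p["td"]                                    # Eq. (4.1)
    hist = {k: [] for k in ("PO", "PS", "PD", "PIn", "PU")}
    for step in range(int(days / dt)):
        t = step * dt
        paid = p["CPM"] * p["cd"] * p["ec"]             # paid-reach term
        adoption = PO * p["i"] * p["ei"] + PO * p["tao"] + paid
        x_t = (PS * p["tao_1"] + paid
               + PS * p["tcb"] * p["tao_2"] * B
               + PS * p["tct"] * p["tao_3"] * T)        # Eq. (2.1)
        disinfo = x_t if t >= p["rd"] else 0.0          # delayed onset
        correction = PD * ce(p["ne"])
        unsub, resub = PIn * p["td"], PIn * tr
        PO = max(PO - adoption * dt, 0.0)               # Eq. (1)
        PS = max(PS + (adoption + resub - disinfo) * dt, 0.0)  # Eq. (2)
        PD += (disinfo - correction) * dt               # Eq. (3)
        PIn += (correction - unsub - resub) * dt        # Eq. (4)
        PU += unsub * dt                                # Eq. (5)
        B += B * (p["tab"] - p["tdb"]) * dt             # Eq. (2.1.1)
        T += T * (p["tcpt"] - p["tet"]) * dt            # Eq. (2.1.2)
        for k, v in zip(hist, (PO, PS, PD, PIn, PU)):
            hist[k].append(v)
    return hist

stocks = run_model(p)
print({k: round(v[-1]) for k, v in stocks.items()})   # levels at t = 180
```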

Model validation

Regarding the validation process, Schwaninger and Groesser (2020) recognise that system-dynamics-based models can be validated using both quantitative and qualitative methods, distinguishing three major categories of validation: model context, model structure and model behaviour. In this article, validation was based on the model-structure category. Structure tests aim to increase confidence in the structure of the theory created about the mode of behaviour of interest; that is, they evaluate whether the logic of the model is in line with the corresponding structure in the real world (Schwaninger & Groesser, 2020). The test used was sensitivity analysis to parameter changes.

This validation method “evaluates changes in the model’s behaviour by systematically varying input parameters” (Schwaninger & Groesser, 2020). It reveals the parameters to which the model is highly sensitive by running numerous simulations in which parameters are varied randomly within a range defined by the modeller. A model is considered valid when the numerical values of the simulation results change but the model’s behaviour remains consistent.

This validation test can reveal the degree of robustness of the model’s behaviour and, therefore, indicate to what extent conclusions based on the model could be affected by uncertainty in parameter values (Schwaninger & Groesser, 2020). For the purposes of this study, the following variables were modified by ±10% of their initial parameter values (see Table 3): ec, i, rd, tab, tdb, ne, tcpt and tet. For the variable cd, the modification was made in the range of 0–20, and for rd, between 65 and 85 days. If modifying a variable resulted in negative values, the minimum value for sensitivity analysis was set to 0. A total of 100 scenarios were simulated with a uniform distribution for all variables.
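A sketch of this procedure, continuing the hypothetical run_model helper from the example given after Table 3, is shown below: each scenario draws the listed parameters uniformly within ±10% of their base values, with the wider ranges for cd and rd stated above.

```python
import numpy as np

rng = np.random.default_rng(42)            # fixed seed for reproducibility
varied = ["ec", "i", "tab", "tdb", "ne", "tcpt", "tet"]   # +/-10% variables

results = []
for _ in range(100):                       # 100 scenarios, as in the text
    q = dict(p)                            # copy of the base parameters
    for k in varied:
        q[k] = rng.uniform(0.9 * p[k], 1.1 * p[k])
    q["cd"] = rng.uniform(0, 20)           # cd varied in the 0-20 range
    q["rd"] = rng.uniform(65, 85)          # rd varied between 65 and 85 days
    results.append(run_model(q))

final_PD = np.array([r["PD"][-1] for r in results])
print("PD at t = 180, 95% interval:", np.percentile(final_PD, [2.5, 97.5]))
```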

Sensitivity analysis was performed on the model’s stocks of PO, PS, PD, PIn and PU, as shown in Fig. 2. Numerical sensitivity was observed in the analysed stocks, indicating that the values change significantly with the parameters; however, the system’s behaviour remains consistent for all stocks.

Fig. 2: Model sensitivity analysis. a Sensitivity analysis of PO. b Sensitivity analysis of PS. c Sensitivity analysis of PD. d Sensitivity analysis of PIn. e Sensitivity analysis of PU.

Based on calculations using a 95% confidence interval (CI), it is estimated that the target population (PO), at time t = 180, will range between 0 and 757,000 individuals (Fig. 2a). Similarly, within the same interval and period, in the PS category (Fig. 2b), the susceptible population is expected to be between 0 and 923,000 people. As for the PD category (Fig. 2c), the number of disinformed individuals is estimated to range from 227 to 203,000. Furthermore, with a 95% CI and for t = 180, it is projected that the informed population (PIn, Fig. 2d) will range from 138 to 69,200 individuals. Finally, regarding the number of unsubscribed individuals (PU, Fig. 2e), it is estimated that the values will be within a range of 1,060 to 76,000.

The behaviour of the system after day 150 is explained by the target population of the disinformation agent having reached its limit, as shown in Fig. 2a, b. In the case of the PD, PIn and PU stocks, the behaviour derives from the confluence of the variables involved in the model flows. For these three stocks, the behaviour presents peaks and troughs under the extreme conditions represented in the quartiles simulated in the sensitivity analysis, with only the numerical value of the stocks changing.

Simulations and data analysis

With the proposed model, we proceeded to establish the effect of the different elements of the system through computer simulation, modifying the parameters established in the initial model (see Table 3). It should be noted that in each simulation only the parameter indicated in Table 4 was modified, with the others retaining their initial values shown in Table 3, and the results on the levels of the system were named with the simulation code assigned in Table 4, followed by the name given to the level. RQ1 was answered by the model, and RQ2, RQ3 and RQ4 by the simulations.

Table 4 Computer simulations.

Regarding the nature of the simulations, system-dynamics-based models can be either deterministic or stochastic. The present model is deterministic because it does not consider variables with random parameters; it is assumed that the causal relationships between the system variables are known and constant over time. In other words, the behaviour of the system is fully determined by the rules and relationships established in the model: if the simulation is run multiple times with the same parameters and initial conditions, the same result is obtained each time, without random variation. Hence, for the statistical analyses described below, it is not necessary to run the simulations multiple times.

Thus, to test for statistically significant differences between the initial behaviour of the system and the behaviour generated with the modified parameters, the levels of the model were compared. The Kolmogorov-Smirnov statistic was applied to check whether the data fit a normal distribution (normality assumed when p-value > 0.05), and it was found that the data did not follow a normal distribution. Accordingly, the Wilcoxon test was used to establish the difference in medians between the behaviour of the system with the initial parameters and with the modified parameters, considering the difference significant at p-value < 0.05. In this way, it was possible to answer RQ2, RQ3 and RQ4.
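The following sketch illustrates this testing sequence with SciPy, re-using the hypothetical run_model helper from the sketch above; the PS trajectory is compared here purely as an example of pairing a baseline run with a modified one.

```python
import numpy as np
from scipy import stats

baseline = np.asarray(run_model(p)["PS"])              # initial parameters
modified = np.asarray(run_model(dict(p, cd=0))["PS"])  # e.g. Sim-1: cd = 0

# Kolmogorov-Smirnov test against a fitted normal distribution
# (p-value > 0.05 would indicate no evidence against normality).
ks = stats.kstest(baseline, "norm", args=(baseline.mean(), baseline.std()))

# For non-normal data: Wilcoxon signed-rank test on the paired series.
w = stats.wilcoxon(baseline, modified)
print(f"KS p = {ks.pvalue:.3g}; Wilcoxon p = {w.pvalue:.3g}")
```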

Finally, the computational work on the model and simulations was developed in Stella Architect software version 3.3. The following model settings were considered: initial time = 0, final time = 180, Δt = 1/10, time units in days and selected Euler integration method. SPSS software version 25 was used for the statistical analyses.

Results

Under the initial conditions of the model, it was observed that, over the 180 days simulated, the PO decreased by 84.3%, so that 843,000 people became susceptible to being disinformed; however, the final PS was 691,722 people (Fig. 3a). The diplomacy’s disinformation agent managed to spread the message to a total of 267,275 people, 135,463 of whom had previously been disinformed. The PIn over the 180 days was 148,117 people, of whom only 11,779 (PU) decided to cancel their subscriptions to the disinformation agent’s accounts. On average, from the start of the disinformation activity, the agent impacted 1,476 people each day, with the 176th day showing the greatest growth, at 3,335 people (Fig. 3b). Similarly, growth in PIn was evident (Fig. 3b), representing a decrease in the gap between this population and PD, falling from 4.69 times at t = 71 to 1.70 at t = 180, with an average of 1.83 times.

Fig. 3: Simulation results of the model with initial parameters. a System behaviour at PO and PS levels. b System behaviour at PD, PU and PIn levels. c System behaviour at B and T levels. d Behaviour of variables at, ao_1, ap and ab.

The behaviour of B and T showed exponential growth, from one to 411.036 ≈ 412 and to 314, respectively (Fig. 3c). Regarding the dissemination methods used by diplomacy to disinform, ap was constant for any value of t, disinforming 1,200 people per day; by comparison, at t = 180, ao_1 had disinformed 29 people, at 261 and ab 4,580. The decrease of ab around day 100 (Fig. 3d) is a consequence of the tao_2 effect: the more followers the bots have, the more the organic reach of their publications is limited, because the more susceptible population the bot holds in the social network, the fewer people can see the disinformation. Social networks use this mechanism to force accounts with a large reach to pay for users to see their publications. Figure 3d shows the behaviour of the disinformation methods.

With regard to the comparison between the behaviour of the original system and simulation one (Sim-1), statistically significant differences were found in the absence of cd, reflected in changes between the simulations in the levels of PO and Sim-1 PO (z = −11.63, p-value < 0.001); PS and Sim-1 PS (z = −11.63, p-value < 0.001); PD and Sim-1 PD (z = −9.10, p-value < 0.001); PIn and Sim-1 PIn (z = −9.10, p-value < 0.001); and PU and Sim-1 PU (z = −9.10, p-value < 0.001). Thus, at t = 180, the numbers of disinformed, informed and unsubscribed people in the disinformation agent’s account decreased by 1,355; 864; and 10,506 people, respectively. This behaviour is presented in Fig. 4b.

Fig. 4: Simulation results of the model with parameters set for Sim-1, Sim-2, Sim-3 and Sim-4. a, c, e, g System behaviour at PO and PS levels. b, d, f, h System behaviour at PD, PU and PIn levels.

For Sim-2, statistically significant differences were found in the absence of B in the propagation of disinformation as a strategy of diplomacy. The levels of PO and Sim-2 PO (z = −6.95, p-value < 0.001); PS and Sim-2 PS (z = −9.06, p-value < 0.001); PD and Sim-2 PD (z = −9.06, p-value < 0.001); PIn and Sim-2 PIn (z = −9.02, p-value < 0.001); and PU and Sim-2 PU (z = −8.81, p-value < 0.001) changed between the simulations. In this scenario, at t = 180, PO was lower by 14,000 persons (Fig. 4c); that is, in the absence of this disinforming element, PD, PIn and PU decreased by 514; 360; and 1,346 persons, respectively (Fig. 4d).

Similarly, in the case of Sim-3, statistically significant differences were established in the absence of T. The levels of PO and Sim-3 PO (z = −6.92, p-value < 0.001); PS and Sim-3 PS (z = −9.06, p-value < 0.001); PD and Sim-3 PD (z = −9.06, p-value < 0.001); PIn and Sim-3 PIn (z = −9.02, p-value < 0.001); and PU and Sim-3 PU (z = −8.81, p-value < 0.001) changed between the simulations. It was thus determined that at t = 180 the PS was lower by 13,000 persons (Fig. 4e), and that the levels of PD, PIn and PU decreased by 497; 343; and 1,252 persons, respectively, as shown in Fig. 4f.

For Sim-4, statistically significant differences were found for the variation of rd, i.e. the time at which the disinformation agent initiates the propagation of the message. The levels of PO and Sim-4 PO (z = −10.55, p-value < 0.001); PS and Sim-4 PS (z = −10.62, p-value < 0.001); PD and Sim-4 PD (z = −2.86, p-value < 0.001); PIn and Sim-4 PIn (z = −3.03, p-value < 0.001); and PU and Sim-4 PU (z = −10.62, p-value < 0.001) changed between the simulations. In this scenario, at t = 180, the PS increased by 15,000 people (Fig. 4g), PD and PIn decreased by 186 and 184 people, respectively, and PU increased by 3,026 people (Fig. 4h).

Finally, for the scenarios presented in Sim-5, statistically significant differences were found for both the increase and the decrease of ne in the system levels, as shown in Table 5, whereby the levels changed between the simulations. For ne = 5% at t = 180, the PO decreased by 12,000 persons (Fig. 5a), which meant that the PD, PIn and PU levels decreased by 1,215; 331; and 1,151 persons, respectively, relative to Sim-1 (Fig. 5b). When ne equals 40% for the same t, a decrease in PS of 15,000 persons was observed (Fig. 5c); however, PD increased by 3,648 persons while PIn and PU decreased by 307 and 1,538 persons, respectively (Fig. 5d).

Table 5 Statistically significant differences in system levels for the Sim-5 scenarios.

Fig. 5: Simulation results of the model with parameters set for Sim-5. a, b System behaviour with ne = 5%. c, d System behaviour with ne = 40%.

Discussion and conclusions

The study aimed to simulate the propagation of disinformation in social networks derived from the strategy of diplomacy, based on the elements of the system. In accordance with the results presented above, it was possible to provide an initial approximation to answering the research questions through modelling and diplomacy. A conceptual, mathematical and simulation model was established to understand how disinformation spreads on social networks as a diplomacy strategy, taking the SIR model as a basis and modifying it to include the elements of this diplomacy strategy documented in the literature (e.g., paid and organic reach, bots and trolls). It is important to highlight that the model’s parameters can be adapted to any social network, or to several simultaneously if layer- or array-based modelling is used.

Compared to the original model proposed by Rapoport and Rebhun (1952), and to models of disinformation in social networks in contexts other than diplomacy, such as those of Bian et al. (2020), Li et al. (2020) or Guzmán et al. (2022), the model proposed here differs in two respects. The first concerns the target population, which is defined by the agent of international diplomacy, given that it focuses its efforts on a limited audience with specific characteristics that it seeks to influence through the disinformation message; this aspect was not considered in other, non-diplomatic models, which assumed that the disinformed population would grow without limit. The second concerns the linking of the different elements of the disinformation system as a strategy of diplomacy, since previous research has analysed each element separately, as exemplified by Buchanan and Benson (2019), Starbird (2019), Helmus et al. (2018) and Entman (2007). Hence, this model makes it possible to understand the impact of each of the elements identified in the literature by integrating them into a single system, and, in line with La Cour (2020), it explains this problem from a macro rather than a local dynamic, by involving a greater number of elements and the possibility of deploying monetary resources to intensify disinformation work.

Regarding the behaviour of the system when some of its elements are suppressed or the established parameters, such as the level of engagement, are modified, statistically significant differences were found that increase or decrease the levels of PS, PO, PD, PU and PIn, as shown in the state of the levels at t = 180 and in Figs. 3 and 4. In the absence of paid outreach, the PS of disinformation was reduced by 38.02%, which means that paying to link the target population to the disinformation agent’s accounts, as well as to propagate the message on the social network, is of vital importance to this strategy of international diplomacy. The absence of this element changes the behaviour of the disinformation system, affecting fewer people in the target population, so the role of social networks and of this mechanism in controlling the spread of disinformation should be evaluated. This generates a new scenario that should be incorporated into the study of the phenomenon of disinformation, especially in diplomacy: the double standard of social networks in wanting to prevent the propagation of the disinformation message while profiting from this activity, as shown in the case of the US elections documented by the Office of the Director of National Intelligence (2017).

In the absence of bots and trolls, the disinformed population decreases, but not to the same extent as in the absence of paid reach. This behaviour can be explained by three factors: the first is the limited number of parameterised bots and trolls hired at the initial moment of the propagation of the disinformation; the second is their limited reach, as their activity is concentrated exclusively on the organic reach defined by the social network in which they disinform; and the third is the effectiveness of the mechanisms that these networks have to deactivate bots and eliminate troll accounts.

Regarding the onset of disinformation, the simulation showed that starting the propagation of the disinformation message early can increase the susceptible population, as well as the number of people disengaging from the disinformation agent’s accounts; however, the numbers of disinformed and informed people did not change substantially (0.06% and 0.12%, respectively) compared with the initial behaviour of the system. Finally, the simulation of the level of engagement showed that its decrease generates a decrease in PS, although less interaction with the disinformation message does not generate greater numbers of informed, disinformed or unsubscribed people. Conversely, an increase in citizens’ interaction with the disinformation message results in an increase in PS and in the disinformed population.

Given the results and discussion presented here, the model developed sheds light on how disinformation spreads on social media as a result of the strategy of diplomacy, providing a novel picture that links the highly theoretical study of this phenomenon in international relations with the documentation of cases. The study of disinformation remains complex, especially in diplomacy, because of the difficulty of tracing the origin of disinformation and the exact use of the elements of the system; the academic community and states are therefore encouraged to use the model presented here to continue the analysis of this strategy of diplomacy.

Regarding the limitations of the study, it should be borne in mind that each simulation modified only one parameter, so the results presented here rest on the ceteris paribus criterion; modifying several parameters at once will change the behaviour of the system. Randomising some of the parameters should also be considered in order to identify possible changes in the behaviour of the disinformation system. It is also recommended that the academic community evaluate the model with other techniques associated with system dynamics, to provide additional evidence of its robustness. Finally, the proposed model was based on the elements currently used by diplomacy to disinform on social media, so any new element introduced by the evolution of the platforms or of the strategy should be incorporated.