Collective patterns and stable misunderstandings in networks striving for consensus without a common value system

Collective phenomena in systems of interacting agents have helped us understand diverse social, ecological and biological observations. The corresponding explanations are challenged by incorrect information processing. In particular, the models typically assume a shared understanding of signals or a common truth or value system, i.e., an agreement of whether the measurement or perception of information is ‘right’ or ‘wrong’. It is an open question whether a collective consensus can emerge without these conditions. Here we introduce a model of interacting agents that strive for consensus, however, each with only a subjective perception of the world. Our communication model does not presuppose a definition of right or wrong and the actors can hence not distinguish between correct and incorrect observations. Depending on a single parameter that governs how responsive the agents are to changing their world-view we observe a transition between an unordered phase of individuals that are not able to communicate with each other and a phase of an emerging shared signalling framework. We find that there are two types of convention-aligned clusters: one, where all social actors in the cluster have the same set of conventions, and one, where neighbouring actors have different but compatible conventions (‘stable misunderstandings’).

www.nature.com/scientificreports/

the world. Thereby, they are faced with a cognitive dissonance between their own cognition and what they perceive as their neighbours' cognition 46. In order to reach conformity 47, each agent strives to minimise its cognitive dissonance, based only on its own subjective observations. We show that under these conditions collective behaviour can still emerge. We also show that stable misunderstandings can form, i.e. an emerging pattern in which many nodes perceive no conflict despite the heterogeneity of their subjective perceptions. On suitable network topologies, this often emerges as an alternating arrangement of compatible but distinct subjective perceptions.
To elaborate on the interplay between our model's dynamics and the topology of the underlying network, we analyse the dynamics on regular lattices and random regular graphs. The results help to better understand the emergence of order within a connected community of agents without an objective instance. Our model is motivated by Gotthard Günther's polycontextural logic 48,49 , where two subjects observing the same situation can come to different conclusions, even when each of the subjects adheres to binary (but distinct, contexture-dependent) logic. We hence denote our model polycontextural networks.
Our findings not only have an impact on sociological questions but also on the ongoing problem of how distributed machine learning systems can negotiate a common signalling system. It is also an ongoing debate in philosophy how a consensus can emerge out of observer-dependent facts 50 .
The remainder of the paper is organised as follows: In the next section, we introduce our model and illustrate its static properties with two simple network motifs. In Sect. 3 we then analyse the dynamics of the model on random regular graphs, before we focus on triangular and square lattices to investigate the observed self-similarity and develop a mechanistic understanding of the observed dynamics. Subsequently, in Sect. 4 we discuss the implications of our model and draw some conclusions in Sect. 5.

Model
The polycontextural network is a simple model where N agents interact over a network. Each agent A_n with n ∈ {1, …, N} is equipped with a characteristic c_n whose expression is taken from a pool of size C. To simplify the notation, the characteristic of each agent is given as a standard basis vector e_i of length C, with a 1 in the ith position and 0 in every other position.
To incorporate subjectivity, each agent has an individual dictionary that bijectively maps the 'outside world' of the agent to its personal cognition. Formally, this dictionary is a bijective function σ on the set of C characteristics and can be written as a C × C permutation matrix T_n. If an agent A_n observes the characteristic c_m of another agent, the observing agent sees T_n c_m instead of the 'true' (objective) c_m (as depicted in Fig. 1). To give it an intuitive meaning, in the following we will assume that the characteristics c_n are colours. Due to this definition, our model does not have objective truth values (a predefined understanding of colour) but C! different and equally correct world-views (here: colour mappings). In the following, we understand the term world-view to mean a set of truth values that determines how an agent perceives the environment. Each world-view hence refers to a specific choice of a value system.
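As a minimal illustration of this representation (in Python with numpy; the function and variable names are ours, not part of the paper's notation), characteristics become basis vectors and world-views become permutation matrices:

```python
import numpy as np

C = 3  # number of colours

def basis(i, C=C):
    """Characteristic c_n: a standard basis vector with a 1 in position i."""
    e = np.zeros(C, dtype=int)
    e[i] = 1
    return e

# A world-view is a C x C permutation matrix; the identity matrix means
# "sees colours exactly as they are".
identity = np.eye(C, dtype=int)
swap01 = np.array([[0, 1, 0],
                   [1, 0, 0],
                   [0, 0, 1]])  # a 2-cycle: exchanges colours 0 and 1

# Agent n observes agent m's colour c_m as T_n c_m:
perceived = swap01 @ basis(0)  # this agent perceives colour 0 as colour 1
```

With C = 3 there are C! = 6 such matrices, matching the six translation tables discussed in the Results section.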
The different agents are spatially distributed and partially connected, whereby they form a network structure where each agent is one node. During this investigation, we will analyse our model with different network topologies of different sizes.
The only interaction in our model is a simple version of social influence where all agents strive for consensus. A single update step proceeds similarly to the standard voter model with C different opinions and asynchronous dynamics: a randomly chosen agent A_n adopts the opinion (the colour) of one of its neighbours A_m. However, in contrast to the voter model, A_n cannot observe the 'true and objective' colour c_m but sees the characteristic T_n c_m. A sequence of N updates forms a time step, which means that on average every node is selected once per time step.
While formally similar to the voter model, due to the different world-views T_x and hence the different perception of colours, the model's dynamics would in general not converge to a uniform colouring. To illustrate this, let us imagine two nodes A and B, which can be either red or blue. The nodes shall have two different, but fixed world-views: A recognises colours as they are and B recognises the colours reversed (blue as red, red as blue). If A is red and is observed by B, B will turn blue. In some subsequent time step, A will observe B and also turn blue. However, as soon as B observes A again, B will turn red and the process starts again. What is missing is the single agent's ability to sense that its own world-view is not aligned with that of its neighbours. According to relational epistemology 51 and the philosophy of world-views 52,53, world-views are shaped by and changed according to lived experiences and determine how one understands the world and responds to it.
Following this concept, within our model each agent is equipped with two internal counters, O_n and K_n, and every update step proceeds as follows:
• One agent A_n and one of its neighbours A_m are selected randomly following a uniform distribution.
• Agent A_n subjectively observes the feature of A_m, which means A_n sees the characteristic T_n c_m.
• If agent A_n's own characteristic is already equal to the observed one, A_n only increments its internal counter O_n by 1.
• Otherwise, A_n changes its own characteristic to the subjectively observed one and increments both its internal counters O_n and K_n by one.
• If the fraction K_n/O_n is larger than the parameter q (which means that in more than a fraction q of the observations the observed characteristic was not equal to the own one), the agent changes its own world-view T_n to a random selection out of the C! possibilities and resets both counters to zero. Note that O_n is always incremented at least once before this last step; the fraction is hence always defined.
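The update rule above can be sketched in code as follows (a minimal Python/numpy illustration under our own naming conventions; it mirrors the rule as described, not the authors' original implementation):

```python
import random
from itertools import permutations

import numpy as np

C = 3
# All C! permutation matrices, i.e. all possible world-views.
TABLES = [np.eye(C, dtype=int)[list(p)] for p in permutations(range(C))]

class Agent:
    def __init__(self):
        self.c = np.eye(C, dtype=int)[random.randrange(C)]  # colour as basis vector
        self.T = random.choice(TABLES).copy()               # subjective dictionary
        self.O = 0                                          # observation counter
        self.K = 0                                          # conflict counter

def update(agents, neighbours, q, rng=random):
    """One asynchronous update step, following the bullet points above."""
    n = rng.randrange(len(agents))
    m = rng.choice(neighbours[n])
    a = agents[n]
    seen = a.T @ agents[m].c          # subjective observation of the neighbour
    a.O += 1
    if not np.array_equal(a.c, seen):
        a.c = seen                    # adopt the subjectively observed colour
        a.K += 1
    if a.K / a.O > q:                 # too many conflicts: draw a new world-view
        a.T = rng.choice(TABLES).copy()
        a.O = a.K = 0

# usage sketch: a ring of 6 agents, 1000 asynchronous updates
agents = [Agent() for _ in range(6)]
neighbours = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
for _ in range(1000):
    update(agents, neighbours, q=0.5)
```

Note that O_n is incremented before the threshold test, so the fraction K_n/O_n is always defined, as required by the last bullet point.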
The individuals in our model hence share a predefined response once a threshold number of conflicts is detected; following the definitions given in Ref. 54, our model would have to be considered to belong to the group of quorum-sensing models, although the term 'subjective quorum sensing' would probably fit best. In terms of everyday experience, it may seem strange that opinions in our model are changed immediately, regardless of past observations. However, what our model reflects are the different time scales of a change of opinion vs. a change of world-view. The illustrative idea behind the dynamics of the model is that agents react, according to their subjective interpretation, to the states of other agents. Since each agent is also an object of other observations, the state of an observed neighbour is sometimes already the reaction to the observation of the agent's own state. This enables each agent to perform a self-reflection: a repeated observation of a neighbour's state can indicate whether the neighbourhood confirms the agent's own world-view (for a more detailed philosophical interpretation see Sect. 4). The threshold parameter q therefore controls how sensitively an agent decides that its own belief system does not conform to the neighbourhood, and subsequently changes it.
Before we proceed, we define two special terms to simplify the notation: Two connected nodes i, j are called compatible if their dictionaries mutually agree in all colours, which means T_i × T_j = I. A network is considered solved if all connected nodes are compatible. Note that our definitions of compatible and solved networks only depend on the dictionaries of the nodes and not on the current colours. As we will show in the following examples, a solved network does not imply that no more colour changes occur.
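In code, these two definitions translate directly (a sketch with our own function names; edges are pairs of node indices):

```python
import numpy as np

def compatible(T_i, T_j):
    """Two connected nodes are compatible if their dictionaries cancel: T_i x T_j = I."""
    return np.array_equal(T_i @ T_j, np.eye(T_i.shape[0]))

def solved(tables, edges):
    """A network is solved if every edge connects compatible nodes."""
    return all(compatible(tables[i], tables[j]) for i, j in edges)
```

For permutation matrices the inverse equals the transpose, so compatibility of i and j is equivalent to T_j being the transpose of T_i.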
Two illustrative examples. Let us refer to two minimal network motifs as illustrative examples. Figure 2(1) shows three connected nodes (a triangular structure). Following the above definition of a solved system, node A has to be compatible with both nodes B and C, which means

T_A × T_B = I and T_A × T_C = I.

At the same time, the nodes B and C need to be compatible as well, hence

T_B × T_C = I.

This is only possible if

T_A = T_B = T_C and T_A × T_A = I,

which is true for all 2-cycle permutation matrices. Note that the definition of a solved system is a local definition and does explicitly not imply that no more colour changes occur. Let us hereto assume that C = 3 and that all dictionaries equal the same 2-cycle permutation matrix, e.g. the one that exchanges the first two colours. Let us further assume that node A is in state c_A = (1, 0, 0). If both nodes B and C observe A, both change their states to c_{B/C} = (0, 1, 0). However, if now B observes C, it needs to change its state again to c_B = (1, 0, 0), which would create a further state change after an observation by A, and so on. For the system to reach a state without any more colour changes, it would need to hold that

c_A = T_A c_B = T_A^2 c_C = T_A^3 c_A, i.e. T_A^3 = I.

This would require T_A to be a 3-cycle permutation matrix. However, since we already know that T_A needs to be a 2-cycle permutation matrix, this is only possible if T_A is the identity.
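The triangle example can be replayed numerically (a sketch; T is the 2-cycle table exchanging the first two colours, as in the text):

```python
import numpy as np

T = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])  # shared 2-cycle dictionary: the triangle is solved, T @ T = I

c = {'A': np.array([1, 0, 0]),
     'B': np.array([1, 0, 0]),
     'C': np.array([1, 0, 0])}

# replay the observation sequence from the text
c['B'] = T @ c['A']   # B observes A and turns to (0, 1, 0)
c['C'] = T @ c['A']   # C observes A and turns to (0, 1, 0)
c['B'] = T @ c['C']   # B observes C and flips back to (1, 0, 0): colours keep cycling
```

The network is solved (T squares to the identity), yet the colours never settle, because a colour fixed point around the triangle would require T cubed to equal the identity as well.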
Things are different for the four-node system shown in Fig. 2(2). For A it again holds that

T_A × T_B = I and T_A × T_C = I.

However, B and C are not connected directly, but only via D. To reach a solved system there are hence two more equations that need to be satisfied,

T_B × T_D = I and T_C × T_D = I,

which means that

T_B = T_C = T_A^{-1} and T_D = T_A.

In contrast to the triangular structure, these constraints allow for a system where no more changes of colour will happen. To see how this comes about, let us start with node A and cycle over the other nodes. It follows that

c_B = T_B c_A, c_D = T_D c_B = T_A T_A^{-1} c_A = c_A and c_C = T_C c_D = T_A^{-1} c_A = c_B,

so every node perceives the colours of its neighbours as equal to its own and no further colour change is triggered. Since in this configuration a system can be stable although the agents have different world-views, we call this novel effect a stable misunderstanding. The important point here is that, due to their subjectivity, the nodes involved cannot detect this misunderstanding. To our knowledge, our polycontextural network model is the first that allows for and demonstrates the impact of such stable misunderstandings.
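This stability can be verified numerically for a non-trivial choice of T_A, e.g. a 3-cycle (a sketch; node labels and function names are ours):

```python
import numpy as np

P = np.eye(3, dtype=int)[[1, 2, 0]]            # a 3-cycle world-view for A and D
tables = {'A': P, 'B': P.T, 'C': P.T, 'D': P}  # inverse of a permutation matrix = transpose
edges = [('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')]

# colours consistent with the constraints: alternate around the 4-cycle
colours = {'A': np.array([1, 0, 0])}
colours['B'] = tables['B'] @ colours['A']
colours['D'] = tables['D'] @ colours['B']
colours['C'] = tables['C'] @ colours['D']

def perceived_conflicts(tables, colours, edges):
    """Count directed observations where a node perceives a neighbour's colour as different."""
    return sum(not np.array_equal(tables[a] @ colours[b], colours[a])
               for e in edges for a, b in (e, e[::-1]))
```

Although A and B hold different world-views, no observation along any edge produces a perceived conflict, which is exactly the stable misunderstanding.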
One might argue that this observed effect of stable misunderstandings is just an artefact of whether the considered cycle has an even or odd number of nodes. While this is true from a mathematical point of view, we argue that the artefact mainly arises because of our simplified world-view representation. We assume that for each node the perception of colours takes place deterministically (no single colour is perceived in multiple ways without a change in world-view) and without information loss (two distinct colours are never perceived as the same colour; no 'colour blindness'). This set of requirements leads to the restriction to bijective translation tables. If we allowed, e.g., non-bijective associations within the translation tables, the effect would vanish. However, to keep the model simple and comprehensible, we stick to our definition.
Over time, we observe that connected nodes synchronise their world-views and form clusters of nodes with the same understanding of the world. We are hence mainly interested in the dynamics and organisation of the world-views (the tables). The (fluctuating) colours of the nodes are just signs (their language) to communicate with their surroundings and are, in our investigation, only of limited interest. In the following section we show that, depending on the value of the threshold parameter, the sizes of these table-clusters either stay small or expand over all scales, indicating a critical state and a phase transition. Besides the change of the threshold parameter, we also demonstrate how the topology of the network affects the type and size of clusters.
In what follows, we present the results of the simulations for four different network topologies. To avoid any grid artefacts, we first analyse random 3-regular and 4-regular graphs. To better understand and visualise the dynamics of our model we then focus on regular triangular and square lattices. For all models, we set C = 3 and-to avoid boundary effects-use periodic boundary conditions if applicable.

Results
Phenomenology. We will first illustrate the time-dependent behaviour of our model before we proceed to analyse the dependence on the parameter q. Figure 3 shows the time evolution of colour changes per time step, table changes per time step and the relative mean size of clusters of the same translation table T_X.

To gain insight into how the growth of clusters takes place, Fig. 4 shows a snapshot of a random 4-regular graph. The colours of the nodes indicate the time span for which the respective node has not changed: the more yellowish the node, the longer without change. For illustrative purposes, the six nodes that did not change for the longest time have been shifted out of the bulk. As can be seen from the figure, three of these nodes each form a separate triangular motif. These triangular motifs were the nuclei for the growth of a large cluster of shared world-views. The evolution of the cluster sizes can thus be understood as a nucleation and coarsening process. If the cluster does not span the full system we can still see colour fluctuations; however, the colours change less often than in the other two cases. There is hence an intermediate regime between continuous table changes and an immediately frozen state, where world-views can synchronise and cluster.

q-Dependence. To gain more insight into the critical behaviour, in the following we analyse in detail how the model behaves under a change of the parameter q. Figure 5 shows exemplary behaviours of the relative size of the largest cluster over the value of q for both a random 3-regular and a 4-regular graph of size N = 200. For a small window of q the largest cluster spans the full system, indicating system-wide correlations between the tables. Additionally, the lower plots show the fractions of the six different translation tables, where T_A denotes the identity, T_{B,C,D} denote the three possible 2-cycle matrices and T_{E,F} denote the two 3-cycle matrices, with T_E × T_F = I.
In terms of the motivation of our model the results so far already prove the emergence of a consensus. However, to better understand the characteristics of our model as well as its properties at criticality, we will now turn to two simplified network topologies and analyse their behaviour.

Regular lattices.
To gain more insight into the critical behaviour of our polycontextural network model, in the following we focus on regular triangular and square lattices. Figure 6 shows a snapshot of two critical systems after an evolution of t = 20,000 steps. Here, the colours of the nodes do not indicate their current colour but illustrate their respective translation tables. Depending on the topology of the network, the clusters are formed either by equal translation tables only or also by patterns of two alternating tables, indicating the occurrence of stable misunderstandings. This is in line with the analysis of the motifs in Sect. 2. Figure 7a,b shows the mean cluster size over the value of q for different system sizes for both the triangular and the square lattice. As expected, for small values of q the mean cluster size is close to one: the fluctuations in the system do not permit the buildup of correlations. At a rather sharp value q = q_c we observe a sudden jump in the cluster size, indicating a phase transition. Then, for values q > q_c the cluster size slowly decreases and converges to the initial mean cluster size of one. It is important to note that for q ≫ q_c the typical size of clusters is much smaller than the system size and hence the mean cluster size is not limited by the system size.
As already seen for the random regular graphs, a key quantity for our system is the size of the largest cluster. Figure 7c,d shows exemplary behaviours of the relative size of the largest cluster. For a small window of q the largest cluster spans the full system, indicating system-wide correlations between the tables. Additionally, the lower plots show the fractions of the six different translation tables, where T_A denotes the identity, T_{B,C,D} denote the three possible 2-cycle matrices and T_{E,F} denote the two 3-cycle matrices, with T_E × T_F = I. As already observed in Fig. 6, the topology of the underlying network determines which translation tables cluster: At criticality, the triangular network predominantly consists of nodes that hold the identity matrix T_A. In contrast, in the square lattice all translation tables occur with the same probability. However, as the diverging variance at the critical point already indicates, this is just an averaging effect: in a single realisation, only one type of cluster configuration wins, but the probability of winning is equal for all configurations.
We will now turn to an analysis of the scale-freeness of the cluster size distribution. Our model is reminiscent of the Q-state Potts model of grain growth, in which each lattice site holds one of Q possible orientations and a randomly selected site attempts to change to a new orientation. The new orientation is selected with a probability that depends on the energy difference between the old and the new state (with regard to the interactions with the nearest neighbours) as well as on an external parameter, the temperature T. It is well known that the cluster sizes in grain growth and particle coarsening show self-similarity 56,57. This means that during coarsening different system configurations reveal similar behaviour when scaled to the same scale: they are scale-invariant. Whether such self-similarity is also present in social dynamics is an ongoing debate 58. Due to the similarities between our model and the Potts model it is hence natural and interesting to ask whether we also observe scale-invariant behaviour in our model. A well-known characteristic of self-similarity is a power-law distribution of observable quantities, in the case of our model, e.g., the cluster size distribution. In Fig. 8 (left) we show this distribution for the triangular network at q = 0.57, slightly larger than the critical value q_c = 0.56. In a log-log plot, this distribution shows a linear behaviour with slope m = −2.3, indicating a power law with an exponent of α = −2.3. This cluster size distribution is hence scale-invariant. In terms of social systems, this would mean that an opinion structure found in small communities equals the structure found in large systems of interconnected communities. Self-similarity of the cluster size distribution also implies that the size of the largest cluster C_max scales with the linear system size L according to

C_max ∝ L^{d_f},

where d_f is the (possibly) fractal dimension of the cluster 59,60. In Fig. 8 (right) we observe the expected scaling for both lattices analysed and obtain the corresponding values for d_f.
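In practice, d_f can be estimated as the slope of log C_max versus log L. The sketch below uses synthetic stand-in values (not data from the paper) purely to illustrate the fitting procedure:

```python
import numpy as np

# hypothetical data: linear system sizes L and largest-cluster sizes C_max
L = np.array([16, 32, 64, 128, 256])
C_max = 0.8 * L**1.3                   # stand-in values following C_max ~ L^{d_f}

# d_f is the slope of a linear least-squares fit in log-log space
d_f, log_prefactor = np.polyfit(np.log(L), np.log(C_max), 1)
```

On real simulation data the scatter of the points around this line also gives a first indication of how reliable the estimated dimension is.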
The clusters on the square lattice grow with a fractal dimension of d_f ≈ 1.3, indicating a rough boundary, whereas the clusters on the triangular lattice grow with a fractal dimension of d_f = 2, equal to the spatial dimension of the lattice; there is hence a high 'surface tension' and the clusters are more compact, with a smooth boundary. One should note that this determination of d_f is not very accurate and another approach would be needed to obtain a more precise value. This is, however, beyond the scope of this manuscript and will be left for an upcoming publication.
With the results obtained, we are now in a position to explain mechanistically how clusters in our system are formed and how the observed self-similarity arises. Hereto, we draw on Fig. 9. If q < q_c the system is in phase 1: there are too many fluctuations in the system, so that no clusters of shared tables can emerge and possibly existing clusters are destroyed. For q > q_m (phase 3) there is too little activity: every single node behaves as a single nucleus of a new cluster and does not adapt to join other, possibly larger clusters. Upon a decrease of q below q_m, neighbouring agents begin to form clusters. However, small nuclei of possibly incompatible clusters appear all over the system and grow (with dimension d_f) until they reach the boundaries of other clusters. The result is a cluster-cluster competition between different incompatible clusters, as it is also known for models like the naming game 54. The smaller q, the smaller is the probability that an initial nucleus appears. Close to q_c, with q > q_c, there is only a very small probability for an initial nucleus, but once a first nucleus is stabilised it can grow over the full system without being limited by another growing cluster.

Discussion
We have shown that our simple model, the polycontextural network, has a phase where the world-views cluster globally, leading to a shared perception of signals, or global stable misunderstandings. In the following, we show how our model fits into the landscape of established models and discuss its implications.
Clustering of opinions. The clustering of opinions within human populations is an important and ongoing research topic 6. Based on empirically validated mechanisms like 'homophily', different models have tried to explain how opinion clustering might happen 61. Most of these models are able to show a clustering of opinions. Depending on the detailed mechanisms of the model, adding noise can facilitate monoculture or maintain pluralism 6,62. A prominent source of such noise is the misinterpretation of information. Starting from an initial configuration where no predefined definition of a right or wrong interpretation of signals is given, our polycontextural network shows the build-up of a shared perception of signals. Our model hence provides a framework to understand the basic mechanisms of how a first basic understanding of information might emerge, a prerequisite on which most current models in the social sciences (implicitly) rely.
Polycontextural logic. Binary logic is a cornerstone of western thought and technology and hence an important component in our decision strategies and opinion formation processes. As a consequence, social dynamics around opinion formation are challenged by situations where facts cannot easily be mapped to a global and objective 'true' and 'false', as is e.g. the case in modern social phenomena like the emergence and prevalence of fake news or the counter-phenomenon of fact-checking entities: What is obvious to us may be a lie from hell to our neighbour 52. When facts are not perceived equally by two distinct observers, each of whom adheres to binary logic, we are in a situation which the philosopher Gotthard Günther attempted to capture with his theory of polycontexturality 48. He argues that each of the observers is applying the framework of Aristotelian logic consistently within this observer's own realm of observation, called a contexture. Each contexture has its own factual embedding of true and false. The alignment of one's own subjective understanding with the community requires that each individual is able to self-reflect on their own understanding. Based on Hegel's dialectics, Günther formally analysed how living beings with only subjective perception can interact and how they can become aware of their own subjectivity 48,63. He assumed that every ordered combination of an observer (a subject) and the observed object forms a contexture that has its own classical two-valued logic. He found the mutual interaction between three contextures to be a crucial requirement for successful communication. The three contextures are arranged in a structure as shown in Fig. 10, which Günther termed the proemial relation. In the first contexture (C1) an object (O) is observed by the subject S1. This contexture can become the observed object of a second subject (S2).
Thereby, S2 observes the object of the first contexture as observed by the observer S1 from the first contexture 64. Subsequently, within a third contexture, S2 can compare the original object with its subjectified version. The proemial relation hence allows the single observers to reflect on their own understandings of the world. The dynamics of our polycontextural network can be interpreted in this manner: Let us assume two nodes A and B. The colour (the fact) of node A corresponds to the object O in the first contexture. This colour can subjectively be observed by node B (S1), which colours itself according to the result of this observation. Now, in the second contexture, node A (S2) can observe the colour of B (S1). Last (the third contexture), node A compares its own colour (O) with node B's colour (as observed by A itself), which is, from A's standpoint, the own colour through the eyes of another 65. Following these dynamics, node A is able to notice a possible misalignment between its own and node B's world-view (translation table). Our results hence indicate how the subjective observation of observers enables a self-reflection that can lead to the emergence of shared signals, and they provide a numerical example of Günther's and Hegel's philosophy.
Within social systems, communication and influence often lead to each of the social entities gathering 'followers', supporters of their particular interpretation of a given set of facts and, hence, of their respective contexture. Contextures in this way become entrenched in society. Similarly, in our model, we observe the growth of clusters of similar world-views.

Conclusion
In this manuscript, we presented a new model to explain if and how consensus can appear between agents that can only judge based on a subjective understanding of the world. Focusing on two regular lattices, the triangular and the square lattice, as well as on random regular graphs, we observed the emergence of a system-wide (and then, by definition, objective) understanding of signs within the network. This emergence depends on a single parameter that controls the volatility: If the agents are too volatile, they change their convictions too often to form clusters of shared understandings. If the agents are not volatile enough, they do not adapt to majority opinions and locally separated clusters of different convictions appear. Both phases are connected by a phase transition, and only at the transition point is the growth of a spanning cluster possible.
The findings of our model add to several ongoing discussions in social science as well as in philosophy and computer science. Obviously, the study of our model on different types of networks is not exhausted. In this manuscript, we have restricted our analysis to regular triangular and square lattices, which already showed, especially in terms of the cluster composition, two quite different behaviours. The next natural step could hence be to observe how fast a consensus can be reached on random networks like ER or BA graphs. Here, it might also be interesting to introduce a degree-dependent threshold parameter such that nodes with a high degree are more convinced of their position and change their tables more rarely. Additionally, it could be worthwhile to transform the dynamics of the model into a mechanism comparable to the q-voter model, where each node observes q neighbours and only changes its table if a given fraction of these neighbours have a wrong colour. To model people's behaviour a little more realistically, an interesting modification would also be to reduce the memory of the agents 72. Instead of the possibly infinite memory, one could restrict the agents to only remember the last X observations. This would create more fluctuations and could avoid the creation of small but stable minority clusters. Additionally, it is interesting to increase the number of possible colours C and thereby the number C! of different world-views. This leads to a larger variety of possible misunderstandings and the possibility of stable and partially compatible world-views.