The morphospace of language networks

What is the nature of language? How has it evolved in different species? Are there qualitative, well-defined classes of languages? Most studies of language evolution deal in one way or another with such theoretical questions, exploring the outcome of diverse forms of selection on a communication matrix that somehow optimizes communication. This framework naturally introduces networks mediating between communicating agents, but no systematic analysis of the underlying landscape of possible language graphs has been developed. Here we present a detailed analysis of network properties on a generic model of a communication code, which reveals a rather complex and heterogeneous morphospace of language graphs. Additionally, we use curated data of English words to locate and evaluate real languages within this morphospace. Our findings indicate a surprisingly simple structure in human language, unless words capable of naming any other concept are introduced in the vocabulary. These results refine, and for the first time complement with empirical data, a long-standing theoretical tradition around the framework of least-effort language.


I. INTRODUCTION
The origins of complex forms of communication, and of human language in particular, define one of the most difficult problems for evolutionary biology (Bickerton, 1992; Szathmáry and Maynard Smith, 1997; Deacon, 1998; Bickerton, 2014; Berwick and Chomsky, 2015). Language makes our species a singular one, equipped with an extraordinary means of transferring and creating a virtually infinite repertoire of sentences. Such an achievement represents a major leap over genetic information and is a crucial component of our success as a species (Suddendorf, 2013). Language is an especially remarkable outcome of the evolution of cognitive complexity (Jablonka and Szathmáry, 1995; Jablonka and Lamb, 2006) since it requires perceiving the external world in terms of objects and actions and naming them using a set of signals.
Modelling language evolution is a challenging task, given the unavoidable complexity of the problem and its multiple facets. Language evolution takes place in a given context involving ecological, genetic, cognitive, and cultural components. Moreover, language cannot be described as a separate collection of phonological, lexical, semantic, and syntactic features. All of them can be relevant and interact with each other. A fundamental issue of these studies has to do with language evolution and how to define a proper representation of language as an evolvable replicator (Christiansen et al., 2016). Despite the obvious complexities and diverse potential strategies to tackle this problem, a common feature is shared by most modelling approximations: an underlying bipartite relationship between signals (words) used to refer to a set of objects, concepts, or actions (meanings) that define the external world. Such a mapping assumes the existence of speakers and listeners, and is used in models grounded in formal language theory (Nowak et al., 2002), evolutionary game theory (Nowak et al., 1999), agent modelling (Kirby, 2001, 2002; Kirby et al., 2008; Steels, 1997, 2015), and connectionist systems (Cangelosi and Parisi, 1998).
In all these approaches, a fundamental formal model of language includes (figure 1a): i) a speaker that encodes the message, ii) a hearer that must decode it, and iii) a potentially noisy communication channel (Cover and Thomas, 1991) described by a set of probabilities of delivering the right output for a given signal. Within the theory of communication channels, key concepts such as reliability, optimality, or redundancy are of high relevance to the evolution of language.
In looking for universal rules pervading the architecture and evolution of communication systems, it is essential to consider models capable of capturing the very basic properties of language. Such a minimal toy model (Ferrer i Cancho and Solé, 2003) can be described as a set of available signals or "words", each of which might or might not name one element from the set R = {r_j, j = 1, ..., m} of objects or "meanings" existing in the world. These potential associations can be encoded by a matrix A ≡ {a_ij} such that a_ij = 1 if signal s_i names object r_j and a_ij = 0 otherwise (figure 1e-f). Following a conjecture made by George Zipf (Zipf, 1949), this model was used to test whether human language properties could be the result of a simultaneous minimization of efforts between hearers and speakers (Ferrer i Cancho and Solé, 2003). In a nutshell, if a signal in language A can name several objects in R, its degeneracy implies a large decoding effort Ω_h for the hearer. A limit case is shown in figure 1d, where one signal names all objects. Otherwise, if one (and only one) different signal exists to name each of the elements in R (figure 1c and f), the burden Ω_s falls mainly on the speaker, who must find each precise name among all those existing, while the hearer does not incur any decoding costs. Minimal effort for one of the parts implies maximal cost for the other. Zipf's conjecture suggested that a compromise between these two extremes would pervade the efficiency of human language.

FIG. 1 A toy model to explore least-effort language. a Any minimal model of communication should include a (possibly noisy) channel that connects hearers and speakers. At the heart of this channel lies a confusion matrix p(r_j|s_i) that tells the likelihood that an object is interpreted by the hearer when a signal is uttered by the speaker. In an ideal, noiseless situation we can encode these word-object associations by a matrix (b and c) such that a_ij = 1 if signal s_i names object r_j and a_ij = 0 otherwise. Such matrices naturally introduce synonymy and polysemy. They also define bipartite language networks (d-f). We study how an optimization problem posed on the communication channel is reflected in optimal languages, with extreme solutions resulting in minimal effort for a speaker (hence maximal for a hearer, d) or the other way around (f).
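As a concrete illustration, the two extreme codes of figure 1 can be written down directly as binary matrices. A minimal Python sketch (sizes and variable names are illustrative choices, not from the paper):

```python
import numpy as np

n = m = 5  # illustrative numbers of signals and objects

# Star code (figure 1d): a single signal names every object, so the
# speaker's effort is minimal and the hearer's maximal.
star = np.zeros((n, m), dtype=int)
star[0, :] = 1

# One-to-one code (figure 1f): each object has its own unambiguous
# signal, so the hearer's effort is minimal and the speaker's maximal.
one_to_one = np.eye(n, dtype=int)
```

Any other binary matrix between these two extremes encodes a valid code with some mixture of synonymy (several ones in a column) and polysemy (several ones in a row).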
The model introduced above (Ferrer i Cancho and Solé, 2003) allows us to quantify these costs explicitly and hence tackle the Zipfian least-effort principle using information theory. It does so by considering a linear 'energy' function Ω(λ) that optimal languages would minimize, and that contains both the hearer and speaker costs:

Ω(λ) = λ Ω_h + (1 − λ) Ω_s.    (3)

Here λ ∈ [0, 1] is an external metaparameter balancing the importance of both contributions. In terms of information theory, it is natural to encode Ω_s and Ω_h as entropies.
One choice is to define Ω_h as the conditional entropy that weights the errors made by the hearer, namely:

Ω_h ≡ H_m(R|S) = − ∑_{i=1}^{n} p(s_i) ∑_{j=1}^{m} p(r_j|s_i) log_m p(r_j|s_i),

where p(r_j|s_i) is the probability that object r_j was referred to when the word s_i was uttered by a speaker. Such confusion probabilities depend on the ambiguity of the signals. We can also postulate the following effort for a speaker:

Ω_s ≡ H_n(S) = − ∑_{i=1}^{n} p(s_i) log_n p(s_i),

where p(s_i) is the frequency with which the signal s_i is employed given the matrix A. To compute p(s_i) we assume that every object needs to be recalled equally often and that we choose indistinctly among synonyms for each object.
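Under these assumptions both efforts follow directly from a binary matrix A. A minimal sketch (function name and the normalisation by log m and log n, which keeps both efforts in [0, 1], are our conventions):

```python
import numpy as np

def efforts(A):
    """Hearer effort (Omega_h) and speaker effort (Omega_s) of a binary
    code matrix A (n signals x m objects), assuming every object is
    recalled equally often and synonyms are chosen uniformly."""
    n, m = A.shape
    sigma = A.sum(axis=0)                 # signals per object, sigma_j
    assert (sigma > 0).all(), "every object must be nameable"
    joint = A / (m * sigma)               # joint p(s_i, r_j)
    p_s = joint.sum(axis=1)               # signal frequencies p(s_i)
    used = p_s > 0
    cond = joint[used] / p_s[used, None]  # p(r_j | s_i)
    with np.errstate(divide="ignore", invalid="ignore"):
        h_cond = -np.where(cond > 0, cond * np.log(cond), 0.0).sum(axis=1)
    omega_h = (p_s[used] * h_cond).sum() / np.log(m)   # normalised H_m(R|S)
    omega_s = -(p_s[used] * np.log(p_s[used])).sum() / np.log(n)  # H_n(S)
    return omega_h, omega_s
```

For the one-to-one code this gives (Ω_h, Ω_s) = (0, 1), and for the star code (1, 0), matching the two extremes of figure 1.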
The global minimization of equation 3 was tackled numerically (Ferrer i Cancho and Solé, 2003) and analytically (Prokopenko et al., 2010; Salge et al., 2013). Slight variants of the global energy have also been studied, broadly reaching similar conclusions. An interesting finding is the presence of two "phases" associated with the extreme solutions shown in figures 1d and f. These two regimes were associated with rough representations of a "no-communication possible" scenario in which one signal can name any object (figure 1d) and a phase tied to animal and computer programming languages where non-ambiguous (one-to-one) mappings would be found (figure 1f). The two phases recover the ideal solutions for speakers and hearers respectively, and they are separated by an abrupt transition at a given critical value λ_c. It was conjectured that human language would exist right at this critical point.
Solutions of this linear global optimization problem have been found (Prokopenko et al., 2010; Salge et al., 2013), and they display a mixture of properties, some of which are associated with human language features while others are not. There might be a potential limitation with this approach: is the linear constraint a reasonable assumption? If no predefined coupling between Ω_h and Ω_s is introduced, the simultaneous optimization of both targets becomes a Multi-Objective (or Pareto) Optimization (MOO) problem (Seoane, 2016; Deb, 2003; Coello, 2006; Schuster, 2012). This is a much more general approach that does not make additional assumptions about the existence of a global energy such as equation 3. The solutions to MOO problems are not a single global optimum, but a collection of designs (in this case, word-object associations encoded by matrices) that constitute the optimal tradeoff between our optimization targets. This tradeoff (called the Pareto front) and its shape have recently been linked to thermodynamics, phase transitions, and critical phenomena (Seoane and Solé, 2013, 2015a,b,c; Seoane, 2016). By relaxing the assumptions concerning the energy function, a more general scenario is considered.
The Pareto front for the MOO of language networks has never been portrayed. In this paper we aim at fully exploring the space of communication networks in the speaker/hearer effort space, where the Pareto front defines one of its boundaries. It will be shown that the front matches the global minimization problem only at the critical point. But we will also study the whole space of language networks beyond the Pareto front, showing that there exists a wealth of communication codes embodied by all different binary matrices. These, as they link signals and objects, naturally define graphs with important information about how easy communication is, how words relate to each other, or how objects become linked in semantic webs as a same signal refers to many of them. All these characteristics pose interesting, alternative driving forces that may be optimized near the Pareto front or, on the contrary, might pull actual communication systems away from it.
By exploring the whole space of possibilities we are defining a morphospace of language networks. The concept of a theoretical morphospace (McGhee, 1999) was introduced within evolutionary biology (Niklas, 1997, 2004; Raup, 1965) as a systematic way of exploring all possible structures allowed to occur in a given system. This includes real (morphological) structures as well as those resulting from theoretical or computational models. Typically the morphospace is constructed in one of two different ways. One is applied to real sets of data. In this case, available morphological traits defined on each system are measured and a statistical clustering method (such as principal component analysis) is applied as a way to define the main axes and locate each system within this space (McGhee, 1999). The alternative is to use explicit parameters that define continuous axes that allow ordering all systems in a properly defined metric space. In recent years, graph morphospaces have been explored, thus showing how the concept can be generalized to the analysis of complex networks (Avena-Koenigsberger et al., 2014). In our context, the language morphospace analyzed below is shown to be unexpectedly rich. It appears partitioned into a finite set of language networks, thus suggesting archetypal classes involving distinct types of communication graphs. This also occurs within the set of optimal communication networks that define the Pareto front of the morphospace. Finally, dedicated, data-driven studies exist about different optimality aspects of language, from prosody to syntax among many others (Jaeger and Levy, 2006; Frank and Jaeger, 2008; Jaeger, 2010; Piantadosi et al., 2011; Mahowald et al., 2013). But discussion of the least-effort language model has focused on its information-theoretical characterization. The hypothesis that human language falls near the phase transition of the model has never been tested on empirical data before. We do so here using the WordNet database (Miller, 1995; Fellbaum, 1998). The previous development of the morphospace allows us not only to assess the optimality of real corpora, but also to portray some of their complex characteristics. This kind of study may become relevant for future evolutionary studies of communication systems, most of them relying on the "speaker to noisy-channel to hearer" scheme (figure 1) at the core of the least-effort model.

II. COMPLEXITY OF LANGUAGE MORPHOSPACE
In this section we characterize the morphospace of all codes allowed by our toy model. We refer to this set of possible languages as Γ. We look at it with the two least-effort target functions (Ω_h ≡ H_m(R|S) and Ω_s ≡ H_n(S)) as a reference. Therefore, it was first necessary to find its boundaries in the Ω_h − Ω_s plane, and to generate a fair sample throughout. In appendix A we discuss thoroughly how this was done. Figure 2 shows the boundaries found for our morphospace, as well as the location of some prominent solutions: i) the star graph, which minimizes the effort of a speaker and maximizes that of a hearer; ii) the one-to-one mapping, often associated with animal communication, which minimizes the effort of a hearer at the expense of a speaker's; and iii) the Pareto optimal manifold (Π_Γ) corresponding to the lower, diagonal boundary of Γ in the Ω_h − Ω_s plane. Π_Γ tells us the optimal trade-off between both targets. In appendix A we discuss how the shape of this Pareto front implies that the model indeed has a first-order phase transition, and we determine analytically that this transition contains a critical point. See (Seoane and Solé, 2013, 2015a,b,c; Seoane, 2016) for thorough discussions of the connection between the geometry of the Pareto front, phase transitions, and criticality. The criticality of this model had been suggested, but never proved analytically. Our results from appendix A and the analytical findings of (Prokopenko et al., 2010; Salge et al., 2013) imply that the Pareto front consists of all languages without synonyms.
To explore the morphospace we take a series of measurements upon the A matrices that relate to their size, network structure, or suitability as a model of actual human language. Far from smooth, simple gradients, we find a fragmented morphospace where different properties peak or fade, and in which languages are non-trivially clustered together. Such measurements were taken both on samples of languages across the morphospace and on the more restricted Pareto front. In the following, we report results for the morphospace in general. All results for the Pareto front alone can be found in appendix B.

A. Characterizing the vocabulary
First of all, we measure the effective vocabulary size (the number of signals that refer to at least one object) of the codes (L, figure 2b), which ranges from L = 1 for the star graph to L = n for the one-to-one map. Plotting L across the morphospace reveals a non-trivial structure. Codes with small L occur mostly near the star and in a narrow region adjacent to the Pareto front (marked A in figure 2b). Far apart from the front there is yet another region (marked B) with less than 30% of all available signals being used. The transition to codes that use more than 75% of available signals (central, red region in figure 2b) seems to be abrupt from wherever we approach those codes.
It is important to take the effective vocabulary size into account when measuring certain properties. Let us consider a polysemy index I_P and a synonymy index I_S, defined as:

I_P = (1/L) ∑_{i: ρ_i > 0} log_2 ρ_i,    I_S = (1/m) ∑_{j=1}^{m} log_2 σ_j,

respectively. Here σ_j is the number of signals associated with object r_j and ρ_i is the number of objects associated with signal s_i. These indexes measure the average logarithm of ρ_i and σ_j respectively, i.e. the average number of bits needed to decode an object given a signal (I_P) and the average degeneracy of choices to name a given object (I_S).
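Both indexes reduce to simple degree averages over the matrix. A sketch (the normalisation over used signals only, for I_P, follows the definitions above):

```python
import numpy as np

def poly_syn_indexes(A):
    """Polysemy index I_P (mean log2 of objects per used signal) and
    synonymy index I_S (mean log2 of signals per object) of a binary
    code matrix A."""
    rho = A.sum(axis=1)      # objects per signal, rho_i
    sigma = A.sum(axis=0)    # signals per object, sigma_j
    used = rho > 0
    L = used.sum()           # effective vocabulary size
    i_p = np.log2(rho[used]).mean() if L else 0.0
    # guard: unnamed objects (sigma_j = 0) contribute log2(1) = 0
    i_s = np.log2(np.maximum(sigma, 1)).mean()
    return i_p, i_s
```

For the star code with m objects, I_P = log2(m) and I_S = 0; for the one-to-one code both indexes vanish.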
The low-vocabulary region B consists mostly of very polysemic signals (figure 2c). But codes with small vocabularies are not necessarily very polysemic, e.g. along the Pareto front. Right next to region B, I_P drops suddenly (area C in figure 2c) and then increases steadily as we tend towards the top-right corner of Γ (where the matrix with a_ij = 1 ∀i, j sits).
Region B starts close to the star and is also associated with a large synonymy index (figure 2d). This implies that I_S increases sharply around the star as codes become less Pareto optimal. This swift increase does not happen if we start off anywhere else on the front. The condition for Pareto optimality is that codes do not have synonyms (see appendix A), so this picture indicates that Pareto optimality degrades almost uniformly anywhere but near the star. This might have evolutionary implications: languages around the B region require more contextual information to be disambiguated. That part of the morphospace might be difficult to reach, or unstable, if Pareto selective forces are at play.

B. Network structure
Words are not isolated entities within human language. Word inventories are only the first layer of language complexity. To make sense of language structure we need to consider how words interact, i.e. the patterns of connectivity associated with the underlying networks. Language networks can be defined in diverse ways (Solé, 2010) by linking words together. It was found early on that such networks are heterogeneous (the distribution of links displays very broad tails) and highly efficient in terms of navigation (Solé and Seoane, 2014). The nature of these connections and the resulting graphs have been explored in very diverse classes of systems. Even the toy model studied here has been used to gain insight into the origins of complex linguistic features such as grammar and syntax (Ferrer, 2005, 2006; Solé, 2005). A network approach allows us to look at language from a system-level perspective, beyond the statistics associated with signal inventories.
FIG. 3 Different graphs derived from the language matrix. a A Pareto optimal language contains non-synonymous signals only. Its language graph consists of isolated clusters in which each signal clusters together a series of objects. b Concepts within a cluster appear as cliques in the R-graph. c The S-graph is just a collection of isolated nodes. d Languages that are not Pareto optimal produce more interesting language graphs that might be connected (as here) or not. A connected language graph guarantees both a connected R-graph and a connected S-graph (e and f respectively).

Each code in our model defines a bipartite network whose connectivity is given by its matrix A (figures 1d-f and 3a and d). We refer to such a network as the code graph. We can derive two more networks from each code: one named the R-graph (figure 3b and e), in which objects r_j, r_j' ∈ R are connected if they are associated with one same (polysemous) signal, and another named the S-graph (figure 3c and f), in which signals s_i, s_i' ∈ S are connected if they are synonymous. Because Pareto optimal codes do not contain synonyms, their bipartite code graphs consist of disconnected components in which the i-th signal binds together ρ_i objects (figure 3a). Consequently, each Pareto optimal R-graph is a set of independent, fully connected cliques (figure 3b) and S-graphs are isolated nodes (figure 3c).
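Both derived graphs are one-mode projections of the bipartite code graph. A minimal sketch of their edge sets (representation choices are ours):

```python
from itertools import combinations

def derived_graphs(A):
    """Edge sets of the R-graph and S-graph of a binary code matrix A,
    given as a list of rows with A[i][j] == 1 iff signal i names object j."""
    n, m = len(A), len(A[0])
    r_edges, s_edges = set(), set()
    # R-graph: objects sharing a (polysemous) signal form a clique.
    for i in range(n):
        objs = [j for j in range(m) if A[i][j]]
        r_edges.update(combinations(objs, 2))
    # S-graph: signals naming one same object are synonyms.
    for j in range(m):
        sigs = [i for i in range(n) if A[i][j]]
        s_edges.update(combinations(sigs, 2))
    return r_edges, s_edges
```

A Pareto optimal code, having no synonyms, yields an empty S-graph, while each polysemous signal shows up in the R-graph as a clique of its objects.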
A first characterization of network structure is the size of its largest connected component. This is shown across the morphospace in figure 4a-c for code graphs, R-graphs, and S-graphs respectively. Regions with large connected components in their code graphs (figure 4a) largely overlap with networks with large effective vocabulary (L, figure 2b). The B region is the exception, as it displays an intermediate level of connectivity with very low L. This connectivity disappears for S-graphs in the B region, but the corresponding R-graphs remain very well connected. Hence a few signals are keeping together most of object space. Remarkably, R-graphs are very well connected throughout most of the morphospace, except for a very narrow region that extends from the one-to-one mapping along the Pareto front, more than halfway through it.
We kept track of the set of all connected components of a network, C ≡ {C_i, i = 1, ..., N_C} (with N_C the number of independent connected components), and their sizes ||C_i||. If f(||C_i||) tells us the frequency with which components of a given size show up, then the entropy H_C of this distribution conveys information about how diverse the network is. This measure is shown in figure 4d for code graphs (it is virtually the same for R- and S-graphs). H_C is small everywhere except on a broad band parallel to, and extending all along, the Pareto front. The fact that H_C is so low in most of the morphospace stems from one of three facts: i) just one connected component exists, as in most of the area with large vocabulary; ii) just a few signals make up the network, rendering all others irrelevant so that, effectively, all the features of the network can be summarized by a few archetypal graphs; iii) while a lot of signals are involved, they produce just a few different graphs. This is the case along the Pareto front (see appendix B.2).
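The component-size entropy can be computed with a single graph traversal. A sketch (the adjacency-list representation and function name are ours):

```python
from collections import Counter
from math import log

def component_entropy(adjacency):
    """Entropy H_C of the distribution of connected-component sizes.
    `adjacency` maps every node to an iterable of its neighbours."""
    seen, sizes = set(), []
    for start in adjacency:
        if start in seen:
            continue
        # depth-first search over one component
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adjacency[node]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        sizes.append(size)
    freq = Counter(sizes)          # f(||C_i||)
    total = len(sizes)
    return -sum((c / total) * log(c / total) for c in freq.values())
```

A network whose components all share one size (including a single giant component) gives H_C = 0; a mixture of component sizes gives H_C > 0.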
The band with moderate to large H_C runs parallel to the Pareto front, but a little bit inside the morphospace. This implies that, if the heterogeneity of the underlying network were a trait selected for by human languages, they would be pulled off the Pareto front. Finally, H_C is highest around region D in figure 4d, at the end of the high-entropy band closer to the one-to-one mapping.
C. Complexity from codes as a semantic network

Words, concepts, and objects in the real world constitute an abstract semantic web whose structure shall be imprinted into (or stem from) our brains (Huth et al., 2012, 2016). It is often speculated that semantic networks must be easy to navigate. This in turn relates to the presence of a small-world underlying structure (Solé and Seoane, 2014; Steyvers and Tenenbaum, 2005). Navigation efficiency relates to system-level network properties. It would be interesting to quantify this using our codes as a generative toy model.
We approached this as follows. Starting from an arbitrary signal or object, we implement a random walk moving into adjacent objects or signals. We record the nodes visited, hence generating symbolic strings associated with elements r_j ∈ R and s_i ∈ S. The network structure shall condition the frequencies f(r_j) and f(s_i) with which different objects and signals are visited. The corresponding entropies H_R and H_S will be large if R or S are evenly sampled. They will present lower values if the network introduces non-trivial sampling biases. Hence, here low entropy is a measure of non-trivial structure arising from our toy generative model. We also recorded 2-grams (couples of consecutive objects or signals during the random walk) and computed the corresponding entropies H_2R and H_2S. This procedure is limited to sampling from the connected component to which the first node (chosen at random) belongs. If, by chance, we landed in a small connected component, these entropies would be artificially low regardless of the structure that could exist elsewhere in the network. To avoid this situation we imposed that our generative model jumps randomly when an object is repeated since the last random jump, or since the start of the random walk. (We also tried interrupting the random walk when signals, instead of objects, were repeated. Results were largely the same.) These measures present a non-trivial profile across the morphospace. We appreciate two regions in which H_R drops (E and F in figure 5a). The code graphs around these areas must have some canalizing properties that break the symmetry between objects. However, the drop in entropy is of around 10% at most. (A third region with low H_R near the star graph is discussed in appendix B.3 together with the measurements along the Pareto front.)
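The walk with the restart rule can be sketched as follows (step count, seed, and the exact restart bookkeeping are our assumptions mirroring the description above):

```python
import random
from collections import Counter
from math import log

def walk_entropies(A, steps=8000, seed=0):
    """Normalised entropies (H_R, H_S) of the objects and signals visited
    by a random walk on the bipartite code graph of matrix A (list of
    rows). The walk restarts at a random used signal whenever an object
    repeats since the last restart."""
    rng = random.Random(seed)
    n, m = len(A), len(A[0])
    objs_of = [[j for j in range(m) if A[i][j]] for i in range(n)]
    sigs_of = [[i for i in range(n) if A[i][j]] for j in range(m)]
    used = [i for i in range(n) if objs_of[i]]
    f_r, f_s = Counter(), Counter()
    seen_objs = set()
    s = rng.choice(used)
    for _ in range(steps):
        f_s[s] += 1
        r = rng.choice(objs_of[s])        # signal -> adjacent object
        f_r[r] += 1
        if r in seen_objs:                # restart rule
            seen_objs = set()
            s = rng.choice(used)
        else:
            seen_objs.add(r)
            s = rng.choice(sigs_of[r])    # object -> adjacent signal

    def norm_entropy(freq, size):
        tot = sum(freq.values())
        h = -sum((c / tot) * log(c / tot) for c in freq.values())
        return h / log(size) if size > 1 else 0.0

    return norm_entropy(f_r, m), norm_entropy(f_s, n)
```

For the star code the objects are sampled almost uniformly (H_R near 1) while only one signal is ever used (H_S = 0), illustrating how the two entropies decouple.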
From figures 2b and 4a, region E has a moderately large vocabulary and connected-component size. It sits at a transition from lower values of these quantities (registered towards the front and within the B region) to the larger values found deeper inside the morphospace. Figure 4d shows how region E is located right outside the broad band with large H_C. All of this suggests that, within E, diverse networks of smaller size get connected into a large component which inherits part of the heterogeneous structure. This heterogeneity results in a bias in the sampling of objects, but not as much in the sampling of signals. The lowest H_S is registered towards the star graph instead (see appendix B.3). Note also that biases in signal sampling are larger (meaning lower H_S) throughout the morphospace; compare the scale of the color bars in figures 5a and b.
Region F sits deeper inside the morphospace, where the vocabulary size is almost the largest possible and the connected component involves most of the signals and objects. The network here is well consolidated, suggesting that the bias in object sampling comes from non-trivial topologies established through redundant paths. Interestingly, regions E and F are separated by an area (G in figure 5b) with a more homogeneous sampling of objects and a relatively heterogeneous sampling of signals. H_S within F itself is larger than in G, suggesting no remarkable bias on word sampling in F despite the bias on object sampling, and vice versa. We take all this as an example of the diversity found in the morphospace, which allows an important asymmetry between words and objects, inducing heterogeneity in one set while keeping the other homogeneous.
Figure 5c shows H_2R, the entropy of the 2-grams of objects produced by the sampling. It seems to inherit a faded version of the E region from H_R. It is also low along a band largely overlapping the one shown in figure 4d for H_C. The largest drop in H_2R happens closer to the one-to-one mapping. It makes intuitive sense that codes in this last area start consisting of networks similar to the one-to-one mapping in which extra words connect formerly isolated objects, hence resulting in a bias towards couples of objects that appear together. The entropy of word 2-grams (H_2S, not shown) is largely similar to that of H_S (figure 5b).

D. Zipf's law and other power laws
Zipf's law is one of the most notable statistical patterns in human language (Zipf, 1949). Despite important efforts (Corominas-Murtra and Solé, 2010; Corominas-Murtra et al., 2011, 2016), the reasons why natural language should converge towards this distribution of word frequencies are far from settled. Detailed research on diverse written corpora suggests that under certain circumstances (e.g. learning children, military jargon, cognitively impaired patients) the frequency of words presents a power-law distribution with a generalized exponent (Ferrer i Cancho, 2005; Baixeries et al., 2013).
In the past, different authors have studied how well the least-effort toy model can account for Zipf's distribution of words (Prokopenko et al., 2010; Salge et al., 2013; Ferrer i Cancho and Solé, 2003). Assuming that every object needs to be recalled equally often, and that whenever an object r_j is recalled we choose uniformly among all the synonymous words naming r_j, we can compute the frequency with which a word would show up given a matrix A. This is far from realistic: not all objects need to be recalled equally often, and not all names for an object are used indistinctly. This does not prevent numerical speculation about computational aspects of the model, which might also be informative about the richness of the morphospace.
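Given a code's word frequencies, their agreement with Zipf's rank-frequency law can be quantified by a Kolmogorov-Smirnov distance between cumulative distributions. A sketch of the comparison (not necessarily the exact statistic used in the paper):

```python
import numpy as np

def ks_to_zipf(freqs):
    """Kolmogorov-Smirnov distance between a rank-ordered frequency
    distribution and Zipf's law p(r) proportional to 1/r."""
    f = np.sort(np.asarray(freqs, dtype=float))[::-1]
    f = f[f > 0] / f[f > 0].sum()                 # empirical rank-frequency
    ranks = np.arange(1, len(f) + 1)
    zipf = (1.0 / ranks) / (1.0 / ranks).sum()    # normalised 1/r law
    return np.abs(np.cumsum(f) - np.cumsum(zipf)).max()
```

A distribution exactly proportional to 1/r scores zero, while a uniform distribution scores clearly above it.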
The first explorations of the model (Ferrer i Cancho and Solé, 2003) indicated that Zipf's law lies just at the transition point between the star and one-to-one codes. This suggested that self-organization of human language at the least-effort critical point could be a driving force for the emergence of Zipf's distribution in word corpora. Later on, it was shown analytically that while it is possible to find languages obeying Zipf's law at that transition, this is not the most frequent distribution among Pareto optimal languages (Prokopenko et al., 2010; Salge et al., 2013). This is consistent with the diversity that we find at the critical manifold (see appendix B). It also implies that if Pareto-optimal least effort is a driving force of language evolution, it would not be enough to constrain the word distribution to be Zipfian. Other authors (Fortuny and Corominas-Murtra, 2013) have provided mathematical arguments to expect that Zipf's law will be found right at the center of the Pareto front (with Ω_h = 1/2 = Ω_s). Again, even if human language converged to this singular point, this would still leave the word distribution unconstrained.
We built the word frequencies from each A matrix using the prescriptions just outlined (all objects are referred to equally often, all synonyms are used indistinctly). To assess how well each distribution is explained by Zipf's law, we used a Kolmogorov-Smirnov (KS) test (scores are plotted in figure 6a). The area with a better fit to Zipf is broad and stretches notably inside the morphospace, indicating that Zipf's distribution does not necessarily correlate with least effort. This area runs horizontally with values H_n(S) ∼ 0.75 and roughly H_m(R|S) ∈ (0.25, 0.75). In the best (least-effort) of cases, speakers incur costs (Ω_s ≡ H_n(S)) three times higher than hearers. Less Pareto optimal codes that achieve Zipf always have a greater cost associated with speakers too.
Following the methods in (Clauset et al., 2009), we fitted the word frequencies of each A matrix to power laws with arbitrary exponents. The KS score from figure 6b reveals an alternative region with large goodness of fit that runs parallel to the lower part of the Pareto front. However, the exponent obtained through this method (figure 6c) falls around the 1.6 − 1.8 region, far from Zipf's law. Our morphospace seems a powerful tool to plot the diverse exponents found in special written corpora (Ferrer i Cancho, 2005; Baixeries et al., 2013). This could provide insights about how the language network structure changes in those cases.
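The core of that fitting method is a maximum-likelihood estimate of the exponent. A sketch of the continuous-approximation estimator, alpha = 1 + n / sum(ln(x / x_min)); a full fit would also select x_min and validate with a KS test:

```python
import random
from math import log

def powerlaw_exponent(xs, x_min=1.0):
    """Continuous maximum-likelihood estimate of a power-law exponent
    for the tail of the sample above x_min."""
    tail = [x for x in xs if x >= x_min]
    return 1.0 + len(tail) / sum(log(x / x_min) for x in tail)

# Illustrative check with synthetic samples from p(x) ~ x^(-2.5), x >= 1,
# drawn by inverse-transform sampling: x = (1 - u)^(-1 / (alpha - 1)).
rng = random.Random(42)
samples = [(1.0 - rng.random()) ** (-1.0 / 1.5) for _ in range(50000)]
```

With tens of thousands of samples the estimate recovers the true exponent to within a few hundredths, which also illustrates why the small 200×200 matrices mentioned below limit the precision of exponent measurements.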
These numerical findings present notable evidence against least effort as an explanation of Zipf's law. Codes that are not Pareto optimal exist with a better fit to Zipf's law than least-effort languages (figure 6a), and codes along the critical manifold seem better fitted by other power laws (see appendix B.4, figure 11b and c). Two important limitations of the model should be considered. First, objects and synonyms are not used equally frequently. Introducing asymmetries (hopefully realistic ones, derived from actual word usage) could alter the balance between hearer and speaker efforts. Second, we are dealing with relatively small matrices (200×200) to make the computations tractable. Good measurements of power-law exponents demand larger matrices. Alleviating these handicaps of the model might bring back evidence supporting the least-effort principle.

III. CODE ARCHETYPES AND REAL LANGUAGES
We introduced different measurements over the matrices A of our toy model. The emerging picture, far from a smooth landscape, is that the language morphospace breaks into finite, non-trivial "archetypes". To support this, we ran additional analyses to discern the relevant dimensions of our problem. With all the measurements described above we moved into Principal Component (PC) space; 5 PCs were needed to explain 90% of the variation in the data. We then applied a k-means algorithm (Lloyd, 1982) using all PC values. For k = 5, running the algorithm several times, we converged consistently upon similar clusters that we classify as follows (figure 7, clockwise from top-left):

I Codes near the one-to-one mapping and the upper two thirds of the Pareto front. This includes the graphs with the largest H_C (figure 4d).
II Codes along a stripe parallel to the upper half of the Pareto front. This overlaps largely with the region of large H_C (figure 4d) and low H_2R (figure 5c).
III A bulk interior region consisting mostly of codes with a single connected component and a large vocabulary. It includes region F, with low H_R (figure 5a).
IV Region B from figure 2b-d, consisting of codes with large polysemy and small vocabularies. These demand exhaustive contextual cues for communication.
V Codes along the lower half of the Pareto front and a thick stripe parallel to it. This overlaps partly with the region with a good fit to power laws (figure 6b).
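The pipeline above can be sketched with numpy alone. This is an illustrative stand-in (function names ours): farthest-point seeding replaces the random initializations mentioned in the text so that the toy example is deterministic.

```python
import numpy as np

def pca_project(X, var_target=0.90):
    """Standardize the measurements and keep the leading principal
    components that jointly explain `var_target` of the variance."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_target)) + 1
    return Xs @ vecs[:, :k]

def kmeans(X, k, iters=100):
    """Lloyd's algorithm (Lloyd, 1982), with deterministic
    farthest-point seeding for this toy example."""
    centers = [X[0]]
    for _ in range(k - 1):
        dists = np.min([np.square(X - c).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(dists)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

On well-separated synthetic groups, the clustering recovers the planted structure and the projection keeps only the leading components.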
Solutions to the original least-effort problem were widely analyzed in the literature from a theoretical perspective. These studies focused on the model's phase transition (Ferrer i Cancho and Solé, 2003), on the existence of Zipf's distribution at the critical point (Prokopenko et al., 2010; Salge et al., 2013; Ferrer i Cancho and Solé, 2003; Solé and Seoane, 2014), or on mechanisms that could drive languages to this distribution (Seoane and Solé, 2015b; Fortuny and Corominas-Murtra, 2013; Ferrer i Cancho, 2005). Based on such analyses it was speculated that human language should lie at the transition point, since neither extreme is suitable to describe the flexibility of our communication systems. The one-to-one mapping, associated to animal codes, was deemed rather rigid and memory demanding. This raised the point that ambiguity would be the price to pay for a least-effort efficient language. On the other hand, the star code makes communication impossible unless all the information is contextually explicit.
The assessment of real languages using this toy model is missing from the literature. This owes, perhaps, to the difficulty of building matrices A out of linguistic corpora. WordNet (Miller, 1995; Fellbaum, 1998) contains a huge database of semantic relationships, including manually annotated relationships between words and objects or concepts. A few examples:

ape (...) 02470325 09964411 09796185
car (...) 02958343 02959942 02960501 ...
complexity (...) 04766275
rugby (...) 00470966

The parentheses stand for additional information not relevant here. Each word is associated to several codes, and each code identifies a unique, unambiguous object or concept. For example, 02959942 refers to the car of a railway while 02960501 refers to the gondola of a funicular; the word "car" appears associated to these two meanings among others. WordNet makes this information available for four separate grammatical categories: adjectives, adverbs, nouns, and verbs.
We built the corresponding A matrices out of this database and evaluated H_m(R|S) and H_n(S) for each grammatical category. All four categories contain more signals than objects, hence synonyms exist and these languages are not Pareto optimal. Theoretical models (ours, but also others) argue that synonyms should not exist in optimal codes (Salge et al., 2013; Ferrer i Cancho and Solé, 2003; Nowak et al., 1999), yet they are pervasive in everyday language. Synonymy also comes in degrees, with linguists dissenting about whether two terms name precisely the same concept. Such information is lost in our coarse mapping into binary matrices, but the analysis could be extended if A contained likelihoods a_ij ∈ [0, 1] indicating the affinity between words and concepts.
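To make the construction concrete, here is a minimal sketch. The toy word-to-concept map below is in WordNet's spirit ('auto' is a hypothetical synonym added for illustration), and the effort convention, uniform object usage with probability split among the signals naming each concept, is one common choice rather than necessarily the paper's exact weighting.

```python
import numpy as np

def build_matrix(lexicon):
    """Binary signal-object matrix A from a {word: [concept IDs]} map.
    Rows index signals (words), columns index objects (concepts)."""
    words = sorted(lexicon)
    concepts = sorted({c for ids in lexicon.values() for c in ids})
    A = np.zeros((len(words), len(concepts)), dtype=int)
    for i, w in enumerate(words):
        for c in lexicon[w]:
            A[i, concepts.index(c)] = 1
    return A, words, concepts

def efforts(A):
    """H(R|S) and H(S), assuming uniform object usage with each object
    splitting its probability among the signals that name it."""
    m = A.shape[1]
    P = A / (m * np.maximum(A.sum(axis=0), 1))   # joint p(s_i, r_j)
    p_s = P.sum(axis=1)
    H_s = -np.sum(p_s[p_s > 0] * np.log(p_s[p_s > 0]))
    H_joint = -np.sum(P[P > 0] * np.log(P[P > 0]))
    return H_joint - H_s, H_s                    # H(R|S) = H(R,S) - H(S)

# 'auto' is a hypothetical synonym added for illustration
toy = {"car": ["02958343", "02959942", "02960501"],
       "auto": ["02958343"],
       "rugby": ["00470966"]}
A, words, concepts = build_matrix(toy)
```

Under this convention the one-to-one mapping has zero hearer effort and maximal speaker effort, while the star code reverses both.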
Figure 7a shows all grammatical categories (labeled Adj, Adv, Noun, and Verb respectively) in our morphospace. While not Pareto optimal, they appear fairly close to the front, and also near the one-to-one mapping. This would suggest that human language does not depart greatly from the codes associated to other animals, thus contradicting several arguments in the least-effort literature. Also, all matrices are restricted to a small area, leaving the huge morphospace mostly unexplored.
However, the WordNet database does not contain grammatical words such as pronouns. Some proper names appear in the Noun database (e.g. Ada and Darwin), but 'she', 'he', or 'it' are not included. Any feminine proper name can be substituted by 'she', while 'it' can represent any common noun. Similarly, in English most verbs can be substituted by 'to do' or 'to be' (e.g. "She plays rugby!" becomes "Does she play rugby?" and eventually "She does!"). Appending these words to the corresponding matrices would amount to adding signals that can name almost every object. We simulated this by adding to the real matrices for nouns and verbs a single word that can name any other concept. This changed the corresponding H_m(R|S) and H_n(S) values, shifting these codes right into the central-lower part of cluster II (figure 7a, points marked Noun' and Verb' with apostrophes), near the center of the Pareto front. This suggests that grammatical words might bear all the weight in opening up the morphospace for human languages, with most semantic words conforming a not-so-outstanding network close to the one-to-one mapping and still demanding huge memory usage.
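The effect of such a universal word can be sketched on a toy one-to-one code: appending a single all-naming signal raises the hearer's ambiguity from zero while lowering the speaker's coding cost. As before, the uniform-usage weighting is an illustrative convention, not necessarily the paper's exact one.

```python
import numpy as np

def efforts(A):
    """Hearer effort H(R|S) and speaker effort H(S), assuming uniform
    object usage with probability split among the signals naming each
    object (an illustrative convention)."""
    m = A.shape[1]
    P = A / (m * np.maximum(A.sum(axis=0), 1))
    p_s = P.sum(axis=1)
    H_s = -np.sum(p_s[p_s > 0] * np.log(p_s[p_s > 0]))
    H_joint = -np.sum(P[P > 0] * np.log(P[P > 0]))
    return H_joint - H_s, H_s

n = 200
A = np.eye(n)                           # one-to-one code: zero ambiguity
A_plus = np.vstack([A, np.ones(n)])     # append one 'it'-like universal signal
h0, s0 = efforts(A)
h1, s1 = efforts(A_plus)
# the universal word makes hearing harder (h1 > h0 = 0)
# but speaking cheaper (s1 < s0 = log n)
```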

IV. DISCUSSION
The least-effort model discussed in this paper has long captured the attention of the community. It features a core element of most communication studies, namely the "coder to noisy-channel to decoder" structure found in Shannon's original paper on information theory (Shannon, 1948), as well as in more recent experiments on the evolution of languages (Kirby, 2001; Kirby et al., 2008; Steels, 2015). This toy model allows us to formulate a series of questions regarding the optimality of human language and other communication systems. These had been partly addressed numerically (Ferrer i Cancho and Solé, 2003) and analytically (Prokopenko et al., 2010; Salge et al., 2013). It was found that a first order phase transition separates the one-to-one mapping from a fully degenerate code. It was further speculated that a critical point existed at this transition, and that human language may be better described by that regime owing to the repertoire of properties of critical systems (Ferrer i Cancho and Solé, 2003). However, this hypothesis had never been confronted with empirical data, and the criticality of the phase transition was never settled either. Finally, by looking only at least-effort languages, the vast majority of codes present in the model were left unexplored.
This paper uses a formalism grounded in Pareto optimality to recover the first order phase transition of the model (Seoane and Solé, 2013, 2015b; Seoane, 2016) and to prove analytically that it indeed contains a critical point (Seoane and Solé, 2015c). Besides, the paper characterizes the very rich morphospace of communication codes beyond the optimality constraints. Finally, it addresses for the first time empirically the hypotheses about the optimality and criticality of human language within the least-effort model.
The language morphospace turns out to be surprisingly rich, far from a monotonous variation of language features. Different quantities, such as the synonymy of a code, its network structure, or its ability to serve as a good model for language (e.g. by displaying Zipf's law), present non-trivial variations across the morphospace. These quantities might or might not align with each other or with gradients towards Pareto optimality, and may hence pose new conflicting forces by which human language or other communication systems might be driven.
To portray real human languages within the least-effort formalism we resorted to the WordNet database (Miller, 1995; Fellbaum, 1998). Raw matrices extracted from this curated directory locate human language close to the one-to-one mappings proper of other animals, and in the interior of the morphospace. This would invalidate the previous hypothesis that human language lies far apart from animal communication, along the critical point of the model. But introducing grammatical particles such as the pronoun 'it' or the auxiliary form of the verb 'to do' (both missing from the WordNet database) does move human language far away from one-to-one mappings and closer to the center of the critical manifold. Both locations found for human languages (before and after adding grammatical particles) present some interesting properties, such as a large entropy of concept-cluster sizes (H_C, figure 4d). This quantity drops to zero at the Pareto front, suggesting evolutionary forces that could pull real languages away from the kind of least-effort optimality studied here.
Our results suggest a picture of human language consisting of a few referential particles operating upon a vastly larger substrate of otherwise unremarkable words. The transformative power of grammatical words is further highlighted if we consider that just one was enough to completely displace human codes into a more interesting region of the morphospace. This invites us to try more refined versions of the model in which grammatical particles are introduced with more care, e.g. based on how often pronouns substitute other words in daily language usage. It also poses interesting questions regarding whether such grammatical units suffice to trigger and sustain full-fledged language.
The WordNet database is only the most straightforward way to map human language into the model. Controlled experiments or recent neuroscientific developments (Huth et al., 2016) offer new opportunities to validate or challenge our results, or to address new questions in evolutionary or developmental linguistics. In this sense, the morphospace introduced here offers an elegant framework upon which to trace the progression of, e.g., synthetic languages grown in the lab (Kirby, 2001; Kirby et al., 2008) or in silico (Steels, 2015), or to depict other signal-object mappings found in culture or biology, such as the 'codon-amino acid' correspondence of the genetic code.
In this appendix we characterize the overall shape of our design space in target space, and the consequences this has for the model from an optimality viewpoint.
A first step is to find the extent of Γ in the Ω_h-Ω_s plane. The global minima of Ω_h and Ω_s delimit two of the boundaries of Γ. Take the matrix associated to the minimal hearer effort, A_h ≡ I_n, where I_n denotes the n × n identity matrix, so that a_ij = δ_ij (with δ_ij = 1 for i = j and zero otherwise; figure 1c). This matrix minimizes the effort for a hearer: no signal is degenerate and she does not need to struggle with ambiguity. Naturally, Ω_h(A_h) = 0, while from equation 6 Ω_s(A_h) = log_n(m). So A_h dwells on the top-left corner of the set of possible languages in target space. Consider, on the other hand, the matrix A_s with entries a_ij = δ_ik for some fixed k. Here one given signal (s_k) is used to name all existing r_j, resulting in the minimal cost for the speaker. It follows from equations 5 and 6 that Ω_h(A_s) = 1 and Ω_s(A_s) = 0, so this matrix sits on the bottom-right corner of Γ. Owing to the shape of the graph representing A_s (figure 1d), we refer to it as the star graph.
These optimal languages for one of the agents also represent the worst case for their counterpart. Hence (for n = m) no matrices lie above Ω_s = log_n(m) nor to the right of Ω_h = 1. A language with as many signals as objects and with all of its signals completely degenerate sits on the upper right corner of the corresponding space. This is encoded by a block matrix filled with ones. For simplicity, the vertical axis in all figures of this paper has been rescaled by log_m(n) so that the horizontal boundary of the set is Ω_s = 1. (This happens naturally if n = m, which we often take to be the case.) The only boundary left to ascertain is the one connecting A_h and A_s in the lower left region of target space. This constitutes the optimal tradeoff when trying to simultaneously minimize both Ω_h and Ω_s, hence it is the Pareto front (Π_Γ) of the multiobjective least-effort language problem. It can have any shape as long as it is monotonically decreasing (notably, it does not need to be differentiable or even continuous), and its shape is associated to phase transitions and critical points of the model (Seoane and Solé, 2013, 2015a,b,c; Seoane, 2016).
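The corner codes above can be checked numerically. This is a sketch under an illustrative weighting (uniform object usage, probability split among the signals naming each object); with n = m the three corners come out as (Ω_h, Ω_s) = (0, 1), (1, 0), and (1, 1).

```python
import numpy as np

def omegas(A):
    """Normalized efforts (Omega_h, Omega_s), assuming uniform object
    usage with each object splitting its probability among the signals
    that name it (an illustrative convention)."""
    n, m = A.shape
    P = A / (m * np.maximum(A.sum(axis=0), 1))
    p_s = P.sum(axis=1)
    H_s = -np.sum(p_s[p_s > 0] * np.log(p_s[p_s > 0]))
    H_joint = -np.sum(P[P > 0] * np.log(P[P > 0]))
    return (H_joint - H_s) / np.log(m), H_s / np.log(n)

n = 50
A_h = np.eye(n)                          # one-to-one mapping
A_s = np.zeros((n, n)); A_s[0, :] = 1    # star graph: one signal names everything
A_full = np.ones((n, n))                 # fully degenerate block code
```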
Prokopenko et al. (Prokopenko et al., 2010; Salge et al., 2013) computed analytically the global minimizers of equation 3. These turn out to be all matrices A that do not contain synonyms, i.e. which have just one 1 in each column. For those codes, some algebra leads to the following expressions for the target functions:

Ω_h = Σ_i (ρ_i/m) log_m(ρ_i),    Ω_s = log_n(m) (1 − Ω_h),    (A3)

where ρ_i is the number of objects named by the i-th signal. Equation A3 defines a straight line in target space (figure 2a). It can be shown that minimizers of equation 3 are always Pareto optimal (Seoane and Solé, 2013; Seoane, 2016). The opposite is not necessarily true (there might be Pareto optimal solutions that do not minimize equation 3), but the curve from equation A3 connects A_h and A_s in target space, exhausting any other possibility.
Hence, in this problem there cannot exist other Pareto optimal matrices, and equation A3 by itself constitutes the whole MOO solution.
Assuming n = m, Π_Γ is the straight line Ω_s = 1 − Ω_h (figure 2a). This implies that the global optimizers of equation 3 undergo a first order phase transition at λ = λ_c ≡ 1/2 (Seoane and Solé, 2013, 2015b; Seoane, 2016), thus confirming previous observations about the model (Prokopenko et al., 2010; Salge et al., 2013; Ferrer i Cancho and Solé, 2003). In the literature it was also speculated that this phase transition has a critical point, but this could not be confirmed. A critical point is precisely what is predicted for MOO problems whose Pareto front is a straight line, so equation A3 proves the critical nature of the system analytically. Besides, a straight Pareto front implies that any Pareto selective force will poise the system at its critical state (Seoane and Solé, 2015c).
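A quick numerical check of this statement, under the same illustrative uniform-usage convention as above: synonym-free codes drawn at random land exactly on the line Ω_s = 1 − Ω_h when n = m.

```python
import numpy as np

def omegas(A):
    """Normalized efforts, assuming uniform object usage with each object
    splitting its probability among the signals that name it."""
    n, m = A.shape
    P = A / (m * np.maximum(A.sum(axis=0), 1))
    p_s = P.sum(axis=1)
    H_s = -np.sum(p_s[p_s > 0] * np.log(p_s[p_s > 0]))
    H_joint = -np.sum(P[P > 0] * np.log(P[P > 0]))
    return (H_joint - H_s) / np.log(m), H_s / np.log(n)

rng = np.random.default_rng(0)
n = 200
deviations = []
for _ in range(25):
    A = np.zeros((n, n))
    A[rng.integers(0, n, size=n), np.arange(n)] = 1  # one 1 per column: no synonyms
    oh, os_ = omegas(A)
    deviations.append(abs(os_ - (1 - oh)))           # eq. A3 with n = m
```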
Again assuming n = m, the triangle shown in figure 2a contains all possible communication codes according to our model. For a modest n = 200 there are 2^{nm} = 2^{40000} possible codes. In section II we report a series of measurements taken on language networks throughout the morphospace. For these to be representative we need Γ to be sampled evenly across the Ω_h-Ω_s plane. Several strategies were tried to that end, such as wiring objects to signals with a low probability p, generating a few Pareto optimal codes, the star and the one-to-one mappings, and mutations and combinations of these. This approach only allowed us to sample very small and isolated regions of the morphospace. To improve on this, we implemented a genetic algorithm with N_s = 10000 matrices that proceeded until the upper-right half of a 30 × 30 grid in (Ω_h, Ω_s) ∈ [0, 1] × [0, 1] was evenly covered, with roughly 20 matrices in each square of the grid. Going beyond n = 200 = m proved computationally very costly.
This cost could be partly alleviated for Pareto optimal matrices. These are defined as languages that do not contain synonyms, which allowed a sparse encoding of the matrices. Some computations were also simplified (e.g. the costs are bounded by equation A3). Because of this, we could perform an alternative sampling of N_s = 10 000 matrices along the Pareto front with more signals and objects (up to 1 000). Different stochastic mechanisms were used to seed a similar genetic algorithm that ensured an even sample of matrices along the front. While Pareto optimal matrices always included 1 000 objects, some of the mechanisms to generate them resulted in languages with fewer signals. In the following, all quantities have been properly normalized for comparison. The results of the different measurements on Pareto optimal matrices are reported in appendix B.
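The sparse encoding can be sketched as follows: a synonym-free code is fully specified by an array assigning each object to one signal, so only the cluster sizes ρ_i matter when computing the costs. Function names are ours, and the cost expressions follow the synonym-free form discussed in this appendix.

```python
import numpy as np

def random_front_code(m, k, rng):
    """Sparse encoding of a synonym-free (Pareto optimal) code:
    an array mapping each of m objects to one of k signals."""
    assign = rng.integers(0, k, size=m)
    assign[:k] = np.arange(k)   # guarantee every signal is used at least once
    return assign

def front_costs(assign, m, n):
    """Efforts of a synonym-free code from its cluster sizes rho_i alone."""
    rho = np.bincount(assign)
    rho = rho[rho > 0]
    omega_h = np.sum((rho / m) * np.log(rho)) / np.log(m)
    omega_s = (np.log(m) / np.log(n)) * (1 - omega_h)
    return omega_h, omega_s
```

Varying k between 1 and m sweeps the encoded codes from the star graph to the one-to-one mapping along the front.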
The fact that simple recipes to build matrices (and mutations thereof) resulted in a poor sampling of our language morphospace provides relevant insight into how difficult it is to access most of Γ. In order to sample the whole space we needed non-trivial algorithms explicitly targeted at covering it entirely. If we observed actual languages in singular regions of the morphospace, we could ask what evolutionary forces brought those languages there, and suggest that more is needed than what simple rules offer for free. In section II we reported a series of measurements taken over an even sample across the morphospace. Those results are complemented here by measurements taken over a more exhaustive sample of the Pareto front, which includes larger matrices (with up to 1 000 signals and objects, as opposed to the n = 400 = m in the main text). In the following sections we analyze the same measurements of vocabulary, network structure, matrices as generative models, and goodness of fit to power laws that we analyzed above.
The critical manifold is just a straight line, which allows us to present simpler plots. Below, the horizontal axis reports the value of Ω_h ≡ H(R|S) along the front. That is, the one-to-one mapping lies at the leftmost part of each plot and the star graph at the rightmost end.

Characterizing the vocabulary
By definition, Pareto optimal languages have no synonyms, hence I_S = 0. We next report the vocabulary size (L) and the polysemy index (I_P) along the front.
Figure 8 shows that the effective vocabulary size does not decrease linearly as we proceed from the one-to-one mapping (L = n) to the star (L = 1). Furthermore, at most points along the front there seem to be several languages with the same effort for both speaker and hearer, yet with different vocabulary sizes. This indicates that there are different strategies to achieve the same degree of optimality, or that being Pareto optimal leaves the diversity of languages largely unconstrained.
Regarding polysemy, we could also expect it to build up uniformly as we approach the star code. Instead, we see that at each point along the front there are very different codes showing a range of polysemy values (figure 8, inset). The maximum of this range does grow with H_m(R|S), but we know that I_P has to be maximal and unique for the star graph. The fact that similar Pareto optimal codes present such diverse I_P (as well as L) suggests a great diversity within the critical point of the model. We will find that this is a recurrent theme of Pareto optimal languages for other measurements as well.

Network structure
We recall now the bipartite network structure (code graph) and the corresponding R- and S-graphs in object and signal space. These are naturally induced by the A matrices, as illustrated in figure 3. Associated to them, we report the size of the largest connected component (||C_1||) for each graph and the entropy of the distribution of component sizes (H_C), as introduced in section II.B.
For languages along the Pareto front, the largest connected component of the S-graph trivially contains just one signal (because, again, there are no synonyms). This implies that the largest connected components of the code and R-graphs are virtually the same. Figure 9a shows the size (normalized to the maximum value possible) of the largest component of the code graph along the front. It grows as we move from the one-to-one mapping to the star code, but this growth is again mostly non-linear, and often several possibilities coexist at each point of the front. Regarding H_C, we find no consistent pattern along the front (figure 9b). This is the measure for which we find the least correlation along any direction in Pareto optimal codes, again suggesting that the diversity of networks along the front is largely unconstrained. Notwithstanding, this variability is perhaps not so salient: H_C here is small, as in most of the morphospace (compare the scale of the color bar in panel 4d against the vertical axis of panel 9b). Moving away from the star graph, we know that several signals are involved in Pareto optimal languages (as the vocabulary size implies; figure 8), and yet H_C is kept low and relatively constant throughout. This suggests that, while many disconnected components coexist to make up a Pareto optimal language network, their sizes are similar, resulting in just a few graphs similar to each other.
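These connectivity measures reduce to standard component finding on the bipartite code graph. Below is a minimal BFS sketch (function names ours), taking H_C as the entropy of the histogram of component sizes, one reading under which H_C vanishes when all components have equal size, as happens for the one-to-one code.

```python
import numpy as np
from collections import deque

def components(A):
    """Connected component sizes of the bipartite code graph of A.
    Nodes 0..n-1 are signals, n..n+m-1 are objects; isolated nodes
    (unused words or unnamed concepts) carry no edges and are skipped."""
    n, m = A.shape
    adj = {v: set() for v in range(n + m)}
    for i, j in zip(*np.nonzero(A)):
        adj[i].add(n + j)
        adj[n + j].add(i)
    seen, sizes = set(), []
    for start in range(n + m):
        if start in seen or not adj[start]:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v] - seen:
                seen.add(w)
                queue.append(w)
        sizes.append(size)
    return sorted(sizes, reverse=True)

def H_C(sizes):
    """Entropy of the histogram of component sizes."""
    vals, counts = np.unique(sizes, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))
```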

Complexity from codes as a semantic network
We turn our attention now to language matrices as generative toy models of semantic relationships. To this end, we introduced a random walk over code graphs in section II.C. This allowed us to capture, through a series of entropies (H_R,S and H_2R,2S), whether the network structure somehow biased the sampling of signals or objects as the walker traversed the network randomly. Large entropies in the distribution of sampled objects or signals imply networks that do not induce remarkable structure, while noteworthy biases in object or signal sampling result in lower entropies than expected.
By construction, H_R must be maximal at both extremes of the front and non-trivial along it (figure 10a). In the one-to-one mapping, the same object would always be sampled repeatedly, resulting in a reset of the random walk process as described in section II.C. Because the starting point is uniformly random, so is the random walk, and H_R collapses to 1. This results in a maximal entropy over signals as well (figure 10b). At the star graph, only one signal produces a valid sample of the code graph, and again this sample is uniform over objects (resulting in H_R = 1, figure 10a), but it implies a maximally asymmetric sampling of words (H_S = 0, figure 10b). Along the front, objects group into clusters of different sizes, resulting in potentially greater biases towards some objects than others. This opens the possibility of a lower H_R, which is not always realized. As in other cases, we see that the same point along the front hosts several different language networks with diverse H_R values. The set of languages producing the most remarkable structure lies very close to the star graph (figure 10a). Overall, H_R is large along the Pareto front, as it was throughout the morphospace. The number of objects that a word links together determines how often that signal can be sampled by the random walker without resetting the process. This results in a smooth curve of decreasing entropy for H_S (figure 10b), suggesting an explanation for the area of the morphospace with lowest H_S in figure 5b, near the star graph.
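The precise walk rules are those of section II.C; the sketch below implements one plausible reading (hop object to signal to object uniformly at random, restarting at a uniform object whenever the walk would return to the object it just left), which reproduces the limits described here: H_R = H_S = 1 for the one-to-one code and H_R = 1, H_S = 0 for the star.

```python
import numpy as np

def walk_entropies(A, steps=20000, seed=0):
    """Estimate normalized H_R and H_S from a random walk on the code
    graph. Assumes every object is named by at least one signal; the
    walk restarts at a uniform object on immediate revisits."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    r_counts, s_counts = np.zeros(m), np.zeros(n)
    r = rng.integers(m)
    for _ in range(steps):
        sig = rng.choice(np.nonzero(A[:, r])[0])   # a signal naming r
        nxt = rng.choice(np.nonzero(A[sig])[0])    # an object it names
        if nxt == r:
            nxt = rng.integers(m)                  # reset the walk
        s_counts[sig] += 1
        r_counts[nxt] += 1
        r = nxt

    def H(counts, base):
        p = counts[counts > 0] / counts.sum()
        return float(-(p * np.log(p)).sum() / np.log(base))

    return H(r_counts, m), H(s_counts, n)
```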
The entropy of 2-gram objects also has to be maximal at both ends of the front (figure 10c). It remains largely unconstrained along the rest of the front, with little correlation and again large variability at any given point. The entropy of 2-gram signals again decays to 0 as the star graph is approached, but the decay is now less smooth and the range of H_2S values at a given point is larger.

Zipf and other power laws
Using the same methods as in section II.D, we computed the goodness of fit of word distributions to either Zipf's law or power laws with arbitrary exponents. One caveat is that our languages across the morphospace are relatively small (n = 400 = m). While this is partly alleviated here (thanks to languages with up to 1 000 signals), these are nevertheless meager numbers. The results in this section again mount evidence against the least-effort hypothesis as the origin of Zipf's distribution in human language, but this must be taken with extreme care given the computational limitations just mentioned.
Regarding goodness of fit to Zipf's law, along the Pareto front we find again a great variety of codes, even within single points along the critical manifold (figure 11a). This indicates, as pointed out above and already anticipated in (Prokopenko et al., 2010; Salge et al., 2013), that least effort alone would not be enough to enforce Zipf's distribution in word corpora, at least not within this very limited toy model. There is a clear minimum of KS-score (i.e. maximum fit to Zipf's distribution, figure 11a) around Ω_h ∼ 0.3 (hence Ω_s ∼ 0.7). This is close to, but not exactly at, the value Ω_h = 1/2 = Ω_s put forward in (Fortuny and Corominas-Murtra, 2013) for theoretical reasons. Also, the minimum KS-score (∼ 0.1) is larger than scores reached deeper inside the morphospace. According to this, the observation of Zipf's law in natural corpora would be evidence against the least-effort principles captured by the model.
Regarding the goodness of fit to arbitrary power laws (figure 11b), we find a shallower minimum, suggesting a broader region of interesting Pareto optimal languages. Looking at the exponents that come out of those fits (figure 11c), we find two branches as we move in the direction of increasing H_m(R|S): i) a branch of roughly constant, low exponents close to 1 (hence similar to Zipf's law), and ii) a branch of exponents that increase monotonically with H_m(R|S). It is difficult to assess which of these branches yields the lowest KS-score (best fit) in figure 11b.

Code archetypes along the Pareto front
Finally, as we did for the whole language morphospace, we analyzed possible archetypes clustering out of the measurements across the Pareto front. We moved into PC space and tried building 3 and 5 language archetypes using k-means clustering. For k = 3 we found three relatively stable clusters: i) a few codes near the one-to-one graph, ii) a few others near the star network, and iii) all remaining codes along the front. However, the boundaries between the clusters changed notably after different initializations of the algorithm, sometimes leaving the third group almost empty. With k = 5 the picture was similarly unstable. These results are very unlike the outcome for the whole morphospace. There, applying k-means several times with random initializations would consistently yield the same broad classes, which were clearly segregated across the morphospace with little overlap at their borders. Our inability to converge upon well-defined archetypes at the Pareto front is yet another indication of its huge diversity. We should also be cautious about the previous clustering of Pareto optima within groups I and V (see section III). Fortunately, those classes reach deep into the morphospace and do not seem to depend so much on the Pareto optimal solutions.

FIG. 2 Vocabulary size, polysemy, and synonymy across the language morphospace. a The space Γ that can be occupied by language networks is shown in gray. Two limit cases (the one-to-one and star graphs) are also mapped. b Effective vocabulary size is only low near the star graph (in a prominent area labeled B) and along the Pareto front. c Polysemy is large in region B and as we complete the matrix A towards the upper-right corner. d Synonymy increases uniformly as we move away from the front, except for codes in B. This makes them highly Pareto inefficient.
FIG. 3 Different graphs derived from the language matrix. a A Pareto optimal language contains non-synonymous signals only. Its language graph consists of isolated clusters in which each signal groups together a series of objects. b Concepts within a cluster appear as cliques in the R-graph. c The S-graph is just a collection of isolated nodes. d Non-Pareto-optimal languages produce more interesting language graphs that might be connected (as here) or not. A connected language graph guarantees both connected R- and S-graphs (e and f respectively).
FIG. 4 Network connectivity across the morphospace. The size of the largest connected component is shown for code graphs (a), R-graphs (b), and S-graphs (c). d Entropy of the component size distribution is large around a band that runs parallel to the Pareto front.

FIG. 5 Complexity of codes as a random generative model. a Entropy of objects as sampled by a random walker (H_R) over the language network is close to its maximum throughout the morphospace, except for two non-trivial areas labeled E and F. Whichever mechanisms give rise to the heterogeneity there, they seem to be different, since the transition between E and F is not smooth. b Entropy of signals as sampled by a random walker (H_S) is lower than its maximum across the morphospace, and the most singular areas do not correlate with the ones found for H_R. Notably, region G seems to separate E and F and contains more heterogeneous signal sampling despite the largely homogeneous object sampling. c 2-grams of objects as sampled by a random walker present a lower entropy H_2R than H_R, and only the E region seems to remain in place.

FIG. 6 Power laws from the least-effort model. a Goodness of fit of the word distribution from the toy least-effort model to Zipf's law. b Goodness of fit of the word distribution from the model to an arbitrary power law. c Exponent obtained when fitting the word distribution of the model to the arbitrary power law from panel b. In each case, the level curves indicate areas where a Kolmogorov-Smirnov test suggests a good fit.
FIG. 7 Clustering of languages across the morphospace. a k-means clustering using all principal components reveals a consistent structure in the morphospace. Five clusters are shown here. Real languages fall within cluster I, close to the one-to-one mapping proper of animal communication systems. The real matrices are marked: Adj for adjectives, Adv for adverbs, Noun for nouns, and Verb for verbs. If certain grammatical words are included (marked with an apostrophe: Noun' for nouns and Verb' for verbs), they move into cluster II and towards the center of the morphospace, relatively close to the Pareto front. b All clusters become further segregated in two-principal-component space. This space appears interrupted by a stripe along which no codes exist.
Appendix B: Complexity of language networks along the Pareto front

FIG. 8 Vocabulary size and polysemy along the Pareto front. a Codes along the Pareto front keep a relatively low vocabulary except close to the one-to-one mapping. Also, two branches seem noticeable around the middle of the front, suggesting that similar Pareto optimal values of H_m(R|S) and of H_n(S) can be achieved with differently wired codes. b A reduced vocabulary size does not result in a strictly monotonic increase of polysemy as we approach the star code. Instead, languages with similar H_m(R|S) may present different polysemy levels. The available range grows as we approach the maximally ambiguous code.

FIG. 9 Network connectivity along the Pareto front. a Along the front, the size of the largest connected component grows from 1/m to 1 as we move from the one-to-one mapping to the star graph. b The entropy of the component size distribution shows a large degree of degeneracy even at single points along the front.

FIG. 10 Complexity of codes as a random generative model along the Pareto front. a The entropy of objects as sampled by a random walker (H_R) over the language network is maximal at either end of the front and presents a minimum close to the star graph. b The entropy of signals as sampled by a random walker (H_S) decreases rather smoothly along the Pareto front as we move from the one-to-one mapping to the star. c The entropy of 2-gram objects as sampled by a random walker (H_2R) presents less structure than H_R and is still maximal at either extreme. d The entropy of 2-gram signals as sampled by a random walker (H_2S) also decreases as we move along the front, but in a less structured fashion.

FIG. 11 Power laws from the least-effort model along the Pareto front. a Goodness of fit of the word distribution from the toy least-effort model to Zipf's law along the Pareto front. b Goodness of fit of the word distribution from the model to an arbitrary power law along the Pareto front. c Exponent obtained along the Pareto front when fitting the word distribution of the model to the arbitrary power law from panel b.