Abstract
Particle swarm optimization (PSO) is a nature-inspired algorithm that has shown outstanding performance on many realistic problems. In the original PSO and most of its variants, all particles are treated equally, overlooking the impact of structural heterogeneity on individual behavior. Here we employ complex networks to represent the population structure of swarms and propose a selectively-informed PSO (SIPSO), in which particles choose different learning strategies based on their connections: a densely-connected hub particle gets full information from all of its neighbors, while a non-hub particle with few connections can only follow a single, best-performing neighbor. Extensive numerical experiments on widely-used benchmark functions show that our SIPSO algorithm remarkably outperforms PSO and its existing variants in success rate, solution quality, and convergence speed. We also explore the evolution process from a microscopic point of view, leading to the discovery of the different roles that particles play in optimization: the hub particles guide the optimization process in the correct directions, while the non-hub particles maintain the necessary population diversity, resulting in the optimal overall performance of SIPSO. These findings deepen our understanding of swarm intelligence and may shed light on the underlying mechanisms of information exchange in natural swarming and flocking behaviors.
Introduction
Optimization^{1,2,3} aims to seek the minimal or maximal point in the constrained parameter space of a system, which is highly challenging due to the increasing complexity of the real problems we face in modern society. To solve real-world optimization problems, researchers have learned from the collective behaviors of social animals, yielding several intelligent algorithms^{4,5,6}. Among these, particle swarm optimization (PSO), proposed by Kennedy and Eberhart^{5}, is a typical swarm-intelligence algorithm that derives its inspiration from the self-organization and adaptation seen in flocking phenomena^{7,8,9,10,11}.
In PSO, a flock of particles moves in a constrained parameter space; the particles interact with each other and update their velocities and positions according to their own and their neighbors' experiences, searching for the global optimum. Owing to its simplicity, effectiveness, and low computational cost, PSO has gained significant popularity and seen many improvements. Most studies on improving PSO fall into three categories. (1) Modifying the model coefficients. Shi and Eberhart introduced an inertia weight to reduce the restriction on velocity and better control the scope of the search^{12}. Later, they employed a fuzzy system and a stochastic mechanism to better adapt the inertia weight^{13}. Clerc and Kennedy introduced a constriction coefficient to ensure the convergence of the particles^{14}. Trelea used dynamical-system theory to analyze the PSO algorithm and derived guidelines for choosing appropriate parameters^{15}. Zhan et al. proposed an adaptive PSO in which the model coefficients vary according to evolutionary states^{16}. (2) Considering the population structure. Kennedy showed that the sociometric structure, and small-world manipulations of it, interact with the objective function to produce a significant effect on performance^{17}. Kennedy and Mendes examined the impact of topological structure in more detail, leading to the identification of superior population configurations^{18}. Liu et al. proposed the scale-free PSO (SFPSO), which employs degree-heterogeneous (scale-free) topologies and significantly improves optimization performance^{19}. (3) Altering the interaction modes. Mendes et al. revised the way each particle is influenced by its neighbors, resulting in the fully-informed PSO (FIPSO)^{20,21}, in which each particle learns from every individual in its neighborhood rather than from the single best one. The performance of FIPSO is closely related to the population structure^{22}. Liang et al. proposed the comprehensive learning PSO, which allows each dimension of a particle to learn from different neighbors^{23}. Li et al. proposed the adaptive learning PSO, in which each particle can adaptively guide its own exploration and exploitation behavior^{24}. They further proposed the self-learning PSO (SLPSO), which allows each particle to adaptively choose one of four learning strategies in different situations with respect to convergence, exploitation, exploration, and jumping out of the basins of attraction of local optima^{25}.
However, most existing PSO algorithms treat all particles equally, prompting us to explore the impact of heterogeneous sight ranges: the hub particles (leaders) have a broad view of the population, while each non-hub particle (follower) has only a single source of information. The former lets the leaders guide the optimization process well, while the latter allows the followers to move without unnecessary interference. We find that our algorithm, selectively-informed PSO (SIPSO), by taking individual heterogeneity into account, balances exploration and exploitation in the optimization process and thus achieves better performance.
In the following we will briefly introduce the PSO and its typical variants and then describe our SIPSO algorithm in detail.
GPSO & LPSO
For a minimization problem with D independent variables and an objective function f(x), the PSO algorithm represents the potential solutions by a flock of particles. Each particle i has a position x_i = [x_{i1}, x_{i2}, ..., x_{iD}] and a velocity v_i = [v_{i1}, v_{i2}, ..., v_{iD}] in the D-dimensional space. The goal is to find a position x_i of any particle i that minimizes the objective function f(x). Initially, the particles' positions and velocities are generated randomly. Then, at each time step (iteration), each particle updates its velocity and position according to the following equations^{5}:

v_i ← η [v_i + c_1 U(0, 1)(p_i − x_i) + c_2 U(0, 1)(p_{n,i} − x_i)],

x_i ← x_i + v_i,

where η = 2/|2 − φ − √(φ² − 4φ)| and φ = c_1 + c_2 > 4. Here p_i is the best historical position found by particle i, p_{n,i} is the best historical position found by i's neighbors, and c_1 and c_2 are the acceleration coefficients. U(a, b) is a random number drawn at each iteration from the uniform distribution on [a, b]. Therefore, c_1 and c_2 balance the impacts of each particle's own and its neighbors' experiences, and η indicates the learning rate. Based on previous extensive analysis^{14}, we choose the settings c_1 = c_2 = 2.05 and η = 0.7298. Previous studies^{17,18,20,21,22} have found that the interaction topology of the particles has a great influence on the final optimization results. Two versions of the canonical PSO algorithm with different topologies are most commonly used: GPSO with a fully connected network (Fig. 1(a)) and LPSO with a ring (Fig. 1(b)). GPSO converges more rapidly than LPSO yet is more susceptible to being trapped at local optima^{17}.
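The single-informed update above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are ours, not from the paper):

```python
import random

C1 = C2 = 2.05   # acceleration coefficients, phi = c1 + c2 = 4.1 > 4
ETA = 0.7298     # learning rate derived from phi via the constriction formula

def canonical_update(x, v, p_self, p_best_neighbor):
    """Single-informed update of one particle's velocity and position."""
    dim = len(x)
    new_v = [
        ETA * (v[d]
               + C1 * random.uniform(0, 1) * (p_self[d] - x[d])
               + C2 * random.uniform(0, 1) * (p_best_neighbor[d] - x[d]))
        for d in range(dim)
    ]
    new_x = [x[d] + new_v[d] for d in range(dim)]
    return new_x, new_v
```

Note that when both attractors coincide with the current position, the velocity is simply damped by the factor η < 1, which is the role of the constriction coefficient of ref. 14.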
FIPSO
In the canonical PSO, each particle is influenced by itself and by the best-performing particle in its neighborhood. This "single-informed" strategy may ignore important information from the remaining neighbors. Mendes et al. hence proposed a "fully-informed" version of PSO (FIPSO)^{20,21}, in which each particle adjusts its velocity according to the experiences of all of its neighbors:

v_i ← η [v_i + Σ_{j∈N_i} (φ/k_i) U(0, 1)(p_j − x_i)],

where N_i is the set of i's neighbors, k_i = |N_i| is the number of i's neighbors (i.e., i's degree), and p_j is the best historical position found by j. Studies^{21,22} have revealed that, with appropriate parameter settings, FIPSO can outperform the traditional PSO, but it is sensitive to alterations of the topology; in some topologies FIPSO may perform even worse than the canonical PSO.
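A Python sketch of the fully-informed rule (names are ours; φ = c_1 + c_2 = 4.1 is split evenly across a particle's k_i neighbors):

```python
import random

PHI = 4.1    # c1 + c2
ETA = 0.7298

def fully_informed_update(x, v, neighbor_bests):
    """Fully-informed update: every neighbor's best position pulls the particle."""
    k = len(neighbor_bests)   # the particle's degree k_i
    dim = len(x)
    new_v = []
    for d in range(dim):
        pull = sum((PHI / k) * random.uniform(0, 1) * (p_j[d] - x[d])
                   for p_j in neighbor_bests)
        new_v.append(ETA * (v[d] + pull))
    new_x = [x[d] + new_v[d] for d in range(dim)]
    return new_x, new_v
```

Dividing φ by k_i keeps the total attraction strength comparable to the single-informed rule regardless of how many neighbors a particle has.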
SFPSO & SFIPSO
Recently, many natural and man-made networks have been found to exhibit the scale-free property, i.e., their degree distribution follows a power law^{26,27}. Examples include neural networks^{28}, citation networks^{29}, the World Wide Web^{30}, the Internet^{31}, software engineering^{32}, and online social networks^{33}. In scale-free networks, only a few nodes are densely connected hubs and most nodes are low-degree non-hub nodes, resulting in high heterogeneity of node degrees (Fig. 1c). This discovery has triggered interest in studying the impact of underlying network structures on dynamical processes^{34,35,36,37,38,39,40} and in introducing scale-free topologies into evolutionary optimization algorithms^{19,41,42,43}. In particular, Liu et al. investigated the influence of scale-free population structure on the performance of PSO^{19}. Their results indicated that the scale-free PSO (SFPSO) outperforms the traditional GPSO and LPSO. In the following, we also compare our algorithm to the fully-informed versions of SFPSO and GPSO (called SFIPSO and GFIPSO hereafter, respectively).
SLPSO
In most traditional PSO algorithms, a single learning mode is used for all particles, which may restrict a particular particle's ability to deal with different situations. Li et al. proposed the self-learning PSO (SLPSO), which enables the particles to switch between four modes: exploitation, exploration, jumping out, and convergence^{25}. Each mode has a set of operations to update the particles' velocities and positions. A common strategy was introduced to allow each particle to adaptively choose the most suitable mode depending on the evolutionary stage and the local fitness landscape. Experimental comparisons showed that SLPSO outperforms several peer algorithms in terms of mean value, success rate, and overall ranking, especially for some complex high-dimensional functions. However, three key parameters of SLPSO must be chosen very carefully through a parameter-tuning approach, as they significantly affect the algorithm's performance. Note that in SLPSO, although each particle is able to switch between different modes, the learning strategy for choosing suitable modes is identical for all particles.
Selectively-informed PSO
The algorithms described above assume that all particles are single-informed or fully informed, or that they adopt the same strategy for switching between modes, overlooking the heterogeneity of individuals. Here we propose the selectively-informed PSO (SIPSO) algorithm, which takes the heterogeneity of individuals' learning strategies into consideration. The population structure of SIPSO is represented by a scale-free network (see Methods), and the learning strategy of each particle depends on its degree:

v_i ← the fully-informed update (as in FIPSO), if k_i > k_c,
v_i ← the single-informed (canonical) update, if k_i ≤ k_c,

where k_i is the degree of particle i and k_c is the threshold that determines whether a particle is fully or single-informed. The densely-connected hubs (k_i > k_c) are provided with more information to better lead the optimization process. The non-hub particles (k_i ≤ k_c) are less affected, so they can move through the search space with more freedom, maintaining the diversity of the population. Note that when k_c = k_min − 1, all particles are fully informed and the algorithm degenerates to SFIPSO; when k_c = k_max, all particles take the canonical learning strategy, turning the algorithm into SFPSO. Here we are interested in information selectivity, i.e., k_min − 1 < k_c < k_max. For example, in Fig. 1c, when k_c = 5 the grey nodes (particles) with degree higher than 5 are fully informed and the remaining red nodes are single-informed.
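The degree-based dispatch is the only new ingredient of SIPSO. As a sketch (function and default values follow the Methods section; the names are ours):

```python
def select_strategy(k_i, k_c, k_min=2, k_max=14):
    """Return the learning strategy of a particle with degree k_i under threshold k_c.

    k_c = k_min - 1 makes every particle fully informed (SFIPSO);
    k_c = k_max makes every particle single-informed (SFPSO);
    k_min - 1 < k_c < k_max gives the selectively-informed regime of SIPSO.
    """
    assert k_min - 1 <= k_c <= k_max
    return "fully-informed" if k_i > k_c else "single-informed"
```

With k_c = 5 as in Fig. 1c, a degree-6 hub is fully informed while a degree-2 leaf follows only its best neighbor.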
Results
Overall performance
We test the performance of our algorithm on eight widely-used benchmark functions f_{1–8} (see Methods) and compare it to seven other algorithms using three criteria: success rate, solution quality, and convergence speed (see Methods). Note that in SIPSO the optimal value of the degree threshold k_c varies across test functions. We also show the results for a fixed threshold over all the functions.
Table 1 lists the comparison of success rates. Our SIPSO algorithm shows significant advantages, achieving 99% on f_{8} and 100% on all the other functions. Even with a fixed threshold, SIPSO obtains very satisfactory success rates.
Table 2 lists the results in terms of solution quality. For each function, the best solutions are highlighted in bold, and "–" means that the corresponding algorithm failed to reach an acceptable solution even once. For functions f_{2–4} our SIPSO remarkably outperforms the other algorithms; for f_{1}, f_{5}, f_{6}, and f_{8} SIPSO ranks 2nd among all the algorithms, while for f_{7} it ranks 3rd. When the degree threshold is fixed, the solution quality still ranks in the top 3 of all the algorithms across the eight test functions.
Table 3 shows the convergence speed of each algorithm, represented by the number of steps required to reach the goal value; the smaller the number of required steps, the higher the convergence speed. The best cases are marked in bold. Our SIPSO has a relatively fast convergence speed on all the functions, ranking 2nd on f_{1}, f_{2}, f_{3}, f_{6}, and f_{8}, 3rd on f_{4} and f_{7}, and 4th on f_{5}. SFIPSO has the fastest convergence speed on f_{1}, f_{2}, f_{3}, f_{6}, and f_{8}, and GFIPSO converges fastest on f_{4}, f_{5}, and f_{7}. It is worth noting that faster convergence does not necessarily mean a better optimization trial. In fact, converging too fast can lead to prematureness, i.e., being trapped at local optima. For example, as shown in Table 2, the solution qualities of SFIPSO and GFIPSO are poor for most benchmark functions even though they converge very quickly. In the fully-informed algorithms, each particle's information is quickly transferred to all other individuals in the swarm, so the algorithms converge rapidly, resulting in prematureness. In contrast, in our SIPSO, only the hub particles are fully informed, and the many non-hub particles taking the single-informed learning strategy maintain the population diversity. Consequently, our SIPSO achieves better performance with a satisfactory convergence speed.
The impact of k_{c}
As described above, for each function there is an optimal value of the threshold k_c at which SIPSO performs best. Hence, we investigate the impact of k_c on the performance for all eight benchmark functions. The results for solution quality, success rate, and convergence speed are shown in Figs. 2 and 3. One can see that, in terms of solution quality, SFPSO (the rightmost data point) outperforms SFIPSO (the leftmost data point) on all functions except f_{5} and f_{7}, where the order is reversed. However, on all the functions except f_{7}, neither SFIPSO nor SFPSO obtains the best result; with k_c between k_min and k_max, our SIPSO achieves the best performance (Fig. 2). Similar results for the success rate are shown in Fig. 3(a): SIPSO has a high success rate on all functions given an appropriate k_c. As shown in Fig. 3(b), increasing the number of fully-informed particles can significantly improve the convergence speed, and SIPSO has a moderate convergence speed.
The microscopic point of view
To uncover the underlying mechanism of our algorithm, we explore the optimization process from a microscopic point of view. We compare SIPSO (k_min − 1 < k_c < k_max) to SFIPSO (k_c = k_min − 1) and SFPSO (k_c = k_max), all of which run on scale-free networks, thereby excluding the influence of other factors. For simplicity, in the following we present the results for function f_{1}; the results for the other functions are similar and not shown here.
First, we examine the mean fitness (F_mean) of the swarm population during an optimization process, defined as

F_mean = (1/N) Σ_{i=1}^{N} ||x_i − x_opt||,

where N is the total number of particles, x_i is the position of particle i, and x_opt = 1 is the optimum solution of f_{1}. As shown in Fig. 4(a), SFIPSO converges fastest, as each particle uses full information from all of its neighbors, but it becomes trapped at local optima in the early stage (~150 iterations). Despite their relatively slow convergence, SIPSO and SFPSO achieve higher-quality final solutions, and SIPSO is best in terms of mean fitness.
Second, we compare the population diversity of SFPSO, SFIPSO, and SIPSO, which indicates the extent of exploration during the swarm's search. The population diversity is defined as^{45}

σ = (1/N) Σ_{i=1}^{N} ||x_i − x̄||,

where N is the total number of particles and x̄ = (1/N) Σ_{i=1}^{N} x_i is the mean position (center) of the swarm. Thus, the larger σ is, the more diverse the swarm; a very small σ means that all particles are aggregated together, diminishing the capability for exploration. As shown in Fig. 4(b), the diversity of SFIPSO decreases quickly to a very small value due to the information redundancy of fully-informed learning. Consequently, SFIPSO cannot escape once it gets stuck at a local optimum. Both SFPSO and SIPSO maintain a high level of diversity during the optimization, which ensures a thorough search of the parameter space and thus improves the probability of finding the global optimum.
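The diversity measure is simple to compute from the particle positions; a minimal sketch (function names are ours):

```python
import math

def mean_position(positions):
    """x_bar: component-wise mean of all particle positions."""
    n, dim = len(positions), len(positions[0])
    return [sum(x[d] for x in positions) / n for d in range(dim)]

def diversity(positions):
    """sigma = (1/N) * sum_i ||x_i - x_bar||: mean Euclidean distance to the center."""
    center = mean_position(positions)
    return sum(math.dist(x, center) for x in positions) / len(positions)
```

A swarm collapsed onto a single point gives σ = 0, while a spread-out swarm gives a large σ, matching the interpretation above.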
Furthermore, we investigate the fitness of particles with different degrees, i.e.,

F_k = (1/N_k) Σ_{i=1}^{N} δ(k_i, k) f(x_i),

where N_k is the number of particles with degree k, and δ(k_i, k) = 1 if k_i = k and 0 otherwise. The particles in SFPSO have only one information source, which is very unstable during the optimization process, so the particles' fitness in SFPSO fluctuates violently (Fig. 5(a)). In SFIPSO, all particles are fully informed, making the algorithm converge fast but prematurely (Fig. 5(b)). Our SIPSO combines the advantages of the two algorithms: the fitness of the hub particles decreases monotonically, indicating that the hubs guide the swarm, while the non-hub particles have oscillating fitness, maintaining the necessary diversity of the swarm (Fig. 5(c)). These two different roles of the particles in SIPSO produce an appropriate trade-off between convergence speed and population diversity.
Discussion
Taking into account the heterogeneity of individual behaviors in flocking, we propose the Selectively-Informed Particle Swarm Optimization (SIPSO) algorithm. In SIPSO, the particles interact with their neighbors and change their search direction and speed by learning from their own experiences and those of their neighbors. Each particle's learning strategy depends on its degree: the hubs learn from all of their neighbors (fully informed), while each non-hub particle learns from a single, best-performing neighbor. Consequently, the hubs have a bird's-eye view of the swarm and can better lead the population, while the non-hub particles are less influenced and can search the space with more freedom, maintaining the diversity of the population.
We test the performance of SIPSO on eight benchmark functions. The results show that SIPSO has a high success rate, high solution quality, and acceptable convergence speed. We examine the optimization process from a microscopic point of view and reveal that there are indeed two different roles that the particles play in SIPSO. Moreover, our algorithm is able to balance population diversity and convergence speed during optimization, improving the overall performance in comparison with the seven other algorithms.
It is worth noting that we do not introduce adaptation into SIPSO: all parameters, including k_c, are set initially and do not change during the optimization process; instead, we discriminate between nodes with different degrees. This is in contrast to SLPSO, which adopts adaptive strategies in search of the optimum. Despite the lack of adaptation, SIPSO works very well on the benchmark test functions. This finding underlines the importance of considering individual heterogeneity in particle swarm optimization. Nevertheless, as shown in previous works (e.g., refs. 24, 25), adaptation can improve PSO's performance. It is reasonable to expect that adaptively tuning the value of k_c during the search could further improve SIPSO's performance, which deserves future investigation.
Methods
Benchmark functions
To make a comprehensive comparison and test the effectiveness of our algorithm, we designed extensive experiments. We chose eight benchmark functions (Table 4) that have been widely used^{17,18,20,21,44}. Functions f_{1}–f_{4} are unimodal and relatively easy to solve. Functions f_{5}–f_{8} are multimodal with a large number of local optima, so an algorithm genuinely risks premature convergence on them. Functions f_{6} and f_{7} are the same Griewank function with different dimensions; in fact, f_{7} is considered more difficult^{18}. Column 2 shows the formula of the fitness function. Column 3 shows the dimension D of the problem. Column 4 gives the range the variables can take. Column 5 presents the optimum values of the problems. Column 6 defines the goal value used to judge whether a run (trial) is successful.
Parameter settings
The parameters of the experiments are set as follows. The population size is 50. For each algorithm and each benchmark function, the experiment consists of 100 independent runs. The maximal number of iterations is 5000. For SFPSO, SFIPSO, and SIPSO, the scale-free network has maximal degree 14 and minimal degree 2. We generate the scale-free networks with the Barabási–Albert model^{46}, which has two main mechanisms: growth and preferential attachment. Starting with m_0 fully-connected nodes, at each time step we add a new node to the network and connect it to m existing nodes (m < m_0). The probability P_i that the new node is connected to an existing node i depends on i's degree:

P_i = k_i / Σ_j k_j,

where j runs over all the existing nodes. Here we set the parameters m_0 = 4 and m = 2.
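A minimal sketch of this construction using the standard repeated-nodes trick (our implementation, not code from the paper):

```python
import random

def barabasi_albert(n, m0=4, m=2):
    """Return the edge list of a Barabasi-Albert scale-free graph with n nodes."""
    # seed: m0 fully connected nodes
    edges = [(i, j) for i in range(m0) for j in range(i + 1, m0)]
    # each node appears in `targets` once per unit of degree, so uniform
    # sampling from it realizes degree-proportional (preferential) attachment
    targets = [node for edge in edges for node in edge]
    for new in range(m0, n):
        chosen = set()
        while len(chosen) < m:           # m distinct, degree-biased targets
            chosen.add(random.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))
    return edges
```

With n = 50, m0 = 4, and m = 2 this yields 6 + 46·2 = 98 edges and a minimal degree of 2. Note that the BA model does not fix the maximal degree; the paper's networks with maximal degree 14 were presumably selected or constrained accordingly.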
Criteria
To compare the performance of different algorithms, we use three criteria: solution quality, convergence speed, and success rate. The solution quality is the final fitness value at the end of 5000 iterations. The convergence speed is represented by the number of iterations required to reach the goal value; the larger the number of required iterations, the lower the convergence speed. The success rate is the fraction of successful runs. Both the solution quality and the convergence speed are averaged over the successful runs.
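These three criteria can be computed from per-run records as follows (a sketch; the record format is our assumption):

```python
def summarize(runs):
    """Aggregate independent runs of one algorithm on one benchmark function.

    Each run is a pair (final_fitness, iters_to_goal), with iters_to_goal
    set to None if the goal value was never reached within the iteration budget.
    Returns (success_rate, mean_quality, mean_speed), the latter two averaged
    over successful runs only (None if no run succeeded).
    """
    successes = [(f, it) for f, it in runs if it is not None]
    success_rate = len(successes) / len(runs)
    if not successes:
        return success_rate, None, None
    quality = sum(f for f, _ in successes) / len(successes)
    speed = sum(it for _, it in successes) / len(successes)
    return success_rate, quality, speed
```

This mirrors the convention in Tables 1–3: a failed run contributes to the success rate but not to the quality or speed averages.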
References
1. Holland, J. H. Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence (University of Michigan Press, Ann Arbor, 1975).
2. Glover, F. & Laguna, M. Tabu search (Springer, USA, 1999).
3. Van Laarhoven, P. J. & Aarts, E. H. Simulated annealing (Springer, Netherlands, 1987).
4. Dorigo, M., Maniezzo, V. & Colorni, A. Ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst., Man, Cybern. B, Cybern. 26, 29–41 (1996).
5. Kennedy, J. & Eberhart, R. Particle swarm optimization. Proc. IEEE Int. Conf. Neural Netw. 4, 1942–1948, Perth, WA (10.1109/ICNN.1995.488968) (1995).
6. Karaboga, D. An idea based on honey bee swarm for numerical optimization. Technical report, Erciyes University, Computer Engineering Department (2005). Available at: http://mf.erciyes.edu.tr/abc/pub/tr06_2005.pdf (Accessed: 19 December 2014).
7. Heppner, F. & Grenander, U. A stochastic nonlinear model for coordinated bird flocks. (ed. Krasner, S.) (AAAS Publications, 1990).
8. Couzin, I. D., Krause, J., Franks, N. R. & Levin, S. A. Effective leadership and decision making in animal groups on the move. Nature 433, 513–516 (2005).
9. Conradt, L., Krause, J., Couzin, I. D. & Roper, T. J. "Leading according to need" in self-organizing groups. Am. Nat. 173, 304–312 (2009).
10. Nagy, M., Ákos, Z., Biro, D. & Vicsek, T. Hierarchical group dynamics in pigeon flocks. Nature 464, 890–893 (2010).
11. Vicsek, T. & Zafeiris, A. Collective motion. Phys. Rep. 517, 71–140 (2012).
12. Shi, Y. & Eberhart, R. A modified particle swarm optimizer. IEEE World Congr. Comput. Intell., Anchorage, AK (10.1109/ICEC.1998.699146) (1998).
13. Shi, Y. & Eberhart, R. Fuzzy adaptive particle swarm optimization. CEC '01, Seoul, South Korea. IEEE Proc. Congr. Evol. Comput. 1, 101–106 (10.1109/CEC.2001.934377) (2001).
14. Clerc, M. & Kennedy, J. The particle swarm – explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 6, 58–73 (2002).
15. Trelea, I. C. The particle swarm optimization algorithm: convergence analysis and parameter selection. Inform. Process. Lett. 85, 317–325 (2003).
16. Zhan, Z. H., Zhang, J., Li, Y. & Chung, H. H. Adaptive particle swarm optimization. IEEE Trans. Syst., Man, Cybern. B, Cybern. 39, 1362–1381 (2009).
17. Kennedy, J. Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance. CEC '99, Washington, DC. IEEE Proc. Congr. Evol. Comput. 3, 1931–1938 (10.1109/CEC.1999.785509) (1999).
18. Kennedy, J. & Mendes, R. Population structure and particle swarm performance. CEC '02, Honolulu, Hawaii. IEEE Proc. Congr. Evol. Comput. 2, 1671–1676 (10.1109/CEC.2002.1004493) (2002).
19. Liu, C., Du, W. B. & Wang, W. X. Particle swarm optimization with scale-free interactions. PLoS ONE 9, e97822 (2014).
20. Mendes, R., Kennedy, J. & Neves, J. Watch thy neighbor or how the swarm can learn from its environment. IEEE Proc. Swarm Intell. Symp. 88–94 (10.1109/SIS.2003.1202252) (2003).
21. Mendes, R., Kennedy, J. & Neves, J. The fully informed particle swarm: simpler, maybe better. IEEE Trans. Evol. Comput. 8, 204–210 (2004).
22. Kennedy, J. & Mendes, R. Neighborhood topologies in fully informed and best-of-neighborhood particle swarms. IEEE Trans. Syst., Man, Cybern. C, Appl. Rev. 36, 515–519 (2006).
23. Liang, J. J., Qin, A. K., Suganthan, P. N. & Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 10, 281–295 (2006).
24. Li, C. H. & Yang, S. X. An adaptive learning particle swarm optimizer for function optimization. CEC '09, Trondheim, Norway. IEEE Proc. Congr. Evol. Comput. 381–388 (10.1109/CEC.2009.4982972) (2009).
25. Li, C. H., Yang, S. X. & Trung, T. N. A self-learning particle swarm optimizer for global optimization problems. IEEE Trans. Syst., Man, Cybern. B, Cybern. 42, 627–646 (2012).
26. Albert, R. & Barabási, A.-L. Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47–97 (2002).
27. Newman, M. E. J. The structure and function of complex networks. SIAM Rev. 45, 167–256 (2003).
28. Eguiluz, V. M., Chialvo, D. R., Cecchi, G. A., Baliki, M. & Apkarian, A. V. Scale-free brain functional networks. Phys. Rev. Lett. 94, 018102 (2005).
29. Redner, S. How popular is your paper? An empirical study of the citation distribution. Eur. Phys. J. B 4, 131–134 (1998).
30. Barabási, A.-L., Albert, R. & Jeong, H. Scale-free characteristics of random networks: the topology of the world-wide web. Physica A 281, 69–77 (2000).
31. Vázquez, A., Pastor-Satorras, R. & Vespignani, A. Large-scale topological and dynamical properties of the Internet. Phys. Rev. E 65, 066130 (2002).
32. Wen, L., Dromey, R. G. & Kirk, D. Software engineering and scale-free networks. IEEE Trans. Syst., Man, Cybern. B, Cybern. 39, 845–854 (2009).
33. Leskovec, J. & Horvitz, E. Planetary-scale views on an instant-messaging network. WWW '08, Beijing, China. ACM Proc. 17th Int. Conf. World Wide Web 915–924 (10.1145/1367497.1367620) (2008).
34. Barrat, A., Barthelemy, M. & Vespignani, A. Dynamical processes on complex networks (Cambridge University Press, 2008).
35. Song, C., Havlin, S. & Makse, H. A. Self-similarity of complex networks. Nature 433, 392–395 (2005).
36. Zhou, S. & Mondragón, R. J. The rich-club phenomenon in the Internet topology. IEEE Commun. Lett. 8, 180–182 (2004).
37. Boccaletti, S. et al. The structure and dynamics of multilayer networks. Phys. Rep. 544, 1–122 (2014).
38. Perc, M. & Szolnoki, A. Coevolutionary games – A mini review. BioSystems 99, 109–125 (2010).
39. Shen, H.-W., Cheng, X.-Q. & Fang, B.-X. Covariance, correlation matrix, and the multiscale community structure of networks. Phys. Rev. E 82, 016114 (2010).
40. Wu, Z.-X., Rong, Z. & Holme, P. Diversity of reproduction time scale promotes cooperation in spatial prisoner's dilemma games. Phys. Rev. E 80, 036106 (2009).
41. Gasparri, A., Panzieri, S., Pascucci, F. & Ulivi, G. A spatially structured genetic algorithm over complex networks for mobile robot localisation. Intelligent Service Robotics 2, 31–40 (2009).
42. Giacobini, M., Preuss, M. & Tomassini, M. Effects of scale-free and small-world topologies on binary coded self-adaptive CEA. EvoCOP '06, Budapest, Hungary. Lect. Notes Comput. Sc. 3906, 86–98 (10.1007/11730095_8) (2006).
43. Kirley, M. & Stewart, R. An analysis of the effects of population structure on scalable multiobjective optimization problems. GECCO '07, London, UK. ACM Proc. 9th Annu. Conf. Genetic and Evol. Comput. 845–852 (10.1145/1276958.1277124) (2007).
44. Tang, K. et al. Benchmark functions for the CEC'2008 special session and competition on large scale global optimization. Technical report, USTC, China (2007). Available at: http://sci2s.ugr.es/programacion/workshop/Tech.Report.CEC2008.LSGO.pdf (Accessed: 19 November 2014).
45. Deb, K. & Beyer, H. G. Self-adaptive genetic algorithms with simulated binary crossover. Evol. Comput. 9, 197–221 (2001).
46. Barabási, A.-L. & Albert, R. Emergence of scaling in random networks. Science 286, 509–512 (1999).
Acknowledgements
Y.G. and W.B.D. acknowledge the financial support from the National Natural Science Foundation of China (Grant Nos. 61201314 and 61221061). G. Y. acknowledges the financial support from NSCTA sponsored by US Army Research Laboratory under Agreement No. W911NF0920053.
Author information
Affiliations
School of Electronic and Information Engineering, Beihang University, Beijing 100191, People's Republic of China
 Yang Gao
 & Wenbo Du
Center for Complex Network Research and Department of Physics, Northeastern University, Boston, MA 02115 USA
 Gang Yan
Contributions
Y.G., W.B.D. and G.Y. designed and performed the research, analyzed the results, and wrote the paper.
Competing interests
The authors declare no competing financial interests.
Corresponding author
Correspondence to Wenbo Du.
Rights and permissions
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder in order to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/