Abstract
With the development of artificial intelligence, numerous researchers have been attracted to studying new heuristic algorithms and improving traditional ones. The artificial bee colony (ABC) algorithm is a swarm intelligence optimization algorithm inspired by the foraging behavior of honeybees, and it is one of the most widely applied methods for solving optimization problems. However, the traditional ABC has shortcomings such as insufficient exploitation and slow convergence. In this study, a novel variant of ABC, named the chaotic and neighborhood search-based ABC algorithm (CNSABC), is proposed. The CNSABC contains three improved mechanisms: Bernoulli chaotic mapping with a mutual exclusion mechanism, a neighborhood search mechanism with a compression factor, and sustained bees. In detail, the Bernoulli chaotic mapping with the mutual exclusion mechanism is introduced to enhance diversity and exploration ability, while the neighborhood search mechanism with a compression factor and the sustained bees are presented to enhance the convergence efficiency and exploitation capability of the algorithm. Subsequently, a series of experiments is conducted to verify the effectiveness of the three presented mechanisms and the superiority of the proposed CNSABC; the results demonstrate that CNSABC has better convergence efficiency and search ability. Finally, CNSABC is applied to two engineering optimization problems, and the experimental results show that it can produce satisfactory solutions.
Introduction
Optimization problems arise in many fields, such as engineering design, optimization of structural parameters, and financial investment1,2. To handle these problems better, a series of global optimization algorithms has been developed to complement traditional mathematical theories and solution methods3,4; they are divided into deterministic optimization algorithms and stochastic optimization algorithms5. As stochastic optimization algorithms, metaheuristic optimization algorithms offer high solution accuracy and efficiency. They mainly include the genetic algorithm (GA) inspired by biological evolution6, the simulated annealing algorithm (SA)7 and the gravitational search algorithm (GSA) inspired by physical principles8, and the particle swarm optimization (PSO) algorithm9 and artificial bee colony (ABC) algorithm10 inspired by animal population behavior. As optimization problems become increasingly complex, many metaheuristic optimization algorithms have been presented to solve large-scale global optimization (LSGO) problems, such as the wild goose algorithm (WGA)11, the African vultures optimization algorithm (AVOA)12, the dingo optimization algorithm (DOA)13, the conscious neighborhood-based crow search algorithm (CCSA)14, the starling murmuration optimizer (SMO)15, the diversity-maintained multi-trial differential evolution algorithm (DMDE)16, and the enhanced moth-flame optimization algorithm with an effective stagnation finding and replacing strategy (MFO-SFR)17.
Compared with other metaheuristic algorithms, the ABC has advantages such as few control parameters, easy implementation, and outstanding exploration capability. It performs both global and local searches during each iteration, so the probability of finding the optimal solution is greatly increased.
The ABC is a swarm intelligence algorithm that simulates the foraging behavior of bees and is widely used in fields such as PID parameter optimization, image processing, numerical optimization, and structural design. Bingul et al.18 compared PSO and ABC for finding the best parameters of a PID controller, and the robustness analysis showed that the PID controller tuned by ABC is more robust under internal and external perturbations. Öztürk et al.19 analyzed the improved ABC variants proposed for medical image processing during 2010–2020. Hussain et al.20 proposed an improved ABC for clustering high-dimensional data and showed that, by combining a new similarity measure with an optimized local search method, significant progress was achieved in searching for optimal clusters. Sagayam et al.21 proposed a hybrid one-dimensional HMM with ABC to optimize its parameters and observed state sequences, and the results showed a very low recognition error rate. Li et al.22 proposed an ABC-based structural design optimization method for fiber-reinforced plastic (FRP) vessels, and the results showed that the weight of a 32.98 m FRP fishing vessel could be reduced by 8.31%.
However, the traditional ABC has some disadvantages, such as slow convergence and easy stagnation. Therefore, many researchers have proposed improvements to enhance the convergence speed and exploitation capability of ABC. Zhang et al.23 proposed an improved ABC with a one-position inheritance (OPI) mechanism (OPIABC) to address the fact that a solution in ABC varies in only one dimension. Wang et al.24 presented a selection method based on the neighborhood radius, which improves the scout bee search phase and enhances the exploitation of ABC. Shi et al.25 introduced the concept of a queen bee to propose a new neighborhood search mechanism and improved the dimension selection strategy to switch between one-dimensional and full-dimensional search. In real bee colonies, onlooker bees and employed bees have different exploitation mechanisms, and onlooker bees choose the single best nectar source for exploitation; accordingly, Karaboga et al.26 designed a new search equation for onlooker bees, and the proposed quick artificial bee colony algorithm (qABC) simulates the behavior of onlooker bees more accurately and improves the local search capability of ABC. To enhance the global convergence speed, Gao et al.27 introduced a new search mechanism that combines a logistic chaos mechanism with opposition-based learning and controls the probability of applying the two search equations. Xiao et al.28 proposed a new adaptive neighborhood search ABC with Gaussian perturbation (NGABC), which first adjusts the neighborhood adaptively, then applies the global optimal solution to guide the search, and finally introduces a newly designed Gaussian perturbation.
To address ABC's imbalance of strong exploration but weak exploitation, Zhu et al.29 proposed the gbest-guided artificial bee colony algorithm (GABC), which adds the influence of the global optimal solution to the neighborhood search equation and improves the exploitation capability of the algorithm. Zheng et al.30 used cat chaotic mapping to increase the diversity of the initial solutions, applied differential evolution to improve the search strategy, and designed adaptive scaling factors to achieve dynamic search. To improve exploitation efficiency and convergence speed, Chouaib et al.31 proposed a multi-population ABC based on global and local optima (MPGABC), which divides the population into multiple subpopulations and introduces global and local optimal solutions into the search equation. Brajević et al.32 added shuffle mutation operators to the employed bee and onlooker bee phases, giving the algorithm a good balance between global search ability and local exploitation ability for solving integer programming and minimax problems. Zhao et al.33 proposed a novel method (QABC) with a search equation based on the quasi-affine transformation, which enhances the exploitation capability of the algorithm, and then introduced a collaborative search matrix to update the positions of the nectar sources to ensure the randomness and balance of the search. Even though these improved ABC algorithms can produce satisfying solutions to optimization problems, there is still room to improve ABC's effectiveness and efficiency (e.g., slow convergence and premature convergence).
In this study, a novel chaotic and neighborhood search-based artificial bee colony algorithm (CNSABC) is proposed. The proposed CNSABC includes three novel mechanisms: chaotic mapping with a mutual exclusion mechanism, a neighborhood search mechanism with a compression factor, and sustained bees. The chaotic mapping with the mutual exclusion mechanism is introduced to achieve better ergodicity in the solution space and enhance global exploration; the neighborhood search mechanism with a compression factor is presented to enhance convergence efficiency and local exploitation capability. A new type of bee, named the sustained bee, is proposed to improve the ability to explore the optimal solution and to avoid premature convergence to some degree. These three strategies work together to improve global exploration and local exploitation, resulting in faster convergence and solutions of decent quality for ABC. To verify the performance of CNSABC and confirm the effectiveness of the proposed mechanisms, three sets of numerical experiments are conducted on 26 selected benchmark functions. The first set compares CNSABC with ABC variants that each use a single improvement strategy; the second set compares CNSABC with five commonly used metaheuristic optimization algorithms (ABC, PSO, GWO34, WOA35, and BOA36); the last set compares CNSABC with five improved ABC algorithms from the literature (qABC, SBABC, MPGABC, GABC, and NGABC). In addition, the tension/compression spring design problem and the speed reducer design problem are used to test the ability of CNSABC to solve real engineering problems. The results verify the dominance of CNSABC in convergence speed and optimal solution search ability, which also indicates that the three proposed mechanisms play a guiding role in enhancing the ABC algorithm.
The rest of this study is arranged as follows. Section "Improved ABC algorithm" introduces the principle and pseudo-code of the ABC algorithm and the strategies of the CNSABC in detail. Section "Experimental results and analyses" presents three sets of experimental results comparing the CNSABC with other algorithms and analyzes the effect of the improvements. Section "CNSABC for solving engineering optimization problems" uses two engineering example problems to verify the practicality of CNSABC for solving practical problems. Section "Conclusions" summarizes this research and outlines some future research directions.
Improved ABC algorithm
Traditional ABC algorithm
The ABC algorithm is inspired by the foraging behavior of honeybees, which is shown in Fig. 1. In a colony, there are three main types of bees: employed bees (E), onlooker bees (O), and scout bees (S). Each bee is closely related to the location of a nectar source. Employed bees harvest nectar near the initial nectar sources (A, B) and share nectar information (EF1) within the hive (dance areas A, B); onlooker bees select some of the better nectar sources to exploit (EF2); and when a nectar source is exhausted, the bees following it transform into scout bees to find new nectar sources. The ABC algorithm is an iterative process, and its steps are as follows.
Initialization Each nectar source in the ABC is a multidimensional vector. ABC starts the search from a set of randomly distributed initial nectar sources, which are generated using Eq. (1).
in which, i and j take values in the intervals [1, SN] and [1, D], respectively; \(x_{j}^{\min }\) and \(x_{j}^{\max }\) are the lower and upper limits of the j-th dimension; SN is the total number of nectar sources; D is the number of dimensions; and rand is a uniform random number in the interval [0, 1].
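As a concrete illustration of this initialization step, a minimal Python sketch (the function and variable names, and the NumPy-based style, are ours for illustration; the paper's experiments use Matlab):

```python
import numpy as np

def init_sources(sn, d, x_min, x_max, seed=0):
    """Generate SN random nectar sources in [x_min, x_max]^D, one per row (Eq. (1))."""
    rng = np.random.default_rng(seed)
    # x_ij = x_j^min + rand * (x_j^max - x_j^min), with rand ~ U[0, 1]
    return x_min + rng.random((sn, d)) * (x_max - x_min)

sources = init_sources(sn=10, d=30, x_min=-100.0, x_max=100.0)
```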
Employed bees phase Each employed bee is associated with a nectar source (\(x_{ij}\)) and performs a neighborhood search in the vicinity of the associated source, producing a new nectar source (\(v_{ij}\)) whose location is generated using Eq. (2).
in which, i is the index of the current nectar source; k is the index of another nectar source, k ∈ [1, SN], k ≠ i; and \(\phi_{ij}\) is a random number in the interval [− 1, 1]. In this phase, the employed bee compares the original nectar source with the new one, chooses the better one to exploit, and returns to the hive to share the information about the better source with other bees.
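A sketch of this employed bee step in Python (assuming minimization and a single perturbed dimension per update, as is conventional in ABC; helper names are illustrative):

```python
import numpy as np

def employed_bee_phase(sources, fitness_fn, seed=1):
    """Eq. (2): v_ij = x_ij + phi_ij * (x_ij - x_kj), kept only if it improves (greedy)."""
    rng = np.random.default_rng(seed)
    sn, d = sources.shape
    for i in range(sn):
        k = rng.choice([s for s in range(sn) if s != i])  # partner source, k != i
        j = rng.integers(d)                               # one randomly chosen dimension
        phi = rng.uniform(-1.0, 1.0)
        v = sources[i].copy()
        v[j] = sources[i, j] + phi * (sources[i, j] - sources[k, j])
        if fitness_fn(v) < fitness_fn(sources[i]):        # greedy criterion (minimization)
            sources[i] = v
    return sources
```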
Onlooker bees phase Onlooker bees evaluate all known nectar sources and select sources for exploitation with a certain probability; the selection probability (\(p_{i}\)) of a nectar source is calculated using Eq. (3).
in which, \(fit_{i}\) is the fitness value of the i-th nectar source. A nectar source with a better fitness value is more likely to be selected; the onlooker bee then exploits this source and updates its location by Eq. (2). As with the employed bees, the better nectar source is retained based on the greedy criterion.
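This fitness-proportional selection can be sketched as follows. The 1/(1 + f) fitness transform for minimization is the usual ABC convention, stated here as an assumption since Eq. (3) is not reproduced above:

```python
import numpy as np

def abc_fitness(objective_values):
    """Usual ABC convention: fit = 1/(1+f) for f >= 0, so smaller objectives score higher."""
    f = np.asarray(objective_values, dtype=float)
    return np.where(f >= 0, 1.0 / (1.0 + f), 1.0 + np.abs(f))

def selection_probabilities(fitnesses):
    """Eq. (3): p_i is the i-th source's share of the total fitness."""
    fit = np.asarray(fitnesses, dtype=float)
    return fit / fit.sum()

p = selection_probabilities(abc_fitness([0.0, 1.0, 9.0]))
```

An onlooker bee can then be assigned to source i with probability p[i], e.g. via roulette-wheel sampling.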
Scout bee phase Employed bees and onlooker bees may keep taking nectar from a source that is no longer improving, so an upper limit is set on the number of times a nectar source can be exploited. When the limit is reached without the source being renewed, the source is abandoned, and the bee that was following it transforms into a scout bee to generate a new source using Eq. (1).
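The trial counter and abandonment logic of this phase can be sketched as follows (the counter array and `limit` name are illustrative):

```python
import numpy as np

def scout_bee_phase(sources, trials, limit, x_min, x_max, seed=2):
    """Re-initialize any source whose trial counter exceeds the limit, via Eq. (1)."""
    rng = np.random.default_rng(seed)
    sn, d = sources.shape
    for i in range(sn):
        if trials[i] > limit:
            sources[i] = x_min + rng.random(d) * (x_max - x_min)
            trials[i] = 0          # the scout's new source starts fresh
    return sources, trials
```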
The proposed CNSABC
Bernoulli chaotic mapping with mutual exclusion mechanism
The initial solution of the traditional ABC is generated using Eq. (1), where rand is a uniformly distributed pseudo-random number37,38,39,40,41. However, when solving high-dimensional problems, the initial population generated in this way is not uniform enough, so a good population for the global search is not guaranteed. In addition, the use of rand in the search process reduces the local search ability in the employed bee and onlooker bee phases. To overcome this shortcoming, chaotic mapping with a mutual exclusion mechanism is introduced to generate the initial population.
Chaotic mapping methods mainly include Logistic chaotic mapping and Bernoulli chaotic mapping42,43,44. Among them, the Logistic chaotic mapping is the most widely used. However, the Logistic chaotic mapping takes values in the intervals [0, 0.1] and [0.9, 1] with high probability, so it does not traverse the search space uniformly during global optimization and may reduce the efficiency of the algorithm. The Bernoulli chaotic mapping is uniformly distributed on [0, 1]; compared with the Logistic mapping, it has better traversal uniformity and randomness. The chaotic value distributions of the Logistic and Bernoulli mappings over \(10^{5}\) iterations are shown in Fig. 2.
The Bernoulli chaotic mapping is expressed by Eq. (4).
in which, the range of \(\beta\) is (0, 1); within this range, the system is in a chaotic state. By introducing the Bernoulli chaotic mapping with the mutual exclusion mechanism into the initial population generation equation, Eq. (1) becomes Eqs. (5) and (6). The initial individual \(x_{ij}\) is taken to be the one with the better fitness, as shown in Eq. (7). The mutual exclusion mechanism splits the search into two opposite directions, which improves the exploration capability.
The pseudo-code of Bernoulli chaotic mapping is shown in Fig. 3.
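For reference, the standard Bernoulli shift map, the form commonly written for Eq. (4) (the paper's exact parameterization may differ slightly), can be coded as:

```python
def bernoulli_map(z, beta=0.4):
    """One step of the Bernoulli shift: z <= 1-beta maps to z/(1-beta), else (z-(1-beta))/beta."""
    if z <= 1.0 - beta:
        return z / (1.0 - beta)
    return (z - (1.0 - beta)) / beta

def bernoulli_sequence(z0, n, beta=0.4):
    """Iterate the map n times from z0; values stay in [0, 1] and spread near-uniformly."""
    seq, z = [], z0
    for _ in range(n):
        z = bernoulli_map(z, beta)
        seq.append(z)
    return seq
```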
To compare diversity, initial populations of 30, 50, and 100 dimensions are generated in the value range [− 100, 100] using both the Bernoulli chaotic mapping with the mutual exclusion mechanism and plain rand, and the standard deviation of the individuals is computed in each case. The larger the standard deviation of the generated initial population, the better the initial population diversity. The standard deviation results are shown in Table 1.
Neighborhood search mechanism with compression factor
In nature, employed bees and onlooker bees play different roles in a colony: the main purpose of employed bees is to explore more nectar sources, while that of onlooker bees is to exploit known sources. However, in the traditional ABC algorithm, the same neighborhood search is used for both employed and onlooker bees to simulate nectar collection, which limits exploration and leads to insufficient exploitation. Therefore, to balance the exploration and exploitation capabilities of the algorithm, a neighborhood search mechanism with a compression factor is proposed by assigning different search mechanisms to the employed and onlooker bees. In the new mechanism, the employed bee focuses on expanding the search range and enhancing global exploration, as expressed in Eq. (8), while the onlooker bee focuses on improving exploitation and performing local search to obtain better solutions, as shown in Eq. (9).
in which, \(cp\) is the adaptive compression factor, calculated by Eq. (10), with a theoretical range of [0, 1]; C is the Cauchy distribution, calculated by Eq. (11); \(\phi_{ij}\) is a uniform random number in [− 1, 1], calculated by Eq. (12); \(\psi_{ij}^{1}\) is a uniform random number in [0, 1.5], calculated by Eq. (13); and iter is the current iteration number.
The weight \(\psi_{ij}^{2}\) influences the local search of the onlooker bees and therefore their exploitation capacity. To test the effect of the parameter \(\max \psi_{ij}^{2}\) on the search ability of the onlooker bees, tests are performed with \(\max \psi_{ij}^{2}\) = 1.5, 4, 6, and 8. The test functions are F1, F6, F15, and F25; the colony size is 20 and the number of iterations is 1000. Because of their stochastic nature, intelligent heuristic algorithms may generate better or worse solutions than those previously produced when exploring new solutions, so it is advisable to compare results statistically. Accordingly, for each test function, all algorithms are run independently 20 times; the results are shown in Table 2.
As shown in Table 2, \(\max \psi_{ij}^{2}\) = 6 is the most appropriate value in the onlooker bee search formula. Therefore, \(\psi_{ij}^{2}\) is a uniform random number in [0, 6], calculated by Eq. (14).
To ensure the randomness of exploration, \(\phi_{ij}\), \(\psi_{ij}^{1}\), and \(\psi_{ij}^{2}\) are all generated using the Bernoulli chaotic mapping. The introduction of cp and C further improves the local search capability.
Sustained bees
In the traditional ABC algorithm, the colony has three kinds of bees. Because of the scout bee mechanism, each nectar source has an upper limit of exploitation, so some bees that could still improve toward a better solution must give up exploitation once the limit is reached. Therefore, a new kind of bee is proposed that continuously exploits the current optimal nectar source without any exploitation limit, called the sustained bee. Sustained bees are influenced by the global optimal solution and develop new solutions based on the current optimal solution. The update formula of the sustained bee is shown in Eq. (15).
in which, \(x_{iter}\) is the current optimal solution.
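Since Eq. (15) is not reproduced above, the following is only a plausible sketch of a sustained bee step consistent with the description, perturbing the current best solution relative to a random partner and keeping only improvements; the exact update in the paper may differ:

```python
import numpy as np

def sustained_bee_step(best, sources, fitness_fn, seed=4):
    """Hypothetical sketch: exploit the current optimal solution x_iter with no trial limit."""
    rng = np.random.default_rng(seed)
    k = rng.integers(len(sources))                    # random partner source
    phi = rng.uniform(-1.0, 1.0, size=best.shape)     # per-dimension perturbation
    candidate = best + phi * (best - sources[k])
    # Greedy retention: the sustained bee never worsens the incumbent best.
    return candidate if fitness_fn(candidate) < fitness_fn(best) else best
```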
A proposed variant of ABC
In this study, the traditional ABC algorithm is combined with three improvements: the Bernoulli chaotic mapping with the mutual exclusion mechanism, the neighborhood search mechanism with a compression factor, and the sustained bees. This forms a novel chaotic and neighborhood search-based ABC algorithm (CNSABC). Figure 4 shows the pseudo-code of CNSABC.
Time complexity is an important tool for determining the computational complexity of an algorithm. Generally, the time complexity of an algorithm is determined by the population size, the variable dimensionality, and the fitness function. In the proposed CNSABC, the population size is SN and the dimensionality is D. Assuming that the parameter initialization time is t0 and the solution initialization time is t1, the time complexity of the initialization phase is shown in Eq. (16).
In the iterative process, the number of iterations is K, and f(D) is the time to calculate the fitness value of the optimal individual. The time to select the better individual in the colony is t2; the times to replace the previous-iteration individual in the employed bee, onlooker bee, scout bee, and sustained bee phases are t3, t4, t5, and t6, respectively; and the time to calculate the weights is t7. The time complexity of the employed bee phase is shown in Eq. (17).
In conclusion, the time complexity of the CNSABC can be calculated by Eq. (20).
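Since Eqs. (16)–(20) are not reproduced here, the dominant terms can be summarized from the quantities defined above (a reconstruction, not the paper's exact expressions):

```latex
T_{\mathrm{init}} = O\!\left(t_{0} + SN\,(t_{1} + f(D))\right), \qquad
T_{\mathrm{iter}} = O\!\left(K \cdot SN \cdot \left(f(D) + t_{2} + t_{3} + \cdots + t_{7}\right)\right)
```

so the total cost is dominated by \(O(K \cdot SN \cdot f(D))\), the same order as the traditional ABC; the three added mechanisms affect only constant factors.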
Experimental results and analyses
In this part, to verify the performance of the presented CNSABC, comprehensive experiments are conducted and analyzed based on the results obtained from 26 benchmark test functions. First, the 26 benchmark functions are presented and the strategies in Sect. "Improved ABC algorithm" are tested individually to confirm the effectiveness of their improvements. Then CNSABC is compared with five standard algorithms (ABC, PSO, GWO, WOA, and BOA) and finally with five advanced improved ABC variants (qABC, SBABC, MPGABC, GABC, and NGABC). All algorithms are coded in Matlab 2020a and run on a computer with an Intel Core i5-2400 CPU @ 3.0 GHz, 2 GB RAM, and Windows 10.
Benchmark functions
In this experiment, the 26 test functions proposed by Zhong45, Luo46, Gao47, Zhu and Kwong29, and Karaboga and Akay48 are used to test the effectiveness of the presented CNSABC. These test functions include unimodal separable (US) functions (F1, F2, and F3), unimodal non-separable (UN) functions (F4, F5, F6, F7, F8, F9, F10, F11, F12 and F26), multimodal separable (MS) functions (F13, F14, F15, F16, and F17), and multimodal non-separable (MN) functions (F18, F19, F20, F21, F22, F23, F24, and F25). The test functions are shown in Table 3. Specifically, the unimodal functions can be applied to test the exploitation capability of the algorithm and the multimodal functions can be applied to test the exploration capability.
Influence of improvement points
To determine the effectiveness of the three improvement strategies, each strategy is combined with the ABC algorithm separately to form three variants of ABC. In Table 4, "√" indicates that the improvement strategy is used in combination with ABC, and "●" indicates that the variant does not use the strategy. For fairness, the same initial parameters are used for all algorithms throughout the testing process: a population size of 40, maximum iterations K = 1000, and 40 runs for each benchmark function. The experimental results, containing the mean, standard deviation, maximum, and minimum values, are recorded in Table 5. The last row of Table 5 indicates how each algorithm compares with ABC: the symbols "+", "−", and "=" indicate better than, worse than, and equal to the compared algorithms, respectively.
By comparing the means and standard deviations in Table 5, it can be seen that the sustained bees and the Bernoulli chaotic mapping with the mutual exclusion mechanism provide limited improvement on ABC and perform poorly on the four benchmark functions F7, F11, F16, and F25; the neighborhood search mechanism with the compression factor improves ABC more strongly; and CNSABC shows stronger exploration and exploitation when the three improved strategies work simultaneously. On the benchmark functions F4, F5, F13, F14, F15, F20, F21, and F24 the results are extremely close to the theoretical optima; because the values exceed the display precision of Matlab, the decimal digits are not displayed. The performance of the algorithms on benchmark functions F5, F19, and F23 does not differ greatly. In addition, each comparison algorithm is compared with CNSABC using the nonparametric Wilcoxon test at the 5% level and the Friedman test; the Wilcoxon p-values for ABC, CABC, NABC, and SABC are 1.5856E−07, 8.3922E−09, 1.3414E−10, and 1.6690E−08, respectively, all below 5%, indicating that there are remarkable distinctions between the algorithms. The Friedman test is shown in Table 6, where Mean-rank represents the average ranking of each algorithm, and smaller values represent better algorithm performance. In conclusion, the three proposed mechanisms, namely the Bernoulli chaotic mapping with the mutual exclusion mechanism, the neighborhood search mechanism with the compression factor, and the sustained bees, improve the performance of ABC.
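The Mean-rank column of such Friedman tables can be reproduced with a short, pure-Python helper (illustrative; ties are not handled here, and the real inputs would be the per-function results from Table 5):

```python
def friedman_mean_ranks(scores):
    """scores[b][a]: result of algorithm a on benchmark b (lower is better).
    Returns each algorithm's rank averaged over benchmarks, as in the Mean-rank column."""
    n_bench, n_alg = len(scores), len(scores[0])
    rank_sums = [0.0] * n_alg
    for row in scores:
        order = sorted(range(n_alg), key=lambda a: row[a])  # best algorithm gets rank 1
        for rank, a in enumerate(order, start=1):
            rank_sums[a] += rank
    return [s / n_bench for s in rank_sums]

ranks = friedman_mean_ranks([[0.1, 0.5, 0.9], [0.2, 0.6, 0.8], [0.05, 0.4, 0.7]])
```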
Comparison with other advanced original algorithms
To verify the advantage of the CNSABC, five commonly used metaheuristic optimization algorithms, PSO, ABC, GWO, WOA, and BOA, are used for comparison. The shared parameters of the algorithms are set identically: population size N = 20, maximum iterations K = 1000, and 40 runs for each benchmark function. The other parameters are set to the values recommended in the original literature. The mean, standard deviation, minimum, and maximum values of the 40 runs are given in Table 7, where "+", "−", and "=" indicate the number of the 26 benchmark functions on which the CNSABC is better than, worse than, and equal to the other algorithms, respectively.
Specifically, for the mean, CNSABC obtains the best results on 20 of the 26 benchmark functions (F1, F2, F3, F4, F5, F8, F9, F10, F12, F13, F14, F15, F19, F20, F21, F22, F23, F24, F25 and F26); for the standard deviation, CNSABC obtains the best results on 18 functions (F1, F2, F3, F4, F5, F9, F10, F12, F13, F14, F15, F19, F20, F21, F22, F24, F25 and F26). This indicates that the CNSABC has strong exploitation capability and stability. Meanwhile, CNSABC obtains the best minimum value on 22 functions (F1, F2, F3, F4, F5, F8, F9, F10, F11, F12, F13, F14, F15, F16, F18, F19, F20, F21, F22, F23, F24 and F25). These results indicate that the CNSABC performs best among the compared algorithms.
Figure 5 shows representative convergence curves for a subset of the 26 benchmark functions (F1, F3, F6, F10, F13, F15, F19, and F25)49. To allow a more intuitive comparison, the y-axis of the convergence curves is the logarithm of the fitness, except for function F19.
From the results in Table 7, it can be seen that the CNSABC demonstrates excellent performance in the tests of F2, F3, F4, F9, F12, F19, F25, and F26; however, it performs worst in the tests of F16 and F17.
In Fig. 5, the convergence curves show different characteristics for different search strategies: one type converges slowly from the beginning of the iteration, and the other shows a cliff-like decline during the convergence process. The first kind of curve appears mainly on the unimodal benchmark functions (F1–F14), and the second mainly on the multimodal benchmark functions (F15–F25). Across the different types of benchmark functions, the CNSABC performs excellently in both convergence accuracy and the convergence process. Most convergence curves are of the first kind, reflecting that the CNSABC achieves a good balance between exploitation and exploration. Each comparison algorithm is compared with CNSABC using the nonparametric Wilcoxon test at the 5% level and the Friedman test. The Wilcoxon p-values for PSO, GWO, WOA, BOA, and ABC are 7.2866E−04, 5.4838E−11, 3.4412E−11, 5.3564E−15, and 1.2364E−07, respectively, all below 5%, indicating that there are remarkable distinctions between the algorithms. The Friedman test can be seen in Table 8, where Mean-rank represents the average rank of each algorithm; the smaller the value, the better the performance of the algorithm.
Comparison with other improved ABC algorithms
In this part, the performance of the CNSABC is compared with that of other improved ABC algorithms, including qABC, SBABC, MPGABC, GABC, and NGABC, on the 26 benchmark functions. To be fair, the algorithms share the same parameter settings: population size N = 20, maximum iterations K = 1000, and 40 runs for each benchmark function. The experimental results are recorded in Table 9, containing the mean, standard deviation, minimum, and maximum values of the 40 runs. In the last row of Table 9, "+", "−", and "=" indicate the number of benchmark functions on which the CNSABC is better than, worse than, and equal to the other algorithms, respectively. In Table 9, in terms of the mean, the CNSABC obtains the best results on 21 of the 26 benchmark functions (F2, F3, F4, F5, F6, F8, F9, F10, F11, F12, F13, F14, F15, F18, F19, F20, F21, F22, F23, F24 and F25). In terms of the minimum value, the CNSABC obtains the best results on 22 functions (F2, F3, F4, F5, F6, F8, F9, F10, F11, F12, F13, F14, F15, F16, F18, F19, F20, F21, F22, F23, F24 and F25). The results indicate that CNSABC performs very well in terms of convergence accuracy. Figure 6 shows representative convergence curves for a subset of the 26 benchmark functions (F2, F3, F6, F10, F13, F15, F19, and F25).
In Table 9, the CNSABC is stronger than the other improved ABC algorithms on both unimodal and multimodal benchmark functions, which also reflects that the CNSABC achieves a good balance between exploration and exploitation.
In Fig. 6, the CNSABC shows outstanding superiority over the other improved ABC algorithms. Most of its convergence curves show a slow decreasing trend from the beginning of the iterations, but its fitness is consistently better than that of the other algorithms, which indicates that the CNSABC has strong exploration and exploitation capability. This is attributed to the introduction of the three mechanisms. Each comparison algorithm is compared with CNSABC using the nonparametric Wilcoxon test at the 5% level and the Friedman test. The Wilcoxon p-values for qABC, SBABC, MPGABC, GABC, and NGABC are 1.0921E−09, 6.2631E−10, 7.0176E−10, 1.8958E−05, and 1.2600E−02, respectively. Moreover, the better results of CNSABC are also demonstrated by the Friedman test in Table 10.
CNSABC for solving engineering optimization problems
Tension/compression spring design optimization problem
The main objective of this engineering problem is to minimize the mass of the tension/compression spring. The optimization constraints of this problem are described as follows:

(1) Shear stress.

(2) Surge frequency.

(3) Minimum deflection.

The schematic diagram of the spring is exhibited in Fig. 7.
This problem has three variables: wire diameter (d), mean coil diameter (D), and the number of active coils (P). The mathematical model is described as follows:
Consider:
Minimize:
Subject to:
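The Consider/Minimize/Subject-to expressions are not reproduced above; the standard formulation of this benchmark from the engineering optimization literature, together with a simple static penalty of the kind the comparison uses, can be sketched as:

```python
def spring_objective(x):
    """Standard model: minimize spring weight, x = (wire dia. d, coil dia. D, active coils P)."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x):
    """Each g(x) <= 0 must hold: deflection, shear stress, surge frequency, outer diameter."""
    x1, x2, x3 = x
    return [
        1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4),
        (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
        + 1.0 / (5108.0 * x1 ** 2) - 1.0,
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]

def penalized_objective(x, penalty=1e6):
    """Static penalty: infeasibility is added to the objective, so any minimizer can be used."""
    violation = sum(max(0.0, g) ** 2 for g in spring_constraints(x))
    return spring_objective(x) + penalty * violation
```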
For fairness, CNSABC uses the same penalty function as the other algorithms; the results are shown in Table 11. Table 12 shows the mean, standard deviation, minimum, and maximum values of the 10 CNSABC runs. Figure 8 shows the fitness convergence curve of CNSABC on the tension/compression spring design problem. The best solution is obtained by CNSABC at design variables \(\vec{x} = [x_{1} \, x_{2} \, x_{3} ]\) with \(f(\vec{x}) = 0.012192037027776\). In solving the tension/compression spring design problem, the results show that, compared with EPO, SHO, GWO, MVO, SCA, EPO, DE, ES, GA, RO, improved HS, HSCA, CB-ABC, and I-ABC greedy, the optimal weight obtained by CNSABC improved by 3.6735%, 3.8028%, 3.8356%, 4.87755%, 4.0727%, 3.6728%, 3.7739%, 3.8559%, 4.0630%, 3.8392%, 3.7770%, 3.3734%, 3.3734%, and 3.3734%, respectively. CNSABC performs better than the other algorithms.
Speed reducer design optimization problem
The main objective of this engineering problem is to minimize the mass of the reducer as much as possible. There are 7 design variables in this model, which are the face width (b), the tooth die (m), the number of pinion teeth (p), the length of the first and second shaft between the bearings (l1), (l2), the diameter of the first shaft (d1) and the diameter of the second shaft (d2). The design variables of the reducer are reflected in Fig. 9. The mathematical model can be described as:
Consider:
Minimize:
Subject to:
in which:
For fairness, CNSABC uses the same penalty function as the other algorithms; the results of CNSABC and the other algorithms are displayed in Table 13 (refs. 50,51). Table 14 reports the mean, standard deviation, minimum and maximum values over the 10 runs of CNSABC. Figure 10 shows the fitness convergence curve of CNSABC on the speed reducer design problem. The best solution is obtained by CNSABC at design variables \(\vec{x} = [x_{1} \, x_{2} \, x_{3} \, x_{4} \, x_{5} \, x_{6} \, x_{7} ]\) with objective function \(f(\vec{x}) = 2994.534574\). For the optimal value obtained on the speed reducer design problem, CNSABC improves over SHO, GWO, PSO, MVO, SCA, GSA, GA, AO, AOA, HS, FA, HSCA, CB-ABC and I-ABC greedy by 0.13605%, 0.2271%, 0.3757%, 0.2816%, 1.1909%, 1.8566%, 2.3828%, 0.4409%, 0.1149%, 1.1400%, 0.5205%, 1.3358e−09, 1.3358e−09 and 2.2041e−08, respectively. CNSABC thus performs better than the other algorithms.
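As with the spring problem, the elided objective can be sketched from the standard speed reducer benchmark formulation (reducer mass as a function of the seven design variables). The design vector below is a near-optimal point commonly reported in the literature, not the paper's own solution, and the eleven inequality constraints of the benchmark are omitted for brevity.

```python
def reducer_mass(x):
    # face width, module, pinion teeth, shaft lengths l1/l2, shaft diameters d1/d2
    b, m, p, l1, l2, d1, d2 = x
    return (0.7854 * b * m ** 2 * (3.3333 * p ** 2 + 14.9334 * p - 43.0934)
            - 1.508 * b * (d1 ** 2 + d2 ** 2)
            + 7.4777 * (d1 ** 3 + d2 ** 3)
            + 0.7854 * (l1 * d1 ** 2 + l2 * d2 ** 2))

# A near-optimal design from the literature; its mass is close to the
# value of about 2994.5 found by CNSABC.
x = [3.5, 0.7, 17, 7.3, 7.715, 3.3505, 5.2867]
print(round(reducer_mass(x), 1))
```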
Conclusions
In this study, a chaotic and neighborhood search-based ABC algorithm (CNSABC) is presented to overcome the shortcomings of the traditional ABC in solving optimization problems. Firstly, Bernoulli chaotic mapping with a mutual exclusion mechanism is proposed to increase population diversity and strengthen the global exploration capability. Secondly, a neighborhood search mechanism with a compression factor and sustained bees are presented to improve the local exploration and exploitation capability, and further to avoid premature convergence. Subsequently, three groups of simulation experiments based on 26 benchmark functions are conducted to compare CNSABC with eight existing variants of ABC and five commonly used metaheuristic optimization algorithms. The experimental results, composed of the "Mean", "Std.", "Max" and "Min" values over the 26 benchmark functions, verify the dominance of CNSABC in searching for the optimal solution. In detail, the overall performance of CNSABC is generally superior to PSO, ABC, GWO, WOA and BOA on 21, 26, 23 and 26 out of the 26 functions. For the five variants of ABC, CNSABC outperforms qABC, SBABC, MPGABC, GABC and NGABC on 25, 25, 25, 22 and 24 of the 26 benchmark functions, respectively. Finally, CNSABC is applied to two engineering examples, and the experimental results show that it can effectively solve practical application problems.
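The chaotic initialization summarized above can be illustrated with a generic Bernoulli-type map. This is only a sketch of the idea: the paper's exact map parameters and its mutual exclusion rule are not reproduced here, so the shift-map form x_{k+1} = 2·x_k mod 1, the seed value, and the population sizes are all assumptions.

```python
def bernoulli_sequence(x0, n):
    """Generate n values of a Bernoulli shift map in (0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        x = (2 * x) % 1.0  # assumed map form; the paper's variant may differ
        seq.append(x)
    return seq

def chaotic_population(size, dim, lo, hi, x0=0.177):
    """Map one chaotic sequence onto [lo, hi]^dim to seed the colony."""
    vals = bernoulli_sequence(x0, size * dim)
    return [[lo + v * (hi - lo) for v in vals[i * dim:(i + 1) * dim]]
            for i in range(size)]

pop = chaotic_population(size=5, dim=3, lo=-100.0, hi=100.0)
print(len(pop), len(pop[0]))  # 5 3
```

Compared with uniform random initialization, a chaotic sequence spreads the initial food sources more evenly across the search space, which is the diversity benefit the first mechanism targets.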
Although the proposed CNSABC achieves excellent results in terms of exploitation capability and local exploration capability, research on CNSABC is still at an initial stage, and several problems need further study, such as its relatively low computational efficiency. Future work will focus on further enhancing the algorithm's efficiency and exploring more directions for improvement. For example, search strategies can be borrowed from and combined with other algorithms, and the application can be extended to more practical problems, such as PID parameter optimization and parameter search combined with neural networks.
Data availability
The datasets used during the current study are available from the corresponding author on reasonable request.
References
Latif, M. A. & Saka, M. P. Optimum design of tied-arch bridges under code requirements using enhanced artificial bee colony algorithm. Adv. Eng. Softw. 135, 102685 (2019).
Taib, H. & Bahreininejad, A. Data clustering using hybrid water cycle algorithm and a local pattern search method. Adv. Eng. Softw. 153, 102961 (2021).
Sun, S. et al. A new hybrid optimization ensemble learning approach for carbon price forecasting. Appl. Math. Model. 97, 182–205 (2021).
Abd Elaziz, M., Yousri, D. & Mirjalili, S. A hybrid Harris hawks-moth-flame optimization algorithm including fractional-order chaos maps and evolutionary population dynamics. Adv. Eng. Softw. 154, 102973 (2021).
Cuevas, E. et al. Fast multi-feature image segmentation. Appl. Math. Model. 90, 742–757 (2021).
Booker, L. B., Goldberg, D. E. & Holland, J. H. Classifier systems and genetic algorithms. Artif. Intell. 40(1), 235–282 (1989).
Metropolis, N. et al. Equation of state calculations by fast computing machines. J. Chem. Phys. 21(6), 1087–1092 (1953).
Rashedi, E., Nezamabadi-Pour, H. & Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009).
Kennedy, J., Eberhart, R. Particle swarm optimization. In: Proceedings of ICNN'95-International Conference on Neural Networks. IEEE, vol. 4, pp. 1942–1948 (1995).
Karaboga, D. An idea based on honey bee swarm for numerical optimization. Technical report-tr06, Erciyes university, engineering faculty, computer engineering department (2005).
Ghasemi, M. et al. Wild Geese Algorithm: A novel algorithm for large scale optimization based on the natural life and death of wild geese. Array 11, 100074 (2021).
Abdollahzadeh, B., Gharehchopogh, F. S. & Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 158, 107408 (2021).
Peraza-Vázquez, H. et al. A bio-inspired method for engineering design optimization inspired by dingoes hunting strategies. Math. Probl. Eng. https://doi.org/10.1155/2021/9107547 (2021).
Zamani, H., Nadimi-Shahraki, M. H. & Gandomi, A. H. CCSA: Conscious neighborhood-based crow search algorithm for solving global optimization problems. Appl. Soft Comput. 85, 105583 (2019).
Zamani, H., Nadimi-Shahraki, M. H. & Gandomi, A. H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 392, 114616 (2022).
Nadimi-Shahraki, M. H. & Zamani, H. DMDE: Diversity-maintained multi-trial vector differential evolution algorithm for non-decomposition large-scale global optimization. Expert Syst. Appl. 198, 116895 (2022).
Nadimi-Shahraki, M. H. et al. MFO-SFR: An enhanced moth-flame optimization algorithm using an effective stagnation finding and replacing strategy. Mathematics 11(4), 862 (2023).
Bingul, Z. & Karahan, O. Comparison of PID and FOPID controllers tuned by PSO and ABC algorithms for unstable and integrating systems with time delay. Optim. Control Appl. Methods 39(4), 1431–1450 (2018).
Öztürk, Ş., Ahmad, R. & Akhtar, N. Variants of Artificial Bee colony algorithm and its applications in medical image processing. Appl. Soft Comput. 97(A), 106799 (2020).
Hussain, S. F., Pervez, A. & Hussain, M. Co-clustering optimization using Artificial Bee Colony (ABC) algorithm. Appl. Soft Comput. 97(B), 106725 (2020).
Sagayam, K. M. & Hemanth, D. J. ABC algorithm based optimization of 1-D hidden Markov model for hand gesture recognition applications. Comput. Ind. 99, 313–323 (2018).
Li, K. et al. Research on structural optimization method of FRP fishing vessel based on artificial bee colony algorithm. Adv. Eng. Softw. 121, 250–261 (2018).
Zhang, X. & Yuen, S. Y. Improving artificial bee colony with one-position inheritance mechanism. Memet. Comput. 5(3), 187–211 (2013).
Wang, H. et al. Improving artificial bee colony algorithm using a new neighborhood selection mechanism. Inf. Sci. 527, 227–240 (2020).
Shi, Y. J. et al. An improved artificial bee colony and its application. Knowl.-Based Syst. 107, 14–31 (2016).
Karaboga, D. & Gorkemli, B. A quick artificial bee colony (qABC) algorithm and its performance on optimization problems. Appl. Soft Comput. 23, 227–238 (2014).
Gao, W. F. & Liu, S. Y. Improved artificial bee colony algorithm for global optimization. Inf. Process. Lett. 111(17), 871–882 (2011).
Xiao, S. Y. et al. Artificial bee colony algorithm based on adaptive neighborhood search and Gaussian perturbation. Appl. Soft Comput. 100, 106955 (2021).
Zhu, G. P. & Kwong, S. Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl. Math. Comput. 217(7), 3166–3173 (2010).
Zheng, X. et al. An improved artificial bee Colony algorithm based on cat mapping and differential variation. J. Data Inf. Manag. 4, 119–135 (2022).
Ben Djaballah, C. & Nouibat, W. A new multi-population artificial bee algorithm based on global and local optima for numerical optimization. Clust. Comput. 25(3), 2037–2059 (2022).
Brajević, I. A shuffle-based artificial bee colony algorithm for solving integer programming and minimax problems. Mathematics 9(11), 1211 (2021).
Zhao, B. H., Sung, T. W. & Zhang, X. A quasi-affine transformation artificial bee colony algorithm for global optimization. J. Intell. Fuzzy Syst. 40(3), 5527–5544 (2021).
Agarwal, A. et al. Grey wolf optimizer: A new strategy to invert geophysical data sets. Geophys. Prospect. 66(6), 1215–1226 (2018).
Mirjalili, S. & Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016).
Arora, S., Singh, S. & Yetilmezsoy, K. A modified butterfly optimization algorithm for mechanical design optimization problems. J. Braz. Soc. Mech. Sci. Eng. 40(1), 1–17 (2018).
Varol Altay, E. & Alatas, B. Bird swarm algorithms with chaotic mapping. Artif. Intell. Rev. 53(2), 1373–1414 (2020).
Li, T. et al. A WSN positioning algorithm based on 3D discrete chaotic mapping. EURASIP J. Wirel. Commun. Netw. 2019(1), 1–13 (2019).
Alatas, B. Chaotic bee colony algorithms for global numerical optimization. Expert Syst. Appl. 37(8), 5682–5687 (2010).
Bharti, K. K. & Singh, P. K. Chaotic gradient artificial bee colony for text clustering. Soft Comput. 20(3), 1113–1126 (2016).
Gaidhane, P. J. & Nigam, M. J. A hybrid grey wolf optimizer and artificial bee colony algorithm for enhancing the performance of complex systems. J. Comput. Sci. 27, 284–302 (2018).
Gao, W. & Liu, S. A modified artificial bee colony algorithm. Comput. Op. Res. 39(3), 687–697 (2012).
Crampin, M. & Heal, B. On the chaotic behaviour of the tent map. Teach. Math. Appl. Int. J. IMA 13(2), 83–89 (1994).
Saito, A. & Yamaguchi, A. Pseudorandom number generation using chaotic true orbits of the Bernoulli map. Chaos Interdiscipl. J. Nonlinear Sci. 26(6), 063122 (2016).
Zhong, F. L., Li, H. & Zhong, S. M. An improved artificial bee colony algorithm with modified-neighborhood-based update operator and independent-inheriting-search strategy for global optimization. Eng. Appl. Artif. Intell. 58, 134–156 (2017).
Luo, J., Wang, Q. & Xiao, X. A modified artificial bee colony algorithm based on converge-onlookers approach for global optimization. Appl. Math. Comput. 219(20), 10253–10262 (2013).
Li, G., Niu, P. & Xiao, X. Development and investigation of efficient artificial bee colony algorithm for numerical function optimization. Appl. Soft Comput. 12(1), 320–332 (2012).
Karaboga, D. & Akay, B. A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 214(1), 108–132 (2009).
Ghambari, S. & Rahati, A. An improved artificial bee colony algorithm and its application to reliability optimization problems. Appl. Soft Comput. 62, 736–767 (2018).
Abualigah, L. et al. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 157, 107250 (2021).
Yang, X. S. & He, X. Firefly algorithm: Recent advances and applications. Int. J. Swarm Intell. 1(1), 36–50 (2013).
Dhiman, G. & Kumar, V. Emperor penguin optimizer: A bio-inspired algorithm for engineering problems. Knowl.-Based Syst. 159, 20–50 (2018).
Dhiman, G. & Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 114, 48–70 (2017).
Mirjalili, S., Mirjalili, S. M. & Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 27(2), 495–513 (2016).
Mirjalili, S. SCA: A Sine Cosine algorithm for solving optimization problems. Knowl.-Based Syst. 96, 120–133 (2016).
Abualigah, L. et al. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 376, 113609 (2021).
Li, L. J. et al. A heuristic particle swarm optimizer for optimization of pin connected structures. Comput. Struct. 85(7), 340–349 (2007).
Mezura-Montes, E. & Coello, C. A. C. An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. Int. J. Gen. Syst. 37(4), 443–473 (2008).
Kaveh, A. & Khayatazad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 112–113, 283–294 (2012).
Mahdavi, M., Fesanghary, M. & Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 188(2), 1567–1579 (2007).
Brajević, I. et al. Hybrid Sine Cosine algorithm for solving engineering optimization problems. Mathematics 10(23), 4555 (2022).
Acknowledgements
This research was funded by the National Key R&D Program of China (2021YFB3401400, 2021YFB3401402), the Fundamental Research Funds for the Central Universities—the Opening Fund of National Engineering Laboratory of Offshore Geophysical and Exploration Equipment, China University of Petroleum, Qingdao 266580, China (20CX02303A), the Project of Ministry of Industry and Information Technology of the People's Republic of China (CH02N20), and the Project of Ministry of Industry and Information Technology of the People's Republic of China (CJ09N20).
Author information
Contributions
W.X.: Analysis, Writing—review & editing, Funding acquisition. G.L.: Methodology, Design, Experiment, Writing—original draft. C.L.: Investigation, Data collection, Review & editing. L.T.: Investigation, Data collection.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Xiao, Ws., Li, Gx., Liu, C. et al. A novel chaotic and neighborhood search-based artificial bee colony algorithm for solving optimization problems. Sci Rep 13, 20496 (2023). https://doi.org/10.1038/s41598-023-44770-8