Drone Squadron Optimization: a novel self-adaptive algorithm for global numerical optimization


Abstract

This paper proposes Drone Squadron Optimization (DSO), a new self-adaptive metaheuristic for global numerical optimization that is updated online by a hyper-heuristic. In contrast to the many nature-inspired algorithms in use today, DSO is an artifact-inspired technique; because it is not tied to natural behaviors or phenomena, it is highly flexible. DSO has two core parts: the semiautonomous drones that fly over a landscape to explore it, and the command center that processes the retrieved data and updates the drones’ firmware whenever necessary. The self-adaptive aspect of DSO in this work is the perturbation/movement scheme, the procedure used to generate target coordinates. The command center evolves this procedure during the global optimization process in order to adapt DSO to the search landscape. We evaluated DSO on a set of widely employed single-objective benchmark functions. The statistical analysis of the results shows that the proposed method is competitive with the other methods; several future improvements are planned to make it more powerful and robust.



Notes

  1. The terminology employed in this work uses the artifact as a metaphor which, by way of analogy, can facilitate understanding.

  2. It should be noted that teams are not analogous to species or to niching in evolutionary algorithms.

  3. It is important to note that all solutions are one-dimensional arrays; therefore, all operations in this work are element-wise.

  4. Many well-known benchmark functions have their global optimum at the origin, and there are algorithms that exploit this characteristic to achieve high performance.

References

  1. Ahmadi S-A (2016) Human behavior-based optimization: a novel metaheuristic approach to solve complex optimization problems. Neural Comput Appl 27:1–12


  2. Alba E, Tomassini M (2002) Parallelism and evolutionary algorithms. IEEE Trans Evol Comput 6(5):443–462


  3. Auger A, Hansen N (2005) A restart CMA evolution strategy with increasing population size. In: Proceedings of the 2005 IEEE congress on evolutionary computation (CEC2005). Canberra, 8-12. IEEE Press, New York, pp 1769–1776

  4. Auger A, Hansen N (2005) Performance evaluation of an advanced local search evolutionary algorithm. In: The 2005 IEEE congress on evolutionary computation, vol 2. IEEE, pp 1777–1784

  5. Auger A, Hansen N (2005) A restart CMA evolution strategy with increasing population size. In: Proceedings of the 2005 IEEE congress on evolutionary computation (CEC2005), vol 2. IEEE, pp 1769–1776

  6. Bäck T (1998) An overview of parameter control methods by self-adaptation in evolutionary algorithms. Fundam Inf 35(1–4):51–66


  7. Ballester PJ, Stephenson J, Carter JN, Gallagher K (2005) Real-parameter optimization performance study on the CEC-2005 benchmark with SPC-PNX. In: Proceedings of the 2005 IEEE congress on evolutionary computation (CEC2005), pp 498–505

  8. Banzhaf W (2001) Artificial intelligence: Genetic programming. In: Smelser NJ, Baltes PB (eds) International encyclopedia of the social & behavioral sciences. Pergamon, Oxford, pp 789–792


  9. Banzhaf W, Nordin P, Keller RE, Francone FD (1998) Genetic programming—an introduction: on the automatic evolution of computer programs and its applications. Dpunkt–Verlag and Morgan Kaufmann, New York

  10. Bonabeau E, Dorigo M, Theraulaz G (1999) Swarm intelligence: from natural to artificial systems. Number 1. Oxford University Press, Oxford

  11. Burke EK, Hyde M, Kendall G, Ochoa G, Ozcan E, Woodward JR (2010) A classification of hyper-heuristics approaches. In: Handbook of metaheuristics, volume 57 of international series in operations research and management science, 2nd edn, chap 15. Springer, Berlin, pp 449–468


  12. Chakraborty UK (2008) Advances in differential evolution. Springer Publishing Company Incorporated, New York


  13. Clerc M, Kennedy J (2002) The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evol Comput 6(1):58–73


  14. Das S, Nagaratnam Suganthan P (2011) Differential evolution: a survey of the state-of-the-art. IEEE Trans Evol Comput 15(1):4–31


  15. de Melo VV, Carosio GL (2013) Automatic generation of evolutionary operators: a study with mutation strategies for the differential evolution. In: Proceedings of the 28th annual ACM symposium on applied computing, SAC ’13, Coimbra, Portugal, March 18–22, 2013, pp 188–193

  16. De Melo VV, Iacca G (2014) A modified covariance matrix adaptation evolution strategy with adaptive penalty function and restart for constrained optimization. Expert Syst Appl 41(16):7077–7094


  17. Dorigo M (1992) Optimization, learning and natural algorithms. PhD thesis, Politecnico di Milano, Italy

  18. Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, vol 1. New York, pp 39–43

  19. Fister I Jr, Yang X-S, Fister I, Brest J, Fister D (2013) A brief review of nature-inspired algorithms for optimization. arXiv preprint arXiv:1307.4186

  20. Gandomi A, Yang X-S, Alavi A (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput 29(1):17–35. doi:10.1007/s00366-011-0241-y


  21. García S, Molina D, Lozano M, Herrera F (2009) A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: a case study on the CEC’2005 special session on real parameter optimization. J Heuristics 15(6):617–644


  22. García-Martínez C, Lozano M (2005) Hybrid real-coded genetic algorithms with female and male differentiation. In: The 2005 IEEE congress on evolutionary computation, vol 1. IEEE, pp 896–903

  23. Glover FW, Kochenberger GA (2006) Handbook of metaheuristics, vol 57. Springer Science & Business Media, New York


  24. Goldberg DE (1989) Genetic algorithms in search, optimization, and machine learning. Addison-Wesley, Reading


  25. Haklı H, Uğuz H (2014) A novel particle swarm optimization algorithm with levy flight. Appl Soft Comput 23:333–345


  26. Hansen N (2009) Benchmarking a bi-population CMA-ES on the BBOB-2009 function testbed. In: Proceedings of the 11th annual conference companion on genetic and evolutionary computation: late breaking papers. ACM, New York, pp 2389–2396

  27. Hansen N, Ostermeier A (2001) Completely derandomized self-adaptation in evolution strategies. Evol Comput 9(2):159–195


  28. Hansen N, Müller SD, Koumoutsakos P (2003) Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol Comput 11(1):1–18


  29. Hong L, Woodward J, Li J, Özcan E (2013) Automated design of probability distributions as mutation operators for evolutionary programming using genetic programming. In: Krawiec K, Moraglio A, Hu T, Etaner-Uyar AS, Hu B (eds) Genetic programming, vol 7831 of Lecture notes in computer science. Springer, Berlin, pp 85–96


  30. Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim 39(3):459–471


  31. Kelly JP (1996) Meta-heuristics: theory and applications. Kluwer, Norwell


  32. Krishnanand KN, Ghose D (2009) Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions. Swarm Intell 3(2):87–124


  33. Larrañaga P, Lozano JA (2001) Estimation of distribution algorithms: a new tool for evolutionary computation. Kluwer, Norwell

  34. Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer with local search. In: The 2005 IEEE congress on evolutionary computation, vol 1. IEEE, pp 522–528

  35. Lin S-C, Punch WF III, Goodman ED (1994) Coarse-grain parallel genetic algorithms: categorization and new approach. In: Sixth IEEE symposium on parallel and distributed processing, 1994. Proceedings. IEEE, pp 28–37

  36. Michalewicz Z, Schoenauer M (1996) Evolutionary algorithms for constrained parameter optimization problems. Evol Comput 4:1–32


  37. Miranda PB, Prudêncio RB (2015) Gefpso: a framework for PSO optimization based on grammatical evolution. In: Proceedings of the 17th annual conference on genetic and evolutionary computation. ACM, New York, pp 1087–1094

  38. Mirjalili S (2016) Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput Appl 27(4):1053–1073


  39. Molina D, Herrera F, Lozano M (2005) Adaptive local search parameters for real-coded memetic algorithms. In: The 2005 IEEE congress on evolutionary computation, vol 1. IEEE, pp 888–895

  40. O’Neill M, Brabazon A (2006) Grammatical differential evolution. In: IC-AI, pp 231–236

  41. Osaba E, Diaz F, Onieva E, Carballedo R, Perallos A (2014) Amcpa: a population metaheuristic with adaptive crossover probability and multi-crossover mechanism for solving combinatorial optimization problems. Int J Artif Intell 12(2):1–23


  42. Pavlidis NG, Tasoulis DK, Plagianakos VP, Vrahatis MN (2006) Human designed vs. genetically programmed differential evolution operators. In: Proceedings of the 2006 IEEE congress on evolutionary computation (CEC2006), pp 1880–1886

  43. Peng F, Tang K, Chen G, Yao X (2010) Population-based algorithm portfolios for numerical optimization. IEEE Trans Evol Comput 14(5):782–800


  44. Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S, Zaidi M (2006) The bees algorithm—a novel tool for complex optimisation problems. In: Proceedings of the 2nd virtual international conference on intelligent production machines and systems (IPROMS 2006), pp 454–459

  45. Poli R, Langdon WB, Holland O (2005) Extending particle swarm optimisation via genetic programming. In: Proceedings of the 8th European conference on genetic programming, EuroGP’05. Springer, Berlin, pp 291–300


  46. Pošik P (2005) Real-parameter optimization using the mutation step co-evolution. In: The 2005 IEEE congress on evolutionary computation, vol 1. IEEE, pp 872–879

  47. Price KV, Storn RM, Lampinen JA (2005) Differential evolution a practical approach to global optimization. Natural computing series. Springer, Berlin

  48. Qin AK, Suganthan PN (2005) Self-adaptive differential evolution algorithm for numerical optimization. In: Proceedings of the 2005 IEEE congress on evolutionary computation (CEC2005), vol 2. IEEE, pp 1785–1791

  49. Rachlin J, Goodwin R, Murthy S, Akkiraju R, Wu F, Kumaran S, Das R (1999) A-teams: an agent architecture for optimization and decision-support. In: Intelligent agents V: agents theories, architectures, and languages. Springer, Berlin, pp 261–276


  50. Rahnamayan S, Tizhoosh HR, Salama MMA (2008) Opposition-based differential evolution. IEEE Trans Evol Comput 12(1):64–79


  51. Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179(13):2232–2248


  52. Rashid M, Baig AR (2008) Adaptable evolutionary particle swarm optimization. In: 3rd international conference on innovative computing information and control, 2008. ICICIC ’08, pp 602–602

  53. Rönkkönen J, Kukkonen S, Price KV (2005) Real-parameter optimization with differential evolution. In: Proceedings of the 2005 IEEE congress on evolutionary computation (CEC2005), pp 506–513

  54. Si T, De A, Bhattacharjee AK (2014) Grammatical swarm based-adaptable velocity update equations in particle swarm optimizer. In: Proceedings of the international conference on frontiers of intelligent computing: theory and applications (FICTA) 2013. Springer, Berlin, pp 197–206


  55. Sinha A, Tiwari S, Deb K (2005) A population-based, steady-state procedure for real-parameter optimization. In: The 2005 IEEE congress on evolutionary computation, vol 1. IEEE, pp 514–521

  56. Storn R, Price K (1997) Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359


  57. Suganthan PN, Hansen N, Liang JJ, Deb K, Chen Y-P, Auger A, Tiwari S (2005) Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Report, 2005005

  58. Tasoulis DK, Pavlidis NG, Plagianakos VP, Vrahatis MN (2004) Parallel differential evolution. In: Congress on evolutionary computation, 2004. CEC2004, vol 2. IEEE, pp 2023–2029

  59. Thierens D (2005) An adaptive pursuit strategy for allocating operator probabilities. In: Proceedings of the 7th annual conference on genetic and evolutionary computation. ACM, New York, pp 1539–1546

  60. Voß T, Hansen N, Igel C (2010) Improved step size adaptation for the MO-CMA-ES. In: Proceedings of the 12th annual conference on Genetic and evolutionary computation. ACM, New York, pp 487–494

  61. Vrugt JA, Robinson BA, Hyman JM (2009) Self-adaptive multimethod search for global optimization in real-parameter spaces. IEEE Trans Evol Comput 13(2):243–259


  62. Whitacre JM, Pham TQ, Sarker RA (2006) Credit assignment in adaptive evolutionary algorithms. In: Proceedings of the 8th annual conference on genetic and evolutionary computation. ACM, New York, pp 1353–1360

  63. Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82


  64. Woodward JR, Swan J (2012) The automatic generation of mutation operators for genetic algorithms. In: Proceedings of the 14th annual conference companion on genetic and evolutionary computation. ACM, New York, pp 67–74

  65. Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102


  66. Yuan B, Gallagher M (2005) Experimental results for the special session on real-parameter optimization at CEC 2005: a simple, continuous EDA. In: Proceedings of the 2005 IEEE congress on evolutionary computation (CEC2005), vol 2005. IEEE, pp 1792–1799

  67. Zhan Z-H, Zhang J, Li Y, Chung HS-H (2009) Adaptive particle swarm optimization. IEEE Trans Syst Man Cybern Part B (Cybern) 39(6):1362–1381


  68. Zhang J, Sanderson AC (2009) JADE: adaptive differential evolution with optional external archive. IEEE Trans Evol Comput 13(5):945–958



Acknowledgements

This paper was supported by the Brazilian Government CNPq (Universal) Grant (486950/2013-1) and CAPES (Science without Borders) Grant (12180-13-0) to V.V.M., and Canada’s NSERC Discovery Grant RGPIN 283304-2012 to W.B.

Author information


Corresponding author

Correspondence to Vinícius Veloso de Melo.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Appendices

Appendix 1: More detailed explanation of DSO

Departure points

  1. CBC: the matrix of the current best solutions found so far;

  2. PermutedCBC: the current best solutions found so far, permuted every iteration. Because of the permutation, CBC can be combined with other solutions even when the same firmware is used;

  3. \(CBC_{pBest}\): the pBest solutions found, where p is a user-defined percentage parameter. The selected solutions are sampled with repetition to create a matrix with N solutions;

  4. Multivariate normal sampling (MVNS): new random solutions sampled using the mean and covariance matrix of the pBest solutions found;

  5. Opposition(CBC): the opposite coordinates of the current best ones, calculated as proposed in [50].
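
For concreteness, the sketch below shows one way the five departure points could be generated. It is a minimal illustration in Python/NumPy under stated assumptions (population matrix CBC of shape N×D, a fitness array for a minimization problem, bounds LB/UB of size D), not the authors' implementation.

```python
import numpy as np

def departure_points(CBC, fitness, LB, UB, p=0.25, rng=None):
    """Illustrative generators for the five departure points.
    Assumed shapes: CBC is (N, D), fitness is (N,), LB and UB are (D,)."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = CBC.shape
    n_best = max(2, int(np.ceil(p * N)))       # p-best rows (>= 2 for the covariance)
    best = np.argsort(fitness)[:n_best]        # assuming minimization

    cbc = CBC                                  # 1. current best solutions
    permuted = CBC[rng.permutation(N)]         # 2. rows permuted every iteration
    cbc_pbest = CBC[rng.choice(best, size=N)]  # 3. p-best sampled with repetition
    mvns = rng.multivariate_normal(            # 4. MVNS over the p-best solutions
        CBC[best].mean(axis=0), np.cov(CBC[best], rowvar=False), size=N)
    opposition = LB + UB - CBC                 # 5. opposite points, as in [50]
    return cbc, permuted, cbc_pbest, mvns, opposition
```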

Offset

The movements from the departure points are generated by applying scaling factors and functions to other coordinates and information. These items are presented below; a short code sketch follows the list.

  1. Constants: \(\overrightarrow{matInterval}=(\overrightarrow{UB}-\overrightarrow{LB})\) and the user-defined values C1, C2, and C3, where \(\overrightarrow{matInterval}\) is an array of size D;

  2. Random weights: U(0, 1), U(0.5, 1), G(0, 1), abs(G(0.5, 0.1)), and abs(G(0, 0.01)), where U denotes the uniform distribution and G the Gaussian distribution;

  3. Calculated weights: \(\mathrm{std-dev}(CBC)\), \(\mathrm{std-dev}(CBC_{pBest})\), and \(Step(CBC)=\sigma *G(0,1)_{N,D}*\overrightarrow{matInterval}*U(0,\,0.5)\), as used in [16], where N is the number of drones in a team;

  4. Two-parameter functions: plus, times, sub, protected division, and average, where protected division returns \(Numerator/((1e{-}15)+Denominator)\);

  5. TmC: the best positions found by the teams after calculating the target coordinates;

  6. Shift: the difference between TmC and CBC, that is, how much the drones have to move;

  7. \(\overrightarrow{GBC}\): the best solution found so far;

  8. \(Opposition(CBC_{pBest})\): the opposite position of the pBest current best coordinates.
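
As an illustration, the following sketch implements two of these items: the protected division (item 4) and the Step(CBC) weight from [16] (item 3). It assumes sigma, the bounds, and the matrix dimensions are defined as in the text; all operations are element-wise, per note 3.

```python
import numpy as np

def protected_div(numerator, denominator):
    # Item 4: protected division, Numerator / ((1e-15) + Denominator)
    return numerator / (1e-15 + denominator)

def step_weight(sigma, N, D, LB, UB, rng=None):
    # Item 3: Step(CBC) = sigma * G(0,1)_{N,D} * matInterval * U(0, 0.5)
    rng = np.random.default_rng() if rng is None else rng
    mat_interval = UB - LB                  # array of size D
    g = rng.standard_normal((N, D))         # G(0, 1) matrix
    u = rng.uniform(0.0, 0.5, (N, D))       # U(0, 0.5) matrix
    return sigma * g * mat_interval * u     # element-wise product
```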

Reference perturbation

The command center is instructed to set up the initial firmware with at least one reference perturbation. This directive avoids starting with teams that use completely random perturbations. The two reference perturbations available for this DSO are:

  1. \(\overrightarrow{CBC_{r1}}+c_{1}*(\overrightarrow{CBC_{r2}}-\overrightarrow{CBC_{r3}})\), and

  2. \(MVNS+Step(CBC)\);

where \(r_{1}\), \(r_{2}\), and \(r_{3}\) are random and distinct solutions. Therefore, the two reference perturbations are (1) rand/1 from Differential Evolution (but not linked to a particular crossover), and (2) a perturbation inspired by the CMA-ES technique, employing only sample generation and step calculation with \(\sigma =0.04\times \mu _{eff}\times ||\mu ||\). This formula comes from the CMA-ES author’s source code and was not tuned for use in DSO.
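
A minimal sketch of the first reference perturbation, DE's rand/1, under the same assumptions as the earlier sketches (CBC is an N×D matrix; c1 plays the role of the user-defined constant):

```python
import numpy as np

def rand_1(CBC, c1, rng=None):
    # Reference perturbation 1: CBC_r1 + c1 * (CBC_r2 - CBC_r3),
    # with r1, r2, r3 random and mutually distinct for each target
    rng = np.random.default_rng() if rng is None else rng
    N, _ = CBC.shape
    targets = np.empty_like(CBC)
    for i in range(N):
        r1, r2, r3 = rng.choice(N, size=3, replace=False)
        targets[i] = CBC[r1] + c1 * (CBC[r2] - CBC[r3])
    return targets
```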

1.1 Recombination

After the perturbation step generates new coordinates, a recombination with the current coordinates, which represent the best coordinates found so far, may be performed. Three possibilities are available (the second is sketched after this section):

  1. No recombination;

  2. Uniform crossover [GA] / binomial recombination [DE];

  3. One- or two-point crossover [GA] / exponential recombination [DE].

In the current DSO, recombination is performed after perturbation, but changing the order is also an option. That will change the behavior of the method without invalidating the original inspiration.
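
As an illustration of option 2, the sketch below performs a DE-style binomial (uniform) recombination. The crossover rate CR is a hypothetical parameter introduced for this sketch; the text does not specify one.

```python
import numpy as np

def binomial_recombination(current, perturbed, CR=0.9, rng=None):
    # Keep each perturbed coordinate with probability CR, otherwise keep the
    # current one; force at least one perturbed coordinate per row, as in DE
    rng = np.random.default_rng() if rng is None else rng
    N, D = current.shape
    mask = rng.uniform(size=(N, D)) < CR
    mask[np.arange(N), rng.integers(0, D, size=N)] = True
    return np.where(mask, perturbed, current)
```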

1.2 Coordinates correction (bounds)

The drones may be allowed to move only inside a particular perimeter. Therefore, if the new target coordinates (x) are outside the perimeter, a correction must be made. Three techniques are available (all three are sketched below):

  1. The coordinate is re-positioned exactly on the bound;

  2. The coordinate receives a new random value inside the feasible bounds;

  3. The coordinate receives the remainder of the excess, that is, \(LB_{j}+remainder\) or \(UB_{j}-remainder\), for \(j=1,\ldots ,D\).
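
A minimal sketch of the three correction techniques; reading "remainder" in technique 3 as the excess taken modulo the interval width is an assumption, not something the text pins down.

```python
import numpy as np

def correct_bounds(x, LB, UB, mode="bound", rng=None):
    # x is (N, D); LB and UB are (D,) arrays
    rng = np.random.default_rng() if rng is None else rng
    interval = UB - LB
    if mode == "bound":      # 1. re-position exactly on the violated bound
        return np.clip(x, LB, UB)
    if mode == "random":     # 2. new random value inside the feasible bounds
        rand = rng.uniform(LB, UB, size=x.shape)
        return np.where((x < LB) | (x > UB), rand, x)
    if mode == "remainder":  # 3. LB_j + remainder or UB_j - remainder, where
        # the excess is taken modulo the interval width (an assumption)
        x = np.where(x < LB, LB + np.remainder(LB - x, interval), x)
        x = np.where(x > UB, UB - np.remainder(x - UB, interval), x)
        return x
    raise ValueError(f"unknown mode: {mode}")
```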

Appendix 2: More detailed results for CEC’05

See Figs. 10, 11 and Table 12.

Table 12: DSO’s results obtained in the CEC’2005 special session in dimension 10, over 36 independent trials

Fig. 10: DSO curves for functions f1–f12 (CEC’05)

Fig. 11: DSO curves for functions f13–f25 (CEC’05)


Cite this article

de Melo, V.V., Banzhaf, W. Drone Squadron Optimization: a novel self-adaptive algorithm for global numerical optimization. Neural Comput & Applic 30, 3117–3144 (2018). https://doi.org/10.1007/s00521-017-2881-3

