Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie
Wydział Informatyki, Elektroniki i Telekomunikacji
Katedra Informatyki

Doctoral dissertation

Agent-based memetic computing in continuous optimization

mgr inż. Wojciech Korczyński

Supervisor: dr hab. inż. Aleksander Byrski

Kraków 2017


To my beloved Parents, for their endless love and support.


Abstract

The classic no-free-lunch theorem poses the primary motivation for developing novel metaheuristics—general-purpose methods for solving difficult search and optimization problems. Throughout the years, population-based metaheuristics and, in particular, evolutionary algorithms inspired by the Darwinian theory of natural selection have gained increasing popularity in the field of metaheuristic computing. In numerous research studies, they were proven to be an effective tool in dealing with problems that were too difficult to tackle by analytical approaches. A significant qualitative leap in terms of evolutionary computing has been achieved by introducing the notion of agency. Agent-based systems have turned out to be able to attain better solutions than classic techniques while requiring less computational effort. Further enhancements have been provided by hybrid methods such as memetic algorithms, which combine local-search exploitation with exploration-oriented evolutionary metaheuristics. Nevertheless, efficiency turns out to be the main issue when population-based metaheuristics are applied, because of the excessive computational demand they tend to generate. This becomes even more challenging in the case of memetic algorithms due to the immense number of expensive evaluations they require. Therefore, new methods for improving metaheuristic efficiency are indispensable. This dissertation proposes a technique of fitness evaluation buffering that makes it possible to decrease the complexity of memetic computing and thereby to obtain better results in a shorter amount of time (when compared to classic evolutionary and agent-based metaheuristics). In order to prove its usefulness, experimental verification has been performed with the use of hard, high-dimensional (5000 dimensions) continuous benchmarks. Moreover, the possibilities of speeding up computations by delegating their most expensive parts to GPU and FPGA devices are touched upon. In the beginning, the concepts of metaheuristics and population-based, evolutionary, and memetic algorithms are introduced. Further, the Evolutionary Multi-Agent System (EMAS) is discussed, along with its two memetic variants. Next, the means for augmenting metaheuristic efficiency by parallelizing computations are reviewed, and implementations of particular algorithms on the PyAgE computing platform are put forth. Finally, the results of the experiments are shown and thoroughly analyzed.


Streszczenie

The classic no-free-lunch theorem constitutes one of the main motivations for developing new metaheuristics—general-purpose methods applied to solving difficult search and optimization problems. Over the years, population-based methods, and in particular evolutionary algorithms inspired by the Darwinian theory of natural selection, have gained considerable popularity in the field of metaheuristic computing. Numerous studies have proven their effectiveness on problems that are difficult to solve with analytical methods. A significant qualitative leap in the field of evolutionary computing came with the introduction of the concept of agency. Agent-based systems turned out to be able to attain better solutions than classic techniques while requiring less computational effort. Further improvements have been provided by hybrid methods, such as memetic algorithms, which combine local-search exploitation with exploration-oriented evolutionary metaheuristics. Population-based metaheuristics are effective but inefficient tools; memetic algorithms in particular fall into this category due to the immense number of individual evaluations they perform. Hence, developing methods that increase the efficiency of metaheuristics is indispensable. This dissertation presents a technique of buffering the fitness evaluation function that makes it possible to reduce the complexity of memetic computations and to obtain better results in a shorter time than the relevant evolutionary and agent-based methods. Experimental verification was carried out using difficult, high-dimensional (5000 dimensions) continuous functions. Moreover, the possibilities of speeding up computations by delegating their most expensive parts to GPU and FPGA devices are discussed. The dissertation begins with the concepts of metaheuristics and population-based, evolutionary, and memetic algorithms. Next, the evolutionary multi-agent system EMAS is discussed, along with its two memetic variants. Subsequently, methods of increasing the efficiency of metaheuristics through parallelization are reviewed. Then, implementations of the particular algorithms on the computing platform are presented. Finally, the experimental results are shown and analyzed in depth.


Contents

Introduction

1 Search and optimization metaheuristics
  1.1 Heuristics and metaheuristics
  1.2 Population-based metaheuristics
  1.3 Evolutionary metaheuristics
    1.3.1 Origins of evolutionary metaheuristics
    1.3.2 Principles of evolutionary metaheuristics
  1.4 Hybridization of local search with evolutionary metaheuristics
    1.4.1 Baldwinian local search
    1.4.2 Lamarckian local search

2 Agent-based metaheuristics
  2.1 From evolutionary algorithms to evolutionary agent-based systems
  2.2 Evolutionary Multi-Agent System (EMAS)
  2.3 Memetic Evolutionary Multi-Agent System (MemEMAS)
  2.4 Lifelong Memetization in Memetic Multi-Agent System

3 Efficient metaheuristic computing
  3.1 Parallelization of evolutionary algorithms
    3.1.1 Parallel models of evolutionary algorithms
    3.1.2 Software frameworks for parallel evolutionary computing
    3.1.3 Taxonomy of computer architectures
    3.1.4 Popular MIMD-compliant hardware
  3.2 Efficient hybrid evolutionary algorithms
    3.2.1 Speeding-up evolutionary algorithms using GPGPU
    3.2.2 Speeding-up evolutionary algorithms using FPGA
  3.3 Fitness evaluation buffering
    3.3.1 Considered problems
    3.3.2 Algorithm of local search buffering

4 PyAgE: a flexible platform for metaheuristic computing
  4.1 An overview of the platform
  4.2 Implementation of population-based algorithm
  4.3 Implementation of agent-based computing system
  4.4 Implementation of memetic algorithm
  4.5 Implementation of local search buffering
  4.6 Implementation of parallelization
  4.7 Implementation of hybridization with GPGPU
  4.8 Implementation of hybridization with FPGA

5 Experimental results
  5.1 Considered difficult benchmark problems
  5.2 Experimental configuration
  5.3 High-dimensional continuous optimization
  5.4 Detailed memetic parameter study
    5.4.1 Memetic mutation repetitions
    5.4.2 Mutation strength
    5.4.3 Probability of lifelong memetization
    5.4.4 Frequency of lifelong memetization
    5.4.5 Combination of mutation repetitions and mutation strength
  5.5 Memetic algorithms in shape optimization of rotating disc
    5.5.1 Problem formulation
    5.5.2 Experimental configuration
    5.5.3 Experimental results

Summary

List of Figures

List of Tables

Bibliography

Scientific curriculum


Introduction

A vast number of search and optimization problems are too difficult to be efficiently solved with the use of common analytical methods. Among the reasons are problem complexity and search spaces too big to be explored entirely [136]. Knowledge of the problem domain and of the characteristics of the search space is often insufficient to provide solutions in any way other than by random generation. Such challenging problems are known as black-box problems and may only be solved with the use of metaheuristics—general-purpose algorithms that provide rough solutions [69]. Solutions provided by metaheuristics may be far from optimal, but they are assumed to be adequate. What is essential is the fact that these solutions can be obtained within a reasonable time and with a sensible computational effort [86, 135].

According to the so-called no-free-lunch theorem by Wolpert and Macready, it is impossible to discover a metaheuristic method that would be the ultimate solution to all problems, no matter how well it works for a certain one [94, 188]. Therefore, it is indispensable to search for novel solvers adjusted to each given problem. Metaheuristic methods are often inspired by domains such as nature, biology, social sciences, etc. [86]. However, the greatest interest has been shown in the field of evolutionary computing. Evolutionary metaheuristics have gained growing popularity throughout the years, as they have been recognized as an effective and comprehensible technique for solving difficult optimization problems [23]. Additionally, following a formal model proposed by Michael Vose in [184], certain evolutionary metaheuristics may be acknowledged as well-defined global optimization algorithms. Vose proposed the view of a simple genetic algorithm with a fixed-size population as a mathematical object (namely, a Markov chain), which he then proved to be ergodic.

Nevertheless, classic evolutionary algorithms have some significant drawbacks that motivate researchers to investigate novel metaheuristics (among others, they are computationally demanding [88] and do not take into account some important features of evolution [23]). New algorithms often constitute combinations of diverse approaches. Agent-based systems are one of the popular classes of methods that are commonly hybridized with classic evolutionary metaheuristics. Introducing agency to evolutionary

computing made it possible to follow such important features of evolution as population decomposition and species co-evolution. One example of the successful application of agent-based systems to evolutionary metaheuristics is the Evolutionary Multi-Agent System (EMAS) [51]. Throughout the years, it has been the subject of extensive research that has ascertained its effectiveness in various domains (cf. [38, 43, 67], for example). Noteworthy is the fact that EMAS provided satisfactory results more efficiently than classic evolutionary algorithms while requiring significantly fewer evaluations. Based on the proof of ergodicity of the underlying Markov chain in the formal model elaborated in [45], EMAS has been confirmed to be a general optimization method.

Particular attention has been paid to blending the ideas of local search and popular population-based metaheuristics (such as evolutionary algorithms) in order to enhance exploitation in exploration-oriented methods. This kind of approach has been implemented in memetic algorithms, inspired by the theory of memes [140]. Memetic algorithms may also be successfully hybridized with agent-based systems. Preliminary endeavors to take advantage of an efficient EMAS extended by local search methods have been attempted; e.g., in [44]. Although memetic algorithms have turned out to be effective methods for solving difficult problems, they are computationally expensive (even more so than classic metaheuristics), as a remarkable overhead is generated by the vast number of evaluations needed in the process of a local search.

Therefore, it is essential to propose new efficient memetic methods, as the application of dedicated local search operators will improve the efficiency of searches in the solution space of agent-based computing systems (as compared to evolutionary multi-agent systems and classic evolutionary algorithms), at the same time making it possible to obtain better results in a shorter amount of time. In order to support this thesis, a review of methods for boosting efficiency in metaheuristics by parallelizing computations is presented. Next, taking inspiration from the seminal work of Gallardo, Cotta, and Fernández [82], a mechanism of buffering partial results of evaluation is introduced and then implemented on the PyAgE computing platform. Such a mechanism makes it possible to reduce the complexity of memetic algorithms and, thus, the execution time of local search procedures. Further, the approach proposed in this dissertation is applied to difficult multidimensional (5000 dimensions) continuous optimization problems. Next, a series of experiments comparing basic metaheuristics (i.e., a classic evolutionary algorithm and EMAS) with their memetic variants is conducted. Their aim is to find the global optima of several hard

benchmark continuous functions. Then, the influence of particular memetic parameters on algorithm efficiency is investigated. Finally, classic and memetic metaheuristics are employed for the practical engineering problem of the shape optimization of a rotating annular disc.

Summing up, the main research outcomes of this dissertation are:

• proposing the idea of lifelong mutation—memetization applied at any moment of an agent's lifetime—as an alternative to the classic approach with a local search applied in the course of reproduction (Sections 2.3 and 2.4);
• proposing and implementing the buffering method that makes it possible to reduce the complexity of evaluations of solution quality in a continuous search space (Sections 3.3.1, 3.3.2 and 4.5);
• realization of hybrid computing on the PyAgE platform leveraging GPU and FPGA devices (Sections 4.7 and 4.8);
• performing a detailed experimental comparison along with a statistical analysis of the discussed algorithms in solving high-dimensional continuous benchmark problems with the use of the proposed efficiency enhancements (Section 5.3);
• performing a profound examination of the impact of memetic parameters on the quality of the attained solutions (Section 5.4);
• application of the discussed classic and memetic algorithms to the real-world engineering problem of rotating disc shape optimization (Section 5.5).

The structure of this dissertation is as follows. Chapter 1 introduces the concepts of search and optimization problems, as well as difficult black-box problems solved with the use of heuristics that act as methods of last resort. Then, the main characteristics of heuristics and their general definitions—i.e., metaheuristics—are described. In the following sections, population-based metaheuristics are discussed, along with their most popular type (namely, evolutionary metaheuristics). In this part of the dissertation, the origins of evolutionary algorithms and the basic principles of their fundamental operations are put forth. Finally, memetic algorithms hybridizing a local search with evolutionary metaheuristics are presented as methods for enhancing exploitation in the latter.

Chapter 2 deals with agent-based evolutionary metaheuristics that follow the process of evolution more precisely than classic evolutionary algorithms and make it possible to find solutions in a more efficient manner. Later, the Evolutionary Multi-Agent System (EMAS) is introduced as a concept that has been proven to be able to tackle

difficult problems with less computational effort than classic population-based metaheuristics. Furthermore, a way of combining EMAS with memetic algorithms is explained, and two methods of applying a local search—in the course of reproduction and during an agent's lifetime (the so-called "lifelong memetization")—are presented.

The main issues that have to be dealt with in metaheuristic computing are the computational effort and the exorbitant time necessary to find a satisfactory solution. Therefore, the need for developing techniques boosting metaheuristic efficiency (e.g., by parallelization) has emerged. This matter is put into focus in Chapter 3. At first, the Parallel Evolutionary Algorithm (PEA) is discussed, along with diverse models of parallelization of evolutionary computing. Next, Flynn's traditional taxonomy is recalled as an introduction to the review of popular, fully-concurrent hardware suitable for demanding operations. Later, hybrid evolutionary algorithms sped up with GPU and FPGA devices are described. Ultimately, the method of buffering parts of a solution as a manner of decreasing the number of required evaluation operations is proposed. In addition, a discussion on how this technique reduces the complexity of memetic metaheuristics is put forward.

Chapter 4 begins with an overview of PyAgE—an agent-based computing platform employed in the experimental research. The remaining sections give examples of how the algorithms and methods covered in Chapters 1–3 might be realized in PyAgE. First, implementations of population-based, agent-based, and memetic algorithms are outlined. Afterwards, the realization of the local search buffering technique is depicted. Eventually, the subject of computation parallelization and hybridization with GPGPU and FPGA is touched upon.

Chapter 5 concerns the experiments performed in order to determine whether or not the proposed thesis may be supported. During the experiments, four difficult multidimensional benchmark functions are tackled, and their global minima are sought. The obtained results are analyzed in detail, and a statistical study is carried out. The next section discusses the outcomes of a thorough analysis of how memetic parameters influence the search for an optimal solution. The final section provides an example of the practical application of classic and memetic metaheuristics to the variational problem of the shape optimization of a rotating annular disc.

The last part summarizes this dissertation. Conclusions drawn from the experimental results are put forth, and whether or not the thesis is supported is determined by a review of the essential parts of the dissertation. Finally, possibilities for future work are discussed.

* * *

I wish to express my appreciation and gratitude to my advisor, Aleksander Byrski Ph.D., D.Sc., for his priceless help, boundless enthusiasm, and continued support. His immense knowledge (not just in the field of metaheuristic computing), patience, and warmth were my greatest inspirations. It has been an honor and an immense privilege to be his Ph.D. student.

I would like to thank Marek Kisiel-Dorohinicki Ph.D., D.Sc. for the crucial pieces of advice, fruitful collaboration, and all the opportunities I received. I truly appreciate the friendly and supportive atmosphere he creates.

I am thankful to all the people who helped me in my research. I am especially grateful to my best friend, Maciej Kaziród, for working (and having fun) together for many years and for the assistance with the PyAgE platform. I also wish to thank Roman Dębski, Ph.D. for his commitment and for almost three years of collaboration.

Last but not least, I would like to express my deepest gratitude to the two most important people in my life, that is, my Mom and Dad. Thank you for being my constant companions in both the joyful and difficult moments of life. You put a lot of time and effort into raising me and making my education possible. You never let me down. I will always appreciate your endless love, everlasting understanding, and strong encouragement.


Chapter 1

Search and optimization metaheuristics

Numerous popular problems belong to the class of search problems, as they consist in finding a set of parameters in accordance with some criteria, which are usually expressed as a function of the mentioned parameters: Φ : D → [0, M], where D ⊂ ℝ^N, N ∈ ℕ, is the set of solutions and ℝ₊ ∋ M < +∞. This function serves to evaluate the quality of the proposed solutions. Since the goal of a search task is to optimize the criteria function Φ, it may be stated that search problems belong to the class of optimization problems [37].

Many search and optimization problems are too difficult to be solved in a reasonable amount of time with the use of standard, exhaustive methods. Following Michalewicz and Fogel [136], the main reasons a problem is perceived as difficult may be that the search space is too big or complex to be efficiently and exhaustively explored, or that the evaluation function is noisy and, in addition, varies over time. Some problems (for example, combinatorial optimization problems [144]) have domains that are hard or even impossible to describe and explore with the use of the classic mathematical apparatus. In these cases, there is little if any knowledge of the search space, topology, or other intrinsic features of the problem. Therefore, solutions cannot be derived from knowledge of the domain but have to be randomly generated. What is more, such sampling remains the only way of obtaining solutions, because all features of the search space are hidden, including the distance between a proposed solution and the optimum. Such problems are called black-box problems [69]. In order to solve black-box problems, general-purpose algorithms—heuristics—are used as methods of last resort.
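As an illustration, a criteria function for a continuous minimization problem can be as simple as the sphere benchmark sketched below. This is a minimal sketch under assumptions made for the example: the function, dimensionality, and domain bounds are illustrative and are not taken from the dissertation.

```python
import random

# Illustrative criteria (fitness) function: the sphere benchmark.
# Its global minimum is 0, attained at the origin.
def sphere(solution):
    return sum(x * x for x in solution)

# The search domain D is assumed here to be the hypercube [-5.12, 5.12]^N,
# a common convention for this benchmark.
N = 10
LOWER, UPPER = -5.12, 5.12

# Black-box setting: with no knowledge of the domain, candidate solutions
# can only be sampled at random and judged by the criteria function.
candidate = [random.uniform(LOWER, UPPER) for _ in range(N)]
print(sphere(candidate))
```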

1.1 Heuristics and metaheuristics

Heuristics (Gr. heuresis: finding) are search methods that provide solutions that are "good enough" (i.e., they may be neither correct nor optimal) in a reasonable amount of time. Computational effort is thereby remarkably reduced at the expense of solution quality [135, 136]. Since a heuristic does not guarantee satisfactory results and constitutes only a simplified model, both the heuristic itself and the solutions it provides have to be verified experimentally. What is more, a heuristic can be freely controlled by a researcher and can be stopped when an adequate solution is reached (e.g., one close to the optimum). Heuristic methods extensively use stochastic sampling of the solution space.

A metaheuristic (Gr. meta: beyond) is a general-purpose, nature-inspired search algorithm [86]. It might be perceived as a general definition ("template") of a heuristic without particular information about a problem, its domain, or its operators [174]. Metaheuristics are usually inspired by various domains of life; e.g., biology, evolution, genetics, sociology, culture, etc. Metaheuristics also leverage stochastic sampling; however, they tend to "guide" the search process instead of relying only on randomness. Following Blum and Roli [32], these are the main characteristics of metaheuristics:

• they guide the search process,
• their goal is to efficiently explore the search space in order to find a (sub-)optimal solution,
• they are approximate and usually non-deterministic,
• they are not problem-specific,
• in order to search for solutions more efficiently and to avoid local optima, they can engage more advanced methods, such as complex learning processes, machine learning, memorization, etc.

While researching and developing new heuristic algorithms, one has to keep in mind that no ultimate "one-in-all" solution that fits all kinds of problems can be discovered (cf. the aforementioned no-free-lunch theorem). In fact, averaged over all problems in a certain domain, all search and optimization techniques provide statistically identical results [188, 189]. This is why the search for new heuristics and the adaptation of their parameters to given problems are exceptionally meaningful. In accordance with a widely practiced classification, metaheuristics are divided into two groups [34, 65, 174]:

• Single-solution metaheuristics, which focus on improving a single solution by iteratively exploring its neighborhood in the search space. They are oriented towards exploitation. Examples of single-solution metaheuristics are tabu search [85], hill climbing [156], and simulated annealing [112].
• Population-based metaheuristics, which maintain a population of many solutions utilized to generate new solutions. They are oriented towards exploration. Examples of population-based metaheuristics are evolutionary algorithms [22], ant colony optimization [64], particle swarm optimization [111], scatter search [87], artificial immune systems [61], and many others.

This dissertation concerns the latter type: population-based metaheuristics.

1.2 Population-based metaheuristics

Population-based metaheuristics focus on exploring a search space with a population of individuals, each of which represents a solution to the tackled problem. The goal is to continuously improve these solutions in consecutive iterations. The main idea behind population-based metaheuristics is to generate new solutions by recombining the existing ones and selecting those that look most promising. The quality of particular solutions is evaluated by a criteria function specific to the given problem (cf. function Φ introduced at the beginning of this chapter) [174]. It is noteworthy that, despite being oriented towards the exploration of a search space, individuals may start exploitation when they reach a nearby extremum (the next populations are generated in the vicinity of this extremum). The main advantage of a population-based algorithm over a single-solution one is that the former avoids local extrema more effectively [37].

Algorithm 1 presents a general, high-level pseudo-code of a population-based metaheuristic. The procedure starts with some initial population P0 (it is usually generated randomly or may be a parameter of the procedure). Next, the following operations are repeated (as long as a stop condition is not met):

1. the quality of the solutions in population Pt is evaluated,
2. population P't with new solutions is generated based on the existing individuals from population Pt,
3. some individuals are selected from populations Pt and P't to form population Pt+1.

Algorithm 1 Pseudo-code of population-based metaheuristic

function search                          (P0 is given: random or a parameter)
    t ← 0
    while ¬stopConditionIsMet do
        evaluate(Pt)
        P't ← generate(Pt)
        Pt+1 ← select(Pt ∪ P't)
        t ← t + 1
    return best(Pt)

In the end, the best solution in the final population is returned as the result of the algorithm.

Regarding the stop condition, diverse criteria may be applied; however, the most common ones are based on the observation of consecutive populations and the solutions they generate [134, 145, 174]. Here are some examples:

• criterion of maximal cost: this may be expressed by computation time or the number of algorithm iterations;
• criterion of satisfactory solution quality: the algorithm is stopped when the best individual in the population reaches a certain, predefined quality. One should keep in mind that this criterion is risky, since it is based on the value of a quality function of which one possesses little knowledge (cf. the properties of black-box problems). Additionally, it is possible that the predefined quality will never be reached, causing the algorithm to go into an infinite loop;
• criterion of minimal improvement speed: the algorithm is stopped when the improvement reached by consecutive populations falls below a certain, predefined level. In this case, some risk also occurs since, similar to the criterion of satisfactory solution quality, some knowledge of the quality function is needed, and the improvement might never fall below the predefined level (e.g., if the population gets stuck in a local optimum);
• criterion of loss of population diversity: the algorithm is stopped if the diversity of solutions falls below a certain, predefined level. Analogous to the two preceding criteria, this also does not ensure that the algorithm stops (e.g., if individuals get stuck in a local optimum), so the value of the borderline diversity level has to be chosen carefully.
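A minimal Python sketch of this loop follows. The operator implementations (Gaussian perturbation and truncation selection) and the maximal-cost stop criterion are assumptions made for the example, not the dissertation's implementation; evaluation happens implicitly through the fitness function passed to select.

```python
import random

def search(fitness, generate, select, population, max_iterations=1000):
    # Generic population-based loop (cf. Algorithm 1); the criterion of
    # maximal cost (a fixed iteration budget) serves as the stop condition.
    for _ in range(max_iterations):
        offspring = generate(population)             # P't <- generate(Pt)
        population = select(population + offspring)  # Pt+1 <- select(Pt u P't)
    return max(population, key=fitness)

def make_operators(fitness, population_size):
    def generate(population):
        # New solutions as Gaussian perturbations of the existing ones.
        return [[x + random.gauss(0.0, 0.1) for x in ind] for ind in population]
    def select(candidates):
        # Truncation selection: keep the fittest individuals.
        return sorted(candidates, key=fitness, reverse=True)[:population_size]
    return generate, select

fitness = lambda ind: -sum(x * x for x in ind)  # maximized quality (negated sphere)
initial = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(20)]
generate, select = make_operators(fitness, population_size=20)
print(search(fitness, generate, select, initial))
```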

Population-based metaheuristics usually follow various phenomena observed in biology, evolution, sociology, physics, etc. The greatest popularity has been sustained by metaheuristics inspired by evolution.

1.3 Evolutionary metaheuristics

Algorithms that follow the phenomena of evolution have been researched and developed since about the mid-20th century. Throughout the years, many variants of evolutionary metaheuristics have been proposed, and their usefulness in numerous applications has been proven. Nowadays, evolutionary algorithms constitute an important type of metaheuristic. The purpose of this section is to familiarize the reader with the history of evolutionary metaheuristics, their common principles, and their elements.

1.3.1 Origins of evolutionary metaheuristics

The roots of evolutionary metaheuristics date back to the 19th century, when Gregor Mendel [133] and Charles Darwin [60] formulated the laws of inheritance (elaborated by the former) and evolution (devised by the latter). Influenced by Mendel's and Darwin's works, contemporary researchers created different varieties and paradigms of evolutionary algorithms.

In the 1960s, Ingo Rechenberg and Hans-Paul Schwefel applied Darwinian principles of evolution to the optimization of minimal-drag bodies in a wind tunnel. They used operations typical of evolutionary algorithms in order to improve consecutive generations of solutions. The methods they proposed are known as evolution strategies [30, 151, 161, 162]. Also in the 1960s, Lawrence J. Fogel elaborated an approach called evolutionary programming. Fogel's goal was to use evolution as a learning process of artificial intelligence implemented as finite-state machines able to understand a predefined language [77, 78]. Evolutionary programming has been developed throughout the years, and its contemporary version is rather similar to evolution strategies [25].

In the 1970s, an immense contribution to the field of evolutionary computing was made by John Holland's works on cellular automata, which led to the formalization of Holland's schema theorem. This theorem, also known as the fundamental theorem of genetic algorithms, underlies evolutionary metaheuristics. It states that the average quality of a processed population gradually increases in successive generations. Holland also laid the foundations of genetic algorithms and was the first to use variation

operators in the process of the evolution of individuals represented in binary code [100, 101]. Genetic algorithms were then widely popularized by David Goldberg in [88].

Another meaningful type of evolutionary algorithm is genetic programming, pioneered by John R. Koza in the 1990s. In genetic programming, computer programs are the objects of evolution. They are represented in tree-based structures, so functional programming languages are the most suitable for being evolved using genetic programming. The objective is to find programs that perform effectively in a predefined task [119].

1.3.2 Principles of evolutionary metaheuristics

Evolutionary metaheuristics process a population of exemplary solutions in order to find an optimal one. This goal is reached by gradually improving the quality of the solutions (also known as their fitness) generated in consecutive algorithm iterations. Solutions are represented by individuals, or more specifically by the genotypes that the individuals contain. While a genotype constitutes an encoded solution to a problem, genes embody its particular features. Depending on the problem domain, different genotype representations may be implemented; e.g., based on real numbers for continuous optimization, or based on integers for discrete problems.

Algorithm 2 depicts the pseudo-code of a common Evolutionary Algorithm [71, 134, 165]. At the beginning, initial population P0 is generated. Then, the main part of the algorithm is executed iteratively (as long as a predefined stop condition is not met). Each iteration is called a generation and consists of the following operators:

1. Evaluation: the fitness of each individual is calculated with the use of a criteria function (also called a fitness function).
2. Selection: some individuals are selected to form the next population. They constitute a mating pool—a set of parents of new individuals. Selection is usually based on the value of fitness.
3. Crossover: offspring are generated based on the mating pool. New individuals inherit their features from their parents.
4. Mutation: individuals' genes are altered.

Finally, the best solution in the population is returned. There are multiple strategies for implementing the particular operators. For instance, population initialization might be fully randomized or somehow controlled. Hybrid approaches are also applied [174].

Algorithm 2 Pseudo-code of Evolutionary Algorithm

function search
    P0 ← initializePopulation()
    t ← 0
    while ¬stopConditionIsMet do
        evaluate(Pt)
        P't ← select(Pt)
        Pt+1 ← crossover(P't)
        mutate(Pt+1)
        t ← t + 1
    return best(Pt)

Methods of selection

Diverse strategies are utilized in the case of selection. Usually, the best individuals (i.e., individuals with the highest fitness) are selected, as they are assumed to generate better offspring and, thus, drive the population forward towards more satisfactory results. Nevertheless, keeping population diversity in mind, worse individuals should also be preserved, as they may provide valuable genetic material; e.g., in the case of getting stuck in a local optimum and slowing down the evolution. Selection methods may be divided into two types [174]:

• proportional fitness assignment: actual fitness values are associated with individuals,
• rank-based fitness assignment: objective values (e.g., rank) are associated with individuals.

The most popular selection strategies include:

• Fitness proportionate selection (also known as roulette wheel selection): individuals are selected by repeated random sampling, and those with higher fitness are more likely to be included in the mating pool. In this straightforward method, the probability of selecting individual i may be expressed as pi = fi / Σ(j=1..N) fj, where fi is the fitness of the i-th individual and N is the number of individuals in the population. The fitness proportionate selection method is recognized to be unfair, since a weak individual's chances of being selected are reduced to a minimum [26].

• Stochastic universal sampling: this is an unbiased extension of fitness proportionate selection. The main difference is that individuals are mapped to contiguous segments of a line. Each segment's length is proportional to the individual's fitness. Then, pointers indicating the selected individuals are distributed along the line at even intervals (their number is equal to the number of individuals to be selected). As opposed to fitness proportionate selection, this method is unbiased because low-quality individuals maintain their chances of being chosen [26].
• Tournament selection: this method consists in conducting a series of competitions between randomly chosen individuals. As a result of each tournament, the individual with the highest fitness is selected. The predefined tournament size may be adjusted in order to increase (smaller tournament) or decrease (larger tournament) the chances of selecting weak individuals [138].
• Rank-based selection: individuals are ranked according to their fitness values. In the process of selection, only their positions in the ranking are taken into consideration, not the actual fitness; so, the distribution is uniform (contrary to proportional fitness-assignment methods) [24].
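A minimal sketch of two of these strategies in Python (the helper names and the default tournament size are illustrative assumptions):

```python
import random

def roulette_wheel_selection(population, fitness, k):
    # Fitness proportionate selection: p_i = f_i / sum_j f_j.
    # random.choices samples with replacement, weighted by fitness
    # (this assumes non-negative fitness values).
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=k)

def tournament_selection(population, fitness, k, tournament_size=3):
    # Repeatedly draw `tournament_size` random competitors and keep the
    # fittest one; a smaller tournament gives weak individuals more chances.
    return [max(random.sample(population, tournament_size), key=fitness)
            for _ in range(k)]
```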

Methods of crossover

Crossover is a process inspired by biological reproduction in which an offspring is produced and inherits its genetic material from its parents. There is a multitude of crossover methods; the most popular ones are as follows [97]:

• Single-point crossover: a point in the parental genotypes is selected randomly. The newborn's genotype is created by copying the genes from the beginning of the first parent's genotype to the crossover point and joining them to the genes copied from the crossover point to the end of the second parent's genotype. Let us assume that the parents' genotypes are:

  G1 = [x1, x2, ..., xn]
  G2 = [y1, y2, ..., yn]

  Then, if point k (1 < k < n) is selected, the child's genotype will be:

  G3 = [x1, x2, ..., xk−1, yk, yk+1, ..., yn]

  Optionally, a second offspring may be produced by swapping the order of the parental genes:

  G4 = [y1, y2, ..., yk−1, xk, xk+1, ..., xn]

• Multi-point crossover: this method is similar to single-point crossover, but more points are selected here. A new genotype is generated by combining parental gene portions as determined by the selected points. For instance, if the parents' genotypes are:

  G1 = [x1, x2, ..., xn]
  G2 = [y1, y2, ..., yn]

  and points k, l, m are selected, the offspring might look as follows:

  G3 = [x1, x2, ..., xk−1, yk, yk+1, ..., yl−1, xl, xl+1, ..., xm−1, ym, ym+1, ..., yn]

  Similar to single-point crossover, a second child may be created by swapping the order of the parents.

• Uniform crossover: particular genes of an offspring are taken randomly from the parents [172].

The crossover operator has to be adjusted to the genotype's encoding. The preceding methods can be successfully applied to many representations; e.g., real-valued, binary, etc. However, in the case of real-valued encoding, for example, one can also utilize more advanced methods based on arithmetic operations (such as intermediate recombination or line recombination [141]).
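A minimal sketch of the single-point and uniform operators in Python (list-based genotypes and the helper names are assumptions made for the example):

```python
import random

def single_point_crossover(g1, g2):
    # Pick a random crossover point and splice the parents' genes.
    k = random.randint(1, len(g1) - 1)
    child_a = g1[:k] + g2[k:]
    child_b = g2[:k] + g1[k:]  # optional second offspring, parents swapped
    return child_a, child_b

def uniform_crossover(g1, g2):
    # Each gene of the offspring is taken from a randomly chosen parent.
    return [random.choice(pair) for pair in zip(g1, g2)]

parents = ([1, 2, 3, 4, 5], [10, 20, 30, 40, 50])
print(single_point_crossover(*parents))
print(uniform_crossover(*parents))
```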

Methods of mutation

The goal of a mutation operator is to preserve population diversity. Mutation should ensure that individuals avoid becoming too similar and, thereby, more susceptible to getting stuck in local extrema. On the other hand, it should not be applied excessively, in order to avoid replacing the evolutionary process with a random search. Usually, mutation consists in modifying a random gene with some probability. Diverse mutation methods are utilized depending on the genotype encoding; the ones used most are listed below [22, 174].

• Flip bit: this method is applicable only to binary-encoded genotypes. The value of a randomly chosen gene is inverted—1 is changed to 0, and 0 to 1.
• Swap: two randomly selected genes are swapped.
• Uniform: a gene's value is replaced with a value drawn from a uniform distribution over the range from the lower to the upper bound of the problem domain.
• Non-uniform: the mutation strength and frequency change in the course of evolution. For instance, mutation can have a bigger impact on initial individuals in order to increase diversity within the population. Later, it can be reduced to allow for the adjustment of good solutions.
• Gaussian: a random value from a Gaussian distribution is added to a gene.
• Boundary: a gene's value is replaced with either the lower or the upper bound of the problem domain.

Except for flip bit, all of these mutation methods can be applied to real-valued genotype encoding.

Figure 1.1 illustrates a model of a common Evolutionary Algorithm. Each individual owns its genotype, and its quality corresponds to a fitness value. In the course of evolution, it is processed by the subsequent operators.

Figure 1.1: Evolutionary Algorithm
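Two of the operators above for real-valued genotypes, as a minimal sketch (the per-gene probability, mutation strength, and domain bounds are illustrative assumptions):

```python
import random

def gaussian_mutation(genotype, probability=0.1, sigma=0.5):
    # Each gene is perturbed by Gaussian noise with the given probability.
    return [x + random.gauss(0.0, sigma) if random.random() < probability else x
            for x in genotype]

def uniform_mutation(genotype, probability=0.1, lower=-5.12, upper=5.12):
    # Each gene is replaced, with the given probability, by a value drawn
    # uniformly from the problem domain [lower, upper].
    return [random.uniform(lower, upper) if random.random() < probability else x
            for x in genotype]
```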

Evolutionary metaheuristics have turned out to have applications in a variety of fields. They play an important role in biology and bioinformatics (e.g., in molecular research and design [56] or molecular sequence alignment [89]) as well as in medicine (e.g., in cancer detection [75] or in ophthalmology imaging [121]). Engineers also often benefit from evolutionary computing, employing it in telecommunications (e.g., in such problems as network design or hardware infrastructure design [15]), electronics (especially in electronic circuit design [193, 194]), the problems of design and integration of control systems [53], etc. Furthermore, evolutionary algorithms are useful in management and scheduling applications [31], economics (e.g., in rental market analysis [28] or in economic modeling [186]), and chemistry [126].

1.4 Hybridization of local search with evolutionary metaheuristics

Evolutionary metaheuristics have been continually developed throughout the years. Enhancement by hybridization with other search techniques seems particularly interesting, as this makes it possible to combine the advantages of the respective methods. One example of such a hybridization that has been implemented with success is the introduction of memetic algorithms as a hybrid of evolutionary computing with a dedicated local search, in order to improve exploitation in exploration-oriented methods.

Memetic algorithms originate from Richard Dawkins' theory of memes [62]. A meme is understood as a "unit of culture" that carries ideas, behaviors, and styles. This unit spreads among people by being passed from person to person within a culture by speech, writing, and other means of direct and indirect communication. According to the theory elaborated by Dawkins (also known as memetics), memes undergo phenomena analogous to evolution. They compete, and those that are more prolific and seem to be useful for individuals are more likely to spread and to be inherited. They can also be changed in the process of mutation. Unfit memes (which influence individuals in a harmful manner) become extinct and disappear. Noteworthy is the fact that memes can spread both vertically (in the course of inheritance) and horizontally (by various means of communication and knowledge sharing) [90, 98].

Memetic algorithms take advantage of population-based metaheuristics and local search methods and blend them together. The first researcher who proposed and applied a memetic metaheuristic with success was Pablo Moscato, who managed to combine an evolutionary algorithm with the simulated annealing method with the aim of solving the Traveling Salesman Problem [140].

Memetic algorithms (initially popularized by Radcliffe and Surry [150], for example) have been proven to provide remarkable success [95]. The hybridization of evolutionary algorithms and local search methods was formalized by Krasnogor and Smith in [120].

Memetic algorithms may be classified as cultural algorithms, which were introduced by Robert G. Reynolds in 1994 [153]. They take into consideration both the evolutionary process and the cultural relations between individuals. In systems utilizing cultural algorithms, culture represents knowledge about the search space (environment). Such knowledge constitutes the belief space (knowledge base). Individuals can share this information and communicate it to each other in order to notify one another about promising or valueless regions of the search space. Thus, culture affects the evolutionary process. The pseudo-code of a Cultural Algorithm has been included in Algorithm 3. With respect to the classic evolutionary search method, two additional operations have been introduced: influencing the population by cultural information (cf. line 10) and updating the belief space with knowledge acquired by individuals (cf. line 11).

Algorithm 3 Pseudo-code of Cultural Algorithm
1: function search
2:     P0 ← initializePopulation()
3:     knowledgeBase0 ← initializeKnowledgeBase(P0)
4:     t ← 0
5:     while ¬stopConditionIsMet do
6:         evaluate(Pt)
7:         P't ← select(Pt)
8:         Pt+1 ← crossover(P't)
9:         mutate(Pt+1)
10:        influence(Pt+1, knowledgeBaset)
11:        knowledgeBaset+1 ← update(Pt+1)
12:        t ← t + 1
13:    return best(Pt)
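A minimal sketch of the two extra operations in Python (entirely illustrative: modeling the belief space as the best solutions seen so far, and influence as a pull towards a belief-space exemplar, are assumptions made for the example):

```python
import random

def update(belief_space, population, fitness, memory_size=5):
    # Keep the best individuals seen so far as the shared knowledge base
    # (cf. line 11 of Algorithm 3).
    return sorted(belief_space + population, key=fitness, reverse=True)[:memory_size]

def influence(population, belief_space, rate=0.1):
    # Pull each individual slightly towards a randomly chosen exemplar
    # from the belief space (cf. line 10 of Algorithm 3).
    influenced = []
    for ind in population:
        exemplar = random.choice(belief_space)
        influenced.append([x + rate * (e - x) for x, e in zip(ind, exemplar)])
    return influenced
```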

Usually, a local search is applied in the course of evaluation (Baldwinian local search) or mutation (Lamarckian local search).

Memetic algorithms have been applied to numerous real-world problems. For instance, Aguilar and Colmenares combined a genetic algorithm with a neural network to recognize alphabetic characters and geometric figures [14]. In turn, Mignotte et al. used a hybrid evolutionary method in image processing, namely in the classification of image objects [137]. Harris and Ifeachor employed a hybrid genetic algorithm in the design of frequency-sampling filters [93], whereas Reis et al. hybridized an evolutionary metaheuristic with a gate-type local search in order to design a digital circuit [152]. Memetic algorithms are also applied to engineering problems; e.g., in traffic management systems (cf. the hybrid fuzzy logic/genetic algorithm developed by Srinivasan et al. in [170]) or in aircraft design (cf. Bos' work, where a genetic algorithm was joined with a gradient-guided algorithm [33]). Costa applied an evolutionary algorithm enhanced with tabu search to the scheduling of National Hockey League matches [58]. In [59], Cotta and Fernández proved the usefulness of memetic algorithms in scheduling and timetabling as well. The subject of employing memetic algorithms in the field of scheduling was also tackled by Franca et al. in [80]. In [29], Berretta et al. discussed how memetic methods may be utilized in bioinformatics. Finally, Urdaneta et al. proposed the hybridization of a genetic algorithm with successive linear programming in the problem of power planning [181].

1.4.1 Baldwinian local search

According to the Baldwinian theory, an individual's predispositions and learning capabilities are inherited during reproduction [27]. The Baldwin effect follows the Darwinian theory of natural selection, as reproductive success is affected by an individual's learning capabilities passed on to its offspring by inheritance (while the genetic code itself remains unchanged).

Regarding evolutionary metaheuristics, a local search algorithm based on the Baldwinian theory is usually applied in the course of the evaluation process. Numerous potential descendants of the evaluated individual are generated, their fitness values calculated, and the highest one assigned to the individual (while its characteristics encoded in the genotype do not change). That is, fitness—which, in this case, represents learning capabilities—implies how good the solution will potentially be in future generations. Such an approach was first proposed by Hinton and Nowlan in [99], where they proved that evolution with the use of the Baldwin effect is much more effective even though no changes are reflected in the genotype. Further research that proved the advantage of individuals learning according to the Baldwin effect over non-learning ones was carried out by Ackley and Littman in [13].

Algorithm 4 presents a pseudo-code of a Memetic Algorithm that consists of an evolutionary metaheuristic enhanced with a local search implemented according to the Baldwinian theory. Instead of an ordinary evaluation, the baldwinianEvaluate

function has been used. This function applies a local search, during which the potential descendants of the individuals in population P are generated. As a result of the baldwinianEvaluate function, each individual in population P is assigned the best fitness of its descendants. The new fitness values are then used in the process of selection.

Algorithm 4 Pseudo-code of Memetic Algorithm with Baldwinian local search

function baldwinianEvaluate(P)
    localSearch(P)

function search
    P0 ← initializePopulation()
    t ← 0
    while ¬stopConditionIsMet do
        baldwinianEvaluate(Pt)
        P't ← select(Pt)
        Pt+1 ← crossover(P't)
        mutate(Pt+1)
        t ← t + 1
    return best(Pt)
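A minimal sketch of the Baldwinian evaluation in Python (the helper names are hypothetical, and hill climbing over Gaussian perturbations is an assumed local search chosen for illustration):

```python
import random

def local_search(genotype, fitness, steps=10, sigma=0.1):
    # Hill climbing around the genotype; returns the best fitness found.
    best = fitness(genotype)
    current = list(genotype)
    for _ in range(steps):
        neighbor = [x + random.gauss(0.0, sigma) for x in current]
        f = fitness(neighbor)
        if f > best:
            best, current = f, neighbor
    return best

def baldwinian_evaluate(population, fitness):
    # Baldwin effect: the genotype stays unchanged; only the assigned
    # fitness reflects what the individual could reach through learning.
    return {tuple(ind): local_search(ind, fitness) for ind in population}
```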

1.4.2 Lamarckian local search

The 18th- and 19th-century biologist Jean-Baptiste Lamarck proposed a theory according to which an individual's characteristics acquired during its lifetime may be inherited by its offspring [72]. Each individual may improve and change its genetic material, which is then inherited by its descendants. Nowadays, Lamarckism has been entirely discredited as inconsistent with Darwin's theory of evolution [166].

With respect to the implementation of a Lamarckian local search in evolutionary algorithms, it is usually applied in the course of the mutation process. As the result of a local search starting at the point represented by an individual's genes, numerous solutions are sampled, and the most satisfactory one replaces the individual's genotype. Further selection is based on the fitness calculated for the new genotype. Lamarckian evolution may be applied to crossover as well—different combinations of parental genotypes might be analyzed, and the best one chosen. As with the Baldwin effect, a Lamarckian local search has been proven to be an effective solution for search and optimization problems [102, 125, 155].

Algorithm 5 introduces a pseudo-code of a Memetic Algorithm based on the Lamarckian approach. A local search is applied in the lamarckianMutate function, which replaces the ordinary mutation operator. As a result of the lamarckianMutate function, numerous mutations of each individual in population P are generated. Afterwards, the individuals exchange their genotypes for the most satisfactory solutions generated in the process of the local search.

Algorithm 5 Pseudo-code of Memetic Algorithm with Lamarckian local search

function lamarckianMutate(P)
    localSearch(P)

function search
    P0 ← initializePopulation()
    t ← 0
    while ¬stopConditionIsMet do
        evaluate(Pt)
        P't ← select(Pt)
        Pt+1 ← crossover(P't)
        lamarckianMutate(Pt+1)
        t ← t + 1
    return best(Pt)

It is difficult to unequivocally state which of these two approaches is better. The success of the Baldwin effect or Lamarckian evolution is often dependent on the problem to be tackled. On one hand, the former might be more effective while the latter converges to a local optimum (as Whitley et al. proved in [187]). On the other hand, Ku and Mak in [124] and Julstrom in [108] stated that Lamarckian learning provided satisfactory results, as opposed to the Baldwin effect (which performed poorly). In turn, Yao et al. obtained similar outcomes for memetic algorithms based on a Lamarckian local search and the Baldwinian method in [192]. Nonetheless, a common conclusion emerges from studies on memetic algorithms—evolutionary metaheuristics enhanced with a local search are more effective, more robust, and bring good solutions much faster than classic approaches influenced by Darwinian theory (e.g., [91, 192]). Notwithstanding the evident advantages of combining a local search with evolutionary metaheuristics, one has to employ this technique with care, as the problem of population diversity loss arises [143].
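For symmetry, a sketch of the Lamarckian variant (again an illustrative assumption using hill climbing over Gaussian perturbations, not the dissertation's operator): here the improved genotype itself is written back.

```python
import random

def lamarckian_mutate(population, fitness, steps=10, sigma=0.1):
    # Run a local search from each genotype and keep the best point found;
    # the acquired traits are written back into the genotype (Lamarckism).
    mutated = []
    for ind in population:
        best, best_f = list(ind), fitness(ind)
        for _ in range(steps):
            neighbor = [x + random.gauss(0.0, sigma) for x in best]
            f = fitness(neighbor)
            if f > best_f:
                best, best_f = neighbor, f
        mutated.append(best)
    return mutated
```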

Figure 1.2 depicts an Evolutionary Algorithm enhanced with a local search. The memetic method has been realized as an additional operator (memetization) that processes the population in the course of evolution.

Figure 1.2: Memetic Evolutionary Algorithm

One of the main drawbacks of evolutionary metaheuristics is the computational overhead that results from their intrinsic feature—incessant modifications of the population and, consequently, an immense number of evaluations performed towards obtaining a satisfactory solution [88]. This issue becomes even more demanding in the case of memetic algorithms, which perform a significantly increased number of evaluations. In addition, it is noteworthy that fitness functions are often complex, computationally expensive, and time-consuming; therefore, they should not be overused. Thus, new metaheuristics that require less computational effort are continuously being created and developed.
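Since repeated evaluations dominate the cost, one generic way to avoid recomputing fitness for genotypes that have already been seen is memoization, sketched below. This is only a flavor of the problem and not the fitness evaluation buffering technique proposed later in this dissertation (Section 3.3), which exploits the structure of the local search itself; exact-match caching pays off only when the search actually revisits identical genotypes.

```python
from functools import lru_cache

def expensive_criteria_function(genotype):
    # Stand-in for a costly fitness evaluation.
    return -sum(x * x for x in genotype)

@lru_cache(maxsize=100_000)
def cached_fitness(genotype):
    # `genotype` must be hashable (e.g., a tuple of floats); the expensive
    # function is then evaluated at most once per distinct genotype.
    return expensive_criteria_function(genotype)

print(cached_fitness((1.0, 2.0)))  # computed
print(cached_fitness((1.0, 2.0)))  # served from the cache
```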

Chapter 2

Agent-based metaheuristics

Chapter 1 introduces evolutionary algorithms as common metaheuristics useful for solving difficult search and optimization problems. Although they are widely applied to diverse problems, efficiency remains their main drawback. Moreover, prompted by the no-free-lunch theorem, researchers constantly investigate novel metaheuristics, often hybridizing diverse approaches. One of the concepts that has turned out to be useful in terms of evolutionary computing is agency.

There is no official common definition of an agent; however, according to the most widely accepted one, an agent is an autonomous, pseudo-intelligent computer system situated in some environment, able to act reactively (i.e., by reacting to changes in the environment), pro-actively (i.e., undertaking autonomous actions), or based on interactions with other agents (e.g., cooperation, communication, negotiation, etc.) in order to fulfill a common goal [190].

In numerous research studies, agents constitute a basis for the hybridization of artificial intelligence disciplines. For instance, they have already been combined with evolutionary algorithms in order to support the realization of their tasks or to manage distributed algorithm execution [52, 158, 180]. A multi-agent system (MAS) is an open, distributed, decentralized system whose main part consists of a group of agents that interact in their common environment [74, 105, 171]. The concept of agent-based systems is derived from the need to decompose a main task into smaller parts to be solved by distributed individuals. For a long time, multi-agent systems have been successfully applied to various problems: power system management [132], flood forecasting [84], business process management [104], intersection management [66], difficult optimization problems [127], and many others.
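To fix the vocabulary, a minimal sketch of an agent interface and its environment loop follows (entirely illustrative; real MAS frameworks are far richer, and the names here are assumptions):

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    @abstractmethod
    def step(self, environment):
        # One autonomous decision: react to the environment, act
        # pro-actively, or interact with neighboring agents.
        ...

class Environment:
    def __init__(self, agents):
        self.agents = agents

    def run(self, iterations):
        # Agents act without global control; in a real MAS these steps
        # could execute concurrently rather than sequentially.
        for _ in range(iterations):
            for agent in self.agents:
                agent.step(self)
```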

2.1 From evolutionary algorithms to evolutionary agent-based systems

Numerous experimental studies have proven that simple evolutionary algorithms work efficiently and yield satisfactory results if the population of individuals preserves adequate diversity; i.e., if the solutions represented by particular individuals are remarkably different [22]. However, classic evolutionary algorithms do not take into account such important features of evolution as changes in the environment, global knowledge, generational synchronization, species co-evolution, and many others [23]. Therefore, in the case of many simple algorithms, population diversity is not preserved—they tend to get stuck in local optima (i.e., in the basins of attraction of local optima). In addition, classic evolutionary algorithms are computationally demanding due to the continuous modifications of the population and, thus, the numerous evaluations run for each generation.

For these reasons, many variants of evolutionary algorithms have been proposed throughout the years. They have introduced new mechanisms in order to enhance computational efficiency and imitate the process of evolution as precisely as possible. Among these, population decomposition and species co-evolution have had the largest influence on the creation of agent-based evolutionary systems, which have turned out to provide a new quality in terms of evolutionary algorithms.

2.2 Evolutionary Multi-Agent System (EMAS)

In 1996, Krzysztof Cetnarowicz proposed the concept of an Evolutionary Multi-Agent System (EMAS) [51]. The basis of this agent-based metaheuristic are agents—entities that bear the appearance of intelligence and are able to make decisions autonomously. Following the idea of population decomposition and evolution decentralization, the main problem is decomposed into sub-tasks, each of which is entrusted to an agent. One of the most important features of EMAS is the lack of global control—agents co-evolve independently of any superior management. Another remarkable advantage of EMAS over classic population-based algorithms is parallel ontogenesis—agents may die, reproduce, or act at the same time.

A schematic illustration of EMAS is presented in Figure 2.1. Each agent possesses a genotype that represents an exemplary solution to the tackled problem. Agents are situated on evolutionary islands, where they interact with each other. It is noteworthy

It is noteworthy that such a structure corresponds to the distributed character of computations and facilitates algorithm parallelization and implementation in a distributed environment.

The quality of each agent's solution is expressed by its energy—a non-renewable resource acquired or lost during its lifetime. Energy is exchanged in the process of so-called meetings that take place between two agents. An agent with higher fitness (i.e., whose genotype represents a solution of higher quality) acquires some portion of the other agent's energy. The mechanism of selection is based on the level of agent energy—agents with less energy are more likely to be removed from the system, as they are assumed to represent poor-quality solutions.

Figure 2.1: Evolutionary Multi-Agent System (EMAS)

Two core phenomena of evolution (i.e., inheritance and the aforementioned selection) are modeled by the reproduction and death of agents (respectively). Reproduction takes place between two agents that each own a high-enough level of energy (i.e., their genotypes represent high-quality solutions). Information encoded in each parent's genotype is inherited with the use of variation operators—mutation and recombination.
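The core data held by an EMAS agent can be sketched in a few lines of Python. This fragment is a simplified illustration only; the class and field names are assumptions and do not reflect the actual API of any EMAS implementation:

    import random
    from dataclasses import dataclass, field

    @dataclass
    class EmasAgent:
        genotype: list                # candidate solution, e.g., a vector of reals
        energy: float                 # non-renewable resource driving selection
        fitness: float | None = None  # quality of the genotype, set after evaluation

    @dataclass
    class Island:
        agents: list = field(default_factory=list)

        def get_neighbor(self, agent):
            # Pick a random meeting partner from the same island.
            others = [a for a in self.agents if a is not agent]
            return random.choice(others) if others else None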

Since a newly created agent receives some initial portion of energy, and the global amount of this resource must remain constant, parental levels of energy are adequately decreased. If an agent's energy falls below a certain level, it dies and is removed from the system. This mechanism corresponds to the evolutionary phenomenon of selection, as agents with low levels of energy are supposed to bear low-quality solutions. Additionally, an agent may change the island on which it is located—if its level of energy is high enough, it can migrate to other evolutionary islands. The mechanism of migration provides an exchange of information and resources throughout the system [113].

Algorithm 6 presents the pseudo-code of the actions performed by an EMAS agent during each evolutionary step. In each iteration, an agent communicates with a neighbor provided by its parent (i.e., the aggregate agent—the island—that encapsulates it). Then, the two agents may cross over if their levels of energy are high enough (which is verified in the canReproduce method). Otherwise, the mechanism of meeting is launched. In the end, the agent may migrate to another island (if its energy stands at a proper level, which is verified in the canMigrate method) or make a move inside its current island.

Algorithm 6: Pseudo-code of an EMAS agent's evolutionary step

    function step
        neighbor ← parent.getNeighbor()
        if canReproduce(this, neighbor) then
            reproduce(this, neighbor)
        else
            meet(this, neighbor)
        if canMigrate(this) then
            migrate(this)
        else if shouldMove(this) then
            move(this)

Algorithm 7 illustrates the pseudo-code of the reproduction mechanism in EMAS. Each parent donates half of the descendant's energy (the agent's initial energy is a common global parameter). In this way, the sum of all of the agents' energy levels remains constant. A new agent's genotype is created by a crossover operator—its functioning depends on the provided implementation. Next, the newly created agent is mutated (again, the particular mutation strategy depends on the implementation) and evaluated. Finally, the agent is added to the evolutionary island of its parents.

Algorithm 7: Pseudo-code of the EMAS reproduction action

    global newbornEnergy
    function reproduce(agent1, agent2)
        agent1.energy ← agent1.energy − newbornEnergy/2
        agent2.energy ← agent2.energy − newbornEnergy/2
        newborn ← newAgent()
        newborn.energy ← newbornEnergy
        newborn.genotype ← crossover(agent1.genotype, agent2.genotype)
        mutate(newborn)
        evaluate(newborn)
        agent1.island.addAgent(newborn)

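Building on the EmasAgent sketch above, the reproduction action of Algorithm 7 might be written in Python as follows. This is a hedged sketch: the crossover, mutate, and evaluate operators are assumed to be supplied by the problem definition, and NEWBORN_ENERGY is an assumed name (and value) for the global newbornEnergy parameter:

    NEWBORN_ENERGY = 100.0  # assumed value of the global newbornEnergy parameter

    def reproduce(agent1, agent2, island, crossover, mutate, evaluate):
        # Each parent donates half of the newborn's energy, so the total
        # amount of energy in the system remains constant.
        agent1.energy -= NEWBORN_ENERGY / 2
        agent2.energy -= NEWBORN_ENERGY / 2

        newborn = EmasAgent(
            genotype=crossover(agent1.genotype, agent2.genotype),
            energy=NEWBORN_ENERGY,
        )
        mutate(newborn)
        newborn.fitness = evaluate(newborn.genotype)
        island.agents.append(newborn)
        return newborn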
The process of a meeting between two agents is presented in Algorithm 8. The agents determine which one of them has the lower fitness. This agent gives some portion of its energy to the other one. Finally, it is verified whether this agent should be removed from the system (in case its energy level reaches the minimal value) by the mechanism of death that models evolutionary selection.

Algorithm 8: Pseudo-code of the EMAS meeting mechanism

    function meet(agent1, agent2)
        if agent1.fitness > agent2.fitness then
            energyToTransfer ← agent2.getEnergyToTransfer()
            agent1.energy ← agent1.energy + energyToTransfer
            agent2.energy ← agent2.energy − energyToTransfer
            if shouldDie(agent2) then
                die(agent2)
        else if agent1.fitness < agent2.fitness then
            energyToTransfer ← agent1.getEnergyToTransfer()
            agent1.energy ← agent1.energy − energyToTransfer
            agent2.energy ← agent2.energy + energyToTransfer
            if shouldDie(agent1) then
                die(agent1)

Agent migration from one evolutionary island to another is put forth in Algorithm 9. To begin with, a target island is chosen. It may be selected randomly or according to some other strategy. Then, the migrating agent is removed from its current island and introduced to the target one.

Algorithm 9: Pseudo-code of the EMAS migration action

    global evolutionaryIslands
    function migrate(agent)
        islandToMigrate ← evolutionaryIslands.get()
        agent.island.removeAgent(agent)
        islandToMigrate.addAgent(agent)

Algorithm 10 presents how the death of an EMAS agent may look. It is a rather straightforward action, as it consists only of removing an agent from its island.

Algorithm 10: Pseudo-code of the EMAS death action

    function die(agent)
        agent.island.removeAgent(agent)

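The meeting mechanism of Algorithm 8 can be sketched analogously in Python. Since the getEnergyToTransfer strategy is implementation-dependent, a fixed energy quantum is assumed here, along with a maximization convention (higher fitness wins) and an assumed death threshold:

    ENERGY_QUANTUM = 10.0  # assumed fixed portion of energy transferred per meeting
    MIN_ENERGY = 0.0       # assumed death threshold

    def meet(agent1, agent2, island):
        if agent1.fitness == agent2.fitness:
            return
        # The agent with higher fitness acquires energy from the other one.
        if agent1.fitness > agent2.fitness:
            winner, loser = agent1, agent2
        else:
            winner, loser = agent2, agent1
        transfer = min(ENERGY_QUANTUM, loser.energy)
        winner.energy += transfer
        loser.energy -= transfer
        # Death models evolutionary selection (cf. Algorithm 10).
        if loser.energy <= MIN_ENERGY:
            island.agents.remove(loser)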
As previously noted, simple evolutionary algorithms fail to preserve population diversity. In the case of EMAS, it is quite simple to provide diverse solutions, e.g., by decomposing populations into different evolutionary islands and allowing agents to move from one island to another (thus introducing the mechanism of allopatric speciation [48]).

In order to conduct research on additional levels, EMAS has frequently been extended and hybridized with various mechanisms and new ideas, allowing for the creation of new variants of the system. Among them, the most important are:

• Immunological EMAS (iEMAS): iEMAS was created by Aleksander Byrski so that the process of selection could be sped up, especially when the fitness evaluation is time-consuming [36]. Following the immunological inspiration, a new kind of agent is introduced. It imitates a lymphocyte T-cell and is created by the transformation of a dying agent. Next, it recognizes its affinity with other agents by verifying the similarity between genotypes. Depending on the implementation, those agents whose genotypes are similar to the lymphocyte's are either removed from the system or penalized (e.g., by reducing their energy) [40, 41].

• Elitist EMAS: elitist EMAS was developed by Leszek Siwik for supporting decisions in multi-criteria optimization [168, 169]. Elitism (i.e., a mechanism of preserving the best individuals—the elite of the population—regardless of the selection operator) was introduced into EMAS by adding an elitist island to which agents with high levels of prestige can migrate. An agent's prestige increases each time it dominates another agent. Agents with high levels of this resource belong to the elite.

• Co-evolutionary EMAS (coEMAS): co-evolutionary techniques were introduced into EMAS by Rafał Dreżewski (see [67]). Their main goal is to support solving multi-modal optimization problems by enhancing population diversity. The diversity is improved by enabling the population to form species within many basins of attraction (contrary to classic EMAS, where each population tends to locate in one optimum). Two types of coEMAS have been developed:

  – coEMAS with co-evolving species (nCoEMAS), which differentiates two types of agents (niches and individuals) in order to split the population into species and enable species co-evolution,

  – coEMAS with sexual selection (sCoEMAS), which provides speciation of individuals by introducing two agent sexes—female and male—in order to enable the co-evolution of sexes and sexual selection.

Research conducted with the use of these EMAS variants has proven their usefulness in different applications and tackled problems. EMAS has also been formally proven to be able to solve optimization problems (a proof based on the ergodicity of an appropriately constructed Markov chain, similar to the works of Vose [184]) [45, 47, 159]. Moreover, since the creation of EMAS, it has been applied to many problems; in each case, it has turned out to be very efficient and to yield better results than classic evolutionary algorithms: classic continuous benchmark optimization [38], inverse problems [191], optimization of neural network architecture [43], multi-objective optimization [167], multi-modal optimization [67], financial optimization [68], etc. It has also proven to be useful in research at different levels: formal modeling [46, 160], framework development [73], experimental research [41, 44, 148], etc.

2.3 Memetic Evolutionary Multi-Agent System (MemEMAS)

EMAS can be enhanced with memetic algorithms in a very straightforward manner. A local search may be implemented by modifying the evaluation operator (the Baldwinian local search model) or the mutation operator (the Lamarckian local search model).

Baldwinian memetics may be implemented in EMAS by returning the best fitness found among an agent's potential descendants in the process of a local search rather than the actual fitness value of the agent being evaluated. The genotype of the evaluated agent remains unchanged.

Lamarckian memetics is implemented in EMAS by running a local search procedure during the process of reproduction or at any moment during an agent's lifetime. An agent's genotype is mutated numerous times, and the best-encountered genotype is returned. Therefore, contrary to the Baldwinian model, both the agent's genotype and its fitness value are changed. When handled with care, local search algorithms can enhance an individual's genotype and bring it closer to a local or global extremum. In this dissertation, a Lamarckian local search model is taken under consideration.
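The difference between the two models can be made concrete with a short sketch. Assuming some local_search procedure that returns the best genotype found in the neighborhood of an agent's solution together with its fitness, the two models differ only in what is written back to the agent (an illustration only, not the dissertation's actual implementation):

    def evaluate_baldwinian(agent, local_search):
        # Baldwinian model: the agent is credited with the fitness of the best
        # solution found by the local search, but its genotype stays unchanged.
        _, best_fitness = local_search(agent.genotype)
        agent.fitness = best_fitness

    def apply_lamarckian(agent, local_search):
        # Lamarckian model: both the genotype and the fitness are overwritten
        # with the best solution found by the local search.
        agent.genotype, agent.fitness = local_search(agent.genotype)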

Algorithm 11 shows a simplified version of the memetization action realized in EMAS. It assumes that some local search procedure is provided. As a result of the local search, an agent receives a new genotype along with its quality.

Algorithm 11: Pseudo-code of the EMAS memetization action

    function memetize(agent)
        agent.genotype, agent.fitness ← localSearch(agent)

Algorithm 12 presents memetization implemented in EMAS according to the Lamarckian model. A local search is run during reproduction. As a result of the memetize action, the best genotype found in the process of the local search is assigned to the newly created agent (along with the fitness value calculated for this genotype).

Algorithm 12: Pseudo-code of memetization realized in EMAS in the course of the reproduction action (Lamarckian memetic model)

    global newbornEnergy
    function reproduce(agent1, agent2)
        agent1.energy ← agent1.energy − newbornEnergy/2
        agent2.energy ← agent2.energy − newbornEnergy/2
        newborn ← newAgent()
        newborn.energy ← newbornEnergy
        newborn.genotype ← crossover(agent1.genotype, agent2.genotype)
        mutate(newborn)
        evaluate(newborn)
        memetize(newborn)
        agent1.island.addAgent(newborn)

An outline of EMAS with memetization during the course of reproduction is illustrated in Figure 2.2. An agent applies a local search algorithm in order to create different solutions (represented by small circles, each containing a new genotype). These are evaluated, and the best one (marked with a bold border) replaces the agent's genotype. The local search may be applied only by an agent that has just been created during the process of reproduction.

Figure 2.2: Memetic EMAS with local search realized in the course of reproduction

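The localSearch procedure in Algorithms 11 and 12 is deliberately left abstract. For continuous problems, one simple instantiation is a stochastic hill climber over the genotype. The sketch below is merely an assumed example of such a procedure; the step size, the iteration budget, and the maximization convention are all assumptions:

    import random

    def local_search(genotype, evaluate, steps=50, sigma=0.1):
        # Stochastic hill climbing: repeatedly perturb the genotype and keep
        # the best variant encountered (maximization assumed).
        best = list(genotype)
        best_fitness = evaluate(best)
        for _ in range(steps):
            candidate = [x + random.gauss(0.0, sigma) for x in best]
            fitness = evaluate(candidate)
            if fitness > best_fitness:
                best, best_fitness = candidate, fitness
        return best, best_fitness

    def memetize(agent, evaluate):
        # Lamarckian write-back, as in Algorithm 11.
        agent.genotype, agent.fitness = local_search(agent.genotype, evaluate)

Note that each such call performs dozens of extra fitness evaluations, which is precisely why memetization must be handled with care with respect to the evaluation cost.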
The first successful experiments concerning the hybridization of EMAS with memetization were presented in [44] and later in [42]. In [114], a memetic variant of EMAS was employed to deal with combinatorial optimization. Further research on this topic was developed in [116], where, additionally, a mechanism of efficient evaluations was introduced (cf. Sections 3.3.1 and 3.3.2).

2.4 Lifelong Memetization in Memetic Multi-Agent System

In the most common case, memetic algorithms are applied once, at a precisely defined moment of an agent's lifetime (such as mutation or evaluation). However, bearing in mind the agent's autonomy and parallel ontogenesis, it is possible to run a local search from time to time at any arbitrary moment of an agent's lifetime, based on conditions of the environment or other factors.

Thus, an agent can autonomously decide at any point in time whether it should apply a local search; what is more, one agent can run such a search several times. It is noteworthy that this mechanism can lead to the gradual improvement of the whole population, even between reproductions.

Algorithm 13 introduces the realization of lifelong memetization in EMAS. Following its assumptions, a local search may be repeatedly applied during an agent's lifetime. First of all, an agent verifies whether it should run a memetic algorithm (e.g., this decision might be made by chance). If memetization is to be run, the appropriate action is performed, and the agent is provided with the genotype found by the local search and the fitness that corresponds to this genotype.

Algorithm 13: Pseudo-code of lifelong memetization realized in EMAS (Lamarckian memetic model)

    function step
        if shouldMemetize(this) then
            memetize(this)
        neighbor ← parent.getNeighbor()
        if canReproduce(this, neighbor) then
            reproduce(this, neighbor)
        else
            meet(this, neighbor)
        if canMigrate(this) then
            migrate(this)
        else if shouldMove(this) then
            move(this)

Figure 2.3 depicts a schema of lifelong memetization realized in EMAS. Contrary to Figure 2.2, all agents are able to memetize at any arbitrary moment of their lives.

The first research on the hybridization of EMAS with lifelong memetization was discussed in [117]; this topic was then continued in [118].

As previously mentioned, memetic algorithms are believed to improve the results yielded by classic methods. However, the hybridization of a local search with EMAS does not differ remarkably from a similar approach applied to classic evolutionary algorithms—efficiency still remains the main issue. One has to handle memetization with care so as not to hamper computations with an increased number of evaluation events. Therefore, mechanisms that improve the efficiency of memetics are essentially needed.
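The shouldMemetize decision in Algorithm 13 may be as simple as a random draw. The fragment below sketches such a probabilistic trigger inside an agent's step; the probability value is an assumed parameter, and memetize and meet refer to the sketches given earlier (reproduction and migration are omitted for brevity):

    import random

    MEMETIZATION_PROBABILITY = 0.05  # assumed parameter, to be tuned experimentally

    def should_memetize(agent):
        # A purely random trigger; environmental conditions or the agent's own
        # state (e.g., its energy level) could be consulted here instead.
        return random.random() < MEMETIZATION_PROBABILITY

    def lifelong_step(agent, island, evaluate):
        # The local search may fire in any step, independently of reproduction.
        if should_memetize(agent):
            memetize(agent, evaluate)
        neighbor = island.get_neighbor(agent)
        if neighbor is not None:
            meet(agent, neighbor, island)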

Figure 2.3: Memetic EMAS with local search realized during an agent's lifetime


Chapter 3

Efficient metaheuristic computing

The metaheuristics touched upon in Chapter 1 constitute a common manner of dealing with difficult problems that cannot be solved efficiently with the use of classic exhaustive analysis. However, their computational cost and exorbitant search time remain their main drawbacks. In order to find an optimal solution, large populations are generated, and their processing might become extremely expensive. What is more, the evaluation function is usually very complex. Some problems, e.g., "inverse problems", require the exploration of a huge parameter space in order to learn about the parameters of a model by drawing conclusions from the obtained outcomes [191]. Admittedly, the agent-based evolutionary systems discussed in Chapter 2 reduce the computational effort, but hybridization with memetics poses an even bigger challenge in terms of the usage of computational resources. Therefore, it is indispensable to plan the proper utilization of the available computing resources and to take advantage of the benefits offered by various computational infrastructures in order to increase the efficiency of metaheuristic algorithms.

This chapter deals with the possibilities of improving metaheuristic computing efficiency. Diverse approaches are discussed, e.g., algorithm parallelization and the utilization of popular computational infrastructures. Furthermore, the application of GPU and FPGA architectures to evolutionary algorithms is introduced. The final section touches upon ways of efficient fitness evaluation by a technique known as buffering.

3.1 Parallelization of evolutionary algorithms

Classic evolutionary algorithms have probably gained the greatest popularity among population-based metaheuristics. Such algorithms often need to tackle populations of an exceptionally large size, hampering the overall efficiency of computing. However, the concept
