Evolutionary Computation

Yuping Wang, Haiyan Liu, Fei Wei, Tingting Zong, Xiaodong Li
For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn an appropriate variable grouping that would allow for a better decomposition of the problem...
August 9, 2017: Evolutionary Computation
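A minimal sketch of the decomposition idea described above (not the authors' method): two variables are treated as interacting if perturbing one changes the objective by a different amount depending on the other's value, and variables are then greedily grouped by detected interactions. The helper names and the probe at the all-zeros point are illustrative assumptions.

```python
def perturb(x, i, delta):
    """Return a copy of x with x[i] shifted by delta."""
    y = list(x)
    y[i] += delta
    return y

def interact(f, i, j, n, eps=1e-9):
    """Detect interaction between variables i and j: if the variables are
    separable, perturbing x[i] changes f by the same amount regardless of x[j]."""
    base = [0.0] * n
    d1 = f(perturb(base, i, 1.0)) - f(base)
    alt = perturb(base, j, 1.0)
    d2 = f(perturb(alt, i, 1.0)) - f(alt)
    return abs(d1 - d2) > eps

def group_variables(f, n):
    """Greedily place each variable into the first group containing a
    variable it interacts with; otherwise start a new group."""
    groups = []
    for i in range(n):
        for g in groups:
            if any(interact(f, i, j, n) for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```

For f(x) = (x0 + x1)² + x2², this grouping separates {x0, x1} from {x2}, after which each subproblem can be optimized on its own.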
Filomena Ferrucci, Pasquale Salza, Federica Sarro
The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GAs solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time...
June 29, 2017: Evolutionary Computation
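The MapReduce pattern the entry above refers to can be illustrated in miniature (a sequential sketch, not Hadoop code): mappers evaluate the fitness of population shards, and a reducer aggregates the results. The toy OneMax fitness is an assumption for illustration; in a real Hadoop deployment the communication between these phases is exactly the overhead the study measures.

```python
from functools import reduce

def fitness(individual):
    # Toy fitness: OneMax, the number of ones in the bit string.
    return sum(individual)

def map_phase(population):
    """Map step: emit (individual, fitness) pairs; in Hadoop each mapper
    would process one shard of the population."""
    return [(ind, fitness(ind)) for ind in population]

def reduce_phase(pairs):
    """Reduce step: aggregate the mapper output, here keeping the fittest."""
    return reduce(lambda a, b: a if a[1] >= b[1] else b, pairs)
```

Whether the parallel version pays off depends on whether per-individual evaluation cost outweighs this map/shuffle/reduce overhead, which is the question the paper investigates empirically.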
M R Przybylek, A Wierzbicki, Z Michalewicz
Real-world optimization problems have been studied in the past, but the work resulted in approaches tailored to individual problems that could not be easily generalized. The reason for this limitation was the lack of appropriate models for the systematic study of salient aspects of real-world problems. The aim of this paper is to study one of such aspects: multi-hardness. We propose a variety of decomposition-based algorithms for an abstract multi-hard problem and compare them against the most promising heuristics...
June 20, 2017: Evolutionary Computation
Hsien-Kuei Hwang, Alois Panholzer, Nicolas Rolin, Tsung-Hsi Tsai, Wei-Mei Chen
We give a detailed analysis of the optimization time of the (1 + 1)-Evolutionary Algorithm under two simple fitness functions (ONEMAX and LEADINGONES). The problem has been approached in the evolutionary algorithm literature in various ways and with different degrees of rigor. Our asymptotic approximations for the mean and the variance represent the strongest of their kind. The approach we develop is based on an asymptotic resolution of the underlying recurrences and can also be extended to characterize the corresponding limiting distributions...
June 20, 2017: Evolutionary Computation
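The algorithm analysed above is simple enough to state in a few lines. The sketch below is the standard (1 + 1)-EA on ONEMAX (flip each bit independently with probability 1/n, accept if not worse); the leading term of its mean optimization time on ONEMAX is known to be e·n·ln n, which the paper's asymptotics sharpen.

```python
import random

def one_plus_one_ea(n, max_iters=100000, seed=1):
    """(1+1)-EA on OneMax: standard bit mutation with rate 1/n,
    elitist acceptance. Returns the final string and iterations used."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(max_iters):
        if sum(x) == n:          # optimum reached
            return x, t
        # Flip each bit independently with probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if sum(y) >= sum(x):     # accept if offspring is at least as fit
            x = y
    return x, max_iters
```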
Anton V Eremeev
In this paper, we consider a fitness-level model of a non-elitist mutation-only evolutionary algorithm (EA) with tournament selection. The model provides upper and lower bounds for the expected proportion of individuals with fitness above given thresholds. In the case of so-called monotone mutation, the obtained bounds imply that increasing the tournament size improves the EA performance. As corollaries, we obtain an exponentially vanishing tail bound for Randomized Local Search on unimodal functions and polynomial upper bounds on the runtime of EAs on the 2-SAT problem and on a family of Set Cover problems proposed by E...
May 10, 2017: Evolutionary Computation
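Tournament selection, the operator whose size parameter the paper analyses, can be sketched in a few lines (a generic illustration, not the paper's model): draw k individuals uniformly with replacement and keep the fittest, so a larger k means stronger selection pressure.

```python
import random

def tournament_select(population, fitness, k, rng):
    """Draw k contenders uniformly at random (with replacement) and
    return the fittest; k is the tournament size."""
    contenders = [rng.choice(population) for _ in range(k)]
    return max(contenders, key=fitness)
```

With k = 1 this degenerates to uniform selection; as k grows, the winner distribution concentrates on the fittest individuals, which is the effect behind the paper's monotone-mutation result.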
Ngoc Hoang Luong, Han La Poutré, Peter A N Bosman
This article tackles the Distribution Network Expansion Planning (DNEP) problemthat has to be solved by distribution network operators to decide which, where, and/or when enhancements to electricity networks should be introduced to satisfy the future power demands. Because of many real-world details involved, the structure of the problem is not exploited easily using mathematical programming techniques, for which reason we consider solving this problem with evolutionary algorithms (EAs). We compare three types of EAs for optimizing expansion plans: the classic genetic algorithm (GA), the estimation-of-distribution algorithm (EDA), and the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA)...
April 7, 2017: Evolutionary Computation
Elena Popovici
Co-optimization problems often involve settings in which the quality (utility) of a potential solution depends on the scenario within which it is evaluated, and many such scenarios exist. Maximizing expected utility is simply the goal of finding the potential solution whose expected utility value over all possible scenarios is best. Such problems are often approached using coevolutionary algorithms. We are interested in the design of generally well-performing black-box algorithms for this problem, that is, algorithms which have access to the utility function only via input-output queries...
March 30, 2017: Evolutionary Computation
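The maximizing-expected-utility objective above has a direct formal reading, sketched here for a finite scenario set (the utility function and scenario weights below are illustrative assumptions; a black-box algorithm would only be able to query u, not enumerate it like this).

```python
def expected_utility(u, x, scenarios, probs):
    """Expected utility of solution x over a finite set of scenarios."""
    return sum(p * u(x, s) for s, p in zip(scenarios, probs))

def best_by_expected_utility(candidates, u, scenarios, probs):
    """Exhaustive baseline: the candidate with the best expected utility."""
    return max(candidates, key=lambda x: expected_utility(u, x, scenarios, probs))
```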
Evert Haasdijk, Jacqueline Heinerman
Selection is an essential component of any evolutionary system, and analysing this fundamental force in evolution can provide relevant insights into the evolutionary development of a population. The 1990s and early 2000s saw a substantial number of publications that investigated selection pressure through methods such as takeover time and Markov chain analysis. Over the last decade, however, interest in the analysis of selection in evolutionary computing has waned. The established methods for analysis of selection pressure provide little insight when selection is based on more than a comparison of fitness values...
March 21, 2017: Evolutionary Computation
Krzysztof L Sadowski, Dirk Thierens, Peter A N Bosman
Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO), where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete and the continuous domain. In this paper, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables...
February 16, 2017: Evolutionary Computation
Tomasz P Pawlak, Krzysztof Krawiec
Program semantics is a promising recent research thread in Genetic Programming (GP). Over a dozen semantic-aware search, selection, and initialization operators for GP have been proposed to date. Some of those operators are designed to exploit the geometric properties of semantic space, while others focus on making offspring effective, i.e., semantically different from their parents. Only a small fraction of previous works aimed at addressing both these features simultaneously. In this paper, we propose a suite of competent operators that combine effectiveness with geometry for population initialization, mate selection, mutation, and crossover...
February 16, 2017: Evolutionary Computation
Patrik Gustavsson, Anna Syberfeldt
Non-dominated sorting is a technique often used in evolutionary algorithms to determine the quality of solutions in a population. The most common algorithm is the Fast Non-dominated Sort (FNS). This algorithm, however, has the drawback that its performance deteriorates when the population size grows. The same drawback also applies to other non-dominated sorting algorithms such as the Efficient Non-dominated Sort with Binary Strategy (ENS-BS). An algorithm suggested to overcome this drawback is the Divide-and-Conquer Non-dominated Sort (DCNS), which works well on a limited number of objectives but deteriorates when the number of objectives grows...
January 19, 2017: Evolutionary Computation
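For reference, the Fast Non-dominated Sort discussed above can be sketched as follows (the classic O(M·N²) formulation for minimization; the quadratic dependence on population size N is precisely the deterioration the abstract mentions).

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(points):
    """Return fronts as lists of indices; front 0 is the Pareto front."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    counts = [0] * n                       # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    f = 0
    while fronts[f]:
        nxt = []
        for i in fronts[f]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        f += 1
        fronts.append(nxt)
    return fronts[:-1]
```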
Emma Hart, Kevin Sim
Although the use of ensemble methods in machine-learning is ubiquitous due to their proven ability to outperform their constituent algorithms, ensembles of optimisation algorithms have received relatively little attention. Existing approaches lag behind machine-learning in both theory and practice, with no principled design guidelines available. In this article, we address fundamental questions regarding ensemble composition in optimisation using the domain of bin-packing as an example. In particular, we investigate the trade-off between accuracy and diversity, and whether diversity metrics can be used as a proxy for constructing an ensemble, proposing a number of novel metrics for comparing algorithm diversity...
January 10, 2017: Evolutionary Computation
Mohammad Reza Bonyadi, Zbigniew Michalewicz
This paper reviews recent studies on the Particle Swarm Optimization (PSO) algorithm. The review focuses on recent high-impact articles that have analyzed and/or modified PSO algorithms. This paper also presents some potential areas for future study.
2017: Evolutionary Computation
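The canonical PSO update that the reviewed variants build on is v ← w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x), followed by x ← x + v. A minimal sketch (the parameter values w = 0.7, c1 = c2 = 1.5 and the [-5, 5] initialization box are common defaults, not taken from the paper):

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO minimizing f; returns (best point, best value)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]            # personal bests
    pbest_val = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(xs[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(xs[i]), val
    return gbest, gbest_val
```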
Rubén Saborido, Ana B Ruiz, Mariano Luque
In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA (global weighting achievement scalarizing function genetic algorithm), which falls within the aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (both utopian and nadir objective vectors) and the weight vector used is taken from a set of weight vectors whose inverses are well-distributed...
2017: Evolutionary Computation
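The Tchebychev-type achievement scalarizing function mentioned above has a compact general form, sketched here for a single reference point (the augmentation term rho and the minimization convention are standard ASF conventions, not details taken from the paper; Global WASF-GA itself uses two reference points and a structured weight set).

```python
def asf(objs, ref, weights, rho=1e-6):
    """Tchebychev-type achievement scalarizing function (minimization):
    max_i w_i * (f_i - r_i), plus a small augmentation term to break
    ties between weakly Pareto optimal points."""
    terms = [w * (f - r) for f, r, w in zip(objs, ref, weights)]
    return max(terms) + rho * sum(terms)
```

Minimizing this ASF for a fixed weight vector drives the search toward the part of the Pareto front associated with that weight; sweeping a well-distributed set of weight vectors is what lets the algorithm approximate the whole front.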
Jorge Gomes, Pedro Mariano, Anders Lyhne Christensen
Cooperative coevolutionary algorithms (CCEAs) rely on multiple coevolving populations for the evolution of solutions composed of coadapted components. CCEAs enable, for instance, the evolution of cooperative multiagent systems composed of heterogeneous agents, where each agent is modelled as a component of the solution. Previous works have, however, shown that CCEAs are biased toward stability: the evolutionary process tends to converge prematurely to stable states instead of (near-)optimal solutions. In this study, we show how novelty search can be used to avoid the counterproductive attraction to stable states in coevolution...
2017: Evolutionary Computation
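The novelty search ingredient mentioned above rewards behavioural difference rather than fitness. A minimal sketch for a one-dimensional behaviour descriptor (the choice of descriptor, distance, and k is an illustrative assumption; the paper applies the idea to multiagent behaviours):

```python
def novelty(behaviour, archive, k=3):
    """Novelty of a 1-D behaviour value: mean distance to its k nearest
    neighbours among the behaviours seen so far."""
    dists = sorted(abs(behaviour - b) for b in archive)
    if not dists:
        return float("inf")   # first behaviour is maximally novel
    nearest = dists[:k]
    return sum(nearest) / len(nearest)
```

Selecting for high novelty pushes populations away from behaviours they have already converged on, which is how it counteracts the premature attraction to stable states described in the abstract.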
Dirk Sudholt
We reinvestigate a fundamental question: How effective is crossover in genetic algorithms in combining building blocks of good solutions? Although this has been discussed controversially for decades, we are still lacking a rigorous and intuitive answer. We provide such answers for royal road functions and OneMax, where every bit is a building block. For the latter, we show that using crossover makes every (μ + λ) genetic algorithm at least twice as fast as the fastest evolutionary algorithm using only standard bit mutation, up to small-order terms and for moderate μ and λ...
2017: Evolutionary Computation
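The building-block-combining effect discussed above is easiest to see with uniform crossover on bit strings (a generic sketch, not the specific GA analysed in the paper):

```python
import random

def uniform_crossover(a, b, rng):
    """Each position is inherited from either parent with probability 1/2,
    so ones collected independently by the two parents can be merged
    into one offspring in a single step."""
    return [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
```

On OneMax this is the mechanism behind the speed-up: mutation must set missing bits one lucky flip at a time, while crossover can combine the complementary good bits of two parents at once.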
Alberto Moraglio, Dirk Sudholt
Geometric crossover is a formal class of crossovers that includes many well-known recombination operators across representations. In previous work, it was shown that all evolutionary algorithms with geometric crossover (but no mutation) do the same form of convex search regardless of the underlying representation, the specific selection mechanism, offspring distribution, search space, and problem at hand. Furthermore, it was suggested that the generalised convex search could perform well on generalised forms of concave and approximately concave fitness landscapes regardless of the underlying space and representation...
2017: Evolutionary Computation
Ilya Loshchilov
Limited-memory BFGS (L-BFGS; Liu and Nocedal, 1989) is often considered to be the method of choice for continuous optimization when first- or second-order information is available. However, the use of L-BFGS can be complicated in a black-box scenario where gradient information is not available and therefore must be numerically estimated. The accuracy of this estimation, obtained by finite difference methods, is often problem-dependent and may lead to premature convergence of the algorithm. This article demonstrates an alternative to L-BFGS, the limited memory covariance matrix adaptation evolution strategy (LM-CMA) proposed by Loshchilov (2014)...
2017: Evolutionary Computation
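The finite-difference estimation whose accuracy the abstract flags can be sketched as follows: a central-difference gradient whose step size h trades truncation error (h too large) against floating-point round-off (h too small), which is exactly why the accuracy is problem-dependent.

```python
def fd_gradient(f, x, h=1e-6):
    """Central finite-difference estimate of the gradient of f at x.
    Costs 2*len(x) function evaluations per gradient."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad
```

A gradient-free method such as LM-CMA avoids this estimation entirely by adapting its search distribution from ranked samples alone.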
Muhammad Iqbal, Will N Browne, Mengjie Zhang
A main research direction in the field of evolutionary machine learning is to develop a scalable classifier system to solve high-dimensional problems. Recently, work has begun on autonomously reusing learned building blocks of knowledge to scale from low-dimensional problems to high-dimensional ones. An XCS-based classifier system, known as XCSCFC, has been shown to be scalable, through the addition of expression-tree-like code fragments, to a limit beyond standard learning classifier systems. XCSCFC is especially beneficial if the target problem can be divided into a hierarchy of subproblems, each of which is solvable in a bottom-up fashion...
2017: Evolutionary Computation
Antoine S Dymond, Schalk Kok, P Stephan Heyns
Control parameter studies assist practitioners in selecting optimization algorithm parameter values that are appropriate for the problem at hand. Parameter values are well suited to a problem if they result in a search that is effective given that problem's objective function(s), constraints, and termination criteria. Given these considerations, a many-objective tuning algorithm named MOTA is presented. MOTA is specialized for tuning a stochastic optimization algorithm according to multiple performance measures, each over a range of objective function evaluation budgets...
2017: Evolutionary Computation