Dreaming of Metaheuristics

Thoughts about metaheuristics for hard optimization


Monday 24 December 2007

Blog moving

Due to some problems with my web hosting (unmanaged spam, arbitrarily deleted tables, etc.), this blog is moving to the following URL: http://metah.nojhan.net

The new RSS feed is located at: http://metah.nojhan.net/feed/rss2

Friday 12 October 2007

Classification of metaheuristics

I eventually found some time to try a graphical representation of how metaheuristics could be classified.

Here is a slideshow version, which shows each class independently:

And a static image version: Graphical classification of metaheuristics

Note that some metaheuristics are not completely contained in certain classes; this indicates that the method may or may not be considered part of the class, depending on your point of view.

I have reported the following metaheuristics:

  • genetic programming, genetic algorithms, differential evolution,
  • evolution strategies,
  • estimation of distribution algorithms,
  • particle swarm optimization,
  • ant colony optimization,
  • simulated annealing,
  • tabu search,
  • GRASP,
  • variable neighborhood search,
  • iterated, stochastic and guided local search.

And the following classes:

  • metaheuristics vs local search,
  • population metaheuristics vs trajectory-based ones,
  • evolutionary computation or not,
  • nature-inspired methods or not,
  • dynamic objective functions vs static ones,
  • memory-based algorithms vs memory-less ones,
  • implicit, explicit or direct metaheuristics.

I proposed the last class myself, so it may not be well known. You will find more information about it in the following paper: Adaptive Learning Search, a new tool to help comprehending metaheuristics, J. Dreo, J.-P. Aumasson, W. Tfaili, P. Siarry, International Journal on Artificial Intelligence Tools, Vol. 16, No. 3, June 2007.

I didn't place a stochastic category, as it seems a bit difficult to represent graphically. Indeed, a lot of methods can be "stochasticized" or "derandomized" in several ways.

There are surely some omissions or errors; feel free to give your point of view with a trackback, an email, or by modifying the SVG source version (comments are disabled due to spam that I haven't had time to fight properly).

Tuesday 4 September 2007

Network of books about metaheuristics

Playing with Touchgraph, I have tried searching Amazon for books about metaheuristics. The results are interesting, as one can see in the following graphs:

This one shows only the books themselves. Note the many unconnected books on the bottom-left side and the highly connected books about evolutionary computation. I also find the machine-learning topic at the top and the connections with computer programming on the right interesting. It surprises me that my "Metaheuristics for Hard Optimization" book is the only one connected with such topics... maybe because of the case studies.

Check also the network of authors and subjects:

The graphic for authors and publishers (Springer is the biggest, as one can expect):

And finally, all the items on one graph:

Friday 27 July 2007

Hybridization: estimation of distribution as a meta-model filter generator for metaheuristics?

An interesting idea is to use a meta-model (an a priori representation of the problem) as a filter to bias the sample produced by a metaheuristic. This approach seems especially promising for engineering problems, where computing the objective function is very expensive.

One simple form of meta-model is a probability density function approximating the shape of the objective function. This PDF can thus be used to filter out bad points before evaluation.

Why not, then, directly use an EDA to generate the sample? Because one can imagine that the problem shape is not well known, and that using a complex PDF is impossible (too expensive to compute, for example). In that case, using a classical indirect metaheuristic (say, an evolutionary algorithm) should be preferable (computationally inexpensive) for the sample generation. If one knows a good approximation to use for the distribution of the EDA (not too computationally expensive), one can imagine getting the best of both worlds.

An example could be a problem with real variables: using an EDA with a multivariate normal distribution is computationally expensive (mainly due to the estimation of the covariance), and using a mixture of Gaussian kernels makes it difficult to have an a priori on the problem. Thus, why not use an indirect metaheuristic to handle the sample generation, and use a meta-model whose parameters are estimated from the previous sample, according to a chosen distribution?
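To make the idea concrete, here is a minimal sketch (my own illustration, not from any specific paper, with a cheap toy function standing in for the expensive objective): an evolutionary loop generates candidates cheaply, and a single Gaussian fitted on the previously evaluated points filters out the least promising ones before evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_objective(x):
    """Stand-in for a costly engineering objective (here a cheap sphere)."""
    return float(np.sum(x ** 2))

def metamodel_filter(candidates, mean, cov, n_keep):
    """Keep the n_keep candidates that look most promising under a
    Gaussian meta-model fitted on the previously evaluated sample."""
    inv_cov = np.linalg.inv(cov + 1e-9 * np.eye(len(mean)))
    # Mahalanobis distance to the estimated promising region:
    # smaller distance = higher estimated density.
    dist = [float((c - mean) @ inv_cov @ (c - mean)) for c in candidates]
    order = np.argsort(dist)
    return [candidates[i] for i in order[:n_keep]]

def ea_with_metamodel(dim=5, pop_size=20, generations=50):
    pop = [rng.uniform(-5, 5, dim) for _ in range(pop_size)]
    fitness = [expensive_objective(x) for x in pop]
    for _ in range(generations):
        # Fit the meta-model (a single Gaussian) on the best half of the sample.
        best = [pop[i] for i in np.argsort(fitness)[: pop_size // 2]]
        mean = np.mean(best, axis=0)
        cov = np.cov(np.array(best).T)
        # Cheap variation operators produce many candidates...
        candidates = [b + rng.normal(0.0, 0.5, dim) for b in best for _ in range(4)]
        # ...and the meta-model filters them before the expensive evaluation.
        pop = metamodel_filter(candidates, mean, cov, n_keep=pop_size)
        fitness = [expensive_objective(x) for x in pop]
    i = int(np.argmin(fitness))
    return pop[i], fitness[i]

best_x, best_f = ea_with_metamodel()
print(best_f)
```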

One more hybridization to try...

Thursday 5 July 2007

Error metrics

Many metrics are used to assess the quality of the approximation found by metaheuristics. Two of them are used really often: the distance to the true optimum in position, and the distance in value.

Unfortunately, the objective function's shape can vary a lot in real-world problems, making these metrics difficult to interpret. For example, if the optimum lies in a very deep valley (in value), a solution close to it in position does not necessarily mean that the algorithm has learned the shape of the valley well. Conversely, a solution close to the optimum in value does not necessarily lie in the same valley.

One metric that can counter these drawbacks is a distance taking into account the parameters of the problem as well as the value dimension.
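For illustration, here is a small sketch (my own, with a hypothetical weight parameter) of the three metrics: distance in position, distance in value, and a combined distance in the joint (parameters, value) space.

```python
import numpy as np

def position_error(x, x_opt):
    """Euclidean distance to the optimum in the decision space only."""
    return float(np.linalg.norm(np.asarray(x) - np.asarray(x_opt)))

def value_error(fx, f_opt):
    """Distance to the optimum in objective value only."""
    return abs(fx - f_opt)

def combined_error(x, fx, x_opt, f_opt, w=1.0):
    """Distance in the joint (parameters, value) space; the weight w
    balancing the two dimensions is a hypothetical, problem-dependent choice."""
    p = np.append(np.asarray(x, dtype=float), w * fx)
    p_opt = np.append(np.asarray(x_opt, dtype=float), w * f_opt)
    return float(np.linalg.norm(p - p_opt))

# A solution close in value but far in position (e.g. in another valley):
x_opt, f_opt = np.zeros(2), 0.0
x, fx = np.array([3.0, -4.0]), 0.1
print(position_error(x, x_opt), value_error(fx, f_opt), combined_error(x, fx, x_opt, f_opt))
```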

Anyway, the question of which type of distance to use depends on the problem.

Wednesday 28 March 2007

Random draw in a sphere

When adapting combinatorial metaheuristics to continuous problems, one sometimes uses a sphere as an approximation of the "neighbourhood". The idea is to draw the neighbours around a solution, for example in order to apply a simple mutation in a genetic algorithm.

Sometimes, one chooses to use a uniform law, but how does one draw random vectors uniformly in a hyper-sphere?

The first idea that comes to mind is to use a polar coordinate system and draw the radius r and the angles a1...aN with a uniform law. Then, one converts the coordinates to the rectangular system, x1...xN.

The result is interesting for metaheuristic design, but it is not a uniform distribution:

The correct method is to draw each xi according to: xi = r^(1/N) · ai / √(∑j aj²)
(in LaTeX: $x_{i}=\frac{r^{1/N}\cdot a_{i}}{\sqrt{\sum_{j=1}^{N}a_{j}^{2}}}$)

with r uniformly drawn in U(0,1) and each ai following a normal law N(0,1).
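Here is a minimal sketch of this method (my own illustration, using NumPy):

```python
import numpy as np

rng = np.random.default_rng()

def uniform_in_ball(n_dim, center=None, radius=1.0):
    """Draw one point uniformly inside an N-dimensional ball.

    Direction: a Gaussian vector a ~ N(0, 1) normalized to unit length.
    Radius: r ~ U(0, 1) raised to the power 1/N, so that the volume,
    not the radius, is uniformly distributed."""
    a = rng.normal(0.0, 1.0, n_dim)
    r = rng.uniform(0.0, 1.0) ** (1.0 / n_dim)
    x = radius * r * a / np.linalg.norm(a)
    if center is not None:
        x = np.asarray(center, dtype=float) + x
    return x

# e.g. draw a mutated neighbour of a solution within a radius of 0.5:
neighbour = uniform_in_ball(10, center=np.zeros(10), radius=0.5)
```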

The result is then a true uniform distribution:

Credit goes to Maurice Clerc for detecting and solving the problem.

Sunday 7 January 2007

Evolving Objects 1.0 is out

The EO framework has just reached version 1.0. This is one of the most interesting libraries for metaheuristics.

It is a templatized C++ framework with a component-based architecture. EO is focused on "evolutionary computing" (a synonym of metaheuristics, imho) and can be used for any population-based metaheuristic. There are versions for local search, multi-objective optimization and parallel architectures... a really good piece of software :-)

Thursday 21 December 2006

Random and learning

The Machine Learning (Theory) blog, by John Langford, has a very interesting post on "Explicit Randomization in Learning algorithms".

The post and its comments talk about machine learning, but largely apply to metaheuristics as well. The page lists several reasons for using randomization, some of which are of special interest for metaheuristics:

  1. symmetry breaking as a way to make decisions, which is of great importance for metaheuristics, which must learn and choose where the "promising regions" are;
  2. overfit avoidance, which is related to the intensification/diversification balance problem;
  3. adversary defeating and bias suppression, which can be interpreted as trying to design a true metaheuristic (i.e. one that can be applied to several problems without major changes).

Of course, it should be possible to design a completely deterministic algorithm that takes decisions, achieves a correct intensification/diversification balance and can tackle all problems... even if this forces one to integrate the problems themselves into the algorithm. The drawback is that it would be computationally intractable.

In fact, metaheuristics (and, as far as I understand, machine-learning algorithms) are located somewhere between random search algorithms and deterministic ones. The compromise between these two tendencies depends on the problem and on the available computational effort.

Wednesday 20 December 2006

Metaheuristics and experimental research

Springer has just published a book on "Experimental Research in Evolutionary Computation", written by Thomas Bartz-Beielstein.

Thomas Bartz-Beielstein works on the statistical analysis of the behaviour of metaheuristics (see his tutorials at GECCO and CEC), and the publication of his book is a really great thing. I haven't read it yet, but the table of contents seems really promising. There is a true need for such work in the metaheuristics community, and in stochastic optimization in general.

A friend told me that the lack of experimental culture in the computer science community was a form of consensus, perhaps because the theoretical side of mathematics was seen as the "only way to do true science". This is a real problem when you deal with stochastic algorithms applied to real-world problems. Despite the fact that several papers called early on for more rigorous experimental studies of metaheuristics (E.D. Taillard wrote papers on this problem several years ago, for example), the community does not seem to react quickly.

Yet things are changing: after the series of CEC special sessions on benchmarks for metaheuristics, there are more and more papers on how to test stochastic optimization algorithms and present the results. I think this book comes at the right time... the next step will be to promote the dissemination of the result data (and code!), in an open format, along with the papers.

Tuesday 19 December 2006

Metaheuristics and machine-learning

Metaheuristics and machine-learning algorithms share a large number of characteristics, like stochastic processes, manipulation of probability density functions, etc.

One of the interesting evolutions of metaheuristics research in recent years is the increasing bridge-building with machine learning. I see at least two interesting pathways: the use of metaheuristics in machine learning and the use of machine learning in metaheuristics.

The first point is not really new: machine learning heavily uses optimization, and it was natural to try stochastic algorithms where local search or exact algorithms failed. Nevertheless, there is now enough literature to organize special sessions in symposia. For 2007, there will be a special session on Genetics-Based Machine Learning at CEC, and a track on Genetics-Based Machine Learning and Learning Classifier Systems at GECCO. These events are centered around "genetic" algorithms (see the posts on the IlliGAL blog: 1, 2), despite the fact that several papers use other metaheuristics, like simulated annealing; but this is a common drawback and does not affect the interest of the subject.

The second point is less exploited, but I find it of great interest. A simple example of what can be done with machine learning inside a metaheuristic is given by estimation of distribution algorithms. In these metaheuristics, a probability density function is used to explicitly build a new sample of the objective function (a "population", in evolutionary computation terminology) at each iteration. It is then crucial to build a probability density function that is related to the structure of the objective function (the "fitness landscape"). Here, it should be really interesting to build the model of the pdf itself from a selected sample, using a machine-learning algorithm. There are some interesting papers about that.
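For readers unfamiliar with EDAs, here is a minimal sketch of the loop described above (my own illustration of a plain Gaussian EDA, not any specific published algorithm): select the best points, re-estimate the distribution from them, and sample the next population from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def rastrigin(x):
    """Example objective function (the 'fitness landscape')."""
    return 10 * len(x) + float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def gaussian_eda(dim=5, pop_size=100, n_selected=30, iterations=100):
    # Initial model of the pdf: a broad multivariate normal.
    mean, cov = np.zeros(dim), 10.0 * np.eye(dim)
    for _ in range(iterations):
        # Explicitly build a new sample (a "population") from the model.
        pop = rng.multivariate_normal(mean, cov, size=pop_size)
        fitness = np.array([rastrigin(x) for x in pop])
        # Select the best individuals...
        selected = pop[np.argsort(fitness)[:n_selected]]
        # ...and re-estimate the model from them: this is the learning step,
        # which could be replaced by any density estimator from machine
        # learning (mixture models, kernel estimators, etc.).
        mean = selected.mean(axis=0)
        cov = np.cov(selected.T) + 1e-6 * np.eye(dim)
    return mean

print(gaussian_eda())
```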

If you mix these approaches with the problem of estimating a Boltzmann distribution (the basis of simulated annealing), you should have an awesome research field...

Saturday 28 October 2006

Frameworks for metaheuristics

Note that the descriptions are taken from the projects' web sites.

As one can see, most of these packages are designed for evolutionary algorithms, but I recommend trying out some of the generic frameworks, because "genetic" algorithms are not always the best choice for solving an optimization problem, despite how widespread they are.

Here are the frameworks I would recommend. These frameworks are free software: you can use them, look at the code, modify it and redistribute it (precious qualities for frameworks).

I would also recommend C or C++, which permit implementing fast programs while using object-oriented programming. C++ compilers are also available for a large choice of platforms (with a special distinction for GCC, which is free software). A fast program is crucial for testing algorithms on real problems, and using a well-known language is a good idea.

The main idea behind the design of each framework is specified as one of the following keywords:

  • template: designing a new algorithm consists in extending a base class (see the sketch after this list). Perhaps the simplest object model to understand, but it can be difficult to re-use existing code.
  • component: designing a new algorithm consists in selecting its components from the available operators. This makes it easy to implement algorithms, but it can be quite difficult to understand the underlying model, and hard to add a new algorithmic paradigm (generally used for evolutionary algorithms).
  • function: designing a new algorithm consists in using the framework's primitives. Simple to understand, but one must learn the primitives and code a lot of things by hand.
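As a rough illustration of the "template" style (my own sketch, written in Python for brevity rather than C++, and not taken from any of the frameworks listed below): the framework fixes the generic loop in a base class, and a new algorithm is a subclass that fills in the steps.

```python
import random
from abc import ABC, abstractmethod

class Metaheuristic(ABC):
    """Base class provided by a 'template'-style framework:
    the generic loop is fixed, subclasses fill in the steps."""

    def __init__(self, objective, iterations=1000):
        self.objective = objective
        self.iterations = iterations

    @abstractmethod
    def initial_solution(self): ...

    @abstractmethod
    def neighbour(self, solution): ...

    def run(self):
        best = self.initial_solution()
        best_value = self.objective(best)
        for _ in range(self.iterations):
            candidate = self.neighbour(best)
            value = self.objective(candidate)
            if value < best_value:
                best, best_value = candidate, value
        return best, best_value

class GaussianHillClimber(Metaheuristic):
    """A new algorithm = a new subclass overriding the abstract steps."""

    def initial_solution(self):
        return [random.uniform(-5, 5) for _ in range(10)]

    def neighbour(self, solution):
        return [x + random.gauss(0, 0.1) for x in solution]

sphere = lambda x: sum(v * v for v in x)
print(GaussianHillClimber(sphere).run()[1])
```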

Favorites

Here is my list:

  • Open Metaheuristics: a library aimed at the conception of metaheuristics (i.e. genetic/evolutionary algorithms, tabu search, simulated annealing, ant colony algorithms, etc.). One of the main goals of oMetah is to permit rigorous empirical tests of metaheuristics, through a statistical approach. (C++, LGPL, template)
  • *EO: a set of paradigm-free Evolutionary Computation libraries dedicated to the flexible design of EAs through evolving objects superseding the most common dialects (Genetic Algorithms, Evolution Strategies, Evolutionary Programming and Genetic Programming). (C++, GPL, component)
  • Sferes: a framework that gathers an evolution framework with a simulation framework. It is dedicated to experiments involving the design of artificial agents through artificial evolutionary algorithms, like Genetic Algorithms or Evolution Strategies. (C++, CeCILL, component)
  • openBEAGLE: Evolutionary Computation (EC) framework. (C++, LGPL, component).
  • PISA: Platform and Programming Language Independent Interface for Search Algorithms. PISA is mainly dedicated to multi-objective search, where the optimization problem is characterized by a set of conflicting goals and not just one criterion that needs to be optimized. (C, BSD, function)
  • GAlib: defines classes for using genetic algorithms to do optimization in any C++ program using any representation and genetic operators. The distribution includes extensive documentation and many examples. (C++, MIT, component)
  • MOMHLib++: a library of C++ classes that implements a number of multiple objective metaheuristics. It has been developed on the basis of former implementations of Pareto simulated annealing (PSA) and multiple objective genetic local search (MOGLS). (C++, GPL, template).
  • ECJ Java Evolutionary Computation Research System. (Java, AFL, component)

Other

These frameworks are not the ones I would recommend, but they have some properties that could be interesting:

If you want to find even more frameworks, try searching for "genetic" on SourceForge.

Many of these projects are dead; be careful if you need a maintained framework. Take a close look at the project activity: generally, the number of developers and file releases gives you a good idea of the framework's vitality. Also check whether it is a student's project for a training course or whether it is made by professional researchers.

Sunday 10 September 2006

Confirmation of the biological foundations of particle swarm optimization

Particle swarm optimization (PSO) is based on biological (and theoretical physics) work concerning self-organization in animal groups. Up to now, the theory explained that animals must adjust their direction in order to form a group. PSO uses this concept to build a set of vectors that explore the search space of an optimization problem while converging on an optimum.
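For reference, here is a minimal sketch of a standard PSO update (my own illustration, with common but arbitrary coefficient values), showing how each particle adjusts its direction toward its own best position and the best position of the group.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, dim=2, n_particles=30, iterations=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros((n_particles, dim))                  # velocities
    p_best = x.copy()                                 # each particle's best position
    p_best_val = np.array([objective(p) for p in x])
    g_best = p_best[np.argmin(p_best_val)].copy()     # best position of the group
    for _ in range(iterations):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Each particle adjusts its direction toward its own memory (p_best)
        # and toward the group's best known position (g_best).
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = x + v
        values = np.array([objective(p) for p in x])
        improved = values < p_best_val
        p_best[improved] = x[improved]
        p_best_val[improved] = values[improved]
        g_best = p_best[np.argmin(p_best_val)].copy()
    return g_best

print(pso(lambda p: float(np.sum(p ** 2))))
```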

One key prediction of the theory is a transition between the recruitment of individuals and collective motion. This transition "from disorder to order" has been demonstrated in situ by biologists studying locusts. They filmed, over 8 hours, groups of 5 to 120 desert locusts in a circular arena and analysed the motion data. The study shows that, at low density (under 25 individuals/m²), the animals move independently. Between 25 and 60 locusts/m², they form collective groups whose direction can vary abruptly. Beyond 75 locusts/m², the coordinated marching is homogeneous.

While this should not change the use of PSO, which is a simplified model, it is always interesting to consider work on this transition between order and chaos in self-organized systems. Indeed, this transition can also occur in metaheuristics, and is perhaps interesting for further research, for example in dynamic optimization.

From Disorder to Order in Marching Locusts, Buhl et al., Science, Vol. 312, no. 5778, pp. 1402-1406, 2 June 2006.

Friday 1 September 2006

Finding references about metaheuristics (and computer science)

The Internet is rapidly becoming the first place to search for scientific papers, but the number of places gathering resources is getting really high. Here is a list of web sites with free access. These are places where you can find material about metaheuristics; I have not included all the available databases (nor journal web pages), despite the fact that metaheuristics are applied to a wide range of fields.

  • Online collective bibliography managers are really useful for picking references when surfing journal sites: they let you automatically gather the reference information, tag it, share your list with others and export it to your local reference manager.
  • digital libraries or paper databases:
    • CiteSeer: search for citations and papers, shows the citations inside each paper, permits corrections on items; really interesting for computer science.
    • ScienceDirect: database, requires registration, offers a watchlist based on email alerts.
    • arXiv: an e-print service, well-formatted resources, RSS syndication.
    • Springer Links: books & papers database, RSS syndication, watchlist for registered users.
    • ACM portal: digital library, no syndication or free watchlist for registered users.
    • IEEE Xplore: database, no syndication, no watchlist, email alerts at field level only.
    • The Collection of Computer Science Bibliographies: database, RSS syndication.
    • PubMed: database, email alerts for registered users.
    • Blackwell Synergy: database, RSS/Atom syndication, email alerts.
    • Optimization Online: e-print service about optimization, monthly email alerts.
    • Scitation: database, no syndication, no free email alerts.
    • Wiley InterScience: database, no syndication, email alerts for registered users.
  • general-purpose search engines:

Wednesday 30 August 2006

Metaheuristics & benchmarks at CEC

The IEEE Congress on Evolutionary Computation (CEC) is a well-known event that takes place every year.

Since 2005, there has been an interesting group of special sessions, organized by Ponnuthurai Nagaratnam Suganthan:

What is really interesting in these sessions is the systematic presence of an implemented, general-purpose benchmark, built after discussion between researchers.

This is an extremely necessary practice, which is, unfortunately, not widespread. Indeed, it is the first step toward a rigorous performance assessment of metaheuristics (the second being a true statistical approach, and the third a careful data presentation).

Saturday 26 August 2006

Ant Colony Optimization Authorship

One of the key policies of scientists is to always refer to the people who did the work first (as pointed out by the "hard blogging scientists" manifesto). This is because researchers want to share free science with everybody (at least at little cost), and recognition is a form of remuneration (in a similar way, Eric S. Raymond explains such a mechanism for hackers in his essay "The Cathedral and the Bazaar").

Recently, a (rather small, but interesting) controversy has been growing about the authorship of the Ant Colony Optimization (ACO) idea.

The "orthodox" seminal paper about ACO is a technical report, written by Dorigo, Maniezzo and Colorni, submitted in 1991. As technical report are not a rigourous publication, perhaps a more pertinent citation would be the corresponding paper published in the proceedings of the First European Conference on Artificial Life, in 1992, or the Dorigo's PhD thesis, completed in 1991. Dorigo et al. refers to a work by Denebourg, Pasteels and Verhaeghe ("Probabilistic Behaviour in Ants: a Strategy of Errors?", published in 1983), as the key paper that give them the ACO idea.

Nowadays, M. Dorigo has become the reference researcher on ACO; for instance, the ANTS conference is held every two years in his lab, in Brussels.

Where is the controversy?

It seems that there is an older paper about using ant colony behaviour to design algorithms. This paper, written by Manderick and Moyson ("The collective behavior of ants: An example of self-organization in massive parallelism") and published in 1988 in the proceedings of the AAAI Spring Symposium on Parallel Models of Intelligence, also refers to the work by Deneubourg et al., but is not cited by Dorigo et al.

Well, this does not seem to be a big problem; everyone can miss a paper in the huge amount of ideas published every day.

I first read about the question of the authorship of ACO in the "habilitation" thesis of Pierre Collet (published in 2004), and it recently rose up again with the modification of the ACO article on Wikipedia on the 11th of August by an anonymous editor (with the nick "MoyMan"), who added the Manderick paper to the references. On the 15th of August, Daniel Angus posted an email on the ACO mailing-list (and a post on his laboratory blog), asking "why this paper has not received more attention from the field?". The same day, a contributor called "MDorigo" deleted the reference on Wikipedia, in order to give M. Dorigo the authorship. But a few hours later, the first addition was reintroduced.

In my opinion, M. Dorigo just saw the addition as a kind of vandalism trying to introduce a fake reference, as sometimes occurs on Wikipedia, and reverted it without verifying the validity of the source. Note that M. Dorigo did not enter an edit war after the revert, which proves his good faith.

The funniest (well, I should say "saddest") thing about this story is that people are already talking about it around me... Why such gossiping? I think this is because M. Dorigo has a strong reputation. I personally think that such a reputation is unavoidable when you begin to be known for your work. And M. Dorigo is certainly known for his work, which is very good and acknowledged by his peers.

Finally, let us say that the controversy doesn't have any basis, as M. Dorigo pointed out in a message on the ACO mailing-list replying to D. Angus's question. Firstly, while the Moyson and Manderick paper is about using ant colony behaviour in the AI field (about using the theory of complex dynamical systems to design massively parallel systems, to be exact [1]), it is not about using it to tackle optimization problems. Secondly, M. Dorigo cites the papers that gave him the key idea, and the Moyson & Manderick one is not among them, even if it is interesting. And finally, Bernard Manderick read the paper before its submission, and didn't feel it necessary to tell M. Dorigo that his paper should be cited.

This silly story (when people gossip without having all the information, it is always silly) is about an insignificant event. Only ten people in the world will feel concerned. But I find it interesting for the following reasons:

  • it shows that scientists can have an ego, and are in the end as subjective as any "human": when a researcher gains popularity, he also gains criticism, despite objectivity;
  • it illustrates how decisive a reputation is when judging others' work;
  • and how much transparency is necessary;
  • finally, it shows that, despite criticisms, Wikipedia can be a place for researchers to exchange information, even if in a rather rough manner.

Notes

[1] Here is the abstract of the Moyson & Manderick paper:
Massive parallelism needs a new methodology. The theory of complex dynamical systems provides such a methodology in those cases where massively parallel systems can be modelled as complex dynamical systems.
In particular for AI, complex dynamical systems are interesting in another respect: they have the ability to self-organize. Moreover, this self-organizational behavior can be adaptive. Important is that the emergence of self-organization is not programmed or a consequence of external instructions but results from local interactions at the microscopic level and the interplay between the system and its environment.
The new methodology and the emergence of self-organization are illustrated by the adaptive response of ant colonies to their environment. This behavior is simulated by an intrinsically parallel algorithm. Also, a mathematical model of the behavior is proposed and used to fix the parameters in the simulation. For these parameter values, self-organization is observed in the ant colony.