Dreaming of Metaheuristics

Thoughts about metaheuristics for hard optimization


Friday 27 July 2007

Hybridization: estimation of distribution as a meta-model filter generator for metaheuristics?

An interesting idea is to use a meta-model (an a priori representation of the problem) as a filter to bias the sample produced by a metaheuristic. This approach seems especially promising for engineering problems, where computing the objective function is very expensive.

One simple form of meta-model is a probability density function (PDF) approximating the shape of the objective function. This PDF can then be used to filter out bad points before they are evaluated.
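As a minimal sketch of this filter step (all names here are hypothetical, and the density model is assumed to have been fitted beforehand on previously evaluated points), one could write in Python:

```python
import numpy as np
from scipy.stats import multivariate_normal

def filter_by_density(candidates, model, threshold):
    # Drop candidates whose estimated density under the meta-model is
    # below a threshold, so the expensive objective function is only
    # evaluated on points the model considers promising.
    density = model.pdf(candidates)
    return candidates[density >= threshold]

# Illustrative meta-model and candidate sample (arbitrary values).
model = multivariate_normal(mean=np.zeros(2), cov=np.eye(2))
candidates = np.random.default_rng(0).uniform(-3, 3, size=(100, 2))
promising = filter_by_density(candidates, model, threshold=0.01)
```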

Why not, then, directly use an estimation of distribution algorithm (EDA) to generate the sample? Because the shape of the problem may not be well known, and a sufficiently complex PDF may be out of reach (too expensive to compute, for example). In that case, a classical indirect metaheuristic (say, an evolutionary algorithm) is preferable for sample generation, as it is computationally inexpensive. But if one knows a good approximation to use as the distribution of an EDA (one that is not too computationally expensive), one can imagine combining the best of both worlds.

An example could be a problem with real variables: using an EDA with a multivariate normal distribution is computationally expensive (mainly due to the estimation of the covariance matrix), and using a mixture of Gaussian kernels makes it difficult to have an a priori on the problem. Thus, why not let an indirect metaheuristic handle the sample generation, and use a meta-model whose parameters are estimated from the previous sample, according to a chosen distribution?
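To make the idea concrete, here is a minimal Python sketch of such a hybrid. Everything in it, from the toy objective to the diagonal Gaussian model and the filtering rule, is an illustrative assumption rather than a prescribed design: an evolutionary loop proposes candidates by mutation, a diagonal normal model is estimated from the best part of the previous sample (sidestepping the full covariance estimation mentioned above), and its density filters candidates before the expensive evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_objective(x):
    # Stand-in for a costly engineering simulation; here a simple sphere.
    return np.sum(x ** 2)

def fit_diagonal_gaussian(sample):
    # Estimate mean and per-coordinate variance from the elite sample;
    # a diagonal model avoids the costly covariance estimation.
    mu = sample.mean(axis=0)
    var = sample.var(axis=0) + 1e-12  # guard against degenerate variance
    return mu, var

def log_density(points, mu, var):
    # Log-density of the diagonal Gaussian, up to an additive constant.
    return -0.5 * np.sum((points - mu) ** 2 / var + np.log(var), axis=1)

dim, pop_size, n_elite = 5, 40, 12
pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
fitness = np.array([expensive_objective(x) for x in pop])

for generation in range(50):
    # Meta-model: parameters estimated from the best part of the previous sample.
    elite = pop[np.argsort(fitness)[:n_elite]]
    mu, var = fit_diagonal_gaussian(elite)

    # Classical variation operator (Gaussian mutation) proposes candidates...
    parents = pop[rng.integers(0, pop_size, size=2 * pop_size)]
    candidates = parents + rng.normal(0.0, 0.3, size=parents.shape)

    # ...and the meta-model filters them *before* any expensive evaluation:
    # only the half judged most plausible by the PDF gets evaluated.
    keep = np.argsort(log_density(candidates, mu, var))[::-1][:pop_size]
    survivors = candidates[keep]
    survivor_fitness = np.array([expensive_objective(x) for x in survivors])

    # (mu + lambda)-style replacement keeps the overall best points.
    merged = np.vstack([pop, survivors])
    merged_fitness = np.concatenate([fitness, survivor_fitness])
    best = np.argsort(merged_fitness)[:pop_size]
    pop, fitness = merged[best], merged_fitness[best]

print("best value found:", fitness.min())
```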

One more hybridization to try...

Thursday 5 July 2007

Error metrics

Many metrics are used to assess the quality of the approximation found by metaheuristics. Two of them are used very often: the distance to the true optimum in position, and the distance to it in value.

Unfortunately, the shape of the objective function can vary a lot in real-world problems, making these metrics difficult to interpret. For example, if the optimum lies in a very deep valley (in value), a solution close to it in position does not necessarily mean that the algorithm has learned the shape of that valley well. Conversely, a solution close to the optimum in value does not necessarily lie in the same valley.

One metric that can counter these drawbacks is a distance taking into account the parameters of the problem as well as the value dimension.
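As an illustration, such a combined metric could be a Euclidean distance in the joint (position, value) space; the weight lam below is an arbitrary choice to be tuned per problem, not a standard:

```python
import numpy as np

def combined_distance(x, x_opt, f, lam=1.0):
    # Distance to the known optimum measured jointly in parameter
    # space and in the value dimension; lam balances the two scales.
    d_pos = np.linalg.norm(np.asarray(x) - np.asarray(x_opt))
    d_val = abs(f(x) - f(x_opt))
    return np.sqrt(d_pos ** 2 + (lam * d_val) ** 2)
```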

In any case, the choice of which distance to use depends on the problem.