Abstract:
Design optimization is a field in which mathematical algorithms are used to improve designs. Analyzing designs with computational techniques often requires significant computing resources, and for such problems an efficient optimization method is needed. Efficient Global Optimization (EGO), first proposed by Jones et al. [25], is an optimization method that aims to use as few function evaluations as possible when optimizing a design problem. In this study, we use a multi-objective strategy to parallelize EGO.
EGO belongs to a class of algorithms known as surrogate optimization methods. A set of initial designs is analyzed, and a response surface is then fitted to the evaluated designs. In each iteration, EGO selects the set of design variables for which the next analysis will be performed. It makes this decision based on two opposing criteria: EGO will either sample where the predicted objective function value is low, an exploitation approach, or where the prediction uncertainty is high, an exploration approach.
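For reference, the Expected Improvement (EI) criterion of Jones et al. [25] combines these two aims in a single measure. In its standard form for minimization, with $\hat{y}(\mathbf{x})$ the surrogate prediction at a design $\mathbf{x}$, $s(\mathbf{x})$ the corresponding standard error, and $f_{\min}$ the best objective value sampled so far,

$$
\mathrm{EI}(\mathbf{x}) = \bigl(f_{\min} - \hat{y}(\mathbf{x})\bigr)\,\Phi(z) + s(\mathbf{x})\,\phi(z),
\qquad z = \frac{f_{\min} - \hat{y}(\mathbf{x})}{s(\mathbf{x})},
$$

where $\Phi$ and $\phi$ denote the standard normal distribution and density functions, and $\mathrm{EI}(\mathbf{x}) = 0$ when $s(\mathbf{x}) = 0$.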
Classical EGO selects only one design per iteration. The selected design vector is the result of either exploitation or exploration, as determined by the maximum of the Expected Improvement (EI) measure. However, the modern computing environment is capable of running multiple analyses in parallel. It would therefore be advantageous if EGO could select multiple designs to evaluate in each iteration.
In this research, we treat EGO's inherent selection criteria, to either exploit or explore, as a multi-objective optimization problem, since each criterion can be defined by a separate objective function. In general, multi-objective optimization problems do not have a single solution, but rather a set of solutions called the Pareto optimal set. In our proposed strategy, multiple designs from this Pareto optimal set are selected by EGO to be analyzed in the subsequent iteration. This proposed strategy is referred to as Simple Intuitive Multiobjective ParalLElization of Efficient Global Optimization (SIMPLE-EGO).
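As a rough sketch of this idea (illustrative only; the candidate set, the criteria scaling, and the rule of spreading picks along the front are assumptions made here and not necessarily the exact SIMPLE-EGO selection rule), a batch of designs can be drawn from the Pareto front of the two surrogate-based criteria, low predicted value versus high predicted uncertainty:

```python
# Illustrative sketch only: pick a batch of q designs from the Pareto front of
# (low predicted value, high predicted uncertainty). The even spacing of picks
# along the front is an assumption for illustration.
import numpy as np

def pareto_front(objs):
    """Indices of non-dominated rows when every column is to be minimized."""
    keep = []
    for i in range(objs.shape[0]):
        dominated = np.any(np.all(objs <= objs[i], axis=1) &
                           np.any(objs < objs[i], axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

def select_batch(candidates, mu, sigma, q=4):
    """Spread q picks along the front of (minimize mu, maximize sigma)."""
    front = pareto_front(np.column_stack([mu, -sigma]))
    front = front[np.argsort(mu[front])]              # exploit -> explore order
    picks = np.linspace(0, len(front) - 1, min(q, len(front))).astype(int)
    return candidates[front[picks]]

# Toy usage with random surrogate predictions over 200 candidate designs.
rng = np.random.default_rng(1)
candidates = rng.uniform(0.0, 1.0, size=(200, 2))     # candidate design vectors
mu = rng.normal(size=200)                             # surrogate predictions
sigma = rng.uniform(0.05, 1.0, size=200)              # surrogate standard errors
print(select_batch(candidates, mu, sigma, q=4))       # designs for one iteration
```

Points spread across the front mix exploitation and exploration and can be analyzed in parallel.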
We start our study by investigating the behaviour of classical EGO. During each iteration of EGO, a new design is selected to be evaluated. This is performed by finding the maximum of the EI function. Maximizing this function initially proved challenging. However, by exploiting information regarding the nature of the EI function, the maximization problem is simplified significantly and the robustness of finding the maximum is enhanced. More importantly, robustly solving this maximization problem dramatically improves the convergence behaviour once a local basin has been found.
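To illustrate this step (a minimal sketch under assumed choices: a scikit-learn Gaussian process surrogate, a Matern kernel, L-BFGS-B local searches from random starting points; not the exact procedure or the exploitation of EI structure used in this work), the EI maximization can be approached with multi-start local optimization of the negative EI:

```python
# Minimal sketch of multi-start EI maximization (assumed surrogate and
# optimizer choices; not the exact procedure of this work).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(x, gp, f_min):
    """Standard EI (for minimization) at a single design vector x."""
    mu, sigma = gp.predict(np.atleast_2d(x), return_std=True)
    mu, sigma = float(mu[0]), float(sigma[0])
    if sigma < 1e-12:                       # no uncertainty -> no expected gain
        return 0.0
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def maximize_ei(gp, f_min, bounds, n_starts=20, seed=0):
    """Keep the best of several local searches on the negative EI."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best_x, best_ei = None, -np.inf
    for x0 in rng.uniform(lo, hi, size=(n_starts, len(bounds))):
        res = minimize(lambda x: -expected_improvement(x, gp, f_min),
                       x0, bounds=bounds, method="L-BFGS-B")
        if -res.fun > best_ei:
            best_x, best_ei = res.x, -res.fun
    return best_x, best_ei

# Toy usage: fit a surrogate to four 1-D samples and pick the next design.
X = np.array([[0.1], [0.4], [0.6], [0.9]])
y = np.sin(6.0 * X[:, 0])                   # placeholder objective values
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
x_next, ei_value = maximize_ei(gp, f_min=y.min(), bounds=[(0.0, 1.0)])
print(x_next, ei_value)
```

Several starting points are used because the EI surface is multi-modal and nearly flat far from the existing samples, so a single local search can easily miss its maximum.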
We compare our SIMPLE-EGO method to a multi-objective optimization algorithm
(EGO-MO) published by Feng et al. [16]. We first investigate the behaviour of EGO,
EGO-MO, and SIMPLE-EGO. Thereafter, the convergence performance of these methods is quantified.
As expected, the parallelization in both SIMPLE-EGO and EGO-MO leads to faster convergence on a range of test functions compared to classical EGO, which samples only one point per iteration. The convergence characteristics of SIMPLE-EGO and EGO-MO are also markedly different. We conclude with a discussion on the advantages and disadvantages of the investigated methods.