Wednesday, 11 October 2017

multi-objective optimisation I

There are lots of optimisation algorithms around: I’ve used simulated annealing and evolutionary algorithms, to name but two.  However, I’ve only ever used them to optimise a single value.  Real problems often have multiple objectives to satisfy simultaneously: time, price, and quality form a traditional triple.  These objectives are often in conflict, so it is not possible to optimise them all simultaneously: there are tradeoffs.  “Pick any two of three” is the classic tradeoff for time, price, and quality; more nuanced tradeoffs are available.

One common technique is simply to add the various objective values together, with some weight for each, to reduce the problem to a single optimisation.  But which weights?  Different weights imply different tradeoffs.
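As a concrete sketch of this weighted-sum approach (the objective values and weights here are purely illustrative):
def weighted_sum(objectives, weights):
  # combine n objective values into a single value using fixed weights
  return sum(w * f for w, f in zip(weights, objectives))

# e.g. objectives (time, price, quality loss), all to be minimised
candidate = (3.5, 120.0, 0.2)
print(weighted_sum(candidate, (0.5, 0.3, 0.2)))   # one choice of weights ...
print(weighted_sum(candidate, (0.1, 0.1, 0.8)))   # ... a different tradeoff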

So the next step is a full multi-objective optimisation system.  To see what is going on, the crucial concept of a dominated solution needs to be understood.

Let’s assume that a candidate solution s has objective values (f1, f2, ..., fn), and that we want to minimise each of those objectives (the maximisation case is similar, mutatis mutandis).  Then solution s1 dominates solution s2 if every one of its objective values is less than the corresponding value for s2.  If, on the other hand, s2 is better in all its objectives than s1, then s2 dominates s1.  And when neither dominates the other, the solutions are non-dominated.  The figure below illustrates these concepts.

Candidate solutions plotted in a 2D objective space, where we wish to minimise both objectives.  Consider the central red solution.  It dominates all candidate solutions in the upper right quadrant, because both its objective values are better than those in that quadrant.  It is dominated by all candidate solutions in the lower left quadrant, because both its objective values are worse than those in that quadrant.  The other two quadrants contain non-dominated solutions: one objective is better, but the other is worse.  The large green dots along the lower edge represent solutions on the Pareto front: all the solutions that are not dominated by any other candidate.  The concepts generalise to higher dimensions when there are more than two objectives.
We can write a simple python function to calculate whether one solution dominates another, for an arbitrary number of objectives:
def dominates_min(a, b):   # for minimisation problems
  # a and b are nD tuples of the respective n objective values
  # a dominates b if a.x < b.x and a.y < b.y and ...
  return all([(ai < bi) for ai, bi in zip(a, b)])
How this works:  Let the tuples be (ax, ay, az) and (bx, by, bz).  Then zipping them together gives the pairs [(ax,bx), (ay,by), (az,bz)].  The list comprehension gives a list of booleans, True or False depending on whether the respective ai < bi or not.  Then the all function tests whether all the values are True.
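For example (with purely illustrative objective values):
print(dominates_min((1, 2, 3), (2, 3, 4)))   # True:  better in every objective
print(dominates_min((1, 2, 5), (2, 3, 4)))   # False: worse in the last objective
print(dominates_min((1, 2, 3), (1, 3, 4)))   # False: equal in the first objective, so not strictly better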
In a complete collection of candidate solutions, the ones not dominated by any other solution form the Pareto front.  These are the best possible solutions: no other candidate performs better in all objectives.  The front describes the possible tradeoffs: moving along it improves some objectives at the expense of others.
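For a small collection we can extract the front by brute force; here is a minimal sketch using dominates_min above (the sample fitness values are purely illustrative):
def pareto_front_bruteforce(fitnesses):
  # fitnesses = list of nD tuples of objective values, all to be minimised
  # keep each candidate that no other candidate dominates
  return [f for f in fitnesses
          if not any(dominates_min(g, f) for g in fitnesses if g is not f)]

sample = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front_bruteforce(sample))   # [(1, 5), (2, 3), (4, 1)]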

When we don’t have all candidate solutions to hand, but only a sample of them, we need an optimisation algorithm to find candidates close to the true Pareto front, and to use them to find even better ones.

But first, we need to calculate the points on the Pareto front of a given population, that is, all the candidate solutions that are not dominated by any other solutions (the green points in the figure above).  The obvious algorithm (for each candidate solution, test it against all other solutions for all objectives, as in the brute-force sketch above) is rather inefficient, being O(MN²), where M is the number of objectives and N is the number of solutions to test.  Deb et al (2000) give a more efficient algorithm:
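In outline, the single-front step is (a sketch using the same set notation that is echoed in the comments of the code below):
P' = {1}                         # initialise the front with the first solution
for each p in P, p not in P':
  P' = P' cup {p}                # provisionally add p to the front
  for each q in P', q neq p:
    if p dominates q:       P' = P' \ {q}
    else if q dominates p:  P' = P' \ {p}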


We can code this up in python as follows (with comments showing how it relates to the algorithm above):
def fast_nondominated_sort_single_front(pop):
  # pop = list of individuals
  #   individual of form { 'state': (s1,s2,s3,...), 'fitness':(f1,f2,f3,...) }
  # pfront = list of individuals on the calculated "pareto front"

  pfront = [ pop[0] ]      # P' = {1}
  for pi,p in enumerate(pop):  # for each p in P and p notin P'
    if pi == 0 : continue      # (so not the first element)

    pfront = pfront + [ p ]    # P' = P' cup {p}
    # make a copy to iterate over, because pfront is being modified; 
    #   don't need the last item, which is p  (makes q neq p below True)
    pfrontcopy = pfront[:-1]   

    pf = p['fitness']
    for q in pfrontcopy:         # for each q in P' and q neq p 
      qf = q['fitness']
      if dominates_min(pf,qf):     # if p dom q
        pfront.remove(q)           #   then P' = P' \ {q}
      elif dominates_min(qf,pf):   # else if q dom p
        del pfront[-1]             #   then P' = P' \ {p}
        break    # don't have to check any more qs as p already gone

  return pfront
Once we have the front, we could remove it from the population, and calculate the next front, and so on.  This is useful in an optimisation algorithm, to find less fit (not on the current best front) but still possibly good solutions, for use in building even better ones.

The multiple front calculation is as follows:
def fast_nondominated_sort(pop):
  # find a list of ever decreasing fronts
  # pop = list of individuals
  # pfrontlist = list of fronts = list of lists of individuals

  popcopy = pop[:]
  pfrontlist = []
  while popcopy:
    pfront = fast_nondominated_sort_single_front(popcopy)
    pfrontlist += [pfront]
    # remove the found pfront from the pop
    popcopy = [ ind for ind in popcopy if ind not in pfront ]  

  # check that no-one has been lost
  assert len(pop) == len([item for pfront in pfrontlist for item in pfront]) 

  return pfrontlist
Given some population of candidate solutions, we can plot out the relevant fronts so calculated (plotting code always seems much more complicated than the actual algorithms!):
import matplotlib.cm
import matplotlib.pyplot as plt
import numpy as np
def plot_fronts(pfrontlist):
  # pfrontlist = list of fronts; each front is a list of individuals

  fig = plt.figure() 
  ax1 = fig.add_subplot(111) 

  # one colour per front 
  colors = matplotlib.cm.rainbow(np.linspace(0, 1, len(pfrontlist))) 
  # iterate backwards, to draw better fronts on top of less good ones 
  for i in range(len(pfrontlist)-1, -1, -1) :  
    pfront = pfrontlist[i]

    # sort members of front in order of fitness; draw a line along the front
    data = sorted([ pf['fitness'] for pf in pfront ]) 
    xs = [ f[0] for f in data ]
    ys = [ f[1] for f in data ]
    ax1.plot(xs, ys, ':o', color=colors[i], mew=0)
  plt.show()
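To try it out, we can feed it a random population (a minimal driver sketch; the two conflicting objective functions here are arbitrary illustrative choices):
import random

random.seed(42)   # for repeatability
pop = []
for _ in range(100):
  x, y = random.random(), random.random()
  # two conflicting objectives to minimise (arbitrary illustrative choice)
  pop += [ { 'state': (x, y), 'fitness': (x*x + y*y, (x-1)**2 + (y-1)**2) } ]

plot_fronts(fast_nondominated_sort(pop))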
This gives a plot like:
A sequence of fronts, found by finding the Pareto front of a population of candidate solutions, removing it from the population, finding the front of the remaining candidates, and so on.


In a later post, I will go through the rest of the Deb et al (2000) algorithm, and show how they incorporate this front-finding into a full-blown multi-objective optimiser.

Reference

Deb, Kalyanmoy; Agrawal, Samir; Pratap, Amrit; Meyarivan, T. (2000) A Fast Elitist Non-dominated Sorting Genetic Algorithm for Multi-objective Optimization: NSGA-II. In Parallel Problem Solving from Nature, PPSN VI. LNCS 1917:849–858. Springer. doi:10.1007/3-540-45356-3_83
