Evolutionary Algorithms with Self-adjusting Asymmetric Mutation.
📝

By Amirhossein Rajabi, Carsten Witt, in Proc. of PPSN 2020.

Abstract

The Paper Story

Self-adjusting Asymmetric Mutation

  • Evolutionary Algorithms (EAs) are often considered as unbiased algorithms that are invariant with respect to different transformations of the underlying search space. However, if a certain amount of domain knowledge is available, the use of biased search operators in EAs becomes viable.
    • On bit strings, it might be known beforehand that zero-bits have a different interpretation than one-bits and that the total number of one-bits in high-quality solutions should lie within a certain interval.
    • In the minimum spanning tree (MST) problem on a graph with n vertices and m edges, where bit strings of length m encode edge selections, all valid solutions should have exactly n − 1 one-bits.
💡

In Self-adjusting Asymmetric Mutation, the mutation operator flips 0- and 1-bits of the current search point with different probabilities that are adjusted over time based on the number of successes observed with a given probability profile.

On OneMax, we proved an optimization time of Θ(n), a constant-factor speed-up of roughly 2 compared to the previous asymmetric (1+1) EA from [1].

On the general OneMax_a, where the target a contains both zeroes and ones, we proved the same asymptotic bound as for the standard (1+1) EA, so the self-adjusting asymmetry causes no slowdown.

Standard and Asymmetric Mutations

(Algorithm 1: the general (1+1) EA scheme with an exchangeable mutation operator)
  • Standard bit mutation: Flip each bit independently with probability 1/n. Algorithm 1 with this mutation is called the (1+1) EA.
  • Asymmetric mutation: Flip each b-bit with probability 1/(2|x|_b) for b ∈ {0, 1}, where |x|_b is defined as the number of b-bits in x. Algorithm 1 with the latter mutation is called the asymmetric (1+1) EA [1,2].
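
To make the two operators concrete, here is a minimal Python sketch (my own code, not taken from the paper; bit strings are represented as lists of 0/1 integers):

```python
import random


def standard_mutation(x):
    """Standard bit mutation: flip each bit independently with probability 1/n."""
    n = len(x)
    return [b ^ (random.random() < 1 / n) for b in x]


def asymmetric_mutation(x):
    """Asymmetric mutation [1,2]: flip each 0-bit with probability 1/(2*|x|_0)
    and each 1-bit with probability 1/(2*|x|_1), where |x|_b is the number of b-bits."""
    ones = sum(x)
    zeros = len(x) - ones
    p1 = 1 / (2 * ones) if ones else 0.0    # a bit value that does not occur is never flipped
    p0 = 1 / (2 * zeros) if zeros else 0.0
    return [b ^ (random.random() < (p1 if b else p0)) for b in x]


x = [random.randint(0, 1) for _ in range(20)]
print(standard_mutation(x))
print(asymmetric_mutation(x))               # flips on average 1/2 zero-bit and 1/2 one-bit
```

Both operators flip one bit per step in expectation; the asymmetric one simply splits this expectation evenly between zero- and one-bits, no matter how many of each the current search point contains.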

Algorithm

💡

We propose a rather conservative setting that flips each 0-bit with probability r_0/(2|x|_0) for a self-adjusted strength r_0 and each 1-bit with probability r_1/(2|x|_1) for a corresponding strength r_1, so that r_0 = r_1 = 1 recovers the asymmetric mutation from [1].

  • Initially, both strengths are equal, so the operator starts out as the plain asymmetric mutation from [1].
  • In an observation phase of length N, two candidate pairs of 0- and 1-strengths derived from the current ground pair are tried alternatingly: one pair shifts probability mass towards flipping 0-bits, the other towards flipping 1-bits.

  • The pair that leads to relatively more successes within the phase is used as the ground pair for the next phase; in case of a tie, a uniform random choice is made.
  • The strengths are confined to a bounded interval, so the degree of asymmetry cannot become arbitrarily extreme.
(pseudocode of the self-adjusting asymmetric (1+1) EA)
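
As a concrete illustration of the scheme above, here is a minimal Python sketch. It is not the paper's algorithm verbatim: the rule for deriving the two candidate pairs (shifting a fixed amount delta of strength between 0- and 1-bits), the interval [r_min, r_max], the defaults for N and delta, and the stopping criterion are all illustrative assumptions of mine; the paper specifies its own parameterization and constants.

```python
import random


def self_adjusting_asymmetric_ea(f, n, opt, N=50, delta=0.1, r_min=0.1, r_max=1.9, budget=10**6):
    """Sketch of a (1+1) EA with self-adjusting asymmetric mutation.

    A strength pair (r0, r1) controls the flip probabilities: each 0-bit is
    flipped with probability r0/(2*|x|_0) and each 1-bit with probability
    r1/(2*|x|_1).  In every observation phase of length N, two candidate
    pairs (one shifting strength towards 0-bits, one towards 1-bits) are
    tried alternatingly; the pair with more successes (strict improvements)
    becomes the ground pair of the next phase, ties broken uniformly.
    """

    def mutate(x, r0, r1):
        ones = sum(x)
        zeros = n - ones
        p1 = r1 / (2 * ones) if ones else 0.0
        p0 = r0 / (2 * zeros) if zeros else 0.0
        return [b ^ (random.random() < (p1 if b else p0)) for b in x]

    def clamp(r):                       # keep strengths inside [r_min, r_max]
        return max(r_min, min(r_max, r))

    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    r0, r1 = 1.0, 1.0                   # start from the plain asymmetric operator of [1]
    evaluations = 0

    while evaluations < budget and fx < opt:
        # two candidate pairs derived from the current ground pair (illustrative rule)
        candidates = [(clamp(r0 + delta), clamp(r1 - delta)),
                      (clamp(r0 - delta), clamp(r1 + delta))]
        successes = [0, 0]
        for t in range(N):
            c = t % 2                   # alternate between the two candidate pairs
            y = mutate(x, *candidates[c])
            fy = f(y)
            evaluations += 1
            if fy > fx:                 # a strict improvement counts as a success
                successes[c] += 1
            if fy >= fx:                # elitist acceptance of the (1+1) EA
                x, fx = y, fy
            if fx >= opt or evaluations >= budget:
                break
        # the relatively more successful pair becomes the new ground pair
        if successes[0] != successes[1]:
            best = 0 if successes[0] > successes[1] else 1
        else:
            best = random.randint(0, 1)
        r0, r1 = candidates[best]

    return x, evaluations
```

For OneMax, call e.g. `self_adjusting_asymmetric_ea(sum, n=1000, opt=1000)`; the returned pair is the final search point and the number of fitness evaluations used.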

The self-adjusting asymmetric (1+1) EA on OneMax

  • Fitness function OneMax(x) = |x|_1, the number of one-bits in x (to be maximized).
  • The self-adjusting asymmetric (1+1) EA outperforms the asymmetric (1+1) EA from [1] by a constant factor of roughly 2 on OneMax.

Theorem (informal): For an arbitrarily small constant ε > 0 and suitable choices of the observation phase length N and of the admissible range of strengths, the expected runtime of the self-adjusting asymmetric (1+1) EA on OneMax is Θ(n), and its leading constant is smaller by a factor of at least 2 − ε than that of the asymmetric (1+1) EA from [1].

📊

Experimentally, the self-adjusting asymmetric (1+1) EA was compared against the (1+1) EA and the asymmetric (1+1) EA [1] on OneMax (Figure 1). There is a clear difference between the performance of the (1+1) EA and the two algorithms using asymmetric mutations, and the self-adjusting variant outperforms the asymmetric (1+1) EA even though relatively small problem sizes and a relatively large observation phase length were used.

(Figure 1: empirical runtimes on OneMax)
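
A comparison of this kind is easy to set up; the following self-contained sketch (my own code; the problem size and number of repetitions are arbitrary choices, not the paper's experimental setup) averages the number of fitness evaluations of the (1+1) EA and the asymmetric (1+1) EA on OneMax, and the self-adjusting variant sketched above can be added as a third competitor in the same way:

```python
import random


def standard_mutation(x):
    n = len(x)
    return [b ^ (random.random() < 1 / n) for b in x]


def asymmetric_mutation(x):
    ones = sum(x)
    zeros = len(x) - ones
    p1 = 1 / (2 * ones) if ones else 0.0
    p0 = 1 / (2 * zeros) if zeros else 0.0
    return [b ^ (random.random() < (p1 if b else p0)) for b in x]


def run_on_onemax(n, mutate):
    """(1+1) EA on OneMax: number of evaluations until the all-ones string is found."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx, evaluations = sum(x), 0
    while fx < n:
        y = mutate(x)
        fy = sum(y)                     # OneMax fitness = number of one-bits
        evaluations += 1
        if fy >= fx:
            x, fx = y, fy
    return evaluations


n, runs = 500, 20                       # illustrative settings, not the paper's
for name, op in [("(1+1) EA", standard_mutation),
                 ("asymmetric (1+1) EA", asymmetric_mutation)]:
    avg = sum(run_on_onemax(n, op) for _ in range(runs)) / runs
    print(f"{name}: {avg:.0f} evaluations on average")
```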

The self-adjusting asymmetric (1+1) EA on OneMax_a

  • Fitness function OneMax_a(x) = n − H(x, a) for an unknown target a ∈ {0,1}^n, where H denotes the Hamming distance.
  • In other words, OneMax_a(x) returns the number of matching bits in x and a (illustrated in the code snippet below).
  • The self-adjusting scheme does not harm the optimization time on OneMax_a:

Theorem (informal): For suitable choices of the observation phase length N and of the admissible range of strengths, the expected runtime of the self-adjusting asymmetric (1+1) EA on OneMax_a is O(n log n), asymptotically matching the well-known bound for the standard (1+1) EA.
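
As a small illustration (my own code; the target below is chosen at random just for the example), OneMax_a can be computed directly from the number of matching positions:

```python
import random

def onemax_a(x, a):
    """OneMax_a(x) = n - H(x, a): the number of positions in which x matches the target a."""
    return sum(xi == ai for xi, ai in zip(x, a))

n = 8
a = [random.randint(0, 1) for _ in range(n)]   # a target containing zeroes and ones
x = [random.randint(0, 1) for _ in range(n)]
print(onemax_a(x, a))                          # fitness of a random search point
print(onemax_a(a, a))                          # the optimum has fitness n
# OneMax is the special case a = (1, ..., 1), where the fitness is the number of one-bits
```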

📊

Experimentally, the self-adjusting asymmetric (1+1) EA was compared against the (1+1) EA and the asymmetric (1+1) EA [1] on OneMax_a with targets a containing both zeroes and ones (Table 2). The similarity of the data suggests that the asymmetric algorithms perform neither worse nor better than the (1+1) EA. More precisely, all p-values obtained from a Mann–Whitney U test between the algorithms, with respect to the null hypothesis of identical behavior, are greater than 0.1.

(Table 2: empirical runtimes on OneMax_a)
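
Such p-values come from a standard two-sided Mann–Whitney U test, for example via SciPy; the two runtime samples below are hypothetical placeholders for the recorded numbers of evaluations (they are not data from the paper):

```python
from scipy.stats import mannwhitneyu

# Hypothetical runtime samples (numbers of fitness evaluations) of two algorithms on OneMax_a;
# in the actual experiments these would be the recorded runtimes of the compared EAs.
runs_algorithm_1 = [10423, 9871, 11250, 10098, 10677, 9932, 10511, 10244]
runs_algorithm_2 = [10310, 10560, 9985, 10890, 10120, 10433, 9876, 10705]

# Two-sided test of the null hypothesis that both samples stem from the same distribution.
statistic, p_value = mannwhitneyu(runs_algorithm_1, runs_algorithm_2, alternative="two-sided")
print(f"U = {statistic}, p = {p_value:.3f}")    # p > 0.1: no significant difference detected
```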

References:

[1] Jansen, T., Sudholt, D.: Analysis of an asymmetric mutation operator. Evolutionary Computation 18(1), 1–26 (2010)

[2] Neumann, F., Wegener, I.: Randomized local search, evolutionary algorithms, and the minimum spanning tree problem. Theoretical Computer Science 378(1), 32–40 (2007)

🙏

Thank you! If you have any questions, please don't hesitate to contact us.