Evolutionary Algorithms in Go
by James Smith (Golang Project Structure Admin)
Evolutionary algorithms (EAs) offer a compelling approach to problem-solving by drawing inspiration from the principles of natural selection that biologists have uncovered.
In this blog post, we will explore how EAs work and consider some of their real-world applications, before going on to work on an example implementation in the Go programming language.
What Are Evolutionary Algorithms?
At their core, evolutionary algorithms encapsulate the fundamental principles of biological evolution, replicating the mechanisms that drive natural selection.
These algorithms operate on a population of candidate solutions, each representing a potential answer to the problem at hand.
Through iterative processes of selection, mutation and recombination, the population evolves over successive generations, refining its own composition as it moves towards ever more optimal solutions.
The defining feature of evolutionary algorithms lies in the concept of fitness, which serves as the main criterion for survival and propagation.
Candidate solutions are evaluated based on their effectiveness in solving the problem, with fitter individuals possessing a greater likelihood of passing their traits onto the next generation.
Over time, this selection pressure fosters the emergence of increasingly refined solutions, mirroring the adaptive process that biologists have observed in nature.
How Have Evolutionary Algorithms Been Influenced by the Idea of Natural Selection?
In nature, species gradually adapt to their environments through a process of variation, competition and inheritance, ensuring that traits beneficial to survival and reproduction become more prevalent over successive generations.
Evolutionary algorithms replicate this process in a computational setting, using it as a framework for solving complex optimisation problems.
At the heart of both natural selection and evolutionary algorithms lies the concept of a population — whether of living organisms or candidate solutions.
In biological evolution, individuals within a species exhibit genetic diversity, with variations in traits such as size, speed or resistance to disease.
Similarly, in an evolutionary algorithm, each candidate solution in a population possesses unique characteristics that determine its effectiveness in solving a given problem.
Just as nature favours organisms that are better adapted to their environments, evolutionary algorithms employ a fitness function to assess the quality of each candidate solution. Those with higher fitness scores are more likely to contribute to future generations, ensuring that advantageous traits persist and propagate.
Selection plays a crucial role in both biological and computational evolution. In nature, individuals with traits that enhance their chances of survival and reproduction are more likely to pass their genes to the next generation.
This process, sometimes described as “survival of the fittest”, is mirrored in evolutionary algorithms through their selection mechanisms.
These techniques ensure that superior solutions have a higher probability of being chosen as “parents” for the next generation, while still allowing some degree of genetic diversity by occasionally selecting less optimal candidates to prevent premature convergence.
Variation is another essential factor in evolution, ensuring that populations do not stagnate and that new traits emerge over time.
In nature, variation occurs through genetic mutation and recombination. Mutation introduces small, random changes in genetic material, sometimes leading to advantageous adaptations.
![The peppered moth [Biston betularia].](https://golangprojectstructure.com/wp-content/uploads/2025/03/peppered-moth-biston-betularia-512x318.jpg)
In evolutionary algorithms, mutation serves a similar purpose, introducing small modifications to candidate solutions to explore new possibilities within the search space.
Without mutation, an evolutionary algorithm risks becoming trapped in a suboptimal solution, much as a species might struggle to adapt if its gene pool lacked sufficient diversity.
Recombination, or crossover, further enriches genetic variation by combining the traits of two parent individuals to produce offspring with a mixture of their characteristics.
In biological evolution, this occurs through sexual reproduction, where offspring inherit traits from both parents.
In an evolutionary algorithm, crossover can involve the merging of aspects of two high-performing candidate solutions to create new solutions that may outperform either parent.
This blending of traits allows for the emergence of novel and potentially superior solutions that would not have arisen through mutation alone.
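To make this concrete, here is a minimal sketch of arithmetic crossover for candidates represented by a single floating-point value. The crossover function is a hypothetical helper written for illustration, not part of a particular library:

```go
package main

import (
	"fmt"
	"math/rand"
)

// crossover blends the values of two parent candidates to produce an
// offspring. The random weight alpha places the child somewhere on the
// line between the two parent values (arithmetic crossover), so the
// offspring always lies within the range spanned by its parents.
func crossover(parentA, parentB float64) float64 {
	alpha := rand.Float64()
	return alpha*parentA + (1-alpha)*parentB
}

func main() {
	child := crossover(2.0, 4.0)
	fmt.Printf("offspring value: %.3f\n", child) // always between 2.0 and 4.0
}
```

For candidates with many genes, the same idea extends gene by gene, or by cutting the two parents' gene sequences at a random point and splicing them together (single-point crossover).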
Over multiple generations, the iterative cycle of selection, variation and inheritance enables evolutionary algorithms to refine their populations, gradually improving the quality of solutions.
This process closely mirrors the way species adapt to their environments over millennia, except that in a computational setting, evolution occurs on an accelerated timescale, often within minutes or even seconds.
Although evolutionary algorithms are an abstraction of natural selection rather than a precise biological model, their success in solving complex optimisation problems underscores the efficacy of evolution as a problem-solving paradigm.
When Should Evolutionary Algorithms Be Used?
Evolutionary algorithms are particularly well suited to optimisation problems where traditional methods falter.
Unlike brute-force approaches, which exhaustively evaluate all possible solutions, or gradient-based methods, which rely on smooth and predictable mathematical functions, evolutionary algorithms excel in navigating complex and unpredictable search spaces.
These algorithms are especially well suited to problems where the solution landscape is filled with local optima — points that appear to be the best solution within a small region but are not the overall best.
In many cases, the usual way to find the best solution is to follow a clear path, like climbing a hill to reach the highest point. However, some problems do not have smooth, predictable paths.
Instead, they have many small hills, and a basic method might get stuck on one of these instead of reaching the highest peak.
Evolutionary algorithms avoid this problem by keeping a diverse set of solutions and constantly trying new possibilities, increasing the chances of finding a much better outcome.
Another reason to use evolutionary algorithms is when a problem has multiple goals that need to be balanced.
For example, in designing a car engine, one goal might be to maximise power, while another is to minimise fuel consumption. These two goals compete with each other, and there is no single “best” solution — only a range of good trade-offs.
Evolutionary algorithms naturally handle these situations by evolving a variety of solutions that balance the different goals, allowing decision-makers to choose the one that best fits their needs.
These algorithms are also excellent for problems with an enormous number of possible solutions, such as scheduling work shifts, routing delivery trucks or organising a sports tournament.
Trying every possible option would take far too long, but evolutionary algorithms efficiently narrow down the best choices by learning from past attempts and gradually improving.
Moreover, evolutionary algorithms are ideal for situations where conditions change over time.
Many traditional methods require restarting from scratch whenever new data appears, but evolutionary algorithms can adapt on the fly.
This makes them useful in areas like stock-market trading and robotics, where flexibility is essential.
That said, evolutionary algorithms are not always the best choice. They can be slower than traditional methods, and they do not always offer a guarantee of finding the perfect solution.
Considering an Example of an Evolutionary Algorithm
To illustrate, let us consider an example.
We’ll optimise a mathematical function using a genetic algorithm.
The function that we’ll use will be the following simple quadratic:
𝑓(𝑥) = −(𝑥 − 3)² + 10
Our goal will be to find the value of 𝑥 that maximises 𝑓(𝑥) within a bounded range.
Now let’s work on some Go code to try and solve this mathematical problem…
Implementing an Evolutionary Algorithm in Go
To begin, we’ll define a Candidate struct to represent potential solutions.
Each candidate possesses a floating-point value alongside an associated fitness score, as we can see in the code below:
```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

type Candidate struct {
	Value   float64
	Fitness float64
}
```
The fitness function, responsible for evaluating the quality of a solution, follows directly from our mathematical objective, as can be seen below:
```go
func fitness(x float64) float64 {
	return -((x - 3) * (x - 3)) + 10
}
```
The fitness function serves as the guiding force of evolution, shaping the population towards more favourable solutions.
Higher fitness values increase an individual’s probability of survival and reproduction, mimicking the principle of natural selection.
Next we need to be able to initialise a population of random candidates. So let’s create a function to do just that:
```go
func initialisePopulation(size int, min, max float64) []Candidate {
	population := make([]Candidate, size)
	for i := range population {
		val := min + rand.Float64()*(max-min)
		population[i] = Candidate{
			Value:   val,
			Fitness: fitness(val),
		}
	}
	return population
}
```
We should ensure that we seed the random number generator before calling the function above, in order to ensure variability. (From Go 1.20 onwards, the global generator is seeded automatically and explicit seeding is deprecated, so this step is only necessary on older versions.)
The initial population is crucial in determining the evolutionary algorithm’s effectiveness.
A poorly distributed population may hinder exploration, leading to premature convergence. By ensuring a diverse starting set, we maximise the likelihood of discovering global optima.
Selection involves favouring fitter individuals while maintaining a certain degree of diversity within a population.
We shall use a tournament-selection strategy for the purposes of this example.
The method involves randomly selecting a subset of individuals from the population, referred to as a “tournament”, and then choosing the fittest individual from this subset to pass on its genetic material to the next generation.
The size of the tournament (i.e. the number of individuals randomly selected) can vary, but it typically remains small relative to the entire population.
Other selection mechanisms, such as roulette-wheel selection or rank-based selection, could alternatively have been used.
In roulette-wheel selection, candidates are given a chance to be selected proportional to their fitness, meaning those with higher fitness occupy a larger slice of the “roulette wheel”, thus having a greater likelihood of being chosen, though even less fit individuals still have a chance. This introduces a degree of randomness, which encourages exploration.
In contrast, rank-based selection orders candidates based on their fitness and assigns selection probabilities based on their rank rather than their absolute fitness score. This approach reduces the dominance of very fit individuals and ensures a more balanced selection process, helping to maintain diversity and avoid premature convergence.
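As a rough sketch of the first alternative, roulette-wheel selection might look like the following in Go. The Candidate struct here mirrors the one used in the implementation section of this post, and the sketch assumes all fitness values are non-negative (negative fitnesses would first need to be shifted upwards before the "wheel" can be built):

```go
package main

import (
	"fmt"
	"math/rand"
)

type Candidate struct {
	Value   float64
	Fitness float64
}

// rouletteSelect picks a candidate with probability proportional to its
// fitness: a random "spin" lands somewhere on a wheel whose slices are
// sized by each candidate's fitness. Assumes non-negative fitnesses.
func rouletteSelect(population []Candidate) Candidate {
	total := 0.0
	for _, c := range population {
		total += c.Fitness
	}
	spin := rand.Float64() * total
	running := 0.0
	for _, c := range population {
		running += c.Fitness
		if running >= spin {
			return c
		}
	}
	return population[len(population)-1] // fallback for rounding edge cases
}

func main() {
	pop := []Candidate{{Value: 1, Fitness: 1}, {Value: 2, Fitness: 9}}
	// The second candidate occupies 90% of the wheel, so it is selected
	// far more often, but the first still has a chance.
	fmt.Println(rouletteSelect(pop).Value)
}
```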
That having been said, we’ve chosen to use a simple form of tournament selection, and our selection function can be seen below:
```go
func selectParent(population []Candidate) Candidate {
	a := population[rand.Intn(len(population))]
	b := population[rand.Intn(len(population))]
	if a.Fitness > b.Fitness {
		return a
	}
	return b
}
```
By randomly selecting two individuals and choosing the fitter one, we introduce a degree of stochasticity that helps maintain genetic diversity in our population.
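If we wanted a larger tournament than two, the same idea generalises to a configurable tournament size. The selectParentK function below is a hypothetical variant written for illustration and is not used in the rest of this example:

```go
package main

import (
	"fmt"
	"math/rand"
)

type Candidate struct {
	Value   float64
	Fitness float64
}

// selectParentK runs a tournament of size k, returning the fittest of
// k individuals drawn at random (with replacement) from the population.
// Larger values of k increase selection pressure; k = 2 recovers the
// two-way tournament used in the main example.
func selectParentK(population []Candidate, k int) Candidate {
	best := population[rand.Intn(len(population))]
	for i := 1; i < k; i++ {
		challenger := population[rand.Intn(len(population))]
		if challenger.Fitness > best.Fitness {
			best = challenger
		}
	}
	return best
}

func main() {
	pop := []Candidate{{Value: 1, Fitness: 1}, {Value: 2, Fitness: 5}, {Value: 3, Fitness: 3}}
	winner := selectParentK(pop, 3)
	fmt.Println(winner.Value)
}
```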
Now we need to be able to create mutations.
This ensures that we introduce variation, preventing stagnation in local optima.
Just giving a slight random perturbation can achieve this, as in the function below:
```go
func mutate(candidate Candidate, mutationRate, min, max float64) Candidate {
	if rand.Float64() < mutationRate {
		candidate.Value += (rand.Float64()*2 - 1) * 0.1
		if candidate.Value < min {
			candidate.Value = min
		} else if candidate.Value > max {
			candidate.Value = max
		}
		candidate.Fitness = fitness(candidate.Value)
	}
	return candidate
}
```
Mutation plays a vital role in maintaining genetic diversity, ensuring that the algorithm does not become trapped in suboptimal solutions.
A careful balance must be struck, as an excessively high mutation rate may disrupt convergence, while too little mutation stifles exploration.
Now let’s create a function to handle the evolutionary loop, which will proceed iteratively, applying selection, mutation and replacement to drive the population towards improved solutions.
A termination condition, such as a fixed number of generations, will conclude the evolutionary process.
Here’s the code:
```go
func evolve(population []Candidate, generations int, mutationRate, min, max float64) Candidate {
	best := population[0]
	for i := 0; i < generations; i++ {
		nextGen := make([]Candidate, len(population))
		for j := range population {
			parent := selectParent(population)
			offspring := mutate(parent, mutationRate, min, max)
			nextGen[j] = offspring
			if offspring.Fitness > best.Fitness {
				best = offspring
			}
		}
		population = nextGen
	}
	return best
}
```
We have all of the helper functions that we’re going to need now.
So we are finally ready to orchestrate the execution of our evolutionary algorithm:
```go
func init() {
	// Seed the global random number generator so each run differs.
	rand.Seed(time.Now().UnixNano())
}

func main() {
	const populationSize = 100
	const generations = 50
	const mutationRate = 0.1
	const min = -10.0
	const max = 10.0

	population := initialisePopulation(populationSize, min, max)
	best := evolve(population, generations, mutationRate, min, max)

	fmt.Printf("Best solution: x = %.5f, f(x) = %.5f\n", best.Value, best.Fitness)
}
```
When I run this code on my machine, I get the following output:
```
Best solution: x = 3.00017, f(x) = 10.00000
```
The quadratic function that we defined earlier reaches its maximum value when 𝑥 = 3, since at this point the squared term (𝑥 − 3)² evaluates to zero, leaving only the function’s maximum value of 10.
The result that we got from running our evolutionary algorithm shows that our program found 𝑥 to be 3.00017, which is very close to 3, with a function value of 10.00000, confirming that the best possible result has essentially been reached.
The marginal discrepancy between 𝑥 = 3.00017 and the exact optimum of 𝑥 = 3 is likely the result of the stochastic nature of the algorithm, combined with the tiny rounding effects inherent in floating-point arithmetic.
However, given that the function value has reached its theoretical maximum, the algorithm has effectively fulfilled its purpose.
In practical applications, such precision is generally more than sufficient, particularly in real-world scenarios where exact analytical solutions are often unattainable.
The success of the algorithm in finding the optimal solution within a negligible margin of error reaffirms the robustness of evolutionary techniques for navigating complex search spaces and finding near-optimal solutions efficiently.