Reference

API

Newtman.optimize - Method
function optimize(f, sol, range, method::PopulationBase;
    rng=nothing,
    iters=2_000,
    pop_size=30,
    kwargs...
) -> OptimizationResults

The optimize function is an interface that quickly dispatches over all the possible implementations.

In particular, this function dispatches over all PopulationBase implementations.

Arguments

  • f: The function to optimize.
  • sol: A candidate solution; it should be an AbstractArray. The dimension of this array determines the dimension of the problem.
  • range: A two-element collection that specifies the global bounds of the search space.
  • method: A PopulationBase, e.g. PSO for Particle Swarm Optimization.

Keyword arguments

  • rng: An AbstractRNG object. This will be used for creating the population of solutions as well as for the implementation itself. If you wish to enforce reproducibility of results, create an RNG object with a set seed. If you do not pass an RNG object, a Xorshifts.Xoroshiro128Plus RNG will be created, using a truly random seed taken from the system.
  • iters: The total number of iterations for the main loop in the implementation.
  • pop_size: The total size of the population. The larger the population, the longer it will take to converge. A good population size is somewhere between 20 and 35.
  • kwargs: Keyword arguments passed on to the particular implementation. Each implementation has its own keyword arguments so check the documentation for a specific implementation in order to modify these arguments.
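
A minimal usage sketch (not taken from the package's examples): it assumes that PSO() constructs the method object with its default settings and that a two-element tuple is an acceptable range.

using Newtman
using Random

rng = MersenneTwister(42)
f_sphere(x) = sum(x .^ 2)

# Hypothetical call: optimize a 3-dimensional problem with PSO,
# bounded by (-10, 10) in every dimension.
res = optimize(f_sphere, zeros(3), (-10.0, 10.0), PSO();
    rng=rng, iters=5_000, pop_size=30)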
source
Newtman.optimize - Method
function optimize(f, sol, range, method::TrajectoryBase;
    rng=nothing,
    iters=2_000,
    kwargs...
) -> OptimizationResults

The optimize function is an interface that quickly dispatches over all the possible implementations.

In particular, this function dispatches over all TrajectoryBase implementations.

Arguments

  • f: The function to optimize.
  • sol: A candidate solution; it should be an AbstractArray. The dimension of this array determines the dimension of the problem.
  • range: A two-element collection that specifies the global bounds of the search space.
  • method: A TrajectoryBase object, e.g. SimulatedAnnealing for the simulated annealing implementation.

Keyword arguments

  • rng: An AbstractRNG object. This will be used for generating candidate solutions as well as for the implementation itself. If you wish to enforce reproducibility of results, create an RNG object with a set seed. If you do not pass an RNG object, a Xorshifts.Xoroshiro128Plus RNG will be created, using a truly random seed taken from the system.
  • iters: The total number of iterations for the main loop in the implementation.
  • kwargs: Keyword arguments passed on to the particular implementation. Each implementation has its own keyword arguments so check the documentation for a specific implementation in order to modify these arguments.
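
A minimal usage sketch along the same lines, assuming SimulatedAnnealing() constructs the method object with its defaults:

using Newtman
using Random

rng = MersenneTwister(42)
rosenbrock2d(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Hypothetical call: optimize the 2-dimensional Rosenbrock function with
# simulated annealing, bounded by (-5, 5) in every dimension.
res = optimize(rosenbrock2d, zeros(2), (-5.0, 5.0), SimulatedAnnealing();
    rng=rng, iters=5_000)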
source

Benchmarks

Newtman.TestFunctions.Ackley - Type
Ackley

An unconstrained implementation of the d-dimensional Ackley function defined as:

\[f(\mathbf{x}) = -20 e^{ -0.02 \sqrt{\frac{1}{d}\sum_{i=1}^{d}{x_i^2}}} - e^{\frac{1}{d}\sum_{i=1}^{d}{\cos{(2 \pi x_i)}}} + 20 + e\]

where $d$ is the dimension of the input vector $\mathbf{x}$.
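
For reference, the formula above can be written in plain Julia as follows; this is only an illustration of the definition, not the package's implementation:

# Ackley function exactly as written above (note the -0.02 coefficient
# inside the exponential, as given in the formula).
function ackley(x)
    d = length(x)
    return -20 * exp(-0.02 * sqrt(sum(abs2, x) / d)) -
           exp(sum(cos, 2π .* x) / d) + 20 + exp(1)
end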

source
Newtman.TestFunctions.Beale - Type
Beale

An unconstrained implementation of the 2-dimensional Beale function defined as:

\[f(x, y) = (1.5-x+xy)^2+(2.25-x+xy^2)^2+(2.625-x+xy^3)^2\]
source
Newtman.TestFunctions.Easom - Type
Easom

An unconstrained implementation of the 2-dimensional Easom function defined as:

\[f(\mathbf{x}) = -\cos{(x_1)} \cos{(x_2)} \exp{[-(x_1 - \pi)^2 - (x_2 - \pi)^2]}\]

where $x_1$ and $x_2$ refer to the first and second element of the input vector $\mathbf{x}$.

source
Newtman.TestFunctions.GoldsteinPrice - Type
Goldstein-Price

An unconstrained implementation of the 2-dimensional Goldstein-Price function defined as:

\[f(x,y)=[1 + (x + y + 1)^2(19 - 14x + 3x^2 - 14y + 6xy + 3y^2)] \times \\ [30 + (2x - 3y)^2(18 - 32x + 12x^2 + 48y - 36xy + 27y^2)]\]
source
Newtman.TestFunctions.Levy - Type
Lévy

An unconstrained implementation of the d-dimensional Lévy function defined as:

\[f(\mathbf{x}) = \sin^{2}{(\pi w_1)} + \sum_{i=1}^{d-1} (w_i-1)^2 [1+10\sin^{2}{(\pi w_i + 1)}] + (w_d-1)^2 [1+\sin^{2}{(2\pi w_d)}]\]

where $w_i = 1 + \frac{x_i-1}{4}$ and $d$ is the dimension of the input vector $\mathbf{x}$.
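
A plain-Julia illustration of the definition above (not the package's implementation):

# Lévy function with the transformation w_i = 1 + (x_i - 1) / 4.
function levy(x)
    w = 1 .+ (x .- 1) ./ 4
    d = length(x)
    total = sin(π * w[1])^2
    for i in 1:d-1
        total += (w[i] - 1)^2 * (1 + 10 * sin(π * w[i] + 1)^2)
    end
    total += (w[d] - 1)^2 * (1 + sin(2π * w[d])^2)
    return total
end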

source
Newtman.TestFunctions.Rosenbrock - Type
Rosenbrock

An unconstrained implementation of the d-dimensional Rosenbrock function defined as:

\[f(\mathbf{x}) = \sum_{i=1}^{N-1} \left[100(x_{i+1}-x_i^2)^2 +(1-x_i)^2 \right]\]

where $N$ is the dimension of the input vector $\mathbf{x}$.
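
A plain-Julia illustration of the definition (not the package's implementation):

# Rosenbrock function: each component is coupled to the next one.
rosenbrock(x) = sum(100 * (x[i + 1] - x[i]^2)^2 + (1 - x[i])^2 for i in 1:length(x) - 1)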

source
Newtman.TestFunctions.Sphere - Type
Sphere

An unconstrained implementation of the Sphere function defined as:

\[f(\mathbf{x}) = \sum_{i=1}^{d} x_i^2\]

where $d$ is the dimension of the input vector $\mathbf{x}$.
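
In plain Julia the definition is simply:

# Sphere function: the sum of squared components.
sphere(x) = sum(abs2, x)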

source

Algorithms

Newtman.Particle - Method
Particle(a, b, n::Int)

Create a Particle at random, given the bounds and the required dimension.

Arguments

  • a: lower bound for x and v.
  • b: upper bound for x and v.
  • n: dimension for x, v, and x_best.

Example

p = Particle(-1.0, 1.0, 3)
source
Newtman.Particle - Method
Particle(
    x::AbstractArray{T},
    v::AbstractArray{T},
    x_best::AbstractArray{T},
    a::T,
    b::T
) where {T <: Real}

A type that holds the current position, the current velocity, and the best candidate solution so far, as well as the bounds. The dimensions of the Particle are inferred from the length of the arrays.

Arguments

  • x: Array that holds the positions of possible solutions.
  • v: Array that holds velocities related to x.
  • x_best: An element of x that determines the best position for the particle.
  • a: lower bound for x and v.
  • b: upper bound for x and v.

Example

p = Particle(zeros(3), rand(3), zeros(3), 0.0, 1.0)
source
Newtman.Population - Method
Population(num_particles::Int, ranges; seed=nothing)
    -> Vector{Particle}(undef, num_particles)

An array of Particles, each of which is bounded and given a dimension. The ranges argument is a collection of ranges, one for each dimension of the Particles.

Arguments

  • num_particles: Number of particles in the Population.
  • ranges: A collection of ranges, one for each dimension of every Particle; for instance, a tuple or a vector. The number of ranges determines the dimension of the Particles.

Example

using StaticArrays

# Two ranges, one for each dimension
range_a = SVector(-10.0, 10.0)
range_b = SVector(-2.5, 2.0)
pops = Population(20, [range_a, range_b])
source
Newtman.Population - Method
Population(num_particles::T, dim::T, a::V, b::V)
    where {T<:Int, V<:AbstractFloat} -> Vector{Particle}(undef, num_particles)

An array of Particles, each of which is bounded and given a dimension. This is essentially a multi-dimensional array, and it makes handling Particles much easier.

Arguments

  • num_particles: Number of particles in the Population.
  • dim: Dimension for every Particle.
  • a: Lower bound for every Particle, this is shared across every instance.
  • b: Upper bound for every Particle, this is shared across every instance.

Example

pop = Population(35, 4, -1.0, 1.0)
source
Newtman.Metaheuristic - Type
Metaheuristic

Abstract type for metaheuristic algorithms; it makes a clear distinction between the different classifications of metaheuristic algorithms.

source
Newtman.OptimizationResults - Type
OptimizationResults{T, U}

Type that formats the output of a Metaheuristic so that the relevant information can be extracted from it.

Fields

  • x::T: Stores the solution array from the solver, i.e. the solution that minimizes the cost function.
  • min::U: Stores the value obtained from evaluating the cost function with x, i.e. the minima found.
  • impl::AbstractString: Stores the name of the Metaheuristic used, i.e. the name or identifier of the optimization algorithm.
  • iterations::Integer: Stores the maximum number of iterations that the solver was run for.
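
Since the fields are plain data, a result can be inspected directly. A hypothetical sketch, where res is a value returned by one of the solvers:

res.x           # solution array found by the solver
res.min         # value of the cost function at res.x
res.impl        # name of the algorithm used
res.iterations  # maximum number of iterations the solver was run for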
source
Newtman.PopulationBase - Type
PopulationBase <: Metaheuristic

Type for population-based algorithms that employ Population, i.e. subroutines that mutate an array of possible candidates in-place. An example of this type is PSO.

source
Newtman.PSO - Type
PSO <: PopulationBase

PSO is the type associated with the implementation of Particle Swarm Optimization with momentum. See Algorithms for more information.

source
Newtman.PSO - Method
PSO(f::Function, population, k_max::Int, rng;
    w=0.9, c1=2.0, c2=2.0
) -> OptimizationResults
PSO(f::Benchmark, population, k_max::Int, rng;
    w=0.9, c1=2.0, c2=2.0
) -> OptimizationResults

Method that implements PSO for a function f of type Function or of type Benchmark.

Returns an OptimizationResults type with information relevant to the run executed, see OptimizationResults.

Arguments

  • population: can be any AbstractArray that contains Particle instances, but it is expected to be generated by Population.
  • k_max: maximum number of iterations until "convergence" of the algorithm.
  • rng: An object of type AbstractRNG.

Keyword arguments

It is recommended to use the default values provided.

  • w: value that controls how much of the initial velocity is retained, i.e. an inertia term. This value decays linearly over the iterations until it reaches the default minimum value of 0.4.

  • c1: weight given to the individual's knowledge, i.e. the best individual solution found so far.
  • c2: weight given to the population's knowledge, i.e. the best global solution found so far.
  • seed: an integer to be used as the seed for the pseudo random number generators. If nothing is passed (the default), then a random seed will be taken from the system.

Examples

using Newtman
using Random

rng = MersenneTwister(1261234)
# Define the Sphere function
f_sphere(x) = sum(x.^2)

# Implement PSO for a 3-dimensional Sphere function, with
# 10000 iterations and 30 particles in the population.
val = PSO(f_sphere, Population(30, 3, -15.0, 15.0), 10_000, rng)
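
The f::Benchmark signature suggests that a benchmark object can be passed directly in place of a hand-written function; a sketch, assuming Sphere() from Newtman.TestFunctions constructs such an object:

using Newtman.TestFunctions

# Same run as above, but with the Sphere benchmark instead of f_sphere.
val = PSO(Sphere(), Population(30, 3, -15.0, 15.0), 10_000, rng)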
source
Newtman.GeneralSimulatedAnnealing - Method
GeneralSimulatedAnnealing(f::Function, a, b, dim::Integer, rng;
    low_temp=5_000, t0=500.0, qv=2.7, qa=-5.0
) -> OptimizationResults
GeneralSimulatedAnnealing(f::Benchmark, a, b, dim::Integer, rng;
    low_temp=20_000, t0=500.0, qv=2.7, qa=-5.0
) -> OptimizationResults

Implementation of the generalized version of simulated annealing.

This implementation uses all the theory from Tsallis & Stariolo for the cooling schedule and the neighbor solution search. See GeneralSimulatedAnnealing for the implementation details.

Returns an OptimizationResults type with information relevant to the run executed, see OptimizationResults.

Arguments

  • f: any user-defined Function that can take AbstractArrays.
  • a: lower bound for the solution search space.
  • b: upper bound for the solution search space.
  • dim: dimension of the optimization problem.
  • rng: an object of type AbstractRNG.

Keyword arguments

It is recommended to use the default values provided.

  • t0: initial value of the temperature. The default is a reasonable value, but it should be adjusted to the optimization problem at hand.
  • low_temp: total number of iterations (short for lowering-temperature steps). This also corresponds to the well-known Monte Carlo steps, i.e. the total number of steps until the algorithm finishes.
  • qv: known as the Tsallis parameter; it controls convergence of the cooling schedule and the neighbor search. Positive values in the interval $[1, 5/3)$ are best, because for values larger than 5/3 the neighbor search diverges.
  • qa: another Tsallis parameter; it controls convergence of the Metropolis-Hastings step and the acceptance probability involved. The more negative the value, the better, although Tsallis & Stariolo report that the default value works best.

Examples

using Newtman
using Random

rng = MersenneTwister()
# Define the 2D Rosenbrock function
rosenbrock2d(x) =  (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Run generalized simulated annealing on the 2-dimensional Rosenbrock function,
# with 15000 iterations and default values.
val = GeneralSimulatedAnnealing(rosenbrock2d, -5.0, 5.0, 2, rng; low_temp=15_000)
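
The Tsallis parameters can be tuned through the keyword arguments; a variation of the run above with illustrative, non-default values (chosen only to show the syntax):

# Same problem, but with qv inside the interval [1, 5/3) and a higher
# starting temperature; these values are illustrative only.
val = GeneralSimulatedAnnealing(rosenbrock2d, -5.0, 5.0, 2, rng;
    low_temp=15_000, t0=800.0, qv=1.5)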
source
Newtman.SimulatedAnnealing - Method
SimulatedAnnealing(f::Function, a, b, dim::Integer, rng;
    low_temp=5000, t0=500.0
) -> OptimizationResults
SimulatedAnnealing(f::Benchmark, a, b, dim::Integer, rng;
    low_temp=5000, t0=500.0
) -> OptimizationResults

Implementation of the classical version of simulated annealing.

This implementation uses a logarithmic cooling schedule and searches possible candidate solutions by sampling from an approximated Boltzmann distribution, drawn as a normal distribution.

Returns an OptimizationResults type with information relevant to the run executed, see OptimizationResults.

Arguments

  • f: any user-defined Function that can take AbstractArrays.
  • a: lower bound for the solution search space.
  • b: upper bound for the solution search space.
  • dim: dimension of the optimization problem.
  • rng: an object of type AbstractRNG.

Keyword arguments

It is recommended to use the default values provided.

  • t0: initial value of the temperature. The default is a reasonable value, but it should be adjusted to the optimization problem at hand.
  • low_temp: total number of iterations (short for lowering-temperature steps). This also corresponds to the well-known Monte Carlo steps, i.e. the total number of steps until the algorithm finishes.

Examples

using Newtman
using Random

rng = MersenneTwister()

# Define the 2D Rosenbrock function
rosenbrock2d(x) =  (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Implement Simulated Annealing for a 2-dimensional Rosenbrock function, with
# 5000 iterations.
val = SimulatedAnnealing(rosenbrock2d, -5.0, 5.0, 2, rng; low_temp=5000)
source