Reference
API
Newtman.optimize — Method
function optimize(f, sol, range, method::PopulationBase;
rng=nothing,
iters=2_000,
pop_size=30,
kwargs...
) -> OptimizationResults
The optimize function is an interface for quickly dispatching over all the available implementations.
In particular, this function dispatches over all PopulationBase implementations.
Arguments
f: The function to optimize.
sol: A candidate solution; it should be an AbstractArray. The length of this array determines the dimension of the problem.
range: A collection containing exactly two elements, the global lower and upper bounds of the search space.
method: A PopulationBase instance, e.g. PSO for Particle Swarm Optimization.
Keyword arguments
rng: An AbstractRNG object. It is used for creating the population of solutions as well as by the implementation itself. To make results reproducible, create an RNG object with a fixed seed. If no RNG object is passed, a Xorshifts.Xoroshiro128Plus RNG is created, using a truly random seed taken from the system.
iters: The total number of iterations for the main loop of the implementation.
pop_size: The total size of the population. The larger the population, the longer convergence will take; a good population size is between 20 and 35.
kwargs: Keyword arguments passed on to the particular implementation. Each implementation has its own keyword arguments, so check the documentation of the specific implementation to modify them.
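For example, a minimal sketch of a call with PSO; this assumes the method is passed as an instance constructed with no arguments, PSO(), so check the PSO documentation if your version differs.
using Newtman
using Random
rng = MersenneTwister(42)
f_sphere(x) = sum(abs2, x)     # function to minimize
sol = zeros(3)                 # a 3-dimensional candidate solution
result = optimize(f_sphere, sol, (-10.0, 10.0), PSO(); rng=rng, iters=1_000, pop_size=30)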
Newtman.optimize — Method
function optimize(f, sol, range, method::TrajectoryBase;
rng=nothing,
iters=2_000,
kwargs...
) -> OptimizationResults
The optimize function is an interface for quickly dispatching over all the available implementations.
In particular, this function dispatches over all TrajectoryBase implementations.
Arguments
f: The function to optimize.
sol: A candidate solution; it should be an AbstractArray. The length of this array determines the dimension of the problem.
range: A collection containing exactly two elements, the global lower and upper bounds of the search space.
method: A TrajectoryBase instance, e.g. SimulatedAnnealing for the simulated annealing implementation.
Keyword arguments
rng: An AbstractRNG object. It is used for creating the candidate solutions as well as by the implementation itself. To make results reproducible, create an RNG object with a fixed seed. If no RNG object is passed, a Xorshifts.Xoroshiro128Plus RNG is created, using a truly random seed taken from the system.
iters: The total number of iterations for the main loop of the implementation.
kwargs: Keyword arguments passed on to the particular implementation. Each implementation has its own keyword arguments, so check the documentation of the specific implementation to modify them.
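For example, a minimal sketch of a call with SimulatedAnnealing; again this assumes the method is passed as an instance constructed with no arguments, so check the SimulatedAnnealing documentation if your version differs.
using Newtman
using Random
rng = MersenneTwister(7)
rosenbrock2d(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
sol = zeros(2)                 # a 2-dimensional candidate solution
result = optimize(rosenbrock2d, sol, (-5.0, 5.0), SimulatedAnnealing(); rng=rng, iters=5_000)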
Benchmarks
Newtman.TestFunctions.Benchmark — Type
Benchmark
Abstract supertype for all benchmark functions.
Newtman.TestFunctions.Unconstrained — Type
Unconstrained
Abstract supertype for all unconstrained benchmark functions.
Newtman.TestFunctions.Ackley — Type
Ackley
An unconstrained implementation of the d-dimensional Ackley function defined as:
where $d$ is the dimension of the input vector $\mathbf{x}$.
Newtman.TestFunctions.Beale — Type
Beale
An unconstrained implementation of the 2-dimensional Beale function defined as:
Newtman.TestFunctions.Easom — Type
Easom
An unconstrained implementation of the 2-dimensional Easom function defined as:
$f(\mathbf{x}) = -\cos(x_1) \cos(x_2) \exp\left( -\left[ (x_1 - \pi)^2 + (x_2 - \pi)^2 \right] \right)$
where $x_1$ and $x_2$ refer to the first and second element of the input vector $\mathbf{x}$.
Newtman.TestFunctions.GoldsteinPrice — Type
GoldsteinPrice
An unconstrained implementation of the 2-dimensional Goldstein-Price function defined as:
Newtman.TestFunctions.Levy — Type
Levy
An unconstrained implementation of the d-dimensional Lévy function defined as:
where
$w_i = 1 + \frac{x_i-1}{4}$ and $d$ is the dimension of the vector.
Newtman.TestFunctions.Rosenbrock — Type
Rosenbrock
An unconstrained implementation of the N-dimensional Rosenbrock function defined as:
$f(\mathbf{x}) = \sum_{i=1}^{N-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( 1 - x_i \right)^2 \right]$
where $N$ is the dimension of the input vector $\mathbf{x}$.
Newtman.TestFunctions.Sphere — Type
Sphere
An unconstrained implementation of the Sphere function defined as:
$f(\mathbf{x}) = \sum_{i=1}^{d} x_i^2$
where $d$ is the dimension of the input vector $\mathbf{x}$.
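As a rough usage sketch, a benchmark type can be handed to a solver method that accepts f::Benchmark (see PSO below). This assumes the benchmark is constructed with no arguments, e.g. Sphere(); adjust if your version differs.
using Newtman
using Newtman.TestFunctions
using Random
rng = MersenneTwister(11)
# Assumes Sphere() constructs the benchmark; PSO(f::Benchmark, ...) is documented below.
res = PSO(Sphere(), Population(30, 3, -5.0, 5.0), 2_000, rng)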
Algorithms
Newtman.Particle — Method
Particle(a, b, n::Int)
Creates a Particle at random within the given bounds and with the specified dimension.
Arguments
a: lower bound for x and v.
b: upper bound for x and v.
n: dimension for x, v, and x_best.
Example
p = Particle(-1.0, 1.0, 3)
Newtman.Particle — Method
Particle(
x::AbstractArray{T},
v::AbstractArray{T},
x_best::AbstractArray{T},
a::T,
b::T
) where {T <: Real}
A type that holds a particle's current position, current velocity, and best candidate solution, as well as its bounds. The dimension of the Particle is inferred from the length of the arrays.
Arguments
x: Array that holds the positions of possible solutions.
v: Array that holds the velocities related to x.
x_best: An element of x that determines the best position for the particle.
a: lower bound for x.
b: upper bound for v.
Example
p = Particle(zeros(3), rand(3), zeros(3), 0.0, 1.0)
Newtman.Population — Method
Population(num_particles::Int, ranges; seed=nothing)
-> Vector{Particle}(undef, num_particles)
An array of Particles, each bounded and given a dimension. ranges is a collection of ranges, one for each dimension of the Particles.
Arguments
num_particles: Number of particles in the Population.
dim: Dimension for every Particle.
ranges: A collection of ranges, for instance a tuple or a vector.
Example
using StaticArrays
# Two ranges, one for each dimension
range_a = SVector(-10.0, 10.0)
range_b = SVector(-2.5, 2.0)
pops = Population(2, 20, [range_a, range_b])
Newtman.Population — Method
Population(num_particles::T, dim::T, a::V, b::V)
where {T<:Int, V<:AbstractFloat} -> Vector{Particle}(undef, num_particles)
An array of Particles, each bounded and given a dimension. This is essentially a multi-dimensional array that makes handling Particles much easier.
Arguments
num_particles: Number of particles in the Population.
dim: Dimension for every Particle.
a: Lower bound for every Particle; this is shared across every instance.
b: Upper bound for every Particle; this is shared across every instance.
Example
pop = Population(35, 4, -1.0, 1.0)
Newtman.Metaheuristic — Type
Metaheuristic
Abstract type for metaheuristic algorithms; it makes a clear distinction between the different classes of metaheuristic algorithms.
Newtman.OptimizationResults — Type
OptimizationResults{T, U}
Type that formats the output of a Metaheuristic run so that the relevant information is easy to retrieve.
Fields
x::T: Stores the solution array from the solver, i.e. the solution that minimizes the cost function.
min::U: Stores the value obtained by evaluating the cost function at x, i.e. the minimum found.
impl::AbstractString: Stores the name of the Metaheuristic used, i.e. the name or identifier of the optimization algorithm.
iterations::Integer: Stores the maximum number of iterations the solver was run for.
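As an illustrative sketch, assuming these fields are accessed directly by name on the value returned by a solver:
using Newtman
using Random
rng = MersenneTwister(1)
res = PSO(x -> sum(abs2, x), Population(20, 2, -1.0, 1.0), 1_000, rng)
res.x           # solution array that minimizes the cost function
res.min         # value of the cost function at res.x
res.impl        # name of the Metaheuristic used
res.iterations  # maximum number of iterations the solver was run for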
Newtman.PopulationBase — Type
PopulationBase <: Metaheuristic
Type for population-based algorithms that employ a Population, i.e. subroutines that mutate an array of possible candidates in-place. An example of this type is PSO.
Newtman.TrajectoryBase — Type
TrajectoryBase <: Metaheuristic
Type for trajectory-based algorithms, which only use specific heuristics to guide a single solution toward a global minimum. An example of this type is SimulatedAnnealing.
Newtman.PSO — Type
PSO <: PopulationBase
PSO is the type associated with the implementation of Particle Swarm Optimization with momentum. See Algorithms for more information.
Newtman.PSO — Method
PSO(f::Function, population, k_max::Int, rng;
w=0.9, c1=2.0, c2=2.0
) -> OptimizationResults
PSO(f::Benchmark, population, k_max::Int, rng;
w=0.9, c1=2.0, c2=2.0
) -> OptimizationResults
Method that implements PSO for a function f of type Function or of type Benchmark.
Returns an OptimizationResults type with information relevant to the run executed, see OptimizationResults.
Arguments
f: the function to optimize, of type Function or Benchmark.
population: can be any AbstractArray that contains Particle instances, but it is expected to be generated by Population.
k_max: maximum number of iterations until "convergence" of the algorithm.
rng: An object of type AbstractRNG.
Keyword arguments
It is recommended to use the default values provided.
w: value that controls how much of the initial velocity is retained, i.e. an inertia term. This value decays linearly over the iterations until it reaches the default minimum value of 0.4.
c1: balance between the influence of the individual's knowledge, i.e. the best individual solution so far.
c2: balance between the influence of the population's knowledge, i.e. the best global solution so far.
seed: an integer to be used as the seed for the pseudo-random number generators. If nothing is passed (the default), a random seed is taken from the system.
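For reference, the textbook velocity and position update that w, c1, and c2 parameterize looks like the sketch below; details such as the linear decay of w and bound handling in the actual implementation may differ.
# One textbook PSO update step for a single particle (illustrative only).
function pso_step(x, v, x_best, g_best, w, c1, c2, rng)
    r1, r2 = rand(rng), rand(rng)
    v_new = w .* v .+ c1 * r1 .* (x_best .- x) .+ c2 * r2 .* (g_best .- x)
    return x .+ v_new, v_new
end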
Examples
using Newtman
using Random
rng = MersenneTwister(1261234)
# Define the Sphere function
f_sphere(x) = sum(x.^2)
# Implement PSO for a 3-dimensional Sphere function, with
# 10000 iterations and 30 particles in the population.
val = PSO(f_sphere, Population(30, 3, -15.0, 15.0), 10_000, rng)
Newtman.GeneralSimulatedAnnealing — Method
GeneralSimulatedAnnealing(f::Function, a, b, dim::Integer, rng;
low_temp=5_000, t0=500.0, qv=2.7, qa=-5.0
) -> OptimizationResults
GeneralSimulatedAnnealing(f::Benchmark, a, b, dim::Integer, rng;
low_temp=20_000, t0=500.0, qv=2.7, qa=-5.0
) -> OptimizationResults
Implementation of the generalized version of simulated annealing.
This implementation uses all the theory from Tsallis & Stariolo for the cooling schedule and the neighbor solution search. See GeneralSimulatedAnnealing for the implementation details.
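For orientation, the visiting-temperature schedule usually quoted from Tsallis & Stariolo has the form sketched below; the exact schedule used by this implementation may differ.
# Hedged sketch: generalized (Tsallis) visiting-temperature schedule, t = 1, 2, ...
visiting_temperature(t0, qv, t) = t0 * (2^(qv - 1) - 1) / ((1 + t)^(qv - 1) - 1)
visiting_temperature(500.0, 2.7, 100)   # temperature after 100 temperature-lowering steps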
Returns an OptimizationResults type with information relevant to the run executed, see OptimizationResults.
Arguments
f: any user-defined Function that can take AbstractArrays.
a: lower bound for the solution search space.
b: upper bound for the solution search space.
dim: dimension of the optimization problem.
rng: an object of type AbstractRNG.
Keyword arguments
It is recommended to use the default values provided.
t0: initial value for the temperature. The default is a reasonable value, but it should be tuned to the optimization problem.
low_temp: total number of iterations, short for temperature-lowering steps. This corresponds to the total number of Monte Carlo steps taken before the algorithm finishes.
qv: the Tsallis parameter; it controls the convergence of the cooling schedule and the neighbor search. Positive values in the interval $[1, 5/3)$ are best, because for values larger than 5/3 the neighbor search diverges.
qa: another Tsallis parameter; it controls convergence of the Metropolis-Hastings step and the acceptance probability involved. The more negative the value, the better, but Tsallis & Stariolo report that the default value works best.
Examples
using Newtman
using Random
rng = MersenneTwister()
# Define the 2D Rosenbrock function
rosenbrock2d(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
# Implement Simulated Annealing for a 2-dimensional Rosenbrock function, with
# 15000 iterations and default values.
val = GeneralSimulatedAnnealing(rosenbrock2d, -5.0, 5.0, 2, rng; low_temp=15_000)
Newtman.SimulatedAnnealing — Method
SimulatedAnnealing(f::Function, a, b, dim::Integer, rng;
low_temp=5000, t0=500.0
) -> OptimizationResults
SimulatedAnnealing(f::Benchmark, a, b, dim::Integer, rng;
low_temp=5000, t0=500.0
) -> OptimizationResults
Implementation of the classical version of simulated annealing.
This implementation uses a logarithmic cooling schedule and searches for candidate solutions by sampling from an approximated Boltzmann distribution, drawn as a normal distribution.
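As a hedged illustration of what such a scheme can look like (the exact forms used by the implementation may differ): a logarithmic cooling schedule and a normally distributed neighbor step whose scale is set by the current temperature.
using Random
# Illustrative only: one common logarithmic cooling schedule and a normal neighbor step.
cooling(t0, k) = t0 / log(k + 1)                              # temperature at step k = 1, 2, ...
neighbor(x, temp, rng) = x .+ sqrt(temp) .* randn(rng, length(x))
rng = MersenneTwister(3)
neighbor(zeros(2), cooling(500.0, 10), rng)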
Returns an OptimizationResults type with information relevant to the run executed, see OptimizationResults.
Arguments
f: any user-defined Function that can take AbstractArrays.
a: lower bound for the solution search space.
b: upper bound for the solution search space.
dim: dimension of the optimization problem.
rng: an object of type AbstractRNG.
Keyword arguments
It is recommended to use the default values provided.
t0: initial value for the temperature. The default is a reasonable value, but it should be tuned to the optimization problem.
low_temp: total number of iterations, short for temperature-lowering steps. This corresponds to the total number of Monte Carlo steps taken before the algorithm finishes.
Examples
using Newtman
using Random
rng = MersenneTwister()
# Define the 2D Rosenbrock function
rosenbrock2d(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
# Implement Simulated Annealing for a 2-dimensional Rosenbrock function, with
# 5000 iterations.
val = SimulatedAnnealing(rosenbrock2d, -5.0, 5.0, 2, rng; low_temp=5000)