mealpy.math_based package

mealpy.math_based.AOA module

class mealpy.math_based.AOA.OriginalAOA(epoch: int = 10000, pop_size: int = 100, alpha: float = 5, miu: float = 0.5, moa_min: float = 0.2, moa_max: float = 0.9, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Arithmetic Optimization Algorithm (AOA)

Links:
  1. https://doi.org/10.1016/j.cma.2020.113609

Hyper-parameters should be fine-tuned within the approximate ranges below for faster convergence toward the global optimum (see the schedule sketch after this list):
  • alpha (int): [3, 8], fixed, sensitive exploitation parameter. Default: 5

  • miu (float): [0.3, 1.0], fixed control parameter that adjusts the search process. Default: 0.5

  • moa_min (float): [0.1, 0.4], lower bound of the Math Optimizer Accelerated (MOA). Default: 0.2

  • moa_max (float): [0.5, 1.0], upper bound of the Math Optimizer Accelerated (MOA). Default: 0.9
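
For orientation, the paper schedules the MOA linearly between moa_min and moa_max, while the Math Optimizer Probability (MOP) decays with exponent 1/alpha. A minimal sketch of those schedules as described in the paper; the function names are illustrative, not mealpy internals:

>>> def moa(epoch, max_epochs, moa_min=0.2, moa_max=0.9):
>>>     # Math Optimizer Accelerated: grows linearly from moa_min to moa_max
>>>     return moa_min + epoch * (moa_max - moa_min) / max_epochs
>>>
>>> def mop(epoch, max_epochs, alpha=5):
>>>     # Math Optimizer Probability: decays from 1 to 0; larger alpha steepens the early decay
>>>     return 1 - (epoch / max_epochs) ** (1.0 / alpha)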

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, AOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = AOA.OriginalAOA(epoch=1000, pop_size=50, alpha=5, miu=0.5, moa_min=0.2, moa_max=0.9)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Abualigah, L., Diabat, A., Mirjalili, S., Abd Elaziz, M. and Gandomi, A.H., 2021. The arithmetic optimization algorithm. Computer methods in applied mechanics and engineering, 376, p.113609.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.math_based.CEM module

class mealpy.math_based.CEM.OriginalCEM(epoch: int = 10000, pop_size: int = 100, n_best: int = 20, alpha: float = 0.7, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Cross-Entropy Method (CEM)

Links:
  1. https://github.com/clever-algorithms/CleverAlgorithms

  2. https://doi.org/10.1007/s10479-005-5724-z

Hyper-parameters should be fine-tuned within the approximate ranges below for faster convergence toward the global optimum (see the sketch after this list):
  • n_best (int): number of best solutions selected as samples for the next evolution step

  • alpha (float): smoothing weight for the means and standard deviations of the normal distribution
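
To illustrate what n_best and alpha control, here is a self-contained sketch of one cross-entropy distribution update (elite selection followed by exponential smoothing); it illustrates the method, not the mealpy internals:

>>> import numpy as np
>>> rng = np.random.default_rng(42)
>>> means, stdevs = np.zeros(30), np.ones(30)        # current sampling distribution
>>> pop = rng.normal(means, stdevs, size=(100, 30))  # draw pop_size candidates
>>> fitness = np.sum(pop**2, axis=1)                 # minimization objective
>>> elites = pop[np.argsort(fitness)[:20]]           # keep the n_best samples
>>> alpha = 0.7
>>> means = alpha * means + (1 - alpha) * elites.mean(axis=0)    # smoothed mean update
>>> stdevs = alpha * stdevs + (1 - alpha) * elites.std(axis=0)   # smoothed stdev update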

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, CEM
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = CEM.OriginalCEM(epoch=1000, pop_size=50, n_best=20, alpha=0.7)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] De Boer, P.T., Kroese, D.P., Mannor, S. and Rubinstein, R.Y., 2005. A tutorial on the cross-entropy method. Annals of operations research, 134(1), pp.19-67.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialize_variables()[source]

mealpy.math_based.CGO module

class mealpy.math_based.CGO.OriginalCGO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Chaos Game Optimization (CGO)

Links:
  1. https://doi.org/10.1007/s10462-020-09867-w

Notes

  • The 4th seed is a mutation process, but the paper does not make clear whether mutation applies to multiple variables or a single one

  • The 4th alpha variable defined in the paper is never actually used

  • How the generated seeds replace the worst solutions is not clearly described (that section of the paper contains many grammatical errors)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, CGO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = CGO.OriginalCGO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Talatahari, S. and Azizi, M., 2021. Chaos Game Optimization: a novel metaheuristic algorithm. Artificial Intelligence Review, 54(2), pp.917-1004.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.math_based.CircleSA module

class mealpy.math_based.CircleSA.OriginalCircleSA(epoch=10000, pop_size=100, c_factor=0.8, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Circle Search Algorithm (CircleSA)

Links:
  1. https://doi.org/10.3390/math10101626

  2. https://www.mdpi.com/2227-7390/10/10/1626

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, CircleSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = CircleSA.OriginalCircleSA(epoch=1000, pop_size=50, c_factor=0.8)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Qais, M. H., Hasanien, H. M., Turky, R. A., Alghuwainem, S., Tostado-Véliz, M., & Jurado, F. (2022). Circle Search Algorithm: A Geometry-Based Metaheuristic Optimization Algorithm. Mathematics, 10(10), 1626.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.math_based.GBO module

class mealpy.math_based.GBO.OriginalGBO(epoch: int = 10000, pop_size: int = 100, pr: float = 0.5, beta_min: float = 0.2, beta_max: float = 1.2, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Gradient-Based Optimizer (GBO)

Hyper-parameters should be fine-tuned within the approximate ranges below for faster convergence toward the global optimum (see the β-schedule sketch after this list):
  • pr (float): [0.2, 0.8], probability parameter, default = 0.5

  • beta_min (float): fixed parameter (unnamed in the paper), default = 0.2

  • beta_max (float): fixed parameter (unnamed in the paper), default = 1.2
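
For reference, beta_min and beta_max bound a coefficient β that the paper anneals over the run. A hedged sketch of that schedule; the cubic form below follows the paper's definition of β, but treat it as an approximation rather than the exact mealpy code:

>>> def beta(epoch, max_epochs, beta_min=0.2, beta_max=1.2):
>>>     # beta starts at beta_max and shrinks toward beta_min as the run progresses
>>>     return beta_min + (beta_max - beta_min) * (1 - (epoch / max_epochs) ** 3) ** 2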

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GBO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GBO.OriginalGBO(epoch=1000, pop_size=50, pr=0.5, beta_min=0.2, beta_max=1.2)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Ahmadianfar, I., Bozorg-Haddad, O. and Chu, X., 2020. Gradient-based optimizer: A new metaheuristic optimization algorithm. Information Sciences, 540, pp.131-159.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.math_based.HC module

class mealpy.math_based.HC.OriginalHC(epoch: int = 10000, pop_size: int = 2, neighbour_size: int = 50, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Hill Climbing (HC)

Notes

  • The number of neighbour solutions is user-defined

  • The step size used to generate the neighbour group is randomized

  • HC is a single-solution algorithm, so the pop_size parameter does not matter here

Hyper-parameters should be fine-tuned within the approximate ranges below for faster convergence toward the global optimum (see the sketch after this list):
  • neighbour_size (int): [2, 1000], fixed, sensitive exploitation parameter. Default: 50
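
A minimal sketch of the neighbour-sampling step the notes describe, with a randomized step size; this is illustrative, not the mealpy implementation:

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> current = rng.uniform(-10, 10, size=30)                           # current solution
>>> step_size = rng.uniform()                                         # randomized step size per epoch
>>> neighbours = current + step_size * rng.standard_normal((50, 30))  # neighbour_size candidates
>>> candidate = neighbours[np.argmin(np.sum(neighbours**2, axis=1))]  # move greedily if it improves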

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, HC
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = HC.OriginalHC(epoch=1000, pop_size=50, neighbour_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mitchell, M., Holland, J. and Forrest, S., 1993. When will a genetic algorithm outperform hill climbing. Advances in neural information processing systems, 6.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.math_based.HC.SwarmHC(epoch=10000, pop_size=100, neighbour_size=10, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Swarm-based Hill Climbing (S-HC)

Notes

  • Based on the idea of a swarm of people trying to climb a mountain

  • The number of neighbour solutions is equal to the population size

  • The step size used to generate neighbours is randomized and based on the rank of each solution:
    • Solutions near the top of the mountain move more slowly than those near the bottom

    • Intuition: exploration when far from the global best, exploitation when near it

  • Whoever reaches the top of the mountain first is the winner (the global optimum)

Hyper-parameters should be fine-tuned within the approximate ranges below for faster convergence toward the global optimum (see the sketch after this list):
  • neighbour_size (int): [2, pop_size/2], fixed, sensitive exploitation parameter. Default: 10
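
A hedged sketch of the rank-based step-size idea (better-ranked solutions take smaller steps); the linear scaling rule below is an assumption for illustration, not the exact mealpy formula:

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> pop = rng.uniform(-10, 10, size=(50, 30))
>>> fitness = np.sum(pop**2, axis=1)
>>> ranks = np.argsort(np.argsort(fitness)) + 1        # rank 1 = best solution
>>> step_sizes = ranks / 50.0                          # best moves slowest, worst fastest
>>> neighbours = pop + step_sizes[:, None] * rng.standard_normal((50, 30))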

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, HC
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = HC.SwarmHC(epoch=1000, pop_size=50, neighbour_size=10)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]
Parameters

epoch (int) – The current iteration

mealpy.math_based.INFO module

class mealpy.math_based.INFO.OriginalINFO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: weIghted meaN oF vectOrs (INFO)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0957417422000173

  2. https://aliasgharheidari.com/INFO.html

  3. https://doi.org/10.1016/j.eswa.2022.116516

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, INFO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = INFO.OriginalINFO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Ahmadianfar, I., Heidari, A. A., Noshadian, S., Chen, H., & Gandomi, A. H. (2022). INFO: An efficient optimization algorithm based on weighted mean of vectors. Expert Systems with Applications, 195, 116516.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.math_based.PSS module

class mealpy.math_based.PSS.OriginalPSS(epoch: int = 10000, pop_size: int = 100, acceptance_rate: float = 0.9, sampling_method: str = 'LHS', **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Pareto-like Sequential Sampling (PSS)

Links:
  1. https://doi.org/10.1007/s00500-021-05853-8

  2. https://github.com/eesd-epfl/pareto-optimizer

Hyper-parameters should be fine-tuned within the approximate ranges below for faster convergence toward the global optimum (see the sampling sketch after this list):
  • acceptance_rate (float): [0.7, 0.96], probability of accepting a solution in the normal range, default = 0.9

  • sampling_method (str): 'LHS' (Latin Hypercube Sampling) or 'MC' (Monte Carlo), default = 'LHS'
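
To make the two sampling_method options concrete, here is a self-contained comparison of Latin Hypercube Sampling and plain Monte Carlo sampling over the unit hypercube; this is illustrative, not the mealpy internals:

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> n, d = 50, 30
>>> mc = rng.uniform(size=(n, d))      # 'MC': independent uniform draws
>>> lhs = np.empty((n, d))             # 'LHS': one draw per stratum, shuffled per dimension
>>> for j in range(d):
>>>     lhs[:, j] = rng.permutation((np.arange(n) + rng.uniform(size=n)) / n)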

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, PSS
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = PSS.OriginalPSS(epoch=1000, pop_size=50, acceptance_rate=0.8, sampling_method="LHS")
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Shaqfa, M. and Beyer, K., 2021. Pareto-like sequential sampling heuristic for global optimisation. Soft Computing, 25(14), pp.9077-9096.

create_population(pop_size=None)[source]
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialization()[source]
initialize_variables()[source]

mealpy.math_based.RUN module

class mealpy.math_based.RUN.OriginalRUN(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: RUNge Kutta optimizer (RUN)

Links:
  1. https://doi.org/10.1016/j.eswa.2021.115079

  2. https://imanahmadianfar.com/codes/

  3. https://www.aliasgharheidari.com/RUN.html

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, RUN
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = RUN.OriginalRUN(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Ahmadianfar, I., Heidari, A. A., Gandomi, A. H., Chu, X., & Chen, H. (2021). RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Systems with Applications, 181, 115079.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

get_index_of_best_agent__(pop)[source]
runge_kutta__(xb, xw, delta_x)[source]
uniform_random__(a, b, size)[source]

mealpy.math_based.SCA module

class mealpy.math_based.SCA.DevSCA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Sine Cosine Algorithm (SCA)

Notes

  • The flow and a few equations are changed

  • The third (per-dimension) loop is removed for faster computation (see the vectorized sketch below)
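
The removed third loop is the per-dimension update of the standard SCA equation; a hedged sketch of that update written fully vectorized (following the combined update rule in Mirjalili's paper; variable names are illustrative):

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> pop = rng.uniform(-10, 10, size=(50, 30))          # whole population at once
>>> best = pop[np.argmin(np.sum(pop**2, axis=1))]      # current destination point
>>> r1 = 2.0 * (1 - 1 / 1000)                          # shrinks linearly from 2 to 0 over epochs
>>> r2 = 2 * np.pi * rng.uniform(size=pop.shape)
>>> r3 = 2 * rng.uniform(size=pop.shape)
>>> r4 = rng.uniform(size=pop.shape)
>>> move = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))  # sine or cosine branch per element
>>> pop = pop + r1 * move * np.abs(r3 * best - pop)    # no loops over agents or dimensions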

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SCA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SCA.DevSCA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.math_based.SCA.OriginalSCA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.math_based.SCA.DevSCA

The original version of: Sine Cosine Algorithm (SCA)

Links:
  1. https://doi.org/10.1016/j.knosys.2015.12.022

  2. https://www.mathworks.com/matlabcentral/fileexchange/54948-sca-a-sine-cosine-algorithm

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SCA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SCA.OriginalSCA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mirjalili, S., 2016. SCA: a sine cosine algorithm for solving optimization problems. Knowledge-based systems, 96, pp.120-133.

amend_solution(solution: numpy.ndarray) → numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.math_based.SCA.QTable(n_states, n_actions, generator)[source]

Bases: object

get_action(state)[source]
get_action_params(action)[source]
get_state(density, distance)[source]
update(state, action, reward, alpha=0.1, gama=0.9)[source]
class mealpy.math_based.SCA.QleSCA(epoch: int = 10000, pop_size: int = 100, alpha: float = 0.1, gama: float = 0.9, **kwargs: object)[source]

Bases: mealpy.math_based.SCA.DevSCA

The original version of: QLE Sine Cosine Algorithm (QLE-SCA)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0957417421017048

Hyper-parameters should be fine-tuned within the approximate ranges below for faster convergence toward the global optimum (see the Q-update sketch after this list):
  • alpha (float): [0.1, 1.0], the learning rate in Q-learning, default = 0.1

  • gama (float): [0.1, 1.0], the discount factor, default = 0.9
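
For context, alpha and gama are the standard Q-learning hyper-parameters used by the embedded QTable (documented above). A textbook sketch of the tabular update rule; note that mealpy's QTable.update signature takes no next_state argument, so the generic form below is an assumption for illustration:

>>> import numpy as np
>>> q_table = np.zeros((9, 9))   # hypothetical 9 states x 9 actions
>>> def q_update(q_table, state, action, reward, next_state, alpha=0.1, gama=0.9):
>>>     # Temporal-difference rule: move Q toward reward + discounted best future value
>>>     best_future = q_table[next_state].max()
>>>     q_table[state, action] += alpha * (reward + gama * best_future - q_table[state, action])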

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SCA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SCA.QleSCA(epoch=1000, pop_size=50, alpha=0.1, gama=0.9)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Hamad, Q. S., Samma, H., Suandi, S. A., & Mohamad-Saleh, J. (2022). Q-learning embedded sine cosine algorithm (QLESCA). Expert Systems with Applications, 193, 116417.

amend_solution(solution: numpy.ndarray) → numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

density__(pop)[source]
distance__(best, pop, lb, ub)[source]
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) → mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

mealpy.math_based.SHIO module

class mealpy.math_based.SHIO.OriginalSHIO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Success History Intelligent Optimizer (SHIO)

Links:
  1. https://link.springer.com/article/10.1007/s11227-021-04093-9

  2. https://www.mathworks.com/matlabcentral/fileexchange/122157-success-history-intelligent-optimizer-shio

Notes

  1. The algorithm is designed with simplicity and ease of implementation in mind, utilizing basic operators.

  2. The algorithm has several limitations and performs weakly on a number of problems.

  3. Convergence is slow, and the original MATLAB code contains many errors and redundancies.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SHIO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SHIO.OriginalSHIO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Fakhouri, H. N., Hamad, F., & Alawamrah, A. (2022). Success history intelligent optimizer. The Journal of Supercomputing, 1-42.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration