mealpy.physics_based package

mealpy.physics_based.ASO module

class mealpy.physics_based.ASO.OriginalASO(epoch: int = 10000, pop_size: int = 100, alpha: int = 10, beta: float = 0.2, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Atom Search Optimization (ASO)

Links:
  1. https://doi.org/10.1016/j.knosys.2018.08.030

  2. https://www.mathworks.com/matlabcentral/fileexchange/67011-atom-search-optimization-aso-algorithm

Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • alpha (int): depth weight, default = 10, depends on the problem (see the tuning sketch after the example below)

  • beta (float): Multiplier weight, default = 0.2

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ASO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = ASO.OriginalASO(epoch=1000, pop_size=50, alpha = 50, beta = 0.2)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Zhao, W., Wang, L. and Zhang, Z., 2019. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowledge-Based Systems, 163, pp.283-304.

acceleration__(population, g_best, iteration)[source]
amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy
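
As an illustration only (the actual ASO strategy in mealpy may differ), a boundary-handling step of this kind typically maps out-of-range components back into the feasible box, for example by clipping to the bounds:

>>> import numpy as np
>>> lb, ub = np.full(30, -10.), np.full(30, 10.)   # hypothetical bounds
>>> def amend_by_clipping(solution):
>>>     # Clip every variable back into [lb, ub].
>>>     return np.clip(solution, lb, ub)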

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

find_LJ_potential__(iteration, average_dist, radius)[source]
generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

update_mass__(population)[source]

mealpy.physics_based.ArchOA module

class mealpy.physics_based.ArchOA.OriginalArchOA(epoch: int = 10000, pop_size: int = 100, c1: float = 2, c2: float = 6, c3: float = 2, c4: float = 0.5, acc_max: float = 0.9, acc_min: float = 0.1, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Archimedes Optimization Algorithm (ArchOA)

Links:
  1. https://doi.org/10.1007/s10489-020-01893-z

Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • c1 (float): factor, default = 2, recommended range [1, 2]

  • c2 (float): factor, default = 6, typically chosen from [2, 4, 6]

  • c3 (float): factor, default = 2, recommended range [1, 2]

  • c4 (float): factor, default = 0.5, recommended range [0.5, 1]

  • acc_max (float): maximum acceleration, default = 0.9 (see the scaling sketch below)

  • acc_min (float): minimum acceleration, default = 0.1
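
For intuition, the ArchOA paper min-max scales each agent's acceleration into [acc_min, acc_max]. A hedged sketch of that scaling (mealpy's internal variable names and details may differ):

>>> import numpy as np
>>> def normalize_acceleration(acc, acc_min=0.1, acc_max=0.9):
>>>     # Min-max scale raw acceleration values into [acc_min, acc_max].
>>>     return acc_min + (acc_max - acc_min) * (acc - acc.min()) / (acc.max() - acc.min())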

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ArchOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = ArchOA.OriginalArchOA(epoch=1000, pop_size=50, c1 = 2, c2 = 5, c3 = 2, c4 = 0.5, acc_max = 0.9, acc_min = 0.1)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Hashim, F.A., Hussain, K., Houssein, E.H., Mabrouk, M.S. and Al-Atabany, W., 2021. Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems. Applied Intelligence, 51(3), pp.1531-1551.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

mealpy.physics_based.CDO module

class mealpy.physics_based.CDO.OriginalCDO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Chernobyl Disaster Optimizer (CDO)

Links:
  1. https://link.springer.com/article/10.1007/s00521-023-08261-1

  2. https://www.mathworks.com/matlabcentral/fileexchange/124351-chernobyl-disaster-optimizer-cdo

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, CDO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = CDO.OriginalCDO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Shehadeh, H. A. (2023). Chernobyl disaster optimizer (CDO): a novel meta-heuristic method for global optimization. Neural Computing and Applications, 1-17.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.physics_based.EFO module

class mealpy.physics_based.EFO.DevEFO(epoch: int = 10000, pop_size: int = 100, r_rate: float = 0.3, ps_rate: float = 0.85, p_field: float = 0.1, n_field: float = 0.45, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Electromagnetic Field Optimization (EFO)

Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • r_rate (float): [0.1, 0.6], default = 0.3, similar to the mutation rate in GA but applied to a single variable

  • ps_rate (float): [0.5, 0.95], default = 0.85, similar to the crossover rate in GA

  • p_field (float): [0.05, 0.3], default = 0.1, portion of the population forming the positive field (see the partition sketch after this list)

  • n_field (float): [0.3, 0.7], default = 0.45, portion of the population forming the negative field
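
For intuition, the population is sorted from best to worst and split into a positive field (best agents), a neutral middle, and a negative field (worst agents). A hedged sketch of such an index split (the exact index arithmetic inside mealpy may differ):

>>> pop_size, p_field, n_field = 50, 0.1, 0.45
>>> pos_end = int(p_field * pop_size)           # positive field: the best agents
>>> neg_start = int((1 - n_field) * pop_size)   # negative field: the worst agents
>>> positive_ids = range(0, pos_end)
>>> neutral_ids = range(pos_end, neg_start)
>>> negative_ids = range(neg_start, pop_size)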

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, EFO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = EFO.DevEFO(epoch=1000, pop_size=50, r_rate = 0.3, ps_rate = 0.85, p_field = 0.1, n_field = 0.45)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.physics_based.EFO.OriginalEFO(epoch: int = 10000, pop_size: int = 100, r_rate: float = 0.3, ps_rate: float = 0.85, p_field: float = 0.1, n_field: float = 0.45, **kwargs: object)[source]

Bases: mealpy.physics_based.EFO.DevEFO

The original version of: Electromagnetic Field Optimization (EFO)

Links:
  1. https://www.mathworks.com/matlabcentral/fileexchange/52744-electromagnetic-field-optimization-a-physics-inspired-metaheuristic-optimization-algorithm

Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • r_rate (float): [0.1, 0.6], default = 0.3, similar to the mutation rate in GA but applied to a single variable

  • ps_rate (float): [0.5, 0.95], default = 0.85, similar to the crossover rate in GA

  • p_field (float): [0.05, 0.3], default = 0.1, portion of the population forming the positive field

  • n_field (float): [0.3, 0.7], default = 0.45, portion of the population forming the negative field

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, EFO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = EFO.OriginalEFO(epoch=1000, pop_size=50, r_rate = 0.3, ps_rate = 0.85, p_field = 0.1, n_field = 0.45)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Abedinpourshotorban, H., Shamsuddin, S.M., Beheshti, Z. and Jawawi, D.N., 2016. Electromagnetic field optimization: a physics-inspired metaheuristic optimization algorithm. Swarm and Evolutionary Computation, 26, pp.8-22.

amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialization()[source]

mealpy.physics_based.EO module

class mealpy.physics_based.EO.AdaptiveEO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.physics_based.EO.OriginalEO

The original version of: Adaptive Equilibrium Optimization (AEO)

Links:
  1. https://doi.org/10.1016/j.engappai.2020.103836

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, EO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = EO.AdaptiveEO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Wunnava, A., Naik, M.K., Panda, R., Jena, B. and Abraham, A., 2020. A novel interdependence based multilevel thresholding technique using adaptive equilibrium optimizer. Engineering Applications of Artificial Intelligence, 94, p.103836.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.physics_based.EO.ModifiedEO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.physics_based.EO.OriginalEO

The original version of: Modified Equilibrium Optimizer (MEO)

Links:
  1. https://doi.org/10.1016/j.asoc.2020.106542

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, EO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = EO.ModifiedEO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Gupta, S., Deep, K. and Mirjalili, S., 2020. An efficient equilibrium optimizer with mutation strategy for numerical optimization. Applied Soft Computing, 96, p.106542.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.physics_based.EO.OriginalEO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Equilibrium Optimizer (EO)

Links:
  1. https://doi.org/10.1016/j.knosys.2019.105190

  2. https://www.mathworks.com/matlabcentral/fileexchange/73352-equilibrium-optimizer-eo

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, EO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = EO.OriginalEO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Faramarzi, A., Heidarinejad, M., Stephens, B. and Mirjalili, S., 2020. Equilibrium optimizer: A novel optimization algorithm. Knowledge-Based Systems, 191, p.105190.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

make_equilibrium_pool__(list_equilibrium=None)[source]
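
For intuition, the equilibrium pool described in the original EO paper contains the four best candidates found so far plus their average. A minimal sketch of that idea (mealpy's make_equilibrium_pool__ operates on Agent objects, so its implementation differs):

>>> import numpy as np
>>> def make_equilibrium_pool(best_four_solutions):
>>>     # Pool = the four best solutions plus their element-wise mean.
>>>     pool = list(best_four_solutions)
>>>     pool.append(np.mean(best_four_solutions, axis=0))
>>>     return pool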

mealpy.physics_based.EVO module

class mealpy.physics_based.EVO.OriginalEVO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Energy Valley Optimizer (EVO)

Links:
  1. https://www.nature.com/articles/s41598-022-27344-y

  2. https://www.mathworks.com/matlabcentral/fileexchange/123130-energy-valley-optimizer-a-novel-metaheuristic-algorithm

Notes

  1. The algorithm is straightforward and does not require any specialized knowledge or techniques.

  2. The algorithm may not perform optimally due to slow convergence and weak search operations, which could be improved with better strategies and operators.

  3. It tends to get stuck in a local optimum around half of the maximum number of generations, because the fitness distance is used as a factor in the update equations.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, EVO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = EVO.OriginalEVO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Azizi, M., Aickelin, U., A. Khorshidi, H., & Baghalzadeh Shishehgarkhaneh, M. (2023). Energy valley optimizer: a novel metaheuristic algorithm for global and engineering optimization. Scientific Reports, 13(1), 226.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.physics_based.FLA module

class mealpy.physics_based.FLA.OriginalFLA(epoch: int = 10000, pop_size: int = 100, C1: float = 0.5, C2: float = 2.0, C3: float = 0.1, C4: float = 0.2, C5: float = 2.0, DD: float = 0.01, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Fick’s Law Algorithm (FLA)

Notes

  1. The algorithm contains a high number of parameters, some of which may be unnecessary.

  2. Despite its complexity, the algorithm may not perform optimally and can become trapped in local optima.

  3. Division by the fitness value may cause overflow issues.

  4. https://www.mathworks.com/matlabcentral/fileexchange/121033-fick-s-law-algorithm-fla

Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • C1 (float): factor C1, default=0.5

  • C2 (float): factor C2, default=2.0

  • C3 (float): factor C3, default=0.1

  • C4 (float): factor C4, default=0.2

  • C5 (float): factor C5, default=2.0

  • DD (float): factor D in the paper, default=0.01

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, FLA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = FLA.OriginalFLA(epoch=1000, pop_size=50, C1 = 0.5, C2 = 2.0, C3 = 0.1, C4 = 0.2, C5 = 2.0, DD = 0.01)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Hashim, F. A., Mostafa, R. R., Hussien, A. G., Mirjalili, S., & Sallam, K. M. (2023). Fick’s Law Algorithm: A physical law-based algorithm for numerical optimization. Knowledge-Based Systems, 260, 110146.

before_main_loop()[source]
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.physics_based.HGSO module

class mealpy.physics_based.HGSO.OriginalHGSO(epoch: int = 10000, pop_size: int = 100, n_clusters: int = 2, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Henry Gas Solubility Optimization (HGSO)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0167739X19306557

Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • n_clusters (int): [2, 10], number of clusters, default = 2
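
For intuition, n_clusters controls how many gas groups the population is divided into. A hedged sketch of an even split (mealpy's grouping logic may differ):

>>> pop_size, n_clusters = 50, 2
>>> size = pop_size // n_clusters
>>> clusters = [list(range(i * size, (i + 1) * size)) for i in range(n_clusters)]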

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, HGSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = HGSO.OriginalHGSO(epoch=1000, pop_size=50, n_clusters = 3)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Hashim, F.A., Houssein, E.H., Mabrouk, M.S., Al-Atabany, W. and Mirjalili, S., 2019. Henry gas solubility optimization: A novel physics-based algorithm. Future Generation Computer Systems, 101, pp.646-667.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

flatten_group__(group)[source]
get_best_solution_in_team__(group=None)[source]
initialization()[source]
initialize_variables()[source]

mealpy.physics_based.MVO module

class mealpy.physics_based.MVO.DevMVO(epoch: int = 10000, pop_size: int = 100, wep_min: float = 0.2, wep_max: float = 1.0, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Multi-Verse Optimizer (MVO)

Notes

  • The new roulette wheel selection can handle negative values (see the selection sketch below)

  • Removed the condition on fitness normalization, so a higher normalized fitness simply gives a proportionally higher chance of being selected
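
A hedged sketch of a roulette wheel that tolerates negative weights by shifting them into the positive range before drawing (not necessarily the exact shift mealpy applies):

>>> import numpy as np
>>> rng = np.random.default_rng()
>>> def roulette_wheel_selection(weights):
>>>     weights = np.asarray(weights, dtype=float)
>>>     if weights.min() < 0:
>>>         weights = weights - weights.min() + 1e-10   # shift so every weight is positive
>>>     return rng.choice(len(weights), p=weights / weights.sum())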

Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • wep_min (float): [0.05, 0.3], Wormhole Existence Probability (min value in Eq. (3.3) of the paper), default = 0.2

  • wep_max (float): [0.75, 1.0], Wormhole Existence Probability (max value in Eq. (3.3) of the paper), default = 1.0

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, MVO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = MVO.DevMVO(epoch=1000, pop_size=50, wep_min = 0.2, wep_max = 1.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.physics_based.MVO.OriginalMVO(epoch: int = 10000, pop_size: int = 100, wep_min: float = 0.2, wep_max: float = 1.0, **kwargs: object)[source]

Bases: mealpy.physics_based.MVO.DevMVO

The original version of: Multi-Verse Optimizer (MVO)

Links:
  1. https://dx.doi.org/10.1007/s00521-015-1870-7

  2. https://www.mathworks.com/matlabcentral/fileexchange/50112-multi-verse-optimizer-mvo

Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • wep_min (float): [0.05, 0.3], Wormhole Existence Probability (min value in Eq. (3.3) of the paper), default = 0.2

  • wep_max (float): [0.75, 1.0], Wormhole Existence Probability (max value in Eq. (3.3) of the paper), default = 1.0 (see the schedule sketch after this list)
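
For intuition, the Wormhole Existence Probability of Eq. (3.3) grows linearly from wep_min to wep_max over the run. A minimal sketch (mealpy's exact epoch indexing may differ slightly):

>>> def wormhole_existence_probability(epoch, max_epoch, wep_min=0.2, wep_max=1.0):
>>>     # Linear increase from wep_min (first epoch) to wep_max (last epoch).
>>>     return wep_min + epoch * (wep_max - wep_min) / max_epoch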

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, MVO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = MVO.OriginalMVO(epoch=1000, pop_size=50, wep_min = 0.2, wep_max = 1.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mirjalili, S., Mirjalili, S.M. and Hatamlou, A., 2016. Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Computing and Applications, 27(2), pp.495-513.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

normalize__(d, to_sum=True)[source]
roulette_wheel_selection__(weights=None)[source]

mealpy.physics_based.NRO module

class mealpy.physics_based.NRO.OriginalNRO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Nuclear Reaction Optimization (NRO)

Links:
  1. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8720256

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, NRO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = NRO.OriginalNRO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Wei, Z., Huang, C., Wang, X., Han, T. and Li, Y., 2019. Nuclear reaction optimization: A novel and powerful physics-based algorithm for global optimization. IEEE Access, 7, pp.66084-66109.

[2] Wei, Z.L., Zhang, Z.R., Huang, C.Q., Han, B., Tang, S.Q. and Wang, L., 2019, June. An Approach Inspired from Nuclear Reaction Processes for Numerical Optimization. In Journal of Physics: Conference Series (Vol. 1213, No. 3, p. 032009). IOP Publishing.

amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.physics_based.RIME module

class mealpy.physics_based.RIME.OriginalRIME(epoch: int = 10000, pop_size: int = 100, sr: float = 5.0, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: physical phenomenon of RIME-ice (RIME)

Links:
  1. https://doi.org/10.1016/j.neucom.2023.02.010

  2. https://www.mathworks.com/matlabcentral/fileexchange/124610-rime-a-physics-based-optimization

Notes (parameters):
  1. sr (float): soft-rime parameter, default=5.0

  2. The algorithm is straightforward and does not require any specialized knowledge or techniques.

  3. The algorithm may exhibit slow convergence and may not perform optimally.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, RIME
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = RIME.OriginalRIME(epoch=1000, pop_size=50, sr = 5.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Su, H., Zhao, D., Heidari, A. A., Liu, L., Zhang, X., Mafarja, M., & Chen, H. (2023). RIME: A physics-based optimization. Neurocomputing.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.physics_based.SA module

class mealpy.physics_based.SA.GaussianSA(epoch: int = 10000, pop_size: int = 2, temp_init: float = 100, cooling_rate: float = 0.99, scale: float = 0.1, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version of: Gaussian Simulated Annealing (GaussianSA)

Notes

  • SA is a single-solution-based algorithm, so the pop_size parameter does not matter here

  • temp_init is a very important factor; it should be set roughly equal to the distance between LB and UB (see the sketch after the example below)

Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • temp_init (float): [1, 10000], initial temperature, default=100

  • cooling_rate (float): (0., 1.0), cooling rate, default=0.99

  • scale (float): (0., 100.), the scale in gaussian random, default=0.1

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SA.GaussianSA(epoch=1000, pop_size=2, temp_init = 100, cooling_rate = 0.99, scale = 0.1)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
before_main_loop()[source]
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.physics_based.SA.OriginalSA(epoch: int = 10000, pop_size: int = 2, temp_init: float = 100, step_size: float = 0.1, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Simulated Annealing (OriginalSA)

Notes

  • SA is a single-solution-based algorithm, so the pop_size parameter does not matter here

Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • temp_init (float): [1, 10000], initial temperature, default=100

  • step_size (float): the step size of random movement, default=0.1
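
For background, classic simulated annealing accepts a worse candidate with probability exp(-Δf / T), where Δf is the fitness deterioration and T the current temperature. A minimal sketch of that rule (mealpy's internal implementation details may differ):

>>> import numpy as np
>>> rng = np.random.default_rng()
>>> def accept(delta_fitness, temperature):
>>>     # Always accept improvements; accept worse moves with Boltzmann probability.
>>>     if delta_fitness <= 0:
>>>         return True
>>>     return rng.random() < np.exp(-delta_fitness / temperature)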

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SA.OriginalSA(epoch=1000, pop_size=50, temp_init = 100, step_size = 0.1)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Kirkpatrick, S., Gelatt Jr, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. science, 220(4598), 671-680.

before_main_loop()[source]
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.physics_based.SA.SwarmSA(epoch: int = 10000, pop_size: int = 100, max_sub_iter: int = 5, t0: int = 1000, t1: int = 1, move_count: int = 5, mutation_rate: float = 0.1, mutation_step_size: float = 0.1, mutation_step_size_damp: float = 0.99, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The swarm version of: Simulated Annealing (SwarmSA)

Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • max_sub_iter (int): [5, 10, 15], Maximum Number of Sub-Iteration (within fixed temperature), default=5

  • t0 (int): Fixed parameter, Initial Temperature, default=1000

  • t1 (int): Fixed parameter, Final Temperature, default=1

  • move_count (int): [5, 20], Move Count per Individual Solution, default=5

  • mutation_rate (float): [0.01, 0.2], Mutation Rate, default=0.1

  • mutation_step_size (float): [0.05, 0.1, 0.15], Mutation Step Size, default=0.1

  • mutation_step_size_damp (float): [0.8, 0.99], Mutation Step Size Damp, default=0.99
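
One common way to interpolate the temperature from t0 down to t1 over the run is geometric decay. A hedged sketch (mealpy may use a different schedule):

>>> def temperature(current_epoch, max_epoch, t0=1000, t1=1):
>>>     # Geometric decay from t0 at the start to t1 at the final epoch.
>>>     return t0 * (t1 / t0) ** (current_epoch / max_epoch)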

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SA.SwarmSA(epoch=1000, pop_size=50, max_sub_iter = 5, t0 = 1000, t1 = 1,
>>>         move_count = 5, mutation_rate = 0.1, mutation_step_size = 0.1, mutation_step_size_damp = 0.99)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Van Laarhoven, P.J. and Aarts, E.H., 1987. Simulated annealing. In Simulated annealing: Theory and applications (pp. 7-15). Springer, Dordrecht.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialization()[source]
mutate__(position, sigma)[source]

mealpy.physics_based.TWO module

class mealpy.physics_based.TWO.EnhancedTWO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.physics_based.TWO.OppoTWO, mealpy.physics_based.TWO.LevyTWO

The original version of: Enhanced Tug of War Optimization (ETWO)

Links:
  1. https://doi.org/10.1016/j.procs.2020.03.063

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TWO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TWO.EnhancedTWO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Nguyen, T., Hoang, B., Nguyen, G. and Nguyen, B.M., 2020. A new workload prediction model using extreme learning machine and enhanced tug of war optimization. Procedia Computer Science, 170, pp.362-369.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialization()[source]
class mealpy.physics_based.TWO.LevyTWO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.physics_based.TWO.OriginalTWO

The Levy-flight version of: Tug of War Optimization (LevyTWO)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TWO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TWO.LevyTWO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.physics_based.TWO.OppoTWO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.physics_based.TWO.OriginalTWO

The opposition-based learning version of: Tug of War Optimization (OTWO)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TWO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TWO.OppoTWO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialization()[source]
class mealpy.physics_based.TWO.OriginalTWO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Tug of War Optimization (TWO)

Links:
  1. https://www.researchgate.net/publication/332088054_Tug_of_War_Optimization_Algorithm

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TWO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TWO.OriginalTWO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Kaveh, A., 2017. Tug of war optimization. In Advances in metaheuristic algorithms for optimal design of structures (pp. 451-487). Springer, Cham.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

initialization()[source]
update_weight__(teams)[source]

mealpy.physics_based.WDO module

class mealpy.physics_based.WDO.OriginalWDO(epoch: int = 10000, pop_size: int = 100, RT: int = 3, g_c: float = 0.2, alp: float = 0.4, c_e: float = 0.4, max_v: float = 0.3, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Wind Driven Optimization (WDO)

Notes
Hyper-parameters should be fine-tuned within an approximate range for faster convergence toward the global optimum:
  • RT (int): [2, 3, 4], RT coefficient, default = 3

  • g_c (float): [0.1, 0.5], gravitational constant, default = 0.2

  • alp (float): [0.3, 0.8], constant in the update equation, default=0.4

  • c_e (float): [0.1, 0.9], Coriolis effect, default=0.4

  • max_v (float): [0.1, 0.9], maximum allowed speed, default=0.3 (see the clamp sketch below)
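
For intuition, max_v caps the wind velocity in every dimension. A hedged sketch of such a clamp (mealpy's exact handling may differ):

>>> import numpy as np
>>> def clamp_velocity(velocity, max_v=0.3):
>>>     # Limit each velocity component to [-max_v, max_v].
>>>     return np.clip(velocity, -max_v, max_v)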

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, WDO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = WDO.OriginalWDO(epoch=1000, pop_size=50, RT = 3, g_c = 0.2, alp = 0.4, c_e = 0.4, max_v = 0.3)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Bayraktar, Z., Komurcu, M., Bossard, J.A. and Werner, D.H., 2013. The wind driven optimization technique and its application in electromagnetics. IEEE transactions on antennas and propagation, 61(5), pp.2745-2757.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialize_variables()[source]