mealpy.swarm_based package

mealpy.swarm_based.ABC module

class mealpy.swarm_based.ABC.OriginalABC(epoch: int = 10000, pop_size: int = 100, n_limits: int = 25, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Artificial Bee Colony (ABC)

Links:
  1. https://www.sciencedirect.com/topics/computer-science/artificial-bee-colony

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum; a sketch of the abandonment rule follows the list:
  • n_limits (int): Limit of trials before abandoning a food source, default=25
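
A minimal, hypothetical sketch of how an n_limits rule typically works in ABC; the counter and helper names below are illustrative assumptions, not mealpy internals:

>>> import numpy as np
>>> rng = np.random.default_rng(42)
>>> n_limits, pop_size, n_vars, lb, ub = 25, 10, 5, -10., 10.
>>> positions = rng.uniform(lb, ub, (pop_size, n_vars))
>>> trials = np.zeros(pop_size, dtype=int)  # failed-improvement counter per food source
>>>
>>> def try_improve(idx):
>>>     return rng.random() < 0.5  # dummy stand-in for a real neighborhood search
>>>
>>> for idx in range(pop_size):
>>>     if try_improve(idx):
>>>         trials[idx] = 0  # any improvement resets the counter
>>>     else:
>>>         trials[idx] += 1
>>>     if trials[idx] > n_limits:  # exhausted source: a scout bee re-initializes it
>>>         positions[idx] = rng.uniform(lb, ub, n_vars)
>>>         trials[idx] = 0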

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ABC
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = ABC.OriginalABC(epoch=1000, pop_size=50, n_limits = 50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] B. Basturk, D. Karaboga, An artificial bee colony (ABC) algorithm for numeric function optimization, in: IEEE Swarm Intelligence Symposium 2006, May 12–14, Indianapolis, IN, USA, 2006.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

initialize_variables()[source]

mealpy.swarm_based.ACOR module

class mealpy.swarm_based.ACOR.OriginalACOR(epoch: int = 10000, pop_size: int = 100, sample_count: int = 25, intent_factor: float = 0.5, zeta: float = 1.0, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Ant Colony Optimization Continuous (ACOR)

Notes

  • Uses a Gaussian distribution (np.random.normal) instead of uniform random numbers (np.random.rand)

  • Amends solutions when they go outside the search space

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum; a sketch of the Gaussian sampling step follows the list:
  • sample_count (int): [2, 10000], number of newly generated samples, default = 25

  • intent_factor (float): [0.2, 1.0], intensification factor / selection pressure (q in the paper), default = 0.5

  • zeta (float): [1, 2, 3], deviation-distance ratio, default = 1.0
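
A hedged sketch of the Gaussian solution construction from Socha and Dorigo (2008), with intent_factor playing the role of q and zeta the deviation-distance ratio; the archive handling below is illustrative, not mealpy's internals:

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> k, n_vars, q, zeta = 10, 5, 0.5, 1.0
>>> archive = rng.uniform(-10, 10, (k, n_vars))  # assume rows are ranked best-to-worst
>>> ranks = np.arange(1, k + 1)
>>> w = np.exp(-(ranks - 1)**2 / (2 * (q * k)**2)) / (q * k * np.sqrt(2 * np.pi))
>>> l = rng.choice(k, p=w / w.sum())  # pick one kernel solution by Gaussian weight
>>> sigma = zeta * np.abs(archive - archive[l]).sum(axis=0) / (k - 1)
>>> new_solution = rng.normal(archive[l], sigma)  # one Gaussian sample per dimension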

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ACOR
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = ACOR.OriginalACOR(epoch=1000, pop_size=50, sample_count = 25, intent_factor = 0.5, zeta = 1.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Socha, K. and Dorigo, M., 2008. Ant colony optimization for continuous domains. European journal of operational research, 185(3), pp.1155-1173.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.AGTO module

class mealpy.swarm_based.AGTO.MGTO(epoch: int = 10000, pop_size: int = 100, pp: float = 0.03, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Modified Gorilla Troops Optimization (mGTO)

Notes (parameters):
  1. pp (float): the probability of transition in exploration phase (p in the paper), default = 0.03

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, AGTO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = AGTO.MGTO(epoch=1000, pop_size=50, pp=0.03)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mostafa, R. R., Gaheen, M. A., Abd ElAziz, M., Al-Betar, M. A., & Ewees, A. A. (2023). An improved gorilla troops optimizer for global optimization problems and feature selection. Knowledge-Based Systems, 110462.

amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on the optimizer's strategy. Each optimizer can override it.

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.AGTO.OriginalAGTO(epoch: int = 10000, pop_size: int = 100, p1: float = 0.03, p2: float = 0.8, beta: float = 3.0, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Artificial Gorilla Troops Optimization (AGTO)

Links:
  1. https://doi.org/10.1002/int.22535

  2. https://www.mathworks.com/matlabcentral/fileexchange/95953-artificial-gorilla-troops-optimizer

Notes (parameters):
  1. p1 (float): the probability of transition in exploration phase (p in the paper), default = 0.03

  2. p2 (float): the probability of transition in exploitation phase (w in the paper), default = 0.8

  3. beta (float): coefficient in updating equation, should be in [-5.0, 5.0], default = 3.0

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, AGTO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = AGTO.OriginalAGTO(epoch=1000, pop_size=50, p1=0.03, p2=0.8, beta=3.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Abdollahzadeh, B., Soleimanian Gharehchopogh, F., & Mirjalili, S. (2021). Artificial gorilla troops optimizer: a new nature‐inspired metaheuristic algorithm for global optimization problems. International Journal of Intelligent Systems, 36(10), 5887-5958.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.ALO module

class mealpy.swarm_based.ALO.DevALO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.swarm_based.ALO.OriginalALO

The developed version of: Ant Lion Optimizer (ALO)

Notes

  • Improves performance by removing the for-loop when creating the n random walks

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ALO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = ALO.DevALO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

random_walk_antlion__(solution, current_epoch)[source]
class mealpy.swarm_based.ALO.OriginalALO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Ant Lion Optimizer (ALO)

Links:
  1. https://www.mathworks.com/matlabcentral/fileexchange/49920-ant-lion-optimizer-alo

  2. https://dx.doi.org/10.1016/j.advengsoft.2015.01.010

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ALO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = ALO.OriginalALO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mirjalili, S., 2015. The ant lion optimizer. Advances in engineering software, 83, pp.80-98.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

random_walk_antlion__(solution, current_epoch)[source]

mealpy.swarm_based.AO module

class mealpy.swarm_based.AO.OriginalAO(epoch=10000, pop_size=100, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Aquila Optimization (AO)

Links:
  1. https://doi.org/10.1016/j.cie.2021.107250

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, AO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = AO.OriginalAO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Abualigah, L., Yousri, D., Abd Elaziz, M., Ewees, A.A., Al-Qaness, M.A. and Gandomi, A.H., 2021. Aquila optimizer: a novel meta-heuristic optimization algorithm. Computers & Industrial Engineering, 157, p.107250.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.ARO module

class mealpy.swarm_based.ARO.IARO(epoch=10000, pop_size=100, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The improved version of: Artificial Rabbits Optimization (IARO)

Links:
  1. https://doi.org/10.1016/j.engappai.2022.105082

  2. https://www.mathworks.com/matlabcentral/fileexchange/110250-artificial-rabbits-optimization-aro

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ARO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = ARO.IARO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Wang, L., Cao, Q., Zhang, Z., Mirjalili, S., & Zhao, W. (2022). Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Engineering Applications of Artificial Intelligence, 114, 105082.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.ARO.LARO(epoch=10000, pop_size=100, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The improved version of: Artificial Rabbits Optimization with Lévy flight and selective opposition (LARO)

Links:
  1. https://doi.org/10.3390/sym14112282

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ARO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = ARO.LARO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Wang, Y., Huang, L., Zhong, J., & Hu, G. (2022). LARO: Opposition-based learning boosted artificial rabbits-inspired optimization algorithm with Lévy flight. Symmetry, 14(11), 2282.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.ARO.OriginalARO(epoch=10000, pop_size=100, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Artificial Rabbits Optimization (ARO)

Links:
  1. https://doi.org/10.1016/j.engappai.2022.105082

  2. https://www.mathworks.com/matlabcentral/fileexchange/110250-artificial-rabbits-optimization-aro

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ARO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = ARO.OriginalARO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Wang, L., Cao, Q., Zhang, Z., Mirjalili, S., & Zhao, W. (2022). Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Engineering Applications of Artificial Intelligence, 114, 105082.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.AVOA module

class mealpy.swarm_based.AVOA.OriginalAVOA(epoch: int = 10000, pop_size: int = 100, p1: float = 0.6, p2: float = 0.4, p3: float = 0.6, alpha: float = 0.8, gama: float = 2.5, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: African Vultures Optimization Algorithm (AVOA)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0360835221003120

  2. https://www.mathworks.com/matlabcentral/fileexchange/94820-african-vultures-optimization-algorithm

Notes (parameters):
  • p1 (float): probability of status transition, default 0.6

  • p2 (float): probability of status transition, default 0.4

  • p3 (float): probability of status transition, default 0.6

  • alpha (float): probability of 1st best, default = 0.8

  • gama (float): a factor in the paper (has little effect on the algorithm), default = 2.5

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, AVOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = AVOA.OriginalAVOA(epoch=1000, pop_size=50, p1=0.6, p2=0.4, p3=0.6, alpha=0.8, gama=2.5)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Abdollahzadeh, B., Gharehchopogh, F. S., & Mirjalili, S. (2021). African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Computers & Industrial Engineering, 158, 107408.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.BA module

class mealpy.swarm_based.BA.AdaptiveBA(epoch: int = 10000, pop_size: object = 100, loudness_min: float = 1.0, loudness_max: float = 2.0, pr_min: float = 0.15, pr_max: float = 0.85, pf_min: float = - 10.0, pf_max: float = 10.0, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Adaptive Bat-inspired Algorithm (ABA)

Notes

  • The values of A and r change after each iteration; a sketch of one such adaptation follows the parameter list below

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • loudness_min (float): A_min - loudness, default=1.0

  • loudness_max (float): A_max - loudness, default=2.0

  • pr_min (float): pulse rate / emission rate min, default = 0.15

  • pr_max (float): pulse rate / emission rate max, default = 0.85

  • pf_min (float): pulse frequency min, default = 0

  • pf_max (float): pulse frequency max, default = 10
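
A sketch of one common adaptation scheme from Yang (2010); the alpha and gamma constants here are assumptions, not necessarily the values mealpy uses internally:

>>> import numpy as np
>>> alpha, gamma = 0.9, 0.9
>>> A, r0 = 2.0, 0.85  # e.g. start loudness at loudness_max, pulse rate target at pr_max
>>> for t in range(1, 6):
>>>     A = alpha * A  # loudness decays toward loudness_min
>>>     r = r0 * (1. - np.exp(-gamma * t))  # pulse rate rises toward its max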

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = BA.AdaptiveBA(epoch=1000, pop_size=50, loudness_min=1.0, loudness_max=2.0, pr_min=0.15, pr_max=0.85, pf_min=0.0, pf_max=10.)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Yang, X.S., 2010. A new metaheuristic bat-inspired algorithm. In Nature inspired cooperative strategies for optimization (NICSO 2010) (pp. 65-74). Springer, Berlin, Heidelberg.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate a new agent with the given solution

Parameters

solution (np.ndarray) – The solution

class mealpy.swarm_based.BA.DevBA(epoch=10000, pop_size=100, pulse_rate=0.95, pf_min=0.0, pf_max=10.0, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The developed version of: Bat-inspired Algorithm (DBA)

Notes

  • A (loudness) parameter is removed

  • Flow is changed (a sketch of this flow follows the parameter list below):
    • 1st: the exploration phase proceeds (using frequency)

    • 2nd: if the new position has better fitness, it replaces the old position

    • 3rd: otherwise, the exploitation phase proceeds (searching around the best position found so far)

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • pulse_rate (float): [0.7, 1.0], pulse rate / emission rate, default = 0.95

  • pulse_frequency (tuple, list): (pf_min, pf_max) -> ([0, 3], [5, 20]), pulse frequency, default = (0, 10)
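
A minimal sketch of the three-step flow above; the objective, bounds, and greedy rule here are illustrative assumptions, not mealpy internals:

>>> import numpy as np
>>> rng = np.random.default_rng(1)
>>> pf_min, pf_max, pulse_rate, lb, ub, n_vars = 0., 10., 0.95, -10., 10., 5
>>> fit = lambda x: np.sum(x**2)
>>> x, v = rng.uniform(lb, ub, n_vars), np.zeros(n_vars)
>>> g_best = rng.uniform(lb, ub, n_vars)  # best position found so far
>>> pf = pf_min + (pf_max - pf_min) * rng.random()  # 1st: exploration via frequency
>>> v = v + pf * (x - g_best)
>>> x_new = np.clip(x + v, lb, ub)
>>> if fit(x_new) < fit(x):  # 2nd: greedy replacement on improvement
>>>     x = x_new
>>> elif rng.random() > pulse_rate:  # 3rd: exploit around the best position so far
>>>     x_local = np.clip(g_best + 0.01 * rng.normal(0, 1, n_vars), lb, ub)
>>>     x = x_local if fit(x_local) < fit(x) else x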

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = BA.DevBA(epoch=1000, pop_size=50, pulse_rate = 0.95, pf_min = 0., pf_max = 10.)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

initialize_variables()[source]
class mealpy.swarm_based.BA.OriginalBA(epoch: int = 10000, pop_size: int = 100, loudness: float = 0.8, pulse_rate: float = 0.95, pf_min: float = 0.0, pf_max: float = 10.0, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Bat-inspired Algorithm (BA)

Notes

  • The values of the A and r parameters are constant

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • loudness (float): (1.0, 2.0), loudness, default = 0.8

  • pulse_rate (float): (0.15, 0.85), pulse rate / emission rate, default = 0.95

  • pulse_frequency (list, tuple): (pf_min, pf_max) -> ([0, 3], [5, 20]), pulse frequency, default = (0, 10)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = BA.OriginalBA(epoch=1000, pop_size=50, loudness=0.8, pulse_rate=0.95, pf_min=0.1, pf_max=10.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Yang, X.S., 2010. A new metaheuristic bat-inspired algorithm. In Nature inspired cooperative strategies for optimization (NICSO 2010) (pp. 65-74). Springer, Berlin, Heidelberg.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate a new agent with the given solution

Parameters

solution (np.ndarray) – The solution

mealpy.swarm_based.BES module

class mealpy.swarm_based.BES.OriginalBES(epoch: int = 10000, pop_size: int = 100, a_factor: int = 10, R_factor: float = 1.5, alpha: float = 2.0, c1: float = 2.0, c2: float = 2.0, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Bald Eagle Search (BES)

Links:
  1. https://doi.org/10.1007/s10462-019-09732-5

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • a_factor (int): default = 10, determines the corner between the point search in the central point, in [5, 10]

  • R_factor (float): default = 1.5, determines the number of search cycles, in [0.5, 2]

  • alpha (float): default = 2.0, controls the changes in position, in [1.5, 2]

  • c1 (float): default = 2.0, in [1, 2]

  • c2 (float): default = 2.0; c1 and c2 increase the movement intensity of the bald eagles towards the best and centre points

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BES
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = BES.OriginalBES(epoch=1000, pop_size=50, a_factor = 10, R_factor = 1.5, alpha = 2.0, c1 = 2.0, c2 = 2.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Alsattar, H.A., Zaidan, A.A. and Zaidan, B.B., 2020. Novel meta-heuristic bald eagle search optimisation algorithm. Artificial Intelligence Review, 53(3), pp.2237-2264.

create_x_y_x1_y1__()[source]

Uses numpy vectors for faster computation

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.BFO module

class mealpy.swarm_based.BFO.ABFO(epoch: int = 10000, pop_size: int = 100, C_s: float = 0.1, C_e: float = 0.001, Ped: float = 0.01, Ns: int = 4, N_adapt: int = 2, N_split: int = 40, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Adaptive Bacterial Foraging Optimization (ABFO)

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:

  • C_s (float): step size start, default=0.1

  • C_e (float): step size end, default=0.001

  • Ped (float): probability of elimination, default=0.01

  • Ns (int): swim length, default=4

  • N_adapt (int): dead threshold value, default=2

  • N_split (int): split threshold value, default=40

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BFO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = BFO.ABFO(epoch=1000, pop_size=50, C_s=0.1, C_e=0.001, Ped = 0.01, Ns = 4, N_adapt = 2, N_split = 40)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Nguyen, T., Nguyen, B.M. and Nguyen, G., 2019, April. Building resource auto-scaler with functional-link neural network and adaptive bacterial foraging optimization. In International Conference on Theory and Applications of Models of Computation (pp. 501-517). Springer, Cham.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

generate_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate a new agent with full information

Parameters

solution (np.ndarray) – The solution

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate a new agent with the given solution

Parameters

solution (np.ndarray) – The solution

initialize_variables()[source]
update_step_size__(pop=None, idx=None)[source]
class mealpy.swarm_based.BFO.OriginalBFO(epoch: int = 10000, pop_size: int = 100, Ci: float = 0.01, Ped: float = 0.25, Nc: int = 5, Ns: int = 4, d_attract: float = 0.1, w_attract: float = 0.2, h_repels: float = 0.1, w_repels: float = 10, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Bacterial Foraging Optimization (BFO)

Notes

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum; a sketch of the attract/repel interaction follows the list:
  • Ci (float): [0.01, 0.3], step size, default=0.01

  • Ped (float): [0.1, 0.5], probability of elimination, default=0.25

  • Ned (int): elim_disp_steps (removed in this implementation), original Ned=5

  • Nre (int): reproduction_steps (removed in this implementation), original Nre=50

  • Nc (int): [3, 10], chem_steps (reduced to half of the original Nc), default = 5

  • Ns (int): [2, 10], swim length, default=4

  • d_attract (float): coefficient to calculate the attract force, default = 0.1

  • w_attract (float): coefficient to calculate the attract force, default = 0.2

  • h_repels (float): coefficient to calculate the repel force, default = 0.1

  • w_repels (float): coefficient to calculate the repel force, default = 10
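
A hedged sketch of the cell-to-cell interaction term J_cc from Passino (2002), which compute_cell_interaction__ and attract_repel__ implement in spirit; the array and function names below are illustrative:

>>> import numpy as np
>>> def cell_interaction(x, cells, d, w):
>>>     return np.sum(d * np.exp(w * np.sum((cells - x)**2, axis=1)))
>>>
>>> def attract_repel(x, cells, d_attract=0.1, w_attract=0.2, h_repels=0.1, w_repels=10.):
>>>     attract = cell_interaction(x, cells, -d_attract, -w_attract)
>>>     repel = cell_interaction(x, cells, h_repels, -w_repels)
>>>     return attract + repel  # added to a cell's cost before tumble/swim decisions
>>>
>>> rng = np.random.default_rng(2)
>>> cells = rng.uniform(-10, 10, (20, 5))
>>> j_cc = attract_repel(cells[0], cells)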

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BFO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = BFO.OriginalBFO(epoch=1000, pop_size=50, Ci = 0.01, Ped = 0.25, Nc = 5, Ns = 4, d_attract=0.1, w_attract=0.2, h_repels=0.1, w_repels=10)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Passino, K.M., 2002. Biomimicry of bacterial foraging for distributed optimization and control. IEEE control systems magazine, 22(3), pp.52-67.

attract_repel__(idx, cells)[source]
compute_cell_interaction__(cell, cells, d, w)[source]
evaluate__(idx, cells)[source]
evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate a new agent with the given solution

Parameters

solution (np.ndarray) – The solution

tumble_cell__(cell, step_size)[source]

mealpy.swarm_based.BSA module

class mealpy.swarm_based.BSA.OriginalBSA(epoch: int = 10000, pop_size: int = 100, ff: int = 10, pff: float = 0.8, c1: float = 1.5, c2: float = 1.5, a1: float = 1.0, a2: float = 1.0, fc: float = 0.5, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Bird Swarm Algorithm (BSA)

Links:
  1. https://doi.org/10.1080/0952813X.2015.1042530

  2. https://www.mathworks.com/matlabcentral/fileexchange/51256-bird-swarm-algorithm-bsa

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • ff (int): (5, 20), flight frequency - default = 10

  • pff (float): the probability of foraging for food - default = 0.8

  • c_couples (list, tuple): [c1, c2] -> (2.0, 2.0), cognitive and social acceleration coefficients, as in PSO

  • a_couples (list, tuple): [a1, a2] -> (1.5, 1.5), the indirect and direct effects on the birds' vigilance behaviours

  • fc (float): (0.1, 1.0), The followed coefficient - default = 0.5

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = BSA.OriginalBSA(epoch=1000, pop_size=50, ff = 10, pff = 0.8, c1 = 1.5, c2 = 1.5, a1 = 1.0, a2 = 1.0, fc = 0.5)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Meng, X.B., Gao, X.Z., Lu, L., Liu, Y. and Zhang, H., 2016. A new bio-inspired optimisation algorithm: Bird Swarm Algorithm. Journal of Experimental & Theoretical Artificial Intelligence, 28(4), pp.673-687.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

generate_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate a new agent with full information

Parameters

solution (np.ndarray) – The solution

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate a new agent with the given solution

Parameters

solution (np.ndarray) – The solution

mealpy.swarm_based.BeesA module

class mealpy.swarm_based.BeesA.CleverBookBeesA(epoch: int = 10000, pop_size: int = 100, n_elites: int = 16, n_others: int = 4, patch_size: float = 5.0, patch_reduction: float = 0.985, n_sites: int = 3, n_elite_sites: int = 1, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Bees Algorithm

Notes

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • n_elites (int): number of employed bees assigned to the good (elite) locations

  • n_others (int): number of employed bees assigned to the other selected locations

  • patch_size (float): initial patch size, updated as patch_size = patch_size * patch_reduction

  • patch_reduction (float): the reduction factor

  • n_sites (int): number of selected sites, default = 3 (employed bees, onlookers and scouts)

  • n_elite_sites (int): number of elite sites, default = 1

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BeesA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = BeesA.CleverBookBeesA(epoch=1000, pop_size=50, n_elites = 16, n_others = 4,
>>>             patch_size = 5.0, patch_reduction = 0.985, n_sites = 3, n_elite_sites = 1)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] D. T. Pham, Ghanbarzadeh A., Koc E., Otri S., Rahim S., and M.Zaidi. The bees algorithm - a novel tool for complex optimisation problems. In Proceedings of IPROMS 2006 Conference, pages 454–461, 2006.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

search_neighborhood__(parent=None, neigh_size=None)[source]

Search for the best position among neigh_size neighbouring positions

class mealpy.swarm_based.BeesA.OriginalBeesA(epoch: int = 10000, pop_size: int = 100, selected_site_ratio: float = 0.5, elite_site_ratio: float = 0.4, selected_site_bee_ratio: float = 0.1, elite_site_bee_ratio: float = 2.0, dance_radius: float = 0.1, dance_reduction: float = 0.99, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Bees Algorithm (BeesA)

Links:
  1. https://www.sciencedirect.com/science/article/pii/B978008045157250081X

  2. https://www.tandfonline.com/doi/full/10.1080/23311916.2015.1091540

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • selected_site_ratio (float): default = 0.5

  • elite_site_ratio (float): default = 0.4

  • selected_site_bee_ratio (float): default = 0.1

  • elite_site_bee_ratio (float): default = 2.0

  • dance_radius (float): default = 0.1

  • dance_reduction (float): default = 0.99

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BeesA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = BeesA.OriginalBeesA(epoch=1000, pop_size=50, selected_site_ratio=0.5, elite_site_ratio=0.4,
>>>         selected_site_bee_ratio=0.1, elite_site_bee_ratio=2.0, dance_radius=0.1, dance_reduction=0.99)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Pham, D.T., Ghanbarzadeh, A., Koç, E., Otri, S., Rahim, S. and Zaidi, M., 2006. The bees algorithm—a novel tool for complex optimisation problems. In Intelligent production machines and systems (pp. 454-459). Elsevier Science Ltd.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

perform_dance__(position, r)[source]
class mealpy.swarm_based.BeesA.ProbBeesA(epoch: int = 10000, pop_size: int = 100, recruited_bee_ratio: float = 0.1, dance_radius: float = 0.1, dance_reduction: float = 0.99, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Probabilistic Bees Algorithm (BeesA)

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum; a sketch of the dance-based local search follows the list:
  • recruited_bee_ratio (float): percentage of bees recruited, default = 0.1

  • dance_factor (tuple, list): (radius, reduction) - bees' dance radius and its reduction rate, default=(0.1, 0.99)
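
A hedged sketch of the dance-based local search suggested by perform_dance__; the uniform sampling rule and the names below are assumptions for illustration:

>>> import numpy as np
>>> rng = np.random.default_rng(3)
>>> lb, ub, n_vars = -10., 10., 5
>>> dance_radius, dance_reduction = 0.1, 0.99
>>> site = rng.uniform(lb, ub, n_vars)
>>> r = dance_radius * (ub - lb)  # radius in absolute units
>>> for epoch in range(3):
>>>     candidate = np.clip(site + r * rng.uniform(-1, 1, n_vars), lb, ub)
>>>     r *= dance_reduction  # the radius decays, narrowing the search over time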

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BeesA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = BeesA.ProbBeesA(epoch=1000, pop_size=50, recruited_bee_ratio = 0.1, dance_radius = 0.1, dance_reduction = 0.99)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Pham, D.T. and Castellani, M., 2015. A comparative study of the Bees Algorithm as a tool for function optimisation. Cogent Engineering, 2(1), p.1091540.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

perform_dance__(position, r)[source]

mealpy.swarm_based.COA module

class mealpy.swarm_based.COA.OriginalCOA(epoch: int = 10000, pop_size: int = 100, n_coyotes: int = 5, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Coyote Optimization Algorithm (COA)

Links:
  1. https://ieeexplore.ieee.org/document/8477769

  2. https://github.com/jkpir/COA/blob/master/COA.py (Old version Mealpy < 1.2.2)

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • n_coyotes (int): [3, 15], number of coyotes per group, default=5

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, COA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = COA.OriginalCOA(epoch=1000, pop_size=50, n_coyotes = 5)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Pierezan, J. and Coelho, L.D.S., 2018, July. Coyote optimization algorithm: a new metaheuristic for global optimization problems. In 2018 IEEE congress on evolutionary computation (CEC) (pp. 1-8). IEEE.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate a new agent with the given solution

Parameters

solution (np.ndarray) – The solution

initialization()[source]

mealpy.swarm_based.CSA module

class mealpy.swarm_based.CSA.OriginalCSA(epoch: int = 10000, pop_size: int = 100, p_a: float = 0.3, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Cuckoo Search Algorithm (CSA)

Links:
  1. https://doi.org/10.1109/NABIC.2009.5393690

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum; a sketch of the Lévy-flight move follows below:
  • p_a (float): [0.1, 0.7], probability of a nest being discovered/abandoned, default=0.3
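
A hedged sketch of the two CSA moves: a Lévy-flight step (via Mantegna's algorithm) and abandoning a nest with probability p_a; the 0.01 step scaling is an assumption for illustration:

>>> import numpy as np
>>> from math import gamma, pi, sin
>>> rng = np.random.default_rng(4)
>>> beta = 1.5
>>> sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
>>>          (gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))**(1 / beta)
>>> lb, ub, n_vars, p_a = -10., 10., 5, 0.3
>>> nest, g_best = rng.uniform(lb, ub, n_vars), rng.uniform(lb, ub, n_vars)
>>> step = rng.normal(0, sigma, n_vars) / np.abs(rng.normal(0, 1, n_vars))**(1 / beta)
>>> new_nest = np.clip(nest + 0.01 * step * (nest - g_best), lb, ub)  # Lévy flight
>>> if rng.random() < p_a:  # the nest is discovered and rebuilt at random
>>>     new_nest = rng.uniform(lb, ub, n_vars)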

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, CSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = CSA.OriginalCSA(epoch=1000, pop_size=50, p_a = 0.3)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Yang, X.S. and Deb, S., 2009, December. Cuckoo search via Lévy flights. In 2009 World congress on nature & biologically inspired computing (NaBIC) (pp. 210-214). Ieee.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.CSO module

class mealpy.swarm_based.CSO.OriginalCSO(epoch: int = 10000, pop_size: int = 100, mixture_ratio: float = 0.15, smp: int = 5, spc: bool = False, cdc: float = 0.8, srd: float = 0.15, c1: float = 0.4, w_min: float = 0.5, w_max: float = 0.9, selected_strategy: int = 1, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Cat Swarm Optimization (CSO)

Links:
  1. https://link.springer.com/chapter/10.1007/978-3-540-36668-3_94

  2. https://www.hindawi.com/journals/cin/2020/4854895/

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum; a sketch of the seeking mode follows the list:
  • mixture_ratio (float): ratio joining seeking mode with tracing mode, default=0.15

  • smp (int): seeking memory pool, default=5 clones (larger is better but time-consuming)

  • spc (bool): self-position considering, default=False

  • cdc (float): counts of dimensions to change (larger gives more diversity but slower convergence), default=0.8

  • srd (float): seeking range of the selected dimension (smaller is better but slower convergence), default=0.15

  • c1 (float): same as in PSO, default=0.4

  • w_min (float): same as in PSO

  • w_max (float): same as in PSO

  • selected_strategy (int): 0: best fitness, 1: tournament, 2: roulette wheel, else: random (ordered by decreasing selection quality)
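
A hedged sketch of seeking mode using the parameters above, with the simplest best-fitness selection (selected_strategy = 0); the helper names are illustrative, not mealpy internals:

>>> import numpy as np
>>> rng = np.random.default_rng(5)
>>> smp, cdc, srd, lb, ub, n_vars = 5, 0.8, 0.15, -10., 10., 5
>>> fit = lambda x: np.sum(x**2)
>>> cat = rng.uniform(lb, ub, n_vars)
>>> clones = np.tile(cat, (smp, 1))  # smp candidate copies of the cat
>>> for clone in clones:
>>>     dims = rng.random(n_vars) < cdc  # dimensions selected for change
>>>     signs = rng.choice([-1., 1.], size=n_vars)
>>>     clone[dims] *= 1. + signs[dims] * srd  # mutate by +/- srd percent
>>> clones = np.clip(clones, lb, ub)
>>> cat = clones[np.argmin([fit(c) for c in clones])]  # keep the best clone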

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, CSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = CSO.OriginalCSO(epoch=1000, pop_size=50, mixture_ratio = 0.15, smp = 5, spc = False, cdc = 0.8, srd = 0.15, c1 = 0.4, w_min = 0.4, w_max = 0.9)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Chu, S.C., Tsai, P.W. and Pan, J.S., 2006, August. Cat swarm optimization. In Pacific Rim international conference on artificial intelligence (pp. 854-858). Springer, Berlin, Heidelberg.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]
  • x: current position of cat

  • v: vector v of cat (same amount of dimension as x)

  • flag: the stage of the cat, seeking (looking/finding around) or tracing (chasing/catching) => False: seeking mode, True: tracing mode

seeking_mode__(cat)[source]

mealpy.swarm_based.CoatiOA module

class mealpy.swarm_based.CoatiOA.OriginalCoatiOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Coati Optimization Algorithm (CoatiOA)

Links:
  1. https://www.sciencedirect.com/science/article/pii/S0950705122011042

  2. https://www.mathworks.com/matlabcentral/fileexchange/116965-coa-coati-optimization-algorithm

Notes

  1. Algorithm design is similar to Zebra Optimization Algorithm (ZOA), Osprey Optimization Algorithm (OOA), Pelican optimization algorithm (POA), Siberian Tiger Optimization (STO), Language Education Optimization (LEO), Serval Optimization Algorithm (SOA), Walrus Optimization Algorithm (WOA), Fennec Fox Optimization (FFO), Three-periods optimization algorithm (TPOA), Teamwork optimization algorithm (TOA), Northern goshawk optimization (NGO), Tasmanian devil optimization (TDO), Archery algorithm (AA), Cat and mouse based optimizer (CMBO)

  2. It may be useful to compare the Matlab code of this algorithm with those of the similar algorithms to ensure its accuracy and completeness.

  3. The article may share some similarities with previous work by the same authors, further investigation may be warranted to verify the benchmark results reported in the papers and ensure their reliability and accuracy.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, CoatiOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = CoatiOA.OriginalCoatiOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Dehghani, M., Montazeri, Z., Trojovská, E., & Trojovský, P. (2023). Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowledge-Based Systems, 259, 110011.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.DMOA module

class mealpy.swarm_based.DMOA.DevDMOA(epoch: int = 10000, pop_size: int = 100, peep: float = 2, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version of: Dwarf Mongoose Optimization Algorithm (DMOA)

Notes

  1. Removed the parameter n_baby_sitter

  2. Reworked the '# Next Mongoose position' section

  3. Removed the meaningless variable tau

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, DMOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = DMOA.DevDMOA(epoch=1000, pop_size=50, peep = 2)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

initialize_variables()[source]
class mealpy.swarm_based.DMOA.OriginalDMOA(epoch: int = 10000, pop_size: int = 100, n_baby_sitter: int = 3, peep: float = 2, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Dwarf Mongoose Optimization Algorithm (DMOA)

Links:
  1. https://doi.org/10.1016/j.cma.2022.114570

  2. https://www.mathworks.com/matlabcentral/fileexchange/105125-dwarf-mongoose-optimization-algorithm

Notes

  1. The Matlab code differs slightly from the original paper

  2. There are some parameters and equations in the Matlab code that don’t seem to have any meaningful purpose.

  3. The algorithm seems to be weak at solving several problems.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, DMOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = DMOA.OriginalDMOA(epoch=1000, pop_size=50, n_baby_sitter = 3, peep = 2)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Agushaka, J. O., Ezugwu, A. E., & Abualigah, L. (2022). Dwarf mongoose optimization algorithm. Computer methods in applied mechanics and engineering, 391, 114570.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

initialize_variables()[source]

mealpy.swarm_based.DO module

class mealpy.swarm_based.DO.OriginalDO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Dragonfly Optimization (DO)

Links:
  1. https://link.springer.com/article/10.1007/s00521-015-1920-1

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, DO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = DO.OriginalDO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mirjalili, S., 2016. Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural computing and applications, 27(4), pp.1053-1073.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

initialization()[source]

mealpy.swarm_based.EHO module

class mealpy.swarm_based.EHO.OriginalEHO(epoch: int = 10000, pop_size: int = 100, alpha: float = 0.5, beta: float = 0.5, n_clans: int = 5, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Elephant Herding Optimization (EHO)

Links:
  1. https://doi.org/10.1109/ISCBI.2015.8

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • alpha (float): [0.3, 0.8], a factor that determines the influence of the best in each clan, default=0.5

  • beta (float): [0.3, 0.8], a factor that determines the influence of the x_center, default=0.5

  • n_clans (int): [3, 10], the number of clans, default=5

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, EHO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = EHO.OriginalEHO(epoch=1000, pop_size=50, alpha = 0.5, beta = 0.5, n_clans = 5)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Wang, G.G., Deb, S. and Coelho, L.D.S., 2015, December. Elephant herding optimization. In 2015 3rd international symposium on computational and business intelligence (ISCBI) (pp. 1-5). IEEE.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

initialization()[source]

mealpy.swarm_based.ESOA module

class mealpy.swarm_based.ESOA.OriginalESOA(epoch=10000, pop_size=100, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Egret Swarm Optimization Algorithm (ESOA)

Links:
  1. https://www.mathworks.com/matlabcentral/fileexchange/115595-egret-swarm-optimization-algorithm-esoa

  2. https://www.mdpi.com/2313-7673/7/4/144

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ESOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = ESOA.OriginalESOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Chen, Z., Francis, A., Li, S., Liao, B., Xiao, D., Ha, T. T., … & Cao, X. (2022). Egret Swarm Optimization Algorithm: An Evolutionary Computation Approach for Model Free Optimization. Biomimetics, 7(4), 144.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

generate_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

ID_WEI = 2

ID_LOC_X = 3

ID_LOC_Y = 4

ID_G = 5

ID_M = 6

ID_V = 7

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate a new agent with the given solution

Parameters

solution (np.ndarray) – The solution

initialize_variables()[source]

mealpy.swarm_based.FA module

class mealpy.swarm_based.FA.OriginalFA(epoch: int = 10000, pop_size: int = 100, max_sparks: int = 100, p_a: float = 0.04, p_b: float = 0.8, max_ea: int = 40, m_sparks: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Fireworks Algorithm (FA)

Links:
  1. https://doi.org/10.1007/978-3-642-13495-1_44

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum; a sketch of the spark allocation rule follows the list:
  • max_sparks (int): parameter controlling the total number of sparks generated by the pop_size fireworks, default=100

  • p_a (float): percent (const parameter), default=0.04

  • p_b (float): percent (const parameter), default=0.8

  • max_ea (int): maximum explosion amplitude, default=40

  • m_sparks (int): number of sparks generated in each explosion generation, default=100
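
A hedged sketch of the spark allocation rules from Tan and Zhu (2010): better fireworks get more sparks (bounded by max_sparks) and smaller explosion amplitudes (bounded by max_ea); the names below are illustrative, not mealpy internals:

>>> import numpy as np
>>> rng = np.random.default_rng(7)
>>> max_sparks, max_ea, eps = 100, 40, np.finfo(float).eps
>>> fits = rng.uniform(0, 100, 10)  # minimization fitness of 10 fireworks
>>> y_max, y_min = fits.max(), fits.min()
>>> n_sparks = max_sparks * (y_max - fits + eps) / (np.sum(y_max - fits) + eps)
>>> amplitude = max_ea * (fits - y_min + eps) / (np.sum(fits - y_min) + eps)
>>> # a full version would round n_sparks and bound it, e.g. within [1, m_sparks]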

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, FA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = FA.OriginalFA(epoch=1000, pop_size=50, max_sparks = 50, p_a = 0.04, p_b = 0.8, max_ea = 40, m_sparks = 50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Tan, Y. and Zhu, Y., 2010, June. Fireworks algorithm for optimization. In International conference in swarm intelligence (pp. 355-364). Springer, Berlin, Heidelberg.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.FFA module

class mealpy.swarm_based.FFA.OriginalFFA(epoch: int = 10000, pop_size: int = 100, gamma: float = 0.001, beta_base: float = 2, alpha: float = 0.2, alpha_damp: float = 0.99, delta: float = 0.05, exponent: int = 2, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Firefly Algorithm (FFA)

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum; a sketch of the firefly move follows the list:
  • gamma (float): Light Absorption Coefficient, default = 0.001

  • beta_base (float): Attraction Coefficient Base Value, default = 2

  • alpha (float): Mutation Coefficient, default = 0.2

  • alpha_damp (float): Mutation Coefficient Damp Rate, default = 0.99

  • delta (float): Mutation Step Size, default = 0.05

  • exponent (int): Exponent (m in the paper), default = 2
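
A hedged sketch of one firefly move using the parameters above: attraction decays with distance via gamma and the exponent m, plus a small random mutation; scaling the mutation by delta*(ub - lb) is an assumption for illustration:

>>> import numpy as np
>>> rng = np.random.default_rng(6)
>>> gamma, beta_base, alpha, delta, m = 0.001, 2.0, 0.2, 0.05, 2
>>> lb, ub, n_vars = -10., 10., 5
>>> xi = rng.uniform(lb, ub, n_vars)  # dimmer firefly (moves)
>>> xj = rng.uniform(lb, ub, n_vars)  # brighter firefly (attracts)
>>> r = np.linalg.norm(xi - xj)
>>> beta = beta_base * np.exp(-gamma * r**m)  # attractiveness at distance r
>>> mutation = alpha * delta * (ub - lb) * (rng.random(n_vars) - 0.5)
>>> xi = np.clip(xi + beta * (xj - xi) + mutation, lb, ub)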

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, FFA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = FFA.OriginalFFA(epoch=1000, pop_size=50, gamma = 0.001, beta_base = 2, alpha = 0.2, alpha_damp = 0.99, delta = 0.05, exponent = 2)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Gandomi, A.H., Yang, X.S. and Alavi, A.H., 2011. Mixed variable structural optimization using firefly algorithm. Computers & Structures, 89(23-24), pp.2325-2336.

[2] Arora, S. and Singh, S., 2013. The firefly optimization algorithm: convergence analysis and parameter selection. International Journal of Computer Applications, 69(3).

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

initialize_variables()[source]

mealpy.swarm_based.FFO module

class mealpy.swarm_based.FFO.OriginalFFO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Fennec Fox Optimization (FFO)

Links:
  1. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9853509

Notes

  1. This is somewhat concerning, as there appears to be a high degree of similarity between the source code for this algorithm and the Pelican Optimization Algorithm (POA).

  2. Algorithm design is similar to Zebra Optimization Algorithm (ZOA), Osprey Optimization Algorithm (OOA), Coati Optimization Algorithm (CoatiOA), Siberian Tiger Optimization (STO), Language Education Optimization (LEO), Serval Optimization Algorithm (SOA), Walrus Optimization Algorithm (WOA), Pelican Optimization Algorithm (POA), Three-periods optimization algorithm (TPOA), Teamwork optimization algorithm (TOA), Northern goshawk optimization (NGO), Tasmanian devil optimization (TDO), Archery algorithm (AA), Cat and mouse based optimizer (CMBO)

  3. It may be useful to compare the Matlab code of this algorithm with those of the similar algorithms to ensure its accuracy and completeness.

  4. The article may share some similarities with previous work by the same authors, further investigation may be warranted to verify the benchmark results reported in the papers and ensure their reliability and accuracy.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, FFO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = FFO.OriginalFFO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Trojovská, E., Dehghani, M., & Trojovský, P. (2022). Fennec Fox Optimization: A New Nature-Inspired Optimization Algorithm. IEEE Access, 10, 84417-84443.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.FOA module

class mealpy.swarm_based.FOA.DevFOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.swarm_based.FOA.OriginalFOA

The developed version: Fruit-fly Optimization Algorithm (FOA)

Notes

  • The fitness function (the smell function) is changed to use the distance between each two adjacent dimensions

  • The position is updated only if the newly generated solution is better

  • The updated position is created as the norm distance multiplied by a Gaussian random number
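
For context, FOA's smell step maps each candidate through a distance computation before evaluating it; one plausible reading of norm_consecutive_adjacent__ (defined on OriginalFOA below) is the Euclidean norm of each pair of adjacent dimensions. A hedged sketch (the exact scaling in mealpy may differ):

>>> import numpy as np
>>>
>>> def norm_consecutive_adjacent(position):
>>>     # Pair each dimension with its right neighbour and take the norm,
>>>     # mimicking FOA's distance-to-origin smell computation.
>>>     return np.sqrt(position[:-1] ** 2 + position[1:] ** 2)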

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, FOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = FOA.DevFOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.FOA.OriginalFOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Fruit-fly Optimization Algorithm (FOA)

Links:
  1. https://doi.org/10.1016/j.knosys.2011.07.001

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, FOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = FOA.OriginalFOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Pan, W.T., 2012. A new fruit fly optimization algorithm: taking the financial distress model as an example. Knowledge-Based Systems, 26, pp.69-74.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

norm_consecutive_adjacent__(position=None)[source]

class mealpy.swarm_based.FOA.WhaleFOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.swarm_based.FOA.OriginalFOA

The original version of: Whale Fruit-fly Optimization Algorithm (WFOA)

Links:
  1. https://doi.org/10.1016/j.eswa.2020.113502

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, FOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = FOA.WhaleFOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Fan, Y., Wang, P., Heidari, A.A., Wang, M., Zhao, X., Chen, H. and Li, C., 2020. Boosted hunting-based fruit fly optimization and advances in real-world problems. Expert Systems with Applications, 159, p.113502.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.FOX module

class mealpy.swarm_based.FOX.DevFOX(epoch: int = 10000, pop_size: int = 100, c1: float = 0.18, c2: float = 0.82, pp=0.5, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version of: Fox Optimizer (FOX)

Notes (parameters):
  1. c1 (float): the coefficient of jumping (c1 in the paper), default = 0.18

  2. c2 (float): the coefficient of jumping (c2 in the paper), default = 0.82

  3. pp (float): the probability of choosing the exploration and exploitation phase, default=0.5

Notes

  1. Set parameter pp = 0.18 if you want the same behavior as the Original version

  2. The difference between the Dev and Original versions is the update equation: self.g_best.solution + self.generator.standard_normal(self.problem.n_dims) * (self.mint * aa)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, FOX
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = FOX.DevFOX(epoch=1000, pop_size=50, c1=0.18, c2=0.82, pp=0.5)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mohammed, H., & Rashid, T. (2023). FOX: a FOX-inspired optimization algorithm. Applied Intelligence, 53(1), 1030-1050.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialize_variables()[source]

class mealpy.swarm_based.FOX.OriginalFOX(epoch: int = 10000, pop_size: int = 100, c1: float = 0.18, c2: float = 0.82, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Fox Optimizer (FOX)

Links:
  1. https://link.springer.com/article/10.1007/s10489-022-03533-0

  2. https://www.mathworks.com/matlabcentral/fileexchange/121592-fox-a-fox-inspired-optimization-algorithm

Notes (parameters):
  1. c1 (float): the coefficient of jumping (c1 in the paper), default = 0.18

  2. c2 (float): the coefficient of jumping (c2 in the paper), default = 0.82

Notes

  1. The equation used to calculate the distance_S_travel value in the Matlab code seems to be lacking in meaning.

  2. The if-else conditions used with p > 0.18 seem to lack a clear justification. The authors seem to have simply chosen the best value based on their experiments without explaining the rationale behind it.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, FOX
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = FOX.OriginalFOX(epoch=1000, pop_size=50, c1=0.18, c2=0.82)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mohammed, H., & Rashid, T. (2023). FOX: a FOX-inspired optimization algorithm. Applied Intelligence, 53(1), 1030-1050.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialize_variables()[source]

mealpy.swarm_based.GJO module

class mealpy.swarm_based.GJO.OriginalGJO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Golden jackal optimization (GJO)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S095741742200358X

  2. https://www.mathworks.com/matlabcentral/fileexchange/108889-golden-jackal-optimization-algorithm

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GJO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GJO.OriginalGJO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Chopra, N., & Ansari, M. M. (2022). Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Systems with Applications, 198, 116924.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.GOA module

class mealpy.swarm_based.GOA.OriginalGOA(epoch: int = 10000, pop_size: int = 100, c_min: float = 4e-05, c_max: float = 2.0, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Grasshopper Optimization Algorithm (GOA)

Links:
  1. https://dx.doi.org/10.1016/j.advengsoft.2017.01.004

  2. https://www.mathworks.com/matlabcentral/fileexchange/61421-grasshopper-optimisation-algorithm-goa

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • c_min (float): coefficient c min, default = 0.00004

  • c_max (float): coefficient c max, default = 2.0

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GOA.OriginalGOA(epoch=1000, pop_size=50, c_min = 0.00004, c_max = 1.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Saremi, S., Mirjalili, S. and Lewis, A., 2017. Grasshopper optimisation algorithm: theory and application. Advances in Engineering Software, 105, pp.30-47.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

s_function__(r_vector=None)[source]
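
For reference, s_function__ is the social-forces kernel from the paper, s(r) = f * exp(-r / l) - exp(-r) with f = 0.5 and l = 1.5, which makes very close grasshoppers repel and mid-range ones attract, while the coefficient c shrinks linearly from c_max to c_min over the run. A sketch under those paper defaults:

>>> import numpy as np
>>>
>>> def s_function(r, f=0.5, l=1.5):
>>>     # Attraction/repulsion strength between two grasshoppers at distance r.
>>>     return f * np.exp(-r / l) - np.exp(-r)
>>>
>>> def c_coefficient(epoch, max_epoch, c_min=4e-05, c_max=2.0):
>>>     # Comfort-zone coefficient, shrinking linearly as the search progresses.
>>>     return c_max - epoch * (c_max - c_min) / max_epoch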

mealpy.swarm_based.GTO module

class mealpy.swarm_based.GTO.Matlab101GTO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The conversion of Matlab code (version 1.0.1 - 29/11/2022) to Python code of: Giant Trevally Optimizer (GTO)

Links:
  1. https://www.mathworks.com/matlabcentral/fileexchange/121358-giant-trevally-optimizer-gto

  2. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9955508

Notes

  1. This algorithm costs a huge amount of computational resources in each epoch. Therefore, be careful when using the maximum number of generations as a stopping condition.

  2. Other algorithms perform around K*pop_size updates in each epoch, while this algorithm performs around 2*pop_size^2 + pop_size updates (for pop_size = 50, that is 2*50^2 + 50 = 5050 updates per epoch).

  3. This is the version the authors used to compare against other algorithms in their paper.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GTO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GTO.Matlab101GTO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Sadeeq, H. T., & Abdulazeez, A. M. (2022). Giant Trevally Optimizer (GTO): A Novel Metaheuristic Algorithm for Global Optimization and Challenging Engineering Problems. IEEE Access, 10, 121615-121640.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.GTO.Matlab102GTO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The conversion of Matlab code (version 1.0.2 - 27/04/2023) to Python code of: Giant Trevally Optimizer (GTO)

Links:
  1. https://www.mathworks.com/matlabcentral/fileexchange/121358-giant-trevally-optimizer-gto

  2. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9955508

Notes

  1. The authors sent me an email asking to update the algorithm. In this version, based on my comments, they removed 2 for-loops from each epoch (generation), so the computational cost drops from roughly 2*pop_size^2 + pop_size updates to roughly 3*pop_size updates per epoch (for pop_size = 50, from 5050 down to 150). However, this also reduces the performance results. My question: are the results in the paper still valid?

  2. I have decided to implement the original version of the algorithm exactly as described in the paper (OriginalGTO).

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GTO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GTO.Matlab102GTO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Sadeeq, H. T., & Abdulazeez, A. M. (2022). Giant Trevally Optimizer (GTO): A Novel Metaheuristic Algorithm for Global Optimization and Challenging Engineering Problems. IEEE Access, 10, 121615-121640.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.GTO.OriginalGTO(epoch: int = 10000, pop_size: int = 100, A: float = 0.4, H: float = 2.0, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Giant Trevally Optimizer (GTO)

Notes

  1. This version is implemented exactly as described in the paper.

  2. https://www.mathworks.com/matlabcentral/fileexchange/121358-giant-trevally-optimizer-gto

  3. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9955508

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • A (float): a position-change-controlling parameter with a range from 0.3 to 0.4, default=0.4

  • H (float): initial value of the jumping slope function, default=2.0

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GTO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GTO.OriginalGTO(epoch=1000, pop_size=50, A=0.4, H=2.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Sadeeq, H. T., & Abdulazeez, A. M. (2022). Giant Trevally Optimizer (GTO): A Novel Metaheuristic Algorithm for Global Optimization and Challenging Engineering Problems. IEEE Access, 10, 121615-121640.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.GWO module

class mealpy.swarm_based.GWO.GWO_WOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.swarm_based.GWO.OriginalGWO

The original version of: Hybrid Grey Wolf - Whale Optimization Algorithm (GWO_WOA)

Links:
  1. https://sci-hub.se/https://doi.org/10.1177/10775463211003402

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GWO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GWO.GWO_WOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Obadina, O. O., Thaha, M. A., Althoefer, K., & Shaheed, M. H. (2022). Dynamic characterization of a master–slave robotic manipulator using a hybrid grey wolf–whale optimization algorithm. Journal of Vibration and Control, 28(15-16), 1992-2003.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.GWO.IGWO(epoch: int = 10000, pop_size: int = 100, a_min: float = 0.02, a_max: float = 2.2, **kwargs: object)[source]

Bases: mealpy.swarm_based.GWO.OriginalGWO

The original version of: Improved Grey Wolf Optimization (IGWO)

Notes

  1. Link: https://doi.org/10.1007/s00366-017-0567-1

  2. Inspired by: Mohammadtaher Abbasi (https://github.com/mtabbasi)

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • a_min (float): Lower bound of a, default = 0.02

  • a_max (float): Upper bound of a, default = 2.2

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GWO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GWO.IGWO(epoch=1000, pop_size=50, a_min = 0.02, a_max = 2.2)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Kaveh, A. & Zakian, P.. (2018). Improved GWO algorithm for optimal design of truss structures. Engineering with Computers. 34. 10.1007/s00366-017-0567-1.

evolve(epoch)[source]

The main operations (equations) of algorithm.

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.GWO.OriginalGWO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Grey Wolf Optimizer (GWO)

Links:
  1. https://doi.org/10.1016/j.advengsoft.2013.12.007

  2. https://www.mathworks.com/matlabcentral/fileexchange/44974-grey-wolf-optimizer-gwo?s_tid=FX_rc3_behav
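
For orientation, the canonical GWO move (Mirjalili et al., 2014) steers each wolf toward the three best wolves (alpha, beta, delta), with the control scalar a decaying linearly from 2 to 0 over the run. A compact sketch of one wolf's update (variable names are illustrative):

>>> import numpy as np
>>>
>>> def gwo_move(x, leaders, a, rng=np.random):
>>>     # leaders = [alpha, beta, delta] solutions; a decays from 2 to 0.
>>>     pulls = []
>>>     for leader in leaders:
>>>         A = 2 * a * rng.random(x.shape) - a    # exploration vs exploitation
>>>         C = 2 * rng.random(x.shape)
>>>         D = np.abs(C * leader - x)             # distance to this leader
>>>         pulls.append(leader - A * D)
>>>     return np.mean(pulls, axis=0)              # average of the three pulls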

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GWO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GWO.OriginalGWO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mirjalili, S., Mirjalili, S.M. and Lewis, A., 2014. Grey wolf optimizer. Advances in engineering software, 69, pp.46-61.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.GWO.RW_GWO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Random Walk Grey Wolf Optimizer (RW-GWO)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GWO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GWO.RW_GWO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Gupta, S. and Deep, K., 2019. A novel random walk grey wolf optimizer. Swarm and evolutionary computation, 44, pp.101-112.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.HBA module

class mealpy.swarm_based.HBA.OriginalHBA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Honey Badger Algorithm (HBA)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0378475421002901

  2. https://www.mathworks.com/matlabcentral/fileexchange/98204-honey-badger-algorithm

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, HBA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = HBA.OriginalHBA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Hashim, F. A., Houssein, E. H., Hussain, K., Mabrouk, M. S., & Al-Atabany, W. (2022). Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Mathematics and Computers in Simulation, 192, 84-110.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

get_intensity__(best, pop)[source]
initialize_variables()[source]

mealpy.swarm_based.HGS module

class mealpy.swarm_based.HGS.OriginalHGS(epoch: int = 10000, pop_size: int = 100, PUP: float = 0.08, LH: float = 10000, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Hunger Games Search (HGS)

Links:
  1. https://aliasgharheidari.com/HGS.html

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • PUP (float): [0.01, 0.2], The probability of updating position (L in the paper), default = 0.08

  • LH (float): [1000, 20000], Largest hunger / threshold, default = 10000

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, HGS
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = HGS.OriginalHGS(epoch=1000, pop_size=50, PUP = 0.08, LH = 10000)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Yang, Y., Chen, H., Heidari, A.A. and Gandomi, A.H., 2021. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Systems with Applications, 177, p.114864.

ID_HUN = 2
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

sech__(x)[source]
update_hunger_value__(pop=None, g_best=None, g_worst=None)[source]
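
Here sech__ is just the hyperbolic secant, sech(x) = 2 / (e^x + e^(-x)), which appears in the hunger-weight computation; an equivalent standalone definition:

>>> import numpy as np
>>>
>>> def sech(x):
>>>     # Hyperbolic secant; numerically the same as 1 / np.cosh(x).
>>>     return 2 / (np.exp(x) + np.exp(-x))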

mealpy.swarm_based.HHO module

class mealpy.swarm_based.HHO.OriginalHHO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Harris Hawks Optimization (HHO)

Links:
  1. https://doi.org/10.1016/j.future.2019.02.028
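
HHO switches between exploration and exploitation through the prey's escaping energy E = 2 * E0 * (1 - t / T), with E0 drawn uniformly from (-1, 1) every iteration; |E| >= 1 triggers exploration, |E| < 1 exploitation. A sketch of that gate:

>>> import numpy as np
>>>
>>> def escaping_energy(epoch, max_epoch, rng=np.random):
>>>     e0 = rng.uniform(-1, 1)                 # initial energy of the prey
>>>     return 2 * e0 * (1 - epoch / max_epoch)
>>>
>>> E = escaping_energy(epoch=100, max_epoch=1000)
>>> phase = "exploration" if abs(E) >= 1 else "exploitation"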

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, HHO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = HHO.OriginalHHO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Heidari, A.A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M. and Chen, H., 2019. Harris hawks optimization: Algorithm and applications. Future generation computer systems, 97, pp.849-872.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.JA module

class mealpy.swarm_based.JA.DevJA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Jaya Algorithm (JA)
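
Jaya's appeal is a single parameter-free move: every solution drifts toward the current best and away from the current worst. A sketch of the core update from Rao (2016), with r1 and r2 uniform in [0, 1]:

>>> import numpy as np
>>>
>>> def jaya_move(x, best, worst, rng=np.random):
>>>     r1 = rng.random(x.shape)
>>>     r2 = rng.random(x.shape)
>>>     # Move toward the best solution and away from the worst one.
>>>     return x + r1 * (best - np.abs(x)) - r2 * (worst - np.abs(x))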

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, JA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = JA.DevJA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Rao, R., 2016. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. International Journal of Industrial Engineering Computations, 7(1), pp.19-34.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.JA.LevyJA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.swarm_based.JA.DevJA

The original version of: Levy-flight Jaya Algorithm (LJA)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, JA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = JA.LevyJA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Iacca, G., dos Santos Junior, V.C. and de Melo, V.V., 2021. An improved Jaya optimization algorithm with Lévy flight. Expert Systems with Applications, 165, p.113902.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.JA.OriginalJA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.swarm_based.JA.DevJA

The original version of: Jaya Algorithm (JA)

Links:
  1. https://www.growingscience.com/ijiec/Vol7/IJIEC_2015_32.pdf

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, JA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = JA.OriginalJA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Rao, R., 2016. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. International Journal of Industrial Engineering Computations, 7(1), pp.19-34.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.MFO module

class mealpy.swarm_based.MFO.OriginalMFO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Moth-Flame Optimization (MFO)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, MFO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = MFO.OriginalMFO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mirjalili, S., 2015. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowledge-based systems, 89, pp.228-249.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.MGO module

class mealpy.swarm_based.MGO.OriginalMGO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Mountain Gazelle Optimizer (MGO)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0965997822001831

  2. https://www.mathworks.com/matlabcentral/fileexchange/118680-mountain-gazelle-optimizer

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, MGO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = MGO.OriginalMGO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Abdollahzadeh, B., Gharehchopogh, F. S., Khodadadi, N., & Mirjalili, S. (2022). Mountain gazelle optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems. Advances in Engineering Software, 174, 103282.

coefficient_vector__(n_dims, epoch, max_epoch)[source]
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.MPA module

class mealpy.swarm_based.MPA.OriginalMPA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Marine Predators Algorithm (MPA)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0957417420302025

  2. https://www.mathworks.com/matlabcentral/fileexchange/74578-marine-predators-algorithm-mpa

Notes

  1. To reproduce the original paper, set the training mode to “swarm”

  2. The whole population is updated at the same time, before the fitness values are updated

  3. Two variables are treated as constants: FADS = 0.2 and P = 0.5

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, MPA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = MPA.OriginalMPA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Faramarzi, A., Heidarinejad, M., Mirjalili, S., & Gandomi, A. H. (2020). Marine Predators Algorithm: A nature-inspired metaheuristic. Expert systems with applications, 152, 113377.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialize_variables()[source]

mealpy.swarm_based.MRFO module

class mealpy.swarm_based.MRFO.OriginalMRFO(epoch: int = 10000, pop_size: int = 100, somersault_range: float = 2.0, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Manta Ray Foraging Optimization (MRFO)

Links:
  1. https://doi.org/10.1016/j.engappai.2019.103300

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • somersault_range (float): [1.5, 3], somersault factor that decides the somersault range of manta rays, default=2
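
somersault_range is S in the somersault-foraging step, which flips each ray around the best-found food position: x' = x + S * (r2 * x_best - r3 * x) with r2, r3 uniform in [0, 1], so a larger S widens the somersault. A sketch:

>>> import numpy as np
>>>
>>> def somersault(x, x_best, somersault_range=2.0, rng=np.random):
>>>     r2 = rng.random(x.shape)
>>>     r3 = rng.random(x.shape)
>>>     # Flip around the best position, at most somersault_range away.
>>>     return x + somersault_range * (r2 * x_best - r3 * x)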

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, MRFO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = MRFO.OriginalMRFO(epoch=1000, pop_size=50, somersault_range = 2.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Zhao, W., Zhang, Z. and Wang, L., 2020. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Engineering Applications of Artificial Intelligence, 87, p.103300.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.MRFO.WMQIMRFO(epoch: int = 10000, pop_size: int = 100, somersault_range: float = 2.0, pm: float = 0.5, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Wavelet Mutation and Quadratic Interpolation MRFO (WMQIMRFO)

Links:
  1. https://doi.org/10.1016/j.knosys.2021.108071

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • somersault_range (float): [1.5, 3], somersault factor that decides the somersault range of manta rays, default=2

  • pm (float): (0.0, 1.0), probability mutation, default = 0.5

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, MRFO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = MRFO.WMQIMRFO(epoch=1000, pop_size=50, somersault_range = 2.0, pm=0.5)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] G. Hu, M. Li, X. Wang et al., An enhanced manta ray foraging optimization algorithm for shape optimization of complex CCG-Ball curves, Knowledge-Based Systems (2022), doi: https://doi.org/10.1016/j.knosys.2021.108071.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.MSA module

class mealpy.swarm_based.MSA.OriginalMSA(epoch: int = 10000, pop_size: int = 100, n_best: int = 5, partition: float = 0.5, max_step_size: float = 1.0, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version: Moth Search Algorithm (MSA)

Links:
  1. https://www.mathworks.com/matlabcentral/fileexchange/59010-moth-search-ms-algorithm

  2. https://doi.org/10.1007/s12293-016-0212-3

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • n_best (int): [3, 10], how many of the best moths to keep from one generation to the next, default=5

  • partition (float): [0.3, 0.8], The proportion of the first partition, default=0.5

  • max_step_size (float): [0.5, 2.0], Max step size used in Levy-flight technique, default=1.0

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, MSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = MSA.OriginalMSA(epoch=1000, pop_size=50, n_best = 5, partition = 0.5, max_step_size = 1.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Wang, G.G., 2018. Moth search algorithm: a bio-inspired metaheuristic algorithm for global optimization problems. Memetic Computing, 10(2), pp.151-164.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.NGO module

class mealpy.swarm_based.NGO.OriginalNGO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Northern Goshawk Optimization (NGO)

Links:
  1. https://ieeexplore.ieee.org/abstract/document/9638618

  2. https://www.mathworks.com/matlabcentral/fileexchange/106665-northern-goshawk-optimization-a-new-swarm-based-algorithm

Notes

  1. This is somewhat concerning, as there appears to be a high degree of similarity between the source code for this algorithm and the Pelican Optimization Algorithm (POA).

  2. Algorithm design is similar to Zebra Optimization Algorithm (ZOA), Osprey Optimization Algorithm (OOA), Coati Optimization Algorithm (CoatiOA), Siberian Tiger Optimization (STO), Language Education Optimization (LEO), Serval Optimization Algorithm (SOA), Walrus Optimization Algorithm (WOA), Fennec Fox Optimization (FFO), Three-periods optimization algorithm (TPOA), Teamwork optimization algorithm (TOA), Pelican Optimization Algorithm (POA), Tasmanian devil optimization (TDO), Archery algorithm (AA), Cat and mouse based optimizer (CMBO)

  3. It may be useful to compare the Matlab code of this algorithm with those of the similar algorithms to ensure its accuracy and completeness.

  4. The article may share some similarities with previous work by the same authors, further investigation may be warranted to verify the benchmark results reported in the papers and ensure their reliability and accuracy.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, NGO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = NGO.OriginalNGO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Dehghani, M., Hubálovský, Š., & Trojovský, P. (2021). Northern goshawk optimization: a new swarm-based algorithm for solving optimization problems. IEEE Access, 9, 162059-162080.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.NMRA module

class mealpy.swarm_based.NMRA.ImprovedNMRA(epoch=10000, pop_size=100, pb=0.75, pm=0.01, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The developed version of: Improved Naked Mole-Rat Algorithm (I-NMRA)

Notes

  • Use mutation probability idea

  • Use crossover operator

  • Use Levy-flight technique

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • pb (float): [0.5, 0.95], probability of breeding, default = 0.75

  • pm (float): [0.01, 0.1], probability of mutation, default = 0.01

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, NMRA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = NMRA.ImprovedNMRA(epoch=1000, pop_size=50, pb = 0.75, pm = 0.01)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

crossover_random__(pop, g_best)[source]
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.NMRA.OriginalNMRA(epoch: int = 10000, pop_size: int = 100, pb: float = 0.75, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Naked Mole-Rat Algorithm (NMRA)

Links:
  1. https://doi.org/10.1007/s00521-019-04464-7

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • pb (float): [0.5, 0.95], probability of breeding, default = 0.75

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, NMRA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = NMRA.OriginalNMRA(epoch=1000, pop_size=50, pb = 0.75)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Salgotra, R. and Singh, U., 2019. The naked mole-rat algorithm. Neural Computing and Applications, 31(12), pp.8837-8857.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.OOA module

class mealpy.swarm_based.OOA.OriginalOOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Osprey Optimization Algorithm (OOA)

Links:
  1. https://www.frontiersin.org/articles/10.3389/fmech.2022.1126450/full

  2. https://www.mathworks.com/matlabcentral/fileexchange/124555-osprey-optimization-algorithm

Notes

  1. Algorithm design is similar to Zebra Optimization Algorithm (ZOA), Pelican optimization algorithm (POA), Siberian Tiger Optimization (STO), Language Education Optimization (LEO), Serval Optimization Algorithm (SOA), Walrus Optimization Algorithm (WOA), Fennec Fox Optimization (FFO), Three-periods optimization algorithm (TPOA), Teamwork optimization algorithm (TOA), Northern goshawk optimization (NGO), Tasmanian devil optimization (TDO), Archery algorithm (AA), Cat and mouse based optimizer (CMBO)

  2. It may be useful to compare the Matlab code of this algorithm with those of the similar algorithms to ensure its accuracy and completeness.

  3. The article may share some similarities with previous work by the same authors, further investigation may be warranted to verify the benchmark results reported in the papers and ensure their reliability and accuracy.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, OOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = OOA.OriginalOOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Trojovský, P., & Dehghani, M. Osprey Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving engineering optimization problems. Frontiers in Mechanical Engineering, 8, 136.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

get_indexes_better__(pop, idx)[source]

mealpy.swarm_based.PFA module

class mealpy.swarm_based.PFA.OriginalPFA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Pathfinder Algorithm (PFA)

Links:
  1. https://doi.org/10.1016/j.asoc.2019.03.012

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, PFA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = PFA.OriginalPFA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Yapici, H. and Cetinkaya, N., 2019. A new meta-heuristic optimizer: Pathfinder algorithm. Applied soft computing, 78, pp.545-568.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.POA module

class mealpy.swarm_based.POA.OriginalPOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Pelican Optimization Algorithm (POA)

Links:
  1. https://www.mdpi.com/1424-8220/22/3/855

  2. https://www.mathworks.com/matlabcentral/fileexchange/106680-pelican-optimization-algorithm-a-novel-nature-inspired

Notes

  1. This is somewhat concerning, as there appears to be a high degree of similarity between the source code for this algorithm and the Northern Goshawk Optimization (NGO)

  2. Algorithm design is similar to Zebra Optimization Algorithm (ZOA), Osprey Optimization Algorithm (OOA), Coati Optimization Algorithm (CoatiOA), Siberian Tiger Optimization (STO), Language Education Optimization (LEO), Serval Optimization Algorithm (SOA), Walrus Optimization Algorithm (WOA), Fennec Fox Optimization (FFO), Three-periods optimization algorithm (TPOA), Teamwork optimization algorithm (TOA), Northern goshawk optimization (NGO), Tasmanian devil optimization (TDO), Archery algorithm (AA), Cat and mouse based optimizer (CMBO)

  3. It may be useful to compare the Matlab code of this algorithm with those of the similar algorithms to ensure its accuracy and completeness.

  4. The article may share some similarities with previous work by the same authors, further investigation may be warranted to verify the benchmark results reported in the papers and ensure their reliability and accuracy.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, POA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = POA.OriginalPOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Trojovský, P., & Dehghani, M. (2022). Pelican optimization algorithm: A novel nature-inspired algorithm for engineering applications. Sensors, 22(3), 855.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.PSO module

class mealpy.swarm_based.PSO.AIW_PSO(epoch: int = 10000, pop_size: int = 100, c1: float = 2.05, c2: float = 2.05, alpha: float = 0.4, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Adaptive Inertia Weight Particle Swarm Optimization (AIW-PSO)

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • c1 (float): [1, 3], local coefficient, default = 2.05

  • c2 (float): [1, 3], global coefficient, default = 2.05

  • alpha (float): [0., 1.0], a positive constant used when adapting the inertia weight, default = 0.4
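
c1 and c2 enter the standard PSO velocity update; what AIW-PSO changes is the inertia weight w, which is adapted per particle from its recent search success (alpha controls that adaptation). For context, the underlying velocity and position step looks like this (sketch only; the adaptive-w rule itself follows the paper and is not shown):

>>> import numpy as np
>>>
>>> def pso_step(x, v, p_best, g_best, w, c1=2.05, c2=2.05, rng=np.random):
>>>     r1 = rng.random(x.shape)
>>>     r2 = rng.random(x.shape)
>>>     v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
>>>     return x + v_new, v_new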

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, PSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = PSO.AIW_PSO(epoch=1000, pop_size=50, c1=2.05, c2=2.05, alpha=0.4)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Qin, Z., Yu, F., Shi, Z., Wang, Y. (2006). Adaptive Inertia Weight Particle Swarm Optimization. In: Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Żurada, J.M. (eds) Artificial Intelligence and Soft Computing – ICAISC 2006. ICAISC 2006. Lecture Notes in Computer Science(), vol 4029. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11785231_48

amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with full information

Parameters

solution (np.ndarray) – The solution

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

initialize_variables()[source]

class mealpy.swarm_based.PSO.CL_PSO(epoch: int = 10000, pop_size: int = 100, c_local: float = 1.2, w_min: float = 0.4, w_max: float = 0.9, max_flag: int = 7, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Comprehensive Learning Particle Swarm Optimization (CL-PSO)

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • c_local (float): [1.0, 3.0], local coefficient, default = 1.2

  • w_min (float): [0.1, 0.5], Weight min of bird, default = 0.4

  • w_max (float): [0.7, 2.0], Weight max of bird, default = 0.9

  • max_flag (int): [5, 20], refreshing gap: the number of consecutive non-improving iterations allowed before a particle's exemplars are reassigned (see the sketch below), default = 7
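
In CL-PSO, each dimension of a particle's velocity learns from the personal best of a (possibly different) particle instead of a single global best; max_flag is the refreshing gap from the paper. A rough sketch of the refresh bookkeeping for one particle (hedged; mealpy's exemplar-selection details may differ):

>>> def maybe_refresh_exemplars(flag, improved, max_flag=7):
>>>     # flag counts consecutive non-improving iterations for this particle.
>>>     if improved:
>>>         return 0, False           # reset the counter, keep current exemplars
>>>     flag += 1
>>>     if flag >= max_flag:
>>>         return 0, True            # time to redraw this particle's exemplars
>>>     return flag, False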

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, PSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = PSO.CL_PSO(epoch=1000, pop_size=50, c_local = 1.2, w_min=0.4, w_max=0.9, max_flag = 7)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Liang, J.J., Qin, A.K., Suganthan, P.N. and Baskar, S., 2006. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE transactions on evolutionary computation, 10(3), pp.281-295.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with full information

Parameters

solution (np.ndarray) – The solution

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

initialize_variables()[source]

class mealpy.swarm_based.PSO.C_PSO(epoch: int = 10000, pop_size: int = 100, c1: float = 2.05, c2: float = 2.05, w_min: float = 0.4, w_max: float = 0.9, **kwargs: object)[source]

Bases: mealpy.swarm_based.PSO.P_PSO

The original version of: Chaos Particle Swarm Optimization (C-PSO)

Hyper-parameters should fine-tune in approximate range to get faster convergence toward the global optimum:
  • c1 (float): [1.0, 3.0] local coefficient, default = 2.05

  • c2 (float): [1.0, 3.0] global coefficient, default = 2.05

  • w_min (float): [0.1, 0.4], Weight min of bird, default = 0.4

  • w_max (float): [0.4, 2.0], Weight max of bird, default = 0.9
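
The "chaos" in C-PSO comes from a logistic map, z_(k+1) = 4 * z_k * (1 - z_k), whose fully chaotic regime drives a chaotic local search around promising solutions (the inertia weight is additionally adapted from fitness statistics; see get_weights__ below). A sketch of the map iteration only:

>>> def logistic_map(z, steps=1):
>>>     # Logistic map with mu = 4, fully chaotic on (0, 1).
>>>     for _ in range(steps):
>>>         z = 4.0 * z * (1.0 - z)
>>>     return z
>>>
>>> logistic_map(0.37, steps=5)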

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, PSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = PSO.C_PSO(epoch=1000, pop_size=50, c1=2.05, c2=2.05, w_min=0.4, w_max=0.9)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Liu, B., Wang, L., Jin, Y.H., Tang, F. and Huang, D.X., 2005. Improved particle swarm optimization combined with chaos. Chaos, Solitons & Fractals, 25(5), pp.1261-1271.

bounded_solution(solution: numpy.ndarray) numpy.ndarray[source]
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

get_weights__(fit, fit_avg, fit_min)[source]
initialize_variables()[source]
class mealpy.swarm_based.PSO.HPSO_TVAC(epoch=10000, pop_size=100, ci=0.5, cf=0.1, **kwargs)[source]

Bases: mealpy.swarm_based.PSO.P_PSO

The original version of: Hierarchical PSO Time-Varying Acceleration (HPSO-TVAC)

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • ci (float): [0.3, 1.0], initial acceleration coefficient, default = 0.5

  • cf (float): [0.0, 0.3], final acceleration coefficient, default = 0.1
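
A minimal sketch of the time-varying acceleration that ci and cf parameterize, assuming a plain linear schedule over the run (the published variant adds self-organising and jumping rules on top of this):

>>> def tvac(epoch, max_epochs, ci=0.5, cf=0.1):
>>>     """Acceleration coefficient moved linearly from ci (start) to cf (end)."""
>>>     return ci + (cf - ci) * epoch / max_epochs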

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, PSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = PSO.HPSO_TVAC(epoch=1000, pop_size=50, ci=0.5, cf=0.1)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Ghasemi, M., Aghaei, J. and Hadipour, M., 2017. New self-organising hierarchical PSO with jumping time-varying acceleration coefficients. Electronics Letters, 53(20), pp.1360-1362.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.PSO.LDW_PSO(epoch: int = 10000, pop_size: int = 100, c1: float = 2.05, c2: float = 2.05, w_min: float = 0.4, w_max: float = 0.9, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Linearly Decreasing inertia Weight Particle Swarm Optimization (LDW-PSO)

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • c1 (float): [1, 3], local coefficient, default = 2.05

  • c2 (float): [1, 3], global coefficient, default = 2.05

  • w_min (float): [0.1, 0.5], minimum inertia weight, default = 0.4

  • w_max (float): [0.8, 2.0], maximum inertia weight, default = 0.9
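
The name encodes the mechanism: the inertia weight decreases linearly from w_max to w_min over the run. A minimal sketch:

>>> def inertia_weight(epoch, max_epochs, w_min=0.4, w_max=0.9):
>>>     """Linearly decreasing inertia weight (Shi & Eberhart, 1998)."""
>>>     return w_max - (w_max - w_min) * epoch / max_epochs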

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, PSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = PSO.LDW_PSO(epoch=1000, pop_size=50, c1=2.05, c2=2.05, w_min=0.4, w_max=0.9)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Shi, Yuhui, and Russell Eberhart. “A modified particle swarm optimizer.” In 1998 IEEE international conference on evolutionary computation proceedings. IEEE world congress on computational intelligence (Cat. No. 98TH8360), pp. 69-73. IEEE, 1998.

amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with full information

Parameters

solution (np.ndarray) – The solution

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

initialize_variables()[source]
class mealpy.swarm_based.PSO.OriginalPSO(epoch: int = 10000, pop_size: int = 100, c1: float = 2.05, c2: float = 2.05, w: float = 0.4, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Particle Swarm Optimization (PSO)

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • c1 (float): [1, 3], local coefficient, default = 2.05

  • c2 (float): [1, 3], global coefficient, default = 2.05

  • w (float): (0., 1.0), inertia weight, default = 0.4
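
For reference, the textbook velocity and position update that w, c1, and c2 parameterize; an illustrative sketch, not mealpy's internal code:

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>>
>>> def pso_step(x, v, p_best, g_best, w=0.4, c1=2.05, c2=2.05):
>>>     """v <- w*v + c1*r1*(p_best - x) + c2*r2*(g_best - x); then x <- x + v."""
>>>     r1, r2 = rng.random(x.size), rng.random(x.size)
>>>     v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
>>>     return x + v_new, v_new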

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, PSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = PSO.OriginalPSO(epoch=1000, pop_size=50, c1=2.05, c2=2.05, w=0.4)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Kennedy, J. and Eberhart, R., 1995, November. Particle swarm optimization. In Proceedings of ICNN’95-international conference on neural networks (Vol. 4, pp. 1942-1948). IEEE.

amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with full information

Parameters

solution (np.ndarray) – The solution

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

initialize_variables()[source]
class mealpy.swarm_based.PSO.P_PSO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Phasor Particle Swarm Optimization (P-PSO)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, PSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "obj_func": objective_function,
>>>     "minmax": "min",
>>> }
>>>
>>> model = PSO.P_PSO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Ghasemi, M., Akbari, E., Rahimnejad, A., Razavi, S.E., Ghavidel, S. and Li, L., 2019. Phasor particle swarm optimization: a simple and efficient variant of PSO. Soft Computing, 23(19), pp.9701-9718.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with full information

Parameters

solution (np.ndarray) – The solution

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

initialize_variables()[source]

mealpy.swarm_based.SCSO module

class mealpy.swarm_based.SCSO.OriginalSCSO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Sand Cat Swarm Optimization (SCSO)

Links:
  1. https://link.springer.com/article/10.1007/s00366-022-01604-x

  2. https://www.mathworks.com/matlabcentral/fileexchange/110185-sand-cat-swarm-optimization

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SCSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SCSO.OriginalSCSO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Seyyedabbasi, A., & Kiani, F. (2022). Sand Cat swarm optimization: a nature-inspired algorithm to solve global optimization problems. Engineering with Computers, 1-25.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

get_index_roulette_wheel_selection__(p)[source]
initialize_variables()[source]

mealpy.swarm_based.SFO module

class mealpy.swarm_based.SFO.ImprovedSFO(epoch: int = 10000, pop_size: int = 100, pp: float = 0.1, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Improved Sailfish Optimizer (I-SFO)

Notes

  • The energy equation is reformulated

  • The AP (A) and epsilon parameters are removed

  • The opposition-based learning technique is used (see the sketch after the hyper-parameter list)

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • pp (float): the ratio between sailfish and sardines (N_sf = N_s * pp); typical values: 0.25, 0.2, 0.1
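
A minimal sketch of the opposition-based learning step noted above: a candidate solution is mirrored inside its bounds, and (typically) the better of the pair is kept.

>>> import numpy as np
>>>
>>> def opposition(solution, lb, ub):
>>>     """Opposite point x_opp = lb + ub - x, element-wise."""
>>>     return np.asarray(lb) + np.asarray(ub) - np.asarray(solution)
>>>
>>> opposition([2.0, -3.0], lb=[-10.0, -10.0], ub=[10.0, 10.0])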

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SFO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SFO.ImprovedSFO(epoch=1000, pop_size=50, pp = 0.1)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialization()[source]
class mealpy.swarm_based.SFO.OriginalSFO(epoch: int = 10000, pop_size: int = 100, pp: float = 0.1, AP: float = 4.0, epsilon: float = 0.0001, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: SailFish Optimizer (SFO)

Links:
  1. https://doi.org/10.1016/j.engappai.2019.01.001

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • pp (float): the ratio between sailfish and sardines (N_sf = N_s * pp); typical values: 0.25, 0.2, 0.1

  • AP (float): coefficient for decreasing the value of Attack Power linearly from AP to 0

  • epsilon (float): typical values: 0.0001, 0.001
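
A hedged sketch of how these hyper-parameters enter the method: pp sets the sailfish head-count from the sardine count, and the attack power decays linearly with the iteration counter, AP_t = AP * (1 - 2 * t * epsilon), following the paper. Function names are illustrative.

>>> def sailfish_count(n_sardines=100, pp=0.1):
>>>     """Head-count implied by the relation N_sf = N_s * pp."""
>>>     return max(1, int(n_sardines * pp))
>>>
>>> def attack_power(epoch, AP=4.0, epsilon=0.0001):
>>>     """Attack power decreasing linearly toward 0 as epochs accumulate."""
>>>     return AP * (1.0 - 2.0 * epoch * epsilon)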

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SFO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SFO.OriginalSFO(epoch=1000, pop_size=50, pp = 0.1, AP = 4.0, epsilon = 0.0001)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Shadravan, S., Naji, H.R. and Bardsiri, V.K., 2019. The Sailfish Optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Engineering Applications of Artificial Intelligence, 80, pp.20-34.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialization()[source]

mealpy.swarm_based.SHO module

class mealpy.swarm_based.SHO.OriginalSHO(epoch: int = 10000, pop_size: int = 100, h_factor: float = 5.0, n_trials: int = 10, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Spotted Hyena Optimizer (SHO)

Links:
  1. https://doi.org/10.1016/j.advengsoft.2017.05.014

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • h_factor (float): default = 5.0, control coefficient linearly decreased from h_factor to 0

  • n_trials (int): default = 10
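
A minimal sketch of the h schedule described above:

>>> def h_coefficient(epoch, max_epochs, h_factor=5.0):
>>>     """Control coefficient linearly decreased from h_factor to 0."""
>>>     return h_factor * (1.0 - epoch / max_epochs)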

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SHO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SHO.OriginalSHO(epoch=1000, pop_size=50, h_factor = 5.0, n_trials = 10)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Dhiman, G. and Kumar, V., 2017. Spotted hyena optimizer: a novel bio-inspired based metaheuristic technique for engineering applications. Advances in Engineering Software, 114, pp.48-70.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.SLO module

class mealpy.swarm_based.SLO.ImprovedSLO(epoch: int = 10000, pop_size: int = 100, c1: float = 1.2, c2: float = 1.2, **kwargs: object)[source]

Bases: mealpy.swarm_based.SLO.ModifiedSLO

The original version of: Improved Sea Lion Optimization (ImprovedSLO)

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • c1 (float): local coefficient, as in PSO, default = 1.2

  • c2 (float): global coefficient, as in PSO, default = 1.2

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SLO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SLO.ImprovedSLO(epoch=1000, pop_size=50, c1=1.2, c2=1.5)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Nguyen, Binh Minh, Trung Tran, Thieu Nguyen, and Giang Nguyen. “An improved sea lion optimization for workload elasticity prediction with neural networks.” International Journal of Computational Intelligence Systems 15, no. 1 (2022): 90.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.SLO.ModifiedSLO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Modified Sea Lion Optimization (M-SLO)

Notes

  • Inspired by the local-best idea in PSO

  • The Levy-flight technique is used (see the sketch below)

  • The shrink-encircling idea is used
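
The Levy-flight technique mentioned above is commonly implemented with Mantegna's algorithm; a hedged sketch (not necessarily the exact code behind the shrink_encircling_levy__ method listed below):

>>> import numpy as np
>>> from math import gamma, pi, sin
>>> rng = np.random.default_rng(0)
>>>
>>> def levy_step(n_dims, beta=1.0):
>>>     """Mantegna-style Levy step: heavy-tailed jumps for exploration."""
>>>     sigma = (gamma(1 + beta) * sin(pi * beta / 2)
>>>              / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
>>>     u = rng.normal(0.0, sigma, n_dims)
>>>     v = rng.normal(0.0, 1.0, n_dims)
>>>     return u / np.abs(v) ** (1 / beta)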

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SLO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SLO.ModifiedSLO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with full information

Parameters

solution (np.ndarray) – The solution

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

shrink_encircling_levy__(current_pos, epoch, dist, c, beta=1)[source]
class mealpy.swarm_based.SLO.OriginalSLO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Sea Lion Optimization Algorithm (SLO)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SLO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SLO.OriginalSLO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Masadeh, R., Mahafzah, B.A. and Sharieh, A., 2019. Sea lion optimization algorithm. Sea, 10(5), p.388.

amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.SRSR module

class mealpy.swarm_based.SRSR.OriginalSRSR(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Swarm Robotics Search And Rescue (SRSR)

Links:
  1. https://doi.org/10.1016/j.asoc.2017.02.028

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SRSR
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SRSR.OriginalSRSR(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Bakhshipour, M., Ghadi, M.J. and Namdari, F., 2017. Swarm robotics search & rescue: A novel artificial intelligence-inspired optimization approach. Applied Soft Computing, 57, pp.708-726.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with full information

Parameters

solution (np.ndarray) – The solution

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

initialize_variables()[source]

mealpy.swarm_based.SSA module

class mealpy.swarm_based.SSA.DevSSA(epoch: int = 10000, pop_size: int = 100, ST: float = 0.8, PD: float = 0.2, SD: float = 0.1, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version of: Sparrow Search Algorithm (SSA)

Notes

  • First, the population is sorted to find g-best and g-worst

  • In Eq. 4, a Gaussian distribution (self.generator.normal()) is used instead of the A+ and L matrices

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • ST (float): ST in [0.5, 1.0], safety threshold value, default = 0.8

  • PD (float): fraction of producers (percentage of the population), default = 0.2

  • SD (float): fraction of sparrows that perceive the danger, default = 0.1
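
The fractions above translate into per-epoch head-counts; a minimal sketch with illustrative names:

>>> def ssa_counts(pop_size=100, PD=0.2, SD=0.1):
>>>     """Producers lead the search; SD of the flock react to danger."""
>>>     return int(PD * pop_size), int(SD * pop_size)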

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SSA.DevSSA(epoch=1000, pop_size=50, ST = 0.8, PD = 0.2, SD = 0.1)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Xue, J. and Shen, B., 2020. A novel swarm intelligence optimization approach: sparrow search algorithm. Systems Science & Control Engineering, 8(1), pp.22-34.

amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.swarm_based.SSA.OriginalSSA(epoch: int = 10000, pop_size: int = 100, ST: float = 0.8, PD: float = 0.2, SD: float = 0.1, **kwargs: object)[source]

Bases: mealpy.swarm_based.SSA.DevSSA

The original version of: Sparrow Search Algorithm (SSA)

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • ST (float): ST in [0.5, 1.0], safety threshold value, default = 0.8

  • PD (float): fraction of producers (percentage of the population), default = 0.2

  • SD (float): fraction of sparrows that perceive the danger, default = 0.1

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SSA.OriginalSSA(epoch=1000, pop_size=50, ST = 0.8, PD = 0.2, SD = 0.1)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Xue, J. and Shen, B., 2020. A novel swarm intelligence optimization approach: sparrow search algorithm. Systems Science & Control Engineering, 8(1), pp.22-34.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.SSO module

class mealpy.swarm_based.SSO.OriginalSSO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Salp Swarm Optimization (SSO)

Links:
  1. https://doi.org/10.1016/j.advengsoft.2017.07.002

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SSO.OriginalSSO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mirjalili, S., Gandomi, A.H., Mirjalili, S.Z., Saremi, S., Faris, H. and Mirjalili, S.M., 2017. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Advances in Engineering Software, 114, pp.163-191.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.SSpiderA module

class mealpy.swarm_based.SSpiderA.OriginalSSpiderA(epoch: int = 10000, pop_size: int = 100, r_a: float = 1.0, p_c: float = 0.7, p_m: float = 0.1, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version of: Social Spider Algorithm (OriginalSSpiderA)

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • r_a (float): the rate of vibration attenuation when propagating over the spider web, default=1.0

  • p_c (float): controls the probability of the spiders changing their dimension mask in the random walk step, default=0.7

  • p_m (float): the probability of each entry in a dimension mask being one, default=0.1
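
A hedged sketch of the vibration model these parameters govern, following James and Li (2015): a source intensity is derived from fitness and attenuated over distance on the web. Here sigma_mean stands for the mean standard deviation of spider positions over each dimension, and c_floor for a constant confidently below all attainable fitness values; both names are illustrative.

>>> import numpy as np
>>>
>>> def vibration_intensity(fitness, c_floor):
>>>     """Source intensity log(1/(f - C) + 1) at the spider's own position."""
>>>     return np.log(1.0 / (fitness - c_floor) + 1.0)
>>>
>>> def attenuate(intensity, distance, sigma_mean, r_a=1.0):
>>>     """Attenuation over distance; larger r_a means slower decay."""
>>>     return intensity * np.exp(-distance / (sigma_mean * r_a))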

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SSpiderA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SSpiderA.OriginalSSpiderA(epoch=1000, pop_size=50, r_a = 1.0, p_c = 0.7, p_m = 0.1)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] James, J.Q. and Li, V.O., 2015. A social spider algorithm for global optimization. Applied soft computing, 30, pp.614-627.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with full information

Parameters

solution (np.ndarray) – The solution

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]
Overriding method in Optimizer class
  • x: The position of s on the web.

  • train: The fitness of the current position of s

  • target_vibration: The target vibration of s in the previous iteration.

  • intensity_vibration: intensity of vibration

  • movement_vector: The movement that s performed in the previous iteration

  • dimension_mask: The dimension mask that s employed to guide movement in the previous iteration; a 0-1 binary vector of length problem size

  • n_changed: The number of iterations since s last changed its target vibration (not needed)

mealpy.swarm_based.SSpiderO module

class mealpy.swarm_based.SSpiderO.OriginalSSpiderO(epoch: int = 10000, pop_size: int = 100, fp_min: float = 0.65, fp_max: float = 0.9, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Social Spider Optimization (SSpiderO)

Links:
  1. https://www.hindawi.com/journals/mpe/2018/6843923/

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • fp_min (float): minimum female percentage, default = 0.65

  • fp_max (float): maximum female percentage, default = 0.9
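
A minimal sketch of how fp_min and fp_max split the colony; drawing the fraction once per run is an assumption for illustration:

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>>
>>> def colony_split(pop_size=100, fp_min=0.65, fp_max=0.9):
>>>     """Female fraction drawn from [fp_min, fp_max]; the rest are males."""
>>>     n_females = int((fp_min + (fp_max - fp_min) * rng.random()) * pop_size)
>>>     return n_females, pop_size - n_females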

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SSpiderO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SSpiderO.OriginalSSpiderO(epoch=1000, pop_size=50, fp_min = 0.65, fp_max = 0.9)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Luque-Chang, A., Cuevas, E., Fausto, F., Zaldivar, D. and Pérez, M., 2018. Social spider optimization algorithm: modifications, applications, and perspectives. Mathematical Problems in Engineering, 2018.

amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

crossover__(mom=None, dad=None, id=0)[source]
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

initialization()[source]
mating__()[source]
move_females__(epoch=None)[source]
move_males__(epoch=None)[source]
recalculate_weights__(pop=None)[source]
survive__(pop=None, pop_child=None)[source]

mealpy.swarm_based.STO module

class mealpy.swarm_based.STO.OriginalSTO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Siberian Tiger Optimization (STO)

Links:
  1. https://ieeexplore.ieee.org/abstract/document/9989374

  2. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9989374

Notes

  1. This is somewhat concerning, as there appears to be a high degree of similarity between the source code for this algorithm and the Osprey Optimization Algorithm (OOA)

  2. Algorithm design is similar to Zebra Optimization Algorithm (ZOA), Osprey Optimization Algorithm (OOA), Coati Optimization Algorithm (CoatiOA), Northern Goshawk Optimization (NGO), Language Education Optimization (LEO), Serval Optimization Algorithm (SOA), Walrus Optimization Algorithm (WOA), Fennec Fox Optimization (FFO), Three-periods optimization algorithm (TPOA), Teamwork optimization algorithm (TOA), Pelican Optimization Algorithm (POA), Tasmanian devil optimization (TDO), Archery algorithm (AA), Cat and mouse based optimizer (CMBO)

  3. It may be useful to compare the Matlab code of this algorithm with those of the similar algorithms to ensure its accuracy and completeness.

  4. The article may share some similarities with previous work by the same authors; further investigation may be warranted to verify the benchmark results reported in the papers and ensure their reliability and accuracy.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, STO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = STO.OriginalSTO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Trojovský, P., Dehghani, M., & Hanuš, P. (2022). Siberian Tiger Optimization: A New Bio-Inspired Metaheuristic Algorithm for Solving Engineering Optimization Problems. IEEE Access, 10, 132396-132431.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

get_indexes_better__(pop, idx)[source]

mealpy.swarm_based.SeaHO module

class mealpy.swarm_based.SeaHO.OriginalSeaHO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Sea-Horse Optimization (SeaHO)

Links:
  1. https://link.springer.com/article/10.1007/s10489-022-03994-3

  2. https://www.mathworks.com/matlabcentral/fileexchange/115945-sea-horse-optimizer

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SeaHO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SeaHO.OriginalSeaHO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Zhao, S., Zhang, T., Ma, S., & Wang, M. (2022). Sea-horse optimizer: a novel nature-inspired meta-heuristic for global optimization problems. Applied Intelligence, 1-28.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialize_variables()[source]

mealpy.swarm_based.ServalOA module

class mealpy.swarm_based.ServalOA.OriginalServalOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Serval Optimization Algorithm (ServalOA)

Links:
  1. https://www.mdpi.com/2313-7673/7/4/204

Notes

  1. It’s concerning that the author seems to be reusing the same algorithms with minor variations.

  2. Algorithm design is similar to Zebra Optimization Algorithm (ZOA), Osprey Optimization Algorithm (OOA), Coati Optimization Algorithm (CoatiOA), Siberian Tiger Optimization (STO), Language Education Optimization (LEO), Pelican Optimization Algorithm (POA), Walrus Optimization Algorithm (WOA), Fennec Fox Optimization (FFO), Three-periods optimization algorithm (TPOA), Teamwork optimization algorithm (TOA), Northern goshawk optimization (NGO), Tasmanian devil optimization (TDO), Archery algorithm (AA), Cat and mouse based optimizer (CMBO)

  3. It may be useful to compare the Matlab code of this algorithm with those of the similar algorithms to ensure its accuracy and completeness.

  4. The article may share some similarities with previous work by the same authors; further investigation may be warranted to verify the benchmark results reported in the papers and ensure their reliability and accuracy.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ServalOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = ServalOA.OriginalServalOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Dehghani, M., & Trojovský, P. (2022). Serval Optimization Algorithm: A New Bio-Inspired Approach for Solving Optimization Problems. Biomimetics, 7(4), 204.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.TDO module

class mealpy.swarm_based.TDO.OriginalTDO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Tasmanian Devil Optimization (TDO)

Links:
  1. https://www.mathworks.com/matlabcentral/fileexchange/111380-tasmanian-devil-optimization-tdo

  2. https://ieeexplore.ieee.org/abstract/document/9714388

Notes

  1. This is somewhat concerning, as there appears to be a high degree of similarity between the source code for this algorithm and the Osprey Optimization Algorithm (OOA)

  2. Algorithm design is similar to Zebra Optimization Algorithm (ZOA), Osprey Optimization Algorithm (OOA), Pelican optimization algorithm (POA), Siberian Tiger Optimization (STO), Language Education Optimization (LEO), Serval Optimization Algorithm (SOA), Walrus Optimization Algorithm (WOA), Fennec Fox Optimization (FFO), Three-periods optimization algorithm (TPOA), Teamwork optimization algorithm (TOA), Northern goshawk optimization (NGO), Archery algorithm (AA), Cat and mouse based optimizer (CMBO)

  3. It may be useful to compare the Matlab code of this algorithm with those of the similar algorithms to ensure its accuracy and completeness.

  4. The article may share some similarities with previous work by the same authors; further investigation may be warranted to verify the benchmark results reported in the papers and ensure their reliability and accuracy.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TDO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TDO.OriginalTDO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Dehghani, M., Hubálovský, Š., & Trojovský, P. (2022). Tasmanian devil optimization: a new bio-inspired optimization algorithm for solving optimization algorithm. IEEE Access, 10, 19599-19620.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.TSO module

class mealpy.swarm_based.TSO.OriginalTSO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Tuna Swarm Optimization (TSO)

Notes

  1. Two variables that the authors treat as constants (aa = 0.7 and zz = 0.05)

  2. https://www.hindawi.com/journals/cin/2021/9210050/

  3. https://www.mathworks.com/matlabcentral/fileexchange/101734-tuna-swarm-optimization

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TSO.OriginalTSO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Xie, L., Han, T., Zhou, H., Zhang, Z. R., Han, B., & Tang, A. (2021). Tuna swarm optimization: a novel swarm-based metaheuristic algorithm for global optimization. Computational intelligence and Neuroscience, 2021.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

get_new_local_pos__(C, a1, a2, t, epoch)[source]
initialize_variables()[source]

mealpy.swarm_based.WOA module

class mealpy.swarm_based.WOA.HI_WOA(epoch: int = 10000, pop_size: int = 100, feedback_max: int = 10, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Hybrid Improved Whale Optimization Algorithm (HI-WOA)

Links:
  1. https://ieeexplore.ieee.org/document/8900003

Hyper-parameters should be fine-tuned within approximate ranges for faster convergence toward the global optimum:
  • feedback_max (int): maximum number of iterations per feedback cycle, default = 10
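
One plausible reading of feedback_max (an assumption, not confirmed by this page): count epochs without improvement of the global best and trigger the feedback step, e.g. a partial re-initialization of the population, once the counter reaches feedback_max.

>>> class FeedbackCounter:
>>>     """Hypothetical stagnation counter for the feedback mechanism."""
>>>     def __init__(self, feedback_max=10):
>>>         self.feedback_max, self.best, self.count = feedback_max, None, 0
>>>     def update(self, fitness):
>>>         if self.best is None or fitness < self.best:
>>>             self.best, self.count = fitness, 0
>>>         else:
>>>             self.count += 1
>>>         return self.count >= self.feedback_max  # True: apply feedback now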

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, WOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = WOA.HI_WOA(epoch=1000, pop_size=50, feedback_max = 10)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Tang, C., Sun, W., Wu, W. and Xue, M., 2019, July. A hybrid improved whale optimization algorithm. In 2019 IEEE 15th International Conference on Control and Automation (ICCA) (pp. 362-367). IEEE.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialize_variables()[source]
class mealpy.swarm_based.WOA.OriginalWOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Whale Optimization Algorithm (WOA)

Links:
  1. https://doi.org/10.1016/j.advengsoft.2016.01.008

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, WOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = WOA.OriginalWOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mirjalili, S. and Lewis, A., 2016. The whale optimization algorithm. Advances in engineering software, 95, pp.51-67.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.WaOA module

class mealpy.swarm_based.WaOA.OriginalWaOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Walrus Optimization Algorithm (WaOA)

Links:
  1. https://www.researchgate.net/publication/364684780_Walrus_Optimization_Algorithm_A_New_Bio-Inspired_Metaheuristic_Algorithm

Notes

  1. This is somewhat concerning, as there appears to be a high degree of similarity between the source code for this algorithm and the Northern Goshawk Optimization (NGO)

  2. Algorithm design is similar to Zebra Optimization Algorithm (ZOA), Osprey Optimization Algorithm (OOA), Coati Optimization Algorithm (CoatiOA), Siberian Tiger Optimization (STO), Language Education Optimization (LEO), Serval Optimization Algorithm (SOA), Northern Goshawk Optimization (NGO), Fennec Fox Optimization (FFO), Three-periods optimization algorithm (TPOA), Teamwork optimization algorithm (TOA), Pelican Optimization Algorithm (POA), Tasmanian devil optimization (TDO), Archery algorithm (AA), Cat and mouse based optimizer (CMBO)

  3. It may be useful to compare the Matlab code of this algorithm with those of the similar algorithms to ensure its accuracy and completeness.

  4. The article may share some similarities with previous work by the same authors; further investigation may be warranted to verify the benchmark results reported in the papers and ensure their reliability and accuracy.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, WaOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = WaOA.OriginalWaOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Trojovský, P., & Dehghani, M. (2022). Walrus Optimization Algorithm: A New Bio-Inspired Metaheuristic Algorithm.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.swarm_based.ZOA module

class mealpy.swarm_based.ZOA.OriginalZOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Zebra Optimization Algorithm (ZOA)

Links:
  1. https://ieeexplore.ieee.org/document/9768820

  2. https://www.mathworks.com/matlabcentral/fileexchange/122942-zebra-optimization-algorithm-zoa

Notes

  1. It’s concerning that the author seems to be reusing the same algorithms with minor variations.

  2. Algorithm design is similar to Osprey Optimization Algorithm (OOA), Pelican optimization algorithm (POA), Siberian Tiger Optimization (STO), Language Education Optimization (LEO), Serval Optimization Algorithm (SOA), Walrus Optimization Algorithm (WOA), Fennec Fox Optimization (FFO), Three-periods optimization algorithm (TPOA), Teamwork optimization algorithm (TOA), Northern goshawk optimization (NGO), Tasmanian devil optimization (TDO), Archery algorithm (AA), Cat and mouse based optimizer (CMBO).

  3. It may be useful to compare the Matlab code of this algorithm with those of the similar algorithms to ensure its accuracy and completeness.

  4. The article may share some similarities with previous work by the same authors; further investigation may be warranted to verify the benchmark results reported in the papers and ensure their reliability and accuracy.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ZOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = ZOA.OriginalZOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Trojovská, E., Dehghani, M., & Trojovský, P. (2022). Zebra optimization algorithm: A new bio-inspired optimization algorithm for solving optimization algorithm. IEEE Access, 10, 49445-49473.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration