mealpy.human_based package

mealpy.human_based.BRO module

class mealpy.human_based.BRO.DevBRO(epoch: int = 10000, pop_size: int = 100, threshold: float = 3, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Battle Royale Optimization (BRO)

Notes

  • The flow of the algorithm is changed; the third loop is removed

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • threshold (int): [2, 5], dead threshold, default=3

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BRO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = BRO.DevBRO(epoch=1000, pop_size=50, threshold=3)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

find_idx_min_distance__(target_pos=None, pop=None)[source]
generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

get_idx_min__(data)[source]
initialize_variables()[source]
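
In DevBRO/OriginalBRO the threshold parameter acts as a damage limit: a soldier that keeps losing fights accumulates damage, and once the damage exceeds the threshold the soldier is respawned at a random position. The snippet below is a minimal sketch of that rule, assuming a hypothetical damage counter and respawn helper; it only illustrates the role of threshold and is not the exact mealpy implementation.

>>> import numpy as np
>>> def respawn_if_exhausted(position, damage, lb, ub, threshold=3, rng=np.random.default_rng()):
>>>     """Hypothetical helper: reset a soldier whose damage counter exceeded the dead threshold."""
>>>     if damage > threshold:
>>>         # respawn at a random position inside the bounds and reset the damage counter
>>>         return rng.uniform(lb, ub), 0
>>>     return position, damage
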
class mealpy.human_based.BRO.OriginalBRO(epoch: int = 10000, pop_size: int = 100, threshold: float = 3, **kwargs: object)[source]

Bases: mealpy.human_based.BRO.DevBRO

The original version of: Battle Royale Optimization (BRO)

Links:
  1. https://doi.org/10.1007/s00521-020-05004-4

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • threshold (int): [2, 5], dead threshold, default=3

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BRO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = BRO.OriginalBRO(epoch=1000, pop_size=50, threshold=3)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Rahkar Farshi, T., 2021. Battle royale optimization algorithm. Neural Computing and Applications, 33(4), pp.1139-1157.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.human_based.BSO module

class mealpy.human_based.BSO.ImprovedBSO(epoch: int = 10000, pop_size: int = 100, m_clusters: int = 5, p1: float = 0.25, p2: float = 0.5, p3: float = 0.75, p4: float = 0.5, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The improved version: Improved Brain Storm Optimization (IBSO)

Notes

  • Some probability parameters and unnecessary equations are removed.

  • The Levy-flight technique is employed to enhance the algorithm’s robustness and resilience in challenging environments.

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • m_clusters (int): [3, 10], number of clusters (m in the paper)

  • p1 (float): 25% probability, default = 0.25

  • p2 (float): 50% probability the solution is changed by itself (local search), 50% by an outside solution (global search), default = 0.5

  • p3 (float): 75% probability of developing the old idea, 25% of inventing a new idea based on Levy-flight, default = 0.75

  • p4 (float): [0.4, 0.6], puts more weight on the cluster centers than on random positions

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = BSO.ImprovedBSO(epoch=1000, pop_size=50, m_clusters=5, p1=0.25, p2=0.5, p3=0.75, p4=0.6)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] El-Abd, M. (2017). Global-best brain storm optimization algorithm. Swarm and evolutionary computation, 37, 27-44.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

find_cluster__(pop_group)[source]
initialization()[source]
class mealpy.human_based.BSO.OriginalBSO(epoch: int = 10000, pop_size: int = 100, m_clusters: int = 5, p1: float = 0.2, p2: float = 0.8, p3: float = 0.4, p4: float = 0.5, slope: int = 20, **kwargs: object)[source]

Bases: mealpy.human_based.BSO.ImprovedBSO

The original version of: Brain Storm Optimization (BSO)

Links:
  1. https://doi.org/10.1007/978-3-642-21515-5_36

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • m_clusters (int): [3, 10], number of clusters (m in the paper)

  • p1 (float): [0.1, 0.5], probability

  • p2 (float): [0.5, 0.95], probability

  • p3 (float): [0.2, 0.8], probability

  • p4 (float): [0.2, 0.8], probability

  • slope (int): [10, 15, 20, 25], slope of the logsig() function (k in the paper)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = BSO.OriginalBSO(epoch=1000, pop_size=50, m_clusters=5, p1=0.2, p2=0.8, p3=0.4, p4=0.5, slope=20)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Shi, Y., 2011, June. Brain storm optimization algorithm. In International conference in swarm intelligence (pp. 303-309). Springer, Berlin, Heidelberg.

amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration
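
The slope parameter k controls the logistic (logsig) schedule that shrinks the random step size as the epochs progress. The sketch below shows the commonly cited BSO step-size rule, step = rand * logsig((0.5 * max_epoch - epoch) / k); the function name is an assumption used for illustration and the body is not copied from the mealpy source.

>>> import numpy as np
>>> def bso_step_size(epoch, max_epoch, slope=20, rng=np.random.default_rng()):
>>>     """Illustrative sketch: step size decays over the epochs, with the decay speed set by the slope k."""
>>>     logsig = 1.0 / (1.0 + np.exp(-(0.5 * max_epoch - epoch) / slope))
>>>     return rng.random() * logsig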

mealpy.human_based.CA module

class mealpy.human_based.CA.OriginalCA(epoch: int = 10000, pop_size: int = 100, accepted_rate: float = 0.15, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Culture Algorithm (CA)

Links:
  1. https://github.com/clever-algorithms/CleverAlgorithms

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • accepted_rate (float): [0.1, 0.5], probability of accepted rate, default: 0.15

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, CA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = CA.OriginalCA(epoch=1000, pop_size=50, accepted_rate=0.15)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Chen, B., Zhao, L. and Lu, J.H., 2009, April. Wind power forecast using RBF network and culture algorithm. In 2009 International Conference on Sustainable Power Generation and Supply (pp. 1-6). IEEE.

create_faithful__(lb, ub)[source]
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialize_variables()[source]
update_belief_space__(belief_space, pop_accepted)[source]
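
In the Culture Algorithm, accepted_rate sets how many of the best individuals are accepted into the belief space each generation, and the belief space in turn stores normative bounds from which new "faithful" candidates are sampled (following the CleverAlgorithms description linked above). The helpers below are an assumption-labelled sketch of that mechanism, not the exact bodies of update_belief_space__ and create_faithful__.

>>> import numpy as np
>>> rng = np.random.default_rng()
>>>
>>> def accept_into_belief_space(sorted_positions, accepted_rate=0.15):
>>>     """Sketch: keep the top fraction of the (sorted) population and derive normative bounds from it."""
>>>     n_accepted = max(1, int(round(accepted_rate * len(sorted_positions))))
>>>     accepted = np.array(sorted_positions[:n_accepted])
>>>     return accepted.min(axis=0), accepted.max(axis=0)   # per-dimension normative lower/upper bounds
>>>
>>> def create_faithful(norm_lb, norm_ub):
>>>     """Sketch: sample a new candidate inside the normative bounds of the belief space."""
>>>     return rng.uniform(norm_lb, norm_ub)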

mealpy.human_based.CHIO module

class mealpy.human_based.CHIO.DevCHIO(epoch: int = 10000, pop_size: int = 100, brr: float = 0.15, max_age: int = 10, **kwargs: object)[source]

Bases: mealpy.human_based.CHIO.OriginalCHIO

The developed version of: Coronavirus Herd Immunity Optimization (CHIO)

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • brr (float): [0.05, 0.2], Basic reproduction rate, default=0.15

  • max_age (int): [5, 20], Maximum infected cases age, default=10

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, CHIO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = CHIO.DevCHIO(epoch=1000, pop_size=50, brr=0.15, max_age=10)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.human_based.CHIO.OriginalCHIO(epoch: int = 10000, pop_size: int = 100, brr: float = 0.15, max_age: int = 10, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Coronavirus Herd Immunity Optimization (CHIO)

Links:
  1. https://link.springer.com/article/10.1007/s00521-020-05296-6

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • brr (float): [0.05, 0.2], Basic reproduction rate, default=0.15

  • max_age (int): [5, 20], Maximum infected cases age, default=10

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, CHIO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = CHIO.OriginalCHIO(epoch=1000, pop_size=50, brr=0.15, max_age=10)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Al-Betar, M.A., Alyasseri, Z.A.A., Awadallah, M.A. et al. Coronavirus herd immunity optimizer (CHIO). Neural Comput & Applic 33, 5011–5042 (2021). https://doi.org/10.1007/s00521-020-05296-6

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialize_variables()[source]
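
In CHIO, brr is the basic reproduction rate that decides how often a dimension is influenced by infected, susceptible, or immune individuals, and max_age caps how long an infected case may go without improving before it is regenerated. Only the age-based reset is sketched below; the helper name and the random regeneration are assumptions used for illustration, not the exact mealpy logic.

>>> import numpy as np
>>> def reset_if_too_old(solution, age, lb, ub, max_age=10, rng=np.random.default_rng()):
>>>     """Hypothetical sketch: regenerate an infected case whose age exceeded max_age without improvement."""
>>>     if age > max_age:
>>>         return rng.uniform(lb, ub), 0   # fresh random solution, age counter reset
>>>     return solution, age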

mealpy.human_based.FBIO module

class mealpy.human_based.FBIO.DevFBIO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Forensic-Based Investigation Optimization (FBIO)

Notes

  • The third loop is removed; the flow and a few equations are improved

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, FBIO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = FBIO.DevFBIO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

probability__(list_fitness=None)[source]
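
probability__ turns a list of fitness values into selection probabilities used when picking agents to follow. Its exact formula is not reproduced here; the snippet below is only a common minimal choice (min-max style normalization in which better fitness gets a higher probability for minimization), offered as an assumption-labelled illustration.

>>> import numpy as np
>>> def selection_probability(list_fitness):
>>>     """Illustrative sketch: smaller fitness (minimization) maps to a larger selection probability."""
>>>     list_fitness = np.asarray(list_fitness, dtype=float)
>>>     f_max, f_min = list_fitness.max(), list_fitness.min()
>>>     return (f_max - list_fitness) / (f_max - f_min + 1e-10)
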
class mealpy.human_based.FBIO.OriginalFBIO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.human_based.FBIO.DevFBIO

The original version of: Forensic-Based Investigation Optimization (FBIO)

Links:
  1. https://doi.org/10.1016/j.asoc.2020.106339

  2. https://ww2.mathworks.cn/matlabcentral/fileexchange/76299-forensic-based-investigation-algorithm-fbi

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, FBIO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = FBIO.OriginalFBIO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Chou, J.S. and Nguyen, N.M., 2020. FBI inspired meta-optimization. Applied Soft Computing, 93, p.106339.

amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.human_based.GSKA module

class mealpy.human_based.GSKA.DevGSKA(epoch: int = 10000, pop_size: int = 100, pb: float = 0.1, kr: float = 0.7, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Gaining Sharing Knowledge-based Algorithm (GSKA)

Notes

  • The third loop and two parameters are removed

  • Each solution represents a junior or senior member instead of a dimension of the solution

  • Equations are vector-based and can handle large-scale problems

  • The ideas of Levy-flight and the global best solution are applied

  • The better solution is kept after the updating process

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • pb (float): [0.1, 0.5], percent of the best (p in the paper), default = 0.1

  • kr (float): [0.5, 0.9], knowledge ratio, default = 0.7

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GSKA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GSKA.DevGSKA(epoch=1000, pop_size=50, pb=0.1, kr=0.9)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.human_based.GSKA.OriginalGSKA(epoch: int = 10000, pop_size: int = 100, pb: float = 0.1, kf: float = 0.5, kr: float = 0.9, kg: int = 5, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Gaining Sharing Knowledge-based Algorithm (GSKA)

Links:
  1. https://doi.org/10.1007/s13042-019-01053-x

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • pb (float): [0.1, 0.5], percent of the best (p in the paper), default = 0.1

  • kf (float): [0.3, 0.8], knowledge factor that controls the total amount of gained and shared knowledge added from others to the current individual during generations, default = 0.5

  • kr (float): [0.5, 0.95], knowledge ratio, default = 0.9

  • kg (int): [3, 20], number of generations effect to D-dimension, default = 5

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, GSKA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = GSKA.OriginalGSKA(epoch=1000, pop_size=50, pb=0.1, kf=0.5, kr=0.9, kg=5)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mohamed, A.W., Hadi, A.A. and Mohamed, A.K., 2020. Gaining-sharing knowledge based algorithm for solving optimization problems: a novel nature-inspired algorithm. International Journal of Machine Learning and Cybernetics, 11(7), pp.1501-1529.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration
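
In the original GSKA, kg controls how quickly the number of dimensions handled by the junior (gaining) phase shrinks over the generations, with the remaining dimensions handled by the senior (sharing) phase. The commonly cited schedule from the paper, D_junior = ndim * (1 - epoch / max_epoch)**kg, is sketched below; the function name is an assumption made for illustration.

>>> import numpy as np
>>> def junior_dimension_count(epoch, max_epoch, ndim, kg=5):
>>>     """Sketch: number of dimensions updated by the junior phase at a given epoch."""
>>>     return int(np.ceil(ndim * (1.0 - epoch / max_epoch) ** kg))
>>>
>>> junior_dimension_count(epoch=100, max_epoch=1000, ndim=30, kg=5)   # 18 dimensions early on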

mealpy.human_based.HBO module

class mealpy.human_based.HBO.OriginalHBO(epoch: int = 10000, pop_size: int = 100, degree: int = 2, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Heap-based optimizer (HBO)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0957417420305261#!

  2. https://github.com/qamar-askari/HBO/blob/master/HBO.m

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • degree (int): [2, 4], the degree level in Corporate Rank Hierarchy (CRH), default=2

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, HBO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = HBO.OriginalHBO(epoch=1000, pop_size=50, degree=3)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Askari, Q., Saeed, M., & Younas, I. (2020). Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Systems with Applications, 161, 113702.

before_main_loop()[source]
colleagues_limits_generator__(pop_size, degree=3)[source]
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

heapifying__(pop, degree=3)[source]
initialize_variables()[source]
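
heapifying__ arranges the population into a degree-ary heap modelling the Corporate Rank Hierarchy (CRH): an agent's boss is its heap parent, and its colleagues are the other children of that parent. The index arithmetic of a d-ary heap, which such helpers rely on, is sketched below with illustrative function names.

>>> def parent_index(idx, degree=3):
>>>     """Index of the boss (heap parent) of the agent stored at position idx; the root has no parent."""
>>>     return (idx - 1) // degree if idx > 0 else None
>>>
>>> def children_indexes(idx, pop_size, degree=3):
>>>     """Indexes of the direct subordinates of the agent stored at position idx."""
>>>     return [c for c in range(degree * idx + 1, degree * idx + degree + 1) if c < pop_size]
>>>
>>> parent_index(7, degree=3)           # 2
>>> children_indexes(0, 50, degree=3)   # [1, 2, 3]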

mealpy.human_based.HCO module

class mealpy.human_based.HCO.OriginalHCO(epoch: int = 10000, pop_size: int = 100, wfp: float = 0.65, wfv: float = 0.05, c1: float = 1.4, c2: float = 1.4, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Human Conception Optimizer (HCO)

Links:
  1. https://www.mathworks.com/matlabcentral/fileexchange/124200-human-conception-optimizer-hco

  2. https://www.nature.com/articles/s41598-022-25031-6

Notes

  1. This algorithm shares some similarities with the PSO algorithm (equations)

  2. The Matlab implementation is somewhat different from the paper

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • wfp (float): (0, 1.) - weight factor for probability of fitness selection, default=0.65

  • wfv (float): (0, 1.0) - weight factor for velocity update stage, default=0.05

  • c1 (float): (0., 3.0) - acceleration coefficient, same as PSO, default=1.4

  • c2 (float): (0., 3.0) - acceleration coefficient, same as PSO, default=1.4

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, HCO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = HCO.OriginalHCO(epoch=1000, pop_size=50, wfp=0.65, wfv=0.05, c1=1.4, c2=1.4)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Acharya, D., & Das, D. K. (2022). A novel Human Conception Optimizer for solving optimization problems. Scientific Reports, 12(1), 21631.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialization()[source]
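
The notes above point out that HCO's equations resemble PSO, with wfv acting as a weight on the previous velocity and c1, c2 as acceleration coefficients. As a rough analogue only, and not the actual HCO update, a PSO-style velocity step driven by those three coefficients looks like the sketch below.

>>> import numpy as np
>>> def pso_like_velocity(velocity, position, p_best, g_best, wfv=0.05, c1=1.4, c2=1.4, rng=np.random.default_rng()):
>>>     """Illustrative PSO-style analogue of how wfv, c1 and c2 act on a velocity vector."""
>>>     r1, r2 = rng.random(position.shape), rng.random(position.shape)
>>>     return wfv * velocity + c1 * r1 * (p_best - position) + c2 * r2 * (g_best - position)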

mealpy.human_based.ICA module

class mealpy.human_based.ICA.OriginalICA(epoch: int = 10000, pop_size: int = 100, empire_count: int = 5, assimilation_coeff: float = 1.5, revolution_prob: float = 0.05, revolution_rate: float = 0.1, revolution_step_size: float = 0.1, zeta: float = 0.1, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Imperialist Competitive Algorithm (ICA)

Links:
  1. https://ieeexplore.ieee.org/document/4425083

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • empire_count (int): [3, 10], Number of Empires (also Imperialists)

  • assimilation_coeff (float): [1.0, 3.0], Assimilation Coefficient (beta in the paper)

  • revolution_prob (float): [0.01, 0.1], Revolution Probability

  • revolution_rate (float): [0.05, 0.2], Revolution Rate (mu)

  • revolution_step_size (float): [0.05, 0.2], Revolution Step Size (sigma)

  • zeta (float): [0.05, 0.2], Colonies Coefficient in Total Objective Value of Empires

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, ICA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = ICA.OriginalICA(epoch=1000, pop_size=50, empire_count=5, assimilation_coeff=1.5,
>>>                         revolution_prob=0.05, revolution_rate=0.1, revolution_step_size=0.1, zeta=0.1)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Atashpaz-Gargari, E. and Lucas, C., 2007, September. Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In 2007 IEEE congress on evolutionary computation (pp. 4661-4667). Ieee.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialization()[source]
revolution_country__(solution: numpy.ndarray, n_revoluted: int) numpy.ndarray[source]
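
assimilation_coeff (beta) scales how far each colony moves toward its imperialist, while revolution_rate (mu) and revolution_step_size (sigma) control how many dimensions of a colony are randomly perturbed and by how much. Following the standard ICA description in the linked paper, both moves can be sketched as below; this is a simplified illustration, not the exact mealpy implementation.

>>> import numpy as np
>>> rng = np.random.default_rng()
>>>
>>> def assimilate(colony, imperialist, assimilation_coeff=1.5):
>>>     """Sketch: move a colony toward its imperialist by a random fraction of beta."""
>>>     return colony + assimilation_coeff * rng.random(colony.shape) * (imperialist - colony)
>>>
>>> def revolt(colony, revolution_rate=0.1, revolution_step_size=0.1):
>>>     """Sketch: randomly perturb a fraction (mu) of the colony's dimensions with step size sigma."""
>>>     n_revoluted = max(1, int(round(revolution_rate * colony.size)))
>>>     idx = rng.choice(colony.size, n_revoluted, replace=False)
>>>     new_colony = colony.copy()
>>>     new_colony[idx] += revolution_step_size * rng.standard_normal(n_revoluted)
>>>     return new_colony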

mealpy.human_based.LCO module

class mealpy.human_based.LCO.DevLCO(epoch: int = 10000, pop_size: int = 100, r1: float = 2.35, **kwargs: object)[source]

Bases: mealpy.human_based.LCO.OriginalLCO

The developed version: Life Choice-based Optimization (LCO)

Notes

  • The flow is changed with an if-else statement.

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • r1 (float): [1.5, 4], coefficient factor, default = 2.35

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, LCO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = LCO.DevLCO(epoch=1000, pop_size=50, r1=2.35)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.human_based.LCO.ImprovedLCO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The improved version: Life Choice-based Optimization (ILCO)

Notes

  • The flow of the original LCO is kept.

  • A Gaussian distribution and a mutation mechanism are added

  • The r1 parameter is removed

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, LCO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = LCO.ImprovedLCO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration
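
The improved LCO adds Gaussian perturbations and a mutation mechanism to the original flow (see the notes above). A generic Gaussian mutation, the kind of operator those notes refer to, is sketched below; the scale factor and the clipping back into the bounds are illustrative assumptions rather than the exact mealpy operator.

>>> import numpy as np
>>> def gaussian_mutation(solution, lb, ub, scale=0.1, rng=np.random.default_rng()):
>>>     """Illustrative Gaussian mutation: perturb the solution and clip it back into [lb, ub]."""
>>>     solution, lb, ub = np.asarray(solution, float), np.asarray(lb, float), np.asarray(ub, float)
>>>     mutated = solution + rng.normal(0.0, scale, solution.shape) * (ub - lb)
>>>     return np.clip(mutated, lb, ub)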

class mealpy.human_based.LCO.OriginalLCO(epoch: int = 10000, pop_size: int = 100, r1: float = 2.35, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Life Choice-based Optimization (LCO)

Links:
  1. https://doi.org/10.1007/s00500-019-04443-z

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • r1 (float): [1.5, 4], coefficient factor, default = 2.35

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, LCO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = LCO.OriginalLCO(epoch=1000, pop_size=50, r1=2.35)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Khatri, A., Gaba, A., Rana, K.P.S. and Kumar, V., 2020. A novel life choice-based optimizer. Soft Computing, 24(12), pp.9121-9141.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.human_based.PO module

mealpy.human_based.QSA module

class mealpy.human_based.QSA.DevQSA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Queuing Search Algorithm (QSA)

Notes

  • The third-level loops are removed

  • The global best solution is used in the third business phase instead of a random solution

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, QSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = QSA.DevQSA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
calculate_queue_length__(t1, t2, t3)[source]
Calculate the length of each queue based on t1, t2, t3:
  • t1 = t1 * 1.0e+100

  • t2 = t2 * 1.0e+100

  • t3 = t3 * 1.0e+100

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

update_business_1__(pop=None, current_epoch=None)[source]
update_business_2__(pop=None)[source]
update_business_3__(pop, g_best)[source]
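
calculate_queue_length__ splits the population into three queues whose sizes are guided by the fitness-derived values t1, t2, t3 (scaled by 1.0e+100, as noted above, to avoid degenerate ratios). A plausible proportional allocation is sketched below; it is an assumption about the intent of the helper, not the verbatim mealpy code.

>>> def queue_lengths(t1, t2, t3, pop_size):
>>>     """Sketch: allocate the population to three queues in proportion to t1, t2, t3."""
>>>     total = t1 + t2 + t3
>>>     n1 = int(pop_size * t1 / total)
>>>     n2 = int(pop_size * t2 / total)
>>>     return n1, n2, pop_size - n1 - n2   # the last queue takes the remainder
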
class mealpy.human_based.QSA.ImprovedQSA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.human_based.QSA.OppoQSA, mealpy.human_based.QSA.LevyQSA

The original version of: Improved Queuing Search Algorithm (QSA)

Links:
  1. https://doi.org/10.1007/s12652-020-02849-4

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, QSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = QSA.ImprovedQSA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Nguyen, B.M., Hoang, B., Nguyen, T. and Nguyen, G., 2021. nQSV-Net: a novel queuing search variant for global space search and workload modeling. Journal of Ambient Intelligence and Humanized Computing, 12(1), pp.27-46.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

class mealpy.human_based.QSA.LevyQSA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.human_based.QSA.DevQSA

The Levy-flight version: Queuing Search Algorithm (LQSA)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, QSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = QSA.LevyQSA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

update_business_2__(pop=None, current_epoch=None)[source]
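
LevyQSA injects Levy-flight jumps into the second business phase. A common way to draw a Levy-distributed step is Mantegna's algorithm, sketched below; mealpy ships its own Levy helper, so this snippet is only an illustration of the idea and its names are assumptions.

>>> import numpy as np
>>> from math import gamma, pi, sin
>>>
>>> def levy_step(ndim, beta=1.5, rng=np.random.default_rng()):
>>>     """Illustrative Levy-flight step drawn with Mantegna's algorithm."""
>>>     sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
>>>                (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
>>>     u = rng.normal(0.0, sigma_u, ndim)
>>>     v = rng.normal(0.0, 1.0, ndim)
>>>     return u / np.abs(v) ** (1 / beta)
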
class mealpy.human_based.QSA.OppoQSA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.human_based.QSA.DevQSA

The opposition-based learning version: Queuing Search Algorithm (OQSA)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, QSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = QSA.OppoQSA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

opposition_based__(pop=None, g_best=None)[source]
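
Opposition-based learning evaluates, for each agent, the point mirrored through the centre of the search box and keeps whichever of the two is better. The mirror itself is the standard formula x_opp = lb + ub - x, shown below; the greedy keep-the-better step is left to the optimizer.

>>> import numpy as np
>>> def opposite_solution(solution, lb, ub):
>>>     """Standard opposition-based learning mirror of a solution inside [lb, ub]."""
>>>     return np.asarray(lb) + np.asarray(ub) - np.asarray(solution)
>>>
>>> opposite_solution([2.0, -7.5], lb=[-10.0, -10.0], ub=[10.0, 10.0])   # array([-2. ,  7.5])
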
class mealpy.human_based.QSA.OriginalQSA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.human_based.QSA.DevQSA

The original version of: Queuing Search Algorithm (QSA)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0307904X18302890

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, QSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = QSA.OriginalQSA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Zhang, J., Xiao, M., Gao, L. and Pan, Q., 2018. Queuing search algorithm: A novel metaheuristic algorithm for solving engineering optimization problems. Applied Mathematical Modelling, 63, pp.464-490.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

update_business_3__(pop, g_best)[source]

mealpy.human_based.SARO module

class mealpy.human_based.SARO.DevSARO(epoch: int = 10000, pop_size: int = 100, se: float = 0.5, mu: int = 15, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Search And Rescue Optimization (SARO)

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • se (float): [0.3, 0.8], social effect, default = 0.5

  • mu (int): maximum unsuccessful search number, belongs to range: [2, 2+int(self.pop_size/2)], default = 15

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SARO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SARO.DevSARO(epoch=1000, pop_size=50, se=0.5, mu=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
amend_solution(solution: numpy.ndarray) numpy.ndarray[source]

This function is based on optimizer’s strategy. In each optimizer, this function can be overridden

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialization()[source]
initialize_variables()[source]
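
In SARO, every agent carries an unsuccessful-search number (USN): it grows by one whenever a new position fails to improve the agent, is reset on success, and once it exceeds mu the agent abandons its position for a random one. The bookkeeping is sketched below under those assumptions; it is illustrative rather than the exact mealpy code.

>>> import numpy as np
>>> def update_usn(improved, usn, position, lb, ub, mu=15, rng=np.random.default_rng()):
>>>     """Hypothetical sketch of the unsuccessful-search-number rule driven by mu."""
>>>     if improved:
>>>         return position, 0              # success: keep the position, reset the counter
>>>     usn += 1
>>>     if usn > mu:                        # too many failures: restart at a random position
>>>         return rng.uniform(lb, ub), 0
>>>     return position, usn
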
class mealpy.human_based.SARO.OriginalSARO(epoch: int = 10000, pop_size: int = 100, se: float = 0.5, mu: int = 15, **kwargs: object)[source]

Bases: mealpy.human_based.SARO.DevSARO

The original version of: Search And Rescue Optimization (SARO)

Links:
  1. https://doi.org/10.1155/2019/2482543

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • se (float): [0.3, 0.8], social effect, default = 0.5

  • mu (int): [10, 20], maximum unsuccessful search number, default = 15

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SARO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SARO.OriginalSARO(epoch=1000, pop_size=50, se=0.5, mu=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Shabani, A., Asgarian, B., Gharebaghi, S.A., Salido, M.A. and Giret, A., 2019. A new optimization algorithm based on search and rescue operations. Mathematical Problems in Engineering, 2019.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.human_based.SPBO module

class mealpy.human_based.SPBO.DevSPBO(epoch=10000, pop_size=100, **kwargs)[source]

Bases: mealpy.human_based.SPBO.OriginalSPBO

The developed version of: Student Psychology Based Optimization (SPBO)

Notes

  1. Uniform random numbers are replaced by normal (Gaussian) random numbers

  2. The population is sorted and one third of it is assigned to each category

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SPBO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SPBO.DevSPBO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration
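
The developed SPBO sorts the population by fitness and assigns roughly one third of the students to each psychological category (note 2 above). A minimal sketch of that split, assuming the population is already sorted from best to worst, is shown below.

>>> def split_into_thirds(sorted_pop):
>>>     """Sketch: best third, average third and below-average third of a sorted population."""
>>>     n = len(sorted_pop) // 3
>>>     return sorted_pop[:n], sorted_pop[n:2 * n], sorted_pop[2 * n:]
>>>
>>> best, average, below = split_into_thirds(list(range(50)))   # group sizes 16, 16, 18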

class mealpy.human_based.SPBO.OriginalSPBO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Student Psychology Based Optimization (SPBO)

Notes

  1. This algorithm performs weakly on several problems

  2. It is also time-consuming because it performs ndim * pop_size updates per epoch

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0965997820301484

  2. https://www.mathworks.com/matlabcentral/fileexchange/80991-student-psycology-based-optimization-spbo-algorithm

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SPBO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SPBO.OriginalSPBO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Das, B., Mukherjee, V., & Das, D. (2020). Student psychology based optimization algorithm: A new population based optimization algorithm for solving optimization problems. Advances in Engineering software, 146, 102804.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.human_based.SSDO module

class mealpy.human_based.SSDO.OriginalSSDO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Social Ski-Driver Optimization (SSDO)

Links:
  1. https://doi.org/10.1007/s00521-019-04159-z

  2. https://www.mathworks.com/matlabcentral/fileexchange/71210-social-ski-driver-ssd-optimization-algorithm-2019

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SSDO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SSDO.OriginalSSDO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Tharwat, A. and Gabel, T., 2020. Parameters optimization of support vector machines for imbalanced data using social ski driver algorithm. Neural Computing and Applications, 32(11), pp.6925-6938.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

generate_empty_agent(solution: Optional[numpy.ndarray] = None) mealpy.utils.agent.Agent[source]

Generate new agent with solution

Parameters

solution (np.ndarray) – The solution

mealpy.human_based.TLO module

class mealpy.human_based.TLO.DevTLO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Teaching Learning-based Optimization (TLO)

Links:
  1. https://doi.org/10.5267/j.ijiec.2012.03.007

Notes

  • Use numpy arrays (np.array) to make operations faster

  • The global best solution is used

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TLO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TLO.DevTLO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Rao, R. and Patel, V., 2012. An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems. international journal of industrial engineering computations, 3(4), pp.535-560.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration
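
The teacher phase of TLO moves every learner toward the teacher (the best solution) relative to the class mean, X_new = X + r * (X_teacher - TF * X_mean), with a teaching factor TF randomly chosen from {1, 2}; the learner phase then lets pairs of learners move toward whichever of the two is better. The teacher-phase equation is sketched below as a worked illustration; it follows the cited paper rather than the exact mealpy code path.

>>> import numpy as np
>>> def teacher_phase(learner, teacher, class_mean, rng=np.random.default_rng()):
>>>     """Classic TLO teacher-phase move with a random teaching factor TF in {1, 2}."""
>>>     tf = rng.integers(1, 3)                      # TF = 1 or 2
>>>     return learner + rng.random(learner.shape) * (teacher - tf * class_mean)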

class mealpy.human_based.TLO.ImprovedTLO(epoch: int = 10000, pop_size: int = 100, n_teachers: int = 5, **kwargs: object)[source]

Bases: mealpy.human_based.TLO.DevTLO

The original version of: Improved Teaching-Learning-based Optimization (ImprovedTLO)

Links:
  1. https://doi.org/10.1016/j.scient.2012.12.005

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • n_teachers (int): [3, 10], number of teachers in class, default=5

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TLO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TLO.ImprovedTLO(epoch=1000, pop_size=50, n_teachers=5)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Rao, R.V. and Patel, V., 2013. An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems. Scientia Iranica, 20(3), pp.710-720.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialization()[source]
class mealpy.human_based.TLO.OriginalTLO(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.human_based.TLO.DevTLO

The original version of: Teaching Learning-based Optimization (TLO)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TLO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TLO.OriginalTLO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Rao, R.V., Savsani, V.J. and Vakharia, D.P., 2011. Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Computer-aided design, 43(3), pp.303-315.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

mealpy.human_based.TOA module

class mealpy.human_based.TOA.OriginalTOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Teamwork Optimization Algorithm (TOA)

Links:
  1. https://www.mdpi.com/1424-8220/21/13/4567

Notes

  1. Algorithm design is similar to Zebra Optimization Algorithm (ZOA), Osprey Optimization Algorithm (OOA), Coati Optimization Algorithm (CoatiOA), Siberian Tiger Optimization (STO), Language Education Optimization (LEO), Serval Optimization Algorithm (SOA), Walrus Optimization Algorithm (WOA), Fennec Fox Optimization (FFO), Three-periods optimization algorithm (TPOA), Pelican Optimization Algorithm (POA), Northern goshawk optimization (NGO), Tasmanian devil optimization (TDO), Archery algorithm (AA), Cat and mouse based optimizer (CMBO)

  2. It may be useful to compare the Matlab code of this algorithm with those of the similar algorithms to ensure its accuracy and completeness.

  3. While this article may share some similarities with previous work by the same authors, it is important to recognize the potential value in exploring different meta-metaphors and concepts to drive innovation and progress in optimization research.

  4. Further investigation may be warranted to verify the benchmark results reported in the papers and ensure their reliability and accuracy.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TOA.OriginalTOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Dehghani, M., & Trojovský, P. (2021). Teamwork optimization algorithm: A new optimization approach for function minimization/maximization. Sensors, 21(13), 4567.

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

get_indexes_better__(pop, idx)[source]

mealpy.human_based.WarSO module

class mealpy.human_based.WarSO.OriginalWarSO(epoch: int = 10000, pop_size: int = 100, rr: float = 0.1, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: War Strategy Optimization (WarSO) algorithm

Links:
  1. https://www.researchgate.net/publication/358806739_War_Strategy_Optimization_Algorithm_A_New_Effective_Metaheuristic_Algorithm_for_Global_Optimization

Hyper-parameters should be fine-tuned within their approximate ranges to get faster convergence toward the global optimum:
  • rr (float): [0.1, 0.9], the probability of switching position updating, default=0.1

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, WarSO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = WarSO.OriginalWarSO(epoch=1000, pop_size=50, rr=0.1)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Ayyarao, Tummala SLV, and Polamarasetty P. Kumar. “Parameter estimation of solar PV models with a new proposed war strategy optimization algorithm.” International Journal of Energy Research (2022).

evolve(epoch)[source]

The main operations (equations) of algorithm. Inherit from Optimizer class

Parameters

epoch (int) – The current iteration

initialize_variables()[source]
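
rr is described above as the probability of switching between the algorithm's two position-updating strategies (the attack and defense moves of the war strategy). Only that gating step is sketched below; the two strategy bodies are placeholders, since their exact equations are not reproduced here, and the function name is an assumption.

>>> import numpy as np
>>> def warso_update(soldier, attack_move, defense_move, rr=0.1, rng=np.random.default_rng()):
>>>     """Hypothetical sketch: rr gates which of the two updating strategies a soldier uses."""
>>>     if rng.random() < rr:
>>>         return attack_move(soldier)     # placeholder for the first strategy
>>>     return defense_move(soldier)        # placeholder for the second strategy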