mealpy.bio_based package

mealpy.bio_based.BBO module

class mealpy.bio_based.BBO.DevBBO(epoch: int = 10000, pop_size: int = 100, p_m: float = 0.01, n_elites: int = 2, **kwargs: object)[source]

Bases: mealpy.bio_based.BBO.OriginalBBO

The developed version: Biogeography-Based Optimization (BBO)

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • p_m (float): (0, 1) -> better [0.01, 0.2], Mutation probability

  • n_elites (int): (2, pop_size/2) -> better [2, 5], number of elites kept for the next generation

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BBO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = BBO.DevBBO(epoch=1000, pop_size=50, p_m=0.01, n_elites=2)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch: int) → None[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

class mealpy.bio_based.BBO.OriginalBBO(epoch: int = 10000, pop_size: int = 100, p_m: float = 0.01, n_elites: int = 2, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Biogeography-Based Optimization (BBO)

Links:
  1. https://ieeexplore.ieee.org/abstract/document/4475427

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • p_m (float): (0, 1) -> better [0.01, 0.2], Mutation probability

  • n_elites (int): (2, pop_size/2) -> better [2, 5], number of elites kept for the next generation

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BBO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = BBO.OriginalBBO(epoch=1000, pop_size=50, p_m=0.01, n_elites=2)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Simon, D., 2008. Biogeography-based optimization. IEEE transactions on evolutionary computation, 12(6), pp.702-713.

evolve(epoch: int) → None[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.bio_based.BBOA module

class mealpy.bio_based.BBOA.OriginalBBOA(epoch: int = 10000, pop_size: int = 100, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Brown-Bear Optimization Algorithm (BBOA)

Links:
  1. https://www.mathworks.com/matlabcentral/fileexchange/125490-brown-bear-optimization-algorithm

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BBOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = BBOA.OriginalBBOA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Prakash, T., Singh, P. P., Singh, V. P., & Singh, S. N. (2023). A Novel Brown-bear Optimization Algorithm for Solving Economic Dispatch Problem. In Advanced Control & Optimization Paradigms for Energy System Operation and Management (pp. 137-164). River Publishers.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.bio_based.BMO module

class mealpy.bio_based.BMO.OriginalBMO(epoch=10000, pop_size=100, pl=5, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The original version: Barnacles Mating Optimizer (BMO)

Links:
  1. https://ieeexplore.ieee.org/document/8441097

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • pl (int): [1, pop_size - 1], barnacle’s threshold

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, BMO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = BMO.OriginalBMO(epoch=1000, pop_size=50, pl = 4)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Sulaiman, M.H., Mustaffa, Z., Saari, M.M. and Daniyal, H., 2020. Barnacles Mating Optimizer: A new bio-inspired algorithm for solving engineering optimization problems. Engineering Applications of Artificial Intelligence, 87, p.103330.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.bio_based.EOA module

class mealpy.bio_based.EOA.OriginalEOA(epoch: int = 10000, pop_size: int = 100, p_c: float = 0.9, p_m: float = 0.01, n_best: int = 2, alpha: float = 0.98, beta: float = 0.9, gama: float = 0.9, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Earthworm Optimisation Algorithm (EOA)

Links:
  1. http://doi.org/10.1504/IJBIC.2015.10004283

  2. https://www.mathworks.com/matlabcentral/fileexchange/53479-earthworm-optimization-algorithm-ewa

Notes

The original version from the MATLAB code above does not work well, even in small dimensions. I changed the updating process, replaced the Cauchy step so that it uses x_mean, and used the global best solution.
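
A minimal, hedged sketch of the idea behind this modified update (a heavy-tailed Cauchy step centred on the population mean x_mean and guided by the global best); it only illustrates the note above and is not the exact update implemented in OriginalEOA.evolve():

>>> import numpy as np
>>>
>>> def cauchy_step_sketch(pos, x_mean, g_best, lb, ub):
>>>     # Heavy-tailed Cauchy jump centred on the population mean, pulled toward the global best.
>>>     step = np.random.standard_cauchy(size=pos.shape)
>>>     new_pos = x_mean + step * (g_best - pos)
>>>     return np.clip(new_pos, lb, ub)   # keep the candidate inside the search bounds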

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • p_c (float): (0, 1) -> better [0.5, 0.95], crossover probability

  • p_m (float): (0, 1) -> better [0.01, 0.2], initial mutation probability

  • n_best (int): (2, pop_size/2) -> better [2, 5], how many of the best earthworms to keep from one generation to the next

  • alpha (float): (0, 1) -> better [0.8, 0.99], similarity factor

  • beta (float): (0, 1) -> better [0.8, 1.0], the initial proportional factor

  • gama (float): (0, 1) -> better [0.8, 0.99], a constant similar to the cooling factor of a simulated-annealing cooling schedule.

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, EOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = EOA.OriginalEOA(epoch=1000, pop_size=50, p_c = 0.9, p_m = 0.01, n_best = 2, alpha = 0.98, beta = 0.9, gama = 0.9)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Wang, G.G., Deb, S. and Coelho, L.D.S., 2018. Earthworm optimisation algorithm: a bio-inspired metaheuristic algorithm for global optimisation problems. International journal of bio-inspired computation, 12(1), pp.1-22.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

initialize_variables()[source]

mealpy.bio_based.IWO module

class mealpy.bio_based.IWO.OriginalIWO(epoch: int = 10000, pop_size: int = 100, seed_min: int = 2, seed_max: int = 10, exponent: int = 2, sigma_start: float = 1.0, sigma_end: float = 0.01, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Invasive Weed Optimization (IWO)

Links:
  1. https://pdfs.semanticscholar.org/734c/66e3757620d3d4016410057ee92f72a9853d.pdf

Notes

It is better to use a normal distribution instead of a uniform distribution, and to update the population by sorting the combined parent and child populations.

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum (a sketch of the resulting standard-deviation schedule follows this list):
  • seed_min (int): [1, 3], Number of Seeds (min)

  • seed_max (int): [4, pop_size/2], Number of Seeds (max)

  • exponent (int): [2, 4], Variance Reduction Exponent

  • sigma_start (float): [0.5, 5.0], The initial value of Standard Deviation

  • sigma_end (float): (0, 0.5), The final value of Standard Deviation
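
The following is a minimal sketch of how exponent, sigma_start, and sigma_end define the standard-deviation schedule from the IWO paper, together with an assumed linear interpolation between seed_min and seed_max based on relative fitness; mealpy's internal code may differ in detail:

>>> def iwo_sigma(epoch, max_epoch, exponent=2, sigma_start=1.0, sigma_end=0.01):
>>>     # Sigma decays from sigma_start to sigma_end; exponent controls how fast it decays.
>>>     frac = (max_epoch - epoch) / max(max_epoch - 1, 1)
>>>     return (frac ** exponent) * (sigma_start - sigma_end) + sigma_end
>>>
>>> def iwo_n_seeds(fit, fit_best, fit_worst, seed_min=2, seed_max=10):
>>>     # Better weeds (lower fitness when minimising) are assumed to spread more seeds.
>>>     if fit_best == fit_worst:
>>>         return seed_max
>>>     ratio = (fit_worst - fit) / (fit_worst - fit_best)
>>>     return int(seed_min + (seed_max - seed_min) * ratio)
>>>
>>> print(iwo_sigma(1, 1000), iwo_sigma(1000, 1000))   # ~1.0 early in the run, 0.01 at the end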

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, IWO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = IWO.OriginalIWO(epoch=1000, pop_size=50, seed_min = 3, seed_max = 9, exponent = 3, sigma_start = 0.6, sigma_end = 0.01)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Mehrabian, A.R. and Lucas, C., 2006. A novel numerical optimization algorithm inspired from weed colonization. Ecological informatics, 1(4), pp.355-366.

evolve(epoch=None)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.bio_based.SBO module

class mealpy.bio_based.SBO.DevSBO(epoch: int = 10000, pop_size: int = 100, alpha: float = 0.94, p_m: float = 0.05, psw: float = 0.02, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Satin Bowerbird Optimizer (SBO)

Links:
  1. https://doi.org/10.1016/j.engappai.2017.01.006

Notes

The original version cannot handle negative fitness values. I removed all third-level loops for faster training, dropped equations (1) and (2) of the paper, and switched to computing the selection probability with a roulette wheel.

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • alpha (float): [0.5, 3.0] -> better [0.5, 2.0], the greatest step size

  • p_m (float): (0, 1.0) -> better [0.01, 0.2], mutation probability

  • psw (float): (0, 1.0) -> better [0.01, 0.1], proportion of space width (z in the paper)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SBO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SBO.DevSBO(epoch=1000, pop_size=50, alpha = 0.9, p_m =0.05, psw = 0.02)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

class mealpy.bio_based.SBO.OriginalSBO(epoch: int = 10000, pop_size: int = 100, alpha: float = 0.94, p_m: float = 0.05, psw: float = 0.02, **kwargs: object)[source]

Bases: mealpy.bio_based.SBO.DevSBO

The original version of: Satin Bowerbird Optimizer (SBO)

Links:
  1. https://doi.org/10.1016/j.engappai.2017.01.006

  2. https://www.mathworks.com/matlabcentral/fileexchange/62009-satin-bowerbird-optimizer-sbo-2017

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • alpha (float): [0.5, 3.0] -> better [0.5, 0.99], the greatest step size

  • p_m (float): (0, 1.0) -> better [0.01, 0.2], mutation probability

  • psw (float): (0, 1.0) -> better [0.01, 0.1], proportion of space width (z in the paper)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SBO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SBO.OriginalSBO(epoch=1000, pop_size=50, alpha = 0.9, p_m=0.05, psw = 0.02)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Moosavi, S.H.S. and Bardsiri, V.K., 2017. Satin bowerbird optimizer: A new optimization algorithm to optimize ANFIS for software development effort estimation. Engineering Applications of Artificial Intelligence, 60, pp.1-15.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

roulette_wheel_selection__(fitness_list: Optional[list] = None) → int[source]

Roulette-wheel selection as used in the original version; this version cannot handle negative fitness values (a basic sketch of the scheme is shown below).

Parameters

fitness_list (list) – Fitness of population

Returns

The index of selected solution

Return type

f (int)
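
A minimal sketch of the fitness-proportionate (roulette-wheel) selection idea behind this helper, assuming non-negative fitness values; as noted above, the simple scheme breaks down when fitness values can be negative, so it is not a drop-in replacement for mealpy's internal method:

>>> import numpy as np
>>>
>>> def roulette_wheel_index(fitness_list):
>>>     # Selection probability is proportional to the (non-negative) fitness value.
>>>     fitness = np.asarray(fitness_list, dtype=float)
>>>     total = fitness.sum()
>>>     if total <= 0:
>>>         return int(np.random.randint(len(fitness)))   # degenerate case: uniform choice
>>>     return int(np.random.choice(len(fitness), p=fitness / total))
>>>
>>> print(roulette_wheel_index([0.1, 0.7, 0.2]))   # index 1 is returned most often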

mealpy.bio_based.SMA module

class mealpy.bio_based.SMA.DevSMA(epoch: int = 10000, pop_size: int = 100, p_t: float = 0.03, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Slime Mould Algorithm (SMA)

Notes

  • Two unique random solutions are selected to create the new solution (rather than creating a new variable).

  • Bounds are checked, and the old position is compared with the new one so that the better of the two is kept.

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum (a sketch of how p_t is used follows this list):
  • p_t (float): (0, 1.0) -> better [0.01, 0.1], probability threshold (z in the paper)
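
A minimal sketch of the role of p_t (z in the paper): with probability p_t an agent is re-initialised uniformly inside the bounds (pure exploration), otherwise the regular slime-mould update (omitted here) is applied. This illustrates the parameter only and is not the actual evolve() code:

>>> import numpy as np
>>>
>>> def sma_agent_step(pos, lb, ub, p_t=0.03):
>>>     if np.random.rand() < p_t:
>>>         return np.random.uniform(lb, ub, size=pos.shape)   # random restart of this agent
>>>     return pos   # placeholder for the full slime-mould update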

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SMA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SMA.DevSMA(epoch=1000, pop_size=50, p_t = 0.03)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

initialize_variables()[source]
class mealpy.bio_based.SMA.OriginalSMA(epoch=10000, pop_size=100, p_t=0.03, **kwargs)[source]

Bases: mealpy.bio_based.SMA.DevSMA

The original version of: Slime Mould Algorithm (SMA)

Links:
  1. https://doi.org/10.1016/j.future.2020.03.055

  2. https://www.researchgate.net/publication/340431861_Slime_mould_algorithm_A_new_method_for_stochastic_optimization

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • p_t (float): (0, 1.0) -> better [0.01, 0.1], probability threshold (z in the paper)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SMA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SMA.OriginalSMA(epoch=1000, pop_size=50, p_t = 0.03)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Li, S., Chen, H., Wang, M., Heidari, A.A. and Mirjalili, S., 2020. Slime mould algorithm: A new method for stochastic optimization. Future Generation Computer Systems, 111, pp.300-323.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.bio_based.SOA module

class mealpy.bio_based.SOA.DevSOA(epoch=10000, pop_size=100, fc=2, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Seagull Optimization Algorithm (SOA)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0950705118305768

Notes

  1. The original version does not work because its operators keep pushing solutions out of bounds.

  2. I added a normally distributed random number in Eq. 14 to make it work.

  3. Besides, I keep the better of the old and new positions and remove the worse one.

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum (a sketch of the linear decay of A follows this list):
  • fc (float): [1.0, 10.0] -> better [1, 5], frequency of employing variable A (A decreases linearly from fc to 0), default = 2
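
A minimal sketch of that linear decay: the control variable A starts at fc and reaches 0 at the final epoch, which is how fc enters the update equations:

>>> def soa_control_A(epoch, max_epoch, fc=2.0):
>>>     # A decreases linearly from fc (start of the run) down to 0 (end of the run).
>>>     return fc - epoch * (fc / max_epoch)
>>>
>>> print(soa_control_A(0, 1000), soa_control_A(1000, 1000))   # 2.0 ... 0.0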

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SOA.DevSOA(epoch=1000, pop_size=50, fc = 2)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

class mealpy.bio_based.SOA.OriginalSOA(epoch=10000, pop_size=100, fc=2, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The original version: Seagull Optimization Algorithm (SOA)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0950705118305768

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • fc (float): [1.0, 10.0] -> better [1, 5], frequency of employing variable A (A decreases linearly from fc to 0), default = 2

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SOA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SOA.OriginalSOA(epoch=1000, pop_size=50, fc = 2)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Dhiman, G., & Kumar, V. (2019). Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowledge-based systems, 165, 169-196.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.bio_based.SOS module

class mealpy.bio_based.SOS.OriginalSOS(epoch=10000, pop_size=100, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The original version: Symbiotic Organisms Search (SOS)

Links:
  1. https://doi.org/10.1016/j.compstruc.2014.03.007

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, SOS
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = SOS.OriginalSOS(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Cheng, M. Y., & Prayogo, D. (2014). Symbiotic organisms search: a new metaheuristic optimization algorithm. Computers & Structures, 139, 98-112.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.bio_based.TPO module

class mealpy.bio_based.TPO.DevTPO(epoch: int = 10000, pop_size: int = 100, alpha: float = 0.3, beta: float = 50.0, theta: float = 0.9, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Tree Physiology Optimization (TPO)

Links:
  1. https://www.mathworks.com/matlabcentral/fileexchange/63982-tree-physiology-optimization-tpo-algorithm-for-stochastic-test-function-optimization

Notes

  1. The paper is difficult to read and understand, and the provided MATLAB code is equally hard to follow.

  2. Based on my idea:
    • pop_size = number of branches; the population size should equal the number of branches.

    • The number of leaves is calculated as int(sqrt(pop_size) + 1), so the n_leafs parameter does not need to be specified, which also reduces computation time.

    • When using this algorithm, be careful with the stopping condition: prefer a function-evaluation (FE) based termination criterion (see the sketch below).
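
A minimal sketch of such an FE-based stopping condition, assuming mealpy's termination dictionary accepts a "max_fe" criterion (as in recent releases); adjust to the installed version if needed:

>>> import numpy as np
>>> from mealpy import FloatVar, TPO
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": lambda solution: np.sum(solution**2),
>>> }
>>> model = TPO.DevTPO(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict, termination={"max_fe": 30000})   # stop after ~30,000 evaluations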

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • alpha (float): [-10, 10.] -> better [0.2, 0.5], Absorption constant for tree root elongation, default = 0.3

  • beta (float): [-100, 100.] -> better [10, 50], Diversification factor of tree shoot, default = 50.

  • theta (float): (0, 1.0] -> better [0.5, 0.9], Factor to reduce randomization as iterations increase (applied as a power law), default = 0.9

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TPO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TPO.DevTPO(epoch=1000, pop_size=50, alpha = 0.3, beta = 50., theta = 0.9)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Halim, A. H., & Ismail, I. (2017). Tree physiology optimization in benchmark function and traveling salesman problem. Journal of Intelligent Systems, 28(5), 849-871.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

initialization()[source]
initialize_variables()[source]

The idea is that a tree has pop_size branches (n_branches), and each branch has several leaves.

mealpy.bio_based.TSA module

class mealpy.bio_based.TSA.OriginalTSA(epoch=10000, pop_size=100, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The original version: Tunicate Swarm Algorithm (TSA)

Links:
  1. https://www.sciencedirect.com/science/article/abs/pii/S0952197620300385?via%3Dihub

  2. https://www.mathworks.com/matlabcentral/fileexchange/75182-tunicate-swarm-algorithm-tsa

Notes

  1. This algorithm has some limitations

  2. The paper contains several incorrect equations in the algorithm.

  3. The MATLAB implementation differs from the paper in some places.

  4. This algorithm shares some similarities with the Barnacles Mating Optimizer (BMO)

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, TSA
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = TSA.OriginalTSA(epoch=1000, pop_size=50)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Kaur, S., Awasthi, L. K., Sangal, A. L., & Dhiman, G. (2020). Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Engineering Applications of Artificial Intelligence, 90, 103541.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.bio_based.VCS module

class mealpy.bio_based.VCS.DevVCS(epoch: int = 10000, pop_size: int = 100, lamda: float = 0.5, sigma: float = 1.5, **kwargs: object)[source]

Bases: mealpy.optimizer.Optimizer

The developed version: Virus Colony Search (VCS)

Links:
  1. https://doi.org/10.1016/j.advengsoft.2015.11.004

Notes

  • In the immune-response process, the whole position is updated instead of updating each variable of the position individually.

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • lamda (float): (0, 1.0) -> better [0.2, 0.5], fraction of the best solutions to keep, default = 0.5

  • sigma (float): (0, 5.0) -> better [0.1, 2.0], Weight factor

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, VCS
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = VCS.DevVCS(epoch=1000, pop_size=50, lamda = 0.5, sigma = 0.3)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")
calculate_xmean__(pop)[source]

Calculate the mean position of a list of solutions (the population); a short sketch is shown below.

Parameters

pop (list) – List of solutions (population)

Returns

Mean position

Return type

list
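
For reference, the mean position is just the element-wise average of the agents' solution vectors. A hedged one-liner, assuming the optimizer exposes its population as model.pop and each agent exposes a .solution array (as in the examples above):

>>> import numpy as np
>>> x_mean = np.mean(np.array([agent.solution for agent in model.pop]), axis=0)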

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

class mealpy.bio_based.VCS.OriginalVCS(epoch: int = 10000, pop_size: int = 100, lamda: float = 0.5, sigma: float = 1.5, **kwargs: object)[source]

Bases: mealpy.bio_based.VCS.DevVCS

The original version of: Virus Colony Search (VCS)

Links:
  1. https://doi.org/10.1016/j.advengsoft.2015.11.004

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • lamda (float): (0, 1.0) -> better [0.2, 0.5], fraction of the best solutions to keep, default = 0.5

  • sigma (float): (0, 5.0) -> better [0.1, 2.0], Weight factor

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, VCS
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = VCS.OriginalVCS(epoch=1000, pop_size=50, lamda = 0.5, sigma = 0.3)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Li, M.D., Zhao, H., Weng, X.W. and Han, T., 2016. A novel nature-inspired algorithm for optimization: Virus colony search. Advances in Engineering Software, 92, pp.65-88.

amend_solution(solution: numpy.ndarray) → numpy.ndarray[source]

This function follows the optimizer's strategy and can be overridden in each optimizer; a sketch of one possible strategy is shown below.

Parameters

solution – The position

Returns

The valid solution based on optimizer’s strategy
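
A minimal sketch of one possible amend strategy (uniform re-sampling of out-of-bound components). OriginalVCS may apply a different correction rule, so treat this as an illustration of the hook rather than its actual implementation:

>>> import numpy as np
>>>
>>> def amend_by_resampling(solution, lb, ub):
>>>     inside = np.logical_and(lb <= solution, solution <= ub)
>>>     # Keep in-bound components; redraw out-of-bound ones uniformly inside the box.
>>>     return np.where(inside, solution, np.random.uniform(lb, ub, size=solution.shape))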

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration

mealpy.bio_based.WHO module

class mealpy.bio_based.WHO.OriginalWHO(epoch=10000, pop_size=100, n_explore_step=3, n_exploit_step=3, eta=0.15, p_hi=0.9, local_alpha=0.9, local_beta=0.3, global_alpha=0.2, global_beta=0.8, delta_w=2.0, delta_c=2.0, **kwargs)[source]

Bases: mealpy.optimizer.Optimizer

The original version of: Wildebeest Herd Optimization (WHO)

Links:
  1. https://doi.org/10.3233/JIFS-190495

Hyper-parameters should be fine-tuned within their approximate ranges for faster convergence toward the global optimum:
  • n_explore_step (int): [2, 10] -> better [2, 4], number of exploration steps

  • n_exploit_step (int): [2, 10] -> better [2, 4], number of exploitation steps

  • eta (float): (0, 1.0) -> better [0.05, 0.5], learning rate

  • p_hi (float): (0, 1.0) -> better [0.7, 0.95], the probability that a wildebeest moves to another position based on herd instinct

  • local_alpha (float): (0, 3.0) -> better [0.5, 0.9], control local movement (alpha 1)

  • local_beta (float): (0, 3.0) -> better [0.1, 0.5], control local movement (beta 1)

  • global_alpha (float): (0, 3.0) -> better [0.1, 0.5], control global movement (alpha 2)

  • global_beta (float): (0, 3.0), control global movement (beta 2)

  • delta_w (float): (0.5, 5.0) -> better [1.0, 2.0], distance to the worst solution

  • delta_c (float): (0.5, 5.0) -> better [1.0, 2.0], distance to the best solution

Examples

>>> import numpy as np
>>> from mealpy import FloatVar, WHO
>>>
>>> def objective_function(solution):
>>>     return np.sum(solution**2)
>>>
>>> problem_dict = {
>>>     "bounds": FloatVar(n_vars=30, lb=(-10.,) * 30, ub=(10.,) * 30, name="delta"),
>>>     "minmax": "min",
>>>     "obj_func": objective_function
>>> }
>>>
>>> model = WHO.OriginalWHO(epoch=1000, pop_size=50, n_explore_step = 3, n_exploit_step = 3, eta = 0.15, p_hi = 0.9,
>>>                         local_alpha=0.9, local_beta=0.3, global_alpha=0.2, global_beta=0.8, delta_w=2.0, delta_c=2.0)
>>> g_best = model.solve(problem_dict)
>>> print(f"Solution: {g_best.solution}, Fitness: {g_best.target.fitness}")
>>> print(f"Solution: {model.g_best.solution}, Fitness: {model.g_best.target.fitness}")

References

[1] Amali, D. and Dinakaran, M., 2019. Wildebeest herd optimization: a new global optimization algorithm inspired by wildebeest herding behaviour. Journal of Intelligent & Fuzzy Systems, 37(6), pp.8063-8076.

evolve(epoch)[source]

The main operations (equations) of the algorithm. Inherited from the Optimizer class.

Parameters

epoch (int) – The current iteration