Advanced Guide

Stopping Condition (Termination)

In meta-heuristic algorithms, the optimization process iteratively generates and evolves a population of candidate solutions (individuals). Each generation consists of evaluating the fitness of each individual, selecting the best individuals for reproduction, and applying the algorithm's operators to produce a new population. By setting a maximum number of generations as a stopping condition, the algorithm terminates after that many iterations even if a satisfactory solution has not been found. This is useful to prevent the algorithm from running indefinitely, especially when there is no clear convergence criterion or the fitness landscape is complex and difficult to navigate.

However, it is important to note that the choice of the maximum number of generations should be based on the specific problem being solved, as well as the computational resources available. Too few generations may not allow the algorithm to converge to a satisfactory solution, while too many may result in unnecessary computational expense.

When creating an optimizer, the default stopping condition (termination) is based on epochs (generations, iterations).

However, there are other stopping conditions that you can use by creating a termination dictionary. You can also use multiple stopping criteria together to better control the run. There are four termination types in the Termination class:

  • MG: Maximum Generations / Epochs / Iterations

  • FE: Maximum Number of Function Evaluations

  • TB: Time Bound - If you want your algorithm to run for a fixed amount of time (e.g., K seconds), especially when comparing different algorithms.

  • ES: Early Stopping - Similar to the idea in training neural networks (stop the program if the global best solution has not improved by epsilon after K epochs).

  • Parameters of the Termination class (set a parameter to None if you do not want to use it):
    • max_epoch (int): The maximum number of generations for the MG type.

    • max_fe (int): The maximum number of function evaluations for the FE type.

    • max_time (float): The maximum amount of time, in seconds, for the TB type.

    • max_early_stop (int): The maximum number of epochs without improvement for the ES type.

    • epsilon (float): (Optional) The minimum improvement threshold for the ES type (default value: 1e-10).

    • termination (dict): (Optional) A dictionary of termination criteria.

1. MG (Maximum Generations / Epochs): This is the default in all algorithms

term_dict = {  # When creating this object, it will override the default epoch you define in your model
   "max_epoch": 1000  # 1000 epochs
}

2. FE (Maximum Number of Function Evaluations)

term_dict = {
   "max_fe": 100000    # 100000 number of function evaluation
}

3. TB (Time Bound): If you want your algorithm to run for a fixed amount of time (e.g., K seconds), especially when comparing different algorithms.

term_dict = {
   "max_time": 60  # 60 seconds to run this algorithm only
}

4. ES (Early Stopping): Similar to the idea in training neural networks (stop the program if the global best solution has not improved by epsilon after K epochs).

term_dict = {
   "max_early_stop": 30  # after 30 epochs, if the global best doesn't improve then we stop the program
}

You can also set multiple stopping criteria together; the first criterion to be met will stop the algorithm.

# Use max epochs and max function evaluations together
term_dict = {
   "max_epoch": 1000,
   "max_fe": 60000
}

# Use max function evaluations and time bound together
term_dict = {
   "max_fe": 60000,
   "max_time": 40
}

# Use max function evaluations and early stopping together
term_dict = {
   "max_fe": 55000,
   "max_early_stop": 15
}

# Use max epochs, max FE and early stopping together
term_dict = {
   "max_epoch": 1200,
   "max_fe": 55000,
   "max_early_stop": 25
}

# Use all available stopping conditions together
term_dict = {
   "max_epoch": 1100,
   "max_fe": 80000,
   "max_time": 10.5,
   "max_early_stop": 25
}

After importing and creating a termination dictionary (or a Termination object) and an optimizer object, you can pass the termination to the solve() function:

model3 = SMA.OriginalSMA(epoch=100, pop_size=50, p_t=0.03)
model3.solve(problem_dict1, termination=term_dict)
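
Alternatively, you can create a Termination object yourself and pass it in the same way; a minimal sketch, assuming the Termination constructor accepts the same keywords as the dictionary above:

from mealpy import Termination

term = Termination(max_epoch=1000, max_fe=80000)   # Assumed keywords, matching the dictionary keys above
model3.solve(problem_dict1, termination=term)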

Multi-objective Optimization

We currently offer a “weighting method” to solve multi-objective optimization problems. All you need to do is define your objective function, which returns a list of objective values, and set the objective weights corresponding to each value.

  • obj_func: Your objective function.

  • bounds: A variable-type instance (e.g., FloatVar) or a list of such instances describing the search space.

  • minmax: Indicates whether the problem is a minimization or maximization problem. The value can be “min” or “max”.

  • obj_weights: Optional list of weights for all of your objectives. The default is [1, 1, …, 1].

  • Declare problem dictionary with “obj_weights”:

import numpy as np
from mealpy import PSO, FloatVar, Problem

## This is how you design multi-objective function
#### Link: https://en.wikipedia.org/wiki/Test_functions_for_optimization
def objective_multi(solution):
    def booth(x, y):
        return (x + 2*y - 7)**2 + (2*x + y - 5)**2
    def bukin(x, y):
        return 100 * np.sqrt(np.abs(y - 0.01 * x**2)) + 0.01 * np.abs(x + 10)
    def matyas(x, y):
        return 0.26 * (x**2 + y**2) - 0.48 * x * y
    return [booth(solution[0], solution[1]), bukin(solution[0], solution[1]), matyas(solution[0], solution[1])]

## Design a problem dictionary for multiple objective functions above
problem_multi = {
    "obj_func": objective_multi,
    "bounds": FloatVar(lb=[-10, -10], ub=[10, 10]),
    "minmax": "min",
    "obj_weights": [0.4, 0.1, 0.5]               # Define it or default value will be [1, 1, 1]
}

## Define the model and solve the problem
model = PSO.OriginalPSO(epoch=1000, pop_size=50)
model.solve(problem=problem_multi)
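
With the weights above, the single fitness value that the optimizer minimizes is the weighted sum of the three objectives:

    fitness = 0.4 * booth(x, y) + 0.1 * bukin(x, y) + 0.5 * matyas(x, y)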
  • Declare a custom Problem class:

import numpy as np
from mealpy import PSO, FloatVar, Problem

## Define a custom child class of Problem class.
class MOP(Problem):
    def __init__(self, bounds=None, minmax="min", name="MOP", **kwargs):
        self.name = name
        super().__init__(bounds, minmax, **kwargs)

    def booth(self, x, y):
        return (x + 2*y - 7)**2 + (2*x + y - 5)**2

    def bukin(self, x, y):
        return 100 * np.sqrt(np.abs(y - 0.01 * x**2)) + 0.01 * np.abs(x + 10)

    def matyas(self, x, y):
        return 0.26 * (x**2 + y**2) - 0.48 * x * y

    def obj_func(self, solution):
        return [self.booth(solution[0], solution[1]), self.bukin(solution[0], solution[1]), self.matyas(solution[0], solution[1])]

## Create an instance of MOP class
problem_multi = MOP(bounds=FloatVar(lb=[-10, ] * 2, ub=[10, ] * 2), minmax="min", obj_weights=[0.4, 0.1, 0.5])

## Define the model and solve the problem
model = PSO.OriginalPSO(epoch=1000, pop_size=50)
model.solve(problem=problem_multi)

Constraint Optimization

For this problem, we recommend that the user define a punishment (penalty) function. The more a constraint is violated, the larger the penalty becomes. As a result, the fitness increases and the solution has a lower chance of being selected during the updating process.
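
In general form, the penalized fitness can be written as the original objective plus weighted constraint violations, for constraints of the form g_i(x) <= 0:

    fitness(x) = f(x) + sum_i( w_i * max(0, g_i(x)) )

where a larger weight w_i penalizes violations of the corresponding constraint more heavily.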

  • Declare problem dictionary:

import numpy as np
from mealpy import PSO, FloatVar

## This is how you design Constrained Benchmark Function (G01)
#### Link: https://onlinelibrary.wiley.com/doi/pdf/10.1002/9781119136507.app2
def objective_function(solution):
    def g1(x):
        return 2 * x[0] + 2 * x[1] + x[9] + x[10] - 10
    def g2(x):
        return 2 * x[0] + 2 * x[2] + x[9] + x[10] - 10
    def g3(x):
        return 2 * x[1] + 2 * x[2] + x[10] + x[11] - 10
    def g4(x):
        return -8 * x[0] + x[9]
    def g5(x):
        return -8 * x[1] + x[10]
    def g6(x):
        return -8 * x[2] + x[11]
    def g7(x):
        return -2 * x[3] - x[4] + x[9]
    def g8(x):
        return -2 * x[5] - x[6] + x[10]
    def g9(x):
        return -2 * x[7] - x[8] + x[11]

    def violate(value):
        return 0 if value <= 0 else value

    fx = 5 * np.sum(solution[:4]) - 5 * np.sum(solution[:4] ** 2) - np.sum(solution[4:13])

    ## Increase the punishment for g1 and g4 to boost the algorithm (You can choose any constraints instead of g1 and g4)
    fx += violate(g1(solution)) ** 2 + violate(g2(solution)) + violate(g3(solution)) + \
        2 * violate(g4(solution)) + violate(g5(solution)) + violate(g6(solution)) + \
        violate(g7(solution)) + violate(g8(solution)) + violate(g9(solution))
    return fx

## Design a problem dictionary for constrained objective function above
problem_constrained = {
  "obj_func": objective_function,
  "bounds": FloatVar(lb=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ub=[1, 1, 1, 1, 1, 1, 1, 1, 1, 100, 100, 100, 1]),
  "minmax": "min",
}

## Define the model and solve the problem
model = PSO.OriginalPSO(epoch=1000, pop_size=50)
model.solve(problem=problem_constrained)
  • Or declare a custom Problem class:

import numpy as np
from mealpy import PSO, FloatVar
from mealpy.utils.problem import Problem


## Define a custom child class of Problem class.
class COP(Problem):
    def __init__(self, bounds=None, minmax="min", name="COP", **kwargs):
        self.name = name
        super().__init__(bounds, minmax, **kwargs)

    def g1(self, x):
        return 2 * x[0] + 2 * x[1] + x[9] + x[10] - 10

    def g2(self, x):
        return 2 * x[0] + 2 * x[2] + x[9] + x[10] - 10

    def g3(self, x):
        return 2 * x[1] + 2 * x[2] + x[10] + x[11] - 10

    def g4(self, x):
        return -8 * x[0] + x[9]

    def g5(self, x):
        return -8 * x[1] + x[10]

    def g6(self, x):
        return -8 * x[2] + x[11]

    def g7(self, x):
        return -2 * x[3] - x[4] + x[9]

    def g8(self, x):
        return -2 * x[5] - x[6] + x[10]

    def g9(self, x):
        return -2 * x[7] - x[8] + x[11]

    def violate(self, value):
        return 0 if value <= 0 else value

    def obj_func(self, solution):
        ## This is how you design Constrained Benchmark Function (G01)
        #### Link: https://onlinelibrary.wiley.com/doi/pdf/10.1002/9781119136507.app2
        fx = 5 * np.sum(solution[:4]) - 5 * np.sum(solution[:4] ** 2) - np.sum(solution[4:13])

        ## Increase the punishment for g1 and g4 to boost the algorithm (You can choose any constraints instead of g1 and g4)
        fx += self.violate(self.g1(solution)) ** 2 + self.violate(self.g2(solution)) + self.violate(self.g3(solution)) + \
            2 * self.violate(self.g4(solution)) + self.violate(self.g5(solution)) + self.violate(self.g6(solution)) + \
            self.violate(self.g7(solution)) + self.violate(self.g8(solution)) + self.violate(self.g9(solution))
        return fx

## Create an instance of COP class
bounds = FloatVar(lb=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ub=[1, 1, 1, 1, 1, 1, 1, 1, 1, 100, 100, 100, 1])
problem_cop = COP(bounds=bounds, minmax="min")

## Define the model and solve the problem
model = PSO.OriginalPSO(epoch=1000, pop_size=50)
model.solve(problem=problem_cop)
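
As a quick sanity check (assuming the constraint helpers g1-g9 are kept as methods of the class, as in the sketch above), you can verify which constraints the best solution satisfies; a non-positive value means the constraint g_i(x) <= 0 holds:

## Check constraint satisfaction of the best solution found (True means satisfied)
x = model.g_best.solution
constraints = [problem_cop.g1, problem_cop.g2, problem_cop.g3, problem_cop.g4, problem_cop.g5,
               problem_cop.g6, problem_cop.g7, problem_cop.g8, problem_cop.g9]
print([g(x) <= 0 for g in constraints])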

Discrete Optimization

For this type of problem, we recommend creating a custom child class of the Problem class and overriding the necessary functions. Up to three functions may need to be overridden:

  • obj_func: the fitness function

  • generate_position: a function that generates a candidate solution

  • amend_position: a function that brings a solution back within the boundary

Let’s say we want to solve the Travelling Salesman Problem (TSP). In the example below, only obj_func is overridden, because PermutationVar already ensures that candidate solutions are valid permutations.

import numpy as np
from mealpy import PermutationVar, PSO, Problem       ## For the Travelling Salesman Problem, the solution should be a permutation


class DOP(Problem):
    def __init__(self, bounds, minmax, name="DOP", CITY_POSITIONS=None, **kwargs):
        self.name = name
        self.CITY_POSITIONS = CITY_POSITIONS
        super().__init__(bounds, minmax, **kwargs)

    def obj_func(self, solution):
        ## The objective for this problem is the total distance between all cities the salesman has visited
        ## This can be changed depending on your requirements
        x = self.decode_solution(solution)["per"]
        city_coord = self.CITY_POSITIONS[x]
        line_x = city_coord[:, 0]
        line_y = city_coord[:, 1]
        total_distance = np.sum(np.sqrt(np.square(np.diff(line_x)) + np.square(np.diff(line_y))))
        return total_distance


## Create an instance of DOP class
## CITY_POSITIONS is needed by obj_func; here we generate random 2D coordinates for 13 cities just as an example
city_positions = np.random.rand(13, 2) * 100
problem_tsp = DOP(bounds=PermutationVar(valid_set=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], name="per"),
                  CITY_POSITIONS=city_positions, minmax="min", log_to="file", log_file="dop-results.txt")

## Define the model and solve the problem
model = PSO.OriginalPSO(epoch=1000, pop_size=50)
model.solve(problem=problem_tsp)
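
After solving, you can decode the best agent back into a tour (a permutation of city indices) in the same way obj_func does; a short sketch reusing the attribute names from the examples above:

## Decode the best agent back into a permutation of city indices
best_tour = problem_tsp.decode_solution(model.g_best.solution)["per"]
print(f"Best tour: {best_tour}, Total distance: {model.g_best.target.fitness}")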

Log Training Process

We currently offer three logging options: printing the training process on the console, logging it to a file, or not displaying or saving the log at all.

  • By default, if the “log_to” keyword is not declared in the problem dictionary, it will be logged to the console.

problem_dict1 = {
   "obj_func": F5,
   "bounds": FloatVar(lb=[-3, -5, 1, -10, ], ub=[5, 10, 100, 30, ]),
   "minmax": "min",
   # Default = "console"
}

problem_dict1 = {
   "obj_func": F5,
   "bounds": FloatVar(lb=[-3, -5, 1, -10, ], ub=[5, 10, 100, 30, ]),
   "minmax": "min",
   "log_to": "console",
}
  • If you want to log to a file, you need the additional keyword “log_file”:

problem_dict2 = {
   "obj_func": F5,
   "bounds": FloatVar(lb=[-3, -5, 1, -10, ], ub=[5, 10, 100, 30, ]),
   "minmax": "min",
   "log_to": "file",
   "log_file": "result.log",         # Default value = "mealpy.log"
}
  • Set “log_to” to None if you don’t want any logging:

problem_dict3 = {
   "obj_func": F5,
   "bounds": FloatVar(lb=[-3, -5, 1, -10, ], ub=[5, 10, 100, 30, ]),
   "minmax": "min",
   "log_to": None,
}

Custom Problem

For complex problems, we recommend that the user define a custom child class of the Problem class instead of defining the problem dictionary. For instance, when training a neural network, the dataset needs to be passed to the fitness function. Defining a child class allows for passing any additional data that may be needed.

from mealpy import PSO, FloatVar, Problem

class NeuralNetwork(Problem):
    def __init__(self, bounds=None, minmax="min", name="NeuralNetwork", dataset=None, additional=None, **kwargs):
        self.name = name
        self.dataset = dataset
        self.additional = additional
        super().__init__(bounds, minmax, **kwargs)

    def obj_func(self, solution):
        ## NET is a placeholder for your own network class; decode `solution` into the
        ## network's weights/hyper-parameters before evaluating its loss
        network = NET(self.dataset, self.additional)
        obj = network.loss
        return obj

## Create an instance of the NeuralNetwork class
## `dataset` and `additional` are your own data objects
problem_nn = NeuralNetwork(bounds=FloatVar(lb=[-3, -5, 1, -10, ], ub=[5, 10, 100, 30, ]), name="Network",
                           dataset=dataset, additional=additional, minmax="min")

## Define the model and solve the problem
model = PSO.OriginalPSO(epoch=1000, pop_size=50)
model.solve(problem=problem_nn)

Model’s Parameters

1. Hint Validation for setting up the hyper-parameters:

If you are unsure how to set up a parameter for the optimizer, you can try setting it to any value. The optimizer will then provide a “hint validation” that can help you determine how to set valid parameters.

model = PSO.OriginalPSO(epoch="hello", pop_size="world")
model.solve(problem)

# $ 2022/03/22 08:59:16 AM, ERROR, mealpy.utils.validator.Validator [line: 31]: 'epoch' is an integer and value should be in range: [1, 100000].

model = PSO.OriginalPSO(epoch=10, pop_size="world")
model.solve(problem)

# $ 2022/03/22 09:01:51 AM, ERROR, mealpy.utils.validator.Validator [line: 31]: 'pop_size' is an integer and value should be in range: [10, 10000].

2. Set up model’s parameters as a dictionary:

from mealpy import DE, FloatVar

problem = {
   "obj_func": F5,
   "bounds": FloatVar(lb=[-10,]*10, ub=[30,]*10),
   "minmax": "min",
}

paras_de = {
   "epoch": 20,
   "pop_size": 50,
   "wf": 0.7,
   "cr": 0.9,
   "strategy": 0,
}

model = DE.OriginalDE(**paras_de)
model.solve(problem)

This will definitely be helpful when using ParameterGrid/GridSearchCV from the scikit-learn library to tune the hyper-parameters of the models. For example:

from sklearn.model_selection import ParameterGrid
from mealpy import DE, FloatVar

problem = {
        "obj_func": F5,
        "bounds": FloatVar(lb=[-10,]*10, ub=[30,]*10),
        "minmax": "min",
}

paras_de_grid = {
        "epoch": [100, 200, 300, 500, 1000],
        "pop_size": [50, 100],
        "wf": [0.5, 0.6, 0.7, 0.8, 0.9],
        "cr": [0.6, 0.7, 0.8, 0.9],
        "strategy": [0, 1, 2, 3, 4],
}

for paras_de in list(ParameterGrid(paras_de_grid)):
        model = DE.OriginalDE(**paras_de)
        model.solve(problem)
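
If you also want to keep track of which parameter set performed best, you can record the fitness of each run; a minimal sketch using the attributes shown elsewhere in this guide:

best_fit, best_paras = float("inf"), None
for paras_de in list(ParameterGrid(paras_de_grid)):
    model = DE.OriginalDE(**paras_de)
    g_best = model.solve(problem)
    if g_best.target.fitness < best_fit:    # "min" problem, so smaller is better
        best_fit, best_paras = g_best.target.fitness, paras_de
print(f"Best parameters: {best_paras}, Best fitness: {best_fit}")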

3. Get the parameters of the model

Using the method below will return the model’s parameters as a Python dictionary. If you want to convert it to a string, we recommend using the built-in Python method: str().

model.get_parameters()          # Return dictionary

str(model.get_parameters())     # Return a string

Set Up Model’s/Problem’s Name

You do not necessarily need to set names for the optimizer and the problem, but doing so can help in saving results with the names of the model and the problem, especially in multitask problems.

1. Name the problem:

from mealpy.swarm_based import PSO

problem = {
   "obj_func": F5,
   "bounds": FloatVar(lb=[-3, -5, 1, -10, ], ub=[5, 10, 100, 30, ]),
   "minmax": "min",
   "name": "Benchmark Function 5th"
}

2. Name the optimizer model:

model = PSO.OriginalPSO(epoch=10, pop_size=50, name="Normal PSO")
model.solve(problem=problem)

3. Get the name of problem and model

print(model.name)            # Normal PSO
print(model.problem.name)    # Benchmark Function 5th

Agent’s History (Trajectory)

WARNING: Saving the trajectory may cause memory issues:

The history of the population is not saved by default, but you can enable this feature by setting the “save_population” keyword to True in the Problem definition. Keep in mind that enabling this option may cause memory issues if your problem is too large, as it saves the history of the population in each generation. However, if your problem is small enough, you can turn it on and visualize the trajectory chart of search agents.

problem_dict1 = {
   "obj_func": F5,
   "bounds": FloatVar(lb=[-3, -5, 1, -10, ], ub=[5, 10, 100, 30, ]),
   "minmax": "min",
   "log_to": "console",
   "save_population": True,              # Default = False
}
You can access the history of agents/population in the model.history object, which contains the following variables:
  • list_global_best: List of the global best SOLUTION found so far over all generations

  • list_current_best: List of the current best SOLUTION in each generation

  • list_global_worst: List of the global worst SOLUTION found so far over all generations

  • list_current_worst: List of the current worst SOLUTION in each generation

  • list_epoch_time: List of the runtime for each generation

  • list_global_best_fit: List of the global best FITNESS found so far over all generations

  • list_current_best_fit: List of the current best FITNESS in each generation

  • list_diversity: List of the DIVERSITY of the swarm in all generations

  • list_exploitation: List of the EXPLOITATION percentages for all generations

  • list_exploration: List of the EXPLORATION percentages for all generations

  • list_population: List of the POPULATION in each generation

Note: The last variable, ‘list_population’, is the one that can cause the “memory” error described above. It is recommended to set the ‘save_population’ parameter to False (which is also the default) in the input problem dictionary if you do not plan to use it.

import numpy as np
from mealpy import PSO, FloatVar

def objective_function(solution):
    return np.sum(solution**2)

problem_dict = {
    "obj_func": objective_function,
    "bounds": FloatVar(lb=[-3, -5, 1, -10, ], ub=[5, 10, 100, 30, ]),
    "minmax": "min",
    "log_to": "console",
    "save_population": False        # Then you can't draw the trajectory chart
}
model = PSO.OriginalPSO(epoch=1000, pop_size=50)
model.solve(problem=problem_dict)

print(model.history.list_global_best)
print(model.history.list_current_best)
print(model.history.list_global_worst)
print(model.history.list_current_worst)
print(model.history.list_epoch_time)
print(model.history.list_global_best_fit)
print(model.history.list_current_best_fit)
print(model.history.list_diversity)
print(model.history.list_exploitation)
print(model.history.list_exploration)
print(model.history.list_population)

## Remember if you set "save_population" to False, then there is no variable: list_population
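
If you want a quick custom plot, the history lists can be fed straight into matplotlib; a minimal sketch (assumes matplotlib is installed, and only fitness values are plotted, so "save_population" does not matter here):

import matplotlib.pyplot as plt

plt.plot(model.history.list_global_best_fit, label="Global best")
plt.plot(model.history.list_current_best_fit, label="Current best")
plt.xlabel("Epoch")
plt.ylabel("Fitness")
plt.legend()
plt.savefig("fitness_history.png")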

Saving and Loading Model

Based on the tutorials above, we know that we can save the population after each epoch in the model by setting “save_population” to True in the problem dictionary.

problem_dict1 = {
  "obj_func": F5,
  "bounds": FloatVar(lb=[-3, -5, 1, -10, ], ub=[5, 10, 100, 30, ]),
  "minmax": "min",
  "log_to": "console",
  "save_population": True,              # Default = False
}

However, as a warning, if your problem is too big, setting “save_population” to True can cause memory issues when running the model. It is also important to note that “save_population” here means storing the population of each epoch in the model’s history object, and not saving the model to a file. To save and load the optimizer from a file, you will need to use the “io” module from “mealpy.utils”.

import numpy as np
from mealpy import GA, FloatVar, Problem
from mealpy.utils import io

def objective_function(solution):
    return np.sum(solution**2)

problem = {
    "obj_func": objective_function,
    "bounds": FloatVar(lb=[-100, ] * 50, ub=[100, ] * 50),
    "minmax": "min",
}

## Run the algorithm
model = GA.BaseGA(epoch=100, pop_size=50)
g_best = model.solve(problem)
print(f"Best solution: {g_best.solution}, Best fitness: {g_best.target.fitness}")

## Save model to file
io.save_model(model, "results/model.pkl")

## Load the model from file
optimizer = io.load_model("results/model.pkl")
print(f"Best solution: {optimizer.g_best.solution}, Best fitness: {optimizer.g_best.target.fitness}")

Starting Solutions

We do not recommend using this utility, but in case you need it:

from mealpy import TLO, FloatVar
import numpy as np


def frequency_modulated(pos):
        # range: [-6.4, 6.35], f(X*) = 0, phi = 2pi / 100
        phi = 2 * np.pi / 100
        result = 0
        for t in range(0, 101):
                y_t = pos[0] * np.sin(pos[3] * t * phi + pos[1]*np.sin(pos[4] * t * phi + pos[2] * np.sin(pos[5] * t * phi)))
                y_t0 = 1.0 * np.sin(5.0 * t * phi - 1.5 * np.sin(4.8 * t * phi + 2.0 * np.sin(4.9 * t * phi)))
                result += (y_t - y_t0)**2
        return result

fm_problem = {
        "obj_func": frequency_modulated,
        "bounds": FloatVar(lb=[-6.4, ] * 6, ub=[6.35, ] * 6),
        "minmax": "min",
        "log_to": "console",
}

## This is an example of how starting positions could be created
## Write your own function; remember that the starting positions must be a list of N position vectors or a 2D matrix of position vectors
def create_starting_solutions(n_dims=None, pop_size=None, num=1):
        return np.ones((pop_size, n_dims)) * num + np.random.uniform(-1, 1)

## Define the model
model = TLO.OriginalTLO(epoch=100, pop_size=50)

## Input your starting positions here
list_pos = create_starting_solutions(6, 50, 2)
best_agent = model.solve(fm_problem, starting_solutions=list_pos)        ## Remember the keyword: starting_solutions
print(f"Best solution: {model.g_best.solution}, Best fitness: {best_agent.target.fitness}")

## Training with other starting positions
list_pos2 = create_starting_solutions(6, 50, -1)
best_agent = model.solve(fm_problem, starting_solutions=list_pos2)
print(f"Best solution: {model.g_best.solution}, Best fitness: {best_agent.target.fitness}")

Import All Models

from mealpy import BBO, PSO, GA, ALO, AO, ARO, AVOA, BA, BBOA, BMO, EOA, IWO
from mealpy import SBO, SMA, SOA, SOS, TPO, TSA, VCS, WHO, AOA, CEM, CGO, CircleSA, GBO, HC, INFO, PSS, RUN, SCA
from mealpy import SHIO, TS, HS, AEO, GCO, WCA, CRO, DE, EP, ES, FPA, MA, SHADE, BRO, BSO, CA, CHIO, FBIO, GSKA, HBO
from mealpy import HCO, ICA, LCO, WarSO, TOA, TLO, SSDO, SPBO, SARO, QSA, ArchOA, ASO, CDO, EFO, EO, EVO, FLA
from mealpy import HGSO, MVO, NRO, RIME, SA, WDO, TWO, ABC, ACOR, AGTO, BeesA, BES, BFO, ZOA, WOA, WaOA, TSO
from mealpy import TDO, STO, SSpiderO, SSpiderA, SSO, SSA, SRSR, SLO, SHO, SFO, ServalOA, SeaHO, SCSO, POA
from mealpy import PFA, OOA, NGO, NMRA, MSA, MRFO, MPA, MGO, MFO, JA, HHO, HGS, HBA, GWO, GTO, GOA
from mealpy import GJO, FOX, FOA, FFO, FFA, FA, ESOA, EHO, DO, DMOA, CSO, CSA, CoatiOA, COA, BSA
from mealpy import StringVar, FloatVar, BoolVar, PermutationVar, MixedSetVar, IntegerVar, BinaryVar
from mealpy import Tuner, Multitask, Problem, Optimizer, Termination, ParameterGrid
from mealpy import get_all_optimizers, get_optimizer_by_name
import numpy as np

def objective_function(solution):
    return np.sum(solution ** 2)

problem = {
    "obj_func": objective_function,
    "bounds": FloatVar(lb=[-3] * 20, ub=[5] * 20),
    "name": "Squared Problem",
    "log_to": "file",
    "log_file": "results.log"
}

paras_bbo = {
    "epoch": 20,
    "pop_size": 50,
    "p_m": 0.01,
    "elites": 2,
}
paras_eoa = {
    "epoch": 20,
    "pop_size": 50,
    "p_c": 0.9,
    "p_m": 0.01,
    "n_best": 2,
    "alpha": 0.98,
    "beta": 0.9,
    "gamma": 0.9,
}
paras_iwo = {
    "epoch": 20,
    "pop_size": 50,
    "seed_min": 3,
    "seed_max": 9,
    "exponent": 3,
    "sigma_start": 0.6,
    "sigma_end": 0.01,
}
paras_sbo = {
    "epoch": 20,
    "pop_size": 50,
    "alpha": 0.9,
    "p_m": 0.05,
    "psw": 0.02,
}
paras_sma = {
    "epoch": 20,
    "pop_size": 50,
    "p_t": 0.03,
}
paras_vcs = {
    "epoch": 20,
    "pop_size": 50,
    "lamda": 0.5,
    "sigma": 0.3,
}
paras_who = {
    "epoch": 20,
    "pop_size": 50,
    "n_explore_step": 3,
    "n_exploit_step": 3,
    "eta": 0.15,
    "p_hi": 0.9,
    "local_alpha": 0.9,
    "local_beta": 0.3,
    "global_alpha": 0.2,
    "global_beta": 0.8,
    "delta_w": 2.0,
    "delta_c": 2.0,
}
paras_cro = {
    "epoch": 20,
    "pop_size": 50,
    "po": 0.4,
    "Fb": 0.9,
    "Fa": 0.1,
    "Fd": 0.1,
    "Pd": 0.5,
    "GCR": 0.1,
    "gamma_min": 0.02,
    "gamma_max": 0.2,
    "n_trials": 5,
}
paras_ocro = dict(paras_cro)
paras_ocro["restart_count"] = 5

paras_de = {
    "epoch": 20,
    "pop_size": 50,
    "wf": 0.7,
    "cr": 0.9,
    "strategy": 0,
}
paras_jade = {
    "epoch": 20,
    "pop_size": 50,
    "miu_f": 0.5,
    "miu_cr": 0.5,
    "pt": 0.1,
    "ap": 0.1,
}
paras_sade = {
    "epoch": 20,
    "pop_size": 50,
}
paras_shade = paras_lshade = {
    "epoch": 20,
    "pop_size": 50,
    "miu_f": 0.5,
    "miu_cr": 0.5,
}
paras_sap_de = {
    "epoch": 20,
    "pop_size": 50,
    "branch": "ABS"
}
paras_ep = paras_levy_ep = {
    "epoch": 20,
    "pop_size": 50,
    "bout_size": 0.05
}
paras_es = paras_levy_es = {
    "epoch": 20,
    "pop_size": 50,
    "lamda": 0.75
}
paras_fpa = {
    "epoch": 20,
    "pop_size": 50,
    "p_s": 0.8,
    "levy_multiplier": 0.2
}
paras_ga = {
    "epoch": 20,
    "pop_size": 50,
    "pc": 0.9,
    "pm": 0.05,
}
paras_single_ga = {
    "epoch": 20,
    "pop_size": 50,
    "pc": 0.9,
    "pm": 0.8,
    "selection": "roulette",
    "crossover": "uniform",
    "mutation": "swap",
}
paras_multi_ga = {
    "epoch": 20,
    "pop_size": 50,
    "pc": 0.9,
    "pm": 0.05,
    "selection": "roulette",
    "crossover": "uniform",
    "mutation": "swap",
}
paras_ma = {
    "epoch": 20,
    "pop_size": 50,
    "pc": 0.85,
    "pm": 0.15,
    "p_local": 0.5,
    "max_local_gens": 10,
    "bits_per_param": 4,
}

paras_bro = {
    "epoch": 20,
    "pop_size": 50,
    "threshold": 3,
}
paras_improved_bso = {
    "epoch": 20,
    "pop_size": 50,
    "m_clusters": 5,
    "p1": 0.2,
    "p2": 0.8,
    "p3": 0.4,
    "p4": 0.5,
}
paras_bso = dict(paras_improved_bso)
paras_bso["slope"] = 20
paras_ca = {
    "epoch": 20,
    "pop_size": 50,
    "accepted_rate": 0.15,
}
paras_chio = {
    "epoch": 20,
    "pop_size": 50,
    "brr": 0.15,
    "max_age": 3
}
paras_fbio = {
    "epoch": 20,
    "pop_size": 50,
}
paras_base_gska = {
    "epoch": 20,
    "pop_size": 50,
    "pb": 0.1,
    "kr": 0.9,
}
paras_gska = {
    "epoch": 20,
    "pop_size": 50,
    "pb": 0.1,
    "kf": 0.5,
    "kr": 0.9,
    "kg": 5,
}
paras_ica = {
    "epoch": 20,
    "pop_size": 50,
    "empire_count": 5,
    "assimilation_coeff": 1.5,
    "revolution_prob": 0.05,
    "revolution_rate": 0.1,
    "revolution_step_size": 0.1,
    "zeta": 0.1,
}
paras_lco = {
    "epoch": 20,
    "pop_size": 50,
    "r1": 2.35,
}
paras_improved_lco = {
    "epoch": 20,
    "pop_size": 50,
}
paras_qsa = {
    "epoch": 20,
    "pop_size": 50,
}
paras_saro = {
    "epoch": 20,
    "pop_size": 50,
    "se": 0.5,
    "mu": 15
}
paras_ssdo = {
    "epoch": 20,
    "pop_size": 50,
}
paras_tlo = {
    "epoch": 20,
    "pop_size": 50,
}
paras_improved_tlo = {
    "epoch": 20,
    "pop_size": 50,
    "n_teachers": 5,
}

paras_aoa = {
    "epoch": 20,
    "pop_size": 50,
    "alpha": 5,
    "miu": 0.5,
    "moa_min": 0.2,
    "moa_max": 0.9,
}
paras_cem = {
    "epoch": 20,
    "pop_size": 50,
    "n_best": 20,
    "alpha": 0.7,
}
paras_cgo = {
    "epoch": 20,
    "pop_size": 50,
}
paras_gbo = {
    "epoch": 20,
    "pop_size": 50,
    "pr": 0.5,
    "beta_min": 0.2,
    "beta_max": 1.2,
}
paras_hc = {
    "epoch": 20,
    "pop_size": 50,
    "neighbour_size": 50
}
paras_swarm_hc = {
    "epoch": 20,
    "pop_size": 50,
    "neighbour_size": 10
}
paras_pss = {
    "epoch": 20,
    "pop_size": 50,
    "acceptance_rate": 0.8,
    "sampling_method": "LHS",
}
paras_sca = {
    "epoch": 20,
    "pop_size": 50,
}

paras_hs = {
    "epoch": 20,
    "pop_size": 50,
    "c_r": 0.95,
    "pa_r": 0.05
}

paras_aeo = {
    "epoch": 20,
    "pop_size": 50,
}
paras_gco = {
    "epoch": 20,
    "pop_size": 50,
    "cr": 0.7,
    "wf": 1.25,
}
paras_wca = {
    "epoch": 20,
    "pop_size": 50,
    "nsr": 4,
    "wc": 2.0,
    "dmax": 1e-6
}

paras_archoa = {
    "epoch": 20,
    "pop_size": 50,
    "c1": 2,
    "c2": 5,
    "c3": 2,
    "c4": 0.5,
    "acc_max": 0.9,
    "acc_min": 0.1,
}
paras_aso = {
    "epoch": 20,
    "pop_size": 50,
    "alpha": 50,
    "beta": 0.2,
}
paras_efo = {
    "epoch": 20,
    "pop_size": 50,
    "r_rate": 0.3,
    "ps_rate": 0.85,
    "p_field": 0.1,
    "n_field": 0.45,
}
paras_eo = {
    "epoch": 20,
    "pop_size": 50,
}
paras_hgso = {
    "epoch": 20,
    "pop_size": 50,
    "n_clusters": 3,
}
paras_mvo = {
    "epoch": 20,
    "pop_size": 50,
    "wep_min": 0.2,
    "wep_max": 1.0,
}
paras_nro = {
    "epoch": 20,
    "pop_size": 50,
}
paras_sa = {
    "epoch": 20,
    "pop_size": 50,
    "max_sub_iter": 5,
    "t0": 1000,
    "t1": 1,
    "move_count": 5,
    "mutation_rate": 0.1,
    "mutation_step_size": 0.1,
    "mutation_step_size_damp": 0.99,
}
paras_two = {
    "epoch": 20,
    "pop_size": 50,
}
paras_wdo = {
    "epoch": 20,
    "pop_size": 50,
    "RT": 3,
    "g_c": 0.2,
    "alp": 0.4,
    "c_e": 0.4,
    "max_v": 0.3,
}

paras_abc = {
    "epoch": 20,
    "pop_size": 50,
    "n_elites": 16,
    "n_others": 4,
    "patch_size": 5.0,
    "patch_reduction": 0.985,
    "n_sites": 3,
    "n_elite_sites": 1,
}
paras_acor = {
    "epoch": 20,
    "pop_size": 50,
    "sample_count": 25,
    "intent_factor": 0.5,
    "zeta": 1.0,
}
paras_alo = {
    "epoch": 20,
    "pop_size": 50,
}
paras_ao = {
    "epoch": 20,
    "pop_size": 50,
}
paras_ba = {
    "epoch": 20,
    "pop_size": 50,
    "loudness": 0.8,
    "pulse_rate": 0.95,
    "pf_min": 0.,
    "pf_max": 10.,
}
paras_adaptive_ba = {
    "epoch": 20,
    "pop_size": 50,
    "loudness_min": 1.0,
    "loudness_max": 2.0,
    "pr_min": 0.15,
    "pr_max": 0.85,
    "pf_min": 0.,
    "pf_max": 10.,
}
paras_modified_ba = {
    "epoch": 20,
    "pop_size": 50,
    "pulse_rate": 0.95,
    "pf_min": 0.,
    "pf_max": 10.,
}
paras_beesa = {
    "epoch": 20,
    "pop_size": 50,
    "selected_site_ratio": 0.5,
    "elite_site_ratio": 0.4,
    "selected_site_bee_ratio": 0.1,
    "elite_site_bee_ratio": 2.0,
    "dance_radius": 0.1,
    "dance_reduction": 0.99,
}
paras_prob_beesa = {
    "epoch": 20,
    "pop_size": 50,
    "recruited_bee_ratio": 0.1,
    "dance_radius": 0.1,
    "dance_reduction": 0.99,
}
paras_bes = {
    "epoch": 20,
    "pop_size": 50,
    "a_factor": 10,
    "R_factor": 1.5,
    "alpha": 2.0,
    "c1": 2.0,
    "c2": 2.0,
}
paras_bfo = {
    "epoch": 20,
    "pop_size": 50,
    "Ci": 0.01,
    "Ped": 0.25,
    "Nc": 5,
    "Ns": 4,
    "d_attract": 0.1,
    "w_attract": 0.2,
    "h_repels": 0.1,
    "w_repels": 10,
}
paras_abfo = {
    "epoch": 20,
    "pop_size": 50,
    "C_s": 0.1,
    "C_e": 0.001,
    "Ped": 0.01,
    "Ns": 4,
    "N_adapt": 4,
    "N_split": 40,
}
paras_bsa = {
    "epoch": 20,
    "pop_size": 50,
    "ff": 10,
    "pff": 0.8,
    "c1": 1.5,
    "c2": 1.5,
    "a1": 1.0,
    "a2": 1.0,
    "fl": 0.5,
}
paras_coa = {
    "epoch": 20,
    "pop_size": 50,
    "n_coyotes": 5,
}
paras_csa = {
    "epoch": 20,
    "pop_size": 50,
    "p_a": 0.3,
}
paras_cso = {
    "epoch": 20,
    "pop_size": 50,
    "mixture_ratio": 0.15,
    "smp": 5,
    "spc": False,
    "cdc": 0.8,
    "srd": 0.15,
    "c1": 0.4,
    "w_min": 0.4,
    "w_max": 0.9,
    "selected_strategy": 1,
}
paras_do = {
    "epoch": 20,
    "pop_size": 50,
}
paras_eho = {
    "epoch": 20,
    "pop_size": 50,
    "alpha": 0.5,
    "beta": 0.5,
    "n_clans": 5,
}
paras_fa = {
    "epoch": 20,
    "pop_size": 50,
    "max_sparks": 20,
    "p_a": 0.04,
    "p_b": 0.8,
    "max_ea": 40,
    "m_sparks": 5,
}
paras_ffa = {
    "epoch": 20,
    "pop_size": 50,
    "gamma": 0.001,
    "beta_base": 2,
    "alpha": 0.2,
    "alpha_damp": 0.99,
    "delta": 0.05,
    "exponent": 2,
}
paras_foa = {
    "epoch": 20,
    "pop_size": 50,
}
paras_goa = {
    "epoch": 20,
    "pop_size": 50,
    "c_min": 0.00004,
    "c_max": 1.0,
}
paras_gwo = {
    "epoch": 20,
    "pop_size": 50,
}
paras_hgs = {
    "epoch": 20,
    "pop_size": 50,
    "PUP": 0.08,
    "LH": 10000,
}
paras_hho = {
    "epoch": 20,
    "pop_size": 50,
}
paras_ja = {
    "epoch": 20,
    "pop_size": 50,
}
paras_mfo = {
    "epoch": 20,
    "pop_size": 50,
}
paras_mrfo = {
    "epoch": 20,
    "pop_size": 50,
    "somersault_range": 2.0,
}
paras_msa = {
    "epoch": 20,
    "pop_size": 50,
    "n_best": 5,
    "partition": 0.5,
    "max_step_size": 1.0,
}
paras_nmra = {
    "epoch": 20,
    "pop_size": 50,
    "pb": 0.75,
}
paras_improved_nmra = {
    "epoch": 20,
    "pop_size": 50,
    "pb": 0.75,
    "pm": 0.01,
}
paras_pfa = {
    "epoch": 20,
    "pop_size": 50,
}
paras_pso = {
    "epoch": 20,
    "pop_size": 50,
    "c1": 2.05,
    "c2": 2.05,
    "w_min": 0.4,
    "w_max": 0.9,
}
paras_ppso = {
    "epoch": 20,
    "pop_size": 50,
}
paras_hpso_tvac = {
    "epoch": 20,
    "pop_size": 50,
    "ci": 0.5,
    "cf": 0.0,
}
paras_cpso = {
    "epoch": 20,
    "pop_size": 50,
    "c1": 2.05,
    "c2": 2.05,
    "w_min": 0.4,
    "w_max": 0.9,
}
paras_clpso = {
    "epoch": 20,
    "pop_size": 50,
    "c_local": 1.2,
    "w_min": 0.4,
    "w_max": 0.9,
    "max_flag": 7,
}
paras_sfo = {
    "epoch": 20,
    "pop_size": 50,
    "pp": 0.1,
    "AP": 4.0,
    "epsilon": 0.0001,
}
paras_improved_sfo = {
    "epoch": 20,
    "pop_size": 50,
    "pp": 0.1,
}
paras_sho = {
    "epoch": 20,
    "pop_size": 50,
    "h_factor": 5.0,
    "N_tried": 10,
}
paras_slo = paras_modified_slo = {
    "epoch": 20,
    "pop_size": 50,
}
paras_improved_slo = {
    "epoch": 20,
    "pop_size": 50,
    "c1": 1.2,
    "c2": 1.2
}
paras_srsr = {
    "epoch": 20,
    "pop_size": 50,
}
paras_ssa = {
    "epoch": 20,
    "pop_size": 50,
    "ST": 0.8,
    "PD": 0.2,
    "SD": 0.1,
}
paras_sso = {
    "epoch": 20,
    "pop_size": 50,
}
paras_sspidera = {
    "epoch": 20,
    "pop_size": 50,
    "r_a": 1.0,
    "p_c": 0.7,
    "p_m": 0.1
}
paras_sspidero = {
    "epoch": 20,
    "pop_size": 50,
    "fp_min": 0.65,
    "fp_max": 0.9
}
paras_woa = {
    "epoch": 20,
    "pop_size": 50,
}
paras_hi_woa = {
    "epoch": 20,
    "pop_size": 50,
    "feedback_max": 10
}

if __name__ == "__main__":
        model = BBO.BaseBBO(**paras_bbo)
        model = BBO.OriginalBBO(**paras_bbo)
        model = EOA.OriginalEOA(**paras_eoa)
        model = IWO.OriginalIWO(**paras_iwo)
        model = SBO.BaseSBO(**paras_sbo)
        model = SBO.OriginalSBO(**paras_sbo)
        model = SMA.BaseSMA(**paras_sma)
        model = SMA.OriginalSMA(**paras_sma)
        model = VCS.BaseVCS(**paras_vcs)
        model = VCS.OriginalVCS(**paras_vcs)
        model = WHO.OriginalWHO(**paras_who)

        model = CRO.OriginalCRO(**paras_cro)
        model = CRO.OCRO(**paras_ocro)
        model = DE.BaseDE(**paras_de)
        model = DE.JADE(**paras_jade)
        model = DE.SADE(**paras_sade)
        model = DE.SHADE(**paras_shade)
        model = DE.L_SHADE(**paras_lshade)
        model = DE.SAP_DE(**paras_sap_de)
        model = EP.OriginalEP(**paras_ep)
        model = EP.LevyEP(**paras_levy_ep)
        model = ES.OriginalES(**paras_es)
        model = ES.LevyES(**paras_levy_es)
        model = FPA.OriginalFPA(**paras_fpa)
        model = GA.BaseGA(**paras_ga)
        model = GA.SingleGA(**paras_single_ga)
        model = GA.MultiGA(**paras_multi_ga)
        model = MA.OriginalMA(**paras_ma)

        model = BRO.BaseBRO(**paras_bro)
        model = BRO.OriginalBRO(**paras_bro)
        model = BSO.OriginalBSO(**paras_bso)
        model = BSO.ImprovedBSO(**paras_improved_bso)
        model = CA.OriginalCA(**paras_ca)
        model = CHIO.BaseCHIO(**paras_chio)
        model = CHIO.OriginalCHIO(**paras_chio)
        model = FBIO.BaseFBIO(**paras_fbio)
        model = FBIO.OriginalFBIO(**paras_fbio)
        model = GSKA.BaseGSKA(**paras_base_gska)
        model = GSKA.OriginalGSKA(**paras_gska)
        model = ICA.OriginalICA(**paras_ica)
        model = LCO.BaseLCO(**paras_lco)
        model = LCO.OriginalLCO(**paras_lco)
        model = LCO.ImprovedLCO(**paras_improved_lco)
        model = QSA.BaseQSA(**paras_qsa)
        model = QSA.OriginalQSA(**paras_qsa)
        model = QSA.OppoQSA(**paras_qsa)
        model = QSA.LevyQSA(**paras_qsa)
        model = QSA.ImprovedQSA(**paras_qsa)
        model = SARO.BaseSARO(**paras_saro)
        model = SARO.OriginalSARO(**paras_saro)
        model = SSDO.OriginalSSDO(**paras_ssdo)
        model = TLO.BaseTLO(**paras_tlo)
        model = TLO.OriginalTLO(**paras_tlo)
        model = TLO.ImprovedTLO(**paras_improved_tlo)

        model = AOA.OriginalAOA(**paras_aoa)
        model = CEM.OriginalCEM(**paras_cem)
        model = CGO.OriginalCGO(**paras_cgo)
        model = GBO.OriginalGBO(**paras_gbo)
        model = HC.OriginalHC(**paras_hc)
        model = HC.SwarmHC(**paras_swarm_hc)
        model = PSS.OriginalPSS(**paras_pss)
        model = SCA.OriginalSCA(**paras_sca)
        model = SCA.BaseSCA(**paras_sca)

        model = HS.BaseHS(**paras_hs)
        model = HS.OriginalHS(**paras_hs)

        model = AEO.OriginalAEO(**paras_aeo)
        model = AEO.EnhancedAEO(**paras_aeo)
        model = AEO.ModifiedAEO(**paras_aeo)
        model = AEO.ImprovedAEO(**paras_aeo)
        model = AEO.AugmentedAEO(**paras_aeo)
        model = GCO.BaseGCO(**paras_gco)
        model = GCO.OriginalGCO(**paras_gco)
        model = WCA.OriginalWCA(**paras_wca)

        model = ArchOA.OriginalArchOA(**paras_archoa)
        model = ASO.OriginalASO(**paras_aso)
        model = EFO.OriginalEFO(**paras_efo)
        model = EFO.BaseEFO(**paras_efo)
        model = EO.OriginalEO(**paras_eo)
        model = EO.AdaptiveEO(**paras_eo)
        model = EO.ModifiedEO(**paras_eo)
        model = HGSO.OriginalHGSO(**paras_hgso)
        model = MVO.OriginalMVO(**paras_mvo)
        model = NRO.OriginalNRO(**paras_nro)
        model = SA.OriginalSA(**paras_sa)
        model = SA.SwarmSA(**paras_sa)
        model = SA.GaussianSA(**paras_sa)
        model = TWO.OriginalTWO(**paras_two)
        model = TWO.OppoTWO(**paras_two)
        model = TWO.LevyTWO(**paras_two)
        model = TWO.EnhancedTWO(**paras_two)
        model = WDO.OriginalWDO(**paras_wdo)

        model = ABC.OriginalABC(**paras_abc)
        model = ACOR.OriginalACOR(**paras_acor)
        model = ALO.OriginalALO(**paras_alo)
        model = AO.OriginalAO(**paras_ao)
        model = ALO.BaseALO(**paras_alo)
        model = BA.OriginalBA(**paras_ba)
        model = BA.AdaptiveBA(**paras_adaptive_ba)
        model = BA.ModifiedBA(**paras_modified_ba)
        model = BeesA.OriginalBeesA(**paras_beesa)
        model = BeesA.ProbBeesA(**paras_prob_beesa)
        model = BES.OriginalBES(**paras_bes)
        model = BFO.OriginalBFO(**paras_bfo)
        model = BFO.ABFO(**paras_abfo)
        model = BSA.OriginalBSA(**paras_bsa)
        model = COA.OriginalCOA(**paras_coa)
        model = CSA.OriginalCSA(**paras_csa)
        model = CSO.OriginalCSO(**paras_cso)
        model = DO.OriginalDO(**paras_do)
        model = EHO.OriginalEHO(**paras_eho)
        model = FA.OriginalFA(**paras_fa)
        model = FFA.OriginalFFA(**paras_ffa)
        model = FOA.OriginalFOA(**paras_foa)
        model = FOA.BaseFOA(**paras_foa)
        model = FOA.WhaleFOA(**paras_foa)
        model = GOA.OriginalGOA(**paras_goa)
        model = GWO.OriginalGWO(**paras_gwo)
        model = GWO.RW_GWO(**paras_gwo)
        model = HGS.OriginalHGS(**paras_hgs)
        model = HHO.OriginalHHO(**paras_hho)
        model = JA.OriginalJA(**paras_ja)
        model = JA.BaseJA(**paras_ja)
        model = JA.LevyJA(**paras_ja)
        model = MFO.OriginalMFO(**paras_mfo)
        model = MFO.BaseMFO(**paras_mfo)
        model = MRFO.OriginalMRFO(**paras_mrfo)
        model = MSA.OriginalMSA(**paras_msa)
        model = NMRA.ImprovedNMRA(**paras_improved_nmra)
        model = NMRA.OriginalNMRA(**paras_nmra)
        model = PFA.OriginalPFA(**paras_pfa)
        model = PSO.OriginalPSO(**paras_pso)
        model = PSO.PPSO(**paras_ppso)
        model = PSO.HPSO_TVAC(**paras_hpso_tvac)
        model = PSO.C_PSO(**paras_cpso)
        model = PSO.CL_PSO(**paras_clpso)
        model = SFO.OriginalSFO(**paras_sfo)
        model = SFO.ImprovedSFO(**paras_improved_sfo)
        model = SHO.OriginalSHO(**paras_sho)
        model = SLO.OriginalSLO(**paras_slo)
        model = SLO.ModifiedSLO(**paras_modified_slo)
        model = SLO.ImprovedSLO(**paras_improved_slo)
        model = SRSR.OriginalSRSR(**paras_srsr)
        model = SSA.OriginalSSA(**paras_ssa)
        model = SSA.BaseSSA(**paras_ssa)
        model = SSO.OriginalSSO(**paras_sso)
        model = SSpiderA.OriginalSSpiderA(**paras_sspidera)
        model = SSpiderO.OriginalSSpiderO(**paras_sspidero)
        model = WOA.OriginalWOA(**paras_woa)
        model = WOA.HI_WOA(**paras_hi_woa)

        g_best = model.solve(problem)
        print(f"Best solution: {g_best.solution}, Best fitness: {g_best.target.fitness}")
        print(model.get_parameters())
        print(model.get_name())
        print(model.problem.get_name())
        print(model.get_attributes()["g_best"])
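
The helpers get_all_optimizers and get_optimizer_by_name imported at the top of this example can also be used to pick a model dynamically; a sketch, assuming get_optimizer_by_name returns the optimizer class and get_all_optimizers returns a collection of all available classes:

from mealpy import get_all_optimizers, get_optimizer_by_name

## Look up an optimizer class by its name instead of importing the module directly (assumed behaviour)
OptimizerClass = get_optimizer_by_name("OriginalPSO")
model = OptimizerClass(epoch=20, pop_size=50)
print(len(get_all_optimizers()))    # Number of registered optimizer classes (assumed dict-like collection)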