How to use a genetic algorithm for hyperparameter tuning of ML models
Rahul
In the simplest case, all hyperparameters take unconstrained real values, and the feasible set of hyperparameters is an n-dimensional real-valued vector space. In practice, however, because an ML model’s hyperparameters take values from multiple domains and carry distinct constraints, tuning them is frequently a complicated constrained optimization problem.
For example, in a decision tree, the number of features considered at each split should be in the range of 0 to the total number of features, and the number of clusters in k-means should not be greater than the number of data points. Furthermore, categorical hyperparameters, such as the activation function and optimizer of a neural network, can often take only a few specific values. As a result, the feasible domain of a set of hyperparameters often has a complicated structure, which increases the difficulty of the problem.
There are four primary components to a hyperparameter tuning method.
• An estimator with an objective function.
• A search space.
• A search or optimization method for generating hyperparameter combinations.
• An evaluation function for comparing the performance of different hyperparameter combinations.
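The four components above can be sketched with a toy objective standing in for a real estimator. The quadratic "loss", the parameter names `lr` and `depth`, and their ranges are illustrative assumptions, not a real model:

```python
import random

# 1. Estimator with an objective function: a toy stand-in whose
#    "loss" depends on two hypothetical hyperparameters.
def objective(lr, depth):
    return (lr - 0.1) ** 2 + (depth - 5) ** 2

# 2. Search space: a range for each hyperparameter.
search_space = {"lr": (0.001, 1.0), "depth": (1, 10)}

# 3. Search method: plain random search as the simplest baseline.
def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lr = rng.uniform(*search_space["lr"])
        depth = rng.randint(*search_space["depth"])
        # 4. Evaluation function: compare configurations by loss.
        score = objective(lr, depth)
        if best is None or score < best[0]:
            best = (score, {"lr": lr, "depth": depth})
    return best

best_score, best_params = random_search(200)
```

Swapping the random draws for a genetic algorithm, discussed below, changes only component 3; the other three components stay the same.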
How hyperparameter tuning works
Hyperparameter optimization aims to obtain optimal or near-optimal model performance by adjusting hyperparameters within a budget. The mathematical formulation varies with the objective function of the chosen ML algorithm and the performance metric. Model performance can be measured with a variety of metrics, including accuracy, RMSE, F1-score, and false-alarm rate. In practice, however, time budgets are an essential constraint on hyperparameter optimization and must be considered.
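Two of the metrics named above can be computed directly; this toy sketch uses hand-made predictions rather than a trained model:

```python
import math

# RMSE for regression: root of the mean squared error.
def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

# Accuracy for classification: fraction of matching labels.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))   # sqrt(4/3)
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))     # 0.75
```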
anumak.ai
Maximizing the objective function of an ML model over a reasonable number of hyperparameter configurations frequently takes a long time.
The primary steps of hyperparameter optimization are as follows:
• Choose an objective function and performance metrics.
• Identify the hyperparameters that need to be tuned, determine their types, and choose a suitable optimization approach.
• Train the ML model with the default hyperparameter configuration or common values as the baseline model.
• Begin the optimization process by selecting a broad search space as the likely hyperparameter domain, based on manual testing and domain expertise.
• Narrow the search space around the well-performing hyperparameter values tested so far or, if required, explore additional search spaces.
• Return the best-performing hyperparameter configuration as the final answer.
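The broad-then-narrow loop in the steps above can be sketched as follows. The one-dimensional stand-in loss, the number of rounds, and the shrink factor of 4 are illustrative assumptions:

```python
import random

# Toy stand-in for training and evaluating a model at one
# hyperparameter value (a real run would fit an estimator here).
def loss(lr):
    return (lr - 0.37) ** 2

def tune(low, high, rounds=3, trials=30, seed=1):
    rng = random.Random(seed)
    best_lr = None
    for _ in range(rounds):
        # Sample the current search space and keep the best value.
        samples = [rng.uniform(low, high) for _ in range(trials)]
        best_lr = min(samples, key=loss)
        # Narrow the search space around the best value found so far.
        width = (high - low) / 4
        low = max(low, best_lr - width)
        high = min(high, best_lr + width)
    return best_lr

best = tune(0.0, 1.0)
```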
How is a genetic algorithm used in hyperparameter optimization?
The genetic algorithm (GA) is one of the most widely used metaheuristic algorithms. It is based on the evolutionary idea that individuals with the greatest survival potential and adaptation to their environment are the most likely to survive and pass their traits on to future generations. The traits of the parents are passed on to the following generation, which may include both strong and weak individuals. Stronger individuals are more likely to survive and produce more capable offspring, while the weakest progressively die out. After multiple generations, the individual with the greatest adaptability is chosen as the global optimum.
To apply GA to hyperparameter optimization problems, each chromosome or individual represents a hyperparameter, and its decimal value reflects the actual input value of the hyperparameter in each evaluation. Every chromosome has multiple genes, which are binary digits; these genes are subjected to crossover and mutation operations. The population contains all potential values within the initialized chromosome/parameter ranges, while the fitness function encodes the parameter evaluation metrics.
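The binary encoding described here can be sketched as follows; the 8-bit chromosome length and the parameter range are illustrative assumptions:

```python
# Each chromosome is a list of bits (genes). Decoding maps its
# integer value onto the hyperparameter's real-valued range.
N_BITS = 8

def decode(bits, low, high):
    """Map a bit list to a real value in [low, high]."""
    as_int = int("".join(map(str, bits)), 2)
    return low + (high - low) * as_int / (2 ** len(bits) - 1)

chromosome = [0] * N_BITS
print(decode(chromosome, 0.001, 1.0))   # all zeros -> lower bound 0.001
```

More bits per chromosome give a finer grid over the range; 8 bits yield 256 distinct candidate values.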
Since the randomly initialized parameter values often do not include the optimal values, additional operations, namely selection, crossover, and mutation, must be performed on the well-performing chromosomes to discover the optimum. Selection is carried out by choosing chromosomes with high fitness values: to keep the population size constant, chromosomes with high fitness are more likely to be passed on to the next generation, where they produce new chromosomes combining the best traits of their parents. A genetic algorithm solves the optimization problem when, for example, you need to find the best parameters to minimize some loss function. Genetic algorithms belong to the larger family of evolutionary algorithms, an idea inspired by nature and natural selection.
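Minimal versions of the three operators might look like this on bit-list chromosomes. Tournament selection and single-point crossover are common choices, used here as assumptions rather than a prescribed design:

```python
import random

def select(population, fitness, rng):
    """Tournament selection: the fitter of two random individuals wins."""
    a, b = rng.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent_a, parent_b, rng):
    """Single-point crossover: splice the parents at a random cut."""
    cut = rng.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(bits, rate, rng):
    """Flip each gene independently with probability `rate`."""
    return [1 - b if rng.random() < rate else b for b in bits]

rng = random.Random(0)
child = crossover([0] * 8, [1] * 8, rng)   # mixes genes from both parents
```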
• First, generate your initial population of ML models with randomly chosen hyperparameters.
• Calculate your loss function for each model, for example, log-loss.
• Then choose some number of models with the lowest error.
• Now create offspring: build a population of new ML models based on the best models from the previous generation, slightly changing their hyperparameters. The new population will consist of models from the prior population and freshly generated models in some proportion, for example, 50/50.
• Calculate your loss function, sort your models, and repeat the process.
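The bullet steps above can be sketched end-to-end on a toy problem. The single hyperparameter, the stand-in loss with a known minimum, and the multiplicative mutation are illustrative assumptions, not a real training run:

```python
import math
import random

# Stand-in for "train a model and measure its loss": minimized
# at lr = e^-2, roughly 0.135.
def loss(lr):
    return (math.log(lr) + 2.0) ** 2

def evolve(pop_size=20, generations=15, seed=0):
    rng = random.Random(seed)
    # Step 1: initial population of random hyperparameter values.
    pop = [rng.uniform(0.001, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Steps 2-3: score every model and keep the lowest-error half.
        pop.sort(key=loss)
        parents = pop[: pop_size // 2]
        # Step 4: offspring are slightly mutated copies of the parents;
        # survivors and new models mix 50/50, clipped to the range.
        children = [min(1.0, max(0.001, p * rng.uniform(0.8, 1.25)))
                    for p in parents]
        pop = parents + children
    # Step 5: after repeating, return the best configuration found.
    return min(pop, key=loss)

best = evolve()
```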
Genetic algorithms are not perfect: you still need to specify your loss function, your population size, the ratio of offspring with changed parameters, and so on.