econometron.utils.optimizers
Module: econometron.utils
Overview
The econometron.utils.optimizers module provides a suite of optimization algorithms for minimizing scalar objective functions, particularly suited to parameter estimation in econometric models, state-space models, and other statistical applications. The module includes three main optimization methods: Simulated Annealing (SA), Genetic Algorithm (GA), and Quasi-Newton (QN), along with a utility function for function evaluation. These methods are designed to handle non-linear, non-convex objective functions, such as negative log-likelihoods in maximum likelihood estimation (MLE) or negative log-posteriors in Bayesian inference. The module integrates with other econometron components, such as kalman_objective for state-space models or ols_estimator for regression, to perform robust parameter estimation.
Objective
The goal of these optimizers is to minimize a scalar objective function f(θ), typically the negative log-likelihood or negative log-posterior, subject to parameter bounds or constraints:

    min_θ f(θ)   subject to   lb ≤ θ ≤ ub

Where:
- θ: Parameter vector (shape: n_params).
- lb, ub: Lower and upper bounds for parameters.
- f: Objective function, often the negative log-likelihood or a similar criterion.
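For concreteness, here is a minimal sketch of such an objective: a negative log-likelihood for i.i.d. Gaussian data that any of the optimizers below could minimize. The function name `neg_log_likelihood` and the log-scale parameterization of sigma are illustrative choices, not part of econometron.

```python
import numpy as np

def neg_log_likelihood(theta, data):
    """Negative log-likelihood of i.i.d. Gaussian data; theta = [mu, log_sigma].

    Parameterizing sigma on the log scale keeps it positive without bounds.
    """
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    n = data.size
    return (0.5 * n * np.log(2 * np.pi) + n * log_sigma
            + np.sum((data - mu) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=500)

# The NLL is smaller at the true parameters than far away from them.
print(neg_log_likelihood(np.array([1.0, np.log(2.0)]), data))
print(neg_log_likelihood(np.array([5.0, np.log(0.5)]), data))
```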
Functions
1. evaluate_func(function, params)
Purpose: Evaluates a given objective function at a parameter vector, providing a consistent interface for function evaluation across optimization algorithms. Includes debugging output to track parameter evaluations.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| function | Callable | Objective function to evaluate, taking a parameter vector as input. | None |
| params | np.ndarray | Parameter vector to evaluate (shape: n_params). | None |
Returns: float – Function value at params. Returns float('inf') if function is not callable.
Explanation:
- Checks if `function` is callable; if not, returns infinity to indicate an invalid evaluation.
- Prints the evaluated parameter vector for debugging purposes.
- Calls `function(params)` and returns the result.
Example:

```python
from econometron.utils.optimizers import evaluate_func
import numpy as np

def obj_func(x):
    return np.sum(x**2)

params = np.array([1.0, 2.0])
result = evaluate_func(obj_func, params)
print(result)  # Output: 5.0
```
2. simulated_annealing(...)
Purpose: Implements Simulated Annealing (SA), a probabilistic optimization algorithm inspired by the annealing process in metallurgy. Designed to escape local minima by accepting worse solutions with a probability that decreases as the "temperature" cools, making it suitable for complex, non-convex objective functions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| function | Callable | Objective function to minimize. | None |
| x | list or np.ndarray | Initial parameter vector (shape: n_params). | None |
| lower_bounds | list or np.ndarray | Lower bounds for parameters (shape: n_params). | None |
| upper_bounds | list or np.ndarray | Upper bounds for parameters (shape: n_params). | None |
| T | float | Initial temperature, controlling initial exploration. | None |
| cooling_rate | float | Temperature reduction factor (0 < cooling_rate < 1). | None |
| num_temperatures | int | Number of temperature levels. | None |
| num_steps | int | Number of trial points per temperature level. | None |
| seed_value | int | Random seed for reproducibility. | None |
| max_evals | int | Maximum number of function evaluations. | None |
| eps | float | Convergence threshold for objective value change. | 1e-2 |
Returns:
dict – Contains:
- x (np.ndarray): Optimal parameter vector.
- fun (float): Negated objective function value at the optimum.
- N_FUNC_EVALS (int): Number of function evaluations.
- message (str): Termination message.
Explanation:
- Initialization: Random seed set; current parameters `x`, best `xopt`, and `fopt` initialized; step sizes `VM` set to bound ranges; evaluation counters initialized.
- Main Loop: For each temperature, performs trials per parameter:
  - Proposes a new parameter vector `xp` by perturbation.
  - Resamples out-of-bounds proposals.
  - Accepts proposals that improve the objective, or probabilistically if worse (Metropolis criterion: accept with probability exp(-Δf / T)).
  - Updates the best solution if improved.
- Step Adaptation: Step sizes `VM` adjusted based on acceptance rates.
- Cooling: Temperature multiplied by `cooling_rate`.
- Convergence: Stops if evaluations exceed `max_evals` or the objective change stays below `eps` for `sa_neps` iterations.
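The acceptance rule at the core of the main loop can be sketched as follows. This is a standalone illustration of the Metropolis criterion, not the module's exact implementation; the helper name `sa_accept` is hypothetical.

```python
import numpy as np

def sa_accept(f_current, f_proposal, T, rng):
    """Metropolis acceptance rule used in simulated annealing.

    Always accept an improvement; accept a worse proposal with
    probability exp(-(f_proposal - f_current) / T), which shrinks
    as the temperature T cools.
    """
    if f_proposal <= f_current:
        return True
    return rng.random() < np.exp(-(f_proposal - f_current) / T)

rng = np.random.default_rng(42)
# At high temperature, worse moves are accepted often; at low temperature, rarely.
hot = np.mean([sa_accept(1.0, 2.0, T=10.0, rng=rng) for _ in range(1000)])
cold = np.mean([sa_accept(1.0, 2.0, T=0.1, rng=rng) for _ in range(1000)])
print(hot, cold)
```

This decreasing tolerance for uphill moves is what lets SA escape local minima early on while still settling into a good basin as the schedule cools.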
Example:

```python
from econometron.utils.optimizers import simulated_annealing
import numpy as np

def obj_func(x):
    return np.sum(x**2)

x0 = np.array([1.0, 2.0])
lb = np.array([-5.0, -5.0])
ub = np.array([5.0, 5.0])
result = simulated_annealing(
    obj_func, x0, lb, ub,
    T=5.0, cooling_rate=0.9,
    num_temperatures=5, num_steps=10,
    seed_value=42, max_evals=1000
)
print(result)
```
3. genetic_algorithm(...)
Purpose: Implements a Genetic Algorithm (GA), a population-based evolutionary optimization method inspired by natural selection. Suitable for global optimization of complex, non-convex functions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| func | Callable | Objective function to minimize. | None |
| x0 | np.ndarray | Initial parameter vector. | None |
| lb | np.ndarray | Lower bounds. | None |
| ub | np.ndarray | Upper bounds. | None |
| pop_size | int | Population size. | 50 |
| n_gen | int | Number of generations. | 100 |
| crossover_rate | float | Crossover probability. | 0.8 |
| mutation_rate | float | Mutation probability. | 0.1 |
| elite_frac | float | Fraction of elite individuals preserved. | 0.1 |
| seed | int | Random seed. | 1 |
| verbose | bool | Print progress every 10 generations. | True |
Returns:
dict – Contains:
- x (np.ndarray): Optimal parameter vector.
- fun (float): Objective function value.
- nfev (int): Number of evaluations.
- message (str): Termination message.
Explanation:
- Input Validation: Converts inputs to arrays; checks bounds; ensures `lb >= 1e-6`.
- Initialization: Population created; first individual set to `x0`; others uniformly sampled. Objective evaluated.
- Main Loop: For each generation:
  - Tournament selection for parents.
  - Crossover with probability `crossover_rate`.
  - Mutation with probability `mutation_rate`.
  - Preserve elite individuals (`elite_frac`).
  - Evaluate offspring and update population; track the best solution.
- Output: Progress printed if `verbose=True`. Returns the best solution.
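The per-generation operators can be sketched like this. It is a minimal standalone illustration under assumed details (tournament size 3, uniform crossover, uniform resampling mutation); the module's internals may differ, and the helper names are hypothetical.

```python
import numpy as np

def tournament_select(pop, fitness, rng, k=3):
    """Pick the best of k randomly chosen individuals (minimization)."""
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmin(fitness[idx])]]

def crossover(p1, p2, rng):
    """Uniform crossover: each gene comes from either parent."""
    mask = rng.random(p1.size) < 0.5
    return np.where(mask, p1, p2)

def mutate(child, lb, ub, rng, rate=0.1):
    """Resample each gene uniformly within bounds with probability `rate`."""
    mask = rng.random(child.size) < rate
    return np.where(mask, rng.uniform(lb, ub), child)

rng = np.random.default_rng(1)
lb, ub = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
pop = rng.uniform(lb, ub, size=(20, 2))               # random initial population
fitness = np.array([np.sum(ind**2) for ind in pop])   # evaluate each individual

# One offspring: select two parents, recombine, then mutate.
p1 = tournament_select(pop, fitness, rng)
p2 = tournament_select(pop, fitness, rng)
child = mutate(crossover(p1, p2, rng), lb, ub, rng)
print(child)
```

Because selection favors low-fitness (better) parents while mutation keeps injecting fresh in-bounds candidates, the population drifts toward good regions without collapsing prematurely.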
Example:

```python
from econometron.utils.optimizers import genetic_algorithm
import numpy as np

def obj_func(x):
    return np.sum(x**2)

x0 = np.array([1.0, 2.0])
lb = np.array([-5.0, -5.0])
ub = np.array([5.0, 5.0])
result = genetic_algorithm(obj_func, x0, lb, ub, pop_size=50, n_gen=100)
print(result)
```
4. minimize_qn(x0, func, maxit=500, gtol=None, ptol=1e-7, verbose=False)
Purpose: Implements a Quasi-Newton (QN) optimization method using BFGS. Gradient-based; suitable for smooth, differentiable functions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x0 | np.ndarray | Initial parameter vector. | None |
| func | Callable | Scalar objective function to minimize. | None |
| maxit | int | Maximum iterations. | 500 |
| gtol | float | Gradient tolerance; default machine epsilon^(1/3). | None |
| ptol | float | Parameter change tolerance. | 1e-7 |
| verbose | bool | Print iteration details. | False |
Returns:
- x (np.ndarray): Optimal parameter vector.
- crit (np.ndarray): Convergence criteria array [status, grad_norm, param_change, func_value, iterations].
Explanation:
- Hessian initialized as identity. Gradient computed.
- Main loop: computes search direction, line search, updates parameters, updates Hessian with BFGS.
- Stops if gradient norm < gtol, parameter change < ptol, or maxit reached.
- Verbose prints iteration info.
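The BFGS update at the heart of the loop can be sketched as follows. This is an illustrative standalone implementation, not the module's code: the helper `bfgs_step` updates an inverse-Hessian approximation, the gradient here is exact for the test function (the module differences numerically), and the fixed full step stands in for a proper line search.

```python
import numpy as np

def bfgs_step(H, s, y):
    """BFGS update of the inverse-Hessian approximation H,
    with s = x_new - x_old and y = grad_new - grad_old."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    return (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)

# Minimize f(x) = sum(x**2), whose exact gradient is 2x.
grad = lambda x: 2.0 * x
x = np.array([1.0, 2.0])
g = grad(x)
H = np.eye(2)                      # start from the identity, as the module does
for _ in range(20):
    d = -H @ g                     # quasi-Newton search direction
    x_new = x + d                  # full step; a real implementation line-searches here
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if abs(y @ s) > 1e-12:         # skip the update when curvature information degenerates
        H = bfgs_step(H, s, y)
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-8:   # gradient-norm stopping rule, cf. gtol
        break
print(x)  # converges to the minimizer near [0, 0]
```

On this quadratic the iteration reaches the minimizer in a couple of steps, reflecting BFGS's finite-step exactness on quadratics; the curvature guard `y @ s > 0` is the usual safeguard that keeps H positive definite.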
Example:

```python
from econometron.utils.optimizers import minimize_qn
import numpy as np

def obj_func(x):
    return np.sum(x**2)

x0 = np.array([1.0, 2.0])
x, crit = minimize_qn(x0, obj_func, verbose=True)
print("Solution:", x, "Criteria:", crit)
```
Notes
- Algorithm Characteristics: SA is well suited to escaping local minima in search of a global one; GA explores a broad parameter space; QN is efficient for smooth functions.
- Integration: Compatible with `kalman_objective` and `ols_estimator`.
- Verbose Output: SA and GA print parameter/function values; QN prints iteration details.
- Numerical Stability: Bounds handled explicitly; QN uses line search and BFGS stabilization.
Dependencies
`numpy`, `scipy.stats.norm`, `econometron.utils.solver`, `colorama`.
Use Case Examples
- DSGE models with `kalman_objective`.
- Regression via `ols_estimator`.
- Bayesian inference via MAP estimates.
