
econometron.utils.estimation.Bayesian

  • rwm_kalman – Implements the Random Walk Metropolis (RWM) algorithm for Kalman filter-based models.
  • compute_proposal_sigma – Computes proposal standard deviations for MCMC.
  • make_prior_function – Creates a log-prior function from user-specified prior distributions.

Overview

The econometron.utils.estimation.Bayesian module provides tools for Bayesian estimation of state-space models, including but not limited to Dynamic Stochastic General Equilibrium (DSGE) models, using Markov Chain Monte Carlo (MCMC) methods. The primary algorithm implemented is the Random Walk Metropolis (RWM), which generates samples from the posterior distribution of model parameters given observed data. This module integrates with the Kalman filter (from econometron.filters) for likelihood evaluation and supports flexible prior distributions.

It is designed for general linear state-space models, where the state evolution and observation processes are Gaussian, making it suitable for applications in DSGE models, time-series analysis, signal processing, and econometrics.

Bayesian Estimation Framework

Bayesian estimation combines prior beliefs about the parameters, P(θ), with the likelihood of the observed data, P(Y | θ), to compute the posterior distribution:

P(θ | Y) ∝ P(Y | θ) · P(θ)

Where:

  • θ: Parameter vector to be estimated
  • Y: Observed data (e.g., a time series of shape m × T)
  • P(Y | θ): Likelihood, computed using the Kalman filter for state-space models
  • P(θ): Prior distribution, user-specified or uniform within bounds

The Random Walk Metropolis (RWM) algorithm proposes new parameter values from a multivariate normal distribution centered at the current parameters, accepting or rejecting each proposal according to the Metropolis-Hastings acceptance probability, which weighs the likelihood against the prior.
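The accept/reject step described above can be sketched in a few lines. This is an illustrative sketch only: `rwm_step` and the toy `log_post` below are hypothetical names, not part of the module's API.

```python
import numpy as np

def rwm_step(theta, log_post, sigma, rng):
    """One Random Walk Metropolis step (illustrative sketch).

    theta    : current parameter vector
    log_post : callable returning the log posterior (log-likelihood + log-prior)
    sigma    : per-parameter proposal standard deviations
    """
    proposal = theta + rng.normal(0.0, sigma)          # random-walk proposal
    log_alpha = log_post(proposal) - log_post(theta)   # log acceptance ratio
    if np.log(rng.uniform()) < log_alpha:              # Metropolis-Hastings rule
        return proposal, True
    return theta, False

# Usage on a toy 1-D standard-normal "posterior"
rng = np.random.default_rng(0)
log_post = lambda th: -0.5 * np.sum(th ** 2)
theta = np.zeros(1)
accepted = 0
for _ in range(2000):
    theta, ok = rwm_step(theta, log_post, np.array([0.5]), rng)
    accepted += ok
print(accepted / 2000)  # empirical acceptance rate
```

Proposals with higher posterior probability are always accepted; worse proposals are accepted with probability equal to the posterior ratio, which is what lets the chain explore the full distribution.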

State-Space Model

The module assumes a linear state-space model of the form:

State transition: x_t = A x_{t-1} + ε_t,  ε_t ~ N(0, Q)

Observation equation: y_t = C x_t + η_t,  η_t ~ N(0, R)

Where:

  • x_t: State vector (n × 1)
  • y_t: Observation vector (m × 1)
  • A: State transition matrix (n × n)
  • C: Observation matrix (m × n)
  • Q: State covariance matrix (n × n)
  • R: Observation covariance matrix (m × m)
  • ε_t, η_t: Gaussian noise terms
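To make the model concrete, the following sketch simulates a small linear Gaussian state-space system of this form with NumPy. The matrices and dimensions here are hypothetical placeholders for the transition matrix, observation matrix, and the two noise covariances, not values taken from the library:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-state, 1-observation system
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])      # state transition matrix (n x n)
C = np.array([[1.0, 0.5]])      # observation matrix (m x n)
Q = 0.01 * np.eye(2)            # state noise covariance (n x n)
R = np.array([[0.05]])          # observation noise covariance (m x m)

T = 100
x = np.zeros(2)
y = np.empty((1, T))            # observations stored as (m x T), as rwm_kalman expects
for t in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)       # state transition
    y[:, t] = C @ x + rng.multivariate_normal(np.zeros(1), R)  # observation

print(y.shape)  # (1, 100)
```

A series simulated this way is exactly the kind of `y` that the estimation functions below consume.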

The rwm_kalman function uses the Kalman filter (via kalman_objective from econometron.filters) to compute the likelihood P(Y | θ) and combines it with a user-defined prior to sample from the posterior.

Functions

1. compute_proposal_sigma(n_params, lb, ub, base_std=0.1)

Purpose: Computes proposal standard deviations for the Random Walk Metropolis algorithm.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| n_params | int | Number of parameters to estimate | None |
| lb | np.ndarray | Lower bounds for parameters (shape: n_params) | None |
| ub | np.ndarray | Upper bounds for parameters (shape: n_params) | None |
| base_std | float or np.ndarray | Base standard deviation; scales the proposal step size | 0.1 |

Returns: np.ndarray – Proposal standard deviations (shape: n_params)

Explanation:

  • Sets each proposal std to base_std × 10% of the parameter range (ub − lb).
  • Handles zero-width ranges by setting sigma = 1.0 for that parameter.
  • If base_std is a scalar, it is applied uniformly; if an array, its length must match n_params.
  • Raises ValueError on a length mismatch.

Example with two parameters:

```python
import numpy as np
from econometron.utils.estimation.Bayesian import compute_proposal_sigma

# Set the proposal standard deviations based on the parameter ranges
base_std = [0.1, 0.02]
n_params = 2
sigma = compute_proposal_sigma(n_params, lb=np.array([0, 0.01]),
                               ub=np.array([1, 1]), base_std=base_std)
print(sigma)
# array([0.01   , 0.00198])
```

2. make_prior_function(param_names, priors, bounds, verbose=False)

Purpose: Creates a log-prior function evaluating the probability of a parameter vector based on user-specified distributions.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| param_names | list[str] | Names of the parameters, in order | None |
| priors | dict[str, Tuple[Callable, dict]] | Maps parameter name → (distribution, distribution parameters) | None |
| bounds | dict[str, Tuple[float, float]] | Maps parameter name → (lower, upper) bounds | None |
| verbose | bool | Print debug output | False |

Returns: Callable – a log-prior function that returns the total log-prior probability (float), or -∞ for out-of-bounds or invalid parameters

Explanation:

  • Checks if each parameter is within bounds.
  • Evaluates log-density using specified distribution (e.g., scipy.stats.beta.logpdf).
  • Sums log-priors for total probability.
  • Returns -∞ for non-finite values; prints diagnostics if verbose=True.

Example:

```python
import numpy as np
from scipy.stats import beta, gamma

param_names = ['g', 'rho', 'phi', 'd', 'sigmax', 'sigma_y', 'sigma_p', 'sigma_r']
priors = {
    'g':       (gamma, {'a': 5, 'scale': 1}),
    'rho':     (beta,  {'a': 19, 'b': 1}),
    'phi':     (gamma, {'a': 3, 'scale': 0.5}),
    'd':       (beta,  {'a': 10, 'b': 10}),
    'sigmax':  (gamma, {'a': 2, 'scale': 0.02}),
    'sigma_y': (gamma, {'a': 2, 'scale': 0.02}),
    'sigma_p': (gamma, {'a': 2, 'scale': 0.02}),
    'sigma_r': (gamma, {'a': 2, 'scale': 0.02}),
}

bounds = {
    'g':        (0, 10),
    'rho':      (0, 1),
    'phi':      (1, 5),
    'd':        (0, 1),
    'sigmax':   (0, np.inf),
    'sigma_y':  (0, np.inf),
    'sigma_p':  (0, np.inf),
    'sigma_r':  (0, np.inf),
}

# Create the generalized prior function
prior = make_prior_function(param_names, priors, bounds, verbose=True)
params = [5.0, 0.95, 1.5, 0.5, 0.04, 0.04, 0.04, 0.04]
logp = prior(params)
```

Output:
[Log Prior] g: logpdf(5.0000) = -1.740
[Log Prior] rho: logpdf(0.9500) = 2.021
[Log Prior] phi: logpdf(1.5000) = -0.803
[Log Prior] d: logpdf(0.5000) = 1.260
[Log Prior] sigmax: logpdf(0.0400) = 2.605
[Log Prior] sigma_y: logpdf(0.0400) = 2.605
[Log Prior] sigma_p: logpdf(0.0400) = 2.605
[Log Prior] sigma_r: logpdf(0.0400) = 2.605
[Total Log Prior] = 11.158 | Params = [5.0, 0.95, 1.5, 0.5, 0.04, 0.04, 0.04, 0.04]

3. rwm_kalman(...)

Purpose: Implements Random Walk Metropolis for Bayesian estimation of state-space model parameters using the Kalman filter for likelihood evaluation.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| y | np.ndarray | Observations (m × T) | None |
| x0 | np.ndarray | Initial parameter vector | None |
| lb | np.ndarray | Lower bounds | None |
| ub | np.ndarray | Upper bounds | None |
| param_names | list[str] | Parameter names | None |
| fixed_params | dict | Fixed parameters | None |
| update_state_space | Callable | Maps parameters → state-space matrices | None |
| n_iter | int | Total MCMC iterations | 10000 |
| burn_in | int | Number of burn-in iterations | 1000 |
| thin | int | Thinning factor | 1 |
| sigma | float or np.ndarray | Proposal std | None |
| base_std | float | Base scaling for sigma | 0.1 |
| seed | int | Random seed | 42 |
| verbose | bool | Print summary statistics | True |
| prior | Callable | Log-prior function | None |

Returns: dict containing:

  • result – dict with samples, log_posterior, acceptance_rate, and message
  • table – Summary table of estimates, std errors, log-likelihood, and method

Explanation:

  • Validates inputs and bounds.
  • Computes the proposal sigma via compute_proposal_sigma() if none is provided.
  • Defines the objective function via kalman_objective.
  • Calls rwm to generate posterior samples.
  • Processes samples into a results table via create_results_table.
  • With verbose=True, prints intermediate and final summaries.

Example Workflow:

```python
# Define a dict containing the model parameters
base_params = {
    'g': 1.00000000e+01,
    'beta': 8.97384125e-01,
    'kappa': 0.8,
    'rho': 9.61923424e-01,
    'phi': 1,
    'd': 8.64607398e-01,
    'sigmax': 7.52359617e-03,
    'sigma_y': 0.01,
    'sigma_p': 0.01,
    'sigma_r': 0.01
}

# Call make_state_space_updater, assuming the solver and builder
# functions for this model are already defined
update_state_space = update_ss.make_state_space_updater(
    base_params=base_params,
    solver=new_keynisian_model.solve_RE_model,
    build_R=R_builder,
    build_C=C_builder,
    derived_fn=derived_fn
)

# Compute the proposal standard deviations
# (initial_params, LB, and UB are assumed defined earlier)
base_std = [0.02, 0.02, 0.01, 0.01, 0.002, 0.002, 0.002, 0.002]
sigma = compute_proposal_sigma(len(initial_params), LB, UB, base_std=base_std)

# Prepare the prior function
prior_func = make_prior_function(param_names, priors, bounds, verbose=True)

# Run the Random Walk Metropolis sampler
result = rwm_kalman(
    y=y, x0=x0, lb=lb, ub=ub, param_names=param_names,
    fixed_params=fixed_params, update_state_space=update_state_space,
    n_iter=5000, burn_in=1000, thin=2, sigma=sigma,
    seed=42, verbose=True, prior=prior_func
)
print(result['result']['samples'].shape)
print(result['result']['acceptance_rate'])
print(result['table'])
```

Notes

  • Relies on Kalman filter for likelihood evaluation.
  • Prior flexibility – supports any scipy.stats distribution.
  • Proposal tuning – monitor acceptance_rate (ideal 0.2–0.4).
  • Thinning and burn-in reduce autocorrelation.
  • Applicable to general linear Gaussian state-space models.
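
On the proposal-tuning note above, a common heuristic is to rescale sigma between pilot runs until the acceptance rate lands in the 0.2–0.4 band. The helper below is a sketch of that idea, not a function provided by the module:

```python
import numpy as np

def tune_sigma(sigma, acceptance_rate, target=0.3):
    """Rescale proposal stds toward a target acceptance rate (heuristic sketch)."""
    # Shrink the steps when too few proposals are accepted, widen them when too many are
    factor = np.exp(acceptance_rate - target)
    return sigma * factor

sigma = np.array([0.01, 0.002])
print(tune_sigma(sigma, 0.1))   # rate too low  -> smaller steps
print(tune_sigma(sigma, 0.5))   # rate too high -> larger steps
```

In practice one would run a short pilot chain, read `acceptance_rate` from the rwm_kalman result, adjust sigma, and rerun.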