econometron.Models.neuralnets

  • NBEATS class in the econometron.Models.neuralnets module
  • Trainer_ts class for training and evaluating time series models

Overview

The NBEATS class implements the Neural Basis Expansion Analysis for Interpretable Time Series Forecasting (N-BEATS) model, a deep learning architecture designed for univariate time series forecasting. It decomposes a time series into interpretable components (trend and seasonality) using stacks of blocks, each containing fully connected layers and basis functions (generic, trend, or seasonality). The model is highly flexible, allowing customization of block types, stack configurations, and normalization methods.

The Trainer_ts class is a utility designed to train, evaluate, and forecast with the NBEATS model. It handles data preparation, normalization, training loops, learning rate optimization, and performance visualization. It supports multiple normalization strategies (revin, local, global, or none) and provides robust metrics and plotting capabilities for model evaluation.

Together, these classes form a powerful framework for time series forecasting, combining the interpretability of N-BEATS with a comprehensive training and evaluation pipeline.

Reference: Boris Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. N-BEATS: Neural Basis Expansion Analysis for Interpretable Time Series Forecasting. ICLR 2020.

Class Definitions

NBEATS class

The NBEATS class constructs a neural network composed of multiple stacks, each containing blocks that model different components of the time series.

Initialization

python
from econometron.Models.neuralnets import NBEATS

model = NBEATS(
    n=2,  # backcast multiplier
    h=12,  # forecast horizon
    n_s=2,  # number of stacks
    stack_configs=[
        {'num_B_per_S': 3, 'Blocks': ['G'], 'Harmonics': [2], 'Degree': [2], 'Dropout': [0.1], 'Layer_size': [512], 'num_lay_per_B': [3], 'share_weights': False},
        {'num_B_per_S': 3, 'Blocks': ['T'], 'Harmonics': [2], 'Degree': [2], 'Dropout': [0.1], 'Layer_size': [512], 'num_lay_per_B': [3], 'share_weights': False}
    ]
)

Parameters

| Attribute | Type | Description |
| --- | --- | --- |
| n_features | int | Number of features (1 for univariate). |
| revin | RevIN | Reversible normalization layer. |
| backcast_length | int | Input window length (n * h). |
| forecast_length | int | Horizon length (h). |
| num_stacks | int | Number of stacks. |
| stacks | ModuleList | List of NBEATS_STACK modules. |
Each stack_config dictionary can include:
  • num_B_per_S: Number of blocks per stack.
  • Blocks: List of block types ('G' for generic, 'T' for trend with polynomial basis, 'T_P' for trend with Chebyshev basis, 'S' for seasonality).
  • Harmonics: Number of Fourier harmonics for seasonality blocks.
  • Degree: Polynomial degree for trend blocks.
  • Dropout: Dropout rate for regularization.
  • Layer_size: Size of fully connected layers.
  • num_lay_per_B: Number of layers per block.
  • share_weights: Whether to share weights across blocks of the same type.
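To make the basis functions concrete: trend blocks project their learned coefficients onto a polynomial basis, and seasonality blocks onto a Fourier basis. A minimal numpy sketch of the two projections (illustrative only; the function names here are not part of the library):

```python
import numpy as np

def trend_basis(theta, t):
    """Polynomial basis: output[i] = sum_d theta[d] * t[i]**d."""
    degree = len(theta) - 1
    T = np.vander(t, degree + 1, increasing=True)   # shape (len(t), degree+1)
    return T @ theta

def seasonality_basis(theta, t, harmonics):
    """Fourier basis: cos/sin terms up to `harmonics` frequencies."""
    cols = [np.cos(2 * np.pi * k * t) for k in range(1, harmonics + 1)]
    cols += [np.sin(2 * np.pi * k * t) for k in range(1, harmonics + 1)]
    S = np.stack(cols, axis=1)                      # shape (len(t), 2*harmonics)
    return S @ theta

t = np.linspace(0, 1, 12)                           # normalized forecast time axis
trend = trend_basis(np.array([1.0, 0.5, -0.2]), t)       # degree-2 trend
season = seasonality_basis(np.ones(4), t, harmonics=2)   # 2 harmonics
print(trend.shape, season.shape)  # (12,) (12,)
```

Inside a block, the fully connected layers produce the coefficient vector theta; the basis matrices themselves are fixed, which is what makes the trend and seasonality outputs interpretable.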

Trainer_ts class

The Trainer_ts class manages the training, evaluation, and forecasting process for the NBEATS model.

Initialization

python
import torch
from econometron.Models.neuralnets import Trainer_ts

trainer = Trainer_ts(
    model=model,
    normalization_type='revin',  # Options: 'revin', 'local', 'global', None
    device='cuda' if torch.cuda.is_available() else 'cpu',
    Seed=42
)

| Parameter | Type | Description |
| --- | --- | --- |
| model | nn.Module | PyTorch model (e.g., NBEATS). |
| normalization_type | str | 'revin', 'local', 'global', or None. |
| device | str | 'cuda' or 'cpu'. |
| Seed | int | Random seed. |

Workflow

NBEATS Workflow

  1. Configure the Model: initialize NBEATS with:

    • Backcast length (n * h)
    • Forecast horizon (h)
    • Number of stacks (n_s)
    • Stack configurations

Trainer_ts Workflow

  1. Data Preparation

    • Use fit() to preprocess input data: it validates inputs, handles NaN/infinite values, and creates the rolling windows used for training.
    • Rolling windows pair each backcast window with its matching forecast target.
    • Normalization is applied according to the trainer's normalization_type:
      • revin: reversible instance normalization applied to each sample batch.
      • local: per-window normalization applied to each rolling window before training (performed inside fit() when normalization_type='local').
      • global: dataset-level normalization computed once on the training set.
      • None: no normalization is applied.
  2. Training

    • Train the model with customizable optimizers, loss functions, and learning rate schedulers.
    • Early stopping and gradient clipping are supported.
  3. Evaluation

    • Use summary() to compute metrics:

      • MAE, MSE, RMSE, MAPE, Directional Accuracy
    • Visualize training history and predictions.

  4. Forecasting

    • Use predict() for in-sample predictions.
    • Use forecast_out_of_sample() for future predictions.
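The rolling-window step in Data Preparation can be sketched as follows (an illustrative stand-in, not the library's internal code):

```python
import numpy as np

def make_windows(series, n, horizon):
    """Slice a 1-D series into (backcast, forecast) pairs.
    Backcast length is n * horizon, matching the NBEATS convention."""
    backcast_len = n * horizon
    X, y = [], []
    for start in range(len(series) - backcast_len - horizon + 1):
        X.append(series[start:start + backcast_len])
        y.append(series[start + backcast_len:start + backcast_len + horizon])
    return np.array(X), np.array(y)

series = np.arange(100, dtype=float)
X, y = make_windows(series, n=2, horizon=12)   # backcast length 24
print(X.shape, y.shape)  # (65, 24) (65, 12)
```

Each row of X is one model input and the matching row of y is its target; normalization (revin, local, or global) is then applied on top of these windows.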

Trainer_ts Methods

fit(Data, N, Horizon, max_epochs=100, optimizer='adam', lr=1e-4, batch_size=32, grad_clip=1, scheduler='plateau', loss_fun='mae', early_stopping=20, val_split=0.2, verbose=True)

Purpose: Trains the model on the provided data.

When called, fit() runs preprocessing steps: input validation, NaN/infinite handling, construction of rolling windows, and normalization according to the trainer's normalization_type.

Parameters:

  • Data: Input data (np.ndarray, pd.DataFrame, or pd.Series)
  • N: Backcast multiplier
  • Horizon: Forecast horizon
  • max_epochs: Maximum number of training epochs
  • optimizer: Optimizer ('adam', 'adamw', 'sgd')
  • lr: Initial learning rate
  • batch_size: Batch size for training
  • grad_clip: Gradient clipping threshold
  • scheduler: Learning rate scheduler ('plateau', 'cosine', 'step')
  • loss_fun: Loss function ('mse', 'mae', 'huber', 'smooth_l1', 'mape')
  • early_stopping: Patience for early stopping
  • val_split: Validation split ratio
  • verbose: Whether to log training progress

Returns:

  • Training history dictionary

find_optimal_lr(data, back_coeff=1, Horizon=1, val_split=0.2, batch_size=32, start_lr=1e-7, end_lr=10, num_iter=100, restore_weights=True, optimizer='adam', loss_fun='mae', plot=True)

Purpose: Finds the optimal learning rate using a learning rate range test.

Parameters:

  • data, back_coeff (backcast multiplier), Horizon, val_split, batch_size, optimizer, and loss_fun behave as in fit()
  • start_lr, end_lr: Learning rate range
  • num_iter: Number of iterations for the test
  • restore_weights: Whether to restore original model weights
  • plot: Whether to plot the learning rate vs. loss curve

Returns:

  • Learning rates, losses, and suggested learning rate
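The idea behind the range test can be illustrated with a tiny hand-rolled example: the learning rate grows exponentially from start_lr to end_lr, one optimization step is taken per iteration, and the loss curve is recorded (a sketch of the technique only; the library runs it on the actual NBEATS model and data batches):

```python
import numpy as np

# Tiny one-parameter regression standing in for the model
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 3.0 * x + rng.normal(scale=0.1, size=200)   # true slope = 3

start_lr, end_lr, num_iter = 1e-4, 0.5, 50
lrs = np.geomspace(start_lr, end_lr, num_iter)  # exponentially growing schedule

w, losses = 0.0, []
for lr in lrs:
    pred = w * x
    losses.append(np.mean((pred - y) ** 2))     # MSE at the current weight
    grad = 2 * np.mean((pred - y) * x)          # d(MSE)/dw
    w -= lr * grad                              # one SGD step at this lr

# A common heuristic: suggest the lr where the loss was falling fastest
suggested = lrs[int(np.argmin(np.gradient(losses)))]
```

Plotting losses against lrs (as plot=True does) shows a flat region at tiny rates, a descent, and eventually divergence once the rate is too large; the suggested rate sits on the steep descent.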

summary(input_shape=None, plot_training=True, detailed=True, val_split=0.2)

Purpose: Generates a model summary, including architecture, training history, and prediction metrics.

Parameters:

  • input_shape: Input shape for architecture summary (optional)
  • plot_training: Whether to plot training curves and predictions
  • detailed: Whether to include detailed architecture summary
  • val_split: Validation split ratio for evaluation

Returns:

  • None (logs and plots results)
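For reference, the metrics reported by summary() can be written out explicitly (an illustrative implementation; the library's exact formulas, e.g. its MAPE handling of zeros, may differ):

```python
import numpy as np

def metrics(y_true, y_pred):
    """MAE, MSE, RMSE, MAPE, and Directional Accuracy."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / y_true)) * 100   # assumes y_true has no zeros
    # Directional accuracy: fraction of steps where the predicted
    # change has the same sign as the actual change.
    da = np.mean(np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true)))
    return {'MAE': mae, 'MSE': mse, 'RMSE': rmse, 'MAPE': mape, 'DA': da}

m = metrics(np.array([1.0, 2.0, 3.0, 4.0]), np.array([1.1, 2.1, 2.9, 4.2]))
```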

predict(test_data, plot_stacks=True, figsize=(20, 12))

Purpose: Generates in-sample predictions and stack contributions.

Parameters:

  • test_data: Test data for prediction
  • plot_stacks: Whether to plot stack contributions
  • figsize: Figure size for plots

Returns:

  • Predictions and stack contributions

forecast_out_of_sample(steps, plot=True, figsize=(15, 8))

Purpose: Generates out-of-sample forecasts for future time steps.

Parameters:

  • steps: Number of future steps to forecast
  • plot: Whether to plot the forecast
  • figsize: Figure size for the plot

Returns:

  • Forecasted values as a numpy.ndarray
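When steps exceeds the model's horizon, out-of-sample forecasting is typically done recursively: predict h steps, append them to the input window, and repeat. A sketch with a stand-in model (the library's internals may differ):

```python
import numpy as np

def recursive_forecast(model, history, backcast_len, steps):
    """Predict a block of future values, append it to the window,
    and repeat until `steps` values have been produced."""
    window = list(history)
    out = []
    while len(out) < steps:
        pred = model(np.array(window[-backcast_len:]))
        out.extend(pred.tolist())
        window.extend(pred.tolist())
    return np.array(out[:steps])

# Stand-in "model": a naive seasonal forecaster repeating the last 12 values
naive = lambda w: w[-12:]
fc = recursive_forecast(naive, np.arange(48, dtype=float),
                        backcast_len=24, steps=24)
print(fc.shape)  # (24,)
```

Note that each recursion feeds the model its own predictions, so errors can compound as steps grows; long horizons warrant extra caution.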

Example

  1. Initialize and Train an N-BEATS Model
python
import numpy as np
import pandas as pd
from econometron.Models.neuralnets import NBEATS, Trainer_ts

# Generate sample data
np.random.seed(42)
t = np.linspace(0, 10, 1000)
data = np.sin(2 * np.pi * t) + 0.5 * np.random.randn(1000)
df = pd.Series(data, name='sine_wave')

# Initialize NBEATS model
model = NBEATS(
    n=2,  # backcast multiplier
    h=12,  # forecast horizon
    n_s=2,  # number of stacks
    stack_configs=[
        {'num_B_per_S': 3, 'Blocks': ['S'], 'Harmonics': [4], 'Degree': [2], 'Dropout': [0.1], 'Layer_size': [512], 'num_lay_per_B': [3], 'share_weights': False},
        {'num_B_per_S': 3, 'Blocks': ['T'], 'Harmonics': [2], 'Degree': [2], 'Dropout': [0.1], 'Layer_size': [512], 'num_lay_per_B': [3], 'share_weights': False}
    ]
)

# Initialize trainer
trainer = Trainer_ts(model, normalization_type='revin', device='cpu', Seed=42)

# Train the model
trainer.fit(
    Data=df,
    N=2,
    Horizon=12,
    max_epochs=50,
    optimizer='adam',
    lr=1e-3,
    batch_size=32,
    grad_clip=1.0,
    scheduler='plateau',
    loss_fun='mae',
    early_stopping=10,
    val_split=0.2,
    verbose=True
)
  2. Evaluate the Model
python
# Generate summary with metrics and plots
trainer.summary(plot_training=True, detailed=True)
  3. Make Predictions
python
# In-sample predictions
predictions, stack_contributions = trainer.predict(df, plot_stacks=True)

# Out-of-sample forecast
forecast = trainer.forecast_out_of_sample(steps=24, plot=True)

Notes

  • Normalization: The Trainer_ts class supports:

    • Revin (reversible instance normalization)
    • Local (per-window normalization)
    • Global (dataset-wide normalization)
    • None (no normalization)

    Revin is particularly effective for handling non-stationary time series.

  • Interpretability: The predict() method provides stack contributions, allowing users to visualize the impact of trend and seasonality components.

  • Flexibility: The N-BEATS model supports various block types (G, T, T_P, S) and customizable stack configurations for tailored forecasting.

  • Robustness: The Trainer_ts class includes:

    • Data validation
    • NaN/infinite value handling
    • Logging for robust training and evaluation
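
The RevIN strategy mentioned above boils down to normalizing each window by its own statistics and reversing the transform on the model's output. A stripped-down numpy illustration (the actual RevIN layer also learns affine scale and shift parameters):

```python
import numpy as np

x = np.array([100.0, 102.0, 105.0, 103.0])   # one input window
mu, sigma = x.mean(), x.std() + 1e-8         # per-instance statistics

x_norm = (x - mu) / sigma                    # normalize before the model
y_norm = x_norm[-2:]                         # stand-in "forecast" in normalized space
y = y_norm * sigma + mu                      # de-normalize with the same statistics
```

Because each window carries its own mean and scale, shifts in the series' level between training and inference are absorbed by the normalization, which is why RevIN helps with non-stationary data.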