econometron.Models.neuralnets
The `NBEATS` class in the `econometron.Models.neuralnets` module implements the N-BEATS forecasting model; the `Trainer_ts` class trains and evaluates time series models.
Overview
The NBEATS class implements the Neural Basis Expansion Analysis for Interpretable Time Series Forecasting (N-BEATS) model, a deep learning architecture designed for univariate time series forecasting. It decomposes a time series into interpretable components (trend and seasonality) using stacks of blocks, each containing fully connected layers and basis functions (generic, trend, or seasonality). The model is highly flexible, allowing customization of block types, stack configurations, and normalization methods.
The Trainer_ts class is a utility designed to train, evaluate, and forecast with the NBEATS model. It handles data preparation, normalization, training loops, learning rate optimization, and performance visualization. It supports multiple normalization strategies (revin, local, global, or none) and provides robust metrics and plotting capabilities for model evaluation.
Together, these classes form a powerful framework for time series forecasting, combining the interpretability of N-BEATS with a comprehensive training and evaluation pipeline.
Reference: Boris Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. N-BEATS: Neural Basis Expansion Analysis for Interpretable Time Series Forecasting. ICLR 2020.
Class Definitions
NBEATS class
The NBEATS class constructs a neural network composed of multiple stacks, each containing blocks that model different components of the time series.
Initialization
from econometron.Models.neuralnets import NBEATS
model = NBEATS(
n=2, # backcast multiplier
h=12, # forecast horizon
n_s=2, # number of stacks
stack_configs=[
{'num_B_per_S': 3, 'Blocks': ['G'], 'Harmonics': [2], 'Degree': [2], 'Dropout': [0.1], 'Layer_size': [512], 'num_lay_per_B': [3], 'share_weights': False},
{'num_B_per_S': 3, 'Blocks': ['T'], 'Harmonics': [2], 'Degree': [2], 'Dropout': [0.1], 'Layer_size': [512], 'num_lay_per_B': [3], 'share_weights': False}
]
)
Attributes
| Attribute | Type | Description |
|---|---|---|
| n_features | int | Number of features (1 for univariate). |
| revin | RevIN | Reversible normalization layer. |
| backcast_length | int | Input window length (n * h). |
| forecast_length | int | Horizon length (h). |
| num_stacks | int | Number of stacks. |
| stacks | ModuleList | List of NBEATS_STACKs. |
Each stack_config dictionary can include:
- `num_B_per_S`: Number of blocks per stack.
- `Blocks`: List of block types (`'G'` for generic, `'T'` for trend with polynomial basis, `'T_P'` for trend with Chebyshev basis, `'S'` for seasonality).
- `Harmonics`: Number of Fourier harmonics for seasonality blocks.
- `Degree`: Polynomial degree for trend blocks.
- `Dropout`: Dropout rate for regularization.
- `Layer_size`: Size of fully connected layers.
- `num_lay_per_B`: Number of layers per block.
- `share_weights`: Whether to share weights across blocks of the same type.
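The trend and seasonality basis functions behind the `'T'` and `'S'` block types can be sketched as follows. This is an illustrative construction in the spirit of the N-BEATS paper, not the library's internal code; all variable names here are hypothetical.

```python
import numpy as np

# Illustrative basis matrices in the spirit of the N-BEATS paper;
# not the library's internal implementation.
horizon = 12
t = np.arange(horizon) / horizon  # normalized forecast time steps

# Trend basis ('T'): low-order polynomial in normalized time
degree = 2
trend_basis = np.stack([t ** p for p in range(degree + 1)])  # (degree+1, horizon)

# Seasonality basis ('S'): Fourier harmonics
harmonics = 4
seasonal_basis = np.concatenate(
    [np.stack([np.cos(2 * np.pi * k * t) for k in range(1, harmonics + 1)]),
     np.stack([np.sin(2 * np.pi * k * t) for k in range(1, harmonics + 1)])]
)  # (2*harmonics, horizon)

# A block's fully connected layers output expansion coefficients theta;
# the block forecast is the linear combination theta @ basis.
theta_trend = np.array([0.5, 1.0, -0.3])   # hypothetical coefficients
forecast_trend = theta_trend @ trend_basis  # shape (horizon,)
```

Because the forecast is a linear combination of a few smooth basis vectors, each block's output is directly interpretable as a trend or seasonal component.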
Trainer_ts class
The Trainer_ts class manages the training, evaluation, and forecasting process for the NBEATS model.
Initialization
from econometron.Models.neuralnets import Trainer_ts
trainer = Trainer_ts(
model=model,
normalization_type='revin', # Options: 'revin', 'local', 'global', None
device='cuda' if torch.cuda.is_available() else 'cpu',
Seed=42
)
| Parameter | Type | Description |
|---|---|---|
| model | nn.Module | PyTorch model (e.g., NBEATS). |
| normalization_type | str | 'revin', 'local', 'global', or None. |
| device | str | 'cuda' or 'cpu'. |
| Seed | int | Random seed. |
Workflow
NBEATS Workflow
Configure the model: initialize `NBEATS` with:
- Backcast length (`n * h`)
- Forecast horizon (`h`)
- Number of stacks (`n_s`)
- Stack configurations
Trainer_ts Workflow
Data Preparation
- Use `fit()` to preprocess input data: it validates inputs, handles NaN/infinite values, and creates the rolling windows (backcast windows and matching forecast targets) used for training.
- Normalization is applied according to the trainer's `normalization_type`:
  - `revin`: reversible instance normalization applied to each sample batch.
  - `local`: per-window normalization applied to each rolling window before training (performed inside `fit()` when `normalization_type='local'`).
  - `global`: dataset-level normalization computed once on the training set.
  - `None`: no normalization is applied.

Training
- Train the model with customizable optimizers, loss functions, and learning rate schedulers.
- Supports early stopping and gradient clipping.

Evaluation
- Use `summary()` to compute metrics: MAE, MSE, RMSE, MAPE, Directional Accuracy.
- Visualize training history and predictions.

Forecasting
- Use `predict()` for in-sample predictions.
- Use `forecast_out_of_sample()` for future predictions.
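The rolling-window construction and `'local'` normalization steps can be sketched as follows. This mirrors what `fit()` does conceptually, under the assumption that windows are built by sliding a backcast window of length `N * Horizon` over the series; `make_windows` is a hypothetical helper, not part of the library.

```python
import numpy as np

# Sketch of rolling-window construction and 'local' normalization.
def make_windows(series, n, horizon):
    """Slide a backcast window of length n*horizon over the series,
    pairing each window with the next `horizon` values as the target."""
    backcast_len = n * horizon
    X, y = [], []
    for i in range(len(series) - backcast_len - horizon + 1):
        X.append(series[i:i + backcast_len])
        y.append(series[i + backcast_len:i + backcast_len + horizon])
    return np.array(X), np.array(y)

series = np.sin(np.linspace(0, 20, 200))
X, y = make_windows(series, n=2, horizon=12)   # backcast length = 24

# 'local': each backcast window is standardized with its own statistics
mu = X.mean(axis=1, keepdims=True)
sigma = X.std(axis=1, keepdims=True) + 1e-8
X_local = (X - mu) / sigma
```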
Trainer_ts Methods
fit(Data, N, Horizon, max_epochs=100, optimizer='adam', lr=1e-4, batch_size=32, grad_clip=1, scheduler='plateau', loss_fun='mae', early_stopping=20, val_split=0.2, verbose=True)
Purpose: Trains the model on the provided data.
When called, fit() runs preprocessing steps: input validation, NaN/infinite handling, construction of rolling windows, and normalization according to the trainer's normalization_type.
Parameters:
- `Data`: Input data (`np.ndarray`, `pd.DataFrame`, or `pd.Series`)
- `N`: Backcast multiplier
- `Horizon`: Forecast horizon
- `max_epochs`: Maximum number of training epochs
- `optimizer`: Optimizer (`'adam'`, `'adamw'`, `'sgd'`)
- `lr`: Initial learning rate
- `batch_size`: Batch size for training
- `grad_clip`: Gradient clipping threshold
- `scheduler`: Learning rate scheduler (`'plateau'`, `'cosine'`, `'step'`)
- `loss_fun`: Loss function (`'mse'`, `'mae'`, `'huber'`, `'smooth_l1'`, `'mape'`)
- `early_stopping`: Patience for early stopping
- `val_split`: Validation split ratio
- `verbose`: Whether to log training progress
Returns:
- Training history dictionary
find_optimal_lr(data, back_coeff=1, Horizon=1, val_split=0.2, batch_size=32, start_lr=1e-7, end_lr=10, num_iter=100, restore_weights=True, optimizer='adam', loss_fun='mae', plot=True)
Purpose: Finds the optimal learning rate using a learning rate range test.
Parameters:
- Data and model parameters as in `fit()`
- `start_lr`, `end_lr`: Learning rate range
- `num_iter`: Number of iterations for the test
- `restore_weights`: Whether to restore original model weights
- `plot`: Whether to plot the learning rate vs. loss curve
Returns:
- Learning rates, losses, and suggested learning rate
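A learning rate range test of this kind sweeps the learning rate exponentially between `start_lr` and `end_lr`. The sketch below reproduces only the schedule, not the full test with training steps:

```python
import numpy as np

# Exponential lr sweep used by a learning-rate range test: num_iter points
# spaced uniformly on a log scale between start_lr and end_lr.
start_lr, end_lr, num_iter = 1e-7, 10.0, 100
lrs = start_lr * (end_lr / start_lr) ** (np.arange(num_iter) / (num_iter - 1))
# lrs[0] == start_lr and lrs[-1] == end_lr; the suggested lr is then typically
# taken near where the recorded (smoothed) loss decreases fastest.
```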
summary(input_shape=None, plot_training=True, detailed=True, val_split=0.2)
Purpose: Generates a model summary, including architecture, training history, and prediction metrics.
Parameters:
- `input_shape`: Input shape for architecture summary (optional)
- `plot_training`: Whether to plot training curves and predictions
- `detailed`: Whether to include detailed architecture summary
- `val_split`: Validation split ratio for evaluation
Returns:
- None (logs and plots results)
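The metrics reported by `summary()` can be computed as in the following sketch; the formulas are standard, though the library's exact conventions (e.g., for MAPE scaling and directional accuracy) may differ:

```python
import numpy as np

# Standard forecast-accuracy metrics; a conceptual sketch, not the
# library's implementation.
def metrics(y_true, y_pred):
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / y_true)) * 100  # assumes y_true has no zeros
    # Directional accuracy: share of steps where the predicted change
    # has the same sign as the actual change.
    da = np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred)))
    return {'MAE': mae, 'MSE': mse, 'RMSE': rmse, 'MAPE': mape, 'DA': da}

m = metrics(np.array([1.0, 2.0, 3.0, 2.0]), np.array([1.1, 1.9, 3.2, 2.1]))
```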
predict(test_data, plot_stacks=True, figsize=(20, 12))
Purpose: Generates in-sample predictions and stack contributions.
Parameters:
- `test_data`: Test data for prediction
- `plot_stacks`: Whether to plot stack contributions
- `figsize`: Figure size for plots
Returns:
- Predictions and stack contributions
forecast_out_of_sample(steps, plot=True, figsize=(15, 8))
Purpose: Generates out-of-sample forecasts for future time steps.
Parameters:
- `steps`: Number of future steps to forecast
- `plot`: Whether to plot the forecast
- `figsize`: Figure size for the plot
Returns:
- Forecasted values as a `numpy.ndarray`
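Forecasting more steps than the model's horizon is commonly done recursively: predict one horizon, append the predictions to the history, and slide the window. Whether `forecast_out_of_sample()` does exactly this is an assumption; the sketch below illustrates the pattern with a naive stand-in predictor rather than the NBEATS model:

```python
import numpy as np

# Recursive multi-step forecasting with a stand-in one-horizon predictor.
def forecast_recursive(history, predict_fn, backcast_len, horizon, steps):
    history = list(history)
    out = []
    while len(out) < steps:
        window = np.array(history[-backcast_len:])
        pred = predict_fn(window)        # one horizon of predictions
        out.extend(pred.tolist())
        history.extend(pred.tolist())    # feed predictions back as inputs
    return np.array(out[:steps])

naive = lambda w: np.full(12, w.mean())  # hypothetical one-horizon predictor
fc = forecast_recursive(np.ones(24), naive, backcast_len=24, horizon=12, steps=24)
```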
Example
- Initialize and Train an N-BEATS Model
import numpy as np
import pandas as pd
from econometron.Models.neuralnets import NBEATS, Trainer_ts
# Generate sample data
np.random.seed(42)
t = np.linspace(0, 10, 1000)
data = np.sin(2 * np.pi * t) + 0.5 * np.random.randn(1000)
df = pd.Series(data, name='sine_wave')
# Initialize NBEATS model
model = NBEATS(
n=2, # backcast multiplier
h=12, # forecast horizon
n_s=2, # number of stacks
stack_configs=[
{'num_B_per_S': 3, 'Blocks': ['S'], 'Harmonics': [4], 'Degree': [2], 'Dropout': [0.1], 'Layer_size': [512], 'num_lay_per_B': [3], 'share_weights': False},
{'num_B_per_S': 3, 'Blocks': ['T'], 'Harmonics': [2], 'Degree': [2], 'Dropout': [0.1], 'Layer_size': [512], 'num_lay_per_B': [3], 'share_weights': False}
]
)
# Initialize trainer
trainer = Trainer_ts(model, normalization_type='revin', device='cpu', Seed=42)
# Train the model
trainer.fit(
Data=df,
N=2,
Horizon=12,
max_epochs=50,
optimizer='adam',
lr=1e-3,
batch_size=32,
grad_clip=1.0,
scheduler='plateau',
loss_fun='mae',
early_stopping=10,
val_split=0.2,
verbose=True
)
- Evaluate the Model
# Generate summary with metrics and plots
trainer.summary(plot_training=True, detailed=True)
- Make Predictions
# In-sample predictions
predictions, stack_contributions = trainer.predict(df, plot_stacks=True)
# Out-of-sample forecast
forecast = trainer.forecast_out_of_sample(steps=24, plot=True)
Notes
Normalization: The `Trainer_ts` class supports:
- Revin (reversible instance normalization)
- Local (per-window normalization)
- Global (dataset-wide normalization)
- None (no normalization)
Revin is particularly effective for handling non-stationary time series.
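Conceptually, RevIN normalizes each instance with its own statistics before the model and de-normalizes the forecast with the same statistics afterwards, which is why it copes well with shifting levels and scales. A minimal sketch without RevIN's learnable affine parameters:

```python
import numpy as np

# Conceptual RevIN sketch: normalize per instance, de-normalize the output.
# The real RevIN layer also carries learnable affine parameters.
x = np.array([10.0, 12.0, 11.0, 13.0])  # one input window
mu, sigma = x.mean(), x.std() + 1e-8

x_norm = (x - mu) / sigma                # what the model sees
model_out = x_norm[-2:]                  # stand-in for a model forecast
forecast = model_out * sigma + mu        # back on the original data scale
```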
Interpretability: The `predict()` method provides stack contributions, allowing users to visualize the impact of trend and seasonality components.
Flexibility: The N-BEATS model supports various block types (`G`, `T`, `T_P`, `S`) and customizable stack configurations for tailored forecasting.
Robustness: The `Trainer_ts` class includes:
- Data validation
- NaN/infinite value handling
- Logging for robust training and evaluation
