---
title: "N-BEATS: Neural Basis Expansion Analysis"
keywords: fastai
sidebar: home_sidebar
summary: "API details."
description: "API details."
nb_path: "nbs/models_nbeats__nbeats.ipynb"
---
{% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}

class IdentityBasis[source]

IdentityBasis(backcast_size:int, forecast_size:int) :: Module

Identity basis used by the generic N-BEATS configuration: the block's expansion coefficients theta are used as-is, with the first backcast_size entries forming the backcast and the last forecast_size entries forming the forecast. No structural prior is imposed on the outputs.
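
A minimal sketch of the idea (illustrative shapes, not the library's exact code):

import torch

backcast_size, forecast_size = 24, 12
theta = torch.randn(16, backcast_size + forecast_size)  # one theta vector per window
backcast = theta[:, :backcast_size]    # first coefficients reconstruct the input window
forecast = theta[:, -forecast_size:]   # last coefficients form the prediction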

{% endraw %} {% raw %}

class TrendBasis[source]

TrendBasis(degree_of_polynomial:int, backcast_size:int, forecast_size:int) :: Module

Polynomial trend basis used by the interpretable N-BEATS configuration: a fixed, non-trainable basis whose rows are powers t^i, i = 0, ..., degree_of_polynomial, of normalized time over the backcast and forecast windows. The theta coefficients weight these rows, constraining the block's outputs to smooth, slowly varying trends.
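
A sketch of the polynomial basis construction (simplified relative to the library code):

import numpy as np

degree, forecast_size = 2, 12
t = np.arange(forecast_size) / forecast_size                   # normalized time in [0, 1)
forecast_basis = np.stack([t**i for i in range(degree + 1)])   # rows: 1, t, t^2
# given theta of shape (batch, degree + 1):
# forecast = theta @ forecast_basis  -> smooth polynomial trends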

{% endraw %} {% raw %}

class SeasonalityBasis[source]

SeasonalityBasis(harmonics:int, backcast_size:int, forecast_size:int) :: Module

Harmonic seasonality basis used by the interpretable N-BEATS configuration: a fixed basis of cosine and sine waves whose frequencies are controlled by harmonics, evaluated over the backcast and forecast windows. The theta coefficients weight these rows, constraining the block's outputs to periodic functions.
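
A sketch of the harmonic basis (the frequency grid here is simplified; the paper spaces frequencies relative to the horizon):

import numpy as np

harmonics, forecast_size = 2, 24
t = np.arange(forecast_size) / forecast_size
freqs = np.arange(1, harmonics + 1)                             # simplified frequency grid
grid = 2 * np.pi * np.outer(freqs, t)
forecast_basis = np.concatenate([np.cos(grid), np.sin(grid)])   # (2*harmonics, horizon)
# theta of shape (batch, 2*harmonics) weights these periodic rows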

{% endraw %} {% raw %}
{% endraw %} {% raw %}

class ExogenousBasisInterpretable[source]

ExogenousBasisInterpretable() :: Module

Exogenous basis that uses the exogenous covariates themselves as basis vectors: theta assigns one coefficient per covariate, so each covariate's contribution to the backcast and forecast can be read directly from the fitted weights.
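
A sketch of how theta weights the covariates (illustrative shapes):

import torch

batch, n_x, horizon = 16, 3, 24
outsample_x_t = torch.randn(batch, n_x, horizon)  # covariates over the forecast horizon
theta = torch.randn(batch, n_x)                   # one readable weight per covariate
forecast = torch.einsum('bp,bpt->bt', theta, outsample_x_t)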

{% endraw %} {% raw %}

class ExogenousBasisWavenet[source]

ExogenousBasisWavenet(out_features, in_features, num_levels=4, kernel_size=3, dropout_prob=0) :: Module

Exogenous basis that first encodes the covariates with a WaveNet-style stack of dilated 1-D convolutions (num_levels levels with geometrically increasing dilation, kernel size kernel_size, dropout dropout_prob) and then combines the encoded features with the theta coefficients. See the encoder sketch after ExogenousBasisTCN below.

{% endraw %} {% raw %}

class ExogenousBasisTCN[source]

ExogenousBasisTCN(out_features, in_features, num_levels=4, kernel_size=2, dropout_prob=0) :: Module

Exogenous basis that encodes the covariates with a temporal convolutional network (TCN) of num_levels levels with kernel size kernel_size and dropout dropout_prob, then combines the encoded features with the theta coefficients.
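
A sketch of the dilated-convolution encoder idea shared by the Wavenet and TCN bases (illustrative only; the real implementations trim the extra right padding so the convolutions stay causal):

import torch.nn as nn

num_levels, kernel_size, in_ch, hidden = 4, 2, 10, 32
layers = []
for level in range(num_levels):
    dilation = 2 ** level                 # receptive field doubles per level
    layers += [nn.Conv1d(in_ch, hidden, kernel_size,
                         padding=(kernel_size - 1) * dilation, dilation=dilation),
               nn.ReLU()]
    in_ch = hidden
encoder = nn.Sequential(*layers)          # encodes (batch, n_x, time) covariates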

{% endraw %} {% raw %}
{% endraw %} {% raw %}

init_weights[source]

init_weights(module, initialization)
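
init_weights is designed to be mapped over a network with nn.Module.apply; a usage sketch (the small MLP is hypothetical, and 'lecun_normal' matches the NBEATS default below):

from functools import partial
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(168, 512), nn.ReLU(), nn.Linear(512, 24))
# .apply() calls init_weights once per submodule
mlp.apply(partial(init_weights, initialization='lecun_normal'))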

{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %}

N-BEATS model wrapper

{% raw %}
{% endraw %} {% raw %}

class NBEATS[source]

NBEATS(n_time_in:int, n_time_out:int, n_x:int=0, n_s:int=0, shared_weights:bool=False, activation:str='ReLU', initialization:str='lecun_normal', stack_types:List[str]=['identity', 'identity', 'identity'], n_blocks:List[int]=[1, 1, 1], n_layers:List[int]=[2, 2, 2, 2, 2, 2, 2, 2, 2], n_mlp_units:List[List[int]]=[[512, 512], [512, 512], [512, 512]], n_harmonics:int=5, n_polynomials:int=5, n_x_hidden:List[int]=[0], n_s_hidden:List[int]=[0], batch_normalization:bool=False, dropout_prob_theta:float=0.0, learning_rate:float=0.001, lr_decay:float=0.5, lr_decay_step_size:int=5, weight_decay:float=0.0, loss_train:str='MAE', loss_hypar:float=0.0, loss_valid:str='MAE', frequency:str='D', random_seed:int=1) :: LightningModule

N-BEATS model: a deep stack of fully connected blocks with doubly residual backcast/forecast connections and the basis expansions defined above (identity, trend, seasonality, and exogenous), implemented as a PyTorch Lightning LightningModule.
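
A minimal instantiation sketch: an hourly model mapping one week of history to a one-day horizon, leaving every other argument at its default:

model = NBEATS(n_time_in=24*7, n_time_out=24, frequency='H')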

{% endraw %} {% raw %}
{% endraw %} {% raw %}

NBEATS.forecast[source]

NBEATS.forecast(Y_df:DataFrame, X_df:DataFrame=None, S_df:DataFrame=None, batch_size:int=1, trainer:Trainer=None)

Method for forecasting self.n_time_out periods after the last timestamp of Y_df.

Parameters

Y_df: pd.DataFrame
    Dataframe with target time-series data; requires 'unique_id', 'ds' and 'y' columns.
X_df: pd.DataFrame
    Dataframe with exogenous time-series data; requires 'unique_id' and 'ds' columns. Its 'unique_id' and 'ds' values must match Y_df plus the forecasting horizon.
S_df: pd.DataFrame
    Dataframe with static data; requires a 'unique_id' column.
batch_size: int
    Batch size for forecasting.
trainer: pl.Trainer
    Trainer object used for prediction.

Returns

forecast_df: pd.DataFrame
    Dataframe with forecasts.
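
A usage sketch (a full walkthrough appears in the example below):

# predict the next n_time_out periods for every series in Y_df
forecast_df = model.forecast(Y_df=Y_df, X_df=X_df, S_df=S_df, batch_size=2)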

{% endraw %} {% raw %}
{% endraw %} {% raw %}

suggested_space[source]

suggested_space(n_time_out:int, n_series:int, n_x:int, n_s:int, frequency:str)

Suggested hyperparameter search space for tuning, to be used with the hyperopt library.

Parameters

n_time_out: int
    Forecasting horizon.
n_series: int
    Number of time series.
n_x: int
    Number of exogenous variables.
n_s: int
    Number of static variables.
frequency: str
    Frequency of the time series.

Returns

space: Dict
    Dictionary with the search space for the hyperopt library.
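
A usage sketch with hyperopt (the objective function is user-supplied and hypothetical here):

from hyperopt import fmin, tpe

space = suggested_space(n_time_out=24, n_series=2, n_x=10, n_s=2, frequency='H')
# `objective` maps a sampled configuration to a validation loss
# best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=20)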

{% endraw %} {% raw %}
{% endraw %}

N-BEATS Usage Example

Load Data

{% raw %}
import pandas as pd
from neuralforecast.data.datasets.epf import EPF
from neuralforecast.data.tsloader import TimeSeriesLoader

import pylab as plt
from pylab import rcParams
plt.style.use('seaborn-whitegrid')
plt.rcParams['font.family'] = 'serif'

FONTSIZE = 19

# Load and plot data
Y_df, X_df, S_df = EPF.load_groups(directory='./data', groups=['NP','FR'])

fig = plt.figure(figsize=(15, 6))
plt.plot(Y_df[Y_df['unique_id']=='NP'].ds, Y_df[Y_df['unique_id']=='NP'].y.values, color='#628793', linewidth=0.4)
plt.ylabel('Price [EUR/MWh]', fontsize=FONTSIZE)
plt.xlabel('Date', fontsize=FONTSIZE)
plt.show()
{% endraw %}

Declare Model and Data Parameters

{% raw %}
mc = {}
mc['model'] = 'nbeats'
mc['mode'] = 'simple'
mc['activation'] = 'ReLU'

mc['n_time_in'] = 24*7
mc['n_time_out'] = 24
mc['n_x_hidden'] = 8
mc['n_s_hidden'] = 0

mc['stack_types'] = 2*['identity']
mc['constant_n_blocks'] = 1
mc['constant_n_layers'] = 2
mc['constant_n_mlp_units'] = 256

mc['shared_weights'] = False
mc['n_harmonics'] = 0
mc['n_polynomials'] = 0

# Optimization and regularization parameters
mc['initialization'] = 'lecun_normal'
mc['learning_rate'] = 0.001
mc['batch_size'] = 1
mc['n_windows'] = 32
mc['lr_decay'] = 0.5
mc['lr_decay_step_size'] = 33
mc['max_epochs'] = 1
mc['max_steps'] = None
mc['early_stop_patience'] = 20
mc['eval_freq'] = 500
mc['batch_normalization'] = False
mc['dropout_prob_theta'] = 0
mc['dropout_prob_exogenous'] = 0
mc['l1_theta'] = 0
mc['weight_decay'] = 0
mc['loss_train'] = 'MAE'
mc['loss_hypar'] = 0.5
mc['loss_valid'] = mc['loss_train']
mc['random_seed'] = 1

# Data Parameters
mc['idx_to_sample_freq'] = 1
mc['val_idx_to_sample_freq'] = 24 * 7
mc['n_val_weeks'] = 52
mc['normalizer_y'] = None
mc['normalizer_x'] = 'median'
mc['complete_windows'] = False
mc['frequency'] = 'H'

print(65*'=')
print(pd.Series(mc))
print(65*'=')

mc['n_mlp_units'] = len(mc['stack_types']) * [mc['constant_n_layers'] * [int(mc['constant_n_mlp_units'])]]
mc['n_blocks'] = len(mc['stack_types']) * [mc['constant_n_blocks']]
mc['n_layers'] = len(mc['stack_types']) * [mc['constant_n_layers']]
=================================================================
model                                   nbeats
mode                                    simple
activation                                ReLU
n_time_in                                  168
n_time_out                                  24
n_x_hidden                                   8
n_s_hidden                                   0
stack_types               [identity, identity]
constant_n_blocks                            1
constant_n_layers                            2
constant_n_mlp_units                       256
shared_weights                           False
n_harmonics                                  0
n_polynomials                                0
initialization                    lecun_normal
learning_rate                            0.001
batch_size                                   1
n_windows                                   32
lr_decay                                   0.5
lr_decay_step_size                          33
max_epochs                                   1
max_steps                                 None
early_stop_patience                         20
eval_freq                                  500
batch_normalization                      False
dropout_prob_theta                           0
dropout_prob_exogenous                       0
l1_theta                                     0
weight_decay                                 0
loss_train                                 MAE
loss_hypar                                 0.5
loss_valid                                 MAE
random_seed                                  1
idx_to_sample_freq                           1
val_idx_to_sample_freq                     168
n_val_weeks                                 52
normalizer_y                              None
normalizer_x                            median
complete_windows                         False
frequency                                    H
dtype: object
=================================================================
{% endraw %}

Instantiate Loaders and Model

{% raw %}
from neuralforecast.experiments.utils import create_datasets

train_dataset, val_dataset, test_dataset, scaler_y = create_datasets(mc=mc,
                                                                     S_df=S_df, Y_df=Y_df, X_df=X_df,
                                                                     f_cols=['Exogenous1', 'Exogenous2'],
                                                                     ds_in_val=294*24,
                                                                     ds_in_test=728*24)

train_loader = TimeSeriesLoader(dataset=train_dataset,
                                batch_size=int(mc['batch_size']),
                                n_windows=mc['n_windows'],
                                shuffle=True)

val_loader = TimeSeriesLoader(dataset=val_dataset,
                              batch_size=int(mc['batch_size']),
                              shuffle=False)

test_loader = TimeSeriesLoader(dataset=test_dataset,
                               batch_size=int(mc['batch_size']),
                               shuffle=False)

mc['n_x'], mc['n_s'] = train_dataset.get_n_variables()
INFO:root:Train Validation splits

INFO:root:                              ds                    
                             min                 max
unique_id sample_mask                               
FR        0           2014-03-16 2016-12-31 23:00:00
          1           2011-01-09 2014-03-15 23:00:00
NP        0           2016-03-08 2018-12-24 23:00:00
          1           2013-01-01 2016-03-07 23:00:00
INFO:root:
Total data 			104832 time stamps 
Available percentage=100.0, 	104832 time stamps 
Insample  percentage=53.21, 	55776 time stamps 
Outsample percentage=46.79, 	49056 time stamps 

/Users/fedex/projects/neuralforecast/neuralforecast/data/tsdataset.py:208: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only
  X.drop(['unique_id', 'ds'], 1, inplace=True)
INFO:root:Train Validation splits

INFO:root:                              ds                    
                             min                 max
unique_id sample_mask                               
FR        0           2011-01-09 2016-12-31 23:00:00
          1           2014-03-16 2015-01-03 23:00:00
NP        0           2013-01-01 2018-12-24 23:00:00
          1           2016-03-08 2016-12-26 23:00:00
INFO:root:
Total data 			104832 time stamps 
Available percentage=100.0, 	104832 time stamps 
Insample  percentage=13.46, 	14112 time stamps 
Outsample percentage=86.54, 	90720 time stamps 

/Users/fedex/projects/neuralforecast/neuralforecast/data/tsdataset.py:208: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only
  X.drop(['unique_id', 'ds'], 1, inplace=True)
INFO:root:Train Validation splits

INFO:root:                              ds                    
                             min                 max
unique_id sample_mask                               
FR        0           2011-01-09 2015-01-03 23:00:00
          1           2015-01-04 2016-12-31 23:00:00
NP        0           2013-01-01 2016-12-26 23:00:00
          1           2016-12-27 2018-12-24 23:00:00
INFO:root:
Total data 			104832 time stamps 
Available percentage=100.0, 	104832 time stamps 
Insample  percentage=33.33, 	34944 time stamps 
Outsample percentage=66.67, 	69888 time stamps 

/Users/fedex/projects/neuralforecast/neuralforecast/data/tsdataset.py:208: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only
  X.drop(['unique_id', 'ds'], 1, inplace=True)
{% endraw %} {% raw %}
model = NBEATS(n_time_in=int(mc['n_time_in']),
               n_time_out=int(mc['n_time_out']),
               n_x=mc['n_x'],
               n_s=mc['n_s'],
               n_s_hidden=int(mc['n_s_hidden']),
               n_x_hidden=int(mc['n_x_hidden']),
               shared_weights=mc['shared_weights'],
               initialization=mc['initialization'],
               activation=mc['activation'],
               stack_types=mc['stack_types'],
               n_blocks=mc['n_blocks'],
               n_layers=mc['n_layers'],
               n_mlp_units=mc['n_mlp_units'],
               n_harmonics=int(mc['n_harmonics']),
               n_polynomials=int(mc['n_polynomials']),
               batch_normalization=mc['batch_normalization'],
               dropout_prob_theta=mc['dropout_prob_theta'],
               learning_rate=float(mc['learning_rate']),
               lr_decay=float(mc['lr_decay']),
               lr_decay_step_size=int(mc['lr_decay_step_size']),
               weight_decay=mc['weight_decay'],
               loss_train=mc['loss_train'],
               loss_hypar=float(mc['loss_hypar']),
               loss_valid=mc['loss_valid'],
               frequency=mc['frequency'],
               random_seed=int(mc['random_seed']))
{% endraw %}

Train Model

{% raw %}
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor="val_loss", 
                               min_delta=1e-4, 
                               patience=mc['early_stop_patience'],
                               verbose=False,
                               mode="min")

trainer = pl.Trainer(max_epochs=mc['max_epochs'], 
                     max_steps=mc['max_steps'],
                     gradient_clip_val=1.0,
                     progress_bar_refresh_rate=10, 
                     log_every_n_steps=500, 
                     check_val_every_n_epoch=1,
                     callbacks=[early_stopping])

trainer.fit(model, train_loader, val_loader)
/Users/fedex/opt/miniconda3/envs/neuralforecast/lib/python3.7/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py:49: LightningDeprecationWarning: Setting `max_steps = None` is deprecated in v1.5 and will no longer be supported in v1.7. Use `max_steps = -1` instead.
  "Setting `max_steps = None` is deprecated in v1.5 and will no longer be supported in v1.7."
/Users/fedex/opt/miniconda3/envs/neuralforecast/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/callback_connector.py:91: LightningDeprecationWarning: Setting `Trainer(progress_bar_refresh_rate=10)` is deprecated in v1.5 and will be removed in v1.7. Please pass `pytorch_lightning.callbacks.progress.TQDMProgressBar` with `refresh_rate` directly to the Trainer's `callbacks` argument instead. Or, to disable the progress bar pass `enable_progress_bar = False` to the Trainer.
  f"Setting `Trainer(progress_bar_refresh_rate={progress_bar_refresh_rate})` is deprecated in v1.5 and"
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs

  | Name  | Type    | Params
----------------------------------
0 | model | _NBEATS | 1.3 M 
----------------------------------
1.3 M     Trainable params
0         Non-trainable params
1.3 M     Total params
5.199     Total estimated model params size (MB)
/Users/fedex/opt/miniconda3/envs/neuralforecast/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py:133: UserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  f"The dataloader, {name}, does not have many workers which may be a bottleneck."
/Users/fedex/opt/miniconda3/envs/neuralforecast/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py:133: UserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  f"The dataloader, {name}, does not have many workers which may be a bottleneck."
/Users/fedex/opt/miniconda3/envs/neuralforecast/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py:433: UserWarning: The number of training samples (2) is smaller than the logging interval Trainer(log_every_n_steps=500). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
  f"The number of training samples ({self.num_training_batches}) is smaller than the logging interval"
{% endraw %}

Make Predictions

{% raw %}
model.return_decomposition = True
outputs = trainer.predict(model, val_loader)

print("outputs[0][0].shape", outputs[0][0].shape)
print("outputs[0][1].shape", outputs[0][1].shape)
print("outputs[0][2].shape", outputs[0][2].shape)
/Users/fedex/projects/neuralforecast/neuralforecast/data/tsloader.py:47: UserWarning: This class wraps the pytorch `DataLoader` with a special collate function. If you want to use yours simply use `DataLoader`. Removing collate_fn
  'This class wraps the pytorch `DataLoader` with a '
/Users/fedex/opt/miniconda3/envs/neuralforecast/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py:133: UserWarning: The dataloader, predict_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  f"The dataloader, {name}, does not have many workers which may be a bottleneck."
outputs[0][0].shape torch.Size([42, 24])
outputs[0][1].shape torch.Size([42, 24])
outputs[0][2].shape torch.Size([42, 3, 24])
{% endraw %}

Forecast

{% raw %}
Y_forecast_df = Y_df[Y_df['ds']<'2016-12-27']
Y_forecast_df.tail()
      unique_id                  ds      y
87355        NP 2016-12-26 19:00:00  27.44
87356        NP 2016-12-26 20:00:00  27.11
87357        NP 2016-12-26 21:00:00  26.82
87358        NP 2016-12-26 22:00:00  26.65
87359        NP 2016-12-26 23:00:00  25.68
{% endraw %} {% raw %}
X_forecast_df = X_df[X_df['ds']<'2016-12-28']
X_forecast_df.tail()
      unique_id                  ds  Exogenous1  Exogenous2  week_day  day_0     day_1  day_2  day_3  day_4  day_5  day_6
87379        NP 2016-12-27 19:00:00    0.133135   -0.566365  -0.67449    0.0  1.927498    0.0    0.0    0.0    0.0    0.0
87380        NP 2016-12-27 20:00:00    0.010193   -0.569435  -0.67449    0.0  1.927498    0.0    0.0    0.0    0.0    0.0
87381        NP 2016-12-27 21:00:00   -0.088980   -0.572021  -0.67449    0.0  1.927498    0.0    0.0    0.0    0.0    0.0
87382        NP 2016-12-27 22:00:00   -0.221603   -0.576345  -0.67449    0.0  1.927498    0.0    0.0    0.0    0.0    0.0
87383        NP 2016-12-27 23:00:00   -0.426087   -0.583618  -0.67449    0.0  1.927498    0.0    0.0    0.0    0.0    0.0
{% endraw %} {% raw %}
model.return_decomposition = False
forecast_df = model.forecast(Y_df=Y_forecast_df, X_df=X_forecast_df, S_df=S_df, batch_size=2)
/Users/fedex/opt/miniconda3/envs/neuralforecast/lib/python3.7/site-packages/ipykernel_launcher.py:26: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
INFO:root:Train Validation splits

INFO:root:                              ds                    
                             min                 max
unique_id sample_mask                               
FR        0           2011-01-09 2016-12-26 23:00:00
          1           2016-12-27 2016-12-27 23:00:00
NP        0           2013-01-01 2016-12-26 23:00:00
          1           2016-12-27 2016-12-27 23:00:00
INFO:root:
Total data 			87288 time stamps 
Available percentage=100.0, 	87288 time stamps 
Insample  percentage=0.05, 	48 time stamps 
Outsample percentage=99.95, 	87240 time stamps 

/Users/fedex/projects/neuralforecast/neuralforecast/data/tsdataset.py:208: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only
  X.drop(['unique_id', 'ds'], 1, inplace=True)
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
/Users/fedex/projects/neuralforecast/neuralforecast/data/tsloader.py:47: UserWarning: This class wraps the pytorch `DataLoader` with a special collate function. If you want to use yours simply use `DataLoader`. Removing collate_fn
  'This class wraps the pytorch `DataLoader` with a '
/Users/fedex/opt/miniconda3/envs/neuralforecast/lib/python3.7/site-packages/pytorch_lightning/trainer/data_loading.py:133: UserWarning: The dataloader, predict_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  f"The dataloader, {name}, does not have many workers which may be a bottleneck."
{% endraw %} {% raw %}
forecast_df
   unique_id                  ds          y
0         FR 2016-12-27 00:00:00  49.990929
1         FR 2016-12-27 01:00:00  46.707829
2         FR 2016-12-27 02:00:00  52.060726
3         FR 2016-12-27 03:00:00  47.081509
4         FR 2016-12-27 04:00:00  43.243519
5         FR 2016-12-27 05:00:00  49.660500
6         FR 2016-12-27 06:00:00  50.721909
7         FR 2016-12-27 07:00:00  51.070969
8         FR 2016-12-27 08:00:00  47.830345
9         FR 2016-12-27 09:00:00  48.632439
10        FR 2016-12-27 10:00:00  46.903412
11        FR 2016-12-27 11:00:00  42.483223
12        FR 2016-12-27 12:00:00  45.047684
13        FR 2016-12-27 13:00:00  46.665199
14        FR 2016-12-27 14:00:00  47.682362
15        FR 2016-12-27 15:00:00  50.067822
16        FR 2016-12-27 16:00:00  49.738674
17        FR 2016-12-27 17:00:00  47.334499
18        FR 2016-12-27 18:00:00  48.062958
19        FR 2016-12-27 19:00:00  50.410599
20        FR 2016-12-27 20:00:00  49.675797
21        FR 2016-12-27 21:00:00  49.066280
22        FR 2016-12-27 22:00:00  46.856270
23        FR 2016-12-27 23:00:00  47.586472
24        NP 2016-12-27 00:00:00  24.939322
25        NP 2016-12-27 01:00:00  23.611902
26        NP 2016-12-27 02:00:00  26.618235
27        NP 2016-12-27 03:00:00  23.681618
28        NP 2016-12-27 04:00:00  22.597773
29        NP 2016-12-27 05:00:00  25.218441
30        NP 2016-12-27 06:00:00  25.817753
31        NP 2016-12-27 07:00:00  25.533932
32        NP 2016-12-27 08:00:00  24.358547
33        NP 2016-12-27 09:00:00  24.696827
34        NP 2016-12-27 10:00:00  23.567106
35        NP 2016-12-27 11:00:00  21.562275
36        NP 2016-12-27 12:00:00  23.117901
37        NP 2016-12-27 13:00:00  24.040312
38        NP 2016-12-27 14:00:00  24.574442
39        NP 2016-12-27 15:00:00  25.083153
40        NP 2016-12-27 16:00:00  25.471178
41        NP 2016-12-27 17:00:00  24.299793
42        NP 2016-12-27 18:00:00  24.480080
43        NP 2016-12-27 19:00:00  25.411215
44        NP 2016-12-27 20:00:00  24.847712
45        NP 2016-12-27 21:00:00  25.088522
46        NP 2016-12-27 22:00:00  23.150562
47        NP 2016-12-27 23:00:00  23.898933
{% endraw %}