autots.evaluator package¶
Submodules¶
autots.evaluator.auto_model module¶
Mid-level helper functions for AutoTS.
-
autots.evaluator.auto_model.
ModelMonster
(model: str, parameters: dict = {}, frequency: str = 'infer', prediction_interval: float = 0.9, holiday_country: str = 'US', startTimeStamps=None, forecast_length: int = 14, random_seed: int = 2020, verbose: int = 0, n_jobs: int = None, **kwargs)¶ Directs strings and parameters to appropriate model objects.
- Parameters
model (str) – Name of Model Function
parameters (dict) – Dictionary of parameters to pass through to model
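For illustration, a minimal sketch of dispatching a model name to a model object (the "ETS" name and empty parameter dict are assumptions, not recommendations):

from autots.evaluator.auto_model import ModelMonster

# dispatch the string "ETS" to the corresponding model object
model = ModelMonster(
    "ETS",
    parameters={},            # empty dict falls back to the model's defaults
    frequency="D",
    prediction_interval=0.9,
    forecast_length=14,
)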
-
autots.evaluator.auto_model.
ModelPrediction
(df_train, forecast_length: int, transformation_dict: dict, model_str: str, parameter_dict: dict, frequency: str = 'infer', prediction_interval: float = 0.9, no_negatives: bool = False, constraint: float = None, future_regressor_train=None, future_regressor_forecast=None, holiday_country: str = 'US', startTimeStamps=None, grouping_ids=None, random_seed: int = 2020, verbose: int = 0, n_jobs: int = None)¶ Feed parameters into modeling pipeline
- Parameters
df_train (pandas.DataFrame) – numeric training dataset of DatetimeIndex and series as cols
forecast_length (int) – number of periods to forecast
transformation_dict (dict) – a dictionary of outlier, fillNA, and transformation methods to be used
model_str (str) – a string directing to the appropriate model, used in ModelMonster
frequency (str) – str representing frequency alias of time series
prediction_interval (float) – width of errors (note: rarely do the intervals accurately match the % asked for…)
no_negatives (bool) – whether to force all forecasts to be > 0
constraint (float) – when not None, use this value * data st dev above max or below min for constraining forecast values.
future_regressor_train (pd.Series) – with datetime index, of known in advance data, section matching train data
future_regressor_forecast (pd.Series) – with datetime index, of known in advance data, section matching test data
holiday_country (str) – passed through to holiday package, used by a few models as 0/1 regressor.
startTimeStamps (pd.Series) – index (series_ids), columns (Datetime of First start of series)
n_jobs (int) – number of processes
- Returns
Prediction from AutoTS model object
- Return type
PredictionObject (autots.PredictionObject)
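As an illustrative sketch (the toy data, model name, and structure of the transformation dict below are assumptions):

import numpy as np
import pandas as pd
from autots.evaluator.auto_model import ModelPrediction

# toy wide-format training data: DatetimeIndex rows, one column per series
df_train = pd.DataFrame(
    np.random.rand(100, 2),
    index=pd.date_range("2021-01-01", periods=100, freq="D"),
    columns=["series_a", "series_b"],
)
prediction = ModelPrediction(
    df_train,
    forecast_length=14,
    transformation_dict={"fillna": "ffill", "transformations": {}, "transformation_params": {}},
    model_str="LastValueNaive",
    parameter_dict={},
)
# prediction.forecast holds the point forecasts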
-
autots.evaluator.auto_model.
NewGeneticTemplate
(model_results, submitted_parameters, sort_column: str = 'smape_weighted', sort_ascending: bool = True, max_results: int = 50, max_per_model_class: int = 5, top_n: int = 50, template_cols: list = ['Model', 'ModelParameters', 'TransformationParameters', 'Ensemble'], transformer_list: dict = {}, transformer_max_depth: int = 8, models_mode: str = 'default')¶ Return new template given old template with model accuracies.
- Parameters
model_results (pandas.DataFrame) – models that have actually been run
submitted_parameters (pandas.DataFrame) – models tried (may have returned different parameters to results)
-
autots.evaluator.auto_model.
RandomTemplate
(n: int = 10, model_list: list = ['ZeroesNaive', 'LastValueNaive', 'AverageValueNaive', 'GLS', 'GLM', 'ETS', 'ARIMA', 'FBProphet', 'RollingRegression', 'GluonTS', 'UnobservedComponents', 'VARMAX', 'VECM', 'DynamicFactor'], transformer_list: dict = 'fast', transformer_max_depth: int = 8, models_mode: str = 'default')¶ Returns a template dataframe of randomly generated transformations, models, and hyperparameters.
- Parameters
n (int) – number of random models to return
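A minimal sketch, restricting generation to two of the default models listed above (values illustrative):

from autots.evaluator.auto_model import RandomTemplate

# 5 random model + transformer combinations drawn from two naive models
template = RandomTemplate(
    n=5,
    model_list=["LastValueNaive", "AverageValueNaive"],
    transformer_list="fast",
)
print(template[["Model", "ModelParameters", "TransformationParameters"]])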
-
class
autots.evaluator.auto_model.
TemplateEvalObject
(model_results=pd.DataFrame(), per_timestamp_smape=pd.DataFrame(), per_series_mae=pd.DataFrame(), per_series_rmse=pd.DataFrame(), per_series_made=pd.DataFrame(), per_series_contour=pd.DataFrame(), per_series_spl=pd.DataFrame(), model_count: int = 0)¶ Bases:
object
Object to contain all the failures!
-
full_mae_ids
¶ list of model_ids corresponding to full_mae_errors
- Type
list
-
full_mae_errors
¶ list of numpy arrays of shape (rows, columns), appended in order of validation. Only provided for ‘mosaic’ ensembling
- Type
list
-
concat
(another_eval)¶ Merge another TemplateEvalObject onto this one.
-
save
(filename)¶ Save results to a file.
-
-
autots.evaluator.auto_model.
TemplateWizard
(template, df_train, df_test, weights, model_count: int = 0, ensemble: str = True, forecast_length: int = 14, frequency: str = 'infer', prediction_interval: float = 0.9, no_negatives: bool = False, constraint: float = None, future_regressor_train=None, future_regressor_forecast=None, holiday_country: str = 'US', startTimeStamps=None, random_seed: int = 2020, verbose: int = 0, n_jobs: int = None, validation_round: int = 0, current_generation: int = 0, max_generations: int = 0, model_interrupt: bool = False, grouping_ids=None, template_cols: list = ['Model', 'ModelParameters', 'TransformationParameters', 'Ensemble'], traceback: bool = False)¶ Take Template, returns Results.
There are some who call me… Tim. - Python
- Parameters
template (pandas.DataFrame) – containing model str, and json of transformations and hyperparameters
df_train (pandas.DataFrame) – numeric training dataset of DatetimeIndex and series as cols
df_test (pandas.DataFrame) – dataframe of actual values of (forecast length * n series)
weights (dict) – key = column/series_id, value = weight
ensemble (str) – desc of ensemble types to prepare metric collection
forecast_length (int) – number of periods to forecast
transformation_dict (dict) – a dictionary of outlier, fillNA, and transformation methods to be used
model_str (str) – a string directing to the appropriate model, used in ModelMonster
frequency (str) – str representing frequency alias of time series
prediction_interval (float) – width of errors (note: rarely do the intervals accurately match the % asked for…)
no_negatives (bool) – whether to force all forecasts to be > 0
constraint (float) – when not None, use this value * data st dev above max or below min for constraining forecast values.
future_regressor_train (pd.Series) – with datetime index, of known in advance data, section matching train data
future_regressor_forecast (pd.Series) – with datetime index, of known in advance data, section matching test data
holiday_country (str) – passed through to holiday package, used by a few models as 0/1 regressor.
startTimeStamps (pd.Series) – index (series_ids), columns (Datetime of First start of series)
validation_round (int) – int passed to record current validation.
current_generation (int) – info to pass to print statements
max_generations (int) – info to pass to print statements
model_interrupt (bool) – if True, keyboard interrupts are caught and only break current model eval.
template_cols (list) – column names of columns used as model template
traceback (bool) – if True, include the full traceback rather than just the error representation
- Returns
TemplateEvalObject
-
autots.evaluator.auto_model.
UniqueTemplates
(existing_templates, new_possibilities, selection_cols: list = ['Model', 'ModelParameters', 'TransformationParameters', 'Ensemble'])¶ Returns unique dataframe rows from new_possibilities not in existing_templates.
- Parameters
selection_cols (list) – list of column names used to judge uniqueness/match on
-
autots.evaluator.auto_model.
back_forecast
(df, model_name, model_param_dict, model_transform_dict, future_regressor_train=None, n_splits: int = 'auto', forecast_length=14, frequency='infer', prediction_interval=0.9, no_negatives=False, constraint=None, holiday_country='US', random_seed=123, n_jobs='auto', verbose=0)¶ Create forecasts for the historical training data, i.e. backcast or back forecast.
This actually forecasts on historical data; these are not fitted model values as are often returned by other packages. As such, this will be slower, but more representative of real-world model performance. There may be jumps in data between chunks.
Args are the same as for model_forecast except…
n_splits (int): how many pieces to split data into. Pass 2 for fastest, or “auto” for best accuracy.
Returns a standard prediction object (access .forecast, .lower_forecast, .upper_forecast)
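A hedged sketch, assuming df is a wide DataFrame with a DatetimeIndex and using an illustrative naive model and transformation dict:

from autots.evaluator.auto_model import back_forecast

backcast = back_forecast(
    df,                      # wide DataFrame of the historical series (assumed defined)
    model_name="LastValueNaive",
    model_param_dict={},
    model_transform_dict={"fillna": "ffill", "transformations": {}, "transformation_params": {}},
    n_splits="auto",
    forecast_length=14,
)
backcast.forecast            # also .lower_forecast and .upper_forecast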
-
autots.evaluator.auto_model.
create_model_id
(model_str: str, parameter_dict: dict = {}, transformation_dict: dict = {})¶ Create a hash ID which should be unique to the model parameters.
-
autots.evaluator.auto_model.
dict_recombination
(a: dict, b: dict)¶ Recombine two dictionaries with identical keys. Return new dict.
-
autots.evaluator.auto_model.
generate_score
(model_results, metric_weighting: dict = {}, prediction_interval: float = 0.9)¶ Generate score based on relative accuracies.
SMAPE - smaller is better
MAE - smaller is better
RMSE - smaller is better
SPL - smaller is better
Contour - bigger is better (is 0 to 1)
Containment - bigger is better (is 0 to 1)
Runtime - smaller is better
-
autots.evaluator.auto_model.
generate_score_per_series
(results_object, metric_weighting, total_validations)¶ Score generation on per_series_metrics for ensembles.
-
autots.evaluator.auto_model.
model_forecast
(model_name, model_param_dict, model_transform_dict, df_train, forecast_length: int, frequency: str = 'infer', prediction_interval: float = 0.9, no_negatives: bool = False, constraint: float = None, future_regressor_train=None, future_regressor_forecast=None, holiday_country: str = 'US', startTimeStamps=None, grouping_ids=None, random_seed: int = 2020, verbose: int = 0, n_jobs: int = 'auto', template_cols: list = ['Model', 'ModelParameters', 'TransformationParameters', 'Ensemble'], horizontal_subset: list = None)¶ Takes numeric data, returns numeric forecasts.
Only one model (albeit potentially an ensemble)! Horizontal ensembles cannot be nested; other ensemble types can be.
Well, she turned me into a newt. A newt? I got better. -Python
- Parameters
model_name (str) – a string directing to the appropriate model, used in ModelMonster
model_param_dict (dict) – dictionary of parameters to be passed into the model.
model_transform_dict (dict) – a dictionary of fillNA and transformation methods to be used. Pass an empty dictionary if no transformations are desired.
df_train (pandas.DataFrame) – numeric training dataset of DatetimeIndex and series as cols
forecast_length (int) – number of periods to forecast
frequency (str) – str representing frequency alias of time series
prediction_interval (float) – width of errors (note: rarely do the intervals accurately match the % asked for…)
no_negatives (bool) – whether to force all forecasts to be > 0
constraint (float) – when not None, use this value * data st dev above max or below min for constraining forecast values.
future_regressor_train (pd.Series) – with datetime index, of known in advance data, section matching train data
future_regressor_forecast (pd.Series) – with datetime index, of known in advance data, section matching test data
holiday_country (str) – passed through to holiday package, used by a few models as 0/1 regressor.
n_jobs (int) – number of CPUs to use when available.
template_cols (list) – column names of columns used as model template
horizontal_subset (list) – columns of df_train to use for forecast, meant for internal use for horizontal ensembling
- Returns
Prediction from AutoTS model object
- Return type
PredictionObject (autots.PredictionObject)
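A minimal sketch on toy data; the model name and parameter dicts are illustrative:

import numpy as np
import pandas as pd
from autots.evaluator.auto_model import model_forecast

df_train = pd.DataFrame(
    np.random.rand(60, 3),
    index=pd.date_range("2021-01-01", periods=60, freq="D"),
    columns=["a", "b", "c"],
)
prediction = model_forecast(
    model_name="AverageValueNaive",
    model_param_dict={"method": "Mean"},
    model_transform_dict={"fillna": "mean", "transformations": {}, "transformation_params": {}},
    df_train=df_train,
    forecast_length=14,
)
prediction.forecast.head()   # point forecasts, one column per input series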
-
autots.evaluator.auto_model.
remove_leading_zeros
(df)¶ Accepts a wide dataframe; returns the dataframe with zeroes preceding any non-zero value replaced with NaN.
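For example (behavior as documented above; data illustrative):

import pandas as pd
from autots.evaluator.auto_model import remove_leading_zeros

df = pd.DataFrame(
    {"a": [0, 0, 3, 0, 5], "b": [1, 2, 0, 4, 5]},
    index=pd.date_range("2021-01-01", periods=5, freq="D"),
)
# the two leading zeroes in "a" become NaN; the zero after the first non-zero value remains
print(remove_leading_zeros(df))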
-
autots.evaluator.auto_model.
trans_dict_recomb
(dict_array)¶ Recombine two transformation param dictionaries from array of dicts.
-
autots.evaluator.auto_model.
unpack_ensemble_models
(template, template_cols: list = ['Model', 'ModelParameters', 'TransformationParameters', 'Ensemble'], keep_ensemble: bool = True, recursive: bool = False)¶ Take ensemble models from template and add as new rows.
- Parameters
template (pd.DataFrame) – AutoTS template containing template_cols
keep_ensemble (bool) – if False, drop row containing original ensemble
recursive (bool) – if True, unnest ensembles of ensembles…
-
autots.evaluator.auto_model.
validation_aggregation
(validation_results)¶ Aggregate a TemplateEvalObject.
autots.evaluator.auto_ts module¶
Higher-level functions of automated time series modeling.
-
class
autots.evaluator.auto_ts.
AutoTS
(forecast_length: int = 14, frequency: str = 'infer', prediction_interval: float = 0.9, max_generations: int = 10, no_negatives: bool = False, constraint: float = None, ensemble: str = 'auto', initial_template: str = 'General+Random', random_seed: int = 2020, holiday_country: str = 'US', subset: int = None, aggfunc: str = 'first', na_tolerance: float = 1, metric_weighting: dict = {'containment_weighting': 0, 'contour_weighting': 1, 'made_weighting': 1, 'mae_weighting': 2, 'rmse_weighting': 2, 'runtime_weighting': 0.05, 'smape_weighting': 5, 'spl_weighting': 2}, drop_most_recent: int = 0, drop_data_older_than_periods: int = 100000, model_list: str = 'default', transformer_list: dict = 'fast', transformer_max_depth: int = 6, models_mode: str = 'random', num_validations: int = 2, models_to_validate: float = 0.15, max_per_model_class: int = None, validation_method: str = 'backwards', min_allowed_train_percent: float = 0.5, remove_leading_zeroes: bool = False, prefill_na: str = None, introduce_na: bool = None, model_interrupt: bool = False, verbose: int = 1, n_jobs: int = None)¶ Bases:
object
Automate time series modeling using a genetic algorithm.
- Parameters
forecast_length (int) – number of periods over which to evaluate forecast. Can be overridden later in .predict().
frequency (str) – ‘infer’ or a specific pandas datetime offset. Can be used to force rollup of data (ie daily input, but frequency ‘M’ will rollup to monthly).
prediction_interval (float) – 0-1, uncertainty range for upper and lower forecasts. Adjust range, but rarely matches actual containment.
max_generations (int) – number of genetic algorithms generations to run. More runs = longer runtime, generally better accuracy. It’s called max because someday there will be an auto early stopping option, but for now this is just the exact number of generations to run.
no_negatives (bool) – if True, all negative predictions are rounded up to 0.
constraint (float) – when not None, use this value * data st dev above max or below min for constraining forecast values. Applied to point forecast only, not upper/lower forecasts.
ensemble (str) – None or list or comma-separated string containing: ‘auto’, ‘simple’, ‘distance’, ‘horizontal’, ‘horizontal-min’, ‘horizontal-max’, “mosaic”, “subsample”
initial_template (str) – ‘Random’ - randomly generates starting template, ‘General’ uses template included in package, ‘General+Random’ - both of previous. Also can be overridden with self.import_template()
random_seed (int) – random seed allows (slightly) more consistent results.
holiday_country (str) – passed through to Holidays package for some models.
subset (int) – maximum number of series to evaluate at once. Useful to speed evaluation when many series are input. takes a new subset of columns on each validation, unless mosaic ensembling, in which case columns are the same in each validation
aggfunc (str) – if data is to be rolled up to a higher frequency (daily -> monthly) or duplicate timestamps are included. Default ‘first’ removes duplicates, for rollup try ‘mean’ or np.sum. Beware numeric aggregations like ‘mean’ will not work with non-numeric inputs.
na_tolerance (float) – 0 to 1. Series are dropped if they have more than this percent NaN. 0.95 here would allow series containing up to 95% NaN values.
metric_weighting (dict) – weights to assign to metrics, affecting how the ranking score is generated.
drop_most_recent (int) – option to drop n most recent data points. Useful, say, for monthly sales data where the current (unfinished) month is included. Occurs after any aggregation is applied, so will be whatever is specified by frequency; will drop n frequencies.
drop_data_older_than_periods (int) – take only the n most recent timestamps
model_list (list) – str alias or list of names of model objects to use
transformer_list (list) – list of transformers to use, or dict of transformer:probability. Note this does not apply to initial templates. can accept string aliases: “all”, “fast”, “superfast”
transformer_max_depth (int) – maximum number of sequential transformers to generate for new Random Transformers. Fewer will be faster.
models_mode (str) – option to adjust parameter options for newly generated models. Currently includes: ‘default’, ‘deep’ (searches more params, likely slower), and ‘regressor’ (forces ‘User’ regressor mode in regressor capable models)
num_validations (int) – number of cross validations to perform. 0 for just train/test on best split. Possible confusion: num_validations is the number of validations to perform after the first eval segment, so the total number of eval/validation segments will be this + 1.
models_to_validate (int) – top n models to pass through to cross validation. Or float in 0 to 1 as % of tried. 0.99 is forced to 100% validation. 1 evaluates just 1 model. If horizontal or mosaic ensemble, then additional min per_series models above the number here are added to validation.
max_per_model_class (int) – of the models_to_validate what is the maximum to pass from any one model class/family.
validation_method (str) – ‘even’, ‘backwards’, or ‘seasonal n’ where n is an integer of seasonal periods. ‘backwards’ is better for recency and for shorter training sets. ‘even’ splits the data into equally-sized slices, best for more consistent data; a poetic but less effective strategy than others here. ‘seasonal n’, for example ‘seasonal 364’, would test all data on each previous year of the forecast_length that would immediately follow the training data. ‘similarity’ automatically finds the data sections most similar to the most recent data that will be used for prediction. ‘custom’ - if used, .fit() needs validation_indexes passed - a list of pd.DatetimeIndex’s, the tail of each is used as test
min_allowed_train_percent (float) – percent of forecast length to allow as min training, else raises error. 0.5 with a forecast length of 10 would mean 5 training points are mandated, for a total of 15 points. Useful in (unrecommended) cases where forecast_length > training length.
remove_leading_zeroes (bool) – replace leading zeroes with NaN. Useful in data where initial zeroes mean data collection hasn’t started yet.
prefill_na (str) – value to input to fill all NaNs with. Leaving as None and allowing model interpolation is recommended. None, 0, ‘mean’, or ‘median’. 0 may be useful in, for example, sales cases where all NaN can be assumed equal to zero.
introduce_na (bool) – whether to force last values in one training validation to be NaN. Helps make more robust models. defaults to None, which introduces NaN in last rows of validations if any NaN in tail of training data. Will not introduce NaN to all series if subset is used. if True, will also randomly change 20% of all rows to NaN in the validations
model_interrupt (bool) – if False, KeyboardInterrupts quit entire program. if True, KeyboardInterrupts attempt to only quit current model. if True, recommend use in conjunction with verbose > 0 and result_file in the event of accidental complete termination. if “end_generation”, as True and also ends entire generation of run. Note skipped models will not be tried again.
verbose (int) – setting to 0 or lower should reduce most output. Higher numbers give more output.
n_jobs (int) – Number of cores available to pass to parallel processing. A joblib context manager can be used instead (pass None in this case). Also ‘auto’.
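A minimal construction sketch using a few of the parameters above (values are illustrative, not recommendations):

from autots import AutoTS

model = AutoTS(
    forecast_length=14,
    frequency="infer",
    prediction_interval=0.9,
    ensemble="auto",
    model_list="fast",           # string alias for a smaller, quicker model list
    max_generations=5,
    num_validations=2,
    validation_method="backwards",
)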
-
best_model
¶ DataFrame containing template for the best ranked model
- Type
pd.DataFrame
-
best_model_name
¶ model name
- Type
str
-
best_model_params
¶ model params
- Type
dict
-
best_model_transformation_params
¶ transformation parameters
- Type
dict
-
best_model_ensemble
¶ Ensemble type int id
- Type
int
-
regression_check
¶ If True, the best_model uses an input ‘User’ future_regressor
- Type
bool
-
df_wide_numeric
¶ dataframe containing shaped final data
- Type
pd.DataFrame
-
initial_results.
model_results
¶ contains a collection of result metrics
- Type
object
-
score_per_series
¶ generated score of metrics given per input series, if horizontal ensembles
- Type
pd.DataFrame
-
fit, predict
-
export_template, import_template, import_results
-
results, failure_rate
-
horizontal_to_df, mosaic_to_df
-
plot_horizontal, plot_horizontal_transformers, plot_generation_loss, plot_backforecast
-
back_forecast
(column=None, n_splits: int = 3, tail: int = None, verbose: int = 0)¶ Create forecasts for the historical training data, i.e. backcast or back forecast.
This actually forecasts on historical data; these are not fitted model values as are often returned by other packages. As such, this will be slower, but more representative of real-world model performance. There may be jumps in data between chunks.
Args are the same as for model_forecast except…
n_splits (int): how many pieces to split data into. Pass 2 for fastest, or “auto” for best accuracy.
column (str): to run on only one column, pass the column name. Faster than full.
tail (int): df.tail() of the dataset; back_forecast is only run on the n most recent observations.
Returns a standard prediction object (access .forecast, .lower_forecast, .upper_forecast)
-
export_template
(filename=None, models: str = 'best', n: int = 5, max_per_model_class: int = None, include_results: bool = False)¶ Export top results as a reusable template.
- Parameters
filename (str) – ‘csv’ or ‘json’ (in filename). None to return a dataframe and not write a file.
models (str) – ‘best’ or ‘all’
n (int) – if models = ‘best’, how many n-best to export
max_per_model_class (int) – if models = ‘best’, the max number of each model class to include in template
include_results (bool) – whether to include performance metrics
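Assuming model is a fitted AutoTS instance, a sketch of exporting the top results (filename is hypothetical):

# export the 10 best models (with their metrics) to a reusable CSV template
model.export_template(
    "best_models.csv",
    models="best",
    n=10,
    max_per_model_class=3,
    include_results=True,
)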
-
failure_rate
(result_set: str = 'initial')¶ Return fraction of models passing with exceptions.
- Parameters
result_set (str, optional) – ‘validation’ or ‘initial’. Defaults to ‘initial’.
- Returns
float.
-
fit
(df, date_col: str = None, value_col: str = None, id_col: str = None, future_regressor=None, weights: dict = {}, result_file: str = None, grouping_ids=None, validation_indexes: list = None)¶ Train algorithm given data supplied.
- Parameters
df (pandas.DataFrame) – Datetime Indexed dataframe of series, or dataframe of three columns as below.
date_col (str) – name of datetime column
value_col (str) – name of column containing the data of series.
id_col (str) – name of column identifying different series.
future_regressor (numpy.Array) – single external regressor matching train.index
weights (dict) – {‘colname1’: 2, ‘colname2’: 5} - increase importance of a series in metric evaluation. Any left blank are assumed to have a weight of 1. Pass the alias ‘mean’ as a str, i.e. weights=’mean’, to automatically use the mean value of a series as its weight. Available aliases: mean, median, min, max
result_file (str) – results saved on each new generation. Does not include validation rounds. “.csv” save model results table. “.pickle” saves full object, including ensemble information.
grouping_ids (dict) – currently a one-level dict containing series_id:group_id mapping. used in 0.2.x but not 0.3.x+ versions. retained for potential future use
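A sketch of fitting on long-format data (toy data; column names are illustrative):

import pandas as pd
from autots import AutoTS

# long format: one row per (date, series_id) observation
df_long = pd.DataFrame({
    "date": list(pd.date_range("2021-01-01", periods=90, freq="D")) * 2,
    "series_id": ["a"] * 90 + ["b"] * 90,
    "value": range(180),
})
model = AutoTS(forecast_length=14, frequency="infer", max_generations=2)
model = model.fit(df_long, date_col="date", value_col="value", id_col="series_id")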
-
horizontal_to_df
()¶ Helper function for plotting.
-
import_results
(filename)¶ Add results from another run on the same data.
Input can be a filename ending in .csv or .pickle, or a DataFrame of model results, or a full TemplateEvalObject
-
import_template
(filename: str, method: str = 'add_on', enforce_model_list: bool = True)¶ Import a previously exported template of model parameters. Must be done before the AutoTS object is .fit().
- Parameters
filename (str) – file location (or a pd.DataFrame already loaded)
method (str) – ‘add_on’ or ‘only’ - “add_on” keeps initial_template generated in init. “only” uses only this template.
enforce_model_list (bool) – if True, remove model types not in model_list
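A sketch, assuming "best_models.csv" was previously created by export_template:

model = AutoTS(forecast_length=14)
# replace the initial template entirely with the imported one
model = model.import_template("best_models.csv", method="only", enforce_model_list=True)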
-
mosaic_to_df
()¶ Helper function to create a readable df of models in mosaic.
-
plot_backforecast
(series=None, n_splits: int = 3, start_date=None, **kwargs)¶ Plot the historical data and fit forecast on historic.
- Parameters
series (str or list) – column names of time series
n_splits (int or str) – “auto”, number > 2, higher more accurate but slower
**kwargs – passed to pd.DataFrame.plot()
-
plot_generation_loss
(**kwargs)¶ Plot improvement in accuracy over generations. Note: this is only “one size fits all” accuracy and doesn’t account for the benefits seen for ensembling.
- Parameters
**kwargs – passed to pd.DataFrame.plot()
-
plot_horizontal
(max_series: int = 20, **kwargs)¶ Simple plot to visualize assigned series: models.
Note that for ‘mosaic’ ensembles, it only plots the type of the most common model_id for that series, or the first if all are equally common.
- Parameters
max_series (int) – max number of points to plot
**kwargs – passed to pandas.plot()
-
plot_horizontal_transformers
(method='transformers', color_list=None, **kwargs)¶ Simple plot to visualize transformers used. Note this doesn’t capture transformers nested in simple ensembles.
- Parameters
method (str) – ‘fillna’ or ‘transformers’ - which to plot
color_list – list of colors to sample for bar colors. Can be names or hex.
**kwargs – passed to pandas.plot()
-
predict
(forecast_length: int = 'self', prediction_interval: float = 'self', future_regressor=None, hierarchy=None, just_point_forecast: bool = False, verbose: int = 'self')¶ Generate forecast data immediately following dates of index supplied to .fit().
- Parameters
forecast_length (int) – Number of periods of data to forecast ahead
prediction_interval (float) – interval of upper/lower forecasts. Defaults to ‘self’, i.e. the interval specified in __init__(). If prediction_interval is a list, then a dict of forecast objects is returned.
future_regressor (numpy.Array) – additional regressor
hierarchy – Not yet implemented
just_point_forecast (bool) – If True, return a pandas.DataFrame of just point forecasts
- Returns
Either a PredictionObject of forecasts and metadata, or if just_point_forecast == True, a dataframe of point forecasts
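Assuming model has been fit as above, a minimal sketch:

prediction = model.predict()          # uses the forecast_length given at __init__
forecasts = prediction.forecast       # point forecasts, one column per series
upper = prediction.upper_forecast     # upper prediction interval
lower = prediction.lower_forecast     # lower prediction interval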
-
results
(result_set: str = 'initial')¶ Convenience function to return tested models table.
- Parameters
result_set (str) – ‘validation’ or ‘initial’
-
class
autots.evaluator.auto_ts.
AutoTSIntervals
¶ Bases:
object
AutoTS looped to test multiple prediction intervals. Experimental.
Runs max_generations on first prediction interval, then validates on remainder. Most args are passed through to AutoTS().
- Parameters
interval_models_to_validate (int) – number of models to validate on each prediction interval.
import_results (str) – results from run on same data to load, filename.pickle. Currently result_file and import only save/load initial run, no validations.
-
fit
(prediction_intervals, forecast_length, df_long, max_generations, num_validations, validation_method, models_to_validate, interval_models_to_validate, date_col, value_col, id_col=None, import_template=None, import_method='only', import_results=None, result_file=None, model_list='all', metric_weighting: dict = {'containment_weighting': 0, 'contour_weighting': 0, 'mae_weighting': 0, 'rmse_weighting': 1, 'runtime_weighting': 0, 'smape_weighting': 1, 'spl_weighting': 10}, weights: dict = {}, grouping_ids=None, future_regressor=None, model_interrupt: bool = False, constraint=2, no_negatives=False, remove_leading_zeroes=False, random_seed=2020)¶ Train and find best.
-
predict
(future_regressor=None, verbose: int = 'self') → dict¶ Generate forecasts after training complete.
-
autots.evaluator.auto_ts.
error_correlations
(all_result, result: str = 'corr')¶ One-hot encode the AutoTS result df and return the df or its correlation with errors.
- Parameters
all_results (pandas.DataFrame) – AutoTS model_results df
result (str) – whether to return ‘df’, ‘corr’, ‘poly corr’ with errors
-
autots.evaluator.auto_ts.
fake_regressor
(df, forecast_length: int = 14, date_col: str = None, value_col: str = None, id_col: str = None, frequency: str = 'infer', aggfunc: str = 'first', drop_most_recent: int = 0, na_tolerance: float = 0.95, drop_data_older_than_periods: int = 100000, dimensions: int = 1, verbose: int = 0)¶ Create a fake regressor of random numbers for testing purposes.
autots.evaluator.benchmark module¶
Created on Fri Nov 5 13:45:01 2021
@author: Colin
-
class
autots.evaluator.benchmark.
Benchmark
¶ Bases:
object
-
run
(n_jobs: int = 'auto', times: int = 3, random_seed: int = 123)¶ Run benchmark.
- Parameters
n_jobs (int) – passed to model_forecast for n cpus
times (int) – number of times to run benchmark models (returns avg of n times)
random_seed (int) – random seed, increases consistency
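A minimal sketch of running the benchmark once:

from autots.evaluator.benchmark import Benchmark

bench = Benchmark()
bench.run(n_jobs="auto", times=1, random_seed=123)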
-
autots.evaluator.metrics module¶
Tools for calculating forecast errors.
-
autots.evaluator.metrics.
containment
(lower_forecast, upper_forecast, actual)¶ Expects 2-D numpy arrays of forecast_length * n series.
Returns a 1-D array of results, one value per series
- Parameters
actual (numpy.array) – known true values
forecast (numpy.array) – predicted values
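A quick illustrative check with toy arrays of shape (forecast_length, n series):

import numpy as np
from autots.evaluator.metrics import containment

actual = np.array([[10.0, 1.0], [12.0, 2.0], [11.0, 3.0]])
lower = actual - 1.0       # toy interval that always covers the actuals
upper = actual + 1.0
# should be 1.0 for each series here, since every actual lies inside the interval
print(containment(lower, upper, actual))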
-
autots.evaluator.metrics.
contour
(A, F)¶ A measure of how well the actual and forecast follow the same pattern of change. Note: if actual values are unchanging, it will match positive-changing forecasts. Expects two 2-D numpy arrays of forecast_length * n series. Returns a 1-D array of results, one value per series
- Parameters
A (numpy.array) – known true values
F (numpy.array) – predicted values
-
autots.evaluator.metrics.
mae
(ae)¶ Accepting abs error already calculated
-
autots.evaluator.metrics.
mean_absolute_differential_error
(A, F, order: int = 1)¶ Expects two 2-D numpy arrays of forecast_length * n series.
Returns a 1-D array of results, one value per series
- Parameters
A (numpy.array) – known true values
F (numpy.array) – predicted values
order (int) – order of differential
-
autots.evaluator.metrics.
mean_absolute_error
(A, F)¶ Expects two 2-D numpy arrays of forecast_length * n series.
Returns a 1-D array of results, one value per series
- Parameters
A (numpy.array) – known true values
F (numpy.array) – predicted values
-
autots.evaluator.metrics.
medae
(ae)¶ Accepting abs error already calculated
-
autots.evaluator.metrics.
median_absolute_error
(A, F)¶ Expects two 2-D numpy arrays of forecast_length * n series.
Returns a 1-D array of results, one value per series
- Parameters
A (numpy.array) – known true values
F (numpy.array) – predicted values
-
autots.evaluator.metrics.
pinball_loss
(A, F, quantile)¶ Bigger is bad-er.
-
autots.evaluator.metrics.
rmse
(ae)¶ Accepting abs error already calculated
-
autots.evaluator.metrics.
root_mean_square_error
(actual, forecast)¶ Expects two 2-D numpy arrays of forecast_length * n series.
Returns a 1-D array of results, one value per series
- Parameters
actual (numpy.array) – known true values
forecast (numpy.array) – predicted values
-
autots.evaluator.metrics.
scaled_pinball_loss
(A, F, df_train, quantile)¶ Scaled pinball loss.
- Parameters
A (np.array) – actual values
F (np.array) – forecast values
df_train (np.array) – values of historic data for scaling
quantile (float) – which bound of upper/lower forecast this is
-
autots.evaluator.metrics.
smape
(actual, forecast, ae)¶ Accepting abs error already calculated
-
autots.evaluator.metrics.
spl
(A, F, quantile, scaler)¶ Accepting scaler already calculated
-
autots.evaluator.metrics.
symmetric_mean_absolute_percentage_error
(actual, forecast)¶ Expects two 2-D numpy arrays of forecast_length * n series. Allows NaN in actuals, and corresponding NaN in forecast, but not unmatched NaN in forecast. Also doesn’t like zeroes in either forecast or actual - results in a poor error value even if the forecast is accurate.
Returns a 1-D array of results, one value per series
- Parameters
actual (numpy.array) – known true values
forecast (numpy.array) – predicted values
References
https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error
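An illustrative call with toy arrays of shape (forecast_length, n series):

import numpy as np
from autots.evaluator.metrics import symmetric_mean_absolute_percentage_error

actual = np.array([[10.0, 100.0], [12.0, 110.0], [11.0, 105.0]])
forecast = np.array([[11.0, 90.0], [12.0, 120.0], [10.0, 100.0]])
# 1-D array with one sMAPE value per series
print(symmetric_mean_absolute_percentage_error(actual, forecast))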
Module contents¶
Model Evaluators