inferpy.criticism package

Submodules

inferpy.criticism.evaluate module

Module with the functionality for evaluating InferPy models

inferpy.criticism.evaluate.ALLOWED_METRICS = ['binary_accuracy', 'categorical_accuracy', 'sparse_categorical_accuracy', 'log_loss', 'binary_crossentropy', 'categorical_crossentropy', 'sparse_categorical_crossentropy', 'hinge', 'squared_hinge', 'mse', 'MSE', 'mean_squared_error', 'mae', 'MAE', 'mean_absolute_error', 'mape', 'MAPE', 'mean_absolute_percentage_error', 'msle', 'MSLE', 'mean_squared_logarithmic_error', 'poisson', 'cosine', 'cosine_proximity', 'log_lik', 'log_likelihood']

List of all the metric names allowed for evaluation
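Since evaluate() raises NotImplementedError for metric names outside this list, a caller may validate names up front. A minimal sketch (the helper check_metrics is hypothetical, not part of InferPy):

from inferpy.criticism.evaluate import ALLOWED_METRICS

def check_metrics(metrics):
    # Fail early if any requested metric name is not supported
    unknown = [m for m in metrics if m not in ALLOWED_METRICS]
    if unknown:
        raise ValueError("unsupported metrics: {}".format(unknown))

check_metrics(['mse', 'log_lik'])  # ok: both appear in ALLOWED_METRICS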

inferpy.criticism.evaluate.evaluate(metrics, data, n_samples=500, output_key=None, seed=None)

Evaluate a fitted InferPy model using a set of metrics. This function wraps the equivalent Edward function.

Parameters:
- metrics – str or list of str indicating the metrics or score functions to be used.
- data – dict binding each observed random variable of the model to the data used in the evaluation (e.g., test inputs and predicted outputs).
- n_samples – int, number of posterior samples with which the evaluation is computed (default: 500).
- output_key – random variable in data corresponding to the model's predicted output; if None, it is inferred from data.
- seed – int, random seed used when sampling.

An example of use:

import inferpy as inf

# evaluate the predicted data y=y_pred given that x=x_test
mse = inf.evaluate('mean_squared_error', data={x: x_test, y: y_pred}, output_key=y)
Returns: A list of evaluations or a single evaluation.
Return type: list of float or float
Raises: NotImplementedError – If an input metric does not match an implemented metric.
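
A fuller end-to-end sketch is given below. It assumes InferPy's 0.x model-building API (inf.ProbModel, inf.replicate, compile() and fit()); the toy data and variable names are hypothetical. Here y is bound to the held-out true outputs, and output_key marks it as the model's output, so its posterior-predictive samples are compared against y_test:

import numpy as np
import inferpy as inf
from inferpy.models import Normal

N = 200  # number of instances per split (hypothetical)

# Hypothetical toy data: y is a noisy linear function of x
x_train = np.random.rand(N, 1).astype(np.float32)
y_train = 2.0 * x_train + 0.5 + 0.1 * np.random.randn(N, 1).astype(np.float32)
x_test = np.random.rand(N, 1).astype(np.float32)
y_test = 2.0 * x_test + 0.5

# Simple Bayesian linear regression (InferPy 0.x ProbModel style, assumed)
with inf.ProbModel() as m:
    w = Normal(loc=0., scale=1.)
    b = Normal(loc=0., scale=1.)
    with inf.replicate(size=N):
        x = Normal(loc=0., scale=1., observed=True, dim=1)
        y = Normal(loc=w * x + b, scale=1., observed=True)

m.compile()
m.fit({x: x_train, y: y_train})

# Passing a list of metric names returns a list of evaluations
# in the same order
mse, mae = inf.evaluate(['mean_squared_error', 'mean_absolute_error'],
                        data={x: x_test, y: y_test},
                        output_key=y, n_samples=500)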