eval_stats_base#


class EvalStats(metrics: List[TMetric], additional_metrics: Optional[List[TMetric]] = None)[source]#

Bases: Generic[TMetric], ToStringMixin

set_name(name: str)[source]#
add_metric(metric: TMetric)[source]#
compute_metric_value(metric: TMetric) float[source]#
metrics_dict() Dict[str, float][source]#

Computes all metrics

Returns:

a dictionary mapping metric names to values

get_all() Dict[str, float][source]#

Alias for metrics_dict; may be deprecated in the future
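
A minimal sketch of how metrics and metrics_dict interact; ValueListEvalStats and MeanValueMetric below are hypothetical illustrations, and the import path is assumed:

    from sensai.evaluation.eval_stats.eval_stats_base import EvalStats, Metric  # assumed import path

    class ValueListEvalStats(EvalStats):
        """Hypothetical EvalStats subclass that simply stores a list of values."""
        def __init__(self, values, metrics):
            super().__init__(metrics)
            self.values = values

    class MeanValueMetric(Metric):
        """Hypothetical metric computing the mean of the stored values."""
        name = "mean_value"

        def compute_value_for_eval_stats(self, eval_stats):
            return sum(eval_stats.values) / len(eval_stats.values)

    stats = ValueListEvalStats([1.0, 2.0, 3.0], metrics=[MeanValueMetric()])
    print(stats.metrics_dict())  # {'mean_value': 2.0}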

class Metric(name: Optional[str] = None, bounds: Optional[Tuple[float, float]] = None)[source]#

Bases: Generic[TEvalStats], ABC

Parameters:
  • name – the name of the metric; if None, the class’ name attribute is used

  • bounds – the minimum and maximum values the metric can take on (or None if the bounds are not specified)

name: str#
abstract compute_value_for_eval_stats(eval_stats: TEvalStats) float[source]#
get_paired_metrics() List[TMetric][source]#

Gets a list of metrics that should be considered together with this metric (e.g. for paired visualisations/plots). The direction of the pairing should be such that if this metric is “x”, the other is “y” for x-y type visualisations.

Returns:

a list of metrics

has_finite_bounds() bool[source]#
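
To illustrate bounds and metric pairing, here is a hedged sketch of two hypothetical metrics (class names and the precision/recall pairing are illustrative only; the actual value computations are omitted):

    from sensai.evaluation.eval_stats.eval_stats_base import Metric  # assumed import path

    class RecallMetric(Metric):
        """Hypothetical recall metric, bounded to [0, 1]."""
        name = "recall"

        def __init__(self):
            super().__init__(bounds=(0.0, 1.0))

        def compute_value_for_eval_stats(self, eval_stats):
            raise NotImplementedError  # computation omitted in this sketch

    class PrecisionMetric(Metric):
        """Hypothetical precision metric, paired with recall for x-y plots."""
        name = "precision"

        def __init__(self):
            super().__init__(bounds=(0.0, 1.0))

        def compute_value_for_eval_stats(self, eval_stats):
            raise NotImplementedError  # computation omitted in this sketch

        def get_paired_metrics(self):
            # this metric is "x", recall is "y" in x-y visualisations
            return [RecallMetric()]

    assert PrecisionMetric().has_finite_bounds()  # (0.0, 1.0) is finite
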
class EvalStatsCollection(eval_stats_list: List[TEvalStats])[source]#

Bases: Generic[TEvalStats, TMetric], ABC

get_values(metric_name: str)[source]#
get_metric_names() List[str][source]#
get_metrics() List[TMetric][source]#
get_metric_by_name(name: str) Optional[TMetric][source]#
has_metric(metric: Union[Metric, str]) bool[source]#
agg_metrics_dict(agg_fns=(mean, std)) Dict[str, float][source]#
mean_metrics_dict() Dict[str, float][source]#
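
Conceptually, agg_metrics_dict applies each aggregation function to a metric's values across all contained EvalStats objects. A self-contained illustration using plain numpy (the metric values are made up):

    import numpy as np

    # accuracy values gathered across three evaluation runs (made-up numbers)
    values = [0.81, 0.79, 0.85]
    agg_fns = (np.mean, np.std)  # the default aggregation functions
    print({fn.__name__: float(fn(values)) for fn in agg_fns})
    # {'mean': 0.8166..., 'std': 0.0249...}
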
plot_distribution(metric_name: str, subtitle: Optional[str] = None, bins=None, kde=False, cdf=False, cdf_complementary=False, stat='proportion', **kwargs) Figure[source]#

Plots the distribution of a metric as a histogram

Parameters:
  • metric_name – name of the metric for which to plot the distribution (histogram) across evaluations

  • subtitle – the subtitle to add, if any

  • bins – the histogram bins (number of bins or bin boundaries); the metric’s bounds will be used to define the x-axis limits. If None, ‘auto’ bins are used

  • kde – whether to add a kernel density estimator plot

  • cdf – whether to add the cumulative distribution function (cdf)

  • cdf_complementary – whether to plot, if cdf is True, the complementary cdf instead of the regular cdf

  • stat – the statistic to compute for each bin (‘percent’, ‘proportion’ (equivalently ‘probability’), ‘count’, ‘frequency’ or ‘density’); this determines the y-axis value

  • kwargs – additional parameters to pass to seaborn.histplot (see https://seaborn.pydata.org/generated/seaborn.histplot.html)

Returns:

the plot
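
A hedged usage sketch: the helper below assumes a concrete EvalStatsCollection instance and a metric name that exists in it (both hypothetical here):

    def save_metric_histogram(collection, metric_name: str, path: str) -> None:
        """Plot a metric's distribution across evaluations and save it (sketch)."""
        fig = collection.plot_distribution(metric_name, subtitle="across evaluations",
                                           bins=10, kde=True, stat="proportion")
        fig.savefig(path)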

plot_scatter(metric_name_x: str, metric_name_y: str) Figure[source]#
plot_heat_map(metric_name_x: str, metric_name_y: str) Figure[source]#
to_data_frame() DataFrame[source]#
Returns:

a DataFrame with the evaluation metrics from all contained EvalStats objects, with each EvalStats object’s name field used as the index if it is set
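
For instance, the metrics can be inspected in tabular form (a sketch assuming a concrete collection instance):

    def metric_summary(collection):
        """Tabulate the metrics of all contained EvalStats objects and aggregate them."""
        df = collection.to_data_frame()  # one row per EvalStats, one column per metric
        return df.describe()             # count/mean/std/quantiles per metric column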

get_global_stats() TEvalStats[source]#

Alias for get_combined_eval_stats

abstract get_combined_eval_stats() TEvalStats[source]#
Returns:

an EvalStats object that combines the data from all contained EvalStats objects

class PredictionEvalStats(y_predicted: Optional[Union[ndarray, Series, DataFrame, list]], y_true: Optional[Union[ndarray, Series, DataFrame, list]], metrics: List[TMetric], additional_metrics: Optional[List[TMetric]] = None)[source]#

Bases: EvalStats[TMetric], ABC

Collects data for the evaluation of predicted values (including multi-dimensional predictions) and computes corresponding metrics

Parameters:
  • y_predicted – sequence of predicted values, or, in case of multi-dimensional predictions, either a data frame with one column per dimension or a nested sequence of values

  • y_true – sequence of ground truth labels of the same shape as y_predicted

  • metrics – list of metrics to be computed on the provided data

  • additional_metrics – the metrics to additionally compute. This should only be provided if metrics is None
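
A minimal sketch of a concrete subclass; MaeMetric and SimplePredictionEvalStats are hypothetical, and it is assumed that the collected data is exposed via attributes named after the constructor parameters:

    import numpy as np
    from sensai.evaluation.eval_stats.eval_stats_base import PredictionEvalStats, Metric  # assumed path

    class MaeMetric(Metric):
        """Hypothetical mean absolute error metric."""
        name = "MAE"

        def compute_value_for_eval_stats(self, eval_stats):
            # attribute names assumed to match the constructor parameters
            y_pred = np.asarray(eval_stats.y_predicted, dtype=float)
            y_true = np.asarray(eval_stats.y_true, dtype=float)
            return float(np.mean(np.abs(y_pred - y_true)))

    class SimplePredictionEvalStats(PredictionEvalStats):
        """Hypothetical concrete subclass adding nothing beyond the base class."""

    stats = SimplePredictionEvalStats(y_predicted=[1.0, 2.0, 3.0],
                                      y_true=[1.5, 2.5, 2.5],
                                      metrics=[MaeMetric()])
    print(stats.metrics_dict())  # {'MAE': 0.5}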

add(y_predicted, y_true) None[source]#

Adds a single pair of values to the evaluation

Parameters:
  • y_predicted – the value predicted by the model

  • y_true – the true value

add_all(y_predicted: Union[ndarray, Series, DataFrame, list], y_true: Union[ndarray, Series, DataFrame, list]) None[source]#
Parameters:
  • y_predicted – sequence of predicted values, or, in case of multi-dimensional predictions, either a data frame with one column per dimension or a nested sequence of values

  • y_true – sequence of ground truth labels of the same shape as y_predicted
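
Continuing the hypothetical SimplePredictionEvalStats sketch above, data can also be appended after construction:

    stats = SimplePredictionEvalStats(y_predicted=[], y_true=[], metrics=[MaeMetric()])
    stats.add(y_predicted=2.0, y_true=2.5)                    # one pair at a time
    stats.add_all(y_predicted=[1.0, 3.0], y_true=[1.5, 2.5])  # whole sequences at once
    print(stats.metrics_dict())  # {'MAE': 0.5}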

mean_stats(eval_stats_list: Sequence[EvalStats]) Dict[str, float][source]#

For a list of EvalStats objects, computes the mean values of all metrics and returns them in a dictionary. Assumes that all provided EvalStats objects have the same metrics
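
One plausible reading of what mean_stats computes (a sketch, not the actual implementation):

    import numpy as np

    def mean_stats_sketch(eval_stats_list):
        """Average each metric over the metrics_dict of every EvalStats object."""
        dicts = [es.metrics_dict() for es in eval_stats_list]
        return {name: float(np.mean([d[name] for d in dicts])) for name in dicts[0]}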

class EvalStatsPlot(*args, **kwds)[source]#

Bases: Generic[TEvalStats], ABC

abstract create_figure(eval_stats: TEvalStats, subtitle: str) Optional[Figure][source]#
Parameters:
  • eval_stats – the evaluation stats from which to generate the plot

  • subtitle – the plot’s subtitle

Returns:

the figure or None if this plot is not applicable/cannot be created
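
A hedged sketch of a concrete plot class; the predicted-vs-true scatter plot and the y_true/y_predicted attributes on the eval stats object are assumptions:

    import matplotlib.pyplot as plt
    from sensai.evaluation.eval_stats.eval_stats_base import EvalStatsPlot  # assumed path

    class PredictedVsTruePlot(EvalStatsPlot):
        """Hypothetical plot of predicted values against ground truth."""

        def create_figure(self, eval_stats, subtitle):
            if len(eval_stats.y_true) == 0:  # plot not applicable without data
                return None
            fig, ax = plt.subplots()
            ax.scatter(eval_stats.y_true, eval_stats.y_predicted, s=10)
            ax.set_xlabel("ground truth")
            ax.set_ylabel("predicted")
            fig.suptitle(f"Predicted vs. true values\n{subtitle}")
            return fig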