class MetricTracker:
 | def __init__(
 |     self,
 |     metric_name: Union[str, List[str]],
 |     patience: Optional[int] = None
 | ) -> None

This class tracks a metric during training, both for early stopping and for knowing whether the current value is the best so far. It mimics the PyTorch state_dict / load_state_dict interface, so that it can be checkpointed along with your model and optimizer.

Some metrics improve by increasing; others by decreasing. You can provide a metric_name that starts with "+" to indicate an increasing metric, or "-" to indicate a decreasing metric.


  • metric_name : Union[str, List[str]]
    Specifies the metric or metrics to track. Metric names have to start with "+" for increasing metrics or "-" for decreasing ones. If you specify more than one, it tracks the sum of the increasing metrics minus the sum of the decreasing metrics.
  • patience : int, optional (default = None)
    If provided, then should_stop_early() returns True if we go this many epochs without seeing a new best value.
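
For example, a minimal sketch (the metric names "accuracy" and "loss" are placeholders for whatever your model reports, and the import path assumes AllenNLP's standard layout):

```python
from allennlp.training.metric_tracker import MetricTracker

# Track a single increasing metric; stop after 5 epochs without a new best.
tracker = MetricTracker(metric_name="+accuracy", patience=5)

# Track several metrics at once: the tracked value is the sum of the
# increasing metrics minus the sum of the decreasing ones.
tracker = MetricTracker(metric_name=["+accuracy", "-loss"], patience=5)
```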


class MetricTracker:
 | ...
 | def clear(self) -> None

Clears out the tracked metrics, but keeps the patience setting.


class MetricTracker:
 | ...
 | def state_dict(self) -> Dict[str, Any]

A Trainer can use this to serialize the state of the metric tracker.


class MetricTracker:
 | ...
 | def load_state_dict(self, state_dict: Dict[str, Any]) -> None

A Trainer can use this to hydrate a metric tracker from a serialized state.
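
A sketch of the round trip with state_dict, assuming `model` and `optimizer` are an ordinary PyTorch module and optimizer:

```python
import torch

# Save the tracker's state alongside the model and optimizer.
torch.save(
    {
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
        "metric_tracker": tracker.state_dict(),
    },
    "checkpoint.pt",
)

# Later, restore it into a freshly constructed tracker.
checkpoint = torch.load("checkpoint.pt")
tracker = MetricTracker(metric_name="+accuracy", patience=5)
tracker.load_state_dict(checkpoint["metric_tracker"])
```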


class MetricTracker:
 | ...
 | def add_metrics(self, metrics: Dict[str, float]) -> None

Records a new value of the metric and updates the internal state that depends on it: the best value seen so far and the count of epochs since the last improvement.
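
For example, with a tracker built over "accuracy" and "loss" (the values are illustrative):

```python
# After this call, is_best_so_far() and should_stop_early() reflect the new epoch.
tracker.add_metrics({"accuracy": 0.82, "loss": 0.41})
```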


class MetricTracker:
 | ...
 | def is_best_so_far(self) -> bool

Returns True if the most recent value of the metric is the best so far.


class MetricTracker:
 | ...
 | def should_stop_early(self) -> bool

Returns True if we have gone at least patience epochs without seeing a new best value.
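
Putting it together, a sketch of an early-stopping training loop; `train_one_epoch`, `evaluate`, `model`, `validation_data`, and `max_epochs` are hypothetical stand-ins for your own training code:

```python
import torch

tracker = MetricTracker(metric_name="+accuracy", patience=5)

for epoch in range(max_epochs):
    train_one_epoch(model)                      # hypothetical helper
    metrics = evaluate(model, validation_data)  # hypothetical, returns e.g. {"accuracy": 0.82}
    tracker.add_metrics(metrics)

    if tracker.is_best_so_far():
        torch.save(model.state_dict(), "best.pt")  # keep the best weights

    if tracker.should_stop_early():
        break  # patience exhausted: no new best for 5 consecutive epochs
```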


class MetricTracker:
 | ...
 | def combined_score(self, metrics: Dict[str, float]) -> float
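
Computes the overall score for a dict of metrics: the sum of the increasing metrics minus the sum of the decreasing ones, as specified by metric_name.

For example, for a tracker built with ["+accuracy", "-loss"] (values illustrative):

```python
tracker = MetricTracker(metric_name=["+accuracy", "-loss"])

score = tracker.combined_score({"accuracy": 0.8, "loss": 0.3})
# score == 0.8 - 0.3 == 0.5; higher combined scores are better.
```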