allennlp.training.metric_tracker

class allennlp.training.metric_tracker.MetricTracker(patience: Optional[int] = None, metric_name: str = None, should_decrease: bool = None)

Bases: object

This class tracks a metric during training, for the dual purposes of early stopping and of knowing whether the current value is the best so far. It mimics the PyTorch state_dict / load_state_dict interface, so that it can be checkpointed along with your model and optimizer.

Some metrics improve by increasing; others improve by decreasing. You can either explicitly supply should_decrease, or you can provide a metric_name, in which case "should decrease" is inferred from the first character, which must be "+" or "-".
Parameters

patience : int, optional (default = None)
    If provided, then should_stop_early() returns True if we go this many epochs without seeing a new best value.

metric_name : str, optional (default = None)
    If provided, it's used to infer whether we expect the metric values to increase (if it starts with "+") or decrease (if it starts with "-"). It is an error if it doesn't start with one of those. If it's not provided, you should specify should_decrease instead.

should_decrease : bool, optional (default = None)
    If metric_name isn't provided (in which case we can't infer should_decrease), then you have to specify it here.
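The sign-inference rule above can be sketched as a small helper. This is a minimal sketch of the documented behavior, not the actual AllenNLP implementation; the function name `infer_should_decrease` is hypothetical.

```python
from typing import Optional


def infer_should_decrease(metric_name: Optional[str] = None,
                          should_decrease: Optional[bool] = None) -> bool:
    """Resolve the direction of improvement, mirroring the documented rules."""
    if metric_name is not None:
        if metric_name.startswith("+"):
            return False  # higher is better, e.g. "+accuracy"
        if metric_name.startswith("-"):
            return True   # lower is better, e.g. "-loss"
        raise ValueError("metric_name must start with '+' or '-'")
    if should_decrease is None:
        raise ValueError("specify either metric_name or should_decrease")
    return should_decrease
```

For example, a tracker built from the name "-loss" would infer that smaller values are better, while "+accuracy" would infer the opposite.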
add_metric(self, metric: float) → None
    Record a new value of the metric and update the various things that depend on it.
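The core comparison add_metric must perform can be sketched as a pure function. This is an assumed illustration of the bookkeeping, not the library source; `update_best` is a hypothetical name.

```python
from typing import Optional, Tuple


def update_best(best: Optional[float], metric: float,
                should_decrease: bool) -> Tuple[float, bool]:
    """Return (new_best, is_best_so_far) after observing `metric`."""
    # The first value observed is always the best so far.
    improved = best is None or (
        metric < best if should_decrease else metric > best)
    return (metric if improved else best), improved
```

A run of non-improving values is what the patience counter would accumulate over.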
clear(self) → None
    Clears out the tracked metrics, but keeps the patience and should_decrease settings.
is_best_so_far(self) → bool
    Returns true if the most recent value of the metric is the best so far.
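Together, these methods support the usual checkpoint-the-best / stop-on-patience training loop. The sketch below uses a stand-in class with the same interface (add_metric, is_best_so_far, and the should_stop_early check described under patience); it is an illustration of the pattern, not the real MetricTracker.

```python
class _PatienceTracker:
    """Stand-in tracker with a MetricTracker-like interface (assumed)."""

    def __init__(self, patience: int, should_decrease: bool):
        self.patience = patience
        self.should_decrease = should_decrease
        self.best = None
        self.bad_epochs = 0          # epochs since the last new best
        self._latest_is_best = False

    def add_metric(self, metric: float) -> None:
        improved = self.best is None or (
            metric < self.best if self.should_decrease else metric > self.best)
        if improved:
            self.best, self.bad_epochs = metric, 0
        else:
            self.bad_epochs += 1
        self._latest_is_best = improved

    def is_best_so_far(self) -> bool:
        return self._latest_is_best

    def should_stop_early(self) -> bool:
        return self.bad_epochs >= self.patience


def train(losses):
    """Run over per-epoch losses; return (last epoch, best-checkpoint epochs)."""
    tracker = _PatienceTracker(patience=2, should_decrease=True)
    checkpoints = []
    for epoch, loss in enumerate(losses):
        tracker.add_metric(loss)
        if tracker.is_best_so_far():
            checkpoints.append(epoch)  # save a "best" checkpoint here
        if tracker.should_stop_early():
            return epoch, checkpoints
    return len(losses) - 1, checkpoints
```

With losses [0.9, 0.8, 0.85, 0.83, 0.7] and patience 2, the loop checkpoints epochs 0 and 1 and stops at epoch 3, never reaching the final value.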