allennlp.training.tensorboard_writer

class allennlp.training.tensorboard_writer.TensorboardWriter(get_batch_num_total: Callable[[], int], serialization_dir: Optional[str] = None, summary_interval: int = 100, histogram_interval: int = None, should_log_parameter_statistics: bool = True, should_log_learning_rate: bool = False)

Bases: allennlp.common.from_params.FromParams

Class that handles Tensorboard (and other) logging.
Parameters

- get_batch_num_total : Callable[[], int]
  A thunk that returns the number of batches so far. Most likely this will be a closure around an instance variable in your Trainer class.
- serialization_dir : str, optional (default = None)
  If provided, this is where the Tensorboard logs will be written.
- summary_interval : int, optional (default = 100)
  Most statistics will only be written out once every this many batches.
- histogram_interval : int, optional (default = None)
  If provided, activation histograms will be written out every this many batches. If None, activation histograms will not be written out.
- should_log_parameter_statistics : bool, optional (default = True)
  Whether to log parameter statistics.
- should_log_learning_rate : bool, optional (default = False)
  Whether to log the learning rate.
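As the get_batch_num_total description suggests, the thunk is typically a zero-argument closure over a trainer's batch counter. The sketch below illustrates that pattern with a hypothetical MiniTrainer stand-in (not part of allennlp), so the example stays self-contained; in real use you would pass the same kind of thunk to TensorboardWriter.

```python
from typing import Callable


class MiniTrainer:
    """Hypothetical stand-in for a Trainer, used only to illustrate how
    get_batch_num_total closes over an instance variable."""

    def __init__(self) -> None:
        self._batch_num_total = 0

    def run_batches(self, n: int) -> None:
        # Each processed batch bumps the counter the thunk reads.
        for _ in range(n):
            self._batch_num_total += 1


trainer = MiniTrainer()

# The thunk: a zero-argument callable closing over the trainer instance.
get_batch_num_total: Callable[[], int] = lambda: trainer._batch_num_total

trainer.run_batches(3)
print(get_batch_num_total())  # reflects the current batch count: 3
```

Because the thunk is evaluated lazily, the writer always sees the trainer's up-to-date batch count rather than a value frozen at construction time.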
close(self) → None

Calls the close method of the SummaryWriters, which makes sure that pending scalars are flushed to disk and the tensorboard event files are closed properly.
log_histograms(self, model: allennlp.models.model.Model, histogram_parameters: Set[str]) → None

Send histograms of parameters to tensorboard.
log_learning_rates(self, model: allennlp.models.model.Model, optimizer: torch.optim.optimizer.Optimizer)

Send current parameter-specific learning rates to tensorboard.
log_metrics(self, train_metrics: dict, val_metrics: dict = None, epoch: int = None, log_to_console: bool = False) → None

Sends all of the train metrics (and validation metrics, if provided) to tensorboard.