TensorboardWriter( self, serialization_dir: Optional[str] = None, summary_interval: int = 100, histogram_interval: Optional[int] = None, batch_size_interval: Optional[int] = None, should_log_parameter_statistics: bool = True, should_log_learning_rate: bool = False, get_batch_num_total: Callable[[], int] = None, ) -> None
Class that handles Tensorboard (and other) logging.
- serialization_dir : str, optional (default = None) If provided, this is where the Tensorboard logs will be written.
- summary_interval : int, optional (default = 100) Most statistics will be written out only once every `summary_interval` batches.
- histogram_interval : int, optional (default = None) If provided, activation histograms will be written out once every `histogram_interval` batches. If None, activation histograms will not be written out.
- When this parameter is specified, the following additional logging is enabled:
* Histograms of model parameters
* The ratio of parameter update norm to parameter norm
* Histogram of layer activations
We log histograms of the parameters returned by
`model.get_parameters_for_histogram_tensorboard_logging`. The layer activations are logged for any modules in the
`Model` that have the attribute `should_log_activations` set to
`True`. Logging histograms requires a number of GPU-to-CPU copies during training and is typically slow, so we recommend logging histograms relatively infrequently.
- Note: only `Module`s that return tensors, tuples of tensors, or dicts with tensors as values currently support activation logging.
- batch_size_interval : int, optional (default = None) If defined, how often to log the average batch size.
- should_log_parameter_statistics : bool, optional (default = True) Whether to log parameter statistics (mean and standard deviation of parameters and gradients).
- should_log_learning_rate : bool, optional (default = False) Whether to log (parameter-specific) learning rate.
- get_batch_num_total : Callable[[], int], optional (default = None)
A thunk that returns the number of batches so far. Most likely this will
be a closure around an instance variable in your
`Trainer` class. Because of circular dependencies in constructing this object and the
`Trainer`, this is typically
`None` when you construct the object, but it gets set inside the constructor of our `Trainer`.
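The thunk pattern described above can be sketched without any allennlp dependencies. `MiniTrainer` and its method names are hypothetical stand-ins; the point is that the zero-argument callable closes over an instance variable, so the writer always sees the trainer's current batch count even though the callable is handed over once:

```python
# Minimal sketch (hypothetical names) of a get_batch_num_total thunk:
# a closure over a counter that the trainer increments per batch.
class MiniTrainer:
    def __init__(self):
        self._batch_num_total = 0
        # Zero-argument callable ("thunk") that reads the current
        # value of the instance variable each time it is called.
        self.get_batch_num_total = lambda: self._batch_num_total

    def train_one_batch(self):
        self._batch_num_total += 1


trainer = MiniTrainer()
thunk = trainer.get_batch_num_total  # would be passed to TensorboardWriter
trainer.train_one_batch()
trainer.train_one_batch()
print(thunk())  # → 2: the closure sees the updated counter
```

This is why the circular dependency is harmless: the writer can be built with the thunk set later, as long as it is in place before the first logging call.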
TensorboardWriter.close(self) -> None
Calls the `close` method of the
`SummaryWriter`s, which makes sure that pending
scalars are flushed to disk and the tensorboard event files are closed properly.
TensorboardWriter.log_histograms(self, model: allennlp.models.model.Model) -> None
Send histograms of parameters to tensorboard.
TensorboardWriter.log_learning_rates( self, model: allennlp.models.model.Model, optimizer: torch.optim.optimizer.Optimizer, )
Send current parameter-specific learning rates to tensorboard.
TensorboardWriter.log_metrics( self, train_metrics: dict, val_metrics: dict = None, epoch: int = None, log_to_console: bool = False, ) -> None
Sends all of the train metrics (and validation metrics, if provided) to tensorboard.
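To keep train and validation curves grouped separately in the tensorboard UI, metrics of this kind are typically written under distinct name prefixes. A dependency-free sketch of that naming scheme (the `training/` and `validation/` prefixes are assumptions for illustration, not a guaranteed part of the API):

```python
def namespace_metrics(train_metrics, val_metrics=None):
    # Prefix each metric name so train and validation series land in
    # separate groups in the tensorboard UI (prefix names assumed here).
    merged = {f"training/{k}": v for k, v in train_metrics.items()}
    if val_metrics is not None:
        merged.update({f"validation/{k}": v for k, v in val_metrics.items()})
    return merged


print(namespace_metrics({"loss": 0.5}, {"loss": 0.7}))
# → {'training/loss': 0.5, 'validation/loss': 0.7}
```

With `log_to_console=True`, the same metrics are additionally echoed to the standard logger rather than only to the event files.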
TensorboardWriter.log_parameter_and_gradient_statistics( self, model: allennlp.models.model.Model, batch_grad_norm: float, ) -> None
Send the mean and std of all parameters and gradients to tensorboard, as well as logging the average gradient norm.
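The per-parameter statistics can be illustrated with a small self-contained sketch. Plain Python lists stand in for torch tensors, and the `parameter_mean/` and `parameter_std/` scalar names are assumptions for illustration:

```python
import statistics


def parameter_statistics(named_parameters):
    """Compute mean and population std per parameter, mirroring the kind
    of scalars the writer logs. Lists stand in for torch tensors so the
    sketch stays dependency-free; scalar names are assumed."""
    stats = {}
    for name, values in named_parameters.items():
        stats[f"parameter_mean/{name}"] = statistics.fmean(values)
        stats[f"parameter_std/{name}"] = statistics.pstdev(values)
    return stats


print(parameter_statistics({"w": [0.0, 2.0]}))
# → {'parameter_mean/w': 1.0, 'parameter_std/w': 1.0}
```

In the real method these statistics are computed on the GPU tensors of the model's parameters and gradients, which is why this logging is gated behind `should_log_parameter_statistics`.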