allennlp.training.tensorboard_writer#

TensorboardWriter#

TensorboardWriter(
    self,
    serialization_dir: Optional[str] = None,
    summary_interval: int = 100,
    histogram_interval: int = None,
    batch_size_interval: Optional[int] = None,
    should_log_parameter_statistics: bool = True,
    should_log_learning_rate: bool = False,
    get_batch_num_total: Callable[[], int] = None,
) -> None

Class that handles TensorBoard (and other) logging.

Parameters

  • serialization_dir : str, optional (default = None) If provided, this is where the TensorBoard logs will be written.
  • summary_interval : int, optional (default = 100) Most statistics will be written out only once every this many batches.
  • histogram_interval : int, optional (default = None) If provided, activation histograms will be written out every this many batches. If None, activation histograms will not be written out.
  • When this parameter is specified, the following additional logging is enabled:
      • Histograms of model parameters
      • The ratio of parameter update norm to parameter norm
      • Histograms of layer activations
    We log histograms of the parameters returned by model.get_parameters_for_histogram_tensorboard_logging. Layer activations are logged for any modules in the Model that have the attribute should_log_activations set to True. Logging histograms requires a number of GPU-to-CPU copies during training and is typically slow, so we recommend logging histograms relatively infrequently.
  • Note: only Modules that return tensors, tuples of tensors, or dicts with tensors as values currently support activation logging.
  • batch_size_interval : int, optional (default = None) If provided, the average batch size will be logged every this many batches.
  • should_log_parameter_statistics : bool, optional (default = True) Whether to log parameter statistics (mean and standard deviation of parameters and gradients).
  • should_log_learning_rate : bool, optional (default = False) Whether to log (parameter-specific) learning rate.
  • get_batch_num_total : Callable[[], int], optional (default = None) A thunk that returns the number of batches so far. Most likely this will be a closure around an instance variable in your Trainer class. Because of circular dependencies in constructing this object and the Trainer, this is typically None when you construct the object, but it gets set inside the constructor of our Trainer.
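The get_batch_num_total parameter is a plain closure, so the writer always reads the trainer's live batch counter rather than a stale copy. A minimal sketch of that wiring, using a toy stand-in for the Trainer (ToyTrainer and its attribute names are illustrative, not the actual AllenNLP internals):

```python
class ToyTrainer:
    """Stand-in for a trainer that tracks how many batches it has seen."""

    def __init__(self) -> None:
        self._batch_num_total = 0

    def train_batch(self) -> None:
        self._batch_num_total += 1


trainer = ToyTrainer()

# The thunk closes over the trainer instance; calling it later always
# returns the current count, which is why it can be attached after
# construction to break the circular dependency mentioned above.
get_batch_num_total = lambda: trainer._batch_num_total

trainer.train_batch()
trainer.train_batch()
print(get_batch_num_total())  # 2
```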

close#

TensorboardWriter.close(self) -> None

Calls the close method of the SummaryWriters, which makes sure that pending scalars are flushed to disk and the TensorBoard event files are closed properly.

log_histograms#

TensorboardWriter.log_histograms(self, model: allennlp.models.model.Model) -> None

Send histograms of parameters to TensorBoard.

log_learning_rates#

TensorboardWriter.log_learning_rates(
    self,
    model: allennlp.models.model.Model,
    optimizer: torch.optim.optimizer.Optimizer,
)

Send the current parameter-specific learning rates to TensorBoard.
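Per-parameter learning rates in PyTorch live on the optimizer's param_groups, each of which is a dict with an "lr" key. A hedged sketch of reading them out, using a stand-in list of dicts instead of a real torch.optim optimizer (the "name" keys are illustrative; real param groups hold the parameters themselves):

```python
# Stand-in for optimizer.param_groups: a list of dicts, each with an
# "lr" entry, mirroring the torch.optim convention.
param_groups = [
    {"name": "encoder", "lr": 1e-3},
    {"name": "classifier", "lr": 1e-4},
]

# Collect one scalar per group, the shape of data a writer would log.
rates = {group["name"]: group["lr"] for group in param_groups}
print(rates)  # {'encoder': 0.001, 'classifier': 0.0001}
```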

log_metrics#

TensorboardWriter.log_metrics(
    self,
    train_metrics: dict,
    val_metrics: dict = None,
    epoch: int = None,
    log_to_console: bool = False,
) -> None

Sends all of the train metrics (and validation metrics, if provided) to TensorBoard.

log_parameter_and_gradient_statistics#

TensorboardWriter.log_parameter_and_gradient_statistics(
    self,
    model: allennlp.models.model.Model,
    batch_grad_norm: float,
) -> None

Send the mean and standard deviation of all parameters and gradients to TensorBoard, and also log the average gradient norm.
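The quantities involved are simple: a mean and standard deviation per tensor, plus an L2 norm over the gradients. An illustrative sketch in plain Python (no torch), with lists of floats standing in for parameter and gradient tensors:

```python
import math
from statistics import fmean, pstdev


def parameter_stats(values: list[float]) -> dict:
    """Mean and (population) standard deviation of one tensor's values."""
    return {"mean": fmean(values), "std": pstdev(values)}


def grad_norm(grads: list[float]) -> float:
    """L2 norm over all gradient entries, analogous to batch_grad_norm."""
    return math.sqrt(sum(g * g for g in grads))


print(parameter_stats([1.0, 2.0, 3.0]))  # mean 2.0, std ~0.816
print(grad_norm([3.0, 4.0]))             # 5.0
```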