allennlp.training.tensorboard_writer

class allennlp.training.tensorboard_writer.TensorboardWriter(get_batch_num_total: Callable[[], int], serialization_dir: Optional[str] = None, summary_interval: int = 100, histogram_interval: int = None, should_log_parameter_statistics: bool = True, should_log_learning_rate: bool = False)[source]

Bases: allennlp.common.from_params.FromParams

Class that handles Tensorboard (and other) logging.

Parameters
get_batch_num_total : Callable[[], int]

A thunk that returns the number of batches so far. Most likely this will be a closure around an instance variable in your Trainer class.

serialization_dir : str, optional (default = None)

If provided, this is where the Tensorboard logs will be written.

summary_interval : int, optional (default = 100)

Most statistics will be written out only once every this many batches.

histogram_interval : int, optional (default = None)

If provided, activation histograms will be written out once every this many batches. If None, activation histograms will not be written out.

should_log_parameter_statistics : bool, optional (default = True)

Whether to log parameter statistics.

should_log_learning_rate : bool, optional (default = False)

Whether to log learning rate.
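
A minimal construction sketch, assuming a module-level counter stands in for the Trainer's batch count; the output directory and interval values below are placeholders, not defaults taken from this page:

    from allennlp.training.tensorboard_writer import TensorboardWriter

    # Placeholder batch counter; in a real Trainer the callable is usually a
    # closure over the trainer's own batch count, as noted above.
    batches_seen = 0

    def get_batch_num_total() -> int:
        return batches_seen

    writer = TensorboardWriter(
        get_batch_num_total=get_batch_num_total,
        serialization_dir="/tmp/example_run",  # hypothetical log directory
        summary_interval=100,       # most scalars every 100 batches
        histogram_interval=1000,    # activation histograms every 1000 batches
        should_log_parameter_statistics=True,
        should_log_learning_rate=True,
    )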

add_train_histogram(self, name: str, values: torch.Tensor) → None[source]
add_train_scalar(self, name: str, value: float, timestep: int = None) → None[source]
add_validation_scalar(self, name: str, value: float, timestep: int = None) → None[source]
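
A usage sketch for the three methods above; writer is the instance constructed earlier, the names and values are made up, and the claim that an omitted timestep falls back to get_batch_num_total() is an assumption, not something this page states:

    import torch

    writer.add_train_scalar("loss", 0.73)                     # step presumably inferred (assumption)
    writer.add_validation_scalar("loss", 0.81, timestep=500)  # explicit step
    writer.add_train_histogram("encoder_weights", torch.randn(1000))
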
close(self) → None[source]

Calls the close method of the SummaryWriter objects, which makes sure that pending scalars are flushed to disk and the tensorboard event files are closed properly.

enable_activation_logging(self, model: allennlp.models.model.Model) → None[source]
log_activation_histogram(self, outputs, log_prefix: str) → None[source]
log_histograms(self, model: allennlp.models.model.Model, histogram_parameters: Set[str]) → None[source]

Send histograms of parameters to tensorboard.
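
A hedged sketch; model is an allennlp.models.Model built elsewhere, and the parameter names in the set are hypothetical, so substitute names that actually exist in your model:

    if writer.should_log_histograms_this_batch():
        writer.log_histograms(model, histogram_parameters={"encoder.weight", "classifier.bias"})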

log_learning_rates(self, model: allennlp.models.model.Model, optimizer: torch.optim.optimizer.Optimizer)[source]

Send current parameter-specific learning rates to tensorboard.
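
For example, paired with a standard PyTorch optimizer (model is a placeholder built elsewhere, and the hyperparameters are made up):

    import torch

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    writer.log_learning_rates(model, optimizer)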

log_metrics(self, train_metrics: dict, val_metrics: dict = None, epoch: int = None, log_to_console: bool = False) → None[source]

Sends all of the train metrics (and validation metrics, if provided) to tensorboard.
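
A sketch of epoch-level logging; the metric dictionaries and epoch number are made up:

    train_metrics = {"accuracy": 0.91, "loss": 0.42}
    val_metrics = {"accuracy": 0.88, "loss": 0.51}
    writer.log_metrics(train_metrics, val_metrics=val_metrics, epoch=3, log_to_console=True)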

log_parameter_and_gradient_statistics(self, model: allennlp.models.model.Model, batch_grad_norm: float) → None[source]

Send the mean and std of all parameters and gradients to tensorboard, and log the average gradient norm.
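
A sketch of per-batch use, gated on the summary interval; the gradient-norm value is a placeholder that would normally come from the trainer's gradient rescaling step:

    if writer.should_log_this_batch():
        writer.log_parameter_and_gradient_statistics(model, batch_grad_norm=2.3)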

should_log_histograms_this_batch(self) → bool[source]
should_log_this_batch(self) → bool[source]
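
Putting the pieces together, a hedged sketch of how a training loop might gate its logging on these two predicates and close the writer at the end; everything other than the TensorboardWriter methods (data_loader, train_step, model, the metric name) is a placeholder:

    writer.enable_activation_logging(model)  # only has an effect when histogram_interval is set (see above)

    for batch in data_loader:        # hypothetical iterable of batches
        loss = train_step(batch)     # hypothetical training step returning a float
        batches_seen += 1            # advances the counter behind get_batch_num_total

        if writer.should_log_this_batch():
            writer.add_train_scalar("loss", loss)

        if writer.should_log_histograms_this_batch():
            writer.log_histograms(model, histogram_parameters={"encoder.weight"})

    writer.close()  # flush pending scalars and close the event files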