allennlp.training.callbacks

class allennlp.training.callbacks.callback.Callback[source]

Bases: allennlp.common.registrable.Registrable

The base class for Callbacks that are used by the CallbackTrainer. Notice that apart from serializing / deserializing training state, there is no other “API”.

In a subclass you would register methods to handle specific events using the handle_event decorator defined above; for example

@handle_event(Events.EPOCH_END)
def epoch_end_stuff(self, trainer) -> None:
    ...

@handle_event(Events.TRAINING_END)
def training_end_stuff(self, trainer) -> None:
    ...

In this way, each callback can respond to whatever events it wants. Notice also that the methods take only the trainer as input and return nothing, which means that any shared state needs to belong to the trainer itself. (Each callback can of course maintain its own non-shared state.)
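
For instance, a complete (if minimal) callback might look like the following sketch. The class name, registered name, and log messages are illustrative; batch_num_total is one of the attributes the trainer exposes (it is the default entry in the Checkpoint callback’s other_attrs).

from allennlp.training.callbacks.callback import Callback, handle_event
from allennlp.training.callbacks.events import Events

@Callback.register("log-progress")  # Callback subclasses Registrable, so subclasses can be registered
class LogProgress(Callback):
    @handle_event(Events.EPOCH_END)
    def log_epoch(self, trainer) -> None:
        # Shared state lives on the trainer; this callback keeps no state of its own.
        print(f"finished an epoch after {trainer.batch_num_total} total batches")

    @handle_event(Events.TRAINING_END)
    def log_training_end(self, trainer) -> None:
        print("training finished")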

get_training_state(self) → dict[source]

If this callback contains state that should be checkpointed for training, return it here (with a key that’s unique to this callback). If the state lives in a pytorch object with a state_dict method, this should return the output of state_dict(), not the object itself.

This default implementation suffices when there’s no state to checkpoint.

restore_training_state(self, training_state: dict) → None[source]

Given a dict of training state, pull out the relevant parts and rehydrate the state of this callback however is necessary.

This default implementation suffices when there’s no state to restore.
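
As a hedged sketch, a callback that owns a pytorch object (here a hypothetical scheduler attribute) might override this pair of methods as follows, using a key unique to the callback:

from allennlp.training.callbacks.callback import Callback

class MyCallback(Callback):
    def __init__(self, scheduler) -> None:
        # hypothetical pytorch object with state_dict() / load_state_dict()
        self.scheduler = scheduler

    def get_training_state(self) -> dict:
        # Persist the state_dict() output, not the pytorch object itself.
        return {"my_callback_scheduler": self.scheduler.state_dict()}

    def restore_training_state(self, training_state: dict) -> None:
        state_dict = training_state.pop("my_callback_scheduler", None)
        if state_dict is not None:
            self.scheduler.load_state_dict(state_dict)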

allennlp.training.callbacks.callback.handle_event(event: str, priority: int = 0)[source]
class allennlp.training.callbacks.callback_handler.CallbackHandler(callbacks: Iterable[allennlp.training.callbacks.callback.Callback], state: allennlp.training.trainer_base.TrainerBase, verbose: bool = False)[source]

Bases: object

A CallbackHandler owns zero or more Callbacks, each of which is associated with some “event”. It then exposes a fire_event method, which calls each callback associated with that event, ordered by priority.

The callbacks take no parameters; instead they read from and write to this handler’s state, which should be a Trainer.

Parameters
callbacks : Iterable[Callback]

The callbacks to be handled.

state : TrainerBase

The trainer from which the callbacks will read state and to which the callbacks will write state.

verbose : bool, optional (default = False)

If true, logs every event -> callback dispatch. Use this only for debugging purposes.

add_callback(self, callback: allennlp.training.callbacks.callback.Callback) → None[source]
callbacks(self) → List[allennlp.training.callbacks.callback.Callback][source]

Returns the callbacks associated with this handler. Each callback may be registered under multiple events, but each is returned only once.

fire_event(self, event: str) → None[source]

Runs every callback registered for the provided event, ordered by their priorities.
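
A hedged sketch of the ordering behavior, assuming that lower priority values fire first; the callback classes are illustrative, and state is left as None because neither callback reads the trainer:

from allennlp.training.callbacks.callback import Callback, handle_event
from allennlp.training.callbacks.callback_handler import CallbackHandler
from allennlp.training.callbacks.events import Events

class RunsFirst(Callback):
    @handle_event(Events.EPOCH_END, priority=-10)
    def report(self, trainer) -> None:
        print("fires before RunsLast")

class RunsLast(Callback):
    @handle_event(Events.EPOCH_END, priority=10)
    def report(self, trainer) -> None:
        print("fires after RunsFirst")

# state would normally be the trainer; these callbacks never touch it.
handler = CallbackHandler([RunsFirst(), RunsLast()], state=None)
handler.fire_event(Events.EPOCH_END)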

class allennlp.training.callbacks.callback_handler.EventHandler(name, callback, handler, priority)[source]

Bases: tuple

property callback

Alias for field number 1

property handler

Alias for field number 2

property name

Alias for field number 0

property priority

Alias for field number 3

class allennlp.training.callbacks.events.Events[source]

Bases: object

BACKWARD = 'BACKWARD'
BATCH_END = 'BATCH_END'
BATCH_START = 'BATCH_START'
EPOCH_END = 'EPOCH_END'
EPOCH_START = 'EPOCH_START'
ERROR = 'ERROR'
FORWARD = 'FORWARD'
TRAINING_END = 'TRAINING_END'
TRAINING_START = 'TRAINING_START'
VALIDATE = 'VALIDATE'
class allennlp.training.callbacks.checkpoint.Checkpoint(checkpointer: allennlp.training.checkpointer.Checkpointer, model_save_interval: Optional[float] = None, state_dict_attrs: List[str] = None, other_attrs: List[str] = None)[source]

Bases: allennlp.training.callbacks.callback.Callback

Callback that orchestrates checkpointing of your model and training state.

Parameters
checkpointer : Checkpointer

The checkpoint reader and writer to use.

model_save_interval : float, optional (default = None)

If provided, then serialize models every model_save_interval seconds within single epochs. In all cases, models are also saved at the end of every epoch if serialization_dir is provided.

state_dict_attrs : List[str], optional (default = [‘optimizer’])

The attributes of the Trainer state whose .state_dict() should be persisted at each checkpoint.

other_attrs : List[str], optional (default = [‘batch_num_total’])

The attributes of the Trainer state that should be persisted as-is at each checkpoint.

collect_moving_averages(self, trainer: 'CallbackTrainer')[source]
classmethod from_params(params: allennlp.common.params.Params, serialization_dir: str) → 'Checkpoint'[source]

This is the automatic implementation of from_params. Any class that subclasses FromParams (or Registrable, which itself subclasses FromParams) gets this implementation for free. If you want your class to be instantiated from params in the “obvious” way – pop off parameters and hand them to your constructor with the same names – this provides that functionality.

If you need more complex logic in your from_params method, you’ll have to implement your own method that overrides this one.

load_best_model_state(self, trainer: 'CallbackTrainer')[source]
restore_checkpoint(self, trainer: 'CallbackTrainer')[source]
save_model_at_batch_end(self, trainer: 'CallbackTrainer')[source]
save_model_at_epoch_end(self, trainer: 'CallbackTrainer')[source]
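
A hedged construction sketch, assuming Checkpointer accepts the serialization directory as a constructor argument; the directory and interval below are illustrative:

from allennlp.training.checkpointer import Checkpointer
from allennlp.training.callbacks.checkpoint import Checkpoint

checkpointer = Checkpointer(serialization_dir="/tmp/my_experiment")
checkpoint_callback = Checkpoint(
    checkpointer=checkpointer,
    model_save_interval=3600.0,  # additionally save once an hour within each epoch
)
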
class allennlp.training.callbacks.gradient_norm_and_clip.GradientNormAndClip(grad_norm: Optional[float] = None, grad_clipping: Optional[float] = None)[source]

Bases: allennlp.training.callbacks.callback.Callback

Applies gradient norm and/or clipping.

Parameters
grad_norm : float, optional (default = None)

If provided, we rescale the gradients so that their total norm is at most this value before the optimization step.

grad_clipping : float, optional (default = None)

If provided, gradient values are clipped so that their absolute value is at most this.

enable_gradient_clipping(self, trainer: 'CallbackTrainer')[source]
rescale_gradients(self, trainer: 'CallbackTrainer')[source]
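
A brief construction sketch (the values are illustrative): grad_norm rescales the whole gradient so that its total norm does not exceed the given value, while grad_clipping clamps individual gradient values.

from allennlp.training.callbacks.gradient_norm_and_clip import GradientNormAndClip

grad_callback = GradientNormAndClip(grad_norm=5.0, grad_clipping=1.0)
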
class allennlp.training.callbacks.update_learning_rate.UpdateLearningRate(learning_rate_scheduler: allennlp.training.learning_rate_schedulers.learning_rate_scheduler.LearningRateScheduler)[source]

Bases: allennlp.training.callbacks.callback.Callback

Callback that runs the learning rate scheduler.

Parameters
learning_rate_scheduler : LearningRateScheduler

The scheduler to run.

classmethod from_params(params: allennlp.common.params.Params, optimizer: torch.optim.optimizer.Optimizer) → 'UpdateLearningRate'[source]

This is the automatic implementation of from_params. Any class that subclasses FromParams (or Registrable, which itself subclasses FromParams) gets this implementation for free. If you want your class to be instantiated from params in the “obvious” way – pop off parameters and hand them to your constructor with the same names – this provides that functionality.

If you need more complex logic in your from_params method, you’ll have to implement your own method that overrides this one.

get_training_state(self) → dict[source]

We need to persist the learning_rate_scheduler state as training state.

restore_training_state(self, training_state: dict) → None[source]

Given a dict of training state, pull out the relevant parts and rehydrate the state of this callback however is necessary.

This default implementation suffices when there’s no state to restore.

step(self, trainer: 'CallbackTrainer')[source]
step_batch(self, trainer: 'CallbackTrainer')[source]
class allennlp.training.callbacks.log_to_tensorboard.LogToTensorboard(tensorboard: allennlp.training.tensorboard_writer.TensorboardWriter, log_batch_size_period: int = None)[source]

Bases: allennlp.training.callbacks.callback.Callback

Callback that handles all Tensorboard logging.

Parameters
tensorboard : TensorboardWriter

The TensorboardWriter instance to write to.

log_batch_size_period : int, optional (default = None)

If provided, we’ll log the average batch sizes to Tensorboard every this-many batches.

batch_end_logging(self, trainer: 'CallbackTrainer')[source]
copy_current_parameters(self, trainer: 'CallbackTrainer')[source]
epoch_end_logging(self, trainer: 'CallbackTrainer')[source]
classmethod from_params(serialization_dir: str, params: allennlp.common.params.Params) → 'LogToTensorboard'[source]

This is the automatic implementation of from_params. Any class that subclasses FromParams (or Registrable, which itself subclasses FromParams) gets this implementation for free. If you want your class to be instantiated from params in the “obvious” way – pop off parameters and hand them to your constructor with the same names – this provides that functionality.

If you need more complex logic in your from_params method, you’ll have to implement your own method that overrides this one.

training_end(self, trainer: 'CallbackTrainer')[source]
training_start(self, trainer: 'CallbackTrainer')[source]
class allennlp.training.callbacks.update_momentum.UpdateMomentum(momentum_scheduler: allennlp.training.momentum_schedulers.momentum_scheduler.MomentumScheduler)[source]

Bases: allennlp.training.callbacks.callback.Callback

Callback that runs a Momentum Scheduler.

Parameters
momentum_scheduler : MomentumScheduler

The momentum scheduler to run.

classmethod from_params(params: allennlp.common.params.Params, optimizer: torch.optim.optimizer.Optimizer) → 'UpdateMomentum'[source]

This is the automatic implementation of from_params. Any class that subclasses FromParams (or Registrable, which itself subclasses FromParams) gets this implementation for free. If you want your class to be instantiated from params in the “obvious” way – pop off parameters and hand them to your constructor with the same names – this provides that functionality.

If you need more complex logic in your from_params method, you’ll have to implement your own method that overrides this one.

get_training_state(self) → dict[source]

If this callback contains state that should be checkpointed for training, return it here (with a key that’s unique to this callback). If the state lives in a pytorch object with a state_dict method, this should return the output of state_dict(), not the object itself.

This default implementation suffices when there’s no state to checkpoint.

restore_training_state(self, training_state: dict) → None[source]

Given a dict of training state, pull out the relevant parts and rehydrate the state of this callback however is necessary.

This default implementation suffices when there’s no state to restore.

step(self, trainer: 'CallbackTrainer')[source]
step_batch(self, trainer: 'CallbackTrainer')[source]
class allennlp.training.callbacks.post_to_url.PostToUrl(url: str, message: str = 'Your experiment has finished running!', key: str = 'text')[source]

Bases: allennlp.training.callbacks.callback.Callback

Posts to a URL when training finishes. Useful if you want to, for example, create a Slack webhook.

Parameters
url : str

The URL to post to.

message : str, optional (default = “Your experiment has finished running!”)

The message to post.

key : str, optional (default = “text”)

The key to use in the JSON message blob.

post_to_url(self, trainer: 'CallbackTrainer')[source]
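
A hedged usage sketch: with key = “text”, the callback posts a JSON body of the form {"text": message} when training ends. The webhook URL below is a placeholder.

from allennlp.training.callbacks.post_to_url import PostToUrl

notify = PostToUrl(
    url="https://hooks.example.com/my-webhook",  # placeholder URL
    message="Your experiment has finished running!",
    key="text",
)
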
class allennlp.training.callbacks.track_metrics.TrackMetrics(patience: int = None, validation_metric: str = '-loss')[source]

Bases: allennlp.training.callbacks.callback.Callback

Callback that handles tracking of metrics and (potentially) early stopping.

Parameters
patience : int, optional (default = None)

If a positive number is provided, training will stop when the supplied validation_metric has not improved in this many epochs.

validation_metric : str, optional (default = “-loss”)

The metric to use for early stopping. The initial +/- indicates whether we expect the metric to increase or decrease during training.
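
For example, a hedged construction that stops training after five epochs without improvement in validation accuracy (the metric name is illustrative; the leading “+” marks a metric that should increase):

from allennlp.training.callbacks.track_metrics import TrackMetrics

track_metrics = TrackMetrics(patience=5, validation_metric="+accuracy")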

collect_train_metrics(self, trainer: 'CallbackTrainer')[source]
collect_val_metrics(self, trainer: 'CallbackTrainer')[source]
end_of_epoch(self, trainer: 'CallbackTrainer')[source]
get_training_state(self) → dict[source]

If this callback contains state that should be checkpointed for training, return it here (with a key that’s unique to this callback). If the state lives in a pytorch object with a state_dict method, this should return the output of state_dict(), not the object itself.

This default implementation suffices when there’s no state to checkpoint.

measure_cpu_gpu(self, trainer: 'CallbackTrainer')[source]
restore_training_state(self, training_state: dict) → None[source]

Given a dict of training state, pull out the relevant parts and rehydrate the state of this callback however is necessary.

This default implementation suffices when there’s no state to restore.

set_up_metrics(self, trainer: 'CallbackTrainer')[source]
class allennlp.training.callbacks.validate.Validate(validation_data: Iterable[allennlp.data.instance.Instance], validation_iterator: allennlp.data.iterators.data_iterator.DataIterator)[source]

Bases: allennlp.training.callbacks.callback.Callback

Evaluates the trainer’s Model using the provided validation dataset. Uses the results to populate trainer.val_metrics.

Parameters
validation_data : Iterable[Instance]

The instances in the validation dataset.

validation_iterator : DataIterator

The iterator to use in the evaluation.

collect_moving_averages(self, trainer: 'CallbackTrainer')[source]
set_validate(self, trainer: 'CallbackTrainer')[source]
validate(self, trainer: 'CallbackTrainer')[source]
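
A hedged wiring sketch; validation_instances is a placeholder for your Iterable[Instance], and BasicIterator is one concrete DataIterator (the batch size is illustrative):

from allennlp.data.iterators import BasicIterator
from allennlp.training.callbacks.validate import Validate

validate_callback = Validate(
    validation_data=validation_instances,  # placeholder: your validation Instances
    validation_iterator=BasicIterator(batch_size=32),
)
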
class allennlp.training.callbacks.update_moving_average.UpdateMovingAverage(moving_average: allennlp.training.moving_average.MovingAverage)[source]

Bases: allennlp.training.callbacks.callback.Callback

Callback that updates a MovingAverage of the model parameters during training.

Parameters
moving_average : MovingAverage

The MovingAverage object to update.

apply_moving_average(self, trainer: 'CallbackTrainer') → None[source]
classmethod from_params(params: allennlp.common.params.Params, model: allennlp.models.model.Model) → 'UpdateMovingAverage'[source]

This is the automatic implementation of from_params. Any class that subclasses FromParams (or Registrable, which itself subclasses FromParams) gets this implementation for free. If you want your class to be instantiated from params in the “obvious” way – pop off parameters and hand them to your constructor with the same names – this provides that functionality.

If you need more complex logic in your from_params method, you’ll have to implement your own method that overrides this one.
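
A hedged construction sketch using ExponentialMovingAverage, one concrete MovingAverage; model is a placeholder for your Model, and the decay value is illustrative:

from allennlp.training.moving_average import ExponentialMovingAverage
from allennlp.training.callbacks.update_moving_average import UpdateMovingAverage

moving_average = ExponentialMovingAverage(model.named_parameters(), decay=0.9999)
ema_callback = UpdateMovingAverage(moving_average)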