allennlp.models.esim

class allennlp.models.esim.ESIM(vocab: allennlp.data.vocabulary.Vocabulary, text_field_embedder: allennlp.modules.text_field_embedders.text_field_embedder.TextFieldEmbedder, encoder: allennlp.modules.seq2seq_encoders.seq2seq_encoder.Seq2SeqEncoder, similarity_function: allennlp.modules.similarity_functions.similarity_function.SimilarityFunction, projection_feedforward: allennlp.modules.feedforward.FeedForward, inference_encoder: allennlp.modules.seq2seq_encoders.seq2seq_encoder.Seq2SeqEncoder, output_feedforward: allennlp.modules.feedforward.FeedForward, output_logit: allennlp.modules.feedforward.FeedForward, dropout: float = 0.5, initializer: allennlp.nn.initializers.InitializerApplicator = <allennlp.nn.initializers.InitializerApplicator object>, regularizer: Optional[allennlp.nn.regularizers.regularizer_applicator.RegularizerApplicator] = None)

Bases: allennlp.models.model.Model

This Model implements the ESIM sequence model described in “Enhanced LSTM for Natural Language Inference” by Chen et al., 2017.

Parameters
vocab : Vocabulary
text_field_embedder : TextFieldEmbedder

Used to embed the premise and hypothesis TextFields we get as input to the model.

encoder : Seq2SeqEncoder

Used to encode the premise and hypothesis.

similarity_function : SimilarityFunction

This is the similarity function used when computing the similarity matrix between encoded words in the premise and words in the hypothesis.

projection_feedforward : FeedForward

The feedforward network used to project down the encoded and enhanced premise and hypothesis.

inference_encoder : Seq2SeqEncoder

Used to encode the projected premise and hypothesis for prediction.

output_feedforward : FeedForward

Used to prepare the concatenated premise and hypothesis for prediction.

output_logit : FeedForward

This feedforward network computes the output logits.

dropout : float, optional (default=0.5)

Dropout probability to use.

initializer : InitializerApplicator, optional (default=InitializerApplicator())

Used to initialize the model parameters.

regularizer : RegularizerApplicator, optional (default=None)

If provided, will be used to calculate the regularization penalty during training.
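As a usage illustration, below is a minimal sketch of constructing an ESIM model directly in Python. The module choices and sizes (300-dimensional embeddings, bidirectional LSTMs, dot-product similarity) are illustrative assumptions, not the published configuration; the key constraint is that the enhanced representation concatenates four copies of the encoder output, so the projection input must be four times the encoder output dimension:

    import torch
    from allennlp.data.vocabulary import Vocabulary
    from allennlp.models.esim import ESIM
    from allennlp.modules import FeedForward
    from allennlp.modules.seq2seq_encoders import PytorchSeq2SeqWrapper
    from allennlp.modules.similarity_functions import DotProductSimilarity
    from allennlp.modules.text_field_embedders import BasicTextFieldEmbedder
    from allennlp.modules.token_embedders import Embedding
    from allennlp.nn import Activation

    vocab = Vocabulary()  # in practice, built from your training instances
    embedding_dim, hidden_dim = 300, 300  # illustrative sizes

    text_field_embedder = BasicTextFieldEmbedder({
        "tokens": Embedding(num_embeddings=vocab.get_vocab_size("tokens"),
                            embedding_dim=embedding_dim)})
    # Bidirectional LSTM, so the encoder output dimension is 2 * hidden_dim.
    encoder = PytorchSeq2SeqWrapper(torch.nn.LSTM(
        embedding_dim, hidden_dim, batch_first=True, bidirectional=True))
    # The enhanced representation concatenates the encoded vectors, the
    # attended vectors, their difference, and their product: 4 * (2 * hidden_dim).
    projection_feedforward = FeedForward(
        input_dim=4 * 2 * hidden_dim, num_layers=1, hidden_dims=hidden_dim,
        activations=Activation.by_name("relu")())
    inference_encoder = PytorchSeq2SeqWrapper(torch.nn.LSTM(
        hidden_dim, hidden_dim, batch_first=True, bidirectional=True))
    # Max and average pooling of both sentences are concatenated: 4 * (2 * hidden_dim).
    output_feedforward = FeedForward(
        input_dim=4 * 2 * hidden_dim, num_layers=1, hidden_dims=hidden_dim,
        activations=Activation.by_name("relu")(), dropout=0.5)
    output_logit = FeedForward(
        input_dim=hidden_dim, num_layers=1,
        hidden_dims=vocab.get_vocab_size("labels"),  # e.g. 3 for SNLI
        activations=Activation.by_name("linear")())

    model = ESIM(vocab=vocab,
                 text_field_embedder=text_field_embedder,
                 encoder=encoder,
                 similarity_function=DotProductSimilarity(),
                 projection_feedforward=projection_feedforward,
                 inference_encoder=inference_encoder,
                 output_feedforward=output_feedforward,
                 output_logit=output_logit)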

forward(self, premise: Dict[str, torch.LongTensor], hypothesis: Dict[str, torch.LongTensor], label: torch.IntTensor = None, metadata: List[Dict[str, Any]] = None) → Dict[str, torch.Tensor]
Parameters
premise : Dict[str, torch.LongTensor]

From a TextField

hypothesis : Dict[str, torch.LongTensor]

From a TextField

label : torch.IntTensor, optional (default = None)

From a LabelField

metadata : List[Dict[str, Any]], optional (default = None)

Metadata containing the original tokenization of the premise and hypothesis with ‘premise_tokens’ and ‘hypothesis_tokens’ keys respectively.

Returns
An output dictionary consisting of:
label_logits : torch.FloatTensor

A tensor of shape (batch_size, num_labels) representing unnormalised log probabilities of the entailment label.

label_probs : torch.FloatTensor

A tensor of shape (batch_size, num_labels) representing probabilities of the entailment label.

loss : torch.FloatTensor, optional

A scalar loss to be optimised.
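As a hedged sketch of a single forward pass, the example below reuses the model constructed above together with the SNLI dataset reader to build the premise and hypothesis inputs; the sentences are placeholders, and a vocabulary actually fitted to the data is assumed:

    from allennlp.data.dataset import Batch
    from allennlp.data.dataset_readers import SnliReader

    reader = SnliReader()
    instance = reader.text_to_instance(
        premise="A man is playing a guitar.",
        hypothesis="A person is making music.")
    batch = Batch([instance])
    batch.index_instances(vocab)  # the vocabulary the model was built with
    output_dict = model(**batch.as_tensor_dict())
    print(output_dict["label_probs"])  # shape: (1, num_labels)

Since no label is supplied here, the output dictionary contains label_logits and label_probs but no loss.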

get_metrics(self, reset: bool = False) → Dict[str, float]

Returns a dictionary of metrics. This method will be called by allennlp.training.Trainer in order to compute and use model metrics for early stopping and model serialization. We return an empty dictionary here rather than raising, as it is not required to implement metrics for a new model. A boolean reset parameter is passed, as frequently a metric accumulator will have some state which should be reset between epochs. Metrics should be populated during the call to forward, with the Metric handling the accumulation of the metric until this method is called.
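The paragraph above is the inherited base-class description; the ESIM implementation itself tracks categorical accuracy, which accumulates across calls to forward that include a label. A short sketch of reading and resetting it at an epoch boundary, assuming the model from the earlier example:

    # reset=True clears the accumulator so the next epoch starts fresh.
    metrics = model.get_metrics(reset=True)
    print(metrics["accuracy"])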