[ allennlp.training.metrics.perplexity ]
Perplexity is a common metric used for evaluating how well a language model predicts a sample.
Notes: Assumes the negative log likelihood loss of each batch is given in base e. Provides
the average perplexity of the batches.
class Perplexity(Average):
    ...
    @overrides
    def get_metric(self, reset: bool = False)

Returns: The accumulated perplexity.
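The metric's behavior can be sketched as follows. This is a minimal, self-contained illustration of the computation, not the actual allennlp source: the `Average` stand-in here is a simplified assumption of what the base class provides (a running sum and count of values), and `Perplexity.get_metric` exponentiates the accumulated average negative log likelihood.

```python
import math


class Average:
    """Simplified stand-in for allennlp's Average metric:
    tracks a running average of scalar values."""

    def __init__(self) -> None:
        self._total = 0.0
        self._count = 0

    def __call__(self, value: float) -> None:
        # Accumulate one batch's (base-e) negative log likelihood loss.
        self._total += value
        self._count += 1

    def _average(self) -> float:
        return self._total / self._count if self._count else 0.0

    def reset(self) -> None:
        self._total = 0.0
        self._count = 0


class Perplexity(Average):
    """Perplexity = exp(mean NLL), assuming each recorded value is a
    base-e negative log likelihood loss for one batch."""

    def get_metric(self, reset: bool = False) -> float:
        perplexity = math.exp(self._average())
        if reset:
            self.reset()
        return perplexity
```

For example, recording two batches whose losses are both `math.log(10)` yields a perplexity of 10, since exp of the average of equal values is just exp of that value.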