perplexity
[ allennlp.training.metrics.perplexity ]
Perplexity Objects
class Perplexity(Average)
Perplexity is a common metric used for evaluating how well a language model predicts a sample.
Notes
Assumes the loss of each batch is a negative log likelihood (base e). Accumulates the average loss across batches and reports perplexity as its exponential.
get_metric
| @overrides
| def get_metric(self, reset: bool = False) -> float
Returns
- The perplexity computed from the accumulated loss. If `reset` is `True`, the metric's internal state is cleared after the value is returned.
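As a minimal standalone sketch of the relationship this metric encodes (not the library's actual implementation), perplexity is the exponential of the average negative log likelihood over batches. The function name `perplexity_from_losses` below is hypothetical, chosen for illustration:

```python
import math

def perplexity_from_losses(batch_nll_losses):
    """Exponential of the mean negative log likelihood (base e).

    `batch_nll_losses` is a list of per-batch NLL values, as a sketch
    of what an `Average`-style metric would accumulate.
    """
    if not batch_nll_losses:
        return 0.0
    mean_nll = sum(batch_nll_losses) / len(batch_nll_losses)
    return math.exp(mean_nll)

# A model assigning each token probability 1/4 has NLL = ln(4),
# so its perplexity is exactly 4.
losses = [math.log(4.0), math.log(4.0), math.log(4.0)]
print(perplexity_from_losses(losses))  # -> 4.0 (up to float rounding)
```

Because the exponential is applied to the *average* loss rather than averaging per-batch perplexities, the result is the geometric mean of the per-batch perplexities, which is the conventional definition for language model evaluation.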