

allennlp.training.metrics.perplexity



Perplexity

@Metric.register("perplexity")
class Perplexity(Average)

Perplexity is a common metric used for evaluating how well a language model predicts a sample.

Notes

Assumes negative log likelihood loss of each batch (base e). Provides the average perplexity of the batches.
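Since perplexity is the exponential of the average negative log likelihood (base e), the relationship can be sketched in plain Python. This is a self-contained illustration of the formula, not AllenNLP's code; the function name `perplexity_from_losses` is hypothetical:

```python
import math

def perplexity_from_losses(batch_losses):
    """Perplexity as exp of the mean per-batch NLL (natural log base)."""
    if not batch_losses:
        return 0.0
    return math.exp(sum(batch_losses) / len(batch_losses))

# A model that assigns uniform probability 1/4 to each of 4 candidate
# tokens has per-token NLL ln(4), and therefore perplexity ≈ 4.
losses = [math.log(4)] * 3
print(perplexity_from_losses(losses))  # ≈ 4.0
```

A perplexity of 4 can be read as the model being, on average, as uncertain as if it were choosing uniformly among 4 tokens at each step.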

get_metric

class Perplexity(Average):
 | ...
 | @overrides
 | def get_metric(self, reset: bool = False)

Returns

  • The accumulated perplexity.
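Usage follows the standard AllenNLP metric pattern: call the metric with each batch's loss, then read `get_metric`. The sketch below mimics that accumulate-then-exponentiate behavior in standalone Python; it is an illustrative stand-in under the stated assumptions, not the library's implementation, and the class name `PerplexitySketch` is hypothetical:

```python
import math

class PerplexitySketch:
    """Accumulates per-batch NLL (base e) and reports exp(average)."""

    def __init__(self) -> None:
        self._total = 0.0
        self._count = 0

    def __call__(self, batch_loss: float) -> None:
        # Called once per batch with that batch's NLL loss.
        self._total += batch_loss
        self._count += 1

    def get_metric(self, reset: bool = False) -> float:
        average = self._total / self._count if self._count else 0.0
        if reset:
            self._total, self._count = 0.0, 0
        # Exponentiate the average NLL; report 0.0 before any batches.
        return math.exp(average) if average > 0 else 0.0

metric = PerplexitySketch()
for loss in (math.log(2), math.log(8)):  # average NLL = ln(4)
    metric(loss)
print(round(metric.get_metric(reset=True), 6))  # perplexity ≈ 4.0
```

Passing `reset=True` clears the accumulated state, which is the usual end-of-epoch pattern for AllenNLP metrics.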