

allennlp.modules.softmax_loss

SoftmaxLoss

class SoftmaxLoss(torch.nn.Module):
 | def __init__(self, num_words: int, embedding_dim: int) -> None

Given some embeddings and some targets, applies a linear layer to create logits over possible words and then returns the negative log likelihood.
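The computation is, roughly, a linear projection from the embedding space to one logit per word, followed by a summed negative log likelihood. A minimal sketch of an equivalent module, assuming a standard linear parameterization (this is an illustration, not the library source):

import torch


class SketchSoftmaxLoss(torch.nn.Module):
    """Minimal sketch of the computation performed by SoftmaxLoss."""

    def __init__(self, num_words: int, embedding_dim: int) -> None:
        super().__init__()
        # Linear layer mapping each embedding to one logit per word.
        self.linear = torch.nn.Linear(embedding_dim, num_words)

    def forward(self, embeddings: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        logits = self.linear(embeddings)  # (batch_size, num_words)
        log_probs = torch.nn.functional.log_softmax(logits, dim=-1)
        # reduction="sum": the loss is not divided by the batch size.
        return torch.nn.functional.nll_loss(log_probs, targets.long(), reduction="sum")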

forward

class SoftmaxLoss(torch.nn.Module):
 | ...
 | def forward(
 |     self,
 |     embeddings: torch.Tensor,
 |     targets: torch.Tensor
 | ) -> torch.Tensor

embeddings is of size (batch_size, embedding_dim); targets is of size (batch_size,) and holds the correct class id for each row. The returned loss is summed over the batch: no count normalization or division by batch size is applied.
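A usage sketch, assuming the import path matches the module name and the shapes described above:

import torch
from allennlp.modules.softmax_loss import SoftmaxLoss

loss_fn = SoftmaxLoss(num_words=10_000, embedding_dim=128)

embeddings = torch.randn(32, 128)          # (batch_size, embedding_dim)
targets = torch.randint(0, 10_000, (32,))  # (batch_size,) of class ids

loss = loss_fn(embeddings, targets)        # scalar, summed over the batch
mean_loss = loss / targets.size(0)         # divide yourself if you want a per-example average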