softmax_loss

allennlp.modules.softmax_loss

SoftmaxLoss

class SoftmaxLoss(torch.nn.Module):
 | def __init__(self, num_words: int, embedding_dim: int) -> None

Given some embeddings and some targets, applies a linear layer to create logits over the possible words and then returns the negative log likelihood of the targets. No padding ID is added to the vocabulary, so the targets passed to forward should not include a padding ID.
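As a minimal construction sketch (the sizes below are hypothetical, chosen only for illustration):

from allennlp.modules.softmax_loss import SoftmaxLoss

# Hypothetical sizes: a 10,000-word vocabulary (no padding ID) and
# 256-dimensional input embeddings.
loss_fn = SoftmaxLoss(num_words=10_000, embedding_dim=256)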

forward

class SoftmaxLoss(torch.nn.Module):
 | ...
 | def forward(
 |     self,
 |     embeddings: torch.Tensor,
 |     targets: torch.Tensor
 | ) -> torch.Tensor

Parameters

  • embeddings : torch.Tensor
    A tensor of shape (batch_size, embedding_dim), one embedding per target word. The first dimension must match that of targets.
  • targets : torch.Tensor
    A tensor of shape (batch_size, ) whose entries are target word IDs (with no padding ID).

Returns

  • loss : torch.FloatTensor
    A scalar loss to be optimized.
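A runnable sketch of a forward call follows; the tensor sizes are assumptions for illustration only. Note that embeddings and targets must share the same first dimension, one row per target word.

import torch
from allennlp.modules.softmax_loss import SoftmaxLoss

loss_fn = SoftmaxLoss(num_words=10_000, embedding_dim=256)  # hypothetical sizes

# One embedding row per target word; the first dimensions must match.
embeddings = torch.randn(50, 256)          # (batch_size, embedding_dim)
targets = torch.randint(0, 10_000, (50,))  # (batch_size, ) word IDs, no padding ID

loss = loss_fn(embeddings, targets)        # scalar negative log likelihood
loss.backward()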