boe_encoder

allennlp.modules.seq2vec_encoders.boe_encoder


BagOfEmbeddingsEncoder#

@Seq2VecEncoder.register("boe")
@Seq2VecEncoder.register("bag_of_embeddings")
class BagOfEmbeddingsEncoder(Seq2VecEncoder):
 | def __init__(self, embedding_dim: int, averaged: bool = False) -> None

A BagOfEmbeddingsEncoder is a simple Seq2VecEncoder that sums the embeddings of a sequence across the time dimension. The input to this module is of shape (batch_size, num_tokens, embedding_dim), and the output is of shape (batch_size, embedding_dim).

Registered as a Seq2VecEncoder with name "bag_of_embeddings" and "boe".

Parameters

  • embedding_dim : int
    This is the input dimension to the encoder.
  • averaged : bool, optional (default = False)
    If True, this module will average the embeddings across time, rather than summing (i.e., the summed embeddings are divided by the length of the sentence).
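The effect of averaged can be sketched in plain PyTorch (this re-implements the encoder's arithmetic rather than calling allennlp, and assumes torch is installed; the tensor values are illustrative only):

```python
import torch

# One sequence of three tokens with embedding_dim = 2:
# input shape (batch_size=1, num_tokens=3, embedding_dim=2).
tokens = torch.tensor([[[1.0, 2.0],
                        [3.0, 4.0],
                        [5.0, 6.0]]])

# averaged=False: sum across the time dimension -> shape (1, 2).
summed = tokens.sum(dim=1)            # [[9., 12.]]

# averaged=True: divide the sum by the sequence length.
averaged = summed / tokens.size(1)    # [[3., 4.]]
```

With averaged=True the output scale is independent of sentence length, which is often preferable when downstream layers are sensitive to magnitude.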

get_input_dim#

class BagOfEmbeddingsEncoder(Seq2VecEncoder):
 | ...
 | @overrides
 | def get_input_dim(self) -> int

get_output_dim#

class BagOfEmbeddingsEncoder(Seq2VecEncoder):
 | ...
 | @overrides
 | def get_output_dim(self) -> int

forward#

class BagOfEmbeddingsEncoder(Seq2VecEncoder):
 | ...
 | def forward(
 |     self,
 |     tokens: torch.Tensor,
 |     mask: torch.BoolTensor = None
 | )
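When a mask is supplied, padded positions should not contribute to the sum, and (with averaged=True) the divisor should be the number of real tokens rather than the padded length. A hedged sketch of that masking logic in plain PyTorch (again re-implementing the arithmetic rather than calling the encoder; values are illustrative):

```python
import torch

# Batch of one sequence, padded to length 3; the last token is padding.
tokens = torch.tensor([[[1.0, 2.0],
                        [3.0, 4.0],
                        [9.0, 9.0]]])          # padding row
mask = torch.tensor([[True, True, False]])     # shape (batch_size, num_tokens)

# Zero out padded positions before summing across time.
masked = tokens * mask.unsqueeze(-1)           # broadcast mask over embedding_dim
summed = masked.sum(dim=1)                     # [[4., 6.]]

# For averaging, divide by the count of unmasked tokens, not num_tokens.
lengths = mask.sum(dim=1, keepdim=True)        # [[2]]
averaged = summed / lengths                    # [[2., 3.]]
```

If mask is None, every position is treated as a real token and the plain sum (or average over num_tokens) applies.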