cls_pooler

allennlp.modules.seq2vec_encoders.cls_pooler


ClsPooler

@Seq2VecEncoder.register("cls_pooler")
class ClsPooler(Seq2VecEncoder):
 | def __init__(
 |     self,
 |     embedding_dim: int,
 |     cls_is_last_token: bool = False
 | )

Takes the first vector from a sequence of vectors (which in a transformer is typically the [CLS] token) and returns it. For BERT, it's recommended to use BertPooler instead.

Registered as a Seq2VecEncoder with name "cls_pooler".

Parameters

  • embedding_dim : int
    This isn't needed for any computation that we do, but we sometimes rely on get_input_dim and get_output_dim to check parameter settings, or to instantiate final linear layers. In order to give the right values there, we need to know the embedding dimension. If you're not sure of the value and you're using this with a transformer from the transformers library, it can often be found as model.config.hidden_size.
  • cls_is_last_token : bool, optional
    The [CLS] token is the first token for most of the pretrained transformer models. For some models such as XLNet, however, it is the last token, and we therefore need to select at the end.

get_input_dim

class ClsPooler(Seq2VecEncoder):
 | ...
 | def get_input_dim(self) -> int

get_output_dim

class ClsPooler(Seq2VecEncoder):
 | ...
 | def get_output_dim(self) -> int
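
Since the pooler only selects a vector and never transforms it, both dimension methods return the same embedding_dim passed to the constructor. A minimal sketch of that bookkeeping (an illustrative stand-in, not the library class itself):

```python
class ClsPoolerSketch:
    """Sketch of ClsPooler's dimension bookkeeping (assumption: mirrors
    allennlp's ClsPooler; hypothetical class, not the library's)."""

    def __init__(self, embedding_dim: int, cls_is_last_token: bool = False):
        self._embedding_dim = embedding_dim
        self._cls_is_last_token = cls_is_last_token

    def get_input_dim(self) -> int:
        # Input vectors are expected to have this dimensionality.
        return self._embedding_dim

    def get_output_dim(self) -> int:
        # The selected vector is returned unchanged, so the output
        # dimension equals the input dimension.
        return self._embedding_dim


pooler = ClsPoolerSketch(embedding_dim=768)
```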

forward

class ClsPooler(Seq2VecEncoder):
 | ...
 | def forward(
 |     self,
 |     tokens: torch.Tensor,
 |     mask: torch.BoolTensor = None
 | )

tokens is assumed to have shape (batch_size, sequence_length, embedding_dim). mask is assumed to have shape (batch_size, sequence_length) with all 1s preceding all 0s.
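
The interesting case is cls_is_last_token=True with padded batches: because the mask has all 1s preceding all 0s, the last real token's index in each sequence is the row-wise mask sum minus one. A sketch of that selection logic in plain PyTorch (an assumption-based stand-in for the library's forward, not the actual implementation):

```python
import torch


def cls_pool(tokens: torch.Tensor,
             mask: torch.BoolTensor = None,
             cls_is_last_token: bool = False) -> torch.Tensor:
    # tokens: (batch_size, sequence_length, embedding_dim)
    # mask:   (batch_size, sequence_length), all 1s preceding all 0s
    if not cls_is_last_token:
        # [CLS] is the first token: just take position 0.
        return tokens[:, 0, :]
    if mask is None:
        return tokens[:, -1, :]
    # The last unmasked position differs per sequence under padding.
    last = mask.sum(dim=1) - 1  # (batch_size,)
    return tokens[torch.arange(tokens.size(0)), last]


tokens = torch.arange(24, dtype=torch.float).view(2, 3, 4)
mask = torch.tensor([[True, True, True],
                     [True, True, False]])  # second sequence is padded
first = cls_pool(tokens)                               # position 0 of each row
last = cls_pool(tokens, mask, cls_is_last_token=True)  # positions 2 and 1
```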