[ allennlp.data.token_indexers.pretrained_transformer_indexer ]
class PretrainedTransformerIndexer(TokenIndexer): | def __init__( | self, | model_name: str, | namespace: str = "tags", | max_length: int = None, | **kwargs | ) -> None
This `TokenIndexer` assumes that `Token`s already have their indexes in them (see the `text_id` field). We still require `model_name` because we want to form the allennlp vocabulary from the pretrained one. This `Indexer` is only really appropriate to use if you've also used a `PretrainedTransformerTokenizer` to tokenize your input. Otherwise you'll have a mismatch between your tokens and your vocabulary, and you'll get a lot of UNK tokens.
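As a minimal sketch of that pairing (the model name `bert-base-uncased` is just an illustrative choice):

```python
from allennlp.data.tokenizers import PretrainedTransformerTokenizer
from allennlp.data.token_indexers import PretrainedTransformerIndexer

model_name = "bert-base-uncased"  # illustrative; any transformers model name works
tokenizer = PretrainedTransformerTokenizer(model_name)
indexer = PretrainedTransformerIndexer(model_name)

tokens = tokenizer.tokenize("AllenNLP is great!")
# Each Token already carries its wordpiece id in `text_id`, which is what
# this indexer relies on.
print([(t.text, t.text_id) for t in tokens])
```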
Registered as a `TokenIndexer` with name "pretrained_transformer".
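Because of that registration, the indexer can also be built from configuration rather than constructed directly; a hedged sketch using AllenNLP's `from_params` machinery:

```python
from allennlp.common import Params
from allennlp.data.token_indexers import TokenIndexer

# The "type" key selects the registered subclass; the model name is illustrative.
indexer = TokenIndexer.from_params(
    Params({"type": "pretrained_transformer", "model_name": "bert-base-uncased"})
)
```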
- model_name : `str`
    The name of the `transformers` model to use.
- namespace : `str`, optional (default = `"tags"`)
    We will add the tokens in the pytorch_transformer vocabulary to this vocabulary namespace. We use a somewhat confusing default value of `tags` so that we do not add padding or UNK tokens to this namespace, which would break on loading because we wouldn't find our default OOV token.
- max_length : `int`, optional (default = `None`)
    If not `None`, split the document into segments of this many tokens (including special tokens) before feeding them into the embedder. The embedder embeds these segments independently and concatenates the results to get the original document representation. Should be set to the same value as the `max_length` option on the `PretrainedTransformerEmbedder`.
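A sketch of keeping those two `max_length` values in sync; the segment length of 512 and the model name are assumptions for illustration:

```python
from allennlp.data.token_indexers import PretrainedTransformerIndexer
from allennlp.modules.token_embedders import PretrainedTransformerEmbedder

model_name = "bert-base-uncased"   # illustrative
segment_length = 512               # includes special tokens such as [CLS]/[SEP]

# Long documents get split into segments of `segment_length` wordpieces by the
# indexer, and the embedder must be given the same length to stitch the
# independently embedded segments back together.
indexer = PretrainedTransformerIndexer(model_name, max_length=segment_length)
embedder = PretrainedTransformerEmbedder(model_name, max_length=segment_length)
```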
| @overrides | def count_vocab_items( | self, | token: Token, | counter: Dict[str, Dict[str, int]] | )
If we only use pretrained models, we don't need to do anything here.
| @overrides | def tokens_to_indices( | self, | tokens: List[Token], | vocabulary: Vocabulary | ) -> IndexedTokenList
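A small sketch of what indexing looks like end to end; the exact output keys shown are an assumption based on current library behavior and may differ across releases:

```python
from allennlp.data import Vocabulary
from allennlp.data.tokenizers import PretrainedTransformerTokenizer
from allennlp.data.token_indexers import PretrainedTransformerIndexer

model_name = "bert-base-uncased"  # illustrative
tokenizer = PretrainedTransformerTokenizer(model_name)
indexer = PretrainedTransformerIndexer(model_name)

# The pretrained wordpiece vocabulary is copied into the indexer's namespace
# the first time a Vocabulary is seen.
vocab = Vocabulary()
tokens = tokenizer.tokenize("AllenNLP is great!")
indexed = indexer.tokens_to_indices(tokens, vocab)
print(indexed.keys())  # e.g. dict_keys(['token_ids', 'mask', 'type_ids'])
```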
| @overrides | def get_empty_token_list(self) -> IndexedTokenList
| @overrides | def as_padded_tensor_dict( | self, | tokens: IndexedTokenList, | padding_lengths: Dict[str, int] | ) -> Dict[str, torch.Tensor]
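And a hedged sketch of padding a single indexed sequence out to a batch-wide length; the target length of 12 is arbitrary, and the keys come from whatever `tokens_to_indices` produced:

```python
from allennlp.data import Vocabulary
from allennlp.data.tokenizers import PretrainedTransformerTokenizer
from allennlp.data.token_indexers import PretrainedTransformerIndexer

model_name = "bert-base-uncased"  # illustrative
indexer = PretrainedTransformerIndexer(model_name)
tokens = PretrainedTransformerTokenizer(model_name).tokenize("Short sentence.")
indexed = indexer.tokens_to_indices(tokens, Vocabulary())

# Pretend the longest sequence in the batch has 12 wordpieces and pad every
# indexed list out to that length; the result is a dict of torch tensors.
padding_lengths = {key: 12 for key in indexer.get_padding_lengths(indexed)}
tensors = indexer.as_padded_tensor_dict(indexed, padding_lengths)
print({key: tensor.shape for key, tensor in tensors.items()})
```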