allennlp.data.tokenizers.pretrained_transformer_tokenizer

PretrainedTransformerTokenizer

@Tokenizer.register("pretrained_transformer")
class PretrainedTransformerTokenizer(Tokenizer):
 | def __init__(
 |     self,
 |     model_name: str,
 |     add_special_tokens: bool = True,
 |     max_length: Optional[int] = None,
 |     stride: int = 0,
 |     tokenizer_kwargs: Optional[Dict[str, Any]] = None
 | ) -> None

A PretrainedTransformerTokenizer uses a model from HuggingFace's transformers library to tokenize some input text. This often means wordpieces (where 'AllenNLP is awesome' might get split into ['Allen', '##NL', '##P', 'is', 'awesome']), but it could also use byte-pair encoding, or some other tokenization, depending on the pretrained model that you're using.

We take a model name as an input parameter, which we will pass to AutoTokenizer.from_pretrained.

We also add the special tokens that the pretrained model expects and, if max_length is given, truncate the sequences.

This tokenizer also indexes tokens and adds the indexes to the Token fields so that they can be picked up by PretrainedTransformerIndexer.

Registered as a Tokenizer with name "pretrained_transformer".

Parameters

  • model_name : str
    The name of the pretrained wordpiece tokenizer to use.
  • add_special_tokens : bool, optional (default = True)
    If set to True, the sequences will be encoded with the special tokens expected by their model.
  • max_length : int, optional (default = None)
    If set to a number, the returned sequence will be limited to at most this many tokens. If there are overflowing tokens, those will be added to the returned dictionary.
  • stride : int, optional (default = 0)
    If set to a number along with max_length, the overflowing tokens will contain some tokens from the main returned sequence; the value of this argument defines the number of those additional tokens.
  • tokenizer_kwargs : Dict[str, Any], optional (default = None)
    Dictionary with additional arguments for AutoTokenizer.from_pretrained.
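
For example, a minimal construction sketch (bert-base-uncased is just an assumed model name; anything accepted by AutoTokenizer.from_pretrained works, and tokenizer_kwargs are forwarded to that call):

from allennlp.data.tokenizers import PretrainedTransformerTokenizer

# The model name is handed to AutoTokenizer.from_pretrained under the hood.
tokenizer = PretrainedTransformerTokenizer(
    "bert-base-uncased",
    max_length=512,
    tokenizer_kwargs={"use_fast": True},
)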

tokenizer_lowercases

class PretrainedTransformerTokenizer(Tokenizer):
 | ...
 | @staticmethod
 | def tokenizer_lowercases(tokenizer: PreTrainedTokenizer) -> bool

HuggingFace tokenizers have different ways of remembering whether they lowercase or not. Detecting it empirically, by tokenizing a capitalized character and checking whether the output is lowercased, seems like the least brittle way to do it.
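
A small sketch of calling it on the wrapped HuggingFace tokenizer; the assumption here is that the wrapper exposes the underlying PreTrainedTokenizer as its tokenizer attribute:

# Continuing the construction sketch above.
hf_tokenizer = tokenizer.tokenizer
print(PretrainedTransformerTokenizer.tokenizer_lowercases(hf_tokenizer))  # True for an uncased model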

tokenize

class PretrainedTransformerTokenizer(Tokenizer):
 | ...
 | @overrides
 | def tokenize(self, text: str) -> List[Token]

This method only handles a single sentence (or sequence) of text.
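
Continuing the sketch above (the exact wordpieces depend on the model and are only illustrative):

tokens = tokenizer.tokenize("AllenNLP is awesome")
print([t.text for t in tokens])
# e.g. ['[CLS]', 'allen', '##nl', '##p', 'is', 'awesome', '[SEP]'] -- the special
# tokens appear because add_special_tokens defaults to True.
print([t.text_id for t in tokens])  # wordpiece ids, later picked up by PretrainedTransformerIndexer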

intra_word_tokenize

class PretrainedTransformerTokenizer(Tokenizer):
 | ...
 | def intra_word_tokenize(
 |     self,
 |     string_tokens: List[str]
 | ) -> Tuple[List[Token], List[Optional[Tuple[int, int]]]]

Tokenizes each word into wordpieces separately and returns the wordpiece IDs. Also calculates offsets such that tokens[offsets[i][0]:offsets[i][1] + 1] corresponds to the original i-th token.

This function inserts special tokens.
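
A sketch of the offset bookkeeping, reusing the tokenizer from above (wordpieces are illustrative):

words = ["AllenNLP", "is", "awesome"]
wordpieces, offsets = tokenizer.intra_word_tokenize(words)

# offsets[i] is an inclusive (start, end) span into wordpieces, which already contains
# the special tokens; an entry can be None if a word produced no wordpieces.
for word, span in zip(words, offsets):
    if span is not None:
        start, end = span
        print(word, [t.text for t in wordpieces[start : end + 1]])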

intra_word_tokenize_sentence_pair

class PretrainedTransformerTokenizer(Tokenizer):
 | ...
 | def intra_word_tokenize_sentence_pair(
 |     self,
 |     string_tokens_a: List[str],
 |     string_tokens_b: List[str]
 | ) -> Tuple[List[Token], List[Optional[Tuple[int, int]]], List[Optional[Tuple[int, int]]]]

Tokenizes each word into wordpieces separately and returns the wordpiece IDs. Also calculates offsets such that wordpieces[offsets[i][0]:offsets[i][1] + 1] corresponds to the original i-th token.

This function inserts special tokens.
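
The pair variant returns one offsets list per input sequence; continuing the sketch above:

wordpieces, offsets_a, offsets_b = tokenizer.intra_word_tokenize_sentence_pair(
    ["AllenNLP", "is", "awesome"], ["so", "is", "spaCy"]
)
# Both offset lists index into the single wordpieces list, which already holds the
# special tokens (e.g. [CLS] ... [SEP] ... [SEP] for a BERT-style model).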

add_special_tokens

class PretrainedTransformerTokenizer(Tokenizer):
 | ...
 | def add_special_tokens(
 |     self,
 |     tokens1: List[Token],
 |     tokens2: Optional[List[Token]] = None
 | ) -> List[Token]
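
One plausible use, sketched under the assumption that you construct the tokenizer with add_special_tokens=False and wrap the sequences yourself later:

tokenizer = PretrainedTransformerTokenizer("bert-base-uncased", add_special_tokens=False)
question = tokenizer.tokenize("What is AllenNLP?")
passage = tokenizer.tokenize("AllenNLP is an open-source NLP research library.")
# Wraps the two sequences with the model's special tokens and type ids,
# e.g. [CLS] question [SEP] passage [SEP] for a BERT-style model.
combined = tokenizer.add_special_tokens(question, passage)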

num_special_tokens_for_sequence

class PretrainedTransformerTokenizer(Tokenizer):
 | ...
 | def num_special_tokens_for_sequence(self) -> int

num_special_tokens_for_pair

class PretrainedTransformerTokenizer(Tokenizer):
 | ...
 | def num_special_tokens_for_pair(self) -> int
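
These counts are handy when budgeting wordpieces against max_length. For a BERT-style model such as the assumed bert-base-uncased, one would expect:

tokenizer = PretrainedTransformerTokenizer("bert-base-uncased")
print(tokenizer.num_special_tokens_for_sequence())  # 2 -> [CLS] and [SEP]
print(tokenizer.num_special_tokens_for_pair())      # 3 -> [CLS], [SEP], [SEP]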