pretrained_transformer_tokenizer

[ allennlp.data.tokenizers.pretrained_transformer_tokenizer ]


PretrainedTransformerTokenizer Objects#

class PretrainedTransformerTokenizer(Tokenizer):
 | def __init__(
 |     self,
 |     model_name: str,
 |     add_special_tokens: bool = True,
 |     max_length: Optional[int] = None,
 |     stride: int = 0,
 |     truncation_strategy: str = "longest_first",
 |     tokenizer_kwargs: Optional[Dict[str, Any]] = None
 | ) -> None

A PretrainedTransformerTokenizer uses a model from HuggingFace's transformers library to tokenize some input text. This often means wordpieces (where 'AllenNLP is awesome' might get split into ['Allen', '##NL', '##P', 'is', 'awesome']), but it could also use byte-pair encoding, or some other tokenization, depending on the pretrained model that you're using.

We take a model name as an input parameter, which we will pass to AutoTokenizer.from_pretrained.

We also add special tokens relative to the pretrained model and truncate the sequences.

This tokenizer also indexes tokens and adds the indexes to the Token fields so that they can be picked up by PretrainedTransformerIndexer.

Registered as a Tokenizer with name "pretrained_transformer".

Parameters

  • model_name : str
    The name of the pretrained wordpiece tokenizer to use.
  • add_special_tokens : bool, optional (default = True)
    If set to True, the sequences will be encoded with the special tokens relative to their model.
  • max_length : int, optional (default = None)
    If set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those will be added to the returned dictionary.
  • stride : int, optional (default = 0)
    If set to a number along with max_length, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.
  • truncation_strategy : str, optional (default = 'longest_first')
    String selected from the following options:
    • 'longest_first' (default): Iteratively reduce the inputs until the total length is under max_length, removing one token at a time from the longest sequence (relevant when there is a pair of input sequences)
    • 'only_first': Only truncate the first sequence
    • 'only_second': Only truncate the second sequence
    • 'do_not_truncate': Do not truncate (raise an error if the input sequence is longer than max_length)
  • tokenizer_kwargs : Dict[str, Any], optional (default = None)
    Dictionary with additional arguments for AutoTokenizer.from_pretrained.
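
For illustration, a minimal usage sketch (the model name "bert-base-uncased" and the exact wordpieces shown are examples, not guaranteed output):

from allennlp.data.tokenizers import PretrainedTransformerTokenizer

tokenizer = PretrainedTransformerTokenizer("bert-base-uncased", max_length=512)
tokens = tokenizer.tokenize("AllenNLP is awesome")
print([t.text for t in tokens])
# e.g. ['[CLS]', 'allen', '##nl', '##p', 'is', 'awesome', '[SEP]']
print([t.text_id for t in tokens])  # wordpiece ids, ready to be picked up by PretrainedTransformerIndexer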

tokenizer_lowercases#

 | @staticmethod
 | def tokenizer_lowercases(tokenizer: PreTrainedTokenizer) -> bool

Huggingface tokenizers have different ways of remembering whether they lowercase or not. Detecting it this way seems like the least brittle way to do it.
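
A minimal sketch of the detection idea (not necessarily the exact implementation): tokenize a string containing an uppercase character and check whether it comes back lowercased.

from transformers import AutoTokenizer

def lowercases(tokenizer) -> bool:
    # Tokenize a single uppercase character that cannot be part of any special token
    # and see whether the tokenizer lowercased it.
    pieces = tokenizer.tokenize("A")
    return "a" in " ".join(pieces)

print(lowercases(AutoTokenizer.from_pretrained("bert-base-uncased")))  # True
print(lowercases(AutoTokenizer.from_pretrained("bert-base-cased")))    # False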

tokenize#

 | @overrides
 | def tokenize(self, text: str) -> List[Token]

This method only handles a single sentence (or sequence) of text.
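
Special tokens are included by default; constructing the tokenizer with add_special_tokens=False yields only the wordpieces. A small sketch (model name and output shown are illustrative):

from allennlp.data.tokenizers import PretrainedTransformerTokenizer

no_specials = PretrainedTransformerTokenizer("bert-base-uncased", add_special_tokens=False)
print([t.text for t in no_specials.tokenize("AllenNLP is awesome")])
# e.g. ['allen', '##nl', '##p', 'is', 'awesome']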

intra_word_tokenize#

 | def intra_word_tokenize(
 |     self,
 |     string_tokens: List[str]
 | ) -> Tuple[List[Token], List[Optional[Tuple[int, int]]]]

Tokenizes each word into wordpieces separately and returns the wordpiece IDs. Also calculates offsets such that tokens[offsets[i][0]:offsets[i][1] + 1] corresponds to the original i-th token. If the i-th token produces no wordpieces, offsets[i] is None.

This function inserts special tokens.
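
A sketch of mapping wordpieces back to the original pre-split words using the returned offsets (the model name is an example):

from allennlp.data.tokenizers import PretrainedTransformerTokenizer

tokenizer = PretrainedTransformerTokenizer("bert-base-uncased")
words = ["AllenNLP", "is", "awesome"]
wordpieces, offsets = tokenizer.intra_word_tokenize(words)
for word, span in zip(words, offsets):
    if span is not None:  # None means the word produced no wordpieces
        start, end = span
        print(word, [t.text for t in wordpieces[start : end + 1]])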

intra_word_tokenize_sentence_pair#

 | def intra_word_tokenize_sentence_pair(
 |     self,
 |     string_tokens_a: List[str],
 |     string_tokens_b: List[str]
 | ) -> Tuple[List[Token], List[Tuple[int, int]], List[Tuple[int, int]]]

Tokenizes each word into wordpieces separately and returns the wordpiece IDs. Also calculates offsets such that wordpieces[offsets[i][0]:offsets[i][1] + 1] corresponds to the original i-th token.

This function inserts special tokens.
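
A similar sketch for the sentence-pair case, which returns offsets for each input separately (model name and inputs are illustrative):

from allennlp.data.tokenizers import PretrainedTransformerTokenizer

tokenizer = PretrainedTransformerTokenizer("bert-base-uncased")
wordpieces, offsets_a, offsets_b = tokenizer.intra_word_tokenize_sentence_pair(
    ["AllenNLP", "is", "awesome"], ["It", "really", "is"]
)
# wordpieces contains both sequences plus the model's special tokens;
# offsets_a and offsets_b index into it for the first and second input respectively.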

add_special_tokens#

 | def add_special_tokens(
 |     self,
 |     tokens1: List[Token],
 |     tokens2: Optional[List[Token]] = None
 | ) -> List[Token]

Adds the model's special tokens to a single sequence of tokens, or to a pair of sequences, and returns a new list. The input lists are not modified.
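
A sketch of combining two separately tokenized sequences (the model name and inputs are illustrative):

from allennlp.data.tokenizers import PretrainedTransformerTokenizer

tokenizer = PretrainedTransformerTokenizer("bert-base-uncased", add_special_tokens=False)
premise = tokenizer.tokenize("AllenNLP is awesome")
hypothesis = tokenizer.tokenize("It really is")
pair = tokenizer.add_special_tokens(premise, hypothesis)
# For a BERT-style model this produces [CLS] ... [SEP] ... [SEP];
# `premise` and `hypothesis` themselves are left unchanged.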

num_special_tokens_for_sequence#

 | def num_special_tokens_for_sequence(self) -> int
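
Returns the number of special tokens the underlying tokenizer adds to a single sequence.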

num_special_tokens_for_pair#

 | def num_special_tokens_for_pair(self) -> int
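
Returns the number of special tokens the underlying tokenizer adds to a pair of sequences.

These counts are useful for budgeting wordpieces before special tokens are added. A sketch (the 512-token window is an assumption for BERT-style models):

from allennlp.data.tokenizers import PretrainedTransformerTokenizer

tokenizer = PretrainedTransformerTokenizer("bert-base-uncased", add_special_tokens=False)
# How many wordpieces the two sequences may use together in a 512-token window:
budget = 512 - tokenizer.num_special_tokens_for_pair()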