allennlp.data.token_indexers.token_characters_indexer#

TokenCharactersIndexer#

TokenCharactersIndexer(
    self,
    namespace: str = 'token_characters',
    character_tokenizer: allennlp.data.tokenizers.character_tokenizer.CharacterTokenizer = CharacterTokenizer(),
    start_tokens: List[str] = None,
    end_tokens: List[str] = None,
    min_padding_length: int = 0,
    token_min_padding_length: int = 0,
) -> None

This TokenIndexer represents tokens as lists of character indices.

Registered as a TokenIndexer with name "characters".

Parameters

  • namespace : str, optional (default=token_characters)
    We will use this namespace in the Vocabulary to map the characters in each token to indices.
  • character_tokenizer : CharacterTokenizer, optional (default=CharacterTokenizer())
    We use a CharacterTokenizer to handle splitting tokens into characters, as it has options for byte encoding and other things. The default here is to instantiate a CharacterTokenizer with its default parameters, which uses unicode characters and retains casing.
  • start_tokens : List[str], optional (default=None)
    These are prepended to the tokens provided to tokens_to_indices.
  • end_tokens : List[str], optional (default=None)
    These are appended to the tokens provided to tokens_to_indices.
  • min_padding_length : int, optional (default=0)
    We use this value as the minimum length of padding. It is usually used with a CnnEncoder, in which case it should be set to the maximum of the encoder's ngram_filter_sizes.
  • token_min_padding_length : int, optional (default=0)
    See TokenIndexer.
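
For example, here is a minimal sketch of constructing this indexer directly in Python; the parameter values below are illustrative, not required:

from allennlp.data.token_indexers import TokenCharactersIndexer

# min_padding_length should typically match the largest ngram filter size of
# the downstream CnnEncoder; 3 is just an example value here.
indexer = TokenCharactersIndexer(namespace="token_characters", min_padding_length=3)

The method sketches below reuse this indexer object.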

as_padded_tensor_dict#

TokenCharactersIndexer.as_padded_tensor_dict(
    self,
    tokens: Dict[str, List[Any]],
    padding_lengths: Dict[str, int],
) -> Dict[str, torch.Tensor]

This method pads a list of tokens given the input padding lengths (which could actually truncate things, depending on settings) and returns that padded list of input tokens as a Dict[str, torch.Tensor]. This is a dictionary because there should be one key per argument that the TokenEmbedder corresponding to this class expects in its forward() method (where the argument name in the TokenEmbedder needs to match the key in this dictionary).

The base class implements the case when all you want to do is create a padded LongTensor for every list in the tokens dictionary. If your TokenIndexer needs more complex logic than that, you need to override this method.
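
As a hedged sketch, assuming the indexer constructed above and a hand-built indexed token list (the character ids are made up):

# Two tokens, each represented as a list of character ids.
indexed = {"token_characters": [[3, 4, 5], [6, 7]]}
padding_lengths = indexer.get_padding_lengths(indexed)
tensors = indexer.as_padded_tensor_dict(indexed, padding_lengths)
# tensors["token_characters"] is a LongTensor of shape
# (num_tokens, max_characters_per_token), with shorter tokens zero-padded on
# the right, e.g. tensor([[3, 4, 5], [6, 7, 0]]).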

count_vocab_items#

TokenCharactersIndexer.count_vocab_items(
    self,
    token: allennlp.data.tokenizers.token.Token,
    counter: Dict[str, Dict[str, int]],
)

The Vocabulary needs to assign indices to whatever strings we see in the training data (possibly doing some frequency filtering and using an OOV, or out of vocabulary, token). This method takes a token and a dictionary of counts and increments counts for whatever vocabulary items are present in the token. If this is a single token ID representation, the vocabulary item is likely the token itself. If this is a token characters representation, the vocabulary items are all of the characters in the token.
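
For instance, a small sketch of how counts accumulate, reusing the indexer from above (Vocabulary.from_instances runs this kind of loop over a whole dataset):

from collections import defaultdict
from allennlp.data import Token

counter = defaultdict(lambda: defaultdict(int))
indexer.count_vocab_items(Token("cat"), counter)
# counter["token_characters"] now maps "c", "a", and "t" to a count of 1 each.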

get_empty_token_list#

TokenCharactersIndexer.get_empty_token_list(self) -> Dict[str, List[Any]]

Returns an already indexed version of an empty token list. This is typically just an empty list for whatever keys are used in the indexer.
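
For this indexer that should simply be a dictionary with one empty list, e.g.:

indexer.get_empty_token_list()
# -> {"token_characters": []}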

get_padding_lengths#

TokenCharactersIndexer.get_padding_lengths(
    self,
    indexed_tokens: Dict[str, List[Any]],
) -> Dict[str, int]

This method returns a padding dictionary for the given indexed_tokens specifying all lengths that need padding. If all you have is a list of single ID tokens, this is just the length of the list, and that's what the default implementation will give you. If you have something more complicated, like a list of character ids for each token, you'll need to override this.
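
A short sketch, reusing the hand-built indexed list from the as_padded_tensor_dict example above; the exact key names may differ between versions:

indexed = {"token_characters": [[3, 4, 5], [6, 7]]}
lengths = indexer.get_padding_lengths(indexed)
# Expect one entry for the number of tokens and one for the longest token's
# character count (never below min_padding_length), e.g.
# {"token_characters": 2, "num_token_characters": 3}.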

tokens_to_indices#

TokenCharactersIndexer.tokens_to_indices(
    self,
    tokens: List[allennlp.data.tokenizers.token.Token],
    vocabulary: allennlp.data.vocabulary.Vocabulary,
) -> Dict[str, List[List[int]]]

Takes a list of tokens and converts them to an IndexedTokenList. This could be just an ID for each token from the vocabulary. Or it could split each token into characters and return one ID per character. Or (for instance, in the case of byte-pair encoding) there might not be a clean mapping from individual tokens to indices, and the IndexedTokenList could be a complex data structure.
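
Putting the pieces together, a hedged end-to-end sketch; the character ids in the final comment are illustrative, since they depend on how the vocabulary was built:

from allennlp.data import Token, Vocabulary
from allennlp.data.token_indexers import TokenCharactersIndexer

indexer = TokenCharactersIndexer(namespace="token_characters", min_padding_length=3)

vocab = Vocabulary()
for ch in "helloworld":
    vocab.add_token_to_namespace(ch, namespace="token_characters")

tokens = [Token("hello"), Token("world")]
indexed = indexer.tokens_to_indices(tokens, vocab)
# For this indexer, the result has a single "token_characters" key mapping to
# one list of character ids per token, e.g.
# {"token_characters": [[2, 3, 4, 4, 5], [6, 5, 7, 4, 8]]}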