allennlp.data.token_indexers.single_id_token_indexer#

SingleIdTokenIndexer#

SingleIdTokenIndexer(
    self,
    namespace: Optional[str] = 'tokens',
    lowercase_tokens: bool = False,
    start_tokens: List[str] = None,
    end_tokens: List[str] = None,
    feature_name: str = 'text',
    default_value: str = 'THIS IS A REALLY UNLIKELY VALUE THAT HAS TO BE A STRING',
    token_min_padding_length: int = 0,
) -> None

This `TokenIndexer` represents tokens as single integers.

Registered as a TokenIndexer with name "single_id".

Parameters

  • namespace : Optional[str], optional (default=tokens)
    We will use this namespace in the `Vocabulary` to map strings to indices. If you explicitly pass in None here, we will skip indexing and vocabulary lookups. This means that the feature_name you use must correspond to an integer value (like text_id, for instance, which gets set by some tokenizers, such as when using byte encoding).
  • lowercase_tokens : bool, optional (default=False)
    If True, we will call token.lower() before getting an index for the token from the vocabulary.
  • start_tokens : List[str], optional (default=None)
    These are prepended to the tokens provided to tokens_to_indices.
  • end_tokens : List[str], optional (default=None)
    These are appended to the tokens provided to tokens_to_indices.
  • feature_name : str, optional (default=text)
    We will use the `Token` attribute with this name as input. This is potentially useful, e.g., for using NER tags instead of (or in addition to) surface forms as your inputs (passing ent_type_ here would do that). If you use a non-default value here, you almost certainly want to also change the namespace parameter, and you might want to give a default_value.
  • default_value : str, optional
    When you want to use a non-default feature_name, you sometimes want to have a default value to go with it, e.g., in case you don't have an NER tag for a particular token, for some reason. This value will get used if we don't find a value in feature_name. If this is not given, we will crash if a token doesn't have a value for the given feature_name, so that you don't get weird, silent errors by default.
  • token_min_padding_length : int, optional (default=0)
    See `TokenIndexer`.
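To illustrate how lowercase_tokens, start_tokens, and end_tokens interact, here is a minimal sketch of the indexing behavior in plain Python. The class `SimpleSingleIdIndexer` is a hypothetical stand-in, not the real AllenNLP class; it represents the vocabulary as a bare dict rather than a `Vocabulary` object.

```python
from typing import Dict, List, Optional


class SimpleSingleIdIndexer:
    """Hypothetical sketch of SingleIdTokenIndexer's core options
    (not the real AllenNLP implementation)."""

    def __init__(
        self,
        namespace: Optional[str] = "tokens",
        lowercase_tokens: bool = False,
        start_tokens: Optional[List[str]] = None,
        end_tokens: Optional[List[str]] = None,
    ) -> None:
        self.namespace = namespace
        self.lowercase_tokens = lowercase_tokens
        # start_tokens/end_tokens are wrapped around every token list.
        self.start_tokens = start_tokens or []
        self.end_tokens = end_tokens or []

    def tokens_to_indices(
        self, tokens: List[str], vocab: Dict[str, int]
    ) -> Dict[str, List[int]]:
        indices = []
        for text in (*self.start_tokens, *tokens, *self.end_tokens):
            if self.lowercase_tokens:
                # Lowercase before the vocabulary lookup, as documented.
                text = text.lower()
            # For the sketch, grow the vocab on the fly; the real class
            # looks tokens up in a pre-built Vocabulary namespace.
            indices.append(vocab.setdefault(text, len(vocab)))
        return {"tokens": indices}
```

Note that lowercasing happens before the vocabulary lookup, so a vocabulary built with lowercase_tokens=True never contains mixed-case entries.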

count_vocab_items#

SingleIdTokenIndexer.count_vocab_items(
    self,
    token: allennlp.data.tokenizers.token.Token,
    counter: Dict[str, Dict[str, int]],
)

The `Vocabulary` needs to assign indices to whatever strings we see in the training data (possibly doing some frequency filtering and using an OOV, or out of vocabulary, token). This method takes a token and a dictionary of counts and increments counts for whatever vocabulary items are present in the token. If this is a single token ID representation, the vocabulary item is likely the token itself. If this is a token characters representation, the vocabulary items are all of the characters in the token.
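For the single-id case, the counting step reduces to incrementing one counter per token in the indexer's namespace. The function below is a simplified sketch of that logic (a hypothetical helper operating on raw strings, not real `Token` objects):

```python
from collections import defaultdict
from typing import Dict


def count_vocab_items(
    token_text: str,
    counter: Dict[str, Dict[str, int]],
    namespace: str = "tokens",
    lowercase: bool = False,
) -> None:
    # Each token contributes one count under its namespace; the Vocabulary
    # is later built from these counts (with optional frequency filtering).
    if lowercase:
        token_text = token_text.lower()
    counter[namespace][token_text] += 1


# Usage: accumulate counts over a small corpus.
counter = defaultdict(lambda: defaultdict(int))
for text in ["The", "cat", "the"]:
    count_vocab_items(text, counter, lowercase=True)
```

With lowercase=True, "The" and "the" collapse into a single vocabulary entry with count 2.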

get_empty_token_list#

SingleIdTokenIndexer.get_empty_token_list(self) -> Dict[str, List[Any]]

Returns an already indexed version of an empty token list. This is typically just an empty list for whatever keys are used in the indexer.

tokens_to_indices#

SingleIdTokenIndexer.tokens_to_indices(
    self,
    tokens: List[allennlp.data.tokenizers.token.Token],
    vocabulary: allennlp.data.vocabulary.Vocabulary,
) -> Dict[str, List[int]]

Takes a list of tokens and converts them to an IndexedTokenList. This could be just an ID for each token from the vocabulary. Or it could split each token into characters and return one ID per character. Or (for instance, in the case of byte-pair encoding) there might not be a clean mapping from individual tokens to indices, and the IndexedTokenList could be a complex data structure.
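For SingleIdTokenIndexer specifically, the result is the simple case: one integer per token, with unseen tokens mapped to the OOV id. A minimal sketch, assuming a plain dict in place of the `Vocabulary` and using AllenNLP's default OOV token string for illustration:

```python
from typing import Dict, List


def tokens_to_indices(
    tokens: List[str],
    vocab_index: Dict[str, int],
    oov_token: str = "@@UNKNOWN@@",  # AllenNLP's default OOV token
) -> Dict[str, List[int]]:
    # Single-id indexing: one integer per token; tokens the vocabulary
    # has never seen fall back to the OOV id.
    oov_id = vocab_index[oov_token]
    return {"tokens": [vocab_index.get(t, oov_id) for t in tokens]}
```

The returned dict is keyed by the indexer's output name ("tokens" here), which is why `get_empty_token_list` returns an empty list under that same key.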