token

[ allennlp.data.tokenizers.token ]


Token Objects#

class Token():
 | def __init__(
 |     self,
 |     text: str = None,
 |     idx: int = None,
 |     idx_end: int = None,
 |     lemma_: str = None,
 |     pos_: str = None,
 |     tag_: str = None,
 |     dep_: str = None,
 |     ent_type_: str = None,
 |     text_id: int = None,
 |     type_id: int = None
 | ) -> None

A simple token representation, keeping track of the token's text, its offset in the passage it was taken from, its POS tag, its dependency relation, and similar information. These fields match spaCy's exactly, so we can just use a spaCy token for this.

Parameters

  • text : str, optional
    The original text represented by this token.
  • idx : int, optional
    The character offset of this token into the tokenized passage.
  • idx_end : int, optional
    The character offset one past the last character of this token in the tokenized passage.
  • lemma_ : str, optional
    The lemma of this token.
  • pos_ : str, optional
    The coarse-grained part of speech of this token.
  • tag_ : str, optional
    The fine-grained part of speech of this token.
  • dep_ : str, optional
    The dependency relation for this token.
  • ent_type_ : str, optional
    The entity type (i.e., the NER tag) for this token.
  • text_id : int, optional
    If your tokenizer returns integers instead of strings (e.g., because you're doing byte encoding, or some hash-based embedding), set this to the integer. If this is set, we will bypass the vocabulary when indexing this token, regardless of whether text is also set. You can still set text to the original text, if you want, so that you can use a character-level representation in addition to a hash-based word embedding.
  • type_id : int, optional
    Token type id used by some pretrained language models like the original BERT.

    The other fields on Token follow the fields on spaCy's Token object; this is one we added, similar to spaCy's lex_id.
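
For illustration, here is a minimal sketch of constructing Token objects by hand using the parameters documented above. The sentence, offsets, and tag values are made-up examples, and the hard-coded text_id stands in for whatever integer a byte- or hash-based tokenizer would produce.

    from allennlp.data.tokenizers.token import Token

    # "AllenNLP rocks" -> two tokens; idx/idx_end are character offsets into
    # the passage, with idx_end one past the token's last character.
    tokens = [
        Token(text="AllenNLP", idx=0, idx_end=8, pos_="PROPN", tag_="NNP"),
        Token(text="rocks", idx=9, idx_end=14, pos_="VERB", tag_="VBZ"),
    ]

    # If the tokenizer already produces integer ids (byte- or hash-based),
    # set text_id so indexing bypasses the vocabulary; 4242 is a made-up id.
    hashed = Token(text="rocks", text_id=4242)

    # type_id carries the segment / token-type id used by models like BERT.
    second_segment = Token(text="[SEP]", type_id=1)

    for token in tokens:
        print(token.text, token.idx, token.idx_end)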

text#

text = None

idx#

idx = None

idx_end#

idx_end = None

lemma_#

lemma_ = None

pos_#

pos_ = None

tag_#

tag_ = None

dep_#

dep_ = None

ent_type_#

ent_type_ = None

text_id#

text_id = None

type_id#

type_id = None

show_token#

def show_token(token: Token) -> str
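
show_token is not described above; it is a small debugging helper for printing a token together with its fields. The sketch below shows one way such a helper can be written against the fields documented on this page; the exact formatting used by the library may differ.

    from allennlp.data.tokenizers.token import Token

    def show_token_sketch(token: Token) -> str:
        # Hypothetical stand-in for show_token: render every documented field
        # of the token on a single line for easy inspection.
        return (
            f"{token.text} "
            f"(idx: {token.idx}) (idx_end: {token.idx_end}) "
            f"(lemma: {token.lemma_}) (pos: {token.pos_}) (tag: {token.tag_}) "
            f"(dep: {token.dep_}) (ent_type: {token.ent_type_}) "
            f"(text_id: {token.text_id}) (type_id: {token.type_id})"
        )

    print(show_token_sketch(Token(text="rocks", idx=9, idx_end=14, pos_="VERB")))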