allennlp.modules.span_extractors
class allennlp.modules.span_extractors.span_extractor.SpanExtractor

Bases: torch.nn.modules.module.Module, allennlp.common.registrable.Registrable

Many NLP models deal with representations of spans inside a sentence. SpanExtractors define methods for extracting and representing spans from a sentence.

SpanExtractors take a sequence tensor of shape (batch_size, timesteps, embedding_dim) and indices of shape (batch_size, num_spans, 2), and return a tensor of shape (batch_size, num_spans, ...) forming some representation of the spans.
forward(self, sequence_tensor: torch.FloatTensor, span_indices: torch.LongTensor, sequence_mask: torch.LongTensor = None, span_indices_mask: torch.LongTensor = None)

Given a sequence tensor, extract spans and return representations of them. Span representations can be computed in many different ways, such as concatenation of the start and end embeddings, attention over the vectors contained inside the span, etc.
Parameters

sequence_tensor : torch.FloatTensor, required.
    A tensor of shape (batch_size, sequence_length, embedding_size) representing an embedded sequence of words.
span_indices : torch.LongTensor, required.
    A tensor of shape (batch_size, num_spans, 2), where the last dimension represents the inclusive start and end indices of the span to be extracted from the sequence_tensor.
sequence_mask : torch.LongTensor, optional (default = None).
    A tensor of shape (batch_size, sequence_length) representing padded elements of the sequence.
span_indices_mask : torch.LongTensor, optional (default = None).
    A tensor of shape (batch_size, num_spans) representing the valid spans in the indices tensor. This mask is optional because sometimes it's easier to worry about masking after calling this function, rather than passing a mask directly.
Returns

A tensor of shape (batch_size, num_spans, embedded_span_size), where embedded_span_size depends on the way spans are represented.
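To make this contract concrete, here is a minimal sketch of a custom subclass that represents each span by the embedding of its start token. The class name ToyStartSpanExtractor and the registration key "toy_start" are hypothetical, for illustration only.

```python
import torch
from allennlp.modules.span_extractors import SpanExtractor


@SpanExtractor.register("toy_start")  # hypothetical key, for illustration
class ToyStartSpanExtractor(SpanExtractor):
    """Toy extractor: each span is represented by its start-token embedding."""

    def __init__(self, input_dim: int) -> None:
        super().__init__()
        self._input_dim = input_dim

    def get_input_dim(self) -> int:
        return self._input_dim

    def get_output_dim(self) -> int:
        return self._input_dim

    def forward(self, sequence_tensor, span_indices,
                sequence_mask=None, span_indices_mask=None):
        # span_indices[..., 0] holds each span's inclusive start index.
        starts = span_indices[:, :, 0]  # (batch_size, num_spans)
        # Expand the indices so we can gather whole embedding vectors.
        index = starts.unsqueeze(-1).expand(-1, -1, sequence_tensor.size(-1))
        representations = sequence_tensor.gather(1, index)
        if span_indices_mask is not None:
            # Zero out representations of padded (invalid) spans.
            representations = representations * span_indices_mask.unsqueeze(-1).float()
        return representations
```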
class allennlp.modules.span_extractors.endpoint_span_extractor.EndpointSpanExtractor(input_dim: int, combination: str = 'x,y', num_width_embeddings: int = None, span_width_embedding_dim: int = None, bucket_widths: bool = False, use_exclusive_start_indices: bool = False)

Bases: allennlp.modules.span_extractors.span_extractor.SpanExtractor

Represents spans as a combination of the embeddings of their endpoints. Additionally, the width of the spans can be embedded and concatenated onto the final combination.
The following types of representation are supported, assuming that x = span_start_embeddings and y = span_end_embeddings:

x, y, x*y, x+y, x-y, x/y, where each of those binary operations is performed elementwise. You can list as many combinations as you want, comma separated. For example, you might give x,y,x*y as the combination parameter to this class. The computed representation would then be [x; y; x*y], which can then be optionally concatenated with an embedded representation of the width of the span.
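As a quick illustration of the arithmetic (plain torch, not the library's internal implementation): the "x,y,x*y" combination concatenates three input_dim-sized terms, so the combined dimension is 3 * input_dim.

```python
import torch

# x and y stand for the gathered start/end embeddings: (batch, num_spans, dim).
x = torch.randn(2, 3, 4)
y = torch.randn(2, 3, 4)
combined = torch.cat([x, y, x * y], dim=-1)  # (2, 3, 12): three terms * dim 4
```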
Parameters

input_dim : int, required.
    The final dimension of the sequence_tensor.
combination : str, optional (default = "x,y").
    The method used to combine the start_embedding and end_embedding representations. See above for a full description.
num_width_embeddings : int, optional (default = None).
    Specifies the number of buckets to use when representing span width features.
span_width_embedding_dim : int, optional (default = None).
    The embedding size for the span width features.
bucket_widths : bool, optional (default = False).
    Whether to bucket the span widths into log-space buckets. If False, the raw span widths are used.
use_exclusive_start_indices : bool, optional (default = False).
    If True, the start indices extracted are converted to exclusive indices. Sentinels are used to represent exclusive span indices for elements in the first position of the sequence (as the exclusive indices for these elements fall outside the sequence boundary), so that start indices can be exclusive. NOTE: this option can be helpful for avoiding the pathological case in which you want span differences for length-1 spans; if you use inclusive indices, you end up with an x - x operation for length-1 spans, which is not good.
forward(self, sequence_tensor: torch.FloatTensor, span_indices: torch.LongTensor, sequence_mask: torch.LongTensor = None, span_indices_mask: torch.LongTensor = None) → torch.FloatTensor

Given a sequence tensor, extract spans and return representations of them. Span representations can be computed in many different ways, such as concatenation of the start and end embeddings, attention over the vectors contained inside the span, etc.
Parameters

sequence_tensor : torch.FloatTensor, required.
    A tensor of shape (batch_size, sequence_length, embedding_size) representing an embedded sequence of words.
span_indices : torch.LongTensor, required.
    A tensor of shape (batch_size, num_spans, 2), where the last dimension represents the inclusive start and end indices of the span to be extracted from the sequence_tensor.
sequence_mask : torch.LongTensor, optional (default = None).
    A tensor of shape (batch_size, sequence_length) representing padded elements of the sequence.
span_indices_mask : torch.LongTensor, optional (default = None).
    A tensor of shape (batch_size, num_spans) representing the valid spans in the indices tensor. This mask is optional because sometimes it's easier to worry about masking after calling this function, rather than passing a mask directly.
Returns

A tensor of shape (batch_size, num_spans, embedded_span_size), where embedded_span_size depends on the way spans are represented.
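A hedged usage sketch (shapes and values are illustrative):

```python
import torch
from allennlp.modules.span_extractors import EndpointSpanExtractor

extractor = EndpointSpanExtractor(input_dim=8, combination="x,y,x*y")
assert extractor.get_output_dim() == 24  # three combination terms * input_dim

sequence = torch.randn(2, 10, 8)              # (batch_size, sequence_length, dim)
spans = torch.tensor([[[0, 2], [5, 5]],       # inclusive (start, end) pairs
                      [[1, 4], [6, 9]]])      # (batch_size, num_spans, 2)
representations = extractor(sequence, spans)  # (2, 2, 24)
```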
class allennlp.modules.span_extractors.self_attentive_span_extractor.SelfAttentiveSpanExtractor(input_dim: int)

Bases: allennlp.modules.span_extractors.span_extractor.SpanExtractor
Computes span representations by generating an unnormalized attention score for each word in the document. Span representations are computed with respect to these scores by normalising the attention scores for words inside the span.

Given these attention distributions over every span, this module weights the corresponding vector representations of the words in the span by this distribution, returning a weighted representation of each span. A conceptual sketch of this computation follows.
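The sketch below shows the idea in plain torch; it is not the library's implementation. A single global scorer produces one logit per token, and each span softmaxes only over its own tokens. The scorer argument is an assumed stand-in for the class's internal parameterisation (for instance, torch.nn.Linear(D, 1) would fit).

```python
import torch


def span_attention(sequence_tensor, span_indices, scorer):
    # sequence_tensor: (B, T, D); span_indices: (B, S, 2), inclusive endpoints.
    # scorer: any module mapping (B, T, D) -> (B, T, 1) global attention logits.
    B, T, D = sequence_tensor.shape
    logits = scorer(sequence_tensor).squeeze(-1)       # (B, T): one score per token
    starts, ends = span_indices[..., 0], span_indices[..., 1]
    max_width = int((ends - starts).max()) + 1
    offsets = torch.arange(max_width, device=sequence_tensor.device)
    positions = starts.unsqueeze(-1) + offsets         # (B, S, W) token positions
    inside = positions <= ends.unsqueeze(-1)           # mask positions past span end
    positions = positions.clamp(max=T - 1)
    span_logits = logits.gather(1, positions.view(B, -1)).view(B, -1, max_width)
    span_logits = span_logits.masked_fill(~inside, float("-inf"))
    weights = torch.softmax(span_logits, dim=-1)       # normalised within each span
    index = positions.view(B, -1).unsqueeze(-1).expand(-1, -1, D)
    span_embeddings = sequence_tensor.gather(1, index).view(B, -1, max_width, D)
    return (weights.unsqueeze(-1) * span_embeddings).sum(dim=2)  # (B, S, D)
```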
Parameters

input_dim : int, required.
    The final dimension of the sequence_tensor.
Returns

attended_text_embeddings : torch.FloatTensor.
    A tensor of shape (batch_size, num_spans, input_dim), in which each span representation is formed by locally normalising a global attention over the sequence. The only way in which the attention distribution differs over different spans is in the set of words over which it is normalised.
forward(self, sequence_tensor: torch.FloatTensor, span_indices: torch.LongTensor, sequence_mask: torch.LongTensor = None, span_indices_mask: torch.LongTensor = None) → torch.FloatTensor

Given a sequence tensor, extract spans and return representations of them. Span representations can be computed in many different ways, such as concatenation of the start and end embeddings, attention over the vectors contained inside the span, etc.
Parameters

sequence_tensor : torch.FloatTensor, required.
    A tensor of shape (batch_size, sequence_length, embedding_size) representing an embedded sequence of words.
span_indices : torch.LongTensor, required.
    A tensor of shape (batch_size, num_spans, 2), where the last dimension represents the inclusive start and end indices of the span to be extracted from the sequence_tensor.
sequence_mask : torch.LongTensor, optional (default = None).
    A tensor of shape (batch_size, sequence_length) representing padded elements of the sequence.
span_indices_mask : torch.LongTensor, optional (default = None).
    A tensor of shape (batch_size, num_spans) representing the valid spans in the indices tensor. This mask is optional because sometimes it's easier to worry about masking after calling this function, rather than passing a mask directly.
Returns

A tensor of shape (batch_size, num_spans, embedded_span_size), where embedded_span_size depends on the way spans are represented.
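A hedged usage sketch; note that the output dimension equals input_dim:

```python
import torch
from allennlp.modules.span_extractors import SelfAttentiveSpanExtractor

extractor = SelfAttentiveSpanExtractor(input_dim=8)
sequence = torch.randn(2, 10, 8)
spans = torch.tensor([[[0, 2], [5, 5]],
                      [[1, 4], [6, 9]]])
representations = extractor(sequence, spans)  # (2, 2, 8)
```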
class allennlp.modules.span_extractors.bidirectional_endpoint_span_extractor.BidirectionalEndpointSpanExtractor(input_dim: int, forward_combination: str = 'y-x', backward_combination: str = 'x-y', num_width_embeddings: int = None, span_width_embedding_dim: int = None, bucket_widths: bool = False, use_sentinels: bool = True)

Bases: allennlp.modules.span_extractors.span_extractor.SpanExtractor
Represents spans from a bidirectional encoder as a concatenation of two different representations of the span endpoints, one from the forward direction of the encoder and one from the backward direction. This type of representation encodes some subtlety, because when you consider the forward and backward directions separately, the end index of the span for the backward direction's representation is actually the start index.

By default, this SpanExtractor represents spans as sequence_tensor[inclusive_span_end] - sequence_tensor[exclusive_span_start], meaning that the representation is the difference between the last word in the span and the word before the span started. Note that the start and end indices are with respect to the direction that the RNN is going in, so for the backward direction, the start/end indices are reversed; a sketch of this default follows.

Additionally, the width of the spans can be embedded and concatenated onto the final combination.
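A minimal sketch of this default, in plain torch, following the description above. It ignores the sentinel handling described under use_sentinels: out-of-range exclusive indices are simply clamped here, which is a simplification, not what the class does.

```python
import torch


def default_bidirectional_spans(sequence_tensor, span_indices):
    # Assumes the encoder output concatenates forward and backward states: (B, T, 2H).
    half = sequence_tensor.size(-1) // 2
    fwd, bwd = sequence_tensor[..., :half], sequence_tensor[..., half:]
    starts, ends = span_indices[..., 0], span_indices[..., 1]
    T = sequence_tensor.size(1)

    def gather(tensor, index):
        idx = index.unsqueeze(-1).expand(-1, -1, tensor.size(-1))
        return tensor.gather(1, idx)

    # Forward direction: inclusive end minus exclusive start (start - 1).
    forward_repr = gather(fwd, ends) - gather(fwd, (starts - 1).clamp(min=0))
    # Backward direction: the indices reverse, so the exclusive index sits at
    # the span end instead: inclusive start minus exclusive end (end + 1).
    backward_repr = gather(bwd, starts) - gather(bwd, (ends + 1).clamp(max=T - 1))
    return torch.cat([forward_repr, backward_repr], dim=-1)  # (B, num_spans, 2H)
```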
The following other types of representation are supported for both the forward and backward directions, assuming that x = span_start_embeddings and y = span_end_embeddings:

x, y, x*y, x+y, x-y, x/y, where each of those binary operations is performed elementwise. You can list as many combinations as you want, comma separated. For example, you might give x,y,x*y as the combination parameter to this class. The computed representation would then be [x; y; x*y], which can then be optionally concatenated with an embedded representation of the width of the span.

Parameters
input_dim : int, required.
    The final dimension of the sequence_tensor.
forward_combination : str, optional (default = "y-x").
    The method used to combine the forward_start_embeddings and forward_end_embeddings for the forward direction of the bidirectional representation. See above for a full description.
backward_combination : str, optional (default = "x-y").
    The method used to combine the backward_start_embeddings and backward_end_embeddings for the backward direction of the bidirectional representation. See above for a full description.
num_width_embeddings : int, optional (default = None).
    Specifies the number of buckets to use when representing span width features.
span_width_embedding_dim : int, optional (default = None).
    The embedding size for the span width features.
bucket_widths : bool, optional (default = False).
    Whether to bucket the span widths into log-space buckets. If False, the raw span widths are used.
use_sentinels : bool, optional (default = True).
    If True, sentinels are used to represent exclusive span indices for the elements in the first and last positions in the sequence (as the exclusive indices for these elements are outside the sequence boundary). This is not strictly necessary, as you may know that your exclusive start and end indices are always within your sequence representation, such as if you have appended/prepended <START> and <END> tokens to your sequence.
forward(self, sequence_tensor: torch.FloatTensor, span_indices: torch.LongTensor, sequence_mask: torch.LongTensor = None, span_indices_mask: torch.LongTensor = None) → torch.FloatTensor

Given a sequence tensor, extract spans and return representations of them. Span representations can be computed in many different ways, such as concatenation of the start and end embeddings, attention over the vectors contained inside the span, etc.
Parameters

sequence_tensor : torch.FloatTensor, required.
    A tensor of shape (batch_size, sequence_length, embedding_size) representing an embedded sequence of words.
span_indices : torch.LongTensor, required.
    A tensor of shape (batch_size, num_spans, 2), where the last dimension represents the inclusive start and end indices of the span to be extracted from the sequence_tensor.
sequence_mask : torch.LongTensor, optional (default = None).
    A tensor of shape (batch_size, sequence_length) representing padded elements of the sequence.
span_indices_mask : torch.LongTensor, optional (default = None).
    A tensor of shape (batch_size, num_spans) representing the valid spans in the indices tensor. This mask is optional because sometimes it's easier to worry about masking after calling this function, rather than passing a mask directly.
Returns

A tensor of shape (batch_size, num_spans, embedded_span_size), where embedded_span_size depends on the way spans are represented.
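A hedged usage sketch: input_dim must be even, since the tensor is split into forward and backward halves, and with the default difference combinations the output dimension equals input_dim.

```python
import torch
from allennlp.modules.span_extractors import BidirectionalEndpointSpanExtractor

extractor = BidirectionalEndpointSpanExtractor(input_dim=16)
assert extractor.get_output_dim() == 16  # each direction contributes input_dim / 2

sequence = torch.randn(2, 12, 16)
spans = torch.tensor([[[0, 3], [4, 4]],
                      [[2, 7], [8, 11]]])
representations = extractor(sequence, spans)  # (2, 2, 16)
```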