bidirectional_endpoint_span_extractor

allennlp.modules.span_extractors.bidirectional_endpoint_span_extractor

BidirectionalEndpointSpanExtractor#

@SpanExtractor.register("bidirectional_endpoint")
class BidirectionalEndpointSpanExtractor(SpanExtractor):
 | def __init__(
 |     self,
 |     input_dim: int,
 |     forward_combination: str = "y-x",
 |     backward_combination: str = "x-y",
 |     num_width_embeddings: int = None,
 |     span_width_embedding_dim: int = None,
 |     bucket_widths: bool = False,
 |     use_sentinels: bool = True
 | ) -> None

Represents spans from a bidirectional encoder as a concatenation of two different representations of the span endpoints, one for the forward direction of the encoder and one for the backward direction. This type of representation encodes some subtlety, because when you consider the forward and backward directions separately, the end index of the span for the backward direction's representation is actually the start index.

By default, this SpanExtractor represents spans as sequence_tensor[inclusive_span_end] - sequence_tensor[exclusive_span_start], meaning that the representation is the difference between the last word in the span and the word before the span starts. Note that the start and end indices are with respect to the direction that the RNN is going in, so for the backward direction, the start/end indices are reversed.
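
Concretely, for a span with inclusive word indices (i, j), the default settings produce the concatenation of f[j] - f[i-1] from the forward direction and b[i] - b[j+1] from the backward direction, where f and b are the forward and backward halves of sequence_tensor. Sentinel vectors stand in for the out-of-bounds positions f[-1] and b[sequence_length] (see use_sentinels below).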

Additionally, the width of the spans can be embedded and concatenated onto the final combination.

The following other types of representation are supported for both the forward and backward directions, assuming that x = span_start_embeddings and y = span_end_embeddings.

x, y, x*y, x+y, x-y, x/y, where each of those binary operations is performed elementwise. You can list as many combinations as you want, comma separated. For example, you might give x,y,x*y as the combination parameter to this class. The computed representation would then be [x; y; x*y], which can then be optionally concatenated with an embedded representation of the width of the span.
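
Combination strings of this form are parsed by allennlp.nn.util.combine_tensors, which the extractor uses internally. A minimal sketch of what a combination like x,y,x*y produces, using toy tensors with illustrative sizes:

import torch
from allennlp.nn.util import combine_tensors

# Toy endpoint embeddings: a batch of one span with 4-dimensional endpoints.
x = torch.randn(1, 4)  # span_start_embeddings
y = torch.randn(1, 4)  # span_end_embeddings

# "x,y,x*y" concatenates x, y, and the elementwise product x * y.
combined = combine_tensors("x,y,x*y", [x, y])
print(combined.shape)  # torch.Size([1, 12])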

Registered as a SpanExtractor with name "bidirectional_endpoint".

Parameters

  • input_dim : int
    The final dimension of the sequence_tensor.
  • forward_combination : str, optional (default = "y-x")
    The method used to combine the forward_start_embeddings and forward_end_embeddings for the forward direction of the bidirectional representation. See above for a full description.
  • backward_combination : str, optional (default = "x-y")
    The method used to combine the backward_start_embeddings and backward_end_embeddings for the backward direction of the bidirectional representation. See above for a full description.
  • num_width_embeddings : int, optional (default = None)
    Specifies the number of buckets to use when representing span width features.
  • span_width_embedding_dim : int, optional (default = None)
    The embedding size for the span_width features.
  • bucket_widths : bool, optional (default = False)
    Whether to bucket the span widths into log-space buckets. If False, the raw span widths are used.
  • use_sentinels : bool, optional (default = True)
    If True, sentinels are used to represent exclusive span indices for the elements in the first and last positions in the sequence (as the exclusive indices for these elements are outside of the sequence boundary). This is not strictly necessary, as you may know that your exclusive start and end indices are always within your sequence representation, such as if you have appended/prepended <START> and <END> tokens to your sequence.

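As a sketch, here is an extractor configured with bucketed span-width embeddings; all sizes are illustrative, not defaults:

from allennlp.modules.span_extractors import BidirectionalEndpointSpanExtractor

extractor = BidirectionalEndpointSpanExtractor(
    input_dim=100,                # must be even: split into forward/backward halves of 50
    forward_combination="y-x",
    backward_combination="x-y",
    num_width_embeddings=10,      # span widths bucketed into 10 log-space buckets
    span_width_embedding_dim=20,
    bucket_widths=True,
)
print(extractor.get_output_dim())  # 50 + 50 + 20 = 120
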
get_input_dim#

class BidirectionalEndpointSpanExtractor(SpanExtractor):
 | ...
 | def get_input_dim(self) -> int

get_output_dim#

class BidirectionalEndpointSpanExtractor(SpanExtractor):
 | ...
 | def get_output_dim(self) -> int
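
get_input_dim returns the input_dim the extractor was constructed with, i.e. the expected final dimension of sequence_tensor. get_output_dim returns the dimensionality of the span representations produced by forward: the combined forward and backward endpoint dimensions, plus span_width_embedding_dim if width embeddings are configured (see the constructor example above).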

forward#

class BidirectionalEndpointSpanExtractor(SpanExtractor):
 | ...
 | @overrides
 | def forward(
 |     self,
 |     sequence_tensor: torch.FloatTensor,
 |     span_indices: torch.LongTensor,
 |     sequence_mask: torch.BoolTensor = None,
 |     span_indices_mask: torch.BoolTensor = None
 | ) -> torch.FloatTensor

Internally, sequence_tensor is split along its final dimension into forward and backward representations, both of shape (batch_size, sequence_length, embedding_size / 2).
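
A minimal end-to-end sketch with toy tensors; span_indices holds inclusive (start, end) token indices, and all shapes are illustrative:

import torch
from allennlp.modules.span_extractors import BidirectionalEndpointSpanExtractor

extractor = BidirectionalEndpointSpanExtractor(input_dim=8)

# (batch_size=2, sequence_length=10, embedding_size=8)
sequence_tensor = torch.randn(2, 10, 8)
# (batch_size=2, num_spans=3, 2): inclusive start/end indices per span
span_indices = torch.tensor([[[0, 2], [3, 3], [5, 9]],
                             [[1, 4], [2, 6], [0, 0]]])

span_embeddings = extractor(sequence_tensor, span_indices)
print(span_embeddings.shape)  # (2, 3, 8): with the default combinations,
                              # the output dimension equals input_dim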