endpoint_span_extractor

allennlp.modules.span_extractors.endpoint_span_extractor

EndpointSpanExtractor#

@SpanExtractor.register("endpoint")
class EndpointSpanExtractor(SpanExtractor):
 | def __init__(
 |     self,
 |     input_dim: int,
 |     combination: str = "x,y",
 |     num_width_embeddings: int = None,
 |     span_width_embedding_dim: int = None,
 |     bucket_widths: bool = False,
 |     use_exclusive_start_indices: bool = False
 | ) -> None

Represents spans as a combination of the embeddings of their endpoints. Additionally, the width of each span can be embedded and concatenated onto the final combination.

The following types of representation are supported, assuming that x = span_start_embeddings and y = span_end_embeddings.

x, y, x*y, x+y, x-y, x/y, where each of those binary operations is performed elementwise. You can list as many combinations as you want, comma separated. For example, you might give x,y,x*y as the combination parameter to this class. The computed span representation would then be [x; y; x*y], which can optionally be concatenated with an embedded representation of the span's width.

Registered as a SpanExtractor with name "endpoint".
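
As an illustration of the combination parameter, a minimal usage sketch (tensor sizes chosen arbitrarily; note that span indices are inclusive):

import torch
from allennlp.modules.span_extractors import EndpointSpanExtractor

# (batch_size, sequence_length, input_dim)
sequence = torch.randn(2, 10, 7)
# (batch_size, num_spans, 2) -- inclusive (start, end) indices
spans = torch.tensor([[[1, 3], [4, 4]], [[0, 2], [5, 9]]])

extractor = EndpointSpanExtractor(input_dim=7, combination="x,y,x*y")
span_embeddings = extractor(sequence, spans)
print(span_embeddings.shape)  # torch.Size([2, 2, 21]): three terms, each of size input_dim

Each comma-separated term contributes input_dim dimensions, so "x,y,x*y" with input_dim=7 yields a 21-dimensional span representation.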

Parameters

  • input_dim : int
    The final dimension of the sequence_tensor.
  • combination : str, optional (default = "x,y")
    The method used to combine the start_embedding and end_embedding representations. See above for a full description.
  • num_width_embeddings : int, optional (default = None)
    Specifies the number of buckets to use when representing span width features.
  • span_width_embedding_dim : int, optional (default = None)
    The embedding size for the span width features. To enable width embeddings, both num_width_embeddings and span_width_embedding_dim must be specified.
  • bucket_widths : bool, optional (default = False)
    Whether to bucket the span widths into log-space buckets. If False, the raw span widths are used.
  • use_exclusive_start_indices : bool, optional (default = False)
    If True, the start indices extracted are converted to exclusive indices. Sentinels are used to represent exclusive span indices for elements in the first position of the sequence (as the exclusive indices for these elements fall outside the sequence boundary), so that start indices can always be made exclusive. NOTE: This option helps avoid a pathological case when taking span differences: with inclusive indices, a length-1 span has identical start and end embeddings, so a combination such as x-y collapses to x - x = 0. See the sketch after this list.

get_input_dim#

class EndpointSpanExtractor(SpanExtractor):
 | ...
 | def get_input_dim(self) -> int

get_output_dim#

class EndpointSpanExtractor(SpanExtractor):
 | ...
 | def get_output_dim(self) -> int
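
Neither method takes arguments: get_input_dim() returns the input_dim given at construction, and get_output_dim() follows from the combination string, with each comma-separated term contributing input_dim dimensions, plus span_width_embedding_dim when width embeddings are enabled. A quick check (a sketch; values arbitrary):

from allennlp.modules.span_extractors import EndpointSpanExtractor

extractor = EndpointSpanExtractor(input_dim=7, combination="x,y,x*y,x-y")
assert extractor.get_input_dim() == 7
assert extractor.get_output_dim() == 4 * 7  # four combination terms, no width embedding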

forward#

class EndpointSpanExtractor(SpanExtractor):
 | ...
 | @overrides
 | def forward(
 |     self,
 |     sequence_tensor: torch.FloatTensor,
 |     span_indices: torch.LongTensor,
 |     sequence_mask: torch.BoolTensor = None,
 |     span_indices_mask: torch.BoolTensor = None
 | ) -> torch.FloatTensor

Parameters

  • sequence_tensor : torch.FloatTensor
    A tensor of shape (batch_size, sequence_length, embedding_dim) representing an embedded sequence of words.
  • span_indices : torch.LongTensor
    A tensor of shape (batch_size, num_spans, 2), where the last dimension gives the inclusive (start, end) indices of the spans to extract.
  • sequence_mask : torch.BoolTensor, optional (default = None)
    A tensor of shape (batch_size, sequence_length) indicating which elements of the sequence are padding.
  • span_indices_mask : torch.BoolTensor, optional (default = None)
    A tensor of shape (batch_size, num_spans) indicating which spans are padding.

Returns

A tensor of shape (batch_size, num_spans, get_output_dim()) containing the embedded span representations.
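
To tie the pieces together, a minimal end-to-end sketch with masks (shapes chosen arbitrarily):

import torch
from allennlp.modules.span_extractors import EndpointSpanExtractor

batch_size, seq_len, input_dim = 2, 10, 7
sequence = torch.randn(batch_size, seq_len, input_dim)
spans = torch.tensor([[[0, 0], [2, 5]], [[1, 4], [0, 0]]])  # inclusive endpoints

# True for real tokens/spans, False for padding.
sequence_mask = torch.ones(batch_size, seq_len, dtype=torch.bool)
span_indices_mask = torch.tensor([[True, True], [True, False]])

extractor = EndpointSpanExtractor(input_dim=input_dim)
out = extractor(
    sequence, spans, sequence_mask=sequence_mask, span_indices_mask=span_indices_mask
)
print(out.shape)  # torch.Size([2, 2, 14]) -- default combination "x,y" gives 2 * input_dim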