allennlp.modules.seq2seq_encoders.compose_encoder

ComposeEncoder#

@Seq2SeqEncoder.register("compose")
class ComposeEncoder(Seq2SeqEncoder):
 | def __init__(self, encoders: List[Seq2SeqEncoder])

This class can be used to compose several encoders in sequence.

Among other things, this can be used to add a "pre-contextualizer" before a Seq2SeqEncoder.

Registered as a Seq2SeqEncoder with name "compose".

Parameters

  • encoders : List[Seq2SeqEncoder]
    A non-empty list of encoders to compose. The encoders must match in bidirectionality; a construction sketch follows the parameter list.
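
A minimal construction sketch, assuming the stock PassThroughEncoder and LstmSeq2SeqEncoder from allennlp.modules.seq2seq_encoders; the dimensions are illustrative:

from allennlp.modules.seq2seq_encoders import (
    ComposeEncoder,
    LstmSeq2SeqEncoder,
    PassThroughEncoder,
)

# A trivial "pre-contextualizer" feeding an LSTM. Both encoders are
# unidirectional, so the bidirectionality constraint is satisfied, and the
# output dim of the first (100) matches the input dim of the second.
pre_contextualizer = PassThroughEncoder(input_dim=100)
lstm = LstmSeq2SeqEncoder(input_size=100, hidden_size=200)
encoder = ComposeEncoder(encoders=[pre_contextualizer, lstm])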

forward#

class ComposeEncoder(Seq2SeqEncoder):
 | ...
 | @overrides
 | def forward(
 |     self,
 |     inputs: torch.Tensor,
 |     mask: torch.BoolTensor = None
 | ) -> torch.Tensor

Parameters

  • inputs : torch.Tensor
    A tensor of shape (batch_size, timesteps, input_dim)
  • mask : torch.BoolTensor, optional (default = None)
    A tensor of shape (batch_size, timesteps).

Returns

  • A tensor of shape (batch_size, timesteps, output_dim), computed by applying the encoders in sequence (see the sketch below).
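
A forward-pass sketch, continuing the construction example above; batch size, sequence length, and padding positions are illustrative:

import torch

inputs = torch.randn(4, 10, 100)            # (batch_size, timesteps, input_dim)
mask = torch.ones(4, 10, dtype=torch.bool)  # (batch_size, timesteps)
mask[:, 8:] = False                         # last two timesteps are padding

outputs = encoder(inputs, mask)             # encoder from the construction sketch
# outputs has shape (4, 10, 200), i.e. (batch_size, timesteps, output_dim)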

get_input_dim#

class ComposeEncoder(Seq2SeqEncoder):
 | ...
 | @overrides
 | def get_input_dim(self) -> int

get_output_dim#

class ComposeEncoder(Seq2SeqEncoder):
 | ...
 | @overrides
 | def get_output_dim(self) -> int

is_bidirectional#

class ComposeEncoder(Seq2SeqEncoder):
 | ...
 | @overrides
 | def is_bidirectional(self) -> bool
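
For the encoder composed in the earlier sketch, these accessors report the dimensions and directionality of the composition; the values below follow from the illustrative dimensions used above:

encoder.get_input_dim()     # 100 -- input dim of the first encoder
encoder.get_output_dim()    # 200 -- output dim of the last encoder
encoder.is_bidirectional()  # False -- all composed encoders are unidirectional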