[ allennlp.modules.stacked_alternating_lstm ]
A stacked LSTM with LSTM layers which alternate between going forwards over the sequence and going backwards.
TensorPair = Tuple[torch.Tensor, torch.Tensor]
class StackedAlternatingLstm(torch.nn.Module): | def __init__( | self, | input_size: int, | hidden_size: int, | num_layers: int, | recurrent_dropout_probability: float = 0.0, | use_highway: bool = True, | use_input_projection_bias: bool = True | ) -> None
A stacked LSTM with LSTM layers which alternate between going forwards over the sequence and going backwards. This implementation is based on the description in Deep Semantic Role Labeling: What Works and What's Next.
- input_size :
int, required
The dimension of the inputs to the LSTM.
- hidden_size :
int, required
The dimension of the outputs of the LSTM.
- num_layers :
int, required
The number of stacked LSTMs to use.
- recurrent_dropout_probability :
float, optional (default = 0.0)
The dropout probability to be used in a dropout scheme as stated in A Theoretically Grounded Application of Dropout in Recurrent Neural Networks.
- use_highway :
bool, optional (default = True)
Whether or not to use highway connections between the stacked LSTM layers.
- use_input_projection_bias :
bool, optional (default = True)
Whether or not to use a bias on the input projection layer. This is mainly here for backwards compatibility reasons and will be removed (and set to False) in future releases.
- output_accumulator :
The outputs of the interleaved LSTMs per timestep. A tensor of shape (batch_size, max_timesteps, hidden_size) where for a given batch element, all outputs past the sequence length for that batch are zero tensors.
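A minimal construction sketch; the dimensions below are illustrative placeholders, not defaults, and the import path follows the module name above.

    from allennlp.modules.stacked_alternating_lstm import StackedAlternatingLstm

    # Illustrative sizes only: 100-dimensional inputs, 300-dimensional outputs,
    # and 8 layers that alternate direction (forward, backward, forward, ...).
    lstm = StackedAlternatingLstm(
        input_size=100,
        hidden_size=300,
        num_layers=8,
        recurrent_dropout_probability=0.1,
    )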
class StackedAlternatingLstm(torch.nn.Module): | ... | def forward( | self, | inputs: PackedSequence, | initial_state: Optional[TensorPair] = None | ) -> Tuple[Union[torch.Tensor, PackedSequence], TensorPair]
- inputs :
PackedSequence, required
A batch first PackedSequence to run the stacked LSTM over.
- initial_state :
Tuple[torch.Tensor, torch.Tensor], optional (default = None)
A tuple (state, memory) representing the initial hidden state and memory of the LSTM. Each tensor has shape (1, batch_size, output_dimension).
- output_sequence :
The encoded sequence of shape (batch_size, sequence_length, hidden_size).
- final_states :
The per-layer final (state, memory) states of the LSTM, each with shape (num_layers, batch_size, hidden_size).
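A minimal usage sketch for forward. The tensors, lengths, and sizes are illustrative placeholders, and it assumes the encoded sequence comes back as a PackedSequence (the return annotation above allows either a Tensor or a PackedSequence).

    import torch
    from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
    from allennlp.modules.stacked_alternating_lstm import StackedAlternatingLstm

    lstm = StackedAlternatingLstm(input_size=100, hidden_size=300, num_layers=8)

    # A batch of 2 sequences padded to 7 timesteps, with input_size features each.
    embeddings = torch.randn(2, 7, 100)
    lengths = [7, 5]  # true sequence lengths, longest first
    packed = pack_padded_sequence(embeddings, lengths, batch_first=True)

    output_sequence, (state, memory) = lstm(packed)

    # Assuming a PackedSequence is returned, unpack it to
    # (batch_size, sequence_length, hidden_size); outputs past each
    # sequence length are zero.
    unpacked, _ = pad_packed_sequence(output_sequence, batch_first=True)

    # state and memory each have shape (num_layers, batch_size, hidden_size).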