lstm_cell
LstmCellDecoderNet#
class LstmCellDecoderNet(DecoderNet):
| def __init__(
| self,
| decoding_dim: int,
| target_embedding_dim: int,
| attention: Optional[Attention] = None,
| bidirectional_input: bool = False
| ) -> None
This decoder net implements a simple decoding network with an LSTMCell and attention.

Parameters

- decoding_dim : int
  Defines the dimensionality of the output vectors.
- target_embedding_dim : int
  Defines the dimensionality of the input target embeddings. Since this model takes its output on a previous step as the input of the following step, this is also the input dimensionality.
- attention : Attention, optional (default = None)
  If you want to use attention to get a dynamic summary of the encoder outputs at each step of decoding, this is the function used to compute similarity between the decoder hidden state and encoder outputs.
- bidirectional_input : bool, optional (default = False)
  Whether the input encoded sequence was produced by a bidirectional encoder.
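The core of such a decoder is a single `torch.nn.LSTMCell` whose input size is `target_embedding_dim` and whose hidden size is `decoding_dim`. The following is a minimal sketch of that idea in plain PyTorch, not the library's implementation; all variable names here are illustrative.

```python
import torch

# Illustrative sketch (assumed dimensions): the embedding of the previous
# step's prediction is fed into an LSTMCell whose hidden state has
# `decoding_dim` dimensions.
decoding_dim, target_embedding_dim = 16, 8
cell = torch.nn.LSTMCell(target_embedding_dim, decoding_dim)

batch = 4
prev_embedding = torch.randn(batch, target_embedding_dim)
hidden = torch.zeros(batch, decoding_dim)   # decoder hidden state
context = torch.zeros(batch, decoding_dim)  # decoder memory cell

# One decoding step: the cell consumes the previous prediction's embedding
# and updates both the hidden state and the memory cell.
hidden, context = cell(prev_embedding, (hidden, context))
print(hidden.shape)  # torch.Size([4, 16])
```

Because the cell's output is re-embedded and fed back in at the next step, the input dimensionality is tied to `target_embedding_dim`, as the parameter description above notes.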
init_decoder_state#
class LstmCellDecoderNet(DecoderNet):
| ...
| def init_decoder_state(
| self,
| encoder_out: Dict[str, torch.LongTensor]
| ) -> Dict[str, torch.Tensor]
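`init_decoder_state` builds the initial state dictionary from the encoder's output. A common scheme, sketched below under assumed key names (`decoder_hidden`, `decoder_context`), is to seed the hidden state from the encoder's final output and start the memory cell at zero:

```python
import torch

# Hedged sketch of decoder-state initialization (assumed key names, not
# necessarily the library's): hidden state from the encoder's final output,
# memory cell zero-initialized at the same `decoding_dim` size.
batch, decoding_dim = 4, 16
final_encoder_output = torch.randn(batch, decoding_dim)

state = {
    "decoder_hidden": final_encoder_output,               # (batch, decoding_dim)
    "decoder_context": torch.zeros(batch, decoding_dim),  # (batch, decoding_dim)
}
print(state["decoder_hidden"].shape)  # torch.Size([4, 16])
```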
forward#
class LstmCellDecoderNet(DecoderNet):
| ...
| @overrides
| def forward(
| self,
| previous_state: Dict[str, torch.Tensor],
| encoder_outputs: torch.Tensor,
| source_mask: torch.BoolTensor,
| previous_steps_predictions: torch.Tensor,
| previous_steps_mask: Optional[torch.BoolTensor] = None
| ) -> Tuple[Dict[str, torch.Tensor], torch.Tensor]
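A single `forward` call advances the decoder by one step: when attention is enabled, the decoder hidden state is scored against `encoder_outputs` (with padding masked via `source_mask`), the attended summary is concatenated with the last prediction's embedding, and the result is fed to the LSTMCell. The sketch below uses simple dot-product attention for illustration; in the actual class the `Attention` module is pluggable, and the variable names are assumptions.

```python
import torch

# Hedged sketch of one decoding step with dot-product attention (illustrative
# names; the real Attention module is configurable).
batch, src_len, decoding_dim, target_embedding_dim = 2, 5, 16, 8
cell = torch.nn.LSTMCell(target_embedding_dim + decoding_dim, decoding_dim)

encoder_outputs = torch.randn(batch, src_len, decoding_dim)
source_mask = torch.ones(batch, src_len, dtype=torch.bool)
hidden = torch.zeros(batch, decoding_dim)
context = torch.zeros(batch, decoding_dim)
last_prediction_embedding = torch.randn(batch, target_embedding_dim)

# Score each encoder position against the decoder hidden state, mask out
# padding positions, then take the weighted sum of encoder outputs.
scores = torch.bmm(encoder_outputs, hidden.unsqueeze(-1)).squeeze(-1)
scores = scores.masked_fill(~source_mask, float("-inf"))
weights = torch.softmax(scores, dim=-1)                          # (batch, src_len)
attended = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)

# Concatenate the attended summary with the previous prediction's embedding
# and advance the LSTMCell one step.
decoder_input = torch.cat([last_prediction_embedding, attended], dim=-1)
hidden, context = cell(decoder_input, (hidden, context))
new_state = {"decoder_hidden": hidden, "decoder_context": context}
print(hidden.shape)  # torch.Size([2, 16])
```

The returned tuple pairs the updated state dictionary with the decoder output for this step, which the caller projects onto the target vocabulary.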