allennlp.modules.gated_sum#

GatedSum#

GatedSum(
    self,
    input_dim: int,
    activation: allennlp.nn.activations.Activation = Sigmoid(),
) -> None

This Module represents a gated sum of two tensors a and b. Specifically:

f = activation(W [a; b])
out = f * a + (1 - f) * b

Parameters

  • input_dim : int, required The dimensionality of the input. We assume the inputs have shape (..., input_dim).
  • activation : Activation, optional (default = torch.nn.Sigmoid()) The activation function applied to compute the gate f.
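The computation above can be sketched in plain NumPy (this is an illustrative sketch, not the AllenNLP implementation; in the library the gate weights W come from a learned `torch.nn.Linear(input_dim * 2, 1)`, which the hand-built weight matrix below merely stands in for):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_sum(a, b, W):
    # f = sigmoid(W [a; b]) — a scalar gate per position, computed
    # from the concatenation of the two inputs along the last axis
    f = sigmoid(np.concatenate([a, b], axis=-1) @ W)
    # out = f * a + (1 - f) * b — an elementwise convex combination,
    # with the gate of shape (..., 1) broadcast over input_dim
    return f * a + (1 - f) * b

rng = np.random.default_rng(0)
input_dim = 4
a = rng.standard_normal(input_dim)
b = rng.standard_normal(input_dim)
# W maps the concatenated 2 * input_dim vector to a single gate value
W = rng.standard_normal((2 * input_dim, 1))
out = gated_sum(a, b, W)
```

Because the sigmoid gate f lies in (0, 1), each element of `out` falls between the corresponding elements of `a` and `b`, and the output keeps the input shape (..., input_dim).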