allennlp.modules.maxout
A maxout neural network.
- class allennlp.modules.maxout.Maxout(input_dim: int, num_layers: int, output_dims: Union[int, Sequence[int]], pool_sizes: Union[int, Sequence[int]], dropout: Union[float, Sequence[float]] = 0.0)

  Bases: torch.nn.modules.module.Module, allennlp.common.from_params.FromParams
This Module is a maxout neural network: each maxout unit takes the maximum over several learned linear transformations of its input, so the activation function itself is learned rather than fixed.

Parameters
- input_dim : int
  The dimensionality of the input. We assume the input has shape (batch_size, input_dim).
- num_layers : int
  The number of maxout layers to apply to the input.
- output_dims : Union[int, Sequence[int]]
  The output dimension of each of the maxout layers. If this is a single int, we use it for all maxout layers. If it is a Sequence[int], len(output_dims) must equal num_layers.
- pool_sizes : Union[int, Sequence[int]]
  The size of the max-pools, i.e. how many linear pieces each maxout unit takes its maximum over (see the sketch after this list). If this is a single int, we use it for all maxout layers. If it is a Sequence[int], len(pool_sizes) must equal num_layers.
- dropout : Union[float, Sequence[float]], optional (default = 0.0)
  If given, we will apply this amount of dropout after each layer. The semantics of float versus Sequence[float] are the same as for the other parameters.
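To make pool_sizes concrete, here is a minimal sketch of a single maxout layer in plain PyTorch. It illustrates the technique only; the class name and internals are hypothetical and not AllenNLP's actual implementation.

    import torch
    import torch.nn as nn

    class MaxoutLayerSketch(nn.Module):
        """One maxout layer: project to output_dim * pool_size pieces, max over them."""

        def __init__(self, input_dim: int, output_dim: int, pool_size: int):
            super().__init__()
            self.output_dim = output_dim
            self.pool_size = pool_size
            # One linear map per piece, fused into a single projection.
            self.linear = nn.Linear(input_dim, output_dim * pool_size)

        def forward(self, inputs: torch.Tensor) -> torch.Tensor:
            # (batch_size, output_dim * pool_size) -> (batch_size, output_dim, pool_size)
            projected = self.linear(inputs).view(-1, self.output_dim, self.pool_size)
            # Each output unit is the maximum over its pool_size linear pieces.
            return projected.max(dim=-1).values

    layer = MaxoutLayerSketch(input_dim=10, output_dim=5, pool_size=2)
    print(layer(torch.randn(4, 10)).shape)  # torch.Size([4, 5])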
- forward(self, inputs: torch.Tensor) → torch.Tensor

  Defines the computation performed at every call. Should be overridden by all subclasses.

  Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
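A brief usage sketch, constructing a two-layer Maxout from the documented signature and calling the module instance (rather than forward directly, per the note above); the concrete dimensions are arbitrary:

    import torch
    from allennlp.modules.maxout import Maxout

    # Two maxout layers, each with output dimension 5 and max-pools of size 2,
    # with dropout 0.2 applied after each layer.
    maxout = Maxout(input_dim=10, num_layers=2, output_dims=5, pool_sizes=2, dropout=0.2)

    inputs = torch.randn(4, 10)   # (batch_size, input_dim)
    outputs = maxout(inputs)      # shape: (batch_size, 5)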