activations
allennlp.nn.activations
An `Activation` is just a function that takes some parameters and returns an element-wise activation function. For the most part we just use PyTorch activations. Here we provide a thin wrapper to allow registering them and instantiating them `from_params`.
The available activation functions include:
- "linear"
- "mish"
- "swish"
- "relu"
- "relu6"
- "elu"
- "prelu"
- "leaky_relu"
- "threshold"
- "hardtanh"
- "sigmoid"
- "tanh"
- "log_sigmoid"
- "softplus"
- "softshrink"
- "softsign"
- "tanhshrink"
- "selu"
Activation
class Activation(torch.nn.Module, Registrable)
PyTorch has a number of built-in activation functions. We group those here under a common type, just to make it easier to configure and instantiate them `from_params` using `Registrable`.
Note that we're only including element-wise activation functions in this list. You really need to think about masking when you do a softmax or other similar activation function, so it requires a different API.
forward
class Activation(torch.nn.Module, Registrable):
| ...
| def forward(self, x: torch.Tensor) -> torch.Tensor
Registrable._registry[Activation]
Registrable._registry[Activation] = {
"relu": (torch.nn.ReLU, None),
"relu6": (torch.nn.ReLU6, None),
"elu": (torch.nn.ELU, ...
LinearActivation
@Activation.register("linear")
class LinearActivation(Activation)
forward
class LinearActivation(Activation):
| ...
| def forward(self, x: torch.Tensor) -> torch.Tensor
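The linear activation is the identity. A sketch of what its `forward` effectively computes (not necessarily the library's exact source):

```python
import torch


def linear(x: torch.Tensor) -> torch.Tensor:
    # The "linear" activation passes its input through unchanged.
    return x
```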
MishActivation
@Activation.register("mish")
class MishActivation(Activation)
forward
class MishActivation(Activation):
| ...
| def forward(self, x: torch.Tensor) -> torch.Tensor
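Mish is the smooth, self-gated activation `x * tanh(softplus(x))`. A sketch of the element-wise computation (the library's source may differ in detail):

```python
import torch
import torch.nn.functional as F


def mish(x: torch.Tensor) -> torch.Tensor:
    # mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x)))
    return x * torch.tanh(F.softplus(x))
```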
SwishActivation
@Activation.register("swish")
class SwishActivation(Activation)
forward
class SwishActivation(Activation):
| ...
| def forward(self, x: torch.Tensor) -> torch.Tensor
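Swish (also known as SiLU) is `x * sigmoid(x)`. A sketch of the element-wise computation (the library's source may differ in detail):

```python
import torch


def swish(x: torch.Tensor) -> torch.Tensor:
    # swish(x) = x * sigmoid(x)
    return x * torch.sigmoid(x)
```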
GeluNew
@Activation.register("gelu_new")
class GeluNew(Activation)
Implementation of the GELU activation function currently used in the Google BERT repo (identical to OpenAI GPT). Also see the Gaussian Error Linear Units paper: https://arxiv.org/abs/1606.08415
forward
class GeluNew(Activation):
| ...
| def forward(self, x: torch.Tensor) -> torch.Tensor
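The BERT/GPT version referenced above is the tanh-based approximation of GELU. A sketch of the computation (the library's source may differ in detail):

```python
import math

import torch


def gelu_new(x: torch.Tensor) -> torch.Tensor:
    # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    return 0.5 * x * (
        1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0)))
    )
```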
GeluFast
@Activation.register("gelu_fast")
class GeluFast(Activation)
forward
class GeluFast(Activation):
| ...
| def forward(self, x: torch.Tensor) -> torch.Tensor
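GeluFast is typically a faster variant of the same tanh approximation, with `sqrt(2/pi)` folded into a precomputed constant. A sketch under that assumption (the library's source may differ):

```python
import torch


def gelu_fast(x: torch.Tensor) -> torch.Tensor:
    # Like gelu_new, but with sqrt(2/pi) ~ 0.7978845608 precomputed.
    return 0.5 * x * (1.0 + torch.tanh(0.7978845608 * x * (1.0 + 0.044715 * x * x)))
```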