[ allennlp.modules.attention.bilinear_attention ]
```python
@Attention.register("bilinear")
class BilinearAttention(Attention):
    def __init__(
        self,
        vector_dim: int,
        matrix_dim: int,
        activation: Activation = None,
        normalize: bool = True,
    ) -> None
```
Computes attention between a vector and a matrix using a bilinear attention function. This function has a matrix of weights `W` and a bias `b`, and the similarity between the vector `x` and the matrix `y` is computed as `x^T W y + b`.

Registered as an `Attention` with name "bilinear".
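To make the shape bookkeeping concrete, here is a minimal PyTorch sketch of the `x^T W y + b` computation (not the library's code), assuming `x` has shape `(batch_size, vector_dim)` and `y` has shape `(batch_size, num_rows, matrix_dim)`:

```python
import torch

def bilinear_attention(x, y, W, b, normalize=True):
    """Compute x^T W y + b for each row of y (illustrative sketch only)."""
    # x: (batch_size, vector_dim); y: (batch_size, num_rows, matrix_dim)
    # W: (vector_dim, matrix_dim); b: scalar parameter
    intermediate = x.mm(W).unsqueeze(1)  # (batch_size, 1, matrix_dim)
    # Batched dot product against each row of y, plus the bias.
    scores = intermediate.bmm(y.transpose(1, 2)).squeeze(1) + b  # (batch_size, num_rows)
    if normalize:
        # Turn raw similarities into a probability distribution over rows.
        return torch.nn.functional.softmax(scores, dim=-1)
    return scores
```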
- vector_dim : `int`
  The dimension of the vector, `x`, described above. This is `x.size()[-1]`, i.e. the length of the vector that will go into the similarity computation. We need this so we can build the weight matrix correctly.
- matrix_dim : `int`
  The dimension of the matrix, `y`, described above. This is `y.size()[-1]`, i.e. the length of each row vector that will go into the similarity computation. We need this so we can build the weight matrix correctly.
- activation : `Activation`, optional (default = `None`)
  An activation function applied after the `x^T W y + b` calculation. The default is linear, i.e. no activation.
- normalize : `bool`, optional (default = `True`)
  If true, we normalize the computed similarities with a softmax, to return a probability distribution for your attention. If false, this is just computing a similarity score.
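As a usage sketch (shapes follow the parameter descriptions above; the specific dimensions here are arbitrary):

```python
import torch
from allennlp.modules.attention import BilinearAttention

attention = BilinearAttention(vector_dim=6, matrix_dim=4)
vector = torch.randn(2, 6)     # (batch_size, vector_dim)
matrix = torch.randn(2, 5, 4)  # (batch_size, num_rows, matrix_dim)
weights = attention(vector, matrix)  # (batch_size, num_rows), softmax-normalized
```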
```python
class BilinearAttention(Attention):
    ...
    def reset_parameters(self)
```
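The body of `reset_parameters` is not shown on this page. A plausible sketch, assuming the learned parameters are stored as `_weight_matrix` and `_bias` (attribute names assumed, not confirmed here), would reinitialize them in place:

```python
import torch

def reset_parameters(self):
    # Reinitialize the bilinear weight matrix and zero the bias.
    # `_weight_matrix` and `_bias` are assumed attribute names.
    torch.nn.init.xavier_uniform_(self._weight_matrix)
    self._bias.data.fill_(0)
```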