bilinear_attention
[ allennlp.modules.attention.bilinear_attention ]
BilinearAttention
@Attention.register("bilinear")
class BilinearAttention(Attention):
| def __init__(
| self,
| vector_dim: int,
| matrix_dim: int,
| activation: Activation = None,
| normalize: bool = True
| ) -> None
Computes attention between a vector and a matrix using a bilinear attention function. This
function has a matrix of weights `W` and a bias `b`, and the similarity between the vector
`x` and the matrix `y` is computed as `x^T W y + b`.
Registered as an Attention with name "bilinear".
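Because the class is registered under the name "bilinear", it can also be constructed from a configuration blob by type name. A minimal sketch of that pattern (the dimension values here are illustrative):

```python
from allennlp.common import Params
from allennlp.modules.attention import Attention

# Construct the registered "bilinear" attention from configuration,
# rather than calling BilinearAttention directly.
attention = Attention.from_params(
    Params({"type": "bilinear", "vector_dim": 3, "matrix_dim": 4})
)
```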
Parameters
- vector_dim : `int`
  The dimension of the vector, `x`, described above. This is `x.size()[-1]` - the length of the vector that will go into the similarity computation. We need this so we can build the weight matrix correctly.
- matrix_dim : `int`
  The dimension of the matrix, `y`, described above. This is `y.size()[-1]` - the length of the vector that will go into the similarity computation. We need this so we can build the weight matrix correctly.
- activation : `Activation`, optional (default = `linear`)
  An activation function applied after the `x^T W y + b` calculation. Default is linear, i.e. no activation.
- normalize : `bool`, optional (default = `True`)
  If true, we normalize the computed similarities with a softmax, to return a probability distribution for your attention. If false, this is just computing a similarity score.
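For a concrete sense of the shapes involved, here is a minimal usage sketch (the batch size and dimensions are illustrative, not required):

```python
import torch
from allennlp.modules.attention import BilinearAttention

# A batch of 2 query vectors of dimension 3, attending over
# a batch of 2 matrices, each with 5 rows of dimension 4.
attention = BilinearAttention(vector_dim=3, matrix_dim=4)
vector = torch.randn(2, 3)     # (batch_size, vector_dim)
matrix = torch.randn(2, 5, 4)  # (batch_size, num_rows, matrix_dim)

weights = attention(vector, matrix)
print(weights.shape)  # torch.Size([2, 5])
# With normalize=True (the default), each row of `weights` is a
# probability distribution over the 5 rows of the matrix.
```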
reset_parameters
class BilinearAttention(Attention):
| ...
| def reset_parameters(self)
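As a rough sketch of what this initialization hook typically does, the snippet below re-initializes the learned bilinear parameters. This is an assumption about the internals, not a transcription of the library's code; the class name `_BilinearSketch` and the attribute names `_weight_matrix` and `_bias` are hypothetical:

```python
import torch
from torch.nn.parameter import Parameter

class _BilinearSketch(torch.nn.Module):
    # Hypothetical stand-in: assumes the attention module stores its
    # bilinear weight and bias as the parameters named below.
    def __init__(self, vector_dim: int, matrix_dim: int) -> None:
        super().__init__()
        self._weight_matrix = Parameter(torch.Tensor(vector_dim, matrix_dim))
        self._bias = Parameter(torch.Tensor(1))
        self.reset_parameters()

    def reset_parameters(self) -> None:
        # Re-initialize parameters: Xavier-uniform weights, zero bias.
        torch.nn.init.xavier_uniform_(self._weight_matrix)
        self._bias.data.fill_(0)
```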