

allennlp.modules.matrix_attention.cosine_matrix_attention



CosineMatrixAttention

@MatrixAttention.register("cosine")
class CosineMatrixAttention(MatrixAttention)

Computes attention between every entry in matrix_1 and every entry in matrix_2 using cosine similarity.

Registered as a MatrixAttention with name "cosine".
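
A minimal usage sketch, assuming the standard MatrixAttention calling convention of inputs shaped (batch_size, num_rows, embedding_dim); the concrete shapes below are illustrative:

import torch
from allennlp.modules.matrix_attention import CosineMatrixAttention

# Two batches of encoded sequences: 4 rows and 5 rows, embedding dim 8.
matrix_1 = torch.randn(2, 4, 8)
matrix_2 = torch.randn(2, 5, 8)

attention = CosineMatrixAttention()
similarities = attention(matrix_1, matrix_2)

# One cosine similarity per pair of rows: shape (2, 4, 5).
print(similarities.shape)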

forward

class CosineMatrixAttention(MatrixAttention):
 | ...
 | @overrides
 | def forward(
 |     self,
 |     matrix_1: torch.Tensor,
 |     matrix_2: torch.Tensor
 | ) -> torch.Tensor
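
Conceptually, forward L2-normalizes the rows of both matrices and takes their batched dot product, which yields the pairwise cosine similarities. A standalone sketch of that computation (not the library's exact implementation; the eps constant here is an assumption added for numerical stability):

import torch

def cosine_similarity_matrix(matrix_1: torch.Tensor,
                             matrix_2: torch.Tensor,
                             eps: float = 1e-13) -> torch.Tensor:
    # Normalize each row to unit L2 norm; eps guards against division by zero.
    a_norm = matrix_1 / (matrix_1.norm(p=2, dim=-1, keepdim=True) + eps)
    b_norm = matrix_2 / (matrix_2.norm(p=2, dim=-1, keepdim=True) + eps)
    # Batched product of the normalized rows:
    # shape (batch_size, num_rows_1, num_rows_2).
    return torch.bmm(a_norm, b_norm.transpose(-1, -2))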