allennlp.modules.token_embedders.pass_through_token_embedder
PassThroughTokenEmbedder¶
@TokenEmbedder.register("pass_through")
class PassThroughTokenEmbedder(TokenEmbedder):
| def __init__(self, hidden_dim: int) -> None
Assumes that the input is already vectorized in some way, and just returns it.
Registered as a TokenEmbedder with name "pass_through".
Parameters¶
- hidden_dim : int
  The dimension of the already-vectorized input; this is the value returned by get_output_dim().
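A minimal usage sketch (the import path and the 300-dimensional feature size are illustrative assumptions):

```python
import torch
from allennlp.modules.token_embedders import PassThroughTokenEmbedder

# The input is assumed to be vectorized already, e.g. precomputed
# 300-dimensional vectors for a batch of 2 sequences of 5 tokens each.
embedder = PassThroughTokenEmbedder(hidden_dim=300)
features = torch.randn(2, 5, 300)

assert embedder.get_output_dim() == 300   # reports the hidden_dim it was given
embedded = embedder(features)             # forward just returns the input unchanged
```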
get_output_dim¶
class PassThroughTokenEmbedder(TokenEmbedder):
| ...
| def get_output_dim(self)
forward¶
class PassThroughTokenEmbedder(TokenEmbedder):
| ...
| def forward(self, tokens: torch.Tensor) -> torch.Tensor
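Because the class is registered under the name "pass_through", it can also be built from configuration; a sketch, assuming the standard Params/Registrable machinery:

```python
from allennlp.common import Params
from allennlp.modules.token_embedders import TokenEmbedder

# "type": "pass_through" resolves to PassThroughTokenEmbedder via the registry.
embedder = TokenEmbedder.from_params(Params({"type": "pass_through", "hidden_dim": 300}))
assert embedder.get_output_dim() == 300
```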