allennlp.modules.time_distributed
A wrapper that unrolls the second (time) dimension of a tensor
into the first (batch) dimension, applies some other Module,
and then rolls the time dimension back up.
class allennlp.modules.time_distributed.TimeDistributed(module)

Bases: torch.nn.modules.module.Module

Given an input shaped like (batch_size, time_steps, [rest]) and a Module that takes inputs like (batch_size, [rest]), TimeDistributed reshapes the input to be (batch_size * time_steps, [rest]), applies the contained Module, then reshapes it back.

Note that while the above gives shapes with batch_size first, this Module also works if batch_size is second - we always just combine the first two dimensions, then split them.

It also reshapes keyword arguments, unless they are not tensors or their name is specified in the optional pass_through iterable.
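For instance, wrapping a torch.nn.Linear this way applies it independently at every time step. The snippet below is a minimal usage sketch; the layer sizes and tensor shapes are arbitrary and chosen only for illustration.

import torch
from allennlp.modules.time_distributed import TimeDistributed

# A Linear layer normally expects inputs of shape (batch_size, input_dim).
linear = torch.nn.Linear(in_features=10, out_features=5)
distributed_linear = TimeDistributed(linear)

# An input with a time dimension: (batch_size=4, time_steps=7, input_dim=10).
inputs = torch.randn(4, 7, 10)

# Internally this collapses the input to (28, 10), applies the Linear,
# and restores the time dimension, giving an output of shape (4, 7, 5).
outputs = distributed_linear(inputs)
print(outputs.shape)  # torch.Size([4, 7, 5])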
forward(self, *inputs, pass_through: List[str] = None, **kwargs)

Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
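Per the signature above, keyword arguments named in pass_through (and non-tensor keyword arguments in general) are handed to the wrapped Module unchanged rather than reshaped. The sketch below illustrates this; the ScaledLinear module and its scale argument are hypothetical, invented only to show the mechanism.

import torch
from allennlp.modules.time_distributed import TimeDistributed

class ScaledLinear(torch.nn.Module):
    """Hypothetical module taking a tensor plus a non-tensor keyword argument."""
    def __init__(self, input_dim: int, output_dim: int):
        super().__init__()
        self.linear = torch.nn.Linear(input_dim, output_dim)

    def forward(self, inputs, scale: float = 1.0):
        return self.linear(inputs) * scale

distributed = TimeDistributed(ScaledLinear(10, 5))
inputs = torch.randn(4, 7, 10)

# `scale` is not a tensor, so it is not reshaped; naming it in `pass_through`
# makes that explicit (and is how a tensor argument would be exempted too).
outputs = distributed(inputs, scale=0.5, pass_through=["scale"])
print(outputs.shape)  # torch.Size([4, 7, 5])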