
model_test_case

[ allennlp.common.testing.model_test_case ]


ModelTestCase Objects

class ModelTestCase(AllenNlpTestCase)

A subclass of AllenNlpTestCase with added methods for testing Model subclasses.

set_up_model

 | def set_up_model(self, param_file, dataset_file)
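
set_up_model reads the configuration in param_file, loads the instances in dataset_file, and builds the vocabulary and model from them, storing the pieces on the test case (self.model, self.vocab, self.instances, and so on). A minimal sketch of a test that uses it; the class name and fixture paths are hypothetical, and older AllenNLP versions use unittest's setUp instead of setup_method:

    from allennlp.common.testing import ModelTestCase

    class MyModelTest(ModelTestCase):
        def setup_method(self):
            super().setup_method()
            # Hypothetical fixture paths: a training config and a small dataset.
            self.set_up_model(
                "tests/fixtures/my_model/experiment.json",
                "tests/fixtures/my_model/data.json",
            )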

ensure_model_can_train_save_and_load

 | def ensure_model_can_train_save_and_load(
 |     self,
 |     param_file: str,
 |     tolerance: float = 1e-4,
 |     cuda_device: int = -1,
 |     gradients_to_ignore: Set[str] = None,
 |     overrides: str = "",
 |     metric_to_check: str = None,
 |     metric_terminal_value: float = None,
 |     metric_tolerance: float = 1e-4,
 |     disable_dropout: bool = True
 | )

Parameters

  • param_file : str
    Path to a training configuration file that we will use to train the model for this test.
  • tolerance : float, optional (default = 1e-4)
    When comparing model predictions between the originally trained model and the model after saving and loading, we will use this tolerance value (passed as rtol to numpy.testing.assert_allclose).
  • cuda_device : int, optional (default = -1)
    The device to run the test on.
  • gradients_to_ignore : Set[str], optional (default = None)
    This test runs a gradient check to make sure that we're actually computing gradients for all of the parameters in the model. If you really want to ignore certain parameters when doing that check, you can pass their names here. This is not recommended unless you're really sure you don't need to have non-zero gradients for those parameters (e.g., some of the beam search / state machine models have infrequently-used parameters that are hard to force the model to use in a small test).
  • overrides : str, optional (default = "")
    A JSON string that we will use to override values in the input parameter file.
  • metric_to_check : str, optional (default = None)
    If provided, we check that the model reaches the given metric during training (on the validation set, if one is specified). This can be useful in CI, for example. You can pass any metric that appears in the metrics returned by your model.
  • metric_terminal_value : float, optional (default = None)
    When you set metric_to_check, you must also set the value this metric should converge to.
  • metric_tolerance : float, optional (default = 1e-4)
    Tolerance with which to check your model's metric against metric_terminal_value. One can expect some variance in model metrics when the training process is highly stochastic.
  • disable_dropout : bool, optional (default = True)
    If True, we will set all dropout to 0 before checking gradients. (Otherwise, with small datasets, you may get zero gradients because of unlucky dropout.)
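
A typical test wraps this in a single call; a minimal sketch, assuming set_up_model has already run in setup (the metric name "accuracy" is hypothetical and must appear in your model's returned metrics):

    def test_model_can_train_save_and_load(self):
        self.ensure_model_can_train_save_and_load(
            self.param_file,
            metric_to_check="accuracy",  # hypothetical metric name
            metric_terminal_value=1.0,
            metric_tolerance=1e-2,
        )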

assert_fields_equal

 | def assert_fields_equal(
 |     self,
 |     field1,
 |     field2,
 |     name: str,
 |     tolerance: float = 1e-6
 | ) -> None
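
This helper recursively compares two model outputs (tensors, dicts, lists, or scalars), checking numeric values within the given tolerance; it is mostly called internally by ensure_batch_predictions_are_consistent. A sketch of calling it directly, assuming set_up_model has populated self.model and self.dataset:

    def test_forward_is_deterministic(self):
        self.model.eval()  # disable dropout so two passes agree
        tensors = self.dataset.as_tensor_dict()
        output_a = self.model(**tensors)
        output_b = self.model(**tensors)
        self.assert_fields_equal(output_a, output_b, name="forward pass")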

check_model_computes_gradients_correctly

 | @staticmethod
 | def check_model_computes_gradients_correctly(
 |     model: Model,
 |     model_batch: Dict[str, Union[Any, Dict[str, Any]]],
 |     params_to_ignore: Set[str] = None,
 |     disable_dropout: bool = True
 | )
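
This check runs one forward/backward pass on model_batch and asserts that every trainable parameter not listed in params_to_ignore receives a non-zero gradient. ensure_model_can_train_save_and_load calls it for you, but it can also be invoked directly as a static method; a sketch, assuming set_up_model has run:

    def test_gradients_are_computed(self):
        training_tensors = self.dataset.as_tensor_dict()
        ModelTestCase.check_model_computes_gradients_correctly(
            self.model, training_tensors, disable_dropout=True
        )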

ensure_batch_predictions_are_consistent

 | def ensure_batch_predictions_are_consistent(
 |     self,
 |     keys_to_ignore: Iterable[str] = ()
 | )

Ensures that the model performs the same on a batch of instances as on individual instances. Ignores metrics matching the regexp .*loss.* and those specified explicitly.

Parameters

  • keys_to_ignore : Iterable[str], optional (default = ())
    Names of metrics that should not be taken into account, e.g. "batch_weight".
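
A minimal sketch of a consistency test; "batch_weight" is just an example of a metric key that may legitimately differ between batched and single-instance forward passes:

    def test_batch_predictions_are_consistent(self):
        self.ensure_batch_predictions_are_consistent(keys_to_ignore=["batch_weight"])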