All notable changes to this project will be documented in this file.
- Made the training configs compatible with the tensorboard logging changes in the main repo
- Dataset readers, models, metrics, and training configs for VQAv2, GQA, and Visual Entailment
- Fixed `training_configs/rc/dialog_qa.jsonnet` to work with the new data loading API.
- Fixed the potential for a deadlock when training the `TransformerQA` model on multiple GPUs when nodes receive different-sized batches.
- Fixed BART. The implementation had some major bugs that caused poor performance during prediction.
- Moved the `ModelCard` and `TaskCard` abstractions out of the models repository.
- The `master` branch was renamed to `main`.
- The `SquadEmAndF1` metric can now also accept a batch of predictions and corresponding answers (a list of each) instead of a single one.
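  A minimal sketch of the batched call, assuming the import path and the `(exact_match, f1)` tuple returned by `get_metric` match the repository layout at the time:

  ```python
  from allennlp_models.rc.metrics import SquadEmAndF1

  metric = SquadEmAndF1()

  # Single prediction with its list of acceptable gold answers (old behavior).
  metric("Denver Broncos", ["Denver Broncos", "Broncos"])

  # A batch of predictions, with a list of gold answers for each (new behavior).
  metric(
      ["Denver Broncos", "Santa Clara"],
      [["Denver Broncos", "Broncos"], ["Santa Clara, California", "Santa Clara"]],
  )

  exact_match, f1 = metric.get_metric(reset=True)
  ```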
- Fixed an index bug in BART prediction.
- Fixed `SemanticRoleLabelerPredictor.tokens_to_instances` so that it treats auxiliary verbs as verbs when the language is English.
- Added link to source code to API docs.
- Information updates for the remaining model cards (including those in the demo, which are not in the repository).
- Fixed `Dockerfile.commit` to work with different CUDA versions.
- Changes required for the `transformers` dependency update to version 4.0.1.
- Added missing folder for
- Changed AllenNLP dependency for releases to allow for a range of versions, instead of being pinned to an exact version.
- There will now be multiple Docker images pushed to Docker Hub for releases, each corresponding to a different supported CUDA version (currently just 10.2 and 11.0).
- `ValueError` error message in
- Better check for start and end symbols in `Seq2SeqDatasetReader` that doesn't fail for BPE-based tokenizers.
- Information updates for all model cards.
- Added the `TaskCard` class and task cards for common tasks.
- Added a test for the interpret functionality.
- Added more information to model cards for pair classification models.
- Fixed TransformerElmo config to work with the new AllenNLP
- Pinned the version of torch more tightly to make AMP work.
- Fixed the somewhat fragile BiDAF test.
- Updated docstring for Transformer MC.
- Added more information to model cards for multiple choice models.
- Fixed many training configs to work out of the box.
- Fixed a minor bug in `MaskedLanguageModel` where getting token IDs used hard-coded assumptions (which could be wrong) instead of our standard utility function.
- Added dataset reader support for SQuAD 2.0 with both the `squad` and `transformer_squad` dataset readers.
- Updated the SQuAD v1.1 metric to work with SQuAD 2.0 as well.
- Updated the `TransformerQA` model to work for SQuAD 2.0.
- Added official support for Python 3.8.
- Added a JSON template for model cards.
- Added `training_config` as a field in model cards.
- Added a `BeamSearchGenerator` registrable class which can be provided to a `NextTokenLM` model to utilize beam search for predicting a sequence of tokens, instead of a single next token. `BeamSearchGenerator` is an abstract class, so a concrete registered implementation needs to be used. One implementation is provided so far: `TransformerBeamSearchGenerator`, registered as `transformer`, which will work with any `NextTokenLM` that uses a `PretrainedTransformerEmbedder`.
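  A hedged sketch of how this might be wired into a model config; apart from the registered name `transformer`, the key names (`beam_search_generator`, `beam_search`) are assumptions about the constructor arguments, not confirmed API:

  ```python
  # Fragment of a NextTokenLM model config expressed as a plain dict.
  model_config = {
      "type": "next_token_lm",
      "beam_search_generator": {
          "type": "transformer",  # the registered implementation noted above
          "beam_search": {"beam_size": 5, "max_steps": 10},
      },
  }
  ```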
- Added an
- The `rc-transformer-qa` pretrained model is now an updated version trained on SQuAD v2.0.
- The `skip_invalid_examples` parameter in the SQuAD dataset readers has been deprecated. Please use `skip_impossible_questions` instead.
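  A minimal sketch of the rename, assuming the import path and that the `squad`-registered reader accepts the flag directly:

  ```python
  from allennlp_models.rc.dataset_readers import SquadReader

  # Deprecated:
  # reader = SquadReader(skip_invalid_examples=True)

  # Preferred:
  reader = SquadReader(skip_impossible_questions=True)
  ```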
- Fixed BART for the latest `transformers` version.
- Fixed BiDAF predictor and BiDAF predictor tests.
- Fixed a bug with `Seq2SeqDatasetReader` that would cause an exception when the desired behavior is to not add start or end symbols to either the source or the target and the default `start_symbol` and `end_symbol` are not part of the tokenizer's vocabulary.
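  A sketch of the configuration that used to trigger the exception, assuming the reader's `*_add_start_token` / `*_add_end_token` flags and import path:

  ```python
  from allennlp.data.tokenizers import WhitespaceTokenizer
  from allennlp_models.generation.dataset_readers import Seq2SeqDatasetReader

  # No start or end symbols on either side; previously this could raise when
  # the default start_symbol/end_symbol were missing from the tokenizer's
  # vocabulary.
  reader = Seq2SeqDatasetReader(
      source_tokenizer=WhitespaceTokenizer(),
      source_add_start_token=False,
      source_add_end_token=False,
      target_add_start_token=False,
      target_add_end_token=False,
  )
  ```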
- Fixed `LanguageModelTokenEmbedder` to allow multiple token embedders, but only use the first one with a non-empty type.
- Fixed evaluation of metrics when using the distributed setting.
- Fixed a bug introduced in 1.0 where the SRL model did not reproduce the original result.
- Added regression tests for training configs that run on a scheduled workflow.
- Added a test for the pretrained sentiment analysis model.
- Added a way for questions from the Quora dataset to be concatenated like the sequences in the SNLI dataset.
- Updated `GraphParser.get_metrics` so that it expects a dict from `F1Measure.get_metric`.
- `SimpleSeq2Seq` models now work with AMP.
- Made the SST reader a little more strict in the kinds of input it accepts.
- Updated to PyTorch 1.6.
- Updated the RoBERTa SST config to make proper use of the CLS token.
- Updated RoBERTa SNLI and MNLI pretrained models for the latest `transformers` version.
- Added BART model.
- Added `ModelCard` and related classes. Added model cards for all the pretrained models.
- Added a field `registered_predictor_name` to `ModelCard`.
- Added a method `load_predictor` to `allennlp_models.pretrained`.
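  A minimal usage sketch, using the `rc-transformer-qa` model ID mentioned elsewhere in this changelog:

  ```python
  from allennlp_models.pretrained import load_predictor

  # Downloads (if necessary) and loads the pretrained model's predictor.
  predictor = load_predictor("rc-transformer-qa")
  ```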
- Added support for a multi-layer decoder in the simple seq2seq model.
- Updated the BERT SRL model to be compatible with the new huggingface tokenizers.
- The `CopyNetSeq2Seq` model now works with pretrained transformers.
- A bug with `NextTokenLM` that caused simple gradient interpreters to fail.
- A bug in `bimpm` that used the old version of
- The fine-grained NER transformer model did not survive an upgrade of the transformers library, but it is now fixed.
- Fixed many minor formatting issues in docstrings. Docs are now published at https://docs.allennlp.org/models/.
- `CopyNetDatasetReader` no longer automatically adds `START_TOKEN` and `END_TOKEN` to the tokenized source. If you want these in the tokenized source, it's up to the source tokenizer.
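  If you still want the symbols in the tokenized source, one way (a sketch, assuming `SpacyTokenizer`'s `start_tokens`/`end_tokens` arguments) is to have the tokenizer add them:

  ```python
  from allennlp.common.util import END_SYMBOL, START_SYMBOL
  from allennlp.data.tokenizers import SpacyTokenizer

  # The tokenizer, not the reader, now owns the special symbols.
  source_tokenizer = SpacyTokenizer(
      start_tokens=[START_SYMBOL],
      end_tokens=[END_SYMBOL],
  )
  ```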
- Added two models for fine-grained NER.
- Added a category for multiple choice models, including a few reference implementations.
- Implemented manual distributed sharding for SNLI dataset reader.
No additional noteworthy changes since rc6.
- Removed deprecated
Instances with new
- A bug where pretrained sentence taggers would fail to be initialized because some of the models were not imported.
- A bug in some RC models that would cause mixed precision training to crash when using NVIDIA apex.
- Predictor names were inconsistently switching between dashes and underscores. Now they all use underscores.
- Added an option to `SemanticDependenciesDatasetReader` to not skip instances that have no arcs, for validation data.
- Added default predictors to several models.
- Added sentiment analysis models to `pretrained.py`.
- Added NLI models to `pretrained.py`.
- Moved the models into categories based on their format.
- Made the `transformer_qa` predictor accept JSON input with the keys "question" and "passage" to be consistent with the `reading_comprehension` predictor.
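  A minimal sketch of the consistent input format; the archive path is a placeholder:

  ```python
  from allennlp.predictors import Predictor

  # Placeholder path to a trained TransformerQA model archive.
  predictor = Predictor.from_path("/path/to/transformer-qa.tar.gz")

  output = predictor.predict_json({
      "question": "Who stars in The Matrix?",
      "passage": "The Matrix is a 1999 film starring Keanu Reeves.",
  })
  ```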
- Added the `conllu` dependency (previously part of `allennlp`).
We first introduced this CHANGELOG after release v1.0.0rc4, so please refer to the GitHub release notes for this and earlier releases.