Officially supported AllenNLP models.
allennlp-models is available on PyPI. To install with pip, just run

```bash
pip install --pre allennlp-models
```
Note that the
allennlp-models package is tied to the
allennlp core package. Therefore when you install the models package you will get the corresponding version of
allennlp (if you haven't already installed
allennlp). For example,
```bash
pip install allennlp-models==1.0.0rc3
pip freeze | grep allennlp
# > allennlp==1.0.0rc3
# > allennlp-models==1.0.0rc3
```
If you intend to install the models package from source, then you probably also want to install
allennlp from source.
Once you have allennlp installed, run the following within the same Python environment:

```bash
git clone https://github.com/allenai/allennlp-models.git
cd allennlp-models
ALLENNLP_VERSION_OVERRIDE='allennlp' pip install -e .
pip install -r dev-requirements.txt
```
The ALLENNLP_VERSION_OVERRIDE environment variable ensures that the allennlp dependency is unpinned so that your local install of allennlp will be sufficient. If, however, you haven't installed allennlp yet and don't want to manage a local install, just omit this environment variable and allennlp will be installed from the main branch on GitHub.
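To confirm which copy of allennlp your environment actually resolves to after an editable install, a check like the following can help. This is a minimal sketch; module_location is a hypothetical helper, not part of allennlp:

```python
# Sketch of a sanity check: find the file Python would load a module from.
# For an editable install, the path points into your local clone rather
# than into site-packages. `module_location` is a hypothetical helper.
import importlib.util
from typing import Optional

def module_location(name: str) -> Optional[str]:
    """Return the file a module would be loaded from, or None if not installed."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec and spec.origin else None

# Using a stdlib module as a stand-in here; with allennlp installed you
# would call module_location("allennlp") instead.
print(module_location("importlib"))
```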
allennlp and allennlp-models are developed and tested side-by-side, so they should be kept up-to-date with each other. If you look at the GitHub Actions workflow for allennlp-models, it's always tested against the main branch of allennlp. Similarly, allennlp is always tested against the main branch of allennlp-models.
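Because the two packages are released in lock-step, one quick sanity check is to compare the installed versions. Here's a minimal sketch, assuming both packages are installed; versions_match and allennlp_pair_ok are hypothetical helpers, not part of either package:

```python
# Minimal sketch of a version-pairing check. Assumption: allennlp and
# allennlp-models carry matching release numbers, as in the pip example above.
from importlib.metadata import version, PackageNotFoundError

def versions_match(core: str, models: str) -> bool:
    """True when the two release strings are identical."""
    return core == models

def allennlp_pair_ok() -> bool:
    """True if both packages are installed with matching versions."""
    try:
        return versions_match(version("allennlp"), version("allennlp-models"))
    except PackageNotFoundError:
        return False

print(versions_match("1.0.0rc3", "1.0.0rc3"))  # the matching pair from the example above
```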
Docker provides a virtual machine with everything set up to run AllenNLP, whether you will leverage a GPU or just run on a CPU. Docker provides more isolation and consistency, and also makes it easy to distribute your environment to a compute cluster.
If you have GPUs available, you also need to install the nvidia-docker runtime.
To build an image locally from a specific release, run

```bash
docker build \
    --build-arg RELEASE=1.2.2 \
    --build-arg CUDA=10.2 \
    -t allennlp/models - < Dockerfile.release
```
Just replace the RELEASE and CUDA build args with what you need. Currently, only CUDA 10.2 and 11.0 are officially supported.
Alternatively, you can build against specific commits of allennlp and allennlp-models with

```bash
docker build \
    --build-arg ALLENNLP_COMMIT=d823a2591e94912a6315e429d0fe0ee2efb4b3ee \
    --build-arg ALLENNLP_MODELS_COMMIT=01bc777e0d89387f03037d398cd967390716daf1 \
    --build-arg CUDA=10.2 \
    -t allennlp/models - < Dockerfile.commit
```
Just change the ALLENNLP_COMMIT / ALLENNLP_MODELS_COMMIT and CUDA build args to the desired commit SHAs and CUDA versions, respectively.
Once you've built your image, you can run it like this:
```bash
mkdir -p $HOME/.allennlp/
docker run --rm --gpus all -v $HOME/.allennlp:/root/.allennlp allennlp/models
```
Note that the --gpus all flag is only valid if you've installed the nvidia-docker runtime.