A full Docker Compose setup for deep learning based on Deepo, with separate container services for JupyterLab, TensorBoard, and nginx-proxy, plus persistent volumes.

Deepo is great, but it is only a single Docker image. DLDC goes a step further and uses Docker Compose to create a turnkey development environment as a full system.
DLDC is semi-opinionated:

- An Nvidia GPU is required.
- There are no customization options for Deepo. The DLDC Docker image is based on the `all` configuration of Deepo (i.e., `ufoym/deepo:all-py36-jupyter`), so it includes everything.
- JupyterLab is used, but classic Jupyter notebooks can easily be run from JupyterLab.
- `docker-compose` is required (`docker-compose` can now utilize GPUs with `nvidia-docker2`).
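For context, a GPU-enabled Compose service using `nvidia-docker2` typically sets `runtime: nvidia` (supported in Compose file format 2.3+). This is a hypothetical sketch, not DLDC's actual `docker-compose.yml`:

```yaml
# Hypothetical sketch of a GPU-enabled Compose service (not DLDC's actual file).
version: "2.3"
services:
  jupyterlab:
    image: ufoym/deepo:all-py36-jupyter
    runtime: nvidia          # requires nvidia-docker2 on the host
    ports:
      - "8888:8888"
    volumes:
      - ./shared:/shared
```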
```shell
$ git clone https://github.com/eriknomitch/dldc.git
$ cd dldc
$ echo "JUPYTER_TOKEN='<your-token>'" >> .env
```
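If you'd rather not invent a token by hand, one option (an illustrative snippet, not part of DLDC) is to generate a random hex string and append it to `.env`:

```shell
# Illustrative: generate a random 32-character hex token and store it in .env.
# Run this from the dldc repository root (a temp dir is used here to stay self-contained).
cd "$(mktemp -d)"
TOKEN=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "JUPYTER_TOKEN='${TOKEN}'" >> .env
cat .env
```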
This is the base host you'll use so that each service gets its own subdomain (e.g., `http://jupyter.<EXTERNAL_HOST>`, `http://tensorboard.<EXTERNAL_HOST>`). If you only plan on using DLDC locally, you can skip this step.

```shell
$ echo "EXTERNAL_HOST='your-external-host.com'" >> .env
```
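For background, nginx-proxy routes requests to containers based on each container's `VIRTUAL_HOST` environment variable. A hypothetical sketch of how the subdomains could be wired up in Compose (service names assumed, not DLDC's actual configuration):

```yaml
# Hypothetical sketch of nginx-proxy subdomain routing (names assumed).
services:
  jupyterlab:
    environment:
      # nginx-proxy forwards requests for this hostname to the container
      - VIRTUAL_HOST=jupyter.${EXTERNAL_HOST}
  tensorboard:
    environment:
      - VIRTUAL_HOST=tensorboard.${EXTERNAL_HOST}
```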
This will build the Docker image and start the `docker-compose` services, fetching anything that isn't already fetched.

```shell
$ ./dldc
```

After launching, the `docker-compose` services will be running:
- JupyterLab at http://localhost:8888 or `http://jupyter.<EXTERNAL_HOST>`. Use the `JUPYTER_TOKEN` you set in `.env` to log in.
- TensorBoard at http://localhost:6006 or `http://tensorboard.<EXTERNAL_HOST>`
You may add various types of packages to the files in `config/packages/` to have DLDC build them into the Docker image. This is especially useful because of the ephemeral nature of Docker containers: once added here, your package will be available in any `dldc` Docker container you run.

Important: Add only the name of the package to the corresponding file (e.g., add a line containing `cowsay` to `config/packages/apt` if you want to install the `cowsay` apt package).
```
config/
  packages/
    apt        # Package names
    jupyter    # Extension names
    jupyterlab # Extension names
    lua        # Package names
    pip        # Package names
```
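For example, a package name can be appended from the command line (a plain illustration of the one-name-per-line file format; `cowsay` is just an example package):

```shell
# Illustrative: register the apt package "cowsay" with DLDC.
# Run this from the dldc repository root (a temp dir is used here to stay self-contained).
cd "$(mktemp -d)"
mkdir -p config/packages
echo "cowsay" >> config/packages/apt
cat config/packages/apt
```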
After adding packages, re-run `./dldc`:

```shell
$ ./dldc
```
If the `dldc` image is already built and up to date (i.e., nothing has changed in your personal configuration), the build will use the cached version.
```shell
$ ./dldc shell
```
The local `./shared` directory will have been created and mounted into the containers at `/shared`. The Jupyter notebooks root path is `./shared` locally and `/shared` in the container.
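The volume behavior can be demonstrated from the host side (illustrative only; the file name is arbitrary):

```shell
# Illustrative: anything written under ./shared on the host persists between
# runs and is visible at /shared inside the containers.
# (A temp dir is used here to stay self-contained.)
cd "$(mktemp -d)"
mkdir -p shared
echo "persisted between runs" > shared/notes.txt
cat shared/notes.txt
```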
Simply run `./dldc` again to start the `docker-compose` services. All data in `./shared` is preserved between instances.
[comment]: # sudo -i -u erik zsh -c "cd ~/.repositories/dldc && ./dldc up-detached"