├── automated_build
│   ├── README.md
│   ├── Dockerfile.gpu3
│   ├── Dockerfile.gpu2
│   └── Dockerfile.gpu1
├── run_jupyter.sh
├── jupyter_notebook_config.py
├── Dockerfile.cpu
├── Dockerfile.gpu
└── README.md
/automated_build/README.md:
--------------------------------------------------------------------------------
1 | ### Please do not use these `Dockerfiles` directly
2 | These files are intended for Docker Hub's automated build process. The automated build for `Dockerfile.gpu` fails due to timeout restrictions (see [https://github.com/saiprashanths/dl-docker/issues/2](https://github.com/saiprashanths/dl-docker/issues/2)). To overcome this, `Dockerfile.gpu` is split into three parts: `Dockerfile.gpu1`, `Dockerfile.gpu2` and `Dockerfile.gpu3`. These files are used to build the final `dl-docker:gpu` image that you can pull directly from Docker Hub using `docker pull floydhub/dl-docker:gpu`.
3 |
--------------------------------------------------------------------------------
/run_jupyter.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | #     http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | # ============================================================================== 16 | 17 | 18 | jupyter notebook "$@" 19 | -------------------------------------------------------------------------------- /jupyter_notebook_config.py: -------------------------------------------------------------------------------- 1 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | # ============================================================================== 15 | import os 16 | from IPython.lib import passwd 17 | 18 | c.NotebookApp.ip = '*' 19 | c.NotebookApp.port = 8888 20 | c.NotebookApp.open_browser = False 21 | c.MultiKernelManager.default_kernel_name = 'python2' 22 | 23 | # sets a password if PASSWORD is set in the environment 24 | if 'PASSWORD' in os.environ: 25 | c.NotebookApp.password = passwd(os.environ['PASSWORD']) 26 | del os.environ['PASSWORD'] 27 | -------------------------------------------------------------------------------- /automated_build/Dockerfile.gpu3: -------------------------------------------------------------------------------- 1 | FROM floydhub/dl-docker:gpu_temp2 2 | 3 | MAINTAINER Sai Soundararaj 4 | 5 | ARG THEANO_VERSION=rel-0.8.2 6 | ARG TENSORFLOW_VERSION=0.8.0 7 | ARG TENSORFLOW_ARCH=gpu 8 | ARG KERAS_VERSION=1.0.3 9 | ARG LASAGNE_VERSION=v0.1 10 | ARG TORCH_VERSION=latest 11 | ARG CAFFE_VERSION=master 12 | 13 | 14 | # Install the latest 
versions of nn, cutorch, cunn, cuDNN bindings and iTorch 15 | RUN luarocks install nn > /dev/null && \ 16 | luarocks install cutorch > /dev/null && \ 17 | luarocks install cunn > /dev/null&& \ 18 | \ 19 | cd /root && git clone https://github.com/soumith/cudnn.torch.git && cd cudnn.torch && \ 20 | git checkout R4 && \ 21 | luarocks make > /dev/null && \ 22 | \ 23 | cd /root && git clone https://github.com/facebook/iTorch.git && \ 24 | cd iTorch && \ 25 | luarocks make > /dev/null 26 | 27 | 28 | # Set up notebook config 29 | COPY jupyter_notebook_config.py /root/.jupyter/ 30 | 31 | # Jupyter has issues with being run directly: https://github.com/ipython/ipython/issues/7062 32 | COPY run_jupyter.sh /root/ 33 | 34 | # Expose Ports for TensorBoard (6006), Ipython (8888) 35 | EXPOSE 6006 8888 36 | 37 | WORKDIR "/root" 38 | CMD ["/bin/bash"] 39 | -------------------------------------------------------------------------------- /automated_build/Dockerfile.gpu2: -------------------------------------------------------------------------------- 1 | FROM floydhub/dl-docker:gpu_temp 2 | 3 | MAINTAINER Sai Soundararaj 4 | 5 | ARG THEANO_VERSION=rel-0.8.2 6 | ARG TENSORFLOW_VERSION=0.8.0 7 | ARG TENSORFLOW_ARCH=gpu 8 | ARG KERAS_VERSION=1.0.3 9 | ARG LASAGNE_VERSION=v0.1 10 | ARG TORCH_VERSION=latest 11 | ARG CAFFE_VERSION=master 12 | 13 | # Install Theano and set up Theano config (.theanorc) for CUDA and OpenBLAS 14 | RUN pip --no-cache-dir install git+git://github.com/Theano/Theano.git@${THEANO_VERSION} && \ 15 | \ 16 | echo "[global]\ndevice=gpu\nfloatX=float32\noptimizer_including=cudnn\nmode=FAST_RUN \ 17 | \n[lib]\ncnmem=0.95 \ 18 | \n[nvcc]\nfastmath=True \ 19 | \n[blas]\nldflag = -L/usr/lib/openblas-base -lopenblas \ 20 | \n[DebugMode]\ncheck_finite=1" \ 21 | > /root/.theanorc 22 | 23 | 24 | # Install Keras 25 | RUN pip --no-cache-dir install git+git://github.com/fchollet/keras.git@${KERAS_VERSION} 26 | 27 | 28 | # Install Lasagne 29 | RUN pip --no-cache-dir install 
git+git://github.com/Lasagne/Lasagne.git@${LASAGNE_VERSION} 30 | 31 | 32 | # Install Torch 33 | RUN git clone https://github.com/torch/distro.git /root/torch --recursive && \ 34 | cd /root/torch && \ 35 | bash install-deps > /dev/null && \ 36 | yes no | ./install.sh > /dev/null 37 | 38 | # Export the LUA evironment variables manually 39 | ENV LUA_PATH='/root/.luarocks/share/lua/5.1/?.lua;/root/.luarocks/share/lua/5.1/?/init.lua;/root/torch/install/share/lua/5.1/?.lua;/root/torch/install/share/lua/5.1/?/init.lua;./?.lua;/root/torch/install/share/luajit-2.1.0-beta1/?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua' \ 40 | LUA_CPATH='/root/.luarocks/lib/lua/5.1/?.so;/root/torch/install/lib/lua/5.1/?.so;./?.so;/usr/local/lib/lua/5.1/?.so;/usr/local/lib/lua/5.1/loadall.so' \ 41 | PATH=/root/torch/install/bin:$PATH \ 42 | LD_LIBRARY_PATH=/root/torch/install/lib:$LD_LIBRARY_PATH \ 43 | DYLD_LIBRARY_PATH=/root/torch/install/lib:$DYLD_LIBRARY_PATH 44 | ENV LUA_CPATH='/root/torch/install/lib/?.so;'$LUA_CPATH 45 | -------------------------------------------------------------------------------- /automated_build/Dockerfile.gpu1: -------------------------------------------------------------------------------- 1 | FROM nvidia/cuda:7.5-cudnn4-devel 2 | 3 | MAINTAINER Sai Soundararaj 4 | 5 | ARG THEANO_VERSION=rel-0.8.2 6 | ARG TENSORFLOW_VERSION=0.8.0 7 | ARG TENSORFLOW_ARCH=gpu 8 | ARG KERAS_VERSION=1.0.3 9 | ARG LASAGNE_VERSION=v0.1 10 | ARG TORCH_VERSION=latest 11 | ARG CAFFE_VERSION=master 12 | 13 | #RUN echo -e "\n**********************\nNVIDIA Driver Version\n**********************\n" && \ 14 | # cat /proc/driver/nvidia/version && \ 15 | # echo -e "\n**********************\nCUDA Version\n**********************\n" && \ 16 | # nvcc -V && \ 17 | # echo -e "\n\nBuilding your Deep Learning Docker Image...\n" 18 | 19 | # Install some dependencies 20 | RUN apt-get update && apt-get install -y \ 21 | bc \ 22 | build-essential \ 23 | cmake \ 24 | curl \ 25 | 
g++ \ 26 | gfortran \ 27 | git \ 28 | libffi-dev \ 29 | libfreetype6-dev \ 30 | libhdf5-dev \ 31 | libjpeg-dev \ 32 | liblcms2-dev \ 33 | libopenblas-dev \ 34 | liblapack-dev \ 35 | libopenjpeg2 \ 36 | libpng12-dev \ 37 | libssl-dev \ 38 | libtiff5-dev \ 39 | libwebp-dev \ 40 | libzmq3-dev \ 41 | nano \ 42 | pkg-config \ 43 | python-dev \ 44 | software-properties-common \ 45 | unzip \ 46 | vim \ 47 | wget \ 48 | zlib1g-dev \ 49 | && \ 50 | apt-get clean && \ 51 | apt-get autoremove && \ 52 | rm -rf /var/lib/apt/lists/* && \ 53 | # Link BLAS library to use OpenBLAS using the alternatives mechanism (https://www.scipy.org/scipylib/building/linux.html#debian-ubuntu) 54 | update-alternatives --set libblas.so.3 /usr/lib/openblas-base/libblas.so.3 55 | 56 | # Install pip 57 | RUN curl -O https://bootstrap.pypa.io/get-pip.py && \ 58 | python get-pip.py && \ 59 | rm get-pip.py 60 | 61 | # Add SNI support to Python 62 | RUN pip --no-cache-dir install \ 63 | pyopenssl \ 64 | ndg-httpsclient \ 65 | pyasn1 66 | 67 | # Install useful Python packages using apt-get to avoid version incompatibilities with Tensorflow binary 68 | # especially numpy, scipy, skimage and sklearn (see https://github.com/tensorflow/tensorflow/issues/2034) 69 | RUN apt-get update && apt-get install -y \ 70 | python-numpy \ 71 | python-scipy \ 72 | python-nose \ 73 | python-h5py \ 74 | python-skimage \ 75 | python-matplotlib \ 76 | python-pandas \ 77 | python-sklearn \ 78 | python-sympy \ 79 | && \ 80 | apt-get clean && \ 81 | apt-get autoremove && \ 82 | rm -rf /var/lib/apt/lists/* 83 | 84 | # Install other useful Python packages using pip 85 | RUN pip --no-cache-dir install --upgrade ipython && \ 86 | pip --no-cache-dir install \ 87 | Cython \ 88 | ipykernel \ 89 | jupyter \ 90 | path.py \ 91 | Pillow \ 92 | pygments \ 93 | six \ 94 | sphinx \ 95 | wheel \ 96 | zmq \ 97 | && \ 98 | python -m ipykernel.kernelspec 99 | 100 | 101 | # Install TensorFlow 102 | RUN pip --no-cache-dir install \ 103 | 
https://storage.googleapis.com/tensorflow/linux/${TENSORFLOW_ARCH}/tensorflow-${TENSORFLOW_VERSION}-cp27-none-linux_x86_64.whl 104 | 105 | 106 | # Install dependencies for Caffe 107 | RUN apt-get update && apt-get install -y \ 108 | libboost-all-dev \ 109 | libgflags-dev \ 110 | libgoogle-glog-dev \ 111 | libhdf5-serial-dev \ 112 | libleveldb-dev \ 113 | liblmdb-dev \ 114 | libopencv-dev \ 115 | libprotobuf-dev \ 116 | libsnappy-dev \ 117 | protobuf-compiler \ 118 | && \ 119 | apt-get clean && \ 120 | apt-get autoremove && \ 121 | rm -rf /var/lib/apt/lists/* 122 | 123 | # Install Caffe 124 | RUN git clone -b ${CAFFE_VERSION} --depth 1 https://github.com/BVLC/caffe.git /root/caffe && \ 125 | cd /root/caffe && \ 126 | cat python/requirements.txt | xargs -n1 pip install && \ 127 | mkdir build && cd build && \ 128 | cmake -DUSE_CUDNN=1 -DBLAS=Open .. && \ 129 | make -j"$(nproc)" all && \ 130 | make install 131 | 132 | # Set up Caffe environment variables 133 | ENV CAFFE_ROOT=/root/caffe 134 | ENV PYCAFFE_ROOT=$CAFFE_ROOT/python 135 | ENV PYTHONPATH=$PYCAFFE_ROOT:$PYTHONPATH \ 136 | PATH=$CAFFE_ROOT/build/tools:$PYCAFFE_ROOT:$PATH 137 | 138 | RUN echo "$CAFFE_ROOT/build/lib" >> /etc/ld.so.conf.d/caffe.conf && ldconfig 139 | -------------------------------------------------------------------------------- /Dockerfile.cpu: -------------------------------------------------------------------------------- 1 | FROM ubuntu:14.04 2 | 3 | MAINTAINER Sai Soundararaj 4 | 5 | ARG THEANO_VERSION=rel-0.8.2 6 | ARG TENSORFLOW_VERSION=0.12.1 7 | ARG TENSORFLOW_ARCH=cpu 8 | ARG KERAS_VERSION=1.2.0 9 | ARG LASAGNE_VERSION=v0.1 10 | ARG TORCH_VERSION=latest 11 | ARG CAFFE_VERSION=master 12 | 13 | # Install some dependencies 14 | RUN apt-get update && apt-get install -y \ 15 | bc \ 16 | build-essential \ 17 | cmake \ 18 | curl \ 19 | g++ \ 20 | gfortran \ 21 | git \ 22 | libffi-dev \ 23 | libfreetype6-dev \ 24 | libhdf5-dev \ 25 | libjpeg-dev \ 26 | liblcms2-dev \ 27 | libopenblas-dev \ 28 
| liblapack-dev \ 29 | libopenjpeg2 \ 30 | libpng12-dev \ 31 | libssl-dev \ 32 | libtiff5-dev \ 33 | libwebp-dev \ 34 | libzmq3-dev \ 35 | nano \ 36 | pkg-config \ 37 | python-dev \ 38 | software-properties-common \ 39 | unzip \ 40 | vim \ 41 | wget \ 42 | zlib1g-dev \ 43 | qt5-default \ 44 | libvtk6-dev \ 45 | zlib1g-dev \ 46 | libjpeg-dev \ 47 | libwebp-dev \ 48 | libpng-dev \ 49 | libtiff5-dev \ 50 | libjasper-dev \ 51 | libopenexr-dev \ 52 | libgdal-dev \ 53 | libdc1394-22-dev \ 54 | libavcodec-dev \ 55 | libavformat-dev \ 56 | libswscale-dev \ 57 | libtheora-dev \ 58 | libvorbis-dev \ 59 | libxvidcore-dev \ 60 | libx264-dev \ 61 | yasm \ 62 | libopencore-amrnb-dev \ 63 | libopencore-amrwb-dev \ 64 | libv4l-dev \ 65 | libxine2-dev \ 66 | libtbb-dev \ 67 | libeigen3-dev \ 68 | python-dev \ 69 | python-tk \ 70 | python-numpy \ 71 | python3-dev \ 72 | python3-tk \ 73 | python3-numpy \ 74 | ant \ 75 | default-jdk \ 76 | doxygen \ 77 | && \ 78 | apt-get clean && \ 79 | apt-get autoremove && \ 80 | rm -rf /var/lib/apt/lists/* && \ 81 | # Link BLAS library to use OpenBLAS using the alternatives mechanism (https://www.scipy.org/scipylib/building/linux.html#debian-ubuntu) 82 | update-alternatives --set libblas.so.3 /usr/lib/openblas-base/libblas.so.3 83 | 84 | # Install pip 85 | RUN curl -O https://bootstrap.pypa.io/get-pip.py && \ 86 | python get-pip.py && \ 87 | rm get-pip.py 88 | 89 | # Add SNI support to Python 90 | RUN pip --no-cache-dir install \ 91 | pyopenssl \ 92 | ndg-httpsclient \ 93 | pyasn1 94 | 95 | # Install useful Python packages using apt-get to avoid version incompatibilities with Tensorflow binary 96 | # especially numpy, scipy, skimage and sklearn (see https://github.com/tensorflow/tensorflow/issues/2034) 97 | RUN apt-get update && apt-get install -y \ 98 | python-numpy \ 99 | python-scipy \ 100 | python-nose \ 101 | python-h5py \ 102 | python-skimage \ 103 | python-matplotlib \ 104 | python-pandas \ 105 | python-sklearn \ 106 | python-sympy \ 107 | 
&& \ 108 | apt-get clean && \ 109 | apt-get autoremove && \ 110 | rm -rf /var/lib/apt/lists/* 111 | 112 | # Install other useful Python packages using pip 113 | RUN pip --no-cache-dir install --upgrade ipython && \ 114 | pip --no-cache-dir install \ 115 | Cython \ 116 | ipykernel \ 117 | jupyter \ 118 | path.py \ 119 | Pillow \ 120 | pygments \ 121 | six \ 122 | sphinx \ 123 | wheel \ 124 | zmq \ 125 | && \ 126 | python -m ipykernel.kernelspec 127 | 128 | 129 | # Install TensorFlow 130 | RUN pip --no-cache-dir install \ 131 | https://storage.googleapis.com/tensorflow/linux/${TENSORFLOW_ARCH}/tensorflow-${TENSORFLOW_VERSION}-cp27-none-linux_x86_64.whl 132 | 133 | 134 | # Install dependencies for Caffe 135 | RUN apt-get update && apt-get install -y \ 136 | libboost-all-dev \ 137 | libgflags-dev \ 138 | libgoogle-glog-dev \ 139 | libhdf5-serial-dev \ 140 | libleveldb-dev \ 141 | liblmdb-dev \ 142 | libopencv-dev \ 143 | libprotobuf-dev \ 144 | libsnappy-dev \ 145 | protobuf-compiler \ 146 | && \ 147 | apt-get clean && \ 148 | apt-get autoremove && \ 149 | rm -rf /var/lib/apt/lists/* 150 | 151 | # Install Caffe 152 | RUN git clone -b ${CAFFE_VERSION} --depth 1 https://github.com/BVLC/caffe.git /root/caffe && \ 153 | cd /root/caffe && \ 154 | cat python/requirements.txt | xargs -n1 pip install && \ 155 | mkdir build && cd build && \ 156 | cmake -DCPU_ONLY=1 -DBLAS=Open .. 
&& \ 157 | make -j"$(nproc)" all && \ 158 | make install 159 | 160 | # Set up Caffe environment variables 161 | ENV CAFFE_ROOT=/root/caffe 162 | ENV PYCAFFE_ROOT=$CAFFE_ROOT/python 163 | ENV PYTHONPATH=$PYCAFFE_ROOT:$PYTHONPATH \ 164 | PATH=$CAFFE_ROOT/build/tools:$PYCAFFE_ROOT:$PATH 165 | 166 | RUN echo "$CAFFE_ROOT/build/lib" >> /etc/ld.so.conf.d/caffe.conf && ldconfig 167 | 168 | 169 | # Install Theano and set up Theano config (.theanorc) OpenBLAS 170 | RUN pip --no-cache-dir install git+git://github.com/Theano/Theano.git@${THEANO_VERSION} && \ 171 | \ 172 | echo "[global]\ndevice=cpu\nfloatX=float32\nmode=FAST_RUN \ 173 | \n[lib]\ncnmem=0.95 \ 174 | \n[nvcc]\nfastmath=True \ 175 | \n[blas]\nldflag = -L/usr/lib/openblas-base -lopenblas \ 176 | \n[DebugMode]\ncheck_finite=1" \ 177 | > /root/.theanorc 178 | 179 | 180 | # Install Keras 181 | RUN pip --no-cache-dir install git+git://github.com/fchollet/keras.git@${KERAS_VERSION} 182 | 183 | 184 | # Install Lasagne 185 | RUN pip --no-cache-dir install git+git://github.com/Lasagne/Lasagne.git@${LASAGNE_VERSION} 186 | 187 | 188 | # Install Torch 189 | RUN git clone https://github.com/torch/distro.git /root/torch --recursive && \ 190 | cd /root/torch && \ 191 | bash install-deps && \ 192 | yes no | ./install.sh 193 | 194 | # Export the LUA evironment variables manually 195 | ENV LUA_PATH='/root/.luarocks/share/lua/5.1/?.lua;/root/.luarocks/share/lua/5.1/?/init.lua;/root/torch/install/share/lua/5.1/?.lua;/root/torch/install/share/lua/5.1/?/init.lua;./?.lua;/root/torch/install/share/luajit-2.1.0-beta1/?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua' \ 196 | LUA_CPATH='/root/.luarocks/lib/lua/5.1/?.so;/root/torch/install/lib/lua/5.1/?.so;./?.so;/usr/local/lib/lua/5.1/?.so;/usr/local/lib/lua/5.1/loadall.so' \ 197 | PATH=/root/torch/install/bin:$PATH \ 198 | LD_LIBRARY_PATH=/root/torch/install/lib:$LD_LIBRARY_PATH \ 199 | DYLD_LIBRARY_PATH=/root/torch/install/lib:$DYLD_LIBRARY_PATH 200 | ENV 
LUA_CPATH='/root/torch/install/lib/?.so;'$LUA_CPATH 201 | 202 | # Install the latest versions of nn, and iTorch 203 | RUN luarocks install nn && \ 204 | luarocks install loadcaffe && \ 205 | \ 206 | cd /root && git clone https://github.com/facebook/iTorch.git && \ 207 | cd iTorch && \ 208 | luarocks make 209 | 210 | # Install OpenCV 211 | RUN git clone --depth 1 https://github.com/opencv/opencv.git /root/opencv && \ 212 | cd /root/opencv && \ 213 | mkdir build && \ 214 | cd build && \ 215 | cmake -DWITH_QT=ON -DWITH_OPENGL=ON -DFORCE_VTK=ON -DWITH_TBB=ON -DWITH_GDAL=ON -DWITH_XINE=ON -DBUILD_EXAMPLES=ON .. && \ 216 | make -j"$(nproc)" && \ 217 | make install && \ 218 | ldconfig && \ 219 | echo 'ln /dev/null /dev/raw1394' >> ~/.bashrc 220 | 221 | 222 | # Set up notebook config 223 | COPY jupyter_notebook_config.py /root/.jupyter/ 224 | 225 | # Jupyter has issues with being run directly: https://github.com/ipython/ipython/issues/7062 226 | COPY run_jupyter.sh /root/ 227 | 228 | # Expose Ports for TensorBoard (6006), Ipython (8888) 229 | EXPOSE 6006 8888 230 | 231 | WORKDIR "/root" 232 | CMD ["/bin/bash"] 233 | -------------------------------------------------------------------------------- /Dockerfile.gpu: -------------------------------------------------------------------------------- 1 | FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu14.04 2 | 3 | MAINTAINER Sai Soundararaj 4 | 5 | ARG THEANO_VERSION=rel-0.8.2 6 | ARG TENSORFLOW_VERSION=0.12.1 7 | ARG TENSORFLOW_ARCH=gpu 8 | ARG KERAS_VERSION=1.2.0 9 | ARG LASAGNE_VERSION=v0.1 10 | ARG TORCH_VERSION=latest 11 | ARG CAFFE_VERSION=master 12 | 13 | #RUN echo -e "\n**********************\nNVIDIA Driver Version\n**********************\n" && \ 14 | # cat /proc/driver/nvidia/version && \ 15 | # echo -e "\n**********************\nCUDA Version\n**********************\n" && \ 16 | # nvcc -V && \ 17 | # echo -e "\n\nBuilding your Deep Learning Docker Image...\n" 18 | 19 | # Install some dependencies 20 | RUN apt-get update && 
apt-get install -y \ 21 | bc \ 22 | build-essential \ 23 | cmake \ 24 | curl \ 25 | g++ \ 26 | gfortran \ 27 | git \ 28 | libffi-dev \ 29 | libfreetype6-dev \ 30 | libhdf5-dev \ 31 | libjpeg-dev \ 32 | liblcms2-dev \ 33 | libopenblas-dev \ 34 | liblapack-dev \ 35 | libopenjpeg2 \ 36 | libpng12-dev \ 37 | libssl-dev \ 38 | libtiff5-dev \ 39 | libwebp-dev \ 40 | libzmq3-dev \ 41 | nano \ 42 | pkg-config \ 43 | python-dev \ 44 | software-properties-common \ 45 | unzip \ 46 | vim \ 47 | wget \ 48 | zlib1g-dev \ 49 | qt5-default \ 50 | libvtk6-dev \ 51 | zlib1g-dev \ 52 | libjpeg-dev \ 53 | libwebp-dev \ 54 | libpng-dev \ 55 | libtiff5-dev \ 56 | libjasper-dev \ 57 | libopenexr-dev \ 58 | libgdal-dev \ 59 | libdc1394-22-dev \ 60 | libavcodec-dev \ 61 | libavformat-dev \ 62 | libswscale-dev \ 63 | libtheora-dev \ 64 | libvorbis-dev \ 65 | libxvidcore-dev \ 66 | libx264-dev \ 67 | yasm \ 68 | libopencore-amrnb-dev \ 69 | libopencore-amrwb-dev \ 70 | libv4l-dev \ 71 | libxine2-dev \ 72 | libtbb-dev \ 73 | libeigen3-dev \ 74 | python-dev \ 75 | python-tk \ 76 | python-numpy \ 77 | python3-dev \ 78 | python3-tk \ 79 | python3-numpy \ 80 | ant \ 81 | default-jdk \ 82 | doxygen \ 83 | && \ 84 | apt-get clean && \ 85 | apt-get autoremove && \ 86 | rm -rf /var/lib/apt/lists/* && \ 87 | # Link BLAS library to use OpenBLAS using the alternatives mechanism (https://www.scipy.org/scipylib/building/linux.html#debian-ubuntu) 88 | update-alternatives --set libblas.so.3 /usr/lib/openblas-base/libblas.so.3 89 | 90 | # Install pip 91 | RUN curl -O https://bootstrap.pypa.io/get-pip.py && \ 92 | python get-pip.py && \ 93 | rm get-pip.py 94 | 95 | # Add SNI support to Python 96 | RUN pip --no-cache-dir install \ 97 | pyopenssl \ 98 | ndg-httpsclient \ 99 | pyasn1 100 | 101 | # Install useful Python packages using apt-get to avoid version incompatibilities with Tensorflow binary 102 | # especially numpy, scipy, skimage and sklearn (see https://github.com/tensorflow/tensorflow/issues/2034) 103 
| RUN apt-get update && apt-get install -y \ 104 | python-numpy \ 105 | python-scipy \ 106 | python-nose \ 107 | python-h5py \ 108 | python-skimage \ 109 | python-matplotlib \ 110 | python-pandas \ 111 | python-sklearn \ 112 | python-sympy \ 113 | && \ 114 | apt-get clean && \ 115 | apt-get autoremove && \ 116 | rm -rf /var/lib/apt/lists/* 117 | 118 | # Install other useful Python packages using pip 119 | RUN pip --no-cache-dir install --upgrade ipython && \ 120 | pip --no-cache-dir install \ 121 | Cython \ 122 | ipykernel \ 123 | jupyter \ 124 | path.py \ 125 | Pillow \ 126 | pygments \ 127 | six \ 128 | sphinx \ 129 | wheel \ 130 | zmq \ 131 | && \ 132 | python -m ipykernel.kernelspec 133 | 134 | 135 | # Install TensorFlow 136 | RUN pip --no-cache-dir install \ 137 | https://storage.googleapis.com/tensorflow/linux/${TENSORFLOW_ARCH}/tensorflow_${TENSORFLOW_ARCH}-${TENSORFLOW_VERSION}-cp27-none-linux_x86_64.whl 138 | 139 | 140 | # Install dependencies for Caffe 141 | RUN apt-get update && apt-get install -y \ 142 | libboost-all-dev \ 143 | libgflags-dev \ 144 | libgoogle-glog-dev \ 145 | libhdf5-serial-dev \ 146 | libleveldb-dev \ 147 | liblmdb-dev \ 148 | libopencv-dev \ 149 | libprotobuf-dev \ 150 | libsnappy-dev \ 151 | protobuf-compiler \ 152 | && \ 153 | apt-get clean && \ 154 | apt-get autoremove && \ 155 | rm -rf /var/lib/apt/lists/* 156 | 157 | # Install Caffe 158 | RUN git clone -b ${CAFFE_VERSION} --depth 1 https://github.com/BVLC/caffe.git /root/caffe && \ 159 | cd /root/caffe && \ 160 | cat python/requirements.txt | xargs -n1 pip install && \ 161 | mkdir build && cd build && \ 162 | cmake -DUSE_CUDNN=1 -DBLAS=Open .. 
&& \ 163 | make -j"$(nproc)" all && \ 164 | make install 165 | 166 | # Set up Caffe environment variables 167 | ENV CAFFE_ROOT=/root/caffe 168 | ENV PYCAFFE_ROOT=$CAFFE_ROOT/python 169 | ENV PYTHONPATH=$PYCAFFE_ROOT:$PYTHONPATH \ 170 | PATH=$CAFFE_ROOT/build/tools:$PYCAFFE_ROOT:$PATH 171 | 172 | RUN echo "$CAFFE_ROOT/build/lib" >> /etc/ld.so.conf.d/caffe.conf && ldconfig 173 | 174 | 175 | # Install Theano and set up Theano config (.theanorc) for CUDA and OpenBLAS 176 | RUN pip --no-cache-dir install git+git://github.com/Theano/Theano.git@${THEANO_VERSION} && \ 177 | \ 178 | echo "[global]\ndevice=gpu\nfloatX=float32\noptimizer_including=cudnn\nmode=FAST_RUN \ 179 | \n[lib]\ncnmem=0.95 \ 180 | \n[nvcc]\nfastmath=True \ 181 | \n[blas]\nldflag = -L/usr/lib/openblas-base -lopenblas \ 182 | \n[DebugMode]\ncheck_finite=1" \ 183 | > /root/.theanorc 184 | 185 | 186 | # Install Keras 187 | RUN pip --no-cache-dir install git+git://github.com/fchollet/keras.git@${KERAS_VERSION} 188 | 189 | 190 | # Install Lasagne 191 | RUN pip --no-cache-dir install git+git://github.com/Lasagne/Lasagne.git@${LASAGNE_VERSION} 192 | 193 | 194 | # Install Torch 195 | RUN git clone https://github.com/torch/distro.git /root/torch --recursive && \ 196 | cd /root/torch && \ 197 | bash install-deps && \ 198 | yes no | ./install.sh 199 | 200 | # Export the LUA evironment variables manually 201 | ENV LUA_PATH='/root/.luarocks/share/lua/5.1/?.lua;/root/.luarocks/share/lua/5.1/?/init.lua;/root/torch/install/share/lua/5.1/?.lua;/root/torch/install/share/lua/5.1/?/init.lua;./?.lua;/root/torch/install/share/luajit-2.1.0-beta1/?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua' \ 202 | LUA_CPATH='/root/.luarocks/lib/lua/5.1/?.so;/root/torch/install/lib/lua/5.1/?.so;./?.so;/usr/local/lib/lua/5.1/?.so;/usr/local/lib/lua/5.1/loadall.so' \ 203 | PATH=/root/torch/install/bin:$PATH \ 204 | LD_LIBRARY_PATH=/root/torch/install/lib:$LD_LIBRARY_PATH \ 205 | 
DYLD_LIBRARY_PATH=/root/torch/install/lib:$DYLD_LIBRARY_PATH 206 | ENV LUA_CPATH='/root/torch/install/lib/?.so;'$LUA_CPATH 207 | 208 | # Install the latest versions of nn, cutorch, cunn, cuDNN bindings and iTorch 209 | RUN luarocks install nn && \ 210 | luarocks install cutorch && \ 211 | luarocks install cunn && \ 212 | luarocks install loadcaffe && \ 213 | \ 214 | cd /root && git clone https://github.com/soumith/cudnn.torch.git && cd cudnn.torch && \ 215 | git checkout R4 && \ 216 | luarocks make && \ 217 | \ 218 | cd /root && git clone https://github.com/facebook/iTorch.git && \ 219 | cd iTorch && \ 220 | luarocks make 221 | 222 | # Install OpenCV 223 | RUN git clone --depth 1 https://github.com/opencv/opencv.git /root/opencv && \ 224 | cd /root/opencv && \ 225 | mkdir build && \ 226 | cd build && \ 227 | cmake -DWITH_QT=ON -DWITH_OPENGL=ON -DFORCE_VTK=ON -DWITH_TBB=ON -DWITH_GDAL=ON -DWITH_XINE=ON -DBUILD_EXAMPLES=ON .. && \ 228 | make -j"$(nproc)" && \ 229 | make install && \ 230 | ldconfig && \ 231 | echo 'ln /dev/null /dev/raw1394' >> ~/.bashrc 232 | 233 | # Set up notebook config 234 | COPY jupyter_notebook_config.py /root/.jupyter/ 235 | 236 | # Jupyter has issues with being run directly: https://github.com/ipython/ipython/issues/7062 237 | COPY run_jupyter.sh /root/ 238 | 239 | # Expose Ports for TensorBoard (6006), Ipython (8888) 240 | EXPOSE 6006 8888 241 | 242 | WORKDIR "/root" 243 | CMD ["/bin/bash"] 244 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | [Website](https://www.floydhub.com) • [Docs](https://docs.floydhub.com) • [Forum](https://forum.floydhub.com) • [Twitter](https://twitter.com/floydhub_) • [We're Hiring](https://angel.co/floydhub) 2 | 3 | [![FloydHub Logo](https://github.com/floydhub/static/blob/master/Group.png)](https://www.floydhub.com) 4 | 5 | ## All-in-one Docker image for Deep Learning 6 | Here are 
Dockerfiles to get you up and running with a fully functional deep learning machine. Each image contains all the popular deep learning frameworks with CPU and GPU support (CUDA and cuDNN included). The CPU version should work on Linux, Windows and OS X. The GPU version, however, will only work on Linux machines. See [OS support](#what-operating-systems-are-supported) for details.
7 |
8 | If you are not familiar with Docker, but would still like an all-in-one solution, start here: [What is Docker?](#what-is-docker). If you know what Docker is, but are wondering why we need one for deep learning, [see this](#why-do-i-need-a-docker).
9 |
10 | ## Update: I've built a quick tool, based on dl-docker, to run your DL project on the cloud with zero setup. You can start running your Tensorflow project on AWS in under 30 seconds using Floyd. See [www.floydhub.com](https://www.floydhub.com). It's free to try out.
11 | ### Happy to take feature requests/feedback and answer questions - mail me at sai@floydhub.com.
12 |
13 | ## Specs
14 | This is what you get out of the box when you create a container with the provided image/Dockerfile:
15 | * Ubuntu 14.04
16 | * [CUDA 8.0](https://developer.nvidia.com/cuda-toolkit) (GPU version only)
17 | * [cuDNN v5](https://developer.nvidia.com/cudnn) (GPU version only)
18 | * [Tensorflow](https://www.tensorflow.org/)
19 | * [Caffe](http://caffe.berkeleyvision.org/)
20 | * [Theano](http://deeplearning.net/software/theano/)
21 | * [Keras](http://keras.io/)
22 | * [Lasagne](http://lasagne.readthedocs.io/en/latest/)
23 | * [Torch](http://torch.ch/) (includes nn, cutorch, cunn and cuDNN bindings)
24 | * [iPython/Jupyter Notebook](http://jupyter.org/) (including iTorch kernel)
25 | * [Numpy](http://www.numpy.org/), [SciPy](https://www.scipy.org/), [Pandas](http://pandas.pydata.org/), [Scikit Learn](http://scikit-learn.org/), [Matplotlib](http://matplotlib.org/)
26 | * [OpenCV](http://opencv.org/)
27 | * A few common libraries used for deep learning
28 |
29 | ## Setup
30 | ### Prerequisites
31 | 1. Install Docker following the installation guide for your platform: [https://docs.docker.com/engine/installation/](https://docs.docker.com/engine/installation/)
32 |
33 | 2. **GPU Version Only**: Install Nvidia drivers on your machine either from [Nvidia](http://www.nvidia.com/Download/index.aspx?lang=en-us) directly or follow the instructions [here](https://github.com/saiprashanths/dl-setup#nvidia-drivers). Note that you _don't_ have to install CUDA or cuDNN. These are included in the Docker container.
34 |
35 | 3. **GPU Version Only**: Install nvidia-docker: [https://github.com/NVIDIA/nvidia-docker](https://github.com/NVIDIA/nvidia-docker), following the instructions [here](https://github.com/NVIDIA/nvidia-docker/wiki/Installation). This installs a replacement for the docker CLI that takes care of setting up the Nvidia host driver environment inside the Docker containers, among a few other things.
36 |
37 | ### Obtaining the Docker image
38 | You have two options for obtaining the Docker image:
39 | #### Option 1: Download the Docker image from Docker Hub
40 | Docker Hub is a cloud-based repository of pre-built images. You can download the image directly from there, which should be _much faster_ than building it locally (a few minutes, depending on your internet speed). Here is the automated build page for `dl-docker`: [https://hub.docker.com/r/floydhub/dl-docker/](https://hub.docker.com/r/floydhub/dl-docker/). The image is automatically built based on the `Dockerfile` in the Github repo.
41 |
42 | **CPU Version**
43 | ```bash
44 | docker pull floydhub/dl-docker:cpu
45 | ```
46 |
47 | **GPU Version**
48 | An automated build for the GPU image is not currently available due to timeout restrictions in Docker's automated build process. I'll look into solving this in the future, but for now you'll have to build the GPU version locally using Option 2 below.
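After pulling, a quick smoke test can confirm the image is usable. This is only a sketch: it assumes the `floydhub/dl-docker:cpu` tag above and that the image's default `python` can import the bundled TensorFlow (as the Specs list indicates).

```shell
# List the locally available dl-docker images and their tags
docker images floydhub/dl-docker

# Start a throwaway container (--rm deletes it on exit) and
# verify that TensorFlow imports inside the image
docker run --rm floydhub/dl-docker:cpu \
    python -c "import tensorflow as tf; print(tf.__version__)"
```

If the second command prints a version string (the `TENSORFLOW_VERSION` build argument in `Dockerfile.cpu` is `0.12.1`), the frameworks inside the image are reachable.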
49 |
50 | #### Option 2: Build the Docker image locally
51 | Alternatively, you can build the images locally. Since the GPU version is not available on Docker Hub at the moment, you'll have to follow this route if you want the GPU version. Note that this will take an hour or two depending on your machine, since it compiles a few libraries from scratch.
52 |
53 | ```bash
54 | git clone https://github.com/saiprashanths/dl-docker.git
55 | cd dl-docker
56 | ```
57 |
58 | **CPU Version**
59 | ```bash
60 | docker build -t floydhub/dl-docker:cpu -f Dockerfile.cpu .
61 | ```
62 |
63 | **GPU Version**
64 | ```bash
65 | docker build -t floydhub/dl-docker:gpu -f Dockerfile.gpu .
66 | ```
67 | This will build a Docker image named `dl-docker`, tagged either `cpu` or `gpu` depending on the tag you specify. Also note that the appropriate Dockerfile (`Dockerfile.cpu` or `Dockerfile.gpu`) has to be used.
68 |
69 | ## Running the Docker image as a Container
70 | Once we've built the image, we have all the frameworks we need installed in it. We can now spin up one or more containers using this image, and you should be ready to [go deeper](http://imgur.com/gallery/BvuWRxq).
71 |
72 | **CPU Version**
73 | ```bash
74 | docker run -it -p 8888:8888 -p 6006:6006 -v /sharedfolder:/root/sharedfolder floydhub/dl-docker:cpu bash
75 | ```
76 |
77 | **GPU Version**
78 | ```bash
79 | nvidia-docker run -it -p 8888:8888 -p 6006:6006 -v /sharedfolder:/root/sharedfolder floydhub/dl-docker:gpu bash
80 | ```
81 | Note the use of `nvidia-docker` rather than just `docker`.
82 |
83 | | Parameter | Explanation |
84 | |----------------|-------------|
85 | |`-it` | This creates an interactive terminal you can use to interact with your container |
86 | |`-p 8888:8888 -p 6006:6006` | This exposes the ports inside the container so they can be accessed from the host. The format is `-p <host-port>:<container-port>`.
The default iPython Notebook runs on port 8888 and Tensorboard on 6006 | 87 | |`-v /sharedfolder:/root/sharedfolder/` | This shares the folder `/sharedfolder` on your host machine to `/root/sharedfolder/` inside your container. Any data written to this folder by the container will be persistent. You can modify this to anything of the format `-v /local/shared/folder:/shared/folder/in/container/`. See [Docker container persistence](#docker-container-persistence) 88 | |`floydhub/dl-docker:cpu` | This the image that you want to run. The format is `image:tag`. In our case, we use the image `dl-docker` and tag `gpu` or `cpu` to spin up the appropriate image | 89 | |`bash` | This provides the default command when the container is started. Even if this was not provided, bash is the default command and just starts a Bash session. You can modify this to be whatever you'd like to be executed when your container starts. For example, you can execute `docker run -it -p 8888:8888 -p 6006:6006 floydhub/dl-docker:cpu jupyter notebook`. This will execute the command `jupyter notebook` and starts your Jupyter Notebook for you when the container starts 90 | 91 | ## Some common scenarios 92 | ### Jupyter Notebooks 93 | The container comes pre-installed with iPython and iTorch Notebooks, and you can use these to work with the deep learning frameworks. If you spin up the docker container with `docker-run -p :` (as shown above in the [instructions](#running-the-docker-image-as-a-container)), you will have access to these ports on your host and can access them at `http://127.0.0.1:`. The default iPython notebook uses port 8888 and Tensorboard uses port 6006. Since we expose both these ports when we run the container, we can access them both from the localhost. 94 | 95 | However, you still need to start the Notebook inside the container to be able to access it from the host. 
You can either do this from the container terminal by executing `jupyter notebook`, or you can pass this command in directly while spinning up your container: `docker run -it -p 8888:8888 -p 6006:6006 floydhub/dl-docker:cpu jupyter notebook`. The Jupyter Notebook has both Python (for TensorFlow, Caffe, Theano, Keras, Lasagne) and iTorch (for Torch) kernels.

Note: If you are running the notebook on Windows, you will first need to determine the IP address of your Docker Machine VM. This command on the Docker command line provides it:
```bash
docker-machine ip default
```
`default` is the name given by default to the Docker Machine VM in which your containers run. Once you have the IP address, run the container as per the [instructions](#running-the-docker-image-as-a-container) and start the Jupyter Notebook as [described above](#jupyter-notebooks). Then open `http://<docker-machine-ip>:<host-port>` in your host's browser to access the notebook.

### Data Sharing
See [Docker container persistence](#docker-container-persistence).
Consider this: you have a script that you've written on your host machine. You want to run it in the container and get the output data (say, a trained model) back onto your host. The way to do this is with a [shared volume](#docker-container-persistence). By passing `-v /sharedfolder/:/root/sharedfolder` to the CLI, we share the folder between the host and the container, with persistence. You could copy your script into the `/sharedfolder` folder on the host, execute it from inside the container (where it appears at `/root/sharedfolder`), and write the resulting data back to the same folder. This data remains accessible even after you kill the container.

## What is Docker?
[Docker](https://www.docker.com/what-docker) itself has a great answer to this question.
Docker is based on the idea that one can package code along with its dependencies into a self-contained unit. In this case, we start with a base Ubuntu 14.04 image, a bare-minimum OS. When we build our initial Docker image using `docker build`, we install all the deep learning frameworks and their dependencies on this base, as defined by the `Dockerfile`. This gives us an image with all the packages we need installed. We can now spin up as many instances of this image as we like, using the `docker run` command. Each instance is called a _container_. Each of these containers can be thought of as a fully functional, isolated OS with all the deep learning libraries installed in it.

## Why do I need a Docker?
Installing all the deep learning frameworks so that they coexist and function correctly is an exercise in dependency hell. Unfortunately, given the current state of DL development and research, it is almost impossible to rely on just one framework. This Docker image is intended to provide a solution for this use case.

If you would rather install all the frameworks yourself manually, take a look at this guide: [Setting up a deep learning machine from scratch](https://github.com/saiprashanths/dl-setup)

### Do I really need an all-in-one container?
No. The provided all-in-one solution is useful if you have dependencies on multiple frameworks (say, load a pre-trained Caffe model, finetune it, convert it to TensorFlow and continue developing there) or if you just want to play around with the various frameworks.

The Docker philosophy is to build a container for each logical task/framework. If we followed this, we would have one container for each of the deep learning frameworks. This minimizes clashes between frameworks and is easier to maintain as things evolve. In fact, if you only intend to use one of the frameworks, or at least only one framework at a time, follow this approach.
You can find Dockerfiles for individual frameworks here:
* [Tensorflow Docker](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker)
* [Caffe Docker](https://github.com/BVLC/caffe/tree/master/docker)
* [Theano Docker](https://github.com/Kaixhin/dockerfiles/tree/master/cuda-theano)
* [Keras Docker](https://github.com/Kaixhin/dockerfiles/tree/master/cuda-keras/cuda_v7.5)
* [Lasagne Docker](https://github.com/Kaixhin/dockerfiles/tree/master/cuda-lasagne/cuda_v7.5)
* [Torch Docker](https://github.com/Kaixhin/dockerfiles/tree/master/cuda-torch)

## FAQs
### Performance
Running the DL frameworks as Docker containers should have no performance impact at runtime. Spinning up a Docker container itself is very fast and should take only a couple of seconds or less.

### Docker container persistence
Keep in mind that changes made inside a Docker container are not persistent. Let's say you spin up a container, add and delete a few files, and then kill the container. The next time you spin up a container from the same image, all your previous changes will be lost and you will be presented with a fresh instance of the image. This is great, because if you mess up your container, you can always kill it and start afresh. It's bad if you don't know or understand this and forget to save your work before killing the container. There are a couple of ways to work around this:

1. **Commit**: If you make changes to the image itself (say, install a few new libraries), you can commit the changes and settings into a new image. Note that this creates a new image, which will take a few GBs of space on your disk. In your next session, you can create a container from this new image. For details on commit, see [Docker's documentation](https://docs.docker.com/engine/reference/commandline/commit/).

2. **Shared volume**: If you don't make changes to the image itself, but only create data (say, train a new Caffe model), then committing the image each time is overkill. In this case, it is easier to persist the data to a folder on your host OS using shared volumes. Simply put, you share a folder from your host into the container; any changes made to the contents of this folder from inside the container persist, even after the container is killed. For more details, see Docker's docs on [Managing data in containers](https://docs.docker.com/engine/userguide/containers/dockervolumes/)

### How do I update/install new libraries?
You can do one of:

1. Modify the `Dockerfile` directly to install new libraries or update your existing ones. You will need to run `docker build` again after you do this. If you just want to update to a newer version of the DL framework(s), you can pass the versions as CLI parameters using the `--build-arg` flag. The framework versions are defined at the top of the `Dockerfile`. For example: `docker build -t floydhub/dl-docker:cpu -f Dockerfile.cpu --build-arg TENSORFLOW_VERSION=0.9.0rc0 .`

2. Log in to a container and install the frameworks interactively from the terminal. After you've made sure everything looks good, you can commit the new container and store it as an image.

### What operating systems are supported?
Docker is supported on all the OSes mentioned here: [Install Docker Engine](https://docs.docker.com/engine/installation/) (i.e. various flavors of Linux, Windows, and OS X). The CPU version (`Dockerfile.cpu`) will run on all the above operating systems. However, the GPU version (`Dockerfile.gpu`) will only run on Linux. This is because Docker runs inside a virtual machine on Windows and OS X. Virtual machines don't have direct access to the GPU on the host.
Unless PCI passthrough is implemented for these hosts, GPU support isn't available on non-Linux OSes at the moment.