├── examples ├── docker-compose │ ├── notebook │ │ ├── README.md │ │ ├── build.sh │ │ ├── down.sh │ │ ├── notebook.yml │ │ ├── Dockerfile │ │ ├── secure-notebook.yml │ │ ├── env.sh │ │ ├── letsencrypt-notebook.yml │ │ └── up.sh │ ├── bin │ │ ├── vbox.sh │ │ ├── softlayer.sh │ │ ├── sl-dns.sh │ │ └── letsencrypt.sh │ └── README.md └── make-deploy │ ├── virtualbox.makefile │ ├── Dockerfile │ ├── self-signed.makefile │ ├── softlayer.makefile │ ├── Makefile │ ├── letsencrypt.makefile │ └── README.md ├── requirements-test.txt ├── r-notebook ├── .dockerignore ├── hooks │ └── post_push ├── Dockerfile └── README.md ├── base-notebook ├── .dockerignore ├── hooks │ └── post_push ├── start-notebook.sh ├── fix-permissions ├── jupyter_notebook_config.py ├── start-singleuser.sh ├── start.sh ├── Dockerfile.ppc64le ├── Dockerfile ├── test │ └── test_container_options.py ├── Dockerfile.ppc64le.patch └── README.md ├── internal ├── docker-stacks-webhook │ ├── runtime.txt │ ├── requirements.txt │ ├── conda_requirements.txt │ ├── manifest.yml │ └── docker-stacks-webhook.ipynb ├── README.md ├── inherit-diagram.png └── inherit-diagram.svg ├── scipy-notebook ├── .dockerignore ├── hooks │ └── post_push ├── Dockerfile └── README.md ├── all-spark-notebook ├── .dockerignore ├── kernel.json ├── hooks │ └── post_push ├── Dockerfile └── README.md ├── astropy-notebook ├── .dockerignore ├── Dockerfile └── README.md ├── datascience-notebook ├── .dockerignore ├── hooks │ └── post_push ├── Dockerfile └── README.md ├── minimal-notebook ├── .dockerignore ├── hooks │ └── post_push ├── Dockerfile └── README.md ├── pyspark-notebook ├── .dockerignore ├── hooks │ └── post_push ├── Dockerfile └── README.md ├── tensorflow-notebook ├── .dockerignore ├── Dockerfile ├── hooks │ └── post_push └── README.md ├── .travis.yml ├── test └── test_notebook.py ├── .gitignore ├── .github └── issue_template.md ├── Makefile ├── LICENSE.md ├── conftest.py └── README.md /examples/docker-compose/notebook/README.md: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /requirements-test.txt: -------------------------------------------------------------------------------- 1 | docker 2 | pytest 3 | requests -------------------------------------------------------------------------------- /r-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /base-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /internal/docker-stacks-webhook/runtime.txt: -------------------------------------------------------------------------------- 1 | python-3.19.0 2 | -------------------------------------------------------------------------------- /scipy-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /all-spark-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- 
/astropy-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /datascience-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /minimal-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /pyspark-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /tensorflow-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /internal/README.md: -------------------------------------------------------------------------------- 1 | Stuff to help with building Docker images. You probably don't want to use anything in here. 2 | -------------------------------------------------------------------------------- /internal/inherit-diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/spacetelescope/docker-stacks/master/internal/inherit-diagram.png -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | python: 3 | - 3.6 4 | sudo: required 5 | services: 6 | - docker 7 | install: 8 | - make test-reqs 9 | script: 10 | - make build-test-all 11 | -------------------------------------------------------------------------------- /internal/docker-stacks-webhook/requirements.txt: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | git+https://github.com/parente/kernel_gateway@concatenate-cells 4 | -------------------------------------------------------------------------------- /all-spark-notebook/kernel.json: -------------------------------------------------------------------------------- 1 | { 2 | "display_name": "Apache Toree (Scala 2.10.4)", 3 | "language": "scala", 4 | "argv": [ 5 | "/opt/toree-kernel/bin/toree-kernel", 6 | "--profile", 7 | "{connection_file}" 8 | ] 9 | } 10 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 
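# Resolve the absolute directory of this script (the DIR= line below) so it can be invoked from any working directory.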
4 | 5 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 6 | 7 | # Setup environment 8 | source "$DIR/env.sh" 9 | 10 | # Build the notebook image 11 | docker-compose -f "$DIR/notebook.yml" build 12 | -------------------------------------------------------------------------------- /internal/docker-stacks-webhook/conda_requirements.txt: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | # NOTE: jupyter_kernel_gateway uses the notebook package. We get a 4 | # speed up if we pre-install the notebook package using conda since 5 | # we'll get prebuilt binaries for all its dependencies like pyzmq. 6 | notebook==4.1 7 | -------------------------------------------------------------------------------- /examples/docker-compose/bin/vbox.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | # Set reasonable default VM settings 6 | : ${VIRTUALBOX_CPUS:=4} 7 | export VIRTUALBOX_CPUS 8 | : ${VIRTUALBOX_MEMORY_SIZE:=4096} 9 | export VIRTUALBOX_MEMORY_SIZE 10 | 11 | docker-machine create --driver virtualbox "$@" 12 | -------------------------------------------------------------------------------- /test/test_notebook.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | def test_secured_server(container, http_client): 5 | """Notebook server should eventually request user login.""" 6 | container.run() 7 | resp = http_client.get('http://localhost:8888') 8 | resp.raise_for_status() 9 | assert 'login_submit' in resp.text 10 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/down.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 6 | 7 | # Setup environment 8 | source "$DIR/env.sh" 9 | 10 | # Bring down the notebook container, using container name as project name 11 | docker-compose -f "$DIR/notebook.yml" -p "$NAME" down 12 | -------------------------------------------------------------------------------- /tensorflow-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | FROM jupyter/scipy-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | # Install Tensorflow 8 | RUN conda install --quiet --yes \ 9 | 'tensorflow=1.3*' \ 10 | 'keras=2.0*' && \ 11 | conda clean -tipsy && \ 12 | fix-permissions $CONDA_DIR 13 | -------------------------------------------------------------------------------- /r-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 
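# NEXT_BUILD_TRIGGERS is expected to hold a comma-separated list of downstream build trigger URLs; the sed call below turns the commas into spaces so the loop can POST to each URL.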
10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /base-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /minimal-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /pyspark-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /scipy-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /all-spark-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /datascience-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 
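# SOURCE_COMMIT is assumed to be provided by the automated build environment; the tag below uses its first 12 characters.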
5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /tensorflow-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /examples/docker-compose/notebook/notebook.yml: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | version: "2" 5 | 6 | services: 7 | notebook: 8 | build: . 9 | image: my-notebook 10 | container_name: ${NAME} 11 | volumes: 12 | - "work:/home/jovyan/work" 13 | ports: 14 | - "${PORT}:8888" 15 | 16 | volumes: 17 | work: 18 | external: 19 | name: ${WORK_VOLUME} 20 | -------------------------------------------------------------------------------- /examples/make-deploy/virtualbox.makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | virtualbox-vm: export VIRTUALBOX_CPU_COUNT?=4 5 | virtualbox-vm: export VIRTUALBOX_DISK_SIZE?=100000 6 | virtualbox-vm: export VIRTUALBOX_MEMORY_SIZE?=4096 7 | virtualbox-vm: check 8 | @test -n "$(NAME)" || \ 9 | (echo "ERROR: NAME not defined (make help)"; exit 1) 10 | @docker-machine create -d virtualbox $(NAME) 11 | -------------------------------------------------------------------------------- /examples/docker-compose/bin/softlayer.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | # Set default SoftLayer VM settings 6 | : ${SOFTLAYER_CPU:=4} 7 | export SOFTLAYER_CPU 8 | : ${SOFTLAYER_DISK_SIZE:=100} 9 | export SOFTLAYER_DISK_SIZE 10 | : ${SOFTLAYER_MEMORY:=4096} 11 | export SOFTLAYER_MEMORY 12 | : ${SOFTLAYER_REGION:=wdc01} 13 | export SOFTLAYER_REGION 14 | 15 | docker-machine create --driver softlayer "$@" 16 | -------------------------------------------------------------------------------- /examples/make-deploy/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
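# Note: the FROM line below pins a specific image tag (a short git SHA) so rebuilds stay reproducible; any other docker-stacks image or tag can be substituted.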
3 | 4 | # Pick your favorite docker-stacks image 5 | FROM jupyter/minimal-notebook:2d125a7161b5 6 | 7 | USER jovyan 8 | 9 | # Add permanent pip/conda installs, data files, other user libs here 10 | # e.g., RUN pip install jupyter_dashboards 11 | 12 | USER root 13 | 14 | # Add permanent apt-get installs and other root commands here 15 | # e.g., RUN apt-get install npm nodejs 16 | -------------------------------------------------------------------------------- /examples/make-deploy/self-signed.makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | self-signed-notebook: PORT?=443 5 | self-signed-notebook: NAME?=notebook 6 | self-signed-notebook: WORK_VOLUME?=$(NAME)-data 7 | self-signed-notebook: DOCKER_ARGS:=-e USE_HTTPS=yes \ 8 | -e PASSWORD=$(PASSWORD) 9 | self-signed-notebook: check 10 | @test -n "$(PASSWORD)" || \ 11 | (echo "ERROR: PASSWORD not defined or blank"; exit 1) 12 | $(RUN_NOTEBOOK) 13 | -------------------------------------------------------------------------------- /base-notebook/start-notebook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | set -e 6 | 7 | if [[ ! -z "${JUPYTERHUB_API_TOKEN}" ]]; then 8 | # launched by JupyterHub, use single-user entrypoint 9 | exec /usr/local/bin/start-singleuser.sh $* 10 | else 11 | if [[ ! -z "${JUPYTER_ENABLE_LAB}" ]]; then 12 | . /usr/local/bin/start.sh jupyter lab $* 13 | else 14 | . /usr/local/bin/start.sh jupyter notebook $* 15 | fi 16 | fi 17 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | # Pick your favorite docker-stacks image 5 | FROM jupyter/minimal-notebook:55d5ca6be183 6 | 7 | USER jovyan 8 | 9 | # Add permanent pip/conda installs, data files, other user libs here 10 | # e.g., RUN pip install jupyter_dashboards 11 | 12 | USER root 13 | 14 | # Add permanent apt-get installs and other root commands here 15 | # e.g., RUN apt-get install npm nodejs 16 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/secure-notebook.yml: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | version: "2" 5 | 6 | services: 7 | notebook: 8 | build: . 9 | image: my-notebook 10 | container_name: ${NAME} 11 | volumes: 12 | - "work:/home/jovyan/work" 13 | ports: 14 | - "${PORT}:8888" 15 | environment: 16 | USE_HTTPS: "yes" 17 | PASSWORD: ${PASSWORD} 18 | 19 | volumes: 20 | work: 21 | external: 22 | name: ${WORK_VOLUME} 23 | -------------------------------------------------------------------------------- /internal/docker-stacks-webhook/manifest.yml: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | --- 4 | applications: 5 | - name: docker-stacks-webhook 6 | memory: 128M 7 | instances: 1 8 | path: . 
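# Serves the webhook notebook as an HTTP API via Jupyter Kernel Gateway in notebook-http mode (see the command below).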
9 | buildpack: https://github.com/ihuston/python-conda-buildpack 10 | command: > 11 | jupyter-kernelgateway --KernelGatewayApp.port=$PORT 12 | --KernelGatewayApp.ip=0.0.0.0 13 | --KernelGatewayApp.api=notebook-http 14 | --KernelGatewayApp.seed_uri='./docker-stacks-webhook.ipynb' 15 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/env.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | # Set default values for environment variables required by notebook compose 6 | # configuration file. 7 | 8 | # Container name 9 | : "${NAME:=my-notebook}" 10 | export NAME 11 | 12 | # Exposed container port 13 | : ${PORT:=80} 14 | export PORT 15 | 16 | # Container work volume name 17 | : "${WORK_VOLUME:=$NAME-work}" 18 | export WORK_VOLUME 19 | 20 | # Container secrets volume name 21 | : "${SECRETS_VOLUME:=$NAME-secrets}" 22 | export SECRETS_VOLUME 23 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/letsencrypt-notebook.yml: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | version: "2" 5 | 6 | services: 7 | notebook: 8 | build: . 9 | image: my-notebook 10 | container_name: ${NAME} 11 | volumes: 12 | - "work:/home/jovyan/work" 13 | - "secrets:/etc/letsencrypt" 14 | ports: 15 | - "${PORT}:8888" 16 | environment: 17 | USE_HTTPS: "yes" 18 | PASSWORD: ${PASSWORD} 19 | command: > 20 | start-notebook.sh 21 | --NotebookApp.certfile=/etc/letsencrypt/fullchain.pem 22 | --NotebookApp.keyfile=/etc/letsencrypt/privkey.pem 23 | 24 | volumes: 25 | work: 26 | external: 27 | name: ${WORK_VOLUME} 28 | secrets: 29 | external: 30 | name: ${SECRETS_VOLUME} 31 | -------------------------------------------------------------------------------- /examples/docker-compose/bin/sl-dns.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | set -e 6 | 7 | # User must have slcli installed 8 | which slcli > /dev/null || (echo "SoftLayer cli not found (pip install softlayer)"; exit 1) 9 | 10 | USAGE="Usage: `basename $0` machine_name [domain]" 11 | E_BADARGS=85 12 | 13 | # Machine name is first command line arg 14 | MACHINE_NAME=$1 && [ -z "$MACHINE_NAME" ] && echo "$USAGE" && exit $E_BADARGS 15 | 16 | # Use SOFTLAYER_DOMAIN env var if domain name not set as second arg 17 | DOMAIN="${2:-$SOFTLAYER_DOMAIN}" && [ -z "$DOMAIN" ] && \ 18 | echo "Must specify domain or set SOFTLAYER_DOMAIN environment varable" && \ 19 | echo "$USAGE" && exit $E_BADARGS 20 | 21 | IP=$(docker-machine ip "$MACHINE_NAME") 22 | 23 | slcli dns record-add $DOMAIN $MACHINE_NAME A $IP 24 | -------------------------------------------------------------------------------- /minimal-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
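# Extends base-notebook with common OS-level tools (editors, git, pandoc, TeX Live), e.g. to support nbconvert document conversion.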
3 | 4 | FROM jupyter/base-notebook 5 | 6 | LABEL maintainer="Jupyter Project " 7 | 8 | USER root 9 | 10 | # Install all OS dependencies for fully functional notebook server 11 | RUN apt-get update && apt-get install -yq --no-install-recommends \ 12 | build-essential \ 13 | emacs \ 14 | git \ 15 | inkscape \ 16 | jed \ 17 | libsm6 \ 18 | libxext-dev \ 19 | libxrender1 \ 20 | lmodern \ 21 | pandoc \ 22 | python-dev \ 23 | texlive-fonts-extra \ 24 | texlive-fonts-recommended \ 25 | texlive-generic-recommended \ 26 | texlive-latex-base \ 27 | texlive-latex-extra \ 28 | texlive-xetex \ 29 | vim \ 30 | unzip \ 31 | && apt-get clean && \ 32 | rm -rf /var/lib/apt/lists/* 33 | 34 | # Switch back to jovyan to avoid accidental container runs as root 35 | USER $NB_USER 36 | -------------------------------------------------------------------------------- /astropy-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Association of Universities for Research in Astronomy 2 | # Distributed under the terms of the Modified BSD License. 3 | FROM jupyter/scipy-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | # Install Astroconda channel 8 | RUN conda config --add channels http://ssb.stsci.edu/astroconda 9 | 10 | # Create 'astroconda' channel configured with default packages 11 | RUN conda create -n astroconda stsci python=3 -y 12 | 13 | # Activate the astroconda channel 14 | RUN ["/bin/bash", "-c", "source activate astroconda"] 15 | 16 | # Install ipykernel switcher 17 | RUN python -m ipykernel install --user \ 18 | --name astroconda \ 19 | --display-name "Python (astroconda)" 20 | 21 | 22 | # Install ginga, ipywidgets and ipyevents for interactive plots 23 | RUN conda install ginga -y 24 | 25 | RUN conda install -c conda-forge ipywidgets -y 26 | 27 | RUN pip install ipyevents 28 | 29 | RUN jupyter nbextension enable --py --sys-prefix ipyevents 30 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | 3 | *~ 4 | 5 | __pycache__/ 6 | *.py[cod] 7 | 8 | # C extensions 9 | *.so 10 | 11 | # Distribution / packaging 12 | .Python 13 | env/ 14 | build/ 15 | develop-eggs/ 16 | dist/ 17 | downloads/ 18 | eggs/ 19 | .eggs/ 20 | lib/ 21 | lib64/ 22 | parts/ 23 | sdist/ 24 | var/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .coverage 43 | .coverage.* 44 | .cache 45 | nosetests.xml 46 | coverage.xml 47 | *,cover 48 | 49 | # Translations 50 | *.mo 51 | *.pot 52 | 53 | # Django stuff: 54 | *.log 55 | 56 | # Sphinx documentation 57 | docs/_build/ 58 | 59 | # PyBuilder 60 | target/ 61 | 62 | # Mac OS X 63 | .DS_Store 64 | 65 | dockerspawner 66 | dockerspawner.tar.gz 67 | *.orig 68 | -------------------------------------------------------------------------------- /r-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
3 | FROM jupyter/minimal-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | USER root 8 | 9 | # R pre-requisites 10 | RUN apt-get update && \ 11 | apt-get install -y --no-install-recommends \ 12 | fonts-dejavu \ 13 | tzdata \ 14 | gfortran \ 15 | gcc && apt-get clean && \ 16 | rm -rf /var/lib/apt/lists/* 17 | 18 | USER $NB_USER 19 | 20 | # R packages 21 | RUN conda install --quiet --yes \ 22 | 'r-base=3.3.2' \ 23 | 'r-irkernel=0.7*' \ 24 | 'r-plyr=1.8*' \ 25 | 'r-devtools=1.12*' \ 26 | 'r-tidyverse=1.0*' \ 27 | 'r-shiny=0.14*' \ 28 | 'r-rmarkdown=1.2*' \ 29 | 'r-forecast=7.3*' \ 30 | 'r-rsqlite=1.1*' \ 31 | 'r-reshape2=1.4*' \ 32 | 'r-nycflights13=0.2*' \ 33 | 'r-caret=6.0*' \ 34 | 'r-rcurl=1.95*' \ 35 | 'r-crayon=1.3*' \ 36 | 'r-randomforest=4.6*' \ 37 | 'r-hexbin=1.27*' && \ 38 | conda clean -tipsy && \ 39 | fix-permissions $CONDA_DIR 40 | -------------------------------------------------------------------------------- /base-notebook/fix-permissions: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # set permissions on a directory 3 | # after any installation, if a directory needs to be (human) user-writable, 4 | # run this script on it. 5 | # It will make everything in the directory owned by the group $NB_GID 6 | # and writable by that group. 7 | # Deployments that want to set a specific user id can preserve permissions 8 | # by adding the `--group-add users` line to `docker run`. 9 | 10 | # uses find to avoid touching files that already have the right permissions, 11 | # which would cause massive image explosion 12 | 13 | # right permissions are: 14 | # group=$NB_GID 15 | # AND permissions include group rwX (directory-execute) 16 | # AND directories have setuid,setgid bits set 17 | 18 | set -e 19 | 20 | for d in $@; do 21 | find "$d" \ 22 | ! \( \ 23 | -group $NB_GID \ 24 | -a -perm -g+rwX \ 25 | \) \ 26 | -exec chgrp $NB_GID {} \; \ 27 | -exec chmod g+rwX {} \; 28 | # setuid,setgid *on directories only* 29 | find "$d" \ 30 | \( \ 31 | -type d \ 32 | -a ! -perm -6000 \ 33 | \) \ 34 | -exec chmod +6000 {} \; 35 | done 36 | -------------------------------------------------------------------------------- /.github/issue_template.md: -------------------------------------------------------------------------------- 1 | Hi! Thanks for using Jupyter's docker-stacks images. 2 | 3 | If you are requesting a library upgrade or addition in one of the existing images, please state the desired library name and version here and disregard the remaining sections. 4 | 5 | If you are reporting an issue with one of the existing images, please answer the questions below to help us troubleshoot the problem. Please be as thorough as possible. 6 | 7 | **What docker image are you using?** 8 | 9 | Example: `jupyter/scipy-notebook` 10 | 11 | **What complete docker command do you run to launch the container (omitting sensitive values)?** 12 | 13 | Example: `docker run -it --rm -p 8889:8888 jupyter/all-spark-notebook:latest` 14 | 15 | **What steps do you take once the container is running to reproduce the issue?** 16 | 17 | Example: 18 | 19 | 1. Visit http://localhost:8889 20 | 2. Start an R notebook 21 | 3. ... 22 | 23 | **What do you expect to happen?** 24 | 25 | Example: ggplot output appears in my notebook. 26 | 27 | **What actually happens?** 28 | 29 | Example: No output is visible in the notebook and the notebook server log contains messages about ...
30 | -------------------------------------------------------------------------------- /examples/make-deploy/softlayer.makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | softlayer-vm: export SOFTLAYER_CPU?=4 5 | softlayer-vm: export SOFTLAYER_DISK_SIZE?=100 6 | softlayer-vm: export SOFTLAYER_MEMORY?=4096 7 | softlayer-vm: export SOFTLAYER_REGION?=wdc01 8 | softlayer-vm: check 9 | @test -n "$(NAME)" || \ 10 | (echo "ERROR: NAME not defined (make help)"; exit 1) 11 | @test -n "$(SOFTLAYER_API_KEY)" || \ 12 | (echo "ERROR: SOFTLAYER_API_KEY not defined (make help)"; exit 1) 13 | @test -n "$(SOFTLAYER_USER)" || \ 14 | (echo "ERROR: SOFTLAYER_USER not defined (make help)"; exit 1) 15 | @test -n "$(SOFTLAYER_DOMAIN)" || \ 16 | (echo "ERROR: SOFTLAYER_DOMAIN not defined (make help)"; exit 1) 17 | @docker-machine create -d softlayer $(NAME) 18 | @echo "DONE: Docker host '$(NAME)' up at $$(docker-machine ip $(NAME))" 19 | 20 | softlayer-dns: HOST_NAME:=$$(docker-machine active) 21 | softlayer-dns: IP:=$$(docker-machine ip $(HOST_NAME)) 22 | softlayer-dns: check 23 | @which slcli > /dev/null || (echo "softlayer cli not found (pip install softlayer)"; exit 1) 24 | @test -n "$(SOFTLAYER_DOMAIN)" || \ 25 | (echo "ERROR: SOFTLAYER_DOMAIN not defined (make help)"; exit 1) 26 | @slcli dns record-add $(SOFTLAYER_DOMAIN) $(HOST_NAME) A $(IP) 27 | -------------------------------------------------------------------------------- /base-notebook/jupyter_notebook_config.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | from jupyter_core.paths import jupyter_data_dir 5 | import subprocess 6 | import os 7 | import errno 8 | import stat 9 | 10 | c = get_config() 11 | c.NotebookApp.ip = '*' 12 | c.NotebookApp.port = 8888 13 | c.NotebookApp.open_browser = False 14 | 15 | # Generate a self-signed certificate 16 | if 'GEN_CERT' in os.environ: 17 | dir_name = jupyter_data_dir() 18 | pem_file = os.path.join(dir_name, 'notebook.pem') 19 | try: 20 | os.makedirs(dir_name) 21 | except OSError as exc: # Python >2.5 22 | if exc.errno == errno.EEXIST and os.path.isdir(dir_name): 23 | pass 24 | else: 25 | raise 26 | # Generate a certificate if one doesn't exist on disk 27 | subprocess.check_call(['openssl', 'req', '-new', 28 | '-newkey', 'rsa:2048', 29 | '-days', '365', 30 | '-nodes', '-x509', 31 | '-subj', '/C=XX/ST=XX/L=XX/O=generated/CN=generated', 32 | '-keyout', pem_file, 33 | '-out', pem_file]) 34 | # Restrict access to the file 35 | os.chmod(pem_file, stat.S_IRUSR | stat.S_IWUSR) 36 | c.NotebookApp.certfile = pem_file 37 | -------------------------------------------------------------------------------- /all-spark-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
3 | FROM jupyter/pyspark-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | USER root 8 | 9 | # RSpark config 10 | ENV R_LIBS_USER $SPARK_HOME/R/lib 11 | RUN fix-permissions $R_LIBS_USER 12 | 13 | # R pre-requisites 14 | RUN apt-get update && \ 15 | apt-get install -y --no-install-recommends \ 16 | fonts-dejavu \ 17 | tzdata \ 18 | gfortran \ 19 | gcc && apt-get clean && \ 20 | rm -rf /var/lib/apt/lists/* 21 | 22 | USER $NB_USER 23 | 24 | # R packages 25 | RUN conda install --quiet --yes \ 26 | 'r-base=3.3.2' \ 27 | 'r-irkernel=0.7*' \ 28 | 'r-ggplot2=2.2*' \ 29 | 'r-sparklyr=0.5*' \ 30 | 'r-rcurl=1.95*' && \ 31 | conda clean -tipsy && \ 32 | fix-permissions $CONDA_DIR 33 | 34 | # Apache Toree kernel 35 | RUN pip install --no-cache-dir \ 36 | https://dist.apache.org/repos/dist/dev/incubator/toree/0.2.0/snapshots/dev1/toree-pip/toree-0.2.0.dev1.tar.gz \ 37 | && \ 38 | jupyter toree install --sys-prefix && \ 39 | fix-permissions $CONDA_DIR 40 | 41 | # Spylon-kernel 42 | RUN conda install --quiet --yes 'spylon-kernel=0.4*' && \ 43 | conda clean -tipsy && \ 44 | python -m spylon_kernel install --sys-prefix && \ 45 | fix-permissions $CONDA_DIR 46 | -------------------------------------------------------------------------------- /base-notebook/start-singleuser.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | set -e 6 | 7 | # set default ip to 0.0.0.0 8 | if [[ "$NOTEBOOK_ARGS $@" != *"--ip="* ]]; then 9 | NOTEBOOK_ARGS="--ip=0.0.0.0 $NOTEBOOK_ARGS" 10 | fi 11 | 12 | # handle some deprecated environment variables 13 | # from DockerSpawner < 0.8. 14 | # These won't be passed from DockerSpawner 0.9, 15 | # so avoid specifying --arg=empty-string 16 | if [ ! -z "$NOTEBOOK_DIR" ]; then 17 | NOTEBOOK_ARGS="--notebook-dir='$NOTEBOOK_DIR' $NOTEBOOK_ARGS" 18 | fi 19 | if [ ! -z "$JPY_PORT" ]; then 20 | NOTEBOOK_ARGS="--port=$JPY_PORT $NOTEBOOK_ARGS" 21 | fi 22 | if [ ! -z "$JPY_USER" ]; then 23 | NOTEBOOK_ARGS="--user=$JPY_USER $NOTEBOOK_ARGS" 24 | fi 25 | if [ ! -z "$JPY_COOKIE_NAME" ]; then 26 | NOTEBOOK_ARGS="--cookie-name=$JPY_COOKIE_NAME $NOTEBOOK_ARGS" 27 | fi 28 | if [ ! -z "$JPY_BASE_URL" ]; then 29 | NOTEBOOK_ARGS="--base-url=$JPY_BASE_URL $NOTEBOOK_ARGS" 30 | fi 31 | if [ ! -z "$JPY_HUB_PREFIX" ]; then 32 | NOTEBOOK_ARGS="--hub-prefix=$JPY_HUB_PREFIX $NOTEBOOK_ARGS" 33 | fi 34 | if [ ! -z "$JPY_HUB_API_URL" ]; then 35 | NOTEBOOK_ARGS="--hub-api-url=$JPY_HUB_API_URL $NOTEBOOK_ARGS" 36 | fi 37 | if [ ! -z "$JUPYTER_ENABLE_LAB" ]; then 38 | NOTEBOOK_BIN="jupyter labhub" 39 | else 40 | NOTEBOOK_BIN=jupyterhub-singleuser 41 | fi 42 | 43 | . /usr/local/bin/start.sh $NOTEBOOK_BIN $NOTEBOOK_ARGS $@ 44 | -------------------------------------------------------------------------------- /examples/make-deploy/Makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
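# Typical flow (example values; assumes an active docker-machine host): make image && make notebook NAME=mynotebook PORT=8888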
3 | 4 | .PHONY: help check image notebook 5 | 6 | IMAGE:=my-notebook 7 | 8 | # Common, extensible docker run command 9 | define RUN_NOTEBOOK 10 | @docker volume create --name $(WORK_VOLUME) > /dev/null 11 | -@docker rm -f $(NAME) 2> /dev/null 12 | @docker run -d -p $(PORT):8888 \ 13 | --name $(NAME) \ 14 | -v $(WORK_VOLUME):/home/jovyan/work \ 15 | $(DOCKER_ARGS) \ 16 | $(IMAGE) bash -c "$(PRE_CMD) chown jovyan /home/jovyan/work && start-notebook.sh $(ARGS)" > /dev/null 17 | @echo "DONE: Notebook '$(NAME)' listening on $$(docker-machine ip $$(docker-machine active)):$(PORT)" 18 | endef 19 | 20 | help: 21 | @cat README.md 22 | 23 | check: 24 | @which docker-machine > /dev/null || (echo "ERROR: docker-machine not found (brew install docker-machine)"; exit 1) 25 | @which docker > /dev/null || (echo "ERROR: docker not found (brew install docker)"; exit 1) 26 | @docker | grep volume > /dev/null || (echo "ERROR: docker 1.9.0+ required"; exit 1) 27 | 28 | image: DOCKER_ARGS?= 29 | image: 30 | @docker build --rm $(DOCKER_ARGS) -t $(IMAGE) . 31 | 32 | notebook: PORT?=80 33 | notebook: NAME?=notebook 34 | notebook: WORK_VOLUME?=$(NAME)-data 35 | notebook: check 36 | $(RUN_NOTEBOOK) 37 | 38 | # docker-machine drivers 39 | include virtualbox.makefile 40 | include softlayer.makefile 41 | 42 | # Preset notebook configurations 43 | include self-signed.makefile 44 | include letsencrypt.makefile -------------------------------------------------------------------------------- /examples/docker-compose/bin/letsencrypt.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | # Use https://letsencrypt.org to create a certificate for a single domain 6 | # and store it in a Docker volume. 7 | 8 | set -e 9 | 10 | # Get domain and email from environment 11 | [ -z "$FQDN" ] && \ 12 | echo "ERROR: Must set FQDN environment varable" && \ 13 | exit 1 14 | 15 | [ -z "$EMAIL" ] && \ 16 | echo "ERROR: Must set EMAIL environment varable" && \ 17 | exit 1 18 | 19 | # letsencrypt certificate server type (default is production). 20 | # Set `CERT_SERVER=--staging` for staging. 21 | : ${CERT_SERVER=''} 22 | 23 | # Create Docker volume to contain the cert 24 | : ${SECRETS_VOLUME:=my-notebook-secrets} 25 | docker volume create --name $SECRETS_VOLUME 1>/dev/null 26 | # Generate the cert and save it to the Docker volume 27 | docker run --rm -it \ 28 | -p 80:80 \ 29 | -v $SECRETS_VOLUME:/etc/letsencrypt \ 30 | quay.io/letsencrypt/letsencrypt:latest \ 31 | certonly \ 32 | --non-interactive \ 33 | --keep-until-expiring \ 34 | --standalone \ 35 | --standalone-supported-challenges http-01 \ 36 | --agree-tos \ 37 | --domain "$FQDN" \ 38 | --email "$EMAIL" \ 39 | $CERT_SERVER 40 | 41 | # Set permissions so nobody can read the cert and key. 42 | # Also symlink the certs into the root of the /etc/letsencrypt 43 | # directory so that the FQDN doesn't have to be known later. 44 | docker run --rm -it \ 45 | -v $SECRETS_VOLUME:/etc/letsencrypt \ 46 | ubuntu:16.04 \ 47 | bash -c "ln -s /etc/letsencrypt/live/$FQDN/* /etc/letsencrypt/ && \ 48 | find /etc/letsencrypt -type d -exec chmod 755 {} +" 49 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/up.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 
3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | set -e 6 | 7 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 8 | 9 | USAGE="Usage: `basename $0` [--secure | --letsencrypt] [--password PASSWORD] [--secrets SECRETS_VOLUME]" 10 | 11 | # Parse args to determine security settings 12 | SECURE=${SECURE:=no} 13 | LETSENCRYPT=${LETSENCRYPT:=no} 14 | while [[ $# > 0 ]] 15 | do 16 | key="$1" 17 | case $key in 18 | --secure) 19 | SECURE=yes 20 | ;; 21 | --letsencrypt) 22 | LETSENCRYPT=yes 23 | ;; 24 | --secrets) 25 | SECRETS_VOLUME="$2" 26 | shift # past argument 27 | ;; 28 | --password) 29 | PASSWORD="$2" 30 | export PASSWORD 31 | shift # past argument 32 | ;; 33 | *) # unknown option 34 | ;; 35 | esac 36 | shift # past argument or value 37 | done 38 | 39 | if [[ "$LETSENCRYPT" == yes || "$SECURE" == yes ]]; then 40 | if [ -z "${PASSWORD:+x}" ]; then 41 | echo "ERROR: Must set PASSWORD if running in secure mode" 42 | echo "$USAGE" 43 | exit 1 44 | fi 45 | if [ "$LETSENCRYPT" == yes ]; then 46 | CONFIG=letsencrypt-notebook.yml 47 | if [ -z "${SECRETS_VOLUME:+x}" ]; then 48 | echo "ERROR: Must set SECRETS_VOLUME if running in letsencrypt mode" 49 | echo "$USAGE" 50 | exit 1 51 | fi 52 | else 53 | CONFIG=secure-notebook.yml 54 | fi 55 | export PORT=${PORT:=443} 56 | else 57 | CONFIG=notebook.yml 58 | export PORT=${PORT:=80} 59 | fi 60 | 61 | # Setup environment 62 | source "$DIR/env.sh" 63 | 64 | # Create a Docker volume to store notebooks 65 | docker volume create --name "$WORK_VOLUME" 66 | 67 | # Bring up a notebook container, using container name as project name 68 | echo "Bringing up notebook '$NAME'" 69 | docker-compose -f "$DIR/$CONFIG" -p "$NAME" up -d 70 | 71 | IP=$(docker-machine ip $(docker-machine active)) 72 | echo "Notebook $NAME listening on $IP:$PORT" 73 | -------------------------------------------------------------------------------- /pyspark-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | FROM jupyter/scipy-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | USER root 8 | 9 | # Spark dependencies 10 | ENV APACHE_SPARK_VERSION 2.2.0 11 | ENV HADOOP_VERSION 2.7 12 | 13 | RUN apt-get -y update && \ 14 | apt-get install --no-install-recommends -y openjdk-8-jre-headless ca-certificates-java && \ 15 | apt-get clean && \ 16 | rm -rf /var/lib/apt/lists/* 17 | RUN cd /tmp && \ 18 | wget -q http://d3kbcqa49mib13.cloudfront.net/spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz && \ 19 | echo "7a186a2a007b2dfd880571f7214a7d329c972510a460a8bdbef9f7f2a891019343c020f74b496a61e5aa42bc9e9a79cc99defe5cb3bf8b6f49c07e01b259bc6b *spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz" | sha512sum -c - && \ 20 | tar xzf spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz -C /usr/local && \ 21 | rm spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz 22 | RUN cd /usr/local && ln -s spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION} spark 23 | 24 | # Mesos dependencies 25 | RUN . 
/etc/os-release && \ 26 | apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF && \ 27 | DISTRO=$ID && \ 28 | CODENAME=$VERSION_CODENAME && \ 29 | echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" > /etc/apt/sources.list.d/mesosphere.list && \ 30 | apt-get -y update && \ 31 | apt-get --no-install-recommends -y --force-yes install mesos=1.2\* && \ 32 | apt-get clean && \ 33 | rm -rf /var/lib/apt/lists/* 34 | 35 | # Spark and Mesos config 36 | ENV SPARK_HOME /usr/local/spark 37 | ENV PYTHONPATH $SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip 38 | ENV MESOS_NATIVE_LIBRARY /usr/local/lib/libmesos.so 39 | ENV SPARK_OPTS --driver-java-options=-Xms1024M --driver-java-options=-Xmx4096M --driver-java-options=-Dlog4j.logLevel=info 40 | 41 | USER $NB_USER 42 | -------------------------------------------------------------------------------- /scipy-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | FROM jupyter/minimal-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | USER root 8 | 9 | # libav-tools for matplotlib anim 10 | RUN apt-get update && \ 11 | apt-get install -y --no-install-recommends libav-tools && \ 12 | apt-get clean && \ 13 | rm -rf /var/lib/apt/lists/* 14 | 15 | USER $NB_USER 16 | 17 | # Install Python 3 packages 18 | # Remove pyqt and qt pulled in for matplotlib since we're only ever going to 19 | # use notebook-friendly backends in these images 20 | RUN conda install --quiet --yes \ 21 | 'nomkl' \ 22 | 'ipywidgets=7.1*' \ 23 | 'pandas=0.19*' \ 24 | 'numexpr=2.6*' \ 25 | 'matplotlib=2.0*' \ 26 | 'scipy=0.19*' \ 27 | 'seaborn=0.7*' \ 28 | 'scikit-learn=0.18*' \ 29 | 'scikit-image=0.12*' \ 30 | 'sympy=1.0*' \ 31 | 'cython=0.25*' \ 32 | 'patsy=0.4*' \ 33 | 'statsmodels=0.8*' \ 34 | 'cloudpickle=0.2*' \ 35 | 'dill=0.2*' \ 36 | 'numba=0.31*' \ 37 | 'bokeh=0.12*' \ 38 | 'sqlalchemy=1.1*' \ 39 | 'hdf5=1.8.17' \ 40 | 'h5py=2.6*' \ 41 | 'vincent=0.4.*' \ 42 | 'beautifulsoup4=4.5.*' \ 43 | 'protobuf=3.*' \ 44 | 'xlrd' && \ 45 | conda remove --quiet --yes --force qt pyqt && \ 46 | conda clean -tipsy && \ 47 | # Activate ipywidgets extension in the environment that runs the notebook server 48 | jupyter nbextension enable --py widgetsnbextension --sys-prefix && \ 49 | # Also activate ipywidgets extension for JupyterLab 50 | jupyter labextension install @jupyter-widgets/jupyterlab-manager@^0.33.1 && \ 51 | npm cache clean && \ 52 | rm -rf $CONDA_DIR/share/jupyter/lab/staging && \ 53 | fix-permissions $CONDA_DIR 54 | 55 | # Install facets which does not have a pip or conda package at the moment 56 | RUN cd /tmp && \ 57 | git clone https://github.com/PAIR-code/facets.git && \ 58 | cd facets && \ 59 | jupyter nbextension install facets-dist/ --sys-prefix && \ 60 | rm -rf facets && \ 61 | fix-permissions $CONDA_DIR 62 | 63 | # Import matplotlib the first time to build the font cache. 64 | ENV XDG_CACHE_HOME /home/$NB_USER/.cache/ 65 | RUN MPLBACKEND=Agg python -c "import matplotlib.pyplot" && \ 66 | fix-permissions /home/$NB_USER 67 | 68 | USER $NB_USER 69 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
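# Example: make build/base-notebook && make test/base-notebook (run 'make test-reqs' once beforehand to install the test dependencies)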
3 | .PHONY: help test 4 | 5 | # Use bash for inline if-statements in test target 6 | SHELL:=bash 7 | OWNER:=jupyter 8 | ARCH:=$(shell uname -m) 9 | 10 | # Need to list the images in build dependency order 11 | ifeq ($(ARCH),ppc64le) 12 | ALL_STACKS:=base-notebook 13 | else 14 | ALL_STACKS:=base-notebook \ 15 | minimal-notebook \ 16 | r-notebook \ 17 | scipy-notebook \ 18 | tensorflow-notebook \ 19 | datascience-notebook \ 20 | pyspark-notebook \ 21 | all-spark-notebook 22 | endif 23 | 24 | ALL_IMAGES:=$(ALL_STACKS) 25 | 26 | help: 27 | # http://marmelab.com/blog/2016/02/29/auto-documented-makefile.html 28 | @echo "jupyter/docker-stacks" 29 | @echo "=====================" 30 | @echo "Replace % with a stack directory name (e.g., make build/minimal-notebook)" 31 | @echo 32 | @grep -E '^[a-zA-Z0-9_%/-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' 33 | 34 | arch_patch/%: ## apply hardware architecture specific patches to the Dockerfile 35 | @if [ -e ./$(notdir $@)/Dockerfile.$(ARCH).patch ]; then \ 36 | if [ -e ./$(notdir $@)/Dockerfile.orig ]; then \ 37 | cp -f ./$(notdir $@)/Dockerfile.orig ./$(notdir $@)/Dockerfile;\ 38 | else\ 39 | cp -f ./$(notdir $@)/Dockerfile ./$(notdir $@)/Dockerfile.orig;\ 40 | fi;\ 41 | patch -f ./$(notdir $@)/Dockerfile ./$(notdir $@)/Dockerfile.$(ARCH).patch; \ 42 | fi 43 | 44 | build/%: DARGS?= 45 | build/%: ## build the latest image for a stack 46 | docker build $(DARGS) --rm --force-rm -t $(OWNER)/$(notdir $@):latest ./$(notdir $@) 47 | 48 | build-all: $(foreach I,$(ALL_IMAGES),arch_patch/$(I) build/$(I) ) ## build all stacks 49 | build-test-all: $(foreach I,$(ALL_IMAGES),arch_patch/$(I) build/$(I) test/$(I) ) ## build and test all stacks 50 | 51 | dev/%: ARGS?= 52 | dev/%: DARGS?= 53 | dev/%: PORT?=8888 54 | dev/%: ## run a foreground container for a stack 55 | docker run -it --rm -p $(PORT):8888 $(DARGS) $(OWNER)/$(notdir $@) $(ARGS) 56 | 57 | test-reqs: # install libraries required to run the integration tests 58 | pip install -r requirements-test.txt 59 | 60 | test/%: 61 | @TEST_IMAGE="$(OWNER)/$(notdir $@)" pytest test 62 | 63 | test/base-notebook: ## test supported options in the base notebook 64 | @TEST_IMAGE="$(OWNER)/$(notdir $@)" pytest test base-notebook/test -------------------------------------------------------------------------------- /examples/make-deploy/letsencrypt.makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | # BE CAREFUL when using Docker engine <1.10 because running a container with 5 | # `--rm` option while mounting a docker volume may wipe out the volume. 6 | # See issue: https://github.com/docker/docker/issues/17907 7 | 8 | # Use letsencrypt production server by default to get a real cert. 9 | # Use CERT_SERVER=--staging to hit the staging server (not a real cert). 
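# Example invocation (hypothetical domain and email): make letsencrypt FQDN=notebook.example.com EMAIL=admin@example.com NOTEBOOK_IMAGE=my-notebook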
10 | 11 | letsencrypt: NAME?=notebook 12 | letsencrypt: SECRETS_VOLUME?=$(NAME)-secrets 13 | letsencrypt: TMP_CONTAINER?=$(NAME)-tmp 14 | letsencrypt: CERT_SERVER?= 15 | letsencrypt: 16 | @test -n "$(FQDN)" || \ 17 | (echo "ERROR: FQDN not defined or blank"; exit 1) 18 | @test -n "$(EMAIL)" || \ 19 | (echo "ERROR: EMAIL not defined or blank"; exit 1) 20 | @docker volume create --name $(SECRETS_VOLUME) > /dev/null 21 | @docker run -it -p 80:80 \ 22 | --name=$(TMP_CONTAINER) \ 23 | -v $(SECRETS_VOLUME):/etc/letsencrypt \ 24 | quay.io/letsencrypt/letsencrypt:latest \ 25 | certonly \ 26 | $(CERT_SERVER) \ 27 | --keep-until-expiring \ 28 | --standalone \ 29 | --standalone-supported-challenges http-01 \ 30 | --agree-tos \ 31 | --domain '$(FQDN)' \ 32 | --email '$(EMAIL)'; \ 33 | docker rm -f $(TMP_CONTAINER) > /dev/null 34 | # The letsencrypt image has an entrypoint, so we use the notebook image 35 | # instead so we can run arbitrary commands. 36 | # Here we set the permissions so nobody can read the cert and key. 37 | # We also symlink the certs into the root of the /etc/letsencrypt 38 | # directory so that the FQDN doesn't have to be known later. 39 | @docker run -it \ 40 | --name=$(TMP_CONTAINER) \ 41 | -v $(SECRETS_VOLUME):/etc/letsencrypt \ 42 | $(NOTEBOOK_IMAGE) \ 43 | bash -c "ln -s /etc/letsencrypt/live/$(FQDN)/* /etc/letsencrypt/ && \ 44 | find /etc/letsencrypt -type d -exec chmod 755 {} +"; \ 45 | docker rm -f $(TMP_CONTAINER) > /dev/null 46 | 47 | letsencrypt-notebook: PORT?=443 48 | letsencrypt-notebook: NAME?=notebook 49 | letsencrypt-notebook: WORK_VOLUME?=$(NAME)-data 50 | letsencrypt-notebook: SECRETS_VOLUME?=$(NAME)-secrets 51 | letsencrypt-notebook: DOCKER_ARGS:=-e USE_HTTPS=yes \ 52 | -e PASSWORD=$(PASSWORD) \ 53 | -v $(SECRETS_VOLUME):/etc/letsencrypt 54 | letsencrypt-notebook: ARGS:=\ 55 | --NotebookApp.certfile=/etc/letsencrypt/fullchain.pem \ 56 | --NotebookApp.keyfile=/etc/letsencrypt/privkey.pem 57 | letsencrypt-notebook: check 58 | @test -n "$(PASSWORD)" || \ 59 | (echo "ERROR: PASSWORD not defined or blank"; exit 1) 60 | $(RUN_NOTEBOOK) 61 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | # Licensing terms 2 | 3 | This project is licensed under the terms of the Modified BSD License 4 | (also known as New or Revised or 3-Clause BSD), as follows: 5 | 6 | - Copyright (c) 2001-2015, IPython Development Team 7 | - Copyright (c) 2015-, Jupyter Development Team 8 | 9 | All rights reserved. 10 | 11 | Redistribution and use in source and binary forms, with or without 12 | modification, are permitted provided that the following conditions are met: 13 | 14 | Redistributions of source code must retain the above copyright notice, this 15 | list of conditions and the following disclaimer. 16 | 17 | Redistributions in binary form must reproduce the above copyright notice, this 18 | list of conditions and the following disclaimer in the documentation and/or 19 | other materials provided with the distribution. 20 | 21 | Neither the name of the Jupyter Development Team nor the names of its 22 | contributors may be used to endorse or promote products derived from this 23 | software without specific prior written permission. 
24 | 25 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 26 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 27 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 28 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE 29 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 30 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 31 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 32 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 33 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 34 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 35 | 36 | ## About the Jupyter Development Team 37 | 38 | The Jupyter Development Team is the set of all contributors to the Jupyter project. 39 | This includes all of the Jupyter subprojects. 40 | 41 | The core team that coordinates development on GitHub can be found here: 42 | https://github.com/jupyter/. 43 | 44 | ## Our Copyright Policy 45 | 46 | Jupyter uses a shared copyright model. Each contributor maintains copyright 47 | over their contributions to Jupyter. But, it is important to note that these 48 | contributions are typically only changes to the repositories. Thus, the Jupyter 49 | source code, in its entirety is not the copyright of any single person or 50 | institution. Instead, it is the collective copyright of the entire Jupyter 51 | Development Team. If individual contributors want to maintain a record of what 52 | changes/contributions they have specific copyright on, they should indicate 53 | their copyright in the commit message of the change, when they commit the 54 | change to one of the Jupyter repositories. 55 | 56 | With this in mind, the following banner should be used in any source code file 57 | to indicate the copyright and license terms: 58 | 59 | # Copyright (c) Jupyter Development Team. 60 | # Distributed under the terms of the Modified BSD License. 61 | -------------------------------------------------------------------------------- /datascience-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | FROM jupyter/scipy-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | USER root 8 | 9 | # R pre-requisites 10 | RUN apt-get update && \ 11 | apt-get install -y --no-install-recommends \ 12 | fonts-dejavu \ 13 | tzdata \ 14 | gfortran \ 15 | gcc && apt-get clean && \ 16 | rm -rf /var/lib/apt/lists/* 17 | 18 | # Julia dependencies 19 | # install Julia packages in /opt/julia instead of $HOME 20 | ENV JULIA_PKGDIR=/opt/julia 21 | ENV JULIA_VERSION=0.6.2 22 | 23 | RUN mkdir /opt/julia-${JULIA_VERSION} && \ 24 | cd /tmp && \ 25 | wget -q https://julialang-s3.julialang.org/bin/linux/x64/`echo ${JULIA_VERSION} | cut -d. 
-f 1,2`/julia-${JULIA_VERSION}-linux-x86_64.tar.gz && \ 26 | echo "dc6ec0b13551ce78083a5849268b20684421d46a7ec46b17ec1fab88a5078580 *julia-${JULIA_VERSION}-linux-x86_64.tar.gz" | sha256sum -c - && \ 27 | tar xzf julia-${JULIA_VERSION}-linux-x86_64.tar.gz -C /opt/julia-${JULIA_VERSION} --strip-components=1 && \ 28 | rm /tmp/julia-${JULIA_VERSION}-linux-x86_64.tar.gz 29 | RUN ln -fs /opt/julia-*/bin/julia /usr/local/bin/julia 30 | 31 | # Show Julia where conda libraries are \ 32 | RUN mkdir /etc/julia && \ 33 | echo "push!(Libdl.DL_LOAD_PATH, \"$CONDA_DIR/lib\")" >> /etc/julia/juliarc.jl && \ 34 | # Create JULIA_PKGDIR \ 35 | mkdir $JULIA_PKGDIR && \ 36 | chown $NB_USER $JULIA_PKGDIR && \ 37 | fix-permissions $JULIA_PKGDIR 38 | 39 | USER $NB_USER 40 | 41 | # R packages including IRKernel which gets installed globally. 42 | RUN conda config --system --append channels r && \ 43 | conda install --quiet --yes \ 44 | 'rpy2=2.8*' \ 45 | 'r-base=3.3.2' \ 46 | 'r-irkernel=0.7*' \ 47 | 'r-plyr=1.8*' \ 48 | 'r-devtools=1.12*' \ 49 | 'r-tidyverse=1.0*' \ 50 | 'r-shiny=0.14*' \ 51 | 'r-rmarkdown=1.2*' \ 52 | 'r-forecast=7.3*' \ 53 | 'r-rsqlite=1.1*' \ 54 | 'r-reshape2=1.4*' \ 55 | 'r-nycflights13=0.2*' \ 56 | 'r-caret=6.0*' \ 57 | 'r-rcurl=1.95*' \ 58 | 'r-crayon=1.3*' \ 59 | 'r-randomforest=4.6*' && \ 60 | conda clean -tipsy && \ 61 | fix-permissions $CONDA_DIR 62 | 63 | # Add Julia packages 64 | # Install IJulia as jovyan and then move the kernelspec out 65 | # to the system share location. Avoids problems with runtime UID change not 66 | # taking effect properly on the .local folder in the jovyan home dir. 67 | RUN julia -e 'Pkg.init()' && \ 68 | julia -e 'Pkg.update()' && \ 69 | julia -e 'Pkg.add("HDF5")' && \ 70 | julia -e 'Pkg.add("Gadfly")' && \ 71 | julia -e 'Pkg.add("RDatasets")' && \ 72 | julia -e 'Pkg.add("IJulia")' && \ 73 | # Precompile Julia packages \ 74 | julia -e 'using HDF5' && \ 75 | julia -e 'using Gadfly' && \ 76 | julia -e 'using RDatasets' && \ 77 | julia -e 'using IJulia' && \ 78 | # move kernelspec out of home \ 79 | mv $HOME/.local/share/jupyter/kernels/julia* $CONDA_DIR/share/jupyter/kernels/ && \ 80 | chmod -R go+rx $CONDA_DIR/share/jupyter && \ 81 | rm -rf $HOME/.local && \ 82 | fix-permissions $JULIA_PKGDIR $CONDA_DIR/share/jupyter 83 | -------------------------------------------------------------------------------- /base-notebook/start.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | set -e 6 | 7 | # Exec the specified command or fall back on bash 8 | if [ $# -eq 0 ]; then 9 | cmd=bash 10 | else 11 | cmd=$* 12 | fi 13 | 14 | # Handle special flags if we're root 15 | if [ $(id -u) == 0 ] ; then 16 | 17 | # Handle username change. 
Since this is cheap, do this unconditionally 18 | echo "Set username to: $NB_USER" 19 | usermod -d /home/$NB_USER -l $NB_USER jovyan 20 | 21 | # Handle case where provisioned storage does not have the correct permissions by default 22 | # Ex: default NFS/EFS (no auto-uid/gid) 23 | if [[ "$CHOWN_HOME" == "1" || "$CHOWN_HOME" == 'yes' ]]; then 24 | echo "Changing ownership of /home/$NB_USER to $NB_UID:$NB_GID" 25 | chown $NB_UID:$NB_GID /home/$NB_USER 26 | fi 27 | 28 | # handle home and working directory if the username changed 29 | if [[ "$NB_USER" != "jovyan" ]]; then 30 | # changing username, make sure homedir exists 31 | # (it could be mounted, and we shouldn't create it if it already exists) 32 | if [[ ! -e "/home/$NB_USER" ]]; then 33 | echo "Relocating home dir to /home/$NB_USER" 34 | mv /home/jovyan "/home/$NB_USER" 35 | fi 36 | # if workdir is in /home/jovyan, cd to /home/$NB_USER 37 | if [[ "$PWD/" == "/home/jovyan/"* ]]; then 38 | newcwd="/home/$NB_USER/${PWD:13}" 39 | echo "Setting CWD to $newcwd" 40 | cd "$newcwd" 41 | fi 42 | fi 43 | 44 | # Change UID of NB_USER to NB_UID if it does not match 45 | if [ "$NB_UID" != $(id -u $NB_USER) ] ; then 46 | echo "Set $NB_USER UID to: $NB_UID" 47 | usermod -u $NB_UID $NB_USER 48 | fi 49 | 50 | # Change GID of NB_USER to NB_GID if it does not match 51 | if [ "$NB_GID" != $(id -g $NB_USER) ] ; then 52 | echo "Set $NB_USER GID to: $NB_GID" 53 | groupmod -g $NB_GID -o $(id -g -n $NB_USER) 54 | fi 55 | 56 | # Enable sudo if requested 57 | if [[ "$GRANT_SUDO" == "1" || "$GRANT_SUDO" == 'yes' ]]; then 58 | echo "Granting $NB_USER sudo access and appending $CONDA_DIR/bin to sudo PATH" 59 | echo "$NB_USER ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/notebook 60 | fi 61 | 62 | # Add $CONDA_DIR/bin to sudo secure_path 63 | sed -ri "s#Defaults\s+secure_path=\"([^\"]+)\"#Defaults secure_path=\"\1:$CONDA_DIR/bin\"#" /etc/sudoers 64 | 65 | # Exec the command as NB_USER with the PATH and the rest of 66 | # the environment preserved 67 | echo "Executing the command: $cmd" 68 | exec sudo -E -H -u $NB_USER PATH=$PATH PYTHONPATH=$PYTHONPATH $cmd 69 | else 70 | if [[ ! -z "$NB_UID" && "$NB_UID" != "$(id -u)" ]]; then 71 | echo 'Container must be run as root to set $NB_UID' 72 | fi 73 | if [[ ! -z "$NB_GID" && "$NB_GID" != "$(id -g)" ]]; then 74 | echo 'Container must be run as root to set $NB_GID' 75 | fi 76 | if [[ "$GRANT_SUDO" == "1" || "$GRANT_SUDO" == 'yes' ]]; then 77 | echo 'Container must be run as root to grant sudo permissions' 78 | fi 79 | 80 | # Execute the command 81 | echo "Executing the command: $cmd" 82 | exec $cmd 83 | fi 84 | -------------------------------------------------------------------------------- /conftest.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
3 | import os 4 | 5 | import docker 6 | import pytest 7 | import requests 8 | 9 | from requests.packages.urllib3.util.retry import Retry 10 | from requests.adapters import HTTPAdapter 11 | 12 | 13 | @pytest.fixture(scope='session') 14 | def http_client(): 15 | """Requests session with retries and backoff.""" 16 | s = requests.Session() 17 | retries = Retry(total=5, backoff_factor=1) 18 | s.mount('http://', HTTPAdapter(max_retries=retries)) 19 | s.mount('https://', HTTPAdapter(max_retries=retries)) 20 | return s 21 | 22 | 23 | @pytest.fixture(scope='session') 24 | def docker_client(): 25 | """Docker client configured based on the host environment""" 26 | return docker.from_env() 27 | 28 | 29 | @pytest.fixture(scope='session') 30 | def image_name(): 31 | """Image name to test""" 32 | return os.getenv('TEST_IMAGE', 'jupyter/base-notebook') 33 | 34 | 35 | class TrackedContainer(object): 36 | """Wrapper that collects docker container configuration and delays 37 | container creation/execution. 38 | 39 | Parameters 40 | ---------- 41 | docker_client: docker.DockerClient 42 | Docker client instance 43 | image_name: str 44 | Name of the docker image to launch 45 | **kwargs: dict, optional 46 | Default keyword arguments to pass to docker.DockerClient.containers.run 47 | """ 48 | def __init__(self, docker_client, image_name, **kwargs): 49 | self.container = None 50 | self.docker_client = docker_client 51 | self.image_name = image_name 52 | self.kwargs = kwargs 53 | 54 | def run(self, **kwargs): 55 | """Runs a docker container using the preconfigured image name 56 | and a mix of the preconfigured container options and those passed 57 | to this method. 58 | 59 | Keeps track of the docker.Container instance spawned to kill it 60 | later. 61 | 62 | Parameters 63 | ---------- 64 | **kwargs: dict, optional 65 | Keyword arguments to pass to docker.DockerClient.containers.run 66 | extending and/or overriding key/value pairs passed to the constructor 67 | 68 | Returns 69 | ------- 70 | docker.Container 71 | """ 72 | all_kwargs = {} 73 | all_kwargs.update(self.kwargs) 74 | all_kwargs.update(kwargs) 75 | self.container = self.docker_client.containers.run(self.image_name, **all_kwargs) 76 | return self.container 77 | 78 | def remove(self): 79 | """Kills and removes the tracked docker container.""" 80 | if self.container: 81 | self.container.remove(force=True) 82 | 83 | 84 | @pytest.fixture(scope='function') 85 | def container(docker_client, image_name): 86 | """Notebook container with initial configuration appropriate for testing 87 | (e.g., HTTP port exposed to the host for HTTP calls). 88 | 89 | Yields the container instance and kills it when the caller is done with it. 90 | """ 91 | container = TrackedContainer( 92 | docker_client, 93 | image_name, 94 | detach=True, 95 | ports={ 96 | '8888/tcp': 8888 97 | } 98 | ) 99 | yield container 100 | container.remove() 101 | -------------------------------------------------------------------------------- /base-notebook/Dockerfile.ppc64le: -------------------------------------------------------------------------------- 1 | # Copyright (c) IBM Corporation 2016 2 | # Distributed under the terms of the Modified BSD License. 
3 | 4 | # Ubuntu image 5 | FROM ppc64le/ubuntu:trusty 6 | 7 | LABEL maintainer="Ilsiyar Gaynutdinov " 8 | 9 | USER root 10 | 11 | # Install all OS dependencies for notebook server that starts but lacks all 12 | # features (e.g., download as all possible file formats) 13 | ENV DEBIAN_FRONTEND noninteractive 14 | RUN apt-get update && apt-get -yq dist-upgrade \ 15 | && apt-get install -yq --no-install-recommends \ 16 | build-essential \ 17 | bzip2 \ 18 | ca-certificates \ 19 | cmake \ 20 | git \ 21 | locales \ 22 | sudo \ 23 | wget \ 24 | && apt-get clean && \ 25 | rm -rf /var/lib/apt/lists/* 26 | 27 | RUN echo "LANGUAGE=en_US.UTF-8" >> /etc/default/locale 28 | RUN echo "LC_ALL=en_US.UTF-8" >> /etc/default/locale 29 | RUN echo "LC_TYPE=en_US.UTF-8" >> /etc/default/locale 30 | RUN locale-gen en_US en_US.UTF-8 31 | 32 | #build and install Tini for ppc64le 33 | RUN wget https://github.com/krallin/tini/archive/v0.10.0.tar.gz && \ 34 | tar zxvf v0.10.0.tar.gz && \ 35 | rm -rf v0.10.0.tar.gz 36 | WORKDIR tini-0.10.0/ 37 | RUN cmake . && make install 38 | RUN mv ./tini /usr/local/bin/tini && \ 39 | chmod +x /usr/local/bin/tini 40 | WORKDIR .. 41 | 42 | # Configure environment 43 | ENV CONDA_DIR /opt/conda 44 | ENV PATH $CONDA_DIR/bin:$PATH 45 | ENV SHELL /bin/bash 46 | ENV NB_USER jovyan 47 | ENV NB_UID 1000 48 | ENV HOME /home/$NB_USER 49 | ENV LC_ALL en_US.UTF-8 50 | ENV LANG en_US.UTF-8 51 | ENV LANGUAGE en_US.UTF-8 52 | 53 | # Create jovyan user with UID=1000 and in the 'users' group 54 | RUN useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \ 55 | mkdir -p $CONDA_DIR && \ 56 | chown $NB_USER $CONDA_DIR 57 | 58 | USER $NB_USER 59 | 60 | # Setup jovyan home directory 61 | RUN mkdir /home/$NB_USER/work && \ 62 | mkdir /home/$NB_USER/.jupyter && \ 63 | echo "cacert=/etc/ssl/certs/ca-certificates.crt" > /home/$NB_USER/.curlrc 64 | 65 | # Install conda as jovyan 66 | RUN cd /tmp && \ 67 | mkdir -p $CONDA_DIR && \ 68 | wget https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-ppc64le.sh && \ 69 | /bin/bash Miniconda3-4.2.12-Linux-ppc64le.sh -f -b -p $CONDA_DIR && \ 70 | rm -rf Miniconda3-4.2.12-Linux-ppc64le.sh && \ 71 | $CONDA_DIR/bin/conda install --quiet --yes conda=4.2.12 && \ 72 | $CONDA_DIR/bin/conda config --system --add channels conda-forge && \ 73 | $CONDA_DIR/bin/conda config --system --set auto_update_conda false && \ 74 | conda clean -tipsy 75 | 76 | # Install Jupyter notebook and Hub 77 | RUN yes | pip install --upgrade pip 78 | RUN yes | pip install --quiet --no-cache-dir \ 79 | 'notebook==5.2.*' \ 80 | 'jupyterhub==0.7.*' \ 81 | 'jupyterlab==0.18.*' 82 | 83 | USER root 84 | 85 | EXPOSE 8888 86 | WORKDIR /home/$NB_USER/work 87 | RUN echo "ALL ALL = (ALL) NOPASSWD: ALL" >> /etc/sudoers 88 | 89 | # Configure container startup 90 | ENTRYPOINT ["tini", "--"] 91 | CMD ["start-notebook.sh"] 92 | 93 | # Add local files as late as possible to avoid cache busting 94 | COPY start.sh /usr/local/bin/ 95 | COPY start-notebook.sh /usr/local/bin/ 96 | COPY start-singleuser.sh /usr/local/bin/ 97 | COPY jupyter_notebook_config.py /home/$NB_USER/.jupyter/ 98 | RUN chown -R $NB_USER:users /home/$NB_USER/.jupyter 99 | 100 | # Switch back to jovyan to avoid accidental container runs as root 101 | USER $NB_USER 102 | -------------------------------------------------------------------------------- /base-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 
2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | # Ubuntu 16.04 (xenial) from 2017-07-23 5 | # https://github.com/docker-library/official-images/commit/0ea9b38b835ffb656c497783321632ec7f87b60c 6 | FROM ubuntu@sha256:84c334414e2bfdcae99509a6add166bbb4fa4041dc3fa6af08046a66fed3005f 7 | 8 | LABEL maintainer="Jupyter Project " 9 | 10 | USER root 11 | 12 | # Install all OS dependencies for notebook server that starts but lacks all 13 | # features (e.g., download as all possible file formats) 14 | ENV DEBIAN_FRONTEND noninteractive 15 | RUN apt-get update && apt-get -yq dist-upgrade \ 16 | && apt-get install -yq --no-install-recommends \ 17 | wget \ 18 | bzip2 \ 19 | ca-certificates \ 20 | sudo \ 21 | locales \ 22 | fonts-liberation \ 23 | && apt-get clean \ 24 | && rm -rf /var/lib/apt/lists/* 25 | 26 | RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \ 27 | locale-gen 28 | 29 | # Install Tini 30 | RUN wget --quiet https://github.com/krallin/tini/releases/download/v0.10.0/tini && \ 31 | echo "1361527f39190a7338a0b434bd8c88ff7233ce7b9a4876f3315c22fce7eca1b0 *tini" | sha256sum -c - && \ 32 | mv tini /usr/local/bin/tini && \ 33 | chmod +x /usr/local/bin/tini 34 | 35 | # Configure environment 36 | ENV CONDA_DIR=/opt/conda \ 37 | SHELL=/bin/bash \ 38 | NB_USER=jovyan \ 39 | NB_UID=1000 \ 40 | NB_GID=100 \ 41 | LC_ALL=en_US.UTF-8 \ 42 | LANG=en_US.UTF-8 \ 43 | LANGUAGE=en_US.UTF-8 44 | ENV PATH=$CONDA_DIR/bin:$PATH \ 45 | HOME=/home/$NB_USER 46 | 47 | ADD fix-permissions /usr/local/bin/fix-permissions 48 | # Create jovyan user with UID=1000 and in the 'users' group 49 | # and make sure these dirs are writable by the `users` group. 50 | RUN useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \ 51 | mkdir -p $CONDA_DIR && \ 52 | chown $NB_USER:$NB_GID $CONDA_DIR && \ 53 | fix-permissions $HOME && \ 54 | fix-permissions $CONDA_DIR 55 | 56 | USER $NB_USER 57 | 58 | # Setup work directory for backward-compatibility 59 | RUN mkdir /home/$NB_USER/work && \ 60 | fix-permissions /home/$NB_USER 61 | 62 | # Install conda as jovyan and check the md5 sum provided on the download site 63 | ENV MINICONDA_VERSION 4.3.30 64 | RUN cd /tmp && \ 65 | wget --quiet https://repo.continuum.io/miniconda/Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh && \ 66 | echo "0b80a152332a4ce5250f3c09589c7a81 *Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh" | md5sum -c - && \ 67 | /bin/bash Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh -f -b -p $CONDA_DIR && \ 68 | rm Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh && \ 69 | $CONDA_DIR/bin/conda config --system --prepend channels conda-forge && \ 70 | $CONDA_DIR/bin/conda config --system --set auto_update_conda false && \ 71 | $CONDA_DIR/bin/conda config --system --set show_channel_urls true && \ 72 | $CONDA_DIR/bin/conda update --all --quiet --yes && \ 73 | conda clean -tipsy && \ 74 | fix-permissions $CONDA_DIR 75 | 76 | # Install Jupyter Notebook and Hub 77 | RUN conda install --quiet --yes \ 78 | 'notebook=5.2.*' \ 79 | 'jupyterhub=0.8.*' \ 80 | 'jupyterlab=0.31.*' \ 81 | && conda clean -tipsy && \ 82 | jupyter labextension install @jupyterlab/hub-extension@^0.8.0 && \ 83 | npm cache clean && \ 84 | rm -rf $CONDA_DIR/share/jupyter/lab/staging && \ 85 | fix-permissions $CONDA_DIR 86 | 87 | USER root 88 | 89 | EXPOSE 8888 90 | WORKDIR $HOME 91 | 92 | # Configure container startup 93 | ENTRYPOINT ["tini", "--"] 94 | CMD ["start-notebook.sh"] 95 | 96 | # Add local files as late as possible to avoid cache busting 97 | COPY start.sh /usr/local/bin/ 98 | COPY 
start-notebook.sh /usr/local/bin/ 99 | COPY start-singleuser.sh /usr/local/bin/ 100 | COPY jupyter_notebook_config.py /etc/jupyter/ 101 | RUN fix-permissions /etc/jupyter/ 102 | 103 | # Switch back to jovyan to avoid accidental container runs as root 104 | USER $NB_USER 105 | -------------------------------------------------------------------------------- /base-notebook/test/test_container_options.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | import time 4 | 5 | import pytest 6 | 7 | 8 | def test_cli_args(container, http_client): 9 | """Container should respect notebook server command line args 10 | (e.g., disabling token security)""" 11 | container.run( 12 | command=['start-notebook.sh', '--NotebookApp.token=""'] 13 | ) 14 | resp = http_client.get('http://localhost:8888') 15 | resp.raise_for_status() 16 | assert 'login_submit' not in resp.text 17 | 18 | 19 | @pytest.mark.filterwarnings('ignore:Unverified HTTPS request') 20 | def test_unsigned_ssl(container, http_client): 21 | """Container should generate a self-signed SSL certificate 22 | and notebook server should use it to enable HTTPS. 23 | """ 24 | container.run( 25 | environment=['GEN_CERT=yes'] 26 | ) 27 | # NOTE: The requests.Session backing the http_client fixture does not retry 28 | # properly while the server is booting up. An SSL handshake error seems to 29 | # abort the retry logic. Forcing a long sleep for the moment until I have 30 | # time to dig more. 31 | time.sleep(5) 32 | resp = http_client.get('https://localhost:8888', verify=False) 33 | resp.raise_for_status() 34 | assert 'login_submit' in resp.text 35 | 36 | 37 | def test_uid_change(container): 38 | """Container should change the UID of the default user.""" 39 | c = container.run( 40 | tty=True, 41 | user='root', 42 | environment=['NB_UID=1010'], 43 | command=['start.sh', 'bash', '-c', 'id && touch /opt/conda/test-file'] 44 | ) 45 | # usermod is slow so give it some time 46 | c.wait(timeout=120) 47 | assert 'uid=1010(jovyan)' in c.logs(stdout=True).decode('utf-8') 48 | 49 | 50 | def test_gid_change(container): 51 | """Container should change the GID of the default user.""" 52 | c = container.run( 53 | tty=True, 54 | user='root', 55 | environment=['NB_GID=110'], 56 | command=['start.sh', 'id'] 57 | ) 58 | c.wait(timeout=10) 59 | assert 'gid=110(users)' in c.logs(stdout=True).decode('utf-8') 60 | 61 | 62 | def test_sudo(container): 63 | """Container should grant passwordless sudo to the default user.""" 64 | c = container.run( 65 | tty=True, 66 | user='root', 67 | environment=['GRANT_SUDO=yes'], 68 | command=['start.sh', 'sudo', 'id'] 69 | ) 70 | rv = c.wait(timeout=10) 71 | assert rv == 0 or rv["StatusCode"] == 0 72 | assert 'uid=0(root)' in c.logs(stdout=True).decode('utf-8') 73 | 74 | 75 | def test_sudo_path(container): 76 | """Container should include /opt/conda/bin in the sudo secure_path.""" 77 | c = container.run( 78 | tty=True, 79 | user='root', 80 | environment=['GRANT_SUDO=yes'], 81 | command=['start.sh', 'sudo', 'which', 'jupyter'] 82 | ) 83 | rv = c.wait(timeout=10) 84 | assert rv == 0 or rv["StatusCode"] == 0 85 | assert c.logs(stdout=True).decode('utf-8').rstrip().endswith('/opt/conda/bin/jupyter') 86 | 87 | 88 | def test_sudo_path_without_grant(container): 89 | """Container should include /opt/conda/bin in the sudo secure_path.""" 90 | c = container.run( 91 | tty=True, 92 | user='root', 93 | 
command=['start.sh', 'which', 'jupyter'] 94 | ) 95 | rv = c.wait(timeout=10) 96 | assert rv == 0 or rv["StatusCode"] == 0 97 | assert c.logs(stdout=True).decode('utf-8').rstrip().endswith('/opt/conda/bin/jupyter') 98 | 99 | 100 | def test_group_add(container, tmpdir): 101 | """Container should run with the specified uid, gid, and secondary 102 | group. 103 | """ 104 | c = container.run( 105 | user='1010:1010', 106 | group_add=['users'], 107 | command=['start.sh', 'id'] 108 | ) 109 | rv = c.wait(timeout=5) 110 | assert rv == 0 or rv["StatusCode"] == 0 111 | assert 'uid=1010 gid=1010 groups=1010,100(users)' in c.logs(stdout=True).decode('utf-8') 112 | -------------------------------------------------------------------------------- /base-notebook/Dockerfile.ppc64le.patch: -------------------------------------------------------------------------------- 1 | --- Dockerfile 2017-05-11 12:59:30.006182756 -0400 2 | +++ Dockerfile.ppc64le 2017-05-11 12:59:57.326632865 -0400 3 | @@ -1,37 +1,43 @@ 4 | -# Copyright (c) Jupyter Development Team. 5 | +# Copyright (c) IBM Corporation 2016 6 | # Distributed under the terms of the Modified BSD License. 7 | 8 | -# Debian Jessie debootstrap from 2017-02-27 9 | -# https://github.com/docker-library/official-images/commit/aa5973d0c918c70c035ec0746b8acaec3a4d7777 10 | -FROM debian@sha256:52af198afd8c264f1035206ca66a5c48e602afb32dc912ebf9e9478134601ec4 11 | +# Ubuntu image 12 | +FROM ppc64le/ubuntu:trusty 13 | 14 | -MAINTAINER Jupyter Project 15 | +MAINTAINER Ilsiyar Gaynutdinov 16 | 17 | USER root 18 | 19 | # Install all OS dependencies for notebook server that starts but lacks all 20 | # features (e.g., download as all possible file formats) 21 | ENV DEBIAN_FRONTEND noninteractive 22 | -RUN REPO=http://cdn-fastly.deb.debian.org \ 23 | - && echo "deb $REPO/debian jessie main\ndeb $REPO/debian-security jessie/updates main" > /etc/apt/sources.list \ 24 | - && apt-get update && apt-get -yq dist-upgrade \ 25 | +RUN apt-get update && apt-get -yq dist-upgrade \ 26 | && apt-get install -yq --no-install-recommends \ 27 | - wget \ 28 | + build-essential \ 29 | bzip2 \ 30 | ca-certificates \ 31 | - sudo \ 32 | + cmake \ 33 | + git \ 34 | locales \ 35 | - && apt-get clean \ 36 | - && rm -rf /var/lib/apt/lists/* 37 | - 38 | -RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \ 39 | - locale-gen 40 | + sudo \ 41 | + wget \ 42 | + && apt-get clean && \ 43 | + rm -rf /var/lib/apt/lists/* 44 | 45 | -# Install Tini 46 | -RUN wget --quiet https://github.com/krallin/tini/releases/download/v0.10.0/tini && \ 47 | - echo "1361527f39190a7338a0b434bd8c88ff7233ce7b9a4876f3315c22fce7eca1b0 *tini" | sha256sum -c - && \ 48 | - mv tini /usr/local/bin/tini && \ 49 | +RUN echo "LANGUAGE=en_US.UTF-8" >> /etc/default/locale 50 | +RUN echo "LC_ALL=en_US.UTF-8" >> /etc/default/locale 51 | +RUN echo "LC_TYPE=en_US.UTF-8" >> /etc/default/locale 52 | +RUN locale-gen en_US en_US.UTF-8 53 | + 54 | +#build and install Tini for ppc64le 55 | +RUN wget https://github.com/krallin/tini/archive/v0.10.0.tar.gz && \ 56 | + tar zxvf v0.10.0.tar.gz && \ 57 | + rm -rf v0.10.0.tar.gz 58 | +WORKDIR tini-0.10.0/ 59 | +RUN cmake . && make install 60 | +RUN mv ./tini /usr/local/bin/tini && \ 61 | chmod +x /usr/local/bin/tini 62 | +WORKDIR .. 
63 | 64 | # Configure environment 65 | ENV CONDA_DIR /opt/conda 66 | @@ -59,25 +65,26 @@ 67 | # Install conda as jovyan 68 | RUN cd /tmp && \ 69 | mkdir -p $CONDA_DIR && \ 70 | - wget --quiet https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh && \ 71 | - echo "c59b3dd3cad550ac7596e0d599b91e75d88826db132e4146030ef471bb434e9a *Miniconda3-4.2.12-Linux-x86_64.sh" | sha256sum -c - && \ 72 | - /bin/bash Miniconda3-4.2.12-Linux-x86_64.sh -f -b -p $CONDA_DIR && \ 73 | - rm Miniconda3-4.2.12-Linux-x86_64.sh && \ 74 | + wget https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-ppc64le.sh && \ 75 | + /bin/bash Miniconda3-4.2.12-Linux-ppc64le.sh -f -b -p $CONDA_DIR && \ 76 | + rm -rf Miniconda3-4.2.12-Linux-ppc64le.sh && \ 77 | + $CONDA_DIR/bin/conda install --quiet --yes conda=4.2.12 && \ 78 | $CONDA_DIR/bin/conda config --system --add channels conda-forge && \ 79 | $CONDA_DIR/bin/conda config --system --set auto_update_conda false && \ 80 | conda clean -tipsy 81 | 82 | -# Install Jupyter Notebook and Hub 83 | -RUN conda install --quiet --yes \ 84 | - 'notebook=5.2.*' \ 85 | - 'jupyterhub=0.7.*' \ 86 | - 'jupyterlab=0.18.*' \ 87 | - && conda clean -tipsy 88 | +# Install Jupyter notebook and Hub 89 | +RUN yes | pip install --upgrade pip 90 | +RUN yes | pip install --quiet --no-cache-dir \ 91 | + 'notebook==5.2.*' \ 92 | + 'jupyterhub==0.7.*' \ 93 | + 'jupyterlab==0.18.*' 94 | 95 | USER root 96 | 97 | EXPOSE 8888 98 | WORKDIR /home/$NB_USER/work 99 | +RUN echo "ALL ALL = (ALL) NOPASSWD: ALL" >> /etc/sudoers 100 | 101 | # Configure container startup 102 | ENTRYPOINT ["tini", "--"] 103 | -------------------------------------------------------------------------------- /examples/make-deploy/README.md: -------------------------------------------------------------------------------- 1 | This folder contains a Makefile and a set of supporting files demonstrating how to run a docker-stack notebook container on a docker-machine controlled host. 2 | 3 | ## Prerequisites 4 | 5 | * make 3.81+ 6 | * Ubuntu users: Be aware of [make 3.81 defect 483086](https://bugs.launchpad.net/ubuntu/+source/make-dfsg/+bug/483086) which exists in 14.04 LTS but is fixed in 15.04+ 7 | * docker-machine 0.5.0+ 8 | * docker 1.9.0+ 9 | 10 | ## Quickstart 11 | 12 | To show what's possible, here's how to run the `jupyter/minimal-notebook` on a brand new local virtualbox. 13 | 14 | ``` 15 | # create a new VM 16 | make virtualbox-vm NAME=dev 17 | # make the new VM the active docker machine 18 | eval $(docker-machine env dev) 19 | # pull a docker stack and build a local image from it 20 | make image 21 | # start a notebook server in a container 22 | make notebook 23 | ``` 24 | 25 | The last command will log the IP address and port to visit in your browser. 26 | 27 | ## FAQ 28 | 29 | ### Can I run multiple notebook containers on the same VM? 30 | 31 | Yes. Specify a unique name and port on the `make notebook` command. 32 | 33 | ``` 34 | make notebook NAME=my-notebook PORT=9000 35 | make notebook NAME=your-notebook PORT=9001 36 | ``` 37 | 38 | ### Can multiple notebook containers share their notebook directory? 39 | 40 | Yes. 41 | 42 | ``` 43 | make notebook NAME=my-notebook PORT=9000 WORK_VOLUME=our-work 44 | make notebook NAME=your-notebook PORT=9001 WORK_VOLUME=our-work 45 | ``` 46 | 47 | ### How do I run over HTTPS? 48 | 49 | Instead of `make notebook`, run `make self-signed-notebook PASSWORD=your_desired_password`. This target gives you a notebook wtih a self-signed certificate. 
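For example (the password value here is only an illustration; substitute your own):

```
make self-signed-notebook PASSWORD=a_secret
```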
50 | 51 | ### That self-signed certificate is a pain. Let's Encrypt? 52 | 53 | Yes. Please. 54 | 55 | ``` 56 | make letsencrypt FQDN=host.mydomain.com EMAIL=myemail@somewhere.com 57 | make letsencrypt-notebook 58 | ``` 59 | 60 | The first command creates a Docker volume named after the notebook container with a `-secrets` suffix. It then runs the `letsencrypt` client with a slew of options (one of which has you automatically agreeing to the Let's Encrypt Terms of Service, see the Makefile). The second command mounts the secrets volume and configures Jupyter to use the full-chain certificate and private key. 61 | 62 | Be aware: Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`. 63 | 64 | ``` 65 | make letsencrypt FQDN=host.mydomain.com EMAIL=myemail@somewhere.com CERT_SERVER=--staging 66 | ``` 67 | 68 | Also, keep in mind that Let's Encrypt certificates are short-lived: 90 days at the moment. You'll need to manually set up a cron job to run the renewal steps. (You can reuse the first command above.) 69 | 70 | ### My pip/conda/apt-get installs disappear every time I restart the container. Can I make them permanent? 71 | 72 | ``` 73 | # add your pip, conda, apt-get, etc. permanent features to the Dockerfile where 74 | # indicated by the comments in the Dockerfile 75 | vi Dockerfile 76 | make image 77 | make notebook 78 | ``` 79 | 80 | ### How do I upgrade my Docker container? 81 | 82 | ``` 83 | make image DOCKER_ARGS=--pull 84 | make notebook 85 | ``` 86 | 87 | The first line pulls the latest version of the Docker image used in the local Dockerfile. Then it rebuilds the local Docker image containing any customizations you may have added to it. The second line kills your currently running notebook container, and starts a fresh one using the new image. 88 | 89 | ### Can I run on a VM provider other than VirtualBox? 90 | 91 | Yes. As an example, there's a `softlayer.makefile` included in this repo. You would use it like so: 92 | 93 | ``` 94 | make softlayer-vm NAME=myhost \ 95 | SOFTLAYER_DOMAIN=your_desired_domain \ 96 | SOFTLAYER_USER=your_user_id \ 97 | SOFTLAYER_API_KEY=your_api_key 98 | eval $(docker-machine env myhost) 99 | # optional, creates a real DNS entry for the VM using the machine name as the hostname 100 | make softlayer-dns SOFTLAYER_DOMAIN=your_desired_domain 101 | make image 102 | make notebook 103 | ``` 104 | 105 | If you'd like to add support for another docker-machine driver, use the `softlayer.makefile` as a template. 106 | 107 | ### Where are my notebooks stored? 108 | 109 | `make notebook` creates a Docker volume named after the notebook container with a `-data` suffix. 110 | 111 | ### Uh ... make? 112 | 113 | Yes, sorry, Windows users. It got the job done for a simple example. We can certainly accept other deployment mechanism examples in the parent folder or in other repos. 114 | 115 | ### Are there any other options? 116 | 117 | Yes indeed. `cat` the Makefiles and look at the target parameters.
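For example, the parameters documented in the answers above can be combined in a single invocation; the name, port, and volume values below are purely illustrative:

```
# rebuild the image from the latest base, then run a named container
# on a custom port with a shared work volume
make image DOCKER_ARGS=--pull
make notebook NAME=team-notebook PORT=9000 WORK_VOLUME=our-work
```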
118 | -------------------------------------------------------------------------------- /internal/inherit-diagram.svg: -------------------------------------------------------------------------------- [blockdiag image inheritance diagram; SVG markup omitted. Nodes: ubuntu@SHA, base-notebook, minimal-notebook, scipy-notebook, r-notebook, tensorflow-notebook, datascience-notebook, pyspark-notebook, all-spark-notebook.] -------------------------------------------------------------------------------- /examples/docker-compose/README.md: -------------------------------------------------------------------------------- 1 | This example demonstrates how to deploy docker-stack notebook containers to any Docker Machine-controlled host using Docker Compose. 2 | 3 | ## Prerequisites 4 | 5 | * [Docker Engine](https://docs.docker.com/engine/) 1.10.0+ 6 | * [Docker Machine](https://docs.docker.com/machine/) 0.6.0+ 7 | * [Docker Compose](https://docs.docker.com/compose/) 1.6.0+ 8 | 9 | See the [installation instructions](https://docs.docker.com/engine/installation/) for your environment. 10 | 11 | ## Quickstart 12 | 13 | Build and run a `jupyter/minimal-notebook` container on a VirtualBox VM on your local desktop. 14 | 15 | ``` 16 | # create a Docker Machine-controlled VirtualBox VM 17 | bin/vbox.sh mymachine 18 | 19 | # activate the docker machine 20 | eval "$(docker-machine env mymachine)" 21 | 22 | # build the notebook image on the machine 23 | notebook/build.sh 24 | 25 | # bring up the notebook container 26 | notebook/up.sh 27 | ``` 28 | 29 | To stop and remove the container: 30 | 31 | ``` 32 | notebook/down.sh 33 | ``` 34 | 35 | 36 | ## FAQ 37 | 38 | ### How do I specify which docker-stack notebook image to deploy? 39 | 40 | You can customize the docker-stack notebook image to deploy by modifying the `notebook/Dockerfile`. For example, you can build and deploy a `jupyter/all-spark-notebook` by modifying the Dockerfile like so: 41 | 42 | ``` 43 | FROM jupyter/all-spark-notebook:55d5ca6be183 44 | ... 45 | ``` 46 | 47 | Once you modify the Dockerfile, don't forget to rebuild the image. 48 | 49 | ``` 50 | # activate the docker machine 51 | eval "$(docker-machine env mymachine)" 52 | 53 | notebook/build.sh 54 | ``` 55 | 56 | ### Can I run multiple notebook containers on the same VM? 57 | 58 | Yes. Set environment variables to specify unique names and ports when running the `up.sh` command. 59 | 60 | ``` 61 | NAME=my-notebook PORT=9000 notebook/up.sh 62 | NAME=your-notebook PORT=9001 notebook/up.sh 63 | ``` 64 | 65 | To stop and remove the containers: 66 | 67 | ``` 68 | NAME=my-notebook notebook/down.sh 69 | NAME=your-notebook notebook/down.sh 70 | ``` 71 | 72 | ### Where are my notebooks stored? 73 | 74 | The `up.sh` script creates a Docker volume named after the notebook container with a `-work` suffix, e.g., `my-notebook-work`. 75 | 76 | 77 | ### Can multiple notebook containers share the same notebook volume? 78 | 79 | Yes. Set the `WORK_VOLUME` environment variable to the same value for each notebook. 80 | 81 | ``` 82 | NAME=my-notebook PORT=9000 WORK_VOLUME=our-work notebook/up.sh 83 | NAME=your-notebook PORT=9001 WORK_VOLUME=our-work notebook/up.sh 84 | ``` 85 | 86 | ### How do I run over HTTPS?
87 | 88 | To run the notebook server with a self-signed certificate, pass the `--secure` option to the `up.sh` script. You must also provide a password, which will be used to secure the notebook server. You can specify the password by setting the `PASSWORD` environment variable, or by passing it to the `up.sh` script. 89 | 90 | ``` 91 | PASSWORD=a_secret notebook/up.sh --secure 92 | 93 | # or 94 | notebook/up.sh --secure --password a_secret 95 | ``` 96 | 97 | ### Can I use Let's Encrypt certificate chains? 98 | 99 | Sure. If you want to secure access to publicly addressable notebook containers, you can generate a free certificate using the [Let's Encrypt](https://letsencrypt.org) service. 100 | 101 | 102 | This example includes the `bin/letsencrypt.sh` script, which runs the `letsencrypt` client to create a full-chain certificate and private key, and stores them in a Docker volume. _Note:_ The script hard codes several `letsencrypt` options, one of which automatically agrees to the Let's Encrypt Terms of Service. 103 | 104 | The following command will create a certificate chain and store it in a Docker volume named `mydomain-secrets`. 105 | 106 | ``` 107 | FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \ 108 | SECRETS_VOLUME=mydomain-secrets \ 109 | bin/letsencrypt.sh 110 | ``` 111 | 112 | Now run `up.sh` with the `--letsencrypt` option. You must also provide the name of the secrets volume and a password. 113 | 114 | ``` 115 | PASSWORD=a_secret SECRETS_VOLUME=mydomain-secrets notebook/up.sh --letsencrypt 116 | 117 | # or 118 | notebook/up.sh --letsencrypt --password a_secret --secrets mydomain-secrets 119 | ``` 120 | 121 | Be aware that Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`. 122 | 123 | ``` 124 | FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \ 125 | CERT_SERVER=--staging \ 126 | bin/letsencrypt.sh 127 | ``` 128 | 129 | Also, be aware that Let's Encrypt certificates are short lived (90 days). If you need them for a longer period of time, you'll need to manually setup a cron job to run the renewal steps. (You can reuse the command above.) 130 | 131 | ### Can I deploy to any Docker Machine host? 132 | 133 | Yes, you should be able to deploy to any Docker Machine-controlled host. To make it easier to get up and running, this example includes scripts to provision Docker Machines to VirtualBox and IBM SoftLayer, but more scripts are welcome! 134 | 135 | To create a Docker machine using a VirtualBox VM on local desktop: 136 | 137 | ``` 138 | bin/vbox.sh mymachine 139 | ``` 140 | 141 | To create a Docker machine using a virtual device on IBM SoftLayer: 142 | 143 | ``` 144 | export SOFTLAYER_USER=my_softlayer_username 145 | export SOFTLAYER_API_KEY=my_softlayer_api_key 146 | export SOFTLAYER_DOMAIN=my.domain 147 | 148 | # Create virtual device 149 | bin/softlayer.sh myhost 150 | 151 | # Add DNS entry (SoftLayer DNS zone must exist for SOFTLAYER_DOMAIN) 152 | bin/sl-dns.sh myhost 153 | ``` 154 | 155 | 156 | ## Troubleshooting 157 | 158 | ### Unable to connect to VirtualBox VM on Mac OS X when using Cisco VPN client. 159 | 160 | The Cisco VPN client blocks access to IP addresses that it does not know about, and may block access to a new VM if it is created while the Cisco VPN client is running. 161 | 162 | 1. 
Stop Cisco VPN client. (It does not allow modifications to route table). 163 | 2. Run `ifconfig` to list `vboxnet` virtual network devices. 164 | 3. Run `sudo route -nv add -net 192.168.99 -interface vboxnetX`, where X is the number of the virtual device assigned to the VirtualBox VM. 165 | 4. Start Cisco VPN client. 166 | -------------------------------------------------------------------------------- /r-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/r-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/r-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/r-notebook.svg)](https://microbadger.com/images/jupyter/r-notebook "jupyter/r-notebook image metadata") 2 | 3 | # Jupyter Notebook R Stack 4 | 5 | ## What it Gives You 6 | 7 | * Jupyter Notebook 5.2.x 8 | * Conda R v3.3.x and channel 9 | * plyr, devtools, shiny, rmarkdown, forecast, rsqlite, reshape2, nycflights13, caret, rcurl, and randomforest pre-installed 10 | * The [tidyverse](https://github.com/tidyverse/tidyverse) R packages are also installed, including ggplot2, dplyr, tidyr, readr, purrr, tibble, stringr, lubridate, and broom 11 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 12 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command 13 | * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 14 | * A [start.sh](../base-notebook/start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 15 | * Options for a self-signed HTTPS certificate and passwordless `sudo` 16 | 17 | ## Basic Use 18 | 19 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 20 | 21 | ``` 22 | docker run -it --rm -p 8888:8888 jupyter/r-notebook 23 | ``` 24 | 25 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 26 | 27 | ## Notebook Options 28 | 29 | The Docker container executes a [`start-notebook.sh` script](../base-notebook/start-notebook.sh) script by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 30 | 31 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. 
For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 32 | 33 | ``` 34 | docker run -d -p 8888:8888 jupyter/r-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 35 | ``` 36 | 37 | For example, to set the base URL of the notebook server, run the following: 38 | 39 | ``` 40 | docker run -d -p 8888:8888 jupyter/r-notebook start-notebook.sh --NotebookApp.base_url=/some/path 41 | ``` 42 | 43 | For example, to disable all authentication mechanisms (not a recommended practice): 44 | 45 | ``` 46 | docker run -d -p 8888:8888 jupyter/r-notebook start-notebook.sh --NotebookApp.token='' 47 | ``` 48 | 49 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 50 | 51 | ## Docker Options 52 | 53 | You may customize the execution of the Docker container and the command it is running with the following optional arguments. 54 | 55 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 56 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 57 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 58 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 59 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 60 | 61 | ## SSL Certificates 62 | 63 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 64 | 65 | ``` 66 | docker run -d -p 8888:8888 \ 67 | -v /some/host/folder:/etc/ssl/notebook \ 68 | jupyter/r-notebook start-notebook.sh \ 69 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key 70 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 71 | ``` 72 | 73 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 74 | 75 | ``` 76 | docker run -d -p 8888:8888 \ 77 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 78 | jupyter/r-notebook start-notebook.sh \ 79 | --NotebookApp.certfile=/etc/ssl/notebook.pem 80 | ``` 81 | 82 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. 
The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 83 | 84 | For additional information about using SSL, see the following: 85 | 86 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 87 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 88 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 89 | 90 | 91 | ## Alternative Commands 92 | 93 | 94 | ### start.sh 95 | 96 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 97 | 98 | ``` 99 | docker run -it --rm jupyter/r-notebook start.sh ipython 100 | ``` 101 | 102 | Or, to run JupyterLab instead of the classic notebook, run the following: 103 | 104 | ``` 105 | docker run -it --rm -p 8888:8888 jupyter/r-notebook start.sh jupyter lab 106 | ``` 107 | 108 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 109 | 110 | ### Others 111 | 112 | You can bypass the provided scripts and specify your an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 113 | -------------------------------------------------------------------------------- /tensorflow-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/tensorflow-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/tensorflow-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/tensorflow-notebook.svg)](https://microbadger.com/images/jupyter/tensorflow-notebook "jupyter/tensorflow-notebook image metadata") 2 | 3 | # Jupyter Notebook Scientific Python Stack + Tensorflow 4 | 5 | ## What it Gives You 6 | 7 | * Everything in [Scipy Notebook](https://github.com/jupyter/docker-stacks/tree/master/scipy-notebook) 8 | * Tensorflow and Keras for Python 3.x (without GPU support) 9 | 10 | ## Basic Use 11 | 12 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 13 | 14 | ``` 15 | docker run -it --rm -p 8888:8888 jupyter/tensorflow-notebook 16 | ``` 17 | 18 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 19 | 20 | ## Tensorflow Single Machine Mode 21 | 22 | As distributed tensorflow is still immature, we currently only provide the single machine mode. 
23 | 24 | ``` 25 | import tensorflow as tf 26 | 27 | hello = tf.Variable('Hello World!') 28 | 29 | sess = tf.Session() 30 | init = tf.global_variables_initializer() 31 | 32 | sess.run(init) 33 | sess.run(hello) 34 | ``` 35 | 36 | ## Notebook Options 37 | 38 | The Docker container executes a [`start-notebook.sh` script](../base-notebook/start-notebook.sh) script by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 39 | 40 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 41 | 42 | ``` 43 | docker run -d -p 8888:8888 jupyter/tensorflow-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 44 | ``` 45 | 46 | For example, to set the base URL of the notebook server, run the following: 47 | 48 | ``` 49 | docker run -d -p 8888:8888 jupyter/tensorflow-notebook start-notebook.sh --NotebookApp.base_url=/some/path 50 | ``` 51 | 52 | For example, to disable all authentication mechanisms (not a recommended practice): 53 | 54 | ``` 55 | docker run -d -p 8888:8888 jupyter/tensorflow-notebook start-notebook.sh --NotebookApp.token='' 56 | ``` 57 | 58 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 59 | 60 | ## Docker Options 61 | 62 | You may customize the execution of the Docker container and the command it is running with the following optional arguments. 63 | 64 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 65 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 66 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 67 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 68 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 69 | 70 | ## SSL Certificates 71 | 72 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. 
For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 73 | 74 | ``` 75 | docker run -d -p 8888:8888 \ 76 | -v /some/host/folder:/etc/ssl/notebook \ 77 | jupyter/tensorflow-notebook start-notebook.sh \ 78 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key 79 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 80 | ``` 81 | 82 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 83 | 84 | ``` 85 | docker run -d -p 8888:8888 \ 86 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 87 | jupyter/tensorflow-notebook start-notebook.sh \ 88 | --NotebookApp.certfile=/etc/ssl/notebook.pem 89 | ``` 90 | 91 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 92 | 93 | For additional information about using SSL, see the following: 94 | 95 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 96 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 97 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 98 | 99 | 100 | ## Conda Environments 101 | 102 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 103 | 104 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in both environments. For convenience, you can install packages into either environment regardless of what environment is currently active using commands like the following: 105 | 106 | ``` 107 | # install a package into the default (python 3.x) environment 108 | pip install some-package 109 | conda install some-package 110 | ``` 111 | 112 | 113 | ## Alternative Commands 114 | 115 | ### start.sh 116 | 117 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 118 | 119 | ``` 120 | docker run -it --rm jupyter/tensorflow-notebook start.sh ipython 121 | ``` 122 | 123 | Or, to run JupyterLab instead of the classic notebook, run the following: 124 | 125 | ``` 126 | docker run -it --rm -p 8888:8888 jupyter/tensorflow-notebook start.sh jupyter lab 127 | ``` 128 | 129 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 130 | 131 | ### Others 132 | 133 | You can bypass the provided scripts and specify your an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 
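For example, one minimal way to bypass the start scripts is to launch an interactive shell directly; as noted above, start-script features such as `GRANT_SUDO` will not be applied in this case:

```
docker run -it --rm jupyter/tensorflow-notebook bash
```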
134 | -------------------------------------------------------------------------------- /astropy-notebook/README.md: -------------------------------------------------------------------------------- 1 | # Jupyter Notebook Astropy/Scientific Python Stack 2 | 3 | ## What it Gives You 4 | 5 | * Jupyter Notebook 5.2.x 6 | * Conda Python 3.x environment 7 | * Astroconda Python 3.x environment 8 | * pandas, matplotlib, scipy, seaborn, scikit-learn, scikit-image, sympy, cython, patsy, statsmodel, cloudpickle, dill, numba, bokeh, vincent, beautifulsoup, xlrd pre-installed 9 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 10 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command 11 | * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 12 | * A [start.sh](../base-notebook/start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 13 | * Options for HTTPS, password auth, and passwordless `sudo` 14 | 15 | ## Basic Use 16 | 17 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 18 | 19 | ``` 20 | docker run -it --rm -p 8888:8888 jupyter/astropy-notebook 21 | ``` 22 | 23 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 24 | 25 | ## Notebook Options 26 | 27 | The Docker container executes a [`start-notebook.sh` script](../base-notebook/start-notebook.sh) script by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 28 | 29 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 30 | 31 | ``` 32 | docker run -d -p 8888:8888 jupyter/astropy-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 33 | ``` 34 | 35 | For example, to set the base URL of the notebook server, run the following: 36 | 37 | ``` 38 | docker run -d -p 8888:8888 jupyter/astropy-notebook start-notebook.sh --NotebookApp.base_url=/some/path 39 | ``` 40 | 41 | For example, to disable all authentication mechanisms (not a recommended practice): 42 | 43 | ``` 44 | docker run -d -p 8888:8888 jupyter/astropy-notebook start-notebook.sh --NotebookApp.token='' 45 | ``` 46 | 47 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 48 | 49 | 50 | ## Docker Options 51 | 52 | You may customize the execution of the Docker container and the Notebook server it contains with the following optional arguments. 53 | 54 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 
55 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 56 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 57 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 58 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 59 | 60 | ## SSL Certificates 61 | 62 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 63 | 64 | ``` 65 | docker run -d -p 8888:8888 \ 66 | -v /some/host/folder:/etc/ssl/notebook \ 67 | jupyter/astropy-notebook start-notebook.sh \ 68 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key 69 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 70 | ``` 71 | 72 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 73 | 74 | ``` 75 | docker run -d -p 8888:8888 \ 76 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 77 | jupyter/astropy-notebook start-notebook.sh \ 78 | --NotebookApp.certfile=/etc/ssl/notebook.pem 79 | ``` 80 | 81 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 82 | 83 | For additional information about using SSL, see the following: 84 | 85 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 86 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 87 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 88 | 89 | 90 | ## Conda Environments 91 | 92 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 93 | 94 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in both environments. 
For convenience, you can install packages into either environment regardless of what environment is currently active using commands like the following: 95 | 96 | ``` 97 | # install a package into the default (python 3.x) environment 98 | pip install some-package 99 | conda install some-package 100 | ``` 101 | 102 | ## Astroconda Environment 103 | 104 | This container is configured with the [`astroconda`](https://astroconda.readthedocs.io/en/latest/installation.html#standard-install) conda channel and includes a separate conda environment installed with the `stsci` standard install. 105 | 106 | ## Alternative Commands 107 | 108 | ### start.sh 109 | 110 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 111 | 112 | ``` 113 | docker run -it --rm jupyter/astropy-notebook start.sh ipython 114 | ``` 115 | 116 | Or, to run JupyterLab instead of the classic notebook, run the following: 117 | 118 | ``` 119 | docker run -it --rm -p 8888:8888 jupyter/astropy-notebook start.sh jupyter lab 120 | ``` 121 | 122 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 123 | 124 | ### Others 125 | 126 | You can bypass the provided scripts and specify your an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 127 | -------------------------------------------------------------------------------- /minimal-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/minimal-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/minimal-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/minimal-notebook.svg)](https://microbadger.com/images/jupyter/minimal-notebook "jupyter/minimal-notebook image metadata") 2 | 3 | # Minimal Jupyter Notebook Stack 4 | 5 | Small image for working in the notebook and installing your own libraries 6 | 7 | ## What it Gives You 8 | 9 | * Fully-functional Jupyter Notebook 5.2.x 10 | * Miniconda Python 3.x 11 | * No preinstalled scientific computing packages 12 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 13 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command 14 | * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 15 | * A [start.sh](../base-notebook/start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 16 | * Options for a self-signed HTTPS certificate and passwordless `sudo` 17 | 18 | ## Basic Use 19 | 20 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 
21 | 22 | ``` 23 | docker run -it --rm -p 8888:8888 jupyter/minimal-notebook 24 | ``` 25 | 26 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 27 | 28 | ## Notebook Options 29 | 30 | The Docker container executes a [`start-notebook.sh` script](../base-notebook/start-notebook.sh) script by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 31 | 32 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 33 | 34 | ``` 35 | docker run -d -p 8888:8888 jupyter/minimal-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 36 | ``` 37 | 38 | For example, to set the base URL of the notebook server, run the following: 39 | 40 | ``` 41 | docker run -d -p 8888:8888 jupyter/minimal-notebook start-notebook.sh --NotebookApp.base_url=/some/path 42 | ``` 43 | 44 | For example, to disable all authentication mechanisms (not a recommended practice): 45 | 46 | ``` 47 | docker run -d -p 8888:8888 jupyter/minimal-notebook start-notebook.sh --NotebookApp.token='' 48 | ``` 49 | 50 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 51 | 52 | 53 | ## Docker Options 54 | 55 | You may customize the execution of the Docker container and the Notebook server it contains with the following optional arguments. 56 | 57 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 58 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 59 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 60 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 61 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. 
**You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 62 | 63 | ## SSL Certificates 64 | 65 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 66 | 67 | ``` 68 | docker run -d -p 8888:8888 \ 69 | -v /some/host/folder:/etc/ssl/notebook \ 70 | jupyter/minimal-notebook start-notebook.sh \ 71 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key 72 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 73 | ``` 74 | 75 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 76 | 77 | ``` 78 | docker run -d -p 8888:8888 \ 79 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 80 | jupyter/minimal-notebook start-notebook.sh \ 81 | --NotebookApp.certfile=/etc/ssl/notebook.pem 82 | ``` 83 | 84 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 85 | 86 | For additional information about using SSL, see the following: 87 | 88 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 89 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 90 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 91 | 92 | 93 | ## Conda Environments 94 | 95 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 96 | 97 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in both environments. For convenience, you can install packages into either environment regardless of what environment is currently active using commands like the following: 98 | 99 | ``` 100 | # install a package into the default (python 3.x) environment 101 | pip install some-package 102 | conda install some-package 103 | ``` 104 | 105 | 106 | ## Alternative Commands 107 | 108 | 109 | ### start.sh 110 | 111 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 112 | 113 | ``` 114 | docker run -it --rm jupyter/minimal-notebook start.sh ipython 115 | ``` 116 | 117 | Or, to run JupyterLab instead of the classic notebook, run the following: 118 | 119 | ``` 120 | docker run -it --rm -p 8888:8888 jupyter/minimal-notebook start.sh jupyter lab 121 | ``` 122 | 123 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 124 | 125 | ### Others 126 | 127 | You can bypass the provided scripts and specify your an arbitrary start command. 
If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 128 | -------------------------------------------------------------------------------- /scipy-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/scipy-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/scipy-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/scipy-notebook.svg)](https://microbadger.com/images/jupyter/scipy-notebook "jupyter/scipy-notebook image metadata") 2 | 3 | # Jupyter Notebook Scientific Python Stack 4 | 5 | ## What it Gives You 6 | 7 | * Jupyter Notebook 5.2.x 8 | * Conda Python 3.x environment 9 | * pandas, matplotlib, scipy, seaborn, scikit-learn, scikit-image, sympy, cython, patsy, statsmodel, cloudpickle, dill, numba, bokeh, vincent, beautifulsoup, xlrd pre-installed 10 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 11 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command 12 | * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 13 | * A [start.sh](../base-notebook/start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 14 | * Options for HTTPS, password auth, and passwordless `sudo` 15 | 16 | ## Basic Use 17 | 18 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 19 | 20 | ``` 21 | docker run -it --rm -p 8888:8888 jupyter/scipy-notebook 22 | ``` 23 | 24 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 25 | 26 | ## Notebook Options 27 | 28 | The Docker container executes a [`start-notebook.sh` script](../base-notebook/start-notebook.sh) script by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 29 | 30 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 31 | 32 | ``` 33 | docker run -d -p 8888:8888 jupyter/scipy-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 34 | ``` 35 | 36 | For example, to set the base URL of the notebook server, run the following: 37 | 38 | ``` 39 | docker run -d -p 8888:8888 jupyter/scipy-notebook start-notebook.sh --NotebookApp.base_url=/some/path 40 | ``` 41 | 42 | For example, to disable all authentication mechanisms (not a recommended practice): 43 | 44 | ``` 45 | docker run -d -p 8888:8888 jupyter/scipy-notebook start-notebook.sh --NotebookApp.token='' 46 | ``` 47 | 48 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. 
See the *Alternative Commands* section later in this document for more information. 49 | 50 | 51 | ## Docker Options 52 | 53 | You may customize the execution of the Docker container and the Notebook server it contains with the following optional arguments. 54 | 55 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 56 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 57 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 58 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 59 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 60 | 61 | ## SSL Certificates 62 | 63 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 64 | 65 | ``` 66 | docker run -d -p 8888:8888 \ 67 | -v /some/host/folder:/etc/ssl/notebook \ 68 | jupyter/scipy-notebook start-notebook.sh \ 69 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key 70 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 71 | ``` 72 | 73 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 74 | 75 | ``` 76 | docker run -d -p 8888:8888 \ 77 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 78 | jupyter/scipy-notebook start-notebook.sh \ 79 | --NotebookApp.certfile=/etc/ssl/notebook.pem 80 | ``` 81 | 82 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 83 | 84 | For additional information about using SSL, see the following: 85 | 86 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 87 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 88 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 
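If you only need a certificate for testing, you can generate a self-signed key/certificate pair on the host before mounting it. A minimal sketch using `openssl` (the paths and the `CN` value are placeholders; browsers will warn about a self-signed certificate, so do not use this for a production deployment):

```
# generate a self-signed key and certificate to mount into the container
mkdir -p /some/host/folder
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /some/host/folder/notebook.key \
  -out /some/host/folder/notebook.crt \
  -subj "/CN=localhost"
```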
89 | 90 | 91 | ## Conda Environments 92 | 93 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 94 | 95 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in both environments. For convenience, you can install packages into either environment regardless of what environment is currently active using commands like the following: 96 | 97 | ``` 98 | # install a package into the default (python 3.x) environment 99 | pip install some-package 100 | conda install some-package 101 | ``` 102 | 103 | 104 | ## Alternative Commands 105 | 106 | 107 | ### start.sh 108 | 109 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 110 | 111 | ``` 112 | docker run -it --rm jupyter/scipy-notebook start.sh ipython 113 | ``` 114 | 115 | Or, to run JupyterLab instead of the classic notebook, run the following: 116 | 117 | ``` 118 | docker run -it --rm -p 8888:8888 jupyter/scipy-notebook start.sh jupyter lab 119 | ``` 120 | 121 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 122 | 123 | ### Others 124 | 125 | You can bypass the provided scripts and specify your an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 126 | -------------------------------------------------------------------------------- /base-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/base-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/base-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/base-notebook.svg)](https://microbadger.com/images/jupyter/base-notebook "jupyter/base-notebook image metadata") 2 | 3 | # Base Jupyter Notebook Stack 4 | 5 | Small base image for defining your own stack 6 | 7 | ## What it Gives You 8 | 9 | * Minimally-functional Jupyter Notebook 5.2.x (e.g., no pandoc for document conversion) 10 | * Miniconda Python 3.x 11 | * No preinstalled scientific computing packages 12 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 13 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](./start-notebook.sh) as the default command 14 | * A [start-singleuser.sh](./start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 15 | * A [start.sh](./start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 16 | * Options for a self-signed HTTPS certificate and passwordless `sudo` 17 | 18 | ## Basic Use 19 | 20 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 21 | 22 | ``` 23 | docker run -it --rm -p 8888:8888 jupyter/base-notebook 24 | ``` 25 | 26 | Take note of the authentication token included in the notebook startup log messages. 
Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 27 | 28 | ## Notebook Options 29 | 30 | The Docker container executes the [`start-notebook.sh` script](./start-notebook.sh) by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook` command. 31 | 32 | You can launch [JupyterLab](https://github.com/jupyterlab/jupyterlab) by setting `JUPYTER_ENABLE_LAB`: 33 | 34 | ``` 35 | docker run -it --rm -e JUPYTER_ENABLE_LAB=1 -p 8888:8888 jupyter/base-notebook 36 | ``` 37 | 38 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 39 | 40 | ``` 41 | docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 42 | ``` 43 | 44 | Or, to set the base URL of the notebook server, run the following: 45 | 46 | ``` 47 | docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.base_url=/some/path 48 | ``` 49 | 50 | Or, to disable all authentication mechanisms (not a recommended practice): 51 | 52 | ``` 53 | docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.token='' 54 | ``` 55 | 56 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 57 | 58 | ## Docker Options 59 | 60 | You may customize the execution of the Docker container and the command it is running with the following optional arguments. 61 | 62 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 63 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 64 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 65 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 66 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as a folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed.
**You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 67 | * `--group-add users` - use this argument if you are also specifying 68 | a specific user id to launch the container (`-u 5000`), rather than launching the container as root and relying on NB_UID and NB_GID to set the user and group. 69 | 70 | 71 | ## SSL Certificates 72 | 73 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 74 | 75 | ``` 76 | docker run -d -p 8888:8888 \ 77 | -v /some/host/folder:/etc/ssl/notebook \ 78 | jupyter/base-notebook start-notebook.sh \ 79 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key 80 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 81 | ``` 82 | 83 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 84 | 85 | ``` 86 | docker run -d -p 8888:8888 \ 87 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 88 | jupyter/base-notebook start-notebook.sh \ 89 | --NotebookApp.certfile=/etc/ssl/notebook.pem 90 | ``` 91 | 92 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 93 | 94 | For additional information about using SSL, see the following: 95 | 96 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 97 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 98 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 99 | 100 | 101 | ## Conda Environments 102 | 103 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 104 | 105 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in both environments. For convenience, you can install packages into either environment regardless of what environment is currently active using commands like the following: 106 | 107 | ``` 108 | # install a package into the default (python 3.x) environment 109 | pip install some-package 110 | conda install some-package 111 | ``` 112 | 113 | 114 | ## Alternative Commands 115 | 116 | ### start.sh 117 | 118 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. 
For example, to run the text-based `ipython` console in a container, do the following: 119 | 120 | ``` 121 | docker run -it --rm jupyter/base-notebook start.sh ipython 122 | ``` 123 | 124 | Or, to run JupyterLab instead of the classic notebook, run the following: 125 | 126 | ``` 127 | docker run -it --rm -p 8888:8888 jupyter/base-notebook start.sh jupyter lab 128 | ``` 129 | 130 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 131 | 132 | ### Others 133 | 134 | You can bypass the provided scripts and specify an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 135 | 136 | -------------------------------------------------------------------------------- /datascience-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/datascience-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/datascience-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/datascience-notebook.svg)](https://microbadger.com/images/jupyter/datascience-notebook "jupyter/datascience-notebook image metadata") 2 | 3 | # Jupyter Notebook Data Science Stack 4 | 5 | ## What it Gives You 6 | 7 | * Jupyter Notebook 5.2.x 8 | * Conda Python 3.x environment 9 | * pandas, matplotlib, scipy, seaborn, scikit-learn, scikit-image, sympy, cython, patsy, statsmodels, cloudpickle, dill, numba, bokeh pre-installed 10 | * Conda R v3.3.x and the `r` conda channel 11 | * plyr, devtools, shiny, rmarkdown, forecast, rsqlite, reshape2, nycflights13, caret, rcurl, and randomforest pre-installed 12 | * The [tidyverse](https://github.com/tidyverse/tidyverse) R packages are also installed, including ggplot2, dplyr, tidyr, readr, purrr, tibble, stringr, lubridate, and broom 13 | * Julia v0.6.x with Gadfly, RDatasets and HDF5 pre-installed 14 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 15 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command 16 | * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 17 | * A [start.sh](../base-notebook/start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 18 | * Options for a self-signed HTTPS certificate and passwordless `sudo` 19 | 20 | ## Basic Use 21 | 22 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 23 | 24 | ``` 25 | docker run -it --rm -p 8888:8888 jupyter/datascience-notebook 26 | ``` 27 | 28 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 29 | 30 | ## Notebook Options 31 | 32 | The Docker container executes the [`start-notebook.sh` script](../base-notebook/start-notebook.sh) by default.
The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 33 | 34 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 35 | 36 | ``` 37 | docker run -d -p 8888:8888 jupyter/datascience-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 38 | ``` 39 | 40 | For example, to set the base URL of the notebook server, run the following: 41 | 42 | ``` 43 | docker run -d -p 8888:8888 jupyter/datascience-notebook start-notebook.sh --NotebookApp.base_url=/some/path 44 | ``` 45 | 46 | For example, to disable all authentication mechanisms (not a recommended practice): 47 | 48 | ``` 49 | docker run -d -p 8888:8888 jupyter/datascience-notebook start-notebook.sh --NotebookApp.token='' 50 | ``` 51 | 52 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 53 | 54 | ## Docker Options 55 | 56 | You may customize the execution of the Docker container and the command it is running with the following optional arguments. 57 | 58 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 59 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 60 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 61 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 62 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 63 | 64 | ## SSL Certificates 65 | 66 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. 
For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 67 | 68 | ``` 69 | docker run -d -p 8888:8888 \ 70 | -v /some/host/folder:/etc/ssl/notebook \ 71 | jupyter/datascience-notebook start-notebook.sh \ 72 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key \ 73 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 74 | ``` 75 | 76 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 77 | 78 | ``` 79 | docker run -d -p 8888:8888 \ 80 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 81 | jupyter/datascience-notebook start-notebook.sh \ 82 | --NotebookApp.certfile=/etc/ssl/notebook.pem 83 | ``` 84 | 85 | In either case, Jupyter Notebook expects the key and certificate to be base64-encoded text files. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 86 | 87 | For additional information about using SSL, see the following: 88 | 89 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 90 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 91 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 92 | 93 | 94 | ## Conda Environments 95 | 96 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 97 | 98 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in this environment. For convenience, you can install additional packages into it using commands like the following: 99 | 100 | ``` 101 | # install a package into the default (python 3.x) environment 102 | pip install some-package 103 | conda install some-package 104 | ``` 105 | 106 | 107 | ## Alternative Commands 108 | 109 | 110 | ### start.sh 111 | 112 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 113 | 114 | ``` 115 | docker run -it --rm jupyter/datascience-notebook start.sh ipython 116 | ``` 117 | 118 | Or, to run JupyterLab instead of the classic notebook, run the following: 119 | 120 | ``` 121 | docker run -it --rm -p 8888:8888 jupyter/datascience-notebook start.sh jupyter lab 122 | ``` 123 | 124 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 125 | 126 | ### Others 127 | 128 | You can bypass the provided scripts and specify an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`).
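For instance, if you just want to poke around inside the image without starting a notebook server at all, you can hand the container a shell as the start command (a minimal sketch; because it skips the start scripts entirely, options such as `GRANT_SUDO` are ignored):

```
# open an interactive shell in place of the notebook server
docker run -it --rm jupyter/datascience-notebook bash
```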
129 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # docker-stacks 2 | 3 | [![Build Status](https://travis-ci.org/jupyter/docker-stacks.svg?branch=master)](https://travis-ci.org/jupyter/docker-stacks) 4 | [![Join the chat at https://gitter.im/jupyter/jupyter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/jupyter/jupyter?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) 5 | 6 | Opinionated stacks of ready-to-run Jupyter applications in Docker. 7 | 8 | ## Quick Start 9 | 10 | If you're familiar with Docker, have it configured, and know exactly what you'd like to run, one of these commands should get you up and running: 11 | 12 | ``` 13 | # Run an ephemeral Jupyter Notebook server in a Docker container in the terminal foreground. 14 | # Note that any work saved in the container will be lost when it is destroyed with this config. 15 | # -ti: pseudo-TTY+STDIN open. 16 | # --rm: remove the container on exit. 17 | # -p: publish port to the host. 18 | docker run -ti --rm -p 8888:8888 jupyter/<stack-name>:<tag> 19 | 20 | # Run a Jupyter Notebook server in a Docker container in the terminal foreground. 21 | # Any files written to ~/work in the container will be saved to the current working 22 | # directory on the host. 23 | docker run -ti --rm -p 8888:8888 -v "$PWD":/home/jovyan/work jupyter/<stack-name>:<tag> 24 | 25 | # Run an ephemeral Jupyter Notebook server in a Docker container in the background. 26 | # Note that any work saved in the container will be lost when it is destroyed with this config. 27 | # -d: detach, run container in background. 28 | # -P: publish all exposed ports to random host ports. 29 | docker run -d -P jupyter/<stack-name>:<tag> 30 | ``` 31 | 32 | ## Getting Started 33 | 34 | If this is your first time using Docker or any of the Jupyter projects, do the following to get started. 35 | 36 | 1. [Install Docker](https://docs.docker.com/installation/) on your host of choice. 37 | 2. Open the README in one of the folders in this git repository. 38 | 3. Follow the README for that stack (a first run is sketched below).
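For example, a first run might look like the following (a hedged illustration; `jupyter/minimal-notebook` is just one choice — any of the stacks listed below works the same way):

```
# confirm Docker is installed and reachable
docker --version

# pull and run one of the stacks; see that stack's README for all options
docker run -it --rm -p 8888:8888 jupyter/minimal-notebook
```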
39 | 40 | ## Visual Overview 41 | 42 | Here's a diagram of the `FROM` relationships between all of the images defined in this project: 43 | 44 | [![Image inheritance diagram](internal/inherit-diagram.svg)](http://interactive.blockdiag.com/?compression=deflate&src=eJyFzTEPgjAQhuHdX9Gws5sQjGzujsaYKxzmQrlr2msMGv-71K0srO_3XGud9NNA8DSfgzESCFlBSdi0xkvQAKTNugw4QnL6GIU10hvX-Zh7Z24OLLq2SjaxpvP10lX35vCf6pOxELFmUbQiUz4oQhYzMc3gCrRt2cWe_FKosmSjyFHC6OS1AwdQWCtyj7sfh523_BI9hKlQ25YdOFdv5fcH0kiEMA) 45 | 46 | [Click here for a commented build history of each image, with references to tag/SHA values.](https://github.com/jupyter/docker-stacks/wiki/Docker-build-history) 47 | 48 | The following are quick-links to READMEs about each image and their Docker image tags on Docker Cloud: 49 | 50 | * base-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/base-notebook), [SHA list](https://hub.docker.com/r/jupyter/base-notebook/tags/) 51 | * minimal-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/minimal-notebook), [SHA list](https://hub.docker.com/r/jupyter/minimal-notebook/tags/) 52 | * scipy-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/scipy-notebook), [SHA list](https://hub.docker.com/r/jupyter/scipy-notebook/tags/) 53 | * r-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/r-notebook), [SHA list](https://hub.docker.com/r/jupyter/r-notebook/tags/) 54 | * tensorflow-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/tensorflow-notebook), [SHA list](https://hub.docker.com/r/jupyter/tensorflow-notebook/tags/) 55 | * datascience-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook), [SHA list](https://hub.docker.com/r/jupyter/datascience-notebook/tags/) 56 | * pyspark-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/pyspark-notebook), [SHA list](https://hub.docker.com/r/jupyter/pyspark-notebook/tags/) 57 | * all-spark-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/all-spark-notebook), [SHA list](https://hub.docker.com/r/jupyter/all-spark-notebook/tags/) 58 | 59 | ## Stacks, Tags, Versioning, and Progress 60 | 61 | Starting with [git commit SHA 9bd33dcc8688](https://github.com/jupyter/docker-stacks/tree/9bd33dcc8688): 62 | 63 | * Nearly every folder here on GitHub has an equivalent `jupyter/` on Docker Hub (e.g., all-spark-notebook → jupyter/all-spark-notebook). 64 | * The `latest` tag in each Docker Hub repository tracks the `master` branch `HEAD` reference on GitHub. 65 | This is a moving target and will make backward-incompatible changes regularly. 66 | * Any 12-character image tag on Docker Hub refers to a git commit SHA here on GitHub. See the [Docker build history wiki page](https://github.com/jupyter/docker-stacks/wiki/Docker-build-history) for a table of build details. 67 | * Stack contents (e.g., new library versions) will be updated upon request via PRs against this project. 68 | * Users looking for reproducibility or stability should always refer to specific git SHA tagged images in their work, not `latest`. 69 | * For legacy reasons, there are two additional tags named `3.2` and `4.0` on Docker Hub which point to images prior to our versioning scheme switch. 70 | 71 | ## Other Tips and Known Issues 72 | 73 | - If you haven't already, pin your image to a tag, e.g. `FROM jupyter/scipy-notebook:7c45ec67c8e7`. 
74 | `latest` is a moving target which can change in backward-incompatible ways as packages and operating systems are updated. 75 | * Python 2.x was [removed from all images](https://github.com/jupyter/docker-stacks/pull/433) on August 10th, 2017, starting in tag `cc9feab481f7`. If you wish to continue using Python 2.x, pin to tag `82b978b3ceeb`. 76 | * `tini -- start-notebook.sh` is the default Docker entrypoint-plus-command in every notebook stack. If you plan to modify it in any way, be sure to check the *Notebook Options* section of your stack's README to understand the consequences. 77 | * Every notebook stack is compatible with [JupyterHub](https://jupyterhub.readthedocs.io) 0.5 or higher. When running with JupyterHub, you must override the Docker run command to point to the [start-singleuser.sh](base-notebook/start-singleuser.sh) script, which starts a single-user instance of the Notebook server. See each stack's README for instructions on running with JupyterHub. 78 | * Check the [Docker recipes wiki page](https://github.com/jupyter/docker-stacks/wiki/Docker-Recipes) attached to this project for information about extending and deploying the Docker images defined here. Add to the wiki if you have relevant information. 79 | * The pyspark-notebook and all-spark-notebook stacks will fail to submit Spark jobs to a Mesos cluster when run on Mac OSX due to https://github.com/docker/for-mac/issues/68. 80 | 81 | ## Maintainer Workflow 82 | 83 | **To build new images on Docker Cloud and publish them to the Docker Hub registry, do the following:** 84 | 85 | 1. Make sure Travis is green for a PR. 86 | 2. Merge the PR. 87 | 3. Monitor the Docker Cloud build status for each of the stacks, starting with [jupyter/base-notebook](https://cloud.docker.com/app/jupyter/repository/docker/jupyter/base-notebook/general) and ending with [jupyter/all-spark-notebook](https://cloud.docker.com/app/jupyter/repository/docker/jupyter/all-spark-notebook/general). 88 | * See the stack hierarchy diagram for the current, complete build order. 89 | 4. Manually click the retry button next to any build that fails, to resume that build and any dependent builds. 90 | 5. Avoid merging another PR to master until all outstanding builds complete. 91 | * There is currently no way to propagate the git SHA to build through the Docker Cloud build trigger API; every build trigger works off of master HEAD. 92 | 93 | **When there's a security fix in the Ubuntu base image, do the following in place of the last command:** 94 | 95 | Update the `ubuntu:16.04` SHA in the most-base images (e.g., base-notebook). Submit it as a regular PR and go through the build process. Expect the build to take a while to complete: every image layer will rebuild. 96 | 97 | **When there's a new stack definition, do the following before merging the PR with the new stack:** 98 | 99 | 1. Ensure the PR includes an update to the stack overview diagram in the top-level README. 100 | * The source of the diagram is included in the alt-text of the image. Visit that URL to make edits. 101 | 2. Ensure the PR updates the Makefile which is used to build the stacks in order on Travis CI. 102 | 3. Create a new repository in the `jupyter` org on Docker Cloud named after the stack folder in the git repo. 103 | 4. Grant the `stacks` team permission to write to the repo. 104 | 5. Click *Builds* and then *Configure Automated Builds* for the repository. 105 | 6. Select `jupyter/docker-stacks` as the source repository. 106 | 7.
Choose *Build on Docker Cloud's infrastructure using a Small node* unless you have reason to believe a bigger host is required. 107 | 8. Update the *Build Context* in the default build rule to be `/`. 108 | 9. Toggle *Autobuild* to disabled unless the stack is a new root stack (e.g., like `jupyter/base-notebook`). 109 | 10. If the new stack depends on the build of another stack in the hierarchy: 110 | 1. Hit *Save* and then click *Configure Automated Builds*. 111 | 2. At the very bottom, add a build trigger named *Stack hierarchy trigger*. 112 | 3. Copy the build trigger URL. 113 | 4. Visit the parent repository *Builds* page and click *Configure Automated Builds*. 114 | 5. Add the URL you copied to the *NEXT_BUILD_TRIGGERS* environment variable comma separated list of URLs, creating that environment variable if it does not already exist. 115 | 6. Hit *Save*. 116 | 11. If the new stack should trigger other dependent builds: 117 | 1. Add an environment variable named *NEXT_BUILD_TRIGGERS*. 118 | 2. Copy the build trigger URLs from the dependent builds into the *NEXT_BUILD_TRIGGERS* comma separated list of URLs. 119 | 3. Hit *Save*. 120 | 12. Adjust other *NEXT_BUILD_TRIGGERS* values as needed so that the build order matches that in the stack hierarchy diagram. 121 | 122 | **When there's a new maintainer, do the following:** 123 | 124 | 1. Visit https://cloud.docker.com/app/jupyter/team/stacks/users 125 | 2. Add the new maintainer user name. 126 | 127 | **If automated builds have got you down, do the following:** 128 | 129 | 1. Clone this repository. 130 | 2. Check out the git SHA you want to build and publish. 131 | 3. `docker login` with your Docker Hub/Cloud credentials. 132 | 4. Run `make retry/release-all`. 133 | 134 | When `make retry/release-all` successfully pushes the last of its images to Docker Hub (currently `jupyter/all-spark-notebook`), Docker Hub invokes [the webhook](https://github.com/jupyter/docker-stacks/blob/master/internal/docker-stacks-webhook/) which updates the [Docker build history](https://github.com/jupyter/docker-stacks/wiki/Docker-build-history) wiki page. 135 | -------------------------------------------------------------------------------- /internal/docker-stacks-webhook/docker-stacks-webhook.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# docker-stacks wiki webhook\n", 8 | "\n", 9 | "Listens for webhook callbacks from Docker Hub. Updates the [docker build history](https://github.com/jupyter/docker-stacks/wiki/Docker-build-history) wiki page in response to completed builds.\n", 10 | "\n", 11 | "References:\n", 12 | "\n", 13 | "* https://docs.docker.com/docker-hub/webhooks/\n" 14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": null, 19 | "metadata": { 20 | "collapsed": false 21 | }, 22 | "outputs": [], 23 | "source": [ 24 | "import json\n", 25 | "import datetime as dt\n", 26 | "import requests\n", 27 | "import os" 28 | ] 29 | }, 30 | { 31 | "cell_type": "markdown", 32 | "metadata": {}, 33 | "source": [ 34 | "Read credentials from the environment." 
35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": null, 40 | "metadata": { 41 | "collapsed": false 42 | }, 43 | "outputs": [], 44 | "source": [ 45 | "GH_USERNAME = os.getenv('GH_USERNAME')\n", 46 | "GH_TOKEN = os.getenv('GH_TOKEN')" 47 | ] 48 | }, 49 | { 50 | "cell_type": "markdown", 51 | "metadata": {}, 52 | "source": [ 53 | "Configure git upfront." 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": null, 59 | "metadata": { 60 | "collapsed": true 61 | }, 62 | "outputs": [], 63 | "source": [ 64 | "%%bash\n", 65 | "git config --global user.email \"jupyter@googlegroups.com\"\n", 66 | "git config --global user.name \"Jupyter Development Team\"" 67 | ] 68 | }, 69 | { 70 | "cell_type": "markdown", 71 | "metadata": {}, 72 | "source": [ 73 | "Build the templates we need." 74 | ] 75 | }, 76 | { 77 | "cell_type": "code", 78 | "execution_count": null, 79 | "metadata": { 80 | "collapsed": false 81 | }, 82 | "outputs": [], 83 | "source": [ 84 | "wiki_git_tmpl = 'https://{GH_USERNAME}:{GH_TOKEN}@github.com/jupyter/docker-stacks.wiki.git'\n", 85 | "commit_url_tmpl = 'https://github.com/jupyter/docker-stacks/commit/{sha}'\n", 86 | "row_tmpl = '|{pushed_at}|[{sha}]({commit_url})|{commit_msg}|\\n'\n", 87 | "api_commit_url_tmpl = 'https://api.github.com/repos/jupyter/docker-stacks/commits/{sha}'" 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "execution_count": null, 93 | "metadata": { 94 | "collapsed": true 95 | }, 96 | "outputs": [], 97 | "source": [ 98 | "REQUEST = json.dumps({\n", 99 | " 'body' : {\n", 100 | " \"push_data\": {\n", 101 | " \"pushed_at\": 1449017033,\n", 102 | " \"images\": [],\n", 103 | " \"tag\": \"9f9907cf1df8\",\n", 104 | " \"pusher\": \"biscarch\"\n", 105 | " },\n", 106 | " \"callback_url\": \"https://registry.hub.docker.com/u/biscarch/webhook-tester-repo/hook/2i5e3gj1bi354asb3f05gchi4ccjg0gas/\",\n", 107 | " \"repository\": {\n", 108 | " \"status\": \"Active\",\n", 109 | " \"description\": \"\",\n", 110 | " \"is_trusted\": False,\n", 111 | " \"full_description\": None,\n", 112 | " \"repo_url\": \"https://registry.hub.docker.com/u/biscarch/webhook-tester-repo/\",\n", 113 | " \"owner\": \"biscarch\",\n", 114 | " \"is_official\": False,\n", 115 | " \"is_private\": False,\n", 116 | " \"name\": \"webhook-tester-repo\",\n", 117 | " \"namespace\": \"biscarch\",\n", 118 | " \"star_count\": 0,\n", 119 | " \"comment_count\": 0,\n", 120 | " \"date_created\": 1449016916,\n", 121 | " \"repo_name\": \"biscarch/webhook-tester-repo\"\n", 122 | " }\n", 123 | " }\n", 124 | "})" 125 | ] 126 | }, 127 | { 128 | "cell_type": "markdown", 129 | "metadata": {}, 130 | "source": [ 131 | "Read values we need out of the request body." 132 | ] 133 | }, 134 | { 135 | "cell_type": "code", 136 | "execution_count": null, 137 | "metadata": { 138 | "collapsed": true 139 | }, 140 | "outputs": [], 141 | "source": [ 142 | "# POST /tag\n", 143 | "body = json.loads(REQUEST)['body']\n", 144 | "\n", 145 | "tag = body['push_data']['tag']\n", 146 | "pushed_at_ts = body['push_data']['pushed_at']\n", 147 | "callback_url = body['callback_url']" 148 | ] 149 | }, 150 | { 151 | "cell_type": "markdown", 152 | "metadata": {}, 153 | "source": [ 154 | "Validate the request by seeing if the tag is a valid SHA in the docker-stacks repo." 
155 | ] 156 | }, 157 | { 158 | "cell_type": "code", 159 | "execution_count": null, 160 | "metadata": { 161 | "collapsed": false 162 | }, 163 | "outputs": [], 164 | "source": [ 165 | "# POST /tag\n", 166 | "commit_resp = requests.get(api_commit_url_tmpl.format(sha=tag))\n", 167 | "try:\n", 168 | " commit_resp.raise_for_status()\n", 169 | "except Exception as ex:\n", 170 | " requests.post(callback_url, json={\n", 171 | " 'state': 'failure',\n", 172 | " 'description': 'request does not contain a valid sha',\n", 173 | " 'context' : 'docker-stacks-webhook',\n", 174 | " 'target_url' : 'https://github.com/jupyter/docker-stacks/wiki/Docker-build-history'\n", 175 | " })\n", 176 | " raise ex" 177 | ] 178 | }, 179 | { 180 | "cell_type": "markdown", 181 | "metadata": {}, 182 | "source": [ 183 | "Get a fresh clone of the wiki git repo." 184 | ] 185 | }, 186 | { 187 | "cell_type": "code", 188 | "execution_count": null, 189 | "metadata": { 190 | "collapsed": false 191 | }, 192 | "outputs": [], 193 | "source": [ 194 | "# POST /tag\n", 195 | "wiki_git = wiki_git_tmpl.format(GH_USERNAME=GH_USERNAME, GH_TOKEN=GH_TOKEN)\n", 196 | "\n", 197 | "!rm -rf docker-stacks.wiki\n", 198 | "!git clone $wiki_git" 199 | ] 200 | }, 201 | { 202 | "cell_type": "markdown", 203 | "metadata": {}, 204 | "source": [ 205 | "Read the build page markdown." 206 | ] 207 | }, 208 | { 209 | "cell_type": "code", 210 | "execution_count": null, 211 | "metadata": { 212 | "collapsed": true 213 | }, 214 | "outputs": [], 215 | "source": [ 216 | "# POST /tag\n", 217 | "with open('docker-stacks.wiki/Docker-build-history.md') as f:\n", 218 | " lines = f.readlines()" 219 | ] 220 | }, 221 | { 222 | "cell_type": "markdown", 223 | "metadata": {}, 224 | "source": [ 225 | "Find the start of the table." 226 | ] 227 | }, 228 | { 229 | "cell_type": "code", 230 | "execution_count": null, 231 | "metadata": { 232 | "collapsed": false 233 | }, 234 | "outputs": [], 235 | "source": [ 236 | "# POST /tag\n", 237 | "for table_top_i, line in enumerate(lines):\n", 238 | " if line.startswith('|--'):\n", 239 | " break\n", 240 | "else:\n", 241 | " requests.post(callback_url, json={\n", 242 | " 'state': 'failure',\n", 243 | " 'description': 'could not locate table on wiki page',\n", 244 | " 'context' : 'docker-stacks-webhook',\n", 245 | " 'target_url' : 'https://github.com/jupyter/docker-stacks/wiki/Docker-build-history'\n", 246 | " })\n", 247 | " raise RuntimeError('wiki table missing')" 248 | ] 249 | }, 250 | { 251 | "cell_type": "markdown", 252 | "metadata": {}, 253 | "source": [ 254 | "Format the text we want to put into the wiki table row." 255 | ] 256 | }, 257 | { 258 | "cell_type": "code", 259 | "execution_count": null, 260 | "metadata": { 261 | "collapsed": false 262 | }, 263 | "outputs": [], 264 | "source": [ 265 | "# POST /tag\n", 266 | "pushed_at_dt = dt.datetime.fromtimestamp(pushed_at_ts)\n", 267 | "pushed_at = pushed_at_dt.strftime('%b. %d, %Y')\n", 268 | "commit_url = commit_url_tmpl.format(sha=tag)\n", 269 | "commit_msg = commit_resp.json()['commit']['message'].replace('\\n', ' ')\n", 270 | "row = row_tmpl.format(pushed_at=pushed_at, sha=tag, commit_url=commit_url, commit_msg=commit_msg)\n", 271 | "row" 272 | ] 273 | }, 274 | { 275 | "cell_type": "markdown", 276 | "metadata": {}, 277 | "source": [ 278 | "Insert the table row." 
279 | ] 280 | }, 281 | { 282 | "cell_type": "code", 283 | "execution_count": null, 284 | "metadata": { 285 | "collapsed": false 286 | }, 287 | "outputs": [], 288 | "source": [ 289 | "# POST /tag\n", 290 | "lines.insert(table_top_i+1, row)" 291 | ] 292 | }, 293 | { 294 | "cell_type": "markdown", 295 | "metadata": {}, 296 | "source": [ 297 | "Write the file back out." 298 | ] 299 | }, 300 | { 301 | "cell_type": "code", 302 | "execution_count": null, 303 | "metadata": { 304 | "collapsed": true 305 | }, 306 | "outputs": [], 307 | "source": [ 308 | "# POST /tag\n", 309 | "with open('docker-stacks.wiki/Docker-build-history.md', 'w') as f:\n", 310 | " f.writelines(lines)" 311 | ] 312 | }, 313 | { 314 | "cell_type": "markdown", 315 | "metadata": {}, 316 | "source": [ 317 | "Commit and push." 318 | ] 319 | }, 320 | { 321 | "cell_type": "code", 322 | "execution_count": null, 323 | "metadata": { 324 | "collapsed": false 325 | }, 326 | "outputs": [], 327 | "source": [ 328 | "# POST /tag\n", 329 | "!cd docker-stacks.wiki/ && \\\n", 330 | " git add -A && \\\n", 331 | " git commit -m 'Add build $tag' && \\\n", 332 | " git push origin master" 333 | ] 334 | }, 335 | { 336 | "cell_type": "markdown", 337 | "metadata": {}, 338 | "source": [ 339 | "Tell Docker Hub we succeeded." 340 | ] 341 | }, 342 | { 343 | "cell_type": "code", 344 | "execution_count": null, 345 | "metadata": { 346 | "collapsed": false 347 | }, 348 | "outputs": [], 349 | "source": [ 350 | "# POST /tag\n", 351 | "resp = requests.post(callback_url, json={\n", 352 | " 'state': 'success',\n", 353 | " 'description': 'updated docker-stacks wiki build page',\n", 354 | " 'context' : 'docker-stacks-webhook',\n", 355 | " 'target_url' : 'https://github.com/jupyter/docker-stacks/wiki/Docker-build-history'\n", 356 | "})\n", 357 | "\n", 358 | "print(resp.status_code)" 359 | ] 360 | } 361 | ], 362 | "metadata": { 363 | "kernelspec": { 364 | "display_name": "Python 3", 365 | "language": "python", 366 | "name": "python3" 367 | }, 368 | "language_info": { 369 | "codemirror_mode": { 370 | "name": "ipython", 371 | "version": 3 372 | }, 373 | "file_extension": ".py", 374 | "mimetype": "text/x-python", 375 | "name": "python", 376 | "nbconvert_exporter": "python", 377 | "pygments_lexer": "ipython3", 378 | "version": "3.5.1" 379 | } 380 | }, 381 | "nbformat": 4, 382 | "nbformat_minor": 0 383 | } 384 | -------------------------------------------------------------------------------- /pyspark-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/pyspark-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/pyspark-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/pyspark-notebook.svg)](https://microbadger.com/images/jupyter/pyspark-notebook "jupyter/pyspark-notebook image metadata") 2 | 3 | # Jupyter Notebook Python, Spark, Mesos Stack 4 | 5 | ## What it Gives You 6 | 7 | * Jupyter Notebook 5.2.x 8 | * Conda Python 3.x environment 9 | * pyspark, pandas, matplotlib, scipy, seaborn, scikit-learn pre-installed 10 | * Spark 2.2.0 with Hadoop 2.7 for use in local mode or to connect to a cluster of Spark workers 11 | * Mesos client 1.2 binary that can communicate with a Mesos master 12 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 13 | * [tini](https://github.com/krallin/tini) as the container entrypoint and 
[start-notebook.sh](../base-notebook/start-notebook.sh) as the default command 14 | * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 15 | * A [start.sh](../base-notebook/start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 16 | * Options for a self-signed HTTPS certificate and passwordless `sudo` 17 | 18 | ## Basic Use 19 | 20 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 21 | 22 | ``` 23 | docker run -it --rm -p 8888:8888 jupyter/pyspark-notebook 24 | ``` 25 | 26 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 27 | 28 | ## Using Spark Local Mode 29 | 30 | This configuration is nice for using Spark on small, local data. 31 | 32 | 0. Run the container as shown above. 33 | 1. Open a Python 2 or 3 notebook. 34 | 2. Create a `SparkContext` configured for local mode. 35 | 36 | For example, the first few cells in the notebook might read: 37 | 38 | ```python 39 | import pyspark 40 | sc = pyspark.SparkContext('local[*]') 41 | 42 | # do something to prove it works 43 | rdd = sc.parallelize(range(1000)) 44 | rdd.takeSample(False, 5) 45 | ``` 46 | 47 | ## Connecting to a Spark Cluster on Mesos 48 | 49 | This configuration allows your compute cluster to scale with your data. 50 | 51 | 0. [Deploy Spark on Mesos](http://spark.apache.org/docs/latest/running-on-mesos.html). 52 | 1. Configure each slave with [the `--no-switch_user` flag](https://open.mesosphere.com/reference/mesos-slave/) or create the `jovyan` user on every slave node. 53 | 2. Ensure Python 2.x and/or 3.x and any Python libraries you wish to use in your Spark lambda functions are installed on your Spark workers. 54 | 3. Run the Docker container with `--net=host` in a location that is network addressable by all of your Spark workers. (This is a [Spark networking requirement](http://spark.apache.org/docs/latest/cluster-overview.html#components).) 55 | * NOTE: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See https://github.com/jupyter/docker-stacks/issues/64 for details. 56 | 4. Open a Python 2 or 3 notebook. 57 | 5. Create a `SparkConf` instance in a new notebook pointing to your Mesos master node (or Zookeeper instance) and Spark binary package location. 58 | 6. Create a `SparkContext` using this configuration.
59 | 60 | For example, the first few cells in a Python 3 notebook might read: 61 | 62 | ```python 63 | import os 64 | # make sure pyspark tells workers to use python3 not 2 if both are installed 65 | os.environ['PYSPARK_PYTHON'] = '/usr/bin/python3' 66 | 67 | import pyspark 68 | conf = pyspark.SparkConf() 69 | 70 | # point to mesos master or zookeeper entry (e.g., zk://10.10.10.10:2181/mesos) 71 | conf.setMaster("mesos://10.10.10.10:5050") 72 | # point to spark binary package in HDFS or on local filesystem on all slave 73 | # nodes (e.g., file:///opt/spark/spark-2.2.0-bin-hadoop2.7.tgz) 74 | conf.set("spark.executor.uri", "hdfs://10.10.10.10/spark/spark-2.2.0-bin-hadoop2.7.tgz") 75 | # set other options as desired 76 | conf.set("spark.executor.memory", "8g") 77 | conf.set("spark.core.connection.ack.wait.timeout", "1200") 78 | 79 | # create the context 80 | sc = pyspark.SparkContext(conf=conf) 81 | 82 | # do something to prove it works 83 | rdd = sc.parallelize(range(100000000)) 84 | rdd.sumApprox(3) 85 | ``` 86 | 87 | To use Python 2 in the notebook and on the workers, change the `PYSPARK_PYTHON` environment variable to point to the location of the Python 2.x interpreter binary. If you leave this environment variable unset, it defaults to `python`. 88 | 89 | Of course, all of this can be hidden in an [IPython kernel startup script](http://ipython.org/ipython-doc/stable/development/config.html?highlight=startup#startup-files), but "explicit is better than implicit." :) 90 | 91 | ## Connecting to a Spark Cluster on Standalone Mode 92 | 93 | Connecting to a Spark cluster in standalone mode requires the following steps: 94 | 95 | 0. Verify that the Docker image (check the Dockerfile) and the Spark cluster being deployed run the same version of Spark. 96 | 1. [Deploy Spark on Standalone Mode](http://spark.apache.org/docs/latest/spark-standalone.html). 97 | 2. Run the Docker container with `--net=host` in a location that is network addressable by all of your Spark workers. (This is a [Spark networking requirement](http://spark.apache.org/docs/latest/cluster-overview.html#components).) 98 | * NOTE: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See https://github.com/jupyter/docker-stacks/issues/64 for details. 99 | 3. The language-specific instructions are almost the same as for Mesos above; the only difference is that the master URL now looks like `spark://10.10.10.10:7077`. 100 | 101 | ## Notebook Options 102 | 103 | The Docker container executes the [`start-notebook.sh`](../base-notebook/start-notebook.sh) script by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes `jupyter notebook`. 104 | 105 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container.
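The password example in the next paragraph uses a hash produced by `IPython.lib.passwd()`. A minimal sketch of generating one, assuming IPython is available (inside the container or in any local Python environment) and using a placeholder passphrase:

```python
# Generate a salted hash suitable for the --NotebookApp.password option.
# 'my-secret-passphrase' is a placeholder; substitute your own value.
from IPython.lib import passwd

print(passwd('my-secret-passphrase'))  # prints something like 'sha1:<salt>:<hash>'
```

Copy the printed value into the `--NotebookApp.password` option shown below.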
For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 106 | 107 | ``` 108 | docker run -d -p 8888:8888 jupyter/pyspark-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 109 | ``` 110 | 111 | For example, to set the base URL of the notebook server, run the following: 112 | 113 | ``` 114 | docker run -d -p 8888:8888 jupyter/pyspark-notebook start-notebook.sh --NotebookApp.base_url=/some/path 115 | ``` 116 | 117 | For example, to disable all authentication mechanisms (not a recommended practice): 118 | 119 | ``` 120 | docker run -d -p 8888:8888 jupyter/pyspark-notebook start-notebook.sh --NotebookApp.token='' 121 | ``` 122 | 123 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 124 | 125 | ## Docker Options 126 | 127 | You may customize the execution of the Docker container and the command it is running with the following optional arguments. 128 | 129 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 130 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 131 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 132 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 133 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as a folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 134 | 135 | ## SSL Certificates 136 | 137 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 138 | 139 | ``` 140 | docker run -d -p 8888:8888 \ 141 | -v /some/host/folder:/etc/ssl/notebook \ 142 | jupyter/pyspark-notebook start-notebook.sh \ 143 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key \ 144 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 145 | ``` 146 | 147 | Alternatively, you may mount a single PEM file containing both the key and certificate.
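Such a PEM file is simply the key and certificate concatenated into one file (e.g., `cat notebook.key notebook.crt > notebook.pem` on the host). A minimal Python sketch of the same thing, assuming the key and certificate live in the current directory under these placeholder names:

```python
# Concatenate an existing key and certificate into a single PEM file.
# 'notebook.key' and 'notebook.crt' are placeholder paths; adjust as needed.
with open('notebook.pem', 'w') as pem:
    for part in ('notebook.key', 'notebook.crt'):
        with open(part) as src:
            pem.write(src.read())
```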
For example: 148 | 149 | ``` 150 | docker run -d -p 8888:8888 \ 151 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 152 | jupyter/pyspark-notebook start-notebook.sh \ 153 | --NotebookApp.certfile=/etc/ssl/notebook.pem 154 | ``` 155 | 156 | In either case, Jupyter Notebook expects the key and certificate to be base64-encoded text files. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 157 | 158 | For additional information about using SSL, see the following: 159 | 160 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 161 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 162 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 163 | 164 | 165 | ## Conda Environments 166 | 167 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 168 | 169 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in the default environment. You can install packages into it using commands like the following: 170 | 171 | ``` 172 | # install a package into the default (python 3.x) environment 173 | pip install some-package 174 | conda install some-package 175 | ``` 176 | 177 | 178 | ## Alternative Commands 179 | 180 | 181 | ### start.sh 182 | 183 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 184 | 185 | ``` 186 | docker run -it --rm jupyter/pyspark-notebook start.sh ipython 187 | ``` 188 | 189 | Or, to run JupyterLab instead of the classic notebook, run the following: 190 | 191 | ``` 192 | docker run -it --rm -p 8888:8888 jupyter/pyspark-notebook start.sh jupyter lab 193 | ``` 194 | 195 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 196 | 197 | ### Others 198 | 199 | You can bypass the provided scripts and specify an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`).
200 | -------------------------------------------------------------------------------- /all-spark-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/all-spark-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/all-spark-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/all-spark-notebook.svg)](https://microbadger.com/images/jupyter/all-spark-notebook "jupyter/all-spark-notebook image metadata") 2 | 3 | 4 | # Jupyter Notebook Python, Scala, R, Spark, Mesos Stack 5 | 6 | ## What it Gives You 7 | 8 | * Jupyter Notebook 5.2.x 9 | * Conda Python 3.x environment 10 | * Conda R 3.3.x environment 11 | * Scala 2.11.x 12 | * pyspark, pandas, matplotlib, scipy, seaborn, scikit-learn pre-installed for Python 13 | * ggplot2, rcurl preinstalled for R 14 | * Spark 2.2.0 with Hadoop 2.7 for use in local mode or to connect to a cluster of Spark workers 15 | * Mesos client 1.2 binary that can communicate with a Mesos master 16 | * spylon-kernel 17 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 18 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command 19 | * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 20 | * A [start.sh](../base-notebook/start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 21 | * Options for a self-signed HTTPS certificate and passwordless `sudo` 22 | 23 | ## Basic Use 24 | 25 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 26 | 27 | ``` 28 | docker run -it --rm -p 8888:8888 jupyter/all-spark-notebook 29 | ``` 30 | 31 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 32 | 33 | ## Using Spark Local Mode 34 | 35 | This configuration is nice for using Spark on small, local data. 36 | 37 | ### In a Python Notebook 38 | 39 | 0. Run the container as shown above. 40 | 1. Open a Python 2 or 3 notebook. 41 | 2. Create a `SparkContext` configured for local mode. 42 | 43 | For example, the first few cells in a notebook might read: 44 | 45 | ```python 46 | import pyspark 47 | sc = pyspark.SparkContext('local[*]') 48 | 49 | # do something to prove it works 50 | rdd = sc.parallelize(range(1000)) 51 | rdd.takeSample(False, 5) 52 | ``` 53 | 54 | ### In an R Notebook 55 | 56 | 0. Run the container as shown above. 57 | 1. Open an R notebook. 58 | 2. Initialize a `sparkR` session for local mode. 59 | 60 | For example, the first few cells in an R notebook might read: 61 | 62 | ``` 63 | library(SparkR) 64 | 65 | as <- sparkR.session("local[*]") 66 | 67 | # do something to prove it works 68 | df <- as.DataFrame(iris) 69 | head(filter(df, df$Petal_Width > 0.2)) 70 | ``` 71 | 72 | ### In an Apache Toree - Scala Notebook 73 | 74 | 0. Run the container as shown above. 75 | 1. Open an Apache Toree - Scala notebook. 76 | 2. Use the pre-configured `SparkContext` in variable `sc`.
77 | 78 | For example: 79 | 80 | ``` 81 | val rdd = sc.parallelize(0 to 999) 82 | rdd.takeSample(false, 5) 83 | ``` 84 | 85 | ### In spylon-kernel - Scala Notebook 86 | 87 | 0. Run the container as shown above. 88 | 1. Open a spylon-kernel notebook. 89 | 2. Lazily instantiate the `SparkContext` by running any cell without magics. 90 | 91 | For example: 92 | 93 | ``` 94 | val rdd = sc.parallelize(0 to 999) 95 | rdd.takeSample(false, 5) 96 | ``` 97 | 98 | ## Connecting to a Spark Cluster on Mesos 99 | 100 | This configuration allows your compute cluster to scale with your data. 101 | 102 | 0. [Deploy Spark on Mesos](http://spark.apache.org/docs/latest/running-on-mesos.html). 103 | 1. Configure each slave with [the `--no-switch_user` flag](https://open.mesosphere.com/reference/mesos-slave/) or create the `jovyan` user on every slave node. 104 | 2. Run the Docker container with `--net=host` in a location that is network addressable by all of your Spark workers. (This is a [Spark networking requirement](http://spark.apache.org/docs/latest/cluster-overview.html#components).) 105 | * NOTE: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See https://github.com/jupyter/docker-stacks/issues/64 for details. 106 | 3. Follow the language-specific instructions below. 107 | 108 | ### In a Python Notebook 109 | 110 | 0. Open a Python 2 or 3 notebook. 111 | 1. Create a `SparkConf` instance in a new notebook pointing to your Mesos master node (or Zookeeper instance) and Spark binary package location. 112 | 2. Create a `SparkContext` using this configuration. 113 | 114 | For example, the first few cells in a Python 3 notebook might read: 115 | 116 | ```python 117 | import os 118 | # make sure pyspark tells workers to use python3 not 2 if both are installed 119 | os.environ['PYSPARK_PYTHON'] = '/usr/bin/python3' 120 | 121 | import pyspark 122 | conf = pyspark.SparkConf() 123 | 124 | # point to mesos master or zookeeper entry (e.g., zk://10.10.10.10:2181/mesos) 125 | conf.setMaster("mesos://10.10.10.10:5050") 126 | # point to spark binary package in HDFS or on local filesystem on all slave 127 | # nodes (e.g., file:///opt/spark/spark-2.2.0-bin-hadoop2.7.tgz) 128 | conf.set("spark.executor.uri", "hdfs://10.10.10.10/spark/spark-2.2.0-bin-hadoop2.7.tgz") 129 | # set other options as desired 130 | conf.set("spark.executor.memory", "8g") 131 | conf.set("spark.core.connection.ack.wait.timeout", "1200") 132 | 133 | # create the context 134 | sc = pyspark.SparkContext(conf=conf) 135 | 136 | # do something to prove it works 137 | rdd = sc.parallelize(range(100000000)) 138 | rdd.sumApprox(3) 139 | ``` 140 | 141 | To use Python 2 in the notebook and on the workers, change the `PYSPARK_PYTHON` environment variable to point to the location of the Python 2.x interpreter binary. If you leave this environment variable unset, it defaults to `python`. 142 | 143 | Of course, all of this can be hidden in an [IPython kernel startup script](http://ipython.org/ipython-doc/stable/development/config.html?highlight=startup#startup-files), but "explicit is better than implicit." :) 144 | 145 | ### In an R Notebook 146 | 147 | 0. Run the container as shown above. 148 | 1. Open an R notebook. 149 | 2. Initialize a `sparkR` session pointing to your Mesos master node (or Zookeeper instance) and Spark binary package location. 150 | 3. Initialize `sparkRSQL`.
151 | 152 | For example, the first few cells in an R notebook might read: 153 | 154 | ``` 155 | library(SparkR) 156 | 157 | # point to mesos master or zookeeper entry (e.g., zk://10.10.10.10:2181/mesos) 158 | # as the first argument 159 | # point to spark binary package in HDFS or on local filesystem on all slave 160 | # nodes (e.g., file:///opt/spark/spark-2.2.0-bin-hadoop2.7.tgz) in sparkEnvir 161 | # set other options in sparkEnvir 162 | sc <- sparkR.session("mesos://10.10.10.10:5050", sparkEnvir=list( 163 | spark.executor.uri="hdfs://10.10.10.10/spark/spark-2.2.0-bin-hadoop2.7.tgz", 164 | spark.executor.memory="8g" 165 | ) 166 | ) 167 | 168 | # do something to prove it works 169 | data(iris) 170 | df <- as.DataFrame(iris) 171 | head(filter(df, df$Petal_Width > 0.2)) 172 | ``` 173 | 174 | ### In an Apache Toree - Scala Notebook 175 | 176 | 0. Open a terminal via *New -> Terminal* in the notebook interface. 177 | 1. Add information about your cluster to the `SPARK_OPTS` environment variable when running the container. 178 | 2. Open an Apache Toree - Scala notebook. 179 | 3. Use the pre-configured `SparkContext` in variable `sc` or `SparkSession` in variable `spark`. 180 | 181 | The Apache Toree kernel automatically creates a `SparkContext` when it starts based on configuration information from its command line arguments and environment variables. You can pass information about your Mesos cluster via the `SPARK_OPTS` environment variable when you spawn a container. 182 | 183 | For instance, to pass information about a Mesos master, Spark binary location in HDFS, and executor options, you could start the container like so: 184 | 185 | `docker run -d -p 8888:8888 -e SPARK_OPTS='--master=mesos://10.10.10.10:5050 \ 186 | --spark.executor.uri=hdfs://10.10.10.10/spark/spark-2.2.0-bin-hadoop2.7.tgz \ 187 | --spark.executor.memory=8g' jupyter/all-spark-notebook` 188 | 189 | Note that this is the same information expressed in a notebook in the Python case above. Once the kernel spec has your cluster information, you can test your cluster in an Apache Toree notebook like so: 190 | 191 | ``` 192 | // should print the value of --master in the kernel spec 193 | println(sc.master) 194 | 195 | // do something to prove it works 196 | val rdd = sc.parallelize(0 to 99999999) 197 | rdd.sum() 198 | ``` 199 | ## Connecting to a Spark Cluster on Standalone Mode 200 | 201 | Connecting to a Spark cluster in standalone mode requires the following steps: 202 | 203 | 0. Verify that the Docker image (check the Dockerfile) and the Spark cluster being deployed run the same version of Spark. 204 | 1. [Deploy Spark on Standalone Mode](http://spark.apache.org/docs/latest/spark-standalone.html). 205 | 2. Run the Docker container with `--net=host` in a location that is network addressable by all of your Spark workers. (This is a [Spark networking requirement](http://spark.apache.org/docs/latest/cluster-overview.html#components).) 206 | * NOTE: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See https://github.com/jupyter/docker-stacks/issues/64 for details. 207 | 3. The language-specific instructions are almost the same as for Mesos above; the only difference is that the master URL now looks like `spark://10.10.10.10:7077`. 208 | 209 | ## Notebook Options 210 | 211 | The Docker container executes the [`start-notebook.sh`](../base-notebook/start-notebook.sh) script by default.
The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 212 | 213 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed ([how-to](http://jupyter-notebook.readthedocs.io/en/latest/public_server.html#preparing-a-hashed-password)) instead of the default token, run the following: 214 | 215 | ``` 216 | docker run -d -p 8888:8888 jupyter/all-spark-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 217 | ``` 218 | 219 | For example, to set the base URL of the notebook server, run the following: 220 | 221 | ``` 222 | docker run -d -p 8888:8888 jupyter/all-spark-notebook start-notebook.sh --NotebookApp.base_url=/some/path 223 | ``` 224 | 225 | For example, to disable all authentication mechanisms (not a recommended practice): 226 | 227 | ``` 228 | docker run -d -p 8888:8888 jupyter/all-spark-notebook start-notebook.sh --NotebookApp.token='' 229 | ``` 230 | 231 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 232 | 233 | ## Docker Options 234 | 235 | You may customize the execution of the Docker container and the command it is running with the following optional arguments. 236 | 237 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 238 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 239 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 240 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 241 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 242 | 243 | ## SSL Certificates 244 | 245 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. 
For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 246 | 247 | ``` 248 | docker run -d -p 8888:8888 \ 249 | -v /some/host/folder:/etc/ssl/notebook \ 250 | jupyter/all-spark-notebook start-notebook.sh \ 251 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key \ 252 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 253 | ``` 254 | 255 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 256 | 257 | ``` 258 | docker run -d -p 8888:8888 \ 259 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 260 | jupyter/all-spark-notebook start-notebook.sh \ 261 | --NotebookApp.certfile=/etc/ssl/notebook.pem 262 | ``` 263 | 264 | In either case, Jupyter Notebook expects the key and certificate to be base64-encoded text files. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 265 | 266 | For additional information about using SSL, see the following: 267 | 268 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 269 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 270 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 271 | 272 | 273 | ## Conda Environments 274 | 275 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 276 | 277 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in the default environment. You can install packages into it using commands like the following: 278 | 279 | ``` 280 | # install a package into the default (python 3.x) environment 281 | pip install some-package 282 | conda install some-package 283 | ``` 284 | 285 | 286 | ## Alternative Commands 287 | 288 | ### start.sh 289 | 290 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 291 | 292 | ``` 293 | docker run -it --rm jupyter/all-spark-notebook start.sh ipython 294 | ``` 295 | 296 | Or, to run JupyterLab instead of the classic notebook, run the following: 297 | 298 | ``` 299 | docker run -it --rm -p 8888:8888 jupyter/all-spark-notebook start.sh jupyter lab 300 | ``` 301 | 302 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 303 | 304 | ### Others 305 | 306 | You can bypass the provided scripts and specify an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 307 | --------------------------------------------------------------------------------