├── docs ├── contributing │ ├── recipes.md │ ├── tests.md │ ├── stacks.md │ └── packages.md ├── Makefile ├── make.bat ├── index.rst ├── conf.py ├── images │ └── inherit.svg └── using │ ├── common.md │ ├── specifics.md │ └── running.md ├── examples ├── docker-compose │ ├── notebook │ │ ├── README.md │ │ ├── build.sh │ │ ├── down.sh │ │ ├── notebook.yml │ │ ├── Dockerfile │ │ ├── secure-notebook.yml │ │ ├── env.sh │ │ ├── letsencrypt-notebook.yml │ │ └── up.sh │ ├── bin │ │ ├── vbox.sh │ │ ├── softlayer.sh │ │ ├── sl-dns.sh │ │ └── letsencrypt.sh │ └── README.md ├── source-to-image │ ├── save-artifacts │ ├── run │ ├── assemble │ └── README.md ├── make-deploy │ ├── virtualbox.makefile │ ├── Dockerfile │ ├── self-signed.makefile │ ├── softlayer.makefile │ ├── Makefile │ ├── letsencrypt.makefile │ └── README.md └── openshift │ ├── templates.json │ └── README.md ├── r-notebook ├── .dockerignore ├── hooks │ └── post_push ├── Dockerfile └── README.md ├── base-notebook ├── .dockerignore ├── hooks │ └── post_push ├── start-notebook.sh ├── fix-permissions ├── jupyter_notebook_config.py ├── start-singleuser.sh ├── Dockerfile.ppc64le ├── test │ └── test_container_options.py ├── Dockerfile.ppc64le.patch ├── Dockerfile ├── start.sh └── README.md ├── internal ├── docker-stacks-webhook │ ├── runtime.txt │ ├── requirements.txt │ ├── conda_requirements.txt │ └── manifest.yml ├── README.md ├── inherit-diagram.png └── inherit-diagram.svg ├── scipy-notebook ├── .dockerignore ├── hooks │ └── post_push ├── Dockerfile └── README.md ├── all-spark-notebook ├── .dockerignore ├── kernel.json ├── hooks │ └── post_push └── Dockerfile ├── datascience-notebook ├── .dockerignore ├── hooks │ └── post_push ├── Dockerfile └── README.md ├── minimal-notebook ├── .dockerignore ├── hooks │ └── post_push ├── Dockerfile └── README.md ├── pyspark-notebook ├── .dockerignore ├── hooks │ └── post_push └── Dockerfile ├── tensorflow-notebook ├── .dockerignore ├── hooks │ └── post_push ├── Dockerfile └── README.md ├── readthedocs.yml ├── requirements-dev.txt ├── .travis.yml ├── test └── test_notebook.py ├── .vscode └── settings.json ├── .gitignore ├── .github └── issue_template.md ├── Makefile ├── LICENSE.md ├── conftest.py └── README.md /docs/contributing/recipes.md: -------------------------------------------------------------------------------- 1 | # Recipes -------------------------------------------------------------------------------- /docs/contributing/tests.md: -------------------------------------------------------------------------------- 1 | # Image Tests -------------------------------------------------------------------------------- /examples/docker-compose/notebook/README.md: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /docs/contributing/stacks.md: -------------------------------------------------------------------------------- 1 | # Community Stacks -------------------------------------------------------------------------------- /r-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /base-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- 
/internal/docker-stacks-webhook/runtime.txt: -------------------------------------------------------------------------------- 1 | python-3.19.0 2 | -------------------------------------------------------------------------------- /scipy-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /all-spark-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /datascience-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /minimal-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /pyspark-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /tensorflow-notebook/.dockerignore: -------------------------------------------------------------------------------- 1 | # Documentation 2 | README.md 3 | -------------------------------------------------------------------------------- /readthedocs.yml: -------------------------------------------------------------------------------- 1 | requirements_file: requirements-dev.txt 2 | python: 3 | version: 3 -------------------------------------------------------------------------------- /docs/contributing/packages.md: -------------------------------------------------------------------------------- 1 | # Packages 2 | 3 | ## Package Updates 4 | 5 | ## New Packages -------------------------------------------------------------------------------- /examples/source-to-image/save-artifacts: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | tar cf - --files-from /dev/null 4 | -------------------------------------------------------------------------------- /internal/README.md: -------------------------------------------------------------------------------- 1 | Stuff to help with building Docker images. You probably don't want to use anything in here. 2 | -------------------------------------------------------------------------------- /internal/inherit-diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mimoralea/docker-stacks/master/internal/inherit-diagram.png -------------------------------------------------------------------------------- /requirements-dev.txt: -------------------------------------------------------------------------------- 1 | docker 2 | pytest 3 | recommonmark==0.4.0 4 | requests 5 | sphinx>=1.6 6 | sphinx_rtd_theme 7 | -------------------------------------------------------------------------------- /examples/source-to-image/run: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Start up the notebook instance. 
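# (Note: any arguments passed to this S2I run script are forwarded unchanged to
# start-notebook.sh via "$@" below, e.g. a hypothetical --NotebookApp.token='' flag.)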
4 | 5 | exec start-notebook.sh "$@" 6 | -------------------------------------------------------------------------------- /internal/docker-stacks-webhook/requirements.txt: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | git+https://github.com/parente/kernel_gateway@concatenate-cells 4 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | python: 3 | - 3.6 4 | sudo: required 5 | services: 6 | - docker 7 | install: 8 | - make dev-env 9 | script: 10 | - make build-test-all DARGS="--build-arg TEST_ONLY_BUILD=1" 11 | -------------------------------------------------------------------------------- /all-spark-notebook/kernel.json: -------------------------------------------------------------------------------- 1 | { 2 | "display_name": "Apache Toree (Scala 2.10.4)", 3 | "language": "scala", 4 | "argv": [ 5 | "/opt/toree-kernel/bin/toree-kernel", 6 | "--profile", 7 | "{connection_file}" 8 | ] 9 | } 10 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 6 | 7 | # Setup environment 8 | source "$DIR/env.sh" 9 | 10 | # Build the notebook image 11 | docker-compose -f "$DIR/notebook.yml" build 12 | -------------------------------------------------------------------------------- /internal/docker-stacks-webhook/conda_requirements.txt: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | # NOTE: jupyter_kernel_gateway uses the notebook package. We get a 4 | # speed up if we pre-install the notebook package using conda since 5 | # we'll get prebuilt binaries for all its dependencies like pyzmq. 6 | notebook==4.1 7 | -------------------------------------------------------------------------------- /examples/docker-compose/bin/vbox.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | # Set reasonable default VM settings 6 | : ${VIRTUALBOX_CPUS:=4} 7 | export VIRTUALBOX_CPUS 8 | : ${VIRTUALBOX_MEMORY_SIZE:=4096} 9 | export VIRTUALBOX_MEMORY_SIZE 10 | 11 | docker-machine create --driver virtualbox "$@" 12 | -------------------------------------------------------------------------------- /test/test_notebook.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
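# Note: `container` and `http_client` are pytest fixtures assumed to be provided by the
# repository-level conftest.py (its contents are not shown here); `container.run()`
# presumably starts the image named by the TEST_IMAGE variable set by the Makefile's test targets.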
3 | 4 | def test_secured_server(container, http_client): 5 | """Notebook server should eventually request user login.""" 6 | container.run() 7 | resp = http_client.get('http://localhost:8888') 8 | resp.raise_for_status() 9 | assert 'login_submit' in resp.text 10 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/down.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 6 | 7 | # Setup environment 8 | source "$DIR/env.sh" 9 | 10 | # Bring down the notebook container, using container name as project name 11 | docker-compose -f "$DIR/notebook.yml" -p "$NAME" down 12 | -------------------------------------------------------------------------------- /r-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /base-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /minimal-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /pyspark-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 
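# NEXT_BUILD_TRIGGERS is assumed to be a comma-separated list of Docker Hub build
# trigger URLs, e.g. (hypothetical):
# NEXT_BUILD_TRIGGERS="https://registry.hub.docker.com/u/example/image-a/trigger/TOKEN/,https://registry.hub.docker.com/u/example/image-b/trigger/TOKEN/"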
10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /scipy-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /all-spark-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /datascience-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /tensorflow-notebook/hooks/post_push: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Tag the latest build with the short git sha. Push the tag in addition 4 | # to the "latest" tag already pushed. 5 | GIT_SHA_TAG=${SOURCE_COMMIT:0:12} 6 | docker tag $IMAGE_NAME $DOCKER_REPO:$GIT_SHA_TAG 7 | docker push $DOCKER_REPO:$GIT_SHA_TAG 8 | 9 | # Invoke all downstream build triggers. 10 | for url in $(echo $NEXT_BUILD_TRIGGERS | sed "s/,/ /g") 11 | do 12 | curl -X POST $url 13 | done -------------------------------------------------------------------------------- /examples/docker-compose/notebook/notebook.yml: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | version: "2" 5 | 6 | services: 7 | notebook: 8 | build: . 9 | image: my-notebook 10 | container_name: ${NAME} 11 | volumes: 12 | - "work:/home/jovyan/work" 13 | ports: 14 | - "${PORT}:8888" 15 | 16 | volumes: 17 | work: 18 | external: 19 | name: ${WORK_VOLUME} 20 | -------------------------------------------------------------------------------- /examples/make-deploy/virtualbox.makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
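# Example use (values are illustrative): make virtualbox-vm NAME=my-docker-host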
3 | 4 | virtualbox-vm: export VIRTUALBOX_CPU_COUNT?=4 5 | virtualbox-vm: export VIRTUALBOX_DISK_SIZE?=100000 6 | virtualbox-vm: export VIRTUALBOX_MEMORY_SIZE?=4096 7 | virtualbox-vm: check 8 | @test -n "$(NAME)" || \ 9 | (echo "ERROR: NAME not defined (make help)"; exit 1) 10 | @docker-machine create -d virtualbox $(NAME) 11 | -------------------------------------------------------------------------------- /tensorflow-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | FROM jupyter/scipy-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | # Install Tensorflow 8 | RUN conda install --quiet --yes \ 9 | 'tensorflow=1.3*' \ 10 | 'keras=2.0*' && \ 11 | conda clean -tipsy && \ 12 | fix-permissions $CONDA_DIR && \ 13 | fix-permissions /home/$NB_USER 14 | -------------------------------------------------------------------------------- /.vscode/settings.json: -------------------------------------------------------------------------------- 1 | { 2 | "cSpell.enabledLanguageIds": [ 3 | "c", 4 | "cpp", 5 | "csharp", 6 | "go", 7 | "handlebars", 8 | "javascript", 9 | "javascriptreact", 10 | "json", 11 | "latex", 12 | "markdown", 13 | "php", 14 | "plaintext", 15 | "python", 16 | "restructuredtext", 17 | "text", 18 | "typescript", 19 | "typescriptreact", 20 | "yml" 21 | ] 22 | } -------------------------------------------------------------------------------- /examples/docker-compose/bin/softlayer.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | # Set default SoftLayer VM settings 6 | : ${SOFTLAYER_CPU:=4} 7 | export SOFTLAYER_CPU 8 | : ${SOFTLAYER_DISK_SIZE:=100} 9 | export SOFTLAYER_DISK_SIZE 10 | : ${SOFTLAYER_MEMORY:=4096} 11 | export SOFTLAYER_MEMORY 12 | : ${SOFTLAYER_REGION:=wdc01} 13 | export SOFTLAYER_REGION 14 | 15 | docker-machine create --driver softlayer "$@" 16 | -------------------------------------------------------------------------------- /examples/make-deploy/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | # Pick your favorite docker-stacks image 5 | FROM jupyter/minimal-notebook:2d125a7161b5 6 | 7 | USER jovyan 8 | 9 | # Add permanent pip/conda installs, data files, other user libs here 10 | # e.g., RUN pip install jupyter_dashboards 11 | 12 | USER root 13 | 14 | # Add permanent apt-get installs and other root commands here 15 | # e.g., RUN apt-get install npm nodejs 16 | -------------------------------------------------------------------------------- /examples/make-deploy/self-signed.makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
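# Example use (values are illustrative): make self-signed-notebook NAME=notebook PASSWORD=changeme PORT=443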
3 | 4 | self-signed-notebook: PORT?=443 5 | self-signed-notebook: NAME?=notebook 6 | self-signed-notebook: WORK_VOLUME?=$(NAME)-data 7 | self-signed-notebook: DOCKER_ARGS:=-e USE_HTTPS=yes \ 8 | -e PASSWORD=$(PASSWORD) 9 | self-signed-notebook: check 10 | @test -n "$(PASSWORD)" || \ 11 | (echo "ERROR: PASSWORD not defined or blank"; exit 1) 12 | $(RUN_NOTEBOOK) 13 | -------------------------------------------------------------------------------- /base-notebook/start-notebook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | set -e 6 | 7 | if [[ ! -z "${JUPYTERHUB_API_TOKEN}" ]]; then 8 | # launched by JupyterHub, use single-user entrypoint 9 | exec /usr/local/bin/start-singleuser.sh $* 10 | else 11 | if [[ ! -z "${JUPYTER_ENABLE_LAB}" ]]; then 12 | . /usr/local/bin/start.sh jupyter lab $* 13 | else 14 | . /usr/local/bin/start.sh jupyter notebook $* 15 | fi 16 | fi 17 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | # Pick your favorite docker-stacks image 5 | FROM jupyter/minimal-notebook:55d5ca6be183 6 | 7 | USER jovyan 8 | 9 | # Add permanent pip/conda installs, data files, other user libs here 10 | # e.g., RUN pip install jupyter_dashboards 11 | 12 | USER root 13 | 14 | # Add permanent apt-get installs and other root commands here 15 | # e.g., RUN apt-get install npm nodejs 16 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/secure-notebook.yml: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | version: "2" 5 | 6 | services: 7 | notebook: 8 | build: . 9 | image: my-notebook 10 | container_name: ${NAME} 11 | volumes: 12 | - "work:/home/jovyan/work" 13 | ports: 14 | - "${PORT}:8888" 15 | environment: 16 | USE_HTTPS: "yes" 17 | PASSWORD: ${PASSWORD} 18 | 19 | volumes: 20 | work: 21 | external: 22 | name: ${WORK_VOLUME} 23 | -------------------------------------------------------------------------------- /internal/docker-stacks-webhook/manifest.yml: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | --- 4 | applications: 5 | - name: docker-stacks-webhook 6 | memory: 128M 7 | instances: 1 8 | path: . 9 | buildpack: https://github.com/ihuston/python-conda-buildpack 10 | command: > 11 | jupyter-kernelgateway --KernelGatewayApp.port=$PORT 12 | --KernelGatewayApp.ip=0.0.0.0 13 | --KernelGatewayApp.api=notebook-http 14 | --KernelGatewayApp.seed_uri='./docker-stacks-webhook.ipynb' 15 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/env.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 
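# Any value below can be overridden by exporting the variable before invoking the
# helper scripts, e.g. (illustrative): NAME=team-notebook PORT=8080 ./up.sh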
4 | 5 | # Set default values for environment variables required by notebook compose 6 | # configuration file. 7 | 8 | # Container name 9 | : "${NAME:=my-notebook}" 10 | export NAME 11 | 12 | # Exposed container port 13 | : ${PORT:=80} 14 | export PORT 15 | 16 | # Container work volume name 17 | : "${WORK_VOLUME:=$NAME-work}" 18 | export WORK_VOLUME 19 | 20 | # Container secrets volume name 21 | : "${SECRETS_VOLUME:=$NAME-secrets}" 22 | export SECRETS_VOLUME 23 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | SPHINXPROJ = docker-stacks 8 | SOURCEDIR = . 9 | BUILDDIR = _build 10 | 11 | # Put it first so that "make" without argument is like "make help". 12 | help: 13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 14 | 15 | .PHONY: help Makefile 16 | 17 | # Catch-all target: route all unknown targets to Sphinx using the new 18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 19 | %: Makefile 20 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) -------------------------------------------------------------------------------- /examples/docker-compose/notebook/letsencrypt-notebook.yml: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | version: "2" 5 | 6 | services: 7 | notebook: 8 | build: . 9 | image: my-notebook 10 | container_name: ${NAME} 11 | volumes: 12 | - "work:/home/jovyan/work" 13 | - "secrets:/etc/letsencrypt" 14 | ports: 15 | - "${PORT}:8888" 16 | environment: 17 | USE_HTTPS: "yes" 18 | PASSWORD: ${PASSWORD} 19 | command: > 20 | start-notebook.sh 21 | --NotebookApp.certfile=/etc/letsencrypt/fullchain.pem 22 | --NotebookApp.keyfile=/etc/letsencrypt/privkey.pem 23 | 24 | volumes: 25 | work: 26 | external: 27 | name: ${WORK_VOLUME} 28 | secrets: 29 | external: 30 | name: ${SECRETS_VOLUME} 31 | -------------------------------------------------------------------------------- /examples/docker-compose/bin/sl-dns.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 
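# Example use (values are illustrative): ./sl-dns.sh my-machine example.com
# or, with SOFTLAYER_DOMAIN exported, simply: ./sl-dns.sh my-machine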
4 | 5 | set -e 6 | 7 | # User must have slcli installed 8 | which slcli > /dev/null || (echo "SoftLayer cli not found (pip install softlayer)"; exit 1) 9 | 10 | USAGE="Usage: `basename $0` machine_name [domain]" 11 | E_BADARGS=85 12 | 13 | # Machine name is first command line arg 14 | MACHINE_NAME=$1 && [ -z "$MACHINE_NAME" ] && echo "$USAGE" && exit $E_BADARGS 15 | 16 | # Use SOFTLAYER_DOMAIN env var if domain name not set as second arg 17 | DOMAIN="${2:-$SOFTLAYER_DOMAIN}" && [ -z "$DOMAIN" ] && \ 18 | echo "Must specify domain or set SOFTLAYER_DOMAIN environment varable" && \ 19 | echo "$USAGE" && exit $E_BADARGS 20 | 21 | IP=$(docker-machine ip "$MACHINE_NAME") 22 | 23 | slcli dns record-add $DOMAIN $MACHINE_NAME A $IP 24 | -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | pushd %~dp0 4 | 5 | REM Command file for Sphinx documentation 6 | 7 | if "%SPHINXBUILD%" == "" ( 8 | set SPHINXBUILD=sphinx-build 9 | ) 10 | set SOURCEDIR=. 11 | set BUILDDIR=_build 12 | set SPHINXPROJ=docker-stacks 13 | 14 | if "%1" == "" goto help 15 | 16 | %SPHINXBUILD% >NUL 2>NUL 17 | if errorlevel 9009 ( 18 | echo. 19 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 20 | echo.installed, then set the SPHINXBUILD environment variable to point 21 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 22 | echo.may add the Sphinx directory to PATH. 23 | echo. 24 | echo.If you don't have Sphinx installed, grab it from 25 | echo.http://sphinx-doc.org/ 26 | exit /b 1 27 | ) 28 | 29 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% 30 | goto end 31 | 32 | :help 33 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% 34 | 35 | :end 36 | popd 37 | -------------------------------------------------------------------------------- /minimal-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
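# Summary: this image extends jupyter/base-notebook with common OS-level tooling
# (compilers, git, pandoc, TeX Live, editors) installed in a single apt-get layer below.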
3 | 4 | FROM jupyter/base-notebook 5 | 6 | LABEL maintainer="Jupyter Project " 7 | 8 | USER root 9 | 10 | # Install all OS dependencies for fully functional notebook server 11 | RUN apt-get update && apt-get install -yq --no-install-recommends \ 12 | build-essential \ 13 | emacs \ 14 | git \ 15 | inkscape \ 16 | jed \ 17 | libsm6 \ 18 | libxext-dev \ 19 | libxrender1 \ 20 | lmodern \ 21 | netcat \ 22 | pandoc \ 23 | python-dev \ 24 | texlive-fonts-extra \ 25 | texlive-fonts-recommended \ 26 | texlive-generic-recommended \ 27 | texlive-latex-base \ 28 | texlive-latex-extra \ 29 | texlive-xetex \ 30 | unzip \ 31 | vim \ 32 | && apt-get clean && \ 33 | rm -rf /var/lib/apt/lists/* 34 | 35 | # Switch back to jovyan to avoid accidental container runs as root 36 | USER $NB_UID 37 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | 3 | *~ 4 | 5 | __pycache__/ 6 | *.py[cod] 7 | 8 | # C extensions 9 | *.so 10 | 11 | # Distribution / packaging 12 | .Python 13 | env/ 14 | build/ 15 | develop-eggs/ 16 | dist/ 17 | downloads/ 18 | eggs/ 19 | .eggs/ 20 | lib/ 21 | lib64/ 22 | parts/ 23 | sdist/ 24 | var/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .coverage 43 | .coverage.* 44 | .cache 45 | nosetests.xml 46 | coverage.xml 47 | *,cover 48 | 49 | # Translations 50 | *.mo 51 | *.pot 52 | 53 | # Django stuff: 54 | *.log 55 | 56 | # Sphinx documentation 57 | docs/_build/ 58 | 59 | # PyBuilder 60 | target/ 61 | 62 | # Mac OS X 63 | .DS_Store 64 | 65 | dockerspawner 66 | dockerspawner.tar.gz 67 | *.orig 68 | .ipynb_checkpoints/ 69 | -------------------------------------------------------------------------------- /base-notebook/fix-permissions: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # set permissions on a directory 3 | # after any installation, if a directory needs to be (human) user-writable, 4 | # run this script on it. 5 | # It will make everything in the directory owned by the group $NB_GID 6 | # and writable by that group. 7 | # Deployments that want to set a specific user id can preserve permissions 8 | # by adding the `--group-add users` line to `docker run`. 9 | 10 | # uses find to avoid touching files that already have the right permissions, 11 | # which would cause massive image explosion 12 | 13 | # right permissions are: 14 | # group=$NB_GID 15 | # AND permissions include group rwX (directory-execute) 16 | # AND directories have setuid,setgid bits set 17 | 18 | set -e 19 | 20 | for d in $@; do 21 | find "$d" \ 22 | ! \( \ 23 | -group $NB_GID \ 24 | -a -perm -g+rwX \ 25 | \) \ 26 | -exec chgrp $NB_GID {} \; \ 27 | -exec chmod g+rwX {} \; 28 | # setuid,setgid *on directories only* 29 | find "$d" \ 30 | \( \ 31 | -type d \ 32 | -a ! -perm -6000 \ 33 | \) \ 34 | -exec chmod +6000 {} \; 35 | done 36 | -------------------------------------------------------------------------------- /.github/issue_template.md: -------------------------------------------------------------------------------- 1 | Hi! 
Thanks for using Jupyter's docker-stacks images. 2 | 3 | If you are requesting a library upgrade or addition in one of the existing images, please state the desired library name and version here and disregard the remaining sections. 4 | 5 | If you are reporting an issue with one of the existing images, please answer the questions below to help us troubleshoot the problem. Please be as thorough as possible. 6 | 7 | **What docker image are you using?** 8 | 9 | Example: `jupyter/scipy-notebook` 10 | 11 | **What complete docker command do you run to launch the container (omitting sensitive values)?** 12 | 13 | Example: `docker run -it --rm -p 8889:8888 jupyter/all-spark-notebook:latest` 14 | 15 | **What steps do you take once the container is running to reproduce the issue?** 16 | 17 | Example: 18 | 19 | 1. Visit http://localhost:8889 20 | 2. Start an R notebook 21 | 3. ... 22 | 23 | **What do you expect to happen?** 24 | 25 | Example: ggplot output appears in my notebook. 26 | 27 | **What actually happens?** 28 | 29 | Example: No output is visible in the notebook and the notebook server log contains messages about ... 30 | -------------------------------------------------------------------------------- /r-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | FROM jupyter/minimal-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | USER root 8 | 9 | # R pre-requisites 10 | RUN apt-get update && \ 11 | apt-get install -y --no-install-recommends \ 12 | fonts-dejavu \ 13 | tzdata \ 14 | gfortran \ 15 | gcc && apt-get clean && \ 16 | rm -rf /var/lib/apt/lists/* 17 | 18 | USER $NB_UID 19 | 20 | # R packages 21 | RUN conda install --quiet --yes \ 22 | 'r-base=3.4.1' \ 23 | 'r-irkernel=0.8*' \ 24 | 'r-plyr=1.8*' \ 25 | 'r-devtools=1.13*' \ 26 | 'r-tidyverse=1.1*' \ 27 | 'r-shiny=1.0*' \ 28 | 'r-rmarkdown=1.8*' \ 29 | 'r-forecast=8.2*' \ 30 | 'r-rsqlite=2.0*' \ 31 | 'r-reshape2=1.4*' \ 32 | 'r-nycflights13=0.2*' \ 33 | 'r-caret=6.0*' \ 34 | 'r-rcurl=1.95*' \ 35 | 'r-crayon=1.3*' \ 36 | 'r-randomforest=4.6*' \ 37 | 'r-htmltools=0.3*' \ 38 | 'r-sparklyr=0.7*' \ 39 | 'r-htmlwidgets=1.0*' \ 40 | 'r-hexbin=1.27*' && \ 41 | conda clean -tipsy && \ 42 | fix-permissions $CONDA_DIR 43 | -------------------------------------------------------------------------------- /examples/make-deploy/softlayer.makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License.
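# Example use (values are illustrative):
#   make softlayer-vm NAME=my-host SOFTLAYER_USER=<user> SOFTLAYER_API_KEY=<key> SOFTLAYER_DOMAIN=example.com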
3 | 4 | softlayer-vm: export SOFTLAYER_CPU?=4 5 | softlayer-vm: export SOFTLAYER_DISK_SIZE?=100 6 | softlayer-vm: export SOFTLAYER_MEMORY?=4096 7 | softlayer-vm: export SOFTLAYER_REGION?=wdc01 8 | softlayer-vm: check 9 | @test -n "$(NAME)" || \ 10 | (echo "ERROR: NAME not defined (make help)"; exit 1) 11 | @test -n "$(SOFTLAYER_API_KEY)" || \ 12 | (echo "ERROR: SOFTLAYER_API_KEY not defined (make help)"; exit 1) 13 | @test -n "$(SOFTLAYER_USER)" || \ 14 | (echo "ERROR: SOFTLAYER_USER not defined (make help)"; exit 1) 15 | @test -n "$(SOFTLAYER_DOMAIN)" || \ 16 | (echo "ERROR: SOFTLAYER_DOMAIN not defined (make help)"; exit 1) 17 | @docker-machine create -d softlayer $(NAME) 18 | @echo "DONE: Docker host '$(NAME)' up at $$(docker-machine ip $(NAME))" 19 | 20 | softlayer-dns: HOST_NAME:=$$(docker-machine active) 21 | softlayer-dns: IP:=$$(docker-machine ip $(HOST_NAME)) 22 | softlayer-dns: check 23 | @which slcli > /dev/null || (echo "softlayer cli not found (pip install softlayer)"; exit 1) 24 | @test -n "$(SOFTLAYER_DOMAIN)" || \ 25 | (echo "ERROR: SOFTLAYER_DOMAIN not defined (make help)"; exit 1) 26 | @slcli dns record-add $(SOFTLAYER_DOMAIN) $(HOST_NAME) A $(IP) 27 | -------------------------------------------------------------------------------- /base-notebook/jupyter_notebook_config.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | from jupyter_core.paths import jupyter_data_dir 5 | import subprocess 6 | import os 7 | import errno 8 | import stat 9 | 10 | c = get_config() 11 | c.NotebookApp.ip = '*' 12 | c.NotebookApp.port = 8888 13 | c.NotebookApp.open_browser = False 14 | 15 | # Generate a self-signed certificate 16 | if 'GEN_CERT' in os.environ: 17 | dir_name = jupyter_data_dir() 18 | pem_file = os.path.join(dir_name, 'notebook.pem') 19 | try: 20 | os.makedirs(dir_name) 21 | except OSError as exc: # Python >2.5 22 | if exc.errno == errno.EEXIST and os.path.isdir(dir_name): 23 | pass 24 | else: 25 | raise 26 | # Generate a certificate if one doesn't exist on disk 27 | subprocess.check_call(['openssl', 'req', '-new', 28 | '-newkey', 'rsa:2048', 29 | '-days', '365', 30 | '-nodes', '-x509', 31 | '-subj', '/C=XX/ST=XX/L=XX/O=generated/CN=generated', 32 | '-keyout', pem_file, 33 | '-out', pem_file]) 34 | # Restrict access to the file 35 | os.chmod(pem_file, stat.S_IRUSR | stat.S_IWUSR) 36 | c.NotebookApp.certfile = pem_file 37 | -------------------------------------------------------------------------------- /examples/source-to-image/assemble: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -x 4 | 5 | set -eo pipefail 6 | 7 | # Remove any 'environment.yml' or 'requirements.txt' files which may 8 | # have been carried over from the base image so we don't reinstall 9 | # packages which have already been installed. This could occur where 10 | # an S2I build was used to create a new base image with pre-installed 11 | # Python packages, with the new image then subsequently being used as a 12 | # S2I builder base image. 13 | 14 | rm -f /home/$NB_USER/environment.yml 15 | rm -f /home/$NB_USER/requirements.txt 16 | 17 | # Copy injected files to target directory. 18 | 19 | cp -Rf /tmp/src/. /home/$NB_USER 20 | 21 | rm -rf /tmp/src 22 | 23 | # Install any Python modules. If we find an 'environment.yml' file we 24 | # assume we should use 'conda' to install packages. 
If 'requirements.txt' 25 | # use 'pip' instead. 26 | 27 | if [ -f /home/$NB_USER/environment.yml ]; then 28 | conda env update --name root --file /home/$NB_USER/environment.yml 29 | conda clean -tipsy 30 | else 31 | if [ -f /home/$NB_USER/requirements.txt ]; then 32 | pip --no-cache-dir install -r /home/$NB_USER/requirements.txt 33 | fi 34 | fi 35 | 36 | # Fix up permissions on home directory and Python installation so that 37 | # everything is still writable by 'users' group. 38 | 39 | fix-permissions $CONDA_DIR 40 | fix-permissions /home/$NB_USER 41 | -------------------------------------------------------------------------------- /base-notebook/start-singleuser.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | set -e 6 | 7 | # set default ip to 0.0.0.0 8 | if [[ "$NOTEBOOK_ARGS $@" != *"--ip="* ]]; then 9 | NOTEBOOK_ARGS="--ip=0.0.0.0 $NOTEBOOK_ARGS" 10 | fi 11 | 12 | # handle some deprecated environment variables 13 | # from DockerSpawner < 0.8. 14 | # These won't be passed from DockerSpawner 0.9, 15 | # so avoid specifying --arg=empty-string 16 | if [ ! -z "$NOTEBOOK_DIR" ]; then 17 | NOTEBOOK_ARGS="--notebook-dir='$NOTEBOOK_DIR' $NOTEBOOK_ARGS" 18 | fi 19 | if [ ! -z "$JPY_PORT" ]; then 20 | NOTEBOOK_ARGS="--port=$JPY_PORT $NOTEBOOK_ARGS" 21 | fi 22 | if [ ! -z "$JPY_USER" ]; then 23 | NOTEBOOK_ARGS="--user=$JPY_USER $NOTEBOOK_ARGS" 24 | fi 25 | if [ ! -z "$JPY_COOKIE_NAME" ]; then 26 | NOTEBOOK_ARGS="--cookie-name=$JPY_COOKIE_NAME $NOTEBOOK_ARGS" 27 | fi 28 | if [ ! -z "$JPY_BASE_URL" ]; then 29 | NOTEBOOK_ARGS="--base-url=$JPY_BASE_URL $NOTEBOOK_ARGS" 30 | fi 31 | if [ ! -z "$JPY_HUB_PREFIX" ]; then 32 | NOTEBOOK_ARGS="--hub-prefix=$JPY_HUB_PREFIX $NOTEBOOK_ARGS" 33 | fi 34 | if [ ! -z "$JPY_HUB_API_URL" ]; then 35 | NOTEBOOK_ARGS="--hub-api-url=$JPY_HUB_API_URL $NOTEBOOK_ARGS" 36 | fi 37 | if [ ! -z "$JUPYTER_ENABLE_LAB" ]; then 38 | NOTEBOOK_BIN="jupyter labhub" 39 | else 40 | NOTEBOOK_BIN=jupyterhub-singleuser 41 | fi 42 | 43 | . /usr/local/bin/start.sh $NOTEBOOK_BIN $NOTEBOOK_ARGS $@ 44 | -------------------------------------------------------------------------------- /examples/make-deploy/Makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
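# Typical flow (values are illustrative):
#   make image                       # build the local Dockerfile as the my-notebook image
#   make notebook NAME=mynb PORT=80  # start a notebook container from that image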
3 | 4 | .PHONY: help check image notebook 5 | 6 | IMAGE:=my-notebook 7 | 8 | # Common, extensible docker run command 9 | define RUN_NOTEBOOK 10 | @docker volume create --name $(WORK_VOLUME) > /dev/null 11 | -@docker rm -f $(NAME) 2> /dev/null 12 | @docker run -d -p $(PORT):8888 \ 13 | --name $(NAME) \ 14 | -v $(WORK_VOLUME):/home/jovyan/work \ 15 | $(DOCKER_ARGS) \ 16 | $(IMAGE) bash -c "$(PRE_CMD) chown jovyan /home/jovyan/work && start-notebook.sh $(ARGS)" > /dev/null 17 | @echo "DONE: Notebook '$(NAME)' listening on $$(docker-machine ip $$(docker-machine active)):$(PORT)" 18 | endef 19 | 20 | help: 21 | @cat README.md 22 | 23 | check: 24 | @which docker-machine > /dev/null || (echo "ERROR: docker-machine not found (brew install docker-machine)"; exit 1) 25 | @which docker > /dev/null || (echo "ERROR: docker not found (brew install docker)"; exit 1) 26 | @docker | grep volume > /dev/null || (echo "ERROR: docker 1.9.0+ required"; exit 1) 27 | 28 | image: DOCKER_ARGS?= 29 | image: 30 | @docker build --rm $(DOCKER_ARGS) -t $(IMAGE) . 31 | 32 | notebook: PORT?=80 33 | notebook: NAME?=notebook 34 | notebook: WORK_VOLUME?=$(NAME)-data 35 | notebook: check 36 | $(RUN_NOTEBOOK) 37 | 38 | # docker-machine drivers 39 | include virtualbox.makefile 40 | include softlayer.makefile 41 | 42 | # Preset notebook configurations 43 | include self-signed.makefile 44 | include letsencrypt.makefile -------------------------------------------------------------------------------- /all-spark-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | FROM jupyter/pyspark-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | USER root 8 | 9 | # RSpark config 10 | ENV R_LIBS_USER $SPARK_HOME/R/lib 11 | RUN fix-permissions $R_LIBS_USER 12 | 13 | # R pre-requisites 14 | RUN apt-get update && \ 15 | apt-get install -y --no-install-recommends \ 16 | fonts-dejavu \ 17 | tzdata \ 18 | gfortran \ 19 | gcc && apt-get clean && \ 20 | rm -rf /var/lib/apt/lists/* 21 | 22 | USER $NB_UID 23 | 24 | # R packages 25 | RUN conda install --quiet --yes \ 26 | 'r-base=3.4.1' \ 27 | 'r-irkernel=0.8*' \ 28 | 'r-ggplot2=2.2*' \ 29 | 'r-sparklyr=0.7*' \ 30 | 'r-rcurl=1.95*' && \ 31 | conda clean -tipsy && \ 32 | fix-permissions $CONDA_DIR && \ 33 | fix-permissions /home/$NB_USER 34 | 35 | # Apache Toree kernel 36 | RUN pip install --no-cache-dir \ 37 | https://dist.apache.org/repos/dist/dev/incubator/toree/0.2.0-incubating-rc4/toree-pip/toree-0.2.0.tar.gz \ 38 | && \ 39 | jupyter toree install --sys-prefix && \ 40 | rm -rf /home/$NB_USER/.local && \ 41 | fix-permissions $CONDA_DIR && \ 42 | fix-permissions /home/$NB_USER 43 | 44 | # Spylon-kernel 45 | RUN conda install --quiet --yes 'spylon-kernel=0.4*' && \ 46 | conda clean -tipsy && \ 47 | python -m spylon_kernel install --sys-prefix && \ 48 | rm -rf /home/$NB_USER/.local && \ 49 | fix-permissions $CONDA_DIR && \ 50 | fix-permissions /home/$NB_USER 51 | -------------------------------------------------------------------------------- /examples/docker-compose/bin/letsencrypt.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | # Use https://letsencrypt.org to create a certificate for a single domain 6 | # and store it in a Docker volume. 
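# Example use (values are illustrative):
#   FQDN=notebook.example.com EMAIL=admin@example.com SECRETS_VOLUME=my-notebook-secrets ./letsencrypt.sh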
7 | 8 | set -e 9 | 10 | # Get domain and email from environment 11 | [ -z "$FQDN" ] && \ 12 | echo "ERROR: Must set FQDN environment varable" && \ 13 | exit 1 14 | 15 | [ -z "$EMAIL" ] && \ 16 | echo "ERROR: Must set EMAIL environment varable" && \ 17 | exit 1 18 | 19 | # letsencrypt certificate server type (default is production). 20 | # Set `CERT_SERVER=--staging` for staging. 21 | : ${CERT_SERVER=''} 22 | 23 | # Create Docker volume to contain the cert 24 | : ${SECRETS_VOLUME:=my-notebook-secrets} 25 | docker volume create --name $SECRETS_VOLUME 1>/dev/null 26 | # Generate the cert and save it to the Docker volume 27 | docker run --rm -it \ 28 | -p 80:80 \ 29 | -v $SECRETS_VOLUME:/etc/letsencrypt \ 30 | quay.io/letsencrypt/letsencrypt:latest \ 31 | certonly \ 32 | --non-interactive \ 33 | --keep-until-expiring \ 34 | --standalone \ 35 | --standalone-supported-challenges http-01 \ 36 | --agree-tos \ 37 | --domain "$FQDN" \ 38 | --email "$EMAIL" \ 39 | $CERT_SERVER 40 | 41 | # Set permissions so nobody can read the cert and key. 42 | # Also symlink the certs into the root of the /etc/letsencrypt 43 | # directory so that the FQDN doesn't have to be known later. 44 | docker run --rm -it \ 45 | -v $SECRETS_VOLUME:/etc/letsencrypt \ 46 | ubuntu:16.04 \ 47 | bash -c "ln -s /etc/letsencrypt/live/$FQDN/* /etc/letsencrypt/ && \ 48 | find /etc/letsencrypt -type d -exec chmod 755 {} +" 49 | -------------------------------------------------------------------------------- /examples/docker-compose/notebook/up.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | set -e 6 | 7 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 8 | 9 | USAGE="Usage: `basename $0` [--secure | --letsencrypt] [--password PASSWORD] [--secrets SECRETS_VOLUME]" 10 | 11 | # Parse args to determine security settings 12 | SECURE=${SECURE:=no} 13 | LETSENCRYPT=${LETSENCRYPT:=no} 14 | while [[ $# > 0 ]] 15 | do 16 | key="$1" 17 | case $key in 18 | --secure) 19 | SECURE=yes 20 | ;; 21 | --letsencrypt) 22 | LETSENCRYPT=yes 23 | ;; 24 | --secrets) 25 | SECRETS_VOLUME="$2" 26 | shift # past argument 27 | ;; 28 | --password) 29 | PASSWORD="$2" 30 | export PASSWORD 31 | shift # past argument 32 | ;; 33 | *) # unknown option 34 | ;; 35 | esac 36 | shift # past argument or value 37 | done 38 | 39 | if [[ "$LETSENCRYPT" == yes || "$SECURE" == yes ]]; then 40 | if [ -z "${PASSWORD:+x}" ]; then 41 | echo "ERROR: Must set PASSWORD if running in secure mode" 42 | echo "$USAGE" 43 | exit 1 44 | fi 45 | if [ "$LETSENCRYPT" == yes ]; then 46 | CONFIG=letsencrypt-notebook.yml 47 | if [ -z "${SECRETS_VOLUME:+x}" ]; then 48 | echo "ERROR: Must set SECRETS_VOLUME if running in letsencrypt mode" 49 | echo "$USAGE" 50 | exit 1 51 | fi 52 | else 53 | CONFIG=secure-notebook.yml 54 | fi 55 | export PORT=${PORT:=443} 56 | else 57 | CONFIG=notebook.yml 58 | export PORT=${PORT:=80} 59 | fi 60 | 61 | # Setup environment 62 | source "$DIR/env.sh" 63 | 64 | # Create a Docker volume to store notebooks 65 | docker volume create --name "$WORK_VOLUME" 66 | 67 | # Bring up a notebook container, using container name as project name 68 | echo "Bringing up notebook '$NAME'" 69 | docker-compose -f "$DIR/$CONFIG" -p "$NAME" up -d 70 | 71 | IP=$(docker-machine ip $(docker-machine active)) 72 | echo "Notebook $NAME listening on $IP:$PORT" 73 | 
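# Illustrative invocations: `./up.sh` serves plain HTTP on port 80 by default;
# `./up.sh --secure --password <password>` uses secure-notebook.yml on port 443;
# `./up.sh --letsencrypt --password <password> --secrets <volume>` uses letsencrypt-notebook.yml.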
-------------------------------------------------------------------------------- /pyspark-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | FROM jupyter/scipy-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | USER root 8 | 9 | # Spark dependencies 10 | ENV APACHE_SPARK_VERSION 2.3.0 11 | ENV HADOOP_VERSION 2.7 12 | 13 | RUN apt-get -y update && \ 14 | apt-get install --no-install-recommends -y openjdk-8-jre-headless ca-certificates-java && \ 15 | apt-get clean && \ 16 | rm -rf /var/lib/apt/lists/* 17 | 18 | RUN cd /tmp && \ 19 | wget -q http://apache.claz.org/spark/spark-${APACHE_SPARK_VERSION}/spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz && \ 20 | echo "258683885383480BA01485D6C6F7DC7CFD559C1584D6CEB7A3BBCF484287F7F57272278568F16227BE46B4F92591768BA3D164420D87014A136BF66280508B46 *spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz" | sha512sum -c - && \ 21 | tar xzf spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz -C /usr/local --owner root --group root --no-same-owner && \ 22 | rm spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz 23 | RUN cd /usr/local && ln -s spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION} spark 24 | 25 | # Mesos dependencies 26 | RUN . /etc/os-release && \ 27 | apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv E56151BF && \ 28 | DISTRO=$ID && \ 29 | CODENAME=$VERSION_CODENAME && \ 30 | echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" > /etc/apt/sources.list.d/mesosphere.list && \ 31 | apt-get -y update && \ 32 | apt-get --no-install-recommends -y --force-yes install mesos=1.2\* && \ 33 | apt-get clean && \ 34 | rm -rf /var/lib/apt/lists/* 35 | 36 | # Spark and Mesos config 37 | ENV SPARK_HOME /usr/local/spark 38 | ENV PYTHONPATH $SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.6-src.zip 39 | ENV MESOS_NATIVE_LIBRARY /usr/local/lib/libmesos.so 40 | ENV SPARK_OPTS --driver-java-options=-Xms1024M --driver-java-options=-Xmx4096M --driver-java-options=-Dlog4j.logLevel=info 41 | 42 | USER $NB_UID 43 | 44 | # Install pyarrow 45 | RUN conda install --quiet -y 'pyarrow' && \ 46 | fix-permissions $CONDA_DIR && \ 47 | fix-permissions /home/$NB_USER 48 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 
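# Example targets (stack names are the directories listed above; ports/tags illustrative):
#   make build/minimal-notebook        # build jupyter/minimal-notebook:latest
#   make dev/scipy-notebook PORT=9999  # run the image in the foreground on host port 9999
#   make test/base-notebook            # run pytest against the built image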
3 | .PHONY: docs help test 4 | 5 | # Use bash for inline if-statements in arch_patch target 6 | SHELL:=bash 7 | OWNER:=jupyter 8 | ARCH:=$(shell uname -m) 9 | 10 | # Need to list the images in build dependency order 11 | ifeq ($(ARCH),ppc64le) 12 | ALL_STACKS:=base-notebook 13 | else 14 | ALL_STACKS:=base-notebook \ 15 | minimal-notebook \ 16 | r-notebook \ 17 | scipy-notebook \ 18 | tensorflow-notebook \ 19 | datascience-notebook \ 20 | pyspark-notebook \ 21 | all-spark-notebook 22 | endif 23 | 24 | ALL_IMAGES:=$(ALL_STACKS) 25 | 26 | help: 27 | # http://marmelab.com/blog/2016/02/29/auto-documented-makefile.html 28 | @echo "jupyter/docker-stacks" 29 | @echo "=====================" 30 | @echo "Replace % with a stack directory name (e.g., make build/minimal-notebook)" 31 | @echo 32 | @grep -E '^[a-zA-Z0-9_%/-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' 33 | 34 | arch_patch/%: ## apply hardware architecture specific patches to the Dockerfile 35 | @if [ -e ./$(notdir $@)/Dockerfile.$(ARCH).patch ]; then \ 36 | if [ -e ./$(notdir $@)/Dockerfile.orig ]; then \ 37 | cp -f ./$(notdir $@)/Dockerfile.orig ./$(notdir $@)/Dockerfile;\ 38 | else\ 39 | cp -f ./$(notdir $@)/Dockerfile ./$(notdir $@)/Dockerfile.orig;\ 40 | fi;\ 41 | patch -f ./$(notdir $@)/Dockerfile ./$(notdir $@)/Dockerfile.$(ARCH).patch; \ 42 | fi 43 | 44 | build/%: DARGS?= 45 | build/%: ## build the latest image for a stack 46 | docker build $(DARGS) --rm --force-rm -t $(OWNER)/$(notdir $@):latest ./$(notdir $@) 47 | 48 | build-all: $(foreach I,$(ALL_IMAGES),arch_patch/$(I) build/$(I) ) ## build all stacks 49 | build-test-all: $(foreach I,$(ALL_IMAGES),arch_patch/$(I) build/$(I) test/$(I) ) ## build and test all stacks 50 | 51 | dev/%: ARGS?= 52 | dev/%: DARGS?= 53 | dev/%: PORT?=8888 54 | dev/%: ## run a foreground container for a stack 55 | docker run -it --rm -p $(PORT):8888 $(DARGS) $(OWNER)/$(notdir $@) $(ARGS) 56 | 57 | dev-env: ## install libraries required to build docs and run tests 58 | pip install -r requirements-dev.txt 59 | 60 | docs: ## build HTML documentation 61 | make -C docs html 62 | 63 | test/%: ## run tests against a stack 64 | @TEST_IMAGE="$(OWNER)/$(notdir $@)" pytest test 65 | 66 | test/base-notebook: ## test supported options in the base notebook 67 | @TEST_IMAGE="$(OWNER)/$(notdir $@)" pytest test base-notebook/test -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | Jupyter Docker Stacks 2 | ===================== 3 | 4 | Jupyter Docker Stacks are a set of ready-to-run Docker images containing Jupyter applications and interactive computing tools. You can use a stack image to do any of the following (and more): 5 | 6 | * Start a personal Jupyter Notebook server in a local Docker container 7 | * Run JupyterLab servers for a team using JupyterHub 8 | * Write your own project Dockerfile 9 | 10 | Quick Start 11 | ----------- 12 | 13 | The two examples below may help you get started if you `have Docker installed `_, know :doc:`which Docker image ` you want to use, and want to launch a single Jupyter Notebook server in a container. The other pages in this documentation describe additional uses and features in detail. 14 | 15 | **Example 1:** This command pulls the `jupyter/scipy-notebook` image tagged `2c80cf3537ca` from Docker Hub if it is not already present on the local host. 
It then starts a container running a Jupyter Notebook server and exposes the server on host port 8888. The server logs appear in the terminal and include a URL to the notebook server. The container remains intact for restart after notebook server exit.:: 16 | 17 | docker run -p 8888:8888 jupyter/scipy-notebook:2c80cf3537ca 18 | 19 | **Example 2:** This command pulls the `jupyter/r-notebook` image tagged `e5c5a7d3e52d` from Docker Hub if it is not already present on the local host. It then starts an *ephemeral* container running a Jupyter Notebook server and exposes the server on host port 10000. The command mounts the current working directory on the host as `/home/jovyan/work` in the container. Docker destroys the container after notebook server exit, but any files written to `~/work` in the container remain intact on the host.:: 20 | 21 | docker run --rm -p 10000:8888 -v "$PWD":/home/jovyan/work jupyter/r-notebook:e5c5a7d3e52d 22 | 23 | Table of Contents 24 | ----------------- 25 | 26 | .. toctree:: 27 | :maxdepth: 2 28 | :caption: User Guide 29 | 30 | using/selecting 31 | using/running 32 | using/common 33 | using/specifics 34 | 35 | .. toctree:: 36 | :maxdepth: 2 37 | :caption: Contributor Guide 38 | 39 | contributing/packages 40 | contributing/recipes 41 | contributing/tests 42 | contributing/stacks 43 | 44 | .. toctree:: 45 | :maxdepth: 2 46 | :caption: Getting Help 47 | 48 | Jupyter Docker Stacks issue tracker 49 | Jupyter mailing list 50 | Jupyter website 51 | -------------------------------------------------------------------------------- /examples/make-deploy/letsencrypt.makefile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | # BE CAREFUL when using Docker engine <1.10 because running a container with 5 | # `--rm` option while mounting a docker volume may wipe out the volume. 6 | # See issue: https://github.com/docker/docker/issues/17907 7 | 8 | # Use letsencrypt production server by default to get a real cert. 9 | # Use CERT_SERVER=--staging to hit the staging server (not a real cert). 10 | 11 | letsencrypt: NAME?=notebook 12 | letsencrypt: SECRETS_VOLUME?=$(NAME)-secrets 13 | letsencrypt: TMP_CONTAINER?=$(NAME)-tmp 14 | letsencrypt: CERT_SERVER?= 15 | letsencrypt: 16 | @test -n "$(FQDN)" || \ 17 | (echo "ERROR: FQDN not defined or blank"; exit 1) 18 | @test -n "$(EMAIL)" || \ 19 | (echo "ERROR: EMAIL not defined or blank"; exit 1) 20 | @docker volume create --name $(SECRETS_VOLUME) > /dev/null 21 | @docker run -it -p 80:80 \ 22 | --name=$(TMP_CONTAINER) \ 23 | -v $(SECRETS_VOLUME):/etc/letsencrypt \ 24 | quay.io/letsencrypt/letsencrypt:latest \ 25 | certonly \ 26 | $(CERT_SERVER) \ 27 | --keep-until-expiring \ 28 | --standalone \ 29 | --standalone-supported-challenges http-01 \ 30 | --agree-tos \ 31 | --domain '$(FQDN)' \ 32 | --email '$(EMAIL)'; \ 33 | docker rm -f $(TMP_CONTAINER) > /dev/null 34 | # The letsencrypt image has an entrypoint, so we use the notebook image 35 | # instead so we can run arbitrary commands. 36 | # Here we set the permissions so nobody can read the cert and key. 37 | # We also symlink the certs into the root of the /etc/letsencrypt 38 | # directory so that the FQDN doesn't have to be known later. 
39 | @docker run -it \ 40 | --name=$(TMP_CONTAINER) \ 41 | -v $(SECRETS_VOLUME):/etc/letsencrypt \ 42 | $(NOTEBOOK_IMAGE) \ 43 | bash -c "ln -s /etc/letsencrypt/live/$(FQDN)/* /etc/letsencrypt/ && \ 44 | find /etc/letsencrypt -type d -exec chmod 755 {} +"; \ 45 | docker rm -f $(TMP_CONTAINER) > /dev/null 46 | 47 | letsencrypt-notebook: PORT?=443 48 | letsencrypt-notebook: NAME?=notebook 49 | letsencrypt-notebook: WORK_VOLUME?=$(NAME)-data 50 | letsencrypt-notebook: SECRETS_VOLUME?=$(NAME)-secrets 51 | letsencrypt-notebook: DOCKER_ARGS:=-e USE_HTTPS=yes \ 52 | -e PASSWORD=$(PASSWORD) \ 53 | -v $(SECRETS_VOLUME):/etc/letsencrypt 54 | letsencrypt-notebook: ARGS:=\ 55 | --NotebookApp.certfile=/etc/letsencrypt/fullchain.pem \ 56 | --NotebookApp.keyfile=/etc/letsencrypt/privkey.pem 57 | letsencrypt-notebook: check 58 | @test -n "$(PASSWORD)" || \ 59 | (echo "ERROR: PASSWORD not defined or blank"; exit 1) 60 | $(RUN_NOTEBOOK) 61 | -------------------------------------------------------------------------------- /scipy-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | FROM jupyter/minimal-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | USER root 8 | 9 | # libav-tools for matplotlib anim 10 | RUN apt-get update && \ 11 | apt-get install -y --no-install-recommends libav-tools && \ 12 | apt-get clean && \ 13 | rm -rf /var/lib/apt/lists/* 14 | 15 | USER $NB_UID 16 | 17 | # Install Python 3 packages 18 | # Remove pyqt and qt pulled in for matplotlib since we're only ever going to 19 | # use notebook-friendly backends in these images 20 | RUN conda install --quiet --yes \ 21 | 'blas=*=openblas' \ 22 | 'ipywidgets=7.1*' \ 23 | 'pandas=0.19*' \ 24 | 'numexpr=2.6*' \ 25 | 'matplotlib=2.0*' \ 26 | 'scipy=0.19*' \ 27 | 'seaborn=0.7*' \ 28 | 'scikit-learn=0.18*' \ 29 | 'scikit-image=0.12*' \ 30 | 'sympy=1.0*' \ 31 | 'cython=0.25*' \ 32 | 'patsy=0.4*' \ 33 | 'statsmodels=0.8*' \ 34 | 'cloudpickle=0.2*' \ 35 | 'dill=0.2*' \ 36 | 'numba=0.31*' \ 37 | 'bokeh=0.12*' \ 38 | 'sqlalchemy=1.1*' \ 39 | 'hdf5=1.8.17' \ 40 | 'h5py=2.6*' \ 41 | 'vincent=0.4.*' \ 42 | 'beautifulsoup4=4.5.*' \ 43 | 'protobuf=3.*' \ 44 | 'xlrd' && \ 45 | conda remove --quiet --yes --force qt pyqt && \ 46 | conda clean -tipsy && \ 47 | # Activate ipywidgets extension in the environment that runs the notebook server 48 | jupyter nbextension enable --py widgetsnbextension --sys-prefix && \ 49 | # Also activate ipywidgets extension for JupyterLab 50 | jupyter labextension install @jupyter-widgets/jupyterlab-manager@^0.33.1 && \ 51 | jupyter labextension install jupyterlab_bokeh@^0.4.0 && \ 52 | npm cache clean && \ 53 | rm -rf $CONDA_DIR/share/jupyter/lab/staging && \ 54 | rm -rf /home/$NB_USER/.cache/yarn && \ 55 | rm -rf /home/$NB_USER/.node-gyp && \ 56 | fix-permissions $CONDA_DIR && \ 57 | fix-permissions /home/$NB_USER 58 | 59 | # Install facets which does not have a pip or conda package at the moment 60 | RUN cd /tmp && \ 61 | git clone https://github.com/PAIR-code/facets.git && \ 62 | cd facets && \ 63 | jupyter nbextension install facets-dist/ --sys-prefix && \ 64 | rm -rf facets && \ 65 | fix-permissions $CONDA_DIR && \ 66 | fix-permissions /home/$NB_USER 67 | 68 | # Import matplotlib the first time to build the font cache. 
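# (Added note, inferred from the lines below: XDG_CACHE_HOME keeps that font cache under
# the notebook user's home so it remains writable at runtime, and MPLBACKEND=Agg lets the
# import run headless during the image build.)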
69 | ENV XDG_CACHE_HOME /home/$NB_USER/.cache/ 70 | RUN MPLBACKEND=Agg python -c "import matplotlib.pyplot" && \ 71 | fix-permissions /home/$NB_USER 72 | 73 | USER $NB_UID 74 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | # Licensing terms 2 | 3 | This project is licensed under the terms of the Modified BSD License 4 | (also known as New or Revised or 3-Clause BSD), as follows: 5 | 6 | - Copyright (c) 2001-2015, IPython Development Team 7 | - Copyright (c) 2015-, Jupyter Development Team 8 | 9 | All rights reserved. 10 | 11 | Redistribution and use in source and binary forms, with or without 12 | modification, are permitted provided that the following conditions are met: 13 | 14 | Redistributions of source code must retain the above copyright notice, this 15 | list of conditions and the following disclaimer. 16 | 17 | Redistributions in binary form must reproduce the above copyright notice, this 18 | list of conditions and the following disclaimer in the documentation and/or 19 | other materials provided with the distribution. 20 | 21 | Neither the name of the Jupyter Development Team nor the names of its 22 | contributors may be used to endorse or promote products derived from this 23 | software without specific prior written permission. 24 | 25 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 26 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 27 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 28 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE 29 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 30 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 31 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 32 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 33 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 34 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 35 | 36 | ## About the Jupyter Development Team 37 | 38 | The Jupyter Development Team is the set of all contributors to the Jupyter project. 39 | This includes all of the Jupyter subprojects. 40 | 41 | The core team that coordinates development on GitHub can be found here: 42 | https://github.com/jupyter/. 43 | 44 | ## Our Copyright Policy 45 | 46 | Jupyter uses a shared copyright model. Each contributor maintains copyright 47 | over their contributions to Jupyter. But, it is important to note that these 48 | contributions are typically only changes to the repositories. Thus, the Jupyter 49 | source code, in its entirety is not the copyright of any single person or 50 | institution. Instead, it is the collective copyright of the entire Jupyter 51 | Development Team. If individual contributors want to maintain a record of what 52 | changes/contributions they have specific copyright on, they should indicate 53 | their copyright in the commit message of the change, when they commit the 54 | change to one of the Jupyter repositories. 55 | 56 | With this in mind, the following banner should be used in any source code file 57 | to indicate the copyright and license terms: 58 | 59 | # Copyright (c) Jupyter Development Team. 60 | # Distributed under the terms of the Modified BSD License. 
61 | -------------------------------------------------------------------------------- /conftest.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | import os 4 | 5 | import docker 6 | import pytest 7 | import requests 8 | 9 | from requests.packages.urllib3.util.retry import Retry 10 | from requests.adapters import HTTPAdapter 11 | 12 | 13 | @pytest.fixture(scope='session') 14 | def http_client(): 15 | """Requests session with retries and backoff.""" 16 | s = requests.Session() 17 | retries = Retry(total=5, backoff_factor=1) 18 | s.mount('http://', HTTPAdapter(max_retries=retries)) 19 | s.mount('https://', HTTPAdapter(max_retries=retries)) 20 | return s 21 | 22 | 23 | @pytest.fixture(scope='session') 24 | def docker_client(): 25 | """Docker client configured based on the host environment""" 26 | return docker.from_env() 27 | 28 | 29 | @pytest.fixture(scope='session') 30 | def image_name(): 31 | """Image name to test""" 32 | return os.getenv('TEST_IMAGE', 'jupyter/base-notebook') 33 | 34 | 35 | class TrackedContainer(object): 36 | """Wrapper that collects docker container configuration and delays 37 | container creation/execution. 38 | 39 | Parameters 40 | ---------- 41 | docker_client: docker.DockerClient 42 | Docker client instance 43 | image_name: str 44 | Name of the docker image to launch 45 | **kwargs: dict, optional 46 | Default keyword arguments to pass to docker.DockerClient.containers.run 47 | """ 48 | def __init__(self, docker_client, image_name, **kwargs): 49 | self.container = None 50 | self.docker_client = docker_client 51 | self.image_name = image_name 52 | self.kwargs = kwargs 53 | 54 | def run(self, **kwargs): 55 | """Runs a docker container using the preconfigured image name 56 | and a mix of the preconfigured container options and those passed 57 | to this method. 58 | 59 | Keeps track of the docker.Container instance spawned to kill it 60 | later. 61 | 62 | Parameters 63 | ---------- 64 | **kwargs: dict, optional 65 | Keyword arguments to pass to docker.DockerClient.containers.run 66 | extending and/or overriding key/value pairs passed to the constructor 67 | 68 | Returns 69 | ------- 70 | docker.Container 71 | """ 72 | all_kwargs = {} 73 | all_kwargs.update(self.kwargs) 74 | all_kwargs.update(kwargs) 75 | self.container = self.docker_client.containers.run(self.image_name, **all_kwargs) 76 | return self.container 77 | 78 | def remove(self): 79 | """Kills and removes the tracked docker container.""" 80 | if self.container: 81 | self.container.remove(force=True) 82 | 83 | 84 | @pytest.fixture(scope='function') 85 | def container(docker_client, image_name): 86 | """Notebook container with initial configuration appropriate for testing 87 | (e.g., HTTP port exposed to the host for HTTP calls). 88 | 89 | Yields the container instance and kills it when the caller is done with it. 90 | """ 91 | container = TrackedContainer( 92 | docker_client, 93 | image_name, 94 | detach=True, 95 | ports={ 96 | '8888/tcp': 8888 97 | } 98 | ) 99 | yield container 100 | container.remove() 101 | -------------------------------------------------------------------------------- /base-notebook/Dockerfile.ppc64le: -------------------------------------------------------------------------------- 1 | # Copyright (c) IBM Corporation 2016 2 | # Distributed under the terms of the Modified BSD License. 
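#
# NOTE (editorial, based on the top-level Makefile and Dockerfile.ppc64le.patch in this
# repo): this file mirrors base-notebook/Dockerfile for ppc64le; the differences are
# captured in Dockerfile.ppc64le.patch, which the Makefile's arch_patch target applies
# when building on a ppc64le host.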
3 | 4 | # Ubuntu image 5 | FROM ppc64le/ubuntu:trusty 6 | 7 | LABEL maintainer="Ilsiyar Gaynutdinov " 8 | 9 | USER root 10 | 11 | # Install all OS dependencies for notebook server that starts but lacks all 12 | # features (e.g., download as all possible file formats) 13 | ENV DEBIAN_FRONTEND noninteractive 14 | RUN apt-get update && apt-get -yq dist-upgrade \ 15 | && apt-get install -yq --no-install-recommends \ 16 | build-essential \ 17 | bzip2 \ 18 | ca-certificates \ 19 | cmake \ 20 | git \ 21 | locales \ 22 | sudo \ 23 | wget \ 24 | && apt-get clean && \ 25 | rm -rf /var/lib/apt/lists/* 26 | 27 | RUN echo "LANGUAGE=en_US.UTF-8" >> /etc/default/locale 28 | RUN echo "LC_ALL=en_US.UTF-8" >> /etc/default/locale 29 | RUN echo "LC_TYPE=en_US.UTF-8" >> /etc/default/locale 30 | RUN locale-gen en_US en_US.UTF-8 31 | 32 | #build and install Tini for ppc64le 33 | RUN wget https://github.com/krallin/tini/archive/v0.10.0.tar.gz && \ 34 | tar zxvf v0.10.0.tar.gz && \ 35 | rm -rf v0.10.0.tar.gz 36 | WORKDIR tini-0.10.0/ 37 | RUN cmake . && make install 38 | RUN mv ./tini /usr/local/bin/tini && \ 39 | chmod +x /usr/local/bin/tini 40 | WORKDIR .. 41 | 42 | # Configure environment 43 | ENV CONDA_DIR /opt/conda 44 | ENV PATH $CONDA_DIR/bin:$PATH 45 | ENV SHELL /bin/bash 46 | ENV NB_USER jovyan 47 | ENV NB_UID 1000 48 | ENV HOME /home/$NB_USER 49 | ENV LC_ALL en_US.UTF-8 50 | ENV LANG en_US.UTF-8 51 | ENV LANGUAGE en_US.UTF-8 52 | 53 | # Create jovyan user with UID=1000 and in the 'users' group 54 | RUN useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \ 55 | mkdir -p $CONDA_DIR && \ 56 | chown $NB_USER $CONDA_DIR 57 | 58 | USER $NB_UID 59 | 60 | # Setup jovyan home directory 61 | RUN mkdir /home/$NB_USER/work && \ 62 | mkdir /home/$NB_USER/.jupyter && \ 63 | echo "cacert=/etc/ssl/certs/ca-certificates.crt" > /home/$NB_USER/.curlrc 64 | 65 | # Install conda as jovyan 66 | RUN cd /tmp && \ 67 | mkdir -p $CONDA_DIR && \ 68 | wget https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-ppc64le.sh && \ 69 | /bin/bash Miniconda3-4.2.12-Linux-ppc64le.sh -f -b -p $CONDA_DIR && \ 70 | rm -rf Miniconda3-4.2.12-Linux-ppc64le.sh && \ 71 | $CONDA_DIR/bin/conda install --quiet --yes conda=4.2.12 && \ 72 | $CONDA_DIR/bin/conda config --system --add channels conda-forge && \ 73 | $CONDA_DIR/bin/conda config --system --set auto_update_conda false && \ 74 | conda clean -tipsy 75 | 76 | # Install Jupyter notebook and Hub 77 | RUN yes | pip install --upgrade pip 78 | RUN yes | pip install --quiet --no-cache-dir \ 79 | 'notebook==5.2.*' \ 80 | 'jupyterhub==0.7.*' \ 81 | 'jupyterlab==0.18.*' 82 | 83 | USER root 84 | 85 | EXPOSE 8888 86 | WORKDIR /home/$NB_USER/work 87 | RUN echo "ALL ALL = (ALL) NOPASSWD: ALL" >> /etc/sudoers 88 | 89 | # Configure container startup 90 | ENTRYPOINT ["tini", "--"] 91 | CMD ["start-notebook.sh"] 92 | 93 | # Add local files as late as possible to avoid cache busting 94 | COPY start.sh /usr/local/bin/ 95 | COPY start-notebook.sh /usr/local/bin/ 96 | COPY start-singleuser.sh /usr/local/bin/ 97 | COPY jupyter_notebook_config.py /home/$NB_USER/.jupyter/ 98 | RUN chown -R $NB_USER:users /home/$NB_USER/.jupyter 99 | 100 | # Switch back to jovyan to avoid accidental container runs as root 101 | USER $NB_UID 102 | -------------------------------------------------------------------------------- /datascience-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 
2 | # Distributed under the terms of the Modified BSD License. 3 | FROM jupyter/scipy-notebook 4 | 5 | LABEL maintainer="Jupyter Project " 6 | 7 | # Set when building on Travis so that certain long-running build steps can 8 | # be skipped to shorten build time. 9 | ARG TEST_ONLY_BUILD 10 | 11 | USER root 12 | 13 | # R pre-requisites 14 | RUN apt-get update && \ 15 | apt-get install -y --no-install-recommends \ 16 | fonts-dejavu \ 17 | tzdata \ 18 | gfortran \ 19 | gcc && apt-get clean && \ 20 | rm -rf /var/lib/apt/lists/* 21 | 22 | # Julia dependencies 23 | # install Julia packages in /opt/julia instead of $HOME 24 | ENV JULIA_PKGDIR=/opt/julia 25 | ENV JULIA_VERSION=0.6.2 26 | 27 | RUN mkdir /opt/julia-${JULIA_VERSION} && \ 28 | cd /tmp && \ 29 | wget -q https://julialang-s3.julialang.org/bin/linux/x64/`echo ${JULIA_VERSION} | cut -d. -f 1,2`/julia-${JULIA_VERSION}-linux-x86_64.tar.gz && \ 30 | echo "dc6ec0b13551ce78083a5849268b20684421d46a7ec46b17ec1fab88a5078580 *julia-${JULIA_VERSION}-linux-x86_64.tar.gz" | sha256sum -c - && \ 31 | tar xzf julia-${JULIA_VERSION}-linux-x86_64.tar.gz -C /opt/julia-${JULIA_VERSION} --strip-components=1 && \ 32 | rm /tmp/julia-${JULIA_VERSION}-linux-x86_64.tar.gz 33 | RUN ln -fs /opt/julia-*/bin/julia /usr/local/bin/julia 34 | 35 | # Show Julia where conda libraries are \ 36 | RUN mkdir /etc/julia && \ 37 | echo "push!(Libdl.DL_LOAD_PATH, \"$CONDA_DIR/lib\")" >> /etc/julia/juliarc.jl && \ 38 | # Create JULIA_PKGDIR \ 39 | mkdir $JULIA_PKGDIR && \ 40 | chown $NB_USER $JULIA_PKGDIR && \ 41 | fix-permissions $JULIA_PKGDIR 42 | 43 | USER $NB_UID 44 | 45 | # R packages including IRKernel which gets installed globally. 46 | RUN conda install --quiet --yes \ 47 | 'rpy2=2.8*' \ 48 | 'r-base=3.4.1' \ 49 | 'r-irkernel=0.8*' \ 50 | 'r-plyr=1.8*' \ 51 | 'r-devtools=1.13*' \ 52 | 'r-tidyverse=1.1*' \ 53 | 'r-shiny=1.0*' \ 54 | 'r-rmarkdown=1.8*' \ 55 | 'r-forecast=8.2*' \ 56 | 'r-rsqlite=2.0*' \ 57 | 'r-reshape2=1.4*' \ 58 | 'r-nycflights13=0.2*' \ 59 | 'r-caret=6.0*' \ 60 | 'r-rcurl=1.95*' \ 61 | 'r-crayon=1.3*' \ 62 | 'r-randomforest=4.6*' \ 63 | 'r-htmltools=0.3*' \ 64 | 'r-sparklyr=0.7*' \ 65 | 'r-htmlwidgets=1.0*' \ 66 | 'r-hexbin=1.27*' && \ 67 | conda clean -tipsy && \ 68 | fix-permissions $CONDA_DIR && \ 69 | fix-permissions /home/$NB_USER 70 | 71 | # Add Julia packages. Only add HDF5 if this is not a test-only build since 72 | # it takes roughly half the entire build time of all of the images on Travis 73 | # to add this one package and often causes Travis to timeout. 74 | # 75 | # Install IJulia as jovyan and then move the kernelspec out 76 | # to the system share location. Avoids problems with runtime UID change not 77 | # taking effect properly on the .local folder in the jovyan home dir. 
78 | RUN julia -e 'Pkg.init()' && \ 79 | julia -e 'Pkg.update()' && \ 80 | (test $TEST_ONLY_BUILD || julia -e 'Pkg.add("HDF5")') && \ 81 | julia -e 'Pkg.add("Gadfly")' && \ 82 | julia -e 'Pkg.add("RDatasets")' && \ 83 | julia -e 'Pkg.add("IJulia")' && \ 84 | # Precompile Julia packages \ 85 | julia -e 'using IJulia' && \ 86 | # move kernelspec out of home \ 87 | mv $HOME/.local/share/jupyter/kernels/julia* $CONDA_DIR/share/jupyter/kernels/ && \ 88 | chmod -R go+rx $CONDA_DIR/share/jupyter && \ 89 | rm -rf $HOME/.local && \ 90 | fix-permissions $JULIA_PKGDIR $CONDA_DIR/share/jupyter 91 | -------------------------------------------------------------------------------- /base-notebook/test/test_container_options.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | import time 4 | 5 | import pytest 6 | 7 | 8 | def test_cli_args(container, http_client): 9 | """Container should respect notebook server command line args 10 | (e.g., disabling token security)""" 11 | container.run( 12 | command=['start-notebook.sh', '--NotebookApp.token=""'] 13 | ) 14 | resp = http_client.get('http://localhost:8888') 15 | resp.raise_for_status() 16 | assert 'login_submit' not in resp.text 17 | 18 | 19 | @pytest.mark.filterwarnings('ignore:Unverified HTTPS request') 20 | def test_unsigned_ssl(container, http_client): 21 | """Container should generate a self-signed SSL certificate 22 | and notebook server should use it to enable HTTPS. 23 | """ 24 | container.run( 25 | environment=['GEN_CERT=yes'] 26 | ) 27 | # NOTE: The requests.Session backing the http_client fixture does not retry 28 | # properly while the server is booting up. An SSL handshake error seems to 29 | # abort the retry logic. Forcing a long sleep for the moment until I have 30 | # time to dig more. 
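    # (Added note: if this test flakes on a slow host, lengthening the delay below is the
    # simplest mitigation until the retry behaviour described above is fixed.)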
31 | time.sleep(5) 32 | resp = http_client.get('https://localhost:8888', verify=False) 33 | resp.raise_for_status() 34 | assert 'login_submit' in resp.text 35 | 36 | 37 | def test_uid_change(container): 38 | """Container should change the UID of the default user.""" 39 | c = container.run( 40 | tty=True, 41 | user='root', 42 | environment=['NB_UID=1010'], 43 | command=['start.sh', 'bash', '-c', 'id && touch /opt/conda/test-file'] 44 | ) 45 | # usermod is slow so give it some time 46 | c.wait(timeout=120) 47 | assert 'uid=1010(jovyan)' in c.logs(stdout=True).decode('utf-8') 48 | 49 | 50 | def test_gid_change(container): 51 | """Container should change the GID of the default user.""" 52 | c = container.run( 53 | tty=True, 54 | user='root', 55 | environment=['NB_GID=110'], 56 | command=['start.sh', 'id'] 57 | ) 58 | c.wait(timeout=10) 59 | assert 'gid=110(users)' in c.logs(stdout=True).decode('utf-8') 60 | 61 | 62 | def test_sudo(container): 63 | """Container should grant passwordless sudo to the default user.""" 64 | c = container.run( 65 | tty=True, 66 | user='root', 67 | environment=['GRANT_SUDO=yes'], 68 | command=['start.sh', 'sudo', 'id'] 69 | ) 70 | rv = c.wait(timeout=10) 71 | assert rv == 0 or rv["StatusCode"] == 0 72 | assert 'uid=0(root)' in c.logs(stdout=True).decode('utf-8') 73 | 74 | 75 | def test_sudo_path(container): 76 | """Container should include /opt/conda/bin in the sudo secure_path.""" 77 | c = container.run( 78 | tty=True, 79 | user='root', 80 | environment=['GRANT_SUDO=yes'], 81 | command=['start.sh', 'sudo', 'which', 'jupyter'] 82 | ) 83 | rv = c.wait(timeout=10) 84 | assert rv == 0 or rv["StatusCode"] == 0 85 | assert c.logs(stdout=True).decode('utf-8').rstrip().endswith('/opt/conda/bin/jupyter') 86 | 87 | 88 | def test_sudo_path_without_grant(container): 89 | """Container should include /opt/conda/bin in the sudo secure_path.""" 90 | c = container.run( 91 | tty=True, 92 | user='root', 93 | command=['start.sh', 'which', 'jupyter'] 94 | ) 95 | rv = c.wait(timeout=10) 96 | assert rv == 0 or rv["StatusCode"] == 0 97 | assert c.logs(stdout=True).decode('utf-8').rstrip().endswith('/opt/conda/bin/jupyter') 98 | 99 | 100 | def test_group_add(container, tmpdir): 101 | """Container should run with the specified uid, gid, and secondary 102 | group. 103 | """ 104 | c = container.run( 105 | user='1010:1010', 106 | group_add=['users'], 107 | command=['start.sh', 'id'] 108 | ) 109 | rv = c.wait(timeout=5) 110 | assert rv == 0 or rv["StatusCode"] == 0 111 | assert 'uid=1010 gid=1010 groups=1010,100(users)' in c.logs(stdout=True).decode('utf-8') 112 | -------------------------------------------------------------------------------- /base-notebook/Dockerfile.ppc64le.patch: -------------------------------------------------------------------------------- 1 | --- Dockerfile 2017-05-11 12:59:30.006182756 -0400 2 | +++ Dockerfile.ppc64le 2017-05-11 12:59:57.326632865 -0400 3 | @@ -1,37 +1,43 @@ 4 | -# Copyright (c) Jupyter Development Team. 5 | +# Copyright (c) IBM Corporation 2016 6 | # Distributed under the terms of the Modified BSD License. 
7 | 8 | -# Debian Jessie debootstrap from 2017-02-27 9 | -# https://github.com/docker-library/official-images/commit/aa5973d0c918c70c035ec0746b8acaec3a4d7777 10 | -FROM debian@sha256:52af198afd8c264f1035206ca66a5c48e602afb32dc912ebf9e9478134601ec4 11 | +# Ubuntu image 12 | +FROM ppc64le/ubuntu:trusty 13 | 14 | -MAINTAINER Jupyter Project 15 | +MAINTAINER Ilsiyar Gaynutdinov 16 | 17 | USER root 18 | 19 | # Install all OS dependencies for notebook server that starts but lacks all 20 | # features (e.g., download as all possible file formats) 21 | ENV DEBIAN_FRONTEND noninteractive 22 | -RUN REPO=http://cdn-fastly.deb.debian.org \ 23 | - && echo "deb $REPO/debian jessie main\ndeb $REPO/debian-security jessie/updates main" > /etc/apt/sources.list \ 24 | - && apt-get update && apt-get -yq dist-upgrade \ 25 | +RUN apt-get update && apt-get -yq dist-upgrade \ 26 | && apt-get install -yq --no-install-recommends \ 27 | - wget \ 28 | + build-essential \ 29 | bzip2 \ 30 | ca-certificates \ 31 | - sudo \ 32 | + cmake \ 33 | + git \ 34 | locales \ 35 | - && apt-get clean \ 36 | - && rm -rf /var/lib/apt/lists/* 37 | - 38 | -RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \ 39 | - locale-gen 40 | + sudo \ 41 | + wget \ 42 | + && apt-get clean && \ 43 | + rm -rf /var/lib/apt/lists/* 44 | 45 | -# Install Tini 46 | -RUN wget --quiet https://github.com/krallin/tini/releases/download/v0.10.0/tini && \ 47 | - echo "1361527f39190a7338a0b434bd8c88ff7233ce7b9a4876f3315c22fce7eca1b0 *tini" | sha256sum -c - && \ 48 | - mv tini /usr/local/bin/tini && \ 49 | +RUN echo "LANGUAGE=en_US.UTF-8" >> /etc/default/locale 50 | +RUN echo "LC_ALL=en_US.UTF-8" >> /etc/default/locale 51 | +RUN echo "LC_TYPE=en_US.UTF-8" >> /etc/default/locale 52 | +RUN locale-gen en_US en_US.UTF-8 53 | + 54 | +#build and install Tini for ppc64le 55 | +RUN wget https://github.com/krallin/tini/archive/v0.10.0.tar.gz && \ 56 | + tar zxvf v0.10.0.tar.gz && \ 57 | + rm -rf v0.10.0.tar.gz 58 | +WORKDIR tini-0.10.0/ 59 | +RUN cmake . && make install 60 | +RUN mv ./tini /usr/local/bin/tini && \ 61 | chmod +x /usr/local/bin/tini 62 | +WORKDIR .. 
63 | 64 | # Configure environment 65 | ENV CONDA_DIR /opt/conda 66 | @@ -59,25 +65,26 @@ 67 | # Install conda as jovyan 68 | RUN cd /tmp && \ 69 | mkdir -p $CONDA_DIR && \ 70 | - wget --quiet https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh && \ 71 | - echo "c59b3dd3cad550ac7596e0d599b91e75d88826db132e4146030ef471bb434e9a *Miniconda3-4.2.12-Linux-x86_64.sh" | sha256sum -c - && \ 72 | - /bin/bash Miniconda3-4.2.12-Linux-x86_64.sh -f -b -p $CONDA_DIR && \ 73 | - rm Miniconda3-4.2.12-Linux-x86_64.sh && \ 74 | + wget https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-ppc64le.sh && \ 75 | + /bin/bash Miniconda3-4.2.12-Linux-ppc64le.sh -f -b -p $CONDA_DIR && \ 76 | + rm -rf Miniconda3-4.2.12-Linux-ppc64le.sh && \ 77 | + $CONDA_DIR/bin/conda install --quiet --yes conda=4.2.12 && \ 78 | $CONDA_DIR/bin/conda config --system --add channels conda-forge && \ 79 | $CONDA_DIR/bin/conda config --system --set auto_update_conda false && \ 80 | conda clean -tipsy 81 | 82 | -# Install Jupyter Notebook and Hub 83 | -RUN conda install --quiet --yes \ 84 | - 'notebook=5.2.*' \ 85 | - 'jupyterhub=0.7.*' \ 86 | - 'jupyterlab=0.18.*' \ 87 | - && conda clean -tipsy 88 | +# Install Jupyter notebook and Hub 89 | +RUN yes | pip install --upgrade pip 90 | +RUN yes | pip install --quiet --no-cache-dir \ 91 | + 'notebook==5.2.*' \ 92 | + 'jupyterhub==0.7.*' \ 93 | + 'jupyterlab==0.18.*' 94 | 95 | USER root 96 | 97 | EXPOSE 8888 98 | WORKDIR /home/$NB_USER/work 99 | +RUN echo "ALL ALL = (ALL) NOPASSWD: ALL" >> /etc/sudoers 100 | 101 | # Configure container startup 102 | ENTRYPOINT ["tini", "--"] 103 | -------------------------------------------------------------------------------- /base-notebook/Dockerfile: -------------------------------------------------------------------------------- 1 | # Copyright (c) Jupyter Development Team. 2 | # Distributed under the terms of the Modified BSD License. 3 | 4 | # Ubuntu 16.04 (xenial) from 2018-02-28 5 | # https://github.com/docker-library/official-images/commit/8728671fdca3dfc029be4ab838ab5315aa125181 6 | FROM ubuntu:xenial-20180228@sha256:e348fbbea0e0a0e73ab0370de151e7800684445c509d46195aef73e090a49bd6 7 | 8 | LABEL maintainer="Jupyter Project " 9 | 10 | USER root 11 | 12 | # Install all OS dependencies for notebook server that starts but lacks all 13 | # features (e.g., download as all possible file formats) 14 | ENV DEBIAN_FRONTEND noninteractive 15 | RUN apt-get update && apt-get -yq dist-upgrade \ 16 | && apt-get install -yq --no-install-recommends \ 17 | wget \ 18 | bzip2 \ 19 | ca-certificates \ 20 | sudo \ 21 | locales \ 22 | fonts-liberation \ 23 | && apt-get clean \ 24 | && rm -rf /var/lib/apt/lists/* 25 | 26 | RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \ 27 | locale-gen 28 | 29 | # Install Tini 30 | RUN wget --quiet https://github.com/krallin/tini/releases/download/v0.10.0/tini && \ 31 | echo "1361527f39190a7338a0b434bd8c88ff7233ce7b9a4876f3315c22fce7eca1b0 *tini" | sha256sum -c - && \ 32 | mv tini /usr/local/bin/tini && \ 33 | chmod +x /usr/local/bin/tini 34 | 35 | # Configure environment 36 | ENV CONDA_DIR=/opt/conda \ 37 | SHELL=/bin/bash \ 38 | NB_USER=jovyan \ 39 | NB_UID=1000 \ 40 | NB_GID=100 \ 41 | LC_ALL=en_US.UTF-8 \ 42 | LANG=en_US.UTF-8 \ 43 | LANGUAGE=en_US.UTF-8 44 | ENV PATH=$CONDA_DIR/bin:$PATH \ 45 | HOME=/home/$NB_USER 46 | 47 | ADD fix-permissions /usr/local/bin/fix-permissions 48 | # Create jovyan user with UID=1000 and in the 'users' group 49 | # and make sure these dirs are writable by the `users` group. 
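# (Added note, based on start.sh in this repo: /etc/passwd and /etc/group are made
# group-writable below so that start.sh can append an entry when the container is started
# with an arbitrary, non-default UID.)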
50 | RUN useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \ 51 | mkdir -p $CONDA_DIR && \ 52 | chown $NB_USER:$NB_GID $CONDA_DIR && \ 53 | chmod g+w /etc/passwd /etc/group && \ 54 | fix-permissions $HOME && \ 55 | fix-permissions $CONDA_DIR 56 | 57 | USER $NB_UID 58 | 59 | # Setup work directory for backward-compatibility 60 | RUN mkdir /home/$NB_USER/work && \ 61 | fix-permissions /home/$NB_USER 62 | 63 | # Install conda as jovyan and check the md5 sum provided on the download site 64 | ENV MINICONDA_VERSION 4.3.30 65 | RUN cd /tmp && \ 66 | wget --quiet https://repo.continuum.io/miniconda/Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh && \ 67 | echo "0b80a152332a4ce5250f3c09589c7a81 *Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh" | md5sum -c - && \ 68 | /bin/bash Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh -f -b -p $CONDA_DIR && \ 69 | rm Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh && \ 70 | $CONDA_DIR/bin/conda config --system --prepend channels conda-forge && \ 71 | $CONDA_DIR/bin/conda config --system --set auto_update_conda false && \ 72 | $CONDA_DIR/bin/conda config --system --set show_channel_urls true && \ 73 | $CONDA_DIR/bin/conda update --all --quiet --yes && \ 74 | conda clean -tipsy && \ 75 | rm -rf /home/$NB_USER/.cache/yarn && \ 76 | fix-permissions $CONDA_DIR && \ 77 | fix-permissions /home/$NB_USER 78 | 79 | # Install Jupyter Notebook and Hub 80 | RUN conda install --quiet --yes \ 81 | 'notebook=5.4.*' \ 82 | 'jupyterhub=0.8.*' \ 83 | 'jupyterlab=0.31.*' && \ 84 | conda clean -tipsy && \ 85 | jupyter labextension install @jupyterlab/hub-extension@^0.8.0 && \ 86 | npm cache clean && \ 87 | rm -rf $CONDA_DIR/share/jupyter/lab/staging && \ 88 | rm -rf /home/$NB_USER/.cache/yarn && \ 89 | fix-permissions $CONDA_DIR && \ 90 | fix-permissions /home/$NB_USER 91 | 92 | USER root 93 | 94 | EXPOSE 8888 95 | WORKDIR $HOME 96 | 97 | # Configure container startup 98 | ENTRYPOINT ["tini", "--"] 99 | CMD ["start-notebook.sh"] 100 | 101 | # Add local files as late as possible to avoid cache busting 102 | COPY start.sh /usr/local/bin/ 103 | COPY start-notebook.sh /usr/local/bin/ 104 | COPY start-singleuser.sh /usr/local/bin/ 105 | COPY jupyter_notebook_config.py /etc/jupyter/ 106 | RUN fix-permissions /etc/jupyter/ 107 | 108 | # Switch back to jovyan to avoid accidental container runs as root 109 | USER $NB_UID 110 | -------------------------------------------------------------------------------- /examples/make-deploy/README.md: -------------------------------------------------------------------------------- 1 | This folder contains a Makefile and a set of supporting files demonstrating how to run a docker-stack notebook container on a docker-machine controlled host. 2 | 3 | ## Prerequisites 4 | 5 | * make 3.81+ 6 | * Ubuntu users: Be aware of [make 3.81 defect 483086](https://bugs.launchpad.net/ubuntu/+source/make-dfsg/+bug/483086) which exists in 14.04 LTS but is fixed in 15.04+ 7 | * docker-machine 0.5.0+ 8 | * docker 1.9.0+ 9 | 10 | ## Quickstart 11 | 12 | To show what's possible, here's how to run the `jupyter/minimal-notebook` on a brand new local virtualbox. 13 | 14 | ``` 15 | # create a new VM 16 | make virtualbox-vm NAME=dev 17 | # make the new VM the active docker machine 18 | eval $(docker-machine env dev) 19 | # pull a docker stack and build a local image from it 20 | make image 21 | # start a notebook server in a container 22 | make notebook 23 | ``` 24 | 25 | The last command will log the IP address and port to visit in your browser. 
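If you want to double-check which machine is active and where the server will be reachable, something like the following may help (a sketch that assumes the VM is named `dev` as in the example above):

```
# show the active Docker Machine and the IP address the notebook will be served from
docker-machine active
docker-machine ip dev
```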
26 | 27 | ## FAQ 28 | 29 | ### Can I run multiple notebook containers on the same VM? 30 | 31 | Yes. Specify a unique name and port on the `make notebook` command. 32 | 33 | ``` 34 | make notebook NAME=my-notebook PORT=9000 35 | make notebook NAME=your-notebook PORT=9001 36 | ``` 37 | 38 | ### Can multiple notebook containers share their notebook directory? 39 | 40 | Yes. 41 | 42 | ``` 43 | make notebook NAME=my-notebook PORT=9000 WORK_VOLUME=our-work 44 | make notebook NAME=your-notebook PORT=9001 WORK_VOLUME=our-work 45 | ``` 46 | 47 | ### How do I run over HTTPS? 48 | 49 | Instead of `make notebook`, run `make self-signed-notebook PASSWORD=your_desired_password`. This target gives you a notebook with a self-signed certificate. 50 | 51 | ### That self-signed certificate is a pain. Let's Encrypt? 52 | 53 | Yes. Please. 54 | 55 | ``` 56 | make letsencrypt FQDN=host.mydomain.com EMAIL=myemail@somewhere.com 57 | make letsencrypt-notebook 58 | ``` 59 | 60 | The first command creates a Docker volume named after the notebook container with a `-secrets` suffix. It then runs the `letsencrypt` client with a slew of options (one of which has you automatically agreeing to the Let's Encrypt Terms of Service, see the Makefile). The second command mounts the secrets volume and configures Jupyter to use the full-chain certificate and private key. 61 | 62 | Be aware: Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`. 63 | 64 | ``` 65 | make letsencrypt FQDN=host.mydomain.com EMAIL=myemail@somewhere.com CERT_SERVER=--staging 66 | ``` 67 | 68 | Also, keep in mind that Let's Encrypt certificates are short lived: 90 days at the moment. You'll need to manually set up a cron job to run the renewal steps. (You can reuse the first command above.) 69 | 70 | ### My pip/conda/apt-get installs disappear every time I restart the container. Can I make them permanent? 71 | 72 | ``` 73 | # add your pip, conda, apt-get, etc. permanent features to the Dockerfile where 74 | # indicated by the comments in the Dockerfile 75 | vi Dockerfile 76 | make image 77 | make notebook 78 | ``` 79 | 80 | ### How do I upgrade my Docker container? 81 | 82 | ``` 83 | make image DOCKER_ARGS=--pull 84 | make notebook 85 | ``` 86 | 87 | The first line pulls the latest version of the Docker image used in the local Dockerfile. Then it rebuilds the local Docker image containing any customizations you may have added to it. The second line kills your currently running notebook container, and starts a fresh one using the new image. 88 | 89 | ### Can I run on a VM provider other than VirtualBox? 90 | 91 | Yes. As an example, there's a `softlayer.makefile` included in this repo. You would use it like so: 92 | 93 | ``` 94 | make softlayer-vm NAME=myhost \ 95 | SOFTLAYER_DOMAIN=your_desired_domain \ 96 | SOFTLAYER_USER=your_user_id \ 97 | SOFTLAYER_API_KEY=your_api_key 98 | eval $(docker-machine env myhost) 99 | # optional, creates a real DNS entry for the VM using the machine name as the hostname 100 | make softlayer-dns SOFTLAYER_DOMAIN=your_desired_domain 101 | make image 102 | make notebook 103 | ``` 104 | 105 | If you'd like to add support for another docker-machine driver, use the `softlayer.makefile` as a template. 106 | 107 | ### Where are my notebooks stored?
108 | 109 | `make notebook` creates a Docker volume named after the notebook container with a `-data` suffix. 110 | 111 | ### Uh ... make? 112 | 113 | Yes, sorry Windows users. It got the job done for a simple example. We can certainly accept other deployment mechanism examples in the parent folder or in other repos. 114 | 115 | ### Are there any other options? 116 | 117 | Yes indeed. `cat` the Makefiles and look at the target parameters. 118 | -------------------------------------------------------------------------------- /base-notebook/start.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Copyright (c) Jupyter Development Team. 3 | # Distributed under the terms of the Modified BSD License. 4 | 5 | set -e 6 | 7 | # Exec the specified command or fall back on bash 8 | if [ $# -eq 0 ]; then 9 | cmd=bash 10 | else 11 | cmd=$* 12 | fi 13 | 14 | for f in /usr/local/bin/start-notebook.d/*; do 15 | case "$f" in 16 | *.sh) 17 | echo "$0: running $f"; . "$f" 18 | ;; 19 | *) 20 | if [ -x $f ]; then 21 | echo "$0: running $f" 22 | $f 23 | else 24 | echo "$0: ignoring $f" 25 | fi 26 | ;; 27 | esac 28 | echo 29 | done 30 | # Handle special flags if we're root 31 | if [ $(id -u) == 0 ] ; then 32 | 33 | # Only attempt to change the jovyan username if it exists 34 | if id jovyan &> /dev/null ; then 35 | echo "Set username to: $NB_USER" 36 | usermod -d /home/$NB_USER -l $NB_USER jovyan 37 | fi 38 | 39 | # Handle case where provisioned storage does not have the correct permissions by default 40 | # Ex: default NFS/EFS (no auto-uid/gid) 41 | if [[ "$CHOWN_HOME" == "1" || "$CHOWN_HOME" == 'yes' ]]; then 42 | echo "Changing ownership of /home/$NB_USER to $NB_UID:$NB_GID" 43 | chown -R $NB_UID:$NB_GID /home/$NB_USER 44 | fi 45 | 46 | # handle home and working directory if the username changed 47 | if [[ "$NB_USER" != "jovyan" ]]; then 48 | # changing username, make sure homedir exists 49 | # (it could be mounted, and we shouldn't create it if it already exists) 50 | if [[ ! 
-e "/home/$NB_USER" ]]; then 51 | echo "Relocating home dir to /home/$NB_USER" 52 | mv /home/jovyan "/home/$NB_USER" 53 | fi 54 | # if workdir is in /home/jovyan, cd to /home/$NB_USER 55 | if [[ "$PWD/" == "/home/jovyan/"* ]]; then 56 | newcwd="/home/$NB_USER/${PWD:13}" 57 | echo "Setting CWD to $newcwd" 58 | cd "$newcwd" 59 | fi 60 | fi 61 | 62 | # Change UID of NB_USER to NB_UID if it does not match 63 | if [ "$NB_UID" != $(id -u $NB_USER) ] ; then 64 | echo "Set $NB_USER UID to: $NB_UID" 65 | usermod -u $NB_UID $NB_USER 66 | fi 67 | 68 | # Change GID of NB_USER to NB_GID if it does not match 69 | if [ "$NB_GID" != $(id -g $NB_USER) ] ; then 70 | echo "Set $NB_USER GID to: $NB_GID" 71 | groupmod -g $NB_GID -o $(id -g -n $NB_USER) 72 | fi 73 | 74 | # Enable sudo if requested 75 | if [[ "$GRANT_SUDO" == "1" || "$GRANT_SUDO" == 'yes' ]]; then 76 | echo "Granting $NB_USER sudo access and appending $CONDA_DIR/bin to sudo PATH" 77 | echo "$NB_USER ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/notebook 78 | fi 79 | 80 | # Add $CONDA_DIR/bin to sudo secure_path 81 | sed -r "s#Defaults\s+secure_path=\"([^\"]+)\"#Defaults secure_path=\"\1:$CONDA_DIR/bin\"#" /etc/sudoers | grep secure_path > /etc/sudoers.d/path 82 | 83 | # Exec the command as NB_USER with the PATH and the rest of 84 | # the environment preserved 85 | echo "Executing the command: $cmd" 86 | exec sudo -E -H -u $NB_USER PATH=$PATH PYTHONPATH=$PYTHONPATH $cmd 87 | else 88 | if [[ "$NB_UID" == "$(id -u jovyan)" && "$NB_GID" == "$(id -g jovyan)" ]]; then 89 | # User is not attempting to override user/group via environment 90 | # variables, but they could still have overridden the uid/gid that 91 | # container runs as. Check that the user has an entry in the passwd 92 | # file and if not add an entry. Also add a group file entry if the 93 | # uid has its own distinct group but there is no entry. 94 | whoami &> /dev/null || STATUS=$? && true 95 | if [[ "$STATUS" != "0" ]]; then 96 | if [[ -w /etc/passwd ]]; then 97 | echo "Adding passwd file entry for $(id -u)" 98 | cat /etc/passwd | sed -e "s/^jovyan:/nayvoj:/" > /tmp/passwd 99 | echo "jovyan:x:$(id -u):$(id -g):,,,:/home/jovyan:/bin/bash" >> /tmp/passwd 100 | cat /tmp/passwd > /etc/passwd 101 | rm /tmp/passwd 102 | id -G -n 2>/dev/null | grep -q -w $(id -u) || STATUS=$? && true 103 | if [[ "$STATUS" != "0" && "$(id -g)" == "0" ]]; then 104 | echo "Adding group file entry for $(id -u)" 105 | echo "jovyan:x:$(id -u):" >> /etc/group 106 | fi 107 | else 108 | echo 'Container must be run with group root to update passwd file' 109 | fi 110 | fi 111 | 112 | # Warn if the user isn't going to be able to write files to $HOME. 113 | if [[ ! -w /home/jovyan ]]; then 114 | echo 'Container must be run with group users to update files' 115 | fi 116 | else 117 | # Warn if looks like user want to override uid/gid but hasn't 118 | # run the container as root. 119 | if [[ ! -z "$NB_UID" && "$NB_UID" != "$(id -u)" ]]; then 120 | echo 'Container must be run as root to set $NB_UID' 121 | fi 122 | if [[ ! -z "$NB_GID" && "$NB_GID" != "$(id -g)" ]]; then 123 | echo 'Container must be run as root to set $NB_GID' 124 | fi 125 | fi 126 | 127 | # Warn if looks like user want to run in sudo mode but hasn't run 128 | # the container as root. 
129 | if [[ "$GRANT_SUDO" == "1" || "$GRANT_SUDO" == 'yes' ]]; then 130 | echo 'Container must be run as root to grant sudo permissions' 131 | fi 132 | 133 | # Execute the command 134 | echo "Executing the command: $cmd" 135 | exec $cmd 136 | fi 137 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | # 4 | # docker-stacks documentation build configuration file, created by 5 | # sphinx-quickstart on Fri Dec 29 20:32:10 2017. 6 | # 7 | # This file is execfile()d with the current directory set to its 8 | # containing dir. 9 | # 10 | # Note that not all possible configuration values are present in this 11 | # autogenerated file. 12 | # 13 | # All configuration values have a default; values that are commented out 14 | # serve to show the default. 15 | 16 | # If extensions (or modules to document with autodoc) are in another directory, 17 | # add these directories to sys.path here. If the directory is relative to the 18 | # documentation root, use os.path.abspath to make it absolute, like shown here. 19 | # 20 | # import os 21 | # import sys 22 | # sys.path.insert(0, os.path.abspath('.')) 23 | 24 | # For conversion from markdown to html 25 | import recommonmark.parser 26 | from recommonmark.transform import AutoStructify 27 | 28 | 29 | # -- General configuration ------------------------------------------------ 30 | 31 | # If your documentation needs a minimal Sphinx version, state it here. 32 | # 33 | needs_sphinx = '1.4' 34 | 35 | # Add any Sphinx extension module names here, as strings. They can be 36 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 37 | # ones. 38 | extensions = [ 39 | ] 40 | 41 | # Add any paths that contain templates here, relative to this directory. 42 | templates_path = ['_templates'] 43 | 44 | # The suffix(es) of source filenames. 45 | # You can specify multiple suffix as a list of string: 46 | # 47 | # source_suffix = ['.rst', '.md'] 48 | source_suffix = '.rst' 49 | 50 | # The master toctree document. 51 | master_doc = 'index' 52 | 53 | # General information about the project. 54 | project = 'docker-stacks' 55 | copyright = '2018, Project Jupyter' 56 | author = 'Project Jupyter' 57 | 58 | # The version info for the project you're documenting, acts as replacement for 59 | # |version| and |release|, also used in various other places throughout the 60 | # built documents. 61 | # 62 | # The short X.Y version. 63 | version = 'latest' 64 | # The full version, including alpha/beta/rc tags. 65 | release = 'latest' 66 | 67 | # The language for content autogenerated by Sphinx. Refer to documentation 68 | # for a list of supported languages. 69 | # 70 | # This is also used if you do content translation via gettext catalogs. 71 | # Usually you set "language" from the command line for these cases. 72 | language = None 73 | 74 | # List of patterns, relative to source directory, that match files and 75 | # directories to ignore when looking for source files. 76 | # This patterns also effect to html_static_path and html_extra_path 77 | exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] 78 | 79 | # The name of the Pygments (syntax highlighting) style to use. 80 | pygments_style = 'sphinx' 81 | 82 | # If true, `todo` and `todoList` produce output, else they produce nothing. 
83 | todo_include_todos = False 84 | 85 | 86 | # -- Source ------------------------------------------------------------- 87 | 88 | source_parsers = { 89 | '.md': 'recommonmark.parser.CommonMarkParser', 90 | } 91 | 92 | source_suffix = ['.rst', '.md'] 93 | 94 | # -- Options for HTML output ---------------------------------------------- 95 | 96 | # The theme to use for HTML and HTML Help pages. See the documentation for 97 | # a list of builtin themes. 98 | # 99 | html_theme = 'sphinx_rtd_theme' 100 | 101 | # Theme options are theme-specific and customize the look and feel of a theme 102 | # further. For a list of options available for each theme, see the 103 | # documentation. 104 | # 105 | # html_theme_options = {} 106 | 107 | # Add any paths that contain custom static files (such as style sheets) here, 108 | # relative to this directory. They are copied after the builtin static files, 109 | # so a file named "default.css" will overwrite the builtin "default.css". 110 | html_static_path = ['_static'] 111 | 112 | 113 | # -- Options for HTMLHelp output ------------------------------------------ 114 | 115 | # Output file base name for HTML help builder. 116 | htmlhelp_basename = 'docker-stacksdoc' 117 | 118 | 119 | # -- Options for LaTeX output --------------------------------------------- 120 | 121 | latex_elements = { 122 | # The paper size ('letterpaper' or 'a4paper'). 123 | # 124 | # 'papersize': 'letterpaper', 125 | 126 | # The font size ('10pt', '11pt' or '12pt'). 127 | # 128 | # 'pointsize': '10pt', 129 | 130 | # Additional stuff for the LaTeX preamble. 131 | # 132 | # 'preamble': '', 133 | 134 | # Latex figure (float) alignment 135 | # 136 | # 'figure_align': 'htbp', 137 | } 138 | 139 | # Grouping the document tree into LaTeX files. List of tuples 140 | # (source start file, target name, title, 141 | # author, documentclass [howto, manual, or own class]). 142 | latex_documents = [ 143 | (master_doc, 'docker-stacks.tex', 'docker-stacks Documentation', 144 | 'Project Jupyter', 'manual'), 145 | ] 146 | 147 | 148 | # -- Options for manual page output --------------------------------------- 149 | 150 | # One entry per manual page. List of tuples 151 | # (source start file, name, description, authors, manual section). 152 | man_pages = [ 153 | (master_doc, 'docker-stacks', 'docker-stacks Documentation', 154 | [author], 1) 155 | ] 156 | 157 | 158 | # -- Options for Texinfo output ------------------------------------------- 159 | 160 | # Grouping the document tree into Texinfo files. 
List of tuples 161 | # (source start file, target name, title, author, 162 | # dir menu entry, description, category) 163 | texinfo_documents = [ 164 | (master_doc, 'docker-stacks', 'docker-stacks Documentation', 165 | author, 'docker-stacks', 'One line description of project.', 166 | 'Miscellaneous'), 167 | ] 168 | 169 | 170 | 171 | -------------------------------------------------------------------------------- /docs/images/inherit.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | blockdiag 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | ubuntu@SHA 22 | 23 | base-notebook 24 | 25 | minimal-notebook 26 | 27 | scipy-notebook 28 | 29 | r-notebook 30 | 31 | tensorflow-notebook 32 | 33 | datascience-notebook 34 | 35 | pyspark-notebook 36 | 37 | all-spark-notebook 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | -------------------------------------------------------------------------------- /internal/inherit-diagram.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | blockdiag 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | ubuntu@SHA 22 | 23 | base-notebook 24 | 25 | minimal-notebook 26 | 27 | scipy-notebook 28 | 29 | r-notebook 30 | 31 | tensorflow-notebook 32 | 33 | datascience-notebook 34 | 35 | pyspark-notebook 36 | 37 | all-spark-notebook 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | -------------------------------------------------------------------------------- /examples/openshift/templates.json: -------------------------------------------------------------------------------- 1 | { 2 | "kind": "Template", 3 | "apiVersion": "v1", 4 | "metadata": { 5 | "name": "jupyter-notebook", 6 | "annotations": { 7 | "openshift.io/display-name": "Jupyter Notebook", 8 | "description": "Template for deploying Jupyter Notebook images.", 9 | "iconClass": "icon-python", 10 | "tags": "python,jupyter" 11 | } 12 | }, 13 | "parameters": [ 14 | { 15 | "name": "APPLICATION_NAME", 16 | "value": "notebook", 17 | "required": true 18 | }, 19 | { 20 | "name": "NOTEBOOK_IMAGE", 21 | "value": "jupyter/minimal-notebook:latest", 22 | "required": true 23 | }, 24 | { 25 | "name": "NOTEBOOK_PASSWORD", 26 | "from": "[a-f0-9]{32}", 27 | "generate": "expression" 28 | } 29 | ], 30 | "objects": [ 31 | { 32 | "kind": "ConfigMap", 33 | "apiVersion": "v1", 34 | "metadata": { 35 | "name": "${APPLICATION_NAME}-cfg", 36 | "labels": { 37 | "app": "${APPLICATION_NAME}" 38 | } 39 | }, 40 | "data": { 41 | "jupyter_notebook_config.py": "import os\n\npassword = os.environ.get('JUPYTER_NOTEBOOK_PASSWORD')\n\nif password:\n import notebook.auth\n c.NotebookApp.password = notebook.auth.passwd(password)\n del password\n del os.environ['JUPYTER_NOTEBOOK_PASSWORD']\n\nimage_config_file = '/home/jovyan/.jupyter/jupyter_notebook_config.py'\n\nif os.path.exists(image_config_file):\n with open(image_config_file) as fp:\n exec(compile(fp.read(), image_config_file, 'exec'), globals())\n" 42 | } 43 | }, 44 | { 45 | "kind": "DeploymentConfig", 46 | "apiVersion": "v1", 47 | "metadata": { 48 | "name": "${APPLICATION_NAME}", 49 | "labels": { 50 | "app": "${APPLICATION_NAME}" 51 | } 52 | }, 53 | "spec": { 54 | "strategy": { 55 | "type": "Recreate" 56 | }, 57 | "triggers": [ 58 | { 59 | "type": "ConfigChange" 60 | } 61 | ], 62 | "replicas": 1, 63 | 
"selector": { 64 | "app": "${APPLICATION_NAME}", 65 | "deploymentconfig": "${APPLICATION_NAME}" 66 | }, 67 | "template": { 68 | "metadata": { 69 | "annotations": { 70 | "alpha.image.policy.openshift.io/resolve-names": "*" 71 | }, 72 | "labels": { 73 | "app": "${APPLICATION_NAME}", 74 | "deploymentconfig": "${APPLICATION_NAME}" 75 | } 76 | }, 77 | "spec": { 78 | "containers": [ 79 | { 80 | "name": "jupyter-notebook", 81 | "image": "${NOTEBOOK_IMAGE}", 82 | "command": [ 83 | "start-notebook.sh", 84 | "--config=/etc/jupyter/openshift/jupyter_notebook_config.py", 85 | "--no-browser", 86 | "--ip=0.0.0.0" 87 | ], 88 | 89 | "ports": [ 90 | { 91 | "containerPort": 8888, 92 | "protocol": "TCP" 93 | } 94 | ], 95 | "env": [ 96 | { 97 | "name": "JUPYTER_NOTEBOOK_PASSWORD", 98 | "value": "${NOTEBOOK_PASSWORD}" 99 | } 100 | ], 101 | "volumeMounts": [ 102 | { 103 | "mountPath": "/etc/jupyter/openshift", 104 | "name": "configs" 105 | } 106 | ] 107 | } 108 | ], 109 | "securityContext": { 110 | "supplementalGroups": [ 111 | 100 112 | ] 113 | }, 114 | "volumes": [ 115 | { 116 | "configMap": { 117 | "name": "${APPLICATION_NAME}-cfg" 118 | }, 119 | "name": "configs" 120 | } 121 | ] 122 | } 123 | } 124 | } 125 | }, 126 | { 127 | "kind": "Route", 128 | "apiVersion": "v1", 129 | "metadata": { 130 | "name": "${APPLICATION_NAME}", 131 | "labels": { 132 | "app": "${APPLICATION_NAME}" 133 | } 134 | }, 135 | "spec": { 136 | "host": "", 137 | "to": { 138 | "kind": "Service", 139 | "name": "${APPLICATION_NAME}", 140 | "weight": 100 141 | }, 142 | "port": { 143 | "targetPort": "8888-tcp" 144 | }, 145 | "tls": { 146 | "termination": "edge", 147 | "insecureEdgeTerminationPolicy": "Redirect" 148 | } 149 | } 150 | }, 151 | { 152 | "kind": "Service", 153 | "apiVersion": "v1", 154 | "metadata": { 155 | "name": "${APPLICATION_NAME}", 156 | "labels": { 157 | "app": "${APPLICATION_NAME}" 158 | } 159 | }, 160 | "spec": { 161 | "ports": [ 162 | { 163 | "name": "8888-tcp", 164 | "protocol": "TCP", 165 | "port": 8888, 166 | "targetPort": 8888 167 | } 168 | ], 169 | "selector": { 170 | "app": "${APPLICATION_NAME}", 171 | "deploymentconfig": "${APPLICATION_NAME}" 172 | }, 173 | "type": "ClusterIP" 174 | } 175 | } 176 | ] 177 | } 178 | -------------------------------------------------------------------------------- /examples/docker-compose/README.md: -------------------------------------------------------------------------------- 1 | This example demonstrate how to deploy docker-stack notebook containers to any Docker Machine-controlled host using Docker Compose. 2 | 3 | ## Prerequisites 4 | 5 | * [Docker Engine](https://docs.docker.com/engine/) 1.10.0+ 6 | * [Docker Machine](https://docs.docker.com/machine/) 0.6.0+ 7 | * [Docker Compose](https://docs.docker.com/compose/) 1.6.0+ 8 | 9 | See the [installation instructions](https://docs.docker.com/engine/installation/) for your environment. 10 | 11 | ## Quickstart 12 | 13 | Build and run a `jupyter/minimal-notebook` container on a VirtualBox VM on local desktop. 14 | 15 | ``` 16 | # create a Docker Machine-controlled VirtualBox VM 17 | bin/vbox.sh mymachine 18 | 19 | # activate the docker machine 20 | eval "$(docker-machine env mymachine)" 21 | 22 | # build the notebook image on the machine 23 | notebook/build.sh 24 | 25 | # bring up the notebook container 26 | notebook/up.sh 27 | ``` 28 | 29 | To stop and remove the container: 30 | 31 | ``` 32 | notebook/down.sh 33 | ``` 34 | 35 | 36 | ## FAQ 37 | 38 | ### How do I specify which docker-stack notebook image to deploy? 
39 | 40 | You can customize the docker-stack notebook image to deploy by modifying the `notebook/Dockerfile`. For example, you can build and deploy a `jupyter/all-spark-notebook` by modifying the Dockerfile like so: 41 | 42 | ``` 43 | FROM jupyter/all-spark-notebook:55d5ca6be183 44 | ... 45 | ``` 46 | 47 | Once you modify the Dockerfile, don't forget to rebuild the image. 48 | 49 | ``` 50 | # activate the docker machine 51 | eval "$(docker-machine env mymachine)" 52 | 53 | notebook/build.sh 54 | ``` 55 | 56 | ### Can I run multiple notebook containers on the same VM? 57 | 58 | Yes. Set environment variables to specify unique names and ports when running the `up.sh` command. 59 | 60 | ``` 61 | NAME=my-notebook PORT=9000 notebook/up.sh 62 | NAME=your-notebook PORT=9001 notebook/up.sh 63 | ``` 64 | 65 | To stop and remove the containers: 66 | 67 | ``` 68 | NAME=my-notebook notebook/down.sh 69 | NAME=your-notebook notebook/down.sh 70 | ``` 71 | 72 | ### Where are my notebooks stored? 73 | 74 | The `up.sh` creates a Docker volume named after the notebook container with a `-work` suffix, e.g., `my-notebook-work`. 75 | 76 | 77 | ### Can multiple notebook containers share the same notebook volume? 78 | 79 | Yes. Set the `WORK_VOLUME` environment variable to the same value for each notebook. 80 | 81 | ``` 82 | NAME=my-notebook PORT=9000 WORK_VOLUME=our-work notebook/up.sh 83 | NAME=your-notebook PORT=9001 WORK_VOLUME=our-work notebook/up.sh 84 | ``` 85 | 86 | ### How do I run over HTTPS? 87 | 88 | To run the notebook server with a self-signed certificate, pass the `--secure` option to the `up.sh` script. You must also provide a password, which will be used to secure the notebook server. You can specify the password by setting the `PASSWORD` environment variable, or by passing it to the `up.sh` script. 89 | 90 | ``` 91 | PASSWORD=a_secret notebook/up.sh --secure 92 | 93 | # or 94 | notebook/up.sh --secure --password a_secret 95 | ``` 96 | 97 | ### Can I use Let's Encrypt certificate chains? 98 | 99 | Sure. If you want to secure access to publicly addressable notebook containers, you can generate a free certificate using the [Let's Encrypt](https://letsencrypt.org) service. 100 | 101 | 102 | This example includes the `bin/letsencrypt.sh` script, which runs the `letsencrypt` client to create a full-chain certificate and private key, and stores them in a Docker volume. _Note:_ The script hard codes several `letsencrypt` options, one of which automatically agrees to the Let's Encrypt Terms of Service. 103 | 104 | The following command will create a certificate chain and store it in a Docker volume named `mydomain-secrets`. 105 | 106 | ``` 107 | FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \ 108 | SECRETS_VOLUME=mydomain-secrets \ 109 | bin/letsencrypt.sh 110 | ``` 111 | 112 | Now run `up.sh` with the `--letsencrypt` option. You must also provide the name of the secrets volume and a password. 113 | 114 | ``` 115 | PASSWORD=a_secret SECRETS_VOLUME=mydomain-secrets notebook/up.sh --letsencrypt 116 | 117 | # or 118 | notebook/up.sh --letsencrypt --password a_secret --secrets mydomain-secrets 119 | ``` 120 | 121 | Be aware that Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`. 
122 | 123 | ``` 124 | FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \ 125 | CERT_SERVER=--staging \ 126 | bin/letsencrypt.sh 127 | ``` 128 | 129 | Also, be aware that Let's Encrypt certificates are short lived (90 days). If you need them for a longer period of time, you'll need to manually setup a cron job to run the renewal steps. (You can reuse the command above.) 130 | 131 | ### Can I deploy to any Docker Machine host? 132 | 133 | Yes, you should be able to deploy to any Docker Machine-controlled host. To make it easier to get up and running, this example includes scripts to provision Docker Machines to VirtualBox and IBM SoftLayer, but more scripts are welcome! 134 | 135 | To create a Docker machine using a VirtualBox VM on local desktop: 136 | 137 | ``` 138 | bin/vbox.sh mymachine 139 | ``` 140 | 141 | To create a Docker machine using a virtual device on IBM SoftLayer: 142 | 143 | ``` 144 | export SOFTLAYER_USER=my_softlayer_username 145 | export SOFTLAYER_API_KEY=my_softlayer_api_key 146 | export SOFTLAYER_DOMAIN=my.domain 147 | 148 | # Create virtual device 149 | bin/softlayer.sh myhost 150 | 151 | # Add DNS entry (SoftLayer DNS zone must exist for SOFTLAYER_DOMAIN) 152 | bin/sl-dns.sh myhost 153 | ``` 154 | 155 | 156 | ## Troubleshooting 157 | 158 | ### Unable to connect to VirtualBox VM on Mac OS X when using Cisco VPN client. 159 | 160 | The Cisco VPN client blocks access to IP addresses that it does not know about, and may block access to a new VM if it is created while the Cisco VPN client is running. 161 | 162 | 1. Stop Cisco VPN client. (It does not allow modifications to route table). 163 | 2. Run `ifconfig` to list `vboxnet` virtual network devices. 164 | 3. Run `sudo route -nv add -net 192.168.99 -interface vboxnetX`, where X is the number of the virtual device assigned to the VirtualBox VM. 165 | 4. Start Cisco VPN client. 166 | -------------------------------------------------------------------------------- /docs/using/common.md: -------------------------------------------------------------------------------- 1 | # Common Features 2 | 3 | A container launched from any Jupyter Docker Stacks image runs a Jupyter Notebook server by default. The container does so by executing a `start-notebook.sh` script. This script configures the internal container environment and then runs `jupyter notebook`, passing it any command line arguments received. 4 | 5 | This page describes the options supported by the startup script as well as how to bypass it to run alternative commands. 6 | 7 | ## Notebook Options 8 | 9 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) to the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, you can run the following: 10 | 11 | ``` 12 | docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 13 | ``` 14 | 15 | For example, to set the base URL of the notebook server, you can run the following: 16 | 17 | ``` 18 | docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.base_url=/some/path 19 | ``` 20 | 21 | ## Docker Options 22 | 23 | You may instruct the `start-notebook.sh` script to customize the container environment before launching 24 | the notebook server. You do so by passing arguments to the `docker run` command. 
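For instance, several of the options documented in the list below can be combined in a single command. The following is only a sketch (the host path is an arbitrary illustration):

```
docker run -d -p 8888:8888 \
 --user root \
 -e NB_UID=1005 -e NB_GID=1005 \
 -e CHOWN_HOME=yes \
 -v /some/host/folder/for/work:/home/jovyan/work \
 jupyter/base-notebook
```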
25 | 26 | * `-e NB_USER=jovyan` - Instructs the startup script to change the default container username from `jovyan` to the provided value. Causes the script to rename the `jovyan` user home folder. 27 | * `-e NB_UID=1000` - Instructs the startup script to switch the numeric user ID of `$NB_USER` to the given value. This feature is useful when mounting host volumes with specific owner permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su $NB_USER` after adjusting the user ID.) 28 | * `-e NB_GID=100` - Instructs the startup script to change the numeric group ID of `$NB_USER` to the given value. This feature is useful when mounting host volumes with specific group permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su $NB_USER` after adjusting the group ID.) 29 | * `-e CHOWN_HOME=yes` - Instructs the startup script to recursively change the `$NB_USER` home directory owner and group to the current value of `$NB_UID` and `$NB_GID`. This change will take effect even if the user home directory is mounted from the host using `-v` as described below. 30 | * `-e GRANT_SUDO=yes` - Instructs the startup script to grant the `NB_USER` user passwordless `sudo` capability. You do **not** need this option to allow the user to `conda` or `pip` install additional packages. This option is useful, however, when you wish to give `$NB_USER` the ability to install OS packages with `apt` or modify other root-owned files in the container. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su $NB_USER` after adding `$NB_USER` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 31 | * `-e GEN_CERT=yes` - Instructs the startup script to generate a self-signed SSL certificate and configure Jupyter Notebook to use it to accept encrypted HTTPS connections. 32 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as a folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 33 | * `--user 5000 --group-add users` - Launches the container with a specific user ID and adds that user to the `users` group so that it can modify files in the default home directory and `/opt/conda`. You can use these arguments as alternatives to setting `$NB_UID` and `$NB_GID`. 34 | 35 | ## SSL Certificates 36 | 37 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt` and use them, you might run the following: 38 | 39 | ``` 40 | docker run -d -p 8888:8888 \ 41 | -v /some/host/folder:/etc/ssl/notebook \ 42 | jupyter/base-notebook start-notebook.sh \ 43 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key \ 44 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 45 | ``` 46 | 47 | Alternatively, you may mount a single PEM file containing both the key and certificate.
For example: 48 | 49 | ``` 50 | docker run -d -p 8888:8888 \ 51 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 52 | jupyter/base-notebook start-notebook.sh \ 53 | --NotebookApp.certfile=/etc/ssl/notebook.pem 54 | ``` 55 | 56 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 57 | 58 | For additional information about using SSL, see the following: 59 | 60 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 61 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 62 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#securing-a-notebook-server) for best practices about securing a public notebook server in general. 63 | 64 | ## Alternative Commands 65 | 66 | ### start.sh 67 | 68 | The `start-notebook.sh` script actually inherits most of its option handling capability from a more generic `start.sh` script. The `start.sh` script supports all of the features described above, but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 69 | 70 | ``` 71 | docker run -it --rm jupyter/base-notebook start.sh ipython 72 | ``` 73 | 74 | Or, to run JupyterLab instead of the classic notebook, run the following: 75 | 76 | ``` 77 | docker run -it --rm -p 8888:8888 jupyter/base-notebook start.sh jupyter lab 78 | ``` 79 | 80 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 81 | 82 | ### Others 83 | 84 | You can bypass the provided scripts and specify your an arbitrary start command. If you do, keep in mind that features supported by the `start.sh` script and its kin will not function (e.g., `GRANT_SUDO`). 85 | 86 | ## Conda Environments 87 | 88 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. The `/opt/conda/bin` directory is part of the default `jovyan` user's `$PATH`. That directory is also whitelisted for use in `sudo` commands by the `start.sh` script. 89 | 90 | The `jovyan` user has full read/write access to the `/opt/conda` directory. You can use either `conda` or `pip` to install new packages without any additional permissions. 91 | 92 | ``` 93 | # install a package into the default (python 3.x) environment 94 | pip install some-package 95 | conda install some-package 96 | ``` 97 | 98 | -------------------------------------------------------------------------------- /docs/using/specifics.md: -------------------------------------------------------------------------------- 1 | # Image Specifics 2 | 3 | This page provides details about features specific to one or more images. 4 | 5 | ## Apache Spark 6 | 7 | The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images support the use of [Apache Spark](https://spark.apache.org/) in Python, R, and Scala notebooks. The following sections provide some examples of how to get started using them. 
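Before connecting to a cluster, it can help to confirm which Spark version ships in the image you plan to use (the versions must match, as noted in the standalone-mode section below). A minimal check, assuming `pyspark` is importable from the default Python environment in these images:

```
docker run -it --rm jupyter/pyspark-notebook start.sh \
  python -c "import pyspark; print(pyspark.__version__)"
```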
8 | 9 | ### Using Spark Local Mode 10 | 11 | Spark local mode is useful for experimentation on small data when you do not have a Spark cluster available. 12 | 13 | #### In a Python Notebook 14 | 15 | ```python 16 | import pyspark 17 | sc = pyspark.SparkContext('local[*]') 18 | 19 | # do something to prove it works 20 | rdd = sc.parallelize(range(1000)) 21 | rdd.takeSample(False, 5) 22 | ``` 23 | 24 | #### In a R Notebook 25 | 26 | ```r 27 | library(SparkR) 28 | 29 | as <- sparkR.session("local[*]") 30 | 31 | # do something to prove it works 32 | df <- as.DataFrame(iris) 33 | head(filter(df, df$Petal_Width > 0.2)) 34 | ``` 35 | 36 | #### In a Spylon Kernel Scala Notebook 37 | 38 | Spylon kernel instantiates a `SparkContext` for you in variable `sc` after you configure Spark options in a `%%init_spark` magic cell. 39 | 40 | ```python 41 | %%init_spark 42 | # Configure Spark to use a local master 43 | launcher.master = "local[*]" 44 | ``` 45 | 46 | ```scala 47 | // Now run Scala code that uses the initialized SparkContext in sc 48 | val rdd = sc.parallelize(0 to 999) 49 | rdd.takeSample(false, 5) 50 | ``` 51 | 52 | #### In an Apache Toree Scala Notebook 53 | 54 | Apache Toree instantiates a local `SparkContext` for you in variable `sc` when the kernel starts. 55 | 56 | ```scala 57 | val rdd = sc.parallelize(0 to 999) 58 | rdd.takeSample(false, 5) 59 | ``` 60 | 61 | ### Connecting to a Spark Cluster on Mesos 62 | 63 | This configuration allows your compute cluster to scale with your data. 64 | 65 | 0. [Deploy Spark on Mesos](http://spark.apache.org/docs/latest/running-on-mesos.html). 66 | 1. Configure each slave with [the `--no-switch_user` flag](https://open.mesosphere.com/reference/mesos-slave/) or create the `$NB_USER` account on every slave node. 67 | 2. Run the Docker container with `--net=host` in a location that is network addressable by all of your Spark workers. (This is a [Spark networking requirement](http://spark.apache.org/docs/latest/cluster-overview.html#components).) 68 | * NOTE: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See https://github.com/jupyter/docker-stacks/issues/64 for details. 69 | 3. Follow the language specific instructions below. 
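For example, step 2 and its note above might translate into a `docker run` invocation like the following sketch (the image choice is an assumption; any of the Spark-enabled images works the same way):

```
docker run -d --net=host --pid=host -e TINI_SUBREAPER=true \
  jupyter/all-spark-notebook
```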
70 | 71 | #### In a Python Notebook 72 | 73 | ```python 74 | import os 75 | # make sure pyspark tells workers to use python3 not 2 if both are installed 76 | os.environ['PYSPARK_PYTHON'] = '/usr/bin/python3' 77 | 78 | import pyspark 79 | conf = pyspark.SparkConf() 80 | 81 | # point to mesos master or zookeeper entry (e.g., zk://10.10.10.10:2181/mesos) 82 | conf.setMaster("mesos://10.10.10.10:5050") 83 | # point to spark binary package in HDFS or on local filesystem on all slave 84 | # nodes (e.g., file:///opt/spark/spark-2.2.0-bin-hadoop2.7.tgz) 85 | conf.set("spark.executor.uri", "hdfs://10.10.10.10/spark/spark-2.2.0-bin-hadoop2.7.tgz") 86 | # set other options as desired 87 | conf.set("spark.executor.memory", "8g") 88 | conf.set("spark.core.connection.ack.wait.timeout", "1200") 89 | 90 | # create the context 91 | sc = pyspark.SparkContext(conf=conf) 92 | 93 | # do something to prove it works 94 | rdd = sc.parallelize(range(100000000)) 95 | rdd.sumApprox(3) 96 | ``` 97 | 98 | #### In a R Notebook 99 | 100 | ```r 101 | library(SparkR) 102 | 103 | # Point to mesos master or zookeeper entry (e.g., zk://10.10.10.10:2181/mesos) 104 | # Point to spark binary package in HDFS or on local filesystem on all slave 105 | # nodes (e.g., file:///opt/spark/spark-2.2.0-bin-hadoop2.7.tgz) in sparkEnvir 106 | # Set other options in sparkEnvir 107 | sc <- sparkR.session("mesos://10.10.10.10:5050", sparkEnvir=list( 108 | spark.executor.uri="hdfs://10.10.10.10/spark/spark-2.2.0-bin-hadoop2.7.tgz", 109 | spark.executor.memory="8g" 110 | ) 111 | ) 112 | 113 | # do something to prove it works 114 | data(iris) 115 | df <- as.DataFrame(iris) 116 | head(filter(df, df$Petal_Width > 0.2)) 117 | ``` 118 | 119 | #### In a Spylon Kernel Scala Notebook 120 | 121 | ```python 122 | %%init_spark 123 | # Configure the location of the mesos master and spark distribution on HDFS 124 | launcher.master = "mesos://10.10.10.10:5050" 125 | launcher.conf.spark.executor.uri=hdfs://10.10.10.10/spark/spark-2.2.0-bin-hadoop2.7.tgz 126 | ``` 127 | 128 | ```scala 129 | // Now run Scala code that uses the initialized SparkContext in sc 130 | val rdd = sc.parallelize(0 to 999) 131 | rdd.takeSample(false, 5) 132 | ``` 133 | 134 | #### In an Apache Toree Scala Notebook 135 | 136 | The Apache Toree kernel automatically creates a `SparkContext` when it starts based on configuration information from its command line arguments and environment variables. You can pass information about your Mesos cluster via the `SPARK_OPTS` environment variable when you spawn a container. 137 | 138 | For instance, to pass information about a Mesos master, Spark binary location in HDFS, and an executor options, you could start the container like so: 139 | 140 | ``` 141 | docker run -d -p 8888:8888 -e SPARK_OPTS='--master=mesos://10.10.10.10:5050 \ 142 | --spark.executor.uri=hdfs://10.10.10.10/spark/spark-2.2.0-bin-hadoop2.7.tgz \ 143 | --spark.executor.memory=8g' jupyter/all-spark-notebook 144 | ``` 145 | 146 | Note that this is the same information expressed in a notebook in the Python case above. 
Once the kernel spec has your cluster information, you can test your cluster in an Apache Toree notebook like so: 147 | 148 | ```scala 149 | // should print the value of --master in the kernel spec 150 | println(sc.master) 151 | 152 | // do something to prove it works 153 | val rdd = sc.parallelize(0 to 99999999) 154 | rdd.sum() 155 | ``` 156 | 157 | ### Connecting to a Spark Cluster in Standalone Mode 158 | 159 | Connection to Spark Cluster on Standalone Mode requires the following set of steps: 160 | 161 | 0. Verify that the docker image (check the Dockerfile) and the Spark Cluster which is being deployed, run the same version of Spark. 162 | 1. [Deploy Spark in Standalone Mode](http://spark.apache.org/docs/latest/spark-standalone.html). 163 | 2. Run the Docker container with `--net=host` in a location that is network addressable by all of your Spark workers. (This is a [Spark networking requirement](http://spark.apache.org/docs/latest/cluster-overview.html#components).) 164 | * NOTE: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See https://github.com/jupyter/docker-stacks/issues/64 for details. 165 | 3. The language specific instructions are almost same as mentioned above for Mesos, only the master url would now be something like spark://10.10.10.10:7077 166 | 167 | ## Tensorflow 168 | 169 | The `jupyter/tensorflow-notebook` image supports the use of [Tensorflow](https://www.tensorflow.org/) in single machine or distributed mode. 170 | 171 | ### Single Machine Mode 172 | 173 | ```python 174 | import tensorflow as tf 175 | 176 | hello = tf.Variable('Hello World!') 177 | 178 | sess = tf.Session() 179 | init = tf.global_variables_initializer() 180 | 181 | sess.run(init) 182 | sess.run(hello) 183 | ``` 184 | 185 | ### Distributed Mode 186 | 187 | ```python 188 | import tensorflow as tf 189 | 190 | hello = tf.Variable('Hello Distributed World!') 191 | 192 | server = tf.train.Server.create_local_server() 193 | sess = tf.Session(server.target) 194 | init = tf.global_variables_initializer() 195 | 196 | sess.run(init) 197 | sess.run(hello) 198 | ``` -------------------------------------------------------------------------------- /r-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/r-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/r-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/r-notebook.svg)](https://microbadger.com/images/jupyter/r-notebook "jupyter/r-notebook image metadata") 2 | 3 | # Jupyter Notebook R Stack 4 | 5 | ## What it Gives You 6 | 7 | * Jupyter Notebook 5.2.x 8 | * Conda R v3.3.x and channel 9 | * plyr, devtools, shiny, rmarkdown, forecast, rsqlite, reshape2, nycflights13, caret, rcurl, and randomforest pre-installed 10 | * The [tidyverse](https://github.com/tidyverse/tidyverse) R packages are also installed, including ggplot2, dplyr, tidyr, readr, purrr, tibble, stringr, lubridate, and broom 11 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 12 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command 13 | * `/usr/local/bin/start-notebook.d` directory for custom init scripts that you can add in derived images 14 | * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) 
script useful for running a single-user instance of the Notebook server, as required by JupyterHub 15 | * A [start.sh](../base-notebook/start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 16 | * Options for a self-signed HTTPS certificate and passwordless `sudo` 17 | 18 | ## Basic Use 19 | 20 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 21 | 22 | ``` 23 | docker run -it --rm -p 8888:8888 jupyter/r-notebook 24 | ``` 25 | 26 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 27 | 28 | ## Notebook Options 29 | 30 | The Docker container executes a [`start-notebook.sh` script](../base-notebook/start-notebook.sh) script by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 31 | 32 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 33 | 34 | ``` 35 | docker run -d -p 8888:8888 jupyter/r-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 36 | ``` 37 | 38 | For example, to set the base URL of the notebook server, run the following: 39 | 40 | ``` 41 | docker run -d -p 8888:8888 jupyter/r-notebook start-notebook.sh --NotebookApp.base_url=/some/path 42 | ``` 43 | 44 | For example, to disable all authentication mechanisms (not a recommended practice): 45 | 46 | ``` 47 | docker run -d -p 8888:8888 jupyter/r-notebook start-notebook.sh --NotebookApp.token='' 48 | ``` 49 | 50 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 51 | 52 | ## Docker Options 53 | 54 | You may customize the execution of the Docker container and the command it is running with the following optional arguments. 55 | 56 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 57 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 58 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 59 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) 
**You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 60 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 61 | 62 | ## SSL Certificates 63 | 64 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 65 | 66 | ``` 67 | docker run -d -p 8888:8888 \ 68 | -v /some/host/folder:/etc/ssl/notebook \ 69 | jupyter/r-notebook start-notebook.sh \ 70 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key 71 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 72 | ``` 73 | 74 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 75 | 76 | ``` 77 | docker run -d -p 8888:8888 \ 78 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 79 | jupyter/r-notebook start-notebook.sh \ 80 | --NotebookApp.certfile=/etc/ssl/notebook.pem 81 | ``` 82 | 83 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 84 | 85 | For additional information about using SSL, see the following: 86 | 87 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 88 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 89 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 90 | 91 | 92 | ## Alternative Commands 93 | 94 | 95 | ### start.sh 96 | 97 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 98 | 99 | ``` 100 | docker run -it --rm jupyter/r-notebook start.sh ipython 101 | ``` 102 | 103 | Or, to run JupyterLab instead of the classic notebook, run the following: 104 | 105 | ``` 106 | docker run -it --rm -p 8888:8888 jupyter/r-notebook start.sh jupyter lab 107 | ``` 108 | 109 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 110 | 111 | ### Others 112 | 113 | You can bypass the provided scripts and specify your an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 
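For instance, the following sketch drops you into a plain interactive shell in the container instead of starting a notebook server (`bash` here is just an illustrative command; any program available in the image works the same way):

```
docker run -it --rm jupyter/r-notebook bash
```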
114 | -------------------------------------------------------------------------------- /docs/using/running.md: -------------------------------------------------------------------------------- 1 | # Running a Container 2 | 3 | Using one of the Jupyter Docker Stacks requires two choices: 4 | 5 | 1. Which Docker image you wish to use 6 | 2. How you wish to start Docker containers from that image 7 | 8 | This section provides details about the second. 9 | 10 | ## Using the Docker CLI 11 | 12 | You can launch a local Docker container from the Jupyter Docker Stacks using the [Docker command line interface](https://docs.docker.com/engine/reference/commandline/cli/). There are numerous ways to configure containers using the CLI. The following are a couple common patterns. 13 | 14 | **Example 1** This command pulls the `jupyter/scipy-notebook` image tagged `2c80cf3537ca` from Docker Hub if it is not already present on the local host. It then starts a container running a Jupyter Notebook server and exposes the server on host port 8888. The server logs appear in the terminal and include a URL to the notebook server. 15 | 16 | ``` 17 | docker run -p 8888:8888 jupyter/scipy-notebook:2c80cf3537ca 18 | 19 | Executing the command: jupyter notebook 20 | [I 15:33:00.567 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret 21 | [W 15:33:01.084 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended. 22 | [I 15:33:01.150 NotebookApp] JupyterLab alpha preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab 23 | [I 15:33:01.150 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab 24 | [I 15:33:01.155 NotebookApp] Serving notebooks from local directory: /home/jovyan 25 | [I 15:33:01.156 NotebookApp] 0 active kernels 26 | [I 15:33:01.156 NotebookApp] The Jupyter Notebook is running at: 27 | [I 15:33:01.157 NotebookApp] http://[all ip addresses on your system]:8888/?token=112bb073331f1460b73768c76dffb2f87ac1d4ca7870d46a 28 | [I 15:33:01.157 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). 29 | [C 15:33:01.160 NotebookApp] 30 | 31 | Copy/paste this URL into your browser when you connect for the first time, 32 | to login with a token: 33 | http://localhost:8888/?token=112bb073331f1460b73768c76dffb2f87ac1d4ca7870d46a 34 | ``` 35 | 36 | Pressing `Ctrl-C` shuts down the notebook server but leaves the container intact on disk for later restart or permanent deletion using commands like the following: 37 | 38 | ``` 39 | # list containers 40 | docker ps -a 41 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 42 | d67fe77f1a84 jupyter/base-notebook "tini -- start-noteb…" 44 seconds ago Exited (0) 39 seconds ago cocky_mirzakhani 43 | 44 | # start the stopped container 45 | docker start -a d67fe77f1a84 46 | Executing the command: jupyter notebook 47 | [W 16:45:02.020 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended. 48 | ... 49 | 50 | # remove the stopped container 51 | docker rm d67fe77f1a84 52 | d67fe77f1a84 53 | ``` 54 | 55 | **Example 2** This command pulls the `jupyter/r-notebook` image tagged `e5c5a7d3e52d` from Docker Hub if it is not already present on the local host. It then starts a container running a Jupyter Notebook server and exposes the server on host port 10000. 
The server logs appear in the terminal and include a URL to the notebook server, but with the internal container port (8888) instead of the correct host port (10000). 56 | 57 | ``` 58 | docker run --rm -p 10000:8888 -v "$PWD":/home/jovyan/work jupyter/r-notebook:e5c5a7d3e52d 59 | 60 | Executing the command: jupyter notebook 61 | [I 19:31:09.573 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret 62 | [W 19:31:11.930 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended. 63 | [I 19:31:12.085 NotebookApp] JupyterLab alpha preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab 64 | [I 19:31:12.086 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab 65 | [I 19:31:12.117 NotebookApp] Serving notebooks from local directory: /home/jovyan 66 | [I 19:31:12.117 NotebookApp] 0 active kernels 67 | [I 19:31:12.118 NotebookApp] The Jupyter Notebook is running at: 68 | [I 19:31:12.119 NotebookApp] http://[all ip addresses on your system]:8888/?token=3b8dce890cb65570fb0d9c4a41ae067f7604873bd604f5ac 69 | [I 19:31:12.120 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). 70 | [C 19:31:12.122 NotebookApp] 71 | 72 | Copy/paste this URL into your browser when you connect for the first time, 73 | to login with a token: 74 | http://localhost:8888/?token=3b8dce890cb65570fb0d9c4a41ae067f7604873bd604f5ac 75 | ``` 76 | 77 | Pressing `Ctrl-C` shuts down the notebook server and immediately destroys the Docker container. Files written to `~/work` in the container remain intact on the host. Any other changes made in the container are lost. 78 | 79 | **Example 3** This command pulls the `jupyter/all-spark-notebook` image currently tagged `latest` from Docker Hub if an image tagged `latest` is not already present on the local host. It then starts a container named `notebook` running a Jupyter Notebook server and exposes the server on a randomly selected port. 80 | 81 | ``` 82 | docker run -d -P --name notebook jupyter/all-spark-notebook 83 | ``` 84 | 85 | The assigned port and notebook server token are visible using other Docker commands. 86 | 87 | ``` 88 | # get the random host port assigned to the container port 8888 89 | docker port notebook 8888 90 | 0.0.0.0:32769 91 | 92 | # get the notebook token from the logs 93 | docker logs --tail 3 notebook 94 | Copy/paste this URL into your browser when you connect for the first time, 95 | to login with a token: 96 | http://localhost:8888/?token=15914ca95f495075c0aa7d0e060f1a78b6d94f70ea373b00 97 | ``` 98 | 99 | Putting these together, the URL to visit on the host machine to access the server in this case is http://localhost:32769?token=15914ca95f495075c0aa7d0e060f1a78b6d94f70ea373b00. 100 | 101 | The container runs in the background until stopped and/or removed by additional Docker commands. 102 | 103 | ``` 104 | # stop the container 105 | docker stop notebook 106 | notebook 107 | 108 | # remove the container permanently 109 | docker rm notebook 110 | notebook 111 | ``` 112 | 113 | ## Using Binder 114 | 115 | [Binder](https://mybinder.org/) is a service that allows you to create and share custom computing environments for projects in version control. You can use any of the Jupyter Docker Stacks images as a basis for a Binder-compatible Dockerfile.
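As a rough sketch, such a Dockerfile can be as small as a pinned `FROM` line plus whatever extra packages your project needs (the tag and package name below are placeholders):

```
FROM jupyter/scipy-notebook:2c80cf3537ca

RUN pip install --no-cache-dir some-package
```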
See the [docker-stacks example](https://mybinder.readthedocs.io/en/latest/sample_repos.html#using-a-docker-image-from-the-jupyter-docker-stacks-repository) and [Using a Dockerfile](https://mybinder.readthedocs.io/en/latest/dockerfile.html) sections in the [Binder documentation](https://mybinder.readthedocs.io/en/latest/index.html) for instructions. 116 | 117 | ## Using JupyterHub 118 | 119 | You can configure JupyterHub to launch Docker containers from the Jupyter Docker Stacks images. If you've been following the [Zero to JupyterHub with Kubernetes](http://zero-to-jupyterhub.readthedocs.io/en/latest/) guide, see the [Use an existing Docker image](http://zero-to-jupyterhub.readthedocs.io/en/latest/user-environment.html#use-an-existing-docker-image) section for details. If you have a custom JupyterHub deployment, see the [Picking or building a Docker image](https://github.com/jupyterhub/dockerspawner#picking-or-building-a-docker-image) instructions for [dockerspawner](https://github.com/jupyterhub/dockerspawner) instead. 120 | 121 | ## Using Other Tools and Services 122 | 123 | You can use the Jupyter Docker Stacks with any Docker-compatible technology (e.g., [Docker Compose](https://docs.docker.com/compose/), [docker-py](https://github.com/docker/docker-py), your favorite cloud container service). See the documentation of the tool, library, or service for details about how to reference, configure, and launch containers from these images. -------------------------------------------------------------------------------- /tensorflow-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/tensorflow-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/tensorflow-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/tensorflow-notebook.svg)](https://microbadger.com/images/jupyter/tensorflow-notebook "jupyter/tensorflow-notebook image metadata") 2 | 3 | # Jupyter Notebook Scientific Python Stack + Tensorflow 4 | 5 | ## What it Gives You 6 | 7 | * Everything in [Scipy Notebook](https://github.com/jupyter/docker-stacks/tree/master/scipy-notebook) 8 | * Tensorflow and Keras for Python 3.x (without GPU support) 9 | 10 | ## Basic Use 11 | 12 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 13 | 14 | ``` 15 | docker run -it --rm -p 8888:8888 jupyter/tensorflow-notebook 16 | ``` 17 | 18 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 19 | 20 | ## Tensorflow Machine Mode 21 | 22 | Tensorflow can run in either single-machine or distributed mode.
23 | 24 | Single machine mode: 25 | 26 | ``` 27 | import tensorflow as tf 28 | 29 | hello = tf.Variable('Hello World!') 30 | 31 | sess = tf.Session() 32 | init = tf.global_variables_initializer() 33 | 34 | sess.run(init) 35 | sess.run(hello) 36 | ``` 37 | 38 | Distributed mode: 39 | 40 | ``` 41 | import tensorflow as tf 42 | 43 | hello = tf.Variable('Hello Distributed World!') 44 | 45 | server = tf.train.Server.create_local_server() 46 | sess = tf.Session(server.target) 47 | init = tf.global_variables_initializer() 48 | 49 | sess.run(init) 50 | sess.run(hello) 51 | ``` 52 | 53 | ## Notebook Options 54 | 55 | The Docker container executes a [`start-notebook.sh` script](../base-notebook/start-notebook.sh) script by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 56 | 57 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 58 | 59 | ``` 60 | docker run -d -p 8888:8888 jupyter/tensorflow-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 61 | ``` 62 | 63 | For example, to set the base URL of the notebook server, run the following: 64 | 65 | ``` 66 | docker run -d -p 8888:8888 jupyter/tensorflow-notebook start-notebook.sh --NotebookApp.base_url=/some/path 67 | ``` 68 | 69 | For example, to disable all authentication mechanisms (not a recommended practice): 70 | 71 | ``` 72 | docker run -d -p 8888:8888 jupyter/tensorflow-notebook start-notebook.sh --NotebookApp.token='' 73 | ``` 74 | 75 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 76 | 77 | ## Docker Options 78 | 79 | You may customize the execution of the Docker container and the command it is running with the following optional arguments. 80 | 81 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 82 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 83 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 84 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 85 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. 
**You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 86 | 87 | ## SSL Certificates 88 | 89 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 90 | 91 | ``` 92 | docker run -d -p 8888:8888 \ 93 | -v /some/host/folder:/etc/ssl/notebook \ 94 | jupyter/tensorflow-notebook start-notebook.sh \ 95 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key 96 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 97 | ``` 98 | 99 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 100 | 101 | ``` 102 | docker run -d -p 8888:8888 \ 103 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 104 | jupyter/tensorflow-notebook start-notebook.sh \ 105 | --NotebookApp.certfile=/etc/ssl/notebook.pem 106 | ``` 107 | 108 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 109 | 110 | For additional information about using SSL, see the following: 111 | 112 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 113 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 114 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 115 | 116 | 117 | ## Conda Environments 118 | 119 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 120 | 121 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in both environments. For convenience, you can install packages into either environment regardless of what environment is currently active using commands like the following: 122 | 123 | ``` 124 | # install a package into the default (python 3.x) environment 125 | pip install some-package 126 | conda install some-package 127 | ``` 128 | 129 | 130 | ## Alternative Commands 131 | 132 | ### start.sh 133 | 134 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 135 | 136 | ``` 137 | docker run -it --rm jupyter/tensorflow-notebook start.sh ipython 138 | ``` 139 | 140 | Or, to run JupyterLab instead of the classic notebook, run the following: 141 | 142 | ``` 143 | docker run -it --rm -p 8888:8888 jupyter/tensorflow-notebook start.sh jupyter lab 144 | ``` 145 | 146 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 147 | 148 | ### Others 149 | 150 | You can bypass the provided scripts and specify your an arbitrary start command. 
If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 151 | -------------------------------------------------------------------------------- /minimal-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/minimal-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/minimal-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/minimal-notebook.svg)](https://microbadger.com/images/jupyter/minimal-notebook "jupyter/minimal-notebook image metadata") 2 | 3 | # Minimal Jupyter Notebook Stack 4 | 5 | Small image for working in the notebook and installing your own libraries 6 | 7 | ## What it Gives You 8 | 9 | * Fully-functional Jupyter Notebook 5.2.x 10 | * Miniconda Python 3.x 11 | * No preinstalled scientific computing packages 12 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 13 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command 14 | * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 15 | * `/usr/local/bin/start-notebook.d` directory for custom init scripts that you can add in derived images 16 | * A [start.sh](../base-notebook/start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 17 | * Options for a self-signed HTTPS certificate and passwordless `sudo` 18 | 19 | ## Basic Use 20 | 21 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 22 | 23 | ``` 24 | docker run -it --rm -p 8888:8888 jupyter/minimal-notebook 25 | ``` 26 | 27 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 28 | 29 | ## Notebook Options 30 | 31 | The Docker container executes a [`start-notebook.sh` script](../base-notebook/start-notebook.sh) script by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 32 | 33 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. 
For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 34 | 35 | ``` 36 | docker run -d -p 8888:8888 jupyter/minimal-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 37 | ``` 38 | 39 | For example, to set the base URL of the notebook server, run the following: 40 | 41 | ``` 42 | docker run -d -p 8888:8888 jupyter/minimal-notebook start-notebook.sh --NotebookApp.base_url=/some/path 43 | ``` 44 | 45 | For example, to disable all authentication mechanisms (not a recommended practice): 46 | 47 | ``` 48 | docker run -d -p 8888:8888 jupyter/minimal-notebook start-notebook.sh --NotebookApp.token='' 49 | ``` 50 | 51 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 52 | 53 | 54 | ## Docker Options 55 | 56 | You may customize the execution of the Docker container and the Notebook server it contains with the following optional arguments. 57 | 58 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 59 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 60 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 61 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 62 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 63 | 64 | ## SSL Certificates 65 | 66 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 67 | 68 | ``` 69 | docker run -d -p 8888:8888 \ 70 | -v /some/host/folder:/etc/ssl/notebook \ 71 | jupyter/minimal-notebook start-notebook.sh \ 72 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key 73 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 74 | ``` 75 | 76 | Alternatively, you may mount a single PEM file containing both the key and certificate. 
For example: 77 | 78 | ``` 79 | docker run -d -p 8888:8888 \ 80 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 81 | jupyter/minimal-notebook start-notebook.sh \ 82 | --NotebookApp.certfile=/etc/ssl/notebook.pem 83 | ``` 84 | 85 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 86 | 87 | For additional information about using SSL, see the following: 88 | 89 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 90 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 91 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 92 | 93 | 94 | ## Conda Environments 95 | 96 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 97 | 98 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in both environments. For convenience, you can install packages into either environment regardless of what environment is currently active using commands like the following: 99 | 100 | ``` 101 | # install a package into the default (python 3.x) environment 102 | pip install some-package 103 | conda install some-package 104 | ``` 105 | 106 | 107 | ## Alternative Commands 108 | 109 | 110 | ### start.sh 111 | 112 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 113 | 114 | ``` 115 | docker run -it --rm jupyter/minimal-notebook start.sh ipython 116 | ``` 117 | 118 | Or, to run JupyterLab instead of the classic notebook, run the following: 119 | 120 | ``` 121 | docker run -it --rm -p 8888:8888 jupyter/minimal-notebook start.sh jupyter lab 122 | ``` 123 | 124 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 125 | 126 | ### Others 127 | 128 | You can bypass the provided scripts and specify your an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 
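Putting several of the options documented above together, the following sketch runs the container as root so that the UID change and the `sudo` grant can take effect, and mounts a host folder for notebooks (the path and UID are placeholders):

```
docker run -d -p 8888:8888 --user root \
  -e NB_UID=1100 -e GRANT_SUDO=yes \
  -v /some/host/folder/for/work:/home/jovyan/work \
  jupyter/minimal-notebook
```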
129 | -------------------------------------------------------------------------------- /scipy-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/scipy-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/scipy-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/scipy-notebook.svg)](https://microbadger.com/images/jupyter/scipy-notebook "jupyter/scipy-notebook image metadata") 2 | 3 | # Jupyter Notebook Scientific Python Stack 4 | 5 | ## What it Gives You 6 | 7 | * Jupyter Notebook 5.2.x 8 | * Conda Python 3.x environment 9 | * pandas, matplotlib, scipy, seaborn, scikit-learn, scikit-image, sympy, cython, patsy, statsmodel, cloudpickle, dill, numba, bokeh, vincent, beautifulsoup, xlrd pre-installed 10 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 11 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command 12 | * `/usr/local/bin/start-notebook.d` directory for custom init scripts that you can add in derived images 13 | * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 14 | * A [start.sh](../base-notebook/start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 15 | * Options for HTTPS, password auth, and passwordless `sudo` 16 | 17 | ## Basic Use 18 | 19 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 20 | 21 | ``` 22 | docker run -it --rm -p 8888:8888 jupyter/scipy-notebook 23 | ``` 24 | 25 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 26 | 27 | ## Notebook Options 28 | 29 | The Docker container executes a [`start-notebook.sh` script](../base-notebook/start-notebook.sh) script by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 30 | 31 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 32 | 33 | ``` 34 | docker run -d -p 8888:8888 jupyter/scipy-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 35 | ``` 36 | 37 | For example, to set the base URL of the notebook server, run the following: 38 | 39 | ``` 40 | docker run -d -p 8888:8888 jupyter/scipy-notebook start-notebook.sh --NotebookApp.base_url=/some/path 41 | ``` 42 | 43 | For example, to disable all authentication mechanisms (not a recommended practice): 44 | 45 | ``` 46 | docker run -d -p 8888:8888 jupyter/scipy-notebook start-notebook.sh --NotebookApp.token='' 47 | ``` 48 | 49 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. 
See the *Alternative Commands* section later in this document for more information. 50 | 51 | 52 | ## Docker Options 53 | 54 | You may customize the execution of the Docker container and the Notebook server it contains with the following optional arguments. 55 | 56 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 57 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 58 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 59 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 60 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 61 | 62 | ## SSL Certificates 63 | 64 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 65 | 66 | ``` 67 | docker run -d -p 8888:8888 \ 68 | -v /some/host/folder:/etc/ssl/notebook \ 69 | jupyter/scipy-notebook start-notebook.sh \ 70 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key 71 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 72 | ``` 73 | 74 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 75 | 76 | ``` 77 | docker run -d -p 8888:8888 \ 78 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 79 | jupyter/scipy-notebook start-notebook.sh \ 80 | --NotebookApp.certfile=/etc/ssl/notebook.pem 81 | ``` 82 | 83 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 84 | 85 | For additional information about using SSL, see the following: 86 | 87 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 88 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 89 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 
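If you do not already have a key and certificate to mount, one way to create a self-signed pair (and the combined PEM file used in the example above) is with `openssl`; a sketch, with the subject and validity period as placeholders:

```
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=localhost" \
  -keyout notebook.key -out notebook.crt
cat notebook.key notebook.crt > notebook.pem
```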
90 | 91 | 92 | ## Conda Environments 93 | 94 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 95 | 96 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in both environments. For convenience, you can install packages into either environment regardless of what environment is currently active using commands like the following: 97 | 98 | ``` 99 | # install a package into the default (python 3.x) environment 100 | pip install some-package 101 | conda install some-package 102 | ``` 103 | 104 | 105 | ## Alternative Commands 106 | 107 | 108 | ### start.sh 109 | 110 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 111 | 112 | ``` 113 | docker run -it --rm jupyter/scipy-notebook start.sh ipython 114 | ``` 115 | 116 | Or, to run JupyterLab instead of the classic notebook, run the following: 117 | 118 | ``` 119 | docker run -it --rm -p 8888:8888 jupyter/scipy-notebook start.sh jupyter lab 120 | ``` 121 | 122 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 123 | 124 | ### Others 125 | 126 | You can bypass the provided scripts and specify your an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 127 | -------------------------------------------------------------------------------- /base-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/base-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/base-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/base-notebook.svg)](https://microbadger.com/images/jupyter/base-notebook "jupyter/base-notebook image metadata") 2 | 3 | # Base Jupyter Notebook Stack 4 | 5 | Small base image for defining your own stack 6 | 7 | ## What it Gives You 8 | 9 | * Minimally-functional Jupyter Notebook 5.2.x (e.g., no pandoc for document conversion) 10 | * Miniconda Python 3.x 11 | * No preinstalled scientific computing packages 12 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 13 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](./start-notebook.sh) as the default command 14 | * A [start-singleuser.sh](./start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 15 | * A [start.sh](./start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 16 | * Options for a self-signed HTTPS certificate and passwordless `sudo` 17 | 18 | ## Basic Use 19 | 20 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 21 | 22 | ``` 23 | docker run -it --rm -p 8888:8888 jupyter/base-notebook 24 | ``` 25 | 26 | Take note of the authentication token included in the notebook startup log messages. 
Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 27 | 28 | ## Notebook Options 29 | 30 | The Docker container executes a [`start-notebook.sh` script](./start-notebook.sh) by default. The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook` command. 31 | 32 | You can launch [JupyterLab](https://github.com/jupyterlab/jupyterlab) by setting `JUPYTER_ENABLE_LAB`: 33 | 34 | ``` 35 | docker run -it --rm -e JUPYTER_ENABLE_LAB=1 -p 8888:8888 jupyter/base-notebook 36 | ``` 37 | 38 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 39 | 40 | ``` 41 | docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 42 | ``` 43 | 44 | For example, to set the base URL of the notebook server, run the following: 45 | 46 | ``` 47 | docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.base_url=/some/path 48 | ``` 49 | 50 | For example, to disable all authentication mechanisms (not a recommended practice): 51 | 52 | ``` 53 | docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.token='' 54 | ``` 55 | 56 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 57 | 58 | ## Docker Options 59 | 60 | You may customize the execution of the Docker container and the command it is running with the following optional arguments. 61 | 62 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 63 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 64 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 65 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 66 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as a folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed.
**You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 67 | * `--group-add users` - use this argument if you are also specifying 68 | a specific user id to launch the container (`-u 5000`), rather than launching the container as root and relying on NB_UID and NB_GID to set the user and group. 69 | 70 | 71 | ## SSL Certificates 72 | 73 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 74 | 75 | ``` 76 | docker run -d -p 8888:8888 \ 77 | -v /some/host/folder:/etc/ssl/notebook \ 78 | jupyter/base-notebook start-notebook.sh \ 79 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key 80 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 81 | ``` 82 | 83 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 84 | 85 | ``` 86 | docker run -d -p 8888:8888 \ 87 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 88 | jupyter/base-notebook start-notebook.sh \ 89 | --NotebookApp.certfile=/etc/ssl/notebook.pem 90 | ``` 91 | 92 | In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 93 | 94 | For additional information about using SSL, see the following: 95 | 96 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 97 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 98 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 99 | 100 | 101 | ## Conda Environments 102 | 103 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 104 | 105 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in both environments. For convenience, you can install packages into either environment regardless of what environment is currently active using commands like the following: 106 | 107 | ``` 108 | # install a package into the default (python 3.x) environment 109 | pip install some-package 110 | conda install some-package 111 | ``` 112 | 113 | 114 | ## Alternative Commands 115 | 116 | ### start.sh 117 | 118 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. 
For example, to run the text-based `ipython` console in a container, do the following: 119 | 120 | ``` 121 | docker run -it --rm jupyter/base-notebook start.sh ipython 122 | ``` 123 | 124 | Or, to run JupyterLab instead of the classic notebook, run the following: 125 | 126 | ``` 127 | docker run -it --rm -p 8888:8888 jupyter/base-notebook start.sh jupyter lab 128 | ``` 129 | 130 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 131 | 132 | ### Others 133 | 134 | You can bypass the provided scripts and specify your an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`). 135 | 136 | -------------------------------------------------------------------------------- /datascience-notebook/README.md: -------------------------------------------------------------------------------- 1 | ![docker pulls](https://img.shields.io/docker/pulls/jupyter/datascience-notebook.svg) ![docker stars](https://img.shields.io/docker/stars/jupyter/datascience-notebook.svg) [![](https://images.microbadger.com/badges/image/jupyter/datascience-notebook.svg)](https://microbadger.com/images/jupyter/datascience-notebook "jupyter/datascience-notebook image metadata") 2 | 3 | # Jupyter Notebook Data Science Stack 4 | 5 | ## What it Gives You 6 | 7 | * Jupyter Notebook 5.2.x 8 | * Conda Python 3.x environment 9 | * pandas, matplotlib, scipy, seaborn, scikit-learn, scikit-image, sympy, cython, patsy, statsmodel, cloudpickle, dill, numba, bokeh pre-installed 10 | * Conda R v3.3.x and channel 11 | * plyr, devtools, shiny, rmarkdown, forecast, rsqlite, reshape2, nycflights13, caret, rcurl, and randomforest pre-installed 12 | * The [tidyverse](https://github.com/tidyverse/tidyverse) R packages are also installed, including ggplot2, dplyr, tidyr, readr, purrr, tibble, stringr, lubridate, and broom 13 | * Julia v0.6.x with Gadfly, RDatasets and HDF5 pre-installed 14 | * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda` 15 | * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command 16 | * `/usr/local/bin/start-notebook.d` directory for custom init scripts that you can add in derived images 17 | * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub 18 | * A [start.sh](../base-notebook/start.sh) script useful for running alternative commands in the container (e.g. `ipython`, `jupyter kernelgateway`, `jupyter lab`) 19 | * Options for a self-signed HTTPS certificate and passwordless `sudo` 20 | 21 | ## Basic Use 22 | 23 | The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured. 24 | 25 | ``` 26 | docker run -it --rm -p 8888:8888 jupyter/datascience-notebook 27 | ``` 28 | 29 | Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form. 30 | 31 | ## Notebook Options 32 | 33 | The Docker container executes a [`start-notebook.sh` script](../base-notebook/start-notebook.sh) script by default. 
The `start-notebook.sh` script handles the `NB_UID`, `NB_GID` and `GRANT_SUDO` features documented in the next section, and then executes the `jupyter notebook`. 34 | 35 | You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following: 36 | 37 | ``` 38 | docker run -d -p 8888:8888 jupyter/datascience-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e' 39 | ``` 40 | 41 | For example, to set the base URL of the notebook server, run the following: 42 | 43 | ``` 44 | docker run -d -p 8888:8888 jupyter/datascience-notebook start-notebook.sh --NotebookApp.base_url=/some/path 45 | ``` 46 | 47 | For example, to disable all authentication mechanisms (not a recommended practice): 48 | 49 | ``` 50 | docker run -d -p 8888:8888 jupyter/datascience-notebook start-notebook.sh --NotebookApp.token='' 51 | ``` 52 | 53 | You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the *Alternative Commands* section later in this document for more information. 54 | 55 | ## Docker Options 56 | 57 | You may customize the execution of the Docker container and the command it is running with the following optional arguments. 58 | 59 | * `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections. 60 | * `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.) 61 | * `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.) 62 | * `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) **You should only enable `sudo` if you trust the user or if the container is running on an isolated host.** 63 | * `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. **You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).** 64 | 65 | ## SSL Certificates 66 | 67 | You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. 
For example, to mount a host folder containing a `notebook.key` and `notebook.crt`: 68 | 69 | ``` 70 | docker run -d -p 8888:8888 \ 71 | -v /some/host/folder:/etc/ssl/notebook \ 72 | jupyter/datascience-notebook start-notebook.sh \ 73 | --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key \ 74 | --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt 75 | ``` 76 | 77 | Alternatively, you may mount a single PEM file containing both the key and certificate. For example: 78 | 79 | ``` 80 | docker run -d -p 8888:8888 \ 81 | -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \ 82 | jupyter/datascience-notebook start-notebook.sh \ 83 | --NotebookApp.certfile=/etc/ssl/notebook.pem 84 | ``` 85 | 86 | In either case, Jupyter Notebook expects the key and certificate to be base64-encoded text files. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root). 87 | 88 | For additional information about using SSL, see the following: 89 | 90 | * The [docker-stacks/examples](https://github.com/jupyter/docker-stacks/tree/master/examples) for information about how to use [Let's Encrypt](https://letsencrypt.org/) certificates when you run these stacks on a publicly visible domain. 91 | * The [jupyter_notebook_config.py](jupyter_notebook_config.py) file for how this Docker image generates a self-signed certificate. 92 | * The [Jupyter Notebook documentation](https://jupyter-notebook.readthedocs.io/en/latest/public_server.html#using-ssl-for-encrypted-communication) for best practices about running a public notebook server in general, most of which are encoded in this image. 93 | 94 | 95 | ## Conda Environments 96 | 97 | The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/envs.html) resides in `/opt/conda`. 98 | 99 | The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in this environment, and you can install additional packages into it using commands like the following: 100 | 101 | ``` 102 | # install a package into the default (python 3.x) environment 103 | pip install some-package 104 | conda install some-package 105 | ``` 106 | 107 | 108 | ## Alternative Commands 109 | 110 | 111 | ### start.sh 112 | 113 | The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following: 114 | 115 | ``` 116 | docker run -it --rm jupyter/datascience-notebook start.sh ipython 117 | ``` 118 | 119 | Or, to run JupyterLab instead of the classic notebook, run the following: 120 | 121 | ``` 122 | docker run -it --rm -p 8888:8888 jupyter/datascience-notebook start.sh jupyter lab 123 | ``` 124 | 125 | This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc. 126 | 127 | ### Others 128 | 129 | You can bypass the provided scripts and specify an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`).
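As a minimal sketch of what bypassing the scripts looks like, the following starts an interactive shell instead of the Notebook server (the choice of `bash` here is only an illustration):

```
# Run a plain bash shell; start-notebook.sh features such as GRANT_SUDO are not applied.
docker run -it --rm jupyter/datascience-notebook bash
```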
130 | -------------------------------------------------------------------------------- /examples/source-to-image/README.md: -------------------------------------------------------------------------------- 1 | This example provides scripts for building custom Jupyter Notebook images containing notebooks, data files, and with Python packages required by the notebooks already installed. The scripts provided work with the Source-to-Image tool and you can create the images from the command line on your own computer. Templates are also provided to enable running builds in OpenShift, as well as deploying the resulting image to OpenShift to make it available. 2 | 3 | The build scripts, when used with the Source-to-Image tool, provide similar capabilities to ``repo2docker``. When builds are run under OpenShift with the supplied templates, it provides similar capabilities to ``mybinder.org``, but where notebook instances are deployed in your existing OpenShift project and JupyterHub is not required. 4 | 5 | For separate examples of using JupyterHub with OpenShift, see the project: 6 | 7 | * https://github.com/jupyter-on-openshift/jupyterhub-quickstart 8 | 9 | Source-to-Image Project 10 | ----------------------- 11 | 12 | Source-to-Image (S2I) is an open source project which provides a tool for creating container images. It works by taking a base image, injecting additional source code or files into a running container created from the base image, and running a builder script in the container to process the source code or files to prepare the new image. 13 | 14 | Details on the S2I tool, and executable binaries for Linux, macOS and Windows, can be found on GitHub at: 15 | 16 | * https://github.com/openshift/source-to-image 17 | 18 | The tool is standalone, and can be used on any system which provides a docker daemon for running containers. To provide an end-to-end capability to build and deploy applications in containers, support for S2I is also integrated into container platforms such as OpenShift. 19 | 20 | Getting Started with S2I 21 | ------------------------ 22 | 23 | As an example of how S2I can be used to create a custom image with a bundled set of notebooks, run: 24 | 25 | ``` 26 | s2i build \ 27 | --scripts-url https://raw.githubusercontent.com/jupyter/docker-stacks/master/examples/source-to-image \ 28 | --context-dir docs/source/examples/Notebook \ 29 | https://github.com/jupyter/notebook \ 30 | jupyter/minimal-notebook:latest \ 31 | notebook-examples 32 | ``` 33 | 34 | This example command will pull down the Git repository ``https://github.com/jupyter/notebook`` and build the image ``notebook-examples`` using the files contained in the ``docs/source/examples/Notebook`` directory of that Git repository. The base image which the files will be combined with is ``jupyter/minimal-notebook:latest``, but you can specify any of the Jupyter Project ``docker-stacks`` images as the base image. 35 | 36 | The resulting image from running the command can be seen by running ``docker images``. 37 | 38 | ``` 39 | REPOSITORY TAG IMAGE ID CREATED SIZE 40 | notebook-examples latest f5899ed1241d 2 minutes ago 2.59GB 41 | ``` 42 | 43 | You can now run the image. 
44 | 45 | ``` 46 | $ docker run --rm -p 8888:8888 notebook-examples 47 | Executing the command: jupyter notebook 48 | [I 01:14:50.532 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret 49 | [W 01:14:50.724 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended. 50 | [I 01:14:50.747 NotebookApp] JupyterLab beta preview extension loaded from /opt/conda/lib/python3.6/site-packages/jupyterlab 51 | [I 01:14:50.747 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab 52 | [I 01:14:50.754 NotebookApp] Serving notebooks from local directory: /home/jovyan 53 | [I 01:14:50.754 NotebookApp] 0 active kernels 54 | [I 01:14:50.754 NotebookApp] The Jupyter Notebook is running at: 55 | [I 01:14:50.754 NotebookApp] http://[all ip addresses on your system]:8888/?token=04646d5c5e928da75842cd318d4a3c5aa1f942fc5964323a 56 | [I 01:14:50.754 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). 57 | [C 01:14:50.755 NotebookApp] 58 | 59 | Copy/paste this URL into your browser when you connect for the first time, 60 | to login with a token: 61 | http://localhost:8888/?token=04646d5c5e928da75842cd318d4a3c5aa1f942fc5964323a 62 | ``` 63 | 64 | Open your browser on the URL displayed, and you will find the notebooks from the Git repository and can work with them. 65 | 66 | The S2I Builder Scripts 67 | ----------------------- 68 | 69 | Normally when using S2I, the base image would be S2I enabled and contain the builder scripts needed to prepare the image and define how the application in the image should be run. As the Jupyter Project ``docker-stacks`` images are not S2I enabled (although they could be), in the above example the ``--scripts-url`` option has been used to specify that the example builder scripts contained in this directory of this Git repository should be used. 70 | 71 | Using the ``--scripts-url`` option, the builder scripts can be hosted on any HTTP server, or you could also use builder scripts local to your computer file using an appropriate ``file://`` format URI argument to ``--scripts-url``. 72 | 73 | The builder scripts in this directory of this repository are ``assemble`` and ``run`` and are provided as examples of what can be done. You can use the scripts as is, or create your own. 74 | 75 | The supplied ``assemble`` script performs a few key steps. 76 | 77 | The first steps copy files into the location they need to be when the image is run, from the directory where they are initially placed by the ``s2i`` command. 78 | 79 | ``` 80 | cp -Rf /tmp/src/. /home/$NB_USER 81 | 82 | rm -rf /tmp/src 83 | ``` 84 | 85 | The next steps are: 86 | 87 | ``` 88 | if [ -f /home/$NB_USER/environment.yml ]; then 89 | conda env update --name root --file /home/$NB_USER/environment.yml 90 | conda clean -tipsy 91 | else 92 | if [ -f /home/$NB_USER/requirements.txt ]; then 93 | pip --no-cache-dir install -r /home/$NB_USER/requirements.txt 94 | fi 95 | fi 96 | ``` 97 | 98 | This determines whether a ``environment.yml`` or ``requirements.txt`` file exists with the files and if so, runs the appropriate package management tool to install any Python packages listed in those files. 99 | 100 | This means that so long as a set of notebook files provides one of these files listing what Python packages they need, those packages will be automatically installed into the image so they are available when the image is run. 
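For example, a repository of notebooks might include a ``requirements.txt`` such as the following (the package names are purely illustrative) so that the ``assemble`` script installs them during the build:

```
# requirements.txt (example contents only)
numpy
pandas
matplotlib
```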
101 | 102 | A final step is: 103 | 104 | ``` 105 | fix-permissions $CONDA_DIR 106 | fix-permissions /home/$NB_USER 107 | ``` 108 | 109 | This fixes up permissions on any new files created by the build. This is necessary to ensure that when the image is run, you can still install additional files. This is important for when an image is run in ``sudo`` mode, or it is hosted in a more secure container platform such as Kubernetes/OpenShift where it will be run as a set user ID that isn't known in advance. 110 | 111 | As long as you preserve the first and last set of steps, you can do whatever you want in the ``assemble`` script to install packages, create files etc. Do be aware though that S2I builds do not run as ``root`` and so you cannot install additional system packages. If you need to install additional system packages, use a ``Dockerfile`` and normal ``docker build`` to first create a new custom base image from the Jupyter Project ``docker-stacks`` images, with the extra system packages, and then use that image with the S2I build to combine your notebooks and have Python packages installed. 112 | 113 | The ``run`` script in this directory is very simple and just runs the notebook application. 114 | 115 | ``` 116 | exec start-notebook.sh "$@" 117 | ``` 118 | 119 | Integration with OpenShift 120 | -------------------------- 121 | 122 | The OpenShift platform provides integrated support for S2I type builds. Templates are provided for using the S2I build mechanism with the scripts in this directory. To load the templates run: 123 | 124 | ``` 125 | oc create -f https://raw.githubusercontent.com/jupyter/docker-stacks/master/examples/source-to-image/templates.json 126 | ``` 127 | 128 | This will create the templates: 129 | 130 | ``` 131 | jupyter-notebook-builder 132 | jupyter-notebook-quickstart 133 | ``` 134 | 135 | The templates can be used from the OpenShift web console or command line. This ``README`` is only going to explain deploying from the command line. 136 | 137 | To use the OpenShift command line to build into an image, and deploy, the set of notebooks used above, run: 138 | 139 | ``` 140 | oc new-app --template jupyter-notebook-quickstart \ 141 | --param APPLICATION_NAME=notebook-examples \ 142 | --param GIT_REPOSITORY_URL=https://github.com/jupyter/notebook \ 143 | --param CONTEXT_DIR=docs/source/examples/Notebook \ 144 | --param BUILDER_IMAGE=jupyter/minimal-notebook:latest \ 145 | --param NOTEBOOK_PASSWORD=mypassword 146 | ``` 147 | 148 | You can provide a password using the ``NOTEBOOK_PASSWORD`` parameter. If you don't set that parameter, a password will be generated, with it being displayed by the ``oc new-app`` command. 149 | 150 | Once the image has been built, it will be deployed. To see the hostname for accessing the notebook, run ``oc get routes``. 151 | 152 | ``` 153 | NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD 154 | notebook-examples notebook-examples-jupyter.abcd.pro-us-east-1.openshiftapps.com notebook-examples 8888-tcp edge/Redirect None 155 | ``` 156 | 157 | As the deployment will use a secure connection, the URL for accessing the notebook in this case would be: 158 | 159 | ``` 160 | https://notebook-examples-jupyter.abcd.pro-us-east-1.openshiftapps.com 161 | ``` 162 | 163 | If you only want to build an image but not deploy it, you can use the ``jupyter-notebook-builder`` template. You can then deploy it using the ``jupyter-notebook`` template provided with the [openshift](../openshift) examples directory. 
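As a hedged sketch of that two-step approach, assuming the build produced an image stream named ``notebook-examples`` in the current project, the deployment step might look like the following (the parameter names are those of the ``jupyter-notebook`` template described in the openshift examples; the values are placeholders):

```
oc new-app --template jupyter-notebook \
    --param APPLICATION_NAME=notebook-examples \
    --param NOTEBOOK_IMAGE=notebook-examples \
    --param NOTEBOOK_PASSWORD=mypassword
```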
164 | 165 | See the ``openshift`` examples directory for further information on customizing configuration for a Jupyter Notebook deployment and deleting a deployment. 166 | -------------------------------------------------------------------------------- /examples/openshift/README.md: -------------------------------------------------------------------------------- 1 | This example provides templates for deploying the Jupyter Project docker-stacks images to OpenShift. 2 | 3 | Prerequisites 4 | ------------- 5 | 6 | Any OpenShift 3 environment. The templates were tested with OpenShift 3.7. It is believed they should work with at least OpenShift 3.6 or later. 7 | 8 | Do be aware that the Jupyter Project docker-stacks images are very large. The OpenShift environment you are using must provide sufficient quota on the per-user space for images and the file system for running containers. If the quota is too small, pulling the images to a node in the OpenShift cluster during deployment will fail due to lack of space. Even if the image is able to be run, if the quota is only just larger than the space required for the image, you will not be able to install many packages into the container before running out of space. 9 | 10 | OpenShift Online, the public hosted version of OpenShift from Red Hat, has a quota of only 3GB for the image and container file system. As a result, only the ``minimal-notebook`` can be started and there is little space remaining to install additional packages. Although OpenShift Online is suitable for demonstrating that these templates work, what you can do in that environment will be limited due to the size of the images. 11 | 12 | If you want to experiment with using Jupyter Notebooks in an OpenShift environment, you should instead use [Minishift](https://www.openshift.org/minishift/). Minishift provides you the ability to run OpenShift in a virtual machine on your own local computer. 13 | 14 | Loading the Templates 15 | --------------------- 16 | 17 | To load the templates, log in to OpenShift from the command line and run: 18 | 19 | ``` 20 | oc create -f https://raw.githubusercontent.com/jupyter-on-openshift/docker-stacks/master/examples/openshift/templates.json 21 | ``` 22 | 23 | This should create the following templates: 24 | 25 | ``` 26 | jupyter-notebook 27 | ``` 28 | 29 | The template can be used from the command line using the ``oc new-app`` command, or from the OpenShift web console by selecting _Add to Project_. This ``README`` is only going to explain deploying from the command line. 30 | 31 | Deploying a Notebook 32 | -------------------- 33 | 34 | To deploy a notebook from the command line using the template, run: 35 | 36 | ``` 37 | oc new-app --template jupyter-notebook 38 | ``` 39 | 40 | The output will be similar to: 41 | 42 | ``` 43 | --> Deploying template "jupyter/jupyter-notebook" to project jupyter 44 | 45 | Jupyter Notebook 46 | --------- 47 | Template for deploying Jupyter Notebook images. 48 | 49 | * With parameters: 50 | * APPLICATION_NAME=notebook 51 | * NOTEBOOK_IMAGE=jupyter/minimal-notebook:latest 52 | * NOTEBOOK_PASSWORD=ded4d7cada554aa48e0db612e1ed1080 # generated 53 | 54 | --> Creating resources ... 55 | configmap "notebook-cfg" created 56 | deploymentconfig "notebook" created 57 | route "notebook" created 58 | service "notebook" created 59 | --> Success 60 | Access your application via route 'notebook-jupyter.b9ad.pro-us-east-1.openshiftapps.com' 61 | Run 'oc status' to view your app.
62 | ``` 63 | 64 | When no template parameters are provided, the name of the deployed notebook will be ``notebook``. The image used will be: 65 | 66 | ``` 67 | jupyter/minimal-notebook:latest 68 | ``` 69 | 70 | A password you can use when accessing the notebook will be auto generated and is displayed in the output from running ``oc new-app``. 71 | 72 | To see the hostname for accessing the notebook run: 73 | 74 | ``` 75 | oc get routes 76 | ``` 77 | 78 | The output will be similar to: 79 | 80 | ``` 81 | NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD 82 | notebook notebook-jupyter.abcd.pro-us-east-1.openshiftapps.com notebook 8888-tcp edge/Redirect None 83 | ``` 84 | 85 | A secure route will be used to expose the notebook outside of the OpenShift cluster, so in this case the URL would be: 86 | 87 | ``` 88 | https://notebook-jupyter.abcd.pro-us-east-1.openshiftapps.com/ 89 | ``` 90 | 91 | When prompted, enter the password for the notebook. 92 | 93 | Passing Template Parameters 94 | --------------------------- 95 | 96 | To override the name for the notebook, the image used, and the password, you can pass template parameters using the ``--param`` option. 97 | 98 | ``` 99 | oc new-app --template jupyter-notebook \ 100 | --param APPLICATION_NAME=mynotebook \ 101 | --param NOTEBOOK_IMAGE=jupyter/scipy-notebook:latest \ 102 | --param NOTEBOOK_PASSWORD=mypassword 103 | ``` 104 | 105 | You can deploy any of the Jupyter Project docker-stacks images. 106 | 107 | * jupyter/base-notebook 108 | * jupyter/r-notebook 109 | * jupyter/minimal-notebook 110 | * jupyter/scipy-notebook 111 | * jupyter/tensorflow-notebook 112 | * jupyter/datascience-notebook 113 | * jupyter/pyspark-notebook 114 | * jupyter/all-spark-notebook 115 | 116 | If you don't care what version of the image is used, add the ``:latest`` tag at the end of the image name, otherwise use the hash corresponding to the image version you want to use. 117 | 118 | Deleting the Notebook Instance 119 | ------------------------------ 120 | 121 | To delete the notebook instance, run ``oc delete`` using a label selector for the application name. 122 | 123 | ``` 124 | oc delete all,configmap --selector app=mynotebook 125 | ``` 126 | 127 | Enabling Jupyter Lab Interface 128 | ------------------------------ 129 | 130 | To enable the Jupyter Lab interface for a deployed notebook set the ``JUPYTER_ENABLE_LAB`` environment variable. 131 | 132 | ``` 133 | oc set env dc/mynotebook JUPYTER_ENABLE_LAB=true 134 | ``` 135 | 136 | Setting the environment variable will trigger a new deployment and the Jupyter Lab interface will be enabled. 137 | 138 | Adding Persistent Storage 139 | ------------------------- 140 | 141 | You can upload notebooks and other files using the web interface of the notebook. Any uploaded files or changes you make to them will be lost when the notebook instance is restarted. If you want to save your work, you need to add persistent storage to the notebook. To add persistent storage run: 142 | 143 | ``` 144 | oc set volume dc/mynotebook --add \ 145 | --type=pvc --claim-size=1Gi --claim-mode=ReadWriteOnce \ 146 | --claim-name mynotebook-data --name data \ 147 | --mount-path /home/jovyan 148 | ``` 149 | 150 | When you have deleted the notebook instance, if using a persistent volume, you will need to delete it in a separate step. 
151 | 152 | ``` 153 | oc delete pvc/mynotebook-data 154 | ``` 155 | 156 | Customizing the Configuration 157 | ----------------------------- 158 | 159 | If you want to set any custom configuration for the notebook, you can edit the config map created by the template. 160 | 161 | ``` 162 | oc edit configmap/mynotebook-cfg 163 | ``` 164 | 165 | The ``data`` field of the config map contains Python code used as the ``jupyter_notebook_config.py`` file. 166 | 167 | If you are using a persistent volume, you can also create a configuration file at: 168 | 169 | ``` 170 | /home/jovyan/.jupyter/jupyter_notebook_config.py 171 | ``` 172 | 173 | This will be merged at the end of the configuration from the config map. 174 | 175 | Because the configuration is Python code, ensure any indenting is correct. Any errors in the configuration file will cause the notebook to fail when starting. 176 | 177 | If the error is in the config map, edit it again to fix it and trigged a new deployment if necessary by running: 178 | 179 | ``` 180 | oc rollout latest dc/mynotebook 181 | ``` 182 | 183 | If you make an error in the configuration file stored in the persistent volume, you will need to scale down the notebook so it isn't running. 184 | 185 | ``` 186 | oc scale dc/mynotebook --replicas 0 187 | ``` 188 | 189 | Then run: 190 | 191 | ``` 192 | oc debug dc/mynotebook 193 | ``` 194 | 195 | to run the notebook in debug mode. This will provide you with an interactive terminal session inside a running container, but the notebook will not have been started. Edit the configuration file in the volume to fix any errors and exit the terminal session. 196 | 197 | Start up the notebook again. 198 | 199 | ``` 200 | oc scale dc/mynotebook --replicas 1 201 | ``` 202 | 203 | Changing the Notebook Password 204 | ------------------------------ 205 | 206 | The password for the notebook is supplied as a template parameter, or if not supplied will be automatically generated by the template. It will be passed into the container through an environment variable. 207 | 208 | If you want to change the password, you can do so by editing the environment variable on the deployment configuration. 209 | 210 | ``` 211 | oc set env dc/mynotebook JUPYTER_NOTEBOOK_PASSWORD=mypassword 212 | ``` 213 | 214 | This will trigger a new deployment so ensure you have downloaded any work if not using a persistent volume. 215 | 216 | If using a persistent volume, you could instead setup a password in the file: 217 | 218 | ``` 219 | /home/jovyan/.jupyter/jupyter_notebook_config.py 220 | ``` 221 | 222 | as per guidelines in: 223 | 224 | * https://jupyter-notebook.readthedocs.io/en/stable/public_server.html 225 | 226 | Deploying from a Custom Image 227 | ----------------------------- 228 | 229 | If you want to deploy a custom variant of the Jupyter Project docker-stacks images, you can replace the image name with that of your own. If the image is not stored on Docker Hub, but some other public image registry, prefix the name of the image with the image registry host details. 230 | 231 | If the image is in your OpenShift project, because you imported the image into OpenShift, or used the docker build strategy of OpenShift to build a derived custom image, you can use the name of the image stream for the image name, including any image tag if necessary. 232 | 233 | This can be illustrated by first importing an image into the OpenShift project. 
234 | 235 | ``` 236 | oc import-image jupyter/datascience-notebook:latest --confirm 237 | ``` 238 | 239 | Then deploy it using the name of the image stream created. 240 | 241 | ``` 242 | oc new-app --template jupyter-notebook \ 243 | --param APPLICATION_NAME=mynotebook \ 244 | --param NOTEBOOK_IMAGE=datascience-notebook \ 245 | --param NOTEBOOK_PASSWORD=mypassword 246 | ``` 247 | 248 | Importing an image into OpenShift before deploying it means that when a notebook is started, the image need only be pulled from the internal OpenShift image registry rather than Docker Hub for each deployment. Because the images are so large, this can speed up deployments when the image hasn't previously been deployed to a node in the OpenShift cluster. 249 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # docker-stacks 2 | 3 | [![Build Status](https://travis-ci.org/jupyter/docker-stacks.svg?branch=master)](https://travis-ci.org/jupyter/docker-stacks) 4 | [![Join the chat at https://gitter.im/jupyter/jupyter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/jupyter/jupyter?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) 5 | 6 | Opinionated stacks of ready-to-run Jupyter applications in Docker. 7 | 8 | ## Quick Start 9 | 10 | If you're familiar with Docker, have it configured, and know exactly what you'd like to run, one of these commands should get you up and running: 11 | 12 | ``` 13 | # Run an ephemeral Jupyter Notebook server in a Docker container in the terminal foreground. 14 | # Note that any work saved in the container will be lost when it is destroyed with this config. 15 | # -ti: pseudo-TTY+STDIN open. 16 | # --rm: remove the container on exit. 17 | # -p: publish port to the host 18 | docker run -ti --rm -p 8888:8888 jupyter/<stack-name>:<tag> 19 | 20 | # Run a Jupyter Notebook server in a Docker container in the terminal foreground. 21 | # Any files written to ~/work in the container will be saved to the current working 22 | # directory on the host. 23 | docker run -ti --rm -p 8888:8888 -v "$PWD":/home/jovyan/work jupyter/<stack-name>:<tag> 24 | 25 | # Run an ephemeral Jupyter Notebook server in a Docker container in the background. 26 | # Note that any work saved in the container will be lost when it is destroyed with this config. 27 | # -d: detach, run container in background. 28 | # -P: publish all exposed ports to random host ports 29 | docker run -d -P jupyter/<stack-name>:<tag> 30 | ``` 31 | 32 | ## Getting Started 33 | 34 | If this is your first time using Docker or any of the Jupyter projects, do the following to get started. 35 | 36 | 1. [Install Docker](https://docs.docker.com/installation/) on your host of choice. 37 | 2. Open the README in one of the folders in this git repository. 38 | 3. Follow the README for that stack.
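As a concrete illustration of the quick-start commands above, picking the `minimal-notebook` stack and the `latest` tag (an arbitrary choice for this example):

```
# Run the minimal-notebook stack in the foreground and publish the Notebook port to the host.
docker run -ti --rm -p 8888:8888 jupyter/minimal-notebook:latest
```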
39 | 40 | ## Visual Overview 41 | 42 | Here's a diagram of the `FROM` relationships between all of the images defined in this project: 43 | 44 | [![Image inheritance diagram](internal/inherit-diagram.svg)](http://interactive.blockdiag.com/?compression=deflate&src=eJyFzTEPgjAQhuHdX9Gws5sQjGzujsaYKxzmQrlr2msMGv-71K0srO_3XGud9NNA8DSfgzESCFlBSdi0xkvQAKTNugw4QnL6GIU10hvX-Zh7Z24OLLq2SjaxpvP10lX35vCf6pOxELFmUbQiUz4oQhYzMc3gCrRt2cWe_FKosmSjyFHC6OS1AwdQWCtyj7sfh523_BI9hKlQ25YdOFdv5fcH0kiEMA) 45 | 46 | [Click here for a commented build history of each image, with references to tag/SHA values.](https://github.com/jupyter/docker-stacks/wiki/Docker-build-history) 47 | 48 | The following are quick-links to READMEs about each image and their Docker image tags on Docker Cloud: 49 | 50 | * base-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/base-notebook), [SHA list](https://hub.docker.com/r/jupyter/base-notebook/tags/) 51 | * minimal-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/minimal-notebook), [SHA list](https://hub.docker.com/r/jupyter/minimal-notebook/tags/) 52 | * scipy-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/scipy-notebook), [SHA list](https://hub.docker.com/r/jupyter/scipy-notebook/tags/) 53 | * r-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/r-notebook), [SHA list](https://hub.docker.com/r/jupyter/r-notebook/tags/) 54 | * tensorflow-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/tensorflow-notebook), [SHA list](https://hub.docker.com/r/jupyter/tensorflow-notebook/tags/) 55 | * datascience-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook), [SHA list](https://hub.docker.com/r/jupyter/datascience-notebook/tags/) 56 | * pyspark-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/pyspark-notebook), [SHA list](https://hub.docker.com/r/jupyter/pyspark-notebook/tags/) 57 | * all-spark-notebook: [README](https://github.com/jupyter/docker-stacks/tree/master/all-spark-notebook), [SHA list](https://hub.docker.com/r/jupyter/all-spark-notebook/tags/) 58 | 59 | ## Stacks, Tags, Versioning, and Progress 60 | 61 | Starting with [git commit SHA 9bd33dcc8688](https://github.com/jupyter/docker-stacks/tree/9bd33dcc8688): 62 | 63 | * Nearly every folder here on GitHub has an equivalent `jupyter/` on Docker Hub (e.g., all-spark-notebook → jupyter/all-spark-notebook). 64 | * The `latest` tag in each Docker Hub repository tracks the `master` branch `HEAD` reference on GitHub. 65 | This is a moving target and will make backward-incompatible changes regularly. 66 | * Any 12-character image tag on Docker Hub refers to a git commit SHA here on GitHub. See the [Docker build history wiki page](https://github.com/jupyter/docker-stacks/wiki/Docker-build-history) for a table of build details. 67 | * Stack contents (e.g., new library versions) will be updated upon request via PRs against this project. 68 | * Users looking for reproducibility or stability should always refer to specific git SHA tagged images in their work, not `latest`. 69 | * For legacy reasons, there are two additional tags named `3.2` and `4.0` on Docker Hub which point to images prior to our versioning scheme switch. 70 | 71 | ## Other Tips and Known Issues 72 | 73 | - If you haven't already, pin your image to a tag, e.g. `FROM jupyter/scipy-notebook:7c45ec67c8e7`. 
74 | `latest` is a moving target which can change in backward-incompatible ways as packages and operating systems are updated. 75 | * Python 2.x was [removed from all images](https://github.com/jupyter/docker-stacks/pull/433) on August 10th, 2017, starting in tag `cc9feab481f7`. If you wish to continue using Python 2.x, pin to tag `82b978b3ceeb`. 76 | * `tini -- start-notebook.sh` is the default Docker entrypoint-plus-command in every notebook stack. If you plan to modify it in any way, be sure to check the *Notebook Options* section of your stack's README to understand the consequences. 77 | * Every notebook stack is compatible with [JupyterHub](https://jupyterhub.readthedocs.io) 0.5 or higher. When running with JupyterHub, you must override the Docker run command to point to the [start-singleuser.sh](base-notebook/start-singleuser.sh) script, which starts a single-user instance of the Notebook server. See each stack's README for instructions on running with JupyterHub. 78 | * Check the [Docker recipes wiki page](https://github.com/jupyter/docker-stacks/wiki/Docker-Recipes) attached to this project for information about extending and deploying the Docker images defined here. Add to the wiki if you have relevant information. 79 | * The pyspark-notebook and all-spark-notebook stacks will fail to submit Spark jobs to a Mesos cluster when run on Mac OSX due to https://github.com/docker/for-mac/issues/68. 80 | 81 | ## Maintainer Workflow 82 | 83 | **To build new images on Docker Cloud and publish them to the Docker Hub registry, do the following:** 84 | 85 | 1. Make sure Travis is green for a PR. 86 | 2. Merge the PR. 87 | 3. Monitor the Docker Cloud build status for each of the stacks, starting with [jupyter/base-notebook](https://cloud.docker.com/app/jupyter/repository/docker/jupyter/base-notebook/general) and ending with [jupyter/all-spark-notebook](https://cloud.docker.com/app/jupyter/repository/docker/jupyter/all-spark-notebook/general). 88 | * See the stack hierarchy diagram for the current, complete build order. 89 | 4. Manually click the retry button next to any build that fails to resume that build and any dependent builds. 90 | 5. Avoid merging another PR to master until all outstanding builds complete. 91 | * There's no way at present to propagate the git SHA to build through the Docker Cloud build trigger API. Every build trigger works off of master HEAD. 92 | 93 | **When there's a security fix in the Ubuntu base image, do the following in place of the last command:** 94 | 95 | Update the `ubuntu:16.04` SHA in the most-base images (e.g., base-notebook). Submit it as a regular PR and go through the build process. Expect the build to take a while to complete: every image layer will rebuild. 96 | 97 | **When there's a new stack definition, do the following before merging the PR with the new stack:** 98 | 99 | 1. Ensure the PR includes an update to the stack overview diagram in the top-level README. 100 | * The source of the diagram is included in the alt-text of the image. Visit that URL to make edits. 101 | 2. Ensure the PR updates the Makefile which is used to build the stacks in order on Travis CI. 102 | 3. Create a new repoistory in the `jupyter` org on Docker Cloud named after the stack folder in the git repo. 103 | 4. Grant the `stacks` team permission to write to the repo. 104 | 5. Click *Builds* and then *Configure Automated Builds* for the repository. 105 | 6. Select `jupyter/docker-stacks` as the source repository. 106 | 7. 
Choose *Build on Docker Cloud's infrastructure using a Small node* unless you have reason to believe a bigger host is required. 107 | 8. Update the *Build Context* in the default build rule to be `/`. 108 | 9. Toggle *Autobuild* to disabled unless the stack is a new root stack (e.g., like `jupyter/base-notebook`). 109 | 10. If the new stack depends on the build of another stack in the hierarchy: 110 | 1. Hit *Save* and then click *Configure Automated Builds*. 111 | 2. At the very bottom, add a build trigger named *Stack hierarchy trigger*. 112 | 3. Copy the build trigger URL. 113 | 4. Visit the parent repository *Builds* page and click *Configure Automated Builds*. 114 | 5. Add the URL you copied to the *NEXT_BUILD_TRIGGERS* environment variable comma separated list of URLs, creating that environment variable if it does not already exist. 115 | 6. Hit *Save*. 116 | 11. If the new stack should trigger other dependent builds: 117 | 1. Add an environment variable named *NEXT_BUILD_TRIGGERS*. 118 | 2. Copy the build trigger URLs from the dependent builds into the *NEXT_BUILD_TRIGGERS* comma separated list of URLs. 119 | 3. Hit *Save*. 120 | 12. Adjust other *NEXT_BUILD_TRIGGERS* values as needed so that the build order matches that in the stack hierarchy diagram. 121 | 122 | **When there's a new maintainer, do the following:** 123 | 124 | 1. Visit https://cloud.docker.com/app/jupyter/team/stacks/users 125 | 2. Add the new maintainer user name. 126 | 127 | **If automated builds have got you down, do the following:** 128 | 129 | 1. Clone this repository. 130 | 2. Check out the git SHA you want to build and publish. 131 | 3. `docker login` with your Docker Hub/Cloud credentials. 132 | 4. Run `make retry/release-all`. 133 | 134 | When `make retry/release-all` successfully pushes the last of its images to Docker Hub (currently `jupyter/all-spark-notebook`), Docker Hub invokes [the webhook](https://github.com/jupyter/docker-stacks/blob/master/internal/docker-stacks-webhook/) which updates the [Docker build history](https://github.com/jupyter/docker-stacks/wiki/Docker-build-history) wiki page. 135 | --------------------------------------------------------------------------------