├── attach.sh
├── getip.sh
├── run.sh
├── Dockerfile
└── README.md

/attach.sh:
--------------------------------------------------------------------------------
1 | # Open an interactive bash shell inside the running container.
2 | docker exec -it tensorflow_2.11.1 bash
--------------------------------------------------------------------------------
/getip.sh:
--------------------------------------------------------------------------------
1 | # Print the IP address of the running container.
2 | docker container inspect tensorflow_2.11.1 | grep IPAddress
--------------------------------------------------------------------------------
/run.sh:
--------------------------------------------------------------------------------
1 | # Start a detached container as the current user, publishing the Jupyter port and mapping host /opt to container /opt.
2 | docker run -u $(id -u):$(id -g) -di --rm -p 8888:8888 -v /opt:/opt --name tensorflow_2.11.1 tensorflow_2.11.1
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM tensorflow/tensorflow:2.11.1
2 | 
3 | # Standard ML libraries, Jupyter and editor tooling; /.local and /.cache are
4 | # made world-writable so the container can be run as a non-root user (see run.sh).
5 | RUN pip install numpy pandas matplotlib scikit-learn seaborn jupyter notebook \
6 |         pyright python-language-server python-lsp-server && \
7 |     mkdir /.local && chmod 777 /.local && \
8 |     mkdir /.cache && chmod 777 /.cache
9 | 
10 | EXPOSE 8888
11 | 
12 | VOLUME /opt
13 | 
14 | CMD ["/bin/bash"]
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Docker container for TensorFlow 2.11.1
2 | 
3 | This Dockerfile creates a Docker image with TensorFlow 2.11.1, a version that is compatible with at least STMicroelectronics X-CUBE-AI version 10.
4 | 
5 | If a model is created with a newer version of TensorFlow, X-CUBE-AI may, as of now, fail to convert the model for use on the microcontroller.
6 | 
7 | The image also contains some standard ML libraries, such as NumPy, pandas and scikit-learn, as well as the Jupyter Notebook server, so development can be done inside the container.
8 | 
9 | # How to use it?
10 | 
11 | Create the image by running **./build.sh** (a minimal sketch of such a script is shown after this listing). If other libraries are required, modify the pip install command in the Dockerfile as needed.
12 | 
13 | Create a running container with the **./run.sh** script.
14 | 
15 | Modify this script so that it maps at least the host directory where your files (Jupyter notebooks) reside to the container's **/opt** directory; an example with a custom path is shown after this listing.
16 | At the moment, the host /opt directory is mapped to the container's /opt directory.
17 | 
18 | Check the running container's IP address with the **./getip.sh** script.
19 | 
20 | Attach to the container with the **./attach.sh** script.
21 | 
22 | At the container's bash prompt, run the following command: **jupyter notebook --ip 0.0.0.0**
23 | 
24 | Using the IP address found earlier and the token/URL printed in the Jupyter Notebook startup log, access the server with a browser.
25 | 
26 | For example: **http://172.17.0.12:8888/tree?token=a3fb1692994fc26a29466691a18159879604ced685196c6b** where 172.17.0.12 is the container's IP address.
27 | 
28 | # How to use a GPU or another version
29 | 
30 | From the available TensorFlow Docker images, choose the one with the required version and with the GPU libraries enabled; see https://hub.docker.com/r/tensorflow/tensorflow. An example of the changes involved is shown after this listing.
--------------------------------------------------------------------------------
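
The listing above does not include the **./build.sh** script that the README refers to. A minimal sketch of what such a script might contain, assuming only that the image should be tagged **tensorflow_2.11.1**, the name run.sh and attach.sh expect:

```sh
#!/bin/bash
# Build the image from the Dockerfile in the current directory and tag it
# tensorflow_2.11.1 so that run.sh and attach.sh can find it.
docker build -t tensorflow_2.11.1 .
```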
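
For the step of mapping your own notebook directory, the -v flag of run.sh can be adjusted; /path/to/notebooks below is a placeholder, not a path from the original scripts:

```sh
# Same as run.sh, but mapping an arbitrary host directory onto /opt inside
# the container (replace /path/to/notebooks with the actual location).
docker run -u $(id -u):$(id -g) -di --rm -p 8888:8888 \
    -v /path/to/notebooks:/opt \
    --name tensorflow_2.11.1 tensorflow_2.11.1
```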
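
For the GPU case, one possible approach, assuming the **tensorflow/tensorflow:2.11.1-gpu** image tag and a host with the NVIDIA Container Toolkit installed, is to change the FROM line of the Dockerfile to that tag, rebuild the image, and add --gpus all to the run command:

```sh
# As run.sh, but with --gpus all so the container can use the host GPUs
# (requires the NVIDIA Container Toolkit on the host).
docker run -u $(id -u):$(id -g) -di --rm --gpus all -p 8888:8888 \
    -v /opt:/opt \
    --name tensorflow_2.11.1 tensorflow_2.11.1
```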