├── .gitattributes ├── .github └── FUNDING.yml ├── .travis.yml ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.rst ├── LICENSE ├── README.rst ├── _img ├── 0-welcome │ ├── graph-run.png │ └── joinslack.png ├── 1-basics │ ├── basic_math_operations │ │ └── graph-run.png │ └── readme.rst ├── 2-basics_in_machine_learning │ └── linear_regression │ │ └── updating_model.gif ├── 3-neural_network │ ├── convolutiona_neural_network │ │ ├── accuracy_train.png │ │ ├── activation_fc4_train.png │ │ ├── architecture.png │ │ ├── classifier_image.png │ │ ├── convlayer.png │ │ ├── graph.png │ │ ├── histogram_fc4_train.png │ │ ├── loss_accuracy_train.png │ │ ├── loss_train.png │ │ ├── terminal_training.png │ │ └── test_accuracy.png │ └── multi-layer-perceptron │ │ └── neural-network.png └── mainpage │ ├── Build.png │ ├── CNNs.png │ ├── TensorFlow_World.gif │ ├── Tensor_GIF.gif │ ├── Tensor_GIF_ff.gif │ ├── YouTube.png │ ├── basicmodels.gif │ ├── basicmodels.png │ ├── basics-old.png │ ├── basics.gif │ ├── booksubscribe.png │ ├── donation.jpg │ ├── follow-twitter.gif │ ├── installation-logo.gif │ ├── installation.gif │ ├── subscribe.gif │ ├── subscribe.png │ ├── teaser.gif │ ├── welcome.gif │ └── welcome.png ├── codes ├── ipython │ ├── 0-welcome │ │ └── welcome.ipynb │ ├── 1-basics │ │ ├── automatic_differentiation.ipynb │ │ ├── graph.ipynb │ │ ├── models.ipynb │ │ └── tensors.ipynb │ ├── advanced │ │ ├── custom_training.ipynb │ │ ├── dataset_generator.ipynb │ │ └── tfrecords.ipynb │ ├── basics_in_machine_learning │ │ ├── dataaugmentation.ipynb │ │ └── linearregression.ipynb │ └── neural_networks │ │ ├── CNNs.ipynb │ │ └── mlp.ipynb └── python │ ├── 0-welcome │ └── welcome.py │ ├── 1-basics │ ├── automatic_differentiation.py │ ├── graph.py │ ├── models.py │ └── tensors.py │ ├── advanced │ ├── custom_training.py │ ├── dataset_generator.py │ └── tfrecords.py │ ├── application │ └── image │ │ └── image_classification.py │ ├── basics_in_machine_learning │ ├── dataaugmentation.py │ └── 
linearregression.py │ └── neural_networks │ ├── cnns.py │ └── mlp.py ├── docs ├── Makefile ├── README.rst ├── _img │ ├── 0-welcome │ │ └── graph-run.png │ ├── 1-basics │ │ ├── basic_math_operations │ │ │ └── graph-run.png │ │ └── readme.rst │ ├── 2-basics_in_machine_learning │ │ └── linear_regression │ │ │ └── updating_model.gif │ ├── 3-neural_network │ │ ├── autoencoder │ │ │ ├── README.rst │ │ │ └── ae.png │ │ ├── convolutiona_neural_network │ │ │ ├── accuracy_train.png │ │ │ ├── activation_fc4_train.png │ │ │ ├── architecture.png │ │ │ ├── classifier_image.png │ │ │ ├── convlayer.png │ │ │ ├── graph.png │ │ │ ├── histogram_fc4_train.png │ │ │ ├── loss_accuracy_train.png │ │ │ ├── loss_train.png │ │ │ ├── terminal_training.png │ │ │ └── test_accuracy.png │ │ └── multi-layer-perceptron │ │ │ └── neural-network.png │ └── mainpage │ │ ├── TensorFlow_World.gif │ │ ├── Tensor_GIF.gif │ │ ├── Tensor_GIF_ff.gif │ │ └── installation.gif ├── conf.py ├── index.rst ├── make.bat └── tutorials │ ├── 0-welcome │ └── README.rst │ ├── 1-basics │ ├── basic_math_operations │ │ └── README.rst │ ├── readme.rst │ └── variables │ │ └── README.rst │ ├── 2-basics_in_machine_learning │ ├── linear_regression │ │ └── README.rst │ └── logistic_regression │ │ └── README.rst │ ├── 3-neural_network │ ├── autoencoder │ │ └── README.rst │ └── convolutiona_neural_network │ │ └── README.rst │ └── installation │ └── README.rst ├── requirements.txt ├── travis.sh └── welcome.py /.gitattributes: -------------------------------------------------------------------------------- 1 | codes/ipython/* linguist-vendored 2 | 3 | # Basic .gitattributes for a python repo. 
4 | 5 | # Source files 6 | # ============ 7 | *.pxd text diff=python 8 | *.py text diff=python 9 | *.py3 text diff=python 10 | *.pyw text diff=python 11 | *.pyx text diff=python 12 | 13 | # Binary files 14 | # ============ 15 | *.db binary 16 | *.p binary 17 | *.pkl binary 18 | *.pyc binary 19 | *.pyd binary 20 | *.pyo binary 21 | 22 | # Note: .db, .p, and .pkl files are associated 23 | # with the python modules ``pickle``, ``dbm.*``, 24 | # ``shelve``, ``marshal``, ``anydbm``, & ``bsddb`` 25 | # (among others). 26 | -------------------------------------------------------------------------------- /.github/FUNDING.yml: -------------------------------------------------------------------------------- 1 | # These are supported funding model platforms 2 | 3 | github: [astorfi] 4 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | sudo: required 2 | language: python 3 | cache: pip 4 | git: 5 | depth: false 6 | quiet: true 7 | 8 | python: # The following versions 9 | - "3.6" 10 | - "3.7" 11 | # command to install dependencies 12 | 13 | build: 14 | stage: build 15 | only: 16 | - paths: 17 | - "^codes/*" 18 | 19 | install: 20 | - pip install numpy 21 | - pip install matplotlib 22 | - pip install pandas 23 | - pip install seaborn 24 | - pip install pathlib 25 | - pip install tensorflow_datasets 26 | - pip install scikit-image 27 | # install TensorFlow from https://storage.googleapis.com/tensorflow/ 28 | - if [[ "$TRAVIS_PYTHON_VERSION" == "3.6" ]]; then 29 | pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow_cpu-2.3.0-cp36-cp36m-manylinux2010_x86_64.whl; 30 | elif [[ "$TRAVIS_PYTHON_VERSION" == "3.7" ]]; then 31 | pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow_cpu-2.3.0-cp37-cp37m-manylinux2010_x86_64.whl; 32 | fi 33 | script: 34 | # You can run all python files in parallel, 
http://stackoverflow.com/questions/5015316 35 | # - find codes/python/ -type f -name "*.py" |xargs -n 1 python 36 | # get list of changed files and if they are Python files, run them. 37 | - ./travis.sh 38 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. 6 | 7 | ## Our Standards 8 | 9 | Examples of behavior that contributes to creating a positive environment include: 10 | 11 | * Using welcoming and inclusive language 12 | * Being respectful of differing viewpoints and experiences 13 | * Gracefully accepting constructive criticism 14 | * Focusing on what is best for the community 15 | * Showing empathy towards other community members 16 | 17 | Examples of unacceptable behavior by participants include: 18 | 19 | * The use of sexualized language or imagery and unwelcome sexual attention or advances 20 | * Trolling, insulting/derogatory comments, and personal or political attacks 21 | * Public or private harassment 22 | * Publishing others' private information, such as a physical or electronic address, without explicit permission 23 | * Other conduct which could reasonably be considered inappropriate in a professional setting 24 | 25 | ## Our Responsibilities 26 | 27 | Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. 
28 | 29 | Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. 30 | 31 | ## Scope 32 | 33 | This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. 34 | 35 | ## Enforcement 36 | 37 | Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at amirsina.torfi@gmail.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. 38 | 39 | Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. 
40 | 41 | ## Attribution 42 | 43 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version] 44 | 45 | [homepage]: http://contributor-covenant.org 46 | [version]: http://contributor-covenant.org/version/1/4/ 47 | -------------------------------------------------------------------------------- /CONTRIBUTING.rst: -------------------------------------------------------------------------------- 1 | 2 | ************* 3 | Contributing 4 | ************* 5 | 6 | When contributing to this repository, please first discuss the change you wish to make via issue, 7 | email, or any other method with the owners of this repository before making a change. *For typos, please 8 | do not create a pull request. Instead, report them in an issue or email the repository owner*. 9 | 10 | Please note we have a code of conduct; please follow it in all your interactions with the project. 11 | 12 | ==================== 13 | Pull Request Process 14 | ==================== 15 | 16 | Please consider the following criteria to help us review your contribution effectively: 17 | 18 | 1. The pull request is mainly expected to be a code script suggestion or improvement. 19 | 2. A pull request related to non-code-script sections is expected to make a significant difference in the documentation. Otherwise, it is expected to be announced in the issues section. 20 | 3. Ensure any install or build dependencies are removed before doing a 21 | build and creating a pull request. 22 | 4. Add comments with details of changes to the interface; this includes new environment 23 | variables, exposed ports, useful file locations and container parameters. 24 | 5. You may merge the pull request once you have the sign-off of at least one other developer, or if you 25 | do not have permission to do that, you may request the owner to merge it for you if you believe all checks have passed.
26 | 27 | ============ 28 | Final Note 29 | ============ 30 | 31 | We look forward to your feedback. Please help us improve this open-source project and make our work better. 32 | To contribute, please create a pull request and we will review it promptly. Once again, we appreciate 33 | your feedback and thorough code reviews. 34 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Amirsina Torfi 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE.
22 | -------------------------------------------------------------------------------- /_img/0-welcome/graph-run.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/0-welcome/graph-run.png -------------------------------------------------------------------------------- /_img/0-welcome/joinslack.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/0-welcome/joinslack.png -------------------------------------------------------------------------------- /_img/1-basics/basic_math_operations/graph-run.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/1-basics/basic_math_operations/graph-run.png -------------------------------------------------------------------------------- /_img/1-basics/readme.rst: -------------------------------------------------------------------------------- 1 | ============================== 2 | Basics 3 | ============================== 4 | 5 | 6 | -------------------------------------------------------------------------------- /_img/2-basics_in_machine_learning/linear_regression/updating_model.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/2-basics_in_machine_learning/linear_regression/updating_model.gif -------------------------------------------------------------------------------- /_img/3-neural_network/convolutiona_neural_network/accuracy_train.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/convolutiona_neural_network/accuracy_train.png -------------------------------------------------------------------------------- /_img/3-neural_network/convolutiona_neural_network/activation_fc4_train.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/convolutiona_neural_network/activation_fc4_train.png -------------------------------------------------------------------------------- /_img/3-neural_network/convolutiona_neural_network/architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/convolutiona_neural_network/architecture.png -------------------------------------------------------------------------------- /_img/3-neural_network/convolutiona_neural_network/classifier_image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/convolutiona_neural_network/classifier_image.png -------------------------------------------------------------------------------- /_img/3-neural_network/convolutiona_neural_network/convlayer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/convolutiona_neural_network/convlayer.png -------------------------------------------------------------------------------- /_img/3-neural_network/convolutiona_neural_network/graph.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/convolutiona_neural_network/graph.png -------------------------------------------------------------------------------- /_img/3-neural_network/convolutiona_neural_network/histogram_fc4_train.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/convolutiona_neural_network/histogram_fc4_train.png -------------------------------------------------------------------------------- /_img/3-neural_network/convolutiona_neural_network/loss_accuracy_train.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/convolutiona_neural_network/loss_accuracy_train.png -------------------------------------------------------------------------------- /_img/3-neural_network/convolutiona_neural_network/loss_train.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/convolutiona_neural_network/loss_train.png -------------------------------------------------------------------------------- /_img/3-neural_network/convolutiona_neural_network/terminal_training.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/convolutiona_neural_network/terminal_training.png -------------------------------------------------------------------------------- 
/_img/3-neural_network/convolutiona_neural_network/test_accuracy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/convolutiona_neural_network/test_accuracy.png -------------------------------------------------------------------------------- /_img/3-neural_network/multi-layer-perceptron/neural-network.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/3-neural_network/multi-layer-perceptron/neural-network.png -------------------------------------------------------------------------------- /_img/mainpage/Build.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/Build.png -------------------------------------------------------------------------------- /_img/mainpage/CNNs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/CNNs.png -------------------------------------------------------------------------------- /_img/mainpage/TensorFlow_World.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/TensorFlow_World.gif -------------------------------------------------------------------------------- /_img/mainpage/Tensor_GIF.gif: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/Tensor_GIF.gif -------------------------------------------------------------------------------- /_img/mainpage/Tensor_GIF_ff.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/Tensor_GIF_ff.gif -------------------------------------------------------------------------------- /_img/mainpage/YouTube.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/YouTube.png -------------------------------------------------------------------------------- /_img/mainpage/basicmodels.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/basicmodels.gif -------------------------------------------------------------------------------- /_img/mainpage/basicmodels.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/basicmodels.png -------------------------------------------------------------------------------- /_img/mainpage/basics-old.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/basics-old.png -------------------------------------------------------------------------------- /_img/mainpage/basics.gif: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/basics.gif -------------------------------------------------------------------------------- /_img/mainpage/booksubscribe.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/booksubscribe.png -------------------------------------------------------------------------------- /_img/mainpage/donation.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/donation.jpg -------------------------------------------------------------------------------- /_img/mainpage/follow-twitter.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/follow-twitter.gif -------------------------------------------------------------------------------- /_img/mainpage/installation-logo.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/installation-logo.gif -------------------------------------------------------------------------------- /_img/mainpage/installation.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/installation.gif -------------------------------------------------------------------------------- /_img/mainpage/subscribe.gif: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/subscribe.gif -------------------------------------------------------------------------------- /_img/mainpage/subscribe.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/subscribe.png -------------------------------------------------------------------------------- /_img/mainpage/teaser.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/teaser.gif -------------------------------------------------------------------------------- /_img/mainpage/welcome.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/welcome.gif -------------------------------------------------------------------------------- /_img/mainpage/welcome.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/_img/mainpage/welcome.png -------------------------------------------------------------------------------- /codes/ipython/0-welcome/welcome.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "welcome.ipynb", 7 | "provenance": [], 8 | "collapsed_sections": [] 9 | }, 10 | "kernelspec": { 11 | "name": "python3", 12 | "display_name": "Python 3" 13 | }, 14 | "accelerator": "GPU" 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "code", 19 | "metadata": { 20 | "id": "7i1UqqIkNxAt", 21 | 
"outputId": "2f9c0fc3-ba1a-499d-f4ea-0bfe160ae841", 22 | "colab": { 23 | "base_uri": "https://localhost:8080/", 24 | "height": 129 25 | } 26 | }, 27 | "source": [ 28 | "# Import tensorflow\n", 29 | "import tensorflow as tf\n", 30 | "\n", 31 | "# Check version\n", 32 | "print(\"Tensorflow version: \", tf.__version__)\n", 33 | "\n", 34 | "# Test TensorFlow for cuda availibility\n", 35 | "print(\"Tensorflow is built with CUDA: \", tf.test.is_built_with_cuda())\n", 36 | "\n", 37 | "# Check devices\n", 38 | "print(\"All devices: \", tf.config.list_physical_devices(device_type=None))\n", 39 | "print(\"GPU devices: \", tf.config.list_physical_devices(device_type='GPU'))\n", 40 | "\n", 41 | "# Print a randomly generated tensor\n", 42 | "# tf.math.reduce_sum: https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum\n", 43 | "# tf.random.normal: https://www.tensorflow.org/api_docs/python/tf/random/normal\n", 44 | "print(tf.math.reduce_sum(tf.random.normal([1, 10])))\n" 45 | ], 46 | "execution_count": 2, 47 | "outputs": [ 48 | { 49 | "output_type": "stream", 50 | "text": [ 51 | "Tensorflow version: 2.3.0\n", 52 | "Tensorflow is built with CUDA: True\n", 53 | "All devices: [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:XLA_CPU:0', device_type='XLA_CPU'), PhysicalDevice(name='/physical_device:XLA_GPU:0', device_type='XLA_GPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]\n", 54 | "GPU devices: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]\n", 55 | "tf.Tensor(-4.315793, shape=(), dtype=float32)\n" 56 | ], 57 | "name": "stdout" 58 | } 59 | ] 60 | }, 61 | { 62 | "cell_type": "code", 63 | "metadata": { 64 | "id": "rcU8_F3fPUb5" 65 | }, 66 | "source": [ 67 | "" 68 | ], 69 | "execution_count": null, 70 | "outputs": [] 71 | } 72 | ] 73 | } -------------------------------------------------------------------------------- /codes/ipython/1-basics/automatic_differentiation.ipynb: 
-------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "automatic_differentiation.ipynb", 7 | "provenance": [], 8 | "collapsed_sections": [] 9 | }, 10 | "kernelspec": { 11 | "name": "python3", 12 | "display_name": "Python 3" 13 | } 14 | }, 15 | "cells": [ 16 | { 17 | "cell_type": "markdown", 18 | "metadata": { 19 | "id": "MDevcewD85Im" 20 | }, 21 | "source": [ 22 | "## Automatic Differentiation\n", 23 | "\n", 24 | "[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) calculates the derivatives of functions, which is useful for algorithms such as [stochastic gradient descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent).\n", 25 | "\n", 26 | "It is particularly useful when we implement neural networks and want to differentiate the output with respect to an input, where the two are connected by a **chain of functions**:\n", 27 | "\n", 28 | "$L(x)=f(g(h(x)))$\n", 29 | "\n", 30 | "The differentiation is as follows:\n", 31 | "\n", 32 | "$\\frac{dL}{dx} = \\frac{df}{dg}\\frac{dg}{dh}\\frac{dh}{dx}$\n", 33 | "\n", 34 | "The above rule is called the [chain rule](https://en.wikipedia.org/wiki/Chain_rule).\n", 35 | "\n", 36 | "So the intermediate [gradients](https://en.wikipedia.org/wiki/Gradient) need to be calculated to obtain the final derivative.\n", 37 | "\n", 38 | "Let's see how TensorFlow does it!"
39 | ] 40 | }, 41 | { 42 | "cell_type": "code", 43 | "metadata": { 44 | "id": "iJoLp_aUvBFR" 45 | }, 46 | "source": [ 47 | "# Loading necessary libraries\n", 48 | "import tensorflow as tf\n", 49 | "import numpy as np" 50 | ], 51 | "execution_count": 1, 52 | "outputs": [] 53 | }, 54 | { 55 | "cell_type": "markdown", 56 | "metadata": { 57 | "id": "w9fp4we9FWQU" 58 | }, 59 | "source": [ 60 | "### Introduction\n", 61 | "\n", 62 | "Some general points are useful to address here:\n", 63 | "\n", 64 | "* To compute gradients, TensorFlow uses [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape), which records operations for later use in gradient computation.\n", 65 | "\n", 66 | "Let's look at three similar examples:" 67 | ] 68 | }, 69 | { 70 | "cell_type": "code", 71 | "metadata": { 72 | "id": "26FVM1x8F-k0", 73 | "outputId": "556b8873-8aab-4525-eb58-30053330f99b", 74 | "colab": { 75 | "base_uri": "https://localhost:8080/", 76 | "height": 54 77 | } 78 | }, 79 | "source": [ 80 | "x = tf.constant([2.0])\n", 81 | "\n", 82 | "with tf.GradientTape(persistent=False, watch_accessed_variables=True) as grad:\n", 83 | " f = x ** 2\n", 84 | "\n", 85 | "# Print gradient output\n", 86 | "print('The gradient df/dx where f=(x^2):\\n', grad.gradient(f, x))" 87 | ], 88 | "execution_count": 11, 89 | "outputs": [ 90 | { 91 | "output_type": "stream", 92 | "text": [ 93 | "The gradient df/dx where f=(x^2):\n", 94 | " None\n" 95 | ], 96 | "name": "stdout" 97 | } 98 | ] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "metadata": { 103 | "id": "y_UO_8xnG6SY", 104 | "outputId": "fcc2f51b-8bf4-4ee3-c6a6-9180a7a94d90", 105 | "colab": { 106 | "base_uri": "https://localhost:8080/", 107 | "height": 54 108 | } 109 | }, 110 | "source": [ 111 | "x = tf.constant([2.0])\n", 112 | "x = tf.Variable(x)\n", 113 | "\n", 114 | "with tf.GradientTape(persistent=False, watch_accessed_variables=True) as grad:\n", 115 | " f = x ** 2\n", 116 | "\n", 117 | "# Print gradient 
output\n", 118 | "print('The gradient df/dx where f=(x^2):\\n', grad.gradient(f, x))" 119 | ], 120 | "execution_count": 12, 121 | "outputs": [ 122 | { 123 | "output_type": "stream", 124 | "text": [ 125 | "The gradient df/dx where f=(x^2):\n", 126 | " tf.Tensor([4.], shape=(1,), dtype=float32)\n" 127 | ], 128 | "name": "stdout" 129 | } 130 | ] 131 | }, 132 | { 133 | "cell_type": "code", 134 | "metadata": { 135 | "id": "s2SxqhpeHab7", 136 | "outputId": "5c0be66e-171b-400b-8d55-d80fde11c7fd", 137 | "colab": { 138 | "base_uri": "https://localhost:8080/", 139 | "height": 54 140 | } 141 | }, 142 | "source": [ 143 | "x = tf.constant([2.0])\n", 144 | "\n", 145 | "with tf.GradientTape(persistent=False, watch_accessed_variables=True) as grad:\n", 146 | " grad.watch(x)\n", 147 | " f = x ** 2\n", 148 | "\n", 149 | "# Print gradient output\n", 150 | "print('The gradient df/dx where f=(x^2):\\n', grad.gradient(f, x))" 151 | ], 152 | "execution_count": 13, 153 | "outputs": [ 154 | { 155 | "output_type": "stream", 156 | "text": [ 157 | "The gradient df/dx where f=(x^2):\n", 158 | " tf.Tensor([4.], shape=(1,), dtype=float32)\n" 159 | ], 160 | "name": "stdout" 161 | } 162 | ] 163 | }, 164 | { 165 | "cell_type": "markdown", 166 | "metadata": { 167 | "id": "7TptydLkH2hh" 168 | }, 169 | "source": [ 170 | "What's the difference between above examples?\n", 171 | "\n", 172 | "1. Using tf.Variable on top of the tensor to transform it into a [tf.Variable](https://www.tensorflow.org/guide/variable).\n", 173 | "2. Using [.watch()](https://www.tensorflow.org/api_docs/python/tf/GradientTape#watch) operation.\n", 174 | "\n", 175 | "The tf.Variable turn tensor to a variable tensor which is the recommended approach by TensorFlow. The .watch() method ensures the variable is being tracked by the tf.GradientTape(). 
\n", 176 | "\n", 177 | "**You can see if we use neither, we get NONE as the gradient which means gradients were not being tracked!**\n", 178 | "\n", 179 | "NOTE: In general it's always safe to work with variable as well as using .watch() to ensure tracking gradients.\n", 180 | "\n", 181 | "We used default arguments as:\n", 182 | "\n", 183 | "1. **persistent=False**: It says, any variable that is hold with tf.GradientTape(), after one calling of gradient will be released. \n", 184 | "2. **watch_accessed_variables=True**: By default watching variables. So if we have a variable, we do not need to use .watch() with this default setting.\n", 185 | "\n", 186 | "Let's have an example with **persistent=True**:\n" 187 | ] 188 | }, 189 | { 190 | "cell_type": "code", 191 | "metadata": { 192 | "id": "V-ugyJGRHc7S", 193 | "outputId": "a4813975-128b-4ad2-ad35-5c7270060ed1", 194 | "colab": { 195 | "base_uri": "https://localhost:8080/", 196 | "height": 90 197 | } 198 | }, 199 | "source": [ 200 | "x = tf.constant([2.0])\n", 201 | "x = tf.Variable(x)\n", 202 | "\n", 203 | "# For practice, turn persistent to False to see what happens.\n", 204 | "with tf.GradientTape(persistent=True, watch_accessed_variables=True) as grad:\n", 205 | " f = x ** 2\n", 206 | " h = x ** 3\n", 207 | "\n", 208 | "# Print gradient output\n", 209 | "print('The gradient df/dx where f=(x^2):\\n', grad.gradient(f, x))\n", 210 | "print('The gradient dh/dx where h=(x^3):\\n', grad.gradient(h, x))" 211 | ], 212 | "execution_count": 16, 213 | "outputs": [ 214 | { 215 | "output_type": "stream", 216 | "text": [ 217 | "The gradient df/dx where f=(x^2):\n", 218 | " tf.Tensor([4.], shape=(1,), dtype=float32)\n", 219 | "The gradient dh/dx where h=(x^3):\n", 220 | " tf.Tensor([12.], shape=(1,), dtype=float32)\n" 221 | ], 222 | "name": "stdout" 223 | } 224 | ] 225 | } 226 | ] 227 | } -------------------------------------------------------------------------------- /codes/ipython/1-basics/graph.ipynb: 
-------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "graph.ipynb", 7 | "provenance": [], 8 | "collapsed_sections": [] 9 | }, 10 | "kernelspec": { 11 | "name": "python3", 12 | "display_name": "Python 3" 13 | } 14 | }, 15 | "cells": [ 16 | { 17 | "cell_type": "markdown", 18 | "metadata": { 19 | "id": "MDevcewD85Im" 20 | }, 21 | "source": [ 22 | "## Introduction to TensorFlow graphs.\n", 23 | "\n", 24 | "For a long time, the big complaint about TensorFlow was *It's not flexible for debugging!* With the advent of TensorFlow 2.0, that changed drastically.\n", 25 | "\n", 26 | "Now TensorFlow allows you to run operations **eagerly**. That means you can run TensorFlow operations from Python and get the outputs back in Python right away. That creates a lot of flexibility, especially for debugging. \n", 27 | "\n", 28 | "But there are some merits in NOT using eager execution. You can run operations as TensorFlow graphs, which in some scenarios leads to significant speed-ups. According to TensorFlow:\n", 29 | "\n", 30 | "> Graphs are data structures that contain a set of [tf.Operation](https://www.tensorflow.org/api_docs/python/tf/Operation) objects, which represent units of computation; and [tf.Tensor](https://www.tensorflow.org/api_docs/python/tf/Tensor) objects, which represent the units of data that flow between operations. They are defined in a [tf.Graph](https://www.tensorflow.org/api_docs/python/tf/Graph) context. 
Since these graphs are data structures, they can be saved, run, and restored all without the original Python code.\n", 31 | "\n", 32 | "Let's look at some examples of transforming functions into graphs!\n" 33 | ] 34 | }, 35 | { 36 | "cell_type": "code", 37 | "metadata": { 38 | "id": "iJoLp_aUvBFR" 39 | }, 40 | "source": [ 41 | "# Loading necessary libraries\n", 42 | "import tensorflow as tf\n", 43 | "import numpy as np\n", 44 | "import timeit" 45 | ], 46 | "execution_count": 1, 47 | "outputs": [] 48 | }, 49 | { 50 | "cell_type": "markdown", 51 | "metadata": { 52 | "id": "8a9c8qUj-MSj" 53 | }, 54 | "source": [ 55 | "### Operation\n", 56 | "\n", 57 | "We can put a Python function on a graph with the [@tf.function](https://www.tensorflow.org/api_docs/python/tf/function) decorator." 58 | ] 59 | }, 60 | { 61 | "cell_type": "code", 62 | "metadata": { 63 | "id": "V-ugyJGRHc7S", 64 | "outputId": "0bc0f944-4f6b-4fc2-95e2-8f517141cac6", 65 | "colab": { 66 | "base_uri": "https://localhost:8080/", 67 | "height": 54 68 | } 69 | }, 70 | "source": [ 71 | "@tf.function\n", 72 | "def multiply_fn(a, b):\n", 73 | " return tf.matmul(a, b)\n", 74 | "\n", 75 | "# Create some tensors\n", 76 | "a = tf.constant([[0.5, 0.5]])\n", 77 | "b = tf.constant([[10.0], [1.0]])\n", 78 | "\n", 79 | "# Check function\n", 80 | "print('Multiply a of shape {} with b of shape {}'.format(a.shape, b.shape))\n", 81 | "print(multiply_fn(a, b).numpy())\n" 82 | ], 83 | "execution_count": 2, 84 | "outputs": [ 85 | { 86 | "output_type": "stream", 87 | "text": [ 88 | "Multiply a of shape (1, 2) with b of shape (2, 1)\n", 89 | "[[5.5]]\n" 90 | ], 91 | "name": "stdout" 92 | } 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "metadata": { 98 | "id": "zoh1CmJ59n-N", 99 | "outputId": "addb1205-94a9-4f7b-d5b1-882ebed24a8a", 100 | "colab": { 101 | "base_uri": "https://localhost:8080/", 102 | "height": 72 103 | } 104 | }, 105 | "source": [ 106 | "# Function without being taken to a graph, i.e., with eager execution.\n", 107 | "def
add_fn(a, b):\n", 108 | " return tf.add(a, b)\n", 109 | "\n", 110 | "# Create some tensors\n", 111 | "a = tf.constant([[0.5, 0.5]])\n", 112 | "b = tf.constant([[10.0], [1.0]])\n", 113 | "\n", 114 | "# Check function\n", 115 | "print('Add a of shape {} with b of shape {}'.format(a.shape, b.shape))\n", 116 | "print(add_fn(a, b).numpy())" 117 | ], 118 | "execution_count": 3, 119 | "outputs": [ 120 | { 121 | "output_type": "stream", 122 | "text": [ 123 | "Add a of shape (1, 2) with b of shape (2, 1)\n", 124 | "[[10.5 10.5]\n", 125 | " [ 1.5 1.5]]\n" 126 | ], 127 | "name": "stdout" 128 | } 129 | ] 130 | }, 131 | { 132 | "cell_type": "markdown", 133 | "metadata": { 134 | "id": "j-WywaJU_MXR" 135 | }, 136 | "source": [ 137 | "### Speedup\n", 138 | "\n", 139 | "Now let's define a custom model and run it:\n", 140 | "\n", 141 | "1. eagerly\n", 142 | "2. on graph\n", 143 | "\n", 144 | "To check how to define models refer to: https://www.tensorflow.org/api_docs/python/tf/keras/Model" 145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "metadata": { 150 | "id": "ynGAYcDC-y2W", 151 | "outputId": "98d5fb24-d34b-4c53-85a9-b7456cfca4da", 152 | "colab": { 153 | "base_uri": "https://localhost:8080/", 154 | "height": 90 155 | } 156 | }, 157 | "source": [ 158 | "class ModelShallow(tf.keras.Model):\n", 159 | "\n", 160 | " def __init__(self):\n", 161 | " super(ModelShallow, self).__init__()\n", 162 | " self.dense1 = tf.keras.layers.Dense(10, activation=tf.nn.relu)\n", 163 | " self.dense2 = tf.keras.layers.Dense(20, activation=tf.nn.relu)\n", 164 | " self.dense3 = tf.keras.layers.Dense(30, activation=tf.nn.softmax)\n", 165 | " self.dropout = tf.keras.layers.Dropout(0.5)\n", 166 | "\n", 167 | " def call(self, inputs, training=False):\n", 168 | " x = self.dense1(inputs)\n", 169 | " if training:\n", 170 | " x = self.dropout(x, training=training)\n", 171 | " x = self.dense2(x)\n", 172 | " out = self.dense3(x)\n", 173 | " return out\n", 174 | "\n", 175 | "class 
ModelDeep(tf.keras.Model):\n", 176 | "\n", 177 | " def __init__(self):\n", 178 | " super(ModelDeep, self).__init__()\n", 179 | " self.dense1 = tf.keras.layers.Dense(1000, activation=tf.nn.relu)\n", 180 | " self.dense2 = tf.keras.layers.Dense(2000, activation=tf.nn.relu)\n", 181 | " self.dense3 = tf.keras.layers.Dense(3000, activation=tf.nn.softmax)\n", 182 | " self.dropout = tf.keras.layers.Dropout(0.5)\n", 183 | "\n", 184 | " def call(self, inputs, training=False):\n", 185 | " x = self.dense1(inputs)\n", 186 | " if training:\n", 187 | " x = self.dropout(x, training=training)\n", 188 | " x = self.dense2(x)\n", 189 | " out = self.dense3(x)\n", 190 | " return out\n", 191 | "\n", 192 | "# Create the model with eager execution by default\n", 193 | "model_shallow_with_eager = ModelShallow()\n", 194 | "\n", 195 | "# Take the model to a graph. \n", 196 | "# NOTE: Instead of using the decorator, we can directly apply tf.function to the model.\n", 197 | "model_shallow_on_graph = tf.function(ModelShallow())\n", 198 | "\n", 199 | "# Model deep\n", 200 | "model_deep_with_eager = ModelDeep()\n", 201 | "model_deep_on_graph = tf.function(ModelDeep())\n", 202 | "\n", 203 | "# sample input\n", 204 | "sample_input = tf.random.uniform([60, 28, 28])\n", 205 | "\n", 206 | "# Check time for shallow model\n", 207 | "print(\"Shallow Model - Eager execution time:\", timeit.timeit(lambda: model_shallow_with_eager(sample_input), number=1000))\n", 208 | "print(\"Shallow Model - Graph-based execution time:\", timeit.timeit(lambda: model_shallow_on_graph(sample_input), number=1000))\n", 209 | "\n", 210 | "# Check time for deep model\n", 211 | "print(\"Deep Model - Eager execution time:\", timeit.timeit(lambda: model_deep_with_eager(sample_input), number=1000))\n", 212 | "print(\"Deep Model - Graph-based execution time:\", timeit.timeit(lambda: model_deep_on_graph(sample_input), number=1000))" 213 | ], 214 | "execution_count": 4, 215 | "outputs": [ 216 | { 217 | "output_type": "stream", 218 | "text": 
[ 219 | "Shallow Model - Eager execution time: 2.758659444999921\n", 220 | "Shallow Model - Graph-based execution time: 1.1618621510001503\n", 221 | "Deep Model - Eager execution time: 477.634194022\n", 222 | "Deep Model - Graph-based execution time: 460.01053104599987\n" 223 | ], 224 | "name": "stdout" 225 | } 226 | ] 227 | }, 228 | { 229 | "cell_type": "code", 230 | "metadata": { 231 | "id": "oBvht7tVAX4I" 232 | }, 233 | "source": [ 234 | "" 235 | ], 236 | "execution_count": 4, 237 | "outputs": [] 238 | } 239 | ] 240 | } -------------------------------------------------------------------------------- /codes/ipython/1-basics/models.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "models.ipynb", 7 | "provenance": [], 8 | "collapsed_sections": [] 9 | }, 10 | "kernelspec": { 11 | "name": "python3", 12 | "display_name": "Python 3" 13 | } 14 | }, 15 | "cells": [ 16 | { 17 | "cell_type": "markdown", 18 | "metadata": { 19 | "id": "MDevcewD85Im" 20 | }, 21 | "source": [ 22 | "## Models in TensorFlow\n", 23 | "\n", 24 | "In TensorFlow, you need to define a model in order to train one. A model consists of layers, which perform operations and can be reused in the model's structure. Let's get started.\n" 25 | ] 26 | }, 27 | { 28 | "cell_type": "code", 29 | "metadata": { 30 | "id": "iJoLp_aUvBFR" 31 | }, 32 | "source": [ 33 | "# Loading necessary libraries\n", 34 | "import tensorflow as tf\n", 35 | "import numpy as np" 36 | ], 37 | "execution_count": 1, 38 | "outputs": [] 39 | }, 40 | { 41 | "cell_type": "markdown", 42 | "metadata": { 43 | "id": "t2tybyJb7Vdf" 44 | }, 45 | "source": [ 46 | "### Layer\n", 47 | "\n", 48 | "In TensorFlow, we can implement layers using the high-level [tf.Module](https://www.tensorflow.org/api_docs/python/tf/Module) class." 
49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "metadata": { 54 | "id": "oBvht7tVAX4I", 55 | "outputId": "820d740e-87b7-499e-cea0-f949c4e3bb4f", 56 | "colab": { 57 | "base_uri": "https://localhost:8080/", 58 | "height": 72 59 | } 60 | }, 61 | "source": [ 62 | "class SampleLayer(tf.Module):\n", 63 | " \"\"\"\n", 64 | " We define the layer as a class that inherits from the tf.Module class.\n", 65 | " \"\"\"\n", 66 | " def __init__(self, name=None):\n", 67 | " super().__init__(name=name)\n", 68 | "\n", 69 | " # Define a trainable variable\n", 70 | " self.x = tf.Variable([[1.0, 3.0]], name=\"x_trainable\")\n", 71 | "\n", 72 | " # Define a non-trainable variable\n", 73 | " self.y = tf.Variable(2.0, trainable=False, name=\"y_non_trainable\")\n", 74 | " def __call__(self, input):\n", 75 | " return self.x * input + self.y\n", 76 | "\n", 77 | "# Initialize the layer\n", 78 | "# Here, the __call__ function will not be called\n", 79 | "simple_layer = SampleLayer(name=\"my_layer\")\n", 80 | "\n", 81 | "# Call the layer and extract some information\n", 82 | "output = simple_layer(tf.constant(1.0))\n", 83 | "print(\"Output:\", output)\n", 84 | "print(\"Layer name:\", simple_layer.name)\n", 85 | "print(\"Trainable variables:\", simple_layer.trainable_variables)" 86 | ], 87 | "execution_count": 2, 88 | "outputs": [ 89 | { 90 | "output_type": "stream", 91 | "text": [ 92 | "Output: tf.Tensor([[3. 5.]], shape=(1, 2), dtype=float32)\n", 93 | "Layer name: my_layer\n", 94 | "Trainable variables: (<tf.Variable 'x_trainable:0' shape=(1, 2) dtype=float32, numpy=array([[1., 3.]], dtype=float32)>,)\n" 95 | ], 96 | "name": "stdout" 97 | } 98 | ] 99 | }, 100 | { 101 | "cell_type": "markdown", 102 | "metadata": { 103 | "id": "DFVkXDUK9tlj" 104 | }, 105 | "source": [ 106 | "### Model\n", 107 | "\n", 108 | "Now, let's define a model. A model consists of multiple layers." 
109 | ] 110 | }, 111 | { 112 | "cell_type": "code", 113 | "metadata": { 114 | "id": "cp01Jsqg84ps", 115 | "outputId": "4107ee01-08c0-4617-fad7-8d1183bfef26", 116 | "colab": { 117 | "base_uri": "https://localhost:8080/", 118 | "height": 92 119 | } 120 | }, 121 | "source": [ 122 | "class Model(tf.Module):\n", 123 | " def __init__(self, name=None):\n", 124 | " super().__init__(name=name)\n", 125 | "\n", 126 | " self.layer_1 = SampleLayer('layer_1')\n", 127 | " self.layer_2 = SampleLayer('layer_2')\n", 128 | "\n", 129 | " def __call__(self, x):\n", 130 | " x = self.layer_1(x)\n", 131 | " output = self.layer_2(x)\n", 132 | " return output\n", 133 | "\n", 134 | "# Initialize the model\n", 135 | "custom_model = Model(name=\"model_name\")\n", 136 | "\n", 137 | "# Call the model\n", 138 | "# Call the layer and extract some information\n", 139 | "output = custom_model(tf.constant(1.0))\n", 140 | "print(\"Output:\", output)\n", 141 | "print(\"Model name:\", custom_model.name)\n", 142 | "print(\"Trainable variables:\", custom_model.trainable_variables)\n" 143 | ], 144 | "execution_count": 3, 145 | "outputs": [ 146 | { 147 | "output_type": "stream", 148 | "text": [ 149 | "Output: tf.Tensor([[ 5. 17.]], shape=(1, 2), dtype=float32)\n", 150 | "Model name: model_name\n", 151 | "Trainable variables: (<tf.Variable 'x_trainable:0' shape=(1, 2) dtype=float32, numpy=array([[1., 3.]], dtype=float32)>, <tf.Variable 'x_trainable:0' shape=(1, 2) dtype=float32, numpy=array([[1., 3.]], dtype=float32)>)\n" 152 | ], 153 | "name": "stdout" 154 | } 155 | ] 156 | }, 157 | { 158 | "cell_type": "markdown", 159 | "metadata": { 160 | "id": "jdpeI9wg-65r" 161 | }, 162 | "source": [ 163 | "### Keras Models\n", 164 | "\n", 165 | "Keras is a high-level API that is now part of TensorFlow. You can use [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) to define a model. You can also use the collection of [tf.keras.layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) for your convenience. 
It is straightforward, as below, to define a model that has two fully-connected layers:" 166 | ] 167 | }, 168 | { 169 | "cell_type": "code", 170 | "metadata": { 171 | "id": "_D3TjQXm-p3y", 172 | "outputId": "59f662e1-fc73-4d39-c152-7c40d746b2f5", 173 | "colab": { 174 | "base_uri": "https://localhost:8080/", 175 | "height": 72 176 | } 177 | }, 178 | "source": [ 179 | "class CustomModel(tf.keras.Model):\n", 180 | "\n", 181 | " def __init__(self):\n", 182 | " super(CustomModel, self).__init__()\n", 183 | " self.layer_1 = tf.keras.layers.Dense(16, activation=tf.nn.relu)\n", 184 | " self.layer_2 = tf.keras.layers.Dense(32, activation=None)\n", 185 | "\n", 186 | " def call(self, inputs):\n", 187 | " x = self.layer_1(inputs)\n", 188 | " out = self.layer_2(x)\n", 189 | " return out\n", 190 | "\n", 191 | "# Create model\n", 192 | "custom_model = CustomModel()\n", 193 | "\n", 194 | "# Call the model\n", 195 | "# Call the layer and extract some information\n", 196 | "output = custom_model(tf.constant([[1.0, 2.0, 3.0]]))\n", 197 | "print(\"Output shape:\", output.shape)\n", 198 | "print(\"Model name:\", custom_model.name)\n", 199 | "\n", 200 | "# Count total trainable variables\n", 201 | "total_trainable_var = np.sum([tf.size(var).numpy() for var in custom_model.trainable_variables])\n", 202 | "print(\"Number of trainable variables:\", total_trainable_var)" 203 | ], 204 | "execution_count": 4, 205 | "outputs": [ 206 | { 207 | "output_type": "stream", 208 | "text": [ 209 | "Output shape: (1, 32)\n", 210 | "Model name: custom_model\n", 211 | "Number of trainable variables: 608\n" 212 | ], 213 | "name": "stdout" 214 | } 215 | ] 216 | } 217 | ] 218 | } -------------------------------------------------------------------------------- /codes/ipython/1-basics/tensors.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "tensors.ipynb", 7 | "provenance": 
[], 8 | "collapsed_sections": [] 9 | }, 10 | "kernelspec": { 11 | "name": "python3", 12 | "display_name": "Python 3" 13 | }, 14 | "accelerator": "GPU" 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "code", 19 | "metadata": { 20 | "id": "7i1UqqIkNxAt" 21 | }, 22 | "source": [ 23 | "# Import necessary libraries\n", 24 | "import tensorflow as tf\n", 25 | "import numpy as np" 26 | ], 27 | "execution_count": 1, 28 | "outputs": [] 29 | }, 30 | { 31 | "cell_type": "markdown", 32 | "metadata": { 33 | "id": "I4YPHO9ba3Bc" 34 | }, 35 | "source": [ 36 | "## Tensors\n", 37 | "\n", 38 | "Tensors are multi-dimensional arrays used in TensorFlow.\n", 39 | "\n", 40 | "We use the following definition:\n", 41 | "\n", 42 | "* **Rank:** The number of dimensions that a tensor has.\n", 43 | "\n", 44 | "Below, we will define different kinds of tensors and show their ranks using the [tf.rank](https://www.tensorflow.org/api_docs/python/tf/rank) function." 45 | ] 46 | }, 47 | { 48 | "cell_type": "code", 49 | "metadata": { 50 | "id": "rcU8_F3fPUb5", 51 | "outputId": "9f9a8970-4e6b-4550-f90f-64acf7d372b4", 52 | "colab": { 53 | "base_uri": "https://localhost:8080/" 54 | } 55 | }, 56 | "source": [ 57 | "tensor = tf.constant(0)\n", 58 | "print(\"Print constant tensor {} of rank {}\".format(tensor, tf.rank(tensor)))\n", 59 | "print(\"Show full tensor:\", tensor)" 60 | ], 61 | "execution_count": 2, 62 | "outputs": [ 63 | { 64 | "output_type": "stream", 65 | "text": [ 66 | "Print constant tensor 0 of rank 0\n", 67 | "Show full tensor: tf.Tensor(0, shape=(), dtype=int32)\n" 68 | ], 69 | "name": "stdout" 70 | } 71 | ] 72 | }, 73 | { 74 | "cell_type": "code", 75 | "metadata": { 76 | "id": "ahIBf6_4cRnm", 77 | "outputId": "b716e303-7c30-4bc5-84d0-f9e0142b710d", 78 | "colab": { 79 | "base_uri": "https://localhost:8080/" 80 | } 81 | }, 82 | "source": [ 83 | "# NOTE: We use .numpy() to transform a tf.Tensor to a NumPy array\n", 84 | "tensor = tf.constant([1,2,3])\n", 85 | "print(\"Tensor:\", tensor)\n", 86 | 
"print(\"Rank:\", tf.rank(tensor).numpy())" 87 | ], 88 | "execution_count": 3, 89 | "outputs": [ 90 | { 91 | "output_type": "stream", 92 | "text": [ 93 | "Tensor: tf.Tensor([1 2 3], shape=(3,), dtype=int32)\n", 94 | "Rank: 1\n" 95 | ], 96 | "name": "stdout" 97 | } 98 | ] 99 | }, 100 | { 101 | "cell_type": "markdown", 102 | "metadata": { 103 | "id": "ss3aDmDTd-LS" 104 | }, 105 | "source": [ 106 | "### Tensor Operations" 107 | ] 108 | }, 109 | { 110 | "cell_type": "code", 111 | "metadata": { 112 | "id": "TKX2U0Imcm7d", 113 | "outputId": "7ea93f1e-a98b-418d-8f9c-5f117e1405b2", 114 | "colab": { 115 | "base_uri": "https://localhost:8080/" 116 | } 117 | }, 118 | "source": [ 119 | "x = tf.constant([[1, 1],\n", 120 | " [1, 1]])\n", 121 | "y = tf.constant([[2, 4],\n", 122 | " [6, 8]])\n", 123 | "\n", 124 | "# Add two tensors\n", 125 | "print(tf.add(x, y), \"\\n\")\n", 126 | "\n", 127 | "# Add two tensors\n", 128 | "print(tf.matmul(x, y), \"\\n\")\n" 129 | ], 130 | "execution_count": 4, 131 | "outputs": [ 132 | { 133 | "output_type": "stream", 134 | "text": [ 135 | "tf.Tensor(\n", 136 | "[[3 5]\n", 137 | " [7 9]], shape=(2, 2), dtype=int32) \n", 138 | "\n", 139 | "tf.Tensor(\n", 140 | "[[ 8 12]\n", 141 | " [ 8 12]], shape=(2, 2), dtype=int32) \n", 142 | "\n" 143 | ], 144 | "name": "stdout" 145 | } 146 | ] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": { 151 | "id": "BlEgQ2t2edKl" 152 | }, 153 | "source": [ 154 | "### Muti-dimentional Tensors\n", 155 | "\n", 156 | "This part is not much different compared to what we learned so far. However, it would be nice to try extracting as much information as we can from a multi-dimentional tensor.\n", 157 | "\n", 158 | "\n", 159 | "Let's use [tf.ones](https://www.tensorflow.org/api_docs/python/tf/ones) for our purpose here. It creates an all-one tensor." 
160 | ] 161 | }, 162 | { 163 | "cell_type": "code", 164 | "metadata": { 165 | "id": "Gdtt0e4-fDkl", 166 | "outputId": "c59185ab-84f1-4e02-d327-255df4cb2b1f", 167 | "colab": { 168 | "base_uri": "https://localhost:8080/" 169 | } 170 | }, 171 | "source": [ 172 | "# We set the shape of the tensor and the desired data type.\n", 173 | "tensor = tf.ones(shape = [2, 3, 6], dtype = tf.float32)\n", 174 | "print('Tensor:', tensor)" 175 | ], 176 | "execution_count": 5, 177 | "outputs": [ 178 | { 179 | "output_type": "stream", 180 | "text": [ 181 | "Tensor: tf.Tensor(\n", 182 | "[[[1. 1. 1. 1. 1. 1.]\n", 183 | " [1. 1. 1. 1. 1. 1.]\n", 184 | " [1. 1. 1. 1. 1. 1.]]\n", 185 | "\n", 186 | " [[1. 1. 1. 1. 1. 1.]\n", 187 | " [1. 1. 1. 1. 1. 1.]\n", 188 | " [1. 1. 1. 1. 1. 1.]]], shape=(2, 3, 6), dtype=float32)\n" 189 | ], 190 | "name": "stdout" 191 | } 192 | ] 193 | }, 194 | { 195 | "cell_type": "code", 196 | "metadata": { 197 | "id": "c5PChFhlfXmx", 198 | "outputId": "15da08f5-98df-4b54-a580-f90881976b38", 199 | "colab": { 200 | "base_uri": "https://localhost:8080/" 201 | } 202 | }, 203 | "source": [ 204 | "print(\"Tensor Rank: \", tf.rank(tensor).numpy())\n", 205 | "print(\"Shape: \", tensor.shape)\n", 206 | "print(\"Elements' type\", tensor.dtype)\n", 207 | "print(\"The size of the second axis:\", tensor.shape[1])\n", 208 | "print(\"The size of the last axis:\", tensor.shape[-1])\n", 209 | "print(\"Total number of elements: \", tf.size(tensor).numpy())\n", 210 | "print(\"How many dimensions? \", tensor.ndim)" 211 | ], 212 | "execution_count": 6, 213 | "outputs": [ 214 | { 215 | "output_type": "stream", 216 | "text": [ 217 | "Tensor Rank: 3\n", 218 | "Shape: (2, 3, 6)\n", 219 | "Elements' type <dtype: 'float32'>\n", 220 | "The size of the second axis: 3\n", 221 | "The size of the last axis: 6\n", 222 | "Total number of elements: 36\n", 223 | "How many dimensions? 
3\n" 224 | ], 225 | "name": "stdout" 226 | } 227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "metadata": { 232 | "id": "cnYwTBqPhW1I" 233 | }, 234 | "source": [ 235 | "### Indexing\n", 236 | "\n", 237 | "TensorFlow indexing is aligned with Python indexing. See the following examples." 238 | ] 239 | }, 240 | { 241 | "cell_type": "code", 242 | "metadata": { 243 | "id": "34-Tfcsnf6uG" 244 | }, 245 | "source": [ 246 | "x = tf.constant([[1, 2, 3],\n", 247 | " [4, 5, 6],\n", 248 | " [7, 8, 9]])" 249 | ], 250 | "execution_count": 7, 251 | "outputs": [] 252 | }, 253 | { 254 | "cell_type": "code", 255 | "metadata": { 256 | "id": "tNZhisXDhoLp", 257 | "outputId": "5a955103-8ca5-496c-bee1-8828d437491a", 258 | "colab": { 259 | "base_uri": "https://localhost:8080/" 260 | } 261 | }, 262 | "source": [ 263 | "# All elements\n", 264 | "print(x[:].numpy())" 265 | ], 266 | "execution_count": 8, 267 | "outputs": [ 268 | { 269 | "output_type": "stream", 270 | "text": [ 271 | "[[1 2 3]\n", 272 | " [4 5 6]\n", 273 | " [7 8 9]]\n" 274 | ], 275 | "name": "stdout" 276 | } 277 | ] 278 | }, 279 | { 280 | "cell_type": "code", 281 | "metadata": { 282 | "id": "KUghwlZ7hr10", 283 | "outputId": "bc6bac99-c1f7-4f16-bb53-7cea60feb1b1", 284 | "colab": { 285 | "base_uri": "https://localhost:8080/" 286 | } 287 | }, 288 | "source": [ 289 | "# All elements of the first row\n", 290 | "print(x[0,:].numpy())" 291 | ], 292 | "execution_count": 9, 293 | "outputs": [ 294 | { 295 | "output_type": "stream", 296 | "text": [ 297 | "[1 2 3]\n" 298 | ], 299 | "name": "stdout" 300 | } 301 | ] 302 | }, 303 | { 304 | "cell_type": "code", 305 | "metadata": { 306 | "id": "NSCMESaPhwnV", 307 | "outputId": "71ce4701-7169-4538-f535-a5f22e5b9a1d", 308 | "colab": { 309 | "base_uri": "https://localhost:8080/" 310 | } 311 | }, 312 | "source": [ 313 | "# First row and last column\n", 314 | "print(x[0,-1].numpy())" 315 | ], 316 | "execution_count": 10, 317 | "outputs": [ 318 | { 319 | "output_type": "stream", 320 | 
"text": [ 321 | "3\n" 322 | ], 323 | "name": "stdout" 324 | } 325 | ] 326 | }, 327 | { 328 | "cell_type": "code", 329 | "metadata": { 330 | "id": "hH8Fhi2Sh2rt", 331 | "outputId": "d2d85c95-df34-4cbf-cc74-f0f9214319ac", 332 | "colab": { 333 | "base_uri": "https://localhost:8080/" 334 | } 335 | }, 336 | "source": [ 337 | "# From second row to last and from third column to last\n", 338 | "print(x[1:,2:].numpy)" 339 | ], 340 | "execution_count": 11, 341 | "outputs": [ 342 | { 343 | "output_type": "stream", 344 | "text": [ 345 | ">\n" 348 | ], 349 | "name": "stdout" 350 | } 351 | ] 352 | }, 353 | { 354 | "cell_type": "markdown", 355 | "metadata": { 356 | "id": "Y_zEE3-7iUmu" 357 | }, 358 | "source": [ 359 | "### Data types\n", 360 | "\n", 361 | "You can change the data type of the tesnorflow tensors for your purpose. This will be done easily by [tf.cast](https://www.tensorflow.org/api_docs/python/tf/cast)." 362 | ] 363 | }, 364 | { 365 | "cell_type": "code", 366 | "metadata": { 367 | "id": "mFsqRDxAiK95", 368 | "outputId": "5f3aa9b1-b5c1-4fad-cd18-96e376b742d8", 369 | "colab": { 370 | "base_uri": "https://localhost:8080/" 371 | } 372 | }, 373 | "source": [ 374 | "original_tensor = tf.constant([1, 2, 3, 4], dtype=tf.int32)\n", 375 | "print('Original tensor: ', original_tensor)\n", 376 | "print(\"Tensor type before casting: \", original_tensor.dtype)\n", 377 | "\n", 378 | "# Casting to change dtype\n", 379 | "casted_tensor = tf.cast(original_tensor, dtype=tf.float32)\n", 380 | "print('New tensor: ', casted_tensor)\n", 381 | "print(\"Tensor type after casting: \", casted_tensor.dtype)" 382 | ], 383 | "execution_count": 12, 384 | "outputs": [ 385 | { 386 | "output_type": "stream", 387 | "text": [ 388 | "Original tensor: tf.Tensor([1 2 3 4], shape=(4,), dtype=int32)\n", 389 | "Tensor type before casting: \n", 390 | "New tensor: tf.Tensor([1. 2. 3. 
4.], shape=(4,), dtype=float32)\n", 391 | "Tensor type after casting: <dtype: 'float32'>\n" 392 | ], 393 | "name": "stdout" 394 | } 395 | ] 396 | }, 397 | { 398 | "cell_type": "code", 399 | "metadata": { 400 | "id": "81XDYbnxi-nx" 401 | }, 402 | "source": [ 403 | "" 404 | ], 405 | "execution_count": 12, 406 | "outputs": [] 407 | } 408 | ] 409 | } -------------------------------------------------------------------------------- /codes/python/0-welcome/welcome.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """welcome.ipynb 3 | 4 | Automatically generated by Colaboratory. 5 | 6 | Original file is located at 7 | https://colab.research.google.com/github/instillai/TensorFlow-Course/blob/master/codes/ipython/0-welcome/welcome.ipynb 8 | """ 9 | 10 | # Import tensorflow 11 | import tensorflow as tf 12 | 13 | # Check version 14 | print("Tensorflow version: ", tf.__version__) 15 | 16 | # Test TensorFlow for CUDA availability 17 | print("Tensorflow is built with CUDA: ", tf.test.is_built_with_cuda()) 18 | 19 | # Check devices 20 | print("All devices: ", tf.config.list_physical_devices(device_type=None)) 21 | print("GPU devices: ", tf.config.list_physical_devices(device_type='GPU')) 22 | 23 | # Print a randomly generated tensor 24 | # tf.math.reduce_sum: https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum 25 | # tf.random.normal: https://www.tensorflow.org/api_docs/python/tf/random/normal 26 | print(tf.math.reduce_sum(tf.random.normal([1, 10]))) 27 | 28 | -------------------------------------------------------------------------------- /codes/python/1-basics/automatic_differentiation.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """automatic_differentiation.ipynb 3 | 4 | Automatically generated by Colaboratory. 
5 | 6 | Original file is located at 7 | https://colab.research.google.com/drive/1ibfKtpxC_hIhZlPbefCoqpAS7jTdyiFw 8 | 9 | ## Automatic Differentiation 10 | 11 | [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) calculates the derivatives of functions, which is useful for algorithms such as [stochastic gradient descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent). 12 | 13 | It is particularly useful when we implement neural networks and want to differentiate an output with respect to an input when the two are connected by a **chain of functions**: 14 | 15 | $L(x)=f(g(h(x)))$ 16 | 17 | The differentiation is as below: 18 | 19 | $\frac{dL}{dx} = \frac{df}{dg}\frac{dg}{dh}\frac{dh}{dx}$ 20 | 21 | The above rule is called the [chain rule](https://en.wikipedia.org/wiki/Chain_rule). 22 | 23 | So the intermediate [gradients](https://en.wikipedia.org/wiki/Gradient) need to be calculated to obtain the final derivative. 24 | 25 | Let's see how TensorFlow does it! 26 | """ 27 | 28 | # Loading necessary libraries 29 | import tensorflow as tf 30 | import numpy as np 31 | 32 | """### Introduction 33 | 34 | Some general information is useful to address here: 35 | 36 | * To compute gradients, TensorFlow uses [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape), which records operations so they can be used later for gradient computation. 
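As a minimal sketch (not part of the original notebook; the function choices below are illustrative), the chain rule above can be checked directly with tf.GradientTape:

```python
import tensorflow as tf

# L(x) = f(g(h(x))) with h(x) = 3x, g(h) = h^2, f(g) = g + 1.
# Chain rule: dL/dx = df/dg * dg/dh * dh/dx = 1 * 2h * 3 = 18x.
x = tf.Variable(2.0)

with tf.GradientTape() as tape:
    h = 3.0 * x
    g = h ** 2
    L = g + 1.0

# At x = 2, the chain rule gives dL/dx = 18 * 2 = 36.
print(tape.gradient(L, x))
```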
37 | 38 | Let's look at three similar examples: 39 | """ 40 | 41 | x = tf.constant([2.0]) 42 | 43 | with tf.GradientTape(persistent=False, watch_accessed_variables=True) as grad: 44 | f = x ** 2 45 | 46 | # Print gradient output 47 | print('The gradient df/dx where f=(x^2):\n', grad.gradient(f, x)) 48 | 49 | x = tf.constant([2.0]) 50 | x = tf.Variable(x) 51 | 52 | with tf.GradientTape(persistent=False, watch_accessed_variables=True) as grad: 53 | f = x ** 2 54 | 55 | # Print gradient output 56 | print('The gradient df/dx where f=(x^2):\n', grad.gradient(f, x)) 57 | 58 | x = tf.constant([2.0]) 59 | 60 | with tf.GradientTape(persistent=False, watch_accessed_variables=True) as grad: 61 | grad.watch(x) 62 | f = x ** 2 63 | 64 | # Print gradient output 65 | print('The gradient df/dx where f=(x^2):\n', grad.gradient(f, x)) 66 | 67 | """What's the difference between the above examples? 68 | 69 | 1. Using tf.Variable on top of the tensor to transform it into a [tf.Variable](https://www.tensorflow.org/guide/variable). 70 | 2. Using the [.watch()](https://www.tensorflow.org/api_docs/python/tf/GradientTape#watch) operation. 71 | 72 | tf.Variable turns the tensor into a variable, which is the approach recommended by TensorFlow. The .watch() method ensures the tensor is tracked by the tf.GradientTape(). 73 | 74 | **You can see that if we use neither, we get None as the gradient, which means the gradients were not tracked!** 75 | 76 | NOTE: In general, it is always safe to work with variables as well as to use .watch() to ensure gradients are tracked. 77 | 78 | We used the default arguments: 79 | 80 | 1. **persistent=False**: Any resources held by the tf.GradientTape() are released as soon as the gradient() method is called once. 81 | 2. **watch_accessed_variables=True**: The tape automatically watches any (trainable) variable it accesses, so if we have a variable, we do not need to call .watch() with this default setting. 
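Conversely, here is a short sketch (not part of the original notebook) of what happens with **watch_accessed_variables=False**: even a tf.Variable is no longer tracked automatically, so .watch() becomes mandatory:

```python
import tensorflow as tf

x = tf.Variable([2.0])

# With watch_accessed_variables=False, the tape does not watch
# accessed variables automatically; we must opt in with .watch().
with tf.GradientTape(watch_accessed_variables=False) as tape:
    tape.watch(x)
    f = x ** 2

# Without the tape.watch(x) call above, this would print None.
print('df/dx:', tape.gradient(f, x))
```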
82 | 83 | Let's have an example with **persistent=True**: 84 | """ 85 | 86 | x = tf.constant([2.0]) 87 | x = tf.Variable(x) 88 | 89 | # For practice, turn persistent to False to see what happens. 90 | with tf.GradientTape(persistent=True, watch_accessed_variables=True) as grad: 91 | f = x ** 2 92 | h = x ** 3 93 | 94 | # Print gradient output 95 | print('The gradient df/dx where f=(x^2):\n', grad.gradient(f, x)) 96 | print('The gradient dh/dx where h=(x^3):\n', grad.gradient(h, x)) -------------------------------------------------------------------------------- /codes/python/1-basics/graph.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """graph.ipynb 3 | 4 | Automatically generated by Colaboratory. 5 | 6 | Original file is located at 7 | https://colab.research.google.com/drive/1ibfKtpxC_hIhZlPbefCoqpAS7jTdyiFw 8 | 9 | ## Introduction to TensorFlow graphs. 10 | 11 | For a long time, the big complaint about TensorFlow was *It's not flexible for debugging!* With the advent of TensorFlow 2.0, that changed drastically. 12 | 13 | Now TensorFlow allows you to run operations **eagerly**. That means you can run TensorFlow operations from Python and get the outputs back in Python right away. That creates a lot of flexibility, especially for debugging. 14 | 15 | But there are some merits in NOT using eager execution. You can run operations as TensorFlow graphs, which in some scenarios leads to significant speed-ups. According to TensorFlow: 16 | 17 | > Graphs are data structures that contain a set of [tf.Operation](https://www.tensorflow.org/api_docs/python/tf/Operation) objects, which represent units of computation; and [tf.Tensor](https://www.tensorflow.org/api_docs/python/tf/Tensor) objects, which represent the units of data that flow between operations. They are defined in a [tf.Graph](https://www.tensorflow.org/api_docs/python/tf/Graph) context. 
Since these graphs are data structures, they can be saved, run, and restored all without the original Python code. 18 | 19 | Let's look at some examples of transforming functions into graphs! 20 | """ 21 | 22 | # Loading necessary libraries 23 | import tensorflow as tf 24 | import numpy as np 25 | import timeit 26 | 27 | """### Operation 28 | 29 | We can put a Python function on a graph with the [@tf.function](https://www.tensorflow.org/api_docs/python/tf/function) decorator. 30 | """ 31 | 32 | @tf.function 33 | def multiply_fn(a, b): 34 | return tf.matmul(a, b) 35 | 36 | # Create some tensors 37 | a = tf.constant([[0.5, 0.5]]) 38 | b = tf.constant([[10.0], [1.0]]) 39 | 40 | # Check function 41 | print('Multiply a of shape {} with b of shape {}'.format(a.shape, b.shape)) 42 | print(multiply_fn(a, b).numpy()) 43 | 44 | # Function without being taken to a graph, i.e., with eager execution. 45 | def add_fn(a, b): 46 | return tf.add(a, b) 47 | 48 | # Create some tensors 49 | a = tf.constant([[0.5, 0.5]]) 50 | b = tf.constant([[10.0], [1.0]]) 51 | 52 | # Check function 53 | print('Add a of shape {} with b of shape {}'.format(a.shape, b.shape)) 54 | print(add_fn(a, b).numpy()) 55 | 56 | """### Speedup 57 | 58 | Now let's define a custom model and run it: 59 | 60 | 1. eagerly 61 | 2.
on graph 62 | 63 | To learn how to define models, refer to: https://www.tensorflow.org/api_docs/python/tf/keras/Model 64 | """ 65 | 66 | class ModelShallow(tf.keras.Model): 67 | 68 | def __init__(self): 69 | super(ModelShallow, self).__init__() 70 | self.dense1 = tf.keras.layers.Dense(10, activation=tf.nn.relu) 71 | self.dense2 = tf.keras.layers.Dense(20, activation=tf.nn.relu) 72 | self.dense3 = tf.keras.layers.Dense(30, activation=tf.nn.softmax) 73 | self.dropout = tf.keras.layers.Dropout(0.5) 74 | 75 | def call(self, inputs, training=False): 76 | x = self.dense1(inputs) 77 | if training: 78 | x = self.dropout(x, training=training) 79 | x = self.dense2(x) 80 | out = self.dense3(x) 81 | return out 82 | 83 | class ModelDeep(tf.keras.Model): 84 | 85 | def __init__(self): 86 | super(ModelDeep, self).__init__() 87 | self.dense1 = tf.keras.layers.Dense(1000, activation=tf.nn.relu) 88 | self.dense2 = tf.keras.layers.Dense(2000, activation=tf.nn.relu) 89 | self.dense3 = tf.keras.layers.Dense(3000, activation=tf.nn.softmax) 90 | self.dropout = tf.keras.layers.Dropout(0.5) 91 | 92 | def call(self, inputs, training=False): 93 | x = self.dense1(inputs) 94 | if training: 95 | x = self.dropout(x, training=training) 96 | x = self.dense2(x) 97 | out = self.dense3(x) 98 | return out 99 | 100 | # Create the model with eager execution by default 101 | model_shallow_with_eager = ModelShallow() 102 | 103 | # Take model to graph. 104 | # NOTE: Instead of using decorators, we can directly apply tf.function to the model.
105 | model_shallow_on_graph = tf.function(ModelShallow()) 106 | 107 | # Model deep 108 | model_deep_with_eager = ModelDeep() 109 | model_deep_on_graph = tf.function(ModelDeep()) 110 | 111 | # Sample input 112 | sample_input = tf.random.uniform([60, 28, 28]) 113 | 114 | # Check time for shallow model 115 | print("Shallow Model - Eager execution time:", timeit.timeit(lambda: model_shallow_with_eager(sample_input), number=1000)) 116 | print("Shallow Model - Graph-based execution time:", timeit.timeit(lambda: model_shallow_on_graph(sample_input), number=1000)) 117 | 118 | # Check time for deep model 119 | print("Deep Model - Eager execution time:", timeit.timeit(lambda: model_deep_with_eager(sample_input), number=100)) 120 | print("Deep Model - Graph-based execution time:", timeit.timeit(lambda: model_deep_on_graph(sample_input), number=100)) 121 | 122 | -------------------------------------------------------------------------------- /codes/python/1-basics/models.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """models.ipynb 3 | 4 | Automatically generated by Colaboratory. 5 | 6 | Original file is located at 7 | https://colab.research.google.com/drive/1ibfKtpxC_hIhZlPbefCoqpAS7jTdyiFw 8 | 9 | ## Models in TensorFlow 10 | 11 | In TensorFlow, you need to define a model in order to train a machine learning algorithm. A model consists of layers that perform operations and can be reused in the model's structure. Let's get started. 12 | """ 13 | 14 | # Loading necessary libraries 15 | import tensorflow as tf 16 | import numpy as np 17 | 18 | """### Layer 19 | 20 | In TensorFlow, we can implement layers using the high-level [tf.Module](https://www.tensorflow.org/api_docs/python/tf/Module) class. 21 | """ 22 | 23 | class SampleLayer(tf.Module): 24 | """ 25 | We define the layer with a class that inherits from the tf.Module class.
26 | """ 27 | def __init__(self, name=None): 28 | super().__init__(name=name) 29 | 30 | # Define a trainable variable 31 | self.x = tf.Variable([[1.0, 3.0]], name="x_trainable") 32 | 33 | # Define a non-trainable variable 34 | self.y = tf.Variable(2.0, trainable=False, name="y_non_trainable") 35 | def __call__(self, input): 36 | return self.x * input + self.y 37 | 38 | # Initialize the layer 39 | # Here, the __call__ function will not be invoked 40 | simple_layer = SampleLayer(name="my_layer") 41 | 42 | # Call the layer and extract some information 43 | output = simple_layer(tf.constant(1.0)) 44 | print("Output:", output) 45 | print("Layer name:", simple_layer.name) 46 | print("Trainable variables:", simple_layer.trainable_variables) 47 | 48 | """### Model 49 | 50 | Now, let's define a model. A model consists of multiple layers. 51 | """ 52 | 53 | class Model(tf.Module): 54 | def __init__(self, name=None): 55 | super().__init__(name=name) 56 | 57 | self.layer_1 = SampleLayer('layer_1') 58 | self.layer_2 = SampleLayer('layer_2') 59 | 60 | def __call__(self, x): 61 | x = self.layer_1(x) 62 | output = self.layer_2(x) 63 | return output 64 | 65 | # Initialize the model 66 | custom_model = Model(name="model_name") 67 | 68 | # Call the model 69 | # Call the layer and extract some information 70 | output = custom_model(tf.constant(1.0)) 71 | print("Output:", output) 72 | print("Model name:", custom_model.name) 73 | print("Trainable variables:", custom_model.trainable_variables) 74 | 75 | """### Keras Models 76 | 77 | Keras is a high-level API that is now part of TensorFlow. You can use [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) to define a model. You can also use the collection of [tf.keras.layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) for your convenience.
As shown below, it is straightforward to define a model with two fully-connected layers: 78 | """ 79 | 80 | class CustomModel(tf.keras.Model): 81 | 82 | def __init__(self): 83 | super(CustomModel, self).__init__() 84 | self.layer_1 = tf.keras.layers.Dense(16, activation=tf.nn.relu) 85 | self.layer_2 = tf.keras.layers.Dense(32, activation=None) 86 | 87 | def call(self, inputs): 88 | x = self.layer_1(inputs) 89 | out = self.layer_2(x) 90 | return out 91 | 92 | # Create model 93 | custom_model = CustomModel() 94 | 95 | # Call the model 96 | # Call the layer and extract some information 97 | output = custom_model(tf.constant([[1.0, 2.0, 3.0]])) 98 | print("Output shape:", output.shape) 99 | print("Model name:", custom_model.name) 100 | 101 | # Count total trainable variables 102 | total_trainable_var = np.sum([tf.size(var).numpy() for var in custom_model.trainable_variables]) 103 | print("Number of trainable variables:", total_trainable_var) -------------------------------------------------------------------------------- /codes/python/1-basics/tensors.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """tensors.ipynb 3 | 4 | Automatically generated by Colaboratory. 5 | 6 | Original file is located at 7 | https://colab.research.google.com/github/instillai/TensorFlow-Course/blob/master/codes/ipython/1-basics/tensors.ipynb 8 | """ 9 | 10 | # Import necessary libraries 11 | import tensorflow as tf 12 | import numpy as np 13 | 14 | """## Tensors 15 | 16 | Tensors are multi-dimensional arrays used in TensorFlow. 17 | 18 | We use the following definition: 19 | 20 | * **Rank:** The number of dimensions that a tensor has. 21 | 22 | Below, we will define different kinds of tensors and show their rank using the [tf.rank](https://www.tensorflow.org/api_docs/python/tf/rank) function.
23 | """ 24 | 25 | tensor = tf.constant(0) 26 | print("Print constant tensor {} of rank {}".format(tensor, tf.rank(tensor))) 27 | print("Show full tensor:", tensor) 28 | 29 | # NOTE: We use .numpy() to transform a tf.Tensor to a NumPy array 30 | tensor = tf.constant([1,2,3]) 31 | print("Tensor:", tensor) 32 | print("Rank:", tf.rank(tensor).numpy()) 33 | 34 | """### Tensor Operations""" 35 | 36 | x = tf.constant([[1, 1], 37 | [1, 1]]) 38 | y = tf.constant([[2, 4], 39 | [6, 8]]) 40 | 41 | # Add two tensors 42 | print(tf.add(x, y), "\n") 43 | 44 | # Multiply two tensors (matrix multiplication) 45 | print(tf.matmul(x, y), "\n") 46 | 47 | """### Multi-dimensional Tensors 48 | 49 | This part is not much different compared to what we learned so far. However, it would be nice to try extracting as much information as we can from a multi-dimensional tensor. 50 | 51 | 52 | Let's use [tf.ones](https://www.tensorflow.org/api_docs/python/tf/ones) for our purpose here. It creates an all-one tensor. 53 | """ 54 | 55 | # We set the shape of the tensor and the desired data type. 56 | tensor = tf.ones(shape = [2, 3, 6], dtype = tf.float32) 57 | print('Tensor:', tensor) 58 | 59 | print("Tensor Rank: ", tf.rank(tensor).numpy()) 60 | print("Shape: ", tensor.shape) 61 | print("Elements' type", tensor.dtype) 62 | print("The size of the second axis:", tensor.shape[1]) 63 | print("The size of the last axis:", tensor.shape[-1]) 64 | print("Total number of elements: ", tf.size(tensor).numpy()) 65 | print("How many dimensions? ", tensor.ndim) 66 | 67 | """### Indexing 68 | 69 | TensorFlow indexing is aligned with Python indexing. See the following examples.
70 | """ 71 | 72 | x = tf.constant([[1, 2, 3], 73 | [4, 5, 6], 74 | [7, 8, 9]]) 75 | 76 | # All elements 77 | print(x[:].numpy()) 78 | 79 | # All elements of the first row 80 | print(x[0,:].numpy()) 81 | 82 | # First row and last column 83 | print(x[0,-1].numpy()) 84 | 85 | # From second row to last and from third column to last 86 | print(x[1:,2:].numpy()) 87 | 88 | """### Data types 89 | 90 | You can change the data type of TensorFlow tensors as needed. This is easily done with [tf.cast](https://www.tensorflow.org/api_docs/python/tf/cast). 91 | """ 92 | 93 | original_tensor = tf.constant([1, 2, 3, 4], dtype=tf.int32) 94 | print('Original tensor: ', original_tensor) 95 | print("Tensor type before casting: ", original_tensor.dtype) 96 | 97 | # Casting to change dtype 98 | casted_tensor = tf.cast(original_tensor, dtype=tf.float32) 99 | print('New tensor: ', casted_tensor) 100 | print("Tensor type after casting: ", casted_tensor.dtype) 101 | 102 | -------------------------------------------------------------------------------- /codes/python/advanced/custom_training.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | import numpy as np 3 | 4 | # Load MNIST data 5 | (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() 6 | # Preprocessing 7 | x_train = x_train / 255.0 8 | x_test = x_test / 255.0 9 | 10 | # Add one dimension to make the images 3D 11 | x_train = x_train[...,tf.newaxis] 12 | x_test = x_test[...,tf.newaxis] 13 | 14 | # Track the data type 15 | dataType, dataShape = x_train.dtype, x_train.shape 16 | print(f"Data type and shape x_train: {dataType} {dataShape}") 17 | labelType, labelShape = y_train.dtype, y_train.shape 18 | print(f"Data type and shape y_train: {labelType} {labelShape}") 19 | 20 | im_list = [] 21 | n_samples_to_show = 16 22 | c = 0 23 | for i in range(n_samples_to_show): 24 | im_list.append(x_train[i]) 25 | # Visualization 26 | import
matplotlib.pyplot as plt 27 | from mpl_toolkits.axes_grid1 import ImageGrid 28 | fig = plt.figure(figsize=(4., 4.)) 29 | # Ref: https://matplotlib.org/3.1.1/gallery/axes_grid1/simple_axesgrid.html 30 | grid = ImageGrid(fig, 111, # similar to subplot(111) 31 | nrows_ncols=(4, 4), # creates a 4x4 grid of axes 32 | axes_pad=0.1, # pad between axes in inch. 33 | ) 34 | # Show image grid 35 | for ax, im in zip(grid, im_list): 36 | # Iterating over the grid returns the Axes. 37 | ax.imshow(im[:,:,0], 'gray') 38 | plt.show() 39 | 40 | batch_size = 32 41 | # Prepare the training dataset. 42 | train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) 43 | train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size) 44 | 45 | # Prepare the validation dataset. 46 | test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)) 47 | test_dataset = test_dataset.batch(batch_size) 48 | 49 | # Model building 50 | NUM_CLASSES = 10 51 | model = tf.keras.Sequential([ 52 | tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)), 53 | tf.keras.layers.MaxPooling2D((2, 2)), 54 | tf.keras.layers.Conv2D(32, (3, 3), activation='relu'), 55 | tf.keras.layers.Flatten(), 56 | tf.keras.layers.Dense(32, activation='relu'), 57 | tf.keras.layers.Dense(NUM_CLASSES, activation='softmax')] 58 | ) 59 | 60 | # Defining loss function 61 | loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False) 62 | accuracy_metric = tf.keras.metrics.Accuracy() 63 | 64 | # Calculate loss 65 | def loss_fn(gt_label, pred): 66 | # The training argument defines the behaviour of layers with respect 67 | # to whether we are training the model or not. It is important for layers 68 | # such as BatchNorm and Dropout.
69 | return loss_object(y_true=gt_label, y_pred=pred) 70 | 71 | def accuracy_fn(gt_label, output): 72 | # calculate the accuracy by turning output into labels with argmax 73 | pred = tf.argmax(output, axis=1, output_type=tf.int32) 74 | return accuracy_metric(pred, gt_label) 75 | 76 | # Define the optimizer 77 | optimizer = tf.keras.optimizers.Adam(learning_rate=0.01) 78 | 79 | NUM_EPOCHS = 5 80 | EPOCH_PER_DISPLAY = 1 81 | total_loss = [] 82 | for epoch in range(NUM_EPOCHS): 83 | 84 | running_loss = [] 85 | running_accuracy = [] 86 | 87 | # Training 88 | for input, target in train_dataset: 89 | 90 | # Calculate and track gradients 91 | with tf.GradientTape() as tape: 92 | 93 | # Calculate model output and loss 94 | output = model(input, training=True) 95 | loss_ = loss_fn(target, output) 96 | accuracy_ = accuracy_fn(target, output) 97 | 98 | # Tape gradients 99 | grads = tape.gradient(loss_, model.trainable_variables) 100 | 101 | # Track batch loss and accuracy 102 | running_loss.append(loss_) 103 | running_accuracy.append(accuracy_) 104 | 105 | # Optimize model based on the gradients 106 | optimizer.apply_gradients(zip(grads, model.trainable_variables)) 107 | 108 | # Epoch calculations 109 | epoch_loss = np.mean(running_loss) 110 | epoch_accuracy = np.mean(running_accuracy) 111 | if (epoch + 1) % EPOCH_PER_DISPLAY == 0: 112 | print("Epoch {}: Loss: {:.4f} Accuracy: {:.2f}%".format(epoch+1, epoch_loss, epoch_accuracy * 100)) 113 | 114 | # Calculate the accuracy on the test set 115 | running_accuracy = [] 116 | for (input, gt_label) in test_dataset: 117 | output = model(input, training=False) 118 | accuracy_ = accuracy_fn(gt_label, output) 119 | running_accuracy.append(accuracy_) 120 | 121 | print("Test accuracy: {:.3%}".format(np.mean(running_accuracy))) -------------------------------------------------------------------------------- /codes/python/advanced/dataset_generator.py: -------------------------------------------------------------------------------- 1 | #
-*- coding: utf-8 -*- 2 | """dataset_generator.ipynb 3 | 4 | Automatically generated by Colaboratory. 5 | 6 | Original file is located at 7 | https://colab.research.google.com/drive/1SFWk7Ap06ZkvP2HmLhXLiyyqo-ei35M1 8 | 9 | # Dataset generators 10 | 11 | In this advanced tutorial, I demonstrate an efficient way of using the TensorFlow [tf.data API](https://www.tensorflow.org/guide/data) to create a dataset. This approach has some important advantages: 12 | 13 | 1. It provides a lot of flexibility in terms of using Python and packages such as NumPy to create a dataset. 14 | 2. When working with large databases, you can fetch samples and shuffle them **on demand**, which significantly reduces memory usage. In fact, memory is no longer a bottleneck. 15 | 16 | This is done by using [Python generator functions](https://www.tensorflow.org/guide/data#consuming_python_generators) to create [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) objects. The process is as follows: 17 | 18 | 1. By using a generator function, we dictate the way data must be generated. 19 | 2. By using the [tf.data.Dataset.from_generator](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) method, we create the TensorFlow dataset.
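The two-step recipe above can be sketched with a toy generator before we apply it to MNIST (the toy_gen function and its data are made up for illustration; it assumes TensorFlow 2.x and NumPy):

```python
import numpy as np
import tensorflow as tf

# Step 1: a generator function dictates how samples are produced.
def toy_gen():
    for i in range(4):
        # Each sample is an (image-like array, label) pair.
        yield (np.full((2, 2), float(i)), i)

# Step 2: tf.data.Dataset.from_generator turns the generator into a dataset.
toy_dataset = tf.data.Dataset.from_generator(
    generator=toy_gen, output_types=(tf.float64, tf.int32))

# Iterating the dataset re-runs the generator under the hood.
labels = [int(label.numpy()) for _, label in toy_dataset]
print(labels)  # [0, 1, 2, 3]
```
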
20 | """ 21 | 22 | import tensorflow as tf 23 | import numpy as np 24 | 25 | # Load MNIST data 26 | (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() 27 | # Preprocessing 28 | x_train = x_train / 255.0 29 | x_test = x_test / 255.0 30 | 31 | # Add one dimension to make the images 3D 32 | x_train = x_train[...,tf.newaxis] 33 | x_test = x_test[...,tf.newaxis] 34 | 35 | # Track the data type 36 | dataType, dataShape = x_train.dtype, x_train.shape 37 | print(f"Data type and shape x_train: {dataType} {dataShape}") 38 | labelType, labelShape = y_train.dtype, y_train.shape 39 | print(f"Data type and shape y_train: {labelType} {labelShape}") 40 | 41 | """## Generators 42 | 43 | Here, I define separate generators for the train/test splits. The generator function picks a random sample from the dataset at each step. This creates a shuffled dataset without the need to use the [.shuffle()](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) method. Sometimes the [.shuffle()](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) method can be very [memory consuming](https://www.tensorflow.org/guide/data_performance#reducing_memory_footprint). 44 | """ 45 | 46 | # Defining generator functions for train/test samples 47 | TRAIN_LEN = x_train.shape[0] 48 | def gen_pairs_train(): 49 | for i in range(TRAIN_LEN): 50 | # Get a random image each time 51 | idx = np.random.randint(0,TRAIN_LEN) 52 | yield (x_train[idx], y_train[idx]) 53 | 54 | 55 | TEST_LEN = x_test.shape[0] 56 | def gen_pairs_test(): 57 | for i in range(TEST_LEN): 58 | # Get a random image each time 59 | idx = np.random.randint(0,TEST_LEN) 60 | yield (x_test[idx], y_test[idx]) 61 | 62 | # Function to test input pipeline 63 | sample_image, sample_label = next(gen_pairs_train()) 64 | 65 | """## Dataset creation 66 | 67 | Here, I simply applied tf.data.Dataset.from_generator to the *gen_pairs_train()* and *gen_pairs_test()* generator functions.
68 | """ 69 | 70 | batch_size = 32 71 | # Prepare the training dataset. 72 | train_dataset = tf.data.Dataset.from_generator(generator=gen_pairs_train, output_types=(tf.float64, tf.uint8)) 73 | train_dataset = train_dataset.batch(batch_size) 74 | 75 | # Prepare the validation dataset. 76 | test_dataset = tf.data.Dataset.from_generator(generator=gen_pairs_test, output_types=(tf.float64, tf.uint8)) 77 | test_dataset = test_dataset.batch(batch_size) 78 | 79 | im_list = [] 80 | n_samples_to_show = 16 81 | c = 0 82 | for i in range(n_samples_to_show): 83 | img, label = next(gen_pairs_train()) 84 | im_list.append(img) 85 | # Visualization 86 | import matplotlib.pyplot as plt 87 | from mpl_toolkits.axes_grid1 import ImageGrid 88 | fig = plt.figure(figsize=(4., 4.)) 89 | # Ref: https://matplotlib.org/3.1.1/gallery/axes_grid1/simple_axesgrid.html 90 | grid = ImageGrid(fig, 111, # similar to subplot(111) 91 | nrows_ncols=(4, 4), # creates a 4x4 grid of axes 92 | axes_pad=0.1, # pad between axes in inch. 93 | ) 94 | # Show image grid 95 | for ax, im in zip(grid, im_list): 96 | # Iterating over the grid returns the Axes. 97 | ax.imshow(im[:,:,0], 'gray') 98 | plt.show() 99 | 100 | # Model building 101 | NUM_CLASSES = 10 102 | model = tf.keras.Sequential([ 103 | tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)), 104 | tf.keras.layers.MaxPooling2D((2, 2)), 105 | tf.keras.layers.Conv2D(32, (3, 3), activation='relu'), 106 | tf.keras.layers.Flatten(), 107 | tf.keras.layers.Dense(32, activation='relu'), 108 | tf.keras.layers.Dense(NUM_CLASSES, activation='softmax')] 109 | ) 110 | 111 | # Defining loss function 112 | loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False) 113 | accuracy_metric = tf.keras.metrics.Accuracy() 114 | 115 | # Calculate loss 116 | def loss_fn(gt_label, pred): 117 | # The training argument defines the behaviour of layers with respect 118 | # to whether we are training the model or not.
It is important for layers 119 | # such as BatchNorm and Dropout. 120 | return loss_object(y_true=gt_label, y_pred=pred) 121 | 122 | def accuracy_fn(gt_label, output): 123 | # calculate the accuracy by turning output into labels with argmax 124 | pred = tf.argmax(output, axis=1, output_type=tf.int32) 125 | return accuracy_metric(pred, gt_label) 126 | 127 | # Define the optimizer 128 | optimizer = tf.keras.optimizers.Adam(learning_rate=0.01) 129 | 130 | NUM_EPOCHS = 5 131 | EPOCH_PER_DISPLAY = 1 132 | total_loss = [] 133 | for epoch in range(NUM_EPOCHS): 134 | 135 | running_loss = [] 136 | running_accuracy = [] 137 | 138 | # Training 139 | for input, target in train_dataset: 140 | 141 | # Calculate and track gradients 142 | with tf.GradientTape() as tape: 143 | 144 | # Calculate model output and loss 145 | output = model(input, training=True) 146 | loss_ = loss_fn(target, output) 147 | accuracy_ = accuracy_fn(target, output) 148 | 149 | # Tape gradients 150 | grads = tape.gradient(loss_, model.trainable_variables) 151 | 152 | # Track batch loss and accuracy 153 | running_loss.append(loss_) 154 | running_accuracy.append(accuracy_) 155 | 156 | # Optimize model based on the gradients 157 | optimizer.apply_gradients(zip(grads, model.trainable_variables)) 158 | 159 | # Epoch calculations 160 | epoch_loss = np.mean(running_loss) 161 | epoch_accuracy = np.mean(running_accuracy) 162 | if (epoch + 1) % EPOCH_PER_DISPLAY == 0: 163 | print("Epoch {}: Loss: {:.4f} Accuracy: {:.2f}%".format(epoch+1, epoch_loss, epoch_accuracy * 100)) 164 | 165 | # Calculate the accuracy on the test set 166 | running_accuracy = [] 167 | for (input, gt_label) in test_dataset: 168 | output = model(input, training=False) 169 | accuracy_ = accuracy_fn(gt_label, output) 170 | running_accuracy.append(accuracy_) 171 | 172 | print("Test accuracy: {:.3%}".format(np.mean(running_accuracy))) -------------------------------------------------------------------------------- /codes/python/advanced/tfrecords.py:
-------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """TFRecords.ipynb 3 | 4 | Automatically generated by Colaboratory. 5 | 6 | Original file is located at 7 | https://colab.research.google.com/drive/1p-Nz6v3CyqKSc-QazX1FgvZkamt5T-uC 8 | """ 9 | 10 | import tensorflow as tf 11 | from tensorflow import keras 12 | import numpy as np 13 | 14 | # Load MNIST data 15 | (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() 16 | 17 | # Preprocessing 18 | x_train = x_train / 255.0 19 | x_test = x_test / 255.0 20 | 21 | # Track the data type 22 | dataType = x_train.dtype 23 | print(f"Data type: {dataType}") 24 | 25 | labelType = y_test.dtype 26 | print(f"Data type: {labelType}") 27 | 28 | im_list = [] 29 | n_samples_to_show = 16 30 | c = 0 31 | for i in range(n_samples_to_show): 32 | im_list.append(x_train[i]) 33 | 34 | # Visualization 35 | import matplotlib.pyplot as plt 36 | from mpl_toolkits.axes_grid1 import ImageGrid 37 | fig = plt.figure(figsize=(4., 4.)) 38 | # Ref: https://matplotlib.org/3.1.1/gallery/axes_grid1/simple_axesgrid.html 39 | grid = ImageGrid(fig, 111, # similar to subplot(111) 40 | nrows_ncols=(4, 4), # creates 2x2 grid of axes 41 | axes_pad=0.1, # pad between axes in inch. 42 | ) 43 | # Show image grid 44 | for ax, im in zip(grid, im_list): 45 | # Iterating over the grid returns the Axes. 46 | ax.imshow(im, 'gray') 47 | plt.show() 48 | 49 | # Convert values to compatible tf.Example types. 50 | 51 | def _bytes_feature(value): 52 | """Returns a bytes_list from a string / byte.""" 53 | if isinstance(value, type(tf.constant(0))): 54 | value = value.numpy() # BytesList won't unpack a string from an EagerTensor. 
55 | return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value])) 56 | 57 | 58 | def _float_feature(value): 59 | """Returns a float_list from a float / double.""" 60 | return tf.train.Feature(float_list=tf.train.FloatList(value=[value])) 61 | 62 | 63 | def _int64_feature(value): 64 | """Returns an int64_list from a bool / enum / int / uint.""" 65 | return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) 66 | 67 | # Create the features dictionary. 68 | def image_example(image, label, dimension): 69 | feature = { 70 | 'dimension': _int64_feature(dimension), 71 | 'label': _int64_feature(label), 72 | 'image_raw': _bytes_feature(image.tobytes()), 73 | } 74 | 75 | return tf.train.Example(features=tf.train.Features(feature=feature)) 76 | 77 | record_file = 'mnistTrain.tfrecords' 78 | n_samples = x_train.shape[0] 79 | dimension = x_train.shape[1] 80 | 81 | with tf.io.TFRecordWriter(record_file) as writer: 82 | for i in range(n_samples): 83 | image = x_train[i] 84 | label = y_train[i] 85 | tf_example = image_example(image, label, dimension) 86 | writer.write(tf_example.SerializeToString()) 87 | 88 | # Create the dataset object from tfrecord file(s) 89 | dataset = tf.data.TFRecordDataset(record_file) 90 | 91 | # Decoding function 92 | def parse_record(record): 93 | name_to_features = { 94 | 'dimension': tf.io.FixedLenFeature([], tf.int64), 95 | 'label': tf.io.FixedLenFeature([], tf.int64), 96 | 'image_raw': tf.io.FixedLenFeature([], tf.string), 97 | } 98 | return tf.io.parse_single_example(record, name_to_features) 99 | 100 | def decode_record(record): 101 | image = tf.io.decode_raw( 102 | record['image_raw'], out_type=dataType, little_endian=True, fixed_length=None, name=None 103 | ) 104 | label = record['label'] 105 | dimension = record['dimension'] 106 | image = tf.reshape(image, (dimension, dimension)) 107 | 108 | return (image, label) 109 | 110 | im_list = [] 111 | n_samples_to_show = 16 112 | c = 0 113 | for record in dataset: 114 | c+=1 115 | if 
c > n_samples_to_show: 116 | break 117 | parsed_record = parse_record(record) 118 | decoded_record = decode_record(parsed_record) 119 | image, label = decoded_record 120 | im_list.append(image) 121 | 122 | # Visualization 123 | import matplotlib.pyplot as plt 124 | from mpl_toolkits.axes_grid1 import ImageGrid 125 | fig = plt.figure(figsize=(4., 4.)) 126 | # Ref: https://matplotlib.org/3.1.1/gallery/axes_grid1/simple_axesgrid.html 127 | grid = ImageGrid(fig, 111, # similar to subplot(111) 128 | nrows_ncols=(4, 4), # creates 2x2 grid of axes 129 | axes_pad=0.1, # pad between axes in inch. 130 | ) 131 | # Show image grid 132 | for ax, im in zip(grid, im_list): 133 | # Iterating over the grid returns the Axes. 134 | ax.imshow(im, 'gray') 135 | plt.show() 136 | 137 | -------------------------------------------------------------------------------- /codes/python/application/image/image_classification.py: -------------------------------------------------------------------------------- 1 | # Import python libraries 2 | import matplotlib.pyplot as plt 3 | import numpy as np 4 | import os 5 | import pathlib 6 | import tensorflow as tf 7 | import random 8 | import pandas as pd 9 | import tensorflow_datasets as tfds 10 | from collections import defaultdict 11 | from skimage.transform import resize 12 | from skimage.util import img_as_float 13 | from skimage import io 14 | 15 | """ # Params 16 | """ 17 | TRAIN_LEN = 1000 18 | TEST_LEN = 1000 19 | 20 | print(tf.__version__) 21 | 22 | # Download the dataset 23 | dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz" 24 | 25 | # Get the files by having the url 26 | # Ref: https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file 27 | data_dir = tf.keras.utils.get_file(origin=dataset_url, cache_subdir=os.path.expanduser('~/data'), fname='flower_photos', 28 | untar=True) 29 | 30 | # Create a Path object 31 | # Ref: https://docs.python.org/3/library/pathlib.html 32 | data_dir 
= pathlib.Path(data_dir) 33 | 34 | # Get all image paths 35 | image_paths = list(data_dir.glob('*/*.jpg')) 36 | 37 | # Create a DataFrame 38 | df = pd.DataFrame(image_paths, columns=['path']) 39 | 40 | 41 | def get_class(path): 42 | """ 43 | Get the class label from the file path 44 | :param path: The full path of the file 45 | :return: The class name (the parent directory name) 46 | """ 47 | return path.parent.name 48 | 49 | 50 | def get_look_up_dict(df): 51 | """ 52 | Create a look-up table for class labels and their associated unique keys 53 | :param df: dataframe 54 | :return: Dict 55 | """ 56 | # Defining a dict 57 | look_up_dict = defaultdict(list) 58 | classes = list(df['class_name'].unique()) 59 | 60 | for i in range(len(classes)): 61 | look_up_dict[classes[i]] = i 62 | 63 | return look_up_dict 64 | 65 | 66 | # Store the class names in a new column 67 | df['class_name'] = df.path.apply(get_class) 68 | 69 | # Create a class to label dictionary 70 | class_to_label = get_look_up_dict(df) 71 | label_to_class = dict([(value, key) for key, value in class_to_label.items()]) 72 | 73 | # Store the class labels in a new column 74 | df['label'] = df.class_name.apply(lambda x: class_to_label[x]) 75 | 76 | # Create separate train/test splits 77 | from sklearn.model_selection import train_test_split 78 | 79 | X, y = df['path'], df['label'] 80 | # Read more at https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html 81 | # Stratified sampling is used. 82 | X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.33, random_state=42, shuffle=True) 83 | 84 | 85 | def imResize(image): 86 | """ 87 | This function resizes the images.
88 | :param image: The stack of images 89 | :return: The stack of resized images 90 | """ 91 | # Desired size 92 | IM_SIZE = 200 93 | 94 | # Turn to float64 and scale to [0,1] 95 | image = img_as_float(image) 96 | 97 | desired_size = [IM_SIZE, IM_SIZE] 98 | image_resized = resize(image, (desired_size[0], desired_size[1]), 99 | anti_aliasing=True) 100 | 101 | # Cast back to float32 102 | image_resized = image_resized.astype(np.float32) 103 | return image_resized 104 | 105 | 106 | def visualize_training(): 107 | im_list = [] 108 | n_samples_to_show = 9 109 | c = 0 110 | for i in range(n_samples_to_show): 111 | sample, label = next(train_gen()) 112 | im_list.append(sample) 113 | # Visualization 114 | import matplotlib.pyplot as plt 115 | from mpl_toolkits.axes_grid1 import ImageGrid 116 | fig = plt.figure(figsize=(4., 4.)) 117 | # Ref: https://matplotlib.org/3.1.1/gallery/axes_grid1/simple_axesgrid.html 118 | grid = ImageGrid(fig, 111, # similar to subplot(111) 119 | nrows_ncols=(3, 3), # creates a 3x3 grid of axes 120 | axes_pad=0.1, # pad between axes in inch. 121 | ) 122 | # Show image grid 123 | for ax, im in zip(grid, im_list): 124 | # Iterating over the grid returns the Axes. 125 | ax.imshow(im) 126 | plt.show() 127 | 128 | 129 | """ # Dataset generator 130 | """ 131 | 132 | 133 | def train_gen(): 134 | """ 135 | The generator function to create training samples 136 | :return: Generator object 137 | ex: For the next sample, use next(train_gen()).
138 | To loop through: 139 | gen_obj = train_gen() 140 | for item in gen_obj: 141 | print(item) 142 | """ 143 | for i in range(TRAIN_LEN): 144 | # Pick a random choice 145 | idx = np.random.randint(0, TRAIN_LEN) 146 | im_path = X_train.iloc[idx] 147 | im_label = y_train.iloc[idx] 148 | 149 | # Read the image 150 | im = io.imread(str(im_path)) 151 | 152 | # Resize the image 153 | im = imResize(im) 154 | 155 | yield im, im_label 156 | 157 | 158 | def test_gen(): 159 | """ 160 | The generator function to create test samples 161 | :return: Generator object 162 | """ 163 | for i in range(TEST_LEN): 164 | # Pick a random choice 165 | idx = np.random.randint(0, TEST_LEN) 166 | im_path = X_test.iloc[idx] 167 | im_label = y_test.iloc[idx] 168 | 169 | # Read the image 170 | im = io.imread(str(im_path)) 171 | 172 | # Resize the image 173 | im = imResize(im) 174 | 175 | yield im, im_label 176 | 177 | 178 | # Get the generator object 179 | sample, label = next(train_gen()) 180 | 181 | """ # Visualize some sample images from the training set 182 | """ 183 | visualize_training() 184 | 185 | """ # Create datasets 186 | """ 187 | batch_size = 32 188 | # Prepare the training dataset. 189 | train_dataset = tf.data.Dataset.from_generator(generator=train_gen, output_types=(tf.float64, tf.uint8)) 190 | train_dataset = train_dataset.batch(batch_size) 191 | 192 | # Prepare the validation dataset.
193 | test_dataset = tf.data.Dataset.from_generator(generator=test_gen, output_types=(tf.float64, tf.uint8)) 194 | test_dataset = test_dataset.batch(batch_size) 195 | 196 | # Another way of visualization 197 | for images, labels in train_dataset.take(1): 198 | for i in range(9): 199 | ax = plt.subplot(3, 3, i + 1) 200 | plt.imshow(images[i].numpy().astype("float32")) 201 | plt.title(label_to_class[labels[i].numpy()]) 202 | plt.axis("off") 203 | plt.show() 204 | -------------------------------------------------------------------------------- /codes/python/basics_in_machine_learning/dataaugmentation.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """dataaugmentation.ipynb 3 | 4 | Automatically generated by Colaboratory. 5 | 6 | Original file is located at 7 | https://colab.research.google.com/drive/1ibfKtpxC_hIhZlPbefCoqpAS7jTdyiFw 8 | """ 9 | 10 | import tensorflow as tf 11 | import tensorflow_datasets as tfds  # Import TensorFlow datasets 12 | import urllib 13 | 14 | import matplotlib.pyplot as plt 15 | import numpy as np 16 | # Necessary for dealing with https urls 17 | import ssl 18 | ssl._create_default_https_context = ssl._create_unverified_context 19 | 20 | # Load the training split of the colorectal_histology dataset 21 | ds, ds_info = tfds.load('colorectal_histology', split='train', shuffle_files=True, with_info=True, download=True) 22 | assert isinstance(ds, tf.data.Dataset) 23 | print(ds_info) 24 | 25 | # Visualizing images 26 | fig = tfds.show_examples(ds, ds_info) 27 | 28 | # Reading images one by one (remove the break statement to read all of them) 29 | for example in tfds.as_numpy(ds): 30 | image, label = example['image'], example['label'] 31 | break 32 | # take one sample from data 33 | one_sample = ds.take(1) 34 | one_sample = list(one_sample.as_numpy_iterator()) 35 | image = one_sample[0]['image'] 36 | label = one_sample[0]['label'] 37 | print(image.shape,label.shape) 38 | 39 | # Side by
side visualization 40 | def visualize(im, im_augmented, operation): 41 | fig = plt.figure() 42 | plt.subplot(1,2,1) 43 | plt.title('Original image') 44 | plt.imshow(im) 45 | plt.subplot(1,2,2) 46 | plt.title(operation) 47 | plt.imshow(im_augmented) 48 | 49 | # Adding Gaussian noise to image 50 | common_type = tf.float32 # Make noise and image of the same type 51 | gnoise = tf.random.normal(shape=tf.shape(image), mean=0.0, stddev=0.1, dtype=common_type) 52 | image_type_converted = tf.image.convert_image_dtype(image, dtype=common_type, saturate=False) 53 | noisy_image = tf.add(image_type_converted, gnoise) 54 | visualize(image_type_converted, noisy_image, 'noisyimage') 55 | 56 | # Adjusting brightness 57 | bright = tf.image.adjust_brightness(image, 0.2) 58 | visualize(image, bright, 'brightened image') 59 | 60 | # Flip image 61 | flipped = tf.image.flip_left_right(image) 62 | visualize(image, flipped, 'flipped image') 63 | # Adjusting JPEG quality 64 | adjusted = tf.image.adjust_jpeg_quality(image, jpeg_quality=20) 65 | visualize(image, adjusted, 'quality adjusted image') 66 | 67 | # Random cropping of the image (the cropping area is picked at random) 68 | crop_to_original_ratio = 0.5 # The scale of the cropped area to the original image 69 | new_size = int(crop_to_original_ratio * image.shape[0]) 70 | cropped = tf.image.random_crop(image, size=[new_size,new_size,3]) 71 | visualize(image, cropped, 'randomly cropped image') 72 | 73 | # Center cropping of the image (the cropping area is at the center) 74 | central_fraction = 0.6 # The scale of the cropped area to the original image 75 | center_cropped = tf.image.central_crop(image, central_fraction=central_fraction) 76 | visualize(image, center_cropped, 'centrally cropped image') -------------------------------------------------------------------------------- /codes/python/basics_in_machine_learning/linearregression.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 |
"""linearregression.ipynb 3 | 4 | Automatically generated by Colaboratory. 5 | 6 | Original file is located at 7 | https://colab.research.google.com/drive/1SFWk7Ap06ZkvP2HmLhXLiyyqo-ei35M1 8 | """ 9 | 10 | from __future__ import absolute_import, division, print_function, unicode_literals 11 | import pathlib 12 | import matplotlib.pyplot as plt 13 | import numpy as np 14 | import pandas as pd 15 | import seaborn as sns 16 | from datetime import datetime 17 | import tensorflow as tf 18 | from tensorflow import keras 19 | from tensorflow.keras import layers 20 | print(tf.__version__) 21 | 22 | # Download the dataset with keras.utils.get_file 23 | dataset_path = keras.utils.get_file("housing.data", "https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data") 24 | 25 | column_names = ['CRIM','ZN','INDUS','CHAS','NOX', 26 | 'RM', 'AGE', 'DIS','RAD','TAX','PTRATIO', 'B', 'LSTAT', 'MEDV'] 27 | raw_dataset = pd.read_csv(dataset_path, names=column_names, 28 | na_values = "?", comment='\t', 29 | sep=" ", skipinitialspace=True) 30 | # Create a working copy of the dataset 31 | dataset = raw_dataset.copy() 32 | 33 | # This function returns the last n rows of the frame 34 | # based on position. 35 | dataset.tail(n=10) 36 | 37 | # Split data into train/test 38 | # p = training data portion 39 | p=0.8 40 | trainDataset = dataset.sample(frac=p,random_state=0) 41 | testDataset = dataset.drop(trainDataset.index) 42 | 43 | # Visual representation of training data 44 | import matplotlib.pyplot as plt 45 | fig, ax = plt.subplots() 46 | # Extract the columns of interest (indexing keeps them in the frame) 47 | x = trainDataset['RM'] 48 | y = trainDataset['MEDV'] 49 | ax.scatter(x, y, edgecolors=(0, 0, 0)) 50 | ax.set_xlabel('RM') 51 | ax.set_ylabel('MEDV') 52 | plt.show() 53 | 54 | # Note: .pop() returns a column and drops it from the frame. 55 | # After trainDataset.pop('RM'), the 'RM' column would no longer 56 | # exist in the trainDataset frame; here plain indexing is used instead.
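The `.pop()` behavior noted in the comment above can be checked on a tiny toy frame (synthetic data, not the housing set):

```python
import pandas as pd

df = pd.DataFrame({"RM": [6.5, 7.1], "MEDV": [24.0, 32.5]})

# Plain indexing returns the column and leaves the frame unchanged.
rm = df["RM"]
assert "RM" in df.columns

# .pop() returns the column and drops it from the frame.
rm_popped = df.pop("RM")
assert "RM" not in df.columns
```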
57 | trainInput = trainDataset['RM'] 58 | trainTarget = trainDataset['MEDV'] 59 | testInput = testDataset['RM'] 60 | testTarget = testDataset['MEDV'] 61 | 62 | # We don't specify anything for activation -> no activation is applied (i.e. "linear" activation: a(x) = x) 63 | # Check: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense 64 | 65 | def linear_model(): 66 | model = keras.Sequential([ 67 | layers.Dense(1, use_bias=True, input_shape=(1,), name='layer') 68 | ]) 69 | 70 | # Using the Adam optimizer 71 | optimizer = tf.keras.optimizers.Adam( 72 | learning_rate=0.01, beta_1=0.9, beta_2=0.99, epsilon=1e-05, amsgrad=False, 73 | name='Adam') 74 | 75 | # Check: https://www.tensorflow.org/api_docs/python/tf/keras/Model 76 | # loss: String (name of objective function), objective function or tf.keras.losses.Loss instance. See tf.keras.losses. 77 | # optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers. 78 | # metrics: List of metrics to be evaluated by the model during training and testing 79 | model.compile(loss='mse', optimizer=optimizer, metrics=['mae','mse']) 80 | 81 | return model 82 | 83 | # Create a model instance 84 | model = linear_model() 85 | 86 | # Print the model summary 87 | model.summary() 88 | 89 | # params 90 | n_epochs = 4000 91 | batch_size = 256 92 | n_idle_epochs = 100 93 | n_epochs_log = 200 94 | n_samples_save = n_epochs_log * trainInput.shape[0] 95 | print('A checkpoint is saved every {} samples'.format(n_samples_save)) 96 | 97 | # A mechanism that stops training if the validation loss has not improved for more than n_idle_epochs epochs. 98 | # See https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping for details.
99 | earlyStopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=n_idle_epochs, min_delta=0.001) 100 | 101 | # Creating a custom callback to print the log after a certain number of epochs 102 | # Check: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks 103 | predictions_list = [] 104 | class NEPOCHLogger(tf.keras.callbacks.Callback): 105 | def __init__(self,per_epoch=100): 106 | ''' 107 | per_epoch: Number of epochs to wait before printing the log 108 | ''' 109 | self.seen = 0 110 | self.per_epoch = per_epoch 111 | 112 | def on_epoch_end(self, epoch, logs=None): 113 | if epoch % self.per_epoch == 0: 114 | print('Epoch {}, loss {:.2f}, val_loss {:.2f}, mae {:.2f}, val_mae {:.2f}, mse {:.2f}, val_mse {:.2f}'\ 115 | .format(epoch, logs['loss'], logs['val_loss'],logs['mae'], logs['val_mae'],logs['mse'], logs['val_mse'])) 116 | 117 | # Instantiate the logger 118 | log_display = NEPOCHLogger(per_epoch=n_epochs_log) 119 | 120 | # Include the epoch in the file name (uses `str.format`) 121 | import os 122 | checkpoint_path = "training/cp-{epoch:05d}.ckpt" 123 | checkpoint_dir = os.path.dirname(checkpoint_path) 124 | 125 | # Create a callback that periodically saves the model's weights (save_freq is derived from n_epochs_log) 126 | checkpointCallback = tf.keras.callbacks.ModelCheckpoint( 127 | filepath=checkpoint_path, 128 | verbose=1, 129 | save_weights_only=True, 130 | save_freq=n_samples_save) 131 | 132 | # Save the weights using the `checkpoint_path` format 133 | model.save_weights(checkpoint_path.format(epoch=0)) 134 | 135 | # Define the Keras TensorBoard callback.
136 | logdir="logs/fit/" + datetime.now().strftime("%Y%m%d-%H%M%S") 137 | tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir) 138 | 139 | history = model.fit( 140 | trainInput, trainTarget, batch_size=batch_size, 141 | epochs=n_epochs, validation_split = 0.1, verbose=0, callbacks=[earlyStopping,log_display,tensorboard_callback,checkpointCallback]) 142 | 143 | # model.fit returns a History object for every Keras model 144 | # Let's explore what is inside history 145 | print('keys:', history.history.keys()) 146 | 147 | # Extract the desired values for plotting and turn them into numpy arrays 148 | mae = np.asarray(history.history['mae']) 149 | val_mae = np.asarray(history.history['val_mae']) 150 | 151 | # Creating the data frame 152 | num_values = len(mae) 153 | values = np.zeros((num_values,2), dtype=float) 154 | values[:,0] = mae 155 | values[:,1] = val_mae 156 | 157 | # Using pandas to frame the data 158 | steps = pd.RangeIndex(start=0,stop=num_values) 159 | data = pd.DataFrame(values, steps, columns=["training-mae", "val-mae"]) 160 | 161 | # Plotting 162 | sns.set(style="whitegrid") 163 | sns.lineplot(data=data, palette="tab10", linewidth=2.5) 164 | 165 | predictions = model.predict(testInput).flatten() 166 | a = plt.axes(aspect='equal') 167 | plt.scatter(predictions, testTarget, edgecolors=(0, 0, 0)) 168 | plt.xlabel('Predictions') 169 | plt.ylabel('True Values') 170 | lims = [0, 50] 171 | plt.xlim(lims) 172 | plt.ylim(lims) 173 | _ = plt.plot(lims, lims) 174 | 175 | # Get the saved checkpoint files 176 | checkpoints = [] 177 | for f_name in os.listdir(checkpoint_dir): 178 | if f_name.startswith('cp-'): 179 | file_with_no_ext = os.path.splitext(f_name)[0] 180 | checkpoints.append(file_with_no_ext) 181 | 182 | # Keep unique list elements 183 | checkpoints = list(set(checkpoints)) 184 | print('checkpoints:',checkpoints) 185 | 186 | # Load all model checkpoints and evaluate for each 187 | count = 0 188 | model_improvement_progress = False 189 | if
model_improvement_progress: 190 | for checkpoint in checkpoints: 191 | count += 1 192 | 193 | # Create a model instance 194 | model = linear_model() 195 | 196 | # Restore the weights 197 | path = os.path.join('training',checkpoint) 198 | model.load_weights(path) 199 | 200 | # Access the layer weights 201 | layer = model.get_layer('layer') 202 | w1,w0 = layer.get_weights() 203 | w1 = float(w1[0]) 204 | w0 = float(w0[0]) 205 | 206 | # Draw the scatter plot of data 207 | fig, ax = plt.subplots() 208 | x = testInput 209 | y = testTarget 210 | ax.scatter(x, y, edgecolors=(0, 0, 0)) 211 | ax.set_xlabel('RM') 212 | ax.set_ylabel('MEDV') 213 | 214 | # Plot the fitted line 215 | y_hat = w1*x + w0 216 | plt.plot(x, y_hat, '-r') -------------------------------------------------------------------------------- /codes/python/neural_networks/cnns.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """CNNs.ipynb 3 | 4 | Automatically generated by Colaboratory.
5 | 6 | Original file is located at 7 | https://colab.research.google.com/drive/1ibfKtpxC_hIhZlPbefCoqpAS7jTdyiFw 8 | """ 9 | 10 | import tensorflow as tf 11 | 12 | # Load MNIST data 13 | (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() 14 | # Preprocessing 15 | x_train = x_train / 255.0 16 | x_test = x_test / 255.0 17 | 18 | # Add one dimension to make the images 3D (height, width, channel) 19 | x_train = x_train[...,tf.newaxis] 20 | x_test = x_test[...,tf.newaxis] 21 | 22 | # Track the data type 23 | dataType, dataShape = x_train.dtype, x_train.shape 24 | print(f"Data type and shape x_train: {dataType} {dataShape}") 25 | labelType, labelShape = y_train.dtype, y_train.shape 26 | print(f"Data type and shape y_train: {labelType} {labelShape}") 27 | 28 | im_list = [] 29 | n_samples_to_show = 16 30 | c = 0 31 | for i in range(n_samples_to_show): 32 | im_list.append(x_train[i]) 33 | # Visualization 34 | import matplotlib.pyplot as plt 35 | from mpl_toolkits.axes_grid1 import ImageGrid 36 | fig = plt.figure(figsize=(4., 4.)) 37 | # Ref: https://matplotlib.org/3.1.1/gallery/axes_grid1/simple_axesgrid.html 38 | grid = ImageGrid(fig, 111, # similar to subplot(111) 39 | nrows_ncols=(4, 4), # creates a 4x4 grid of axes 40 | axes_pad=0.1, # pad between axes in inch. 41 | ) 42 | # Show image grid 43 | for ax, im in zip(grid, im_list): 44 | # Iterating over the grid returns the Axes.
45 | ax.imshow(im[:,:,0], 'gray') 46 | plt.show() 47 | 48 | """## Training""" 49 | 50 | # Model building 51 | NUM_CLASSES = 10 52 | model = tf.keras.Sequential([ 53 | tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)), 54 | tf.keras.layers.MaxPooling2D((2, 2)), 55 | tf.keras.layers.Conv2D(32, (3, 3), activation='relu'), 56 | tf.keras.layers.MaxPooling2D((2, 2)), 57 | tf.keras.layers.Conv2D(64, (3, 3), activation='relu'), 58 | tf.keras.layers.Flatten(), 59 | tf.keras.layers.Dense(32, activation='relu'), 60 | tf.keras.layers.Dense(NUM_CLASSES, activation='softmax')] 61 | ) 62 | 63 | # Compile the model with the high-level Keras API 64 | model.compile(optimizer='adam', 65 | loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), 66 | metrics=['accuracy']) 67 | 68 | # Model training 69 | model.fit(x_train, y_train, epochs=5) 70 | 71 | """## Evaluation""" 72 | 73 | eval_loss, eval_acc = model.evaluate(x_test, y_test, verbose=1) 74 | print('Eval accuracy percentage: {:.2f}'.format(eval_acc * 100)) 75 | 76 | -------------------------------------------------------------------------------- /codes/python/neural_networks/mlp.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """mlp.ipynb 3 | 4 | Automatically generated by Colaboratory.
5 | 6 | Original file is located at 7 | https://colab.research.google.com/drive/1SFWk7Ap06ZkvP2HmLhXLiyyqo-ei35M1 8 | """ 9 | 10 | import tensorflow as tf 11 | 12 | # Load MNIST data 13 | (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() 14 | # Preprocessing 15 | x_train = x_train / 255.0 16 | x_test = x_test / 255.0 17 | # Track the data type 18 | dataType = x_train.dtype 19 | print(f"Data type: {dataType}") 20 | labelType = y_test.dtype 21 | print(f"Label type: {labelType}") 22 | 23 | im_list = [] 24 | n_samples_to_show = 16 25 | c = 0 26 | for i in range(n_samples_to_show): 27 | im_list.append(x_train[i]) 28 | # Visualization 29 | import matplotlib.pyplot as plt 30 | from mpl_toolkits.axes_grid1 import ImageGrid 31 | fig = plt.figure(figsize=(4., 4.)) 32 | # Ref: https://matplotlib.org/3.1.1/gallery/axes_grid1/simple_axesgrid.html 33 | grid = ImageGrid(fig, 111, # similar to subplot(111) 34 | nrows_ncols=(4, 4), # creates a 4x4 grid of axes 35 | axes_pad=0.1, # pad between axes in inch. 36 | ) 37 | # Show image grid 38 | for ax, im in zip(grid, im_list): 39 | # Iterating over the grid returns the Axes.
40 | ax.imshow(im, 'gray') 41 | plt.show() 42 | 43 | # Model building 44 | NUM_CLASSES = 10 45 | model = tf.keras.Sequential([ 46 | tf.keras.layers.Flatten(input_shape=(28, 28)), 47 | tf.keras.layers.Dense(256, activation='relu'), 48 | tf.keras.layers.Dense(NUM_CLASSES, activation='softmax') 49 | ]) 50 | 51 | # Compile the model with the high-level Keras API 52 | model.compile(optimizer='adam', 53 | loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), 54 | metrics=['accuracy']) 55 | 56 | # Model training 57 | model.fit(x_train, y_train, epochs=10) 58 | 59 | eval_loss, eval_acc = model.evaluate(x_test, y_test, verbose=1) 60 | print('Eval accuracy percentage: {:.2f}'.format(eval_acc * 100)) 61 | 62 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | SPHINXPROJ = TensorFlow-World 8 | SOURCEDIR = . 9 | BUILDDIR = _build 10 | 11 | # Put it first so that "make" without argument is like "make help". 12 | help: 13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 14 | 15 | .PHONY: help Makefile 16 | 17 | # Catch-all target: route all unknown targets to Sphinx using the new 18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 19 | %: Makefile 20 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) -------------------------------------------------------------------------------- /docs/README.rst: -------------------------------------------------------------------------------- 1 |  2 | **************** 3 | TensorFlow World 4 | **************** 5 | .. image:: https://travis-ci.org/astorfi/TensorFlow-World.svg?branch=master 6 | :target: https://travis-ci.org/astorfi/TensorFlow-World 7 | ..
image:: https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat 8 | :target: https://github.com/astorfi/TensorFlow-World/issues 9 | .. image:: https://badges.frapsoft.com/os/v2/open-source.svg?v=102 10 | :target: https://github.com/ellerbrock/open-source-badge/ 11 | .. image:: https://coveralls.io/repos/github/astorfi/TensorFlow-World/badge.svg?branch=master 12 | :target: https://coveralls.io/github/astorfi/TensorFlow-World?branch=master 13 | .. image:: https://zenodo.org/badge/86115145.svg 14 | :target: https://zenodo.org/badge/latestdoi/86115145 15 | 16 | This repository aims to provide simple and ready-to-use tutorials for TensorFlow. The explanations are available in the wiki_ associated with this repository. Each tutorial includes its ``source code`` and its ``documentation``. 17 | 18 | .. image:: _img/mainpage/TensorFlow_World.gif 19 | 20 | .. The links. 21 | .. _wiki: https://github.com/astorfi/TensorFlow-World/wiki 22 | .. _TensorFlow: https://www.tensorflow.org/install/ 23 | 24 | ============ 25 | Motivation 26 | ============ 27 | 28 | There are several motivations for this repository. Some are related to TensorFlow itself, one of the best frameworks available at the moment 29 | this document is being written. So why has this repository been created, given all the other tutorials available on the web? 30 | 31 | ~~~~~~~~~~~~~~~~~~~~~ 32 | Why use TensorFlow? 33 | ~~~~~~~~~~~~~~~~~~~~~ 34 | 35 | Deep learning is of great interest these days, and rapid, optimized implementation of algorithms 36 | and architecture design requires a suitable software environment. TensorFlow is designed to facilitate this goal. The strong advantage of 37 | TensorFlow is its flexibility in designing highly modular models, which can also be a disadvantage for beginners, since many of 38 | the pieces must be considered together when creating a model.
This issue has been eased by the development of high-level APIs 39 | such as `Keras `_ and `Slim `_, 40 | which assemble many of the design puzzle pieces. The interesting point about TensorFlow is that **its traces can be found anywhere these days**. 41 | Many researchers and developers are using it, and *its community is growing at the speed of light*! So potential issues can 42 | be overcome easily, since they have likely been encountered by others among the large number of people involved in the TensorFlow community. 43 | 44 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 45 | What's the point of this repository? 46 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 47 | 48 | **Developing an open source project just for the sake of developing something is not the reason behind this effort**. 49 | Considering the large number of tutorials being added to this large community, this repository has been created to break the 50 | jump-in and jump-out cycle that happens to most open source projects, **but why and how**? 51 | 52 | First of all, what is the point of putting effort into something that most people will never stop by to look at? What is the point of creating something that does not 53 | help anyone in the developer and researcher community? Why spend time on something that can easily be forgotten? But **how do we try to do it?** Even at this 54 | very moment there are countless tutorials on TensorFlow, whether on model design or the TensorFlow 55 | workflow. Most of them are too complicated or suffer from a lack of documentation. Only a few tutorials are concise and well-structured 56 | and provide enough insight into their specific implemented models. The goal of this project is to help the community with structured tutorials 57 | and simple, optimized code implementations that provide better insight into how to use TensorFlow *quickly and efficiently*.
It is worth 58 | noting that, **the main goal of this project is providing well-documented tutorials and less-complicated codes**! 59 | 60 | 61 | 62 | ==================== 63 | TensorFlow Tutorials 64 | ==================== 65 | The tutorials in this repository are partitioned into relevant categories. 66 | 67 | 68 | +----+---------------------+----------------------------------------------------------------------------------------+----------------------------------------------+ 69 | | # | topic | Source Code | | 70 | +====+=====================+========================================================================================+==============================================+ 71 | | 1 | Start-up | `Welcome `_ / `IPython `_ | `Documentation `_ | 72 | +----+---------------------+----------------------------------------------------------------------------------------+----------------------------------------------+ 73 | | 2 | *TensorFLow Basics* | `Basic Math Operations `_ / `IPython `_ | `Documentation `_ | 74 | +----+---------------------+----------------------------------------------------------------------------------------+----------------------------------------------+ 75 | | 3 | *TensorFLow Basics* | `TensorFlow Variables `_ / `IPython `_ | `Documentation `_ | 76 | +----+---------------------+----------------------------------------------------------------------------------------+----------------------------------------------+ 77 | | 4 | *Machine Learning* |`Linear Regression`_ / `IPython `_ | `Documentation `_ | 78 | +----+---------------------+----------------------------------------------------------------------------------------+----------------------------------------------+ 79 | | 5 | *Machine Learning* | `Logistic Regression`_ / `IPython `_ | `Documentation `_ | 80 | +----+---------------------+----------------------------------------------------------------------------------------+----------------------------------------------+ 81 | | 6 | *Machine 
Learning* | `Linear SVM`_ / `IPython `_ | | 82 | +----+---------------------+----------------------------------------------------------------------------------------+----------------------------------------------+ 83 | | 7 | *Machine Learning* |`MultiClass Kernel SVM`_ / `IPython `_ | | 84 | +----+---------------------+----------------------------------------------------------------------------------------+----------------------------------------------+ 85 | | 8 | *Neural Networks* |`Multi Layer Perceptron`_ / `IPython `_ | | 86 | +----+---------------------+----------------------------------------------------------------------------------------+----------------------------------------------+ 87 | | 9 | *Neural Networks* | `Convolutional Neural Networks`_ | `Documentation `_ | 88 | +----+---------------------+----------------------------------------------------------------------------------------+----------------------------------------------+ 89 | | 10 | *Neural Networks* | `Undercomplete Autoencoder `_ | | 90 | +----+---------------------+----------------------------------------------------------------------------------------+----------------------------------------------+ 91 | 92 | .. ~~~~~~~~~~~~ 93 | .. **Welcome** 94 | .. ~~~~~~~~~~~~ 95 | 96 | .. The tutorial in this section is just a simple entrance to TensorFlow world. 97 | 98 | .. _welcomesourcecode: https://github.com/astorfi/TensorFlow-World/tree/master/codes/0-welcome 99 | .. _Documentationcnnwelcome: https://github.com/astorfi/TensorFlow-World/blob/master/docs/tutorials/0-welcome 100 | .. _ipythonwelcome: https://github.com/astorfi/TensorFlow-World/blob/master/codes/0-welcome/code/0-welcome.ipynb 101 | 102 | 103 | 104 | .. +---+---------------------------------------------+-------------------------------------------------+ 105 | .. | # | Source Code | | 106 | .. +===+=============================================+=================================================+ 107 | .. 
| 1 | `Welcome `_ | `Documentation `_ | 108 | .. +---+---------------------------------------------+-------------------------------------------------+ 109 | 110 | .. ~~~~~~~~~~ 111 | .. **Basics** 112 | .. ~~~~~~~~~~ 113 | .. These tutorials are related to basics of TensorFlow. 114 | 115 | .. _basicmathsourcecode: https://github.com/astorfi/TensorFlow-World/tree/master/codes/1-basics/basic_math_operations 116 | .. _Documentationbasicmath: https://github.com/astorfi/TensorFlow-World/blob/master/docs/tutorials/1-basics/basic_math_operations 117 | .. _ipythonbasicmath: https://github.com/astorfi/TensorFlow-World/blob/master/codes/1-basics/basic_math_operations/code/basic_math_operation.ipynb 118 | 119 | .. _ipythonvariabls: https://github.com/astorfi/TensorFlow-World/blob/master/codes/1-basics/variables/code/variables.ipynb 120 | .. _variablssourcecode: https://github.com/astorfi/TensorFlow-World/blob/master/codes/1-basics/variables/README.rst 121 | .. _Documentationvariabls: https://github.com/astorfi/TensorFlow-World/blob/master/docs/tutorials/1-basics/variables 122 | 123 | 124 | .. +---+-----------------------------------------------------+-------------------------------------------------+ 125 | .. | # | Source Code | | 126 | .. +===+=====================================================+=================================================+ 127 | .. | 1 | `Basic Math Operations `_ | `Documentation `_ | 128 | .. +---+-----------------------------------------------------+-------------------------------------------------+ 129 | .. | 2 | `TensorFlow Variables `_ | `Documentation `_ | 130 | .. +---+-----------------------------------------------------+-------------------------------------------------+ 131 | 132 | .. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 133 | .. **Machine Learning Basics** 134 | .. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 135 | .. We are going to present concepts of basic machine learning models and methods and showing how to implement them in Tensorflow. 136 | 137 | .. 
_Linear Regression: https://github.com/astorfi/TensorFlow-World/tree/master/codes/2-basics_in_machine_learning/linear_regression 138 | .. _LinearRegressionipython: https://github.com/astorfi/TensorFlow-World/tree/master/codes/2-basics_in_machine_learning/linear_regression/code/linear_regression.ipynb 139 | .. _Documentationlr: https://github.com/astorfi/TensorFlow-World/blob/master/docs/tutorials/2-basics_in_machine_learning/linear_regression 140 | 141 | .. _Logistic Regression: https://github.com/astorfi/TensorFlow-World/tree/master/codes/2-basics_in_machine_learning/logistic_regression 142 | .. _LogisticRegressionipython: https://github.com/astorfi/TensorFlow-World/tree/master/codes/2-basics_in_machine_learning/logistic_regression/code/logistic_regression.ipynb 143 | .. _LogisticRegDOC: https://github.com/astorfi/TensorFlow-World/tree/master/docs/tutorials/2-basics_in_machine_learning/logistic_regression 144 | 145 | .. _Linear SVM: https://github.com/astorfi/TensorFlow-World/tree/master/codes/2-basics_in_machine_learning/linear_svm 146 | .. _LinearSVMipython: https://github.com/astorfi/TensorFlow-World/tree/master/codes/2-basics_in_machine_learning/linear_svm/code/linear_svm.ipynb 147 | 148 | 149 | .. _MultiClass Kernel SVM: https://github.com/astorfi/TensorFlow-World/blob/master/codes/2-basics_in_machine_learning/multiclass_svm 150 | .. _MultiClassKernelSVMipython: https://github.com/astorfi/TensorFlow-World/blob/master/codes/2-basics_in_machine_learning/multiclass_svm/code/multiclass_svm.ipynb 151 | 152 | 153 | .. +---+---------------------------------------------+----------------------------------------+ 154 | .. | # | Source Code | | 155 | .. +===+=============================================+========================================+ 156 | .. | 1 | `Linear Regression`_ | `Documentation `_ | 157 | .. +---+---------------------------------------------+----------------------------------------+ 158 | .. | 2 | `Logistic Regression`_ | `Documentation `_ | 159 | .. 
+---+---------------------------------------------+----------------------------------------+ 160 | .. | 3 | `Linear SVM`_ | | 161 | .. +---+---------------------------------------------+----------------------------------------+ 162 | .. | 4 | `MultiClass Kernel SVM`_ | | 163 | .. +---+---------------------------------------------+----------------------------------------+ 164 | 165 | .. ~~~~~~~~~~~~~~~~~~~ 166 | .. **Neural Networks** 167 | .. ~~~~~~~~~~~~~~~~~~~ 168 | .. The tutorials in this section are related to neural network architectures. 169 | 170 | .. _Convolutional Neural Networks: https://github.com/astorfi/TensorFlow-World/tree/master/codes/3-neural_networks/convolutional-neural-network 171 | .. _Documentationcnn: https://github.com/astorfi/TensorFlow-World/blob/master/docs/tutorials/3-neural_network/convolutiona_neural_network 172 | 173 | .. _Multi Layer Perceptron: https://github.com/astorfi/TensorFlow-World/blob/master/codes/3-neural_networks/multi-layer-perceptron 174 | .. _MultiLayerPerceptronipython: https://github.com/astorfi/TensorFlow-World/blob/master/codes/3-neural_networks/multi-layer-perceptron/code/train_mlp.ipynb 175 | 176 | 177 | .. _udercompleteautoencodercode: https://github.com/astorfi/TensorFlow-World/tree/master/codes/3-neural_networks/undercomplete-autoencoder 178 | 179 | 180 | .. +---+---------------------------------------------+----------------------------------------+ 181 | .. | # | Source Code | | 182 | .. +===+=============================================+========================================+ 183 | .. | 1 | `Multi Layer Perceptron`_ | | 184 | .. +---+---------------------------------------------+----------------------------------------+ 185 | .. | 2 | `Convolutional Neural Networks`_ | `Documentation `_ | 186 | .. 
+---+---------------------------------------------+----------------------------------------+ 187 | 188 | 189 | 190 | ================================================= 191 | TensorFlow Installation and Environment Setup 192 | ================================================= 193 | 194 | .. _TensorFlow Installation: https://github.com/astorfi/TensorFlow-World/tree/master/docs/tutorials/installation 195 | 196 | In order to install TensorFlow, please refer to the following link: 197 | 198 | * `TensorFlow Installation`_ 199 | 200 | 201 | .. image:: _img/mainpage/installation.gif 202 | :target: https://www.youtube.com/watch?v=_3JFEPk4qQY&t=2s 203 | 204 | 205 | Installing inside a virtual environment is recommended to prevent package conflicts and to allow full customization of the working environment. The TensorFlow version employed for these tutorials is `1.1`. However, files written for older versions can be transformed to newer ones (e.g., version `1.1`) using the instructions available in the following link: 206 | 207 | * `Transitioning to TensorFlow 1.0 `_ 208 | 209 | ===================== 210 | Some Useful Tutorials 211 | ===================== 212 | 213 | * `TensorFlow Examples `_ - TensorFlow tutorials and code examples for beginners 214 | * `Sungjoon's TensorFlow-101 `_ - TensorFlow tutorials written in Python with Jupyter Notebook 215 | * `Terry Um’s TensorFlow Exercises `_ - Re-create the codes from other TensorFlow examples 216 | * `Classification on time series `_ - Recurrent Neural Network classification in TensorFlow with LSTM on cellphone sensor data 217 | 218 | 219 | 220 | ============= 221 | Contributing 222 | ============= 223 | 224 | When contributing to this repository, please first discuss the change you wish to make via an issue, 225 | email, or any other method with the owners of this repository before making the change. *For typos, please 226 | do not create a pull request. Instead, declare them in issues or email the repository owner*.
227 | 228 | Please note we have a code of conduct; please follow it in all your interactions with the project. 229 | 230 | ~~~~~~~~~~~~~~~~~~~~ 231 | Pull Request Process 232 | ~~~~~~~~~~~~~~~~~~~~ 233 | 234 | Please consider the following criteria so that we can review your contribution effectively: 235 | 236 | * A pull request is mainly expected to be a code suggestion or improvement. 237 | * A pull request related to non-code sections is expected to make a significant difference in the documentation. Otherwise, please raise the matter in the issues section instead. 238 | * Ensure any install or build dependencies are removed before the end of a build when creating a pull request. 239 | * Add comments detailing changes to the interface; this includes new environment variables, exposed ports, useful file locations, and container parameters. 240 | * You may merge a pull request once you have the sign-off of at least one other developer; if you do not have permission to merge, you may ask the owner to merge it for you once all checks have passed. 241 | 242 | ~~~~~~~~~~~ 243 | Final Note 244 | ~~~~~~~~~~~ 245 | 246 | We look forward to your kind feedback. Please help us improve this open-source project and make our work better. 247 | To contribute, please create a pull request and we will review it promptly. Once again, we appreciate 248 | your kind feedback and thorough code inspections.
249 | -------------------------------------------------------------------------------- /docs/_img/0-welcome/graph-run.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/0-welcome/graph-run.png -------------------------------------------------------------------------------- /docs/_img/1-basics/basic_math_operations/graph-run.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/1-basics/basic_math_operations/graph-run.png -------------------------------------------------------------------------------- /docs/_img/1-basics/readme.rst: -------------------------------------------------------------------------------- 1 | ============================== 2 | Basics 3 | ============================== 4 | 5 | 6 | -------------------------------------------------------------------------------- /docs/_img/2-basics_in_machine_learning/linear_regression/updating_model.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/2-basics_in_machine_learning/linear_regression/updating_model.gif -------------------------------------------------------------------------------- /docs/_img/3-neural_network/autoencoder/README.rst: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /docs/_img/3-neural_network/autoencoder/ae.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/autoencoder/ae.png -------------------------------------------------------------------------------- /docs/_img/3-neural_network/convolutiona_neural_network/accuracy_train.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/convolutiona_neural_network/accuracy_train.png -------------------------------------------------------------------------------- /docs/_img/3-neural_network/convolutiona_neural_network/activation_fc4_train.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/convolutiona_neural_network/activation_fc4_train.png -------------------------------------------------------------------------------- /docs/_img/3-neural_network/convolutiona_neural_network/architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/convolutiona_neural_network/architecture.png -------------------------------------------------------------------------------- /docs/_img/3-neural_network/convolutiona_neural_network/classifier_image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/convolutiona_neural_network/classifier_image.png -------------------------------------------------------------------------------- /docs/_img/3-neural_network/convolutiona_neural_network/convlayer.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/convolutiona_neural_network/convlayer.png -------------------------------------------------------------------------------- /docs/_img/3-neural_network/convolutiona_neural_network/graph.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/convolutiona_neural_network/graph.png -------------------------------------------------------------------------------- /docs/_img/3-neural_network/convolutiona_neural_network/histogram_fc4_train.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/convolutiona_neural_network/histogram_fc4_train.png -------------------------------------------------------------------------------- /docs/_img/3-neural_network/convolutiona_neural_network/loss_accuracy_train.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/convolutiona_neural_network/loss_accuracy_train.png -------------------------------------------------------------------------------- /docs/_img/3-neural_network/convolutiona_neural_network/loss_train.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/convolutiona_neural_network/loss_train.png -------------------------------------------------------------------------------- 
/docs/_img/3-neural_network/convolutiona_neural_network/terminal_training.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/convolutiona_neural_network/terminal_training.png -------------------------------------------------------------------------------- /docs/_img/3-neural_network/convolutiona_neural_network/test_accuracy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/convolutiona_neural_network/test_accuracy.png -------------------------------------------------------------------------------- /docs/_img/3-neural_network/multi-layer-perceptron/neural-network.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/3-neural_network/multi-layer-perceptron/neural-network.png -------------------------------------------------------------------------------- /docs/_img/mainpage/TensorFlow_World.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/mainpage/TensorFlow_World.gif -------------------------------------------------------------------------------- /docs/_img/mainpage/Tensor_GIF.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/mainpage/Tensor_GIF.gif -------------------------------------------------------------------------------- /docs/_img/mainpage/Tensor_GIF_ff.gif: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/mainpage/Tensor_GIF_ff.gif -------------------------------------------------------------------------------- /docs/_img/mainpage/installation.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/docs/_img/mainpage/installation.gif -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # TensorFlow-World documentation build configuration file, created by 4 | # sphinx-quickstart on Wed Jun 28 22:26:19 2017. 5 | # 6 | # This file is execfile()d with the current directory set to its 7 | # containing dir. 8 | # 9 | # Note that not all possible configuration values are present in this 10 | # autogenerated file. 11 | # 12 | # All configuration values have a default; values that are commented out 13 | # serve to show the default. 14 | 15 | # If extensions (or modules to document with autodoc) are in another directory, 16 | # add these directories to sys.path here. If the directory is relative to the 17 | # documentation root, use os.path.abspath to make it absolute, like shown here. 18 | # 19 | # import os 20 | # import sys 21 | # sys.path.insert(0, os.path.abspath('.')) 22 | 23 | 24 | # -- General configuration ------------------------------------------------ 25 | 26 | # If your documentation needs a minimal Sphinx version, state it here. 27 | # 28 | # needs_sphinx = '1.0' 29 | 30 | # Add any Sphinx extension module names here, as strings. They can be 31 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 32 | # ones. 
33 | extensions = ['sphinx.ext.autodoc', 34 | 'sphinx.ext.mathjax', 35 | 'sphinx.ext.viewcode', 36 | 'sphinx.ext.githubpages'] 37 | 38 | # Add any paths that contain templates here, relative to this directory. 39 | templates_path = ['_templates'] 40 | 41 | # The suffix(es) of source filenames. 42 | # You can specify multiple suffix as a list of string: 43 | # 44 | # source_suffix = ['.rst', '.md'] 45 | source_suffix = '.rst' 46 | 47 | # The master toctree document. 48 | master_doc = 'index' 49 | 50 | # General information about the project. 51 | # project = u'TensorFlow-World' 52 | copyright = u'2017, Amirsina Torfi' 53 | author = u'Amirsina Torfi' 54 | 55 | # The version info for the project you're documenting, acts as replacement for 56 | # |version| and |release|, also used in various other places throughout the 57 | # built documents. 58 | # 59 | # The short X.Y version. 60 | version = u'1.0' 61 | # The full version, including alpha/beta/rc tags. 62 | release = u'1.0' 63 | 64 | # The language for content autogenerated by Sphinx. Refer to documentation 65 | # for a list of supported languages. 66 | # 67 | # This is also used if you do content translation via gettext catalogs. 68 | # Usually you set "language" from the command line for these cases. 69 | language = None 70 | 71 | # List of patterns, relative to source directory, that match files and 72 | # directories to ignore when looking for source files. 73 | # This patterns also effect to html_static_path and html_extra_path 74 | exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] 75 | 76 | # The name of the Pygments (syntax highlighting) style to use. 77 | pygments_style = 'sphinx' 78 | 79 | # If true, `todo` and `todoList` produce output, else they produce nothing. 80 | todo_include_todos = False 81 | 82 | 83 | # -- Options for HTML output ---------------------------------------------- 84 | 85 | # The theme to use for HTML and HTML Help pages. See the documentation for 86 | # a list of builtin themes. 
87 | # 88 | html_theme = 'alabaster' 89 | 90 | # Theme options are theme-specific and customize the look and feel of a theme 91 | # further. For a list of options available for each theme, see the 92 | # documentation. 93 | # 94 | # html_theme_options = {} 95 | 96 | html_theme_options = { 97 | 'show_powered_by': False, 98 | 'github_user': 'astorfi', 99 | 'github_repo': 'TensorFlow-World', 100 | 'github_banner': True, 101 | 'show_related': False 102 | } 103 | 104 | # Add any paths that contain custom static files (such as style sheets) here, 105 | # relative to this directory. They are copied after the builtin static files, 106 | # so a file named "default.css" will overwrite the builtin "default.css". 107 | html_static_path = ['_static'] 108 | 109 | # Title 110 | html_title = 'TensorFlow World' 111 | 112 | 113 | # -- Options for HTMLHelp output ------------------------------------------ 114 | 115 | # Output file base name for HTML help builder. 116 | htmlhelp_basename = 'TensorFlow-Worlddoc' 117 | 118 | 119 | # If true, links to the reST sources are added to the pages. 120 | html_show_sourcelink = False 121 | 122 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 123 | html_show_sphinx = False 124 | 125 | 126 | # -- Options for LaTeX output --------------------------------------------- 127 | 128 | latex_elements = { 129 | # The paper size ('letterpaper' or 'a4paper'). 130 | # 131 | # 'papersize': 'letterpaper', 132 | 133 | # The font size ('10pt', '11pt' or '12pt'). 134 | # 135 | # 'pointsize': '10pt', 136 | 137 | # Additional stuff for the LaTeX preamble. 138 | # 139 | # 'preamble': '', 140 | 141 | # Latex figure (float) alignment 142 | # 143 | # 'figure_align': 'htbp', 144 | } 145 | 146 | # Grouping the document tree into LaTeX files. List of tuples 147 | # (source start file, target name, title, 148 | # author, documentclass [howto, manual, or own class]). 
149 | latex_documents = [ 150 | (master_doc, 'TensorFlow-World.tex', u'TensorFlow-World Documentation', 151 | u'Amirsina Torfi', 'manual'), 152 | ] 153 | 154 | 155 | # -- Options for manual page output --------------------------------------- 156 | 157 | # One entry per manual page. List of tuples 158 | # (source start file, name, description, authors, manual section). 159 | man_pages = [ 160 | (master_doc, 'tensorflow-world', u'TensorFlow-World Documentation', 161 | [author], 1) 162 | ] 163 | 164 | 165 | # -- Options for Texinfo output ------------------------------------------- 166 | 167 | # Grouping the document tree into Texinfo files. List of tuples 168 | # (source start file, target name, title, author, 169 | # dir menu entry, description, category) 170 | texinfo_documents = [ 171 | (master_doc, 'TensorFlow-World', u'TensorFlow-World Documentation', 172 | author, 'TensorFlow-World', 'One line description of project.', 173 | 'Miscellaneous'), 174 | ] 175 | 176 | 177 | 178 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | .. include:: ../README.rst 2 | -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | pushd %~dp0 4 | 5 | REM Command file for Sphinx documentation 6 | 7 | if "%SPHINXBUILD%" == "" ( 8 | set SPHINXBUILD=sphinx-build 9 | ) 10 | set SOURCEDIR=. 11 | set BUILDDIR=_build 12 | set SPHINXPROJ=TensorFlow-World 13 | 14 | if "%1" == "" goto help 15 | 16 | %SPHINXBUILD% >NUL 2>NUL 17 | if errorlevel 9009 ( 18 | echo. 19 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 20 | echo.installed, then set the SPHINXBUILD environment variable to point 21 | echo.to the full path of the 'sphinx-build' executable. 
Alternatively you 22 | echo.may add the Sphinx directory to PATH. 23 | echo. 24 | echo.If you don't have Sphinx installed, grab it from 25 | echo.http://sphinx-doc.org/ 26 | exit /b 1 27 | ) 28 | 29 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% 30 | goto end 31 | 32 | :help 33 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% 34 | 35 | :end 36 | popd 37 | -------------------------------------------------------------------------------- /docs/tutorials/0-welcome/README.rst: -------------------------------------------------------------------------------- 1 | ============================ 2 | Welcome to TensorFlow World 3 | ============================ 4 | 5 | .. _this link: https://github.com/astorfi/TensorFlow-World/tree/master/Tutorials/0-welcome 6 | 7 | The tutorials in this section are just a starting point for going into the TensorFlow world. 8 | 9 | We use TensorBoard for visualizing the outcomes. TensorBoard is the graph visualization tool provided by TensorFlow. In Google’s words: “The computations you'll use TensorFlow for - like training a massive deep neural network - can be complex and confusing. To make it easier to understand, debug, and optimize TensorFlow programs, we've included a suite of visualization tools called TensorBoard.” A simple TensorBoard implementation is used in this tutorial. 10 | 11 | **NOTE:** 12 | 13 | * The details of summary operations, TensorBoard, and their advantages are beyond the scope of this tutorial and will be presented in more advanced tutorials. 14 | 15 | 16 | -------------------------- 17 | Preparing the environment 18 | -------------------------- 19 | 20 | First, we have to import the necessary libraries. 21 | 22 | .. code:: python 23 | 24 | from __future__ import print_function 25 | import tensorflow as tf 26 | import os 27 | 28 | Since we aim to use TensorBoard, we need a directory to store the information (the operations and, if desired by the user, their corresponding outputs).
This information is exported to ``event files`` by TensorFlow. The event files can be transformed into visual data so that the user is able to evaluate the architecture and the operations. The ``path`` to store these event files is defined as below: 29 | 30 | .. code:: python 31 | 32 | # The default path for saving event files is the same folder as this python file. 33 | tf.app.flags.DEFINE_string( 34 | 'log_dir', os.path.dirname(os.path.abspath(__file__)) + '/logs', 35 | 'Directory where event logs are written to.') 36 | 37 | # Store all elements in the FLAGS structure! 38 | FLAGS = tf.app.flags.FLAGS 39 | 40 | The ``os.path.dirname(os.path.abspath(__file__))`` gets the directory name of the current python file. ``tf.app.flags.FLAGS`` points to all defined flags using the ``FLAGS`` indicator. From now on, the flags can be accessed as ``FLAGS.flag_name``. 41 | 42 | For convenience, it is useful to only work with ``absolute paths``. With the following script, the user is prompted to use absolute paths for the ``log_dir`` directory. 43 | 44 | .. code:: python 45 | 46 | # The user is prompted to input an absolute path. 47 | # os.path.expanduser is leveraged to transform the '~' sign into the corresponding path indicator. 48 | # Example: '~/logs' equals '/home/username/logs' 49 | if not os.path.isabs(os.path.expanduser(FLAGS.log_dir)): 50 | raise ValueError('You must assign absolute path for --log_dir') 51 | 52 | ----------------- 53 | Inauguration 54 | ----------------- 55 | 56 | A sample sentence can be defined with TensorFlow: 57 | 58 | .. code:: python 59 | 60 | # Defining a sample sentence! 61 | welcome = tf.constant('Welcome to TensorFlow world!', name="welcome") 62 | 63 | The ``tf.`` operator performs the specific operation and the output will be a ``Tensor``. The attribute ``name="some_name"`` is defined for better TensorBoard visualization, as we will see later in this tutorial.
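The absolute-path check above relies only on the Python standard library, so its behavior can be illustrated independently of TensorFlow. The following is a minimal sketch; ``validate_log_dir`` is a hypothetical helper name, and the paths are illustrative placeholders:

```python
import os

def validate_log_dir(log_dir):
    # Mirror the tutorial's flag check (hypothetical helper, not part of
    # the tutorial code): expand '~' first, then require the result to be
    # an absolute path.
    if not os.path.isabs(os.path.expanduser(log_dir)):
        raise ValueError('You must assign absolute path for --log_dir')
    return os.path.expanduser(log_dir)

print(validate_log_dir('/tmp/logs'))   # an absolute path is accepted as-is
try:
    validate_log_dir('relative/logs')  # a relative path is rejected
except ValueError as err:
    print(err)
```

Running this sketch shows why a value such as ``'~/logs'`` passes the check once expanded, while a plain relative path does not.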
64 | 65 | ------------------- 66 | Run the Experiment 67 | ------------------- 68 | 69 | The ``session``, which is the environment for running the operations, is executed as below: 70 | 71 | .. code:: python 72 | 73 | # Run the session 74 | with tf.Session() as sess: 75 | writer = tf.summary.FileWriter(os.path.expanduser(FLAGS.log_dir), sess.graph) 76 | print("output: ", sess.run(welcome)) 77 | 78 | # Closing the writer. 79 | writer.close() 80 | sess.close() 81 | 82 | The ``tf.summary.FileWriter`` is defined to write the summaries into ``event files``. The ``sess.run()`` command must be used to evaluate any ``Tensor``; otherwise the operation won't be executed. Finally, ``writer.close()`` closes the summary writer. 83 | 84 | 85 | -------------------------------------------------------------------------------- /docs/tutorials/1-basics/basic_math_operations/README.rst: -------------------------------------------------------------------------------- 1 | ============================ 2 | Welcome to TensorFlow World 3 | ============================ 4 | 5 | .. _this link: https://github.com/astorfi/TensorFlow-World/tree/master/codes/0-welcome 6 | 7 | The tutorials in this section are just a starting point for going into the TensorFlow world. 8 | 9 | We use TensorBoard for visualizing the outcomes. TensorBoard is the graph visualization tool provided by TensorFlow. In Google’s words: “The computations you'll use TensorFlow for - like training a massive deep neural network - can be complex and confusing. To make it easier to understand, debug, and optimize TensorFlow programs, we've included a suite of visualization tools called TensorBoard.” A simple TensorBoard implementation is used in this tutorial. 10 | 11 | **NOTE:** 12 | 13 | * The details of summary operations, TensorBoard, and their advantages are beyond the scope of this tutorial and will be presented in more advanced tutorials.
14 | 15 | 16 | -------------------------- 17 | Preparing the environment 18 | -------------------------- 19 | 20 | First, we have to import the necessary libraries. 21 | 22 | .. code:: python 23 | 24 | from __future__ import print_function 25 | import tensorflow as tf 26 | import os 27 | 28 | Since we aim to use TensorBoard, we need a directory to store the information (the operations and, if desired by the user, their corresponding outputs). This information is exported to ``event files`` by TensorFlow. The event files can be transformed into visual data so that the user is able to evaluate the architecture and the operations. The ``path`` to store these event files is defined as below: 29 | 30 | .. code:: python 31 | 32 | # The default path for saving event files is the same folder as this python file. 33 | tf.app.flags.DEFINE_string( 34 | 'log_dir', os.path.dirname(os.path.abspath(__file__)) + '/logs', 35 | 'Directory where event logs are written to.') 36 | 37 | # Store all elements in the FLAGS structure! 38 | FLAGS = tf.app.flags.FLAGS 39 | 40 | The ``os.path.dirname(os.path.abspath(__file__))`` gets the directory name of the current python file. ``tf.app.flags.FLAGS`` points to all defined flags using the ``FLAGS`` indicator. From now on, the flags can be accessed as ``FLAGS.flag_name``. 41 | 42 | For convenience, it is useful to only work with ``absolute paths``. With the following script, the user is prompted to use absolute paths for the ``log_dir`` directory. 43 | 44 | .. code:: python 45 | 46 | # The user is prompted to input an absolute path. 47 | # os.path.expanduser is leveraged to transform the '~' sign into the corresponding path indicator. 48 | # Example: '~/logs' equals '/home/username/logs' 49 | if not os.path.isabs(os.path.expanduser(FLAGS.log_dir)): 50 | raise ValueError('You must assign absolute path for --log_dir') 51 | 52 | -------- 53 | Basics 54 | -------- 55 | 56 | Some basic math operations can be defined with TensorFlow: 57 | 58 | ..
code:: python 59 | 60 | # Defining some constant values 61 | a = tf.constant(5.0, name="a") 62 | b = tf.constant(10.0, name="b") 63 | 64 | # Some basic operations 65 | x = tf.add(a, b, name="add") 66 | y = tf.div(a, b, name="divide") 67 | 68 | The ``tf.`` operator performs the specific operation and the output will be a ``Tensor``. The attribute ``name="some_name"`` is defined for better TensorBoard visualization, as we will see later in this tutorial. 69 | 70 | ------------------- 71 | Run the Experiment 72 | ------------------- 73 | 74 | The ``session``, which is the environment for running the operations, is executed as below: 75 | 76 | .. code:: python 77 | 78 | # Run the session 79 | with tf.Session() as sess: 80 | writer = tf.summary.FileWriter(os.path.expanduser(FLAGS.log_dir), sess.graph) 81 | print("output: ", sess.run([a, b, x, y])) 82 | 83 | # Closing the writer. 84 | writer.close() 85 | sess.close() 86 | 87 | The ``tf.summary.FileWriter`` is defined to write the summaries into ``event files``. The ``sess.run()`` command must be used to evaluate any ``Tensor``; otherwise the operation won't be executed. Finally, ``writer.close()`` closes the summary writer. 88 | 89 | -------- 90 | Results 91 | -------- 92 | 93 | The result of running the code in the terminal is as below: 94 | 95 | .. code:: shell 96 | 97 | [5.0, 10.0, 15.0, 0.5] 98 | 99 | 100 | If we run TensorBoard using ``tensorboard --logdir="absolute/path/to/log_dir"``, we get the following when visualizing the ``Graph``: 101 | 102 | .. figure:: https://github.com/astorfi/TensorFlow-World/blob/master/docs/_img/1-basics/basic_math_operations/graph-run.png 103 | :scale: 30 % 104 | :align: center 105 | 106 | **Figure 1:** The TensorFlow Graph.
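As a quick sanity check, the arithmetic that the graph performs can be reproduced in plain Python, with no TensorFlow required. This only confirms the expected terminal output, not the graph mechanics:

```python
# Re-compute the graph's constants and operations with plain Python floats.
a = 5.0
b = 10.0
x = a + b  # mirrors tf.add(a, b)
y = a / b  # mirrors tf.div(a, b)

print([a, b, x, y])  # [5.0, 10.0, 15.0, 0.5], matching the terminal output above
```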
107 | 108 | -------------------------------------------------------------------------------- /docs/tutorials/1-basics/readme.rst: -------------------------------------------------------------------------------- 1 | ============================== 2 | Basics 3 | ============================== 4 | 5 | 6 | -------------------------------------------------------------------------------- /docs/tutorials/1-basics/variables/README.rst: -------------------------------------------------------------------------------- 1 | Introduction to TensorFlow Variables: Creation, Initialization 2 | -------------------------------------------------------------- 3 | 4 | This tutorial deals with defining and initializing TensorFlow variables. 5 | 6 | Introduction 7 | ------------ 8 | 9 | Defining ``variables`` is necessary because they hold the parameters. 10 | Without parameters, training, updating, saving, restoring, and any 11 | other operations cannot be performed. The defined variables in 12 | TensorFlow are just tensors with certain shapes and types. The tensors 13 | must be initialized with values to become valid. In this tutorial, we 14 | are going to explain how to ``define`` and ``initialize`` variables. The 15 | `source 16 | code `__ 17 | is available on the dedicated GitHub repository. 18 | 19 | Creating variables 20 | ------------------ 21 | 22 | To generate a variable, the ``tf.Variable()`` class is used. When 23 | we define a variable, we basically pass a ``tensor`` and its ``value`` 24 | to the graph. Specifically, the following happens: 25 | 26 | - A ``variable`` tensor that holds a value is passed to the 27 | graph. 28 | - By using ``tf.assign``, an initializer sets the initial variable value. 29 | 30 | Some arbitrary variables can be defined as follows: 31 | 32 | ..
code:: python 33 | 34 | 35 | import tensorflow as tf 36 | from tensorflow.python.framework import ops 37 | 38 | ####################################### 39 | ######## Defining Variables ########### 40 | ####################################### 41 | 42 | # Create three variables with some default values. 43 | weights = tf.Variable(tf.random_normal([2, 3], stddev=0.1), 44 | name="weights") 45 | biases = tf.Variable(tf.zeros([3]), name="biases") 46 | custom_variable = tf.Variable(tf.zeros([3]), name="custom") 47 | 48 | # Get all the variables' tensors and store them in a list. 49 | all_variables_list = ops.get_collection(ops.GraphKeys.GLOBAL_VARIABLES) 50 | 51 | 52 | In the above script, ``ops.get_collection`` gets the list of all defined variables 53 | from the defined graph. The ``name`` key defines a specific name for each 54 | variable on the graph. 55 | 56 | Initialization 57 | -------------- 58 | 59 | ``Initializers`` of the variables must be run before all other 60 | operations in the model. As an analogy, consider the starter of 61 | a car. Instead of running an initializer, variables can also be 62 | ``restored`` from saved models such as a checkpoint file. Variables 63 | can be initialized globally, specifically, or from other variables. We 64 | investigate the different choices in the subsequent sections. 65 | 66 | Initializing Specific Variables 67 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 68 | 69 | By using ``tf.variables_initializer``, we can explicitly command 70 | TensorFlow to initialize only certain variables. The script is as follows: 71 | 72 | .. code:: python 73 | 74 | # "variable_list_custom" is the list of variables that we want to initialize. 75 | variable_list_custom = [weights, custom_variable] 76 | 77 | # The initializer 78 | init_custom_op = tf.variables_initializer(var_list=variable_list_custom) 79 | 80 | Note that custom initialization does not mean that we don't need to 81 | initialize other variables!
Any variable that operations in the 82 | graph will be performed upon must be initialized or restored from 83 | saved variables. The above only demonstrates how we can initialize 84 | specific variables by hand. 85 | 86 | Global variable initialization 87 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 88 | 89 | All variables can be initialized at once using 90 | ``tf.global_variables_initializer()``. This op must be run after the model has been constructed. 91 | The script is as below: 92 | 93 | .. code:: python 94 | 95 | # Method-1 96 | # Add an op to initialize the variables. 97 | init_all_op = tf.global_variables_initializer() 98 | 99 | # Method-2 100 | init_all_op = tf.variables_initializer(var_list=all_variables_list) 101 | 102 | Both of the above methods are identical. We only provide the second one to 103 | demonstrate that ``tf.global_variables_initializer()`` is nothing 104 | but ``tf.variables_initializer`` when you pass all the variables as the input argument. 105 | 106 | Initializing a variable using other existing variables 107 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 108 | 109 | New variables can be initialized using other existing variables' initial 110 | values by taking those values using ``initialized_value()``. 111 | 112 | Initialization using predefined variables' values: 113 | 114 | .. code:: python 115 | 116 | # Create another variable with the same value as 'weights'. 117 | WeightsNew = tf.Variable(weights.initialized_value(), name="WeightsNew") 118 | 119 | # Now, the variable must be initialized. 120 | init_WeightsNew_op = tf.variables_initializer(var_list=[WeightsNew]) 121 | 122 | As can be seen from the above script, the ``WeightsNew`` variable is 123 | initialized with the predefined value of ``weights``. 124 | 125 | Running the session 126 | ------------------- 127 | 128 | All we did so far was to define the initializers' ops and put them on the 129 | graph.
To truly initialize the variables, the defined initializer ops must be run
in a session. The script is as follows:

Running the session for initialization:

.. code:: python

    with tf.Session() as sess:
        # Run the initializer operations.
        sess.run(init_all_op)
        sess.run(init_custom_op)
        sess.run(init_WeightsNew_op)

Each of the initializers has been run separately within the session.

Summary
-------

In this tutorial, we walked through variable creation and initialization.
Global, custom, and inherited variable initialization have been
investigated. In future posts, we will investigate how to save and restore
variables. Restoring a variable eliminates the need to initialize it.

--------------------------------------------------------------------------------
/docs/tutorials/2-basics_in_machine_learning/linear_regression/README.rst:
--------------------------------------------------------------------------------

Sections
~~~~~~~~

- `Introduction <#Introduction>`__
- `Description of the Overall
  Process <#Description%20of%20the%20Overall%20Process>`__
- `How to Do It in Code? <#How%20to%20Do%20It%20in%20Code?>`__
- `Summary <#Summary>`__

Linear Regression using TensorFlow
----------------------------------

This tutorial is about training a linear model with TensorFlow to fit the
data. Alternatively, you can check this `blog post <blogpostlinearregression_>`_.

.. _blogpostlinearregression: http://www.machinelearninguru.com/deep_learning/tensorflow/machine_learning_basics/linear_regresstion/linear_regression.html

Introduction
------------

In machine learning and statistics, linear regression is the modeling of
the relationship between a dependent variable such as Y and at least one
independent variable such as X.
In linear regression, the linear relationship is modeled by a predictor
function whose parameters are estimated from the data; this is called a
linear model. The main advantage of the linear regression algorithm is its
simplicity, which makes it very straightforward to interpret the resulting
model and map the data into a new space. In this article, we introduce how
to train a linear model using TensorFlow and how to visualize the resulting
model.

Description of the Overall Process
----------------------------------

In order to train the model, TensorFlow loops through the data to find the
optimal line (as we have a linear model) that fits the data. The linear
relationship between the two variables X and Y is estimated by designing an
appropriate optimization problem, which requires a proper loss function.
The dataset is available from the `Stanford course CS 20SI `__: TensorFlow
for Deep Learning Research.

How to Do It in Code?
---------------------

The process starts by loading the necessary libraries and the dataset:

.. code:: python

    import matplotlib.pyplot as plt
    import numpy as np
    import tensorflow as tf
    import xlrd

    # Data file provided by the Stanford course CS 20SI: TensorFlow for Deep Learning Research.
    # https://github.com/chiphuyen/tf-stanford-tutorials
    DATA_FILE = "data/fire_theft.xls"

    # Read the data from the .xls file.
    book = xlrd.open_workbook(DATA_FILE, encoding_override="utf-8")
    sheet = book.sheet_by_index(0)
    data = np.asarray([sheet.row_values(i) for i in range(1, sheet.nrows)])
    num_samples = sheet.nrows - 1

    #######################
    ## Defining flags #####
    #######################
    tf.app.flags.DEFINE_integer(
        'num_epochs', 50, 'The number of epochs for training the model. Default=50')
    # Store all elements in the FLAGS structure!
    FLAGS = tf.app.flags.FLAGS

Then we continue by defining and initializing the necessary variables:

.. code:: python

    # Creating the weight and bias.
    # The defined variables will be initialized to zero.
    W = tf.Variable(0.0, name="weights")
    b = tf.Variable(0.0, name="bias")

After that, we should define the necessary functions. Separate tabs
demonstrate the defined functions:

.. code:: python

    def inputs():
        """
        Defining the placeholders.
        :return:
            The data and label placeholders.
        """
        X = tf.placeholder(tf.float32, name="X")
        Y = tf.placeholder(tf.float32, name="Y")
        return X, Y

.. code:: python

    def inference(X):
        """
        Forward passing the X.
        :param X: Input.
        :return: X*W + b.
        """
        return X * W + b

.. code:: python

    def loss(X, Y):
        """
        Compute the loss by comparing the predicted value to the actual label.
        :param X: The input.
        :param Y: The label.
        :return: The loss over the samples.
        """
        # Making the prediction.
        Y_predicted = inference(X)
        return tf.squared_difference(Y, Y_predicted)

.. code:: python

    # The training function.
    def train(loss):
        learning_rate = 0.0001
        return tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

Next, we loop through different epochs of the data and perform the
optimization process:

.. code:: python

    with tf.Session() as sess:

        # Initialize the variables [W and b].
        sess.run(tf.global_variables_initializer())

        # Get the input tensors.
        X, Y = inputs()

        # Return the train loss and create the train_op.
        train_loss = loss(X, Y)
        train_op = train(train_loss)

        # Train the model.
        for epoch_num in range(FLAGS.num_epochs):
            for x, y in data:
                # Session runs train_op to minimize loss.
                loss_value, _ = sess.run([train_loss, train_op], feed_dict={X: x, Y: y})

            # Displaying the loss per epoch.
            print('epoch %d, loss=%f' % (epoch_num + 1, loss_value))

        # Save the values of weight and bias.
        wcoeff, bias = sess.run([W, b])

In the above code, ``sess.run(tf.global_variables_initializer())``
initializes all the defined variables globally. The ``train_op`` is built
upon ``train_loss`` and is updated at each step. In the end, the parameters
of the linear model, i.e., ``wcoeff`` and ``bias``, are returned. For
evaluation, the prediction line and the original data are plotted to show
how well the model fits the data:

.. code:: python

    ###############################
    #### Evaluate and plot ########
    ###############################
    Input_values = data[:, 0]
    Labels = data[:, 1]
    Prediction_values = data[:, 0] * wcoeff + bias
    plt.plot(Input_values, Labels, 'ro', label='main')
    plt.plot(Input_values, Prediction_values, label='Predicted')

    # Saving the result.
    plt.legend()
    plt.savefig('plot.png')
    plt.close()

The result is depicted in the following figure:

.. figure:: https://github.com/astorfi/TensorFlow-World/blob/master/docs/_img/2-basics_in_machine_learning/linear_regression/updating_model.gif
   :scale: 50 %
   :align: center

   **Figure 1:** The original data alongside the estimated linear model.

The above animated GIF shows the model with some tiny movements which
demonstrate the updating process.
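The update that each ``sess.run`` step performs is ordinary stochastic
gradient descent on the squared error. As a rough, framework-free sketch of
that inner loop (pure Python; the data and learning rate below are made up
for illustration, not taken from the fire-theft dataset):

.. code-block:: python

    # Plain-Python sketch of the gradient-descent update performed by
    # tf.train.GradientDescentOptimizer for the model y = W*x + b with a
    # squared-difference loss. Samples and hyper-parameters are hypothetical.

    def sgd_fit(samples, learning_rate=0.05, num_epochs=500):
        W, b = 0.0, 0.0  # same zero initialization as the tutorial
        for _ in range(num_epochs):
            for x, y in samples:
                y_pred = W * x + b
                # d/dW (y - y_pred)^2 = -2*x*(y - y_pred); likewise for b.
                W += learning_rate * 2.0 * x * (y - y_pred)
                b += learning_rate * 2.0 * (y - y_pred)
        return W, b

    # Noise-free samples drawn from y = 2x + 1; the fit should recover them.
    samples = [(x / 10.0, 2.0 * (x / 10.0) + 1.0) for x in range(10)]
    W, b = sgd_fit(samples)

With noise-free data the recovered ``W`` and ``b`` approach 2 and 1; on
real data the TensorFlow loop behaves the same way, only through the graph.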
As can be observed, the linear model is certainly not among the best!
However, as we mentioned, its simplicity is its advantage!

Summary
-------

In this tutorial, we walked through linear model creation using TensorFlow.
The line found after training is not guaranteed to be the best one.
Different parameters affect the convergence accuracy. The linear model is
found using stochastic optimization, and its simplicity makes our world
easier.

--------------------------------------------------------------------------------
/docs/tutorials/2-basics_in_machine_learning/logistic_regression/README.rst:
--------------------------------------------------------------------------------

Sections
~~~~~~~~

- `Introduction <#Introduction>`__
- `Description of the Overall
  Process <#Description%20of%20the%20Overall%20Process>`__
- `How to Do It in Code? <#How%20to%20Do%20It%20in%20Code?>`__
- `Summary <#Summary>`__

Logistic Regression using TensorFlow
------------------------------------

This tutorial is about training a logistic regression model with
TensorFlow for binary classification.

Introduction
------------

In the `Linear Regression using TensorFlow `__ post we described how to
predict continuous-valued parameters by linearly modeling the system. What
if the objective is to decide between two choices? The answer is simple:
we are dealing with a classification problem. In this tutorial, the
objective is to decide whether the input image is digit "0" or digit "1"
using logistic regression. In other words, whether it is digit "1" or not!
The full source code is available in the associated `GitHub
repository `__.

Dataset
-------

The dataset that we work on in this tutorial is the `MNIST `__ dataset.
The main dataset consists of 55000 training and 10000 test images. The
images are 28x28x1, each of them representing a hand-written digit from 0
to 9. We create feature vectors of size 784 for each image. We only use
the 0 and 1 images for our setting.

Logistic Regression
-------------------

In linear regression, the effort is to predict a continuous outcome value
using the linear function $y=W^{T}x$. On the other hand, in logistic
regression we aim to predict a binary label $y\in\{0,1\}$, for which we
use a different prediction process than in linear regression. In logistic
regression, the predicted output is the probability that the input sample
belongs to a targeted class, which is digit "1" in our case. In a
binary-classification problem, obviously if
$P(x\in\{target\_class\}) = M$, then
$P(x\in\{non\_target\_class\}) = 1 - M$. So the hypothesis can be created
as follows:

$$P(y=1|x)=h_{W}(x)={{1}\over{1+exp(-W^{T}x)}}=Sigmoid(W^{T}x) \ \ \ (1)$$

$$P(y=0|x)=1 - P(y=1|x) = 1 - h_{W}(x) \ \ \ (2)$$

In the above equations, the Sigmoid function maps the predicted output
into probability space, where the values are in the range $[0,1]$. The
main objective is to find a model for which the output probability is high
when the input sample is a "1", and small otherwise. The important
objective is to design an appropriate cost function that is small when the
output is the desired one, and large otherwise.
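The two hypothesis equations above can be checked numerically. Here is a
minimal pure-Python sketch of equations (1) and (2); the weight and
feature vectors are made-up examples, not learned parameters:

.. code-block:: python

    import math

    def sigmoid(z):
        # Maps any real-valued score into the (0, 1) probability range.
        return 1.0 / (1.0 + math.exp(-z))

    def hypothesis(weights, features):
        # P(y=1|x) = Sigmoid(W^T x); equation (2) gives the complement.
        score = sum(w * x for w, x in zip(weights, features))
        p_one = sigmoid(score)
        return p_one, 1.0 - p_one

    # Hypothetical 3-dimensional weight and feature vectors.
    p1, p0 = hypothesis([0.5, -1.0, 2.0], [1.0, 0.0, 1.0])

By construction the two probabilities sum to one, which is exactly the
property the cost function below relies on.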
The cost function for a set of data samples $(x^{(i)},y^{(i)})$ can be
defined as below:

$$Loss(W) = \sum_{i}{y^{(i)}\log{1\over{h_{W}(x^{(i)})}}+(1-y^{(i)})\log{1\over{1-h_{W}(x^{(i)})}}}$$

As can be seen from the above equation, the loss function consists of two
terms, and for each sample only one of them is non-zero, considering the
binary labels.

Up to now, we have defined the formulation and optimization function of
logistic regression. In the next part, we show how to do it in code using
mini-batch optimization.

Description of the Overall Process
----------------------------------

At first, we process the dataset and extract only the "0" and "1" digits.
The code implemented for logistic regression is heavily inspired by our
`Train a Convolutional Neural Network as a Classifier `__ post. We refer
to the aforementioned post for a better understanding of the
implementation details. In this tutorial, we only explain how we process
the dataset and how to implement logistic regression; the rest is clear
from the CNN classifier post referred to earlier.

How to Do It in Code?
---------------------

In this part, we explain how to extract the desired samples from the
dataset and implement logistic regression using Softmax.

Process Dataset
~~~~~~~~~~~~~~~

At first, we need to extract the "0" and "1" digits from the MNIST dataset:

.. code:: python

    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("MNIST_data/", reshape=True, one_hot=False)

    ########################
    ### Data Processing ####
    ########################
    # Organize the data and feed it to associated dictionaries.
    data = {}

    data['train/image'] = mnist.train.images
    data['train/label'] = mnist.train.labels
    data['test/image'] = mnist.test.images
    data['test/label'] = mnist.test.labels

    # Get only the samples with zero and one label for training.
    index_list_train = []
    for sample_index in range(data['train/label'].shape[0]):
        label = data['train/label'][sample_index]
        if label == 1 or label == 0:
            index_list_train.append(sample_index)

    # Reform the train data structure.
    data['train/image'] = mnist.train.images[index_list_train]
    data['train/label'] = mnist.train.labels[index_list_train]

    # Get only the samples with zero and one label for the test set.
    index_list_test = []
    for sample_index in range(data['test/label'].shape[0]):
        label = data['test/label'][sample_index]
        if label == 1 or label == 0:
            index_list_test.append(sample_index)

    # Reform the test data structure.
    data['test/image'] = mnist.test.images[index_list_test]
    data['test/label'] = mnist.test.labels[index_list_test]

The code looks verbose, but it is actually very simple. All the real work
happens in the loops that collect ``index_list_train`` and
``index_list_test``, where the desired data samples are extracted. Next,
we dig into the logistic regression architecture.

Logistic Regression Implementation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The logistic regression structure simply feeds the input features forward
through a fully-connected layer in which the last layer only has two
classes. The fully-connected architecture can be defined as below:
.. code:: python

    ###############################################
    ########### Defining place holders ############
    ###############################################
    image_place = tf.placeholder(tf.float32, shape=([None, num_features]), name='image')
    label_place = tf.placeholder(tf.int32, shape=([None,]), name='gt')
    label_one_hot = tf.one_hot(label_place, depth=FLAGS.num_classes, axis=-1)
    dropout_param = tf.placeholder(tf.float32)

    ##################################################
    ########### Model + Loss + Accuracy ##############
    ##################################################
    # A simple fully-connected layer with two classes followed by Softmax is equivalent to logistic regression.
    logits = tf.contrib.layers.fully_connected(inputs=image_place, num_outputs=FLAGS.num_classes, scope='fc')

The first few lines define placeholders in order to put the desired values
on the graph. Please refer to `this post `__ for further details. The
desired loss function can easily be implemented with TensorFlow using the
following script:

.. code:: python

    # Define loss
    with tf.name_scope('loss'):
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=label_one_hot))

    # Accuracy
    with tf.name_scope('accuracy'):
        # Evaluate the model
        correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(label_one_hot, 1))

        # Accuracy calculation
        accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

The ``tf.nn.softmax_cross_entropy_with_logits`` function does the work. It
optimizes the previously defined cost function with a subtle difference:
it operates on two logits, so even when the sample is digit "0", the
corresponding class probability will be high.
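To see what that op computes, here is a small pure-Python sketch of the
softmax-plus-cross-entropy calculation for the two-class case (the logit
values below are invented for illustration):

.. code-block:: python

    import math

    def softmax(logits):
        # Subtract the max logit for numerical stability before exponentiating.
        m = max(logits)
        exps = [math.exp(z - m) for z in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def cross_entropy(logits, one_hot_label):
        # -sum_k t_k * log(p_k), with p = softmax(logits).
        probs = softmax(logits)
        return -sum(t * math.log(p) for t, p in zip(one_hot_label, probs))

    # A hypothetical sample of digit "0": the class-0 logit dominates,
    # so the loss is small for the correct label and large otherwise.
    loss_correct = cross_entropy([4.0, -1.0], [1.0, 0.0])
    loss_wrong = cross_entropy([4.0, -1.0], [0.0, 1.0])

Because the softmax outputs sum to one, raising one class probability
necessarily lowers the other, which is what lets the op decide between the
two classes.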
So the ``tf.nn.softmax_cross_entropy_with_logits`` function predicts a
probability for each class and inherently, on its own, makes the decision.

Summary
-------

In this tutorial, we described logistic regression and showed how to
implement it in code. Instead of making a decision based on the output
probability of a single targeted class, we extended the problem to a
two-class problem in which we predict the probability for each class. In
future posts, we will extend this problem to the multi-class case and show
that it can be done with a similar approach.

--------------------------------------------------------------------------------
/docs/tutorials/3-neural_network/autoencoder/README.rst:
--------------------------------------------------------------------------------

Autoencoders and their implementations in TensorFlow
----------------------------------------------------

In this post, you will learn the concept behind autoencoders as well as
how to implement one in TensorFlow.

Introduction
------------

Autoencoders are a type of neural network that copies its input to its
output. They usually consist of two main parts, namely the encoder and the
decoder. The encoder maps the input into a hidden-layer space which we
refer to as the code. The decoder then reconstructs the input from the
code. There are different types of autoencoders:

- **Undercomplete Autoencoders:** An autoencoder whose code dimension is
  smaller than the input dimension. Learning such an autoencoder forces it
  to capture the most salient features. However, using a big encoder and
  decoder without enough training data allows the network to memorize the
  task and skip learning useful features. With a linear decoder, it can
  act as PCA.
  However, adding nonlinear activation functions to the network makes it a
  nonlinear generalization of PCA.
- **Regularized Autoencoders:** Rather than limiting the size of the
  autoencoder and the code dimension for the sake of feature learning, we
  can add a loss term to prevent it from memorizing the task and the
  training data.
- **Sparse Autoencoders:** An autoencoder which has a sparsity penalty in
  the training loss in addition to the reconstruction error. These are
  usually used for the purpose of other tasks such as classification. The
  loss is not as straightforward as in other regularizers, and we will
  discuss it in another post later.
- **Denoising Autoencoders (DAE):** The input of a DAE is a corrupted copy
  of the real input which is supposed to be reconstructed. Therefore, a
  DAE has to undo the corruption (noise) as well as perform the
  reconstruction.
- **Contractive Autoencoders (CAE):** The main idea behind this type of
  autoencoder is to learn a representation of the data which is robust to
  small changes in the input.
- **Variational Autoencoders:** They maximize the probability of the
  training data instead of copying the input to the output and therefore
  do not need regularization to capture useful information.

In this post, we are going to create a simple undercomplete autoencoder in
TensorFlow to learn a low-dimensional representation (code) of the MNIST
dataset.

Create an Undercomplete Autoencoder
-----------------------------------

We are going to create an autoencoder with a 3-layer encoder and a 3-layer
decoder. Each layer of the encoder downsamples its input along the spatial
dimensions (width, height): the first two layers by a factor of two
(stride 2) and the last layer by a factor of four (stride 4).
Consequently, the dimension of the code is 2 (width) x 2 (height) x
8 (depth) = 32 (for an image of 32x32).
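The code-size arithmetic above is easy to double-check: with ``'SAME'``
padding, the output spatial size of a strided convolution depends only on
the stride, not on the kernel size. A small sketch of that rule:

.. code-block:: python

    import math

    def same_out_size(in_size, stride):
        # With 'SAME' padding, out = ceil(in / stride) regardless of kernel size.
        return int(math.ceil(in_size / float(stride)))

    # Encoder strides used in this post: 2, 2, 4, starting from a 32x32 input.
    size = 32
    for stride in (2, 2, 4):
        size = same_out_size(size, stride)

    # Code dimension: 2 (width) x 2 (height) x 8 (depth).
    code_dim = size * size * 8

Running the loop gives the spatial sizes 16, 8, and 2, matching the layer
comments in the encoder below.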
Similarly, each layer of the decoder upsamples its input (using transpose
convolution with strides 4, 2, and 2).

.. code-block:: python

    import tensorflow as tf
    import tensorflow.contrib.layers as lays

    def autoencoder(inputs):
        # encoder
        # 32 x 32 x 1  -> 16 x 16 x 32
        # 16 x 16 x 32 -> 8 x 8 x 16
        # 8 x 8 x 16   -> 2 x 2 x 8
        net = lays.conv2d(inputs, 32, [5, 5], stride=2, padding='SAME')
        net = lays.conv2d(net, 16, [5, 5], stride=2, padding='SAME')
        net = lays.conv2d(net, 8, [5, 5], stride=4, padding='SAME')
        # decoder
        # 2 x 2 x 8    -> 8 x 8 x 16
        # 8 x 8 x 16   -> 16 x 16 x 32
        # 16 x 16 x 32 -> 32 x 32 x 1
        net = lays.conv2d_transpose(net, 16, [5, 5], stride=4, padding='SAME')
        net = lays.conv2d_transpose(net, 32, [5, 5], stride=2, padding='SAME')
        net = lays.conv2d_transpose(net, 1, [5, 5], stride=2, padding='SAME', activation_fn=tf.nn.tanh)
        return net

.. figure:: ../../../_img/3-neural_network/autoencoder/ae.png
   :scale: 50 %
   :align: center

   **Figure 1:** Autoencoder

The MNIST dataset contains vectorized images of 28x28. Therefore we define
a new function to reshape each batch of MNIST images to 28x28 and then
resize them to 32x32. The reason for resizing to 32x32 is to make the
sizes powers of two, so that we can easily use strides of 2 and 4 for
downsampling and upsampling.

.. code-block:: python

    import numpy as np
    from skimage import transform

    def resize_batch(imgs):
        # A function to resize a batch of MNIST images to (32, 32)
        # Args:
        #     imgs: a numpy array of size [batch_size, 28 X 28].
        # Returns:
        #     a numpy array of size [batch_size, 32, 32].
        imgs = imgs.reshape((-1, 28, 28, 1))
        resized_imgs = np.zeros((imgs.shape[0], 32, 32, 1))
        for i in range(imgs.shape[0]):
            resized_imgs[i, ..., 0] = transform.resize(imgs[i, ..., 0], (32, 32))
        return resized_imgs

Now we create an autoencoder, define a squared-error loss and an
optimizer.

.. code-block:: python

    import tensorflow as tf

    ae_inputs = tf.placeholder(tf.float32, (None, 32, 32, 1))  # input to the network (MNIST images)
    ae_outputs = autoencoder(ae_inputs)  # create the autoencoder network

    # calculate the loss and optimize the network
    loss = tf.reduce_mean(tf.square(ae_outputs - ae_inputs))  # calculate the mean squared error loss
    train_op = tf.train.AdamOptimizer(learning_rate=lr).minimize(loss)

    # initialize the network
    init = tf.global_variables_initializer()

Now we can read the batches, train the network, and finally test the
network by reconstructing a batch of test images.
.. code-block:: python

    import matplotlib.pyplot as plt
    from tensorflow.examples.tutorials.mnist import input_data

    batch_size = 500    # Number of samples in each batch
    epoch_num = 5       # Number of epochs to train the network
    lr = 0.001          # Learning rate

    # read MNIST dataset
    mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

    # calculate the number of batches per epoch
    batch_per_ep = mnist.train.num_examples // batch_size

    with tf.Session() as sess:
        sess.run(init)
        for ep in range(epoch_num):  # epochs loop
            for batch_n in range(batch_per_ep):  # batches loop
                batch_img, batch_label = mnist.train.next_batch(batch_size)  # read a batch
                batch_img = batch_img.reshape((-1, 28, 28, 1))  # reshape each sample to a (28, 28) image
                batch_img = resize_batch(batch_img)  # resize the images to (32, 32)
                _, c = sess.run([train_op, loss], feed_dict={ae_inputs: batch_img})
                print('Epoch: {} - cost= {:.5f}'.format((ep + 1), c))

        # test the trained network
        batch_img, batch_label = mnist.test.next_batch(50)
        batch_img = resize_batch(batch_img)
        recon_img = sess.run([ae_outputs], feed_dict={ae_inputs: batch_img})[0]

        # plot the reconstructed images and their ground truths (inputs)
        plt.figure(1)
        plt.title('Reconstructed Images')
        for i in range(50):
            plt.subplot(5, 10, i + 1)
            plt.imshow(recon_img[i, ..., 0], cmap='gray')
        plt.figure(2)
        plt.title('Input Images')
        for i in range(50):
            plt.subplot(5, 10, i + 1)
            plt.imshow(batch_img[i, ..., 0], cmap='gray')
        plt.show()

--------------------------------------------------------------------------------
/docs/tutorials/installation/README.rst:
--------------------------------------------------------------------------------

==================================
Install TensorFlow from the source
==================================
.. _TensorFlow: https://www.tensorflow.org/install/
.. _Installing TensorFlow from Sources: https://www.tensorflow.org/install/install_sources
.. _Bazel Installation: https://bazel.build/versions/master/docs/install-ubuntu.html
.. _CUDA Installation: https://github.com/astorfi/CUDA-Installation
.. _NIDIA documentation: https://github.com/astorfi/CUDA-Installation

The installation instructions are available at `TensorFlow`_. Installation from source is recommended because the user can build the desired TensorFlow binary for their specific architecture. It gives TensorFlow better system compatibility, and it will run much faster. Installing from source is described at the `Installing TensorFlow from Sources`_ link. The official TensorFlow explanations are concise and to the point. However, a few things might become important as we go through the installation. We try to lay out the process step by step to avoid any confusion. The following sections must be followed in the written order.

The assumption is that installing TensorFlow on ``Ubuntu`` with ``GPU support`` is desired, and ``Python 2.7`` is chosen for the installation.

**NOTE:** Please refer to this YouTube `link <youtube_>`_ for a visual explanation.

.. _youtube: https://www.youtube.com/watch?v=_3JFEPk4qQY&t=2s

------------------------
Prepare the environment
------------------------

The following should be done in order:

* TensorFlow Python dependencies installation
* Bazel installation
* TensorFlow GPU prerequisites setup

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
TensorFlow Python Dependencies Installation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For installation of the required dependencies, the following command must be executed in the terminal:
.. code:: bash

    sudo apt-get install python-numpy python-dev python-pip python-wheel python-virtualenv
    sudo apt-get install python3-numpy python3-dev python3-pip python3-wheel python3-virtualenv

The second line is for the ``python3`` installation.

~~~~~~~~~~~~~~~~~~~
Bazel Installation
~~~~~~~~~~~~~~~~~~~

Please refer to `Bazel Installation`_.

``WARNING:`` The Bazel installation may change the kernel supported by the GPU! After that you may need to refresh or update your GPU installation; otherwise, you may get the following error when evaluating the TensorFlow installation:

.. code:: bash

    kernel version X does not match DSO version Y -- cannot find working devices in this configuration

To solve that error, you may need to purge all NVIDIA drivers and install or update them again. Please refer to `CUDA Installation`_ for further detail.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
TensorFlow GPU Prerequisites Setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following requirements must be satisfied:

* NVIDIA's CUDA Toolkit and its associated drivers (version 8.0 is recommended). The installation is explained at `CUDA Installation`_.
* The cuDNN library (version 5.1 is recommended). Please refer to `NIDIA documentation`_ for further details.
* The ``libcupti-dev`` library, installed using the following command: ``sudo apt-get install libcupti-dev``

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Creating a Virtual Environment (Optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Assume the installation of TensorFlow in a ``python virtual environment`` is desired. First, we need to create a directory to contain all the environments. It can be done by executing the following in the terminal:
.. code:: bash

    sudo mkdir ~/virtualenvs

Now, using the ``virtualenv`` command, the virtual environment can be created:

.. code:: bash

    sudo virtualenv --system-site-packages ~/virtualenvs/tensorflow

**Environment Activation**

Up to now, the virtual environment named *tensorflow* has been created. To activate the environment, the following must be done:

.. code:: bash

    source ~/virtualenvs/tensorflow/bin/activate

However, the command is too verbose!

**Alias**

The solution is to use an alias to make life easy! Let's execute the following command:

.. code:: bash

    echo 'alias tensorflow="source $HOME/virtualenvs/tensorflow/bin/activate" ' >> ~/.bash_aliases
    bash

After running the previous command, please close and reopen the terminal. Now, by running the following simple script, the tensorflow environment will be activated:

.. code:: bash

    tensorflow

**Check the ``~/.bash_aliases``**

To double-check, let's open ``~/.bash_aliases`` from the terminal using the ``sudo gedit ~/.bash_aliases`` command. The file should contain the following script:

.. code:: shell

    alias tensorflow="source $HOME/virtualenvs/tensorflow/bin/activate"

**Check the ``.bashrc``**

Also, let's check the ``.bashrc`` shell script using the ``sudo gedit ~/.bashrc`` command. It should contain the following:

.. code:: shell

    if [ -f ~/.bash_aliases ]; then
        . ~/.bash_aliases
    fi

---------------------------------
Configuration of the Installation
---------------------------------

At first, the TensorFlow repository must be cloned:
.. code:: bash

    git clone https://github.com/tensorflow/tensorflow

After preparing the environment, the installation must be configured. The ``flags`` of the configuration are of great importance because they determine how well and how compatibly TensorFlow will be installed! First, we have to go to the TensorFlow root:

.. code:: bash

    cd tensorflow  # cd to the cloned directory

The flags, alongside the configuration environment, are demonstrated below:

.. code:: bash

    $ ./configure
    Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python2.7
    Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
    Do you wish to use jemalloc as the malloc implementation? [Y/n] Y
    jemalloc enabled
    Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] N
    No Google Cloud Platform support will be enabled for TensorFlow
    Do you wish to build TensorFlow with Hadoop File System support? [y/N] N
    No Hadoop File System support will be enabled for TensorFlow
    Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] N
    No XLA JIT support will be enabled for TensorFlow
    Found possible Python library paths:
    /usr/local/lib/python2.7/dist-packages
    /usr/lib/python2.7/dist-packages
    Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages]
    Using python library path: /usr/local/lib/python2.7/dist-packages
    Do you wish to build TensorFlow with OpenCL support? [y/N] N
    No OpenCL support will be enabled for TensorFlow
    Do you wish to build TensorFlow with CUDA support? [y/N] Y
    CUDA support will be enabled for TensorFlow
    Please specify which gcc should be used by nvcc as the host compiler.
[Default is /usr/bin/gcc]: 173 | Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 8.0 174 | Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 175 | Please specify the cuDNN version you want to use. [Leave empty to use system default]: 5.1.10 176 | Please specify the location where cuDNN 5 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 177 | Please specify a list of comma-separated Cuda compute capabilities you want to build with. 178 | You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. 179 | Please note that each additional compute capability significantly increases your build time and binary size. 180 | [Default is: "3.5,5.2"]: "5.2" 181 | 182 | 183 | **NOTE:** 184 | * The exact cuDNN version must be determined using the associated files in ``/usr/local/cuda``. 185 | * The compute capability is specified according to the available GPU model in the system. For example, ``GeForce GTX Titan X`` GPUs have a compute capability of 5.2. 186 | * Using ``bazel clean`` is recommended if re-configuration is needed. 187 | 188 | **WARNING:** 189 | * If installation of TensorFlow in a virtual environment is desired, the environment must be activated before running the ``./configure`` script. 190 | 191 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 192 | Test Bazel (Optional) 193 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 194 | 195 | We can run tests using ``Bazel`` to make sure everything is fine: 196 | 197 | .. code:: bash 198 | 199 | ./configure 200 | bazel test ... 201 | 202 | ---------------------- 203 | Build the .whl Package 204 | ---------------------- 205 | 206 | After configuring the setup, the pip package needs to be built by Bazel.
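Note that ``file_name.whl`` used later in this tutorial is only a placeholder: the wheel that Bazel produces has a long auto-generated name. As a convenience, a small helper like the sketch below can locate the most recently built wheel in the output directory; the ``newest_wheel`` function name and the ``~/tensorflow_package`` path are just assumptions for illustration, not part of TensorFlow itself.

```shell
#!/bin/bash
# Hypothetical helper: print the most recently modified .whl file in a
# directory, so the long auto-generated wheel name need not be typed by hand.
newest_wheel() {
    # List wheels newest-first and keep the first entry (empty if none exist).
    ls -t "$1"/*.whl 2>/dev/null | head -n 1
}

# Example usage with the assumed output directory of the build step:
# pip install "$(newest_wheel ~/tensorflow_package)"
```

After the build step below finishes, ``pip install "$(newest_wheel ~/tensorflow_package)"`` would install whatever wheel was just produced, without retyping its exact name.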
207 | 208 | To build a TensorFlow package with GPU support, execute the following command: 209 | 210 | .. code:: bash 211 | 212 | bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package 213 | 214 | The ``bazel build`` command builds a script named ``build_pip_package``. Running the following script builds a ``.whl`` file within the ``~/tensorflow_package`` directory: 215 | 216 | .. code:: bash 217 | 218 | bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/tensorflow_package 219 | 220 | 221 | 222 | 223 | 224 | ------------------------------- 225 | Installation of the Pip Package 226 | ------------------------------- 227 | 228 | Two types of installation can be used: native installation using the system root, and installation inside a virtual environment. 229 | 230 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 231 | Native Installation 232 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 233 | 234 | The following command will install the pip package created by the Bazel build: 235 | 236 | .. code:: bash 237 | 238 | sudo pip install ~/tensorflow_package/file_name.whl 239 | 240 | 241 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 242 | Using Virtual Environments 243 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 244 | 245 | First, the environment must be activated. Since we already defined the alias ``tensorflow``, simply executing the ``tensorflow`` command in the terminal activates the environment. Then, as in the previous part, we execute the following: 246 | 247 | .. code:: bash 248 | 249 | pip install ~/tensorflow_package/file_name.whl 250 | 251 | **WARNING**: 252 | * With the virtual environment installation method, the ``sudo`` command should no longer be used, because ``sudo`` points to the native system packages and not the ones available in the virtual environment. 253 | * Since ``sudo mkdir ~/virtualenvs`` was used to create the virtual environment directory, ``pip install`` may return a ``permission error``.
In this case, the permissions of the environment directory must be changed using the ``sudo chmod -R 777 ~/virtualenvs`` command. 254 | 255 | -------------------------- 256 | Validate the Installation 257 | -------------------------- 258 | 259 | In the terminal, the following script must run (``in the home directory``) without any errors and preferably without any warnings: 260 | 261 | .. code:: bash 262 | 263 | python 264 | >>> import tensorflow as tf 265 | >>> hello = tf.constant('Hello, TensorFlow!') 266 | >>> sess = tf.Session() 267 | >>> print(sess.run(hello)) 268 | 269 | -------------------------- 270 | Common Errors 271 | -------------------------- 272 | 273 | Various errors have been reported that block compiling and running TensorFlow. 274 | 275 | * ``Mismatch between the supported kernel versions:`` This error was mentioned earlier in this documentation. The naive solution reported is to reinstall the CUDA driver. 276 | * ``ImportError: cannot import name pywrap_tensorflow:`` This error usually occurs when Python loads the tensorflow libraries from the wrong directory, i.e., not the version installed by the user. The first step is to make sure we are in the system root directory so that the Python libraries are resolved correctly. So basically, we can open a new terminal and test the TensorFlow installation again. 277 | * ``ImportError: No module named packaging.version:`` Most likely it is related to the ``pip`` installation. Reinstalling it using ``python -m pip install -U pip`` or ``sudo python -m pip install -U pip`` may fix it! 278 | 279 | -------------------------- 280 | Summary 281 | -------------------------- 282 | 283 | In this tutorial, we described how to install TensorFlow from source, which has the advantage of better compatibility with the system configuration. Python virtual environment installation has been investigated as well, to separate the TensorFlow environment from other environments.
Conda environments can be used as well as Python virtual environments; these will be explained in a separate post. In any case, TensorFlow installed from source can run much faster than the pre-built binary packages provided by TensorFlow, although it adds complexity to the installation process. 284 | 285 | 286 | 287 | 288 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | numpy 2 | scipy 3 | python-coveralls 4 | tensorflow 5 | matplotlib 6 | xlrd 7 | scikit-learn 8 | pandas 9 | scikit-image 10 | -------------------------------------------------------------------------------- /travis.sh: -------------------------------------------------------------------------------- 1 | # CHANGED_FILES=$(find codes/python/ -type f -name "*.py") 2 | 3 | CHANGED_FILES=($(git diff --name-only $TRAVIS_COMMIT_RANGE)) 4 | echo -e "Changed files are:\n ${CHANGED_FILES[@]}" 5 | 6 | for file in "${CHANGED_FILES[@]}"; do # iterate over every element of the array, not just the first 7 | # Check if the last 3 characters are .py 8 | if [ "${file: -3}" == ".py" ]; then 9 | python "$file"; 10 | else 11 | echo "$file is not a Python file, so it won't be checked!" 12 | fi 13 | done 14 | -------------------------------------------------------------------------------- /welcome.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/instillai/TensorFlow-Course/3ac35a2e8766652f4dae507ef1e3615b2085f9a7/welcome.py --------------------------------------------------------------------------------