├── .gitattributes
├── .gitignore
├── README.md
├── Vagrantfile
├── ansible
│   ├── ansible_install.sh
│   ├── cli_setup.yml
│   ├── docker_setup.yml
│   ├── init_setup.yml
│   ├── playbook.yml
│   ├── py3-math-setup.yml
│   ├── py3-parallel-setup.yml
│   ├── py3-setup.yml
│   ├── requirements.yml
│   └── vars
│       └── main.yml
├── doc
│   └── releases.md
├── notebooks
│   ├── 01-tensorflow_introduction.ipynb
│   ├── 02-linear_models.ipynb
│   ├── 03-minst_for_ml_beginners.ipynb
│   └── ipython-run.sh
├── puppet
│   ├── install_puppet_modules.sh
│   └── manifests
│       ├── classes
│       │   ├── init.pp
│       │   ├── ohmyzsh_setup.pp
│       │   └── python_setup.pp
│       └── default.pp
├── python
│   ├── first-tensorflow.py
│   └── input_data.py
└── scripts
    └── ipy-udacity.sh

/.gitattributes:
--------------------------------------------------------------------------------
1 | # Set default behaviour, in case users don't have core.autocrlf set.
2 | * text=auto
3 | 
4 | # Explicitly declare text files we want to always be normalized and converted
5 | # to native line endings on checkout.
6 | *.c text
7 | 
8 | # Declare files that will always have LF line endings on checkout.
9 | *.sh text eol=lf
10 | *.bash text eol=lf
11 | *.erb text eol=lf
12 | *.py text eol=lf
13 | *.md text eol=lf
14 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | *~
2 | *.pyc
3 | *.class
4 | *.swp
5 | *.log
6 | .vagrant
7 | tensorflow_source/
8 | notebooks/.ipynb_checkpoints/
9 | notebooks/MNIST_data/
10 | playbook.retry
11 | 
12 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # tensorflow-ipy
2 | 
3 | * Source code - [GitHub][1]
4 | * Author - Gavin Noronha - 
5 | 
6 | [1]: https://github.com/gavinln/tensorflow-ipy.git
7 | 
8 | ## About
9 | 
10 | This project provides an [Ubuntu (16.04)][10] [Vagrant][20] virtual machine (VM)
11 | with the [TensorFlow][30] library from Google and [IPython][40]
12 | (now known as Jupyter) notebooks.
13 | 
14 | [10]: http://releases.ubuntu.com/16.04/
15 | [20]: http://www.vagrantup.com/
16 | [30]: http://tensorflow.org/
17 | [40]: http://jupyter.org/
18 | 
19 | Follow the **Requirements** section below for a one-time setup of VirtualBox,
20 | Vagrant and Git before running the commands below. These instructions should
21 | work on Windows, Mac and Linux operating systems.
22 | 
23 | ## [Releases](./doc/releases.md)
24 | 
25 | The list of releases of this project, with a description of the main changes
26 | in each, is included here.
27 | 
28 | ## Running TensorFlow
29 | 
30 | ### 1. Start the VM
31 | 
32 | 1. Change to the tensorflow-ipy root directory
33 | 
34 | ```
35 | cd tensorflow-ipy
36 | ```
37 | 
38 | 2. Start the virtual machine (VM)
39 | 
40 | ```
41 | vagrant up
42 | ```
43 | 
44 | 3. Login to the VM
45 | 
46 | ```
47 | vagrant ssh
48 | ```
49 | 
50 | ### 2. Run your first TensorFlow command line program
51 | 
52 | First complete section 1.
53 | 
54 | 1. Change to the python directory
55 | 
56 | ```
57 | cd /vagrant/python
58 | ```
59 | 
60 | 2. Run the first program
61 | 
62 | ```
63 | python3 first-tensorflow.py
64 | # to disable warnings, type
65 | TF_CPP_MIN_LOG_LEVEL=2 python3 first-tensorflow.py
66 | ```
67 | 
68 | 3. Make sure the version printed on the first line of the output is the version
69 | you expect. The releases are documented on this [page][50].
70 | 
71 | [50]: https://github.com/tensorflow/tensorflow/releases
72 | 
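For reference, a successful run prints output along these lines (illustrative only; the exact version string, warning lines and the `b'...'` byte prefix depend on the installed TensorFlow and Python versions):

```
tensorflow version: 1.0.0
b'Hello, TensorFlow!'
42
```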
73 | ### 3. Start the IPython (Jupyter) notebooks
74 | 
75 | First complete section 1.
76 | 
77 | 1. Change to the notebooks directory
78 | 
79 | ```
80 | cd /vagrant/notebooks
81 | ```
82 | 
83 | 2. Run the IPython notebook server
84 | 
85 | ```
86 | chmod +x ipython-run.sh
87 | ./ipython-run.sh
88 | ```
89 | 
90 | 3. Open your browser to http://192.168.33.10:8888/ to view the notebooks
91 | 
92 | ### 4. Get the TensorFlow source code and examples
93 | 
94 | First complete section 1.
95 | 
96 | 1. Make the tensorflow_source directory if it does not exist.
97 | 
98 | ```
99 | mkdir /vagrant/tensorflow_source
100 | ```
101 | 
102 | 2. Change to the tensorflow_source directory (it is not checked in to git)
103 | 
104 | ```
105 | cd /vagrant/tensorflow_source
106 | ```
107 | 
108 | 3. Clone the tensorflow repository
109 | 
110 | ```
111 | git clone https://github.com/tensorflow/tensorflow.git
112 | ```
113 | 
114 | ### 5. Run the [Udacity][60] [TensorFlow][70] examples
115 | 
116 | First complete section 1.
117 | 
118 | 1. Change to the scripts directory
119 | 
120 | ```
121 | cd /vagrant/scripts
122 | ```
123 | 
124 | 2. Run the IPython notebook server with the Udacity TensorFlow notebooks
125 | 
126 | ```
127 | chmod +x ./ipy-udacity.sh
128 | ./ipy-udacity.sh
129 | ```
130 | 
131 | 3. Open your browser to http://192.168.33.10:8888/ to view the notebooks
132 | 
133 | [60]: https://www.udacity.com/
134 | [70]: https://www.udacity.com/course/deep-learning--ud730
135 | 
136 | ### 6. Using TensorBoard
137 | 
138 | 1. Run TensorBoard from the command line (the notebooks above write their graph summaries to this log directory)
139 | 
140 | ```
141 | tensorboard --logdir=/home/ubuntu/tensorflow-logs
142 | ```
143 | 
144 | 2. Open a web browser to TensorBoard at http://192.168.33.10:6006/
145 | 
146 | ## TensorFlow links
147 | 
148 | * http://learningtensorflow.com/getting_started/
149 | 
150 | ## Jupyter notebook extensions
151 | 
152 | 1. Install the Jupyter notebook extensions
153 | 
154 | ```
155 | jupyter contrib nbextension install --user
156 | ```
157 | 
158 | 2. Install the vim extension (optional)
159 | 
160 | ```
161 | cd $(jupyter --data-dir)/nbextensions
162 | git clone https://github.com/lambdalisue/jupyter-vim-binding vim_binding
163 | ```
164 | 
165 | ## Requirements
166 | 
167 | The following software is needed to get the source code from GitHub and run
168 | Vagrant. The Git environment also provides an [SSH client][100] for Windows.
169 | 
170 | * [Oracle VM VirtualBox][110]
171 | * [Vagrant][120]
172 | * [Git][130]
173 | 
174 | [100]: http://en.wikipedia.org/wiki/Secure_Shell
175 | [110]: https://www.virtualbox.org/
176 | [120]: http://vagrantup.com/
177 | [130]: http://git-scm.com/
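Once these are installed, a quick host-side sanity check from a terminal confirms the tools are on the PATH (version numbers will vary):

```
VBoxManage --version
vagrant --version
git --version
```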
--------------------------------------------------------------------------------
/Vagrantfile:
--------------------------------------------------------------------------------
1 | # -*- mode: ruby -*-
2 | # vi: set ft=ruby :
3 | 
4 | # All Vagrant configuration is done below. The "2" in Vagrant.configure
5 | # configures the configuration version (we support older styles for
6 | # backwards compatibility). Please don't change it unless you know what
7 | # you're doing.
8 | Vagrant.configure(2) do |config|
9 |   # The most common configuration options are documented and commented below.
10 |   # For a complete reference, please see the online documentation at
11 |   # https://docs.vagrantup.com.
12 | 
13 |   # Every Vagrant development environment requires a box. You can search for
14 |   # boxes at https://atlas.hashicorp.com/search.
15 |   # config.vm.box = "ubuntu/trusty64"
16 |   config.vm.box = "ubuntu/xenial64"
17 | 
18 |   # do not update the configured box
19 |   # config.vm.box_check_update = false
20 | 
21 |   # use the insecure key
22 |   # config.ssh.insert_key = false
23 | 
24 |   # Disable automatic box update checking. If you disable this, then
25 |   # boxes will only be checked for updates when the user runs
26 |   # `vagrant box outdated`. This is not recommended.
27 | 
28 |   config.ssh.forward_agent = true
29 | 
30 |   # Create a forwarded port mapping which allows access to a specific port
31 |   # within the machine from a port on the host machine. In the example below,
32 |   # accessing "localhost:8080" will access port 80 on the guest machine.
33 |   # config.vm.network "forwarded_port", guest: 80, host: 8080
34 | 
35 |   # Create a private network, which allows host-only access to the machine
36 |   # using a specific IP.
37 |   # config.vm.network "private_network", ip: "192.168.33.10"
38 | 
39 |   # Create a public network, which generally matches a bridged network.
40 |   # Bridged networks make the machine appear as another physical device on
41 |   # your network.
42 |   # config.vm.network "public_network"
43 | 
44 |   # Share an additional folder to the guest VM. The first argument is
45 |   # the path on the host to the actual folder. The second argument is
46 |   # the path on the guest to mount the folder. And the optional third
47 |   # argument is a set of non-required options.
48 |   # config.vm.synced_folder "../data", "/vagrant_data"
49 | 
50 |   # Provider-specific configuration so you can fine-tune various
51 |   # backing providers for Vagrant. These expose provider-specific options.
52 |   # Example for VirtualBox:
53 | 
54 |   config.vm.define "tensorflow-ipy", autostart: true do |machine|
55 |     machine.vm.provider "virtualbox" do |vb|
56 |       # vb.gui = true
57 |       vb.memory = "4096"
58 |       vb.cpus = "2"
59 | 
60 |       if Vagrant::Util::Platform.windows? then
61 |         # Fix for slow external network connections on Windows 10
62 |         vb.customize ['modifyvm', :id, '--natdnshostresolver1', 'on']
63 |         vb.customize ['modifyvm', :id, '--natdnsproxy1', 'on']
64 |       end
65 |     end
66 | 
67 |     machine.vm.hostname = "tensorflow-ipy"
68 |     machine.vm.network "private_network", ip: "192.168.33.10"
69 |     # machine.vm.network "public_network"
70 | 
71 |     machine.vm.provision "shell" do |sh|
72 |       sh.path = "ansible/ansible_install.sh"
73 |       sh.args = "ansible/playbook.yml"
74 |     end
75 |   end
76 | 
77 |   # View the documentation for the provider you are using for more
78 |   # information on available options.
79 | 
80 |   # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
81 |   # such as FTP and Heroku are also available. See the documentation at
82 |   # https://docs.vagrantup.com/v2/push/atlas.html for more information.
83 |   # config.push.define "atlas" do |push|
84 |   #   push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
85 |   # end
86 | 
87 |   # Enable provisioning with a shell script. Additional provisioners such as
88 |   # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
89 |   # documentation for more information about their specific syntax and use.
90 | # config.vm.provision "shell", inline: <<-SHELL 91 | # sudo apt-get update 92 | # sudo apt-get install -y apache2 93 | # SHELL 94 | 95 | # config.vm.provision "shell", path: 'puppet/install_puppet_modules.sh' 96 | 97 | # config.vm.hostname = "tensorflow-ipy" 98 | # config.vm.network "private_network", ip: "192.168.33.10" 99 | 100 | # config.vm.provision "puppet" do |puppet| 101 | # puppet.manifest_file = "default.pp" 102 | # puppet.manifests_path = "puppet/manifests" 103 | # puppet.options = "--certname=%s" % "tensorflow-ipy" 104 | # #puppet.options = "--verbose --debug" 105 | # end 106 | end 107 | -------------------------------------------------------------------------------- /ansible/ansible_install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # Windows shell provisioner for Ansible playbooks, based on KSid's 4 | # windows-vagrant-ansible: https://github.com/KSid/windows-vagrant-ansible 5 | # 6 | # @see README.md 7 | # @author Jeff Geerling, 2014 8 | 9 | # Uncomment if behind a proxy server. 10 | # export {http,https,ftp}_proxy='http://username:password@proxy-host:80' 11 | 12 | args=() 13 | extra_vars=("is_windows=true") 14 | 15 | # Process and remove all flags. 16 | while (($#)); do 17 | case $1 in 18 | --extra-vars=*) extra_vars+=("${1#*=}") ;; 19 | --extra-vars|-e) shift; extra_vars+=("$1") ;; 20 | -*) echo "invalid option: $1" >&2; exit 1 ;; 21 | *) args+=("$1") ;; 22 | esac 23 | shift 24 | done 25 | 26 | # Restore the arguments without flags. 27 | set -- "${args[@]}" 28 | 29 | ANSIBLE_PLAYBOOK=$1 30 | PLAYBOOK_DIR=${ANSIBLE_PLAYBOOK%/*} 31 | 32 | # Detect package management system. 33 | YUM=$(which yum 2>/dev/null) 34 | APT_GET=$(which apt-get 2>/dev/null) 35 | 36 | # Make sure Ansible playbook exists. 37 | if [ ! -f "/vagrant/$ANSIBLE_PLAYBOOK" ]; then 38 | echo "Cannot find Ansible playbook." 39 | exit 1 40 | fi 41 | 42 | # Install Ansible and its dependencies if it's not installed already. 43 | if ! command -v ansible >/dev/null; then 44 | echo "Installing Ansible dependencies and Git." 45 | if [[ ! -z ${YUM} ]]; then 46 | yum install -y git python python-devel 47 | yum install -y libffi-devel 48 | yum install -y openssl-devel 49 | elif [[ ! -z ${APT_GET} ]]; then 50 | apt-get update 51 | apt-get install -y git python python-dev 52 | apt-get install -y libffi-dev 53 | apt-get install -y libssl-dev 54 | else 55 | echo "Neither yum nor apt-get are available." 56 | exit 1; 57 | fi 58 | 59 | echo "Installing pip." 60 | wget https://bootstrap.pypa.io/get-pip.py 61 | python get-pip.py && rm -f get-pip.py 62 | 63 | # Install GCC / required build tools. 64 | if [[ ! -z $YUM ]]; then 65 | yum install -y gcc 66 | elif [[ ! -z $APT_GET ]]; then 67 | apt-get install -y build-essential 68 | fi 69 | 70 | echo "Installing required python modules." 71 | pip install paramiko pyyaml jinja2 markupsafe 72 | 73 | echo "Installing Ansible." 74 | pip install ansible 75 | fi 76 | 77 | # Install requirements. 78 | echo "Installing Ansible roles from requirements file, if available." 79 | find "/vagrant/$PLAYBOOK_DIR" \( -name "requirements.yml" -o -name "requirements.txt" \) -exec sudo ansible-galaxy install --ignore-errors -r {} \; 80 | 81 | # Run the playbook. 82 | echo "Running Ansible provisioner defined in Vagrantfile." 
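# Note (a sketch, not part of the original script flow): the playbook can also
# be re-run by hand from inside the VM, e.g.
#   sudo /vagrant/ansible/ansible_install.sh ansible/playbook.yml
# Any --extra-vars flags passed to this script are collected into the
# extra_vars array above and forwarded to ansible-playbook below.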
83 | # ansible-playbook -vvvv -b -i "/vagrant/provisioning/inventory" "/vagrant/${ANSIBLE_PLAYBOOK}" --extra-vars "${extra_vars[*]}" --connection=local 84 | 85 | ansible-playbook -b "/vagrant/${ANSIBLE_PLAYBOOK}" --extra-vars "${extra_vars[*]}" --connection=local 86 | -------------------------------------------------------------------------------- /ansible/cli_setup.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Install git 3 | apt: name=git state=installed 4 | 5 | - name: Install tree 6 | apt: name=tree state=installed 7 | 8 | - name: Install ag 9 | apt: name=silversearcher-ag state=installed 10 | 11 | - name: Install jq 12 | apt: name=jq state=installed 13 | 14 | - name: Install autojump 15 | apt: name=autojump state=installed 16 | register: autojump_status 17 | 18 | - name: Install vim 19 | apt: name=vim state=installed 20 | 21 | - name: copy autojump profile 22 | copy: 23 | src=/usr/share/autojump/autojump.sh 24 | dest=/etc/profile.d/autojump.sh 25 | when: autojump_status.changed 26 | -------------------------------------------------------------------------------- /ansible/docker_setup.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Add vagrant user to docker group. 3 | user: 4 | name: vagrant 5 | groups: docker 6 | append: yes 7 | become: yes 8 | 9 | - name: Install Pip. 10 | apt: name=python-pip state=installed 11 | become: yes 12 | 13 | - name: Install Docker Python library. 14 | pip: name=docker-py state=present 15 | become: yes 16 | -------------------------------------------------------------------------------- /ansible/init_setup.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: upgrade system and packages 3 | apt: upgrade=dist 4 | 5 | - name: install python dev 6 | apt: name=python-dev state=installed 7 | 8 | - name: Install Pip 9 | apt: name=python-pip state=installed 10 | 11 | - name: Install pyopenssl for InsecurePlatform warning 12 | pip: name=pyopenssl state=present 13 | 14 | - name: Install ndg-httpsclient for InsecurePlatform warning 15 | pip: name=ndg-httpsclient state=present 16 | 17 | - name: Install pyasn1 for InsecurePlatform warning 18 | pip: name=pyasn1 state=present 19 | -------------------------------------------------------------------------------- /ansible/playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | gather_facts: yes 4 | 5 | vars_files: 6 | - vars/main.yml 7 | 8 | pre_tasks: 9 | - name: Update apt cache if needed. 
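      # (Clarifying comment: update_cache refreshes the apt package index, as
      # `apt-get update` would, but is skipped when the cache is younger than
      # cache_valid_time seconds.)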
10 | apt: update_cache=yes cache_valid_time=3600 11 | become: yes 12 | 13 | roles: 14 | - geerlingguy.ntp 15 | 16 | tasks: 17 | - include: init_setup.yml 18 | - include: cli_setup.yml 19 | - include: py3-setup.yml 20 | - include: py3-math-setup.yml 21 | # - include: py3-parallel-setup.yml 22 | -------------------------------------------------------------------------------- /ansible/py3-math-setup.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: install numpy 3 | pip: name=numpy state=present executable=pip3 4 | 5 | - name: install libopenblas-dev 6 | apt: name=libopenblas-dev state=installed 7 | 8 | - name: install liblapack-dev 9 | apt: name=liblapack-dev state=installed 10 | 11 | - name: install gfortran 12 | apt: name=gfortran state=installed 13 | 14 | - name: install scipy 15 | pip: name=scipy state=present executable=pip3 16 | 17 | - name: install theano 18 | pip: name=theano state=present executable=pip3 19 | 20 | - name: install sklearn 21 | pip: name=sklearn state=present executable=pip3 22 | 23 | - name: Install libfreetype6-dev for matplotlib 24 | apt: pkg=libfreetype6-dev state=installed update_cache=true 25 | 26 | - name: Install pkg-config for matplotlib 27 | apt: pkg=pkg-config state=installed update_cache=true 28 | 29 | - name: install matplotlib 30 | pip: name=matplotlib state=present executable=pip3 31 | 32 | - name: install seaborn 33 | pip: name=seaborn state=present executable=pip3 34 | 35 | - name: install pandas 36 | pip: name=pandas state=present executable=pip3 37 | 38 | - name: install jupyter notebook 39 | pip: name=notebook version=4.2.2 state=present executable=pip3 40 | 41 | - name: install jupyter notebook extensions 42 | pip: name=jupyter_contrib_nbextensions state=present executable=pip3 43 | 44 | - name: install Tensorflow 45 | pip: name=tensorflow version=1.0.0 state=present executable=pip3 46 | -------------------------------------------------------------------------------- /ansible/py3-parallel-setup.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: install pre-requisites for h5py 3 | apt: name=libhdf5-dev state=installed 4 | 5 | - name: install h5py 6 | pip: name=h5py state=present executable=pip3 7 | 8 | - name: install toolz 9 | pip: name=toolz state=present executable=pip3 10 | 11 | - name: install cloudpickle 12 | pip: name=cloudpickle state=present executable=pip3 13 | 14 | - name: install partd 15 | pip: name=partd state=present executable=pip3 16 | 17 | - name: install dask 18 | pip: name=dask state=present executable=pip3 19 | 20 | - name: install cython for feather 21 | pip: name=cython state=present executable=pip3 22 | 23 | - name: install feather-format 24 | pip: name=feather-format state=present executable=pip3 25 | 26 | - name: install distributed 27 | pip: name=distributed state=present executable=pip3 28 | 29 | - name: install castra 30 | pip: name=castra state=present executable=pip3 31 | 32 | - name: install graphviz 33 | apt: name=graphviz state=present 34 | 35 | - name: install python graphviz 36 | pip: name=graphviz state=present executable=pip3 37 | 38 | -------------------------------------------------------------------------------- /ansible/py3-setup.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: install gitric 3 | pip: name=gitric state=present 4 | 5 | - name: install fabric 6 | pip: name=fabric state=present 7 | 8 | - name: install pip3 9 | apt: 
name=python3-pip state=installed 10 | 11 | - name: install httpie 12 | pip: name=httpie state=present executable=pip3 13 | 14 | - name: install stormssh 15 | pip: name=stormssh state=present executable=pip3 16 | -------------------------------------------------------------------------------- /ansible/requirements.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - src: geerlingguy.ntp 3 | 4 | - src: angstwad.docker_ubuntu 5 | -------------------------------------------------------------------------------- /ansible/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | workspace: /root 3 | ntp_timezone: America/Los_Angeles 4 | 5 | fzf_repo: https://github.com/junegunn/fzf.git 6 | fzf_home: /home/vagrant/.fzf 7 | fzf_depth: 1 8 | fzf_version: 0.12.2 9 | 10 | rstudio_deb: rstudio-server-0.99.903-amd64.deb 11 | -------------------------------------------------------------------------------- /doc/releases.md: -------------------------------------------------------------------------------- 1 | ## Releases 2 | 3 | ### 2016-04-30 4 | 5 | Upgraded TensorFlow to version 0.8.0 6 | 7 | ### 2016-02-06 8 | 9 | Added instructions for running the Udacity TensorFlow examples. 10 | 11 | ### 2015-11-09 12 | 13 | Initial version of the project. 14 | -------------------------------------------------------------------------------- /notebooks/01-tensorflow_introduction.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "This notebook is created from the Tensorflow tutorial:\n", 8 | "https://www.tensorflow.org/get_started/get_started" 9 | ] 10 | }, 11 | { 12 | "cell_type": "code", 13 | "execution_count": 13, 14 | "metadata": { 15 | "collapsed": true 16 | }, 17 | "outputs": [], 18 | "source": [ 19 | "import tensorflow as tf" 20 | ] 21 | }, 22 | { 23 | "cell_type": "markdown", 24 | "metadata": {}, 25 | "source": [ 26 | "Create two constant nodes" 27 | ] 28 | }, 29 | { 30 | "cell_type": "code", 31 | "execution_count": 14, 32 | "metadata": { 33 | "collapsed": false 34 | }, 35 | "outputs": [ 36 | { 37 | "name": "stdout", 38 | "output_type": "stream", 39 | "text": [ 40 | "Tensor(\"Const_2:0\", shape=(), dtype=float32) Tensor(\"Const_3:0\", shape=(), dtype=float32)\n" 41 | ] 42 | } 43 | ], 44 | "source": [ 45 | "node1 = tf.constant(3.0, tf.float32)\n", 46 | "node2 = tf.constant(4.0) # tf.float32 implicitly\n", 47 | "print(node1, node2)" 48 | ] 49 | }, 50 | { 51 | "cell_type": "markdown", 52 | "metadata": {}, 53 | "source": [ 54 | "Create a session and run the computational graph" 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": 15, 60 | "metadata": { 61 | "collapsed": false 62 | }, 63 | "outputs": [ 64 | { 65 | "name": "stdout", 66 | "output_type": "stream", 67 | "text": [ 68 | "[3.0, 4.0]\n" 69 | ] 70 | } 71 | ], 72 | "source": [ 73 | "sess = tf.Session()\n", 74 | "print(sess.run([node1, node2]))" 75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "execution_count": 16, 80 | "metadata": { 81 | "collapsed": false 82 | }, 83 | "outputs": [], 84 | "source": [ 85 | "logs_path = '/home/ubuntu/tensorflow-logs'\n", 86 | "# make the directory if it does not exist\n", 87 | "!mkdir -p $logs_path" 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "execution_count": 17, 93 | "metadata": { 94 | "collapsed": true 95 | }, 96 | "outputs": [], 97 | "source": [ 98 | "# 
tensorboard --purge_orphaned_data --logdir /home/ubuntu/tensorflow-logs"
99 | ]
100 | },
101 | {
102 | "cell_type": "code",
103 | "execution_count": 18,
104 | "metadata": {
105 | "collapsed": true
106 | },
107 | "outputs": [],
108 | "source": [
109 | "summary_writer = tf.summary.FileWriter(\n",
110 | "    logs_path, graph=tf.get_default_graph())"
111 | ]
112 | },
113 | {
114 | "cell_type": "markdown",
115 | "metadata": {},
116 | "source": [
117 | "Add an operation node"
118 | ]
119 | },
120 | {
121 | "cell_type": "code",
122 | "execution_count": 19,
123 | "metadata": {
124 | "collapsed": false
125 | },
126 | "outputs": [
127 | {
128 | "name": "stdout",
129 | "output_type": "stream",
130 | "text": [
131 | "node3:  Tensor(\"Add_1:0\", shape=(), dtype=float32)\n",
132 | "sess.run(node3):  7.0\n"
133 | ]
134 | }
135 | ],
136 | "source": [
137 | "node3 = tf.add(node1, node2)\n",
138 | "print('node3: ', node3)\n",
139 | "print('sess.run(node3): ', sess.run(node3))"
140 | ]
141 | },
142 | {
143 | "cell_type": "markdown",
144 | "metadata": {},
145 | "source": [
146 | "Add a placeholder node to which a value can be supplied"
147 | ]
148 | },
149 | {
150 | "cell_type": "code",
151 | "execution_count": 20,
152 | "metadata": {
153 | "collapsed": true
154 | },
155 | "outputs": [],
156 | "source": [
157 | "a = tf.placeholder(tf.float32)\n",
158 | "b = tf.placeholder(tf.float32)\n",
159 | "adder_node = a + b"
160 | ]
161 | },
162 | {
163 | "cell_type": "markdown",
164 | "metadata": {},
165 | "source": [
166 | "Supply values to the placeholders"
167 | ]
168 | },
169 | {
170 | "cell_type": "code",
171 | "execution_count": 21,
172 | "metadata": {
173 | "collapsed": false
174 | },
175 | "outputs": [
176 | {
177 | "name": "stdout",
178 | "output_type": "stream",
179 | "text": [
180 | "7.5\n",
181 | "[ 3. 
7.]\n" 182 | ] 183 | } 184 | ], 185 | "source": [ 186 | "print(sess.run(adder_node, {a: 3, b: 4.5}))\n", 187 | "print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))" 188 | ] 189 | }, 190 | { 191 | "cell_type": "markdown", 192 | "metadata": {}, 193 | "source": [ 194 | "Add an operation node" 195 | ] 196 | }, 197 | { 198 | "cell_type": "code", 199 | "execution_count": 22, 200 | "metadata": { 201 | "collapsed": false 202 | }, 203 | "outputs": [ 204 | { 205 | "name": "stdout", 206 | "output_type": "stream", 207 | "text": [ 208 | "22.5\n" 209 | ] 210 | } 211 | ], 212 | "source": [ 213 | "add_and_triple = adder_node * 3.0\n", 214 | "print(sess.run(add_and_triple, {a: 3, b: 4.5}))" 215 | ] 216 | }, 217 | { 218 | "cell_type": "code", 219 | "execution_count": 23, 220 | "metadata": { 221 | "collapsed": false 222 | }, 223 | "outputs": [], 224 | "source": [ 225 | "summary_writer.close()" 226 | ] 227 | }, 228 | { 229 | "cell_type": "code", 230 | "execution_count": 24, 231 | "metadata": { 232 | "collapsed": true 233 | }, 234 | "outputs": [], 235 | "source": [ 236 | "sess.close()" 237 | ] 238 | } 239 | ], 240 | "metadata": { 241 | "kernelspec": { 242 | "display_name": "Python 3", 243 | "language": "python", 244 | "name": "python3" 245 | }, 246 | "language_info": { 247 | "codemirror_mode": { 248 | "name": "ipython", 249 | "version": 3 250 | }, 251 | "file_extension": ".py", 252 | "mimetype": "text/x-python", 253 | "name": "python", 254 | "nbconvert_exporter": "python", 255 | "pygments_lexer": "ipython3", 256 | "version": "3.5.2" 257 | } 258 | }, 259 | "nbformat": 4, 260 | "nbformat_minor": 2 261 | } 262 | -------------------------------------------------------------------------------- /notebooks/02-linear_models.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "This notebook is created based on the Tensorflow tutorial: https://www.tensorflow.org/get_started/get_started" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": 1, 13 | "metadata": { 14 | "collapsed": true 15 | }, 16 | "outputs": [], 17 | "source": [ 18 | "import tensorflow as tf" 19 | ] 20 | }, 21 | { 22 | "cell_type": "markdown", 23 | "metadata": {}, 24 | "source": [ 25 | "Create a linear model\n", 26 | "$$ y = W * x + b $$" 27 | ] 28 | }, 29 | { 30 | "cell_type": "code", 31 | "execution_count": 2, 32 | "metadata": { 33 | "collapsed": true 34 | }, 35 | "outputs": [], 36 | "source": [ 37 | "W = tf.Variable([0.3], tf.float32)\n", 38 | "b = tf.Variable([-0.3], tf.float32)\n", 39 | "x = tf.placeholder(tf.float32)\n", 40 | "linear_model = W * x + b" 41 | ] 42 | }, 43 | { 44 | "cell_type": "markdown", 45 | "metadata": {}, 46 | "source": [ 47 | "Create a session, init function and run the init function" 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "execution_count": 3, 53 | "metadata": { 54 | "collapsed": true 55 | }, 56 | "outputs": [], 57 | "source": [ 58 | "sess = tf.Session()\n", 59 | "init = tf.global_variables_initializer()\n", 60 | "sess.run(init)" 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "Evaluate the linear model for several values of x" 68 | ] 69 | }, 70 | { 71 | "cell_type": "code", 72 | "execution_count": 4, 73 | "metadata": { 74 | "collapsed": false 75 | }, 76 | "outputs": [ 77 | { 78 | "data": { 79 | "text/plain": [ 80 | "array([ 0. 
, 0.30000001, 0.60000002, 0.90000004], dtype=float32)"
81 | ]
82 | },
83 | "execution_count": 4,
84 | "metadata": {},
85 | "output_type": "execute_result"
86 | }
87 | ],
88 | "source": [
89 | "sess.run(linear_model, {x: [1, 2, 3, 4]})"
90 | ]
91 | },
92 | {
93 | "cell_type": "markdown",
94 | "metadata": {},
95 | "source": [
96 | "Create a sum of squares loss function"
97 | ]
98 | },
99 | {
100 | "cell_type": "code",
101 | "execution_count": 5,
102 | "metadata": {
103 | "collapsed": true
104 | },
105 | "outputs": [],
106 | "source": [
107 | "y = tf.placeholder(tf.float32)\n",
108 | "squared_deltas = tf.square(linear_model - y)\n",
109 | "loss = tf.reduce_sum(squared_deltas)"
110 | ]
111 | },
112 | {
113 | "cell_type": "markdown",
114 | "metadata": {},
115 | "source": [
116 | "Evaluate the loss function"
117 | ]
118 | },
119 | {
120 | "cell_type": "code",
121 | "execution_count": 6,
122 | "metadata": {
123 | "collapsed": false
124 | },
125 | "outputs": [
126 | {
127 | "data": {
128 | "text/plain": [
129 | "23.66"
130 | ]
131 | },
132 | "execution_count": 6,
133 | "metadata": {},
134 | "output_type": "execute_result"
135 | }
136 | ],
137 | "source": [
138 | "x_train = [1, 2, 3, 4]\n",
139 | "y_train = [0, -1, -2, -3]\n",
140 | "sess.run(loss, {x: x_train, y: y_train})"
141 | ]
142 | },
143 | {
144 | "cell_type": "markdown",
145 | "metadata": {},
146 | "source": [
147 | "The values of $W$ and $b$ that produce the smallest loss are -1 and 1. Assign the values for $W$ and $b$"
148 | ]
149 | },
150 | {
151 | "cell_type": "code",
152 | "execution_count": 7,
153 | "metadata": {
154 | "collapsed": false
155 | },
156 | "outputs": [
157 | {
158 | "data": {
159 | "text/plain": [
160 | "0.0"
161 | ]
162 | },
163 | "execution_count": 7,
164 | "metadata": {},
165 | "output_type": "execute_result"
166 | }
167 | ],
168 | "source": [
169 | "fixW = tf.assign(W, [-1])\n",
170 | "fixb = tf.assign(b, [1])\n",
171 | "sess.run([fixW, fixb])\n",
172 | "sess.run(loss, {x: x_train, y: y_train})"
173 | ]
174 | },
175 | {
176 | "cell_type": "markdown",
177 | "metadata": {},
178 | "source": [
179 | "Train the model using the gradient descent optimizer to compute the values of $W$ and $b$ for the smallest loss"
180 | ]
181 | },
182 | {
183 | "cell_type": "code",
184 | "execution_count": 8,
185 | "metadata": {
186 | "collapsed": true
187 | },
188 | "outputs": [],
189 | "source": [
190 | "optimizer = tf.train.GradientDescentOptimizer(0.01)\n",
191 | "train = optimizer.minimize(loss)\n",
192 | "\n",
193 | "sess.run(init)\n",
194 | "for i in range(1000):\n",
195 | "    sess.run(train, {x: x_train, y: y_train})"
196 | ]
197 | },
198 | {
199 | "cell_type": "markdown",
200 | "metadata": {},
201 | "source": [
202 | "Display the computed model parameters"
203 | ]
204 | },
205 | {
206 | "cell_type": "code",
207 | "execution_count": 9,
208 | "metadata": {
209 | "collapsed": false
210 | },
211 | "outputs": [
212 | {
213 | "data": {
214 | "text/plain": [
215 | "[array([-0.9999969], dtype=float32),\n",
216 | " array([ 0.99999082], dtype=float32),\n",
217 | " 5.6999738e-11]"
218 | ]
219 | },
220 | "execution_count": 9,
221 | "metadata": {},
222 | "output_type": "execute_result"
223 | }
224 | ],
225 | "source": [
226 | "sess.run([W, b, loss], {x: x_train, y: y_train})"
227 | ]
228 | },
229 | {
230 | "cell_type": "markdown",
231 | "metadata": {},
232 | "source": [
233 | "Create a linear model and save the output"
234 | ]
235 | },
236 | {
237 | "cell_type": "code",
238 | "execution_count": 10,
239 | "metadata": {
240 | "collapsed": true
241 | 
}, 242 | "outputs": [], 243 | "source": [ 244 | "logs_path = '/home/ubuntu/tensorflow-logs'\n", 245 | "# make the directory if it does not exist\n", 246 | "!mkdir -p $logs_path" 247 | ] 248 | }, 249 | { 250 | "cell_type": "markdown", 251 | "metadata": {}, 252 | "source": [ 253 | "Run the entire linear model at once" 254 | ] 255 | }, 256 | { 257 | "cell_type": "code", 258 | "execution_count": 11, 259 | "metadata": { 260 | "collapsed": false 261 | }, 262 | "outputs": [ 263 | { 264 | "name": "stdout", 265 | "output_type": "stream", 266 | "text": [ 267 | "W: [-0.9999969] b: [ 0.99999082] loss: 5.69997e-11\n" 268 | ] 269 | } 270 | ], 271 | "source": [ 272 | "tf.reset_default_graph()\n", 273 | "\n", 274 | "# Model parameters\n", 275 | "W = tf.Variable([.3], tf.float32, name='W')\n", 276 | "b = tf.Variable([-.3], tf.float32, name='b')\n", 277 | "# Model input and output\n", 278 | "x = tf.placeholder(tf.float32, name='x')\n", 279 | "linear_model = W * x + b\n", 280 | "y = tf.placeholder(tf.float32, name='y')\n", 281 | "# loss\n", 282 | "loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares\n", 283 | "# optimizer\n", 284 | "optimizer = tf.train.GradientDescentOptimizer(0.01)\n", 285 | "train = optimizer.minimize(loss)\n", 286 | "# training data\n", 287 | "x_train = [1,2,3,4]\n", 288 | "y_train = [0,-1,-2,-3]\n", 289 | "# training loop\n", 290 | "init = tf.global_variables_initializer()\n", 291 | "sess = tf.Session()\n", 292 | "sess.run(init) # reset values to wrong\n", 293 | "for i in range(1000):\n", 294 | " sess.run(train, {x:x_train, y:y_train})\n", 295 | "\n", 296 | "# evaluate training accuracy\n", 297 | "curr_W, curr_b, curr_loss = sess.run(\n", 298 | " [W, b, loss], {x:x_train, y:y_train})\n", 299 | "print(\"W: %s b: %s loss: %s\"%(curr_W, curr_b, curr_loss))" 300 | ] 301 | }, 302 | { 303 | "cell_type": "markdown", 304 | "metadata": {}, 305 | "source": [ 306 | "Create a graph of the nodes" 307 | ] 308 | }, 309 | { 310 | "cell_type": "code", 311 | "execution_count": 12, 312 | "metadata": { 313 | "collapsed": false 314 | }, 315 | "outputs": [], 316 | "source": [ 317 | "summary_writer = tf.summary.FileWriter(\n", 318 | " logs_path, graph=sess.graph)\n", 319 | "summary_writer.close()" 320 | ] 321 | }, 322 | { 323 | "cell_type": "markdown", 324 | "metadata": {}, 325 | "source": [ 326 | "Using tf.contrib.learn to create a linear regression model" 327 | ] 328 | }, 329 | { 330 | "cell_type": "code", 331 | "execution_count": 13, 332 | "metadata": { 333 | "collapsed": false 334 | }, 335 | "outputs": [ 336 | { 337 | "data": { 338 | "text/plain": [ 339 | "{'global_step': 1000, 'loss': 2.5715356e-06}" 340 | ] 341 | }, 342 | "execution_count": 13, 343 | "metadata": {}, 344 | "output_type": "execute_result" 345 | } 346 | ], 347 | "source": [ 348 | "import numpy as np\n", 349 | "\n", 350 | "tf.logging.set_verbosity(tf.logging.ERROR)\n", 351 | "features = [tf.contrib.layers.real_valued_column(\"x\", dimension=1)]\n", 352 | "estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)\n", 353 | "\n", 354 | "x = np.array([1, 2, 3, 4])\n", 355 | "y = np.array([0, -1, -2, -3])\n", 356 | "input_fn = tf.contrib.learn.io.numpy_input_fn(\n", 357 | " {'x': x}, y, batch_size=4, num_epochs=1000)\n", 358 | "\n", 359 | "estimator.fit(input_fn=input_fn, steps=1000)\n", 360 | "estimator.evaluate(input_fn=input_fn)" 361 | ] 362 | }, 363 | { 364 | "cell_type": "markdown", 365 | "metadata": { 366 | "collapsed": true 367 | }, 368 | "source": [ 369 | "Create a custom model with tf.contrib.learn" 370 | 
]
371 | },
372 | {
373 | "cell_type": "code",
374 | "execution_count": 14,
375 | "metadata": {
376 | "collapsed": true
377 | },
378 | "outputs": [],
379 | "source": [
380 | "def model(features, labels, mode):\n",
381 | "    # Build a linear model and predict values\n",
382 | "    W = tf.get_variable(\"W\", shape=[1], dtype=tf.float64)\n",
383 | "    b = tf.get_variable(\"b\", shape=[1], dtype=tf.float64)\n",
384 | "    y = W * features['x'] + b\n",
385 | "    \n",
386 | "    loss = tf.reduce_sum(tf.square(y - labels))\n",
387 | "    \n",
388 | "    global_step = tf.train.get_global_step()\n",
389 | "    optimizer = tf.train.GradientDescentOptimizer(0.01)\n",
390 | "    train = tf.group(optimizer.minimize(loss),\n",
391 | "        tf.assign_add(global_step, 1))\n",
392 | "    return tf.contrib.learn.ModelFnOps(\n",
393 | "        mode=mode, predictions=y, loss=loss, train_op=train)"
394 | ]
395 | },
396 | {
397 | "cell_type": "markdown",
398 | "metadata": {},
399 | "source": [
400 | "Create the estimator with the custom model"
401 | ]
402 | },
403 | {
404 | "cell_type": "code",
405 | "execution_count": 15,
406 | "metadata": {
407 | "collapsed": false
408 | },
409 | "outputs": [
410 | {
411 | "data": {
412 | "text/plain": [
413 | "{'global_step': 1000, 'loss': 2.7203895e-12}"
414 | ]
415 | },
416 | "execution_count": 15,
417 | "metadata": {},
418 | "output_type": "execute_result"
419 | }
420 | ],
421 | "source": [
422 | "estimator = tf.contrib.learn.Estimator(model_fn=model)\n",
423 | "x = np.array([1., 2., 3., 4.])\n",
424 | "y = np.array([0., -1., -2., -3.])\n",
425 | "input_fn = tf.contrib.learn.io.numpy_input_fn(\n",
426 | "    {'x': x}, y, 4, num_epochs=1000)\n",
427 | "\n",
428 | "# train\n",
429 | "estimator.fit(input_fn=input_fn, steps=1000)\n",
430 | "# evaluate our model\n",
431 | "estimator.evaluate(input_fn=input_fn, steps=10)"
432 | ]
433 | },
434 | {
435 | "cell_type": "code",
436 | "execution_count": null,
437 | "metadata": {
438 | "collapsed": true
439 | },
440 | "outputs": [],
441 | "source": []
442 | }
443 | ],
444 | "metadata": {
445 | "kernelspec": {
446 | "display_name": "Python 3",
447 | "language": "python",
448 | "name": "python3"
449 | },
450 | "language_info": {
451 | "codemirror_mode": {
452 | "name": "ipython",
453 | "version": 3
454 | },
455 | "file_extension": ".py",
456 | "mimetype": "text/x-python",
457 | "name": "python",
458 | "nbconvert_exporter": "python",
459 | "pygments_lexer": "ipython3",
460 | "version": "3.5.2"
461 | }
462 | },
463 | "nbformat": 4,
464 | "nbformat_minor": 2
465 | }
466 | 
--------------------------------------------------------------------------------
/notebooks/03-minst_for_ml_beginners.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "This notebook is created from the TensorFlow tutorial:\n",
8 | "http://tensorflow.org/tutorials/mnist/beginners/index.md"
9 | ]
10 | },
11 | {
12 | "cell_type": "markdown",
13 | "metadata": {},
14 | "source": [
15 | "Import the MNIST Data"
16 | ]
17 | },
18 | {
19 | "cell_type": "code",
20 | "execution_count": 1,
21 | "metadata": {
22 | "collapsed": false
23 | },
24 | "outputs": [],
25 | "source": [
26 | "# import input_data\n",
27 | "from tensorflow.examples.tutorials.mnist import input_data"
28 | ]
29 | },
30 | {
31 | "cell_type": "code",
32 | "execution_count": 2,
33 | "metadata": {
34 | "collapsed": false
35 | },
36 | "outputs": [
37 | {
38 | 
"name": "stdout", 40 | "output_type": "stream", 41 | "text": [ 42 | "Extracting MNIST_data/train-images-idx3-ubyte.gz\n", 43 | "Extracting MNIST_data/train-labels-idx1-ubyte.gz\n", 44 | "Extracting MNIST_data/t10k-images-idx3-ubyte.gz\n", 45 | "Extracting MNIST_data/t10k-labels-idx1-ubyte.gz\n" 46 | ] 47 | } 48 | ], 49 | "source": [ 50 | "mnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)" 51 | ] 52 | }, 53 | { 54 | "cell_type": "markdown", 55 | "metadata": {}, 56 | "source": [ 57 | "Get size of training data" 58 | ] 59 | }, 60 | { 61 | "cell_type": "code", 62 | "execution_count": 3, 63 | "metadata": { 64 | "collapsed": false 65 | }, 66 | "outputs": [ 67 | { 68 | "data": { 69 | "text/plain": [ 70 | "55000" 71 | ] 72 | }, 73 | "execution_count": 3, 74 | "metadata": {}, 75 | "output_type": "execute_result" 76 | } 77 | ], 78 | "source": [ 79 | "mnist.train.num_examples" 80 | ] 81 | }, 82 | { 83 | "cell_type": "markdown", 84 | "metadata": {}, 85 | "source": [ 86 | "Get size of test data" 87 | ] 88 | }, 89 | { 90 | "cell_type": "code", 91 | "execution_count": 4, 92 | "metadata": { 93 | "collapsed": false 94 | }, 95 | "outputs": [ 96 | { 97 | "data": { 98 | "text/plain": [ 99 | "10000" 100 | ] 101 | }, 102 | "execution_count": 4, 103 | "metadata": {}, 104 | "output_type": "execute_result" 105 | } 106 | ], 107 | "source": [ 108 | "mnist.test.num_examples" 109 | ] 110 | }, 111 | { 112 | "cell_type": "markdown", 113 | "metadata": {}, 114 | "source": [ 115 | "Display size of training images tensor" 116 | ] 117 | }, 118 | { 119 | "cell_type": "code", 120 | "execution_count": 5, 121 | "metadata": { 122 | "collapsed": false 123 | }, 124 | "outputs": [ 125 | { 126 | "data": { 127 | "text/plain": [ 128 | "(55000, 784)" 129 | ] 130 | }, 131 | "execution_count": 5, 132 | "metadata": {}, 133 | "output_type": "execute_result" 134 | } 135 | ], 136 | "source": [ 137 | "mnist.train.images.shape" 138 | ] 139 | }, 140 | { 141 | "cell_type": "code", 142 | "execution_count": 6, 143 | "metadata": { 144 | "collapsed": false 145 | }, 146 | "outputs": [ 147 | { 148 | "name": "stdout", 149 | "output_type": "stream", 150 | "text": [ 151 | "Populating the interactive namespace from numpy and matplotlib\n" 152 | ] 153 | } 154 | ], 155 | "source": [ 156 | "import numpy as np\n", 157 | "import matplotlib.pyplot as plt\n", 158 | "import matplotlib.cm as cm\n", 159 | "%pylab inline" 160 | ] 161 | }, 162 | { 163 | "cell_type": "code", 164 | "execution_count": 7, 165 | "metadata": { 166 | "collapsed": false 167 | }, 168 | "outputs": [], 169 | "source": [ 170 | "first_image_array = mnist.train.images[0, ]\n", 171 | "image_length = int(np.sqrt(first_image_array.size))\n", 172 | "first_image = np.reshape(first_image_array, (-1, image_length))" 173 | ] 174 | }, 175 | { 176 | "cell_type": "code", 177 | "execution_count": 8, 178 | "metadata": { 179 | "collapsed": false 180 | }, 181 | "outputs": [ 182 | { 183 | "data": { 184 | "text/plain": [ 185 | "(28, 28)" 186 | ] 187 | }, 188 | "execution_count": 8, 189 | "metadata": {}, 190 | "output_type": "execute_result" 191 | } 192 | ], 193 | "source": [ 194 | "first_image.shape" 195 | ] 196 | }, 197 | { 198 | "cell_type": "code", 199 | "execution_count": 9, 200 | "metadata": { 201 | "collapsed": false 202 | }, 203 | "outputs": [ 204 | { 205 | "data": { 206 | "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAP8AAAD8CAYAAAC4nHJkAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAADZxJREFUeJzt3X+IHPUZx/HPkx9VuFRMGnMETbUt0lBPSMsRK2o5sSkq\nxRj/iFEoKZaeQhMrKjRaSAOCaOkPCmLxiqGxJDZKW5M/qq0NjT+wiLlo/d2q9RovXhIlEg1iYuLT\nP24sp958Z92d3Zm95/2C43bnmR8Pm3xuZnd25mvuLgDxTKu6AQDVIPxAUIQfCIrwA0ERfiAowg8E\nRfiBoAg/EBThB4Ka0cmNmRlfJwTazN2tkfla2vOb2flm9i8ze9nM1rSyLgCdZc1+t9/Mpkv6t6Ql\nkkYlPSHpMnd/PrEMe36gzTqx518s6WV3/4+7H5b0e0lLW1gfgA5qJfwnSnptwvPRbNpHmNmgme0w\nsx0tbAtAydr+gZ+7D0kakjjsB+qklT3/bkkLJjw/KZsGoAu0Ev4nJJ1qZl8ws89IWiFpazltAWi3\npg/73f2Ima2S9BdJ0yWtd/fnSusMQFs1faqvqY3xnh9ou458yQdA9yL8QFCEHwiK8ANBEX4gKMIP\nBEX4gaAIPxAU4QeCIvxAUIQfCIrwA0ERfiAowg8ERfiBoAg/EBThB4Ii/EBQhB8IivADQRF+ICjC\nDwRF+IGgCD8QFOEHgiL8QFCEHwiK8ANBEX4gqKaH6JYkMxuR9I6ko5KOuHt/GU0BaL+Wwp85193f\nLGE9ADqIw34gqFbD75L+ambDZjZYRkMAOqPVw/6z3X23mc2T9KCZvejuD0+cIfujwB8GoGbM3ctZ\nkdk6SQfd/WeJecrZGIBc7m6NzNf0Yb+Z9ZjZZz98LOlbkp5tdn0AOquVw/5eSX8ysw/Xs8ndHyil\nKwBtV9phf0Mb47AfaLu2H/YD6G6EHwiK8ANBEX4gKMIPBEX4gaDKuKoPFbv++utza0Wnct98M31B\nZl9fX7L+yCOPJOtbt25N1lEd9vxAUIQfCIrwA0ERfiAowg8ERfiBoAg/ENSUuaR39erVyfoZZ5yR\nrF9yySVlttNRxxxzTNPLFv37T58+PVl///33k/UjR47k1nbt2pVcdmBgIFnfs2dPsh4Vl/QCSCL8\nQFCEHwiK8ANBEX4gKMIPBEX4gaC66jz/pk2bcmuXXnppctlp0/g7121efPHFZP28885L1l9//fUy\n2+kanOcHkET4gaAIPxAU4QeCIvxAUIQfCIrwA0EVnuc3s/WSvi1pn7v3ZdPmSNos6RRJI5KWu/tb\nhRtr8Tz/gQMHcmvHHXdcctmic76HDx9uqqcyPPbYY8n65s2bO9TJp3fBBRck6ytWrMitHX/88S1t\nu+h7AOeee25ubSrfC6DM8/y/lXT+x6atkbTN3U+VtC17DqCLFIbf3R+WtP9jk5dK2pA93iDp4pL7\nAtBmzb7n73X3sezxHkm9JfUDoENaHqvP3T31Xt7MBiUNtrodAOVqds+/18zmS1L2e1/ejO4+5O79\n7t7f5LYAtEGz4d8qaWX2eKWkLeW0A6BTCsNvZndL+oekL5vZqJl9T9ItkpaY2UuSvpk9B9BFuup6\n/tNPPz23VnRf/nvvvTdZT32HAM1buHBhbu2hhx5KLjtv3ryWtn3rrbfm1tasmbpnp7meH0AS4QeC\nIvxAUIQfCIrwA0ERfiCorjrVh6llcDD9re877rijpfW/++67ubWenp6W1l1nnOoDkET4gaAIPxAU\n4QeCIvxAUIQfCIrwA0ERfiAowg8ERfiBoAg/EBThB4Ii/EBQhB8IivADQbU8XBeQsnbt2tzaOeec\n09Ztz5iR/997YGAguez27dvLbaaG2PMDQRF+ICjCDwRF+IGgCD8QFOEHgiL8QFCF9+03s/WSvi1p\nn7v3ZdPWSfq+pDey2W509z8Xboz79rfFggULcmurV69OLnvVVVeV3c5HzJo1K7dm1tDt5dvi0KFD\nyfqxxx7boU7KV+Z9+38r6fxJpv/S3RdlP4XBB1AvheF394cl7e9ALwA6qJX3/KvM7GkzW29ms0vr\nCEBHNBv+X0v6kqRFksYk/TxvRjMbNLMdZrajyW0BaIOmwu/ue939qLt/IOk3khYn5h1y935372+2\nSQDlayr8ZjZ/wtNlkp4tpx0AnVJ4Sa+Z3S1pQNJcMxuV9BNJA2a2SJJLGpF0ZRt7BNAGheF398sm\nmXxnG3oJa/ny5cn64sW576okSVdccUVubfZsPoudzH333Vd1C5XjG35AUIQfCIrwA0ERfiAowg8E\nRfiBoLh1dwn6+vqS9XvuuSdZX7hwYbLezktfDxw4kKwfPHiwpfXfcMMNubWiy2pvu+22ZP2EE05o\nqidJ2rVrV9PLThXs+YGgCD8QFOEHgiL8QFCEHwiK8ANBEX4gqMJbd5e6sS6+dffNN9+cW7vyyvTt\nDObMmZOsHz58OFkvOh9+++2359ZGR0eTy95///3J+iuvvJKst9PIyEiyfvLJJyfrqdftzDPPTC77\n5JNPJut1VuatuwFMQYQfCIrwA0ERfiAowg8ERfiBoAg/EBTX8zdoYGAgt1Z0Hn94eDhZv+mmm5L1\nLVu2JOvd6qyzzkrW586d29L6jx49mlvr5vP4ZWHPDwRF+IGgCD8QFOEHgiL8QFCEHwiK8ANBFZ7n\nN7MFku6S1CvJJQ25+6/MbI6kzZJOkTQiabm7v9W+Vqt10UUX5dbWrl2bXPbqq68uu50p4bTTTkvW\ne3p6Wlr/zp07W1p+qmtkz39E0nXu/hVJX5f0AzP7iqQ1kra5+6mStmXPAXSJwvC7+5i778wevyPp\nBUknSloqaUM22wZJF7erSQDl+1Tv+c3sFElflfS4pF53H8tKezT+tgBAl2j4u/1mNkvSHyRd4+5v\nTxw/zt097/58ZjYoabDVRgGUq6E9v5nN1HjwN7r7H7PJe81sflafL2nfZMu6+5C797t7fxkNAyhH\nYfhtfBd/p6QX3P0XE0pbJa3MHq+UNDUvPQOmqMJbd5vZ2ZIekfSMpA+yyTdq/H3/PZI+L+m/Gj/V\nt79gXV17626Ub+PGjcn65Zdfnqy/9957yfqyZctyaw888EBy2W7W6K27C9/zu/ujkvJWdt6naQpA\nffANPyAowg8ERfiBoAg/EBThB4Ii/EBQ3LobbTU2NpZbmzdvXkvrLrpkdyqfyy8De34gKMIPBEX4\ngaAIPxAU4QeCIvxAUIQfCIrz/Gir1PDl06al9z1F1+sXDW2ONPb8QFCEHwiK8ANBEX4gKMIPBEX4\ngaAIPxAU5/nRklWrViXrM2bk/xc7dOhQctlrr702Wed6/daw5weCIvxAUIQfCIrwA0ERfiAowg8E\nRfiBoMzd0zOYLZB0l6ReSS5pyN1/ZWbrJH1f0hvZrDe6+58L1pXeGGpn5syZyfqrr76arPf29ubW\ntm/fnlx2yZIlyTom5+7WyHyNfMnniKTr3H2nmX1W0rCZ
PZjVfunuP2u2SQDVKQy/u49JGssev2Nm\nL0g6sd2NAWivT/We38xOkfRVSY9nk1aZ2dNmtt7MZucsM2hmO8xsR0udAihVw+E3s1mS/iDpGnd/\nW9KvJX1J0iKNHxn8fLLl3H3I3fvdvb+EfgGUpKHwm9lMjQd/o7v/UZLcfa+7H3X3DyT9RtLi9rUJ\noGyF4Tczk3SnpBfc/RcTps+fMNsySc+W3x6Admnk0/6zJH1H0jNm9lQ27UZJl5nZIo2f/huRdGVb\nOkSlik4Fb9q0KVkfHh7OrW3evLmpnlCORj7tf1TSZOcNk+f0AdQb3/ADgiL8QFCEHwiK8ANBEX4g\nKMIPBFV4SW+pG+OSXqDtGr2klz0/EBThB4Ii/EBQhB8IivADQRF+ICjCDwTV6SG635T03wnP52bT\n6qiuvdW1L4nemlVmbyc3OmNHv+TziY2b7ajrvf3q2ltd+5LorVlV9cZhPxAU4QeCqjr8QxVvP6Wu\nvdW1L4nemlVJb5W+5wdQnar3/AAqUkn4zex8M/uXmb1sZmuq6CGPmY2Y2TNm9lTVQ4xlw6DtM7Nn\nJ0ybY2YPmtlL2e9Jh0mrqLd1ZrY7e+2eMrMLK+ptgZn93cyeN7PnzOyH2fRKX7tEX5W8bh0/7Dez\n6ZL+LWmJpFFJT0i6zN2f72gjOcxsRFK/u1d+TtjMviHpoKS73L0vm/ZTSfvd/ZbsD+dsd/9RTXpb\nJ+lg1SM3ZwPKzJ84srSkiyV9VxW+dom+lquC162KPf9iSS+7+3/c/bCk30taWkEftefuD0va/7HJ\nSyVtyB5v0Ph/no7L6a0W3H3M3Xdmj9+R9OHI0pW+dom+KlFF+E+U9NqE56Oq15DfLumvZjZsZoNV\nNzOJ3mzYdEnaI6m3ymYmUThycyd9bGTp2rx2zYx4XTY+8Puks939a5IukPSD7PC2lnz8PVudTtc0\nNHJzp0wysvT/VfnaNTviddmqCP9uSQsmPD8pm1YL7r47+71P0p9Uv9GH9344SGr2e1/F/fxfnUZu\nnmxkadXgtavTiNdVhP8JSaea2RfM7DOSVkjaWkEfn2BmPdkHMTKzHknfUv1GH94qaWX2eKWkLRX2\n8hF1Gbk5b2RpVfza1W7Ea3fv+I+kCzX+if8rkn5cRQ85fX1R0j+zn+eq7k3S3Ro/DHxf45+NfE/S\n5yRtk/SSpL9JmlOj3n4n6RlJT2s8aPMr6u1sjR/SPy3pqeznwqpfu0RflbxufMMPCIoP/ICgCD8Q\nFOEHgiL8QFCEHwiK8ANBEX4gKMIPBPU/uUhluG6K4TwAAAAASUVORK5CYII=\n", 207 | "text/plain": [ 208 | "" 209 | ] 210 | }, 211 | "metadata": {}, 212 | "output_type": "display_data" 213 | } 214 | ], 215 | "source": [ 216 | "plt.imshow(first_image, cmap = cm.Greys_r)\n", 217 | "plt.show()" 218 | ] 219 | }, 220 | { 221 | "cell_type": "code", 222 | "execution_count": null, 223 | "metadata": { 224 | "collapsed": true 225 | }, 226 | "outputs": [], 227 | "source": [] 228 | }, 229 | { 230 | "cell_type": "code", 231 | "execution_count": null, 232 | "metadata": { 233 | "collapsed": true 234 | }, 235 | "outputs": [], 236 | "source": [] 237 | } 238 | ], 239 | "metadata": { 240 | "kernelspec": { 241 | "display_name": "Python 3", 242 | "language": "python", 243 | "name": "python3" 244 | }, 245 | "language_info": { 246 | "codemirror_mode": { 247 | "name": "ipython", 248 | "version": 3 249 | }, 250 | "file_extension": ".py", 251 | "mimetype": "text/x-python", 252 | "name": "python", 253 | "nbconvert_exporter": "python", 254 | "pygments_lexer": "ipython3", 255 | "version": "3.5.2" 256 | } 257 | }, 258 | "nbformat": 4, 259 | "nbformat_minor": 0 260 | } 261 | -------------------------------------------------------------------------------- /notebooks/ipython-run.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 4 | 5 | ROOT_DIR=/vagrant 6 | export PYTHONPATH=$ROOT_DIR/python 7 | ipython notebook --port=8888 --ip=0.0.0.0 --no-browser --notebook-dir=$ROOT_DIR/notebooks 8 | -------------------------------------------------------------------------------- /puppet/install_puppet_modules.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Exit on any errors. 4 | set -e 5 | 6 | PUPPET_INSTALL='puppet module install ' 7 | 8 | # install puppet modules 9 | (puppet module list | grep acme-ohmyzsh) || 10 | $PUPPET_INSTALL -v 0.1.2 acme-ohmyzsh 11 | -------------------------------------------------------------------------------- /puppet/manifests/classes/init.pp: -------------------------------------------------------------------------------- 1 | # Commands to run before all others in puppet. 
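# (Clarifying note: the chaining arrow Exec["apt-update"] -> Package <| |>
# below orders the apt cache refresh before every Package resource declared
# in this class.)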
2 | class init { 3 | group { "puppet": 4 | ensure => "present", 5 | } 6 | case $operatingsystem { 7 | ubuntu: { 8 | exec { "apt-update": 9 | command => "sudo apt-get update", 10 | } 11 | Exec["apt-update"] -> Package <| |> 12 | package { ['git-core']: 13 | ensure => present, 14 | } 15 | exec { 'git config --global url."https://".insteadOf git://' : 16 | environment => 'HOME=/home/vagrant', 17 | require => Package['git-core'], 18 | user => 'vagrant' 19 | } 20 | package { 'autojump': 21 | ensure => present, 22 | require => Exec['apt-update']; 23 | } 24 | package { 'jq': 25 | ensure => present, 26 | require => Exec['apt-update']; 27 | } 28 | file { '/etc/profile.d/autojump.sh': 29 | source => '/usr/share/autojump/autojump.sh', 30 | require => Package['autojump'] 31 | } 32 | } 33 | } 34 | } 35 | -------------------------------------------------------------------------------- /puppet/manifests/classes/ohmyzsh_setup.pp: -------------------------------------------------------------------------------- 1 | # Install zsh and ohmyzsh 2 | class ohmyzsh_setup { 3 | case $operatingsystem { 4 | ubuntu: { 5 | class { 'ohmyzsh': 6 | } 7 | # The following two do not work if git port is blocked 8 | #ohmyzsh::plugins { 'vagrant': 9 | # require => Class['ohmyzsh'], 10 | # plugins => 'tmux' 11 | #} 12 | #ohmyzsh::install { 'vagrant': 13 | # require => Class['ohmyzsh'], 14 | # before => Exec['vagrant_bash'] 15 | #} 16 | #exec { "vagrant_bash": 17 | # command => "chsh -s /bin/zsh vagrant" 18 | #} 19 | } 20 | } 21 | } 22 | 23 | -------------------------------------------------------------------------------- /puppet/manifests/classes/python_setup.pp: -------------------------------------------------------------------------------- 1 | # install python 2 | class python_setup { 3 | case $operatingsystem { 4 | ubuntu: { 5 | package { "python-dev": 6 | ensure => installed 7 | } 8 | package { "python-pip": 9 | ensure => installed, 10 | require => Package['python-dev'] 11 | } 12 | package { 'httpie': 13 | ensure => installed, 14 | provider => pip, 15 | require => Package['python-pip'] 16 | } 17 | package { 'tensorflow': 18 | ensure => installed, 19 | provider => pip, 20 | require => Package['python-dev'], 21 | source => 'https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl' 22 | } 23 | package { "python-zmq": 24 | ensure => installed, 25 | require => Package['python-pip'] 26 | } 27 | package { "jinja2": 28 | ensure => installed, 29 | provider => pip, 30 | require => Package['python-pip'] 31 | } 32 | package { 'tornado': 33 | ensure => installed, 34 | provider => pip, 35 | require => Package['python-pip'] 36 | } 37 | package { 'jsonschema': 38 | ensure => installed, 39 | provider => pip, 40 | require => Package['python-pip'] 41 | } 42 | package { 'terminado': 43 | ensure => installed, 44 | provider => pip, 45 | require => Package['python-pip'] 46 | } 47 | package { 'ipython': 48 | ensure => '3.2.1', 49 | provider => pip, 50 | require => Package['python-zmq', 'jinja2', 'tornado', 51 | 'jsonschema', 'terminado'] 52 | } 53 | package { ['libfreetype6-dev', 'pkg-config']: 54 | ensure => installed 55 | } 56 | package { 'matplotlib': 57 | ensure => installed, 58 | provider => pip, 59 | require => Package['libfreetype6-dev', 'pkg-config'] 60 | } 61 | package { 'python-scipy': 62 | ensure => installed 63 | } 64 | package { 'sklearn': 65 | ensure => installed, 66 | provider => pip, 67 | require => Package['python-pip'] 68 | } 69 | package { 'legit': # convenient git aliases 70 | ensure => 
installed,
71 |         provider => pip,
72 |         require => Package['python-pip']
73 |       }
74 |     }
75 |   }
76 | }
77 | 
--------------------------------------------------------------------------------
/puppet/manifests/default.pp:
--------------------------------------------------------------------------------
1 | #
2 | # puppet magic for dev boxes
3 | #
4 | import "classes/*.pp"
5 | 
6 | $PROJ_DIR = "/vagrant"
7 | $HOME_DIR = "/home/vagrant"
8 | 
9 | Exec {
10 |   path => "/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin",
11 | }
12 | 
13 | node 'tensorflow-ipy' {
14 |   class {
15 |     init: ;
16 |     python_setup:;
17 |     ohmyzsh_setup:;
18 |   }
19 | }
20 | 
21 | 
--------------------------------------------------------------------------------
/python/first-tensorflow.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | from __future__ import print_function
3 | import tensorflow as tf
4 | 
5 | print('tensorflow version: {}'.format(tf.__version__))
6 | 
7 | hello = tf.constant('Hello, TensorFlow!')
8 | 
9 | sess = tf.Session()
10 | print(sess.run(hello))
11 | 
12 | a = tf.constant(10)
13 | b = tf.constant(32)
14 | print(sess.run(a+b))
15 | 
--------------------------------------------------------------------------------
/python/input_data.py:
--------------------------------------------------------------------------------
1 | """Functions for downloading and reading MNIST data."""
2 | from __future__ import print_function
3 | import gzip
4 | import os
5 | import urllib
6 | 
7 | import numpy
8 | 
9 | SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'
10 | 
11 | 
12 | def maybe_download(filename, work_directory):
13 |     """Download the data from Yann's website, unless it's already here."""
14 |     if not os.path.exists(work_directory):
15 |         os.mkdir(work_directory)
16 |     filepath = os.path.join(work_directory, filename)
17 |     if not os.path.exists(filepath):
18 |         filepath, _ = urllib.urlretrieve(SOURCE_URL + filename, filepath)
19 |         statinfo = os.stat(filepath)
20 |         print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
21 |     return filepath
22 | 
23 | 
24 | def _read32(bytestream):
25 |     dt = numpy.dtype(numpy.uint32).newbyteorder('>')
26 |     return numpy.frombuffer(bytestream.read(4), dtype=dt)
27 | 
28 | 
29 | def extract_images(filename):
30 |     """Extract the images into a 4D uint8 numpy array [index, y, x, depth]."""
31 |     print('Extracting', filename)
32 |     with gzip.open(filename) as bytestream:
33 |         magic = _read32(bytestream)
34 |         if magic != 2051:
35 |             raise ValueError(
36 |                 'Invalid magic number %d in MNIST image file: %s' %
37 |                 (magic, filename))
38 |         num_images = _read32(bytestream)
39 |         rows = _read32(bytestream)
40 |         cols = _read32(bytestream)
41 |         buf = bytestream.read(rows * cols * num_images)
42 |         data = numpy.frombuffer(buf, dtype=numpy.uint8)
43 |         data = data.reshape(num_images, rows, cols, 1)
44 |         return data
45 | 
46 | 
47 | def dense_to_one_hot(labels_dense, num_classes=10):
48 |     """Convert class labels from scalars to one-hot vectors."""
49 |     num_labels = labels_dense.shape[0]
50 |     index_offset = numpy.arange(num_labels) * num_classes
51 |     labels_one_hot = numpy.zeros((num_labels, num_classes))
52 |     labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
53 |     return labels_one_hot
54 | 
55 | 
56 | def extract_labels(filename, one_hot=False):
57 |     """Extract the labels into a 1D uint8 numpy array [index]."""
58 |     print('Extracting', filename)
59 |     with gzip.open(filename) as bytestream:
60 |         magic = _read32(bytestream)
61 |         if magic != 2049:
62 |             raise 
ValueError( 63 | 'Invalid magic number %d in MNIST label file: %s' % 64 | (magic, filename)) 65 | num_items = _read32(bytestream) 66 | buf = bytestream.read(num_items) 67 | labels = numpy.frombuffer(buf, dtype=numpy.uint8) 68 | if one_hot: 69 | return dense_to_one_hot(labels) 70 | return labels 71 | 72 | 73 | class DataSet(object): 74 | 75 | def __init__(self, images, labels, fake_data=False): 76 | if fake_data: 77 | self._num_examples = 10000 78 | else: 79 | assert images.shape[0] == labels.shape[0], ( 80 | "images.shape: %s labels.shape: %s" % (images.shape, 81 | labels.shape)) 82 | self._num_examples = images.shape[0] 83 | 84 | # Convert shape from [num examples, rows, columns, depth] 85 | # to [num examples, rows*columns] (assuming depth == 1) 86 | assert images.shape[3] == 1 87 | images = images.reshape(images.shape[0], 88 | images.shape[1] * images.shape[2]) 89 | # Convert from [0, 255] -> [0.0, 1.0]. 90 | images = images.astype(numpy.float32) 91 | images = numpy.multiply(images, 1.0 / 255.0) 92 | self._images = images 93 | self._labels = labels 94 | self._epochs_completed = 0 95 | self._index_in_epoch = 0 96 | 97 | @property 98 | def images(self): 99 | return self._images 100 | 101 | @property 102 | def labels(self): 103 | return self._labels 104 | 105 | @property 106 | def num_examples(self): 107 | return self._num_examples 108 | 109 | @property 110 | def epochs_completed(self): 111 | return self._epochs_completed 112 | 113 | def next_batch(self, batch_size, fake_data=False): 114 | """Return the next `batch_size` examples from this data set.""" 115 | if fake_data: 116 | fake_image = [1.0 for _ in xrange(784)] 117 | fake_label = 0 118 | return [fake_image for _ in xrange(batch_size)], [ 119 | fake_label for _ in xrange(batch_size)] 120 | start = self._index_in_epoch 121 | self._index_in_epoch += batch_size 122 | if self._index_in_epoch > self._num_examples: 123 | # Finished epoch 124 | self._epochs_completed += 1 125 | # Shuffle the data 126 | perm = numpy.arange(self._num_examples) 127 | numpy.random.shuffle(perm) 128 | self._images = self._images[perm] 129 | self._labels = self._labels[perm] 130 | # Start next epoch 131 | start = 0 132 | self._index_in_epoch = batch_size 133 | assert batch_size <= self._num_examples 134 | end = self._index_in_epoch 135 | return self._images[start:end], self._labels[start:end] 136 | 137 | 138 | def read_data_sets(train_dir, fake_data=False, one_hot=False): 139 | class DataSets(object): 140 | pass 141 | data_sets = DataSets() 142 | 143 | if fake_data: 144 | data_sets.train = DataSet([], [], fake_data=True) 145 | data_sets.validation = DataSet([], [], fake_data=True) 146 | data_sets.test = DataSet([], [], fake_data=True) 147 | return data_sets 148 | 149 | TRAIN_IMAGES = 'train-images-idx3-ubyte.gz' 150 | TRAIN_LABELS = 'train-labels-idx1-ubyte.gz' 151 | TEST_IMAGES = 't10k-images-idx3-ubyte.gz' 152 | TEST_LABELS = 't10k-labels-idx1-ubyte.gz' 153 | VALIDATION_SIZE = 5000 154 | 155 | local_file = maybe_download(TRAIN_IMAGES, train_dir) 156 | train_images = extract_images(local_file) 157 | 158 | local_file = maybe_download(TRAIN_LABELS, train_dir) 159 | train_labels = extract_labels(local_file, one_hot=one_hot) 160 | 161 | local_file = maybe_download(TEST_IMAGES, train_dir) 162 | test_images = extract_images(local_file) 163 | 164 | local_file = maybe_download(TEST_LABELS, train_dir) 165 | test_labels = extract_labels(local_file, one_hot=one_hot) 166 | 167 | validation_images = train_images[:VALIDATION_SIZE] 168 | validation_labels = 
train_labels[:VALIDATION_SIZE] 169 | train_images = train_images[VALIDATION_SIZE:] 170 | train_labels = train_labels[VALIDATION_SIZE:] 171 | 172 | data_sets.train = DataSet(train_images, train_labels) 173 | data_sets.validation = DataSet(validation_images, validation_labels) 174 | data_sets.test = DataSet(test_images, test_labels) 175 | 176 | return data_sets 177 | -------------------------------------------------------------------------------- /scripts/ipy-udacity.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ROOT_DIR=/vagrant 4 | export PYTHONPATH=$ROOT_DIR/python 5 | NOTEBOOK_DIR=../tensorflow_source/tensorflow/tensorflow/examples/udacity 6 | 7 | ipython notebook --port=8888 --ip=0.0.0.0 --no-browser --notebook-dir=$NOTEBOOK_DIR 8 | --------------------------------------------------------------------------------
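A minimal usage sketch for `python/input_data.py` (hypothetical; not part of the repository, but it mirrors how notebook 03 consumes the MNIST helpers):

```
# Download MNIST into ./MNIST_data and fetch one training batch.
import input_data

mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)
images, labels = mnist.train.next_batch(100)
print(images.shape, labels.shape)  # (100, 784) and (100, 10)
```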