├── saved_models
│   └── .gitignore
├── bottleneck_features
│   └── .gitignore
├── images
│   ├── dalmation.jpg
│   ├── sample_cnn.png
│   ├── Brittany_02625.jpg
│   ├── my_images
│   │   ├── corgi.jpg
│   │   ├── obama.jpg
│   │   ├── putin.jpeg
│   │   ├── bradpitt.jpg
│   │   ├── udacity.png
│   │   └── labrador.jpeg
│   ├── sample_dog_output.png
│   ├── sample_human_output.png
│   ├── Labrador_retriever_06449.jpg
│   ├── Labrador_retriever_06455.jpg
│   ├── Labrador_retriever_06457.jpg
│   ├── American_water_spaniel_00648.jpg
│   ├── Curly-coated_retriever_03896.jpg
│   └── Welsh_springer_spaniel_08203.jpg
├── requirements
│   ├── requirements.txt
│   ├── aind-dog-linux.yml
│   ├── aind-dog-windows.yml
│   └── aind-dog-mac.yml
├── .gitignore
├── extract_bottleneck_features.py
└── README.md

--------------------------------------------------------------------------------
/saved_models/.gitignore:
--------------------------------------------------------------------------------
1 | 
--------------------------------------------------------------------------------
/bottleneck_features/.gitignore:
--------------------------------------------------------------------------------
1 | DogVGG16Data.npz
--------------------------------------------------------------------------------
/images/dalmation.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/dalmation.jpg
--------------------------------------------------------------------------------
/images/sample_cnn.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/sample_cnn.png
--------------------------------------------------------------------------------
/images/Brittany_02625.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/Brittany_02625.jpg
--------------------------------------------------------------------------------
/images/my_images/corgi.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/my_images/corgi.jpg
--------------------------------------------------------------------------------
/images/my_images/obama.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/my_images/obama.jpg
--------------------------------------------------------------------------------
/images/my_images/putin.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/my_images/putin.jpeg
--------------------------------------------------------------------------------
/images/my_images/bradpitt.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/my_images/bradpitt.jpg
--------------------------------------------------------------------------------
/images/my_images/udacity.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/my_images/udacity.png
--------------------------------------------------------------------------------
/images/sample_dog_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/sample_dog_output.png
--------------------------------------------------------------------------------
/images/my_images/labrador.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/my_images/labrador.jpeg
--------------------------------------------------------------------------------
/images/sample_human_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/sample_human_output.png
--------------------------------------------------------------------------------
/images/Labrador_retriever_06449.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/Labrador_retriever_06449.jpg
--------------------------------------------------------------------------------
/images/Labrador_retriever_06455.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/Labrador_retriever_06455.jpg
--------------------------------------------------------------------------------
/images/Labrador_retriever_06457.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/Labrador_retriever_06457.jpg
--------------------------------------------------------------------------------
/images/American_water_spaniel_00648.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/American_water_spaniel_00648.jpg
--------------------------------------------------------------------------------
/images/Curly-coated_retriever_03896.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/Curly-coated_retriever_03896.jpg
--------------------------------------------------------------------------------
/images/Welsh_springer_spaniel_08203.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/canyon289/dog-project/master/images/Welsh_springer_spaniel_08203.jpg
--------------------------------------------------------------------------------
/requirements/requirements.txt:
--------------------------------------------------------------------------------
1 | opencv-python==3.2.0.6
2 | h5py==2.6.0
3 | matplotlib==2.0.0
4 | numpy==1.12.0
5 | scipy==0.18.1
6 | tqdm==4.11.2
7 | keras==2.0.2
8 | scikit-learn==0.18.1
9 | pillow==4.0.0
10 | tensorflow==1.0.0
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | dogImages/
3 | lfw/
4 | saved_models/weights.best.from_scratch.hdf5
5 | saved_models/weights.best.vgg16.hdf5
6 | .ipynb_checkpoints/
7 | bottleneck_features/DogVGG16Data.npz
8 | *.npz
9 | *.hdf5
10 | __pycache__/*
11 | 
--------------------------------------------------------------------------------
/extract_bottleneck_features.py:
--------------------------------------------------------------------------------
1 | def extract_VGG16(tensor):
2 |     from keras.applications.vgg16 import VGG16, preprocess_input
3 |     return VGG16(weights='imagenet', include_top=False).predict(preprocess_input(tensor))
4 | 
5 | def extract_VGG19(tensor):
6 |     from keras.applications.vgg19 import VGG19, preprocess_input
7 |     return VGG19(weights='imagenet', include_top=False).predict(preprocess_input(tensor))
8 | 
9 | def extract_Resnet50(tensor):
10 |     from keras.applications.resnet50 import ResNet50, preprocess_input
11 |     return ResNet50(weights='imagenet', include_top=False).predict(preprocess_input(tensor))
12 | 
13 | def extract_Xception(tensor):
14 |     from keras.applications.xception import Xception, preprocess_input
15 |     return Xception(weights='imagenet', include_top=False).predict(preprocess_input(tensor))
16 | 
17 | def extract_InceptionV3(tensor):
18 |     from keras.applications.inception_v3 import InceptionV3, preprocess_input
19 |     return InceptionV3(weights='imagenet', include_top=False).predict(preprocess_input(tensor))
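These helpers expect a 4-D image tensor, so an image must first be loaded and reshaped. A minimal usage sketch (not part of the repository; `path_to_tensor` here mirrors the helper defined in `dog_app.ipynb`, and the sample image path is illustrative):

```
import numpy as np
from keras.preprocessing import image
from extract_bottleneck_features import extract_VGG16

def path_to_tensor(img_path):
    # Load the image as 224x224 RGB, convert to an array, and add a
    # batch dimension so the shape is (1, 224, 224, 3).
    img = image.load_img(img_path, target_size=(224, 224))
    return np.expand_dims(image.img_to_array(img), axis=0)

# VGG-16 with include_top=False maps a 224x224 input to a
# (1, 7, 7, 512) block of bottleneck features.
features = extract_VGG16(path_to_tensor('images/Brittany_02625.jpg'))
print(features.shape)
```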
--------------------------------------------------------------------------------
/requirements/aind-dog-linux.yml:
--------------------------------------------------------------------------------
1 | name: aind-dog
2 | channels:
3 | - defaults
4 | dependencies:
5 | - openssl=1.0.2l=0
6 | - pip=9.0.1=py36_1
7 | - python=3.6.1=2
8 | - readline=6.2=2
9 | - setuptools=27.2.0=py36_0
10 | - sqlite=3.13.0=0
11 | - tk=8.5.18=0
12 | - wheel=0.29.0=py36_0
13 | - xz=5.2.2=1
14 | - zlib=1.2.8=3
15 | - pip:
16 |   - bleach==2.0.0
17 |   - cycler==0.10.0
18 |   - decorator==4.0.11
19 |   - entrypoints==0.2.3
20 |   - h5py==2.6.0
21 |   - html5lib==0.999999999
22 |   - ipykernel==4.6.1
23 |   - ipython==6.1.0
24 |   - ipython-genutils==0.2.0
25 |   - ipywidgets==6.0.0
26 |   - jedi==0.10.2
27 |   - jinja2==2.9.6
28 |   - jsonschema==2.6.0
29 |   - jupyter==1.0.0
30 |   - jupyter-client==5.0.1
31 |   - jupyter-console==5.1.0
32 |   - jupyter-core==4.3.0
33 |   - keras==2.0.2
34 |   - markupsafe==1.0
35 |   - matplotlib==2.0.0
36 |   - mistune==0.7.4
37 |   - nbconvert==5.2.1
38 |   - nbformat==4.3.0
39 |   - notebook==5.0.0
40 |   - numpy==1.12.0
41 |   - olefile==0.44
42 |   - opencv-python==3.2.0.6
43 |   - pandocfilters==1.4.1
44 |   - pexpect==4.2.1
45 |   - pickleshare==0.7.4
46 |   - pillow==4.0.0
47 |   - prompt-toolkit==1.0.14
48 |   - protobuf==3.3.0
49 |   - ptyprocess==0.5.1
50 |   - pygments==2.2.0
51 |   - pyparsing==2.2.0
52 |   - python-dateutil==2.6.0
53 |   - pytz==2017.2
54 |   - pyyaml==3.12
55 |   - pyzmq==16.0.2
56 |   - qtconsole==4.3.0
57 |   - scikit-learn==0.18.1
58 |   - scipy==0.18.1
59 |   - simplegeneric==0.8.1
60 |   - six==1.10.0
61 |   - tensorflow==1.0.0
62 |   - terminado==0.6
63 |   - testpath==0.3.1
64 |   - theano==0.9.0
65 |   - tornado==4.5.1
66 |   - tqdm==4.11.2
67 |   - traitlets==4.3.2
68 |   - wcwidth==0.1.7
69 |   - webencodings==0.5.1
70 |   - widgetsnbextension==2.0.0
71 | 
--------------------------------------------------------------------------------
/requirements/aind-dog-windows.yml:
--------------------------------------------------------------------------------
1 | name: aind-dog
2 | channels:
3 | - defaults
4 | dependencies:
5 | - _nb_ext_conf=0.3.0=py35_0
6 | - anaconda-client=1.6.2=py35_0
7 | - bleach=1.5.0=py35_0
8 | - bzip2=1.0.6=vc14_3
9 | - clyent=1.2.2=py35_0
10 | - colorama=0.3.7=py35_0
11 | - cycler=0.10.0=py35_0
12 | - decorator=4.0.11=py35_0
13 | - entrypoints=0.2.2=py35_1
14 | - freetype=2.5.5=vc14_2
15 | - h5py=2.7.0=np112py35_0
16 | - hdf5=1.8.15.1=vc14_4
17 | - html5lib=0.999=py35_0
18 | - icu=57.1=vc14_0
19 | - ipykernel=4.5.2=py35_0
20 | - ipython=5.3.0=py35_0
21 | - ipython_genutils=0.1.0=py35_0
22 | - ipywidgets=6.0.0=py35_0
23 | - jinja2=2.9.5=py35_0
24 | - jpeg=9b=vc14_0
25 | - jsonschema=2.5.1=py35_0
26 | - jupyter=1.0.0=py35_3
27 | - jupyter_client=5.0.0=py35_0
28 | - jupyter_console=5.1.0=py35_0
29 | - jupyter_core=4.3.0=py35_0
30 | - libpng=1.6.27=vc14_0
31 | - libtiff=4.0.6=vc14_3
32 | - markupsafe=0.23=py35_2
33 | - matplotlib=2.0.0=np112py35_0
34 | - mistune=0.7.4=py35_0
35 | - mkl=2017.0.1=0
36 | - nb_anacondacloud=1.2.0=py35_0
37 | - nb_conda=2.0.0=py35_0
38 | - nb_conda_kernels=2.0.0=py35_0
39 | - nbconvert=5.1.1=py35_0
40 | - nbformat=4.3.0=py35_0
41 | - nbpresent=3.0.2=py35_0
42 | - notebook=4.4.1=py35_0
43 | - numpy=1.12.1=py35_0
44 | - olefile=0.44=py35_0
45 | - openssl=1.0.2k=vc14_0
46 | - pandocfilters=1.4.1=py35_0
47 | - path.py=10.1=py35_0
48 | - pickleshare=0.7.4=py35_0
49 | - pillow=4.0.0=py35_1
50 | - pip=9.0.1=py35_1
51 | - prompt_toolkit=1.0.13=py35_0
52 | - pygments=2.2.0=py35_0
53 | - pyparsing=2.1.4=py35_0
54 | - pyqt=5.6.0=py35_2
55 | - python=3.5.3=0
56 | - python-dateutil=2.6.0=py35_0
57 | - pytz=2016.10=py35_0
58 | - pyyaml=3.12=py35_0
59 | - pyzmq=16.0.2=py35_0
60 | - qt=5.6.2=vc14_3
61 | - qtconsole=4.2.1=py35_2
62 | - requests=2.13.0=py35_0
63 | - scikit-learn=0.18.1=np112py35_1
64 | - scipy=0.19.0=np112py35_0
65 | - setuptools=27.2.0=py35_1
66 | - simplegeneric=0.8.1=py35_1
67 | - sip=4.18=py35_0
68 | - six=1.10.0=py35_0
69 | - testpath=0.3=py35_0
70 | - tk=8.5.18=vc14_0
71 | - tornado=4.4.2=py35_0
72 | - traitlets=4.3.2=py35_0
73 | - vs2015_runtime=14.0.25123=0
74 | - wcwidth=0.1.7=py35_0
75 | - wheel=0.29.0=py35_0
76 | - widgetsnbextension=2.0.0=py35_0
77 | - win_unicode_console=0.5=py35_0
78 | - zlib=1.2.8=vc14_3
79 | - pip:
80 |   - ipython-genutils==0.1.0
81 |   - jupyter-client==5.0.0
82 |   - jupyter-console==5.1.0
83 |   - jupyter-core==4.3.0
84 |   - keras==2.0.2
85 |   - nb-anacondacloud==1.2.0
86 |   - nb-conda==2.0.0
87 |   - nb-conda-kernels==2.0.0
88 |   - opencv-python==3.1.0.0
89 |   - prompt-toolkit==1.0.13
90 |   - protobuf==3.2.0
91 |   - tensorflow==1.0.1
92 |   - theano==0.9.0
93 |   - tqdm==4.11.2
94 |   - win-unicode-console==0.5
95 | 
96 | 
97 | 
--------------------------------------------------------------------------------
/requirements/aind-dog-mac.yml:
--------------------------------------------------------------------------------
1 | name: aind-dog
2 | channels:
3 | - damianavila82
4 | - defaults
5 | dependencies:
6 | - rise=4.0.0b1=py35_0
7 | - _license=1.1=py35_1
8 | - alabaster=0.7.10=py35_0
9 | - anaconda-client=1.6.2=py35_0
10 | - anaconda=custom=py35_0
11 | - anaconda-navigator=1.5.0=py35_0
12 | - anaconda-project=0.4.1=py35_0
13 | - appnope=0.1.0=py35_0
14 | - appscript=1.0.1=py35_0
15 | - astroid=1.4.9=py35_0
16 | - astropy=1.3=np112py35_0
17 | - babel=2.3.4=py35_0
18 | - backports=1.0=py35_0
19 | - beautifulsoup4=4.5.3=py35_0
20 | - bitarray=0.8.1=py35_0
21 | - blaze=0.10.1=py35_0
22 | - bleach=1.5.0=py35_0
23 | - bokeh=0.12.4=py35_0
24 | - boto=2.46.1=py35_0
25 | - bottleneck=1.2.0=np112py35_0
26 | - cffi=1.9.1=py35_0
27 | - chardet=2.3.0=py35_0
28 | - chest=0.2.3=py35_0
29 | - click=6.7=py35_0
30 | - cloudpickle=0.2.2=py35_0
31 | - clyent=1.2.2=py35_0
32 | - colorama=0.3.7=py35_0
33 | - configobj=5.0.6=py35_0
34 | - contextlib2=0.5.4=py35_0
35 | - cryptography=1.7.1=py35_0
36 | - curl=7.52.1=0
37 | - cycler=0.10.0=py35_0
38 | - cython=0.25.2=py35_0
39 | - cytoolz=0.8.2=py35_0
40 | - dask=0.14.0=py35_0
41 | - datashape=0.5.4=py35_0
42 | - decorator=4.0.11=py35_0
43 | - dill=0.2.5=py35_0
44 | - docutils=0.13.1=py35_0
45 | - entrypoints=0.2.2=py35_1
46 | - et_xmlfile=1.0.1=py35_0
47 | - fastcache=1.0.2=py35_1
48 | - flask=0.12=py35_0
49 | - flask-cors=3.0.2=py35_0
50 | - freetype=2.5.5=2
51 | - get_terminal_size=1.0.0=py35_0
52 | - gevent=1.2.1=py35_0
53 | - greenlet=0.4.12=py35_0
54 | - h5py=2.6.0=np112py35_2
55 | - hdf5=1.8.17=1
56 | - heapdict=1.0.0=py35_1
57 | - html5lib=0.999=py35_0
58 | - icu=54.1=0
59 | - idna=2.2=py35_0
60 | - imagesize=0.7.1=py35_0
61 | - ipykernel=4.5.2=py35_0
62 | - ipython=5.3.0=py35_0
63 | - ipython_genutils=0.1.0=py35_0
64 | - ipywidgets=6.0.0=py35_0
65 | - isort=4.2.5=py35_0
66 | - itsdangerous=0.24=py35_0
67 | - jbig=2.1=0
68 | - jdcal=1.3=py35_0
69 | - jedi=0.9.0=py35_1
70 | - jinja2=2.9.5=py35_0
71 | - jpeg=9b=0
72 | - jsonschema=2.5.1=py35_0
73 | - jupyter=1.0.0=py35_3
74 | - jupyter_client=5.0.0=py35_0
75 | - jupyter_console=5.1.0=py35_0
76 | - jupyter_core=4.3.0=py35_0
77 | - lazy-object-proxy=1.2.2=py35_0
78 | - libiconv=1.14=0
79 | - libpng=1.6.27=0
80 | - libtiff=4.0.6=3
81 | - libxml2=2.9.4=0
82 | - libxslt=1.1.29=0
83 | - llvmlite=0.16.0=py35_0
84 | - locket=0.2.0=py35_1
85 | - lxml=3.7.3=py35_0
86 | - markupsafe=0.23=py35_2
87 | - matplotlib=2.0.0=np112py35_0
88 | - mistune=0.7.4=py35_0
89 | - mkl=2017.0.1=0
90 | - mkl-service=1.1.2=py35_3
91 | - mpmath=0.19=py35_1
92 | - multipledispatch=0.4.9=py35_0
93 | - nbconvert=5.1.1=py35_0
94 | - nbformat=4.3.0=py35_0
95 | - networkx=1.11=py35_0
96 | - nltk=3.2.2=py35_0
97 | - nose=1.3.7=py35_1
98 | - notebook=4.4.1=py35_0
99 | - numba=0.31.0=np112py35_0
100 | - numexpr=2.6.2=np112py35_0
101 | - numpy=1.12.0=py35_0
102 | - numpydoc=0.6.0=py35_0
103 | - odo=0.5.0=py35_1
104 | - olefile=0.44=py35_0
105 | - openpyxl=2.4.1=py35_0
106 | - openssl=1.0.2k=0
107 | - pandas=0.19.2=np112py35_1
108 | - pandocfilters=1.4.1=py35_0
109 | - partd=0.3.7=py35_0
110 | - path.py=10.1=py35_0
111 | - pathlib2=2.2.0=py35_0
112 | - patsy=0.4.1=py35_0
113 | - pep8=1.7.0=py35_0
114 | - pexpect=4.2.1=py35_0
115 | - pickleshare=0.7.4=py35_0
116 | - pillow=4.0.0=py35_1
117 | - pip=9.0.1=py35_1
118 | - ply=3.10=py35_0
119 | - prompt_toolkit=1.0.13=py35_0
120 | - psutil=5.2.0=py35_0
121 | - ptyprocess=0.5.1=py35_0
122 | - py=1.4.32=py35_0
123 | - pyasn1=0.2.3=py35_0
124 | - pycosat=0.6.1=py35_1
125 | - pycparser=2.17=py35_0
126 | - pycrypto=2.6.1=py35_4
127 | - pycurl=7.43.0=py35_2
128 | - pyflakes=1.5.0=py35_0
129 | - pygments=2.2.0=py35_0
130 | - pylint=1.6.4=py35_1
131 | - pyopenssl=16.2.0=py35_0
132 | - pyparsing=2.1.4=py35_0
133 | - pyqt=5.6.0=py35_2
134 | - pytables=3.3.0=np112py35_0
135 | - pytest=3.0.6=py35_0
136 | - python=3.5.3=1
137 | - python-dateutil=2.6.0=py35_0
138 | - python.app=1.2=py35_4
139 | - pytz=2016.10=py35_0
140 | - pyyaml=3.12=py35_0
141 | - pyzmq=16.0.2=py35_0
142 | - qt=5.6.2=0
143 | - qtawesome=0.4.4=py35_0
144 | - qtconsole=4.2.1=py35_1
145 | - qtpy=1.2.1=py35_0
146 | - readline=6.2=2
147 | - redis=3.2.0=0
148 | - redis-py=2.10.5=py35_0
149 | - requests=2.13.0=py35_0
150 | - rope=0.9.4=py35_1
151 | - ruamel_yaml=0.11.14=py35_1
152 | - scikit-image=0.12.3=np112py35_1
153 | - scikit-learn=0.18.1=np112py35_1
154 | - scipy=0.19.0=np112py35_0
155 | - seaborn=0.7.1=py35_0
156 | - setuptools=27.2.0=py35_0
157 | - simplegeneric=0.8.1=py35_1
158 | - singledispatch=3.4.0.3=py35_0
159 | - sip=4.18=py35_0
160 | - six=1.10.0=py35_0
161 | - snowballstemmer=1.2.1=py35_0
162 | - sockjs-tornado=1.0.3=py35_0
163 | - sphinx=1.5.1=py35_0
164 | - spyder=3.1.3=py35_0
165 | - sqlalchemy=1.1.6=py35_0
166 | - sqlite=3.13.0=0
167 | - statsmodels=0.8.0=np112py35_0
168 | - sympy=1.0=py35_0
169 | - terminado=0.6=py35_0
170 | - testpath=0.3=py35_0
171 | - tk=8.5.18=0
172 | - toolz=0.8.2=py35_0
173 | - tornado=4.4.2=py35_0
174 | - traitlets=4.3.2=py35_0
175 | - unicodecsv=0.14.1=py35_0
176 | - wcwidth=0.1.7=py35_0
177 | - werkzeug=0.12=py35_0
178 | - wheel=0.29.0=py35_0
179 | - widgetsnbextension=2.0.0=py35_0
180 | - wrapt=1.10.8=py35_0
181 | - xlrd=1.0.0=py35_0
182 | - xlsxwriter=0.9.6=py35_0
183 | - xlwings=0.10.2=py35_0
184 | - xlwt=1.2.0=py35_0
185 | - xz=5.2.2=1
186 | - yaml=0.1.6=0
187 | - zlib=1.2.8=3
188 | - pip:
189 |   - backports.shutil-get-terminal-size==1.0.0
190 |   - cvxopt==1.1.9
191 |   - et-xmlfile==1.0.1
192 |   - ipython-genutils==0.1.0
193 |   - jupyter-client==5.0.0
194 |   - jupyter-console==5.1.0
195 |   - jupyter-core==4.3.0
196 |   - keras==2.0.0
197 |   - opencv-python==3.2.0.6
198 |   - prompt-toolkit==1.0.13
199 |   - protobuf==3.2.0
200 |   - rope-py3k==0.9.4.post1
201 |   - tables==3.3.0
202 |   - tensorflow==1.0.0
203 |   - theano==0.8.2
204 |   - tqdm==4.11.2
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | [//]: # (Image References)
2 | 
3 | [image1]: ./images/sample_dog_output.png "Sample Output"
4 | [image2]: ./images/vgg16_model.png "VGG-16 Model Keras Layers"
5 | [image3]: ./images/vgg16_model_draw.png "VGG16 Model Figure"
6 | 
7 | 
8 | ## Project Overview
9 | 
10 | Welcome to the Convolutional Neural Networks (CNN) project in the AI Nanodegree! In this project, you will learn how to build a pipeline that can be used within a web or mobile app to process real-world, user-supplied images. Given an image of a dog, your algorithm will estimate the canine’s breed. If supplied an image of a human, the code will identify the resembling dog breed.
11 | 
12 | ![Sample Output][image1]
13 | 
14 | Along with exploring state-of-the-art CNN models for classification, you will make important design decisions about the user experience for your app. Our goal is that by completing this lab, you understand the challenges involved in piecing together a series of models designed to perform various tasks in a data processing pipeline. Each model has its strengths and weaknesses, and engineering a real-world application often involves solving many problems without a perfect answer. Your imperfect solution will nonetheless create a fun user experience!
15 | 
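The finished pipeline reduces to a small dispatch routine. A minimal sketch of the idea (all names are placeholders: `face_detector` and `dog_detector` are the detectors built in Steps 1 and 2 of the notebook, and `predict_breed` stands in for the Step 5 CNN):

```
def run_app(img_path):
    # Dog photos get a predicted breed, human photos get a resembling
    # breed, and anything else is reported as an error.
    if dog_detector(img_path):
        return "Predicted breed: {}".format(predict_breed(img_path))
    elif face_detector(img_path):
        return "This human resembles a {}.".format(predict_breed(img_path))
    else:
        return "Error: neither a dog nor a human face was detected."
```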
16 | 
17 | ## Project Instructions
18 | 
19 | ### Instructions
20 | 
21 | 1. Clone the repository and navigate to the downloaded folder.
22 | 
23 | ```
24 | git clone https://github.com/udacity/dog-project.git
25 | cd dog-project
26 | ```
27 | 2. Download the [dog dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip). Unzip the folder and place it in the repo, at location `path/to/dog-project/dogImages`.
28 | 3. Download the [human dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/lfw.zip). Unzip the folder and place it in the repo, at location `path/to/dog-project/lfw`. If you are using a Windows machine, you are encouraged to use [7zip](http://www.7-zip.org/) to extract the folder.
29 | 4. Download the [VGG-16 bottleneck features](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogVGG16Data.npz) for the dog dataset. Place it in the repo, at location `path/to/dog-project/bottleneck_features`.
30 | 5. Obtain the necessary Python packages, and switch the Keras backend to TensorFlow.
31 | 
32 | For __Mac/OSX__:
33 | ```
34 | conda env create -f requirements/aind-dog-mac.yml
35 | source activate aind-dog
36 | KERAS_BACKEND=tensorflow python -c "from keras import backend"
37 | ```
38 | 
39 | For __Linux__:
40 | ```
41 | conda env create -f requirements/aind-dog-linux.yml
42 | source activate aind-dog
43 | KERAS_BACKEND=tensorflow python -c "from keras import backend"
44 | ```
45 | 
46 | For __Windows__:
47 | ```
48 | conda env create -f requirements/aind-dog-windows.yml
49 | activate aind-dog
50 | set KERAS_BACKEND=tensorflow
51 | python -c "from keras import backend"
52 | ```
53 | 6. Open the notebook and follow the instructions.
54 | 
55 | ```
56 | jupyter notebook dog_app.ipynb
57 | ```
58 | 
59 | __NOTE:__ While some code has already been implemented to get you started, you will need to implement additional functionality to successfully answer all of the questions included in the notebook. __Unless requested, do not modify code that has already been included.__
60 | 
61 | 
62 | ## Amazon Web Services
63 | 
64 | Instead of training your model on a local CPU (or GPU), you could use Amazon Web Services to launch an EC2 GPU instance. Please refer to the Udacity instructions for setting up a GPU instance for this project. ([link for AIND students](https://classroom.udacity.com/nanodegrees/nd889/parts/16cf5df5-73f0-4afa-93a9-de5974257236/modules/53b2a19e-4e29-4ae7-aaf2-33d195dbdeba/lessons/2df3b94c-4f09-476a-8397-e8841b147f84/project), [link for MLND students](https://classroom.udacity.com/nanodegrees/nd009/parts/99115afc-e849-48cf-a580-cb22eea2ba1b/modules/777db663-2b0d-4040-9ae4-bf8c6ab8f157/lessons/a088c519-05af-4589-a1e2-2c484b1268ef/project))
65 | 
66 | 
67 | ## Evaluation
68 | 
69 | Your project will be reviewed by a Udacity reviewer against the CNN project [rubric](#rubric). Review this rubric thoroughly, and self-evaluate your project before submission. All criteria found in the rubric must meet specifications for you to pass.
70 | 
71 | 
72 | ## Project Submission
73 | 
74 | When you are ready to submit your project, collect the following files and compress them into a single archive for upload:
75 | - The `dog_app.ipynb` file with fully functional code, all code cells executed and displaying output, and all questions answered.
76 | - An HTML or PDF export of the project notebook with the name `report.html` or `report.pdf`.
77 | - Any additional images used for the project that were not supplied to you. __Please do not include the project data sets in the `dogImages/` or `lfw/` folders. Likewise, please do not include the `bottleneck_features/` folder.__
78 | 
79 | Alternatively, your submission could consist of the GitHub link to your repository.
80 | 
81 | 
82 | 
83 | ## Project Rubric
84 | 
85 | #### Files Submitted
86 | 
87 | | Criteria | Meets Specifications |
88 | |:---------------------:|:---------------------------------------------------------:|
89 | | Submission Files | The submission includes all required files. |
90 | 
91 | #### Documentation
92 | 
93 | | Criteria | Meets Specifications |
94 | |:---------------------:|:---------------------------------------------------------:|
95 | | Comments | The submission includes comments that describe the functionality of the code. |
96 | 
97 | #### Step 1: Detect Humans
98 | 
99 | | Criteria | Meets Specifications |
100 | |:---------------------:|:---------------------------------------------------------:|
101 | | __Question 1:__ Assess the Human Face Detector | The submission returns the percentage of the first 100 images in the dog and human face datasets with a detected human face. |
102 | | __Question 2:__ Assess the Human Face Detector | The submission opines whether Haar cascades for face detection are an appropriate technique for human detection. |
103 | 
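For Questions 1 and 2, a Haar-cascade face detector along these lines is the natural baseline. A minimal sketch, assuming OpenCV and a local copy of a pretrained cascade file (the XML path and the `human_files` array are illustrative):

```
import cv2

# Load a pretrained Haar cascade for frontal faces; point this at
# whichever cascade XML file ships with your OpenCV install.
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')

def face_detector(img_path):
    # Haar cascades operate on grayscale images.
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0

# Percentage of the first 100 human images with a detected face, e.g.:
# 100 * sum(face_detector(f) for f in human_files[:100]) / 100
```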
| 96 | 97 | #### Step 1: Detect Humans 98 | 99 | | Criteria | Meets Specifications | 100 | |:---------------------:|:---------------------------------------------------------:| 101 | | __Question 1:__ Assess the Human Face Detector | The submission returns the percentage of the first 100 images in the dog and human face datasets with a detected human face. | 102 | | __Question 2:__ Assess the Human Face Detector | The submission opines whether Haar cascades for face detection are an appropriate technique for human detection. | 103 | 104 | #### Step 2: Detect Dogs 105 | 106 | | Criteria | Meets Specifications | 107 | |:---------------------:|:---------------------------------------------------------:| 108 | | __Question 3:__ Assess the Dog Detector | The submission returns the percentage of the first 100 images in the dog and human face datasets with a detected dog. | 109 | 110 | #### Step 3: Create a CNN to Classify Dog Breeds (from Scratch) 111 | 112 | | Criteria | Meets Specifications | 113 | |:---------------------:|:---------------------------------------------------------:| 114 | | Model Architecture | The submission specifies a CNN architecture. | 115 | | Train the Model | The submission specifies the number of epochs used to train the algorithm. | 116 | | Test the Model | The trained model attains at least 1% accuracy on the test set. | 117 | 118 | 119 | #### Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning) 120 | 121 | | Criteria | Meets Specifications | 122 | |:---------------------:|:---------------------------------------------------------:| 123 | | Obtain Bottleneck Features | The submission downloads the bottleneck features corresponding to one of the Keras pre-trained models (VGG-19, ResNet-50, Inception, or Xception). | 124 | | Model Architecture | The submission specifies a model architecture. | 125 | | __Question 5__: Model Architecture | The submission details why the chosen architecture succeeded in the classification task and why earlier attempts were not as successful. | 126 | | Compile the Model | The submission compiles the architecture by specifying the loss function and optimizer. | 127 | | Train the Model | The submission uses model checkpointing to train the model and saves the model with the best validation loss. | 128 | | Load the Model with the Best Validation Loss | The submission loads the model weights that attained the least validation loss. | 129 | | Test the Model | Accuracy on the test set is 60% or greater. | 130 | | Predict Dog Breed with the Model | The submission includes a function that takes a file path to an image as input and returns the dog breed that is predicted by the CNN. | 131 | 132 | 133 | #### Step 6: Write your Algorithm 134 | 135 | | Criteria | Meets Specifications | 136 | |:---------------------:|:---------------------------------------------------------:| 137 | | Write your Algorithm | The submission uses the CNN from Step 5 to detect dog breed. The submission has different output for each detected image type (dog, human, other) and provides either predicted actual (or resembling) dog breed. | 138 | 139 | #### Step 7: Test your Algorithm 140 | | Criteria | Meets Specifications | 141 | |:---------------------:|:---------------------------------------------------------:| 142 | | Test Your Algorithm on Sample Images! | The submission tests at least 6 images, including at least two human and two dog images. | 143 | | __Question 6__: Test Your Algorithm on Sample Images! 
133 | #### Step 6: Write your Algorithm
134 | 
135 | | Criteria | Meets Specifications |
136 | |:---------------------:|:---------------------------------------------------------:|
137 | | Write your Algorithm | The submission uses the CNN from Step 5 to detect dog breed. The submission has different output for each detected image type (dog, human, other) and provides either the predicted actual (or resembling) dog breed. |
138 | 
139 | #### Step 7: Test your Algorithm
140 | | Criteria | Meets Specifications |
141 | |:---------------------:|:---------------------------------------------------------:|
142 | | Test Your Algorithm on Sample Images! | The submission tests at least 6 images, including at least two human and two dog images. |
143 | | __Question 6__: Test Your Algorithm on Sample Images! | The submission discusses performance of the algorithm and discusses at least three possible points of improvement. |
144 | 
145 | ## Suggestions to Make your Project Stand Out!
146 | 
147 | (Presented in no particular order ...)
148 | 
149 | #### (1) Augment the Training Data
150 | 
151 | [Augmenting the training and/or validation set](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html) might help improve model performance.
152 | 
153 | #### (2) Turn your Algorithm into a Web App
154 | 
155 | Turn your code into a web app using [Flask](http://flask.pocoo.org/) or [web.py](http://webpy.org/docs/0.3/tutorial)!
156 | 
157 | #### (3) Overlay Dog Ears on Detected Human Heads
158 | 
159 | Overlay a Snapchat-like filter with dog ears on detected human heads. You can determine where to place the ears by using the OpenCV face detector, which returns a bounding box for the face. If you would also like to overlay a dog nose filter, some nice tutorials for facial keypoints detection exist [here](https://www.kaggle.com/c/facial-keypoints-detection/details/deep-learning-tutorial).
160 | 
161 | #### (4) Add Functionality for Dog Mutts
162 | 
163 | Currently, if a dog appears 51% German Shepherd and 49% Poodle, only the German Shepherd breed is returned. The algorithm is currently guaranteed to fail for every mixed-breed dog. Of course, if a dog is predicted as 99.5% Labrador, it is still worthwhile to round this to 100% and return a single breed; so, you will have to find a nice balance. (One possible approach is sketched at the end of this document.)
164 | 
165 | #### (5) Experiment with Multiple Dog/Human Detectors
166 | 
167 | Perform a systematic evaluation of various methods for detecting humans and dogs in images. Provide an improved methodology for the `face_detector` and `dog_detector` functions.
168 | 
--------------------------------------------------------------------------------
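Appendix to suggestion (4): one way to handle mutts is to report the top few breeds with their probabilities instead of a single argmax. A minimal sketch, assuming `model`, `dog_names`, `extract_VGG16`, and `path_to_tensor` as defined earlier (the example output is illustrative):

```
import numpy as np

def predict_top_breeds(img_path, k=3):
    # Softmax probabilities over the 133 breeds for one image.
    probs = model.predict(extract_VGG16(path_to_tensor(img_path)))[0]
    # Indices of the k largest probabilities, highest first.
    top = np.argsort(probs)[::-1][:k]
    return [(dog_names[i], float(probs[i])) for i in top]

# e.g. [('German_shepherd_dog', 0.51), ('Poodle', 0.49), ...]
```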