├── .gitattributes ├── Notice&Guidance.md ├── OVF-ReleaseNote.md ├── OtherResources.md ├── README.md ├── lec1 ├── TA's-Tensorflow-notes-1.md └── pic │ ├── Image_001.png │ ├── Image_002.png │ ├── Image_003.png │ ├── Image_004.png │ ├── Image_005.jpg │ ├── Image_006.jpg │ ├── Image_007.jpg │ ├── Image_008.png │ ├── Image_009.jpg │ ├── Image_010.png │ └── Image_011.png ├── lec2 ├── TA's-Tensorflow-notes-2.md ├── a.py ├── animal.py ├── b.py ├── c.py ├── pic │ ├── Image_001.jpg │ ├── Image_002.jpg │ ├── Image_003.jpg │ ├── Image_004.jpg │ ├── Image_008.jpg │ ├── Image_009.jpg │ ├── Image_010.png │ ├── Image_011.jpg │ ├── Image_012.jpg │ ├── Image_013.jpg │ ├── Image_014.jpg │ ├── Image_015.jpg │ ├── Image_016.jpg │ ├── Image_017.jpg │ ├── Image_018.jpg │ ├── Image_019.jpg │ ├── Image_020.png │ ├── Image_021.png │ ├── Image_022.png │ ├── Image_023.jpg │ ├── Image_024.jpg │ ├── Image_025.jpg │ ├── Image_026.jpg │ ├── Image_027.jpg │ ├── Image_028.jpg │ └── Image_029.jpg ├── save.dat └── tf3_1.py ├── lec3 ├── TA's-Tensorflow-notes-3.md ├── pic │ ├── eq1.svg │ ├── eq10.svg │ ├── eq11.svg │ ├── eq12.svg │ ├── eq13.svg │ ├── eq14.svg │ ├── eq15.svg │ ├── eq16.svg │ ├── eq17.svg │ ├── eq2.svg │ ├── eq3.svg │ ├── eq4.svg │ ├── eq5.svg │ ├── eq6.svg │ ├── eq7.svg │ ├── eq8.svg │ ├── eq9.svg │ ├── img1.svg │ ├── img2.svg │ ├── img3.svg │ ├── in-eq1.svg │ ├── in-eq2.svg │ ├── in-eq3.svg │ ├── in-sym-al.svg │ ├── in-sym-b-L.svg │ ├── in-sym-bl.svg │ ├── in-sym-detl.svg │ ├── in-sym-w-L.svg │ ├── in-sym-wl.svg │ ├── in-sym-zl.svg │ ├── in-sym1.svg │ ├── in-sym2.svg │ └── sym1.svg ├── tf3_1.py ├── tf3_2.py ├── tf3_3.py ├── tf3_4.py ├── tf3_5.py └── tf3_6.py ├── lec4 ├── TA's-Tensorflow-notes-4.md ├── opt4_1.py ├── opt4_2.py ├── opt4_3.py ├── opt4_4-1.py ├── opt4_4-2.py ├── opt4_4.py ├── opt4_5.py ├── opt4_6.py ├── opt4_7.py ├── opt4_8_backward.py ├── opt4_8_forward.py ├── opt4_8_generateds.py └── pic │ ├── 4.4-1.svg │ ├── 4.4-2.svg │ ├── 4.4-3.svg │ ├── 4.8.svg │ ├── ReLU.svg │ ├── 
eq-01.svg │ ├── eq-02.svg │ ├── eq-03.svg │ ├── eq-04.svg │ ├── eq-05.svg │ ├── eq-relu.svg │ ├── eq-sigmod.svg │ ├── eq-tanh.svg │ ├── eq1-MSE.svg │ ├── img01.svg │ ├── in-eq-01.svg │ ├── in-eq-02.svg │ ├── in-eq-03.svg │ ├── in-eq-04.svg │ ├── in-eq-05.svg │ ├── in-eq-06.svg │ ├── in-eq-07.svg │ ├── in-eq-08.svg │ ├── in-eq-09.svg │ ├── in-eq-loss.svg │ ├── loss.svg │ ├── sigmod.svg │ └── tanh.svg ├── lec5 ├── TA's-Tensorflow-notes-5.md ├── mnist_backward.py ├── mnist_forward.py └── mnist_test.py ├── lec6 ├── fc2 │ ├── mnist_backward.py │ ├── mnist_forward.py │ └── mnist_test.py ├── fc3 │ ├── mnist_app.py │ ├── mnist_backward.py │ ├── mnist_forward.py │ └── mnist_test.py └── fc4 │ ├── mnist_app.py │ ├── mnist_backward.py │ ├── mnist_forward.py │ ├── mnist_generateds.py │ └── mnist_test.py ├── lec7 ├── mnist_lenet5_backward.py ├── mnist_lenet5_forward.py └── mnist_lenet5_test.py ├── lec8 ├── Nclasses.py ├── app.py ├── utils.py └── vgg16.py └── pic ├── lec8.jpg ├── vm1.jpg ├── vm2.jpg ├── vm3.jpg ├── vm4.jpg ├── vm5.jpg ├── vm6.jpg └── vm7.jpg /.gitattributes: -------------------------------------------------------------------------------- 1 | # Auto detect text files and perform LF normalization 2 | * text=auto 3 | -------------------------------------------------------------------------------- /Notice&Guidance.md: -------------------------------------------------------------------------------- 1 | 课程公告与导学 2 | ==== 3 | 4 | ## 有空时,可以看看电影《人工智能》 5 | 6 | 《人工智能》是由华纳兄弟影片公司于 2001 年拍摄发行的一部未来派的科幻类电影。 7 | 8 | 有空可以看看 2000 的人工智能幻想: 9 | - [人工智能 (豆瓣)](https://movie.douban.com/subject/1302827/) 10 | - [人工智能(电影) - 百度百科](https://baike.baidu.com/item/人工智能/3751704#viewPageContent) 11 | 12 | > 2018年11月26日 11:26 13 | 14 | 15 | ## 第一讲导学 16 | 17 | 欢迎来听 Tensorflow 笔记! 
18 | 19 | 课时安排: 20 | - 1.1 概述 21 | - 1.2、1.3、1.4 分别给出了三种 TensorFlow 的安装方法,**请选择其一,配置你的电脑。** 22 | 23 | 课程**每周六 10:00AM**更新 24 | 25 | > 2018年12月03日 20:34 26 | 27 | 28 | ## Windows Anaconda TensorFlow 安装视频已发布 29 | 30 | 为方便同学们学习,实验室继双系统 Linux、虚拟机 Linux 和 Mac 上安装 TensorFlow 的视频教程后,在第一章第五节新发布了 Windows 系统直接使用 Anaconda 安装 TensorFlow 的视频教程,为同学们安装环境提供更多选择。 31 | 32 | > 2018年12月10日 12:50 33 | 34 | 35 | ## 第二讲导学 36 | 37 | 欢迎来听 Tensorflow 笔记! 38 | 39 | 本节从 Hello World 开始,50 分钟梳理完 python 的常用语法。这些语法可以帮助你读懂后续课程的 Tensorflow 代码。 40 | 41 | 对于已经掌握 Python 语法的同学,可以跳过视频讲解,直接查看“助教的 Tensorflow 笔记 2”,重温一下当年入坑时的轻松与快乐。 42 | 43 | 课时安排: 44 | - 2.1 Linux 指令、Hello World 45 | - 2.2 列表、元组、字典 46 | - 2.3 条件语句 47 | - 2.4 循环语句 48 | - 2.5 turtle 模块 49 | - 2.6 函数、模块、包 50 | - 2.7 类、对象、面向对象的编程 51 | - 2.8 文件操作 52 | 53 | 参考代码:https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/python.zip 54 | 55 | 请复现课上所有操作。 56 | 要求在下周 Tensorflow 学习前,可以借助搜索引擎读懂源码,请完成 `tf3_1.py` 的逐行注释。 57 | (遇到问题请利用百度搜索关键词,记到笔记中,提升自己对陌生语法的学习能力) 58 | 59 | **勘误:** 60 | 61 | 视频 2.7- 类、对象、面向对象的编程 5分51秒 至 7分10秒 投影的第三行 62 | 63 | `print "kitty.spots" # 打印出10` 64 | 65 | 应该无前后双引号: 66 | 67 | `print kitty.spots` 68 | 69 | > 2018年12月14日 10:40 70 | 71 | 72 | ## 第三讲导学 73 | 74 | 欢迎来听 Tensorflow 笔记! 75 | 76 | 本节首先介绍张量、计算图和会话;随后讲解前向传播和反向传播的实现方法;最后给出神经网络的搭建八股。 77 | 78 | 课时安排: 79 | - 3.1 张量、计算图、会话 80 | - 3.2 前向传播 81 | - 3.3 反向传播 82 | 83 | 参考代码:https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/tf.zip 84 | 85 | 请复现课上所有操作,体会前向传播搭建网络,反向传播优化参数的过程,记忆 `tf3_6.py` 源代码。 86 | 87 | > 2018年12月16日 21:51 88 | 89 | 90 | ## 第四讲导学 91 | 92 | 欢迎来听 Tensorflow 笔记! 
93 | 94 | 本节讲解神经网络的优化,包括损失函数、学习率、滑动平均和正则化。最后给出了模块化搭建神经网络的八股。 95 | 96 | 课时安排: 97 | - 4.1 损失函数 98 | - 4.2 学习率 99 | - 4.3 滑动平均 100 | - 4.4 正则化 101 | - 4.5 神经网络搭建八股 102 | 103 | 参考代码:https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/opt.zip   104 | 105 | 请复现课上所有操作,尝试更改参考代码中的超参数和反向传播优化方法,感受超参数和优化方法对结果的影响,领会神经网络优化。 106 | 107 | > 2018年12月22日 21:56 108 | 109 | 110 | ## 第五讲导学 111 | 112 | 欢迎来听 Tensorflow 笔记! 113 | 114 | 本节讲解 MNIST 数据集,并利用 MNIST 数据集巩固模块化搭建神经网路的八股,实践前向传播和反向传播过程,编写测试程序输出手写数字识别准确率。 115 | 116 | 课时安排: 117 | - 5.1 MNIST 数据集 118 | - 5.2 模块化搭建神经网络 119 | - 5.3 手写数字识别准确率输出 120 | 121 | 参考代码:https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/fc1.zip   122 | 123 | 请实践 `fc1.zip` 中的所有代码,观察 `mnist_backward.py` 中 `loss` 减小、`mnist_test.py` 中准确率提升的过程。 124 | 用3.2节中的变量初始化方法,修改 `mnist_forward.py` 中 `w` 和 `b` 的初始化方法、修改隐藏层节点个数和隐藏层层数,修改 `mnist_backward.py` 代码中的超参数,找出最快提升准确度的“全连接网络”解决方案并在讨论区中把结果分享给大家。 125 | **比一比谁的手写数字识别准确率更高。** 126 | 127 | (一定自己跑代码,多实践才能发现规律,才会有所提升。加油!) 128 | 129 | > 2018年12月29日 12:11 130 | 131 | 132 | ## 第六讲导学 133 | 134 | 欢迎来听 Tensorflow 笔记! 
135 | 136 | 本节讲解如何对输入的手写数字图片输出识别结果,并教大家制作自己的数据集实现特定应用。请将课程提供的方法,应用到你所在的领域,尝试解决实际问题。 137 | 138 | 课时安排: 139 | - 6.1 输入手写数字图片输出识别结果 140 | - 6.2 制作数据集 141 | 142 | 参考代码: 143 | - [fc2.zip](https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/fc2.zip) 144 | - [fc3.zip](https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/fc3.zip) 145 | - [fc4.z01](https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/fc4.z01) 146 | - [fc4.z02](https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/fc4.z02) 147 | - [fc4.z03](https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/fc4.z03) 148 | - [fc4.zip](https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/fc4.zip) 149 | 150 | `fc2` 在 `fc1` 的基础上增加了“断点续训”功能; 151 | 152 | `fc3` 在 `fc2` 的基础上增加了应用程序,实现了输入手写数字图片输出识别结果; 153 | 154 | `fc4` 在 `fc3` 的基础上增加了数据集生成程序,实现了把 7 万张手写数字图片制作成数据集和标签,程序可以用自制数据集和标签训练模型并输出手写数字图片的识别结果。(内含数据集原始图片,须由 `fc4.z01/fc4.z02/fc4.z03/fc4.zip` 合成) 155 | 156 | 至此,已讲完全连接网络的全部内容,代码量和难度均有提升,请实践 `fc2/fc3/fc4` 的所有代码。尝试将课程提供的方法应用到其他数据集;尝试将你所在领域的已有数据,制作成数据集和标签,实现特定应用。 157 | 158 | **本节课后,安排了期中项目实践。请通过 MNIST 数据集训练全连接网络,识别 pic 文件夹中的十张手写数字图片,把识别结果填入考试选项。** 159 | 160 | **手写数字图片**:https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/num.zip 161 | 162 | **期中考试**:全连接网络实践:用全连接网络,识别手写数字图片 163 | 164 | (课程已进入实践应用环节,难度逐步增大,学会举一反三,学以致用。希望课程代码八股对支撑你的研究有所启发和帮助。) 165 | 166 | > 2019年01月05日 10:11 167 | 168 | 169 | ## 第七讲导学 170 | 171 | 欢迎来听 Tensorflow 笔记! 172 | 173 | 本节介绍卷积神经网络,并以 lenet5 为例讲解卷积神经网络的搭建方法。 175 | 176 | 课时安排: 177 | - 7.1 卷积神经网络 178 | - 7.2 lenet5 代码讲解 179 | 180 | 参考代码: 181 | https://github.com/cj0012/AI-Practice-Tensorflow-Notes/blob/master/lenet5.zip 182 | 183 | > 2019年01月12日 11:49 184 | 185 | 186 | ## 第八讲导学 187 | 188 | 欢迎来听 Tensorflow 笔记! 
188 | 189 | 本讲你将学会使用卷积神经网络,实现图片识别。课上以 VGG16 神经网络为例,讲解复现已有神经网络的方法;课下请编写代码复现 VGG16 网络,识别图片 0 至图片 9,并在期末考试“卷积网络实践”中填入使用自己复现的 VGG16 神经网络对图片 0 至图片 9 的识别结果。 190 | 191 | ![](./pic/lec8.jpg) 192 | 193 | 课时安排: 194 | - 8.1 复现已有的卷积神经网络 195 | - 8.2 用 vgg16 实现图片识别 196 | 197 | 参考代码:(内含VGG16的模型参数和待识别图片,文件约500M) 198 | 链接:https://pan.baidu.com/s/1WWNoY-ahajm2qkcCeNNgqg  密码:52b2 199 | 200 | > 2019年01月19日 10:10 201 | 202 | 203 | 204 | ## 关于学期成绩 205 | 206 | 为巩固学习效果,课程在每讲后安排了测验;在第六讲、第七讲和第八讲后分别安排了期中考试、互评作业和期末考试。 207 | 其中测验和作业答题后会给出解析提示对错,不计入学期成绩,分值为 0。学期成绩共 100 分,期中考试占 50 分,期末考试占 50 分。 208 | 209 | > 2019年01月22日 19:40 210 | 211 | 212 | ## 第九讲导学 213 | 214 | 欢迎来听 Tensorflow 笔记! 215 | 216 | 经过本期的学习大家已经掌握了使用 Tensorflow 搭建神经网络的方法,接下来需要大量阅读已有网络,感受不同网络的功能,并动手复现一些网络实现特定应用。 217 | 218 | 本节推送了2018年北京大学软件与微电子学院开设《人工智能实践》课的期末项目,分享了同学们学习神经网络一学期的成果,供大家扩展思路学习交流。 219 | 其中部分课程项目的源代码参考了互联网,学生在课程项目中完成了神经网络的应用实践、代码讲解与效果展示。 220 | 221 | 222 | 课时安排: 223 | - 9.1 **真实复杂场景手写英文体识别** 224 | 在纸上写一段英文,拍照并输入给神经网络,神经网络输出这篇英文文本,应用场景如高考阅卷和手写体笔记的电子化。 225 | - 9.2 **二值神经网络实现 MNIST 手写数字识别** 226 | 将传统神经网络的权重和激活值二值化,把矩阵运算转变为异或非运算,减少运算量和内存占用,为神经网络在可穿戴设备部署提供可能。 227 | - 9.3 **车牌号码识别** 228 | 输入含车牌的视频,实现车牌号码实时识别,可用于违章监控、安防等领域。 229 | - 9.4 **人脸表情识别** 230 | 输入人脸图片,自动识别表情,可用于面部表情识别,实时危机检测等场景。 231 | - 9.5 **实时目标检测、识别、计数和追踪** 232 | 模型具备高fps和mAP,可实现实时目标检测、识别、计数和追踪。 233 | - 9.6 **图片自动上色** 234 | 输入漫画图片,自动上色并输出彩色图片,可以用于辅助漫画家或设计人员工作。 235 | - 9.7 **图像风格融合与快速迁移** 236 | 给定20张风格图片,训练一个包含20种风格的图像生成网络。对该图像生成网络输入1张内容图片,选定4种风格,实现4种风格不同层次的融合,并迁移到该内容图片上。 237 | - 9.8 **图像中文描述** 238 | 输入一张图片,输出图片的中文描述,输出的句子符合自然语言习惯,点明图片中的重要信息,涵盖主要人物、场景、动作等内容。 239 | - 9.9 **跨模态检索** 240 | 利用 VGG16 提取图像特征、GloVe 提取文本特征,引入迁移学习模态对抗网络,学习共同表征空间,跨越语义鸿沟,实现通过图像检索文本和通过文本检索图像的跨模态检索。 241 | - 9.10 **强化学习实现“不死鸟” FlappyBird** 242 | 通过 DQN 算法,让机器学习玩 FlappyBird,最终使小鸟自由越过障碍物,实现永生。 243 | 244 | > 2019年01月23日 16:34 245 | 246 | -------------------------------------------------------------------------------- /OVF-ReleaseNote.md: 
-------------------------------------------------------------------------------- 1 | 2 | ## 吐槽+风险声明 3 | 4 | 虚拟机毕竟不是沙盒,还是有从里面突破的可能,**所以使用/导入“来源不明”的虚拟机是有风险的。** 5 | 6 | 我发布的这个虚拟机也算是“来源不明”的。大家想用就用,用的时候当它里面有病毒就行。不要从里面复制东西出来,尤其不要执行从里面复制出的可执行文件。代码都只在虚拟机里面操作,内外的交流尽可能少就行。 7 | 8 | **注意一下:也不要在虚拟机里面登录 MOOC 网站,毕竟里面算是不安全的环境。** 9 | 10 | 交作业的时候从里面 copy 出代码,再上传 MOOC 网站,如果只有文本的交互,还是比较安全的。 11 | 12 | 13 | **时刻保持警惕,应该就没什么问题了。** 14 | 15 | 16 | 目前课程团队还没有发布他们的虚拟机,所以我先抛砖引玉一下。有了官方的虚拟机,这个虚拟机我会删掉的,之后大家也就不要再传播了。 17 | 18 | 19 | ---- 20 | 21 | 已经导出为 `.ovf` 格式了,理论上 VMware 与 VirtualBox 都能用。我使用 VMware 按照课程的视频配置的环境。 22 | 23 | ## 账户信息 24 | 25 | * username:ailab 26 | * password:ailab 27 | 28 | ~~使用这个需要 VMware, VMPlayer 也行~~ 29 | 支持 OVF 格式的都行 30 | 31 | 32 | ## 下载地址: 33 | 34 | - 链接:https://pan.baidu.com/s/1ZOG4fBT7Y9SjA7hcd72Stw  提取码:ny4p 35 | 36 | 37 | ## 导入虚拟机步骤 38 | 39 | 1. 打开 VMware 40 | 41 | 2. 选择“打开…” 42 | ![](./pic/vm1.jpg) 43 | 44 | 3. 找到解压完的文件夹,选择 .ovf 文件 45 | ![](./pic/vm2.jpg) 46 | 47 | 4. 导入前可以更改默认存储路径 48 | ![](./pic/vm3.jpg) 49 | 50 | 5. 等待导入完毕 51 | 52 | 6. 导入完成。可以直接打开虚拟机或者修改虚拟机配置(主要是改 CPU 核数与内存大小) 53 | ![](./pic/vm4.jpg) 54 | 55 | 7. 运行虚拟机。用户名密码都是:**ailab** 56 | ![](./pic/vm5.jpg) 57 | 58 | 8. 右键可以打开终端 59 | ![](./pic/vm6.jpg) 60 | 61 | 9. 
输入以下代码测试,并观察/对比输出结果 62 | 63 | 先输入 `python` 进入 Python 交互环境。 64 | ```shell 65 | python 66 | ``` 67 | **可以看到,使用的是 Python 2.7** 68 | 69 | 再输入以下代码,进行简单的向量加法运算,以测试 TF 是否正确安装 70 | ```python 71 | import tensorflow as tf 72 | tf.__version__ 73 | 74 | a = tf.constant([1.0, 2.0], name="a") 75 | b = tf.constant([2.0, 3.0], name="b") 76 | result = a + b 77 | sess = tf.Session() 78 | sess.run(result) 79 | ``` 80 | 81 | 输出结果 82 | ![](./pic/vm7.jpg) 83 | 84 | **可以看到 TensorFlow 的版本是 1.3.0** 85 | 86 | 注: 87 | 88 | 中间那一堆 warning 是因为没有在编译 tf 时开启特定 CPU 指令集的支持,开启了会跑得更快一点。但从镜像源安装的 tf 为了兼容性是不会开启的,毕竟大家的 CPU 基本不一样。想要开启并支持这些指令集需要自己编译,目前就暂不考虑了。 89 | 90 | 91 | ---- 92 | ---- 93 | 94 | 95 | ## 虚拟机配置信息 96 | 97 | ```txt 98 | # 账户信息 99 | * username:ailab 100 | * password:ailab 101 | 102 | # 虚拟机配置 103 | 基本与 MOOC 一样 104 | 不同之处: 105 | + CPU 2 核 106 | + 内存 2GB 107 | 108 | # 已安装的软件 109 | * Ubuntu 16.04 110 | + Python 2.7 111 | + Tensorflow 1.3.0 112 | + vim 113 | + pip 18.0 114 | + VMware-tools 115 | 116 | # 其他修改 117 | + 系统源 改为 清华源 118 | + Python 源 改为 清华源 119 | 120 | by: @woclass 121 | [2018-12-02 17:05:29] 122 | 123 | File Hash 124 | ======= 125 | 126 | Algorithm : SHA256 127 | Hash : 66975CCDA64E3BEE09ED644B379AF4C35A264D95BD05A3482F5D4194D4C42910 128 | Path : .\MOOC_PKU-TFNote_Ubuntu_x64-V1.0.0\MOOC_PKU-TFNote_Ubuntu_x64-disk1.vmdk 129 | 130 | Algorithm : SHA256 131 | Hash : 6E7F64FEA84FF1AD84643DF029E9A8B02BE6C46BAC02ACA96E1607CC3095F3E3 132 | Path : .\MOOC_PKU-TFNote_Ubuntu_x64-V1.0.0\MOOC_PKU-TFNote_Ubuntu_x64.mf 133 | 134 | Algorithm : SHA256 135 | Hash : 9690969CB461B6BC116C5D842317BEED6EBFF434CD3B8092DDA2E58E66F08DBA 136 | Path : .\MOOC_PKU-TFNote_Ubuntu_x64-V1.0.0\MOOC_PKU-TFNote_Ubuntu_x64.ovf 137 | ``` 138 | 139 | - 用户名、密码都是 **ailab** 140 | - 用户名、密码**都是 ailab** 141 | - **用户名、密码都是 ailab** 142 | 143 | 重要的事情我应该说了不止 6 遍。应该都看到了,之后就不回复这种问题了。 144 | -------------------------------------------------------------------------------- /README.md: 
-------------------------------------------------------------------------------- 1 | # PKU Tensorflow Notes 2 | 3 | 本 repo 用于存放北大曹健的 MOOC [《人工智能实践:Tensorflow笔记》](http://www.icourse163.org/course/PKU-1002536002)相关的讲义及配套的示例代码。 4 | 5 | 6 | ## 版权声明 7 | MOOC 的视频、课件、代码均归曹健老师的 MOOC 教学组所有。 8 | 9 | 本 repo 的其余部分按 [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.zh) 开源。 10 | 迁移到其他语言的部分除外(大概也不会放在这个 repo 里)。 11 | 12 | 13 | ## 资料索引 14 | 15 | + [各讲课程公告与导学](./Notice&Guidance.md) 16 | 17 | Tensorflow 笔记 18 | > 各讲的示例代码放在 `lec*` 文件夹内 19 | 20 | - [x] [第一讲 概述](./lec1/TA's-Tensorflow-notes-1.md) 21 | - [x] [第二讲 Python 语法串讲](./lec2/TA's-Tensorflow-notes-2.md) 22 | - [x] [第三讲 搭建神经网络](./lec3/TA's-Tensorflow-notes-3.md) 23 | - [x] [第四讲 神经网络优化](./lec4/TA's-Tensorflow-notes-4.md) 24 | - [x] [第五讲 MNIST 数据集输出手写数字识别准确率](./lec5/TA's-Tensorflow-notes-5.md) 25 | - [ ] [第六讲 输入手写数字输出识别结果](./lec6/TA's-Tensorflow-notes-6.md) 26 | - [ ] [第七讲 卷积神经网络](./lec7/TA's-Tensorflow-notes-7.md) 27 | - [ ] [第八讲 VGG16 神经网络](./lec8/TA's-Tensorflow-notes-8.md) 28 | 29 | 30 | ## 参考资料 31 | 32 | 课程官网 33 | - [人工智能实践:Tensorflow 笔记 北京大学 中国大学 MOOC(慕课)](http://www.icourse163.org/course/PKU-1002536002) 34 | 35 | MOOC 官方配套代码 36 | - [cj0012/AI-Practice-Tensorflow-Notes: 人工智能实践:Tensorflow 笔记](https://github.com/cj0012/AI-Practice-Tensorflow-Notes) 37 | 38 | 39 | ## [课程学习参考资料](./OtherResources.md) 40 | 41 | 以下仅列出个人认为比较有用的在线资料。 42 | 更多参考材料请点此节标题查看。 43 | 44 | 45 | ---- 46 | 47 | TODO ([咕咕咕](https://github.com/int-and-his-friends/gugu-tutorial)): 48 | 49 | - [x] 打包 Ubuntu 虚拟机环境 50 | 参见 [OVF Release Note](./OVF-ReleaseNote.md) 51 | - [ ] 打包 Docker 环境 52 | - [ ] 更新 .md 版"助教笔记" 53 | - [ ] 更新示例代码 54 | - [ ] 备份 MOOC 视频 55 | - [ ] 数学公式 LaTeX+SVG 化 56 | - [ ] 运行并检查代码是否有 bug 57 | - [ ] 整理学习参考资料 58 | - [ ] 更新代码以兼容 Python 3 59 | - [ ] 制作 ipynb 版示例代码 60 | - [ ] 迁移到其他语言 61 | -------------------------------------------------------------------------------- /lec1/TA's-Tensorflow-notes-1.md: 
-------------------------------------------------------------------------------- 1 | Tensorflow 笔记:第一讲 概述 2 | ==== 3 | 4 | # 一、 基本概念 5 | 6 | ## 1、什么是人工智能 7 | 8 | **人工智能的概念**:机器模拟人的意识和思维 9 | 10 | **重要人物**:艾伦·麦席森·图灵(Alan Mathison Turing) 11 | 12 | **人物简介**:1912 年 6 月 23 日-1954 年 6 月 7 日,英国数学家、逻辑学家,被称为计算机科学之父、人工智能之父。 13 | 14 | **相关事件**: 15 | 1. 1950 年在论文《机器能思考吗?》中提出了图灵测试,一种用于判定机器是否具有智能的试验方法:提问者和回答者隔开,提问者通过一些装置(如键盘)向机器随意提问。多次测试后,如果有超过 30% 的提问者认为回答问题的是人而不是机器,那么这台机器就通过测试,具有了人工智能。这也就是人工智能的概念:“用机器模拟人的意识和思维”。 16 | 2. 图灵在论文中预测:在 2000 年,会出现通过图灵测试具备人工智能的机器。然而直到 2014 年 6 月,英国雷丁大学的聊天程序才成功冒充了 13 岁男孩,通过了图灵测试。这一事件比图灵的预测晚了 14 年。 17 | 3. 2015 年 11 月 Science 杂志封面新闻报道,机器人已经可以依据从未见过的文字中的一个字符,写出同样风格的字符,说明机器已经具备了迅速学习陌生文字的创造能力。 18 | 19 | **消费级人工智能产品**: 20 | - 国外 21 | 1. 谷歌 Assistant 22 | 2. 微软 Cortana 23 | 3. 苹果 Siri 24 | 4. 亚马逊 Alexa 25 | - 国内 26 | 1. 阿里的天猫精灵 27 | 2. 小米的小爱同学 28 | 29 | **人工智能先锋**: 30 | 1. Geoffrey Hinton:多伦多大学的教授,谷歌大脑多伦多分部负责人,是人工智能领域的鼻祖,他发表了许多让神经网络得以应用的论文,激活了整个人工智能领域。他还培养了许多人工智能的大家,比如 LeCun 就是他的博士后。 31 | 2. Yann LeCun:纽约大学的教授,Facebook 人工智能研究室负责人,他改进了卷积神经网络算法,使卷积神经网络具有了工程应用价值,现在卷积神经网络依旧是计算机视觉领域最有效的模型之一。 32 | 3. Yoshua Bengio:蒙特利尔大学的教授,现任微软公司战略顾问,他推动了循环神经网络算法的发展,使循环神经网络得到工程应用,用循环神经网络解决了自然语言处理中的问题。 33 | 34 | 35 | ## 2、什么是机器学习 36 | 37 | **机器学习的概念**:机器学习是一种统计学方法,计算机利用已有数据得出某种模型,再利用此模型预测结果。 38 | 39 | ![](./pic/Image_001.png) 40 | 41 | **特点**:随经验的增加,效果会变好。 42 | 43 | **简单模型举例**:决策树模型 44 | 45 | 预测班车到达时间问题描述: 每天早上七点半,班车从 A 地发往 B 地,到达 B 地的时间如何准确预测? 
46 | 47 | 如果你第一次乘坐班车,你的预测通常不太准。 48 | 一周之后,你大概能预测出班车 8:00 左右到达 B 地; 49 | 一个月之后,随着经验的增加,你还会知道,周一常堵车,会晚 10 分钟,下雨常堵车,会晚 20 分钟。 50 | 于是你画出了如下的一张树状图, 51 | 如果是周一,还下了雨,班车会 8:30 到达; 52 | 如果不是周一,也没有下雨,班车会 8:00 到达。 53 | 54 | ![](./pic/Image_002.png) 55 | 56 | **机器学习和传统计算机运算的区别**:传统计算机是基于冯诺依曼结构,指令预先存储。运行时,CPU 从存储器里逐行读取指令,按部就班逐行执行预先安排好的 57 | 指令。其特点是,输出结果确定,因为先干什么,后干什么都已经提前写在指令里了。 58 | 59 | ![](./pic/Image_003.png) 60 | 61 | **机器学习三要素**:数据、算法、算力 62 | 63 | ![](./pic/Image_004.png) 64 | 65 | 66 | ## 3、什么是深度学习 67 | 68 | **深度学习的概念**:深层次神经网络,源于对生物脑神经元结构的研究。 69 | 70 | **人脑神经网络**:随着人的成长,脑神经网络在渐渐变粗变壮。 71 | 72 | ![](./pic/Image_005.jpg) 73 | 74 | **生物学中的神经元**: 75 | 下图左侧有许多支流汇总在一起,生物学中称这些支流叫做树突。树突具有接受刺激并将冲动传入细胞体的功能,是神经元的输入。这些树突汇总于细胞核又沿着一条轴突输出。 76 | 轴突的主要功能是将神经冲动由胞体传至其他神经元,是神经元的输出。人脑便是由 860 亿个这样的神经元组成,所有的思维意识,都以它为基本单元,连接成网络实现的。 77 | 78 | ![](./pic/Image_006.jpg) 79 | 80 | **计算机中的神经元模型**: 81 | 1943 年,心理学家 McCulloch 和数学家 Pitts 参考了生物神经元的结构,发表了抽象的神经元模型 MP。神经元模型是一个包含输入、输出与计算功能的模型。输入可以类比为神经元的树突,输出可以类比为神经元的轴突,计算可以类比为细胞核。 82 | 83 | ![](./pic/Image_007.jpg) 84 | 85 | 86 | ## 4、人工智能 Vs 机器学习 Vs 深度学习 87 | 88 | 人工智能,就是用机器模拟人的意识和思维。 89 | 90 | 机器学习,则是实现人工智能的一种方法,是人工智能的子集。 91 | 92 | 深度学习就是深层次神经网络,是机器学习的一种实现方法,是机器学习的子集。 93 | 94 | ![](./pic/Image_008.png) 95 | 96 | 97 | # 二、 神经网络的发展历史(三起两落) 98 | 99 | ![](./pic/Image_009.jpg) 100 | 101 | **第一次兴起**:1958 年,人们把两层神经元首尾相接,组成单层神经网络,称做感知机。感知机成了首个可以学习的人工神经网络。引发了神经网络研究的第一次兴起。 102 | 103 | **第一次寒冬**:1969 年,这个领域的权威学者 Minsky 用数学公式证明了只有单层神经网络的感知机无法对异或逻辑进行分类,Minsky 还指出要想解决异或这类线性不可分问题,需要把单层神经网络扩展到两层或者以上。然而在那个年代计算机的运算能力,是无法支撑这种运算量的。只有一层计算单元的感知机,暴露出它的天然缺陷,使得神经网络研究进入了第一个寒冬。 104 | 105 | **第二次兴起**:1986 年,Hinton 等人提出了反向传播方法,有效解决了两层神经网络的算力问题。引发了神经网络研究的第二次兴起。 106 | 107 | **第二次寒冬**:1995 年,支持向量机诞生。支持向量机可以免去神经网络需要调节参数的不足,还避免了神经网络中局部最优的问题。一举击败神经网络,成为当时人工智能领域的主流算法,使得神经网络进入了它的第二个冬季。 108 | 109 | **第三次兴起**:2006 年,深层次神经网络出现;2012 年,卷积神经网络在图像识别领域中的惊人表现,又引发了神经网络研究的再一次兴起。 110 | 111 | 112 | # 三、 机器学习的典型应用 113 | 114 | ## 1、应用领域 115 | 计算机视觉、语音识别、自然语言处理 116 | 117 | ## 2、主流应用: 118 | 119 | (1) 
**预测**(对连续数据进行预测) 120 | 121 | 如,预测某小区 100 平米的房价卖多少钱。 122 | 根据以往数据(红色●),拟合出一条线,让它“穿过”所有的点,并且与各个点的距离尽可能的小。 123 | 124 | ![](./pic/Image_010.png) 125 | 126 | 我们可以把以前的数据,输入神经网络,让它训练出一个模型,比如这张图中红色点表示了以往的数据,虚线表示了预测出的模型 y = ax + b,大量历史数据也就是面积 x 和房价 y 作为输入,训练出了模型的参数 a = 3.5, b = 150,则你家 100 平米的房价应该是 3.5 * 100 + 150 = 500 万。 127 | 128 | 我们发现,模型不一定全是直线,也可以是曲线;我们还发现,随着数据的增多,模型一般会更准确。 129 | 130 | (2) **分类**(对离散数据进行分类) 131 | 132 | 如,根据肿瘤患者的年龄和肿瘤大小判断良性、恶性。 133 | 红色样本为恶性,蓝色样本为良性,绿色分为哪类? 134 | 135 | ![](./pic/Image_011.png) 136 | 137 | 假如让计算机判断肿瘤是良性还是恶性,先要把历史数据输入到神经网络进行建模,调节模型的参数,得到一条线把良性肿瘤和恶性肿瘤分开。比如输入患者的年龄、肿瘤的大小,还有对应的是良性肿瘤还是恶性肿瘤,使用神经网络训练模型、调整参数,再输入新的患者年龄和肿瘤大小时,计算机会直接告诉你肿瘤是良性还是恶性。比如上图的绿色三角就属于良性肿瘤。 138 | 139 | 140 | # 四、课程小结 141 | 142 | 1. 机器学习,就是在任务 T 上,随经验 E 的增加,效果 P 随之增加。 143 | 2. 机器学习的过程是通过大量数据的输入,生成一个模型,再利用这个生成的模型,实现对结果的预测。 144 | 3. 庞大的神经网络是基于神经元结构的,是输入乘以权重,再求和,再过非线性函数的过程。 145 | -------------------------------------------------------------------------------- /lec1/pic/Image_001.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec1/pic/Image_001.png -------------------------------------------------------------------------------- /lec1/pic/Image_002.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec1/pic/Image_002.png -------------------------------------------------------------------------------- /lec1/pic/Image_003.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec1/pic/Image_003.png -------------------------------------------------------------------------------- /lec1/pic/Image_004.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec1/pic/Image_004.png -------------------------------------------------------------------------------- /lec1/pic/Image_005.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec1/pic/Image_005.jpg -------------------------------------------------------------------------------- /lec1/pic/Image_006.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec1/pic/Image_006.jpg -------------------------------------------------------------------------------- /lec1/pic/Image_007.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec1/pic/Image_007.jpg -------------------------------------------------------------------------------- /lec1/pic/Image_008.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec1/pic/Image_008.png -------------------------------------------------------------------------------- /lec1/pic/Image_009.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec1/pic/Image_009.jpg -------------------------------------------------------------------------------- /lec1/pic/Image_010.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec1/pic/Image_010.png -------------------------------------------------------------------------------- /lec1/pic/Image_011.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec1/pic/Image_011.png -------------------------------------------------------------------------------- /lec2/a.py: -------------------------------------------------------------------------------- 1 | 123 2 | -------------------------------------------------------------------------------- /lec2/animal.py: -------------------------------------------------------------------------------- 1 | class Animals(): 2 | def breathe(self): 3 | print " breathing" 4 | def move(self): 5 | print "moving" 6 | def eat (self): 7 | print "eating food" 8 | class Mammals(Animals): 9 | def breastfeed(self): 10 | print "feeding young" 11 | class Cats(Mammals): 12 | def __init__(self, spots): 13 | self.spots = spots 14 | def catch_mouse(self): 15 | print "catch mouse" 16 | def left_foot_forward(self): 17 | print "left foot forward" 18 | def left_foot_backward(self): 19 | print "left foot backward" 20 | def dance(self): 21 | self.left_foot_forward() 22 | self.left_foot_backward() 23 | self.left_foot_forward() 24 | self.left_foot_backward() 25 | kitty=Cats(10) 26 | print kitty.spots 27 | kitty.dance() 28 | kitty.breastfeed() 29 | kitty.move() 30 | -------------------------------------------------------------------------------- /lec2/b.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | age=input("输入你的年龄\n") 3 | if age>18: 4 | print "大于十八岁" 5 | print "你成年了" 6 | else: 7 | print "小于等于十八岁" 8 | print "还未成年" 9 | 10 | 
-------------------------------------------------------------------------------- /lec2/c.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | num=input("please input your class number:") 3 | if num==1 or num==2: 4 | print "class room 302" 5 | elif num==3: 6 | print "class room 303" 7 | elif num==4: 8 | print "class room 304" 9 | else: 10 | print "class room 305" 11 | -------------------------------------------------------------------------------- /lec2/pic/Image_001.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_001.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_002.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_002.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_003.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_003.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_004.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_004.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_008.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_008.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_009.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_009.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_010.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_010.png -------------------------------------------------------------------------------- /lec2/pic/Image_011.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_011.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_012.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_012.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_013.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_013.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_014.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_014.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_015.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_015.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_016.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_016.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_017.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_017.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_018.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_018.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_019.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_019.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_020.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_020.png -------------------------------------------------------------------------------- /lec2/pic/Image_021.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_021.png -------------------------------------------------------------------------------- /lec2/pic/Image_022.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_022.png -------------------------------------------------------------------------------- /lec2/pic/Image_023.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_023.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_024.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_024.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_025.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_025.jpg -------------------------------------------------------------------------------- /lec2/pic/Image_026.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_026.jpg
-------------------------------------------------------------------------------- /lec2/pic/Image_027.jpg: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_027.jpg
-------------------------------------------------------------------------------- /lec2/pic/Image_028.jpg: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_028.jpg
-------------------------------------------------------------------------------- /lec2/pic/Image_029.jpg: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/lec2/pic/Image_029.jpg
-------------------------------------------------------------------------------- /lec2/save.dat: --------------------------------------------------------------------------------
(dp0
S'pocket'
p1
(lp2
S'key'
p3
aS'knife'
p4
asS'position'
p5
S'N2 E3'
p6
sS'money'
p7
I160
s.
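The save.dat file above is a protocol-0 (plain ASCII) pickle of a Python dict: the keys 'pocket', 'position' and 'money' are readable in the dump. A minimal sketch, assuming nothing beyond the standard pickle module, of how such a file is produced and read back:

```python
# save.dat above is a protocol-0 pickle of a dict. Protocol 0 is the
# human-readable format whose markers ((dp0, S'...', I160, s.) appear in the dump.
import pickle

game_state = {
    'pocket': ['key', 'knife'],
    'position': 'N2 E3',
    'money': 160,
}

# Serialize with protocol 0 to get the ASCII stream seen in save.dat
data = pickle.dumps(game_state, protocol=0)
print(data.decode('ascii').splitlines()[0])  # first line of the stream: (dp0

# Reading it back recovers the original dict
restored = pickle.loads(data)
print(restored['money'])  # 160
```

Writing the same bytes to a file with `open('save.dat', 'wb')` reproduces the dump above.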
-------------------------------------------------------------------------------- /lec2/tf3_1.py: --------------------------------------------------------------------------------
import tensorflow as tf
a=tf.constant([1.0,2.0])
b=tf.constant([3.0,4.0])
result=a+b
print result
-------------------------------------------------------------------------------- /lec3/pic/ (SVG figures): --------------------------------------------------------------------------------
(The SVG markup for eq1, eq2, eq3, eq5, eq7, eq15, img1, img3, in-eq1, in-eq2, in-eq3, in-sym-al, in-sym-b-L, in-sym-bl, in-sym-detl, in-sym-w-L, in-sym-wl, in-sym-zl, in-sym1, in-sym2 and sym1 was stripped in this export. Only img1.svg keeps its text labels: X1, X2, y, w1, w2, a diagram of a network with two inputs X1 and X2, weights w1 and w2, and one output y.)
-------------------------------------------------------------------------------- /lec3/tf3_1.py: --------------------------------------------------------------------------------
import tensorflow as tf
a=tf.constant([1.0,2.0])
b=tf.constant([3.0,4.0])
result=a+b
print result
-------------------------------------------------------------------------------- /lec3/tf3_2.py: --------------------------------------------------------------------------------
import tensorflow as tf
x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
y=tf.matmul(x,w)
print y
with tf.Session() as sess:
    print sess.run(y)
-------------------------------------------------------------------------------- /lec3/tf3_3.py: --------------------------------------------------------------------------------
#coding:utf-8
#Simple two-layer neural network (fully connected)
import tensorflow as tf

#Define the input and the parameters
x = tf.constant([[0.7, 0.5]])
w1= tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2= tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

#Define the forward-propagation process
a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

#Compute the result in a session
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    print "y in tf3_3.py is:\n", sess.run(y)

'''
y in tf3_3.py is :
[[3.0904665]]
'''
-------------------------------------------------------------------------------- /lec3/tf3_4.py: --------------------------------------------------------------------------------
#coding:utf-8
#Simple two-layer neural network (fully connected)

import tensorflow as tf

#Define the input and the parameters
#Define the input with a placeholder (sess.run feeds one group of data)
x = tf.placeholder(tf.float32, shape=(1, 2))
w1= tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2= tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

#Define the forward-propagation process
a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

#Compute the result in a session
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    print "y in tf3_4.py is:\n", sess.run(y, feed_dict={x: [[0.7,0.5]]})

'''
y in tf3_4.py is:
[[3.0904665]]
'''
-------------------------------------------------------------------------------- /lec3/tf3_5.py: --------------------------------------------------------------------------------
#coding:utf-8
#Simple two-layer neural network (fully connected)

import tensorflow as tf

#Define the input and the parameters
#Define the input with a placeholder (sess.run feeds several groups of data)
x = tf.placeholder(tf.float32, shape=(None, 2))
w1= tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2= tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

#Define the forward-propagation process
a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

#Compute the result in a session
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    print "the result of tf3_5.py is:\n", sess.run(y, feed_dict={x: [[0.7,0.5],[0.2,0.3],[0.3,0.4],[0.4,0.5]]})
    print "w1:\n", sess.run(w1)
    print "w2:\n", sess.run(w2)

'''
the result of tf3_5.py is:
[[ 3.0904665 ]
 [ 1.2236414 ]
 [ 1.72707319]
 [ 2.23050475]]
w1:
[[-0.81131822  1.48459876  0.06532937]
 [-2.4427042   0.0992484   0.59122431]]
w2:
[[-0.81131822]
 [ 1.48459876]
 [ 0.06532937]]
'''
-------------------------------------------------------------------------------- /lec3/tf3_6.py: --------------------------------------------------------------------------------
#coding:utf-8
#0 Import modules and generate the simulated data set.
import tensorflow as tf
import numpy as np
BATCH_SIZE = 8
SEED = 23455

#Seed-based random number generator
rdm = np.random.RandomState(SEED)
#Return a 32x2 matrix of random numbers: 32 groups of volume and weight, used as the input data set
X = rdm.rand(32,2)
#Take each row of the 32x2 matrix X; assign 1 to Y if the two entries sum to less than 1, else 0
#These are the labels (correct answers) for the input data set
Y_ = [[int(x0 + x1 < 1)] for (x0, x1) in X]
print "X:\n",X
print "Y_:\n",Y_

#1 Define the network's input, parameters and output, and the forward-propagation process.
x = tf.placeholder(tf.float32, shape=(None, 2))
y_= tf.placeholder(tf.float32, shape=(None, 1))

w1= tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2= tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

#2 Define the loss function and the back-propagation method.
loss_mse = tf.reduce_mean(tf.square(y-y_))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss_mse)
#train_step = tf.train.MomentumOptimizer(0.001,0.9).minimize(loss_mse)
#train_step = tf.train.AdamOptimizer(0.001).minimize(loss_mse)

#3 Create a session and train for STEPS rounds
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    # Print the current (untrained) parameter values.
    print "w1:\n", sess.run(w1)
    print "w2:\n", sess.run(w2)
    print "\n"

    # Train the model.
    STEPS = 3000
    for i in range(STEPS):
        start = (i*BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y_[start:end]})
        if i % 500 == 0:
            total_loss = sess.run(loss_mse, feed_dict={x: X, y_: Y_})
            print("After %d training step(s), loss_mse on all data is %g" % (i, total_loss))

    # Print the trained parameter values.
    print "\n"
    print "w1:\n", sess.run(w1)
    print "w2:\n", sess.run(w2)

"""
X:
[[ 0.83494319  0.11482951]
 [ 0.66899751  0.46594987]
 [ 0.60181666  0.58838408]
 [ 0.31836656  0.20502072]
 [ 0.87043944  0.02679395]
 [ 0.41539811  0.43938369]
 [ 0.68635684  0.24833404]
 [ 0.97315228  0.68541849]
 [ 0.03081617  0.89479913]
 [ 0.24665715  0.28584862]
 [ 0.31375667  0.47718349]
 [ 0.56689254  0.77079148]
 [ 0.7321604   0.35828963]
 [ 0.15724842  0.94294584]
 [ 0.34933722  0.84634483]
 [ 0.50304053  0.81299619]
 [ 0.23869886  0.9895604 ]
 [ 0.4636501   0.32531094]
 [ 0.36510487  0.97365522]
 [ 0.73350238  0.83833013]
 [ 0.61810158  0.12580353]
 [ 0.59274817  0.18779828]
 [ 0.87150299  0.34679501]
 [ 0.25883219  0.50002932]
 [ 0.75690948  0.83429824]
 [ 0.29316649  0.05646578]
 [ 0.10409134  0.88235166]
 [ 0.06727785  0.57784761]
 [ 0.38492705  0.48384792]
 [ 0.69234428  0.19687348]
 [ 0.42783492  0.73416985]
 [ 0.09696069  0.04883936]]
Y_:
[[1], [0], [0], [1], [1], [1], [1], [0], [1], [1], [1], [0], [0], [0], [0], [0], [0], [1], [0], [0], [1], [1], [0], [1], [0], [1], [1], [1], [1], [1], [0], [1]]
w1:
[[-0.81131822  1.48459876  0.06532937]
 [-2.4427042   0.0992484   0.59122431]]
w2:
[[-0.81131822]
 [ 1.48459876]
 [ 0.06532937]]


After 0 training step(s), loss_mse on all data is 5.13118
After 500 training step(s), loss_mse on all data is 0.429111
After 1000 training step(s), loss_mse on all data is 0.409789
After 1500 training step(s), loss_mse on all data is 0.399923
After 2000 training step(s), loss_mse on all data is 0.394146
After 2500 training step(s), loss_mse on all data is 0.390597


w1:
[[-0.70006633  0.9136318   0.08953571]
 [-2.3402493  -0.14641267  0.58823055]]
w2:
[[-0.06024267]
 [ 0.91956186]
 [-0.0682071 ]]
"""
-------------------------------------------------------------------------------- /lec4/opt4_1.py: --------------------------------------------------------------------------------
#coding:utf-8
#Over-prediction and under-prediction cost the same
#0 Import modules and generate the data set
import tensorflow as tf
import numpy as np
BATCH_SIZE = 8
SEED = 23455

rdm = np.random.RandomState(SEED)
X = rdm.rand(32,2)
Y_ = [[x1+x2+(rdm.rand()/10.0-0.05)] for (x1, x2) in X]

#1 Define the network's input, parameters and output, and the forward-propagation process.
x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))
w1= tf.Variable(tf.random_normal([2, 1], stddev=1, seed=1))
y = tf.matmul(x, w1)

#2 Define the loss function and the back-propagation method.
#The loss is MSE; back propagation uses gradient descent.
loss_mse = tf.reduce_mean(tf.square(y_ - y))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss_mse)

#3 Create a session and train for STEPS rounds
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    STEPS = 20000
    for i in range(STEPS):
        start = (i*BATCH_SIZE) % 32
        end = (i*BATCH_SIZE) % 32 + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y_[start:end]})
        if i % 500 == 0:
            print "After %d training steps, w1 is: " % (i)
            print sess.run(w1), "\n"
    print "Final w1 is: \n", sess.run(w1)
#Exercise: in part #2 of this code, try the other back-propagation methods, observe the effect on convergence speed, and write down what you find
-------------------------------------------------------------------------------- /lec4/opt4_2.py: --------------------------------------------------------------------------------
#coding:utf-8
#Yogurt costs 1 yuan; yogurt profit is 9 yuan
#Under-prediction loses more, so avoid predicting low: the trained model will tend to predict a bit high
#0 Import modules and generate the data set
import tensorflow as tf
import numpy as np
BATCH_SIZE = 8
SEED = 23455
COST = 1
PROFIT = 9

rdm = np.random.RandomState(SEED)
X = rdm.rand(32,2)
Y = [[x1+x2+(rdm.rand()/10.0-0.05)] for (x1, x2) in X]

#1 Define the network's input, parameters and output, and the forward-propagation process.
x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))
w1= tf.Variable(tf.random_normal([2, 1], stddev=1, seed=1))
y = tf.matmul(x, w1)

#2 Define the loss function and the back-propagation method.
# Define the loss so that under-prediction costs more; the model should lean toward predicting high.
loss = tf.reduce_sum(tf.where(tf.greater(y, y_), (y - y_)*COST, (y_ - y)*PROFIT))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)

#3 Create a session and train for STEPS rounds.
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    STEPS = 3000
    for i in range(STEPS):
        start = (i*BATCH_SIZE) % 32
        end = (i*BATCH_SIZE) % 32 + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 500 == 0:
            print "After %d training steps, w1 is: " % (i)
            print sess.run(w1), "\n"
    print "Final w1 is: \n", sess.run(w1)
-------------------------------------------------------------------------------- /lec4/opt4_3.py: --------------------------------------------------------------------------------
#coding:utf-8
#Yogurt costs 9 yuan; yogurt profit is 1 yuan
#Over-prediction loses more, so avoid predicting high: the trained model will tend to predict a bit low
#0 Import modules and generate the data set
import tensorflow as tf
import numpy as np
BATCH_SIZE = 8
SEED = 23455
COST = 9
PROFIT = 1

rdm = np.random.RandomState(SEED)
X = rdm.rand(32,2)
Y = [[x1+x2+(rdm.rand()/10.0-0.05)] for (x1, x2) in X]

#1 Define the network's input, parameters and output, and the forward-propagation process.
x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))
w1= tf.Variable(tf.random_normal([2, 1], stddev=1, seed=1))
y = tf.matmul(x, w1)

#2 Define the loss function and the back-propagation method.
#Redefine the loss so that over-prediction costs more; the model should lean toward predicting low.
loss = tf.reduce_sum(tf.where(tf.greater(y, y_), (y - y_)*COST, (y_ - y)*PROFIT))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)

#3 Create a session and train for STEPS rounds.
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    STEPS = 3000
    for i in range(STEPS):
        start = (i*BATCH_SIZE) % 32
        end = (i*BATCH_SIZE) % 32 + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 500 == 0:
            print "After %d training steps, w1 is: " % (i)
            print sess.run(w1), "\n"
    print "Final w1 is: \n", sess.run(w1)
-------------------------------------------------------------------------------- /lec4/opt4_4-1.py: --------------------------------------------------------------------------------
#coding:utf-8
#Let loss=(w+1)^2 with w initialized to the constant 5. Back propagation finds the optimal w, i.e. the w that minimizes loss
import tensorflow as tf
#Define the parameter w to optimize, initialized to 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
#Define the loss function
loss = tf.square(w+1)
#Define the back-propagation method
train_step = tf.train.GradientDescentOptimizer(1).minimize(loss)
#Create a session and train for 40 rounds
with tf.Session() as sess:
    init_op=tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print "After %s steps: w is %f, loss is %f." % (i, w_val,loss_val)
-------------------------------------------------------------------------------- /lec4/opt4_4-2.py: --------------------------------------------------------------------------------
#coding:utf-8
#Let loss=(w+1)^2 with w initialized to the constant 5. Back propagation finds the optimal w, i.e. the w that minimizes loss
import tensorflow as tf
#Define the parameter w to optimize, initialized to 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
#Define the loss function
loss = tf.square(w+1)
#Define the back-propagation method
train_step = tf.train.GradientDescentOptimizer(0.0001).minimize(loss)
#Create a session and train for 40 rounds
with tf.Session() as sess:
    init_op=tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print "After %s steps: w is %f, loss is %f." % (i, w_val,loss_val)
-------------------------------------------------------------------------------- /lec4/opt4_4.py: --------------------------------------------------------------------------------
#coding:utf-8
#Let loss=(w+1)^2 with w initialized to the constant 5. Back propagation finds the optimal w, i.e. the w that minimizes loss
import tensorflow as tf
#Define the parameter w to optimize, initialized to 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
#Define the loss function
loss = tf.square(w+1)
#Define the back-propagation method
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
#Create a session and train for 40 rounds
with tf.Session() as sess:
    init_op=tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print "After %s steps: w is %f, loss is %f." % (i, w_val,loss_val)
-------------------------------------------------------------------------------- /lec4/opt4_5.py: --------------------------------------------------------------------------------
#coding:utf-8
#Let loss=(w+1)^2 with w initialized to the constant 5. Back propagation finds the optimal w, i.e. the w that minimizes loss
#Use an exponentially decaying learning rate: a high descent speed early in training lets the model converge within fewer training rounds.
import tensorflow as tf

LEARNING_RATE_BASE = 0.1 #initial learning rate
LEARNING_RATE_DECAY = 0.99 #learning-rate decay rate
LEARNING_RATE_STEP = 1 #number of batches between learning-rate updates; usually total samples / BATCH_SIZE

#Counter of how many batches have run; initialized to 0 and set not trainable
global_step = tf.Variable(0, trainable=False)
#Define the exponentially decaying learning rate
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step, LEARNING_RATE_STEP, LEARNING_RATE_DECAY, staircase=True)
#Define the parameter to optimize, initialized to 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
#Define the loss function
loss = tf.square(w+1)
#Define the back-propagation method
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
#Create a session and train for 40 rounds
with tf.Session() as sess:
    init_op=tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        learning_rate_val = sess.run(learning_rate)
        global_step_val = sess.run(global_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print "After %s steps: global_step is %f, w is %f, learning rate is %f, loss is %f" % (i, global_step_val, w_val, learning_rate_val, loss_val)
-------------------------------------------------------------------------------- /lec4/opt4_6.py: --------------------------------------------------------------------------------
#coding:utf-8
import tensorflow as tf

#1. Define the variable and the moving-average class
#Define a 32-bit float variable with initial value 0.0; the code keeps updating and optimizing w1 while the moving average maintains a shadow of w1
w1 = tf.Variable(0, dtype=tf.float32)
#Define num_updates (the network's iteration count), initial value 0, not trainable
global_step = tf.Variable(0, trainable=False)
#Instantiate the moving-average class with decay rate 0.99 and the current step count global_step
MOVING_AVERAGE_DECAY = 0.99
ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
#ema.apply takes the update list; every sess.run(ema_op) computes the moving average of each element in that list.
#In practice tf.trainable_variables() is used to collect all trainable parameters into a list automatically
#ema_op = ema.apply([w1])
ema_op = ema.apply(tf.trainable_variables())

#2. Watch how the values change across iterations.
with tf.Session() as sess:
    # Initialization
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    #ema.average(w1) fetches the moving average of w1 (to run several nodes, list them as elements inside sess.run)
    #Print the current parameter w1 and its moving average
    print "current global_step:", sess.run(global_step)
    print "current w1", sess.run([w1, ema.average(w1)])

    # Assign 1 to parameter w1
    sess.run(tf.assign(w1, 1))
    sess.run(ema_op)
    print "current global_step:", sess.run(global_step)
    print "current w1", sess.run([w1, ema.average(w1)])

    # Update global_step and w1 to simulate step 100 with w1 becoming 10; below, global_step stays 100, and every moving-average run updates the shadow value
    sess.run(tf.assign(global_step, 100))
    sess.run(tf.assign(w1, 10))
    sess.run(ema_op)
    print "current global_step:", sess.run(global_step)
    print "current w1:", sess.run([w1, ema.average(w1)])

    # Each sess.run updates w1's moving average once
    sess.run(ema_op)
    print "current global_step:" , sess.run(global_step)
    print "current w1:", sess.run([w1, ema.average(w1)])

    sess.run(ema_op)
    print "current global_step:" , sess.run(global_step)
    print "current w1:", sess.run([w1, ema.average(w1)])

    sess.run(ema_op)
    print "current global_step:" , sess.run(global_step)
    print "current w1:", sess.run([w1, ema.average(w1)])

    sess.run(ema_op)
    print "current global_step:" , sess.run(global_step)
    print "current w1:", sess.run([w1, ema.average(w1)])

    sess.run(ema_op)
    print "current global_step:" , sess.run(global_step)
    print "current w1:", sess.run([w1, ema.average(w1)])

    sess.run(ema_op)
    print "current global_step:" , sess.run(global_step)
    print "current w1:", sess.run([w1, ema.average(w1)])

#Change MOVING_AVERAGE_DECAY to 0.1 and watch how quickly the shadow follows

"""

current global_step: 0
current w1 [0.0, 0.0]
current global_step: 0
current w1 [1.0, 0.9]
current global_step: 100
current w1: [10.0, 1.6445453]
current global_step: 100
current w1: [10.0, 2.3281732]
current global_step: 100
current w1: [10.0, 2.955868]
current global_step: 100
current w1: [10.0, 3.532206]
current global_step: 100
current w1: [10.0, 4.061389]
current global_step: 100
current w1: [10.0, 4.547275]
current global_step: 100
current w1: [10.0, 4.9934072]

"""
-------------------------------------------------------------------------------- /lec4/opt4_7.py: --------------------------------------------------------------------------------
#coding:utf-8
#0 Import modules and generate the simulated data set
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
BATCH_SIZE = 30
seed = 2
#Seed-based random number generator
rdm = np.random.RandomState(seed)
#Return a 300x2 matrix of random numbers: 300 coordinate points (x0,x1), used as the input data set
X = rdm.randn(300,2)
#For each row of the 300x2 matrix X, assign 1 to Y if the sum of the squares of the two coordinates is less than 2, else 0
#These are the labels (correct answers) for the input data set
Y_ = [int(x0*x0 + x1*x1 <2) for (x0,x1) in X]
#Map each element of Y to 'red' for 1 and 'blue' otherwise, so the visualization is easy to read at a glance
Y_c = [['red' if y else 'blue'] for y in Y_]
#Reshape data set X and labels Y: -1 in the first slot means the row count follows from the second argument, which gives the column count; X becomes n rows by 2 columns, Y n rows by 1 column
X = np.vstack(X).reshape(-1,2)
Y_ = np.vstack(Y_).reshape(-1,1)
print X
print Y_
print Y_c
#plt.scatter plots each row's (x0,x1), i.e. column 0 against column 1 of X, colored by the matching Y_c value (c is short for color)
plt.scatter(X[:,0], X[:,1], c=np.squeeze(Y_c))
plt.show()

#定义神经网络的输入、参数和输出,定义前向传播过程 29 | def get_weight(shape, regularizer): 30 | w = tf.Variable(tf.random_normal(shape), dtype=tf.float32) 31 | tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w)) 32 | return w 33 | 34 | def get_bias(shape): 35 | b = tf.Variable(tf.constant(0.01, shape=shape)) 36 | return b 37 | 38 | x = tf.placeholder(tf.float32, shape=(None, 2)) 39 | y_ = tf.placeholder(tf.float32, shape=(None, 1)) 40 | 41 | w1 = get_weight([2,11], 0.01) 42 | b1 = get_bias([11]) 43 | y1 = tf.nn.relu(tf.matmul(x, w1)+b1) 44 | 45 | w2 = get_weight([11,1], 0.01) 46 | b2 = get_bias([1]) 47 | y = tf.matmul(y1, w2)+b2 48 | 49 | 50 | #定义损失函数 51 | loss_mse = tf.reduce_mean(tf.square(y-y_)) 52 | loss_total = loss_mse + tf.add_n(tf.get_collection('losses')) 53 | 54 | 55 | #定义反向传播方法:不含正则化 56 | train_step = tf.train.AdamOptimizer(0.0001).minimize(loss_mse) 57 | 58 | with tf.Session() as sess: 59 | init_op = tf.global_variables_initializer() 60 | sess.run(init_op) 61 | STEPS = 40000 62 | for i in range(STEPS): 63 | start = (i*BATCH_SIZE) % 300 64 | end = start + BATCH_SIZE 65 | sess.run(train_step, feed_dict={x:X[start:end], y_:Y_[start:end]}) 66 | if i % 2000 == 0: 67 | loss_mse_v = sess.run(loss_mse, feed_dict={x:X, y_:Y_}) 68 | print("After %d steps, loss is: %f" %(i, loss_mse_v)) 69 | #xx在-3到3之间以步长为0.01,yy在-3到3之间以步长0.01,生成二维网格坐标点 70 | xx, yy = np.mgrid[-3:3:.01, -3:3:.01] 71 | #将xx , yy拉直,并合并成一个2列的矩阵,得到一个网格坐标点的集合 72 | grid = np.c_[xx.ravel(), yy.ravel()] 73 | #将网格坐标点喂入神经网络 ,probs为输出 74 | probs = sess.run(y, feed_dict={x:grid}) 75 | #probs的shape调整成xx的样子 76 | probs = probs.reshape(xx.shape) 77 | print "w1:\n",sess.run(w1) 78 | print "b1:\n",sess.run(b1) 79 | print "w2:\n",sess.run(w2) 80 | print "b2:\n",sess.run(b2) 81 | 82 | plt.scatter(X[:,0], X[:,1], c=np.squeeze(Y_c)) 83 | plt.contour(xx, yy, probs, levels=[.5]) 84 | plt.show() 85 | 86 | 87 | 88 | #定义反向传播方法:包含正则化 89 | train_step = tf.train.AdamOptimizer(0.0001).minimize(loss_total) 90 | 91 | with 
tf.Session() as sess: 92 | init_op = tf.global_variables_initializer() 93 | sess.run(init_op) 94 | STEPS = 40000 95 | for i in range(STEPS): 96 | start = (i*BATCH_SIZE) % 300 97 | end = start + BATCH_SIZE 98 | sess.run(train_step, feed_dict={x: X[start:end], y_:Y_[start:end]}) 99 | if i % 2000 == 0: 100 | loss_v = sess.run(loss_total, feed_dict={x:X,y_:Y_}) 101 | print("After %d steps, loss is: %f" %(i, loss_v)) 102 | 103 | xx, yy = np.mgrid[-3:3:.01, -3:3:.01] 104 | grid = np.c_[xx.ravel(), yy.ravel()] 105 | probs = sess.run(y, feed_dict={x:grid}) 106 | probs = probs.reshape(xx.shape) 107 | print "w1:\n",sess.run(w1) 108 | print "b1:\n",sess.run(b1) 109 | print "w2:\n",sess.run(w2) 110 | print "b2:\n",sess.run(b2) 111 | 112 | plt.scatter(X[:,0], X[:,1], c=np.squeeze(Y_c)) 113 | plt.contour(xx, yy, probs, levels=[.5]) 114 | plt.show() 115 | 116 | -------------------------------------------------------------------------------- /lec4/opt4_8_backward.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | #0导入模块 ,生成模拟数据集 3 | import tensorflow as tf 4 | import numpy as np 5 | import matplotlib.pyplot as plt 6 | import opt4_8_generateds 7 | import opt4_8_forward 8 | 9 | STEPS = 40000 10 | BATCH_SIZE = 30 11 | LEARNING_RATE_BASE = 0.001 12 | LEARNING_RATE_DECAY = 0.999 13 | REGULARIZER = 0.01 14 | 15 | def backward(): 16 | x = tf.placeholder(tf.float32, shape=(None, 2)) 17 | y_ = tf.placeholder(tf.float32, shape=(None, 1)) 18 | 19 | X, Y_, Y_c = opt4_8_generateds.generateds() 20 | 21 | y = opt4_8_forward.forward(x, REGULARIZER) 22 | 23 | global_step = tf.Variable(0,trainable=False) 24 | 25 | learning_rate = tf.train.exponential_decay( 26 | LEARNING_RATE_BASE, 27 | global_step, 28 | 300/BATCH_SIZE, 29 | LEARNING_RATE_DECAY, 30 | staircase=True) 31 | 32 | 33 | #定义损失函数 34 | loss_mse = tf.reduce_mean(tf.square(y-y_)) 35 | loss_total = loss_mse + tf.add_n(tf.get_collection('losses')) 36 | 37 | #定义反向传播方法:包含正则化 38 | train_step 
= tf.train.AdamOptimizer(learning_rate).minimize(loss_total) 39 | 40 | with tf.Session() as sess: 41 | init_op = tf.global_variables_initializer() 42 | sess.run(init_op) 43 | for i in range(STEPS): 44 | start = (i*BATCH_SIZE) % 300 45 | end = start + BATCH_SIZE 46 | sess.run(train_step, feed_dict={x: X[start:end], y_:Y_[start:end]}) 47 | if i % 2000 == 0: 48 | loss_v = sess.run(loss_total, feed_dict={x:X,y_:Y_}) 49 | print("After %d steps, loss is: %f" %(i, loss_v)) 50 | 51 | xx, yy = np.mgrid[-3:3:.01, -3:3:.01] 52 | grid = np.c_[xx.ravel(), yy.ravel()] 53 | probs = sess.run(y, feed_dict={x:grid}) 54 | probs = probs.reshape(xx.shape) 55 | 56 | plt.scatter(X[:,0], X[:,1], c=np.squeeze(Y_c)) 57 | plt.contour(xx, yy, probs, levels=[.5]) 58 | plt.show() 59 | 60 | if __name__=='__main__': 61 | backward() 62 | -------------------------------------------------------------------------------- /lec4/opt4_8_forward.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | #0 Import modules 3 | import tensorflow as tf 4 | 5 | #Define the network's input, parameters and output, and the forward propagation process 6 | def get_weight(shape, regularizer): 7 | w = tf.Variable(tf.random_normal(shape), dtype=tf.float32) 8 | tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w)) 9 | return w 10 | 11 | def get_bias(shape): 12 | b = tf.Variable(tf.constant(0.01, shape=shape)) 13 | return b 14 | 15 | def forward(x, regularizer): 16 | 17 | w1 = get_weight([2,11], regularizer) 18 | b1 = get_bias([11]) 19 | y1 = tf.nn.relu(tf.matmul(x, w1) + b1) 20 | 21 | w2 = get_weight([11,1], regularizer) 22 | b2 = get_bias([1]) 23 | y = tf.matmul(y1, w2) + b2 24 | 25 | return y 26 | -------------------------------------------------------------------------------- /lec4/opt4_8_generateds.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | #0 Import modules and generate the simulated dataset 3 | import numpy as np 4 | import matplotlib.pyplot as plt 5 | seed = 2 6 | def
generateds(): 7 | #Generate random numbers based on seed 8 | rdm = np.random.RandomState(seed) 9 | #Return a 300x2 matrix of random numbers, i.e. 300 coordinate points (x0,x1), as the input dataset 10 | X = rdm.randn(300,2) 11 | #For each row of the 300x2 matrix X, assign label 1 if the sum of squares of its two coordinates is less than 2, otherwise 0 12 | #These serve as the labels (correct answers) of the input dataset 13 | Y_ = [int(x0*x0 + x1*x1 <2) for (x0,x1) in X] 14 | #Map each element of Y_: 1 becomes 'red', anything else 'blue', so the classes are easy to tell apart in the visualization 15 | Y_c = [['red' if y else 'blue'] for y in Y_] 16 | #Reshape dataset X and labels Y_: -1 as the first element means the row count is inferred, the second element gives the column count, so X has two columns and Y_ has one 17 | X = np.vstack(X).reshape(-1,2) 18 | Y_ = np.vstack(Y_).reshape(-1,1) 19 | 20 | return X, Y_, Y_c 21 | 22 | #print X 23 | #print Y_ 24 | #print Y_c 25 | #Use plt.scatter to plot each row's (x0,x1), taken from columns 0 and 1 of X, colored by the row's Y_c value (c is short for color) 26 | #plt.scatter(X[:,0], X[:,1], c=np.squeeze(Y_c)) 27 | #plt.show() 28 | -------------------------------------------------------------------------------- /lec4/pic/ReLU.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/eq-sigmod.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/eq-tanh.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/in-eq-01.svg:
-------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/in-eq-02.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/in-eq-03.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/in-eq-05.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/in-eq-06.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/in-eq-07.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/in-eq-08.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/in-eq-09.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/in-eq-loss.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/loss.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /lec4/pic/tanh.svg: -------------------------------------------------------------------------------- 1 |
-------------------------------------------------------------------------------- /lec5/mnist_backward.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | from tensorflow.examples.tutorials.mnist import input_data 3 | import mnist_forward 4 | import os 5 | 6 | BATCH_SIZE = 200 7 | LEARNING_RATE_BASE = 0.1 8 | LEARNING_RATE_DECAY = 0.99 9 | REGULARIZER = 0.0001 10 | STEPS = 50000 11 | MOVING_AVERAGE_DECAY = 0.99 12 | MODEL_SAVE_PATH="./model/" 13 | MODEL_NAME="mnist_model" 14 | 15 | 16 | def backward(mnist): 17 | 18 | x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE]) 19 | y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE]) 20 | y = mnist_forward.forward(x, REGULARIZER) 21 | global_step = tf.Variable(0, trainable=False) 22 | 23 | ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1)) 24 | cem = tf.reduce_mean(ce) 25 | loss = cem + tf.add_n(tf.get_collection('losses')) 26 | 27 | learning_rate = tf.train.exponential_decay( 28 | LEARNING_RATE_BASE, 29 | global_step, 30 | mnist.train.num_examples / BATCH_SIZE, 31 | LEARNING_RATE_DECAY, 32 | staircase=True) 33 | 34 | train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step) 35 | 36 | ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step) 37 | ema_op = ema.apply(tf.trainable_variables()) 38 | with tf.control_dependencies([train_step, ema_op]): 39 | train_op = tf.no_op(name='train') 40 | 41 | saver = tf.train.Saver() 42 | 43 | with tf.Session() as sess: 44 | init_op = tf.global_variables_initializer() 45 | sess.run(init_op) 46 | 47 | for i in range(STEPS): 48 | xs, ys =
mnist.train.next_batch(BATCH_SIZE) 49 | _, loss_value, step = sess.run([train_op, loss, global_step], feed_dict={x: xs, y_: ys}) 50 | if i % 1000 == 0: 51 | print("After %d training step(s), loss on training batch is %g." % (step, loss_value)) 52 | saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=global_step) 53 | 54 | 55 | def main(): 56 | mnist = input_data.read_data_sets("./data/", one_hot=True) 57 | backward(mnist) 58 | 59 | if __name__ == '__main__': 60 | main() 61 | 62 | 63 | -------------------------------------------------------------------------------- /lec5/mnist_forward.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | 3 | INPUT_NODE = 784 4 | OUTPUT_NODE = 10 5 | LAYER1_NODE = 500 6 | 7 | def get_weight(shape, regularizer): 8 | w = tf.Variable(tf.truncated_normal(shape,stddev=0.1)) 9 | if regularizer != None: tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w)) 10 | return w 11 | 12 | 13 | def get_bias(shape): 14 | b = tf.Variable(tf.zeros(shape)) 15 | return b 16 | 17 | def forward(x, regularizer): 18 | w1 = get_weight([INPUT_NODE, LAYER1_NODE], regularizer) 19 | b1 = get_bias([LAYER1_NODE]) 20 | y1 = tf.nn.relu(tf.matmul(x, w1) + b1) 21 | 22 | w2 = get_weight([LAYER1_NODE, OUTPUT_NODE], regularizer) 23 | b2 = get_bias([OUTPUT_NODE]) 24 | y = tf.matmul(y1, w2) + b2 25 | return y 26 | -------------------------------------------------------------------------------- /lec5/mnist_test.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | import time 3 | import tensorflow as tf 4 | from tensorflow.examples.tutorials.mnist import input_data 5 | import mnist_forward 6 | import mnist_backward 7 | TEST_INTERVAL_SECS = 5 8 | 9 | def test(mnist): 10 | with tf.Graph().as_default() as g: 11 | x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE]) 12 | y_ = tf.placeholder(tf.float32, 
[None, mnist_forward.OUTPUT_NODE]) 13 | y = mnist_forward.forward(x, None) 14 | 15 | ema = tf.train.ExponentialMovingAverage(mnist_backward.MOVING_AVERAGE_DECAY) 16 | ema_restore = ema.variables_to_restore() 17 | saver = tf.train.Saver(ema_restore) 18 | 19 | correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) 20 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 21 | 22 | while True: 23 | with tf.Session() as sess: 24 | ckpt = tf.train.get_checkpoint_state(mnist_backward.MODEL_SAVE_PATH) 25 | if ckpt and ckpt.model_checkpoint_path: 26 | saver.restore(sess, ckpt.model_checkpoint_path) 27 | global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1] 28 | accuracy_score = sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}) 29 | print("After %s training step(s), test accuracy = %g" % (global_step, accuracy_score)) 30 | else: 31 | print('No checkpoint file found') 32 | return 33 | time.sleep(TEST_INTERVAL_SECS) 34 | 35 | def main(): 36 | mnist = input_data.read_data_sets("./data/", one_hot=True) 37 | test(mnist) 38 | 39 | if __name__ == '__main__': 40 | main() 41 | -------------------------------------------------------------------------------- /lec6/fc2/mnist_backward.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | from tensorflow.examples.tutorials.mnist import input_data 3 | import mnist_forward 4 | import os 5 | 6 | BATCH_SIZE = 200 7 | LEARNING_RATE_BASE = 0.1 8 | LEARNING_RATE_DECAY = 0.99 9 | REGULARIZER = 0.0001 10 | STEPS = 50000 11 | MOVING_AVERAGE_DECAY = 0.99 12 | MODEL_SAVE_PATH="./model/" 13 | MODEL_NAME="mnist_model" 14 | 15 | 16 | def backward(mnist): 17 | 18 | x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE]) 19 | y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE]) 20 | y = mnist_forward.forward(x, REGULARIZER) 21 | global_step = tf.Variable(0, trainable=False) 22 | 23 | ce = 
tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1)) 24 | cem = tf.reduce_mean(ce) 25 | loss = cem + tf.add_n(tf.get_collection('losses')) 26 | 27 | learning_rate = tf.train.exponential_decay( 28 | LEARNING_RATE_BASE, 29 | global_step, 30 | mnist.train.num_examples / BATCH_SIZE, 31 | LEARNING_RATE_DECAY, 32 | staircase=True) 33 | 34 | train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step) 35 | 36 | ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step) 37 | ema_op = ema.apply(tf.trainable_variables()) 38 | with tf.control_dependencies([train_step, ema_op]): 39 | train_op = tf.no_op(name='train') 40 | 41 | saver = tf.train.Saver() 42 | 43 | with tf.Session() as sess: 44 | init_op = tf.global_variables_initializer() 45 | sess.run(init_op) 46 | 47 | ckpt = tf.train.get_checkpoint_state(MODEL_SAVE_PATH) 48 | if ckpt and ckpt.model_checkpoint_path: 49 | saver.restore(sess, ckpt.model_checkpoint_path) 50 | 51 | for i in range(STEPS): 52 | xs, ys = mnist.train.next_batch(BATCH_SIZE) 53 | _, loss_value, step = sess.run([train_op, loss, global_step], feed_dict={x: xs, y_: ys}) 54 | if i % 1000 == 0: 55 | print("After %d training step(s), loss on training batch is %g." 
% (step, loss_value)) 56 | saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=global_step) 57 | 58 | 59 | def main(): 60 | mnist = input_data.read_data_sets("./data/", one_hot=True) 61 | backward(mnist) 62 | 63 | if __name__ == '__main__': 64 | main() 65 | 66 | 67 | -------------------------------------------------------------------------------- /lec6/fc2/mnist_forward.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | 3 | INPUT_NODE = 784 4 | OUTPUT_NODE = 10 5 | LAYER1_NODE = 500 6 | 7 | def get_weight(shape, regularizer): 8 | w = tf.Variable(tf.truncated_normal(shape,stddev=0.1)) 9 | if regularizer != None: tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w)) 10 | return w 11 | 12 | 13 | def get_bias(shape): 14 | b = tf.Variable(tf.zeros(shape)) 15 | return b 16 | 17 | def forward(x, regularizer): 18 | w1 = get_weight([INPUT_NODE, LAYER1_NODE], regularizer) 19 | b1 = get_bias([LAYER1_NODE]) 20 | y1 = tf.nn.relu(tf.matmul(x, w1) + b1) 21 | 22 | w2 = get_weight([LAYER1_NODE, OUTPUT_NODE], regularizer) 23 | b2 = get_bias([OUTPUT_NODE]) 24 | y = tf.matmul(y1, w2) + b2 25 | return y 26 | -------------------------------------------------------------------------------- /lec6/fc2/mnist_test.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | import time 3 | import tensorflow as tf 4 | from tensorflow.examples.tutorials.mnist import input_data 5 | import mnist_forward 6 | import mnist_backward 7 | TEST_INTERVAL_SECS = 5 8 | 9 | def test(mnist): 10 | with tf.Graph().as_default() as g: 11 | x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE]) 12 | y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE]) 13 | y = mnist_forward.forward(x, None) 14 | 15 | ema = tf.train.ExponentialMovingAverage(mnist_backward.MOVING_AVERAGE_DECAY) 16 | ema_restore = ema.variables_to_restore() 17 | saver 
= tf.train.Saver(ema_restore) 18 | 19 | correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) 20 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 21 | 22 | while True: 23 | with tf.Session() as sess: 24 | ckpt = tf.train.get_checkpoint_state(mnist_backward.MODEL_SAVE_PATH) 25 | if ckpt and ckpt.model_checkpoint_path: 26 | saver.restore(sess, ckpt.model_checkpoint_path) 27 | global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1] 28 | accuracy_score = sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}) 29 | print("After %s training step(s), test accuracy = %g" % (global_step, accuracy_score)) 30 | else: 31 | print('No checkpoint file found') 32 | return 33 | time.sleep(TEST_INTERVAL_SECS) 34 | 35 | def main(): 36 | mnist = input_data.read_data_sets("./data/", one_hot=True) 37 | test(mnist) 38 | 39 | if __name__ == '__main__': 40 | main() 41 | -------------------------------------------------------------------------------- /lec6/fc3/mnist_app.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | 3 | import tensorflow as tf 4 | import numpy as np 5 | from PIL import Image 6 | import mnist_backward 7 | import mnist_forward 8 | 9 | def restore_model(testPicArr): 10 | with tf.Graph().as_default() as tg: 11 | x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE]) 12 | y = mnist_forward.forward(x, None) 13 | preValue = tf.argmax(y, 1) 14 | 15 | variable_averages = tf.train.ExponentialMovingAverage(mnist_backward.MOVING_AVERAGE_DECAY) 16 | variables_to_restore = variable_averages.variables_to_restore() 17 | saver = tf.train.Saver(variables_to_restore) 18 | 19 | with tf.Session() as sess: 20 | ckpt = tf.train.get_checkpoint_state(mnist_backward.MODEL_SAVE_PATH) 21 | if ckpt and ckpt.model_checkpoint_path: 22 | saver.restore(sess, ckpt.model_checkpoint_path) 23 | 24 | preValue = sess.run(preValue, feed_dict={x:testPicArr}) 25 | return preValue 
26 | else: 27 | print("No checkpoint file found") 28 | return -1 29 | 30 | def pre_pic(picName): 31 | img = Image.open(picName) 32 | reIm = img.resize((28,28), Image.ANTIALIAS) 33 | im_arr = np.array(reIm.convert('L')) 34 | threshold = 50 35 | for i in range(28): 36 | for j in range(28): 37 | im_arr[i][j] = 255 - im_arr[i][j] 38 | if (im_arr[i][j] < threshold): 39 | im_arr[i][j] = 0 40 | else: im_arr[i][j] = 255 41 | 42 | nm_arr = im_arr.reshape([1, 784]) 43 | nm_arr = nm_arr.astype(np.float32) 44 | img_ready = np.multiply(nm_arr, 1.0/255.0) 45 | 46 | return img_ready 47 | 48 | def application(): 49 | testNum = input("input the number of test pictures:") 50 | for i in range(testNum): 51 | testPic = raw_input("the path of test picture:") 52 | testPicArr = pre_pic(testPic) 53 | preValue = restore_model(testPicArr) 54 | print("The prediction number is: %s" % preValue) 55 | 56 | def main(): 57 | application() 58 | 59 | if __name__ == '__main__': 60 | main() 61 | -------------------------------------------------------------------------------- /lec6/fc3/mnist_backward.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | from tensorflow.examples.tutorials.mnist import input_data 3 | import mnist_forward 4 | import os 5 | 6 | BATCH_SIZE = 200 7 | LEARNING_RATE_BASE = 0.1 8 | LEARNING_RATE_DECAY = 0.99 9 | REGULARIZER = 0.0001 10 | STEPS = 50000 11 | MOVING_AVERAGE_DECAY = 0.99 12 | MODEL_SAVE_PATH="./model/" 13 | MODEL_NAME="mnist_model" 14 | 15 | 16 | def backward(mnist): 17 | 18 | x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE]) 19 | y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE]) 20 | y = mnist_forward.forward(x, REGULARIZER) 21 | global_step = tf.Variable(0, trainable=False) 22 | 23 | ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1)) 24 | cem = tf.reduce_mean(ce) 25 | loss = cem + tf.add_n(tf.get_collection('losses')) 26 | 27 |
learning_rate = tf.train.exponential_decay( 28 | LEARNING_RATE_BASE, 29 | global_step, 30 | mnist.train.num_examples / BATCH_SIZE, 31 | LEARNING_RATE_DECAY, 32 | staircase=True) 33 | 34 | train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step) 35 | 36 | ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step) 37 | ema_op = ema.apply(tf.trainable_variables()) 38 | with tf.control_dependencies([train_step, ema_op]): 39 | train_op = tf.no_op(name='train') 40 | 41 | saver = tf.train.Saver() 42 | 43 | with tf.Session() as sess: 44 | init_op = tf.global_variables_initializer() 45 | sess.run(init_op) 46 | 47 | ckpt = tf.train.get_checkpoint_state(MODEL_SAVE_PATH) 48 | if ckpt and ckpt.model_checkpoint_path: 49 | saver.restore(sess, ckpt.model_checkpoint_path) 50 | 51 | for i in range(STEPS): 52 | xs, ys = mnist.train.next_batch(BATCH_SIZE) 53 | _, loss_value, step = sess.run([train_op, loss, global_step], feed_dict={x: xs, y_: ys}) 54 | if i % 1000 == 0: 55 | print("After %d training step(s), loss on training batch is %g." 
% (step, loss_value)) 56 | saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=global_step) 57 | 58 | 59 | def main(): 60 | mnist = input_data.read_data_sets("./data/", one_hot=True) 61 | backward(mnist) 62 | 63 | if __name__ == '__main__': 64 | main() 65 | 66 | 67 | -------------------------------------------------------------------------------- /lec6/fc3/mnist_forward.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | 3 | INPUT_NODE = 784 4 | OUTPUT_NODE = 10 5 | LAYER1_NODE = 500 6 | 7 | def get_weight(shape, regularizer): 8 | w = tf.Variable(tf.truncated_normal(shape,stddev=0.1)) 9 | if regularizer != None: tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w)) 10 | return w 11 | 12 | 13 | def get_bias(shape): 14 | b = tf.Variable(tf.zeros(shape)) 15 | return b 16 | 17 | def forward(x, regularizer): 18 | w1 = get_weight([INPUT_NODE, LAYER1_NODE], regularizer) 19 | b1 = get_bias([LAYER1_NODE]) 20 | y1 = tf.nn.relu(tf.matmul(x, w1) + b1) 21 | 22 | w2 = get_weight([LAYER1_NODE, OUTPUT_NODE], regularizer) 23 | b2 = get_bias([OUTPUT_NODE]) 24 | y = tf.matmul(y1, w2) + b2 25 | return y 26 | -------------------------------------------------------------------------------- /lec6/fc3/mnist_test.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | import time 3 | import tensorflow as tf 4 | from tensorflow.examples.tutorials.mnist import input_data 5 | import mnist_forward 6 | import mnist_backward 7 | TEST_INTERVAL_SECS = 5 8 | 9 | def test(mnist): 10 | with tf.Graph().as_default() as g: 11 | x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE]) 12 | y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE]) 13 | y = mnist_forward.forward(x, None) 14 | 15 | ema = tf.train.ExponentialMovingAverage(mnist_backward.MOVING_AVERAGE_DECAY) 16 | ema_restore = ema.variables_to_restore() 17 | saver 
= tf.train.Saver(ema_restore) 18 | 19 | correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) 20 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 21 | 22 | while True: 23 | with tf.Session() as sess: 24 | ckpt = tf.train.get_checkpoint_state(mnist_backward.MODEL_SAVE_PATH) 25 | if ckpt and ckpt.model_checkpoint_path: 26 | saver.restore(sess, ckpt.model_checkpoint_path) 27 | global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1] 28 | accuracy_score = sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}) 29 | print("After %s training step(s), test accuracy = %g" % (global_step, accuracy_score)) 30 | else: 31 | print('No checkpoint file found') 32 | return 33 | time.sleep(TEST_INTERVAL_SECS) 34 | 35 | def main(): 36 | mnist = input_data.read_data_sets("./data/", one_hot=True) 37 | test(mnist) 38 | 39 | if __name__ == '__main__': 40 | main() 41 | -------------------------------------------------------------------------------- /lec6/fc4/mnist_app.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | 3 | import tensorflow as tf 4 | import numpy as np 5 | from PIL import Image 6 | import mnist_backward 7 | import mnist_forward 8 | 9 | def restore_model(testPicArr): 10 | with tf.Graph().as_default() as tg: 11 | x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE]) 12 | y = mnist_forward.forward(x, None) 13 | preValue = tf.argmax(y, 1) 14 | 15 | variable_averages = tf.train.ExponentialMovingAverage(mnist_backward.MOVING_AVERAGE_DECAY) 16 | variables_to_restore = variable_averages.variables_to_restore() 17 | saver = tf.train.Saver(variables_to_restore) 18 | 19 | with tf.Session() as sess: 20 | ckpt = tf.train.get_checkpoint_state(mnist_backward.MODEL_SAVE_PATH) 21 | if ckpt and ckpt.model_checkpoint_path: 22 | saver.restore(sess, ckpt.model_checkpoint_path) 23 | 24 | preValue = sess.run(preValue, feed_dict={x:testPicArr}) 25 | return preValue 
26 | else: 27 | print("No checkpoint file found") 28 | return -1 29 | 30 | def pre_pic(picName): 31 | img = Image.open(picName) 32 | reIm = img.resize((28,28), Image.ANTIALIAS) 33 | im_arr = np.array(reIm.convert('L')) 34 | threshold = 50 35 | for i in range(28): 36 | for j in range(28): 37 | im_arr[i][j] = 255 - im_arr[i][j] 38 | if (im_arr[i][j] < threshold): 39 | im_arr[i][j] = 0 40 | else: im_arr[i][j] = 255 41 | 42 | nm_arr = im_arr.reshape([1, 784]) 43 | nm_arr = nm_arr.astype(np.float32) 44 | img = np.multiply(nm_arr, 1.0/255.0) 45 | 46 | return nm_arr #img 47 | 48 | def application(): 49 | testNum = input("input the number of test pictures:") 50 | for i in range(testNum): 51 | testPic = raw_input("the path of test picture:") 52 | testPicArr = pre_pic(testPic) 53 | preValue = restore_model(testPicArr) 54 | print("The prediction number is: %s" % preValue) 55 | 56 | def main(): 57 | application() 58 | 59 | if __name__ == '__main__': 60 | main() 61 | -------------------------------------------------------------------------------- /lec6/fc4/mnist_backward.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | from tensorflow.examples.tutorials.mnist import input_data 3 | import mnist_forward 4 | import os 5 | import mnist_generateds#1 6 | 7 | BATCH_SIZE = 200 8 | LEARNING_RATE_BASE = 0.1 9 | LEARNING_RATE_DECAY = 0.99 10 | REGULARIZER = 0.0001 11 | STEPS = 50000 12 | MOVING_AVERAGE_DECAY = 0.99 13 | MODEL_SAVE_PATH="./model/" 14 | MODEL_NAME="mnist_model" 15 | train_num_examples = 60000#2 16 | 17 | def backward(): 18 | 19 | x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE]) 20 | y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE]) 21 | y = mnist_forward.forward(x, REGULARIZER) 22 | global_step = tf.Variable(0, trainable=False) 23 | 24 | ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1)) 25 | cem = tf.reduce_mean(ce) 26 | loss = cem +
tf.add_n(tf.get_collection('losses')) 27 | 28 | learning_rate = tf.train.exponential_decay( 29 | LEARNING_RATE_BASE, 30 | global_step, 31 | train_num_examples / BATCH_SIZE, 32 | LEARNING_RATE_DECAY, 33 | staircase=True) 34 | 35 | train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step) 36 | 37 | ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step) 38 | ema_op = ema.apply(tf.trainable_variables()) 39 | with tf.control_dependencies([train_step, ema_op]): 40 | train_op = tf.no_op(name='train') 41 | 42 | saver = tf.train.Saver() 43 | img_batch, label_batch = mnist_generateds.get_tfrecord(BATCH_SIZE, isTrain=True)#3 44 | 45 | with tf.Session() as sess: 46 | init_op = tf.global_variables_initializer() 47 | sess.run(init_op) 48 | 49 | ckpt = tf.train.get_checkpoint_state(MODEL_SAVE_PATH) 50 | if ckpt and ckpt.model_checkpoint_path: 51 | saver.restore(sess, ckpt.model_checkpoint_path) 52 | 53 | coord = tf.train.Coordinator()#4 54 | threads = tf.train.start_queue_runners(sess=sess, coord=coord)#5 55 | 56 | for i in range(STEPS): 57 | xs, ys = sess.run([img_batch, label_batch])#6 58 | _, loss_value, step = sess.run([train_op, loss, global_step], feed_dict={x: xs, y_: ys}) 59 | if i % 1000 == 0: 60 | print("After %d training step(s), loss on training batch is %g." 
% (step, loss_value)) 61 | saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=global_step) 62 | 63 | coord.request_stop()#7 64 | coord.join(threads)#8 65 | 66 | 67 | def main(): 68 | backward()#9 69 | 70 | if __name__ == '__main__': 71 | main() 72 | 73 | 74 | -------------------------------------------------------------------------------- /lec6/fc4/mnist_forward.py: -------------------------------------------------------------------------------- 1 | import tensorflow as tf 2 | 3 | INPUT_NODE = 784 4 | OUTPUT_NODE = 10 5 | LAYER1_NODE = 500 6 | 7 | def get_weight(shape, regularizer): 8 | w = tf.Variable(tf.truncated_normal(shape,stddev=0.1)) 9 | if regularizer != None: tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w)) 10 | return w 11 | 12 | def get_bias(shape): 13 | b = tf.Variable(tf.zeros(shape)) 14 | return b 15 | 16 | def forward(x, regularizer): 17 | w1 = get_weight([INPUT_NODE, LAYER1_NODE], regularizer) 18 | b1 = get_bias([LAYER1_NODE]) 19 | y1 = tf.nn.relu(tf.matmul(x, w1) + b1) 20 | 21 | w2 = get_weight([LAYER1_NODE, OUTPUT_NODE], regularizer) 22 | b2 = get_bias([OUTPUT_NODE]) 23 | y = tf.matmul(y1, w2) + b2 24 | return y 25 | -------------------------------------------------------------------------------- /lec6/fc4/mnist_generateds.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | import tensorflow as tf 3 | import numpy as np 4 | from PIL import Image 5 | import os 6 | 7 | image_train_path='./mnist_data_jpg/mnist_train_jpg_60000/' 8 | label_train_path='./mnist_data_jpg/mnist_train_jpg_60000.txt' 9 | tfRecord_train='./data/mnist_train.tfrecords' 10 | image_test_path='./mnist_data_jpg/mnist_test_jpg_10000/' 11 | label_test_path='./mnist_data_jpg/mnist_test_jpg_10000.txt' 12 | tfRecord_test='./data/mnist_test.tfrecords' 13 | data_path='./data' 14 | resize_height = 28 15 | resize_width = 28 16 | 17 | def write_tfRecord(tfRecordName, image_path, 
label_path): 18 | writer = tf.python_io.TFRecordWriter(tfRecordName) 19 | num_pic = 0 20 | f = open(label_path, 'r') 21 | contents = f.readlines() 22 | f.close() 23 | for content in contents: 24 | value = content.split() 25 | img_path = image_path + value[0] 26 | img = Image.open(img_path) 27 | img_raw = img.tobytes() 28 | labels = [0] * 10 29 | labels[int(value[1])] = 1 30 | 31 | example = tf.train.Example(features=tf.train.Features(feature={ 32 | 'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw])), 33 | 'label': tf.train.Feature(int64_list=tf.train.Int64List(value=labels)) 34 | })) 35 | writer.write(example.SerializeToString()) 36 | num_pic += 1 37 | print("the number of pictures: %d" % num_pic) 38 | writer.close() 39 | print("write tfrecord successful") 40 | 41 | def generate_tfRecord(): 42 | isExists = os.path.exists(data_path) 43 | if not isExists: 44 | os.makedirs(data_path) 45 | print("The directory was created successfully") 46 | else: 47 | print("directory already exists") 48 | write_tfRecord(tfRecord_train, image_train_path, label_train_path) 49 | write_tfRecord(tfRecord_test, image_test_path, label_test_path) 50 | 51 | def read_tfRecord(tfRecord_path): 52 | filename_queue = tf.train.string_input_producer([tfRecord_path], shuffle=True) 53 | reader = tf.TFRecordReader() 54 | _, serialized_example = reader.read(filename_queue) 55 | features = tf.parse_single_example(serialized_example, 56 | features={ 57 | 'label': tf.FixedLenFeature([10], tf.int64), 58 | 'img_raw': tf.FixedLenFeature([], tf.string) 59 | }) 60 | img = tf.decode_raw(features['img_raw'], tf.uint8) 61 | img.set_shape([784]) 62 | img = tf.cast(img, tf.float32) * (1.
/ 255) 63 | label = tf.cast(features['label'], tf.float32) 64 | return img, label 65 | 66 | def get_tfrecord(num, isTrain=True): 67 | if isTrain: 68 | tfRecord_path = tfRecord_train 69 | else: 70 | tfRecord_path = tfRecord_test 71 | img, label = read_tfRecord(tfRecord_path) 72 | img_batch, label_batch = tf.train.shuffle_batch([img, label], 73 | batch_size = num, 74 | num_threads = 2, 75 | capacity = 1000, 76 | min_after_dequeue = 700) 77 | return img_batch, label_batch 78 | 79 | def main(): 80 | generate_tfRecord() 81 | 82 | if __name__ == '__main__': 83 | main() 84 | -------------------------------------------------------------------------------- /lec6/fc4/mnist_test.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | import time 3 | import tensorflow as tf 4 | from tensorflow.examples.tutorials.mnist import input_data 5 | import mnist_forward 6 | import mnist_backward 7 | import mnist_generateds 8 | TEST_INTERVAL_SECS = 5 9 | TEST_NUM = 10000#1 10 | 11 | def test(): 12 | with tf.Graph().as_default() as g: 13 | x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE]) 14 | y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE]) 15 | y = mnist_forward.forward(x, None) 16 | 17 | ema = tf.train.ExponentialMovingAverage(mnist_backward.MOVING_AVERAGE_DECAY) 18 | ema_restore = ema.variables_to_restore() 19 | saver = tf.train.Saver(ema_restore) 20 | 21 | correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) 22 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 23 | 24 | img_batch, label_batch = mnist_generateds.get_tfrecord(TEST_NUM, isTrain=False)#2 25 | 26 | while True: 27 | with tf.Session() as sess: 28 | ckpt = tf.train.get_checkpoint_state(mnist_backward.MODEL_SAVE_PATH) 29 | if ckpt and ckpt.model_checkpoint_path: 30 | saver.restore(sess, ckpt.model_checkpoint_path) 31 | global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1] 32 | 33 | coord = 
tf.train.Coordinator()#3 34 | threads = tf.train.start_queue_runners(sess=sess, coord=coord)#4 35 | 36 | xs, ys = sess.run([img_batch, label_batch])#5 37 | 38 | accuracy_score = sess.run(accuracy, feed_dict={x: xs, y_: ys}) 39 | 40 | print("After %s training step(s), test accuracy = %g" % (global_step, accuracy_score)) 41 | 42 | coord.request_stop()#6 43 | coord.join(threads)#7 44 | 45 | else: 46 | print('No checkpoint file found') 47 | return 48 | time.sleep(TEST_INTERVAL_SECS) 49 | 50 | def main(): 51 | test()#8 52 | 53 | if __name__ == '__main__': 54 | main() 55 | -------------------------------------------------------------------------------- /lec7/mnist_lenet5_backward.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | import tensorflow as tf 3 | from tensorflow.examples.tutorials.mnist import input_data 4 | import mnist_lenet5_forward 5 | import os 6 | import numpy as np 7 | 8 | BATCH_SIZE = 100 9 | LEARNING_RATE_BASE = 0.005 10 | LEARNING_RATE_DECAY = 0.99 11 | REGULARIZER = 0.0001 12 | STEPS = 50000 13 | MOVING_AVERAGE_DECAY = 0.99 14 | MODEL_SAVE_PATH="./model/" 15 | MODEL_NAME="mnist_model" 16 | 17 | def backward(mnist): 18 | x = tf.placeholder(tf.float32,[ 19 | BATCH_SIZE, 20 | mnist_lenet5_forward.IMAGE_SIZE, 21 | mnist_lenet5_forward.IMAGE_SIZE, 22 | mnist_lenet5_forward.NUM_CHANNELS]) 23 | y_ = tf.placeholder(tf.float32, [None, mnist_lenet5_forward.OUTPUT_NODE]) 24 | y = mnist_lenet5_forward.forward(x,True, REGULARIZER) 25 | global_step = tf.Variable(0, trainable=False) 26 | 27 | ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1)) 28 | cem = tf.reduce_mean(ce) 29 | loss = cem + tf.add_n(tf.get_collection('losses')) 30 | 31 | learning_rate = tf.train.exponential_decay( 32 | LEARNING_RATE_BASE, 33 | global_step, 34 | mnist.train.num_examples / BATCH_SIZE, 35 | LEARNING_RATE_DECAY, 36 | staircase=True) 37 | 38 | train_step = 
tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step) 39 | 40 | ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step) 41 | ema_op = ema.apply(tf.trainable_variables()) 42 | with tf.control_dependencies([train_step, ema_op]): 43 | train_op = tf.no_op(name='train') 44 | 45 | saver = tf.train.Saver() 46 | 47 | with tf.Session() as sess: 48 | init_op = tf.global_variables_initializer() 49 | sess.run(init_op) 50 | 51 | ckpt = tf.train.get_checkpoint_state(MODEL_SAVE_PATH) 52 | if ckpt and ckpt.model_checkpoint_path: 53 | saver.restore(sess, ckpt.model_checkpoint_path) 54 | 55 | for i in range(STEPS): 56 | xs, ys = mnist.train.next_batch(BATCH_SIZE) 57 | reshaped_xs = np.reshape(xs,( 58 | BATCH_SIZE, 59 | mnist_lenet5_forward.IMAGE_SIZE, 60 | mnist_lenet5_forward.IMAGE_SIZE, 61 | mnist_lenet5_forward.NUM_CHANNELS)) 62 | _, loss_value, step = sess.run([train_op, loss, global_step], feed_dict={x: reshaped_xs, y_: ys}) 63 | if i % 100 == 0: 64 | print("After %d training step(s), loss on training batch is %g." 
% (step, loss_value)) 65 | saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=global_step) 66 | 67 | def main(): 68 | mnist = input_data.read_data_sets("./data/", one_hot=True) 69 | backward(mnist) 70 | 71 | if __name__ == '__main__': 72 | main() 73 | 74 | 75 | -------------------------------------------------------------------------------- /lec7/mnist_lenet5_forward.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | import tensorflow as tf 3 | IMAGE_SIZE = 28 4 | NUM_CHANNELS = 1 5 | CONV1_SIZE = 5 6 | CONV1_KERNEL_NUM = 32 7 | CONV2_SIZE = 5 8 | CONV2_KERNEL_NUM = 64 9 | FC_SIZE = 512 10 | OUTPUT_NODE = 10 11 | 12 | def get_weight(shape, regularizer): 13 | w = tf.Variable(tf.truncated_normal(shape,stddev=0.1)) 14 | if regularizer is not None: tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w)) 15 | return w 16 | 17 | def get_bias(shape): 18 | b = tf.Variable(tf.zeros(shape)) 19 | return b 20 | 21 | def conv2d(x,w): 22 | return tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME') 23 | 24 | def max_pool_2x2(x): 25 | return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') 26 | 27 | def forward(x, train, regularizer): 28 | conv1_w = get_weight([CONV1_SIZE, CONV1_SIZE, NUM_CHANNELS, CONV1_KERNEL_NUM], regularizer) 29 | conv1_b = get_bias([CONV1_KERNEL_NUM]) 30 | conv1 = conv2d(x, conv1_w) 31 | relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_b)) 32 | pool1 = max_pool_2x2(relu1) 33 | 34 | conv2_w = get_weight([CONV2_SIZE, CONV2_SIZE, CONV1_KERNEL_NUM, CONV2_KERNEL_NUM],regularizer) 35 | conv2_b = get_bias([CONV2_KERNEL_NUM]) 36 | conv2 = conv2d(pool1, conv2_w) 37 | relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_b)) 38 | pool2 = max_pool_2x2(relu2) 39 | 40 | pool_shape = pool2.get_shape().as_list() 41 | nodes = pool_shape[1] * pool_shape[2] * pool_shape[3] 42 | reshaped = tf.reshape(pool2, [pool_shape[0], nodes]) 43 | 44 | fc1_w = 
get_weight([nodes, FC_SIZE], regularizer) 45 | fc1_b = get_bias([FC_SIZE]) 46 | fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_w) + fc1_b) 47 | if train: fc1 = tf.nn.dropout(fc1, 0.5) 48 | 49 | fc2_w = get_weight([FC_SIZE, OUTPUT_NODE], regularizer) 50 | fc2_b = get_bias([OUTPUT_NODE]) 51 | y = tf.matmul(fc1, fc2_w) + fc2_b 52 | return y 53 | -------------------------------------------------------------------------------- /lec7/mnist_lenet5_test.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | import time 3 | import tensorflow as tf 4 | from tensorflow.examples.tutorials.mnist import input_data 5 | import mnist_lenet5_forward 6 | import mnist_lenet5_backward 7 | import numpy as np 8 | 9 | TEST_INTERVAL_SECS = 5 10 | 11 | def test(mnist): 12 | with tf.Graph().as_default() as g: 13 | x = tf.placeholder(tf.float32,[ 14 | mnist.test.num_examples, 15 | mnist_lenet5_forward.IMAGE_SIZE, 16 | mnist_lenet5_forward.IMAGE_SIZE, 17 | mnist_lenet5_forward.NUM_CHANNELS]) 18 | y_ = tf.placeholder(tf.float32, [None, mnist_lenet5_forward.OUTPUT_NODE]) 19 | y = mnist_lenet5_forward.forward(x,False,None) 20 | 21 | ema = tf.train.ExponentialMovingAverage(mnist_lenet5_backward.MOVING_AVERAGE_DECAY) 22 | ema_restore = ema.variables_to_restore() 23 | saver = tf.train.Saver(ema_restore) 24 | 25 | correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) 26 | accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 27 | 28 | while True: 29 | with tf.Session() as sess: 30 | ckpt = tf.train.get_checkpoint_state(mnist_lenet5_backward.MODEL_SAVE_PATH) 31 | if ckpt and ckpt.model_checkpoint_path: 32 | saver.restore(sess, ckpt.model_checkpoint_path) 33 | 34 | global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1] 35 | reshaped_x = np.reshape(mnist.test.images,( 36 | mnist.test.num_examples, 37 | mnist_lenet5_forward.IMAGE_SIZE, 38 | mnist_lenet5_forward.IMAGE_SIZE, 39 | mnist_lenet5_forward.NUM_CHANNELS)) 40 | 
accuracy_score = sess.run(accuracy, feed_dict={x:reshaped_x,y_:mnist.test.labels}) 41 | print("After %s training step(s), test accuracy = %g" % (global_step, accuracy_score)) 42 | else: 43 | print('No checkpoint file found') 44 | return 45 | time.sleep(TEST_INTERVAL_SECS) 46 | 47 | def main(): 48 | mnist = input_data.read_data_sets("./data/", one_hot=True) 49 | test(mnist) 50 | 51 | if __name__ == '__main__': 52 | main() 53 | -------------------------------------------------------------------------------- /lec8/app.py: -------------------------------------------------------------------------------- 1 | #coding:utf-8 2 | import numpy as np 3 | import tensorflow as tf 4 | import matplotlib.pyplot as plt 5 | import vgg16 6 | import utils 7 | from Nclasses import labels 8 | 9 | img_path = raw_input('Input the path and image name:') 10 | img_ready = utils.load_image(img_path) 11 | 12 | fig=plt.figure(u"Top-5 Predictions") 13 | 14 | with tf.Session() as sess: 15 | images = tf.placeholder(tf.float32, [1, 224, 224, 3]) 16 | vgg = vgg16.Vgg16() 17 | vgg.forward(images) 18 | probability = sess.run(vgg.prob, feed_dict={images:img_ready}) 19 | top5 = np.argsort(probability[0])[-1:-6:-1] 20 | print "top5:",top5 21 | values = [] 22 | bar_label = [] 23 | for n, i in enumerate(top5): 24 | print "n:",n 25 | print "i:",i 26 | values.append(probability[0][i]) 27 | bar_label.append(labels[i]) 28 | print i, ":", labels[i], "----", utils.percent(probability[0][i]) 29 | 30 | ax = fig.add_subplot(111) 31 | ax.bar(range(len(values)), values, tick_label=bar_label, width=0.5, fc='g') 32 | ax.set_ylabel(u'probability') 33 | ax.set_title(u'Top-5') 34 | for a,b in zip(range(len(values)), values): 35 | ax.text(a, b+0.0005, utils.percent(b), ha='center', va = 'bottom', fontsize=7) 36 | plt.show() 37 | 38 | 39 | 40 | -------------------------------------------------------------------------------- /lec8/utils.py: -------------------------------------------------------------------------------- 1 | 
#!/usr/bin/python 2 | #coding:utf-8 3 | from skimage import io, transform 4 | import numpy as np 5 | import matplotlib.pyplot as plt 6 | import tensorflow as tf 7 | from pylab import mpl 8 | 9 | mpl.rcParams['font.sans-serif']=['SimHei'] # display Chinese labels correctly 10 | mpl.rcParams['axes.unicode_minus']=False # display minus signs correctly 11 | 12 | def load_image(path): 13 | fig = plt.figure("Centre and Resize") 14 | img = io.imread(path) 15 | img = img / 255.0 16 | 17 | ax0 = fig.add_subplot(131) 18 | ax0.set_xlabel(u'Original Picture') 19 | ax0.imshow(img) 20 | 21 | short_edge = min(img.shape[:2]) 22 | y = (img.shape[0] - short_edge) // 2 23 | x = (img.shape[1] - short_edge) // 2 24 | crop_img = img[y:y+short_edge, x:x+short_edge] 25 | 26 | ax1 = fig.add_subplot(132) 27 | ax1.set_xlabel(u"Centre Picture") 28 | ax1.imshow(crop_img) 29 | 30 | re_img = transform.resize(crop_img, (224, 224)) 31 | 32 | ax2 = fig.add_subplot(133) 33 | ax2.set_xlabel(u"Resize Picture") 34 | ax2.imshow(re_img) 35 | 36 | img_ready = re_img.reshape((1, 224, 224, 3)) 37 | 38 | return img_ready 39 | 40 | def percent(value): 41 | return '%.2f%%' % (value * 100) 42 | -------------------------------------------------------------------------------- /lec8/vgg16.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | #coding:utf-8 3 | 4 | import inspect 5 | import os 6 | import numpy as np 7 | import tensorflow as tf 8 | import time 9 | import matplotlib.pyplot as plt 10 | 11 | VGG_MEAN = [103.939, 116.779, 123.68] 12 | 13 | class Vgg16(): 14 | def __init__(self, vgg16_path=None): 15 | if vgg16_path is None: 16 | vgg16_path = os.path.join(os.getcwd(), "vgg16.npy") 17 | self.data_dict = np.load(vgg16_path, encoding='latin1').item() 18 | 19 | def forward(self, images): 20 | 21 | print("build model started") 22 | start_time = time.time() 23 | rgb_scaled = images * 255.0 24 | red, green, blue = tf.split(rgb_scaled,3,3) 25 | bgr = tf.concat([ 26 | blue - VGG_MEAN[0], 27 | green - 
VGG_MEAN[1], 28 | red - VGG_MEAN[2]],3) 29 | 30 | self.conv1_1 = self.conv_layer(bgr, "conv1_1") 31 | self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2") 32 | self.pool1 = self.max_pool_2x2(self.conv1_2, "pool1") 33 | 34 | self.conv2_1 = self.conv_layer(self.pool1, "conv2_1") 35 | self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2") 36 | self.pool2 = self.max_pool_2x2(self.conv2_2, "pool2") 37 | 38 | self.conv3_1 = self.conv_layer(self.pool2, "conv3_1") 39 | self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2") 40 | self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3") 41 | self.pool3 = self.max_pool_2x2(self.conv3_3, "pool3") 42 | 43 | self.conv4_1 = self.conv_layer(self.pool3, "conv4_1") 44 | self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2") 45 | self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3") 46 | self.pool4 = self.max_pool_2x2(self.conv4_3, "pool4") 47 | 48 | self.conv5_1 = self.conv_layer(self.pool4, "conv5_1") 49 | self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2") 50 | self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3") 51 | self.pool5 = self.max_pool_2x2(self.conv5_3, "pool5") 52 | 53 | self.fc6 = self.fc_layer(self.pool5, "fc6") 54 | self.relu6 = tf.nn.relu(self.fc6) 55 | 56 | self.fc7 = self.fc_layer(self.relu6, "fc7") 57 | self.relu7 = tf.nn.relu(self.fc7) 58 | 59 | self.fc8 = self.fc_layer(self.relu7, "fc8") 60 | self.prob = tf.nn.softmax(self.fc8, name="prob") 61 | 62 | end_time = time.time() 63 | print(("time consuming: %f" % (end_time-start_time))) 64 | 65 | self.data_dict = None 66 | 67 | def conv_layer(self, x, name): 68 | with tf.variable_scope(name): 69 | w = self.get_conv_filter(name) 70 | conv = tf.nn.conv2d(x, w, [1, 1, 1, 1], padding='SAME') 71 | conv_biases = self.get_bias(name) 72 | result = tf.nn.relu(tf.nn.bias_add(conv, conv_biases)) 73 | return result 74 | 75 | def get_conv_filter(self, name): 76 | return tf.constant(self.data_dict[name][0], name="filter") 77 | 78 | def get_bias(self, name): 79 | return 
tf.constant(self.data_dict[name][1], name="biases") 80 | 81 | def max_pool_2x2(self, x, name): 82 | return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name=name) 83 | 84 | def fc_layer(self, x, name): 85 | with tf.variable_scope(name): 86 | shape = x.get_shape().as_list() 87 | dim = 1 88 | for i in shape[1:]: 89 | dim *= i 90 | x = tf.reshape(x, [-1, dim]) 91 | w = self.get_fc_weight(name) 92 | b = self.get_bias(name) 93 | 94 | result = tf.nn.bias_add(tf.matmul(x, w), b) 95 | return result 96 | 97 | def get_fc_weight(self, name): 98 | return tf.constant(self.data_dict[name][0], name="weights") 99 | 100 | -------------------------------------------------------------------------------- /pic/lec8.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/pic/lec8.jpg -------------------------------------------------------------------------------- /pic/vm1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/pic/vm1.jpg -------------------------------------------------------------------------------- /pic/vm2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/pic/vm2.jpg -------------------------------------------------------------------------------- /pic/vm3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/pic/vm3.jpg -------------------------------------------------------------------------------- /pic/vm4.jpg: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/pic/vm4.jpg -------------------------------------------------------------------------------- /pic/vm5.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/pic/vm5.jpg -------------------------------------------------------------------------------- /pic/vm6.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/pic/vm6.jpg -------------------------------------------------------------------------------- /pic/vm7.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Code-Sample-Collection/PKU-Tensorflow-Notes/ea29c7a40762ee5e2602aad33122609698c028af/pic/vm7.jpg --------------------------------------------------------------------------------
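The top-5 selection in lec8/app.py, `np.argsort(probability[0])[-1:-6:-1]`, can be exercised in isolation with plain NumPy: `argsort` sorts ascending, and the negative-step slice walks the tail backwards to yield a descending top-k. A minimal sketch; the helper name `top_k` and the sample probability vector are illustrative and not part of the repo, while `percent` mirrors the formatter in lec8/utils.py:

```python
import numpy as np

def top_k(prob, k=5):
    # argsort gives ascending indices; the reverse slice over the last
    # k entries yields the k largest probabilities, highest first
    # (same slice as lec8/app.py with k=5).
    return np.argsort(prob)[-1:-(k + 1):-1]

def percent(value):
    # Same formatting helper as lec8/utils.py.
    return '%.2f%%' % (value * 100)

prob = np.array([0.02, 0.50, 0.08, 0.25, 0.10, 0.05])
idx = top_k(prob, 3)
print(list(idx))              # [1, 3, 4]
print(percent(prob[idx[0]]))  # 50.00%
```

app.py then iterates over these indices to collect bar values and `Nclasses.labels` names for the plot.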
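The center-crop arithmetic in `load_image` (lec8/utils.py) — crop the long edge down to a square on the short edge before resizing to 224×224 — can likewise be checked without skimage or TensorFlow. A minimal sketch under the same logic; `center_crop` is an illustrative name, not a function in the repo:

```python
import numpy as np

def center_crop(img):
    # Square center crop along the short edge, as in utils.load_image.
    short_edge = min(img.shape[:2])
    # floor division keeps the offsets integral (valid slice indices
    # in both Python 2 and 3)
    y = (img.shape[0] - short_edge) // 2
    x = (img.shape[1] - short_edge) // 2
    return img[y:y + short_edge, x:x + short_edge]

img = np.zeros((300, 500, 3))
print(center_crop(img).shape)  # (300, 300, 3)
```

The crop is symmetric: a 500×300 portrait image is reduced to 300×300 the same way, after which `transform.resize` brings it to the 224×224×3 input VGG16 expects.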