├── .gitignore
├── Gemfile
├── README.md
├── _announcements
│   ├── week-01.md
│   ├── week-02.md
│   ├── week-04.md
│   ├── week-05.md
│   ├── week-06.md
│   ├── week-11.md
│   ├── week-16.md
│   ├── week-18.md
│   ├── week-21.md
│   ├── week-22.md
│   └── week-24.md
├── _config.yml
├── _includes
│   ├── head_custom.html
│   └── minutes.liquid
├── _layouts
│   ├── announcement.html
│   ├── minimal.html
│   ├── module.html
│   ├── schedule.html
│   └── staffer.html
├── _modules
│   ├── part_0.json
│   ├── part_0.md
│   ├── part_1.json
│   ├── part_1.md
│   ├── part_2.json
│   ├── part_2.md
│   ├── part_3.json
│   ├── part_3.md
│   ├── part_4.json
│   └── part_4.md
├── _sass
│   ├── color_schemes
│   │   └── d2l.scss
│   └── custom
│       ├── announcement.scss
│       ├── card.scss
│       ├── custom.scss
│       ├── module.scss
│       ├── schedule.scss
│       └── staffer.scss
├── _staffers
│   └── mu.md
├── announcements.md
├── assets
│   ├── images
│   │   ├── logo-with-text.png
│   │   └── mu.jpg
│   └── pdfs
│       ├── part-0_1.pdf
│       ├── part-0_2.pdf
│       ├── part-0_3.pdf
│       ├── part-0_4.pdf
│       ├── part-0_5.pdf
│       ├── part-0_6.pdf
│       ├── part-0_7.pdf
│       └── part-0_8.pdf
├── contents.json
├── favicon.ico
├── generate.ipynb
├── index.md
└── upload.sh

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | _site/
2 | .sass-cache/
3 | .jekyll-cache/
4 | .jekyll-metadata
5 | .bundle/
6 | **/.DS_Store
7 | Gemfile.lock
8 | vendor
9 | **/pdfs/
10 | **/notebooks/
11 | *.ipynb_checkpoints/

--------------------------------------------------------------------------------
/Gemfile:
--------------------------------------------------------------------------------
1 | source 'https://rubygems.org'
2 | gem "jekyll"
3 | gem "just-the-docs"
4 | gem "webrick"

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Dive into Deep Learning (动手学深度学习) v2 Course
2 | 
3 | Course homepage: https://courses.d2l.ai/zh-v2/
4 | 
5 | To build the site locally:
6 | 
7 | ```bash
8 | bundle install
9 | bundle exec jekyll serve
10 | ```
11 | 
12 | 
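The Gemfile above is small enough to annotate line by line. The sketch below is the same file with editorial comments explaining why each gem is there (the comments are additions for this write-up, not part of the repo):

```ruby
# Gemfile — annotated sketch of the repo's dependency list
source 'https://rubygems.org'

gem "jekyll"        # the static site generator that builds the course site
gem "just-the-docs" # the documentation theme the site's layouts and color scheme extend
gem "webrick"       # WEBrick was removed from Ruby's bundled gems in Ruby 3.0,
                    # so `bundle exec jekyll serve` needs it as an explicit dependency
```

With these three gems resolved by `bundle install`, the `bundle exec jekyll serve` command from the README serves the site on localhost.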
--------------------------------------------------------------------------------
/_announcements/week-01.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Week 1
3 | week: 1
4 | ---
5 | 
6 | The week 1 videos have been uploaded to Bilibili. Following everyone's suggestions, related units have been grouped into collections, each video containing 1 to 4 parts. You can watch them in order through the [Bilibili channel](https://space.bilibili.com/1567748478/channel/index), or follow the video links in the course schedule below. In addition, since "summing along specific axes" was not explained clearly during the livestream, a supplementary video was recorded afterwards.

--------------------------------------------------------------------------------
/_announcements/week-02.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Week 2
3 | week: 2
4 | ---
5 | 
6 | The week 2 videos have been uploaded to Bilibili. The linear regression lecture was re-recorded after the livestream, so it differs slightly from the live version. Note that there will be no livestream next week because of the Qingming holiday.

--------------------------------------------------------------------------------
/_announcements/week-04.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Week 4
3 | week: 4
4 | ---
5 | 
6 | Part 1 is complete. As a wrap-up bonus, the first course competition is [predicting 2020 California house prices](https://www.kaggle.com/c/california-house-prices/overview). The top 5 individuals (or teams) will receive an author-signed copy of the second edition of Dive into Deep Learning. See the introduction video on [Bilibili](https://www.bilibili.com/video/BV1NK4y1P7Tu?p=2).

--------------------------------------------------------------------------------
/_announcements/week-05.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Week 5
3 | week: 5
4 | ---
5 | 
6 | Because of the May Day holiday, there will be no classes on April 25, May 1, or May 2. The [course competition](https://www.kaggle.com/c/california-house-prices/overview) deadline is also extended to one hour before the May 9 livestream.
7 | 

--------------------------------------------------------------------------------
/_announcements/week-06.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Week 6
3 | week: 6
4 | ---
5 | 
6 | Since May 8 is a make-up workday and a single-day weekend leaves everyone with more to do, classes remain suspended this week. The competition deadline is unchanged: 12:00 noon on May 9 (the time shown on Kaggle). A baseline AutoML model will be introduced on Bilibili on Sunday.
7 | 

--------------------------------------------------------------------------------
/_announcements/week-11.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Week 11
3 | week: 11
4 | ---
5 | 
6 | Part 2, convolutional neural networks, is complete. All recordings have been uploaded to [Bilibili](https://space.bilibili.com/1567748478/channel/detail?cid=175509). This part's competition is leaf classification: [Kaggle page](https://www.kaggle.com/c/classify-leaves)
7 | 

--------------------------------------------------------------------------------
/_announcements/week-16.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Week 16
3 | week: 16
4 | ---
5 | 
6 | The leaf classification competition has ended. See the [winning teams and their technical summaries](https://www.bilibili.com/video/BV1by4y1K7SE)
7 | 

--------------------------------------------------------------------------------
/_announcements/week-18.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Week 18
3 | week: 18
4 | ---
5 | 
6 | Part 3, computer vision, is complete. The third competition is [cowboy outfit detection](https://www.bilibili.com/video/BV1F64y1x7xP/)
7 | 

--------------------------------------------------------------------------------
/_announcements/week-21.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Week 21
3 | week: 21
4 | ---
5 | 
6 | Because of a business trip, this weekend's livestreams (July 31 and August 1) are cancelled.
7 | 

--------------------------------------------------------------------------------
/_announcements/week-22.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Week 22
3 | week: 22
4 | ---
5 | 
6 | The object detection competition has ended. The [top ten participants](https://competitions.codalab.org/competitions/33573#results) (lcpdeb, herunyu, signcoda, zZamm, sophiezang, saltFish, snowtyan, nekokiku, dejahu, momo233) will receive signed books if they submit their code to Kaggle by August 15.
7 | 

--------------------------------------------------------------------------------
/_announcements/week-24.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Week 24
3 | week: 24
4 | ---
5 | 
6 | 
The course is over. Thank you for studying with us! All videos have been uploaded to [Bilibili](https://space.bilibili.com/1567748478/channel/detail?cid=175509). You are welcome to continue with the follow-up course: [Stanford Fall 2021: Practical Machine Learning](https://c.d2l.ai/stanford-cs329p/).
7 | 

--------------------------------------------------------------------------------
/_config.yml:
--------------------------------------------------------------------------------
1 | title: "Dive into Deep Learning Course"
2 | theme: just-the-docs
3 | remote_theme: pmarsceill/just-the-docs
4 | 
5 | logo: "/assets/images/logo-with-text.png"
6 | 
7 | aux_links:
8 |   "GitHub":
9 |     - "//github.com/mli/course-zh-v2"
10 | aux_links_new_tab: true
11 | 
12 | heading_anchors: true
13 | 
14 | ga_tracking: UA-96378503-2
15 | ga_tracking_anonymize_ip: true
16 | 
17 | color_scheme: d2l
18 | 
19 | 
20 | tagline: Dive into Deep Learning online course
21 | description: A free online deep learning course
22 | author: 李沐
23 | baseurl: '/zh-v2' # the subpath of your site, e.g. /blog
24 | url: 'https://courses.d2l.ai'
25 | exclude: ["Gemfile", "Gemfile.lock", "LICENSE", "README.md"]
26 | 
27 | # Collections for website data
28 | collections:
29 |   staffers:
30 |   modules:
31 |   announcements:
32 | 
33 | # Default layouts for each collection type
34 | defaults:
35 |   - scope:
36 |       path: ''
37 |       type: staffers
38 |     values:
39 |       layout: staffer
40 |       subpath: '/assets/images/'
41 |   - scope:
42 |       path: ''
43 |       type: modules
44 |     values:
45 |       layout: module
46 |       subpath: '/assets/pdfs/'
47 |   - scope:
48 |       path: ''
49 |       type: announcements
50 |     values:
51 |       layout: announcement
52 | 
53 | compress_html:
54 |   clippings: all
55 |   comments: all
56 |   endings: all
57 |   startings: []
58 |   blanklines: false
59 |   profile: false
60 | 

--------------------------------------------------------------------------------
/_includes/head_custom.html:
--------------------------------------------------------------------------------
1 | 
2 | 

--------------------------------------------------------------------------------
/_includes/minutes.liquid:
--------------------------------------------------------------------------------
1 | {% capture
_minutes_workspace %} 2 | {% comment %} 3 | Return the number of minutes between midnight and the given time string (e.g. '9:30 AM'). 4 | 5 | Parameters: 6 | `time` (string): the time to convert. 7 | {% endcomment %} 8 | 9 | {% assign _time = include.time %} 10 | {% assign _hhmm = _time | split: ' ' | first | split: ':' %} 11 | {% assign _hours = _hhmm | first | to_integer %} 12 | {% assign _minutes = _hhmm | last | to_integer %} 13 | {% assign _ampm = _time | split: ' ' | last | upcase %} 14 | 15 | {% if _ampm == 'AM' and _hours == 12 %} 16 | {% assign _hours = _hours | minus: 12 %} 17 | {% elsif _ampm == 'PM' and _hours != 12 %} 18 | {% assign _hours = _hours | plus: 12 %} 19 | {% endif %} 20 | {% endcapture %}{% assign _minutes_workspace = '' %}{{ _hours | times: 60 | plus: _minutes }} 21 | -------------------------------------------------------------------------------- /_layouts/announcement.html: -------------------------------------------------------------------------------- 1 |
2 |

{{ page.title }}

3 | 4 | {% assign minutes = content | strip_html | number_of_words | divided_by: 180.0 | round %} 5 | 6 |
7 | {{ content }} 8 |
9 |
10 | -------------------------------------------------------------------------------- /_layouts/minimal.html: -------------------------------------------------------------------------------- 1 | --- 2 | layout: table_wrappers 3 | --- 4 | 5 | 6 | 7 | 8 | {% include head.html %} 9 | 10 | 11 | 12 | Link 13 | 14 | 15 | 16 | 17 | 18 | 19 |
20 | {% unless page.url == "/" %} 21 | {% if page.parent %} 22 | 33 | {% endif %} 34 | {% endunless %} 35 |
36 | {% if site.heading_anchors != false %} 37 | {% include vendor/anchor_headings.html html=content beforeHeading="true" anchorBody="" anchorClass="anchor-heading" %} 38 | {% else %} 39 | {{ content }} 40 | {% endif %} 41 |
42 |
43 | 44 | 45 | -------------------------------------------------------------------------------- /_layouts/module.html: -------------------------------------------------------------------------------- 1 |

{{ page.title }}

2 |
3 | {{ content }} 4 |
5 | -------------------------------------------------------------------------------- /_layouts/schedule.html: -------------------------------------------------------------------------------- 1 | {% assign start_time = page.timeline | first %} 2 | {% capture offset %}{% include minutes.liquid time=start_time %}{% endcapture %} 3 |
4 | 9 | 34 |
35 | -------------------------------------------------------------------------------- /_layouts/staffer.html: -------------------------------------------------------------------------------- 1 |
2 | {% if page.photo %} 3 | 4 | {% endif %} 5 |
6 |

7 | {% if page.website %} 8 | {{ page.name }} 9 | {% else %} 10 | {{ page.name }} 11 | {% endif %} 12 | {% if page.pronouns %} 13 | {{ page.pronouns }} 14 | {% endif %} 15 |

16 | {% if page.email %} 17 |

{{ page.email }}

18 | {% endif %} 19 | {% if page.section %} 20 |

Quiz Section: {{ page.section | markdownify | strip_html }}

21 | {% endif %} 22 | {% if page.office-hours %} 23 |

Office Hours: {{ page.office-hours | markdownify | strip_html }}

24 | {% endif %} 25 | {{ content }} 26 |
27 |
28 |
-------------------------------------------------------------------------------- /_modules/part_0.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "title": "课程安排", 4 | "day_break": true, 5 | "slides": [ 6 | "part-01.pdf", 7 | 6 8 | ], 9 | "slides_video": "https://www.bilibili.com/video/BV1oX4y137bC", 10 | "notebook": false, 11 | "notebook_video": "", 12 | "qa_video": "" 13 | }, 14 | { 15 | "title": "深度学习介绍", 16 | "day_break": false, 17 | "book": "chapter_introduction/index.html", 18 | "slides": [ 19 | "part-01.pdf", 20 | 14 21 | ], 22 | "slides_video": "https://www.bilibili.com/video/BV1J54y187f9", 23 | "notebook_video": "" 24 | }, 25 | { 26 | "title": "安装", 27 | "day_break": false, 28 | "book": "chapter_installation/index.html", 29 | "slides": [ 30 | "part-01.pdf", 31 | 2 32 | ], 33 | "slides_video": "https://www.bilibili.com/video/BV18p4y1h7Dr", 34 | "qa_video": "https://www.bilibili.com/video/BV18p4y1h7Dr?p=2" 35 | }, 36 | { 37 | "title": "数据操作", 38 | "day_break": false, 39 | "book": "chapter_preliminaries/ndarray.html", 40 | "slides": [ 41 | "part-01.pdf", 42 | 5 43 | ], 44 | "slides_video": "https://www.bilibili.com/video/BV1CV411Y7i4", 45 | "notebook_video": "https://www.bilibili.com/video/BV1CV411Y7i4?p=2", 46 | "qa_video": "https://www.bilibili.com/video/BV1CV411Y7i4?p=4" 47 | }, 48 | { 49 | "title": "数据预处理", 50 | "day_break": false, 51 | "book": "chapter_preliminaries/pandas.html", 52 | "slides": [ 53 | "part-01.pdf", 54 | 0 55 | ], 56 | "slides_video": "https://www.bilibili.com/video/BV1CV411Y7i4?p=3", 57 | "notebook_video": "", 58 | "qa_video": "" 59 | }, 60 | { 61 | "title": "线性代数", 62 | "day_break": true, 63 | "book": "chapter_preliminaries/linear-algebra.html", 64 | "slides": [ 65 | "part-01.pdf", 66 | 14 67 | ], 68 | "slides_video": "https://www.bilibili.com/video/BV1eK4y1U7Qy", 69 | "notebook_video": "https://www.bilibili.com/video/BV1eK4y1U7Qy?p=2", 70 | "qa_video": 
"https://www.bilibili.com/video/BV1eK4y1U7Qy?p=4" 71 | }, 72 | { 73 | "title": "[补充] 按特定轴求和", 74 | "day_break": false, 75 | "slides": [ 76 | "part-01.pdf", 77 | 0 78 | ], 79 | "slides_video": "https://www.bilibili.com/video/BV1eK4y1U7Qy?p=3", 80 | "notebook_video": "", 81 | "qa_video": "" 82 | }, 83 | { 84 | "title": "矩阵计算", 85 | "day_break": false, 86 | "book": "chapter_preliminaries/calculus.html", 87 | "slides": [ 88 | "part-01.pdf", 89 | 10 90 | ], 91 | "notebook": false, 92 | "slides_video": "https://www.bilibili.com/video/BV1eZ4y1w7PY", 93 | "notebook_video": "", 94 | "qa_video": "https://www.bilibili.com/video/BV1eZ4y1w7PY?p=2" 95 | }, 96 | { 97 | "title": "自动求导", 98 | "day_break": false, 99 | "book": "chapter_preliminaries/autograd.html", 100 | "slides": [ 101 | "part-01.pdf", 102 | 15 103 | ], 104 | "slides_video": "https://www.bilibili.com/video/BV1KA411N7Px", 105 | "notebook_video": "https://www.bilibili.com/video/BV1KA411N7Px?p=2", 106 | "qa_video": "https://www.bilibili.com/video/BV1KA411N7Px?p=3" 107 | }, 108 | { 109 | "title": "线性回归", 110 | "day_break": true, 111 | "book": "chapter_linear-networks/linear-regression.html", 112 | "slides": [ 113 | "part-01.pdf", 114 | 11 115 | ], 116 | "slides_video": "https://www.bilibili.com/video/BV1PX4y1g7KC", 117 | "notebook_video": "", 118 | "qa_video": "" 119 | }, 120 | { 121 | "title": "基础优化方法", 122 | "day_break": false, 123 | "slides": [ 124 | "part-01.pdf", 125 | 6 126 | ], 127 | "slides_video": "https://www.bilibili.com/video/BV1PX4y1g7KC?p=2", 128 | "notebook_video": "", 129 | "qa_video": "" 130 | }, 131 | { 132 | "title": "线性回归的从零开始实现", 133 | "day_break": false, 134 | "book": "chapter_linear-networks/linear-regression-scratch.html", 135 | "slides": [ 136 | "part-0.pdf", 137 | 0 138 | ], 139 | "slides_video": "https://www.bilibili.com/video/BV1PX4y1g7KC?p=3", 140 | "notebook_video": "", 141 | "qa_video": "" 142 | }, 143 | { 144 | "title": "线性回归的简洁实现", 145 | "day_break": false, 146 | "book": 
"chapter_linear-networks/linear-regression-concise.html", 147 | "slides": [ 148 | "part-0.pdf", 149 | 0 150 | ], 151 | "slides_video": "https://www.bilibili.com/video/BV1PX4y1g7KC?p=4", 152 | "notebook_video": "", 153 | "qa_video": "" 154 | }, 155 | { 156 | "title": "Softmax 回归", 157 | "day_break": true, 158 | "book": "chapter_linear-networks/softmax-regression.html", 159 | "slides": [ 160 | "part-01.pdf", 161 | 11 162 | ], 163 | "slides_video": "https://www.bilibili.com/video/BV1K64y1Q7wu", 164 | "notebook_video": "", 165 | "qa_video": "" 166 | }, 167 | { 168 | "title": "损失函数", 169 | "day_break": false, 170 | "book": "", 171 | "slides": [ 172 | "part-01.pdf", 173 | 7 174 | ], 175 | "slides_video": "https://www.bilibili.com/video/BV1K64y1Q7wu?p=2", 176 | "notebook_video": "", 177 | "qa_video": "" 178 | }, 179 | { 180 | "title": "图像分类数据集", 181 | "day_break": false, 182 | "book": "chapter_linear-networks/image-classification-dataset.html", 183 | "slides": [ 184 | "part-0.pdf", 185 | 0 186 | ], 187 | "slides_video": "https://www.bilibili.com/video/BV1K64y1Q7wu?p=3", 188 | "notebook_video": "", 189 | "qa_video": "" 190 | }, 191 | { 192 | "title": "Softmax 回归的从零开始实现", 193 | "day_break": false, 194 | "book": "chapter_linear-networks/softmax-regression-scratch.html", 195 | "slides": [ 196 | "part-0.pdf", 197 | 0 198 | ], 199 | "slides_video": "https://www.bilibili.com/video/BV1K64y1Q7wu?p=4", 200 | "notebook_video": "", 201 | "qa_video": "" 202 | }, 203 | { 204 | "title": "Softmax 回归的简洁实现", 205 | "day_break": false, 206 | "book": "chapter_linear-networks/softmax-regression-concise.html", 207 | "slides": [ 208 | "part-0.pdf", 209 | 0 210 | ], 211 | "slides_video": "https://www.bilibili.com/video/BV1K64y1Q7wu?p=5", 212 | "notebook_video": "", 213 | "qa_video": "" 214 | }, 215 | { 216 | "title": "感知机", 217 | "day_break": true, 218 | "slides": [ 219 | "part-01.pdf", 220 | 11 221 | ], 222 | "slides_video": "https://www.bilibili.com/video/BV1hh411U7gn", 223 | "notebook_video": 
"", 224 | "qa_video": "" 225 | }, 226 | { 227 | "title": "多层感知机", 228 | "day_break": false, 229 | "book": "chapter_multilayer-perceptrons/mlp.html", 230 | "slides": [ 231 | "part-01.pdf", 232 | 13 233 | ], 234 | "slides_video": "https://www.bilibili.com/video/BV1hh411U7gn?p=2", 235 | "notebook_video": "", 236 | "qa_video": "" 237 | }, 238 | { 239 | "title": "多层感知机的从零开始实现", 240 | "day_break": false, 241 | "book": "chapter_multilayer-perceptrons/mlp-scratch.html", 242 | "slides": [ 243 | "part-0.pdf", 244 | 0 245 | ], 246 | "slides_video": "https://www.bilibili.com/video/BV1hh411U7gn?p=3", 247 | "notebook_video": "", 248 | "qa_video": "" 249 | }, 250 | { 251 | "title": "多层感知机的简洁实现", 252 | "day_break": false, 253 | "book": "chapter_multilayer-perceptrons/mlp-concise.html", 254 | "slides": [ 255 | "part-0.pdf", 256 | 0 257 | ], 258 | "slides_video": "https://www.bilibili.com/video/BV1hh411U7gn?p=3", 259 | "notebook_video": "", 260 | "qa_video": "" 261 | }, 262 | { 263 | "title": "模型选择", 264 | "day_break": true, 265 | "book": "", 266 | "slides": [ 267 | "part-01.pdf", 268 | 7 269 | ], 270 | "slides_video": "https://www.bilibili.com/video/BV1kX4y1g7jp", 271 | "notebook_video": "", 272 | "qa_video": "" 273 | }, 274 | { 275 | "title": "欠拟合和过拟合", 276 | "day_break": false, 277 | "book": "chapter_multilayer-perceptrons/underfit-overfit.html", 278 | "slides": [ 279 | "part-01.pdf", 280 | 10 281 | ], 282 | "slides_video": "https://www.bilibili.com/video/BV1kX4y1g7jp?p=2", 283 | "notebook_video": "", 284 | "qa_video": "" 285 | }, 286 | { 287 | "title": "权重衰减", 288 | "day_break": true, 289 | "book": "chapter_multilayer-perceptrons/weight-decay.html", 290 | "slides": [ 291 | "part-01.pdf", 292 | 6 293 | ], 294 | "slides_video": "https://www.bilibili.com/video/BV1UK4y1o7dy", 295 | "notebook_video": "", 296 | "qa_video": "" 297 | }, 298 | { 299 | "title": "Dropout", 300 | "day_break": false, 301 | "book": "chapter_multilayer-perceptrons/dropout.html", 302 | "slides": [ 303 | 
"part-01.pdf", 304 | 6 305 | ], 306 | "slides_video": "https://www.bilibili.com/video/BV1Y5411c7aY", 307 | "notebook_video": "", 308 | "qa_video": "" 309 | }, 310 | { 311 | "title": "数值稳定性", 312 | "day_break": true, 313 | "book": "chapter_multilayer-perceptrons/numerical-stability-and-init.html", 314 | "slides": [ 315 | "part-01.pdf", 316 | 10 317 | ], 318 | "slides_video": "https://www.bilibili.com/video/BV1u64y1i75a", 319 | "notebook_video": "", 320 | "qa_video": "" 321 | }, 322 | { 323 | "title": "模型初始化和激活函数", 324 | "day_break": false, 325 | "slides": [ 326 | "part-01.pdf", 327 | 12 328 | ], 329 | "slides_video": "https://www.bilibili.com/video/BV1u64y1i75a?p=2", 330 | "notebook_video": "", 331 | "qa_video": "" 332 | }, 333 | { 334 | "title": "实战 Kaggle 比赛:预测房价", 335 | "day_break": false, 336 | "book": "chapter_multilayer-perceptrons/kaggle-house-price.html", 337 | "slides": [ 338 | "part-0.pdf", 339 | 0 340 | ], 341 | "slides_video": "https://www.bilibili.com/video/BV1NK4y1P7Tu", 342 | "notebook_video": "", 343 | "qa_video": "" 344 | }, 345 | { 346 | "title": "**竞赛**{: .label } 预测房价", 347 | "day_break": false, 348 | "book": "", 349 | "slides": [ 350 | "part-01.pdf", 351 | 2 352 | ], 353 | "slides_video": "https://www.bilibili.com/video/BV1NK4y1P7Tu?p=2", 354 | "notebook_video": "", 355 | "qa_video": "" 356 | } 357 | ] -------------------------------------------------------------------------------- /_modules/part_0.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: 深度学习基础 3 | --- 4 | 5 | 3月20日 6 | 7 | : 课程安排 8 | :   9 | : [](assets/pdfs/part-0_1.pdf) 10 | :   11 | : [](https://www.bilibili.com/video/BV1oX4y137bC) 12 | 13 | : 深度学习介绍 14 | : [](https://zh-v2.d2l.ai/chapter_introduction/index.html) 15 | : [](assets/pdfs/part-0_2.pdf) 16 | :   17 | : [](https://www.bilibili.com/video/BV1J54y187f9) 18 | 19 | : 安装 20 | : [](https://zh-v2.d2l.ai/chapter_installation/index.html) 21 | : [](assets/pdfs/part-0_3.pdf) 22 | 
:   23 | : [](https://www.bilibili.com/video/BV18p4y1h7Dr) 24 | 25 | : 数据操作 26 | : [](https://zh-v2.d2l.ai/chapter_preliminaries/ndarray.html) 27 | : [](assets/pdfs/part-0_4.pdf) 28 | : [](assets/notebooks/chapter_preliminaries/ndarray.slides.html) 29 | : [](https://www.bilibili.com/video/BV1CV411Y7i4) 30 | 31 | : 数据预处理 32 | : [](https://zh-v2.d2l.ai/chapter_preliminaries/pandas.html) 33 | :   34 | : [](assets/notebooks/chapter_preliminaries/pandas.slides.html) 35 | : [](https://www.bilibili.com/video/BV1CV411Y7i4?p=3) 36 | 37 | 38 | 3月21日 39 | 40 | : 线性代数 41 | : [](https://zh-v2.d2l.ai/chapter_preliminaries/linear-algebra.html) 42 | : [](assets/pdfs/part-0_5.pdf) 43 | : [](assets/notebooks/chapter_preliminaries/linear-algebra.slides.html) 44 | : [](https://www.bilibili.com/video/BV1eK4y1U7Qy) 45 | 46 | : [补充] 按特定轴求和 47 | :   48 | :   49 | :   50 | : [](https://www.bilibili.com/video/BV1eK4y1U7Qy?p=3) 51 | 52 | : 矩阵计算 53 | : [](https://zh-v2.d2l.ai/chapter_preliminaries/calculus.html) 54 | : [](assets/pdfs/part-0_6.pdf) 55 | :   56 | : [](https://www.bilibili.com/video/BV1eZ4y1w7PY) 57 | 58 | : 自动求导 59 | : [](https://zh-v2.d2l.ai/chapter_preliminaries/autograd.html) 60 | : [](assets/pdfs/part-0_7.pdf) 61 | : [](assets/notebooks/chapter_preliminaries/autograd.slides.html) 62 | : [](https://www.bilibili.com/video/BV1KA411N7Px) 63 | 64 | 65 | 3月27日 66 | 67 | : 线性回归 68 | : [](https://zh-v2.d2l.ai/chapter_linear-networks/linear-regression.html) 69 | : [](assets/pdfs/part-0_8.pdf) 70 | : [](assets/notebooks/chapter_linear-networks/linear-regression.slides.html) 71 | : [](https://www.bilibili.com/video/BV1PX4y1g7KC) 72 | 73 | : 基础优化方法 74 | :   75 | : [](assets/pdfs/part-0_9.pdf) 76 | :   77 | : [](https://www.bilibili.com/video/BV1PX4y1g7KC?p=2) 78 | 79 | : 线性回归的从零开始实现 80 | : [](https://zh-v2.d2l.ai/chapter_linear-networks/linear-regression-scratch.html) 81 | :   82 | : [](assets/notebooks/chapter_linear-networks/linear-regression-scratch.slides.html) 83 | : 
[](https://www.bilibili.com/video/BV1PX4y1g7KC?p=3) 84 | 85 | : 线性回归的简洁实现 86 | : [](https://zh-v2.d2l.ai/chapter_linear-networks/linear-regression-concise.html) 87 | :   88 | : [](assets/notebooks/chapter_linear-networks/linear-regression-concise.slides.html) 89 | : [](https://www.bilibili.com/video/BV1PX4y1g7KC?p=4) 90 | 91 | 92 | 3月28日 93 | 94 | : Softmax 回归 95 | : [](https://zh-v2.d2l.ai/chapter_linear-networks/softmax-regression.html) 96 | : [](assets/pdfs/part-0_10.pdf) 97 | :   98 | : [](https://www.bilibili.com/video/BV1K64y1Q7wu) 99 | 100 | : 损失函数 101 | :   102 | : [](assets/pdfs/part-0_11.pdf) 103 | :   104 | : [](https://www.bilibili.com/video/BV1K64y1Q7wu?p=2) 105 | 106 | : 图像分类数据集 107 | : [](https://zh-v2.d2l.ai/chapter_linear-networks/image-classification-dataset.html) 108 | :   109 | : [](assets/notebooks/chapter_linear-networks/image-classification-dataset.slides.html) 110 | : [](https://www.bilibili.com/video/BV1K64y1Q7wu?p=3) 111 | 112 | : Softmax 回归的从零开始实现 113 | : [](https://zh-v2.d2l.ai/chapter_linear-networks/softmax-regression-scratch.html) 114 | :   115 | : [](assets/notebooks/chapter_linear-networks/softmax-regression-scratch.slides.html) 116 | : [](https://www.bilibili.com/video/BV1K64y1Q7wu?p=4) 117 | 118 | : Softmax 回归的简洁实现 119 | : [](https://zh-v2.d2l.ai/chapter_linear-networks/softmax-regression-concise.html) 120 | :   121 | : [](assets/notebooks/chapter_linear-networks/softmax-regression-concise.slides.html) 122 | : [](https://www.bilibili.com/video/BV1K64y1Q7wu?p=5) 123 | 124 | 125 | 4月3日 126 | 127 | : **休课**{: .label .label-green } 128 | 129 | 4月4日 130 | 131 | : **休课**{: .label .label-green } 132 | 133 | 4月10日 134 | 135 | : 感知机 136 | :   137 | : [](assets/pdfs/part-0_12.pdf) 138 | :   139 | : [](https://www.bilibili.com/video/BV1hh411U7gn) 140 | 141 | : 多层感知机 142 | : [](https://zh-v2.d2l.ai/chapter_multilayer-perceptrons/mlp.html) 143 | : [](assets/pdfs/part-0_13.pdf) 144 | : 
[](assets/notebooks/chapter_multilayer-perceptrons/mlp.slides.html) 145 | : [](https://www.bilibili.com/video/BV1hh411U7gn?p=2) 146 | 147 | : 多层感知机的从零开始实现 148 | : [](https://zh-v2.d2l.ai/chapter_multilayer-perceptrons/mlp-scratch.html) 149 | :   150 | : [](assets/notebooks/chapter_multilayer-perceptrons/mlp-scratch.slides.html) 151 | : [](https://www.bilibili.com/video/BV1hh411U7gn?p=3) 152 | 153 | : 多层感知机的简洁实现 154 | : [](https://zh-v2.d2l.ai/chapter_multilayer-perceptrons/mlp-concise.html) 155 | :   156 | : [](assets/notebooks/chapter_multilayer-perceptrons/mlp-concise.slides.html) 157 | : [](https://www.bilibili.com/video/BV1hh411U7gn?p=3) 158 | 159 | 160 | 4月11日 161 | 162 | : 模型选择 163 | :   164 | : [](assets/pdfs/part-0_14.pdf) 165 | :   166 | : [](https://www.bilibili.com/video/BV1kX4y1g7jp) 167 | 168 | : 欠拟合和过拟合 169 | : [](https://zh-v2.d2l.ai/chapter_multilayer-perceptrons/underfit-overfit.html) 170 | : [](assets/pdfs/part-0_15.pdf) 171 | : [](assets/notebooks/chapter_multilayer-perceptrons/underfit-overfit.slides.html) 172 | : [](https://www.bilibili.com/video/BV1kX4y1g7jp?p=2) 173 | 174 | 175 | 4月17日 176 | 177 | : 权重衰减 178 | : [](https://zh-v2.d2l.ai/chapter_multilayer-perceptrons/weight-decay.html) 179 | : [](assets/pdfs/part-0_16.pdf) 180 | : [](assets/notebooks/chapter_multilayer-perceptrons/weight-decay.slides.html) 181 | : [](https://www.bilibili.com/video/BV1UK4y1o7dy) 182 | 183 | : Dropout 184 | : [](https://zh-v2.d2l.ai/chapter_multilayer-perceptrons/dropout.html) 185 | : [](assets/pdfs/part-0_17.pdf) 186 | : [](assets/notebooks/chapter_multilayer-perceptrons/dropout.slides.html) 187 | : [](https://www.bilibili.com/video/BV1Y5411c7aY) 188 | 189 | 190 | 4月18日 191 | 192 | : 数值稳定性 193 | : [](https://zh-v2.d2l.ai/chapter_multilayer-perceptrons/numerical-stability-and-init.html) 194 | : [](assets/pdfs/part-0_18.pdf) 195 | : [](assets/notebooks/chapter_multilayer-perceptrons/numerical-stability-and-init.slides.html) 196 | : 
[](https://www.bilibili.com/video/BV1u64y1i75a) 197 | 198 | : 模型初始化和激活函数 199 | :   200 | : [](assets/pdfs/part-0_19.pdf) 201 | :   202 | : [](https://www.bilibili.com/video/BV1u64y1i75a?p=2) 203 | 204 | : 实战 Kaggle 比赛:预测房价 205 | : [](https://zh-v2.d2l.ai/chapter_multilayer-perceptrons/kaggle-house-price.html) 206 | :   207 | : [](assets/notebooks/chapter_multilayer-perceptrons/kaggle-house-price.slides.html) 208 | : [](https://www.bilibili.com/video/BV1NK4y1P7Tu) 209 | 210 | : **竞赛**{: .label } 预测房价 211 | :   212 | : [](assets/pdfs/part-0_20.pdf) 213 | :   214 | : [](https://www.bilibili.com/video/BV1NK4y1P7Tu?p=2) 215 | 216 | -------------------------------------------------------------------------------- /_modules/part_1.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "title": "模型构造", 4 | "day_break": true, 5 | "book": "chapter_deep-learning-computation/model-construction.html", 6 | "slides": [ 7 | "part-0.pdf", 8 | 0 9 | ], 10 | "slides_video": "https://www.bilibili.com/video/BV1AK4y1P7vs", 11 | "notebook_video": "", 12 | "qa_video": "" 13 | }, 14 | { 15 | "title": "参数管理", 16 | "day_break": false, 17 | "book": "chapter_deep-learning-computation/parameters.html", 18 | "slides": [ 19 | "part-0.pdf", 20 | 0 21 | ], 22 | "slides_video": "https://www.bilibili.com/video/BV1AK4y1P7vs?p=2", 23 | "notebook_video": "", 24 | "qa_video": "" 25 | }, 26 | { 27 | "title": "自定义层", 28 | "day_break": false, 29 | "book": "chapter_deep-learning-computation/custom-layer.html", 30 | "slides": [ 31 | "part-0.pdf", 32 | 0 33 | ], 34 | "slides_video": "https://www.bilibili.com/video/BV1AK4y1P7vs?p=3", 35 | "notebook_video": "", 36 | "qa_video": "" 37 | }, 38 | { 39 | "title": "读写文件", 40 | "day_break": false, 41 | "book": "chapter_deep-learning-computation/read-write.html", 42 | "slides": [ 43 | "part-0.pdf", 44 | 0 45 | ], 46 | "slides_video": "https://www.bilibili.com/video/BV1AK4y1P7vs?p=4", 47 | "notebook_video": "", 48 | 
"qa_video": "" 49 | }, 50 | { 51 | "title": "GPU", 52 | "day_break": false, 53 | "book": "chapter_deep-learning-computation/use-gpu.html", 54 | "slides": [ 55 | "part-0.pdf", 56 | 0 57 | ], 58 | "slides_video": "https://www.bilibili.com/video/BV1z5411c7C1", 59 | "notebook_video": "", 60 | "qa_video": "" 61 | }, 62 | { 63 | "title": "预测房价竞赛总结", 64 | "day_break": true, 65 | "book": "", 66 | "slides": [ 67 | "part-02.pdf", 68 | 5 69 | ], 70 | "slides_video": "https://www.bilibili.com/video/BV15Q4y1o7vc", 71 | "notebook_video": "", 72 | "qa_video": "" 73 | }, 74 | { 75 | "title": "从全连接层到卷积", 76 | "day_break": false, 77 | "book": "chapter_convolutional-neural-networks/why-conv.html", 78 | "slides": [ 79 | "part-02.pdf", 80 | 9 81 | ], 82 | "slides_video": "https://www.bilibili.com/video/BV1L64y1m7Nh", 83 | "notebook_video": "", 84 | "qa_video": "" 85 | }, 86 | { 87 | "title": "图像卷积", 88 | "day_break": false, 89 | "book": "chapter_convolutional-neural-networks/conv-layer.html", 90 | "slides": [ 91 | "part-02.pdf", 92 | 7 93 | ], 94 | "slides_video": "https://www.bilibili.com/video/BV1L64y1m7Nh?p=2", 95 | "notebook_video": "", 96 | "qa_video": "" 97 | }, 98 | { 99 | "title": "填充和步幅", 100 | "day_break": true, 101 | "book": "chapter_convolutional-neural-networks/padding-and-strides.html", 102 | "slides": [ 103 | "part-02.pdf", 104 | 8 105 | ], 106 | "slides_video": "https://www.bilibili.com/video/BV1Th411U7UN", 107 | "notebook_video": "", 108 | "qa_video": "" 109 | }, 110 | { 111 | "title": "多输入多输出通道", 112 | "day_break": false, 113 | "book": "chapter_convolutional-neural-networks/channels.html", 114 | "slides": [ 115 | "part-02.pdf", 116 | 10 117 | ], 118 | "slides_video": "https://www.bilibili.com/video/BV1MB4y1F7of", 119 | "notebook_video": "", 120 | "qa_video": "" 121 | }, 122 | { 123 | "title": "池化层", 124 | "day_break": true, 125 | "book": "chapter_convolutional-neural-networks/pooling.html", 126 | "slides": [ 127 | "part-02.pdf", 128 | 7 129 | ], 130 | "slides_video": 
"https://www.bilibili.com/video/BV1EV411j7nX", 131 | "notebook_video": "", 132 | "qa_video": "" 133 | }, 134 | { 135 | "title": "卷积神经网络(LeNet)", 136 | "day_break": false, 137 | "book": "chapter_convolutional-neural-networks/lenet.html", 138 | "slides": [ 139 | "part-02.pdf", 140 | 6 141 | ], 142 | "slides_video": "https://www.bilibili.com/video/BV1t44y1r7ct/", 143 | "notebook_video": "", 144 | "qa_video": "" 145 | }, 146 | { 147 | "title": "深度卷积神经网络(AlexNet)", 148 | "day_break": true, 149 | "book": "chapter_convolutional-modern/alexnet.html", 150 | "slides": [ 151 | "part-02.pdf", 152 | 13 153 | ], 154 | "slides_video": "https://www.bilibili.com/video/BV1h54y1L7oe/", 155 | "notebook_video": "", 156 | "qa_video": "" 157 | }, 158 | { 159 | "title": "使用块的网络(VGG)", 160 | "day_break": false, 161 | "book": "chapter_convolutional-modern/vgg.html", 162 | "slides": [ 163 | "part-02.pdf", 164 | 7 165 | ], 166 | "slides_video": "https://www.bilibili.com/video/BV1Ao4y117Pd/", 167 | "notebook_video": "", 168 | "qa_video": "" 169 | }, 170 | { 171 | "title": "网络中的网络(NiN)", 172 | "day_break": true, 173 | "book": "chapter_convolutional-modern/nin.html", 174 | "slides": [ 175 | "part-02.pdf", 176 | 6 177 | ], 178 | "slides_video": "https://www.bilibili.com/video/BV1Uv411G71b/", 179 | "notebook_video": "", 180 | "qa_video": "" 181 | }, 182 | { 183 | "title": "含并行连结的网络(GoogLeNet)", 184 | "day_break": false, 185 | "book": "chapter_convolutional-modern/googlenet.html", 186 | "slides": [ 187 | "part-02.pdf", 188 | 15 189 | ], 190 | "slides_video": "https://www.bilibili.com/video/BV1b5411g7Xo/", 191 | "notebook_video": "", 192 | "qa_video": "" 193 | }, 194 | { 195 | "title": "批量归一化", 196 | "day_break": true, 197 | "book": "chapter_convolutional-modern/batch-norm.html", 198 | "slides": [ 199 | "part-02.pdf", 200 | 6 201 | ], 202 | "slides_video": "https://www.bilibili.com/video/BV1X44y1r77r/", 203 | "notebook_video": "", 204 | "qa_video": "" 205 | }, 206 | { 207 | "title": "残差网络(ResNet)", 
208 | "day_break": false, 209 | "book": "chapter_convolutional-modern/resnet.html", 210 | "slides": [ 211 | "part-02.pdf", 212 | 9 213 | ], 214 | "slides_video": "https://www.bilibili.com/video/BV1bV41177ap/", 215 | "notebook_video": "", 216 | "qa_video": "" 217 | }, 218 | { 219 | "title": "**竞赛**{: .label } 图片分类", 220 | "day_break": false, 221 | "book": "", 222 | "slides": [ 223 | "part-02.pdf", 224 | 2 225 | ], 226 | "slides_video": "https://www.bilibili.com/video/BV1z64y1o7iz/", 227 | "notebook_video": "", 228 | "qa_video": "" 229 | } 230 | ] -------------------------------------------------------------------------------- /_modules/part_1.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: 卷积神经网络 3 | --- 4 | 5 | 4月24日 6 | 7 | : 模型构造 8 | : [](https://zh-v2.d2l.ai/chapter_deep-learning-computation/model-construction.html) 9 | :   10 | : [](assets/notebooks/chapter_deep-learning-computation/model-construction.slides.html) 11 | : [](https://www.bilibili.com/video/BV1AK4y1P7vs) 12 | 13 | : 参数管理 14 | : [](https://zh-v2.d2l.ai/chapter_deep-learning-computation/parameters.html) 15 | :   16 | : [](assets/notebooks/chapter_deep-learning-computation/parameters.slides.html) 17 | : [](https://www.bilibili.com/video/BV1AK4y1P7vs?p=2) 18 | 19 | : 自定义层 20 | : [](https://zh-v2.d2l.ai/chapter_deep-learning-computation/custom-layer.html) 21 | :   22 | : [](assets/notebooks/chapter_deep-learning-computation/custom-layer.slides.html) 23 | : [](https://www.bilibili.com/video/BV1AK4y1P7vs?p=3) 24 | 25 | : 读写文件 26 | : [](https://zh-v2.d2l.ai/chapter_deep-learning-computation/read-write.html) 27 | :   28 | : [](assets/notebooks/chapter_deep-learning-computation/read-write.slides.html) 29 | : [](https://www.bilibili.com/video/BV1AK4y1P7vs?p=4) 30 | 31 | : GPU 32 | : [](https://zh-v2.d2l.ai/chapter_deep-learning-computation/use-gpu.html) 33 | :   34 | : [](assets/notebooks/chapter_deep-learning-computation/use-gpu.slides.html) 35 | : 
[](https://www.bilibili.com/video/BV1z5411c7C1) 36 | 37 | 38 | 4月25日 39 | 40 | : **休课**{: .label .label-green } 41 | 42 | 5月1日 43 | 44 | : **休课**{: .label .label-green } 45 | 46 | 5月2日 47 | 48 | : **休课**{: .label .label-green } 49 | 50 | 5月8日 51 | 52 | : **休课**{: .label .label-green } 53 | 54 | 5月9日 55 | 56 | : **休课**{: .label .label-green } 57 | 58 | 5月15日 59 | 60 | : 预测房价竞赛总结 61 | :   62 | : [](assets/pdfs/part-1_1.pdf) 63 | :   64 | : [](https://www.bilibili.com/video/BV15Q4y1o7vc) 65 | 66 | : 从全连接层到卷积 67 | : [](https://zh-v2.d2l.ai/chapter_convolutional-neural-networks/why-conv.html) 68 | : [](assets/pdfs/part-1_2.pdf) 69 | :   70 | : [](https://www.bilibili.com/video/BV1L64y1m7Nh) 71 | 72 | : 图像卷积 73 | : [](https://zh-v2.d2l.ai/chapter_convolutional-neural-networks/conv-layer.html) 74 | : [](assets/pdfs/part-1_3.pdf) 75 | : [](assets/notebooks/chapter_convolutional-neural-networks/conv-layer.slides.html) 76 | : [](https://www.bilibili.com/video/BV1L64y1m7Nh?p=2) 77 | 78 | 79 | 5月16日 80 | 81 | : 填充和步幅 82 | : [](https://zh-v2.d2l.ai/chapter_convolutional-neural-networks/padding-and-strides.html) 83 | : [](assets/pdfs/part-1_4.pdf) 84 | : [](assets/notebooks/chapter_convolutional-neural-networks/padding-and-strides.slides.html) 85 | : [](https://www.bilibili.com/video/BV1Th411U7UN) 86 | 87 | : 多输入多输出通道 88 | : [](https://zh-v2.d2l.ai/chapter_convolutional-neural-networks/channels.html) 89 | : [](assets/pdfs/part-1_5.pdf) 90 | : [](assets/notebooks/chapter_convolutional-neural-networks/channels.slides.html) 91 | : [](https://www.bilibili.com/video/BV1MB4y1F7of) 92 | 93 | 94 | 5月22日 95 | 96 | : 池化层 97 | : [](https://zh-v2.d2l.ai/chapter_convolutional-neural-networks/pooling.html) 98 | : [](assets/pdfs/part-1_6.pdf) 99 | : [](assets/notebooks/chapter_convolutional-neural-networks/pooling.slides.html) 100 | : [](https://www.bilibili.com/video/BV1EV411j7nX) 101 | 102 | : 卷积神经网络(LeNet) 103 | : [](https://zh-v2.d2l.ai/chapter_convolutional-neural-networks/lenet.html) 104 
| : [](assets/pdfs/part-1_7.pdf) 105 | : [](assets/notebooks/chapter_convolutional-neural-networks/lenet.slides.html) 106 | : [](https://www.bilibili.com/video/BV1t44y1r7ct/) 107 | 108 | 109 | 5月23日 110 | 111 | : 深度卷积神经网络(AlexNet) 112 | : [](https://zh-v2.d2l.ai/chapter_convolutional-modern/alexnet.html) 113 | : [](assets/pdfs/part-1_8.pdf) 114 | : [](assets/notebooks/chapter_convolutional-modern/alexnet.slides.html) 115 | : [](https://www.bilibili.com/video/BV1h54y1L7oe/) 116 | 117 | : 使用块的网络(VGG) 118 | : [](https://zh-v2.d2l.ai/chapter_convolutional-modern/vgg.html) 119 | : [](assets/pdfs/part-1_9.pdf) 120 | : [](assets/notebooks/chapter_convolutional-modern/vgg.slides.html) 121 | : [](https://www.bilibili.com/video/BV1Ao4y117Pd/) 122 | 123 | 124 | 5月29日 125 | 126 | : 网络中的网络(NiN) 127 | : [](https://zh-v2.d2l.ai/chapter_convolutional-modern/nin.html) 128 | : [](assets/pdfs/part-1_10.pdf) 129 | : [](assets/notebooks/chapter_convolutional-modern/nin.slides.html) 130 | : [](https://www.bilibili.com/video/BV1Uv411G71b/) 131 | 132 | : 含并行连结的网络(GoogLeNet) 133 | : [](https://zh-v2.d2l.ai/chapter_convolutional-modern/googlenet.html) 134 | : [](assets/pdfs/part-1_11.pdf) 135 | : [](assets/notebooks/chapter_convolutional-modern/googlenet.slides.html) 136 | : [](https://www.bilibili.com/video/BV1b5411g7Xo/) 137 | 138 | 139 | 5月30日 140 | 141 | : 批量归一化 142 | : [](https://zh-v2.d2l.ai/chapter_convolutional-modern/batch-norm.html) 143 | : [](assets/pdfs/part-1_12.pdf) 144 | : [](assets/notebooks/chapter_convolutional-modern/batch-norm.slides.html) 145 | : [](https://www.bilibili.com/video/BV1X44y1r77r/) 146 | 147 | : 残差网络(ResNet) 148 | : [](https://zh-v2.d2l.ai/chapter_convolutional-modern/resnet.html) 149 | : [](assets/pdfs/part-1_13.pdf) 150 | : [](assets/notebooks/chapter_convolutional-modern/resnet.slides.html) 151 | : [](https://www.bilibili.com/video/BV1bV41177ap/) 152 | 153 | : **竞赛**{: .label } 图片分类 154 | :   155 | : [](assets/pdfs/part-1_14.pdf) 156 | :   157 | : 
[](https://www.bilibili.com/video/BV1z64y1o7iz/) 158 | 159 | -------------------------------------------------------------------------------- /_modules/part_2.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "title": "硬件:CPU和GPU", 4 | "day_break": true, 5 | "book": "chapter_computational-performance/hardware.html", 6 | "slides": [ 7 | "part-03.pdf", 8 | 14 9 | ], 10 | "slides_video": "https://www.bilibili.com/video/BV1TU4y1j7Wd/", 11 | "notebook_video": "", 12 | "qa_video": "" 13 | }, 14 | { 15 | "title": "更多的专有硬件", 16 | "day_break": true, 17 | "book": "", 18 | "slides": [ 19 | "part-03.pdf", 20 | 16 21 | ], 22 | "slides_video": "https://www.bilibili.com/video/BV1VV41147PC/", 23 | "notebook_video": "", 24 | "qa_video": "" 25 | }, 26 | { 27 | "title": "多GPU训练", 28 | "day_break": false, 29 | "book": "chapter_computational-performance/multiple-gpus.html", 30 | "slides": [ 31 | "part-03.pdf", 32 | 5 33 | ], 34 | "slides_video": "https://www.bilibili.com/video/BV1vU4y1V7rd/", 35 | "notebook_video": "", 36 | "qa_video": "" 37 | }, 38 | { 39 | "title": "多GPU训练的实现", 40 | "day_break": true, 41 | "book": "chapter_computational-performance/multiple-gpus-concise.html", 42 | "slides": [ 43 | "part-0.pdf", 44 | 0 45 | ], 46 | "slides_video": "https://www.bilibili.com/video/BV1MQ4y1R7Qg", 47 | "notebook_video": "", 48 | "qa_video": "" 49 | }, 50 | { 51 | "title": "分布式训练", 52 | "day_break": false, 53 | "book": "chapter_computational-performance/parameterserver.html", 54 | "slides": [ 55 | "part-03.pdf", 56 | 16 57 | ], 58 | "slides_video": "https://www.bilibili.com/video/BV1jU4y1G7iu", 59 | "notebook_video": "", 60 | "qa_video": "" 61 | }, 62 | { 63 | "title": "图像增广", 64 | "day_break": true, 65 | "book": "chapter_computer-vision/image-augmentation.html", 66 | "slides": [ 67 | "part-03.pdf", 68 | 9 69 | ], 70 | "slides_video": "https://www.bilibili.com/video/BV17y4y1g76q", 71 | "notebook_video": "", 72 | "qa_video": "" 73 | }, 
74 | { 75 | "title": "微调", 76 | "day_break": false, 77 | "book": "chapter_computer-vision/fine-tuning.html", 78 | "slides": [ 79 | "part-03.pdf", 80 | 9 81 | ], 82 | "slides_video": "https://www.bilibili.com/video/BV1Sb4y1d7CR", 83 | "notebook_video": "", 84 | "qa_video": "" 85 | }, 86 | { 87 | "title": "实战 Kaggle 比赛:图像分类(CIFAR-10)", 88 | "day_break": true, 89 | "book": "chapter_computer-vision/kaggle-cifar10.html", 90 | "slides": [ 91 | "part-0.pdf", 92 | 0 93 | ], 94 | "slides_video": "https://www.bilibili.com/video/BV1Gy4y1M7Cu", 95 | "notebook_video": "", 96 | "qa_video": "" 97 | }, 98 | { 99 | "title": "实战 Kaggle 比赛:狗的品种识别(ImageNet Dogs)", 100 | "day_break": false, 101 | "book": "chapter_computer-vision/kaggle-dog.html", 102 | "slides": [ 103 | "part-0.pdf", 104 | 0 105 | ], 106 | "slides_video": "https://www.bilibili.com/video/BV1j5411T7wx", 107 | "notebook_video": "", 108 | "qa_video": "" 109 | }, 110 | { 111 | "title": "物体检测", 112 | "day_break": true, 113 | "book": "", 114 | "slides": [ 115 | "part-03.pdf", 116 | 6 117 | ], 118 | "slides_video": "https://www.bilibili.com/video/BV1Lh411Y7LX", 119 | "notebook_video": "", 120 | "qa_video": "" 121 | }, 122 | { 123 | "title": "边缘框实现", 124 | "day_break": false, 125 | "book": "chapter_computer-vision/bounding-box.html", 126 | "slides": [ 127 | "part-03.pdf", 128 | 0 129 | ], 130 | "slides_video": "https://www.bilibili.com/video/BV1Lh411Y7LX?p=2", 131 | "notebook_video": "", 132 | "qa_video": "" 133 | }, 134 | 135 | { 136 | "title": "物体检测数据集", 137 | "day_break": false, 138 | "book": "chapter_computer-vision/object-detection-dataset.html", 139 | "slides": [ 140 | "part-0.pdf", 141 | 0 142 | ], 143 | "slides_video": "https://www.bilibili.com/video/BV1Lh411Y7LX?p=3", 144 | "notebook_video": "", 145 | "qa_video": "" 146 | }, 147 | { 148 | "title": "锚框", 149 | "day_break": false, 150 | "book": "chapter_computer-vision/anchor.html", 151 | "slides": [ 152 | "part-03.pdf", 153 | 7 154 | ], 155 | "slides_video": 
"https://www.bilibili.com/video/BV1aB4y1K7za", 156 | "notebook_video": "", 157 | "qa_video": "" 158 | }, 159 | { 160 | "title": "**竞赛**{: .label } 树叶分类竞赛总结", 161 | "day_break": false, 162 | "book": "", 163 | "slides": [ 164 | "part-03.pdf", 165 | 8 166 | ], 167 | "slides_video": "https://www.bilibili.com/video/BV1by4y1K7SE", 168 | "notebook_video": "", 169 | "qa_video": "" 170 | }, 171 | { 172 | "title": "区域卷积神经网络(R-CNNs)", 173 | "day_break": true, 174 | "book": "chapter_computer-vision/rcnn.html", 175 | "slides": [ 176 | "part-03.pdf", 177 | 8 178 | ], 179 | "slides_video": "https://www.bilibili.com/video/BV1Db4y1C71g", 180 | "notebook_video": "", 181 | "qa_video": "" 182 | }, 183 | { 184 | "title": "单发多框检测(SSD)", 185 | "day_break": false, 186 | "book": "", 187 | "slides": [ 188 | "part-03.pdf", 189 | 5 190 | ], 191 | "slides_video": "https://www.bilibili.com/video/BV1Db4y1C71g", 192 | "notebook_video": "", 193 | "qa_video": "" 194 | }, 195 | { 196 | "title": "你只看一次(YOLO)", 197 | "day_break": false, 198 | "book": "", 199 | "slides": [ 200 | "part-03.pdf", 201 | 3 202 | ], 203 | "slides_video": "https://www.bilibili.com/video/BV1Db4y1C71g", 204 | "notebook_video": "", 205 | "qa_video": "" 206 | }, 207 | { 208 | "title": "多尺度物体检测实现", 209 | "day_break": true, 210 | "book": "chapter_computer-vision/multiscale-object-detection.html", 211 | "slides": [ 212 | "part-0.pdf", 213 | 0 214 | ], 215 | "slides_video": "https://www.bilibili.com/video/BV1ZX4y1c7Sw?p=1", 216 | "notebook_video": "", 217 | "qa_video": "" 218 | }, 219 | { 220 | "title": "SSD 实现", 221 | "day_break": false, 222 | "book": "chapter_computer-vision/ssd.html", 223 | "slides": [ 224 | "part-0.pdf", 225 | 0 226 | ], 227 | "slides_video": "https://www.bilibili.com/video/BV1ZX4y1c7Sw?p=2", 228 | "notebook_video": "", 229 | "qa_video": "" 230 | }, 231 | { 232 | "title": "语义分割", 233 | "day_break": true, 234 | "book": "", 235 | "slides": [ 236 | "part-03.pdf", 237 | 5 238 | ], 239 | "slides_video": 
"https://www.bilibili.com/video/BV1BK4y1M7Rd", 240 | "notebook_video": "", 241 | "qa_video": "" 242 | }, 243 | { 244 | "title": "语义分割数据集", 245 | "day_break": false, 246 | "book": "chapter_computer-vision/semantic-segmentation-and-dataset.html", 247 | "slides": [ 248 | "part-0.pdf", 249 | 0 250 | ], 251 | "slides_video": "https://www.bilibili.com/video/BV1BK4y1M7Rd?p=2", 252 | "notebook_video": "", 253 | "qa_video": "" 254 | }, 255 | { 256 | "title": "转置卷积", 257 | "day_break": false, 258 | "book": "chapter_computer-vision/transposed-conv.html", 259 | "slides": [ 260 | "part-03.pdf", 261 | 3 262 | ], 263 | "slides_video": "https://www.bilibili.com/video/BV17o4y1X7Jn/", 264 | "notebook_video": "", 265 | "qa_video": "" 266 | }, 267 | { 268 | "title": "转置卷积是一种卷积", 269 | "day_break": false, 270 | "book": "chapter_computer-vision/transposed-conv.html", 271 | "slides": [ 272 | "part-03.pdf", 273 | 8 274 | ], 275 | "slides_video": "https://www.bilibili.com/video/BV1CM4y1K7r7/", 276 | "notebook_video": "", 277 | "qa_video": "" 278 | }, 279 | { 280 | "title": "全连接卷积神经网络(FCN)", 281 | "day_break": true, 282 | "book": "chapter_computer-vision/fcn.html", 283 | "slides": [ 284 | "part-03.pdf", 285 | 2 286 | ], 287 | "slides_video": "https://www.bilibili.com/video/BV1af4y1L7Zu/", 288 | "notebook_video": "", 289 | "qa_video": "" 290 | }, 291 | { 292 | "title": "样式迁移", 293 | "day_break": false, 294 | "book": "chapter_computer-vision/neural-style.html", 295 | "slides": [ 296 | "part-03.pdf", 297 | 3 298 | ], 299 | "slides_video": "https://www.bilibili.com/video/BV1Eh41167GN/", 300 | "notebook_video": "", 301 | "qa_video": "" 302 | }, 303 | { 304 | "title": "**竞赛**{: .label } 目标检测", 305 | "day_break": false, 306 | "book": "", 307 | "slides": [ 308 | "part-03.pdf", 309 | 3 310 | ], 311 | "slides_video": "https://www.bilibili.com/video/BV1F64y1x7xP/", 312 | "notebook_video": "", 313 | "qa_video": "" 314 | } 315 | ] 
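Each `_modules/part_N.json` entry above pairs a title with a book chapter, a master slide deck (`part-0X.pdf` plus a page count), and video links; the neighbouring `part_N.md` files render the same entries as kramdown definition lists with four icon-link slots. The actual conversion lives in `generate.ipynb` (not shown in this dump), so the sketch below is only an illustration under assumptions: `render_entry` and `render_part` are hypothetical names, dates and `day_break` handling are omitted, and the notebook-slides path is guessed to mirror the book chapter path.

```python
import json

BOOK = "https://zh-v2.d2l.ai/"
EMPTY = ": \u00a0"  # empty slots hold a non-breaking space, as in part_N.md

def render_entry(entry, part, k):
    """Render one schedule entry as the five-slot definition list used
    by _modules/part_N.md: title, book chapter, slide pdf, notebook
    slides, video.  k is the entry's per-part pdf number."""
    lines = [f": {entry['title']}"]
    book = entry.get("book", "")
    lines.append(f": []({BOOK}{book})" if book else EMPTY)
    # A page count of 0 in "slides" appears to mean "no standalone deck";
    # decks are numbered sequentially within each part.
    pages = entry.get("slides", ["", 0])[1]
    lines.append(f": [](assets/pdfs/part-{part}_{k}.pdf)" if pages else EMPTY)
    # Assumption: a notebook slide page mirrors the book chapter path.
    chapter = book[: -len(".html")] if book.endswith(".html") else ""
    lines.append(f": [](assets/notebooks/{chapter}.slides.html)" if chapter else EMPTY)
    video = entry.get("slides_video", "")
    lines.append(f": []({video})" if video else EMPTY)
    return "\n".join(lines)

def render_part(path, part):
    """Convert a whole part_N.json schedule into markdown entry blocks."""
    with open(path, encoding="utf-8") as f:
        entries = json.load(f)
    blocks, k = [], 1
    for e in entries:
        blocks.append(render_entry(e, part, k))
        if e.get("slides", ["", 0])[1]:  # only slide-bearing entries consume a number
            k += 1
    return "\n\n".join(blocks)
```

For the first entry of `part_3.json`, this reproduces the corresponding block of `part_3.md` (book link, `part-3_1.pdf`, notebook slides, video) slot for slot.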
-------------------------------------------------------------------------------- /_modules/part_2.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: 计算机视觉 3 | --- 4 | 5 | 6月5日 6 | 7 | : 硬件:CPU和GPU 8 | : [](https://zh-v2.d2l.ai/chapter_computational-performance/hardware.html) 9 | : [](assets/pdfs/part-2_1.pdf) 10 | :   11 | : [](https://www.bilibili.com/video/BV1TU4y1j7Wd/) 12 | 13 | 14 | 6月6日 15 | 16 | : 更多的专有硬件 17 | :   18 | : [](assets/pdfs/part-2_2.pdf) 19 | :   20 | : [](https://www.bilibili.com/video/BV1VV41147PC/) 21 | 22 | : 多GPU训练 23 | : [](https://zh-v2.d2l.ai/chapter_computational-performance/multiple-gpus.html) 24 | : [](assets/pdfs/part-2_3.pdf) 25 | : [](assets/notebooks/chapter_computational-performance/multiple-gpus.slides.html) 26 | : [](https://www.bilibili.com/video/BV1vU4y1V7rd/) 27 | 28 | 29 | 6月12日 30 | 31 | : **休课**{: .label .label-green } 32 | 33 | 6月13日 34 | 35 | : **休课**{: .label .label-green } 36 | 37 | 6月19日 38 | 39 | : 多GPU训练的实现 40 | : [](https://zh-v2.d2l.ai/chapter_computational-performance/multiple-gpus-concise.html) 41 | :   42 | : [](assets/notebooks/chapter_computational-performance/multiple-gpus-concise.slides.html) 43 | : [](https://www.bilibili.com/video/BV1MQ4y1R7Qg) 44 | 45 | : 分布式训练 46 | : [](https://zh-v2.d2l.ai/chapter_computational-performance/parameterserver.html) 47 | : [](assets/pdfs/part-2_4.pdf) 48 | :   49 | : [](https://www.bilibili.com/video/BV1jU4y1G7iu) 50 | 51 | 52 | 6月20日 53 | 54 | : 图像增广 55 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/image-augmentation.html) 56 | : [](assets/pdfs/part-2_5.pdf) 57 | : [](assets/notebooks/chapter_computer-vision/image-augmentation.slides.html) 58 | : [](https://www.bilibili.com/video/BV17y4y1g76q) 59 | 60 | : 微调 61 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/fine-tuning.html) 62 | : [](assets/pdfs/part-2_6.pdf) 63 | : [](assets/notebooks/chapter_computer-vision/fine-tuning.slides.html) 64 | : 
[](https://www.bilibili.com/video/BV1Sb4y1d7CR) 65 | 66 | 67 | 6月26日 68 | 69 | : 实战 Kaggle 比赛:图像分类(CIFAR-10) 70 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/kaggle-cifar10.html) 71 | :   72 | : [](assets/notebooks/chapter_computer-vision/kaggle-cifar10.slides.html) 73 | : [](https://www.bilibili.com/video/BV1Gy4y1M7Cu) 74 | 75 | : 实战 Kaggle 比赛:狗的品种识别(ImageNet Dogs) 76 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/kaggle-dog.html) 77 | :   78 | : [](assets/notebooks/chapter_computer-vision/kaggle-dog.slides.html) 79 | : [](https://www.bilibili.com/video/BV1j5411T7wx) 80 | 81 | 82 | 6月27日 83 | 84 | : 物体检测 85 | :   86 | : [](assets/pdfs/part-2_7.pdf) 87 | :   88 | : [](https://www.bilibili.com/video/BV1Lh411Y7LX) 89 | 90 | : 边缘框实现 91 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/bounding-box.html) 92 | :   93 | : [](assets/notebooks/chapter_computer-vision/bounding-box.slides.html) 94 | : [](https://www.bilibili.com/video/BV1Lh411Y7LX?p=2) 95 | 96 | : 物体检测数据集 97 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/object-detection-dataset.html) 98 | :   99 | : [](assets/notebooks/chapter_computer-vision/object-detection-dataset.slides.html) 100 | : [](https://www.bilibili.com/video/BV1Lh411Y7LX?p=3) 101 | 102 | : 锚框 103 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/anchor.html) 104 | : [](assets/pdfs/part-2_8.pdf) 105 | : [](assets/notebooks/chapter_computer-vision/anchor.slides.html) 106 | : [](https://www.bilibili.com/video/BV1aB4y1K7za) 107 | 108 | : **竞赛**{: .label } 树叶分类竞赛总结 109 | :   110 | : [](assets/pdfs/part-2_9.pdf) 111 | :   112 | : [](https://www.bilibili.com/video/BV1by4y1K7SE) 113 | 114 | 115 | 7月3日 116 | 117 | : 区域卷积神经网络(R-CNNs) 118 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/rcnn.html) 119 | : [](assets/pdfs/part-2_10.pdf) 120 | :   121 | : [](https://www.bilibili.com/video/BV1Db4y1C71g) 122 | 123 | : 单发多框检测(SSD) 124 | :   125 | : [](assets/pdfs/part-2_11.pdf) 126 | :   127 | : 
[](https://www.bilibili.com/video/BV1Db4y1C71g) 128 | 129 | : 你只看一次(YOLO) 130 | :   131 | : [](assets/pdfs/part-2_12.pdf) 132 | :   133 | : [](https://www.bilibili.com/video/BV1Db4y1C71g) 134 | 135 | 136 | 7月4日 137 | 138 | : 多尺度物体检测实现 139 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/multiscale-object-detection.html) 140 | :   141 | : [](assets/notebooks/chapter_computer-vision/multiscale-object-detection.slides.html) 142 | : [](https://www.bilibili.com/video/BV1ZX4y1c7Sw?p=1) 143 | 144 | : SSD 实现 145 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/ssd.html) 146 | :   147 | : [](assets/notebooks/chapter_computer-vision/ssd.slides.html) 148 | : [](https://www.bilibili.com/video/BV1ZX4y1c7Sw?p=2) 149 | 150 | 151 | 7月10日 152 | 153 | : 语义分割 154 | :   155 | : [](assets/pdfs/part-2_13.pdf) 156 | :   157 | : [](https://www.bilibili.com/video/BV1BK4y1M7Rd) 158 | 159 | : 语义分割数据集 160 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/semantic-segmentation-and-dataset.html) 161 | :   162 | : [](assets/notebooks/chapter_computer-vision/semantic-segmentation-and-dataset.slides.html) 163 | : [](https://www.bilibili.com/video/BV1BK4y1M7Rd?p=2) 164 | 165 | : 转置卷积 166 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/transposed-conv.html) 167 | : [](assets/pdfs/part-2_14.pdf) 168 | : [](assets/notebooks/chapter_computer-vision/transposed-conv.slides.html) 169 | : [](https://www.bilibili.com/video/BV17o4y1X7Jn/) 170 | 171 | : 转置卷积是一种卷积 172 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/transposed-conv.html) 173 | : [](assets/pdfs/part-2_15.pdf) 174 | : [](assets/notebooks/chapter_computer-vision/transposed-conv.slides.html) 175 | : [](https://www.bilibili.com/video/BV1CM4y1K7r7/) 176 | 177 | 178 | 7月11日 179 | 180 | : 全连接卷积神经网络(FCN) 181 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/fcn.html) 182 | : [](assets/pdfs/part-2_16.pdf) 183 | : [](assets/notebooks/chapter_computer-vision/fcn.slides.html) 184 | : [](https://www.bilibili.com/video/BV1af4y1L7Zu/) 185 | 
186 | : 样式迁移 187 | : [](https://zh-v2.d2l.ai/chapter_computer-vision/neural-style.html) 188 | : [](assets/pdfs/part-2_17.pdf) 189 | : [](assets/notebooks/chapter_computer-vision/neural-style.slides.html) 190 | : [](https://www.bilibili.com/video/BV1Eh41167GN/) 191 | 192 | : **竞赛**{: .label } 目标检测 193 | :   194 | : [](assets/pdfs/part-2_18.pdf) 195 | :   196 | : [](https://www.bilibili.com/video/BV1F64y1x7xP/) 197 | 198 | -------------------------------------------------------------------------------- /_modules/part_3.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "title":"序列模型", 4 | "day_break":true, 5 | "book":"chapter_recurrent-neural-networks/sequence.html", 6 | "slides":["part-04.pdf",9], 7 | "slides_video":"https://www.bilibili.com/video/BV1L44y1m768/", 8 | "notebook_video":"", 9 | "qa_video":"" 10 | }, 11 | { 12 | "title":"文本预处理", 13 | "day_break":false, 14 | "book":"chapter_recurrent-neural-networks/text-preprocessing.html", 15 | "slides":["part-0.pdf",0], 16 | "slides_video":"https://www.bilibili.com/video/BV1Fo4y1Q79L/", 17 | "notebook_video":"", 18 | "qa_video":"" 19 | }, 20 | { 21 | "title":"语言模型和数据集", 22 | "day_break":true, 23 | "book":"chapter_recurrent-neural-networks/language-models-and-dataset.html", 24 | "slides":["part-04.pdf",5], 25 | "slides_video":"https://www.bilibili.com/video/BV1ZX4y1F7K3/", 26 | "notebook_video":"", 27 | "qa_video":"" 28 | }, 29 | { 30 | "title":"循环神经网络", 31 | "day_break":false, 32 | "book":"chapter_recurrent-neural-networks/rnn.html", 33 | "slides":["part-04.pdf",8], 34 | "slides_video":"https://www.bilibili.com/video/BV1D64y1z7CA/", 35 | "notebook_video":"", 36 | "qa_video":"" 37 | }, 38 | { 39 | "title":"循环神经网络的从零开始实现", 40 | "day_break":true, 41 | "book":"chapter_recurrent-neural-networks/rnn-scratch.html", 42 | "slides":["part-0.pdf",0], 43 | "slides_video":"https://www.bilibili.com/video/BV1kq4y1H7sw/", 44 | "notebook_video":"", 45 | "qa_video":"" 46 | }, 47 | { 
48 | "title":"循环神经网络的简洁实现", 49 | "day_break":false, 50 | "book":"chapter_recurrent-neural-networks/rnn-concise.html", 51 | "slides":["part-0.pdf",0], 52 | "slides_video":"https://www.bilibili.com/video/BV1kq4y1H7sw?p=2", 53 | "notebook_video":"", 54 | "qa_video":"" 55 | }, 56 | { 57 | "title":"门控循环单元(GRU)", 58 | "day_break":true, 59 | "book":"chapter_recurrent-modern/gru.html", 60 | "slides":["part-04.pdf",6], 61 | "slides_video":"https://www.bilibili.com/video/BV1mf4y157N2/", 62 | "notebook_video":"", 63 | "qa_video":"" 64 | }, 65 | { 66 | "title":"长短期记忆网络(LSTM)", 67 | "day_break":false, 68 | "book":"chapter_recurrent-modern/lstm.html", 69 | "slides":["part-04.pdf",7], 70 | "slides_video":"https://www.bilibili.com/video/BV1JU4y1H7PC/", 71 | "notebook_video":"", 72 | "qa_video":"" 73 | }, 74 | { 75 | "title":"深层循环神经网络", 76 | "day_break":false, 77 | "book":"chapter_recurrent-modern/deep-rnn.html", 78 | "slides":["part-04.pdf",5], 79 | "slides_video":"https://www.bilibili.com/video/BV1JM4y1T7N4/", 80 | "notebook_video":"", 81 | "qa_video":"" 82 | }, 83 | { 84 | "title":"双向循环神经网络", 85 | "day_break":false, 86 | "book":"chapter_recurrent-modern/bi-rnn.html", 87 | "slides":["part-04.pdf",8], 88 | "slides_video":"https://www.bilibili.com/video/BV12X4y1c71W/", 89 | "notebook_video":"", 90 | "qa_video":"" 91 | }, 92 | { 93 | "title":"机器翻译与数据集", 94 | "day_break":true, 95 | "book":"chapter_recurrent-modern/machine-translation-and-dataset.html", 96 | "slides":["part-0.pdf",0], 97 | "slides_video":"https://www.bilibili.com/video/BV1H64y1s7TH/", 98 | "notebook_video":"", 99 | "qa_video":"" 100 | }, 101 | { 102 | "title":"编码器-解码器结构", 103 | "day_break":false, 104 | "book":"chapter_recurrent-modern/encoder-decoder.html", 105 | "slides":["part-04.pdf",5], 106 | "slides_video":"https://www.bilibili.com/video/BV1c54y1E7YP/", 107 | "notebook_video":"", 108 | "qa_video":"" 109 | }, 110 | { 111 | "title":"序列到序列学习(seq2seq)", 112 | "day_break":false, 113 | 
"book":"chapter_recurrent-modern/seq2seq.html", 114 | "slides":["part-04.pdf",7], 115 | "slides_video":"https://www.bilibili.com/video/BV16g411L7FG/", 116 | "notebook_video":"", 117 | "qa_video":"" 118 | }, 119 | { 120 | "title":"束搜索", 121 | "day_break":false, 122 | "book":"chapter_recurrent-modern/beam-search.html", 123 | "slides":["part-04.pdf",6], 124 | "slides_video":"https://www.bilibili.com/video/BV1B44y1C7m1/", 125 | "notebook_video":"", 126 | "qa_video":"" 127 | } 128 | ] -------------------------------------------------------------------------------- /_modules/part_3.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: 循环神经网络 3 | --- 4 | 5 | 7月17日 6 | 7 | : 序列模型 8 | : [](https://zh-v2.d2l.ai/chapter_recurrent-neural-networks/sequence.html) 9 | : [](assets/pdfs/part-3_1.pdf) 10 | : [](assets/notebooks/chapter_recurrent-neural-networks/sequence.slides.html) 11 | : [](https://www.bilibili.com/video/BV1L44y1m768/) 12 | 13 | : 文本预处理 14 | : [](https://zh-v2.d2l.ai/chapter_recurrent-neural-networks/text-preprocessing.html) 15 | :   16 | : [](assets/notebooks/chapter_recurrent-neural-networks/text-preprocessing.slides.html) 17 | : [](https://www.bilibili.com/video/BV1Fo4y1Q79L/) 18 | 19 | 20 | 7月18日 21 | 22 | : 语言模型和数据集 23 | : [](https://zh-v2.d2l.ai/chapter_recurrent-neural-networks/language-models-and-dataset.html) 24 | : [](assets/pdfs/part-3_2.pdf) 25 | : [](assets/notebooks/chapter_recurrent-neural-networks/language-models-and-dataset.slides.html) 26 | : [](https://www.bilibili.com/video/BV1ZX4y1F7K3/) 27 | 28 | : 循环神经网络 29 | : [](https://zh-v2.d2l.ai/chapter_recurrent-neural-networks/rnn.html) 30 | : [](assets/pdfs/part-3_3.pdf) 31 | :   32 | : [](https://www.bilibili.com/video/BV1D64y1z7CA/) 33 | 34 | 35 | 7月24日 36 | 37 | : 循环神经网络的从零开始实现 38 | : [](https://zh-v2.d2l.ai/chapter_recurrent-neural-networks/rnn-scratch.html) 39 | :   40 | : 
[](assets/notebooks/chapter_recurrent-neural-networks/rnn-scratch.slides.html) 41 | : [](https://www.bilibili.com/video/BV1kq4y1H7sw/) 42 | 43 | : 循环神经网络的简洁实现 44 | : [](https://zh-v2.d2l.ai/chapter_recurrent-neural-networks/rnn-concise.html) 45 | :   46 | : [](assets/notebooks/chapter_recurrent-neural-networks/rnn-concise.slides.html) 47 | : [](https://www.bilibili.com/video/BV1kq4y1H7sw?p=2) 48 | 49 | 50 | 7月25日 51 | 52 | : 门控循环单元(GRU) 53 | : [](https://zh-v2.d2l.ai/chapter_recurrent-modern/gru.html) 54 | : [](assets/pdfs/part-3_4.pdf) 55 | : [](assets/notebooks/chapter_recurrent-modern/gru.slides.html) 56 | : [](https://www.bilibili.com/video/BV1mf4y157N2/) 57 | 58 | : 长短期记忆网络(LSTM) 59 | : [](https://zh-v2.d2l.ai/chapter_recurrent-modern/lstm.html) 60 | : [](assets/pdfs/part-3_5.pdf) 61 | : [](assets/notebooks/chapter_recurrent-modern/lstm.slides.html) 62 | : [](https://www.bilibili.com/video/BV1JU4y1H7PC/) 63 | 64 | : 深层循环神经网络 65 | : [](https://zh-v2.d2l.ai/chapter_recurrent-modern/deep-rnn.html) 66 | : [](assets/pdfs/part-3_6.pdf) 67 | : [](assets/notebooks/chapter_recurrent-modern/deep-rnn.slides.html) 68 | : [](https://www.bilibili.com/video/BV1JM4y1T7N4/) 69 | 70 | : 双向循环神经网络 71 | : [](https://zh-v2.d2l.ai/chapter_recurrent-modern/bi-rnn.html) 72 | : [](assets/pdfs/part-3_7.pdf) 73 | : [](assets/notebooks/chapter_recurrent-modern/bi-rnn.slides.html) 74 | : [](https://www.bilibili.com/video/BV12X4y1c71W/) 75 | 76 | 77 | 7月31日 78 | 79 | : **休课**{: .label .label-green } 80 | 81 | 8月1日 82 | 83 | : **休课**{: .label .label-green } 84 | 85 | 8月7日 86 | 87 | : 机器翻译与数据集 88 | : [](https://zh-v2.d2l.ai/chapter_recurrent-modern/machine-translation-and-dataset.html) 89 | :   90 | : [](assets/notebooks/chapter_recurrent-modern/machine-translation-and-dataset.slides.html) 91 | : [](https://www.bilibili.com/video/BV1H64y1s7TH/) 92 | 93 | : 编码器-解码器结构 94 | : [](https://zh-v2.d2l.ai/chapter_recurrent-modern/encoder-decoder.html) 95 | : [](assets/pdfs/part-3_8.pdf) 96 | : 
[](assets/notebooks/chapter_recurrent-modern/encoder-decoder.slides.html) 97 | : [](https://www.bilibili.com/video/BV1c54y1E7YP/) 98 | 99 | : 序列到序列学习(seq2seq) 100 | : [](https://zh-v2.d2l.ai/chapter_recurrent-modern/seq2seq.html) 101 | : [](assets/pdfs/part-3_9.pdf) 102 | : [](assets/notebooks/chapter_recurrent-modern/seq2seq.slides.html) 103 | : [](https://www.bilibili.com/video/BV16g411L7FG/) 104 | 105 | : 束搜索 106 | : [](https://zh-v2.d2l.ai/chapter_recurrent-modern/beam-search.html) 107 | : [](assets/pdfs/part-3_10.pdf) 108 | :   109 | : [](https://www.bilibili.com/video/BV1B44y1C7m1/) 110 | 111 | -------------------------------------------------------------------------------- /_modules/part_4.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "title":"注意力机制", 4 | "day_break":true, 5 | "book":"chapter_attention-mechanisms/nadaraya-waston.html", 6 | "slides":["part-05.pdf",7], 7 | "slides_video":"https://www.bilibili.com/video/BV1264y1i7R1/", 8 | "notebook_video":"", 9 | "qa_video":"" 10 | }, 11 | { 12 | "title":"注意力分数", 13 | "day_break":false, 14 | "book":"chapter_attention-mechanisms/attention-scoring-functions.html", 15 | "slides":["part-05.pdf",6], 16 | "slides_video":"https://www.bilibili.com/video/BV1Tb4y167rb/", 17 | "notebook_video":"", 18 | "qa_video":"" 19 | }, 20 | { 21 | "title":"使用注意力机制的seq2seq", 22 | "day_break":false, 23 | "book":"chapter_attention-mechanisms/bahdanau-attention.html", 24 | "slides":["part-05.pdf",4], 25 | "slides_video":"https://www.bilibili.com/video/BV1v44y1C7Tg/", 26 | "notebook_video":"", 27 | "qa_video":"" 28 | }, 29 | { 30 | "title":"自注意力和位置编码", 31 | "day_break":true, 32 | "book":"chapter_attention-mechanisms/self-attention-and-positional-encoding.html", 33 | "slides":["part-05.pdf",8], 34 | "slides_video":"https://www.bilibili.com/video/BV19o4y1m7mo/", 35 | "notebook_video":"", 36 | "qa_video":"" 37 | }, 38 | { 39 | "title":"Transformer", 40 | "day_break":false, 41 | 
"book":"chapter_attention-mechanisms/transformer.html", 42 | "slides":["part-05.pdf",10], 43 | "slides_video":"https://www.bilibili.com/video/BV1Kq4y1H7FL/", 44 | "notebook_video":"", 45 | "qa_video":"" 46 | }, 47 | { 48 | "title":"BERT", 49 | "day_break":true, 50 | "book":"chapter_natural-language-processing-pretraining/bert.html", 51 | "slides":["part-05.pdf",8], 52 | "slides_video":"https://www.bilibili.com/video/BV1yU4y1E7Ns/", 53 | "notebook_video":"", 54 | "qa_video":"" 55 | }, 56 | { 57 | "title":"BERT预训练数据集", 58 | "day_break":false, 59 | "book":"chapter_natural-language-processing-pretraining/bert-dataset.html", 60 | "slides":["part-0.pdf",0], 61 | "slides_video":"https://www.bilibili.com/video/BV1yU4y1E7Ns?p=2", 62 | "notebook_video":"", 63 | "qa_video":"" 64 | }, 65 | { 66 | "title":"预训练BERT", 67 | "day_break":false, 68 | "book":"chapter_natural-language-processing-pretraining/bert-pretraining.html", 69 | "slides":["part-0.pdf",0], 70 | "slides_video":"https://www.bilibili.com/video/BV1yU4y1E7Ns?p=3", 71 | "notebook_video":"", 72 | "qa_video":"" 73 | }, 74 | { 75 | "title":"微调BERT", 76 | "day_break":false, 77 | "book":"chapter_natural-language-processing-applications/finetuning-bert.html", 78 | "slides":["part-05.pdf",6], 79 | "slides_video":"https://www.bilibili.com/video/BV15L4y1v7ts", 80 | "notebook_video":"", 81 | "qa_video":"" 82 | }, 83 | { 84 | "title":"自然语言推理和数据集", 85 | "day_break":false, 86 | "book":"chapter_natural-language-processing-applications/natural-language-inference-and-dataset.html", 87 | "slides":["part-0.pdf",0], 88 | "slides_video":"https://www.bilibili.com/video/BV15L4y1v7ts?p=2", 89 | "notebook_video":"", 90 | "qa_video":"" 91 | }, 92 | { 93 | "title":"自然语言推理:微调BERT", 94 | "day_break":false, 95 | "book":"chapter_natural-language-processing-applications/natural-language-inference-bert.html", 96 | "slides":["part-0.pdf",0], 97 | "slides_video":"https://www.bilibili.com/video/BV15L4y1v7ts?p=3", 98 | "notebook_video":"", 99 | 
"qa_video":"" 100 | }, 101 | { 102 | "title":"**竞赛**{: .label } 目标检测总结", 103 | "day_break":false, 104 | "book":"", 105 | "slides":["part-07.pdf",6], 106 | "slides_video":"https://www.bilibili.com/video/BV13b4y1m7y8", 107 | "notebook_video":"", 108 | "qa_video":"" 109 | }, 110 | { 111 | "title":"优化算法", 112 | "day_break":true, 113 | "book":"chapter_optimization/index.html", 114 | "slides":["part-07.pdf",14], 115 | "slides_video":"https://www.bilibili.com/video/BV1bP4y1p7Gq", 116 | "notebook_video":"", 117 | "qa_video":"" 118 | }, 119 | { 120 | "title":"课程总结和进阶学习", 121 | "day_break":false, 122 | "book":"", 123 | "slides":["part-07.pdf",11], 124 | "slides_video":"https://www.bilibili.com/video/BV1AL4y1Y7gu", 125 | "notebook_video":"", 126 | "qa_video":"" 127 | } 128 | ] -------------------------------------------------------------------------------- /_modules/part_4.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: 注意力机制 3 | --- 4 | 5 | 8月8日 6 | 7 | : 注意力机制 8 | : [](https://zh-v2.d2l.ai/chapter_attention-mechanisms/nadaraya-waston.html) 9 | : [](assets/pdfs/part-4_1.pdf) 10 | : [](assets/notebooks/chapter_attention-mechanisms/nadaraya-waston.slides.html) 11 | : [](https://www.bilibili.com/video/BV1264y1i7R1/) 12 | 13 | : 注意力分数 14 | : [](https://zh-v2.d2l.ai/chapter_attention-mechanisms/attention-scoring-functions.html) 15 | : [](assets/pdfs/part-4_2.pdf) 16 | : [](assets/notebooks/chapter_attention-mechanisms/attention-scoring-functions.slides.html) 17 | : [](https://www.bilibili.com/video/BV1Tb4y167rb/) 18 | 19 | : 使用注意力机制的seq2seq 20 | : [](https://zh-v2.d2l.ai/chapter_attention-mechanisms/bahdanau-attention.html) 21 | : [](assets/pdfs/part-4_3.pdf) 22 | : [](assets/notebooks/chapter_attention-mechanisms/bahdanau-attention.slides.html) 23 | : [](https://www.bilibili.com/video/BV1v44y1C7Tg/) 24 | 25 | 26 | 8月14日 27 | 28 | : 自注意力和位置编码 29 | : 
[](https://zh-v2.d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html) 30 | : [](assets/pdfs/part-4_4.pdf) 31 | : [](assets/notebooks/chapter_attention-mechanisms/self-attention-and-positional-encoding.slides.html) 32 | : [](https://www.bilibili.com/video/BV19o4y1m7mo/) 33 | 34 | : Transformer 35 | : [](https://zh-v2.d2l.ai/chapter_attention-mechanisms/transformer.html) 36 | : [](assets/pdfs/part-4_5.pdf) 37 | : [](assets/notebooks/chapter_attention-mechanisms/transformer.slides.html) 38 | : [](https://www.bilibili.com/video/BV1Kq4y1H7FL/) 39 | 40 | 41 | 8月15日 42 | 43 | : BERT 44 | : [](https://zh-v2.d2l.ai/chapter_natural-language-processing-pretraining/bert.html) 45 | : [](assets/pdfs/part-4_6.pdf) 46 | : [](assets/notebooks/chapter_natural-language-processing-pretraining/bert.slides.html) 47 | : [](https://www.bilibili.com/video/BV1yU4y1E7Ns/) 48 | 49 | : BERT预训练数据集 50 | : [](https://zh-v2.d2l.ai/chapter_natural-language-processing-pretraining/bert-dataset.html) 51 | :   52 | : [](assets/notebooks/chapter_natural-language-processing-pretraining/bert-dataset.slides.html) 53 | : [](https://www.bilibili.com/video/BV1yU4y1E7Ns?p=2) 54 | 55 | : 预训练BERT 56 | : [](https://zh-v2.d2l.ai/chapter_natural-language-processing-pretraining/bert-pretraining.html) 57 | :   58 | : [](assets/notebooks/chapter_natural-language-processing-pretraining/bert-pretraining.slides.html) 59 | : [](https://www.bilibili.com/video/BV1yU4y1E7Ns?p=3) 60 | 61 | : 微调BERT 62 | : [](https://zh-v2.d2l.ai/chapter_natural-language-processing-applications/finetuning-bert.html) 63 | : [](assets/pdfs/part-4_7.pdf) 64 | :   65 | : [](https://www.bilibili.com/video/BV15L4y1v7ts) 66 | 67 | : 自然语言推理和数据集 68 | : [](https://zh-v2.d2l.ai/chapter_natural-language-processing-applications/natural-language-inference-and-dataset.html) 69 | :   70 | : [](assets/notebooks/chapter_natural-language-processing-applications/natural-language-inference-and-dataset.slides.html) 71 | : 
[](https://www.bilibili.com/video/BV15L4y1v7ts?p=2) 72 | 73 | : 自然语言推理:微调BERT 74 | : [](https://zh-v2.d2l.ai/chapter_natural-language-processing-applications/natural-language-inference-bert.html) 75 | :   76 | : [](assets/notebooks/chapter_natural-language-processing-applications/natural-language-inference-bert.slides.html) 77 | : [](https://www.bilibili.com/video/BV15L4y1v7ts?p=3) 78 | 79 | : **竞赛**{: .label } 目标检测总结 80 | :   81 | : [](assets/pdfs/part-4_8.pdf) 82 | :   83 | : [](https://www.bilibili.com/video/BV13b4y1m7y8) 84 | 85 | 86 | 8月21日 87 | 88 | : 优化算法 89 | : [](https://zh-v2.d2l.ai/chapter_optimization/index.html) 90 | : [](assets/pdfs/part-4_9.pdf) 91 | :   92 | : [](https://www.bilibili.com/video/BV1bP4y1p7Gq) 93 | 94 | : 课程总结和进阶学习 95 | :   96 | : [](assets/pdfs/part-4_10.pdf) 97 | :   98 | : [](https://www.bilibili.com/video/BV1AL4y1Y7gu) 99 | 100 | -------------------------------------------------------------------------------- /_sass/color_schemes/d2l.scss: -------------------------------------------------------------------------------- 1 | $link-color: rgb(33,150,243); 2 | -------------------------------------------------------------------------------- /_sass/custom/announcement.scss: -------------------------------------------------------------------------------- 1 | .announcement { 2 | @extend %card; 3 | 4 | h1, h2 { 5 | @extend .text-gamma; 6 | } 7 | 8 | .announcement-meta { 9 | @extend .text-epsilon; 10 | } 11 | } 12 | -------------------------------------------------------------------------------- /_sass/custom/card.scss: -------------------------------------------------------------------------------- 1 | @mixin abstract-card() { 2 | box-shadow: 0 1px 3px rgba(0, 0, 0, 0.07), 0 4px 14px rgba(0, 0, 0, 0.05); 3 | margin: $sp-4 (-$gutter-spacing-sm); 4 | 5 | @include mq(md) { 6 | border-radius: $border-radius; 7 | margin: $sp-4 0; 8 | } 9 | } 10 | 11 | %card { 12 | @include abstract-card(); 13 | display: flex; 14 | flex-direction: column;
min-width: 0; 16 | padding: 0 $sp-4; 17 | position: relative; 18 | word-wrap: break-word; 19 | 20 | >:first-child { 21 | border-top: none !important; 22 | } 23 | 24 | >:last-child { 25 | border-bottom: none !important; 26 | } 27 | 28 | .label { 29 | border-radius: $border-radius; 30 | margin-left: 0; 31 | user-select: none; 32 | } 33 | } 34 | -------------------------------------------------------------------------------- /_sass/custom/custom.scss: -------------------------------------------------------------------------------- 1 | // Just the Class dependencies 2 | @import 'card'; 3 | 4 | // Just the Class styles 5 | @import 'announcement'; 6 | @import 'module'; 7 | @import 'schedule'; 8 | @import 'staffer'; 9 | 10 | // Overrides 11 | a abbr[title] { 12 | border-bottom: none; 13 | } 14 | 15 | abbr[title] { 16 | text-decoration: none; 17 | } 18 | 19 | code { 20 | font-size: 14px; 21 | padding: 0.2em 0.4em; 22 | border: none; 23 | } 24 | 25 | div.highlighter-rouge[overlay] { 26 | position: relative; 27 | 28 | &::after { 29 | @extend .label, .text-grey-dk-100; 30 | 31 | background-color: $white; 32 | border-radius: $border-radius; 33 | bottom: $sp-2; 34 | content: attr(overlay); 35 | position: absolute; 36 | right: 0; 37 | user-select: none; 38 | } 39 | } 40 | 41 | details { 42 | margin: 0 40px 1em; 43 | } 44 | 45 | h1, h2, h3, h4, h5, h6 { 46 | align-items: center; 47 | display: flex; 48 | } 49 | 50 | iframe, 51 | summary { 52 | max-width: 100%; 53 | } 54 | 55 | summary { 56 | @extend .btn, .btn-outline; 57 | } 58 | 59 | .main-content-wrap { 60 | max-width: $content-width; 61 | margin: auto; 62 | } 63 | 64 | .main-content { 65 | a { 66 | overflow-wrap: anywhere; 67 | white-space: normal; 68 | } 69 | 70 | dl { 71 | display: block; 72 | grid-template-columns: none; 73 | } 74 | 75 | dt { 76 | font-weight: normal; 77 | text-align: start; 78 | 79 | &::after { 80 | content: normal; 81 | } 82 | } 83 | 84 | dd { 85 | font-weight: normal; 86 | 87 | + dt { 88 | margin-top: 
1em; 89 | } 90 | } 91 | 92 | .katex { 93 | font-size: 1.1em; 94 | } 95 | } 96 | 97 | [style*="--aspect-ratio"] > :first-child { 98 | width: 100%; 99 | } 100 | 101 | [style*="--aspect-ratio"] > img { 102 | height: auto; 103 | } 104 | 105 | @supports (--custom:property) { 106 | [style*="--aspect-ratio"] { 107 | position: relative; 108 | } 109 | 110 | [style*="--aspect-ratio"]::before { 111 | content: ""; 112 | display: block; 113 | padding-bottom: calc(100% / (var(--aspect-ratio))); 114 | } 115 | 116 | [style*="--add-height"]::before { 117 | padding-bottom: calc(100% / (var(--aspect-ratio)) + (var(--add-height))); 118 | } 119 | 120 | [style*="--aspect-ratio"] > :first-child { 121 | position: absolute; 122 | top: 0; 123 | left: 0; 124 | height: 100%; 125 | } 126 | } 127 | 128 | .responsive-video-container { 129 | position: relative; 130 | margin-bottom: 1em; 131 | padding-bottom: 56.25%; 132 | height: 0; 133 | overflow: hidden; 134 | max-width: 100%; 135 | 136 | iframe, 137 | object, 138 | embed { 139 | position: absolute; 140 | top: 0; 141 | left: 0; 142 | width: 100%; 143 | height: 100%; 144 | } 145 | } -------------------------------------------------------------------------------- /_sass/custom/module.scss: -------------------------------------------------------------------------------- 1 | .main-content .module, 2 | .module { 3 | @extend %card; 4 | 5 | h1, 6 | h2, 7 | h3, 8 | h4, 9 | h5, 10 | h6 { 11 | &:first-child { 12 | margin-top: $sp-4; 13 | } 14 | } 15 | 16 | >dl { 17 | border-bottom: $border $border-color; 18 | border-top: $border $border-color; 19 | display: grid; 20 | grid-template-columns: max-content 1fr; 21 | margin: $sp-2 (-$sp-4); 22 | 23 | &:first-child { 24 | margin-top: 0; 25 | } 26 | 27 | &:last-child { 28 | margin-bottom: 0; 29 | } 30 | 31 | @include mq(lg) { 32 | grid-template-columns: 1fr 7fr; 33 | } 34 | 35 | %module-item { 36 | margin: 0; 37 | padding: $sp-2; 38 | 39 | @include mq(sm) { 40 | padding: $sp-2 $sp-4; 41 | } 42 | } 43 | 44 | >dt 
{ 45 | @extend %module-item; 46 | border-top: $border $border-color; 47 | font-weight: normal; 48 | text-align: right; 49 | width: 100px; 50 | 51 | +dd { 52 | border-top: $border $border-color; 53 | } 54 | 55 | &:first-child { 56 | border-top: none; 57 | 58 | +dd { 59 | border-top: none; 60 | } 61 | } 62 | 63 | &::after { 64 | content: ":"; 65 | } 66 | } 67 | 68 | >dd { 69 | @extend %module-item; 70 | 71 | 72 | a { 73 | background-image: none; 74 | } 75 | i { 76 | padding-right: 0.2em; 77 | color: gray; 78 | &:hover { 79 | color: #2196f3; 80 | } 81 | } 82 | +dd { 83 | padding-top: 0; 84 | } 85 | 86 | ol, ul, dl { 87 | margin: 0; 88 | } 89 | @media only screen and (max-width: 500px) { 90 | dl { 91 | display: flex; 92 | flex-direction: row; 93 | flex-flow: row wrap; 94 | 95 | dt { 96 | width: 100%; 97 | margin: 0; 98 | padding-bottom: 0.2em; 99 | } 100 | 101 | dd { 102 | font-size: 80%; 103 | width: 10%; 104 | margin: 0; 105 | padding-bottom: 0.4em; 106 | } 107 | } 108 | 109 | } 110 | @media only screen and (min-width: 500px) { 111 | dl { 112 | display: flex; 113 | flex-direction: column; 114 | 115 | @include mq(sm) { 116 | flex-direction: row; 117 | } 118 | 119 | dt { 120 | flex: 0 0 66%; 121 | margin: 0; 122 | } 123 | 124 | dd { 125 | font-size: 80%; 126 | flex: 0 0 8%; 127 | margin: 0; 128 | } 129 | } 130 | } 131 | 132 | } 133 | } 134 | } 135 | -------------------------------------------------------------------------------- /_sass/custom/schedule.scss: -------------------------------------------------------------------------------- 1 | .schedule { 2 | @include abstract-card(); 3 | overflow-x: scroll; 4 | position: relative; 5 | 6 | li::before { 7 | display: none; 8 | } 9 | 10 | ul.schedule-timeline, 11 | ul.schedule-group, 12 | ul.schedule-events { 13 | margin-top: 0; 14 | padding-left: 0; 15 | } 16 | 17 | ul.schedule-timeline { 18 | margin: 40px auto 0; 19 | position: absolute; 20 | width: 100%; 21 | } 22 | 23 | .schedule-time { 24 | @extend .fs-2; 25 | color: 
$grey-dk-000; 26 | height: 40px; 27 | margin: 0; 28 | padding: $sp-2; 29 | position: relative; 30 | 31 | &::after { 32 | background-color: $border-color; 33 | content: ''; 34 | height: 1px; 35 | left: 0; 36 | position: absolute; 37 | top: 0; 38 | width: 100%; 39 | } 40 | } 41 | 42 | .schedule-group { 43 | display: flex; 44 | margin-bottom: 0; 45 | position: relative; 46 | } 47 | 48 | .schedule-day { 49 | border-left: $border $border-color; 50 | flex: 1 0 0; 51 | margin: 0; 52 | min-width: 130px; 53 | 54 | &:first-of-type { 55 | border-left: 0; 56 | } 57 | } 58 | 59 | h2.schedule-header { 60 | align-items: center; 61 | display: flex; 62 | font-size: 18px !important; 63 | height: 40px; 64 | justify-content: center; 65 | margin: 0; 66 | } 67 | 68 | .schedule-events { 69 | display: flex; 70 | padding: 0; 71 | position: relative; 72 | } 73 | 74 | .schedule-event { 75 | background-color: $grey-dk-000; 76 | border-radius: $border-radius; 77 | box-shadow: 0 10px 20px rgba(0, 0, 0, .1), inset 0 -3px 0 rgba(0, 0, 0, .2); 78 | color: $white; 79 | float: left; 80 | height: 100%; 81 | margin: 0; 82 | padding: $sp-1 $sp-2; 83 | position: absolute; 84 | width: 100%; 85 | 86 | .name { 87 | @extend .fs-3, .fw-700; 88 | } 89 | 90 | .time, 91 | .location { 92 | @extend .fs-2; 93 | } 94 | 95 | &.lecture { 96 | background-color: $grey-dk-000; 97 | } 98 | 99 | &.section { 100 | background-color: $purple-000; 101 | } 102 | 103 | &.office-hours { 104 | background-color: $blue-000; 105 | } 106 | } 107 | } 108 | -------------------------------------------------------------------------------- /_sass/custom/staffer.scss: -------------------------------------------------------------------------------- 1 | .staffer { 2 | display: block; 3 | margin: $sp-4; 4 | float: left; 5 | width: 100%; 6 | 7 | .staffer-info { 8 | display: block; 9 | } 10 | 11 | .staffer-image { 12 | border-radius: 50%; 13 | height: 100px; 14 | margin-right: 1.5em; 15 | float: left; 16 | 17 | } 18 | 19 | p, 20 | .staffer-name 
{ 21 | margin: $sp-1 !important; 22 | } 23 | 24 | .staffer-pronouns { 25 | @extend .label, .text-grey-dk-100, .bg-grey-lt-200; 26 | } 27 | 28 | .staffer-meta { 29 | @extend .text-grey-dk-000; 30 | } 31 | } 32 | -------------------------------------------------------------------------------- /_staffers/mu.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: 李沐 3 | website: https://github.com/mli 4 | photo: mu.jpg 5 | --- 6 | 7 | AWS 资深首席科学家
美国卡内基梅隆大学计算机系博士 -------------------------------------------------------------------------------- /announcements.md: -------------------------------------------------------------------------------- 1 | --- 2 | layout: page 3 | title: 课程公告 4 | nav_exclude: true 5 | description: A feed containing all of the class announcements. 6 | --- 7 | 8 | # 课程公告 9 | 10 | {% assign announcements = site.announcements | reverse %} 11 | {% for announcement in announcements %} 12 | {{ announcement }} 13 | {% endfor %} 14 | -------------------------------------------------------------------------------- /assets/images/logo-with-text.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/d2l-ai/courses-zh-v2/3d1b8b89eae0c836a24420051d33f2876a32e4d4/assets/images/logo-with-text.png -------------------------------------------------------------------------------- /assets/images/mu.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/d2l-ai/courses-zh-v2/3d1b8b89eae0c836a24420051d33f2876a32e4d4/assets/images/mu.jpg -------------------------------------------------------------------------------- /assets/pdfs/part-0_1.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/d2l-ai/courses-zh-v2/3d1b8b89eae0c836a24420051d33f2876a32e4d4/assets/pdfs/part-0_1.pdf -------------------------------------------------------------------------------- /assets/pdfs/part-0_2.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/d2l-ai/courses-zh-v2/3d1b8b89eae0c836a24420051d33f2876a32e4d4/assets/pdfs/part-0_2.pdf -------------------------------------------------------------------------------- /assets/pdfs/part-0_3.pdf: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/d2l-ai/courses-zh-v2/3d1b8b89eae0c836a24420051d33f2876a32e4d4/assets/pdfs/part-0_3.pdf -------------------------------------------------------------------------------- /assets/pdfs/part-0_4.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/d2l-ai/courses-zh-v2/3d1b8b89eae0c836a24420051d33f2876a32e4d4/assets/pdfs/part-0_4.pdf -------------------------------------------------------------------------------- /assets/pdfs/part-0_5.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/d2l-ai/courses-zh-v2/3d1b8b89eae0c836a24420051d33f2876a32e4d4/assets/pdfs/part-0_5.pdf -------------------------------------------------------------------------------- /assets/pdfs/part-0_6.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/d2l-ai/courses-zh-v2/3d1b8b89eae0c836a24420051d33f2876a32e4d4/assets/pdfs/part-0_6.pdf -------------------------------------------------------------------------------- /assets/pdfs/part-0_7.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/d2l-ai/courses-zh-v2/3d1b8b89eae0c836a24420051d33f2876a32e4d4/assets/pdfs/part-0_7.pdf -------------------------------------------------------------------------------- /assets/pdfs/part-0_8.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/d2l-ai/courses-zh-v2/3d1b8b89eae0c836a24420051d33f2876a32e4d4/assets/pdfs/part-0_8.pdf -------------------------------------------------------------------------------- /contents.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "title":"Preface", 4 | "day_break":false, 5 | "book":"chapter_preface/index.html", 6 | "slides":["part-0.pdf",0], 7 | 
"slides_video":"", 8 | "notebook_video":"", 9 | "qa_video":"" 10 | }, 11 | { 12 | "title":"Installation", 13 | "day_break":false, 14 | "book":"chapter_installation/index.html", 15 | "slides":["part-0.pdf",0], 16 | "slides_video":"", 17 | "notebook_video":"", 18 | "qa_video":"" 19 | }, 20 | { 21 | "title":"Notation", 22 | "day_break":false, 23 | "book":"chapter_notation/index.html", 24 | "slides":["part-0.pdf",0], 25 | "slides_video":"", 26 | "notebook_video":"", 27 | "qa_video":"" 28 | }, 29 | { 30 | "title":"Introduction", 31 | "day_break":false, 32 | "book":"chapter_introduction/index.html", 33 | "slides":["part-0.pdf",0], 34 | "slides_video":"", 35 | "notebook_video":"", 36 | "qa_video":"" 37 | }, 38 | { 39 | "title":"Preliminaries", 40 | "day_break":false, 41 | "book":"chapter_preliminaries/index.html", 42 | "slides":["part-0.pdf",0], 43 | "slides_video":"", 44 | "notebook_video":"", 45 | "qa_video":"" 46 | }, 47 | { 48 | "title":"Data Manipulation", 49 | "day_break":false, 50 | "book":"chapter_preliminaries/ndarray.html", 51 | "slides":["part-0.pdf",0], 52 | "slides_video":"", 53 | "notebook_video":"", 54 | "qa_video":"" 55 | }, 56 | { 57 | "title":"Data Preprocessing", 58 | "day_break":false, 59 | "book":"chapter_preliminaries/pandas.html", 60 | "slides":["part-0.pdf",0], 61 | "slides_video":"", 62 | "notebook_video":"", 63 | "qa_video":"" 64 | }, 65 | { 66 | "title":"Linear Algebra", 67 | "day_break":false, 68 | "book":"chapter_preliminaries/linear-algebra.html", 69 | "slides":["part-0.pdf",0], 70 | "slides_video":"", 71 | "notebook_video":"", 72 | "qa_video":"" 73 | }, 74 | { 75 | "title":"Calculus", 76 | "day_break":false, 77 | "book":"chapter_preliminaries/calculus.html", 78 | "slides":["part-0.pdf",0], 79 | "slides_video":"", 80 | "notebook_video":"", 81 | "qa_video":"" 82 | }, 83 | { 84 | "title":"Automatic Differentiation", 85 | "day_break":false, 86 | "book":"chapter_preliminaries/autograd.html", 87 | "slides":["part-0.pdf",0], 88 | "slides_video":"", 
89 | "notebook_video":"", 90 | "qa_video":"" 91 | }, 92 | { 93 | "title":"Probability", 94 | "day_break":false, 95 | "book":"chapter_preliminaries/probability.html", 96 | "slides":["part-0.pdf",0], 97 | "slides_video":"", 98 | "notebook_video":"", 99 | "qa_video":"" 100 | }, 101 | { 102 | "title":"Documentation", 103 | "day_break":false, 104 | "book":"chapter_preliminaries/lookup-api.html", 105 | "slides":["part-0.pdf",0], 106 | "slides_video":"", 107 | "notebook_video":"", 108 | "qa_video":"" 109 | }, 110 | { 111 | "title":"Linear Neural Networks", 112 | "day_break":false, 113 | "book":"chapter_linear-networks/index.html", 114 | "slides":["part-0.pdf",0], 115 | "slides_video":"", 116 | "notebook_video":"", 117 | "qa_video":"" 118 | }, 119 | { 120 | "title":"Linear Regression", 121 | "day_break":false, 122 | "book":"chapter_linear-networks/linear-regression.html", 123 | "slides":["part-0.pdf",0], 124 | "slides_video":"", 125 | "notebook_video":"", 126 | "qa_video":"" 127 | }, 128 | { 129 | "title":"The D2L APIs", 130 | "day_break":false, 131 | "book":"chapter_linear-networks/api.html", 132 | "slides":["part-0.pdf",0], 133 | "slides_video":"", 134 | "notebook_video":"", 135 | "qa_video":"" 136 | }, 137 | { 138 | "title":"Synthetic Regression Data", 139 | "day_break":false, 140 | "book":"chapter_linear-networks/synthetic-regression-data.html", 141 | "slides":["part-0.pdf",0], 142 | "slides_video":"", 143 | "notebook_video":"", 144 | "qa_video":"" 145 | }, 146 | { 147 | "title":"Linear Regression Implementation from Scratch", 148 | "day_break":false, 149 | "book":"chapter_linear-networks/linear-regression-scratch.html", 150 | "slides":["part-0.pdf",0], 151 | "slides_video":"", 152 | "notebook_video":"", 153 | "qa_video":"" 154 | }, 155 | { 156 | "title":"Concise Implementation of Linear Regression", 157 | "day_break":false, 158 | "book":"chapter_linear-networks/linear-regression-concise.html", 159 | "slides":["part-0.pdf",0], 160 | "slides_video":"", 161 | 
"notebook_video":"", 162 | "qa_video":"" 163 | }, 164 | { 165 | "title":"Softmax Regression", 166 | "day_break":false, 167 | "book":"chapter_linear-networks/softmax-regression.html", 168 | "slides":["part-0.pdf",0], 169 | "slides_video":"", 170 | "notebook_video":"", 171 | "qa_video":"" 172 | }, 173 | { 174 | "title":"The Image Classification Dataset", 175 | "day_break":false, 176 | "book":"chapter_linear-networks/image-classification-dataset.html", 177 | "slides":["part-0.pdf",0], 178 | "slides_video":"", 179 | "notebook_video":"", 180 | "qa_video":"" 181 | }, 182 | { 183 | "title":"The Base Classification Model", 184 | "day_break":false, 185 | "book":"chapter_linear-networks/classification.html", 186 | "slides":["part-0.pdf",0], 187 | "slides_video":"", 188 | "notebook_video":"", 189 | "qa_video":"" 190 | }, 191 | { 192 | "title":"Softmax Regression Implementation from Scratch", 193 | "day_break":false, 194 | "book":"chapter_linear-networks/softmax-regression-scratch.html", 195 | "slides":["part-0.pdf",0], 196 | "slides_video":"", 197 | "notebook_video":"", 198 | "qa_video":"" 199 | }, 200 | { 201 | "title":"Concise Implementation of Softmax Regression", 202 | "day_break":false, 203 | "book":"chapter_linear-networks/softmax-regression-concise.html", 204 | "slides":["part-0.pdf",0], 205 | "slides_video":"", 206 | "notebook_video":"", 207 | "qa_video":"" 208 | }, 209 | { 210 | "title":"Multilayer Perceptrons", 211 | "day_break":false, 212 | "book":"chapter_multilayer-perceptrons/index.html", 213 | "slides":["part-0.pdf",0], 214 | "slides_video":"", 215 | "notebook_video":"", 216 | "qa_video":"" 217 | }, 218 | { 219 | "title":"Multilayer Perceptrons", 220 | "day_break":false, 221 | "book":"chapter_multilayer-perceptrons/mlp.html", 222 | "slides":["part-0.pdf",0], 223 | "slides_video":"", 224 | "notebook_video":"", 225 | "qa_video":"" 226 | }, 227 | { 228 | "title":"Multilayer Perceptron Implementation from Scratch", 229 | "day_break":false, 230 | 
"book":"chapter_multilayer-perceptrons/mlp-scratch.html", 231 | "slides":["part-0.pdf",0], 232 | "slides_video":"", 233 | "notebook_video":"", 234 | "qa_video":"" 235 | }, 236 | { 237 | "title":"Concise Implementation of Multilayer Perceptrons", 238 | "day_break":false, 239 | "book":"chapter_multilayer-perceptrons/mlp-concise.html", 240 | "slides":["part-0.pdf",0], 241 | "slides_video":"", 242 | "notebook_video":"", 243 | "qa_video":"" 244 | }, 245 | { 246 | "title":"Model Selection, Underfitting, and Overfitting", 247 | "day_break":false, 248 | "book":"chapter_multilayer-perceptrons/underfit-overfit.html", 249 | "slides":["part-0.pdf",0], 250 | "slides_video":"", 251 | "notebook_video":"", 252 | "qa_video":"" 253 | }, 254 | { 255 | "title":"Weight Decay", 256 | "day_break":false, 257 | "book":"chapter_multilayer-perceptrons/weight-decay.html", 258 | "slides":["part-0.pdf",0], 259 | "slides_video":"", 260 | "notebook_video":"", 261 | "qa_video":"" 262 | }, 263 | { 264 | "title":"Dropout", 265 | "day_break":false, 266 | "book":"chapter_multilayer-perceptrons/dropout.html", 267 | "slides":["part-0.pdf",0], 268 | "slides_video":"", 269 | "notebook_video":"", 270 | "qa_video":"" 271 | }, 272 | { 273 | "title":"Forward Propagation, Backward Propagation, and Computational Graphs", 274 | "day_break":false, 275 | "book":"chapter_multilayer-perceptrons/backprop.html", 276 | "slides":["part-0.pdf",0], 277 | "slides_video":"", 278 | "notebook_video":"", 279 | "qa_video":"" 280 | }, 281 | { 282 | "title":"Numerical Stability and Initialization", 283 | "day_break":false, 284 | "book":"chapter_multilayer-perceptrons/numerical-stability-and-init.html", 285 | "slides":["part-0.pdf",0], 286 | "slides_video":"", 287 | "notebook_video":"", 288 | "qa_video":"" 289 | }, 290 | { 291 | "title":"Environment and Distribution Shift", 292 | "day_break":false, 293 | "book":"chapter_multilayer-perceptrons/environment.html", 294 | "slides":["part-0.pdf",0], 295 | "slides_video":"", 296 | 
"notebook_video":"", 297 | "qa_video":"" 298 | }, 299 | { 300 | "title":"Predicting House Prices on Kaggle", 301 | "day_break":false, 302 | "book":"chapter_multilayer-perceptrons/kaggle-house-price.html", 303 | "slides":["part-0.pdf",0], 304 | "slides_video":"", 305 | "notebook_video":"", 306 | "qa_video":"" 307 | }, 308 | { 309 | "title":"Deep Learning Computation", 310 | "day_break":false, 311 | "book":"chapter_deep-learning-computation/index.html", 312 | "slides":["part-0.pdf",0], 313 | "slides_video":"", 314 | "notebook_video":"", 315 | "qa_video":"" 316 | }, 317 | { 318 | "title":"Layers and Blocks", 319 | "day_break":false, 320 | "book":"chapter_deep-learning-computation/model-construction.html", 321 | "slides":["part-0.pdf",0], 322 | "slides_video":"", 323 | "notebook_video":"", 324 | "qa_video":"" 325 | }, 326 | { 327 | "title":"Parameter Management", 328 | "day_break":false, 329 | "book":"chapter_deep-learning-computation/parameters.html", 330 | "slides":["part-0.pdf",0], 331 | "slides_video":"", 332 | "notebook_video":"", 333 | "qa_video":"" 334 | }, 335 | { 336 | "title":"Deferred Initialization", 337 | "day_break":false, 338 | "book":"chapter_deep-learning-computation/deferred-init.html", 339 | "slides":["part-0.pdf",0], 340 | "slides_video":"", 341 | "notebook_video":"", 342 | "qa_video":"" 343 | }, 344 | { 345 | "title":"Custom Layers", 346 | "day_break":false, 347 | "book":"chapter_deep-learning-computation/custom-layer.html", 348 | "slides":["part-0.pdf",0], 349 | "slides_video":"", 350 | "notebook_video":"", 351 | "qa_video":"" 352 | }, 353 | { 354 | "title":"File I/O", 355 | "day_break":false, 356 | "book":"chapter_deep-learning-computation/read-write.html", 357 | "slides":["part-0.pdf",0], 358 | "slides_video":"", 359 | "notebook_video":"", 360 | "qa_video":"" 361 | }, 362 | { 363 | "title":"GPUs", 364 | "day_break":false, 365 | "book":"chapter_deep-learning-computation/use-gpu.html", 366 | "slides":["part-0.pdf",0], 367 | "slides_video":"", 368 
| "notebook_video":"", 369 | "qa_video":"" 370 | }, 371 | { 372 | "title":"Convolutional Neural Networks", 373 | "day_break":false, 374 | "book":"chapter_convolutional-neural-networks/index.html", 375 | "slides":["part-0.pdf",0], 376 | "slides_video":"", 377 | "notebook_video":"", 378 | "qa_video":"" 379 | }, 380 | { 381 | "title":"From Fully Connected Layers to Convolutions", 382 | "day_break":false, 383 | "book":"chapter_convolutional-neural-networks/why-conv.html", 384 | "slides":["part-0.pdf",0], 385 | "slides_video":"", 386 | "notebook_video":"", 387 | "qa_video":"" 388 | }, 389 | { 390 | "title":"Convolutions for Images", 391 | "day_break":false, 392 | "book":"chapter_convolutional-neural-networks/conv-layer.html", 393 | "slides":["part-0.pdf",0], 394 | "slides_video":"", 395 | "notebook_video":"", 396 | "qa_video":"" 397 | }, 398 | { 399 | "title":"Padding and Stride", 400 | "day_break":false, 401 | "book":"chapter_convolutional-neural-networks/padding-and-strides.html", 402 | "slides":["part-0.pdf",0], 403 | "slides_video":"", 404 | "notebook_video":"", 405 | "qa_video":"" 406 | }, 407 | { 408 | "title":"Multiple Input and Multiple Output Channels", 409 | "day_break":false, 410 | "book":"chapter_convolutional-neural-networks/channels.html", 411 | "slides":["part-0.pdf",0], 412 | "slides_video":"", 413 | "notebook_video":"", 414 | "qa_video":"" 415 | }, 416 | { 417 | "title":"Pooling", 418 | "day_break":false, 419 | "book":"chapter_convolutional-neural-networks/pooling.html", 420 | "slides":["part-0.pdf",0], 421 | "slides_video":"", 422 | "notebook_video":"", 423 | "qa_video":"" 424 | }, 425 | { 426 | "title":"Convolutional Neural Networks (LeNet)", 427 | "day_break":false, 428 | "book":"chapter_convolutional-neural-networks/lenet.html", 429 | "slides":["part-0.pdf",0], 430 | "slides_video":"", 431 | "notebook_video":"", 432 | "qa_video":"" 433 | }, 434 | { 435 | "title":"Modern Convolutional Neural Networks", 436 | "day_break":false, 437 | 
"book":"chapter_convolutional-modern/index.html", 438 | "slides":["part-0.pdf",0], 439 | "slides_video":"", 440 | "notebook_video":"", 441 | "qa_video":"" 442 | }, 443 | { 444 | "title":"Deep Convolutional Neural Networks (AlexNet)", 445 | "day_break":false, 446 | "book":"chapter_convolutional-modern/alexnet.html", 447 | "slides":["part-0.pdf",0], 448 | "slides_video":"", 449 | "notebook_video":"", 450 | "qa_video":"" 451 | }, 452 | { 453 | "title":"Networks Using Blocks (VGG)", 454 | "day_break":false, 455 | "book":"chapter_convolutional-modern/vgg.html", 456 | "slides":["part-0.pdf",0], 457 | "slides_video":"", 458 | "notebook_video":"", 459 | "qa_video":"" 460 | }, 461 | { 462 | "title":"Network in Network (NiN)", 463 | "day_break":false, 464 | "book":"chapter_convolutional-modern/nin.html", 465 | "slides":["part-0.pdf",0], 466 | "slides_video":"", 467 | "notebook_video":"", 468 | "qa_video":"" 469 | }, 470 | { 471 | "title":"Networks with Parallel Concatenations (GoogLeNet)", 472 | "day_break":false, 473 | "book":"chapter_convolutional-modern/googlenet.html", 474 | "slides":["part-0.pdf",0], 475 | "slides_video":"", 476 | "notebook_video":"", 477 | "qa_video":"" 478 | }, 479 | { 480 | "title":"Batch Normalization", 481 | "day_break":false, 482 | "book":"chapter_convolutional-modern/batch-norm.html", 483 | "slides":["part-0.pdf",0], 484 | "slides_video":"", 485 | "notebook_video":"", 486 | "qa_video":"" 487 | }, 488 | { 489 | "title":"Residual Networks (ResNet)", 490 | "day_break":false, 491 | "book":"chapter_convolutional-modern/resnet.html", 492 | "slides":["part-0.pdf",0], 493 | "slides_video":"", 494 | "notebook_video":"", 495 | "qa_video":"" 496 | }, 497 | { 498 | "title":"Densely Connected Networks (DenseNet)", 499 | "day_break":false, 500 | "book":"chapter_convolutional-modern/densenet.html", 501 | "slides":["part-0.pdf",0], 502 | "slides_video":"", 503 | "notebook_video":"", 504 | "qa_video":"" 505 | }, 506 | { 507 | "title":"Recurrent Neural Networks", 
508 | "day_break":false, 509 | "book":"chapter_recurrent-neural-networks/index.html", 510 | "slides":["part-0.pdf",0], 511 | "slides_video":"", 512 | "notebook_video":"", 513 | "qa_video":"" 514 | }, 515 | { 516 | "title":"Sequence Models", 517 | "day_break":false, 518 | "book":"chapter_recurrent-neural-networks/sequence.html", 519 | "slides":["part-0.pdf",0], 520 | "slides_video":"", 521 | "notebook_video":"", 522 | "qa_video":"" 523 | }, 524 | { 525 | "title":"Text Preprocessing", 526 | "day_break":false, 527 | "book":"chapter_recurrent-neural-networks/text-preprocessing.html", 528 | "slides":["part-0.pdf",0], 529 | "slides_video":"", 530 | "notebook_video":"", 531 | "qa_video":"" 532 | }, 533 | { 534 | "title":"The Language Model Dataset", 535 | "day_break":false, 536 | "book":"chapter_recurrent-neural-networks/language-models-and-dataset.html", 537 | "slides":["part-0.pdf",0], 538 | "slides_video":"", 539 | "notebook_video":"", 540 | "qa_video":"" 541 | }, 542 | { 543 | "title":"Recurrent Neural Networks", 544 | "day_break":false, 545 | "book":"chapter_recurrent-neural-networks/rnn.html", 546 | "slides":["part-0.pdf",0], 547 | "slides_video":"", 548 | "notebook_video":"", 549 | "qa_video":"" 550 | }, 551 | { 552 | "title":"Recurrent Neural Network Implementation from Scratch", 553 | "day_break":false, 554 | "book":"chapter_recurrent-neural-networks/rnn-scratch.html", 555 | "slides":["part-0.pdf",0], 556 | "slides_video":"", 557 | "notebook_video":"", 558 | "qa_video":"" 559 | }, 560 | { 561 | "title":"Concise Implementation of Recurrent Neural Networks", 562 | "day_break":false, 563 | "book":"chapter_recurrent-neural-networks/rnn-concise.html", 564 | "slides":["part-0.pdf",0], 565 | "slides_video":"", 566 | "notebook_video":"", 567 | "qa_video":"" 568 | }, 569 | { 570 | "title":"Backpropagation Through Time", 571 | "day_break":false, 572 | "book":"chapter_recurrent-neural-networks/bptt.html", 573 | "slides":["part-0.pdf",0], 574 | "slides_video":"", 575 | 
"notebook_video":"", 576 | "qa_video":"" 577 | }, 578 | { 579 | "title":"Modern Recurrent Neural Networks", 580 | "day_break":false, 581 | "book":"chapter_recurrent-modern/index.html", 582 | "slides":["part-0.pdf",0], 583 | "slides_video":"", 584 | "notebook_video":"", 585 | "qa_video":"" 586 | }, 587 | { 588 | "title":"Gated Recurrent Units (GRU)", 589 | "day_break":false, 590 | "book":"chapter_recurrent-modern/gru.html", 591 | "slides":["part-0.pdf",0], 592 | "slides_video":"", 593 | "notebook_video":"", 594 | "qa_video":"" 595 | }, 596 | { 597 | "title":"Long Short-Term Memory (LSTM)", 598 | "day_break":false, 599 | "book":"chapter_recurrent-modern/lstm.html", 600 | "slides":["part-0.pdf",0], 601 | "slides_video":"", 602 | "notebook_video":"", 603 | "qa_video":"" 604 | }, 605 | { 606 | "title":"Deep Recurrent Neural Networks", 607 | "day_break":false, 608 | "book":"chapter_recurrent-modern/deep-rnn.html", 609 | "slides":["part-0.pdf",0], 610 | "slides_video":"", 611 | "notebook_video":"", 612 | "qa_video":"" 613 | }, 614 | { 615 | "title":"Bidirectional Recurrent Neural Networks", 616 | "day_break":false, 617 | "book":"chapter_recurrent-modern/bi-rnn.html", 618 | "slides":["part-0.pdf",0], 619 | "slides_video":"", 620 | "notebook_video":"", 621 | "qa_video":"" 622 | }, 623 | { 624 | "title":"Machine Translation and the Dataset", 625 | "day_break":false, 626 | "book":"chapter_recurrent-modern/machine-translation-and-dataset.html", 627 | "slides":["part-0.pdf",0], 628 | "slides_video":"", 629 | "notebook_video":"", 630 | "qa_video":"" 631 | }, 632 | { 633 | "title":"Encoder-Decoder Architecture", 634 | "day_break":false, 635 | "book":"chapter_recurrent-modern/encoder-decoder.html", 636 | "slides":["part-0.pdf",0], 637 | "slides_video":"", 638 | "notebook_video":"", 639 | "qa_video":"" 640 | }, 641 | { 642 | "title":"Sequence to Sequence Learning", 643 | "day_break":false, 644 | "book":"chapter_recurrent-modern/seq2seq.html", 645 | "slides":["part-0.pdf",0], 646 | 
"slides_video":"", 647 | "notebook_video":"", 648 | "qa_video":"" 649 | }, 650 | { 651 | "title":"Beam Search", 652 | "day_break":false, 653 | "book":"chapter_recurrent-modern/beam-search.html", 654 | "slides":["part-0.pdf",0], 655 | "slides_video":"", 656 | "notebook_video":"", 657 | "qa_video":"" 658 | }, 659 | { 660 | "title":"Attention Mechanisms", 661 | "day_break":false, 662 | "book":"chapter_attention-mechanisms/index.html", 663 | "slides":["part-0.pdf",0], 664 | "slides_video":"", 665 | "notebook_video":"", 666 | "qa_video":"" 667 | }, 668 | { 669 | "title":"Attention Cues", 670 | "day_break":false, 671 | "book":"chapter_attention-mechanisms/attention-cues.html", 672 | "slides":["part-0.pdf",0], 673 | "slides_video":"", 674 | "notebook_video":"", 675 | "qa_video":"" 676 | }, 677 | { 678 | "title":"Attention Pooling: Nadaraya-Watson Kernel Regression", 679 | "day_break":false, 680 | "book":"chapter_attention-mechanisms/nadaraya-watson.html", 681 | "slides":["part-0.pdf",0], 682 | "slides_video":"", 683 | "notebook_video":"", 684 | "qa_video":"" 685 | }, 686 | { 687 | "title":"Attention Scoring Functions", 688 | "day_break":false, 689 | "book":"chapter_attention-mechanisms/attention-scoring-functions.html", 690 | "slides":["part-0.pdf",0], 691 | "slides_video":"", 692 | "notebook_video":"", 693 | "qa_video":"" 694 | }, 695 | { 696 | "title":"Bahdanau Attention", 697 | "day_break":false, 698 | "book":"chapter_attention-mechanisms/bahdanau-attention.html", 699 | "slides":["part-0.pdf",0], 700 | "slides_video":"", 701 | "notebook_video":"", 702 | "qa_video":"" 703 | }, 704 | { 705 | "title":"Multi-Head Attention", 706 | "day_break":false, 707 | "book":"chapter_attention-mechanisms/multihead-attention.html", 708 | "slides":["part-0.pdf",0], 709 | "slides_video":"", 710 | "notebook_video":"", 711 | "qa_video":"" 712 | }, 713 | { 714 | "title":"Self-Attention and Positional Encoding", 715 | "day_break":false, 716 | 
"book":"chapter_attention-mechanisms/self-attention-and-positional-encoding.html", 717 | "slides":["part-0.pdf",0], 718 | "slides_video":"", 719 | "notebook_video":"", 720 | "qa_video":"" 721 | }, 722 | { 723 | "title":"Transformer", 724 | "day_break":false, 725 | "book":"chapter_attention-mechanisms/transformer.html", 726 | "slides":["part-0.pdf",0], 727 | "slides_video":"", 728 | "notebook_video":"", 729 | "qa_video":"" 730 | }, 731 | { 732 | "title":"Optimization Algorithms", 733 | "day_break":false, 734 | "book":"chapter_optimization/index.html", 735 | "slides":["part-0.pdf",0], 736 | "slides_video":"", 737 | "notebook_video":"", 738 | "qa_video":"" 739 | }, 740 | { 741 | "title":"Optimization and Deep Learning", 742 | "day_break":false, 743 | "book":"chapter_optimization/optimization-intro.html", 744 | "slides":["part-0.pdf",0], 745 | "slides_video":"", 746 | "notebook_video":"", 747 | "qa_video":"" 748 | }, 749 | { 750 | "title":"Convexity", 751 | "day_break":false, 752 | "book":"chapter_optimization/convexity.html", 753 | "slides":["part-0.pdf",0], 754 | "slides_video":"", 755 | "notebook_video":"", 756 | "qa_video":"" 757 | }, 758 | { 759 | "title":"Gradient Descent", 760 | "day_break":false, 761 | "book":"chapter_optimization/gd.html", 762 | "slides":["part-0.pdf",0], 763 | "slides_video":"", 764 | "notebook_video":"", 765 | "qa_video":"" 766 | }, 767 | { 768 | "title":"Stochastic Gradient Descent", 769 | "day_break":false, 770 | "book":"chapter_optimization/sgd.html", 771 | "slides":["part-0.pdf",0], 772 | "slides_video":"", 773 | "notebook_video":"", 774 | "qa_video":"" 775 | }, 776 | { 777 | "title":"Minibatch Stochastic Gradient Descent", 778 | "day_break":false, 779 | "book":"chapter_optimization/minibatch-sgd.html", 780 | "slides":["part-0.pdf",0], 781 | "slides_video":"", 782 | "notebook_video":"", 783 | "qa_video":"" 784 | }, 785 | { 786 | "title":"Momentum", 787 | "day_break":false, 788 | "book":"chapter_optimization/momentum.html", 789 | 
"slides":["part-0.pdf",0], 790 | "slides_video":"", 791 | "notebook_video":"", 792 | "qa_video":"" 793 | }, 794 | { 795 | "title":"Adagrad", 796 | "day_break":false, 797 | "book":"chapter_optimization/adagrad.html", 798 | "slides":["part-0.pdf",0], 799 | "slides_video":"", 800 | "notebook_video":"", 801 | "qa_video":"" 802 | }, 803 | { 804 | "title":"RMSProp", 805 | "day_break":false, 806 | "book":"chapter_optimization/rmsprop.html", 807 | "slides":["part-0.pdf",0], 808 | "slides_video":"", 809 | "notebook_video":"", 810 | "qa_video":"" 811 | }, 812 | { 813 | "title":"Adadelta", 814 | "day_break":false, 815 | "book":"chapter_optimization/adadelta.html", 816 | "slides":["part-0.pdf",0], 817 | "slides_video":"", 818 | "notebook_video":"", 819 | "qa_video":"" 820 | }, 821 | { 822 | "title":"Adam", 823 | "day_break":false, 824 | "book":"chapter_optimization/adam.html", 825 | "slides":["part-0.pdf",0], 826 | "slides_video":"", 827 | "notebook_video":"", 828 | "qa_video":"" 829 | }, 830 | { 831 | "title":"Learning Rate Scheduling", 832 | "day_break":false, 833 | "book":"chapter_optimization/lr-scheduler.html", 834 | "slides":["part-0.pdf",0], 835 | "slides_video":"", 836 | "notebook_video":"", 837 | "qa_video":"" 838 | }, 839 | { 840 | "title":"Computational Performance", 841 | "day_break":false, 842 | "book":"chapter_computational-performance/index.html", 843 | "slides":["part-0.pdf",0], 844 | "slides_video":"", 845 | "notebook_video":"", 846 | "qa_video":"" 847 | }, 848 | { 849 | "title":"Compilers and Interpreters", 850 | "day_break":false, 851 | "book":"chapter_computational-performance/hybridize.html", 852 | "slides":["part-0.pdf",0], 853 | "slides_video":"", 854 | "notebook_video":"", 855 | "qa_video":"" 856 | }, 857 | { 858 | "title":"Asynchronous Computation", 859 | "day_break":false, 860 | "book":"chapter_computational-performance/async-computation.html", 861 | "slides":["part-0.pdf",0], 862 | "slides_video":"", 863 | "notebook_video":"", 864 | "qa_video":"" 865 
| }, 866 | { 867 | "title":"Automatic Parallelism", 868 | "day_break":false, 869 | "book":"chapter_computational-performance/auto-parallelism.html", 870 | "slides":["part-0.pdf",0], 871 | "slides_video":"", 872 | "notebook_video":"", 873 | "qa_video":"" 874 | }, 875 | { 876 | "title":"Hardware", 877 | "day_break":false, 878 | "book":"chapter_computational-performance/hardware.html", 879 | "slides":["part-0.pdf",0], 880 | "slides_video":"", 881 | "notebook_video":"", 882 | "qa_video":"" 883 | }, 884 | { 885 | "title":"Training on Multiple GPUs", 886 | "day_break":false, 887 | "book":"chapter_computational-performance/multiple-gpus.html", 888 | "slides":["part-0.pdf",0], 889 | "slides_video":"", 890 | "notebook_video":"", 891 | "qa_video":"" 892 | }, 893 | { 894 | "title":"Concise Implementation for Multiple GPUs", 895 | "day_break":false, 896 | "book":"chapter_computational-performance/multiple-gpus-concise.html", 897 | "slides":["part-0.pdf",0], 898 | "slides_video":"", 899 | "notebook_video":"", 900 | "qa_video":"" 901 | }, 902 | { 903 | "title":"Parameter Servers", 904 | "day_break":false, 905 | "book":"chapter_computational-performance/parameterserver.html", 906 | "slides":["part-0.pdf",0], 907 | "slides_video":"", 908 | "notebook_video":"", 909 | "qa_video":"" 910 | }, 911 | { 912 | "title":"Computer Vision", 913 | "day_break":false, 914 | "book":"chapter_computer-vision/index.html", 915 | "slides":["part-0.pdf",0], 916 | "slides_video":"", 917 | "notebook_video":"", 918 | "qa_video":"" 919 | }, 920 | { 921 | "title":"Image Augmentation", 922 | "day_break":false, 923 | "book":"chapter_computer-vision/image-augmentation.html", 924 | "slides":["part-0.pdf",0], 925 | "slides_video":"", 926 | "notebook_video":"", 927 | "qa_video":"" 928 | }, 929 | { 930 | "title":"Fine-Tuning", 931 | "day_break":false, 932 | "book":"chapter_computer-vision/fine-tuning.html", 933 | "slides":["part-0.pdf",0], 934 | "slides_video":"", 935 | "notebook_video":"", 936 | "qa_video":"" 937 
| }, 938 | { 939 | "title":"Object Detection and Bounding Boxes", 940 | "day_break":false, 941 | "book":"chapter_computer-vision/bounding-box.html", 942 | "slides":["part-0.pdf",0], 943 | "slides_video":"", 944 | "notebook_video":"", 945 | "qa_video":"" 946 | }, 947 | { 948 | "title":"Anchor Boxes", 949 | "day_break":false, 950 | "book":"chapter_computer-vision/anchor.html", 951 | "slides":["part-0.pdf",0], 952 | "slides_video":"", 953 | "notebook_video":"", 954 | "qa_video":"" 955 | }, 956 | { 957 | "title":"Multiscale Object Detection", 958 | "day_break":false, 959 | "book":"chapter_computer-vision/multiscale-object-detection.html", 960 | "slides":["part-0.pdf",0], 961 | "slides_video":"", 962 | "notebook_video":"", 963 | "qa_video":"" 964 | }, 965 | { 966 | "title":"The Object Detection Dataset", 967 | "day_break":false, 968 | "book":"chapter_computer-vision/object-detection-dataset.html", 969 | "slides":["part-0.pdf",0], 970 | "slides_video":"", 971 | "notebook_video":"", 972 | "qa_video":"" 973 | }, 974 | { 975 | "title":"Single Shot Multibox Detection", 976 | "day_break":false, 977 | "book":"chapter_computer-vision/ssd.html", 978 | "slides":["part-0.pdf",0], 979 | "slides_video":"", 980 | "notebook_video":"", 981 | "qa_video":"" 982 | }, 983 | { 984 | "title":"Region-based CNNs (R-CNNs)", 985 | "day_break":false, 986 | "book":"chapter_computer-vision/rcnn.html", 987 | "slides":["part-0.pdf",0], 988 | "slides_video":"", 989 | "notebook_video":"", 990 | "qa_video":"" 991 | }, 992 | { 993 | "title":"Semantic Segmentation and the Dataset", 994 | "day_break":false, 995 | "book":"chapter_computer-vision/semantic-segmentation-and-dataset.html", 996 | "slides":["part-0.pdf",0], 997 | "slides_video":"", 998 | "notebook_video":"", 999 | "qa_video":"" 1000 | }, 1001 | { 1002 | "title":"Transposed Convolution", 1003 | "day_break":false, 1004 | "book":"chapter_computer-vision/transposed-conv.html", 1005 | "slides":["part-0.pdf",0], 1006 | "slides_video":"", 1007 | 
"notebook_video":"", 1008 | "qa_video":"" 1009 | }, 1010 | { 1011 | "title":"Fully Convolutional Networks", 1012 | "day_break":false, 1013 | "book":"chapter_computer-vision/fcn.html", 1014 | "slides":["part-0.pdf",0], 1015 | "slides_video":"", 1016 | "notebook_video":"", 1017 | "qa_video":"" 1018 | }, 1019 | { 1020 | "title":"Neural Style Transfer", 1021 | "day_break":false, 1022 | "book":"chapter_computer-vision/neural-style.html", 1023 | "slides":["part-0.pdf",0], 1024 | "slides_video":"", 1025 | "notebook_video":"", 1026 | "qa_video":"" 1027 | }, 1028 | { 1029 | "title":"Image Classification (CIFAR-10) on Kaggle", 1030 | "day_break":false, 1031 | "book":"chapter_computer-vision/kaggle-cifar10.html", 1032 | "slides":["part-0.pdf",0], 1033 | "slides_video":"", 1034 | "notebook_video":"", 1035 | "qa_video":"" 1036 | }, 1037 | { 1038 | "title":"Dog Breed Identification (ImageNet Dogs) on Kaggle", 1039 | "day_break":false, 1040 | "book":"chapter_computer-vision/kaggle-dog.html", 1041 | "slides":["part-0.pdf",0], 1042 | "slides_video":"", 1043 | "notebook_video":"", 1044 | "qa_video":"" 1045 | }, 1046 | { 1047 | "title":"Natural Language Processing: Pretraining", 1048 | "day_break":false, 1049 | "book":"chapter_natural-language-processing-pretraining/index.html", 1050 | "slides":["part-0.pdf",0], 1051 | "slides_video":"", 1052 | "notebook_video":"", 1053 | "qa_video":"" 1054 | }, 1055 | { 1056 | "title":"Word Embedding (word2vec)", 1057 | "day_break":false, 1058 | "book":"chapter_natural-language-processing-pretraining/word2vec.html", 1059 | "slides":["part-0.pdf",0], 1060 | "slides_video":"", 1061 | "notebook_video":"", 1062 | "qa_video":"" 1063 | }, 1064 | { 1065 | "title":"Approximate Training", 1066 | "day_break":false, 1067 | "book":"chapter_natural-language-processing-pretraining/approx-training.html", 1068 | "slides":["part-0.pdf",0], 1069 | "slides_video":"", 1070 | "notebook_video":"", 1071 | "qa_video":"" 1072 | }, 1073 | { 1074 | "title":"The Dataset for 
Pretraining Word Embeddings", 1075 | "day_break":false, 1076 | "book":"chapter_natural-language-processing-pretraining/word-embedding-dataset.html", 1077 | "slides":["part-0.pdf",0], 1078 | "slides_video":"", 1079 | "notebook_video":"", 1080 | "qa_video":"" 1081 | }, 1082 | { 1083 | "title":"Pretraining word2vec", 1084 | "day_break":false, 1085 | "book":"chapter_natural-language-processing-pretraining/word2vec-pretraining.html", 1086 | "slides":["part-0.pdf",0], 1087 | "slides_video":"", 1088 | "notebook_video":"", 1089 | "qa_video":"" 1090 | }, 1091 | { 1092 | "title":"Word Embedding with Global Vectors (GloVe)", 1093 | "day_break":false, 1094 | "book":"chapter_natural-language-processing-pretraining/glove.html", 1095 | "slides":["part-0.pdf",0], 1096 | "slides_video":"", 1097 | "notebook_video":"", 1098 | "qa_video":"" 1099 | }, 1100 | { 1101 | "title":"Subword Embedding", 1102 | "day_break":false, 1103 | "book":"chapter_natural-language-processing-pretraining/subword-embedding.html", 1104 | "slides":["part-0.pdf",0], 1105 | "slides_video":"", 1106 | "notebook_video":"", 1107 | "qa_video":"" 1108 | }, 1109 | { 1110 | "title":"Word Similarity and Analogy", 1111 | "day_break":false, 1112 | "book":"chapter_natural-language-processing-pretraining/similarity-analogy.html", 1113 | "slides":["part-0.pdf",0], 1114 | "slides_video":"", 1115 | "notebook_video":"", 1116 | "qa_video":"" 1117 | }, 1118 | { 1119 | "title":"Bidirectional Encoder Representations from Transformers (BERT)", 1120 | "day_break":false, 1121 | "book":"chapter_natural-language-processing-pretraining/bert.html", 1122 | "slides":["part-0.pdf",0], 1123 | "slides_video":"", 1124 | "notebook_video":"", 1125 | "qa_video":"" 1126 | }, 1127 | { 1128 | "title":"The Dataset for Pretraining BERT", 1129 | "day_break":false, 1130 | "book":"chapter_natural-language-processing-pretraining/bert-dataset.html", 1131 | "slides":["part-0.pdf",0], 1132 | "slides_video":"", 1133 | "notebook_video":"", 1134 | "qa_video":"" 
1135 | }, 1136 | { 1137 | "title":"Pretraining BERT", 1138 | "day_break":false, 1139 | "book":"chapter_natural-language-processing-pretraining/bert-pretraining.html", 1140 | "slides":["part-0.pdf",0], 1141 | "slides_video":"", 1142 | "notebook_video":"", 1143 | "qa_video":"" 1144 | }, 1145 | { 1146 | "title":"Natural Language Processing: Applications", 1147 | "day_break":false, 1148 | "book":"chapter_natural-language-processing-applications/index.html", 1149 | "slides":["part-0.pdf",0], 1150 | "slides_video":"", 1151 | "notebook_video":"", 1152 | "qa_video":"" 1153 | }, 1154 | { 1155 | "title":"Sentiment Analysis and the Dataset", 1156 | "day_break":false, 1157 | "book":"chapter_natural-language-processing-applications/sentiment-analysis-and-dataset.html", 1158 | "slides":["part-0.pdf",0], 1159 | "slides_video":"", 1160 | "notebook_video":"", 1161 | "qa_video":"" 1162 | }, 1163 | { 1164 | "title":"Sentiment Analysis: Using Recurrent Neural Networks", 1165 | "day_break":false, 1166 | "book":"chapter_natural-language-processing-applications/sentiment-analysis-rnn.html", 1167 | "slides":["part-0.pdf",0], 1168 | "slides_video":"", 1169 | "notebook_video":"", 1170 | "qa_video":"" 1171 | }, 1172 | { 1173 | "title":"Sentiment Analysis: Using Convolutional Neural Networks", 1174 | "day_break":false, 1175 | "book":"chapter_natural-language-processing-applications/sentiment-analysis-cnn.html", 1176 | "slides":["part-0.pdf",0], 1177 | "slides_video":"", 1178 | "notebook_video":"", 1179 | "qa_video":"" 1180 | }, 1181 | { 1182 | "title":"Natural Language Inference and the Dataset", 1183 | "day_break":false, 1184 | "book":"chapter_natural-language-processing-applications/natural-language-inference-and-dataset.html", 1185 | "slides":["part-0.pdf",0], 1186 | "slides_video":"", 1187 | "notebook_video":"", 1188 | "qa_video":"" 1189 | }, 1190 | { 1191 | "title":"Natural Language Inference: Using Attention", 1192 | "day_break":false, 1193 | 
"book":"chapter_natural-language-processing-applications/natural-language-inference-attention.html", 1194 | "slides":["part-0.pdf",0], 1195 | "slides_video":"", 1196 | "notebook_video":"", 1197 | "qa_video":"" 1198 | }, 1199 | { 1200 | "title":"Fine-Tuning BERT for Sequence-Level and Token-Level Applications", 1201 | "day_break":false, 1202 | "book":"chapter_natural-language-processing-applications/finetuning-bert.html", 1203 | "slides":["part-0.pdf",0], 1204 | "slides_video":"", 1205 | "notebook_video":"", 1206 | "qa_video":"" 1207 | }, 1208 | { 1209 | "title":"Natural Language Inference: Fine-Tuning BERT", 1210 | "day_break":false, 1211 | "book":"chapter_natural-language-processing-applications/natural-language-inference-bert.html", 1212 | "slides":["part-0.pdf",0], 1213 | "slides_video":"", 1214 | "notebook_video":"", 1215 | "qa_video":"" 1216 | }, 1217 | { 1218 | "title":"Recommender Systems", 1219 | "day_break":false, 1220 | "book":"chapter_recommender-systems/index.html", 1221 | "slides":["part-0.pdf",0], 1222 | "slides_video":"", 1223 | "notebook_video":"", 1224 | "qa_video":"" 1225 | }, 1226 | { 1227 | "title":"Overview of Recommender Systems", 1228 | "day_break":false, 1229 | "book":"chapter_recommender-systems/recsys-intro.html", 1230 | "slides":["part-0.pdf",0], 1231 | "slides_video":"", 1232 | "notebook_video":"", 1233 | "qa_video":"" 1234 | }, 1235 | { 1236 | "title":"The MovieLens Dataset", 1237 | "day_break":false, 1238 | "book":"chapter_recommender-systems/movielens.html", 1239 | "slides":["part-0.pdf",0], 1240 | "slides_video":"", 1241 | "notebook_video":"", 1242 | "qa_video":"" 1243 | }, 1244 | { 1245 | "title":"Matrix Factorization", 1246 | "day_break":false, 1247 | "book":"chapter_recommender-systems/mf.html", 1248 | "slides":["part-0.pdf",0], 1249 | "slides_video":"", 1250 | "notebook_video":"", 1251 | "qa_video":"" 1252 | }, 1253 | { 1254 | "title":"AutoRec: Rating Prediction with Autoencoders", 1255 | "day_break":false, 1256 | 
"book":"chapter_recommender-systems/autorec.html", 1257 | "slides":["part-0.pdf",0], 1258 | "slides_video":"", 1259 | "notebook_video":"", 1260 | "qa_video":"" 1261 | }, 1262 | { 1263 | "title":"Personalized Ranking for Recommender Systems", 1264 | "day_break":false, 1265 | "book":"chapter_recommender-systems/ranking.html", 1266 | "slides":["part-0.pdf",0], 1267 | "slides_video":"", 1268 | "notebook_video":"", 1269 | "qa_video":"" 1270 | }, 1271 | { 1272 | "title":"Neural Collaborative Filtering for Personalized Ranking", 1273 | "day_break":false, 1274 | "book":"chapter_recommender-systems/neumf.html", 1275 | "slides":["part-0.pdf",0], 1276 | "slides_video":"", 1277 | "notebook_video":"", 1278 | "qa_video":"" 1279 | }, 1280 | { 1281 | "title":"Sequence-Aware Recommender Systems", 1282 | "day_break":false, 1283 | "book":"chapter_recommender-systems/seqrec.html", 1284 | "slides":["part-0.pdf",0], 1285 | "slides_video":"", 1286 | "notebook_video":"", 1287 | "qa_video":"" 1288 | }, 1289 | { 1290 | "title":"Feature-Rich Recommender Systems", 1291 | "day_break":false, 1292 | "book":"chapter_recommender-systems/ctr.html", 1293 | "slides":["part-0.pdf",0], 1294 | "slides_video":"", 1295 | "notebook_video":"", 1296 | "qa_video":"" 1297 | }, 1298 | { 1299 | "title":"Factorization Machines", 1300 | "day_break":false, 1301 | "book":"chapter_recommender-systems/fm.html", 1302 | "slides":["part-0.pdf",0], 1303 | "slides_video":"", 1304 | "notebook_video":"", 1305 | "qa_video":"" 1306 | }, 1307 | { 1308 | "title":"Deep Factorization Machines", 1309 | "day_break":false, 1310 | "book":"chapter_recommender-systems/deepfm.html", 1311 | "slides":["part-0.pdf",0], 1312 | "slides_video":"", 1313 | "notebook_video":"", 1314 | "qa_video":"" 1315 | }, 1316 | { 1317 | "title":"Generative Adversarial Networks", 1318 | "day_break":false, 1319 | "book":"chapter_generative-adversarial-networks/index.html", 1320 | "slides":["part-0.pdf",0], 1321 | "slides_video":"", 1322 | "notebook_video":"", 
1323 | "qa_video":"" 1324 | }, 1325 | { 1326 | "title":"Generative Adversarial Networks", 1327 | "day_break":false, 1328 | "book":"chapter_generative-adversarial-networks/gan.html", 1329 | "slides":["part-0.pdf",0], 1330 | "slides_video":"", 1331 | "notebook_video":"", 1332 | "qa_video":"" 1333 | }, 1334 | { 1335 | "title":"Deep Convolutional Generative Adversarial Networks", 1336 | "day_break":false, 1337 | "book":"chapter_generative-adversarial-networks/dcgan.html", 1338 | "slides":["part-0.pdf",0], 1339 | "slides_video":"", 1340 | "notebook_video":"", 1341 | "qa_video":"" 1342 | }, 1343 | { 1344 | "title":"Appendix: Mathematics for Deep Learning", 1345 | "day_break":false, 1346 | "book":"chapter_appendix-mathematics-for-deep-learning/index.html", 1347 | "slides":["part-0.pdf",0], 1348 | "slides_video":"", 1349 | "notebook_video":"", 1350 | "qa_video":"" 1351 | }, 1352 | { 1353 | "title":"Geometry and Linear Algebraic Operations", 1354 | "day_break":false, 1355 | "book":"chapter_appendix-mathematics-for-deep-learning/geometry-linear-algebraic-ops.html", 1356 | "slides":["part-0.pdf",0], 1357 | "slides_video":"", 1358 | "notebook_video":"", 1359 | "qa_video":"" 1360 | }, 1361 | { 1362 | "title":"Eigendecompositions", 1363 | "day_break":false, 1364 | "book":"chapter_appendix-mathematics-for-deep-learning/eigendecomposition.html", 1365 | "slides":["part-0.pdf",0], 1366 | "slides_video":"", 1367 | "notebook_video":"", 1368 | "qa_video":"" 1369 | }, 1370 | { 1371 | "title":"Single Variable Calculus", 1372 | "day_break":false, 1373 | "book":"chapter_appendix-mathematics-for-deep-learning/single-variable-calculus.html", 1374 | "slides":["part-0.pdf",0], 1375 | "slides_video":"", 1376 | "notebook_video":"", 1377 | "qa_video":"" 1378 | }, 1379 | { 1380 | "title":"Multivariable Calculus", 1381 | "day_break":false, 1382 | "book":"chapter_appendix-mathematics-for-deep-learning/multivariable-calculus.html", 1383 | "slides":["part-0.pdf",0], 1384 | "slides_video":"", 1385 | 
"notebook_video":"", 1386 | "qa_video":"" 1387 | }, 1388 | { 1389 | "title":"Integral Calculus", 1390 | "day_break":false, 1391 | "book":"chapter_appendix-mathematics-for-deep-learning/integral-calculus.html", 1392 | "slides":["part-0.pdf",0], 1393 | "slides_video":"", 1394 | "notebook_video":"", 1395 | "qa_video":"" 1396 | }, 1397 | { 1398 | "title":"Random Variables", 1399 | "day_break":false, 1400 | "book":"chapter_appendix-mathematics-for-deep-learning/random-variables.html", 1401 | "slides":["part-0.pdf",0], 1402 | "slides_video":"", 1403 | "notebook_video":"", 1404 | "qa_video":"" 1405 | }, 1406 | { 1407 | "title":"Maximum Likelihood", 1408 | "day_break":false, 1409 | "book":"chapter_appendix-mathematics-for-deep-learning/maximum-likelihood.html", 1410 | "slides":["part-0.pdf",0], 1411 | "slides_video":"", 1412 | "notebook_video":"", 1413 | "qa_video":"" 1414 | }, 1415 | { 1416 | "title":"Distributions", 1417 | "day_break":false, 1418 | "book":"chapter_appendix-mathematics-for-deep-learning/distributions.html", 1419 | "slides":["part-0.pdf",0], 1420 | "slides_video":"", 1421 | "notebook_video":"", 1422 | "qa_video":"" 1423 | }, 1424 | { 1425 | "title":"Naive Bayes", 1426 | "day_break":false, 1427 | "book":"chapter_appendix-mathematics-for-deep-learning/naive-bayes.html", 1428 | "slides":["part-0.pdf",0], 1429 | "slides_video":"", 1430 | "notebook_video":"", 1431 | "qa_video":"" 1432 | }, 1433 | { 1434 | "title":"Statistics", 1435 | "day_break":false, 1436 | "book":"chapter_appendix-mathematics-for-deep-learning/statistics.html", 1437 | "slides":["part-0.pdf",0], 1438 | "slides_video":"", 1439 | "notebook_video":"", 1440 | "qa_video":"" 1441 | }, 1442 | { 1443 | "title":"Information Theory", 1444 | "day_break":false, 1445 | "book":"chapter_appendix-mathematics-for-deep-learning/information-theory.html", 1446 | "slides":["part-0.pdf",0], 1447 | "slides_video":"", 1448 | "notebook_video":"", 1449 | "qa_video":"" 1450 | }, 1451 | { 1452 | "title":"Appendix: Tools 
for Deep Learning", 1453 | "day_break":false, 1454 | "book":"chapter_appendix-tools-for-deep-learning/index.html", 1455 | "slides":["part-0.pdf",0], 1456 | "slides_video":"", 1457 | "notebook_video":"", 1458 | "qa_video":"" 1459 | }, 1460 | { 1461 | "title":"Using Jupyter", 1462 | "day_break":false, 1463 | "book":"chapter_appendix-tools-for-deep-learning/jupyter.html", 1464 | "slides":["part-0.pdf",0], 1465 | "slides_video":"", 1466 | "notebook_video":"", 1467 | "qa_video":"" 1468 | }, 1469 | { 1470 | "title":"Using Amazon SageMaker", 1471 | "day_break":false, 1472 | "book":"chapter_appendix-tools-for-deep-learning/sagemaker.html", 1473 | "slides":["part-0.pdf",0], 1474 | "slides_video":"", 1475 | "notebook_video":"", 1476 | "qa_video":"" 1477 | }, 1478 | { 1479 | "title":"Using AWS EC2 Instances", 1480 | "day_break":false, 1481 | "book":"chapter_appendix-tools-for-deep-learning/aws.html", 1482 | "slides":["part-0.pdf",0], 1483 | "slides_video":"", 1484 | "notebook_video":"", 1485 | "qa_video":"" 1486 | }, 1487 | { 1488 | "title":"Using Google Colab", 1489 | "day_break":false, 1490 | "book":"chapter_appendix-tools-for-deep-learning/colab.html", 1491 | "slides":["part-0.pdf",0], 1492 | "slides_video":"", 1493 | "notebook_video":"", 1494 | "qa_video":"" 1495 | }, 1496 | { 1497 | "title":"Selecting Servers and GPUs", 1498 | "day_break":false, 1499 | "book":"chapter_appendix-tools-for-deep-learning/selecting-servers-gpus.html", 1500 | "slides":["part-0.pdf",0], 1501 | "slides_video":"", 1502 | "notebook_video":"", 1503 | "qa_video":"" 1504 | }, 1505 | { 1506 | "title":"Contributing to This Book", 1507 | "day_break":false, 1508 | "book":"chapter_appendix-tools-for-deep-learning/contributing.html", 1509 | "slides":["part-0.pdf",0], 1510 | "slides_video":"", 1511 | "notebook_video":"", 1512 | "qa_video":"" 1513 | }, 1514 | { 1515 | "title":"Utility Functions and Classes", 1516 | "day_break":false, 1517 | "book":"chapter_appendix-tools-for-deep-learning/utils.html", 1518 | 
"slides":["part-0.pdf",0], 1519 | "slides_video":"", 1520 | "notebook_video":"", 1521 | "qa_video":"" 1522 | }, 1523 | { 1524 | "title":"`d2l` API Document", 1525 | "day_break":false, 1526 | "book":"chapter_appendix-tools-for-deep-learning/d2l.html", 1527 | "slides":["part-0.pdf",0], 1528 | "slides_video":"", 1529 | "notebook_video":"", 1530 | "qa_video":"" 1531 | } 1532 | ] 1533 | -------------------------------------------------------------------------------- /favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/d2l-ai/courses-zh-v2/3d1b8b89eae0c836a24420051d33f2876a32e4d4/favicon.ico -------------------------------------------------------------------------------- /generate.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "Generate `contents.json` based on the `d2l-zh` repos, then manually copy and modify to `_modelules/*.json`" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "!pip install PyPDF2" 17 | ] 18 | }, 19 | { 20 | "cell_type": "code", 21 | "execution_count": 1, 22 | "metadata": { 23 | "ExecuteTime": { 24 | "end_time": "2021-09-14T05:11:03.099469Z", 25 | "start_time": "2021-09-14T05:11:01.340659Z" 26 | } 27 | }, 28 | "outputs": [], 29 | "source": [ 30 | "import notedown\n", 31 | "import os\n", 32 | "import pathlib\n", 33 | "\n", 34 | "entry = ''' {\n", 35 | " \"title\":\"TITLE\",\n", 36 | " \"day_break\":false,\n", 37 | " \"book\":\"URL\",\n", 38 | " \"slides\":[\"part-0.pdf\",0],\n", 39 | " \"slides_video\":\"\",\n", 40 | " \"notebook_video\":\"\",\n", 41 | " \"qa_video\":\"\"\n", 42 | " }'''\n", 43 | "\n", 44 | "\n", 45 | "book_repo = pathlib.Path('/Users/mli/repos/d2l-en')\n", 46 | "\n", 47 | "def get_toc(root):\n", 48 | " \"\"\"return a list of files in 
the order defined by TOC\"\"\"\n", 49 | " subpages = _get_subpages(root)\n", 50 | " res = [root]\n", 51 | " for fn in subpages:\n", 52 | " res.extend(get_toc(fn))\n", 53 | " return res\n", 54 | "\n", 55 | "def _get_subpages(input_fn):\n", 56 | " \"\"\"read toc in input_fn, returns what it contains\"\"\"\n", 57 | " subpages = []\n", 58 | " reader = notedown.MarkdownReader()\n", 59 | " with open(input_fn, 'r', encoding='UTF-8') as f:\n", 60 | " nb = reader.read(f)\n", 61 | " for cell in nb.cells:\n", 62 | " if (cell.cell_type == 'code' and 'attributes' in cell.metadata and\n", 63 | " 'toc' in cell.metadata.attributes['classes']):\n", 64 | " for l in cell.source.split('\\n'):\n", 65 | " l = l.strip()\n", 66 | " if not l.startswith(':'):\n", 67 | " fn = os.path.join(os.path.dirname(input_fn), l + '.md')\n", 68 | " if os.path.exists(fn):\n", 69 | " subpages.append(fn)\n", 70 | " return subpages\n", 71 | "\n", 72 | "def _get_title(filename):\n", 73 | " with open(filename, 'r') as f:\n", 74 | " lines = f.readlines()\n", 75 | " for l in lines:\n", 76 | " if l.startswith('#'): return l[1:].strip()\n", 77 | "\n", 78 | "entries = []\n", 79 | "notebooks = get_toc(str(book_repo/'index.md'))\n", 80 | "for nb in notebooks:\n", 81 | " p = str(pathlib.Path(nb).relative_to(book_repo).with_suffix('.html'))\n", 82 | " if 'index.md' in p: continue\n", 83 | " title = _get_title(nb)\n", 84 | " if not title: continue\n", 85 | " entries.append(entry.replace('TITLE', title).replace('URL', p))\n", 86 | "\n", 87 | "with open('contents.json', 'w') as f:\n", 88 | " f.write('[\\n' + ',\\n'.join(entries) + '\\n]\\n')\n" 89 | ] 90 | }, 91 | { 92 | "cell_type": "code", 93 | "execution_count": 2, 94 | "metadata": { 95 | "ExecuteTime": { 96 | "end_time": "2021-09-14T05:11:03.116157Z", 97 | "start_time": "2021-09-14T05:11:03.101253Z" 98 | } 99 | }, 100 | "outputs": [], 101 | "source": [ 102 | "from PyPDF2 import PdfFileReader, PdfFileWriter\n", 103 | "import subprocess\n", 104 | "\n", 105 | "def 
extract_pdf(source, start_page, end_page, target):\n", 106 | " source = pathlib.Path(source)\n", 107 | " target = pathlib.Path(target)\n", 108 | " if target.exists() and target.stat().st_mtime > source.stat().st_mtime:\n", 109 | " return\n", 110 | " pdf = PdfFileReader(str(source))\n", 111 | " assert end_page > start_page\n", 112 | " assert end_page <= pdf.getNumPages()\n", 113 | " pdf_writer = PdfFileWriter()\n", 114 | " for page in range(start_page, end_page):\n", 115 | " pdf_writer.addPage(pdf.getPage(page))\n", 116 | " with open('/tmp/tmp.pdf', 'wb') as out:\n", 117 | " pdf_writer.write(out)\n", 118 | " # compress pdf size\n", 119 | " # refer to https://askubuntu.com/questions/113544/how-can-i-reduce-the-file-size-of-a-scanned-pdf-file\n", 120 | " cmd = f'gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/prepress -dNOPAUSE -dBATCH -sOutputFile={str(target)} /tmp/tmp.pdf'\n", 121 | " process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,\n", 122 | " stderr=subprocess.PIPE)\n", 123 | " stdout, _ = process.communicate()\n", 124 | " if process.returncode != 0:\n", 125 | " print(stdout.decode().splitlines())\n", 126 | " print(f'Written {end_page-start_page} pages to {target}')" 127 | ] 128 | }, 129 | { 130 | "cell_type": "markdown", 131 | "metadata": {}, 132 | "source": [ 133 | "Generate all `_modules/part*.md`" 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": 4, 139 | "metadata": { 140 | "ExecuteTime": { 141 | "end_time": "2021-09-14T05:16:04.145250Z", 142 | "start_time": "2021-09-14T05:16:03.932538Z" 143 | }, 144 | "scrolled": true 145 | }, 146 | "outputs": [], 147 | "source": [ 148 | "import datetime\n", 149 | "import pathlib\n", 150 | "import json\n", 151 | "import os\n", 152 | "\n", 153 | "\n", 154 | "pdf_dir = '/Users/mli/Google Drive/d2l-zh-v2-slides/'\n", 155 | "slides_dir = 'assets/pdfs/'\n", 156 | "notebooks_dir = 'assets/notebooks/'\n", 157 | "book_url = 'https://zh-v2.d2l.ai/'\n", 158 | "notebook_url = 
'https://nbviewer.jupyter.org/format/slides/github/d2l-ai/d2l-zh-pytorch-slides/blob/main/'\n", 159 | "notebook_repo = '../d2l-zh-pytorch-slides/'\n", 160 | "video_url = 'https://www.bilibili.com/video/'\n", 161 | "cur_day = datetime.datetime(2021, 3, 19)\n", 162 | "\n", 163 | "titles = ['深度学习基础', '卷积神经网络', '计算机视觉', '循环神经网络', '注意力机制',]\n", 164 | "holidays = [datetime.datetime(2021, m, d) for m, d in ((4,3),(4,4),(4,25),(5,1),(5,2),(5,8),(5,9),(6,12),(6,13),(7,31),(8,1))]\n", 165 | "slide_pages = {}\n", 166 | "\n", 167 | "for i, title in enumerate(titles):\n", 168 | " p = pathlib.Path('_modules')\n", 169 | " with (p/f'part_{i}.md').open('w') as f:\n", 170 | " f.write(f'---\\ntitle: {title}\\n---\\n')\n", 171 | " if not (p/f'part_{i}.json').exists():\n", 172 | " contents = []\n", 173 | " else:\n", 174 | " contents = json.load((p/f'part_{i}.json').open('r'))\n", 175 | " for entry in contents:\n", 176 | " if entry['day_break']:\n", 177 | " while True:\n", 178 | " if cur_day.weekday() == 6: # sun\n", 179 | " cur_day += datetime.timedelta(days=6)\n", 180 | " else:\n", 181 | " cur_day += datetime.timedelta(days=1)\n", 182 | " f.write(f'\\n{cur_day.month}月{cur_day.day}日\\n\\n')\n", 183 | " if cur_day not in holidays:\n", 184 | " break\n", 185 | " f.write(': **休课**{: .label .label-green }\\n')\n", 186 | " # title\n", 187 | " f.write(f': {entry[\"title\"]}\\n')\n", 188 | " # html page\n", 189 | " if 'book' in entry and entry['book']:\n", 190 | " f.write(f' : []({book_url+entry[\"book\"]})\\n')\n", 191 | " else:\n", 192 | " f.write(' :   \\n')\n", 193 | " # pdf\n", 194 | " pdf, page = entry['slides']\n", 195 | " if page:\n", 196 | " if pdf not in slide_pages:\n", 197 | " slide_pages[pdf] = [0,]\n", 198 | " save_pdf = f'{slides_dir}part-{i}_{len(slide_pages[pdf])}.pdf'\n", 199 | " cur_page = sum(slide_pages[pdf])\n", 200 | " extract_pdf(pdf_dir+pdf, cur_page, cur_page+page, save_pdf)\n", 201 | " slide_pages[pdf].append(page)\n", 202 | " f.write(f' : []({save_pdf})\\n')\n", 203 | " else:\n", 204 | " f.write(' :   \\n')\n", 205 | "\n", 206 | " # notebook\n", 207 | " write_notebook = False\n", 208 | " if 'book' in entry and entry['book'] and ('notebook' not in entry or entry['notebook']):\n", 209 | " notebook_path = entry[\"book\"].replace('.html', '.ipynb')\n", 210 | " notebook_file = notebook_repo + notebook_path\n", 211 | " notebook_output = pathlib.Path(notebooks_dir + notebook_path).with_suffix('.slides.html')\n", 212 | " if os.path.exists(notebook_file):\n", 213 | " if not notebook_output.exists():\n", 214 | " os.system(f'jupyter nbconvert {notebook_file} --to slides --output-dir {str(notebook_output.parent)}')\n", 215 | " if notebook_output.exists():\n", 216 | " write_notebook = True\n", 217 | " f.write(f' : []({str(notebook_output)})\\n')\n", 218 | " if not write_notebook:\n", 219 | " f.write(' :   \\n')\n", 220 | " #if entry['notebook_video']:\n", 221 | " # f.write(f' [ 代码]({entry[\"notebook_video\"]})  ')\n", 222 | " #if 'qa_video' in entry and entry['qa_video']:\n", 223 | " # f.write(f' [ 问答]({entry[\"qa_video\"]})  ')\n", 224 | " if entry['slides_video']:\n", 225 | " f.write(f' : []({entry[\"slides_video\"]})\\n')\n", 226 | " else:\n", 227 | " f.write(' :   \\n')\n", 228 | " f.write('\\n')\n", 229 | "\n", 230 | "\n", 231 | "!touch index.md\n" 232 | ] 233 | }, 234 | { 235 | "cell_type": "code", 236 | "execution_count": 2, 237 | "metadata": { 238 | "ExecuteTime": { 239 | "end_time": "2021-07-16T05:57:54.653894Z", 240 | "start_time": "2021-07-16T05:57:54.648149Z" 241 | } 242 | }, 243 | "outputs": [ 244 | { 245 | "name": "stdout", 246 | "output_type": "stream", 247 | "text": [ 248 | "Overwriting video\n" 249 | ] 250 | } 251 | ], 252 | "source": [ 253 | "%%writefile video\n", 254 | "6\t488\n", 255 | "14\t813\n", 256 | "2\t966\n", 257 | "5\t281\n", 258 | "17\t875\t1\n", 259 | "5\t446\t1\n", 260 | "14\t543\n", 261 | "19\t771\t1\n", 262 | "10\t758\n", 263 | "15\t699\n", 264 | "7\t402\t1\n", 265 | "11\t897\n", 266 | "6\t503\n", 267 | 
"8\t902\t1\n", 268 | "6\t402\t1\n", 269 | "11\t626\n", 270 | "7\t384\n", 271 | "6\t564\t1\n", 272 | "16\t1123\t1\n", 273 | "4\t249\t1\n", 274 | "11\t832\n", 275 | "13\t1359\n", 276 | "7\t505\t1\n", 277 | "7\t1116\n", 278 | "10\t1006\n", 279 | "8\t407\t1\n", 280 | "6\t781\n", 281 | "8\t754\t1\n", 282 | "6\t719\n", 283 | "6\t736\t1\n", 284 | "10\t902\n", 285 | "12\t1443\n", 286 | "13\t870\t1\n", 287 | "2\t519\n", 288 | "6\t807\t1\n", 289 | "10\t1039\t1\n", 290 | "4\t328\t1\n", 291 | "5\t363\t1\n", 292 | "8\t814\t1\n", 293 | "5\t998\n", 294 | "9\t661\n", 295 | "7\t899\n", 296 | "8\t607\t1\n", 297 | "8\t847\n", 298 | "3\t304\t1\n", 299 | "10\t1172\n", 300 | "3\t395\t1\n", 301 | "7\t620\n", 302 | "4\t417\n", 303 | "6\t510\n", 304 | "6\t1306\t1\n", 305 | "13\t2115\n", 306 | "4\t638\t1\n", 307 | "7\t543\n", 308 | "5\t442\t1\n", 309 | "6\t654\n", 310 | "4\t600\t1\n", 311 | "15\t1618\n", 312 | "4\t429\t1\n", 313 | "6\t1078\n", 314 | "7\t1031\t1\n", 315 | "9\t849\n", 316 | "6\t564\t1\n", 317 | "2\t396\n", 318 | "14\t2361\n", 319 | "16\t1758\n", 320 | "5\t1300\n", 321 | "5\t721\t1\n", 322 | "16\t1304\n", 323 | "9\t1238\n", 324 | "14\t768\t1\n", 325 | "9\t832\n", 326 | "8\t534\t1\n", 327 | "13\t1676\t1\n", 328 | "10\t1148\t1\n", 329 | "6\t695\n", 330 | "4\t227\t1\n", 331 | "6\t585\t1\n", 332 | "7\t1086\n", 333 | "18\t1400\t1\n", 334 | "16\t2524\n", 335 | "5\t596\t1\n", 336 | "14\t3165\t1\n", 337 | "5\t431\n", 338 | "10\t1676\t1\n", 339 | "3\t560\n", 340 | "5\t760\t1\n", 341 | "2\t262\n", 342 | "11\t1060\t1\n", 343 | "3\t297\n", 344 | "8\t1157\t1" 345 | ] 346 | }, 347 | { 348 | "cell_type": "code", 349 | "execution_count": 10, 350 | "metadata": { 351 | "ExecuteTime": { 352 | "end_time": "2021-07-16T06:01:44.518853Z", 353 | "start_time": "2021-07-16T06:01:44.495121Z" 354 | } 355 | }, 356 | "outputs": [ 357 | { 358 | "name": "stdout", 359 | "output_type": "stream", 360 | "text": [ 361 | "total: 48\n", 362 | "avg page: 8.395833333333334\n", 363 | "avg length: 
15.159722222222223\n", 364 | "min per page: 2.045208144478978\n", 365 | "total: 43\n", 366 | "avg page: 8.0\n", 367 | "avg length: 13.234883720930233\n", 368 | "min per page: 1.7313444384536163\n" 369 | ] 370 | } 371 | ], 372 | "source": [ 373 | "with open('video') as f:  # rows: slide pages, length in seconds, optional third field marking 'code' rows (tab-separated)\n", 374 | " lines = f.readlines()\n", 375 | " tokens = [[int(i) for i in l.strip().split('\\t')] for l in lines]\n", 376 | "nbs = [x for x in tokens if len(x)==2]  # rows without the flag\n", 377 | "code = [x[:2] for x in tokens if len(x)==3]  # flagged rows; keep pages and seconds\n", 378 | "\n", 379 | "import numpy as np\n", 380 | "\n", 381 | "def analyze(x):\n", 382 | " x = np.array(x)\n", 383 | " print('total:', len(x))\n", 384 | " print('avg page:', x[:,0].mean())\n", 385 | " print('avg length:', x[:,1].mean()/60)  # minutes\n", 386 | " print('min per page:', (x[:,1]/x[:,0]).mean()/60)\n", 387 | "\n", 388 | "analyze(nbs)\n", 389 | "analyze(code)" 390 | ] 391 | }, 392 | { 393 | "cell_type": "code", 394 | "execution_count": null, 395 | "metadata": {}, 396 | "outputs": [], 397 | "source": [] 398 | } 399 | ], 400 | "metadata": { 401 | "kernelspec": { 402 | "display_name": "Python 3", 403 | "language": "python", 404 | "name": "python3" 405 | }, 406 | "language_info": { 407 | "codemirror_mode": { 408 | "name": "ipython", 409 | "version": 3 410 | }, 411 | "file_extension": ".py", 412 | "mimetype": "text/x-python", 413 | "name": "python", 414 | "nbconvert_exporter": "python", 415 | "pygments_lexer": "ipython3", 416 | "version": "3.8.5" 417 | } 418 | }, 419 | "nbformat": 4, 420 | "nbformat_minor": 5 421 | } 422 | -------------------------------------------------------------------------------- /index.md: -------------------------------------------------------------------------------- 1 | --- 2 | layout: home 3 | title: Schedule 4 | --- 5 | 6 | # Dive into Deep Learning Online Course 7 | 8 | ## Course Updates 9 | 10 | {% if site.announcements %} 11 | {{ site.announcements.last }} 12 | [All Announcements](announcements.html){: .btn .fs-3 } 13 | {% endif %} 14 | 15 | 20 | 21 | ## Course Summary 22 | 23 | | --- | --- | 24 | | Dates | March 20, 2021 --- July 2021 (estimated) | 25 | | Livestreams | Saturdays and Sundays, 1:00 --- 2:30 pm Beijing time | 26 | | Livestream venue | [ 机器之心](https://app6ca5octe2206.h5.xiaoeknow.com/v1/course/alive/l_601cc496e4b05a9e88714463) | 27 | | Recordings | [ B站](https://space.bilibili.com/1567748478) | 28 | | Textbook | [ zh-v2.d2l.ai](https://zh-v2.d2l.ai/) | 29 | 30 | 31 | 32 | Measured by academic breakthroughs as well as industrial applications, 33 | deep learning has been the fastest-advancing area of artificial intelligence over the past decade. However, deep learning models are complex, carry many parameters, and new models appear constantly, all of which makes the field hard to learn. 34 | 35 | This course teaches deep learning from scratch; students only need basic Python programming and mathematics. We will cover four major families of models: multilayer perceptrons, convolutional neural networks, recurrent neural networks, and attention mechanisms. On top of these, we will introduce typical tasks from the two major application areas of deep learning --- computer vision and natural language processing. 36 | 37 | A distinguishing feature of this course is that it not only explains the models and algorithms, but also shows how every detail is implemented in PyTorch, so students gain first-hand experience on real data. We will hold four course competitions in which students practice solving real problems with what they have learned. 38 | 39 | The course closely follows the second edition of 《动手学深度学习》 (Dive into Deep Learning), which has already been adopted as a textbook by nearly 200 universities. Livestreams start on March 20; no registration or fee is required. Stay tuned. 40 | 41 | 42 | 43 | 44 | ## Instructor 45 | 46 | {% for staffer in site.staffers %} 47 | {{ staffer }} 48 | {% endfor %} 49 | 50 | 51 | 52 | 53 | 54 | 55 | ## Schedule 56 | 57 | 58 | The schedule below is tentative and will be adjusted as the course progresses. Some entries temporarily link to the English edition; the Chinese edition will be updated later. 59 | 60 | 61 | {% for module in site.modules %} 62 | {{ module }} 63 | {% endfor %} 64 | -------------------------------------------------------------------------------- /upload.sh: -------------------------------------------------------------------------------- 1 | aws s3 sync --delete _site s3://courses.d2l.ai/zh-v2 --acl 'public-read' 2 | --------------------------------------------------------------------------------