├── 2020考试题.jpg
├── 2021年考试题.pdf
├── 2021考题.PNG
├── 2021考题2.PNG
├── CV经典模型.pdf
├── README.md
├── 交叉熵和softmax下求导.pdf
├── 卷积计算.py
├── 卷积计算padding=Valid.py
├── 图像描述.pdf
├── 姿势估计.pdf
├── 损失函数计算.ipynb
├── 损失计算.py
├── 文本生成图像模型.pdf
└── 深度学习期末考试题-2022.pdf

/2020考试题.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Hou-jing/UCAS_deep-learning_course/1dbd1cd031f41d8dc4d23a4491ebb10ecb003373/2020考试题.jpg
--------------------------------------------------------------------------------
/2021年考试题.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Hou-jing/UCAS_deep-learning_course/1dbd1cd031f41d8dc4d23a4491ebb10ecb003373/2021年考试题.pdf
--------------------------------------------------------------------------------
/2021考题.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Hou-jing/UCAS_deep-learning_course/1dbd1cd031f41d8dc4d23a4491ebb10ecb003373/2021考题.PNG
--------------------------------------------------------------------------------
/2021考题2.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Hou-jing/UCAS_deep-learning_course/1dbd1cd031f41d8dc4d23a4491ebb10ecb003373/2021考题2.PNG
--------------------------------------------------------------------------------
/CV经典模型.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Hou-jing/UCAS_deep-learning_course/1dbd1cd031f41d8dc4d23a4491ebb10ecb003373/CV经典模型.pdf
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# UCAS_deep-learning_course
Knowledge-point notes for the UCAS (University of Chinese Academy of Sciences) deep learning course.
The files in this project are notes on key topics compiled while reviewing for the UCAS deep learning course, including insights adapted from other people's write-ups.
If you find any mistakes, corrections and feedback are welcome.
--------------------------------------------------------------------------------
/交叉熵和softmax下求导.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Hou-jing/UCAS_deep-learning_course/1dbd1cd031f41d8dc4d23a4491ebb10ecb003373/交叉熵和softmax下求导.pdf
--------------------------------------------------------------------------------
/卷积计算.py:
--------------------------------------------------------------------------------
import tensorflow as tf

# 6x6 single-channel input "image"
input_x = tf.constant([
    [[[5, 6, 0, 1, 8, 2],
      [0, 9, 8, 4, 6, 5],
      [2, 6, 5, 3, 8, 4],
      [6, 3, 4, 9, 1, 0],
      [7, 5, 9, 1, 6, 7],
      [2, 5, 9, 2, 3, 7]]]])

# 3x3 convolution kernel
filters = tf.constant([
    [[[0, -1, 1], [1, 0, 0], [0, -1, 1]]]
])

# conv2d expects NHWC input and [height, width, in_channels, out_channels] filters
input_x = tf.reshape(input_x, (1, 6, 6, 1))
filters = tf.reshape(filters, [3, 3, 1, 1])

res = tf.nn.conv2d(input_x, filters, strides=1, padding='SAME')
print('Output without activation:', res)

print('Output after ReLU activation:', tf.nn.relu(res))

'''
conv2d(input, filter, strides, padding, use_cudnn_on_gpu=True,
       data_format="NHWC", dilations=[1, 1, 1, 1], name=None)

input: the tensor to be convolved (the image). conv2d requires a 4-D input
with dimensions [batch, in_height, in_width, in_channels], i.e. the batch
size, the image height and width, and the number of channels per image.

filter: the convolution kernel, also required to be 4-D:
[filter_height, filter_width, in_channels, out_channels], i.e. the kernel
height and width, the number of input channels, and the number of output
channels. in_channels must match the input's in_channels.

strides: how far the kernel moves at each step while sliding over the image,
usually written as [1, stride_h, stride_w, 1]. stride_h and stride_w are the
steps along the height and width; the first 1 is the stride over the batch
dimension and the last 1 is the stride over the channel dimension.
TensorFlow requires strides[0] = strides[3] = 1, i.e. the batch and channel
dimensions may not be skipped. In the examples here both stride_h and
stride_w are 1.

padding: how the borders are handled; the value is either "SAME" or "VALID".
Because the kernel has a spatial extent, some of its elements have no
matching pixel once it reaches the border. With "SAME", zeros are padded at
those positions and the convolution proceeds; with strides of [1, 1, 1, 1]
the output keeps the same spatial size as the input. With "VALID", no
convolution is computed at the border, so the output shrinks.
'''
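# Hand-check (a minimal pure-Python sketch; conv2d_manual is a helper
# written here, not a TensorFlow API): tf.nn.conv2d computes a
# cross-correlation (the kernel is not flipped), so each output element is
# the elementwise product-sum of the kernel with the window it covers.
# pad=1 with a 3x3 kernel and stride 1 reproduces padding='SAME';
# pad=0 reproduces padding='VALID'.
def conv2d_manual(image, kernel, pad=0):
    """Cross-correlate image with kernel after zero-padding by `pad`."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    # zero-pad the image on all four sides
    padded = [[0] * (w + 2 * pad) for _ in range(h + 2 * pad)]
    for i in range(h):
        for j in range(w):
            padded[i + pad][j + pad] = image[i][j]
    out_h = h + 2 * pad - kh + 1
    out_w = w + 2 * pad - kw + 1
    return [[sum(kernel[u][v] * padded[i + u][j + v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

image = [[5, 6, 0, 1, 8, 2],
         [0, 9, 8, 4, 6, 5],
         [2, 6, 5, 3, 8, 4],
         [6, 3, 4, 9, 1, 0],
         [7, 5, 9, 1, 6, 7],
         [2, 5, 9, 2, 3, 7]]
kernel = [[0, -1, 1], [1, 0, 0], [0, -1, 1]]

# SAME-padding result: same 6x6 shape as the input. The top-left element
# only sees the bottom-right 2x2 of its window (the rest is zero padding):
# 1*9 = 9. The element at (1, 1) sees the full top-left 3x3 window:
# -6 + 0*0 + 0 - 6 + 5 = -7.
manual_same = conv2d_manual(image, kernel, pad=1)
print('Manual SAME cross-correlation:', manual_same)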
--------------------------------------------------------------------------------
/卷积计算padding=Valid.py:
--------------------------------------------------------------------------------
import tensorflow as tf

# 6x6 single-channel input "image"
input_x = tf.constant([
    [[[5, 6, 0, 1, 8, 2],
      [0, 9, 8, 4, 6, 5],
      [2, 6, 5, 3, 8, 4],
      [6, 3, 4, 9, 1, 0],
      [7, 5, 9, 1, 6, 7],
      [2, 5, 9, 2, 3, 7]]]])

# 3x3 convolution kernel
filters = tf.constant([
    [[[0, -1, 1], [1, 0, 0], [0, -1, 1]]]
])

input_x = tf.reshape(input_x, (1, 6, 6, 1))
filters = tf.reshape(filters, [3, 3, 1, 1])

res = tf.nn.conv2d(input_x, filters, strides=1, padding='VALID')
print('VALID, output without activation:', res)
res = tf.squeeze(res)
print('VALID, output squeezed to 2-D for readability:', res)

# print('VALID, output after activation:', tf.nn.relu(res))
print('VALID, activated output squeezed to 2-D:', tf.squeeze(tf.nn.relu(res)))

# TensorFlow has no "full" padding mode; it can be emulated by zero-padding
# the input by hand. Padding the original 6x6 image with one ring of zeros
# and then applying padding='SAME' below gives an effective padding of 2,
# which is exactly a full convolution (output size 6 + 3 - 1 = 8).
input_x = tf.constant([
    [[[0, 0, 0, 0, 0, 0, 0, 0],
      [0, 5, 6, 0, 1, 8, 2, 0],
      [0, 0, 9, 8, 4, 6, 5, 0],
      [0, 2, 6, 5, 3, 8, 4, 0],
      [0, 6, 3, 4, 9, 1, 0, 0],
      [0, 7, 5, 9, 1, 6, 7, 0],
      [0, 2, 5, 9, 2, 3, 7, 0],
      [0, 0, 0, 0, 0, 0, 0, 0]]]])
input_x = tf.reshape(input_x, (1, 8, 8, 1))

res = tf.nn.conv2d(input_x, filters, strides=1, padding='SAME')
print('FULL (zero-padded), output without activation:', res)

print('FULL (zero-padded), output squeezed to 2-D:', tf.squeeze(res))

out = tf.nn.relu(res)
print('FULL, output after activation:', out)
print('FULL, activated output squeezed to 2-D:', tf.squeeze(out))
--------------------------------------------------------------------------------
/图像描述.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Hou-jing/UCAS_deep-learning_course/1dbd1cd031f41d8dc4d23a4491ebb10ecb003373/图像描述.pdf
--------------------------------------------------------------------------------
/姿势估计.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Hou-jing/UCAS_deep-learning_course/1dbd1cd031f41d8dc4d23a4491ebb10ecb003373/姿势估计.pdf
--------------------------------------------------------------------------------
/损失计算.py:
--------------------------------------------------------------------------------
import numpy as np
import torch
import torch.nn as nn

# BCELoss averages over samples by default; inputs must be float tensors,
# and the predictions must already be probabilities in [0, 1].
criterion = nn.BCELoss()
pre = torch.tensor([0.1, 0.2, 0.3, 0.4]).float()
tar = torch.tensor([0, 0, 0, 1]).float()
l = criterion(pre, tar)
print('Binary cross-entropy loss (mean):', l)

criterion = nn.BCELoss(reduction="sum")  # sum over samples instead of mean
l = criterion(pre, tar)
print('Binary cross-entropy loss (sum):', l)

# reduction="none" returns a vector: the loss of each individual sample
loss = nn.BCELoss(reduction="none")
l = loss(pre, tar)
print('Per-sample losses:', l)

# CrossEntropyLoss expects raw logits (it applies log-softmax internally,
# so no explicit Softmax is needed), shaped batch_size x num_classes, with
# integer class indices as targets.
criterion2 = nn.CrossEntropyLoss()
pre1 = torch.tensor([np.log(20), np.log(40), np.log(60), np.log(80)]).float()
pre1 = pre1.reshape(1, 4)
tar = torch.tensor([3])
loss2 = criterion2(pre1, tar)
print('Multi-class cross-entropy loss for pre1:', loss2)

pre2 = torch.tensor([np.log(10), np.log(30), np.log(50), np.log(90)]).float()
pre2 = pre2.reshape(1, 4)
tar = torch.tensor([3])
loss2 = criterion2(pre2, tar)
print('Multi-class cross-entropy loss for pre2:', loss2)
--------------------------------------------------------------------------------
/文本生成图像模型.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Hou-jing/UCAS_deep-learning_course/1dbd1cd031f41d8dc4d23a4491ebb10ecb003373/文本生成图像模型.pdf
--------------------------------------------------------------------------------
/深度学习期末考试题-2022.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Hou-jing/UCAS_deep-learning_course/1dbd1cd031f41d8dc4d23a4491ebb10ecb003373/深度学习期末考试题-2022.pdf
--------------------------------------------------------------------------------