├── Part1.预修知识 ├── 1.概率论.md ├── 微积分.xmind ├── 概率论.xmind └── 线性代数.xmind ├── Part2.总概 ├── 常用特征工程方法.xmind └── 概论.xmind ├── Part3.常用模型 ├── 1.监督学习SupervisedLearning │ ├── 0.广义线性模型GLM │ │ ├── 广义线性模型.md │ │ └── 广义线性模型.xmind │ ├── 1.线性回归(回归)LinearRegression │ │ ├── 1.线性回归.xmind │ │ └── 2.局部加权线性回归.xmind │ ├── 11.概率图模型ProbabilityGrapyModel │ │ ├── 总概.xmind │ │ └── 条件随机场.xmind │ ├── 12.集成学习EnsembleLearning │ │ ├── Adaboost.xmind │ │ ├── Boosting Tree.xmind │ │ ├── GBDT.xmind │ │ └── 随机森林.xmind │ ├── 13.深度学习DeepLearning │ │ ├── 1.前馈神经网络.xmind │ │ ├── 2.防止过拟合.xmind │ │ ├── 3.卷积神经网络 │ │ │ ├── 1.概述.xmind │ │ │ ├── 2.LeNet[Lecun,1998].xmind │ │ │ ├── 3.AlexNet[Hinton,NIPS,2012].xmind │ │ │ ├── 4.Clarifai[ECCV,2014].xmind │ │ │ ├── 5.VGGNet[ICLR,2015].xmind │ │ │ ├── 6.GoogleNet[CVPR,2015].xmind │ │ │ └── 7.ResNet[CVPR,2016].xmind │ │ ├── 4.循环神经网络 │ │ │ ├── LSTM.md │ │ │ ├── 循环神经网络RNN.xmind │ │ │ ├── 注意力机制Attention.xmind │ │ │ ├── 长短期记忆网络LSTM.xmind │ │ │ └── 门控循环单元GRU.xmind │ │ └── 5.深度模型中的优化.xmind │ ├── 2.线性判别(分类)LinearDiscriminant │ │ ├── 1.感知机判别.xmind │ │ ├── 2.fisher判别.xmind │ │ └── 3.最小平方误差判别.xmind │ ├── 3.非线性判别(分类)NonlinearDiscriminant │ │ ├── 1.广义线性判别函数.xmind │ │ ├── 2.分段线性判别函数.xmind │ │ └── 3.势函数法.xmind │ ├── 4.逻辑斯蒂回归LogisticRegression │ │ ├── 1.逻辑斯蒂回归.xmind │ │ ├── 2.SoftmaxRegression.xmind │ │ └── SoftmaxRegression.md │ ├── 6.贝叶斯分类NaiveBayes │ │ ├── 贝叶斯分类-new.xmind │ │ └── 贝叶斯分类.xmind │ ├── 7.决策树DecisionTree │ │ └── 决策树.xmind │ ├── 8.支持向量机SVM │ │ ├── 支持向量回归SVR.xmind │ │ └── 支持向量机SVM.xmind │ └── 9.高斯判别模型GaussDiscriminantModel │ │ └── 高斯判别模型.xmind └── 2.无监督学习UnsupervisedLearning │ └── 2.维归约DimensionReduction │ └── 维归约.xmind ├── Part4.优化算法 ├── A.EM算法 │ └── EM算法.xmind ├── B.梯度下降法 │ └── 梯度下降.xmind └── C.牛顿法 │ └── 牛顿法.xmind ├── Part6.特点领域应用 ├── 1.nlp │ ├── 1.词的向量表示.md │ └── 2.命名实体识别.md ├── 2.知识图谱 │ ├── 1.知识图谱简介.xmind │ ├── 2.知识表示方法.xmind │ ├── 3.知识框架学习.xmind │ ├── 4.实体识别.xmind │ ├── 5.实体消歧.xmind │ └── 6.关系抽取.xmind └── 3.CV │ └── 常见数据集.md └── README.md /Part1.预修知识/1.概率论.md: 
-------------------------------------------------------------------------------- 1 | # 1.贝叶斯 2 | 3 | ## 1.1公式 4 | 5 | ## 1.2推导 6 | 7 | ## 1.3 全概率公式 8 | 9 | 10 | 11 | # 2.期望 12 | 13 | 14 | 15 | # 3.独立同分布 16 | 17 | 18 | 19 | # 4.常见分布 20 | 21 | ## 4.1指数族分布 22 | 23 | ### 4.1.1公式 24 | 25 | 概率密度公式: 26 | $$ 27 | p(y;\eta)=b(y)\exp(\eta^T T(y)-a(\eta)) 28 | $$ 29 | 30 | - η——自然参数/规范参数 31 | - T(y)——充分统计量,通常用T(y) = y 32 | - a(η)——log配分函数,即归一化因子的对数,exp(-a(η))起归一化作用,使概率分布积分为1的条件得到满足。对a(η)求导,容易得到充分统计量T(y)的均值、方差等性质:$E[T(y)]=\nabla_\eta a(\eta)$、$Cov[T(y)]=\nabla^2_\eta a(\eta)$。如下图: 33 | 34 | ![](https://images0.cnblogs.com/blog/663760/201504/031601391858737.png) 35 | 36 | - b(y)——基础度量(base measure) 37 | - T、a、b的固定选择定义了一个由η参数化的分布族 38 | 39 | ### 4.1.2常见指数族分布 40 | 41 | ![](http://ot0qvixbu.bkt.clouddn.com/%E5%BE%AE%E4%BF%A1%E6%88%AA%E5%9B%BE_20180228171957.png) 42 | 43 | ## 4.2伯努利分布/零一分布/两点分布 44 | 45 | 46 | 47 | ## 4.3二项分布 48 | 49 | -------------------------------------------------------------------------------- /Part1.预修知识/微积分.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part1.预修知识/微积分.xmind -------------------------------------------------------------------------------- /Part1.预修知识/概率论.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part1.预修知识/概率论.xmind -------------------------------------------------------------------------------- /Part1.预修知识/线性代数.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part1.预修知识/线性代数.xmind -------------------------------------------------------------------------------- /Part2.总概/常用特征工程方法.xmind: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part2.总概/常用特征工程方法.xmind -------------------------------------------------------------------------------- /Part2.总概/概论.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part2.总概/概论.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/0.广义线性模型GLM/广义线性模型.md: -------------------------------------------------------------------------------- 1 | # 1.假设 2 | 3 | 1. y|x;θ~ExponentialFamily(η) 4 | 5 | 2. 给定x,我们的目的是预测T(y)在条件x下的期望。一般情况T(y)=y,这就意味着我们希望预测h(x)=E[y|x]。 6 | 7 | 这个假设在逻辑回归和线性回归中都是满足的。比如在逻辑回归中:hθ(x) = p(y = 1|x; θ) = 0 · p(y = 8 | 0|x; θ) + 1 · p(y = 1|x; θ) = E[y|x; θ]. 9 | 10 | 3. 参数η和输入x是线性相关的:即η=θ^Tx。这条更多被认为是“设计选择”,而不是假设。 11 | 12 | # 2.模型的一般形式 13 | 14 | 15 | $$ 16 | y=g^{-1}(w^Tx+b) 17 | $$ 18 | g(·)——联系函数 19 | 20 | 21 | 22 | # 3.GLM推导最小二乘 23 | 24 | - 最小二乘(LS)是GLM的一个特例。 25 | - 假设y|x;θ∼N(μ,σ^2),那么: 26 | 27 | $$ 28 | h_\theta(x) = E[y|x;\theta] \\ =\mu \\ =\eta \\ = \theta^T x 29 | $$ 30 | 31 | - 上式中第一行因为假设2,第二行因为高斯分布的特点,第三行根据上面高斯分布为指数族分布的推导,第四行因为假设3。 32 | 33 | # 4.GLM推导逻辑回归 34 | 35 | - LR是GLM的一个特例。 36 | - 假设y|x;θ∼Bernoulli(φ),那么: 37 | 38 | $$ 39 | h_\theta(x) = E[y|x;\theta] \\ =\phi \\ =\frac{1}{1+e^{-\eta}} \\ = \frac{1}{1+e^{-\theta^T x}} 40 | $$ 41 | 42 | - 第一行因为假设2,第二行因为伯努利分布的性质,第三行因为伯努利分布为指数族分布时的推导,第四行因为假设3。 43 | 44 | # 5.GLM推导Softmax 45 | 46 | - Softmax是GLM的一个特例 47 | - 如果要分为k类,则使用k个参数φ1, . . . , φk,各表示属于每一类的概率。由于φ1 + . . .
+ φk=1,则这里简化为k-1个参数,第k个参数由前k-1个推出来即可。 48 | - 为了将多项式分布能够写成指数分布族的形式,先引入T(y),它是一个k-1维的向量,如下所示: 49 | 50 | ![](https://www.2cto.com/uploadfile/Collfiles/20161219/20161219092103693.png) 51 | 52 | - 引入指示函数1,使得:1{True} = 1,1{False} = 0。则可以将T(y)表示为:(T(y))i = 1{y = i},从而有:E[(T(y))i] = P(y = i) = φi。 53 | - 可以得到: 54 | 55 | ![](https://www.2cto.com/uploadfile/Collfiles/20161219/20161219092103695.png) 56 | 57 | - 其中: 58 | 59 | ![](https://www.2cto.com/uploadfile/Collfiles/20161219/20161219092103696.png) 60 | 61 | - 正则联系函数如下: 62 | 63 | $$ 64 | \eta_i=log\frac{\phi_i}{\phi_k} 65 | $$ 66 | 67 | 68 | 69 | - 方便起见,定义: 70 | 71 | $$ 72 | \eta_k=log\frac{\phi_k}{\phi_k}=0 73 | $$ 74 | 75 | - 导出正则响应函数:这个变换后的φ就是Softmax函数。 76 | 77 | $$ 78 | e^{\eta_i}=\frac{\phi_i}{\phi_k} \\ \phi_ke^{\eta_i}={\phi_i} \\ \phi_k\sum_{i=1}^ke^{\eta_i}=\sum_{i=1}^k\phi_i=1\\ \phi_k=\frac{1}{\sum_{i=1}^ke^{\eta_i}}\\ \phi_i=\frac{e^{\eta_i}}{\sum_{j=1}^ke^{\eta_j}} 79 | $$ 80 | 81 | 82 | 83 | - 使用假设3,将ηi和x联系起来,有:ηi = θi^Tx(i=1,...,k-1,θi是模型参数)。则有Softmax回归: 84 | 85 | $$ 86 | p(y=i|x;\theta)=\phi_i \\ =\frac{e^{\eta_i}}{\sum_{j=1}^ke^{\eta_j}}\\=\frac{e^{\theta_i^Tx}}{\sum_{j=1}^ke^{\theta_j^Tx}} 87 | $$ 88 | 89 | 90 | 91 | - 向量表示的模型函数为: 92 | 93 | ![](https://www.2cto.com/uploadfile/Collfiles/20161219/20161219092103700.png) 94 | 95 | # 6.正则响应函数 96 | 97 | - 将η与原始概率分布中的参数联系起来的函数 98 | 99 | - 如: 100 | $$ 101 | \phi =\frac{1}{1+e^{-\eta}} (逻辑回归)、\mu =\eta(最小二乘)、\phi_i=\frac{e^{\eta_i}}{\sum_{j=1}^ke^{\eta_j}}(Softmax) 102 | $$ 103 | 104 | 105 | 106 | # 7.正则联系函数 107 | 108 | - 正则响应函数的逆,如$\eta_i=log\frac{\phi_i}{\phi_k}$(Softmax)。 -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/0.广义线性模型GLM/广义线性模型.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/0.广义线性模型GLM/广义线性模型.xmind
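上面《广义线性模型.md》第4、5节推导出的正则响应函数,可以用一段最小的 Python 数值例子来印证(仅为示意性草图,`sigmoid`、`softmax` 等函数名是本例为说明而取的,并非笔记原有内容):

```python
import math

def sigmoid(eta):
    # 伯努利分布的正则响应函数:phi = 1 / (1 + e^(-eta))
    return 1.0 / (1.0 + math.exp(-eta))

def softmax(etas):
    # 多项式分布的正则响应函数:phi_i = e^(eta_i) / sum_j e^(eta_j)
    m = max(etas)  # 先减去最大值防止溢出,不改变各项比值
    exps = [math.exp(e - m) for e in etas]
    s = sum(exps)
    return [e / s for e in exps]

# 正则联系函数是响应函数的逆:eta = log(phi / (1 - phi))
phi = 0.3
eta = math.log(phi / (1 - phi))
assert abs(sigmoid(eta) - phi) < 1e-12

# 按第5节的约定取 eta_k = 0,当 k = 2 时 softmax 退化为 sigmoid,
# 也就是说逻辑回归可以看成 Softmax 回归在二分类时的特例
p = softmax([eta, 0.0])
assert abs(p[0] - sigmoid(eta)) < 1e-12
assert abs(sum(p) - 1.0) < 1e-12
```

其中 η_k=0 的约定与第5节“方便起见,定义 η_k=log(φ_k/φ_k)=0”一致。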
-------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/1.线性回归(回归)LinearRegression/1.线性回归.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/1.线性回归(回归)LinearRegression/1.线性回归.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/1.线性回归(回归)LinearRegression/2.局部加权线性回归.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/1.线性回归(回归)LinearRegression/2.局部加权线性回归.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/11.概率图模型ProbabilityGrapyModel/总概.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/11.概率图模型ProbabilityGrapyModel/总概.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/11.概率图模型ProbabilityGrapyModel/条件随机场.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/11.概率图模型ProbabilityGrapyModel/条件随机场.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/12.集成学习EnsembleLearning/Adaboost.xmind: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/12.集成学习EnsembleLearning/Adaboost.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/12.集成学习EnsembleLearning/Boosting Tree.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/12.集成学习EnsembleLearning/Boosting Tree.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/12.集成学习EnsembleLearning/GBDT.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/12.集成学习EnsembleLearning/GBDT.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/12.集成学习EnsembleLearning/随机森林.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/12.集成学习EnsembleLearning/随机森林.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/1.前馈神经网络.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/1.前馈神经网络.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/2.防止过拟合.xmind: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/2.防止过拟合.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/1.概述.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/1.概述.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/2.LeNet[Lecun,1998].xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/2.LeNet[Lecun,1998].xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/3.AlexNet[Hinton,NIPS,2012].xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/3.AlexNet[Hinton,NIPS,2012].xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/4.Clarifai[ECCV,2014].xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/4.Clarifai[ECCV,2014].xmind 
-------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/5.VGGNet[ICLR,2015].xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/5.VGGNet[ICLR,2015].xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/6.GoogleNet[CVPR,2015].xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/6.GoogleNet[CVPR,2015].xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/7.ResNet[CVPR,2016].xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/3.卷积神经网络/7.ResNet[CVPR,2016].xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/4.循环神经网络/LSTM.md: -------------------------------------------------------------------------------- 1 | # 1.图示 2 | 3 | ## 1.1单神经元 4 | 5 | -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/4.循环神经网络/循环神经网络RNN.xmind: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/4.循环神经网络/循环神经网络RNN.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/4.循环神经网络/注意力机制Attention.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/4.循环神经网络/注意力机制Attention.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/4.循环神经网络/长短期记忆网络LSTM.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/4.循环神经网络/长短期记忆网络LSTM.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/4.循环神经网络/门控循环单元GRU.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/4.循环神经网络/门控循环单元GRU.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/5.深度模型中的优化.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/13.深度学习DeepLearning/5.深度模型中的优化.xmind -------------------------------------------------------------------------------- 
/Part3.常用模型/1.监督学习SupervisedLearning/2.线性判别(分类)LinearDiscriminant/1.感知机判别.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/2.线性判别(分类)LinearDiscriminant/1.感知机判别.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/2.线性判别(分类)LinearDiscriminant/2.fisher判别.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/2.线性判别(分类)LinearDiscriminant/2.fisher判别.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/2.线性判别(分类)LinearDiscriminant/3.最小平方误差判别.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/2.线性判别(分类)LinearDiscriminant/3.最小平方误差判别.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/3.非线性判别(分类)NonlinearDiscriminant/1.广义线性判别函数.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/3.非线性判别(分类)NonlinearDiscriminant/1.广义线性判别函数.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/3.非线性判别(分类)NonlinearDiscriminant/2.分段线性判别函数.xmind: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/3.非线性判别(分类)NonlinearDiscriminant/2.分段线性判别函数.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/3.非线性判别(分类)NonlinearDiscriminant/3.势函数法.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/3.非线性判别(分类)NonlinearDiscriminant/3.势函数法.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/4.逻辑斯蒂回归LogisticRegression/1.逻辑斯蒂回归.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/4.逻辑斯蒂回归LogisticRegression/1.逻辑斯蒂回归.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/4.逻辑斯蒂回归LogisticRegression/2.SoftmaxRegression.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/4.逻辑斯蒂回归LogisticRegression/2.SoftmaxRegression.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/4.逻辑斯蒂回归LogisticRegression/SoftmaxRegression.md: -------------------------------------------------------------------------------- 1 | # 1.适用 2 | 3 | 用于多分类问题。 4 | 5 | 逻辑回归的推广(generalization)。 6 | 7 | # 2.模型 8 | 9 | # 3.策略 10 | 11 | - MLE 12 | 13 | $$ 14 | ℓ(\theta) = \sum_{i=1}^m\log p(y^{(i)}|x^{(i)};\theta) \\ =\sum_{i=1}^m\log\prod_{l=1}^k\left(\frac{e^{\theta_l^Tx^{(i)}}}{\sum_{j=1}^ke^{\theta_j^Tx^{(i)}}}\right)^{1\{y^{(i)}=l\}} 15 | $$ 16 | 17 | # 
4.算法 18 | 19 | - 梯度上升法 20 | - 牛顿法 -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/6.贝叶斯分类NaiveBayes/贝叶斯分类-new.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/6.贝叶斯分类NaiveBayes/贝叶斯分类-new.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/6.贝叶斯分类NaiveBayes/贝叶斯分类.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/6.贝叶斯分类NaiveBayes/贝叶斯分类.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/7.决策树DecisionTree/决策树.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/7.决策树DecisionTree/决策树.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/8.支持向量机SVM/支持向量回归SVR.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/8.支持向量机SVM/支持向量回归SVR.xmind -------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/8.支持向量机SVM/支持向量机SVM.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/8.支持向量机SVM/支持向量机SVM.xmind 
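把上文《SoftmaxRegression.md》中“策略 = MLE、算法 = 梯度上升”两条组合起来,可以写出一个不依赖第三方库的训练草图(仅为示意,玩具数据、函数名与超参数均为本例假设,并非笔记原有内容):

```python
import math

def softmax(z):
    # phi_i = e^(z_i) / sum_j e^(z_j),减最大值防溢出
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train_softmax_regression(X, y, k, lr=0.1, epochs=1000):
    """用梯度上升最大化对数似然 l(theta)。
    对第 l 类参数 theta_l 的梯度为 sum_i (1{y_i = l} - phi_l(x_i)) * x_i。"""
    d = len(X[0])
    theta = [[0.0] * d for _ in range(k)]
    for _ in range(epochs):
        grad = [[0.0] * d for _ in range(k)]
        for x, yi in zip(X, y):
            phi = softmax([sum(t * xj for t, xj in zip(theta[l], x)) for l in range(k)])
            for l in range(k):
                coef = (1.0 if yi == l else 0.0) - phi[l]
                for j in range(d):
                    grad[l][j] += coef * x[j]
        for l in range(k):
            for j in range(d):
                theta[l][j] += lr * grad[l][j] / len(X)  # 沿梯度方向上升
    return theta

def predict(theta, x):
    # 取 theta_l^T x 最大的类,等价于取 phi_l 最大的类
    scores = [sum(t * xj for t, xj in zip(tl, x)) for tl in theta]
    return max(range(len(scores)), key=lambda l: scores[l])

# 三类线性可分的玩具数据,每个样本第一维是偏置项 1
X = [[1, 0, 0], [1, 0.1, 0.2], [1, 3, 0], [1, 3.1, 0.2], [1, 0, 3], [1, 0.2, 3.1]]
y = [0, 0, 1, 1, 2, 2]
theta = train_softmax_regression(X, y, k=3)
assert [predict(theta, x) for x in X] == y
```

梯度上升与笔记中列出的另一种算法(牛顿法)优化的是同一个凹的对数似然,区别只在收敛速度。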
-------------------------------------------------------------------------------- /Part3.常用模型/1.监督学习SupervisedLearning/9.高斯判别模型GaussDiscriminantModel/高斯判别模型.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/1.监督学习SupervisedLearning/9.高斯判别模型GaussDiscriminantModel/高斯判别模型.xmind -------------------------------------------------------------------------------- /Part3.常用模型/2.无监督学习UnsupervisedLearning/2.维归约DimensionReduction/维归约.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part3.常用模型/2.无监督学习UnsupervisedLearning/2.维归约DimensionReduction/维归约.xmind -------------------------------------------------------------------------------- /Part4.优化算法/A.EM算法/EM算法.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part4.优化算法/A.EM算法/EM算法.xmind -------------------------------------------------------------------------------- /Part4.优化算法/B.梯度下降法/梯度下降.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part4.优化算法/B.梯度下降法/梯度下降.xmind -------------------------------------------------------------------------------- /Part4.优化算法/C.牛顿法/牛顿法.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part4.优化算法/C.牛顿法/牛顿法.xmind -------------------------------------------------------------------------------- /Part6.特点领域应用/1.nlp/1.词的向量表示.md: -------------------------------------------------------------------------------- 1 | # 1.one-hot encoding(传统方式) 2 | 3 | - 
定义:表示一个单词:只有一维是1,其他维是0。 4 | - 缺点:语义鸿沟(同义词问题,one-hot没有语义)、维度灾难、稀疏、无法表示未出现在词表中的词。 5 | 6 | # 2.count-based 7 | 8 | ## 2.1 基于词频统计 9 | 10 | one-hot的增强版,引入了词频。 11 | 12 | ## 2.2 tf*idf 13 | 14 | - **定义:**引入了逆文档频率。 15 | - **tf(词频):**一词语出现的次数除以该文档的总词语数。 16 | - **idf(逆文档频率):**文档频率的倒数:总文档数除以出现该词语的文档数,通常再取对数。 17 | - **假设**:如果某个词在一篇文章中出现的频率高,并且在其他文章中很少出现,那么它很可能就反映了这篇文章的特性,因此要提高它的权值。 18 | 19 | 20 | ## 2.3 SVD 21 | 22 | ## 2.4 GloVe 23 | 24 | # 3.分布式表示 25 | 26 | ## 3.1 原理 27 | 28 | - **定义:**将one-hot压缩到低维空间,每一维可以看成词的语义或主题信息,语义相似的词语距离近。 29 | - **优点:**维度压缩、语义提取解决语义鸿沟、基于学习模型可以对未出现在词表中的词进行表示。 30 | - **方法:**LDA、Deep Learning。 31 | - **核心假设:**具有相似上下文信息的词应该具有相似的词表示。 32 | - **词关系:**Paradigmatic(同义词)和Syntagmatic(搭配出现) 33 | - Distributional Representation VS Distributed Representation: 34 | - Distributional Representation是从分布式假设(由Harris在1954年提出,出现在相同上下文的词语语义相似)的角度,是一类获取词表示的方法。 35 | - 而Distributed Representation指的是文本表示的形式,就是低维、稠密的连续向量。 36 | 37 | ## 3.2 传统方法——语言模型 38 | 39 | ## 3.3 深度学习方法——NNLM 40 | 41 | ## 3.4 深度学习方法——CBOW/Skipgram 42 | 43 | - NNLM方法的升级版。 44 | - 去除隐藏层。 45 | - 不考虑词序:汉字顺序并不影响阅读。 46 | 47 | ### 3.4.1 CBOW 48 | 49 | - BOW的升级版。 50 | 51 | ### 3.4.2 skip-gram 52 | 53 | ## 3.5 实现工具 54 | 55 | - word2vec 56 | - gensim 57 | - fasttext -------------------------------------------------------------------------------- /Part6.特点领域应用/1.nlp/2.命名实体识别.md: -------------------------------------------------------------------------------- 1 | # 1.词典匹配 2 | 3 | - 简单、实用、有效、可控 4 | 5 | # 2.有监督学习方法 6 | 7 | ## 2.1 HMM 8 | 9 | ## 2.2 CRF 10 | 11 | ## 2.3 ME 12 | 13 | ## 2.4 SVM 14 | 15 | # 3.无监督学习方法 16 | 17 | # 4.半监督学习方法 18 | 19 | # 5.混合方法 20 | 21 | 22 | 23 | 24 | 25 | # 常用NER语料库 26 | 27 | - 香港城市大学语料库(1 772 202 字,训练集) 28 | - 微软亚洲研究院语料库(1 089 050 字,训练集) 29 | - 北京大学语料库(1 833 177 字,训练集) 30 | - 人民日报语料 31 | - 微博语料 32 | -------------------------------------------------------------------------------- /Part6.特点领域应用/2.知识图谱/1.知识图谱简介.xmind: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part6.特点领域应用/2.知识图谱/1.知识图谱简介.xmind -------------------------------------------------------------------------------- /Part6.特点领域应用/2.知识图谱/2.知识表示方法.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part6.特点领域应用/2.知识图谱/2.知识表示方法.xmind -------------------------------------------------------------------------------- /Part6.特点领域应用/2.知识图谱/3.知识框架学习.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part6.特点领域应用/2.知识图谱/3.知识框架学习.xmind -------------------------------------------------------------------------------- /Part6.特点领域应用/2.知识图谱/4.实体识别.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part6.特点领域应用/2.知识图谱/4.实体识别.xmind -------------------------------------------------------------------------------- /Part6.特点领域应用/2.知识图谱/5.实体消歧.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part6.特点领域应用/2.知识图谱/5.实体消歧.xmind -------------------------------------------------------------------------------- /Part6.特点领域应用/2.知识图谱/6.关系抽取.xmind: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/glqglq/prml_note/926261992b76097533cbb9afda5f9730aeab5b64/Part6.特点领域应用/2.知识图谱/6.关系抽取.xmind -------------------------------------------------------------------------------- /Part6.特点领域应用/3.CV/常见数据集.md: 
-------------------------------------------------------------------------------- 1 | # cifar10 2 | 3 | - 版本:python、matlab、bin 4 | - 描述:训练集5000 * 10类,测试集1000 * 10类。训练集解压出来后是5个batch,一个batch有10000张图片。 5 | - 结构(python,一个batch): 6 | - data:10000张图 * 32 * 32 * RGB3通道 7 | - labels:10000张图 * 1 8 | - batch_label:'training batch ? of 5' 9 | - filenames:文件名 10 | - 参考资料: 11 | - http://www.cs.toronto.edu/~kriz/cifar.html 12 | 13 | # cifar100 14 | 15 | - 版本:python、matlab、bin 16 | - 描述:训练集500 * 100小类/2500 * 20大类,测试集100 * 100小类/500 * 20大类。 17 | - 结构(python,train中所有): 18 | - data:50000张图 * 32 * 32 * RGB3通道 19 | - coarse_labels:大类标签,50000张图 * 1 20 | - fine_labels:小类标签,50000张图 * 1 21 | - batch_label:'training batch 1 of 1' 22 | - filenames:文件名 23 | - 参考资料: 24 | - http://www.cs.toronto.edu/~kriz/cifar.html -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Part0 项目说明 2 | 3 | - 机器学习与模式识别知识的思维导图和笔记。 4 | - 参考:李航《统计学习方法》、周志华《机器学习》、黄庆明/兰艳艳/郭嘉丰/山世光《模式识别与机器学习》课程 5 | 6 | 7 | 8 | 9 | # Part1 预修知识 10 | 11 | 12 | 13 | # Part2 总概 14 | 15 | - 统计机器学习概论 16 | - 0.AI垂直领域应用 17 | - 1.常用特征工程方法 18 | - 2.常用分类算法 19 | - 3.常用回归预测算法 20 | - 4.常用优化方法 21 | 22 | 23 | # Part3 常用模型 24 | 25 | ## 1.监督学习SupervisedLearning 26 | 27 | ### 1.1.判别函数DiscriminantFunction 28 | 29 | - 总概 30 | - 线性判别函数 31 | - Fisher判别 32 | - 感知机 33 | - 最小平方误差法判别 34 | - 非线性判别 35 | - 势函数法 36 | - 广义线性判别 37 | - 分段线性判别 38 | 39 | ### 1.2.贝叶斯分类NaiveBayes 40 | 41 | ### 1.3.支持向量机SVM 42 | 43 | ### 1.4.决策树DecisionTree 44 | 45 | ### 1.5.逻辑斯蒂回归LogisticRegression 46 | 47 | ### 1.6.高斯判别模型GaussDiscriminantModel 48 | 49 | ### 1.7.神经网络NeuralNetwork 50 | 51 | ### 1.8.k近邻kNN 52 | 53 | ### 1.9.最大熵模型MaximumEntropyModel 54 | 55 | ### 1.10.概率图模型ProbabilityGraphModel 56 | 57 | - 总概 58 | - 有向图模型 59 | - 隐马尔可夫模型HMM 60 | - 最大熵马尔可夫模型MEMM 61 | - 无向图模型 62 | - 条件随机场模型CRF 63 | 64 | ### 1.11.线性回归LinearRegression 65 | 66 | ## 2.半监督学习SemiSupervisedLearning 
67 | 68 | ## 3.无监督学习UnsupervisedLearning 69 | 70 | ### 3.1聚类Clustering 71 | 72 | ### 3.2 维归约DimensionReduction 73 | 74 | ## 4.集成学习EnsembleLearning 75 | 76 | 77 | 78 | # Part4 优化算法 79 | 80 | ## 1.EM算法 81 | 82 | ## 2.梯度下降法 83 | 84 | ## 3.牛顿法 85 | 86 | # Part5 常用策略 87 | 88 | 89 | 90 | # Part6 特定领域应用 91 | 92 | 93 | ## 1.nlp 94 | 95 | - 1.词的向量表示 96 | - 2.命名实体识别 97 | 98 | ## 2.知识图谱 99 | 100 | - 1.知识图谱简介 101 | - 2.知识表示方法 102 | - 3.知识框架学习 103 | - 4.实体识别 104 | - 5.实体消歧 105 | - 6.关系抽取 --------------------------------------------------------------------------------