├── .gitignore ├── 10_Tree_Basic.ipynb ├── 11_Tree_Ensemble.ipynb ├── 12_EM.ipynb ├── 13_graph.ipynb ├── 14_tran_learn.ipynb ├── 15_interview.ipynb ├── 16_max_entropy.ipynb ├── 1_MCMC.ipynb ├── 2_LDA.ipynb ├── 3_Logistic_Regression.ipynb ├── 4_theano_tutorial.ipynb ├── 5_HMM.ipynb ├── 6_CRF.ipynb ├── 7_GA.ipynb ├── 8_PCA.ipynb ├── 9_SVM.ipynb ├── README.md ├── book └── AndrieuFreitasDoucetJordan2003.pdf ├── iris ├── iris.pdf ├── max_entropy_data.txt ├── res ├── Hinge_loss_vs_zero_one_loss.svg.png ├── bayes_unigram.png ├── box_muller.png ├── crf.png ├── dag.png ├── detail-balance.png ├── doc-topic-word.png ├── dtree.jpg ├── expectation_maximization.png ├── full_con.png ├── gibbs2.png ├── graph.png ├── hmm.jpg ├── lda.png ├── lda_gibbs.png ├── linear_crf.png ├── maximum_likelihood.png ├── multi_task.png ├── prof.png ├── rejection_sampling.png ├── tree_1.png ├── tree_2.png └── ug.png └── test.py /.gitignore: -------------------------------------------------------------------------------- 1 | .idea 2 | .ipynb_checkpoints 3 | test.py 4 | *.pyc -------------------------------------------------------------------------------- /10_Tree_Basic.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# 决策树基础\n", 8 | "\n", 9 | "## 目录\n", 10 | "\n", 11 | "- [概览](#概览)\n", 12 | "- [基础知识](#基础知识)\n", 13 | "- [算法](#算法)\n", 14 | "- [ID3算法](#ID3算法)\n", 15 | " - [ID3算法流程](#ID3算法流程)\n", 16 | "- [C4.5算法](#C4.5算法)\n", 17 | "- [剪枝](#剪枝)\n", 18 | "- [CART算法](#CART算法)\n", 19 | " - [回归CART生成](#回归CART生成)\n", 20 | " - [分类CART生成](#分类CART生成)\n", 21 | " - [基尼系数](#基尼系数)\n", 22 | " - [CART剪枝](#CART剪枝)\n", 23 | " - [算法总结](#算法总结)\n", 24 | "- [决策树实践](#决策树实践)\n", 25 | "- [决策树的可解释性](#决策树的可解释性)\n", 26 | "- [参考链接](#参考链接)" 27 | ] 28 | }, 29 | { 30 | "cell_type": "markdown", 31 | "metadata": {}, 32 | "source": [ 33 | "## 概览\n", 34 | "\n", 35 | "决策树是一种分类和回归的基本模型,可从三个角度来理解它,即:\n", 36 | "- 一棵树。\n", 37 | "- if-then规则的集合,该集合是决策树上的所有从根节点到叶节点的路径的集合。\n", 38 | "- 定义在特征空间与类空间上的条件概率分布,决策树实际上是将特征空间划分成了互不相交的单元,每个从根到叶的路径对应着一个单元。决策树所表示的条件概率分布由各个单元给定条件下类的条件概率分布组成。实际中,哪个类别有较高的条件概率,就把该单元中的实例划分为该类别。\n", 39 | " \n", 40 | "主要的**优点**有两个:\n", 41 | "- 模型具有可解释性,容易向业务部门人员描述。\n", 42 | "- 分类速度快。\n", 43 | " \n", 44 | "![](https://github.com/applenob/machine_learning_basic/raw/master/res/dtree.jpg)\n", 45 | "\n", 46 | "## 基础知识\n", 47 | "\n", 48 | "**熵**:$H(X) = -\\sum_{i=1}^np_ilog(p_i)$\n", 49 | "\n", 50 | "**条件熵**:$H(Y|X) = H(X,Y)-H(X) = \\sum_XP(X)H(Y|X) = -\\sum_{X,Y}P(X,Y)logP(Y|X)$\n", 51 | "\n", 52 | "**基尼系数(Gini index)**:$Gini(p) = \\sum_{k=1}^Kp_k(1-p_k) = 1-\\sum_{k=1}^Kp_k^2$,基尼指数反映了从数据集中随机抽取两个样本,其类标不一致的概率。\n", 53 | "\n", 54 | "## 算法\n", 55 | "\n", 56 | "决策树的损失函数通常是正则化的极大似然函数,学习的策略是以损失函数为目标函数的最小化。\n", 57 | "\n", 58 | "所以决策树的本质和其他机器学习模型是一致的,有一个损失函数,然后去优化这个函数;然而,区别就在于如何优化。\n", 59 | "\n", 60 | "决策树采用**启发式算法**来近似求解最优化问题,得到的是次优的结果。\n", 61 | "\n", 62 | "该启发式算法可分为三步:\n", 63 | "\n", 64 | "- 特征选择\n", 65 | "- 模型生成\n", 66 | "- 决策树的剪枝\n", 67 | "\n", 68 | "决策树学习算法通常是一个递归地选择最优特征,并根据该特征对训练数据进行分割的过程。\n", 69 | "\n", 70 | "选择最优特征要根据**特征的分类能力**,特征分类能力的衡量通常采用信息增益或信息增益比。\n", 71 | "\n", 72 | "决策树学习常用的算法主要有以下三种:`ID3算法`,`C4.5算法`,`CART算法`。\n" 73 | ] 74 | }, 75 | { 76 | "cell_type": "markdown", 77 | "metadata": {}, 78 | "source": [ 79 | "\n", 80 | "## ID3算法\n", 81 | "\n", 82 | "ID3使用**信息增益**作为特征选取的依据:\n", 83 | "\n", 84 | "$G(D, A) = H(D) - H(D|A)$,即**经验熵**和**经验条件熵**的差值,其中$D$是训练数据集,$A$是特征。\n", 85 | "\n", 
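为了让上面的熵与信息增益的定义更直观,下面给出一个极简的 Python 计算示意(其中的玩具数据纯属虚构,仅用于演示计算过程;经验熵 $H(D)$ 与经验条件熵 $H(D|A)$ 的具体公式见下文):

```python
import numpy as np

def entropy(y):
    # 经验熵 H(D) = -sum p_k * log2(p_k)
    _, counts = np.unique(y, return_counts=True)
    p = counts / float(len(y))
    return -np.sum(p * np.log2(p))

def info_gain(x, y):
    # 信息增益 G(D, A) = H(D) - H(D|A),x 为某个离散特征
    h_cond = 0.0
    for v in np.unique(x):
        mask = (x == v)
        h_cond += mask.mean() * entropy(y[mask])  # |D_i|/|D| * H(D_i)
    return entropy(y) - h_cond

# 玩具数据:某个二值特征对二分类标签的信息增益
x = np.array([1, 1, 1, 0, 0, 0])
y = np.array([1, 1, 0, 0, 0, 0])
print(entropy(y))       # H(D)
print(info_gain(x, y))  # G(D, A)
```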
"$H(D)=-\\sum_{k=1}^K\\frac{|C_k|}{|D|}log\\frac{|C_k|}{|D|}$,其中,$|C_k|$是属于类$C_k$的个数,$|D|$是所有样本的个数。\n", 87 | "\n", 88 | "$H(D|A)=\\sum_{i=1}^np_{a_i}H(D|a_i)=\\sum_{i=1}^n\\frac{|D_i|}{|D|}H(D_i)=-\\sum_{i=1}^n\\frac{|D_i|}{|D|}\\sum_{k=1}^{K}\\frac{|D_{ik}|}{|D_i|}log\\frac{|D_{ik}|}{|D_i|}$,其中,特征$A$有$n$个不同的取值$\\{a_1, a_2, ..., a_n\\}$,根据特征$A$的取值将$D$划分为$n$个子集$D_1, D_2, ..., D_n$, $|D_i|$是$D_i$的样本个数,$D_{ik}$是$D_i$中属于类$C_k$的样本集合。\n", 89 | "\n", 90 | "### ID3算法流程\n", 91 | "\n", 92 | "- 1.计算$A$中各个特征对$D$的信息增益,选择信息增益最大的特征:$A_g$。\n", 93 | "- 2.若$A_g$的信息增益小于**阈值$\\epsilon$**,则置为单结点树,并将$D$中实例数最多的类$C_k$作为该结点的类标记。\n", 94 | "- 3.否则,对$A_g$的每一可能值:$a_i$,依据$A_g = a_i$将$D$分割为若干非空子集$D_i$,同样,将$D_i$中实例数最多的类作为类标,构建子结点。\n", 95 | "- 4.对第$i$个子结点,以$D_i$为训练集,以$A-{A_g}$为特征集,递归地调用上面1-3步。\n", 96 | "\n", 97 | "## C4.5算法\n", 98 | "\n", 99 | "C4.5使用**信息增益比**,作为特征选取的依据:\n", 100 | "\n", 101 | "**信息增益比**:$g_R(D,A)=\\frac{g(D,A)}{H_A(D)}$,即信息增益除以训练集$D$关于特征$A$的熵,$H_A(D) = -\\sum_{i=1}^n\\frac{D_i}{D}log_2\\frac{D_i}{D}$,$n$是特征$A$取值的个数。\n", 102 | "\n", 103 | "**为什么使用信息增益比?**先回顾信息增益:$H(D|A)=-\\sum_{i=1}^n\\frac{|D_i|}{|D|}\\sum_{k=1}^{K}\\frac{|D_{ik}|}{|D_i|}log\\frac{|D_{ik}|}{|D_i|}$,对于极限情况,如果某个特征$A$可以将数据集$D$完全分隔开,且每个子集的个数都是1,那么$log\\frac{|D_{ik}|}{|D_i|} = log1 = 0$,于是信息增益取得最大。但这样的特征并不是最好的。\n", 104 | "\n", 105 | "也就是说,使用信息增益作为特征选择的标准时,容易偏向于那些**取值比较多**的特征,导致训练出来的树非常的**宽**然而**深度不深**的树,非常容易导致**过拟合**。\n", 106 | "\n", 107 | "而采用信息增益比则有效地抑制了这个缺点:取值多的特征,以它作为根节点的单节点树的熵很大,即$H_A(D)$较大,导致信息增益比减小,在特征选择上会更加合理。\n", 108 | "\n", 109 | "C4.5具体算法类似于ID3算法。\n", 110 | "\n", 111 | "## 剪枝\n", 112 | "\n", 113 | "为了防止出现过拟合现象,要把过于复杂的树进行剪枝,将其简化。\n", 114 | "\n", 115 | "决策树的剪枝往往通过极小化决策树整体的损失函数(loss function)或者代价函数(cost function)来实现。\n", 116 | "\n", 117 | "决策树的生成学习局部的模型,而决策树剪枝学习整体的模型。\n", 118 | "\n", 119 | "**损失函数**:$C_α(T) = C(T)+α|T|=\\sum_{t=1}^{|T|}N_tH_t(T)+α|T|$\n", 120 | "\n", 121 | "其中,$|T|$是树$T$的叶节点个数,$t$是其中一个结点,$N_t$是这个结点的样本个数,$H_t(T)$是这个结点的经验熵。\n", 122 | "\n", 123 | "$C(T)$表示模型对训练数据的预测误差, $α|T|$则是正则化项。\n", 124 | "\n", 125 | "使用叶子结点的熵作为的模型的评价是因为:\n", 126 | "\n", 127 | "**如果分到该叶节点的所有样本都属于同一类,那么分类效果最好,熵最小。**\n", 128 | "\n", 129 | "**一般的剪枝算法**:\n", 130 | "\n", 131 | "1.计算每个结点的经验熵。\n", 132 | "\n", 133 | "2.递归地从叶节点向上回缩:设一叶结点回缩到父结点之前和之后,树分别是$T_B$和$T_A$,其对应的损失函数值分别是$C_α(T_B)$与$C_α(T_A)$,如果$C_α(T_A)≤C_α(T_B)$,则剪枝,即将父节点变成新的叶结点。" 134 | ] 135 | }, 136 | { 137 | "cell_type": "markdown", 138 | "metadata": {}, 139 | "source": [ 140 | "## CART算法\n", 141 | "\n", 142 | "**CART(Classification And Regression Tree)**本身是一种分类回归树,即,它既可以用来解决分类问题,也可以用来解决回归问题。\n", 143 | "\n", 144 | "**CART**树是一棵**二叉树**,内部结点特征的取值是“是”和“否”,左分支是取值为“是”的分支,右分支是取值是“否”的分支。\n", 145 | "\n", 146 | "因此,注意到CART的生成过程和前面的ID3和C4.5略有不同,分回归树和分类树两种情况分析。\n", 147 | "\n", 148 | "### 回归CART生成\n", 149 | "\n", 150 | "回归树的生成通常选择**平方误差**作为评判标准。\n", 151 | "\n", 152 | "假设已将输入空间划分为$M$个单元$R_1,R_2,...,R_M$,并且在每个单元$R_m$上有一个固定的输出值$c_m$,回归树可以表示为:$f(x) = \\sum_{m=1}^Mc_mI(x \\in R_m)$。\n", 153 | "\n", 154 | "在单元$R_m$上的$c_m$的最优值$\\hat c_m$是$\\hat c_m=ave(y_i|x_i\\in R_m)$(根据最小化该单元的平方误差可以得到这个结论)。\n", 155 | "\n", 156 | "至于空间的划分,先选择输入的第$j$个维度的特征$x^{(j)}$和对应的取值$s$,作为**切分变量(splitting variable)和切分点(splitting point)**,并定义两个区域:$R_1(j,s)=\\{x|x^{(j)}≤s\\}$和$R_2(j,s)=\\{x|x^{(j)}>s\\}$。再寻找最优的切分变量和切分点:$\\arg\\underset{j,s}{min}[\\underset{c_1}{min}\\sum_{x_i\\in R_1(j,s)}(y_i-c_1)^2+\\underset{c_2}{min}\\sum_{x_i\\in R_1(j,s)}(y_i-c_2)^2]$。\n", 157 | "\n", 158 | "**回归CART生成**:\n", 159 | "- 1.对于数的所有维度,遍历$j$;对固定的$j$扫描切分点$s$:\n", 160 | " - 2.寻找最优的切分变量和切分点:$\\arg\\underset{j,s}{min}[\\underset{c_1}{min}\\sum_{x_i\\in 
R_1(j,s)}(y_i-c_1)^2+\\underset{c_2}{min}\\sum_{x_i\\in R_1(j,s)}(y_i-c_2)^2]$。\n", 161 | "- 3.用选定的对$(j,s)$划分区域并决定相应的输出值:$R_1(j,s)=\\{x|x^{(j)}≤s\\}$和$R_2(j,s)=\\{x|x^{(j)}>s\\}$。$\\hat c_m=ave(y_i|x_i\\in R_m)\\;\\;\\;x\\in R_m,\\;m=1,2$。\n", 162 | "- 4.重复1,2,3直到满足停止条件。\n", 163 | "- 5.生成决策树:$f(x) = \\sum_{m=1}^Mc_mI(x \\in R_m)$。\n", 164 | "\n", 165 | "\n", 166 | "### 分类CART生成\n", 167 | "\n", 168 | "#### 基尼系数\n", 169 | "\n", 170 | "CART使用**基尼系数(Gini index)**最小化准则,进行特征选择。\n", 171 | "\n", 172 | "**基尼系数(Gini index)**:\n", 173 | "\n", 174 | "$Gini(D) = 1-\\sum_{k=1}^{K}(\\frac{|C_k|}{|D|})^2$\n", 175 | "\n", 176 | "$Gini(D,A) = \\sum_i\\frac{D_i}{D}Gini(D_i)$\n", 177 | "\n", 178 | "基尼指数$Gini(D,A)$表示经$A=a$分割后集合$D$的**不纯度(impurity)**,基尼指数越大,纯度越低,和熵类似。\n", 179 | "\n", 180 | "**分类CART生成**:\n", 181 | "\n", 182 | "- 1. 对现有特征A的每一个特征,每一个可能的取值a,**根据样本点对$A=a$的测试是“是”还是“否”**,将$D$分割成$D_1$和$D_2$两部分,计算$A=a$时的基尼指数。\n", 183 | "- 2.选择基尼指数最小的特征机器对应的切分点作为**最优特征**和**最优切分点**。\n", 184 | "- 3.递归调用,直到满足停止条件。\n", 185 | "\n", 186 | "**停止条件**:\n", 187 | "\n", 188 | "- 结点中样本个数小于预定阈值;\n", 189 | "- 样本集的基尼指数小于预定阈值(基本属于同一类);\n", 190 | "- 没有更多特征。\n", 191 | " \n", 192 | "### CART剪枝\n", 193 | "\n", 194 | "相比一般剪枝算法,CART剪枝算法的优势在于,**不用提前确定$α$值**,而是在剪枝的同时找到最优的α值。\n", 195 | "\n", 196 | "对于固定的$α$值,一定存在让$C_α(T)$最小的唯一的子树,记为$T_α$。\n", 197 | "\n", 198 | "对于某个结点$t$,单结点树的损失函数是:$C_α(t) = C(t) + α$,而以$t$为根的子树$T_t$的损失函数是:$C_α(T_t) = C(T_t) + α|T_t|$。\n", 199 | "\n", 200 | "当$α$充分小的时候,有$C_α(T_t) < C_α(t)$;\n", 201 | "\n", 202 | "当$α$增大到某一$α$时有:$C_α(T_t) = C_α(t)$。\n", 203 | "\n", 204 | "即,只要$α = \\frac{ C(t)-C(T_t)}{|T_t|-1}$,就可以保证$T_t$和$t$有相同的损失函数,也就代表着可以对$T_t$剪枝。\n", 205 | "\n", 206 | "因此,对于每个内部结点,计算$g(t) = \\frac{C(t)-C(T_t)}{|T_t|-1}$,代表**剪枝后整体损失函数减少的程度**,或者用我自己的话理解就是**代表$α$最少要达到多少时,结点$t$是可剪的。**\n", 207 | "\n", 208 | "将最小的$g(t)$设为$α_1$,剪枝得$T_1$,不断地重复此步骤,可以增加$α$,获得一系列$T_0, T_1, ..., T_n$。\n", 209 | "\n", 210 | "通过**交叉验证**,从剪枝得到的子树序列$T_0, T_1, ..., T_n$中选取最优子树$T_α$。\n", 211 | "\n", 212 | "**CART剪枝算法**:\n", 213 | "- 输入:生成的决策树$T_0$;\n", 214 | "- 输出:最有决策树$T_α$;\n", 215 | "- 1.$k=0,T=T_0$;\n", 216 | "- 2.$α=+∞$;\n", 217 | "- 3.自下而上地对各内部结点$t$计算$C(T_t),|T_t|$以及$g(t)=\\frac{C(t)-C(T_t)}{|T_t|-1}$。\n", 218 | "- 4.从小到大遍历$α=g(t)$剪枝得到的子树序列$T_0, T_1, ..., T_n$。\n", 219 | "- 5.交叉验证法在子树序列$T_0, T_1, ..., T_n$中选取最优子树$T_α$。\n", 220 | "\n", 221 | "\n", 222 | "## 算法总结\n", 223 | "\n", 224 | "ID3算法/C4.5算法/CART算法。\n", 225 | "\n", 226 | "ID3算法和C4.5算法用于生成**分类树**,区别主要在于选取特征的依据,前者是**信息增益**,后者是**信息增益比**。\n", 227 | "\n", 228 | "CART算法可以生成**分类树**和**回归树**,分类树使用**基尼指数**选取特征,并且不用提前确定$α$值,而是在剪枝的同时找到最优的$α$值。" 229 | ] 230 | }, 231 | { 232 | "cell_type": "markdown", 233 | "metadata": { 234 | "collapsed": true 235 | }, 236 | "source": [ 237 | "## 决策树实践\n", 238 | "\n", 239 | "使用sklearn的决策树实现来看看实践中如何使用决策树模型,\n", 240 | "\n", 241 | "sklearn中的决策树模型:**DecisionTreeClassifier**。\n", 242 | "\n", 243 | "```\n", 244 | "class sklearn.tree.DecisionTreeClassifier(criterion=’gini’, splitter=’best’, max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)\n", 245 | "```\n", 246 | "\n", 247 | "重要参数:\n", 248 | "\n", 249 | "- `criterion`: “gini” for the Gini impurity and “entropy” for the information gain.\n", 250 | "- `max_depth`: 树的最大深度。\n", 251 | "- `min_impurity_decrease`: 最小的基尼指数下降。\n", 252 | "\n", 253 | "下面代码摘自[这里](http://blog.csdn.net/sinat_22594309/article/details/59090895):" 254 | ] 255 | }, 256 | { 257 | 
"cell_type": "code", 258 | "execution_count": 1, 259 | "metadata": {}, 260 | "outputs": [ 261 | { 262 | "name": "stdout", 263 | "output_type": "stream", 264 | "text": [ 265 | "正确率是95.56%\n" 266 | ] 267 | }, 268 | { 269 | "name": "stderr", 270 | "output_type": "stream", 271 | "text": [ 272 | "/home/cer/anaconda2/lib/python2.7/site-packages/sklearn/model_selection/_split.py:2010: FutureWarning: From version 0.21, test_size will always complement train_size unless both are specified.\n", 273 | " FutureWarning)\n" 274 | ] 275 | } 276 | ], 277 | "source": [ 278 | "import numpy as np \n", 279 | "import sklearn\n", 280 | "from sklearn.tree import DecisionTreeClassifier \n", 281 | "from sklearn.model_selection import train_test_split\n", 282 | "from sklearn import datasets \n", 283 | " \n", 284 | "#读取数据,划分训练集和测试集 \n", 285 | "iris=datasets.load_iris() \n", 286 | "# 只保留数据集的前五个特征\n", 287 | "x=iris.data[:, :5]\n", 288 | "y=iris.target \n", 289 | "x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=0.7, random_state=1) \n", 290 | "#模型训练 \n", 291 | "model = DecisionTreeClassifier(max_depth=3) \n", 292 | "model = model.fit(x_train,y_train) \n", 293 | "y_test_hat = model.predict(x_test) \n", 294 | "res=y_test == y_test_hat \n", 295 | "acc=np.mean(res) \n", 296 | "print '正确率是%.2f%%'%(acc*100) " 297 | ] 298 | }, 299 | { 300 | "cell_type": "markdown", 301 | "metadata": {}, 302 | "source": [ 303 | "比较不同深度对预测准确率的影响:" 304 | ] 305 | }, 306 | { 307 | "cell_type": "code", 308 | "execution_count": 2, 309 | "metadata": {}, 310 | "outputs": [ 311 | { 312 | "name": "stdout", 313 | "output_type": "stream", 314 | "text": [ 315 | "正确率是60.00%\n", 316 | "正确率是60.00%\n", 317 | "正确率是95.56%\n", 318 | "正确率是95.56%\n", 319 | "正确率是95.56%\n", 320 | "正确率是95.56%\n", 321 | "正确率是95.56%\n", 322 | "正确率是95.56%\n", 323 | "正确率是95.56%\n", 324 | "正确率是95.56%\n", 325 | "正确率是95.56%\n" 326 | ] 327 | }, 328 | { 329 | "data": { 330 | "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAhIAAAFyCAYAAACgITN4AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAPYQAAD2EBqD+naQAAIABJREFUeJzs3XmcXFWZ//HPE9aJyGaQJkIQZA3IKoyNigsSJDOWIDpR\nIgOJokgyMIkkuPwwEcdxEgUGoihgNODSioNGHEQiKmokkCFdHbY0CAQaWUKaHYotyfP7496C6urq\nTlf17b6nTn/fr1e9mj517q3z7S5ST997zr3m7oiIiIg0YlTeAxAREZHmpUJCREREGqZCQkRERBqm\nQkJEREQapkJCREREGqZCQkRERBqmQkJEREQapkJCREREGqZCQkRERBqmQkJEejGzXc1sg5nNzHss\nITOzd6c/pyNzev256etvn8fri4AKCYmcmZ2e/kO7LO+xSG9m1mpmc8xs67zHMghDfp8BM/uCmX2o\nj9fWfQ4kVyokJHYnAquBw81s97wHI70cAXwZ2DbvgQTui0CtQkIkdyokJFpmthvJB9VMoBuYnO+I\n+mZmo/MeQ05swB0TWwzlYESkfiokJGaTgSeAa4D/oY9CIv2AOtPMbjWzF8zsMTO71swOqer3CTO7\n2cyeN7MnzOxPZnZ0xfMbzOzLNfZ/v5l9v+L7k8vn1c3sYjNbAzyYPjcubes0s5KZdZvZlWa2a439\nbmNmF5jZajN70cweNLPLzWx7M3udmT1nZhfU2O5NZrbOzM4eyA/RzP49zVAysxvMbL+K505JsxxY\nY7svpq+zUx/7nQPMT7+9P93PejMblz6/wcwuMrMTzex24EXgmPQ5S8d1e/o7e9TMvmtmvY5smNmx\nZvbn9OfxjJn9r5mNH0j2qv28ycwWp/tZY2bnA1tQoxgys380s9+a2VPp++UGMzuiqk95fsPe6e/4\n6fT3/d+VBZOZbQBGA+Wf9YbK91NqOzNbZGZPpq/5fTPbst6MIo3YNO8BiAyhE4Gr3H2dmbUBp5nZ\noe6+oqrf94GTSQqOy0j+v3gX8HagHV790JsD/BU4B3gZ+EfgvcDvNjKOvs5hXww8BnwFeF3adlj6\num3A34E3A6cDfzSz8e7+Yjqe1wFLgb2BhUARGAMUgJ3d/VYz+yUwycxmunvlGE5Mv/5oI+OG5Oey\nFfAtYEvgTOD3ZvZWd19LUqB9m6RIW1m17YnAH9z9kT72fRWwF/CxdL+Pp+1rK/ocBfxL+vrdwP1p\n+6XAv5L87i4EdgP+DTjIzN7h7usBzOwkYBHwW2A2yQfyZ4G/mNnB7t41gJ8B6YfyH4Cd09d7BDgJ\neB9Vv18zex/wG+AWYC6wAZgC/MHM3unut6Rdy9tdSXL67fMkv/szSE71nJI+/wmS3/HNaW6Aeytf\nMt3Hfek+DgE+BawBvjCQfCKD4u566BHdAziU5B/w91a0dQHnV/V7b9rv/H729RZgHfDzjbzmBuDL\nNdpXA9+v+P7ktO8NgFX13aLG9oen/SdXtH0FWA8U+hnP0WmfCVXtHSQf8P1l2TV9zeeAlor2w9L2\nb1a0/Rh4sGr7g9N+J23kdT6XjnFcHz/PV4C9q9rfmT43qUbeDcDH0u9fR3JE6jtV/XYAngS+W8f7\n6cx0nB+uaNsSuDttP7Ki/S7gmurfK8mH/28r2uak4/1FVd9vpfvcv6Lt2cr3UI19XFrVfhXwWNb/\nX+mhR62HTm1IrCYDj5J8WJf9DPiYmVUeij6B5B/ic/vZ1/Ekf/X116deDlzm7j3+mnX3l8r/bWab\nWrKs7z7gKZK/NMs+DKx096v7eY3rSf5yfvWUjpntDxwA/HCA4/yluz9aMb7/I/nLeGJFnyuAsWb2\n3oq2yUAJ+MUAX6cvN7j7XVVtHyH5efzezN5QfpAclXmOpDgEmABsA/y0qp+nGd7LwB0LPOLur+bx\n5OjQpZWdzOwgYE+greo1Xw/8HqheJuokR3QqLSB5v01kYBy4pKrtL8AbzGyrAe5DpGE6tSHRMbNR\nwCTgj8DuFXXDcpK/gI8i+ZAF2B142N2f6meXu5MUG6syHur91Q3pIfQvkhzWfhOvnX93kg/FsreQ\nnFbok7u7mf2Y5JTOlukH32TghY1tW+GeGm13Ax+t+P53JEXbZJJTMEZyumKxuz8/wNfpy/012vYk\nOfT/WI3nHHhj+t97kPz8/thHv6frGMeu1P5ZVBc5e6Zfr+hjPxvMbBt3r3zt6v3eS/J+e3Md46s+\nRfNk+nU7kuJKZMiokJAYvQ/YieTD7ONVzznJB9711RsNoU36aH+hRtu3SE59XADcRPJh5yRHUxo5\ngngFMAs4Dvgpyc/j1+7+bAP7qsndN5jZT4BPmdnpJPNLxjKwORgbU+tnNIrk/P+J1F71sbain5PM\nMVhTo9+6DMZXa2yQFKzVc0bKNvbB3sh1Idb30T7gVTEijVIhITEqf3CcTu9/SE8Ajjez09LTCPcC\nE8xs236OStxL8gExHri1n9d9kqrrIZjZZiRFzUCdACxy99kV+9iier/pmPbf2M7c/Q4zKwKTzewh\nYBwwrY7x7FmjbS96Hym4gmSZ7QdJDsk/BiwZwP4b+dC8l+So0o2Vp4L66GfAWnf/QwOvU+kBYL8a\n7fvUeE2AZ+t4zT3T/ZftQfJ+u7+iTRedkmBpjoREJT01cDzJX92/dPdfVD5I/uLfmmR1AyST0kaR\nTFrry2KSf8i/XDW/otq99D4H/hn6PiJRy3p6/395Ro19XAUcaLWvdljthyTLJv+dZOXDb+sYz3Fm\nNrb8jZkdTrJa5TeVndz9NuA24FSSYqjN3TcMYP/lUx/1XJDqSpI/gmottd3EzMqngK4DngG+aGa9\n/mgyszF1vOZvSOaBnFCx/WiSvJVWkLwPzkpX1mzsNY3ehd0ZJO+3ayvankcX7ZJA6YiExOZDJBPb\n+pqEeBPJoe/JJKswbjCzHwJnmNleJB+yo0gOz//B3S9293vN7GvA/yNZNvgL4CWSFQwPufuX0n1/\nD/iumf0PybyBA0km/FUuZyzrqyD5X+AkM3sGuBNoJfnru7uq3zdIJh3+3Mx+QPIB9gaSIwKfST/Y\ny35Ccr2G44CLPV0aOUD3AEvN7Du8tvxzbfr61a4AvknyIfjjAe5/BcnP4j/N7KckqzSudvdapzQA\ncPc/m9klwOfTyY1L0u32IvmZnEGyEuJZM/tsOq72dP9rSY7K/BPJ8tkzBjjOy4DpwA/N7G28tvyz\nxxyQdF7Kp0gKjzvS381DJPNd3ktyqqq6+NvNzH5F8t47guS9+aOq3+EK4P1mNgN4GFjt7ssHOHaR\noZX3shE99MjyAfyK5Bz0lv30+T7JxY22S783ksPyd5Cck3+U5AP9oKrtTia5NkCJ5IP9D8D7Kp43\n4D9JTqs8S3Jdit1IVl0srNrPeuCQGmPbmqQgWUPyoXMNyaHvHvtI+25Lck2DrnTcD5Bcb2C7Gvv9\n3/Q1/3GAP8dd0/4zSI5k3J/m/iMVyxKrttmR5AP9zjp/
Z19MM7xCxVLQ9L8v7Ge7T5JMoH2OZBVH\nR/rz37Gq35EkH+xPkHzw353+nA6uc5w7A79Mf7drgPN4bYntkVV9DwB+TnKKp5T+/tqA91T0mZNu\nuzfJUZan0vfVfwObV+1vr/Rn/1y6zfer9rF9jfdqzWW1euiR9cPcdepNJHbpUZT93X2vIXyNN5D8\npT7X3f9zqF4nFulFzr4M7ODuT+Q9HpFG1T1HwszeZWZXm9lD6aVaCwPY5j1mtsKSy/jebWYn1+jz\nUTNbZcnlblea2bH1jk1EerPkEtX/RN9LErMyheTflCxWa4hIk2hkjsTrSA4hLmQAF5sxszeTHFa9\nmGS51vuB75nZw+7+u7TPESTncc8mOZQ7GVicXsL2zgbGKDLipf/vvZPkcskvU3XxpAxf570kKxq+\nSHIBqwFddjoE6aqa7TfS7WlPL00uIr0N6tRGejOZ47yfq+uZ2TzgWHc/oKKtDdjG3Sem3/8UGO3u\nhYo+y4Ciu5/e8ABFRrD0yN8PSOY3fM7dfzlEr/NHkkmhS0kuid3XvTWCY2bvpvYFq8ocmOLumR/N\n0akNicVwFBJ/Ala4+8yKtlOAC9x9u/T7B4Dz3P2iij5zgQ+5+8END1BEpB/pUtFDN9LtDnevdUEr\nEWF4ln+20PuqcmuArc1sC08uKNNXn5a+dppO7DqG5K8tHXYUkUb1d3l0gDeZ2ZuGZSQiw2dLksuw\nX+fuj2+kb7+a+ToSxzDwteoiIiLS22SSOYoNG45C4lGS9eWVdgSe8dcub9tXn0fp2/0AP/rRj9h3\n330zGGa+ZsyYwQUXXJD3MDKjPOGKKQsoT8hiygJx5Vm1ahWf+MQnoPaN8eoyHIXEMpJb8FaakLZX\n9jkKuKii7eiqPtVeBNh333055JBD+unWHLbZZpsocpQpT7hiygLKE7KYskB8eVKDnhrQyHUkXmdm\nB6aXpoXkNs0Hmtku6fNfN7PLKzb5btpnnpntnd4d8CPA+RV9LgQ+YGYz0z5zSSZAfauRUM3o0Uf7\nO/jSfJQnXDFlAeUJWUxZIL48WWnkpl1vA4ok1353ksvEtgNfSZ9vAXYpd3b3+0kuhvN+kutPzAA+\n6e7XV/RZRnKNiU+nfT5MsmJjxFxD4qGHHsp7CJlSnnDFlAWUJ2QxZYH48mSl7lMb7v4n+ilA3H1K\njbY/s5ElVu5+FckdDUekQw/d2Aq05qI84YopCyhPyGLKAvHlyYpuIx6Ij3/843kPIVPKE66YsoDy\nhCymLBBfnqw07U27zOwQYMWKFStinPwiIiIyZNrb28tHWA519/bB7EtHJERERKRhKiQCMWVKr6kl\nTU15whVTFlCekMWUBeLLkxUVEoGYMGFC3kPIlPKEK6YsoDwhiykLxJcnK5ojISIiMsJojoSIiIgE\nQYWEiIiINEyFRCCWLl2a9xAypTzhiikLKE/IYsoC8eXJigqJQMyfPz/vIWRKecIVUxZQnpDFlAXi\ny5MVTbYMRKlUYvTo0XkPIzPKE66YsoDyhCymLBBXHk22jFAsb84y5QlXTFlAeUIWUxaIL09WVEiI\niIhIw1RIiIiISMNUSARi1qxZeQ8hU8oTrpiygPKELKYsEF+erKiQCMS4cePyHkKmlCdcMWUB5QlZ\nTFkgvjxZ0aoNERGREUarNkRERCQIKiRERESkYSokAtHZ2Zn3EDKlPOGKKQsoT8hiygLx5cmKColA\nzJ49O+8hZEp5whVTFlCekMWUBeLLkxVNtgxEV1dXVDOClSdcMWUB5QlZTFkgrjyabBmhWN6cZcoT\nrpiygPKELKYsEF+erDRUSJjZNDNbbWYvmNlNZnbYAPrfaWYlM1tlZidVPX+ymW0ws/Xp1w1mVmpk\nbCIiIjJ8Nq13AzObBJwHfBpYDswArjOzvdy9u0b/zwJfAz4F3AL8I3CZmT3h7tdUdH0a2Auw9Pvm\nPOcSq5dfhrPOgqeeynskIiIyWI8/ntmu6i4kSAqHS9z9CgAzOw34J2AqUOtm7Z9I+/9P+v396RGM\ns4HKQsLdfW0D44nCvHnzOPvss/MeRt+WL4cFC+Dww2GLLTbafd6DD3L2LrsMw8CGR0x5YsoCyhOy\nmLJAZHmeey6zXdVVSJjZZsChwH+W29zdzex6oLWPzbYAXqxqexE43Mw2cff1adtWZnY/yemWduCL\n7n5nPeNrZqVS4GdyisWkgFi6FDbbbKPdS3PmwFe+MgwDGx4x5YkpCyhPyGLKApHlaW+HZLLloNW1\nasPMdgIeAlrd/eaK9nnAke7eq5gws68BpwAfdPd2M3sb8GvgjcBYd19jZm8H9gBuBbYBZgFHAuPd\n/eE+xhLVqo3gTZ0Kt94Kt9yS90hERGSQsly10cipjXp9FdgRWGZmo4BHgUXAbGADgLvfBNxU3sDM\nlgGrgM8Ac4ZhjLIxHR2ggk1ERKrUu2qjG1hPUhhU2pGkQOjF3V90908Bo4FdgXHAA8Czfc2JcPd1\nQJHkKEW/Jk6cSKFQ6PFobW1l8eLFPfotWbKEQqHQa/tp06axcOHCHm3t7e0UCgW6u3vOHZ0zZw7z\n5s3r0dbV1UWhUOh1xbMFCxb0uuVsqVSiUCiwdOnSHu1tbW1MmTKl19gmTZoURo6ZM+H22+Hgg5s7\nRyy/D+VQDuVQjjpytLW1vfrZ2NLSQqFQYMaMGb22aVTdF6Qys5uAm939zPR7A7qAi9z9GwPcxw3A\ng+5+Uh/PjwLuAK5x97P66BPVqY3u7m7GjBmT9zBqW7kSDjoI/vpXOOKIAW0SdJ4GxJQnpiygPCGL\nKQvElSfvC1KdD5xqZv9qZvsA3yU52rAIwMy+bmaXlzub2Z5mNtnM9jCzw83sp8B+wJcq+pxjZkeb\n2W5mdjDwY5IjF99rOFmTmTp1at5D6FuxCGZwwAED3iToPA2IKU9MWUB5QhZTFogvT1bqniPh7lea\n2RjgXJJTGh3AMRWnKVqAyvUxmwCfI7lGxCvAH4Ej3L2ros92wKXptk8CK0gmdI6YO6TMnTs37yH0\nrViEPfeErbYa8CZB52lATHliygLKE7KYskB8ebKie23Ixr373dDSAj/7Wd4jERGRDOR9akNGkg0b\nkhUb6URLERGRSiokpH/33w/PPKNCQkREalIhEYjqJUTBKBaTrwcdVNdmweZpUEx5YsoCyhOymLJA\nfHmyokIiEO3tgzpFNXSKRdhpJ9ix+tIh/Qs2T4NiyhNTFlCekMWUBeLLkxVNtpT+/fM/J/MkfvOb\nvEciIiIZ0WRLGT7FouZHiIhIn1RISN8eewwefliFhIiI9EmFhPStoyP5WudESxERGTlUSASi1g1h\nclcswutfD7vvXvemQeYZhJjyxJQFlCdkMWWB+PJkRYVEIKZPn573EHrr6IADD4RR9b9NgswzCDHl\niSkLKE/IYsoC8eXJilZtSN/22QcmTICLLsp7JCIikiGt2pCh9/zzcPfdmmgpIiL9UiEhtd16K7hr\noqWIiPRLhUQ
gFi9enPcQeioWYbPNYL/9Gto8uDyDFFOemLKA8oQspiwQX56sqJAIRFtbW95D6Kmj\nA8aPh803b2jz4PIMUkx5YsoCyhOymLJAfHmyosmWUtthh8H++8MPfpD3SEREJGOabClDa906uO02\nTbQUEZGNUiEhvXV2wksvaaKliIhslAoJ6a1YTL4eeGC+4xARkeCpkAjElClT8h7Cazo6kstib7NN\nw7sIKk8GYsoTUxZQnpDFlAXiy5MVFRKBmDBhQt5DeE0Gtw4PKk8GYsoTUxZQnpDFlAXiy5MVrdqQ\nntzhDW+Az30OvvSlvEcjIiJDQKs2ZOh0dcGTT2qipYiIDIgKCempPNFSSz9FRGQAGiokzGyama02\nsxfM7CYzO2wA/e80s5KZrTKzk2r0+Wj63AtmttLMjm1kbM1q6dKleQ8h0dEBO+wAO+00qN0Ekycj\nMeWJKQsoT8hiygLx5clK3YWEmU0CzgPmAAcDK4HrzGxMH/0/C3wN+DIwHpgLfNvM/qmizxHAT4DL\ngIOAXwGLzWx8veNrVvPnz897CInyREuzQe0mmDwZiSlPTFlAeUIWUxaIL09W6p5saWY3ATe7+5np\n9wY8CFzk7r1+ymb2V2Cpu59d0fZN4HB3PzL9/qfAaHcvVPRZBhTd/fQ+xhHVZMtSqcTo0aPzHgbs\nuit8/OPwX/81qN0EkycjMeWJKQsoT8hiygJx5cltsqWZbQYcCvy+3OZJJXI90NrHZlsAL1a1vQgc\nbmabpN+3pvuodF0/+4xOEG/Oxx9PJltmMNEyiDwZiilPTFlAeUIWUxaIL09W6j21MQbYBFhT1b4G\naOljm+uAT6VHEDCztwGfBDZL90e6bT37lKHQ0ZF81URLEREZoOFYtfFV4FpgmZm9AvwSWJQ+t2Gw\nO584cSKFQqHHo7W1tdd945csWUKhUOi1/bRp01i4cGGPtvb2dgqFAt3d3T3a58yZw7x583q0dXV1\nUSgU6Ozs7NG+YMECZs2a1aOtVCpRKBR6Tdhpa2urecW0SZMmDW+OM86gc8stYY89mjtHLL8P5VAO\n5VCODHK0tbW9+tnY0tJCoVBgxowZvbZpmLsP+EFyFOEVoFDVvgj45Ua23QQYCxhwGvBUxXMPAGdU\n9Z9LMkeir/0dAviKFSs8BmeddVbeQ3CfPNm9tTWTXQWRJ0Mx5Ykpi7vyhCymLO5x5VmxYoUDDhzi\nddQBtR51HZFw91eAFcBR5bZ0suVRwI0b2Xa9uz/s7g58DPh1xdPLKveZOjptHxHGjRuX9xCSUxsZ\nndYIIk+GYsoTUxZQnpDFlAXiy5OVRlZt/AvJEYjTgOXADOAjwD7uvtbMvg6MdfeT0/57AocDNwPb\nAzNJioZD3b0r7dMK3AB8AbgG+DjweZJK6c4+xhHVqo3cvfACvP718J3vwKmn5j0aEREZQlmu2ti0\n3g3c/cr0mhHnAjsCHcAx7r427dIC7FKxySbA54C9SE6L/BE4olxEpPtcZmYnklxv4mvA34AP9VVE\nyBC47TZYv14TLUVEpC51FxIA7n4xcHEfz02p+r6TZD7DxvZ5FXBVI+ORDHR0wCabwP775z0SERFp\nIrrXRiCqZ+0Ou2IR9t0Xttwyk93lnidjMeWJKQsoT8hiygLx5cmKColAzJ49O98BZDjREgLIk7GY\n8sSUBZQnZDFlgfjyZKXuyZahiG2yZVdXV34zgtevh623hq9+FWbOzGSXueYZAjHliSkLKE/IYsoC\nceXJ7RLZMnRyfXPefTeUSpkekYjlf7aymPLElAWUJ2QxZYH48mRFhYS8dmnsAw/MdxwiItJ0VEhI\nMtFy111h++3zHomIiDQZFRKBqL4G+7DKeKIl5JxnCMSUJ6YsoDwhiykLxJcnKyokAlEqlfJ5Yffk\niEQGtw6vlFueIRJTnpiygPKELKYsEF+erGjVxkj397/DLrvAr34FNe5uJyIi8dGqDclOeaJlxkck\nRERkZFAhMdIVi8kky1122XhfERGRKiokAtHd3Z3PC5cnWpplutvc8gyRmPLElAWUJ2QxZYH48mRF\nhUQgpk6dms8LD8FES8gxzxCJKU9MWUB5QhZTFogvT1ZUSARi7ty5w/+iTz0Fq1cPya3Dc8kzhGLK\nE1MWUJ6QxZQF4suTFa3aGMn+9Cd4z3vg9tthv/3yHo2IiAwTrdqQbBSLyW3D994775GIiEiTUiEx\nknV0wAEHwKab5j0SERFpUiokArFw4cLhf9EhmmgJOeUZQjHliSkLKE/IYsoC8eXJigqJQLS3D+oU\nVf1eegnuvHNIJlpCDnmGWEx5YsoCyhOymLJAfHmyosmWI1V7Oxx6KCxbBm9/e96jERGRYaTJljJ4\nxSKMGpXMkRAREWmQComRqqMjWa0xenTeIxERkSamQmKkGsKJliIiMnI0VEiY2TQzW21mL5jZTWZ2\n2Eb6TzazDjN73sweNrOFZrZ9xfMnm9kGM1ufft1gZiPqxu+F4byF94YNsHLlkE20hGHOMwxiyhNT\nFlCekMWUBeLLk5W6CwkzmwScB8wBDgZWAteZ2Zg++r8DuBy4DBgPfAQ4HLi0quvTQEvFY9d6x9bM\npk+fPnwvdu+98NxzQ3pEYljzDIOY8sSUBZQnZDFlgfjyZKXuVRtmdhNws7ufmX5vwIPARe4+v0b/\nzwGnufueFW3TgdnuPi79/mTgAnffvnr7fsahVRuNuvJKmDQJ1q6FMTXrPxERiVhuqzbMbDPgUOD3\n5TZPKpHrgdY+NlsG7GJmx6b72BH4KHBNVb+tzOx+M+sys8VmNr6esUkdOjpg551VRIiIyKDVe2pj\nDLAJsKaqfQ3J6Yhe3P1G4BPAz8zsZeAR4Emg8hjRXcBUoABMTsd1o5mNrXN8MhCaaCkiIhkZ8lUb\n6ZGFC4G5wCHAMcBuwCXlPu5+k7v/yN1vdfe/AB8G1gKfGerxhWLx4sXD92LF4pBOtIRhzjMMYsoT\nUxZQnpDFlAXiy5OVeguJbmA9sGNV+47Ao31s83ngr+5+vrvf7u6/A04HpqanOXpx93VAEdhjYwOa\nOHEihUKhx6O1tbXXL3zJkiU1Z9xOmzat1/XT29vbKRQKdHd392ifM2cO8+bN69HW1dVFoVCgs7Oz\nR/uCBQuYNWtWj7ZSqUShUGDp0qU92tva2pgxY0avsU2aNCn7HI8+ypw1a5i3evWQ5JgyZcqr/z2k\nORja30c5R2VbDDkAzjjjjChylH8fle+1Zs5R2S+GHAAXXHBBFDnKv4/K91oz5Whra3v1s7GlpYVC\noVDzM6dRWU227CKZbPmNGv3/B3jZ3U+saGsFlgJvcvdeBYiZjQLuAK5x97P6GIcmWzbi2mth4kS4\n7z7Ybbe8RyMiIjnIcrJlI/ePPh9YZGYrgOXADGA0sAjAzL4OjHX3k9P+vwYuNbPTgOuAscAFJMXI\no+k25wA3AfcA2wKzgXHA9xqLJX0qFmGbbeDNb857JCIiEoG6Cwl3vzK9
ZsS5JKc0OoBj3H1t2qUF\n2KWi/+VmthUwDfgm8BTJqo/PV+x2O5LrSrSQTMRcAbS6e8/jPTJ4HR3JREuzvEciIiIRaOSIBO5+\nMXBxH8/1Olnj7t8Gvt3P/mYCMxsZi9SpWIR//ue8RyEiIpHQvTYCUWuyTOaefRbuuWdYln4OS55h\nFFOemLKA8oQspiwQX56sqJAIxIQJE4b+RVauTL4O8dJPGKY8wyimPDFlAeUJWUxZIL48Wal71UYo\ntGqjAQsWwFlnJffZ2GyzvEcjIiI5ye0S2dLkOjpg//1VRIiISGZUSIwkw3BFSxERGVlUSASi+mpl\nmXv5ZbjjjmG7x8aQ5xlmMeWJKQsoT8hiygLx5cmKColAzJ/f6w7s2Vq1KikmhumIxJDnGWYx5Ykp\nCyhPyGLKAvHlyYomWwaiVCoxevTooXuBRYtg6lR4+ml4/euH7nVSQ55nmMWUJ6YsoDwhiykLxJVH\nky0jNORvzo4O2GOPYSkiYBjyDLOY8sSUBZQnZDFlgfjyZEWFxEihiZYiIjIEVEiMBO6v3WNDREQk\nQyokAlF97/lMrV4NzzwzrEckhjRPDmLKE1MWUJ6QxZQF4suTFRUSgRg3btzQ7bxYTL4OYyExpHly\nEFOemLI3KIoxAAAgAElEQVSA8oQspiwQX56saNXGSHDOOfC978Ejj+Q9EhERCYBWbUh9NNFSRESG\niAqJkUATLUVEZIiokAhEZ2fn0Ox47Vp46KFhPyIxZHlyElOemLKA8oQspiwQX56sqJAIxOzZs4dm\nxzlMtIQhzJOTmPLElAWUJ2QxZYH48mRFky0D0dXVNTQzgufPh//4D3jqKRg1fHXjkOXJSUx5YsoC\nyhOymLJAXHk02TJCQ/bmLBbhwAOHtYiA+JZJxZQnpiygPCGLKQvElycrKiRip4mWIiIyhFRIxOz5\n5+Guu7T0U0REhowKiUDMmzcv+53eemtyn40cjkgMSZ4cxZQnpiygPCGLKQvElycrKiQCUSqVst9p\nRwdsuinst1/2+96IIcmTo5jyxJQFlCdkMWWB+PJkpaFVG2Y2DTgLaAFWAv/m7v/XT//JwCxgT+Bp\n4Fpglrs/UdHno8C5wJuBu4HPu/u1/ewzqlUbQ+LTn4bly5OCQkREJJXrqg0zmwScB8wBDiYpJK4z\nszF99H8HcDlwGTAe+AhwOHBpRZ8jgJ+kfQ4CfgUsNrPx9Y5PKmiipYiIDLFGTm3MAC5x9yvcvRM4\nDSgBU/vo/3Zgtbt/290fcPcbgUtIiomyM4Br3f18d7/L3b8MtAPTGxifAKxbB7fdpomWIiIypOoq\nJMxsM+BQ4PflNk/OjVwPtPax2TJgFzM7Nt3HjsBHgWsq+rSm+6h0XT/7jE53d3e2O+zshBdfzO2I\nROZ5chZTnpiygPKELKYsEF+erNR7RGIMsAmwpqp9Dcl8iV7SIxCfAH5mZi8DjwBP0vNoQ0s9+4zR\n1Kl9HdBpUHleRE6FROZ5chZTnpiygPKELKYsEF+erAz5qo10nsOFwFzgEOAYYDeS0xuDNnHiRAqF\nQo9Ha2srixcv7tFvyZIlFAqFXttPmzaNhQsX9mhrb2+nUCj0qj7nzJnTa/lPV1cXhUKh181cFixY\nwKxZs3q0lUolCoUCS5cu7dHe1tZWM9ukSZMaz1Es0v6mN1E46aRhzTFlyhQA5s6dm00O8vl9lHOU\nzZ07N4ocAC+++GIUOcq/j8r3WjPnKJs7d24UOQCOOeaYKHKUfx+V77VmytHW1vbqZ2NLSwuFQoEZ\nM2b02qZRda3aSE9tlIAT3P3qivZFwDbufnyNba4AtnT3f6loewfwF2And19jZg8A57n7RRV95gIf\ncveaJ/m1amMjjjoKtt0Wrroq75GIiEhgclu14e6vACuAo8ptZmbp9zf2sdloYF1V2wbAAUu/X1a5\nz9TRabvUyz25x4YmWoqIyBDbtIFtzgcWmdkKYDnJKo7RwCIAM/s6MNbdT077/xq41MxOI5lAORa4\nALjZ3R9N+1wI3GBmM0kmYX6cZFLnqY2EGvG6uuDJJ7X0U0REhlzdcyTc/UqSi1GdCxSBA4Bj3H1t\n2qUF2KWi/+XATGAacBvwM2AVcEJFn2XAicCngQ7gwySnNe6sP1Jzqj7PNijliZY5HpHINE8AYsoT\nUxZQnpDFlAXiy5OVhiZbuvvF7v5md/8Hd29191sqnpvi7u+r6v9td3+ru2/l7ju7+8nu/khVn6vc\nfZ90nwe4+3WNRWpO7e2DOkXVU7EIO+wAY8dmt886ZZonADHliSkLKE/IYsoC8eXJSkOXyA6BJlv2\n47jjoFSCJUvyHomIiAQo10tkSxPQREsRERkmKiRi8/jjyWRLTbQUEZFhoEIiNitXJl91REJERIaB\nColA1Lr6W0OKRRg9GvbcM5v9NSizPIGIKU9MWUB5QhZTFogvT1ZUSARi+vSMbnTa0QEHHACbbJLN\n/hqUWZ5AxJQnpiygPCGLKQvElycrWrURm/33hyOPhIsvznskIiISKK3akNpeeCG5fbgmWoqIyDBR\nIRGT22+H9es10VJERIaNColAVN+2tiHFYjI3Yv/9B7+vQcokT0BiyhNTFlCekMWUBeLLkxUVEoFo\na2sb/E46OmCffeAf/mHw+xqkTPIEJKY8MWUB5QlZTFkgvjxZ0WTLmLS2wh57wA9/mPdIREQkYJps\nKb2tXw+33qqJliIiMqxUSMTib39LbtSliZYiIjKMVEjEolhMvuqIhIiIDCMVEoGYMmXK4HbQ0QHj\nxsH222czoEEadJ7AxJQnpiygPCGLKQvElycrKiQCMWHChMHtILBbhw86T2BiyhNTFlCekMWUBeLL\nkxWt2oiBO7zxjTBtGsydm/doREQkcFq1IT09/DB0dwd1REJEREYGFRIxKE+0VCEhIiLDTIVEIJYu\nXdr4xh0dsN12sMsu2Q1okAaVJ0Ax5YkpCyhPyGLKAvHlyYoKiUDMnz+/8Y3LEy3NshvQIA0qT4Bi\nyhNTFlCekMWUBeLLkxVNtgxEqVRi9OjRjW28++5w/PFw3nnZDmoQBpUnQDHliSkLKE/IYsoCceXJ\nfbKlmU0zs9Vm9oKZ3WRmh/XT9wdmtsHM1qdfy4/bKvqcXKNPqZGxNauG35xPPQWrVwc3PyKW/9nK\nYsoTUxZQnpDFlAXiy5OVugsJM5sEnAfMAQ4GVgLXmdmYPjY5A2gBdkq/7gw8AVxZ1e/p9PnyY9d6\nxzYirVyZfA2skBARkZGhkSMSM4BL3P0Kd+8ETgNKwNRand39WXd/rPwADge2BRb17uprK/qubWBs\nI09HB2y5Jey9d94jERGREaiuQsLMNgMOBX5fbvNkksX1QOsAdzMVuN7dH6xq38rM7jezLjNbbGbj\n6xlbs5s1a1ZjGxaL8Na3wqabZjugQWo4T6BiyhNTFlCekMWUBeLLk5V6j0iMATYB1lS1ryE5HdEv\nM9sJOBa4rOqpu0gKjAIwOR3
XjWY2ts7xNa1x48Y1tmGxGOSNuhrOE6iY8sSUBZQnZDFlgfjyZKWu\nVRtpIfAQ0OruN1e0zwOOdPd+j0qY2RdITo2Mdfd1/fTbFFgF/MTd5/TRJ6pVGw156SXYaiu46CL4\n7GfzHo2IiDSJPFdtdAPrgR2r2ncEHh3A9lOAK/orIgDS54vAHhvb4cSJEykUCj0era2tLF68uEe/\nJUuWUCgUem0/bdo0Fi5c2KOtvb2dQqFAd3d3j/Y5c+Ywb968Hm1dXV0UCgU6Ozt7tC9YsKDXYbBS\nqUShUOh1UZO2traad5WbNGlS/znuuAPWrYODD27uHBWUQzmUQzmUI9scbW1tr342trS0UCgUmDFj\nRq9tGlX3dSTM7CbgZnc/M/3egC7gInf/Rj/bvYdkbsX+7r5qI68xCrgDuMbdz+qjj45IfP/78KlP\nwbPPwutel/doRESkSeR9HYnzgVPN7F/NbB/gu8Bo0lUYZvZ1M7u8xnafJClAehURZnaOmR1tZruZ\n2cHAj4FxwPcaGF9Tqq5IB6RYTFZrBFhENJQnYDHliSkLKE/IYsoC8eXJSt2FhLtfCZwFnEty+uEA\n4JiK5ZotQI+bPpjZ1sDx9F0YbAdcCtwJXANsRTIPY8T81mbPnl3/RoFOtIQG8wQspjwxZQHlCVlM\nWSC+PFnRJbID0dXVVd+M4A0bYJtt4JxzIMA3d915AhdTnpiygPKELKYsEFeevE9tyBCo+815773w\n3HPBXtEylv/ZymLKE1MWUJ6QxZQF4suTFRUSzaqjI/ka6KkNEREZGVRINKtiEd70Jthhh7xHIiIi\nI5gKiUBUry/eqIAnWkIDeQIXU56YsoDyhCymLBBfnqyokAhEqVTnXdM7OoKdHwEN5AlcTHliygLK\nE7KYskB8ebKiVRvN6NFHYaed4Kqr4MMfzns0IiLSZLRqY6TTREsREQmEColmVCzC1lvDbrvlPRIR\nERnhVEgEovoGL/0qT7Q0G7oBDVJdeZpATHliygLKE7KYskB8ebKiQiIQU6dOHXjnwCdaQp15mkBM\neWLKAsoTspiyQHx5sqJCIhBz584dWMdnn4W//S34QmLAeZpETHliygLKE7KYskB8ebKiVRvN5q9/\nhXe+MzkqceCBeY9GRESakFZtjGTFImy+Oey7b94jERERUSHRdIpF2G+/pJgQERHJmQqJQCxcuHBg\nHZtgoiXUkadJxJQnpiygPCGLKQvElycrKiQC0d4+gFNUr7wCt9/eFIXEgPI0kZjyxJQFlCdkMWWB\n+PJkRZMtm8nKlcn1I/7yl2TCpYiISAM02XKkKl8aW6s1REQkECokmkmxCHvsAa9/fd4jERERAVRI\nNJcmmWgpIiIjhwqJQBQKhf47uDdVIbHRPE0mpjwxZQHlCVlMWSC+PFlRIRGI6dOn999h9Wp4+umm\nuXX4RvM0mZjyxJQFlCdkMWWB+PJkRas2msUvfgEnnACPPAItLXmPRkREmphWbYxExSLsuKOKCBER\nCUpDhYSZTTOz1Wb2gpndZGaH9dP3B2a2wczWp1/Lj9uq+n3UzFal+1xpZsc2MrZoNdH8CBERGTnq\nLiTMbBJwHjAHOBhYCVxnZmP62OQMoAXYKf26M/AEcGXFPo8AfgJcBhwE/ApYbGbj6x1fs1q8eHH/\nHYrFpiokNpqnycSUJ6YsoDwhiykLxJcnK40ckZgBXOLuV7h7J3AaUAKm1urs7s+6+2PlB3A4sC2w\nqKLbGcC17n6+u9/l7l8G2oERM7Olra2t7yfXroWHHmqaiZawkTxNKKY8MWUB5QlZTFkgvjxZqWuy\npZltRlI0nODuV1e0LwK2cffjB7CPq4HN3f0DFW0PAOe5+0UVbXOBD7l7zT/DR9Rky9/9DiZMgLvv\nhj33zHs0IiLS5PKcbDkG2ARYU9W+huS0Rb/MbCfgWJJTGJVaGt3niFAswlZbwVvekvdIREREehju\nVRunAE+SzIHIxMSJEykUCj0era2tvc5lLVmypObFRKZNm9br1rDt7e0UCgW6u7t7tM+ZM4d58+b1\naOvq6qJQKNDZ2dmjfcGCBcyaNatHW6lUolAosHTp0h7tbW1tTJkypdfYJk2alOTo6EjurzFqVHPn\nqKAcyqEcyqEcw5Ojra3t1c/GlpYWCoUCM2bM6LVNo4b11IaZ3Q1c7e5nVbXr1EZ/9t0X3v9+WLAg\n75GIiEgEcju14e6vACuAo8ptZmbp9zf2t62ZvQd4C7CwxtPLKveZOjptHxFqVZQAPP883HVXU020\nhH7yNKmY8sSUBZQnZDFlgfjyZGXTBrY5H1hkZiuA5SSrOEaTrsIws68DY9395KrtPgnc7O6rauzz\nQuAGM5sJXAN8HDgUOLWB8TWlCRMm1H7ittuS+2w00dJP6CdPk4opT0xZQHlCFlMWiC9PVhq6RLaZ\nnQ7MBnYEOoB/c/db0ud+AOzq7u+r6L818DBwhrt/v499ngB8DdgV+Bswy92v62cMI+PUxne+A2ec\nAc89B1tskfdoREQkAlme2mjkiATufjFwcR/P9Tr24+7PAFttZJ9XAVc1Mp6odXTA+PEqIkREJEi6\n10bomuyKliIiMrKokAhE9ZIeANatS+ZINNlES+gjTxOLKU9MWUB5QhZTFogvT1ZUSARi/vz5vRvv\nugtefLEpj0jUzNPEYsoTUxZQnpDFlAXiy5OVhiZbhiC2yZalUonRo0f3bPzRj+Ckk+DJJ2HbbfMZ\nWINq5mliMeWJKQsoT8hiygJx5cnzEtkyRGq+OTs6YLfdmq6IgD7yNLGY8sSUBZQnZDFlgfjyZEWF\nRMg00VJERAKnQiJU7kkh0YQTLUVEZORQIRGI6hu08OCDydyIJj0i0StPk4spT0xZQHlCFlMWiC9P\nVlRIBGLcuHE9G4rF5GuTHpHolafJxZQnpiygPCGLKQvElycrWrURqq98Bb71LXjsMTDLezQiIhIR\nrdoYCcoTLVVEiIhIwFRIhEoTLUVEpAmokAhEZ2fna9888QR0dTXtREuoyhOBmPLElAWUJ2QxZYH4\n8mRFhUQgZs+e/do3HR3J1yY+ItEjTwRiyhNTFlCekMWUBeLLkxVNtgxEV1fXazOCzz8fzjkHnnkG\nNtkk34E1qEeeCMSUJ6YsoDwhiykLxJVHky0j1OPNWSzCAQc0bREB8S2TiilPTFlAeUIWUxaIL09W\nVEiESBMtRUSkSaiQCM0LL0BnZ1NPtBQRkZFDhUQg5s2bl/zH7bfD+vVNf0Ti1TyRiClPTFlAeUIW\nUxaIL09WVEgEolQqJf/R0ZHMjXjrW/Md0CC9micSMeWJKQsoT8hiygLx5cmKVm2E5vTT4c9/To5M\niIiIDAGt2oiZJlqKiEgTUSERkvXr4dZbNdFSRESaRkOFhJlNM7PVZvaCmd1kZodtpP/mZvY1M7vf\nzF40s/vM7JSK5082sw1mtj79usHMRtTJqO7ubvjb36BUiuKIRHd3d95DyFRMeWLKAsoT
spiyQHx5\nslJ3IWFmk4DzgDnAwcBK4DozG9PPZj8H3gtMAfYCPg7cVdXnaaCl4rFrvWNrZlOnTn3t0tgRHJGY\nOnVq3kPIVEx5YsoCyhOymLJAfHmysmkD28wALnH3KwDM7DTgn4CpwPzqzmb2AeBdwO7u/lTa3FVj\nv+7uaxsYTxTmzp0LP/sZjBsH22+f93AGbe7cuXkPIVMx5YkpCyhPyGLKAvHlyUpdRyTMbDPgUOD3\n5TZPln1cD7T2sdkHgVuAs83s72Z2l5l9w8y2rOq3VXrqo8vMFpvZ+HrG1uwOOeSQqCZaRrWShrjy\nxJQFlCdkMWWB+PJkpd5TG2OATYA1Ve1rSE5H1LI7yRGJ/YDjgDOBjwDfruhzF8kRjQIwOR3XjWY2\nts7xNS/35NRGBKc1RERk5Gjk1Ea9RgEbgBPd/TkAM5sJ/NzMTnf3l9z9JuCm8gZmtgxYBXyGZC5G\n/B5+GNaujeaIhIiIjAz1HpHoBtYDO1a17wg82sc2jwAPlYuI1CrAgJ1rbeDu64AisMfGBjRx4kQK\nhUKPR2trK4sXL+7Rb8mSJRQKhV7bT5s2jYULF/Zoa29vp1Ao9JqhO2fOnF6XSO3q6qJQKNDZ2dmj\nfcGCBcyaNatHW6lUolAosHTp0h7tbW1tvOPoo5NvKo5ITJo0qelyTJkyBaDHWJo5R9nChQujyAFw\n2GGHRZGj/PuoHF8z5yhbuHBhFDkAZs6cGUWO8u+jeszNkqOtre3Vz8aWlhYKhQIzZszotU3D3L2u\nB8mRgwsrvjfgQWBWH/1PBZ4DRle0fQh4Bdiij21GkRQb3+xnHIcAvmLFCo/B6Ycf7r7ddu4bNuQ9\nlEycfvrpeQ8hUzHliSmLu/KELKYs7nHlWbFihQMOHOJ11gHVj7ovkW1m/wIsAk4DlpOs4vgIsI+7\nrzWzrwNj3f3ktP/rgDvTAmQusANwGfBHdz8t7XNO+vw9wLbAbJL5Eoe6e89S7bVxxHWJ7BNOgCef\nhD/8Ie+RiIhI5LK8RHbdcyTc/cr0mhHnkpzS6ACO8deWbrYAu1T0f97MjgYWAP8HPA78DDinYrfb\nAZem2z4JrABa+yoiotTRAccdl/coRERE6tLQZEt3vxi4uI/nep2scfe7gWP62d9MYGYjY4nC00/D\nffdpoqWIiDQd3WsjBCtXJl+19FNERJqMCokQFIsURo2CvffOeySZqTVTu5nFlCemLKA8IYspC8SX\nJysqJEJQLDL9LW+BzTbLeySZmT59et5DyFRMeWLKAsoTspiyQHx5slL3qo1QRLVq46CD4PDD4dJL\n8x6JiIiMAFmu2tARiby99BLccYcmWoqISFNSIZG3O++Edes00VJERJqSCom8FYtgxuL77897JJmq\nvsRss4spT0xZQHlCFlMWiC9PVlRI5K1YhL32oi2yN2hbW1veQ8hUTHliygLKE7KYskB8ebKiyZZ5\ne9e7YOedQW9QEREZJppsGYsNG5JLY2uipYiINCkVEnm67z547jlNtBQRkaalQiJPxWLyVUckRESk\nSamQyFOxCGPHwhvfyJQpve511tSUJ1wxZQHlCVlMWSC+PFlRIZGnjo5XT2tMmDAh58FkS3nCFVMW\nUJ6QxZQF4suTFa3ayNNOO8EnPwn/8R95j0REREYQrdqIwaOPJg9NtBQRkSamQiIvHR3JV020FBGR\nJqZCIi/FImy9Ney2GwBLly7NeUDZUp5wxZQFlCdkMWWB+PJkRYVEXsoXohqV/Armz5+f84CypTzh\niikLKE/IYsoC8eXJiiZb5mWvveDYY+HCCwEolUqMHj0650FlR3nCFVMWUJ6QxZQF4sqjyZbN7tln\n4Z57eky0jOXNWaY84YopCyhPyGLKAvHlyYoKiTzceiu4a6KliIg0PRUSeSgWYbPNYPz4vEciIiIy\nKCok8tDRAfvvD5tv/mrTrFmzchxQ9pQnXDFlAeUJWUxZIL48WWmokDCzaWa22sxeMLObzOywjfTf\n3My+Zmb3m9mLZnafmZ1S1eejZrYq3edKMzu2kbE1hWKx12mNcePG5TSYoaE84YopCyhPyGLKAvHl\nyUrdqzbMbBJwOfBpYDkwA/gosJe7d/exza+AHYAvAfcCOwGj3H1Z+vwRwJ+As4FrgMnpfx/s7nf2\nsc/mXLXxyiuw1VbwzW/Cv/1b3qMREZERKO9VGzOAS9z9CnfvBE4DSsDUWp3N7APAu4CJ7v5Hd+9y\n95vLRUTqDOBadz/f3e9y9y8D7cD0BsYXtlWr4OWXNdFSRESiUFchYWabAYcCvy+3eXJI43qgtY/N\nPgjcApxtZn83s7vM7BtmtmVFn9Z0H5Wu62efzatYTL4eeGC+4xAREcnApnX2HwNsAqypal8D7N3H\nNruTHJF4ETgu3cd3gO2BT6Z9WvrYZ8tGR/Tv/w7bbjuAoQfirrtgjz2Sy2NX6OzsZJ999slpUNlT\nnnDFlAWUJ2QxZYH48mRlOFZtjAI2ACe6+y3u/ltgJnCymW0x2J1PvPlmCsuX93i0/uUvLH7kkR79\nljz2GIXly3ttP+2221jY1dWjrf2ppygsX073Sy/1aJ9z113Mu+eeHm1dpRKF5cvpfPbZHu0LVq9m\n1p09p3eU1q2j8PTTLD3++B7tbW1tNe9zP2nSJBYvXtwzx5IlFAqF3jmmTWPhwoU9c7S3UygU6O7u\nOXVlzpw5zJs3r2eOri4KhQKdnZ09cyxY0GumcqlUolAo9LrufFtbG1OmTAFg9uzZUeQomz17dhQ5\nAI466qgocpR/H5XvtWbOUTZ79uwocgCccsopUeQo/z4q32vNlKOtrY1CoUBraystLS0UCgVmzJjR\na5tG1TXZMj21UQJOcPerK9oXAdu4+/E1tlkEHOHue1W07QPcQTJB814zewA4z90vqugzF/iQu9e8\nz3bTTrbsQ1dXV1QzgpUnXDFlAeUJWUxZIK48uU22dPdXgBXAUeU2M7P0+xv72OyvwFgzq7y26N4k\nRyn+nn6/rHKfqaPT9hEhljdnmfKEK6YsoDwhiykLxJcnK42c2jgfONXM/jU9svBdYDSwCMDMvm5m\nl1f0/wnwOPADM9vXzI4E5gML3b187uBC4ANmNtPM9k6PRhwKfKuRUCIiIjI86p1sibtfaWZjgHOB\nHYEO4Bh3X5t2aQF2qej/vJkdDSwA/o+kqPgZcE5Fn2VmdiLwtfTxN5LTGjWvISEiIiJhaGiypbtf\n7O5vdvd/cPdWd7+l4rkp7v6+qv53u/sx7r6Vu+/q7rMrjkaU+1zl7vuk+zzA3a9rLFJzqp6E0+yU\nJ1wxZQHlCVlMWSC+PFnRvTYCUSqV8h5CppQnXDFlAeUJWUxZIL48Wan7EtmhiG3VhoiIyHDJ+xLZ\nIiIiIoAKCRERERkEFRKBqL4KWrNTnnDFlAWUJ2QxZYH48mRFhUQgpk6tefPUpqU84YopCyhPyGLK\nAvHlyYoKiUDMnTs37yFkSnnCFVMWUJ6QxZQ
F4suTFa3aEBERGWG0akNERESCoEJCREREGqZCIhDV\n97RvdsoTrpiygPKELKYsEF+erKiQCER7+6BOUQVHecIVUxZQnpDFlAXiy5MVTbYUEREZYTTZUkRE\nRIKgQkJEREQapkJCREREGqZCIhCFQiHvIWRKecIVUxZQnpDFlAXiy5MVFRKBmD59et5DyJTyhCum\nLKA8IYspC8SXJytatSEiIjLCaNWGiIiIBEGFhIiIiDRMhUQgFi9enPcQMqU84YopCyhPyGLKAvHl\nyYoKiUDMmzcv7yFkSnnCFVMWUJ6QxZQF4suTlYYKCTObZmarzewFM7vJzA7rp++7zWxD1WO9mb2x\nos/JFe3lPqVGxtasdthhh7yHkCnlCVdMWUB5QhZTFogvT1Y2rXcDM5sEnAd8GlgOzACuM7O93L27\nj80c2At49tUG98eq+jyd9rGKbURERCRgjRyRmAFc4u5XuHsncBpQAqZuZLu17v5Y+VHjeXf3yj5r\nGxibiIiIDKO6Cgkz2ww4FPh9uc2TC1FcD7T2tynQYWYPm9kSMzuiRp+tzOx+M+sys8VmNr6esYmI\niMjwq/fUxhhgE2BNVfsaYO8+tnkE+AxwC7AFcCpwg5kd7u4daZ+7SI5o3ApsA8wCbjSz8e7+cB/7\n3RJg1apVdUYI0/Lly6O6173yhCumLKA8IYspC8SVp+Kzc8vB7quuK1ua2U7AQ0Cru99c0T4PONLd\n+zsqUbmfG4AH3P3kPp7fFFgF/MTd5/TR50TgxwMevIiIiFSb7O4/GcwO6j0i0Q2sB3asat8ReLSO\n/SwH3tHXk+6+zsyKwB797OM6YDJwP/BiHa8tIiIy0m0JvJnks3RQ6iok3P0VM1sBHAVcDWBmln5/\nUR27OojklEdNZjYKeCtwTT9jeRwYVBUlIiIygt2YxU7qXv4JnA8sSguK8vLP0cAiADP7OjC2fNrC\nzM4EVgN3kFRApwLvBY4u79DMzgFuAu4BtgVmA+OA7zUSSkRERIZH3YWEu19pZmOAc0lOaXQAx1Qs\n12wBdqnYZHOS606MJVkmeitwlLv/uaLPdsCl6bZPAitI5mF01js+ERERGT5NextxERERyZ/utSEi\nIiINUyEhIiIiDWu6QsLM3mVmV5vZQ+nNvQp5j6lRZvYFM1tuZs+Y2Roz+6WZ7ZX3uBplZqeZ2Uoz\ne9nsrYQAAAYYSURBVDp93GhmH8h7XFkws8+n77fz8x5LI8xsTo2b592Z97gaZWZjzeyHZtZtZqX0\nfXdI3uNqRHoDxOrfzQYzW5D32BphZqPM7Ktmdl/6u7nHzP5f3uNqlJltZWb/nV55uWRmS83sbXmP\nayAG8nlpZuemV50umdnvzKy/yy7U1HSFBPA6kgmep9P8N/Z6F7AA+Efg/cBmwBIz+4dcR9W4B4Gz\ngUNILqX+B+BXZrZvrqMapPTutp8GVuY9lkG6nWSCdEv6eGe+w2mMmW0L/BV4CTgG2Bf4HMlE7Wb0\nNl77nbSQrGhz4Mo8BzUInye5mvHpwD4kq/Bmm9n0XEfVuIUklziYDOwP/A64Pr1AY+j6/bw0s7OB\n6ST/vh0OPE9yE87N63mRpp5saWYbgOPc/eq8x5KFdDXMYyRXCV2a93iyYGaPA2e5+w/yHksjzGwr\nklVEnwXOAYruPjPfUdXPzOYAH3L3pvyrvZKZ/RfJqq535z2WoWBm/w1MdPemPDppZr8GHnX3Uyva\n/gcoufu/5jey+pnZliR3rf6gu/+2ov0W4Dfu/uXcBlenWp+XZvYw8A13vyD9fmuSW16c7O4DLmSb\n8YhEzLYlqRqfyHsgg5Ue3vwYyTVGluU9nkH4NvBrd/9D3gPJwJ7pIc57zexHZrbLxjcJ0geBW8zs\nyvSUYLuZfSrvQWUhvTHiZJK/gpvVjcBRZrYngJkdSHIl49/kOqrGbEpyf6mXqtpfoEmP6JWZ2W4k\nR8Aqb8L5DHAz/d+Es5dGLkglQyC9Quh/A0vdvZnPXe9PUjiUK/njm/V6IGkhdBDJoedmdxNwCskN\n8nYC5gJ/NrP93f35HMfViN1JjhCdB3yN5JDsRWb2krv/MNeRDd7xJDcuvDzvgQzCfwFbA51mtp7k\nD9YvuftP8x1W/dz9OTNbBpxjZp0kf62fSPJB+7dcBzd4LSR/uNa6CWdLPTtSIRGOi4Hx9HMPkibR\nCRxI8o/hR4ArzOzIZismzGxnksLu/e7+St7jGSx3r7ye/u1mthx4APgXoNlOO40Clrv7Oen3K9MC\n9jSg2QuJqcC17l7PvYtCM4nkw/ZjwJ0kxfiFZvZwkxZ6nwC+T3LDynVAO8ntGQ7Nc1Ah0amNAJjZ\nt4CJwHvcvc97kDQDd1/n7ve5e9Hdv0QyQfHMvMfVgEOBHYB2M3vFzF4B3g2caWYvp0eQmpa7Pw3c\nTf83xgvVIyR3B660iuSy+k3LzMaRTLq+LO+xDNJ84L/c/efufoe7/xi4APhCzuNqiLuvdvf3kkxc\n3MXd305yxeb78h3ZoP3/9u7eNaogCsP4c0QQDCmsrZRIyohNSlExhYVWFjYiwX9CsBHBRvCjsRKx\nENMqKUQRu3TWgihEYhAtFCziFmLG4tyVGESzs5HJhecHt9lm32E/5uzs2T0fgWD8IZwWEq11RcQZ\n4FgpZaV1nv9gF7CndYgKz8nBcYfJE5YZ4CXwAJgpfe5S5lcT6RR/GZ63gy0B05tumyZPWPpsnjxW\n7mMvwUZ7ySnRG63T8/2mlDIopXyKiH3kr4Uetc40jlLKMlkwnBje1jVbzjLiMK/efbURERPkG+Dw\nE+HBrpnnSynlfbtko4uIO8A54DSwFhHDyvBrKaV3o9Ej4hrwBFgBJsmmsaPAXMtcNbq+gd96VSJi\nDfhcStn8aXjHi4jrwCK52e4HrgDfgYWWuSrdBJYi4hL5E8lZ4CI5ELCXuhOuC8D9Usp64zjjWgQu\nR8QqOazxCDncsZdDGCNijtxvXgOHyBOXV3SDKneyLeyXt8jH6i3wDrgKrAKPR7qjUkqvLnJjWicr\n3o3XvdbZKtbyp3X8AM63zla5nrvkcd+ArHSfAcdb59rG9b0AbrTOUZl9oXuDGJCF3kPgQOtcY6zn\nFDkA8Bu5Wc23zjTmek52r/2p1lm2YS0T5JToZfJ/Cd6Qhevu1tkq13OWnEw9IPskbgOTrXNtMfs/\n90uy8fpD91p6WvMc7PX/SEiSpLZ6/Z2VJElqy0JCkiRVs5CQJEnVLCQkSVI1CwlJklTNQkKSJFWz\nkJAkSdUsJCRJUjULCUmSVM1CQpIkVbOQkCRJ1X4Ct8kK9nGnsR0AAAAASUVORK5CYII=\n", 331 | "text/plain": [ 332 | "" 333 | ] 334 | }, 335 | "metadata": {}, 336 | "output_type": "display_data" 337 | } 338 | ], 339 | "source": [ 340 | "from 
matplotlib import pyplot as plt \n", 341 | "%matplotlib inline \n", 342 | "#模型训练 \n", 343 | "depth_test=np.linspace(1,10,11) \n", 344 | "accurate=[] \n", 345 | "for depth in depth_test: \n", 346 | " test_model=DecisionTreeClassifier(max_depth=depth) \n", 347 | " test_model=test_model.fit(x_train,y_train) \n", 348 | " y_test_hat=test_model.predict(x_test) \n", 349 | " res=y_test==y_test_hat \n", 350 | " acc=np.mean(res) \n", 351 | " accurate.append(acc) \n", 352 | " print '正确率是%.2f%%'%(acc*100) \n", 353 | "plt.plot(depth_test,accurate,'r-') \n", 354 | "plt.grid() \n", 355 | "plt.title('Accuracy by tree_depth') \n", 356 | "plt.show() " 357 | ] 358 | }, 359 | { 360 | "cell_type": "markdown", 361 | "metadata": {}, 362 | "source": [ 363 | "### 决策树的可解释性\n", 364 | "\n", 365 | "本文一开始提到决策树的一个优点是其可解释性。\n", 366 | "\n", 367 | "接下来通过一些代码来演示其可解释性,代码来自sklearn官网。\n", 368 | "\n", 369 | "1.**用graphviz可视化决策树**:\n", 370 | "\n", 371 | "```python\n", 372 | "import graphviz \n", 373 | "import sklearn.tree as tree\n", 374 | "# sklearn 支持将决策树模型导出成可视化的graphviz\n", 375 | "dot_data = tree.export_graphviz(model, out_file=None) \n", 376 | "graph = graphviz.Source(dot_data) \n", 377 | "graph.render(\"iris\") \n", 378 | "graph\n", 379 | "```\n", 380 | "![](https://raw.githubusercontent.com/applenob/machine_learning_basic/master/res/tree_1.png)\n", 381 | "```python\n", 382 | "dot_data = tree.export_graphviz(model, out_file=None, \n", 383 | " feature_names=iris.feature_names, \n", 384 | " class_names=iris.target_names, \n", 385 | " filled=True, rounded=True, \n", 386 | " special_characters=True) \n", 387 | "graph = graphviz.Source(dot_data) \n", 388 | "graph \n", 389 | "```\n", 390 | "![](https://raw.githubusercontent.com/applenob/machine_learning_basic/master/res/tree_2.png)" 391 | ] 392 | }, 393 | { 394 | "cell_type": "markdown", 395 | "metadata": {}, 396 | "source": [ 397 | "2.**手动输出决策树信息**\n", 398 | "\n", 399 | "sklearn中决策树模型的信息保存在`xxx.tree_`中:\n", 400 | "\n", 401 | "```\n", 402 | "Attributes\n", 403 | "----------\n", 404 | "node_count : int\n", 405 | " The number of nodes (internal nodes + leaves) in the tree.\n", 406 | "\n", 407 | "capacity : int\n", 408 | " The current capacity (i.e., size) of the arrays, which is at least as\n", 409 | " great as `node_count`.\n", 410 | "\n", 411 | "max_depth : int\n", 412 | " The maximal depth of the tree.\n", 413 | "\n", 414 | "children_left : array of int, shape [node_count]\n", 415 | " children_left[i] holds the node id of the left child of node i.\n", 416 | " For leaves, children_left[i] == TREE_LEAF. Otherwise,\n", 417 | " children_left[i] > i. This child handles the case where\n", 418 | " X[:, feature[i]] <= threshold[i].\n", 419 | "\n", 420 | "children_right : array of int, shape [node_count]\n", 421 | " children_right[i] holds the node id of the right child of node i.\n", 422 | " For leaves, children_right[i] == TREE_LEAF. Otherwise,\n", 423 | " children_right[i] > i. 
This child handles the case where\n", 424 | " X[:, feature[i]] > threshold[i].\n", 425 | "\n", 426 | "feature : array of int, shape [node_count]\n", 427 | " feature[i] holds the feature to split on, for the internal node i.\n", 428 | "\n", 429 | "threshold : array of double, shape [node_count]\n", 430 | " threshold[i] holds the threshold for the internal node i.\n", 431 | "\n", 432 | "value : array of double, shape [node_count, n_outputs, max_n_classes]\n", 433 | " Contains the constant prediction value of each node.\n", 434 | "\n", 435 | "impurity : array of double, shape [node_count]\n", 436 | " impurity[i] holds the impurity (i.e., the value of the splitting\n", 437 | " criterion) at node i.\n", 438 | "\n", 439 | "n_node_samples : array of int, shape [node_count]\n", 440 | " n_node_samples[i] holds the number of training samples reaching node i.\n", 441 | "\n", 442 | "weighted_n_node_samples : array of int, shape [node_count]\n", 443 | " weighted_n_node_samples[i] holds the weighted number of training samples\n", 444 | " reaching node i.\n", 445 | "```" 446 | ] 447 | }, 448 | { 449 | "cell_type": "code", 450 | "execution_count": 9, 451 | "metadata": {}, 452 | "outputs": [ 453 | { 454 | "name": "stdout", 455 | "output_type": "stream", 456 | "text": [ 457 | "n_nodes: 13\n", 458 | "children_left: [ 1 2 3 -1 -1 -1 7 8 -1 -1 11 -1 -1]\n", 459 | "children_right: [ 6 5 4 -1 -1 -1 10 9 -1 -1 12 -1 -1]\n", 460 | "feature: [ 0 1 0 -2 -2 -2 0 1 -2 -2 1 -2 -2]\n", 461 | "threshold: [ 5.44999981 2.80000019 4.69999981 -2. -2. -2. 6.25\n", 462 | " 3.45000005 -2. -2. 2.54999995 -2. -2. ]\n" 463 | ] 464 | } 465 | ], 466 | "source": [ 467 | "# The decision estimator has an attribute called tree_ which stores the entire\n", 468 | "# tree structure and allows access to low level attributes. The binary tree\n", 469 | "# tree_ is represented as a number of parallel arrays. The i-th element of each\n", 470 | "# array holds information about the node `i`. Node 0 is the tree's root. NOTE:\n", 471 | "# Some of the arrays only apply to either leaves or split nodes, resp. 
In this\n", 472 | "# case the values of nodes of the other type are arbitrary!\n", 473 | "#\n", 474 | "# Among those arrays, we have:\n", 475 | "# - left_child, id of the left child of the node\n", 476 | "# - right_child, id of the right child of the node\n", 477 | "# - feature, feature used for splitting the node\n", 478 | "# - threshold, threshold value at the node\n", 479 | "#\n", 480 | "\n", 481 | "# Using those arrays, we can parse the tree structure:\n", 482 | "\n", 483 | "n_nodes = model.tree_.node_count\n", 484 | "children_left = model.tree_.children_left\n", 485 | "children_right = model.tree_.children_right\n", 486 | "feature = model.tree_.feature\n", 487 | "threshold = model.tree_.threshold\n", 488 | "\n", 489 | "print \"n_nodes: \", n_nodes\n", 490 | "print \"children_left: \", children_left\n", 491 | "print \"children_right: \", children_right\n", 492 | "print \"feature: \", feature\n", 493 | "print \"threshold: \", threshold" 494 | ] 495 | }, 496 | { 497 | "cell_type": "markdown", 498 | "metadata": {}, 499 | "source": [ 500 | "注意到上面出现了-1和-2这些让人觉得奇怪的值,解释一下:\n", 501 | "```python\n", 502 | "TREE_LEAF = -1\n", 503 | "TREE_UNDEFINED = -2\n", 504 | "```" 505 | ] 506 | }, 507 | { 508 | "cell_type": "code", 509 | "execution_count": 10, 510 | "metadata": {}, 511 | "outputs": [ 512 | { 513 | "name": "stdout", 514 | "output_type": "stream", 515 | "text": [ 516 | "The binary tree structure has 13 nodes and has the following tree structure:\n", 517 | "node=0 test node: go to node 1 if X[:, 0] <= 5.44999980927 else to node 6.\n", 518 | "\tnode=1 test node: go to node 2 if X[:, 1] <= 2.80000019073 else to node 5.\n", 519 | "\t\tnode=2 test node: go to node 3 if X[:, 0] <= 4.69999980927 else to node 4.\n", 520 | "\t\t\tnode=3 leaf node.\n", 521 | "\t\t\tnode=4 leaf node.\n", 522 | "\t\tnode=5 leaf node.\n", 523 | "\tnode=6 test node: go to node 7 if X[:, 0] <= 6.25 else to node 10.\n", 524 | "\t\tnode=7 test node: go to node 8 if X[:, 1] <= 3.45000004768 else to node 9.\n", 525 | "\t\t\tnode=8 leaf node.\n", 526 | "\t\t\tnode=9 leaf node.\n", 527 | "\t\tnode=10 test node: go to node 11 if X[:, 1] <= 2.54999995232 else to node 12.\n", 528 | "\t\t\tnode=11 leaf node.\n", 529 | "\t\t\tnode=12 leaf node.\n" 530 | ] 531 | } 532 | ], 533 | "source": [ 534 | "# 遍历树,获取每个结点的深度和每个结点是否是叶结点\n", 535 | "# The tree structure can be traversed to compute various properties such\n", 536 | "# as the depth of each node and whether or not it is a leaf.\n", 537 | "node_depth = np.zeros(shape=n_nodes, dtype=np.int64)\n", 538 | "is_leaves = np.zeros(shape=n_nodes, dtype=bool)\n", 539 | "stack = [(0, -1)] # seed is the root node id and its parent depth\n", 540 | "while len(stack) > 0:\n", 541 | " node_id, parent_depth = stack.pop()\n", 542 | " node_depth[node_id] = parent_depth + 1\n", 543 | "\n", 544 | " # If we have a test node\n", 545 | " if (children_left[node_id] != children_right[node_id]):\n", 546 | " stack.append((children_left[node_id], parent_depth + 1))\n", 547 | " stack.append((children_right[node_id], parent_depth + 1))\n", 548 | " else:\n", 549 | " is_leaves[node_id] = True\n", 550 | "\n", 551 | "print(\"The binary tree structure has %s nodes and has \"\n", 552 | " \"the following tree structure:\"\n", 553 | " % n_nodes)\n", 554 | "for i in range(n_nodes):\n", 555 | " if is_leaves[i]:\n", 556 | " print(\"%snode=%s leaf node.\" % (node_depth[i] * \"\\t\", i))\n", 557 | " else:\n", 558 | " print(\"%snode=%s test node: go to node %s if X[:, %s] <= %s else to \"\n", 559 | " \"node %s.\"\n", 560 | " % 
(node_depth[i] * \"\\t\",\n", 561 | " i,\n", 562 | " children_left[i],\n", 563 | " feature[i],\n", 564 | " threshold[i],\n", 565 | " children_right[i],\n", 566 | " ))" 567 | ] 568 | }, 569 | { 570 | "cell_type": "code", 571 | "execution_count": 8, 572 | "metadata": {}, 573 | "outputs": [ 574 | { 575 | "name": "stdout", 576 | "output_type": "stream", 577 | "text": [ 578 | "Rules used to predict sample 0: \n", 579 | "decision id node 9 : (X_test[0, -2] (= 5.8) > -2.0)\n", 580 | "\n", 581 | "The following samples [0, 1] share the node [0] in the tree\n", 582 | "It is 7 % of all nodes.\n" 583 | ] 584 | } 585 | ], 586 | "source": [ 587 | "# First let's retrieve the decision path of each sample. The decision_path\n", 588 | "# method allows to retrieve the node indicator functions. A non zero element of\n", 589 | "# indicator matrix at the position (i, j) indicates that the sample i goes\n", 590 | "# through the node j.\n", 591 | "\n", 592 | "node_indicator = model.decision_path(x_test)\n", 593 | "\n", 594 | "# Similarly, we can also have the leaves ids reached by each sample.\n", 595 | "\n", 596 | "leave_id = model.apply(x_test)\n", 597 | "\n", 598 | "# Now, it's possible to get the tests that were used to predict a sample or\n", 599 | "# a group of samples. First, let's make it for the sample.\n", 600 | "\n", 601 | "sample_id = 0\n", 602 | "node_index = node_indicator.indices[node_indicator.indptr[sample_id]:\n", 603 | " node_indicator.indptr[sample_id + 1]]\n", 604 | "\n", 605 | "print('Rules used to predict sample %s: ' % sample_id)\n", 606 | "for node_id in node_index:\n", 607 | " if leave_id[sample_id] != node_id:\n", 608 | " continue\n", 609 | "\n", 610 | " if (x_test[sample_id, feature[node_id]] <= threshold[node_id]):\n", 611 | " threshold_sign = \"<=\"\n", 612 | " else:\n", 613 | " threshold_sign = \">\"\n", 614 | "\n", 615 | " print(\"decision id node %s : (X_test[%s, %s] (= %s) %s %s)\"\n", 616 | " % (node_id,\n", 617 | " sample_id,\n", 618 | " feature[node_id],\n", 619 | " x_test[sample_id, feature[node_id]],\n", 620 | " threshold_sign,\n", 621 | " threshold[node_id]))\n", 622 | "\n", 623 | "# For a group of samples, we have the following common node.\n", 624 | "sample_ids = [0, 1]\n", 625 | "common_nodes = (node_indicator.toarray()[sample_ids].sum(axis=0) ==\n", 626 | " len(sample_ids))\n", 627 | "\n", 628 | "common_node_id = np.arange(n_nodes)[common_nodes]\n", 629 | "\n", 630 | "print(\"\\nThe following samples %s share the node %s in the tree\"\n", 631 | " % (sample_ids, common_node_id))\n", 632 | "print(\"It is %s %% of all nodes.\" % (100 * len(common_node_id) / n_nodes,))" 633 | ] 634 | }, 635 | { 636 | "cell_type": "markdown", 637 | "metadata": { 638 | "collapsed": true 639 | }, 640 | "source": [ 641 | "## 参考链接\n", 642 | " \n", 643 | "- [数据挖掘面试题之决策树必知必会](http://www.jianshu.com/p/fb97b21aeb1d)\n", 644 | "- [机器学习笔记(五)决策树算法及实践](http://blog.csdn.net/sinat_22594309/article/details/59090895)" 645 | ] 646 | } 647 | ], 648 | "metadata": { 649 | "anaconda-cloud": {}, 650 | "kernelspec": { 651 | "display_name": "Python [default]", 652 | "language": "python", 653 | "name": "python2" 654 | }, 655 | "language_info": { 656 | "codemirror_mode": { 657 | "name": "ipython", 658 | "version": 2 659 | }, 660 | "file_extension": ".py", 661 | "mimetype": "text/x-python", 662 | "name": "python", 663 | "nbconvert_exporter": "python", 664 | "pygments_lexer": "ipython2", 665 | "version": "2.7.14" 666 | } 667 | }, 668 | "nbformat": 4, 669 | "nbformat_minor": 1 670 | } 671 | 
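作为对上面「回归CART生成」一节的补充,最优切分点的搜索本身也只需几行代码。下面是一个最小的 NumPy 示意(一维特征、数据为虚构,只演示平方误差准则,并非完整的 CART 实现):

```python
import numpy as np

def best_split(x, y):
    # 线性扫描所有候选切分点 s,最小化两侧的平方误差之和
    best_s, best_loss = None, np.inf
    for s in np.unique(x)[:-1]:
        left, right = y[x <= s], y[x > s]
        # c1、c2 的最优值分别取两侧样本 y 的均值
        loss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if loss < best_loss:
            best_s, best_loss = s, loss
    return best_s, best_loss

x = np.array([1., 2., 3., 4., 5., 6.])
y = np.array([1.1, 0.9, 1.0, 3.0, 3.1, 2.9])
print(best_split(x, y))  # 预期在 x=3 处切分
```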
-------------------------------------------------------------------------------- /11_Tree_Ensemble.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# 决策树模型的各种Ensemble\n", 8 | "\n", 9 | "## 目录\n", 10 | "\n", 11 | "- [概览](#概览)\n", 12 | "- [Bagging](#Bagging)\n", 13 | " - [基础Bagging](#基础Bagging)\n", 14 | " - [Bagging实践](#Bagging实践)\n", 15 | " - [随机森林](#随机森林)\n", 16 | " - [随机森林实践](#随机森林实践)\n", 17 | "- [Boosting](#Boosting)\n", 18 | " - [AdaBoost](#AdaBoost)\n", 19 | " - [GBDT](#GBDT)\n", 20 | " - [XGBoost](#XGBoost)\n", 21 | " - [Boosting实践](#Boosting实践)\n", 22 | " - [xgboost实践](#xgboost实践)\n", 23 | "- [参考链接](#参考链接)" 24 | ] 25 | }, 26 | { 27 | "cell_type": "markdown", 28 | "metadata": {}, 29 | "source": [ 30 | "## 概览\n", 31 | "\n", 32 | "Ensemble的方法主要有两大类:**Bagging**和**Boosting**。\n", 33 | "\n", 34 | "Boosting主要关注**降低偏差**,因此Boosting能基于泛化性能相当弱的学习器构建出很强的集成;\n", 35 | "\n", 36 | "Bagging主要关注**降低方差**,因此它在不剪枝的决策树、神经网络等学习器上效用更为明显。\n", 37 | "\n", 38 | "Boosting的个体学习器之间存在强依赖关系,必须**串行**生成;\n", 39 | "\n", 40 | "Bagging的个体学习器之间不存在强依赖关系,可以同时生成,即**并行化**。\n", 41 | "\n", 42 | "## Bagging\n", 43 | "\n", 44 | "### 基础Bagging\n", 45 | "\n", 46 | "先讲Bagging,Bagging是Bootstrap aggregation的缩写。\n", 47 | "\n", 48 | "所谓的**Bootstrap**是**有放回抽样**,而这里的抽样指的是对数据样本的抽样。\n", 49 | "\n", 50 | "如果对一个有$n$个样本的数据集$D$做$n$次有放回抽样,那么当$n$足够大的时候,抽样出来的样本个数(去掉重复后)和原数据集大小的比例约是:$(1 - 1/e) (≈63.2\\%) $\n", 51 | "\n", 52 | "**证明**:\n", 53 | "\n", 54 | "某个样本没有被抽中的概率是:$p_{not} = (1-\\frac{1}{n})^n$\n", 55 | "\n", 56 | "$\\frac{1}{p_{not}} = (\\frac{n}{n-1})^{n} = (1+\\frac{1}{n-1})^{n-1}(1+\\frac{1}{n-1})$\n", 57 | "\n", 58 | "当n很大时,上式趋于e(根据常用极限:$lim_{x\\rightarrow∞}(1+\\frac{1}{x})^x=e$)。\n", 59 | "\n", 60 | "因此,$p_{not} = (1-\\frac{1}{n})^n ≈ \\frac{1}{e}$。 \n", 61 | "\n", 62 | "回到Bagging,**Bagging的基本做法**:\n", 63 | "\n", 64 | "- 1 从样本**有放回地**抽取n个样本;\n", 65 | "- 2 在所有的属性上,对这n个样本建立分类器;\n", 66 | "- 3 重复上述过程m次,得到m个分类器;\n", 67 | "- 4 将数据放在这m个分类器上分类,最终结果由所有分类器结果投票决定。\n", 68 | "\n", 69 | "### Bagging实践\n", 70 | "\n", 71 | " `sklearn.ensemble.BaggingClassifier`提供了Bagging的模型。" 72 | ] 73 | }, 74 | { 75 | "cell_type": "code", 76 | "execution_count": 2, 77 | "metadata": {}, 78 | "outputs": [ 79 | { 80 | "name": "stdout", 81 | "output_type": "stream", 82 | "text": [ 83 | "决策树测试集正确率75.56%\n", 84 | "Bagging测试集正确率77.78%\n" 85 | ] 86 | } 87 | ], 88 | "source": [ 89 | "import numpy as np \n", 90 | "from sklearn.tree import DecisionTreeClassifier \n", 91 | "from sklearn.model_selection import train_test_split \n", 92 | "from sklearn.ensemble import BaggingClassifier \n", 93 | "from sklearn import datasets \n", 94 | " \n", 95 | "#读取数据,划分训练集和测试集 \n", 96 | "iris=datasets.load_iris() \n", 97 | "x=iris.data[: , :2] \n", 98 | "y=iris.target \n", 99 | "x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=0.7, random_state=1) \n", 100 | " \n", 101 | "#模型训练 \n", 102 | "# sklearn 自带的决策树分类器\n", 103 | "model1=DecisionTreeClassifier(max_depth=3) \n", 104 | "# sklearn自带的bagging分类器\n", 105 | "model2=BaggingClassifier(model1,n_estimators=100,max_samples=0.3) \n", 106 | "model1.fit(x_train,y_train) \n", 107 | "model2.fit(x_train,y_train) \n", 108 | "model1_pre=model1.predict(x_test) \n", 109 | "model2_pre=model2.predict(x_test) \n", 110 | "res1=model1_pre==y_test \n", 111 | "res2=model2_pre==y_test \n", 112 | "print '决策树测试集正确率%.2f%%'%np.mean(res1*100) \n", 113 | "print 'Bagging测试集正确率%.2f%%'%np.mean(res2*100) " 114 | ] 115 | }, 
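顺带一提,前面证明的 $1-1/e≈63.2\%$ 这个比例,也可以用简单的蒙特卡洛模拟来验证(示意代码,n 与试验次数均为任取的假设值):

```python
import numpy as np

n, trials = 10000, 100
ratios = []
for _ in range(trials):
    sample = np.random.randint(0, n, size=n)          # 有放回地抽取 n 个样本
    ratios.append(len(np.unique(sample)) / float(n))  # 不重复样本所占比例
print(np.mean(ratios))  # 接近 1 - 1/e ≈ 0.632
```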
116 | { 117 | "cell_type": "markdown", 118 | "metadata": {}, 119 | "source": [ 120 | "### 随机森林\n", 121 | "\n", 122 | "随机森林(Random Forest, 简称RF)是Bagging的一个扩展变体。\n", 123 | "\n", 124 | "RF在Bagging的基础上,加入了**随机属性选择**,即,**对特征进行无放回抽样**,并且使用CART树。\n", 125 | "\n", 126 | "**随机森林的基本做法**:\n", 127 | "\n", 128 | "- 1 首先在样本集中有放回地抽样n个样本;\n", 129 | "- 2 在所有的属性当中再随机选择K个属性;\n", 130 | "- 3 根据这n个样本的K个属性,建立CART树;\n", 131 | "- 4 重复以上过程m次,得到了m棵CART树;\n", 132 | "- 5 利用这m棵树对样本进行预测并投票。\n", 133 | "\n", 134 | "### 随机森林实践\n", 135 | "\n", 136 | " `sklearn.ensemble.RandomForestClassifier`提供了随机森林的模型。" 137 | ] 138 | }, 139 | { 140 | "cell_type": "code", 141 | "execution_count": 3, 142 | "metadata": {}, 143 | "outputs": [ 144 | { 145 | "name": "stdout", 146 | "output_type": "stream", 147 | "text": [ 148 | "决策树训练集正确率83.81%\n", 149 | "随机森林训练集正确率85.71%\n" 150 | ] 151 | } 152 | ], 153 | "source": [ 154 | "import numpy as np \n", 155 | "from sklearn.tree import DecisionTreeClassifier \n", 156 | "from sklearn.model_selection import train_test_split \n", 157 | "from sklearn.ensemble import RandomForestClassifier \n", 158 | "from sklearn import datasets \n", 159 | " \n", 160 | "#读取数据,划分训练集和测试集 \n", 161 | "iris=datasets.load_iris() \n", 162 | "x=iris.data[:,:2] \n", 163 | "y=iris.target \n", 164 | "x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=0.7, random_state=1) \n", 165 | " \n", 166 | "#模型训练 \n", 167 | "model1=DecisionTreeClassifier(max_depth=3) \n", 168 | "model2=RandomForestClassifier(n_estimators=200, criterion='entropy', max_depth=3) \n", 169 | "model1.fit(x_train,y_train) \n", 170 | "model2.fit(x_train,y_train) \n", 171 | "model1_pre=model1.predict(x_train) \n", 172 | "model2_pre=model2.predict(x_train) \n", 173 | "res1=model1_pre==y_train \n", 174 | "res2=model2_pre==y_train \n", 175 | "print '决策树训练集正确率%.2f%%'%np.mean(res1*100) \n", 176 | "print '随机森林训练集正确率%.2f%%'%np.mean(res2*100) " 177 | ] 178 | }, 179 | { 180 | "cell_type": "markdown", 181 | "metadata": {}, 182 | "source": [ 183 | "## Boosting\n", 184 | "\n", 185 | "Boosting(提升)通过**给样本设置不同的权值**,每轮迭代调整权值。\n", 186 | "\n", 187 | "不同的提升算法之间的差别,一般是:\n", 188 | "\n", 189 | "(1)如何更新**样本的权值**;\n", 190 | "\n", 191 | "(2)如何组合每个分类器的预测,即,调整**分类器的权值**。\n", 192 | "\n", 193 | "在AdaBoost中,样本权值的更新方式是增加那些被错误分类的样本的权值,而分类器$C_i$的重要性依赖于它的错误率。\n" 194 | ] 195 | }, 196 | { 197 | "cell_type": "markdown", 198 | "metadata": { 199 | "collapsed": true 200 | }, 201 | "source": [ 202 | "### AdaBoost\n", 203 | "\n", 204 | "直接看算法:\n", 205 | "\n", 206 | "- 1.初始化训练数据的权值分布:$D_1 = (w_{11}, ..., w_{1i}, ..., w_{1N})$,$w_{1,i}=\\frac{1}{N}$。(N是数据个数)\n", 207 | "- 对$m=1, ..., M$(M是弱分类器的个数):\n", 208 | " - 2.使用具有权值分布$D_m$的训练数据学习,得到弱分类器:$G_m(x)$\n", 209 | " - 3.计算$G_m(x)$在训练集上的分类错误率:$e_m = \\sum_{i=1}^Nw_{mi}I(G_m(x_i)≠y_i)$\n", 210 | " - 4.计算$G_m(x)$的系数:$α_m=\\frac{1}{2}log\\frac{1-e_m}{e_m}$(**分类器错误率越大,权重越小**)\n", 211 | " - 5.更新训练集权重:$w_{m+1, i} = \\frac{w_{mi}}{Z_m}exp(-α_my_iG_m(x_i))$,其中$Z_m$是规范化因子(**$x_i$分类错误则提高权重**)\n", 212 | "- 6.构建弱分类器的线性组合:$f(x) = \\sum_{m=1}^Mα_mG_m(x)$" 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "### GBDT\n", 220 | "\n", 221 | "**GBDT(Gradient Boosting Decision Tree),梯度提升算法**又叫 **MART(Multiple Additive Regression Tree)**。\n", 222 | "\n", 223 | "从后面的名字可以看出,这是一种**回归树**的模型。它使用的基础分类器也是CART。\n", 224 | "\n", 225 | "着重分析GBDT中的Boosting,即**Additive Training**。\n", 226 | "\n", 227 | "这里的方法是:第一轮去拟合一个大致上和目标差不多的值,然后计算残差,下一轮的拟合目标就是这个残差,即:\n", 228 | "\n", 229 | "- 1.用训练集训练一个弱分类器,去拟合目标:$f_1(x) ≈ y$\n", 230 | "- 2.用残差训练一个弱分类器:$h_1(x) ≈ y - f_1(x)$\n", 
"- 3.获得新模型:$f_2(x) = f_1(x) + h_1(x)$\n", 232 | "\n", 233 | "上一轮迭代得到的强学习器是$f_{t-1}(x)$,损失函数是$L(y, f_{t-1}(x))$。\n", 234 | "\n", 235 | "我们本轮迭代的目标是:找到一个CART回归树模型的弱学习器$h_t(x)$,让本轮的损失损失$L(y, f_{t}(x)) =L(y, f_{t-1}(x)+ h_t(x))$最小。\n", 236 | "\n", 237 | "- 第t轮的第i个样本,损失函数的**负梯度**表示:$r_{ti} = -\\bigg[\\frac{\\partial L(y, f(x_i)))}{\\partial f(x_i)}\\bigg]_{f(x) = f_{t-1}\\;\\; (x)}$,负梯度可以用来估计残差(注:为什么要使用负梯度而不直接使用残差是为了使用不同Loss function时,使用负梯度更容易优化)。\n", 238 | "- 再用$(x_i, r_{ti})$去拟合下一棵CART回归树:\n", 239 | " - 对$j=1,2,...,J$,计算:$c_{tj} = \\underbrace{arg\\; min}_{c}\\sum\\limits_{x_i \\in R_{tj}} L(y_i,f_{t-1}(x_i) +c)$\n", 240 | " - 得到本轮的拟合函数:$h_t(x) = \\sum\\limits_{j=1}^{J}c_{tj}I(x \\in R_{tj})$\n", 241 | " - 更新强学习器:$f_{t}(x) = f_{t-1}(x) + \\sum\\limits_{j=1}^{J}c_{tj}I(x \\in R_{tj})$\n", 242 | " \n", 243 | "上面算法描述版本参考自李航老师的《统计学习方法》,$c_{tj}$是决策树单元$R_{tj}$上固定输出值。然而更常见的算法版本(参考wikipedia)并不使用$c$,而是使用**步长$\\gamma$**,步长×负梯度更reasonable:\n", 244 | "\n", 245 | "即:\n", 246 | "- 使用$(x_i, r_{ti})$去拟合下一棵CART回归树:\n", 247 | " - 拟合的输出是$h_m(x)$\n", 248 | " - 再用线性搜索找到最佳步长$\\gamma_m = \\underset{\\gamma}{arg\\; min}\\sum_{i=1}^nL(y_i, F_{m-1}(x_i)+\\gamma h_m(x_i))$,其中的$\\gamma h_m(x)$等价于$c$。" 249 | ] 250 | }, 251 | { 252 | "cell_type": "markdown", 253 | "metadata": { 254 | "collapsed": true 255 | }, 256 | "source": [ 257 | "### XGBoost\n", 258 | "\n", 259 | "XGBoost的符号表示稍微有些不同,参考`陈天奇`的[slide](https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf)。\n", 260 | "\n", 261 | "XGBoost本质上也是一个gradient boosting。\n", 262 | "\n", 263 | "我们每轮都训练一个基础分类器:\n", 264 | "\n", 265 | "$\\hat y_i^{(0)} = 0$\n", 266 | "\n", 267 | "$\\hat y_i^{(1)} = f_1(x_i) = \\hat y_i^{(0)} + f_1(x_i)$\n", 268 | "\n", 269 | "$\\hat y_i^{(2)} = f_1(x_i) + f_2(x_i) = \\hat y_i^{(1)} + f_2(x_i)$\n", 270 | "\n", 271 | "...\n", 272 | "\n", 273 | "$\\hat y_i^{(t)} = \\sum_{k=1}^t f_k(x_i) = \\hat y_i^{(t-1)} + f_t(x_i)$\n", 274 | "\n", 275 | "上面的最后一条式子解释如下:当我们训练了t轮以后,有了t个弱学习器,每一轮都将之前已经Boosting的强学习器和现在这轮的弱学习器的结果相加。\n", 276 | "\n", 277 | "**目标函数(损失函数)**:$Obj^{(t)} = \\sum_{i=1}^nl(y_i, \\hat y_i^{(t)})+\\sum_{i=1}^tΩ(f_i)$\n", 278 | "\n", 279 | "回忆**泰勒展开**:$f(x + \\Delta x)≈f(x)+f'(x)\\Delta x+\\frac{1}{2}f''(x)\\Delta x^2$\n", 280 | "\n", 281 | "将$\\hat y_i^{(t)} = \\hat y_i^{(t-1)} + f_t(x_i)$带入目标函数,转换成:\n", 282 | "\n", 283 | "$Obj^{(t)} = \\sum_{i=1}^n[l(y_i, \\hat y_i^{(t-1)})+g_if_t(x_i)+\\frac{1}{2}h_if_t^2(x_i)]+Ω(f_t)+constant$,其中,$g_i = \\partial_{\\hat y_i^{(t-1)}}l(y_i, \\hat y_i^{(t-1)})$,$h_i = \\partial^2_{\\hat y_i^{(t-1)}}l(y_i, \\hat y_i^{(t-1)})^2$\n", 284 | "\n", 285 | "把上面的常数都去掉,剩下:\n", 286 | "\n", 287 | "$Obj^{(t)} = \\sum_{i=1}^n[g_if_t(x_i)+\\frac{1}{2}h_if_t^2(x_i)]+Ω(f_t)$\n", 288 | "\n", 289 | "引入叶结点的权重:$f_t(x) = w_{q(x)}$,$q(x)$是样本到叶结点的映射,正则函数:$Ω(f_t) = \\gamma T + \\frac{1}{2}\\lambda \\sum^T_{j=1}w^2_j$。$T$是叶结点的个数。\n", 290 | "\n", 291 | "定义在叶结点j的样本集:$I_j = \\{i|q(x_i) = j\\}$\n", 292 | "\n", 293 | "把叶结点权重和正则函数带入目标函数,得到:\n", 294 | "\n", 295 | "$Obj^{(t)} = \\sum_{i=1}^n[g_iw_{q(x_i)}+\\frac{1}{2}h_iw^2_{q(x_i)}]+\\gamma T + \\frac{1}{2}\\lambda \\sum^T_{j=1}w^2_j\\\\\n", 296 | "=\\sum_{j=1}^T[(\\sum_{i∈I_j }g_i)w_j + \\frac{1}{2}(\\sum_{i∈I_j}h_i+\\lambda)w_j^2]+\\gamma T$\n", 297 | "\n", 298 | "定义:$G_j = \\sum_{i∈I_j}g_i$,$H_j = \\sum_{i∈I_j}h_i$,\n", 299 | "\n", 300 | "$Obj^{(t)} =\\sum_{j=1}^T[G_jw_j + \\frac{1}{2}(H_j + \\lambda)w_j^2] + \\lambda T$\n", 301 | "\n", 302 | "回忆一下一元二次函数的性质:\n", 303 | "\n", 304 | "对于:$Gx + \\frac{1}{2}Hx^2$,($H>0$),最小值为:$-\\frac{1}{2}\\frac{G^2}{H}$,在$ -\\frac{G}{H}$处取得。\n", 305 | "\n", 306 | 
"回到目标函数中去,如果树的结构($q(x)$)固定,那最优的权值分配是:\n", 307 | "\n", 308 | "$w_j^* = -\\frac{G_i}{H_j+\\lambda}$\n", 309 | "\n", 310 | "$Obj^* = -\\frac{1}{2}\\sum^T_{j=1}\\frac{G_j^2}{H_j+\\lambda}+\\gamma T$\n", 311 | "\n", 312 | "接下来考虑**如何学习树的结构(Greedy Learning)**:\n", 313 | "\n", 314 | "回忆一下CART是如何做的:\n", 315 | "\n", 316 | "对现有特征A的每一个特征,每一个可能的取值a,**根据样本点对$A=a$的测试是“是”还是“否”**,将$D$分割成$D_1$和$D_2$两部分,计算$A=a$时的基尼指数。选择基尼指数最小的特征机器对应的切分点作为**最优特征**和**最优切分点**。\n", 317 | "\n", 318 | "这里不算基尼指数,这里的Gain是:$\\frac{1}{2}[\\frac{G^2_L}{H_L+\\lambda}+\\frac{G^2_R}{H_R+\\lambda}-\\frac{(G_L+G_R)^2}{H_L+H_R+\\lambda}] - \\gamma$。\n", 319 | "\n", 320 | "上式方括号中的三项分别代表:**左子树的得分**;**右子树的得分**;**如果不分割的得分**。得分可以理解成是损失的反面。\n", 321 | "\n", 322 | "寻找最优分割的算法:\n", 323 | "\n", 324 | "- 对每个结点,遍历所有feature:\n", 325 | " - 对每个feature,将数据样本按照feature 值排序;\n", 326 | " - 使用linear scan决定这个feature的最佳split;\n", 327 | " - 如此循环,找到所有feature的最佳split。\n", 328 | "\n", 329 | "上面算法的时间复杂度是:$O(d\\cdot K\\cdot nlogn )$,其中,n是数据个数,d是特征个数,K是树深度。\n", 330 | "\n", 331 | "**剪枝**:\n", 332 | "\n", 333 | "这里的剪枝和其他普通的决策树剪枝没有区别,分为前剪枝和后剪枝。其中,后剪枝的策略是:递归地减掉所有产生negative gain的split。" 334 | ] 335 | }, 336 | { 337 | "cell_type": "markdown", 338 | "metadata": {}, 339 | "source": [ 340 | "### Boosting实践\n", 341 | "\n", 342 | "sklearn中自带:`sklearn.ensemble.GradientBoostingClassifier`和`sklearn.ensemble.AdaBoostClassifier`。" 343 | ] 344 | }, 345 | { 346 | "cell_type": "code", 347 | "execution_count": 1, 348 | "metadata": {}, 349 | "outputs": [ 350 | { 351 | "name": "stdout", 352 | "output_type": "stream", 353 | "text": [ 354 | "决策树正确率84.67%\n", 355 | "GDBT正确率92.00%\n", 356 | "AdaBoost正确率92.67%\n" 357 | ] 358 | } 359 | ], 360 | "source": [ 361 | "import numpy as np \n", 362 | "from sklearn.tree import DecisionTreeClassifier \n", 363 | "from sklearn.ensemble import GradientBoostingClassifier \n", 364 | "from sklearn.ensemble import AdaBoostClassifier \n", 365 | "import matplotlib.pyplot as plt \n", 366 | "import matplotlib as mpl \n", 367 | "from sklearn import datasets \n", 368 | " \n", 369 | "iris=datasets.load_iris() \n", 370 | "x=iris.data[:,:2] \n", 371 | "y=iris.target \n", 372 | " \n", 373 | "model1=DecisionTreeClassifier(max_depth=5) \n", 374 | "model2=GradientBoostingClassifier(n_estimators=100) \n", 375 | "model3=AdaBoostClassifier(model1,n_estimators=100) \n", 376 | "model1.fit(x,y) \n", 377 | "model2.fit(x,y) \n", 378 | "model3.fit(x,y) \n", 379 | "model1_pre=model1.predict(x) \n", 380 | "model2_pre=model2.predict(x) \n", 381 | "model3_pre=model3.predict(x) \n", 382 | "res1=model1_pre==y \n", 383 | "res2=model2_pre==y \n", 384 | "res3=model3_pre==y \n", 385 | "print '决策树正确率%.2f%%'%np.mean(res1*100) \n", 386 | "print 'GDBT正确率%.2f%%'%np.mean(res2*100) \n", 387 | "print 'AdaBoost正确率%.2f%%'%np.mean(res3*100) " 388 | ] 389 | }, 390 | { 391 | "cell_type": "markdown", 392 | "metadata": {}, 393 | "source": [ 394 | "### xgboost实践\n", 395 | "\n", 396 | "xgboost需要额外安装一下:[官方安装地址](https://xgboost.readthedocs.io/en/latest/build.html)\n", 397 | "\n", 398 | "这里简单说下ubuntu下python接口的安装:\n", 399 | "\n", 400 | "```shell\n", 401 | "git clone --recursive https://github.com/dmlc/xgboost\n", 402 | "cd xgboost; make -j4\n", 403 | "cd python-package; sudo python setup.py install\n", 404 | "```\n", 405 | "\n", 406 | "同样,[官网这里](https://xgboost.readthedocs.io/en/latest/how_to/param_tuning.html)对调参方法有很详细的介绍。" 407 | ] 408 | }, 409 | { 410 | "cell_type": "code", 411 | "execution_count": 3, 412 | "metadata": {}, 413 | "outputs": [ 414 | { 415 | "name": "stdout", 416 | "output_type": "stream", 417 | 
"text": [ 418 | "[0]\ttrain-merror:0.133333\ttest-merror:0.266667\n", 419 | "[1]\ttrain-merror:0.142857\ttest-merror:0.266667\n", 420 | "[2]\ttrain-merror:0.12381\ttest-merror:0.266667\n", 421 | "[3]\ttrain-merror:0.12381\ttest-merror:0.266667\n", 422 | "[4]\ttrain-merror:0.12381\ttest-merror:0.266667\n", 423 | "[5]\ttrain-merror:0.114286\ttest-merror:0.288889\n", 424 | "[6]\ttrain-merror:0.104762\ttest-merror:0.288889\n", 425 | "[7]\ttrain-merror:0.104762\ttest-merror:0.288889\n", 426 | "[8]\ttrain-merror:0.114286\ttest-merror:0.288889\n", 427 | "[9]\ttrain-merror:0.114286\ttest-merror:0.288889\n", 428 | "predicting, classification error=0.288889\n" 429 | ] 430 | } 431 | ], 432 | "source": [ 433 | "import xgboost as xgb \n", 434 | "from sklearn.model_selection import train_test_split \n", 435 | "from sklearn import datasets \n", 436 | " \n", 437 | "iris=datasets.load_iris() \n", 438 | "x=iris.data[:,:2] \n", 439 | "y=iris.target \n", 440 | "x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=0.7, random_state=1) \n", 441 | "data_train = xgb.DMatrix(x_train,label=y_train) \n", 442 | "data_test=xgb.DMatrix(x_test,label=y_test) \n", 443 | "param = {} \n", 444 | "param['objective'] = 'multi:softmax' \n", 445 | "param['eta'] = 0.1 \n", 446 | "param['max_depth'] = 5 \n", 447 | "param['silent'] = 1 \n", 448 | "param['nthread'] = 4 \n", 449 | "param['num_class'] = 3 \n", 450 | "watchlist = [ (data_train,'train'), (data_test, 'test') ] \n", 451 | "num_round = 10 \n", 452 | "bst = xgb.train(param, data_train, num_round, watchlist ); \n", 453 | "pred = bst.predict( data_test ); \n", 454 | "print ('predicting, classification error=%f' % (sum( int(pred[i]) != y_test[i] for i in range(len(y_test))) / float(len(y_test)) ))" 455 | ] 456 | }, 457 | { 458 | "cell_type": "markdown", 459 | "metadata": {}, 460 | "source": [ 461 | "## 参考链接\n", 462 | "\n", 463 | "- [A Kaggle Master Explains Gradient Boosting](http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/)\n", 464 | "- [机器学习笔记(六)Bagging及随机森林](http://blog.csdn.net/sinat_22594309/article/details/60465700)\n", 465 | "- [机器学习笔记(七)Boost算法(GDBT,AdaBoost,XGBoost)原理及实践](http://blog.csdn.net/sinat_22594309/article/details/60957594)\n", 466 | "- [Introduction to Boosted Trees ](https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf)\n", 467 | "- [wikipedia](https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting)" 468 | ] 469 | } 470 | ], 471 | "metadata": { 472 | "anaconda-cloud": {}, 473 | "kernelspec": { 474 | "display_name": "Python [default]", 475 | "language": "python", 476 | "name": "python2" 477 | }, 478 | "language_info": { 479 | "codemirror_mode": { 480 | "name": "ipython", 481 | "version": 2 482 | }, 483 | "file_extension": ".py", 484 | "mimetype": "text/x-python", 485 | "name": "python", 486 | "nbconvert_exporter": "python", 487 | "pygments_lexer": "ipython2", 488 | "version": "2.7.14" 489 | } 490 | }, 491 | "nbformat": 4, 492 | "nbformat_minor": 1 493 | } 494 | -------------------------------------------------------------------------------- /12_EM.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# EM算法总结\n", 8 | "\n", 9 | "在概率模型中,最常用的模型参数估计方法应该就是**最大似然法**。\n", 10 | "\n", 11 | "**EM算法**本质上也是最大似然,它是针对模型中存在**隐变量**的情况的**最大似然**。\n", 12 | "\n", 13 | "下面通过两个例子引入。\n", 14 | "\n", 15 | "## 没有隐变量的硬币模型\n", 16 | "\n", 17 | 
"![](https://github.com/applenob/machine_learning_basic/raw/master/res/maximum_likelihood.png)\n", 18 | "\n", 19 | "假设有两个硬币,$A$和$B$,这两个硬币具体材质未知,即抛硬币的结果是head的概率不一定是50%。\n", 20 | "\n", 21 | "在这个实验中,我们每次拿其中一个硬币,抛10次,统计结果。\n", 22 | "\n", 23 | "实验的目标是统计$A$和$B$的head朝上的概率,即估计$\\hat \\theta_A$和$\\hat \\theta_B$。\n", 24 | "\n", 25 | "对每一枚硬币来说,使用**极大似然法**来估计它的参数:\n", 26 | "\n", 27 | "假设硬币$A$正面朝上的次数是$n^A_h$,反面朝上的次数是:$n^A_t$。\n", 28 | "\n", 29 | "似然函数:$L(\\theta_A) = (\\theta_A)^{n^A_h}(1-\\theta_A)^{n^A_t}$。\n", 30 | "\n", 31 | "对数似然函数:$log\\;L(\\theta_A) = n^A_h\\cdot log(\\theta_A)+n^A_t\\cdot log(1-\\theta_A)$。\n", 32 | "\n", 33 | "$\\hat \\theta_A = \\underset{\\theta_A}{argmax}\\;log\\;L(\\theta_A)$ 。\n", 34 | "\n", 35 | "对参数求偏导:$\\frac{\\partial log\\; L(\\theta_A)}{\\partial \\theta_A}=\\frac{n^A_h}{\\theta_A}-\\frac{n^A_t}{1-\\theta_A}$。\n", 36 | "\n", 37 | "令上式为$0$,解得:$\\hat \\theta_A = \\frac{n^A_h}{n^A_h+n^A_t}$。\n", 38 | "\n", 39 | "即$\\hat \\theta_A = \\frac{number\\; of\\; heads\\; using\\; coin\\; A}{total\\; number\\; of\\; flips\\; using\\; coin\\; A}$。" 40 | ] 41 | }, 42 | { 43 | "cell_type": "markdown", 44 | "metadata": {}, 45 | "source": [ 46 | "## 有隐变量的硬币模型\n", 47 | "\n", 48 | "![](https://github.com/applenob/machine_learning_basic/raw/master/res/expectation_maximization.png)\n", 49 | "\n", 50 | "这个问题是上一个问题的困难版,即给出一系列统计的实验,**但不告诉你某组实验采用的是哪枚硬币**,即某组实验采用哪枚硬币成了一个**隐变量**。\n", 51 | "\n", 52 | "这里引入**EM算法的思路**:\n", 53 | "\n", 54 | "- 1.先随机给出模型参数的估计,以初始化模型参数。\n", 55 | "- 2.根据之前模型参数的估计,和观测数据,计算**隐变量的分布**。\n", 56 | "- 3.根据隐变量的分布,求**联合分布的对数**关于隐变量分布的**期望**。\n", 57 | "- 4.重新估计**模型参数**,这次最大化的不是似然函数,而是第3步求的**期望**。\n", 58 | "\n", 59 | "一般教科书会把EM算法分成两步:E步和M步,即求期望和最大化期望。\n", 60 | "\n", 61 | "E步对应上面2,3;M对应4。" 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": {}, 67 | "source": [ 68 | "## EM算法\n", 69 | "\n", 70 | "输入:观测变量数据$Y$,隐变量数据$Z$,联合分布$P(Y,Z|\\theta)$,条件分布$P(Z|Y,\\theta)$;\n", 71 | "\n", 72 | "输出:模型参数$\\theta$。\n", 73 | "\n", 74 | "- 1.选择参数的初始值$\\theta^{(0)}$,开始迭代;\n", 75 | "- 在第$i+1$次迭代:\n", 76 | " - 2.E步:$Q(\\theta,\\theta^{(i)}) = \\sum_zlog\\;P(Y,Z|\\theta)P(Z|Y,\\theta^{(i)})$\n", 77 | " - 3.M步:$Q^{(i+1)} = \\underset{\\theta}{argmax}\\;Q(\\theta,\\theta^{(i)})$\n", 78 | "- 4.重复2,3直至收敛。\n" 79 | ] 80 | }, 81 | { 82 | "cell_type": "markdown", 83 | "metadata": {}, 84 | "source": [ 85 | "## 参考资料\n", 86 | "\n", 87 | "- [如何感性地理解EM算法?](http://www.jianshu.com/p/1121509ac1dc)\n", 88 | "- [What is the expectation maximization algorithm?](http://pan.baidu.com/s/1i4NfvP7)" 89 | ] 90 | }, 91 | { 92 | "cell_type": "code", 93 | "execution_count": 1, 94 | "metadata": {}, 95 | "outputs": [ 96 | { 97 | "name": "stdout", 98 | "output_type": "stream", 99 | "text": [ 100 | "done\n" 101 | ] 102 | } 103 | ], 104 | "source": [ 105 | "print \"done\"" 106 | ] 107 | } 108 | ], 109 | "metadata": { 110 | "anaconda-cloud": {}, 111 | "kernelspec": { 112 | "display_name": "Python [default]", 113 | "language": "python", 114 | "name": "python2" 115 | }, 116 | "language_info": { 117 | "codemirror_mode": { 118 | "name": "ipython", 119 | "version": 2 120 | }, 121 | "file_extension": ".py", 122 | "mimetype": "text/x-python", 123 | "name": "python", 124 | "nbconvert_exporter": "python", 125 | "pygments_lexer": "ipython2", 126 | "version": "2.7.14" 127 | } 128 | }, 129 | "nbformat": 4, 130 | "nbformat_minor": 1 131 | } 132 | -------------------------------------------------------------------------------- /13_graph.ipynb: -------------------------------------------------------------------------------- 1 | { 2 
| "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# 图模型总结\n", 8 | "\n", 9 | "![](https://raw.githubusercontent.com/applenob/machine_learning_basic/master/res/crf.png)\n", 10 | "\n", 11 | "## 1.图模型的引入\n", 12 | "\n", 13 | "首先总结一下图模型,所谓图模型,其实就是在统计建模的时候,结合图论的思想。\n", 14 | "\n", 15 | "图模型=概率论+图论\n", 16 | "\n", 17 | "![](https://raw.githubusercontent.com/applenob/machine_learning_basic/master/res/graph.png)\n", 18 | "\n", 19 | "让我们先从**朴素贝叶斯**开始思考,随机变量y和所有的观测变量X有关,但每个观测变量对于y来说,又是独立的,也就是我们说的“naive”。\n", 20 | "\n", 21 | "这基本上是最简单的随机变量的关系了:$P(X|y)=p(x_1|y)\\cdot p(x_2|y)...\\cdot p(x_n|y)$。\n", 22 | "\n", 23 | "那我们可以从这里引申出什么呢?如果把所有的随机变量,都用图论中的节点表示,变量间的关系,由边表示,暂时先不考虑边的方向的问题,那么朴素贝叶斯就可以很直观地画成上面的第一幅图。\n", 24 | "\n", 25 | "再来回忆一下**最大熵**。最大熵的建模思想并不是来源于图论,但是看看最大熵模型的表达式:\n", 26 | "\n", 27 | "$$P_w(y|x)=\\frac{1}{Z_w(x)}exp\\bigl(\\begin{smallmatrix}\n", 28 | "\\sum_{i=1}^{n} w_i\\cdot f_i(x,y)\n", 29 | "\\end{smallmatrix}\\bigr)$$\n", 30 | "\n", 31 | "我们换一个思路去想:特征函数$f_i(x,y)$刻画的是变量$x$和$y$之间的关系,这跟朴素贝叶斯中的条件概率不同,**条件概率是单向的**,$p(y|x)!=p(x|y)$,而**特征函数是双向的,或者说是无向的**,$f(x,y)=f(y,x)$。因此为了区分这二者的区别我们把图模型分为有向图和无向图模型两种。\n", 32 | "\n", 33 | "下面给出二者更加规范的定义(来自:[An introduction to conditional random fields](https://link.zhihu.com/?target=http%3A//homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf)):" 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "metadata": {}, 39 | "source": [ 40 | "**无向图**:\n", 41 | "\n", 42 | "考虑一系列随机变量$Y$,$s∈1,2,...|Y|$。$y$是$Y$的分布。\n", 43 | "\n", 44 | "认为$y$的概率分布可以表示成一系列和$Y$有关的**因素(factor)**的乘积。这个因素的形式是:$\\Psi_a(y_a)$,$a∈1,2,...,A$。\n", 45 | "**加粗代表是向量。**\n", 46 | "\n", 47 | "$$p(\\mathbf{y})=\\frac{1}{Z}\\prod_{a=1}^{A}\\Psi_a(\\mathbf{y}_a)$$\n", 48 | "\n", 49 | "其中$Z$是归一化因子。\n", 50 | "\n", 51 | "例:\n", 52 | "\n", 53 | "![](https://raw.githubusercontent.com/applenob/machine_learning_basic/master/res/dag.png)\n", 54 | "\n", 55 | "\n", 56 | "$$p(y_1,y_2,y_3)\\propto \\Psi_1(y_1,y_2) \\Psi_2(y_2,y_3) \\Psi_3(y_3,y_1)$$" 57 | ] 58 | }, 59 | { 60 | "cell_type": "markdown", 61 | "metadata": {}, 62 | "source": [ 63 | "**有向图**:\n", 64 | "$G$是一个DAG(有向无环图),$π(s)$是$Y_s$的下标,DAG的模型可以这么理解:联合概率分布等于每个节点在它们的父节点的条件下的条件概率的累乘,写成:\n", 65 | "\n", 66 | "$$p(\\mathbf{y})=\\prod^S_{s=1}p(y_s|\\mathbf{y}_{π(s)})$$\n", 67 | "\n", 68 | "例:\n", 69 | "\n", 70 | "![](https://raw.githubusercontent.com/applenob/machine_learning_basic/master/res/ug.png)\n" 71 | ] 72 | }, 73 | { 74 | "cell_type": "code", 75 | "execution_count": 1, 76 | "metadata": {}, 77 | "outputs": [ 78 | { 79 | "name": "stdout", 80 | "output_type": "stream", 81 | "text": [ 82 | "done\n" 83 | ] 84 | } 85 | ], 86 | "source": [ 87 | "print \"done\"" 88 | ] 89 | } 90 | ], 91 | "metadata": { 92 | "anaconda-cloud": {}, 93 | "kernelspec": { 94 | "display_name": "Python [default]", 95 | "language": "python", 96 | "name": "python2" 97 | }, 98 | "language_info": { 99 | "codemirror_mode": { 100 | "name": "ipython", 101 | "version": 2 102 | }, 103 | "file_extension": ".py", 104 | "mimetype": "text/x-python", 105 | "name": "python", 106 | "nbconvert_exporter": "python", 107 | "pygments_lexer": "ipython2", 108 | "version": "2.7.14" 109 | } 110 | }, 111 | "nbformat": 4, 112 | "nbformat_minor": 1 113 | } 114 | -------------------------------------------------------------------------------- /14_tran_learn.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# 迁移学习入门\n", 8 | "\n", 9 | 
"迁移学习简单地说,首先有个目标任务$T$,目标数据$D_T$数据量偏小,此时有一个相似的源任务$S$,源数据$D_S$相对更充分,迁移学习的目标即如何借助源任务去提高目标任务的效果。\n", 10 | "\n", 11 | "第四范式的罗远升有一篇很不错的文章[《迁移学习实战:从算法到实践》](http://www.sohu.com/a/160626995_470008),看了那篇文章,你会对迁移学习有个大概的认识。\n", 12 | "\n", 13 | "## 迁移学习方法分类\n", 14 | "\n", 15 | "根据所要迁移的知识的表示形式(即“What to transfer”),分为以下四大类:\n", 16 | "- 基于样本的迁移学习(instance-transfer);\n", 17 | "- 基于参数的迁移学习(parameter-transfer);\n", 18 | "- 基于特征表示的迁移学习(feature-representation-transfer);\n", 19 | "- 基于关系知识的迁移(relation-knowledge-transfer)。\n", 20 | "\n", 21 | "## 基于样本的迁移学习\n", 22 | "\n", 23 | "所谓基于样本,就是从源数据中选取对目标领域建模有帮助的样本。这样操作就有一个前提假设:源领域和目标领域的特征空间和目标空间要一致。\n", 24 | "\n", 25 | "### TrAdaBoost\n", 26 | "\n", 27 | "TrAdaBoost是一个典型的基于样本的迁移学习算法。关于基础的AdaBoost可以看[之前写的博客](https://applenob.github.io/tree_ensemble.html#AdaBoost)。\n", 28 | "\n", 29 | "TrAdaBoost的**思想**是:\n", 30 | "- 当一个目标数据$D_T$中的样本被错误的分类之后,可以认为这个样本是很难分类的,因此**增大这个样本的权重**,这样在下一次的训练中这个样本所占的比重变大,这一点和基本的AdaBoost算法的思想是一样的;\n", 31 | "- 如果源数据$D_S$中的一个样本被错误的分类了,可以认为这个样本对于目标数据是不同的,因此**降低这个样本的权重**,降低这个样本在分类器中所占的比重。\n", 32 | "\n", 33 | "**TrAdaBoost算法**:\n", 34 | "\n", 35 | "- 先规定源数据:$D_S=(x_i^s, c(x_i^s))$,样本个数为$n$,目标数据:$D_T=(x_i^t, c(x_i^t))$,样本个数为$m$,测试数据:$S = \\{(x_i^{test})\\}$。\n", 36 | "- 输入:训练集$D_S$和$D_T$,测试集$S$,一个基本分类器Learner,迭代次数N。\n", 37 | "- 1.初始化:\n", 38 | " - 初始化权重向量$\\mathbf w^1 = (w_1^1, ..., w_{n+m}^1)$,$w_i^1=\\left\\{\\begin{matrix} \\frac{1}{n},\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; 当i=1,...,n\\\\ \\frac{1}{m},\\; 当i=n+1,...,n+m\\end{matrix}\\right.$\n", 39 | " - 设置$\\beta = 1/(1+\\sqrt{2\\ln{n/N}})$\n", 40 | "- 2.$For\\;t = 1, ..., N$:\n", 41 | " - 3.归一化的权重:$\\mathbf p^t = \\frac{\\mathbf w^t}{\\sum_{i=1}^{n+m}w_i^t}$;\n", 42 | " - 4.使用带权重的两个训练数据训练Learner,得到在$S$上的分类器$h_t:X\\rightarrow Y$;\n", 43 | " - 5.计算$h_t$在$D_T$中的错误率:$\\epsilon_t = \\sum_{i=n+1}^{n+m}\\frac{w_i^t|h_t(x_i)-c(x_i)|}{\\sum_{i=n+1}^{n+m}w_i^t}$;\n", 44 | " - 6.令$\\beta_t = \\epsilon_t(1-\\epsilon_t)$,**更新权重向量(重点)**:$w_i^{t+1} = \\left\\{\\begin{matrix}w_i^t\\beta^{|h_t(x_i)-c(x_i)|},\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; 当i=1,...,n\\\\ w_i^t\\beta^{-h_t(x_i)-c(x_i)|},\\; 当i=n+1,...,n+m\\end{matrix}\\right.$" 45 | ] 46 | }, 47 | { 48 | "cell_type": "markdown", 49 | "metadata": {}, 50 | "source": [ 51 | "## 基于参数的迁移学习\n", 52 | "\n", 53 | "基于参数就是源任务和目标任务使用同一个模型,并且共享参数。典型的方法:多任务学习(multi-task learning)。\n", 54 | "\n", 55 | "![](https://github.com/applenob/machine_learning_basic/raw/master/res/multi_task.png)" 56 | ] 57 | }, 58 | { 59 | "cell_type": "markdown", 60 | "metadata": {}, 61 | "source": [ 62 | "## 基于特征表示的迁移学习\n", 63 | "\n", 64 | "基于特征表示的迁移学习是指利用源数据学会一个特征表示的方法,再用这个方法去提取目标数据的特征。\n", 65 | "\n", 66 | "根据特征表示学习方法的不同也可以分为有监督和无监督两类。" 67 | ] 68 | }, 69 | { 70 | "cell_type": "code", 71 | "execution_count": 1, 72 | "metadata": {}, 73 | "outputs": [ 74 | { 75 | "name": "stdout", 76 | "output_type": "stream", 77 | "text": [ 78 | "done\n" 79 | ] 80 | } 81 | ], 82 | "source": [ 83 | "print \"done\"" 84 | ] 85 | } 86 | ], 87 | "metadata": { 88 | "anaconda-cloud": {}, 89 | "kernelspec": { 90 | "display_name": "Python [default]", 91 | "language": "python", 92 | "name": "python2" 93 | }, 94 | "language_info": { 95 | "codemirror_mode": { 96 | "name": "ipython", 97 | "version": 2 98 | }, 99 | "file_extension": ".py", 100 | "mimetype": "text/x-python", 101 | "name": "python", 102 | "nbconvert_exporter": "python", 103 | "pygments_lexer": "ipython2", 104 | "version": "2.7.14" 105 | } 106 | }, 107 | "nbformat": 4, 108 | "nbformat_minor": 1 109 | } 110 | 
-------------------------------------------------------------------------------- /15_interview.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# 机器学习基础知识汇总" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "## 计算CNN输出尺寸\n", 15 | "\n", 16 | "公式:`输出尺寸=(输入尺寸-filter尺寸+2*padding)/stride+1`。例如:输入32×32、filter 5×5、padding 0、stride 1,输出尺寸为(32-5+0)/1+1=28。" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "## ROC和AUC\n", 24 | "\n", 25 | "ROC曲线的x轴是伪阳率即$\frac{伪阳}{真阴+伪阳}$,y轴是真阳率即$\frac{真阳}{真阳+伪阴}$。\n", 26 | "\n", 27 | "- threshold很高,导致全部预测阴,则真阳率为0,伪阳率为0,在坐标点$(0, 0)$;\n", 28 | "- threshold很低,导致全部预测阳,则真阳率为1,伪阳率为1,在坐标点$(1, 1)$;\n", 29 | "- 如果分类效果很好,则真阳率很高,伪阳率很低,接近坐标点$(0, 1)$。\n", 30 | "\n", 31 | "AUC(Area Under Curve)即ROC曲线下的面积。分类效果越好,曲线越接近$(0, 1)$,AUC越大。\n", 32 | "\n", 33 | "[参考链接](https://www.zhihu.com/question/39840928?from=profile_question_card)" 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "metadata": {}, 39 | "source": [ 40 | "## 卡方检验\n", 41 | "\n", 42 | "卡方检验可以用于两个变量间的**相关性检测**。\n", 43 | "\n", 44 | "核心思想:卡方衡量了**实际值与理论值的差异程度**。\n", 45 | "\n", 46 | "即,先假设两个变量之间是**相互独立的**,计算一组理论值$T$,设实际值是$A$,则$\chi^2=\sum\frac{(A-T)^2}{T}$\n", 47 | "\n", 48 | "[参考链接](https://segmentfault.com/a/1190000003719712)" 49 | ] 50 | }, 51 | { 52 | "cell_type": "markdown", 53 | "metadata": {}, 54 | "source": [ 55 | "## 交叉熵损失函数\n", 56 | "\n", 57 | "$H(p, q) = \sum_i p_i × log\frac{1}{q_i}$,可以衡量两个分布的相似度。\n", 58 | "\n", 59 | "常配合sigmoid输出使用:相比平方误差,此时梯度中不含sigmoid的导数项,在输出饱和、误差较大的时候梯度也不会太小。" 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "metadata": { 65 | "collapsed": true 66 | }, 67 | "source": [ 68 | "## Perplexity\n", 69 | "\n", 70 | "困惑度,如果语言模型生成的句子越不像是人说的,困惑度越大,语言模型越差。\n", 71 | "\n", 72 | "在语言模型中,可以将一句话的似然函数,用来描述这句话的困惑度。似然函数越大,困惑度越小。\n", 73 | "\n", 74 | "于是有:$PPL=\sqrt[N]{\frac{1}{P(w_1,w_2,...,w_N)}}\\=e^{\frac{1}{N}ln\frac{1}{P(w_1,w_2,...,w_N)}}\\=e^{-\frac{1}{N}\sum_{i=1}^NlnP(w_i)}$\n", 75 | "\n", 76 | "[参考链接](http://blog.csdn.net/luo123n/article/details/48902815)" 77 | ] 78 | }, 79 | { 80 | "cell_type": "markdown", 81 | "metadata": { 82 | "collapsed": true 83 | }, 84 | "source": [ 85 | "## Huffman 编码\n", 86 | "\n", 87 | "本身概念不难:给出现频率高的符号分配更短的编码,使期望编码长度最短。构造也很简单,每次把两个频率最小的结点拿出来,合并,再丢回去。\n", 88 | "\n", 89 | "需要注意的是,编码的时候,任何一个编码都不可以是其他编码的前缀(即前缀码)。\n", 90 | "\n" 91 | ] 92 | }, 93 | { 94 | "cell_type": "markdown", 95 | "metadata": {}, 96 | "source": [ 97 | "## RNN和LSTM\n", 98 | "\n", 99 | "RNN有什么问题?LSTM为何可以解决这个问题?\n", 100 | "\n", 101 | "RNN由于长期依赖的问题,经过许多阶段传播后,梯度**倾向于消失(大部分情况)**或**爆炸(很少,但对优化过程影响很大)**。\n", 102 | "\n", 103 | "- LSTM的自循环的权重视上下文而定,而不是固定的;而普通的RNN是固定的W。\n", 104 | "- 内部状态$s$或者$h$:\n", 105 | " - RNN:$h^{(t)}=\sigma(b+Wh^{(t−1)}+Ux^{(t)})$\n", 106 | " - LSTM:$s^{(t)}_i=f^{(t)}_is^{(t−1)}_i+g^{(t)}_ii^{(t)}_i$,$g^{(t)}_i$又称为“备选状态”。\n", 107 | " - GRU:$h^{(t)}_i=u^{(t−1)}_ih^{(t−1)}_i+(1−u^{(t−1)}_i)\tilde h_t$\n", 108 | " - 传统的RNN使用**“覆写”**的方式计算状态:$S_t=f(S_{t-1},x_t)$,根据求导的链式法则,这种形式直接导致梯度被表示成连乘的形式,容易导致梯度消失或者梯度爆炸。\n", 109 | " - 现代的RNN(包括但不限于LSTM单元),使用**“累加”**的方式计算状态:$S_t = \sum_{\tau=1}^t\Delta S_{\tau}$,这种累加形式导致导数也是累加的形式,因此避免了梯度的消失。" 110 | ] 111 | }, 112 | { 113 | "cell_type": "code", 114 | "execution_count": 1, 115 | "metadata": {}, 116 | "outputs": [ 117 | { 118 | "name": "stdout", 119 | "output_type": "stream", 120 | "text": [ 121 | "done\n" 122 | ] 123 | } 124 | ], 125 | "source": [ 126 | "print \"done\"" 127 | ] 128 | } 129 | ], 130 | "metadata": { 131 | 
"anaconda-cloud": {}, 132 | "kernelspec": { 133 | "display_name": "Python [default]", 134 | "language": "python", 135 | "name": "python2" 136 | }, 137 | "language_info": { 138 | "codemirror_mode": { 139 | "name": "ipython", 140 | "version": 2 141 | }, 142 | "file_extension": ".py", 143 | "mimetype": "text/x-python", 144 | "name": "python", 145 | "nbconvert_exporter": "python", 146 | "pygments_lexer": "ipython2", 147 | "version": "2.7.14" 148 | } 149 | }, 150 | "nbformat": 4, 151 | "nbformat_minor": 1 152 | } 153 | -------------------------------------------------------------------------------- /16_max_entropy.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Max Entropy学习总结" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "## 模型推导\n", 15 | "\n", 16 | "最大熵模型从最基础的最大熵思想,去推出表达式。\n", 17 | "\n", 18 | "**最大熵**是选择最优模型的一个准则,即,首先要**满足已有的事实(约束条件)**,然后在没有更多信息的情况下,那些不确定的部分都是**“等可能的”。**“等可能”本身不容易操作,熵是一个可以优化的数值目标。\n", 19 | "\n", 20 | "这里最大熵最大化的,是**条件熵**$H(Y|X)$。\n", 21 | "\n", 22 | "$$H(Y|X)=-\\sum_{x,y}\\hat{P}(x)P(y|x)logP(y|x)$$\n", 23 | "\n", 24 | "具体的关于信息熵的文章,可以看[colah的这篇博客](https://colah.github.io/posts/2015-09-Visual-Information/),里面对信息熵/互信息/条件熵的物理意义做很好的解释。\n", 25 | "\n", 26 | "好的,我们的最大熵有了目标函数。刚才说了,我们还必须要保证满足已有的事实,这一点如何用数学公式去描述呢?\n", 27 | "\n", 28 | "首先引入**特征函数**:\n", 29 | "\n", 30 | "$$f(x,y)=\\left\\{\\begin{matrix}\n", 31 | "1, \\;\\;x与y满足某一事实\\\\ \n", 32 | "0,\\;\\;否则\n", 33 | "\\end{matrix}\\right.$$\n", 34 | "\n", 35 | "下面注意到两个期望:\n", 36 | "- 1.特征函数$f(x,y)$在**训练样本**中关于**经验分布**$\\tilde P(x,y)$的期望:$E_{\\tilde P}(f)=\\sum_{x,y}\\tilde P(x,y)f(x,y)=\\frac{1}{N}\\sum_{j=1}^Nf_i(x_j, y_j)$\n", 37 | "- 2.特征函数$f(x,y)$关于建立的**理论模型**$P(Y|X)$与经验分布$\\tilde P(x)$的期望:$E_{P}(f)=\\sum_{x,y}\\tilde P(x)P(y|x)f(x,y)=\\frac{1}{N}\\sum^N_{j=1}\\sum_yp^{(n)}(y|x_j)f_i(x_i, y)$\n", 38 | "\n", 39 | "我们希望,模型在训练完以后,**能够获取到训练数据中的信息**。这个想法,用上面的两个期望表达,就是:\n", 40 | "\n", 41 | "$$E_{\\tilde P}(f)=E_{P}(f)$$\n", 42 | "\n", 43 | "给定了目标函数和约束条件,我们通过拉格朗日对偶法,解得模型的**更一般的形式**,(具体的求解过程省略,这里主要是展现模型思想):\n", 44 | "\n", 45 | "$$P_w(y|x)=\\frac{1}{Z_w(x)}exp\\bigl(\\begin{smallmatrix}\n", 46 | "\\sum_{i=1}^{n} w_i\\cdot f_i(x,y)\n", 47 | "\\end{smallmatrix}\\bigr)$$\n", 48 | "\n", 49 | "其中,$Z_w(x)$是归一化因子,$Z_w(x)=\\sum_yexp\\bigl(\\begin{smallmatrix}\\sum_{i=1}^{n} w_i\\cdot f_i(x,y)\n", 50 | "\\end{smallmatrix}\\bigr)$。$w \\in R^n$是权值向量,$f_i(x,y)$是特征函数。\n", 51 | "\n", 52 | "这个形式和无向图模型几乎一毛一样~" 53 | ] 54 | }, 55 | { 56 | "cell_type": "markdown", 57 | "metadata": {}, 58 | "source": [ 59 | "## 对数似然函数\n", 60 | "\n", 61 | "$$L(w) = log\\prod_{x,y}P(y|x)^{\\tilde P(x,y)} \\\\ = \\sum_{x,y}\\tilde P(x,y)logP(y|x) \\;\\;代入最大熵模型 \\\\= \\sum_{x,y}\\tilde P(x,y)\\sum_{i=1}^nw_if_i(x,y)-\\sum_{x,y}\\tilde P(x,y)logZ_w(x)\\\\= \\sum_{x,y}\\tilde P(x,y)\\sum_{i=1}^nw_if_i(x,y)-\\sum_x\\tilde P(x)logZ_w(x)$$" 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": { 67 | "collapsed": true 68 | }, 69 | "source": [ 70 | "## 模型训练的优化算法\n", 71 | "\n", 72 | "### GIS算法\n", 73 | "\n", 74 | "GIS,Generalized Iterative Scaling,算法流程:\n", 75 | "\n", 76 | "- 初始化所有$w_i$为任意值,一般可以设置为0,即:$w_i^{(0)}=0,\\;i\\in \\{1,2,3,...,n\\}$。其中$n$是特征的个数,上标表示迭代轮数。\n", 77 | "- 重复更新权值直到收敛:\n", 78 | " - $w_i^{(t+1)}=w_i^{(t)}+\\frac{1}{C}log\\frac{E_{\\tilde P}(f_i)}{E_{P^{(n)}}(f_i)}$\n", 79 | " \n", 80 | 
"**GIS的python实现**,参考http://www.hankcs.com/ml/the-logistic-regression-and-the-maximum-entropy-model.html" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": 1, 86 | "metadata": { 87 | "collapsed": true 88 | }, 89 | "outputs": [], 90 | "source": [ 91 | "import sys\n", 92 | "import math\n", 93 | "from collections import defaultdict" 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": 2, 99 | "metadata": { 100 | "collapsed": true 101 | }, 102 | "outputs": [], 103 | "source": [ 104 | "class MaxEnt:\n", 105 | " def __init__(self):\n", 106 | " self._train_data = [] # 样本集, 元素是[y,x1,x2,...,xn]的元组\n", 107 | " self._Y = set() # 标签集合,相当于去重之后的y\n", 108 | " self._xy2num = defaultdict(int) #Key是(xi,yi)对,Value是count(xi,yi)\n", 109 | " self._data_N = 0 # 样本数量\n", 110 | " self._fea_N = 0 # 特征对(xi,yi)总数量\n", 111 | " self._xy2id = {} # 对(x,y)对做的顺序编号(ID), Key是(xi,yi)对,Value是ID\n", 112 | " self._C = 0 # 样本最大的特征数量,用于求参数时的迭代,见IIS原理说明\n", 113 | " self._train_exp = [] # 样本分布的特征期望值\n", 114 | " self._model_exp = [] # 模型分布的特征期望值\n", 115 | " self._w = [] # 对应n个特征的权值\n", 116 | " self._lastw = [] # 上一轮迭代的权值\n", 117 | " self._EPS = 0.01 # 判断是否收敛的阈值\n", 118 | " \n", 119 | " def load_data(self, filename):\n", 120 | " for line in open(filename, \"r\"):\n", 121 | " sample = line.strip().split(\"\\t\")\n", 122 | " if len(sample) < 2: # 至少:标签 + 一个特征\n", 123 | " continue\n", 124 | " y = sample[0]\n", 125 | " X = sample[1:]\n", 126 | " self._train_data.append(sample) # labe + features\n", 127 | " self._Y.add(y) # label\n", 128 | " for x in set(X): # set给X去重\n", 129 | " self._xy2num[(x, y)] += 1\n", 130 | " \n", 131 | " def _init_params(self):\n", 132 | " self._data_N = len(self._train_data)\n", 133 | " self._fea_N = len(self._xy2num)\n", 134 | " self._C = max([len(sample) - 1 for sample in self._train_data])\n", 135 | " self._w = [0.0 for _ in range(self._fea_N)]\n", 136 | " self._lastw = self._w[:]\n", 137 | " self._calc_train_exp()\n", 138 | " \n", 139 | " def is_convergence(self):\n", 140 | " \"\"\"判断是否收敛\"\"\"\n", 141 | " for w, lw in zip(self._w, self._lastw):\n", 142 | " if math.fabs(w - lw) >= self._EPS:\n", 143 | " return False\n", 144 | " return True\n", 145 | " \n", 146 | " def _calc_train_exp(self):\n", 147 | " \"\"\"特征关于经验数据的期望\"\"\"\n", 148 | " self._train_exp = [0.0 for _ in range(self._fea_N)]\n", 149 | " for i, xy in enumerate(self._xy2num):\n", 150 | " self._train_exp[i] = self._xy2num[xy] * 1.0 / self._data_N\n", 151 | " self._xy2id[xy] = i\n", 152 | " \n", 153 | " def _zx(self, X):\n", 154 | " \"\"\"计算Z(X)\"\"\"\n", 155 | " ZX = 0.0\n", 156 | " for y in self._Y:\n", 157 | " sum_ = 0.0\n", 158 | " for x in X:\n", 159 | " if (x, y) in self._xy2num:\n", 160 | " sum_ += self._w[self._xy2id[(x, y)]]\n", 161 | " ZX += math.exp(sum_)\n", 162 | " return ZX\n", 163 | " \n", 164 | " def _pyx(self, X):\n", 165 | " \"\"\"计算p(y|x)\"\"\"\n", 166 | " ZX = self._zx(X)\n", 167 | " results = []\n", 168 | " for y in self._Y:\n", 169 | " sum_ = 0.0\n", 170 | " for x in X:\n", 171 | " if (x, y) in self._xy2num: # 这个判断相当于指示函数的作用\n", 172 | " sum_ += self._w[self._xy2id[(x, y)]]\n", 173 | " pyx = 1.0 / ZX * math.exp(sum_)\n", 174 | " results.append((y, pyx))\n", 175 | " return results\n", 176 | " \n", 177 | " def _calc_model_exp(self):\n", 178 | " \"\"\"特征关于模型的期望\"\"\"\n", 179 | " self._model_exp = [0.0] * self._fea_N\n", 180 | " for sample in self._train_data:\n", 181 | " X = sample[1:]\n", 182 | " pyx = self._pyx(X)\n", 183 | " for y, p in pyx:\n", 184 | " for x in X:\n", 185 | " if (x, y) in 
self._xy2num:\n", 186 | " self._model_exp[self._xy2id[(x, y)]] += p * 1.0 / self._data_N\n", 187 | " \n", 188 | " def train(self, maxiter = 1000):\n", 189 | " self._init_params()\n", 190 | " for i in range(0, maxiter):\n", 191 | "# print(\"Iter:%d...\" % i)\n", 192 | " self._lastw = self._w[:] # 保存上一轮权值\n", 193 | " self._calc_model_exp()\n", 194 | " #更新每个特征的权值\n", 195 | " for i, w in enumerate(self._w):\n", 196 | " # 迭代式更新w\n", 197 | " self._w[i] += 1.0 / self._C * math.log(self._train_exp[i] / self._model_exp[i])\n", 198 | "# print(self._w)\n", 199 | " #检查是否收敛\n", 200 | " if self.is_convergence():\n", 201 | " break\n", 202 | " \n", 203 | " def predict(self, input_):\n", 204 | " X = input_.strip().split(\"\\t\")\n", 205 | " prob = self._pyx(X)\n", 206 | " return prob\n", 207 | " " 208 | ] 209 | }, 210 | { 211 | "cell_type": "markdown", 212 | "metadata": {}, 213 | "source": [ 214 | "训练数据来自各种天气情况下是否打球的例子。其中字段依次是:\n", 215 | "\n", 216 | "play / outlook / temperature / humidity / windy" 217 | ] 218 | }, 219 | { 220 | "cell_type": "code", 221 | "execution_count": 3, 222 | "metadata": {}, 223 | "outputs": [ 224 | { 225 | "name": "stdout", 226 | "output_type": "stream", 227 | "text": [ 228 | "[('yes', 0.0041626518719793), ('no', 0.9958373481280207)]\n", 229 | "[('yes', 0.9943682102360447), ('no', 0.00563178976395537)]\n", 230 | "[('yes', 1.4464465173635744e-07), ('no', 0.9999998553553482)]\n" 231 | ] 232 | } 233 | ], 234 | "source": [ 235 | "maxent = MaxEnt()\n", 236 | "maxent.load_data('max_entropy_data.txt')\n", 237 | "maxent.train()\n", 238 | "print(maxent.predict(\"sunny\\thot\\thigh\\tFALSE\"))\n", 239 | "print(maxent.predict(\"overcast\\thot\\thigh\\tFALSE\"))\n", 240 | "print(maxent.predict(\"sunny\\tcool\\thigh\\tTRUE\"))" 241 | ] 242 | }, 243 | { 244 | "cell_type": "markdown", 245 | "metadata": {}, 246 | "source": [ 247 | "## IIS算法\n", 248 | "\n", 249 | "IIS,Improved Iterative Scaling。GIS的收敛取决于$C$的取值,因此有了改进的IIS。\n", 250 | "\n", 251 | "IIS的想法是:假设最大熵模型当前的参数向量是$w=(w_1, ..., w_n)^T$,我们希望找到一个新的参数向量$w+\\delta=(w_1+\\delta_1, ..., w_n+\\delta_n)^T$,使得对数似然函数值能增加。\n", 252 | "\n", 253 | "$$L(w+\\delta)-L(w)=\\sum_{x,y}\\tilde P(x,y)\\sum^n_{i=1}\\delta_if_i(x,y)-\\sum_x\\tilde P(x)log\\frac{Z_{w+\\delta}(x)}{Z_w(x)}\\\\ \\geq\\sum_{x,y}\\tilde P(x,y)\\sum^n_{i=1}\\delta_if_i(x,y)+1-\\sum_x\\tilde P(x)\\frac{Z_{w+\\delta}(x)}{Z_w(x)}\\;\\;根据-log\\alpha \\geq 1-\\alpha\\\\=\\sum_{x,y}\\tilde P(x,y)\\sum^n_{i=1}\\delta_if_i(x,y)+1-\\sum_x\\tilde P(x)\\sum_yP_w(y|x)exp\\sum^n_{i=1}\\delta_if_i(x,y)\\;\\;记为A(\\delta|w)$$\n", 254 | "\n", 255 | "即,$A(\\delta|w)$是对数似然函数改变量的下界。IIS试图每一次只优化一个变量$\\delta_i$使$A(\\delta|w)$最大。\n", 256 | "\n", 257 | "引入新的量:$f^\\#(x,y) = \\sum_if_i(x,y)$,表示所有特征在$(x,y)$出现的次数。\n", 258 | "\n", 259 | "$A(\\delta|w)改写成\\;\\;\\sum_{x,y}\\tilde P(x,y)\\sum^n_{i=1}\\delta_if_i(x,y)+1-\\sum_x\\tilde P(x)\\sum_yP_w(y|x)exp(f^\\#(x,y)\\sum^n_{i=1}\\frac{\\delta_if_i(x,y)}{f^\\#(x,y)})$\n", 260 | "\n", 261 | "利用Jensen不等式,得到:\n", 262 | "\n", 263 | "$exp(\\sum^n_{i=1}\\frac{f_i(x,y)}{f^\\#(x,y)}\\delta_if^\\#(x,y)) \\leq \\sum^n_{i=1}\\frac{f_i(x,y)} {f^\\#(x,y)}exp(\\delta_if^\\#(x,y))$\n", 264 | "\n", 265 | "记$B(\\delta|w)=\\sum_{x,y}\\tilde P(x,y)\\sum^n_{i=1}\\delta_if_i(x,y)+1-\\sum_x\\tilde P(x)\\sum_yP_w(y|x)\\sum^n_{i=1}\\frac{f_i(x,y)} {f^\\#(x,y)}exp(\\delta_if^\\#(x,y))$\n", 266 | "\n", 267 | "求偏导:$\\frac{\\partial B(\\delta|w)}{\\partial \\delta_i} = \\sum_{x,y}\\tilde P(x,y)f_i(x,y)-\\sum_x\\tilde P(x)\\sum_yP_w(y|x)f_i(x,y)exp(\\delta_if^\\#(x,y))$\n", 268 | "\n", 269 | 
"令偏导为0,求得每次更新的$\\delta$。\n", 270 | "\n", 271 | "**IIS算法流程**:\n", 272 | "- 输入:特征函数$f_1, f_2, ..., f_n$。经验分布$\\tilde P(X, Y)$,模型$P_w(y|x)$。\n", 273 | "- 输出:最优参数值$w_i^*$,最优模型$P_w$。\n", 274 | "- 对所有$i \\in \\{1,2,..., n\\}$,取初值$w_i=0$。\n", 275 | "- 对每一$i \\in \\{1,2,..., n\\}$:\n", 276 | " - 令$\\delta_i$是方程:$\\sum_{x,y}\\tilde P(x,y)f_i(x,y)-\\sum_x\\tilde P(x)\\sum_yP_w(y|x)f_i(x,y)exp(\\delta_if^\\#(x,y))$的解。\n", 277 | " - 更新$w_i$的值为:$w_i + \\delta_i$。" 278 | ] 279 | }, 280 | { 281 | "cell_type": "code", 282 | "execution_count": null, 283 | "metadata": { 284 | "collapsed": true 285 | }, 286 | "outputs": [], 287 | "source": [] 288 | } 289 | ], 290 | "metadata": { 291 | "kernelspec": { 292 | "display_name": "Python [conda env:py36]", 293 | "language": "python", 294 | "name": "conda-env-py36-py" 295 | }, 296 | "language_info": { 297 | "codemirror_mode": { 298 | "name": "ipython", 299 | "version": 3 300 | }, 301 | "file_extension": ".py", 302 | "mimetype": "text/x-python", 303 | "name": "python", 304 | "nbconvert_exporter": "python", 305 | "pygments_lexer": "ipython3", 306 | "version": "3.6.3" 307 | } 308 | }, 309 | "nbformat": 4, 310 | "nbformat_minor": 2 311 | } 312 | -------------------------------------------------------------------------------- /2_LDA.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "# LDA主题模型学习总结\n", 10 | "\n", 11 | "`本篇博客是《LDA漫游指南》和《LDA数学八卦》的学习笔记。`\n", 12 | "\n", 13 | "## 目录\n", 14 | "\n", 15 | "- [简介](#简介)\n", 16 | " - [LDA算法输入与输出](#LDA算法输入与输出)\n", 17 | "- [前置知识](#前置知识)\n", 18 | " - [gamma函数](#gamma函数)\n", 19 | " - [二项分布](#二项分布)\n", 20 | " - [Beta分布](#Beta分布)\n", 21 | " - [多项分布](#多项分布)\n", 22 | " - [Dirichlet分布](#Dirichlet分布)\n", 23 | " - [共轭先验分布](#共轭先验分布)\n", 24 | " - [MCMC](#MCMC)\n", 25 | "- [LDA推导](#LDA推导)\n", 26 | " - [贝叶斯unigram](#贝叶斯unigram)\n", 27 | " - [LDA模型的标准生成过程](#LDA模型的标准生成过程)\n", 28 | " - [数学表示](#数学表示)\n", 29 | "- [交给Gibbs Sampling](#交给Gibbs-Sampling)\n", 30 | " - [最终的Gibbs Smapling公式](#最终的Gibbs-Smapling公式)\n", 31 | "- [LDA训练](#LDA训练)\n", 32 | "- [LDA的inference](#LDA的inference)\n", 33 | "- [LDA实现](#LDA实现)\n", 34 | " \n", 35 | "\n", 36 | "## 简介\n", 37 | "\n", 38 | "LDA(Latent Dirichlet Allocation)是一种**非监督**机器学习技术,可以用来识别大规模文档集或语料库中潜在隐藏的主题信息。\n", 39 | "\n", 40 | "LDA假设每个词是由背后的一个潜在隐藏的主题中抽取出来的,对于每篇文档,生成过程如下:\n", 41 | "- 1.对于每篇文档,从主题分布中抽取一个主题。\n", 42 | "- 2.从上述被抽到的主题所对应的单词分布中抽取一个单词。\n", 43 | "- 3.重复上述过程直到遍历文档中的每个单词。\n", 44 | "\n", 45 | "### LDA算法输入与输出\n", 46 | "- 输入:分词后的文章集。主题数$K$,超参数:$\\alpha$和$\\beta$。\n", 47 | "- 输出:\n", 48 | " - 1.每篇文章每个词被指定的主题编号。\n", 49 | " - 2.每篇文章的主题概率分布:$\\theta$\n", 50 | " - 3.每个主题下的词概率分布:$\\phi$\n", 51 | " - 4.词和id的映射表。\n", 52 | " - 5.每个主题$\\phi$下\n" 53 | ] 54 | }, 55 | { 56 | "cell_type": "markdown", 57 | "metadata": {}, 58 | "source": [ 59 | "## 前置知识\n", 60 | "\n", 61 | "### gamma函数\n", 62 | "\n", 63 | "所谓的gamma函数就是阶乘的函数形式。\n", 64 | "\n", 65 | "$$\\Gamma(x)=\\int_0^{+\\infty}e^{-t}t^{x-1}dt\\;\\;\\;(x>0)$$\n", 66 | "\n", 67 | "$$\\Gamma(n) = (n-1)!$$\n", 68 | "\n", 69 | "### 二项分布\n", 70 | "\n", 71 | "打靶,$n$次中中了$k$次的概率:\n", 72 | "\n", 73 | "$$f(k;n,p)=Pr(X=k)=\\binom{n}{k}p^k(1-p)^{n-k}$$\n", 74 | "\n", 75 | "### Beta分布\n", 76 | "\n", 77 | "$X\\sim Beta(\\alpha, \\beta)$\n", 78 | "\n", 79 | "概率密度函数:$$f(x;\\alpha, \\beta) = \\frac{\\Gamma(\\alpha+\\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)}x^{\\alpha-1}(1-x)^{\\beta-1}\\\\=\\frac{1}{B(\\alpha, 
\\beta)}x^{\\alpha-1}(1-x)^{\\beta-1}$$\n", 80 | "\n", 81 | "期望:$$E(p) = \\int_0^1t\\cdot Beta(t|\\alpha, \\beta)dt\\\\=\\frac{\\alpha}{\\alpha+\\beta}$$\n", 82 | "\n", 83 | "### 多项分布\n", 84 | "\n", 85 | "多项分布是二项分布的推广:投$n$次骰子,共有六种结果,概率为$p_i$,$i$点出现$x_i$次的组合概率:\n", 86 | "\n", 87 | "$$f(x_1, ...x_k;n,p_1,...,p_k)=Pr(X_1=x_1\\; and\\; ... and\\; X_k=x_k)\\\\=\\frac{n!}{x_1!...x_k!}p_1^{x_1}...p_k^{x_k}\\;\\;\\;when\\;\\sum_{i=1}^kx_i = n$$\n", 88 | "\n", 89 | "### Dirichlet分布\n", 90 | "\n", 91 | "$$p\\sim D(t|\\alpha)$$\n", 92 | "\n", 93 | "概率密度函数:$$f(p_1,..., p_k-1)=\\frac{1}{\\Delta (\\alpha)}\\prod_{i=1}^kp_i^{\\alpha_i-1}$$\n", 94 | "\n", 95 | "期望:$$E(p) = (\\frac{\\alpha_1}{\\sum_{i=1}^K\\alpha_i}, \\frac{\\alpha_2}{\\sum_{i=1}^K\\alpha_i}, ..., \\frac{\\alpha_K}{\\sum_{i=1}^K\\alpha_i})$$\n", 96 | "\n", 97 | "### 共轭先验分布\n", 98 | "\n", 99 | "贝叶斯公式:$$p(\\theta|x) = \\frac{p(x|\\theta)p(\\theta)}{p(x)}$$\n", 100 | "\n", 101 | "即:**后验分布=似然函数×先验分布**\n", 102 | "\n", 103 | "**共轭**:选取一个函数作为似然函数,使得先验分布函数和后验分布函数的形式一致。\n", 104 | "\n", 105 | "- beta分布是二项分布的共轭先验分布,即,二项分布作为似然函数,先验分布是beta分布,后验分布依然是beta分布。\n", 106 | "- Dirichlet分布是多项式分布的共轭先验分布,即,多项式布作为似然函数,先验分布是Dirichlet分布,后验分布依然是Dirichlet分布。\n", 107 | "\n", 108 | "### MCMC\n", 109 | "\n", 110 | "参考之前的博客:https://applenob.github.io/1_MCMC.html" 111 | ] 112 | }, 113 | { 114 | "cell_type": "markdown", 115 | "metadata": {}, 116 | "source": [ 117 | "## LDA推导\n", 118 | "\n", 119 | "### 贝叶斯unigram\n", 120 | "\n", 121 | "不考虑单词简单顺序,被称为“词袋模型”。\n", 122 | "\n", 123 | "$$P(W) = p(w_1)p(w_2)...p(w_n) = \\prod^V_{t=1}p_t^{n_t}\\;\\;\\;\\sum^V_{t=1}p_t = 1$$\n", 124 | "\n", 125 | "为什么似然是多项式分布?想象一个巨大的骰子,有$V$个面,每面代表一个词,每个面的概率是$\\vec{p}=(p_1, ...p_V)$,产生次数是:$\\vec{n} = (n_1, ..., n_V)$,那么生成某篇文章的概率是服从多项式分布的。\n", 126 | "\n", 127 | "贝叶斯学派认为参数也服从某种分布,即,不知道上帝用哪个骰子来生成文档,这个选取骰子的概率,服从Dirichlet分布。\n", 128 | "\n", 129 | "又有:$Dir(\\vec{p}|\\vec{\\alpha}) + MultCount(\\vec{n}) = Dir(\\vec{p}|\\vec{\\alpha}+\\vec{n})$,综合上面Dirichlet分布的期望,可以得到对于每一个$p_i$,可以如下**估计**:$\\tilde p_i = \\frac{n_i+\\alpha_i}{\\sum_{i=1}^V(n_i + \\alpha_i)}$。即,每个参数的估计值是其对应事件的先验的伪计数和数据中的计数的和在整体技术中的比例。\n", 130 | "\n", 131 | "进一步,计算出**文本语料的产生概率**是:$p(W|\\vec{\\alpha}) = \\int p(W|\\vec{p})p(\\vec{p}|\\vec{\\alpha})d\\vec{p}=\\frac{\\Delta(\\vec{n}+\\vec{\\alpha})}{\\Delta \\vec{\\alpha}}$\n", 132 | "\n", 133 | "![](https://github.com/applenob/machine_learning_basic/raw/master/res/bayes_unigram.png)\n", 134 | "\n", 135 | "用通俗的话说,这是上帝从一个坛子中抽一个骰子,再丢这个骰子,观察结果的过程。\n", 136 | "\n", 137 | "### LDA模型的标准生成过程\n", 138 | "\n", 139 | "LDA相当于两个上面的步骤的结合(两个坛子)。上帝有两个大坛子,第一个坛子装doc-topic骰子,第二个坛子装topic-word骰子:\n", 140 | "\n", 141 | "- 1.选择$\\theta_i \\sim Dir(\\vec{\\alpha})$,这里$i\\in\\{1,2,...,M\\}$,$M$代表文章数。每生成一篇文章,从第一个坛子中选一个doc-topic骰子。\n", 142 | "- 2.选择$\\phi_i \\sim Dir(\\vec{\\beta})$,这里$k \\in \\{1,2,...,K\\}$,$K$代表主题个数。独立地挑了$K$个topic-word骰子。\n", 143 | "- 3.对每个单词的位置$W_{i,j}$,这里$j \\in \\{1,...,N_i\\}$,$i \\in \\{1,...,M\\}$\n", 144 | " - 4.选择一个topic主题:$z_{i,j} \\sim Multinominal(\\theta_i)$。投掷这个doc-topic骰子,得到一个topic编号$z$。\n", 145 | " - 5.选择一个word词:$w_{i,j} \\sim Multinominal(\\phi_{z_{i,j}})$。投掷topic是$z$的topic-word骰子,得到一个词。\n", 146 | " \n", 147 | "![](https://github.com/applenob/machine_learning_basic/raw/master/res/lda.png)\n" 148 | ] 149 | }, 150 | { 151 | "cell_type": "markdown", 152 | "metadata": { 153 | "collapsed": true 154 | }, 155 | "source": [ 156 | "### 数学表示\n", 157 | "\n", 158 | "对每个doc-topic骰子,有:$p(\\vec z_m | \\vec \\alpha) = \\frac{\\Delta(\\vec n_m +\\vec \\alpha)}{\\Delta \\vec \\alpha}$\n", 159 | "\n", 160 | 
"其中:$\\vec n_m = (n_m^{(1)}, .., n_m^{(K)})$,$n_m^{(k)}$代表第$m$篇文档中第$k$个topic产生的词的个数。$\\vec \\alpha$是$K$维的向量。\n", 161 | "\n", 162 | "因为M篇文章生成topic的过程上**相互独立**的,有M个doc-topic骰子的联合概率分布,即**整个语料的topics生成概率**:$$p(\\vec z| \\vec \\alpha) = \\prod^M_{m=1}p(\\vec z_m|\\vec \\alpha)\\\\=\\prod^M_{m=1}\\frac{\\Delta(\\vec n_m + \\vec \\alpha)}{\\Delta \\vec \\alpha}$$\n", 163 | "\n", 164 | "对每topic-word骰子,有:$p(\\vec w_k|\\vec \\beta) = \\frac{\\Delta(\\vec n_k + \\vec \\beta)}{\\Delta \\vec \\beta}$\n", 165 | "\n", 166 | "其中:$\\vec n_k = (n_k^{(1)}, .., n_k^{(V)})$,$n_k^{(v)}$代表第$k$个topic产生的词中,第$v$个word产生的词的个数。$\\vec \\beta\n", 167 | "$是$V$维的向量。\n", 168 | "\n", 169 | "因为K个topic生成word的过程也是**相互独立**的,有K个topic-word骰子的联合概率分布,即**整个语料中words生成的概率**:$$p(\\vec w|\\vec z, \\vec \\beta)\\\\=\\prod_{k=1}^Kp(\\vec w_{(k)}|\\vec z_{(k)}, \\vec \\beta)\\\\=\\prod_{k=1}^K\\frac{\\Delta(\\vec n_k + \\vec \\beta)}{\\Delta \\vec \\beta}$$\n", 170 | "\n", 171 | "联合上面两个联合概率分布,得到**整个语料中words生成的概率**和**整个语料的topics生成概率**的**联合概率分布**:\n", 172 | "\n", 173 | "$$p(\\vec w, \\vec z| \\vec \\alpha, \\vec \\beta)\\\\=p(\\vec w|\\vec z, \\vec \\beta)p(\\vec z| \\vec \\alpha)\\\\=\\prod_{k=1}^K\\frac{\\Delta(\\vec n_k + \\vec \\beta)}{\\Delta \\vec \\beta}\\prod^M_{m=1}\\frac{\\Delta(\\vec n_m + \\vec \\alpha)}{\\Delta \\vec \\alpha}$$" 174 | ] 175 | }, 176 | { 177 | "cell_type": "markdown", 178 | "metadata": { 179 | "collapsed": true 180 | }, 181 | "source": [ 182 | "## 交给Gibbs Sampling\n", 183 | "\n", 184 | "Gibbs Smapling建议先回顾下之前的[博客文章](https://applenob.github.io/1_MCMC.html#Gibbs-Sampling)。\n", 185 | "\n", 186 | "有了联合分布$p(\\vec w, \\vec z)$,可以使用Gibbs Sampling了。\n", 187 | "\n", 188 | "我们的**终极目标**是:要使用一个马尔科夫链,sample出一些列的状态点,使得最终的平稳分布状态就是我们给定的联合概率分布。\n", 189 | "\n", 190 | "语料库中第$i$个词对应的topic记为$z_i$,其中$i=(m,n)$是一个二维下标,对应第$m$篇文档的第$n$个词。$-i$表示去除下标$i$的词。\n", 191 | "\n", 192 | "我们要采样的分布是$p(\\vec z| \\vec w)$,根据Gibbs Sampling的要求,我们要知道**完全条件概率(full conditionals)**,这里即:$p(z_i=k|\\vec z_{-i}, \\vec w)$。设观测到的词$w_i=t$,根据贝叶斯法则,有:$p(z_i=k|\\vec z_{-i}, \\vec w)\\propto p(z_i=k, w_i=t|\\vec z_{-i}, \\vec w_{-i})$\n", 193 | "\n", 194 | "**完整推导**:\n", 195 | "![](https://github.com/applenob/machine_learning_basic/raw/master/res/lda_gibbs.png)\n", 196 | "\n", 197 | "### 最终的Gibbs Smapling公式\n", 198 | "\n", 199 | "$$p(z_i=k|\\vec z_{-i}, \\vec w)\\propto \\frac{n^{k}_{m,-i}+\\alpha_k}{\\sum_{k=1}^K(n^{k}_{m,-i}+\\alpha_k)} \\cdot \\frac{n^{t}_{k,-i}+\\beta_t}{\\sum_{t=1}^V(n^{t}_{k,-i}+\\beta_t)}$$\n", 200 | "\n", 201 | "右边是$p(topic|doc)\\cdot p(word|topic)$,这个概率其实是$doc\\rightarrow topic \\rightarrow word$的路径概率。\n", 202 | "\n", 203 | "![](https://github.com/applenob/machine_learning_basic/raw/master/res/doc-topic-word.png)" 204 | ] 205 | }, 206 | { 207 | "cell_type": "markdown", 208 | "metadata": {}, 209 | "source": [ 210 | "## LDA训练\n", 211 | "\n", 212 | "LDA训练算法:\n", 213 | "- 1.随机初始化:对语料中每篇文档的每个词$w$,随机赋一个topic编号$z$。\n", 214 | "- 2.重新扫描语料库,对每个词$w$,按照Gibbs Sampling公式重新采样它的topic,在语料中进行更新。\n", 215 | "- 3.重复上面的过程直到Gibbs Sampling收敛。\n", 216 | "- 4.统计语料库的topic-word共现频率矩阵,该矩阵就是LDA模型。\n", 217 | "\n", 218 | "## LDA的inference\n", 219 | "\n", 220 | "LDA的inference:\n", 221 | "- 1.随机初始化:当前文档的每个词$w$,随机赋一个topic编号$z$。\n", 222 | "- 2.重新当前文档,对每个词$w$,按照Gibbs Sampling公式重新采样它的topic。\n", 223 | "- 3.重复上面的过程直到Gibbs Sampling收敛。\n", 224 | "- 4.统计当前文档中的topic分布,该分部就是$\\vec \\theta_{new}$。\n", 225 | "\n", 226 | "## LDA实现\n", 227 | "\n", 228 | "投骰子程序(累加法),参考[之前的博客](https://applenob.github.io/1_MCMC.html#离散分布采样):" 229 | ] 230 | }, 231 | { 232 | "cell_type": "code", 233 | 
"execution_count": 1, 234 | "metadata": { 235 | "collapsed": true 236 | }, 237 | "outputs": [], 238 | "source": [ 239 | "import numpy as np\n", 240 | "import time" 241 | ] 242 | }, 243 | { 244 | "cell_type": "code", 245 | "execution_count": 2, 246 | "metadata": { 247 | "collapsed": true 248 | }, 249 | "outputs": [], 250 | "source": [ 251 | "def sample_discrete(vec):\n", 252 | " if sum(vec) != 1:\n", 253 | " vec = vec / sum(vec)\n", 254 | " u = np.random.rand()\n", 255 | " start = 0\n", 256 | " for i, num in enumerate(vec): \n", 257 | " if u > start:\n", 258 | " start += num\n", 259 | " else:\n", 260 | " return i-1\n", 261 | " return i" 262 | ] 263 | }, 264 | { 265 | "cell_type": "markdown", 266 | "metadata": {}, 267 | "source": [ 268 | "将最终的Gibbs Sampling的公式换成代码里的变量:\n", 269 | "\n", 270 | "$$p(z_i=k|\\vec z_{-i}, \\vec w)\\propto \\frac{nd[m][k]+\\alpha}{ndsum[m]+K\\alpha} \\cdot \\frac{nw[wordid][k]+\\beta}{nwsum[k]+V\\beta}$$" 271 | ] 272 | }, 273 | { 274 | "cell_type": "code", 275 | "execution_count": 3, 276 | "metadata": { 277 | "collapsed": true 278 | }, 279 | "outputs": [], 280 | "source": [ 281 | "def lda_train(doc_set, word2id, K, alpha=1.0, beta=1.0, iter_number=200, with_debug_log=True, check_every=10):\n", 282 | " \"\"\"\n", 283 | " input:\n", 284 | " doc_set: 分词后的语料库。\n", 285 | " word2id: 单词到单词id的映射。\n", 286 | " K: 主题数。\n", 287 | " alpha: doc-topic先验参数。\n", 288 | " beta: topic-word先验参数。\n", 289 | " iter_num: 迭代次数。\n", 290 | " \n", 291 | " output:\n", 292 | " theta: size M×K(doc->topic)。\n", 293 | " phi: size K×V(topic->word)。\n", 294 | " tassign文件(topic assignment)。\n", 295 | " \n", 296 | " 重要变量:\n", 297 | " nw:size:V×K,表示第i个词assign到第j个topic的个数。\n", 298 | " nwsum:size:K,表示assign到第j个topic的所有词的个数。\n", 299 | " nd:size:M×K,表示第i个文档中,第j个topic的词出现的个数。\n", 300 | " ndsum:size:M,表示第i个文档中所有词的的个数。\n", 301 | " z:size:M×per_doc_word_len,表示第m篇文档的第n个word被指定的topic id。\n", 302 | " \"\"\"\n", 303 | " \n", 304 | " print(\"init ...\")\n", 305 | " M = np.shape(doc_set)[0]\n", 306 | " N = max(map(len, doc_set))\n", 307 | " V = len(word2id)\n", 308 | " nw = np.zeros((V, K), dtype=int)\n", 309 | " nwsum = np.zeros(K, dtype=int)\n", 310 | " nd = np.zeros((M, K), dtype=int)\n", 311 | " ndsum = np.zeros(M, dtype=int)\n", 312 | " z = np.zeros((M, N), dtype=int)\n", 313 | " theta = np.zeros((M, K))\n", 314 | " phi = np.zeros((K, V))\n", 315 | " \n", 316 | " # 初始化阶段\n", 317 | " for m, doc in enumerate(doc_set):\n", 318 | " for n, word in enumerate(doc):\n", 319 | " topic_id = np.random.randint(0, K) # 初始化阶段随机指定\n", 320 | " word_id = word2id[word]\n", 321 | " z[m][n] = topic_id # 将随机产生的主题存入z中\n", 322 | " nw[word_id][topic_id] += 1 # 为相应的统计量+1\n", 323 | " nwsum[topic_id] += 1\n", 324 | " nd[m][topic_id] += 1\n", 325 | " ndsum[m] += 1\n", 326 | " if with_debug_log:\n", 327 | " id2word = dict([(v, k) for k, v in word2id.items()])\n", 328 | " print(\"nw: \", nw, \"nd: \", nd, \"nwsum: \", nwsum, \"ndsum: \", ndsum)\n", 329 | " \n", 330 | " print(\"start iterating ...\")\n", 331 | " # Gibbs Sampling迭代阶段\n", 332 | " for one_iter in range(iter_number):\n", 333 | " ss = time.time()\n", 334 | " for m, doc in enumerate(doc_set):\n", 335 | " for n, word in enumerate(doc):\n", 336 | " word_id = word2id[word]\n", 337 | " t = z[m][n]\n", 338 | " nw[word_id][t] -= 1\n", 339 | " nwsum[t] -= 1\n", 340 | " nd[m][t] -= 1\n", 341 | " p = [0 for _ in range(K)] # 存放各主题概率\n", 342 | " for k in range(K):\n", 343 | " p[k] = float(nd[m][k] + alpha) / (ndsum[m] + K * alpha) * \\\n", 344 | " float(nw[word_id][k] + beta) / (nwsum[k] + V * 
beta)\n", 345 | " # 重新采样\n", 346 | " new_t = sample_discrete(p)\n", 347 | " z[m][n] = new_t\n", 348 | " nw[word_id][new_t] += 1\n", 349 | " nwsum[new_t] += 1\n", 350 | " nd[m][new_t] += 1\n", 351 | " if with_debug_log and (one_iter + 1) % check_every == 0:\n", 352 | " print(\"iter #\", one_iter, \" time cost: \", time.time() - ss)\n", 353 | " print(\"nw: \", nw, \"nd: \", nd, \"nwsum: \", nwsum, \"ndsum: \", ndsum)\n", 354 | " for m in range(M):\n", 355 | " for k in range(K):\n", 356 | " theta[m][k] = (nd[m][k] + alpha) / (ndsum[m] + K * alpha)\n", 357 | " for k in range(K):\n", 358 | " for v in range(V):\n", 359 | " phi[k][v] = (nw[v][k] + beta) / (nwsum[k] + V * beta)\n", 360 | " for one_topic in phi:\n", 361 | " print(one_topic.argsort()[: -21: -1])\n", 362 | " print(\" \".join([id2word[i] for i in one_topic.argsort()[: -21: -1]]))\n", 363 | " print(\"calculating theta and phi\")\n", 364 | " # 最后计算最终的theta和phi矩阵\n", 365 | " for m in range(M):\n", 366 | " for k in range(K):\n", 367 | " theta[m][k] = (nd[m][k] + alpha) / (ndsum[m] + K * alpha)\n", 368 | " for k in range(K):\n", 369 | " for v in range(V):\n", 370 | " phi[k][v] = (nw[v][k] + beta) / (nwsum[k] + V * beta)\n", 371 | "# print(\"z(tassign.txt): \", z)\n", 372 | " return z, theta, phi" 373 | ] 374 | }, 375 | { 376 | "cell_type": "markdown", 377 | "metadata": {}, 378 | "source": [ 379 | "## 数据预处理\n", 380 | "\n", 381 | "由于python版的程序运行较慢,为了演示效果,使用较少的数据,这里用100篇doc。" 382 | ] 383 | }, 384 | { 385 | "cell_type": "code", 386 | "execution_count": 4, 387 | "metadata": {}, 388 | "outputs": [ 389 | { 390 | "name": "stderr", 391 | "output_type": "stream", 392 | "text": [ 393 | "Building prefix dict from the default dictionary ...\n", 394 | "Loading model from cache /tmp/jieba.cache\n", 395 | "Loading model cost 0.732 seconds.\n", 396 | "Prefix dict has been built succesfully.\n" 397 | ] 398 | }, 399 | { 400 | "data": { 401 | "text/html": [ 402 | "
\n", 403 | "\n", 416 | "\n", 417 | " \n", 418 | " \n", 419 | " \n", 420 | " \n", 421 | " \n", 422 | " \n", 423 | " \n", 424 | " \n", 425 | " \n", 426 | " \n", 427 | " \n", 428 | " \n", 429 | " \n", 430 | " \n", 431 | " \n", 432 | " \n", 433 | " \n", 434 | " \n", 435 | " \n", 436 | " \n", 437 | " \n", 438 | " \n", 439 | " \n", 440 | " \n", 441 | " \n", 442 | " \n", 443 | " \n", 444 | " \n", 445 | " \n", 446 | " \n", 447 | " \n", 448 | " \n", 449 | " \n", 450 | " \n", 451 | " \n", 452 | " \n", 453 | " \n", 454 | " \n", 455 | " \n", 456 | " \n", 457 | "
contentclasscut_text
4117资料图:据台湾检方初步掌握资料显示,死者大多陈尸在游览车后座区,分析想从后座逃离,但无法破窗...新闻资料 图 : 据 台湾 检方 初步 掌握 资料 显示 , 死者 大多 陈尸 在 游览车 后座...
1244导语:“性冷淡风”风靡时髦客的时间很久依然受宠到极致,究其原因,除了简洁的外观外,还有一种在...时尚导语 : “ 性冷淡 风 ” 风靡 时髦 客 的 时间 很 久 依然 受宠 到 极致 , 究...
3131凤凰体育讯 北京时间7月19日晚,2016“意大利传奇巨星中国行”中意明星元老对抗赛荔波战在...体育凤凰 体育讯 北京 时间 7 月 19 日晚 , 2016 “ 意大利 传奇 巨星 中国...
1602作为盗墓笔记的前传,《老九门》昨晚终于播了。22点守候在电机前的哔宝看了首播只想说,好吓人,...娱乐作为 盗墓 笔记 的 前 传 , 《 老 九门 》 昨晚 终于 播 了 。 22 点 守候 ...
3964火箭熊凤凰体育讯 北京时间7月23日,根据ClutchFans报道,一位火箭内部的消息人士确...体育火箭 熊 凤凰 体育讯 北京 时间 7 月 23 日 , 根据 ClutchFans 报...
\n", 458 | "
" 459 | ], 460 | "text/plain": [ 461 | " content class \\\n", 462 | "4117 资料图:据台湾检方初步掌握资料显示,死者大多陈尸在游览车后座区,分析想从后座逃离,但无法破窗... 新闻 \n", 463 | "1244 导语:“性冷淡风”风靡时髦客的时间很久依然受宠到极致,究其原因,除了简洁的外观外,还有一种在... 时尚 \n", 464 | "3131 凤凰体育讯 北京时间7月19日晚,2016“意大利传奇巨星中国行”中意明星元老对抗赛荔波战在... 体育 \n", 465 | "1602 作为盗墓笔记的前传,《老九门》昨晚终于播了。22点守候在电机前的哔宝看了首播只想说,好吓人,... 娱乐 \n", 466 | "3964 火箭熊凤凰体育讯 北京时间7月23日,根据ClutchFans报道,一位火箭内部的消息人士确... 体育 \n", 467 | "\n", 468 | " cut_text \n", 469 | "4117 资料 图 : 据 台湾 检方 初步 掌握 资料 显示 , 死者 大多 陈尸 在 游览车 后座... \n", 470 | "1244 导语 : “ 性冷淡 风 ” 风靡 时髦 客 的 时间 很 久 依然 受宠 到 极致 , 究... \n", 471 | "3131 凤凰 体育讯 北京 时间 7 月 19 日晚 , 2016 “ 意大利 传奇 巨星 中国... \n", 472 | "1602 作为 盗墓 笔记 的 前 传 , 《 老 九门 》 昨晚 终于 播 了 。 22 点 守候 ... \n", 473 | "3964 火箭 熊 凤凰 体育讯 北京 时间 7 月 23 日 , 根据 ClutchFans 报... " 474 | ] 475 | }, 476 | "execution_count": 4, 477 | "metadata": {}, 478 | "output_type": "execute_result" 479 | } 480 | ], 481 | "source": [ 482 | "import numpy as np\n", 483 | "import pandas as pd\n", 484 | "import jieba\n", 485 | "from sklearn.feature_extraction.text import CountVectorizer\n", 486 | "copus_name = \"~/Data/nlp/fenghuang.csv\"\n", 487 | "copus_df = pd.read_csv(copus_name).sample(frac=0.02, replace=True)\n", 488 | "\n", 489 | "def cut_text_with_jieba(text):\n", 490 | " return \" \".join(jieba.cut(text, cut_all=False))\n", 491 | "\n", 492 | "copus_df[\"cut_text\"] = copus_df[\"content\"].apply(cut_text_with_jieba)\n", 493 | "copus_df.head()" 494 | ] 495 | }, 496 | { 497 | "cell_type": "code", 498 | "execution_count": 5, 499 | "metadata": {}, 500 | "outputs": [ 501 | { 502 | "data": { 503 | "text/plain": [ 504 | "(95, 3)" 505 | ] 506 | }, 507 | "execution_count": 5, 508 | "metadata": {}, 509 | "output_type": "execute_result" 510 | } 511 | ], 512 | "source": [ 513 | "copus_df.shape" 514 | ] 515 | }, 516 | { 517 | "cell_type": "code", 518 | "execution_count": 6, 519 | "metadata": {}, 520 | "outputs": [ 521 | { 522 | "name": "stdout", 523 | "output_type": "stream", 524 | "text": [ 525 | "8197 [('资料', 0), ('图', 1), (':', 2), ('据', 3), ('台湾', 4), ('检方', 5), ('初步', 6), ('掌握', 7), ('显示', 8), (',', 9)]\n" 526 | ] 527 | } 528 | ], 529 | "source": [ 530 | "word2id = {}\n", 531 | "for doc in copus_df[\"cut_text\"].values:\n", 532 | " for word in doc.split():\n", 533 | " if word not in word2id:\n", 534 | " word2id[word] = len(word2id)\n", 535 | "print(len(word2id), list(word2id.items())[:10])" 536 | ] 537 | }, 538 | { 539 | "cell_type": "code", 540 | "execution_count": 7, 541 | "metadata": {}, 542 | "outputs": [ 543 | { 544 | "data": { 545 | "text/plain": [ 546 | "['资料', '图', ':', '据', '台湾', '检方', '初步', '掌握', '资料', '显示']" 547 | ] 548 | }, 549 | "execution_count": 7, 550 | "metadata": {}, 551 | "output_type": "execute_result" 552 | } 553 | ], 554 | "source": [ 555 | "copus_list = copus_df[\"cut_text\"].apply(lambda doc: doc.split()).values\n", 556 | "copus_list[0][:10]" 557 | ] 558 | }, 559 | { 560 | "cell_type": "code", 561 | "execution_count": 8, 562 | "metadata": {}, 563 | "outputs": [ 564 | { 565 | "name": "stdout", 566 | "output_type": "stream", 567 | "text": [ 568 | "init ...\n", 569 | "start iterating ...\n", 570 | "time cost: 973.8627579212189\n" 571 | ] 572 | } 573 | ], 574 | "source": [ 575 | "s = time.time()\n", 576 | "z, theta, phi = lda_train(copus_list, word2id, 10, with_debug_log=False)\n", 577 | "# print(\"z: \", z, \"theta: \", theta, \"phi: \", phi)\n", 578 | "print(\"time cost: \", time.time() - s)" 579 | ] 580 | }, 581 | { 582 | "cell_type": "code", 583 | "execution_count": 9, 584 | "metadata": {}, 585 | 
"outputs": [ 586 | { 587 | "data": { 588 | "text/plain": [ 589 | "((95, 10), (10, 8197))" 590 | ] 591 | }, 592 | "execution_count": 9, 593 | "metadata": {}, 594 | "output_type": "execute_result" 595 | } 596 | ], 597 | "source": [ 598 | "theta.shape, phi.shape" 599 | ] 600 | }, 601 | { 602 | "cell_type": "code", 603 | "execution_count": 10, 604 | "metadata": { 605 | "collapsed": true 606 | }, 607 | "outputs": [], 608 | "source": [ 609 | "id2word = dict([(v, k) for k, v in word2id.items()])" 610 | ] 611 | }, 612 | { 613 | "cell_type": "code", 614 | "execution_count": 11, 615 | "metadata": {}, 616 | "outputs": [ 617 | { 618 | "name": "stdout", 619 | "output_type": "stream", 620 | "text": [ 621 | "[ 0.00308642 0.01234568 0.00925926 0.00308642 0.00925926 0.01851852\n", 622 | " 0.00308642 0.58641975 0.0308642 0.32407407] 7\n" 623 | ] 624 | } 625 | ], 626 | "source": [ 627 | "print(theta[0], np.argmax(theta[0]))" 628 | ] 629 | }, 630 | { 631 | "cell_type": "code", 632 | "execution_count": 12, 633 | "metadata": {}, 634 | "outputs": [ 635 | { 636 | "name": "stdout", 637 | "output_type": "stream", 638 | "text": [ 639 | "[ 267 82 7327 5828 3159 5827 2405 475 242 3198 290 259 311 233 1719\n", 640 | " 237 3167 253 7328 3162]\n", 641 | "- 、 伊斯 人队 布朗 76 号 赛季 元老 NBA 6 队 11 19 分 传奇 合同 巴乔 科 火箭队\n", 642 | "[2515 2516 2524 2521 2519 2517 2610 2583 2514 2496 2523 2636 2654 99 2844\n", 643 | " 2532 1937 675 2892 2996]\n", 644 | "高宗 孝宗 退位 内禅 皇帝 光宗 即位 太子 徽宗 父亲 生前 金国 宋高宗 与 去世 两宋 不确定性 命运 极端 严重\n", 645 | "[2323 1647 4289 4300 4341 4299 4304 477 45 5453 2536 5362 7016 3617 5449\n", 646 | " 1846 4303 6963 96 4308]\n", 647 | "系统 部署 萨德 反导 朝鲜 美韩 半岛 可能 月 类人 官方 盘初 裸色 打击 另一半 回答 利益 伤害 一份 坚决\n", 648 | "[1869 176 7448 1909 2313 1917 1912 7453 1930 1908 1929 7452 2215 2214 1949\n", 649 | " 1860 149 1820 1816 1822]\n", 650 | ", 公司 万科 地产 ; 保利 停牌 证监局 业务 中航 开发 深圳 设备 生产 整合 : 相关 唐纳 地图 记者会\n", 651 | "[3610 2354 3588 3587 3571 4390 3605 3931 1860 3639 1531 3901 897 5191 3937\n", 652 | " 6465 1220 4498 3612 3565]\n", 653 | "美元 / ) ( 原油 英镑 下跌 库存 : 汽油 至 贸易 报 指数 高位 新西兰 或 预期 反弹 油价\n", 654 | "[1446 7888 7050 4890 1180 7889 7051 1197 5002 1756 1224 361 505 7047 1276\n", 655 | " 4986 1345 1284 364 3388]\n", 656 | "长征 哈达铺 马思纯 将军 战狼 纪念馆 盛一伦 吴京 团 成员 演员 《 角色 侣 导演 参观 视频 动作 》 图为\n", 657 | "[4108 4105 5226 4106 4109 1913 6855 2346 4146 2362 4206 372 187 2332 2057\n", 658 | " 4121 4120 4119 4275 4129]\n", 659 | "全聚德 高管 关晓彤 集体 股价 公告 ILO 争议 请辞 高 式 哔宝 时髦 探索 上赛季 邢颖 总经理 王志强 腾讯 职务\n", 660 | "[ 9 103 27 13 368 48 50 82 166 203 45 2 139 472 172 177 44 95\n", 661 | " 99 127]\n", 662 | ", 的 。 在 了 “ ” 、 是 和 月 : 我 他 也 将 7 上 与 有\n", 663 | "[6743 8060 8061 1835 8062 6768 6750 4682 8101 6737 8092 6739 4685 3181 4690\n", 664 | " 4700 8065 3800 6744 2370]\n", 665 | "棋手 吉祥 宝宝 太平岛 食神 组 本赛 江启臣 厨神 杯 腊肠 预赛 冯世宽 东莞 防务 考察 三维动画 大战 普通 32\n", 666 | "[5746 5745 120 65 2215 5226 7500 7501 83 34 3619 8142 7519 70 7363\n", 667 | " 4562 8145 4070 33 813]\n", 668 | "充电 无线 技术 录音 设备 关晓彤 罹难者 家属 矮仔 事故 去年 电动汽车 理赔 肇事 肖天 装置 用电 能量 台 提升\n" 669 | ] 670 | } 671 | ], 672 | "source": [ 673 | "for one_topic in phi:\n", 674 | " print(one_topic.argsort()[: -21: -1])\n", 675 | " print(\" \".join([id2word[i] for i in one_topic.argsort()[: -21: -1]]))" 676 | ] 677 | }, 678 | { 679 | "cell_type": "markdown", 680 | "metadata": { 681 | "collapsed": true 682 | }, 683 | "source": [ 684 | "从这个Demo看,还是make sense的。" 685 | ] 686 | } 687 | ], 688 | "metadata": { 689 | "kernelspec": { 690 | "display_name": "Python [conda env:py36]", 691 | "language": "python", 692 | "name": "conda-env-py36-py" 693 | }, 694 | 
"language_info": { 695 | "codemirror_mode": { 696 | "name": "ipython", 697 | "version": 3 698 | }, 699 | "file_extension": ".py", 700 | "mimetype": "text/x-python", 701 | "name": "python", 702 | "nbconvert_exporter": "python", 703 | "pygments_lexer": "ipython3", 704 | "version": "3.6.3" 705 | } 706 | }, 707 | "nbformat": 4, 708 | "nbformat_minor": 1 709 | } 710 | -------------------------------------------------------------------------------- /3_Logistic_Regression.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "# Logistic Regression 学习总结" 10 | ] 11 | }, 12 | { 13 | "cell_type": "markdown", 14 | "metadata": {}, 15 | "source": [ 16 | "## 模型推导\n", 17 | "\n", 18 | "首先,LR是一个二分类器。\n", 19 | "\n", 20 | "直接给出Logistic Regression的模型:\n", 21 | "$$P(Y=1|x)=\\frac{exp(w\\cdot x)}{1+exp(w\\cdot x)}$$\n", 22 | "$$P(Y=0|x)=\\frac{1}{1+exp(w\\cdot x)}$$" 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "好吧,肯定有人觉得不高兴了这个模型难道是凭空而来的吗?当然不是,给你看个图:\n", 30 | "![](http://images.cnitblog.com/blog/573996/201310/26124600-9b795df815364c62aea84a0d88774f1b.png)\n", 31 | "这个就是传说中的Logistic函数,和我们模型的表达式是不是一毛一样?\n", 32 | "\n", 33 | "如果我们把$ w\\cdot x$理解成evidence,那么当我们获得evidence的时候,我想知道数据是否是属于某个类,我们把他扔进Logistic函数,就会出来一个0-1的值。evidence在某个范围的时候这个值,就会趋近于0,evidence在另外一个范围的时候,它就会趋近1,那这个值其实就可以认为是原始数据是否属于某个类的概率。\n", 34 | "\n", 35 | "以上,就是Logistic Regression的intuition。" 36 | ] 37 | }, 38 | { 39 | "cell_type": "markdown", 40 | "metadata": {}, 41 | "source": [ 42 | "OK,有了模型的表达式,接下来要估计模型的参数了。\n", 43 | "\n", 44 | "很自然的,我们想到了极大似然估计参数。\n", 45 | "\n", 46 | "对数似然函数:\n", 47 | "\n", 48 | "$$L(w)=log(\\prod^N_{i=1}[π(x_i)]^{y_i}[1-π(x_i)]^{1-y_i})$$\n", 49 | "$$=\\sum^N_{i=1}[y_ilogπ(x_i)+(1-y_i)log(1-π(x_i))]$$\n", 50 | "$$=\\sum^N_{i=1}[y_ilog\\frac{π(x_i)}{1-π(x_i)}+log(1-π(x_i))]$$\n", 51 | "$$=\\sum^N_{i=1}[y_i(w\\cdot x_i)-log(1+exp(w\\cdot x_i))]$$\n", 52 | "\n", 53 | "其中: $π(x_i)=P(Y=1|x_i)$\n", 54 | "\n", 55 | "算到这一步,就变成了最优化问题,对$L(w)$求极大值,得到$w$的估计值。具体的最优化求解过程,这篇文章暂且不提。" 56 | ] 57 | }, 58 | { 59 | "cell_type": "markdown", 60 | "metadata": {}, 61 | "source": [ 62 | "## 4.实现" 63 | ] 64 | }, 65 | { 66 | "cell_type": "code", 67 | "execution_count": 1, 68 | "metadata": {}, 69 | "outputs": [ 70 | { 71 | "name": "stderr", 72 | "output_type": "stream", 73 | "text": [ 74 | "Using gpu device 0: GeForce GTX 960M (CNMeM is disabled, cuDNN not available)\n" 75 | ] 76 | } 77 | ], 78 | "source": [ 79 | "import numpy\n", 80 | "import theano\n", 81 | "import theano.tensor as T\n", 82 | "def calc_acu(label, predict):\n", 83 | " comp = [1 if l==pre else 0 for l, pre in zip(label, predict)]\n", 84 | " return float(sum(comp))/len(comp)" 85 | ] 86 | }, 87 | { 88 | "cell_type": "code", 89 | "execution_count": 2, 90 | "metadata": {}, 91 | "outputs": [ 92 | { 93 | "name": "stdout", 94 | "output_type": "stream", 95 | "text": [ 96 | "target values for D:\n", 97 | "[1 0 0 1 0 0 1 1 0 1 0 1 1 1 1 1 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 1 1 0 0 1 1\n", 98 | " 0 0 0 0 1 0 1 1 0 1 1 0 1 1 1 0 1 0 0 1 1 0 1 1 1 0 1 0 0 0 0 1 1 1 0 0 1\n", 99 | " 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 1 0 0 0 0 0 1 1\n", 100 | " 1 1 1 0 1 0 0 1 0 0 0 0 1 1 0 0 1 1 0 1 0 0 0 0 0 1 1 0 1 0 0 0 0 1 1 1 1\n", 101 | " 0 0 1 0 0 0 1 0 1 0 1 1 0 1 0 0 0 0 1 0 1 1 0 0 0 0 1 0 1 1 1 0 1 0 0 1 0\n", 102 | " 1 1 0 1 0 0 1 1 1 0 0 1 0 1 1 1 0 0 0 0 0 1 0 1 1 0 0 0 1 0 0 1 1 1 0 0 
1\n", 103 | " 1 0 1 0 1 0 0 0 1 0 0 1 0 1 1 1 1 0 1 0 0 1 0 0 1 1 1 0 1 0 0 0 1 0 0 1 1\n", 104 | " 1 0 0 0 1 1 1 1 1 1 0 0 0 1 0 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1 1 0 1 1 1 1 1\n", 105 | " 0 0 0 1 1 1 0 0 0 1 1 0 1 0 0 0 0 0 0 0 0 0 1 0 1 1 1 1 0 1 1 1 0 0 1 0 1\n", 106 | " 1 1 1 0 1 1 0 0 1 1 1 1 1 0 1 1 0 0 1 1 1 1 0 1 1 1 0 0 0 0 1 1 0 1 1 1 0\n", 107 | " 1 1 0 0 1 0 0 1 0 1 0 1 1 1 1 0 1 1 1 0 0 0 0 0 0 1 1 0 1 1]\n", 108 | "prediction on D:\n", 109 | "[1 0 0 1 0 0 1 1 0 1 0 1 1 1 1 1 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 1 1 0 0 1 1\n", 110 | " 0 0 0 0 1 0 1 1 0 1 1 0 1 1 1 0 1 0 0 1 1 0 1 1 1 0 1 0 0 0 0 1 1 1 0 0 1\n", 111 | " 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0 0 1 0 0 0 0 0 1 1\n", 112 | " 1 1 1 0 1 0 0 1 0 0 0 0 1 1 0 0 1 1 0 1 0 0 0 0 0 1 1 0 1 0 0 0 0 1 1 1 1\n", 113 | " 0 0 1 0 0 0 1 0 1 0 1 1 0 1 0 0 0 0 1 0 1 1 0 0 0 0 1 0 1 1 1 0 1 0 0 1 0\n", 114 | " 1 1 0 1 0 0 1 1 1 0 0 1 0 1 1 1 0 0 0 0 0 1 0 1 1 0 0 0 1 0 0 1 1 1 0 0 1\n", 115 | " 1 0 1 0 1 0 0 0 1 0 0 1 0 1 1 1 1 0 1 0 0 1 0 0 1 1 1 0 1 0 0 0 1 0 0 1 1\n", 116 | " 1 0 0 0 1 1 1 1 1 1 0 0 0 1 0 1 1 1 0 0 0 1 1 1 0 0 0 1 1 1 1 0 1 1 1 1 1\n", 117 | " 0 0 0 1 1 1 0 0 0 1 1 0 1 0 0 0 0 0 0 0 0 0 1 0 1 1 1 1 0 1 1 1 0 0 1 0 1\n", 118 | " 1 1 1 0 1 1 0 0 1 1 1 1 1 0 1 1 0 0 1 1 1 1 0 1 1 1 0 0 0 0 1 1 0 1 1 1 0\n", 119 | " 1 1 0 0 1 0 0 1 0 1 0 1 1 1 1 0 1 1 1 0 0 0 0 0 0 1 1 0 1 1]\n", 120 | "accuracy:\n", 121 | "1.0\n" 122 | ] 123 | } 124 | ], 125 | "source": [ 126 | "\"\"\"\n", 127 | "code from http://deeplearning.net/software/theano/tutorial/examples.html\n", 128 | "\"\"\"\n", 129 | "rng = numpy.random\n", 130 | "\n", 131 | "N = 400 # training sample size\n", 132 | "feats = 784 # number of input variables\n", 133 | "\n", 134 | "# generate a dataset: D = (input_values, target_class)\n", 135 | "D = (rng.randn(N, feats), rng.randint(size=N, low=0, high=2))\n", 136 | "training_steps = 10000\n", 137 | "\n", 138 | "# Declare Theano symbolic variables\n", 139 | "x = T.dmatrix(\"x\")\n", 140 | "y = T.dvector(\"y\")\n", 141 | "\n", 142 | "# initialize the weight vector w randomly\n", 143 | "#\n", 144 | "# this and the following bias variable b\n", 145 | "# are shared so they keep their values\n", 146 | "# between training iterations (updates)\n", 147 | "w = theano.shared(rng.randn(feats), name=\"w\")\n", 148 | "\n", 149 | "# initialize the bias term\n", 150 | "b = theano.shared(0., name=\"b\")\n", 151 | "\n", 152 | "# print(\"Initial model:\")\n", 153 | "# print(w.get_value())\n", 154 | "# print(b.get_value())\n", 155 | "\n", 156 | "# Construct Theano expression graph\n", 157 | "p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b)) # Probability that target = 1\n", 158 | "prediction = p_1 > 0.5 # The prediction thresholded\n", 159 | "xent = -y * T.log(p_1) - (1-y) * T.log(1-p_1) # Cross-entropy loss function\n", 160 | "cost = xent.mean() + 0.01 * (w ** 2).sum()# The cost to minimize\n", 161 | "gw, gb = T.grad(cost, [w, b]) # Compute the gradient of the cost\n", 162 | " # w.r.t weight vector w and\n", 163 | " # bias term b\n", 164 | " # (we shall return to this in a\n", 165 | " # following section of this tutorial)\n", 166 | "\n", 167 | "# Compile\n", 168 | "train = theano.function(\n", 169 | " inputs=[x,y],\n", 170 | " outputs=[prediction, xent],\n", 171 | " updates=((w, w - 0.1 * gw), (b, b - 0.1 * gb)))\n", 172 | "predict = theano.function(inputs=[x], outputs=prediction)\n", 173 | "\n", 174 | "# Train\n", 175 | "for i in range(training_steps):\n", 176 | " pred, err = train(D[0], D[1])\n", 177 | "\n", 178 | "# print(\"Final 
model:\")\n", 179 | "# print(w.get_value())\n", 180 | "# print(b.get_value())\n", 181 | "print(\"target values for D:\")\n", 182 | "print(D[1])\n", 183 | "print(\"prediction on D:\")\n", 184 | "print(predict(D[0]))\n", 185 | "print(\"accuracy:\")\n", 186 | "print(calc_acu(D[1], predict(D[0])))" 187 | ] 188 | }, 189 | { 190 | "cell_type": "markdown", 191 | "metadata": {}, 192 | "source": [ 193 | "## Reference\n", 194 | "《统计学习方法》李航" 195 | ] 196 | } 197 | ], 198 | "metadata": { 199 | "anaconda-cloud": {}, 200 | "kernelspec": { 201 | "display_name": "Python [default]", 202 | "language": "python", 203 | "name": "python2" 204 | }, 205 | "language_info": { 206 | "codemirror_mode": { 207 | "name": "ipython", 208 | "version": 2 209 | }, 210 | "file_extension": ".py", 211 | "mimetype": "text/x-python", 212 | "name": "python", 213 | "nbconvert_exporter": "python", 214 | "pygments_lexer": "ipython2", 215 | "version": "2.7.14" 216 | } 217 | }, 218 | "nbformat": 4, 219 | "nbformat_minor": 1 220 | } 221 | -------------------------------------------------------------------------------- /4_theano_tutorial.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# My Theano Tutorial" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "## 1.简介" 15 | ] 16 | }, 17 | { 18 | "cell_type": "markdown", 19 | "metadata": {}, 20 | "source": [ 21 | "初次接触Theano的时候,完全没有明白为什么要把function的定义搞得这么麻烦。\n", 22 | "\n", 23 | "后来接触到了Tensorflow,突然想起来和Theano好像,在Theano中定义一个function和Tensorflow中跑一个session.run(),简直不要太像。都使用到了feed和fetch的概念。\n", 24 | "\n", 25 | "之前看cs224d的时候,助教把Tensorflow和Numpy进行类比,并说它们的很多架构很像。但上我觉得这是一种误导性的。因为实际上,Theano和Tensorflow都是一种符号计算框架,与其说像Numpy,莫不如说像SymPy。由于是符号运算,所以一些基础的运算,这些框架都要自己实现一遍。这可以给我们提供一个强大的功能,自动求导。\n", 26 | "\n", 27 | "根据链式法则,我们可以通过一个表达式的图模型,自动地推算出导数。大神colah这里又有[一篇博客](http://colah.github.io/posts/2015-08-Backprop/),再次生动形象地描绘了自动求导的机制。OK,说了这么多废话,接下来通过代码,很多来自[官网](http://deeplearning.net/software/theano/tutorial/),来对Theano有一个基本的了解。" 28 | ] 29 | }, 30 | { 31 | "cell_type": "markdown", 32 | "metadata": {}, 33 | "source": [ 34 | "## 2.basic use" 35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": 2, 40 | "metadata": { 41 | "collapsed": false 42 | }, 43 | "outputs": [ 44 | { 45 | "name": "stderr", 46 | "output_type": "stream", 47 | "text": [ 48 | "Using gpu device 0: GeForce GTX 960M (CNMeM is disabled, cuDNN not available)\n" 49 | ] 50 | } 51 | ], 52 | "source": [ 53 | "import numpy\n", 54 | "import theano\n", 55 | "import theano.tensor as T\n", 56 | "from theano import function " 57 | ] 58 | }, 59 | { 60 | "cell_type": "markdown", 61 | "metadata": {}, 62 | "source": [ 63 | "定义一个function:" 64 | ] 65 | }, 66 | { 67 | "cell_type": "code", 68 | "execution_count": 3, 69 | "metadata": { 70 | "collapsed": true 71 | }, 72 | "outputs": [], 73 | "source": [ 74 | "x = T.dscalar('x')\n", 75 | "y = T.dscalar('y')\n", 76 | "z = x + y\n", 77 | "f = function([x, y], z)" 78 | ] 79 | }, 80 | { 81 | "cell_type": "code", 82 | "execution_count": 4, 83 | "metadata": { 84 | "collapsed": false 85 | }, 86 | "outputs": [ 87 | { 88 | "data": { 89 | "text/plain": [ 90 | "array(5.0)" 91 | ] 92 | }, 93 | "execution_count": 4, 94 | "metadata": {}, 95 | "output_type": "execute_result" 96 | } 97 | ], 98 | "source": [ 99 | "f(2, 3)" 100 | ] 101 | }, 102 | { 103 | "cell_type": "markdown", 104 | "metadata": {}, 105 | "source": [ 106 | 
"解释下,上面的代码中,x,y被称为**符号变量**,dscalar中的d代表类型double,scalar代表数据是标量。其他的组合如下:\n", 107 | "\n", 108 | "- byte: bscalar, bvector, bmatrix, brow, bcol, btensor3, btensor4\n", 109 | "- 16-bit integers: wscalar, wvector, wmatrix, wrow, wcol, wtensor3, wtensor4\n", 110 | "- 32-bit integers: iscalar, ivector, imatrix, irow, icol, itensor3, itensor4\n", 111 | "- 64-bit integers: lscalar, lvector, lmatrix, lrow, lcol, ltensor3, ltensor4\n", 112 | "- float: fscalar, fvector, fmatrix, frow, fcol, ftensor3, ftensor4\n", 113 | "- double: dscalar, dvector, dmatrix, drow, dcol, dtensor3, dtensor4\n", 114 | "- complex: cscalar, cvector, cmatrix, crow, ccol, ctensor3, ctensor4\n", 115 | "\n", 116 | "稍微复杂一点,两个输出:" 117 | ] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "execution_count": 5, 122 | "metadata": { 123 | "collapsed": true 124 | }, 125 | "outputs": [], 126 | "source": [ 127 | "x = T.dscalar('x')\n", 128 | "y = T.dscalar('y')\n", 129 | "z = x + y\n", 130 | "n_x = -x\n", 131 | "f = function([x, y], [z,n_x])" 132 | ] 133 | }, 134 | { 135 | "cell_type": "code", 136 | "execution_count": 6, 137 | "metadata": { 138 | "collapsed": false 139 | }, 140 | "outputs": [ 141 | { 142 | "data": { 143 | "text/plain": [ 144 | "[array(5.0), array(-2.0)]" 145 | ] 146 | }, 147 | "execution_count": 6, 148 | "metadata": {}, 149 | "output_type": "execute_result" 150 | } 151 | ], 152 | "source": [ 153 | "f(2, 3)" 154 | ] 155 | }, 156 | { 157 | "cell_type": "markdown", 158 | "metadata": {}, 159 | "source": [ 160 | "将function以图片的方式输出,需要先安装pydot,graphviz" 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": 7, 166 | "metadata": { 167 | "collapsed": false 168 | }, 169 | "outputs": [ 170 | { 171 | "data": { 172 | "image/svg+xml": [ 173 | "\n", 174 | "\n", 175 | "G\n", 176 | "\n", 177 | "\n", 178 | "140437588396816\n", 179 | "\n", 180 | "Elemwise{neg,no_inplace}\n", 181 | "\n", 182 | "\n", 183 | "140437588397008\n", 184 | "\n", 185 | "TensorType(float64, scalar)\n", 186 | "\n", 187 | "\n", 188 | "140437588396816->140437588397008\n", 189 | "\n", 190 | "\n", 191 | "\n", 192 | "\n", 193 | "140437588394128\n", 194 | "\n", 195 | "name=x TensorType(float64, scalar)\n", 196 | "\n", 197 | "\n", 198 | "140437588394128->140437588396816\n", 199 | "\n", 200 | "\n", 201 | "\n", 202 | "\n", 203 | "140437588394384\n", 204 | "\n", 205 | "Elemwise{add,no_inplace}\n", 206 | "\n", 207 | "\n", 208 | "140437588394128->140437588394384\n", 209 | "\n", 210 | "\n", 211 | "0\n", 212 | "\n", 213 | "\n", 214 | "140437588394448\n", 215 | "\n", 216 | "TensorType(float64, scalar)\n", 217 | "\n", 218 | "\n", 219 | "140437588394384->140437588394448\n", 220 | "\n", 221 | "\n", 222 | "\n", 223 | "\n", 224 | "140437588394192\n", 225 | "\n", 226 | "name=y TensorType(float64, scalar)\n", 227 | "\n", 228 | "\n", 229 | "140437588394192->140437588394384\n", 230 | "\n", 231 | "\n", 232 | "1\n", 233 | "\n", 234 | "\n", 235 | "" 236 | ], 237 | "text/plain": [ 238 | "" 239 | ] 240 | }, 241 | "execution_count": 7, 242 | "metadata": {}, 243 | "output_type": "execute_result" 244 | } 245 | ], 246 | "source": [ 247 | "from IPython.display import SVG\n", 248 | "SVG(theano.printing.pydotprint(f, return_image=True,\n", 249 | " format='svg'))" 250 | ] 251 | }, 252 | { 253 | "cell_type": "markdown", 254 | "metadata": { 255 | "collapsed": false 256 | }, 257 | "source": [ 258 | "使用In类设置参数的默认值:" 259 | ] 260 | }, 261 | { 262 | "cell_type": "code", 263 | "execution_count": 10, 264 | "metadata": { 265 | "collapsed": false 266 | }, 267 | "outputs": [ 268 | { 269 | 
"data": { 270 | "text/plain": [ 271 | "array(4.0)" 272 | ] 273 | }, 274 | "execution_count": 10, 275 | "metadata": {}, 276 | "output_type": "execute_result" 277 | } 278 | ], 279 | "source": [ 280 | "from theano import In\n", 281 | "x, y, w = T.dscalars('x', 'y', 'w')\n", 282 | "z = (x + y) * w\n", 283 | "f = function([x, In(y, value=1), In(w, value=2, name='w_by_name')], z)\n", 284 | "f(1)" 285 | ] 286 | }, 287 | { 288 | "cell_type": "markdown", 289 | "metadata": {}, 290 | "source": [ 291 | "## 3.共享变量 shared variable\n", 292 | "共享变量同样存在于Tensorflow,我认为是一个比较重要的知识点。\n", 293 | "\n", 294 | "tutorial中的解释是:共享变量是符号变量和非符号变量的混合体,它可以在多个function中共享。可以通过get_value和set_value来获取和设置共享变量的值。\n", 295 | "\n", 296 | "你可以像使用其他符号变量一样在表达式中使用共享变量,也可以像使用非符号变量一样,获取共享变量的值。" 297 | ] 298 | }, 299 | { 300 | "cell_type": "code", 301 | "execution_count": 12, 302 | "metadata": { 303 | "collapsed": true 304 | }, 305 | "outputs": [], 306 | "source": [ 307 | "from theano import shared\n", 308 | "state = shared(0)\n", 309 | "inc = T.iscalar('inc')\n", 310 | "accumulator = function([inc], state, updates=[(state, state+inc)])" 311 | ] 312 | }, 313 | { 314 | "cell_type": "markdown", 315 | "metadata": {}, 316 | "source": [ 317 | "这里的function用到了**updates**参数,updates参数使用下面的形式:(shared-variable, new expression)" 318 | ] 319 | }, 320 | { 321 | "cell_type": "code", 322 | "execution_count": 13, 323 | "metadata": { 324 | "collapsed": false 325 | }, 326 | "outputs": [ 327 | { 328 | "data": { 329 | "image/svg+xml": [ 330 | "\n", 331 | "\n", 332 | "G\n", 333 | "\n", 334 | "\n", 335 | "140438391472976\n", 336 | "\n", 337 | "Elemwise{add,no_inplace}\n", 338 | "\n", 339 | "\n", 340 | "140438391472400\n", 341 | "\n", 342 | "TensorType(int64, scalar)\n", 343 | "\n", 344 | "\n", 345 | "140438391472976->140438391472400\n", 346 | "\n", 347 | "\n", 348 | "\n", 349 | "\n", 350 | "140438391472464\n", 351 | "\n", 352 | "TensorType(int64, scalar)\n", 353 | "\n", 354 | "\n", 355 | "140438391472464->140438391472976\n", 356 | "\n", 357 | "\n", 358 | "0\n", 359 | "\n", 360 | "\n", 361 | "140438391473040\n", 362 | "\n", 363 | "name=inc TensorType(int32, scalar)\n", 364 | "\n", 365 | "\n", 366 | "140438391473040->140438391472976\n", 367 | "\n", 368 | "\n", 369 | "1\n", 370 | "\n", 371 | "\n", 372 | "140438391472400->140438391472464\n", 373 | "\n", 374 | "\n", 375 | "UPDATE\n", 376 | "\n", 377 | "\n", 378 | "" 379 | ], 380 | "text/plain": [ 381 | "" 382 | ] 383 | }, 384 | "execution_count": 13, 385 | "metadata": {}, 386 | "output_type": "execute_result" 387 | } 388 | ], 389 | "source": [ 390 | "SVG(theano.printing.pydotprint(accumulator, return_image=True,\n", 391 | " format='svg'))" 392 | ] 393 | }, 394 | { 395 | "cell_type": "code", 396 | "execution_count": 15, 397 | "metadata": { 398 | "collapsed": false 399 | }, 400 | "outputs": [ 401 | { 402 | "name": "stdout", 403 | "output_type": "stream", 404 | "text": [ 405 | "0\n", 406 | "2\n", 407 | "4\n", 408 | "6\n", 409 | "8\n", 410 | "10\n", 411 | "12\n", 412 | "14\n", 413 | "16\n", 414 | "18\n" 415 | ] 416 | } 417 | ], 418 | "source": [ 419 | "for i in range(10):\n", 420 | " ans = accumulator(2)\n", 421 | " print ans" 422 | ] 423 | }, 424 | { 425 | "cell_type": "code", 426 | "execution_count": 17, 427 | "metadata": { 428 | "collapsed": false 429 | }, 430 | "outputs": [ 431 | { 432 | "name": "stdout", 433 | "output_type": "stream", 434 | "text": [ 435 | "38\n", 436 | "36\n", 437 | "34\n", 438 | "32\n", 439 | "30\n", 440 | "28\n", 441 | "26\n", 442 | "24\n", 443 | "22\n", 444 | "20\n" 445 | ] 446 | } 447 | ], 448 | 
"source": [ 449 | "# 在多个function中使用同一个共享变量\n", 450 | "decrementor = function([inc], state, updates=[(state, state-inc)])\n", 451 | "for i in range(10):\n", 452 | " ans = decrementor(2)\n", 453 | " print state.get_value()" 454 | ] 455 | }, 456 | { 457 | "cell_type": "markdown", 458 | "metadata": {}, 459 | "source": [ 460 | "再来看一个function中的重要参数**givens**。givens常常使用在你需要替换掉表达式的某些特定节点,当然替换的对象必须是符号变量或者共享变量。" 461 | ] 462 | }, 463 | { 464 | "cell_type": "code", 465 | "execution_count": 25, 466 | "metadata": { 467 | "collapsed": false 468 | }, 469 | "outputs": [ 470 | { 471 | "name": "stdout", 472 | "output_type": "stream", 473 | "text": [ 474 | "state = 20\n", 475 | "state * 2 + inc = 41\n" 476 | ] 477 | } 478 | ], 479 | "source": [ 480 | "print \"state = \", state.get_value()\n", 481 | "fn_of_state = state * 2 + inc\n", 482 | "use_shared = function([inc], fn_of_state)\n", 483 | "print \"state * 2 + inc = \", use_shared(1)" 484 | ] 485 | }, 486 | { 487 | "cell_type": "markdown", 488 | "metadata": {}, 489 | "source": [ 490 | "上面的fn_of_state定义了一个新的表达式,使用了之前定义的共享变量state。但在这里的应用场景下,我只想再次使用这个表达式,而不是使用共享变量中的值。" 491 | ] 492 | }, 493 | { 494 | "cell_type": "code", 495 | "execution_count": 26, 496 | "metadata": { 497 | "collapsed": false 498 | }, 499 | "outputs": [ 500 | { 501 | "name": "stdout", 502 | "output_type": "stream", 503 | "text": [ 504 | "foo * 2 + inc = 7\n" 505 | ] 506 | } 507 | ], 508 | "source": [ 509 | "foo = T.scalar(dtype=state.dtype)\n", 510 | "skip_shared = function([inc, foo], fn_of_state, givens=[(state, foo)])\n", 511 | "print \"foo * 2 + inc = \", skip_shared(1, 3) " 512 | ] 513 | }, 514 | { 515 | "cell_type": "markdown", 516 | "metadata": {}, 517 | "source": [ 518 | "## 4. copy函数\n", 519 | "我们调用function的copy函数,来拷贝这个function,swap参数可以指定新的共享变量。" 520 | ] 521 | }, 522 | { 523 | "cell_type": "code", 524 | "execution_count": 28, 525 | "metadata": { 526 | "collapsed": false 527 | }, 528 | "outputs": [ 529 | { 530 | "name": "stdout", 531 | "output_type": "stream", 532 | "text": [ 533 | "100\n" 534 | ] 535 | } 536 | ], 537 | "source": [ 538 | "new_state = theano.shared(0)\n", 539 | "new_accumulator = accumulator.copy(swap={state:new_state})\n", 540 | "new_accumulator(100)\n", 541 | "print new_state.get_value()" 542 | ] 543 | }, 544 | { 545 | "cell_type": "markdown", 546 | "metadata": {}, 547 | "source": [ 548 | "## 5. 
求导\n" 549 | ] 550 | }, 551 | { 552 | "cell_type": "code", 553 | "execution_count": 34, 554 | "metadata": { 555 | "collapsed": false 556 | }, 557 | "outputs": [ 558 | { 559 | "data": { 560 | "text/plain": [ 561 | "'((fill((x ** TensorConstant{2}), TensorConstant{1.0}) * TensorConstant{2}) * (x ** (TensorConstant{2} - TensorConstant{1})))'" 562 | ] 563 | }, 564 | "execution_count": 34, 565 | "metadata": {}, 566 | "output_type": "execute_result" 567 | } 568 | ], 569 | "source": [ 570 | "from theano import pp\n", 571 | "x = T.dscalar('x')\n", 572 | "y = x ** 2\n", 573 | "gy = T.grad(y, x)\n", 574 | "pp(gy)" 575 | ] 576 | }, 577 | { 578 | "cell_type": "code", 579 | "execution_count": 37, 580 | "metadata": { 581 | "collapsed": false 582 | }, 583 | "outputs": [ 584 | { 585 | "data": { 586 | "text/plain": [ 587 | "'(TensorConstant{2.0} * x)'" 588 | ] 589 | }, 590 | "execution_count": 37, 591 | "metadata": {}, 592 | "output_type": "execute_result" 593 | } 594 | ], 595 | "source": [ 596 | "pp(f.maker.fgraph.outputs[0])" 597 | ] 598 | }, 599 | { 600 | "cell_type": "code", 601 | "execution_count": 35, 602 | "metadata": { 603 | "collapsed": false 604 | }, 605 | "outputs": [ 606 | { 607 | "name": "stdout", 608 | "output_type": "stream", 609 | "text": [ 610 | "gy = 2 * x: 8.0\n" 611 | ] 612 | } 613 | ], 614 | "source": [ 615 | "f = theano.function([x], gy)\n", 616 | "print \"gy = 2 * x: \",f(4)" 617 | ] 618 | }, 619 | { 620 | "cell_type": "code", 621 | "execution_count": null, 622 | "metadata": { 623 | "collapsed": false 624 | }, 625 | "outputs": [], 626 | "source": [] 627 | }, 628 | { 629 | "cell_type": "code", 630 | "execution_count": null, 631 | "metadata": { 632 | "collapsed": true 633 | }, 634 | "outputs": [], 635 | "source": [] 636 | } 637 | ], 638 | "metadata": { 639 | "anaconda-cloud": {}, 640 | "kernelspec": { 641 | "display_name": "Python [default]", 642 | "language": "python", 643 | "name": "python2" 644 | }, 645 | "language_info": { 646 | "codemirror_mode": { 647 | "name": "ipython", 648 | "version": 2 649 | }, 650 | "file_extension": ".py", 651 | "mimetype": "text/x-python", 652 | "name": "python", 653 | "nbconvert_exporter": "python", 654 | "pygments_lexer": "ipython2", 655 | "version": "2.7.12" 656 | } 657 | }, 658 | "nbformat": 4, 659 | "nbformat_minor": 1 660 | } 661 | -------------------------------------------------------------------------------- /5_HMM.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# 隐马尔科夫模型(HMM)及其Python实现\n", 8 | "\n", 9 | "## 目录\n", 10 | "\n", 11 | "- [1.基础介绍](#1.基础介绍)\n", 12 | " - [形式定义](#形式定义)\n", 13 | " - [隐马尔科夫模型的两个基本假设](#隐马尔科夫模型的两个基本假设)\n", 14 | " - [一个关于感冒的实例](#一个关于感冒的实例)\n", 15 | "- [2.HMM的三个问题](#2.HMM的三个问题)\n", 16 | " - [2.1概率计算问题](#2.1概率计算问题)\n", 17 | " - [2.2学习问题](#2.2学习问题)\n", 18 | " - [2.3预测问题](#2.3预测问题)\n", 19 | "- [3.完整代码](#3.完整代码)" 20 | ] 21 | }, 22 | { 23 | "cell_type": "markdown", 24 | "metadata": {}, 25 | "source": [ 26 | "## 1.基础介绍\n", 27 | "\n", 28 | "首先看下模型结构,对模型有一个直观的概念:\n", 29 | "![](http://img.my.csdn.net/uploads/201304/24/1366772946_8884.png)\n", 30 | "\n", 31 | "描述下这个图:\n", 32 | "\n", 33 | "分成两排,第一排是$y$序列,第二排是$x$序列。每个$x$都只有一个$y$指向它,每个$y$也都有另一个$y$指向它。\n" 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "metadata": {}, 39 | "source": [ 40 | "OK,直觉上的东西说完了,下面给出定义(参考《统计学习方法》):\n", 41 | "* **状态序列(上图中的$y$,下面的$I$)**:\n", 42 | "隐藏的马尔科夫链随机生成的状态序列,称为状态序列(state sequence)\n", 43 | "* 
**观测序列(上图中的$x$,下面的$O$)**:\n", 44 | "每个状态生成一个观测,而由此产生的观测的随机序列,称为观测序列(obeservation sequence)\n", 45 | "* **马尔科夫模型**:\n", 46 | "马尔科夫模型是关于时序的概率模型,描述由一个隐藏的马尔科夫链**随机生成不可观测的状态随机序列,再由各个状态生成一个观测而产生观测随机序列**的过程。\n", 47 | "\n", 48 | "### 形式定义\n", 49 | "\n", 50 | "设$Q$是所有可能的状态的集合,$V$是所有可能的观测的集合。\n", 51 | "\n", 52 | "$Q={q_1,q_2,...,q_N},V={v_1,v_2,...,v_M}$\n", 53 | "\n", 54 | "其中,$N$是可能的状态数,$M$是可能的观测数。\n", 55 | "\n", 56 | "$I$是长度为$T$的状态序列,$O$是对应的观测序列。\n", 57 | "\n", 58 | "$I=(i_1,i_2,...,i_T),O=(o_1,o_2,...,o_T)$\n", 59 | "\n", 60 | "**A是状态转移矩阵**:$A=[a_{ij}]_{N×N}$\n", 61 | "\n", 62 | "$i=1,2,...,N; j=1,2,...,N$\n", 63 | "\n", 64 | "其中,在时刻$t$,处于$q_i$ 状态的条件下在时刻$t+1$转移到状态$q_j$ 的概率:\n", 65 | "\n", 66 | "$a_{ij}=P(i_{t+1}=q_j|i_t=q_i)$\n", 67 | "\n", 68 | "**B是观测概率矩阵**:$B=[b_j(k)]_{N×M}$\n", 69 | "\n", 70 | "$k=1,2,...,M; j=1,2,...,N$\n", 71 | "\n", 72 | "其中,在时刻$t$处于状态$q_j$ 的条件下生成观测$v_k$ 的概率:\n", 73 | "\n", 74 | "$b_j(k)=P(o_t=v_k|i_t=q_j)$\n", 75 | "\n", 76 | "**π是初始状态概率向量**:$π=(π_i)$\n", 77 | "\n", 78 | "其中,$π_i=P(i_1=q_i)$\n", 79 | "\n", 80 | "隐马尔科夫模型由初始状态概率向量$π$、状态转移概率矩阵A和观测概率矩阵$B$决定。$π$和$A$决定状态序列,$B$决定观测序列。因此,隐马尔科夫模型$λ$可以由三元符号表示,即:$λ=(A,B,π)$。$A,B,π$称为隐马尔科夫模型的**三要素**。\n", 81 | "\n", 82 | "### 隐马尔科夫模型的两个基本假设\n", 83 | "\n", 84 | "(1):设隐马尔科夫链在任意时刻$t$的状态只依赖于**其前一时刻**的状态,与其他时刻的状态及观测无关,也与时刻$t$无关。(**齐次马尔科夫性假设**)\n", 85 | "\n", 86 | "(2):假设任意时刻的观测只依赖于该时刻的马尔科夫链的状态,与其他观测和状态无关。(**观测独立性假设**)\n" 87 | ] 88 | }, 89 | { 90 | "cell_type": "markdown", 91 | "metadata": {}, 92 | "source": [ 93 | "### 一个关于感冒的实例\n", 94 | "\n", 95 | "定义讲完了,举个实例,参考hankcs和知乎上的**感冒预测**的例子(实际上都是来自wikipidia: https://en.wikipedia.org/wiki/Viterbi_algorithm#Example ),这里我用最简单的语言去描述。\n", 96 | "\n", 97 | "假设你是一个医生,眼前有个病人,你的任务是确定他是否得了感冒。\n", 98 | "\n", 99 | "- 首先,病人的状态($Q$)只有两种:{感冒,没有感冒}。\n", 100 | "- 然后,病人的感觉(观测$V$)有三种:{正常,冷,头晕}。\n", 101 | "- 手头有病人的病例,你可以从病例的第一天确定$π$(初始状态概率向量);\n", 102 | "- 然后根据其他病例信息,确定$A$(状态转移矩阵)也就是病人某天是否感冒和他第二天是否感冒的关系;\n", 103 | "- 还可以确定$B$(观测概率矩阵)也就是病人某天是什么感觉和他那天是否感冒的关系。\n", 104 | "\n", 105 | "![](https://raw.githubusercontent.com/applenob/machine_learning_basic/master/res/hmm.jpg)" 106 | ] 107 | }, 108 | { 109 | "cell_type": "code", 110 | "execution_count": 1, 111 | "metadata": { 112 | "collapsed": true 113 | }, 114 | "outputs": [], 115 | "source": [ 116 | "import numpy as np" 117 | ] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "execution_count": 2, 122 | "metadata": { 123 | "collapsed": true 124 | }, 125 | "outputs": [], 126 | "source": [ 127 | "# 对应状态集合Q\n", 128 | "states = ('Healthy', 'Fever')\n", 129 | "# 对应观测集合V\n", 130 | "observations = ('normal', 'cold', 'dizzy')\n", 131 | "# 初始状态概率向量π\n", 132 | "start_probability = {'Healthy': 0.6, 'Fever': 0.4}\n", 133 | "# 状态转移矩阵A\n", 134 | "transition_probability = {\n", 135 | " 'Healthy': {'Healthy': 0.7, 'Fever': 0.3},\n", 136 | " 'Fever': {'Healthy': 0.4, 'Fever': 0.6},\n", 137 | "}\n", 138 | "# 观测概率矩阵B\n", 139 | "emission_probability = {\n", 140 | " 'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},\n", 141 | " 'Fever': {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6},\n", 142 | "}" 143 | ] 144 | }, 145 | { 146 | "cell_type": "code", 147 | "execution_count": 3, 148 | "metadata": { 149 | "collapsed": true 150 | }, 151 | "outputs": [], 152 | "source": [ 153 | "# 随机生成观测序列和状态序列 \n", 154 | "def simulate(T):\n", 155 | "\n", 156 | " def draw_from(probs):\n", 157 | " \"\"\"\n", 158 | " 1.np.random.multinomial:\n", 159 | " 按照多项式分布,生成数据\n", 160 | " >>> np.random.multinomial(20, [1/6.]*6, size=2)\n", 161 | " array([[3, 4, 3, 3, 4, 3],\n", 162 | " [2, 4, 3, 4, 0, 7]])\n", 163 | " For the 
first run, we threw 3 times 1, 4 times 2, etc. \n", 164 | " For the second, we threw 2 times 1, 4 times 2, etc.\n", 165 | " 2.np.where:\n", 166 | " >>> x = np.arange(9.).reshape(3, 3)\n", 167 | " >>> np.where( x > 5 )\n", 168 | " (array([2, 2, 2]), array([0, 1, 2]))\n", 169 | " \"\"\"\n", 170 | " return np.where(np.random.multinomial(1,probs) == 1)[0][0]\n", 171 | "\n", 172 | " observations = np.zeros(T, dtype=int)\n", 173 | " states = np.zeros(T, dtype=int)\n", 174 | " states[0] = draw_from(pi)\n", 175 | " observations[0] = draw_from(B[states[0],:])\n", 176 | " for t in range(1, T):\n", 177 | " states[t] = draw_from(A[states[t-1],:])\n", 178 | " observations[t] = draw_from(B[states[t],:])\n", 179 | " return observations, states" 180 | ] 181 | }, 182 | { 183 | "cell_type": "code", 184 | "execution_count": 4, 185 | "metadata": {}, 186 | "outputs": [ 187 | { 188 | "name": "stdout", 189 | "output_type": "stream", 190 | "text": [ 191 | "{0: 'Healthy', 1: 'Fever'} {'Healthy': 0, 'Fever': 1}\n", 192 | "{0: 'normal', 1: 'cold', 2: 'dizzy'} {'normal': 0, 'cold': 1, 'dizzy': 2}\n" 193 | ] 194 | } 195 | ], 196 | "source": [ 197 | "def generate_index_map(lables):\n", 198 | " id2label = {}\n", 199 | " label2id = {}\n", 200 | " i = 0\n", 201 | " for l in lables:\n", 202 | " id2label[i] = l\n", 203 | " label2id[l] = i\n", 204 | " i += 1\n", 205 | " return id2label, label2id\n", 206 | " \n", 207 | "states_id2label, states_label2id = generate_index_map(states)\n", 208 | "observations_id2label, observations_label2id = generate_index_map(observations)\n", 209 | "print(states_id2label, states_label2id)\n", 210 | "print(observations_id2label, observations_label2id)" 211 | ] 212 | }, 213 | { 214 | "cell_type": "code", 215 | "execution_count": 5, 216 | "metadata": { 217 | "collapsed": true 218 | }, 219 | "outputs": [], 220 | "source": [ 221 | "def convert_map_to_vector(map_, label2id):\n", 222 | " \"\"\"将概率向量从dict转换成一维array\"\"\"\n", 223 | " v = np.zeros(len(map_), dtype=float)\n", 224 | " for e in map_:\n", 225 | " v[label2id[e]] = map_[e]\n", 226 | " return v\n", 227 | "\n", 228 | " \n", 229 | "def convert_map_to_matrix(map_, label2id1, label2id2):\n", 230 | " \"\"\"将概率转移矩阵从dict转换成矩阵\"\"\"\n", 231 | " m = np.zeros((len(label2id1), len(label2id2)), dtype=float)\n", 232 | " for line in map_:\n", 233 | " for col in map_[line]:\n", 234 | " m[label2id1[line]][label2id2[col]] = map_[line][col]\n", 235 | " return m" 236 | ] 237 | }, 238 | { 239 | "cell_type": "code", 240 | "execution_count": 6, 241 | "metadata": {}, 242 | "outputs": [ 243 | { 244 | "name": "stdout", 245 | "output_type": "stream", 246 | "text": [ 247 | "[[ 0.7 0.3]\n", 248 | " [ 0.4 0.6]]\n", 249 | "[[ 0.5 0.4 0.1]\n", 250 | " [ 0.1 0.3 0.6]]\n", 251 | "[ 0.6 0.4]\n" 252 | ] 253 | } 254 | ], 255 | "source": [ 256 | "A = convert_map_to_matrix(transition_probability, states_label2id, states_label2id)\n", 257 | "print(A)\n", 258 | "B = convert_map_to_matrix(emission_probability, states_label2id, observations_label2id)\n", 259 | "print(B)\n", 260 | "observations_index = [observations_label2id[o] for o in observations]\n", 261 | "pi = convert_map_to_vector(start_probability, states_label2id)\n", 262 | "print(pi)" 263 | ] 264 | }, 265 | { 266 | "cell_type": "code", 267 | "execution_count": 7, 268 | "metadata": {}, 269 | "outputs": [ 270 | { 271 | "name": "stdout", 272 | "output_type": "stream", 273 | "text": [ 274 | "[0 0 1 1 2 1 2 2 2 0]\n", 275 | "[0 0 0 0 1 1 1 1 1 0]\n", 276 | "病人的状态: ['Healthy', 'Healthy', 'Healthy', 'Healthy', 'Fever', 'Fever', 
'Fever', 'Fever', 'Fever', 'Healthy']\n", 277 | "病人的观测: ['normal', 'normal', 'cold', 'cold', 'dizzy', 'cold', 'dizzy', 'dizzy', 'dizzy', 'normal']\n" 278 | ] 279 | } 280 | ], 281 | "source": [ 282 | "# 生成模拟数据\n", 283 | "observations_data, states_data = simulate(10)\n", 284 | "print(observations_data)\n", 285 | "print(states_data)\n", 286 | "# 相应的label\n", 287 | "print(\"病人的状态: \", [states_id2label[index] for index in states_data])\n", 288 | "print(\"病人的观测: \", [observations_id2label[index] for index in observations_data])" 289 | ] 290 | }, 291 | { 292 | "cell_type": "markdown", 293 | "metadata": {}, 294 | "source": [ 295 | "## 2.HMM的三个问题\n", 296 | "\n", 297 | "HMM在实际应用中,一般会遇上三种问题:\n", 298 | "- 1.**概率计算问题**:给定模型$λ=(A,B,π)$ 和观测序列$O={o_1,o_2,...,o_T}$,计算在模型$λ$下观测序列$O$出现的概率$P(O|λ)$。\n", 299 | "- 2.**学习问题**:已知观测序列$O={o_1,o_2,...,o_T}$,估计模型$λ=(A,B,π)$,使$P(O|λ)$最大。即用**极大似然估计**的方法估计参数。\n", 300 | "- 3.**预测问题**(也称为解码(decoding)问题):已知观测序列$O={o_1,o_2,...,o_T}$ 和模型$λ=(A,B,π)$,求给定观测序列条件概率$P(I|O)$最大的状态序列$I=(i_1,i_2,...,i_T)$,即给定观测序列,求最有可能的对应的**状态序列**。 \n", 301 | "\n", 302 | "回到刚才的例子,这三个问题就是:\n", 303 | "- 1.**概率计算问题**:给定模型参数,计算病人某一系列观测症状出现的概率。\n", 304 | "- 2.**学习问题**:根据病人某一系列观测的症状,学习模型参数。\n", 305 | "- 3.**预测问题**:根据学到的模型,预测病人这几天是不是有感冒。" 306 | ] 307 | }, 308 | { 309 | "cell_type": "markdown", 310 | "metadata": {}, 311 | "source": [ 312 | "### 2.1 概率计算问题 \n", 313 | "\n", 314 | "概率计算问题计算的是:在模型$λ$下观测序列$O$出现的概率$P(O|λ)$。\n", 315 | "\n", 316 | "**直接计算**:\n", 317 | "\n", 318 | "对于状态序列$I=(i_1,i_2, ..., i_T)$的概率是:$P(I|\lambda)=\pi_{i_1}a_{i_1i_2}a_{i_2i_3}...a_{i_{T-1}i_T}$。\n", 319 | "\n", 320 | "对上面这种状态序列,产生观测序列$O=(o_1, o_2, ..., o_T)$的概率是$P(O|I,\lambda)=b_{i_1}(o_1)b_{i_2}(o_2)...b_{i_T}(o_{T})$。\n", 321 | "\n", 322 | "$I$和$O$的**联合概率**为$P(O,I|\lambda)=P(O|I,\lambda)P(I|\lambda)=\pi_{i_1}b_{i_1}(o_1)a_{i_1i_2}b_{i_2}(o_2)...a_{i_{T-1}i_T}b_{i_T}(o_{T})$。\n", 323 | "\n", 324 | "对所有可能的$I$求和,得到$P(O|λ)=\sum_IP(O,I|\lambda)=\sum_{i_1,...,i_T}\pi_{i_1}b_{i_1}(o_1)a_{i_1i_2}b_{i_2}(o_2)...a_{i_{T-1}i_T}b_{i_T}(o_{T})$。\n", 325 | "\n", 326 | "如果直接计算,时间复杂度太高,是$O(TN^T)$。\n", 327 | "\n", 328 | "**前向算法(或者后向算法)**:\n", 329 | "\n", 330 | "首先引入**前向概率**:\n", 331 | "\n", 332 | "给定模型$λ$,定义到时刻$t$部分观测序列为$o_1,o_2,...,o_t$ 且状态为$q_i$ 的概率为前向概率。记作:\n", 333 | "$$α_t(i)=P(o_1,o_2,...,o_t,i_t=q_i|λ)$$\n", 334 | "\n", 335 | "用感冒例子描述就是:某一天是否感冒以及这天和这天之前所有的观测症状的联合概率。\n", 336 | "\n", 337 | "后向概率定义类似。\n", 338 | "\n", 339 | "![助记图片](http://img.blog.csdn.net/20160521211814167)\n", 340 | "\n", 341 | "**前向算法**\n", 342 | "\n", 343 | "输入:隐马模型$λ$,观测序列$O$;\n", 344 | "输出:观测序列概率$P(O|λ)$.\n", 345 | "- 1. 初值$(t=1)$,$α_1(i)=P(o_1,i_1=q_i|λ)=π_ib_i(o_1)$,$i=1,2,...,N $\n", 346 | "- 2. 递推:对$t=1,2,...,T-1$,$α_{t+1}(i)=[\sum^N_{j=1}α_t(j)a_{ji}]b_i(o_{t+1})$\n", 347 | "- 3. 
终结:$P(O|λ)=\sum^N_{i=1}α_T(i)$" 348 | ] 349 | }, 350 | { 351 | "cell_type": "markdown", 352 | "metadata": {}, 353 | "source": [ 354 | "**前向算法理解:**\n", 355 | "\n", 356 | "前向算法使用**前向概率**的概念,记录每个时间下的前向概率,使得在递推计算下一个前向概率时,只需要上一个时间点的所有前向概率即可。原理上也是用空间换时间。这样的**时间复杂度是$O(N^2T)$**。\n", 357 | "\n", 358 | "![](http://img.my.csdn.net/uploads/201304/24/1366781746_6470.png)" 359 | ] 360 | }, 361 | { 362 | "cell_type": "markdown", 363 | "metadata": { 364 | "collapsed": true 365 | }, 366 | "source": [ 367 | "前向算法/后向算法python实现:" 368 | ] 369 | }, 370 | { 371 | "cell_type": "code", 372 | "execution_count": 8, 373 | "metadata": { 374 | "collapsed": true 375 | }, 376 | "outputs": [], 377 | "source": [ 378 | "def forward(obs_seq):\n", 379 | " \"\"\"前向算法\"\"\"\n", 380 | " N = A.shape[0]\n", 381 | " T = len(obs_seq)\n", 382 | " \n", 383 | " # F保存前向概率矩阵\n", 384 | " F = np.zeros((N,T))\n", 385 | " F[:,0] = pi * B[:, obs_seq[0]]\n", 386 | "\n", 387 | " for t in range(1, T):\n", 388 | " for n in range(N):\n", 389 | " F[n,t] = np.dot(F[:,t-1], (A[:,n])) * B[n, obs_seq[t]]\n", 390 | "\n", 391 | " return F\n", 392 | "\n", 393 | "def backward(obs_seq):\n", 394 | " \"\"\"后向算法\"\"\"\n", 395 | " N = A.shape[0]\n", 396 | " T = len(obs_seq)\n", 397 | " # X保存后向概率矩阵\n", 398 | " X = np.zeros((N,T))\n", 399 | " X[:,-1:] = 1\n", 400 | "\n", 401 | " for t in reversed(range(T-1)):\n", 402 | " for n in range(N):\n", 403 | " X[n,t] = np.sum(X[:,t+1] * A[n,:] * B[:, obs_seq[t+1]])\n", 404 | "\n", 405 | " return X" 406 | ] 407 | }, 408 | { 409 | "cell_type": "markdown", 410 | "metadata": {}, 411 | "source": [ 412 | "### 2.2学习问题\n", 413 | "\n", 414 | "学习问题我们这里只关注非监督的学习算法,有监督的学习算法在有标注数据的前提下,使用**极大似然估计法**可以很方便地估计模型参数。\n", 415 | "\n", 416 | "非监督的情况,也就是我们只有一堆观测数据,对应到感冒预测的例子,即,我们只知道病人之前的几天是什么感受,但是不知道他之前是否被确认为感冒。\n", 417 | "\n", 418 | "在这种情况下,我们可以使用**EM算法**,将**状态变量视作隐变量**。使用EM算法学习HMM参数的算法称为**Baum-Welch算法**。\n", 419 | "\n", 420 | "模型表达式:\n", 421 | "\n", 422 | "$$P(O|λ)=\sum_IP(O|I,λ)P(I|λ)$$\n", 423 | "\n", 424 | "**Baum-Welch算法**:\n", 425 | "\n", 426 | "(1). 确定完全数据的对数似然函数\n", 427 | "\n", 428 | "完全数据是$(O,I)=(o_1,o_2,...,o_T,i_1,...,i_T)$\n", 429 | "\n", 430 | "完全数据的对数似然函数是:$logP(O,I|λ)$。\n", 431 | "\n", 432 | "(2). EM算法的E步:\n", 433 | "\n", 434 | "$$Q(λ,\hatλ)=\sum_IlogP(O,I|λ)P(O,I|\hatλ)$$\n", 435 | "\n", 436 | "注意,这里忽略了对于$λ$而言是常数因子的$\frac{1}{P(O|\hatλ)}$\n", 437 | "\n", 438 | "其中,$\hatλ$ 是隐马尔科夫模型参数的当前估计值,λ是要极大化的隐马尔科夫模型参数。\n", 439 | "\n", 440 | "又有:\n", 441 | "$$P(O,I|λ)=π_{i_1}b_{i_1}(o_1)a_{i_1,i_2}b_{i_2}(o_2)...a_{i_{T-1},i_T}b_{i_T}(o_T)$$\n", 442 | "\n", 443 | "于是$Q(λ,\hatλ)$可以写成:\n", 444 | "$$Q(λ,\hatλ)=\sum_Ilogπ_{i_1}P(O,I|\hatλ)+\sum_I(\sum^{T-1}_{t=1}loga_{i_{t-1},i_t})P(O,I|\hatλ)+\sum_I(\sum^{T}_{t=1}logb_{i_t}(o_t))P(O,I|\hatλ)$$\n", 445 | "\n", 446 | "(3). 
EM算法的M步:\n", 447 | "\n", 448 | "极大化Q函数$Q(λ,\hatλ)$ 求模型参数$A,B,π$。\n", 449 | "\n", 450 | "应用拉格朗日乘子法对各参数求偏导,解得**Baum-Welch模型参数估计公式**:\n", 451 | "- $a_{ij}=\frac{\sum_{t=1}^{T-1}ξ_t(i,j)}{\sum_{t=1}^{T-1}γ_t(i)}$\n", 452 | "- $b_j(k)=\frac{\sum^T_{t=1,o_t=v_k}γ_t(j)}{\sum_{t=1}^{T}γ_t(j)}$\n", 453 | "- $π_i=γ_1(i)$\n", 454 | "\n", 455 | "其中$γ_t(i)$和$ξ_t(i,j)$是:\n", 456 | "\n", 457 | "$γ_t(i)\\=P(i_t=q_i|O,λ)\\=\frac {P(i_t=q_i,O|λ)}{P(O|λ)}\\=\frac{α_t(i)β_t(i)}{\sum_{j=1}^Nα_t(j)β_t(j)}$\n", 458 | "\n", 459 | "读作gamma,即,**给定模型参数和所有观测,时刻$t$处于状态$q_i$的概率**。\n", 460 | "\n", 461 | "$ξ_t(i,j)\\=P(i_t=q_i,i_{t+1}=q_j|O,λ)\\=\frac{P(i_t=q_i,i_{t+1}=q_j,O|λ)}{P(O|λ)}\\=\frac{P(i_t=q_i,i_{t+1}=q_j,O|λ)}{\sum_{i=1}^N\sum_{j=1}^NP(i_t=q_i,i_{t+1}=q_j,O|λ)}$\n", 462 | "\n", 463 | "读作xi,即,**给定模型参数和所有观测,时刻$t$处于状态$q_i$且时刻$t+1$处于状态$q_j$的概率**。\n", 464 | "\n", 465 | "代入$P(i_t=q_i,i_{t+1}=q_j,O|λ)=α_t(i)a_{ij}b_j(o_{t+1})β_{t+1}(j)$\n", 466 | "\n", 467 | "得到:$ξ_t(i,j)=\frac{α_t(i)a_{ij}b_j(o_{t+1})β_{t+1}(j)}{\sum_{i=1}^N\sum_{j=1}^Nα_t(i)a_{ij}b_j(o_{t+1})β_{t+1}(j)}$\n", 468 | "\n", 469 | "**Baum-Welch算法**的python实现:" 470 | ] 471 | }, 472 | { 473 | "cell_type": "code", 474 | "execution_count": 9, 475 | "metadata": { 476 | "collapsed": true 477 | }, 478 | "outputs": [], 479 | "source": [ 480 | "def baum_welch_train(observations, A, B, pi, criterion=0.05):\n", 481 | " \"\"\"无监督学习算法——Baum-Welch算法\"\"\"\n", 482 | " n_states = A.shape[0]\n", 483 | " n_samples = len(observations)\n", 484 | "\n", 485 | " done = False\n", 486 | " while not done:\n", 487 | " # alpha_t(i) = P(O_1 O_2 ... O_t, q_t = S_i | hmm)\n", 488 | " # Initialize alpha\n", 489 | " alpha = forward(observations)\n", 490 | "\n", 491 | " # beta_t(i) = P(O_t+1 O_t+2 ... O_T | q_t = S_i , hmm)\n", 492 | " # Initialize beta\n", 493 | " beta = backward(observations)\n", 494 | " # ξ_t(i,j)=P(i_t=q_i,i_{t+1}=q_j|O,λ)\n", 495 | " xi = np.zeros((n_states,n_states,n_samples-1))\n", 496 | " for t in range(n_samples-1):\n", 497 | " denom = np.dot(np.dot(alpha[:,t].T, A) * B[:,observations[t+1]].T, beta[:,t+1])\n", 498 | " for i in range(n_states):\n", 499 | " numer = alpha[i,t] * A[i,:] * B[:,observations[t+1]].T * beta[:,t+1].T\n", 500 | " xi[i,:,t] = numer / denom\n", 501 | "\n", 502 | " # γ_t(i):gamma_t(i) = P(q_t = S_i | O, hmm)\n", 503 | " gamma = np.sum(xi,axis=1)\n", 504 | " # Need final gamma element for new B\n", 505 | " # xi的第三维长度n_samples-1,少一个,所以gamma要计算最后一个\n", 506 | " prod = (alpha[:,n_samples-1] * beta[:,n_samples-1]).reshape((-1,1))\n", 507 | " gamma = np.hstack((gamma, prod / np.sum(prod))) #append one more to gamma!!!\n", 508 | " \n", 509 | " # 更新模型参数\n", 510 | " newpi = gamma[:,0]\n", 511 | " newA = np.sum(xi,2) / np.sum(gamma[:,:-1],axis=1).reshape((-1,1))\n", 512 | " newB = np.copy(B)\n", 513 | " num_levels = B.shape[1]\n", 514 | " sumgamma = np.sum(gamma,axis=1)\n", 515 | " for lev in range(num_levels):\n", 516 | " mask = observations == lev\n", 517 | " newB[:,lev] = np.sum(gamma[:,mask],axis=1) / sumgamma\n", 518 | " \n", 519 | " # 检查是否满足阈值\n", 520 | " if np.max(abs(pi - newpi)) < criterion and \\\n", 521 | " np.max(abs(A - newA)) < criterion and \\\n", 522 | " np.max(abs(B - newB)) < criterion:\n", 523 | " done = 1\n", 524 | " A[:], B[:], pi[:] = newA, newB, newpi\n", 525 | " return newA, newB, newpi\n" 526 | ] 527 | }, 528 | { 529 | "cell_type": "markdown", 530 | "metadata": {}, 531 | "source": [ 532 | "回到预测感冒的问题,下面我们先自己建立一个HMM模型,再模拟出一个观测序列和一个状态序列。\n", 533 | "\n", 534 | "然后,只用观测序列去学习模型,获得模型参数。" 535 | ] 536 | }, 
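补充:除了在Baum-Welch内部对$ξ$求和得到,$γ_t(i)$也可以直接由前向/后向矩阵计算。下面是基于上文`forward`/`backward`返回值的小草稿(依据$γ_t(i)=\frac{α_t(i)β_t(i)}{\sum_jα_t(j)β_t(j)}$,变量沿用上文定义,仅为示意):

```python
def compute_gamma(obs_seq):
    """γ_t(i) = α_t(i)β_t(i) / Σ_j α_t(j)β_t(j),直接由前向/后向矩阵得到"""
    alpha = forward(obs_seq)   # (N, T) 前向概率矩阵
    beta = backward(obs_seq)   # (N, T) 后向概率矩阵
    prod = alpha * beta        # 逐元素相乘
    return prod / np.sum(prod, axis=0, keepdims=True)
```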
537 | { 538 | "cell_type": "code", 539 | "execution_count": 10, 540 | "metadata": {}, 541 | "outputs": [ 542 | { 543 | "name": "stdout", 544 | "output_type": "stream", 545 | "text": [ 546 | "newA: [[ 0.5 0.5]\n", 547 | " [ 0.5 0.5]]\n", 548 | "newB: [[ 0.28 0.32 0.4 ]\n", 549 | " [ 0.28 0.32 0.4 ]]\n", 550 | "newpi: [ 0.5 0.5]\n" 551 | ] 552 | } 553 | ], 554 | "source": [ 555 | "A = np.array([[0.5, 0.5],[0.5, 0.5]])\n", 556 | "B = np.array([[0.3, 0.3, 0.3],[0.3, 0.3, 0.3]])\n", 557 | "pi = np.array([0.5, 0.5])\n", 558 | "\n", 559 | "observations_data, states_data = simulate(100)\n", 560 | "newA, newB, newpi = baum_welch_train(observations_data, A, B, pi)\n", 561 | "print(\"newA: \", newA)\n", 562 | "print(\"newB: \", newB)\n", 563 | "print(\"newpi: \", newpi)" 564 | ] 565 | }, 566 | { 567 | "cell_type": "markdown", 568 | "metadata": {}, 569 | "source": [ 570 | "### 2.3预测问题\n", 571 | "\n", 572 | "考虑到预测问题是求给定观测序列条件概率$P(I|O)$最大的状态序列$I=(i_1,i_2,...,i_T)$,类比这个问题和最短路问题:\n", 573 | "\n", 574 | "我们可以把求$P(I|O)$的最大值类比成求节点间距离的最小值,于是考虑**类似于动态规划的viterbi算法**。\n", 575 | "\n", 576 | "**首先引入两个变量$δ$和$ψ$**:\n", 577 | "\n", 578 | "定义**在时刻$t$状态为$i$的所有单个路径$(i_1,i_2,i_3,...,i_t)$中概率最大值**为(这里考虑$P(I,O)$便于计算,因为给定的$P(O)$,$P(I|O)$正比于$P(I,O)$):\n", 579 | "\n", 580 | "$$δ_t(i)=max_{i_1,i_2,...,i_{t-1}}P(i_t=i,i_{t-1},...,i_1,o_t,o_{t-1},...,o_1|λ)$$\n", 581 | "\n", 582 | "读作delta,其中,$i=1,2,...,N$\n", 583 | "\n", 584 | "得到其递推公式:\n", 585 | "\n", 586 | "$$δ_t(i)=max_{1≤j≤N}[δ_{t-1}(j)a_{ji}]b_i(o_t)$$\n", 587 | "\n", 588 | "\n", 589 | "定义**在时刻$t$状态为$i$的所有单个路径$(i_1,i_2,i_3,...,i_{t-1},i)$中概率最大的路径的第$t-1$个结点**为\n", 590 | "\n", 591 | "$$ψ_t(i)=argmax_{1≤j≤N}[δ_{t-1}(j)a_{ji}]$$\n", 592 | "\n", 593 | "读作psi,其中,$i=1,2,...,N$\n", 594 | "\n", 595 | "下面介绍维特比算法。\n", 596 | "\n", 597 | "**维特比(viterbi)算法**(动态规划):\n", 598 | "\n", 599 | "输入:模型$λ=(A,B,π)$和观测$O=(o_1,o_2,...,o_T)$\n", 600 | "\n", 601 | "输出:最优路径$I^*=(i^*_1,i^*_2,...,i^*_T)$\n", 602 | "\n", 603 | "(1).初始化:\n", 604 | "$$δ_1(i)=π_ib_i(o_1)$$\n", 605 | "$$ψ_1(i)=0$$\n", 606 | "\n", 607 | "(2).**递推。**对$t=2,3,...,T$\n", 608 | "$$δ_t(i)=max_{1≤j≤N}[δ_{t-1}(j)a_{ji}]b_i(o_t)$$\n", 609 | "$$ψ_t(i)=argmax_{1≤j≤N}[δ_{t-1}(j)a_{ji}]$$\n", 610 | "\n", 611 | "(3).终止:\n", 612 | "$$P^*=max_{1≤i≤N}δ_T(i)$$\n", 613 | "$$i^*_T=argmax_{1≤i≤N}δ_T(i)$$\n", 614 | "\n", 615 | "(4).最优路径回溯,对$t=T-1,T-2,...,1$\n", 616 | "\n", 617 | "$$i^*_t=ψ_{t+1}(i^*_{t+1})$$\n", 618 | "\n", 619 | "求得最优路径$I^*=(i_1^*,i_2^*,...,i_T^*)$\n", 620 | "\n", 621 | "**注:上面的$b_i(o_t)$和$ψ_{t+1}(i^*_{t+1})$的括号,并不是函数,而是类似于数组取下标的操作。**\n", 622 | "\n", 623 | "viterbi算法python实现(**V对应δ,prev对应ψ**):" 624 | ] 625 | }, 626 | { 627 | "cell_type": "code", 628 | "execution_count": 11, 629 | "metadata": { 630 | "collapsed": true 631 | }, 632 | "outputs": [], 633 | "source": [ 634 | "def viterbi(obs_seq, A, B, pi):\n", 635 | " \"\"\"\n", 636 | " Returns\n", 637 | " -------\n", 638 | " V : numpy.ndarray\n", 639 | " V [s][t] = Maximum probability of an observation sequence ending\n", 640 | " at time 't' with final state 's'\n", 641 | " prev : numpy.ndarray\n", 642 | " Contains a pointer to the previous state at t-1 that maximizes\n", 643 | " V[state][t]\n", 644 | " \n", 645 | " V对应δ,prev对应ψ\n", 646 | " \"\"\"\n", 647 | " N = A.shape[0]\n", 648 | " T = len(obs_seq)\n", 649 | " prev = np.zeros((T - 1, N), dtype=int)\n", 650 | "\n", 651 | " # DP matrix containing max likelihood of state at a given time\n", 652 | " V = np.zeros((N, T))\n", 653 | " V[:,0] = pi * B[:,obs_seq[0]]\n", 654 | "\n", 655 | " for t in range(1, T):\n", 656 | " for n in 
range(N):\n", 657 | " seq_probs = V[:,t-1] * A[:,n] * B[n, obs_seq[t]]\n", 658 | " prev[t-1,n] = np.argmax(seq_probs)\n", 659 | " V[n,t] = np.max(seq_probs)\n", 660 | "\n", 661 | " return V, prev\n", 662 | "\n", 663 | "def build_viterbi_path(prev, last_state):\n", 664 | " \"\"\"Returns a state path ending in last_state in reverse order.\n", 665 | " 最优路径回溯\n", 666 | " \"\"\"\n", 667 | " T = len(prev)\n", 668 | " yield(last_state)\n", 669 | " for i in range(T-1, -1, -1):\n", 670 | " yield(prev[i, last_state])\n", 671 | " last_state = prev[i, last_state]\n", 672 | " \n", 673 | "def observation_prob(obs_seq):\n", 674 | " \"\"\" P( entire observation sequence | A, B, pi ) \"\"\"\n", 675 | " return np.sum(forward(obs_seq)[:,-1])\n", 676 | "\n", 677 | "def state_path(obs_seq, A, B, pi):\n", 678 | " \"\"\"\n", 679 | " Returns\n", 680 | " -------\n", 681 | " V[last_state, -1] : float\n", 682 | " Probability of the optimal state path\n", 683 | " path : list(int)\n", 684 | " Optimal state path for the observation sequence\n", 685 | " \"\"\"\n", 686 | " V, prev = viterbi(obs_seq, A, B, pi)\n", 687 | " # Build state path with greatest probability\n", 688 | " last_state = np.argmax(V[:,-1])\n", 689 | " path = list(build_viterbi_path(prev, last_state))\n", 690 | "\n", 691 | " return V[last_state,-1], reversed(path)" 692 | ] 693 | }, 694 | { 695 | "cell_type": "markdown", 696 | "metadata": {}, 697 | "source": [ 698 | "继续感冒预测的例子,根据刚才学得的模型参数,再去预测状态序列,观测准确率。" 699 | ] 700 | }, 701 | { 702 | "cell_type": "code", 703 | "execution_count": 12, 704 | "metadata": {}, 705 | "outputs": [ 706 | { 707 | "name": "stdout", 708 | "output_type": "stream", 709 | "text": [ 710 | "0.54\n" 711 | ] 712 | } 713 | ], 714 | "source": [ 715 | "states_out = state_path(observations_data, newA, newB, newpi)[1]\n", 716 | "p = 0.0\n", 717 | "for s in states_data:\n", 718 | " if next(states_out) == s: \n", 719 | " p += 1\n", 720 | "\n", 721 | "print(p / len(states_data))" 722 | ] 723 | }, 724 | { 725 | "cell_type": "markdown", 726 | "metadata": {}, 727 | "source": [ 728 | "因为是随机生成的样本,因此准确率较低也可以理解。\n", 729 | "\n", 730 | "使用Viterbi算法计算病人的病情以及相应的概率:" 731 | ] 732 | }, 733 | { 734 | "cell_type": "code", 735 | "execution_count": 13, 736 | "metadata": {}, 737 | "outputs": [ 738 | { 739 | "name": "stdout", 740 | "output_type": "stream", 741 | "text": [ 742 | " normal cold dizzy\n", 743 | "Healthy: 0.140000 0.022400 0.004480\n", 744 | " Fever: 0.140000 0.022400 0.004480\n", 745 | "\n", 746 | "The most possible states and probability are:\n", 747 | "Healthy\n", 748 | "Healthy\n", 749 | "Healthy\n", 750 | "0.00448\n" 751 | ] 752 | } 753 | ], 754 | "source": [ 755 | "A = convert_map_to_matrix(transition_probability, states_label2id, states_label2id)\n", 756 | "B = convert_map_to_matrix(emission_probability, states_label2id, observations_label2id)\n", 757 | "observations_index = [observations_label2id[o] for o in observations]\n", 758 | "pi = convert_map_to_vector(start_probability, states_label2id)\n", 759 | "V, p = viterbi(observations_index, newA, newB, newpi)\n", 760 | "print(\" \" * 7, \" \".join((\"%10s\" % observations_id2label[i]) for i in observations_index))\n", 761 | "for s in range(0, 2):\n", 762 | " print(\"%7s: \" % states_id2label[s] + \" \".join(\"%10s\" % (\"%f\" % v) for v in V[s]))\n", 763 | "print('\\nThe most possible states and probability are:')\n", 764 | "p, ss = state_path(observations_index, newA, newB, newpi)\n", 765 | "for s in ss:\n", 766 | " print(states_id2label[s])\n", 767 | "print(p)" 768 | ] 769 | }, 770 | { 771 | 
"cell_type": "markdown", 772 | "metadata": {}, 773 | "source": [ 774 | "## 3.完整代码\n", 775 | "\n", 776 | "代码主要参考[Hankcs](http://www.hankcs.com/ml/hidden-markov-model.html)的博客,hankcs参考的是[colostate大学的教学代码](http://www.cs.colostate.edu/~anderson/cs440/index.html/doku.php?id=notes:hmm2)。\n", 777 | "\n", 778 | "完整的隐马尔科夫用类包装的代码:" 779 | ] 780 | }, 781 | { 782 | "cell_type": "code", 783 | "execution_count": 15, 784 | "metadata": { 785 | "collapsed": true 786 | }, 787 | "outputs": [], 788 | "source": [ 789 | "class HMM:\n", 790 | " \"\"\"\n", 791 | " Order 1 Hidden Markov Model\n", 792 | " \n", 793 | " Attributes\n", 794 | " ----------\n", 795 | " A : numpy.ndarray\n", 796 | " State transition probability matrix\n", 797 | " B: numpy.ndarray\n", 798 | " Output emission probability matrix with shape(N, number of output types)\n", 799 | " pi: numpy.ndarray\n", 800 | " Initial state probablity vector\n", 801 | " \"\"\"\n", 802 | " def __init__(self, A, B, pi):\n", 803 | " self.A = A\n", 804 | " self.B = B\n", 805 | " self.pi = pi\n", 806 | " \n", 807 | " def simulate(self, T):\n", 808 | " \n", 809 | " def draw_from(probs):\n", 810 | " \"\"\"\n", 811 | " 1.np.random.multinomial:\n", 812 | " 按照多项式分布,生成数据\n", 813 | " >>> np.random.multinomial(20, [1/6.]*6, size=2)\n", 814 | " array([[3, 4, 3, 3, 4, 3],\n", 815 | " [2, 4, 3, 4, 0, 7]])\n", 816 | " For the first run, we threw 3 times 1, 4 times 2, etc. \n", 817 | " For the second, we threw 2 times 1, 4 times 2, etc.\n", 818 | " 2.np.where:\n", 819 | " >>> x = np.arange(9.).reshape(3, 3)\n", 820 | " >>> np.where( x > 5 )\n", 821 | " (array([2, 2, 2]), array([0, 1, 2]))\n", 822 | " \"\"\"\n", 823 | " return np.where(np.random.multinomial(1,probs) == 1)[0][0]\n", 824 | "\n", 825 | " observations = np.zeros(T, dtype=int)\n", 826 | " states = np.zeros(T, dtype=int)\n", 827 | " states[0] = draw_from(self.pi)\n", 828 | " observations[0] = draw_from(self.B[states[0],:])\n", 829 | " for t in range(1, T):\n", 830 | " states[t] = draw_from(self.A[states[t-1],:])\n", 831 | " observations[t] = draw_from(self.B[states[t],:])\n", 832 | " return observations,states\n", 833 | " \n", 834 | " def _forward(self, obs_seq):\n", 835 | " \"\"\"前向算法\"\"\"\n", 836 | " N = self.A.shape[0]\n", 837 | " T = len(obs_seq)\n", 838 | "\n", 839 | " F = np.zeros((N,T))\n", 840 | " F[:,0] = self.pi * self.B[:, obs_seq[0]]\n", 841 | "\n", 842 | " for t in range(1, T):\n", 843 | " for n in range(N):\n", 844 | " F[n,t] = np.dot(F[:,t-1], (self.A[:,n])) * self.B[n, obs_seq[t]]\n", 845 | "\n", 846 | " return F\n", 847 | " \n", 848 | " def _backward(self, obs_seq):\n", 849 | " \"\"\"后向算法\"\"\"\n", 850 | " N = self.A.shape[0]\n", 851 | " T = len(obs_seq)\n", 852 | "\n", 853 | " X = np.zeros((N,T))\n", 854 | " X[:,-1:] = 1\n", 855 | "\n", 856 | " for t in reversed(range(T-1)):\n", 857 | " for n in range(N):\n", 858 | " X[n,t] = np.sum(X[:,t+1] * self.A[n,:] * self.B[:, obs_seq[t+1]])\n", 859 | "\n", 860 | " return X\n", 861 | " \n", 862 | " def baum_welch_train(self, observations, criterion=0.05):\n", 863 | " \"\"\"无监督学习算法——Baum-Weich算法\"\"\"\n", 864 | " n_states = self.A.shape[0]\n", 865 | " n_samples = len(observations)\n", 866 | "\n", 867 | " done = False\n", 868 | " while not done:\n", 869 | " # alpha_t(i) = P(O_1 O_2 ... O_t, q_t = S_i | hmm)\n", 870 | " # Initialize alpha\n", 871 | " alpha = self._forward(observations)\n", 872 | "\n", 873 | " # beta_t(i) = P(O_t+1 O_t+2 ... 
O_T | q_t = S_i , hmm)\n", 874 | " # Initialize beta\n", 875 | " beta = self._backward(observations)\n", 876 | "\n", 877 | " xi = np.zeros((n_states,n_states,n_samples-1))\n", 878 | " for t in range(n_samples-1):\n", 879 | " denom = np.dot(np.dot(alpha[:,t].T, self.A) * self.B[:,observations[t+1]].T, beta[:,t+1])\n", 880 | " for i in range(n_states):\n", 881 | " numer = alpha[i,t] * self.A[i,:] * self.B[:,observations[t+1]].T * beta[:,t+1].T\n", 882 | " xi[i,:,t] = numer / denom\n", 883 | "\n", 884 | " # gamma_t(i) = P(q_t = S_i | O, hmm)\n", 885 | " gamma = np.sum(xi,axis=1)\n", 886 | " # Need final gamma element for new B\n", 887 | " prod = (alpha[:,n_samples-1] * beta[:,n_samples-1]).reshape((-1,1))\n", 888 | " gamma = np.hstack((gamma, prod / np.sum(prod))) #append one more to gamma!!!\n", 889 | "\n", 890 | " newpi = gamma[:,0]\n", 891 | " newA = np.sum(xi,2) / np.sum(gamma[:,:-1],axis=1).reshape((-1,1))\n", 892 | " newB = np.copy(self.B)\n", 893 | "\n", 894 | " num_levels = self.B.shape[1]\n", 895 | " sumgamma = np.sum(gamma,axis=1)\n", 896 | " for lev in range(num_levels):\n", 897 | " mask = observations == lev\n", 898 | " newB[:,lev] = np.sum(gamma[:,mask],axis=1) / sumgamma\n", 899 | "\n", 900 | " if np.max(abs(self.pi - newpi)) < criterion and \\\n", 901 | " np.max(abs(self.A - newA)) < criterion and \\\n", 902 | " np.max(abs(self.B - newB)) < criterion:\n", 903 | " done = 1\n", 904 | "\n", 905 | " self.A[:],self.B[:],self.pi[:] = newA,newB,newpi\n", 906 | " \n", 907 | " def observation_prob(self, obs_seq):\n", 908 | " \"\"\" P( entire observation sequence | A, B, pi ) \"\"\"\n", 909 | " return np.sum(self._forward(obs_seq)[:,-1])\n", 910 | "\n", 911 | " def state_path(self, obs_seq):\n", 912 | " \"\"\"\n", 913 | " Returns\n", 914 | " -------\n", 915 | " V[last_state, -1] : float\n", 916 | " Probability of the optimal state path\n", 917 | " path : list(int)\n", 918 | " Optimal state path for the observation sequence\n", 919 | " \"\"\"\n", 920 | " V, prev = self.viterbi(obs_seq)\n", 921 | "\n", 922 | " # Build state path with greatest probability\n", 923 | " last_state = np.argmax(V[:,-1])\n", 924 | " path = list(self.build_viterbi_path(prev, last_state))\n", 925 | "\n", 926 | " return V[last_state,-1], reversed(path)\n", 927 | "\n", 928 | " def viterbi(self, obs_seq):\n", 929 | " \"\"\"\n", 930 | " Returns\n", 931 | " -------\n", 932 | " V : numpy.ndarray\n", 933 | " V [s][t] = Maximum probability of an observation sequence ending\n", 934 | " at time 't' with final state 's'\n", 935 | " prev : numpy.ndarray\n", 936 | " Contains a pointer to the previous state at t-1 that maximizes\n", 937 | " V[state][t]\n", 938 | " \"\"\"\n", 939 | " N = self.A.shape[0]\n", 940 | " T = len(obs_seq)\n", 941 | " prev = np.zeros((T - 1, N), dtype=int)\n", 942 | "\n", 943 | " # DP matrix containing max likelihood of state at a given time\n", 944 | " V = np.zeros((N, T))\n", 945 | " V[:,0] = self.pi * self.B[:,obs_seq[0]]\n", 946 | "\n", 947 | " for t in range(1, T):\n", 948 | " for n in range(N):\n", 949 | " seq_probs = V[:,t-1] * self.A[:,n] * self.B[n, obs_seq[t]]\n", 950 | " prev[t-1,n] = np.argmax(seq_probs)\n", 951 | " V[n,t] = np.max(seq_probs)\n", 952 | "\n", 953 | " return V, prev\n", 954 | "\n", 955 | " def build_viterbi_path(self, prev, last_state):\n", 956 | " \"\"\"Returns a state path ending in last_state in reverse order.\"\"\"\n", 957 | " T = len(prev)\n", 958 | " yield(last_state)\n", 959 | " for i in range(T-1, -1, -1):\n", 960 | " yield(prev[i, last_state])\n", 961 | " last_state = 
prev[i, last_state]" 962 | ] 963 | } 964 | ], 965 | "metadata": { 966 | "anaconda-cloud": {}, 967 | "kernelspec": { 968 | "display_name": "Python [conda env:py36]", 969 | "language": "python", 970 | "name": "conda-env-py36-py" 971 | }, 972 | "language_info": { 973 | "codemirror_mode": { 974 | "name": "ipython", 975 | "version": 3 976 | }, 977 | "file_extension": ".py", 978 | "mimetype": "text/x-python", 979 | "name": "python", 980 | "nbconvert_exporter": "python", 981 | "pygments_lexer": "ipython3", 982 | "version": "3.6.3" 983 | } 984 | }, 985 | "nbformat": 4, 986 | "nbformat_minor": 1 987 | } 988 | -------------------------------------------------------------------------------- /6_CRF.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "# 条件随机场 CRF\n", 10 | "\n", 11 | "## 目录\n", 12 | "\n", 13 | "- [概率无向图模型](#概率无向图模型)\n", 14 | "- [条件随机场](#条件随机场)\n", 15 | " - [参数化形式](#线性链条件随机场的参数化形式)\n", 16 | " - [简化形式](#简化形式)\n", 17 | " - [矩阵形式](#矩阵形式)\n", 18 | "- [三个问题](#三个问题)\n", 19 | " - [概率计算问题](#概率计算问题)\n", 20 | " - [改进的迭代尺度法](#改进的迭代尺度法)\n", 21 | " - [BFGS算法](#BFGS算法)\n", 22 | " - [学习方法](#学习方法)\n", 23 | " - [预测算法](#预测算法)\n", 24 | " \n", 25 | "\n", 26 | "## 概率无向图模型\n", 27 | "\n", 28 | "回顾一下之前讲解的概率无向图模型:https://applenob.github.io/graph_model.html\n", 29 | "\n", 30 | "总结一下:\n", 31 | "\n", 32 | "- **最大团**:无向图$G$中任何两个结点都有边连接的结点子集称为团(clique)。若团$C$不能在加入任何一个结点使其称为一个更大的团,则称$C$为图$G$的一个最大团。\n", 33 | "- 概率无向图模型的**联合概率分**布可以表示成其最大团上的随机变量的函数的乘积形式。这也被称为概率无向图模型的**因子分解**。\n", 34 | "- $P(Y) = \\frac{1}{Z}\\prod_C\\psi_C(Y_C)$,其中,$Z$是规范化因子,$Z = \\sum_Y\\prod_C\\psi_C(Y_C)$,$\\psi_C(Y_C)$称为**势函数**,要求势函数是严格正的(因为涉及到累乘)。\n", 35 | "\n", 36 | "## 条件随机场\n", 37 | "\n", 38 | "条件随机场(Conditional Random Field, CRF)也是一种无向图模型。它是在给定随机变量$X$的条件下,随机变量$Y$的马尔科夫随机场。\n", 39 | "\n", 40 | "我们常用的是**线性链条件随机场**,多用于序列标注等问题。形式化定义:设$X=(X_1, X_2, ..., X_n)$,$Y=(Y_1, Y_2, ..., Y_n)$均为线性链表示的随机变量序列,若给在定随机变量序列$X$的条件下,随机变量序列$Y$的条件概率分布$P(Y|X)$构成条件随机场,即满足**马尔科夫性**:$P(Y_i|X, Y_1,...,Y_{i-1}, Y_{i+1}, ..., Y_n) = P(Y_i|X, Y_{i-1}, Y_{i+1})\\;\\;i=1,2, ..., n$(**这个式子是核心,充分理解这个式子和下面的图片**),则称$P(Y|X)$是线性链条件随机场。\n", 41 | "\n", 42 | "![](https://github.com/applenob/machine_learning_basic/raw/master/res/linear_crf.png)\n", 43 | "\n", 44 | "### 参数化形式\n", 45 | "\n", 46 | "$$P(y|x) = \\frac{1}{Z(x)}exp(\\sum_{i,k}\\lambda_kt_k(y_{i-1}, y_i, x, i)+\\sum_{i,l}\\mu_ls_l(y_i, x, i))$$\n", 47 | "\n", 48 | "其中,$Z(x) = \\sum_yexp(\\sum_{i,k}\\lambda_kt_k(y_{i-1}, y_i, x, i)+\\sum_{i,l}\\mu_ls_l(y_i, x, i))$,$t_k$是转移(transform)特征函数,依赖于当前和前一个位置,$s_l$是状态(state)特征函数,依赖于当前位置,$\\lambda_k$和$\\mu_l$是对应的权值。\n", 49 | "\n", 50 | "从模型的参数化形式可以看出,线性链条件随机场也是对数线性模型。\n", 51 | "\n", 52 | "### 简化形式\n", 53 | "\n", 54 | "所谓简化形式即,将局部特征特征统一成一个全局特征函数。\n", 55 | "\n", 56 | "设有$K_1$个转移特征,有$K_2$个状态特征,$K=K_1+K_2$。\n", 57 | "\n", 58 | "$$f_k(y_{i-1}, y_i, x, i) = \\left\\{\\begin{matrix}t_k(y_{i-1}, y_i, x, i),\\;\\;k=1,2,...,K_1\\\\ s_l(y_i, x, i),\\;\\; k=K_1+l,\\; l=1,2,..,K_2\\end{matrix}\\right.$$\n", 59 | "\n", 60 | "对所有在位置$i$的特征求和:\n", 61 | "\n", 62 | "$$f_k(y,x) = \\sum^n_{i=1}f_k(y_{i-1}, y_i, x, i), \\;\\; k=1,2,...,K$$\n", 63 | "\n", 64 | "用$w_k$表示特征$f_k(y,x)$的权值,即:$w_k = \\left\\{\\begin{matrix}\\lambda_k,\\;\\;k=1,2,...,K_1\\\\ \\mu_l,\\;\\;k=K_1+l;l=1,2,...,K_2\\end{matrix}\\right.$\n", 65 | "\n", 66 | "于是条件随机场又可以表示成:\n", 67 | "\n", 68 | "$$P(y|x) = \\frac{1}{Z(x)}exp\\sum^K_{k=1}w_kf_k(y,x)\\\\Z(x)=\\sum_y 
exp\\sum^K_{k=1}w_kf_k(y,x)$$\n", 69 | "\n", 70 | "如果用$w$表示权值向量,即$w=(w_1, ..., w_K)^T$,用$F(y,x)$表示全局特征向量,即$F(y,x) = (f_1(y,x),f_2(y,x),...,f_K(y,x))^T$,则条件随机场可以携程向量$w$和$F(y,x)$的内积的形式:\n", 71 | "$$P_w(y|x) = \\frac{exp(w\\cdot F(y,x))}{Z_w(x)}\\\\Z(x)=\\sum_yexp(w\\cdot F(y,x))$$\n", 72 | "\n", 73 | "### 矩阵形式\n", 74 | "\n", 75 | "引入特殊的起点和终点状态标记$y_0=start$,$y_{n+1}=stop$。\n", 76 | "\n", 77 | "对于观测序列$x$的每一个位置$i=1,2,...,n+1$定义$n+1$个$m$阶方阵($m$是标记$y_i$取值的个数)。\n", 78 | "- $M_i(x) = [M_i(y_{i-1}, y_i| x)]$\n", 79 | "- $M_i(y_{i-1}, y_i| x) = exp(W_i(y_{i-1}, y_i| x))$\n", 80 | "- $W_i(y_{i-1}, y_i| x) = \\sum_{k=1}^Kw_kf_k(y_i,y_i,x,i)$\n", 81 | "\n", 82 | "条件随机场的矩阵形式:\n", 83 | "\n", 84 | "$$P_w(y|x) = \\frac{1}{Z_w(x)}\\prod^{n+1}_{i=1}w_kf_k(y_{i-1},y_i,x,i)\\\\Z_w(x)=(M_1(x)M_2(x)...M_{n+1}(x))_{start, stop}$$\n", 85 | "\n", 86 | "即,简化了配分函数$Z_w(x)$的计算方式。" 87 | ] 88 | }, 89 | { 90 | "cell_type": "markdown", 91 | "metadata": {}, 92 | "source": [ 93 | "## 三个问题\n", 94 | "\n", 95 | "类似于隐马尔科夫模型(HMM),CRF也有典型的三个问题。对比二者在这三个问题的解决方法的不同,可以更深入理解这两个模型。\n", 96 | "\n", 97 | "- 1.**概率计算问题**:给定条件随机场$P(Y|X)$,输入序列$x$和输出序列$y$,计算条件概率$P(Y_i=y_i|x)$和$P(Y_{i-1}=y_{i-1}, Y_i=y_i|x)$和相应的数学期望。\n", 98 | "- 2.**学习问题**:给定训练数据集,估计条件随机场模型参数,即用**极大似然法**的方法估计参数。\n", 99 | "- 3.**预测问题**:给定条件随机场$P(Y|X)$和输入序列(观测序列)$x$,求条件概率最大的输出序列(标记序列)$y^*$。\n", 100 | "\n", 101 | "### 概率计算问题\n", 102 | "\n", 103 | "给定条件随机场$P(Y|X)$,输入序列$x$和输出序列$y$,计算条件概率$P(Y_i=y_i|x)$和$P(Y_{i-1}=y_{i-1}, Y_i=y_i|x)$。\n", 104 | "\n", 105 | "在这里我们可以明显看出,条件随机场直接计算**条件概率**,因此是判别模型;而HMM先由上一个状态**生成**下一个状态,再由下一个状态生成下一个输出,因此HMM是生成模型。\n", 106 | "\n", 107 | "类似于HMM,引入**前向-后向向量**:\n", 108 | "\n", 109 | "对每个下标$i=0,1,...,n+1$,定义前向向量$\\alpha_i(x)$:\n", 110 | "$$\\alpha_0(y|x) = \\left\\{\\begin{matrix} 1, \\;\\;y=start\\\\ 0, \\;\\;否则\\end{matrix}\\right.$$\n", 111 | "\n", 112 | "递推公式:\n", 113 | "$$\\alpha_i^T(y_i|x) = \\alpha_{i-1}^T(y_{i-1}|x)M_i(y_{i-1},y_i|x), \\;\\;i=1,2,...,n+1$$\n", 114 | "\n", 115 | "简单地表示:\n", 116 | "$$\\alpha_i^T(x) = \\alpha_{i-1}^T(x)M_i(x)$$\n", 117 | "\n", 118 | "第$i$个前向向量表示在位置$i$的标记是$y_i$,并且到位置$i$的前部分标记序列的非规范化概率。$y_i$的取值有$m$个,所以$\\alpha_i(x)$是$m$维列向量。\n", 119 | "\n", 120 | "类似地,对于每个下标$i=0,1,...,n+1$,定义前后向向量$\\beta_i(x)$:\n", 121 | "$$\\beta_{n+1}(y_{n+1}|x) = \\left\\{\\begin{matrix} 1, \\;\\;y_{n+1}=stop\\\\ 0, \\;\\;否则\\end{matrix}\\right.$$\n", 122 | "\n", 123 | "递推公式:\n", 124 | "$$\\beta_i(y_i|x) = M_{i+1}(y_i,y_{i+1}|x)\\beta_{i+1}(y_{i+1}|x), \\;\\;i=1,2,...,n+1$$\n", 125 | "\n", 126 | "简单地表示:\n", 127 | "$$\\beta_i(x) = M_{i+1}(x)\\beta_{i+1}(x)$$\n", 128 | "\n", 129 | "第$i$个后向向量表示在位置$i$的标记是$y_i$,并且从位置$i+1$到$n$的后部分标记序列的非规范化概率。\n", 130 | "\n", 131 | "用前向-后向向量表示配分函数:$Z(x)=\\alpha_n^T(x)\\cdot 1 = 1^T\\cdot\\beta_1(x)$\n", 132 | "\n", 133 | "**概率计算**:\n", 134 | "\n", 135 | "不同于HMM的概率计算,使用前向概率**或者**后向概率即可,这里计算需要**同时**使用前向向量和后向向量。\n", 136 | "\n", 137 | "$$P(Y_i=y_i|x) = \\frac{\\alpha_i^T(y_i|x)\\beta_i(y_i|x)}{Z(x)}$$\n", 138 | "\n", 139 | "$$P(Y_{i-1}=y_{i-1}, Y_i=y_i|x) = \\frac{\\alpha_{i-1}^T(y_{i-1}|x)M_i(y_{i-1}, y_i|x)\\beta_i(y_i|x)}{Z(x)}$$\n", 140 | "\n", 141 | "**期望值计算**:\n", 142 | "\n", 143 | "特征函数$f_k$关于条件分布$P(Y|X)$的期望:\n", 144 | "\n", 145 | "$$E_{P(Y|X)}[f_k] = \\sum_yP(y|x)f_k(y, x)\\\\ = \\sum_{i=1}^{n+1}\\sum_{y_{i-1}, y_i}f_k(y_{i-1}, y_i, x, i)\\frac{\\alpha_{i-1}^T(y_{i-1}|x)M_i(y_{i-1}, y_i|x)\\beta_i(y_i|x)}{Z(x)}\\\\k=1,2,..,K$$\n", 146 | "\n", 147 | "特征函数$f_k$关于联合分布$P(X, Y)$的期望:\n", 148 | "\n", 149 | "$$E_{P(X,Y)}[f_k] = \\sum_{x,y}P(x,y)\\sum_{i=1}^{n+1}f_k(y_{i-1}, y_i, x, i)\\\\=\\sum_x\\tilde 
P(x)\\sum_yP(y|x)\\sum^{n+1}_{i=1}f_k(y_{i-1}, y, x, i)\\\\=\\sum_x\\tilde P(x)\\sum_{i=1}^{n+1}\\sum_{y_{i-1}y_i}\\frac{\\alpha_{i-1}^T(y_{i-1}|x)M_i(y_{i-1}, y_i|x)\\beta_i(y_i|x)}{Z(x)}\\\\k=1,2,..,K$$\n" 150 | ] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "metadata": { 155 | "collapsed": true 156 | }, 157 | "source": [ 158 | "核心代码:\n", 159 | "\n", 160 | "前向后向和M矩阵都用保存其**log值**(因为它们本身的值可能很小,计算乘法可能下溢)。\n", 161 | "\n", 162 | "```python\n", 163 | "\"\"\"\n", 164 | "关键变量的尺寸,Y是标注空间的个数,K是特征函数的个数。\n", 165 | "all_features:\tlen(x_vec) + 1, Y, Y, K\n", 166 | "log_M_s:\t\tlen(x_vec) + 1, Y, Y\n", 167 | "log_alphas:\t\tlen(x_vec) + 1, Y\n", 168 | "log_betas:\t\tlen(x_vec) + 1, Y\n", 169 | "log_probs:\t\tlen(x_vec) + 1, Y, Y\n", 170 | "\"\"\"\n", 171 | "```\n", 172 | "\n", 173 | "$M$:\n", 174 | "\n", 175 | "```python\n", 176 | "log_M_s = np.dot(all_features, w)\n", 177 | "```\n", 178 | "前向向量初始化:\n", 179 | "\n", 180 | "```python\n", 181 | "alpha = alphas[0]\n", 182 | "alpha[start] = 0 # log1 = 0\n", 183 | "```\n", 184 | "\n", 185 | "前向向量的递推公式:\n", 186 | "\n", 187 | "```python\n", 188 | "alphas[t] = log_dot_vm(alpha, log_M_s[t - 1])\n", 189 | "```\n", 190 | "\n", 191 | "后向向量的初始化:\n", 192 | "\n", 193 | "```python\n", 194 | "beta = betas[-1]\n", 195 | "beta[end] = 0 # log1 = 0\n", 196 | "```\n", 197 | "\n", 198 | "后向向量的递推公式:\n", 199 | "\n", 200 | "```python\n", 201 | "betas[t] = log_dot_mv(log_M_s[t], beta)\n", 202 | "```\n", 203 | "\n", 204 | "其中:\n", 205 | "\n", 206 | "```python\n", 207 | "def log_dot_vm(loga, logM):\n", 208 | " \"\"\"通过log向量和log矩阵,计算log(向量^T 点乘 矩阵)\"\"\"\n", 209 | " return special.logsumexp(np.expand_dims(loga, axis=1) + logM, axis=0)\n", 210 | "\n", 211 | "\n", 212 | "def log_dot_mv(logM, logb):\n", 213 | " \"\"\"通过log向量和log矩阵,计算log(矩阵 点乘 向量)\"\"\"\n", 214 | " return special.logsumexp(logM + np.expand_dims(logb, axis=0), axis=1)\n", 215 | "```\n", 216 | "\n", 217 | "$Z$:\n", 218 | "\n", 219 | "```python\n", 220 | "log_Z = special.logsumexp(last)\n", 221 | "```\n", 222 | "\n", 223 | "注:`special.logsumexp`函数等价于`np.log(np.sum(np.exp(a), axis))`\n", 224 | "\n", 225 | "计算$P(Y_{i-1}=y_{i-1}, Y_i=y_i|x) = \\frac{\\alpha_{i-1}^T(y_{i-1}|x)M_i(y_{i-1}, y_i|x)\\beta_i(y_i|x)}{Z(x)}$:\n", 226 | "\n", 227 | "```python\n", 228 | "log_alphas1 = np.expand_dims(log_alphas, axis=2)\n", 229 | "log_betas1 = np.expand_dims(log_betas, axis=1)\n", 230 | "log_probs = log_alphas1 + log_M + log_betas1 - log_Z\n", 231 | "```\n", 232 | "\n", 233 | "计算特征函数$f_k$关于条件分布$P(Y|X)$的期望:\n", 234 | "\n", 235 | "```python\n", 236 | "exp_features = np.sum(np.exp(log_probs) * all_features, axis=(0, 1, 2))\n", 237 | "```\n", 238 | "\n", 239 | "特征函数$f_k$关于联合分布$P(X, Y)$的期望:\n", 240 | "\n", 241 | "```python\n", 242 | "# y_vec = [START] + y_vec + [END]\n", 243 | "yp_vec_ids = y_vec[:-1]\n", 244 | "y_vec_ids = y_vec[1:]\n", 245 | "emp_features = np.sum(all_features[range(length), yp_vec_ids, y_vec_ids], axis=0)\n", 246 | "```" 247 | ] 248 | }, 249 | { 250 | "cell_type": "markdown", 251 | "metadata": { 252 | "collapsed": true 253 | }, 254 | "source": [ 255 | "### 学习方法\n", 256 | "\n", 257 | "给定训练数据集,估计条件随机场模型参数,即用**极大似然法**的方法估计参数。\n", 258 | "\n", 259 | "这里学习的参数是$w$,应该对比最大熵的学习算法,HMM的有监督学习的参数估计很简单,参数估计的是三元组概率矩阵。\n", 260 | "\n", 261 | "#### 改进的迭代尺度法\n", 262 | "\n", 263 | "$$L(w) = L_{\\tilde P}(P_w) \\\\ = log\\prod_{x,y}P_w(y|x)^{\\tilde P(x,y)} \\\\ = \\sum_{x,y}\\tilde P(x,y)logP_w(y|x)\n", 264 | "\\\\ = \\sum_{x,y}[\\tilde P(x,y)\\sum_{k=1}^Kw_kf_k(x,y)-\\tilde P(x,y)logZ_w(x)] \\\\ = 
249 | { 250 | "cell_type": "markdown", 251 | "metadata": { 252 | "collapsed": true 253 | }, 254 | "source": [ 255 | "### 学习方法\n", 256 | "\n", 257 | "给定训练数据集,估计条件随机场模型参数,即用**极大似然**的方法估计参数。\n", 258 | "\n", 259 | "这里学习的参数是$w$,可以对比最大熵模型的学习算法;相比之下,HMM的有监督学习的参数估计很简单,估计的是三元组$(\pi, A, B)$的概率矩阵。\n", 260 | "\n", 261 | "#### 改进的迭代尺度法\n", 262 | "\n", 263 | "$$L(w) = L_{\tilde P}(P_w) \\ = log\prod_{x,y}P_w(y|x)^{\tilde P(x,y)} \\ = \sum_{x,y}\tilde P(x,y)logP_w(y|x)\n", 264 | "\\ = \sum_{x,y}[\tilde P(x,y)\sum_{k=1}^Kw_kf_k(x,y)-\tilde P(x,y)logZ_w(x)] \\ = \sum_{j=1}^N\sum_{k=1}^Kw_kf_k(y_j,x_j)-\sum_{j=1}^NlogZ_w(x_j)$$\n", 265 | "\n", 266 | "改进的迭代尺度法引入参数向量的增量向量:$\delta=(\delta_1, ..., \delta_K)^T$。\n", 267 | "\n", 268 | "类似于最大熵的迭代尺度法,引入两个方程:\n", 269 | "\n", 270 | "- **关于转移特征的方程**:$\sum_{x,y}\tilde P(x,y) \sum_{i=1}^{n+1}t_k(y_{i-1},y_i,x,i)=\sum_{x,y}\tilde P(x)P(y|x)\sum_{i=1}^{n+1}t_k(y_{i-1},y_i,x,i)exp(\delta_kT(x,y))\\k=1,2,...,K_1$\n", 271 | "- **关于状态特征的方程**:$\sum_{x,y}\tilde P(x,y) \sum_{i=1}^{n+1}s_l(y_i,x,i)=\sum_{x,y}\tilde P(x)P(y|x)\sum_{i=1}^{n+1}s_l(y_i,x,i)exp(\delta_{K_1+l}T(x,y))\\l=1,2,...,K_2$\n", 272 | "- 其中:$T(x,y) = \sum_kf_k(y,x) = \sum_{k=1}^K\sum_{i=1}^{n+1}f_k(y_{i-1}, y_i, x, i)$是某数据$(x,y)$出现的所有特征数的总和。\n", 273 | "\n", 274 | "具体算法流程:\n", 275 | "- 输入:特征函数:$t_1,...,t_{K_1}$,$s_1, ..., s_{K_2}$;经验分布$\tilde P(x,y)$。\n", 276 | "- 输出:参数估计值$\hat w$;模型$P_{\hat w}$。\n", 277 | "- 1.对于所有的$k \in \{1,2,...,K\}$,取初始值$w_k=0$\n", 278 | "- 2.对于每一$k \in \{1,2,...,K\}$:\n", 279 | "  - a.当$k = 1,2,...,K_1$时,令$\delta_k$是关于转移特征的方程的解;当$k = K_1+l\;l=1,...,K_2$时,令$\delta_k$是关于状态特征的方程的解。\n", 280 | "  - b.更新$w_k$:$w_k\leftarrow w_k+\delta_k$\n", 281 | "\n", 282 | "#### BFGS算法\n", 283 | "\n", 284 | "梯度函数:$g(w) = \sum_{x,y}\tilde P(x)P_w(y|x)f(x,y) - E_{\tilde P}(f)$\n", 285 | "\n", 286 | "具体算法流程:\n", 287 | "- 输入:特征函数$f_1,...,f_n$;经验分布$\tilde P(x,y)$。\n", 288 | "- 输出:参数估计值$\hat w$;模型$P_{\hat w}$。\n", 289 | "- 1.选定初始点$w^{(0)}$,取$B_0$为正定对称矩阵,置$k=0$。\n", 290 | "- 2.计算$g_k=g(w^{(k)})$,若$g_k=0$,则停止计算,否则转步骤3。\n", 291 | "- 3.由$B_kp_k=-g_k$,求出$p_k$。\n", 292 | "- 4.一维搜索:求$\lambda_k$使得:$f(w^{(k)}+\lambda_kp_k) = min_{\lambda\geq 0}f(w^{(k)}+\lambda p_k)$\n", 293 | "- 5.置$w^{(k+1)}=w^{(k)}+\lambda_kp_k$。\n", 294 | "- 6.计算$g_{k+1} = g(w^{(k+1)})$,若$g_{k+1}=0$,则停止计算;否则,求$B_{k+1}$:$B_{k+1}=B_k+\frac{y_ky_k^T}{y_k^T\delta_k}-\frac{B_k\delta_k\delta_k^TB_k}{\delta_k^TB_k\delta_k}$,其中,$y_k = g_{k+1}-g_k$,$\delta_k = w^{(k+1)}-w^{(k)}$\n", 295 | "- 7.置$k=k+1$,转到步骤3。" 296 | ] 297 | }, 298 | { 299 | "cell_type": "markdown", 300 | "metadata": { 301 | "collapsed": true 302 | }, 303 | "source": [ 304 | "关键代码:\n", 305 | "\n", 306 | "似然函数:\n", 307 | "```python\n", 308 | "likelihood += np.sum(log_M_s[range(length), yp_vec_ids, y_vec_ids]) - log_Z\n", 309 | "```\n", 310 | "\n", 311 | "训练时,直接使用scipy中的`optimize.fmin_l_bfgs_b`去最小化负对数似然:\n", 312 | "\n", 313 | "```python\n", 314 | "def train(self, x_vecs, y_vecs, debug=False):\n", 315 | "    vectorised_x_vecs, vectorised_y_vecs = self.create_vector_list(x_vecs, y_vecs)\n", 316 | "    l = lambda w: self.neg_likelihood_and_deriv(vectorised_x_vecs, vectorised_y_vecs, w)\n", 317 | "    val = optimize.fmin_l_bfgs_b(l, self.w)\n", 318 | "    if debug:\n", 319 | "        print(val)\n", 320 | "    self.w, _, _ = val\n", 321 | "    return self.w\n", 322 | "```\n", 323 | "\n", 324 | "`optimize.fmin_l_bfgs_b`的第一个参数是被优化的目标函数,这个函数需要返回函数值和梯度值,梯度值的计算:\n", 325 | "\n", 326 | "```python\n", 327 | "derivative += emp_features - exp_features\n", 328 | "```\n", 329 | "\n", 330 | "即特征函数关于经验分布的期望与关于模型分布的期望之差——这是对数似然的梯度;由于被优化的是负对数似然,最终返回时需整体取负号。" 331 | ] 332 | },
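`optimize.fmin_l_bfgs_b`的调用约定值得单独演示一下。下面用一个与CRF无关的假想玩具目标(非笔记本原有代码,仅演示接口):被优化函数一次性返回(函数值, 梯度),优化器返回三元组(最优点, 最优值, 附加信息):

```python
import numpy as np
from scipy import optimize

def obj_and_grad(w):
    """玩具目标:f(w) = ||w - 1||^2,同时返回函数值和梯度"""
    diff = w - 1.0
    return np.sum(diff ** 2), 2.0 * diff

w_opt, f_min, info = optimize.fmin_l_bfgs_b(obj_and_grad, np.zeros(5))
print(w_opt.round(3), round(f_min, 6))  # 接近全1向量,最优值接近0
```

在CRF的训练中,`neg_likelihood_and_deriv`扮演的就是`obj_and_grad`的角色:返回负对数似然及其梯度。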
"具体算法流程:\n", 349 | "\n", 350 | "- 输入:模型特征向量$F(y,x)$和权值向量$w$,观测序列$x=(x_1,...,x_n)$;\n", 351 | "- 输出:最优路径$y^*=(y_1^*, y_2^*, ..., y_n^*)$\n", 352 | "- 1.初始化非规范化概率:$\\delta_1(j) = w\\cdot F_1(y_0=start, y_1=j, x), \\;\\;\\;j=1,...,m$\n", 353 | "- 2.递推:对$i=1,2,...,n$:\n", 354 | " - $\\delta_i(l) = max_{1\\leq j \\leq m}\\{\\delta_{i-1}(j) + w\\cdot F_i(y_{i-1}=j,y_i=l, x)\\;\\;\\;l=1,2,...,m\\}$\n", 355 | " - 对应的路径:$\\Psi_i(l) = argmax_{1\\leq j \\leq m}\\{\\delta_{i-1}(j) + w\\cdot F_i(y_{i-1}=j,y_i=l, x)\\;\\;\\;l=1,2,...,m\\}$\n", 356 | "- 3.终止:\n", 357 | " - $max_y(w\\cdot F(y,x)) = max_{1\\leq j \\leq m}\\delta_n(j)$\n", 358 | " - $y^*_n = argmax_{1\\leq j \\leq m}\\delta_n(j)$\n", 359 | "- 4.返回路径:$y_i^* = \\Psi_{i+1}(y_{i+1}^*), \\;\\;i=n-1,n-2,...,1$" 360 | ] 361 | }, 362 | { 363 | "cell_type": "markdown", 364 | "metadata": { 365 | "collapsed": true 366 | }, 367 | "source": [ 368 | "核心代码:\n", 369 | "```python\n", 370 | "def predict(self, x_vec, debug=False):\n", 371 | " \"\"\"给定x,预测y。使用Viterbi算法\"\"\"\n", 372 | " # all_features, len(x_vec) + 1, Y, Y, K\n", 373 | " all_features = self.get_all_features(x_vec)\n", 374 | " # log_potential: len(x_vec) + 1, Y, Y 保存各个下标的非规范化概率\n", 375 | " log_potential = np.dot(all_features, self.w)\n", 376 | " T = len(x_vec)\n", 377 | " Y = len(self.labels)\n", 378 | " # Psi保存每个时刻最优情况的下标\n", 379 | " Psi = np.ones((T, Y), dtype=np.int32) * -1\n", 380 | " # 初始化\n", 381 | " delta = log_potential[0, 0]\n", 382 | " # 递推\n", 383 | " for t in range(1, T):\n", 384 | " next_delta = np.zeros(Y)\n", 385 | " for y in range(Y):\n", 386 | " w = delta + log_potential[t, :, y]\n", 387 | " Psi[t, y] = psi = w.argmax()\n", 388 | " next_delta[y] = w[psi]\n", 389 | " delta = next_delta\n", 390 | " # 回溯找到最优路径\n", 391 | " y = delta.argmax()\n", 392 | " trace = []\n", 393 | " for t in reversed(range(T)):\n", 394 | " trace.append(y)\n", 395 | " y = Psi[t, y]\n", 396 | " trace.reverse()\n", 397 | " return [self.labels[i] for i in trace]\n", 398 | "```" 399 | ] 400 | } 401 | ], 402 | "metadata": { 403 | "kernelspec": { 404 | "display_name": "Python [default]", 405 | "language": "python", 406 | "name": "python2" 407 | }, 408 | "language_info": { 409 | "codemirror_mode": { 410 | "name": "ipython", 411 | "version": 2 412 | }, 413 | "file_extension": ".py", 414 | "mimetype": "text/x-python", 415 | "name": "python", 416 | "nbconvert_exporter": "python", 417 | "pygments_lexer": "ipython2", 418 | "version": "2.7.14" 419 | } 420 | }, 421 | "nbformat": 4, 422 | "nbformat_minor": 1 423 | } 424 | -------------------------------------------------------------------------------- /7_GA.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# 遗传算法原理和应用(python实现)" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "求解函数 f(x) = x + 10*sin(5*x) + 7*cos(4*x) 在区间[0,9]的最大值。" 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": 1, 20 | "metadata": { 21 | "collapsed": true 22 | }, 23 | "outputs": [], 24 | "source": [ 25 | "#encoding=utf-8\n", 26 | "\n", 27 | "import math\n", 28 | "import random\n", 29 | "import operator\n", 30 | "\n", 31 | "class GA():\n", 32 | " def __init__(self, length, count):\n", 33 | " # 染色体长度\n", 34 | " self.length = length\n", 35 | " # 种群中的染色体数量\n", 36 | " self.count = count\n", 37 | " # 随机生成初始种群\n", 38 | " self.population = self.gen_population(length, count)\n", 39 | "\n", 40 | " def 
evolve(self, retain_rate=0.2, random_select_rate=0.5, mutation_rate=0.01):\n", 41 | " \"\"\"\n", 42 | " 进化\n", 43 | " 对当前一代种群依次进行选择、交叉并生成新一代种群,然后对新一代种群进行变异\n", 44 | " \"\"\"\n", 45 | " parents = self.selection(retain_rate, random_select_rate)\n", 46 | " self.crossover(parents)\n", 47 | " self.mutation(mutation_rate)\n", 48 | "\n", 49 | " def gen_chromosome(self, length):\n", 50 | " \"\"\"\n", 51 | " 随机生成长度为length的染色体,每个基因的取值是0或1\n", 52 | " 这里用一个bit表示一个基因\n", 53 | " \"\"\"\n", 54 | " chromosome = 0\n", 55 | " for i in xrange(length):\n", 56 | " chromosome |= (1 << i) * random.randint(0, 1)\n", 57 | " return chromosome\n", 58 | "\n", 59 | " def gen_population(self, length, count):\n", 60 | " \"\"\"\n", 61 | " 获取初始种群(一个含有count个长度为length的染色体的列表)\n", 62 | " \"\"\"\n", 63 | " return [self.gen_chromosome(length) for i in xrange(count)]\n", 64 | "\n", 65 | " def fitness(self, chromosome):\n", 66 | " \"\"\"\n", 67 | " 计算适应度,将染色体解码为0~9之间数字,代入函数计算\n", 68 | " 因为是求最大值,所以数值越大,适应度越高\n", 69 | " \"\"\"\n", 70 | " x = self.decode(chromosome)\n", 71 | " return x + 10*math.sin(5*x) + 7*math.cos(4*x)\n", 72 | "\n", 73 | " def selection(self, retain_rate, random_select_rate):\n", 74 | " \"\"\"\n", 75 | " 选择\n", 76 | " 先对适应度从大到小排序,选出存活的染色体\n", 77 | " 再进行随机选择,选出适应度虽然小,但是幸存下来的个体\n", 78 | " \"\"\"\n", 79 | " # 对适应度从大到小进行排序\n", 80 | " graded = [(self.fitness(chromosome), chromosome) for chromosome in self.population]\n", 81 | " graded = [x[1] for x in sorted(graded, reverse=True)]\n", 82 | " # 选出适应性强的染色体\n", 83 | " retain_length = int(len(graded) * retain_rate)\n", 84 | " parents = graded[:retain_length]\n", 85 | " # 选出适应性不强,但是幸存的染色体\n", 86 | " for chromosome in graded[retain_length:]:\n", 87 | " if random.random() < random_select_rate:\n", 88 | " parents.append(chromosome)\n", 89 | " return parents\n", 90 | "\n", 91 | " def crossover(self, parents):\n", 92 | " \"\"\"\n", 93 | " 染色体的交叉、繁殖,生成新一代的种群\n", 94 | " \"\"\"\n", 95 | " # 新出生的孩子,最终会被加入存活下来的父母之中,形成新一代的种群。\n", 96 | " children = []\n", 97 | " # 需要繁殖的孩子的量\n", 98 | " target_count = len(self.population) - len(parents)\n", 99 | " # 开始根据需要的量进行繁殖\n", 100 | " while len(children) < target_count:\n", 101 | " male = random.randint(0, len(parents)-1)\n", 102 | " female = random.randint(0, len(parents)-1)\n", 103 | " if male != female:\n", 104 | " # 随机选取交叉点\n", 105 | " cross_pos = random.randint(0, self.length)\n", 106 | " # 生成掩码,方便位操作\n", 107 | " mask = 0\n", 108 | " for i in xrange(cross_pos):\n", 109 | " mask |= (1 << i) \n", 110 | " male = parents[male]\n", 111 | " female = parents[female]\n", 112 | " # 孩子将获得父亲在交叉点前的基因和母亲在交叉点后(包括交叉点)的基因\n", 113 | " child = ((male & mask) | (female & ~mask)) & ((1 << self.length) - 1)\n", 114 | " children.append(child)\n", 115 | " # 经过繁殖后,孩子和父母的数量与原始种群数量相等,在这里可以更新种群。\n", 116 | " self.population = parents + children\n", 117 | "\n", 118 | " def mutation(self, rate):\n", 119 | " \"\"\"\n", 120 | " 变异\n", 121 | " 对种群中的所有个体,随机改变某个个体中的某个基因\n", 122 | " \"\"\"\n", 123 | " for i in xrange(len(self.population)):\n", 124 | " if random.random() < rate:\n", 125 | " j = random.randint(0, self.length-1)\n", 126 | " self.population[i] ^= 1 << j\n", 127 | "\n", 128 | "\n", 129 | " def decode(self, chromosome):\n", 130 | " \"\"\"\n", 131 | " 解码染色体,将二进制转化为属于[0, 9]的实数\n", 132 | " \"\"\"\n", 133 | " return chromosome * 9.0 / (2**self.length-1)\n", 134 | "\n", 135 | " def result(self):\n", 136 | " \"\"\"\n", 137 | " 获得当前代的最优值,这里取的是函数取最大值时x的值。\n", 138 | " \"\"\"\n", 139 | " graded = [(self.fitness(chromosome), chromosome) for chromosome in self.population]\n", 
140 | " graded = [x[1] for x in sorted(graded, reverse=True)]\n", 141 | " return ga.decode(graded[0])" 142 | ] 143 | }, 144 | { 145 | "cell_type": "code", 146 | "execution_count": 2, 147 | "metadata": { 148 | "collapsed": false 149 | }, 150 | "outputs": [ 151 | { 152 | "name": "stdout", 153 | "output_type": "stream", 154 | "text": [ 155 | "7.85672650701\n" 156 | ] 157 | } 158 | ], 159 | "source": [ 160 | "# 染色体长度为17, 种群数量为300\n", 161 | "ga = GA(17, 300)\n", 162 | "\n", 163 | "# 200次进化迭代\n", 164 | "for x in xrange(200):\n", 165 | " ga.evolve()\n", 166 | "\n", 167 | "print ga.result()" 168 | ] 169 | }, 170 | { 171 | "cell_type": "code", 172 | "execution_count": null, 173 | "metadata": { 174 | "collapsed": true 175 | }, 176 | "outputs": [], 177 | "source": [ 178 | "" 179 | ] 180 | } 181 | ], 182 | "metadata": { 183 | "anaconda-cloud": {}, 184 | "kernelspec": { 185 | "display_name": "Python [default]", 186 | "language": "python", 187 | "name": "python2" 188 | }, 189 | "language_info": { 190 | "codemirror_mode": { 191 | "name": "ipython", 192 | "version": 2.0 193 | }, 194 | "file_extension": ".py", 195 | "mimetype": "text/x-python", 196 | "name": "python", 197 | "nbconvert_exporter": "python", 198 | "pygments_lexer": "ipython2", 199 | "version": "2.7.12" 200 | } 201 | }, 202 | "nbformat": 4, 203 | "nbformat_minor": 0 204 | } -------------------------------------------------------------------------------- /9_SVM.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "# 如何优雅地手推SVM\n", 10 | "\n", 11 | "```\n", 12 | "手推svm一直是面试中常见的一个问题,但是却经常难住很多人。这里总结下为什么总是会手推失败的原因,以及如何更优雅地手推。\n", 13 | "```" 14 | ] 15 | }, 16 | { 17 | "cell_type": "markdown", 18 | "metadata": {}, 19 | "source": [ 20 | "## 1.手推失败原因总结\n", 21 | "\n", 22 | "1.没有重点。完整的推倒过程非常复杂,全部讲清楚是一个大工程,如何选择重点非常重要,如果只是花时间去记具体细节的公式化简,很容易手推失败。\n", 23 | "\n", 24 | "2.先验知识掌握不牢。 比如KKT/Hinge Loss/kernel。 这些知识应该拆开来分块学,避免学混了,以为kernel就是给SVM用的。\n", 25 | "\n", 26 | "3.在模型中使用trick的地方没有着重记忆。" 27 | ] 28 | }, 29 | { 30 | "cell_type": "markdown", 31 | "metadata": {}, 32 | "source": [ 33 | "## 2.先验知识\n", 34 | "\n", 35 | "### 2.1 拉格朗日对偶性\n", 36 | "\n", 37 | "**原始问题**:\n", 38 | "- $f(x),c_i(x),h_j(x)$是定义在$R^n$上的连续可微函数,考虑约束最优化问题:\n", 39 | "- $\\underset{x\\in R^n}{min}f(x)\\\\s.t.\\;\\;c_i(x)\\leq 0\\;\\;i=1,...,k\\\\h_j(x)=0\\;\\;j=1,...,l$\n", 40 | "- 即有$k$个不等式约束:$c_i(x)$和$l$个等式约束:$h_j(x)$。\n", 41 | "\n", 42 | "**拉格朗日函数**:\n", 43 | "- $L(x,\\alpha,\\beta)=f(x)+\\sum_{i=1}^k\\alpha_i c_i(x)+\\sum_{j=1}^l\\beta_jh_j(x)$\n", 44 | "- $\\alpha_i$和$\\beta_j$,称为**拉格朗日乘子**,$\\alpha_i\\geq 0$。\n", 45 | "\n", 46 | "**拉格朗日函数的极大极小问题**:\n", 47 | "- 令$\\theta_P(x) = \\underset{\\alpha,\\beta}{max}\\;L(x,\\alpha,\\beta)$\n", 48 | "- 如果存在$x$违反了原始问题的约束条件,则$\\theta_P(x)=\\infty$,当$x$不违反原始问题的约束条件,则$\\theta_P(x)=f(x)$\n", 49 | "- 因此:$\\underset{x}{min}\\theta_P(x)=\\underset{x}{min}\\underset{\\alpha,\\beta}{max}L(x,\\alpha,\\beta)$等价于原问题。\n", 50 | "\n", 51 | "**对偶问题**:\n", 52 | "- $\\underset{\\alpha,\\beta}{max}\\underset{x}{min}\\;L(x,\\alpha,\\beta)$\n", 53 | "- 定理:如果函数$f(x)$和$c_i(x)$是凸函数,$h_j(x)$是仿射函数,则$x^*,\\alpha^*,\\beta^*$是同时是原始问题和对偶问题的解的必要条件是满足KKT条件。\n", 54 | "\n", 55 | "**KKT条件**:\n", 56 | "- 1.拉格朗日函数对$x$,$λ$,$α$求偏导都为0:\n", 57 | " - $\\triangledown_xL(x^*,\\alpha^*,\\beta^*)=0$\n", 58 | " - $\\triangledown_{\\alpha}L(x^*,\\alpha^*,\\beta^*)=0$\n", 59 | " - $\\triangledown_{\\beta}L(x^*,\\alpha^*,\\beta^*)=0$\n", 60 | "- 
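为了让KKT条件更具体,这里补充一个极简的手算例子(非原笔记本内容,仅作示意):考虑$\underset{x\in R}{min}\;x^2$,约束$c(x)=1-x\leq 0$。拉格朗日函数为

$$L(x,\alpha)=x^2+\alpha(1-x)$$

由平稳性条件$\triangledown_xL=2x-\alpha=0$得$\alpha=2x$;由对偶互补条件$\alpha(1-x)=0$和原始可行性$x\geq 1$,只能取$x^*=1$,于是$\alpha^*=2>0$满足对偶可行性,所有KKT条件同时成立,最优值为$1$。后面SVM的推导,本质上就是把同样的检验套到约束$c_i(w,b)=1-y_i(wx_i+b)\leq 0$上。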
"### 2.2 Hinge Loss\n", 63 | "\n", 64 | "参考[wikipedia](https://en.wikipedia.org/wiki/Hinge_loss)以及[这篇博客](https://quomodocumque.wordpress.com/2016/01/23/ranking-mathematicians-by-hinge-loss/)和[这篇paper](https://arxiv.org/pdf/1512.08949.pdf)。\n", 65 | "\n", 66 | "对于二分类问题:\n", 67 | "\n", 68 | "$HingeLoss = max(0, 1-y_{true}\cdot y_{pred})$,也就是说,如果$y_{true}\cdot y_{pred}>1$,则$HingeLoss=0$。在svm中,$y_{true}\cdot y_{pred}=y_{true}(w\cdot x+b)$,即函数间隔。也就是说,**我们只要求函数间隔大于1即可**,具体在svm上的应用,后文继续。\n", 69 | "\n", 70 | "![](https://i.stack.imgur.com/lNgJE.png)\n", 71 | "\n", 72 | "对于ranking问题:\n", 73 | "\n", 74 | "$HingeLoss(f) = max(0, 1-f(a)+f(a'))$,下面介绍一下ranking场景下$HingeLoss$的intuition。\n", 75 | "\n", 76 | "三元组$(r,a,a')$:$r$代表rater(评分人),$a$和$a'$代表两个app,并且$a$的评分高于$a'$。我们需要找到一个ranking方式,使得对所有$a > a'$,都有$f(a) > f(a')$。\n", 77 | "\n", 78 | "最直接地,会想到0-1损失函数,即$mistake_j(f) = 1, \; if\; f(a) < f(a') + 1$,$mistake_j(f) = 0, \;if\; f(a) \geq f(a') + 1$,然后最终的损失函数是:$M = \sum_j mistake_j(f)$。\n", 79 | "\n", 80 | "但是这么做,M是**非凸**的,直接使用梯度下降去优化很可能只能获得局部最优。\n", 81 | "\n", 82 | "所以使用$hinge_j(f) = max(0, 1-f(a)+f(a'))$来替换$mistake_j(f)$,这样最终的损失函数是**凸函数**。\n", 83 | "\n", 84 | "### 2.3 Kernel Trick\n", 85 | "\n", 86 | "简单地说,Kernel Trick通过一个**非线性变换**,将**输入空间(欧氏空间$R^n$或者离散集合)**映射到**特征空间(希尔伯特空间$Η$)**,一般是**升维**的映射,所以有些人会说核技巧就是升维的,但这个说法并不严谨。\n", 87 | "\n", 88 | "![](https://www.researchgate.net/profile/Gokmen_Zararsiz/publication/258856315/figure/fig8/AS:297028224602116@1447828455490/Fig-8-The-kernel-trick-of-SVM-28-The-linearly-inseparable-data-in-two-dimensions-can.png)\n", 89 | "\n", 90 | "**核函数**:\n", 91 | "\n", 92 | "设$X$是输入空间(欧氏空间$R^n$或者离散集合),$H$是特征空间(希尔伯特空间),若存在一个从$X$到$H$的映射,即$φ(x):X\rightarrow H$,使得对所有$x,z∈X$,有$K(x,z)=φ(x)\cdot φ(z)$,也就是说**核函数等于映射后两个向量的内积**。\n", 93 | "\n", 94 | "**核技巧**:\n", 95 | "\n", 96 | "核技巧不显式地定义映射函数$φ(x)$,只定义$K(x,z)$,这样计算更容易。\n", 97 | "\n", 98 | "**常用的核函数**:\n", 99 | "\n", 100 | "1.多项式核函数(polynomial kernel function):$K(x,z) = (x\cdot z)^p$\n", 101 | "\n", 102 | "2.高斯核函数(Gaussian kernel function):$K(x,z) = exp(-\frac{||x-z||^2}{2σ^2})$\n" 103 | ] 104 | }, 105 | { 106 | "cell_type": "markdown", 107 | "metadata": {}, 108 | "source": [ 109 | "## 3 具体推导\n", 110 | "\n", 111 | "### 3.1 硬间隔最大化\n", 112 | "\n", 113 | "对于二分类线性可分的问题,我们想要找到一个分割超平面,使得超平面一侧的点都是正类,另一侧的点都是负类。\n", 114 | "\n", 115 | "**分割超平面**的表达式:$wx+b=0$。\n", 116 | "\n", 117 | "svm的独特之处在于,它给出了"什么样的分割超平面最好"的明确评价标准,即:**两边离超平面最近的点,它们离超平面的距离要最远**。用通俗一点的话类比:如果要比较酒店的服务质量,可以比较它们的差评,差评相对更好的那家,就认为服务质量更可靠。两边离分割超平面最近的点即**支撑向量(support vector)**。\n", 118 | "\n", 119 | "假设我们找到了这些点,并做两个平行于分割超平面的**支撑平面**。\n", 120 | "\n", 121 | "两个支撑平面的表达式:\n", 122 | "\n", 123 | "$wx+b=1$,$wx+b=-1$;\n", 124 | "\n", 125 | "![](http://4.bp.blogspot.com/-v4XFRpY7w-U/VRKS58QsTJI/AAAAAAAAnQU/H4WT8lzfQn4/s1600/svm.png)\n", 126 | "\n", 127 | "上面的支撑平面使用了第一个**trick**,也就是限定支撑平面等式右边的常数是$1$和$-1$,这是利用了$w$和$b$可以等比例缩放的性质,固定支撑平面的系数,可以方便后续计算。\n", 128 | "\n", 129 | "对于所有训练数据:\n", 130 | "\n", 131 | "要满足$wx_++b≥1$,$wx_-+b≤-1$;\n", 132 | "\n", 133 | "因为$y_+=1$,$y_-=-1$,可以使用一个**trick**将上面两个式子合成一个:\n", 134 | "\n", 135 | "$y(wx+b)≥1$\n", 136 | "\n", 137 | "下面是个**重点**,来推导两个支撑平面的间隔:由支撑向量满足$wx_++b=1$、$wx_-+b=-1$,\n", 138 | "\n", 139 | "$width=(x_+-x_-)\cdot\frac{w}{||w||}=\frac{x_+w}{||w||}-\frac{x_-w}{||w||}=\frac{(1-b)-(-1-b)}{||w||}=\frac{2}{||w||}$\n", 140 | "\n", 141 | "而最大化$\frac{2}{||w||}$和最小化$\frac{1}{2}||w||^2$是等价的,至此问题转换成了一个**带不等式约束的凸优化问题**,其中的不等式约束是:$y(wx+b)≥1$。\n", 142 | "\n", 143 | 
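在进入拉格朗日推导之前,可以先用`sklearn`直观感受一下硬间隔最大化的解长什么样(非原笔记本代码,仅作示意;用很大的$C$来近似硬间隔):

```python
import numpy as np
from sklearn.svm import SVC

# 线性可分的玩具数据:最近的异类点对是 (1,1) 和 (3,3)
X = np.array([[0., 0.], [1., 1.], [3., 3.], [4., 4.]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

print(clf.support_vectors_)       # [[1. 1.] [3. 3.]],即支撑向量
print(w, b)                       # 约 [0.5 0.5] 和 -2
print(2 / np.linalg.norm(w))      # 间隔宽度 2/||w|| ≈ 2.83,恰为两支撑向量间的距离
```

手算验证:支撑向量满足$wx_++b=1$、$wx_-+b=-1$,联立解得$w=(0.5,0.5)$、$b=-2$,与代码输出一致。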
"参考上面介绍的**KKT条件**,拉格朗日函数为:$L(w,b,α)=\\frac{1}{2}||w||^2-\\sum_1^Nα_iy_i(wx_i+b)+\\sum_1^Nα_i$,问题转换为:$\\underset{w,b}{min}\\;\\underset{α,α\\geq 0}{max}L(w,b,α)$。令拉格朗日函数对$w$和$b$求偏导,并使之为0,可以得到:$w=\\sum_i^Nα_iy_ix_i$,$\\sum_i^Nα_iy_i=0$。\n", 144 | "\n", 145 | "从$w=\\sum_i^Nα_iy_ix_i$可以看出,模型的参数$w$可以完全用数据和$α$来计算,模型在优化的时候,**保存的参数是$α$**。优化完$α$以后,直接计算出$w$即可。\n", 146 | "\n", 147 | "继续回到推导上来,将上面的两个式子带回到拉格朗日函数,化简得到:$L(w,b,α)=-\\frac{1}{2}\\sum_i^N\\sum_j^Nα_iα_jy_iy_j(x_i\\cdot x_j)+\\sum_i^Nα_i$。\n", 148 | "\n", 149 | "问题通过引入**KKT条件**做了一番推导以后可以归纳成:\n", 150 | "\n", 151 | "$minL(α) = \\frac{1}{2}\\sum_i^N\\sum_j^Nα_iα_jy_iy_j(x_i\\cdot x_j)-\\sum_i^Nα_i$\n", 152 | "\n", 153 | "约束条件:$\\sum_i^Nα_iy_i=0$和$α_i\\geq 0$。\n", 154 | "\n", 155 | "推导到这里,可以直接把问题交给优化算法SMO了。\n", 156 | "\n", 157 | "SMO算法就不展开说了,[这篇文章](https://zhuanlan.zhihu.com/p/23068673)不错,可以参考。\n", 158 | "\n", 159 | "### 3.2 软间隔最大化\n", 160 | "\n", 161 | "硬间隔最大化的一个很难实现的前提是:**线性可分**。现实中很多数据正负类相互交缠,不太可能严格满足线性可分,这个时候就需要**软间隔最大化**。\n", 162 | "\n", 163 | "可以认为存在一些特异点(outlier),去除了这些特异点之后,模型依然是可分的。\n", 164 | "\n", 165 | "于是引入一个**松弛变量$ξ_i$**,约束条件变成:$y_i(wx_i+b)≥1-ξ_i$。\n", 166 | "\n", 167 | "再引入**惩罚参数**:$C>0$,目标函数变成:$min\\;\\frac{1}{2}||w||^2+C\\sum_i^Nξ_i$。\n", 168 | "\n", 169 | "![](http://opencv-python-tutroals.readthedocs.io/en/latest/_images/svm_basics3.png)\n", 170 | "\n", 171 | "接下来的推导和硬间隔的类似,最终的优化式子不变,只有其中一个约束变成:$0 \\leq α_i\\leq C$\n", 172 | "\n", 173 | "得出了这个结果之后,我们终于到了Hinge Loss了,让我们来看看软间隔最大化和Hinge Loss的关系:\n", 174 | "\n", 175 | "**软间隔最大化**:\n", 176 | "\n", 177 | "目标函数:\n", 178 | "\n", 179 | "$\\underset{w,b,ξ}{min}\\frac{1}{2}||w||^2+C\\sum_i^Nξ_i$\n", 180 | "\n", 181 | "约束条件:\n", 182 | "\n", 183 | "$y_i(wx_i+b)≥1-ξ_i$,$ξ_i \\geq 0$\n", 184 | "\n", 185 | "**Hinge Loss**:\n", 186 | "\n", 187 | "目标函数:\n", 188 | "\n", 189 | "$min \\sum_i^N max(0, 1-y_i(wx_i+b)) + λ||w||^2$ \n", 190 | "\n", 191 | "**证明等价**:从Hinge Loss往回推导,令$1-y_i(wx_i+b)=ξ_i$,且$ξ_i \\geq 0$,于是有$max(0, 1-y_i(wx_i+b))=max(0,ξ_i)=ξ_i$,所以Hinge Loss变成了$\\underset{w,b}{min}\\; \\sum_i^Nξ_i + λ||w||^2$。取$λ=\\frac{1}{2C}$,又写成:$\\frac{1}{C}(C\\sum_i^Nξ_i + \\frac{1}{2}||w||^2)$,与**软间隔最大化**等价。" 192 | ] 193 | }, 194 | { 195 | "cell_type": "markdown", 196 | "metadata": { 197 | "collapsed": true 198 | }, 199 | "source": [ 200 | "### 非线性支持向量机\n", 201 | "\n", 202 | "从推导后的式子可以看出,目标函数只使用了数据集两两点乘的结果。这样我们可以直接使用核函数来代替这个点乘,实现空间非线性映射。\n", 203 | "\n", 204 | "**非线性支持向量机表述**:\n", 205 | "\n", 206 | "目标函数:\n", 207 | "\n", 208 | "$minL(α) = \\frac{1}{2}\\sum_i^N\\sum_j^Nα_iα_jy_iy_jK(x_i,x_j)-\\sum_i^Nα_i$\n", 209 | "\n", 210 | "约束条件:$\\sum_i^Nα_iy_i=0$和$0 \\leq α_i \\leq C$。\n", 211 | "\n", 212 | "分割平面:\n", 213 | "\n", 214 | "$\\sum_i^Nα_iy_iK(x,x_i)+b$" 215 | ] 216 | }, 217 | { 218 | "cell_type": "code", 219 | "execution_count": 1, 220 | "metadata": {}, 221 | "outputs": [ 222 | { 223 | "name": "stdout", 224 | "output_type": "stream", 225 | "text": [ 226 | "done\n" 227 | ] 228 | } 229 | ], 230 | "source": [ 231 | "print \"done\"" 232 | ] 233 | } 234 | ], 235 | "metadata": { 236 | "anaconda-cloud": {}, 237 | "kernelspec": { 238 | "display_name": "Python [default]", 239 | "language": "python", 240 | "name": "python2" 241 | }, 242 | "language_info": { 243 | "codemirror_mode": { 244 | "name": "ipython", 245 | "version": 2 246 | }, 247 | "file_extension": ".py", 248 | "mimetype": "text/x-python", 249 | "name": "python", 250 | "nbconvert_exporter": "python", 251 | "pygments_lexer": "ipython2", 252 | "version": "2.7.14" 253 | } 254 | }, 255 | "nbformat": 4, 256 | "nbformat_minor": 1 257 | } 258 | 
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # 基础机器学习模型的原理介绍和python实现 2 | 3 | - [机器学习中的Monte-Carlo方法](https://github.com/applenob/machine_learning_basic/blob/master/1_MCMC.ipynb) 4 | - [LDA主题模型学习总结](https://github.com/applenob/machine_learning_basic/blob/master/2_LDA.ipynb) 5 | - [Logistic Regression 学习总结](https://github.com/applenob/machine_learning_basic/blob/master/3_Logistic_Regression.ipynb) 6 | - [隐马尔科夫模型(HMM)及其Python实现](https://github.com/applenob/machine_learning_basic/blob/master/5_HMM.ipynb) 7 | - [遗传算法原理和应用(python实现)](https://github.com/applenob/machine_learning_basic/blob/master/7_GA.ipynb) 8 | - [PCA](https://github.com/applenob/machine_learning_basic/blob/master/8_PCA.ipynb) 9 | - [如何优雅地手推SVM](https://github.com/applenob/machine_learning_basic/blob/master/9_SVM.ipynb) 10 | - [决策树基础](https://github.com/applenob/machine_learning_basic/blob/master/10_Tree_Basic.ipynb) 11 | - [决策树模型的各种Ensemble](https://github.com/applenob/machine_learning_basic/blob/master/11_Tree_Ensemble.ipynb) 12 | - [EM算法总结](https://github.com/applenob/machine_learning_basic/blob/master/12_EM.ipynb) 13 | - [图模型总结](https://github.com/applenob/machine_learning_basic/blob/master/13_graph.ipynb) 14 | - [迁移学习入门](https://github.com/applenob/machine_learning_basic/blob/master/14_tran_learn.ipynb) 15 | - [机器学习基础知识汇总](https://github.com/applenob/machine_learning_basic/blob/master/15_interview.ipynb) 16 | - [Max Entropy学习总结](https://github.com/applenob/machine_learning_basic/blob/master/16_max_entropy.ipynb) -------------------------------------------------------------------------------- /book/AndrieuFreitasDoucetJordan2003.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/book/AndrieuFreitasDoucetJordan2003.pdf -------------------------------------------------------------------------------- /iris: -------------------------------------------------------------------------------- 1 | digraph Tree { 2 | node [shape=box] ; 3 | 0 [label="X[2] <= 2.6\ngini = 0.665\nsamples = 105\nvalue = [36, 32, 37]"] ; 4 | 1 [label="gini = 0.0\nsamples = 36\nvalue = [36, 0, 0]"] ; 5 | 0 -> 1 [labeldistance=2.5, labelangle=45, headlabel="True"] ; 6 | 2 [label="X[3] <= 1.65\ngini = 0.497\nsamples = 69\nvalue = [0, 32, 37]"] ; 7 | 0 -> 2 [labeldistance=2.5, labelangle=-45, headlabel="False"] ; 8 | 3 [label="X[2] <= 5.0\ngini = 0.161\nsamples = 34\nvalue = [0, 31, 3]"] ; 9 | 2 -> 3 ; 10 | 4 [label="gini = 0.0\nsamples = 30\nvalue = [0, 30, 0]"] ; 11 | 3 -> 4 ; 12 | 5 [label="gini = 0.375\nsamples = 4\nvalue = [0, 1, 3]"] ; 13 | 3 -> 5 ; 14 | 6 [label="X[2] <= 4.85\ngini = 0.056\nsamples = 35\nvalue = [0, 1, 34]"] ; 15 | 2 -> 6 ; 16 | 7 [label="gini = 0.375\nsamples = 4\nvalue = [0, 1, 3]"] ; 17 | 6 -> 7 ; 18 | 8 [label="gini = 0.0\nsamples = 31\nvalue = [0, 0, 31]"] ; 19 | 6 -> 8 ; 20 | } -------------------------------------------------------------------------------- /iris.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/iris.pdf -------------------------------------------------------------------------------- /max_entropy_data.txt: -------------------------------------------------------------------------------- 1 | no sunny hot 
high FALSE 2 | no sunny hot high TRUE 3 | yes overcast hot high FALSE 4 | yes rainy mild high FALSE 5 | yes rainy cool normal FALSE 6 | no rainy cool normal TRUE 7 | yes overcast cool normal TRUE 8 | no sunny mild high FALSE 9 | yes sunny cool normal FALSE 10 | yes rainy mild normal FALSE 11 | yes sunny mild normal TRUE 12 | yes overcast mild high TRUE 13 | yes overcast hot normal FALSE 14 | no rainy mild high TRUE 15 | -------------------------------------------------------------------------------- /res/Hinge_loss_vs_zero_one_loss.svg.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/Hinge_loss_vs_zero_one_loss.svg.png -------------------------------------------------------------------------------- /res/bayes_unigram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/bayes_unigram.png -------------------------------------------------------------------------------- /res/box_muller.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/box_muller.png -------------------------------------------------------------------------------- /res/crf.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/crf.png -------------------------------------------------------------------------------- /res/dag.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/dag.png -------------------------------------------------------------------------------- /res/detail-balance.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/detail-balance.png -------------------------------------------------------------------------------- /res/doc-topic-word.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/doc-topic-word.png -------------------------------------------------------------------------------- /res/dtree.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/dtree.jpg -------------------------------------------------------------------------------- /res/expectation_maximization.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/expectation_maximization.png -------------------------------------------------------------------------------- /res/full_con.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/full_con.png -------------------------------------------------------------------------------- /res/gibbs2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/gibbs2.png -------------------------------------------------------------------------------- /res/graph.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/graph.png -------------------------------------------------------------------------------- /res/hmm.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/hmm.jpg -------------------------------------------------------------------------------- /res/lda.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/lda.png -------------------------------------------------------------------------------- /res/lda_gibbs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/lda_gibbs.png -------------------------------------------------------------------------------- /res/linear_crf.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/linear_crf.png -------------------------------------------------------------------------------- /res/maximum_likelihood.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/maximum_likelihood.png -------------------------------------------------------------------------------- /res/multi_task.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/multi_task.png -------------------------------------------------------------------------------- /res/prof.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/prof.png -------------------------------------------------------------------------------- /res/rejection_sampling.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/rejection_sampling.png -------------------------------------------------------------------------------- /res/tree_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/tree_1.png 
-------------------------------------------------------------------------------- /res/tree_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/tree_2.png -------------------------------------------------------------------------------- /res/ug.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/applenob/machine_learning_basic/598a9253d777171c92b8f45bf1d9a5eea760aa91/res/ug.png -------------------------------------------------------------------------------- /test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | import sklearn 3 | from sklearn import decomposition 4 | 5 | decomposition.PCA --------------------------------------------------------------------------------