├── .gitattributes
├── 机器学习
│   ├── KNN
│   │   ├── KNN算法.R
│   │   └── KNN算法简述
│   ├── K均值聚类
│   │   ├── k均值聚类
│   │   └── k均值聚类(青少年).R
│   ├── logistic
│   │   ├── Roc图.png
│   │   ├── logistics原理
│   │   ├── logistics回归分类.R
│   │   ├── logistics理论
│   │   ├── 一元,多元回归分析
│   │   ├── 主成分思想
│   │   └── 回归分析常规算法
│   ├── 分而治之-应用决策树和规则分类
│   │   ├── 决策树C50.R
│   │   ├── 决策树与规则分类
│   │   └── 规则学习.R
│   ├── 回归方法
│   │   ├── psych--散点矩阵图.png
│   │   ├── 回归树与模型数葡萄酒评级.R
│   │   ├── 多元回归系数最优解.R
│   │   ├── 广义线性模型.png
│   │   ├── 广义线性模式.png
│   │   └── 数值回归
│   ├── 基于关联规则的购物篮分析
│   │   ├── rules.csv
│   │   ├── 关联规则
│   │   └── 关联规则挖掘.R
│   ├── 提高模型性能
│   │   ├── bagging集成学习.R
│   │   ├── boosting.R
│   │   ├── caret自动参数调整.R
│   │   ├── 提高模型性能
│   │   ├── 随机森林
│   │   └── 随机森林caret.R
│   ├── 支持向量机
│   │   ├── SVM核函数.docx
│   │   ├── SVM线性可分.docx
│   │   ├── 支持向量机原理
│   │   └── 支持向量机(字符识别).R
│   ├── 时间序列模型
│   │   ├── 偏相关图.png
│   │   ├── 时间序列
│   │   ├── 时间序列(arima算法).R
│   │   ├── 时间预测.png
│   │   └── 自相关.png
│   ├── 概率学习-朴素贝叶斯分类
│   │   ├── 文本处理函数.R
│   │   ├── 朴素贝叶斯简述.R
│   │   └── 贝叶斯
│   ├── 模型性能评价
│   │   └── 模型性能评价度量指标
│   ├── 电影协同过率推荐
│   │   ├── 协同过滤推荐算法
│   │   ├── 电子商务智能推荐(协同过滤)
│   │   └── 电影协同过滤推荐算法.R
│   └── 神经网络
│       ├── bp神经网络.R
│       ├── hidden=5.png
│       ├── plot.png
│       └── 神经网络
└── 简介--初始--机器学习

--------------------------------------------------------------------------------
/.gitattributes:
--------------------------------------------------------------------------------
# Auto detect text files and perform LF normalization
* text=auto

--------------------------------------------------------------------------------
/机器学习/KNN/KNN算法.R:
--------------------------------------------------------------------------------
#-------- Diagnosing breast cancer with the kNN algorithm --------

#-------- Typical kNN applications: computer vision (face recognition, optical
# character recognition), or predicting whether a person will like a
# recommended movie or piece of music.

# Suited to classification tasks where the relationships between the features
# and the target classes are numerous and complicated, yet items of the same
# class tend to be very similar to one another.
# Load the class package for knn(); load the gmodels package for CrossTable().

library(class)
library(gmodels)

# Read in the data
wbcd <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 03\\wisc_bc_data.csv', header = TRUE)

# Drop the first column: it is an ID with no predictive meaning
wbcd <- wbcd[-1]
wbcd$diagnosis <- factor(wbcd$diagnosis, levels = c('B', 'M'), labels = c('Benign', 'Malignant'))

# Start with min-max normalization
normalize <- function(x) {
  return((x - min(x)) / (max(x) - min(x)))
}

# Apply lapply() to normalize all of the feature columns
wbcd_n <- as.data.frame(lapply(wbcd[2:31], normalize))

# Split the data into training and test sets
data <- sample(2, nrow(wbcd_n), replace = T, prob = c(0.7, 0.3))
# Training set
wbcd_train <- wbcd_n[data == 1, ]
# Test set
wbcd_test <- wbcd_n[data == 2, ]

# Store the class labels
wbcd_train_lables <- wbcd[data == 1, 1]
wbcd_test_lables <- wbcd[data == 2, 1]

# Build the model. Usage: knn(train, test, cl, k), where cl is the factor of
# class labels. A common starting point is k = sqrt(number of training cases)
# (sqrt(383) here), rounded to an odd number.
wbcd_pred <- knn(wbcd_train, wbcd_test, wbcd_train_lables, k = 21)

# Confusion matrix and accuracy
m1 <- table(wbcd_pred, wbcd_test_lables)
sum(diag(m1)) / sum(m1)

#----------------- Evaluating model performance --------------------

# Build a cross table
CrossTable(wbcd_pred, wbcd_test_lables, prop.chisq = F)

#----------------- Improving performance ---------------------------
# -- 1. Use z-score standardization: scale()
# -- 2. Test different values of k
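# A minimal sketch of the two follow-ups listed above (an addition, not part
# of the original script; it assumes the objects wbcd, data,
# wbcd_train_lables and wbcd_test_lables created earlier):
wbcd_z <- as.data.frame(scale(wbcd[-1]))  # z-score standardize the 30 features
wbcd_train_z <- wbcd_z[data == 1, ]
wbcd_test_z <- wbcd_z[data == 2, ]
for (k in c(1, 5, 11, 15, 21, 27)) {      # compare several odd values of k
  pred <- knn(wbcd_train_z, wbcd_test_z, wbcd_train_lables, k = k)
  cat('k =', k, ' accuracy =', mean(pred == wbcd_test_lables), '\n')
}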
--------------------------------------------------------------------------------
/机器学习/KNN/KNN算法简述:
--------------------------------------------------------------------------------
---- Lazy learning -- nearest neighbor classification

1. Lazy learning:
Classifiers based on nearest-neighbor methods are considered lazy learners: technically there is no abstraction step -- both abstraction and generalization are skipped.
A lazy learner is not really learning anything; it simply stores the training data verbatim. The training phase is therefore very fast, but making predictions is comparatively slow. Lazy learning is also known as instance-based learning or rote learning.
Because instance-based learners do not build a model, the approach is classed as non-parametric -- there are no parameters to be learned from the data. Non-parametric methods limit our ability to understand how the classifier uses the data; on the other hand, they do not force the data into a preconceived, and possibly biased, functional form.

2. The basic idea of nearest-neighbor classification:
Similar things tend to have similar properties.
** The key to the classifier is measuring the similarity of cases by distance; this is what the kNN classifier does.
Nearest-neighbor classification assigns an unlabeled case to the class of the labeled cases it most resembles.

3. The kNN algorithm:
(1) Strengths: simple and effective; makes no assumptions about the data distribution; fast training phase.
Weaknesses: produces no model, so there is nothing to tune or weigh; an appropriate k must be chosen; the classification phase is slow; nominal and ordinal features need extra preprocessing.

(2) Concept: classify an unlabeled example using the information of its k nearest labeled neighbors; kNN treats the features as coordinates in a multidimensional space.

(3) How it works:
Measuring similarity with distance:
We need a distance function -- a formula that measures the similarity of two cases.
kNN traditionally uses Euclidean distance (the straight-line shortest path); Manhattan distance, cosine similarity, etc. can also be used.
Euclidean distance:
dist = sqrt((x2 - x1)^2 + (y2 - y1)^2)

Choosing an appropriate k:
The number of neighbors k used by the algorithm determines how well the model will generalize to future data.
The balance between overfitting and underfitting the training data is known as the bias-variance tradeoff.
Small k: any noise in the data has a large influence on the prediction.
Large k: the approximation error grows; distant training cases influence the prediction, and subtle but important patterns are easily missed.
k should preferably be odd, which prevents ties between the candidate classes.

Picking k in practice:
Start with k equal to the square root of the number of training cases.
Then test several values of k and compare the errors they produce.

Preparing data for kNN:
Rescale the features: min-max normalization or z-score standardization (scale()).
Dummy-code the nominal features.

--------------------------------------------------------------------------------
/机器学习/K均值聚类/k均值聚类:
--------------------------------------------------------------------------------
==------- K-means clustering -- finding groups in data ----------------------------------------
Learning goals:
Clustering as unsupervised learning
How clustering defines groups, and how the groups are identified by k-means
Applying k-means to market segmentation: finding clusters of teenage social-media users

1. Understanding clustering:
Clustering is an unsupervised machine-learning task that automatically divides data into clusters -- groups of items with similar tendencies.
The guiding principle is that the items inside a cluster should be very similar to each other, but very different from the items outside it.

Applications:
Segmenting customers into groups with similar demographics or purchasing patterns, for targeted marketing campaigns
Detecting anomalous behavior by spotting patterns that differ from the known clusters
Simplifying very large datasets by collapsing similar cases into a small number of homogeneous groups
Discretizing continuous data

2. Strengths and weaknesses of k-means: ---------------- converges to a local optimum
Strengths: uses simple principles that can be explained in non-statistical terms
Highly flexible, and simple adjustments can correct many of its shortcomings

Weaknesses: the random starting clusters give no guarantee of finding the optimal clusters
Requires a reasonable guess at the number of natural clusters in the data
Not ideal when the clusters have very different densities

The two phases at the heart of the algorithm:
Assign the cases to an initial set of k clusters
Update the assignments by adjusting the cluster boundaries to the cases currently falling in each cluster; the update and assignment steps repeat until the fit no longer improves

The process in detail:
(1) Assigning and updating with distance: k cluster centers are chosen at random in the feature space; every other case is then assigned to its nearest center according to the distance function.
(2) Updating the clusters: each initial center is moved to a new location, the centroid, computed as the mean of the points currently assigned to the cluster.
(3) Choosing the number of clusters: usually driven by the business question; with no prior information, a rule of thumb is sqrt(n/2), where n is the number of cases.

===========------ Alternatively, the elbow method can be used: measure how the homogeneity within clusters (or the heterogeneity between them) changes for different values of k.

--------------------------------------------------------------------------------
/机器学习/K均值聚类/k均值聚类(青少年).R:
--------------------------------------------------------------------------------
#==--- K-means clustering -----------------------

data <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 09\\snsdata.csv')

# Count the missing values of a feature
table(data$gender, useNA = 'ifany')  # or sum(is.na(data$gender))

summary(data$age)

# Restrict age to a plausible teenage range
data$age <- ifelse(data$age >= 13 & data$age <= 20, data$age, NA)

# Encode gender as separate indicator variables
data$female <- ifelse(data$gender == 'F' & !is.na(data$gender), 1, 0)
data$no_gender <- ifelse(is.na(data$gender), 1, 0)

#-- Impute the missing values

mean(data$age, na.rm = T)  # without na.rm the mean cannot be computed

# Impute by graduation year rather than with the overall mean
aggregate(data = data, age ~ gradyear, mean, na.rm = T)  # statistics per subgroup
ave_age <- ave(data$age, data$gradyear, FUN = function(x) mean(x, na.rm = T))

data$age <- ifelse(is.na(data$age), ave_age, data$age)

#== 3. Training the model -----------------------------

# Standardize the 36 interest features
dat <- data[5:40]
dat <- scale(dat)
model <- kmeans(dat, 5)

# Evaluate model performance
model$size     # size of each of the 5 clusters

model$centers  # coordinates of the cluster centroids

# Append the cluster assignments to the original data
data$cluster <- model$cluster

# Use aggregate() to relate the clusters to other features
aggregate(data = data, age ~ cluster, mean)
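# A minimal sketch of the elbow method mentioned in the k均值聚类 notes (an
# addition, not in the original script; it assumes the scaled matrix `dat`
# created above): plot the total within-cluster sum of squares for
# k = 1..10 and look for the bend.
wss <- sapply(1:10, function(k) kmeans(dat, centers = k, nstart = 10)$tot.withinss)
plot(1:10, wss, type = 'b', xlab = 'number of clusters k', ylab = 'total within-cluster SS')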
--------------------------------------------------------------------------------
/机器学习/logistic/Roc图.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ShydowLi/Machine-Learning-With-R-/1a7bb06b79200836bb5079263ee3866dbf15a8db/机器学习/logistic/Roc图.png

--------------------------------------------------------------------------------
/机器学习/logistic/logistics原理:
--------------------------------------------------------------------------------
The principles of logistic regression:
0. Formulas:
Let p be the probability that y = 1, so the probability that y = 0 is 1 - p.
Odds: p / (1 - p)
Taking the natural log of the odds: logit(p) = ln(p / (1 - p)), equivalent to p = 1 / (1 + e^(-z))

The logistic regression model:
ln(p / (1 - p)) = b0 + b1*x1 + b2*x2 + ... + bn*xn

1. Logistic regression comes in conditional and unconditional forms.
The former suits matched (paired) studies; the latter suits grouped case-control data.
It mainly applies to binary outcomes (0 or 1).
In practice the logistic model has three main uses:

1) Finding risk factors -- the "bad" factors influencing the outcome, usually identified through the odds ratios;

2) Prediction -- estimating the probability that some event will occur;

3) Discrimination -- judging which class a new sample belongs to.

2. The logistic model is a regression model, but it differs from ordinary linear regression:

1) The response of a logistic model is a binary variable;

2) The relationship between the response and the predictors is not linear;

3) Ordinary linear regression assumes i.i.d. errors and homoscedasticity; logistic regression does not;

4) Logistic regression makes no distributional assumptions about the predictors, which may be continuous, discrete, or dummy variables;

5) Because the response and the predictors are not linearly related, the parameters (partial regression coefficients) are estimated by maximum likelihood.

3. In the confusion matrix (rows = actual, columns = predicted):
            predicted 0   predicted 1
actual 0        a             b
actual 1        c             d
Positive coverage (recall): d / (c + d)    Negative coverage: a / (a + b)
Positive precision: d / (b + d)            Negative precision: a / (a + c)

4. Evaluation metrics
ROC curve (receiver operating characteristic): === used mainly for problems such as credit-default scoring === coverage
Related quantities: true positive rate (TPR) = d / (c + d); false positive rate (FPR) = b / (a + b)
When the model predicts well, the ROC curve bulges toward the top-left corner. Sliding the diagonal until it is tangent to the ROC curve locates a point with large TPR and small FPR. The better the model, the farther the ROC curve lies from the diagonal; in the extreme the curve passes through (0, 1), i.e. every positive is predicted positive and every negative negative. The area under the ROC curve, the AUC, quantifies the model's quality: the larger the AUC, the better the model.
Lift curve: +++=== used mainly for precision marketing, identifying potential customers === precision
lift = PV / K
PV = d / (b + d), the precision among cases predicted as 1; K = (c + d) / (a + b + c + d), the proportion of actual positives
Depth: the proportion of cases predicted positive, (b + d) / (a + b + c + d)

--------------------------------------------------------------------------------
/机器学习/logistic/logistics回归分类.R:
--------------------------------------------------------------------------------
# Regression analysis: logistic regression (0/1 outcomes)
mydata <- read.csv('E:\\新建文件夹\\R数据挖掘\\chapter5\\示例程序\\data\\bankloan.csv', header = TRUE)
colnames(mydata) <- c('x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8', 'y')
# Fit the logistic regression model
model <- glm(y ~ ., family = binomial(link = 'logit'), data = mydata)
summary(model)

# Stepwise selection (forward, backward, or both)
model.step <- step(model, direction = 'both')
summary(model.step)

# Refit using the significant predictors found by stepwise selection
model1 <- glm(y ~ x1 + x3 + x4 + x6 + x7, family = binomial(link = 'logit'), data = mydata)
test <- mydata[c(1:5), ]
pre <- predict(model1, test, type = 'response')
class <- pre > 0.5
summary(class)

# Convert the probabilities to 0/1 with ifelse()
result <- ifelse(pre > 0.5, 1, 0)

# Confusion matrix
table(result, test$y)

# ROC curve
library(pROC)
pre1 <- predict(model, mydata, type = 'response')
summary(pre1)
modelroc <- roc(mydata$y, pre1)
plot(modelroc, print.auc = TRUE, auc.polygon = TRUE, grid = c(0.1, 0.2), grid.col = c("green", "red"), max.auc.polygon = TRUE, auc.polygon.col = "skyblue", print.thres = TRUE)
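# A hedged sketch tying the confusion matrix above to the coverage/precision/
# lift definitions in the logistics原理 notes (an addition; it assumes
# predictions `result` for a test set large enough that both classes occur,
# with table() rows = predicted and columns = actual):
cm <- table(result, test$y)
tpr  <- cm['1', '1'] / sum(cm[, '1'])  # positive coverage (recall), d/(c+d)
pv   <- cm['1', '1'] / sum(cm['1', ])  # positive precision PV, d/(b+d)
k    <- sum(cm[, '1']) / sum(cm)       # proportion of actual positives, K
lift <- pv / k                         # lift = PV / K
c(tpr = tpr, pv = pv, lift = lift)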
--------------------------------------------------------------------------------
/机器学习/logistic/logistics理论:
--------------------------------------------------------------------------------
# Classification, clustering, association rules, prediction, outlier detection

1. Classification and prediction algorithms
Classification: built on data that already carries class labels
Prediction: builds a functional model of the dependency between two or more variables

(1) Common classification and prediction algorithms
Regression analysis: quantitative dependencies between the predicted value and other variables
Decision trees: top-down recursion; attribute values are compared at the internal nodes and the tree branches downward according to those values
Artificial neural networks: information-processing systems modeled on the structure and function of the brain, mapping input variables to outputs
Bayesian networks: belief networks (knowledge and reasoning under uncertainty)
Support vector machines: a low-dimensional nonlinear mapping turns the problem into a linearly separable one in a high-dimensional space, where a linear analysis is performed

Logistic regression: probabilistic nonlinear regression, in binary and multinomial variants
Multicollinearity among the predictors means the predictors are exactly or highly correlated with each other

Odds of the event: p / (1 - p)
Taking the natural log: logit(p) = ln(p / (1 - p))

Modeling steps: choose the indicator variables according to the goal of the analysis
Write the linear regression equation between ln(p / (1 - p)) and the predictors and estimate the regression coefficients
Test the model: F statistic and p-value
Test the significance of the coefficients and drop the non-significant variables (stepwise regression)
Apply the model

Decision trees: tree structures in which every leaf corresponds to a class and every internal node to a split on some attribute
Core question: how to pick the right attribute to split the sample at each step -- top-down, divide and conquer
ID3: measures uncertainty with information gain

--------------------------------------------------------------------------------
/机器学习/logistic/一元,多元回归分析:
--------------------------------------------------------------------------------
# 1. Simple linear regression and correlation
Steps:
(1) Scatter plot (plot()): check the correlation -- linear or nonlinear
(2) Analysis of variance, anova(): inspect the p-value and F statistic; p < 0.05 indicates significance
(3) summary(): parameter estimates -- check whether the predictors and the intercept are significant; R-squared measures the goodness of fit
(4) confint(): confidence intervals for the coefficients
(5) fitted(), residuals(): fitted values and residuals of the model

2. Multiple linear regression
Assumptions: the response y is a continuous random variable following a normal distribution
The m predictors are fixed variables with no multicollinearity
The predictors are independent of the residuals; the residuals are random variables with mean 0 and constant variance
The residuals are mutually independent
The residuals are normally distributed

--------------------------------------------------------------------------------
/机器学习/logistic/主成分思想:
--------------------------------------------------------------------------------
Dimensionality reduction: extract the common structure in the data for analysis and processing
Methods: principal component analysis, factor analysis, canonical correlation analysis
Principal component analysis (PCA) reduces many variables to a few principal components;
these components capture most of the information in the original variables and are usually expressed as linear combinations of them.

The relevant R functions:
# princomp() -- the main PCA function; works from the correlation matrix or the covariance matrix
# summary() -- extracts the component information
# loadings() -- shows the loadings of a PCA or factor analysis
# predict() -- computes the component scores
# screeplot() -- scree plot of the components
# biplot() -- scatter plot of the data on the components together with the directions of the original axes
# eigen(cor()) -- eigendecomposition: $values are the eigenvalues, $vectors the eigenvectors
# Final score: eigenvalue-weighted combination of the score matrix

Example, using R's built-in swiss dataset (education, business, etc. for Swiss provinces): PCA reveals the development level of each province and of swiss as a whole.

attach(swiss)
mydata <- scale(swiss)
cor(mydata)
pri <- princomp(mydata, cor = TRUE)  # cor = TRUE: use the correlation matrix; FALSE: use the covariance matrix
summary(pri, loadings = TRUE)        # print the loadings
screeplot(pri, type = 'line')        # scree plot to choose the number of components
pre <- predict(pri)                  # component scores; pri$scores gives the same
summary(pre)                         # scores per component
y <- eigen(cor(mydata))              # eigenvalues: y$values; eigenvectors: y$vectors
score <- (y$values[1]*pre[,1] + y$values[2]*pre[,2] + y$values[3]*pre[,3] + y$values[4]*pre[,4]) / sum(y$values)
# Final score for each province: eigenvalue-weighted average of the leading component scores

The idea of PCA:
PCA, as the name suggests, finds the most important directions in the data and uses them in place of the original data. Concretely, if the dataset has n dimensions and m cases x(1), x(2), ..., x(m), we want to reduce the dimension from n to n' so that the m reduced cases represent the original dataset as faithfully as possible.

Summary of PCA:
As an unsupervised dimensionality-reduction method it only needs an eigendecomposition, so it compresses and denoises data cheaply and is widely used in practice. Many variants address its weaknesses: KPCA for nonlinear reduction, Incremental PCA for memory limits, Sparse PCA for sparse data, and so on.

The main advantages of PCA:

1) Information is measured purely by variance, unaffected by anything outside the dataset.

2) The components are mutually orthogonal, removing the interactions between the original variables.

3) The computation is simple -- essentially an eigendecomposition -- and easy to implement.

The main disadvantages of PCA:

1) The meaning of each component dimension is somewhat fuzzy, less interpretable than the original features.

2) Low-variance components may still carry important information about sample differences; discarding them in the reduction can hurt later processing.
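A hedged, equivalent sketch with prcomp() (an addition, not in the original notes; prcomp() is generally preferred to princomp() because it computes the components by SVD):

pca <- prcomp(swiss, scale. = TRUE)  # scale. = TRUE corresponds to correlation-based PCA
summary(pca)                         # standard deviation and proportion of variance per component
head(pca$x)                          # component scores, analogous to pri$scores
screeplot(pca, type = 'lines')       # scree plot, as above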
--------------------------------------------------------------------------------
/机器学习/logistic/回归分析常规算法:
--------------------------------------------------------------------------------
# The standard regression algorithms:
When the target variable is continuous, the analysis is called regression.

(1) Simple linear regression
y = kx + b
sol.lm <- lm(y ~ x, data)
abline(sol.lm)
Choosing the parameters k and b so that the sum of squared model errors is minimized is the method of least squares.

k = cov(x, y) / cov(x, x)
b = mean(y) - k * mean(x)

Interval estimates for the parameters b and k in a p-predictor model (p predictors, n samples):
[ki - sd(ki) * t(alpha/2, n-p-1), ki + sd(ki) * t(alpha/2, n-p-1)], where k0 denotes the intercept b, k1 the slope k, and sd(ki) the standard error
Degrees of freedom: df <- sol.lm$df.residual
left  <- summary(sol.lm)$coefficients[, 1] - summary(sol.lm)$coefficients[, 2] * qt(1 - alpha/2, df)
right <- summary(sol.lm)$coefficients[, 1] + summary(sol.lm)$coefficients[, 2] * qt(1 - alpha/2, df)

Measuring the strength of the relationship
The correlation of x and y: r = Sxy / (sqrt(Sxx) * sqrt(Syy)), with values in [-1, 1]; in R, cor(x, y)
Coefficient of determination: r^2

Adjusted r^2
The plain coefficient of determination has a drawback in multiple regression: it grows as predictors are added.

Significance tests of the regression coefficients

t test: summary(sol.lm)$coefficients[, 4]
The smaller the computed p.value, the smaller the probability that the coefficient equals 0; with p.value < 0.05 we may conclude k != 0.

F test
Tests whether the model parameters are jointly zero and gives the probability that they are; p.value < 0.05 means the model passes the F test. (summary(sol.lm) does not store this p-value directly; compute it from the F statistic as below.)

summary(sol.lm)$fstatistic returns the F value f, the numerator degrees of freedom df1 (predictors), and the denominator degrees of freedom df2 (residuals).

The p.value can then be read directly with:
pf(f, df1, df2, lower.tail = F)   or   1 - pf(f, df1, df2)

Model error (residuals): residuals

For a correct regression model, the errors must follow a normal distribution.

The residual standard error summarizes a model's error as a whole and can be used to compare the performance of different models.

Prediction

predict(sol.lm)

(2) Multiple regression
sol.lm <- lm(formula = y ~ ., data.train)

Model revision: update(object, formula)
update() takes an lm fit and adds or removes predictors, or re-models the target with a log, square root, etc. For example:
Add a squared x2 term:
lm.new <- update(sol.lm, . ~ . + I(x2^2))
Drop x2:
. ~ . - x2
Replace x2 with its square:
. ~ . - x2 + I(x2^2)
Add the x1*x2 interaction:
. ~ . + x1*x2
Model sqrt(y) instead of y:
sqrt(.) ~ .

Stepwise regression: step()
Backward elimination of variables:
lm.step <- step(sol.lm)
The smaller the model's AIC, the better.

Regression with categorical predictors
If categorical variable a takes value i, the model prediction is f(a1 = 0, ..., ai = 1, ..., ap = 0)

(3) Logistic regression: y = 1 / (1 + exp(-x)), estimated by maximum likelihood
Reading an Excel file with the RODBC package:

root <- "C:/"
file <- paste(root, "data.xls", sep = "")
library(RODBC)
excel_file <- odbcConnectExcel(file)
data <- sqlFetch(excel_file, "data")
close(excel_file)

Model quality is measured by the prediction accuracy. With num_pa counting the cases predicted p whose actual value is a:

               actual 1   actual 0
predicted 1    num11      num10
predicted 0    num01      num00

accuracy = (num11 + num00) / total = (num11 + num00) / (num11 + num10 + num01 + num00)

t() returns the transpose of a matrix.

glm() is the core R function for logistic regression:
family = binomial("logit")
step() can be used to refine the model
str() shows the structure of an object

Prediction:
new <- predict(old, newdata = test.data)
new <- 1 / (1 + exp(-new))
new <- as.factor(ifelse(new >= 0.5, 1, 0))

Measuring performance:
performance <- length(which((predict.data == data) == TRUE)) / nrow(data)

(4) Regression trees (CART)
The core function is rpart() from the rpart package; plot the result with plot(),
or with draw.tree() from the maptree package.

Identify the leaf nodes: sol.rpart$frame$var == "<leaf>"
Leaf index of each case: sol.rpart$where
The aim is to make both the test-set error and the size of the tree as small as possible.

cp, the complexity parameter: sol.rpart$cptable
xerror is the model error obtained by cross-validation
xstd is its standard deviation; accept trees whose error lies within xerror +/- xstd of the minimum
Pruning comes down to finding a reasonable cp value.
As splits accumulate, the complexity parameter falls monotonically, but the prediction error first falls and then rises.

Pruning:
prune(sol.rpart, cp = 0.02) removes the subtrees with cp < 0.02
plotcp() plots how the error varies with cp.
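A minimal end-to-end sketch of the CART workflow above (an addition; the built-in mtcars data and the cp value 0.05 are illustrative assumptions):

library(rpart)
fit <- rpart(mpg ~ ., data = mtcars)  # regression tree for a numeric target
printcp(fit)                          # cp table with xerror and xstd
plotcp(fit)                           # error vs. cp, to pick the pruning point
fit.pruned <- prune(fit, cp = 0.05)   # cut back the splits below the chosen cp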
--------------------------------------------------------------------------------
/机器学习/分而治之-应用决策树和规则分类/决策树C50.R:
--------------------------------------------------------------------------------
#------------ Identifying risky bank loans with C5.0 --------------------------
library(C50)
library(gmodels)

# Load the data
data <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 05\\credit.csv', header = TRUE)

# Split into training and test sets, roughly 9:1
set.seed(123)
dat <- sample(2, nrow(data), replace = T, prob = c(0.9, 0.1))  # nrow(), not length(): length() counts the columns of a data frame
train_data <- data[dat == 1, ]
test_data <- data[dat == 2, ]

# Build the model
model <- C5.0(default ~ ., data = train_data, control = C5.0Control(noGlobalPruning = FALSE))

# Evaluate its performance
credit_pred <- predict(model, test_data, type = 'class')
CrossTable(test_data$default, credit_pred, prop.chisq = F, prop.c = F, prop.r = F, dnn = c('actual', 'predict'))

#------------ Improving the model ----------------------------------------------
# 1. Raise the tree's accuracy with adaptive boosting: many trees vote for the
# best class of each case (the trials argument turns boosting on)
credit_boost10 <- C5.0(default ~ ., data = train_data, trials = 10)
summary(credit_boost10)
credit_boost_pred <- predict(credit_boost10, test_data)
CrossTable(test_data$default, credit_boost_pred, prop.chisq = F, prop.r = F, prop.c = F, dnn = c('actual', 'predict'))

#-- Boosting combines weak learners into a team whose strengths and weaknesses
# complement one another, which can raise classification accuracy considerably.

# 2. Some mistakes cost more than others
#-- To reject the risky borderline applicants, assign a penalty to each type of
# error through a cost matrix that states how much worse each mistake is than
# the other predictions.

# Predictions and actual values both take two values, so we need a 2x2 matrix
matrix_dim <- list(c('no', 'yes'), c('no', 'yes'))
names(matrix_dim) <- c('predict', 'actual')
error_cost <- matrix(c(0, 1, 4, 0), nrow = 2, dimnames = matrix_dim)

credit_cost <- C5.0(default ~ ., data = train_data, costs = error_cost)
credit_cost_pred <- predict(credit_cost, test_data)
CrossTable(credit_cost_pred, test_data$default, prop.chisq = F, prop.c = F, prop.r = F, dnn = c('predict', 'actual'))

# The penalty factor trades false positives for fewer false negatives, which is
# acceptable here: what matters is recall.

--------------------------------------------------------------------------------
/机器学习/分而治之-应用决策树和规则分类/决策树与规则分类:
--------------------------------------------------------------------------------
------------------- Classification with decision trees and rules ---------------------------
1. Decision trees: model the relationships between the features and the potential outcomes with a tree structure
Uses: credit scoring models, where the criteria behind a rejected application must be clearly documented and free of bias
Market research on churn or customer satisfaction that will be shared with advertising agencies or management
Medical diagnosis based on lab measurements, symptoms, or the rate of disease progression

2. Principle (divide and conquer): trees are built by a heuristic called recursive partitioning -- the data is split into subsets, which are split again into ever smaller subsets, until the algorithm decides the subsets are sufficiently homogeneous or another stopping criterion is met.

3. The C5.0 decision-tree algorithm:
Strengths: a good default classifier for most problems; a highly automatic learning process that handles numeric and nominal features; excludes unimportant features; works on datasets of any size
Weaknesses: splits are biased toward features with many levels; easy to overfit or underfit; axis-parallel splits make some relationships hard to model; small changes in the data can cause large changes in the tree

4. Steps:
(1) Choosing the best split
The degree to which a subset contains a single class is called purity; any subset made up of a single class is pure.
C5.0 uses entropy over the class sets: a set with high entropy is very diverse and provides little information about the other items that might belong in it.
With n classes, entropy ranges from 0 to log2(n); in a given sample the minimum means it is homogeneous and the maximum that it is as diverse as possible.

Entropy: Entropy(S) = sum_i(-p_i * log2(p_i))

Information gain = Entropy(S1) - Entropy(S2), where S2's entropy is the weighted sum over the partitions: sum_i w_i * Entropy(P_i)

(2) Pruning the tree
If the tree grows too large, many of its decisions become overly specific and the model overfits the training data. Pruning reduces the tree's size so that it generalizes better to unseen data.
Early stopping (pre-pruning): stop growing once the tree reaches a certain number of decisions, or when the decision nodes contain only a few cases.
Post-pruning: deliberately grow an oversized tree, then cut its leaf nodes back to a more suitable size.

5. Decision trees, in summary:
(1) Start with all the data in a single node
(2) Scan every possible split of every variable and find the best split point
(3) Split into two nodes, N1 and N2
(4) Repeat steps 2-3 on N1 and N2 until every node is pure enough or a stopping condition holds (no features left to split, maximum iterations reached)

==------------------------------------------------------------------------------------------------------------------------
-- Understanding classification rules
1. Classification rules express knowledge as logical if-else statements that assign a class to unlabeled cases. They are built from an antecedent and a consequent, which together form the hypothesis "if this happens, then that follows". The antecedent is a particular combination of feature values; the consequent is the class assigned when the rule's condition is met.
Uses: identifying the conditions that cause hardware failures in machinery
Describing the key traits of groups of people for customer segmentation
Finding the conditions that precede large falls or rises in the price of something

Difference: a decision tree must be applied top-down through a sequence of decisions, whereas rules are propositions -- statements of fact -- that can be read on their own. A rule learner's model is simpler and more direct to understand.

2. Method: find a rule that covers a subset of the training data, then separate that subset out of the remaining data; as rules are added, more subsets are separated, until all the data is covered and no cases remain ("drilling down" into the data).

==-----------------------------------------------------------------------------------------------------------------------

1R:
Improves performance by choosing a single rule. For every feature, 1R groups the data by similar feature values and predicts the majority class of each group; the error rate is computed per feature, and the feature whose rule makes the fewest mistakes is selected as the one rule.

RIPPER (repeated incremental pruning):
grow, prune, optimize.
The growing phase adds conditions to a rule until the rule perfectly classifies a subset of the data or there are no attributes left to split on. As with decision trees, the information-gain criterion picks the next attribute to split; when increasing a rule's specificity no longer reduces entropy, the rule is pruned immediately. Growing and pruning repeat until a stopping criterion is reached, and the whole rule set is then optimized heuristically.

Greedy learners: decision trees and rule learners both use the data on a first-come, first-served basis -- take the most homogeneous partition first, then the next best, and so on, until all the cases have been classified.

Trees vs. rules: once a decision tree splits on a feature, the partitions created by that split are never re-examined, whereas with a rule learner, cases not covered by a rule's conditions may be re-conquered by later rules.
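A small sketch of the entropy formula above (an addition; p is assumed to be a vector of class proportions summing to 1):

entropy <- function(p) {
  p <- p[p > 0]          # 0 * log2(0) is taken as 0
  -sum(p * log2(p))
}
entropy(c(0.5, 0.5))     # maximally mixed two-class set: 1 bit
entropy(c(0.9, 0.1))     # purer set: about 0.469
entropy(1)               # pure set: 0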
--------------------------------------------------------------------------------
/机器学习/分而治之-应用决策树和规则分类/规则学习.R:
--------------------------------------------------------------------------------
#---- Separate and conquer: rule learners ---------------------------------------

#-- 1R:

data <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 05\\mushrooms.csv')

# veil_type has a single factor level and carries no information; drop the column
data$veil_type <- NULL

library(RWeka)  # RWeka: OneR() implements 1R, JRip() implements RIPPER

#---- the 1R algorithm
model1 <- OneR(type ~ ., data = data)
summary(model1)  # overall results

#---- the RIPPER algorithm
model2 <- JRip(type ~ ., data = data)

--------------------------------------------------------------------------------
/机器学习/回归方法/psych--散点矩阵图.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ShydowLi/Machine-Learning-With-R-/1a7bb06b79200836bb5079263ee3866dbf15a8db/机器学习/回归方法/psych--散点矩阵图.png

--------------------------------------------------------------------------------
/机器学习/回归方法/回归树与模型数葡萄酒评级.R:
--------------------------------------------------------------------------------
#------------ Rating wine quality with regression trees and model trees -------

data <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 06\\whitewines.csv')

hist(data$quality)  # check the distribution of quality and rule out extremes

dat <- sample(2, nrow(data), replace = T, prob = c(0.75, 0.25))
train_data <- data[dat == 1, ]
test_data <- data[dat == 2, ]

# Load the rpart package and fit a regression tree with rpart()
library(rpart)
library(rpart.plot)

model <- rpart(quality ~ ., data = train_data)

# Visualize the regression tree
rpart.plot(model, digits = 3)

rpart.plot(model, digits = 3, fallen.leaves = T, type = 3, extra = 101)

#-- 2. Evaluating model performance (for numeric predictions)

pre <- predict(model, test_data)

summary(pre)
summary(test_data$quality)
# Comparing the two summaries shows the model does not capture the extreme values

# How well do the predictions track the true values? cor()
cor(pre, test_data$quality)

# Measure performance with the mean absolute error
mean(abs(pre - test_data$quality))

#-- 3. Improving performance with a model tree --- a noticeable improvement
library(RWeka)

model1 <- M5P(quality ~ ., data = train_data)  # fit on the training set, not the test set

pre1 <- predict(model1, test_data)

mean(abs(pre1 - test_data$quality))

cor(pre1, test_data$quality)
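# A hedged helper for the error measures used above (an addition; it assumes
# numeric vectors of predictions and actuals such as pre1 and
# test_data$quality):
mae  <- function(pred, actual) mean(abs(pred - actual))
rmse <- function(pred, actual) sqrt(mean((pred - actual)^2))
mae(pre1, test_data$quality)
rmse(pre1, test_data$quality)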
--------------------------------------------------------------------------------
/机器学习/回归方法/多元回归系数最优解.R:
--------------------------------------------------------------------------------
#------ Solving for the optimal coefficients in multiple linear regression -----

reg <- function(y, x) {
  x <- as.matrix(x)
  x <- cbind(intercept = 1, x)
  b <- solve(t(x) %*% x) %*% t(x) %*% y
  colnames(b) <- 'estimate'
  print(b)
}

#-- solve() inverts a matrix, t() transposes one, %*% is matrix multiplication

#-- Predicting medical expenses with linear regression

data <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 06\\insurance.csv', stringsAsFactors = F)

# Look at the distribution of the target
hist(data$expenses)

#-- 1. Exploring the relationships between features: the correlation matrix
cor(data[c('age', 'bmi', 'children', 'expenses')])

#-- 2. Visualizing the relationships: a scatterplot matrix
pairs(data[c('age', 'bmi', 'children', 'expenses')])  # the base plot is hard to read

library(psych)
pairs.panels(data[c('age', 'bmi', 'children', 'expenses')])
#-- In pairs.panels() the ellipse below the diagonal shows the correlation
# strength (the flatter the ellipse, the stronger), and the curve is a loess
# line showing the general relationship between the x- and y-axis variables.

#-- 3. Training the model
model <- lm(expenses ~ ., data = data)  # the intercept (the prediction when all features are 0) is usually of no practical interest

#-- 4. Evaluating the model
#-- Look at the residuals, the p-values and R^2: the residuals describe the
# error distribution; a p-value below the significance level marks a
# significant feature; the closer R^2 is to 1, the better the model.

#-- 5. Refining the model specification
# So far we only related the response to each feature separately; features may
# also interact with each other.
# ①. Adding nonlinear relationships: linear regression assumes the effects are
# linear, which need not be true.
# Adding the square of age lets the model separate the linear and nonlinear
# effects of age:
data$age2 <- data$age^2

# ②. Transformation: convert a numeric variable to a binary indicator, i.e.
# dichotomize a continuous variable whose effect appears weak below a threshold
data$bmi30 <- ifelse(data$bmi >= 30, 1, 0)

# ③. Adding interaction effects
# When two features act together, give them an interaction term:
# expenses ~ bmi30*smoker  -- this expands to three effects: 2 main + 1 interaction

# ④. Putting it all together: the improved regression model
model <- lm(expenses ~ age + age2 + children + bmi + sex + bmi30*smoker + region, data = data)

--------------------------------------------------------------------------------
/机器学习/回归方法/广义线性模型.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ShydowLi/Machine-Learning-With-R-/1a7bb06b79200836bb5079263ee3866dbf15a8db/机器学习/回归方法/广义线性模型.png

--------------------------------------------------------------------------------
/机器学习/回归方法/广义线性模式.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ShydowLi/Machine-Learning-With-R-/1a7bb06b79200836bb5079263ee3866dbf15a8db/机器学习/回归方法/广义线性模式.png

--------------------------------------------------------------------------------
/机器学习/回归方法/数值回归:
--------------------------------------------------------------------------------
==---------- Forecasting numeric data -- regression methods ------------------
1.
== Learning goals:
-- The basic statistical principles used in regression, and the techniques that model the size and strength of numeric relationships
-- How to prepare data for a regression model, and how to estimate and interpret one
-- A pair of hybrid techniques called regression trees and model trees, which adapt decision-tree classifiers to numeric prediction tasks

== Regression:
-- Regression is concerned with specifying the relationship between a single numeric dependent variable and one or more numeric independent variables. In its simplest form it assumes the relationship follows a straight line.
-- y = ax + b: choose a and b so that the line best reflects the relationship between the supplied x and y values.
-- Typical uses: characterizing populations and individuals from measured traits; quantifying the causal relationship between an event and its response; identifying models that predict future behavior from known criteria.

==-- Regression can also be used for statistical hypothesis testing: deciding from the observed data whether a hypothesis is more likely to be true or false.

== Simple and multiple linear regression: both assume the dependent variable is measured on a continuous scale.

== Logistic regression: models a binary outcome.

== Poisson regression: models integer count data.

== Multinomial logistic regression: models a categorical outcome.

2.
(1) Ordinary least squares
The slope and intercept are chosen so that the sum of squared errors -- the vertical distances between the predicted and true y values -- is minimized. These errors are called residuals:
sum((Y_i - y_i)^2) = sum(e^2), where Y_i is the predicted value and y_i the true value
Formulas:
y = a + bX, with b = cov(x, y) / var(x) and a = mean(y) - b * mean(x)

(2) Correlation
The correlation between two variables is a number indicating how closely their relationship follows a straight line:
rho(x, y) = corr(x, y) = cov(x, y) / (sd(x) * sd(y)), i.e. the covariance divided by the product of the standard deviations

(3) Multiple linear regression

Strengths: by far the most common approach to modeling numeric data; fits almost any numeric modeling task; estimates both the strength and the size of the relationships between the features and the outcome
Weaknesses: makes strong assumptions about the data; the model's form must be specified by the user in advance; cannot handle missing data; only works with numeric features

The multiple regression equation: y = alpha + beta1*X1 + beta2*X2 + ... + betai*Xi + e, where alpha is the intercept and e the residual term

Because the intercept alpha is a constant we can write
y = beta0*X0 + beta1*X1 + beta2*X2 + ... + betai*Xi + e
and collecting the records turns this into a matrix equation:
Y = X*beta + e, where the response Y is a vector, the predictors are gathered into the matrix X (with a column of 1s added for the intercept), and beta and e are vectors.

The coefficient vector that minimizes the sum of squared errors between the predicted and true values is
beta = (X'X)^(-1) X'Y

(4) Regression trees and model trees

Decision trees for numeric prediction come in two kinds: regression trees and model trees.
Regression tree: predicts with the mean of the cases that reach a leaf.
Model tree: grown in much the same way as a regression tree, but at each leaf a multiple linear regression model is fit to the cases reaching that node.

Strengths: combine the strengths of decision trees with the ability to model numeric data; select features automatically
Weaknesses: need a large amount of training data; hard to determine the effect of a single feature on the outcome; large trees become harder to interpret than a regression model

Difference from classification trees: in a classification tree, homogeneity (uniformity) is measured by entropy; in numeric trees, it is measured by statistics such as the variance or the mean. A common splitting criterion is the standard deviation reduction:
SDR = sd(T) - sum((|T_i| / |T|) * sd(T_i))
where sd(T) is the standard deviation of the full set and T_1, ..., T_n are the sets produced by one split on a feature;
it compares the pre-split standard deviation with the weighted post-split standard deviation.

--------------------------------------------------------------------------------
/机器学习/基于关联规则的购物篮分析/rules.csv:
--------------------------------------------------------------------------------
1 | "rules","support","confidence","lift","count"
2 | "{potted plants} => {whole milk}",0.00691408235892222,0.4,1.56545961002786,68
3 | "{pasta} => {whole milk}",0.00610066090493137,0.405405405405405,1.58661446962283,60
4 | "{herbs} => {root vegetables}",0.00701576004067107,0.43125,3.95647737873134,69
5 | "{herbs} => {other 
vegetables}",0.00772750381291307,0.475,2.45487388334209,76 6 | "{herbs} => {whole milk}",0.00772750381291307,0.475,1.85898328690808,76 7 | "{processed cheese} => {whole milk}",0.00701576004067107,0.423312883435583,1.65669805355709,69 8 | "{semi-finished bread} => {whole milk}",0.00711743772241993,0.402298850574713,1.57445650433836,70 9 | "{beverages} => {whole milk}",0.00681240467717336,0.26171875,1.02427533077994,67 10 | "{detergent} => {other vegetables}",0.00640569395017794,0.333333333333333,1.72271851462603,63 11 | "{detergent} => {whole milk}",0.00894763599389934,0.465608465608466,1.82222811749274,88 12 | "{pickled vegetables} => {other vegetables}",0.00640569395017794,0.357954545454545,1.84996476854727,63 13 | "{pickled vegetables} => {whole milk}",0.00711743772241993,0.397727272727273,1.5565649531527,70 14 | "{baking powder} => {other vegetables}",0.00732079308591764,0.413793103448276,2.13854712160473,72 15 | "{baking powder} => {whole milk}",0.00925266903914591,0.522988505747126,2.04679345563987,91 16 | "{flour} => {other vegetables}",0.00630401626842908,0.362573099415205,1.87383417380375,62 17 | "{flour} => {whole milk}",0.00843924758515506,0.485380116959064,1.89960742152503,83 18 | "{soft cheese} => {other vegetables}",0.00711743772241993,0.416666666666667,2.15339814328254,70 19 | "{soft cheese} => {whole milk}",0.00752414844941535,0.44047619047619,1.72386921342353,74 20 | "{specialty bar} => {soda}",0.00721911540416879,0.263940520446097,1.51361808663986,71 21 | "{misc. beverages} => {soda}",0.00732079308591764,0.258064516129032,1.47992100065833,72 22 | "{grapes} => {tropical fruit}",0.00610066090493137,0.272727272727273,2.59910147991543,60 23 | "{grapes} => {other vegetables}",0.0090493136756482,0.404545454545455,2.09075383365977,89 24 | "{grapes} => {whole milk}",0.00732079308591764,0.327272727272727,1.28083059002279,72 25 | "{cat food} => {yogurt}",0.00620233858668022,0.266375545851528,1.90947776490509,61 26 | "{cat food} => {other vegetables}",0.00650737163192679,0.279475982532751,1.44437534850741,64 27 | "{cat food} => {whole milk}",0.00884595831215048,0.379912663755459,1.48684482611816,87 28 | "{specialty chocolate} => {whole milk}",0.00803253685815963,0.264214046822742,1.03404104675753,79 29 | "{meat} => {rolls/buns}",0.00691408235892222,0.267716535433071,1.45549592370605,68 30 | "{meat} => {other vegetables}",0.0099644128113879,0.385826771653543,1.99401276889784,98 31 | "{meat} => {whole milk}",0.0099644128113879,0.385826771653543,1.50999056872766,98 32 | "{frozen meals} => {other vegetables}",0.00752414844941535,0.265232974910394,1.37076526970243,74 33 | "{frozen meals} => {whole milk}",0.00986273512963904,0.347670250896057,1.36065933846507,97 34 | "{hard cheese} => {yogurt}",0.00640569395017794,0.261410788381743,1.87388855957321,63 35 | "{hard cheese} => {other vegetables}",0.00945602440264362,0.385892116182573,1.99435047958781,93 36 | "{hard cheese} => {whole milk}",0.0100660904931368,0.410788381742739,1.6076815497174,99 37 | "{butter milk} => {yogurt}",0.00854092526690392,0.305454545454545,2.18961038961039,84 38 | "{butter milk} => {rolls/buns}",0.00762582613116421,0.272727272727273,1.48273782602141,75 39 | "{butter milk} => {other vegetables}",0.0103711235383833,0.370909090909091,1.9169158744566,102 40 | "{butter milk} => {whole milk}",0.0115912557193696,0.414545454545455,1.62238541402887,114 41 | "{candy} => {soda}",0.00864260294865277,0.289115646258503,1.65798972650285,85 42 | "{candy} => {whole milk}",0.00823589222165735,0.275510204081633,1.07825024160082,81 43 | 
"{ham} => {yogurt}",0.0067107269954245,0.2578125,1.84809470663265,66 44 | "{ham} => {rolls/buns}",0.00691408235892222,0.265625,1.4441248618021,68 45 | "{ham} => {other vegetables}",0.00915099135739705,0.3515625,1.81692968339464,90 46 | "{ham} => {whole milk}",0.0114895780376207,0.44140625,1.72750913997214,113 47 | "{sliced cheese} => {sausage}",0.00701576004067107,0.286307053941909,3.04743493021501,69 48 | "{sliced cheese} => {yogurt}",0.00803253685815963,0.327800829875519,2.3497967651791,79 49 | "{sliced cheese} => {rolls/buns}",0.00762582613116421,0.311203319502075,1.69192075583356,75 50 | "{sliced cheese} => {other vegetables}",0.0090493136756482,0.369294605809129,1.90857196433672,89 51 | "{sliced cheese} => {whole milk}",0.0107778342653787,0.439834024896266,1.72135600272772,106 52 | "{oil} => {root vegetables}",0.00701576004067107,0.25,2.29361007462687,69 53 | "{oil} => {other vegetables}",0.0099644128113879,0.355072463768116,1.83506972210164,98 54 | "{oil} => {whole milk}",0.011286222674123,0.402173913043478,1.57396754269105,111 55 | "{onions} => {root vegetables}",0.00945602440264362,0.304918032786885,2.79745228774162,93 56 | "{onions} => {other vegetables}",0.0142348754448399,0.459016393442623,2.37226811850142,140 57 | "{onions} => {whole milk}",0.0120996441281139,0.390163934426229,1.52696470158455,119 58 | "{berries} => {whipped/sour cream}",0.0090493136756482,0.27217125382263,3.7968855054547,89 59 | "{berries} => {yogurt}",0.010574478901881,0.318042813455657,2.27984771890408,104 60 | "{berries} => {other vegetables}",0.0102694458566345,0.308868501529052,1.59628045850669,101 61 | "{berries} => {whole milk}",0.0117946110828673,0.35474006116208,1.38832809452012,116 62 | "{hamburger meat} => {rolls/buns}",0.00864260294865277,0.259938837920489,1.41321087393478,85 63 | "{hamburger meat} => {other vegetables}",0.0138281647178444,0.415902140672783,2.14944695402881,136 64 | "{hamburger meat} => {whole milk}",0.0147432638535841,0.443425076452599,1.73541011815015,145 65 | "{hygiene articles} => {other vegetables}",0.00955770208439248,0.290123456790123,1.49940315161895,94 66 | "{hygiene articles} => {whole milk}",0.0128113879003559,0.388888888888889,1.52197462086041,126 67 | "{salty snack} => {other vegetables}",0.0107778342653787,0.28494623655914,1.47264647218032,106 68 | "{salty snack} => {whole milk}",0.0111845449923742,0.295698924731183,1.15726180848833,110 69 | "{sugar} => {other vegetables}",0.0107778342653787,0.318318318318318,1.64511858153477,106 70 | "{sugar} => {whole milk}",0.0150482968988307,0.444444444444444,1.73939956669762,148 71 | "{waffles} => {other vegetables}",0.0100660904931368,0.261904761904762,1.35356454720617,99 72 | "{waffles} => {whole milk}",0.012709710218607,0.330687830687831,1.29419610617382,125 73 | "{long life bakery product} => {other vegetables}",0.0106761565836299,0.285326086956522,1.47460959811739,105 74 | "{long life bakery product} => {whole milk}",0.0135231316725979,0.361413043478261,1.41444380525615,133 75 | "{dessert} => {soda}",0.00986273512963904,0.265753424657534,1.52401453732178,97 76 | "{dessert} => {yogurt}",0.00986273512963904,0.265753424657534,1.90501817165222,97 77 | "{dessert} => {other vegetables}",0.0115912557193696,0.312328767123288,1.61416364932083,114 78 | "{dessert} => {whole milk}",0.0137264870360956,0.36986301369863,1.44751402297096,135 79 | "{cream cheese} => {yogurt}",0.0124046771733604,0.312820512820513,2.24241234955521,122 80 | "{cream cheese} => {rolls/buns}",0.0099644128113879,0.251282051282051,1.36614647559921,98 81 | "{cream 
cheese} => {other vegetables}",0.0137264870360956,0.346153846153846,1.78897691903472,135 82 | "{cream cheese} => {whole milk}",0.0164717844433147,0.415384615384615,1.62566959502893,162 83 | "{chicken} => {root vegetables}",0.0108795119471276,0.253554502369668,2.32622064440829,107 84 | "{chicken} => {other vegetables}",0.0178952719877987,0.417061611374408,2.15543927896337,176 85 | "{chicken} => {whole milk}",0.0175902389425521,0.409952606635071,1.6044106192821,173 86 | "{white bread} => {other vegetables}",0.0137264870360956,0.326086956521739,1.68526811213416,135 87 | "{white bread} => {whole milk}",0.0170818505338078,0.405797101449275,1.58814743046304,168 88 | "{chocolate} => {soda}",0.0135231316725979,0.272540983606557,1.56293911007026,133 89 | "{chocolate} => {other vegetables}",0.012709710218607,0.256147540983607,1.32381033398517,125 90 | "{chocolate} => {whole milk}",0.0166751398068124,0.336065573770492,1.31524270514635,164 91 | "{coffee} => {whole milk}",0.0187086934417895,0.322241681260946,1.2611408417037,184 92 | "{frozen vegetables} => {yogurt}",0.0124046771733604,0.257928118393235,1.84892350174742,122 93 | "{frozen vegetables} => {other vegetables}",0.0177935943060498,0.369978858350951,1.91210828790416,175 94 | "{frozen vegetables} => {whole milk}",0.0204372140315201,0.424947145877378,1.66309398316913,201 95 | "{beef} => {root vegetables}",0.0173868835790544,0.331395348837209,3.04036684311003,171 96 | "{beef} => {rolls/buns}",0.0136248093543467,0.25968992248062,1.41185759402814,134 97 | "{beef} => {other vegetables}",0.0197254702592781,0.375968992248062,1.94306623161308,194 98 | "{beef} => {whole milk}",0.0212506354855109,0.405038759689922,1.58517954697588,209 99 | "{curd} => {yogurt}",0.0172852058973055,0.324427480916031,2.32561536064808,170 100 | "{curd} => {other vegetables}",0.0171835282155567,0.322519083969466,1.66682879182328,169 101 | "{curd} => {whole milk}",0.026131164209456,0.490458015267176,1.91948053328797,257 102 | "{napkins} => {other vegetables}",0.0144382308083376,0.275728155339806,1.4250059946227,142 103 | "{napkins} => {whole milk}",0.0197254702592781,0.376699029126214,1.47426778808448,194 104 | "{pork} => {other vegetables}",0.0216573462125064,0.375661375661376,1.94147642124521,213 105 | "{pork} => {whole milk}",0.0221657346212506,0.384479717813051,1.5047186727781,218 106 | "{frankfurter} => {rolls/buns}",0.0192170818505338,0.325862068965517,1.77161605764282,189 107 | "{frankfurter} => {other vegetables}",0.0164717844433147,0.279310344827586,1.44351930708319,162 108 | "{frankfurter} => {whole milk}",0.0205388917132689,0.348275862068965,1.36302948804149,202 109 | "{bottled beer} => {whole milk}",0.0204372140315201,0.253787878787879,0.993236684392673,201 110 | "{brown bread} => {other vegetables}",0.0187086934417895,0.288401253918495,1.49050253930026,184 111 | "{brown bread} => {whole milk}",0.0252160650737163,0.38871473354232,1.5212930379581,248 112 | "{margarine} => {rolls/buns}",0.0147432638535841,0.251736111111111,1.36861506510657,145 113 | "{margarine} => {other vegetables}",0.0197254702592781,0.336805555555556,1.74066349915338,194 114 | "{margarine} => {whole milk}",0.0241992882562278,0.413194444444444,1.61709803466419,238 115 | "{butter} => {yogurt}",0.0146415861718353,0.264220183486239,1.89402733570492,144 116 | "{butter} => {other vegetables}",0.0200305033045247,0.361467889908257,1.86812227916327,197 117 | "{butter} => {whole milk}",0.02755465175394,0.497247706422018,1.94605300145665,271 118 | "{newspapers} => {whole 
milk}",0.0273512963904423,0.342675159235669,1.34111030285826,269 119 | "{domestic eggs} => {other vegetables}",0.0222674123029995,0.350961538461538,1.81382382068798,219 120 | "{domestic eggs} => {whole milk}",0.0299949161159126,0.47275641025641,1.85020266409542,295 121 | "{fruit/vegetable juice} => {soda}",0.018403660396543,0.254571026722925,1.45988690834984,181 122 | "{fruit/vegetable juice} => {yogurt}",0.0187086934417895,0.258790436005626,1.85510491116278,184 123 | "{fruit/vegetable juice} => {other vegetables}",0.0210472801220132,0.291139240506329,1.50465287986324,207 124 | "{fruit/vegetable juice} => {whole milk}",0.0266395526182003,0.368495077355837,1.44216040023663,262 125 | "{whipped/sour cream} => {yogurt}",0.0207422470767667,0.28936170212766,2.07425097698654,204 126 | "{whipped/sour cream} => {other vegetables}",0.0288764616166751,0.402836879432624,2.08192365171827,284 127 | "{whipped/sour cream} => {whole milk}",0.0322318251143874,0.449645390070922,1.75975424247812,317 128 | "{pip fruit} => {tropical fruit}",0.0204372140315201,0.270161290322581,2.57464756814204,201 129 | "{pip fruit} => {other vegetables}",0.026131164209456,0.345430107526882,1.78523652523746,257 130 | "{pip fruit} => {whole milk}",0.0300965937976614,0.397849462365591,1.55704316051158,296 131 | "{pastry} => {other vegetables}",0.0225724453482461,0.253714285714286,1.31123489227535,222 132 | "{pastry} => {whole milk}",0.033248601931876,0.373714285714286,1.46258654994031,327 133 | "{citrus fruit} => {yogurt}",0.0216573462125064,0.261670761670762,1.87575214360929,213 134 | "{citrus fruit} => {other vegetables}",0.0288764616166751,0.348894348894349,1.80314026346606,284 135 | "{citrus fruit} => {whole milk}",0.0305033045246568,0.368550368550369,1.44237679056621,300 136 | "{sausage} => {soda}",0.0243009659379766,0.258658008658009,1.48332449863062,239 137 | "{sausage} => {rolls/buns}",0.0306049822064057,0.325757575757576,1.7710479588589,301 138 | "{sausage} => {other vegetables}",0.0269445856634469,0.286796536796537,1.48220911161006,265 139 | "{sausage} => {whole milk}",0.0298932384341637,0.318181818181818,1.24525196252216,294 140 | "{bottled water} => {soda}",0.028978139298424,0.262189512419503,1.50357659163021,285 141 | "{bottled water} => {whole milk}",0.0343670564311134,0.310947562097516,1.21693962325072,338 142 | "{tropical fruit} => {yogurt}",0.0292831723436706,0.27906976744186,2.00047460844803,288 143 | "{tropical fruit} => {other vegetables}",0.0358922216573462,0.342054263565891,1.7677896385552,353 144 | "{tropical fruit} => {whole milk}",0.0422979156075241,0.403100775193798,1.57759495584202,416 145 | "{root vegetables} => {other vegetables}",0.047381799694967,0.434701492537313,2.2466049285888,466 146 | "{root vegetables} => {whole milk}",0.0489069649211998,0.448694029850746,1.75603095247994,481 147 | "{yogurt} => {other vegetables}",0.0434163701067616,0.311224489795918,1.6084565723294,427 148 | "{yogurt} => {whole milk}",0.0560244026436197,0.401603498542274,1.57173514053453,551 149 | "{rolls/buns} => {whole milk}",0.0566344687341129,0.307904919845218,1.20503178936638,557 150 | "{other vegetables} => {whole milk}",0.0748347737671581,0.386757750919601,1.51363409482462,736 151 | "{whole milk} => {other vegetables}",0.0748347737671581,0.292877039395145,1.51363409482462,736 152 | "{onions,other vegetables} => {whole milk}",0.00660904931367565,0.464285714285714,1.81705133306805,65 153 | "{onions,whole milk} => {other vegetables}",0.00660904931367565,0.546218487394958,2.82294210379895,65 154 | "{hamburger meat,other 
vegetables} => {whole milk}",0.00630401626842908,0.455882352941176,1.78416352613469,62 155 | "{hamburger meat,whole milk} => {other vegetables}",0.00630401626842908,0.427586206896552,2.20983202565822,62 156 | "{other vegetables,sugar} => {whole milk}",0.00630401626842908,0.584905660377358,2.28911546749356,62 157 | "{sugar,whole milk} => {other vegetables}",0.00630401626842908,0.418918918918919,2.16503813324623,62 158 | "{cream cheese,yogurt} => {whole milk}",0.00660904931367565,0.532786885245902,2.08514087401251,65 159 | "{cream cheese,whole milk} => {yogurt}",0.00660904931367565,0.401234567901235,2.8761967750063,65 160 | "{cream cheese,other vegetables} => {whole milk}",0.0067107269954245,0.488888888888889,1.91333952336738,66 161 | "{cream cheese,whole milk} => {other vegetables}",0.0067107269954245,0.407407407407407,2.10554485120959,66 162 | "{chicken,other vegetables} => {whole milk}",0.00843924758515506,0.471590909090909,1.84564130159534,83 163 | "{chicken,whole milk} => {other vegetables}",0.00843924758515506,0.479768786127168,2.47951971180278,83 164 | "{coffee,other vegetables} => {whole milk}",0.00640569395017794,0.477272727272727,1.86787794378324,63 165 | "{coffee,whole milk} => {other vegetables}",0.00640569395017794,0.342391304347826,1.76953151774087,63 166 | "{frozen vegetables,root vegetables} => {other vegetables}",0.00610066090493137,0.526315789473684,2.72008186519899,60 167 | "{frozen vegetables,other vegetables} => {root vegetables}",0.00610066090493137,0.342857142857143,3.1455223880597,60 168 | "{frozen vegetables,root vegetables} => {whole milk}",0.00620233858668022,0.535087719298246,2.09414553095831,61 169 | "{frozen vegetables,whole milk} => {root vegetables}",0.00620233858668022,0.303482587064677,2.78428287666147,61 170 | "{frozen vegetables,yogurt} => {whole milk}",0.00610066090493137,0.491803278688525,1.9247454221654,60 171 | "{frozen vegetables,whole milk} => {yogurt}",0.00610066090493137,0.298507462686567,2.13981114833993,60 172 | "{frozen vegetables,other vegetables} => {whole milk}",0.00965937976614133,0.542857142857143,2.12455232789495,95 173 | "{frozen vegetables,whole milk} => {other vegetables}",0.00965937976614133,0.472636815920398,2.44266058043989,95 174 | "{beef,root vegetables} => {other vegetables}",0.00793085917641078,0.456140350877193,2.35740428317246,78 175 | "{beef,other vegetables} => {root vegetables}",0.00793085917641078,0.402061855670103,3.68869249115248,78 176 | "{beef,root vegetables} => {whole milk}",0.00803253685815963,0.461988304093567,1.80806007590936,79 177 | "{beef,whole milk} => {root vegetables}",0.00803253685815963,0.37799043062201,3.46785063914875,79 178 | "{beef,yogurt} => {whole milk}",0.00610066090493137,0.521739130434783,2.04190383916677,60 179 | "{beef,whole milk} => {yogurt}",0.00610066090493137,0.287081339712919,2.05790450151352,60 180 | "{beef,rolls/buns} => {whole milk}",0.00681240467717336,0.5,1.95682451253482,67 181 | "{beef,whole milk} => {rolls/buns}",0.00681240467717336,0.320574162679426,1.74286726918306,67 182 | "{beef,other vegetables} => {whole milk}",0.00925266903914591,0.469072164948454,1.83578382103782,91 183 | "{beef,whole milk} => {other vegetables}",0.00925266903914591,0.435406698564593,2.25024954302826,91 184 | "{curd,tropical fruit} => {whole milk}",0.00650737163192679,0.633663366336634,2.47993601588571,64 185 | "{curd,root vegetables} => {whole milk}",0.00620233858668022,0.570093457943925,2.23114570588082,61 186 | "{curd,yogurt} => {other vegetables}",0.00610066090493137,0.352941176470588,1.82405489783932,60 
187 | "{curd,other vegetables} => {yogurt}",0.00610066090493137,0.355029585798817,2.54498249003743,60 188 | "{curd,yogurt} => {whole milk}",0.0100660904931368,0.582352941176471,2.27912502048173,99 189 | "{curd,whole milk} => {yogurt}",0.0100660904931368,0.385214007782101,2.76135551496863,99 190 | "{curd,other vegetables} => {whole milk}",0.00986273512963904,0.57396449704142,2.24629559427074,97 191 | "{curd,whole milk} => {other vegetables}",0.00986273512963904,0.377431906614786,1.95062680060768,97 192 | "{napkins,yogurt} => {whole milk}",0.00610066090493137,0.495867768595041,1.94065240912544,60 193 | "{napkins,whole milk} => {yogurt}",0.00610066090493137,0.309278350515464,2.21702082895014,60 194 | "{napkins,other vegetables} => {whole milk}",0.00681240467717336,0.471830985915493,1.84658087802581,67 195 | "{napkins,whole milk} => {other vegetables}",0.00681240467717336,0.345360824742268,1.78487846103006,67 196 | "{pork,root vegetables} => {other vegetables}",0.00701576004067107,0.514925373134328,2.66121442184767,69 197 | "{other vegetables,pork} => {root vegetables}",0.00701576004067107,0.323943661971831,2.97200178684045,69 198 | "{pork,root vegetables} => {whole milk}",0.00681240467717336,0.5,1.95682451253482,67 199 | "{pork,whole milk} => {root vegetables}",0.00681240467717336,0.307339449541284,2.81966743119266,67 200 | "{pork,rolls/buns} => {whole milk}",0.00620233858668022,0.54954954954955,2.15074405882205,61 201 | "{pork,whole milk} => {rolls/buns}",0.00620233858668022,0.279816513761468,1.52127994076508,61 202 | "{other vegetables,pork} => {whole milk}",0.0101677681748856,0.469483568075117,1.8373939084834,100 203 | "{pork,whole milk} => {other vegetables}",0.0101677681748856,0.458715596330275,2.37071355223766,100 204 | "{frankfurter,yogurt} => {whole milk}",0.00620233858668022,0.554545454545455,2.17029627753862,61 205 | "{frankfurter,whole milk} => {yogurt}",0.00620233858668022,0.301980198019802,2.16470499090725,61 206 | "{frankfurter,other vegetables} => {whole milk}",0.00762582613116421,0.462962962962963,1.81187454864335,75 207 | "{frankfurter,whole milk} => {other vegetables}",0.00762582613116421,0.371287128712871,1.918869632628,75 208 | "{bottled beer,bottled water} => {whole milk}",0.00610066090493137,0.387096774193548,1.51496091293018,60 209 | "{bottled beer,whole milk} => {bottled water}",0.00610066090493137,0.298507462686567,2.700847189993,60 210 | "{bottled beer,other vegetables} => {whole milk}",0.00762582613116421,0.471698113207547,1.8460608608819,75 211 | "{bottled beer,whole milk} => {other vegetables}",0.00762582613116421,0.373134328358209,1.9284162477157,75 212 | "{brown bread,yogurt} => {whole milk}",0.00711743772241993,0.48951048951049,1.91577225003409,70 213 | "{brown bread,whole milk} => {yogurt}",0.00711743772241993,0.282258064516129,2.02332949308756,70 214 | "{brown bread,other vegetables} => {whole milk}",0.00935434672089476,0.5,1.95682451253482,92 215 | "{brown bread,whole milk} => {other vegetables}",0.00935434672089476,0.370967741935484,1.91721899208381,92 216 | "{margarine,yogurt} => {whole milk}",0.00701576004067107,0.492857142857143,1.92886987664146,69 217 | "{margarine,whole milk} => {yogurt}",0.00701576004067107,0.289915966386555,2.07822414680158,69 218 | "{margarine,rolls/buns} => {whole milk}",0.00793085917641078,0.537931034482759,2.10527326865815,78 219 | "{margarine,whole milk} => {rolls/buns}",0.00793085917641078,0.327731092436975,1.78177738757194,78 220 | "{margarine,other vegetables} => {whole 
milk}",0.00925266903914591,0.469072164948454,1.83578382103782,91 221 | "{margarine,whole milk} => {other vegetables}",0.00925266903914591,0.382352941176471,1.97605947265927,91 222 | "{butter,whipped/sour cream} => {whole milk}",0.0067107269954245,0.66,2.58300835654596,66 223 | "{butter,tropical fruit} => {whole milk}",0.00620233858668022,0.622448979591837,2.43604684213518,61 224 | "{butter,root vegetables} => {other vegetables}",0.00660904931367565,0.511811023622047,2.6451189791502,65 225 | "{butter,other vegetables} => {root vegetables}",0.00660904931367565,0.32994923857868,3.0270995908781,65 226 | "{butter,root vegetables} => {whole milk}",0.00823589222165735,0.637795275590551,2.49610685850898,81 227 | "{butter,whole milk} => {root vegetables}",0.00823589222165735,0.298892988929889,2.74217588257972,81 228 | "{butter,yogurt} => {other vegetables}",0.00640569395017794,0.4375,2.26106805044666,63 229 | "{butter,other vegetables} => {yogurt}",0.00640569395017794,0.319796954314721,2.29242204496012,63 230 | "{butter,yogurt} => {whole milk}",0.00935434672089476,0.638888888888889,2.50038687712782,92 231 | "{butter,whole milk} => {yogurt}",0.00935434672089476,0.339483394833948,2.4335416823556,92 232 | "{butter,rolls/buns} => {whole milk}",0.00660904931367565,0.492424242424242,1.92717565628429,65 233 | "{butter,other vegetables} => {whole milk}",0.0114895780376207,0.573604060913706,2.24488497377091,113 234 | "{butter,whole milk} => {other vegetables}",0.0114895780376207,0.416974169741697,2.15498736700452,113 235 | "{newspapers,yogurt} => {whole milk}",0.00660904931367565,0.43046357615894,1.68468335516243,65 236 | "{newspapers,rolls/buns} => {whole milk}",0.00762582613116421,0.38659793814433,1.51300864371249,75 237 | "{newspapers,whole milk} => {rolls/buns}",0.00762582613116421,0.278810408921933,1.51581004518917,75 238 | "{newspapers,other vegetables} => {whole milk}",0.0083375699034062,0.431578947368421,1.689048526609,82 239 | "{newspapers,whole milk} => {other vegetables}",0.0083375699034062,0.304832713754647,1.57542287954648,82 240 | "{domestic eggs,tropical fruit} => {whole milk}",0.00691408235892222,0.607142857142857,2.37614405093514,68 241 | "{domestic eggs,root vegetables} => {other vegetables}",0.00732079308591764,0.51063829787234,2.63905815006541,72 242 | "{domestic eggs,other vegetables} => {root vegetables}",0.00732079308591764,0.328767123287671,3.01625434471478,72 243 | "{domestic eggs,root vegetables} => {whole milk}",0.00854092526690392,0.595744680851064,2.33153558940319,84 244 | "{domestic eggs,whole milk} => {root vegetables}",0.00854092526690392,0.284745762711864,2.61238300025297,84 245 | "{domestic eggs,yogurt} => {whole milk}",0.00772750381291307,0.539007092198582,2.1094845808886,76 246 | "{domestic eggs,whole milk} => {yogurt}",0.00772750381291307,0.257627118644068,1.84676582497406,76 247 | "{domestic eggs,rolls/buns} => {whole milk}",0.00660904931367565,0.422077922077922,1.65186484824368,65 248 | "{domestic eggs,other vegetables} => {whole milk}",0.0123029994916116,0.552511415525114,2.16233576270971,121 249 | "{domestic eggs,whole milk} => {other vegetables}",0.0123029994916116,0.410169491525424,2.11981973155677,121 250 | "{fruit/vegetable juice,tropical fruit} => {other vegetables}",0.00660904931367565,0.481481481481481,2.48837118779315,65 251 | "{fruit/vegetable juice,other vegetables} => {tropical fruit}",0.00660904931367565,0.314009661835749,2.99252424821181,65 252 | "{fruit/vegetable juice,root vegetables} => {other 
vegetables}",0.00660904931367565,0.550847457627119,2.84686534196674,65 253 | "{fruit/vegetable juice,other vegetables} => {root vegetables}",0.00660904931367565,0.314009661835749,2.88086289566659,65 254 | "{fruit/vegetable juice,root vegetables} => {whole milk}",0.00650737163192679,0.542372881355932,2.12265709834285,64 255 | "{fruit/vegetable juice,soda} => {whole milk}",0.00610066090493137,0.331491712707182,1.29734221825513,60 256 | "{fruit/vegetable juice,yogurt} => {other vegetables}",0.00823589222165735,0.440217391304348,2.27511195138111,81 257 | "{fruit/vegetable juice,other vegetables} => {yogurt}",0.00823589222165735,0.391304347826087,2.8050133096717,81 258 | "{fruit/vegetable juice,yogurt} => {whole milk}",0.00945602440264362,0.505434782608696,1.97809434419281,93 259 | "{fruit/vegetable juice,whole milk} => {yogurt}",0.00945602440264362,0.354961832061069,2.54449680635613,93 260 | "{fruit/vegetable juice,other vegetables} => {whole milk}",0.0104728012201322,0.497584541062802,1.94737125402016,103 261 | "{fruit/vegetable juice,whole milk} => {other vegetables}",0.0104728012201322,0.393129770992366,2.03175580541772,103 262 | "{citrus fruit,whipped/sour cream} => {whole milk}",0.00630401626842908,0.579439252336449,2.26772186499362,62 263 | "{tropical fruit,whipped/sour cream} => {yogurt}",0.00620233858668022,0.448529411764706,3.21522358943577,61 264 | "{whipped/sour cream,yogurt} => {tropical fruit}",0.00620233858668022,0.299019607843137,2.84966845265238,61 265 | "{tropical fruit,whipped/sour cream} => {other vegetables}",0.00782918149466192,0.566176470588235,2.92608806528392,77 266 | "{other vegetables,whipped/sour cream} => {tropical fruit}",0.00782918149466192,0.27112676056338,2.58384853695818,77 267 | "{tropical fruit,whipped/sour cream} => {whole milk}",0.00793085917641078,0.573529411764706,2.2445928232017,78 268 | "{root vegetables,whipped/sour cream} => {yogurt}",0.00640569395017794,0.375,2.68813775510204,63 269 | "{whipped/sour cream,yogurt} => {root vegetables}",0.00640569395017794,0.308823529411765,2.8332830333626,63 270 | "{root vegetables,whipped/sour cream} => {other vegetables}",0.00854092526690392,0.5,2.58407777193904,84 271 | "{other vegetables,whipped/sour cream} => {root vegetables}",0.00854092526690392,0.295774647887324,2.71356684885432,84 272 | "{root vegetables,whipped/sour cream} => {whole milk}",0.00945602440264362,0.553571428571429,2.16648428173498,93 273 | "{whipped/sour cream,whole milk} => {root vegetables}",0.00945602440264362,0.293375394321767,2.69155504025613,93 274 | "{whipped/sour cream,yogurt} => {other vegetables}",0.0101677681748856,0.490196078431373,2.5334095803324,100 275 | "{other vegetables,whipped/sour cream} => {yogurt}",0.0101677681748856,0.352112676056338,2.52407300948548,100 276 | "{whipped/sour cream,yogurt} => {whole milk}",0.0108795119471276,0.524509803921569,2.05274728275711,107 277 | "{whipped/sour cream,whole milk} => {yogurt}",0.0108795119471276,0.337539432176656,2.4196066439194,107 278 | "{rolls/buns,whipped/sour cream} => {other vegetables}",0.0067107269954245,0.458333333333333,2.36873795761079,66 279 | "{rolls/buns,whipped/sour cream} => {whole milk}",0.00782918149466192,0.534722222222222,2.09271510368307,77 280 | "{other vegetables,whipped/sour cream} => {whole milk}",0.0146415861718353,0.507042253521127,1.98438542116207,144 281 | "{whipped/sour cream,whole milk} => {other vegetables}",0.0146415861718353,0.454258675078864,2.34767948996355,144 282 | "{pip fruit,tropical fruit} => 
{yogurt}",0.00640569395017794,0.313432835820896,2.24680170575693,63 283 | "{pip fruit,yogurt} => {tropical fruit}",0.00640569395017794,0.355932203389831,3.39204769412692,63 284 | "{pip fruit,tropical fruit} => {other vegetables}",0.00945602440264362,0.462686567164179,2.39123614716747,93 285 | "{other vegetables,pip fruit} => {tropical fruit}",0.00945602440264362,0.361867704280156,3.44861324766989,93 286 | "{other vegetables,tropical fruit} => {pip fruit}",0.00945602440264362,0.263456090651558,3.48264872521246,93 287 | "{pip fruit,tropical fruit} => {whole milk}",0.00843924758515506,0.412935323383085,1.61608392577502,83 288 | "{pip fruit,whole milk} => {tropical fruit}",0.00843924758515506,0.280405405405405,2.67227438194008,83 289 | "{pip fruit,root vegetables} => {other vegetables}",0.00813421453990849,0.522875816993464,2.70230355235456,80 290 | "{other vegetables,pip fruit} => {root vegetables}",0.00813421453990849,0.311284046692607,2.85585690225913,80 291 | "{pip fruit,root vegetables} => {whole milk}",0.00894763599389934,0.57516339869281,2.25098767454986,88 292 | "{pip fruit,whole milk} => {root vegetables}",0.00894763599389934,0.297297297297297,2.72753630496168,88 293 | "{pip fruit,yogurt} => {other vegetables}",0.00813421453990849,0.451977401129944,2.33588951135733,80 294 | "{other vegetables,pip fruit} => {yogurt}",0.00813421453990849,0.311284046692607,2.23139839593425,80 295 | "{pip fruit,yogurt} => {whole milk}",0.00955770208439248,0.531073446327684,2.07843507546071,94 296 | "{pip fruit,whole milk} => {yogurt}",0.00955770208439248,0.317567567567568,2.27644098179812,94 297 | "{pip fruit,rolls/buns} => {whole milk}",0.00620233858668022,0.445255474452555,1.74257365349816,61 298 | "{other vegetables,pip fruit} => {whole milk}",0.0135231316725979,0.517509727626459,2.02535144098935,133 299 | "{pip fruit,whole milk} => {other vegetables}",0.0135231316725979,0.449324324324324,2.32217799775603,133 300 | "{pastry,tropical fruit} => {whole milk}",0.0067107269954245,0.507692307692308,1.98692950503535,66 301 | "{pastry,soda} => {whole milk}",0.00823589222165735,0.391304347826087,1.53142787937508,81 302 | "{pastry,yogurt} => {other vegetables}",0.00660904931367565,0.373563218390805,1.93063281811538,65 303 | "{other vegetables,pastry} => {yogurt}",0.00660904931367565,0.292792792792793,2.09884629527487,65 304 | "{pastry,yogurt} => {whole milk}",0.00915099135739705,0.517241379310345,2.02430121986361,90 305 | "{pastry,whole milk} => {yogurt}",0.00915099135739705,0.275229357798165,1.9729451413593,90 306 | "{pastry,rolls/buns} => {other vegetables}",0.00610066090493137,0.29126213592233,1.50528802248876,60 307 | "{other vegetables,pastry} => {rolls/buns}",0.00610066090493137,0.27027027027027,1.46937982758878,60 308 | "{pastry,rolls/buns} => {whole milk}",0.00854092526690392,0.407766990291262,1.59585688400898,84 309 | "{pastry,whole milk} => {rolls/buns}",0.00854092526690392,0.256880733944954,1.39658486365319,84 310 | "{other vegetables,pastry} => {whole milk}",0.010574478901881,0.468468468468468,1.83342116489749,104 311 | "{pastry,whole milk} => {other vegetables}",0.010574478901881,0.318042813455657,1.64369472955144,104 312 | "{citrus fruit,tropical fruit} => {yogurt}",0.00630401626842908,0.316326530612245,2.26754477301125,62 313 | "{citrus fruit,yogurt} => {tropical fruit}",0.00630401626842908,0.291079812206573,2.7740018924919,62 314 | "{citrus fruit,tropical fruit} => {other vegetables}",0.0090493136756482,0.454081632653061,2.34676450716913,89 315 | "{citrus fruit,other vegetables} => {tropical 
fruit}",0.0090493136756482,0.313380281690141,2.98652623102959,89 316 | "{other vegetables,tropical fruit} => {citrus fruit}",0.0090493136756482,0.252124645892351,3.04624802500157,89 317 | "{citrus fruit,tropical fruit} => {whole milk}",0.0090493136756482,0.454081632653061,1.77711613893468,89 318 | "{citrus fruit,whole milk} => {tropical fruit}",0.0090493136756482,0.296666666666667,2.82724483204134,89 319 | "{citrus fruit,root vegetables} => {other vegetables}",0.0103711235383833,0.586206896551724,3.02960842227336,102 320 | "{citrus fruit,other vegetables} => {root vegetables}",0.0103711235383833,0.359154929577465,3.2950454593231,102 321 | "{citrus fruit,root vegetables} => {whole milk}",0.00915099135739705,0.517241379310345,2.02430121986361,90 322 | "{citrus fruit,whole milk} => {root vegetables}",0.00915099135739705,0.3,2.75233208955224,90 323 | "{citrus fruit,yogurt} => {other vegetables}",0.00762582613116421,0.352112676056338,1.81977307883031,75 324 | "{citrus fruit,other vegetables} => {yogurt}",0.00762582613116421,0.264084507042254,1.89305475711411,75 325 | "{citrus fruit,yogurt} => {whole milk}",0.0102694458566345,0.474178403755869,1.85576784756823,101 326 | "{citrus fruit,whole milk} => {yogurt}",0.0102694458566345,0.336666666666667,2.41335034013605,101 327 | "{citrus fruit,rolls/buns} => {whole milk}",0.00721911540416879,0.43030303030303,1.68405503502997,71 328 | "{citrus fruit,other vegetables} => {whole milk}",0.0130147432638536,0.450704225352113,1.76389815214406,128 329 | "{citrus fruit,whole milk} => {other vegetables}",0.0130147432638536,0.426666666666667,2.20507969872132,128 330 | "{root vegetables,shopping bags} => {other vegetables}",0.00660904931367565,0.515873015873016,2.66611198692124,65 331 | "{other vegetables,shopping bags} => {root vegetables}",0.00660904931367565,0.285087719298246,2.61552026053941,65 332 | "{shopping bags,soda} => {rolls/buns}",0.00630401626842908,0.256198347107438,1.39287492747466,62 333 | "{rolls/buns,shopping bags} => {soda}",0.00630401626842908,0.322916666666667,1.85182823129252,62 334 | "{shopping bags,soda} => {whole milk}",0.00681240467717336,0.276859504132231,1.08353092842837,67 335 | "{shopping bags,whole milk} => {soda}",0.00681240467717336,0.278008298755187,1.59429248877974,67 336 | "{other vegetables,shopping bags} => {whole milk}",0.00762582613116421,0.328947368421053,1.28738454772028,75 337 | "{shopping bags,whole milk} => {other vegetables}",0.00762582613116421,0.311203319502075,1.60834716095791,75 338 | "{sausage,tropical fruit} => {whole milk}",0.00721911540416879,0.518248175182482,2.02824146554704,71 339 | "{root vegetables,sausage} => {other vegetables}",0.00681240467717336,0.45578231292517,2.3555538873458,67 340 | "{other vegetables,sausage} => {root vegetables}",0.00681240467717336,0.252830188679245,2.31957547169811,67 341 | "{root vegetables,sausage} => {whole milk}",0.00772750381291307,0.517006802721088,2.02338316942376,76 342 | "{sausage,whole milk} => {root vegetables}",0.00772750381291307,0.258503401360544,2.37162402274343,76 343 | "{sausage,soda} => {rolls/buns}",0.00965937976614133,0.397489539748954,2.16103351212325,95 344 | "{rolls/buns,sausage} => {soda}",0.00965937976614133,0.315614617940199,1.80995321716727,95 345 | "{rolls/buns,soda} => {sausage}",0.00965937976614133,0.251989389920424,2.68215979422876,95 346 | "{sausage,soda} => {other vegetables}",0.00721911540416879,0.297071129707113,1.53530980592194,71 347 | "{other vegetables,sausage} => {soda}",0.00721911540416879,0.267924528301887,1.53646515209858,71 348 | 
"{sausage,soda} => {whole milk}",0.0067107269954245,0.276150627615063,1.08075663453806,66 349 | "{sausage,yogurt} => {other vegetables}",0.00813421453990849,0.414507772020725,2.14224063994947,80 350 | "{other vegetables,sausage} => {yogurt}",0.00813421453990849,0.30188679245283,2.16403542549095,80 351 | "{sausage,yogurt} => {whole milk}",0.00874428063040163,0.44559585492228,1.74390578319165,86 352 | "{sausage,whole milk} => {yogurt}",0.00874428063040163,0.292517006802721,2.09686935998889,86 353 | "{rolls/buns,sausage} => {other vegetables}",0.00884595831215048,0.289036544850498,1.49378582165247,87 354 | "{other vegetables,sausage} => {rolls/buns}",0.00884595831215048,0.328301886792453,1.78488062830502,87 355 | "{rolls/buns,sausage} => {whole milk}",0.00935434672089476,0.305647840531561,1.19619837311099,92 356 | "{sausage,whole milk} => {rolls/buns}",0.00935434672089476,0.312925170068027,1.70128195003817,92 357 | "{other vegetables,sausage} => {whole milk}",0.0101677681748856,0.377358490566038,1.47684868870552,100 358 | "{sausage,whole milk} => {other vegetables}",0.0101677681748856,0.340136054421769,1.75787603533268,100 359 | "{bottled water,tropical fruit} => {yogurt}",0.00711743772241993,0.384615384615385,2.75706436420722,70 360 | "{bottled water,yogurt} => {tropical fruit}",0.00711743772241993,0.309734513274336,2.95178191671812,70 361 | "{bottled water,tropical fruit} => {other vegetables}",0.00620233858668022,0.335164835164835,1.73218400097013,61 362 | "{bottled water,other vegetables} => {tropical fruit}",0.00620233858668022,0.25,2.38250968992248,61 363 | "{bottled water,tropical fruit} => {whole milk}",0.00803253685815963,0.434065934065934,1.69878171967308,79 364 | "{bottled water,root vegetables} => {other vegetables}",0.00701576004067107,0.448051948051948,2.31560215927005,69 365 | "{bottled water,other vegetables} => {root vegetables}",0.00701576004067107,0.282786885245902,2.59441139588941,69 366 | "{bottled water,root vegetables} => {whole milk}",0.00732079308591764,0.467532467532468,1.82975798574684,72 367 | "{bottled water,soda} => {yogurt}",0.0074224707676665,0.256140350877193,1.83610812746151,73 368 | "{bottled water,yogurt} => {soda}",0.0074224707676665,0.323008849557522,1.85235687195232,73 369 | "{soda,yogurt} => {bottled water}",0.0074224707676665,0.271375464684015,2.4553612651033,73 370 | "{bottled water,rolls/buns} => {soda}",0.00681240467717336,0.281512605042017,1.61438861258789,67 371 | "{bottled water,soda} => {whole milk}",0.00752414844941535,0.259649122807018,1.01617553633387,74 372 | "{bottled water,yogurt} => {rolls/buns}",0.00711743772241993,0.309734513274336,1.68393528913936,70 373 | "{bottled water,rolls/buns} => {yogurt}",0.00711743772241993,0.294117647058824,2.10834333733493,70 374 | "{bottled water,yogurt} => {other vegetables}",0.00813421453990849,0.353982300884956,1.82943559075331,80 375 | "{bottled water,other vegetables} => {yogurt}",0.00813421453990849,0.327868852459016,2.3502843760455,80 376 | "{bottled water,yogurt} => {whole milk}",0.00965937976614133,0.420353982300885,1.645117953016,95 377 | "{bottled water,whole milk} => {yogurt}",0.00965937976614133,0.281065088757396,2.01477780461297,95 378 | "{bottled water,rolls/buns} => {other vegetables}",0.00732079308591764,0.302521008403361,1.56347562671942,72 379 | "{bottled water,other vegetables} => {rolls/buns}",0.00732079308591764,0.295081967213115,1.60427371340021,72 380 | "{bottled water,rolls/buns} => {whole milk}",0.00874428063040163,0.361344537815126,1.41417569813441,86 381 | "{bottled water,whole 
milk} => {rolls/buns}",0.00874428063040163,0.254437869822485,1.38330373117974,86 382 | "{bottled water,other vegetables} => {whole milk}",0.0107778342653787,0.434426229508197,1.70019178957943,106 383 | "{bottled water,whole milk} => {other vegetables}",0.0107778342653787,0.313609467455621,1.62078250784342,106 384 | "{root vegetables,tropical fruit} => {yogurt}",0.00813421453990849,0.386473429951691,2.77038351572513,80 385 | "{tropical fruit,yogurt} => {root vegetables}",0.00813421453990849,0.277777777777778,2.54845563847429,80 386 | "{root vegetables,yogurt} => {tropical fruit}",0.00813421453990849,0.31496062992126,3.00158701092596,80 387 | "{root vegetables,tropical fruit} => {other vegetables}",0.0123029994916116,0.584541062801932,3.0209991343442,121 388 | "{other vegetables,tropical fruit} => {root vegetables}",0.0123029994916116,0.342776203966006,3.14477981903514,121 389 | "{other vegetables,root vegetables} => {tropical fruit}",0.0123029994916116,0.259656652360515,2.47453796120704,121 390 | "{root vegetables,tropical fruit} => {whole milk}",0.011997966446365,0.570048309178744,2.23096900945999,118 391 | "{tropical fruit,whole milk} => {root vegetables}",0.011997966446365,0.283653846153846,2.60236527698048,118 392 | "{soda,tropical fruit} => {yogurt}",0.00660904931367565,0.317073170731707,2.27289696366351,65 393 | "{soda,tropical fruit} => {other vegetables}",0.00721911540416879,0.346341463414634,1.78994655422119,71 394 | "{soda,tropical fruit} => {whole milk}",0.00782918149466192,0.375609756097561,1.47000475575786,77 395 | "{tropical fruit,yogurt} => {rolls/buns}",0.00874428063040163,0.298611111111111,1.623460628954,86 396 | "{rolls/buns,tropical fruit} => {yogurt}",0.00874428063040163,0.355371900826446,2.54743632990386,86 397 | "{rolls/buns,yogurt} => {tropical fruit}",0.00874428063040163,0.254437869822485,2.42480276134122,86 398 | "{tropical fruit,yogurt} => {other vegetables}",0.0123029994916116,0.420138888888889,2.17134312780989,121 399 | "{other vegetables,tropical fruit} => {yogurt}",0.0123029994916116,0.342776203966006,2.45714574781754,121 400 | "{other vegetables,yogurt} => {tropical fruit}",0.0123029994916116,0.283372365339578,2.7005496251112,121 401 | "{tropical fruit,yogurt} => {whole milk}",0.0151499745805796,0.517361111111111,2.02476980810894,149 402 | "{tropical fruit,whole milk} => {yogurt}",0.0151499745805796,0.358173076923077,2.56751618916797,149 403 | "{whole milk,yogurt} => {tropical fruit}",0.0151499745805796,0.270417422867514,2.57708852122286,149 404 | "{rolls/buns,tropical fruit} => {other vegetables}",0.00782918149466192,0.318181818181818,1.64441312759757,77 405 | "{rolls/buns,tropical fruit} => {whole milk}",0.0109811896288765,0.446280991735537,1.7465871682129,108 406 | "{tropical fruit,whole milk} => {rolls/buns}",0.0109811896288765,0.259615384615385,1.41145235361653,108 407 | "{other vegetables,tropical fruit} => {whole milk}",0.0170818505338078,0.475920679886686,1.86258650484901,168 408 | "{tropical fruit,whole milk} => {other vegetables}",0.0170818505338078,0.403846153846154,2.08713973887384,168 409 | "{root vegetables,soda} => {other vegetables}",0.00823589222165735,0.442622950819672,2.28754425712637,81 410 | "{other vegetables,soda} => {root vegetables}",0.00823589222165735,0.251552795031056,2.30785609993511,81 411 | "{root vegetables,soda} => {whole milk}",0.00813421453990849,0.437158469945355,1.71088481970257,80 412 | "{root vegetables,yogurt} => {rolls/buns}",0.00721911540416879,0.279527559055118,1.51970897916367,71 413 | "{rolls/buns,root vegetables} 
=> {yogurt}",0.00721911540416879,0.297071129707113,2.12951498591068,71 414 | "{root vegetables,yogurt} => {other vegetables}",0.0129130655821047,0.5,2.58407777193904,127 415 | "{other vegetables,root vegetables} => {yogurt}",0.0129130655821047,0.272532188841202,1.95361084347902,127 416 | "{other vegetables,yogurt} => {root vegetables}",0.0129130655821047,0.297423887587822,2.72869770002447,127 417 | "{root vegetables,yogurt} => {whole milk}",0.0145399084900864,0.562992125984252,2.20335358498015,143 418 | "{root vegetables,whole milk} => {yogurt}",0.0145399084900864,0.297297297297297,2.1311362382791,143 419 | "{whole milk,yogurt} => {root vegetables}",0.0145399084900864,0.259528130671506,2.38102534062898,143 420 | "{rolls/buns,root vegetables} => {other vegetables}",0.0122013218098627,0.502092050209205,2.59488981282582,120 421 | "{other vegetables,root vegetables} => {rolls/buns}",0.0122013218098627,0.257510729613734,1.40000996448373,120 422 | "{other vegetables,rolls/buns} => {root vegetables}",0.0122013218098627,0.286396181384248,2.6275246678303,120 423 | "{rolls/buns,root vegetables} => {whole milk}",0.012709710218607,0.523012552301255,2.04688756541299,125 424 | "{root vegetables,whole milk} => {rolls/buns}",0.012709710218607,0.25987525987526,1.41286521883537,125 425 | "{other vegetables,root vegetables} => {whole milk}",0.0231825114387392,0.489270386266094,1.91483257020575,228 426 | "{root vegetables,whole milk} => {other vegetables}",0.0231825114387392,0.474012474012474,2.44977019543494,228 427 | "{other vegetables,whole milk} => {root vegetables}",0.0231825114387392,0.309782608695652,2.84208204899416,228 428 | "{soda,yogurt} => {rolls/buns}",0.00864260294865277,0.315985130111524,1.71791805121439,85 429 | "{rolls/buns,yogurt} => {soda}",0.00864260294865277,0.251479289940828,1.44215674435455,85 430 | "{soda,yogurt} => {other vegetables}",0.0083375699034062,0.304832713754647,1.57542287954648,82 431 | "{other vegetables,soda} => {yogurt}",0.0083375699034062,0.254658385093168,1.82548485232602,82 432 | "{soda,yogurt} => {whole milk}",0.0104728012201322,0.382899628252788,1.49853475681105,103 433 | "{soda,whole milk} => {yogurt}",0.0104728012201322,0.261421319796954,1.87396405262613,103 434 | "{rolls/buns,soda} => {other vegetables}",0.00986273512963904,0.257294429708223,1.3297376333055,97 435 | "{other vegetables,soda} => {rolls/buns}",0.00986273512963904,0.301242236024845,1.63776527988079,97 436 | "{other vegetables,soda} => {whole milk}",0.0139298423995933,0.425465838509317,1.66512396408242,137 437 | "{soda,whole milk} => {other vegetables}",0.0139298423995933,0.347715736040609,1.79704900891192,137 438 | "{rolls/buns,yogurt} => {other vegetables}",0.0114895780376207,0.334319526627219,1.72781531496516,113 439 | "{other vegetables,yogurt} => {rolls/buns}",0.0114895780376207,0.26463700234192,1.4387534096367,113 440 | "{other vegetables,rolls/buns} => {yogurt}",0.0114895780376207,0.269689737470167,1.93323510788564,113 441 | "{rolls/buns,yogurt} => {whole milk}",0.015556685307575,0.452662721893491,1.77156302022383,153 442 | "{whole milk,yogurt} => {rolls/buns}",0.015556685307575,0.277676950998185,1.50964776841744,153 443 | "{rolls/buns,whole milk} => {yogurt}",0.015556685307575,0.274685816876122,1.969048840362,153 444 | "{other vegetables,yogurt} => {whole milk}",0.0222674123029995,0.51288056206089,2.00723451168677,219 445 | "{whole milk,yogurt} => {other vegetables}",0.0222674123029995,0.397459165154265,2.05413078785717,219 446 | "{other vegetables,whole milk} => 
{yogurt}",0.0222674123029995,0.297554347826087,2.13297887089618,219 447 | "{other vegetables,rolls/buns} => {whole milk}",0.0178952719877987,0.420047732696897,1.64391939955192,176 448 | "{rolls/buns,whole milk} => {other vegetables}",0.0178952719877987,0.315978456014363,1.63302580919667,176 449 | "{other vegetables,root vegetables,tropical fruit} => {whole milk}",0.00701576004067107,0.570247933884298,2.23175027049426,69 450 | "{root vegetables,tropical fruit,whole milk} => {other vegetables}",0.00701576004067107,0.584745762711864,3.02205705531854,69 451 | "{other vegetables,tropical fruit,whole milk} => {root vegetables}",0.00701576004067107,0.410714285714286,3.76807369402985,69 452 | "{other vegetables,root vegetables,whole milk} => {tropical fruit}",0.00701576004067107,0.302631578947368,2.88409067727458,69 453 | "{other vegetables,tropical fruit,yogurt} => {whole milk}",0.00762582613116421,0.619834710743802,2.4258155114068,75 454 | "{tropical fruit,whole milk,yogurt} => {other vegetables}",0.00762582613116421,0.503355704697987,2.60142057577756,75 455 | "{other vegetables,tropical fruit,whole milk} => {yogurt}",0.00762582613116421,0.446428571428571,3.2001639941691,75 456 | "{other vegetables,whole milk,yogurt} => {tropical fruit}",0.00762582613116421,0.342465753424658,3.2637119040034,75 457 | "{other vegetables,root vegetables,yogurt} => {whole milk}",0.00782918149466192,0.606299212598425,2.37284232228632,77 458 | "{root vegetables,whole milk,yogurt} => {other vegetables}",0.00782918149466192,0.538461538461538,2.78285298516512,77 459 | "{other vegetables,root vegetables,whole milk} => {yogurt}",0.00782918149466192,0.337719298245614,2.42089598997494,77 460 | "{other vegetables,whole milk,yogurt} => {root vegetables}",0.00782918149466192,0.351598173515982,3.22571645198664,77 461 | "{other vegetables,rolls/buns,root vegetables} => {whole milk}",0.00620233858668022,0.508333333333333,1.9894382544104,61 462 | "{rolls/buns,root vegetables,whole milk} => {other vegetables}",0.00620233858668022,0.488,2.52205990541251,61 463 | "{other vegetables,root vegetables,whole milk} => {rolls/buns}",0.00620233858668022,0.267543859649123,1.45455713634556,61 464 | "{other vegetables,rolls/buns,whole milk} => {root vegetables}",0.00620233858668022,0.346590909090909,3.17977760345997,61 465 | -------------------------------------------------------------------------------- /机器学习/基于关联规则的购物篮分析/关联规则: -------------------------------------------------------------------------------- 1 | ====-----基于关联规则的购物篮分析-----------------------------------无监督的学习 2 | 购物篮分析: 3 | 1、使用简单的性能指标,发现大型数据集中的关联方法 4 | 2、理解事务型数据的特性 5 | 3、知道如何识别有用且可行动的模式 6 | 7 | 8 | 1、理解关联规则: 9 | 购物篮分析的基础是可能出现在任意给定交易中的项。括号里的一件商品或多件商品构成一个集合,或者说是出现在具有某种规律性数据中的项集。 10 | {花生,果酱,面包} 11 | 购物篮分析的结果是关联规则的集合,这些规则表示了项集之间关系的模式,关联规则一般由项集的子集组成,通常将规则左项的一个子集与规则右项的一个子集联系在一起表示。 12 | LHS---------→RHS LHS表示为了满足规则所要达到的条件 RHS表示满足条件后的预期结果 13 | {花生,果酱}------→{面包} 14 | 15 | 16 | 适用于:查找发生在信用卡欺诈或者保险应用相结合的购物或者医疗津贴的模式 17 | 找到客户放弃他们的移动电话服务或者升级他们的有线电视服务套餐之前的行为 18 | 19 | 20 | 2、Apriori算法(事务型数据) 21 | 1、利用关于频繁项集性质的一个简单的先验信念,寻找最大频繁项集,且一个频繁项集的所有子集必须也是频繁的。 22 | 23 | 优点:能够处理大量的事务性数据;规则的结果很容易被理解;发现意想不到的知识 24 | 缺点:对于小的数据集不是很有帮助;容易得出虚假的模式 25 | 26 | 2、度量规则兴趣度--支持度和置信度 27 | 支持度:是指一个项集或者规则在数据中出现的频率。 28 | support(X)=count(X)/N -------------------x表示某一项集 N表示交易数据(总的项集) count(x):X出现的频数 29 | 30 | 置信度:一个规则的预测能力或者准确率的度量,定义为同时包含项集X和项集Y的支持度除以只包含项集X的支持度: 31 | confidence(X→Y)=support(X,Y)/support(X) 32 | 33 | 本质上置信度表示交易中项或者项集X的出现导致项集Y出现的比例。(x导致y的置信度与y导致x的置信度是不同的) 34 | 35 | (其实上面的公式有点像贝叶斯公式) 36 | 37 | 
3. Building rules with the Apriori principle
The Apriori principle: all subsets of a frequent itemset must also be frequent.
If {A,B} is a frequent itemset, then {A} and {B} must each be frequent itemsets; if either one is not frequent (does not meet the support threshold), then no set containing that itemset needs to be considered at all.

The actual process of creating rules:
1. Identify all itemsets that meet the minimum support threshold (iteratively: starting from 1-itemsets up to i-itemsets)
2. Create rules from those itemsets that meet the minimum confidence threshold


4. Taking action on the rules (interpreting association rules)
Actionable rules
Trivial rules
Inexplicable rules

--------------------------------------------------------------------------------
/机器学习/基于关联规则的购物篮分析/关联规则挖掘.R:
--------------------------------------------------------------------------------
#==-----------------Identifying frequently purchased groceries with association rules----------------------------------

# Load the arules package
library(arules)

data <- read.transactions('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 08\\groceries.csv', sep = ',')

summary(data)  # shows the number of transactions, the total number of items, the most frequent itemsets, etc.

# Use inspect() to look at the data
inspect(data[1:5])

# To examine particular items (columns of the matrix), use itemFrequency()
itemFrequency(data[, 1:3])  # gives the support of the first three items

# Visualize item support with itemFrequencyPlot()
itemFrequencyPlot(data, support = 0.1)  # only items with support of at least 0.1
itemFrequencyPlot(data, topN = 20)      # top 20 items by support, in descending order

# Visualize the transaction data -- plot the sparse matrix with image()
image(data[1:100])


#==--------Training the model-----------------------------------------
myrules <- apriori(data = data, parameter = list(support = 0.006, confidence = 0.25, minlen = 2))

summary(myrules)
inspect(myrules[1:3])  # look at the first three rules


#==----------Improving model performance-------------------------------------

# Sort the rules by different criteria, take subsets, and export them
#--1. Sorting the set of association rules
#-------sort by lift; 'by' can also be 'support' or 'confidence'
inspect(sort(myrules, by = 'lift')[1:5])

#--2. -----Taking subsets of association rules: subset()
# Suppose we want to know whether a particular item is purchased together with others -- 'berries' here
berriesRules <- subset(myrules, items %in% 'berries')  # 'items' can be replaced by lhs or rhs to pin an item to one side of the rule
inspect(berriesRules)

berriesRules <- subset(myrules, lhs %in% 'berries' & lift > 2)  # conditions can be combined with AND, OR and NOT

# When taking subsets: 'items' matches anywhere in the rule; %pin% does partial matching and %ain% complete matching; support and confidence conditions can be added as well

#--3. Writing the rules to a file or a data frame
write(myrules, file = 'rules.csv', sep = ',', quote = TRUE, row.names = FALSE)  # write to a CSV file

dat <- as(myrules, 'data.frame')  # convert to a data frame
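A small sketch of the %pin% and %ain% matching operators mentioned in the comment above (reuses the myrules object from the script; the item names come from the groceries data):

# Hedged illustration of partial vs. complete matching in arules.

# %pin% -- partial matching: any item whose name contains 'fruit'
# (e.g. 'tropical fruit', 'citrus fruit', 'fruit/vegetable juice')
fruitRules <- subset(myrules, items %pin% 'fruit')

# %ain% -- complete matching: rules containing BOTH berries AND yogurt
berryYogurtRules <- subset(myrules, items %ain% c('berries', 'yogurt'))

# Matching operators combine with interest-measure conditions
inspect(subset(myrules, items %pin% 'fruit' & confidence > 0.5))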
--------------------------------------------------------------------------------
/机器学习/提高模型性能/bagging集成学习.R:
--------------------------------------------------------------------------------
#----Using meta-learning to improve model performance----------------------------------------------

library(ipred)  # load ipred for the bagging() function
library(caret)

data <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 05\\credit.csv')
data$default <- factor(data$default)  # classification methods below need a factor outcome

set.seed(300)
# Create a 10-fold CV control object
# caret has a built-in bagged-tree method -- 'treebag'
ctrl <- trainControl(method = 'cv', number = 10)
train(default ~ ., data = data, method = 'treebag', trControl = ctrl)


# Build a bagged support vector machine on top of the ksvm() function used earlier; caret ships with an svmBag specification
str(svmBag)  # shows the three functions a bag specification needs: fit the model, predict, aggregate the votes

# Create a bagging control object
bagctrl <- bagControl(fit = svmBag$fit, predict = svmBag$pred, aggregate = svmBag$aggregate)

model <- train(default ~ ., data = data, method = 'bag', trControl = ctrl, bagControl = bagctrl)
--------------------------------------------------------------------------------
/机器学习/提高模型性能/boosting.R:
--------------------------------------------------------------------------------
#--boosting--adaptive boosting (AdaBoost)---

library(adabag)

# reuse the credit data from the bagging script; adabag requires the class to be a factor
data <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 05\\credit.csv')
data$default <- factor(data$default)

m_adaboosting <- boosting(default ~ ., data = data)
pred_adaboost <- predict(m_adaboosting, data)

model <- boosting.cv(default ~ ., data = data)
--------------------------------------------------------------------------------
/机器学习/提高模型性能/caret自动参数调整.R:
--------------------------------------------------------------------------------
#----Automated parameter tuning with the caret package

library(caret)
library(readr)

# To see the tuning parameters of a given model, use the modelLookup() function
# For a model with p parameters, caret tests 3^p candidate models by default -- three values per parameter
modelLookup('C5.0')

#--1. Create a simple tuned model

data <- read_csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 05\\credit.csv')
data$default <- factor(data$default)  # train() needs a factor outcome for classification

set.seed(300)
model <- train(default ~ ., data = data, method = 'C5.0')  # by default the candidates are compared using bootstrap sampling


pre <- predict(model, data)  # add type = 'prob' to obtain predicted probabilities
table(pre, data$default)     # the resubstitution accuracy may look very high, but it is not a measure of future predictive performance


#----Customizing the tuning process

# Use trainControl() to create a control object: trainControl(method, selectionFunction)
# 'method' sets the resampling method; 'selectionFunction' chooses the optimal model

ctrl <- trainControl(method = 'cv', number = 10, selectionFunction = 'oneSE')

grid <- expand.grid(.model = 'tree', .trials = c(1, 5, 10, 15, 20, 25, 30, 35), .winnow = 'FALSE')  # expand.grid builds all combinations

# Customized model
set.seed(300)
m <- train(default ~ ., data = data, method = 'C5.0', metric = 'Kappa', trControl = ctrl, tuneGrid = grid)
--------------------------------------------------------------------------------
/机器学习/提高模型性能/提高模型性能:
--------------------------------------------------------------------------------
==-----Improving model performance--------------------------------------

1. How to automate model tuning by systematically searching for the best set of training conditions
2. Methods for combining several models to tackle harder learning tasks
3. How to apply these ideas to a variant of decision trees (random forests)

==-------------------------------------------------------------------

--. Automated parameter tuning with the caret package

Automated parameter tuning needs to address three questions:
1. What type of machine learning model should be trained on the data
2. Which model parameters can be tuned, and how extensively should they be tuned to find the optimal settings
3. What evaluation criteria should be used to compare the candidate models and find the best one

caret searches at most three possible values for each of a model's p parameters, which means at most 3^p candidate models are tested.


Automated model selection proceeds by:
1. Searching a set of candidate models, formed as a matrix or grid of all parameter combinations (up to 3^p candidates)
2. Identifying the best model among the candidates (using k-fold CV, or a performance statistic such as accuracy, to filter the models)


The train() function in caret, combined with a selection function, builds both simple and customized tuned models automatically.


==----------------------------------------------------------------------------------------------------

--. Using meta-learning to improve model performance

1. Meta-learning refers to combining several models into a stronger group -- a highly sophisticated class of algorithms that learn in an adaptive, self-modifying way.

Ensemble learning must answer:
How to choose or construct the weaker learner models
How to combine the weak learners' predictions into a single final prediction

Pattern:
training data -- allocation function -- model1, model2, model3 -- combination function -- ensemble model

Allocation function: decides how much training data each model receives -- whether each model gets the full data set or a sample of it, and whether it sees all features or a different subset of them.
Combination function: reconciles disagreements among the predictions (e.g. a weighted vote decides the final prediction)

Advantages:
1. Better adaptability to future problems
2. Improved performance on both massive and small data sets
3. The ability to synthesize data from different domains
4. A more nuanced understanding of difficult learning tasks


2. Bagging (bootstrap aggregating)
Generate several training data sets by bootstrap sampling the original training data, build a model on each with a single machine learning algorithm, then combine the predictions by voting or averaging. Typically used with decision trees.

Use the bagging() function in the ipred package: bagging(formula, data, nbagg); the nbagg parameter sets the number of bagged models.


3. Boosting
Boosts the performance of weak learners to attain the performance of stronger learners. Like bagging, it uses ensembles of models trained on resampled data and takes a vote to decide the final prediction.

Differences from bagging:
1. The resampled data sets are constructed specifically to generate complementary models
2. The votes are not equally weighted -- each vote is weighted by the model's past performance, so better-performing models have more influence on the prediction

Because the models in a boosted ensemble are complementary, it is possible to raise the ensemble's performance to an arbitrary threshold simply by adding more learners to the group, provided each classifier performs better than random guessing.

Principle: classifiers are built iteratively; the weak classifiers iteratively learn the large share of hard-to-classify examples in the training data, paying more attention (giving greater weight) to the examples that are most frequently misclassified.
--------------------------------------------------------------------------------
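As a concrete instance of this reweighting idea, the standard AdaBoost update in textbook form (added for illustration; these equations are not spelled out in the notes above):

% Weighted error of weak learner h_t under example weights w_i:
\varepsilon_t = \sum_{i=1}^{N} w_i \, \mathbb{1}\{h_t(x_i) \neq y_i\}
% Vote weight of h_t (better-than-random learners get alpha_t > 0):
\alpha_t = \tfrac{1}{2} \ln\frac{1-\varepsilon_t}{\varepsilon_t}
% Example weights grow on mistakes, then are renormalized (y, h in {-1,+1}):
w_i \leftarrow w_i \exp\bigl(-\alpha_t \, y_i \, h_t(x_i)\bigr)
% Final prediction is the weighted vote:
H(x) = \operatorname{sign}\Bigl(\sum_t \alpha_t \, h_t(x)\Bigr)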
/机器学习/提高模型性能/随机森林:
--------------------------------------------------------------------------------
==----------Random forests (decision tree forests)-------------------------------------------

Ensemble learning focused exclusively on decision trees: it combines bagging with random feature selection, adding extra diversity to the tree models. After the ensemble of trees is built, the model uses a vote to combine their predictions.
Each tree in the ensemble uses only a small random portion of the full feature set, so random forests can handle very large data sets.

Strengths:
A general-purpose model that performs well on most problems
Can handle noisy data and missing values; works with both categorical and continuous features
Selects only the most important features
Can be used on data with an extremely large number of features or examples
Weaknesses:
The model is hard to interpret; it takes some work to tune the model to the data

--------------------------------------------------------------------------------
/机器学习/提高模型性能/随机森林caret.R:
--------------------------------------------------------------------------------
#----Random forests-----------------

library(randomForest)
library(caret)
library(readr)

credit <- read_csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 05\\credit.csv')
credit$default <- factor(credit$default)  # randomForest/train need a factor outcome for classification

#----1. Train the model with randomForest()

# randomForest(formula, data, ntree = 500, mtry = sqrt(p))  -- data: training data frame; ntree: number of trees
# mtry: number of features randomly selected at each split; defaults to sqrt(p)

set.seed(300)
rf <- randomForest(default ~ ., data = credit)


#----2. Tune the parameters with caret to improve model performance

ctrl <- trainControl(method = 'repeatedcv', number = 10, repeats = 10)
grid <- expand.grid(.mtry = c(2, 4, 8, 16))
rf_model <- train(default ~ ., data = credit, method = 'rf', metric = 'Kappa', trControl = ctrl, tuneGrid = grid)
--------------------------------------------------------------------------------
/机器学习/支持向量机/SVM核函数.docx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ShydowLi/Machine-Learning-With-R-/1a7bb06b79200836bb5079263ee3866dbf15a8db/机器学习/支持向量机/SVM核函数.docx
--------------------------------------------------------------------------------
/机器学习/支持向量机/SVM线性可分.docx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ShydowLi/Machine-Learning-With-R-/1a7bb06b79200836bb5079263ee3866dbf15a8db/机器学习/支持向量机/SVM线性可分.docx
--------------------------------------------------------------------------------
/机器学习/支持向量机/支持向量机原理:
--------------------------------------------------------------------------------
#-------Support vector machines-----------------------------

1. A support vector machine can be imagined as a surface that creates a boundary between data points plotted in a multidimensional space representing the examples and their feature values. The goal of an SVM is to create a flat boundary (a hyperplane) that divides the space into fairly homogeneous partitions on either side.

Applications: classifying gene-sequence data to identify cancer or other genetic diseases; text classification; event detection


2. Classification with hyperplanes
Find the maximum margin hyperplane (MMH)
Support vectors: the points from each class that are closest to the maximum margin hyperplane; each class must have at least one support vector

1. The linearly separable case:
Finding the MMH: when the data are linearly separable, the MMH should be as far away as possible from the outer boundaries of the two groups of points. These outer boundaries are called convex hulls; the MMH is the perpendicular bisector of the shortest line between the two convex hulls, and it can be found with quadratic optimization techniques.


2. The non-linearly separable case
If the data are not separable, a slack variable is used, creating a soft margin that allows some points to fall on the incorrect side of the boundary.
A cost value is applied to every point that violates the constraints, and the algorithm tries to minimize the total cost rather than to find the maximum margin.


3. Using kernel functions for non-linear spaces
When the relationships among the variables are non-linear, an SVM uses a technique known as the kernel trick to map the problem into a higher-dimensional space.

Kernel functions:
linear kernel; polynomial kernel; sigmoid kernel; Gaussian RBF kernel

==---------------------------------------------------------------------------------------------------------------------
Strengths and weaknesses of SVMs:
Strengths: usable for both classification and numeric prediction; not overly influenced by noisy data and not prone to overfitting
Weaknesses: finding the best model requires testing combinations of kernel functions and model parameters; training can be slow; a black-box model that is hard to interpret
--------------------------------------------------------------------------------
/机器学习/支持向量机/支持向量机(字符识别).R:
--------------------------------------------------------------------------------
#--------Optical character recognition with support vector machines--------------------------------------------

# Read in the data
data <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 07\\letterdata.csv')
data$letter <- factor(data$letter)  # ensure the target is a factor so ksvm does classification

# Training and test sets
train_data <- data[1:16000, ]
test_data <- data[16001:20000, ]

# Use the kernlab package and its ksvm() function
library(kernlab)

model <- ksvm(letter ~ ., data = train_data, kernel = 'vanilladot')

pred <- predict(model, test_data)

table(pred, test_data$letter)

agreement <- pred == test_data$letter
table(agreement)
prop.table(table(agreement))


#--------------Improving model performance-----------------------

#==-----Try a different kernel function, or add the cost parameter C to change the width of the margin.
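A hedged sketch of that improvement step (assumes model, train_data and test_data from the script above; 'rbfdot' is kernlab's Gaussian RBF kernel, and the value C = 5 is an arbitrary example, not a recommendation):

# Illustration only: swap the linear kernel for the Gaussian RBF kernel and
# raise the cost parameter; training on 16,000 rows may take a while.
model_rbf <- ksvm(letter ~ ., data = train_data, kernel = 'rbfdot', C = 5)

pred_rbf <- predict(model_rbf, test_data)
agreement_rbf <- pred_rbf == test_data$letter
prop.table(table(agreement_rbf))  # compare against the vanilladot accuracy above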
--------------------------------------------------------------------------------
/机器学习/时间序列模型/偏相关图.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ShydowLi/Machine-Learning-With-R-/1a7bb06b79200836bb5079263ee3866dbf15a8db/机器学习/时间序列模型/偏相关图.png
--------------------------------------------------------------------------------
/机器学习/时间序列模型/时间序列:
--------------------------------------------------------------------------------
#Time series analysis (application: system load analysis and disk capacity prediction)
1. Background and mining goal
The load on servers, databases, middleware and other hardware grows over time, which degrades system performance. Understanding the hardware load makes prevention possible and keeps the system running safely and stably.
current load ratio = current load / maximum load the disk can bear
load growth trend (growth rate) = how much the load rises over a period of time

Performance data: CPU usage, memory usage, disk usage

2. Modeling process
Selectively extract historical data and real-time updates from the data source
Run periodicity analysis on the extracted data, then clean and transform it
Build the model with time series analysis and predict disk usage
Evaluate the model and its predictive performance

3. Application
--------------------------------------------------------------------------------
/机器学习/时间序列模型/时间序列(arima算法).R:
--------------------------------------------------------------------------------
# Time series analysis
# System load analysis and disk capacity prediction
dat <- read.csv('E:\\新建文件夹\\R数据挖掘\\chapter11\\示例程序\\data\\discdata.csv', header = TRUE, encoding = 'utf-8')
# Data cleaning: drop missing values
dat <- na.omit(dat)
# Drop duplicated records
index1 <- which(dat$VALUE == 52323324)
index2 <- which(dat$VALUE == 157283328)
dat <- dat[-c(index1, index2), ]
# Extract the C: and D: drive data
cdat <- subset(dat, dat$ENTITY == 'C:\\')
ddat <- dat[which(dat$ENTITY == 'D:\\'), ]

# Stationarity checks for the D: drive data: 1. time series plot 2. ADF test
# ADF test
library(fUnitRoots)
adfTest(ddat$VALUE)
adfTest(diff(ddat$VALUE))  # diff() takes the first difference; in the ADF test, a larger p-value means a less stationary series

# White-noise test: checks whether the useful information in the data has already been extracted
Box.test(ddat$VALUE, type = 'Ljung-Box')  # the smaller the p-value, the better

# Use the ACF and PACF plots to choose the q and p parameters
acf(diff(ddat$VALUE), lag.max = 20)   # ACF plot: count the lags beyond the confidence bounds -- sets the MA order q
pacf(diff(ddat$VALUE), lag.max = 20)  # PACF plot: likewise -- sets the AR order p
# The order chosen here is ARIMA(0,1,2), i.e. p = 0, d = 1, q = 2

# Build the ARIMA model; load the forecast package
library(forecast)
model <- arima(ddat$VALUE, order = c(0, 1, 2))
summary(model)

# Forecast the next 5 equally spaced time steps
fore <- forecast(model, 5)

# Stationarity test on the residuals
r1 <- model$residuals
adfTest(r1)

# Model evaluation:
# mean absolute error, root mean squared error, mean absolute percentage error
--------------------------------------------------------------------------------
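A hedged alternative to picking p and q by eye from the ACF/PACF plots (assumes the ddat object from the ARIMA script above): forecast::auto.arima() searches over (p, d, q) automatically using information criteria.

# Illustration only: let auto.arima() choose the order, then compare it with
# the hand-picked ARIMA(0,1,2).
library(forecast)
auto_model <- auto.arima(ddat$VALUE)
summary(auto_model)            # selected order and fit statistics
plot(forecast(auto_model, 5))  # 5-step-ahead forecast with intervals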
/机器学习/时间序列模型/时间预测.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ShydowLi/Machine-Learning-With-R-/1a7bb06b79200836bb5079263ee3866dbf15a8db/机器学习/时间序列模型/时间预测.png
--------------------------------------------------------------------------------
/机器学习/时间序列模型/自相关.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ShydowLi/Machine-Learning-With-R-/1a7bb06b79200836bb5079263ee3866dbf15a8db/机器学习/时间序列模型/自相关.png
--------------------------------------------------------------------------------
/机器学习/概率学习-朴素贝叶斯分类/文本处理函数.R:
--------------------------------------------------------------------------------
#---------------Text-processing helper functions----------------------

library(tm)
library(SnowballC)
library(magrittr)  # provides the %>% pipe used below

create_corpus <- function(x) {
  result_corpus <- VCorpus(VectorSource(x)) %>%
    tm_map(content_transformer(tolower)) %>%  # lower-case all letters; content_transformer() keeps the documents as plain text documents
    tm_map(removePunctuation) %>%             # remove punctuation
    tm_map(removeWords, stopwords()) %>%      # remove stop words
    tm_map(removeNumbers) %>%                 # remove numbers
    tm_map(stripWhitespace) %>%               # strip extra whitespace
    tm_map(stemDocument)                      # stem the words with Porter's stemming algorithm
  return(result_corpus)
}


#-------------After text processing, convert to a sparse matrix--------------
# Create the sparse matrix (corpus_ham is assumed to be a corpus of ham messages built with create_corpus())

dtm_ham <- DocumentTermMatrix(corpus_ham) %>%
  removeSparseTerms(0.99)  # dimensionality reduction -- effectively removes the low-frequency terms

## Inspect the sparse matrix

inspect(dtm_ham[35:45, 1:10])
as.matrix(dtm_ham[35:45, 1:10])  # show part of the sparse matrix


#----A function that transposes the sparse matrix into a regular matrix, then counts each term's frequency--------------------------
create_term_frequency_counts <- function(dtm) {
  m <- as.matrix(t(dtm))
  v <- sort(rowSums(m), decreasing = TRUE)
  d <- data.frame(word = names(v), freq = v, stringsAsFactors = FALSE)
  return(d)
}
--------------------------------------------------------------------------------
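A usage sketch for the two helpers above (not part of the original file; the sms_spam.csv path and its 'text' column are taken from the naive Bayes script that follows):

# Illustration only: build a corpus with create_corpus() and count term
# frequencies with create_term_frequency_counts().
sms <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 04\\sms_spam.csv', stringsAsFactors = FALSE)

corpus <- create_corpus(sms$text)          # clean and standardize the raw messages
dtm <- DocumentTermMatrix(corpus)          # document-term (sparse) matrix
freq <- create_term_frequency_counts(dtm)  # one row per term, sorted by frequency
head(freq, 10)                             # the ten most frequent stems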
/机器学习/概率学习-朴素贝叶斯分类/朴素贝叶斯简述.R:
--------------------------------------------------------------------------------
#---------Probabilistic learning -- naive Bayes classification------------------------------------------------------------------------
library(tm)          #--text-mining package
library(SnowballC)   # used together with tm, for word stemming
library(wordcloud2)
library(wordcloud)
library(e1071)
library(gmodels)
# Read in the data set
data <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 04\\sms_spam.csv', header = TRUE, encoding = 'utf-8')
data <- data[-1072, ]
data$type <- factor(data$type)
# Split the texts into a bag of words
# Count the ham and spam messages
table(data$type)


# Data preparation -- cleaning and standardizing the text data
data_corpus <- VCorpus(VectorSource(data$text))  # VectorSource() specifies the document source; VCorpus() builds the corpus

# Clean the text: remove punctuation, convert all letters to lower case
# data_corpus$text<-iconv(data_corpus$text,"WINDOWS-1252","UTF-8")
# data_clean<-tm_map(data_corpus,content_transformer(tolower))  -- the lower-case conversion errored here

data_clean <- tm_map(data_corpus, removeNumbers)  # remove numbers

# Remove filler words such as 'and', 'or', etc., using the stopwords() list
data_clean <- tm_map(data_clean, removeWords, stopwords())  # removeWords removes the given filler words

# Remove punctuation
data_clean <- tm_map(data_clean, removePunctuation)

# Word stemming
data_clean <- tm_map(data_clean, stemDocument)

# Strip extra whitespace
data_clean <- tm_map(data_clean, stripWhitespace)


# Split the text documents into words
data_dtm <- DocumentTermMatrix(data_clean)  # essentially a sparse matrix

# To skip the step-by-step cleaning above, the DTM can also be built directly:
# data_DTM1<-DocumentTermMatrix(data_clean,control = list(
#   tolower=TRUE,
#   removeNumbers=TRUE,
#   stopwords=TRUE,        # can also be stopwords=function(x){removeWords(x,stopwords())}
#   removePunctuation=TRUE,
#   stemming=TRUE
# ))


# Training and test sets
data_train <- data_dtm[1:4200, ]
data_test <- data_dtm[4201:5558, ]
# Label vectors
data_train_lables <- data[1:4200, ]$type
data_test_lables <- data[4201:5558, ]$type

# Word clouds to visualize term frequencies
wordcloud(data_clean, min.freq = 50, random.order = F)  # frequencies of all words across the messages
# Word clouds for the spam and the ham messages separately
sapm <- subset(data, type == 'spam')  # spam messages
ham <- subset(data, type == 'ham')    # ham messages
wordcloud(sapm$text, max.words = 40, scale = c(3, 0.5))  # spam word cloud
wordcloud(ham$text, max.words = 40, scale = c(3, 0.5))   # ham word cloud


# Reduce the number of features: keep the words that appear at least a given number of times
data_freq_words <- findFreqTerms(data_train, 5)  # findFreqTerms() finds the frequent terms
data_dtm_freq_train <- data_train[, data_freq_words]
data_dtm_freq_test <- data_test[, data_freq_words]


# Convert the numeric counts into categorical values (yes, no)
convert_counts <- function(x) {
  x <- ifelse(x > 0, 'yes', 'no')
}

ac_train <- apply(data_dtm_freq_train, 2, convert_counts)
ac_test <- apply(data_dtm_freq_test, 2, convert_counts)

# Build the model (e1071)
naivemodel <- naiveBayes(ac_train, data_train_lables)  # leave the Laplace estimator at its default, since terms appearing fewer than 5 times were already dropped

pred <- predict(naivemodel, ac_test, type = 'class')

CrossTable(pred, data_test_lables, prop.chisq = F, prop.t = F, dnn = c('predicted', 'actual'))

#--------------Improving model performance---------------------
# Set the laplace parameter of the naive Bayes model to 1

naivemodel2 <- naiveBayes(ac_train, data_train_lables, laplace = 1)
pred2 <- predict(naivemodel2, ac_test, type = 'class')
CrossTable(pred2, data_test_lables, prop.chisq = F, prop.t = F, dnn = c('predicted', 'actual'))


#------------Converting part of a (sparse) DTM into a data frame----------------------------------------------------------------------

# Convert a column to UTF-8, replacing characters that cannot be converted with their hexadecimal form
# enc2utf8() converts the column contents to UTF-8
# iconv(sub = 'byte') replaces unconvertible characters with their hex codes
# e.g.: sms$message <- sapply(sms$message, function(x) iconv(enc2utf8(x), sub = "byte"))
--------------------------------------------------------------------------------
/机器学习/概率学习-朴素贝叶斯分类/贝叶斯:
--------------------------------------------------------------------------------
-----------------Probabilistic learning -- an overview of naive Bayes-----------------------------------------------------------------
1. Naive Bayes:
The basic principles of probability;
The specialized methods and data structures needed to analyze text data with R;
Building an SMS spam filter with a naive Bayes classifier

Understanding naive Bayes: the basic principles for describing the probability of events, and how probabilities should be revised in light of additional information -- the Bayesian method.
Training data is used to compute the probability of each outcome being observed, given the evidence provided by the feature values. When the classifier is later applied to unlabeled data, it uses these observed probabilities to predict the most likely class for the new features.
Typical uses: text classification, such as spam filtering
Intrusion or anomaly detection in computer networks
Diagnosing medical conditions from a set of observed symptoms
Best suited for problems where the information from numerous attributes should be considered simultaneously to estimate the overall probability of an outcome. Bayesian classifiers use all of the available evidence to subtly revise the predictions: even if each of many features has only a small effect, their combined impact can be large.


2. Basic concepts of the Bayesian method:
Bayesian probability theory is rooted in the idea that the estimated likelihood of an event, or of a potential outcome, should be based on the evidence at hand, where the evidence accumulates over multiple trials or opportunities for the event to occur.
**
1. Mutually exclusive events: a trial has two outcomes that cannot both occur; the complement has probability 1-p
2. Conditional probability: the probability that event B occurs given that event A has occurred
P(B|A) = P(AB) / P(A)
P(B1∪B2|A) = P(B1|A) + P(B2|A) - P(B1B2|A)
3. The multiplication rule:
P(AB) = P(B|A)P(A)
P(ABC) = P(C|BA)P(B|A)P(A)
When the two events are independent: P(AB) = P(A)P(B)
4. Total probability and Bayes' formula

Total probability:
P(A) = P(A|B1)P(B1) + P(A|B2)P(B2) + P(A|B3)P(B3) + ... + P(A|Bi)P(Bi)
Bayes:
-- converts between P(A|B) and P(B|A)
P(Bi|A) = P(BiA)/P(A) = P(A|Bi)P(Bi) / P(A), with P(A) given by the total probability formula
Commonly used with n = 2:

P(A) = P(A|B)P(B) + P(A|B')P(B')
P(B|A) = P(AB)/P(A) = P(A|B)P(B)/P(A)   -- with P(A) expanded using the total probability formula above


3. The naive Bayes algorithm
1. A simple way to apply Bayes' theorem to classification problems, commonly used in text classification
Strengths: simple, fast and effective; handles noisy data well; needs relatively few training examples; easily obtains the estimated probability for a prediction
Weaknesses: relies on an often-faulty assumption of equally important and independent features; not ideal for data sets with very many features; the estimated probabilities are less reliable than the predicted classes
**Naive Bayes assumes that all features in the data set are equally important and independent -- an idealized assumption.
**Even though naive Bayes frequently violates these assumptions, it still works well in practice. The reason is that what ultimately matters is the accuracy of the model's predictions, not how well-calibrated its certainty is; in other words, accuracy matters much more than the probability estimates.

2. The Laplace estimator:
Add a small number to each count in the frequency table so that every feature has a non-zero probability of occurring with each class. Typically the number is set to 1, which ensures that every class-feature combination appears in the data at least once.

3. Using numeric features with naive Bayes
Naive Bayes builds a matrix of combinations of classes and feature values, so every feature must be categorical; numeric features have to be discretized into bins. A natural categorization can also be designed from the problem at hand. Discretizing a numeric feature always loses information, since the feature's original granularity is reduced to a small number of categories, so the binning should be balanced: too many bins leave small counts in the frequency table and make the estimates sensitive to noise, while too few can hide important patterns.
--------------------------------------------------------------------------------
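A compact statement of the classifier and the Laplace estimator described above (standard textbook form, added for illustration):

% Naive Bayes posterior (proportional form), under the independence assumption:
P(C \mid x_1, \dots, x_n) \propto P(C) \prod_{i=1}^{n} P(x_i \mid C)
% Laplace estimator with smoothing count 1, for a feature with k possible values:
\hat{P}(x_i \mid C) = \frac{\operatorname{count}(x_i, C) + 1}{\operatorname{count}(C) + k}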
/机器学习/模型性能评价/模型性能评价度量指标:
--------------------------------------------------------------------------------
==----Evaluating model performance---------------------------------------

Goals:
Why predictive accuracy alone is not enough to measure performance, and what the alternative measures are
Methods for making sure the performance measures reasonably reflect a model's ability to predict unseen cases
How to apply these measures to the learning cases covered earlier

1. Measuring classifier performance:
Purpose: the best measure is whatever captures success at the intended purpose, so it is crucial to have evaluation methods that measure a model's usefulness rather than its raw accuracy. The point of evaluating a classifier is to better understand how it will perform on future cases.

Performance measures:

Confusion matrix:  positive: the class of interest;  negative: every other class
True positive (TP): correctly classified as the class of interest
True negative (TN): correctly classified as not the class of interest
False positive (FP): incorrectly classified as the class of interest
False negative (FN): incorrectly classified as not the class of interest

accuracy = (TP + TN) / (TP + TN + FP + FN)


The confusionMatrix() function in the caret package:
1. The kappa statistic: adjusts accuracy by accounting for the probability of a correct prediction by chance alone (important when the classes are imbalanced).
Kappa only rewards a classifier for being correct more often than a simple guessing strategy would be.
Kappa ranges from 0 to 1; the closer to 1, the better the agreement between the predictions and the true values.

poor agreement: 0 -- 0.2
fair agreement: 0.2 -- 0.4
moderate agreement: 0.4 -- 0.6
good agreement: 0.6 -- 0.8
very good agreement: 0.8 -- 1
These cutoffs are subjective, of course; interpret them in the context of the specific problem.

k = (Pr(a) - Pr(e)) / (1 - Pr(e))   Pr(a): the proportion of actual agreement between the classifier and the true values; Pr(e): the proportion of expected agreement between the classifier and the true values.
Pr(a) is simply the accuracy; Pr(e) = P(actual = 1) * P(predicted = 1) + P(actual = 2) * P(predicted = 2)


2. Sensitivity and specificity (both computed against the actual values)
Sensitivity (true positive rate): the proportion of positive examples that were correctly classified
sensitivity = TP / (TP + FN)

Specificity (true negative rate): the proportion of negative examples that were correctly classified
specificity = TN / (TN + FP)


3. Precision and recall

precision = TP / (TP + FP)
recall = TP / (TP + FN)

4. The F-measure (F-score)
F-measure = 2 * precision * recall / (precision + recall)


5. Visualizing performance trade-offs: the ROC curve (the ROCR package, or pROC as below)

library(pROC)
pre1 <- predict(model, mydata, type = 'response')
summary(pre1)
modelroc <- roc(mydata$y, pre1)
plot(modelroc, print.auc = TRUE, auc.polygon = TRUE, grid = c(0.1, 0.2), grid.col = c("green", "red"), max.auc.polygon = TRUE, auc.polygon.col = "skyblue", print.thres = TRUE)


==--------------------------------------------------------------------------------------------------------------

2. Estimating future performance
When the data set is too small to split into training and test sets, other evaluation strategies are needed to estimate how the model will perform on unseen data.

The holdout method: splitting the data into a training set and a test set. In practice several models may be built and each evaluated on the test set, keeping the one with the best result; but because the test set has been reused to pick the winner, the chosen model is only the best for that particular test set, and the estimate says little about performance on future data.

To solve this, the data should be partitioned once more: besides the training and test sets, add a validation set. The validation set is used to iterate on and improve the model, and the test data set is used exactly once.


==---------------------------------------------------------------------------------------------------------------

3. The drawback of random sampling
With random sampling, each partition may contain an overly large or overly small number of certain classes. In the extreme case, when a class makes up a very small proportion of the data, the training set might contain none of that class (or very few cases, leaving the data unbalanced).

The remedy is stratified random sampling, which ensures that each partition contains each class in roughly the same proportion as the full data set.

The caret package provides the createDataPartition() function:
Usage:
in_train <- createDataPartition(credit$default, p = 0.75, list = FALSE)   # credit$default holds the class labels
train <- credit[in_train, ]
test <- credit[-in_train, ]

Stratified sampling evens out the class proportions, but it does nothing about difficult cases or extreme values that happen to be over- or under-represented, and with little data there are not enough partitions. In that sense it is only a conservative safeguard.


==-------------------------------------------------------------------------------------------------------------

4. Repeated holdout:
A special form of the holdout method that mitigates the randomness of a single split: the model is evaluated on several random holdout samples, and the mean of the results is used to judge overall performance.

1. Cross-validation:
k-fold cross-validation (k-fold CV): divide the data into k completely separate parts; k is usually set to 10 (each fold holds 10% of the data), and the machine learning model is built on the remaining 90%. Repeat 10 times and report the average performance across all folds.

Use the createFolds() function in caret to create the cross-validation data sets:

folds <- createFolds(credit$default, k = 10)   # folds contains only row numbers

The folds are named Fold01, Fold02, ..., Fold10

Hence: train01 <- credit[-folds$Fold01, ]
       test01 <- credit[folds$Fold01, ]
Split the data into 10 training/test pairs in this way, then average the performance over the 10 models; lapply() or sapply() can run the model over the folds in turn, as sketched below.


2. Bootstrap validation
Where k-fold CV splits the data into disjoint parts, the bootstrap samples the data with replacement, so some examples appear multiple times; the examples left out serve as the test set.

The 0.632 bootstrap: error = 0.632 * error(test) + 0.368 * error(train)
--------------------------------------------------------------------------------
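A hedged sketch of the loop-over-folds idea referenced above (assumes the credit.csv data used throughout these notes; C5.0 is one possible learner, and accuracy is used only to keep the example short):

# Illustration only: 10-fold CV by hand with createFolds() and sapply().
library(caret)
library(C50)

credit <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 05\\credit.csv')
credit$default <- factor(credit$default)

set.seed(123)
folds <- createFolds(credit$default, k = 10)  # list of 10 test-row index vectors

# For each fold: train on the other nine folds, test on this one, return accuracy
accuracy <- sapply(folds, function(idx) {
  model <- C5.0(default ~ ., data = credit[-idx, ])
  pred <- predict(model, credit[idx, ])
  mean(pred == credit$default[idx])
})
mean(accuracy)  # average performance across the 10 folds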
/机器学习/电影协同过率推荐/协同过滤推荐算法:
--------------------------------------------------------------------------------
How collaborative filtering recommendation works
0. Like attracts like: suppose user A likes the movies Iron Man, 原计划 and The Avengers. To recommend other movies to A, we look for a group of users with tastes similar to A's, collect the movies they like, and recommend to A the ones A has not yet seen.

1. Steps:
Find the set of users most similar to the target user
Find the items that this set of users likes and that the target user has never heard of, and recommend them to the target user

2. Methods:
Cosine similarity
Euclidean distance: the distance between the rating vectors

3.
--------------------------------------------------------------------------------
/机器学习/电影协同过率推荐/电子商务智能推荐(协同过滤):
--------------------------------------------------------------------------------
# Intelligent recommendation for e-commerce (vectors, Euclidean distance)
Background and goals:
1. Help users discover content they are interested in
2. Improve users' loyalty to and satisfaction with the e-commerce site
3. Recommend the right resources to users and build a stable base of loyal customers

Goals:
1. Analyze visit times, visited content, visit counts and other topics by region, to understand in depth users' browsing behavior, their intent, and the content they care about
2. Use the large volume of visit records to discover users' browsing habits, and recommend relevant service pages to users with different needs

Analysis methods and process:
1. Collaborative filtering algorithms
2. Classification analysis (classifying users' interests and needs)
3. Classify users by the pages they browse (time spent on a page can also be used)

Workflow:
1. Obtain the raw records
2. Analyze the data along several dimensions (visited content, churned users, user segments)
3. Preprocess the data
4. Process the records by URL information
5. Compare several recommendation algorithms, evaluate the models, and predict on the samples

Collaborative filtering recommendation:
Similar users or pages are found from historical data, so a large amount of data should be used; this reduces the randomness of the recommendations and improves their accuracy.
Steps:
1. Using visit time as the condition, take the users' visit data from the past three months as the raw data set
2. Note that browsing habits and interests differ between users from different regions
--------------------------------------------------------------------------------
/机器学习/电影协同过率推荐/电影协同过滤推荐算法.R:
--------------------------------------------------------------------------------
# Collaborative filtering (user-based, UBCF)

# For the realRatingMatrix data type, the recommenderlab package provides 6 methods: item-based collaborative filtering (IBCF), principal component analysis (PCA), popularity-based recommendation (POPULAR), random recommendation (RANDOM), singular value decomposition (SVD) and user-based collaborative filtering (UBCF). User-based recommendation (UBCF) is used here.

dat <- read.csv('E:\\R WORK SPACE\\Data Files\\ratings.csv', header = TRUE)
dat <- dat[-4]
# Plot the rating distribution
library(ggplot2)
library(reshape2)
library(reshape)
library(recommenderlab)
p <- ggplot(data = dat, aes(x = rating)) + geom_histogram(binwidth = 0.1)
dat <- cast(data = dat, userId ~ movieId, value = 'rating')
# Drop the redundant first column
dat <- dat[, -1]
class(dat) <- 'data.frame'
dat <- as.matrix(dat)
dat <- as(dat, 'realRatingMatrix')

# Build and analyze the model
colnames(dat) <- paste('movie', 1:9066, sep = '')
model <- Recommender(data = dat[1:600, ], method = 'UBCF')
pre <- predict(model, dat[601:606, ], type = 'ratings')  # predicted ratings
pre2 <- predict(model, dat[601:606], n = 6)              # top-6 recommendations per user
as(pre, 'matrix')[1:6, 1:6]
as(pre2, 'list')
--------------------------------------------------------------------------------
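A small sketch of the two similarity measures named in the notes above, computed on two invented user rating vectors (five movies rated 1-5, with 0 meaning not rated; the numbers are made up for illustration):

# Illustration only: cosine similarity and a Euclidean-distance-based score.
user_a <- c(5, 3, 0, 4, 4)
user_b <- c(4, 2, 1, 4, 5)

# Cosine similarity: cosine of the angle between the rating vectors
cosine_sim <- sum(user_a * user_b) / (sqrt(sum(user_a^2)) * sqrt(sum(user_b^2)))

# Euclidean distance, often converted into a 0-1 similarity score
euclid_dist <- sqrt(sum((user_a - user_b)^2))
euclid_sim <- 1 / (1 + euclid_dist)

cosine_sim  # ~0.97: very similar tastes
euclid_sim  # ~0.33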
/机器学习/神经网络/bp神经网络.R:
--------------------------------------------------------------------------------
#-----BP neural network---modeling the strength of concrete--------------------------------------

data <- read.csv('F:\\r帮助文档\\MLwR-master\\Machine Learning with R (2nd Ed.)\\Chapter 07\\concrete.csv')

# Standardize the data (use z-score standardization if the data follow a normal distribution; use min-max normalization if they are uniform or non-normal)
# The data here are not normally distributed, so define a min-max normalization function

normalize <- function(x) {
  return((x - min(x)) / (max(x) - min(x)))
}

data_normalize <- as.data.frame(lapply(data, normalize))

# Training and test sets
dat <- sample(2, nrow(data_normalize), replace = T, prob = c(0.75, 0.25))
train_data <- data_normalize[dat == 1, ]
test_data <- data_normalize[dat == 2, ]


# The neuralnet package is used here; the nnet or RSNNS packages would also work

library(neuralnet)

model <- neuralnet(strength ~ cement + slag + ash + water + superplastic + coarseagg + fineagg + age, data = train_data, hidden = 1)

#---Evaluating model performance

pre_strength <- compute(model, test_data[1:8])

# cor() gives the correlation between the predicted and the actual values
cor(pre_strength$net.result, test_data$strength)
# Or use the mean absolute error
mean(abs(test_data$strength - pre_strength$net.result))  # an MAE of about 0.08 on the normalized scale is already quite good


#==----------------Improving model performance-----------------------
# Increase the number of hidden nodes: hidden = 5
model2 <- neuralnet(strength ~ cement + slag + ash + water + superplastic + coarseagg + fineagg + age, data = train_data, hidden = 5)
plot(model2)

pre_strength2 <- compute(model2, test_data[1:8])

cor(pre_strength2$net.result, test_data$strength)

mean(abs(test_data$strength - pre_strength2$net.result))
--------------------------------------------------------------------------------
/机器学习/神经网络/hidden=5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ShydowLi/Machine-Learning-With-R-/1a7bb06b79200836bb5079263ee3866dbf15a8db/机器学习/神经网络/hidden=5.png
--------------------------------------------------------------------------------
/机器学习/神经网络/plot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ShydowLi/Machine-Learning-With-R-/1a7bb06b79200836bb5079263ee3866dbf15a8db/机器学习/神经网络/plot.png
--------------------------------------------------------------------------------
/机器学习/神经网络/神经网络:
--------------------------------------------------------------------------------
#------Black-box methods--------Neural networks and support vector machines--------------------------------------------------------

1. Black-box methods: the mechanism that transforms the input into the output is obscured by an imaginary box -- the complex mathematics doing the work.
Learning goals:
Neural networks mimic the structure of animal brains to model arbitrary functions;
Support vector machines use multidimensional surfaces to define the relationship between features and outcomes;
How to apply both to real-world problems;

2. Neural networks:
An artificial neural network models the relationship between a set of input signals and a set of output signals, using a model derived from our understanding of how a biological brain responds to stimuli from sensory inputs. Artificial neural networks likewise use neurons, or nodes, to tackle learning problems.
Broadly speaking, neural networks are versatile learners applicable to nearly any learning task: classification, numeric prediction, even unsupervised pattern recognition.

The basic picture: a directed network diagram defines the relationship between the input signals received by the dendrites (the variables x) and the output signal (y). As in a biological neuron, each dendrite's signal is weighted according to its importance. The input signals are summed by the cell body, and the signal is passed on according to an activation function denoted f:
y(x) = f(sum(wi * xi))
The weights w control the contribution each of the n input signals makes to the sum of inputs; the activation function f uses the net total, and y() is the output axon.

Activation function: transforms a neuron's combined input signals into a single output signal to be broadcast further in the network;
Network topology (architecture): describes the number of neurons in the model, the number of layers, and how they are connected;
Training algorithm: specifies how connection weights are set, in order to dampen or amplify each neuron's share of the input signal.


3. Activation functions (threshold activation functions)
An activation function is the mechanism by which an artificial neuron processes incoming information and passes it through the network.
Types: the unit step activation function, the sigmoid activation function (most commonly used), and also: linear, saturated linear, hyperbolic tangent, Gaussian
What distinguishes the activation functions is the range of the output signal, mostly (0,1), (-1,1) or (-inf,+inf)
For many activation functions, the range of inputs that actually affects the output is relatively narrow (hence the data need to be standardized or normalized)

Topology:
The number of layers; whether the network allows information to travel backward; the number of nodes in each layer
The topology determines the complexity of what the network can learn; larger, more complex networks can recognize subtler patterns and more complex decision boundaries. But a network's power is not only a function of its size -- it also depends on how its elements are organized.

By the direction in which information travels:
Feedforward networks: input signals propagate continuously from node to node until they reach the output layer (the multilayer perceptron)
Deep neural networks: neural networks with several hidden layers
Recurrent (feedback) networks: allow signals to travel in both directions using loops; adding a delay, or short-term memory, greatly increases their power

Number of nodes per layer:
Input layer: determined by the number of features in the input data
Output layer: determined by the number of outcomes to be modeled, or the number of class levels
Hidden layers: left to the user to decide

4. Backpropagation neural networks (BP networks)
Strengths: usable for classification and numeric prediction; can model complex patterns; makes few assumptions about the underlying relationships in the data
Weaknesses: computationally intensive and slow to train; very prone to overfitting

In the forward phase: the neurons are activated in sequence from the input layer to the output layer, applying each neuron's weights and activation function along the way; once the final layer is reached, an output signal is produced.
In the backward phase: the output signal produced by the forward phase is compared to the true target value in the training data; the error between them is propagated backward through the network, correcting the connection weights between neurons so as to reduce future errors.

Gradient descent:

The backpropagation algorithm uses the derivative of each neuron's activation function to determine the gradient in the direction of each incoming weight -- hence the importance of a differentiable activation function. The gradient indicates how steeply the error falls or rises for a change in the weight. The algorithm changes the weights by an amount known as the learning rate so as to reduce the error as much as possible: the greater the learning rate, the faster the algorithm attempts to descend the gradient.

==--------------------------------------------------------------------------------------------------------------------- Error-correction learning (classification and prediction):
The connection strengths (i.e. the weights of each node) are corrected according to the network's output error. If neuron i is the input and neuron j the output, with connection weight w between them, the weight correction is: learning rate * error * output value

Error function: sum((actual output - desired output)^2)


==---------------------------------------------------------------------------------------------------------------------
The essence of a neural network: a hand-picked activation function f(.), with the thresholds and weights determined from the training set, so that the function F(X, β) describes the relationships in the training data as well as possible.
--------------------------------------------------------------------------------
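The neuron model, error function and weight update described above, written out (standard textbook notation, added for illustration; η is the learning rate):

% Single neuron with a sigmoid activation function:
y = f\Bigl(\sum_{i=1}^{n} w_i x_i\Bigr), \qquad f(z) = \frac{1}{1 + e^{-z}}
% Squared-error function over the training cases:
E = \sum_k \bigl(y_k^{\text{actual}} - y_k^{\text{desired}}\bigr)^2
% Gradient-descent weight update with learning rate eta:
w_i \leftarrow w_i - \eta \, \frac{\partial E}{\partial w_i}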
/机器学习/简介--初始--机器学习:
--------------------------------------------------------------------------------
----Machine learning-----------------------------------------------------------------------------------------------

Machine learning: a machine learns if it can take experience and utilize it, so that its performance on similar experiences improves in the future
The learning process:
Data storage: uses observation, memory and recall to provide a factual basis for further reasoning
Abstraction: translates the data into broader representations and concepts
Generalization: creates knowledge and inferences that enable action in new contexts
Evaluation: measures the learned knowledge and suggests potential improvements

Steps in applied machine learning:
Data collection
Data exploration and preparation: feature engineering
Model training
Model evaluation
Model improvement


Models:
Predictive models: model the relationship between the target feature and the other features (supervised) ---- classification
Descriptive models: summarize the data and provide insight (unsupervised) --- pattern discovery -- market basket analysis -------- clustering
Meta-learning algorithms: focus on learning how to learn more effectively

Supervised models: nearest neighbors, naive Bayes, decision trees, classification rule learners, linear regression, regression trees, model trees, neural networks, support vector machines
Unsupervised models: association rules, k-means clustering
Meta-learning algorithms: bagging, boosting, random forests


=-----------------------------------------------------------------------------------------------------------=
Note: the difference between accuracy, precision and recall

Accuracy: the share of all examples in which positives were predicted positive and negatives were predicted negative

Precision: computed on the predictions -- of the examples predicted positive, how many truly are positive. A positive prediction can arise in two ways: a positive predicted as positive (TP), or a negative predicted as positive (FP)
precision = TP / (TP + FP)

Recall: computed on the original examples -- of the actual positives, how many were predicted correctly. An actual positive is either predicted positive (TP) or predicted negative (FN)
recall = TP / (TP + FN)


=------------------------------------------------------------------------------------------------------------=
Note: the difference between bias and variance

1. When building a classifier, four kinds of outcome can occur:

                          1      2      3      4
training set (error):    1%    15%    15%    0.8%
test set (error):       11%    16%    30%    1%

--In cases 1 and 4 the training error is already small (low bias): training has done its job. In cases 2 and 3 the training error is large (high bias). Bias therefore measures the gap between the training error and the smallest achievable error.
--In case 1 the validation error rises sharply compared to the training error, while in case 2 the error rises only slightly. Variance is thus the gap between training-set and test-set performance -- a relative difference, not an absolute value.
--Bias: the training set against the specified minimum error
--Variance: the training set against the test set

--In the table above, case 1 is overfitting, case 2 is underfitting, case 3 is simply bad, and case 4 is good

==--High bias means the model is undertrained -- underfitting; high variance means the model has been trained past the point of generalizing -- overfitting, typically a small training error with a large validation error. If both are high, the model is simply bad.


2. Addressing bias and variance

--If the model has high bias (underfitting):
  --try a larger, more complex network structure (more units, more layers, or a different architecture), and train longer (more iterations)
--Once the high bias is resolved, if high variance remains:
  --collect more samples for training; apply regularization
--If high variance persists even after the above: a stubborn overfitting problem
--Regularization: add a regularization term to the loss function, which acts as a penalty term
--L2 regularization: (λ/2m) * sum(w^2), i.e. the sum of all squared weights w, multiplied by λ/2m (pushing the model to be simpler); alternatives: dropout, early stopping


==------------------------------------------------------------------------------------------------------------------------

Note: correlation, causation and regression:

--Correlation vs. causation:
A correlation between two variables does not by itself establish a causal relationship between them.
Causation means that one variable necessarily brings about the other; correlation is a statistical concept -- when one variable changes, the other changes along with it, but the change in one cannot be assumed to be caused by the other.

--Correlation vs. regression:
Links: without correlation there is no regression; the stronger the correlation, the better the regression equation fits
The correlation coefficient and the regression coefficient can be derived from each other

Differences: in correlation x and y are symmetric, while in regression one must be designated the independent and the other the dependent variable
In correlation both x and y are random variables; in regression only y is treated as random (x is taken as fixed)
--------------------------------------------------------------------------------