├── Docker
│   ├── README.md
│   ├── x64
│   │   ├── 12.04
│   │   │   ├── docker-engine_1.13.1-0~ubuntu-precise_amd64.deb
│   │   │   └── libltdl7_2.4.2-1ubuntu1_amd64.deb
│   │   ├── 14.04
│   │   │   ├── docker-engine_1.13.1-0~ubuntu-trusty_amd64.deb
│   │   │   └── libltdl7_2.4.2-1.7ubuntu1_amd64.deb
│   │   └── 16.04
│   │       ├── docker-engine_1.13.1-0~ubuntu-xenial_amd64.deb
│   │       └── libltdl7_2.4.6-0.1_amd64.deb
│   └── x86
│       ├── 14.04
│       │   ├── docker-engine_1.13.1-0~ubuntu-trusty_armhf.deb
│       │   └── libltdl7_2.4.2-1.7ubuntu1_i386.deb
│       └── 16.04
│           ├── docker-engine_1.13.1-0~ubuntu-xenial_armhf.deb
│           └── libltdl7_2.4.6-0.1_i386.deb
├── Expert 1-3 Source Code
│   ├── exper1.txt
│   ├── exper2.txt
│   └── exper3.txt
├── Expert 4 Source Code
│   ├── batch-represent
│   │   └── main.lua
│   ├── demos
│   │   └── classifier.py
│   ├── step-1_find-faces.py
│   ├── step-2a_finding-face-landmarks.py
│   ├── step-2b_projecting-faces.py
│   └── util
│       └── align-dlib.py
├── Introduction to Machine Learning.pdf
├── Jupyter
│   └── README.md
├── README.md
└── images
    ├── 4-10.png
    ├── 4-11.png
    ├── 4-12.png
    ├── 4-4.png
    ├── 4-5.png
    ├── 4-6.png
    ├── 4-7.png
    ├── 4-8.png
    ├── 4-9.png
    ├── yangmi
    │   ├── 1.jpg
    │   ├── 10.jpg
    │   ├── 2.jpg
    │   ├── 3.jpg
    │   ├── 4.jpg
    │   ├── 5.jpg
    │   ├── 6.jpg
    │   ├── 7.jpg
    │   ├── 8.jpg
    │   └── 9.jpg
    └── zhaoliying
        ├── 1.jpg
        ├── 10.jpg
        ├── 2.jpg
        ├── 3.jpg
        ├── 4.jpg
        ├── 5.jpg
        ├── 6.jpg
        ├── 7.jpg
        ├── 8.jpg
        └── 9.jpg

--------------------------------------------------------------------------------
/Docker/README.md:
--------------------------------------------------------------------------------
## Docker Installation ##

- Online installation:

      sudo apt install docker.io

- Offline installation: use `cd` to enter the directory matching your Ubuntu version, then run:

      sudo dpkg -i *.deb

## Docker Registry Mirror ##

- When installing online, you can first set up a Docker registry mirror with the following command to speed up downloads:

      sudo curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://196340ca.m.daocloud.io

## Loading the Docker Image from a File ##

- Download the `openfaceallset.tar` image file:

>Download: [Baidu Cloud](https://pan.baidu.com/s/1NLzLCBm1Ub9Hxw93t0cXkQ)

>Extraction code: `5uow`

- Load the image:

      sudo docker load < openfaceallset.tar

## Docker Run Commands ##

- Allow the container to access the host X display:

      sudo xhost +local:root

- Start the container:

      sudo docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw -v /home/$USER:/home/$USER:rw -p 9000:9000 -p 8000:8000 -t -i openface/allset /bin/bash
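- Before starting the container, it can help to confirm that the image actually loaded. This is a plain Docker CLI check, assuming the image name `openface/allset` used by the run command above:

      sudo docker images openface/allset

- If the repository shows up in the list, the `docker run` command above will be able to find it.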
--------------------------------------------------------------------------------
/Docker/x64/12.04/docker-engine_1.13.1-0~ubuntu-precise_amd64.deb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuhuijiayou/MachineLearningClass/b4d9fe4a6996544cf2837b648c3e09ef6797e0d2/Docker/x64/12.04/docker-engine_1.13.1-0~ubuntu-precise_amd64.deb
--------------------------------------------------------------------------------
/Docker/x64/12.04/libltdl7_2.4.2-1ubuntu1_amd64.deb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuhuijiayou/MachineLearningClass/b4d9fe4a6996544cf2837b648c3e09ef6797e0d2/Docker/x64/12.04/libltdl7_2.4.2-1ubuntu1_amd64.deb
--------------------------------------------------------------------------------
/Docker/x64/14.04/docker-engine_1.13.1-0~ubuntu-trusty_amd64.deb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuhuijiayou/MachineLearningClass/b4d9fe4a6996544cf2837b648c3e09ef6797e0d2/Docker/x64/14.04/docker-engine_1.13.1-0~ubuntu-trusty_amd64.deb
--------------------------------------------------------------------------------
/Docker/x64/14.04/libltdl7_2.4.2-1.7ubuntu1_amd64.deb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuhuijiayou/MachineLearningClass/b4d9fe4a6996544cf2837b648c3e09ef6797e0d2/Docker/x64/14.04/libltdl7_2.4.2-1.7ubuntu1_amd64.deb
--------------------------------------------------------------------------------
/Docker/x64/16.04/docker-engine_1.13.1-0~ubuntu-xenial_amd64.deb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuhuijiayou/MachineLearningClass/b4d9fe4a6996544cf2837b648c3e09ef6797e0d2/Docker/x64/16.04/docker-engine_1.13.1-0~ubuntu-xenial_amd64.deb
--------------------------------------------------------------------------------
/Docker/x64/16.04/libltdl7_2.4.6-0.1_amd64.deb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuhuijiayou/MachineLearningClass/b4d9fe4a6996544cf2837b648c3e09ef6797e0d2/Docker/x64/16.04/libltdl7_2.4.6-0.1_amd64.deb
--------------------------------------------------------------------------------
/Docker/x86/14.04/docker-engine_1.13.1-0~ubuntu-trusty_armhf.deb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuhuijiayou/MachineLearningClass/b4d9fe4a6996544cf2837b648c3e09ef6797e0d2/Docker/x86/14.04/docker-engine_1.13.1-0~ubuntu-trusty_armhf.deb
--------------------------------------------------------------------------------
/Docker/x86/14.04/libltdl7_2.4.2-1.7ubuntu1_i386.deb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuhuijiayou/MachineLearningClass/b4d9fe4a6996544cf2837b648c3e09ef6797e0d2/Docker/x86/14.04/libltdl7_2.4.2-1.7ubuntu1_i386.deb
--------------------------------------------------------------------------------
/Docker/x86/16.04/docker-engine_1.13.1-0~ubuntu-xenial_armhf.deb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuhuijiayou/MachineLearningClass/b4d9fe4a6996544cf2837b648c3e09ef6797e0d2/Docker/x86/16.04/docker-engine_1.13.1-0~ubuntu-xenial_armhf.deb
--------------------------------------------------------------------------------
/Docker/x86/16.04/libltdl7_2.4.6-0.1_i386.deb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuhuijiayou/MachineLearningClass/b4d9fe4a6996544cf2837b648c3e09ef6797e0d2/Docker/x86/16.04/libltdl7_2.4.6-0.1_i386.deb
--------------------------------------------------------------------------------
/Expert 1-3 Source Code/exper1.txt:
--------------------------------------------------------------------------------
print(__doc__)

# Author: Gael Varoquaux
# License: BSD 3 clause

# Standard scientific Python imports
import matplotlib.pyplot as plt

# Import datasets, classifiers and performance metrics
from sklearn import datasets, svm, metrics

# The digits dataset
# Load the dataset
digits = datasets.load_digits()

# The data that we are interested in is made of 8x8 images of digits, let's
# have a look at the first 4 images, stored in the `images` attribute of the
# dataset. If we were working from image files, we could load them using
# matplotlib.pyplot.imread. Note that each image must have the same size. For these
# images, we know which digit they represent: it is given in the 'target' of
# the dataset.
# Look at the first 4 images
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:4]):
    plt.subplot(2, 4, index + 1)
    plt.axis('off')
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
    plt.title('Training: %i' % label)

# To apply a classifier on this data, we need to flatten the image, to
# turn the data in a (samples, feature) matrix:
# Preprocessing: flatten each image into a vector
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))

# Create a classifier: a support vector classifier
# Build the SVM classifier
classifier = svm.SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
                     degree=3, gamma=0.001, kernel='rbf',
                     max_iter=-1, probability=False, random_state=None, shrinking=True,
                     tol=0.001, verbose=False)

# We learn the digits on the first half of the digits
# Train the classifier
classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2])

# Now predict the value of the digit on the second half:
# Evaluate the classifier
expected = digits.target[n_samples // 2:]
predicted = classifier.predict(data[n_samples // 2:])

print("Classification report for classifier %s:\n%s\n"
      % (classifier, metrics.classification_report(expected, predicted)))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))

images_and_predictions = list(zip(digits.images[n_samples // 2:], predicted))
for index, (image, prediction) in enumerate(images_and_predictions[:4]):
    plt.subplot(2, 4, index + 5)
    plt.axis('off')
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
    plt.title('Prediction: %i' % prediction)

plt.show()
--------------------------------------------------------------------------------
/Expert 1-3 Source Code/exper2.txt:
--------------------------------------------------------------------------------
print(__doc__)

from time import time
import numpy as np
import matplotlib.pyplot as plt

from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

# Fix the random seed
np.random.seed(42)

# Load the dataset
digits = load_digits()
data = scale(digits.data)

# Inspect the dataset
# It contains 10 classes (the handwritten digits 0-9), 1797 samples, and 64 features per sample
n_samples, n_features = data.shape
n_digits = len(np.unique(digits.target))
labels = digits.target

sample_size = 300

print("n_digits: %d, \t n_samples %d, \t n_features %d"
      % (n_digits, n_samples, n_features))


print(82 * '_')
print('init\t\ttime\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette')

# Function: fit a clusterer and report its evaluation metrics
def bench_k_means(estimator, name, data):
    t0 = time()
    estimator.fit(data)
    print('%-9s\t%.2fs\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f'
          % (name, (time() - t0), estimator.inertia_,
             metrics.homogeneity_score(labels, estimator.labels_),
             metrics.completeness_score(labels, estimator.labels_),
             metrics.v_measure_score(labels, estimator.labels_),
             metrics.adjusted_rand_score(labels, estimator.labels_),
             metrics.adjusted_mutual_info_score(labels, estimator.labels_),
             metrics.silhouette_score(data, estimator.labels_,
                                      metric='euclidean',
                                      sample_size=sample_size)))

# Build K-means clusterer 1: centroids initialized with k-means++, passed to the function above
bench_k_means(KMeans(init='k-means++', n_clusters=n_digits, n_init=10),
              name="k-means++", data=data)

# Build K-means clusterer 2: centroids initialized at random, passed to the function above
bench_k_means(KMeans(init='random', n_clusters=n_digits, n_init=10),
              name="random", data=data)

# in this case the seeding of the centers is deterministic, hence we run the
# kmeans algorithm only once with n_init=1
# Build K-means clusterer 3: use PCA to find the principal axes of the data and use
# them to initialize the centroids, then pass it to the function above
pca = PCA(n_components=n_digits).fit(data)
bench_k_means(KMeans(init=pca.components_, n_clusters=n_digits, n_init=1),
              name="PCA-based",
              data=data)
print(82 * '_')

# #############################################################################
# Cluster visualization: plot the clustering with matplotlib (PCA-reduced to 2-D for display)
# Visualize the results on PCA-reduced data

reduced_data = PCA(n_components=2).fit_transform(data)
# Exercise: modify the parameters here
kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
kmeans.fit(reduced_data)

# Step size of the mesh. Decrease to increase the quality of the VQ.
h = .02  # point in the mesh [x_min, x_max]x[y_min, y_max].

# Plot the decision boundary. For that, we will assign a color to each
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
           extent=(xx.min(), xx.max(), yy.min(), yy.max()),
           cmap=plt.cm.Paired,
           aspect='auto', origin='lower')

plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
            marker='x', s=169, linewidths=3,
            color='w', zorder=10)
plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n'
          'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
--------------------------------------------------------------------------------
/Expert 1-3 Source Code/exper3.txt:
--------------------------------------------------------------------------------

"""A very simple MNIST classifier.
See extensive documentation at
https://www.tensorflow.org/get_started/mnist/beginners
"""
# part1
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import sys

from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import tensorflow as tf

# Load the data
mnist = input_data.read_data_sets("/tmp/tensorflow/mnist/input_data", one_hot=True)

# Build a single-layer neural network
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b

# Define the loss function and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])

# The raw formulation of cross-entropy,
#
#   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),
#                                 reduction_indices=[1]))
#
# can be numerically unstable.
#
# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw
# outputs of 'y', and then average across the batch.
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# Note the learning_rate
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
# Train the model: the range() bound is the number of iterations
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    if i % 100 == 0:
        print("cross_entropy error:", sess.run(cross_entropy, feed_dict={x: batch_xs, y_: batch_ys}))

# Evaluate the trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("test accuracy: ", sess.run(accuracy, feed_dict={x: mnist.test.images,
                                                       y_: mnist.test.labels}))

# part2: pick an image from the test set and classify it
# Which image?
p = 0

s = sess.run(y, feed_dict={x: mnist.test.images[p].reshape(1, 784)})
print("Prediction : ", sess.run(tf.argmax(s, 1)))

# Show the image
plt.imshow(mnist.test.images[p].reshape(28, 28), cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
--------------------------------------------------------------------------------
/Expert 4 Source Code/batch-represent/main.lua:
--------------------------------------------------------------------------------
#!/usr/bin/env th

require 'torch'
require 'optim'

require 'paths'

require 'xlua'
require 'csvigo'

require 'nn'
require 'dpnn'

local opts = paths.dofile('opts.lua')

opt = opts.parse(arg)
print(opt)

torch.setdefaulttensortype('torch.FloatTensor')

if opt.cuda then
   require 'cutorch'
   require 'cunn'
   cutorch.setDevice(opt.device)
end

opt.manualSeed = 2
torch.manualSeed(opt.manualSeed)

paths.dofile('dataset.lua')
paths.dofile('batch-represent.lua')

model = torch.load(opt.model)
model:evaluate()
if opt.cuda then
   model:cuda()
end

repsCSV = csvigo.File(paths.concat(opt.outDir, "reps.csv"), 'w')
labelsCSV = csvigo.File(paths.concat(opt.outDir, "labels.csv"), 'w')

batchRepresent()

repsCSV:close()
labelsCSV:close()
--------------------------------------------------------------------------------
/Expert 4 Source Code/demos/classifier.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python2
#
# Example to classify faces.
# Brandon Amos
# 2015/10/11
#
# Copyright 2015-2016 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import time

start = time.time()

import argparse
import cv2
import os
import pickle
import sys

from operator import itemgetter

import numpy as np
np.set_printoptions(precision=2)
import pandas as pd

import openface

from sklearn.pipeline import Pipeline
from sklearn.lda import LDA
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV
from sklearn.mixture import GMM
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

fileDir = os.path.dirname(os.path.realpath(__file__))
modelDir = os.path.join(fileDir, '..', 'models')
dlibModelDir = os.path.join(modelDir, 'dlib')
openfaceModelDir = os.path.join(modelDir, 'openface')


def getRep(imgPath, multiple=False):
    start = time.time()
    bgrImg = cv2.imread(imgPath)
    if bgrImg is None:
        raise Exception("Unable to load image: {}".format(imgPath))

    rgbImg = cv2.cvtColor(bgrImg, cv2.COLOR_BGR2RGB)

    if args.verbose:
        print("  + Original size: {}".format(rgbImg.shape))
    if args.verbose:
        print("Loading the image took {} seconds.".format(time.time() - start))

    start = time.time()

    if multiple:
        bbs = align.getAllFaceBoundingBoxes(rgbImg)
    else:
        bb1 = align.getLargestFaceBoundingBox(rgbImg)
        bbs = [bb1]
    if len(bbs) == 0 or (not multiple and bb1 is None):
        raise Exception("Unable to find a face: {}".format(imgPath))
    if args.verbose:
        print("Face detection took {} seconds.".format(time.time() - start))

    reps = []
    for bb in bbs:
        start = time.time()
        alignedFace = align.align(
            args.imgDim,
            rgbImg,
            bb,
            landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
        if alignedFace is None:
            raise Exception("Unable to align image: {}".format(imgPath))
        if args.verbose:
            print("Alignment took {} seconds.".format(time.time() - start))
            print("This bbox is centered at {}, {}".format(bb.center().x, bb.center().y))

        start = time.time()
        rep = net.forward(alignedFace)
        if args.verbose:
            print("Neural network forward pass took {} seconds.".format(
                time.time() - start))
        reps.append((bb.center().x, rep))
    sreps = sorted(reps, key=lambda x: x[0])
    return sreps


def train(args):
    print("Loading embeddings.")
    fname = "{}/labels.csv".format(args.workDir)
    labels = pd.read_csv(fname, header=None).as_matrix()[:, 1]
    labels = list(map(itemgetter(1),
                      map(os.path.split,
                          map(os.path.dirname, labels))))  # Get the directory.
    fname = "{}/reps.csv".format(args.workDir)
    embeddings = pd.read_csv(fname, header=None).as_matrix()
    le = LabelEncoder().fit(labels)
    labelsNum = le.transform(labels)
    nClasses = len(le.classes_)
    print("Training for {} classes.".format(nClasses))

    if args.classifier == 'LinearSvm':
        clf = SVC(C=1, kernel='linear', probability=True)
    elif args.classifier == 'GridSearchSvm':
        print("""
        Warning: In our experiences, using a grid search over SVM hyper-parameters only
        gives marginally better performance than a linear SVM with C=1 and
        is not worth the extra computations of performing a grid search.
        """)
        param_grid = [
            {'C': [1, 10, 100, 1000],
             'kernel': ['linear']},
            {'C': [1, 10, 100, 1000],
             'gamma': [0.001, 0.0001],
             'kernel': ['rbf']}
        ]
        clf = GridSearchCV(SVC(C=1, probability=True), param_grid, cv=5)
    elif args.classifier == 'GMM':  # Not the best performer
        clf = GMM(n_components=nClasses)

    # ref:
    # http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#example-classification-plot-classifier-comparison-py
    elif args.classifier == 'RadialSvm':  # Radial Basis Function kernel
        # works better with C = 1 and gamma = 2
        clf = SVC(C=1, kernel='rbf', probability=True, gamma=2)
    elif args.classifier == 'DecisionTree':  # Not the best performer
        clf = DecisionTreeClassifier(max_depth=20)
    elif args.classifier == 'GaussianNB':
        clf = GaussianNB()

    # ref: https://jessesw.com/Deep-Learning/
    elif args.classifier == 'DBN':
        from nolearn.dbn import DBN
        clf = DBN([embeddings.shape[1], 500, labelsNum[-1:][0] + 1],  # input nodes, hidden nodes, output nodes
                  learn_rates=0.3,
                  # Smaller steps mean a possibly more accurate result, but the
                  # training will take longer
                  learn_rate_decays=0.9,
                  # a factor the initial learning rate will be multiplied by
                  # after each iteration of the training
                  epochs=300,  # number of iterations
                  # dropouts = 0.25, # Express the percentage of nodes that
                  # will be randomly dropped as a decimal.
                  verbose=1)

    if args.ldaDim > 0:
        clf_final = clf
        clf = Pipeline([('lda', LDA(n_components=args.ldaDim)),
                        ('clf', clf_final)])

    clf.fit(embeddings, labelsNum)

    fName = "{}/classifier.pkl".format(args.workDir)
    print("Saving classifier to '{}'".format(fName))
    with open(fName, 'wb') as f:
        pickle.dump((le, clf), f)


def infer(args, multiple=False):
    with open(args.classifierModel, 'rb') as f:
        if sys.version_info[0] < 3:
            (le, clf) = pickle.load(f)
        else:
            (le, clf) = pickle.load(f, encoding='latin1')

    for img in args.imgs:
        print("\n=== {} ===".format(img))
        reps = getRep(img, multiple)
        if len(reps) > 1:
            print("List of faces in image from left to right")
        for r in reps:
            rep = r[1].reshape(1, -1)
            bbx = r[0]
            start = time.time()
            predictions = clf.predict_proba(rep).ravel()
            maxI = np.argmax(predictions)
            person = le.inverse_transform(maxI)
            confidence = predictions[maxI]
            if args.verbose:
                print("Prediction took {} seconds.".format(time.time() - start))
            if multiple:
                print("Predict {} @ x={} with {:.2f} confidence.".format(person, bbx,
                                                                         confidence))
            else:
                print("Predict {} with {:.2f} confidence.".format(person, confidence))
            if isinstance(clf, GMM):
                dist = np.linalg.norm(rep - clf.means_[maxI])
                print("  + Distance from the mean: {}".format(dist))


if __name__ == '__main__':

    parser = argparse.ArgumentParser()

    parser.add_argument(
        '--dlibFacePredictor',
        type=str,
        help="Path to dlib's face predictor.",
        default=os.path.join(
            dlibModelDir,
            "shape_predictor_68_face_landmarks.dat"))
    parser.add_argument(
        '--networkModel',
        type=str,
        help="Path to Torch network model.",
        default=os.path.join(
            openfaceModelDir,
            'nn4.small2.v1.t7'))
    parser.add_argument('--imgDim', type=int,
                        help="Default image dimension.", default=96)
    parser.add_argument('--cuda', action='store_true')
    parser.add_argument('--verbose', action='store_true')

    subparsers = parser.add_subparsers(dest='mode', help="Mode")
    trainParser = subparsers.add_parser('train',
                                        help="Train a new classifier.")
    trainParser.add_argument('--ldaDim', type=int, default=-1)
    trainParser.add_argument(
        '--classifier',
        type=str,
        choices=[
            'LinearSvm',
            'GridSearchSvm',
            'GMM',
            'RadialSvm',
            'DecisionTree',
            'GaussianNB',
            'DBN'],
        help='The type of classifier to use.',
        default='LinearSvm')
    trainParser.add_argument(
        'workDir',
        type=str,
        help="The input work directory containing 'reps.csv' and 'labels.csv'. Obtained from aligning a directory with 'align-dlib' and getting the representations with 'batch-represent'.")

    inferParser = subparsers.add_parser(
        'infer', help='Predict who an image contains from a trained classifier.')
    inferParser.add_argument(
        'classifierModel',
        type=str,
        help='The Python pickle representing the classifier. This is NOT the Torch network model, which can be set with --networkModel.')
    inferParser.add_argument('imgs', type=str, nargs='+',
                             help="Input image.")
    inferParser.add_argument('--multi', help="Infer multiple faces in image",
                             action="store_true")

    args = parser.parse_args()
    if args.verbose:
        print("Argument parsing and import libraries took {} seconds.".format(
            time.time() - start))

    if args.mode == 'infer' and args.classifierModel.endswith(".t7"):
        raise Exception("""
Torch network model passed as the classification model,
which should be a Python pickle (.pkl)

See the documentation for the distinction between the Torch
network and classification models:

        http://cmusatyalab.github.io/openface/demo-3-classifier/
        http://cmusatyalab.github.io/openface/training-new-models/

Use `--networkModel` to set a non-standard Torch network model.""")
    start = time.time()

    align = openface.AlignDlib(args.dlibFacePredictor)
    net = openface.TorchNeuralNet(args.networkModel, imgDim=args.imgDim,
                                  cuda=args.cuda)

    if args.verbose:
        print("Loading the dlib and OpenFace models took {} seconds.".format(
            time.time() - start))
        start = time.time()

    if args.mode == 'train':
        train(args)
    elif args.mode == 'infer':
        infer(args, args.multi)
--------------------------------------------------------------------------------
/Expert 4 Source Code/step-1_find-faces.py:
--------------------------------------------------------------------------------
import sys
import dlib
from skimage import io

# Take the image file name from the command line
file_name = sys.argv[1]

# Create a HOG face detector using the built-in dlib class
face_detector = dlib.get_frontal_face_detector()

win = dlib.image_window()

# Load the image into an array
image = io.imread(file_name)

# Run the HOG face detector on the image data.
# The result will be the bounding boxes of the faces in our image.
detected_faces = face_detector(image, 1)

print("I found {} faces in the file {}".format(len(detected_faces), file_name))

# Open a window on the desktop showing the image
win.set_image(image)

# Loop through each face we found in the image
for i, face_rect in enumerate(detected_faces):

    # Detected faces are returned as an object with the coordinates
    # of the top, left, right and bottom edges
    print("- Face #{} found at Left: {} Top: {} Right: {} Bottom: {}".format(i, face_rect.left(), face_rect.top(), face_rect.right(), face_rect.bottom()))

    # Draw a box around each face we found
    win.add_overlay(face_rect)

# Wait until the user hits ENTER to close the window
dlib.hit_enter_to_continue()
--------------------------------------------------------------------------------
/Expert 4 Source Code/step-2a_finding-face-landmarks.py:
--------------------------------------------------------------------------------
import sys
import dlib
from skimage import io

# You can download the required pre-trained face detection model here:
# http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
predictor_model = "./models/dlib/shape_predictor_68_face_landmarks.dat"

# Take the image file name from the command line
file_name = sys.argv[1]

# Create a HOG face detector using the built-in dlib class
face_detector = dlib.get_frontal_face_detector()
face_pose_predictor = dlib.shape_predictor(predictor_model)

win = dlib.image_window()

# Load the image
image = io.imread(file_name)

# Run the HOG face detector on the image data
detected_faces = face_detector(image, 1)

print("Found {} faces in the image file {}".format(len(detected_faces), file_name))

# Show the desktop window with the image
win.set_image(image)

# Loop through each face we found in the image
for i, face_rect in enumerate(detected_faces):

    # Detected faces are returned as an object with the coordinates
    # of the top, left, right and bottom edges
    print("- Face #{} found at Left: {} Top: {} Right: {} Bottom: {}".format(i, face_rect.left(), face_rect.top(), face_rect.right(), face_rect.bottom()))

    # Draw a box around each face we found
    win.add_overlay(face_rect)

    # Get the face's pose
    pose_landmarks = face_pose_predictor(image, face_rect)

    # Draw the face landmarks on the screen.
    win.add_overlay(pose_landmarks)

dlib.hit_enter_to_continue()
--------------------------------------------------------------------------------
/Expert 4 Source Code/step-2b_projecting-faces.py:
--------------------------------------------------------------------------------
import sys
import dlib
import cv2
import openface

# You can download the required pre-trained face detection model here:
# http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
predictor_model = "./models/dlib/shape_predictor_68_face_landmarks.dat"

# Take the image file name from the command line
file_name = sys.argv[1]

# Create a HOG face detector using the built-in dlib class
face_detector = dlib.get_frontal_face_detector()
face_pose_predictor = dlib.shape_predictor(predictor_model)
face_aligner = openface.AlignDlib(predictor_model)

# Load the image
image = cv2.imread(file_name)

# Run the HOG face detector on the image data
detected_faces = face_detector(image, 1)

print("Found {} faces in the image file {}".format(len(detected_faces), file_name))

# Loop through each face we found in the image
for i, face_rect in enumerate(detected_faces):

    # Detected faces are returned as an object with the coordinates
    # of the top, left, right and bottom edges
    print("- Face #{} found at Left: {} Top: {} Right: {} Bottom: {}".format(i, face_rect.left(), face_rect.top(), face_rect.right(), face_rect.bottom()))

    # Get the face's pose
    pose_landmarks = face_pose_predictor(image, face_rect)

    # Use openface to calculate and perform the face alignment
    alignedFace = face_aligner.align(534, image, face_rect, landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)

    # Save the aligned image to a file
    cv2.imwrite("aligned_face_{}.jpg".format(i), alignedFace)
--------------------------------------------------------------------------------
/Expert 4 Source Code/util/align-dlib.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python2
#
# Copyright 2015-2016 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import cv2
import numpy as np
import os
import random
import shutil

import openface
import openface.helper
from openface.data import iterImgs

fileDir = os.path.dirname(os.path.realpath(__file__))
modelDir = os.path.join(fileDir, '..', 'models')
dlibModelDir = os.path.join(modelDir, 'dlib')
openfaceModelDir = os.path.join(modelDir, 'openface')


def write(vals, fName):
    if os.path.isfile(fName):
        print("{} exists. Backing up.".format(fName))
        os.rename(fName, "{}.bak".format(fName))
    with open(fName, 'w') as f:
        for p in vals:
            f.write(",".join(str(x) for x in p))
            f.write("\n")


def computeMeanMain(args):
    align = openface.AlignDlib(args.dlibFacePredictor)

    imgs = list(iterImgs(args.inputDir))
    if args.numImages > 0:
        imgs = random.sample(imgs, args.numImages)

    facePoints = []
    for img in imgs:
        rgb = img.getRGB()
        bb = align.getLargestFaceBoundingBox(rgb)
        alignedPoints = align.align(rgb, bb)
        if alignedPoints:
            facePoints.append(alignedPoints)

    facePointsNp = np.array(facePoints)
    mean = np.mean(facePointsNp, axis=0)
    std = np.std(facePointsNp, axis=0)

    write(mean, "{}/mean.csv".format(args.modelDir))
    write(std, "{}/std.csv".format(args.modelDir))

    # Only import in this mode.
    import matplotlib as mpl
    mpl.use('Agg')
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.scatter(mean[:, 0], -mean[:, 1], color='k')
    ax.axis('equal')
    for i, p in enumerate(mean):
        ax.annotate(str(i), (p[0] + 0.005, -p[1] + 0.005), fontsize=8)
    plt.savefig("{}/mean.png".format(args.modelDir))


def alignMain(args):
    openface.helper.mkdirP(args.outputDir)

    imgs = list(iterImgs(args.inputDir))

    # Shuffle so multiple versions can be run at once.
    random.shuffle(imgs)

    landmarkMap = {
        'outerEyesAndNose': openface.AlignDlib.OUTER_EYES_AND_NOSE,
        'innerEyesAndBottomLip': openface.AlignDlib.INNER_EYES_AND_BOTTOM_LIP
    }
    if args.landmarks not in landmarkMap:
        raise Exception("Landmarks unrecognized: {}".format(args.landmarks))

    landmarkIndices = landmarkMap[args.landmarks]

    align = openface.AlignDlib(args.dlibFacePredictor)

    nFallbacks = 0
    for imgObject in imgs:
        print("=== {} ===".format(imgObject.path))
        outDir = os.path.join(args.outputDir, imgObject.cls)
        openface.helper.mkdirP(outDir)
        outputPrefix = os.path.join(outDir, imgObject.name)
        imgName = outputPrefix + ".png"

        if os.path.isfile(imgName):
            if args.verbose:
                print("  + Already found, skipping.")
        else:
            rgb = imgObject.getRGB()
            if rgb is None:
                if args.verbose:
                    print("  + Unable to load.")
                outRgb = None
            else:
                outRgb = align.align(args.size, rgb,
                                     landmarkIndices=landmarkIndices,
                                     skipMulti=args.skipMulti)
                if outRgb is None and args.verbose:
                    print("  + Unable to align.")

            if args.fallbackLfw and outRgb is None:
                nFallbacks += 1
                deepFunneled = "{}/{}.jpg".format(os.path.join(args.fallbackLfw,
                                                               imgObject.cls),
                                                  imgObject.name)
                shutil.copy(deepFunneled, "{}/{}.jpg".format(os.path.join(args.outputDir,
                                                                          imgObject.cls),
                                                             imgObject.name))

            if outRgb is not None:
                if args.verbose:
                    print("  + Writing aligned file to disk.")
                outBgr = cv2.cvtColor(outRgb, cv2.COLOR_RGB2BGR)
                cv2.imwrite(imgName, outBgr)

    if args.fallbackLfw:
        print('nFallbacks:', nFallbacks)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    parser.add_argument('inputDir', type=str, help="Input image directory.")
    parser.add_argument('--dlibFacePredictor', type=str, help="Path to dlib's face predictor.",
                        default=os.path.join(dlibModelDir, "shape_predictor_68_face_landmarks.dat"))
    subparsers = parser.add_subparsers(dest='mode', help="Mode")
    computeMeanParser = subparsers.add_parser(
        'computeMean', help='Compute the image mean of a directory of images.')
    computeMeanParser.add_argument('--numImages', type=int, help="The number of images. '0' for all images.",
                                   default=0)  # <= 0 ===> all imgs
    alignmentParser = subparsers.add_parser(
        'align', help='Align a directory of images.')
    alignmentParser.add_argument('landmarks', type=str,
                                 choices=['outerEyesAndNose',
                                          'innerEyesAndBottomLip',
                                          'eyes_1'],
                                 help='The landmarks to align to.')
    alignmentParser.add_argument(
        'outputDir', type=str, help="Output directory of aligned images.")
    alignmentParser.add_argument('--size', type=int, help="Default image size.",
                                 default=96)
    alignmentParser.add_argument('--fallbackLfw', type=str,
                                 help="If alignment doesn't work, fall back to copying the deep-funneled version from this directory.")
    alignmentParser.add_argument(
        '--skipMulti', action='store_true', help="Skip images with more than one face.")
    alignmentParser.add_argument('--verbose', action='store_true')

    args = parser.parse_args()

    if args.mode == 'computeMean':
        computeMeanMain(args)
    else:
        alignMain(args)
--------------------------------------------------------------------------------
/Introduction to Machine Learning.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuhuijiayou/MachineLearningClass/b4d9fe4a6996544cf2837b648c3e09ef6797e0d2/Introduction to Machine Learning.pdf
--------------------------------------------------------------------------------
/Jupyter/README.md:
--------------------------------------------------------------------------------
## Anaconda Installation ##

- Download the Anaconda Linux 64-bit installer `Anaconda3-5.0.1-Linux-x86_64.sh`:

>Download: [Baidu Cloud](https://pan.baidu.com/s/1w90tNSWkDeb57w6NGpdumw)

>Extraction code: `ysc6`

- Install Anaconda:

    bash Anaconda3-5.0.1-Linux-x86_64.sh

- After installation, restart the `Terminal` for Anaconda to take effect.

- During installation you will be asked for an install path; just press `Enter` to accept the default. At one point you will be asked whether to add the Anaconda install path to the `bash` resource file `.bashrc`; answer `yes` (the default is `no`).

- If you did not answer `yes`, you need to configure the environment variable yourself. In a `Terminal`, open the `profile` file with the `Gedit` text editor:

    sudo gedit /etc/profile

- Add the following line to the `profile` file, replacing `/home/bupt` with your own Anaconda install path:

    export PATH=/home/bupt/anaconda3/bin:$PATH

- Press `Ctrl`+`S` to save, then exit `Gedit`.

- Restart the `Terminal`; if that is not enough, reboot the Linux system.

- Once `PATH` is configured, you can check that it is set up correctly with:

    conda --version

- You can list the installed Anaconda packages with:

    conda list

- Python version: `3.6.3`

## TensorFlow Installation ##

- After installing Anaconda, install TensorFlow `1.3.0` with:

    conda install tensorflow==1.3.0
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
----------

# Table of Contents #

- [Preparation](#preparation)

- [Experiment 1: Python Machine Learning Basics: Supervised Learning](#experiment-1-python-machine-learning-basics-supervised-learning)

    - [Objectives](#objectives)

    - [Equipment and Preparation](#equipment-and-preparation)

    - [Procedure](#procedure)

    - [Results](#results)

    - [Exercises](#exercises)

- [Experiment 2: Python Machine Learning Basics: Unsupervised Learning](#experiment-2-python-machine-learning-basics-unsupervised-learning)

    - [Objectives](#objectives-1)

    - [Equipment and Preparation](#equipment-and-preparation-1)

    - [Procedure](#procedure-1)

    - [Results](#results-1)

    - [Exercises](#exercises-1)
- [Experiment 3: Python Deep Learning Basics: A Single-Layer Neural Network](#experiment-3-python-deep-learning-basics-a-single-layer-neural-network)

    - [Objectives](#objectives-2)

    - [Equipment and Preparation](#equipment-and-preparation-2)

    - [Procedure](#procedure-2)

    - [Results](#results-2)

    - [Exercises](#exercises-2)

- [Experiment 4: Python Deep Learning Basics: Face Recognition](#experiment-4-python-deep-learning-basics-face-recognition)

    - [Objectives](#objectives-3)

    - [Equipment and Preparation](#equipment-and-preparation-3)

    - [Procedure](#procedure-3)

        - [Environment and Data Preparation](#environment-and-data-preparation)

        - [Detecting Faces in Photos](#detecting-faces-in-photos)

        - [Extracting Facial Landmarks and Applying an Affine Transformation](#extracting-facial-landmarks-and-applying-an-affine-transformation)

        - [Generating the Face Embedding Files](#generating-the-face-embedding-files)

        - [Running the Full Face Recognition Experiment](#running-the-full-face-recognition-experiment)

- [Jupyter Environment Setup](Jupyter)

    - [Anaconda Installation](Jupyter#anaconda-installation)

    - [TensorFlow Installation](Jupyter#tensorflow-installation)

- [Docker Environment Setup](Docker)

    - [Docker Installation](Docker#docker-installation)

    - [Docker Registry Mirror](Docker#docker-registry-mirror)

    - [Loading the Docker Image from a File](Docker#loading-the-docker-image-from-a-file)

    - [Docker Run Commands](Docker#docker-run-commands)

----------

# Preparation #

- At boot, choose the first operating system: `LINUX`

- LINUX administrator password: `123456`

- After the system starts, open the `FireFox` browser from the left launcher ![](http://www.firefox.com.cn/media/img/firefox/favicon.e6bb0e59df3d.ico)

- Enter the `FTP` address in the address bar at the top:

>Address: [`ftp://10.105.240.91`](ftp://student:asdf1234@10.105.240.91/Machine%20Learning)

>Username: `student`

>Password: `asdf1234`

- Open the `编程之美` (Beauty of Programming) directory

- Download the lab videos and the corresponding documents

----------

# Experiment 1: Python Machine Learning Basics: Supervised Learning #

## Objectives ##

- Understand the basic concepts of machine learning and how it is applied.

- Master the basic workflow of a machine learning prediction task through hands-on practice.

## Equipment and Preparation ##

### Equipment ###

- Hardware: one PC

- Software: Ubuntu, Anaconda3 5.0.1, Scikit-learn 0.19 and its dependencies

### Preparation ###

- Review the [fundamentals and algorithms](http://sklearn.apachecn.org/#/docs/1) of supervised machine learning

- Review the [fundamentals](http://sklearn.apachecn.org/#/docs/5) of the SVM classifier

## Procedure ##

### About Jupyter Notebook ###

>Jupyter Notebook, formerly known as IPython notebook, is an interactive notebook environment that supports more than 40 programming languages; see this [introduction](https://www.zhihu.com/question/37490497).

### Opening the Environment ###

1. After entering Ubuntu, press `Ctrl`+`Alt`+`T` to open a `Terminal` window.

1. In the terminal window, enter the following **`1`** command to open the Jupyter Notebook environment:

    ```bash
    jupyter notebook
    ```

1. Press `Enter` and wait; the browser will automatically open the following page:

    ![](https://i.imgur.com/PZYQqSc.png)

1. Click the `New`->`Python3` button at the top right of the page.

1. Copy the code from `exper1.txt` into the `In[]:` cell at the cursor and press `Shift`+`Enter` to run it.

    ![](https://i.imgur.com/PFyjMn9.png)

1. The code in `exper1.txt` carries out the following steps:
    1. Import the modules the experiment depends on

    ```python
    # Import the matplotlib plotting package
    import matplotlib.pyplot as plt
    # Import datasets, classifiers and performance metrics
    from sklearn import datasets, svm, metrics
    ```

    1. Load the sample dataset: scikit-learn's built-in Handwritten Digits Data Set

    ```python
    # The digits dataset
    # Load the dataset
    digits = datasets.load_digits()
    ```
    1. Inspect the dataset: display some of its images with `matplotlib`.

    ```python
    # The data that we are interested in is made of 8x8 images of digits, let's
    # have a look at the first 4 images, stored in the `images` attribute of the
    # dataset. If we were working from image files, we could load them using
    # matplotlib.pyplot.imread. Note that each image must have the same size. For these
    # images, we know which digit they represent: it is given in the 'target' of
    # the dataset.
    # Look at the first 4 images in the dataset
    images_and_labels = list(zip(digits.images, digits.target))
    for index, (image, label) in enumerate(images_and_labels[:4]):
        plt.subplot(2, 4, index + 1)
        plt.axis('off')
        plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
        plt.title('Training: %i' % label)
    ```

    1. Preprocess the data: flatten each image into a vector with `numpy`.

    ```python
    # To apply a classifier on this data, we need to flatten the image, to
    # turn the data in a (samples, feature) matrix:
    # Preprocessing: flatten the dataset into vectors
    n_samples = len(digits.images)
    data = digits.images.reshape((n_samples, -1))
    ```

    1. Build the classifier model, using scikit-learn's `SVM` classifier.

    ```python
    # Create a classifier: a support vector classifier
    # Build the SVM classifier
    classifier = svm.SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
                         degree=3, gamma=0.001, kernel='rbf',
                         max_iter=-1, probability=False, random_state=None, shrinking=True,
                         tol=0.001, verbose=False)
    ```

    1. Train the classifier model on one half of the dataset.

    ```python
    # We learn the digits on the first half of the digits
    # Train the classifier
    classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2])
    ```

    1. Use the trained classifier to predict the other half of the dataset.

    ```python
    # Now predict the value of the digit on the second half:
    # Evaluate the classifier
    expected = digits.target[n_samples // 2:]
    predicted = classifier.predict(data[n_samples // 2:])
    ```

    1. Check the classifier's predictions: use scikit-learn's built-in `metrics` to inspect the precision, recall, confusion matrix, and so on.

    ```python
    print("Classification report for classifier %s:\n%s\n"
          % (classifier, metrics.classification_report(expected, predicted)))
    print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))

    images_and_predictions = list(zip(digits.images[n_samples // 2:], predicted))
    for index, (image, prediction) in enumerate(images_and_predictions[:4]):
        plt.subplot(2, 4, index + 5)
        plt.axis('off')
        plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
        plt.title('Prediction: %i' % prediction)
    plt.show()
    ```

## Results ##

- A description of the classifier, followed by each class's precision (`precision`), recall (`recall`), F1 score (`f1-score`), and the number of evaluated samples per class (`support`).

![](https://i.imgur.com/PdUhY7M.png)

- The confusion matrix, which shows how the test samples were classified.

![](https://i.imgur.com/mCVubl6.png)

- Training images and test predictions.

![](https://i.imgur.com/2G57l0P.png)

## Exercises ##

- Referring to the [SVC parameter table](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC), modify the SVM parameters, such as the penalty factor `C` and the RBF kernel coefficient `gamma`, and observe how the predictions change. After editing the code, press `Shift`+`Enter` to run it again.

![](https://i.imgur.com/f55ZeJC.png)
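- As a hedged starting point for this exercise, the sketch below refits the same train/test split with a different `C` and `gamma` and reprints the report. It reuses the names already defined in the notebook session above (`svm`, `metrics`, `data`, `n_samples`, `digits`, `expected`); the parameter values are arbitrary illustrations, not recommended settings:

```python
# Exercise sketch: same data split as the experiment above, different
# SVM hyper-parameters (values chosen only to make the effect visible).
classifier = svm.SVC(C=100.0, gamma=0.01, kernel='rbf')
classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2])
predicted = classifier.predict(data[n_samples // 2:])
print(metrics.classification_report(expected, predicted))
```

- Roughly speaking, a larger `C` penalizes training mistakes more heavily and a larger `gamma` narrows the RBF kernel, so both push the model toward fitting the training half more tightly; comparing the two reports makes the trade-off concrete.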
----------

**Appendix**: the contents of `exper1.txt` are as follows:

```python
print(__doc__)

# Author: Gael Varoquaux
# License: BSD 3 clause

# Standard scientific Python imports
import matplotlib.pyplot as plt

# Import datasets, classifiers and performance metrics
from sklearn import datasets, svm, metrics

# The digits dataset
# Load the dataset
digits = datasets.load_digits()

# The data that we are interested in is made of 8x8 images of digits, let's
# have a look at the first 4 images, stored in the `images` attribute of the
# dataset. If we were working from image files, we could load them using
# matplotlib.pyplot.imread. Note that each image must have the same size. For these
# images, we know which digit they represent: it is given in the 'target' of
# the dataset.
# Look at the first 4 images
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:4]):
    plt.subplot(2, 4, index + 1)
    plt.axis('off')
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
    plt.title('Training: %i' % label)

# To apply a classifier on this data, we need to flatten the image, to
# turn the data in a (samples, feature) matrix:
# Preprocessing: flatten each image into a vector
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))

# Create a classifier: a support vector classifier
# Build the SVM classifier
classifier = svm.SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
                     degree=3, gamma=0.001, kernel='rbf',
                     max_iter=-1, probability=False, random_state=None, shrinking=True,
                     tol=0.001, verbose=False)

# We learn the digits on the first half of the digits
# Train the classifier
classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2])

# Now predict the value of the digit on the second half:
# Evaluate the classifier
expected = digits.target[n_samples // 2:]
predicted = classifier.predict(data[n_samples // 2:])

print("Classification report for classifier %s:\n%s\n"
      % (classifier, metrics.classification_report(expected, predicted)))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))

images_and_predictions = list(zip(digits.images[n_samples // 2:], predicted))
for index, (image, prediction) in enumerate(images_and_predictions[:4]):
    plt.subplot(2, 4, index + 5)
    plt.axis('off')
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
    plt.title('Prediction: %i' % prediction)

plt.show()
```

----------

# Experiment 2: Python Machine Learning Basics: Unsupervised Learning #

## Objectives ##

- Understand the difference between supervised and unsupervised machine learning.

- Learn how to use a simple unsupervised algorithm through hands-on practice.

## Equipment and Preparation ##

### Equipment ###

- Hardware: one PC

- Software: Ubuntu, Anaconda3 5.0.1, Scikit-learn 0.19 and its dependencies

### Preparation ###

- Review the [fundamentals and algorithms](http://sklearn.apachecn.org/#/docs/19) of unsupervised machine learning

- Review the fundamentals of clustering and the [K-Means algorithm](http://sklearn.apachecn.org/#/docs/22)

## Procedure ##

1. Open the environment: start Jupyter Notebook as in Experiment 1 and create a new `New`->`Python3` notebook.

2. The code in `exper2.txt` carries out the following steps:

    1. Import the modules the experiment depends on

    ```python
    from time import time
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn import metrics
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import scale
    ```

    1. Load the sample dataset: scikit-learn's built-in Handwritten Digits Data Set.

    ```python
    # Fix the random seed
    np.random.seed(42)
    # Load the dataset
    digits = load_digits()
    ```

    1. Preprocess the data: standardize it and read off some basic facts about the dataset.

    ```python
    data = scale(digits.data)
    # Inspect the dataset
    # It contains 10 classes (the handwritten digits 0-9), 1797 samples, 64 features per sample
    # number of samples, feature dimensionality
    n_samples, n_features = data.shape
    # number of classes
    n_digits = len(np.unique(digits.target))
    labels = digits.target
    sample_size = 300
    print("n_digits: %d, \t n_samples %d, \t n_features %d"
          % (n_digits, n_samples, n_features))
    ```
    1. Define the `bench_k_means` function, which fits a model and prints its clustering evaluation metrics.

    ```python
    # Function: fit a clusterer and report its evaluation metrics
    def bench_k_means(estimator, name, data):
        t0 = time()
        estimator.fit(data)
        print('%-9s\t%.2fs\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f'
              % (name, (time() - t0), estimator.inertia_,
                 metrics.homogeneity_score(labels, estimator.labels_),
                 metrics.completeness_score(labels, estimator.labels_),
                 metrics.v_measure_score(labels, estimator.labels_),
                 metrics.adjusted_rand_score(labels, estimator.labels_),
                 metrics.adjusted_mutual_info_score(labels, estimator.labels_),
                 metrics.silhouette_score(data, estimator.labels_,
                                          metric='euclidean',
                                          sample_size=sample_size)))
    ```

    1. Build three K-means clusterers with different initializations

    ```python
    print(82 * '_')
    print('init\t\ttime\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette')
    # Build K-means clusterer 1: centroids initialized with k-means++, passed to the function above

    bench_k_means(KMeans(init='k-means++', n_clusters=n_digits, n_init=10),
                  name="k-means++", data=data)

    # Build K-means clusterer 2: centroids initialized at random, passed to the function above
    bench_k_means(KMeans(init='random', n_clusters=n_digits, n_init=10),
                  name="random", data=data)

    # in this case the seeding of the centers is deterministic, hence we run the
    # kmeans algorithm only once with n_init=1
    # Build K-means clusterer 3: use PCA to find the principal axes of the data and use
    # them to initialize the centroids, then pass it to the function above
    # Run the PCA reduction
    pca = PCA(n_components=n_digits).fit(data)
    bench_k_means(KMeans(init=pca.components_, n_clusters=n_digits, n_init=1),
                  name="PCA-based",
                  data=data)
    print(82 * '_')
    ```

    1. Visualize the clustering with `matplotlib` (`PCA`-reduced to `2` dimensions for a flat plot).

    ```python
    reduced_data = PCA(n_components=2).fit_transform(data)
    # Exercise: modify the parameters here
    kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
    kmeans.fit(reduced_data)

    # Step size of the mesh. Decrease to increase the quality of the VQ.
    h = .02  # point in the mesh [x_min, x_max]x[y_min, y_max].

    # Plot the decision boundary. For that, we will assign a color to each
    x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
    y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

    # Obtain labels for each point in mesh. Use last trained model.
    Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])

    # Put the result into a color plot
    Z = Z.reshape(xx.shape)
    plt.figure(1)
    plt.clf()
    plt.imshow(Z, interpolation='nearest',
               extent=(xx.min(), xx.max(), yy.min(), yy.max()),
               cmap=plt.cm.Paired,
               aspect='auto', origin='lower')

    plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
    # Plot the centroids as a white X
    centroids = kmeans.cluster_centers_
    plt.scatter(centroids[:, 0], centroids[:, 1],
                marker='x', s=169, linewidths=3,
                color='w', zorder=10)
    plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n'
              'Centroids are marked with white cross')
    plt.xlim(x_min, x_max)
    plt.ylim(y_min, y_max)
    plt.xticks(())
    plt.yticks(())
    plt.show()
    ```
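The decision-surface plot in the last step relies on a NumPy idiom that is easy to misread: `np.c_[xx.ravel(), yy.ravel()]` flattens the two mesh matrices and pairs them column-wise, producing one (x, y) row per grid point. A minimal sketch of what it does, using a tiny mesh instead of the fine `h=0.02` grid above:

```python
import numpy as np

# A tiny 2x3 mesh instead of the fine grid used in the experiment.
xx, yy = np.meshgrid(np.arange(3), np.arange(2))
grid_points = np.c_[xx.ravel(), yy.ravel()]
print(grid_points)
# [[0 0]
#  [1 0]
#  [2 0]
#  [0 1]
#  [1 1]
#  [2 1]]
```

Each row is one mesh point, which is exactly the (samples, features) shape that `kmeans.predict` expects; reshaping the predictions back to `xx.shape` turns them into the colored background image.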
## Results ##

- The dataset contains 10 classes (the handwritten digits 0-9), 1797 samples, and 64 features per sample.

![](https://i.imgur.com/prIQJYS.png)

- You can compare the clusterers built with the three different centroid-initialization methods; note that the PCA-based initialization is extremely fast, because the centroids need far fewer updates.

![](https://i.imgur.com/fYW8F6h.png)

![](https://i.imgur.com/Ypn5CuK.png)

- Finally, the figure shows how the data clusters after PCA reduces it to 2 dimensions.

![](https://i.imgur.com/6hPIppJ.png)

## Exercises ##

- Referring to the [K-Means parameter table](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans), modify the K-Means parameters, such as the **clusterer** initialization type (`k-means++`, `random`, `PCA-based`) and the number of clusters `n_clusters`, and observe how the clustering changes.

![](https://i.imgur.com/eEP75zg.png)
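- One hedged starting point for the exercise is to rebuild the visualization model with a different initialization and cluster count before rerunning the plotting cell. It reuses `KMeans` and `reduced_data` from the session above, and the parameter values are purely illustrative:

```python
# Exercise sketch: different init and n_clusters (illustrative values only).
kmeans = KMeans(init='random', n_clusters=5, n_init=10)
kmeans.fit(reduced_data)
print(kmeans.inertia_)  # within-cluster sum of squares for this configuration
```

- Rerunning the plotting code afterwards shows how the colored decision regions and the white centroid crosses move.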
----------

**Appendix**: the contents of `exper2.txt` are as follows:

```python
print(__doc__)

from time import time
import numpy as np
import matplotlib.pyplot as plt

from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

# Set the random seed
np.random.seed(42)

# Load the dataset
digits = load_digits()
data = scale(digits.data)

# Inspect the dataset
n_samples, n_features = data.shape
n_digits = len(np.unique(digits.target))
labels = digits.target

sample_size = 300

print("n_digits: %d, \t n_samples %d, \t n_features %d"
      % (n_digits, n_samples, n_features))


print(82 * '_')
print('init\t\ttime\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette')

# Function: fit the estimator and report the clustering metrics
def bench_k_means(estimator, name, data):
    t0 = time()
    estimator.fit(data)
    print('%-9s\t%.2fs\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f'
          % (name, (time() - t0), estimator.inertia_,
             metrics.homogeneity_score(labels, estimator.labels_),
             metrics.completeness_score(labels, estimator.labels_),
             metrics.v_measure_score(labels, estimator.labels_),
             metrics.adjusted_rand_score(labels, estimator.labels_),
             metrics.adjusted_mutual_info_score(labels, estimator.labels_),
             metrics.silhouette_score(data, estimator.labels_,
                                      metric='euclidean',
                                      sample_size=sample_size)))

# Build K-means clusterer 1 and pass it to the function above
bench_k_means(KMeans(init='k-means++', n_clusters=n_digits, n_init=10),
              name="k-means++", data=data)

# Build K-means clusterer 2 and pass it to the function above
bench_k_means(KMeans(init='random', n_clusters=n_digits, n_init=10),
              name="random", data=data)

# in this case the seeding of the centers is deterministic, hence we run the
# kmeans algorithm only once with n_init=1
# Build K-means clusterer 3 with PCA-based seeding and pass it to the function above
pca = PCA(n_components=n_digits).fit(data)
bench_k_means(KMeans(init=pca.components_, n_clusters=n_digits, n_init=1),
              name="PCA-based",
              data=data)
print(82 * '_')

# #############################################################################
# Cluster visualization: use matplotlib to plot the clustering result
# (PCA reduces the data to 2 dimensions for a planar display)
# Visualize the results on PCA-reduced data

reduced_data = PCA(n_components=2).fit_transform(data)
# Exercise: modify these parameters
kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
kmeans.fit(reduced_data)

# Step size of the mesh. Decrease to increase the quality of the VQ.
h = .02     # point in the mesh [x_min, x_max]x[y_min, y_max].

# Plot the decision boundary. For that, we will assign a color to each
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
           extent=(xx.min(), xx.max(), yy.min(), yy.max()),
           cmap=plt.cm.Paired,
           aspect='auto', origin='lower')

plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
            marker='x', s=169, linewidths=3,
            color='w', zorder=10)
plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n'
          'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
```

----------

# Experiment 3. Getting Started with Deep Learning in Python: A Single-Layer Neural Network #

## Experiment Goals ##

- Understand the basic concepts of deep learning.

- Learn, through the experiment, to implement a simple neural network with a framework.

## Equipment and Preparation ##

### Equipment ###

- Hardware: one PC

- Software: Ubuntu, Anaconda3 5.0.1, TensorFlow 1.3.0 and its dependencies

### Preparation ###

- Read up on the [basic principles and algorithms](https://www.tensorflow.org/get_started/mnist/beginners) of deep learning and TensorFlow

## Experiment Content and Steps ##

1. Open the development environment. As in Experiment 1, open Jupyter Notebook and create a new `New` -> `Python3` interactive window.

1. The code file `exper3.txt` implements the following steps:

1. Import the modules required for the experiment

```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import sys

from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import tensorflow as tf
```

1. Load the sample dataset: the handwritten-digit recognition dataset (MNIST Data) that ships with TensorFlow.

```python
# Load the data
mnist = input_data.read_data_sets("/tmp/tensorflow/mnist/input_data", one_hot=True)
```

1. Build the neural network. Use TensorFlow to build a simple single-layer network.

```python
# Build a single-layer neural network
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b
y_ = tf.placeholder(tf.float32, [None, 10])
```

1. Build the loss function and the optimizer (a note on the numerical-stability comment follows after these steps).

```python
# The raw formulation of cross-entropy,
#
#   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),
#                                 reduction_indices=[1]))
#
# can be numerically unstable.
#
# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw
# outputs of 'y', and then average across the batch.
# Build the loss function
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# Build the optimizer; note the learning_rate
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cross_entropy)
```

1. Create a TensorFlow session and initialize the variables.

```python
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
```

1. Train the model: run the network above on the given training samples, printing the training status every 100 iterations, and watch how the cross-entropy `error` value changes.

```python
# Train the model; the range sets the number of iterations
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    if i % 100 == 0:
        print("cross_entropy error:", sess.run(cross_entropy, feed_dict={x: batch_xs, y_: batch_ys}))
```

1. Check the model's predictions. Define how the accuracy `accuracy` is computed and evaluate the model's prediction accuracy.

```python
# Test the trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("test accuracy: ", sess.run(accuracy, feed_dict={x: mnist.test.images,
                                                       y_: mnist.test.labels}))
```

1. Pick an image to test: choose one image and use the trained model to predict it.

```python
# part2: test on a chosen image
# index of the image to test
p = 0
s = sess.run(y, feed_dict={x: mnist.test.images[p].reshape(1, 784)})
```

1. Print the model's prediction and display the test image.

```python
print("Prediction : ", sess.run(tf.argmax(s, 1)))

# Display the image
plt.imshow(mnist.test.images[p].reshape(28, 28), cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
```
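
Why the raw cross-entropy formulation mentioned in the loss-function step is unstable: when a logit is large, `exp` overflows before the softmax normalizes it. The numpy sketch below (an illustration only, with made-up logit values; it is not part of the lab code) contrasts the naive computation with the shift-by-max (log-sum-exp) trick that stable implementations such as `tf.nn.softmax_cross_entropy_with_logits` rely on:

```python
import numpy as np

logits = np.array([1000.0, 0.0, -1000.0])   # made-up, deliberately extreme
labels = np.array([1.0, 0.0, 0.0])          # one-hot target

# Naive: softmax then log. exp(1000) overflows to inf, so the loss is nan
# (numpy also emits overflow warnings here).
p = np.exp(logits) / np.sum(np.exp(logits))
print("naive loss:", -np.sum(labels * np.log(p)))

# Stable: subtract max(logits) before exponentiating (log-sum-exp trick).
shifted = logits - np.max(logits)
log_softmax = shifted - np.log(np.sum(np.exp(shifted)))
print("stable loss:", -np.sum(labels * log_softmax))   # correctly ~0.0
```
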
## Experiment Results ##

- As training proceeds, the network's cross-entropy `error` on the dataset keeps decreasing, showing that it is gradually fitting the training data; the final accuracy `accuracy` on the test set is `91.02%`.

![](https://i.imgur.com/Igv7sT4.png)

- Pick an image from the test set and predict it:

![](https://i.imgur.com/bnho3Gf.png)

## Exercises ##

- Modify the number of training iterations (`range`), the learning rate `learning_rate`, etc., and observe how the results change; a sketch is given below.

![](https://i.imgur.com/cI6Lehb.png)
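
As a starting point for this exercise, the fragment below is a hedged sketch of the lines to change in `exper3.txt`; it assumes `tf`, `mnist`, `x`, `y_`, `cross_entropy`, and an active `sess` are already defined as above, and the particular values (3000 iterations, a 0.5 learning rate) are arbitrary picks for experimentation, not recommended settings:

```python
# Exercise sketch: a larger learning rate and more iterations than the defaults.
# Watch how the printed cross-entropy and the final test accuracy change.
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cross_entropy)

for i in range(3000):                       # was range(1000)
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```
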
----------

**Appendix**: the contents of `exper3.txt` are as follows:

`part1`:

```python
"""A very simple MNIST classifier.
See extensive documentation at
https://www.tensorflow.org/get_started/mnist/beginners
"""
# part1
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import sys

from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import tensorflow as tf

# Load the data
mnist = input_data.read_data_sets("/tmp/tensorflow/mnist/input_data", one_hot=True)

# Build a single-layer neural network
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b

# Define the loss function and the optimizer
y_ = tf.placeholder(tf.float32, [None, 10])

# The raw formulation of cross-entropy,
#
#   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),
#                                 reduction_indices=[1]))
#
# can be numerically unstable.
#
# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw
# outputs of 'y', and then average across the batch.
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# Note the learning_rate
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
# Train the model; the range sets the number of iterations
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    if i % 100 == 0:
        print("cross_entropy error:", sess.run(cross_entropy, feed_dict={x: batch_xs, y_: batch_ys}))

# Test the trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("test accuracy: ", sess.run(accuracy, feed_dict={x: mnist.test.images,
                                                       y_: mnist.test.labels}))
```

`part2`:

```python
# part2: test on a chosen image
# index of the image to test
p = 0

s = sess.run(y, feed_dict={x: mnist.test.images[p].reshape(1, 784)})
print("Prediction : ", sess.run(tf.argmax(s, 1)))

# Display the image
plt.imshow(mnist.test.images[p].reshape(28, 28), cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
```

----------

# Experiment 4. Getting Started with Deep Learning in Python: Face Recognition #

## Experiment Goals ##

- Understand the basic principles of face recognition

- Become familiar, through the experiment, with the four stages of the face-recognition pipeline

## Equipment and Preparation ##

### Equipment ###

- Hardware: one PC

- Software: Ubuntu, Docker, the Docker container image for the openface project and its dependencies

### Preparation ###

- Read the lecture slides carefully and understand the **four basic steps** of face recognition.

- Consult the Docker [user guide](http://www.docker.org.cn/book/docker/what-is-docker-16.html)

- Consult the relevant machine-learning and face-recognition theory and algorithms: [HOG](http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf), [affine transformation](https://en.wikipedia.org/wiki/Affine_transformation), [128-dimensional facial-feature embedding](https://www.cv-foundation.org/openaccess/content_cvpr_2015/app/1A_089.pdf); a short sketch of how such embeddings are compared follows this list.

- Consult the openface background material: [cmusatyalab](https://cmusatyalab.github.io/openface/), [openface](https://github.com/cmusatyalab/openface)
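
To make the 128-dimensional embedding idea concrete before starting: two face photos are judged to belong to the same person when their embedding vectors lie close together. The numpy sketch below is an illustration only; the vectors are random stand-ins for real embeddings such as the rows of the `reps.csv` file generated later in this experiment:

```python
import numpy as np

np.random.seed(0)
emb_a = np.random.normal(size=128)          # stand-in for one face's embedding
emb_a /= np.linalg.norm(emb_a)              # FaceNet-style embeddings are unit-length
emb_b = emb_a + 0.05 * np.random.normal(size=128)   # a slightly perturbed "same" face
emb_b /= np.linalg.norm(emb_b)

# The smaller the Euclidean distance, the more likely the two faces match.
print("distance:", np.linalg.norm(emb_a - emb_b))
```
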
## Experiment Content and Steps ##

### Environment and Data Preparation ###

#### Linux Basics ####

1. Every command in these instructions can be **copied** (`Copy`) and **pasted** (`Paste`) directly into the experiment machine's `Terminal` window; the **shortcut** to open a Terminal window is `Ctrl`+`Alt`+`T`.

1. When **copying** (`Copy`) a command, select with the mouse from the **first non-space character** through the **last non-space character** of the command in this manual, taking care not to drop any **slash** `/` or **space** `Space`.

1. To **copy** (`Copy`) inside the `Terminal` window, use the **right-click menu -> Copy** or the **shortcut** `Ctrl`+`Shift`+`C`.

1. To **paste** (`Paste`) inside the `Terminal` window, use the **right-click menu -> Paste** or the **shortcut** `Ctrl`+`Shift`+`V`.

1. In the `Terminal` window you can use the **up** `∧` / **down** `∨` **arrow keys** to recall previously entered commands.

1. When typing a command or path in the `Terminal` window, the **shortcut** `Tab` auto-completes it.

#### About Docker ####

>Docker is an open-source engine that makes it easy to create a lightweight, portable, self-sufficient container for any application. A container that a developer builds and tests on a laptop can be deployed at scale in production, on VMs, [bare metal](http://www.whatis.com.cn/word_5275.htm), OpenStack clusters, and other infrastructure platforms.

1. This experiment uses the Docker container image `bamos/openface`, provided by the openface open-source project, as the environment for the face-recognition experiment.

1. **Note** [`!!!IMPORTANT!!!`]:

    The **working directory** for all experiment code run inside the Docker container from the `Terminal` window is `/root/openface`. Replace **every** occurrence of `your_test_image_fullpath.jpg` in the commands with your own **full image path**, e.g. `/home/bupt/my_pic.jpg`

#### Running the Docker Experiment Environment ####

1. Open a `Terminal` window with the shortcut `Ctrl`+`Alt`+`T`

1. Run the following **`2`** commands in turn in the `Terminal` window to enter the openface environment inside the Docker container:

```bash
sudo xhost +local:root
```

```bash
sudo docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw -v /home/$USER:/home/$USER:rw -t -i openface/allset /bin/bash
```

1. The first command will prompt for the administrator password, which is `123456`; the `Terminal` window **does not echo the password** as you type. After checking your input, press `Enter`. If the following message appears, the first command succeeded and you can enter the second one:

![](https://i.imgur.com/ip3WtFF.png)

1. Once the **`2`** commands above have put you inside the Docker container, run the following command to change to the **openface working directory** `/root/openface` [**`!!!IMPORTANT!!!`**]:

```bash
cd /root/openface
```

1. Run the following **`4`** commands to remove the sample files (ignore any "file not found" messages):

```bash
rm /root/openface/aligned_face_0.jpg
```

```bash
rm -r /home/bupt/training-images
```

```bash
rm -r /home/bupt/aligned-images
```

```bash
rm -r /home/bupt/generated-embeddings
```

#### Building the Face Sample Library ####

1. Run the following command (**or** left-click the sidebar icon ![](https://i.imgur.com/G2jr5vR.png) to enter the bupt user's home directory `/home/bupt` and use the right-click menu's `New Folder` option) to create the `training-images` folder (ignore a "directory already exists" message):

```bash
mkdir -vm 777 /home/bupt/training-images
```

1. Run commands like the following (**or** double-click the folder icon ![](https://i.imgur.com/r7fF9KI.png) to enter the `/home/bupt/training-images/` directory and use the right-click menu's `New Folder` option) to create a sample-library folder for each person (**at least `2`** different people). Rename `person1`, `person2`, `person3`, etc. in the commands to names of your own choosing before running them, e.g. your own name, a friend's name, or a favorite celebrity's name:

```bash
mkdir -vm 777 /home/bupt/training-images/person1
```

```bash
mkdir -vm 777 /home/bupt/training-images/person2
```

```bash
mkdir -vm 777 /home/bupt/training-images/person3
```

![](images/4-4.png)

1. Copy the face photos to be used as the training set into the corresponding directories:

    Download them from the network, or copy them from a USB drive or other removable storage, into the matching `person1`, `person2`, `person3` directories under the `training-images` folder in the bupt home directory `/home/bupt` (by now the folders should **already** be renamed to your chosen names; every folder **must** contain image files so that none is **empty**, and any unused folder created by mistake **must be deleted** (`Delete`)).

![](images/4-5.png)

1. **Notes** [`!!!IMPORTANT!!!`]:

    - Copy each person's photo samples into **that person's** sample-library folder, and make sure each photo contains **exactly one face**, clearly visible (**eyebrows/eyes/nose/mouth/face outline** complete; preferably **without** glasses).

    - **Make sure** every face-library folder you created contains photo samples and is **not empty**.

    - You **must** create **at least `2`** face-library folders; `3` or more are recommended.

    - At least **`10`** photo samples per person are recommended, ideally covering different angles and profiles, but the **eyebrows/eyes/nose/mouth/face outline** must remain complete.

    - You do **not** need to supply align-cropped photo samples; ordinary everyday photos are fine. openface aligns and crops the samples itself when the commands are run.

----------

### Finding Faces in Photos ###

#### Run `step-1_find-faces.py` to locate faces ####

1. In the following command, replace `your_test_image_fullpath.jpg` with the **full path including the filename** of your own test image, e.g. `/home/bupt/my_pic.jpg`, then run it (a sketch of what this step does internally follows below):

```bash
python /root/openface/step-1_find-faces.py your_test_image_fullpath.jpg
```

1. The command displays a result similar to this:

![](images/4-6.png)

1. Press **`Enter`** in the Terminal window to continue.
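
For reference, face detection in this step is typically done with dlib's HOG-based frontal face detector (the HOG paper is linked under Preparation). The snippet below is a minimal sketch of that idea, not the actual contents of `step-1_find-faces.py`:

```python
import sys

import dlib
from skimage import io

# The HOG-based frontal face detector that ships with dlib
detector = dlib.get_frontal_face_detector()

image = io.imread(sys.argv[1])   # image path passed on the command line
faces = detector(image, 1)       # 1 = upsample once, to catch smaller faces

for i, rect in enumerate(faces):
    print("Face {}: left={}, top={}, right={}, bottom={}".format(
        i, rect.left(), rect.top(), rect.right(), rect.bottom()))
```
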
----------

### Extracting Facial Landmarks and Applying an Affine Transformation ###

#### Run `step-2a_finding-face-landmarks.py` to extract facial landmarks ####

1. In the following command, replace `your_test_image_fullpath.jpg` with the **full image path including the filename** of your own test image, e.g. `/home/bupt/my_pic.jpg`, then run it:

```bash
python /root/openface/step-2a_finding-face-landmarks.py your_test_image_fullpath.jpg
```

1. The command displays a result similar to this:

![](images/4-7.png)

1. Press **`Enter`** in the Terminal window to continue.

#### Run `step-2b_projecting-faces.py` to produce the affine-transformed photo ####

1. In the following command, replace `your_test_image_fullpath.jpg` with the **full image path including the filename** of your own test image, e.g. `/home/bupt/my_pic.jpg`, then run it:

```bash
python /root/openface/step-2b_projecting-faces.py your_test_image_fullpath.jpg
```

1. The command writes the corresponding cropped image file `aligned_face_0.jpg` into the working directory `/root/openface`.

1. You can copy the cropped image file `aligned_face_0.jpg` to the bupt home directory `/home/bupt` with the following command:

```bash
cp /root/openface/aligned_face_0.jpg /home/bupt/
```

Then **double-click** ![](images/4-8.png) in the home directory to view it:

![](images/4-9.png)

----------

### Generating the Facial-Feature Encoding File ###

#### Run `main.lua` to extract a feature encoding from the aligned face image ####

1. Run the following **`3`** commands in turn:

```bash
mkdir -p /home/bupt/my_aligned_face/my_face
```

```bash
cp /root/openface/aligned_face_0.jpg /home/bupt/my_aligned_face/my_face/
```

```bash
/root/openface/batch-represent/main.lua -outDir /home/bupt/my_reps/ -data /home/bupt/my_aligned_face/
```

1. Afterwards you can find the `128`-dimensional facial-feature encoding file `reps.csv` in the `/home/bupt/my_reps` directory:

![](https://i.imgur.com/RsTyaaW.png)

**Double-click** ![](https://i.imgur.com/ozCFHGi.png) to view its contents:

![](https://i.imgur.com/rmYCERD.png)

----------

### Running the Complete Face-Recognition Experiment ###

#### Run `align-dlib.py` to apply the affine transformation ####

1. Run the following command:

```bash
/root/openface/util/align-dlib.py /home/bupt/training-images/ align outerEyesAndNose /home/bupt/aligned-images/ --size 96
```

1. Afterwards you can find the affine-transformed image files in the `/home/bupt/aligned-images` directory:

![](images/4-10.png)

![](images/4-11.png)

#### Run `main.lua` to generate the `128`-dimensional facial feature vector files ####

1. Run the following command:

```bash
/root/openface/batch-represent/main.lua -outDir /home/bupt/generated-embeddings/ -data /home/bupt/aligned-images/
```

1. Afterwards you can find the label file `labels.csv` and the feature vector file `reps.csv` in the `/home/bupt/generated-embeddings` directory:

![](https://i.imgur.com/kHfR4tD.png)

#### Run `classifier.py train` to train on the sample set and generate a classifier ####

1. Run the following command (a sketch of what it does internally follows below):

```bash
/root/openface/demos/classifier.py train /home/bupt/generated-embeddings/
```

1. Afterwards you can find the classifier file `classifier.pkl` in the `/home/bupt/generated-embeddings` directory:

![](https://i.imgur.com/bh7Y5ev.png)
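
Conceptually, `classifier.py train` fits a standard scikit-learn classifier on the 128-dimensional embeddings from `reps.csv`. The sketch below is a simplified reading of what the openface demo does; the CSV layout (one embedding per row, with `labels.csv` pairing each row with its source image path) and the linear-SVM default are assumptions based on openface's documentation:

```python
import csv

import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC

# One 128-dim embedding per row
embeddings = np.loadtxt("/home/bupt/generated-embeddings/reps.csv", delimiter=",")

# labels.csv rows look like "1,<path>/person1/1.png"; use the parent
# folder name (the person) of each image path as the class label
with open("/home/bupt/generated-embeddings/labels.csv") as f:
    labels = [row[1].split("/")[-2] for row in csv.reader(f)]

le = LabelEncoder().fit(labels)
clf = SVC(C=1, kernel="linear", probability=True)   # openface's default choice
clf.fit(embeddings, le.transform(labels))
print("classes:", list(le.classes_))
```
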
#### Run `classifier.py infer` to recognize the test photo ####

1. In the following command, replace `your_test_image_fullpath.jpg` with the **full image path including the filename** of your own test image, e.g. `/home/bupt/my_pic.jpg`, then run it:

```bash
/root/openface/demos/classifier.py infer /home/bupt/generated-embeddings/classifier.pkl your_test_image_fullpath.jpg
```

1. The command prints a recognition result similar to the following in the `Terminal` window (a conceptual sketch of this inference step closes the manual below):

![](images/4-12.png)

----------
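
As a closing aside, inference conceptually reverses the steps above: load the pickled pair of label encoder and classifier, embed the aligned test face, and score it. The sketch below is an assumption-laden illustration (the pickle layout follows openface's `demos/classifier.py`, and the `reps.csv` used here is the single-face encoding produced earlier in this experiment), not a replacement for the real script:

```python
import pickle

import numpy as np

# classifier.pkl is assumed to store a (LabelEncoder, classifier) pair,
# as in openface's demos/classifier.py
with open("/home/bupt/generated-embeddings/classifier.pkl", "rb") as f:
    le, clf = pickle.load(f)

# 128-dim embedding of the aligned test face (generated earlier via main.lua)
rep = np.loadtxt("/home/bupt/my_reps/reps.csv", delimiter=",").reshape(1, -1)

probs = clf.predict_proba(rep)[0]
best = int(np.argmax(probs))
print("Predicted {} with confidence {:.2f}".format(
    le.inverse_transform([best])[0], probs[best]))
```
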