├── README.md
└── examples
    ├── PyTorch_MNIST.py
    ├── PyTorch_Tensors.py
    ├── Python1.0_Install.md
    ├── pytorch0.3&0.4
    │   ├── PyTorch4_CIFAR10.py
    │   ├── PyTorch_CUDA_CUDNN_Test.py
    │   ├── PyTorch_MNIST.py
    │   └── PyTorch_Tensors.py
    └── pytorch_cudn_cudnn_test.py

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# PyTorch-From-Zero-To-One

PyTorch from zero to one: getting-started guides, online tutorials, video courses, book recommendations, and other resources.

> Note 1: ♥ marks the recommendation level; the more hearts, the stronger the recommendation.

> Note 2: For TensorFlow from zero to one, see [TensorFlow-From-Zero-To-One](https://github.com/amusi/TensorFlow-From-Zero-To-One).

## Notes

Installation tutorials

- [Ubuntu](examples/Python1.0_Install.md)
- [Windows](https://blog.csdn.net/amusi1994/article/details/80077667)

## PyTorch Learning Resources

### Getting Started

[Awesome-pytorch-list](https://github.com/bharathgs/Awesome-pytorch-list) [(Chinese version)](https://github.com/xavier-zy/Awesome-pytorch-list-CNVersion): a comprehensive list of PyTorch-related content on GitHub, such as different models, implementations, helper libraries, tutorials, etc.

[Zhihu: How should a beginner get started with PyTorch?](https://www.zhihu.com/question/55720139)

[PyTorch: 60-minute blitz](http://pytorch.org/tutorials/)

[the-incredible-pytorch](https://github.com/ritchieng/the-incredible-pytorch): The Incredible PyTorch, a curated list of tutorials, papers, projects, communities, and more relating to PyTorch.

[PyTorch internals](http://blog.ezyang.com/2019/05/pytorch-internals/): a long-form essay version of a talk about PyTorch internals.

### Official Site & Community

**International**

- **(♥♥♥♥♥)** [PyTorch website](http://pytorch.org/)
- **(♥♥♥♥♥)** [GitHub: PyTorch](https://github.com/pytorch/pytorch)
- **(♥♥♥♥)** [Twitter: PyTorch](https://twitter.com/pytorch): officially maintained; the fastest way to follow updates
- **(♥♥♥♥)** [PyTorch forums](https://discuss.pytorch.org/)

**Chinese**

- [PyTorch docs & tutorials in Chinese](https://pytorch.apachecn.org/#/)

- [Zhihu topic: PyTorch](https://www.zhihu.com/topic/20075993/hot)

### Online Tutorials

**International**

- **(♥♥♥♥♥)** [PyTorch docs](http://pytorch.org/docs/)
- **(♥♥♥♥♥)** [pytorch-tutorial](https://github.com/yunjey/pytorch-tutorial): PyTorch tutorial for deep learning researchers
- **(♥♥♥♥♥)** [practicalAI](https://github.com/GokuMohandas/practicalAI/): implement basic ML algorithms and deep neural networks with [PyTorch](https://pytorch.org/)
- **(♥♥♥♥♥)** [Dive Into Deep Learning - PyTorch](https://github.com/dsgiitr/d2l-pytorch): the book's code ported from MXNet to PyTorch
- **(♥♥♥♥)** [practical-pytorch](https://github.com/spro/practical-pytorch): PyTorch tutorials demonstrating modern techniques with readable code
- **(♥♥♥♥)** [Deep Learning with PyTorch](http://deeplizard.com/learn/playlist/PLZbbT5o_s2xrfNyHZsM6ufI0iZENK9xgG): a series all about neural network programming and PyTorch
- **(♥♥♥)** [EffectivePyTorch](https://github.com/vahidk/EffectivePyTorch): PyTorch tutorials and best practices
- **(♥♥♥)** [Minicourse in Deep Learning with PyTorch](https://github.com/Atcold/pytorch-Deep-Learning-Minicourse): mini course in deep learning with PyTorch for AIMS
- **(♥♥♥)** [pytorch-cpp](https://github.com/prabhuomkar/pytorch-cpp): C++ implementation of the PyTorch tutorial for deep learning researchers
- **(♥♥♥)** [pytorch-examples](https://github.com/jcjohnson/pytorch-examples): simple examples to introduce PyTorch
- **(♥♥♥)** [PyTorchZeroToAll](https://github.com/hunkim/PyTorchZeroToAll): simple PyTorch tutorials, zero to all

**Chinese**

- **(♥♥♥♥♥)** [pytorch-book](https://github.com/chenyuntc/pytorch-book): PyTorch tutorials and fun projects including neural talk, neural style, poem writing, and anime generation
- **(♥♥♥♥♥)** [Morvan (莫凡): PyTorch tutorials](https://morvanzhou.github.io/tutorials/machine-learning/torch/): build your neural network easy and fast
- **(♥♥♥♥♥)** [pytorch-handbook](https://github.com/zergtant/pytorch-handbook): an open-source PyTorch handbook
- **(♥♥♥♥)** [Dive-into-DL-PyTorch](https://github.com/ShusenTang/Dive-into-DL-PyTorch): ports the MXNet implementations in the book *Dive into Deep Learning* to PyTorch
- **(♥♥♥♥)** [PyTorch_Tutorial](https://github.com/tensor-yu/PyTorch_Tutorial): companion code for the book *PyTorch模型训练实用教程* (a practical tutorial on training PyTorch models), with [study notes](https://zhuanlan.zhihu.com/c_1056853059086430208)

### Video Tutorials

**International**

- **(♥♥♥♥)** [Deep Learning with PyTorch](https://www.youtube.com/playlist?list=PLyMom0n-MBroupZiLfVSZqK5asX8KfoHL)

- **(♥♥♥♥)** [PyTorch - Deep Learning with Python](https://www.youtube.com/playlist?list=PLQVvvaa0QuDdeMyHEYc0gxFpYwHY2Qfdh)

- **(♥♥♥)** [Neural Network Programming - Deep Learning with PyTorch](https://www.youtube.com/watch?v=v5cngxo4mIg&list=PLZbbT5o_s2xrfNyHZsM6ufI0iZENK9xgG)
- **(♥♥♥)** [Intro to Deep Learning with PyTorch](https://cn.udacity.com/course/deep-learning-pytorch--ud188): free Udacity course

**Chinese**

- **(♥♥♥♥♥)** [Morvan (莫凡): PyTorch tutorials](https://morvanzhou.github.io/tutorials/machine-learning/torch/)

### Books

**International**

- **(♥♥♥♥♥)** [Deep Learning with PyTorch](https://www.manning.com/books/deep-learning-with-pytorch): endorsed by Yann LeCun; the official, authoritative PyTorch book, with [code on GitHub](https://github.com/deep-learning-with-pytorch/dlwpt-code)

- **(♥♥♥)** [Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python](https://github.com/rasbt/deep-learning-book)

**Chinese**

- ~~**(♥♥♥♥♥)** [《深度学习框架PyTorch:入门与实践》](https://book.douban.com/subject/27624483/)~~ [GitHub](https://github.com/chenyuntc/pytorch-book)

### Tips & Tricks

- **(♥♥♥♥♥)** [PyTorchTricks](https://github.com/lartpang/PyTorchTricks)

### Projects

- **(♥♥♥♥♥)** [pytorch-examples](https://github.com/pytorch/examples): official examples
- **(♥♥♥♥♥)** [pretrained-models.pytorch](https://github.com/Cadene/pretrained-models.pytorch): pretrained ConvNets for PyTorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResnetV2, Xception, DPN, etc.
- **(♥♥♥♥)** [Detectron2](https://github.com/facebookresearch/detectron2): FAIR's next-generation research platform for object detection and segmentation
- **(♥♥♥♥)** [mmdetection](https://github.com/open-mmlab/mmdetection): Open MMLab detection toolbox with PyTorch 1.0
- **(♥♥♥♥)** [pytorch-semseg](https://github.com/meetshah1995/pytorch-semseg): semantic segmentation architectures implemented in PyTorch
- **(♥♥♥♥)** [PyTorch Image Models](https://github.com/rwightman/pytorch-image-models): (SE)ResNet/ResNeXT, DPN, EfficientNet, MixNet, MobileNet-V3/V2/V1, MNASNet, Single-Path NAS, FBNet, and more
- [PyTorch CNN Finetune](https://github.com/creafz/pytorch-cnn-finetune): fine-tune pretrained convolutional neural networks with PyTorch
- [PyTorch-Deep-Learning-Template](https://github.com/FrancescoSaverioZuppichini/PyTorch-Deep-Learning-Template)
- [semantic-segmentation-pytorch](https://github.com/CSAILVision/semantic-segmentation-pytorch)
- [pytorch-pretrained-BERT](https://github.com/huggingface/pytorch-pretrained-BERT)
- [torchcv](https://github.com/youansheng/torchcv): a PyTorch-based framework for deep learning in computer vision
- [pytorch-generative-adversarial-networks](https://github.com/devnag/pytorch-generative-adversarial-networks)

### Ecosystem Tools

- [Ecosystem Tools](https://pytorch.org/ecosystem/): the officially curated list of PyTorch ecosystem tools, with dozens of packages; highly recommended!
- [PyTorch Lightning](https://github.com/williamFalcon/pytorch-lightning): Lightning is a very lightweight wrapper on PyTorch
- [Hydra](https://hydra.cc/): a framework for elegantly configuring complex applications
- [Torchmeta](https://github.com/tristandeleu/pytorch-meta): a PyTorch library for few-shot learning and meta-learning
- [Torch Optimizer](https://github.com/jettify/pytorch-optimizer): a collection of optimizers for PyTorch
- [Pytorch-Toolbox](https://github.com/PistonY/torch-toolbox)
- [Eisen](http://eisen.ai/): a Python package for solid deep learning
- [Dassl](https://github.com/KaiyangZhou/Dassl.pytorch): a PyTorch toolbox for domain adaptation and semi-supervised learning
- [PyRetri](https://github.com/PyRetri/PyRetri): a PyTorch-based toolbox for unsupervised image retrieval
- [Kornia](https://github.com/kornia/kornia): open-source differentiable computer vision library for PyTorch
- [FastReID](https://github.com/JDAI-CV/fast-reid): a ReID toolbox for both academia and industry
- [KAIR](https://github.com/cszn/KAIR): a PyTorch toolbox for image restoration (training and testing supported)
- [FAIRScale](https://github.com/facebookresearch/fairscale): PyTorch tools for high-performance, large-scale training
- [PyTorch3D](https://github.com/facebookresearch/pytorch3d): PyTorch tools for 3D computer vision

### PyTorch Tips

- [What are some tricks for saving GPU memory in PyTorch?](https://www.zhihu.com/question/274635237) (two common ones are sketched below)
- [What are PyTorch's pitfalls/bugs?](https://www.zhihu.com/question/67209417)

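A minimal sketch of two of the most common memory-saving idioms from the threads above (it assumes a CUDA device; the layer and tensor shapes are placeholders, not from any linked answer):

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024).cuda()      # placeholder model (assumes a GPU)
x = torch.randn(64, 1024, device="cuda")  # placeholder batch

# 1) Inference without autograd: no graph is recorded, so intermediate
#    activations are freed immediately instead of being kept for backward.
with torch.no_grad():
    y = model(x)

# 2) Drop large tensors you no longer need, then return PyTorch's cached
#    blocks to the CUDA driver (useful between training/evaluation phases).
del y
torch.cuda.empty_cache()
```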
--------------------------------------------------------------------------------
/examples/PyTorch_MNIST.py:
--------------------------------------------------------------------------------
# Summary: playing with MNIST in PyTorch
# Author: Amusi
# Date: 2018-12-20
# github: https://github.com/amusi/PyTorch-From-Zero-To-One
# Reference: https://blog.csdn.net/victoriaw/article/details/72354307

from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable

# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                    help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
                    help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
                    help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
                    help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
                    help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='how many batches to wait before logging training status')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()

torch.manual_seed(args.seed)  # seed the CPU RNG so results are reproducible
if args.cuda:
    torch.cuda.manual_seed(args.seed)  # seed the current GPU; with multiple GPUs, use torch.cuda.manual_seed_all() to seed them all


kwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {}
"""Load the data. DataLoader combines a dataset and a sampler and provides a single- or multi-process iterator over the dataset.
Args:
    dataset (Dataset): dataset from which to load the data
    batch_size (int, optional): how many samples per batch to load
    shuffle (bool, optional): True reshuffles the data at every epoch
    sampler (Sampler, optional): strategy for drawing samples from the dataset
    num_workers (int, optional): how many subprocesses to use for data loading; the default 0 loads in the main process
    collate_fn (callable, optional)
    pin_memory (bool, optional)
    drop_last (bool, optional): True drops the last incomplete batch; False keeps it
"""
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=args.test_batch_size, shuffle=True, **kwargs)  # use the dedicated test batch size


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 1 input channel, 10 output channels
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)  # 10 input channels, 20 output channels
        self.conv2_drop = nn.Dropout2d()               # randomly zeroes entire input channels
        self.fc1 = nn.Linear(320, 50)                  # input size 320, output size 50
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))                   # conv -> max_pool -> relu
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))  # conv -> dropout -> max_pool -> relu
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))                    # fc -> relu
        x = F.dropout(x, training=self.training)   # dropout
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)  # dim=1: log-softmax over the class dimension

model = Net()
if args.cuda:
    model.cuda()  # move all model parameters to the GPU

optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

def train(epoch):
    model.train()  # switch the module to training mode; affects Dropout and BatchNorm
    for batch_idx, (data, target) in enumerate(train_loader):
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)  # Variable wraps a Tensor and records its gradient plus a reference to the grad_fn that produced it; user-created tensors have grad_fn=None and are called leaf Variables (since 0.4, Variable is a no-op kept for compatibility)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)  # negative log-likelihood loss
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

def test(epoch):
    model.eval()  # switch the module to evaluation mode; only affects Dropout and BatchNorm
    test_loss = 0
    correct = 0
    with torch.no_grad():  # replaces the removed volatile=True flag; disables autograd during evaluation
        for data, target in test_loader:
            if args.cuda:
                data, target = data.cuda(), target.cuda()
            output = model(data)
            test_loss += F.nll_loss(output, target).item()
            pred = output.data.max(1)[1]  # get the index of the max log-probability
            correct += pred.eq(target.data).cpu().sum().item()

    test_loss /= len(test_loader)  # the loss function already averages over batch size
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


if __name__ == '__main__':
    for epoch in range(1, args.epochs + 1):
        train(epoch)
        test(epoch)
--------------------------------------------------------------------------------
/examples/PyTorch_Tensors.py:
--------------------------------------------------------------------------------
# Summary: PyTorch tensor basics
# Author: Amusi
# Date: 2018-12-20
# github: https://github.com/amusi/PyTorch-From-Zero-To-One
# Reference: http://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-tensors

import torch

dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor  # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in).type(dtype)
y = torch.randn(N, D_out).type(dtype)

# Randomly initialize weights
w1 = torch.randn(D_in, H).type(dtype)
w2 = torch.randn(H, D_out).type(dtype)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())  # .item() reads the Python number out of the 0-dim loss tensor

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
--------------------------------------------------------------------------------
/examples/Python1.0_Install.md:
--------------------------------------------------------------------------------
> Summary: PyTorch 1.0 installation tutorial
>
> Author: Amusi
>
> Date: 2018-12-20
>
> github: https://github.com/amusi/PyTorch-From-Zero-To-One
>
> Zhihu: https://www.zhihu.com/people/amusi1994
>
> WeChat public account: CVer

This article walks through installing the official PyTorch 1.0 release on Ubuntu. The Windows installation is similar; you can also refer to this tutorial: https://blog.csdn.net/amusi1994/article/details/80077667

# Environment

- OS: Ubuntu 16.04
- CUDA: 8.0
- cuDNN: 6.0
- Python (conda): 3.6.4

# Installation

Official site: https://pytorch.org/

Check your Python environment

![screenshot](https://img-blog.csdnimg.cn/20181220015720334.png)

Click the options matching your current system

![screenshot](https://img-blog.csdnimg.cn/20181220015731149.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L2FtdXNpMTk5NA==,size_16,color_FFFFFF,t_70)

Enter the matching PyTorch 1.0 installation command in a terminal

```
conda install pytorch torchvision cuda80 -c pytorch
```

Press Enter to install. A prompt like the one below appears; once the PyTorch 1.0 packages are found, type y to confirm and continue.

> Note: the packages may not be found in some environments, e.g. on Windows. In that case, add additional package sources (channels), such as the Tsinghua mirror; see the example below.
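For reference, these commands illustrate adding the TUNA (Tsinghua) mirror; they are a sketch rather than part of the original walkthrough, and channel URLs change over time, so verify them against the mirror's current help page:

```
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/
conda config --set show_channel_urls yes
conda install pytorch torchvision cuda80   # note: no "-c pytorch", so the mirror channel is used
```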
![screenshot](https://img-blog.csdnimg.cn/20181220015817515.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L2FtdXNpMTk5NA==,size_16,color_FFFFFF,t_70)

Now wait a while (depending on your network speed): the pytorch 1.0.0 package is about 437.5 MB.

![screenshot](https://img-blog.csdnimg.cn/20181220015826530.png)

Once the installation succeeds, conda prints done.

![screenshot](https://img-blog.csdnimg.cn/20181220015837555.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L2FtdXNpMTk5NA==,size_16,color_FFFFFF,t_70)

Import PyTorch and print its version number to verify the installation.

```
python
import torch
print(torch.__version__)
```

![screenshot](https://img-blog.csdnimg.cn/20181220015853962.png)

# Tests

## Test 1: check CUDA and cuDNN

Create and open a new script pytorch_cudn_cudnn_test.py

```
touch pytorch_cudn_cudnn_test.py
gedit pytorch_cudn_cudnn_test.py
```

Write the test code

```python
# Summary: check whether the current PyTorch install and device support CUDA and cuDNN
# Author: Amusi
# Date: 2018-12-20
# github: https://github.com/amusi/PyTorch-From-Zero-To-One

import torch

if __name__ == '__main__':
    print("Support CUDA ?: ", torch.cuda.is_available())
    x = torch.Tensor([1.0])
    xx = x.cuda()
    print(xx)

    y = torch.randn(2, 3)
    yy = y.cuda()
    print(yy)

    zz = xx + yy
    print(zz)

    # cuDNN TEST
    from torch.backends import cudnn
    print("Support cudnn ?: ", cudnn.is_acceptable(xx))
```

Run the test code

```
python pytorch_cudn_cudnn_test.py
```

The output is as follows:

![screenshot](https://img-blog.csdnimg.cn/20181220015916862.png)


## Test 2: Tensors

Create and open a new script pytorch_tensors.py

```
touch pytorch_tensors.py
gedit pytorch_tensors.py
```

Write the test code:

```python
# Summary: PyTorch tensor basics
# Author: Amusi
# Date: 2018-12-20
# github: https://github.com/amusi/PyTorch-From-Zero-To-One
# Reference: http://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-tensors

import torch

dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor  # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in).type(dtype)
y = torch.randn(N, D_out).type(dtype)

# Randomly initialize weights
w1 = torch.randn(D_in, H).type(dtype)
w2 = torch.randn(H, D_out).type(dtype)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())  # .item() reads the Python number out of the 0-dim loss tensor

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
```

Run the test code

```
python pytorch_tensors.py
```

The output is as follows:

![screenshot](https://img-blog.csdnimg.cn/20181220015951900.png)

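The script above computes every gradient by hand. For comparison, here is a sketch of the same two-layer network letting autograd do the backward pass; it is an addition in the spirit of the official "PyTorch with examples" tutorial, not part of the original test:

```python
import torch

N, D_in, H, D_out = 64, 1000, 100, 10

x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# requires_grad=True asks autograd to track operations on these weights
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    y_pred = x.mm(w1).clamp(min=0).mm(w2)  # forward pass
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())
    loss.backward()                        # autograd fills w1.grad and w2.grad

    with torch.no_grad():                  # update weights outside the graph
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()                    # reset gradients for the next step
        w2.grad.zero_()
```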

## Test 3: MNIST

Create and open a new script pytorch_mnist.py

```
touch pytorch_mnist.py
gedit pytorch_mnist.py
```

Write the test code:

```python
# Summary: playing with MNIST in PyTorch
# Author: Amusi
# Date: 2018-12-20
# github: https://github.com/amusi/PyTorch-From-Zero-To-One
# Reference: https://blog.csdn.net/victoriaw/article/details/72354307

from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable

# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                    help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
                    help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
                    help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
                    help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
                    help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='how many batches to wait before logging training status')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()

torch.manual_seed(args.seed)  # seed the CPU RNG so results are reproducible
if args.cuda:
    torch.cuda.manual_seed(args.seed)  # seed the current GPU; with multiple GPUs, use torch.cuda.manual_seed_all() to seed them all


kwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {}
"""Load the data. DataLoader combines a dataset and a sampler and provides a single- or multi-process iterator over the dataset.
Args:
    dataset (Dataset): dataset from which to load the data
    batch_size (int, optional): how many samples per batch to load
    shuffle (bool, optional): True reshuffles the data at every epoch
    sampler (Sampler, optional): strategy for drawing samples from the dataset
    num_workers (int, optional): how many subprocesses to use for data loading; the default 0 loads in the main process
    collate_fn (callable, optional)
    pin_memory (bool, optional)
    drop_last (bool, optional): True drops the last incomplete batch; False keeps it
"""
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=args.test_batch_size, shuffle=True, **kwargs)  # use the dedicated test batch size


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 1 input channel, 10 output channels
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)  # 10 input channels, 20 output channels
        self.conv2_drop = nn.Dropout2d()               # randomly zeroes entire input channels
        self.fc1 = nn.Linear(320, 50)                  # input size 320, output size 50
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))                   # conv -> max_pool -> relu
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))  # conv -> dropout -> max_pool -> relu
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))                    # fc -> relu
        x = F.dropout(x, training=self.training)   # dropout
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)  # dim=1: log-softmax over the class dimension

model = Net()
if args.cuda:
    model.cuda()  # move all model parameters to the GPU

optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

def train(epoch):
    model.train()  # switch the module to training mode; affects Dropout and BatchNorm
    for batch_idx, (data, target) in enumerate(train_loader):
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)  # Variable wraps a Tensor and records its gradient plus a reference to the grad_fn that produced it; user-created tensors have grad_fn=None and are called leaf Variables (since 0.4, Variable is a no-op kept for compatibility)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)  # negative log-likelihood loss
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

def test(epoch):
    model.eval()  # switch the module to evaluation mode; only affects Dropout and BatchNorm
    test_loss = 0
    correct = 0
    with torch.no_grad():  # replaces the removed volatile=True flag; disables autograd during evaluation
        for data, target in test_loader:
            if args.cuda:
                data, target = data.cuda(), target.cuda()
            output = model(data)
            test_loss += F.nll_loss(output, target).item()
            pred = output.data.max(1)[1]  # get the index of the max log-probability
            correct += pred.eq(target.data).cpu().sum().item()

    test_loss /= len(test_loader)  # the loss function already averages over batch size
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


if __name__ == '__main__':
    for epoch in range(1, args.epochs + 1):
        train(epoch)
        test(epoch)
```

Run the test code

```
python pytorch_mnist.py
```

The output is as follows:

![screenshot](https://img-blog.csdnimg.cn/20181220020013914.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L2FtdXNpMTk5NA==,size_16,color_FFFFFF,t_70)

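The test stops after training and evaluation. If you also want to keep the trained weights, the usual pattern is to save the model's `state_dict`; this is a minimal sketch added here for convenience (the file name `mnist_cnn.pt` is arbitrary):

```python
import torch

# after training: persist only the parameters, not the whole pickled module
torch.save(model.state_dict(), 'mnist_cnn.pt')

# later / elsewhere: rebuild the architecture, then load the parameters
model = Net()
model.load_state_dict(torch.load('mnist_cnn.pt'))
model.eval()  # switch to evaluation mode before running inference
```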

# References

- https://pytorch.org/

- https://github.com/amusi/PyTorch-From-Zero-To-One

- https://blog.csdn.net/amusi1994/article/details/80077667
--------------------------------------------------------------------------------
/examples/pytorch0.3&0.4/PyTorch4_CIFAR10.py:
--------------------------------------------------------------------------------
# Summary: PyTorch 0.4.0, on Windows
# Author: Amusi
# Date: 2018-03-31
# Reference: https://zhuanlan.zhihu.com/p/39667289

# Imports
# torch: the core PyTorch library, mainly used for computation
# torchvision: a commonly used companion library whose models, datasets and transforms
#              subpackages provide networks, pretrained models, datasets, format conversions, etc.
# torchvision: https://github.com/pytorch/vision/tree/master/torchvision

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms

# Passing a transform when building the dataset applies it to every sample.
# The official CIFAR-10 data is stored as numpy arrays: ToTensor converts them
# to torch tensors and scales the RGB values to [0, 1], and Normalize then
# shifts them to [-1, 1].
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# root is the data directory; download=True downloads the data first if it is missing.
# CIFAR-10 is about 200 MB, so be patient on a slow connection (the program is not stuck).
cifar_train = torchvision.datasets.CIFAR10(root='./data', train=True,
                                           download=True, transform=transform)
cifar_test = torchvision.datasets.CIFAR10(root='./data', train=False,
                                          transform=transform)


print(cifar_train)

print(cifar_test)

trainloader = torch.utils.data.DataLoader(cifar_train, batch_size=32, shuffle=True)
testloader = torch.utils.data.DataLoader(cifar_test, batch_size=32, shuffle=True)

class LeNet(nn.Module):
    # __init__ usually defines the operators the network needs: convolutions, fully connected layers, etc.
    def __init__(self):
        super(LeNet, self).__init__()
        # Conv2d arguments: input channels, output channels, kernel size
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # the previous layer outputs 16 channels with 5*5 feature maps, so the fc input is 16*5*5
        self.fc1 = nn.Linear(16*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        # there are 10 classes, so the last fully connected layer outputs 10 values
        self.fc3 = nn.Linear(84, 10)
        self.pool = nn.MaxPool2d(2, 2)
    # forward defines the forward computation; write it like ordinary Python arithmetic
    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.pool(x)
        x = F.relu(self.conv2(x))
        x = self.pool(x)
        # flatten the 2-D feature maps into a vector so the fully connected layers can process them
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# fall back to the CPU when no CUDA device is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = LeNet().to(device)

# optim provides a variety of optimization methods, including SGD
import torch.optim as optim

# CrossEntropyLoss is the loss function we need
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

print("Start Training...")
for epoch in range(30):
    # track the average loss over every 100 batches
    loss100 = 0.0
    # the dataloader finally comes in handy
    for i, data in enumerate(trainloader):
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)  # note: copy to the device
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        loss100 += loss.item()
        if i % 100 == 99:
            print('[Epoch %d, Batch %5d] loss: %.3f' %
                  (epoch + 1, i + 1, loss100 / 100))
            loss100 = 0.0

print("Done Training!")

# count correct predictions and the total number of test images
correct = 0
total = 0
# torch.no_grad() skips gradient bookkeeping in the forward pass, saving memory
with torch.no_grad():
    for data in testloader:
        images, labels = data
        images, labels = images.to(device), labels.to(device)
        # predict
        outputs = net(images)
        # the network outputs class scores; take the highest-scoring class as the prediction
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
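
# --- Added example (not in the original script): per-class accuracy ---
# The overall accuracy above hides how the network does on each class.
# The class names below follow the standard CIFAR-10 label order; everything
# else reuses objects (net, testloader, device) defined above.
classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')
class_correct = [0.0] * 10
class_total = [0.0] * 10
with torch.no_grad():
    for images, labels in testloader:
        images, labels = images.to(device), labels.to(device)
        _, predicted = torch.max(net(images), 1)
        for label, pred in zip(labels, predicted):
            class_correct[label.item()] += (pred == label).item()
            class_total[label.item()] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))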
--------------------------------------------------------------------------------
/examples/pytorch0.3&0.4/PyTorch_CUDA_CUDNN_Test.py:
--------------------------------------------------------------------------------
# Summary: check whether the current PyTorch install and device support CUDA and cuDNN
# Author: Amusi
# Date: 2018-04-01

import torch

if __name__ == '__main__':
    print("Support CUDA ?: ", torch.cuda.is_available())
    x = torch.Tensor([1.0])
    xx = x.cuda()
    print(xx)

    y = torch.randn(2, 3)
    yy = y.cuda()
    print(yy)

    zz = xx + yy
    print(zz)

    # cuDNN TEST
    from torch.backends import cudnn
    print("Support cudnn ?: ", cudnn.is_acceptable(xx))
--------------------------------------------------------------------------------
/examples/pytorch0.3&0.4/PyTorch_MNIST.py:
--------------------------------------------------------------------------------
# Summary: MNIST training example (PyTorch 0.3/0.4 era)
# Author: Amusi
# Date: 2018-03-31
# Reference: https://blog.csdn.net/victoriaw/article/details/72354307

from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable

# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                    help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
                    help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
                    help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
                    help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
                    help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='how many batches to wait before logging training status')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()

torch.manual_seed(args.seed)  # seed the CPU RNG so results are reproducible
if args.cuda:
    torch.cuda.manual_seed(args.seed)  # seed the current GPU; with multiple GPUs, use torch.cuda.manual_seed_all() to seed them all


kwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {}
"""Load the data. DataLoader combines a dataset and a sampler and provides a single- or multi-process iterator over the dataset.
Args:
    dataset (Dataset): dataset from which to load the data
    batch_size (int, optional): how many samples per batch to load
    shuffle (bool, optional): True reshuffles the data at every epoch
    sampler (Sampler, optional): strategy for drawing samples from the dataset
    num_workers (int, optional): how many subprocesses to use for data loading; the default 0 loads in the main process
    collate_fn (callable, optional)
    pin_memory (bool, optional)
    drop_last (bool, optional): True drops the last incomplete batch; False keeps it
"""
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=args.test_batch_size, shuffle=True, **kwargs)  # use the dedicated test batch size


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 1 input channel, 10 output channels
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)  # 10 input channels, 20 output channels
        self.conv2_drop = nn.Dropout2d()               # randomly zeroes entire input channels
        self.fc1 = nn.Linear(320, 50)                  # input size 320, output size 50
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))                   # conv -> max_pool -> relu
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))  # conv -> dropout -> max_pool -> relu
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))                    # fc -> relu
        x = F.dropout(x, training=self.training)   # dropout
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)  # dim=1: log-softmax over the class dimension

model = Net()
if args.cuda:
    model.cuda()  # move all model parameters to the GPU

optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

def train(epoch):
    model.train()  # switch the module to training mode; affects Dropout and BatchNorm
    for batch_idx, (data, target) in enumerate(train_loader):
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)  # Variable wraps a Tensor and records its gradient plus a reference to the grad_fn that produced it; user-created tensors have grad_fn=None and are called leaf Variables
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)  # negative log-likelihood loss
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.data[0]))  # loss.data[0] is the 0.3-era scalar read; use loss.item() on 0.4+

def test(epoch):
    model.eval()  # switch the module to evaluation mode; only affects Dropout and BatchNorm
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        if args.cuda:
            data, target = data.cuda(), target.cuda()
        data, target = Variable(data, volatile=True), Variable(target)  # volatile=True skips autograd bookkeeping (0.3-era; replaced by torch.no_grad() on 0.4+)
        output = model(data)
        test_loss += F.nll_loss(output, target).data[0]  # Variable.data
        pred = output.data.max(1)[1]  # get the index of the max log-probability
        correct += pred.eq(target.data).cpu().sum()

    test_loss /= len(test_loader)  # the loss function already averages over batch size
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


if __name__ == '__main__':
    for epoch in range(1, args.epochs + 1):
        train(epoch)
        test(epoch)
--------------------------------------------------------------------------------
/examples/pytorch0.3&0.4/PyTorch_Tensors.py:
--------------------------------------------------------------------------------
# Summary: Amusi's first PyTorch run, on Windows
# Author: Amusi
# Date: 2018-03-31
# Reference: http://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-tensors

import torch


dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor  # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in).type(dtype)
y = torch.randn(N, D_out).type(dtype)

# Randomly initialize weights
w1 = torch.randn(D_in, H).type(dtype)
w2 = torch.randn(H, D_out).type(dtype)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
--------------------------------------------------------------------------------
/examples/pytorch_cudn_cudnn_test.py:
--------------------------------------------------------------------------------
# Summary: check whether the current PyTorch install and device support CUDA and cuDNN
# Author: Amusi
# Date: 2018-12-20
# github: https://github.com/amusi/PyTorch-From-Zero-To-One

import torch

if __name__ == '__main__':
    print("Support CUDA ?: ", torch.cuda.is_available())
    x = torch.Tensor([1.0])
    xx = x.cuda()
    print(xx)

    y = torch.randn(2, 3)
    yy = y.cuda()
    print(yy)

    zz = xx + yy
    print(zz)

    # cuDNN TEST
    from torch.backends import cudnn
    print("Support cudnn ?: ", cudnn.is_acceptable(xx))

--------------------------------------------------------------------------------