├── .gitattributes
├── .gitignore
├── LICENSE
├── README.en.md
├── README.md
├── data
│   ├── __init__.py
│   ├── myModel_46.pth
│   ├── myModel_import2.py
│   └── net_graph.png
├── example_img
│   ├── c52d245900b9bae9565b6cf4d4781130.jpeg
│   ├── pic
│   │   ├── 1.png
│   │   ├── 10.png
│   │   ├── 11.png
│   │   ├── 12.png
│   │   ├── 13.png
│   │   ├── 14.png
│   │   ├── 15.png
│   │   ├── 16.png
│   │   ├── 17.png
│   │   ├── 18.png
│   │   ├── 19.png
│   │   ├── 2.png
│   │   ├── 20.png
│   │   ├── 21.png
│   │   ├── 22.png
│   │   ├── 23.png
│   │   ├── 24.png
│   │   ├── 25.png
│   │   ├── 26.png
│   │   ├── 27.png
│   │   ├── 3.png
│   │   ├── 4.png
│   │   ├── 5.png
│   │   ├── 6.png
│   │   ├── 7.png
│   │   ├── 8.png
│   │   └── 9.png
│   └── pic2
│       ├── c1.png
│       ├── c2.png
│       ├── c3.png
│       ├── c4.png
│       ├── c5.png
│       ├── c6.png
│       ├── c7.png
│       ├── c8.png
│       ├── d1.png
│       ├── d2.png
│       ├── d3.png
│       ├── d4.png
│       ├── d5.png
│       ├── d6.png
│       ├── d7.png
│       └── d8.png
├── modelDemo.py
├── process
│   ├── OpenJupyterLab.bat
│   ├── Train7gpu.ipynb
│   ├── __init__.py
│   ├── make_qrcode.py
│   ├── myModelGraph.py
│   └── openTensorboard.bat
├── readme_static
│   └── readme_img
│       ├── 1.png
│       ├── 2.png
│       ├── 3.png
│       ├── net.png
│       ├── p1.png
│       └── p2.png
├── requirements.txt
├── test.py
└── 开源许可证中文翻译.txt
/.gitattributes:
--------------------------------------------------------------------------------
1 | *.ipynb linguist-language=python
2 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .idea
2 | __pycache__
3 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 bytesc
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.en.md:
--------------------------------------------------------------------------------
1 | # Image_Classify_WebGUI_CIFAR10
2 |
3 | ✨**Intelligent Image Classification Web Application based on Convolutional Neural Networks and the CIFAR10 Dataset**: image classification visualization interface, image classification front-end web page, image classification demo built with PyWebIO. AI image classification with PyTorch. CIFAR10 dataset, small model. 100% pure Python code, lightweight, easy to reproduce.
4 | 
5 | [简体中文文档](./README.md)
6 | 
7 | [Personal website: www.bytesc.top](http://www.bytesc.top) includes an online demo of this project.
8 | 
9 | 🔔 If you have any project-related questions, feel free to open an `issue` in this project; I usually reply within 24 hours.
10 | 
11 | ## Project Introduction
12 | * 1. Use PyTorch to classify images from the CIFAR10 dataset
13 | * 2. Use a small, lightweight model achieving 76% accuracy
14 | * 3. Use PyWebIO as the web visualization framework: no front-end languages needed, written in pure Python. Lightweight, easy to reproduce, easy to deploy
15 | 
16 | Network structure used
17 | 
18 | 
19 | ## Screenshots
20 | 
21 | 
22 | 
23 | 
24 | ## How to use
25 | Python version 3.9
26 | 
27 | First install the dependencies
28 | > pip install -r requirements.txt
29 | 
30 | modelDemo.py is the project entry point; run this file to start the server
31 | > python modelDemo.py
32 | 
33 | Copy the link into a browser and open it
34 | 
35 | Click "Demo" to enter the web interface
36 | 
37 | 
38 | After that, you can also click "Upload File" and select an image from the example_img folder to upload and test
39 | 
40 | ## Project structure
41 | ```
42 | └─Image_Classify_WebGUI_CIFAR10
43 |     ├─data
44 |     │  └─logs_import
45 |     ├─example_img
46 |     ├─process
47 |     │  └─logs
48 |     └─readme_static
49 | ```
50 | * The data folder stores static resources, including the trained model (.pth)
51 | * The process folder stores process files, including the model training program
52 | * readme_static stores static resources used in the readme document
53 | * The example_img folder contains some images that can be used for testing
54 |
--------------------------------------------------------------------------------
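Under the hood, modelDemo.py resizes every uploaded image to the CIFAR10 input size of 32x32 and scales pixel values to [0, 1] (via torchvision.transforms.Resize and ToTensor). A dependency-free sketch of that contract; the `resize_nearest` helper and the fake grayscale image are hypothetical, for illustration only:

```python
# Pure-Python sketch of the preprocessing contract used by modelDemo.py:
# resize to 32x32, scale pixel values from 0-255 to [0, 1].
# The real code uses torchvision; this nearest-neighbour version only
# illustrates the shape/range contract.

def resize_nearest(pixels, out_w=32, out_h=32):
    """pixels: 2D list of grayscale values 0-255. Returns 32x32 floats in [0, 1]."""
    in_h, in_w = len(pixels), len(pixels[0])
    out = []
    for y in range(out_h):
        src_y = y * in_h // out_h           # nearest source row
        row = [pixels[src_y][x * in_w // out_w] / 255.0 for x in range(out_w)]
        out.append(row)
    return out

img = [[128] * 64 for _ in range(48)]       # a fake 64x48 grayscale image
small = resize_nearest(img)
print(len(small), len(small[0]))            # 32 32
```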
/README.md:
--------------------------------------------------------------------------------
1 | # Image_Classify_WebGUI_CIFAR10
2 |
3 | ✨ **Intelligent image classification based on a Convolutional Neural Network (CNN) and the CIFAR10 dataset**: image classification visualization interface, image classification front-end web page, image classification demo built with PyWebIO. AI image classification with PyTorch. CIFAR10 dataset, small model. 100% pure Python code, lightweight, easy to reproduce
4 | 
5 | [English Readme](./README.en.md)🚩
6 | 
7 | [Personal website: www.bytesc.top](http://www.bytesc.top) includes an online demo of this project.
8 | 
9 | 🔔 If you have any project-related questions, feel free to open an `issue` in this project; I usually reply within 24 hours.
10 | 
11 | ## Project Introduction
12 | * 1. Use PyTorch to classify images from the CIFAR10 dataset
13 | * 2. Use a small, lightweight model achieving 76% accuracy
14 | * 3. Use PyWebIO as the web visualization framework: no front-end languages needed, written in pure Python. Lightweight, easy to reproduce, easy to deploy
15 | 
16 | Network structure used
17 | 
18 | 
19 | ## Screenshots
20 | 
21 | 
22 | 
23 | 
24 | ## How to use
25 | Python version 3.9
26 | 
27 | First install the dependencies
28 | > pip install -r requirements.txt
29 | 
30 | modelDemo.py is the project entry point; run this file to start the server
31 | > python modelDemo.py
32 | 
33 | Copy the link into a browser and open it
34 | 
35 | Click "Demo" to enter the web interface
36 | 
37 | 
38 | After that, you can also click "Upload File" and select an image from the example_img folder to upload and test
39 | 
40 | ## Project structure
41 | ```
42 | └─Image_Classify_WebGUI_CIFAR10
43 |     ├─data
44 |     │  └─logs_import
45 |     ├─example_img
46 |     ├─process
47 |     │  └─logs
48 |     └─readme_static
49 | ```
50 | * The data folder stores static resources, including the trained model (.pth)
51 | * The process folder stores process files, including the model training program
52 | * readme_static stores static resources used in the readme document
53 | * The example_img folder contains some images that can be used for testing
54 | 
55 | # Open Source License
56 | 
57 | This translated version is for reference only; the English version in the LICENSE file prevails
58 | 
59 | MIT License:
60 | 
61 | Copyright (c) 2023 bytesc
62 | 
63 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, subject to the following conditions:
64 | 
65 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
66 | 
67 | The Software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from the use of the Software.
68 |
--------------------------------------------------------------------------------
/data/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/data/__init__.py
--------------------------------------------------------------------------------
/data/myModel_46.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/data/myModel_46.pth
--------------------------------------------------------------------------------
/data/myModel_import2.py:
--------------------------------------------------------------------------------
1 | import torch.nn
2 | 
3 | 
4 | 
5 | class CIFAR10Classify01(torch.nn.Module):
6 |     def __init__(self):
7 |         super(CIFAR10Classify01, self).__init__()
8 |         self.model1 = torch.nn.Sequential(
9 |             torch.nn.Conv2d(in_channels=3, out_channels=32, kernel_size=5, padding=2),
10 |             torch.nn.MaxPool2d(kernel_size=2),
11 |             torch.nn.Conv2d(in_channels=32, out_channels=32, kernel_size=5, padding=2),
12 |             torch.nn.MaxPool2d(kernel_size=2),
13 |             torch.nn.BatchNorm2d(32),
14 |             torch.nn.Sigmoid(),
15 |             torch.nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5, padding=2),
16 |             torch.nn.MaxPool2d(2),
17 |             torch.nn.Flatten(),
18 |             # in_features of the first Linear layer can be obtained via print(x.shape)
19 |             torch.nn.Linear(in_features=1024, out_features=64),
20 |             torch.nn.Linear(in_features=64, out_features=10)
21 |             # for classification, the final out_features equals the number of classes
22 |         )
23 | 
24 |     def forward(self, x):
25 |         x = self.model1(x)
26 |         return x
27 | 
28 | 
29 | if __name__ == "__main__":
30 |     import torch.utils.tensorboard
31 |     model = CIFAR10Classify01()
32 |     test_tensor = torch.ones(64, 3, 32, 32)
33 |     # sanity-check the network with a tensor shaped like the real dataset
34 |     output = model(test_tensor)
35 |     print(output.shape)
36 |     # output.shape[0] == batch_size, output.shape[1] == number of classes
37 | 
38 |     writer = torch.utils.tensorboard.SummaryWriter("logs_import")
39 |     writer.add_graph(model, test_tensor)
40 |     writer.close()
41 | 
42 |     import os
43 |     os.system("tensorboard --logdir=logs_import --port=998")
44 |
--------------------------------------------------------------------------------
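A quick way to see where `in_features=1024` in the file above comes from: each of the three `MaxPool2d(2)` layers halves the 32x32 spatial size, and the last conv layer outputs 64 channels. A small arithmetic check:

```python
# Derive the Flatten output size of CIFAR10Classify01 by hand.
side = 32                 # CIFAR10 images are 32x32
for _ in range(3):        # three MaxPool2d(kernel_size=2) layers
    side //= 2            # 32 -> 16 -> 8 -> 4
channels = 64             # out_channels of the final Conv2d
flat_features = channels * side * side
print(flat_features)      # 1024
```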
/data/net_graph.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/data/net_graph.png
--------------------------------------------------------------------------------
/example_img/c52d245900b9bae9565b6cf4d4781130.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/c52d245900b9bae9565b6cf4d4781130.jpeg
--------------------------------------------------------------------------------
/example_img/pic/1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/1.png
--------------------------------------------------------------------------------
/example_img/pic/10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/10.png
--------------------------------------------------------------------------------
/example_img/pic/11.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/11.png
--------------------------------------------------------------------------------
/example_img/pic/12.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/12.png
--------------------------------------------------------------------------------
/example_img/pic/13.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/13.png
--------------------------------------------------------------------------------
/example_img/pic/14.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/14.png
--------------------------------------------------------------------------------
/example_img/pic/15.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/15.png
--------------------------------------------------------------------------------
/example_img/pic/16.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/16.png
--------------------------------------------------------------------------------
/example_img/pic/17.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/17.png
--------------------------------------------------------------------------------
/example_img/pic/18.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/18.png
--------------------------------------------------------------------------------
/example_img/pic/19.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/19.png
--------------------------------------------------------------------------------
/example_img/pic/2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/2.png
--------------------------------------------------------------------------------
/example_img/pic/20.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/20.png
--------------------------------------------------------------------------------
/example_img/pic/21.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/21.png
--------------------------------------------------------------------------------
/example_img/pic/22.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/22.png
--------------------------------------------------------------------------------
/example_img/pic/23.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/23.png
--------------------------------------------------------------------------------
/example_img/pic/24.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/24.png
--------------------------------------------------------------------------------
/example_img/pic/25.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/25.png
--------------------------------------------------------------------------------
/example_img/pic/26.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/26.png
--------------------------------------------------------------------------------
/example_img/pic/27.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/27.png
--------------------------------------------------------------------------------
/example_img/pic/3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/3.png
--------------------------------------------------------------------------------
/example_img/pic/4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/4.png
--------------------------------------------------------------------------------
/example_img/pic/5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/5.png
--------------------------------------------------------------------------------
/example_img/pic/6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/6.png
--------------------------------------------------------------------------------
/example_img/pic/7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/7.png
--------------------------------------------------------------------------------
/example_img/pic/8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/8.png
--------------------------------------------------------------------------------
/example_img/pic/9.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic/9.png
--------------------------------------------------------------------------------
/example_img/pic2/c1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/c1.png
--------------------------------------------------------------------------------
/example_img/pic2/c2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/c2.png
--------------------------------------------------------------------------------
/example_img/pic2/c3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/c3.png
--------------------------------------------------------------------------------
/example_img/pic2/c4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/c4.png
--------------------------------------------------------------------------------
/example_img/pic2/c5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/c5.png
--------------------------------------------------------------------------------
/example_img/pic2/c6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/c6.png
--------------------------------------------------------------------------------
/example_img/pic2/c7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/c7.png
--------------------------------------------------------------------------------
/example_img/pic2/c8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/c8.png
--------------------------------------------------------------------------------
/example_img/pic2/d1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/d1.png
--------------------------------------------------------------------------------
/example_img/pic2/d2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/d2.png
--------------------------------------------------------------------------------
/example_img/pic2/d3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/d3.png
--------------------------------------------------------------------------------
/example_img/pic2/d4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/d4.png
--------------------------------------------------------------------------------
/example_img/pic2/d5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/d5.png
--------------------------------------------------------------------------------
/example_img/pic2/d6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/d6.png
--------------------------------------------------------------------------------
/example_img/pic2/d7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/d7.png
--------------------------------------------------------------------------------
/example_img/pic2/d8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/example_img/pic2/d8.png
--------------------------------------------------------------------------------
/modelDemo.py:
--------------------------------------------------------------------------------
1 | import PIL.Image
2 | import torchvision.transforms
3 | import torch.nn
4 | 
5 | import pywebio
6 | from io import BytesIO
7 | 
8 | from data.myModel_import2 import *
9 | 
10 | 
11 | @pywebio.config(title="Convolutional Neural Network Demo", description="Image classification based on the CIFAR10 dataset")
12 | def page1():
13 |     train_set_classes = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
14 |     graph_img = PIL.Image.open("./data/net_graph.png")
15 | 
16 |     def show_info():
17 |         pywebio.output.put_markdown("# Image Classification on the CIFAR10 Dataset")
18 |         pywebio.output.put_html("<br>")
19 |         pywebio.output.put_table([
20 |             [pywebio.output.span('Dataset', row=1), pywebio.output.span('Supported classes', col=2)],
21 |             ['CIFAR10', train_set_classes]
22 |         ])
23 |         pywebio.output.put_html("<br>")
24 | 
25 |     show_net = [pywebio.output.put_text('net'),
26 |                 pywebio.output.put_image(graph_img)]
27 | 
28 |     def popup_window(title, content):
29 |         pywebio.output.popup(title=title, content=content)
30 | 
31 |     show_info()
32 |     pywebio.output.put_buttons(['View network structure'], [lambda: popup_window("Network structure", show_net)])
33 |     # pywebio.output.put_buttons(['View network structure'], [popup_window])
34 |     pywebio.input.actions("", [{'label': "Upload image", 'value': "", 'color': 'success', }])
35 |     inpic = pywebio.input.file_upload(label="Please upload an image")
36 |     pywebio.output.popup("Loading", [
37 |         pywebio.output.put_loading(),
38 |     ])
39 | 
40 |     # img_path = "./pic2/102.png"
41 |     img = PIL.Image.open(BytesIO(inpic['content']))
42 |     # pywebio.output.put_image(inpic['content'])
43 |     img = img.convert("RGB")
44 |     # print(img.size)
45 |     transform01 = torchvision.transforms.Compose([
46 |         torchvision.transforms.Resize((32, 32)),
47 |         torchvision.transforms.ToTensor()
48 |     ])
49 |     img = transform01(img)
50 |     img = torch.reshape(img, (1, 3, 32, 32))
51 |     # print(img.shape)
52 | 
53 |     model = torch.load("data/myModel_46.pth", map_location=torch.device('cpu'))
54 |     # print(model)
55 | 
56 |     model.eval()  # switch BatchNorm layers to inference mode before prediction
57 |     with torch.no_grad():
58 |         output = model(img)
59 |     # print(output)
60 |     # print(output.argmax(1))
61 |     # print(train_set.classes[output.argmax(1).item()])
62 |     # pywebio.output.put_text(train_set.classes[output.argmax(1).item()])
63 |     pywebio.output.popup(title='Recognition result',
64 |                          content=[
65 |                              pywebio.output.put_markdown("Prediction:\n # " + train_set_classes[output.argmax(1).item()]),
66 |                              pywebio.output.put_image(None if not inpic else inpic['content'])])
67 | 
68 |     # img = torch.reshape(img, (3, 32, 32))
69 |     # transform2 = torchvision.transforms.Compose([
70 |     #     torchvision.transforms.Resize((160, 160)),
71 |     #     torchvision.transforms.ToPILImage()
72 |     # ])
73 |     # img = transform2(img)
74 |     #
75 |     # img_bytes = BytesIO()
76 |     # img.save(img_bytes, format="JPEG")
77 |     # img = img_bytes.getvalue()
78 |     # pywebio.output.put_image(img, height="512", width="512")
79 |     # pywebio.output.put_image(inpic['content'], height="512", width="512")
80 |     # pywebio.output.put_image(inpic['content'])
81 |     del model, inpic, img
82 | 
83 | 
84 | if __name__ == "__main__":
85 |     # page1()
86 |     pywebio.start_server(
87 |         applications=[page1, ],
88 |         debug=True,
89 |         cdn=False,
90 |         auto_open_webbrowser=False,
91 |         remote_access=False,
92 |         port=6007
93 |     )
94 |
--------------------------------------------------------------------------------
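modelDemo.py maps the network's 10 raw output scores to a class name with `output.argmax(1)`. A minimal pure-Python sketch of that last step; the `predict_label` helper and the example logits are hypothetical, for illustration only (class names as in `train_set_classes` above):

```python
import math

# CIFAR10 class names, in the same order as modelDemo.py
CLASSES = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']

def predict_label(logits):
    """Return (class_name, confidence) for a list of 10 raw logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]          # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)  # argmax over classes
    return CLASSES[best], probs[best]

# hypothetical logits: index 2 is the largest score
label, conf = predict_label([0.1, 0.2, 2.5, 0.0, 0.3, 0.1, 0.0, 0.4, 0.2, 0.1])
print(label)  # bird
```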
/process/OpenJupyterLab.bat:
--------------------------------------------------------------------------------
1 | jupyter lab
2 |
3 |
--------------------------------------------------------------------------------
/process/Train7gpu.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 1,
6 | "id": "cc1e2529-a2e4-4bf9-8c3b-b3a037e06fd1",
7 | "metadata": {},
8 | "outputs": [],
9 | "source": [
10 | "import torchvision\n",
11 | "import torch.nn\n",
12 | "import torch.utils.tensorboard\n",
13 | "\n",
14 | "import torchvision.transforms\n",
15 | "from torch.utils.data import DataLoader"
16 | ]
17 | },
18 | {
19 | "cell_type": "code",
20 | "execution_count": 2,
21 | "id": "efb32107-c090-43bf-9b5c-44ce2c5b4904",
22 | "metadata": {},
23 | "outputs": [
24 | {
25 | "name": "stdout",
26 | "output_type": "stream",
27 | "text": [
28 | "Files already downloaded and verified\n",
29 | "Files already downloaded and verified\n"
30 | ]
31 | }
32 | ],
33 | "source": [
34 | "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
35 | "# prepare the dataset\n",
36 | "train_data = torchvision.datasets.CIFAR10(\"../CIFAR10\", train=True,\n",
37 | " transform=torchvision.transforms.ToTensor(), download=True)\n",
38 | "# CIFAR10 returns PIL Images by default; convert them to PyTorch tensors\n",
39 | "\n",
40 | "test_data = torchvision.datasets.CIFAR10(\"../CIFAR10\", train=False, download=True,\n",
41 | " transform=torchvision.transforms.ToTensor())\n",
42 | "# print(len(train_data))\n",
43 | "# print(len(test_data))\n",
44 | "train_data_size = len(train_data)\n",
45 | "test_data_size = len(test_data)\n",
46 | "# load the dataset with DataLoader in batches of 64\n",
47 | "train_dataloader = DataLoader(dataset=train_data, batch_size=64)\n",
48 | "test_dataloader = DataLoader(dataset=test_data, batch_size=64)"
49 | ]
50 | },
51 | {
52 | "cell_type": "code",
53 | "execution_count": 3,
54 | "id": "67cca792-7dd3-4ead-b657-ca5336aa978b",
55 | "metadata": {},
56 | "outputs": [],
57 | "source": [
58 | "# add TensorBoard visualization\n",
59 | "writer = torch.utils.tensorboard.SummaryWriter(\"./logs\")"
60 | ]
61 | },
62 | {
63 | "cell_type": "code",
64 | "execution_count": 4,
65 | "id": "eb1bb815-35c3-4d5e-8a3b-64e3726088bf",
66 | "metadata": {},
67 | "outputs": [
68 | {
69 | "data": {
70 | "text/plain": [
71 | "CIFAR10Classify01(\n",
72 | " (model1): Sequential(\n",
73 | " (0): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n",
74 | " (1): Conv2d(3, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n",
75 | " (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
76 | " (3): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n",
77 | " (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
78 | " (5): Sigmoid()\n",
79 | " (6): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n",
80 | " (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n",
81 | " (8): Flatten(start_dim=1, end_dim=-1)\n",
82 | " (9): Linear(in_features=1024, out_features=64, bias=True)\n",
83 | " (10): Linear(in_features=64, out_features=10, bias=True)\n",
84 | " )\n",
85 | ")"
86 | ]
87 | },
88 | "execution_count": 4,
89 | "metadata": {},
90 | "output_type": "execute_result"
91 | }
92 | ],
93 | "source": [
94 | "# build the neural network\n",
95 | "class CIFAR10Classify01(torch.nn.Module):\n",
96 | " def __init__(self):\n",
97 | " super(CIFAR10Classify01, self).__init__()\n",
98 | " self.model1 = torch.nn.Sequential(\n",
99 | " torch.nn.Conv2d(in_channels=3, out_channels=32, kernel_size=5, padding=2, ),\n",
100 | " torch.nn.MaxPool2d(kernel_size=2),\n",
101 | " torch.nn.BatchNorm2d(32),\n",
102 | " torch.nn.Conv2d(in_channels=32, out_channels=32, kernel_size=5, padding=2, ),\n",
103 | " torch.nn.MaxPool2d(kernel_size=2),\n",
104 | " torch.nn.Sigmoid(),\n",
105 | " torch.nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5, padding=2, ),\n",
106 | " torch.nn.MaxPool2d(2),\n",
107 | " torch.nn.Flatten(),\n",
108 | " # in_features of the first Linear layer can be obtained via print(x.shape)\n",
109 | " torch.nn.Linear(in_features=1024, out_features=64),\n",
110 | " torch.nn.Linear(in_features=64, out_features=10)\n",
111 | " # for classification, the final out_features equals the number of classes\n",
112 | " )\n",
113 | "\n",
114 | " def forward(self, x):\n",
115 | " x = self.model1(x)\n",
116 | " return x\n",
117 | " \n",
118 | "myModel = CIFAR10Classify01()\n",
119 | "myModel.to(device)"
120 | ]
121 | },
122 | {
123 | "cell_type": "code",
124 | "execution_count": 5,
125 | "id": "d605e974-7266-4ee3-ab72-954314233065",
126 | "metadata": {},
127 | "outputs": [],
128 | "source": [
129 | "# define the loss function\n",
130 | "loss_fn = torch.nn.CrossEntropyLoss()\n",
131 | "loss_fn.to(device)\n",
132 | "\n",
133 | "# define the optimizer\n",
134 | "learning_rate = 1e-2\n",
135 | "optim = torch.optim.SGD(params=myModel.parameters(), lr=learning_rate)\n",
136 | "# SGD: stochastic gradient descent"
137 | ]
138 | },
139 | {
140 | "cell_type": "code",
141 | "execution_count": 6,
142 | "id": "4f6defce-7903-4db5-94e0-90433d4917fd",
143 | "metadata": {},
144 | "outputs": [
145 | {
146 | "name": "stdout",
147 | "output_type": "stream",
148 | "text": [
149 | "------0------\n"
150 | ]
151 | },
152 | {
153 | "ename": "RuntimeError",
154 | "evalue": "running_mean should contain 3 elements not 32",
155 | "output_type": "error",
156 | "traceback": [
157 | "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
158 | "\u001b[1;31mRuntimeError\u001b[0m Traceback (most recent call last)",
159 | "Input \u001b[1;32mIn [6]\u001b[0m, in \u001b[0;36m\u001b[1;34m()\u001b[0m\n\u001b[0;32m 10\u001b[0m imgs \u001b[38;5;241m=\u001b[39m imgs\u001b[38;5;241m.\u001b[39mto(device)\n\u001b[0;32m 11\u001b[0m targets \u001b[38;5;241m=\u001b[39m targets\u001b[38;5;241m.\u001b[39mto(device)\n\u001b[1;32m---> 12\u001b[0m output \u001b[38;5;241m=\u001b[39m \u001b[43mmyModel\u001b[49m\u001b[43m(\u001b[49m\u001b[43mimgs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 13\u001b[0m \u001b[38;5;66;03m# 优化器梯度清零\u001b[39;00m\n\u001b[0;32m 14\u001b[0m optim\u001b[38;5;241m.\u001b[39mzero_grad()\n",
160 | "File \u001b[1;32m~\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\torch\\nn\\modules\\module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[1;34m(self, *input, **kwargs)\u001b[0m\n\u001b[0;32m 1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[0;32m 1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[0;32m 1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[0;32m 1129\u001b[0m \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[1;32m-> 1130\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m forward_call(\u001b[38;5;241m*\u001b[39m\u001b[38;5;28minput\u001b[39m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n\u001b[0;32m 1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[0;32m 1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
161 | "Input \u001b[1;32mIn [4]\u001b[0m, in \u001b[0;36mCIFAR10Classify01.forward\u001b[1;34m(self, x)\u001b[0m\n\u001b[0;32m 21\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, x):\n\u001b[1;32m---> 22\u001b[0m x \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmodel1\u001b[49m\u001b[43m(\u001b[49m\u001b[43mx\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 23\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m x\n",
162 | "File \u001b[1;32m~\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\torch\\nn\\modules\\module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[1;34m(self, *input, **kwargs)\u001b[0m\n\u001b[0;32m 1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[0;32m 1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[0;32m 1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[0;32m 1129\u001b[0m \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[1;32m-> 1130\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m forward_call(\u001b[38;5;241m*\u001b[39m\u001b[38;5;28minput\u001b[39m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n\u001b[0;32m 1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[0;32m 1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
163 | "File \u001b[1;32m~\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\torch\\nn\\modules\\container.py:139\u001b[0m, in \u001b[0;36mSequential.forward\u001b[1;34m(self, input)\u001b[0m\n\u001b[0;32m 137\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mforward\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m):\n\u001b[0;32m 138\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m module \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mself\u001b[39m:\n\u001b[1;32m--> 139\u001b[0m \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mmodule\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[0;32m 140\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28minput\u001b[39m\n",
164 | "File \u001b[1;32m~\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\torch\\nn\\modules\\module.py:1130\u001b[0m, in \u001b[0;36mModule._call_impl\u001b[1;34m(self, *input, **kwargs)\u001b[0m\n\u001b[0;32m 1126\u001b[0m \u001b[38;5;66;03m# If we don't have any hooks, we want to skip the rest of the logic in\u001b[39;00m\n\u001b[0;32m 1127\u001b[0m \u001b[38;5;66;03m# this function, and just call forward.\u001b[39;00m\n\u001b[0;32m 1128\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_backward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_forward_pre_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_backward_hooks\n\u001b[0;32m 1129\u001b[0m \u001b[38;5;129;01mor\u001b[39;00m _global_forward_hooks \u001b[38;5;129;01mor\u001b[39;00m _global_forward_pre_hooks):\n\u001b[1;32m-> 1130\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m forward_call(\u001b[38;5;241m*\u001b[39m\u001b[38;5;28minput\u001b[39m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n\u001b[0;32m 1131\u001b[0m \u001b[38;5;66;03m# Do not call functions when jit is used\u001b[39;00m\n\u001b[0;32m 1132\u001b[0m full_backward_hooks, non_full_backward_hooks \u001b[38;5;241m=\u001b[39m [], []\n",
165 | "File \u001b[1;32m~\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\torch\\nn\\modules\\batchnorm.py:168\u001b[0m, in \u001b[0;36m_BatchNorm.forward\u001b[1;34m(self, input)\u001b[0m\n\u001b[0;32m 161\u001b[0m bn_training \u001b[38;5;241m=\u001b[39m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mrunning_mean \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m) \u001b[38;5;129;01mand\u001b[39;00m (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mrunning_var \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[0;32m 163\u001b[0m \u001b[38;5;124mr\u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[0;32m 164\u001b[0m \u001b[38;5;124;03mBuffers are only updated if they are to be tracked and we are in training mode. Thus they only need to be\u001b[39;00m\n\u001b[0;32m 165\u001b[0m \u001b[38;5;124;03mpassed when the update should occur (i.e. in training mode when they are tracked), or when buffer stats are\u001b[39;00m\n\u001b[0;32m 166\u001b[0m \u001b[38;5;124;03mused for normalization (i.e. in eval mode when buffers are not None).\u001b[39;00m\n\u001b[0;32m 167\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m--> 168\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mF\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbatch_norm\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 169\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[0;32m 170\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;66;43;03m# If buffers are not to be tracked, ensure that they won't be updated\u001b[39;49;00m\n\u001b[0;32m 171\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrunning_mean\u001b[49m\n\u001b[0;32m 172\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43;01mif\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;129;43;01mnot\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mtraining\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01mor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mtrack_running_stats\u001b[49m\n\u001b[0;32m 173\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43;01melse\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[0;32m 174\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrunning_var\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mif\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;129;43;01mnot\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mtraining\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01mor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mtrack_running_stats\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01melse\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[0;32m 175\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 176\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbias\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 177\u001b[0m \u001b[43m \u001b[49m\u001b[43mbn_training\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 178\u001b[0m \u001b[43m \u001b[49m\u001b[43mexponential_average_factor\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 179\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43meps\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 180\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n",
166 | "File \u001b[1;32m~\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\torch\\nn\\functional.py:2438\u001b[0m, in \u001b[0;36mbatch_norm\u001b[1;34m(input, running_mean, running_var, weight, bias, training, momentum, eps)\u001b[0m\n\u001b[0;32m 2435\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m training:\n\u001b[0;32m 2436\u001b[0m _verify_batch_size(\u001b[38;5;28minput\u001b[39m\u001b[38;5;241m.\u001b[39msize())\n\u001b[1;32m-> 2438\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mtorch\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbatch_norm\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 2439\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mweight\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mbias\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mrunning_mean\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mrunning_var\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtraining\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mmomentum\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43meps\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtorch\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbackends\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcudnn\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43menabled\u001b[49m\n\u001b[0;32m 2440\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n",
167 | "\u001b[1;31mRuntimeError\u001b[0m: running_mean should contain 3 elements not 32"
168 | ]
169 | }
170 | ],
171 | "source": [
172 | "# Set up the training parameters\n",
173 | "train_step = 0\n",
174 | "test_step = 0\n",
175 | "epoch = 100\n",
176 | "for i in range(epoch):\n",
177 | " train_loss = 0\n",
178 | " print(\"------{}------\".format(i))\n",
179 | " myModel.train()\n",
180 | " for imgs, targets in train_dataloader:\n",
181 | " imgs = imgs.to(device)\n",
182 | " targets = targets.to(device)\n",
183 | " output = myModel(imgs)\n",
184 | "        # Zero the optimizer's gradients\n",
185 | "        optim.zero_grad()\n",
186 | "        # Compute the loss\n",
187 | "        loss = loss_fn(output, targets)\n",
188 | "        # Backpropagate to compute gradients\n",
189 | "        loss.backward()\n",
190 | "        # Step the optimizer to update the model\n",
191 | " optim.step()\n",
192 | " train_loss += loss.item()\n",
193 | " train_step += 1\n",
194 | " # print(loss)\n",
195 | " writer.add_scalar(\"trainloss\", train_loss/len(train_data), train_step)\n",
196 | " print(\"running_loss: \", train_loss/len(train_data))\n",
197 | "\n",
198 | "    # Evaluate performance after each epoch\n",
199 | " test_loss = 0\n",
200 | " total_accuracy = 0\n",
201 | " with torch.no_grad():\n",
202 | "        # Disable gradient tracking; no parameter updates during evaluation\n",
203 | " myModel.eval()\n",
204 | " for imgs, targets in test_dataloader:\n",
205 | " imgs = imgs.to(device)\n",
206 | " targets = targets.to(device)\n",
207 | " output = myModel(imgs)\n",
208 | " loss = loss_fn(output, targets)\n",
209 | " test_loss += loss.item()\n",
210 | " accuracy = (output.argmax(1) == targets).sum()\n",
211 | " total_accuracy += accuracy\n",
212 | " test_step += 1\n",
213 | " writer.add_scalar(\"testloss\", test_loss/len(test_data), test_step)\n",
214 | " writer.add_scalar(\"total_accuracy\", total_accuracy/len(test_data), test_step)\n",
215 | " print(\"test_loss: \", test_loss/len(test_data))\n",
216 | " print(\"total_accuracy: \", total_accuracy/len(test_data))\n",
217 | " torch.save(myModel, \"./myModel_{}.pth\".format(i))\n",
218 | "\n",
219 | "writer.close()"
220 | ]
221 | },
222 | {
223 | "cell_type": "code",
224 | "execution_count": null,
225 | "id": "16cef185-cf09-4eab-b5a4-643ae14148fc",
226 | "metadata": {},
227 | "outputs": [],
228 | "source": [
229 | "%reload_ext tensorboard\n"
230 | ]
231 | },
232 | {
233 | "cell_type": "code",
234 | "execution_count": null,
235 | "id": "d3280638-14cd-47f8-9717-c6eb86613ca5",
236 | "metadata": {},
237 | "outputs": [],
238 | "source": [
239 | "\n",
240 | "from tensorboard import notebook\n",
241 | "notebook.list()\n",
242 | "notebook.start(\"--logdir ./logs --port=709\")"
243 | ]
244 | }
245 | ],
246 | "metadata": {
247 | "kernelspec": {
248 | "display_name": "Python 3 (ipykernel)",
249 | "language": "python",
250 | "name": "python3"
251 | },
252 | "language_info": {
253 | "codemirror_mode": {
254 | "name": "ipython",
255 | "version": 3
256 | },
257 | "file_extension": ".py",
258 | "mimetype": "text/x-python",
259 | "name": "python",
260 | "nbconvert_exporter": "python",
261 | "pygments_lexer": "ipython3",
262 | "version": "3.10.6"
263 | }
264 | },
265 | "nbformat": 4,
266 | "nbformat_minor": 5
267 | }
268 |
--------------------------------------------------------------------------------
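The `RuntimeError` recorded in the training cell above ("running_mean should contain 3 elements not 32") is BatchNorm's per-channel shape check failing: a `BatchNorm2d(32)` layer received the raw 3-channel images instead of a 32-channel feature map, which suggests the kernel was still holding an older model definition when the cell ran (re-running the model cell before training resolves it). A torch-free sketch of that check, with a hypothetical helper name:

```python
# Hypothetical sketch of BatchNorm's runtime consistency check: the
# per-channel running_mean buffer must have one entry per input channel.
def batch_norm_check(running_mean, input_channels):
    if len(running_mean) != input_channels:
        raise RuntimeError(
            f"running_mean should contain {input_channels} elements "
            f"not {len(running_mean)}"
        )

# A BatchNorm2d(32) fed a 3-channel image batch reproduces the message:
try:
    batch_norm_check(running_mean=[0.0] * 32, input_channels=3)
except RuntimeError as e:
    print(e)  # running_mean should contain 3 elements not 32
```

This is why the check passes in the code as shown (BatchNorm2d(32) follows a Conv2d with out_channels=32) but failed against the stale in-memory model.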
/process/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/process/__init__.py
--------------------------------------------------------------------------------
/process/make_qrcode.py:
--------------------------------------------------------------------------------
1 | import qrcode
2 |
3 | ad = input("Enter the text or URL to encode: ")
4 | img = qrcode.make(ad)
5 |
6 | img.show()
7 |
--------------------------------------------------------------------------------
/process/myModelGraph.py:
--------------------------------------------------------------------------------
1 | import torch.nn
2 | import torchvision.datasets
3 | import torchvision.transforms
4 | import torch.utils.tensorboard
5 | from torch.utils.data import DataLoader
6 |
7 | import sys
8 | sys.path.append('..')
9 | from data.myModel_import2 import *
10 |
11 |
12 | # Use a tensor of ones to check that the model runs
13 | model2 = CIFAR10Classify01()
14 | print(model2)
15 | input_tensor = torch.ones((64, 3, 32, 32))
16 | output = model2(input_tensor)
17 | print(output.shape)
18 |
19 |
20 | writer = torch.utils.tensorboard.SummaryWriter("logs")
21 | writer.add_graph(model2, input_tensor)
22 | writer.close()
23 |
24 |
--------------------------------------------------------------------------------
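The shape smoke test above is also how the training notebook's comment suggests finding the first Linear layer's `in_features` (via `print(x.shape)`). It can be derived by hand as well: the Conv2d layers use kernel_size=5 with padding=2, which preserves spatial size, so only the three `MaxPool2d(2)` layers shrink the 32x32 input, and the last conv emits 64 channels. A small sketch of that bookkeeping, with a hypothetical helper name:

```python
# Hypothetical helper: derive the Flatten output size fed to the first
# Linear layer. "Same"-padded convs keep H and W, so only the stride-2
# pools shrink the spatial dimensions.
def flatten_features(spatial, channels, num_pools):
    for _ in range(num_pools):
        spatial //= 2  # each MaxPool2d(kernel_size=2) halves H and W
    return channels * spatial * spatial

# CIFAR-10: 32x32 input, three pools -> 4x4, last conv emits 64 channels
print(flatten_features(32, 64, 3))  # 1024, matching in_features=1024
```

The arithmetic (32 -> 16 -> 8 -> 4, then 64 * 4 * 4 = 1024) matches the `Linear(in_features=1024, ...)` layer in the model.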
/process/openTensorboard.bat:
--------------------------------------------------------------------------------
1 | tensorboard --logdir=logs --port=998
--------------------------------------------------------------------------------
/readme_static/readme_img/1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/readme_static/readme_img/1.png
--------------------------------------------------------------------------------
/readme_static/readme_img/2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/readme_static/readme_img/2.png
--------------------------------------------------------------------------------
/readme_static/readme_img/3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/readme_static/readme_img/3.png
--------------------------------------------------------------------------------
/readme_static/readme_img/net.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/readme_static/readme_img/net.png
--------------------------------------------------------------------------------
/readme_static/readme_img/p1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/readme_static/readme_img/p1.png
--------------------------------------------------------------------------------
/readme_static/readme_img/p2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bytesc/Image_Classify_WebGUI_CIFAR10/981131d8148eb1492cbcfe073fb7b030e276b105/readme_static/readme_img/p2.png
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | torch==2.0.0
2 | torchvision==0.15.1
3 | pywebio==1.7.1
4 |
5 |
--------------------------------------------------------------------------------
/test.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/开源许可证中文翻译.txt:
--------------------------------------------------------------------------------
1 | 此翻译版本仅供参考,以 LICENSE 文件中的英文版本为准
2 |
3 | MIT 开源许可证:
4 |
5 | 版权所有 (c) 2023 bytesc
6 |
7 | 特此授权,免费向任何获得本软件及相关文档文件(以下简称“软件”)副本的人提供使用、复制、修改、合并、出版、发行、再许可和/或销售软件的权利,但须遵守以下条件:
8 |
9 | 上述版权声明和本许可声明应包含在所有副本或实质性部分中。
10 |
11 | 本软件按“原样”提供,不作任何明示或暗示的保证,包括但不限于适销性、特定用途适用性和非侵权性。在任何情况下,作者或版权持有人均不对因使用本软件而产生的任何索赔、损害或其他责任负责,无论是在合同、侵权或其他方面。
12 |
--------------------------------------------------------------------------------
|