17 | | file | Component file to upload | string |
18 |
19 | ### Response
20 | | Name | Description | Type |
21 | |------------------------|-------------------------|-------|
22 | | 200 OK | Upload succeeded | string |
23 |
24 | ### Processing in the registry
25 | - Store the received component zip file
26 | - Unzip it and display the contents of its manifest in the web UI
27 | - The manifest contains the component ID
28 |
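For illustration, a minimal Python sketch of this upload call using the `requests` library. The upload path `/component/upload` is an assumption (the part of this document that defines it is not shown); only the `file` field comes from the Request Body table above.

```
import requests

SERVER_ADDR = "localhost:5000"  # placeholder address

# Assumption: the component upload endpoint is POST /component/upload.
with open("my_component.zip", "rb") as f:
    resp = requests.post("http://" + SERVER_ADDR + "/component/upload",
                         files={"file": f})  # 'file' field from the Request Body table
print(resp.status_code)  # 200 OK on success
```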
29 | ## 2. Na-component download
30 | - Download a component file (.zip) stored on the server
31 |
32 | HTTP
33 | ```
34 | GET http://SERVER_ADDR/component/{ID}
35 | ```
36 |
37 | ### URI Parameters
38 | | Name | Description | Type |
39 | |------------------------|-------------------------|-------|
40 | | ID | ID of the component to download | string |
41 |
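A matching download sketch in Python; the endpoint is the one given above, and the ID value is a placeholder taken from the uploaded manifest.

```
import requests

SERVER_ADDR = "localhost:5000"   # placeholder address
component_id = "example-id"      # placeholder; the real ID comes from the manifest

resp = requests.get("http://" + SERVER_ADDR + "/component/" + component_id)
with open("component.zip", "wb") as f:
    f.write(resp.content)        # save the returned zip file
```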
42 | ## 3. AI server URL registration
43 | - Upload (register) the AI server's URL, input/output types, and related metadata
44 |
45 | HTTP
46 | ```
47 | POST http://SERVER_ADDR/aiserver/upload
48 | ```
49 |
50 | ### Request Body
51 | | Name | Description | Type |
52 | |------------------------|-------------------------|-------|
53 | | json | AI server metadata file to upload | string |
54 |
55 | ### Response
56 | | Name | Description | Type |
57 | |------------------------|-------------------------|-------|
58 | | 200 OK | Upload succeeded | string |
59 |
60 | ### Processing in the registry
61 | - Store the received json file; it contains the REST API information needed to use the server's modules
62 | - Generate an ID for the json file
63 | - json file contents (see the sketch below):
64 | - URL
65 | - Request_Body
66 | - Response_Body
67 | - Response
68 | - description
69 | - The web UI displays the URL, description, and similar fields
70 |
71 |
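A sketch of the registration call and a possible json payload in Python, built only from the field names listed above; all concrete values are placeholders.

```
import json
import requests

SERVER_ADDR = "localhost:5000"  # placeholder address

# Field names taken from the list above; every value is a placeholder.
aiserver_info = {
    "URL": "http://ai-server.example/model/predict",
    "Request_Body": {"input_data": "string"},
    "Response_Body": {"result": "string"},
    "Response": "200 OK",
    "description": "example AI inference server",
}

with open("aiserver.json", "w") as f:
    json.dump(aiserver_info, f)

with open("aiserver.json", "rb") as f:
    resp = requests.post("http://" + SERVER_ADDR + "/aiserver/upload",
                         files={"json": f})  # 'json' field from the Request Body table
print(resp.status_code)  # 200 OK on success
```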
72 | ## 4. AI server URL download
73 | - Download the AI server information json file
74 |
75 | HTTP
76 | ```
77 | GET http://SERVER_ADDR/aiserver/{ID}
78 | ```
79 |
80 | ### URI Parameters
81 | | Name | Description | Type |
82 | |------------------------|-------------------------|-------|
83 | | ID | ID of the json file to download | string |
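
And the corresponding retrieval sketch; the ID is a placeholder.

```
import requests

SERVER_ADDR = "localhost:5000"  # placeholder address

resp = requests.get("http://" + SERVER_ADDR + "/aiserver/example-id")
info = resp.json()              # the stored AI server metadata
print(info.get("URL"), info.get("description"))
```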
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | pip-wheel-metadata/
24 | share/python-wheels/
25 | *.egg-info/
26 | .installed.cfg
27 | *.egg
28 | MANIFEST
29 |
30 | # PyInstaller
31 | # Usually these files are written by a python script from a template
32 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
33 | *.manifest
34 | *.spec
35 |
36 | # Installer logs
37 | pip-log.txt
38 | pip-delete-this-directory.txt
39 |
40 | # Unit test / coverage reports
41 | htmlcov/
42 | .tox/
43 | .nox/
44 | .coverage
45 | .coverage.*
46 | .cache
47 | nosetests.xml
48 | coverage.xml
49 | *.cover
50 | *.py,cover
51 | .hypothesis/
52 | .pytest_cache/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | target/
76 |
77 | # Jupyter Notebook
78 | .ipynb_checkpoints
79 |
80 | # IPython
81 | profile_default/
82 | ipython_config.py
83 |
84 | # pyenv
85 | .python-version
86 |
87 | # pipenv
88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
91 | # install all needed dependencies.
92 | #Pipfile.lock
93 |
94 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
95 | __pypackages__/
96 |
97 | # Celery stuff
98 | celerybeat-schedule
99 | celerybeat.pid
100 |
101 | # SageMath parsed files
102 | *.sage.py
103 |
104 | # Environments
105 | .env
106 | .venv
107 | env/
108 | venv/
109 | ENV/
110 | env.bak/
111 | venv.bak/
112 |
113 | # Spyder project settings
114 | .spyderproject
115 | .spyproject
116 |
117 | # Rope project settings
118 | .ropeproject
119 |
120 | # mkdocs documentation
121 | /site
122 |
123 | # mypy
124 | .mypy_cache/
125 | .dmypy.json
126 | dmypy.json
127 |
128 | # Pyre type checker
129 | .pyre/
130 |
--------------------------------------------------------------------------------
/ApiDocumentation.md:
--------------------------------------------------------------------------------
1 | # ONNX Inference REST API Reference
2 |
3 |
4 | ## 1. Model upload
5 | - Upload a model file to the server; the server reads the json file and returns the result.
6 |
7 | HTTP
8 | ```
9 | POST http://SERVER_ADDR/model/upload/{name}
10 | ```
11 | ### URI Parameters
12 |
13 | | Name | Description | Type |
14 | |------------------------|-------------------------|-------:|
15 | | name | Name under which the model is saved in the server's model folder | string |
16 |
17 |
18 | ### Request Body
19 | | Name | Description | Type |
20 | |------------------------|-------------------------|-------:|
21 | | model_file | Model file to upload | string |
22 | | jsonf | json file to be read | string |
23 |
24 | ### Response
25 | | Name | Description | Type |
26 | |------------------------|-------------------------|-------:|
27 | | 200 OK | Model upload succeeded | string |
28 |
29 |
30 | ## 2. Inference with a model
31 | - Send input data, run inference with the model stored on the server, and receive the result.
32 |
33 | HTTP
34 | ```
35 | POST http://SERVER_ADDR/model/predict/{name}
36 | ```
37 |
38 | ### URI Parameters
39 |
40 | | Name | Description | Type |
41 | |------------------------|-------------------------|-------:|
42 | | name | Name of the model to use | string |
43 |
44 |
45 | ### Request Body
46 | | Name | Description | Type |
47 | |------------------------|-------------------------|-------:|
48 | | input_data | Input data for the model | string |
49 |
50 | ### Response
51 | | Name | Description | Type |
52 | |------------------------|-------------------------|-------:|
53 | | 200 OK | Inference succeeded | string |
54 |
55 | ### Response Body
56 | - json
57 |
58 | | Name | Description | Type |
59 | |------------------------|-------------------------|-------:|
60 | | result | Inference result | string |
61 |
62 | ## 3. Model download
63 | - Download a model file stored on the server.
64 |
65 | HTTP
66 | ```
67 | POST http://SERVER_ADDR/model/download/{name}
68 | ```
69 | ### URI Parameters
70 |
71 | | Name | Description | Type |
72 | |------------------------|-------------------------|-------:|
73 | | name | File in the server's model folder | string |
74 |
75 |
76 | ### Request Body
77 | | Name | Description | Type |
78 | |------------------------|-------------------------|-------:|
79 | | model_file | Model file to download | string |
80 |
81 | ### Response
82 | | Name | Description | Type |
83 | |------------------------|-------------------------|-------:|
84 | | 200 OK | Model download succeeded | string |
85 |
86 |
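The Example section below covers only upload and inference, so here is a hedged Python sketch of this download call. It assumes that `model_file` names the file to fetch and that the server returns the file bytes in the response body.

```
import requests

SERVER_ADDR = "localhost:5000"  # placeholder address

# Assumption: 'model_file' names the file to fetch; the response body is the file itself.
resp = requests.post("http://" + SERVER_ADDR + "/model/download/knn_iris",
                     data={"model_file": "knn_iris.onnx"})
with open("knn_iris.onnx", "wb") as f:
    f.write(resp.content)
```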
87 | ## 4. Model usage statistics
88 |
89 | # Example
90 |
91 | ## Uploading a model and running inference
92 |
93 | ### Model inference
94 | ```
95 | curl -X POST -F "input_data=@{input.jpg}" http://SERVER_ADDR/model/predict/knn_iris
96 | ```
97 | - URI parameters
98 | - name = knn_iris
99 | - Body parameters
100 | - input_data = contents of the input.jpg file
101 | - Response
102 | ```
103 | 200 OK
104 | ```
105 | - Response Body
106 | ```
107 | {
108 | "result": "9"
109 | }
110 | ```
111 |
112 |
113 | ### Model upload and json file reading
114 | ```
115 | curl -X POST -F "model_file=@{knn_iris.onnx}" -F "jsonf=@{ex.json}" http://SERVER_ADDR/model/upload/knn_iris
116 | ```
117 | - URI parameters
118 | - name = knn_iris
119 | - Body parameters
120 | - model_file = contents of the knn_iris.onnx file
121 | - jsonf = the ex.json file
122 | - Response
123 | ```
124 | 200 OK
125 | ```
126 |
127 |
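The same two calls, sketched in Python with the `requests` library; the file names and the model name match the curl examples above, and SERVER_ADDR is a placeholder.

```
import requests

SERVER_ADDR = "localhost:5000"  # placeholder address

# Model upload and json file reading (mirrors the curl example above)
with open("knn_iris.onnx", "rb") as mf, open("ex.json", "rb") as jf:
    resp = requests.post("http://" + SERVER_ADDR + "/model/upload/knn_iris",
                         files={"model_file": mf, "jsonf": jf})
print(resp.status_code)  # 200 OK on success

# Model inference (mirrors the curl example above)
with open("input.jpg", "rb") as f:
    resp = requests.post("http://" + SERVER_ADDR + "/model/predict/knn_iris",
                         files={"input_data": f})
print(resp.json())  # e.g. {"result": "9"}
```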
--------------------------------------------------------------------------------
/onnx_distinguish_run.py:
--------------------------------------------------------------------------------
1 | import onnx
2 | import onnxruntime as ort
3 | import numpy as np
4 | from sklearn.metrics import accuracy_score, f1_score
5 | # classification evaluation metrics
6 | from ONNX_Registry_ver1_2.onnx_to_nengo_model import toNengoModel, classification_accuracy, classification_error, objective
7 |
8 | import nengo
9 | import nengo_dl
10 | import tensorflow as tf
11 |
12 | """
13 | Assumes the onnx file was created with
14 | skl2onnx -> https://github.com/onnx/onnx/blob/master/docs/Operators-ml.md
15 | keras2onnx -> https://github.com/onnx/onnx/blob/master/docs/Operators.md
16 | or tensorflow2onnx.
17 | """
18 |
19 | # Model identification module class
20 | # Main function 1: determine which framework the model was created with
21 | # Main function 2: determine which operator set the model is based on -> ai.onnx / onnx.ml
22 | class Distinguish_onnx:
23 | def __init__(self, model_path):
24 | self.model_path = model_path
25 | self.model_framework = None
26 | self.testX = None  # test data
27 | self.onnx_load_model = onnx.load_model(model_path)
28 | self.model_type = None
29 |
30 | # distinguish whether the model type is snn or not
31 | # which framework was the model created with?
32 | def getModelFramework(self):
33 | # framework that produced the onnx file
34 | # distinguish whether the model type is snn or not
35 | if self.model_type is None:
36 | self.model_type = self.onnx_load_model.producer_name
37 | self.model_type = self.model_type.replace('2onnx','')
38 | print(self.model_type)
39 | self.model_framework = self.model_type
40 | if self.model_type == 'skl':
41 | self.model_framework = 'scikit-learn'
42 | return 'ML ' + self.model_framework
43 | elif self.model_type == 'keras':
44 | self.model_framework = 'keras'
45 | return 'DL ' + self.model_framework
46 | elif self.model_type =='tf':
47 | self.model_framework = 'tensorflow'
48 | return 'DL ' + self.model_framework
49 | elif self.model_type == 'snn':
50 | print('model_type is snn')
51 | self.model_framework = 'nengo'
52 | return 'SNN ' + self.model_framework
53 |
54 | else:  # unknown producer
55 | raise ValueError('Cannot determine which framework was used')
56 |
57 | # which operators does the model use?
58 | def getModelOperator(self):
59 | for i in range(len(self.onnx_load_model.graph.node)):
60 | op_type = self.onnx_load_model.graph.node[i].op_type.lower()
61 | if op_type == "lif" or op_type == "lifrate" or op_type == "adaptivelif" \
62 | or op_type == "adaptivelifrate" or op_type == "izhikevich" \
63 | or op_type == "softlifrate":
64 | self.model_type = 'snn'
65 | print(self.model_type)
66 | return
67 | return
68 |
69 | # distinguish dl from ml via the "ai.onnx" domain
70 | def getModeldomain(self):
71 | model_domain_operator = self.onnx_load_model.domain
72 | return model_domain_operator
73 |
74 | # inference with ONNX Runtime - ml, dl
75 | def ort_run(self, testX):
76 | self.testX = testX
77 | sess = ort.InferenceSession(self.model_path)
78 | input_name = sess.get_inputs()[0].name
79 | label_name = sess.get_outputs()[0].name
80 | pred_onx = sess.run([label_name], {input_name: self.testX.astype(np.float32)})[0]
81 | print('-- Inference complete --')
82 |
83 | return pred_onx
84 |
85 | def nengo_run(self, testX):
86 | self.testX = testX
87 | print('preparing mnist data')
88 |
89 | # convert the onnx model to a nengo model
90 | otn = toNengoModel(self.model_path)
91 | model = otn.get_model()
92 | inp = otn.get_inputProbe()
93 | pre_layer = otn.get_endLayer()
94 |
95 | # run the model
96 | with model:
97 | out_p = nengo.Probe(pre_layer)
98 | out_p_filt = nengo.Probe(pre_layer, synapse=0.01)
99 |
100 | # ----------------------------------------------------------- run
101 | sim = nengo_dl.Simulator(model, device="/cpu:0")
102 |
103 | # when testing our network with spiking neurons we will need to run it
104 | # over time, so we repeat the input/target data for a number of
105 | # timesteps.
106 |
107 | n_steps = 30
108 | print(self.testX.shape) # 30, 28, 28, 1
109 | self.testX = self.testX.reshape((self.testX.shape[0], -1))
110 | print(self.testX.shape) # 30, 784
111 | test_images = np.tile(self.testX[:, None, :], (1, n_steps, 1))
112 | print(test_images.shape)
113 |
114 | # load parameters
115 | print('load_params')
116 | sim.load_params("weights/mnist_params_adam_0.001_3_100")
117 |
118 | sim.compile(loss={out_p_filt: classification_accuracy})
119 | data = sim.predict(test_images)
120 | sim.close()
121 | print('simulator closed')
122 | return data
123 |
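# Usage sketch (illustrative): driving the class above end to end.
# 'model.onnx' and testX are placeholders.
#
#   import numpy as np
#   dist = Distinguish_onnx('model.onnx')
#   dist.getModelOperator()          # sets model_type to 'snn' if LIF-style ops are found
#   print(dist.getModelFramework())  # e.g. 'ML scikit-learn' or 'DL keras'
#   testX = np.random.rand(1, 4).astype(np.float32)  # placeholder test data
#   if dist.model_type == 'snn':
#       out = dist.nengo_run(testX)
#   else:
#       out = dist.ort_run(testX)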
--------------------------------------------------------------------------------
/templates/netron_wrapper.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 | Intelligent Component Registry
7 |
8 |
9 |
10 |
11 |
12 |
71 |
72 |
73 |
74 |
75 |
76 |
77 |
78 |
79 | {{model_name}}
80 |
81 | Download
82 |
83 |
84 |
85 |
86 | {{name}}
87 | {{version}}
88 |
89 |
90 | Author
91 | Component Creation Author : {{ author }}
92 |
93 | Keywords
94 | Component search keywords
95 | {{ keywords }}
96 |
97 | License
98 | Component License : {{ license }}
99 |
100 |
101 |
102 |
103 |
104 | -
105 |
106 | package.json
107 |
108 | -
109 |
110 | model Name: {{model_name}}
111 | -
112 |
113 | Name: {{name}}
114 | -
115 |
116 | Version: {{version}}
117 | -
118 |
119 | Description: {{description}}
120 | -
121 |
122 | Author : {{author}}
123 | -
124 |
125 | Keywords: {{keywords}}
126 | -
127 |
128 | license: {{license}}
129 |
130 |
131 |
132 |
134 |
135 |
136 |
137 |
138 |
139 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Contributor Covenant Code of Conduct
2 |
3 | ## Our Pledge
4 |
5 | We as members, contributors, and leaders pledge to make participation in our
6 | community a harassment-free experience for everyone, regardless of age, body
7 | size, visible or invisible disability, ethnicity, sex characteristics, gender
8 | identity and expression, level of experience, education, socio-economic status,
9 | nationality, personal appearance, race, religion, or sexual identity
10 | and orientation.
11 |
12 | We pledge to act and interact in ways that contribute to an open, welcoming,
13 | diverse, inclusive, and healthy community.
14 |
15 | ## Our Standards
16 |
17 | Examples of behavior that contributes to a positive environment for our
18 | community include:
19 |
20 | * Demonstrating empathy and kindness toward other people
21 | * Being respectful of differing opinions, viewpoints, and experiences
22 | * Giving and gracefully accepting constructive feedback
23 | * Accepting responsibility and apologizing to those affected by our mistakes,
24 | and learning from the experience
25 | * Focusing on what is best not just for us as individuals, but for the
26 | overall community
27 |
28 | Examples of unacceptable behavior include:
29 |
30 | * The use of sexualized language or imagery, and sexual attention or
31 | advances of any kind
32 | * Trolling, insulting or derogatory comments, and personal or political attacks
33 | * Public or private harassment
34 | * Publishing others' private information, such as a physical or email
35 | address, without their explicit permission
36 | * Other conduct which could reasonably be considered inappropriate in a
37 | professional setting
38 |
39 | ## Enforcement Responsibilities
40 |
41 | Community leaders are responsible for clarifying and enforcing our standards of
42 | acceptable behavior and will take appropriate and fair corrective action in
43 | response to any behavior that they deem inappropriate, threatening, offensive,
44 | or harmful.
45 |
46 | Community leaders have the right and responsibility to remove, edit, or reject
47 | comments, commits, code, wiki edits, issues, and other contributions that are
48 | not aligned to this Code of Conduct, and will communicate reasons for moderation
49 | decisions when appropriate.
50 |
51 | ## Scope
52 |
53 | This Code of Conduct applies within all community spaces, and also applies when
54 | an individual is officially representing the community in public spaces.
55 | Examples of representing our community include using an official e-mail address,
56 | posting via an official social media account, or acting as an appointed
57 | representative at an online or offline event.
58 |
59 | ## Enforcement
60 |
61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be
62 | reported to the community leaders responsible for enforcement at
63 | @jyheo.
64 | All complaints will be reviewed and investigated promptly and fairly.
65 |
66 | All community leaders are obligated to respect the privacy and security of the
67 | reporter of any incident.
68 |
69 | ## Enforcement Guidelines
70 |
71 | Community leaders will follow these Community Impact Guidelines in determining
72 | the consequences for any action they deem in violation of this Code of Conduct:
73 |
74 | ### 1. Correction
75 |
76 | **Community Impact**: Use of inappropriate language or other behavior deemed
77 | unprofessional or unwelcome in the community.
78 |
79 | **Consequence**: A private, written warning from community leaders, providing
80 | clarity around the nature of the violation and an explanation of why the
81 | behavior was inappropriate. A public apology may be requested.
82 |
83 | ### 2. Warning
84 |
85 | **Community Impact**: A violation through a single incident or series
86 | of actions.
87 |
88 | **Consequence**: A warning with consequences for continued behavior. No
89 | interaction with the people involved, including unsolicited interaction with
90 | those enforcing the Code of Conduct, for a specified period of time. This
91 | includes avoiding interactions in community spaces as well as external channels
92 | like social media. Violating these terms may lead to a temporary or
93 | permanent ban.
94 |
95 | ### 3. Temporary Ban
96 |
97 | **Community Impact**: A serious violation of community standards, including
98 | sustained inappropriate behavior.
99 |
100 | **Consequence**: A temporary ban from any sort of interaction or public
101 | communication with the community for a specified period of time. No public or
102 | private interaction with the people involved, including unsolicited interaction
103 | with those enforcing the Code of Conduct, is allowed during this period.
104 | Violating these terms may lead to a permanent ban.
105 |
106 | ### 4. Permanent Ban
107 |
108 | **Community Impact**: Demonstrating a pattern of violation of community
109 | standards, including sustained inappropriate behavior, harassment of an
110 | individual, or aggression toward or disparagement of classes of individuals.
111 |
112 | **Consequence**: A permanent ban from any sort of public interaction within
113 | the community.
114 |
115 | ## Attribution
116 |
117 | This Code of Conduct is adapted from the [Contributor Covenant][homepage],
118 | version 2.0, available at
119 | https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
120 |
121 | Community Impact Guidelines were inspired by [Mozilla's code of conduct
122 | enforcement ladder](https://github.com/mozilla/diversity).
123 |
124 | [homepage]: https://www.contributor-covenant.org
125 |
126 | For answers to common questions about this code of conduct, see the FAQ at
127 | https://www.contributor-covenant.org/faq. Translations are available at
128 | https://www.contributor-covenant.org/translations.
129 |
--------------------------------------------------------------------------------
/n3ml-onnx/stbp export.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import time
3 | import argparse
4 | import torch
5 | import torchvision
6 | from torchvision.transforms import transforms
7 | import torch.nn as nn
8 | import torch.optim as optim
9 | import matplotlib.pyplot as plt
10 | import n3ml.model
11 | import onnx
12 | from n3ml.layer import _Wu
13 | from tqdm import tqdm
14 | np.random.seed(0)
15 | torch.manual_seed(0)
16 |
17 |
18 | def validate(val_loader, model, criterion):
19 |
20 | total_images = 0
21 | num_corrects = 0
22 | total_loss = 0
23 |
24 | for step, (images, labels) in enumerate(val_loader):
25 | images = images.cuda()
26 | labels = labels.cuda()
27 |
28 | preds = model(images)
29 | labels_ = torch.zeros(torch.numel(labels), 10).cuda()
30 | labels_ = labels_.scatter_(1, labels.view(-1, 1), 1)
31 |
32 | loss = criterion(preds, labels_)
33 |
34 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
35 | total_loss += loss.cpu().detach().numpy() * images.size(0)
36 | total_images += images.size(0)
37 |
38 | val_acc = num_corrects.float() / total_images
39 | val_loss = total_loss / total_images
40 |
41 | return val_acc, val_loss
42 |
43 |
44 | def train(train_loader, model, criterion, optimizer):
45 |
46 | total_images = 0
47 | num_corrects = 0
48 | total_loss = 0
49 |
50 | list_loss = []
51 | list_acc = []
52 |
53 | for step, (images, labels) in enumerate(train_loader):
54 |
55 | images = images.cuda()
56 | labels = labels.cuda()
57 |
58 | preds = model(images)
59 |
60 | labels_ = torch.zeros(torch.numel(labels), 10).cuda()
61 | labels_ = labels_.scatter_(1, labels.view(-1, 1), 1)
62 |
63 | loss = criterion(preds, labels_)
64 |
65 | optimizer.zero_grad()
66 | loss.backward()
67 | optimizer.step()
68 |
69 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
70 | total_loss += loss.cpu().detach().numpy() * images.size(0)
71 | total_images += images.size(0)
72 |
73 | if total_images > 0: # and total_images % 30 == 0
74 | list_loss.append(total_loss / total_images)
75 | list_acc.append(float(num_corrects) / total_images)
76 |
77 |
78 | train_acc = num_corrects.float() / total_images
79 | train_loss = total_loss / total_images
80 |
81 | return train_acc, train_loss
82 |
83 |
84 | def app(opt):
85 | print(opt)
86 |
87 | # Load MNIST / FashionMNIST dataset
88 | train_loader = torch.utils.data.DataLoader(
89 | torchvision.datasets.MNIST(
90 | opt.data,
91 | train=True,
92 | download = True,
93 | transform=torchvision.transforms.Compose([
94 | transforms.ToTensor()])), # , transforms.Lambda(lambda x: x * 32)
95 | drop_last=True,
96 | batch_size=opt.batch_size,
97 | shuffle=True)
98 |
99 | # Load MNIST/ FashionMNIST dataset
100 | val_loader = torch.utils.data.DataLoader(
101 | torchvision.datasets.MNIST(
102 | opt.data,
103 | train=False,
104 | download=True,
105 | transform=torchvision.transforms.Compose([
106 | transforms.ToTensor(), transforms.Lambda(lambda x: x * 32)])),
107 | drop_last=True,
108 | batch_size=opt.batch_size,
109 | shuffle=True)
110 |
111 |
112 | model = n3ml.model.Wu2018(batch_size=opt.batch_size, time_interval=opt.time_interval).cuda()
113 | criterion = nn.MSELoss()
114 | optimizer = torch.optim.Adam(model.parameters(), lr = opt.lr)
115 | lr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60, 90])
116 |
117 | for epoch in tqdm(range(opt.num_epochs)):
118 | start = time.time()
119 | train_acc, train_loss = train(train_loader, model, criterion, optimizer)
120 | end = time.time()
121 | print('total time: {:.2f}s - epoch: {} - accuracy: {} - loss: {}'.format(end-start, epoch, train_acc, train_loss))
122 |
123 | lr_scheduler.step()
124 |
125 |
126 | print("train finish")
127 | print("validation start")
128 |
129 | model.eval()
130 | val_acc, val_loss = validate(val_loader, model, criterion)
131 | print("val_acc :",val_acc)
132 | print("val_loss :", val_loss)
133 | print("validation finish")
134 |
135 | from types import MethodType
136 |
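# Override forward with a non-spiking version so torch.onnx.export can trace
# the network as a standard feed-forward graph.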
137 | def forward(self, input):
138 | x = self.conv1(input)
139 | x = self.avgpool(x)
140 | x = self.conv2(x)
141 | x = self.avgpool(x)
142 | x = x.view(-1, 7 * 7 * 32)
143 | x = self.fc1(x)
144 | x = self.fc2(x)
145 |
146 | return x
147 |
148 | model.forward = MethodType(forward, model)
149 |
150 | dummy_input = torch.randn(opt.batch_size, 1,28, 28, dtype=torch.float32, device="cuda")
151 | torch.onnx.export(model, dummy_input, "stbp.onnx")
152 |
153 | val_acc, val_loss = validate(val_loader, model, criterion)
154 | print("after change", val_acc)
155 | print("after change", val_loss)
156 |
157 |
158 | if __name__ == '__main__':
159 | parser = argparse.ArgumentParser()
160 | parser.add_argument('--data', default='data')
161 | parser.add_argument('--num_classes', default=10, type=int)
162 | parser.add_argument('--num_epochs', default=2, type=int)
163 | parser.add_argument('--batch_size', default=1024, type=int)
164 | parser.add_argument('--num_workers', default=-1, type=int)
165 | parser.add_argument('--time_interval', default=1, type=int)
166 | parser.add_argument('--lr', default=1e-03, type=float)
167 |
168 | app(parser.parse_args())
169 |
--------------------------------------------------------------------------------
/templates/onnx_manager.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 | Intelligent Component Registry
7 |
8 |
9 |
10 |
11 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 | Intelligent Component Registry
63 |
64 |
65 | File Upload
66 |
67 |
68 |
69 |
70 |
71 |
72 |
73 |
74 |
75 | File Folders
76 |
98 |
99 | File Name Last Modified File Size Description
100 |
101 |
109 |
110 |
111 |
112 |
113 |
114 |
115 |
116 | File Download
117 |
121 |
122 |
123 |
124 |
125 |
126 | File Visualization
127 |
131 |
132 |
133 |
134 |
135 |
136 | Logs
137 |
138 |
139 |
140 |
141 |
159 |
160 |
161 |
162 |
163 |
--------------------------------------------------------------------------------
/softlif_dense_export.py:
--------------------------------------------------------------------------------
1 | # n3ml2
2 |
3 | import time
4 | import argparse
5 |
6 | import torch
7 | import torch.nn as nn
8 | import torch.optim as optim
9 | import torchvision
10 |
11 | from n3ml.network import Network
12 | from n3ml.layer import SoftLIF
13 | import torchvision.transforms as transforms
14 | import onnx
15 | import onnx.numpy_helper as numpy_helper
16 |
17 |
18 | class Hunsberger2015(Network):
19 | def __init__(self):
20 | super(Hunsberger2015, self).__init__()
21 |
22 | def forward(self, x):
23 | for m in self.named_children():
24 | x = m[1](x)
25 | return x
26 |
27 |
28 | def validate(val_loader, model, criterion):
29 | model.eval()
30 |
31 | total_images = 0
32 | num_corrects = 0
33 | total_loss = 0
34 |
35 | for step, (images, labels) in enumerate(val_loader):
36 | images = images.cuda()
37 | labels = labels.cuda()
38 |
39 | preds = model(images)
40 |
41 | loss = criterion(preds, labels)
42 |
43 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
44 | total_loss += loss.cpu().detach().numpy() * images.size(0)
45 | total_images += images.size(0)
46 |
47 | val_acc = num_corrects.float() / total_images
48 | val_loss = total_loss / total_images
49 |
50 | return val_acc, val_loss
51 |
52 |
53 | def train(train_loader, model, criterion, optimizer):
54 | model.train()
55 |
56 | total_images = 0
57 | num_corrects = 0
58 | total_loss = 0
59 |
60 | for step, (images, labels) in enumerate(train_loader):
61 | images = images.cuda()
62 | labels = labels.cuda()
63 |
64 | preds = model(images)
65 |
66 | loss = criterion(preds, labels)
67 |
68 | optimizer.zero_grad()
69 | loss.backward()
70 | optimizer.step()
71 |
72 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
73 | total_loss += loss.cpu().detach().numpy() * images.size(0)
74 | total_images += images.size(0)
75 |
76 | train_acc = num_corrects.float() / total_images
77 | train_loss = total_loss / total_images
78 |
79 | return train_acc, train_loss
80 |
81 |
82 | def softlif_onnx_export(model, model_name, opt):
83 | # onnx export start
84 | model_layer_info = []
85 | model_weight = []
86 | # save the model's layer names and objects
87 | for i in model.named_children():
88 | model_layer_info.append(i)
89 | # save the model's weights
90 | for i in model.named_parameters():
91 | model_weight.append(i)
92 |
93 | # create a fresh model
94 | model = Hunsberger2015()
95 |
96 | # rebuild the layers, replacing each softlif with relu
97 | for i in range(len(model_layer_info)):
98 | if "LIF" in str(model_layer_info[i][1]): # soft lif layer
99 | model.add_module(model_layer_info[i][0], nn.ReLU())
100 | else:
101 | model.add_module(model_layer_info[i][0], model_layer_info[i][1])
102 |
103 | # copy the original model's weights back
104 | for i in model_weight:
105 | model.state_dict()[i[0]].data.copy_(i[1])
106 |
107 | dummy_input = torch.randn(opt.batch_size, 1,28, 28, dtype=torch.float32).cuda()
108 | torch.onnx.export(model, dummy_input, model_name)
109 |
110 | # onnx export end
111 | print("onnx export")
112 |
113 | onnx_model = onnx.load(model_name)
114 |
115 | for i in range(len(onnx_model.graph.node)):
116 | if "relu" in onnx_model.graph.node[i].name.lower():
117 | onnx_model.graph.node[i].name += "_softlif"
118 |
119 | onnx.save(onnx_model, model_name)
120 |
121 | def app(opt):
122 | print(opt)
123 |
124 | train_loader = torch.utils.data.DataLoader(
125 | torchvision.datasets.MNIST(
126 | opt.data,
127 | train=True,
128 | download=True,
129 | transform=torchvision.transforms.Compose([transforms.ToTensor()])),
130 | batch_size=opt.batch_size,
131 | shuffle=True)
132 |
133 | val_loader = torch.utils.data.DataLoader(
134 | torchvision.datasets.MNIST(
135 | opt.data,
136 | train=False,
137 | download=True,
138 | transform=torchvision.transforms.Compose([transforms.ToTensor()])),
139 | batch_size=opt.batch_size)
140 |
141 | model = Hunsberger2015()
142 |
143 | model.add_module('flatten', nn.Flatten())
144 | model.add_module('fc1', nn.Linear(784, 128, bias=False))
145 | model.add_module('slif1', SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
146 | model.add_module('fc2', nn.Linear(128, 64, bias=False))
147 | model.add_module('slif2', SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
148 | model.add_module('fc6', nn.Linear(64, opt.num_classes, bias=False))
149 |
150 | model.cuda()
151 |
152 | criterion = nn.CrossEntropyLoss()
153 |
154 | optimizer = torch.optim.SGD(model.parameters(), lr=opt.lr, momentum=opt.momentum)
155 |
156 | lr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60, 90])
157 |
158 | best_acc = 0
159 |
160 | for epoch in range(opt.num_epochs):
161 | start = time.time()
162 | train_acc, train_loss = train(train_loader, model, criterion, optimizer)
163 | end = time.time()
164 | print('total time: {:.2f}s - epoch: {} - accuracy: {} - loss: {}'.format(end-start, epoch, train_acc, train_loss))
165 |
166 | val_acc, val_loss = validate(val_loader, model, criterion)
167 |
168 | if val_acc > best_acc:
169 | best_acc = val_acc
170 | state = {
171 | 'epoch': epoch,
172 | 'model': model.state_dict(),
173 | 'best_acc': best_acc,
174 | 'optimizer': optimizer.state_dict()}
175 | torch.save(state, opt.pretrained)
176 | print('in test, epoch: {} - best accuracy: {} - loss: {}'.format(epoch, best_acc, val_loss))
177 |
178 | lr_scheduler.step()
179 |
180 | softlif_onnx_export(model, opt.name, opt)
181 |
182 | if __name__ == '__main__':
183 | parser = argparse.ArgumentParser()
184 |
185 | parser.add_argument('--data', default='data')
186 | parser.add_argument('--num_classes', default=10, type=int)
187 | parser.add_argument('--num_epochs', default=1, type=int)
188 | parser.add_argument('--batch_size', default=32, type=int)
189 | parser.add_argument('--lr', default=4e-2, type=float)
190 | parser.add_argument('--momentum', default=0.9, type=float)
191 | parser.add_argument('--pretrained', default='pretrained/softlif_dynamic.pt')
192 |
193 | parser.add_argument('--amplitude', default=0.063, type=float)
194 | parser.add_argument('--tau_ref', default=0.001, type=float)
195 | parser.add_argument('--tau_rc', default=0.05, type=float)
196 | parser.add_argument('--gain', default=0.825, type=float)
197 | parser.add_argument('--sigma', default=0.02, type=float)
198 | parser.add_argument('--name', default='softlif_dense.onnx')
199 |
200 | app(parser.parse_args())
--------------------------------------------------------------------------------
/templates/mainpage.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 | Intelligent Component Registry
6 |
7 |
8 |
9 |
10 |
105 |
106 |
107 |
108 |
109 |
110 |
111 |
112 | Intelligent Component Registry
113 |
114 |
119 |
120 |
128 |
129 |
130 |
131 |
132 |
133 |
134 | Click to Upload
135 |
136 |
137 |
138 |
139 |
140 |
141 |
142 |
143 |
144 |
145 |
146 |
147 |
148 |
149 |
176 |
177 |
178 |
179 |
180 |
181 | Logs
182 |
183 |
184 |
185 |
186 |
187 |
188 |
189 |
207 |
208 |
--------------------------------------------------------------------------------
/n3ml-onnx/softlif model load.py:
--------------------------------------------------------------------------------
1 | # n3ml2
2 |
3 | import time
4 | import argparse
5 |
6 | import torch
7 | import torch.nn as nn
8 | import torch.optim as optim
9 | import torchvision
10 |
11 | from n3ml.network import Network
12 | from n3ml.layer import SoftLIF
13 | import torchvision.transforms as transforms
14 | import onnx
15 | import onnx.numpy_helper as numpy_helper
16 |
17 |
18 | class Hunsberger2015(Network):
19 | def __init__(self):
20 | super(Hunsberger2015, self).__init__()
21 |
22 | def forward(self, x):
23 | for m in self.named_children():
24 | x = m[1](x)
25 | return x
26 |
27 |
28 | def validate(val_loader, model, criterion):
29 | model.eval()
30 |
31 | total_images = 0
32 | num_corrects = 0
33 | total_loss = 0
34 |
35 | for step, (images, labels) in enumerate(val_loader):
36 | images = images.cuda()
37 | labels = labels.cuda()
38 |
39 | preds = model(images)
40 |
41 | loss = criterion(preds, labels)
42 |
43 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
44 | total_loss += loss.cpu().detach().numpy() * images.size(0)
45 | total_images += images.size(0)
46 |
47 | val_acc = num_corrects.float() / total_images
48 | val_loss = total_loss / total_images
49 |
50 | return val_acc, val_loss
51 |
52 |
53 | def train(train_loader, model, criterion, optimizer):
54 | model.train()
55 |
56 | total_images = 0
57 | num_corrects = 0
58 | total_loss = 0
59 |
60 | for step, (images, labels) in enumerate(train_loader):
61 | images = images.cuda()
62 | labels = labels.cuda()
63 |
64 | preds = model(images)
65 |
66 | loss = criterion(preds, labels)
67 |
68 | optimizer.zero_grad()
69 | loss.backward()
70 | optimizer.step()
71 |
72 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
73 | total_loss += loss.cpu().detach().numpy() * images.size(0)
74 | total_images += images.size(0)
75 |
76 | train_acc = num_corrects.float() / total_images
77 | train_loss = total_loss / total_images
78 |
79 | return train_acc, train_loss
80 |
81 |
82 | def load_onnx_to_softlif(model_path, opt=None):
83 | onnx_model = onnx.load_model(model_path)
84 | graph = onnx_model.graph
85 |
86 | nodes = [i for i in graph.node]
87 | weights = []
88 | for init in graph.initializer:
89 | weight = numpy_helper.to_array(init)
90 |
91 | if len(weight) == 2:
92 | weight = weight.T
93 | else:
94 | weight = numpy_helper.to_array(init)
95 |
96 | weights.append(weight)
97 |
98 | weights = sorted(weights, key=lambda x: len(x.shape), reverse=True)
99 |
100 | model = Hunsberger2015()
101 |
102 | for node in nodes:
103 | name = node.name
104 | op = node.op_type.lower()
105 |
106 | if op == "matmul": # dense without bias
107 | input_unit, output_unit = weights[0].shape[0], weights[0].shape[1]
108 | model.add_module(name, nn.Linear(input_unit, output_unit, bias=False))
109 | del weights[0]
110 |
111 | elif op == "relu":
112 | if opt is not None:
113 | model.add_module(name, SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
114 | else:
115 | model.add_module(name, SoftLIF())
116 |
117 | elif op == "flatten":
118 | model.add_module(name, nn.Flatten())
119 |
120 | elif op == "conv":
121 | input_channels = weights[0].shape[1]
122 | output_channels = weights[0].shape[0]
123 |
124 | kernel_size = 3
125 | stride = 1
126 | padding = 0
127 |
128 | for att in node.attribute:
129 | if att.name == "kernel_shape":
130 | kernel_size = att.ints[0]
131 | elif att.name == "strides":
132 | stride = att.ints[0]
133 | elif att.name == "pads":
134 | padding = att.ints[0]
135 |
136 | model.add_module(name, nn.Conv2d(input_channels,
137 | output_channels,
138 | kernel_size=kernel_size,
139 | stride=stride,
140 | padding=padding,
141 | bias=False))
142 |
143 | del weights[0]
144 |
145 | elif op == "averagepool":
146 | kernel_size = 2
147 |
148 | for att in node.attribute:
149 | if att.name == "kernel_shape":
150 | kernel_size = att.ints[0]
151 |
152 | model.add_module(name, nn.AvgPool2d(kernel_size=kernel_size))
153 |
154 | elif op == "pad":
155 | pass
156 |
157 | else:
158 | pass
159 |
160 | # generate model end
161 |
162 | # load model weight insert
163 | initializers = []
164 | for init in graph.initializer:
165 | initializers.append(numpy_helper.to_array(init))
166 |
167 | model_layer_info = [i for i in model.named_parameters()]
168 |
169 | initializers = sorted(initializers, key=lambda x: len(x.shape), reverse=True)
170 |
171 | for i in range(len(model_layer_info)):
172 | layer_name = model_layer_info[i][0]
173 |
174 | if "matmul" in layer_name.lower():
175 | weight = torch.from_numpy(initializers[i]).T
176 | else:
177 | weight = torch.from_numpy(initializers[i])
178 |
179 | model.state_dict()[layer_name].data.copy_(weight)
180 | # load model weight end
181 |
182 | print("load {}".format(model_path))
183 | return model
184 |
185 |
186 | def app(opt):
187 | print(opt)
188 |
189 | val_loader = torch.utils.data.DataLoader(
190 | torchvision.datasets.MNIST(
191 | opt.data,
192 | train=False,
193 | download=True,
194 | transform=torchvision.transforms.Compose([transforms.ToTensor()])),
195 | batch_size=opt.batch_size)
196 |
197 | model = load_onnx_to_softlif("result/softlif_conv.onnx", opt)
198 | model.cuda()
199 |
200 | criterion = nn.CrossEntropyLoss()
201 |
202 | val_acc, val_loss = validate(val_loader, model, criterion)
203 | print('in test, val accuracy: {} - loss: {}'.format(val_acc, val_loss))
204 |
205 |
206 |
207 |
208 | if __name__ == '__main__':
209 | parser = argparse.ArgumentParser()
210 |
211 | parser.add_argument('--data', default='data')
212 | parser.add_argument('--num_classes', default=10, type=int)
213 | parser.add_argument('--num_epochs', default=10, type=int)
214 | parser.add_argument('--batch_size', default=32, type=int)
215 | parser.add_argument('--lr', default=4e-2, type=float)
216 | parser.add_argument('--momentum', default=0.9, type=float)
217 | parser.add_argument('--pretrained', default='pretrained/softlif_dynamic.pt')
218 |
219 | parser.add_argument('--amplitude', default=0.063, type=float)
220 | parser.add_argument('--tau_ref', default=0.001, type=float)
221 | parser.add_argument('--tau_rc', default=0.05, type=float)
222 | parser.add_argument('--gain', default=0.825, type=float)
223 | parser.add_argument('--sigma', default=0.02, type=float)
224 |
225 | app(parser.parse_args())
--------------------------------------------------------------------------------
/n3ml-onnx/softlif model save.py:
--------------------------------------------------------------------------------
1 | # n3ml2
2 |
3 | import time
4 | import argparse
5 |
6 | import torch
7 | import torch.nn as nn
8 | import torch.optim as optim
9 | import torchvision
10 |
11 | from n3ml.network import Network
12 | from n3ml.layer import SoftLIF
13 | import torchvision.transforms as transforms
14 | import onnx
15 | import onnx.numpy_helper as numpy_helper
16 |
17 |
18 | class Hunsberger2015(Network):
19 | def __init__(self):
20 | super(Hunsberger2015, self).__init__()
21 |
22 | def forward(self, x):
23 | for m in self.named_children():
24 | x = m[1](x)
25 | return x
26 |
27 |
28 | def validate(val_loader, model, criterion):
29 | model.eval()
30 |
31 | total_images = 0
32 | num_corrects = 0
33 | total_loss = 0
34 |
35 | for step, (images, labels) in enumerate(val_loader):
36 | images = images.cuda()
37 | labels = labels.cuda()
38 |
39 | preds = model(images)
40 |
41 | loss = criterion(preds, labels)
42 |
43 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
44 | total_loss += loss.cpu().detach().numpy() * images.size(0)
45 | total_images += images.size(0)
46 |
47 | val_acc = num_corrects.float() / total_images
48 | val_loss = total_loss / total_images
49 |
50 | return val_acc, val_loss
51 |
52 |
53 | def train(train_loader, model, criterion, optimizer):
54 | model.train()
55 |
56 | total_images = 0
57 | num_corrects = 0
58 | total_loss = 0
59 |
60 | for step, (images, labels) in enumerate(train_loader):
61 | images = images.cuda()
62 | labels = labels.cuda()
63 |
64 | preds = model(images)
65 |
66 | loss = criterion(preds, labels)
67 |
68 | optimizer.zero_grad()
69 | loss.backward()
70 | optimizer.step()
71 |
72 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
73 | total_loss += loss.cpu().detach().numpy() * images.size(0)
74 | total_images += images.size(0)
75 |
76 | train_acc = num_corrects.float() / total_images
77 | train_loss = total_loss / total_images
78 |
79 | return train_acc, train_loss
80 |
81 |
82 | def save_softlif_to_onnx(model, model_name):
83 | model_layer_info = [i for i in model.named_children()]
84 | model_weight = [i for i in model.named_parameters()]
85 |
86 | model = Hunsberger2015()
87 |
88 | # softLIF -> RELU
89 | for i in range(len(model_layer_info)):
90 | if "LIF" in str(model_layer_info[i][1]):
91 | model.add_module(model_layer_info[i][0], nn.ReLU())
92 | else:
93 | model.add_module(model_layer_info[i][0], model_layer_info[i][1])
94 |
95 | # weight recovery
96 | for i in model_weight:
97 | model.state_dict()[i[0]].data.copy_(i[1])
98 |
99 | dummy_input = torch.randn(1, 1, 28, 28, dtype=torch.float32).cuda()
100 | torch.onnx.export(model, dummy_input, model_name)
101 | print("saved {}".format(model_name))
102 |
103 | # layer name change
104 | onnx_model = onnx.load(model_name)
105 |
106 | for i in range(len(onnx_model.graph.node)):
107 | if "relu" in onnx_model.graph.node[i].name.lower():
108 | onnx_model.graph.node[i].name += "_softlif"
109 |
110 | onnx.save(onnx_model, model_name)
111 | # layer name change end
112 |
113 |
114 | def app(opt):
115 | print(opt)
116 |
117 | train_loader = torch.utils.data.DataLoader(
118 | torchvision.datasets.MNIST(
119 | opt.data,
120 | train=True,
121 | download=True,
122 | transform=torchvision.transforms.Compose([transforms.ToTensor()])),
123 | batch_size=opt.batch_size,
124 | shuffle=True)
125 |
126 | val_loader = torch.utils.data.DataLoader(
127 | torchvision.datasets.MNIST(
128 | opt.data,
129 | train=False,
130 | download=True,
131 | transform=torchvision.transforms.Compose([transforms.ToTensor()])),
132 | batch_size=opt.batch_size)
133 |
134 | model = Hunsberger2015()
135 |
136 | model.add_module('conv1', nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1, bias=False))
137 | model.add_module('slif1', SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
138 | model.add_module('apool1', nn.AvgPool2d(kernel_size=2))
139 | model.add_module('conv2', nn.Conv2d(64, 192, kernel_size=3, padding=1, bias=False))
140 | model.add_module('slif2', SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
141 | model.add_module('apool2', nn.AvgPool2d(kernel_size=2))
142 | model.add_module('conv3', nn.Conv2d(192, 256, kernel_size=3, padding=1, bias=False))
143 | model.add_module('slif3', SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
144 | model.add_module('apool3', nn.AvgPool2d(kernel_size=2))
145 | model.add_module('flatten4', nn.Flatten())
146 | model.add_module('drop5', nn.Dropout())
147 | model.add_module('fc5', nn.Linear(256, 1024, bias=False))
148 | model.add_module('slif5', SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
149 | model.add_module('fc6', nn.Linear(1024, opt.num_classes, bias=False))
150 |
151 | model.cuda()
152 |
153 | criterion = nn.CrossEntropyLoss()
154 |
155 | optimizer = torch.optim.SGD(model.parameters(), lr=opt.lr, momentum=opt.momentum)
156 |
157 | lr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60, 90])
158 |
159 | best_acc = 0
160 |
161 | for epoch in range(opt.num_epochs):
162 | start = time.time()
163 | train_acc, train_loss = train(train_loader, model, criterion, optimizer)
164 | end = time.time()
165 | print('total time: {:.2f}s - epoch: {} - accuracy: {} - loss: {}'.format(end-start, epoch, train_acc, train_loss))
166 |
167 | val_acc, val_loss = validate(val_loader, model, criterion)
168 |
169 | if val_acc > best_acc:
170 | best_acc = val_acc
171 | state = {
172 | 'epoch': epoch,
173 | 'model': model.state_dict(),
174 | 'best_acc': best_acc,
175 | 'optimizer': optimizer.state_dict()}
176 | torch.save(state, opt.pretrained)
177 | print('in test, epoch: {} - best accuracy: {} - loss: {}'.format(epoch, best_acc, val_loss))
178 |
179 | lr_scheduler.step()
180 |
181 | save_softlif_to_onnx(model, "result/softlif_conv.onnx")
182 |
183 |
184 | if __name__ == '__main__':
185 | parser = argparse.ArgumentParser()
186 |
187 | parser.add_argument('--data', default='data')
188 | parser.add_argument('--num_classes', default=10, type=int)
189 | parser.add_argument('--num_epochs', default=1, type=int)
190 | parser.add_argument('--batch_size', default=32, type=int)
191 | parser.add_argument('--lr', default=4e-2, type=float)
192 | parser.add_argument('--momentum', default=0.9, type=float)
193 | parser.add_argument('--pretrained', default='pretrained/softlif_dynamic.pt')
194 |
195 | parser.add_argument('--amplitude', default=0.063, type=float)
196 | parser.add_argument('--tau_ref', default=0.001, type=float)
197 | parser.add_argument('--tau_rc', default=0.05, type=float)
198 | parser.add_argument('--gain', default=0.825, type=float)
199 | parser.add_argument('--sigma', default=0.02, type=float)
200 |
201 | app(parser.parse_args())
--------------------------------------------------------------------------------
/n3ml-onnx/softlif export.py:
--------------------------------------------------------------------------------
1 | import time
2 | import argparse
3 | import numpy as np
4 | import matplotlib.pyplot as plt
5 | import torch
6 | import torchvision
7 | import torch.nn as nn
8 | import torch.optim as optim
9 | from n3ml.model import Hunsberger2015
10 |
11 |
12 | class Plot:
13 | def __init__(self):
14 | plt.ion()
15 | self.fig, self.ax = plt.subplots(figsize=(10, 10))
16 | self.ax2 = self.ax.twinx()
17 | plt.title('Soft LIF')
18 |
19 | def update(self, y1, y2):
20 | x = torch.arange(y1.shape[0]) * 64 * 100
21 |
22 | ax1 = self.ax
23 | ax2 = self.ax2
24 |
25 | ax1.plot(x, y1, 'g')
26 | ax2.plot(x, y2, 'b')
27 |
28 | ax1.set_xlabel('number of images')
29 | ax1.set_ylabel('accuracy', color='g')
30 | ax2.set_ylabel('loss', color='b')
31 |
32 | self.fig.canvas.draw()
33 | self.fig.canvas.flush_events()
34 |
35 |
36 | def validate(val_loader, model, criterion):
37 | model.eval()
38 |
39 | total_images = 0
40 | num_corrects = 0
41 | total_loss = 0
42 |
43 | for step, (images, labels) in enumerate(val_loader):
44 | images = images.cuda()
45 | labels = labels.cuda()
46 |
47 | preds = model(images)
48 |
49 | loss = criterion(preds, labels)
50 |
51 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
52 | total_loss += loss.cpu().detach().numpy() * images.size(0)
53 | total_images += images.size(0)
54 |
55 | val_acc = num_corrects.float() / total_images
56 | val_loss = total_loss / total_images
57 |
58 | return val_acc, val_loss
59 |
60 |
61 | def train(train_loader, model, criterion, optimizer, list_acc, list_loss, plotter):
62 | model.train()
63 |
64 | total_images = 0
65 | num_corrects = 0
66 | total_loss = 0
67 |
68 | for step, (images, labels) in enumerate(train_loader):
69 | images = images.cuda()
70 | labels = labels.cuda()
71 |
72 | preds = model(images)
73 |
74 | loss = criterion(preds, labels)
75 |
76 | optimizer.zero_grad()
77 | loss.backward()
78 | optimizer.step()
79 |
80 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
81 | total_loss += loss.cpu().detach().numpy() * images.size(0)
82 | total_images += images.size(0)
83 |
84 | if step > 0 and step % 100 == 0:
85 | list_loss.append(total_loss / total_images)
86 | list_acc.append(num_corrects.float() / total_images)
87 |
88 | plotter.update(y1=np.array(list_acc), y2=np.array(list_loss))
89 |
90 | train_acc = num_corrects.float() / total_images
91 | train_loss = total_loss / total_images
92 |
93 | return train_acc, train_loss
94 |
95 |
96 | def app(opt):
97 | print(opt)
98 |
99 | train_loader = torch.utils.data.DataLoader(
100 | torchvision.datasets.CIFAR10(
101 | opt.data,
102 | train=True,
103 | transform=torchvision.transforms.Compose([
104 | torchvision.transforms.RandomCrop(24),
105 | torchvision.transforms.ToTensor()])),
106 | batch_size=opt.batch_size,
107 | shuffle=True,
108 | num_workers=opt.num_workers)
109 |
110 | val_loader = torch.utils.data.DataLoader(
111 | torchvision.datasets.CIFAR10(
112 | opt.data,
113 | train=False,
114 | transform=torchvision.transforms.Compose([
115 | torchvision.transforms.CenterCrop(24),
116 | torchvision.transforms.ToTensor()])),
117 | batch_size=opt.batch_size,
118 | num_workers=opt.num_workers)
119 |
120 | model = Hunsberger2015(num_classes=opt.num_classes, amplitude=opt.amplitude, tau_ref=opt.tau_ref,
121 | tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma).cuda()
122 | criterion = nn.CrossEntropyLoss()
123 |
124 | optimizer = torch.optim.SGD(model.parameters(), lr=opt.lr, momentum=opt.momentum)
125 |
126 | lr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60, 90])
127 |
128 | # for plot
129 | plotter = Plot()
130 |
131 | list_loss = []
132 | list_acc = []
133 |
134 | best_acc = 0
135 |
136 | for epoch in range(opt.num_epochs):
137 | start = time.time()
138 | train_acc, train_loss = train(train_loader, model, criterion, optimizer, list_acc, list_loss, plotter)
139 | end = time.time()
140 | print('total time: {:.2f}s - epoch: {} - accuracy: {} - loss: {}'.format(end-start, epoch, train_acc, train_loss))
141 |
142 | val_acc, val_loss = validate(val_loader, model, criterion)
143 |
144 | if val_acc > best_acc:
145 | best_acc = val_acc
146 | state = {
147 | 'epoch': epoch,
148 | 'model': model.state_dict(),
149 | 'best_acc': best_acc,
150 | 'optimizer': optimizer.state_dict()}
151 | torch.save(state, opt.pretrained)
152 | print('in test, epoch: {} - best accuracy: {} - loss: {}'.format(epoch, best_acc, val_loss))
153 |
154 | lr_scheduler.step()
155 |
156 | # training finish
157 |
158 | # method change
159 |
160 | temp_weights = []
161 | for i, param in enumerate(model.parameters()):
162 | print(param.data.shape)
163 | temp_weights.append(param.data)
164 |
165 | # change softlif -> relu
166 | model.extractor = nn.Sequential(
167 | nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False),
168 | nn.ReLU(),
169 | nn.AvgPool2d(kernel_size=2),
170 | nn.Conv2d(64, 192, kernel_size=3, padding=1, bias=False),
171 | nn.ReLU(),
172 | nn.AvgPool2d(kernel_size=2),
173 | nn.Conv2d(192, 256, kernel_size=3, padding=1, bias=False),
174 | nn.ReLU(),
175 | nn.AvgPool2d(kernel_size=2)
176 | )
177 | model.classifier = nn.Sequential(
178 | nn.Dropout(),
179 | nn.Linear(256, 1024, bias=False),
180 | nn.ReLU(),
181 | nn.Linear(1024, opt.num_classes, bias=False)
182 | )
183 |
184 | # weight initialize
185 | model.extractor[0].weight.data = temp_weights[0]
186 | model.extractor[3].weight.data = temp_weights[1]
187 | model.extractor[6].weight.data = temp_weights[2]
188 | model.classifier[1].weight.data = temp_weights[3]
189 | model.classifier[3].weight.data = temp_weights[4]
190 |
191 | model.cuda()
192 |
193 | val_acc, val_loss = validate(val_loader, model, criterion)
194 | print("val_acc :", val_acc)
195 | print(val_loss)
196 |
197 | # crop by 24x24
198 | dummy_input = torch.randn(opt.batch_size, 3,24, 24, dtype=torch.float32).cuda()
199 | torch.onnx.export(model, dummy_input, "softlif.onnx")
200 |
201 |
202 | if __name__ == '__main__':
203 | parser = argparse.ArgumentParser()
204 |
205 | parser.add_argument('--data', default='data')
206 | parser.add_argument('--num_classes', default=10, type=int)
207 | parser.add_argument('--num_epochs', default=10, type=int)
208 | parser.add_argument('--batch_size', default=256, type=int)
209 | parser.add_argument('--num_workers', default=8, type=int)
210 | parser.add_argument('--lr', default=4e-2, type=float)
211 | parser.add_argument('--momentum', default=0.9, type=float)
212 | parser.add_argument('--pretrained', default='pretrained/softlif.pt')
213 |
214 | parser.add_argument('--amplitude', default=0.063, type=float)
215 | parser.add_argument('--tau_ref', default=0.001, type=float)
216 | parser.add_argument('--tau_rc', default=0.05, type=float)
217 | parser.add_argument('--gain', default=0.825, type=float)
218 | parser.add_argument('--sigma', default=0.02, type=float)
219 |
220 | app(parser.parse_args())
221 |
--------------------------------------------------------------------------------
/softlif_conv_export.py:
--------------------------------------------------------------------------------
1 | # n3ml2
2 |
3 | import time
4 | import argparse
5 |
6 | import torch
7 | import torch.nn as nn
8 | import torch.optim as optim
9 | import torchvision
10 |
11 | from n3ml.network import Network
12 | from n3ml.layer import SoftLIF
13 | import torchvision.transforms as transforms
14 | import onnx
15 | import onnx.numpy_helper as numpy_helper
16 |
17 |
18 | class Hunsberger2015(Network):
19 | def __init__(self):
20 | super(Hunsberger2015, self).__init__()
21 |
22 | def forward(self, x):
23 | for m in self.named_children():
24 | x = m[1](x)
25 | return x
26 |
27 |
28 | def validate(val_loader, model, criterion):
29 | model.eval()
30 |
31 | total_images = 0
32 | num_corrects = 0
33 | total_loss = 0
34 |
35 | for step, (images, labels) in enumerate(val_loader):
36 | images = images.cuda()
37 | labels = labels.cuda()
38 |
39 | preds = model(images)
40 |
41 | loss = criterion(preds, labels)
42 |
43 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
44 | total_loss += loss.cpu().detach().numpy() * images.size(0)
45 | total_images += images.size(0)
46 |
47 | val_acc = num_corrects.float() / total_images
48 | val_loss = total_loss / total_images
49 |
50 | return val_acc, val_loss
51 |
52 |
53 | def train(train_loader, model, criterion, optimizer):
54 | model.train()
55 |
56 | total_images = 0
57 | num_corrects = 0
58 | total_loss = 0
59 |
60 | for step, (images, labels) in enumerate(train_loader):
61 | images = images.cuda()
62 | labels = labels.cuda()
63 |
64 | preds = model(images)
65 |
66 | loss = criterion(preds, labels)
67 |
68 | optimizer.zero_grad()
69 | loss.backward()
70 | optimizer.step()
71 |
72 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
73 | total_loss += loss.cpu().detach().numpy() * images.size(0)
74 | total_images += images.size(0)
75 |
76 | train_acc = num_corrects.float() / total_images
77 | train_loss = total_loss / total_images
78 |
79 | return train_acc, train_loss
80 |
81 |
82 | def save_softlif_to_onnx(model, model_name):
83 | # onnx export start
84 | model_layer_info = [i for i in model.named_children()]
85 | model_weight = [i for i in model.named_parameters()]
86 |
87 | # generate model start
88 | model = Hunsberger2015()
89 |
90 | # softLIF -> RELU
91 | for i in range(len(model_layer_info)):
92 | if "LIF" in str(model_layer_info[i][1]):
93 | model.add_module(model_layer_info[i][0], nn.ReLU())
94 | else:
95 | model.add_module(model_layer_info[i][0], model_layer_info[i][1])
96 |
97 | # weight recovery
98 | for i in model_weight:
99 | model.state_dict()[i[0]].data.copy_(i[1])
100 |
101 | dummy_input = torch.randn(1, 1, 28, 28, dtype=torch.float32).cuda()
102 | torch.onnx.export(model, dummy_input, model_name)
103 | print("saved {}".format(model_name))
104 | # onnx export end
105 |
106 | # layer name change
107 | onnx_model = onnx.load(model_name)
108 |
109 | for i in range(len(onnx_model.graph.node)):
110 | if "relu" in onnx_model.graph.node[i].name.lower():
111 | onnx_model.graph.node[i].name += "_softlif"
112 |
113 | onnx.save(onnx_model, model_name)
114 | # layer name change end
115 |
116 |
117 | def app(opt):
118 | print(opt)
119 |
120 | train_loader = torch.utils.data.DataLoader(
121 | torchvision.datasets.MNIST(
122 | opt.data,
123 | train=True,
124 | download=True,
125 | transform=torchvision.transforms.Compose([transforms.ToTensor()])),
126 | batch_size=opt.batch_size,
127 | shuffle=True)
128 |
129 | val_loader = torch.utils.data.DataLoader(
130 | torchvision.datasets.MNIST(
131 | opt.data,
132 | train=False,
133 | download=True,
134 | transform=torchvision.transforms.Compose([transforms.ToTensor()])),
135 | batch_size=opt.batch_size)
136 |
137 | model = Hunsberger2015()
138 |
139 | model.add_module('conv1', nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1, bias=False))
140 | model.add_module('slif1', SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
141 | model.add_module('apool1', nn.AvgPool2d(kernel_size=2))
142 | model.add_module('conv2', nn.Conv2d(64, 192, kernel_size=3, padding=1, bias=False))
143 | model.add_module('slif2', SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
144 | model.add_module('apool2', nn.AvgPool2d(kernel_size=2))
145 | model.add_module('conv3', nn.Conv2d(192, 256, kernel_size=3, padding=1, bias=False))
146 | model.add_module('slif3', SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
147 | model.add_module('apool3', nn.AvgPool2d(kernel_size=2))
148 | model.add_module('flatten4', nn.Flatten())
149 | model.add_module('drop5', nn.Dropout())
150 | model.add_module('fc5', nn.Linear(256, 1024, bias=False))
151 | model.add_module('slif5', SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
152 | model.add_module('fc6', nn.Linear(1024, opt.num_classes, bias=False))
153 |
154 | model.cuda()
155 |
156 | criterion = nn.CrossEntropyLoss()
157 |
158 | optimizer = torch.optim.SGD(model.parameters(), lr=opt.lr, momentum=opt.momentum)
159 |
160 | lr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60, 90])
161 |
162 | best_acc = 0
163 |
164 | for epoch in range(opt.num_epochs):
165 | start = time.time()
166 | train_acc, train_loss = train(train_loader, model, criterion, optimizer)
167 | end = time.time()
168 | print('total time: {:.2f}s - epoch: {} - accuracy: {} - loss: {}'.format(end-start, epoch, train_acc, train_loss))
169 |
170 | val_acc, val_loss = validate(val_loader, model, criterion)
171 |
172 | if val_acc > best_acc:
173 | best_acc = val_acc
174 | state = {
175 | 'epoch': epoch,
176 | 'model': model.state_dict(),
177 | 'best_acc': best_acc,
178 | 'optimizer': optimizer.state_dict()}
179 | torch.save(state, opt.pretrained)
180 | print('in test, epoch: {} - best accuracy: {} - loss: {}'.format(epoch, best_acc, val_loss))
181 |
182 | lr_scheduler.step()
183 |
184 | save_softlif_to_onnx(model, opt.name)
185 |
186 |
187 | if __name__ == '__main__':
188 | parser = argparse.ArgumentParser()
189 |
190 | parser.add_argument('--data', default='data')
191 | parser.add_argument('--num_classes', default=10, type=int)
192 | parser.add_argument('--num_epochs', default=1, type=int)
193 | parser.add_argument('--batch_size', default=32, type=int)
194 | parser.add_argument('--lr', default=4e-2, type=float)
195 | parser.add_argument('--momentum', default=0.9, type=float)
196 | parser.add_argument('--pretrained', default='pretrained/softlif_dynamic.pt')
197 |
198 | parser.add_argument('--amplitude', default=0.063, type=float)
199 | parser.add_argument('--tau_ref', default=0.001, type=float)
200 | parser.add_argument('--tau_rc', default=0.05, type=float)
201 | parser.add_argument('--gain', default=0.825, type=float)
202 | parser.add_argument('--sigma', default=0.02, type=float)
203 | parser.add_argument('--name', default='softlif_conv.onnx')
204 |
205 | app(parser.parse_args())
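A quick sanity check on the exported file is to load it back with onnxruntime and compare outputs on the same dummy input. A minimal sketch, assuming `onnxruntime` is installed; note that `save_softlif_to_onnx` exports a ReLU copy of the network, so the diff against the original SoftLIF model is only a rough signal, not an exact match:

```python
import numpy as np
import onnxruntime as ort
import torch


def check_onnx_export(model, onnx_path):
    # push one dummy MNIST-shaped input through both runtimes
    dummy = torch.randn(1, 1, 28, 28, dtype=torch.float32).cuda()
    model.eval()
    with torch.no_grad():
        torch_out = model(dummy).cpu().numpy()

    sess = ort.InferenceSession(onnx_path)
    input_name = sess.get_inputs()[0].name
    onnx_out = sess.run(None, {input_name: dummy.cpu().numpy()})[0]

    # the exported graph uses ReLU in place of SoftLIF, so expect the
    # outputs to be similar in shape but not numerically identical
    print("shapes:", torch_out.shape, onnx_out.shape)
    print("max abs diff:", np.abs(torch_out - onnx_out).max())
```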
--------------------------------------------------------------------------------
/onnx_inference_restapi.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import sys
3 |
4 | from flask import Flask
5 | from flask_restful import Api, Resource, reqparse
6 |
7 | from ONNX_Inference_RESTAPI.onnx_distinguish_run import Distinguish_onnx
8 | from tensorflow.keras.preprocessing.image import load_img, img_to_array
9 |
10 | app = Flask('ONNX IRIS')
11 | api = Api(app)
12 |
13 | # identify which kind of model an ONNX file contains
14 | def Distinguish_Model(model_path):
15 |     distinguish_onnx = Distinguish_onnx(model_path=model_path)
16 | 
17 |     distinguish_onnx.model_path = model_path
18 |     # first check whether the model is an SNN
19 |     # 1. inspect op_type - if an SNN activation function (lif, ...) appears, treat it as an SNN
20 |     distinguish_onnx.getModelOperator()
21 |     print('--- model in use:', model_path)
22 | 
23 |     # 2. which deep learning framework produced the model
24 |     model_framework = distinguish_onnx.getModelFramework()
25 |     print('--- model framework:', model_framework)
26 | 
27 |     return distinguish_onnx, model_framework  # return the class instance and the framework name
28 |
29 | # registered below via api.add_resource; @app.route does not apply to Resource classes
30 | class IrisEstimator(Resource):
31 | def get(self):
32 | try:
33 |             # parse request parameters
34 |             parser = reqparse.RequestParser()
35 |             parser.add_argument('model_name', required=True, help='model_name')
36 | 
37 |             ## for iris ML (two input options)
38 |             # option 1: pass the feature values directly as arguments
39 |             # option 2: pass a .npy file via test_file, e.g. test-data/iris_X_test.npy
40 |             parser.add_argument('sepal_length', type=float, help='sepal_length is required')
41 |             parser.add_argument('sepal_width', type=float, help='sepal_width is required')
42 |             parser.add_argument('petal_length', type=float, help='petal_length is required')
43 |             parser.add_argument('petal_width', type=float, help='petal_width is required')
44 | 
45 |             ## for mnist DL
46 |             parser.add_argument('test_file')
47 |             parser.add_argument('test_image')
48 |             args = parser.parse_args()
49 | 
50 |             ## identify the model
51 |             # distinguish_onnx (class instance), model_framework (which framework produced it)
52 |             distinguish_onnx, model_framework = Distinguish_Model(args['model_name'])
53 |
54 |             # branch on the result of the model check
55 |             # SNN case
56 |             if model_framework == 'SNNnengo':
57 | 
58 |                 # SNN option 1 - a real MNIST image file
59 |                 if args['test_image'] is not None:
60 |                     # load an image from file
61 |                     image = load_img(args['test_image'], grayscale=True)  # image load
62 |                     image = img_to_array(image)  # convert the image pixels to a numpy array
63 |                     image = image[np.newaxis, :, :, :].astype(np.float32)  # prepend a batch dimension of size 1
64 |                     print("image shape:", image.shape)
65 |                     features = image
66 | 
67 |                 # SNN option 2 - an MNIST .npy file
68 |                 else:
69 |                     features = np.load(args['test_file'])
70 |                     print(features.shape)
71 |                 pred_nengo = distinguish_onnx.nengo_run(features)
72 | 
73 |                 predict_probability = np.array(list(pred_nengo.values())[0])  # pred_nengo is an ordered dict, so grab out_p this way
74 |                 print(predict_probability.shape)  # (10, 30, 10) -> 10 images, 30 timesteps, 10 class probabilities (0-9)
75 | 
76 |                 # keep only the last timestep of each presentation (that is the final answer)
77 |                 predict_probability = predict_probability[:, predict_probability.shape[1] - 1, :10]  # last-timestep result
78 |                 print(predict_probability.shape)  # (10, 10) -> per-image probabilities for digits 0-9
79 |                 predict_argmax = predict_probability.argmax(axis=1)
80 | 
81 |                 # convert the list to a string for the REST API response
82 |                 result_list = list(map(str, predict_argmax.flatten()))
83 |                 result = ','.join(result_list)
84 |                 print(result)
85 |                 return result
86 |
87 |             # ML / DL case
88 |             else:
89 |                 model_domain = distinguish_onnx.getModeldomain()
90 |                 # ML case
91 |                 if model_domain == 'ai.onnx':
92 |                     # ML option 1 - test_file: use a .npy file
93 |                     if args['test_file'] is not None:
94 |                         features = np.load(args['test_file'])
95 | 
96 |                     # ML option 2 - pass the feature values directly as arguments
97 |                     else:
98 |                         features = [args['sepal_length'], args['sepal_width'], args['petal_length'], args['petal_width']]  # combine the arguments
99 |                         features = np.reshape(features, (1, 4))
100 | 
101 |                     # flatten the output into a plain list
102 |                     pred_onx = distinguish_onnx.ort_run(features)  # pass the numpy array
103 |                     result_list = list(map(str, pred_onx.flatten()))
104 | 
105 |                     # map iris class indices to label names
106 |                     for r in range(0, len(result_list)):
107 |                         if result_list[r] == '0':
108 |                             result_list[r] = 'setosa'
109 |                         elif result_list[r] == '1':
110 |                             result_list[r] = 'versicolor'
111 |                         elif result_list[r] == '2':
112 |                             result_list[r] = 'virginica'
113 | 
114 |                     # convert the list to a string for the REST API response
115 |                     result = ','.join(result_list)
116 |                     print(result)
117 |                     return result
118 |
119 |                 # DL case
120 |                 else:
121 |                     # DL option 1 - a real MNIST image file
122 |                     if args['test_image'] is not None:
123 |                         # load an image from file
124 |                         image = load_img(args['test_image'], grayscale=True)  # image load
125 |                         image = img_to_array(image)  # convert the image pixels to a numpy array
126 |                         image = image[np.newaxis, :, :, :].astype(np.float32)  # prepend a batch dimension of size 1
127 |                         print("image shape:", image.shape)
128 |                         features = image
129 | 
130 |                     # DL option 2 - an MNIST .npy file
131 |                     else:
132 |                         features = np.load(args['test_file'])
133 | 
134 |                     # run through ONNX Runtime
135 |                     pred_onx = distinguish_onnx.ort_run(features)  # pass the numpy array; shape (10, 10): per-image probabilities for digits 0-9
136 |                     pred_onx = pred_onx.argmax(axis=1)  # pick the most likely class
137 |                     print(pred_onx, pred_onx.shape)
138 | 
139 |                     # convert the list to a string for the REST API response
140 |                     result_list = list(map(str, pred_onx.flatten()))
141 |                     result = ','.join(result_list)
142 |                     print(result)
143 |                     return result  # final result
144 |
145 | except Exception as e:
146 | return {'error2': str(e)}
147 |
148 | api.add_resource(IrisEstimator, '/', '/test')  # expose the resource at both / and /test
149 |
150 | if __name__ == '__main__':
151 |     app.run(host='127.0.0.1', port=5095, debug=True)  # run on port 5095
152 |
153 |
154 | ## curl commands
155 | # ML option 1 - pass the iris features directly
156 | # curl -d "sepal_length=6.3&sepal_width=3.3&petal_length=6.0&petal_width=2.5&model_name=model/logreg_iris.onnx" -X GET http://localhost:5095
157 | #
158 | # # ML option 2 - .npy
159 | # curl -d "model_name=model/logreg_iris.onnx&test_file=test_data/iris_X_test.npy" -X GET http://localhost:5095
160 | #
161 | # # DL option 1 - image
162 | # curl -d "model_name=model/lenet-1.onnx&test_image=test_data/test_mnist_9.jpg" -X GET http://localhost:5095
163 | #
164 | # # DL option 2 - .npy
165 | # curl -d "model_name=model/lenet-1.onnx&test_file=test_data/mnist_X_test_10.npy" -X GET http://localhost:5095
166 | #
167 | # # SNN option 1 - .npy
168 | # curl -d "model_name=model/lenet-1_snn.onnx&test_file=test_data/mnist_X_test_10.npy" -X GET http://localhost:5095
169 | #
170 | # # SNN option 2 - image
171 | # curl -d "model_name=model/lenet-1_snn.onnx&test_image=test_data/test_mnist_5.jpg" -X GET http://localhost:5095
172 | ######
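The curl commands above can also be issued from Python. A minimal client sketch using the `requests` library, assuming the server is running locally on port 5095 and the model/test files from the comments exist; like `curl -d` with `-X GET`, this sends the fields as a form-encoded GET body, which flask_restful's reqparse reads:

```python
import requests

BASE_URL = "http://localhost:5095/"

# ML: pass the iris features directly (mirrors ML option 1 above)
r = requests.get(BASE_URL, data={
    "model_name": "model/logreg_iris.onnx",
    "sepal_length": 6.3, "sepal_width": 3.3,
    "petal_length": 6.0, "petal_width": 2.5,
})
print(r.json())

# DL: classify a batch of MNIST digits from a .npy file (mirrors DL option 2)
r = requests.get(BASE_URL, data={
    "model_name": "model/lenet-1.onnx",
    "test_file": "test_data/mnist_X_test_10.npy",
})
print(r.json())
```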
--------------------------------------------------------------------------------
/onnx_registry.py:
--------------------------------------------------------------------------------
1 | from flask import Flask, render_template, send_from_directory, request, redirect, session, send_file
2 | import os, time
3 | import zipfile, json
4 |
5 | from onnx_distinguish_run import Distinguish_onnx
6 |
7 | # create the Flask application object
8 | app = Flask(__name__)
9 |
10 | import netron
11 |
12 | server = 'localhost'
13 | netron_port = 8080
14 |
15 | # folder name
16 | onnx_folder_path = 'onnx_folder/'
17 | log_file_path = 'log_folder/'
18 |
19 |
20 | # main page
21 | @app.route('/', methods=['GET', 'POST'])
22 | def main_page():
23 |     # gather basic file information for the page
24 |     zip_file_list = get_filelist()  # returns the list of file names
25 |     zip_file_latest_time = get_file_latest_time(zip_file_list)  # returns last-modified times
26 |     zip_file_size = get_filesize(zip_file_list)  # returns file sizes
27 | # -------------------------------------------------------
28 | # upload
29 | if request.method == 'POST':
30 | upload_process()
31 | # -------------------------------------------------------
32 |
33 |     # log data
34 | fr = open(log_file_path + "log.txt", 'r+')
35 | log_list = fr.readlines()
36 | #print(type(log_list), log_list)
37 | fr.close()
38 |
39 | # -----------------------------
40 | keywords_list = []
41 | author_list = []
42 | for i in zip_file_list:
43 | file_name = onnx_folder_path + i
44 | ext = zipfile.ZipFile(file_name).extract('package.json')
45 |
46 | with open(ext, "r", encoding="utf8") as f:
47 |             contents = f.read()  # contents as a str
48 | json_data = json.loads(contents)
49 |
50 |
51 | keywords = json_data["keywords"]
52 | keywords = ', '.join(keywords)
53 | print(keywords)
54 | keywords_list.append(keywords)
55 | print(keywords_list)
56 | author = json_data["author"]
57 | author_list.append(author)
58 |
59 | return render_template('mainpage.html',
60 | zip_file_list=zip_file_list,
61 | zip_file_latest_time=zip_file_latest_time,
62 | zip_file_size=zip_file_size,
63 | log_list=log_list,
64 | len=len(keywords_list),
65 | keywords_list=keywords_list,
66 | author_list=author_list,
67 |                            zip=zip,  # also pass Python's built-in zip into the Jinja template
68 | )
69 |
70 |
71 | # upload handling + log writing
72 | def upload_process():
73 | f = request.files['file']
74 | print(f)
75 | file_name = f.filename
76 |     file_path = os.getcwd() + '\\' + onnx_folder_path[:-1] + '\\'  # directory where uploaded files are stored (Windows-style path)
77 |     f.save(file_path + file_name)  # destination path + file name
78 |     print('file saved to ' + file_path + file_name)
79 |
80 | user_ip = request.remote_addr
81 | upload_time = time.strftime('%y-%m-%d %H:%M:%S')
82 |
83 |     # append to the log file
84 | f = open(log_file_path + "log.txt", 'a+')
85 | f.write(
86 | '(' + upload_time + ') - ' + 'Upload_Success ' + ' - ' + file_name + ' - ' + user_ip + ' - ' + file_path + '\n')
87 | f.close()
88 | return redirect('/')
89 |
90 | @app.route('/post', methods=['POST'])
91 | def post():
92 | value = request.form['author']
93 | board = request.form['description']
94 | model = request.form['model']
95 |
96 | f = open(model + ".json", 'a+')
97 | f.write('{\"author\":\"' + value + '\",\"description\":\"' + board + '\"}')
98 | f.close()
99 |
100 | return redirect('/')
101 |
102 |
103 | # download handling
104 | @app.route('/download/<filename>', methods=['GET', 'POST'])
105 | def download(filename):
106 | download_time = time.strftime('%y-%m-%d %H:%M:%S')
107 |     # write to the log file
108 | f = open(log_file_path + "log.txt", 'a+')
109 | f.write(
110 | '(' + download_time + ') - ' + 'Download_Success ' + ' - ' + filename + ' - \n')
111 | f.close()
112 |
113 |     return send_from_directory('onnx_folder', filename)  # directory name, file name
114 |
115 |
116 | # Visualization
117 | @app.route('/visualizations/<filename>', methods=['GET', 'POST'])
118 | def visualization(filename):
119 |
120 | file_name = onnx_folder_path + filename
121 | ext = zipfile.ZipFile(file_name).extract('package.json')
122 |
123 | with open(ext, "r", encoding="utf8") as f:
124 |         contents = f.read()  # contents as a str
125 | json_data = json.loads(contents)
126 |
127 |
128 | print(file_name)
129 | model_name = str(os.path.basename(file_name))
130 | name = json_data["name"]
131 | version = json_data["version"]
132 | description = json_data["description"]
133 | keywords = json_data["keywords"]
134 | keywords = ', '.join(keywords)
135 | author = json_data["author"]
136 | license = json_data["license"]
137 | #exc_file = json_data["nodes"[""]]
138 |     #print('model_type =', model_type)
139 |
140 | #netron.start(file=file_name, browse=False, port=netron_port, host=server)
141 | return render_template('netron_wrapper.html',
142 | model_name=model_name,
143 | name=name,
144 | version=version,
145 | description=description,
146 | keywords=keywords,
147 | author=author,
148 | license=license
149 | )
150 |
151 |
152 | ## helper functions
153 | # 1. return the list of file names
154 | def get_filelist():
155 |     # build the file list - only .zip archives are listed
156 | dir_list = os.listdir(onnx_folder_path)
157 | zip_file_list = []
158 | json_file_list = []
159 | for x in dir_list:
160 | if '.zip' in x:
161 | zip_file_list.append(x)
162 | return zip_file_list
163 |
164 |
165 |
166 | def get_file_latest_time(file_list):
167 |     # last-modified timestamps
168 | latest_time = []
169 | latest_ls = latest_time.append
170 | for i in file_list:
171 |         latest_ls(time.ctime(os.path.getmtime(onnx_folder_path + i)))  ## last-modified time of the file
172 | return latest_time
173 |
174 |
175 | def get_filesize(file_list):
176 |     # file sizes
177 | file_size = []
178 | size_ls = file_size.append
179 |
180 |     # build the list entries to hand back
181 | for i in file_list:
182 |         size_ls(str(os.path.getsize(onnx_folder_path + i)) + ' B')  ## file size
183 | return file_size
184 |
185 |
186 | import onnx
187 |
188 |
189 | def onnxruntime_imformation(onnx_file_name):
190 | model1 = onnx.load(onnx_file_name)
191 |
192 |     # build the input_type string
193 | s = model1.graph.input[0].type.tensor_type.shape.dim
194 | ls = list(map(lambda x: refunc(x), s))
195 | input_type = 'tensor(' + ','.join(ls) + ')'
196 | print(input_type)
197 |
198 |     # build the output_type string
199 | o = model1.graph.output[0].type.tensor_type.shape.dim
200 | ls = list(map(lambda x: refunc(x), o))
201 | output_type = 'tensor(' + ','.join(ls) + ')'
202 | print(output_type)
203 |
204 | return input_type, output_type
205 |
206 |
207 | import re
208 |
209 |
210 | def refunc(x):
211 | try:
212 | k = re.sub(r'^"|"$', '', str(x).split(' ')[1].strip())
213 | except:
214 | k = str(x)
215 | return k
216 |
217 |
218 | # onnx discriminator (decides whether a model is ml, dl, or snn)
219 | def onnx_type(onnx_file_name):
220 |     distinguish_onnx = Distinguish_onnx(onnx_file_name)  # model path
221 |     # first check whether the model is an SNN
222 |     a = distinguish_onnx.getModelOperator()
223 |     print('Operator=', a)
224 |     model_framework = distinguish_onnx.getModelFramework()
225 |     print('Framework=', model_framework)
226 |
227 | if model_framework == 'SNNnengo':
228 | return 'SNN'
229 | else:
230 | model_domain = distinguish_onnx.getModeldomain()
231 | if model_domain == 'ai.onnx':
232 |             # ML case
233 |             # print("this is ml")
234 |             return 'ml'
235 |         else:  # onnx domain
236 |             # DL case
237 |             # print("this is dl")
238 |             return 'dl'
239 |
240 |
241 | '''
242 | def getModelOperator(onnx_file_name):
243 | #for i in range(len(onnx_file_name.onnx_load_model.graph.node)):
244 | division=str(onnx_file_name.producer_name)
245 | print(division)
246 | '''
247 |
248 | # run the server
249 | if __name__ == '__main__':
250 | app.run(host='0.0.0.0', port=5065, debug=True)
251 |
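`refunc` above recovers dimensions by regexing the protobuf's string form. A cleaner alternative, sketched here under the assumption that each dimension is either a fixed `dim_value` or a named `dim_param` (the file name is hypothetical), is to read the proto fields directly:

```python
import onnx


def tensor_shape_string(value_info):
    # read each dimension of a graph input/output straight from the proto
    dims = value_info.type.tensor_type.shape.dim
    parts = [str(d.dim_value) if d.HasField("dim_value") else (d.dim_param or "?")
             for d in dims]
    return "tensor(" + ",".join(parts) + ")"


model = onnx.load("model.onnx")  # hypothetical file name
print(tensor_shape_string(model.graph.input[0]))
print(tensor_shape_string(model.graph.output[0]))
```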
--------------------------------------------------------------------------------
/n3ml-onnx/softlif model onnx save, load.py:
--------------------------------------------------------------------------------
1 | # n3ml2
2 |
3 | import time
4 | import argparse
5 |
6 | import torch
7 | import torch.nn as nn
8 | import torch.optim as optim
9 | import torchvision
10 |
11 | from n3ml.network import Network
12 | from n3ml.layer import SoftLIF
13 | import torchvision.transforms as transforms
14 | import onnx
15 | import onnx.numpy_helper as numpy_helper
16 |
17 |
18 | class Hunsberger2015(Network):
19 |     def __init__(self):
20 | super(Hunsberger2015, self).__init__()
21 |
22 | def forward(self, x):
23 | for m in self.named_children():
24 | x = m[1](x)
25 | return x
26 |
27 |
28 | def validate(val_loader, model, criterion):
29 | model.eval()
30 |
31 | total_images = 0
32 | num_corrects = 0
33 | total_loss = 0
34 |
35 | for step, (images, labels) in enumerate(val_loader):
36 | images = images.cuda()
37 | labels = labels.cuda()
38 |
39 | preds = model(images)
40 |
41 | loss = criterion(preds, labels)
42 |
43 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
44 | total_loss += loss.cpu().detach().numpy() * images.size(0)
45 | total_images += images.size(0)
46 |
47 | val_acc = num_corrects.float() / total_images
48 | val_loss = total_loss / total_images
49 |
50 | return val_acc, val_loss
51 |
52 |
53 | def train(train_loader, model, criterion, optimizer):
54 | model.train()
55 |
56 | total_images = 0
57 | num_corrects = 0
58 | total_loss = 0
59 |
60 | for step, (images, labels) in enumerate(train_loader):
61 | images = images.cuda()
62 | labels = labels.cuda()
63 |
64 | preds = model(images)
65 |
66 | loss = criterion(preds, labels)
67 |
68 | optimizer.zero_grad()
69 | loss.backward()
70 | optimizer.step()
71 |
72 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
73 | total_loss += loss.cpu().detach().numpy() * images.size(0)
74 | total_images += images.size(0)
75 |
76 | train_acc = num_corrects.float() / total_images
77 | train_loss = total_loss / total_images
78 |
79 | return train_acc, train_loss
80 |
81 |
82 | def load_onnx_to_softlif(model_path, opt=None):
83 | onnx_model = onnx.load_model(model_path)
84 | graph = onnx_model.graph
85 |
86 | nodes = [i for i in graph.node]
87 | weights = []
88 | for init in graph.initializer:
89 | weight = numpy_helper.to_array(init)
90 |
91 |         if len(weight.shape) == 2:  # dense (2-D) weights must be transposed
92 | weight = weight.T
93 | else:
94 | weight = numpy_helper.to_array(init)
95 |
96 | weights.append(weight)
97 |
98 | weights = sorted(weights, key=lambda x: len(x.shape), reverse=True)
99 |
100 | model = Hunsberger2015()
101 |
102 | for node in nodes:
103 | name = node.name
104 | op = node.op_type.lower()
105 |
106 | if op == "matmul": # dense without bias
107 | input_unit, output_unit = weights[0].shape[0], weights[0].shape[1]
108 | model.add_module(name, nn.Linear(input_unit, output_unit, bias=False))
109 | del weights[0]
110 |
111 | elif op == "relu":
112 | if opt is not None:
113 | model.add_module(name, SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
114 | else:
115 | model.add_module(name, SoftLIF())
116 |
117 | elif op == "flatten":
118 | model.add_module(name, nn.Flatten())
119 |
120 | elif op == "conv":
121 | input_channels = weights[0].shape[1]
122 | output_channels = weights[0].shape[0]
123 |
124 | kernel_size = 3
125 | stride = 1
126 | padding = 0
127 |
128 | for att in node.attribute:
129 | if att.name == "kernel_shape":
130 | kernel_size = att.ints[0]
131 | elif att.name == "strides":
132 | stride = att.ints[0]
133 | elif att.name == "pads":
134 | padding = att.ints[0]
135 |
136 | model.add_module(name, nn.Conv2d(input_channels,
137 | output_channels,
138 | kernel_size=kernel_size,
139 | stride=stride,
140 | padding=padding,
141 | bias=False))
142 |
143 | del weights[0]
144 |
145 | elif op == "averagepool":
146 | kernel_size = 2
147 |
148 | for att in node.attribute:
149 | if att.name == "kernel_shape":
150 | kernel_size = att.ints[0]
151 |
152 | model.add_module(name, nn.AvgPool2d(kernel_size=kernel_size))
153 |
154 | elif op == "pad":
155 | pass
156 |
157 | else:
158 | pass
159 |
160 | # generate model end
161 |
162 | # load model weight insert
163 | initializers = []
164 | for init in graph.initializer:
165 | initializers.append(numpy_helper.to_array(init))
166 |
167 | model_layer_info = [i for i in model.named_parameters()]
168 |
169 | initializers = sorted(initializers, key=lambda x: len(x.shape), reverse=True)
170 |
171 | for i in range(len(model_layer_info)):
172 | layer_name = model_layer_info[i][0]
173 |
174 | if "matmul" in layer_name.lower():
175 | weight = torch.from_numpy(initializers[i]).T
176 | else:
177 | weight = torch.from_numpy(initializers[i])
178 |
179 | model.state_dict()[layer_name].data.copy_(weight)
180 | # load model weight end
181 |
182 | print("load {}".format(model_path))
183 | return model
184 |
185 |
186 | def save_softlif_to_onnx(model, model_name):
187 | model_layer_info = [i for i in model.named_children()]
188 | model_weight = [i for i in model.named_parameters()]
189 |
190 | model = Hunsberger2015()
191 |
192 | # softLIF -> RELU
193 | for i in range(len(model_layer_info)):
194 | if "LIF" in str(model_layer_info[i][1]):
195 | model.add_module(model_layer_info[i][0], nn.ReLU())
196 | else:
197 | model.add_module(model_layer_info[i][0], model_layer_info[i][1])
198 |
199 | # weight recovery
200 | for i in model_weight:
201 | model.state_dict()[i[0]].data.copy_(i[1])
202 |
203 | dummy_input = torch.randn(1, 1, 28, 28, dtype=torch.float32).cuda()
204 | torch.onnx.export(model, dummy_input, model_name)
205 | print("saved {}".format(model_name))
206 |
207 | # layer name change
208 | onnx_model = onnx.load(model_name)
209 |
210 | for i in range(len(onnx_model.graph.node)):
211 | if "relu" in onnx_model.graph.node[i].name.lower():
212 | onnx_model.graph.node[i].name += "_softlif"
213 |
214 | onnx.save(onnx_model, model_name)
215 | # layer name change end
216 |
217 |
218 | def app(opt):
219 | print(opt)
220 |
221 | val_loader = torch.utils.data.DataLoader(
222 | torchvision.datasets.MNIST(
223 | opt.data,
224 | train=False,
225 | download=True,
226 | transform=torchvision.transforms.Compose([transforms.ToTensor()])),
227 | batch_size=opt.batch_size)
228 |
229 | model = load_onnx_to_softlif("result/softlif_conv.onnx", opt)
230 | model.cuda()
231 |
232 | criterion = nn.CrossEntropyLoss()
233 |
234 | val_acc, val_loss = validate(val_loader, model, criterion)
235 | print('in test, val accuracy: {} - loss: {}'.format(val_acc, val_loss))
236 |
237 | save_softlif_to_onnx(model, "result/softlif_conv.onnx")
238 |
239 |
240 | if __name__ == '__main__':
241 | parser = argparse.ArgumentParser()
242 |
243 | parser.add_argument('--data', default='data')
244 | parser.add_argument('--num_classes', default=10, type=int)
245 | parser.add_argument('--num_epochs', default=10, type=int)
246 | parser.add_argument('--batch_size', default=32, type=int)
247 | parser.add_argument('--lr', default=4e-2, type=float)
248 | parser.add_argument('--momentum', default=0.9, type=float)
249 | parser.add_argument('--pretrained', default='pretrained/softlif_dynamic.pt')
250 |
251 | parser.add_argument('--amplitude', default=0.063, type=float)
252 | parser.add_argument('--tau_ref', default=0.001, type=float)
253 | parser.add_argument('--tau_rc', default=0.05, type=float)
254 | parser.add_argument('--gain', default=0.825, type=float)
255 | parser.add_argument('--sigma', default=0.02, type=float)
256 |
257 | app(parser.parse_args())
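Since `save_softlif_to_onnx` swaps SoftLIF for ReLU and `load_onnx_to_softlif` swaps it back, only the weights are expected to survive the round trip. A minimal check, sketched under the assumption that both models stack their layers in the same order (parameter names may differ, so the comparison is positional):

```python
import torch


def check_weight_roundtrip(original, reloaded):
    # parameter names change across export/import, but the layer order
    # (and therefore the parameter order) should not
    orig_params = [p for _, p in original.named_parameters()]
    new_params = [p for _, p in reloaded.named_parameters()]
    assert len(orig_params) == len(new_params), "parameter count changed"
    for a, b in zip(orig_params, new_params):
        assert torch.allclose(a.cpu(), b.cpu(), atol=1e-6), "weights diverged"
    print("round trip OK: {} parameter tensors match".format(len(orig_params)))
```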
--------------------------------------------------------------------------------
/softlif_dense_import.py:
--------------------------------------------------------------------------------
1 | # n3ml2
2 |
3 | import time
4 | import argparse
5 |
6 | import torch
7 | import torch.nn as nn
8 | import torch.optim as optim
9 | import torchvision
10 |
11 | from n3ml.network import Network
12 | from n3ml.layer import SoftLIF
13 | import torchvision.transforms as transforms
14 | import onnx
15 | import onnx.numpy_helper as numpy_helper
16 |
17 |
18 | class Hunsberger2015(Network):
19 |     def __init__(self):
20 | super(Hunsberger2015, self).__init__()
21 |
22 | def forward(self, x):
23 | for m in self.named_children():
24 | x = m[1](x)
25 | return x
26 |
27 |
28 | def validate(val_loader, model, criterion):
29 | model.eval()
30 |
31 | total_images = 0
32 | num_corrects = 0
33 | total_loss = 0
34 |
35 | for step, (images, labels) in enumerate(val_loader):
36 | images = images.cuda()
37 | labels = labels.cuda()
38 |
39 | preds = model(images)
40 |
41 | loss = criterion(preds, labels)
42 |
43 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
44 | total_loss += loss.cpu().detach().numpy() * images.size(0)
45 | total_images += images.size(0)
46 |
47 | val_acc = num_corrects.float() / total_images
48 | val_loss = total_loss / total_images
49 |
50 | return val_acc, val_loss
51 |
52 |
53 | def train(train_loader, model, criterion, optimizer):
54 | model.train()
55 |
56 | total_images = 0
57 | num_corrects = 0
58 | total_loss = 0
59 |
60 | for step, (images, labels) in enumerate(train_loader):
61 | images = images.cuda()
62 | labels = labels.cuda()
63 |
64 | preds = model(images)
65 |
66 | loss = criterion(preds, labels)
67 |
68 | optimizer.zero_grad()
69 | loss.backward()
70 | optimizer.step()
71 |
72 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
73 | total_loss += loss.cpu().detach().numpy() * images.size(0)
74 | total_images += images.size(0)
75 |
76 | train_acc = num_corrects.float() / total_images
77 | train_loss = total_loss / total_images
78 |
79 | return train_acc, train_loss
80 |
81 |
82 | def onnx_to_softlif(model_path, opt):
83 | onnx_model = onnx.load_model(model_path)
84 |
85 | graph = onnx_model.graph
86 |
87 | # get layer name, op
88 | onnx_layer = []
89 | for i in graph.node:
90 | onnx_layer.append((i.name, i.op_type.lower()))
91 |
92 | # get layer weight
93 | weights = []
94 | for i, init in enumerate(graph.initializer):
95 | weight = numpy_helper.to_array(init)
96 |         # dense (2-D) weights must be transposed
97 |         if len(weight.shape) == 2:
98 | weight = weight.T
99 |
100 | else:
101 | weight = numpy_helper.to_array(init)
102 | weights.append(weight)
103 |
104 | # model generate
105 | model = Hunsberger2015()
106 |
107 | for i, (name, op) in enumerate(onnx_layer):
108 | if op == "matmul": # dense without bias
109 | print("matmul")
110 | input_unit, output_unit = weights[0].shape[0], weights[0].shape[1]
111 | del weights[0]
112 | model.add_module(name, nn.Linear(input_unit, output_unit, bias=False))
113 |
114 |         elif op == "gemm":
115 |             print("gemm")
116 |             # for gemm the input/output dims are reversed relative to matmul
117 |             input_unit, output_unit = weights[1].shape[1], weights[1].shape[0]
118 | del weights[0]
119 | del weights[0]
120 | model.add_module(name, nn.Linear(input_unit, output_unit))
121 |
122 | elif op == "relu":
123 | print("relu")
124 | model.add_module(name, SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
125 |
126 | elif op == "flatten":
127 | print("flatten")
128 | model.add_module(name, nn.Flatten())
129 | # model generate end
130 |
131 | # model weight insert
132 | initializers = []
133 | for init in graph.initializer:
134 | initializers.append((init.name, numpy_helper.to_array(init)))
135 |
136 | model_layer_info = []
137 | for i in model.named_parameters():
138 | model_layer_info.append(i)
139 |
140 | for i in range(len(model_layer_info)):
141 | layer_name = model_layer_info[i][0]
142 |
143 |         # a dense layer with bias: swap the weight and bias entries into place
144 | if "gemm" in layer_name.lower() and "weight" in layer_name.lower():
145 | initializers[i], initializers[i+1] = initializers[i+1], initializers[i]
146 |
147 |         # matmul weights need a transpose
148 | if "matmul" in layer_name.lower():
149 | weight = torch.from_numpy(initializers[i][1]).T
150 |
151 | else:
152 | weight = torch.from_numpy(initializers[i][1])
153 |
154 |         # copy the weight into the layer matching layer_name
155 | print(layer_name, weight.shape)
156 | model.state_dict()[layer_name].data.copy_(weight)
157 | # model weight insert end
158 |
159 | return model
160 |
161 | def softlif_onnx_export(model, model_name, opt):
162 | # onnx export start
163 | model_layer_info = []
164 | model_weight = []
165 |     # record layer names and module objects
166 |     for i in model.named_children():
167 |         model_layer_info.append(i)
168 |     # record the model weights
169 |     for i in model.named_parameters():
170 |         model_weight.append(i)
171 | 
172 |     # build a fresh model
173 |     model = Hunsberger2015()
174 | 
175 |     # re-stack the layers, replacing each softlif with relu
176 |     for i in range(len(model_layer_info)):
177 |         if "LIF" in str(model_layer_info[i][1]):  # soft lif layer
178 | model.add_module(model_layer_info[i][0], nn.ReLU())
179 | else:
180 | model.add_module(model_layer_info[i][0], model_layer_info[i][1])
181 |
182 | for i in model_weight:
183 | model.state_dict()[i[0]].data.copy_(i[1])
184 |
185 |     dummy_input = torch.randn(opt.batch_size, 1, 28, 28, dtype=torch.float32).cuda()
186 | torch.onnx.export(model, dummy_input, model_name)
187 | # onnx export end
188 |
189 | def app(opt):
190 | print(opt)
191 |
192 | train_loader = torch.utils.data.DataLoader(
193 | torchvision.datasets.MNIST(
194 | opt.data,
195 | train=True,
196 | download=True,
197 | transform=torchvision.transforms.Compose([transforms.ToTensor()])),
198 | batch_size=opt.batch_size,
199 | shuffle=True)
200 |
201 | val_loader = torch.utils.data.DataLoader(
202 | torchvision.datasets.MNIST(
203 | opt.data,
204 | train=False,
205 | download=True,
206 | transform=torchvision.transforms.Compose([transforms.ToTensor()])),
207 | batch_size=opt.batch_size)
208 |
209 | #model = onnx_to_softlif("result/softlif_dense.onnx", opt)
210 | model = onnx_to_softlif(opt.name, opt)
211 |
212 | model.cuda()
213 |
214 | criterion = nn.CrossEntropyLoss()
215 |
216 | optimizer = torch.optim.SGD(model.parameters(), lr=opt.lr, momentum=opt.momentum)
217 |
218 | lr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60, 90])
219 |
220 | best_acc = 0
221 |
222 | for epoch in range(opt.num_epochs):
223 | # start = time.time()
224 | # #train_acc, train_loss = train(train_loader, model, criterion, optimizer)
225 | # end = time.time()
226 | # print('total time: {:.2f}s - epoch: {} - accuracy: {} - loss: {}'.format(end-start, epoch, train_acc, train_loss))
227 |
228 | val_acc, val_loss = validate(val_loader, model, criterion)
229 |
230 | if val_acc > best_acc:
231 | best_acc = val_acc
232 | state = {
233 | 'epoch': epoch,
234 | 'model': model.state_dict(),
235 | 'best_acc': best_acc,
236 | 'optimizer': optimizer.state_dict()}
237 | torch.save(state, opt.pretrained)
238 | print('in test, epoch: {} - best accuracy: {} - loss: {}'.format(epoch, best_acc, val_loss))
239 |
240 | lr_scheduler.step()
241 |
242 |
243 |
244 | if __name__ == '__main__':
245 | parser = argparse.ArgumentParser()
246 |
247 | parser.add_argument('--data', default='data')
248 | parser.add_argument('--num_classes', default=10, type=int)
249 | parser.add_argument('--num_epochs', default=10, type=int)
250 | parser.add_argument('--batch_size', default=32, type=int)
251 | parser.add_argument('--lr', default=4e-2, type=float)
252 | parser.add_argument('--momentum', default=0.9, type=float)
253 | parser.add_argument('--pretrained', default='pretrained/softlif_dynamic.pt')
254 |
255 | parser.add_argument('--amplitude', default=0.063, type=float)
256 | parser.add_argument('--tau_ref', default=0.001, type=float)
257 | parser.add_argument('--tau_rc', default=0.05, type=float)
258 | parser.add_argument('--gain', default=0.825, type=float)
259 | parser.add_argument('--sigma', default=0.02, type=float)
260 | parser.add_argument('--name', default='softlif_dense.onnx')
261 |
262 | app(parser.parse_args())
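The transposes in `onnx_to_softlif` come from a layout mismatch: `nn.Linear` stores its weight as (out_features, in_features), an ONNX `MatMul` initializer is (in, out), and `Gemm` is typically exported as (out, in) with `transB=1`, which is why its weights are taken as-is. A small self-contained sketch of the MatMul case (the shapes are arbitrary):

```python
import numpy as np
import torch
import torch.nn as nn

in_f, out_f = 784, 128
w_onnx = np.random.randn(in_f, out_f).astype(np.float32)  # MatMul layout: (in, out)

layer = nn.Linear(in_f, out_f, bias=False)
layer.weight.data.copy_(torch.from_numpy(w_onnx).T)  # Linear layout: (out, in)

x = torch.randn(1, in_f)
# x @ W (the ONNX MatMul) must equal the Linear layer's x @ W_linear^T
assert torch.allclose(layer(x), x @ torch.from_numpy(w_onnx), atol=1e-5)
print("MatMul initializer matches nn.Linear after a transpose")
```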
--------------------------------------------------------------------------------
/softlif_conv_import.py:
--------------------------------------------------------------------------------
1 | # n3ml2
2 |
3 | import time
4 | import argparse
5 |
6 | import torch
7 | import torch.nn as nn
8 | import torch.optim as optim
9 | import torchvision
10 |
11 | from n3ml.network import Network
12 | from n3ml.layer import SoftLIF
13 | import torchvision.transforms as transforms
14 | import onnx
15 | import onnx.numpy_helper as numpy_helper
16 |
17 |
18 | class Hunsberger2015(Network):
19 |     def __init__(self):
20 | super(Hunsberger2015, self).__init__()
21 |
22 | def forward(self, x):
23 | for m in self.named_children():
24 | x = m[1](x)
25 | return x
26 |
27 |
28 | def validate(val_loader, model, criterion):
29 | model.eval()
30 |
31 | total_images = 0
32 | num_corrects = 0
33 | total_loss = 0
34 |
35 | for step, (images, labels) in enumerate(val_loader):
36 | images = images.cuda()
37 | labels = labels.cuda()
38 |
39 | preds = model(images)
40 |
41 | loss = criterion(preds, labels)
42 |
43 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
44 | total_loss += loss.cpu().detach().numpy() * images.size(0)
45 | total_images += images.size(0)
46 |
47 | val_acc = num_corrects.float() / total_images
48 | val_loss = total_loss / total_images
49 |
50 | return val_acc, val_loss
51 |
52 |
53 | def train(train_loader, model, criterion, optimizer):
54 | model.train()
55 |
56 | total_images = 0
57 | num_corrects = 0
58 | total_loss = 0
59 |
60 | for step, (images, labels) in enumerate(train_loader):
61 | images = images.cuda()
62 | labels = labels.cuda()
63 |
64 | preds = model(images)
65 |
66 | loss = criterion(preds, labels)
67 |
68 | optimizer.zero_grad()
69 | loss.backward()
70 | optimizer.step()
71 |
72 | num_corrects += torch.argmax(preds, dim=1).eq(labels).sum(dim=0)
73 | total_loss += loss.cpu().detach().numpy() * images.size(0)
74 | total_images += images.size(0)
75 |
76 | train_acc = num_corrects.float() / total_images
77 | train_loss = total_loss / total_images
78 |
79 | return train_acc, train_loss
80 |
81 |
82 | def load_onnx_to_softlif(model_path, opt=None):
83 | onnx_model = onnx.load_model(model_path)
84 | graph = onnx_model.graph
85 |
86 | nodes = [i for i in graph.node]
87 | weights = [] # conv, dense layer shape
88 | for init in graph.initializer:
89 | weight = numpy_helper.to_array(init)
90 |         # dense (2-D) weights -> transpose
91 |         if len(weight.shape) == 2:
92 | weight = weight.T
93 | else:
94 | weight = numpy_helper.to_array(init)
95 |
96 | weights.append(weight)
97 |
98 |     # sort so conv weights come first and dense weights last
99 |     weights = sorted(weights, key=lambda x: len(x.shape), reverse=True)
100 |
101 | # generate model start
102 | model = Hunsberger2015()
103 |
104 | for node in nodes:
105 | name = node.name
106 | op = node.op_type.lower()
107 |
108 | if op == "matmul": # dense without bias
109 | input_unit, output_unit = weights[0].shape[0], weights[0].shape[1]
110 | model.add_module(name, nn.Linear(input_unit, output_unit, bias=False))
111 | del weights[0]
112 |
113 |         # if op == "gemm":  # softlif uses no biases, so this branch is unnecessary
114 |         #     # for gemm the input/output dims are reversed relative to matmul
115 | # input_unit, output_unit = weights[1].shape[1], weights[1].shape[0]
116 | # del weights[0]
117 | # del weights[0]
118 | # model.add_module(name, nn.Linear(input_unit, output_unit))
119 |
120 | elif op == "relu":
121 | if opt is not None:
122 | model.add_module(name, SoftLIF(amplitude=opt.amplitude, tau_ref=opt.tau_ref, tau_rc=opt.tau_rc, gain=opt.gain, sigma=opt.sigma))
123 | else:
124 | model.add_module(name, SoftLIF())
125 |
126 | elif op == "flatten":
127 | model.add_module(name, nn.Flatten())
128 |
129 | elif op == "conv":
130 | input_channels = weights[0].shape[1]
131 | output_channels = weights[0].shape[0]
132 |
133 | # default
134 | kernel_size = 3
135 | stride = 1
136 | padding = 0
137 |
138 | for att in node.attribute:
139 | if att.name == "kernel_shape":
140 | kernel_size = att.ints[0]
141 | elif att.name == "strides":
142 | stride = att.ints[0]
143 | elif att.name == "pads":
144 | padding = att.ints[0]
145 |
146 | model.add_module(name, nn.Conv2d(input_channels,
147 | output_channels,
148 | kernel_size=kernel_size,
149 | stride=stride,
150 | padding=padding,
151 | bias=False))
152 |
153 | del weights[0]
154 |
155 | elif op == "averagepool":
156 | kernel_size = 2
157 |
158 | for att in node.attribute:
159 | if att.name == "kernel_shape":
160 | kernel_size = att.ints[0]
161 |
162 | model.add_module(name, nn.AvgPool2d(kernel_size=kernel_size))
163 |
164 | elif op == "pad":
165 | pass
166 | # generate model end
167 |
168 | # load model weight insert
169 | initializers = []
170 | for init in graph.initializer:
171 | initializers.append(numpy_helper.to_array(init))
172 |
173 | model_layer_info = [i for i in model.named_parameters()]
174 |
175 | initializers = sorted(initializers, key=lambda x: len(x.shape), reverse=True)
176 |
177 | for i in range(len(model_layer_info)):
178 | layer_name = model_layer_info[i][0]
179 |
180 |         # # a dense layer with bias: swap the weight and bias entries into place
181 |         # if "gemm" in layer_name.lower() and "weight" in layer_name.lower():
182 | # initializers[i], initializers[i+1] = initializers[i+1], initializers[i]
183 |
184 | if "matmul" in layer_name.lower():
185 | weight = torch.from_numpy(initializers[i]).T
186 | else:
187 | weight = torch.from_numpy(initializers[i])
188 |
189 | model.state_dict()[layer_name].data.copy_(weight)
190 | # load model weight end
191 |
192 | return model
193 |
194 |
195 | def softlif_onnx_export(model, model_name, opt):
196 | # onnx export start
197 | model_layer_info = []
198 | model_weight = []
199 |     # record layer names and module objects
200 |     for i in model.named_children():
201 |         model_layer_info.append(i)
202 |     # record the model weights
203 |     for i in model.named_parameters():
204 |         model_weight.append(i)
205 | 
206 |     # build a fresh model
207 |     model = Hunsberger2015()
208 | 
209 |     # re-stack the layers, replacing each softlif with relu
210 | for i in range(len(model_layer_info)):
211 | if "LIF" in str(model_layer_info[i][1]): # soft lif layer
212 | model.add_module(model_layer_info[i][0], nn.ReLU())
213 | else:
214 | model.add_module(model_layer_info[i][0], model_layer_info[i][1])
215 |
216 | for i in model_weight:
217 | model.state_dict()[i[0]].data.copy_(i[1])
218 |
219 |     dummy_input = torch.randn(opt.batch_size, 1, 28, 28, dtype=torch.float32).cuda()
220 | torch.onnx.export(model, dummy_input, model_name)
221 | # onnx export end
222 |
223 | def app(opt):
224 | print(opt)
225 |
226 | val_loader = torch.utils.data.DataLoader(
227 | torchvision.datasets.MNIST(
228 | opt.data,
229 | train=False,
230 | download=True,
231 | transform=torchvision.transforms.Compose([transforms.ToTensor()])),
232 | batch_size=opt.batch_size)
233 |
234 | model = load_onnx_to_softlif(opt.name, opt)
235 | model.cuda()
236 |
237 | criterion = nn.CrossEntropyLoss()
238 |
239 | val_acc, val_loss = validate(val_loader, model, criterion)
240 | print('in test, val accuracy: {} - loss: {}'.format(val_acc, val_loss))
241 |
242 |
243 |
244 |
245 | if __name__ == '__main__':
246 | parser = argparse.ArgumentParser()
247 |
248 | parser.add_argument('--data', default='data')
249 | parser.add_argument('--num_classes', default=10, type=int)
250 | parser.add_argument('--num_epochs', default=10, type=int)
251 | parser.add_argument('--batch_size', default=32, type=int)
252 | parser.add_argument('--lr', default=4e-2, type=float)
253 | parser.add_argument('--momentum', default=0.9, type=float)
254 | parser.add_argument('--pretrained', default='pretrained/softlif_dynamic.pt')
255 |
256 | parser.add_argument('--amplitude', default=0.063, type=float)
257 | parser.add_argument('--tau_ref', default=0.001, type=float)
258 | parser.add_argument('--tau_rc', default=0.05, type=float)
259 | parser.add_argument('--gain', default=0.825, type=float)
260 | parser.add_argument('--sigma', default=0.02, type=float)
261 | parser.add_argument('--name', default='softlif_conv.onnx')
262 |
263 | app(parser.parse_args())
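`load_onnx_to_softlif` reads only `ints[0]` from each conv attribute, which silently assumes square kernels, uniform strides, and symmetric padding. A sketch of parsing the full attribute tuples instead (nn.Conv2d accepts tuples, though it still only supports symmetric padding, so the begin values are used):

```python
def conv_params_from_node(node):
    # defaults mirror the ones used in load_onnx_to_softlif above
    kernel_size, stride, padding = (3, 3), (1, 1), (0, 0)
    for att in node.attribute:
        if att.name == "kernel_shape":
            kernel_size = tuple(att.ints)      # e.g. (3, 3)
        elif att.name == "strides":
            stride = tuple(att.ints)           # e.g. (2, 2)
        elif att.name == "pads":
            # ONNX pads are (x1_begin, x2_begin, x1_end, x2_end)
            padding = (att.ints[0], att.ints[1])
    return kernel_size, stride, padding
```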
--------------------------------------------------------------------------------
/n3ml-onnx/softlif dense nengo-loihi.py:
--------------------------------------------------------------------------------
1 | # nengo
2 |
3 | import os
4 |
5 | import nengo
6 | import nengo_dl
7 | import numpy as np
8 | import matplotlib.pyplot as plt
9 | import tensorflow as tf
10 | import onnx
11 | import onnx.numpy_helper as numpy_helper
12 | import warnings
13 | warnings.filterwarnings('ignore')
14 |
15 | try:
16 | import requests
17 |
18 | has_requests = True
19 | except ImportError:
20 | has_requests = False
21 |
22 | import nengo_loihi
23 |
24 |
25 | def save_npz(onnx_model, npz_name):
26 | graph = onnx_model.graph
27 | initializers = []
28 |
29 | neurons = 0
30 |
31 | for init in graph.initializer:
32 | initializers.append([init.name, numpy_helper.to_array(init)])
33 |
34 |     # conv/dense weight order comes out reversed, so sort it back into place
35 | initializers = sorted(initializers, key=lambda x: (-len(x[1].shape)))
36 |
37 | temp = []
38 |
39 | for (name, weight) in initializers:
40 | if len(weight.shape) == 4: # conv
41 | weight = np.transpose(weight, (2, 3, 1, 0))
42 |
43 | if len(weight.shape) == 2: # dense
44 | weight = np.transpose(weight, (1, 0))
45 |
46 | temp.append(weight)
47 | neurons += weight.shape[0]
48 |
49 |     if graph.node[-1].op_type.lower() not in ("relu", "softmax"):
50 |         neurons -= weight.shape[0]  # if the net ends in a dense layer with no activation, its units are not neurons
51 |
52 | last = np.random.normal(1, 1, neurons)
53 | temp.append(last)
54 |
55 | np.savez_compressed(npz_name, *temp)
56 |
57 |
58 | def download(fname, drive_id):
59 | """Download a file from Google Drive.
60 |
61 | Adapted from https://stackoverflow.com/a/39225039/1306923
62 | """
63 |
64 | def get_confirm_token(response):
65 | for key, value in response.cookies.items():
66 | if key.startswith("download_warning"):
67 | return value
68 | return None
69 |
70 | def save_response_content(response, destination):
71 | CHUNK_SIZE = 32768
72 |
73 | with open(destination, "wb") as f:
74 | for chunk in response.iter_content(CHUNK_SIZE):
75 | if chunk: # filter out keep-alive new chunks
76 | f.write(chunk)
77 |
78 | if os.path.exists(fname):
79 | return
80 | if not has_requests:
81 | link = "https://drive.google.com/open?id=%s" % drive_id
82 | raise RuntimeError(
83 | "Cannot find '%s'. Download the file from\n %s\n"
84 | "and place it in %s." % (fname, link, os.getcwd())
85 | )
86 |
87 | url = "https://docs.google.com/uc?export=download"
88 | session = requests.Session()
89 | response = session.get(url, params={"id": drive_id}, stream=True)
90 | token = get_confirm_token(response)
91 | if token is not None:
92 | params = {"id": drive_id, "confirm": token}
93 | response = session.get(url, params=params, stream=True)
94 | save_response_content(response, fname)
95 |
96 |
97 | # load mnist dataset
98 | (train_images, train_labels), (
99 | test_images,
100 | test_labels,
101 | ) = tf.keras.datasets.mnist.load_data()
102 |
103 | # flatten images
104 | train_images = train_images.reshape((train_images.shape[0], -1))
105 | test_images = test_images.reshape((test_images.shape[0], -1))
106 |
107 | # plot some examples
108 | for i in range(3):
109 | plt.figure()
110 | plt.imshow(np.reshape(train_images[i], (28, 28)))
111 | plt.axis("off")
112 | plt.title(str(train_labels[i]))
113 |
114 | dt = 0.001 # simulation timestep
115 | presentation_time = 0.1 # input presentation time
116 | max_rate = 200 # neuron firing rates
117 | # neuron spike amplitude (scaled so that the overall output is ~1)
118 | amp = 1 / max_rate
119 | # input image shape
120 |
121 | with nengo.Network(seed=0) as net:
122 | nengo_loihi.add_params(net)
123 | net.config[nengo.Connection].synapse = None
124 | neuron_type = nengo.LIF(tau_rc=0.02, tau_ref=0.001, amplitude=0.005)
125 |
126 | onnx_name = "softlif_dense"
127 | onnx_model = onnx.load("result/" + onnx_name + ".onnx")
128 | print("load {}.onnx".format(onnx_name))
129 | save_npz(onnx_model, "result/" + onnx_name + ".npz")
130 | graph = onnx_model.graph
131 |
132 | input_size = numpy_helper.to_array(graph.initializer[0]).shape[0]
133 | inp = nengo.Node(nengo.processes.PresentInput(test_images, presentation_time), size_out=input_size)
134 | pre_layer = inp
135 |
136 | for i in range(len(graph.initializer)):
137 | n_nodes = numpy_helper.to_array(graph.initializer[i]).shape[1]
138 |
139 |         if i == 0:  # first layer: kept off-chip
140 | layer = nengo.Ensemble(n_neurons=n_nodes, dimensions=1, neuron_type=neuron_type)
141 | net.config[layer].on_chip = False
142 | nengo.Connection(pre_layer, layer.neurons, transform=nengo_dl.dists.Glorot())
143 | pre_layer = layer
144 |
145 | elif i != 0 and i != len(graph.initializer)-1:
146 | layer = nengo.Ensemble(n_neurons=n_nodes, dimensions=1, neuron_type=neuron_type)
147 | nengo.Connection(pre_layer.neurons, layer.neurons, transform=nengo_dl.dists.Glorot())
148 | pre_layer = layer
149 |
150 | else:
151 | out = nengo.Node(size_in=n_nodes)
152 | nengo.Connection(pre_layer.neurons, out, transform=nengo_dl.dists.Glorot())
153 |
154 | out_p = nengo.Probe(out, label="out_p")
155 | out_p_filt = nengo.Probe(out, synapse=nengo.Alpha(0.01), label="out_p_filt")
156 |
157 | # set up training data, adding the time dimension (with size 1)
158 | minibatch_size = 200
159 | train_images = train_images[:, None, :]
160 | train_labels = train_labels[:, None, None]
161 |
162 | # for the test data evaluation we'll be running the network over time
163 | # using spiking neurons, so we need to repeat the input/target data
164 | # for a number of timesteps (based on the presentation_time)
165 | n_steps = int(presentation_time / dt)
166 | test_images = np.tile(test_images[:minibatch_size*2, None, :], (1, n_steps, 1))
167 | test_labels = np.tile(test_labels[:minibatch_size*2, None, None], (1, n_steps, 1))
168 |
169 |
170 | def classification_accuracy(y_true, y_pred):
171 | return 100 * tf.metrics.sparse_categorical_accuracy(y_true[:, -1], y_pred[:, -1])
172 |
173 |
174 | do_training = False
175 |
176 | with nengo_dl.Simulator(net, minibatch_size=minibatch_size, seed=0) as sim:
177 | if do_training:
178 | sim.compile(loss={out_p_filt: classification_accuracy})
179 | print(
180 | "accuracy before training: %.2f%%"
181 | % sim.evaluate(test_images, {out_p_filt: test_labels}, verbose=0)["loss"]
182 | )
183 |
184 | # run training
185 | sim.compile(
186 | optimizer=tf.optimizers.RMSprop(0.001),
187 | loss={out_p: tf.losses.SparseCategoricalCrossentropy(from_logits=True)},
188 | )
189 | sim.fit(train_images, train_labels, epochs=10)
190 |
191 | sim.compile(loss={out_p_filt: classification_accuracy})
192 | print(
193 | "accuracy after training: %.2f%%"
194 | % sim.evaluate(test_images, {out_p_filt: test_labels}, verbose=0)["loss"]
195 | )
196 |
197 | sim.save_params("./mnist_params")
198 | else:
199 | download("mnist_params.npz", "1geZoS-Nz-u_XeeDv3cdZgNjUxDOpgXe5")
200 | #sim.load_params("./mnist_params")
201 |
202 | sim.load_params("result/softlif_dense")
203 |
204 | sim.compile(loss={out_p_filt: classification_accuracy})
205 | # print(
206 | # "nengo_dl accuracy load after training: %.2f%%"
207 | # % sim.evaluate(test_images, {out_p_filt: test_labels}, verbose=0)["loss"]
208 | # )
209 |
210 | # store trained parameters back into the network
211 | sim.freeze_params(net)
212 |
213 | for conn in net.all_connections:
214 | conn.synapse = 0.005
215 |
216 | if do_training:
217 | with nengo_dl.Simulator(net, minibatch_size=minibatch_size) as sim:
218 | sim.compile(loss={out_p_filt: classification_accuracy})
219 | print(
220 | "accuracy w/ synapse: %.2f%%"
221 | % sim.evaluate(test_images, {out_p_filt: test_labels}, verbose=0)["loss"]
222 | )
223 |
224 | n_presentations = 100  # number of test images presented to the network?
225 |
226 | # if running on Loihi, increase the max input spikes per step
227 | hw_opts = dict(snip_max_spikes_per_step=120)
228 | with nengo_loihi.Simulator(
229 | net,
230 | dt=dt,
231 | precompute=False,
232 | hardware_options=hw_opts,
233 | ) as sim:
234 | # run the simulation on Loihi
235 | sim.run(n_presentations * presentation_time)
236 |
237 | # check classification accuracy
238 | step = int(presentation_time / dt)
239 |
240 | output = sim.data[out_p_filt][step - 1 :: step]
241 | correct = 100 * np.mean(
242 | np.argmax(output, axis=-1) == test_labels[:n_presentations, 0, 0]
243 | )
244 | print("loihi accuracy: %.2f%%" % correct)
245 |
246 | n_plots = 10
247 | plt.figure()
248 |
249 | plt.subplot(2, 1, 1)
250 | images = test_images.reshape(-1, 28, 28, 1)[::step]
251 | ni, nj, nc = images[0].shape
252 | allimage = np.zeros((ni, nj * n_plots, nc), dtype=images.dtype)
253 | for i, image in enumerate(images[:n_plots]):
254 | allimage[:, i * nj : (i + 1) * nj] = image
255 | if allimage.shape[-1] == 1:
256 | allimage = allimage[:, :, 0]
257 | plt.imshow(allimage, aspect="auto", interpolation="none", cmap="gray")
258 |
259 | plt.subplot(2, 1, 2)
260 | plt.plot(sim.trange()[: n_plots * step], sim.data[out_p_filt][: n_plots * step])
261 | plt.legend(["%d" % i for i in range(10)], loc="best")
262 | # plt.show()
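To confirm what `save_npz` wrote, the archive can be reopened and its array shapes listed. A minimal sketch, assuming the `result/softlif_dense.npz` file produced above exists:

```python
import numpy as np

with np.load("result/softlif_dense.npz") as data:
    # arrays are stored positionally as arr_0, arr_1, ...; the final entry
    # is the random vector appended for the neuron parameters
    for key in data.files:
        print(key, data[key].shape)
```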
--------------------------------------------------------------------------------
/softlif_dense_loihi_import.py:
--------------------------------------------------------------------------------
1 | # nengo
2 |
3 | import os
4 |
5 | import nengo
6 | import nengo_dl
7 | import numpy as np
8 | import matplotlib.pyplot as plt
9 | import tensorflow as tf
10 | import onnx
11 | import onnx.numpy_helper as numpy_helper
12 | import argparse
13 |
14 | try:
15 | import requests
16 |
17 | has_requests = True
18 | except ImportError:
19 | has_requests = False
20 |
21 | import nengo_loihi
22 |
23 |
24 | parser = argparse.ArgumentParser()
25 | parser.add_argument('--name', default='softlif_dense.onnx')
26 | opt = parser.parse_args()
27 |
28 | def save_npz(onnx_model, npz_name):
29 | graph = onnx_model.graph
30 | initializers = []
31 |
32 | neurons = 0
33 |
34 | for init in graph.initializer:
35 | initializers.append([init.name, numpy_helper.to_array(init)])
36 |
37 |     # conv/dense weight order comes out reversed, so sort it back into place
38 | initializers = sorted(initializers, key=lambda x: (-len(x[1].shape)))
39 |
40 | temp = []
41 |
42 | for (name, weight) in initializers:
43 | if len(weight.shape) == 4: # conv
44 | weight = np.transpose(weight, (2, 3, 1, 0))
45 |
46 | if len(weight.shape) == 2: # dense
47 | weight = np.transpose(weight, (1, 0))
48 |
49 | temp.append(weight)
50 | neurons += weight.shape[0]
51 |
52 |     if graph.node[-1].op_type.lower() not in ("relu", "softmax"):
53 |         neurons -= weight.shape[0]  # if the net ends in a dense layer with no activation, its units are not neurons
54 |
55 | last = np.random.normal(1, 1, neurons)
56 | temp.append(last)
57 |
58 | np.savez_compressed(npz_name+".npz", *temp)
59 |
60 |
61 | def download(fname, drive_id):
62 | """Download a file from Google Drive.
63 |
64 | Adapted from https://stackoverflow.com/a/39225039/1306923
65 | """
66 |
67 | def get_confirm_token(response):
68 | for key, value in response.cookies.items():
69 | if key.startswith("download_warning"):
70 | return value
71 | return None
72 |
73 | def save_response_content(response, destination):
74 | CHUNK_SIZE = 32768
75 |
76 | with open(destination, "wb") as f:
77 | for chunk in response.iter_content(CHUNK_SIZE):
78 | if chunk: # filter out keep-alive new chunks
79 | f.write(chunk)
80 |
81 | if os.path.exists(fname):
82 | return
83 | if not has_requests:
84 | link = "https://drive.google.com/open?id=%s" % drive_id
85 | raise RuntimeError(
86 | "Cannot find '%s'. Download the file from\n %s\n"
87 | "and place it in %s." % (fname, link, os.getcwd())
88 | )
89 |
90 | url = "https://docs.google.com/uc?export=download"
91 | session = requests.Session()
92 | response = session.get(url, params={"id": drive_id}, stream=True)
93 | token = get_confirm_token(response)
94 | if token is not None:
95 | params = {"id": drive_id, "confirm": token}
96 | response = session.get(url, params=params, stream=True)
97 | save_response_content(response, fname)
98 |
99 |
100 | # load mnist dataset
101 | (train_images, train_labels), (
102 | test_images,
103 | test_labels,
104 | ) = tf.keras.datasets.mnist.load_data()
105 |
106 | # flatten images
107 | train_images = train_images.reshape((train_images.shape[0], -1))
108 | test_images = test_images.reshape((test_images.shape[0], -1))
109 |
110 | # plot some examples
111 | for i in range(3):
112 | plt.figure()
113 | plt.imshow(np.reshape(train_images[i], (28, 28)))
114 | plt.axis("off")
115 | plt.title(str(train_labels[i]))
116 |
117 | dt = 0.001 # simulation timestep
118 | presentation_time = 0.1 # input presentation time
119 | max_rate = 200 # neuron firing rates
120 | # neuron spike amplitude (scaled so that the overall output is ~1)
121 | amp = 1 / max_rate
122 | # input image shape
123 |
124 | with nengo.Network(seed=0) as net:
125 | nengo_loihi.add_params(net)
126 | net.config[nengo.Connection].synapse = None
127 | neuron_type = nengo.LIF(tau_rc=0.02, tau_ref=0.001, amplitude=0.005)
128 |
129 | onnx_name = opt.name[:-5]
130 | onnx_model = onnx.load(onnx_name + ".onnx")
131 | save_npz(onnx_model, onnx_name)
132 | graph = onnx_model.graph
133 |
134 | input_size = numpy_helper.to_array(graph.initializer[0]).shape[0]
135 | inp = nengo.Node(nengo.processes.PresentInput(test_images, presentation_time), size_out=input_size)
136 | pre_layer = inp
137 |
138 | for i in range(len(graph.initializer)):
139 | n_nodes = numpy_helper.to_array(graph.initializer[i]).shape[1]
140 |
141 |         if i == 0:  # first layer: kept off-chip
142 | layer = nengo.Ensemble(n_neurons=n_nodes, dimensions=1, neuron_type=neuron_type)
143 | net.config[layer].on_chip = False
144 | nengo.Connection(pre_layer, layer.neurons, transform=nengo_dl.dists.Glorot())
145 | pre_layer = layer
146 |
147 | elif i != 0 and i != len(graph.initializer)-1:
148 | layer = nengo.Ensemble(n_neurons=n_nodes, dimensions=1, neuron_type=neuron_type)
149 | nengo.Connection(pre_layer.neurons, layer.neurons, transform=nengo_dl.dists.Glorot())
150 | pre_layer = layer
151 |
152 | else:
153 | out = nengo.Node(size_in=n_nodes)
154 | nengo.Connection(pre_layer.neurons, out, transform=nengo_dl.dists.Glorot())
155 |
156 | out_p = nengo.Probe(out, label="out_p")
157 | out_p_filt = nengo.Probe(out, synapse=nengo.Alpha(0.01), label="out_p_filt")
158 |
159 | # set up training data, adding the time dimension (with size 1)
160 | minibatch_size = 200
161 | train_images = train_images[:, None, :]
162 | train_labels = train_labels[:, None, None]
163 |
164 | # for the test data evaluation we'll be running the network over time
165 | # using spiking neurons, so we need to repeat the input/target data
166 | # for a number of timesteps (based on the presentation_time)
167 | n_steps = int(presentation_time / dt)
168 | test_images = np.tile(test_images[:minibatch_size*2, None, :], (1, n_steps, 1))
169 | test_labels = np.tile(test_labels[:minibatch_size*2, None, None], (1, n_steps, 1))
170 |
171 |
172 | def classification_accuracy(y_true, y_pred):
173 | return 100 * tf.metrics.sparse_categorical_accuracy(y_true[:, -1], y_pred[:, -1])
174 |
175 |
176 | do_training = False
177 |
178 | with nengo_dl.Simulator(net, minibatch_size=minibatch_size, seed=0) as sim:
179 | if do_training:
180 | sim.compile(loss={out_p_filt: classification_accuracy})
181 | print(
182 | "accuracy before training: %.2f%%"
183 | % sim.evaluate(test_images, {out_p_filt: test_labels}, verbose=0)["loss"]
184 | )
185 |
186 | # run training
187 | sim.compile(
188 | optimizer=tf.optimizers.RMSprop(0.001),
189 | loss={out_p: tf.losses.SparseCategoricalCrossentropy(from_logits=True)},
190 | )
191 | sim.fit(train_images, train_labels, epochs=10)
192 |
193 | sim.compile(loss={out_p_filt: classification_accuracy})
194 | print(
195 | "accuracy after training: %.2f%%"
196 | % sim.evaluate(test_images, {out_p_filt: test_labels}, verbose=0)["loss"]
197 | )
198 |
199 | sim.save_params("./mnist_params")
200 | else:
201 | download("mnist_params.npz", "1geZoS-Nz-u_XeeDv3cdZgNjUxDOpgXe5")
202 | #sim.load_params("./mnist_params")
203 |
204 | sim.load_params(opt.name[:-5])
205 |
206 | sim.compile(loss={out_p_filt: classification_accuracy})
207 | # print(
208 | # "nengo_dl accuracy load after training: %.2f%%"
209 | # % sim.evaluate(test_images, {out_p_filt: test_labels}, verbose=0)["loss"]
210 | # )
211 |
212 | # store trained parameters back into the network
213 | sim.freeze_params(net)
214 |
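# give every connection a synaptic filter to smooth the spiking signals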
215 | for conn in net.all_connections:
216 | conn.synapse = 0.005
217 |
218 | if do_training:
219 | with nengo_dl.Simulator(net, minibatch_size=minibatch_size) as sim:
220 | sim.compile(loss={out_p_filt: classification_accuracy})
221 | print(
222 | "accuracy w/ synapse: %.2f%%"
223 | % sim.evaluate(test_images, {out_p_filt: test_labels}, verbose=0)["loss"]
224 | )
225 |
226 | n_presentations = 100  # number of test images presented to the network
227 |
228 | # if running on Loihi, increase the max input spikes per step
229 | hw_opts = dict(snip_max_spikes_per_step=120)
230 | with nengo_loihi.Simulator(
231 | net,
232 | dt=dt,
233 | precompute=False,
234 | hardware_options=hw_opts,
235 | ) as sim:
236 | # run the simulation on Loihi
237 | sim.run(n_presentations * presentation_time)
238 |
239 | # check classification accuracy
240 | step = int(presentation_time / dt)
241 |
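    # keep one prediction per image: the filtered output at the last timestep of each presentation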
242 | output = sim.data[out_p_filt][step - 1 :: step]
243 | correct = 100 * np.mean(
244 | np.argmax(output, axis=-1) == test_labels[:n_presentations, 0, 0]
245 | )
246 | print("loihi accuracy: %.2f%%" % correct)
247 |
248 | n_plots = 10
249 | plt.figure()
250 |
251 | plt.subplot(2, 1, 1)
252 | images = test_images.reshape(-1, 28, 28, 1)[::step]
253 | ni, nj, nc = images[0].shape
254 | allimage = np.zeros((ni, nj * n_plots, nc), dtype=images.dtype)
255 | for i, image in enumerate(images[:n_plots]):
256 | allimage[:, i * nj : (i + 1) * nj] = image
257 | if allimage.shape[-1] == 1:
258 | allimage = allimage[:, :, 0]
259 | plt.imshow(allimage, aspect="auto", interpolation="none", cmap="gray")
260 |
261 | plt.subplot(2, 1, 2)
262 | plt.plot(sim.trange()[: n_plots * step], sim.data[out_p_filt][: n_plots * step])
263 | plt.legend(["%d" % i for i in range(10)], loc="best")
264 | # plt.show()
--------------------------------------------------------------------------------
/onnx_to_nengo_model.py:
--------------------------------------------------------------------------------
1 | import os
2 | import re
3 | import onnx
4 | import numpy as np
5 | import nengo
6 | import nengo_dl
7 | import tensorflow as tf
8 |
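# Builds a nengo_dl network equivalent to a given ONNX graph by walking its
# nodes and mapping each op_type (Conv, BatchNormalization, MaxPool,
# AveragePool, Reshape/flatten, MatMul) onto a Keras/Nengo layer.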
9 | class toNengoModel:
10 | def __init__(self, onnx_path, amplitude=0.01):
11 | self.nengoCode = ""
12 | self.amplitude = amplitude # default amplitude = 0.01
13 | self.onnx_model = onnx.load(onnx_path) # onnx model
14 | self.model = None
15 | self.end_layer = None
16 |         self.set_network()  # start building the network from the ONNX model
17 |
18 |     # entry point: analyze the ONNX model and build up the Nengo network
19 | def set_network(self):
20 |         # initialize the base network
21 | model = self.gen_init()
22 |
23 |         # layer-by-layer construction
24 |         onnx_model_graph = self.onnx_model.graph  # the ONNX graph
25 |         node_len = len(onnx_model_graph.node)
26 |         input_shape = np.array(onnx_model_graph.input[0].type.tensor_type.shape.dim)  # shape dims of the input tensor, e.g. (28, 28, 1)
27 |
28 |         # temp collects the integer dims and replaces input_shape at the end
29 |         temp = []
30 |         regex = re.compile(r":.*\d*")
31 |         for index in range(1, len(input_shape)):
32 |             data = regex.findall(str(input_shape[index]))[0]
33 |             temp.append(int(data[2:]))
34 |         input_shape = temp  # e.g. temp == [28, 28, 1]
35 |
36 |         # pass in the model and the input-shape list;
37 |         # get back the model, the output shape, and the node object x being built up
38 | model, output_shape, pre_layer = self.gen_input(model, input_shape)
39 | self.model = model
40 |
41 |         # walk over every node and convert it according to its op_type
42 |         for index in range(node_len):
43 |             node_info = onnx_model_graph.node[index]  # info for this node
44 |             op_type = node_info.op_type.lower()  # the node's op_type
45 | if op_type == "conv":
46 | model, output_shape, pre_layer = self.convert_conv2d(model, pre_layer, output_shape, index, onnx_model_graph)
47 | self.model = model
48 | elif op_type == "batchnormalization":
49 | model, output_shape, pre_layer = self.convert_batchnormalization2d(model, pre_layer, output_shape, node_info)
50 | self.model = model
51 | elif op_type == "maxpool":
52 |                 model, output_shape, pre_layer = self.convert_maxpool2d(model, pre_layer, output_shape, node_info)
53 | self.model = model
54 | elif op_type == "averagepool":
55 |                 model, output_shape, pre_layer = self.convert_avgpool2d(model, pre_layer, output_shape, node_info)
56 | self.model = model
57 | elif op_type == "reshape":
58 | regex = re.compile("flatten")
59 | if regex.findall(node_info.name):
60 |                     model, output_shape, pre_layer = self.convert_flatten(model, pre_layer, output_shape)
61 | self.model = model
62 | elif op_type == "matmul":
63 |                 model, output_shape, pre_layer = self.convert_dense(model, pre_layer, output_shape, index, onnx_model_graph)
64 | self.model = model
65 | self.model = model
66 | self.end_layer = pre_layer
67 |
68 |     # basic settings for the Nengo model
69 | def gen_init(self):
70 | model = nengo.Network()
71 | with model:
72 | model.config[nengo.Ensemble].max_rates = nengo.dists.Choice([100])
73 | model.config[nengo.Ensemble].intercepts = nengo.dists.Choice([0])
74 | nengo_dl.configure_settings(trainable=False)
75 | return model
76 |
77 |     # build the input node (a nengo.Node object)
78 |     def gen_input(self, model, input_shape):
79 |         length = len(input_shape)  # e.g. 3 dims: [28, 28, 1]
80 |         dim = 1
81 |         for index in range(length):
82 |             dim *= input_shape[index]  # dim becomes the total size, 28*28*1
83 | 
84 |         with model:
85 |             self.inp = nengo.Node([0] * dim)  # a Nengo Node with 28*28*1 inputs
86 |             x = self.inp
87 |         output_shape = input_shape  # with only the input so far, this is the output shape
88 | 
89 |         # return the model, the output shape, and the node object x being built up
90 |         return model, output_shape, x  # the model keeps growing as x is passed along
91 |
92 | def convert_conv2d(self, model, pre_layer, input_shape, index, onnx_model_graph):
93 | onnx_model_graph_node = onnx_model_graph.node
94 | node_info = onnx_model_graph_node[index]
95 | neuron_type = self.get_neuronType(index, onnx_model_graph_node)
96 | filters = self.get_filterNum(node_info, onnx_model_graph)
97 |         for attr in node_info.attribute:
98 |             if attr.name == "kernel_shape":
99 |                 kernel_size = attr.ints[0]
100 |             elif attr.name == "strides":
101 |                 strides = attr.ints[0]
102 |             elif attr.name == "auto_pad":
103 |                 padding = attr.s.decode('ascii').lower()
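                # any ONNX auto_pad value other than VALID (e.g. SAME_UPPER) is mapped to Keras "same" padding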
104 | if padding != "valid":
105 | padding = "same"
106 | if padding == "same":
107 | output_shape = [input_shape[0], input_shape[1], filters]
108 | else:
109 | output_shape = [int((input_shape[0] - kernel_size) / strides + 1),
110 | int((input_shape[1] - kernel_size) / strides + 1), filters]
111 | with model:
112 | x = nengo_dl.Layer(tf.keras.layers.Convolution2D(
113 | filters=filters, kernel_size=kernel_size, padding=padding))\
114 | (pre_layer, shape_in=(input_shape[0], input_shape[1], input_shape[2]))
115 |
116 | # activation
117 | if neuron_type == "lif":
118 | x = nengo_dl.Layer(nengo.LIF(amplitude=self.amplitude))(x)
119 |                 print('LIF activation added')
120 | elif neuron_type == "lifrate":
121 | x = nengo_dl.Layer(nengo.LIFRate(amplitude=self.amplitude))(x)
122 | elif neuron_type == "adaptivelif":
123 | x = nengo_dl.Layer(nengo.AdaptiveLIF(amplitude=self.amplitude))(x)
124 | elif neuron_type == "adaptivelifrate":
125 | x = nengo_dl.Layer(nengo.AdaptiveLIFRate(amplitude=self.amplitude))(x)
126 | elif neuron_type == "izhikevich":
127 | x = nengo_dl.Layer(nengo.Izhikevich(amplitude=self.amplitude))(x)
128 | elif neuron_type == "softlifrate":
129 | x = nengo_dl.Layer(nengo_dl.neurons.SoftLIFRate(amplitude=self.amplitude))(x)
130 |             elif neuron_type is None:  # default neuron_type = LIF
131 | x = nengo_dl.Layer(nengo.LIF(amplitude=self.amplitude))(x)
132 | print('convert_conv2d finish')
133 | return model, output_shape, x
134 |
135 | # batchnormalization
136 | def convert_batchnormalization2d(self, model, pre_layer, input_shape, node_info):
137 | for index in range(len(node_info.attribute)):
138 | if node_info.attribute[index].name == "momentum":
139 | momentum = round(node_info.attribute[index].f, 4)
140 | if momentum == 0:
141 | momentum = 0.99
142 | elif node_info.attribute[index].name == "epsilon":
143 | epsilon = round(node_info.attribute[index].f, 4)
144 | if epsilon == 0:
145 | epsilon = 0.001
146 | with model:
147 |             x = nengo_dl.Layer(tf.keras.layers.BatchNormalization(momentum=momentum, epsilon=epsilon))(
148 |                 pre_layer, shape_in=(input_shape[0], input_shape[1], input_shape[2]))
149 | output_shape = input_shape
150 | print('convert_batchnormalization2d finish')
151 |         return model, output_shape, x  # the model keeps growing as x is passed along
152 |
153 | # maxpooling
154 | def convert_maxpool2d(self, model, pre_layer, input_shape, node_info):
155 | for index in range(len(node_info.attribute)):
156 | if node_info.attribute[index].name == "kernel_shape":
157 | pool_size = node_info.attribute[index].ints[0]
158 | elif node_info.attribute[index].name == "strides":
159 | strides = node_info.attribute[index].ints[0]
160 | output_shape = [int(input_shape[0] / strides), int(input_shape[1] / strides), input_shape[2]]
161 | with model:
162 | x = nengo_dl.Layer(tf.keras.layers.MaxPooling2D(pool_size=pool_size, strides=strides))(
163 | pre_layer, shape_in=(input_shape[0], input_shape[1], input_shape[2]))
164 | print('convert_maxpool2d finish')
165 | return model, output_shape, x
166 |
167 | # average pooling
168 | def convert_avgpool2d(self, model, pre_layer, input_shape, node_info):
169 | for index in range(len(node_info.attribute)):
170 | if node_info.attribute[index].name == "kernel_shape":
171 | pool_size = node_info.attribute[index].ints[0]
172 | elif node_info.attribute[index].name == "strides":
173 | strides = node_info.attribute[index].ints[0]
174 | output_shape = [int(input_shape[0] / strides), int(input_shape[1] / strides), input_shape[2]]
175 | with model:
176 | x = nengo_dl.Layer(tf.keras.layers.AveragePooling2D(pool_size=pool_size, strides=strides))(
177 | pre_layer, shape_in=(input_shape[0], input_shape[1], input_shape[2]))
178 | print('convert_avgpool2d finish')
179 | return model, output_shape, x
180 |
181 | # flatten
182 | def convert_flatten(self, model, pre_layer, input_shape):
183 | with model:
184 |             x = nengo_dl.Layer(tf.keras.layers.Flatten())(pre_layer)
185 | output_shape = 1
186 | for index in range(len(input_shape)):
187 | output_shape *= input_shape[index]
188 | output_shape = [output_shape, 1]
189 | print('Flatten finish')
190 | return model, output_shape, x
191 |
192 |     # dense layer (the equivalent of ONNX MatMul)
193 | def convert_dense(self, model, pre_layer, input_shape, index, onnx_model_graph):
194 | onnx_model_graph_node = onnx_model_graph.node
195 | node_info = onnx_model_graph_node[index]
196 | dense_num = self.get_dense_num(node_info, onnx_model_graph)
197 |         neuron_type = self.get_neuronType(index, onnx_model_graph_node)  # scan the following nodes for the neuron op_type
198 |
199 | with model:
200 | x = nengo_dl.Layer(tf.keras.layers.Dense(units=dense_num))(pre_layer)
201 | if neuron_type != "softmax":
202 | if neuron_type == "lif":
203 | x = nengo_dl.Layer(nengo.LIF(amplitude=self.amplitude))(x)
204 | elif neuron_type == "lifrate":
205 | x = nengo_dl.Layer(nengo.LIFRate(amplitude=self.amplitude))(x)
206 | elif neuron_type == "adaptivelif":
207 | x = nengo_dl.Layer(nengo.AdaptiveLIF(amplitude=self.amplitude))(x)
208 | elif neuron_type == "adaptivelifrate":
209 | x = nengo_dl.Layer(nengo.AdaptiveLIFRate(amplitude=self.amplitude))(x)
210 | elif neuron_type == "izhikevich":
211 | x = nengo_dl.Layer(nengo.Izhikevich(amplitude=self.amplitude))(x)
212 | elif neuron_type == "softlifrate":
213 | x = nengo_dl.Layer(nengo_dl.neurons.SoftLIFRate(amplitude=self.amplitude))(x)
214 |                 elif neuron_type is None:  # default neuron_type = LIF
215 | x = nengo_dl.Layer(nengo.LIF(amplitude=self.amplitude))(x)
216 | output_shape = [dense_num, 1]
217 | print('convert Dense finish')
218 |         return model, output_shape, x  # the model keeps growing as x is passed along
219 |
220 |     # walk through the following nodes to find which neuron op_type applies
221 | def get_neuronType(self, node_index, onnx_model_graph_node):
222 | node_len = len(onnx_model_graph_node)
223 | for index in range(node_index, node_len):
224 | node_info = onnx_model_graph_node[index]
225 | op_type = node_info.op_type.lower()
226 | if op_type == "lif" or op_type == "lifrate" or op_type == "adaptivelif" or op_type == "adaptivelifrate" or op_type == "izhikevich" or op_type == "softmax" or op_type == "softlifrate":
227 | return op_type
228 | return None
229 |
230 |     # return the built model
231 | def get_model(self):
232 | return self.model
233 |
234 |     # return the input node
235 | def get_inputProbe(self):
236 | return self.inp
237 |
238 | def get_endLayer(self):
239 | return self.end_layer
240 |
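    # ONNX stores conv weights as [out_channels, in_channels, kH, kW], so dims[0] is the filter count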
241 | def get_filterNum(self, node_info, onnx_model_graph):
242 | weight_name = node_info.input[1]
243 | for m in range(len(onnx_model_graph.initializer)):
244 | name = onnx_model_graph.initializer[m].name
245 | for n in range(len(node_info.input)):
246 | if node_info.input[n] == name and name == weight_name:
247 | shape = onnx_model_graph.initializer[m].dims
248 | return shape[0]
249 | return None
250 |
251 | def get_dense_num(self, node_info, onnx_model_graph):
252 | weight_name = node_info.input[1]
253 | for m in range(len(onnx_model_graph.initializer)):
254 | name = onnx_model_graph.initializer[m].name
255 | for n in range(len(node_info.input)):
256 | if node_info.input[n] == name and name == weight_name:
257 | shape = onnx_model_graph.initializer[m].dims
258 | return shape[1]
259 | return None
260 |
261 |
262 | # general loss/metric helpers
263 | def objective(outputs, targets):
264 |     return tf.nn.softmax_cross_entropy_with_logits(labels=targets, logits=outputs)
265 |
266 | def classification_error(outputs, targets):
267 | return 100 * tf.reduce_mean(tf.cast(tf.not_equal(tf.argmax(outputs[:, -1], axis=-1), tf.argmax(targets[:, -1], axis=-1)), tf.float32))
268 |
269 | def classification_accuracy(y_true, y_pred):
270 | return tf.metrics.sparse_categorical_accuracy(y_true[:, -1], y_pred[:, -1])
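
# A minimal usage sketch (not part of the original file; "model.onnx" is a
# hypothetical path and the probe/simulator settings are illustrative only):
#
#   converter = toNengoModel("model.onnx", amplitude=0.01)
#   net = converter.get_model()
#   with net:
#       out_p = nengo.Probe(converter.get_endLayer())
#   with nengo_dl.Simulator(net, minibatch_size=1) as sim:
#       sim.step()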
--------------------------------------------------------------------------------