├── .gitignore
├── LICENSE
├── README.md
├── changingSpeed_test.mat
├── changingSpeed_train.mat
├── figs
│   ├── data.png
│   ├── network.png
│   └── result.png
├── model
│   ├── __pycache__
│   │   └── multi_scale_ori.cpython-36.pyc
│   ├── multi_scale_nores.py
│   ├── multi_scale_one3x3.py
│   ├── multi_scale_one5x5.py
│   ├── multi_scale_one7x7.py
│   └── multi_scale_ori.py
├── multi_scale_ori.py
├── result
│   ├── changing3x3
│   │   ├── TestAccuracy_ChangingSpeed_Train98.424Test93.188.mat
│   │   ├── TestLoss_ChangingSpeed_Train98.424Test93.188.mat
│   │   ├── TrainAccuracy_ChangingSpeed_Train98.424Test93.188.mat
│   │   └── TrainLoss_ChangingSpeed_Train98.424Test93.188.mat
│   ├── changing5x5
│   │   ├── TestAccuracy_ChangingSpeed_Train96.628Test93.466.mat
│   │   ├── TestLoss_ChangingSpeed_Train96.628Test93.466.mat
│   │   ├── TrainAccuracy_ChangingSpeed_Train96.628Test93.466.mat
│   │   └── TrainLoss_ChangingSpeed_Train96.628Test93.466.mat
│   ├── changing7x7
│   │   ├── TestAccuracy_ChangingSpeed_Train97.601Test92.585.mat
│   │   ├── TestLoss_ChangingSpeed_Train97.601Test92.585.mat
│   │   ├── TrainAccuracy_ChangingSpeed_Train97.601Test92.585.mat
│   │   └── TrainLoss_ChangingSpeed_Train97.601Test92.585.mat
│   ├── changingMS
│   │   ├── TestAccuracy_ChangingSpeed_Train97.126Test94.300.mat
│   │   ├── TestLoss_ChangingSpeed_Train97.126Test94.300.mat
│   │   ├── TrainAccuracy_ChangingSpeed_Train97.126Test94.300.mat
│   │   └── TrainLoss_ChangingSpeed_Train97.126Test94.300.mat
│   └── changingNores
│       ├── TestAccuracy_ChangingSpeed_Train96.187Test94.068.mat
│       ├── TestLoss_ChangingSpeed_Train96.187Test94.068.mat
│       ├── TrainAccuracy_ChangingSpeed_Train96.187Test94.068.mat
│       └── TrainLoss_ChangingSpeed_Train96.187Test94.068.mat
├── test.py
├── train.py
└── weights
    ├── changing3x3
    │   └── ChaningSpeed_Train98.424Test93.188.pkl
    ├── changing5x5
    │   └── ChaningSpeed_Train96.628Test93.466.pkl
    ├── changing7x7
    │   └── ChaningSpeed_Train97.601Test92.585.pkl
    ├── changingNores
    │   └── ChaningSpeed_Train96.187Test94.068.pkl
    └── changingResnet
        └── ChaningSpeed_Train98.655Test95.690.pkl
/.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | MANIFEST 27 | 28 | # PyInstaller 29 | # Usually these files are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .coverage 42 | .coverage.* 43 | .cache 44 | nosetests.xml 45 | coverage.xml 46 | *.cover 47 | .hypothesis/ 48 | .pytest_cache/ 49 | 50 | # Translations 51 | *.mo 52 | *.pot 53 | 54 | # Django stuff: 55 | *.log 56 | local_settings.py 57 | db.sqlite3 58 | 59 | # Flask stuff: 60 | instance/ 61 | .webassets-cache 62 | 63 | # Scrapy stuff: 64 | .scrapy 65 | 66 | # Sphinx documentation 67 | docs/_build/ 68 | 69 | # PyBuilder 70 | target/ 71 | 72 | # Jupyter Notebook 73 | .ipynb_checkpoints 74 | 75 | # pyenv 76 | .python-version 77 | 78 | # celery beat schedule file 79 | celerybeat-schedule 80 | 81 | # SageMath parsed files 82 | *.sage.py 83 | 84 | # Environments 85 | .env 86 | .venv 87 | env/ 88 | venv/ 89 | ENV/ 90 | env.bak/ 91 | venv.bak/ 92 | 93 | # Spyder project settings 94 | .spyderproject 95 | .spyproject 96 | 97 | # Rope project settings 98 | .ropeproject 99 | 100 | # mkdocs documentation 101 | /site 102 | 103 | # mypy 104 | .mypy_cache/ 105 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Fei Wang 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Multi Scale 1D ResNet 2 | 3 | ![Network](figs/network.png) 4 | 5 | This is a variation of our [CSI-Net](https://github.com/geekfeiw/CSI-Net): a very lightweight classification network for time-series data built on 1D convolutions, whose kernels sweep along the *time* axis. The multi-scale setting is inspired by Inception, and we found it useful. 6 | 7 | ## Tested Environment 8 | 1. Python 3.6 9 | 2. PyTorch 0.4.1 10 | 3. CUDA 8.0/9.0 11 | 4. Windows 7 / Ubuntu 16.04 12 |
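## Example Usage

A minimal sketch of running the model on dummy data. The `input_channel=1` and `num_classes=10` arguments are placeholders for your dataset, and the time length of 512 is an assumption that matches the hard-coded average-pooling kernel sizes (16/11/6) in `multi_scale_ori.py`:

```python
import torch
from model.multi_scale_ori import MSResNet

model = MSResNet(input_channel=1, num_classes=10)
model.eval()

x = torch.randn(8, 1, 512)  # (batch, channels, time)
with torch.no_grad():
    logits, features = model(x)  # forward() returns (class scores, concatenated 768-d features)
print(logits.shape)    # torch.Size([8, 10])
print(features.shape)  # torch.Size([8, 768])
```

Note that `forward` ends with `out.squeeze()`, which would also drop the batch dimension for a batch of size 1, so feed at least two samples at a time.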
-------------------------------------------------------------------------------- /changingSpeed_test.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/changingSpeed_test.mat -------------------------------------------------------------------------------- /changingSpeed_train.mat: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/changingSpeed_train.mat -------------------------------------------------------------------------------- /figs/data.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/figs/data.png -------------------------------------------------------------------------------- /figs/network.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/figs/network.png -------------------------------------------------------------------------------- /figs/result.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/figs/result.png -------------------------------------------------------------------------------- /model/__pycache__/multi_scale_ori.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/model/__pycache__/multi_scale_ori.cpython-36.pyc -------------------------------------------------------------------------------- /model/multi_scale_nores.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import math 3 | import torch.utils.model_zoo as model_zoo 4 | 5 | import torch 6 | 7 | def conv3x3(in_planes, out_planes, stride=1): 8 | """3x3 convolution with padding""" 9 | return nn.Conv1d(in_planes, out_planes, kernel_size=3, stride=stride, 10 | padding=1, bias=False) 11 | 12 | def conv5x5(in_planes, out_planes, stride=1): 13 | return nn.Conv1d(in_planes, out_planes, kernel_size=5, stride=stride, 14 | padding=1, bias=False) 15 | 16 | def conv7x7(in_planes, out_planes, stride=1): 17 | return nn.Conv1d(in_planes, out_planes, kernel_size=7, stride=stride, 18 | padding=1, bias=False) 19 | 20 | 21 | 22 | class BasicBlock3x3(nn.Module): 23 | expansion = 1 24 | 25 | def __init__(self, inplanes3, planes, stride=1, downsample=None): 26 | super(BasicBlock3x3, self).__init__() 27 | self.conv1 = conv3x3(inplanes3, planes, stride) 28 | self.bn1 = nn.BatchNorm1d(planes) 29 | self.relu = nn.ReLU(inplace=True) 30 | self.conv2 = conv3x3(planes, planes) 31 | self.bn2 = nn.BatchNorm1d(planes) 32 | self.downsample = downsample 33 | self.stride = stride 34 | 35 | def forward(self, x): 36 | residual = x 37 | 38 | out = self.conv1(x) 39 | out = self.bn1(out) 40 | out = self.relu(out) 41 | 42 | out = self.conv2(out) 43 | out = self.bn2(out) 44 | 45 | if self.downsample is not None: 46 | residual = self.downsample(x) 47 | 48 | out += residual 49 | out = self.relu(out) 50 | 51 | return out 52 | 53
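# NOTE: conv5x5 and conv7x7 above use padding=1, which is smaller than the
# "same" padding of (kernel_size - 1) // 2, so each of those convolutions
# shortens the sequence. The 5x5/7x7 blocks below therefore crop the residual
# (residual[:, :, 0:-d]) before the addition. An illustrative helper, not part
# of the original file, for the nn.Conv1d length arithmetic:
def _conv1d_out_len(l_in, kernel_size, stride=1, padding=0):
    """Output length of nn.Conv1d: floor((l_in + 2*padding - kernel_size) / stride) + 1."""
    return (l_in + 2 * padding - kernel_size) // stride + 1
# e.g. for a length-128 input to BasicBlock5x5 with stride=2, the main path gives
# _conv1d_out_len(_conv1d_out_len(128, 5, 2, 1), 5, 1, 1) == 61, while the 1x1
# downsample gives _conv1d_out_len(128, 1, 2) == 64, hence d == 3 in that case.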
| 54 | class BasicBlock5x5(nn.Module): 55 | expansion = 1 56 | 57 | def __init__(self, inplanes5, planes, stride=1, downsample=None): 58 | super(BasicBlock5x5, self).__init__() 59 | self.conv1 = conv5x5(inplanes5, planes, stride) 60 | self.bn1 = nn.BatchNorm1d(planes) 61 | self.relu = nn.ReLU(inplace=True) 62 | self.conv2 = conv5x5(planes, planes) 63 | self.bn2 = nn.BatchNorm1d(planes) 64 | self.downsample = downsample 65 | self.stride = stride 66 | 67 | def forward(self, x): 68 | residual = x 69 | 70 | out = self.conv1(x) 71 | out = self.bn1(out) 72 | out = self.relu(out) 73 | 74 | out = self.conv2(out) 75 | out = self.bn2(out) 76 | 77 | if self.downsample is not None: 78 | residual = self.downsample(x) 79 | 80 | d = residual.shape[2] - out.shape[2] 81 | out1 = residual[:,:,0:-d] + out 82 | out1 = self.relu(out1) 83 | # out += residual 84 | 85 | return out1 86 | 87 | 88 | 89 | class BasicBlock7x7(nn.Module): 90 | expansion = 1 91 | 92 | def __init__(self, inplanes7, planes, stride=1, downsample=None): 93 | super(BasicBlock7x7, self).__init__() 94 | self.conv1 = conv7x7(inplanes7, planes, stride) 95 | self.bn1 = nn.BatchNorm1d(planes) 96 | self.relu = nn.ReLU(inplace=True) 97 | self.conv2 = conv7x7(planes, planes) 98 | self.bn2 = nn.BatchNorm1d(planes) 99 | self.downsample = downsample 100 | self.stride = stride 101 | 102 | def forward(self, x): 103 | residual = x 104 | 105 | out = self.conv1(x) 106 | out = self.bn1(out) 107 | out = self.relu(out) 108 | 109 | out = self.conv2(out) 110 | out = self.bn2(out) 111 | 112 | if self.downsample is not None: 113 | residual = self.downsample(x) 114 | 115 | d = residual.shape[2] - out.shape[2] 116 | out1 = residual[:, :, 0:-d] + out 117 | out1 = self.relu(out1) 118 | # out += residual 119 | 120 | return out1 121 | 122 | 123 | 124 | 125 | class MSResNet(nn.Module): 126 | def __init__(self, input_channel, layers=[1, 1, 1, 1], num_classes=10): 127 | self.inplanes3 = 64 128 | self.inplanes5 = 64 129 | self.inplanes7 = 64 130 | 131 | super(MSResNet, self).__init__() 132 | 133 | 134 | self.conv1 = nn.Conv1d(input_channel, 64, kernel_size=7, stride=2, padding=3, 135 | bias=False) 136 | self.bn1 = nn.BatchNorm1d(64) 137 | self.relu = nn.ReLU(inplace=True) 138 | self.maxpool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1) 139 | 140 | 141 | self.block3x3 = nn.Sequential( 142 | nn.Conv1d(64, 64, kernel_size=3, stride=2, padding=1, bias=False), 143 | nn.BatchNorm1d(64), 144 | nn.ReLU(inplace=True), 145 | nn.Conv1d(64, 64, kernel_size=3, stride=1, padding=1, bias=False), 146 | nn.BatchNorm1d(64), 147 | nn.ReLU(inplace=True), 148 | 149 | nn.Conv1d(64, 128, kernel_size=3, stride=2, padding=1, bias=False), 150 | nn.BatchNorm1d(128), 151 | nn.ReLU(inplace=True), 152 | nn.Conv1d(128, 128, kernel_size=3, stride=1, padding=1, bias=False), 153 | nn.BatchNorm1d(128), 154 | nn.ReLU(inplace=True), 155 | 156 | nn.Conv1d(128, 256, kernel_size=3, stride=2, padding=1, bias=False), 157 | nn.BatchNorm1d(256), 158 | nn.ReLU(inplace=True), 159 | nn.Conv1d(256, 256, kernel_size=3, stride=1, padding=1, bias=False), 160 | nn.BatchNorm1d(256), 161 | nn.ReLU(inplace=True), 162 | ) 163 | # maxplooing kernel size: 16, 11, 6 164 | self.maxpool3 = nn.AvgPool1d(kernel_size=16, stride=1, padding=0) 165 | # 166 | 167 | 168 | self.block5x5 = nn.Sequential( 169 | nn.Conv1d(64, 64, kernel_size=5, stride=2, padding=1, bias=False), 170 | nn.BatchNorm1d(64), 171 | nn.ReLU(inplace=True), 172 | nn.Conv1d(64, 64, kernel_size=5, stride=1, padding=1, bias=False), 173 | nn.BatchNorm1d(64), 174 | 
nn.ReLU(inplace=True), 175 | 176 | nn.Conv1d(64, 128, kernel_size=5, stride=2, padding=1, bias=False), 177 | nn.BatchNorm1d(128), 178 | nn.ReLU(inplace=True), 179 | nn.Conv1d(128, 128, kernel_size=5, stride=1, padding=1, bias=False), 180 | nn.BatchNorm1d(128), 181 | nn.ReLU(inplace=True), 182 | 183 | nn.Conv1d(128, 256, kernel_size=5, stride=2, padding=1, bias=False), 184 | nn.BatchNorm1d(256), 185 | nn.ReLU(inplace=True), 186 | nn.Conv1d(256, 256, kernel_size=5, stride=1, padding=1, bias=False), 187 | nn.BatchNorm1d(256), 188 | nn.ReLU(inplace=True), 189 | ) 190 | # maxplooing kernel size: 16, 11, 6 191 | self.maxpool5 = nn.AvgPool1d(kernel_size=11, stride=1, padding=0) 192 | 193 | self.block7x7 = nn.Sequential( 194 | nn.Conv1d(64, 64, kernel_size=7, stride=2, padding=1, bias=False), 195 | nn.BatchNorm1d(64), 196 | nn.ReLU(inplace=True), 197 | nn.Conv1d(64, 64, kernel_size=7, stride=1, padding=1, bias=False), 198 | nn.BatchNorm1d(64), 199 | nn.ReLU(inplace=True), 200 | 201 | nn.Conv1d(64, 128, kernel_size=7, stride=2, padding=1, bias=False), 202 | nn.BatchNorm1d(128), 203 | nn.ReLU(inplace=True), 204 | nn.Conv1d(128, 128, kernel_size=7, stride=1, padding=1, bias=False), 205 | nn.BatchNorm1d(128), 206 | nn.ReLU(inplace=True), 207 | 208 | nn.Conv1d(128, 256, kernel_size=7, stride=2, padding=1, bias=False), 209 | nn.BatchNorm1d(256), 210 | nn.ReLU(inplace=True), 211 | nn.Conv1d(256, 256, kernel_size=7, stride=1, padding=1, bias=False), 212 | nn.BatchNorm1d(256), 213 | nn.ReLU(inplace=True), 214 | ) 215 | # maxplooing kernel size: 16, 11, 6 216 | self.maxpool7 = nn.AvgPool1d(kernel_size=6, stride=1, padding=0) 217 | # 218 | # # self.drop = nn.Dropout(p=0.2) 219 | self.fc = nn.Linear(256*3, num_classes) 220 | 221 | # todo: modify the initialization 222 | # for m in self.modules(): 223 | # if isinstance(m, nn.Conv1d): 224 | # n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 225 | # m.weight.data.normal_(0, math.sqrt(2. 
/ n)) 226 | # elif isinstance(m, nn.BatchNorm1d): 227 | # m.weight.data.fill_(1) 228 | # m.bias.data.zero_() 229 | 230 | def _make_layer3(self, block, planes, blocks, stride=2): 231 | downsample = None 232 | if stride != 1 or self.inplanes3 != planes * block.expansion: 233 | downsample = nn.Sequential( 234 | nn.Conv1d(self.inplanes3, planes * block.expansion, 235 | kernel_size=1, stride=stride, bias=False), 236 | nn.BatchNorm1d(planes * block.expansion), 237 | ) 238 | 239 | layers = [] 240 | layers.append(block(self.inplanes3, planes, stride, downsample)) 241 | self.inplanes3 = planes * block.expansion 242 | for i in range(1, blocks): 243 | layers.append(block(self.inplanes3, planes)) 244 | 245 | return nn.Sequential(*layers) 246 | 247 | def _make_layer5(self, block, planes, blocks, stride=2): 248 | downsample = None 249 | if stride != 1 or self.inplanes5 != planes * block.expansion: 250 | downsample = nn.Sequential( 251 | nn.Conv1d(self.inplanes5, planes * block.expansion, 252 | kernel_size=1, stride=stride, bias=False), 253 | nn.BatchNorm1d(planes * block.expansion), 254 | ) 255 | 256 | layers = [] 257 | layers.append(block(self.inplanes5, planes, stride, downsample)) 258 | self.inplanes5 = planes * block.expansion 259 | for i in range(1, blocks): 260 | layers.append(block(self.inplanes5, planes)) 261 | 262 | return nn.Sequential(*layers) 263 | 264 | 265 | def _make_layer7(self, block, planes, blocks, stride=2): 266 | downsample = None 267 | if stride != 1 or self.inplanes7 != planes * block.expansion: 268 | downsample = nn.Sequential( 269 | nn.Conv1d(self.inplanes7, planes * block.expansion, 270 | kernel_size=1, stride=stride, bias=False), 271 | nn.BatchNorm1d(planes * block.expansion), 272 | ) 273 | 274 | layers = [] 275 | layers.append(block(self.inplanes7, planes, stride, downsample)) 276 | self.inplanes7 = planes * block.expansion 277 | for i in range(1, blocks): 278 | layers.append(block(self.inplanes7, planes)) 279 | 280 | return nn.Sequential(*layers) 281 | 282 | def forward(self, x0): 283 | x0 = self.conv1(x0) 284 | x0 = self.bn1(x0) 285 | x0 = self.relu(x0) 286 | x0 = self.maxpool(x0) 287 | 288 | x = self.block3x3(x0) 289 | x = self.maxpool3(x) 290 | # 291 | y = self.block5x5(x0) 292 | y = self.maxpool5(y) 293 | # 294 | z = self.block7x7(x0) 295 | z = self.maxpool7(z) 296 | # 297 | out = torch.cat([x, y, z], dim=1) 298 | # 299 | out = out.squeeze() 300 | # # out = self.drop(out) 301 | out1 = self.fc(out) 302 | return out1, out 303 | 304 | 305 | 306 | 307 | 308 | 309 | 310 | -------------------------------------------------------------------------------- /model/multi_scale_one3x3.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import math 3 | import torch.utils.model_zoo as model_zoo 4 | 5 | import torch 6 | 7 | def conv3x3(in_planes, out_planes, stride=1): 8 | """3x3 convolution with padding""" 9 | return nn.Conv1d(in_planes, out_planes, kernel_size=3, stride=stride, 10 | padding=1, bias=False) 11 | 12 | def conv5x5(in_planes, out_planes, stride=1): 13 | return nn.Conv1d(in_planes, out_planes, kernel_size=5, stride=stride, 14 | padding=1, bias=False) 15 | 16 | def conv7x7(in_planes, out_planes, stride=1): 17 | return nn.Conv1d(in_planes, out_planes, kernel_size=7, stride=stride, 18 | padding=1, bias=False) 19 | 20 | 21 | 22 | class BasicBlock3x3_1(nn.Module): 23 | expansion = 1 24 | 25 | def __init__(self, inplanes3_1, planes, stride=1, downsample=None): 26 | super(BasicBlock3x3_1, self).__init__() 27 | 
self.conv1 = conv3x3(inplanes3_1, planes, stride) 28 | self.bn1 = nn.BatchNorm1d(planes) 29 | self.relu = nn.ReLU(inplace=True) 30 | self.conv2 = conv3x3(planes, planes) 31 | self.bn2 = nn.BatchNorm1d(planes) 32 | self.downsample = downsample 33 | self.stride = stride 34 | 35 | def forward(self, x): 36 | residual = x 37 | 38 | out = self.conv1(x) 39 | out = self.bn1(out) 40 | out = self.relu(out) 41 | 42 | out = self.conv2(out) 43 | out = self.bn2(out) 44 | 45 | if self.downsample is not None: 46 | residual = self.downsample(x) 47 | 48 | out += residual 49 | out = self.relu(out) 50 | 51 | return out 52 | 53 | 54 | class BasicBlock3x3_2(nn.Module): 55 | expansion = 1 56 | 57 | def __init__(self, inplanes3_2, planes, stride=1, downsample=None): 58 | super(BasicBlock3x3_2, self).__init__() 59 | self.conv1 = conv3x3(inplanes3_2, planes, stride) 60 | self.bn1 = nn.BatchNorm1d(planes) 61 | self.relu = nn.ReLU(inplace=True) 62 | self.conv2 = conv3x3(planes, planes) 63 | self.bn2 = nn.BatchNorm1d(planes) 64 | self.downsample = downsample 65 | self.stride = stride 66 | 67 | def forward(self, x): 68 | residual = x 69 | 70 | out = self.conv1(x) 71 | out = self.bn1(out) 72 | out = self.relu(out) 73 | 74 | out = self.conv2(out) 75 | out = self.bn2(out) 76 | 77 | if self.downsample is not None: 78 | residual = self.downsample(x) 79 | 80 | out += residual 81 | out = self.relu(out) 82 | 83 | return out 84 | 85 | class BasicBlock3x3_3(nn.Module): 86 | expansion = 1 87 | 88 | def __init__(self, inplanes3_3, planes, stride=1, downsample=None): 89 | super(BasicBlock3x3_3, self).__init__() 90 | self.conv1 = conv3x3(inplanes3_3, planes, stride) 91 | self.bn1 = nn.BatchNorm1d(planes) 92 | self.relu = nn.ReLU(inplace=True) 93 | self.conv2 = conv3x3(planes, planes) 94 | self.bn2 = nn.BatchNorm1d(planes) 95 | self.downsample = downsample 96 | self.stride = stride 97 | 98 | def forward(self, x): 99 | residual = x 100 | 101 | out = self.conv1(x) 102 | out = self.bn1(out) 103 | out = self.relu(out) 104 | 105 | out = self.conv2(out) 106 | out = self.bn2(out) 107 | 108 | if self.downsample is not None: 109 | residual = self.downsample(x) 110 | 111 | out += residual 112 | out = self.relu(out) 113 | 114 | return out 115 | 116 | class MSResNet(nn.Module): 117 | def __init__(self, input_channel, layers=[1, 1, 1, 1], num_classes=10): 118 | self.inplanes3_1 = 64 119 | self.inplanes3_2 = 64 120 | self.inplanes3_3 = 64 121 | 122 | super(MSResNet, self).__init__() 123 | 124 | self.conv1 = nn.Conv1d(input_channel, 64, kernel_size=7, stride=2, padding=3, 125 | bias=False) 126 | self.bn1 = nn.BatchNorm1d(64) 127 | self.relu = nn.ReLU(inplace=True) 128 | self.maxpool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1) 129 | 130 | self.layer3x3_11 = self._make_layer3_1(BasicBlock3x3_1, 64, layers[0], stride=2) 131 | self.layer3x3_12 = self._make_layer3_1(BasicBlock3x3_1, 128, layers[1], stride=2) 132 | self.layer3x3_13 = self._make_layer3_1(BasicBlock3x3_1, 256, layers[2], stride=2) 133 | # self.layer3x3_4 = self._make_layer3(BasicBlock3x3, 512, layers[3], stride=2) 134 | 135 | # maxplooing kernel size: 16, 11, 6 136 | self.maxpool3_1 = nn.AvgPool1d(kernel_size=16, stride=1, padding=0) 137 | 138 | self.layer3x3_21 = self._make_layer3_2(BasicBlock3x3_2, 64, layers[0], stride=2) 139 | self.layer3x3_22 = self._make_layer3_2(BasicBlock3x3_2, 128, layers[1], stride=2) 140 | self.layer3x3_23 = self._make_layer3_2(BasicBlock3x3_2, 256, layers[2], stride=2) 141 | # self.layer3x3_4 = self._make_layer3(BasicBlock3x3, 512, layers[3], stride=2) 
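        # NOTE: despite the "maxplooing" comments in these files, the pooling
        # layers below are nn.AvgPool1d (average pooling). The hard-coded kernel
        # sizes (16 for the 3x3 branches here, 11 and 6 for the 5x5/7x7 variants)
        # appear to assume a network input of length 512: conv1 and maxpool halve
        # it twice to 128, the three stride-2 stages reduce a 3x3 branch to
        # length 16, and AvgPool1d(kernel_size=16, stride=1) then collapses each
        # channel to a single value before the branches are concatenated.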
142 | 143 | # maxplooing kernel size: 16, 11, 6 144 | self.maxpool3_2 = nn.AvgPool1d(kernel_size=16, stride=1, padding=0) 145 | 146 | self.layer3x3_31 = self._make_layer3_3(BasicBlock3x3_3, 64, layers[0], stride=2) 147 | self.layer3x3_32 = self._make_layer3_3(BasicBlock3x3_3, 128, layers[1], stride=2) 148 | self.layer3x3_33 = self._make_layer3_3(BasicBlock3x3_3, 256, layers[2], stride=2) 149 | # self.layer3x3_4 = self._make_layer3(BasicBlock3x3, 512, layers[3], stride=2) 150 | 151 | # maxplooing kernel size: 16, 11, 6 152 | self.maxpool3_3 = nn.AvgPool1d(kernel_size=16, stride=1, padding=0) 153 | 154 | 155 | 156 | # self.drop = nn.Dropout(p=0.2) 157 | self.fc = nn.Linear(256*3, num_classes) 158 | 159 | # todo: modify the initialization 160 | # for m in self.modules(): 161 | # if isinstance(m, nn.Conv1d): 162 | # n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 163 | # m.weight.data.normal_(0, math.sqrt(2. / n)) 164 | # elif isinstance(m, nn.BatchNorm1d): 165 | # m.weight.data.fill_(1) 166 | # m.bias.data.zero_() 167 | 168 | def _make_layer3_1(self, block, planes, blocks, stride=2): 169 | downsample = None 170 | if stride != 1 or self.inplanes3_1 != planes * block.expansion: 171 | downsample = nn.Sequential( 172 | nn.Conv1d(self.inplanes3_1, planes * block.expansion, 173 | kernel_size=1, stride=stride, bias=False), 174 | nn.BatchNorm1d(planes * block.expansion), 175 | ) 176 | 177 | layers = [] 178 | layers.append(block(self.inplanes3_1, planes, stride, downsample)) 179 | self.inplanes3_1 = planes * block.expansion 180 | for i in range(1, blocks): 181 | layers.append(block(self.inplanes3_1, planes)) 182 | 183 | return nn.Sequential(*layers) 184 | 185 | def _make_layer3_2(self, block, planes, blocks, stride=2): 186 | downsample = None 187 | if stride != 1 or self.inplanes3_2 != planes * block.expansion: 188 | downsample = nn.Sequential( 189 | nn.Conv1d(self.inplanes3_2, planes * block.expansion, 190 | kernel_size=1, stride=stride, bias=False), 191 | nn.BatchNorm1d(planes * block.expansion), 192 | ) 193 | 194 | layers = [] 195 | layers.append(block(self.inplanes3_2, planes, stride, downsample)) 196 | self.inplanes3_2 = planes * block.expansion 197 | for i in range(1, blocks): 198 | layers.append(block(self.inplanes3_2, planes)) 199 | 200 | return nn.Sequential(*layers) 201 | 202 | 203 | def _make_layer3_3(self, block, planes, blocks, stride=2): 204 | downsample = None 205 | if stride != 1 or self.inplanes3_3 != planes * block.expansion: 206 | downsample = nn.Sequential( 207 | nn.Conv1d(self.inplanes3_3, planes * block.expansion, 208 | kernel_size=1, stride=stride, bias=False), 209 | nn.BatchNorm1d(planes * block.expansion), 210 | ) 211 | 212 | layers = [] 213 | layers.append(block(self.inplanes3_3, planes, stride, downsample)) 214 | self.inplanes3_3 = planes * block.expansion 215 | for i in range(1, blocks): 216 | layers.append(block(self.inplanes3_3, planes)) 217 | 218 | return nn.Sequential(*layers) 219 | 220 | 221 | 222 | def _make_layer5(self, block, planes, blocks, stride=2): 223 | downsample = None 224 | if stride != 1 or self.inplanes5 != planes * block.expansion: 225 | downsample = nn.Sequential( 226 | nn.Conv1d(self.inplanes5, planes * block.expansion, 227 | kernel_size=1, stride=stride, bias=False), 228 | nn.BatchNorm1d(planes * block.expansion), 229 | ) 230 | 231 | layers = [] 232 | layers.append(block(self.inplanes5, planes, stride, downsample)) 233 | self.inplanes5 = planes * block.expansion 234 | for i in range(1, blocks): 235 | layers.append(block(self.inplanes5, 
planes)) 236 | 237 | return nn.Sequential(*layers) 238 | 239 | 240 | def _make_layer7(self, block, planes, blocks, stride=2): 241 | downsample = None 242 | if stride != 1 or self.inplanes7 != planes * block.expansion: 243 | downsample = nn.Sequential( 244 | nn.Conv1d(self.inplanes7, planes * block.expansion, 245 | kernel_size=1, stride=stride, bias=False), 246 | nn.BatchNorm1d(planes * block.expansion), 247 | ) 248 | 249 | layers = [] 250 | layers.append(block(self.inplanes7, planes, stride, downsample)) 251 | self.inplanes7 = planes * block.expansion 252 | for i in range(1, blocks): 253 | layers.append(block(self.inplanes7, planes)) 254 | 255 | return nn.Sequential(*layers) 256 | 257 | def forward(self, x0): 258 | x0 = self.conv1(x0) 259 | x0 = self.bn1(x0) 260 | x0 = self.relu(x0) 261 | x0 = self.maxpool(x0) 262 | 263 | x = self.layer3x3_11(x0) 264 | x = self.layer3x3_12(x) 265 | x = self.layer3x3_13(x) 266 | # x = self.layer3x3_4(x) 267 | x = self.maxpool3_1(x) 268 | 269 | y = self.layer3x3_21(x0) 270 | y = self.layer3x3_22(y) 271 | y = self.layer3x3_23(y) 272 | # y = self.layer5x5_4(y) 273 | y = self.maxpool3_2(y) 274 | 275 | z = self.layer3x3_31(x0) 276 | z = self.layer3x3_32(z) 277 | z = self.layer3x3_33(z) 278 | # z = self.layer7x7_4(z) 279 | z = self.maxpool3_3(z) 280 | 281 | out = torch.cat([x, y, z], dim=1) 282 | 283 | out = out.squeeze() 284 | # out = self.drop(out) 285 | out1 = self.fc(out) 286 | 287 | return out1, out 288 | 289 | 290 | 291 | 292 | 293 | 294 | 295 | -------------------------------------------------------------------------------- /model/multi_scale_one5x5.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import math 3 | import torch.utils.model_zoo as model_zoo 4 | 5 | import torch 6 | 7 | def conv3x3(in_planes, out_planes, stride=1): 8 | """3x3 convolution with padding""" 9 | return nn.Conv1d(in_planes, out_planes, kernel_size=3, stride=stride, 10 | padding=1, bias=False) 11 | 12 | def conv5x5(in_planes, out_planes, stride=1): 13 | return nn.Conv1d(in_planes, out_planes, kernel_size=5, stride=stride, 14 | padding=1, bias=False) 15 | 16 | def conv7x7(in_planes, out_planes, stride=1): 17 | return nn.Conv1d(in_planes, out_planes, kernel_size=7, stride=stride, 18 | padding=1, bias=False) 19 | 20 | 21 | 22 | class BasicBlock5x5_1(nn.Module): 23 | expansion = 1 24 | 25 | def __init__(self, inplanes5_1, planes, stride=1, downsample=None): 26 | super(BasicBlock5x5_1, self).__init__() 27 | self.conv1 = conv5x5(inplanes5_1, planes, stride) 28 | self.bn1 = nn.BatchNorm1d(planes) 29 | self.relu = nn.ReLU(inplace=True) 30 | self.conv2 = conv5x5(planes, planes) 31 | self.bn2 = nn.BatchNorm1d(planes) 32 | self.downsample = downsample 33 | self.stride = stride 34 | 35 | def forward(self, x): 36 | residual = x 37 | 38 | out = self.conv1(x) 39 | out = self.bn1(out) 40 | out = self.relu(out) 41 | 42 | out = self.conv2(out) 43 | out = self.bn2(out) 44 | 45 | if self.downsample is not None: 46 | residual = self.downsample(x) 47 | 48 | d = residual.shape[2] - out.shape[2] 49 | out1 = residual[:,:,0:-d] + out 50 | out1 = self.relu(out1) 51 | # out += residual 52 | 53 | return out1 54 | 55 | 56 | class BasicBlock5x5_2(nn.Module): 57 | expansion = 1 58 | 59 | def __init__(self, inplanes5_2, planes, stride=1, downsample=None): 60 | super(BasicBlock5x5_2, self).__init__() 61 | self.conv1 = conv5x5(inplanes5_2, planes, stride) 62 | self.bn1 = nn.BatchNorm1d(planes) 63 | self.relu = nn.ReLU(inplace=True) 64 | self.conv2 = 
conv5x5(planes, planes) 65 | self.bn2 = nn.BatchNorm1d(planes) 66 | self.downsample = downsample 67 | self.stride = stride 68 | 69 | def forward(self, x): 70 | residual = x 71 | 72 | out = self.conv1(x) 73 | out = self.bn1(out) 74 | out = self.relu(out) 75 | 76 | out = self.conv2(out) 77 | out = self.bn2(out) 78 | 79 | if self.downsample is not None: 80 | residual = self.downsample(x) 81 | 82 | d = residual.shape[2] - out.shape[2] 83 | out1 = residual[:,:,0:-d] + out 84 | out1 = self.relu(out1) 85 | # out += residual 86 | 87 | return out1 88 | 89 | class BasicBlock5x5_3(nn.Module): 90 | expansion = 1 91 | 92 | def __init__(self, inplanes5_3, planes, stride=1, downsample=None): 93 | super(BasicBlock5x5_3, self).__init__() 94 | self.conv1 = conv5x5(inplanes5_3, planes, stride) 95 | self.bn1 = nn.BatchNorm1d(planes) 96 | self.relu = nn.ReLU(inplace=True) 97 | self.conv2 = conv5x5(planes, planes) 98 | self.bn2 = nn.BatchNorm1d(planes) 99 | self.downsample = downsample 100 | self.stride = stride 101 | 102 | def forward(self, x): 103 | residual = x 104 | 105 | out = self.conv1(x) 106 | out = self.bn1(out) 107 | out = self.relu(out) 108 | 109 | out = self.conv2(out) 110 | out = self.bn2(out) 111 | 112 | if self.downsample is not None: 113 | residual = self.downsample(x) 114 | 115 | d = residual.shape[2] - out.shape[2] 116 | out1 = residual[:,:,0:-d] + out 117 | out1 = self.relu(out1) 118 | # out += residual 119 | 120 | return out1 121 | 122 | class MSResNet(nn.Module): 123 | def __init__(self, input_channel, layers=[1, 1, 1, 1], num_classes=10): 124 | self.inplanes5_1 = 64 125 | self.inplanes5_2 = 64 126 | self.inplanes5_3 = 64 127 | 128 | super(MSResNet, self).__init__() 129 | 130 | self.conv1 = nn.Conv1d(input_channel, 64, kernel_size=7, stride=2, padding=3, 131 | bias=False) 132 | self.bn1 = nn.BatchNorm1d(64) 133 | self.relu = nn.ReLU(inplace=True) 134 | self.maxpool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1) 135 | 136 | 137 | self.layer5x5_11 = self._make_layer5_1(BasicBlock5x5_1, 64, layers[0], stride=2) 138 | self.layer5x5_12 = self._make_layer5_1(BasicBlock5x5_1, 128, layers[1], stride=2) 139 | self.layer5x5_13 = self._make_layer5_1(BasicBlock5x5_1, 256, layers[2], stride=2) 140 | # self.layer3x3_4 = self._make_layer3(BasicBlock3x3, 512, layers[3], stride=2) 141 | # maxplooing kernel size: 16, 11, 6 142 | self.maxpool5_1 = nn.AvgPool1d(kernel_size=11, stride=1, padding=0) 143 | 144 | 145 | self.layer5x5_21 = self._make_layer5_2(BasicBlock5x5_2, 64, layers[0], stride=2) 146 | self.layer5x5_22 = self._make_layer5_2(BasicBlock5x5_2, 128, layers[1], stride=2) 147 | self.layer5x5_23 = self._make_layer5_2(BasicBlock5x5_2, 256, layers[2], stride=2) 148 | # self.layer3x3_4 = self._make_layer3(BasicBlock3x3, 512, layers[3], stride=2) 149 | 150 | # maxplooing kernel size: 16, 11, 6 151 | self.maxpool5_2 = nn.AvgPool1d(kernel_size=11, stride=1, padding=0) 152 | 153 | self.layer5x5_31 = self._make_layer5_3(BasicBlock5x5_3, 64, layers[0], stride=2) 154 | self.layer5x5_32 = self._make_layer5_3(BasicBlock5x5_3, 128, layers[1], stride=2) 155 | self.layer5x5_33 = self._make_layer5_3(BasicBlock5x5_3, 256, layers[2], stride=2) 156 | # self.layer3x3_4 = self._make_layer3(BasicBlock3x3, 512, layers[3], stride=2) 157 | 158 | # maxplooing kernel size: 16, 11, 6 159 | self.maxpool5_3 = nn.AvgPool1d(kernel_size=11, stride=1, padding=0) 160 | 161 | # self.drop = nn.Dropout(p=0.2) 162 | self.fc = nn.Linear(256*3, num_classes) 163 | 164 | # todo: modify the initialization 165 | # for m in self.modules(): 166 
| # if isinstance(m, nn.Conv1d): 167 | # n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 168 | # m.weight.data.normal_(0, math.sqrt(2. / n)) 169 | # elif isinstance(m, nn.BatchNorm1d): 170 | # m.weight.data.fill_(1) 171 | # m.bias.data.zero_() 172 | 173 | def _make_layer3(self, block, planes, blocks, stride=2): 174 | downsample = None 175 | if stride != 1 or self.inplanes3 != planes * block.expansion: 176 | downsample = nn.Sequential( 177 | nn.Conv1d(self.inplanes3, planes * block.expansion, 178 | kernel_size=1, stride=stride, bias=False), 179 | nn.BatchNorm1d(planes * block.expansion), 180 | ) 181 | 182 | layers = [] 183 | layers.append(block(self.inplanes3, planes, stride, downsample)) 184 | self.inplanes3 = planes * block.expansion 185 | for i in range(1, blocks): 186 | layers.append(block(self.inplanes3, planes)) 187 | 188 | return nn.Sequential(*layers) 189 | 190 | def _make_layer5_1(self, block, planes, blocks, stride=2): 191 | downsample = None 192 | if stride != 1 or self.inplanes5_1 != planes * block.expansion: 193 | downsample = nn.Sequential( 194 | nn.Conv1d(self.inplanes5_1, planes * block.expansion, 195 | kernel_size=1, stride=stride, bias=False), 196 | nn.BatchNorm1d(planes * block.expansion), 197 | ) 198 | 199 | layers = [] 200 | layers.append(block(self.inplanes5_1, planes, stride, downsample)) 201 | self.inplanes5_1 = planes * block.expansion 202 | for i in range(1, blocks): 203 | layers.append(block(self.inplanes5_1, planes)) 204 | 205 | return nn.Sequential(*layers) 206 | 207 | def _make_layer5_2(self, block, planes, blocks, stride=2): 208 | downsample = None 209 | if stride != 1 or self.inplanes5_2 != planes * block.expansion: 210 | downsample = nn.Sequential( 211 | nn.Conv1d(self.inplanes5_2, planes * block.expansion, 212 | kernel_size=1, stride=stride, bias=False), 213 | nn.BatchNorm1d(planes * block.expansion), 214 | ) 215 | 216 | layers = [] 217 | layers.append(block(self.inplanes5_2, planes, stride, downsample)) 218 | self.inplanes5_2 = planes * block.expansion 219 | for i in range(1, blocks): 220 | layers.append(block(self.inplanes5_2, planes)) 221 | 222 | return nn.Sequential(*layers) 223 | 224 | def _make_layer5_3(self, block, planes, blocks, stride=2): 225 | downsample = None 226 | if stride != 1 or self.inplanes5_3 != planes * block.expansion: 227 | downsample = nn.Sequential( 228 | nn.Conv1d(self.inplanes5_3, planes * block.expansion, 229 | kernel_size=1, stride=stride, bias=False), 230 | nn.BatchNorm1d(planes * block.expansion), 231 | ) 232 | 233 | layers = [] 234 | layers.append(block(self.inplanes5_3, planes, stride, downsample)) 235 | self.inplanes5_3 = planes * block.expansion 236 | for i in range(1, blocks): 237 | layers.append(block(self.inplanes5_3, planes)) 238 | 239 | return nn.Sequential(*layers) 240 | 241 | 242 | def _make_layer7(self, block, planes, blocks, stride=2): 243 | downsample = None 244 | if stride != 1 or self.inplanes7 != planes * block.expansion: 245 | downsample = nn.Sequential( 246 | nn.Conv1d(self.inplanes7, planes * block.expansion, 247 | kernel_size=1, stride=stride, bias=False), 248 | nn.BatchNorm1d(planes * block.expansion), 249 | ) 250 | 251 | layers = [] 252 | layers.append(block(self.inplanes7, planes, stride, downsample)) 253 | self.inplanes7 = planes * block.expansion 254 | for i in range(1, blocks): 255 | layers.append(block(self.inplanes7, planes)) 256 | 257 | return nn.Sequential(*layers) 258 | 259 | def forward(self, x0): 260 | x0 = self.conv1(x0) 261 | x0 = self.bn1(x0) 262 | x0 = self.relu(x0) 263 | x0 = 
self.maxpool(x0) 264 | 265 | x = self.layer5x5_11(x0) 266 | x = self.layer5x5_12(x) 267 | x = self.layer5x5_13(x) 268 | # x = self.layer3x3_4(x) 269 | x = self.maxpool5_1(x) 270 | 271 | y = self.layer5x5_21(x0) 272 | y = self.layer5x5_22(y) 273 | y = self.layer5x5_23(y) 274 | # y = self.layer5x5_4(y) 275 | y = self.maxpool5_2(y) 276 | 277 | z = self.layer5x5_31(x0) 278 | z = self.layer5x5_32(z) 279 | z = self.layer5x5_33(z) 280 | # z = self.layer7x7_4(z) 281 | z = self.maxpool5_3(z) 282 | 283 | out = torch.cat([x, y, z], dim=1) 284 | 285 | out = out.squeeze() 286 | # out = self.drop(out) 287 | out1 = self.fc(out) 288 | 289 | return out1, out 290 | 291 | 292 | 293 | 294 | 295 | 296 | 297 | -------------------------------------------------------------------------------- /model/multi_scale_one7x7.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import math 3 | import torch.utils.model_zoo as model_zoo 4 | 5 | import torch 6 | 7 | 8 | def conv3x3(in_planes, out_planes, stride=1): 9 | """3x3 convolution with padding""" 10 | return nn.Conv1d(in_planes, out_planes, kernel_size=3, stride=stride, 11 | padding=1, bias=False) 12 | 13 | 14 | def conv5x5(in_planes, out_planes, stride=1): 15 | return nn.Conv1d(in_planes, out_planes, kernel_size=5, stride=stride, 16 | padding=1, bias=False) 17 | 18 | 19 | def conv7x7(in_planes, out_planes, stride=1): 20 | return nn.Conv1d(in_planes, out_planes, kernel_size=7, stride=stride, 21 | padding=1, bias=False) 22 | 23 | 24 | class BasicBlock7x7_1(nn.Module): 25 | expansion = 1 26 | 27 | def __init__(self, inplanes7_1, planes, stride=1, downsample=None): 28 | super(BasicBlock7x7_1, self).__init__() 29 | self.conv1 = conv7x7(inplanes7_1, planes, stride) 30 | self.bn1 = nn.BatchNorm1d(planes) 31 | self.relu = nn.ReLU(inplace=True) 32 | self.conv2 = conv7x7(planes, planes) 33 | self.bn2 = nn.BatchNorm1d(planes) 34 | self.downsample = downsample 35 | self.stride = stride 36 | 37 | def forward(self, x): 38 | residual = x 39 | 40 | out = self.conv1(x) 41 | out = self.bn1(out) 42 | out = self.relu(out) 43 | 44 | out = self.conv2(out) 45 | out = self.bn2(out) 46 | 47 | if self.downsample is not None: 48 | residual = self.downsample(x) 49 | 50 | d = residual.shape[2] - out.shape[2] 51 | out1 = residual[:, :, 0:-d] + out 52 | out1 = self.relu(out1) 53 | # out += residual 54 | 55 | return out1 56 | 57 | 58 | class BasicBlock7x7_2(nn.Module): 59 | expansion = 1 60 | 61 | def __init__(self, inplanes7_2, planes, stride=1, downsample=None): 62 | super(BasicBlock7x7_2, self).__init__() 63 | self.conv1 = conv7x7(inplanes7_2, planes, stride) 64 | self.bn1 = nn.BatchNorm1d(planes) 65 | self.relu = nn.ReLU(inplace=True) 66 | self.conv2 = conv7x7(planes, planes) 67 | self.bn2 = nn.BatchNorm1d(planes) 68 | self.downsample = downsample 69 | self.stride = stride 70 | 71 | def forward(self, x): 72 | residual = x 73 | 74 | out = self.conv1(x) 75 | out = self.bn1(out) 76 | out = self.relu(out) 77 | 78 | out = self.conv2(out) 79 | out = self.bn2(out) 80 | 81 | if self.downsample is not None: 82 | residual = self.downsample(x) 83 | 84 | d = residual.shape[2] - out.shape[2] 85 | out1 = residual[:, :, 0:-d] + out 86 | out1 = self.relu(out1) 87 | # out += residual 88 | 89 | return out1 90 | 91 | 92 | class BasicBlock7x7_3(nn.Module): 93 | expansion = 1 94 | 95 | def __init__(self, inplanes7_3, planes, stride=1, downsample=None): 96 | super(BasicBlock7x7_3, self).__init__() 97 | self.conv1 = conv7x7(inplanes7_3, planes, stride) 98 
| self.bn1 = nn.BatchNorm1d(planes) 99 | self.relu = nn.ReLU(inplace=True) 100 | self.conv2 = conv7x7(planes, planes) 101 | self.bn2 = nn.BatchNorm1d(planes) 102 | self.downsample = downsample 103 | self.stride = stride 104 | 105 | def forward(self, x): 106 | residual = x 107 | 108 | out = self.conv1(x) 109 | out = self.bn1(out) 110 | out = self.relu(out) 111 | 112 | out = self.conv2(out) 113 | out = self.bn2(out) 114 | 115 | if self.downsample is not None: 116 | residual = self.downsample(x) 117 | 118 | d = residual.shape[2] - out.shape[2] 119 | out1 = residual[:, :, 0:-d] + out 120 | out1 = self.relu(out1) 121 | # out += residual 122 | 123 | return out1 124 | 125 | 126 | class MSResNet(nn.Module): 127 | def __init__(self, input_channel, layers=[1, 1, 1, 1], num_classes=10): 128 | self.inplanes7_1 = 64 129 | self.inplanes7_2 = 64 130 | self.inplanes7_3 = 64 131 | 132 | super(MSResNet, self).__init__() 133 | 134 | self.conv1 = nn.Conv1d(input_channel, 64, kernel_size=7, stride=2, padding=3, 135 | bias=False) 136 | self.bn1 = nn.BatchNorm1d(64) 137 | self.relu = nn.ReLU(inplace=True) 138 | self.maxpool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1) 139 | 140 | self.layer7x7_11 = self._make_layer7_1(BasicBlock7x7_1, 64, layers[0], stride=2) 141 | self.layer7x7_12 = self._make_layer7_1(BasicBlock7x7_1, 128, layers[1], stride=2) 142 | self.layer7x7_13 = self._make_layer7_1(BasicBlock7x7_1, 256, layers[2], stride=2) 143 | # self.layer3x3_4 = self._make_layer3(BasicBlock3x3, 512, layers[3], stride=2) 144 | # maxplooing kernel size: 16, 11, 6 145 | self.maxpool7_1 = nn.AvgPool1d(kernel_size=6, stride=1, padding=0) 146 | 147 | self.layer7x7_21 = self._make_layer7_2(BasicBlock7x7_2, 64, layers[0], stride=2) 148 | self.layer7x7_22 = self._make_layer7_2(BasicBlock7x7_2, 128, layers[1], stride=2) 149 | self.layer7x7_23 = self._make_layer7_2(BasicBlock7x7_2, 256, layers[2], stride=2) 150 | # self.layer3x3_4 = self._make_layer3(BasicBlock3x3, 512, layers[3], stride=2) 151 | 152 | # maxplooing kernel size: 16, 11, 6 153 | self.maxpool7_2 = nn.AvgPool1d(kernel_size=6, stride=1, padding=0) 154 | 155 | self.layer7x7_31 = self._make_layer7_3(BasicBlock7x7_3, 64, layers[0], stride=2) 156 | self.layer7x7_32 = self._make_layer7_3(BasicBlock7x7_3, 128, layers[1], stride=2) 157 | self.layer7x7_33 = self._make_layer7_3(BasicBlock7x7_3, 256, layers[2], stride=2) 158 | # self.layer3x3_4 = self._make_layer3(BasicBlock3x3, 512, layers[3], stride=2) 159 | 160 | # maxplooing kernel size: 16, 11, 6 161 | self.maxpool7_3 = nn.AvgPool1d(kernel_size=6, stride=1, padding=0) 162 | 163 | # self.drop = nn.Dropout(p=0.2) 164 | self.fc = nn.Linear(256 * 3, num_classes) 165 | 166 | # todo: modify the initialization 167 | # for m in self.modules(): 168 | # if isinstance(m, nn.Conv1d): 169 | # n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 170 | # m.weight.data.normal_(0, math.sqrt(2. 
/ n)) 171 | # elif isinstance(m, nn.BatchNorm1d): 172 | # m.weight.data.fill_(1) 173 | # m.bias.data.zero_() 174 | 175 | def _make_layer3(self, block, planes, blocks, stride=2): 176 | downsample = None 177 | if stride != 1 or self.inplanes3 != planes * block.expansion: 178 | downsample = nn.Sequential( 179 | nn.Conv1d(self.inplanes3, planes * block.expansion, 180 | kernel_size=1, stride=stride, bias=False), 181 | nn.BatchNorm1d(planes * block.expansion), 182 | ) 183 | 184 | layers = [] 185 | layers.append(block(self.inplanes3, planes, stride, downsample)) 186 | self.inplanes3 = planes * block.expansion 187 | for i in range(1, blocks): 188 | layers.append(block(self.inplanes3, planes)) 189 | 190 | return nn.Sequential(*layers) 191 | 192 | def _make_layer5_1(self, block, planes, blocks, stride=2): 193 | downsample = None 194 | if stride != 1 or self.inplanes5_1 != planes * block.expansion: 195 | downsample = nn.Sequential( 196 | nn.Conv1d(self.inplanes5_1, planes * block.expansion, 197 | kernel_size=1, stride=stride, bias=False), 198 | nn.BatchNorm1d(planes * block.expansion), 199 | ) 200 | 201 | layers = [] 202 | layers.append(block(self.inplanes5_1, planes, stride, downsample)) 203 | self.inplanes5_1 = planes * block.expansion 204 | for i in range(1, blocks): 205 | layers.append(block(self.inplanes5_1, planes)) 206 | 207 | return nn.Sequential(*layers) 208 | 209 | def _make_layer7_1(self, block, planes, blocks, stride=2): 210 | downsample = None 211 | if stride != 1 or self.inplanes7_1 != planes * block.expansion: 212 | downsample = nn.Sequential( 213 | nn.Conv1d(self.inplanes7_1, planes * block.expansion, 214 | kernel_size=1, stride=stride, bias=False), 215 | nn.BatchNorm1d(planes * block.expansion), 216 | ) 217 | 218 | layers = [] 219 | layers.append(block(self.inplanes7_1, planes, stride, downsample)) 220 | self.inplanes7_1 = planes * block.expansion 221 | for i in range(1, blocks): 222 | layers.append(block(self.inplanes7_1, planes)) 223 | 224 | return nn.Sequential(*layers) 225 | 226 | def _make_layer7_2(self, block, planes, blocks, stride=2): 227 | downsample = None 228 | if stride != 1 or self.inplanes7_2 != planes * block.expansion: 229 | downsample = nn.Sequential( 230 | nn.Conv1d(self.inplanes7_2, planes * block.expansion, 231 | kernel_size=1, stride=stride, bias=False), 232 | nn.BatchNorm1d(planes * block.expansion), 233 | ) 234 | 235 | layers = [] 236 | layers.append(block(self.inplanes7_2, planes, stride, downsample)) 237 | self.inplanes7_2 = planes * block.expansion 238 | for i in range(1, blocks): 239 | layers.append(block(self.inplanes7_2, planes)) 240 | 241 | return nn.Sequential(*layers) 242 | 243 | 244 | def _make_layer7_3(self, block, planes, blocks, stride=2): 245 | downsample = None 246 | if stride != 1 or self.inplanes7_3 != planes * block.expansion: 247 | downsample = nn.Sequential( 248 | nn.Conv1d(self.inplanes7_3, planes * block.expansion, 249 | kernel_size=1, stride=stride, bias=False), 250 | nn.BatchNorm1d(planes * block.expansion), 251 | ) 252 | 253 | layers = [] 254 | layers.append(block(self.inplanes7_3, planes, stride, downsample)) 255 | self.inplanes7_3 = planes * block.expansion 256 | for i in range(1, blocks): 257 | layers.append(block(self.inplanes7_3, planes)) 258 | 259 | return nn.Sequential(*layers) 260 | 261 | def forward(self, x0): 262 | x0 = self.conv1(x0) 263 | x0 = self.bn1(x0) 264 | x0 = self.relu(x0) 265 | x0 = self.maxpool(x0) 266 | 267 | x = self.layer7x7_11(x0) 268 | x = self.layer7x7_12(x) 269 | x = self.layer7x7_13(x) 270 | # x = 
self.layer3x3_4(x) 271 | x = self.maxpool7_1(x) 272 | 273 | y = self.layer7x7_21(x0) 274 | y = self.layer7x7_22(y) 275 | y = self.layer7x7_23(y) 276 | # y = self.layer5x5_4(y) 277 | y = self.maxpool7_2(y) 278 | 279 | z = self.layer7x7_31(x0) 280 | z = self.layer7x7_32(z) 281 | z = self.layer7x7_33(z) 282 | # z = self.layer7x7_4(z) 283 | z = self.maxpool7_3(z) 284 | 285 | out = torch.cat([x, y, z], dim=1) 286 | 287 | out = out.squeeze() 288 | # out = self.drop(out) 289 | out1 = self.fc(out) 290 | 291 | return out1, out 292 | 293 | 294 | 295 | 296 | 297 | 298 | 299 | -------------------------------------------------------------------------------- /model/multi_scale_ori.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import math 3 | import torch.utils.model_zoo as model_zoo 4 | 5 | import torch 6 | 7 | def conv3x3(in_planes, out_planes, stride=1): 8 | """3x3 convolution with padding""" 9 | return nn.Conv1d(in_planes, out_planes, kernel_size=3, stride=stride, 10 | padding=1, bias=False) 11 | 12 | def conv5x5(in_planes, out_planes, stride=1): 13 | return nn.Conv1d(in_planes, out_planes, kernel_size=5, stride=stride, 14 | padding=1, bias=False) 15 | 16 | def conv7x7(in_planes, out_planes, stride=1): 17 | return nn.Conv1d(in_planes, out_planes, kernel_size=7, stride=stride, 18 | padding=1, bias=False) 19 | 20 | 21 | 22 | class BasicBlock3x3(nn.Module): 23 | expansion = 1 24 | 25 | def __init__(self, inplanes3, planes, stride=1, downsample=None): 26 | super(BasicBlock3x3, self).__init__() 27 | self.conv1 = conv3x3(inplanes3, planes, stride) 28 | self.bn1 = nn.BatchNorm1d(planes) 29 | self.relu = nn.ReLU(inplace=True) 30 | self.conv2 = conv3x3(planes, planes) 31 | self.bn2 = nn.BatchNorm1d(planes) 32 | self.downsample = downsample 33 | self.stride = stride 34 | 35 | def forward(self, x): 36 | residual = x 37 | 38 | out = self.conv1(x) 39 | out = self.bn1(out) 40 | out = self.relu(out) 41 | 42 | out = self.conv2(out) 43 | out = self.bn2(out) 44 | 45 | if self.downsample is not None: 46 | residual = self.downsample(x) 47 | 48 | out += residual 49 | out = self.relu(out) 50 | 51 | return out 52 | 53 | 54 | class BasicBlock5x5(nn.Module): 55 | expansion = 1 56 | 57 | def __init__(self, inplanes5, planes, stride=1, downsample=None): 58 | super(BasicBlock5x5, self).__init__() 59 | self.conv1 = conv5x5(inplanes5, planes, stride) 60 | self.bn1 = nn.BatchNorm1d(planes) 61 | self.relu = nn.ReLU(inplace=True) 62 | self.conv2 = conv5x5(planes, planes) 63 | self.bn2 = nn.BatchNorm1d(planes) 64 | self.downsample = downsample 65 | self.stride = stride 66 | 67 | def forward(self, x): 68 | residual = x 69 | 70 | out = self.conv1(x) 71 | out = self.bn1(out) 72 | out = self.relu(out) 73 | 74 | out = self.conv2(out) 75 | out = self.bn2(out) 76 | 77 | if self.downsample is not None: 78 | residual = self.downsample(x) 79 | 80 | d = residual.shape[2] - out.shape[2] 81 | out1 = residual[:,:,0:-d] + out 82 | out1 = self.relu(out1) 83 | # out += residual 84 | 85 | return out1 86 | 87 | 88 | 89 | class BasicBlock7x7(nn.Module): 90 | expansion = 1 91 | 92 | def __init__(self, inplanes7, planes, stride=1, downsample=None): 93 | super(BasicBlock7x7, self).__init__() 94 | self.conv1 = conv7x7(inplanes7, planes, stride) 95 | self.bn1 = nn.BatchNorm1d(planes) 96 | self.relu = nn.ReLU(inplace=True) 97 | self.conv2 = conv7x7(planes, planes) 98 | self.bn2 = nn.BatchNorm1d(planes) 99 | self.downsample = downsample 100 | self.stride = stride 101 | 102 | def forward(self, 
x): 103 | residual = x 104 | 105 | out = self.conv1(x) 106 | out = self.bn1(out) 107 | out = self.relu(out) 108 | 109 | out = self.conv2(out) 110 | out = self.bn2(out) 111 | 112 | if self.downsample is not None: 113 | residual = self.downsample(x) 114 | 115 | d = residual.shape[2] - out.shape[2] 116 | out1 = residual[:, :, 0:-d] + out 117 | out1 = self.relu(out1) 118 | # out += residual 119 | 120 | return out1 121 | 122 | 123 | 124 | 125 | class MSResNet(nn.Module): 126 | def __init__(self, input_channel, layers=[1, 1, 1, 1], num_classes=10): 127 | self.inplanes3 = 64 128 | self.inplanes5 = 64 129 | self.inplanes7 = 64 130 | 131 | super(MSResNet, self).__init__() 132 | 133 | self.conv1 = nn.Conv1d(input_channel, 64, kernel_size=7, stride=2, padding=3, 134 | bias=False) 135 | self.bn1 = nn.BatchNorm1d(64) 136 | self.relu = nn.ReLU(inplace=True) 137 | self.maxpool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1) 138 | 139 | self.layer3x3_1 = self._make_layer3(BasicBlock3x3, 64, layers[0], stride=2) 140 | self.layer3x3_2 = self._make_layer3(BasicBlock3x3, 128, layers[1], stride=2) 141 | self.layer3x3_3 = self._make_layer3(BasicBlock3x3, 256, layers[2], stride=2) 142 | # self.layer3x3_4 = self._make_layer3(BasicBlock3x3, 512, layers[3], stride=2) 143 | 144 | # maxplooing kernel size: 16, 11, 6 145 | self.maxpool3 = nn.AvgPool1d(kernel_size=16, stride=1, padding=0) 146 | 147 | 148 | self.layer5x5_1 = self._make_layer5(BasicBlock5x5, 64, layers[0], stride=2) 149 | self.layer5x5_2 = self._make_layer5(BasicBlock5x5, 128, layers[1], stride=2) 150 | self.layer5x5_3 = self._make_layer5(BasicBlock5x5, 256, layers[2], stride=2) 151 | # self.layer5x5_4 = self._make_layer5(BasicBlock5x5, 512, layers[3], stride=2) 152 | self.maxpool5 = nn.AvgPool1d(kernel_size=11, stride=1, padding=0) 153 | 154 | 155 | self.layer7x7_1 = self._make_layer7(BasicBlock7x7, 64, layers[0], stride=2) 156 | self.layer7x7_2 = self._make_layer7(BasicBlock7x7, 128, layers[1], stride=2) 157 | self.layer7x7_3 = self._make_layer7(BasicBlock7x7, 256, layers[2], stride=2) 158 | # self.layer7x7_4 = self._make_layer7(BasicBlock7x7, 512, layers[3], stride=2) 159 | self.maxpool7 = nn.AvgPool1d(kernel_size=6, stride=1, padding=0) 160 | 161 | # self.drop = nn.Dropout(p=0.2) 162 | self.fc = nn.Linear(256*3, num_classes) 163 | 164 | # todo: modify the initialization 165 | # for m in self.modules(): 166 | # if isinstance(m, nn.Conv1d): 167 | # n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels 168 | # m.weight.data.normal_(0, math.sqrt(2. 
/ n)) 169 | # elif isinstance(m, nn.BatchNorm1d): 170 | # m.weight.data.fill_(1) 171 | # m.bias.data.zero_() 172 | 173 | def _make_layer3(self, block, planes, blocks, stride=2): 174 | downsample = None 175 | if stride != 1 or self.inplanes3 != planes * block.expansion: 176 | downsample = nn.Sequential( 177 | nn.Conv1d(self.inplanes3, planes * block.expansion, 178 | kernel_size=1, stride=stride, bias=False), 179 | nn.BatchNorm1d(planes * block.expansion), 180 | ) 181 | 182 | layers = [] 183 | layers.append(block(self.inplanes3, planes, stride, downsample)) 184 | self.inplanes3 = planes * block.expansion 185 | for i in range(1, blocks): 186 | layers.append(block(self.inplanes3, planes)) 187 | 188 | return nn.Sequential(*layers) 189 | 190 | def _make_layer5(self, block, planes, blocks, stride=2): 191 | downsample = None 192 | if stride != 1 or self.inplanes5 != planes * block.expansion: 193 | downsample = nn.Sequential( 194 | nn.Conv1d(self.inplanes5, planes * block.expansion, 195 | kernel_size=1, stride=stride, bias=False), 196 | nn.BatchNorm1d(planes * block.expansion), 197 | ) 198 | 199 | layers = [] 200 | layers.append(block(self.inplanes5, planes, stride, downsample)) 201 | self.inplanes5 = planes * block.expansion 202 | for i in range(1, blocks): 203 | layers.append(block(self.inplanes5, planes)) 204 | 205 | return nn.Sequential(*layers) 206 | 207 | 208 | def _make_layer7(self, block, planes, blocks, stride=2): 209 | downsample = None 210 | if stride != 1 or self.inplanes7 != planes * block.expansion: 211 | downsample = nn.Sequential( 212 | nn.Conv1d(self.inplanes7, planes * block.expansion, 213 | kernel_size=1, stride=stride, bias=False), 214 | nn.BatchNorm1d(planes * block.expansion), 215 | ) 216 | 217 | layers = [] 218 | layers.append(block(self.inplanes7, planes, stride, downsample)) 219 | self.inplanes7 = planes * block.expansion 220 | for i in range(1, blocks): 221 | layers.append(block(self.inplanes7, planes)) 222 | 223 | return nn.Sequential(*layers) 224 | 225 | def forward(self, x0): 226 | x0 = self.conv1(x0) 227 | x0 = self.bn1(x0) 228 | x0 = self.relu(x0) 229 | x0 = self.maxpool(x0) 230 | 231 | x = self.layer3x3_1(x0) 232 | x = self.layer3x3_2(x) 233 | x = self.layer3x3_3(x) 234 | # x = self.layer3x3_4(x) 235 | x = self.maxpool3(x) 236 | 237 | y = self.layer5x5_1(x0) 238 | y = self.layer5x5_2(y) 239 | y = self.layer5x5_3(y) 240 | # y = self.layer5x5_4(y) 241 | y = self.maxpool5(y) 242 | 243 | z = self.layer7x7_1(x0) 244 | z = self.layer7x7_2(z) 245 | z = self.layer7x7_3(z) 246 | # z = self.layer7x7_4(z) 247 | z = self.maxpool7(z) 248 | 249 | out = torch.cat([x, y, z], dim=1) 250 | 251 | out = out.squeeze() 252 | # out = self.drop(out) 253 | out1 = self.fc(out) 254 | 255 | return out1, out 256 | 257 | 258 | 259 | 260 | 261 | 262 | 263 | -------------------------------------------------------------------------------- /multi_scale_ori.py: -------------------------------------------------------------------------------- 1 | import torch.nn as nn 2 | import math 3 | import torch.utils.model_zoo as model_zoo 4 | 5 | import torch 6 | 7 | def conv3x3(in_planes, out_planes, stride=1): 8 | """3x3 convolution with padding""" 9 | return nn.Conv1d(in_planes, out_planes, kernel_size=3, stride=stride, 10 | padding=1, bias=False) 11 | 12 | def conv5x5(in_planes, out_planes, stride=1): 13 | return nn.Conv1d(in_planes, out_planes, kernel_size=5, stride=stride, 14 | padding=1, bias=False) 15 | 16 | def conv7x7(in_planes, out_planes, stride=1): 17 | return nn.Conv1d(in_planes, out_planes, 
kernel_size=7, stride=stride, 18 | padding=1, bias=False) 19 | 20 | 21 | 22 | class BasicBlock3x3(nn.Module): 23 | expansion = 1 24 | 25 | def __init__(self, inplanes3, planes, stride=1, downsample=None): 26 | super(BasicBlock3x3, self).__init__() 27 | self.conv1 = conv3x3(inplanes3, planes, stride) 28 | self.bn1 = nn.BatchNorm1d(planes) 29 | self.relu = nn.ReLU(inplace=True) 30 | self.conv2 = conv3x3(planes, planes) 31 | self.bn2 = nn.BatchNorm1d(planes) 32 | self.downsample = downsample 33 | self.stride = stride 34 | 35 | def forward(self, x): 36 | residual = x 37 | 38 | out = self.conv1(x) 39 | out = self.bn1(out) 40 | out = self.relu(out) 41 | 42 | out = self.conv2(out) 43 | out = self.bn2(out) 44 | 45 | if self.downsample is not None: 46 | residual = self.downsample(x) 47 | 48 | out += residual 49 | out = self.relu(out) 50 | 51 | return out 52 | 53 | 54 | class BasicBlock5x5(nn.Module): 55 | expansion = 1 56 | 57 | def __init__(self, inplanes5, planes, stride=1, downsample=None): 58 | super(BasicBlock5x5, self).__init__() 59 | self.conv1 = conv5x5(inplanes5, planes, stride) 60 | self.bn1 = nn.BatchNorm1d(planes) 61 | self.relu = nn.ReLU(inplace=True) 62 | self.conv2 = conv5x5(planes, planes) 63 | self.bn2 = nn.BatchNorm1d(planes) 64 | self.downsample = downsample 65 | self.stride = stride 66 | 67 | def forward(self, x): 68 | residual = x 69 | 70 | out = self.conv1(x) 71 | out = self.bn1(out) 72 | out = self.relu(out) 73 | 74 | out = self.conv2(out) 75 | out = self.bn2(out) 76 | 77 | if self.downsample is not None: 78 | residual = self.downsample(x) 79 | 80 | d = residual.shape[2] - out.shape[2] 81 | out1 = residual[:,:,0:-d] + out 82 | out1 = self.relu(out1) 83 | # out += residual 84 | 85 | return out1 86 | 87 | 88 | 89 | class BasicBlock7x7(nn.Module): 90 | expansion = 1 91 | 92 | def __init__(self, inplanes7, planes, stride=1, downsample=None): 93 | super(BasicBlock7x7, self).__init__() 94 | self.conv1 = conv7x7(inplanes7, planes, stride) 95 | self.bn1 = nn.BatchNorm1d(planes) 96 | self.relu = nn.ReLU(inplace=True) 97 | self.conv2 = conv7x7(planes, planes) 98 | self.bn2 = nn.BatchNorm1d(planes) 99 | self.downsample = downsample 100 | self.stride = stride 101 | 102 | def forward(self, x): 103 | residual = x 104 | 105 | out = self.conv1(x) 106 | out = self.bn1(out) 107 | out = self.relu(out) 108 | 109 | out = self.conv2(out) 110 | out = self.bn2(out) 111 | 112 | if self.downsample is not None: 113 | residual = self.downsample(x) 114 | 115 | d = residual.shape[2] - out.shape[2] 116 | out1 = residual[:, :, 0:-d] + out 117 | out1 = self.relu(out1) 118 | # out += residual 119 | 120 | return out1 121 | 122 | 123 | 124 | 125 | class MSResNet(nn.Module): 126 | def __init__(self, input_channel, layers=[1, 1, 1, 1], num_classes=10): 127 | self.inplanes3 = 64 128 | self.inplanes5 = 64 129 | self.inplanes7 = 64 130 | 131 | super(MSResNet, self).__init__() 132 | 133 | self.conv1 = nn.Conv1d(input_channel, 64, kernel_size=7, stride=2, padding=3, 134 | bias=False) 135 | self.bn1 = nn.BatchNorm1d(64) 136 | self.relu = nn.ReLU(inplace=True) 137 | self.maxpool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1) 138 | 139 | self.layer3x3_1 = self._make_layer3(BasicBlock3x3, 64, layers[0], stride=2) 140 | self.layer3x3_2 = self._make_layer3(BasicBlock3x3, 128, layers[1], stride=2) 141 | self.layer3x3_3 = self._make_layer3(BasicBlock3x3, 256, layers[2], stride=2) 142 | # self.layer3x3_4 = self._make_layer3(BasicBlock3x3, 512, layers[3], stride=2) 143 | 144 | # maxplooing kernel size: 16, 11, 6 145 | 
145 |         self.maxpool3 = nn.AvgPool1d(kernel_size=16, stride=1, padding=0)
146 | 
147 | 
148 |         self.layer5x5_1 = self._make_layer5(BasicBlock5x5, 64, layers[0], stride=2)
149 |         self.layer5x5_2 = self._make_layer5(BasicBlock5x5, 128, layers[1], stride=2)
150 |         self.layer5x5_3 = self._make_layer5(BasicBlock5x5, 256, layers[2], stride=2)
151 |         # self.layer5x5_4 = self._make_layer5(BasicBlock5x5, 512, layers[3], stride=2)
152 |         self.maxpool5 = nn.AvgPool1d(kernel_size=11, stride=1, padding=0)
153 | 
154 | 
155 |         self.layer7x7_1 = self._make_layer7(BasicBlock7x7, 64, layers[0], stride=2)
156 |         self.layer7x7_2 = self._make_layer7(BasicBlock7x7, 128, layers[1], stride=2)
157 |         self.layer7x7_3 = self._make_layer7(BasicBlock7x7, 256, layers[2], stride=2)
158 |         # self.layer7x7_4 = self._make_layer7(BasicBlock7x7, 512, layers[3], stride=2)
159 |         self.maxpool7 = nn.AvgPool1d(kernel_size=6, stride=1, padding=0)
160 | 
161 |         # self.drop = nn.Dropout(p=0.2)
162 |         self.fc = nn.Linear(256*3, num_classes)  # 256 channels from each of the three branches after concatenation
163 | 
164 |         # todo: modify the initialization
165 |         # for m in self.modules():
166 |         #     if isinstance(m, nn.Conv1d):
167 |         #         n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels  # (kernel_size[1] does not exist for Conv1d; use kernel_size[0] * m.out_channels if re-enabling)
168 |         #         m.weight.data.normal_(0, math.sqrt(2. / n))
169 |         #     elif isinstance(m, nn.BatchNorm1d):
170 |         #         m.weight.data.fill_(1)
171 |         #         m.bias.data.zero_()
172 | 
173 |     def _make_layer3(self, block, planes, blocks, stride=2):  # _make_layer3/5/7 are identical except for the inplanes counter they update
174 |         downsample = None
175 |         if stride != 1 or self.inplanes3 != planes * block.expansion:
176 |             downsample = nn.Sequential(
177 |                 nn.Conv1d(self.inplanes3, planes * block.expansion,
178 |                           kernel_size=1, stride=stride, bias=False),
179 |                 nn.BatchNorm1d(planes * block.expansion),
180 |             )
181 | 
182 |         layers = []
183 |         layers.append(block(self.inplanes3, planes, stride, downsample))
184 |         self.inplanes3 = planes * block.expansion
185 |         for i in range(1, blocks):
186 |             layers.append(block(self.inplanes3, planes))
187 | 
188 |         return nn.Sequential(*layers)
189 | 
190 |     def _make_layer5(self, block, planes, blocks, stride=2):
191 |         downsample = None
192 |         if stride != 1 or self.inplanes5 != planes * block.expansion:
193 |             downsample = nn.Sequential(
194 |                 nn.Conv1d(self.inplanes5, planes * block.expansion,
195 |                           kernel_size=1, stride=stride, bias=False),
196 |                 nn.BatchNorm1d(planes * block.expansion),
197 |             )
198 | 
199 |         layers = []
200 |         layers.append(block(self.inplanes5, planes, stride, downsample))
201 |         self.inplanes5 = planes * block.expansion
202 |         for i in range(1, blocks):
203 |             layers.append(block(self.inplanes5, planes))
204 | 
205 |         return nn.Sequential(*layers)
206 | 
207 | 
208 |     def _make_layer7(self, block, planes, blocks, stride=2):
209 |         downsample = None
210 |         if stride != 1 or self.inplanes7 != planes * block.expansion:
211 |             downsample = nn.Sequential(
212 |                 nn.Conv1d(self.inplanes7, planes * block.expansion,
213 |                           kernel_size=1, stride=stride, bias=False),
214 |                 nn.BatchNorm1d(planes * block.expansion),
215 |             )
216 | 
217 |         layers = []
218 |         layers.append(block(self.inplanes7, planes, stride, downsample))
219 |         self.inplanes7 = planes * block.expansion
220 |         for i in range(1, blocks):
221 |             layers.append(block(self.inplanes7, planes))
222 | 
223 |         return nn.Sequential(*layers)
224 | 
225 |     def forward(self, x0):
226 |         x0 = self.conv1(x0)
227 |         x0 = self.bn1(x0)
228 |         x0 = self.relu(x0)
229 |         x0 = self.maxpool(x0)
230 | 
231 |         x = self.layer3x3_1(x0)
232 |         x = self.layer3x3_2(x)
233 |         x = self.layer3x3_3(x)
234 |         # x = self.layer3x3_4(x)
235 |         x = self.maxpool3(x)
236 | 
237 |         y = self.layer5x5_1(x0)
238 |         y = self.layer5x5_2(y)
239 |         y = self.layer5x5_3(y)
240 |         # y = self.layer5x5_4(y)
241 |         y = self.maxpool5(y)
242 | 
243 |         z = self.layer7x7_1(x0)
244 |         z = self.layer7x7_2(z)
245 |         z = self.layer7x7_3(z)
246 |         # z = self.layer7x7_4(z)
247 |         z = self.maxpool7(z)
248 | 
249 |         out = torch.cat([x, y, z], dim=1)  # concatenate the three scale branches along the channel dim
250 | 
251 |         out = out.squeeze(2)  # squeeze only the length dim; a bare squeeze() would also drop the batch dim when the batch size is 1
252 |         # out = self.drop(out)
253 |         out1 = self.fc(out)
254 | 
255 |         return out1, out  # (class logits, pooled multi-scale features)
256 | 
257 | 
258 | 
259 | 
260 | 
261 | 
262 | 
263 | 
--------------------------------------------------------------------------------
/result/changing3x3/TestAccuracy_ChangingSpeed_Train98.424Test93.188.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing3x3/TestAccuracy_ChangingSpeed_Train98.424Test93.188.mat
--------------------------------------------------------------------------------
/result/changing3x3/TestLoss_ChangingSpeed_Train98.424Test93.188.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing3x3/TestLoss_ChangingSpeed_Train98.424Test93.188.mat
--------------------------------------------------------------------------------
/result/changing3x3/TrainAccuracy_ChangingSpeed_Train98.424Test93.188.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing3x3/TrainAccuracy_ChangingSpeed_Train98.424Test93.188.mat
--------------------------------------------------------------------------------
/result/changing3x3/TrainLoss_ChangingSpeed_Train98.424Test93.188.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing3x3/TrainLoss_ChangingSpeed_Train98.424Test93.188.mat
--------------------------------------------------------------------------------
/result/changing5x5/TestAccuracy_ChangingSpeed_Train96.628Test93.466.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing5x5/TestAccuracy_ChangingSpeed_Train96.628Test93.466.mat
--------------------------------------------------------------------------------
/result/changing5x5/TestLoss_ChangingSpeed_Train96.628Test93.466.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing5x5/TestLoss_ChangingSpeed_Train96.628Test93.466.mat
--------------------------------------------------------------------------------
/result/changing5x5/TrainAccuracy_ChangingSpeed_Train96.628Test93.466.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing5x5/TrainAccuracy_ChangingSpeed_Train96.628Test93.466.mat
--------------------------------------------------------------------------------
/result/changing5x5/TrainLoss_ChangingSpeed_Train96.628Test93.466.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing5x5/TrainLoss_ChangingSpeed_Train96.628Test93.466.mat
--------------------------------------------------------------------------------
/result/changing7x7/TestAccuracy_ChangingSpeed_Train97.601Test92.585.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing7x7/TestAccuracy_ChangingSpeed_Train97.601Test92.585.mat
--------------------------------------------------------------------------------
/result/changing7x7/TestLoss_ChangingSpeed_Train97.601Test92.585.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing7x7/TestLoss_ChangingSpeed_Train97.601Test92.585.mat
--------------------------------------------------------------------------------
/result/changing7x7/TrainAccuracy_ChangingSpeed_Train97.601Test92.585.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing7x7/TrainAccuracy_ChangingSpeed_Train97.601Test92.585.mat
--------------------------------------------------------------------------------
/result/changing7x7/TrainLoss_ChangingSpeed_Train97.601Test92.585.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changing7x7/TrainLoss_ChangingSpeed_Train97.601Test92.585.mat
--------------------------------------------------------------------------------
/result/changingMS/TestAccuracy_ChangingSpeed_Train97.126Test94.300.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changingMS/TestAccuracy_ChangingSpeed_Train97.126Test94.300.mat
--------------------------------------------------------------------------------
/result/changingMS/TestLoss_ChangingSpeed_Train97.126Test94.300.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changingMS/TestLoss_ChangingSpeed_Train97.126Test94.300.mat
--------------------------------------------------------------------------------
/result/changingMS/TrainAccuracy_ChangingSpeed_Train97.126Test94.300.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changingMS/TrainAccuracy_ChangingSpeed_Train97.126Test94.300.mat
--------------------------------------------------------------------------------
/result/changingMS/TrainLoss_ChangingSpeed_Train97.126Test94.300.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changingMS/TrainLoss_ChangingSpeed_Train97.126Test94.300.mat
--------------------------------------------------------------------------------
/result/changingNores/TestAccuracy_ChangingSpeed_Train96.187Test94.068.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changingNores/TestAccuracy_ChangingSpeed_Train96.187Test94.068.mat
--------------------------------------------------------------------------------
/result/changingNores/TestLoss_ChangingSpeed_Train96.187Test94.068.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changingNores/TestLoss_ChangingSpeed_Train96.187Test94.068.mat
--------------------------------------------------------------------------------
/result/changingNores/TrainAccuracy_ChangingSpeed_Train96.187Test94.068.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changingNores/TrainAccuracy_ChangingSpeed_Train96.187Test94.068.mat
--------------------------------------------------------------------------------
/result/changingNores/TrainLoss_ChangingSpeed_Train96.187Test94.068.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/result/changingNores/TrainLoss_ChangingSpeed_Train96.187Test94.068.mat
--------------------------------------------------------------------------------
/test.py:
--------------------------------------------------------------------------------
1 | import scipy.io as sio
2 | from torch.utils.data import TensorDataset, DataLoader
3 | import numpy as np
4 | import torch
5 | import torch.nn as nn
6 | from torch.autograd import Variable
7 | import torch.nn.functional as F
8 | import matplotlib.pyplot as plt
9 | import math
10 | import time
11 | import torch  # note: lines 11-25 repeat the imports above; redundant but harmless
12 | from torch import nn
13 | from torch.autograd import Variable
14 | import numpy as np
15 | import matplotlib.pyplot as plt
16 | import scipy.io as sio
17 | from torch.utils.data import TensorDataset, DataLoader
18 | import numpy as np
19 | import torch
20 | import torch.nn as nn
21 | from torch.autograd import Variable
22 | import torch.nn.functional as F
23 | import matplotlib.pyplot as plt
24 | import math
25 | import time
26 | from tqdm import tqdm
27 | 
28 | # from model.multi_scale_ori import *
29 | # from multi_scale_nores import *
30 | # from multi_scale_one3x3 import *
31 | # from multi_scale_one5x5 import *
32 | # from multi_scale_one7x7 import *
33 | 
34 | batch_size = 1024
35 | 
36 | data = sio.loadmat('data/changingSpeed_test.mat')  # note: this repo ships changingSpeed_test.mat at the root; adjust the path if there is no data/ folder
37 | test_data = data['test_data_split']
38 | test_label = data['test_label_split']
39 | 
40 | num_test_instances = len(test_data)
41 | 
42 | test_data = torch.from_numpy(test_data).type(torch.FloatTensor)
43 | test_label = torch.from_numpy(test_label).type(torch.LongTensor)
44 | test_data = test_data.view(num_test_instances, 1, -1)
45 | test_label = test_label.view(num_test_instances, 1)
46 | 
47 | test_dataset = TensorDataset(test_data, test_label)
48 | test_data_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
49 | 
50 | # msresnet = MSResNet(input_channel=1, layers=[1, 1, 1, 1], num_classes=6)
51 | msresnet = torch.load('weights/changingResnet/ChaningSpeed_Train98.655Test95.690.pkl')  # loads the whole pickled module; the defining module (multi_scale_ori) must be importable for unpickling
52 | msresnet = msresnet.cuda()
53 | msresnet.eval()
54 | 
55 | correct_test = 0
56 | for i, (samples, labels) in enumerate(test_data_loader):
57 |     with torch.no_grad():
58 |         samplesV = Variable(samples.cuda())
59 |         labels = labels.squeeze()
60 |         labelsV = Variable(labels.cuda())
61 |         # labelsV = labelsV.view(-1)
62 | 
63 |         predict_label = msresnet(samplesV)
64 |         prediction = predict_label[0].data.max(1)[1]
65 |         correct_test += prediction.eq(labelsV.data.long()).sum()
66 | 
67 |         if i == 0:
68 |             batch_prediction = prediction.cpu().numpy()  # move off the GPU; np.concatenate below fails on CUDA tensors
69 |             batch_featuremap = predict_label[1].data.cpu().numpy()
70 |             fault_prediction = batch_prediction
71 |             featuremap = batch_featuremap
72 | 
73 |         elif i > 0:
74 |             batch_prediction = prediction.cpu().numpy()
75 |             batch_featuremap = predict_label[1].data.cpu().numpy()
76 | 
77 |             fault_prediction = np.concatenate((fault_prediction, batch_prediction), axis=0)
78 |             featuremap = np.concatenate((featuremap, batch_featuremap), axis=0)
79 | 
80 | print("Test accuracy:", (100 * float(correct_test) / num_test_instances))
81 | 
82 | 
83 | 
--------------------------------------------------------------------------------
/train.py:
--------------------------------------------------------------------------------
1 | import scipy.io as sio
2 | from torch.utils.data import TensorDataset, DataLoader
3 | import numpy as np
4 | import torch
5 | import torch.nn as nn
6 | from torch.autograd import Variable
7 | import torch.nn.functional as F
8 | import matplotlib.pyplot as plt
9 | import math
10 | import time
11 | import torch  # note: lines 11-25 repeat the imports above; redundant but harmless
12 | from torch import nn
13 | from torch.autograd import Variable
14 | import numpy as np
15 | import matplotlib.pyplot as plt
16 | import scipy.io as sio
17 | from torch.utils.data import TensorDataset, DataLoader
18 | import numpy as np
19 | import torch
20 | import torch.nn as nn
21 | from torch.autograd import Variable
22 | import torch.nn.functional as F
23 | import matplotlib.pyplot as plt
24 | import math
25 | import time
26 | 
27 | from tqdm import tqdm
28 | 
29 | from model.multi_scale_ori import *
30 | # from multi_scale_nores import *
31 | # from multi_scale_one3x3 import *
32 | # from multi_scale_one5x5 import *
33 | # from multi_scale_one7x7 import *
34 | 
35 | 
36 | batch_size = 1024
37 | num_epochs = 350
38 | 
39 | 
40 | # load data
41 | data = sio.loadmat('data/changingSpeed_train.mat')  # note: this repo ships changingSpeed_train.mat at the root; adjust the path if there is no data/ folder
42 | train_data = data['train_data_split']
43 | train_label = data['train_label_split']
44 | 
45 | num_train_instances = len(train_data)
46 | 
47 | train_data = torch.from_numpy(train_data).type(torch.FloatTensor)
48 | train_label = torch.from_numpy(train_label).type(torch.LongTensor)
49 | train_data = train_data.view(num_train_instances, 1, -1)
50 | train_label = train_label.view(num_train_instances, 1)
51 | 
52 | train_dataset = TensorDataset(train_data, train_label)
53 | train_data_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
54 | 
55 | 
56 | 
57 | data = sio.loadmat('data/changingSpeed_test.mat')
58 | test_data = data['test_data_split']
59 | test_label = data['test_label_split']
60 | 
61 | num_test_instances = len(test_data)
62 | 
63 | test_data = torch.from_numpy(test_data).type(torch.FloatTensor)
64 | test_label = torch.from_numpy(test_label).type(torch.LongTensor)
65 | test_data = test_data.view(num_test_instances, 1, -1)
66 | test_label = test_label.view(num_test_instances, 1)
67 | 
68 | test_dataset = TensorDataset(test_data, test_label)
69 | test_data_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
70 | 
71 | 
72 | msresnet = MSResNet(input_channel=1, layers=[1, 1, 1, 1], num_classes=6)
73 | msresnet = msresnet.cuda()
74 | 
75 | criterion = nn.CrossEntropyLoss(size_average=False).cuda()  # size_average=False sums the per-sample losses; on PyTorch >= 1.0 use reduction='sum'
76 | 
77 | optimizer = torch.optim.Adam(msresnet.parameters(), lr=0.005)
78 | scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50, 100, 150, 200, 250, 300], gamma=0.1)
79 | train_loss = np.zeros([num_epochs, 1])
80 | test_loss = np.zeros([num_epochs, 1])
81 | train_acc = np.zeros([num_epochs, 1])
82 | test_acc = np.zeros([num_epochs, 1])
83 | 
84 | for epoch in range(num_epochs):
85 |     print('Epoch:', epoch)
86 |     msresnet.train()
87 |     scheduler.step()  # per-epoch LR decay; on PyTorch >= 1.1 call this after the epoch's optimizer steps
88 |     # for i, (samples, labels) in enumerate(train_data_loader):
89 |     loss_x = 0
90 |     for (samples, labels) in tqdm(train_data_loader):
91 |         samplesV = Variable(samples.cuda())
92 |         labels = labels.squeeze()
93 |         labelsV = Variable(labels.cuda())
94 | 
95 |         # Forward + Backward + Optimize
96 |         optimizer.zero_grad()
97 |         predict_label = msresnet(samplesV)
98 | 
99 |         loss = criterion(predict_label[0], labelsV)
100 | 
101 |         loss_x += loss.item()
102 | 
103 |         loss.backward()
104 |         optimizer.step()
105 | 
106 |     train_loss[epoch] = loss_x / num_train_instances
107 | 
108 |     msresnet.eval()
109 |     # loss_x = 0
110 |     correct_train = 0
111 |     for i, (samples, labels) in enumerate(train_data_loader):
112 |         with torch.no_grad():
113 |             samplesV = Variable(samples.cuda())
114 |             labels = labels.squeeze()
115 |             labelsV = Variable(labels.cuda())
116 |             # labelsV = labelsV.view(-1)
117 | 
118 |             predict_label = msresnet(samplesV)
119 |             prediction = predict_label[0].data.max(1)[1]
120 |             correct_train += prediction.eq(labelsV.data.long()).sum()
121 | 
122 |             loss = criterion(predict_label[0], labelsV)
123 |             # loss_x += loss.item()
124 | 
125 |     print("Training accuracy:", (100*float(correct_train)/num_train_instances))
126 | 
127 |     # train_loss[epoch] = loss_x / num_train_instances
128 |     train_acc[epoch] = 100*float(correct_train)/num_train_instances
129 | 
130 |     trainacc = str(100*float(correct_train)/num_train_instances)[0:6]
131 | 
132 | 
133 |     loss_x = 0
134 |     correct_test = 0
135 |     for i, (samples, labels) in enumerate(test_data_loader):
136 |         with torch.no_grad():
137 |             samplesV = Variable(samples.cuda())
138 |             labels = labels.squeeze()
139 |             labelsV = Variable(labels.cuda())
140 |             # labelsV = labelsV.view(-1)
141 | 
142 |             predict_label = msresnet(samplesV)
143 |             prediction = predict_label[0].data.max(1)[1]
144 |             correct_test += prediction.eq(labelsV.data.long()).sum()
145 | 
146 |             loss = criterion(predict_label[0], labelsV)
147 |             loss_x += loss.item()
148 | 
149 |     print("Test accuracy:", (100 * float(correct_test) / num_test_instances))
150 | 
151 |     test_loss[epoch] = loss_x / num_test_instances
152 |     test_acc[epoch] = 100 * float(correct_test) / num_test_instances
153 | 
154 |     testacc = str(100 * float(correct_test) / num_test_instances)[0:6]
155 | 
156 |     if epoch == 0:
157 |         temp_test = correct_test
158 |         temp_train = correct_train
159 |     elif correct_test > temp_test:
160 |         torch.save(msresnet, 'weights/changingResnet/ChaningSpeed_Train' + trainacc + 'Test' + testacc + '.pkl')  # saves the whole module; 'ChaningSpeed' (sic) matches the shipped weight filenames
161 |         temp_test = correct_test
162 |         temp_train = correct_train
163 | 
164 |     sio.savemat('result/changingResnet/TrainLoss_' + 'ChangingSpeed_Train' + str(100*float(temp_train)/num_train_instances)[0:6] + 'Test' + str(100*float(temp_test)/num_test_instances)[0:6] + '.mat', {'train_loss': train_loss})
165 |     sio.savemat('result/changingResnet/TestLoss_' + 'ChangingSpeed_Train' + str(100*float(temp_train)/num_train_instances)[0:6] + 'Test' + str(100*float(temp_test)/num_test_instances)[0:6] + '.mat', {'test_loss': test_loss})
166 |     sio.savemat('result/changingResnet/TrainAccuracy_' + 'ChangingSpeed_Train' + str(100*float(temp_train)/num_train_instances)[0:6] + 'Test' + str(100*float(temp_test)/num_test_instances)[0:6] + '.mat', {'train_acc': train_acc})
167 |     sio.savemat('result/changingResnet/TestAccuracy_' + 'ChangingSpeed_Train' + str(100*float(temp_train)/num_train_instances)[0:6] + 'Test' + str(100*float(temp_test)/num_test_instances)[0:6] + '.mat', {'test_acc': test_acc})
168 |     print(str(100*float(temp_test)/num_test_instances)[0:6])
169 | 
170 | plt.plot(train_loss)
171 | plt.show()
172 | 
--------------------------------------------------------------------------------
/weights/changing3x3/ChaningSpeed_Train98.424Test93.188.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/weights/changing3x3/ChaningSpeed_Train98.424Test93.188.pkl
--------------------------------------------------------------------------------
/weights/changing5x5/ChaningSpeed_Train96.628Test93.466.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/weights/changing5x5/ChaningSpeed_Train96.628Test93.466.pkl
--------------------------------------------------------------------------------
/weights/changing7x7/ChaningSpeed_Train97.601Test92.585.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/weights/changing7x7/ChaningSpeed_Train97.601Test92.585.pkl
--------------------------------------------------------------------------------
/weights/changingNores/ChaningSpeed_Train96.187Test94.068.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/weights/changingNores/ChaningSpeed_Train96.187Test94.068.pkl
--------------------------------------------------------------------------------
/weights/changingResnet/ChaningSpeed_Train98.655Test95.690.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geekfeiw/Multi-Scale-1D-ResNet/bf9d780ff36fdfa44fd8b19a4c881b7a79b3a3e1/weights/changingResnet/ChaningSpeed_Train98.655Test95.690.pkl
--------------------------------------------------------------------------------
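
A quick way to inspect the training curves archived under result/ is a short standalone script like the sketch below. This is a minimal example, not a file in the repository: the .mat key names ('train_acc', 'test_acc', 'train_loss', 'test_loss') are taken from the sio.savemat calls in train.py, and the file names are the shipped changing3x3 run; substitute whichever run you want to plot.

import scipy.io as sio
import matplotlib.pyplot as plt

# Accuracy curves saved by train.py; each array has shape [num_epochs, 1].
run = 'result/changing3x3/'
suffix = 'ChangingSpeed_Train98.424Test93.188.mat'
train_acc = sio.loadmat(run + 'TrainAccuracy_' + suffix)['train_acc']
test_acc = sio.loadmat(run + 'TestAccuracy_' + suffix)['test_acc']

plt.plot(train_acc, label='train')
plt.plot(test_acc, label='test')
plt.xlabel('epoch')
plt.ylabel('accuracy (%)')
plt.legend()
plt.show()

The same pattern works for the loss files (TrainLoss_*/TestLoss_* with keys 'train_loss'/'test_loss'); note that epochs before the first checkpoint-worthy result simply hold zeros, since train.py pre-allocates the arrays with np.zeros and overwrites the .mat files every epoch.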