├── .gitignore ├── LICENSE ├── README.md ├── Wei_CMMNO2019_CNN-RSN.pdf ├── data_loader.py ├── iter_utils.py ├── models ├── __init__.py ├── dcnn.py ├── fft_wdcnn.py ├── ince.py ├── lenet.py ├── mlp.py ├── vgg.py └── wdcnn.py ├── run.py └── toydata ├── data.txt └── label.txt /.gitignore: -------------------------------------------------------------------------------- 1 | # Python: 2 | *.py[cod] 3 | *.so 4 | *.egg 5 | *.egg-info 6 | *.txt 7 | dist 8 | build 9 | 10 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | BSD 3-Clause License 2 | 3 | Copyright (c) 2017, Dongdong Wei 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are met: 8 | 9 | * Redistributions of source code must retain the above copyright notice, this 10 | list of conditions and the following disclaimer. 11 | 12 | * Redistributions in binary form must reproduce the above copyright notice, 13 | this list of conditions and the following disclaimer in the documentation 14 | and/or other materials provided with the distribution. 15 | 16 | * Neither the name of the copyright holder nor the names of its 17 | contributors may be used to endorse or promote products derived from 18 | this software without specific prior written permission. 19 | 20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 21 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 22 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 23 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 25 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 26 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 27 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 28 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # deep-sensor 2 | 3 | A [PyTorch](http://pytorch.org/) framework for sensory signals in machinery fault diagnostics. 4 | 5 | [![](https://img.shields.io/badge/build-passing-brightgreen.svg)](https://github.com/redone17/conv-rotor) [![](https://img.shields.io/badge/python-3.7-blue.svg)](https://www.python.org/) [![](https://img.shields.io/badge/license-BSD3-ff69b4.svg)](https://github.com/redone17/conv-rotor/blob/master/LICENSE) 6 | 7 | ## requirements: 8 | * numpy (>= 1.13.1) 9 | * pytorch (>= 0.1.12.post2) 10 | 11 | ## usage 12 | 1. data: 13 | * .txt file for data: \# signal_samples x \# signal_length 14 | * .txt file for label: \# signal_samples x 1 15 | * An example fake dataset with 100 samples and labels is in [toydata](./toydata). 16 | 2. run: ``` $ python run.py ```
17 | 18 | ## models 19 | ### 1D 20 | * [WDCNN](https://github.com/redone17/conv-rotor/blob/master/models/wdcnn.py) from paper: [A New Deep Learning Model for Fault Diagnosis with Good Anti-Noise and Domain Adaptation Ability on Raw Vibration Signals](http://dx.doi.org/10.3390/s17020425) 21 | * [Ince's](https://github.com/redone17/deep-sensor/blob/master/models/ince.py) from paper: [Real-Time Motor Fault Detection by 1-D Convolutional Neural Networks](https://doi.org/10.1109/TIE.2016.2582729) 22 | * [1DCNN](https://github.com/redone17/deep-sensor/blob/master/models/dcnn.py) from paper: [Convolutional Neural Networks for Fault Diagnosis Using Rotating Speed Normalized Vibration](https://doi.org/10.1007/978-3-030-11220-2_8) (cite as) 23 | ~~~~ 24 | @InProceedings{wei_conv_2019, 25 | author="Wei, Dongdong 26 | and Wang, KeSheng 27 | and Heyns, Stephan 28 | and Zuo, Ming J.", 29 | title="Convolutional Neural Networks for Fault Diagnosis Using Rotating Speed Normalized Vibration", 30 | booktitle="Advances in Condition Monitoring of Machinery in Non-Stationary Operations", 31 | year="2019", 32 | publisher="Springer International Publishing", 33 | pages="67--76", 34 | isbn="978-3-030-11220-2", 35 | doi="10.1007/978-3-030-11220-2_8" 36 | } 37 | ~~~~ 38 | 39 | ### 2D 40 | * [LeNet5](https://github.com/redone17/deep-sensor/blob/master/models/lenet.py) for spectrogram of signals 41 | * [VGG](https://github.com/redone17/deep-sensor/blob/master/models/vgg.py) forked from https://github.com/kuangliu/pytorch-cifar/blob/master/models/vgg.py 42 | 43 | -------------------------------------------------------------------------------- /Wei_CMMNO2019_CNN-RSN.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redone17/deep-sensor/5f7124d986b56856969990dc6d58a8a0238eb9fd/Wei_CMMNO2019_CNN-RSN.pdf -------------------------------------------------------------------------------- /data_loader.py: -------------------------------------------------------------------------------- 1 | # -*- coding:utf-8 -*- 2 | # python2 3 | 4 | ''' 5 | functions: 6 | - load data and label from txt files 7 | - split train and test sets 8 | - resample 9 | - fast fourier transform 10 | - short time fourier transform 11 | ''' 12 | 13 | import time 14 | import numpy as np 15 | import torch 16 | import torch.utils.data as data_utils 17 | import scipy.signal as sig 18 | 19 | def load_data(*data_file): 20 | ''' 21 | data_file is a tuple with 1 or 2 elements; 22 | the first is the vibration matrix, 23 | the second (optional) is the rotating speed matrix. 24 | 25 | Pass two files only for RSN; 26 | call load_data twice if you have two vibration signal files. 27 | 28 | RSN is a preprocessing step for the signal; 29 | it requires the rotating speed matrix. 30 | 31 | See the paper "Convolutional Neural Networks for Fault Diagnosis 32 | Using Rotating Speed Normalized Vibration". 
33 | ''' 34 | since = time.time() 35 | data_arr = np.loadtxt(data_file[0]) 36 | if len(data_file)>1: 37 | print('rsn used (see the paper).') 38 | rpm_arr = np.loadtxt(data_file[1]) 39 | mean_rpm = np.mean(rpm_arr) 40 | data_arr = np.power(mean_rpm,2)*data_arr / (rpm_arr*rpm_arr) 41 | time_elapsed = time.time() - since 42 | print('Data in file {} loaded in {:.0f}m {:.0f}s'.format(data_file, time_elapsed//60, time_elapsed%60)) 43 | return np.expand_dims(data_arr, axis=1) 44 | 45 | def load_label(label_file): 46 | ''' 47 | load labels corresponding to data 48 | ''' 49 | return np.loadtxt(label_file, ndmin=1) 50 | 51 | def split_set(data, label, p=0.8): 52 | ''' 53 | split data and label array into train and test partitions 54 | ''' 55 | n_total = np.shape(data)[0] 56 | n_train = int(n_total*p) 57 | n_test = n_total - n_train 58 | idx = np.random.permutation(n_total) 59 | train_mask = idx[:n_train] 60 | test_mask = idx[n_total-n_test:] 61 | trainset = arr_to_dataset(data[train_mask], label[train_mask]) 62 | testset = arr_to_dataset(data[test_mask], label[test_mask]) 63 | return trainset, testset 64 | 65 | def arr_to_dataset(data_arr, label_vec): 66 | ''' 67 | convert numpy array into tensor dataset 68 | dataset = (X,y) 69 | ''' 70 | data_ten = torch.from_numpy(data_arr).float() 71 | label_ten = torch.from_numpy(label_vec).long() 72 | dataset = data_utils.TensorDataset(data_ten, label_ten) 73 | return dataset 74 | 75 | def resample_arr(data_arr, num=2048, method='Fourier'): 76 | ''' 77 | Resample the input numpy array to length num 78 | method options: 'Fourier' and 'Poly' 79 | ''' 80 | if(method=='Fourier'): 81 | new_arr = sig.resample(data_arr, num, axis=2) 82 | elif(method=='Poly'): 83 | new_arr = sig.resample_poly(data_arr, num, data_arr.shape[2], axis=2) 84 | return new_arr 85 | 86 | def fft_arr(arr): 87 | ''' 88 | Fourier transform for signals in a Numpy array 89 | ''' 90 | (n, _, l) = arr.shape 91 | amp = np.zeros((n,1,l//2)) 92 | ang = np.zeros((n,1,l//2)) 93 | for idx in range(n): 94 | ft = np.fft.fft(arr[idx,:,:])[:,:l//2] 95 | amp[idx] = np.absolute(ft) 96 | ang[idx] = np.angle(ft) 97 | return amp, ang 98 | 99 | def stft_arr(arr, output_size=(32,32)): 100 | ''' 101 | Short Time Fourier Transform for signals in a Numpy array 102 | ''' 103 | (n, _, l) = arr.shape 104 | spectrogram = np.zeros((n, 1, output_size[0], output_size[1])) 105 | for idx in range(n): 106 | f, t, S = sig.spectrogram(arr[idx,0,:], fs=10240, nperseg=output_size[0]*2, noverlap=0) 107 | spectrogram[idx, 0] = np.absolute(S[:(output_size[0]), :(output_size[1])]) 108 | return spectrogram 109 | 110 | ''' 111 | arr = np.random.rand(10, 1, 2048) 112 | arr = resample_arr(arr, 240, method='Poly') 113 | print(arr.shape) 114 | ''' 115 | 116 | -------------------------------------------------------------------------------- /iter_utils.py: -------------------------------------------------------------------------------- 1 | from __future__ import division 2 | import time 3 | import copy 4 | import torch 5 | import torch.utils.data as data_utils 6 | import torch.optim as optim 7 | from torch.autograd import Variable 8 | from torch import cuda 9 | 10 | def learning_scheduler(optimizer, epoch, lr=0.001, lr_decay_epoch=10): 11 | lr = lr * (0.5**(epoch // lr_decay_epoch)) 12 | if epoch % lr_decay_epoch == 0: 13 | print('Learning rate is set to {}'.format(lr)) 14 | for param in optimizer.param_groups: 15 | param['lr'] = lr 16 | return optimizer 17 | 18 | def train(model, train_loader, criterion, optimizer, init_lr=0.001, decay_epoch=10, n_epoch=20, use_cuda=True):
19 | if use_cuda: 20 | model = model.cuda() 21 | best_model = model 22 | best_accuracy = 0.0 23 | loss_curve = [] 24 | since = time.time() 25 | 26 | for epoch in range(n_epoch): 27 | print('Epoch {}/{}'.format(epoch+1, n_epoch)) 28 | 29 | for phase in ['train', 'val']: # note: both phases iterate train_loader; there is no separate validation loader 30 | if phase == 'train': 31 | optimizer = learning_scheduler(optimizer, epoch, lr=init_lr, lr_decay_epoch=decay_epoch) 32 | model.train(True) 33 | else: 34 | model.train(False) 35 | 36 | running_loss = 0.0 37 | running_corrects = 0 38 | for batch_idx, (inputs, targets) in enumerate(train_loader): 39 | if use_cuda: 40 | inputs, targets = inputs.cuda(), targets.cuda() 41 | optimizer.zero_grad() 42 | inputs, targets = Variable(inputs), Variable(targets) 43 | outputs = model(inputs) 44 | loss = criterion(outputs, targets) 45 | if phase == 'train': 46 | loss.backward() 47 | optimizer.step() 48 | running_loss += loss.item() 49 | _, predicted = torch.max(outputs.data, 1) 50 | running_corrects += torch.sum(predicted==targets.data).item() 51 | loss_curve.append(loss.item()) 52 | 53 | epoch_loss = running_loss / len(train_loader.dataset) 54 | epoch_accuracy = running_corrects / len(train_loader.dataset) 55 | print('{} loss: {:.4f}, accuracy: {:.4f}'.format(phase, epoch_loss, epoch_accuracy)) 56 | 57 | if phase == 'val' and epoch_accuracy>best_accuracy: 58 | best_accuracy = epoch_accuracy 59 | best_model = copy.deepcopy(model) 60 | print(' ') 61 | 62 | time_elapsed = time.time() - since 63 | print('{} trained in {:.0f}m {:.0f}s'.format(best_model.name, time_elapsed//60, time_elapsed%60)) 64 | print('Best validation accuracy: {:.4f}'.format(best_accuracy)) 65 | return best_model, loss_curve 66 | 67 | def test(model, test_loader): 68 | corrects = 0 69 | model = model.cpu() 70 | for batch_idx, (inputs, targets) in enumerate(test_loader): 71 | with torch.no_grad(): 72 | inputs, targets = Variable(inputs), Variable(targets) 73 | outputs = model(inputs) 74 | _, predicted = torch.max(outputs.data, 1) 75 | corrects += torch.sum(predicted==targets.data).item() 76 | accuracy = corrects / len(test_loader.dataset) 77 | return accuracy 78 | -------------------------------------------------------------------------------- /models/__init__.py: -------------------------------------------------------------------------------- 1 | from .wdcnn import * 2 | from .ince import * 3 | from .fft_wdcnn import * 4 | from .lenet import * 5 | from .vgg import * 6 | from .dcnn import * 7 | # from .mlp import * 8 | 9 | -------------------------------------------------------------------------------- /models/dcnn.py: -------------------------------------------------------------------------------- 1 | ''' 2 | DCNN for one-dimensional signals in Pytorch. 3 | 4 | Configurations DCNN08/11/13/16/19 are defined in the cfg dict below. 5 | 
6 | ''' 7 | 8 | import torch 9 | import torch.nn as nn 10 | from torch.autograd import Variable 11 | 12 | cfg = { 13 | 'DCNN08': [8, 'M', 16, 'M', 32, 'M', 64, 'M', 128, 'M'], 14 | 'DCNN11': [8, 'M', 16, 'M', 32, 32, 'M', 64, 64, 'M', 128, 128, 'M'], 15 | 'DCNN13': [8, 8, 'M', 16, 16, 'M', 32, 32, 'M', 64, 64, 'M', 128, 128, 'M'], 16 | 'DCNN16': [8, 8, 'M', 16, 16, 'M', 32, 32, 32, 'M', 64, 64, 64, 'M', 128, 128, 128, 'M'], 17 | 'DCNN19': [8, 8, 'M', 16, 16, 'M', 32, 32, 32, 32, 'M', 64, 64, 64, 64, 'M', 128, 128, 128, 128, 'M'], 18 | } 19 | 20 | class Net(nn.Module): 21 | def __init__(self, vgg_name, in_channels=1, n_class=5, use_feature=False): 22 | super(Net, self).__init__() 23 | self.name = vgg_name 24 | self.in_channels = in_channels 25 | self.use_feature = use_feature 26 | self.features = self._make_layers(cfg[vgg_name]) 27 | self.classifier = nn.Linear(128, n_class) 28 | 29 | def forward(self, x): 30 | features = self.features(x) 31 | activations = self.classifier(features.view(features.size(0), -1)) 32 | if self.use_feature: 33 | out = (activations, features) 34 | else: 35 | out = activations 36 | return out 37 | 38 | def _make_layers(self, cfg): 39 | layers = [] 40 | in_channels = self.in_channels 41 | for x in cfg: 42 | if x == 'M': 43 | layers += [nn.MaxPool1d(kernel_size=2, stride=2)] 44 | else: 45 | layers += [nn.Conv1d(in_channels, x, kernel_size=3, padding=1), 46 | nn.BatchNorm1d(x), 47 | nn.ReLU(inplace=True)] 48 | in_channels = x 49 | layers += [nn.AvgPool1d(kernel_size=64, stride=1)] # AvePool for bigger input sizes 50 | return nn.Sequential(*layers) 51 | 52 | ''' 53 | net = Net('DCNN08',1,5) 54 | x = torch.randn(10,1,2048) 55 | print(net(Variable(x)).size()) 56 | ''' 57 | 58 | -------------------------------------------------------------------------------- /models/fft_wdcnn.py: -------------------------------------------------------------------------------- 1 | ''' 2 | WDCNN model with pytorch 3 | 4 | Reference: 5 | Wei Zhang, Gaoliang Peng, Chuanhao Li, Yuanhang Chen and Zhujun Zhang 6 | A New Deep Learning Model for Fault Diagnosis with Good Anti-Noise and Domain Adaptation Ability on Raw Vibration Signals 7 | Sensors, MDPI 8 | doi:10.3390/s17020425 9 | ''' 10 | import torch 11 | import torch.nn as nn 12 | import torch.nn.functional as F 13 | from torch.autograd import Variable 14 | 15 | class BasicBlock(nn.Module): 16 | def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1): 17 | super(BasicBlock, self).__init__() 18 | self.conv = nn.Conv1d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=False) 19 | self.bn = nn.BatchNorm1d(out_channels) 20 | self.pool = nn.MaxPool1d(2,stride=2) 21 | 22 | def forward(self, x): 23 | out = self.conv(x) 24 | out = self.bn(out) 25 | out = self.pool(out) 26 | out = F.relu(out) 27 | return out 28 | 29 | class Net(nn.Module): 30 | def __init__(self, in_channels, n_class, use_feature=False): 31 | super(Net, self).__init__() 32 | self.name = 'FFT-WDCNN' 33 | self.use_feature = use_feature 34 | self.b0 = nn.BatchNorm1d(in_channels) 35 | self.b1 = BasicBlock(in_channels, 16, kernel_size=64, stride=16, padding=24) 36 | self.b2 = BasicBlock(16, 32) 37 | self.b3 = BasicBlock(32, 64) 38 | self.b4 = BasicBlock(64, 64, padding=0) 39 | self.n_features = 64*3 40 | self.fc = nn.Linear(self.n_features, n_class) 41 | 42 | def forward(self, x): 43 | f0 = self.b0(x) 44 | f1 = self.b1(f0) 45 | f2 = self.b2(f1) 46 | f3 = self.b3(f2) 47 | f4 = self.b4(f3) 48 | features = (f0,f1,f2,f3,f4) 49 | 
activations = self.fc(features[-1].view(-1, self.n_features)) 50 | if self.use_feature: 51 | out = (activations, features) 52 | else: 53 | out = activations 54 | return out 55 | 56 | ''' 57 | net = Net(1, 4) 58 | x = torch.randn(10,1,1024) 59 | y = net(Variable(x)) 60 | print(y.size()) 61 | ''' 62 | 63 | -------------------------------------------------------------------------------- /models/ince.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Ince's model with pytorch 3 | 4 | Reference: 5 | Turker Ince, Serkan Kiranyaz, Levent Eren, Murat Askar, and Moncef Gabbouj 6 | Real-Time Motor Fault Detection by 1-D Convolutional Neural Networks 7 | IEEE Transactions on Industrial Electronics 8 | doi:10.1109/TIE.2016.2582729 9 | ''' 10 | import torch 11 | import torch.nn as nn 12 | import torch.nn.functional as F 13 | from torch.autograd import Variable 14 | 15 | class BasicBlock(nn.Module): 16 | def __init__(self, in_channels, out_channels, kernel_size=9, stride=1, padding=0): 17 | super(BasicBlock, self).__init__() 18 | self.conv = nn.Conv1d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=False) 19 | self.bn = nn.BatchNorm1d(out_channels) 20 | self.pool = nn.MaxPool1d(4,stride=4) 21 | 22 | def forward(self, x): 23 | out = self.conv(x) 24 | out = self.bn(out) 25 | out = self.pool(out) 26 | out = F.relu(out) 27 | return out 28 | 29 | class Net(nn.Module): 30 | def __init__(self, in_channels, n_class, use_feature=False): 31 | super(Net, self).__init__() 32 | self.name = "Ince's Model" 33 | self.use_feature = use_feature 34 | self.b0 = nn.BatchNorm1d(in_channels) 35 | self.b1 = BasicBlock(in_channels, 60) 36 | self.b2 = BasicBlock(60, 40, padding=1) 37 | self.b3 = nn.Sequential( 38 | nn.Conv1d(40, 40, kernel_size=9, stride=1, padding=0), 39 | nn.BatchNorm1d(40), 40 | nn.MaxPool1d(5, stride=5), 41 | nn.ReLU() 42 | ) 43 | self.n_features = 40 44 | self.fc = nn.Linear(self.n_features, n_class) 45 | 46 | def forward(self, x): 47 | f0 = self.b0(x) 48 | f1 = self.b1(f0) 49 | f2 = self.b2(f1) 50 | f3 = self.b3(f2) 51 | features = (f0,f1,f2,f3) 52 | activations = self.fc(features[-1].view(-1, self.n_features)) 53 | if self.use_feature: 54 | out = (activations, features) 55 | else: 56 | out = activations 57 | return out 58 | 59 | ''' 60 | net = Net(1, 4) 61 | x = torch.randn(10,1,240) 62 | y = net(Variable(x)) 63 | print(y.size()) 64 | ''' 65 | 66 | -------------------------------------------------------------------------------- /models/lenet.py: -------------------------------------------------------------------------------- 1 | ''' 2 | classical LeNet5 for 32x32 images with pytorch 3 | ''' 4 | import torch 5 | import torch.nn as nn 6 | import torch.nn.functional as F 7 | from torch.autograd import Variable 8 | 9 | class BasicBlock(nn.Module): 10 | def __init__(self, in_channels, out_channels, kernel_size=5, stride=1, padding=0): 11 | super(BasicBlock, self).__init__() 12 | self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=False) 13 | self.bn = nn.BatchNorm2d(out_channels) 14 | self.pool = nn.MaxPool2d(2,stride=2) 15 | 16 | def forward(self, x): 17 | out = self.conv(x) 18 | out = self.bn(out) 19 | out = self.pool(out) 20 | out = F.relu(out) 21 | return out 22 | 23 | class Net(nn.Module): 24 | def __init__(self, in_channels, n_class, use_feature=False): 25 | super(Net, self).__init__() 26 | self.name = 'LeNet5' 27 | self.use_feature = use_feature 28 | self.b0 = nn.BatchNorm2d(in_channels)
29 | self.b1 = BasicBlock(in_channels, 6) 30 | self.b2 = BasicBlock(6, 16) 31 | self.n_features = 16*5*5 32 | self.fc = nn.Sequential( 33 | nn.Linear(self.n_features, 120), 34 | nn.ReLU(), 35 | nn.Linear(120, 84), 36 | nn.ReLU(), 37 | nn.Linear( 84, n_class) 38 | ) 39 | 40 | def forward(self, x): 41 | f0 = self.b0(x) 42 | f1 = self.b1(f0) 43 | f2 = self.b2(f1) 44 | features = (f0,f1,f2) 45 | activations = self.fc(features[-1].view(-1, self.n_features)) 46 | if self.use_feature: 47 | out = (activations, features) 48 | else: 49 | out = activations 50 | return out 51 | 52 | ''' 53 | net = Net(1, 4) 54 | x = torch.randn(10,1,32,32) 55 | y = net(Variable(x)) 56 | print(y.data.size()) 57 | ''' 58 | 59 | -------------------------------------------------------------------------------- /models/mlp.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Multi-Layer Perceptron 3 | a baseline 4 | ''' 5 | import torch 6 | import torch.nn as nn 7 | from torch.autograd import Variable 8 | 9 | class Net(nn.Module): 10 | def __init__(self, in_features, n_class, n_hiddens=[500, 100], use_batchnorm=False, use_dropout=False): 11 | super(Net, self).__init__() 12 | self.name = 'MLP' 13 | self.in_features = in_features 14 | self.use_batchnorm = use_batchnorm 15 | self.use_dropout = use_dropout 16 | self.n_layers = len(n_hiddens) + 1 17 | self.n_hiddens = n_hiddens 18 | self.mlp = self._make_layers(in_features, n_hiddens, n_class) 19 | 20 | def _make_layers(self, in_features, n_hiddens, n_class): 21 | # fully connected stack: Linear (+ optional BatchNorm1d and Dropout) with ReLU per hidden layer 22 | layers = [] 23 | n_in = in_features 24 | for n_out in n_hiddens: 25 | layers += [nn.Linear(n_in, n_out)] 26 | if self.use_batchnorm: 27 | layers += [nn.BatchNorm1d(n_out)] 28 | layers += [nn.ReLU(inplace=True)] 29 | if self.use_dropout: 30 | layers += [nn.Dropout()] 31 | n_in = n_out 32 | layers += [nn.Linear(n_in, n_class)] 33 | return nn.Sequential(*layers) 34 | 35 | def forward(self, x): 36 | # flatten (N, 1, L) signals to (N, L) before the fully connected stack 37 | return self.mlp(x.view(x.size(0), -1)) 38 | 39 | -------------------------------------------------------------------------------- /models/vgg.py: -------------------------------------------------------------------------------- 1 | ''' 2 | VGG11/13/16/19 in Pytorch. 
3 | forked from kuangliu/pytorch-cifar 4 | https://github.com/kuangliu/pytorch-cifar/blob/master/models/vgg.py 5 | ''' 6 | 7 | import torch 8 | import torch.nn as nn 9 | from torch.autograd import Variable 10 | 11 | cfg = { 12 | 'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'], 13 | 'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'], 14 | 'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'], 15 | 'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'], 16 | 'cVGG19': [8, 8, 'M', 16, 16, 'M', 32, 32, 32, 32, 'M', 64, 64, 64, 64, 'M', 128, 128, 128, 128, 'M'], 17 | } 18 | 19 | class Net(nn.Module): 20 | def __init__(self, vgg_name, in_channels=1, n_class=5, use_feature=False): 21 | super(Net, self).__init__() 22 | self.name = vgg_name 23 | self.in_channels = in_channels 24 | self.use_feature = use_feature 25 | self.features = self._make_layers(cfg[vgg_name]) 26 | self.classifier = nn.Linear(128, n_class) 27 | 28 | def forward(self, x): 29 | features = self.features(x) 30 | activations = self.classifier(features.view(features.size(0), -1)) 31 | if self.use_feature: 32 | out = (activations, features) 33 | else: 34 | out = activations 35 | return out 36 | 37 | def _make_layers(self, cfg): 38 | layers = [] 39 | in_channels = self.in_channels 40 | for x in cfg: 41 | if x == 'M': 42 | layers += [nn.MaxPool2d(kernel_size=2, stride=2)] 43 | else: 44 | layers += [nn.Conv2d(in_channels, x, kernel_size=3, padding=1), 45 | nn.BatchNorm2d(x), 46 | nn.ReLU(inplace=True)] 47 | in_channels = x 48 | layers += [nn.AvgPool2d(kernel_size=2, stride=1)] # AvePool for bigger input sizes 49 | return nn.Sequential(*layers) 50 | 51 | ''' 52 | net = Net('cVGG19',3,4) 53 | x = torch.randn(10,3,32,32) 54 | print(net(Variable(x)).size()) 55 | ''' 56 | 57 | -------------------------------------------------------------------------------- /models/wdcnn.py: -------------------------------------------------------------------------------- 1 | ''' 2 | WDCNN model with pytorch 3 | 4 | Reference: 5 | Wei Zhang, Gaoliang Peng, Chuanhao Li, Yuanhang Chen and Zhujun Zhang 6 | A New Deep Learning Model for Fault Diagnosis with Good Anti-Noise and Domain Adaptation Ability on Raw Vibration Signals 7 | Sensors, MDPI 8 | doi:10.3390/s17020425 9 | ''' 10 | import torch 11 | import torch.nn as nn 12 | import torch.nn.functional as F 13 | from torch.autograd import Variable 14 | 15 | class BasicBlock(nn.Module): 16 | def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1): 17 | super(BasicBlock, self).__init__() 18 | self.conv = nn.Conv1d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=False) 19 | self.bn = nn.BatchNorm1d(out_channels) 20 | self.pool = nn.MaxPool1d(2,stride=2) 21 | 22 | def forward(self, x): 23 | out = self.conv(x) 24 | out = self.bn(out) 25 | out = self.pool(out) 26 | out = F.relu(out) 27 | return out 28 | 29 | class Net(nn.Module): 30 | def __init__(self, in_channels, n_class, use_feature=False): 31 | super(Net, self).__init__() 32 | self.name = 'WDCNN' 33 | self.use_feature = use_feature 34 | self.b0 = nn.BatchNorm1d(in_channels) 35 | self.b1 = BasicBlock(in_channels, 16, kernel_size=64, stride=16, padding=24) 36 | self.b2 = BasicBlock(16, 32) 37 | self.b3 = BasicBlock(32, 64) 38 | self.b4 = BasicBlock(64, 64) 39 | self.b5 = BasicBlock(64, 64, padding=0) 40 | self.n_features = 64*3 41 | self.fc = 
nn.Linear(self.n_features, n_class) 42 | 43 | def forward(self, x): 44 | f0 = self.b0(x) 45 | f1 = self.b1(f0) 46 | f2 = self.b2(f1) 47 | f3 = self.b3(f2) 48 | f4 = self.b4(f3) 49 | f5 = self.b5(f4) 50 | features = (f0,f1,f2,f3,f4,f5) 51 | activations = self.fc(features[-1].view(-1, self.n_features)) 52 | if self.use_feature: 53 | out = (activations, features) 54 | else: 55 | out = activations 56 | return out 57 | ''' 58 | net = Net(1, 4) 59 | x = torch.randn(10,1,2048) 60 | y = net(Variable(x)) 61 | print(y.size()) 62 | ''' 63 | 64 | -------------------------------------------------------------------------------- /run.py: -------------------------------------------------------------------------------- 1 | # -*- coding:utf-8 -*- 2 | # python2 3 | 4 | ''' 5 | main script for this project 6 | 0. initialize 7 | 1. get data; 8 | 2. make models; 9 | 3. train; 10 | 4. test; 11 | 5. visualize; 12 | ''' 13 | 14 | ## initialize 15 | from __future__ import print_function, division 16 | import torch.optim as optim 17 | import torch.nn as nn 18 | import data_loader 19 | import iter_utils 20 | import torch.utils.data as data_utils 21 | from models import * 22 | 23 | def main(): 24 | ## load data 25 | # data_arr_01 = data_loader.load_data(r'toydata/data.txt') 26 | # label_vec = data_loader.load_label(r'toydata/label.txt') 27 | 28 | data_arr_01 = data_loader.load_data(r'data/uestc_pgb/SF01/vib_data_1.txt') 29 | # data_arr_03 = data_loader.load_data('data/pgb/SF03/vib_data_1.txt') 30 | # data_arr_01 = data_loader.resample_arr(data_arr_01, num=240) # add for Ince's model 31 | # data_arr_03 = data_loader.resample_arr(data_arr_03, num=240) # add for Ince's model 32 | # data_arr_01, _ = data_loader.fft_arr(data_arr_01) # add for fft wdcnn 33 | # data_arr_03, _ = data_loader.fft_arr(data_arr_03) # add for fft wdcnn 34 | # data_arr_01 = data_loader.stft_arr(data_arr_01) # add for stft-LeNet 35 | # data_arr_03 = data_loader.stft_arr(data_arr_03) 36 | label_vec = data_loader.load_label(r'data/uestc_pgb/SF01/label_vec.txt') 37 | 38 | trainset_01, testset_01 = data_loader.split_set(data_arr_01, label_vec) 39 | # trainset_03, testset_03 = data_loader.split_set(data_arr_03, label_vec) 40 | train_loader = data_utils.DataLoader(dataset = trainset_01, batch_size =512 , shuffle = True, num_workers = 2) 41 | test_loader = data_utils.DataLoader(dataset = testset_01, batch_size = 512, shuffle = True, num_workers = 2) 42 | print('Number of training samples: {}'.format(len(train_loader.dataset))) 43 | print('Number of testing samples: {}'.format(len(test_loader.dataset))) 44 | print( ) 45 | 46 | ## make models 47 | model = wdcnn.Net(1, 5) 48 | 49 | ## train 50 | criterion = nn.CrossEntropyLoss() 51 | optimizer = optim.Adam(model.parameters(), weight_decay=0.0001) 52 | best_model, loss_curve = iter_utils.train(model, train_loader, criterion, optimizer, 53 | init_lr=0.0001, decay_epoch=5, n_epoch=10, use_cuda=False) 54 | 55 | # test 56 | test_accuracy = iter_utils.test(best_model, test_loader) 57 | print('Test accuracy: {:.4f}%'.format(100*test_accuracy)) 58 | 59 | 60 | ## visualization 61 | # TODO 62 | 63 | if __name__ == '__main__': 64 | main() 65 | -------------------------------------------------------------------------------- /toydata/label.txt: -------------------------------------------------------------------------------- 1 | 0.0000000e+00 2 | 1.0000000e+00 3 | 2.0000000e+00 4 | 0.0000000e+00 5 | 2.0000000e+00 6 | 2.0000000e+00 7 | 3.0000000e+00 8 | 0.0000000e+00 9 | 3.0000000e+00 10 | 1.0000000e+00 11 | 0.0000000e+00 12 | 
1.0000000e+00 13 | 1.0000000e+00 14 | 1.0000000e+00 15 | 1.0000000e+00 16 | 2.0000000e+00 17 | 3.0000000e+00 18 | 0.0000000e+00 19 | 0.0000000e+00 20 | 3.0000000e+00 21 | 1.0000000e+00 22 | 2.0000000e+00 23 | 0.0000000e+00 24 | 2.0000000e+00 25 | 1.0000000e+00 26 | 3.0000000e+00 27 | 1.0000000e+00 28 | 1.0000000e+00 29 | 2.0000000e+00 30 | 3.0000000e+00 31 | 0.0000000e+00 32 | 2.0000000e+00 33 | 3.0000000e+00 34 | 0.0000000e+00 35 | 3.0000000e+00 36 | 1.0000000e+00 37 | 1.0000000e+00 38 | 1.0000000e+00 39 | 2.0000000e+00 40 | 0.0000000e+00 41 | 2.0000000e+00 42 | 1.0000000e+00 43 | 3.0000000e+00 44 | 0.0000000e+00 45 | 3.0000000e+00 46 | 2.0000000e+00 47 | 2.0000000e+00 48 | 0.0000000e+00 49 | 2.0000000e+00 50 | 1.0000000e+00 51 | 3.0000000e+00 52 | 0.0000000e+00 53 | 0.0000000e+00 54 | 1.0000000e+00 55 | 0.0000000e+00 56 | 0.0000000e+00 57 | 0.0000000e+00 58 | 1.0000000e+00 59 | 0.0000000e+00 60 | 1.0000000e+00 61 | 0.0000000e+00 62 | 0.0000000e+00 63 | 2.0000000e+00 64 | 1.0000000e+00 65 | 1.0000000e+00 66 | 2.0000000e+00 67 | 1.0000000e+00 68 | 2.0000000e+00 69 | 1.0000000e+00 70 | 1.0000000e+00 71 | 3.0000000e+00 72 | 1.0000000e+00 73 | 1.0000000e+00 74 | 2.0000000e+00 75 | 1.0000000e+00 76 | 1.0000000e+00 77 | 1.0000000e+00 78 | 2.0000000e+00 79 | 2.0000000e+00 80 | 2.0000000e+00 81 | 3.0000000e+00 82 | 0.0000000e+00 83 | 3.0000000e+00 84 | 0.0000000e+00 85 | 2.0000000e+00 86 | 3.0000000e+00 87 | 0.0000000e+00 88 | 2.0000000e+00 89 | 1.0000000e+00 90 | 3.0000000e+00 91 | 2.0000000e+00 92 | 1.0000000e+00 93 | 2.0000000e+00 94 | 3.0000000e+00 95 | 0.0000000e+00 96 | 3.0000000e+00 97 | 3.0000000e+00 98 | 0.0000000e+00 99 | 0.0000000e+00 100 | 2.0000000e+00 101 | --------------------------------------------------------------------------------
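For quick experiments without the data/uestc_pgb files referenced in run.py (which are not part of the repository), the same workflow can be pointed at the bundled toy dataset. The sketch below is a minimal, untested adaptation of run.py, not part of the repository itself: it assumes the signals in toydata/data.txt are 2048 points long (the input length wdcnn.Net's classifier expects; otherwise resample_arr(data_arr, num=2048) can bring them to that length) and uses the four classes (0-3) visible in toydata/label.txt.

```python
# Sketch: run the WDCNN pipeline on the bundled toy data.
# Assumes 2048-point signals in toydata/data.txt and the 4 classes seen in toydata/label.txt.
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data_utils

import data_loader
import iter_utils
from models import wdcnn

data_arr = data_loader.load_data(r'toydata/data.txt')      # shape: (n_samples, 1, signal_length)
label_vec = data_loader.load_label(r'toydata/label.txt')   # labels 0-3 -> 4 classes

trainset, testset = data_loader.split_set(data_arr, label_vec, p=0.8)
train_loader = data_utils.DataLoader(trainset, batch_size=16, shuffle=True)
test_loader = data_utils.DataLoader(testset, batch_size=16, shuffle=False)

model = wdcnn.Net(1, 4)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), weight_decay=0.0001)

best_model, loss_curve = iter_utils.train(model, train_loader, criterion, optimizer,
                                          init_lr=0.001, decay_epoch=5, n_epoch=10, use_cuda=False)
print('Test accuracy: {:.4f}%'.format(100 * iter_utils.test(best_model, test_loader)))
```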