├── AGCA.png
├── LICENSE.md
├── AGCA.py
└── README.md

/AGCA.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/C1nDeRainBo0M/AGCA/HEAD/AGCA.png
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2023 C1nDeRainBo0M

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/AGCA.py:
--------------------------------------------------------------------------------
import torch
import torch.nn as nn
from torch.nn import init


class AGCA(nn.Module):
    """Adaptive Graph Channel Attention (AGCA) module."""

    def __init__(self, in_channel, ratio):
        super(AGCA, self).__init__()
        hide_channel = in_channel // ratio
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv1 = nn.Conv2d(in_channel, hide_channel, kernel_size=1, bias=False)
        self.softmax = nn.Softmax(dim=2)
        # A0 is a fixed identity adjacency matrix.
        # Choose to deploy A0 on GPU or CPU according to your needs.
        self.A0 = torch.eye(hide_channel).to('cuda')
        # self.A0 = torch.eye(hide_channel)
        # A2 is a learnable adjacency matrix, initialized to 1e-6.
        self.A2 = nn.Parameter(torch.zeros(hide_channel, hide_channel), requires_grad=True)
        init.constant_(self.A2, 1e-6)
        self.conv2 = nn.Conv1d(1, 1, kernel_size=1, bias=False)
        self.conv3 = nn.Conv1d(1, 1, kernel_size=1, bias=False)
        self.relu = nn.ReLU(inplace=True)
        self.conv4 = nn.Conv2d(hide_channel, in_channel, kernel_size=1, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Global average pooling and channel reduction: (B, C_in, H, W) -> (B, C, 1, 1)
        y = self.avg_pool(x)
        y = self.conv1(y)
        B, C, _, _ = y.size()
        # Treat the C reduced channels as graph vertices: (B, 1, C)
        y = y.flatten(2).transpose(1, 2)
        # A1 is the adaptive component estimated from the input features.
        A1 = self.softmax(self.conv2(y))
        A1 = A1.expand(B, C, C)
        # Combined adjacency matrix: identity * adaptive part + learnable part.
        A = (self.A0 * A1) + self.A2
        # Graph convolution over the channel vertices.
        y = torch.matmul(y, A)
        y = self.relu(self.conv3(y))
        y = y.transpose(1, 2).view(-1, C, 1, 1)
        # Restore the original channel count and produce per-channel attention weights.
        y = self.sigmoid(self.conv4(y))

        return x * y
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# AGCA: An Adaptive Graph Channel Attention Module for Steel Surface Defect Detection
This work has been accepted for publication in IEEE Transactions on Instrumentation and Measurement
(https://ieeexplore.ieee.org/document/10050536).

***AGCA.py*** is the PyTorch implementation of the AGCA attention module.

## Abstract
Surface defect detection is an important part of the steel production process. Recently, attention mechanisms have been widely used in steel surface defect detection to ensure product quality. However, existing attention modules cannot distinguish between steel surface images and natural images. Therefore, we propose an Adaptive Graph Channel Attention (AGCA) module that introduces graph convolution theory into channel attention. The AGCA module treats each channel as a feature vertex and represents the relationships among vertices with an adjacency matrix. Non-local operations are performed on the features by analyzing the graph constructed in AGCA, which significantly improves the feature representation capability. Like other attention modules, AGCA is lightweight and plug-and-play, so it can be easily embedded into defect detection networks. Experimental results on various backbone networks and datasets show that AGCA outperforms state-of-the-art methods.
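Below is a minimal usage sketch, not part of the original repository, showing how the AGCA block could be dropped into a convolutional stage. The wrapper block, layer sizes, and tensor shapes are illustrative assumptions; as written, ***AGCA.py*** places A0 on the GPU, so the sketch assumes a CUDA device is available (or switch to the commented CPU line in AGCA.py first).

```python
import torch
import torch.nn as nn

from AGCA import AGCA  # the module defined in AGCA.py above


class ConvBlockWithAGCA(nn.Module):
    """Illustrative conv block that applies AGCA channel attention (not from the original repo)."""

    def __init__(self, channels, ratio=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # `ratio` controls the channel reduction inside AGCA (hide_channel = channels // ratio).
        self.agca = AGCA(in_channel=channels, ratio=ratio)

    def forward(self, x):
        out = self.conv(x)
        out = self.agca(out)   # reweights channels; spatial size is unchanged
        return out + x         # residual connection (an assumption, not prescribed by the repo)


if __name__ == "__main__":
    # AGCA.py keeps A0 on 'cuda' by default, so this sketch assumes a GPU is available.
    device = torch.device("cuda")
    block = ConvBlockWithAGCA(channels=64, ratio=4).to(device)
    feats = torch.randn(2, 64, 32, 32, device=device)
    print(block(feats).shape)  # torch.Size([2, 64, 32, 32])
```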