├── LICENSE
├── README.md
├── ReconNet.ipynb
├── TestDataset.py
├── TrainDataset.py
├── docs
│   └── Kulkarni_ReconNet_Non-Iterative_Reconstruction_CVPR_2016_paper.pdf
├── images
│   ├── AdaptiveRconNet.gif
│   ├── all images
│   │   ├── BSDS200.zip
│   │   └── SR_training_datasets.zip
│   ├── image_reconstruction.png
│   ├── reconnet.png
│   ├── sample_result.png
│   ├── test1.png
│   └── test_sample.png
├── model.py
├── model_state.pth
└── phi
    ├── phi_0_01_1089.mat
    ├── phi_0_04_1089.mat
    ├── phi_0_10_1089.mat
    └── phi_0_25_1089.mat
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2020 Chinmay Rane
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Measurements [](https://twitter.com/intent/tweet?text=An%20Implementation%20of%20the%20research%20paper%20-%20ReconNet:%20Non%20Iterative%20Reconstruction%20of%20Images%20from%20Compressively%20Sensed%20Measurements.%0Aand%20here%27s%20the%20Github%20link%20-%3E&url=https://github.com/Chinmayrane16/ReconNet&hashtags=Image_Reconstruction,Non_Iterative,Compressive_Sensing,Pytorch,Deep_Learning)
2 |
3 |
4 |
5 | This repository is a PyTorch implementation of the research paper ReconNet.
6 |
7 | **ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Measurements**
8 | \- Kuldeep Kulkarni, Suhas Lohit, Pavan Turaga, Ronan Kerviche, Amit Ashok
9 | [[arXiv](https://arxiv.org/pdf/1601.06892.pdf)][[Github](https://github.com/KuldeepKulkarni/ReconNet)]
10 |
11 | ## ReconNet
12 |
13 | The paper introduces a non-iterative, extremely fast algorithm for reconstructing images from compressively sensed random measurements. The authors propose a CNN architecture that outputs an intermediate reconstruction, which is then passed through a denoiser to further improve quality. The algorithm is well suited to tasks where the goal is to identify the major contents of an image: it acquires very few measurements yet still reconstructs an image that preserves the properties of the scene. Moreover, it is computationally inexpensive due to its non-iterative nature and can be deployed in resource-constrained environments.
14 | The following image is taken from the research paper.
15 |
16 |
17 |
18 |
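To make "very few measurements" concrete: each 33x33 image block has 1089 pixels, and the network's input is a vector of `int(rate * 1089)` compressed measurements, matching the sampling matrices shipped in `phi/`. A small arithmetic sketch (not from the paper):

```python
# Each 33x33 block has 33*33 = 1089 pixels; at measurement rate r,
# ReconNet's input is a vector of int(r * 1089) compressed measurements
# (the phi_* matrices in phi/ correspond to rates 0.25, 0.10, 0.04, 0.01).
BLOCK_PIXELS = 33 * 33

def num_measurements(rate):
    return int(rate * BLOCK_PIXELS)

for rate in (0.25, 0.10, 0.04, 0.01):
    print(rate, num_measurements(rate))
# 0.25 -> 272, 0.10 -> 108, 0.04 -> 43, 0.01 -> 10
```

At a rate of 0.01 the network reconstructs a 1089-pixel block from only 10 numbers, which is why the method suits scenarios where only the major scene content must survive.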
19 | ## Compressive Sensing
20 |
21 | Compressive sensing is a technique widely used in signal processing to sample a signal at sub-Nyquist rates. It exploits the sparsity of the signal to reconstruct the original; the input signal is typically transformed to a sparse basis such as the DFT or DCT.
22 | A small number of random linear projections of the original signal (fewer than the signal size) are measured, and a reconstruction algorithm recovers the original signal. In this paper, the measurement matrix is constructed by generating a random Gaussian matrix and then orthonormalizing its rows to obtain a set of orthonormal vectors. An improvement of ReconNet ([AdaptiveReconNet](https://link.springer.com/chapter/10.1007/978-981-10-7302-1_34)) lets the network learn the measurement matrix itself; the idea is that, at the same measurement rate, an adaptive measurement matrix fits the dataset better than a random Gaussian one.
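The measurement-matrix construction described above can be sketched with NumPy (this is an illustrative assumption of the procedure; the repository ships precomputed matrices in `phi/` as `.mat` files):

```python
import numpy as np

n = 33 * 33            # block size: 1089 pixels
rate = 0.25            # measurement rate
m = int(rate * n)      # number of measurements: 272

rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))   # random Gaussian matrix

# Orthonormalize the rows: QR on A^T gives orthonormal columns,
# transposing back gives a measurement matrix with orthonormal rows.
Q, _ = np.linalg.qr(A.T)
phi = Q.T                          # shape (m, n)

# Compressive measurements of one vectorized 33x33 block:
x = rng.standard_normal(n)
y = phi @ x                        # length-m measurement vector
```

The datasets in `TestDataset.py` and `TrainDataset.py` apply exactly this `phi @ block` projection in `__getitem__`.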
23 |
24 | Following is the architecture of ReconNet taken from its paper.
25 |
26 |
27 |
28 |
29 | Following is the structure of Adaptive ReconNet taken from the paper [Adaptive measurement network for CS image reconstruction.](https://link.springer.com/chapter/10.1007/978-981-10-7302-1_34)
30 |
31 |
32 |
33 |
34 | ## Requirements
35 | * [python](https://www.python.org/downloads/) = 3.6
36 | * [pytorch](https://pytorch.org/) = 1.0
37 | * [opencv](https://opencv.org/) = 3.4.9
38 | * [pandas](https://pandas.pydata.org/) = 0.22.0
39 | * [numpy](https://www.numpy.org/) = 1.14.3
40 | * [CUDA](https://developer.nvidia.com/cuda-zone) (recommended version >= 8.0)
41 |
42 | ## Sample Results
43 |
44 | Original Image | Reconstructed Image
45 | :-------------------------:|:-------------------------:
46 |  | 
47 |
48 |
49 |
50 | ## Future Scope
51 | The paper suggests applying an off-the-shelf denoiser (BM3D) to the intermediate reconstruction to improve image quality. The denoising step is not implemented here; if you would like to add a BM3D implementation to this repository, feel free to submit a pull request. :smiley:
52 |
53 | ## References
54 | [ReconNet paper](https://arxiv.org/pdf/1601.06892.pdf)
55 | [ReconNet code (Caffe)](https://github.com/KuldeepKulkarni/ReconNet)
56 | [ReconNet code (TF)](https://github.com/kaushik333/Reconnet)
57 | [Adaptive ReconNet paper](https://arxiv.org/pdf/1710.01244.pdf)
58 | [Adaptive ReconNet code (TF)](https://github.com/yucicheung/AdaptiveReconNet)
59 | [Compressive Sensing](https://www.ece.iastate.edu/~namrata/EE527_Spring08/CompSens2.pdf)
60 |
--------------------------------------------------------------------------------
/TestDataset.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torchvision
3 | import numpy as np
4 | from torchvision import transforms
5 | from torch.utils.data import Dataset
6 |
7 |
8 | class TestDataset(Dataset):
9 | def __init__(self,image_blocks,mat,transform = None,phi=0.25):
10 | self.image_blocks = image_blocks
11 | self.transform = transform
12 | self.phi = phi
13 | self.mat = mat
14 |
15 | def __len__(self):
16 | return len(self.image_blocks)
17 |
18 | def __getitem__(self,idx):
19 | image_block = self.image_blocks[idx]
20 | label = image_block
21 | if self.transform is not None:
22 | image_block = self.transform(image_block)
23 | label = self.transform(label)
24 | image_block = image_block.view(33*33)
25 | label = label.view(33*33)
26 | image_block = image_block.double()
27 | label = label.double()
28 | with torch.no_grad():
29 | image_block = torch.matmul(self.mat,image_block)
30 |
31 | return image_block,label
--------------------------------------------------------------------------------
/TrainDataset.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import os
3 | import cv2
4 | import torchvision
5 | import numpy as np
6 | import matplotlib.pyplot as plt
7 | from torchvision import transforms
8 | from torch.utils.data import Dataset
9 |
10 | def img_to_blocks(imgs,path,stride=14,filter_size=33):
11 | images_dataset = []
12 | for img in imgs:
13 | image = plt.imread(os.path.join(path,img))
14 | # matplotlib loads images as RGB; convert to YCrCb and keep only Y (luminance)
15 | image = cv2.cvtColor(image, cv2.COLOR_RGB2YCR_CB)
16 | image = cv2.normalize(image, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
17 | image = image[:,:,0]
18 | h,w = image.shape
19 | h_n = ((h - filter_size) // stride) + 1
20 | w_n = ((w - filter_size) // stride) + 1
21 |
22 | for i in range(h_n):
23 | for j in range(w_n):
24 | blocks = image[i*stride:(i*stride)+filter_size, j*stride:(j*stride)+filter_size]
25 | images_dataset.append(blocks)
26 |
27 | return np.array(images_dataset)
28 |
29 |
30 | class TrainDataset(Dataset):
31 | def __init__(self,data_dir,mat,transform = None,phi=0.25):
32 | self.data_dir = os.listdir(data_dir)
33 | self.transform = transform
34 | self.image_blocks = img_to_blocks(self.data_dir,data_dir)
35 | self.phi = phi
36 | self.mat = mat
37 |
38 | def __len__(self):
39 | return len(self.image_blocks)
40 |
41 | def __getitem__(self,idx):
42 | image_block = self.image_blocks[idx]
43 | label = image_block
44 | if self.transform is not None:
45 | image_block = self.transform(image_block)
46 | label = self.transform(label)
47 | image_block = image_block.view(33*33)
48 | label = label.view(33*33)
49 | image_block = image_block.double()
50 | label = label.double()
51 | with torch.no_grad():
52 | image_block = torch.matmul(self.mat,image_block)
53 | return image_block,label
--------------------------------------------------------------------------------
/docs/Kulkarni_ReconNet_Non-Iterative_Reconstruction_CVPR_2016_paper.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/docs/Kulkarni_ReconNet_Non-Iterative_Reconstruction_CVPR_2016_paper.pdf
--------------------------------------------------------------------------------
/images/AdaptiveRconNet.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/images/AdaptiveRconNet.gif
--------------------------------------------------------------------------------
/images/all images/BSDS200.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/images/all images/BSDS200.zip
--------------------------------------------------------------------------------
/images/all images/SR_training_datasets.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/images/all images/SR_training_datasets.zip
--------------------------------------------------------------------------------
/images/image_reconstruction.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/images/image_reconstruction.png
--------------------------------------------------------------------------------
/images/reconnet.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/images/reconnet.png
--------------------------------------------------------------------------------
/images/sample_result.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/images/sample_result.png
--------------------------------------------------------------------------------
/images/test1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/images/test1.png
--------------------------------------------------------------------------------
/images/test_sample.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/images/test_sample.png
--------------------------------------------------------------------------------
/model.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torchvision
3 | import torch.nn as nn
4 | import torch.nn.functional as F
5 | from torch.autograd import Variable
6 |
7 |
8 |
9 | class ReconNet(nn.Module):
10 | def __init__(self,measurement_rate=0.25):
11 | super(ReconNet,self).__init__()
12 |
13 | self.measurement_rate = measurement_rate
14 | self.fc1 = nn.Linear(int(self.measurement_rate*1089),1089)
15 | nn.init.normal_(self.fc1.weight, mean=0, std=0.1)
16 | self.conv1 = nn.Conv2d(1,64,11,1,padding=5)
17 | nn.init.normal_(self.conv1.weight, mean=0, std=0.1)
18 | self.conv2 = nn.Conv2d(64,32,1,1,padding=0)
19 | nn.init.normal_(self.conv2.weight, mean=0, std=0.1)
20 | self.conv3 = nn.Conv2d(32,1,7,1,padding=3)
21 | nn.init.normal_(self.conv3.weight, mean=0, std=0.1)
22 | self.conv4 = nn.Conv2d(1,64,11,1,padding=5)
23 | nn.init.normal_(self.conv4.weight, mean=0, std=0.1)
24 | self.conv5 = nn.Conv2d(64,32,1,1,padding=0)
25 | nn.init.normal_(self.conv5.weight, mean=0, std=0.1)
26 | self.conv6 = nn.Conv2d(32,1,7,1,padding=3)
27 | nn.init.normal_(self.conv6.weight, mean=0, std=0.1)
28 |
29 | def forward(self,x):
30 | x = F.relu(self.fc1(x))
31 | x = x.view(-1,33,33)
32 | x = x.unsqueeze(1)
33 | x = F.relu(self.conv1(x))
34 | x = F.relu(self.conv2(x))
35 | x = F.relu(self.conv3(x))
36 | x = F.relu(self.conv4(x))
37 | x = F.relu(self.conv5(x))
38 | x = self.conv6(x)
39 |
40 | return x
--------------------------------------------------------------------------------
/model_state.pth:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/model_state.pth
--------------------------------------------------------------------------------
/phi/phi_0_01_1089.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/phi/phi_0_01_1089.mat
--------------------------------------------------------------------------------
/phi/phi_0_04_1089.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/phi/phi_0_04_1089.mat
--------------------------------------------------------------------------------
/phi/phi_0_10_1089.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/phi/phi_0_10_1089.mat
--------------------------------------------------------------------------------
/phi/phi_0_25_1089.mat:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Chinmayrane16/ReconNet-PyTorch/702586ed9a3d466b65c8ca801243a0f32382e299/phi/phi_0_25_1089.mat
--------------------------------------------------------------------------------