├── Fig
│   ├── Fig2.png
│   └── Tab2.png
└── README.md

/Fig/Fig2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TIDESlab/ITS/HEAD/Fig/Fig2.png
--------------------------------------------------------------------------------
/Fig/Tab2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TIDESlab/ITS/HEAD/Fig/Tab2.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Boosting of Implicit Neural Representation-based Image Denoiser

[Zipei Yan](https://yanzipei.github.io/), [Zhengji Liu](https://epicwatermelon.github.io/), [Jizhou Li](http://jizhou.li/)

This paper is available on [arXiv](https://arxiv.org/abs/2401.01548) and [IEEE Xplore](https://ieeexplore.ieee.org/document/10447327).

## Abstract

Implicit Neural Representation (INR) has emerged as an effective method for unsupervised image denoising. However, INR models are typically overparameterized; consequently, they are prone to overfitting during learning, which yields suboptimal or even noisy results. To tackle this problem, we propose a general recipe for regularizing INR models in image denoising. In detail, we propose to iteratively substitute the supervision signal with the mean value derived from both the prediction and the supervision signal during the learning process. We theoretically prove that such a simple iterative substitution can gradually enhance the signal-to-noise ratio of the supervision signal, thereby benefiting INR models during learning. Our experimental results demonstrate that the proposed approach effectively regularizes INR models, relieving overfitting and boosting image denoising performance.

## How to use ITS to boost an INR-based image denoiser?

Since ITS is agnostic to the INR model, you can plug it into your own INR-based denoiser.

### Iterative Substitution (ITS) for Renewing the Supervision Signal

For a given set of iterations $`\{N, 2N, \dots, kN\}`$, where $N$ is a hyper-parameter, ITS renews the supervision signal with the mean value derived from both the prediction and the supervision signal, which is formulated as follows:

$$
\boldsymbol{\hat{y}}^{kN+1} = \frac{\boldsymbol{y} + \boldsymbol{\hat{x}}^{kN}}{2},
$$

where $\boldsymbol{\hat{y}}^{kN+1}$ is the renewed supervision signal, $\boldsymbol{y}$ is the original noisy observation, and $\boldsymbol{\hat{x}}^{kN}$ is the denoised result from the INR model at iteration $kN$.
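For intuition, the following is a minimal toy sketch (not from the paper): it assumes additive Gaussian noise on a 1-D signal and stands in a partially denoised signal for the INR prediction, then checks that one renewal step brings the supervision signal closer to the clean signal than the original noisy observation.

```python
import torch

# Toy illustration (not from the paper) of the ITS renewal step.
# Assumptions: additive Gaussian noise, and a stand-in "prediction" that is
# closer to the clean signal than the noisy observation is.
torch.manual_seed(0)

x = torch.linspace(0, 2 * torch.pi, 256).sin()   # clean toy signal
y = x + 0.3 * torch.randn_like(x)                # noisy observation
x_hat = 0.5 * (x + y)                            # stand-in for the INR prediction

y_renewed = (y + x_hat) / 2.                     # ITS renewal step

print("MSE(y, x)        :", (y - x).pow(2).mean().item())
print("MSE(renewed y, x):", (y_renewed - x).pow(2).mean().item())  # smaller
```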
### Implementation

```python
import torch


"""
num_iters: number of training iterations
inr_model: INR model
optimizer: optimizer over the INR model's parameters
z: coordinates
y: noisy observation
N: renew the supervision signal every N iterations
"""

y_hat = torch.clone(y)

for i in range(1, num_iters + 1):
    x_hat = inr_model(z)

    loss = ((x_hat - y_hat) ** 2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # renew the supervision signal at every N-th iteration
    if i % N == 0:
        y_hat = (y + x_hat.detach()) / 2.
```

## Results

![image](./Fig/Fig2.png)

![image](./Fig/Tab2.png)

More results can be found in our paper.

## Repos for INR-based image denoisers

DIP: https://github.com/DmitryUlyanov/deep-image-prior

SIREN: https://github.com/LeoZDong/siren_denoise

LINR: https://github.com/WenTXuL/LINR/tree/main

WIRE: https://github.com/vishwa91/wire/tree/main

ADMM-DIPTV: https://github.com/sedaboni/ADMM-DIPTV

DeepRED: https://github.com/GaryMataev/DeepRED

## Citation

If this work is useful for your research, please kindly cite it:
```
@inproceedings{yan2024its,
  title={Boosting of Implicit Neural Representation-based Image Denoiser},
  author={Yan, Zipei and Liu, Zhengji and Li, Jizhou},
  booktitle={ICASSP},
  year={2024}
}
```

## Contact

Please contact: lijz AT ieee DOT org
--------------------------------------------------------------------------------