├── .gitignore
├── LICENSE
├── LR
│   ├── baboon.png
│   └── comic.png
├── QA.md
├── README.md
├── RRDBNet_arch.py
├── figures
│   ├── 102061.gif
│   ├── 43074.gif
│   ├── 81.gif
│   ├── BN_artifacts.jpg
│   ├── RRDB.png
│   ├── abalation_study.png
│   ├── architecture.jpg
│   ├── baboon.jpg
│   ├── net_interp.jpg
│   ├── patch_a.png
│   ├── patch_b.png
│   ├── qualitative_cmp_01.jpg
│   ├── qualitative_cmp_02.jpg
│   ├── qualitative_cmp_03.jpg
│   ├── qualitative_cmp_04.jpg
│   ├── train_deeper_neta.png
│   └── train_deeper_netb.png
├── models
│   └── README.md
├── net_interp.py
├── results
│   └── baboon_ESRGAN.png
├── test.py
└── transer_RRDB_models.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # folder
2 | .vscode
3 |
4 | # file type
5 | *.svg
6 | *.pyc
7 | *.pth
8 | *.t7
9 | *.caffemodel
10 | *.mat
11 | *.npy
12 |
13 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/LR/baboon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/LR/baboon.png
--------------------------------------------------------------------------------
/LR/comic.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/LR/comic.png
--------------------------------------------------------------------------------
/QA.md:
--------------------------------------------------------------------------------
1 | # Frequently Asked Questions
2 |
3 | ### 1. How to reproduce your results in the [PIRM18-SR Challenge](https://www.pirm2018.org/PIRM-SR.html) (with low perceptual index)?
4 |
5 | First, the ESRGAN model released on GitHub (`RRDB_ESRGAN_x4.pth`) is **different** from the model we submitted to the competition.
6 | We found that a lower perceptual index does not always guarantee better visual quality.
7 | The aims of the competition and of our ESRGAN work are slightly different:
8 | we think the competition targets a lower perceptual index, while our ESRGAN work targets better visual quality.
9 | > More analyses can be found in Sec. 4.1 and Sec. 5 of the [PIRM18-SR Challenge report](https://arxiv.org/pdf/1809.07517.pdf).
10 | > It points out that the PI (perceptual index) is well correlated with human-opinion scores on a coarse scale, but it is not always well correlated with these scores on a finer scale. This highlights the urgent need for better perceptual quality metrics.
11 |
12 | Therefore, in the PIRM18-SR Challenge, we used several tricks to obtain the best perceptual index (see Section 4.5 in the [paper](https://arxiv.org/abs/1809.00219)).
13 |
14 | Here, we provide the models and code used in the competition, which are able to reproduce the results on the `PIRM test dataset` (we use MATLAB 2016b/2017a):
15 |
16 | | Group | Perceptual index | RMSE |
17 | | ------------- |:-------------:| -----:|
18 | | SuperSR | 1.978 | 15.30 |
19 |
20 | > 1. Download the model and code from [GoogleDrive](https://drive.google.com/file/d/1l0gBRMqhVLpL_-7R7aN-q-3hnv5ADFSM/view?usp=sharing)
21 | > 2. Put LR input images in the `LR` folder
22 | > 3. Run `python test.py`
23 | > 4. Run `main_reverse_filter.m` in MATLAB as a post-processing step
24 | > 5. The results on my computer are: Perceptual index: **1.9777** and RMSE: **15.304**
25 |
26 |
27 | ### 2. How do you get the perceptual index in your ESRGAN paper?
28 | In our paper, we provide the perceptual index in two places.
29 |
30 | 1) In Fig. 2, the perceptual index on the PIRM self-validation dataset is obtained with the **model we submitted to the competition**,
31 | since the purpose of this figure is to show the perception-distortion plane. We also use the same post-processing here as in the competition.
32 |
33 | 2) In Fig. 7, the perceptual indexes are provided as references; they are computed on the outputs of the released ESRGAN model `RRDB_ESRGAN_x4.pth` on GitHub.
34 | Also, there is **no** post-processing when testing the ESRGAN model, for better visual quality.
35 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## ESRGAN (Enhanced SRGAN) [:rocket: [BasicSR](https://github.com/xinntao/BasicSR)] [[Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN)]
2 |
3 | :sparkles: **New Updates.**
4 |
5 | We have extended ESRGAN to [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN), which is a **more practical algorithm for real-world image restoration**. For example, it can also remove annoying JPEG compression artifacts. We recommend giving it a try :smiley:
6 |
7 | In the [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) repo,
8 |
9 | - You can still use the original ESRGAN model or your re-trained ESRGAN model. [The model zoo in Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN#european_castle-model-zoo).
10 | - We provide a more convenient inference script, which supports 1) **tile** inference; 2) images with an **alpha channel**; 3) **gray** images; 4) **16-bit** images.
11 | - We also provide a **Windows executable file** `RealESRGAN-ncnn-vulkan` for easier use without setting up a Python environment. This executable file also includes the original ESRGAN model.
12 | - The full training codes are also released in the [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) repo.
13 |
14 | You are welcome to open issues or discussions in the [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) repo.
15 |
16 | - If you have any questions, you can open an issue in the [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) repo.
17 | - If you have any good ideas or feature requests, please open an issue/discussion in the [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) repo to let me know.
18 | - If you have images that Real-ESRGAN fails to restore well, please also open an issue/discussion in the [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) repo. I will record them (but I cannot guarantee to resolve every case 😛).
19 |
20 | Here are some examples for Real-ESRGAN:
21 |
22 |
23 |
24 |
25 | :book: Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data
26 |
27 | > [[Paper](https://arxiv.org/abs/2107.10833)]
28 | > [Xintao Wang](https://xinntao.github.io/), Liangbin Xie, [Chao Dong](https://scholar.google.com.hk/citations?user=OSDCB0UAAAAJ), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en)
29 | > Applied Research Center (ARC), Tencent PCG
30 | > Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
31 |
32 | -----
33 |
34 | As some repos may depend on this ESRGAN repo, we will not modify it (especially the code).
35 |
36 | The following is the original README:
37 |
38 | #### The training code is in :rocket: [BasicSR](https://github.com/xinntao/BasicSR). This repo only provides simple testing code, pretrained models and the network interpolation demo.
39 |
40 | [BasicSR](https://github.com/xinntao/BasicSR) is an **open-source** image and video super-resolution toolbox based on PyTorch (it will be extended to more restoration tasks in the future).
41 | It includes methods such as **EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR**, etc. It now also supports **StyleGAN2**.
42 |
43 | ### Enhanced Super-Resolution Generative Adversarial Networks
44 | By Xintao Wang, [Ke Yu](https://yuke93.github.io/), Shixiang Wu, [Jinjin Gu](http://www.jasongt.com/), Yihao Liu, [Chao Dong](https://scholar.google.com.hk/citations?user=OSDCB0UAAAAJ&hl=en), [Yu Qiao](http://mmlab.siat.ac.cn/yuqiao/), [Chen Change Loy](http://personal.ie.cuhk.edu.hk/~ccloy/)
45 |
46 | We won first place in the [PIRM2018-SR competition](https://www.pirm2018.org/PIRM-SR.html) (Region 3) and achieved the best perceptual index.
47 | The paper is accepted to [ECCV2018 PIRM Workshop](https://pirm2018.org/).
48 |
49 | :triangular_flag_on_post: Add [Frequently Asked Questions](https://github.com/xinntao/ESRGAN/blob/master/QA.md).
50 |
51 | > For instance,
52 | > 1. How to reproduce your results in the PIRM18-SR Challenge (with low perceptual index)?
53 | > 2. How do you get the perceptual index in your ESRGAN paper?
54 |
55 | #### BibTeX
56 |
57 | @InProceedings{wang2018esrgan,
58 | author = {Wang, Xintao and Yu, Ke and Wu, Shixiang and Gu, Jinjin and Liu, Yihao and Dong, Chao and Qiao, Yu and Loy, Chen Change},
59 | title = {ESRGAN: Enhanced super-resolution generative adversarial networks},
60 | booktitle = {The European Conference on Computer Vision Workshops (ECCVW)},
61 | month = {September},
62 | year = {2018}
63 | }
64 |
65 |
66 |
67 |
68 |
69 | The **RRDB_PSNR** PSNR-oriented model, trained on the DF2K dataset (a merge of [DIV2K](https://data.vision.ee.ethz.ch/cvl/DIV2K/) and [Flickr2K](http://cv.snu.ac.kr/research/EDSR/Flickr2K.tar), proposed in [EDSR](https://github.com/LimBee/NTIRE2017)), also achieves high PSNR performance (the table below reports PSNR/SSIM).
70 |
71 | | Method | Training dataset | Set5 | Set14 | BSD100 | Urban100 | Manga109 |
72 | |:---:|:---:|:---:|:---:|:---:|:---:|:---:|
73 | | [SRCNN](http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html)| 291| 30.48/0.8628 |27.50/0.7513|26.90/0.7101|24.52/0.7221|27.58/0.8555|
74 | | [EDSR](https://github.com/thstkdgus35/EDSR-PyTorch) | DIV2K | 32.46/0.8968 | 28.80/0.7876 | 27.71/0.7420 | 26.64/0.8033 | 31.02/0.9148 |
75 | | [RCAN](https://github.com/yulunzhang/RCAN) | DIV2K | 32.63/0.9002 | 28.87/0.7889 | 27.77/0.7436 | 26.82/ 0.8087| 31.22/ 0.9173|
76 | |RRDB(ours)| DF2K| **32.73/0.9011** |**28.99/0.7917** |**27.85/0.7455** |**27.03/0.8153** |**31.66/0.9196**|
77 |
78 | ## Quick Test
79 | #### Dependencies
80 | - Python 3
81 | - [PyTorch >= 1.0](https://pytorch.org/) (CUDA version >= 7.5 if installing with CUDA. [More details](https://pytorch.org/get-started/previous-versions/))
82 | - Python packages: `pip install numpy opencv-python`
83 |
84 | ### Test models
85 | 1. Clone this github repo.
86 | ```
87 | git clone https://github.com/xinntao/ESRGAN
88 | cd ESRGAN
89 | ```
90 | 2. Place your own **low-resolution images** in the `./LR` folder. (There are two sample images: baboon and comic.)
91 | 3. Download pretrained models from [Google Drive](https://drive.google.com/drive/u/0/folders/17VYV_SoZZesU6mbxz2dMAIccSSlqLecY) or [Baidu Drive](https://pan.baidu.com/s/1-Lh6ma-wXzfH8NqeBtPaFQ). Place the models in `./models`. We provide two models with high perceptual quality and high PSNR performance (see [model list](https://github.com/xinntao/ESRGAN/tree/master/models)).
92 | 4. Run the test. We provide the ESRGAN model and the RRDB_PSNR model; you can configure which one to use in `test.py`.
93 | ```
94 | python test.py
95 | ```
96 | 5. The results are in `./results` folder.
97 | ### Network interpolation demo
98 | You can interpolate the RRDB_ESRGAN and RRDB_PSNR models with alpha in [0, 1].
99 |
100 | 1. Run `python net_interp.py 0.8`, where *0.8* is the interpolation parameter and you can change it to any value in [0,1].
101 | 2. Run `python test.py models/interp_08.pth`, where *models/interp_08.pth* is the model path.
102 |
103 |
104 |
105 |
106 |
107 | ## Perceptual-driven SR Results
108 |
109 | You can download all the results from [Google Drive](https://drive.google.com/drive/folders/1iaM-c6EgT1FNoJAOKmDrK7YhEhtlKcLx?usp=sharing). (:heavy_check_mark: included; :heavy_minus_sign: not included; :o: TODO)
110 |
111 | HR images can be downloaded from [BasicSR-Datasets](https://github.com/xinntao/BasicSR#datasets).
112 |
113 | | Datasets |LR | [*ESRGAN*](https://arxiv.org/abs/1809.00219) | [SRGAN](https://arxiv.org/abs/1609.04802) | [EnhanceNet](http://openaccess.thecvf.com/content_ICCV_2017/papers/Sajjadi_EnhanceNet_Single_Image_ICCV_2017_paper.pdf) | [CX](https://arxiv.org/abs/1803.04626) |
114 | |:---:|:---:|:---:|:---:|:---:|:---:|
115 | | Set5 |:heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:| :o: |
116 | | Set14 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:| :o: |
117 | | BSDS100 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:| :o: |
118 | | [PIRM](https://pirm.github.io/) (val, test) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark:| :heavy_check_mark: |
119 | | [OST300](https://arxiv.org/pdf/1804.02815.pdf) |:heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark:| :o: |
120 | | urban100 | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark:| :o: |
121 | | [DIV2K](https://data.vision.ee.ethz.ch/cvl/DIV2K/) (val, test) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark:| :o: |
122 |
123 | ## ESRGAN
124 | We improve [SRGAN](https://arxiv.org/abs/1609.04802) in three aspects:
125 | 1. adopt a deeper model using Residual-in-Residual Dense Block (RRDB) without batch normalization layers.
126 | 2. employ [Relativistic average GAN](https://ajolicoeur.wordpress.com/relativisticgan/) instead of the vanilla GAN (a loss sketch follows this list).
127 | 3. improve the perceptual loss by using the features before activation.
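
The relativistic average GAN loss (point 2) and the before-activation perceptual loss (point 3) are only used during training, which lives in [BasicSR](https://github.com/xinntao/BasicSR) rather than in this repo. For point 3, the perceptual loss is simply computed on VGG features taken before the activation layer instead of after it. For point 2, a minimal, illustrative sketch (assuming a discriminator that outputs raw logits; names are placeholders, not this repo's code):

```python
# Minimal sketch of the relativistic average GAN (RaGAN) losses; illustrative only,
# the actual training implementation is in BasicSR.
import torch
import torch.nn.functional as F

def ragan_d_loss(real_logits, fake_logits):
    # D_Ra(x_r, x_f) = sigmoid(C(x_r) - E[C(x_f)]): real images should look
    # "more realistic" than the average fake, and vice versa.
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.zeros_like(fake_logits))
    return (loss_real + loss_fake) / 2

def ragan_g_loss(real_logits, fake_logits):
    # Symmetric form for the generator (real_logits would typically be detached here).
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.zeros_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.ones_like(fake_logits))
    return (loss_real + loss_fake) / 2
```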
128 |
129 | In contrast to SRGAN, which claimed that **deeper models are increasingly difficult to train**, our deeper ESRGAN model shows superior performance and is easy to train.
130 |
131 |
132 |
133 |
134 |
135 |
136 |
137 |
138 | ## Network Interpolation
139 | We propose the **network interpolation strategy** to balance visual quality and PSNR.
140 |
141 |
142 |
143 |
144 |
145 | We show a smooth animation with the interpolation parameter changing from 0 to 1.
146 | Interestingly, the network interpolation strategy provides smooth control between the RRDB_PSNR model and the fine-tuned ESRGAN model.
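
Concretely, the interpolation is done parameter-wise between the two trained networks; the sketch below mirrors what `net_interp.py` in this repo does (alpha = 0.8 shown as an example):

```python
import torch
from collections import OrderedDict

alpha = 0.8  # 0 -> pure RRDB_PSNR, 1 -> pure ESRGAN
net_PSNR = torch.load('./models/RRDB_PSNR_x4.pth')
net_ESRGAN = torch.load('./models/RRDB_ESRGAN_x4.pth')

# theta_interp = (1 - alpha) * theta_PSNR + alpha * theta_ESRGAN, for every parameter
net_interp = OrderedDict(
    (k, (1 - alpha) * v + alpha * net_ESRGAN[k]) for k, v in net_PSNR.items())
torch.save(net_interp, './models/interp_08.pth')
```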
147 |
148 |
149 |
150 |    
151 |
152 |
153 |
154 | ## Qualitative Results
155 | PSNR (evaluated on the Y channel) and the perceptual index used in the PIRM-SR challenge are also provided for reference.
156 |
157 |
158 |
159 |
160 |
161 |
162 |
163 |
164 |
165 |
166 |
167 |
168 |
169 |
170 | ## Ablation Study
171 | Overall visual comparisons showing the effect of each component in
172 | ESRGAN. Each column represents a model, with its configuration at the top.
173 | The red sign indicates the main improvement compared with the previous model.
174 |
175 |
176 |
177 |
178 | ## BN artifacts
179 | We empirically observe that BN layers tend to introduce artifacts. These artifacts,
180 | which we call BN artifacts, occasionally appear across iterations and different settings,
181 | violating the need for stable performance during training. We find that
182 | the network depth, the BN position, the training dataset and the training loss
183 | all have an impact on the occurrence of BN artifacts.
184 |
185 |
186 |
187 |
188 | ## Useful techniques to train a very deep network
189 | We find that residual scaling and smaller initialization can help to train very deep networks. More details are in the supplementary file of our [paper](https://arxiv.org/abs/1809.00219).
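
A minimal sketch of these two tricks (illustrative only; the RRDB blocks in `RRDBNet_arch.py` use a residual scaling factor of 0.2, and the commented-out initialization there scales the weights by 0.1):

```python
import torch.nn as nn
import torch.nn.init as init

class ScaledResidualBlock(nn.Module):
    """Toy block illustrating residual scaling: the residual is multiplied by beta before the skip addition."""
    def __init__(self, nf=64, beta=0.2):
        super().__init__()
        self.conv1 = nn.Conv2d(nf, nf, 3, 1, 1)
        self.conv2 = nn.Conv2d(nf, nf, 3, 1, 1)
        self.lrelu = nn.LeakyReLU(0.2, inplace=True)
        self.beta = beta

    def forward(self, x):
        res = self.conv2(self.lrelu(self.conv1(x)))
        return x + self.beta * res  # scale the residual, then add the identity path

def smaller_init(module, scale=0.1):
    """Kaiming initialization scaled down by `scale` (smaller initialization)."""
    for m in module.modules():
        if isinstance(m, nn.Conv2d):
            init.kaiming_normal_(m.weight, a=0, mode='fan_in')
            m.weight.data *= scale
            if m.bias is not None:
                m.bias.data.zero_()
```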
190 |
191 |
192 |
193 |
194 |
195 |
196 | ## The influence of training patch size
197 | We observe that training a deeper network benefits from a larger patch size. Moreover, the deeper model achieves more improvement (∼0.12 dB) than the shallower one (∼0.04 dB), since its larger capacity can take full advantage of
198 | the larger training patch size. (Evaluated on the Set5 dataset with RGB channels.)
199 |
200 |
201 |
202 |
203 |
--------------------------------------------------------------------------------
/RRDBNet_arch.py:
--------------------------------------------------------------------------------
1 | import functools
2 | import torch
3 | import torch.nn as nn
4 | import torch.nn.functional as F
5 |
6 |
7 | def make_layer(block, n_layers):
8 | layers = []
9 | for _ in range(n_layers):
10 | layers.append(block())
11 | return nn.Sequential(*layers)
12 |
13 |
14 | class ResidualDenseBlock_5C(nn.Module):
15 | def __init__(self, nf=64, gc=32, bias=True):
16 | super(ResidualDenseBlock_5C, self).__init__()
17 | # gc: growth channel, i.e. intermediate channels
18 | self.conv1 = nn.Conv2d(nf, gc, 3, 1, 1, bias=bias)
19 | self.conv2 = nn.Conv2d(nf + gc, gc, 3, 1, 1, bias=bias)
20 | self.conv3 = nn.Conv2d(nf + 2 * gc, gc, 3, 1, 1, bias=bias)
21 | self.conv4 = nn.Conv2d(nf + 3 * gc, gc, 3, 1, 1, bias=bias)
22 | self.conv5 = nn.Conv2d(nf + 4 * gc, nf, 3, 1, 1, bias=bias)
23 | self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
24 |
25 | # initialization
26 | # mutil.initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1)
27 |
28 | def forward(self, x):
29 | x1 = self.lrelu(self.conv1(x))
30 | x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
31 | x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
32 | x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
33 | x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
34 | return x5 * 0.2 + x
35 |
36 |
37 | class RRDB(nn.Module):
38 | '''Residual in Residual Dense Block'''
39 |
40 | def __init__(self, nf, gc=32):
41 | super(RRDB, self).__init__()
42 | self.RDB1 = ResidualDenseBlock_5C(nf, gc)
43 | self.RDB2 = ResidualDenseBlock_5C(nf, gc)
44 | self.RDB3 = ResidualDenseBlock_5C(nf, gc)
45 |
46 | def forward(self, x):
47 | out = self.RDB1(x)
48 | out = self.RDB2(out)
49 | out = self.RDB3(out)
50 | return out * 0.2 + x
51 |
52 |
53 | class RRDBNet(nn.Module):
54 | def __init__(self, in_nc, out_nc, nf, nb, gc=32):
55 | super(RRDBNet, self).__init__()
56 | RRDB_block_f = functools.partial(RRDB, nf=nf, gc=gc)
57 |
58 | self.conv_first = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
59 | self.RRDB_trunk = make_layer(RRDB_block_f, nb)
60 | self.trunk_conv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
61 | #### upsampling
62 | self.upconv1 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
63 | self.upconv2 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
64 | self.HRconv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
65 | self.conv_last = nn.Conv2d(nf, out_nc, 3, 1, 1, bias=True)
66 |
67 | self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
68 |
69 | def forward(self, x):
70 | fea = self.conv_first(x)
71 | trunk = self.trunk_conv(self.RRDB_trunk(fea))
72 | fea = fea + trunk
73 |
74 |         fea = self.lrelu(self.upconv1(F.interpolate(fea, scale_factor=2, mode='nearest')))  # 2x nearest-neighbor upsample + conv
75 |         fea = self.lrelu(self.upconv2(F.interpolate(fea, scale_factor=2, mode='nearest')))  # second 2x step -> overall x4 upsampling
76 | out = self.conv_last(self.lrelu(self.HRconv(fea)))
77 |
78 | return out
79 |
--------------------------------------------------------------------------------
/figures/102061.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/102061.gif
--------------------------------------------------------------------------------
/figures/43074.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/43074.gif
--------------------------------------------------------------------------------
/figures/81.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/81.gif
--------------------------------------------------------------------------------
/figures/BN_artifacts.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/BN_artifacts.jpg
--------------------------------------------------------------------------------
/figures/RRDB.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/RRDB.png
--------------------------------------------------------------------------------
/figures/abalation_study.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/abalation_study.png
--------------------------------------------------------------------------------
/figures/architecture.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/architecture.jpg
--------------------------------------------------------------------------------
/figures/baboon.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/baboon.jpg
--------------------------------------------------------------------------------
/figures/net_interp.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/net_interp.jpg
--------------------------------------------------------------------------------
/figures/patch_a.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/patch_a.png
--------------------------------------------------------------------------------
/figures/patch_b.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/patch_b.png
--------------------------------------------------------------------------------
/figures/qualitative_cmp_01.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/qualitative_cmp_01.jpg
--------------------------------------------------------------------------------
/figures/qualitative_cmp_02.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/qualitative_cmp_02.jpg
--------------------------------------------------------------------------------
/figures/qualitative_cmp_03.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/qualitative_cmp_03.jpg
--------------------------------------------------------------------------------
/figures/qualitative_cmp_04.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/qualitative_cmp_04.jpg
--------------------------------------------------------------------------------
/figures/train_deeper_neta.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/train_deeper_neta.png
--------------------------------------------------------------------------------
/figures/train_deeper_netb.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/figures/train_deeper_netb.png
--------------------------------------------------------------------------------
/models/README.md:
--------------------------------------------------------------------------------
1 | ## Place pretrained models here.
2 |
3 | We provide two pretrained models:
4 |
5 | 1. `RRDB_ESRGAN_x4.pth`: the final ESRGAN model we used in our [paper](https://arxiv.org/abs/1809.00219).
6 | 2. `RRDB_PSNR_x4.pth`: the PSNR-oriented model with **high PSNR performance**.
7 |
8 | *Note that* the pretrained models are trained under the `MATLAB bicubic` kernel.
9 | If the downsampling kernel differs from that, the results may contain artifacts.
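
If you need to create LR inputs yourself, the sketch below is a rough Python approximation (file names are placeholders). Note that OpenCV's bicubic resize is not identical to MATLAB's `imresize`, which applies antialiasing when downscaling, so minor differences are expected:

```python
import cv2

img = cv2.imread('path/to/HR_image.png', cv2.IMREAD_COLOR)  # placeholder HR image
h, w = img.shape[:2]
lr = cv2.resize(img, (w // 4, h // 4), interpolation=cv2.INTER_CUBIC)  # x4 bicubic downscale
cv2.imwrite('LR/HR_image_x4.png', lr)
```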
10 |
--------------------------------------------------------------------------------
/net_interp.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import torch
3 | from collections import OrderedDict
4 |
5 | alpha = float(sys.argv[1])
6 |
7 | net_PSNR_path = './models/RRDB_PSNR_x4.pth'
8 | net_ESRGAN_path = './models/RRDB_ESRGAN_x4.pth'
9 | net_interp_path = './models/interp_{:02d}.pth'.format(int(alpha*10))
10 |
11 | net_PSNR = torch.load(net_PSNR_path)
12 | net_ESRGAN = torch.load(net_ESRGAN_path)
13 | net_interp = OrderedDict()
14 |
15 | print('Interpolating with alpha = ', alpha)
16 |
17 | for k, v_PSNR in net_PSNR.items():
18 | v_ESRGAN = net_ESRGAN[k]
19 | net_interp[k] = (1 - alpha) * v_PSNR + alpha * v_ESRGAN
20 |
21 | torch.save(net_interp, net_interp_path)
22 |
--------------------------------------------------------------------------------
/results/baboon_ESRGAN.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/xinntao/ESRGAN/73e9b634cf987f5996ac2dd33f4050922398a921/results/baboon_ESRGAN.png
--------------------------------------------------------------------------------
/test.py:
--------------------------------------------------------------------------------
1 | import os.path as osp
2 | import glob
3 | import cv2
4 | import numpy as np
5 | import torch
6 | import RRDBNet_arch as arch
7 |
8 | model_path = 'models/RRDB_ESRGAN_x4.pth' # models/RRDB_ESRGAN_x4.pth OR models/RRDB_PSNR_x4.pth
9 | device = torch.device('cuda')  # if you want to run on CPU, change 'cuda' -> 'cpu'
10 | # device = torch.device('cpu')
11 |
12 | test_img_folder = 'LR/*'
13 |
14 | model = arch.RRDBNet(3, 3, 64, 23, gc=32)
15 | model.load_state_dict(torch.load(model_path), strict=True)
16 | model.eval()
17 | model = model.to(device)
18 |
19 | print('Model path {:s}. \nTesting...'.format(model_path))
20 |
21 | idx = 0
22 | for path in glob.glob(test_img_folder):
23 | idx += 1
24 | base = osp.splitext(osp.basename(path))[0]
25 | print(idx, base)
26 | # read images
27 | img = cv2.imread(path, cv2.IMREAD_COLOR)
28 | img = img * 1.0 / 255
29 | img = torch.from_numpy(np.transpose(img[:, :, [2, 1, 0]], (2, 0, 1))).float()
30 | img_LR = img.unsqueeze(0)
31 | img_LR = img_LR.to(device)
32 |
33 | with torch.no_grad():
34 | output = model(img_LR).data.squeeze().float().cpu().clamp_(0, 1).numpy()
35 | output = np.transpose(output[[2, 1, 0], :, :], (1, 2, 0))
36 | output = (output * 255.0).round()
37 | cv2.imwrite('results/{:s}_rlt.png'.format(base), output)
38 |
--------------------------------------------------------------------------------
/transer_RRDB_models.py:
--------------------------------------------------------------------------------
1 | import os
2 | import torch
3 | import RRDBNet_arch as arch
4 |
5 | pretrained_net = torch.load('./models/RRDB_ESRGAN_x4.pth')
6 | save_path = './models/RRDB_ESRGAN_x4.pth'  # note: saving to the same path overwrites the loaded checkpoint with the new key naming
7 |
8 | crt_model = arch.RRDBNet(3, 3, 64, 23, gc=32)
9 | crt_net = crt_model.state_dict()
10 |
11 | load_net_clean = {}
12 | for k, v in pretrained_net.items():
13 | if k.startswith('module.'):
14 | load_net_clean[k[7:]] = v
15 | else:
16 | load_net_clean[k] = v
17 | pretrained_net = load_net_clean
18 |
19 | print('###################################\n')
20 | tbd = []
21 | for k, v in crt_net.items():
22 | tbd.append(k)
23 |
24 | # directly copy
25 | for k, v in crt_net.items():
26 | if k in pretrained_net and pretrained_net[k].size() == v.size():
27 | crt_net[k] = pretrained_net[k]
28 | tbd.remove(k)
29 |
30 | crt_net['conv_first.weight'] = pretrained_net['model.0.weight']
31 | crt_net['conv_first.bias'] = pretrained_net['model.0.bias']
32 |
33 | for k in tbd.copy():
34 | if 'RDB' in k:
35 | ori_k = k.replace('RRDB_trunk.', 'model.1.sub.')
36 | if '.weight' in k:
37 | ori_k = ori_k.replace('.weight', '.0.weight')
38 | elif '.bias' in k:
39 | ori_k = ori_k.replace('.bias', '.0.bias')
40 | crt_net[k] = pretrained_net[ori_k]
41 | tbd.remove(k)
42 |
43 | crt_net['trunk_conv.weight'] = pretrained_net['model.1.sub.23.weight']
44 | crt_net['trunk_conv.bias'] = pretrained_net['model.1.sub.23.bias']
45 | crt_net['upconv1.weight'] = pretrained_net['model.3.weight']
46 | crt_net['upconv1.bias'] = pretrained_net['model.3.bias']
47 | crt_net['upconv2.weight'] = pretrained_net['model.6.weight']
48 | crt_net['upconv2.bias'] = pretrained_net['model.6.bias']
49 | crt_net['HRconv.weight'] = pretrained_net['model.8.weight']
50 | crt_net['HRconv.bias'] = pretrained_net['model.8.bias']
51 | crt_net['conv_last.weight'] = pretrained_net['model.10.weight']
52 | crt_net['conv_last.bias'] = pretrained_net['model.10.bias']
53 |
54 | torch.save(crt_net, save_path)
55 | print('Saving to ', save_path)
56 |
--------------------------------------------------------------------------------