├── .gitignore
├── README.md
├── config
│   ├── config_demo.json
│   └── config_train.json
├── data_utils.py
├── demo_simulation.py
├── environment.yml
├── exposure_module.py
├── isp.py
├── new_tone_curve.txt
├── preprocess
│   ├── preprocess_canon.py
│   └── preprocess_nikon.py
├── pyexiftool
│   ├── COPYING.BSD
│   ├── COPYING.GPL
│   ├── MANIFEST.in
│   ├── README.rst
│   ├── doc
│   │   ├── .gitignore
│   │   ├── Makefile
│   │   ├── conf.py
│   │   └── index.rst
│   ├── exiftool.py
│   ├── setup.py
│   └── test
│       ├── __init__.py
│       ├── rose.jpg
│       ├── skyblue.png
│       └── test_exiftool.py
├── train.py
├── unet_attention_decouple.py
├── unet_model_ori.py
└── unet_parts_ori.py
/.gitignore:
--------------------------------------------------------------------------------
1 | ProcessedData/
2 | exp/
3 | __pycache__/
4 | pyexiftool/__pycache__/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Neural Camera Simulators
2 |
3 |
4 | **This repository includes the official code for "[Neural Camera Simulators (CVPR2021)](https://arxiv.org/abs/2104.05237)".**
5 |
6 | > **Neural Camera Simulators**
7 | > Hao Ouyang*, Zifan Shi*, Chenyang Lei, Ka Lung Law, Qifeng Chen (* indicates joint first authors)
8 | > HKUST
9 |
10 | [[Paper](https://arxiv.org/abs/2104.05237)]
11 | [[Datasets](https://hkustconnect-my.sharepoint.com/:f:/g/personal/zshiaj_connect_ust_hk/Eu7788io4UBBkQoEVGQr7hQBeKpowDnNunlPW5Xv1qGrdQ?e=vdmKN2)]
12 |
13 |
14 |
15 | ## Installation
16 | Clone this repo.
17 | ```bash
18 | git clone https://github.com/ken-ouyang/neural_image_simulator.git
19 | cd neural_image_simulator/
20 | ```
21 |
22 | We have tested our code on Ubuntu 18.04 LTS with PyTorch 1.3.0 and CUDA 10.1. Please install the dependencies with
23 | ```bash
24 | conda env create -f environment.yml
25 | ```
26 |
27 | ## Preparing datasets
28 | We provide two datasets for training and testing: [[Nikon](https://hkustconnect-my.sharepoint.com/:f:/g/personal/zshiaj_connect_ust_hk/EsKZAEit4FZItIO9oTt-IaYBVz5tDyHjC4NYbywUTSlU4Q?e=XJc8cE)] and [[Canon](https://hkustconnect-my.sharepoint.com/:f:/g/personal/zshiaj_connect_ust_hk/ErmBVoXbwRFPmciYmtQeQscByOB8TZrBIHpvFZOSLfeLig?e=c3yq1d)]. The data can be preprocessed with the command:
29 | ```bash
30 | python preprocess/preprocess_nikon.py --input_dir the_directory_of_the_dataset --output_dir the_directory_to_save_the_preprocessed_data --image_size 512
31 | OR
32 | python preprocess/preprocess_canon.py --input_dir the_directory_of_the_dataset --output_dir the_directory_to_save_the_preprocessed_data --image_size 512
33 | ```
34 | The preprocessed data can also be downloaded directly: [[Nikon](https://hkustconnect-my.sharepoint.com/:f:/g/personal/zshiaj_connect_ust_hk/Eo2dgGZxa35LuPsgUge7aSgBQojaj1cxweNhY4B2g8WC-Q?e=I84tFK)] and [[Canon](https://hkustconnect-my.sharepoint.com/:f:/g/personal/zshiaj_connect_ust_hk/ElZXGJfixIlIqHhIpY2AO_wBp5o2I7qKfUvq_YojArBCtQ?e=byzKm1)]. Put the preprocessed dataset into `./ProcessedData/Nikon/` or `./ProcessedData/Canon/`.
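For reference, the provided configs expect the preprocessed csv index files at `./ProcessedData/Nikon/AutoModenikon_train_512/camera_params.csv` for training (see `config/config_train.json`) and `./ProcessedData/Nikon/AutoModenikon_test_512/camera_params.csv` for the demo (see `config/config_demo.json`).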
35 |
36 | ## Training networks
37 | The training arguments are specified in a json file. To train the model, run
38 | ```bash
39 | python train.py --config config/config_train.json
40 | ```
41 | The checkpoints will be saved into `./exp/{exp_name}/`.
42 | When training the noise module, set `unet_training` in the json file to `true`; otherwise leave it `false`. Likewise, set `aperture` to `true` only when training the aperture module, and `false` otherwise.
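A minimal excerpt (not a complete config; see `config/config_train.json` for all fields) with the flags set for noise-module training:
```json
{
  "train_config": { "aperture": false },
  "data_config": { "unet_training": true }
}
```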
43 |
44 | ## Demo
45 | Download the pretrained demo [[checkpoints](https://hkustconnect-my.sharepoint.com/:f:/g/personal/zshiaj_connect_ust_hk/EkTcqA8Ewj5PhCta-gH0FlYBt9iAzylEG-jS533a2Nbz-A?e=y24WRz)] and put them under `./exp/demo/`. Then, run the command
46 | ```bash
47 | python demo_simulation.py --config config/config_demo.json
48 | ```
49 | The simulated results will be saved under `./exp/{exp_name}`.
50 |
51 | ## Citation
52 |
53 | ```
54 | @inproceedings{ouyang2021neural,
55 | title = {Neural Camera Simulators},
56 | author = {Ouyang, Hao and Shi, Zifan and Lei, Chenyang and Law, Ka Lung and Chen, Qifeng},
57 | booktitle = {CVPR},
58 | year = {2021}
59 | }
60 | ```
61 | ## Acknowledgement
62 | Part of the code is adapted from [Pytorch-UNet](https://github.com/milesial/Pytorch-UNet) and [pyexiftool](https://github.com/smarnach/pyexiftool).
--------------------------------------------------------------------------------
/config/config_demo.json:
--------------------------------------------------------------------------------
1 | {
2 | "test_config": {
3 | "output_directory": "exp/test",
4 | "seed": 1234,
5 | "checkpoint_path1": "./exp/demo/exp_model",
6 | "checkpoint_path2": "./exp/demo/unet_model",
7 | "checkpoint_path3": "./exp/demo/ap_model"
8 | },
9 | "data_config": {
10 | "seed": 1234,
11 | "file_list":"./ProcessedData/Nikon/AutoModenikon_test_512/camera_params.csv",
12 | "aperture":true,
13 | "n_params": 2,
14 | "threshold": 0.5,
15 | "unet_training": false,
16 | "maskthr": 0.99,
17 | "camera":"Nikon"
18 | },
19 | "network_config": {
20 | "n_input_channels": 8,
21 | "n_output_channels": 4
22 | },
23 | "network_config2": {
24 | "n_params": 2,
25 | "n_input_channels": 4,
26 | "n_output_channels": 4
27 | }
28 | }
29 |
--------------------------------------------------------------------------------
/config/config_train.json:
--------------------------------------------------------------------------------
1 | {
2 | "train_config": {
3 | "output_directory": "exp/experiment_1",
4 | "epochs": 10000,
5 | "learning_rate1": 1e-3,
6 | "learning_rate2": 1e-3,
7 | "learning_rate3": 1e-3,
8 | "aperture": true,
9 | "iters_per_checkpoint": 2000,
10 | "batch_size":1,
11 | "epoch_size":100,
12 | "loss_type1":"l1",
13 | "loss_type2":"l1",
14 | "loss_type3":"l1",
15 | "net_type":"u_net",
16 | "net_type_ap":"unet_att_style",
17 | "residual_learning1":true,
18 | "residual_learning2":true,
19 | "seed": 1234,
20 | "checkpoint_path1": "",
21 | "checkpoint_path2": "",
22 | "checkpoint_path3": "",
23 | "parallel":false,
24 | "multi_stage":5,
25 | "multi_stage2":30,
26 | "variance_map":true,
27 | "isp_save": true
28 | },
29 | "data_config": {
30 | "seed": 1234,
31 | "file_list":"./ProcessedData/Nikon/AutoModenikon_train_512/camera_params.csv",
32 | "aperture":true,
33 | "n_params": 2,
34 | "threshold": 1.0,
35 | "unet_training": false,
36 | "maskthr": 0.99,
37 | "camera":"Nikon"
38 | },
39 | "network_config": {
40 | "n_input_channels": 8,
41 | "n_output_channels": 4
42 | },
43 | "network_config2": {
44 | "n_params": 2,
45 | "n_input_channels": 4,
46 | "n_output_channels": 4
47 | }
48 | }
49 |
--------------------------------------------------------------------------------
/data_utils.py:
--------------------------------------------------------------------------------
1 | import os
2 | import random
3 | import torch
4 | import torch.utils.data
5 | import torchvision.transforms as transforms
6 | import torchvision.transforms.functional as transformsF
7 | import sys
8 | import PIL.Image
9 | import PIL.ExifTags
10 | import cv2
11 | import math
12 | import rawpy
13 | import csv
14 | import numpy as np
15 | import json
16 |
17 |
18 |
19 | def files_to_pairs_p(filename, threshold, unet_training, camera):
20 | separating_points=[0]
21 |     # Dictionary for aperture: maps f-number to relative light throughput (doubles with each full stop; f/22 -> 1)
22 | aperture_dict = {'22.0':1, '20.0':1.33333, '18.0':1.666667, '16.0':2, '14.0':2.66667, '13.0':3.33333, '11.0':4, '10.0':5.33333, \
23 | '9.0':6.66667, '8.0':8, '7.1':10.66667, '6.3':13.33333, '5.6':16, '5.0':21.33333, '4.5':26.66667, '4.0':32, '2.8':64, '2.0':128, '1.4':256}
24 | with open(filename, newline='') as csvfile:
25 | reader = csv.reader(csvfile, delimiter='\t', quotechar='|')
26 | files = []
27 | iso_list = []
28 | time_list = []
29 | # wb_list=[]
30 | aperture_list = []
31 | noise_list=[]
32 | # skip header
33 | next(reader)
34 | if camera == "Nikon":
35 | for row in reader:
36 | cells = row[0].split(",")
37 | files.append(cells[0])
38 | time_list.append(float(cells[1]))
39 | iso_list.append(float(cells[2]))
40 | aperture_list.append(float(aperture_dict[cells[3]]))
41 | noise_list.append([float(i) for i in cells[8].split(" ")])
42 | elif camera == "Canon":
43 | for row in reader:
44 | cells = row[0].split(",")
45 | files.append(cells[0])
46 | time_list.append(float(cells[1]))
47 | iso_list.append(float(cells[2]))
48 | aperture_list.append(float(aperture_dict[cells[3]]))
49 | noise_list.append([float(i) for i in cells[4].split(" ")])
50 |
51 | # separate scene
52 | for i in range(len(files)-1):
53 | if not files[i].split("/")[-2][-3:] == files[i+1].split("/")[-2][-3:]:
54 | separating_points.append(i+1)
55 | separating_points.append(len(files)-1)
56 | pair_list = []
57 | for i in range(len(separating_points)-1):
58 | # append all parameters
59 | scene_separate = files[separating_points[i]:separating_points[i+1]]
60 | time_separate = time_list[separating_points[i]:separating_points[i+1]]
61 | iso_separate = iso_list[separating_points[i]:separating_points[i+1]]
62 | aperture_separate = aperture_list[separating_points[i]:separating_points[i+1]]
63 | noise_separate = noise_list[separating_points[i]:separating_points[i+1]]
64 | for ind1 in range(len(scene_separate)):
65 | for ind2 in range(len(scene_separate)):
66 | if unet_training:
67 | if aperture_separate[ind1] > aperture_separate[ind2]:
68 | continue
69 | else:
70 |                     if aperture_separate[ind1] > aperture_separate[ind2] or aperture_separate[ind2]/aperture_separate[ind1] > 1:
77 |                         continue
78 | if iso_separate[ind2] > 800:
79 | continue
80 | pair_list.append((scene_separate[ind1], scene_separate[ind2], time_separate[ind1],
81 | time_separate[ind2], iso_separate[ind1], iso_separate[ind2],
82 | aperture_separate[ind1], aperture_separate[ind2],
83 | noise_separate[ind1], noise_separate[ind2]))
84 |
85 | return pair_list
86 |
87 |
88 |
89 |
90 | class autoTrainSetRaw2jpgProcessed(torch.utils.data.Dataset):
91 | """
92 | DataLoader for automode
93 | """
94 | def __init__(self, seed, file_list, aperture, n_params, threshold, unet_training, maskthr, camera):
95 | self.image_pairs = files_to_pairs_p(file_list, threshold, unet_training, camera)
96 | random.seed(seed)
97 | self.seed = seed
98 | self.aperture = aperture
99 | self.n_params = n_params
100 | self.maskthr = maskthr
101 | self.camera = camera
102 |
103 | def __getitem__(self, index):
104 | # Read image
105 | img_name = self.image_pairs[index][0]
106 | input_img = PIL.Image.open(self.image_pairs[index][0])
107 |
108 | # Read raw and save in memory
109 | input_raw_path = self.image_pairs[index][0][:-3]+"npy"
110 | output_raw_path = self.image_pairs[index][1][:-3]+"npy"
111 | input_raw = np.load(input_raw_path)
112 | output_raw = np.load(output_raw_path)
113 |
114 | # Get exif
115 | input_exposure_time = self.image_pairs[index][2]
116 | output_exposure_time = self.image_pairs[index][3]
117 | input_iso = self.image_pairs[index][4]
118 | output_iso = self.image_pairs[index][5]
119 | input_aperture = self.image_pairs[index][6]
120 | output_aperture = self.image_pairs[index][7]
121 | # for later usage
122 | input_noise = self.image_pairs[index][8]
123 | output_noise = self.image_pairs[index][9]
124 |
125 |
126 | i=0
127 | j=0
128 | h=512
129 | w=768
130 | input_img = transformsF.crop(input_img, i, j, h, w)
131 | input_raw = input_raw[:,i:i+h, j:j+w]
132 | output_raw = output_raw[:,i:i+h, j:j+w]
133 |
134 | # Random flip
135 | if random.random() > 0.5:
136 | input_raw = np.flip(input_raw, axis=2).copy()
137 | output_raw = np.flip(output_raw, axis=2).copy()
138 | input_img = transformsF.hflip(input_img)
139 |
140 | if random.random() > 0.5:
141 | input_raw = np.flip(input_raw, axis=1).copy()
142 | output_raw = np.flip(output_raw, axis=1).copy()
143 | input_img = transformsF.vflip(input_img)
144 |
145 |
146 | exp_params = torch.Tensor([(float(output_exposure_time)/float(input_exposure_time))*
147 | (float(output_aperture)/float(input_aperture))*
148 | (float(output_iso)/float(input_iso))
149 | ])
150 |
151 | noise_params = torch.Tensor([(math.log((float(output_exposure_time)/float(input_exposure_time))*
152 | (float(output_aperture)/float(input_aperture)), 2)+1)*10
153 | ] )
154 | if self.n_params == 1:
155 | ap_params = torch.Tensor([float(output_aperture)/float(input_aperture)])
156 | else:
157 | ap_params = torch.Tensor([float(input_aperture), float(output_aperture)])
158 |
159 | # mask
160 | mask = input_raw.copy()
161 |
162 | one_mask = (mask[0]>self.maskthr) & (mask[1]>self.maskthr) & (mask[2]>self.maskthr) & (mask[3]>self.maskthr)
163 | one_mask = np.expand_dims(one_mask, 0)
164 | one_mask = np.repeat(one_mask, 4, axis=0)
165 | mask[one_mask] = 0
166 | mask[~one_mask] = 1
167 | # Normalize and append
168 | input_raw = torch.from_numpy(input_raw)
169 | output_raw = torch.from_numpy(output_raw)
170 | input_jpg = transformsF.to_tensor(input_img)
171 | mask = torch.from_numpy(mask)
172 | # Noise level
173 | if self.camera == "Nikon":
174 | input_shot_noise = torch.from_numpy(np.array((input_noise[0], input_noise[2], input_noise[4], input_noise[2]), dtype=np.float32))
175 | input_read_noise = torch.from_numpy(np.array((input_noise[1], input_noise[3], input_noise[5], input_noise[3]), dtype=np.float32))
176 | output_shot_noise = torch.from_numpy(np.array((output_noise[0], output_noise[2], output_noise[4], output_noise[2]), dtype=np.float32))
177 | output_read_noise = torch.from_numpy(np.array((output_noise[1], output_noise[3], output_noise[5], output_noise[3]), dtype=np.float32))
178 | elif self.camera == "Canon":
179 | input_shot_noise = torch.from_numpy(np.array((input_noise[0], input_noise[0], input_noise[0], input_noise[0]), dtype=np.float32))
180 | input_read_noise = torch.from_numpy(np.array((input_noise[1], input_noise[1], input_noise[1], input_noise[1]), dtype=np.float32))
181 | output_shot_noise = torch.from_numpy(np.array((output_noise[0], output_noise[0], output_noise[0], output_noise[0]), dtype=np.float32))
182 | output_read_noise = torch.from_numpy(np.array((output_noise[1], output_noise[1], output_noise[1], output_noise[1]), dtype=np.float32))
183 |
184 |
185 | return exp_params, ap_params, noise_params, input_raw, input_jpg, output_raw, mask, input_shot_noise, input_read_noise, output_shot_noise, output_read_noise, img_name
186 |
187 | def __len__(self):
188 | return len(self.image_pairs)
189 |
190 |
191 |
192 | def get_output_params(filename, camera):
193 |     # Parse the csv into per-image parameter lists (aperture values mapped via aperture_dict)
194 | aperture_dict = {'22.0':1, '20.0':1.33333, '18.0':1.666667, '16.0':2, '14.0':2.66667, '13.0':3.33333, '11.0':4, '10.0':5.33333, \
195 | '9.0':6.66667, '8.0':8, '7.1':10.66667, '6.3':13.33333, '5.6':16, '5.0':21.33333, '4.5':26.66667, '4.0':32, '2.8':64, '2.0':128, '1.4':256}
196 | with open(filename, newline='') as csvfile:
197 | reader = csv.reader(csvfile, delimiter='\t', quotechar='|')
198 | files = []
199 | iso_list = []
200 | time_list = []
201 | aperture_list = []
202 | noise_list=[]
203 | next(reader)
204 | if camera == "Nikon":
205 | for row in reader:
206 | cells = row[0].split(",")
207 | files.append(cells[0])
208 | time_list.append(float(cells[1]))
209 | iso_list.append(float(cells[2]))
210 | aperture_list.append(float(aperture_dict[cells[3]]))
211 | noise_list.append([float(i) for i in cells[8].split(" ")])
212 | elif camera == "Canon":
213 | for row in reader:
214 | cells = row[0].split(",")
215 | files.append(cells[0])
216 | time_list.append(float(cells[1]))
217 | iso_list.append(float(cells[2]))
218 | aperture_list.append(float(aperture_dict[cells[3]]))
219 | noise_list.append([float(i) for i in cells[4].split(" ")])
220 |
221 | # noise params
222 | noise_params = {
223 | "100":[float(7.88891432059205e-06), float(5.71150998895191e-09), float(7.47234302281224e-06), float(5.73712214136873e-09), float(7.31670099946593e-06), float(5.59509111432998e-09)],
224 | "200":[float(1.61577782864118e-05), float(2.35794788659258e-08), float(1.53704127565423e-05), float(2.32535060169844e-08), float(1.47447928587778e-05), float(2.24409022721233e-08)],
225 | "400":[float(3.25795376516365e-05), float(5.23908219573611e-08), float(3.09620813305867e-05), float(5.22487909303224e-08), float(2.95490959029526e-05), float(4.85513274723299e-08)],
226 | "800":[float(6.47394522011139e-05), float(2.34025221765005e-08), float(6.19623102159152e-05), float(2.22569604502207e-08), float(5.78805218585489e-05), float(1.94326385518926e-08)],
227 | "1000":[float(8.31208514534218e-05), float(3.04121679427574e-08), float(8.0016403448539e-05), float(2.9493332257382e-08), float(7.24422064545663e-05), float(2.74710713004799e-08)],
228 | "1600":[float(0.000138265049210346), float(5.69427999550786e-08), float(0.00013417868314641), float(5.73036984664066e-08), float(0.000116127260242618), float(5.99138096354303e-08)],
229 | "3200":[float(0.000274854657816434), float(2.12355012442878e-07), float(0.00026780804150454), float(2.07206969807097e-07), float(0.000231322194247349), float(2.1032233889198e-07)],
230 | "6400":[float(0.000536276798657206), float(6.94797157253651e-07), float(0.000529454489967193), float(7.25184811907467e-07), float(0.000460526436255436), float(7.1397833103636e-07)]
231 | }
232 | # define iso list
233 | iso_params = [800, 6400]
234 | # define time list
235 | time_params = [5, 10, 320]
236 | # define aperture list
237 | aperture_params = [16, 32]
238 | data_list = []
239 | for idx in range(len(files)):
240 | for iso in iso_params:
241 | for time in time_params:
242 | for aperture in aperture_params:
243 | if aperture_list[idx]>aperture:
244 | continue
245 | data_list.append((files[idx], time_list[idx], 1.0/time, iso_list[idx], iso,
246 | aperture_list[idx], aperture, noise_list[idx], noise_params[str(iso)]))
247 |
248 | return data_list
249 |
250 |
251 |
252 | class autoTestSetRaw2jpgProcessed(torch.utils.data.Dataset):
253 | """
254 | DataLoader for automode
255 | """
256 | def __init__(self, seed, file_list, aperture, n_params, threshold, unet_training, maskthr, camera):
257 |
258 | self.image_lists = get_output_params(file_list, camera)
259 | random.seed(seed)
260 | self.seed = seed
261 | self.aperture = aperture
262 | self.n_params = n_params
263 | self.camera = camera
264 |
265 | def __getitem__(self, index):
266 | # Read image
267 | img_name = self.image_lists[index][0]
268 | input_img = PIL.Image.open(self.image_lists[index][0])
269 |
270 | # Read raw
271 | input_raw_path = self.image_lists[index][0][:-3]+"npy"
272 | input_raw = np.load(input_raw_path)
273 |
274 |
275 | # Get exif
276 | input_exposure_time = self.image_lists[index][1]
277 | output_exposure_time = self.image_lists[index][2]
278 | input_iso = self.image_lists[index][3]
279 | output_iso = self.image_lists[index][4]
280 | input_aperture = self.image_lists[index][5]
281 | output_aperture = self.image_lists[index][6]
282 | input_noise = self.image_lists[index][7]
283 | output_noise = self.image_lists[index][8]
284 |
285 |
286 | i=0
287 | j=0
288 | h=512
289 | w=768
290 | input_img = transformsF.crop(input_img, i, j, h, w)
291 | input_raw = input_raw[:,i:i+h, j:j+w]
292 |
293 | exp_params = torch.Tensor([(float(output_exposure_time)/float(input_exposure_time))*
294 | (float(output_aperture)/float(input_aperture))*
295 | (float(output_iso)/float(input_iso))
296 | ])
297 |
298 | noise_params = torch.Tensor([(math.log((float(output_exposure_time)/float(input_exposure_time))*
299 | (float(output_aperture)/float(input_aperture)), 2)+1)*10
300 | ] )
301 | if self.n_params == 1:
302 | ap_params = torch.Tensor([float(output_aperture)/float(input_aperture)])
303 | else:
304 | ap_params = torch.Tensor([float(input_aperture), float(output_aperture)])
305 |
306 | # decide output image path
307 | scene_name = self.image_lists[index][0].split("/")[-2]
308 | image_name = self.image_lists[index][0].split("/")[-1][:-4]
309 | image_path = "%s-%s/iso_%d_time_%.5f_ap_%.3f.jpg" % (scene_name, image_name, int(output_iso), float(1.0/output_exposure_time), float(output_aperture))
310 |
311 | input_raw = torch.from_numpy(input_raw)
312 |
313 | if self.camera == "Nikon":
314 | input_shot_noise = torch.from_numpy(np.array((input_noise[0], input_noise[2], input_noise[4], input_noise[2]), dtype=np.float32))
315 | input_read_noise = torch.from_numpy(np.array((input_noise[1], input_noise[3], input_noise[5], input_noise[3]), dtype=np.float32))
316 | output_shot_noise = torch.from_numpy(np.array((output_noise[0], output_noise[2], output_noise[4], output_noise[2]), dtype=np.float32))
317 | output_read_noise = torch.from_numpy(np.array((output_noise[1], output_noise[3], output_noise[5], output_noise[3]), dtype=np.float32))
318 | elif self.camera == "Canon":
319 | input_shot_noise = torch.from_numpy(np.array((input_noise[0], input_noise[0], input_noise[0], input_noise[0]), dtype=np.float32))
320 | input_read_noise = torch.from_numpy(np.array((input_noise[1], input_noise[1], input_noise[1], input_noise[1]), dtype=np.float32))
321 | output_shot_noise = torch.from_numpy(np.array((output_noise[0], output_noise[0], output_noise[0], output_noise[0]), dtype=np.float32))
322 | output_read_noise = torch.from_numpy(np.array((output_noise[1], output_noise[1], output_noise[1], output_noise[1]), dtype=np.float32))
323 |
324 |
325 |
326 | return exp_params, ap_params, noise_params, input_raw, input_shot_noise, input_read_noise, output_shot_noise, output_read_noise, image_path, img_name
327 |
328 | def __len__(self):
329 | return len(self.image_lists)
330 |
--------------------------------------------------------------------------------
/demo_simulation.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 | import torch
5 | import torch.nn as nn
6 | import numpy as np
7 | from PIL import Image
8 | from torch.utils.data import DataLoader
9 | import torch.distributions as tdist
10 | from exposure_module import ExposureNet
11 | from unet_model_ori import UNet
12 | from unet_attention_decouple import AttenUnet_style
13 | from data_utils import autoTestSetRaw2jpgProcessed
14 | from isp import isp
15 |
16 |
17 | os.environ["CUDA_VISIBLE_DEVICES"]="7"
18 |
19 | def load_checkpoint(checkpoint_path, model):
20 | assert os.path.isfile(checkpoint_path)
21 | checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
22 | model_for_loading = checkpoint_dict['model']
23 | model.load_state_dict(model_for_loading.state_dict())
24 | print("Loaded checkpoint '{}' " .format(
25 | checkpoint_path))
26 | return model
27 |
28 | def clip(img):  # clamp values to [0, 1] in place
29 | img[img>1] = 1
30 | img[img<0] = 0
31 |
32 | def save_testing_images(img_list, output_directory, image_path, img_name, alpha):
33 | print("Saving output images")
34 | b,c,h,w = img_list[0].shape
35 | batch_list = []
36 | image_path = os.path.join(output_directory, image_path[0])
37 | out_folder = os.path.dirname(image_path)
38 | # make new directory
39 | if not os.path.isdir(out_folder):
40 | print("making directory %s" % (out_folder))
41 | os.makedirs(out_folder)
42 | print("Saving as %s" % (image_path))
43 | cnt = 0
44 | for img in img_list:
45 | clip(img)
46 | if cnt == 0:
47 | new_img = isp(img[0,:,:,:], img_name[0], data_config["file_list"], 1)
48 | else:
49 | new_img = isp(img[0,:,:,:], img_name[0], data_config["file_list"], alpha[0])
50 | new_img_save = Image.fromarray(np.transpose(new_img* 255.0, [1,2,0]).astype('uint8'), 'RGB')
51 | img_path_new = os.path.join(out_folder, image_path.split('/')[-1][:-4]+'_'+str(cnt)+'.jpg')
52 | cnt = cnt + 1
53 | new_img_save.save(img_path_new, quality=95)
54 |
55 |
56 |
57 | def get_variance_map(input_raw, shot_noise, read_noise, mul=None):
58 |     if mul is not None:  # rescale the noise parameters by the exposure gain
59 | shot_noise = shot_noise * mul
60 | read_noise = read_noise * mul * mul
61 | b, c, h, w = input_raw.size()
62 | read_noise = torch.unsqueeze(read_noise, 2)
63 | read_noise = torch.unsqueeze(read_noise, 3)
64 | read_noise = read_noise.repeat(1,1,h,w)
65 | shot_noise = torch.unsqueeze(shot_noise, 2)
66 | shot_noise = torch.unsqueeze(shot_noise, 3)
67 | shot_noise = shot_noise.repeat(1,1,h,w)
68 |
69 | variance = torch.add(input_raw * shot_noise, read_noise)
70 | n = tdist.Normal(loc=torch.zeros_like(variance), scale=torch.sqrt(variance))
71 | noise = n.sample()
72 | var_map = input_raw + noise
73 | return var_map
74 |
75 | def test(output_directory, seed, checkpoint_path1, checkpoint_path2, checkpoint_path3):
76 | # set manual seed
77 | torch.manual_seed(seed)
78 | torch.cuda.manual_seed(seed)
79 |
80 | # build model
81 | model_exposure = ExposureNet().cuda()
82 | model_noise = UNet(**network_config).cuda()
83 | model_aperture = AttenUnet_style(**network_config2).cuda()
84 |
85 |
86 | # Load checkpoint
87 | if checkpoint_path3 != "":
88 | model_exposure = load_checkpoint(checkpoint_path1, model_exposure)
89 | model_noise = load_checkpoint(checkpoint_path2, model_noise)
90 | model_aperture = load_checkpoint(checkpoint_path3, model_aperture)
91 | else:
92 | print("No checkpoint!")
93 | return 0
94 |
95 | # build dataset
96 | testset = autoTestSetRaw2jpgProcessed(**data_config)
97 | test_loader = DataLoader(testset, num_workers=4, shuffle=False,
98 | sampler=None,
99 | batch_size=1,
100 | pin_memory=False,
101 | drop_last=True)
102 |
103 | if not os.path.isdir(output_directory):
104 | os.makedirs(output_directory)
105 | os.chmod(output_directory, 0o775)
106 | print("output directory", output_directory)
107 |
108 | model_exposure.eval()
109 | model_noise.eval()
110 | model_aperture.eval()
111 |
112 | loss = torch.nn.MSELoss()
113 |
114 | for i, batch in enumerate(test_loader):
115 | model_exposure.zero_grad()
116 | model_noise.zero_grad()
117 | model_aperture.zero_grad()
118 |
119 | exp_params, ap_params, noise_params, input_raw, input_shot_noise, input_read_noise, output_shot_noise, output_read_noise, image_path, img_name = batch
120 | exp_params = torch.autograd.Variable(exp_params.cuda())
121 | ap_params = torch.autograd.Variable(ap_params.cuda())
122 | noise_params = torch.autograd.Variable(noise_params.cuda())
123 | input_shot_noise = torch.autograd.Variable(input_shot_noise.cuda())
124 | input_read_noise = torch.autograd.Variable(input_read_noise.cuda())
125 | output_shot_noise = torch.autograd.Variable(output_shot_noise.cuda())
126 | output_read_noise = torch.autograd.Variable(output_read_noise.cuda())
127 | input_raw = torch.autograd.Variable(input_raw.cuda())
128 | output_exp, exp_params_m = model_exposure([exp_params, input_raw])
129 |
130 |
131 | variance_input = get_variance_map(output_exp, input_shot_noise, input_read_noise, exp_params_m)
132 | input_cat = torch.cat([output_exp, variance_input], 1)
133 | output_ns = model_noise(input_cat)
134 |
135 | output_ap = model_aperture([ap_params, output_ns+output_exp])
136 | output_final = get_variance_map(output_exp+output_ns+output_ap, output_shot_noise, output_read_noise)
137 | output_save = output_final.cpu().data.numpy()
138 | output_save[np.isinf(output_save)] = 1
139 | output_save[np.isnan(output_save)] = 0
140 | save_testing_images([input_raw.cpu().data.numpy(),
141 | output_exp.cpu().data.numpy(),
142 | (output_ns+output_exp).cpu().data.numpy(),
143 | (output_ap+output_ns+output_exp).cpu().data.numpy(),
144 | output_save],
145 | output_directory, image_path, img_name, exp_params_m.cpu().data.numpy())
146 |
147 |
148 |
149 | if __name__ == "__main__":
150 | parser = argparse.ArgumentParser()
151 | parser.add_argument('-c', '--config', type=str,
152 | help='JSON file for configuration')
153 | args = parser.parse_args()
154 | global config_path
155 | config_path = args.config
156 |
157 | with open(config_path) as f:
158 | data = f.read()
159 | config = json.loads(data)
160 | test_config = config["test_config"]
161 | global data_config
162 | data_config = config["data_config"]
163 | global network_config
164 | network_config = config["network_config"]
165 | global network_config2
166 | network_config2 = config["network_config2"]
167 |
168 | torch.backends.cudnn.enabled = True
169 | torch.backends.cudnn.benchmark = False
170 | test(**test_config)
171 |
--------------------------------------------------------------------------------
/environment.yml:
--------------------------------------------------------------------------------
1 | name: neural_image_simulator
2 | channels:
3 | - defaults
4 | dependencies:
5 | - cudatoolkit=10.1.168=0
6 | - python=3.7.4=h265db76_1
7 | - pip:
8 | - colour-demosaicing==0.1.5
9 | - easydict==1.9
10 | - imageio==2.9.0
11 | - joblib==0.14.1
12 | - matplotlib==3.1.2
13 | - opencv-python==4.1.1.26
14 | - rawpy==0.13.1
15 | - scikit-image==0.17.2
16 | - scikit-learn==0.22.2.post1
17 | - scipy==1.4.0
18 | - sklearn==0.0
19 | - torch==1.3.0
20 | - torchvision==0.4.1
21 | - tqdm==4.53.0
22 |
23 |
24 |
--------------------------------------------------------------------------------
/exposure_module.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.init as init
4 | import torch.nn.functional as F
5 | from torch.nn.parameter import Parameter
6 |
7 | def init_linear(linear):
8 | init.ones_(linear.weight)
9 | if linear.bias is not None:
10 | linear.bias.data.zero_()
11 |
12 |
13 | def init_conv(conv, glu=True):
14 |     init.kaiming_normal_(conv.weight)
15 | if conv.bias is not None:
16 | conv.bias.data.zero_()
17 |
18 | class ExposureNet(nn.Module):
19 | def __init__(self):
20 | super(ExposureNet, self).__init__()
21 | linear = nn.Linear(1, 1, bias=False)
22 | init_linear(linear)
23 | self.bias = Parameter(torch.Tensor(4))
24 | self.bias.data.zero_()
25 | self.linear = linear
26 |
27 | def forward(self, input):
28 |         # input[0]: the theoretical brightness-correction factor (exposure ratio)
29 |         # input[1]: input raw (4 channels)
30 | params = input[0]
31 | image = input[1]
32 | b,c,h,w = image.shape
33 | exp = self.linear(params)
34 |         output = (image + self.bias.unsqueeze(0).unsqueeze(2).unsqueeze(3).repeat((b,1,h,w))) * exp.unsqueeze(2).unsqueeze(3).repeat((1,4,h,w))
35 | return torch.clamp(output, 0.0, 1.0), exp
36 |
--------------------------------------------------------------------------------
/isp.py:
--------------------------------------------------------------------------------
1 | import rawpy
2 | import numpy as np
3 | import PIL.Image
4 | import pyexiftool.exiftool as exiftool
5 | import cv2
6 | import os
9 | from scipy.interpolate import RegularGridInterpolator
10 | import colour_demosaicing
11 | import csv
12 | import sys
13 |
14 | def demosaic(raw_array, half):
15 | if half:
16 | rgb = np.stack([raw_array[0::2, 0::2], (raw_array[0::2, 1::2] + raw_array[1::2, 0::2]) / 2,
17 | raw_array[1::2, 1::2]], axis=2)
18 | else:
19 | rgb = colour_demosaicing.demosaicing_CFA_Bayer_bilinear(raw_array,'RGBG')
20 |
21 | return rgb
22 |
23 | def clip(raw_array, alpha):
24 | if alpha<1:
25 | raw_array[raw_array>alpha] = alpha
26 | else:
27 | raw_array[raw_array>1] = 1
28 | raw_array[raw_array<0] = 0
29 |
30 |
31 | def clip2(raw_array, wb):
32 | raw_array[:,:,0][raw_array[:,:,0]>wb[0]] = wb[0]
33 | raw_array[:,:,1][raw_array[:,:,1]>wb[1]] = wb[1]
34 | raw_array[:,:,2][raw_array[:,:,2]>wb[2]] = wb[2]
35 |
36 |
37 |
38 | def correct_gamma(rgb, gamma):
39 | return np.power(rgb + 1e-7, 1 / gamma)
40 |
41 | def get_matrix(m1, m2, tp1, tp2, tp):
42 | if (tp < tp1):
43 | m = m1
44 | elif (tp > tp2):
45 | m = m2
46 | else:
47 | g = (1/ float(tp) - 1 / float(tp2)) / (1 / float(tp1) - 1 / float(tp2))
48 | m = g * m1 + (1-g) * m2
49 | return m
50 |
51 |
52 | def rgb2hsv(rgb):
53 | r = rgb[0,:]
54 | g = rgb[1,:]
55 | b = rgb[2,:]
56 | maxc = np.maximum(np.maximum(r, g), b)
57 | minc = np.minimum(np.minimum(r, g), b)
58 | v = maxc
59 | maxc[maxc==0]=0.00001
60 | deltac = maxc - minc
61 | s = deltac / maxc
62 | deltac[deltac == 0] = 1 # to not divide by zero (those results in any way would be overridden in next lines)
63 | rc = (maxc - r) / deltac
64 | gc = (maxc - g) / deltac
65 | bc = (maxc - b) / deltac
66 |
67 | h = 4.0 + gc - rc
68 | h[g == maxc] = 2.0 + rc[g == maxc] - bc[g == maxc]
69 | h[r == maxc] = bc[r == maxc] - gc[r == maxc]
70 | h[minc == maxc] = 0.0
71 |
72 | h = (h / 6.0) % 1.0
73 | hsv = np.stack([h, s, v], axis=1)
74 | return hsv
75 |
76 | def hsv2rgb(hsv):
77 | h = hsv[:, 0]
78 | s = hsv[:, 1]
79 | v = hsv[:, 2]
80 | i = np.int32(h * 6.0)
81 | f = (h * 6.0) - i
82 | p = v * (1.0 - s)
83 | q = v * (1.0 - s * f)
84 | t = v * (1.0 - s * (1.0 - f))
85 | i = i % 6
86 |
87 | rgb = np.zeros_like(hsv)
88 | v, t, p, q = v.reshape(-1, 1), t.reshape(-1, 1), p.reshape(-1, 1), q.reshape(-1, 1)
89 | rgb[i == 0] = np.hstack([v, t, p])[i == 0]
90 | rgb[i == 1] = np.hstack([q, v, p])[i == 1]
91 | rgb[i == 2] = np.hstack([p, v, t])[i == 2]
92 | rgb[i == 3] = np.hstack([p, q, v])[i == 3]
93 | rgb[i == 4] = np.hstack([t, p, v])[i == 4]
94 | rgb[i == 5] = np.hstack([v, p, q])[i == 5]
95 | rgb[s == 0.0] = np.hstack([v, v, v])[s == 0.0]
96 | return rgb.T
97 |
98 | def adjust_hsv(hsv, hsv_map, hsv_dims):
99 | h = np.linspace(0, 1, hsv_dims[0])
100 | s = np.linspace(0, 1, hsv_dims[1])
101 | if hsv_dims[2] == 1:
102 | data = np.zeros((hsv_dims[0], hsv_dims[1], 3))
103 | for j in range(hsv_dims[0]):
104 | for k in range(hsv_dims[1]):
105 | starting_idx = int((j*hsv_dims[1] + k)*3)
106 | data[j,k,:] = hsv_map[starting_idx:(starting_idx+3)]
107 | interpolating_hsv_f = RegularGridInterpolator((h, s), data)
108 | hsv_correction = interpolating_hsv_f(hsv[:,:2])
109 | else:
110 | data = np.zeros((hsv_dims[2], hsv_dims[0], hsv_dims[1], 3))
111 | v = np.linspace(0, 1, hsv_dims[2])
112 | for i in range(hsv_dims[2]):
113 | for j in range(hsv_dims[0]):
114 | for k in range(hsv_dims[1]):
115 | starting_idx = int((j*hsv_dims[1] + k)*3)
116 | data[i,j,k,:] = hsv_map[starting_idx:(starting_idx+3)]
117 | interpolating_hsv_f = RegularGridInterpolator((v, h, s), data)
118 | hsv_correction = interpolating_hsv_f(hsv)
119 |
120 | hsv[:, 0] = (hsv[:, 0] + hsv_correction[:, 0] / 360.0 ) % 1
121 | hsv[:, 1] = hsv[:, 1] * hsv_correction[:, 1]
122 | hsv[:, 2] = hsv[:, 2] * hsv_correction[:, 2]
123 | clip(hsv, 1)
124 | return hsv
125 |
126 | def isp(raw, path, datapath, alpha):
127 | baseline_exposure_shift = 0.8
128 | saturation_scale = 1
129 | lamda = 0.4
130 | matrix_used = 2
131 |
132 | path_name = path
133 | csv.field_size_limit(sys.maxsize)
134 | params=[]
135 | data_path = datapath[:-17]+'camera_isp_params.csv'
136 | with open(data_path, newline='') as csvfile:
137 | reader = csv.reader(csvfile, delimiter='\t', quotechar='|')
138 | next(reader)
139 | for row in reader:
140 | cells = row[0].split(",")
141 | if cells[0]==path_name:
142 | params = cells
143 | break
144 | temperature1 = 2855
145 | temperature2 = 6500
146 | d50tosrgb = np.reshape(np.array([3.1338561, -1.6168667, -0.4906146, -0.9787684, 1.9161415,
147 | 0.0334540, 0.0719453, -0.2289914, 1.4052427]), (3,3))
148 | d50toprophotorgb = np.reshape(np.array([1.3459433, -0.2556075, -0.0511118, -0.5445989, 1.5081673, 0.0205351,
149 | 0, 0, 1.2118128]), (3,3))
150 | prophotorgbtod50 = np.linalg.inv(d50toprophotorgb)
151 | clip(raw,1)
152 | camera_color_matrix = np.reshape(np.asarray([float(i) for i in cells[17].split(" ")]), (3,3))
153 | br_coef = np.power(2, float(cells[9]) + baseline_exposure_shift)
154 | raw = raw * br_coef
155 | #back to bayer
156 | c,h,w = raw.shape
157 | H=2*h
158 | W=2*w
159 | bayer = np.zeros((H,W))
160 | bayer[0:H:2,0:W:2] = raw[0,:,:]
161 | bayer[0:H:2,1:W:2] = raw[1,:,:]
162 | bayer[1:H:2,1:W:2] = raw[2,:,:]
163 | bayer[1:H:2,0:W:2] = raw[3,:,:]
164 | #Step 4 demosaic
165 | rgb = demosaic(bayer, True)
166 | #Step 5 Color Correction to sRGB
167 | height, width, channels = rgb.shape
168 | forward_matrix1 = np.reshape(np.asarray(
169 | [float(i) for i in cells[4].split(" ")]
170 | ), (3,3))
171 | forward_matrix2 = np.reshape(np.asarray(
172 | [float(i) for i in cells[5].split(" ")]
173 | ), (3,3))
174 | color_matrix1 = np.reshape(np.asarray(
175 | [float(i) for i in cells[2].split(" ")]
176 | ), (3,3))
177 | color_matrix2 = np.reshape(np.asarray(
178 | [float(i) for i in cells[3].split(" ")]
179 | ), (3,3))
180 | camera_calibration1 = np.reshape(np.asarray(
181 | [float(i) for i in cells[7].split(" ")]
182 | ), (3,3))
183 | camera_calibration2 = np.reshape(np.asarray(
184 | [float(i) for i in cells[8].split(" ")]
185 | ), (3,3))
186 | analog_balance = np.diag(np.asarray([float(i) for i in cells[1].split(" ")]))
187 | neutral_wb = np.asarray([float(i) for i in cells[6].split(" ")])
188 |     image_temperature = float(cells[10])
189 |
190 |     forward_matrix = get_matrix(forward_matrix1, forward_matrix2, temperature1, temperature2, image_temperature)
191 |     camera_calibration = get_matrix(camera_calibration1, camera_calibration2, temperature1, temperature2, image_temperature)
192 | rgb_reshaped = np.reshape(np.transpose(rgb, (2,0,1)),(3,-1))
193 | ref_neutral = np.matmul(np.linalg.inv(np.matmul(analog_balance,camera_calibration)), neutral_wb)
194 | d = np.linalg.inv(np.diag(ref_neutral))
195 | camera2d50 = np.matmul(np.matmul(forward_matrix, d),
196 | np.linalg.inv(np.matmul(analog_balance, camera_calibration)))
197 | if (matrix_used == 1):
198 | camera2srgb = np.matmul(d50tosrgb, camera2d50)
199 | else:
200 | camera2srgb = np.matmul(camera_color_matrix, d)
201 | camera2prophoto = np.matmul(d50toprophotorgb, camera2d50)
202 | rgb_srgb = np.matmul(camera2srgb, rgb_reshaped)
203 | clip(rgb_srgb, alpha)
204 |
205 | # Applying the hue / saturation / value mapping
206 | if (matrix_used == 1):
207 | rgb_prophoto = np.matmul(camera2prophoto, rgb_reshaped)
208 | clip(rgb_prophoto, 1)
209 | hsv = rgb2hsv(rgb_prophoto)
210 | else:
211 | hsv = rgb2hsv(rgb_srgb)
212 |
213 | # Read hsv table
214 | hsv_dims = np.asarray(
215 | [int(i) for i in cells[11].split(" ")])
216 | hsv_map1 = np.asarray(
217 | [float(i) for i in cells[14].split(" ")])
218 | hsv_map2 = np.asarray(
219 | [float(i) for i in cells[15].split(" ")])
220 |     hsv_map = get_matrix(hsv_map1, hsv_map2, temperature1, temperature2, image_temperature)
221 | look_table_dims = np.asarray(
222 | [int(i) for i in cells[12].split(" ")])
223 | look_table = np.asarray(
224 | [float(i) for i in cells[16].split(" ")])
225 |
226 | # Adjust hsv
227 | hsv_corrected = adjust_hsv(hsv, look_table, look_table_dims)
228 | hsv_corrected = adjust_hsv(hsv_corrected, hsv_map, hsv_dims)
229 | hsv_corrected[:,1] = hsv_corrected[:,1] * saturation_scale
230 | clip(hsv_corrected, 1)
231 |
232 | if (matrix_used == 1):
233 | rgb_prophoto_corrected = hsv2rgb(hsv_corrected)
234 | prophoto2srgb = np.matmul(camera2srgb, np.linalg.inv(camera2prophoto))
235 | rgb_srgb_corrected = np.matmul(prophoto2srgb, rgb_prophoto_corrected)
236 | else:
237 | rgb_srgb_corrected = hsv2rgb(hsv_corrected)
238 |
239 | #tone mapping
240 |     with open("new_tone_curve.txt", "r") as f:
241 |         point_list = []
242 |         for x in f:
243 |             x = x.strip()
244 |             point_list.append([float(i) for i in x.split(" ")])
245 | tone_curve_sparse = np.asarray(point_list)
246 | x = tone_curve_sparse[:,0]
247 | y = tone_curve_sparse[:,1]
248 | y_gamma = correct_gamma(x, 2.2)
249 | y_combined = lamda * y + (1-lamda) * y_gamma
250 | z_combined = np.polyfit(x, y_combined, 4)
251 | p_combined = np.poly1d(z_combined)
252 | rgb_combined = p_combined(rgb_srgb_corrected)
253 | return np.reshape(rgb_combined, (channels, height, width))
254 |
--------------------------------------------------------------------------------
/new_tone_curve.txt:
--------------------------------------------------------------------------------
1 | 0.0000000 0.0000000
2 | 0.0158730 0.0124980
3 | 0.0317460 0.0411199
4 | 0.0476190 0.0840778
5 | 0.0634921 0.1326179
6 | 0.0793651 0.1814154
7 | 0.0952381 0.2288956
8 | 0.1111111 0.2742619
9 | 0.1269841 0.3153141
10 | 0.1428571 0.3517171
11 | 0.1587302 0.3845277
12 | 0.1746032 0.4145658
13 | 0.1904762 0.4424488
14 | 0.2063492 0.4686378
15 | 0.2222222 0.4934768
16 | 0.2380952 0.5172227
17 | 0.2539683 0.5400680
18 | 0.2698413 0.5621576
19 | 0.2857143 0.5835997
20 | 0.3015873 0.6044750
21 | 0.3174603 0.6248435
22 | 0.3333333 0.6447492
23 | 0.3492064 0.6642243
24 | 0.3650794 0.6832920
25 | 0.3809524 0.7019538
26 | 0.3968254 0.7201142
27 | 0.4126984 0.7376656
28 | 0.4285714 0.7545304
29 | 0.4444444 0.7706557
30 | 0.4603175 0.7860080
31 | 0.4761905 0.8005697
32 | 0.4920635 0.8143359
33 | 0.5079365 0.8273113
34 | 0.5238096 0.8395090
35 | 0.5396826 0.8509476
36 | 0.5555556 0.8616506
37 | 0.5714286 0.8716488
38 | 0.5873016 0.8809972
39 | 0.6031746 0.8897540
40 | 0.6190476 0.8979700
41 | 0.6349207 0.9056903
42 | 0.6507937 0.9129540
43 | 0.6666667 0.9197963
44 | 0.6825397 0.9262479
45 | 0.6984127 0.9323359
46 | 0.7142857 0.9380845
47 | 0.7301587 0.9435153
48 | 0.7460318 0.9486472
49 | 0.7619048 0.9534974
50 | 0.7777778 0.9580809
51 | 0.7936508 0.9624116
52 | 0.8095238 0.9665017
53 | 0.8253968 0.9703622
54 | 0.8412699 0.9740032
55 | 0.8571429 0.9774337
56 | 0.8730159 0.9796620
57 | 0.8888889 0.9826957
58 | 0.9047619 0.9855417
59 | 0.9206349 0.9882063
60 | 0.9365079 0.9906955
61 | 0.9523810 0.9930145
62 | 0.9682540 0.9951685
63 | 0.9841270 0.9971622
64 | 1.0000000 1.0000000
--------------------------------------------------------------------------------
/preprocess/preprocess_canon.py:
--------------------------------------------------------------------------------
1 | # For Canon
2 | # Save parameters in csv file
3 | # Save resized raw data as numpy array
4 | # Save resized rgb image as a new jpg
5 | import csv
6 | import PIL.Image
7 | import PIL.ExifTags
8 | import rawpy
9 | import numpy as np
10 | import os
11 | import cv2
12 | import pyexiftool.exiftool as exiftool
13 | import sys
14 | import argparse
15 |
16 | def get_bayer(path):
17 | white_lv = 15000
18 | black_lv = 2048
19 | raw = rawpy.imread(path)
20 | camera_color_matrix = raw.color_matrix[:,:3]
21 | wb = raw.camera_whitebalance
22 | # Center crop following the ISP of camera
23 | bayer=raw.raw_image_visible.astype(np.float32)[12:3660, 12:5484]
24 | bayer = (bayer - black_lv) / (white_lv -black_lv)
25 | return bayer, camera_color_matrix, wb
26 |
27 | def pack_bayer(bayer):
28 | bayer_shape = bayer.shape
29 | H = bayer_shape[0]
30 | W = bayer_shape[1]
31 | bayer = np.expand_dims(bayer,axis=0)
32 | reshaped = np.concatenate((bayer[:,0:H:2,0:W:2],
33 | bayer[:,0:H:2,1:W:2],
34 | bayer[:,1:H:2,1:W:2],
35 | bayer[:,1:H:2,0:W:2]), axis=0)
36 |
37 | return reshaped
38 |
39 | def generate_list(input_dir):
40 | file_list = []
41 |     for home, dirs, files in os.walk(input_dir):
42 | for name in sorted(files):
43 | if name.lower().endswith('.jpg'):
44 | file_list.append(os.path.join(home, name))
45 | return file_list
46 |
47 | def resize_bayer(bayer, image_size):
48 | bayer_shape = bayer.shape
49 | H = bayer_shape[1]
50 | W = bayer_shape[2]
51 | new_h = image_size
52 | new_w = int(new_h * W / float(H))
53 | resized = np.zeros([4, new_h, new_w], dtype=np.float32)
54 | resized[0,:,:] = cv2.resize(bayer[0,:,:], dsize=(new_w, new_h), interpolation=cv2.INTER_NEAREST)
55 | resized[1,:,:] = cv2.resize(bayer[1,:,:], dsize=(new_w, new_h), interpolation=cv2.INTER_NEAREST)
56 | resized[2,:,:] = cv2.resize(bayer[2,:,:], dsize=(new_w, new_h), interpolation=cv2.INTER_NEAREST)
57 | resized[3,:,:] = cv2.resize(bayer[3,:,:], dsize=(new_w, new_h), interpolation=cv2.INTER_NEAREST)
58 | return resized
59 |
60 |
61 | def process_data(input_dir, output_dir, image_size):
62 | crop_size = image_size
63 | patch = False
64 | params_path = "camera_params.csv"
65 | params_isp_path = "camera_isp_params.csv"
66 | if not os.path.isdir(output_dir):
67 | os.makedirs(output_dir)
68 | input_files = generate_list(input_dir)
69 | all_list = []
70 | all_list.append(["path","time","iso","aperture","noiseprofile"])
71 | isp_list = []
72 | isp_list.append(["path","AnalogBalance",
73 | "ColorMatrix1", "ColorMatrix2", "ForwardMatrix1", "ForwardMatrix2", "AsShotNeutral",
74 | "CameraCalibration1", "CameraCalibration2", 'BaselineExposure', "ColorTempAuto",
75 | "ProfileHueSatMapDims", "ProfileLookTableDims", "Make", "ProfileHueSatMapData1",
76 | "ProfileHueSatMapData2", "ProfileLookTableData", "CameraColorMatrix"])
77 |
78 |
79 |
80 |
81 | for path in input_files:
82 | #read isp params
83 | with exiftool.ExifTool() as et:
84 | print(path[:-4]+".dng")
85 | metadata = et.get_metadata(path[:-4]+".dng")
86 | exif_dict = et.get_tags(("AnalogBalance", "ColorMatrix1", "ColorMatrix2", "ForwardMatrix1", "ForwardMatrix2",
87 | "AsShotNeutral","CameraCalibration1", "CameraCalibration2", 'BaselineExposure',
88 | "ColorTempAuto","ProfileHueSatMapDims", "ProfileLookTableDims", "Make", "NoiseProfile","FNumber"), path[:-4]+".dng")
89 | exif_map_dict = et.get_tags(("b", "ProfileHueSatMapData1", "ProfileHueSatMapData2",
90 | "ProfileLookTableData"), path[:-4]+".dng")
91 |
92 | # resize rgb
93 | print(path)
94 | params_list = []
95 | params_isp_list = []
96 | rgb = PIL.Image.open(path)
97 | W, H = rgb.size
98 | new_h = image_size
99 | new_w = int(new_h * W / float(H))
100 | rgb_resized = rgb.resize((new_w, new_h), resample=PIL.Image.BILINEAR)
101 | # get exif data
102 | exif = {
103 | PIL.ExifTags.TAGS[k]: v
104 | for k, v in rgb._getexif().items()
105 | if k in PIL.ExifTags.TAGS
106 | }
107 | exposure_time = exif["ExposureTime"][0] / float(exif["ExposureTime"][1])
108 | iso = exif["ISOSpeedRatings"]
109 | aperture = float(exif_dict["EXIF:FNumber"])
110 | noiseprofile = exif_dict["EXIF:NoiseProfile"]
111 | output_scene_dir = output_dir + "/" + path.split("/")[-2]
112 | if not os.path.isdir(output_scene_dir):
113 | os.makedirs(output_scene_dir)
114 | new_path = output_scene_dir + "/" + path.split("/")[-1]
115 | # quality to highest
116 | rgb_resized.save(new_path, quality=100)
117 | params_list.append(new_path)
118 | params_list.append(exposure_time)
119 | params_list.append(iso)
120 | # resize raw
121 | raw, camera_color_matrix, wb = get_bayer(path[:-4]+".dng")
122 | camera_color_matrix_re = np.reshape(camera_color_matrix, -1)
123 | camera_color_matrix_str = str(camera_color_matrix_re[0])
124 | for i in range(camera_color_matrix_re.shape[0]-1):
125 | camera_color_matrix_str = camera_color_matrix_str + " "
126 | camera_color_matrix_str = camera_color_matrix_str + str(camera_color_matrix_re[i+1])
127 | params_list.append(aperture)
128 |
129 | params_list.append(noiseprofile)
130 | # pack and resize
131 | raw = pack_bayer(raw)
132 | raw_resized = resize_bayer(raw, image_size)
133 | all_list.append(params_list)
134 | np.save(new_path[:-4]+".npy", raw_resized)
135 | #get params for isp
136 | params_isp_list.append(new_path)
137 | for i in isp_list[0][1:17]:
138 | if i=="ProfileHueSatMapData1" or i=="ProfileHueSatMapData2" or i=="ProfileLookTableData":
139 | params_isp_list.append(exif_map_dict["EXIF:"+i])
140 | elif i=="ColorTempAuto":
141 | params_isp_list.append(exif_dict["MakerNotes:ColorTempAuto"])
142 | else:
143 | params_isp_list.append(exif_dict["EXIF:"+i])
144 | params_isp_list.append(camera_color_matrix_str)
145 | isp_list.append(params_isp_list)
146 |
147 |
148 | # save to csv
149 | csv.field_size_limit(sys.maxsize)
150 |
151 | output_params_path = output_dir + "/" + params_path
152 | output_stream = open(output_params_path, 'w+')
153 | csvWriter = csv.writer(output_stream)
154 | for row in all_list:
155 | csvWriter.writerow(row)
156 | output_stream.close()
157 |
158 | output_params_isp_path = output_dir + "/" + params_isp_path
159 | output_stream = open(output_params_isp_path, 'w+')
160 | csvWriter = csv.writer(output_stream)
161 | for row in isp_list:
162 | csvWriter.writerow(row)
163 | output_stream.close()
164 |
165 | if __name__ == '__main__':
166 | parser = argparse.ArgumentParser()
167 | parser.add_argument('--input_dir', type=str, required=True)
168 | parser.add_argument('--output_dir', type=str, default='ProcessedData/AutoModeNikon_Train')
169 | parser.add_argument('--image_size', type=int, default=512)
170 | args = parser.parse_args()
171 | process_data(args.input_dir, args.output_dir, args.image_size)
--------------------------------------------------------------------------------
/preprocess/preprocess_nikon.py:
--------------------------------------------------------------------------------
1 | # For Nikon
2 | # Save parameters in csv file
3 | # Save resized raw data as numpy array
4 | # Save resized rgb image as a new jpg
5 | import csv
6 | import PIL.Image
7 | import PIL.ExifTags
8 | import rawpy
9 | import numpy as np
10 | import os
11 | import cv2
12 | import pyexiftool.exiftool as exiftool
13 | import sys
14 | import argparse
15 |
16 | def get_bayer(path):
17 | white_lv = 15520
18 | black_lv = 1008
19 | raw = rawpy.imread(path)
20 | camera_color_matrix = raw.color_matrix[:,:3]
21 | wb = raw.camera_whitebalance
22 | # Center crop following the ISP of camera
23 | bayer=raw.raw_image_visible.astype(np.float32)[8:4032, 8:6056]
24 | bayer = (bayer - black_lv) / (white_lv -black_lv)
25 | return bayer, camera_color_matrix, wb
26 |
27 | def pack_bayer(bayer):
28 | bayer_shape = bayer.shape
29 | H = bayer_shape[0]
30 | W = bayer_shape[1]
31 | bayer = np.expand_dims(bayer,axis=0)
32 | reshaped = np.concatenate((bayer[:,0:H:2,0:W:2],
33 | bayer[:,0:H:2,1:W:2],
34 | bayer[:,1:H:2,1:W:2],
35 | bayer[:,1:H:2,0:W:2]), axis=0)
36 |
37 | return reshaped
38 |
39 | def generate_list(input_dir):
40 | file_list = []
41 |     for home, dirs, files in os.walk(input_dir):
42 | for name in sorted(files):
43 | if name.lower().endswith('.jpg'):
44 | file_list.append(os.path.join(home, name))
45 | return file_list
46 |
47 | def resize_bayer(bayer, image_size):
48 | bayer_shape = bayer.shape
49 | H = bayer_shape[1]
50 | W = bayer_shape[2]
51 | new_h = image_size
52 | if image_size == 512:
53 | new_w = int(new_h * W / float(H)+1)
54 | else:
55 | new_w = int(new_h * W / float(H))
56 | resized = np.zeros([4, new_h, new_w], dtype=np.float32)
57 |
58 | resized[0,:,:] = cv2.resize(bayer[0,:,:], dsize=(new_w, new_h), interpolation=cv2.INTER_NEAREST)
59 | resized[1,:,:] = cv2.resize(bayer[1,:,:], dsize=(new_w, new_h), interpolation=cv2.INTER_NEAREST)
60 | resized[2,:,:] = cv2.resize(bayer[2,:,:], dsize=(new_w, new_h), interpolation=cv2.INTER_NEAREST)
61 | resized[3,:,:] = cv2.resize(bayer[3,:,:], dsize=(new_w, new_h), interpolation=cv2.INTER_NEAREST)
62 | return resized
63 |
64 |
65 | def process_data(input_dir, output_dir, image_size):
66 | crop_size = image_size
67 | patch = False
68 | params_path = "camera_params.csv"
69 | params_isp_path = "camera_isp_params.csv"
70 | if not os.path.isdir(output_dir):
71 | os.makedirs(output_dir)
72 | input_files = generate_list(input_dir)
73 | all_list = []
74 | all_list.append(["path","time","iso","aperture","wb","noiseprofile"])
75 | isp_list = []
76 | isp_list.append(["path","AnalogBalance",
77 | "ColorMatrix1", "ColorMatrix2", "ForwardMatrix1", "ForwardMatrix2", "AsShotNeutral",
78 | "CameraCalibration1", "CameraCalibration2", 'BaselineExposure', "ColorTemperatureAuto",
79 | "ProfileHueSatMapDims", "ProfileLookTableDims", "Make", "ProfileHueSatMapData1",
80 | "ProfileHueSatMapData2", "ProfileLookTableData", "CameraColorMatrix"])
81 |
82 |
83 | for path in input_files:
84 | #read isp params
85 | with exiftool.ExifTool() as et:
86 | print(path[:-4]+".dng")
87 | metadata = et.get_metadata(path[:-4]+".dng")
88 | exif_dict = et.get_tags(("AnalogBalance", "ColorMatrix1", "ColorMatrix2", "ForwardMatrix1", "ForwardMatrix2",
89 | "AsShotNeutral","CameraCalibration1", "CameraCalibration2", 'BaselineExposure',
90 | "ColorTemperatureAuto","ProfileHueSatMapDims", "ProfileLookTableDims", "Make", "NoiseProfile","FNumber"), path[:-4]+".dng")
91 | exif_map_dict = et.get_tags(("b", "ProfileHueSatMapData1", "ProfileHueSatMapData2",
92 | "ProfileLookTableData"), path[:-4]+".dng")
93 |
94 | # resize rgb
95 | print(path)
96 | params_list = []
97 | params_isp_list = []
98 | rgb = PIL.Image.open(path)
99 | W, H = rgb.size
100 | new_h = image_size
101 | if image_size == 512:
102 | new_w = int(new_h * W / float(H)+1)
103 | else:
104 | new_w = int(new_h * W / float(H))
105 | rgb_resized = rgb.resize((new_w, new_h), resample=PIL.Image.BILINEAR)
106 | # get exif data
107 | exif = {
108 | PIL.ExifTags.TAGS[k]: v
109 | for k, v in rgb._getexif().items()
110 | if k in PIL.ExifTags.TAGS
111 | }
112 | exposure_time = exif["ExposureTime"][0] / float(exif["ExposureTime"][1])
113 | iso = exif["ISOSpeedRatings"]
114 | aperture = float(exif_dict["EXIF:FNumber"])
115 | noiseprofile =exif_dict["EXIF:NoiseProfile"]
116 | output_scene_dir = output_dir + "/" + path.split("/")[-2]
117 | if not os.path.isdir(output_scene_dir):
118 | os.makedirs(output_scene_dir)
119 | new_path = output_scene_dir + "/" + path.split("/")[-1]
120 | # quality to highest
121 | rgb_resized.save(new_path, quality=100)
122 | params_list.append(new_path)
123 | params_list.append(exposure_time)
124 | params_list.append(iso)
125 | # resize raw
126 | raw, camera_color_matrix, wb = get_bayer(path[:-4]+".dng")
127 | camera_color_matrix_re = np.reshape(camera_color_matrix, -1)
128 | camera_color_matrix_str = str(camera_color_matrix_re[0])
129 | for i in range(camera_color_matrix_re.shape[0]-1):
130 | camera_color_matrix_str = camera_color_matrix_str + " "
131 | camera_color_matrix_str = camera_color_matrix_str + str(camera_color_matrix_re[i+1])
132 | params_list.append(aperture)
133 | params_list.append(wb)
134 | params_list.append(noiseprofile)
135 | # pack and resize
136 | raw = pack_bayer(raw)
137 | raw_resized = resize_bayer(raw, image_size)
138 | all_list.append(params_list)
139 | np.save(new_path[:-4]+".npy", raw_resized)
140 |
141 | #get params for isp
142 |
143 | params_isp_list.append(new_path)
144 | for i in isp_list[0][1:17]:
145 | if i=="ProfileHueSatMapData1" or i=="ProfileHueSatMapData2" or i=="ProfileLookTableData":
146 | params_isp_list.append(exif_map_dict["EXIF:"+i])
147 | elif i=="ColorTemperatureAuto":
148 | params_isp_list.append(exif_dict["MakerNotes:ColorTemperatureAuto"])
149 | else:
150 | params_isp_list.append(exif_dict["EXIF:"+i])
151 | params_isp_list.append(camera_color_matrix_str)
152 | isp_list.append(params_isp_list)
153 |
154 |
155 | # save to csv
156 | csv.field_size_limit(sys.maxsize)
157 |
158 | output_params_path = output_dir + "/" + params_path
159 | output_stream = open(output_params_path, 'w+')
160 | csvWriter = csv.writer(output_stream)
161 | for row in all_list:
162 | csvWriter.writerow(row)
163 | output_stream.close()
164 |
165 | output_params_isp_path = output_dir + "/" + params_isp_path
166 | output_stream = open(output_params_isp_path, 'w+')
167 | csvWriter = csv.writer(output_stream)
168 | for row in isp_list:
169 | csvWriter.writerow(row)
170 | output_stream.close()
171 |
172 | if __name__ == '__main__':
173 | parser = argparse.ArgumentParser()
174 | parser.add_argument('--input_dir', type=str, required=True)
175 | parser.add_argument('--output_dir', type=str, default='ProcessedData/AutoModeNikon_Train')
176 | parser.add_argument('--image_size', type=int, default=512)
177 | args = parser.parse_args()
178 | process_data(args.input_dir, args.output_dir, args.image_size)
179 |
--------------------------------------------------------------------------------
/pyexiftool/COPYING.BSD:
--------------------------------------------------------------------------------
1 | Copyright 2012 Sven Marnach
2 | All rights reserved.
3 |
4 | Redistribution and use in source and binary forms, with or without
5 | modification, are permitted provided that the following conditions are met:
6 | * Redistributions of source code must retain the above copyright notice,
7 | this list of conditions and the following disclaimer.
8 | * Redistributions in binary form must reproduce the above copyright notice,
9 | this list of conditions and the following disclaimer in the documentation
10 | and/or other materials provided with the distribution.
11 | * The names of its contributors may not be used to endorse or promote
12 | products derived from this software without specific prior written
13 | permission.
14 |
15 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
16 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
17 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
18 | DISCLAIMED. IN NO EVENT SHALL SVEN MARNACH BE LIABLE FOR ANY DIRECT, INDIRECT,
19 | INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
20 | LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
21 | PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
22 | LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
23 | OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
24 | ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
25 |
--------------------------------------------------------------------------------
/pyexiftool/COPYING.GPL:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 |
4 | Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 |
8 | Preamble
9 |
10 | The GNU General Public License is a free, copyleft license for
11 | software and other kinds of works.
12 |
13 | The licenses for most software and other practical works are designed
14 | to take away your freedom to share and change the works. By contrast,
15 | the GNU General Public License is intended to guarantee your freedom to
16 | share and change all versions of a program--to make sure it remains free
17 | software for all its users. We, the Free Software Foundation, use the
18 | GNU General Public License for most of our software; it applies also to
19 | any other work released this way by its authors. You can apply it to
20 | your programs, too.
21 |
22 | When we speak of free software, we are referring to freedom, not
23 | price. Our General Public Licenses are designed to make sure that you
24 | have the freedom to distribute copies of free software (and charge for
25 | them if you wish), that you receive source code or can get it if you
26 | want it, that you can change the software or use pieces of it in new
27 | free programs, and that you know you can do these things.
28 |
29 | To protect your rights, we need to prevent others from denying you
30 | these rights or asking you to surrender the rights. Therefore, you have
31 | certain responsibilities if you distribute copies of the software, or if
32 | you modify it: responsibilities to respect the freedom of others.
33 |
34 | For example, if you distribute copies of such a program, whether
35 | gratis or for a fee, you must pass on to the recipients the same
36 | freedoms that you received. You must make sure that they, too, receive
37 | or can get the source code. And you must show them these terms so they
38 | know their rights.
39 |
40 | Developers that use the GNU GPL protect your rights with two steps:
41 | (1) assert copyright on the software, and (2) offer you this License
42 | giving you legal permission to copy, distribute and/or modify it.
43 |
44 | For the developers' and authors' protection, the GPL clearly explains
45 | that there is no warranty for this free software. For both users' and
46 | authors' sake, the GPL requires that modified versions be marked as
47 | changed, so that their problems will not be attributed erroneously to
48 | authors of previous versions.
49 |
50 | Some devices are designed to deny users access to install or run
51 | modified versions of the software inside them, although the manufacturer
52 | can do so. This is fundamentally incompatible with the aim of
53 | protecting users' freedom to change the software. The systematic
54 | pattern of such abuse occurs in the area of products for individuals to
55 | use, which is precisely where it is most unacceptable. Therefore, we
56 | have designed this version of the GPL to prohibit the practice for those
57 | products. If such problems arise substantially in other domains, we
58 | stand ready to extend this provision to those domains in future versions
59 | of the GPL, as needed to protect the freedom of users.
60 |
61 | Finally, every program is threatened constantly by software patents.
62 | States should not allow patents to restrict development and use of
63 | software on general-purpose computers, but in those that do, we wish to
64 | avoid the special danger that patents applied to a free program could
65 | make it effectively proprietary. To prevent this, the GPL assures that
66 | patents cannot be used to render the program non-free.
67 |
68 | The precise terms and conditions for copying, distribution and
69 | modification follow.
70 |
71 | TERMS AND CONDITIONS
72 |
73 | 0. Definitions.
74 |
75 | "This License" refers to version 3 of the GNU General Public License.
76 |
77 | "Copyright" also means copyright-like laws that apply to other kinds of
78 | works, such as semiconductor masks.
79 |
80 | "The Program" refers to any copyrightable work licensed under this
81 | License. Each licensee is addressed as "you". "Licensees" and
82 | "recipients" may be individuals or organizations.
83 |
84 | To "modify" a work means to copy from or adapt all or part of the work
85 | in a fashion requiring copyright permission, other than the making of an
86 | exact copy. The resulting work is called a "modified version" of the
87 | earlier work or a work "based on" the earlier work.
88 |
89 | A "covered work" means either the unmodified Program or a work based
90 | on the Program.
91 |
92 | To "propagate" a work means to do anything with it that, without
93 | permission, would make you directly or secondarily liable for
94 | infringement under applicable copyright law, except executing it on a
95 | computer or modifying a private copy. Propagation includes copying,
96 | distribution (with or without modification), making available to the
97 | public, and in some countries other activities as well.
98 |
99 | To "convey" a work means any kind of propagation that enables other
100 | parties to make or receive copies. Mere interaction with a user through
101 | a computer network, with no transfer of a copy, is not conveying.
102 |
103 | An interactive user interface displays "Appropriate Legal Notices"
104 | to the extent that it includes a convenient and prominently visible
105 | feature that (1) displays an appropriate copyright notice, and (2)
106 | tells the user that there is no warranty for the work (except to the
107 | extent that warranties are provided), that licensees may convey the
108 | work under this License, and how to view a copy of this License. If
109 | the interface presents a list of user commands or options, such as a
110 | menu, a prominent item in the list meets this criterion.
111 |
112 | 1. Source Code.
113 |
114 | The "source code" for a work means the preferred form of the work
115 | for making modifications to it. "Object code" means any non-source
116 | form of a work.
117 |
118 | A "Standard Interface" means an interface that either is an official
119 | standard defined by a recognized standards body, or, in the case of
120 | interfaces specified for a particular programming language, one that
121 | is widely used among developers working in that language.
122 |
123 | The "System Libraries" of an executable work include anything, other
124 | than the work as a whole, that (a) is included in the normal form of
125 | packaging a Major Component, but which is not part of that Major
126 | Component, and (b) serves only to enable use of the work with that
127 | Major Component, or to implement a Standard Interface for which an
128 | implementation is available to the public in source code form. A
129 | "Major Component", in this context, means a major essential component
130 | (kernel, window system, and so on) of the specific operating system
131 | (if any) on which the executable work runs, or a compiler used to
132 | produce the work, or an object code interpreter used to run it.
133 |
134 | The "Corresponding Source" for a work in object code form means all
135 | the source code needed to generate, install, and (for an executable
136 | work) run the object code and to modify the work, including scripts to
137 | control those activities. However, it does not include the work's
138 | System Libraries, or general-purpose tools or generally available free
139 | programs which are used unmodified in performing those activities but
140 | which are not part of the work. For example, Corresponding Source
141 | includes interface definition files associated with source files for
142 | the work, and the source code for shared libraries and dynamically
143 | linked subprograms that the work is specifically designed to require,
144 | such as by intimate data communication or control flow between those
145 | subprograms and other parts of the work.
146 |
147 | The Corresponding Source need not include anything that users
148 | can regenerate automatically from other parts of the Corresponding
149 | Source.
150 |
151 | The Corresponding Source for a work in source code form is that
152 | same work.
153 |
154 | 2. Basic Permissions.
155 |
156 | All rights granted under this License are granted for the term of
157 | copyright on the Program, and are irrevocable provided the stated
158 | conditions are met. This License explicitly affirms your unlimited
159 | permission to run the unmodified Program. The output from running a
160 | covered work is covered by this License only if the output, given its
161 | content, constitutes a covered work. This License acknowledges your
162 | rights of fair use or other equivalent, as provided by copyright law.
163 |
164 | You may make, run and propagate covered works that you do not
165 | convey, without conditions so long as your license otherwise remains
166 | in force. You may convey covered works to others for the sole purpose
167 | of having them make modifications exclusively for you, or provide you
168 | with facilities for running those works, provided that you comply with
169 | the terms of this License in conveying all material for which you do
170 | not control copyright. Those thus making or running the covered works
171 | for you must do so exclusively on your behalf, under your direction
172 | and control, on terms that prohibit them from making any copies of
173 | your copyrighted material outside their relationship with you.
174 |
175 | Conveying under any other circumstances is permitted solely under
176 | the conditions stated below. Sublicensing is not allowed; section 10
177 | makes it unnecessary.
178 |
179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
180 |
181 | No covered work shall be deemed part of an effective technological
182 | measure under any applicable law fulfilling obligations under article
183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or
184 | similar laws prohibiting or restricting circumvention of such
185 | measures.
186 |
187 | When you convey a covered work, you waive any legal power to forbid
188 | circumvention of technological measures to the extent such circumvention
189 | is effected by exercising rights under this License with respect to
190 | the covered work, and you disclaim any intention to limit operation or
191 | modification of the work as a means of enforcing, against the work's
192 | users, your or third parties' legal rights to forbid circumvention of
193 | technological measures.
194 |
195 | 4. Conveying Verbatim Copies.
196 |
197 | You may convey verbatim copies of the Program's source code as you
198 | receive it, in any medium, provided that you conspicuously and
199 | appropriately publish on each copy an appropriate copyright notice;
200 | keep intact all notices stating that this License and any
201 | non-permissive terms added in accord with section 7 apply to the code;
202 | keep intact all notices of the absence of any warranty; and give all
203 | recipients a copy of this License along with the Program.
204 |
205 | You may charge any price or no price for each copy that you convey,
206 | and you may offer support or warranty protection for a fee.
207 |
208 | 5. Conveying Modified Source Versions.
209 |
210 | You may convey a work based on the Program, or the modifications to
211 | produce it from the Program, in the form of source code under the
212 | terms of section 4, provided that you also meet all of these conditions:
213 |
214 | a) The work must carry prominent notices stating that you modified
215 | it, and giving a relevant date.
216 |
217 | b) The work must carry prominent notices stating that it is
218 | released under this License and any conditions added under section
219 | 7. This requirement modifies the requirement in section 4 to
220 | "keep intact all notices".
221 |
222 | c) You must license the entire work, as a whole, under this
223 | License to anyone who comes into possession of a copy. This
224 | License will therefore apply, along with any applicable section 7
225 | additional terms, to the whole of the work, and all its parts,
226 | regardless of how they are packaged. This License gives no
227 | permission to license the work in any other way, but it does not
228 | invalidate such permission if you have separately received it.
229 |
230 | d) If the work has interactive user interfaces, each must display
231 | Appropriate Legal Notices; however, if the Program has interactive
232 | interfaces that do not display Appropriate Legal Notices, your
233 | work need not make them do so.
234 |
235 | A compilation of a covered work with other separate and independent
236 | works, which are not by their nature extensions of the covered work,
237 | and which are not combined with it such as to form a larger program,
238 | in or on a volume of a storage or distribution medium, is called an
239 | "aggregate" if the compilation and its resulting copyright are not
240 | used to limit the access or legal rights of the compilation's users
241 | beyond what the individual works permit. Inclusion of a covered work
242 | in an aggregate does not cause this License to apply to the other
243 | parts of the aggregate.
244 |
245 | 6. Conveying Non-Source Forms.
246 |
247 | You may convey a covered work in object code form under the terms
248 | of sections 4 and 5, provided that you also convey the
249 | machine-readable Corresponding Source under the terms of this License,
250 | in one of these ways:
251 |
252 | a) Convey the object code in, or embodied in, a physical product
253 | (including a physical distribution medium), accompanied by the
254 | Corresponding Source fixed on a durable physical medium
255 | customarily used for software interchange.
256 |
257 | b) Convey the object code in, or embodied in, a physical product
258 | (including a physical distribution medium), accompanied by a
259 | written offer, valid for at least three years and valid for as
260 | long as you offer spare parts or customer support for that product
261 | model, to give anyone who possesses the object code either (1) a
262 | copy of the Corresponding Source for all the software in the
263 | product that is covered by this License, on a durable physical
264 | medium customarily used for software interchange, for a price no
265 | more than your reasonable cost of physically performing this
266 | conveying of source, or (2) access to copy the
267 | Corresponding Source from a network server at no charge.
268 |
269 | c) Convey individual copies of the object code with a copy of the
270 | written offer to provide the Corresponding Source. This
271 | alternative is allowed only occasionally and noncommercially, and
272 | only if you received the object code with such an offer, in accord
273 | with subsection 6b.
274 |
275 | d) Convey the object code by offering access from a designated
276 | place (gratis or for a charge), and offer equivalent access to the
277 | Corresponding Source in the same way through the same place at no
278 | further charge. You need not require recipients to copy the
279 | Corresponding Source along with the object code. If the place to
280 | copy the object code is a network server, the Corresponding Source
281 | may be on a different server (operated by you or a third party)
282 | that supports equivalent copying facilities, provided you maintain
283 | clear directions next to the object code saying where to find the
284 | Corresponding Source. Regardless of what server hosts the
285 | Corresponding Source, you remain obligated to ensure that it is
286 | available for as long as needed to satisfy these requirements.
287 |
288 | e) Convey the object code using peer-to-peer transmission, provided
289 | you inform other peers where the object code and Corresponding
290 | Source of the work are being offered to the general public at no
291 | charge under subsection 6d.
292 |
293 | A separable portion of the object code, whose source code is excluded
294 | from the Corresponding Source as a System Library, need not be
295 | included in conveying the object code work.
296 |
297 | A "User Product" is either (1) a "consumer product", which means any
298 | tangible personal property which is normally used for personal, family,
299 | or household purposes, or (2) anything designed or sold for incorporation
300 | into a dwelling. In determining whether a product is a consumer product,
301 | doubtful cases shall be resolved in favor of coverage. For a particular
302 | product received by a particular user, "normally used" refers to a
303 | typical or common use of that class of product, regardless of the status
304 | of the particular user or of the way in which the particular user
305 | actually uses, or expects or is expected to use, the product. A product
306 | is a consumer product regardless of whether the product has substantial
307 | commercial, industrial or non-consumer uses, unless such uses represent
308 | the only significant mode of use of the product.
309 |
310 | "Installation Information" for a User Product means any methods,
311 | procedures, authorization keys, or other information required to install
312 | and execute modified versions of a covered work in that User Product from
313 | a modified version of its Corresponding Source. The information must
314 | suffice to ensure that the continued functioning of the modified object
315 | code is in no case prevented or interfered with solely because
316 | modification has been made.
317 |
318 | If you convey an object code work under this section in, or with, or
319 | specifically for use in, a User Product, and the conveying occurs as
320 | part of a transaction in which the right of possession and use of the
321 | User Product is transferred to the recipient in perpetuity or for a
322 | fixed term (regardless of how the transaction is characterized), the
323 | Corresponding Source conveyed under this section must be accompanied
324 | by the Installation Information. But this requirement does not apply
325 | if neither you nor any third party retains the ability to install
326 | modified object code on the User Product (for example, the work has
327 | been installed in ROM).
328 |
329 | The requirement to provide Installation Information does not include a
330 | requirement to continue to provide support service, warranty, or updates
331 | for a work that has been modified or installed by the recipient, or for
332 | the User Product in which it has been modified or installed. Access to a
333 | network may be denied when the modification itself materially and
334 | adversely affects the operation of the network or violates the rules and
335 | protocols for communication across the network.
336 |
337 | Corresponding Source conveyed, and Installation Information provided,
338 | in accord with this section must be in a format that is publicly
339 | documented (and with an implementation available to the public in
340 | source code form), and must require no special password or key for
341 | unpacking, reading or copying.
342 |
343 | 7. Additional Terms.
344 |
345 | "Additional permissions" are terms that supplement the terms of this
346 | License by making exceptions from one or more of its conditions.
347 | Additional permissions that are applicable to the entire Program shall
348 | be treated as though they were included in this License, to the extent
349 | that they are valid under applicable law. If additional permissions
350 | apply only to part of the Program, that part may be used separately
351 | under those permissions, but the entire Program remains governed by
352 | this License without regard to the additional permissions.
353 |
354 | When you convey a copy of a covered work, you may at your option
355 | remove any additional permissions from that copy, or from any part of
356 | it. (Additional permissions may be written to require their own
357 | removal in certain cases when you modify the work.) You may place
358 | additional permissions on material, added by you to a covered work,
359 | for which you have or can give appropriate copyright permission.
360 |
361 | Notwithstanding any other provision of this License, for material you
362 | add to a covered work, you may (if authorized by the copyright holders of
363 | that material) supplement the terms of this License with terms:
364 |
365 | a) Disclaiming warranty or limiting liability differently from the
366 | terms of sections 15 and 16 of this License; or
367 |
368 | b) Requiring preservation of specified reasonable legal notices or
369 | author attributions in that material or in the Appropriate Legal
370 | Notices displayed by works containing it; or
371 |
372 | c) Prohibiting misrepresentation of the origin of that material, or
373 | requiring that modified versions of such material be marked in
374 | reasonable ways as different from the original version; or
375 |
376 | d) Limiting the use for publicity purposes of names of licensors or
377 | authors of the material; or
378 |
379 | e) Declining to grant rights under trademark law for use of some
380 | trade names, trademarks, or service marks; or
381 |
382 | f) Requiring indemnification of licensors and authors of that
383 | material by anyone who conveys the material (or modified versions of
384 | it) with contractual assumptions of liability to the recipient, for
385 | any liability that these contractual assumptions directly impose on
386 | those licensors and authors.
387 |
388 | All other non-permissive additional terms are considered "further
389 | restrictions" within the meaning of section 10. If the Program as you
390 | received it, or any part of it, contains a notice stating that it is
391 | governed by this License along with a term that is a further
392 | restriction, you may remove that term. If a license document contains
393 | a further restriction but permits relicensing or conveying under this
394 | License, you may add to a covered work material governed by the terms
395 | of that license document, provided that the further restriction does
396 | not survive such relicensing or conveying.
397 |
398 | If you add terms to a covered work in accord with this section, you
399 | must place, in the relevant source files, a statement of the
400 | additional terms that apply to those files, or a notice indicating
401 | where to find the applicable terms.
402 |
403 | Additional terms, permissive or non-permissive, may be stated in the
404 | form of a separately written license, or stated as exceptions;
405 | the above requirements apply either way.
406 |
407 | 8. Termination.
408 |
409 | You may not propagate or modify a covered work except as expressly
410 | provided under this License. Any attempt otherwise to propagate or
411 | modify it is void, and will automatically terminate your rights under
412 | this License (including any patent licenses granted under the third
413 | paragraph of section 11).
414 |
415 | However, if you cease all violation of this License, then your
416 | license from a particular copyright holder is reinstated (a)
417 | provisionally, unless and until the copyright holder explicitly and
418 | finally terminates your license, and (b) permanently, if the copyright
419 | holder fails to notify you of the violation by some reasonable means
420 | prior to 60 days after the cessation.
421 |
422 | Moreover, your license from a particular copyright holder is
423 | reinstated permanently if the copyright holder notifies you of the
424 | violation by some reasonable means, this is the first time you have
425 | received notice of violation of this License (for any work) from that
426 | copyright holder, and you cure the violation prior to 30 days after
427 | your receipt of the notice.
428 |
429 | Termination of your rights under this section does not terminate the
430 | licenses of parties who have received copies or rights from you under
431 | this License. If your rights have been terminated and not permanently
432 | reinstated, you do not qualify to receive new licenses for the same
433 | material under section 10.
434 |
435 | 9. Acceptance Not Required for Having Copies.
436 |
437 | You are not required to accept this License in order to receive or
438 | run a copy of the Program. Ancillary propagation of a covered work
439 | occurring solely as a consequence of using peer-to-peer transmission
440 | to receive a copy likewise does not require acceptance. However,
441 | nothing other than this License grants you permission to propagate or
442 | modify any covered work. These actions infringe copyright if you do
443 | not accept this License. Therefore, by modifying or propagating a
444 | covered work, you indicate your acceptance of this License to do so.
445 |
446 | 10. Automatic Licensing of Downstream Recipients.
447 |
448 | Each time you convey a covered work, the recipient automatically
449 | receives a license from the original licensors, to run, modify and
450 | propagate that work, subject to this License. You are not responsible
451 | for enforcing compliance by third parties with this License.
452 |
453 | An "entity transaction" is a transaction transferring control of an
454 | organization, or substantially all assets of one, or subdividing an
455 | organization, or merging organizations. If propagation of a covered
456 | work results from an entity transaction, each party to that
457 | transaction who receives a copy of the work also receives whatever
458 | licenses to the work the party's predecessor in interest had or could
459 | give under the previous paragraph, plus a right to possession of the
460 | Corresponding Source of the work from the predecessor in interest, if
461 | the predecessor has it or can get it with reasonable efforts.
462 |
463 | You may not impose any further restrictions on the exercise of the
464 | rights granted or affirmed under this License. For example, you may
465 | not impose a license fee, royalty, or other charge for exercise of
466 | rights granted under this License, and you may not initiate litigation
467 | (including a cross-claim or counterclaim in a lawsuit) alleging that
468 | any patent claim is infringed by making, using, selling, offering for
469 | sale, or importing the Program or any portion of it.
470 |
471 | 11. Patents.
472 |
473 | A "contributor" is a copyright holder who authorizes use under this
474 | License of the Program or a work on which the Program is based. The
475 | work thus licensed is called the contributor's "contributor version".
476 |
477 | A contributor's "essential patent claims" are all patent claims
478 | owned or controlled by the contributor, whether already acquired or
479 | hereafter acquired, that would be infringed by some manner, permitted
480 | by this License, of making, using, or selling its contributor version,
481 | but do not include claims that would be infringed only as a
482 | consequence of further modification of the contributor version. For
483 | purposes of this definition, "control" includes the right to grant
484 | patent sublicenses in a manner consistent with the requirements of
485 | this License.
486 |
487 | Each contributor grants you a non-exclusive, worldwide, royalty-free
488 | patent license under the contributor's essential patent claims, to
489 | make, use, sell, offer for sale, import and otherwise run, modify and
490 | propagate the contents of its contributor version.
491 |
492 | In the following three paragraphs, a "patent license" is any express
493 | agreement or commitment, however denominated, not to enforce a patent
494 | (such as an express permission to practice a patent or covenant not to
495 | sue for patent infringement). To "grant" such a patent license to a
496 | party means to make such an agreement or commitment not to enforce a
497 | patent against the party.
498 |
499 | If you convey a covered work, knowingly relying on a patent license,
500 | and the Corresponding Source of the work is not available for anyone
501 | to copy, free of charge and under the terms of this License, through a
502 | publicly available network server or other readily accessible means,
503 | then you must either (1) cause the Corresponding Source to be so
504 | available, or (2) arrange to deprive yourself of the benefit of the
505 | patent license for this particular work, or (3) arrange, in a manner
506 | consistent with the requirements of this License, to extend the patent
507 | license to downstream recipients. "Knowingly relying" means you have
508 | actual knowledge that, but for the patent license, your conveying the
509 | covered work in a country, or your recipient's use of the covered work
510 | in a country, would infringe one or more identifiable patents in that
511 | country that you have reason to believe are valid.
512 |
513 | If, pursuant to or in connection with a single transaction or
514 | arrangement, you convey, or propagate by procuring conveyance of, a
515 | covered work, and grant a patent license to some of the parties
516 | receiving the covered work authorizing them to use, propagate, modify
517 | or convey a specific copy of the covered work, then the patent license
518 | you grant is automatically extended to all recipients of the covered
519 | work and works based on it.
520 |
521 | A patent license is "discriminatory" if it does not include within
522 | the scope of its coverage, prohibits the exercise of, or is
523 | conditioned on the non-exercise of one or more of the rights that are
524 | specifically granted under this License. You may not convey a covered
525 | work if you are a party to an arrangement with a third party that is
526 | in the business of distributing software, under which you make payment
527 | to the third party based on the extent of your activity of conveying
528 | the work, and under which the third party grants, to any of the
529 | parties who would receive the covered work from you, a discriminatory
530 | patent license (a) in connection with copies of the covered work
531 | conveyed by you (or copies made from those copies), or (b) primarily
532 | for and in connection with specific products or compilations that
533 | contain the covered work, unless you entered into that arrangement,
534 | or that patent license was granted, prior to 28 March 2007.
535 |
536 | Nothing in this License shall be construed as excluding or limiting
537 | any implied license or other defenses to infringement that may
538 | otherwise be available to you under applicable patent law.
539 |
540 | 12. No Surrender of Others' Freedom.
541 |
542 | If conditions are imposed on you (whether by court order, agreement or
543 | otherwise) that contradict the conditions of this License, they do not
544 | excuse you from the conditions of this License. If you cannot convey a
545 | covered work so as to satisfy simultaneously your obligations under this
546 | License and any other pertinent obligations, then as a consequence you may
547 | not convey it at all. For example, if you agree to terms that obligate you
548 | to collect a royalty for further conveying from those to whom you convey
549 | the Program, the only way you could satisfy both those terms and this
550 | License would be to refrain entirely from conveying the Program.
551 |
552 | 13. Use with the GNU Affero General Public License.
553 |
554 | Notwithstanding any other provision of this License, you have
555 | permission to link or combine any covered work with a work licensed
556 | under version 3 of the GNU Affero General Public License into a single
557 | combined work, and to convey the resulting work. The terms of this
558 | License will continue to apply to the part which is the covered work,
559 | but the special requirements of the GNU Affero General Public License,
560 | section 13, concerning interaction through a network will apply to the
561 | combination as such.
562 |
563 | 14. Revised Versions of this License.
564 |
565 | The Free Software Foundation may publish revised and/or new versions of
566 | the GNU General Public License from time to time. Such new versions will
567 | be similar in spirit to the present version, but may differ in detail to
568 | address new problems or concerns.
569 |
570 | Each version is given a distinguishing version number. If the
571 | Program specifies that a certain numbered version of the GNU General
572 | Public License "or any later version" applies to it, you have the
573 | option of following the terms and conditions either of that numbered
574 | version or of any later version published by the Free Software
575 | Foundation. If the Program does not specify a version number of the
576 | GNU General Public License, you may choose any version ever published
577 | by the Free Software Foundation.
578 |
579 | If the Program specifies that a proxy can decide which future
580 | versions of the GNU General Public License can be used, that proxy's
581 | public statement of acceptance of a version permanently authorizes you
582 | to choose that version for the Program.
583 |
584 | Later license versions may give you additional or different
585 | permissions. However, no additional obligations are imposed on any
586 | author or copyright holder as a result of your choosing to follow a
587 | later version.
588 |
589 | 15. Disclaimer of Warranty.
590 |
591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599 |
600 | 16. Limitation of Liability.
601 |
602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610 | SUCH DAMAGES.
611 |
612 | 17. Interpretation of Sections 15 and 16.
613 |
614 | If the disclaimer of warranty and limitation of liability provided
615 | above cannot be given local legal effect according to their terms,
616 | reviewing courts shall apply local law that most closely approximates
617 | an absolute waiver of all civil liability in connection with the
618 | Program, unless a warranty or assumption of liability accompanies a
619 | copy of the Program in return for a fee.
620 |
621 | END OF TERMS AND CONDITIONS
622 |
623 | How to Apply These Terms to Your New Programs
624 |
625 | If you develop a new program, and you want it to be of the greatest
626 | possible use to the public, the best way to achieve this is to make it
627 | free software which everyone can redistribute and change under these terms.
628 |
629 | To do so, attach the following notices to the program. It is safest
630 | to attach them to the start of each source file to most effectively
631 | state the exclusion of warranty; and each file should have at least
632 | the "copyright" line and a pointer to where the full notice is found.
633 |
634 | <one line to give the program's name and a brief idea of what it does.>
635 | Copyright (C) <year> <name of author>
636 |
637 | This program is free software: you can redistribute it and/or modify
638 | it under the terms of the GNU General Public License as published by
639 | the Free Software Foundation, either version 3 of the License, or
640 | (at your option) any later version.
641 |
642 | This program is distributed in the hope that it will be useful,
643 | but WITHOUT ANY WARRANTY; without even the implied warranty of
644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645 | GNU General Public License for more details.
646 |
647 | You should have received a copy of the GNU General Public License
648 | along with this program. If not, see <http://www.gnu.org/licenses/>.
649 |
650 | Also add information on how to contact you by electronic and paper mail.
651 |
652 | If the program does terminal interaction, make it output a short
653 | notice like this when it starts in an interactive mode:
654 |
655 | <program> Copyright (C) <year> <name of author>
656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657 | This is free software, and you are welcome to redistribute it
658 | under certain conditions; type `show c' for details.
659 |
660 | The hypothetical commands `show w' and `show c' should show the appropriate
661 | parts of the General Public License. Of course, your program's commands
662 | might be different; for a GUI interface, you would use an "about box".
663 |
664 | You should also get your employer (if you work as a programmer) or school,
665 | if any, to sign a "copyright disclaimer" for the program, if necessary.
666 | For more information on this, and how to apply and follow the GNU GPL, see
667 | <http://www.gnu.org/licenses/>.
668 |
669 | The GNU General Public License does not permit incorporating your program
670 | into proprietary programs. If your program is a subroutine library, you
671 | may consider it more useful to permit linking proprietary applications with
672 | the library. If this is what you want to do, use the GNU Lesser General
673 | Public License instead of this License. But first, please read
674 | <http://www.gnu.org/licenses/why-not-lgpl.html>.
675 |
--------------------------------------------------------------------------------
/pyexiftool/MANIFEST.in:
--------------------------------------------------------------------------------
1 | include README.rst COPYING doc/Makefile doc/conf.py doc/*.rst
2 |
--------------------------------------------------------------------------------
/pyexiftool/README.rst:
--------------------------------------------------------------------------------
1 | PyExifTool
2 | ==========
3 |
4 | PyExifTool is a Python library to communicate with an instance of Phil
5 | Harvey's excellent ExifTool_ command-line application. The library
6 | provides the class ``exiftool.ExifTool`` that runs the command-line
7 | tool in batch mode and features methods to send commands to that
8 | program, including methods to extract meta-information from one or
9 | more image files. Since ``exiftool`` is run in batch mode, only a
10 | single instance needs to be launched and can be reused for many
11 | queries. This is much more efficient than launching a separate
12 | process for every single query.
13 |
14 | .. _ExifTool: http://www.sno.phy.queensu.ca/~phil/exiftool/
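 15 |
 16 | A minimal usage example, mirroring the one in the module docstring of
 17 | ``exiftool.py`` below (the file names are placeholders)::
 18 |
 19 |    import exiftool
 20 |
 21 |    files = ["a.jpg", "b.png", "c.tif"]
 22 |    with exiftool.ExifTool() as et:
 23 |        metadata = et.get_metadata_batch(files)
 24 |    for d in metadata:
 25 |        print("{:20.20} {:20.20}".format(d["SourceFile"],
 26 |                                         d["EXIF:DateTimeOriginal"]))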
15 |
16 | Getting PyExifTool
17 | ------------------
18 |
19 | The source code can be checked out from the github repository with
20 |
21 | ::
22 |
23 | git clone git://github.com/smarnach/pyexiftool.git
24 |
25 | Alternatively, you can download a tarball_. There haven't been any
26 | releases yet.
27 |
28 | .. _tarball: https://github.com/smarnach/pyexiftool/tarball/master
29 |
30 | Installation
31 | ------------
32 |
33 | PyExifTool runs on Python 2.6 and above, including 3.x. It has been
34 | tested on Windows and Linux, and probably also runs on other Unix-like
35 | platforms.
36 |
37 | You need an installation of the ``exiftool`` command-line tool. The
38 | code has been tested with version 8.60, but should work with version
39 | 8.40 or above (which was the first production version of exiftool
40 | featuring the ``-stay_open`` option for batch mode).
41 |
42 | PyExifTool currently only consists of a single module, so you can
43 | simply copy or link this module to a place where Python finds it, or
44 | you can call
45 |
46 | ::
47 |
48 |    python setup.py install [--user|--prefix=<prefix>]
--------------------------------------------------------------------------------
/pyexiftool/doc/Makefile:
--------------------------------------------------------------------------------
1 | # Makefile for Sphinx documentation
2 | #
3 |
4 | # You can set these variables from the command line.
5 | SPHINXOPTS    =
6 | SPHINXBUILD   = sphinx-build
7 | PAPER         =
8 | BUILDDIR      = _build
9 |
10 | # Internal variables.
11 | PAPEROPT_a4     = -D latex_paper_size=a4
12 | PAPEROPT_letter = -D latex_paper_size=letter
13 | ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
14 | # the i18n builder cannot share the environment and doctrees with the others
15 | I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
16 |
17 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man texinfo info gettext changes linkcheck doctest
18 |
19 | help:
20 | @echo "Please use \`make <target>' where <target> is one of"
21 | @echo "  html       to make standalone HTML files"
22 | @echo "  dirhtml    to make HTML files named index.html in directories"
23 | @echo "  singlehtml to make a single large HTML file"
24 | @echo "  pickle     to make pickle files"
25 | @echo "  json       to make JSON files"
26 | @echo "  htmlhelp   to make HTML files and a HTML help project"
27 | @echo "  qthelp     to make HTML files and a qthelp project"
28 | @echo "  devhelp    to make HTML files and a Devhelp project"
29 | @echo "  epub       to make an epub"
30 | @echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
31 | @echo "  latexpdf   to make LaTeX files and run them through pdflatex"
32 | @echo "  text       to make text files"
33 | @echo "  man        to make manual pages"
34 | @echo "  texinfo    to make Texinfo files"
35 | @echo "  info       to make Texinfo files and run them through makeinfo"
36 | @echo "  gettext    to make PO message catalogs"
37 | @echo "  changes    to make an overview of all changed/added/deprecated items"
38 | @echo "  linkcheck  to check all external links for integrity"
39 | @echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
40 |
41 | clean:
42 | -rm -rf $(BUILDDIR)/*
43 |
44 | html:
45 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
46 | @echo
47 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
48 |
49 | dirhtml:
50 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
51 | @echo
52 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
53 |
54 | singlehtml:
55 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
56 | @echo
57 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
58 |
59 | pickle:
60 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
61 | @echo
62 | @echo "Build finished; now you can process the pickle files."
63 |
64 | json:
65 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
66 | @echo
67 | @echo "Build finished; now you can process the JSON files."
68 |
69 | htmlhelp:
70 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
71 | @echo
72 | @echo "Build finished; now you can run HTML Help Workshop with the" \
73 | ".hhp project file in $(BUILDDIR)/htmlhelp."
74 |
75 | qthelp:
76 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
77 | @echo
78 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \
79 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
80 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/PyExifTool.qhcp"
81 | @echo "To view the help file:"
82 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/PyExifTool.qhc"
83 |
84 | devhelp:
85 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
86 | @echo
87 | @echo "Build finished."
88 | @echo "To view the help file:"
89 | @echo "# mkdir -p $$HOME/.local/share/devhelp/PyExifTool"
90 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/PyExifTool"
91 | @echo "# devhelp"
92 |
93 | epub:
94 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
95 | @echo
96 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
97 |
98 | latex:
99 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
100 | @echo
101 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
102 | @echo "Run \`make' in that directory to run these through (pdf)latex" \
103 | "(use \`make latexpdf' here to do that automatically)."
104 |
105 | latexpdf:
106 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
107 | @echo "Running LaTeX files through pdflatex..."
108 | $(MAKE) -C $(BUILDDIR)/latex all-pdf
109 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
110 |
111 | text:
112 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
113 | @echo
114 | @echo "Build finished. The text files are in $(BUILDDIR)/text."
115 |
116 | man:
117 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
118 | @echo
119 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
120 |
121 | texinfo:
122 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
123 | @echo
124 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
125 | @echo "Run \`make' in that directory to run these through makeinfo" \
126 | "(use \`make info' here to do that automatically)."
127 |
128 | info:
129 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
130 | @echo "Running Texinfo files through makeinfo..."
131 | make -C $(BUILDDIR)/texinfo info
132 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
133 |
134 | gettext:
135 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
136 | @echo
137 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
138 |
139 | changes:
140 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
141 | @echo
142 | @echo "The overview file is in $(BUILDDIR)/changes."
143 |
144 | linkcheck:
145 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
146 | @echo
147 | @echo "Link check complete; look for any errors in the above output " \
148 | "or in $(BUILDDIR)/linkcheck/output.txt."
149 |
150 | doctest:
151 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
152 | @echo "Testing of doctests in the sources finished, look at the " \
153 | "results in $(BUILDDIR)/doctest/output.txt."
154 |
--------------------------------------------------------------------------------
/pyexiftool/doc/conf.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # PyExifTool documentation build configuration file, created by
4 | # sphinx-quickstart on Thu Apr 12 17:42:54 2012.
5 | #
6 | # This file is execfile()d with the current directory set to its containing dir.
7 | #
8 | # Note that not all possible configuration values are present in this
9 | # autogenerated file.
10 | #
11 | # All configuration values have a default; values that are commented out
12 | # serve to show the default.
13 |
14 | import sys, os
15 |
16 | # If extensions (or modules to document with autodoc) are in another directory,
17 | # add these directories to sys.path here. If the directory is relative to the
18 | # documentation root, use os.path.abspath to make it absolute, like shown here.
19 | sys.path.insert(1, os.path.abspath('..'))
20 |
21 | # -- General configuration -----------------------------------------------------
22 |
23 | # If your documentation needs a minimal Sphinx version, state it here.
24 | #needs_sphinx = '1.0'
25 |
26 | # Add any Sphinx extension module names here, as strings. They can be extensions
27 | # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
28 | extensions = ['sphinx.ext.autodoc']
29 |
30 | # Add any paths that contain templates here, relative to this directory.
31 | templates_path = ['_templates']
32 |
33 | # The suffix of source filenames.
34 | source_suffix = '.rst'
35 |
36 | # The encoding of source files.
37 | #source_encoding = 'utf-8-sig'
38 |
39 | # The master toctree document.
40 | master_doc = 'index'
41 |
42 | # General information about the project.
43 | project = u'PyExifTool'
44 | copyright = u'2012, Sven Marnach'
45 |
46 | # The version info for the project you're documenting, acts as replacement for
47 | # |version| and |release|, also used in various other places throughout the
48 | # built documents.
49 | #
50 | # The short X.Y version.
51 | version = '0.1'
52 | # The full version, including alpha/beta/rc tags.
53 | release = '0.1'
54 |
55 | # The language for content autogenerated by Sphinx. Refer to documentation
56 | # for a list of supported languages.
57 | #language = None
58 |
59 | # There are two options for replacing |today|: either, you set today to some
60 | # non-false value, then it is used:
61 | #today = ''
62 | # Else, today_fmt is used as the format for a strftime call.
63 | #today_fmt = '%B %d, %Y'
64 |
65 | # List of patterns, relative to source directory, that match files and
66 | # directories to ignore when looking for source files.
67 | exclude_patterns = ['_build']
68 |
69 | # The reST default role (used for this markup: `text`) to use for all documents.
70 | #default_role = None
71 |
72 | # If true, '()' will be appended to :func: etc. cross-reference text.
73 | #add_function_parentheses = True
74 |
75 | # If true, the current module name will be prepended to all description
76 | # unit titles (such as .. function::).
77 | #add_module_names = True
78 |
79 | # If true, sectionauthor and moduleauthor directives will be shown in the
80 | # output. They are ignored by default.
81 | #show_authors = False
82 |
83 | # The name of the Pygments (syntax highlighting) style to use.
84 | pygments_style = 'sphinx'
85 |
86 | # A list of ignored prefixes for module index sorting.
87 | #modindex_common_prefix = []
88 |
89 |
90 | # -- Options for HTML output ---------------------------------------------------
91 |
92 | # The theme to use for HTML and HTML Help pages. See the documentation for
93 | # a list of builtin themes.
94 | html_theme = 'default'
95 |
96 | # Theme options are theme-specific and customize the look and feel of a theme
97 | # further. For a list of options available for each theme, see the
98 | # documentation.
99 | #html_theme_options = {}
100 |
101 | # Add any paths that contain custom themes here, relative to this directory.
102 | #html_theme_path = []
103 |
104 | # The name for this set of Sphinx documents. If None, it defaults to
105 | # " v documentation".
106 | #html_title = None
107 |
108 | # A shorter title for the navigation bar. Default is the same as html_title.
109 | #html_short_title = None
110 |
111 | # The name of an image file (relative to this directory) to place at the top
112 | # of the sidebar.
113 | #html_logo = None
114 |
115 | # The name of an image file (within the static path) to use as favicon of the
116 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
117 | # pixels large.
118 | #html_favicon = None
119 |
120 | # Add any paths that contain custom static files (such as style sheets) here,
121 | # relative to this directory. They are copied after the builtin static files,
122 | # so a file named "default.css" will overwrite the builtin "default.css".
123 | html_static_path = ['_static']
124 |
125 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
126 | # using the given strftime format.
127 | #html_last_updated_fmt = '%b %d, %Y'
128 |
129 | # If true, SmartyPants will be used to convert quotes and dashes to
130 | # typographically correct entities.
131 | #html_use_smartypants = True
132 |
133 | # Custom sidebar templates, maps document names to template names.
134 | #html_sidebars = {}
135 |
136 | # Additional templates that should be rendered to pages, maps page names to
137 | # template names.
138 | #html_additional_pages = {}
139 |
140 | # If false, no module index is generated.
141 | #html_domain_indices = True
142 |
143 | # If false, no index is generated.
144 | #html_use_index = True
145 |
146 | # If true, the index is split into individual pages for each letter.
147 | #html_split_index = False
148 |
149 | # If true, links to the reST sources are added to the pages.
150 | #html_show_sourcelink = True
151 |
152 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
153 | #html_show_sphinx = True
154 |
155 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
156 | #html_show_copyright = True
157 |
158 | # If true, an OpenSearch description file will be output, and all pages will
159 | # contain a <link> tag referring to it. The value of this option must be the
160 | # base URL from which the finished HTML is served.
161 | #html_use_opensearch = ''
162 |
163 | # This is the file name suffix for HTML files (e.g. ".xhtml").
164 | #html_file_suffix = None
165 |
166 | # Output file base name for HTML help builder.
167 | htmlhelp_basename = 'PyExifTooldoc'
168 |
169 |
170 | # -- Options for LaTeX output --------------------------------------------------
171 |
172 | latex_elements = {
173 | # The paper size ('letterpaper' or 'a4paper').
174 | #'papersize': 'letterpaper',
175 |
176 | # The font size ('10pt', '11pt' or '12pt').
177 | #'pointsize': '10pt',
178 |
179 | # Additional stuff for the LaTeX preamble.
180 | #'preamble': '',
181 | }
182 |
183 | # Grouping the document tree into LaTeX files. List of tuples
184 | # (source start file, target name, title, author, documentclass [howto/manual]).
185 | latex_documents = [
186 | ('index', 'PyExifTool.tex', u'PyExifTool Documentation',
187 | u'Sven Marnach', 'manual'),
188 | ]
189 |
190 | # The name of an image file (relative to this directory) to place at the top of
191 | # the title page.
192 | #latex_logo = None
193 |
194 | # For "manual" documents, if this is true, then toplevel headings are parts,
195 | # not chapters.
196 | #latex_use_parts = False
197 |
198 | # If true, show page references after internal links.
199 | #latex_show_pagerefs = False
200 |
201 | # If true, show URL addresses after external links.
202 | #latex_show_urls = False
203 |
204 | # Documents to append as an appendix to all manuals.
205 | #latex_appendices = []
206 |
207 | # If false, no module index is generated.
208 | #latex_domain_indices = True
209 |
210 |
211 | # -- Options for manual page output --------------------------------------------
212 |
213 | # One entry per manual page. List of tuples
214 | # (source start file, name, description, authors, manual section).
215 | man_pages = [
216 | ('index', 'pyexiftool', u'PyExifTool Documentation',
217 | [u'Sven Marnach'], 1)
218 | ]
219 |
220 | # If true, show URL addresses after external links.
221 | #man_show_urls = False
222 |
223 |
224 | # -- Options for Texinfo output ------------------------------------------------
225 |
226 | # Grouping the document tree into Texinfo files. List of tuples
227 | # (source start file, target name, title, author,
228 | # dir menu entry, description, category)
229 | texinfo_documents = [
230 | ('index', 'PyExifTool', u'PyExifTool Documentation',
231 | u'Sven Marnach', 'PyExifTool', 'One line description of project.',
232 | 'Miscellaneous'),
233 | ]
234 |
235 | # Documents to append as an appendix to all manuals.
236 | #texinfo_appendices = []
237 |
238 | # If false, no module index is generated.
239 | #texinfo_domain_indices = True
240 |
241 | # How to display URL addresses: 'footnote', 'no', or 'inline'.
242 | #texinfo_show_urls = 'footnote'
243 |
--------------------------------------------------------------------------------
/pyexiftool/doc/index.rst:
--------------------------------------------------------------------------------
1 | .. PyExifTool documentation master file, created by
2 | sphinx-quickstart on Thu Apr 12 17:42:54 2012.
3 |
4 | PyExifTool -- A Python wrapper for Phil Harvey's ExifTool
5 | ==========================================================
6 |
7 | .. automodule:: exiftool
8 | :members:
9 |
--------------------------------------------------------------------------------
/pyexiftool/exiftool.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | # PyExifTool
3 | # Copyright 2012 Sven Marnach
4 |
5 | # This file is part of PyExifTool.
6 | #
7 | # PyExifTool is free software: you can redistribute it and/or modify
8 | # it under the terms of the GNU General Public License as published by
9 | # the Free Software Foundation, either version 3 of the licence, or
10 | # (at your option) any later version, or the BSD licence.
11 | #
12 | # PyExifTool is distributed in the hope that it will be useful,
13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
15 | #
16 | # See COPYING.GPL or COPYING.BSD for more details.
17 |
18 | """
19 | PyExifTool is a Python library to communicate with an instance of Phil
20 | Harvey's excellent ExifTool_ command-line application. The library
21 | provides the class :py:class:`ExifTool` that runs the command-line
22 | tool in batch mode and features methods to send commands to that
23 | program, including methods to extract meta-information from one or
24 | more image files. Since ``exiftool`` is run in batch mode, only a
25 | single instance needs to be launched and can be reused for many
26 | queries. This is much more efficient than launching a separate
27 | process for every single query.
28 |
29 | .. _ExifTool: http://www.sno.phy.queensu.ca/~phil/exiftool/
30 |
31 | The source code can be checked out from the github repository with
32 |
33 | ::
34 |
35 | git clone git://github.com/smarnach/pyexiftool.git
36 |
37 | Alternatively, you can download a tarball_. There haven't been any
38 | releases yet.
39 |
40 | .. _tarball: https://github.com/smarnach/pyexiftool/tarball/master
41 |
42 | PyExifTool is licenced under GNU GPL version 3 or later.
43 |
44 | Example usage::
45 |
46 | import exiftool
47 |
48 | files = ["a.jpg", "b.png", "c.tif"]
49 | with exiftool.ExifTool() as et:
50 | metadata = et.get_metadata_batch(files)
51 | for d in metadata:
52 | print("{:20.20} {:20.20}".format(d["SourceFile"],
53 | d["EXIF:DateTimeOriginal"]))
54 | """
55 |
56 | from __future__ import unicode_literals
57 |
58 | import sys
59 | import subprocess
60 | import os
61 | import json
62 | import warnings
63 | import codecs
64 |
65 | try: # Py3k compatibility
66 | basestring
67 | except NameError:
68 | basestring = (bytes, str)
69 |
70 | executable = "exiftool"
71 | """The name of the executable to run.
72 |
73 | If the executable is not located in one of the paths listed in the
74 | ``PATH`` environment variable, the full path should be given here.
75 | """
76 |
77 | # Sentinel indicating the end of the output of a sequence of commands.
78 | # The standard value should be fine.
79 | sentinel = b"{ready}"
80 |
81 | # The block size when reading from exiftool. The standard value
82 | # should be fine, though other values might give better performance in
83 | # some cases.
84 | block_size = 4096
85 |
86 | # This code has been adapted from Lib/os.py in the Python source tree
87 | # (sha1 265e36e277f3)
88 | def _fscodec():
89 | encoding = sys.getfilesystemencoding()
90 | errors = "strict"
91 | if encoding != "mbcs":
92 | try:
93 | codecs.lookup_error("surrogateescape")
94 | except LookupError:
95 | pass
96 | else:
97 | errors = "surrogateescape"
98 |
99 | def fsencode(filename):
100 | """
101 | Encode filename to the filesystem encoding with 'surrogateescape' error
102 | handler, return bytes unchanged. On Windows, use 'strict' error handler if
103 | the file system encoding is 'mbcs' (which is the default encoding).
104 | """
105 | if isinstance(filename, bytes):
106 | return filename
107 | else:
108 | return filename.encode(encoding, errors)
109 |
110 | return fsencode
111 |
112 | fsencode = _fscodec()
113 | del _fscodec
114 |
115 | class ExifTool(object):
116 | """Run the `exiftool` command-line tool and communicate to it.
117 |
118 | You can pass the file name of the ``exiftool`` executable as an
119 | argument to the constructor. The default value ``exiftool`` will
120 | only work if the executable is in your ``PATH``.
121 |
122 | Most methods of this class are only available after calling
123 | :py:meth:`start()`, which will actually launch the subprocess. To
124 | avoid leaving the subprocess running, make sure to call
125 | :py:meth:`terminate()` method when finished using the instance.
126 | This method will also be implicitly called when the instance is
127 | garbage collected, but there are circumstances in which this won't ever
128 | happen, so you should not rely on the implicit process
129 | termination. Subprocesses won't be automatically terminated if
130 | the parent process exits, so a leaked subprocess will stay around
131 | until manually killed.
132 |
133 | A convenient way to make sure that the subprocess is terminated is
134 | to use the :py:class:`ExifTool` instance as a context manager::
135 |
136 | with ExifTool() as et:
137 | ...
138 |
139 | .. warning:: Note that there is no error handling. Nonsensical
140 | options will be silently ignored by exiftool, so there's not
141 | much that can be done in that regard. You should avoid passing
142 | non-existent files to any of the methods, since this will lead
143 | to undefined behaviour.
144 |
145 | .. py:attribute:: running
146 |
147 | A Boolean value indicating whether this instance is currently
148 | associated with a running subprocess.
149 | """
150 |
151 | def __init__(self, executable_=None):
152 | if executable_ is None:
153 | self.executable = executable
154 | else:
155 | self.executable = executable_
156 | self.running = False
157 |
158 | def start(self):
159 | """Start an ``exiftool`` process in batch mode for this instance.
160 |
161 | This method will issue a ``UserWarning`` if the subprocess is
162 | already running. The process is started with ``-G`` and
163 | ``-n`` as common arguments, which are automatically included
164 | in every command you run with :py:meth:`execute()`.
165 | """
166 | if self.running:
167 | warnings.warn("ExifTool already running; doing nothing.")
168 | return
169 | with open(os.devnull, "w") as devnull:
170 | self._process = subprocess.Popen(
171 | [self.executable, "-stay_open", "True", "-@", "-",
172 | "-common_args", "-G", "-n"],
173 | stdin=subprocess.PIPE, stdout=subprocess.PIPE,
174 | stderr=devnull)
175 | self.running = True
176 |
177 | def terminate(self):
178 | """Terminate the ``exiftool`` process of this instance.
179 |
180 | If the subprocess isn't running, this method will do nothing.
181 | """
182 | if not self.running:
183 | return
184 | self._process.stdin.write(b"-stay_open\nFalse\n")
185 | self._process.stdin.flush()
186 | self._process.communicate()
187 | del self._process
188 | self.running = False
189 |
190 | def __enter__(self):
191 | self.start()
192 | return self
193 |
194 | def __exit__(self, exc_type, exc_val, exc_tb):
195 | self.terminate()
196 |
197 | def __del__(self):
198 | self.terminate()
199 |
200 | def execute(self, *params):
201 | """Execute the given batch of parameters with ``exiftool``.
202 |
203 | This method accepts any number of parameters and sends them to
204 | the attached ``exiftool`` process. The process must be
205 | running, otherwise ``ValueError`` is raised. The final
206 | ``-execute`` necessary to actually run the batch is appended
207 | automatically; see the documentation of :py:meth:`start()` for
208 | the common options. The ``exiftool`` output is read up to the
209 | end-of-output sentinel and returned as a raw ``bytes`` object,
210 | excluding the sentinel.
211 |
212 | The parameters must also be raw ``bytes``, in whatever
213 | encoding exiftool accepts. For filenames, this should be the
214 | system's filesystem encoding.
215 |
216 | .. note:: This is considered a low-level method, and should
217 | rarely be needed by application developers.
218 | """
219 | if not self.running:
220 | raise ValueError("ExifTool instance not running.")
221 | self._process.stdin.write(b"\n".join(params + (b"-execute\n",)))
222 | self._process.stdin.flush()
223 | output = b""
224 | fd = self._process.stdout.fileno()
225 | while not output[-32:].strip().endswith(sentinel):
226 | output += os.read(fd, block_size)
227 | return output.strip()[:-len(sentinel)]
228 |
229 | def execute_json(self, *params):
230 | """Execute the given batch of parameters and parse the JSON output.
231 |
232 | This method is similar to :py:meth:`execute()`. It
233 | automatically adds the parameter ``-j`` to request JSON output
234 | from ``exiftool`` and parses the output. The return value is
235 | a list of dictionaries, mapping tag names to the corresponding
236 | values. All keys are Unicode strings with the tag names
237 | including the ExifTool group name in the format <group>:<tag>.
238 | The values can have multiple types. All strings occurring as
239 | values will be Unicode strings. Each dictionary contains the
240 | name of the file it corresponds to in the key ``"SourceFile"``.
241 |
242 | The parameters to this function must be either raw strings
243 | (type ``str`` in Python 2.x, type ``bytes`` in Python 3.x) or
244 | Unicode strings (type ``unicode`` in Python 2.x, type ``str``
245 | in Python 3.x). Unicode strings will be encoded using
246 | the system's filesystem encoding. This behaviour means you can
247 | pass in filenames according to the convention of the
248 | respective Python version – as raw strings in Python 2.x and
249 | as Unicode strings in Python 3.x.
250 | """
251 | params = map(fsencode, params)
252 | return json.loads(self.execute(b"-j", *params).decode("utf-8"))
253 |
254 | def get_metadata_batch(self, filenames):
255 | """Return all meta-data for the given files.
256 |
257 | The return value will have the format described in the
258 | documentation of :py:meth:`execute_json()`.
259 | """
260 | return self.execute_json(*filenames)
261 |
262 | def get_metadata(self, filename):
263 | """Return meta-data for a single file.
264 |
265 | The returned dictionary has the format described in the
266 | documentation of :py:meth:`execute_json()`.
267 | """
268 | return self.execute_json(filename)[0]
269 |
270 | def get_tags_batch(self, tags, filenames):
271 | """Return only specified tags for the given files.
272 |
273 | The first argument is an iterable of tags. The tag names may
274 | include group names, as usual in the format <group>:<tag>.
275 |
276 | The second argument is an iterable of file names.
277 |
278 | The format of the return value is the same as for
279 | :py:meth:`execute_json()`.
280 | """
281 | # Explicitly ruling out strings here because passing in a
282 | # string would lead to strange and hard-to-find errors
283 | if isinstance(tags, basestring):
284 | raise TypeError("The argument 'tags' must be "
285 | "an iterable of strings")
286 | if isinstance(filenames, basestring):
287 | raise TypeError("The argument 'filenames' must be "
288 | "an iterable of strings")
289 | params = ["-" + t for t in tags]
290 | params.extend(filenames)
291 | return self.execute_json(*params)
292 |
293 | def get_tags(self, tags, filename):
294 | """Return only specified tags for a single file.
295 |
296 | The returned dictionary has the format described in the
297 | documentation of :py:meth:`execute_json()`.
298 | """
299 | return self.get_tags_batch(tags, [filename])[0]
300 |
301 | def get_tag_batch(self, tag, filenames):
302 | """Extract a single tag from the given files.
303 |
304 | The first argument is a single tag name, as usual in the
305 | format <group>:<tag>.
306 |
307 | The second argument is an iterable of file names.
308 |
309 | The return value is a list of tag values or ``None`` for
310 | non-existent tags, in the same order as ``filenames``.
311 | """
312 | data = self.get_tags_batch([tag], filenames)
313 | result = []
314 | for d in data:
315 | d.pop("SourceFile")
316 | result.append(next(iter(d.values()), None))
317 | return result
318 |
319 | def get_tag(self, tag, filename):
320 | """Extract a single tag from a single file.
321 |
322 | The return value is the value of the specified tag, or
323 | ``None`` if this tag was not found in the file.
324 | """
325 | return self.get_tag_batch(tag, [filename])[0]
326 |
--------------------------------------------------------------------------------
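The batch protocol that `execute()` implements above can be exercised directly. A minimal sketch, assuming `exiftool` is on `PATH` (the file name `a.jpg` is illustrative):

```python
import exiftool

with exiftool.ExifTool() as et:
    # Low-level call: parameters are raw bytes; execute() appends the final
    # b"-execute" itself and strips the b"{ready}" sentinel from the output.
    print(et.execute(b"-ver").decode("utf-8"))

    # High-level call: one dict per file, with keys such as "File:FileType"
    # carrying the ExifTool group name.
    for meta in et.get_metadata_batch(["a.jpg"]):
        print(meta["SourceFile"], meta.get("File:FileType"))
```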
/pyexiftool/setup.py:
--------------------------------------------------------------------------------
1 | # PyExifTool
2 | # Copyright 2012 Sven Marnach
3 |
4 | # This file is part of PyExifTool.
5 | #
6 | # PyExifTool is free software: you can redistribute it and/or modify
7 | # it under the terms of the GNU General Public License as published by
8 | # the Free Software Foundation, either version 3 of the licence, or
9 | # (at your option) any later version, or the BSD licence.
10 | #
11 | # PyExifTool is distributed in the hope that it will be useful,
12 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
14 | #
15 | # See COPYING.GPL or COPYING.BSD for more details.
16 |
17 | from distutils.core import setup
18 |
19 | setup(name="PyExifTool",
20 | version="0.1",
21 | description="Python wrapper for exiftool",
22 | license="GPLv3+",
23 | author="Sven Marnach",
24 | author_email="sven@marnach.net",
25 | url="http://github.com/smarnach/pyexiftool",
26 | classifiers=[
27 | "Development Status :: 3 - Alpha",
28 | "Intended Audience :: Developers",
29 | "License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
30 | "Programming Language :: Python :: 2.6",
31 | "Programming Language :: Python :: 2.7",
32 | "Programming Language :: Python :: 3",
33 | "Topic :: Multimedia"],
34 | py_modules=["exiftool"])
35 |
--------------------------------------------------------------------------------
/pyexiftool/test/__init__.py:
--------------------------------------------------------------------------------
1 | # Dummy file to make this directory a package.
2 |
--------------------------------------------------------------------------------
/pyexiftool/test/rose.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ken-ouyang/neural_image_simulator/7999d44aaaa194308769f521f7c1818f73c0ebca/pyexiftool/test/rose.jpg
--------------------------------------------------------------------------------
/pyexiftool/test/skyblue.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ken-ouyang/neural_image_simulator/7999d44aaaa194308769f521f7c1818f73c0ebca/pyexiftool/test/skyblue.png
--------------------------------------------------------------------------------
/pyexiftool/test/test_exiftool.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | from __future__ import unicode_literals
4 |
5 | import unittest
6 | import exiftool
7 | import warnings
8 | import os
9 |
10 | class TestExifTool(unittest.TestCase):
11 | def setUp(self):
12 | self.et = exiftool.ExifTool()
13 | def tearDown(self):
14 | if hasattr(self, "et"):
15 | self.et.terminate()
16 | if hasattr(self, "process"):
17 | if self.process.poll() is None:
18 | self.process.terminate()
19 | def test_termination_cm(self):
20 | # Test correct subprocess start and termination when using
21 | # self.et as a context manager
22 | self.assertFalse(self.et.running)
23 | self.assertRaises(ValueError, self.et.execute)
24 | with self.et:
25 | self.assertTrue(self.et.running)
26 | with warnings.catch_warnings(record=True) as w:
27 | self.et.start()
28 | self.assertEqual(len(w), 1)
29 | self.assertTrue(issubclass(w[0].category, UserWarning))
30 | self.process = self.et._process
31 | self.assertEqual(self.process.poll(), None)
32 | self.assertFalse(self.et.running)
33 | self.assertNotEqual(self.process.poll(), None)
34 | def test_termination_explicit(self):
35 | # Test correct subprocess start and termination when
36 | # explicitly using start() and terminate()
37 | self.et.start()
38 | self.process = self.et._process
39 | self.assertEqual(self.process.poll(), None)
40 | self.et.terminate()
41 | self.assertNotEqual(self.process.poll(), None)
42 | def test_termination_implicit(self):
43 | # Test implicit process termination on garbage collection
44 | self.et.start()
45 | self.process = self.et._process
46 | del self.et
47 | self.assertNotEqual(self.process.poll(), None)
48 | def test_get_metadata(self):
49 | expected_data = [{"SourceFile": "rose.jpg",
50 | "File:FileType": "JPEG",
51 | "File:ImageWidth": 70,
52 | "File:ImageHeight": 46,
53 | "XMP:Subject": "Röschen",
54 | "Composite:ImageSize": "70x46"},
55 | {"SourceFile": "skyblue.png",
56 | "File:FileType": "PNG",
57 | "PNG:ImageWidth": 64,
58 | "PNG:ImageHeight": 64,
59 | "Composite:ImageSize": "64x64"}]
60 | script_path = os.path.dirname(__file__)
61 | source_files = []
62 | for d in expected_data:
63 | d["SourceFile"] = f = os.path.join(script_path, d["SourceFile"])
64 | self.assertTrue(os.path.exists(f))
65 | source_files.append(f)
66 | with self.et:
67 | actual_data = self.et.get_metadata_batch(source_files)
68 | tags0 = self.et.get_tags(["XMP:Subject"], source_files[0])
69 | tag0 = self.et.get_tag("XMP:Subject", source_files[0])
70 | for expected, actual in zip(expected_data, actual_data):
71 | et_version = actual["ExifTool:ExifToolVersion"]
72 | self.assertTrue(isinstance(et_version, float))
73 | if isinstance(et_version, float): # avoid exception in Py3k
74 | self.assertTrue(
75 | et_version >= 8.40,
76 | "you should at least use ExifTool version 8.40")
77 | actual["SourceFile"] = os.path.normpath(actual["SourceFile"])
78 | for k, v in expected.items():
79 | self.assertEqual(actual[k], v)
80 | tags0["SourceFile"] = os.path.normpath(tags0["SourceFile"])
81 | self.assertEqual(tags0, dict((k, expected_data[0][k])
82 | for k in ["SourceFile", "XMP:Subject"]))
83 | self.assertEqual(tag0, "Röschen")
84 |
85 | if __name__ == '__main__':
86 | unittest.main()
87 |
--------------------------------------------------------------------------------
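The test above checks metadata extracted from the bundled `rose.jpg` and `skyblue.png` against known values and requires a working `exiftool` (version 8.40 or newer) on `PATH`. A minimal runner sketch, assuming it is executed from the `pyexiftool` directory so that `exiftool.py` is importable:

```python
import unittest

# Load and run the bundled test case (assumes exiftool >= 8.40 on PATH).
suite = unittest.defaultTestLoader.loadTestsFromName("test.test_exiftool")
unittest.TextTestRunner(verbosity=2).run(suite)
```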
/train.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import json
3 | import os
4 | import torch
5 | import torch.nn as nn
6 | import numpy as np
7 | from PIL import Image
8 | from torch.utils.data import DataLoader
9 | import torch.distributions as tdist
10 | from torch.optim.lr_scheduler import MultiStepLR
11 | from unet_model_ori import UNet
12 | from unet_attention_decouple import AttenUnet_style
13 | from data_utils import autoTrainSetRaw2jpgProcessed
14 | from exposure_module import ExposureNet
15 | from isp import isp
16 |
17 | os.environ["CUDA_VISIBLE_DEVICES"]="7"
18 |
19 | def load_checkpoint(checkpoint_path, model, optimizer):
20 | assert os.path.isfile(checkpoint_path)
21 | checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
22 | iteration = checkpoint_dict['iteration']
23 | optimizer.load_state_dict(checkpoint_dict['optimizer'])
24 | model_for_loading = checkpoint_dict['model']
25 | model.load_state_dict(model_for_loading.state_dict())
26 | print("Loaded checkpoint '{}' (iteration {})" .format(
27 | checkpoint_path, iteration))
28 | return model, optimizer, iteration
29 |
30 | def save_checkpoint(model, net_type, optimizer, learning_rate, iteration, filepath, parallel):
31 | print("Saving model and optimizer state at iteration {} to {}".format(
32 | iteration, filepath))
33 | if net_type == "exposure":
34 | model_for_saving = ExposureNet().cuda()
35 | if net_type == "u_net":
36 | model_for_saving = UNet(**network_config).cuda()
37 | if net_type == "unet_att_style":
38 | model_for_saving = AttenUnet_style(**network_config2).cuda()
39 | if parallel:
40 | model_for_saving = nn.DataParallel(model_for_saving)
41 | model_for_saving.load_state_dict(model.state_dict())
42 | torch.save({'model': model_for_saving,
43 | 'iteration': iteration,
44 | 'optimizer': optimizer.state_dict(),
45 | 'learning_rate': learning_rate}, filepath)
46 |
47 |
48 | def save_training_images_raw(img_list, image_path, img_name, alpha):
49 | print("Saving output images")
50 | b,c,h,w = img_list[0].shape
51 | batch_list = []
52 | for img in img_list:
53 | clip(img)
54 | tmp_batch = isp(img[0,:,:,:], img_name[0], data_config["file_list"], alpha[0])
55 | for i in range(b-1):
56 | tmp_batch = np.concatenate((tmp_batch, isp(img[i+1,:,:,:], img_name[i+1], data_config["file_list"], alpha[i+1])), axis=1)
57 | batch_list.append(tmp_batch)
58 | new_img_array = np.concatenate(batch_list, axis=2) * 255
59 | new_img = Image.fromarray(np.transpose(new_img_array, [1,2,0]).astype('uint8'), 'RGB')
60 | new_img.save(image_path, quality=95)
61 |
62 | def clip(img):
63 | img[img>1] = 1
64 | img[img<0] = 0
65 |
66 | def get_variance_map(input_raw, shot_noise, read_noise, mul=None):
67 | if mul is not None:
68 | shot_noise = shot_noise * mul
69 | read_noise = read_noise * mul * mul
70 | b, c, h, w = input_raw.size()
71 | read_noise = torch.unsqueeze(read_noise, 2)
72 | read_noise = torch.unsqueeze(read_noise, 3)
73 | read_noise = read_noise.repeat(1,1,h,w)
74 | shot_noise = torch.unsqueeze(shot_noise, 2)
75 | shot_noise = torch.unsqueeze(shot_noise, 3)
76 | shot_noise = shot_noise.repeat(1,1,h,w)
77 |
78 | variance = torch.add(input_raw * shot_noise, read_noise)
79 | n = tdist.Normal(loc=torch.zeros_like(variance), scale=torch.sqrt(variance))
80 | noise = n.sample()
81 | var_map = input_raw + noise
82 | return var_map
83 |
84 | def train(output_directory, epochs, learning_rate1, learning_rate2, learning_rate3, aperture,
85 | iters_per_checkpoint, batch_size, epoch_size, loss_type1, loss_type2, loss_type3,
86 | net_type, net_type_ap, seed, checkpoint_path1, checkpoint_path2, checkpoint_path3, residual_learning1,
87 | residual_learning2, parallel, variance_map, isp_save, multi_stage=None, multi_stage2=None):
88 | # set manual seed
89 | torch.manual_seed(seed)
90 | torch.cuda.manual_seed(seed)
91 |
92 | # build exposure module
93 | model_exposure = ExposureNet().cuda()
94 |
95 | # build noise model
96 | if net_type == "u_net":
97 | model_noise = UNet(**network_config).cuda()
98 | else:
99 | print("unsupported network type")
100 | return 0
101 | # build aperture model
102 | if aperture:
103 | if net_type_ap == "unet_att_style":
104 | model_aperture = AttenUnet_style(**network_config2).cuda()
105 | else:
106 | print("unsupported network type")
107 | return 0
108 |
109 |
110 | if parallel:
111 | model_exposure = nn.DataParallel(model_exposure)
112 | model_noise = nn.DataParallel(model_noise)
113 | if aperture:
114 | model_aperture = nn.DataParallel(model_aperture)
115 | optimizer_1 = torch.optim.Adam(model_exposure.parameters(), lr=learning_rate1)
116 | optimizer_2 = torch.optim.Adam(model_noise.parameters(), lr=learning_rate2)
117 | scheduler_2 = MultiStepLR(optimizer_2, milestones=[20, 40], gamma=0.1)
118 | if aperture:
119 | optimizer_3 = torch.optim.Adam(model_aperture.parameters(), lr=learning_rate3)
120 | scheduler_3 = MultiStepLR(optimizer_3, milestones=[20, 40], gamma=0.1)
121 |
122 | # Load checkpoint if one exists
123 | iteration = 0
124 |
125 | if checkpoint_path1 != "":
126 | model_exposure, optimizer_1, iteration = load_checkpoint(checkpoint_path1, model_exposure, optimizer_1)
127 | if checkpoint_path2 != "":
128 | model_noise, optimizer_2, iteration = load_checkpoint(checkpoint_path2, model_noise,
129 | optimizer_2)
130 | if checkpoint_path3 !="":
131 | model_aperture, optimizer_3, iteration = load_checkpoint(checkpoint_path3, model_aperture,
132 | optimizer_3)
133 | iteration += 1
134 |
135 | # build dataset
136 | trainset = autoTrainSetRaw2jpgProcessed(**data_config)
137 | epoch_size = min(len(trainset), epoch_size)
138 | train_sampler = torch.utils.data.RandomSampler(trainset, True, epoch_size)
139 | train_loader = DataLoader(trainset, num_workers=5, shuffle=False,
140 | sampler=train_sampler,
141 | batch_size=batch_size,
142 | pin_memory=False,
143 | drop_last=True)
144 |
145 | # Get shared output_directory ready
146 | if not os.path.isdir(output_directory):
147 | os.makedirs(output_directory)
148 | os.chmod(output_directory, 0o775)
149 |
150 | print("output directory", output_directory)
151 |
152 | model_noise.train()
153 | model_exposure.train()
154 | if aperture:
155 | model_aperture.train()
156 | epoch_offset = max(0, int(iteration / len(train_loader)))
157 |
158 | # ================ MAIN TRAINING LOOP! ===================
159 | for epoch in range(epoch_offset, epochs):
160 | print("Epoch: {}".format(epoch))
161 | for i, batch in enumerate(train_loader):
162 | model_exposure.zero_grad()
163 | model_noise.zero_grad()
164 | if aperture:
165 | model_aperture.zero_grad()
166 |
167 | exp_params, ap_params, noise_params, input_raw, input_jpg, output_raw, mask, \
168 | input_shot_noise, input_read_noise, output_shot_noise, output_read_noise, \
169 | img_name = batch
170 | if aperture:
171 | ap_params = torch.autograd.Variable(ap_params.cuda())
172 | exp_params = torch.autograd.Variable(exp_params.cuda())
173 | noise_params = torch.autograd.Variable(noise_params.cuda())
174 | input_shot_noise = torch.autograd.Variable(input_shot_noise.cuda())
175 | input_read_noise = torch.autograd.Variable(input_read_noise.cuda())
176 | output_shot_noise = torch.autograd.Variable(output_shot_noise.cuda())
177 | output_read_noise = torch.autograd.Variable(output_read_noise.cuda())
178 | mask = mask.cuda()
179 |
180 |
181 | input_raw = torch.autograd.Variable(input_raw.cuda())
182 | output_raw = torch.autograd.Variable(output_raw.cuda())
183 |
184 | # simple exposure correction
185 | output_exp, exp_params_m = model_exposure([exp_params, input_raw])
186 | # noise correction
187 | # Estimate variance map
188 | if variance_map:
189 | # input variance map
190 | variance_input = get_variance_map(output_exp, input_shot_noise, input_read_noise, exp_params_m)
191 | # output variance map (as long as the output ISO is known, it can be estimated)
192 | input_cat = torch.cat([output_exp, variance_input], 1)
193 | if net_type == "u_net":
194 | output_ns = model_noise(input_cat)
195 | else:
196 | if net_type == "u_net":
197 | output_ns = model_noise(output_exp)
198 |
199 |
200 | #aperture module
201 | if aperture:
202 | output_ap = model_aperture([ap_params, output_ns+output_exp])
203 |
204 | # define exposure loss
205 | if loss_type1 == "l1":
206 | loss_f1 = torch.nn.L1Loss()
207 | elif loss_type1 == "l2":
208 | loss_f1 = torch.nn.MSELoss()
209 |
210 | loss1 = loss_f1(output_exp*mask, output_raw*mask)
211 |
212 | # define noise loss
213 | if loss_type2 == "l1":
214 | loss_f2 = torch.nn.L1Loss()
215 | elif loss_type2 == "l2":
216 | loss_f2 = torch.nn.MSELoss()
217 |
218 | if residual_learning1:
219 | loss2 = loss_f2(output_ns*mask, (output_raw-output_exp)*mask)
220 | else:
221 | loss2 = loss_f2(output_ns*mask, output_raw*mask)
222 |
223 |
224 | if aperture:
225 | if loss_type3 == "l1":
226 | loss_f3 = torch.nn.L1Loss()
227 | elif loss_type3 == "l2":
228 | loss_f3 = torch.nn.MSELoss()
229 | if residual_learning2:
230 | loss3 = loss_f3((output_exp+output_ns+output_ap)*mask, output_raw*mask)
231 | else:
232 | loss3 = loss_f3(output_ap*mask, output_raw*mask)
233 |
234 | if not multi_stage:
235 | loss1.backward(retain_graph=True)
236 | optimizer_1.step()
237 | loss2.backward(retain_graph=True)
238 | optimizer_2.step()
239 | if aperture:
240 | loss3.backward(retain_graph=True)
241 | optimizer_3.step()
242 |
243 | else:
244 | if not aperture:
245 | if epoch < multi_stage:
246 | loss1.backward(retain_graph=True)
247 | optimizer_1.step()
248 | else:
249 | loss2.backward(retain_graph=True)
250 | optimizer_2.step()
251 | else:
252 | if epoch < multi_stage:
253 | loss1.backward(retain_graph=True)
254 | optimizer_1.step()
255 | elif epoch >= multi_stage and epoch < multi_stage2:
256 | loss2.backward(retain_graph=True)
257 | optimizer_2.step()
258 | else:
259 | loss3.backward(retain_graph=True)
260 | optimizer_3.step()
261 |
262 |
263 | if aperture:
264 | print("epochs{} iters{}:\t{:.9f}\t{:.9f}\t{:.9f}".format(epoch, iteration, loss1, loss2, loss3))
265 | else:
266 | print("epochs{} iters{}:\t{:.9f}\t{:.9f}".format(epoch, iteration, loss1, loss2))
267 |
268 | if (iteration % iters_per_checkpoint == 0):
269 | checkpoint_path1 = "{}/exp_{}".format(
270 | output_directory, iteration)
271 | checkpoint_path2 = "{}/unet_{}".format(
272 | output_directory, iteration)
273 | checkpoint_path3 = "{}/unet_att_{}".format(
274 | output_directory, iteration)
275 | image_path = "{}/img_{}.jpg".format(
276 | output_directory, iteration)
277 | # save checkpoints
278 | save_checkpoint(model_exposure, "exposure", optimizer_1, learning_rate1, iteration,
279 | checkpoint_path1, parallel)
280 | save_checkpoint(model_noise, net_type, optimizer_2, learning_rate2, iteration,
281 | checkpoint_path2, parallel)
282 | if aperture:
283 |     save_checkpoint(model_aperture, net_type_ap, optimizer_3, learning_rate3, iteration, checkpoint_path3, parallel)
284 | # save testing images
285 | if residual_learning1:
286 | if isp_save:
287 | if aperture:
288 | if residual_learning2:
289 | save_training_images_raw([input_raw.cpu().data.numpy(),
290 | output_exp.cpu().data.numpy(),
291 | (output_ns+output_exp).cpu().data.numpy(),
292 | (output_ap+output_ns+output_exp).cpu().data.numpy(),
293 | output_raw.cpu().data.numpy()], image_path, img_name, exp_params_m.cpu().data.numpy())
294 | else:
295 | save_training_images_raw([input_raw.cpu().data.numpy(),
296 | output_exp.cpu().data.numpy(),
297 | (output_ns+output_exp).cpu().data.numpy(),
298 | output_ap.cpu().data.numpy(),
299 | output_raw.cpu().data.numpy()], image_path, img_name, exp_params_m.cpu().data.numpy())
300 | else:
301 | save_training_images_raw([input_raw.cpu().data.numpy(),
302 | output_exp.cpu().data.numpy(),
303 | (output_ns+output_exp).cpu().data.numpy(),
304 | output_raw.cpu().data.numpy()], image_path, img_name, exp_params_m.cpu().data.numpy())
305 |
306 | iteration += 1
307 | if multi_stage2 is not None and epoch > multi_stage2:
308 |     scheduler_3.step()
309 | elif multi_stage is not None and epoch > multi_stage:
310 |     scheduler_2.step()
311 |
312 |
313 |
314 | if __name__ == "__main__":
315 | parser = argparse.ArgumentParser()
316 | parser.add_argument('-c', '--config', type=str,
317 | help='JSON file for configuration')
318 | args = parser.parse_args()
319 | global config_path
320 | config_path = args.config
321 |
322 | with open(config_path) as f:
323 | data = f.read()
324 | config = json.loads(data)
325 | train_config = config["train_config"]
326 | global data_config
327 | data_config = config["data_config"]
328 | global network_config
329 | network_config = config["network_config"]
330 | global network_config2
331 | network_config2 = config["network_config2"]
332 |
333 | torch.backends.cudnn.enabled = True
334 | torch.backends.cudnn.benchmark = False
335 | train(**train_config)
336 |
--------------------------------------------------------------------------------
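`get_variance_map()` above implements the usual heteroscedastic raw-noise model: per-pixel variance is `shot_noise * signal + read_noise`, and when the exposure multiplier `mul` rescales the signal, shot noise scales by `mul` and read noise by `mul**2` before Gaussian noise is sampled. A self-contained sketch of the same sampling step (all shapes and noise levels are illustrative):

```python
import torch
import torch.distributions as tdist

def sample_noisy_raw(raw, shot_noise, read_noise, mul=None):
    """Add heteroscedastic Gaussian noise with variance shot * raw + read."""
    if mul is not None:
        shot_noise = shot_noise * mul        # signal scales linearly with exposure
        read_noise = read_noise * mul * mul  # variance scales quadratically
    b, c, h, w = raw.shape
    variance = raw * shot_noise.view(b, c, 1, 1) + read_noise.view(b, c, 1, 1)
    noise = tdist.Normal(torch.zeros_like(variance), torch.sqrt(variance)).sample()
    return raw + noise

# Illustrative: batch of 2, 4-channel packed Bayer raw, 8x8 crops.
raw = torch.rand(2, 4, 8, 8)
noisy = sample_noisy_raw(raw, torch.full((2, 4), 1e-3), torch.full((2, 4), 1e-4))
```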
/unet_attention_decouple.py:
--------------------------------------------------------------------------------
1 | import torch.nn as nn
2 | import torch.nn.functional as F
3 | import torch
4 | import torch.nn.init as init
5 |
6 | def init_linear(linear):
7 | init.xavier_normal_(linear.weight)
8 | linear.bias.data.zero_()
9 |
10 | class LinearL(nn.Module):
11 | def __init__(self, in_dim, out_dim):
12 | super().__init__()
13 |
14 | linear = nn.Linear(in_dim, out_dim)
15 | init_linear(linear)
16 | self.linear = linear
17 |
18 | def forward(self, input):
19 | return self.linear(input)
20 |
21 | class AdaptiveInstanceNorm(nn.Module):
22 | def __init__(self, in_channel, style_dim):
23 | super().__init__()
24 |
25 | self.norm = nn.InstanceNorm2d(in_channel)
26 | self.style = LinearL(style_dim, in_channel * 2)
27 |
28 | self.style.linear.bias.data[:in_channel] = 1
29 | self.style.linear.bias.data[in_channel:] = 0
30 |
31 | def forward(self, input, style):
32 | style = self.style(style).unsqueeze(2).unsqueeze(3)
33 | gamma, beta = style.chunk(2, 1)
34 | out = self.norm(input)
35 | out = gamma * out + beta
36 |
37 | return out
38 |
39 | class conv_block(nn.Module):
40 | """
41 | Convolution Block
42 | """
43 | def __init__(self, in_ch, out_ch, n_params):
44 | super(conv_block, self).__init__()
45 |
46 | self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True)
47 | self.norm1 = AdaptiveInstanceNorm(out_ch, n_params)
48 | self.relu1 = nn.ReLU(inplace=True)
49 | self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True)
50 | self.norm2 = AdaptiveInstanceNorm(out_ch, n_params)
51 | self.relu2 = nn.ReLU(inplace=True)
52 |
53 | def forward(self, x, params):
54 | x = self.conv1(x)
55 | x = self.norm1(x, params)
56 | x = self.relu1(x)
57 | x = self.conv2(x)
58 | x = self.norm2(x, params)
59 | x = self.relu2(x)
60 |
61 | return x
62 |
63 |
64 | class up_conv(nn.Module):
65 | """
66 | Up Convolution Block
67 | """
68 | def __init__(self, in_ch, out_ch, n_params):
69 | super(up_conv, self).__init__()
70 |
71 | self.up = nn.Upsample(scale_factor=2)
72 | self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=True)
73 | self.norm = AdaptiveInstanceNorm(out_ch, n_params)
74 | self.relu = nn.ReLU(inplace=True)
75 |
76 | def forward(self, x, params):
77 | x = self.up(x)
78 | x = self.conv(x)
79 | x = self.norm(x, params)
80 | x = self.relu(x)
81 | return x
82 |
83 | class Attention_block(nn.Module):
84 | """
85 | Attention Block
86 | """
87 |
88 | def __init__(self, F_g, F_l, F_int, n_params):
89 | super(Attention_block, self).__init__()
90 |
91 | self.W_g_conv1 = nn.Conv2d(F_l, F_int, kernel_size=1, stride=1, padding=0, bias=True)
92 | self.W_g_norm1 = AdaptiveInstanceNorm(F_int, n_params)
93 |
94 |
95 | self.W_x_conv1 = nn.Conv2d(F_g, F_int, kernel_size=1, stride=1, padding=0, bias=True)
96 | self.W_x_norm1 = AdaptiveInstanceNorm(F_int, n_params)
97 |
98 |
99 | self.psi_conv = nn.Conv2d(F_int, 1, kernel_size=1, stride=1, padding=0, bias=True)
100 | self.psi_norm = AdaptiveInstanceNorm(1, n_params)
101 | self.psi_sig = nn.Sigmoid()
102 |
103 | self.relu1 = nn.ReLU(inplace=True)
104 |
105 | self.W_g_conv2 = nn.Conv2d(F_l, F_int, kernel_size=1, stride=1, padding=0, bias=True)
106 | self.W_g_norm2 = AdaptiveInstanceNorm(F_int, n_params)
107 |
108 | self.W_x_conv2 = nn.Conv2d(F_g, F_int, kernel_size=1, stride=1, padding=0, bias=True)
109 | self.W_x_norm2 = AdaptiveInstanceNorm(F_int, n_params)
110 |
111 | self.global_ap = nn.AdaptiveAvgPool2d(1)
112 | self.conv_down = nn.Conv2d(F_int, F_int // 4, kernel_size=1, bias=False)
113 | self.conv_up = nn.Conv2d(F_int // 4, F_l, kernel_size=1, bias=False)
114 | self.relu2 = nn.ReLU(inplace=True)
115 | self.sig = nn.Sigmoid()
116 |
117 |
118 |
119 | def forward(self, g, x, params):
120 | g1 = self.W_g_conv1(g)
121 | g1 = self.W_g_norm1(g1, params)
122 | x1 = self.W_x_conv1(x)
123 | x1 = self.W_x_norm1(x1, params)
124 | psi = self.relu1(g1 + x1)
125 | psi = self.psi_conv(psi)
126 | psi = self.psi_norm(psi, params)
127 | psi = self.psi_sig(psi)
128 |
129 | g2 = self.W_g_conv2(g)
130 | g2 = self.W_g_norm2(g2, params)
131 | x2 = self.W_x_conv2(x)
132 | x2 = self.W_x_norm2(x2, params)
133 | gap = self.global_ap(g2 + x2)
134 | gap = self.conv_down(gap)
135 | gap = self.relu2(gap)
136 | gap = self.conv_up(gap)
137 | gap = self.sig(gap)
138 |
139 | out = x * psi * gap
140 | return out
141 |
142 |
143 | class AttenUnet_style(nn.Module):
144 | """
145 | Attention Unet implementation
146 | Paper: https://arxiv.org/abs/1804.03999
147 | """
148 | def __init__(self, n_params, n_input_channels, n_output_channels):
149 | super(AttenUnet_style, self).__init__()
150 |
151 | self.n_params = n_params
152 | n_input_channels = n_input_channels + n_params
153 |
154 | n1 = 32
155 | filters = [n1, n1 * 2, n1 * 4, n1 * 8, n1 * 16]
156 |
157 | self.Maxpool1 = nn.MaxPool2d(kernel_size=2, stride=2)
158 | self.Maxpool2 = nn.MaxPool2d(kernel_size=2, stride=2)
159 | self.Maxpool3 = nn.MaxPool2d(kernel_size=2, stride=2)
160 | self.Maxpool4 = nn.MaxPool2d(kernel_size=2, stride=2)
161 |
162 | self.Conv1 = conv_block(n_input_channels, filters[0], n_params)
163 | self.Conv2 = conv_block(filters[0], filters[1], n_params)
164 | self.Conv3 = conv_block(filters[1], filters[2], n_params)
165 | self.Conv4 = conv_block(filters[2], filters[3], n_params)
166 | self.Conv5 = conv_block(filters[3], filters[4], n_params)
167 |
168 | self.Up5 = up_conv(filters[4], filters[3], n_params)
169 | self.Att5 = Attention_block(F_g=filters[3], F_l=filters[3], F_int=filters[2], n_params=n_params)
170 | self.Up_conv5 = conv_block(filters[4], filters[3], n_params)
171 |
172 | self.Up4 = up_conv(filters[3], filters[2], n_params)
173 | self.Att4 = Attention_block(F_g=filters[2], F_l=filters[2], F_int=filters[1], n_params=n_params)
174 | self.Up_conv4 = conv_block(filters[3], filters[2], n_params)
175 |
176 | self.Up3 = up_conv(filters[2], filters[1], n_params)
177 | self.Att3 = Attention_block(F_g=filters[1], F_l=filters[1], F_int=filters[0], n_params=n_params)
178 | self.Up_conv3 = conv_block(filters[2], filters[1], n_params)
179 |
180 | self.Up2 = up_conv(filters[1], filters[0], n_params)
181 | self.Att2 = Attention_block(F_g=filters[0], F_l=filters[0], F_int=32, n_params=n_params)
182 | self.Up_conv2 = conv_block(filters[1], filters[0], n_params)
183 |
184 | self.Conv = nn.Conv2d(filters[0], n_output_channels, kernel_size=1, stride=1, padding=0)
185 |
186 |
187 |
188 | def forward(self, input):
189 |
190 | x_param = input[0]
191 | x_img = input[1]
192 | b, c, h, w = x_img.size()
193 | if self.n_params == 1:
194 | x_param_layer = torch.unsqueeze(x_param, 2)
195 | x_param_layer = torch.unsqueeze(x_param_layer, 3)
196 | x_param_layer = x_param_layer.repeat(1, 1, h, w)
197 | x_input = torch.cat((x_img, x_param_layer), 1)
198 | else:
199 | x_param1 = x_param[:,0:int(self.n_params/2)]
200 | x_param2 = x_param[:,int(self.n_params/2):]
201 | x_param1_layer = torch.unsqueeze(x_param1, 2)
202 | x_param1_layer = torch.unsqueeze(x_param1_layer, 3)
203 | x_param1_layer = x_param1_layer.repeat(1, 1, h, w)
204 | x_param2_layer = torch.unsqueeze(x_param2, 2)
205 | x_param2_layer = torch.unsqueeze(x_param2_layer, 3)
206 | x_param2_layer = x_param2_layer.repeat(1, 1, h, w)
207 | x_input = torch.cat((x_img, x_param1_layer, x_param2_layer), 1)
208 |
209 | e1 = self.Conv1(x_input, x_param)
210 |
211 | e2 = self.Maxpool1(e1)
212 | e2 = self.Conv2(e2, x_param)
213 |
214 | e3 = self.Maxpool2(e2)
215 | e3 = self.Conv3(e3, x_param)
216 |
217 | e4 = self.Maxpool3(e3)
218 | e4 = self.Conv4(e4, x_param)
219 |
220 | e5 = self.Maxpool4(e4)
221 | e5 = self.Conv5(e5, x_param)
222 |
223 | d5 = self.Up5(e5, x_param)
224 | x4 = self.Att5(g=d5, x=e4, params=x_param)
225 | d5 = torch.cat((x4, d5), dim=1)
226 | d5 = self.Up_conv5(d5, x_param)
227 |
228 | d4 = self.Up4(d5, x_param)
229 | x3 = self.Att4(g=d4, x=e3, params=x_param)
230 | d4 = torch.cat((x3, d4), dim=1)
231 | d4 = self.Up_conv4(d4, x_param)
232 |
233 | d3 = self.Up3(d4, x_param)
234 | x2 = self.Att3(g=d3, x=e2, params=x_param)
235 | d3 = torch.cat((x2, d3), dim=1)
236 | d3 = self.Up_conv3(d3, x_param)
237 |
238 | d2 = self.Up2(d3, x_param)
239 | x1 = self.Att2(g=d2, x=e1, params=x_param)
240 | d2 = torch.cat((x1, d2), dim=1)
241 | d2 = self.Up_conv2(d2, x_param)
242 |
243 | out = self.Conv(d2)
244 |
245 | return out
246 |
--------------------------------------------------------------------------------
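`AdaptiveInstanceNorm` above follows the AdaIN recipe: instance-normalize each feature map, then apply a per-channel affine transform whose `gamma` and `beta` are predicted from the conditioning vector (here, the camera parameters), with the bias initialised so the block starts as an identity. A minimal sketch with illustrative shapes:

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Instance-norm features, then scale/shift them from a style vector."""
    def __init__(self, channels, style_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.affine = nn.Linear(style_dim, channels * 2)
        self.affine.bias.data[:channels] = 1  # gamma starts at 1
        self.affine.bias.data[channels:] = 0  # beta starts at 0

    def forward(self, x, style):
        gamma, beta = self.affine(style).unsqueeze(2).unsqueeze(3).chunk(2, 1)
        return gamma * self.norm(x) + beta

# Illustrative: 2 images, 32 channels, conditioned on a 3-dim parameter vector.
feat = torch.randn(2, 32, 16, 16)
print(AdaIN(32, 3)(feat, torch.randn(2, 3)).shape)  # torch.Size([2, 32, 16, 16])
```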
/unet_model_ori.py:
--------------------------------------------------------------------------------
1 | import torch.nn.functional as F
2 |
3 | from unet_parts_ori import *
4 |
5 | class UNet(nn.Module):
6 | def __init__(self, n_input_channels, n_output_channels):
7 | super(UNet, self).__init__()
8 | self.inc = inconv(n_input_channels, 64)
9 | self.down1 = down(64, 128)
10 | self.down2 = down(128, 256)
11 | self.down3 = down(256, 512)
12 | self.down4 = down(512, 512)
13 | self.up1 = up(1024, 256)
14 | self.up2 = up(512, 128)
15 | self.up3 = up(256, 64)
16 | self.up4 = up(128, 64)
17 | self.outc = outconv(64, n_output_channels)
18 |
19 | def forward(self, x):
20 | x1 = self.inc(x)
21 | x2 = self.down1(x1)
22 | x3 = self.down2(x2)
23 | x4 = self.down3(x3)
24 | x5 = self.down4(x4)
25 | x = self.up1(x5, x4)
26 | x = self.up2(x, x3)
27 | x = self.up3(x, x2)
28 | x = self.up4(x, x1)
29 | x = self.outc(x)
30 | return x
31 |
--------------------------------------------------------------------------------
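The channel counts in `UNet.__init__` above encode the skip connections: each `up` stage concatenates the upsampled decoder features with the matching encoder features, so `up1` receives 512 + 512 = 1024 channels, `up2` receives 256 + 256 = 512, and so on down to `up4` with 64 + 64 = 128. A quick shape check (channel counts and input size are illustrative; four poolings mean height and width should be divisible by 16):

```python
import torch
from unet_model_ori import UNet

net = UNet(n_input_channels=8, n_output_channels=4)
x = torch.randn(1, 8, 64, 64)
print(net(x).shape)  # torch.Size([1, 4, 64, 64]) -- spatial size is preserved
```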
/unet_parts_ori.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 |
5 |
6 | class double_conv(nn.Module):
7 | '''(conv => BN => ReLU) * 2'''
8 | def __init__(self, in_ch, out_ch):
9 | super(double_conv, self).__init__()
10 | self.conv = nn.Sequential(
11 | nn.Conv2d(in_ch, out_ch, 3, padding=1),
12 | nn.BatchNorm2d(out_ch),
13 | nn.ReLU(inplace=True),
14 | nn.Conv2d(out_ch, out_ch, 3, padding=1),
15 | nn.BatchNorm2d(out_ch),
16 | nn.ReLU(inplace=True)
17 | )
18 |
19 | def forward(self, x):
20 | x = self.conv(x)
21 | return x
22 |
23 |
24 | class inconv(nn.Module):
25 | def __init__(self, in_ch, out_ch):
26 | super(inconv, self).__init__()
27 | self.conv = double_conv(in_ch, out_ch)
28 |
29 | def forward(self, x):
30 | x = self.conv(x)
31 | return x
32 |
33 |
34 | class down(nn.Module):
35 | def __init__(self, in_ch, out_ch):
36 | super(down, self).__init__()
37 | self.mpconv = nn.Sequential(
38 | nn.MaxPool2d(2),
39 | double_conv(in_ch, out_ch)
40 | )
41 |
42 | def forward(self, x):
43 | x = self.mpconv(x)
44 | return x
45 |
46 |
47 | class up(nn.Module):
48 | def __init__(self, in_ch, out_ch, bilinear=True):
49 | super(up, self).__init__()
50 | if bilinear:
51 | self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
52 | else:
53 | self.up = nn.ConvTranspose2d(in_ch//2, in_ch//2, 2, stride=2)
54 |
55 | self.conv = double_conv(in_ch, out_ch)
56 |
57 | def forward(self, x1, x2):
58 | x1 = self.up(x1)
59 |
60 | # input is CHW
61 | diffY = x2.size()[2] - x1.size()[2]
62 | diffX = x2.size()[3] - x1.size()[3]
63 |
64 | x1 = F.pad(x1, (diffX // 2, diffX - diffX//2,
65 | diffY // 2, diffY - diffY//2))
66 |
67 | x = torch.cat([x2, x1], dim=1)
68 | x = self.conv(x)
69 | return x
70 |
71 |
72 | class outconv(nn.Module):
73 | def __init__(self, in_ch, out_ch):
74 | super(outconv, self).__init__()
75 | self.conv = nn.Conv2d(in_ch, out_ch, 1)
76 |
77 | def forward(self, x):
78 | x = self.conv(x)
79 | return x
80 |
--------------------------------------------------------------------------------
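The `F.pad` call in `up.forward` above exists because upsampling an odd-sized feature map can come back smaller than its skip connection; padding `x1` to the size of `x2` keeps the concatenation valid for inputs whose sides are not exact powers of two. A small sketch of the mismatch it repairs (shapes are illustrative):

```python
import torch
import torch.nn.functional as F

x2 = torch.randn(1, 8, 25, 25)   # encoder feature with an odd spatial size
x1 = torch.randn(1, 8, 12, 12)   # decoder feature after pooling (25 -> 12)
x1 = F.interpolate(x1, scale_factor=2, mode='bilinear', align_corners=True)  # 24x24
diffY, diffX = x2.size(2) - x1.size(2), x2.size(3) - x1.size(3)  # 1, 1
x1 = F.pad(x1, (diffX // 2, diffX - diffX // 2, diffY // 2, diffY - diffY // 2))
print(torch.cat([x2, x1], dim=1).shape)  # torch.Size([1, 16, 25, 25])
```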