├── Project Proposal with summary of papers reviewed.pdf
├── README.md
├── conv_helper.py
├── libs
│   ├── __pycache__
│   │   ├── utils.cpython-34.pyc
│   │   ├── utils.cpython-35.pyc
│   │   ├── vgg16.cpython-34.pyc
│   │   └── vgg16.cpython-35.pyc
│   ├── utils.py
│   └── vgg16.py
├── main.py
├── model.py
├── paper.pdf
├── result1.PNG
├── result2.png
├── result3.PNG
├── static
│   ├── css
│   │   ├── 4-col-portfolio.css
│   │   ├── bootstrap.css
│   │   └── bootstrap.min.css
│   ├── fonts
│   │   ├── glyphicons-halflings-regular.eot
│   │   ├── glyphicons-halflings-regular.svg
│   │   ├── glyphicons-halflings-regular.ttf
│   │   ├── glyphicons-halflings-regular.woff
│   │   └── glyphicons-halflings-regular.woff2
│   ├── js
│   │   ├── bootstrap.js
│   │   ├── bootstrap.min.js
│   │   └── jquery.js
│   ├── output.png
│   └── style.css
├── templates
│   └── index.html
├── test.py
├── train.py
└── utils.py
/Project Proposal with summary of papers reviewed.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/Project Proposal with summary of papers reviewed.pdf
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Denoising with GAN
2 | [Paper](https://uofi.box.com/shared/static/s16nc93x8j6ctd0ercx9juf5mqmqx4bp.pdf) | [Video](https://www.youtube.com/watch?v=Yh_Bsoe-Qj4)
3 |
4 | ## Introduction
5 |
6 | Animation movie companies like Pixar and DreamWorks render their 3D scenes using a technique called path tracing, which enables them to create high-quality, photorealistic frames. Path tracing shoots thousands of rays into each pixel at random (Monte Carlo sampling); the rays hit objects in the scene and, based on each object's reflective properties, are reflected, refracted, or absorbed. The colors returned by these rays are averaged to get the color of the pixel, and this process is repeated for all pixels. Due to this computational complexity, a single frame can take 8-16 hours to render.
7 |
8 | We propose a neural-network-based solution that reduces these 8-16 hours to a couple of seconds using a Generative Adversarial Network. The main idea behind the proposed method is to render with a small number of samples per pixel (say 4 or 8 spp instead of 32K spp) and pass the noisy image to our network, which then generates a photorealistic image of high quality.
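
To make the render-cheap-then-denoise idea concrete, here is a toy sketch (illustrative only, not the project's renderer): a Monte Carlo estimate of a pixel's color converges as the sample count grows, which is why a 4 spp render is fast but noisy while a 32K spp render is clean but slow.

```python
import numpy as np

rng = np.random.default_rng(0)
true_color = np.array([0.2, 0.5, 0.8])  # "ground truth" radiance of one pixel

def render_pixel(spp):
    # Monte Carlo estimate: average `spp` noisy radiance samples.
    samples = true_color + rng.normal(scale=0.3, size=(spp, 3))
    return samples.mean(axis=0)

print(render_pixel(4))      # cheap but noisy -- this is what we feed the network
print(render_pixel(32000))  # converged, but ~8000x the work
```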
9 |
10 | # Demo Video [Link](https://www.youtube.com/watch?v=Yh_Bsoe-Qj4)
11 |
12 | [Watch the demo on YouTube](https://www.youtube.com/watch?v=Yh_Bsoe-Qj4)
13 |
14 |
15 |
16 | #### Table of Contents
17 |
18 | * [Installation](#installation)
19 | * [Running](#running)
20 | * [Dataset](#dataset)
21 | * [Hyperparameters](#hyperparameters)
22 | * [Results](#results)
23 | * [Improvements](#improvements)
24 | * [Credits](#credits)
25 |
26 | ## Installation
27 |
28 | To run the project you will need:
29 | * Python 3.5
30 | * TensorFlow (v1.0 or v1.1)
31 | * PIL
32 | * [CKPT FILE](https://uofi.box.com/shared/static/21a5jwdiqpnx24c50cyolwzwycnr3fwe.gz)
33 | * [Dataset](https://uofi.box.com/shared/static/gy0t3vgwtlk1933xbtz1zvhlakkdac3n.zip)
34 |
35 | ## Running
36 |
37 | Once you have all the dependencies ready, do the following:
38 |
39 | 1. Download the dataset and extract it to a folder named 'dataset' (ONLY needed if you want to train; it is not required to run).
40 |
41 | 2. Extract the CKPT files to a folder named 'Checkpoints'.
42 |
43 | 3. Run main.py: `python3 main.py`
44 |
45 | 4. Open a browser. If you are running on a server, go to [ip-address]:8888; on your local machine, go to localhost:8888.
46 |
47 | ## Dataset
48 | We picked 40 random frames from Pixar movies and added Gaussian noise at five different standard deviations, five sets at each deviation, for a total of 40 x 5 x 5 = 1000 training images. For validation we used 10 images entirely disjoint from the training set, again with added Gaussian noise. For testing we used both images with added Gaussian noise and images with real noise.
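
Below is a minimal sketch of how such noisy/clean training pairs can be generated with PIL and NumPy. The five standard deviations and the file paths are illustrative stand-ins; the actual values are not listed in this README.

```python
import numpy as np
from PIL import Image

def add_gaussian_noise(path, sigma):
    img = np.asarray(Image.open(path).convert('RGB'), dtype=np.float32) / 255.0
    noisy = np.clip(img + np.random.normal(scale=sigma, size=img.shape), 0.0, 1.0)
    return Image.fromarray((noisy * 255).astype(np.uint8))

for sigma in (0.02, 0.05, 0.10, 0.15, 0.20):  # hypothetical noise levels
    add_gaussian_noise('dataset/clean/frame.png', sigma).save(
        'dataset/noisy/frame_sigma%.2f.png' % sigma)
```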
49 |
50 | ## Hyperparameters
51 | * Number of iterations - 10K
52 | * Adversarial Loss Factor - 0.5
53 | * Pixel Loss Factor - 1.0
54 | * Feature Loss Factor - 1.0
55 | * Smoothness Loss Factor - 0.0001
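
As a minimal sketch of how these factors combine into the generator objective (the placeholder scalars stand in for the actual loss tensors computed in train.py, which is not shown here):

```python
# Placeholder values standing in for the real loss tensors from train.py.
adversarial_loss, pixel_loss, feature_loss, smoothness_loss = 0.7, 0.05, 0.1, 3.0

g_loss = (0.5 * adversarial_loss      # Adversarial Loss Factor
          + 1.0 * pixel_loss          # Pixel Loss Factor
          + 1.0 * feature_loss        # Feature Loss Factor
          + 0.0001 * smoothness_loss) # Smoothness Loss Factor
print(g_loss)
```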
56 |
57 | ## Results
58 | 3D rendering test data:
59 |
60 |
61 | Real noise images:
62 |
63 |
64 | CT-Scan:
65 |
66 |
67 |
68 | ## Improvements
69 |
70 | * Increase the number of iterations to 100K.
71 | * Train the network on different noise types.
72 | * Make it work in a real-time application.
73 |
74 | ## Credits
75 |
76 | * [SRGAN](https://arxiv.org/pdf/1609.04802.pdf)
77 | * [Image De-raining using conditional generative adversarial network](https://arxiv.org/pdf/1701.05957.pdf)
78 | * [Creating photorealistic images from gameboy camera](http://www.pinchofintelligence.com/photorealistic-neural-network-gameboy/)
79 | * [CS20SI](http://cs20si.stanford.edu/)
80 | * [CS231n](https://cs231n.github.io/)
81 |
--------------------------------------------------------------------------------
/conv_helper.py:
--------------------------------------------------------------------------------
1 | import tensorflow as tf
2 | import tensorflow.contrib.slim as slim
3 |
4 | from utils import *
5 |
6 | def conv_layer(input_image, ksize, in_channels, out_channels, stride, scope_name, activation_function=lrelu, reuse=False):
7 |     with tf.variable_scope(scope_name, reuse=reuse):
8 |         filter = tf.get_variable("filter", initializer=tf.random_normal([ksize, ksize, in_channels, out_channels], stddev=0.03))  # get_variable so reuse=True actually shares weights
9 | output = tf.nn.conv2d(input_image, filter, strides=[1, stride, stride, 1], padding='SAME')
10 | output = slim.batch_norm(output)
11 | if activation_function:
12 | output = activation_function(output)
13 | return output, filter
14 |
15 | def residual_layer(input_image, ksize, in_channels, out_channels, stride, scope_name):
16 | with tf.variable_scope(scope_name):
17 | output, filter = conv_layer(input_image, ksize, in_channels, out_channels, stride, scope_name+"_conv1")
18 | output, filter = conv_layer(output, ksize, out_channels, out_channels, stride, scope_name+"_conv2")
19 | output = tf.add(output, tf.identity(input_image))
20 | return output, filter
21 |
22 | def transpose_deconvolution_layer(input_tensor, used_weights, new_shape, stride, scope_name):
23 |     with tf.variable_scope(scope_name):
24 | output = tf.nn.conv2d_transpose(input_tensor, used_weights, output_shape=new_shape, strides=[1, stride, stride, 1], padding='SAME')
25 | output = tf.nn.relu(output)
26 | return output
27 |
28 | def resize_deconvolution_layer(input_tensor, new_shape, scope_name):
29 | with tf.variable_scope(scope_name):
30 | output = tf.image.resize_images(input_tensor, (new_shape[1], new_shape[2]), method=1)
31 | output, unused_weights = conv_layer(output, 3, new_shape[3]*2, new_shape[3], 1, scope_name+"_deconv")
32 | return output
33 |
34 | def deconvolution_layer(input_tensor, new_shape, scope_name):
35 | return resize_deconvolution_layer(input_tensor, new_shape, scope_name)
36 |
37 | def output_between_zero_and_one(output):
38 |     output += 1  # shift tanh output from [-1, 1] to [0, 2]
39 |     return output / 2  # then scale to [0, 1]
40 |
--------------------------------------------------------------------------------
/libs/__pycache__/utils.cpython-34.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/libs/__pycache__/utils.cpython-34.pyc
--------------------------------------------------------------------------------
/libs/__pycache__/utils.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/libs/__pycache__/utils.cpython-35.pyc
--------------------------------------------------------------------------------
/libs/__pycache__/vgg16.cpython-34.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/libs/__pycache__/vgg16.cpython-34.pyc
--------------------------------------------------------------------------------
/libs/__pycache__/vgg16.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/libs/__pycache__/vgg16.cpython-35.pyc
--------------------------------------------------------------------------------
/libs/utils.py:
--------------------------------------------------------------------------------
1 | """Utilities used in the Kadenze Academy Course on Deep Learning w/ Tensorflow.
2 |
3 | Creative Applications of Deep Learning w/ Tensorflow.
4 | Kadenze, Inc.
5 | Parag K. Mital
6 |
7 | Copyright Parag K. Mital, June 2016.
8 | """
9 | import matplotlib.pyplot as plt
10 | import tensorflow as tf
11 | import urllib
12 | import numpy as np
13 | import zipfile
14 | import os
15 | from scipy.io import wavfile
16 |
17 |
18 | def download(path):
19 | """Use urllib to download a file.
20 |
21 | Parameters
22 | ----------
23 | path : str
24 | Url to download
25 |
26 | Returns
27 | -------
28 | path : str
29 | Location of downloaded file.
30 | """
31 | import os
32 | from six.moves import urllib
33 |
34 | fname = path.split('/')[-1]
35 | if os.path.exists(fname):
36 | return fname
37 |
38 | print('Downloading ' + path)
39 |
40 | def progress(count, block_size, total_size):
41 | if count % 20 == 0:
42 | print('Downloaded %02.02f/%02.02f MB' % (
43 | count * block_size / 1024.0 / 1024.0,
44 | total_size / 1024.0 / 1024.0), end='\r')
45 |
46 | filepath, _ = urllib.request.urlretrieve(
47 | path, filename=fname, reporthook=progress)
48 | return filepath
49 |
50 |
51 | def download_and_extract_tar(path, dst):
52 | """Download and extract a tar file.
53 |
54 | Parameters
55 | ----------
56 | path : str
57 | Url to tar file to download.
58 | dst : str
59 | Location to save tar file contents.
60 | """
61 | import tarfile
62 | filepath = download(path)
63 | if not os.path.exists(dst):
64 | os.makedirs(dst)
65 | tarfile.open(filepath, 'r:gz').extractall(dst)
66 |
67 |
68 | def download_and_extract_zip(path, dst):
69 | """Download and extract a zip file.
70 |
71 | Parameters
72 | ----------
73 | path : str
74 | Url to zip file to download.
75 | dst : str
76 | Location to save zip file contents.
77 | """
78 | import zipfile
79 | filepath = download(path)
80 | if not os.path.exists(dst):
81 | os.makedirs(dst)
82 | zf = zipfile.ZipFile(file=filepath)
83 | zf.extractall(dst)
84 |
85 |
86 | def load_audio(filename, b_normalize=True):
87 | """Load the audiofile at the provided filename using scipy.io.wavfile.
88 |
89 | Optionally normalizes the audio to the maximum value.
90 |
91 | Parameters
92 | ----------
93 | filename : str
94 | File to load.
95 | b_normalize : bool, optional
96 | Normalize to the maximum value.
97 | """
98 | sr, s = wavfile.read(filename)
99 | if b_normalize:
100 | s = s.astype(np.float32)
101 | s = (s / np.max(np.abs(s)))
102 | s -= np.mean(s)
103 | return s
104 |
105 |
106 | def corrupt(x):
107 | """Take an input tensor and add uniform masking.
108 |
109 | Parameters
110 | ----------
111 | x : Tensor/Placeholder
112 | Input to corrupt.
113 | Returns
114 | -------
115 | x_corrupted : Tensor
116 | 50 pct of values corrupted.
117 | """
118 |     return tf.multiply(x, tf.cast(tf.random_uniform(shape=tf.shape(x),
119 | minval=0,
120 | maxval=2,
121 | dtype=tf.int32), tf.float32))
122 |
123 |
124 | def interp(l, r, n_samples):
125 |     """Interpolate between the arrays l and r, n_samples times.
126 |
127 | Parameters
128 | ----------
129 | l : np.ndarray
130 | Left edge
131 | r : np.ndarray
132 | Right edge
133 | n_samples : int
134 | Number of samples
135 |
136 | Returns
137 | -------
138 | arr : np.ndarray
139 |         Interpolated array
140 | """
141 | return np.array([
142 | l + step_i / (n_samples - 1) * (r - l)
143 | for step_i in range(n_samples)])
144 |
145 |
146 | def make_latent_manifold(corners, n_samples):
147 | """Create a 2d manifold out of the provided corners: n_samples * n_samples.
148 |
149 | Parameters
150 | ----------
151 | corners : list of np.ndarray
152 |         The four corners to interpolate.
153 | n_samples : int
154 | Number of samples to use in interpolation.
155 |
156 | Returns
157 | -------
158 | arr : np.ndarray
159 | Stacked array of all 2D interpolated samples
160 | """
161 | left = interp(corners[0], corners[1], n_samples)
162 | right = interp(corners[2], corners[3], n_samples)
163 |
164 | embedding = []
165 | for row_i in range(n_samples):
166 | embedding.append(interp(left[row_i], right[row_i], n_samples))
167 | return np.vstack(embedding)
168 |
169 |
170 | def imcrop_tosquare(img):
171 | """Make any image a square image.
172 |
173 | Parameters
174 | ----------
175 | img : np.ndarray
176 | Input image to crop, assumed at least 2d.
177 |
178 | Returns
179 | -------
180 | crop : np.ndarray
181 | Cropped image.
182 | """
183 | size = np.min(img.shape[:2])
184 | extra = img.shape[:2] - size
185 | crop = img
186 | for i in np.flatnonzero(extra):
187 | crop = np.take(crop, extra[i] // 2 + np.r_[:size], axis=i)
188 | return crop
189 |
190 |
191 | def slice_montage(montage, img_h, img_w, n_imgs):
192 | """Slice a montage image into n_img h x w images.
193 |
194 | Performs the opposite of the montage function. Takes a montage image and
195 | slices it back into a N x H x W x C image.
196 |
197 | Parameters
198 | ----------
199 | montage : np.ndarray
200 | Montage image to slice.
201 | img_h : int
202 | Height of sliced image
203 | img_w : int
204 | Width of sliced image
205 | n_imgs : int
206 | Number of images to slice
207 |
208 | Returns
209 | -------
210 | sliced : np.ndarray
211 | Sliced images as 4d array.
212 | """
213 | sliced_ds = []
214 | for i in range(int(np.sqrt(n_imgs))):
215 | for j in range(int(np.sqrt(n_imgs))):
216 | sliced_ds.append(montage[
217 | 1 + i + i * img_h:1 + i + (i + 1) * img_h,
218 | 1 + j + j * img_w:1 + j + (j + 1) * img_w])
219 | return np.array(sliced_ds)
220 |
221 |
222 | def montage(images, saveto='montage.png'):
223 | """Draw all images as a montage separated by 1 pixel borders.
224 |
225 | Also saves the file to the destination specified by `saveto`.
226 |
227 | Parameters
228 | ----------
229 | images : numpy.ndarray
230 | Input array to create montage of. Array should be:
231 | batch x height x width x channels.
232 | saveto : str
233 | Location to save the resulting montage image.
234 |
235 | Returns
236 | -------
237 | m : numpy.ndarray
238 | Montage image.
239 | """
240 | if isinstance(images, list):
241 | images = np.array(images)
242 | img_h = images.shape[1]
243 | img_w = images.shape[2]
244 | n_plots = int(np.ceil(np.sqrt(images.shape[0])))
245 | if len(images.shape) == 4 and images.shape[3] == 3:
246 | m = np.ones(
247 | (images.shape[1] * n_plots + n_plots + 1,
248 | images.shape[2] * n_plots + n_plots + 1, 3)) * 0.5
249 | else:
250 | m = np.ones(
251 | (images.shape[1] * n_plots + n_plots + 1,
252 | images.shape[2] * n_plots + n_plots + 1)) * 0.5
253 | for i in range(n_plots):
254 | for j in range(n_plots):
255 | this_filter = i * n_plots + j
256 | if this_filter < images.shape[0]:
257 | this_img = images[this_filter]
258 | m[1 + i + i * img_h:1 + i + (i + 1) * img_h,
259 | 1 + j + j * img_w:1 + j + (j + 1) * img_w] = this_img
260 | plt.imsave(arr=m, fname=saveto)
261 | return m
262 |
263 |
264 | def montage_filters(W):
265 | """Draws all filters (n_input * n_output filters) as a
266 | montage image separated by 1 pixel borders.
267 |
268 | Parameters
269 | ----------
270 | W : Tensor
271 | Input tensor to create montage of.
272 |
273 | Returns
274 | -------
275 | m : numpy.ndarray
276 | Montage image.
277 | """
278 | W = np.reshape(W, [W.shape[0], W.shape[1], 1, W.shape[2] * W.shape[3]])
279 | n_plots = int(np.ceil(np.sqrt(W.shape[-1])))
280 | m = np.ones(
281 | (W.shape[0] * n_plots + n_plots + 1,
282 | W.shape[1] * n_plots + n_plots + 1)) * 0.5
283 | for i in range(n_plots):
284 | for j in range(n_plots):
285 | this_filter = i * n_plots + j
286 | if this_filter < W.shape[-1]:
287 | m[1 + i + i * W.shape[0]:1 + i + (i + 1) * W.shape[0],
288 | 1 + j + j * W.shape[1]:1 + j + (j + 1) * W.shape[1]] = (
289 | np.squeeze(W[:, :, :, this_filter]))
290 | return m
291 |
292 |
293 | def get_celeb_files(dst='img_align_celeba', max_images=100):
294 | """Download the first 100 images of the celeb dataset.
295 |
296 | Files will be placed in a directory 'img_align_celeba' if one
297 | doesn't exist.
298 |
299 | Returns
300 | -------
301 | files : list of strings
302 | Locations to the first 100 images of the celeb net dataset.
303 | """
304 | # Create a directory
305 | if not os.path.exists(dst):
306 | os.mkdir(dst)
307 |
308 | # Now perform the following 100 times:
309 | for img_i in range(1, max_images + 1):
310 |
311 | # create a string using the current loop counter
312 | f = '000%03d.jpg' % img_i
313 |
314 | if not os.path.exists(os.path.join(dst, f)):
315 |
316 | # and get the url with that string appended the end
317 | url = 'https://s3.amazonaws.com/cadl/celeb-align/' + f
318 |
319 | # We'll print this out to the console so we can see how far we've gone
320 | print(url, end='\r')
321 |
322 | # And now download the url to a location inside our new directory
323 | urllib.request.urlretrieve(url, os.path.join(dst, f))
324 |
325 | files = [os.path.join(dst, file_i)
326 | for file_i in os.listdir(dst)
327 | if '.jpg' in file_i][:max_images]
328 | return files
329 |
330 |
331 | def get_celeb_imgs(max_images=100):
332 | """Load the first `max_images` images of the celeb dataset.
333 |
334 | Returns
335 | -------
336 | imgs : list of np.ndarray
337 | List of the first 100 images from the celeb dataset
338 | """
339 | return [plt.imread(f_i) for f_i in get_celeb_files(max_images=max_images)]
340 |
341 |
342 | def gauss(mean, stddev, ksize):
343 | """Use Tensorflow to compute a Gaussian Kernel.
344 |
345 | Parameters
346 | ----------
347 | mean : float
348 | Mean of the Gaussian (e.g. 0.0).
349 | stddev : float
350 | Standard Deviation of the Gaussian (e.g. 1.0).
351 | ksize : int
352 | Size of kernel (e.g. 16).
353 |
354 | Returns
355 | -------
356 | kernel : np.ndarray
357 | Computed Gaussian Kernel using Tensorflow.
358 | """
359 | g = tf.Graph()
360 | with tf.Session(graph=g):
361 | x = tf.linspace(-3.0, 3.0, ksize)
362 |         z = (tf.exp(tf.negative(tf.pow(x - mean, 2.0) /
363 |                                 (2.0 * tf.pow(stddev, 2.0)))) *
364 | (1.0 / (stddev * tf.sqrt(2.0 * 3.1415))))
365 | return z.eval()
366 |
367 |
368 | def gauss2d(mean, stddev, ksize):
369 | """Use Tensorflow to compute a 2D Gaussian Kernel.
370 |
371 | Parameters
372 | ----------
373 | mean : float
374 | Mean of the Gaussian (e.g. 0.0).
375 | stddev : float
376 | Standard Deviation of the Gaussian (e.g. 1.0).
377 | ksize : int
378 | Size of kernel (e.g. 16).
379 |
380 | Returns
381 | -------
382 | kernel : np.ndarray
383 | Computed 2D Gaussian Kernel using Tensorflow.
384 | """
385 | z = gauss(mean, stddev, ksize)
386 | g = tf.Graph()
387 | with tf.Session(graph=g):
388 | z_2d = tf.matmul(tf.reshape(z, [ksize, 1]), tf.reshape(z, [1, ksize]))
389 | return z_2d.eval()
390 |
391 |
392 | def convolve(img, kernel):
393 | """Use Tensorflow to convolve a 4D image with a 4D kernel.
394 |
395 | Parameters
396 | ----------
397 | img : np.ndarray
398 | 4-dimensional image shaped N x H x W x C
399 | kernel : np.ndarray
400 | 4-dimensional image shape K_H, K_W, C_I, C_O corresponding to the
401 | kernel's height and width, the number of input channels, and the
402 | number of output channels. Note that C_I should = C.
403 |
404 | Returns
405 | -------
406 | result : np.ndarray
407 | Convolved result.
408 | """
409 | g = tf.Graph()
410 | with tf.Session(graph=g):
411 | convolved = tf.nn.conv2d(img, kernel, strides=[1, 1, 1, 1], padding='SAME')
412 | res = convolved.eval()
413 | return res
414 |
415 |
416 | def gabor(ksize=32):
417 | """Use Tensorflow to compute a 2D Gabor Kernel.
418 |
419 | Parameters
420 | ----------
421 | ksize : int, optional
422 | Size of kernel.
423 |
424 | Returns
425 | -------
426 | gabor : np.ndarray
427 | Gabor kernel with ksize x ksize dimensions.
428 | """
429 | g = tf.Graph()
430 | with tf.Session(graph=g):
431 | z_2d = gauss2d(0.0, 1.0, ksize)
432 | ones = tf.ones((1, ksize))
433 | ys = tf.sin(tf.linspace(-3.0, 3.0, ksize))
434 | ys = tf.reshape(ys, [ksize, 1])
435 | wave = tf.matmul(ys, ones)
436 |         gabor = tf.multiply(wave, z_2d)
437 | return gabor.eval()
438 |
439 |
440 | def build_submission(filename, file_list, optional_file_list=()):
441 | """Helper utility to check homework assignment submissions and package them.
442 |
443 | Parameters
444 | ----------
445 | filename : str
446 | Output zip file name
447 | file_list : tuple
448 | Tuple of files to include
449 | """
450 | # check each file exists
451 | for part_i, file_i in enumerate(file_list):
452 | if not os.path.exists(file_i):
453 | print('\nYou are missing the file {}. '.format(file_i) +
454 | 'It does not look like you have completed Part {}.'.format(
455 | part_i + 1))
456 |
457 | def zipdir(path, zf):
458 | for root, dirs, files in os.walk(path):
459 | for file in files:
460 | # make sure the files are part of the necessary file list
461 | if file.endswith(file_list) or file.endswith(optional_file_list):
462 | zf.write(os.path.join(root, file))
463 |
464 | # create a zip file with the necessary files
465 | zipf = zipfile.ZipFile(filename, 'w', zipfile.ZIP_DEFLATED)
466 | zipdir('.', zipf)
467 | zipf.close()
468 | print('Your assignment zip file has been created!')
469 | print('Now submit the file:\n{}\nto Kadenze for grading!'.format(
470 | os.path.abspath(filename)))
471 |
472 |
473 | def normalize(a, s=0.1):
474 | '''Normalize the image range for visualization'''
475 | return np.uint8(np.clip(
476 | (a - a.mean()) / max(a.std(), 1e-4) * s + 0.5,
477 | 0, 1) * 255)
478 |
479 |
480 | # %%
481 | def weight_variable(shape, **kwargs):
482 | '''Helper function to create a weight variable initialized with
483 | a normal distribution
484 | Parameters
485 | ----------
486 | shape : list
487 | Size of weight variable
488 | '''
489 | if isinstance(shape, list):
490 |         initial = tf.random_normal(tf.stack(shape), mean=0.0, stddev=0.01)
491 | initial.set_shape(shape)
492 | else:
493 | initial = tf.random_normal(shape, mean=0.0, stddev=0.01)
494 | return tf.Variable(initial, **kwargs)
495 |
496 |
497 | # %%
498 | def bias_variable(shape, **kwargs):
499 | '''Helper function to create a bias variable initialized with
500 | a constant value.
501 | Parameters
502 | ----------
503 | shape : list
504 | Size of weight variable
505 | '''
506 | if isinstance(shape, list):
507 |         initial = tf.random_normal(tf.stack(shape), mean=0.0, stddev=0.01)
508 | initial.set_shape(shape)
509 | else:
510 | initial = tf.random_normal(shape, mean=0.0, stddev=0.01)
511 | return tf.Variable(initial, **kwargs)
512 |
513 |
514 | def binary_cross_entropy(z, x, name=None):
515 | """Binary Cross Entropy measures cross entropy of a binary variable.
516 |
517 | loss(x, z) = - sum_i (x[i] * log(z[i]) + (1 - x[i]) * log(1 - z[i]))
518 |
519 | Parameters
520 | ----------
521 | z : tf.Tensor
522 | A `Tensor` of the same type and shape as `x`.
523 | x : tf.Tensor
524 | A `Tensor` of type `float32` or `float64`.
525 | """
526 | with tf.variable_scope(name or 'bce'):
527 | eps = 1e-12
528 | return (-(x * tf.log(z + eps) +
529 | (1. - x) * tf.log(1. - z + eps)))
530 |
531 |
532 | def conv2d(x, n_output,
533 | k_h=5, k_w=5, d_h=2, d_w=2,
534 | padding='SAME', name='conv2d', reuse=None):
535 | """Helper for creating a 2d convolution operation.
536 |
537 | Parameters
538 | ----------
539 | x : tf.Tensor
540 | Input tensor to convolve.
541 | n_output : int
542 | Number of filters.
543 | k_h : int, optional
544 | Kernel height
545 | k_w : int, optional
546 | Kernel width
547 | d_h : int, optional
548 | Height stride
549 | d_w : int, optional
550 | Width stride
551 | padding : str, optional
552 | Padding type: "SAME" or "VALID"
553 | name : str, optional
554 | Variable scope
555 |
556 | Returns
557 | -------
558 | op : tf.Tensor
559 | Output of convolution
560 | """
561 | with tf.variable_scope(name or 'conv2d', reuse=reuse):
562 | W = tf.get_variable(
563 | name='W',
564 | shape=[k_h, k_w, x.get_shape()[-1], n_output],
565 | initializer=tf.contrib.layers.xavier_initializer_conv2d())
566 |
567 | conv = tf.nn.conv2d(
568 | name='conv',
569 | input=x,
570 | filter=W,
571 | strides=[1, d_h, d_w, 1],
572 | padding=padding)
573 |
574 | b = tf.get_variable(
575 | name='b',
576 | shape=[n_output],
577 | initializer=tf.constant_initializer(0.0))
578 |
579 | h = tf.nn.bias_add(
580 | name='h',
581 | value=conv,
582 | bias=b)
583 |
584 | return h, W
585 |
586 |
587 | def deconv2d(x, n_output_h, n_output_w, n_output_ch, n_input_ch=None,
588 | k_h=5, k_w=5, d_h=2, d_w=2,
589 | padding='SAME', name='deconv2d', reuse=None):
590 | """Deconvolution helper.
591 |
592 | Parameters
593 | ----------
594 | x : tf.Tensor
595 | Input tensor to convolve.
596 | n_output_h : int
597 | Height of output
598 | n_output_w : int
599 | Width of output
600 | n_output_ch : int
601 | Number of filters.
602 | k_h : int, optional
603 | Kernel height
604 | k_w : int, optional
605 | Kernel width
606 | d_h : int, optional
607 | Height stride
608 | d_w : int, optional
609 | Width stride
610 | padding : str, optional
611 | Padding type: "SAME" or "VALID"
612 | name : str, optional
613 | Variable scope
614 |
615 | Returns
616 | -------
617 | op : tf.Tensor
618 | Output of deconvolution
619 | """
620 | with tf.variable_scope(name or 'deconv2d', reuse=reuse):
621 | W = tf.get_variable(
622 | name='W',
623 |             shape=[k_h, k_w, n_output_ch, n_input_ch or x.get_shape()[-1]],
624 | initializer=tf.contrib.layers.xavier_initializer_conv2d())
625 |
626 | conv = tf.nn.conv2d_transpose(
627 | name='conv_t',
628 | value=x,
629 | filter=W,
630 |             output_shape=tf.stack(
631 | [tf.shape(x)[0], n_output_h, n_output_w, n_output_ch]),
632 | strides=[1, d_h, d_w, 1],
633 | padding=padding)
634 |
635 | conv.set_shape([None, n_output_h, n_output_w, n_output_ch])
636 |
637 | b = tf.get_variable(
638 | name='b',
639 | shape=[n_output_ch],
640 | initializer=tf.constant_initializer(0.0))
641 |
642 | h = tf.nn.bias_add(name='h', value=conv, bias=b)
643 |
644 | return h, W
645 |
646 |
647 | def lrelu(features, leak=0.2):
648 | """Leaky rectifier.
649 |
650 | Parameters
651 | ----------
652 | features : tf.Tensor
653 | Input to apply leaky rectifier to.
654 | leak : float, optional
655 | Percentage of leak.
656 |
657 | Returns
658 | -------
659 | op : tf.Tensor
660 | Resulting output of applying leaky rectifier activation.
661 | """
662 | f1 = 0.5 * (1 + leak)
663 | f2 = 0.5 * (1 - leak)
664 | return f1 * features + f2 * abs(features)
665 |
666 |
667 | def linear(x, n_output, name=None, activation=None, reuse=None):
668 | """Fully connected layer.
669 |
670 | Parameters
671 | ----------
672 | x : tf.Tensor
673 | Input tensor to connect
674 | n_output : int
675 | Number of output neurons
676 | name : None, optional
677 | Scope to apply
678 |
679 | Returns
680 | -------
681 | h, W : tf.Tensor, tf.Tensor
682 | Output of fully connected layer and the weight matrix
683 | """
684 | if len(x.get_shape()) != 2:
685 | x = flatten(x, reuse=reuse)
686 |
687 | n_input = x.get_shape().as_list()[1]
688 |
689 | with tf.variable_scope(name or "fc", reuse=reuse):
690 | W = tf.get_variable(
691 | name='W',
692 | shape=[n_input, n_output],
693 | dtype=tf.float32,
694 | initializer=tf.contrib.layers.xavier_initializer())
695 |
696 | b = tf.get_variable(
697 | name='b',
698 | shape=[n_output],
699 | dtype=tf.float32,
700 | initializer=tf.constant_initializer(0.0))
701 |
702 | h = tf.nn.bias_add(
703 | name='h',
704 | value=tf.matmul(x, W),
705 | bias=b)
706 |
707 | if activation:
708 | h = activation(h)
709 |
710 | return h, W
711 |
712 |
713 | def flatten(x, name=None, reuse=None):
714 | """Flatten Tensor to 2-dimensions.
715 |
716 | Parameters
717 | ----------
718 | x : tf.Tensor
719 | Input tensor to flatten.
720 | name : None, optional
721 | Variable scope for flatten operations
722 |
723 | Returns
724 | -------
725 | flattened : tf.Tensor
726 | Flattened tensor.
727 | """
728 | with tf.variable_scope('flatten'):
729 | dims = x.get_shape().as_list()
730 | if len(dims) == 4:
731 | flattened = tf.reshape(
732 | x,
733 | shape=[-1, dims[1] * dims[2] * dims[3]])
734 | elif len(dims) == 2 or len(dims) == 1:
735 | flattened = x
736 | else:
737 | raise ValueError('Expected n dimensions of 1, 2 or 4. Found:',
738 | len(dims))
739 |
740 | return flattened
741 |
742 |
743 | def to_tensor(x):
744 | """Convert 2 dim Tensor to a 4 dim Tensor ready for convolution.
745 |
746 | Performs the opposite of flatten(x). If the tensor is already 4-D, this
747 | returns the same as the input, leaving it unchanged.
748 |
749 | Parameters
750 | ----------
751 |     x : tf.Tensor
752 | Input 2-D tensor. If 4-D already, left unchanged.
753 |
754 | Returns
755 | -------
756 | x : tf.Tensor
757 | 4-D representation of the input.
758 |
759 | Raises
760 | ------
761 | ValueError
762 | If the tensor is not 2D or already 4D.
763 | """
764 | if len(x.get_shape()) == 2:
765 | n_input = x.get_shape().as_list()[1]
766 | x_dim = np.sqrt(n_input)
767 | if x_dim == int(x_dim):
768 | x_dim = int(x_dim)
769 | x_tensor = tf.reshape(
770 | x, [-1, x_dim, x_dim, 1], name='reshape')
771 | elif np.sqrt(n_input / 3) == int(np.sqrt(n_input / 3)):
772 | x_dim = int(np.sqrt(n_input / 3))
773 | x_tensor = tf.reshape(
774 | x, [-1, x_dim, x_dim, 3], name='reshape')
775 | else:
776 | x_tensor = tf.reshape(
777 | x, [-1, 1, 1, n_input], name='reshape')
778 | elif len(x.get_shape()) == 4:
779 | x_tensor = x
780 | else:
781 | raise ValueError('Unsupported input dimensions')
782 | return x_tensor
783 |
--------------------------------------------------------------------------------
/libs/vgg16.py:
--------------------------------------------------------------------------------
1 | """
2 | Creative Applications of Deep Learning w/ Tensorflow.
3 | Kadenze, Inc.
4 | Copyright Parag K. Mital, June 2016.
5 | """
6 | import tensorflow as tf
7 | import json
8 | import numpy as np
9 | import matplotlib.pyplot as plt
10 | from skimage.transform import resize as imresize
11 | from .utils import download
12 |
13 |
14 | def get_vgg_face_model():
15 | download('https://s3.amazonaws.com/cadl/models/vgg_face.tfmodel')
16 | with open("vgg_face.tfmodel", mode='rb') as f:
17 | graph_def = tf.GraphDef()
18 | try:
19 | graph_def.ParseFromString(f.read())
20 | except:
21 | print('try adding PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python ' +
22 | 'to environment. e.g.:\n' +
23 | 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python ipython\n' +
24 | 'See here for info: ' +
25 | 'https://github.com/tensorflow/tensorflow/issues/582')
26 |
27 | download('https://s3.amazonaws.com/cadl/models/vgg_face.json')
28 | labels = json.load(open('vgg_face.json'))
29 |
30 | return {
31 | 'graph_def': graph_def,
32 | 'labels': labels,
33 | 'preprocess': preprocess,
34 | 'deprocess': deprocess
35 | }
36 |
37 |
38 | def get_vgg_model():
39 | download('https://s3.amazonaws.com/cadl/models/vgg16.tfmodel')
40 | with open("vgg16.tfmodel", mode='rb') as f:
41 | graph_def = tf.GraphDef()
42 | try:
43 | graph_def.ParseFromString(f.read())
44 | except:
45 | print('try adding PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python ' +
46 | 'to environment. e.g.:\n' +
47 | 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python ipython\n' +
48 | 'See here for info: ' +
49 | 'https://github.com/tensorflow/tensorflow/issues/582')
50 |
51 | download('https://s3.amazonaws.com/cadl/models/synset.txt')
52 | with open('synset.txt') as f:
53 | labels = [(idx, l.strip()) for idx, l in enumerate(f.readlines())]
54 |
55 | return {
56 | 'graph_def': graph_def,
57 | 'labels': labels,
58 | 'preprocess': preprocess,
59 | 'deprocess': deprocess
60 | }
61 |
62 |
63 | def preprocess(img, crop=True, resize=True, dsize=(224, 224)):
64 | if img.dtype == np.uint8:
65 | img = img / 255.0
66 |
67 | if crop:
68 | short_edge = min(img.shape[:2])
69 | yy = int((img.shape[0] - short_edge) / 2)
70 | xx = int((img.shape[1] - short_edge) / 2)
71 | crop_img = img[yy: yy + short_edge, xx: xx + short_edge]
72 | else:
73 | crop_img = img
74 |
75 | if resize:
76 | norm_img = imresize(crop_img, dsize, preserve_range=True)
77 | else:
78 | norm_img = crop_img
79 |
80 | return (norm_img).astype(np.float32)
81 |
82 |
83 | def deprocess(img):
84 | return np.clip(img * 255, 0, 255).astype(np.uint8)
85 | # return ((img / np.max(np.abs(img))) * 127.5 +
86 | # 127.5).astype(np.uint8)
87 |
88 |
89 | def test_vgg():
90 | """Loads the VGG network and applies it to a test image.
91 | """
92 | with tf.Session() as sess:
93 | net = get_vgg_model()
94 | tf.import_graph_def(net['graph_def'], name='vgg')
95 | g = tf.get_default_graph()
96 | names = [op.name for op in g.get_operations()]
97 | input_name = names[0] + ':0'
98 | x = g.get_tensor_by_name(input_name)
99 | softmax = g.get_tensor_by_name(names[-2] + ':0')
100 |
101 | og = plt.imread('bosch.png')
102 | img = preprocess(og)[np.newaxis, ...]
103 | res = np.squeeze(softmax.eval(feed_dict={
104 | x: img,
105 | 'vgg/dropout_1/random_uniform:0': [[1.0]],
106 | 'vgg/dropout/random_uniform:0': [[1.0]]}))
107 | print([(res[idx], net['labels'][idx])
108 | for idx in res.argsort()[-5:][::-1]])
109 |
110 | """Let's visualize the network's gradient activation
111 | when backpropagated to the original input image. This
112 | is effectively telling us which pixels contribute to the
113 | predicted class or given neuron"""
114 | features = [name for name in names if 'BiasAdd' in name.split()[-1]]
115 | from math import sqrt, ceil
116 | n_plots = ceil(sqrt(len(features) + 1))
117 | fig, axs = plt.subplots(n_plots, n_plots)
118 | plot_i = 0
119 | axs[0][0].imshow(img[0])
120 | for feature_i, featurename in enumerate(features):
121 | plot_i += 1
122 | feature = g.get_tensor_by_name(featurename + ':0')
123 | neuron = tf.reduce_max(feature, 1)
124 | saliency = tf.gradients(tf.reduce_sum(neuron), x)
125 | neuron_idx = tf.arg_max(feature, 1)
126 | this_res = sess.run([saliency[0], neuron_idx], feed_dict={
127 | x: img,
128 | 'vgg/dropout_1/random_uniform:0': [[1.0]],
129 | 'vgg/dropout/random_uniform:0': [[1.0]]})
130 |
131 | grad = this_res[0][0] / np.max(np.abs(this_res[0]))
132 | ax = axs[plot_i // n_plots][plot_i % n_plots]
133 | ax.imshow((grad * 127.5 + 127.5).astype(np.uint8))
134 | ax.set_title(featurename)
135 |
136 | """Deep Dreaming takes the backpropagated gradient activations
137 | and simply adds it to the image, running the same process again
138 | and again in a loop. There are many tricks one can add to this
139 | idea, such as infinitely zooming into the image by cropping and
140 | scaling, adding jitter by randomly moving the image around, or
141 | adding constraints on the total activations."""
142 | og = plt.imread('street.png')
143 | crop = 2
144 | img = preprocess(og)[np.newaxis, ...]
145 | layer = g.get_tensor_by_name(features[3] + ':0')
146 | n_els = layer.get_shape().as_list()[1]
147 | neuron_i = np.random.randint(1000)
148 | layer_vec = np.zeros((1, n_els))
149 | layer_vec[0, neuron_i] = 1
150 | neuron = tf.reduce_max(layer, 1)
151 | saliency = tf.gradients(tf.reduce_sum(neuron), x)
152 | for it_i in range(3):
153 | print(it_i)
154 | this_res = sess.run(saliency[0], feed_dict={
155 | x: img,
156 | layer: layer_vec,
157 | 'vgg/dropout_1/random_uniform:0': [[1.0]],
158 | 'vgg/dropout/random_uniform:0': [[1.0]]})
159 |             grad = this_res[0] / np.mean(np.abs(this_res[0]))
160 | img = img[:, crop:-crop - 1, crop:-crop - 1, :]
161 | img = imresize(img[0], (224, 224))[np.newaxis]
162 | img += grad
163 | plt.imshow(deprocess(img[0]))
164 |
165 |
166 | def test_vgg_face():
167 | """Loads the VGG network and applies it to a test image.
168 | """
169 | with tf.Session() as sess:
170 | net = get_vgg_face_model()
171 | x = tf.placeholder(tf.float32, [1, 224, 224, 3], name='x')
172 | tf.import_graph_def(net['graph_def'], name='vgg',
173 | input_map={'Placeholder:0': x})
174 | g = tf.get_default_graph()
175 | names = [op.name for op in g.get_operations()]
176 |
177 | og = plt.imread('bricks.png')[..., :3]
178 | img = preprocess(og)[np.newaxis, ...]
179 | plt.imshow(img[0])
180 | plt.show()
181 |
182 | """Let's visualize the network's gradient activation
183 | when backpropagated to the original input image. This
184 | is effectively telling us which pixels contribute to the
185 | predicted class or given neuron"""
186 | features = [name for name in names if 'BiasAdd' in name.split()[-1]]
187 | from math import sqrt, ceil
188 | n_plots = ceil(sqrt(len(features) + 1))
189 | fig, axs = plt.subplots(n_plots, n_plots)
190 | plot_i = 0
191 | axs[0][0].imshow(img[0])
192 | for feature_i, featurename in enumerate(features):
193 | plot_i += 1
194 | feature = g.get_tensor_by_name(featurename + ':0')
195 | neuron = tf.reduce_max(feature, 1)
196 | saliency = tf.gradients(tf.reduce_sum(neuron), x)
197 | neuron_idx = tf.arg_max(feature, 1)
198 | this_res = sess.run([saliency[0], neuron_idx], feed_dict={x: img})
199 |
200 | grad = this_res[0][0] / np.max(np.abs(this_res[0]))
201 | ax = axs[plot_i // n_plots][plot_i % n_plots]
202 | ax.imshow((grad * 127.5 + 127.5).astype(np.uint8))
203 | ax.set_title(featurename)
204 | plt.waitforbuttonpress()
205 |
206 | """Deep Dreaming takes the backpropagated gradient activations
207 | and simply adds it to the image, running the same process again
208 | and again in a loop. There are many tricks one can add to this
209 | idea, such as infinitely zooming into the image by cropping and
210 | scaling, adding jitter by randomly moving the image around, or
211 | adding constraints on the total activations."""
212 | og = plt.imread('street.png')
213 | crop = 2
214 | img = preprocess(og)[np.newaxis, ...]
215 | layer = g.get_tensor_by_name(features[3] + ':0')
216 | n_els = layer.get_shape().as_list()[1]
217 | neuron_i = np.random.randint(1000)
218 | layer_vec = np.zeros((1, n_els))
219 | layer_vec[0, neuron_i] = 1
220 | neuron = tf.reduce_max(layer, 1)
221 | saliency = tf.gradients(tf.reduce_sum(neuron), x)
222 | for it_i in range(3):
223 | print(it_i)
224 | this_res = sess.run(saliency[0], feed_dict={
225 | x: img,
226 | layer: layer_vec,
227 | 'vgg/dropout_1/random_uniform:0': [[1.0]],
228 | 'vgg/dropout/random_uniform:0': [[1.0]]})
229 |             grad = this_res[0] / np.mean(np.abs(this_res[0]))
230 | img = img[:, crop:-crop - 1, crop:-crop - 1, :]
231 | img = imresize(img[0], (224, 224))[np.newaxis]
232 | img += grad
233 | plt.imshow(deprocess(img[0]))
234 |
235 | if __name__ == '__main__':
236 | test_vgg_face()
237 |
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | from flask import Flask, render_template, request, jsonify, send_file
2 | import numpy as np
3 | import scipy.misc
4 | import base64
5 | from io import BytesIO
6 | from test import *
7 | import time
8 |
9 | app = Flask(__name__)
10 |
11 | @app.route('/')
12 | def index():
13 | return render_template("index.html")
14 |
15 |
16 | @app.route('/denoisify', methods=['GET', 'POST'])
17 | def denoisify():
18 |     if request.method == "POST":
19 |         inputImg = request.files['file']
20 |         outputImg = denoise(inputImg)
21 |         scipy.misc.imsave('static/output.png', outputImg)
22 |         return jsonify(result="Success")
23 |     return jsonify(result="Send a POST request with an image file")
24 |
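# Example client call (a hedged sketch: assumes the `requests` package and an
# image file 'noisy.png'; note app.run below listens on port 80, while the
# README mentions port 8888):
#   import requests
#   requests.post('http://localhost/denoisify', files={'file': open('noisy.png', 'rb')})
#   # the denoised result is then served at /static/output.png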
25 | if __name__ == "__main__":
26 |     app.run(host="0.0.0.0", port=80)
27 |
--------------------------------------------------------------------------------
/model.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import tensorflow as tf
3 | import tensorflow.contrib.slim as slim
4 |
5 |
6 | from utils import *
7 | from conv_helper import *
8 |
9 |
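# Generator: three stride-1 conv layers expand 3 -> 32 -> 64 -> 128 channels,
# three residual blocks refine the features, two resize-deconvolution layers
# bring the channels back down (128 -> 64 -> 32) with a skip connection from
# conv1, and a final 9x9 tanh conv plus a global residual (conv4 + input)
# produces the output, rescaled from [-1, 1] to [0, 1].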
10 | def generator(input):
11 | conv1, conv1_weights = conv_layer(input, 9, 3, 32, 1, "g_conv1")
12 | conv2, conv2_weights = conv_layer(conv1, 3, 32, 64, 1, "g_conv2")
13 | conv3, conv3_weights = conv_layer(conv2, 3, 64, 128, 1, "g_conv3")
14 |
15 | res1, res1_weights = residual_layer(conv3, 3, 128, 128, 1, "g_res1")
16 | res2, res2_weights = residual_layer(res1, 3, 128, 128, 1, "g_res2")
17 | res3, res3_weights = residual_layer(res2, 3, 128, 128, 1, "g_res3")
18 |
19 | deconv1 = deconvolution_layer(res3, [BATCH_SIZE, 128, 128, 64], 'g_deconv1')
20 | deconv2 = deconvolution_layer(deconv1, [BATCH_SIZE, 256, 256, 32], "g_deconv2")
21 |
22 | deconv2 = deconv2 + conv1
23 |
24 | conv4, conv4_weights = conv_layer(deconv2, 9, 32, 3, 1, "g_conv5", activation_function=tf.nn.tanh)
25 |
26 | conv4 = conv4 + input
27 | output = output_between_zero_and_one(conv4)
28 |
29 | return output
30 |
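# Discriminator: four 4x4 conv layers (3 -> 48 -> 96 -> 192 -> 384 channels,
# the first three with stride 2) followed by a 1-channel sigmoid conv that
# scores the input as real or generated.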
31 | def discriminator(input, reuse=False):
32 | conv1, conv1_weights = conv_layer(input, 4, 3, 48, 2, "d_conv1", reuse=reuse)
33 | conv2, conv2_weights = conv_layer(conv1, 4, 48, 96, 2, "d_conv2", reuse=reuse)
34 | conv3, conv3_weights = conv_layer(conv2, 4, 96, 192, 2, "d_conv3", reuse=reuse)
35 | conv4, conv4_weights = conv_layer(conv3, 4, 192, 384, 1, "d_conv4", reuse=reuse)
36 | conv5, conv5_weights = conv_layer(conv4, 4, 384, 1, 1, "d_conv5", activation_function=tf.nn.sigmoid, reuse=reuse)
37 |
38 | return conv5
39 |
--------------------------------------------------------------------------------
/paper.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/paper.pdf
--------------------------------------------------------------------------------
/result1.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/result1.PNG
--------------------------------------------------------------------------------
/result2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/result2.png
--------------------------------------------------------------------------------
/result3.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/result3.PNG
--------------------------------------------------------------------------------
/static/css/4-col-portfolio.css:
--------------------------------------------------------------------------------
1 | /*!
2 | * Start Bootstrap - 4 Col Portfolio (http://startbootstrap.com/)
3 | * Copyright 2013-2016 Start Bootstrap
4 | * Licensed under MIT (https://github.com/BlackrockDigital/startbootstrap/blob/gh-pages/LICENSE)
5 | */
6 |
7 | body {
8 | padding-top: 70px; /* Required padding for .navbar-fixed-top. Remove if using .navbar-static-top. Change if height of navigation changes. */
9 | }
10 |
11 | .portfolio-item {
12 | margin-bottom: 25px;
13 | }
14 |
15 | footer {
16 | margin: 50px 0;
17 | }
--------------------------------------------------------------------------------
/static/fonts/glyphicons-halflings-regular.eot:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/static/fonts/glyphicons-halflings-regular.eot
--------------------------------------------------------------------------------
/static/fonts/glyphicons-halflings-regular.ttf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/static/fonts/glyphicons-halflings-regular.ttf
--------------------------------------------------------------------------------
/static/fonts/glyphicons-halflings-regular.woff:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/static/fonts/glyphicons-halflings-regular.woff
--------------------------------------------------------------------------------
/static/fonts/glyphicons-halflings-regular.woff2:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/manumathewthomas/ImageDenoisingGAN/4f1484e36542e960df3d97f24e35132098277ae7/static/fonts/glyphicons-halflings-regular.woff2
--------------------------------------------------------------------------------
/static/js/bootstrap.js:
--------------------------------------------------------------------------------
1 | /*!
2 | * Bootstrap v3.3.7 (http://getbootstrap.com)
3 | * Copyright 2011-2016 Twitter, Inc.
4 | * Licensed under the MIT license
5 | */
6 |
7 | if (typeof jQuery === 'undefined') {
8 | throw new Error('Bootstrap\'s JavaScript requires jQuery')
9 | }
10 |
11 | +function ($) {
12 | 'use strict';
13 | var version = $.fn.jquery.split(' ')[0].split('.')
14 | if ((version[0] < 2 && version[1] < 9) || (version[0] == 1 && version[1] == 9 && version[2] < 1) || (version[0] > 3)) {
15 | throw new Error('Bootstrap\'s JavaScript requires jQuery version 1.9.1 or higher, but lower than version 4')
16 | }
17 | }(jQuery);
18 |
19 | /* ========================================================================
20 | * Bootstrap: transition.js v3.3.7
21 | * http://getbootstrap.com/javascript/#transitions
22 | * ========================================================================
23 | * Copyright 2011-2016 Twitter, Inc.
24 | * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
25 | * ======================================================================== */
26 |
27 |
28 | +function ($) {
29 | 'use strict';
30 |
31 | // CSS TRANSITION SUPPORT (Shoutout: http://www.modernizr.com/)
32 | // ============================================================
33 |
34 | function transitionEnd() {
35 | var el = document.createElement('bootstrap')
36 |
37 | var transEndEventNames = {
38 | WebkitTransition : 'webkitTransitionEnd',
39 | MozTransition : 'transitionend',
40 | OTransition : 'oTransitionEnd otransitionend',
41 | transition : 'transitionend'
42 | }
43 |
44 | for (var name in transEndEventNames) {
45 | if (el.style[name] !== undefined) {
46 | return { end: transEndEventNames[name] }
47 | }
48 | }
49 |
50 | return false // explicit for ie8 ( ._.)
51 | }
52 |
53 | // http://blog.alexmaccaw.com/css-transitions
54 | $.fn.emulateTransitionEnd = function (duration) {
55 | var called = false
56 | var $el = this
57 | $(this).one('bsTransitionEnd', function () { called = true })
58 | var callback = function () { if (!called) $($el).trigger($.support.transition.end) }
59 | setTimeout(callback, duration)
60 | return this
61 | }
62 |
63 | $(function () {
64 | $.support.transition = transitionEnd()
65 |
66 | if (!$.support.transition) return
67 |
68 | $.event.special.bsTransitionEnd = {
69 | bindType: $.support.transition.end,
70 | delegateType: $.support.transition.end,
71 | handle: function (e) {
72 | if ($(e.target).is(this)) return e.handleObj.handler.apply(this, arguments)
73 | }
74 | }
75 | })
76 |
77 | }(jQuery);
78 |
79 | /* ========================================================================
80 | * Bootstrap: alert.js v3.3.7
81 | * http://getbootstrap.com/javascript/#alerts
82 | * ========================================================================
83 | * Copyright 2011-2016 Twitter, Inc.
84 | * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
85 | * ======================================================================== */
86 |
87 |
88 | +function ($) {
89 | 'use strict';
90 |
91 | // ALERT CLASS DEFINITION
92 | // ======================
93 |
94 | var dismiss = '[data-dismiss="alert"]'
95 | var Alert = function (el) {
96 | $(el).on('click', dismiss, this.close)
97 | }
98 |
99 | Alert.VERSION = '3.3.7'
100 |
101 | Alert.TRANSITION_DURATION = 150
102 |
103 | Alert.prototype.close = function (e) {
104 | var $this = $(this)
105 | var selector = $this.attr('data-target')
106 |
107 | if (!selector) {
108 | selector = $this.attr('href')
109 | selector = selector && selector.replace(/.*(?=#[^\s]*$)/, '') // strip for ie7
110 | }
111 |
112 | var $parent = $(selector === '#' ? [] : selector)
113 |
114 | if (e) e.preventDefault()
115 |
116 | if (!$parent.length) {
117 | $parent = $this.closest('.alert')
118 | }
119 |
120 | $parent.trigger(e = $.Event('close.bs.alert'))
121 |
122 | if (e.isDefaultPrevented()) return
123 |
124 | $parent.removeClass('in')
125 |
126 | function removeElement() {
127 | // detach from parent, fire event then clean up data
128 | $parent.detach().trigger('closed.bs.alert').remove()
129 | }
130 |
131 | $.support.transition && $parent.hasClass('fade') ?
132 | $parent
133 | .one('bsTransitionEnd', removeElement)
134 | .emulateTransitionEnd(Alert.TRANSITION_DURATION) :
135 | removeElement()
136 | }
137 |
138 |
139 | // ALERT PLUGIN DEFINITION
140 | // =======================
141 |
142 | function Plugin(option) {
143 | return this.each(function () {
144 | var $this = $(this)
145 | var data = $this.data('bs.alert')
146 |
147 | if (!data) $this.data('bs.alert', (data = new Alert(this)))
148 | if (typeof option == 'string') data[option].call($this)
149 | })
150 | }
151 |
152 | var old = $.fn.alert
153 |
154 | $.fn.alert = Plugin
155 | $.fn.alert.Constructor = Alert
156 |
157 |
158 | // ALERT NO CONFLICT
159 | // =================
160 |
161 | $.fn.alert.noConflict = function () {
162 | $.fn.alert = old
163 | return this
164 | }
165 |
166 |
167 | // ALERT DATA-API
168 | // ==============
169 |
170 | $(document).on('click.bs.alert.data-api', dismiss, Alert.prototype.close)
171 |
172 | }(jQuery);
173 |
174 | /* ========================================================================
175 | * Bootstrap: button.js v3.3.7
176 | * http://getbootstrap.com/javascript/#buttons
177 | * ========================================================================
178 | * Copyright 2011-2016 Twitter, Inc.
179 | * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
180 | * ======================================================================== */
181 |
182 |
183 | +function ($) {
184 | 'use strict';
185 |
186 | // BUTTON PUBLIC CLASS DEFINITION
187 | // ==============================
188 |
189 | var Button = function (element, options) {
190 | this.$element = $(element)
191 | this.options = $.extend({}, Button.DEFAULTS, options)
192 | this.isLoading = false
193 | }
194 |
195 | Button.VERSION = '3.3.7'
196 |
197 | Button.DEFAULTS = {
198 | loadingText: 'loading...'
199 | }
200 |
201 | Button.prototype.setState = function (state) {
202 | var d = 'disabled'
203 | var $el = this.$element
204 | var val = $el.is('input') ? 'val' : 'html'
205 | var data = $el.data()
206 |
207 | state += 'Text'
208 |
209 | if (data.resetText == null) $el.data('resetText', $el[val]())
210 |
211 | // push to event loop to allow forms to submit
212 | setTimeout($.proxy(function () {
213 | $el[val](data[state] == null ? this.options[state] : data[state])
214 |
215 | if (state == 'loadingText') {
216 | this.isLoading = true
217 | $el.addClass(d).attr(d, d).prop(d, true)
218 | } else if (this.isLoading) {
219 | this.isLoading = false
220 | $el.removeClass(d).removeAttr(d).prop(d, false)
221 | }
222 | }, this), 0)
223 | }
224 |
225 | Button.prototype.toggle = function () {
226 | var changed = true
227 | var $parent = this.$element.closest('[data-toggle="buttons"]')
228 |
229 | if ($parent.length) {
230 | var $input = this.$element.find('input')
231 | if ($input.prop('type') == 'radio') {
232 | if ($input.prop('checked')) changed = false
233 | $parent.find('.active').removeClass('active')
234 | this.$element.addClass('active')
235 | } else if ($input.prop('type') == 'checkbox') {
236 | if (($input.prop('checked')) !== this.$element.hasClass('active')) changed = false
237 | this.$element.toggleClass('active')
238 | }
239 | $input.prop('checked', this.$element.hasClass('active'))
240 | if (changed) $input.trigger('change')
241 | } else {
242 | this.$element.attr('aria-pressed', !this.$element.hasClass('active'))
243 | this.$element.toggleClass('active')
244 | }
245 | }
246 |
247 |
248 | // BUTTON PLUGIN DEFINITION
249 | // ========================
250 |
251 | function Plugin(option) {
252 | return this.each(function () {
253 | var $this = $(this)
254 | var data = $this.data('bs.button')
255 | var options = typeof option == 'object' && option
256 |
257 | if (!data) $this.data('bs.button', (data = new Button(this, options)))
258 |
259 | if (option == 'toggle') data.toggle()
260 | else if (option) data.setState(option)
261 | })
262 | }
263 |
264 | var old = $.fn.button
265 |
266 | $.fn.button = Plugin
267 | $.fn.button.Constructor = Button
268 |
269 |
270 | // BUTTON NO CONFLICT
271 | // ==================
272 |
273 | $.fn.button.noConflict = function () {
274 | $.fn.button = old
275 | return this
276 | }
277 |
278 |
279 | // BUTTON DATA-API
280 | // ===============
281 |
282 | $(document)
283 | .on('click.bs.button.data-api', '[data-toggle^="button"]', function (e) {
284 | var $btn = $(e.target).closest('.btn')
285 | Plugin.call($btn, 'toggle')
286 | if (!($(e.target).is('input[type="radio"], input[type="checkbox"]'))) {
287 | // Prevent double click on radios, and the double selections (so cancellation) on checkboxes
288 | e.preventDefault()
289 | // The target component still receive the focus
290 | if ($btn.is('input,button')) $btn.trigger('focus')
291 | else $btn.find('input:visible,button:visible').first().trigger('focus')
292 | }
293 | })
294 | .on('focus.bs.button.data-api blur.bs.button.data-api', '[data-toggle^="button"]', function (e) {
295 | $(e.target).closest('.btn').toggleClass('focus', /^focus(in)?$/.test(e.type))
296 | })
297 |
298 | }(jQuery);
299 |
300 | /* ========================================================================
301 | * Bootstrap: carousel.js v3.3.7
302 | * http://getbootstrap.com/javascript/#carousel
303 | * ========================================================================
304 | * Copyright 2011-2016 Twitter, Inc.
305 | * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
306 | * ======================================================================== */
307 |
308 |
309 | +function ($) {
310 | 'use strict';
311 |
312 | // CAROUSEL CLASS DEFINITION
313 | // =========================
314 |
315 | var Carousel = function (element, options) {
316 | this.$element = $(element)
317 | this.$indicators = this.$element.find('.carousel-indicators')
318 | this.options = options
319 | this.paused = null
320 | this.sliding = null
321 | this.interval = null
322 | this.$active = null
323 | this.$items = null
324 |
325 | this.options.keyboard && this.$element.on('keydown.bs.carousel', $.proxy(this.keydown, this))
326 |
327 | this.options.pause == 'hover' && !('ontouchstart' in document.documentElement) && this.$element
328 | .on('mouseenter.bs.carousel', $.proxy(this.pause, this))
329 | .on('mouseleave.bs.carousel', $.proxy(this.cycle, this))
330 | }
331 |
332 | Carousel.VERSION = '3.3.7'
333 |
334 | Carousel.TRANSITION_DURATION = 600
335 |
336 | Carousel.DEFAULTS = {
337 | interval: 5000,
338 | pause: 'hover',
339 | wrap: true,
340 | keyboard: true
341 | }
342 |
343 | Carousel.prototype.keydown = function (e) {
344 | if (/input|textarea/i.test(e.target.tagName)) return
345 | switch (e.which) {
346 | case 37: this.prev(); break
347 | case 39: this.next(); break
348 | default: return
349 | }
350 |
351 | e.preventDefault()
352 | }
353 |
354 | Carousel.prototype.cycle = function (e) {
355 | e || (this.paused = false)
356 |
357 | this.interval && clearInterval(this.interval)
358 |
359 | this.options.interval
360 | && !this.paused
361 | && (this.interval = setInterval($.proxy(this.next, this), this.options.interval))
362 |
363 | return this
364 | }
365 |
366 | Carousel.prototype.getItemIndex = function (item) {
367 | this.$items = item.parent().children('.item')
368 | return this.$items.index(item || this.$active)
369 | }
370 |
371 | Carousel.prototype.getItemForDirection = function (direction, active) {
372 | var activeIndex = this.getItemIndex(active)
373 | var willWrap = (direction == 'prev' && activeIndex === 0)
374 | || (direction == 'next' && activeIndex == (this.$items.length - 1))
375 | if (willWrap && !this.options.wrap) return active
376 | var delta = direction == 'prev' ? -1 : 1
377 | var itemIndex = (activeIndex + delta) % this.$items.length
378 | return this.$items.eq(itemIndex)
379 | }
380 |
381 | Carousel.prototype.to = function (pos) {
382 | var that = this
383 | var activeIndex = this.getItemIndex(this.$active = this.$element.find('.item.active'))
384 |
385 | if (pos > (this.$items.length - 1) || pos < 0) return
386 |
387 | if (this.sliding) return this.$element.one('slid.bs.carousel', function () { that.to(pos) }) // yes, "slid"
388 | if (activeIndex == pos) return this.pause().cycle()
389 |
390 | return this.slide(pos > activeIndex ? 'next' : 'prev', this.$items.eq(pos))
391 | }
392 |
393 | Carousel.prototype.pause = function (e) {
394 | e || (this.paused = true)
395 |
396 | if (this.$element.find('.next, .prev').length && $.support.transition) {
397 | this.$element.trigger($.support.transition.end)
398 | this.cycle(true)
399 | }
400 |
401 | this.interval = clearInterval(this.interval)
402 |
403 | return this
404 | }
405 |
406 | Carousel.prototype.next = function () {
407 | if (this.sliding) return
408 | return this.slide('next')
409 | }
410 |
411 | Carousel.prototype.prev = function () {
412 | if (this.sliding) return
413 | return this.slide('prev')
414 | }
415 |
416 | Carousel.prototype.slide = function (type, next) {
417 | var $active = this.$element.find('.item.active')
418 | var $next = next || this.getItemForDirection(type, $active)
419 | var isCycling = this.interval
420 | var direction = type == 'next' ? 'left' : 'right'
421 | var that = this
422 |
423 | if ($next.hasClass('active')) return (this.sliding = false)
424 |
425 | var relatedTarget = $next[0]
426 | var slideEvent = $.Event('slide.bs.carousel', {
427 | relatedTarget: relatedTarget,
428 | direction: direction
429 | })
430 | this.$element.trigger(slideEvent)
431 | if (slideEvent.isDefaultPrevented()) return
432 |
433 | this.sliding = true
434 |
435 | isCycling && this.pause()
436 |
437 | if (this.$indicators.length) {
438 | this.$indicators.find('.active').removeClass('active')
439 | var $nextIndicator = $(this.$indicators.children()[this.getItemIndex($next)])
440 | $nextIndicator && $nextIndicator.addClass('active')
441 | }
442 |
443 | var slidEvent = $.Event('slid.bs.carousel', { relatedTarget: relatedTarget, direction: direction }) // yes, "slid"
444 | if ($.support.transition && this.$element.hasClass('slide')) {
445 | $next.addClass(type)
446 | $next[0].offsetWidth // force reflow
447 | $active.addClass(direction)
448 | $next.addClass(direction)
449 | $active
450 | .one('bsTransitionEnd', function () {
451 | $next.removeClass([type, direction].join(' ')).addClass('active')
452 | $active.removeClass(['active', direction].join(' '))
453 | that.sliding = false
454 | setTimeout(function () {
455 | that.$element.trigger(slidEvent)
456 | }, 0)
457 | })
458 | .emulateTransitionEnd(Carousel.TRANSITION_DURATION)
459 | } else {
460 | $active.removeClass('active')
461 | $next.addClass('active')
462 | this.sliding = false
463 | this.$element.trigger(slidEvent)
464 | }
465 |
466 | isCycling && this.cycle()
467 |
468 | return this
469 | }
470 |
471 |
472 | // CAROUSEL PLUGIN DEFINITION
473 | // ==========================
474 |
475 | function Plugin(option) {
476 | return this.each(function () {
477 | var $this = $(this)
478 | var data = $this.data('bs.carousel')
479 | var options = $.extend({}, Carousel.DEFAULTS, $this.data(), typeof option == 'object' && option)
480 | var action = typeof option == 'string' ? option : options.slide
481 |
482 | if (!data) $this.data('bs.carousel', (data = new Carousel(this, options)))
483 | if (typeof option == 'number') data.to(option)
484 | else if (action) data[action]()
485 | else if (options.interval) data.pause().cycle()
486 | })
487 | }
488 |
489 | var old = $.fn.carousel
490 |
491 | $.fn.carousel = Plugin
492 | $.fn.carousel.Constructor = Carousel
493 |
494 |
495 | // CAROUSEL NO CONFLICT
496 | // ====================
497 |
498 | $.fn.carousel.noConflict = function () {
499 | $.fn.carousel = old
500 | return this
501 | }
502 |
503 |
504 | // CAROUSEL DATA-API
505 | // =================
506 |
507 | var clickHandler = function (e) {
508 | var href
509 | var $this = $(this)
510 | var $target = $($this.attr('data-target') || (href = $this.attr('href')) && href.replace(/.*(?=#[^\s]+$)/, '')) // strip for ie7
511 | if (!$target.hasClass('carousel')) return
512 | var options = $.extend({}, $target.data(), $this.data())
513 | var slideIndex = $this.attr('data-slide-to')
514 | if (slideIndex) options.interval = false
515 |
516 | Plugin.call($target, options)
517 |
518 | if (slideIndex) {
519 | $target.data('bs.carousel').to(slideIndex)
520 | }
521 |
522 | e.preventDefault()
523 | }
524 |
525 | $(document)
526 | .on('click.bs.carousel.data-api', '[data-slide]', clickHandler)
527 | .on('click.bs.carousel.data-api', '[data-slide-to]', clickHandler)
528 |
529 | $(window).on('load', function () {
530 | $('[data-ride="carousel"]').each(function () {
531 | var $carousel = $(this)
532 | Plugin.call($carousel, $carousel.data())
533 | })
534 | })
535 |
536 | }(jQuery);
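// Illustrative usage sketch for the carousel plugin above (an editor's note,
// not part of the upstream Bootstrap source; "#myCarousel" is hypothetical):
//
//   $('#myCarousel').carousel({ interval: 2000 })  // init and cycle every 2s
//   $('#myCarousel').carousel('next')              // slide to the next item
//   $('#myCarousel').carousel(1)                   // a number is routed to .to()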
537 |
538 | /* ========================================================================
539 | * Bootstrap: collapse.js v3.3.7
540 | * http://getbootstrap.com/javascript/#collapse
541 | * ========================================================================
542 | * Copyright 2011-2016 Twitter, Inc.
543 | * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
544 | * ======================================================================== */
545 |
546 | /* jshint latedef: false */
547 |
548 | +function ($) {
549 | 'use strict';
550 |
551 | // COLLAPSE PUBLIC CLASS DEFINITION
552 | // ================================
553 |
554 | var Collapse = function (element, options) {
555 | this.$element = $(element)
556 | this.options = $.extend({}, Collapse.DEFAULTS, options)
557 | this.$trigger = $('[data-toggle="collapse"][href="#' + element.id + '"],' +
558 | '[data-toggle="collapse"][data-target="#' + element.id + '"]')
559 | this.transitioning = null
560 |
561 | if (this.options.parent) {
562 | this.$parent = this.getParent()
563 | } else {
564 | this.addAriaAndCollapsedClass(this.$element, this.$trigger)
565 | }
566 |
567 | if (this.options.toggle) this.toggle()
568 | }
569 |
570 | Collapse.VERSION = '3.3.7'
571 |
572 | Collapse.TRANSITION_DURATION = 350
573 |
574 | Collapse.DEFAULTS = {
575 | toggle: true
576 | }
577 |
578 | Collapse.prototype.dimension = function () {
579 | var hasWidth = this.$element.hasClass('width')
580 | return hasWidth ? 'width' : 'height'
581 | }
582 |
583 | Collapse.prototype.show = function () {
584 | if (this.transitioning || this.$element.hasClass('in')) return
585 |
586 | var activesData
587 | var actives = this.$parent && this.$parent.children('.panel').children('.in, .collapsing')
588 |
589 | if (actives && actives.length) {
590 | activesData = actives.data('bs.collapse')
591 | if (activesData && activesData.transitioning) return
592 | }
593 |
594 | var startEvent = $.Event('show.bs.collapse')
595 | this.$element.trigger(startEvent)
596 | if (startEvent.isDefaultPrevented()) return
597 |
598 | if (actives && actives.length) {
599 | Plugin.call(actives, 'hide')
600 | activesData || actives.data('bs.collapse', null)
601 | }
602 |
603 | var dimension = this.dimension()
604 |
605 | this.$element
606 | .removeClass('collapse')
607 | .addClass('collapsing')[dimension](0)
608 | .attr('aria-expanded', true)
609 |
610 | this.$trigger
611 | .removeClass('collapsed')
612 | .attr('aria-expanded', true)
613 |
614 | this.transitioning = 1
615 |
616 | var complete = function () {
617 | this.$element
618 | .removeClass('collapsing')
619 | .addClass('collapse in')[dimension]('')
620 | this.transitioning = 0
621 | this.$element
622 | .trigger('shown.bs.collapse')
623 | }
624 |
625 | if (!$.support.transition) return complete.call(this)
626 |
627 | var scrollSize = $.camelCase(['scroll', dimension].join('-'))
628 |
629 | this.$element
630 | .one('bsTransitionEnd', $.proxy(complete, this))
631 | .emulateTransitionEnd(Collapse.TRANSITION_DURATION)[dimension](this.$element[0][scrollSize])
632 | }
633 |
634 | Collapse.prototype.hide = function () {
635 | if (this.transitioning || !this.$element.hasClass('in')) return
636 |
637 | var startEvent = $.Event('hide.bs.collapse')
638 | this.$element.trigger(startEvent)
639 | if (startEvent.isDefaultPrevented()) return
640 |
641 | var dimension = this.dimension()
642 |
643 | this.$element[dimension](this.$element[dimension]())[0].offsetHeight
644 |
645 | this.$element
646 | .addClass('collapsing')
647 | .removeClass('collapse in')
648 | .attr('aria-expanded', false)
649 |
650 | this.$trigger
651 | .addClass('collapsed')
652 | .attr('aria-expanded', false)
653 |
654 | this.transitioning = 1
655 |
656 | var complete = function () {
657 | this.transitioning = 0
658 | this.$element
659 | .removeClass('collapsing')
660 | .addClass('collapse')
661 | .trigger('hidden.bs.collapse')
662 | }
663 |
664 | if (!$.support.transition) return complete.call(this)
665 |
666 | this.$element
667 | [dimension](0)
668 | .one('bsTransitionEnd', $.proxy(complete, this))
669 | .emulateTransitionEnd(Collapse.TRANSITION_DURATION)
670 | }
671 |
672 | Collapse.prototype.toggle = function () {
673 | this[this.$element.hasClass('in') ? 'hide' : 'show']()
674 | }
675 |
676 | Collapse.prototype.getParent = function () {
677 | return $(this.options.parent)
678 | .find('[data-toggle="collapse"][data-parent="' + this.options.parent + '"]')
679 | .each($.proxy(function (i, element) {
680 | var $element = $(element)
681 | this.addAriaAndCollapsedClass(getTargetFromTrigger($element), $element)
682 | }, this))
683 | .end()
684 | }
685 |
686 | Collapse.prototype.addAriaAndCollapsedClass = function ($element, $trigger) {
687 | var isOpen = $element.hasClass('in')
688 |
689 | $element.attr('aria-expanded', isOpen)
690 | $trigger
691 | .toggleClass('collapsed', !isOpen)
692 | .attr('aria-expanded', isOpen)
693 | }
694 |
695 | function getTargetFromTrigger($trigger) {
696 | var href
697 | var target = $trigger.attr('data-target')
698 | || (href = $trigger.attr('href')) && href.replace(/.*(?=#[^\s]+$)/, '') // strip for ie7
699 |
700 | return $(target)
701 | }
702 |
703 |
704 | // COLLAPSE PLUGIN DEFINITION
705 | // ==========================
706 |
707 | function Plugin(option) {
708 | return this.each(function () {
709 | var $this = $(this)
710 | var data = $this.data('bs.collapse')
711 | var options = $.extend({}, Collapse.DEFAULTS, $this.data(), typeof option == 'object' && option)
712 |
713 | if (!data && options.toggle && /show|hide/.test(option)) options.toggle = false
714 | if (!data) $this.data('bs.collapse', (data = new Collapse(this, options)))
715 | if (typeof option == 'string') data[option]()
716 | })
717 | }
718 |
719 | var old = $.fn.collapse
720 |
721 | $.fn.collapse = Plugin
722 | $.fn.collapse.Constructor = Collapse
723 |
724 |
725 | // COLLAPSE NO CONFLICT
726 | // ====================
727 |
728 | $.fn.collapse.noConflict = function () {
729 | $.fn.collapse = old
730 | return this
731 | }
732 |
733 |
734 | // COLLAPSE DATA-API
735 | // =================
736 |
737 | $(document).on('click.bs.collapse.data-api', '[data-toggle="collapse"]', function (e) {
738 | var $this = $(this)
739 |
740 | if (!$this.attr('data-target')) e.preventDefault()
741 |
742 | var $target = getTargetFromTrigger($this)
743 | var data = $target.data('bs.collapse')
744 | var option = data ? 'toggle' : $this.data()
745 |
746 | Plugin.call($target, option)
747 | })
748 |
749 | }(jQuery);
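// Illustrative usage sketch for the collapse plugin above (an editor's note,
// not part of the upstream Bootstrap source; "#myPanel" is hypothetical):
//
//   $('#myPanel').collapse()        // init; toggles immediately (toggle: true)
//   $('#myPanel').collapse('show')  // expand, firing show.bs.collapse first
//   $('#myPanel').collapse('hide')  // collapse, firing hide.bs.collapse first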
750 |
751 | /* ========================================================================
752 | * Bootstrap: dropdown.js v3.3.7
753 | * http://getbootstrap.com/javascript/#dropdowns
754 | * ========================================================================
755 | * Copyright 2011-2016 Twitter, Inc.
756 | * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
757 | * ======================================================================== */
758 |
759 |
760 | +function ($) {
761 | 'use strict';
762 |
763 | // DROPDOWN CLASS DEFINITION
764 | // =========================
765 |
766 | var backdrop = '.dropdown-backdrop'
767 | var toggle = '[data-toggle="dropdown"]'
768 | var Dropdown = function (element) {
769 | $(element).on('click.bs.dropdown', this.toggle)
770 | }
771 |
772 | Dropdown.VERSION = '3.3.7'
773 |
774 | function getParent($this) {
775 | var selector = $this.attr('data-target')
776 |
777 | if (!selector) {
778 | selector = $this.attr('href')
779 | selector = selector && /#[A-Za-z]/.test(selector) && selector.replace(/.*(?=#[^\s]*$)/, '') // strip for ie7
780 | }
781 |
782 | var $parent = selector && $(selector)
783 |
784 | return $parent && $parent.length ? $parent : $this.parent()
785 | }
786 |
787 | function clearMenus(e) {
788 | if (e && e.which === 3) return
789 | $(backdrop).remove()
790 | $(toggle).each(function () {
791 | var $this = $(this)
792 | var $parent = getParent($this)
793 | var relatedTarget = { relatedTarget: this }
794 |
795 | if (!$parent.hasClass('open')) return
796 |
797 | if (e && e.type == 'click' && /input|textarea/i.test(e.target.tagName) && $.contains($parent[0], e.target)) return
798 |
799 | $parent.trigger(e = $.Event('hide.bs.dropdown', relatedTarget))
800 |
801 | if (e.isDefaultPrevented()) return
802 |
803 | $this.attr('aria-expanded', 'false')
804 | $parent.removeClass('open').trigger($.Event('hidden.bs.dropdown', relatedTarget))
805 | })
806 | }
807 |
808 | Dropdown.prototype.toggle = function (e) {
809 | var $this = $(this)
810 |
811 | if ($this.is('.disabled, :disabled')) return
812 |
813 | var $parent = getParent($this)
814 | var isActive = $parent.hasClass('open')
815 |
816 | clearMenus()
817 |
818 | if (!isActive) {
819 | if ('ontouchstart' in document.documentElement && !$parent.closest('.navbar-nav').length) {
820 | // on mobile we use a backdrop because click events don't delegate
821 | $(document.createElement('div'))
822 | .addClass('dropdown-backdrop')
823 | .insertAfter($(this))
824 | .on('click', clearMenus)
825 | }
826 |
827 | var relatedTarget = { relatedTarget: this }
828 | $parent.trigger(e = $.Event('show.bs.dropdown', relatedTarget))
829 |
830 | if (e.isDefaultPrevented()) return
831 |
832 | $this
833 | .trigger('focus')
834 | .attr('aria-expanded', 'true')
835 |
836 | $parent
837 | .toggleClass('open')
838 | .trigger($.Event('shown.bs.dropdown', relatedTarget))
839 | }
840 |
841 | return false
842 | }
843 |
844 | Dropdown.prototype.keydown = function (e) {
845 | if (!/(38|40|27|32)/.test(e.which) || /input|textarea/i.test(e.target.tagName)) return
846 |
847 | var $this = $(this)
848 |
849 | e.preventDefault()
850 | e.stopPropagation()
851 |
852 | if ($this.is('.disabled, :disabled')) return
853 |
854 | var $parent = getParent($this)
855 | var isActive = $parent.hasClass('open')
856 |
857 | if (!isActive && e.which != 27 || isActive && e.which == 27) {
858 | if (e.which == 27) $parent.find(toggle).trigger('focus')
859 | return $this.trigger('click')
860 | }
861 |
862 | var desc = ' li:not(.disabled):visible a'
863 | var $items = $parent.find('.dropdown-menu' + desc)
864 |
865 | if (!$items.length) return
866 |
867 | var index = $items.index(e.target)
868 |
869 | if (e.which == 38 && index > 0) index-- // up
870 | if (e.which == 40 && index < $items.length - 1) index++ // down
871 | if (!~index) index = 0
872 |
873 | $items.eq(index).trigger('focus')
874 | }
875 |
876 |
877 | // DROPDOWN PLUGIN DEFINITION
878 | // ==========================
879 |
880 | function Plugin(option) {
881 | return this.each(function () {
882 | var $this = $(this)
883 | var data = $this.data('bs.dropdown')
884 |
885 | if (!data) $this.data('bs.dropdown', (data = new Dropdown(this)))
886 | if (typeof option == 'string') data[option].call($this)
887 | })
888 | }
889 |
890 | var old = $.fn.dropdown
891 |
892 | $.fn.dropdown = Plugin
893 | $.fn.dropdown.Constructor = Dropdown
894 |
895 |
896 | // DROPDOWN NO CONFLICT
897 | // ====================
898 |
899 | $.fn.dropdown.noConflict = function () {
900 | $.fn.dropdown = old
901 | return this
902 | }
903 |
904 |
905 | // APPLY TO STANDARD DROPDOWN ELEMENTS
906 | // ===================================
907 |
908 | $(document)
909 | .on('click.bs.dropdown.data-api', clearMenus)
910 | .on('click.bs.dropdown.data-api', '.dropdown form', function (e) { e.stopPropagation() })
911 | .on('click.bs.dropdown.data-api', toggle, Dropdown.prototype.toggle)
912 | .on('keydown.bs.dropdown.data-api', toggle, Dropdown.prototype.keydown)
913 | .on('keydown.bs.dropdown.data-api', '.dropdown-menu', Dropdown.prototype.keydown)
914 |
915 | }(jQuery);
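// Illustrative usage sketch for the dropdown plugin above (an editor's note,
// not part of the upstream Bootstrap source). Dropdowns are normally driven
// declaratively through the data-api:
//
//   <a href="#" data-toggle="dropdown">Menu</a>
//
// but can also be toggled programmatically:
//
//   $('.dropdown-toggle').dropdown('toggle')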
916 |
917 | /* ========================================================================
918 | * Bootstrap: modal.js v3.3.7
919 | * http://getbootstrap.com/javascript/#modals
920 | * ========================================================================
921 | * Copyright 2011-2016 Twitter, Inc.
922 | * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
923 | * ======================================================================== */
924 |
925 |
926 | +function ($) {
927 | 'use strict';
928 |
929 | // MODAL CLASS DEFINITION
930 | // ======================
931 |
932 | var Modal = function (element, options) {
933 | this.options = options
934 | this.$body = $(document.body)
935 | this.$element = $(element)
936 | this.$dialog = this.$element.find('.modal-dialog')
937 | this.$backdrop = null
938 | this.isShown = null
939 | this.originalBodyPad = null
940 | this.scrollbarWidth = 0
941 | this.ignoreBackdropClick = false
942 |
943 | if (this.options.remote) {
944 | this.$element
945 | .find('.modal-content')
946 | .load(this.options.remote, $.proxy(function () {
947 | this.$element.trigger('loaded.bs.modal')
948 | }, this))
949 | }
950 | }
951 |
952 | Modal.VERSION = '3.3.7'
953 |
954 | Modal.TRANSITION_DURATION = 300
955 | Modal.BACKDROP_TRANSITION_DURATION = 150
956 |
957 | Modal.DEFAULTS = {
958 | backdrop: true,
959 | keyboard: true,
960 | show: true
961 | }
962 |
963 | Modal.prototype.toggle = function (_relatedTarget) {
964 | return this.isShown ? this.hide() : this.show(_relatedTarget)
965 | }
966 |
967 | Modal.prototype.show = function (_relatedTarget) {
968 | var that = this
969 | var e = $.Event('show.bs.modal', { relatedTarget: _relatedTarget })
970 |
971 | this.$element.trigger(e)
972 |
973 | if (this.isShown || e.isDefaultPrevented()) return
974 |
975 | this.isShown = true
976 |
977 | this.checkScrollbar()
978 | this.setScrollbar()
979 | this.$body.addClass('modal-open')
980 |
981 | this.escape()
982 | this.resize()
983 |
984 | this.$element.on('click.dismiss.bs.modal', '[data-dismiss="modal"]', $.proxy(this.hide, this))
985 |
986 | this.$dialog.on('mousedown.dismiss.bs.modal', function () {
987 | that.$element.one('mouseup.dismiss.bs.modal', function (e) {
988 | if ($(e.target).is(that.$element)) that.ignoreBackdropClick = true
989 | })
990 | })
991 |
992 | this.backdrop(function () {
993 | var transition = $.support.transition && that.$element.hasClass('fade')
994 |
995 | if (!that.$element.parent().length) {
996 | that.$element.appendTo(that.$body) // don't move the modal's DOM position
997 | }
998 |
999 | that.$element
1000 | .show()
1001 | .scrollTop(0)
1002 |
1003 | that.adjustDialog()
1004 |
1005 | if (transition) {
1006 | that.$element[0].offsetWidth // force reflow
1007 | }
1008 |
1009 | that.$element.addClass('in')
1010 |
1011 | that.enforceFocus()
1012 |
1013 | var e = $.Event('shown.bs.modal', { relatedTarget: _relatedTarget })
1014 |
1015 | transition ?
1016 | that.$dialog // wait for modal to slide in
1017 | .one('bsTransitionEnd', function () {
1018 | that.$element.trigger('focus').trigger(e)
1019 | })
1020 | .emulateTransitionEnd(Modal.TRANSITION_DURATION) :
1021 | that.$element.trigger('focus').trigger(e)
1022 | })
1023 | }
1024 |
1025 | Modal.prototype.hide = function (e) {
1026 | if (e) e.preventDefault()
1027 |
1028 | e = $.Event('hide.bs.modal')
1029 |
1030 | this.$element.trigger(e)
1031 |
1032 | if (!this.isShown || e.isDefaultPrevented()) return
1033 |
1034 | this.isShown = false
1035 |
1036 | this.escape()
1037 | this.resize()
1038 |
1039 | $(document).off('focusin.bs.modal')
1040 |
1041 | this.$element
1042 | .removeClass('in')
1043 | .off('click.dismiss.bs.modal')
1044 | .off('mouseup.dismiss.bs.modal')
1045 |
1046 | this.$dialog.off('mousedown.dismiss.bs.modal')
1047 |
1048 | $.support.transition && this.$element.hasClass('fade') ?
1049 | this.$element
1050 | .one('bsTransitionEnd', $.proxy(this.hideModal, this))
1051 | .emulateTransitionEnd(Modal.TRANSITION_DURATION) :
1052 | this.hideModal()
1053 | }
1054 |
1055 | Modal.prototype.enforceFocus = function () {
1056 | $(document)
1057 | .off('focusin.bs.modal') // guard against infinite focus loop
1058 | .on('focusin.bs.modal', $.proxy(function (e) {
1059 | if (document !== e.target &&
1060 | this.$element[0] !== e.target &&
1061 | !this.$element.has(e.target).length) {
1062 | this.$element.trigger('focus')
1063 | }
1064 | }, this))
1065 | }
1066 |
1067 | Modal.prototype.escape = function () {
1068 | if (this.isShown && this.options.keyboard) {
1069 | this.$element.on('keydown.dismiss.bs.modal', $.proxy(function (e) {
1070 | e.which == 27 && this.hide()
1071 | }, this))
1072 | } else if (!this.isShown) {
1073 | this.$element.off('keydown.dismiss.bs.modal')
1074 | }
1075 | }
1076 |
1077 | Modal.prototype.resize = function () {
1078 | if (this.isShown) {
1079 | $(window).on('resize.bs.modal', $.proxy(this.handleUpdate, this))
1080 | } else {
1081 | $(window).off('resize.bs.modal')
1082 | }
1083 | }
1084 |
1085 | Modal.prototype.hideModal = function () {
1086 | var that = this
1087 | this.$element.hide()
1088 | this.backdrop(function () {
1089 | that.$body.removeClass('modal-open')
1090 | that.resetAdjustments()
1091 | that.resetScrollbar()
1092 | that.$element.trigger('hidden.bs.modal')
1093 | })
1094 | }
1095 |
1096 | Modal.prototype.removeBackdrop = function () {
1097 | this.$backdrop && this.$backdrop.remove()
1098 | this.$backdrop = null
1099 | }
1100 |
1101 | Modal.prototype.backdrop = function (callback) {
1102 | var that = this
1103 | var animate = this.$element.hasClass('fade') ? 'fade' : ''
1104 |
1105 | if (this.isShown && this.options.backdrop) {
1106 | var doAnimate = $.support.transition && animate
1107 |
1108 | this.$backdrop = $(document.createElement('div'))
1109 | .addClass('modal-backdrop ' + animate)
1110 | .appendTo(this.$body)
1111 |
1112 | this.$element.on('click.dismiss.bs.modal', $.proxy(function (e) {
1113 | if (this.ignoreBackdropClick) {
1114 | this.ignoreBackdropClick = false
1115 | return
1116 | }
1117 | if (e.target !== e.currentTarget) return
1118 | this.options.backdrop == 'static'
1119 | ? this.$element[0].focus()
1120 | : this.hide()
1121 | }, this))
1122 |
1123 | if (doAnimate) this.$backdrop[0].offsetWidth // force reflow
1124 |
1125 | this.$backdrop.addClass('in')
1126 |
1127 | if (!callback) return
1128 |
1129 | doAnimate ?
1130 | this.$backdrop
1131 | .one('bsTransitionEnd', callback)
1132 | .emulateTransitionEnd(Modal.BACKDROP_TRANSITION_DURATION) :
1133 | callback()
1134 |
1135 | } else if (!this.isShown && this.$backdrop) {
1136 | this.$backdrop.removeClass('in')
1137 |
1138 | var callbackRemove = function () {
1139 | that.removeBackdrop()
1140 | callback && callback()
1141 | }
1142 | $.support.transition && this.$element.hasClass('fade') ?
1143 | this.$backdrop
1144 | .one('bsTransitionEnd', callbackRemove)
1145 | .emulateTransitionEnd(Modal.BACKDROP_TRANSITION_DURATION) :
1146 | callbackRemove()
1147 |
1148 | } else if (callback) {
1149 | callback()
1150 | }
1151 | }
1152 |
1153 | // the following methods handle overflowing modals
1154 |
1155 | Modal.prototype.handleUpdate = function () {
1156 | this.adjustDialog()
1157 | }
1158 |
1159 | Modal.prototype.adjustDialog = function () {
1160 | var modalIsOverflowing = this.$element[0].scrollHeight > document.documentElement.clientHeight
1161 |
1162 | this.$element.css({
1163 | paddingLeft: !this.bodyIsOverflowing && modalIsOverflowing ? this.scrollbarWidth : '',
1164 | paddingRight: this.bodyIsOverflowing && !modalIsOverflowing ? this.scrollbarWidth : ''
1165 | })
1166 | }
1167 |
1168 | Modal.prototype.resetAdjustments = function () {
1169 | this.$element.css({
1170 | paddingLeft: '',
1171 | paddingRight: ''
1172 | })
1173 | }
1174 |
1175 | Modal.prototype.checkScrollbar = function () {
1176 | var fullWindowWidth = window.innerWidth
1177 | if (!fullWindowWidth) { // workaround for missing window.innerWidth in IE8
1178 | var documentElementRect = document.documentElement.getBoundingClientRect()
1179 | fullWindowWidth = documentElementRect.right - Math.abs(documentElementRect.left)
1180 | }
1181 | this.bodyIsOverflowing = document.body.clientWidth < fullWindowWidth
1182 | this.scrollbarWidth = this.measureScrollbar()
1183 | }
1184 |
1185 | Modal.prototype.setScrollbar = function () {
1186 | var bodyPad = parseInt((this.$body.css('padding-right') || 0), 10)
1187 | this.originalBodyPad = document.body.style.paddingRight || ''
1188 | if (this.bodyIsOverflowing) this.$body.css('padding-right', bodyPad + this.scrollbarWidth)
1189 | }
1190 |
1191 | Modal.prototype.resetScrollbar = function () {
1192 | this.$body.css('padding-right', this.originalBodyPad)
1193 | }
1194 |
1195 | Modal.prototype.measureScrollbar = function () { // thx walsh
1196 | var scrollDiv = document.createElement('div')
1197 | scrollDiv.className = 'modal-scrollbar-measure'
1198 | this.$body.append(scrollDiv)
1199 | var scrollbarWidth = scrollDiv.offsetWidth - scrollDiv.clientWidth
1200 | this.$body[0].removeChild(scrollDiv)
1201 | return scrollbarWidth
1202 | }
1203 |
1204 |
1205 | // MODAL PLUGIN DEFINITION
1206 | // =======================
1207 |
1208 | function Plugin(option, _relatedTarget) {
1209 | return this.each(function () {
1210 | var $this = $(this)
1211 | var data = $this.data('bs.modal')
1212 | var options = $.extend({}, Modal.DEFAULTS, $this.data(), typeof option == 'object' && option)
1213 |
1214 | if (!data) $this.data('bs.modal', (data = new Modal(this, options)))
1215 | if (typeof option == 'string') data[option](_relatedTarget)
1216 | else if (options.show) data.show(_relatedTarget)
1217 | })
1218 | }
1219 |
1220 | var old = $.fn.modal
1221 |
1222 | $.fn.modal = Plugin
1223 | $.fn.modal.Constructor = Modal
1224 |
1225 |
1226 | // MODAL NO CONFLICT
1227 | // =================
1228 |
1229 | $.fn.modal.noConflict = function () {
1230 | $.fn.modal = old
1231 | return this
1232 | }
1233 |
1234 |
1235 | // MODAL DATA-API
1236 | // ==============
1237 |
1238 | $(document).on('click.bs.modal.data-api', '[data-toggle="modal"]', function (e) {
1239 | var $this = $(this)
1240 | var href = $this.attr('href')
1241 | var $target = $($this.attr('data-target') || (href && href.replace(/.*(?=#[^\s]+$)/, ''))) // strip for ie7
1242 | var option = $target.data('bs.modal') ? 'toggle' : $.extend({ remote: !/#/.test(href) && href }, $target.data(), $this.data())
1243 |
1244 | if ($this.is('a')) e.preventDefault()
1245 |
1246 | $target.one('show.bs.modal', function (showEvent) {
1247 | if (showEvent.isDefaultPrevented()) return // only register focus restorer if modal will actually get shown
1248 | $target.one('hidden.bs.modal', function () {
1249 | $this.is(':visible') && $this.trigger('focus')
1250 | })
1251 | })
1252 | Plugin.call($target, option, this)
1253 | })
1254 |
1255 | }(jQuery);
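// Illustrative usage sketch for the modal plugin above (an editor's note, not
// part of the upstream Bootstrap source; "#myModal" is hypothetical):
//
//   $('#myModal').modal({ keyboard: false })  // init; shows at once (show: true)
//   $('#myModal').modal('toggle')             // or 'show' / 'hide'
//   $('#myModal').on('shown.bs.modal', function () {
//     // the transition has finished; safe to focus a field here
//   })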
1256 |
1257 | /* ========================================================================
1258 | * Bootstrap: tooltip.js v3.3.7
1259 | * http://getbootstrap.com/javascript/#tooltip
1260 | * Inspired by the original jQuery.tipsy by Jason Frame
1261 | * ========================================================================
1262 | * Copyright 2011-2016 Twitter, Inc.
1263 | * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
1264 | * ======================================================================== */
1265 |
1266 |
1267 | +function ($) {
1268 | 'use strict';
1269 |
1270 | // TOOLTIP PUBLIC CLASS DEFINITION
1271 | // ===============================
1272 |
1273 | var Tooltip = function (element, options) {
1274 | this.type = null
1275 | this.options = null
1276 | this.enabled = null
1277 | this.timeout = null
1278 | this.hoverState = null
1279 | this.$element = null
1280 | this.inState = null
1281 |
1282 | this.init('tooltip', element, options)
1283 | }
1284 |
1285 | Tooltip.VERSION = '3.3.7'
1286 |
1287 | Tooltip.TRANSITION_DURATION = 150
1288 |
1289 | Tooltip.DEFAULTS = {
1290 | animation: true,
1291 | placement: 'top',
1292 | selector: false,
1293 | template: '