├── Dockerfile
├── Makefile
├── README.md
├── __pycache__
│   ├── coconet.cpython-35.pyc
│   └── coconet.cpython-36.pyc
├── coconet.py
├── images
│   ├── 13_horse.png
│   ├── 208_horse_noisy_20 FUSED.png
│   ├── 208_horse_noisy_20.png
│   ├── 240_automobile_gt FUSED.png
│   ├── 240_automobile_gt.png
│   ├── butterfly_GT.bmp
│   ├── fig1.png
│   ├── horse_collage.png
│   ├── result.png
│   ├── result_conv.png
│   └── result_dense.png
├── initial_weights.h5
├── requirements.txt
└── test.py

/Dockerfile:
--------------------------------------------------------------------------------
ARG cuda_version=9.0
ARG cudnn_version=7
FROM nvidia/cuda:${cuda_version}-cudnn${cudnn_version}-devel

# Install system packages
RUN apt-get update && apt-get install -y --no-install-recommends \
      bzip2 \
      g++ \
      git \
      graphviz \
      libgl1-mesa-glx \
      libhdf5-dev \
      openmpi-bin \
      wget && \
    rm -rf /var/lib/apt/lists/*

# Install conda
ENV CONDA_DIR /opt/conda
ENV PATH $CONDA_DIR/bin:$PATH

RUN wget --quiet --no-check-certificate https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh && \
    echo "c59b3dd3cad550ac7596e0d599b91e75d88826db132e4146030ef471bb434e9a *Miniconda3-4.2.12-Linux-x86_64.sh" | sha256sum -c - && \
    /bin/bash /Miniconda3-4.2.12-Linux-x86_64.sh -f -b -p $CONDA_DIR && \
    rm Miniconda3-4.2.12-Linux-x86_64.sh && \
    echo export PATH=$CONDA_DIR/bin:'$PATH' > /etc/profile.d/conda.sh

# Install Python packages and keras
ENV NB_USER keras
ENV NB_UID 1000

RUN useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \
    chown $NB_USER $CONDA_DIR -R && \
    mkdir -p /src && \
    chown $NB_USER /src

USER $NB_USER

ARG python_version=3.6

# Note: the git:// protocol is no longer served by GitHub, so the keras
# repository is cloned over https instead.
RUN conda install -y python=${python_version} && \
    pip install --upgrade pip && \
    pip install \
      sklearn_pandas \
      tensorflow-gpu && \
    pip install https://cntk.ai/PythonWheel/GPU/cntk-2.1-cp36-cp36m-linux_x86_64.whl && \
    conda install \
      bcolz \
      h5py \
      matplotlib \
      mkl \
      nose \
      notebook \
      Pillow \
      pandas \
      pygpu \
      pyyaml \
      scikit-learn \
      six \
      theano && \
    git clone https://github.com/keras-team/keras.git /src && pip install -e /src[tests] && \
    pip install git+https://github.com/keras-team/keras.git && \
    conda clean -yt

ENV PYTHONPATH='/src/:$PYTHONPATH'

WORKDIR /src

EXPOSE 8888

CMD jupyter notebook --port=8888 --ip=0.0.0.0
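
# Usage sketch (a hypothetical manual invocation; the Makefile in this
# repository wraps equivalent commands through nvidia-docker):
#   docker build -t keras -f Dockerfile .
#   GPU=0 nvidia-docker run -it --net=host keras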
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
help:
	@cat Makefile

DATA?="${HOME}/Data"
GPU?=0
DOCKER_FILE=Dockerfile
DOCKER=GPU=$(GPU) nvidia-docker
BACKEND=tensorflow
PYTHON_VERSION?=3.6
CUDA_VERSION?=9.0
CUDNN_VERSION?=7
TEST=tests/
SRC?=$(shell dirname `pwd`)

build:
	docker build -t keras --build-arg python_version=$(PYTHON_VERSION) --build-arg cuda_version=$(CUDA_VERSION) --build-arg cudnn_version=$(CUDNN_VERSION) -f $(DOCKER_FILE) .

bash: build
	$(DOCKER) run -it -u 0 -v $(SRC):/src/workspace -v $(DATA):/data --env KERAS_BACKEND=$(BACKEND) keras bash

ipython: build
	$(DOCKER) run -it -v $(SRC):/src/workspace -v $(DATA):/data --env KERAS_BACKEND=$(BACKEND) keras ipython

notebook: build
	$(DOCKER) run -it -v $(SRC):/src/workspace -v $(DATA):/data --net=host --env KERAS_BACKEND=$(BACKEND) keras

test: build
	$(DOCKER) run -it -v $(SRC):/src/workspace -v $(DATA):/data --env KERAS_BACKEND=$(BACKEND) keras py.test $(TEST)
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
![image](https://github.com/paubric/python-fuse-coconet/blob/master/images/fig1.png)

# Functional Storage Encoding - CoCoNet

We propose a __deep neural network__ approach for mapping the 2D pixel coordinates in an image to the corresponding Red-Green-Blue (RGB) color values. The neural network is termed CocoNet, i.e. COordinates-to-COlor NETwork. During the training process, the neural network learns to encode the input image within its layers. More specifically, the network learns __a continuous function that approximates the discrete RGB values sampled over the discrete 2D pixel locations__. At test time, given a 2D pixel coordinate, the neural network will output the approximate RGB values of the corresponding pixel. By considering every 2D pixel location, the network can actually __reconstruct the entire learned image__. It is important to note that we have to train an individual neural network for each input image, i.e. one network encodes a single image only. Our neural image encoding approach has various low-level image processing applications ranging from __image encoding, image compression and image denoising to image resampling and image completion__. We conduct experiments that include both quantitative and qualitative results, demonstrating the utility of our approach and its superiority over standard baselines, e.g. bilateral filtering or bicubic interpolation.

[Presentation](https://docs.google.com/presentation/d/1Le9Qo_bpHdKLYXZhZpf9lXUlnp9uigknEMXxGal4xvE/edit?usp=sharing)
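
For a quick view of how the pieces fit together, here is a minimal sketch of the encode/decode round trip, using the helper functions from `coconet.py` on a 32x32 input (see `test.py` for the full demo, which additionally performs 4x upsampling):
```
import coconet

# One network encodes a single image; the demo images are 32x32.
img, filename = coconet.load_image('images/13_horse.png')
X = coconet.generate_placeholder_tensor(32, 32)   # per-pixel coordinate features
Y = coconet.generate_value_tensor(img, 32, 32)    # per-pixel RGB targets in [0, 1]
model = coconet.generate_model_dense([100] * 10)  # 10 hidden layers of width 100
model.fit(X, Y, epochs = 1000, batch_size = 128, shuffle = True)
reconstruction = coconet.predict(model, X, 32, 32)
```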

# Installation

The demonstration scripts are written in Python 3, using Keras with the TensorFlow back-end, along with a few other utility libraries.

## Linux
Install Python 3.
```
sudo apt-get install python3.6
```
Install Tkinter.
```
apt-get install python-tk
```
Install the Python module requirements from the provided text file.
```
pip install -r requirements.txt
```
Run the test file.
```
python3 test.py
```

## Windows and Mac OS X
Install Python 3 and Tkinter.
Install the Python module requirements from the provided text file.
```
pip install -r requirements.txt
```
Run the test file.
```
python3 test.py
```
## Docker version
[Install Docker](https://docs.docker.com/install/#releases)
Build the Docker image and open a shell inside the container.
```
sudo make bash GPU=0
```
Install the additional requirements.
```
apt-get install python-tk
```
Clone the repository.
Install the Python module requirements from the provided text file.
```
pip install -r requirements.txt
```
Run the test file.
```
python3 test.py
```
# Citation
Please cite the following work if you use any part of this code in your scientific work:
```
@inproceedings{Bricman-ICONIP-2018,
  author    = {Paul Andrei Bricman and Radu Tudor Ionescu},
  title     = "{CocoNet: A deep neural network for mapping pixel coordinates to color values}",
  booktitle = {Proceedings of ICONIP},
  year      = {2018}}
```
https://arxiv.org/abs/1805.11357

--------------------------------------------------------------------------------
/__pycache__/coconet.cpython-35.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/__pycache__/coconet.cpython-35.pyc
--------------------------------------------------------------------------------
/__pycache__/coconet.cpython-36.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/__pycache__/coconet.cpython-36.pyc
--------------------------------------------------------------------------------
/coconet.py:
--------------------------------------------------------------------------------
# Math
import numpy as np
from math import atan2

# Machine Learning
import keras
from keras.models import Sequential
from keras.layers import Dense, Conv2D

# Image Processing
import cv2

# Others
import os

def generate_placeholder_tensor(picture_sizex, picture_sizey, enhance = 1, trimensional = False):
    # Generate placeholder matrix with given dimensions: one 6-feature row per
    # pixel location (absolute position, distances to the opposite borders,
    # distance to the image centre, polar angle of the position)
    X = []
    for x_it in range(0, picture_sizex * enhance):
        for y_it in range(0, picture_sizey * enhance):
            x0 = x_it / enhance + 0.5
            y0 = y_it / enhance + 0.5
            x = (x0 - picture_sizex / 2)
            y = (y0 - picture_sizey / 2)
            X.append((x0, y0, picture_sizex - x0, picture_sizey - y0, (x**2 + y**2)**(1/2), atan2(y0, x0)))
    if (trimensional == False):
        return np.asarray(X)
    else:
        return np.reshape(np.asarray(X), (1, picture_sizex * enhance, picture_sizey * enhance, 6))
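
# Worked example (values rounded): for a 32x32 image with enhance = 1, the
# pixel at (x_it, y_it) = (0, 0) gives x0 = y0 = 0.5, so its feature tuple is
# approximately (0.5, 0.5, 31.5, 31.5, 21.92, 0.785).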

def generate_value_tensor(img, picture_sizex, picture_sizey, trimensional = False):
    # Generate value matrix from image, scaling RGB values to [0, 1]
    Y = []
    for x_iterator in range(0, picture_sizex):
        for y_iterator in range(0, picture_sizey):
            Y.append(np.multiply(1/255, img[x_iterator][y_iterator]))
    if (trimensional == False):
        return np.asarray(Y)
    else:
        return np.reshape(np.asarray(Y), (1, picture_sizex, picture_sizey, 3))

def generate_model_dense(width_list):
    # Generate dense sequential model with fixed input and output and hidden layer widths from width_list
    model = Sequential()
    model.add(Dense(width_list[0], input_dim=6, activation = 'tanh', kernel_initializer = 'random_uniform'))
    for i in range(1, len(width_list)):
        model.add(Dense(width_list[i], activation = 'tanh'))
    model.add(Dense(3, activation = 'sigmoid'))
    model.compile(loss = 'mean_squared_error', optimizer = keras.optimizers.Adam(lr=0.001), metrics = ['accuracy'])
    model.save_weights('initial_weights.h5')
    return model

def generate_model_conv(filters_list, dim = 10, slen = 1):
    # Generate conv sequential model with fixed input and output and filter counts from filters_list
    model = Sequential()
    model.add(Conv2D(kernel_size = (dim, dim), strides = slen, filters = filters_list[0], padding = 'same', input_shape=(None, None, 6), activation = 'tanh', kernel_initializer = 'random_uniform'))
    for i in range(1, len(filters_list)):
        model.add(Conv2D(kernel_size = (dim, dim), strides = slen, filters = filters_list[i], padding = 'same', activation = 'tanh'))
    model.add(Conv2D(kernel_size = (dim, dim), strides = slen, filters = 3, padding = 'same', activation = 'sigmoid'))
    model.compile(loss = 'mean_squared_error', optimizer = keras.optimizers.Adam(lr=0.0001), metrics = ['accuracy'])
    model.save_weights('initial_weights.h5')
    return model
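
# Note: unlike the dense model, this fully convolutional variant expects the
# 4-D tensors produced with trimensional = True, e.g. (a usage sketch):
#   X = generate_placeholder_tensor(32, 32, trimensional = True)  # shape (1, 32, 32, 6)
#   Y = generate_value_tensor(img, 32, 32, trimensional = True)   # shape (1, 32, 32, 3)
#   model = generate_model_conv([32, 32, 32])
#   model.fit(X, Y, epochs = 1000)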

def load_image(address):
    # Load image as np.array and extract filename
    filename = os.path.basename(address)
    img = cv2.imread(address)
    return img, filename

def compare_images(img1, img2):
    # Compute PSNR, SSIM and MSE for 2 images
    # (requires scikit-image, which is not listed in requirements.txt,
    # hence the local import)
    from skimage.measure import compare_psnr, compare_ssim, compare_mse
    psnr = compare_psnr(img1, img2)
    ssim = compare_ssim(img1, img2, multichannel=True)
    mse = compare_mse(img1, img2)
    return psnr, ssim, mse

def predict(model, X, picture_sizex, picture_sizey):
    # Predict RGB values at the given coordinates and rescale them to [0, 255]
    prediction = model.predict(X)
    prediction = np.multiply(255, prediction)
    prediction = prediction.reshape(picture_sizex, picture_sizey, 3)
    return prediction.astype('uint8')

def save_image(img, address):
    cv2.imwrite(address, img)
--------------------------------------------------------------------------------
/images/13_horse.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/images/13_horse.png
--------------------------------------------------------------------------------
/images/208_horse_noisy_20 FUSED.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/images/208_horse_noisy_20 FUSED.png
--------------------------------------------------------------------------------
/images/208_horse_noisy_20.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/images/208_horse_noisy_20.png
--------------------------------------------------------------------------------
/images/240_automobile_gt FUSED.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/images/240_automobile_gt FUSED.png
--------------------------------------------------------------------------------
/images/240_automobile_gt.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/images/240_automobile_gt.png
--------------------------------------------------------------------------------
/images/butterfly_GT.bmp:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/images/butterfly_GT.bmp
--------------------------------------------------------------------------------
/images/fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/images/fig1.png
--------------------------------------------------------------------------------
/images/horse_collage.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/images/horse_collage.png
--------------------------------------------------------------------------------
/images/result.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/images/result.png
--------------------------------------------------------------------------------
/images/result_conv.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/images/result_conv.png
--------------------------------------------------------------------------------
/images/result_dense.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/images/result_dense.png
--------------------------------------------------------------------------------
/initial_weights.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/paulbricman/python-fuse-coconet/2cc51f844f38b1ab09b8ed2cfc7746bb2f17f242/initial_weights.h5
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
numpy==1.15.0
matplotlib==2.2.2
opencv-python==3.4.2.17
tensorflow==1.9.0
keras==2.2.0
--------------------------------------------------------------------------------
/test.py:
--------------------------------------------------------------------------------
import coconet
from tkinter.filedialog import askopenfilename
import matplotlib.pyplot as plt
import cv2

# The demo assumes a 32x32 input image (e.g. the samples in images/)
picture_sizex = 32
picture_sizey = 32
#enhance = 1
enhance = 4

address = askopenfilename()

img, filename = coconet.load_image(address)

X = coconet.generate_placeholder_tensor(picture_sizex, picture_sizey)
X_SR = coconet.generate_placeholder_tensor(picture_sizex, picture_sizey, enhance = enhance)
Y = coconet.generate_value_tensor(img, picture_sizex, picture_sizey)
model = coconet.generate_model_dense([100] * 10)

# Train one network on the single input image, then predict at the denser grid
history = model.fit(X, Y, epochs = 1000, batch_size = 128, shuffle = True)
#history = model.fit(X, Y, epochs = 1000, batch_size = 1024)
prediction = coconet.predict(model, X_SR, picture_sizex * enhance, picture_sizey * enhance)

# Show the original image and its upsampled reconstruction side by side
plt.subplot(121)
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.subplot(122)
plt.imshow(cv2.cvtColor(prediction, cv2.COLOR_BGR2RGB))
plt.show()

coconet.save_image(prediction, address[:-4] + ' FUSED.png')
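
# Optional quality check (a sketch; valid only when enhance == 1 so that the
# reconstruction has the same shape as the input; coconet.compare_images
# additionally requires scikit-image):
# psnr, ssim, mse = coconet.compare_images(img, prediction)
# print('PSNR: %.2f  SSIM: %.4f  MSE: %.2f' % (psnr, ssim, mse))
--------------------------------------------------------------------------------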