├── CONTRIBUTING.md ├── PySimpleGUI_Colorizer.py ├── PySimpleGUI_Colorizer_Webcam_Multi_Window.py ├── model ├── colorization_deploy_v2.prototxt ├── pts_in_hull.npy └── readme.md └── readme.md /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | ## Contributing to PySimpleGUI 2 | 3 | Hi there! Mike here.... Thank you for taking the time to read this document. 4 | 5 | ### Open Source License, but Private Development 6 | 7 | PySimpleGUI is different from most projects on GitHub. It is licensed under the LGPL3 open source license. However, the coding and development of the project is not structured the way most open source projects are. 8 | 9 | This project/account does not accept user-submitted code or documentation. 10 | 11 | ### You Can Still Contribute 12 | 13 | #### Write Applications, Use PySimpleGUI, Make Repos, Post Screenshots, Write Tutorials, Teach Others 14 | 15 | These are a few of the ways you can directly contribute to PySimpleGUI. Using the package to make cool stuff, and helping others learn how to use it to make cool stuff, is a big help to PySimpleGUI. **Everyone** learns from seeing other people's implementations. It's through users creating applications that new problems and needs are discovered. These have had a profound and positive impact on the project in the past. 16 | 17 | #### Make Suggestions 18 | 19 | There are hundreds of open issues in the main PySimpleGUI GitHub account that are actively worked on daily, and thousands more that have been completed. The evolution of PySimpleGUI over the years has been a combination of my vision for the product and ideas from users. So many people have helped make PySimpleGUI better. 20 | 21 | ### Pull Requests 22 | 23 | Pull requests are *not being accepted* for the project. This includes code changes sent by means other than pull requests. Plainly put, code you send will not be used. 24 | 25 | I don't mean to be ugly. This isn't personal. 
Heck, I don't know "you", the reader, personally. It's not about ego. It's complicated. The result is that it allows me to dedicate my life to this project. It's what's required, for whatever reason, for me to do this. That's the best explanation I have. I love and respect the users of this work. 26 | 27 | 28 | ### Bug Fixes 29 | 30 | If you file an Issue for a bug, have located the bug, and found a fix in 10 lines of code or less.... and you wish to share your fix with the community, then feel free to include it with the filed Issue. If it's longer than 10 lines and you wish to discuss it, then send an email to help@PySimpleGUI.org. 31 | 32 | ## Thank You 33 | 34 | This project comes from a well-meaning place: a love of computing and of helping others. It's not about "me", it's about ***you***. 35 | 36 | The support from the user community has been ***amazing***. Your passion for creating PySimpleGUI applications is infectious. Every "thank you" is noticed and appreciated! Your passion for wanting to see PySimpleGUI improve is neither ignored nor unappreciated. At a time when the Internet can feel toxic, there have been expressions of appreciation, gratitude, and encouragement that are unbelievable. I'm touched on a very frequent basis and am filled with gratitude myself as a result. 37 | 38 | It's understood that this way of developing a Python package is unorthodox. You may find it frustrating and slow, but I hope you can respect the decision for the project to operate in this manner and be supportive. 39 | 40 | -------------------------------------------------------------------------------- /PySimpleGUI_Colorizer.py: -------------------------------------------------------------------------------- 1 | """ 2 | Colorization based on the Zhang Image Colorization Deep Learning Algorithm 3 | This header to remain with this code. 
4 | 5 | The implementation of the colorization algorithm is from PyImageSearch 6 | You can learn how the algorithm works and the details of this implementation here: 7 | https://www.pyimagesearch.com/2019/02/25/black-and-white-image-colorization-with-opencv-and-deep-learning/ 8 | 9 | You will need to download the pre-trained data from this location and place it in the model folder: 10 | https://www.dropbox.com/s/dx0qvhhp5hbcx7z/colorization_release_v2.caffemodel?dl=1 11 | 12 | GUI implemented in PySimpleGUI by the PySimpleGUI group 13 | Of course, enjoy, learn, play, have fun! 14 | Copyright 2019 PySimpleGUI 15 | """ 16 | 17 | import numpy as np 18 | import cv2 19 | import PySimpleGUI as sg 20 | import os.path 21 | 22 | version = '7 June 2020' 23 | 24 | prototxt = r'model/colorization_deploy_v2.prototxt' 25 | model = r'model/colorization_release_v2.caffemodel' 26 | points = r'model/pts_in_hull.npy' 27 | points = os.path.join(os.path.dirname(__file__), points) 28 | prototxt = os.path.join(os.path.dirname(__file__), prototxt) 29 | model = os.path.join(os.path.dirname(__file__), model) 30 | if not os.path.isfile(model): 31 | sg.popup_scrolled('Missing model file', 'You are missing the file "colorization_release_v2.caffemodel"', 32 | 'Download it and place it into your "model" folder', 'You can download this file from this location:\n', r'https://www.dropbox.com/s/dx0qvhhp5hbcx7z/colorization_release_v2.caffemodel?dl=1') 33 | exit() 34 | net = cv2.dnn.readNetFromCaffe(prototxt, model) # load model from disk 35 | pts = np.load(points) 36 | 37 | # add the cluster centers as 1x1 convolutions to the model 38 | class8 = net.getLayerId("class8_ab") 39 | conv8 = net.getLayerId("conv8_313_rh") 40 | pts = pts.transpose().reshape(2, 313, 1, 1) 41 | net.getLayer(class8).blobs = [pts.astype("float32")] 42 | net.getLayer(conv8).blobs = [np.full([1, 313], 2.606, dtype="float32")] 43 | 44 | def colorize_image(image_filename=None, cv2_frame=None): 45 | """ 46 | Where all the magic happens. 
Colorizes the image provided. Can colorize either 47 | a filename OR a cv2 frame (most likely read from a web cam) 48 | :param image_filename: (str) full filename to colorize 49 | :param cv2_frame: (cv2 frame) 50 | :return: Tuple[cv2 frame, cv2 frame] both the non-colorized and colorized images in cv2 format 51 | """ 52 | # load the input image from disk, scale the pixel intensities to the range [0, 1], and then convert the image from the BGR to Lab color space 53 | image = cv2.imread(image_filename) if image_filename else cv2_frame 54 | scaled = image.astype("float32") / 255.0 55 | lab = cv2.cvtColor(scaled, cv2.COLOR_BGR2LAB) 56 | 57 | # resize the Lab image to 224x224 (the dimensions the colorization network accepts), split channels, extract the 'L' channel, and then perform mean centering 58 | resized = cv2.resize(lab, (224, 224)) 59 | L = cv2.split(resized)[0] 60 | L -= 50 61 | 62 | # pass the L channel through the network, which will *predict* the 'a' and 'b' channel values 63 | # print("[INFO] colorizing image...") 64 | net.setInput(cv2.dnn.blobFromImage(L)) 65 | ab = net.forward()[0, :, :, :].transpose((1, 2, 0)) 66 | 67 | # resize the predicted 'ab' volume to the same dimensions as our input image 68 | ab = cv2.resize(ab, (image.shape[1], image.shape[0])) 69 | 70 | # grab the 'L' channel from the *original* input image (not the resized one) and concatenate the original 'L' channel with the predicted 'ab' channels 71 | L = cv2.split(lab)[0] 72 | colorized = np.concatenate((L[:, :, np.newaxis], ab), axis=2) 73 | 74 | # convert the output image from the Lab color space back to BGR, then clip any values that fall outside the range [0, 1] 75 | colorized = cv2.cvtColor(colorized, cv2.COLOR_LAB2BGR) 76 | colorized = np.clip(colorized, 0, 1) 77 | 78 | # the current colorized image is represented as a floating point data type in the range [0, 1] -- let's convert to an unsigned 8-bit integer representation in the range [0, 255] 79 | colorized = (255 * 
colorized).astype("uint8") 80 | return image, colorized 81 | 82 | 83 | def convert_to_grayscale(frame): 84 | gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # Convert webcam frame to grayscale 85 | gray_3_channels = np.zeros_like(frame) # Convert grayscale frame (single channel) to 3 channels 86 | gray_3_channels[:, :, 0] = gray 87 | gray_3_channels[:, :, 1] = gray 88 | gray_3_channels[:, :, 2] = gray 89 | return gray_3_channels 90 | 91 | 92 | # --------------------------------- The GUI --------------------------------- 93 | 94 | # First the window layout...2 columns 95 | 96 | left_col = [[sg.Text('Folder'), sg.In(size=(25,1), enable_events=True, key='-FOLDER-'), sg.FolderBrowse()], 97 | [sg.Listbox(values=[], enable_events=True, size=(40,20), key='-FILE LIST-')], 98 | [sg.CBox('Convert to gray first', key='-MAKEGRAY-')], 99 | [sg.Text('Version ' + version, font='Courier 8')]] 100 | 101 | images_col = [[sg.Text('Input file:'), sg.In(enable_events=True, key='-IN FILE-'), sg.FileBrowse()], 102 | [sg.Button('Colorize Photo', key='-PHOTO-'), sg.Button('Start Webcam', key='-WEBCAM-'), sg.Button('Save File', key='-SAVE-'), sg.Button('Exit')], 103 | [sg.Image(filename='', key='-IN-'), sg.Image(filename='', key='-OUT-')],] 104 | # ----- Full layout ----- 105 | layout = [[sg.Column(left_col), sg.VSeperator(), sg.Column(images_col)]] 106 | 107 | # ----- Make the window ----- 108 | window = sg.Window('Photo Colorizer', layout, grab_anywhere=True) 109 | 110 | # ----- Run the Event Loop ----- 111 | prev_filename = colorized = cap = None 112 | while True: 113 | event, values = window.read() 114 | if event in (None, 'Exit'): 115 | break 116 | if event == '-FOLDER-': # Folder name was filled in, make a list of files in the folder 117 | folder = values['-FOLDER-'] 118 | img_types = (".png", ".jpg", ".jpeg", ".tiff", ".bmp") 119 | # get list of files in folder 120 | try: 121 | flist0 = os.listdir(folder) 122 | except Exception: 123 | continue 124 | fnames = [f for f in flist0 if os.path.isfile( 
125 | os.path.join(folder, f)) and f.lower().endswith(img_types)] 126 | window['-FILE LIST-'].update(fnames) 127 | elif event == '-FILE LIST-': # A file was chosen from the listbox 128 | try: 129 | filename = os.path.join(values['-FOLDER-'], values['-FILE LIST-'][0]) 130 | image = cv2.imread(filename) 131 | window['-IN-'].update(data=cv2.imencode('.png', image)[1].tobytes()) 132 | window['-OUT-'].update(data='') 133 | window['-IN FILE-'].update('') 134 | 135 | if values['-MAKEGRAY-']: 136 | gray_3_channels = convert_to_grayscale(image) 137 | window['-IN-'].update(data=cv2.imencode('.png', gray_3_channels)[1].tobytes()) 138 | image, colorized = colorize_image(cv2_frame=gray_3_channels) 139 | else: 140 | image, colorized = colorize_image(filename) 141 | 142 | window['-OUT-'].update(data=cv2.imencode('.png', colorized)[1].tobytes()) 143 | except: 144 | continue 145 | elif event == '-PHOTO-': # Colorize photo button clicked 146 | try: 147 | if values['-IN FILE-']: 148 | filename = values['-IN FILE-'] 149 | elif values['-FILE LIST-']: 150 | filename = os.path.join(values['-FOLDER-'], values['-FILE LIST-'][0]) 151 | else: 152 | continue 153 | if values['-MAKEGRAY-']: 154 | gray_3_channels = convert_to_grayscale(cv2.imread(filename)) 155 | window['-IN-'].update(data=cv2.imencode('.png', gray_3_channels)[1].tobytes()) 156 | image, colorized = colorize_image(cv2_frame=gray_3_channels) 157 | else: 158 | image, colorized = colorize_image(filename) 159 | window['-IN-'].update(data=cv2.imencode('.png', image)[1].tobytes()) 160 | window['-OUT-'].update(data=cv2.imencode('.png', colorized)[1].tobytes()) 161 | except: 162 | continue 163 | elif event == '-IN FILE-': # A single filename was chosen 164 | filename = values['-IN FILE-'] 165 | if filename != prev_filename: 166 | prev_filename = filename 167 | try: 168 | image = cv2.imread(filename) 169 | window['-IN-'].update(data=cv2.imencode('.png', image)[1].tobytes()) 170 | except: 171 | continue 172 | elif event == '-WEBCAM-': # 
Webcam button clicked 173 | sg.popup_quick_message('Starting up your Webcam... this takes a moment....', auto_close_duration=1, background_color='red', text_color='white', font='Any 16') 174 | window['-WEBCAM-'].update('Stop Webcam', button_color=('white','red')) 175 | cap = cv2.VideoCapture(0) if not cap else cap 176 | while True: # Loop that reads and shows webcam until stop button 177 | ret, frame = cap.read() # Read a webcam frame 178 | gray_3_channels = convert_to_grayscale(frame) 179 | image, colorized = colorize_image(cv2_frame=gray_3_channels) # Colorize the 3-channel grayscale frame 180 | window['-IN-'].update(data=cv2.imencode('.png', gray_3_channels)[1].tobytes()) 181 | window['-OUT-'].update(data=cv2.imencode('.png', colorized)[1].tobytes()) 182 | event, values = window.read(timeout=0) # Update the window outputs and check for new events 183 | if event in (None, '-WEBCAM-', 'Exit'): # Clicked the Stop Webcam button or closed window entirely 184 | window['-WEBCAM-'].update('Start Webcam', button_color=sg.theme_button_color()) 185 | window['-IN-'].update('') 186 | window['-OUT-'].update('') 187 | break 188 | elif event == '-SAVE-' and colorized is not None: # Clicked the Save File button 189 | filename = sg.popup_get_file('Save colorized image.\nThe colorized image will be saved in a format matching the extension you enter.', save_as=True) 190 | try: 191 | if filename: 192 | cv2.imwrite(filename, colorized) 193 | sg.popup_quick_message('Image save complete', background_color='red', text_color='white', font='Any 16') 194 | except Exception: 195 | sg.popup_quick_message('ERROR - Image NOT saved!', background_color='red', text_color='white', font='Any 16') 196 | # ----- Exit program ----- 197 | window.close() -------------------------------------------------------------------------------- /PySimpleGUI_Colorizer_Webcam_Multi_Window.py: -------------------------------------------------------------------------------- 1 | """ 2 | July 2020 - experimental multi-window version of the 
webcam portion of the window colorizer program 3 | 4 | Colorization based on the Zhang Image Colorization Deep Learning Algorithm 5 | This header to remain with this code. 6 | 7 | The implementation of the colorization algorithm is from PyImageSearch 8 | You can learn how the algorithm works and the details of this implementation here: 9 | https://www.pyimagesearch.com/2019/02/25/black-and-white-image-colorization-with-opencv-and-deep-learning/ 10 | 11 | You will need to download the pre-trained data from this location and place it in the model folder: 12 | https://www.dropbox.com/s/dx0qvhhp5hbcx7z/colorization_release_v2.caffemodel?dl=1 13 | 14 | GUI implemented in PySimpleGUI by the PySimpleGUI group 15 | Of course, enjoy, learn, play, have fun! 16 | Copyright 2020 PySimpleGUI 17 | """ 18 | 19 | import numpy as np 20 | import cv2 21 | import PySimpleGUI as sg 22 | import os.path 23 | 24 | prototxt = r'model/colorization_deploy_v2.prototxt' 25 | model = r'model/colorization_release_v2.caffemodel' 26 | points = r'model/pts_in_hull.npy' 27 | points = os.path.join(os.path.dirname(__file__), points) 28 | prototxt = os.path.join(os.path.dirname(__file__), prototxt) 29 | model = os.path.join(os.path.dirname(__file__), model) 30 | if not os.path.isfile(model): 31 | sg.popup_scrolled('Missing model file', 'You are missing the file "colorization_release_v2.caffemodel"', 32 | 'Download it and place it into your "model" folder', 'You can download this file from this location:\n', r'https://www.dropbox.com/s/dx0qvhhp5hbcx7z/colorization_release_v2.caffemodel?dl=1') 33 | exit() 34 | net = cv2.dnn.readNetFromCaffe(prototxt, model) # load model from disk 35 | pts = np.load(points) 36 | 37 | # add the cluster centers as 1x1 convolutions to the model 38 | class8 = net.getLayerId("class8_ab") 39 | conv8 = net.getLayerId("conv8_313_rh") 40 | pts = pts.transpose().reshape(2, 313, 1, 1) 41 | net.getLayer(class8).blobs = [pts.astype("float32")] 42 | net.getLayer(conv8).blobs = [np.full([1, 
313], 2.606, dtype="float32")] 43 | 44 | def colorize_image(image_filename=None, cv2_frame=None): 45 | """ 46 | Where all the magic happens. Colorizes the image provided. Can colorize either 47 | a filename OR a cv2 frame (most likely read from a web cam) 48 | :param image_filename: (str) full filename to colorize 49 | :param cv2_frame: (cv2 frame) 50 | :return: cv2 frame colorized image in cv2 format 51 | """ 52 | # load the input image from disk, scale the pixel intensities to the range [0, 1], and then convert the image from the BGR to Lab color space 53 | image = cv2.imread(image_filename) if image_filename else cv2_frame 54 | scaled = image.astype("float32") / 255.0 55 | lab = cv2.cvtColor(scaled, cv2.COLOR_BGR2LAB) 56 | 57 | # resize the Lab image to 224x224 (the dimensions the colorization network accepts), split channels, extract the 'L' channel, and then perform mean centering 58 | resized = cv2.resize(lab, (224, 224)) 59 | L = cv2.split(resized)[0] 60 | L -= 50 61 | 62 | # pass the L channel through the network, which will *predict* the 'a' and 'b' channel values 63 | # print("[INFO] colorizing image...") 64 | net.setInput(cv2.dnn.blobFromImage(L)) 65 | ab = net.forward()[0, :, :, :].transpose((1, 2, 0)) 66 | 67 | # resize the predicted 'ab' volume to the same dimensions as our input image 68 | ab = cv2.resize(ab, (image.shape[1], image.shape[0])) 69 | 70 | # grab the 'L' channel from the *original* input image (not the resized one) and concatenate the original 'L' channel with the predicted 'ab' channels 71 | L = cv2.split(lab)[0] 72 | colorized = np.concatenate((L[:, :, np.newaxis], ab), axis=2) 73 | 74 | # convert the output image from the Lab color space back to BGR, then clip any values that fall outside the range [0, 1] 75 | colorized = cv2.cvtColor(colorized, cv2.COLOR_LAB2BGR) 76 | colorized = np.clip(colorized, 0, 1) 77 | 78 | # the current colorized image is represented as a floating point data type in the range [0, 1] -- let's convert to an unsigned 
8-bit integer representation in the range [0, 255] 79 | colorized = (255 * colorized).astype("uint8") 80 | return colorized 81 | 82 | 83 | def convert_to_grayscale(frame): 84 | gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # Convert webcam frame to grayscale 85 | gray_3_channels = np.zeros_like(frame) # Convert grayscale frame (single channel) to 3 channels 86 | gray_3_channels[:, :, 0] = gray 87 | gray_3_channels[:, :, 1] = gray 88 | gray_3_channels[:, :, 2] = gray 89 | return gray_3_channels 90 | 91 | 92 | def make_video_window(title, location): 93 | return sg.Window(title, [[sg.Image(key='-IMAGE-')]], finalize=True, margins=(0,0), element_padding=(0,0), location=location) 94 | 95 | def convert_cvt_to_data(cv2_frame): 96 | return cv2.imencode('.png', cv2_frame)[1].tobytes() 97 | 98 | 99 | def main(): 100 | # --------------------------------- The GUI --------------------------------- 101 | 102 | layout = [ [sg.Text('Colorized Webcam Demo', font='Any 18')], 103 | [sg.Button('Start Webcam', key='-WEBCAM-'), sg.Button('Exit')]] 104 | 105 | # ----- Make the starting window ----- 106 | window_start = sg.Window('Webcam Colorizer', layout, grab_anywhere=True, finalize=True) 107 | 108 | # ----- Run the Event Loop ----- 109 | cap, playback_active = None, False 110 | while True: 111 | window, event, values = sg.read_all_windows(timeout=10) 112 | if event == 'Exit' or (window == window_start and event is None): 113 | break 114 | elif event == '-WEBCAM-': # Webcam button clicked 115 | if not playback_active: 116 | sg.popup_quick_message('Starting up your Webcam... 
this takes a moment....', auto_close_duration=1, background_color='red', text_color='white', font='Any 16') 117 | window_start['-WEBCAM-'].update('Stop Webcam', button_color=('white','red')) 118 | cap = cv2.VideoCapture(0) if not cap else cap 119 | window_raw_camera = make_video_window('Your Webcam Raw Video', (300,200)) 120 | window_gray_camera = make_video_window('Video as Grayscale', (1000,200)) 121 | window_colorized_camera = make_video_window('Your Colorized Video', (1700,200)) 122 | playback_active = True 123 | else: 124 | playback_active = False 125 | window['-WEBCAM-'].update('Start Webcam', button_color=sg.theme_button_color()) 126 | window_raw_camera.close() 127 | window_gray_camera.close() 128 | window_colorized_camera.close() 129 | elif event == sg.TIMEOUT_EVENT and playback_active: 130 | ret, frame = cap.read() # Read a webcam frame 131 | 132 | # display raw image 133 | if window_raw_camera: 134 | window_raw_camera['-IMAGE-'].update(data=convert_cvt_to_data(frame)) 135 | # display gray image 136 | gray_3_channels = convert_to_grayscale(frame) 137 | if window_gray_camera: 138 | window_gray_camera['-IMAGE-'].update(data=convert_cvt_to_data(gray_3_channels)) 139 | # display colorized image 140 | if window_colorized_camera: 141 | window_colorized_camera['-IMAGE-'].update(data=convert_cvt_to_data(colorize_image(cv2_frame=gray_3_channels))) 142 | 143 | # if a window closed 144 | if event is None: 145 | if window == window_raw_camera: 146 | window_raw_camera.close() 147 | window_raw_camera = None 148 | elif window == window_gray_camera: 149 | window_gray_camera.close() 150 | window_gray_camera = None 151 | elif window == window_colorized_camera: 152 | window_colorized_camera.close() 153 | window_colorized_camera = None 154 | 155 | # If playback is active but all camera windows are closed, indicate it is no longer playing and change the button color 156 | if playback_active and window_colorized_camera is None and window_gray_camera is None and window_raw_camera is None: 
157 | playback_active = False 158 | window_start['-WEBCAM-'].update('Start Webcam', button_color=sg.theme_button_color()) 159 | 160 | # ----- Exit program ----- 161 | window.close() 162 | 163 | if __name__ == '__main__': 164 | main() 165 | 166 | -------------------------------------------------------------------------------- /model/colorization_deploy_v2.prototxt: -------------------------------------------------------------------------------- 1 | name: "LtoAB" 2 | 3 | layer { 4 | name: "data_l" 5 | type: "Input" 6 | top: "data_l" 7 | input_param { 8 | shape { dim: 1 dim: 1 dim: 224 dim: 224 } 9 | } 10 | } 11 | 12 | # ***************** 13 | # ***** conv1 ***** 14 | # ***************** 15 | layer { 16 | name: "bw_conv1_1" 17 | type: "Convolution" 18 | bottom: "data_l" 19 | top: "conv1_1" 20 | # param {lr_mult: 0 decay_mult: 0} 21 | # param {lr_mult: 0 decay_mult: 0} 22 | convolution_param { 23 | num_output: 64 24 | pad: 1 25 | kernel_size: 3 26 | } 27 | } 28 | layer { 29 | name: "relu1_1" 30 | type: "ReLU" 31 | bottom: "conv1_1" 32 | top: "conv1_1" 33 | } 34 | layer { 35 | name: "conv1_2" 36 | type: "Convolution" 37 | bottom: "conv1_1" 38 | top: "conv1_2" 39 | # param {lr_mult: 0 decay_mult: 0} 40 | # param {lr_mult: 0 decay_mult: 0} 41 | convolution_param { 42 | num_output: 64 43 | pad: 1 44 | kernel_size: 3 45 | stride: 2 46 | } 47 | } 48 | layer { 49 | name: "relu1_2" 50 | type: "ReLU" 51 | bottom: "conv1_2" 52 | top: "conv1_2" 53 | } 54 | layer { 55 | name: "conv1_2norm" 56 | type: "BatchNorm" 57 | bottom: "conv1_2" 58 | top: "conv1_2norm" 59 | batch_norm_param{ } 60 | param {lr_mult: 0 decay_mult: 0} 61 | param {lr_mult: 0 decay_mult: 0} 62 | param {lr_mult: 0 decay_mult: 0} 63 | } 64 | # ***************** 65 | # ***** conv2 ***** 66 | # ***************** 67 | layer { 68 | name: "conv2_1" 69 | type: "Convolution" 70 | # bottom: "conv1_2" 71 | bottom: "conv1_2norm" 72 | # bottom: "pool1" 73 | top: "conv2_1" 74 | # param {lr_mult: 0 decay_mult: 0} 75 | # param 
{lr_mult: 0 decay_mult: 0} 76 | convolution_param { 77 | num_output: 128 78 | pad: 1 79 | kernel_size: 3 80 | } 81 | } 82 | layer { 83 | name: "relu2_1" 84 | type: "ReLU" 85 | bottom: "conv2_1" 86 | top: "conv2_1" 87 | } 88 | layer { 89 | name: "conv2_2" 90 | type: "Convolution" 91 | bottom: "conv2_1" 92 | top: "conv2_2" 93 | # param {lr_mult: 0 decay_mult: 0} 94 | # param {lr_mult: 0 decay_mult: 0} 95 | convolution_param { 96 | num_output: 128 97 | pad: 1 98 | kernel_size: 3 99 | stride: 2 100 | } 101 | } 102 | layer { 103 | name: "relu2_2" 104 | type: "ReLU" 105 | bottom: "conv2_2" 106 | top: "conv2_2" 107 | } 108 | layer { 109 | name: "conv2_2norm" 110 | type: "BatchNorm" 111 | bottom: "conv2_2" 112 | top: "conv2_2norm" 113 | batch_norm_param{ } 114 | param {lr_mult: 0 decay_mult: 0} 115 | param {lr_mult: 0 decay_mult: 0} 116 | param {lr_mult: 0 decay_mult: 0} 117 | } 118 | # ***************** 119 | # ***** conv3 ***** 120 | # ***************** 121 | layer { 122 | name: "conv3_1" 123 | type: "Convolution" 124 | # bottom: "conv2_2" 125 | bottom: "conv2_2norm" 126 | # bottom: "pool2" 127 | top: "conv3_1" 128 | # param {lr_mult: 0 decay_mult: 0} 129 | # param {lr_mult: 0 decay_mult: 0} 130 | convolution_param { 131 | num_output: 256 132 | pad: 1 133 | kernel_size: 3 134 | } 135 | } 136 | layer { 137 | name: "relu3_1" 138 | type: "ReLU" 139 | bottom: "conv3_1" 140 | top: "conv3_1" 141 | } 142 | layer { 143 | name: "conv3_2" 144 | type: "Convolution" 145 | bottom: "conv3_1" 146 | top: "conv3_2" 147 | # param {lr_mult: 0 decay_mult: 0} 148 | # param {lr_mult: 0 decay_mult: 0} 149 | convolution_param { 150 | num_output: 256 151 | pad: 1 152 | kernel_size: 3 153 | } 154 | } 155 | layer { 156 | name: "relu3_2" 157 | type: "ReLU" 158 | bottom: "conv3_2" 159 | top: "conv3_2" 160 | } 161 | layer { 162 | name: "conv3_3" 163 | type: "Convolution" 164 | bottom: "conv3_2" 165 | top: "conv3_3" 166 | # param {lr_mult: 0 decay_mult: 0} 167 | # param {lr_mult: 0 decay_mult: 0} 168 
| convolution_param { 169 | num_output: 256 170 | pad: 1 171 | kernel_size: 3 172 | stride: 2 173 | } 174 | } 175 | layer { 176 | name: "relu3_3" 177 | type: "ReLU" 178 | bottom: "conv3_3" 179 | top: "conv3_3" 180 | } 181 | layer { 182 | name: "conv3_3norm" 183 | type: "BatchNorm" 184 | bottom: "conv3_3" 185 | top: "conv3_3norm" 186 | batch_norm_param{ } 187 | param {lr_mult: 0 decay_mult: 0} 188 | param {lr_mult: 0 decay_mult: 0} 189 | param {lr_mult: 0 decay_mult: 0} 190 | } 191 | # ***************** 192 | # ***** conv4 ***** 193 | # ***************** 194 | layer { 195 | name: "conv4_1" 196 | type: "Convolution" 197 | # bottom: "conv3_3" 198 | bottom: "conv3_3norm" 199 | # bottom: "pool3" 200 | top: "conv4_1" 201 | # param {lr_mult: 0 decay_mult: 0} 202 | # param {lr_mult: 0 decay_mult: 0} 203 | convolution_param { 204 | num_output: 512 205 | kernel_size: 3 206 | stride: 1 207 | pad: 1 208 | dilation: 1 209 | } 210 | } 211 | layer { 212 | name: "relu4_1" 213 | type: "ReLU" 214 | bottom: "conv4_1" 215 | top: "conv4_1" 216 | } 217 | layer { 218 | name: "conv4_2" 219 | type: "Convolution" 220 | bottom: "conv4_1" 221 | top: "conv4_2" 222 | # param {lr_mult: 0 decay_mult: 0} 223 | # param {lr_mult: 0 decay_mult: 0} 224 | convolution_param { 225 | num_output: 512 226 | kernel_size: 3 227 | stride: 1 228 | pad: 1 229 | dilation: 1 230 | } 231 | } 232 | layer { 233 | name: "relu4_2" 234 | type: "ReLU" 235 | bottom: "conv4_2" 236 | top: "conv4_2" 237 | } 238 | layer { 239 | name: "conv4_3" 240 | type: "Convolution" 241 | bottom: "conv4_2" 242 | top: "conv4_3" 243 | # param {lr_mult: 0 decay_mult: 0} 244 | # param {lr_mult: 0 decay_mult: 0} 245 | convolution_param { 246 | num_output: 512 247 | kernel_size: 3 248 | stride: 1 249 | pad: 1 250 | dilation: 1 251 | } 252 | } 253 | layer { 254 | name: "relu4_3" 255 | type: "ReLU" 256 | bottom: "conv4_3" 257 | top: "conv4_3" 258 | } 259 | layer { 260 | name: "conv4_3norm" 261 | type: "BatchNorm" 262 | bottom: "conv4_3" 263 | top: 
"conv4_3norm" 264 | batch_norm_param{ } 265 | param {lr_mult: 0 decay_mult: 0} 266 | param {lr_mult: 0 decay_mult: 0} 267 | param {lr_mult: 0 decay_mult: 0} 268 | } 269 | # ***************** 270 | # ***** conv5 ***** 271 | # ***************** 272 | layer { 273 | name: "conv5_1" 274 | type: "Convolution" 275 | # bottom: "conv4_3" 276 | bottom: "conv4_3norm" 277 | # bottom: "pool4" 278 | top: "conv5_1" 279 | # param {lr_mult: 0 decay_mult: 0} 280 | # param {lr_mult: 0 decay_mult: 0} 281 | convolution_param { 282 | num_output: 512 283 | kernel_size: 3 284 | stride: 1 285 | pad: 2 286 | dilation: 2 287 | } 288 | } 289 | layer { 290 | name: "relu5_1" 291 | type: "ReLU" 292 | bottom: "conv5_1" 293 | top: "conv5_1" 294 | } 295 | layer { 296 | name: "conv5_2" 297 | type: "Convolution" 298 | bottom: "conv5_1" 299 | top: "conv5_2" 300 | # param {lr_mult: 0 decay_mult: 0} 301 | # param {lr_mult: 0 decay_mult: 0} 302 | convolution_param { 303 | num_output: 512 304 | kernel_size: 3 305 | stride: 1 306 | pad: 2 307 | dilation: 2 308 | } 309 | } 310 | layer { 311 | name: "relu5_2" 312 | type: "ReLU" 313 | bottom: "conv5_2" 314 | top: "conv5_2" 315 | } 316 | layer { 317 | name: "conv5_3" 318 | type: "Convolution" 319 | bottom: "conv5_2" 320 | top: "conv5_3" 321 | # param {lr_mult: 0 decay_mult: 0} 322 | # param {lr_mult: 0 decay_mult: 0} 323 | convolution_param { 324 | num_output: 512 325 | kernel_size: 3 326 | stride: 1 327 | pad: 2 328 | dilation: 2 329 | } 330 | } 331 | layer { 332 | name: "relu5_3" 333 | type: "ReLU" 334 | bottom: "conv5_3" 335 | top: "conv5_3" 336 | } 337 | layer { 338 | name: "conv5_3norm" 339 | type: "BatchNorm" 340 | bottom: "conv5_3" 341 | top: "conv5_3norm" 342 | batch_norm_param{ } 343 | param {lr_mult: 0 decay_mult: 0} 344 | param {lr_mult: 0 decay_mult: 0} 345 | param {lr_mult: 0 decay_mult: 0} 346 | } 347 | # ***************** 348 | # ***** conv6 ***** 349 | # ***************** 350 | layer { 351 | name: "conv6_1" 352 | type: "Convolution" 353 | 
bottom: "conv5_3norm" 354 | top: "conv6_1" 355 | convolution_param { 356 | num_output: 512 357 | kernel_size: 3 358 | pad: 2 359 | dilation: 2 360 | } 361 | } 362 | layer { 363 | name: "relu6_1" 364 | type: "ReLU" 365 | bottom: "conv6_1" 366 | top: "conv6_1" 367 | } 368 | layer { 369 | name: "conv6_2" 370 | type: "Convolution" 371 | bottom: "conv6_1" 372 | top: "conv6_2" 373 | convolution_param { 374 | num_output: 512 375 | kernel_size: 3 376 | pad: 2 377 | dilation: 2 378 | } 379 | } 380 | layer { 381 | name: "relu6_2" 382 | type: "ReLU" 383 | bottom: "conv6_2" 384 | top: "conv6_2" 385 | } 386 | layer { 387 | name: "conv6_3" 388 | type: "Convolution" 389 | bottom: "conv6_2" 390 | top: "conv6_3" 391 | convolution_param { 392 | num_output: 512 393 | kernel_size: 3 394 | pad: 2 395 | dilation: 2 396 | } 397 | } 398 | layer { 399 | name: "relu6_3" 400 | type: "ReLU" 401 | bottom: "conv6_3" 402 | top: "conv6_3" 403 | } 404 | layer { 405 | name: "conv6_3norm" 406 | type: "BatchNorm" 407 | bottom: "conv6_3" 408 | top: "conv6_3norm" 409 | batch_norm_param{ } 410 | param {lr_mult: 0 decay_mult: 0} 411 | param {lr_mult: 0 decay_mult: 0} 412 | param {lr_mult: 0 decay_mult: 0} 413 | } 414 | # ***************** 415 | # ***** conv7 ***** 416 | # ***************** 417 | layer { 418 | name: "conv7_1" 419 | type: "Convolution" 420 | bottom: "conv6_3norm" 421 | top: "conv7_1" 422 | convolution_param { 423 | num_output: 512 424 | kernel_size: 3 425 | pad: 1 426 | dilation: 1 427 | } 428 | } 429 | layer { 430 | name: "relu7_1" 431 | type: "ReLU" 432 | bottom: "conv7_1" 433 | top: "conv7_1" 434 | } 435 | layer { 436 | name: "conv7_2" 437 | type: "Convolution" 438 | bottom: "conv7_1" 439 | top: "conv7_2" 440 | convolution_param { 441 | num_output: 512 442 | kernel_size: 3 443 | pad: 1 444 | dilation: 1 445 | } 446 | } 447 | layer { 448 | name: "relu7_2" 449 | type: "ReLU" 450 | bottom: "conv7_2" 451 | top: "conv7_2" 452 | } 453 | layer { 454 | name: "conv7_3" 455 | type: "Convolution" 
  bottom: "conv7_2"
  top: "conv7_3"
  convolution_param {
    num_output: 512
    kernel_size: 3
    pad: 1
    dilation: 1
  }
}
layer {
  name: "relu7_3"
  type: "ReLU"
  bottom: "conv7_3"
  top: "conv7_3"
}
layer {
  name: "conv7_3norm"
  type: "BatchNorm"
  bottom: "conv7_3"
  top: "conv7_3norm"
  batch_norm_param{ }
  param {lr_mult: 0 decay_mult: 0}
  param {lr_mult: 0 decay_mult: 0}
  param {lr_mult: 0 decay_mult: 0}
}
# *****************
# ***** conv8 *****
# *****************
layer {
  name: "conv8_1"
  type: "Deconvolution"
  bottom: "conv7_3norm"
  top: "conv8_1"
  convolution_param {
    num_output: 256
    kernel_size: 4
    pad: 1
    dilation: 1
    stride: 2
  }
}
layer {
  name: "relu8_1"
  type: "ReLU"
  bottom: "conv8_1"
  top: "conv8_1"
}
layer {
  name: "conv8_2"
  type: "Convolution"
  bottom: "conv8_1"
  top: "conv8_2"
  convolution_param {
    num_output: 256
    kernel_size: 3
    pad: 1
    dilation: 1
  }
}
layer {
  name: "relu8_2"
  type: "ReLU"
  bottom: "conv8_2"
  top: "conv8_2"
}
layer {
  name: "conv8_3"
  type: "Convolution"
  bottom: "conv8_2"
  top: "conv8_3"
  convolution_param {
    num_output: 256
    kernel_size: 3
    pad: 1
    dilation: 1
  }
}
layer {
  name: "relu8_3"
  type: "ReLU"
  bottom: "conv8_3"
  top: "conv8_3"
}
# *******************
# ***** Softmax *****
# *******************
layer {
  name: "conv8_313"
  type: "Convolution"
  bottom: "conv8_3"
  top: "conv8_313"
  convolution_param {
    num_output: 313
    kernel_size: 1
    stride: 1
    dilation: 1
  }
}
layer {
  name: "conv8_313_rh"
  type: "Scale"
  bottom: "conv8_313"
  top: "conv8_313_rh"
  scale_param {
    bias_term: false
    filler { type: 'constant' value: 2.606 }
  }
}
layer {
  name: "class8_313_rh"
  type: "Softmax"
  bottom: "conv8_313_rh"
  top: "class8_313_rh"
}
# ********************
# ***** Decoding *****
# ********************
layer {
  name: "class8_ab"
  type: "Convolution"
  bottom: "class8_313_rh"
  top: "class8_ab"
  convolution_param {
    num_output: 2
    kernel_size: 1
    stride: 1
    dilation: 1
  }
}
layer {
  name: "Silence"
  type: "Silence"
  bottom: "class8_ab"
}
--------------------------------------------------------------------------------
/model/pts_in_hull.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PySimpleGUI/PySimpleGUI-Photo-Colorizer/fcd37370e7ec274ab2d7f7f13d7072912c78e89b/model/pts_in_hull.npy
--------------------------------------------------------------------------------
/model/readme.md:
--------------------------------------------------------------------------------
In order to run the demo, you will first need to download the pre-trained model data from the link below. Place the file in the folder with this readme.

https://www.dropbox.com/s/dx0qvhhp5hbcx7z/colorization_release_v2.caffemodel?dl=1
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
![pysimplegui_logo](https://user-images.githubusercontent.com/13696193/43165867-fe02e3b2-8f62-11e8-9fd0-cc7c86b11772.png)

# Photo Colorization Using Deep Learning

## Complete Python Application With GUI Using PySimpleGUI

### ___Important___

In order to run the demo, you will first need to download the pre-trained model data from the link below. At 125 MB, it is too large to store in this GitHub repo. Place the file in the model folder.

https://www.dropbox.com/s/dx0qvhhp5hbcx7z/colorization_release_v2.caffemodel?dl=1

-----------------

![SNAG-0613](https://user-images.githubusercontent.com/46163555/71523947-43c03a00-2899-11ea-8943-e8db1347c7f5.jpg)
![SNAG-0604](https://user-images.githubusercontent.com/46163555/71523948-4458d080-2899-11ea-8a8a-d54fbf39c9b8.jpg)

-----------------

## The Zhang Algorithm

The colorization algorithm was developed by Zhang et al. and is detailed here:

http://richzhang.github.io/colorization/

## PyImageSearch

The code implementing the algorithm is explained in a well-written tutorial by the fine folks at PyImageSearch:

https://www.pyimagesearch.com/2019/02/25/black-and-white-image-colorization-with-opencv-and-deep-learning/


## Using the GUI

To use the GUI you'll need to install PySimpleGUI (see http://www.PySimpleGUI.org for instructions).

One of these commands will install it for you:
```
pip install PySimpleGUI
pip3 install PySimpleGUI
```

Then run the demo program using either `python` or `python3`, depending on your system:

```
python PySimpleGUI_Colorizer.py
python3 PySimpleGUI_Colorizer.py
```

### You have 2 options for choosing the image to colorize.

#### Folder View

If you choose a folder in the left column, then a list of files will be shown. Clicking on a file will "Preview" the image on the right side. Either copy and paste a path into the input box in the upper left corner, or use the `Browse` button to browse for a folder.

![SNAG-0627](https://user-images.githubusercontent.com/46163555/71523944-43c03a00-2899-11ea-8dea-a3be3bfc13ca.jpg)

#### Individual File

You can also choose an individual file using the input box in the upper right. Either paste a filename into the box or use the `Browse` button to choose one.

### Webcam

Press the `Start Webcam` button to see yourself colorized in real time. It's not super fast, but it does function.

Press the `Stop Webcam` button to stop.

## Saving The Color Image

To save your image, simply press the `Save File` button and enter your filename.


-------------------------------

Here is more eye candy, courtesy of deep learning:




![SNAG-0628](https://user-images.githubusercontent.com/46163555/71523943-4327a380-2899-11ea-95b7-a2892f611109.jpg)

![SNAG-0626](https://user-images.githubusercontent.com/46163555/71523945-43c03a00-2899-11ea-8bf2-ee6ac2216286.jpg)

![SNAG-0620](https://user-images.githubusercontent.com/46163555/71523946-43c03a00-2899-11ea-9f25-2f2b2c882ad3.jpg)


-----------------------------------

# Webcam Multi-Window Demo

In July 2020, a new demo was added that uses the new multi-window support (released to GitHub only at this point). This demo shows 3 video windows:

1. Your webcam's raw video stream
2. A grayscale version of the video
3. A fully colorized version of the grayscale video

Here's a screenshot to give you a rough idea of what to expect from the demo. The colors likely didn't do so well in this specific shot, as there was a lot of background lighting.

![SNAG-0881](https://user-images.githubusercontent.com/46163555/88486988-9e189a80-cf4f-11ea-8dc7-727b7539bab9.jpg)


You will need to use the PySimpleGUI.py file from the project's GitHub repo (http://www.PySimpleGUI.com). The minimum version is 4.26.0.13.
--------------------------------------------------------------------------------
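The last layers of colorization_deploy_v2.prototxt (the Scale, Softmax, and class8_ab convolution in the "Decoding" section) implement the annealed-mean decoding from the Zhang paper: the raw 313-bin color logits are divided by a softmax temperature (the Scale layer's constant 2.606 is roughly 1/0.38, the temperature used in the paper), converted to a probability distribution, and averaged against the quantized ab bin centers stored in pts_in_hull.npy. A rough NumPy sketch of that decoding step follows; the shapes and the random stand-in data are illustrative only, not loaded from the actual model files.

```python
import numpy as np

def decode_ab(logits, bin_centers, temperature=0.38):
    """Annealed-mean decoding: Scale -> Softmax -> 1x1 conv with bin centers.

    logits:      (H, W, 313) raw conv8_313 outputs
    bin_centers: (313, 2) ab values of the color bins (pts_in_hull.npy)
    returns:     (H, W, 2) predicted ab channels
    """
    z = logits / temperature                 # the Scale layer (1/0.38 ~= 2.606)
    z = z - z.max(axis=-1, keepdims=True)    # shift for numerical stability
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)       # the Softmax layer
    return p @ bin_centers                   # the 1x1 "class8_ab" convolution

# Toy example with random stand-in data (the real bin centers come from
# pts_in_hull.npy and the real logits from the Caffe net's forward pass).
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 313))
centers = rng.uniform(-110, 110, size=(313, 2))
ab = decode_ab(logits, centers)
print(ab.shape)  # (4, 4, 2)
```

In the full pipeline (as described in the PyImageSearch tutorial linked above), the resulting ab channels are resized back to the input resolution and stacked with the original L channel before converting Lab back to BGR.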