├── README.md
├── Results
│   └── Results.png
└── SRGAN_Video_implementation.ipynb

/README.md:
--------------------------------------------------------------------------------
1 | # SRGAN Based Video-Enhancement-using-Single-Image-Super-Resolution
2 | 
3 | In this project, we use a pretrained Super-Resolution Generative Adversarial Network (SRGAN) model to perform video enhancement via single-image super-resolution. The model takes a low-resolution video as input and produces a high-resolution video as output. It has been validated on videos of different genres at different quality levels. The process takes six simple steps, from converting the input video into low-resolution frames to assembling the processed high-resolution frames back into the output video.
4 | ## STEP-1
5 | Clone the following repository, which contains the pretrained SRGAN model, in your notebook:-
6 | 
7 | ``` !git clone https://github.com/krasserm/super-resolution ```
8 | 
9 | ## STEP-2
10 | Change into the cloned super-resolution directory by using the command given below:-
11 | 
12 | ```cd /content/super-resolution```
13 | 
14 | ## STEP-3
15 | The pretrained weights required for running the model can be downloaded from the link given below:-
16 | 
17 | [weights-srgan.tar.gz](https://drive.google.com/file/d/1ZKpQvtxLKKq2fM1gKtl085pgHSgSQSBw/view?usp=sharing)
18 | 
19 | After downloading the pretrained weights, upload them to your notebook and then run the command below to extract the weights into the root folder:-
20 | ```!tar xvfz /content/weights-srgan.tar.gz```
21 | 
22 | 
23 | ## STEP-4
24 | To perform video enhancement, the input video should be converted into frames, and the model can then be used to obtain super-resolved frames. This can be done using the Python code given below.
25 | 
26 | ```python
27 | # Importing all necessary libraries
28 | import timeit
29 | import cv2
30 | import os
31 | import numpy as np
32 | from model import resolve_single
33 | from utils import load_image, plot_sample
34 | from model.srgan import generator
35 | 
36 | # Read the video from the specified path
37 | cam = cv2.VideoCapture("/content/Drama144p_input.3gp")
38 | fps = cam.get(cv2.CAP_PROP_FPS)
39 | print(fps)
40 | 
41 | 
42 | try:
43 | 
44 |     # creating a folder named data
45 |     if not os.path.exists('data'):
46 |         os.makedirs('data')
47 | 
48 | # if not created then raise error
49 | except OSError:
50 |     print('Error: Creating directory of data')
51 | 
52 | # Frame extraction from video
53 | currentframe = 0
54 | arr_img = []
55 | while True:
56 | 
57 |     # reading from frame
58 |     ret, frame = cam.read()
59 | 
60 |     if ret:
61 |         # if video is still left continue creating images
62 |         name = './data/frame' + str(currentframe).zfill(3) + '.jpg'
63 |         print('Creating...' + name)
64 | 
65 |         # writing the extracted images
66 |         cv2.imwrite(name, frame)
67 | 
68 |         # increasing counter so that it will show how many frames are created
69 |         currentframe += 1
70 |         # storing the path of extracted frames in a list
71 |         arr_img.append(name)
72 |     else:
73 |         break
74 | #print(arr_img)
75 | 
76 | start = timeit.default_timer()
77 | model = generator()
78 | model.load_weights('weights/srgan/gan_generator.h5')
79 | 
80 | # Initialization of an empty list to store the super-resolved images
81 | arr_output = []
82 | print(len(arr_img))
83 | n = len(arr_img)
84 | 
85 | # Applying the SRGAN model to the extracted frames
86 | for i in range(n):
87 |     lr = load_image(arr_img[i])
88 |     sr = resolve_single(model, lr)
89 |     #plot_sample(lr, sr)
90 | 
91 |     arr_output.append(sr)
92 | stop = timeit.default_timer()
93 | #print(arr_output)
94 | 
95 | print("time : ", stop-start)
96 | 
97 | # Release all space and windows once done
98 | cam.release()
99 | cv2.destroyAllWindows()
100 | ```
101 | 
102 | Here we have
attached a sample image that shows the model applied to an input frame-
103 | 
104 | ![Results](Results/Results.png)
105 | 
106 | ## STEP-5
107 | Run the Python code given below to save the super-resolved frames to a folder and to store their output paths in a list-
108 | 
109 | ```python
110 | # Importing necessary libraries
111 | from keras.preprocessing.image import load_img
112 | from keras.preprocessing.image import img_to_array
113 | from keras.preprocessing.image import array_to_img
114 | from keras.preprocessing.image import save_img
115 | 
116 | # Making a directory for storing super-resolved frames in image format
117 | os.makedirs("output_images")
118 | 
119 | # Initialization of an empty list to store the paths of super-resolved frames
120 | s_res = []
121 | for j in range(len(arr_output)):
122 |     out_name = '/content/super-resolution/output_images/frame' + str(j).zfill(3) + '.jpg'
123 |     img_pil = array_to_img(arr_output[j])
124 |     img1 = save_img(out_name, img_pil)
125 |     s_res.append(out_name)
126 | 
127 | #print(s_res)
128 | ```
129 | 
130 | ## STEP-6
131 | Run the Python code given below to convert the super-resolved frames into a video-
132 | 
133 | ```python
134 | import cv2
135 | import numpy as np
136 | for i in range(len(s_res)):
137 |     filename = s_res[i]
138 |     # reading each file
139 |     img = cv2.imread(filename)
140 |     height, width, layers = img.shape
141 |     size = (width, height)
142 | 
143 | fps = 20  # Set the fps value as convenient (or reuse the fps read in STEP-4), or
144 |           # calculate it as (number of frames) / (video duration in seconds)
145 | 
146 | # Creation of the output video
147 | out = cv2.VideoWriter('drama2_output.mp4', cv2.VideoWriter_fourcc(*'DIVX'), fps, size)
148 | 
149 | # Writing frames into the video
150 | for i in range(len(s_res)):
151 |     out.write(cv2.imread(s_res[i]))
152 | out.release()
153 | ```
154 | 
155 | # Final Results -
156 | The link below provides the results obtained for video enhancement using single-image super-resolution with the SRGAN model.
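As a quick sanity check on these results: the pretrained SRGAN generator upscales by a fixed 4x factor, so the expected output resolution can be computed directly from the input frame size. A minimal sketch (the 256x144 input size is an illustrative assumption for a 144p clip, not measured from the source video):

```python
def srgan_output_size(width, height, scale=4):
    """Expected frame size after SRGAN super-resolution.

    The pretrained SRGAN generator upscales by a fixed 4x factor;
    the 256x144 example below is an assumption for a 144p input.
    """
    return (width * scale, height * scale)

print(srgan_output_size(256, 144))  # (1024, 576)
```

Comparing this against the `img.shape` of a saved frame in `output_images` confirms the enhancement factor.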
157 | 158 | [Video Results](https://drive.google.com/drive/folders/1NiyJCLsB_-pAmFJNF97QhZiho7zPLMCw?usp=sharing) 159 | 160 | 161 | -------------------------------------------------------------------------------- /Results/Results.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Abhishank005/SRGAN-Based-Video-Enhancement-using-Single-Image-Super-Resolution/2652a279bfbf76ac9014f7eaab95b84ec68772cc/Results/Results.png -------------------------------------------------------------------------------- /SRGAN_Video_implementation.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "nbformat": 4, 3 | "nbformat_minor": 0, 4 | "metadata": { 5 | "colab": { 6 | "name": "SRGAN.ipynb", 7 | "provenance": [], 8 | "collapsed_sections": [] 9 | }, 10 | "kernelspec": { 11 | "name": "python3", 12 | "display_name": "Python 3" 13 | }, 14 | "accelerator": "GPU" 15 | }, 16 | "cells": [ 17 | { 18 | "cell_type": "code", 19 | "metadata": { 20 | "id": "8A1wjeU8t_lI", 21 | "colab_type": "code", 22 | "colab": { 23 | "base_uri": "https://localhost:8080/", 24 | "height": 105 25 | }, 26 | "outputId": "667ca559-2e60-4c66-f58a-bbfb340f57db" 27 | }, 28 | "source": [ 29 | "!git clone https://github.com/krasserm/super-resolution #Cloning a Pretrained Model" 30 | ], 31 | "execution_count": 2, 32 | "outputs": [ 33 | { 34 | "output_type": "stream", 35 | "text": [ 36 | "Cloning into 'super-resolution'...\n", 37 | "remote: Enumerating objects: 385, done.\u001b[K\n", 38 | "remote: Total 385 (delta 0), reused 0 (delta 0), pack-reused 385\u001b[K\n", 39 | "Receiving objects: 100% (385/385), 47.38 MiB | 8.75 MiB/s, done.\n", 40 | "Resolving deltas: 100% (200/200), done.\n" 41 | ], 42 | "name": "stdout" 43 | } 44 | ] 45 | }, 46 | { 47 | "cell_type": "code", 48 | "metadata": { 49 | "id": "qlrrTdrQuJq4", 50 | "colab_type": "code", 51 | "colab": { 52 | "base_uri": "https://localhost:8080/", 53 | 
"height": 34 54 | }, 55 | "outputId": "7463366a-18fc-4905-8a0b-f6cb5ad15f9f" 56 | }, 57 | "source": [ 58 | "cd /content/super-resolution #Creation of Directory" 59 | ], 60 | "execution_count": 3, 61 | "outputs": [ 62 | { 63 | "output_type": "stream", 64 | "text": [ 65 | "/content/super-resolution\n" 66 | ], 67 | "name": "stdout" 68 | } 69 | ] 70 | }, 71 | { 72 | "cell_type": "code", 73 | "metadata": { 74 | "id": "vLF35wOCuX5a", 75 | "colab_type": "code", 76 | "colab": { 77 | "base_uri": "https://localhost:8080/", 78 | "height": 70 79 | }, 80 | "outputId": "b0e54e4e-8f6b-4645-b91f-ff7f00404085" 81 | }, 82 | "source": [ 83 | "!tar xvfz /content/weights-srgan.tar.gz #Loading weights" 84 | ], 85 | "execution_count": 4, 86 | "outputs": [ 87 | { 88 | "output_type": "stream", 89 | "text": [ 90 | "weights/srgan/gan_discriminator.h5\n", 91 | "weights/srgan/gan_generator.h5\n", 92 | "weights/srgan/pre_generator.h5\n" 93 | ], 94 | "name": "stdout" 95 | } 96 | ] 97 | }, 98 | { 99 | "cell_type": "markdown", 100 | "metadata": { 101 | "id": "DBLk0lNyLXlD", 102 | "colab_type": "text" 103 | }, 104 | "source": [ 105 | "# **SRGAN Model Implementation :-**\n", 106 | "\n", 107 | "---\n", 108 | "\n", 109 | "\n", 110 | "\n", 111 | "---\n", 112 | "\n" 113 | ] 114 | }, 115 | { 116 | "cell_type": "code", 117 | "metadata": { 118 | "id": "mkBU5g7C42BB", 119 | "colab_type": "code", 120 | "colab": { 121 | "base_uri": "https://localhost:8080/", 122 | "height": 1000 123 | }, 124 | "outputId": "cbd36a78-e834-44ae-809d-3b6ca3584145" 125 | }, 126 | "source": [ 127 | "# Importing all necessary libraries \n", 128 | "import timeit\n", 129 | "import cv2 \n", 130 | "import os\n", 131 | "import numpy as np\n", 132 | "from model import resolve_single\n", 133 | "from utils import load_image, plot_sample\n", 134 | "from model.srgan import generator\n", 135 | "\n", 136 | "# Read the video from specified path \n", 137 | "cam = cv2.VideoCapture(\"/content/Drama144p_input.3gp\") \n", 138 | "fps = 
cam.get(cv2.CAP_PROP_FPS)\n", 139 | "print(fps)\n", 140 | "\n", 141 | "\n", 142 | "try:\n", 143 | " \n", 144 | " # Creating a folder named data \n", 145 | " if not os.path.exists('data'): \n", 146 | " os.makedirs('data') \n", 147 | " \n", 148 | "# if not created then raise error \n", 149 | "except OSError:\n", 150 | " print ('Error: Creating directory of data') \n", 151 | " \n", 152 | "#frames Extraction from video \n", 153 | "currentframe = 0\n", 154 | "arr_img = []\n", 155 | "while(True): \n", 156 | " \n", 157 | " # reading from frame \n", 158 | " ret,frame = cam.read() \n", 159 | " \n", 160 | " if ret: \n", 161 | " # if video is still left continue creating images \n", 162 | " name = './data/frame' + str(currentframe).zfill(3) + '.jpg'\n", 163 | " print ('Creating...' + name) \n", 164 | " \n", 165 | " # writing the extracted images \n", 166 | " cv2.imwrite(name, frame) \n", 167 | " \n", 168 | " # increasing counter so that it will show how many frames are created \n", 169 | " currentframe += 1\n", 170 | " #storing the path of extracted frames in a list\n", 171 | " arr_img.append(name)\n", 172 | " else: \n", 173 | " break\n", 174 | "#print(arr_img)\n", 175 | "\n", 176 | "start = timeit.default_timer()\n", 177 | "model = generator()\n", 178 | "model.load_weights('weights/srgan/gan_generator.h5')\n", 179 | "\n", 180 | "#Initialization of an empty list to store the super resolved images\n", 181 | "arr_output=[]\n", 182 | "print(len(arr_img))\n", 183 | "n= len(arr_img)\n", 184 | "\n", 185 | "#Implementation of SRGAN Model in extracted frames\n", 186 | "for i in range(n):\n", 187 | " lr = load_image(arr_img[i])\n", 188 | " sr = resolve_single(model, lr)\n", 189 | " #plot_sample(lr, sr)\n", 190 | " \n", 191 | " arr_output.append(sr)\n", 192 | "stop = timeit.default_timer()\n", 193 | "#print(arr_output)\n", 194 | "\n", 195 | "print(\"time : \", stop-start)\n", 196 | "\n", 197 | "# Release all space and windows once done \n", 198 | "cam.release() \n", 199 | 
"cv2.destroyAllWindows()" 200 | ], 201 | "execution_count": 8, 202 | "outputs": [ 203 | { 204 | "output_type": "stream", 205 | "text": [ 206 | "20.0\n", 207 | "Creating..../data/frame000.jpg\n", 208 | "Creating..../data/frame001.jpg\n", 209 | "Creating..../data/frame002.jpg\n", 210 | "Creating..../data/frame003.jpg\n", 211 | "Creating..../data/frame004.jpg\n", 212 | "Creating..../data/frame005.jpg\n", 213 | "Creating..../data/frame006.jpg\n", 214 | "Creating..../data/frame007.jpg\n", 215 | "Creating..../data/frame008.jpg\n", 216 | "Creating..../data/frame009.jpg\n", 217 | "Creating..../data/frame010.jpg\n", 218 | "Creating..../data/frame011.jpg\n", 219 | "Creating..../data/frame012.jpg\n", 220 | "Creating..../data/frame013.jpg\n", 221 | "Creating..../data/frame014.jpg\n", 222 | "Creating..../data/frame015.jpg\n", 223 | "Creating..../data/frame016.jpg\n", 224 | "Creating..../data/frame017.jpg\n", 225 | "Creating..../data/frame018.jpg\n", 226 | "Creating..../data/frame019.jpg\n", 227 | "Creating..../data/frame020.jpg\n", 228 | "Creating..../data/frame021.jpg\n", 229 | "Creating..../data/frame022.jpg\n", 230 | "Creating..../data/frame023.jpg\n", 231 | "Creating..../data/frame024.jpg\n", 232 | "Creating..../data/frame025.jpg\n", 233 | "Creating..../data/frame026.jpg\n", 234 | "Creating..../data/frame027.jpg\n", 235 | "Creating..../data/frame028.jpg\n", 236 | "Creating..../data/frame029.jpg\n", 237 | "Creating..../data/frame030.jpg\n", 238 | "Creating..../data/frame031.jpg\n", 239 | "Creating..../data/frame032.jpg\n", 240 | "Creating..../data/frame033.jpg\n", 241 | "Creating..../data/frame034.jpg\n", 242 | "Creating..../data/frame035.jpg\n", 243 | "Creating..../data/frame036.jpg\n", 244 | "Creating..../data/frame037.jpg\n", 245 | "Creating..../data/frame038.jpg\n", 246 | "Creating..../data/frame039.jpg\n", 247 | "Creating..../data/frame040.jpg\n", 248 | "Creating..../data/frame041.jpg\n", 249 | "Creating..../data/frame042.jpg\n", 250 | 
"Creating..../data/frame043.jpg\n", 251 | "Creating..../data/frame044.jpg\n", 252 | "Creating..../data/frame045.jpg\n", 253 | "Creating..../data/frame046.jpg\n", 254 | "Creating..../data/frame047.jpg\n", 255 | "Creating..../data/frame048.jpg\n", 256 | "Creating..../data/frame049.jpg\n", 257 | "Creating..../data/frame050.jpg\n", 258 | "Creating..../data/frame051.jpg\n", 259 | "Creating..../data/frame052.jpg\n", 260 | "Creating..../data/frame053.jpg\n", 261 | "Creating..../data/frame054.jpg\n", 262 | "Creating..../data/frame055.jpg\n", 263 | "Creating..../data/frame056.jpg\n", 264 | "Creating..../data/frame057.jpg\n", 265 | "Creating..../data/frame058.jpg\n", 266 | "Creating..../data/frame059.jpg\n", 267 | "Creating..../data/frame060.jpg\n", 268 | "Creating..../data/frame061.jpg\n", 269 | "Creating..../data/frame062.jpg\n", 270 | "Creating..../data/frame063.jpg\n", 271 | "Creating..../data/frame064.jpg\n", 272 | "Creating..../data/frame065.jpg\n", 273 | "Creating..../data/frame066.jpg\n", 274 | "Creating..../data/frame067.jpg\n", 275 | "Creating..../data/frame068.jpg\n", 276 | "Creating..../data/frame069.jpg\n", 277 | "Creating..../data/frame070.jpg\n", 278 | "Creating..../data/frame071.jpg\n", 279 | "Creating..../data/frame072.jpg\n", 280 | "Creating..../data/frame073.jpg\n", 281 | "Creating..../data/frame074.jpg\n", 282 | "Creating..../data/frame075.jpg\n", 283 | "Creating..../data/frame076.jpg\n", 284 | "Creating..../data/frame077.jpg\n", 285 | "Creating..../data/frame078.jpg\n", 286 | "Creating..../data/frame079.jpg\n", 287 | "Creating..../data/frame080.jpg\n", 288 | "Creating..../data/frame081.jpg\n", 289 | "Creating..../data/frame082.jpg\n", 290 | "Creating..../data/frame083.jpg\n", 291 | "Creating..../data/frame084.jpg\n", 292 | "Creating..../data/frame085.jpg\n", 293 | "Creating..../data/frame086.jpg\n", 294 | "Creating..../data/frame087.jpg\n", 295 | "Creating..../data/frame088.jpg\n", 296 | "Creating..../data/frame089.jpg\n", 297 | 
"Creating..../data/frame090.jpg\n", 298 | "Creating..../data/frame091.jpg\n", 299 | "Creating..../data/frame092.jpg\n", 300 | "Creating..../data/frame093.jpg\n", 301 | "Creating..../data/frame094.jpg\n", 302 | "Creating..../data/frame095.jpg\n", 303 | "Creating..../data/frame096.jpg\n", 304 | "Creating..../data/frame097.jpg\n", 305 | "Creating..../data/frame098.jpg\n", 306 | "Creating..../data/frame099.jpg\n", 307 | "Creating..../data/frame100.jpg\n", 308 | "Creating..../data/frame101.jpg\n", 309 | "Creating..../data/frame102.jpg\n", 310 | "Creating..../data/frame103.jpg\n", 311 | "Creating..../data/frame104.jpg\n", 312 | "Creating..../data/frame105.jpg\n", 313 | "Creating..../data/frame106.jpg\n", 314 | "Creating..../data/frame107.jpg\n", 315 | "Creating..../data/frame108.jpg\n", 316 | "Creating..../data/frame109.jpg\n", 317 | "Creating..../data/frame110.jpg\n", 318 | "Creating..../data/frame111.jpg\n", 319 | "Creating..../data/frame112.jpg\n", 320 | "Creating..../data/frame113.jpg\n", 321 | "Creating..../data/frame114.jpg\n", 322 | "Creating..../data/frame115.jpg\n", 323 | "Creating..../data/frame116.jpg\n", 324 | "Creating..../data/frame117.jpg\n", 325 | "Creating..../data/frame118.jpg\n", 326 | "Creating..../data/frame119.jpg\n", 327 | "Creating..../data/frame120.jpg\n", 328 | "Creating..../data/frame121.jpg\n", 329 | "Creating..../data/frame122.jpg\n", 330 | "Creating..../data/frame123.jpg\n", 331 | "Creating..../data/frame124.jpg\n", 332 | "Creating..../data/frame125.jpg\n", 333 | "Creating..../data/frame126.jpg\n", 334 | "Creating..../data/frame127.jpg\n", 335 | "Creating..../data/frame128.jpg\n", 336 | "Creating..../data/frame129.jpg\n", 337 | "Creating..../data/frame130.jpg\n", 338 | "Creating..../data/frame131.jpg\n", 339 | "Creating..../data/frame132.jpg\n", 340 | "Creating..../data/frame133.jpg\n", 341 | "Creating..../data/frame134.jpg\n", 342 | "Creating..../data/frame135.jpg\n", 343 | "Creating..../data/frame136.jpg\n", 344 | 
"Creating..../data/frame137.jpg\n", 345 | "Creating..../data/frame138.jpg\n", 346 | "Creating..../data/frame139.jpg\n", 347 | "Creating..../data/frame140.jpg\n", 348 | "Creating..../data/frame141.jpg\n", 349 | "Creating..../data/frame142.jpg\n", 350 | "Creating..../data/frame143.jpg\n", 351 | "Creating..../data/frame144.jpg\n", 352 | "Creating..../data/frame145.jpg\n", 353 | "Creating..../data/frame146.jpg\n", 354 | "Creating..../data/frame147.jpg\n", 355 | "Creating..../data/frame148.jpg\n", 356 | "Creating..../data/frame149.jpg\n", 357 | "Creating..../data/frame150.jpg\n", 358 | "Creating..../data/frame151.jpg\n", 359 | "Creating..../data/frame152.jpg\n", 360 | "Creating..../data/frame153.jpg\n", 361 | "Creating..../data/frame154.jpg\n", 362 | "Creating..../data/frame155.jpg\n", 363 | "Creating..../data/frame156.jpg\n", 364 | "Creating..../data/frame157.jpg\n", 365 | "Creating..../data/frame158.jpg\n", 366 | "Creating..../data/frame159.jpg\n", 367 | "Creating..../data/frame160.jpg\n", 368 | "161\n", 369 | "time : 21.333814638999684\n" 370 | ], 371 | "name": "stdout" 372 | } 373 | ] 374 | }, 375 | { 376 | "cell_type": "markdown", 377 | "metadata": { 378 | "id": "62ZtxImOJled", 379 | "colab_type": "text" 380 | }, 381 | "source": [ 382 | "# **Saving the Super Resolved Frames :-**\n", 383 | "The super resolved frames are in numpy array format, so here we will change them in an image format." 
384 | ] 385 | }, 386 | { 387 | "cell_type": "code", 388 | "metadata": { 389 | "id": "ZHFzAIHUk9N2", 390 | "colab_type": "code", 391 | "colab": {} 392 | }, 393 | "source": [ 394 | "#Importing necessary libraries\n", 395 | "from keras.preprocessing.image import load_img\n", 396 | "from keras.preprocessing.image import img_to_array\n", 397 | "from keras.preprocessing.image import array_to_img\n", 398 | "from keras.preprocessing.image import save_img\n", 399 | "\n", 400 | "#Making a directory for storing super resolved frames in image format\n", 401 | "os.makedirs(\"output_images\")\n", 402 | "\n", 403 | "#Initialization of an empty list to store the path of Super resolved frames\n", 404 | "s_res= []\n", 405 | "for j in range(len(arr_output)):\n", 406 | " out_name = '/content/super-resolution/output_images/frame' + str(j).zfill(3) + '.jpg'\n", 407 | " img_pil = array_to_img(arr_output[j])\n", 408 | " img1 = save_img(out_name, img_pil)\n", 409 | " s_res.append(out_name)\n", 410 | " \n", 411 | "#print(s_res)" 412 | ], 413 | "execution_count": 11, 414 | "outputs": [] 415 | }, 416 | { 417 | "cell_type": "markdown", 418 | "metadata": { 419 | "id": "rAf8BC72W_Yo", 420 | "colab_type": "text" 421 | }, 422 | "source": [ 423 | "# **Conversion of Super Resolved frames into a video**" 424 | ] 425 | }, 426 | { 427 | "cell_type": "code", 428 | "metadata": { 429 | "id": "nIf9j6qJHU_y", 430 | "colab_type": "code", 431 | "colab": {} 432 | }, 433 | "source": [ 434 | "import cv2\n", 435 | "import numpy as np\n", 436 | "for i in range(len(s_res)):\n", 437 | " filename=s_res[i]\n", 438 | " #reading each files\n", 439 | " img = cv2.imread(filename)\n", 440 | " height, width, layers = img.shape\n", 441 | " size = (width,height)\n", 442 | "\n", 443 | "fps = 20 #Put the fps value as your convenience or \n", 444 | " #Calculate by using (No. 
of frames)/Video_duration in seconds \n", 445 | "\n", 446 | "#Creation of output video \n", 447 | "out = cv2.VideoWriter('drama2_output.mp4',cv2.VideoWriter_fourcc(*'DIVX'), fps , size)\n", 448 | "\n", 449 | "#Writing Frames into video\n", 450 | "for i in range(len(s_res)):\n", 451 | " out.write(cv2.imread(s_res[i]))\n", 452 | "out.release()" 453 | ], 454 | "execution_count": 23, 455 | "outputs": [] 456 | } 457 | ] 458 | } --------------------------------------------------------------------------------