├── README.md
└── final.ipynb

/README.md:
--------------------------------------------------------------------------------
# Detection-of-face-Manipulated-videos
This is the implementation code for detecting manipulated facial images using the FaceForensics++ data.

For detailed documentation, please refer to the following Medium post:

https://medium.com/analytics-vidhya/detection-of-face-manipulated-videos-using-deep-learning-6bca870f3a6a?source=friends_link&sk=46e2f2beb6415cba6850cb66bc0bcd43


## Problem Definition
There is a lot of active research on image/video generation and manipulation. This research enables many new ways to manipulate original sources; at the same time, it leads to a loss of trust in digital content and can cause further harm by spreading false information and fake news.

## Objective
We need to build a model that recognizes whether a given video (or image sequence) is fake or real.


## Data Background

The FaceForensics++ data was collected by the Visual Computing Group (http://niessnerlab.org/); one can obtain this data by accepting their terms and conditions.

### In short
1. A total of 1,000 videos (mostly of newsreaders reading the news) were downloaded from YouTube.
2. Manipulated videos were generated by applying three automated state-of-the-art face manipulation methods (DeepFakes, FaceSwap, Face2Face) to these 1,000 pristine videos.
3. Images were gathered from both the pristine videos (real) and the manipulated videos (fake).

For a deeper understanding, please visit https://arxiv.org/abs/1901.08971

To download and extract the data, please visit https://github.com/ondyari/FaceForensics

## Files in this Repository
1. Data_Analysis_Extraction.ipynb ----> understanding, extracting, and preprocessing the data
2. modeling.ipynb -----> all of the modeling work
3. final.ipynb -----> the full pipeline for performing video classification (a minimal sketch follows this list)
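At a high level, final.ipynb loads a fine-tuned Xception classifier, detects a face in each video frame with dlib, and classifies the cropped face as real or fake. The sketch below is a minimal, simplified version of that loop rather than the notebook's exact code; `model_finetuned_xception.hdf5` and `my_video.mp4` are placeholder paths.

```python
# Minimal sketch of the per-frame classification loop, assuming a fine-tuned
# Xception checkpoint and an input video (both paths are placeholders).
import cv2
import dlib
import numpy as np
from keras.applications.xception import preprocess_input
from keras import models

detector = dlib.get_frontal_face_detector()
model = models.load_model('model_finetuned_xception.hdf5')

capture = cv2.VideoCapture('my_video.mp4')
while True:
    ok, frame = capture.read()
    if not ok:  # end of video
        break
    faces = detector(frame, 1)  # dlib frontal face detection
    if not faces:
        continue
    f = faces[0]
    crop = frame[max(f.top(), 0):f.bottom(), max(f.left(), 0):f.right()]
    if crop.size == 0:
        continue
    x = np.expand_dims(cv2.resize(crop, (299, 299)), axis=0).astype('float32')
    probs = model.predict(preprocess_input(x))[0]  # index 1 = fake, index 0 = real, as in the notebook
    print('Fake' if np.argmax(probs) == 1 else 'Real', probs.max())
capture.release()
```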
--------------------------------------------------------------------------------
/final.ipynb:
--------------------------------------------------------------------------------
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 97
    },
    "colab_type": "code",
    "id": "GN-7QbU0Bgbh",
    "outputId": "041a7d60-2850-4be4-a5e6-77bbaa1d7696"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using TensorFlow backend.\n"
     ]
    }
   ],
   "source": [
    "import cv2\n",
    "import dlib\n",
    "import os\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from keras.applications.xception import preprocess_input\n",
    "from keras import models\n",
    "\n",
    "face_detector = dlib.get_frontal_face_detector()  # dlib frontal face detector"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 122
    },
    "colab_type": "code",
    "id": "pKg415IMBoGv",
    "outputId": "40eabbef-a7fc-4c0a-bf94-960ef4dc5023"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code\n",
      "\n",
      "Enter your authorization code:\n",
      "··········\n",
      "Mounted at /content/drive\n"
     ]
    }
   ],
   "source": [
    "from google.colab import drive\n",
    "drive.mount('/content/drive')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "INTXw88HBgbw"
   },
   "outputs": [],
   "source": [
    "model_Xc = models.load_model('/content/drive/My Drive/Submit/model_finetuned_xception.hdf5')  # fine-tuned Xception classifier\n",
    "\n",
    "\n",
    "def get_boundingbox(face, width, height, scale=1.3, minsize=None):\n",
    "    # Reference: https://github.com/ondyari/FaceForensics\n",
    "    \"\"\"\n",
    "    Expects a dlib face to generate a quadratic bounding box.\n",
    "    :param face: dlib face class\n",
    "    :param width: frame width\n",
    "    :param height: frame height\n",
    "    :param scale: bounding box size multiplier to get a bigger face region\n",
    "    :param minsize: set minimum bounding box size\n",
    "    :return: x, y, bounding_box_size in opencv form\n",
    "    \"\"\"\n",
    "    x1 = face.left()  # Coordinates of the detected face\n",
    "    y1 = face.top()\n",
    "    x2 = face.right()\n",
    "    y2 = face.bottom()\n",
    "    size_bb = int(max(x2 - x1, y2 - y1) * scale)  # Scale the box size (1.3x by default)\n",
    "    if minsize:\n",
    "        if size_bb < minsize:\n",
    "            size_bb = minsize\n",
    "    center_x, center_y = (x1 + x2) // 2, (y1 + y2) // 2\n",
    "\n",
    "    # Check for out of bounds, x-y top left corner\n",
    "    x1 = max(int(center_x - size_bb // 2), 0)\n",
    "    y1 = max(int(center_y - size_bb // 2), 0)\n",
    "    # Check for too big bb size for given x, y\n",
    "    size_bb = min(width - x1, size_bb)\n",
    "    size_bb = min(height - y1, size_bb)\n",
    "\n",
    "    return x1, y1, size_bb\n",
    "\n",
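    "# Note: get_boundingbox enlarges the dlib detection by `scale` (1.3x by default)\n",
    "# and clamps it to the frame, so the crop passed to the classifier below keeps\n",
    "# some context around the face rather than a tight face-only box.\n",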
    "def get_prediction(image):\n",
    "    \"\"\"Expects an image as input; the image is cropped to the face,\n",
    "    the cropped face is sent to the evaluate function, and\n",
    "    the annotated result with a bounding box around the face is returned.\n",
    "    \"\"\"\n",
    "    height, width = image.shape[:2]\n",
    "    try:  # In case no face is detected in a frame\n",
    "        face = face_detector(image, 1)[0]  # Face detection\n",
    "        x, y, size = get_boundingbox(face=face, width=width, height=height)  # Bounding box around the face\n",
    "    except IndexError:\n",
    "        return image  # No face found: return the frame unannotated\n",
    "    cropped_face = image[y:y+size, x:x+size]  # Crop the face\n",
    "    output, label = evaluate(cropped_face)  # Classify the cropped face\n",
    "    font_face = cv2.FONT_HERSHEY_SIMPLEX  # Font settings\n",
    "    thickness = 2\n",
    "    font_scale = 1\n",
    "    if label == 'Real':\n",
    "        color = (0, 255, 0)\n",
    "    else:\n",
    "        color = (0, 0, 255)\n",
    "    x = face.left()  # Place the bounding box on the uncropped image\n",
    "    y = face.top()\n",
    "    w = face.right() - x\n",
    "    h = face.bottom() - y\n",
    "    cv2.putText(image, label + '_' + str('%.2f' % output) + '%', (x, y + h + 30),\n",
    "                font_face, font_scale,\n",
    "                color, thickness, 2)  # Put the label and confidence value\n",
    "\n",
    "    return cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)  # Draw a box over the face\n",
    "\n",
    "def evaluate(cropped_face):\n",
    "    \"\"\"Classifies the cropped face using the loaded trained model and\n",
    "    returns the confidence value and the label.\n",
    "    \"\"\"\n",
    "    img = cv2.resize(cropped_face, (299, 299))  # Xception input size\n",
    "    img = np.expand_dims(img, axis=0)\n",
    "    img = preprocess_input(img)\n",
    "    res = model_Xc.predict(img)[0]\n",
    "    if np.argmax(res) == 1:\n",
    "        label = 'Fake'\n",
    "    else:\n",
    "        label = 'Real'\n",
    "    return res[np.argmax(res)] * 100.0, label\n",
    "\n",
    "\n",
    "def final_model(video_path, limit_frames):\n",
    "    \"\"\"Expects the video path: '/xxx.mp4'\n",
    "    limit_frames: total number of frames to take from the input video.\n",
    "    Writes a video with the classification results to the pwd.\"\"\"\n",
    "    output_ = video_path.split(\"/\")[-1].split(\".\")[-2]\n",
    "    capture = cv2.VideoCapture(video_path)\n",
    "    if not capture.isOpened():\n",
    "        print(\"Could not open video: {}\".format(video_path))\n",
    "        return\n",
    "    frame_width = int(capture.get(3))\n",
    "    frame_height = int(capture.get(4))\n",
    "    out = cv2.VideoWriter(output_ + '_output.avi', cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 10, (frame_width, frame_height))\n",
    "    i = 1\n",
    "    while True:\n",
    "        success, image = capture.read()\n",
    "        if not success:  # End of the video\n",
    "            break\n",
    "        classified_img = get_prediction(image)\n",
    "        out.write(classified_img)\n",
    "        if i % 10 == 0:\n",
    "            print(\"Number of frames completed:{}\".format(i))\n",
    "        if i == limit_frames:\n",
    "            break\n",
    "        i = i + 1\n",
    "    capture.release()\n",
    "    out.release()"
   ]
  },
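  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of classifying a single frame with the helpers above; `sample.jpg` is a placeholder path, not a file from this project."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Classify one face from a single image file ('sample.jpg' is a placeholder).\n",
    "img = cv2.imread('sample.jpg')\n",
    "faces = face_detector(img, 1)\n",
    "if faces:\n",
    "    x, y, size = get_boundingbox(faces[0], img.shape[1], img.shape[0])\n",
    "    confidence, label = evaluate(img[y:y+size, x:x+size])\n",
    "    print(label, confidence)\n",
    "else:\n",
    "    print('No face detected')"
   ]
  },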
"execution_count": 13, 185 | "metadata": { 186 | "colab": { 187 | "base_uri": "https://localhost:8080/", 188 | "height": 357 189 | }, 190 | "colab_type": "code", 191 | "id": "EjR3JsUIFM5c", 192 | "outputId": "2097bd3d-b7ae-4c02-f5e9-9c00f2a5be2f" 193 | }, 194 | "outputs": [ 195 | { 196 | "name": "stdout", 197 | "output_type": "stream", 198 | "text": [ 199 | "Number of frames complted:10\n", 200 | "Number of frames complted:20\n", 201 | "Number of frames complted:30\n", 202 | "Number of frames complted:40\n", 203 | "Number of frames complted:50\n", 204 | "Number of frames complted:60\n", 205 | "Number of frames complted:70\n", 206 | "Number of frames complted:80\n", 207 | "Number of frames complted:90\n", 208 | "Number of frames complted:100\n", 209 | "Number of frames complted:110\n", 210 | "Number of frames complted:120\n", 211 | "Number of frames complted:130\n", 212 | "Number of frames complted:140\n", 213 | "Number of frames complted:150\n", 214 | "Number of frames complted:160\n", 215 | "Number of frames complted:170\n", 216 | "Number of frames complted:180\n", 217 | "Number of frames complted:190\n", 218 | "Number of frames complted:200\n" 219 | ] 220 | } 221 | ], 222 | "source": [ 223 | "#For testing any new video\n", 224 | "final_model(video_path='/content/drive/My Drive/FaceForensics++/notebooks/878_866.mp4',\n", 225 | " limit_frames=200)" 226 | ] 227 | } 228 | ], 229 | "metadata": { 230 | "colab": { 231 | "collapsed_sections": [], 232 | "name": "final.ipynb", 233 | "provenance": [] 234 | }, 235 | "kernelspec": { 236 | "display_name": "Python 3", 237 | "language": "python", 238 | "name": "python3" 239 | }, 240 | "language_info": { 241 | "codemirror_mode": { 242 | "name": "ipython", 243 | "version": 3 244 | }, 245 | "file_extension": ".py", 246 | "mimetype": "text/x-python", 247 | "name": "python", 248 | "nbconvert_exporter": "python", 249 | "pygments_lexer": "ipython3", 250 | "version": "3.7.3" 251 | } 252 | }, 253 | "nbformat": 4, 254 | "nbformat_minor": 1 255 | } 256 | --------------------------------------------------------------------------------