├── .gitignore
├── Application of Convolutional Neural Network for Image Forgery Detection and Localization.pdf
├── Aptfile
├── ManTraNet_Ptrain4.h5
├── Procfile
├── README.md
├── app.py
├── bot.py
├── modelCore.py
├── requirements.txt
├── static
│   ├── images
│   │   ├── s1.svg
│   │   ├── s2.svg
│   │   └── s3.svg
│   ├── scripts
│   │   └── fullpage.min.js
│   └── stylesheets
│       ├── fullpage.min.css
│       └── style.css
└── templates
    └── base.html

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
__pycache__
.vscode
.ipynb_checkpoints
venv
*.png

--------------------------------------------------------------------------------
/Application of Convolutional Neural Network for Image Forgery Detection and Localization.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/neelanjan00/Image-Forgery-Detection/e046420ba4eb103a00a246c03c9eb07d18789217/Application of Convolutional Neural Network for Image Forgery Detection and Localization.pdf

--------------------------------------------------------------------------------
/Aptfile:
--------------------------------------------------------------------------------
libsm6
libxrender1
libfontconfig1
libice6

--------------------------------------------------------------------------------
/ManTraNet_Ptrain4.h5:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/neelanjan00/Image-Forgery-Detection/e046420ba4eb103a00a246c03c9eb07d18789217/ManTraNet_Ptrain4.h5

--------------------------------------------------------------------------------
/Procfile:
--------------------------------------------------------------------------------
web: python app.py

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
![Image Forgery](https://firebasestorage.googleapis.com/v0/b/neelanjan-manna.appspot.com/o/project-images%2FImageForgery.jpeg?alt=media&token=eaaa86e3-a2eb-45b2-aa98-5719767b66e8)

# Welcome to Image Forgery Detection 👋

[badges: Version · Twitter: NeelanjanManna — badge image markup lost during extraction]

> [Published in Project Innovations in Distributed Computing and Internet Technology, 10th Edition (Pages 69-79), Springer](https://github.com/neelanjan00/Image-Forgery-Detection/blob/master/Application%20of%20Convolutional%20Neural%20Network%20for%20Image%20Forgery%20Detection%20and%20Localization.pdf)

With the rapid adoption of the internet and social media, and the widespread availability of unsophisticated image-manipulation tools, the adverse consequences of image forgery are taking a toll on society. More often than not, visual inspection alone cannot determine whether an image has been forged, which makes forged images potentially dangerous in all of their illicit applications. To overcome this problem, we analyze the effectiveness of a deep convolutional neural network for both the detection and the localization of the manipulated region in forged images, covering forgeries of both simple and complex nature. We then interface the trained model with a web application so that users can interact with the model in a simple and effective manner, and finally we develop a chatbot to further ease interaction with the model and to tackle the problem of fake-news forwards on popular internet messaging platforms such as WhatsApp.

### 🏠 [Homepage](https://github.com/neelanjan00/Image-Forgery-Detection)

## Install

```sh
pip install -r requirements.txt
```

## Usage

### Web Application
```sh
python app.py
```

### Telegram Bot
```sh
python bot.py
```

## Author

👤 **Neelanjan Manna**

* Website: https://neelanjanmanna.ml/
* Twitter: [@NeelanjanManna](https://twitter.com/NeelanjanManna)
* Github: [@neelanjan00](https://github.com/neelanjan00)
* LinkedIn: [@neelanjan00](https://linkedin.com/in/neelanjan00)

## Show your support

Give a ⭐️ if this project helped you!
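### Programmatic Usage

Both the web application and the Telegram bot ultimately call `modelCore.load_trained_model()` and run a single `predict` on a normalized RGB image. The sketch below shows the same flow directly from Python; it is illustrative only and not part of the repository (it assumes a local `test.jpg` and the `ManTraNet_Ptrain4.h5` weights in the working directory):

```python
# predict_example.py — hypothetical usage sketch, not a file in this repo.
import cv2
import matplotlib.pyplot as plt
import numpy as np

import modelCore  # loads ManTraNet_Ptrain4.h5 from the working directory

# OpenCV decodes to BGR; the model expects RGB.
img = cv2.cvtColor(cv2.imread("test.jpg"), cv2.COLOR_BGR2RGB)

# Scale pixels to [-1, 1] and add a batch axis, as app.py does.
x = np.expand_dims(img.astype("float32") / 255.0 * 2 - 1, axis=0)

model = modelCore.load_trained_model()
mask = model.predict(x)[0, ..., 0]  # per-pixel forgery probability in [0, 1]

plt.imshow(img)
plt.imshow(mask, cmap="jet", alpha=0.5)  # heat-map overlay of suspected regions
plt.axis("off")
plt.savefig("prediction.png", bbox_inches="tight")
```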
--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
from keras import backend as K
import modelCore
import tensorflow as tf
import cv2
import numpy as np
from flask import Flask, redirect, request, render_template
import matplotlib.pyplot as plt
import base64
import os

tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)

app = Flask(__name__)

def decode_an_image_array(rgb, dn=1):
    # Scale the RGB image to [-1, 1] and add a batch axis, optionally
    # downsampling by a factor dn.
    x = np.expand_dims(rgb.astype('float32') / 255. * 2 - 1, axis=0)[:, ::dn, ::dn]
    # The model is reloaded per request so the TF session stays small on
    # low-memory hosts.
    K.clear_session()
    manTraNet = modelCore.load_trained_model()
    return manTraNet.predict(x)[0, ..., 0]


def decode_an_image_file(image_file, dn=1):
    # Overlay the predicted manipulation mask on the input image and save it as h.png.
    mask = decode_an_image_array(image_file, dn)
    plt.clf()  # start from a clean figure so repeated requests don't stack images
    plt.xticks([])
    plt.yticks([])
    plt.imshow(image_file[::dn, ::dn])
    plt.imshow(mask, cmap='jet', alpha=.5)
    plt.savefig('h.png', bbox_inches='tight', pad_inches=-0.1)


ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg'}

def allowed_file(filename):
    return '.' in filename and \
        filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS


@app.route("/", methods=['GET', 'POST'])
def base():
    if request.method == 'GET':
        return render_template("base.html", output=0)

    if 'input_image' not in request.files:
        print("No file part")
        return redirect(request.url)

    file = request.files['input_image']

    if file.filename == '':
        print('No selected file')
        return redirect(request.url)

    if file and allowed_file(file.filename):
        # Decode as 3-channel BGR and convert to RGB, which the model expects.
        inp_img = cv2.imdecode(np.frombuffer(file.read(), np.uint8), cv2.IMREAD_COLOR)
        inp_img = cv2.cvtColor(inp_img, cv2.COLOR_BGR2RGB)
        decode_an_image_file(inp_img)
        output = cv2.imread('h.png')
        _, outputBuffer = cv2.imencode('.jpg', output)
        OutputBase64String = base64.b64encode(outputBuffer).decode('utf-8')
        return render_template("base.html", img=OutputBase64String, output=1)

    # Disallowed file type: send the user back to the upload form.
    return redirect(request.url)


if __name__ == "__main__":
    app.secret_key = 'qwertyuiop1234567890'
    port = int(os.environ.get('PORT', 5000))
    app.run(debug=True, host='0.0.0.0', port=port)
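For completeness, the upload route above can also be exercised without a browser; a minimal client sketch, assuming the `requests` package (not pinned in requirements.txt) and a local `test.jpg`:

```python
# client_example.py — hypothetical helper, not a file in this repo.
# Posts an image to the running Flask app (python app.py) and saves the
# returned HTML page, which embeds the prediction as a base64 JPEG.
import requests

with open("test.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:5000/",
        # "input_image" is the form-field name app.py reads from request.files.
        files={"input_image": ("test.jpg", f, "image/jpeg")},
    )

resp.raise_for_status()
with open("result.html", "w", encoding="utf-8") as out:
    out.write(resp.text)
print("saved rendered result to result.html")
```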
--------------------------------------------------------------------------------
/bot.py:
--------------------------------------------------------------------------------
from keras import backend as K
import modelCore
import matplotlib.pyplot as plt
import os

import numpy as np
import cv2
import logging
from typing import List

from telegram import Update, Message, ChatAction, Document, File, PhotoSize
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, CallbackContext

# Enable logging
logging.basicConfig(
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.INFO
)

logger = logging.getLogger(__name__)


def decode_an_image_array(rgb, dn=1):
    # Scale the RGB image to [-1, 1] and add a batch axis, optionally
    # downsampling by a factor dn.
    x = np.expand_dims(rgb.astype('float32') / 255. * 2 - 1, axis=0)[:, ::dn, ::dn]
    # The model is reloaded per request so the TF session stays small on
    # low-memory hosts.
    K.clear_session()
    manTraNet = modelCore.load_trained_model()
    return manTraNet.predict(x)[0, ..., 0]


def decode_an_image_file(image_file, dn=1):
    # Overlay the predicted manipulation mask on the input image and save it as h.png.
    mask = decode_an_image_array(image_file, dn)
    plt.clf()  # start from a clean figure so repeated requests don't stack images
    plt.xticks([])
    plt.yticks([])
    plt.imshow(image_file[::dn, ::dn])
    plt.imshow(mask, cmap='jet', alpha=.5)
    plt.savefig('h.png', bbox_inches='tight', pad_inches=-0.1)


ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg'}

def allowed_file(filename):
    return '.' in filename and \
        filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS


# Define a few command handlers. These usually take the two arguments update and
# context. Error handlers also receive the raised TelegramError object in error.
def start(update: Update, context: CallbackContext) -> None:
    """Send a message when the command /start is issued."""
    update.message.reply_text("Upload an image as a document for interacting with the model")


def help_command(update: Update, context: CallbackContext) -> None:
    """Send a message when the command /help is issued."""
    update.message.reply_text('Help!')


def _predict_and_reply(message: Message, arr: bytearray) -> None:
    # Decode the downloaded bytes as a 3-channel BGR image, convert to RGB
    # (which the model expects), run the model, and reply with the overlay.
    nparr = np.frombuffer(arr, np.uint8)
    inp_img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    inp_img = cv2.cvtColor(inp_img, cv2.COLOR_BGR2RGB)
    decode_an_image_file(inp_img)
    message.reply_photo(photo=open('h.png', 'rb'))


def predict(update: Update, context: CallbackContext) -> None:
    """Run the forgery-detection model on an uploaded photo or image document."""
    if update.message.text:
        # Echo plain-text messages back to the sender.
        update.message.reply_text(update.message.text)
    message: Message = update.message
    photo: List[PhotoSize] = message.photo
    document: Document = message.document
    context.bot.send_chat_action(chat_id=message.chat.id, action=ChatAction.UPLOAD_PHOTO, timeout=60000)
    if photo:
        p: PhotoSize = photo[-1]  # largest available resolution
        file: File = p.get_file(timeout=10000)
        _predict_and_reply(message, file.download_as_bytearray())
    elif document:
        file: File = document.get_file(timeout=10000)
        _predict_and_reply(message, file.download_as_bytearray())


def main():
    """Start the bot."""
    # The bot token is read from the environment rather than hard-coded.
    updater = Updater(os.environ['TELEGRAM_BOT_TOKEN'], use_context=True)

    dispatcher = updater.dispatcher

    dispatcher.add_handler(CommandHandler("start", start))
    dispatcher.add_handler(CommandHandler("help", help_command))

    dispatcher.add_handler(MessageHandler(~Filters.command, predict))

    updater.start_polling()

    updater.idle()


if __name__ == '__main__':
    main()
--------------------------------------------------------------------------------
/modelCore.py:
--------------------------------------------------------------------------------
"""
ManTra-Net Model Definition

Created on Thu Nov 29 18:07:45 2018

@author: yue_wu
"""
import os
from keras.layers import Layer, Input, GlobalAveragePooling2D, Lambda, Dense
from keras.layers import ConvLSTM2D, Conv2D, AveragePooling2D, BatchNormalization
from keras.constraints import unit_norm, non_neg
from keras.activations import softmax
from keras.models import Model
from keras.initializers import Constant
from keras.constraints import Constraint
from keras import backend as K
from keras.layers.convolutional import _Conv
from keras.legacy import interfaces
from keras.engine import InputSpec
import tensorflow as tf
import numpy as np

#################################################################################
# Model Utils for Image Manipulation Classification
#################################################################################
class Conv2DSymPadding( _Conv ) :
    @interfaces.legacy_conv2d_support
    def __init__(self, filters,
                 kernel_size,
                 strides=(1, 1),
                 data_format=None,
                 dilation_rate=(1, 1),
                 activation=None,
                 padding='same',
                 use_bias=True,
                 kernel_initializer='glorot_uniform',
                 bias_initializer='zeros',
                 kernel_regularizer=None,
                 bias_regularizer=None,
                 activity_regularizer=None,
                 kernel_constraint=None,
                 bias_constraint=None,
                 **kwargs):
        super(Conv2DSymPadding, self).__init__(
            rank=2,
            filters=filters,
            kernel_size=kernel_size,
            strides=strides,
            padding='same',
            data_format=data_format,
            dilation_rate=dilation_rate,
            activation=activation,
            use_bias=use_bias,
            kernel_initializer=kernel_initializer,
            bias_initializer=bias_initializer,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=bias_regularizer,
            activity_regularizer=activity_regularizer,
            kernel_constraint=kernel_constraint,
            bias_constraint=bias_constraint,
            **kwargs)
        self.input_spec = InputSpec(ndim=4)
    def get_config(self):
        config = super(Conv2DSymPadding, self).get_config()
        config.pop('rank')
        return config
    def call( self, inputs ) :
        # Pad symmetrically by half the kernel size, then convolve with 'valid'
        # padding, so border pixels see mirrored context instead of zeros.
        if ( isinstance( self.kernel_size, tuple ) ) :
            kh, kw = self.kernel_size
        else :
            kh = kw = self.kernel_size
        ph, pw = kh//2, kw//2
        inputs_pad = tf.pad( inputs, [[0,0],[ph,ph],[pw,pw],[0,0]], mode='symmetric' )
        outputs = K.conv2d(
            inputs_pad,
            self.kernel,
            strides=self.strides,
            padding='valid',
            data_format=self.data_format,
            dilation_rate=self.dilation_rate)
        if self.use_bias:
            outputs = K.bias_add(
                outputs,
                self.bias,
                data_format=self.data_format)

        if self.activation is not None:
            return self.activation(outputs)
        return outputs
class BayarConstraint( Constraint ) :
    def __init__( self ) :
        self.mask = None
    def _initialize_mask( self, w ) :
        nb_rows, nb_cols, nb_inputs, nb_outputs = K.int_shape(w)
        m = np.zeros([nb_rows, nb_cols, nb_inputs, nb_outputs]).astype('float32')
        m[nb_rows//2,nb_cols//2] = 1.
        self.mask = K.variable( m, dtype='float32' )
        return
    def __call__( self, w ) :
        # Bayar-style constraint: zero the center tap, normalize the remaining
        # taps to sum to one, then set the center tap to -1, yielding a
        # prediction-error filter.
        if self.mask is None :
            self._initialize_mask(w)
        w *= (1-self.mask)
        rest_sum = K.sum( w, axis=(0,1), keepdims=True)
        w /= rest_sum + K.epsilon()
        w -= self.mask
        return w

class CombinedConv2D( Conv2DSymPadding ) :
    def __init__(self, filters,
                 kernel_size=(5,5),
                 strides=(1,1),
                 data_format=None,
                 dilation_rate=(1,1),
                 activation=None,
                 padding='same',
                 use_bias=False,
                 kernel_initializer='glorot_uniform',
                 bias_initializer='zeros',
                 kernel_regularizer=None,
                 bias_regularizer=None,
                 activity_regularizer=None,
                 kernel_constraint=None,
                 bias_constraint=None,
                 **kwargs):
        super(CombinedConv2D, self).__init__(
            filters=filters,
            kernel_size=(5,5),
            strides=strides,
            padding='same',
            data_format=data_format,
            dilation_rate=dilation_rate,
            activation=activation,
            use_bias=False,
            kernel_initializer=kernel_initializer,
            bias_initializer=bias_initializer,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=None,
            activity_regularizer=activity_regularizer,
            kernel_constraint=kernel_constraint,
            bias_constraint=None,
            **kwargs)
        self.input_spec = InputSpec(ndim=4)
    def _get_srm_list( self ) :
        # srm kernel 1
        srm1 = np.zeros([5,5]).astype('float32')
        srm1[1:-1,1:-1] = np.array([[-1, 2, -1],
                                    [2, -4, 2],
                                    [-1, 2, -1]] )
        srm1 /= 4.
        # srm kernel 2
        srm2 = np.array([[-1, 2, -2, 2, -1],
                         [2, -6, 8, -6, 2],
                         [-2, 8, -12, 8, -2],
                         [2, -6, 8, -6, 2],
                         [-1, 2, -2, 2, -1]]).astype('float32')
        srm2 /= 12.
        # srm kernel 3
        srm3 = np.zeros([5,5]).astype('float32')
        srm3[2,1:-1] = np.array([1,-2,1])
        srm3 /= 2.
        return [ srm1, srm2, srm3 ]
    def _build_SRM_kernel( self ) :
        # Replicate each of the three SRM noise-residual filters on each of the
        # three input channels, giving nine fixed (non-trainable) kernels.
        kernel = []
        srm_list = self._get_srm_list()
        for idx, srm in enumerate( srm_list ):
            for ch in range(3) :
                this_ch_kernel = np.zeros([5,5,3]).astype('float32')
                this_ch_kernel[:,:,ch] = srm
                kernel.append( this_ch_kernel )
        kernel = np.stack( kernel, axis=-1 )
        srm_kernel = K.variable( kernel, dtype='float32', name='srm' )
        return srm_kernel
    def build( self, input_shape ) :
        if self.data_format == 'channels_first':
            channel_axis = 1
        else:
            channel_axis = -1
        if input_shape[channel_axis] is None:
            raise ValueError('The channel dimension of the inputs '
                             'should be defined. Found `None`.')
        input_dim = input_shape[channel_axis]
        # 1. regular conv kernels, fully trainable
        filters = self.filters - 9 - 3
        if filters >= 1 :
            regular_kernel_shape = self.kernel_size + (input_dim, filters)
            self.regular_kernel = self.add_weight(shape=regular_kernel_shape,
                                                  initializer=self.kernel_initializer,
                                                  name='regular_kernel',
                                                  regularizer=self.kernel_regularizer,
                                                  constraint=self.kernel_constraint)
        else :
            self.regular_kernel = None
        # 2. SRM kernels, not trainable
        self.srm_kernel = self._build_SRM_kernel()
        # 3. bayar kernels, trainable but under constraint
        bayar_kernel_shape = self.kernel_size + (input_dim, 3)
        self.bayar_kernel = self.add_weight(shape=bayar_kernel_shape,
                                            initializer=self.kernel_initializer,
                                            name='bayar_kernel',
                                            regularizer=self.kernel_regularizer,
                                            constraint=BayarConstraint())
        # 4. collect all kernels
        if ( self.regular_kernel is not None ) :
            all_kernels = [ self.regular_kernel,
                            self.srm_kernel,
                            self.bayar_kernel]
        else :
            all_kernels = [ self.srm_kernel,
                            self.bayar_kernel]
        self.kernel = K.concatenate( all_kernels, axis=-1 )
        # Set input spec.
        self.input_spec = InputSpec(ndim=self.rank + 2,
                                    axes={channel_axis: input_dim})
        self.built = True

def create_featex_vgg16_base( type=1 ) :
    base = 32
    img_input = Input(shape=(None,None,3), name='image_in')
    # block 1
    bname = 'b1' # 32
    nb_filters = base
    x = CombinedConv2D( 32 if type in [0,1] else 16, activation='relu', use_bias=False, padding='same', name=bname+'c1')( img_input )
    x = Conv2DSymPadding( nb_filters, (3,3), activation='relu', padding='same', name=bname+'c2')( x )
    # block 2
    bname = 'b2'
    nb_filters = 2 * base # 64
    x = Conv2DSymPadding( nb_filters, (3,3), activation='relu', padding='same', name=bname+'c1')( x )
    x = Conv2DSymPadding( nb_filters, (3,3), activation='relu', padding='same', name=bname+'c2')( x )
    # block 3
    bname = 'b3'
    nb_filters = 4 * base # 128
    x = Conv2DSymPadding( nb_filters, (3,3), activation='relu', padding='same', name=bname+'c1')( x )
    x = Conv2DSymPadding( nb_filters, (3,3), activation='relu', padding='same', name=bname+'c2')( x )
    x = Conv2DSymPadding( nb_filters, (3,3), activation='relu', padding='same', name=bname+'c3')( x )
    # block 4
    bname = 'b4'
    nb_filters = 8 * base # 256
    x = Conv2DSymPadding( nb_filters, (3,3), activation='relu', padding='same', name=bname+'c1')( x )
    x = Conv2DSymPadding( nb_filters, (3,3), activation='relu', padding='same', name=bname+'c2')( x )
    x = Conv2DSymPadding( nb_filters, (3,3), activation='relu', padding='same', name=bname+'c3')( x )
    # block 5/bottle-neck
    bname = 'b5'
    x = Conv2DSymPadding( nb_filters, (3,3), activation='relu', padding='same', name=bname+'c1')( x )
    x = Conv2DSymPadding( nb_filters, (3,3), activation='relu', padding='same', name=bname+'c2')( x )
    activation = None if type >= 1 else 'tanh'
    print ("INFO: use activation in the last CONV={}".format( activation ))
    sf = Conv2DSymPadding( nb_filters, (3,3),
                           activation=activation,
                           name='transform',
                           padding='same' )(x)
    sf = Lambda( lambda t : K.l2_normalize( t, axis=-1), name='L2')(sf)
    return Model( inputs=img_input, outputs=sf, name='Featex')

class GlobalStd2D( Layer ) :
    '''Custom Keras Layer to compute sample-wise feature deviation
    '''
    def __init__( self, min_std_val=1e-5, **kwargs ) :
        self.min_std_val = min_std_val
        super( GlobalStd2D, self ).__init__( **kwargs )
    def build( self, input_shape ) :
        nb_feats = input_shape[-1]
        std_shape = ( 1,1,1, nb_feats )
        self.min_std = self.add_weight( shape=std_shape,
                                        initializer=Constant(self.min_std_val),
                                        name='min_std',
                                        constraint=non_neg() )
        self.built = True
        return
    def call( self, x ) :
        x_std = K.std( x, axis=(1,2), keepdims=True )
        x_std = K.maximum( x_std, self.min_std_val/10. + self.min_std )
        return x_std
    def compute_output_shape( self, input_shape ) :
        return (input_shape[0], 1, 1, input_shape[-1] )

class NestedWindowAverageFeatExtrator( Layer ) :
    '''Custom Keras Layer of NestedWindowAverageFeatExtrator
    '''
    def __init__( self,
                  window_size_list,
                  output_mode='5d',
                  minus_original=False,
                  include_global=True,
                  **kwargs ) :
        '''
        INPUTS:
            window_size_list = list of ints or tuples, each element indicates a window size of interest
            output_mode = '5d' or '4d', where
                          '5d' merges all win_avgs along a new time axis
                          '4d' merges all win_avgs along the existing feat axis
        '''
        self.window_size_list = window_size_list
        assert output_mode in ['5d','4d'], "ERROR: unknown output mode={}".format( output_mode )
        self.output_mode = output_mode
        self.minus_original = minus_original
        self.include_global = include_global
        super(NestedWindowAverageFeatExtrator, self).__init__(**kwargs)
    def build( self, input_shape ) :
        self.num_woi = len( self.window_size_list )
        self.count_ii = None
        self.lut = dict()
        self.built = True
        self.max_wh, self.max_ww = self._get_max_size()
        return
    def _initialize_ii_buffer( self, x ) :
        # Zero-pad by half the largest window, then build the 2-D integral
        # image (cumulative sum over rows, then over columns).
        x_pad = K.spatial_2d_padding( x, ((self.max_wh//2+1,self.max_wh//2+1), (self.max_ww//2+1,self.max_ww//2+1)) )
        ii_x = K.cumsum( x_pad, axis=1 )
        ii_x2 = K.cumsum( ii_x, axis=2 )
        return ii_x2
    def _get_max_size( self ) :
        mh, mw = 0, 0
        for hw in self.window_size_list :
            if ( isinstance( hw, int ) ) :
                h = w = hw
            else :
                h, w = hw[:2]
            mh = max( h, mh )
            mw = max( w, mw )
        return mh, mw
    def _compute_for_one_size( self, x, x_ii, height, width ) :
        # 1. compute valid counts for this key
        top = self.max_wh//2 - height//2
        bot = top + height
        left = self.max_ww//2 - width //2
        right = left + width
        Ay, Ax = (top, left)
        By, Bx = (top, right)
        Cy, Cx = (bot, right)
        Dy, Dx = (bot, left)
        ii_key = (height,width)
        top_0 = -self.max_wh//2 - height//2 - 1
        bot_0 = top_0 + height
        left_0 = -self.max_ww//2 - width//2 - 1
        right_0 = left_0 + width
        Ay0, Ax0 = (top_0, left_0)
        By0, Bx0 = (top_0, right_0)
        Cy0, Cx0 = (bot_0, right_0)
        Dy0, Dx0 = (bot_0, left_0)
        # used in testing, where each batch is a sample of different shapes
        counts = K.ones_like( x[:1,...,:1] )
        count_ii = self._initialize_ii_buffer( counts )
        # compute winsize if necessary
        counts_2d = count_ii[:,Ay:Ay0, Ax:Ax0] \
                  + count_ii[:,Cy:Cy0, Cx:Cx0] \
                  - count_ii[:,By:By0, Bx:Bx0] \
                  - count_ii[:,Dy:Dy0, Dx:Dx0]
        # 2. compute summed feature
        sum_x_2d = x_ii[:,Ay:Ay0, Ax:Ax0] \
                 + x_ii[:,Cy:Cy0, Cx:Cx0] \
                 - x_ii[:,By:By0, Bx:Bx0] \
                 - x_ii[:,Dy:Dy0, Dx:Dx0]
        # 3. compute average feature
        avg_x_2d = sum_x_2d / counts_2d
        return avg_x_2d
    def call( self, x ) :
        x_win_avgs = []
        # 1. compute corr(x, window_mean) for different sizes
        # 1.1 compute integral image buffer
        x_ii = self._initialize_ii_buffer( x )
        for hw in self.window_size_list :
            if isinstance( hw, int ) :
                height = width = hw
            else :
                height, width = hw[:2]
            this_avg = self._compute_for_one_size( x, x_ii, height, width )
            if ( self.minus_original ) :
                x_win_avgs.append( this_avg-x )
            else :
                x_win_avgs.append( this_avg )
        # 2. compute corr(x, global_mean)
        if ( self.include_global ) :
            if ( self.minus_original ) :
                mu = K.mean( x, axis=(1,2), keepdims=True )
                x_win_avgs.append( mu-x )
            else :
                mu = K.mean( x, axis=(1,2), keepdims=True ) * K.ones_like(x)
                x_win_avgs.append( mu )
        if self.output_mode == '4d' :
            return K.concatenate( x_win_avgs, axis=-1 )
        elif self.output_mode == '5d' :
            return K.stack( x_win_avgs, axis=1 )
        else :
            raise NotImplementedError( "ERROR: unknown output_mode={}".format( self.output_mode ) )
    def compute_output_shape(self, input_shape):
        batch_size, num_rows, num_cols, num_filts = input_shape
        if self.output_mode == '4d' :
            return ( batch_size, num_rows, num_cols, (self.num_woi+int(self.include_global))*num_filts )
        else :
            return ( batch_size, self.num_woi+int(self.include_global), num_rows, num_cols, num_filts )

def create_manTraNet_model( Featex, pool_size_list=[7,15,31], is_dynamic_shape=True, apply_normalization=True ) :
    """
    Create ManTra-Net from a pretrained IMC-Featex model
    """
    img_in = Input(shape=(None,None,3), name='img_in' )
    rf = Featex( img_in )
    rf = Conv2D( 64, (1,1),
                 activation=None, # no need to use tanh if sf is L2normalized
                 use_bias=False,
                 kernel_constraint = unit_norm( axis=-2 ),
                 name='outlierTrans',
                 padding = 'same' )(rf)
    bf = BatchNormalization( axis=-1, name='bnorm', center=False, scale=False )(rf)
    devf5d = NestedWindowAverageFeatExtrator(window_size_list=pool_size_list,
                                             output_mode='5d',
                                             minus_original=True,
                                             name='nestedAvgFeatex' )( bf )
    if ( apply_normalization ) :
        sigma = GlobalStd2D( name='glbStd' )( bf )
        sigma5d = Lambda( lambda t : K.expand_dims( t, axis=1 ), name='expTime')( sigma )
        devf5d = Lambda( lambda vs : K.abs(vs[0]/vs[1]), name='divStd' )([devf5d, sigma5d])
    # convert back to 4d
    devf = ConvLSTM2D( 8, (7,7),
                       activation='tanh',
                       recurrent_activation='hard_sigmoid',
                       padding='same',
                       name='cLSTM',
                       return_sequences=False )(devf5d)
    pred_out = Conv2D(1, (7,7), padding='same', activation='sigmoid', name='pred')( devf )
    return Model( inputs=img_in, outputs=pred_out, name='sigNet' )

def create_model( IMC_model_idx, freeze_featex, window_size_list ) :
    type_idx = IMC_model_idx if IMC_model_idx < 4 else 2
    Featex = create_featex_vgg16_base( type_idx )
    if freeze_featex :
        print ("INFO: freeze feature extraction part, trainable=False")
        Featex.trainable = False
    else :
        print ("INFO: unfreeze feature extraction part, trainable=True")

    if ( len( window_size_list ) == 4 ) :
        for ly in Featex.layers[:5] :
            ly.trainable = False
            print ("INFO: freeze", ly.name)
    model = create_manTraNet_model( Featex,
                                    pool_size_list=window_size_list,
                                    is_dynamic_shape=True,
                                    apply_normalization=True, )
    return model
def load_trained_model() :
    IMC_model_idx, freeze_featex, window_size_list = 2, False, [7, 15, 31]
    single_gpu_model = create_model( IMC_model_idx, freeze_featex, window_size_list )
    weight_file = f"{os.getcwd()}/ManTraNet_Ptrain4.h5"
    assert os.path.isfile(weight_file), "ERROR: failed to locate the pretrained weight file"
    single_gpu_model.load_weights(weight_file)
    return single_gpu_model
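The NestedWindowAverageFeatExtrator above rests on a classic integral-image (summed-area table) trick: after one pass of cumulative sums, the sum over any axis-aligned window comes from four table lookups, so per-pixel window means for several window sizes each cost O(1). A minimal NumPy sketch of that idea (illustrative only; the Keras layer additionally pads, batches, and subtracts the original feature):

```python
import numpy as np

def window_mean(feat, h, w):
    """Mean of `feat` over an h-by-w window centered at each pixel.

    Windows are clipped at the border, mirroring the valid-count
    normalization done inside NestedWindowAverageFeatExtrator."""
    H, W = feat.shape
    # Summed-area table with a zero border: ii[i, j] == feat[:i, :j].sum()
    ii = np.zeros((H + 1, W + 1))
    ii[1:, 1:] = feat.cumsum(axis=0).cumsum(axis=1)

    ys, xs = np.arange(H), np.arange(W)
    top = np.clip(ys - h // 2, 0, H)[:, None]
    bot = np.clip(ys + (h - h // 2), 0, H)[:, None]
    left = np.clip(xs - w // 2, 0, W)[None, :]
    right = np.clip(xs + (w - w // 2), 0, W)[None, :]

    # Four-corner lookup: sum over [top:bot, left:right) in O(1) per pixel.
    total = ii[bot, right] - ii[top, right] - ii[bot, left] + ii[top, left]
    count = (bot - top) * (right - left)
    return total / count

feat = np.random.rand(8, 8)
assert np.allclose(window_mean(feat, 3, 3)[4, 4], feat[3:6, 3:6].mean())
```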
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
Keras==2.2.4
tensorflow-cpu==1.15.0
opencv-python==4.1.0.25
numpy==1.17.0
Flask==1.1.1
matplotlib==3.1.1
certifi==2020.4.5.1
click==7.1.2
gunicorn==20.0.4
itsdangerous==1.1.0
Jinja2==2.11.2
MarkupSafe==1.1.1
Werkzeug==1.0.1
wincertstore==0.2
python-telegram-bot==13.0

--------------------------------------------------------------------------------
/static/images/s1.svg:
--------------------------------------------------------------------------------
[decorative landing-page illustration; SVG markup lost during extraction]

--------------------------------------------------------------------------------
/static/images/s3.svg:
--------------------------------------------------------------------------------
[decorative landing-page illustration labelled "ML_LCO"; SVG markup lost during extraction]

--------------------------------------------------------------------------------
/static/scripts/fullpage.min.js:
--------------------------------------------------------------------------------
/*!
 * fullPage 3.0.8
 * https://github.com/alvarotrigo/fullPage.js
 *
 * @license GPLv3 for open source use only
 * or Fullpage Commercial License for commercial use
 * http://alvarotrigo.com/fullPage/pricing/
 *
 * Copyright (C) 2018 http://alvarotrigo.com/fullPage - A project by Alvaro Trigo
 */
[vendored, minified fullPage.js v3.0.8; the single-line minified source was garbled during extraction and is omitted]

--------------------------------------------------------------------------------
/templates/base.html:
--------------------------------------------------------------------------------
[lines 1-122: document head and opening fullPage.js markup; HTML tags lost during extraction — page title: "Image Forgery Detection"]
[lines 123-281: four full-page sections; the HTML tags were lost during extraction, so only the recoverable page copy and template logic are kept below]

Section 1 (landing):
    Image Forgery Detection
    A DEEP LEARNING APPROACH

Section 2 (abstract):
    ABSTRACT
    Image manipulation and forgery have been a severe cyber threat ever since the inception of social media. A forged image can convey false news or claims with an underlying malicious intent, such as inciting communal violence. Here, a deep learning algorithm is used to identify 385 different kinds of image forgery, using a novel neural-network architecture called the Manipulation Tracing Network (ManTraNet).
    Based upon the original 2019 paper by Wu et al.

Section 3 (interactive model):
    Interactive Model
    Image Preview
    [file-upload form that posts the "input_image" field handled by app.py]
    {% if output %}
    Predicted Forged Regions
    [result image rendered from the base64-encoded "img" template variable]
    {% endif %}

Section 4 (about and references):
    ABOUT
    This project was made by Neelanjan Manna, a Machine Learning and Data Science enthusiast currently enrolled in the Bachelor of Technology program at KIIT University.
    REFERENCES
    1. ManTra-Net: Manipulation Tracing Network for Detection and Localization of Image Forgeries With Anomalous Features
    2. Image Forgery Detection: Using the Power of CNNs to Detect Image Manipulation

[closing markup and the inline fullPage.js initialisation script, including a {% if output %} block that runs only when a prediction is shown, lost during extraction]

--------------------------------------------------------------------------------