├── README.md
├── age
│   ├── age_deploy.prototxt
│   └── age_net.caffemodel
├── detect.py
├── face
│   ├── opencv_face_detector.pbtxt
│   └── opencv_face_detector_uint8.pb
├── gender
│   ├── gender_deploy.prototxt
│   └── gender_net.caffemodel
├── girl1.jpg
├── girl2.jpg
├── kid1.jpg
├── kid2.jpg
├── man1.jpg
├── man2.jpg
├── sample
│   ├── Detecting age and gender girl1.png
│   ├── Detecting age and gender girl2.png
│   ├── Detecting age and gender kid1.png
│   ├── Detecting age and gender kid2.png
│   ├── Detecting age and gender man1.png
│   ├── Detecting age and gender man2.png
│   └── Detecting age and gender woman1.png
└── woman1.jpg
/README.md: --------------------------------------------------------------------------------
# Gender-and-Age-Detection

# Objective :


To build a gender and age detector that can approximately guess the gender and age of a person (face) in a picture or through a webcam.


# About the Project :


In this Python project, I used deep learning to identify the gender and approximate age of a person from a single image of a face. I used the models trained by Tal Hassner and Gil Levi. The predicted gender is one of 'Male' or 'Female', and the predicted age falls into one of the following ranges: (0-2), (4-6), (8-12), (15-20), (25-32), (38-43), (48-53), (60-100) (8 nodes in the final softmax layer). It is very difficult to guess an exact age from a single image because of factors like makeup, lighting, obstructions, and facial expressions, so I framed this as a classification problem rather than a regression problem.
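Concretely, both classifiers end in a softmax layer (8 nodes for age, 2 for gender), and detect.py simply takes the argmax of the network output. A minimal sketch of that final step, with made-up probabilities purely for illustration:

```python
import numpy as np

ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)',
           '(25-32)', '(38-43)', '(48-53)', '(60-100)']

# Shape (1, 8): one softmax probability per age bucket.
# These values are hypothetical, for illustration only.
agePreds = np.array([[0.01, 0.02, 0.05, 0.10, 0.60, 0.12, 0.06, 0.04]])

age = ageList[agePreds[0].argmax()]  # -> '(25-32)'
print(f'Age: {age[1:-1]} years')     # prints: Age: 25-32 years
```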


# Dataset :


For this project, the Adience dataset is used; the dataset is available in the public domain and you can find it here. It has a total of 26,580 photos of 2,284 subjects in the eight age ranges mentioned above and is about 1 GB in size. The models I used were trained on this dataset.


# Additional Python Libraries Required :

detect.py imports cv2, math, and argparse. math and argparse ship with the Python standard library, so the only extra install is OpenCV: `pip install opencv-python` (on very old setups, argparse can also be installed with `pip install argparse`).
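As a quick sanity check of the environment (a suggestion, not part of the project), the imports detect.py relies on can be verified like this:

```python
import argparse  # standard library: command-line parsing
import math      # standard library: also imported by detect.py

import cv2       # provided by the opencv-python package

print("OpenCV version:", cv2.__version__)
```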

# The contents of this Project :

- detect.py: the detection script
- age/: age_deploy.prototxt and age_net.caffemodel (the age classifier)
- gender/: gender_deploy.prototxt and gender_net.caffemodel (the gender classifier)
- face/: opencv_face_detector.pbtxt and opencv_face_detector_uint8.pb (the face detector)
- a few test images (girl1.jpg, girl2.jpg, kid1.jpg, kid2.jpg, man1.jpg, man2.jpg, woman1.jpg) and their detection outputs in sample/

For face detection, we have a .pb file: a protobuf (protocol buffer) file that holds both the graph definition and the trained weights of the model, which is all we need to run the trained model. While a .pb file holds the protobuf in binary format, a file with the .pbtxt extension holds it in text format. These are TensorFlow files. For age and gender, the .prototxt files describe the network configuration, and the .caffemodel files define the internal states of the layers' parameters.
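As a rough sketch of how these model files are loaded together, here are the same `cv2.dnn.readNet` calls detect.py makes, with the file paths used in this repository:

```python
import cv2

# TensorFlow face detector: binary graph (.pb) plus text graph (.pbtxt)
faceNet = cv2.dnn.readNet("face/opencv_face_detector_uint8.pb",
                          "face/opencv_face_detector.pbtxt")

# Caffe classifiers: trained weights (.caffemodel) plus network definition (.prototxt)
ageNet = cv2.dnn.readNet("age/age_net.caffemodel", "age/age_deploy.prototxt")
genderNet = cv2.dnn.readNet("gender/gender_net.caffemodel",
                            "gender/gender_deploy.prototxt")
```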


# Usage :

Download the repository, open a command prompt in the project folder, and run `python detect.py --image <image_name>`. Omit the `--image` argument to detect from the webcam feed instead.

Note: The image should be present in the same folder as detect.py and the model folders.
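That is because detect.py hands the `--image` value straight to OpenCV's capture constructor, so a bare file name is resolved relative to the directory you run the command from. A self-contained sketch mirroring the relevant lines of detect.py:

```python
import argparse
import cv2

parser = argparse.ArgumentParser()
parser.add_argument('--image')  # optional: file name of the input image
args = parser.parse_args()

# An image path if one was given, otherwise device 0 (the default webcam).
# A bare name like girl1.jpg is resolved from the current working directory.
video = cv2.VideoCapture(args.image if args.image else 0)
```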


# Examples :


NOTE: I downloaded the images from Google. If you have any query or problem with them, I can remove them; they are used here for educational purposes only.

    >python detect.py --image girl1.jpg
    Gender: Female
    Age: 25-32 years

<img src="sample/Detecting age and gender girl1.png">

    >python detect.py --image kid1.jpg
    Gender: Male
    Age: 4-6 years

<img src="sample/Detecting age and gender kid1.png">

    >python detect.py --image man1.jpg
    Gender: Male
    Age: 38-43 years

<img src="sample/Detecting age and gender man1.png">

    >python detect.py --image woman1.jpg
    Gender: Female
    Age: 38-43 years

<img src="sample/Detecting age and gender woman1.png">

# Support :
If you found this project helpful, or if you have any query regarding it, contact me:
-------------------------------------------------------------------------------- /age/age_deploy.prototxt: -------------------------------------------------------------------------------- 1 | name: "CaffeNet" 2 | input: "data" 3 | input_dim: 1 4 | input_dim: 3 5 | input_dim: 227 6 | input_dim: 227 7 | layers { 8 | name: "conv1" 9 | type: CONVOLUTION 10 | bottom: "data" 11 | top: "conv1" 12 | convolution_param { 13 | num_output: 96 14 | kernel_size: 7 15 | stride: 4 16 | } 17 | } 18 | layers { 19 | name: "relu1" 20 | type: RELU 21 | bottom: "conv1" 22 | top: "conv1" 23 | } 24 | layers { 25 | name: "pool1" 26 | type: POOLING 27 | bottom: "conv1" 28 | top: "pool1" 29 | pooling_param { 30 | pool: MAX 31 | kernel_size: 3 32 | stride: 2 33 | } 34 | } 35 | layers { 36 | name: "norm1" 37 | type: LRN 38 | bottom: "pool1" 39 | top: "norm1" 40 | lrn_param { 41 | local_size: 5 42 | alpha: 0.0001 43 | beta: 0.75 44 | } 45 | } 46 | layers { 47 | name: "conv2" 48 | type: CONVOLUTION 49 | bottom: "norm1" 50 | top: "conv2" 51 | convolution_param { 52 | num_output: 256 53 | pad: 2 54 | kernel_size: 5 55 | } 56 | } 57 | layers { 58 | name: "relu2" 59 | type: RELU 60 | bottom: "conv2" 61 | top: "conv2" 62 | } 63 | layers { 64 | name: "pool2" 65 | type: POOLING 66 | bottom: "conv2" 67 | top: "pool2" 68 | pooling_param { 69 | pool: MAX 70 | kernel_size: 3 71 | stride: 2 72 | } 73 | } 74 | layers { 75 | name: "norm2" 76 | type: LRN 77 | bottom: "pool2" 78 | top: "norm2" 79 | lrn_param { 80 | local_size: 5 81 | alpha: 0.0001 82 | beta: 0.75 83 | } 84 | } 85 | layers { 86 | name: "conv3" 87 | type: CONVOLUTION 88 | bottom: "norm2" 89 | top: "conv3" 90 | convolution_param { 91 | num_output: 384 92 | pad: 1 93 | kernel_size: 3 94 | } 95 | } 96 | layers{ 97 | name: "relu3" 98 | type: RELU 99 | bottom: "conv3" 100 | top: "conv3" 101 | } 102 | layers { 103 | name: "pool5" 104 | type: POOLING 105 | bottom: "conv3" 106 | top: "pool5" 107 | pooling_param { 108 | pool: MAX 109 | kernel_size: 3 110 | stride: 2 111 | } 112 | } 113 | layers { 114 | name: "fc6" 115 | type: INNER_PRODUCT 116 | bottom: "pool5" 117 | top: "fc6" 118 | inner_product_param { 119 | num_output: 512 120 | } 121 | } 122 | layers { 123 | name: "relu6" 124 | type: RELU 125 | bottom: "fc6" 126 | top: "fc6" 127 | } 128 | layers { 129 | name: "drop6" 130 | type: DROPOUT 131 | bottom: "fc6" 132 | top: "fc6" 133 | dropout_param { 134 | dropout_ratio: 0.5 135 | } 136 | } 137 | layers { 138 | name: "fc7" 139 | type: INNER_PRODUCT 140 | bottom: "fc6" 141 | top: "fc7" 142 | inner_product_param { 143 | num_output: 512 144 | } 145 | } 146 | layers { 147 | name: "relu7" 148 | type: RELU 149 | bottom: "fc7" 150 | top: "fc7" 151 | } 152 | layers { 153 | name: "drop7" 154 | type: DROPOUT 155 | bottom: "fc7" 156 | top: "fc7" 157 | dropout_param { 158 | dropout_ratio: 0.5 159 | } 160 | } 161 | layers { 162 | name: "fc8" 163 | type: INNER_PRODUCT 164 | bottom: "fc7" 165 | top: "fc8" 166 | inner_product_param { 167 | num_output: 8 168 | } 169 | } 170 | layers { 171
| name: "prob" 172 | type: SOFTMAX 173 | bottom: "fc8" 174 | top: "prob" 175 | } -------------------------------------------------------------------------------- /age/age_net.caffemodel: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/age/age_net.caffemodel -------------------------------------------------------------------------------- /detect.py: -------------------------------------------------------------------------------- 1 | #A Gender and Age Detection program by Mahesh Sawant 2 | 3 | import cv2 4 | import math 5 | import argparse 6 | 7 | def highlightFace(net, frame, conf_threshold=0.7): 8 | frameOpencvDnn=frame.copy() 9 | frameHeight=frameOpencvDnn.shape[0] 10 | frameWidth=frameOpencvDnn.shape[1] 11 | blob=cv2.dnn.blobFromImage(frameOpencvDnn, 1.0, (300, 300), [104, 117, 123], True, False) 12 | 13 | net.setInput(blob) 14 | detections=net.forward() 15 | # [,frame,no of detections,[classid,class score,conf,x,y,h,w] 16 | faceBoxes=[] 17 | for i in range(detections.shape[2]): 18 | confidence=detections[0,0,i,2] 19 | if confidence>conf_threshold: 20 | x1=int(detections[0,0,i,3]*frameWidth) 21 | y1=int(detections[0,0,i,4]*frameHeight) 22 | x2=int(detections[0,0,i,5]*frameWidth) 23 | y2=int(detections[0,0,i,6]*frameHeight) 24 | faceBoxes.append([x1,y1,x2,y2]) 25 | cv2.rectangle(frameOpencvDnn, (x1,y1), (x2,y2), (0,255,0), int(round(frameHeight/150)), 8) 26 | return frameOpencvDnn,faceBoxes 27 | 28 | 29 | parser=argparse.ArgumentParser() 30 | parser.add_argument('--image') 31 | 32 | args=parser.parse_args() 33 | 34 | faceProto="face/opencv_face_detector.pbtxt" 35 | faceModel="face/opencv_face_detector_uint8.pb" 36 | ageProto="age/age_deploy.prototxt" 37 | ageModel="age/age_net.caffemodel" 38 | genderProto="gender/gender_deploy.prototxt" 39 | genderModel="gender/gender_net.caffemodel" 40 | 41 | MODEL_MEAN_VALUES=(78.4263377603, 87.7689143744, 114.895847746) 42 | ageList=['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)'] 43 | genderList=['Male','Female'] 44 | 45 | faceNet=cv2.dnn.readNet(faceModel,faceProto) 46 | ageNet=cv2.dnn.readNet(ageModel,ageProto) 47 | genderNet=cv2.dnn.readNet(genderModel,genderProto) 48 | 49 | video=cv2.VideoCapture(args.image if args.image else 0) 50 | 51 | #To Save the Video 52 | '''fourcc = cv2.VideoWriter_fourcc(*'XVID') 53 | out = cv2.VideoWriter('webcamOut.avi',fourcc,30.0,(640,480))''' 54 | 55 | padding=20 56 | while True: 57 | hasFrame,frame=video.read() 58 | if not hasFrame: 59 | cv2.waitKey() 60 | break 61 | 62 | resultImg,faceBoxes=highlightFace(faceNet,frame) 63 | if not faceBoxes: 64 | print("No face detected") 65 | 66 | for faceBox in faceBoxes: 67 | face=frame[max(0,faceBox[1]-padding): 68 | min(faceBox[3]+padding,frame.shape[0]-1),max(0,faceBox[0]-padding) 69 | :min(faceBox[2]+padding, frame.shape[1]-1)] 70 | 71 | blob=cv2.dnn.blobFromImage(face, 1.0, (227,227), MODEL_MEAN_VALUES, swapRB=False) 72 | genderNet.setInput(blob) 73 | genderPreds=genderNet.forward() 74 | gender=genderList[genderPreds[0].argmax()] 75 | print(f'Gender: {gender}') 76 | 77 | ageNet.setInput(blob) 78 | agePreds=ageNet.forward() 79 | age=ageList[agePreds[0].argmax()] 80 | print(f'Age: {age[1:-1]} years') 81 | 82 | cv2.putText(resultImg, f'{gender}, {age}', (faceBox[0], faceBox[1]-10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0,255,255), 2, cv2.LINE_AA) 83 | cv2.imshow("Detecting age and gender", 
resultImg) 84 | key = cv2.waitKey(1) & 0xFF 85 | 86 | # if the `q` key was pressed, break from the loop 87 | if key == ord("q"): 88 | break 89 | # do a bit of cleanup 90 | cv2.destroyAllWindows() 91 | video.release() 92 | #out.release() -------------------------------------------------------------------------------- /face/opencv_face_detector.pbtxt: -------------------------------------------------------------------------------- 1 | node { 2 | name: "data" 3 | op: "Placeholder" 4 | attr { 5 | key: "dtype" 6 | value { 7 | type: DT_FLOAT 8 | } 9 | } 10 | } 11 | node { 12 | name: "data_bn/FusedBatchNorm" 13 | op: "FusedBatchNorm" 14 | input: "data:0" 15 | input: "data_bn/gamma" 16 | input: "data_bn/beta" 17 | input: "data_bn/mean" 18 | input: "data_bn/std" 19 | attr { 20 | key: "epsilon" 21 | value { 22 | f: 1.00099996416e-05 23 | } 24 | } 25 | } 26 | node { 27 | name: "data_scale/Mul" 28 | op: "Mul" 29 | input: "data_bn/FusedBatchNorm" 30 | input: "data_scale/mul" 31 | } 32 | node { 33 | name: "data_scale/BiasAdd" 34 | op: "BiasAdd" 35 | input: "data_scale/Mul" 36 | input: "data_scale/add" 37 | } 38 | node { 39 | name: "SpaceToBatchND/block_shape" 40 | op: "Const" 41 | attr { 42 | key: "value" 43 | value { 44 | tensor { 45 | dtype: DT_INT32 46 | tensor_shape { 47 | dim { 48 | size: 2 49 | } 50 | } 51 | int_val: 1 52 | int_val: 1 53 | } 54 | } 55 | } 56 | } 57 | node { 58 | name: "SpaceToBatchND/paddings" 59 | op: "Const" 60 | attr { 61 | key: "value" 62 | value { 63 | tensor { 64 | dtype: DT_INT32 65 | tensor_shape { 66 | dim { 67 | size: 2 68 | } 69 | dim { 70 | size: 2 71 | } 72 | } 73 | int_val: 3 74 | int_val: 3 75 | int_val: 3 76 | int_val: 3 77 | } 78 | } 79 | } 80 | } 81 | node { 82 | name: "Pad" 83 | op: "SpaceToBatchND" 84 | input: "data_scale/BiasAdd" 85 | input: "SpaceToBatchND/block_shape" 86 | input: "SpaceToBatchND/paddings" 87 | } 88 | node { 89 | name: "conv1_h/Conv2D" 90 | op: "Conv2D" 91 | input: "Pad" 92 | input: "conv1_h/weights" 93 | attr { 94 | key: "dilations" 95 | value { 96 | list { 97 | i: 1 98 | i: 1 99 | i: 1 100 | i: 1 101 | } 102 | } 103 | } 104 | attr { 105 | key: "padding" 106 | value { 107 | s: "VALID" 108 | } 109 | } 110 | attr { 111 | key: "strides" 112 | value { 113 | list { 114 | i: 1 115 | i: 2 116 | i: 2 117 | i: 1 118 | } 119 | } 120 | } 121 | } 122 | node { 123 | name: "conv1_h/BiasAdd" 124 | op: "BiasAdd" 125 | input: "conv1_h/Conv2D" 126 | input: "conv1_h/bias" 127 | } 128 | node { 129 | name: "BatchToSpaceND" 130 | op: "BatchToSpaceND" 131 | input: "conv1_h/BiasAdd" 132 | } 133 | node { 134 | name: "conv1_bn_h/FusedBatchNorm" 135 | op: "FusedBatchNorm" 136 | input: "BatchToSpaceND" 137 | input: "conv1_bn_h/gamma" 138 | input: "conv1_bn_h/beta" 139 | input: "conv1_bn_h/mean" 140 | input: "conv1_bn_h/std" 141 | attr { 142 | key: "epsilon" 143 | value { 144 | f: 1.00099996416e-05 145 | } 146 | } 147 | } 148 | node { 149 | name: "conv1_scale_h/Mul" 150 | op: "Mul" 151 | input: "conv1_bn_h/FusedBatchNorm" 152 | input: "conv1_scale_h/mul" 153 | } 154 | node { 155 | name: "conv1_scale_h/BiasAdd" 156 | op: "BiasAdd" 157 | input: "conv1_scale_h/Mul" 158 | input: "conv1_scale_h/add" 159 | } 160 | node { 161 | name: "Relu" 162 | op: "Relu" 163 | input: "conv1_scale_h/BiasAdd" 164 | } 165 | node { 166 | name: "conv1_pool/MaxPool" 167 | op: "MaxPool" 168 | input: "Relu" 169 | attr { 170 | key: "ksize" 171 | value { 172 | list { 173 | i: 1 174 | i: 3 175 | i: 3 176 | i: 1 177 | } 178 | } 179 | } 180 | attr { 181 | key: "padding" 182 | value { 183 | s: "SAME" 184 
| } 185 | } 186 | attr { 187 | key: "strides" 188 | value { 189 | list { 190 | i: 1 191 | i: 2 192 | i: 2 193 | i: 1 194 | } 195 | } 196 | } 197 | } 198 | node { 199 | name: "layer_64_1_conv1_h/Conv2D" 200 | op: "Conv2D" 201 | input: "conv1_pool/MaxPool" 202 | input: "layer_64_1_conv1_h/weights" 203 | attr { 204 | key: "dilations" 205 | value { 206 | list { 207 | i: 1 208 | i: 1 209 | i: 1 210 | i: 1 211 | } 212 | } 213 | } 214 | attr { 215 | key: "padding" 216 | value { 217 | s: "SAME" 218 | } 219 | } 220 | attr { 221 | key: "strides" 222 | value { 223 | list { 224 | i: 1 225 | i: 1 226 | i: 1 227 | i: 1 228 | } 229 | } 230 | } 231 | } 232 | node { 233 | name: "layer_64_1_bn2_h/FusedBatchNorm" 234 | op: "BiasAdd" 235 | input: "layer_64_1_conv1_h/Conv2D" 236 | input: "layer_64_1_conv1_h/Conv2D_bn_offset" 237 | } 238 | node { 239 | name: "layer_64_1_scale2_h/Mul" 240 | op: "Mul" 241 | input: "layer_64_1_bn2_h/FusedBatchNorm" 242 | input: "layer_64_1_scale2_h/mul" 243 | } 244 | node { 245 | name: "layer_64_1_scale2_h/BiasAdd" 246 | op: "BiasAdd" 247 | input: "layer_64_1_scale2_h/Mul" 248 | input: "layer_64_1_scale2_h/add" 249 | } 250 | node { 251 | name: "Relu_1" 252 | op: "Relu" 253 | input: "layer_64_1_scale2_h/BiasAdd" 254 | } 255 | node { 256 | name: "layer_64_1_conv2_h/Conv2D" 257 | op: "Conv2D" 258 | input: "Relu_1" 259 | input: "layer_64_1_conv2_h/weights" 260 | attr { 261 | key: "dilations" 262 | value { 263 | list { 264 | i: 1 265 | i: 1 266 | i: 1 267 | i: 1 268 | } 269 | } 270 | } 271 | attr { 272 | key: "padding" 273 | value { 274 | s: "SAME" 275 | } 276 | } 277 | attr { 278 | key: "strides" 279 | value { 280 | list { 281 | i: 1 282 | i: 1 283 | i: 1 284 | i: 1 285 | } 286 | } 287 | } 288 | } 289 | node { 290 | name: "add" 291 | op: "Add" 292 | input: "layer_64_1_conv2_h/Conv2D" 293 | input: "conv1_pool/MaxPool" 294 | } 295 | node { 296 | name: "layer_128_1_bn1_h/FusedBatchNorm" 297 | op: "FusedBatchNorm" 298 | input: "add" 299 | input: "layer_128_1_bn1_h/gamma" 300 | input: "layer_128_1_bn1_h/beta" 301 | input: "layer_128_1_bn1_h/mean" 302 | input: "layer_128_1_bn1_h/std" 303 | attr { 304 | key: "epsilon" 305 | value { 306 | f: 1.00099996416e-05 307 | } 308 | } 309 | } 310 | node { 311 | name: "layer_128_1_scale1_h/Mul" 312 | op: "Mul" 313 | input: "layer_128_1_bn1_h/FusedBatchNorm" 314 | input: "layer_128_1_scale1_h/mul" 315 | } 316 | node { 317 | name: "layer_128_1_scale1_h/BiasAdd" 318 | op: "BiasAdd" 319 | input: "layer_128_1_scale1_h/Mul" 320 | input: "layer_128_1_scale1_h/add" 321 | } 322 | node { 323 | name: "Relu_2" 324 | op: "Relu" 325 | input: "layer_128_1_scale1_h/BiasAdd" 326 | } 327 | node { 328 | name: "layer_128_1_conv_expand_h/Conv2D" 329 | op: "Conv2D" 330 | input: "Relu_2" 331 | input: "layer_128_1_conv_expand_h/weights" 332 | attr { 333 | key: "dilations" 334 | value { 335 | list { 336 | i: 1 337 | i: 1 338 | i: 1 339 | i: 1 340 | } 341 | } 342 | } 343 | attr { 344 | key: "padding" 345 | value { 346 | s: "SAME" 347 | } 348 | } 349 | attr { 350 | key: "strides" 351 | value { 352 | list { 353 | i: 1 354 | i: 2 355 | i: 2 356 | i: 1 357 | } 358 | } 359 | } 360 | } 361 | node { 362 | name: "layer_128_1_conv1_h/Conv2D" 363 | op: "Conv2D" 364 | input: "Relu_2" 365 | input: "layer_128_1_conv1_h/weights" 366 | attr { 367 | key: "dilations" 368 | value { 369 | list { 370 | i: 1 371 | i: 1 372 | i: 1 373 | i: 1 374 | } 375 | } 376 | } 377 | attr { 378 | key: "padding" 379 | value { 380 | s: "SAME" 381 | } 382 | } 383 | attr { 384 | key: "strides" 385 | value { 386 | 
list { 387 | i: 1 388 | i: 2 389 | i: 2 390 | i: 1 391 | } 392 | } 393 | } 394 | } 395 | node { 396 | name: "layer_128_1_bn2/FusedBatchNorm" 397 | op: "BiasAdd" 398 | input: "layer_128_1_conv1_h/Conv2D" 399 | input: "layer_128_1_conv1_h/Conv2D_bn_offset" 400 | } 401 | node { 402 | name: "layer_128_1_scale2/Mul" 403 | op: "Mul" 404 | input: "layer_128_1_bn2/FusedBatchNorm" 405 | input: "layer_128_1_scale2/mul" 406 | } 407 | node { 408 | name: "layer_128_1_scale2/BiasAdd" 409 | op: "BiasAdd" 410 | input: "layer_128_1_scale2/Mul" 411 | input: "layer_128_1_scale2/add" 412 | } 413 | node { 414 | name: "Relu_3" 415 | op: "Relu" 416 | input: "layer_128_1_scale2/BiasAdd" 417 | } 418 | node { 419 | name: "layer_128_1_conv2/Conv2D" 420 | op: "Conv2D" 421 | input: "Relu_3" 422 | input: "layer_128_1_conv2/weights" 423 | attr { 424 | key: "dilations" 425 | value { 426 | list { 427 | i: 1 428 | i: 1 429 | i: 1 430 | i: 1 431 | } 432 | } 433 | } 434 | attr { 435 | key: "padding" 436 | value { 437 | s: "SAME" 438 | } 439 | } 440 | attr { 441 | key: "strides" 442 | value { 443 | list { 444 | i: 1 445 | i: 1 446 | i: 1 447 | i: 1 448 | } 449 | } 450 | } 451 | } 452 | node { 453 | name: "add_1" 454 | op: "Add" 455 | input: "layer_128_1_conv2/Conv2D" 456 | input: "layer_128_1_conv_expand_h/Conv2D" 457 | } 458 | node { 459 | name: "layer_256_1_bn1/FusedBatchNorm" 460 | op: "FusedBatchNorm" 461 | input: "add_1" 462 | input: "layer_256_1_bn1/gamma" 463 | input: "layer_256_1_bn1/beta" 464 | input: "layer_256_1_bn1/mean" 465 | input: "layer_256_1_bn1/std" 466 | attr { 467 | key: "epsilon" 468 | value { 469 | f: 1.00099996416e-05 470 | } 471 | } 472 | } 473 | node { 474 | name: "layer_256_1_scale1/Mul" 475 | op: "Mul" 476 | input: "layer_256_1_bn1/FusedBatchNorm" 477 | input: "layer_256_1_scale1/mul" 478 | } 479 | node { 480 | name: "layer_256_1_scale1/BiasAdd" 481 | op: "BiasAdd" 482 | input: "layer_256_1_scale1/Mul" 483 | input: "layer_256_1_scale1/add" 484 | } 485 | node { 486 | name: "Relu_4" 487 | op: "Relu" 488 | input: "layer_256_1_scale1/BiasAdd" 489 | } 490 | node { 491 | name: "SpaceToBatchND_1/paddings" 492 | op: "Const" 493 | attr { 494 | key: "value" 495 | value { 496 | tensor { 497 | dtype: DT_INT32 498 | tensor_shape { 499 | dim { 500 | size: 2 501 | } 502 | dim { 503 | size: 2 504 | } 505 | } 506 | int_val: 1 507 | int_val: 1 508 | int_val: 1 509 | int_val: 1 510 | } 511 | } 512 | } 513 | } 514 | node { 515 | name: "layer_256_1_conv_expand/Conv2D" 516 | op: "Conv2D" 517 | input: "Relu_4" 518 | input: "layer_256_1_conv_expand/weights" 519 | attr { 520 | key: "dilations" 521 | value { 522 | list { 523 | i: 1 524 | i: 1 525 | i: 1 526 | i: 1 527 | } 528 | } 529 | } 530 | attr { 531 | key: "padding" 532 | value { 533 | s: "SAME" 534 | } 535 | } 536 | attr { 537 | key: "strides" 538 | value { 539 | list { 540 | i: 1 541 | i: 2 542 | i: 2 543 | i: 1 544 | } 545 | } 546 | } 547 | } 548 | node { 549 | name: "conv4_3_norm/l2_normalize" 550 | op: "L2Normalize" 551 | input: "Relu_4:0" 552 | input: "conv4_3_norm/l2_normalize/Sum/reduction_indices" 553 | } 554 | node { 555 | name: "conv4_3_norm/mul_1" 556 | op: "Mul" 557 | input: "conv4_3_norm/l2_normalize" 558 | input: "conv4_3_norm/mul" 559 | } 560 | node { 561 | name: "conv4_3_norm_mbox_loc/Conv2D" 562 | op: "Conv2D" 563 | input: "conv4_3_norm/mul_1" 564 | input: "conv4_3_norm_mbox_loc/weights" 565 | attr { 566 | key: "dilations" 567 | value { 568 | list { 569 | i: 1 570 | i: 1 571 | i: 1 572 | i: 1 573 | } 574 | } 575 | } 576 | attr { 577 | key: "padding" 
578 | value { 579 | s: "SAME" 580 | } 581 | } 582 | attr { 583 | key: "strides" 584 | value { 585 | list { 586 | i: 1 587 | i: 1 588 | i: 1 589 | i: 1 590 | } 591 | } 592 | } 593 | } 594 | node { 595 | name: "conv4_3_norm_mbox_loc/BiasAdd" 596 | op: "BiasAdd" 597 | input: "conv4_3_norm_mbox_loc/Conv2D" 598 | input: "conv4_3_norm_mbox_loc/bias" 599 | } 600 | node { 601 | name: "flatten/Reshape" 602 | op: "Flatten" 603 | input: "conv4_3_norm_mbox_loc/BiasAdd" 604 | } 605 | node { 606 | name: "conv4_3_norm_mbox_conf/Conv2D" 607 | op: "Conv2D" 608 | input: "conv4_3_norm/mul_1" 609 | input: "conv4_3_norm_mbox_conf/weights" 610 | attr { 611 | key: "dilations" 612 | value { 613 | list { 614 | i: 1 615 | i: 1 616 | i: 1 617 | i: 1 618 | } 619 | } 620 | } 621 | attr { 622 | key: "padding" 623 | value { 624 | s: "SAME" 625 | } 626 | } 627 | attr { 628 | key: "strides" 629 | value { 630 | list { 631 | i: 1 632 | i: 1 633 | i: 1 634 | i: 1 635 | } 636 | } 637 | } 638 | } 639 | node { 640 | name: "conv4_3_norm_mbox_conf/BiasAdd" 641 | op: "BiasAdd" 642 | input: "conv4_3_norm_mbox_conf/Conv2D" 643 | input: "conv4_3_norm_mbox_conf/bias" 644 | } 645 | node { 646 | name: "flatten_6/Reshape" 647 | op: "Flatten" 648 | input: "conv4_3_norm_mbox_conf/BiasAdd" 649 | } 650 | node { 651 | name: "Pad_1" 652 | op: "SpaceToBatchND" 653 | input: "Relu_4" 654 | input: "SpaceToBatchND/block_shape" 655 | input: "SpaceToBatchND_1/paddings" 656 | } 657 | node { 658 | name: "layer_256_1_conv1/Conv2D" 659 | op: "Conv2D" 660 | input: "Pad_1" 661 | input: "layer_256_1_conv1/weights" 662 | attr { 663 | key: "dilations" 664 | value { 665 | list { 666 | i: 1 667 | i: 1 668 | i: 1 669 | i: 1 670 | } 671 | } 672 | } 673 | attr { 674 | key: "padding" 675 | value { 676 | s: "VALID" 677 | } 678 | } 679 | attr { 680 | key: "strides" 681 | value { 682 | list { 683 | i: 1 684 | i: 2 685 | i: 2 686 | i: 1 687 | } 688 | } 689 | } 690 | } 691 | node { 692 | name: "layer_256_1_bn2/FusedBatchNorm" 693 | op: "BiasAdd" 694 | input: "layer_256_1_conv1/Conv2D" 695 | input: "layer_256_1_conv1/Conv2D_bn_offset" 696 | } 697 | node { 698 | name: "BatchToSpaceND_1" 699 | op: "BatchToSpaceND" 700 | input: "layer_256_1_bn2/FusedBatchNorm" 701 | } 702 | node { 703 | name: "layer_256_1_scale2/Mul" 704 | op: "Mul" 705 | input: "BatchToSpaceND_1" 706 | input: "layer_256_1_scale2/mul" 707 | } 708 | node { 709 | name: "layer_256_1_scale2/BiasAdd" 710 | op: "BiasAdd" 711 | input: "layer_256_1_scale2/Mul" 712 | input: "layer_256_1_scale2/add" 713 | } 714 | node { 715 | name: "Relu_5" 716 | op: "Relu" 717 | input: "layer_256_1_scale2/BiasAdd" 718 | } 719 | node { 720 | name: "layer_256_1_conv2/Conv2D" 721 | op: "Conv2D" 722 | input: "Relu_5" 723 | input: "layer_256_1_conv2/weights" 724 | attr { 725 | key: "dilations" 726 | value { 727 | list { 728 | i: 1 729 | i: 1 730 | i: 1 731 | i: 1 732 | } 733 | } 734 | } 735 | attr { 736 | key: "padding" 737 | value { 738 | s: "SAME" 739 | } 740 | } 741 | attr { 742 | key: "strides" 743 | value { 744 | list { 745 | i: 1 746 | i: 1 747 | i: 1 748 | i: 1 749 | } 750 | } 751 | } 752 | } 753 | node { 754 | name: "add_2" 755 | op: "Add" 756 | input: "layer_256_1_conv2/Conv2D" 757 | input: "layer_256_1_conv_expand/Conv2D" 758 | } 759 | node { 760 | name: "layer_512_1_bn1/FusedBatchNorm" 761 | op: "FusedBatchNorm" 762 | input: "add_2" 763 | input: "layer_512_1_bn1/gamma" 764 | input: "layer_512_1_bn1/beta" 765 | input: "layer_512_1_bn1/mean" 766 | input: "layer_512_1_bn1/std" 767 | attr { 768 | key: "epsilon" 769 | value { 770 | 
f: 1.00099996416e-05 771 | } 772 | } 773 | } 774 | node { 775 | name: "layer_512_1_scale1/Mul" 776 | op: "Mul" 777 | input: "layer_512_1_bn1/FusedBatchNorm" 778 | input: "layer_512_1_scale1/mul" 779 | } 780 | node { 781 | name: "layer_512_1_scale1/BiasAdd" 782 | op: "BiasAdd" 783 | input: "layer_512_1_scale1/Mul" 784 | input: "layer_512_1_scale1/add" 785 | } 786 | node { 787 | name: "Relu_6" 788 | op: "Relu" 789 | input: "layer_512_1_scale1/BiasAdd" 790 | } 791 | node { 792 | name: "layer_512_1_conv_expand_h/Conv2D" 793 | op: "Conv2D" 794 | input: "Relu_6" 795 | input: "layer_512_1_conv_expand_h/weights" 796 | attr { 797 | key: "dilations" 798 | value { 799 | list { 800 | i: 1 801 | i: 1 802 | i: 1 803 | i: 1 804 | } 805 | } 806 | } 807 | attr { 808 | key: "padding" 809 | value { 810 | s: "SAME" 811 | } 812 | } 813 | attr { 814 | key: "strides" 815 | value { 816 | list { 817 | i: 1 818 | i: 1 819 | i: 1 820 | i: 1 821 | } 822 | } 823 | } 824 | } 825 | node { 826 | name: "layer_512_1_conv1_h/Conv2D" 827 | op: "Conv2D" 828 | input: "Relu_6" 829 | input: "layer_512_1_conv1_h/weights" 830 | attr { 831 | key: "dilations" 832 | value { 833 | list { 834 | i: 1 835 | i: 1 836 | i: 1 837 | i: 1 838 | } 839 | } 840 | } 841 | attr { 842 | key: "padding" 843 | value { 844 | s: "SAME" 845 | } 846 | } 847 | attr { 848 | key: "strides" 849 | value { 850 | list { 851 | i: 1 852 | i: 1 853 | i: 1 854 | i: 1 855 | } 856 | } 857 | } 858 | } 859 | node { 860 | name: "layer_512_1_bn2_h/FusedBatchNorm" 861 | op: "BiasAdd" 862 | input: "layer_512_1_conv1_h/Conv2D" 863 | input: "layer_512_1_conv1_h/Conv2D_bn_offset" 864 | } 865 | node { 866 | name: "layer_512_1_scale2_h/Mul" 867 | op: "Mul" 868 | input: "layer_512_1_bn2_h/FusedBatchNorm" 869 | input: "layer_512_1_scale2_h/mul" 870 | } 871 | node { 872 | name: "layer_512_1_scale2_h/BiasAdd" 873 | op: "BiasAdd" 874 | input: "layer_512_1_scale2_h/Mul" 875 | input: "layer_512_1_scale2_h/add" 876 | } 877 | node { 878 | name: "Relu_7" 879 | op: "Relu" 880 | input: "layer_512_1_scale2_h/BiasAdd" 881 | } 882 | node { 883 | name: "layer_512_1_conv2_h/convolution/SpaceToBatchND" 884 | op: "SpaceToBatchND" 885 | input: "Relu_7" 886 | input: "layer_512_1_conv2_h/convolution/SpaceToBatchND/block_shape" 887 | input: "layer_512_1_conv2_h/convolution/SpaceToBatchND/paddings" 888 | } 889 | node { 890 | name: "layer_512_1_conv2_h/convolution" 891 | op: "Conv2D" 892 | input: "layer_512_1_conv2_h/convolution/SpaceToBatchND" 893 | input: "layer_512_1_conv2_h/weights" 894 | attr { 895 | key: "dilations" 896 | value { 897 | list { 898 | i: 1 899 | i: 1 900 | i: 1 901 | i: 1 902 | } 903 | } 904 | } 905 | attr { 906 | key: "padding" 907 | value { 908 | s: "VALID" 909 | } 910 | } 911 | attr { 912 | key: "strides" 913 | value { 914 | list { 915 | i: 1 916 | i: 1 917 | i: 1 918 | i: 1 919 | } 920 | } 921 | } 922 | } 923 | node { 924 | name: "layer_512_1_conv2_h/convolution/BatchToSpaceND" 925 | op: "BatchToSpaceND" 926 | input: "layer_512_1_conv2_h/convolution" 927 | input: "layer_512_1_conv2_h/convolution/BatchToSpaceND/block_shape" 928 | input: "layer_512_1_conv2_h/convolution/BatchToSpaceND/crops" 929 | } 930 | node { 931 | name: "add_3" 932 | op: "Add" 933 | input: "layer_512_1_conv2_h/convolution/BatchToSpaceND" 934 | input: "layer_512_1_conv_expand_h/Conv2D" 935 | } 936 | node { 937 | name: "last_bn_h/FusedBatchNorm" 938 | op: "FusedBatchNorm" 939 | input: "add_3" 940 | input: "last_bn_h/gamma" 941 | input: "last_bn_h/beta" 942 | input: "last_bn_h/mean" 943 | input: "last_bn_h/std" 
944 | attr { 945 | key: "epsilon" 946 | value { 947 | f: 1.00099996416e-05 948 | } 949 | } 950 | } 951 | node { 952 | name: "last_scale_h/Mul" 953 | op: "Mul" 954 | input: "last_bn_h/FusedBatchNorm" 955 | input: "last_scale_h/mul" 956 | } 957 | node { 958 | name: "last_scale_h/BiasAdd" 959 | op: "BiasAdd" 960 | input: "last_scale_h/Mul" 961 | input: "last_scale_h/add" 962 | } 963 | node { 964 | name: "last_relu" 965 | op: "Relu" 966 | input: "last_scale_h/BiasAdd" 967 | } 968 | node { 969 | name: "conv6_1_h/Conv2D" 970 | op: "Conv2D" 971 | input: "last_relu" 972 | input: "conv6_1_h/weights" 973 | attr { 974 | key: "dilations" 975 | value { 976 | list { 977 | i: 1 978 | i: 1 979 | i: 1 980 | i: 1 981 | } 982 | } 983 | } 984 | attr { 985 | key: "padding" 986 | value { 987 | s: "SAME" 988 | } 989 | } 990 | attr { 991 | key: "strides" 992 | value { 993 | list { 994 | i: 1 995 | i: 1 996 | i: 1 997 | i: 1 998 | } 999 | } 1000 | } 1001 | } 1002 | node { 1003 | name: "conv6_1_h/BiasAdd" 1004 | op: "BiasAdd" 1005 | input: "conv6_1_h/Conv2D" 1006 | input: "conv6_1_h/bias" 1007 | } 1008 | node { 1009 | name: "conv6_1_h/Relu" 1010 | op: "Relu" 1011 | input: "conv6_1_h/BiasAdd" 1012 | } 1013 | node { 1014 | name: "conv6_2_h/Conv2D" 1015 | op: "Conv2D" 1016 | input: "conv6_1_h/Relu" 1017 | input: "conv6_2_h/weights" 1018 | attr { 1019 | key: "dilations" 1020 | value { 1021 | list { 1022 | i: 1 1023 | i: 1 1024 | i: 1 1025 | i: 1 1026 | } 1027 | } 1028 | } 1029 | attr { 1030 | key: "padding" 1031 | value { 1032 | s: "SAME" 1033 | } 1034 | } 1035 | attr { 1036 | key: "strides" 1037 | value { 1038 | list { 1039 | i: 1 1040 | i: 2 1041 | i: 2 1042 | i: 1 1043 | } 1044 | } 1045 | } 1046 | } 1047 | node { 1048 | name: "conv6_2_h/BiasAdd" 1049 | op: "BiasAdd" 1050 | input: "conv6_2_h/Conv2D" 1051 | input: "conv6_2_h/bias" 1052 | } 1053 | node { 1054 | name: "conv6_2_h/Relu" 1055 | op: "Relu" 1056 | input: "conv6_2_h/BiasAdd" 1057 | } 1058 | node { 1059 | name: "conv7_1_h/Conv2D" 1060 | op: "Conv2D" 1061 | input: "conv6_2_h/Relu" 1062 | input: "conv7_1_h/weights" 1063 | attr { 1064 | key: "dilations" 1065 | value { 1066 | list { 1067 | i: 1 1068 | i: 1 1069 | i: 1 1070 | i: 1 1071 | } 1072 | } 1073 | } 1074 | attr { 1075 | key: "padding" 1076 | value { 1077 | s: "SAME" 1078 | } 1079 | } 1080 | attr { 1081 | key: "strides" 1082 | value { 1083 | list { 1084 | i: 1 1085 | i: 1 1086 | i: 1 1087 | i: 1 1088 | } 1089 | } 1090 | } 1091 | } 1092 | node { 1093 | name: "conv7_1_h/BiasAdd" 1094 | op: "BiasAdd" 1095 | input: "conv7_1_h/Conv2D" 1096 | input: "conv7_1_h/bias" 1097 | } 1098 | node { 1099 | name: "conv7_1_h/Relu" 1100 | op: "Relu" 1101 | input: "conv7_1_h/BiasAdd" 1102 | } 1103 | node { 1104 | name: "Pad_2" 1105 | op: "SpaceToBatchND" 1106 | input: "conv7_1_h/Relu" 1107 | input: "SpaceToBatchND/block_shape" 1108 | input: "SpaceToBatchND_1/paddings" 1109 | } 1110 | node { 1111 | name: "conv7_2_h/Conv2D" 1112 | op: "Conv2D" 1113 | input: "Pad_2" 1114 | input: "conv7_2_h/weights" 1115 | attr { 1116 | key: "dilations" 1117 | value { 1118 | list { 1119 | i: 1 1120 | i: 1 1121 | i: 1 1122 | i: 1 1123 | } 1124 | } 1125 | } 1126 | attr { 1127 | key: "padding" 1128 | value { 1129 | s: "VALID" 1130 | } 1131 | } 1132 | attr { 1133 | key: "strides" 1134 | value { 1135 | list { 1136 | i: 1 1137 | i: 2 1138 | i: 2 1139 | i: 1 1140 | } 1141 | } 1142 | } 1143 | } 1144 | node { 1145 | name: "conv7_2_h/BiasAdd" 1146 | op: "BiasAdd" 1147 | input: "conv7_2_h/Conv2D" 1148 | input: "conv7_2_h/bias" 1149 | } 1150 | node { 1151 | 
name: "BatchToSpaceND_2" 1152 | op: "BatchToSpaceND" 1153 | input: "conv7_2_h/BiasAdd" 1154 | } 1155 | node { 1156 | name: "conv7_2_h/Relu" 1157 | op: "Relu" 1158 | input: "BatchToSpaceND_2" 1159 | } 1160 | node { 1161 | name: "conv8_1_h/Conv2D" 1162 | op: "Conv2D" 1163 | input: "conv7_2_h/Relu" 1164 | input: "conv8_1_h/weights" 1165 | attr { 1166 | key: "dilations" 1167 | value { 1168 | list { 1169 | i: 1 1170 | i: 1 1171 | i: 1 1172 | i: 1 1173 | } 1174 | } 1175 | } 1176 | attr { 1177 | key: "padding" 1178 | value { 1179 | s: "SAME" 1180 | } 1181 | } 1182 | attr { 1183 | key: "strides" 1184 | value { 1185 | list { 1186 | i: 1 1187 | i: 1 1188 | i: 1 1189 | i: 1 1190 | } 1191 | } 1192 | } 1193 | } 1194 | node { 1195 | name: "conv8_1_h/BiasAdd" 1196 | op: "BiasAdd" 1197 | input: "conv8_1_h/Conv2D" 1198 | input: "conv8_1_h/bias" 1199 | } 1200 | node { 1201 | name: "conv8_1_h/Relu" 1202 | op: "Relu" 1203 | input: "conv8_1_h/BiasAdd" 1204 | } 1205 | node { 1206 | name: "conv8_2_h/Conv2D" 1207 | op: "Conv2D" 1208 | input: "conv8_1_h/Relu" 1209 | input: "conv8_2_h/weights" 1210 | attr { 1211 | key: "dilations" 1212 | value { 1213 | list { 1214 | i: 1 1215 | i: 1 1216 | i: 1 1217 | i: 1 1218 | } 1219 | } 1220 | } 1221 | attr { 1222 | key: "padding" 1223 | value { 1224 | s: "SAME" 1225 | } 1226 | } 1227 | attr { 1228 | key: "strides" 1229 | value { 1230 | list { 1231 | i: 1 1232 | i: 1 1233 | i: 1 1234 | i: 1 1235 | } 1236 | } 1237 | } 1238 | } 1239 | node { 1240 | name: "conv8_2_h/BiasAdd" 1241 | op: "BiasAdd" 1242 | input: "conv8_2_h/Conv2D" 1243 | input: "conv8_2_h/bias" 1244 | } 1245 | node { 1246 | name: "conv8_2_h/Relu" 1247 | op: "Relu" 1248 | input: "conv8_2_h/BiasAdd" 1249 | } 1250 | node { 1251 | name: "conv9_1_h/Conv2D" 1252 | op: "Conv2D" 1253 | input: "conv8_2_h/Relu" 1254 | input: "conv9_1_h/weights" 1255 | attr { 1256 | key: "dilations" 1257 | value { 1258 | list { 1259 | i: 1 1260 | i: 1 1261 | i: 1 1262 | i: 1 1263 | } 1264 | } 1265 | } 1266 | attr { 1267 | key: "padding" 1268 | value { 1269 | s: "SAME" 1270 | } 1271 | } 1272 | attr { 1273 | key: "strides" 1274 | value { 1275 | list { 1276 | i: 1 1277 | i: 1 1278 | i: 1 1279 | i: 1 1280 | } 1281 | } 1282 | } 1283 | } 1284 | node { 1285 | name: "conv9_1_h/BiasAdd" 1286 | op: "BiasAdd" 1287 | input: "conv9_1_h/Conv2D" 1288 | input: "conv9_1_h/bias" 1289 | } 1290 | node { 1291 | name: "conv9_1_h/Relu" 1292 | op: "Relu" 1293 | input: "conv9_1_h/BiasAdd" 1294 | } 1295 | node { 1296 | name: "conv9_2_h/Conv2D" 1297 | op: "Conv2D" 1298 | input: "conv9_1_h/Relu" 1299 | input: "conv9_2_h/weights" 1300 | attr { 1301 | key: "dilations" 1302 | value { 1303 | list { 1304 | i: 1 1305 | i: 1 1306 | i: 1 1307 | i: 1 1308 | } 1309 | } 1310 | } 1311 | attr { 1312 | key: "padding" 1313 | value { 1314 | s: "SAME" 1315 | } 1316 | } 1317 | attr { 1318 | key: "strides" 1319 | value { 1320 | list { 1321 | i: 1 1322 | i: 1 1323 | i: 1 1324 | i: 1 1325 | } 1326 | } 1327 | } 1328 | } 1329 | node { 1330 | name: "conv9_2_h/BiasAdd" 1331 | op: "BiasAdd" 1332 | input: "conv9_2_h/Conv2D" 1333 | input: "conv9_2_h/bias" 1334 | } 1335 | node { 1336 | name: "conv9_2_h/Relu" 1337 | op: "Relu" 1338 | input: "conv9_2_h/BiasAdd" 1339 | } 1340 | node { 1341 | name: "conv9_2_mbox_loc/Conv2D" 1342 | op: "Conv2D" 1343 | input: "conv9_2_h/Relu" 1344 | input: "conv9_2_mbox_loc/weights" 1345 | attr { 1346 | key: "dilations" 1347 | value { 1348 | list { 1349 | i: 1 1350 | i: 1 1351 | i: 1 1352 | i: 1 1353 | } 1354 | } 1355 | } 1356 | attr { 1357 | key: "padding" 1358 | value { 
1359 | s: "SAME" 1360 | } 1361 | } 1362 | attr { 1363 | key: "strides" 1364 | value { 1365 | list { 1366 | i: 1 1367 | i: 1 1368 | i: 1 1369 | i: 1 1370 | } 1371 | } 1372 | } 1373 | } 1374 | node { 1375 | name: "conv9_2_mbox_loc/BiasAdd" 1376 | op: "BiasAdd" 1377 | input: "conv9_2_mbox_loc/Conv2D" 1378 | input: "conv9_2_mbox_loc/bias" 1379 | } 1380 | node { 1381 | name: "flatten_5/Reshape" 1382 | op: "Flatten" 1383 | input: "conv9_2_mbox_loc/BiasAdd" 1384 | } 1385 | node { 1386 | name: "conv9_2_mbox_conf/Conv2D" 1387 | op: "Conv2D" 1388 | input: "conv9_2_h/Relu" 1389 | input: "conv9_2_mbox_conf/weights" 1390 | attr { 1391 | key: "dilations" 1392 | value { 1393 | list { 1394 | i: 1 1395 | i: 1 1396 | i: 1 1397 | i: 1 1398 | } 1399 | } 1400 | } 1401 | attr { 1402 | key: "padding" 1403 | value { 1404 | s: "SAME" 1405 | } 1406 | } 1407 | attr { 1408 | key: "strides" 1409 | value { 1410 | list { 1411 | i: 1 1412 | i: 1 1413 | i: 1 1414 | i: 1 1415 | } 1416 | } 1417 | } 1418 | } 1419 | node { 1420 | name: "conv9_2_mbox_conf/BiasAdd" 1421 | op: "BiasAdd" 1422 | input: "conv9_2_mbox_conf/Conv2D" 1423 | input: "conv9_2_mbox_conf/bias" 1424 | } 1425 | node { 1426 | name: "flatten_11/Reshape" 1427 | op: "Flatten" 1428 | input: "conv9_2_mbox_conf/BiasAdd" 1429 | } 1430 | node { 1431 | name: "conv8_2_mbox_loc/Conv2D" 1432 | op: "Conv2D" 1433 | input: "conv8_2_h/Relu" 1434 | input: "conv8_2_mbox_loc/weights" 1435 | attr { 1436 | key: "dilations" 1437 | value { 1438 | list { 1439 | i: 1 1440 | i: 1 1441 | i: 1 1442 | i: 1 1443 | } 1444 | } 1445 | } 1446 | attr { 1447 | key: "padding" 1448 | value { 1449 | s: "SAME" 1450 | } 1451 | } 1452 | attr { 1453 | key: "strides" 1454 | value { 1455 | list { 1456 | i: 1 1457 | i: 1 1458 | i: 1 1459 | i: 1 1460 | } 1461 | } 1462 | } 1463 | } 1464 | node { 1465 | name: "conv8_2_mbox_loc/BiasAdd" 1466 | op: "BiasAdd" 1467 | input: "conv8_2_mbox_loc/Conv2D" 1468 | input: "conv8_2_mbox_loc/bias" 1469 | } 1470 | node { 1471 | name: "flatten_4/Reshape" 1472 | op: "Flatten" 1473 | input: "conv8_2_mbox_loc/BiasAdd" 1474 | } 1475 | node { 1476 | name: "conv8_2_mbox_conf/Conv2D" 1477 | op: "Conv2D" 1478 | input: "conv8_2_h/Relu" 1479 | input: "conv8_2_mbox_conf/weights" 1480 | attr { 1481 | key: "dilations" 1482 | value { 1483 | list { 1484 | i: 1 1485 | i: 1 1486 | i: 1 1487 | i: 1 1488 | } 1489 | } 1490 | } 1491 | attr { 1492 | key: "padding" 1493 | value { 1494 | s: "SAME" 1495 | } 1496 | } 1497 | attr { 1498 | key: "strides" 1499 | value { 1500 | list { 1501 | i: 1 1502 | i: 1 1503 | i: 1 1504 | i: 1 1505 | } 1506 | } 1507 | } 1508 | } 1509 | node { 1510 | name: "conv8_2_mbox_conf/BiasAdd" 1511 | op: "BiasAdd" 1512 | input: "conv8_2_mbox_conf/Conv2D" 1513 | input: "conv8_2_mbox_conf/bias" 1514 | } 1515 | node { 1516 | name: "flatten_10/Reshape" 1517 | op: "Flatten" 1518 | input: "conv8_2_mbox_conf/BiasAdd" 1519 | } 1520 | node { 1521 | name: "conv7_2_mbox_loc/Conv2D" 1522 | op: "Conv2D" 1523 | input: "conv7_2_h/Relu" 1524 | input: "conv7_2_mbox_loc/weights" 1525 | attr { 1526 | key: "dilations" 1527 | value { 1528 | list { 1529 | i: 1 1530 | i: 1 1531 | i: 1 1532 | i: 1 1533 | } 1534 | } 1535 | } 1536 | attr { 1537 | key: "padding" 1538 | value { 1539 | s: "SAME" 1540 | } 1541 | } 1542 | attr { 1543 | key: "strides" 1544 | value { 1545 | list { 1546 | i: 1 1547 | i: 1 1548 | i: 1 1549 | i: 1 1550 | } 1551 | } 1552 | } 1553 | } 1554 | node { 1555 | name: "conv7_2_mbox_loc/BiasAdd" 1556 | op: "BiasAdd" 1557 | input: "conv7_2_mbox_loc/Conv2D" 1558 | input: 
"conv7_2_mbox_loc/bias" 1559 | } 1560 | node { 1561 | name: "flatten_3/Reshape" 1562 | op: "Flatten" 1563 | input: "conv7_2_mbox_loc/BiasAdd" 1564 | } 1565 | node { 1566 | name: "conv7_2_mbox_conf/Conv2D" 1567 | op: "Conv2D" 1568 | input: "conv7_2_h/Relu" 1569 | input: "conv7_2_mbox_conf/weights" 1570 | attr { 1571 | key: "dilations" 1572 | value { 1573 | list { 1574 | i: 1 1575 | i: 1 1576 | i: 1 1577 | i: 1 1578 | } 1579 | } 1580 | } 1581 | attr { 1582 | key: "padding" 1583 | value { 1584 | s: "SAME" 1585 | } 1586 | } 1587 | attr { 1588 | key: "strides" 1589 | value { 1590 | list { 1591 | i: 1 1592 | i: 1 1593 | i: 1 1594 | i: 1 1595 | } 1596 | } 1597 | } 1598 | } 1599 | node { 1600 | name: "conv7_2_mbox_conf/BiasAdd" 1601 | op: "BiasAdd" 1602 | input: "conv7_2_mbox_conf/Conv2D" 1603 | input: "conv7_2_mbox_conf/bias" 1604 | } 1605 | node { 1606 | name: "flatten_9/Reshape" 1607 | op: "Flatten" 1608 | input: "conv7_2_mbox_conf/BiasAdd" 1609 | } 1610 | node { 1611 | name: "conv6_2_mbox_loc/Conv2D" 1612 | op: "Conv2D" 1613 | input: "conv6_2_h/Relu" 1614 | input: "conv6_2_mbox_loc/weights" 1615 | attr { 1616 | key: "dilations" 1617 | value { 1618 | list { 1619 | i: 1 1620 | i: 1 1621 | i: 1 1622 | i: 1 1623 | } 1624 | } 1625 | } 1626 | attr { 1627 | key: "padding" 1628 | value { 1629 | s: "SAME" 1630 | } 1631 | } 1632 | attr { 1633 | key: "strides" 1634 | value { 1635 | list { 1636 | i: 1 1637 | i: 1 1638 | i: 1 1639 | i: 1 1640 | } 1641 | } 1642 | } 1643 | } 1644 | node { 1645 | name: "conv6_2_mbox_loc/BiasAdd" 1646 | op: "BiasAdd" 1647 | input: "conv6_2_mbox_loc/Conv2D" 1648 | input: "conv6_2_mbox_loc/bias" 1649 | } 1650 | node { 1651 | name: "flatten_2/Reshape" 1652 | op: "Flatten" 1653 | input: "conv6_2_mbox_loc/BiasAdd" 1654 | } 1655 | node { 1656 | name: "conv6_2_mbox_conf/Conv2D" 1657 | op: "Conv2D" 1658 | input: "conv6_2_h/Relu" 1659 | input: "conv6_2_mbox_conf/weights" 1660 | attr { 1661 | key: "dilations" 1662 | value { 1663 | list { 1664 | i: 1 1665 | i: 1 1666 | i: 1 1667 | i: 1 1668 | } 1669 | } 1670 | } 1671 | attr { 1672 | key: "padding" 1673 | value { 1674 | s: "SAME" 1675 | } 1676 | } 1677 | attr { 1678 | key: "strides" 1679 | value { 1680 | list { 1681 | i: 1 1682 | i: 1 1683 | i: 1 1684 | i: 1 1685 | } 1686 | } 1687 | } 1688 | } 1689 | node { 1690 | name: "conv6_2_mbox_conf/BiasAdd" 1691 | op: "BiasAdd" 1692 | input: "conv6_2_mbox_conf/Conv2D" 1693 | input: "conv6_2_mbox_conf/bias" 1694 | } 1695 | node { 1696 | name: "flatten_8/Reshape" 1697 | op: "Flatten" 1698 | input: "conv6_2_mbox_conf/BiasAdd" 1699 | } 1700 | node { 1701 | name: "fc7_mbox_loc/Conv2D" 1702 | op: "Conv2D" 1703 | input: "last_relu" 1704 | input: "fc7_mbox_loc/weights" 1705 | attr { 1706 | key: "dilations" 1707 | value { 1708 | list { 1709 | i: 1 1710 | i: 1 1711 | i: 1 1712 | i: 1 1713 | } 1714 | } 1715 | } 1716 | attr { 1717 | key: "padding" 1718 | value { 1719 | s: "SAME" 1720 | } 1721 | } 1722 | attr { 1723 | key: "strides" 1724 | value { 1725 | list { 1726 | i: 1 1727 | i: 1 1728 | i: 1 1729 | i: 1 1730 | } 1731 | } 1732 | } 1733 | } 1734 | node { 1735 | name: "fc7_mbox_loc/BiasAdd" 1736 | op: "BiasAdd" 1737 | input: "fc7_mbox_loc/Conv2D" 1738 | input: "fc7_mbox_loc/bias" 1739 | } 1740 | node { 1741 | name: "flatten_1/Reshape" 1742 | op: "Flatten" 1743 | input: "fc7_mbox_loc/BiasAdd" 1744 | } 1745 | node { 1746 | name: "mbox_loc" 1747 | op: "ConcatV2" 1748 | input: "flatten/Reshape" 1749 | input: "flatten_1/Reshape" 1750 | input: "flatten_2/Reshape" 1751 | input: "flatten_3/Reshape" 1752 | input: 
"flatten_4/Reshape" 1753 | input: "flatten_5/Reshape" 1754 | input: "mbox_loc/axis" 1755 | } 1756 | node { 1757 | name: "fc7_mbox_conf/Conv2D" 1758 | op: "Conv2D" 1759 | input: "last_relu" 1760 | input: "fc7_mbox_conf/weights" 1761 | attr { 1762 | key: "dilations" 1763 | value { 1764 | list { 1765 | i: 1 1766 | i: 1 1767 | i: 1 1768 | i: 1 1769 | } 1770 | } 1771 | } 1772 | attr { 1773 | key: "padding" 1774 | value { 1775 | s: "SAME" 1776 | } 1777 | } 1778 | attr { 1779 | key: "strides" 1780 | value { 1781 | list { 1782 | i: 1 1783 | i: 1 1784 | i: 1 1785 | i: 1 1786 | } 1787 | } 1788 | } 1789 | } 1790 | node { 1791 | name: "fc7_mbox_conf/BiasAdd" 1792 | op: "BiasAdd" 1793 | input: "fc7_mbox_conf/Conv2D" 1794 | input: "fc7_mbox_conf/bias" 1795 | } 1796 | node { 1797 | name: "flatten_7/Reshape" 1798 | op: "Flatten" 1799 | input: "fc7_mbox_conf/BiasAdd" 1800 | } 1801 | node { 1802 | name: "mbox_conf" 1803 | op: "ConcatV2" 1804 | input: "flatten_6/Reshape" 1805 | input: "flatten_7/Reshape" 1806 | input: "flatten_8/Reshape" 1807 | input: "flatten_9/Reshape" 1808 | input: "flatten_10/Reshape" 1809 | input: "flatten_11/Reshape" 1810 | input: "mbox_conf/axis" 1811 | } 1812 | node { 1813 | name: "mbox_conf_reshape" 1814 | op: "Reshape" 1815 | input: "mbox_conf" 1816 | input: "reshape_before_softmax" 1817 | } 1818 | node { 1819 | name: "mbox_conf_softmax" 1820 | op: "Softmax" 1821 | input: "mbox_conf_reshape" 1822 | attr { 1823 | key: "axis" 1824 | value { 1825 | i: 2 1826 | } 1827 | } 1828 | } 1829 | node { 1830 | name: "mbox_conf_flatten" 1831 | op: "Flatten" 1832 | input: "mbox_conf_softmax" 1833 | } 1834 | node { 1835 | name: "PriorBox_0" 1836 | op: "PriorBox" 1837 | input: "conv4_3_norm/mul_1" 1838 | input: "data" 1839 | attr { 1840 | key: "aspect_ratio" 1841 | value { 1842 | tensor { 1843 | dtype: DT_FLOAT 1844 | tensor_shape { 1845 | dim { 1846 | size: 1 1847 | } 1848 | } 1849 | float_val: 2.0 1850 | } 1851 | } 1852 | } 1853 | attr { 1854 | key: "clip" 1855 | value { 1856 | b: false 1857 | } 1858 | } 1859 | attr { 1860 | key: "flip" 1861 | value { 1862 | b: true 1863 | } 1864 | } 1865 | attr { 1866 | key: "max_size" 1867 | value { 1868 | i: 60 1869 | } 1870 | } 1871 | attr { 1872 | key: "min_size" 1873 | value { 1874 | i: 30 1875 | } 1876 | } 1877 | attr { 1878 | key: "offset" 1879 | value { 1880 | f: 0.5 1881 | } 1882 | } 1883 | attr { 1884 | key: "step" 1885 | value { 1886 | f: 8.0 1887 | } 1888 | } 1889 | attr { 1890 | key: "variance" 1891 | value { 1892 | tensor { 1893 | dtype: DT_FLOAT 1894 | tensor_shape { 1895 | dim { 1896 | size: 4 1897 | } 1898 | } 1899 | float_val: 0.10000000149 1900 | float_val: 0.10000000149 1901 | float_val: 0.20000000298 1902 | float_val: 0.20000000298 1903 | } 1904 | } 1905 | } 1906 | } 1907 | node { 1908 | name: "PriorBox_1" 1909 | op: "PriorBox" 1910 | input: "last_relu" 1911 | input: "data" 1912 | attr { 1913 | key: "aspect_ratio" 1914 | value { 1915 | tensor { 1916 | dtype: DT_FLOAT 1917 | tensor_shape { 1918 | dim { 1919 | size: 2 1920 | } 1921 | } 1922 | float_val: 2.0 1923 | float_val: 3.0 1924 | } 1925 | } 1926 | } 1927 | attr { 1928 | key: "clip" 1929 | value { 1930 | b: false 1931 | } 1932 | } 1933 | attr { 1934 | key: "flip" 1935 | value { 1936 | b: true 1937 | } 1938 | } 1939 | attr { 1940 | key: "max_size" 1941 | value { 1942 | i: 111 1943 | } 1944 | } 1945 | attr { 1946 | key: "min_size" 1947 | value { 1948 | i: 60 1949 | } 1950 | } 1951 | attr { 1952 | key: "offset" 1953 | value { 1954 | f: 0.5 1955 | } 1956 | } 1957 | attr { 1958 | key: "step" 
1959 | value { 1960 | f: 16.0 1961 | } 1962 | } 1963 | attr { 1964 | key: "variance" 1965 | value { 1966 | tensor { 1967 | dtype: DT_FLOAT 1968 | tensor_shape { 1969 | dim { 1970 | size: 4 1971 | } 1972 | } 1973 | float_val: 0.10000000149 1974 | float_val: 0.10000000149 1975 | float_val: 0.20000000298 1976 | float_val: 0.20000000298 1977 | } 1978 | } 1979 | } 1980 | } 1981 | node { 1982 | name: "PriorBox_2" 1983 | op: "PriorBox" 1984 | input: "conv6_2_h/Relu" 1985 | input: "data" 1986 | attr { 1987 | key: "aspect_ratio" 1988 | value { 1989 | tensor { 1990 | dtype: DT_FLOAT 1991 | tensor_shape { 1992 | dim { 1993 | size: 2 1994 | } 1995 | } 1996 | float_val: 2.0 1997 | float_val: 3.0 1998 | } 1999 | } 2000 | } 2001 | attr { 2002 | key: "clip" 2003 | value { 2004 | b: false 2005 | } 2006 | } 2007 | attr { 2008 | key: "flip" 2009 | value { 2010 | b: true 2011 | } 2012 | } 2013 | attr { 2014 | key: "max_size" 2015 | value { 2016 | i: 162 2017 | } 2018 | } 2019 | attr { 2020 | key: "min_size" 2021 | value { 2022 | i: 111 2023 | } 2024 | } 2025 | attr { 2026 | key: "offset" 2027 | value { 2028 | f: 0.5 2029 | } 2030 | } 2031 | attr { 2032 | key: "step" 2033 | value { 2034 | f: 32.0 2035 | } 2036 | } 2037 | attr { 2038 | key: "variance" 2039 | value { 2040 | tensor { 2041 | dtype: DT_FLOAT 2042 | tensor_shape { 2043 | dim { 2044 | size: 4 2045 | } 2046 | } 2047 | float_val: 0.10000000149 2048 | float_val: 0.10000000149 2049 | float_val: 0.20000000298 2050 | float_val: 0.20000000298 2051 | } 2052 | } 2053 | } 2054 | } 2055 | node { 2056 | name: "PriorBox_3" 2057 | op: "PriorBox" 2058 | input: "conv7_2_h/Relu" 2059 | input: "data" 2060 | attr { 2061 | key: "aspect_ratio" 2062 | value { 2063 | tensor { 2064 | dtype: DT_FLOAT 2065 | tensor_shape { 2066 | dim { 2067 | size: 2 2068 | } 2069 | } 2070 | float_val: 2.0 2071 | float_val: 3.0 2072 | } 2073 | } 2074 | } 2075 | attr { 2076 | key: "clip" 2077 | value { 2078 | b: false 2079 | } 2080 | } 2081 | attr { 2082 | key: "flip" 2083 | value { 2084 | b: true 2085 | } 2086 | } 2087 | attr { 2088 | key: "max_size" 2089 | value { 2090 | i: 213 2091 | } 2092 | } 2093 | attr { 2094 | key: "min_size" 2095 | value { 2096 | i: 162 2097 | } 2098 | } 2099 | attr { 2100 | key: "offset" 2101 | value { 2102 | f: 0.5 2103 | } 2104 | } 2105 | attr { 2106 | key: "step" 2107 | value { 2108 | f: 64.0 2109 | } 2110 | } 2111 | attr { 2112 | key: "variance" 2113 | value { 2114 | tensor { 2115 | dtype: DT_FLOAT 2116 | tensor_shape { 2117 | dim { 2118 | size: 4 2119 | } 2120 | } 2121 | float_val: 0.10000000149 2122 | float_val: 0.10000000149 2123 | float_val: 0.20000000298 2124 | float_val: 0.20000000298 2125 | } 2126 | } 2127 | } 2128 | } 2129 | node { 2130 | name: "PriorBox_4" 2131 | op: "PriorBox" 2132 | input: "conv8_2_h/Relu" 2133 | input: "data" 2134 | attr { 2135 | key: "aspect_ratio" 2136 | value { 2137 | tensor { 2138 | dtype: DT_FLOAT 2139 | tensor_shape { 2140 | dim { 2141 | size: 1 2142 | } 2143 | } 2144 | float_val: 2.0 2145 | } 2146 | } 2147 | } 2148 | attr { 2149 | key: "clip" 2150 | value { 2151 | b: false 2152 | } 2153 | } 2154 | attr { 2155 | key: "flip" 2156 | value { 2157 | b: true 2158 | } 2159 | } 2160 | attr { 2161 | key: "max_size" 2162 | value { 2163 | i: 264 2164 | } 2165 | } 2166 | attr { 2167 | key: "min_size" 2168 | value { 2169 | i: 213 2170 | } 2171 | } 2172 | attr { 2173 | key: "offset" 2174 | value { 2175 | f: 0.5 2176 | } 2177 | } 2178 | attr { 2179 | key: "step" 2180 | value { 2181 | f: 100.0 2182 | } 2183 | } 2184 | attr { 2185 | key: 
"variance" 2186 | value { 2187 | tensor { 2188 | dtype: DT_FLOAT 2189 | tensor_shape { 2190 | dim { 2191 | size: 4 2192 | } 2193 | } 2194 | float_val: 0.10000000149 2195 | float_val: 0.10000000149 2196 | float_val: 0.20000000298 2197 | float_val: 0.20000000298 2198 | } 2199 | } 2200 | } 2201 | } 2202 | node { 2203 | name: "PriorBox_5" 2204 | op: "PriorBox" 2205 | input: "conv9_2_h/Relu" 2206 | input: "data" 2207 | attr { 2208 | key: "aspect_ratio" 2209 | value { 2210 | tensor { 2211 | dtype: DT_FLOAT 2212 | tensor_shape { 2213 | dim { 2214 | size: 1 2215 | } 2216 | } 2217 | float_val: 2.0 2218 | } 2219 | } 2220 | } 2221 | attr { 2222 | key: "clip" 2223 | value { 2224 | b: false 2225 | } 2226 | } 2227 | attr { 2228 | key: "flip" 2229 | value { 2230 | b: true 2231 | } 2232 | } 2233 | attr { 2234 | key: "max_size" 2235 | value { 2236 | i: 315 2237 | } 2238 | } 2239 | attr { 2240 | key: "min_size" 2241 | value { 2242 | i: 264 2243 | } 2244 | } 2245 | attr { 2246 | key: "offset" 2247 | value { 2248 | f: 0.5 2249 | } 2250 | } 2251 | attr { 2252 | key: "step" 2253 | value { 2254 | f: 300.0 2255 | } 2256 | } 2257 | attr { 2258 | key: "variance" 2259 | value { 2260 | tensor { 2261 | dtype: DT_FLOAT 2262 | tensor_shape { 2263 | dim { 2264 | size: 4 2265 | } 2266 | } 2267 | float_val: 0.10000000149 2268 | float_val: 0.10000000149 2269 | float_val: 0.20000000298 2270 | float_val: 0.20000000298 2271 | } 2272 | } 2273 | } 2274 | } 2275 | node { 2276 | name: "mbox_priorbox" 2277 | op: "ConcatV2" 2278 | input: "PriorBox_0" 2279 | input: "PriorBox_1" 2280 | input: "PriorBox_2" 2281 | input: "PriorBox_3" 2282 | input: "PriorBox_4" 2283 | input: "PriorBox_5" 2284 | input: "mbox_loc/axis" 2285 | } 2286 | node { 2287 | name: "detection_out" 2288 | op: "DetectionOutput" 2289 | input: "mbox_loc" 2290 | input: "mbox_conf_flatten" 2291 | input: "mbox_priorbox" 2292 | attr { 2293 | key: "background_label_id" 2294 | value { 2295 | i: 0 2296 | } 2297 | } 2298 | attr { 2299 | key: "code_type" 2300 | value { 2301 | s: "CENTER_SIZE" 2302 | } 2303 | } 2304 | attr { 2305 | key: "confidence_threshold" 2306 | value { 2307 | f: 0.00999999977648 2308 | } 2309 | } 2310 | attr { 2311 | key: "keep_top_k" 2312 | value { 2313 | i: 200 2314 | } 2315 | } 2316 | attr { 2317 | key: "nms_threshold" 2318 | value { 2319 | f: 0.449999988079 2320 | } 2321 | } 2322 | attr { 2323 | key: "num_classes" 2324 | value { 2325 | i: 2 2326 | } 2327 | } 2328 | attr { 2329 | key: "share_location" 2330 | value { 2331 | b: true 2332 | } 2333 | } 2334 | attr { 2335 | key: "top_k" 2336 | value { 2337 | i: 400 2338 | } 2339 | } 2340 | } 2341 | node { 2342 | name: "reshape_before_softmax" 2343 | op: "Const" 2344 | attr { 2345 | key: "value" 2346 | value { 2347 | tensor { 2348 | dtype: DT_INT32 2349 | tensor_shape { 2350 | dim { 2351 | size: 3 2352 | } 2353 | } 2354 | int_val: 0 2355 | int_val: -1 2356 | int_val: 2 2357 | } 2358 | } 2359 | } 2360 | } 2361 | library { 2362 | } -------------------------------------------------------------------------------- /face/opencv_face_detector_uint8.pb: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/face/opencv_face_detector_uint8.pb -------------------------------------------------------------------------------- /gender/gender_deploy.prototxt: -------------------------------------------------------------------------------- 1 | name: "CaffeNet" 2 | input: "data" 3 | 
input_dim: 10 4 | input_dim: 3 5 | input_dim: 227 6 | input_dim: 227 7 | layers { 8 | name: "conv1" 9 | type: CONVOLUTION 10 | bottom: "data" 11 | top: "conv1" 12 | convolution_param { 13 | num_output: 96 14 | kernel_size: 7 15 | stride: 4 16 | } 17 | } 18 | layers { 19 | name: "relu1" 20 | type: RELU 21 | bottom: "conv1" 22 | top: "conv1" 23 | } 24 | layers { 25 | name: "pool1" 26 | type: POOLING 27 | bottom: "conv1" 28 | top: "pool1" 29 | pooling_param { 30 | pool: MAX 31 | kernel_size: 3 32 | stride: 2 33 | } 34 | } 35 | layers { 36 | name: "norm1" 37 | type: LRN 38 | bottom: "pool1" 39 | top: "norm1" 40 | lrn_param { 41 | local_size: 5 42 | alpha: 0.0001 43 | beta: 0.75 44 | } 45 | } 46 | layers { 47 | name: "conv2" 48 | type: CONVOLUTION 49 | bottom: "norm1" 50 | top: "conv2" 51 | convolution_param { 52 | num_output: 256 53 | pad: 2 54 | kernel_size: 5 55 | } 56 | } 57 | layers { 58 | name: "relu2" 59 | type: RELU 60 | bottom: "conv2" 61 | top: "conv2" 62 | } 63 | layers { 64 | name: "pool2" 65 | type: POOLING 66 | bottom: "conv2" 67 | top: "pool2" 68 | pooling_param { 69 | pool: MAX 70 | kernel_size: 3 71 | stride: 2 72 | } 73 | } 74 | layers { 75 | name: "norm2" 76 | type: LRN 77 | bottom: "pool2" 78 | top: "norm2" 79 | lrn_param { 80 | local_size: 5 81 | alpha: 0.0001 82 | beta: 0.75 83 | } 84 | } 85 | layers { 86 | name: "conv3" 87 | type: CONVOLUTION 88 | bottom: "norm2" 89 | top: "conv3" 90 | convolution_param { 91 | num_output: 384 92 | pad: 1 93 | kernel_size: 3 94 | } 95 | } 96 | layers{ 97 | name: "relu3" 98 | type: RELU 99 | bottom: "conv3" 100 | top: "conv3" 101 | } 102 | layers { 103 | name: "pool5" 104 | type: POOLING 105 | bottom: "conv3" 106 | top: "pool5" 107 | pooling_param { 108 | pool: MAX 109 | kernel_size: 3 110 | stride: 2 111 | } 112 | } 113 | layers { 114 | name: "fc6" 115 | type: INNER_PRODUCT 116 | bottom: "pool5" 117 | top: "fc6" 118 | inner_product_param { 119 | num_output: 512 120 | } 121 | } 122 | layers { 123 | name: "relu6" 124 | type: RELU 125 | bottom: "fc6" 126 | top: "fc6" 127 | } 128 | layers { 129 | name: "drop6" 130 | type: DROPOUT 131 | bottom: "fc6" 132 | top: "fc6" 133 | dropout_param { 134 | dropout_ratio: 0.5 135 | } 136 | } 137 | layers { 138 | name: "fc7" 139 | type: INNER_PRODUCT 140 | bottom: "fc6" 141 | top: "fc7" 142 | inner_product_param { 143 | num_output: 512 144 | } 145 | } 146 | layers { 147 | name: "relu7" 148 | type: RELU 149 | bottom: "fc7" 150 | top: "fc7" 151 | } 152 | layers { 153 | name: "drop7" 154 | type: DROPOUT 155 | bottom: "fc7" 156 | top: "fc7" 157 | dropout_param { 158 | dropout_ratio: 0.5 159 | } 160 | } 161 | layers { 162 | name: "fc8" 163 | type: INNER_PRODUCT 164 | bottom: "fc7" 165 | top: "fc8" 166 | inner_product_param { 167 | num_output: 2 168 | } 169 | } 170 | layers { 171 | name: "prob" 172 | type: SOFTMAX 173 | bottom: "fc8" 174 | top: "prob" 175 | } -------------------------------------------------------------------------------- /gender/gender_net.caffemodel: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/gender/gender_net.caffemodel -------------------------------------------------------------------------------- /girl1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/girl1.jpg 
-------------------------------------------------------------------------------- /girl2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/girl2.jpg -------------------------------------------------------------------------------- /kid1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/kid1.jpg -------------------------------------------------------------------------------- /kid2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/kid2.jpg -------------------------------------------------------------------------------- /man1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/man1.jpg -------------------------------------------------------------------------------- /man2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/man2.jpg -------------------------------------------------------------------------------- /sample/Detecting age and gender girl1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/sample/Detecting age and gender girl1.png -------------------------------------------------------------------------------- /sample/Detecting age and gender girl2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/sample/Detecting age and gender girl2.png -------------------------------------------------------------------------------- /sample/Detecting age and gender kid1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/sample/Detecting age and gender kid1.png -------------------------------------------------------------------------------- /sample/Detecting age and gender kid2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/sample/Detecting age and gender kid2.png -------------------------------------------------------------------------------- /sample/Detecting age and gender man1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/sample/Detecting age and gender man1.png -------------------------------------------------------------------------------- /sample/Detecting age and gender man2.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/sample/Detecting age and gender man2.png -------------------------------------------------------------------------------- /sample/Detecting age and gender woman1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/sample/Detecting age and gender woman1.png -------------------------------------------------------------------------------- /woman1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/yash42828/Gender-and-Age-detection-using-OpenCV/edc47ab1eb7669395f9f44879e2d63a50e07bb23/woman1.jpg --------------------------------------------------------------------------------