├── Overview.png
├── README.md
├── Results ArcFace ICAO.png
├── Results ArcFace Inpainting.png
├── Results ArcFace Masking.png
└── src
    ├── XFIQ.py
    ├── auxiliary_models.py
    ├── explain_quality.py
    ├── gradient_calculator.py
    ├── kerasarc_v3.py
    ├── preprocessing.py
    ├── serfiq.py
    ├── test_images
    │   ├── cap.png
    │   └── mask.png
    └── utils.py
/Overview.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pterhoer/ExplainableFaceImageQuality/39e577b2bd45b76cb79a8dc276e9308dbad9f8ba/Overview.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Explainable Face Image Quality
2 |
3 |
4 |
5 | ## Pixel-Level Face Image Quality Assessment for Explainable Face Recognition
6 |
7 | * [Research Paper](https://arxiv.org/abs/2110.11001)
8 | * [Implementation on ArcFace](src/XFIQ.py)
9 |
10 |
11 |
12 | ## Table of Contents
13 |
14 | - [Abstract](#abstract)
15 | - [Key Points](#key-points)
16 | - [Results](#results)
17 | - [Installation](#installation)
18 | - [Citing](#citing)
19 | - [Related Works](#related-works)
20 | - [Acknowledgement](#acknowledgement)
21 | - [License](#license)
22 |
23 | ## Abstract
24 |
25 | An essential factor to achieve high performance in face recognition systems is the quality of their samples. Since these systems are involved in various daily-life applications, there is a strong need to make face recognition processes understandable to humans. In this work, we introduce the concept of pixel-level face image quality, which determines the utility of the pixels in a face image for recognition. Given an arbitrary face recognition network, we propose a training-free approach to assess the pixel-level qualities of a face image. To achieve this, a model-specific quality value of the input image is estimated and used to build a sample-specific quality regression model. Based on this model, quality-based gradients are back-propagated and converted into pixel-level quality estimates. In the experiments, we qualitatively and quantitatively investigated the meaningfulness of the pixel-level qualities based on real and artificial disturbances and by comparing the explanation maps on ICAO-incompliant faces. In all scenarios, the results demonstrate that the proposed solution produces meaningful pixel-level qualities.
26 |
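In code, this pipeline condenses to three steps, as implemented in [src/XFIQ.py](src/XFIQ.py) and [src/explain_quality.py](src/explain_quality.py) (variable names follow the repository defaults):

```
score = get_scaled_quality(img, model, T, alpha, r)            # model-specific quality (SER-FIQ)
grads = get_gradients(img, model, score)                       # gradients through a quality regression layer
plq_map = 1 - 1 / (1 + a * np.abs(grads).mean(axis=2) ** b)    # pixel-level quality map in [0, 1)
```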
27 |
28 |
29 | ## Key Points
30 |
31 | To summarize, the proposed Pixel-Level Quality Assessment approach
32 | - can be applied to arbitrary FR networks,
33 | - does not require training,
34 | - and provides a pixel-level utility description of an input face, explaining how well each pixel is suited for recognition (prior to any matching).
35 |
36 | The solution can explain why an image cannot be used as a reference image during the acquisition/enrolment process and in which areas of the face the subject has to make changes to increase the quality. Consequently, PLQ-maps provide guidance on the reasons behind low-quality images and thus can give interpretable instructions for improving the FIQ.
37 |
38 |
39 | ## Results
40 |
41 | The proposed pixel-level quality estimation approach is analysed from two directions. First, low-quality areas in low-quality face images, such as occlusions, are localised and inpainted to demonstrate that this improves the face image quality. Second, random masks are placed on high-quality faces to show that the proposed methodology identifies these as low-quality areas. Both evaluation approaches, enhancing low-quality images and degrading high-quality images,
42 | aim to investigate the effectiveness of the proposed PLQA approach quantitatively (via quality changes) and qualitatively (via changes in the PLQ-maps). Lastly, the
43 | PLQ-maps are investigated on ICAO-incompliant faces.
44 |
45 | In the following, we focus on some qualitative results. For more details and quantitative results, please take a look at the paper.
46 |
47 |
48 | **PLQ explanation maps before and after inpainting** - Images before and after the inpainting process are shown with their
49 | corresponding PLQ-maps and FIQ values. The images show the effect of small and large occlusions, glasses, headgears, and beards on the
50 | PLQ-maps for two FR models. In general, these are identified as areas of low pixel-quality and inpainting these areas strongly increases
51 | the pixel-qualities of these areas as well as the FIQ. This demonstrates that our solution leads to reasonable pixel-level quality estimates
52 | and thus can give interpretable recommendations on the causes of low quality estimates.
53 |
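<img src="Results ArcFace Inpainting.png">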
54 |
55 |
56 | **PLQ-explanation maps for random masks** - For two random identities, their masked and unmasked images are shown with
57 | their corresponding PLQ-maps. In general, the effect of the mask on the PLQ-map is clearly visible demonstrating the effectiveness of the
58 | proposed approach to detect disturbances.
59 |
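<img src="Results ArcFace Masking.png">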
60 |
61 |
62 | **PLQ-explanation maps for ICAO-incompliant images** - One ICAO-compliant image and twelve ICAO-incompliant images
63 | are shown with their corresponding PLQ-maps. Occlusions (b,c,d), distorted areas of the face (f), and reflections result in low pixel
64 | qualities.
65 |
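<img src="Results ArcFace ICAO.png">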
66 |
67 |
68 |
69 | ## Installation
70 | Python 3.7 or 3.8 is recommended. Requires opencv-python>=4.5.2.52, TensorFlow>=2.3.0, numpy>=1.20.3, scikit-learn>=0.24.2, matplotlib>=3.4.2, and tqdm>=4.60.0.
71 |
72 | Download the [weights-file](https://owncloud.fraunhofer.de/index.php/s/TchwFEPL86gHjAY) and place it in the src-folder.
73 | The Keras-based ArcFace model is then created during the first execution. Create a
74 | ```
75 | src/gradients
76 | ```
77 | directory for the gradients and a
78 |
79 | ```
80 | src/plots
81 | ```
82 | directory for the PLQ-maps.
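
For example, from the repository root:

```
mkdir -p src/gradients src/plots
```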
83 |
84 | To run the code, use the *XFIQ.py* file in the src-folder:
85 |
86 | ```
87 | python XFIQ.py
88 | ```
89 |
90 | All parameters can be adjusted there. The script takes all images from a folder and saves the raw gradients as well
91 | as the explanation maps, including the quality values.
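
The saved gradients file can be inspected afterwards, e.g. as follows (run from the src-folder; the path matches the default configuration in *XFIQ.py*):

```
import numpy as np

# each entry is an (image_path, gradients, quality score) tuple saved by XFIQ.py
data = np.load("gradients/arc_test_images_gradients.npy", allow_pickle=True)
for image_path, grads, score in data:
    print(image_path, grads.shape, float(score))  # grads has shape (112, 112, 3)
```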
92 |
93 | The images need to be preprocessed (cropped and aligned). The preprocessing can be done using the
94 | *setup_img(img)* method in *preprocessing.py*. The preprocessing script requires scikit-image>=0.18.1 and mtcnn>=0.1.0
95 | to be installed.
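
A minimal usage sketch (assuming *setup_img* takes a BGR image as read by OpenCV and returns the aligned crop; the file names below are placeholders):

```
import cv2
from preprocessing import setup_img

raw = cv2.imread("raw_face.jpg")   # placeholder input image
aligned = setup_img(raw)           # detect, align, and crop the face
cv2.imwrite("test_images/raw_face_aligned.png", aligned)
```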
96 |
97 |
98 |
99 | ## Citing
100 |
101 | If you use this code, please cite the following paper.
102 |
103 |
104 | ```
105 | @article{DBLP:journals/corr/abs-2110-11001,
106 | author = {Philipp Terh{\"{o}}rst and
107 | Marco Huber and
108 | Naser Damer and
109 | Florian Kirchbuchner and
110 | Kiran Raja and
111 | Arjan Kuijper},
112 | title = {Pixel-Level Face Image Quality Assessment for Explainable Face Recognition},
113 | journal = {CoRR},
114 | volume = {abs/2110.11001},
115 | year = {2021},
116 | url = {https://arxiv.org/abs/2110.11001},
117 | eprinttype = {arXiv},
118 | eprint = {2110.11001},
119 | timestamp = {Thu, 28 Oct 2021 15:25:31 +0200},
120 | biburl = {https://dblp.org/rec/journals/corr/abs-2110-11001.bib},
121 | bibsource = {dblp computer science bibliography, https://dblp.org}
122 | }
123 | ```
124 |
125 | If you make use of our implementation based on ArcFace, please additionally cite the original ArcFace paper.
126 |
127 | ## Related Works
128 |
129 | - IJCB 2023: Explaining Face Recognition Through SHAP-Based Pixel-Level Face Image Quality Assessment
130 | - EUSIPCO 2022: [On Evaluating Pixel-Level Face Image Quality Assessment](https://eurasip.org/Proceedings/Eusipco/Eusipco2022/pdfs/0001052.pdf)
131 |
132 | ## Acknowledgement
133 |
134 | This research work has been funded by the German Federal Ministry of Education and Research and the Hessen State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
135 | Portions of the research in this paper use the FERET database of facial images collected under the FERET program, sponsored by the DOD Counterdrug Technology Development Program Office.
136 | This work was carried out during the tenure of an ERCIM 'Alain Bensoussan' Fellowship Programme.
137 |
138 | ## License
139 |
140 | This project is licensed under the terms of the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
141 | Copyright (c) 2021 Fraunhofer Institute for Computer Graphics Research IGD Darmstadt
142 |
143 |
--------------------------------------------------------------------------------
/Results ArcFace ICAO.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pterhoer/ExplainableFaceImageQuality/39e577b2bd45b76cb79a8dc276e9308dbad9f8ba/Results ArcFace ICAO.png
--------------------------------------------------------------------------------
/Results ArcFace Inpainting.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pterhoer/ExplainableFaceImageQuality/39e577b2bd45b76cb79a8dc276e9308dbad9f8ba/Results ArcFace Inpainting.png
--------------------------------------------------------------------------------
/Results ArcFace Masking.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pterhoer/ExplainableFaceImageQuality/39e577b2bd45b76cb79a8dc276e9308dbad9f8ba/Results ArcFace Masking.png
--------------------------------------------------------------------------------
/src/XFIQ.py:
--------------------------------------------------------------------------------
1 | # Explainable Face Image Quality (XFIQ)
2 |
3 | # Pixel-Level Face Image Quality Assessment for Explainable Face Recognition
4 | # Philipp Terhörst, Marco Huber, Naser Damer, Florian Kirchbuchner, Kiran Raja, and Arjan Kuijper
5 | # 2021
6 |
7 | # Copyright (c) 2020 Fraunhofer Institute for Computer Graphics Research IGD Darmstadt
8 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
9 | # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
10 | # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
11 | # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
12 | # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
13 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
14 |
15 | # Author: Marco Huber, 2021
16 | # Fraunhofer IGD
17 | # marco.huber[at]igd.fraunhofer.de
18 |
19 | import cv2
20 | import numpy as np
21 | import os.path
22 | from tqdm import tqdm
23 | from tensorflow.keras import models
24 |
25 | from auxiliary_models import build_model
26 | from serfiq import get_scaled_quality
27 | from gradient_calculator import get_gradients
28 | from utils import image_iter
29 | from explain_quality import plot_comparison
30 |
31 | def run(image_path, model_path, save_path, T, alpha, r):
32 |
33 | """
34 | Calculates the gradients using the calculated SER-FIQ image quality.
35 |
36 | Parameters
37 | ----------
38 | image_path : str
39 | Path to the image folder.
40 | model_path : str
41 | Path to the stored keras model.
42 | save_path : str
43 | Path to save the gradients.
44 |     T : int
45 |         Number of stochastic forward passes to calculate the SER-FIQ quality.
    alpha : float
        Parameter to scale the quality scores to a wider range.
    r : float
        Parameter to scale the quality scores to a wider range.
46 |
47 | Returns
48 | -------
49 |     None. Saves (image_path, gradients, quality score) tuples to save_path.
50 |
51 | """
52 |
53 | # load model
54 | keras_model = models.load_model(model_path)
55 |
56 | # get images paths
57 | images = image_iter(image_path)
58 |
59 | save = []
60 |
61 | # calculating quality and gradients for each image
62 | for i in tqdm(images):
63 |
64 | # read & prepare image
65 | img = cv2.imread(i)
66 | img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
67 | img = cv2.resize(img, (112, 112), interpolation = cv2.INTER_AREA)
68 | img = np.expand_dims(img, axis=0)
69 |
70 | # calculate quality score
71 | score = get_scaled_quality(img, keras_model, T, alpha, r)
72 |
73 | # calculate gradient
74 | grads = get_gradients(img, keras_model, score)
75 | grads = grads.numpy()
76 |
77 | # add to save
78 | tmp = (i, grads, score)
79 | save.append(tmp)
80 |
81 | # save
82 | np.save(save_path, save)
83 |
84 | if __name__ == "__main__":
85 |
86 | # SER-FIQ - Parameter
87 | T = 100 # number of forward passes to calculate quality
88 |
89 | # Quality Scaling - Parameters
90 | alpha = 130 # param to scale qualities to a wider range
91 | r = 0.88 # param to scale qualities to a wider range
92 |
93 | # Visualization Scaling - Parameters
94 | a = 10**7.5 # param to scale grads
95 | b = 2 # param to scale grads
96 |
97 | # Paths
98 | weights_path = "interkerasarc.npy"
99 | model_path = "model_kerasarc_v3.h5"
100 | image_path = "./test_images/"
101 | plot_path = "./plots/"
102 | grad_path = "./gradients/arc_test_images_gradients.npy"
103 |
104 | # check if model exists, else build
105 | if not os.path.isfile(model_path):
106 | build_model(weights_path)
107 |
108 | # Explain Face Image Quality at Pixel-Level
109 |     run(image_path, model_path, grad_path, T, alpha, r)
110 | loaded_gradients = np.load(grad_path, allow_pickle=True)
111 | plot_comparison(loaded_gradients, plot_path, a, b, True)
112 |
113 |
--------------------------------------------------------------------------------
/src/auxiliary_models.py:
--------------------------------------------------------------------------------
1 | # Auxiliary Models for Explainable Face Image Quality
2 |
3 | # This script contains auxiliary functions for the models.
4 |
5 | # Author: Marco Huber, 2020
6 | # Fraunhofer IGD
7 | # marco.huber[at]igd.fraunhofer.de
8 |
9 |
10 | from tensorflow.keras import backend as K
11 | from tensorflow.keras import Model
12 | from tensorflow.keras.layers import (Input, Dense, BatchNormalization,
13 | Dropout, Lambda)
14 |
15 | from kerasarc_v3 import KitModel, load_weights_from_file
16 |
17 | def euclid_normalize(x):
18 | return K.l2_normalize(x, axis=1)
19 |
20 | def build_model(weights_path):
21 |
22 | model = KitModel(weights_path)
23 | model.save("model_kerasarc_v3.h5")
24 |
25 | def get_pre_dropout_state(img, model):
26 | """
27 | Returns the pre-dropout layer activations
28 |
29 | Parameters
30 | ----------
31 | img : numpy ndarray
32 |         The aligned, preprocessed, and batched image
33 | model : keras model
34 | The used model
35 |
36 | Returns
37 | -------
38 | state : numpy array
39 | The activation state before the stochastic dropout layer.
40 |
41 | """
42 | # define new model
43 | pre_dropout_model = Model(inputs=model.input, outputs=model.get_layer('flatten').output)
44 |
45 | # calculate state
46 | state = pre_dropout_model(img)
47 |
48 | # delete model
49 | del pre_dropout_model
50 |
51 | return state
52 |
53 |
54 | def get_stochastic_pass_model(model):
55 | """
56 | Returns a minimized model only consisting of the stochastic model part
57 |
58 | Parameters
59 | ----------
60 | model : keras model
61 | The model to be used for the stochastic forward pass.
62 |
63 | Returns
64 | -------
65 | stochastic_pass_model : keras model
66 | Minimized keras model only including the stochastic part of the model.
67 |
68 | """
69 |
70 | # define stochastic model
71 | inputs = Input(shape=(25088,))
72 | x = inputs
73 |
74 | x = Dropout(name='dropout', rate=0.5)(x, training=True)
75 | x = Dense(name='pre_fc1', units=512, use_bias=True)(x)
76 | x = BatchNormalization(name='fc1', axis=1, epsilon=1.9999999494757503e-05, center = True, scale = False)(x)
77 | x = Lambda(euclid_normalize)(x)
78 | output = x
79 |
80 | # declare model
81 | stochastic_pass_model = Model(inputs=inputs, outputs=output)
82 | stochastic_pass_model.get_layer('pre_fc1').set_weights(model.get_layer('pre_fc1').get_weights())
83 |
84 | return stochastic_pass_model
85 |
86 |
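# Illustrative sketch (not part of the original file): how the two helpers
# above combine into a SER-FIQ-style robustness score. The repository's
# actual scoring, including the alpha/r rescaling, lives in serfiq.py
# (get_scaled_quality); this simplified version is for illustration only.
import numpy as np
from itertools import combinations

def serfiq_sketch(img, model, T=100):
    state = get_pre_dropout_state(img, model)    # deterministic pre-dropout activations
    sp = get_stochastic_pass_model(model)        # dropout stays active at inference
    embs = np.vstack([sp(state).numpy() for _ in range(T)])  # T stochastic embeddings
    dists = [np.linalg.norm(e1 - e2) for e1, e2 in combinations(embs, 2)]
    return 2.0 / (1.0 + np.exp(np.mean(dists)))  # 2 * sigmoid(-mean pairwise distance)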
--------------------------------------------------------------------------------
/src/explain_quality.py:
--------------------------------------------------------------------------------
1 | # File to visualize the PLQ maps.
2 |
3 | # Author: Marco Huber, 2021
4 | # Fraunhofer IGD
5 | # marco.huber[at]igd.fraunhofer.de
6 |
7 | import cv2
8 | import numpy as np
9 | import matplotlib.pyplot as plt
10 |
11 | from tqdm import tqdm
12 |
13 | def process_grad(gradient, a, b):
14 |
15 |     grad = abs(gradient)             # per-pixel gradient magnitudes
16 |     x = np.mean(grad, axis=2)        # average over the color channels
17 |     x = 1 - (1 / (1 + (a * x**b)))   # squash into [0, 1) for visualization
18 |     return x
19 |
20 | def plot_comparison(grad_save, save_path, a, b, withtitle=True):
21 |
22 | # split
23 | names, grads, score = grad_save[:,0], grad_save[:,1], grad_save[:,2]
24 |
25 | # iterate over each gradient set
26 | for idx, g in enumerate(tqdm(grads)):
27 |
28 | axes = []
29 | fig = plt.figure()
30 |
31 | plt.rc('font',size=20)
32 |
33 | # plot original image for reference
34 | img = cv2.imread(names[idx])
35 | img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
36 | img = cv2.resize(img, (112,112))
37 | tit = "Image $I$"
38 |
39 | if withtitle == True:
40 | axes.append(fig.add_subplot(1,2,1, title=tit))
41 | else:
42 | axes.append(fig.add_subplot(1,2,1))
43 |
44 |         subtit = r"$\hat{Q}_{I}$: " + str(np.round(score[idx], 3))
45 |
46 | plt.text(30, 126, subtit, fontsize=24)
47 |
48 | plt.xticks([])
49 | plt.yticks([])
50 | plt.imshow(img)
51 |
52 | plt.box(False)
53 |
54 | # plot explained image
55 |         title = r"PLQ-Map $P(\hat{Q}_{I})$"
56 |
57 | if withtitle == True:
58 | axes.append(fig.add_subplot(1,2,2, title=title))
59 | else:
60 | axes.append(fig.add_subplot(1,2,2))
61 |
62 | plt.xticks([])
63 | plt.yticks([])
64 | plt.imshow(process_grad(g, a, b), cmap=plt.get_cmap('RdYlGn'), vmin=0, vmax=1)
65 |
66 | # save
67 | save = names[idx].split("/")[-1]
68 | plt.tight_layout()
69 | plt.gca().set_axis_off()
70 | plt.subplots_adjust(top = 1, bottom = 0, right = 1, left = 0,
71 | hspace = 0, wspace = 0.05)
72 | plt.margins(10,10)
73 | plt.gca().xaxis.set_major_locator(plt.NullLocator())
74 | plt.gca().yaxis.set_major_locator(plt.NullLocator())
75 | plt.savefig(save_path + save + ".png", bbox_inches='tight', pad_inches = 0)
76 | plt.close()
77 |
--------------------------------------------------------------------------------
/src/gradient_calculator.py:
--------------------------------------------------------------------------------
1 | # File to calculate the gradients
2 |
3 | # Author: Marco Huber, 2021
4 | # Fraunhofer IGD
5 | # marco.huber[at]igd.fraunhofer.de
6 |
7 | import numpy as np
8 | import tensorflow as tf
9 |
10 | from tensorflow.keras import Model
11 | from tensorflow.keras import backend as K
12 | from tensorflow.keras.layers import BatchNormalization, Dense, Flatten, Lambda
13 |
14 | def euclid_normalize(x):
15 | return K.l2_normalize(x, axis=1)
16 |
17 | def calculate_weights(quality, emb):
18 |
19 |     # weight w such that w * sum(emb) reproduces the estimated quality
20 | sum_act = np.sum(emb)
21 | w = quality / sum_act
22 |
23 |     # build array of shape (1, len(emb), 1), i.e. [kernel] for Dense.set_weights
24 | arr_w = np.repeat(w, len(emb))
25 | arr_w = arr_w.reshape((1, len(emb), 1))
26 | return arr_w
27 |
28 | def get_gradients(image, model, quality):
29 |
30 | # build model
31 | curr_out = model.get_layer('bn1').output
32 | x = Flatten(name="flatten")(curr_out)
33 | x = Dense(name='pre_fc1', units=512, use_bias=True)(x)
34 | x = BatchNormalization(name='fc1', axis=1, epsilon=1.9999999494757503e-05, center = True, scale = False)(x)
35 | embedding = Lambda(euclid_normalize, name="euclidnorm")(x)
36 |
37 | # expand model with quality part
38 | out = Dense(1, activation="linear", use_bias=False, name="quality")(embedding)
39 |
40 | # define model
41 | grad_model = Model(model.inputs, [out, embedding])
42 |
43 | # set weights
44 | grad_model.get_layer('pre_fc1').set_weights(model.get_layer('pre_fc1').get_weights())
45 |
46 | # get current embedding to adjust quality weights
47 | _, curr_emb = grad_model.predict(image)
48 | curr_emb = curr_emb.squeeze()
49 |
50 | # set quality weights
51 | grad_model.get_layer('quality').set_weights(calculate_weights(quality, curr_emb))
52 |
53 | # calculate gradients
54 | with tf.GradientTape() as gtape:
55 | inputs = tf.cast(image, tf.float32)
56 | gtape.watch(inputs)
57 | outputs, _ = grad_model(inputs)
58 |
59 | # get gradients
60 | grads = gtape.gradient(outputs, inputs)[0]
61 |
62 | # clear model
63 | del grad_model
64 | K.clear_session()
65 |
66 | return grads
--------------------------------------------------------------------------------
/src/kerasarc_v3.py:
--------------------------------------------------------------------------------
1 | # This file was mainly automatically created using MMdnn (https://github.com/microsoft/MMdnn)
2 | # and the Caffe/MxNet arcface model (https://github.com/deepinsight/insightface)
3 | #
4 | # Manual changes due to conversion errors were:
5 | # - the two lambda layers at the beginning of the model
6 | # - additional flatten layer before dropout (necessary in Keras)
7 | #
8 | # Author: Marco Huber, Fraunhofer IGD, 2020
9 | # Fraunhofer IGD
10 | # marco.huber[at]igd.fraunhofer.de
11 |
12 |
13 | import numpy as np
14 | from tqdm import tqdm
15 |
16 | import tensorflow.keras
17 | from tensorflow.keras.models import Model
18 | from tensorflow.keras import layers
19 | from tensorflow.keras.layers import Lambda
20 | import tensorflow.keras.backend as K
21 |
22 | weights_dict = dict()
23 | def load_weights_from_file(weight_file):
24 | try:
25 | weights_dict = np.load(weight_file, allow_pickle=True).item()
26 |     except Exception:
27 | weights_dict = np.load(weight_file, allow_pickle=True, encoding='bytes').item()
28 |
29 | return weights_dict
30 |
31 |
32 | def set_layer_weights(model, weights_dict):
33 | for layer in tqdm(model.layers):
34 | if layer.name in weights_dict:
35 | cur_dict = weights_dict[layer.name]
36 | current_layer_parameters = list()
37 | if layer.__class__.__name__ == "BatchNormalization":
38 | if 'scale' in cur_dict:
39 | current_layer_parameters.append(cur_dict['scale'])
40 | if 'bias' in cur_dict:
41 | current_layer_parameters.append(cur_dict['bias'])
42 | current_layer_parameters.extend([cur_dict['mean'], cur_dict['var']])
43 | elif layer.__class__.__name__ == "Scale":
44 | if 'scale' in cur_dict:
45 | current_layer_parameters.append(cur_dict['scale'])
46 | if 'bias' in cur_dict:
47 | current_layer_parameters.append(cur_dict['bias'])
48 | elif layer.__class__.__name__ == "SeparableConv2D":
49 | current_layer_parameters = [cur_dict['depthwise_filter'], cur_dict['pointwise_filter']]
50 | if 'bias' in cur_dict:
51 | current_layer_parameters.append(cur_dict['bias'])
52 | elif layer.__class__.__name__ == "Embedding":
53 | current_layer_parameters.append(cur_dict['weights'])
54 | elif layer.__class__.__name__ == "PReLU":
55 | gamma = np.ones(list(layer.input_shape[1:]))*cur_dict['gamma']
56 | current_layer_parameters.append(gamma)
57 | else:
58 |             # default: layers with plain weights (and optional bias)
59 | if 'weights' in cur_dict:
60 | current_layer_parameters = [cur_dict['weights']]
61 | if 'bias' in cur_dict:
62 | current_layer_parameters.append(cur_dict['bias'])
63 | model.get_layer(layer.name).set_weights(current_layer_parameters)
64 |
65 | return model
66 |
67 |
68 | def KitModel(weight_file = None):
69 | global weights_dict
70 |     weights_dict = load_weights_from_file(weight_file) if weight_file is not None else None
71 |
72 | data = layers.Input(name = 'data', shape = (112, 112, 3,) )
73 | minusscalar0 = Lambda(lambda x: x - 127.5) (data)
74 | mulscalar0 = Lambda(lambda x: x * 0.0078125) (minusscalar0)
75 | conv0_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(mulscalar0)
76 | conv0 = convolution(weights_dict, name='conv0', input=conv0_input, group=1, conv_type='layers.Conv2D', filters=64, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
77 | bn0 = layers.BatchNormalization(name = 'bn0', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(conv0)
78 | relu0 = layers.PReLU(name='relu0')(bn0)
79 | stage1_unit1_bn1 = layers.BatchNormalization(name = 'stage1_unit1_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(relu0)
80 | stage1_unit1_conv1sc = convolution(weights_dict, name='stage1_unit1_conv1sc', input=relu0, group=1, conv_type='layers.Conv2D', filters=64, kernel_size=(1, 1), strides=(2, 2), dilation_rate=(1, 1), padding='valid', use_bias=False)
81 | stage1_unit1_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage1_unit1_bn1)
82 | stage1_unit1_conv1 = convolution(weights_dict, name='stage1_unit1_conv1', input=stage1_unit1_conv1_input, group=1, conv_type='layers.Conv2D', filters=64, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
83 | stage1_unit1_sc = layers.BatchNormalization(name = 'stage1_unit1_sc', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage1_unit1_conv1sc)
84 | stage1_unit1_bn2 = layers.BatchNormalization(name = 'stage1_unit1_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage1_unit1_conv1)
85 | stage1_unit1_relu1 = layers.PReLU(name='stage1_unit1_relu1')(stage1_unit1_bn2)
86 | stage1_unit1_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage1_unit1_relu1)
87 | stage1_unit1_conv2 = convolution(weights_dict, name='stage1_unit1_conv2', input=stage1_unit1_conv2_input, group=1, conv_type='layers.Conv2D', filters=64, kernel_size=(3, 3), strides=(2, 2), dilation_rate=(1, 1), padding='valid', use_bias=False)
88 | stage1_unit1_bn3 = layers.BatchNormalization(name = 'stage1_unit1_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage1_unit1_conv2)
89 | plus0 = Lambda(lambda x: x[0] + x[1])([stage1_unit1_bn3, stage1_unit1_sc])
90 | stage1_unit2_bn1 = layers.BatchNormalization(name = 'stage1_unit2_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus0)
91 | stage1_unit2_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage1_unit2_bn1)
92 | stage1_unit2_conv1 = convolution(weights_dict, name='stage1_unit2_conv1', input=stage1_unit2_conv1_input, group=1, conv_type='layers.Conv2D', filters=64, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
93 | stage1_unit2_bn2 = layers.BatchNormalization(name = 'stage1_unit2_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage1_unit2_conv1)
94 | stage1_unit2_relu1 = layers.PReLU(name='stage1_unit2_relu1')(stage1_unit2_bn2)
95 | stage1_unit2_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage1_unit2_relu1)
96 | stage1_unit2_conv2 = convolution(weights_dict, name='stage1_unit2_conv2', input=stage1_unit2_conv2_input, group=1, conv_type='layers.Conv2D', filters=64, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
97 | stage1_unit2_bn3 = layers.BatchNormalization(name = 'stage1_unit2_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage1_unit2_conv2)
98 | plus1 = Lambda(lambda x: x[0] + x[1])([stage1_unit2_bn3, plus0])
99 | stage1_unit3_bn1 = layers.BatchNormalization(name = 'stage1_unit3_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus1)
100 | stage1_unit3_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage1_unit3_bn1)
101 | stage1_unit3_conv1 = convolution(weights_dict, name='stage1_unit3_conv1', input=stage1_unit3_conv1_input, group=1, conv_type='layers.Conv2D', filters=64, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
102 | stage1_unit3_bn2 = layers.BatchNormalization(name = 'stage1_unit3_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage1_unit3_conv1)
103 | stage1_unit3_relu1 = layers.PReLU(name='stage1_unit3_relu1')(stage1_unit3_bn2)
104 | stage1_unit3_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage1_unit3_relu1)
105 | stage1_unit3_conv2 = convolution(weights_dict, name='stage1_unit3_conv2', input=stage1_unit3_conv2_input, group=1, conv_type='layers.Conv2D', filters=64, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
106 | stage1_unit3_bn3 = layers.BatchNormalization(name = 'stage1_unit3_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage1_unit3_conv2)
107 | plus2 = Lambda(lambda x: x[0] + x[1])([stage1_unit3_bn3, plus1])
108 | stage2_unit1_bn1 = layers.BatchNormalization(name = 'stage2_unit1_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus2)
109 | stage2_unit1_conv1sc = convolution(weights_dict, name='stage2_unit1_conv1sc', input=plus2, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(1, 1), strides=(2, 2), dilation_rate=(1, 1), padding='valid', use_bias=False)
110 | stage2_unit1_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit1_bn1)
111 | stage2_unit1_conv1 = convolution(weights_dict, name='stage2_unit1_conv1', input=stage2_unit1_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
112 | stage2_unit1_sc = layers.BatchNormalization(name = 'stage2_unit1_sc', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit1_conv1sc)
113 | stage2_unit1_bn2 = layers.BatchNormalization(name = 'stage2_unit1_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit1_conv1)
114 | stage2_unit1_relu1 = layers.PReLU(name='stage2_unit1_relu1')(stage2_unit1_bn2)
115 | stage2_unit1_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit1_relu1)
116 | stage2_unit1_conv2 = convolution(weights_dict, name='stage2_unit1_conv2', input=stage2_unit1_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(2, 2), dilation_rate=(1, 1), padding='valid', use_bias=False)
117 | stage2_unit1_bn3 = layers.BatchNormalization(name = 'stage2_unit1_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit1_conv2)
118 | plus3 = Lambda(lambda x: x[0] + x[1])([stage2_unit1_bn3, stage2_unit1_sc])
119 | stage2_unit2_bn1 = layers.BatchNormalization(name = 'stage2_unit2_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus3)
120 | stage2_unit2_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit2_bn1)
121 | stage2_unit2_conv1 = convolution(weights_dict, name='stage2_unit2_conv1', input=stage2_unit2_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
122 | stage2_unit2_bn2 = layers.BatchNormalization(name = 'stage2_unit2_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit2_conv1)
123 | stage2_unit2_relu1 = layers.PReLU(name='stage2_unit2_relu1')(stage2_unit2_bn2)
124 | stage2_unit2_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit2_relu1)
125 | stage2_unit2_conv2 = convolution(weights_dict, name='stage2_unit2_conv2', input=stage2_unit2_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
126 | stage2_unit2_bn3 = layers.BatchNormalization(name = 'stage2_unit2_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit2_conv2)
127 | plus4 = Lambda(lambda x: x[0] + x[1])([stage2_unit2_bn3, plus3])
128 | stage2_unit3_bn1 = layers.BatchNormalization(name = 'stage2_unit3_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus4)
129 | stage2_unit3_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit3_bn1)
130 | stage2_unit3_conv1 = convolution(weights_dict, name='stage2_unit3_conv1', input=stage2_unit3_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
131 | stage2_unit3_bn2 = layers.BatchNormalization(name = 'stage2_unit3_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit3_conv1)
132 | stage2_unit3_relu1 = layers.PReLU(name='stage2_unit3_relu1')(stage2_unit3_bn2)
133 | stage2_unit3_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit3_relu1)
134 | stage2_unit3_conv2 = convolution(weights_dict, name='stage2_unit3_conv2', input=stage2_unit3_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
135 | stage2_unit3_bn3 = layers.BatchNormalization(name = 'stage2_unit3_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit3_conv2)
136 | plus5 = Lambda(lambda x: x[0] + x[1])([stage2_unit3_bn3, plus4])
137 | stage2_unit4_bn1 = layers.BatchNormalization(name = 'stage2_unit4_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus5)
138 | stage2_unit4_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit4_bn1)
139 | stage2_unit4_conv1 = convolution(weights_dict, name='stage2_unit4_conv1', input=stage2_unit4_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
140 | stage2_unit4_bn2 = layers.BatchNormalization(name = 'stage2_unit4_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit4_conv1)
141 | stage2_unit4_relu1 = layers.PReLU(name='stage2_unit4_relu1')(stage2_unit4_bn2)
142 | stage2_unit4_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit4_relu1)
143 | stage2_unit4_conv2 = convolution(weights_dict, name='stage2_unit4_conv2', input=stage2_unit4_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
144 | stage2_unit4_bn3 = layers.BatchNormalization(name = 'stage2_unit4_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit4_conv2)
145 | plus6 = Lambda(lambda x: x[0] + x[1])([stage2_unit4_bn3, plus5])
146 | stage2_unit5_bn1 = layers.BatchNormalization(name = 'stage2_unit5_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus6)
147 | stage2_unit5_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit5_bn1)
148 | stage2_unit5_conv1 = convolution(weights_dict, name='stage2_unit5_conv1', input=stage2_unit5_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
149 | stage2_unit5_bn2 = layers.BatchNormalization(name = 'stage2_unit5_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit5_conv1)
150 | stage2_unit5_relu1 = layers.PReLU(name='stage2_unit5_relu1')(stage2_unit5_bn2)
151 | stage2_unit5_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit5_relu1)
152 | stage2_unit5_conv2 = convolution(weights_dict, name='stage2_unit5_conv2', input=stage2_unit5_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
153 | stage2_unit5_bn3 = layers.BatchNormalization(name = 'stage2_unit5_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit5_conv2)
154 | plus7 = Lambda(lambda x: x[0] + x[1])([stage2_unit5_bn3, plus6])
155 | stage2_unit6_bn1 = layers.BatchNormalization(name = 'stage2_unit6_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus7)
156 | stage2_unit6_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit6_bn1)
157 | stage2_unit6_conv1 = convolution(weights_dict, name='stage2_unit6_conv1', input=stage2_unit6_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
158 | stage2_unit6_bn2 = layers.BatchNormalization(name = 'stage2_unit6_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit6_conv1)
159 | stage2_unit6_relu1 = layers.PReLU(name='stage2_unit6_relu1')(stage2_unit6_bn2)
160 | stage2_unit6_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit6_relu1)
161 | stage2_unit6_conv2 = convolution(weights_dict, name='stage2_unit6_conv2', input=stage2_unit6_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
162 | stage2_unit6_bn3 = layers.BatchNormalization(name = 'stage2_unit6_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit6_conv2)
163 | plus8 = Lambda(lambda x: x[0] + x[1])([stage2_unit6_bn3, plus7])
164 | stage2_unit7_bn1 = layers.BatchNormalization(name = 'stage2_unit7_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus8)
165 | stage2_unit7_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit7_bn1)
166 | stage2_unit7_conv1 = convolution(weights_dict, name='stage2_unit7_conv1', input=stage2_unit7_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
167 | stage2_unit7_bn2 = layers.BatchNormalization(name = 'stage2_unit7_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit7_conv1)
168 | stage2_unit7_relu1 = layers.PReLU(name='stage2_unit7_relu1')(stage2_unit7_bn2)
169 | stage2_unit7_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit7_relu1)
170 | stage2_unit7_conv2 = convolution(weights_dict, name='stage2_unit7_conv2', input=stage2_unit7_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
171 | stage2_unit7_bn3 = layers.BatchNormalization(name = 'stage2_unit7_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit7_conv2)
172 | plus9 = Lambda(lambda x: x[0] + x[1])([stage2_unit7_bn3, plus8])
173 | stage2_unit8_bn1 = layers.BatchNormalization(name = 'stage2_unit8_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus9)
174 | stage2_unit8_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit8_bn1)
175 | stage2_unit8_conv1 = convolution(weights_dict, name='stage2_unit8_conv1', input=stage2_unit8_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
176 | stage2_unit8_bn2 = layers.BatchNormalization(name = 'stage2_unit8_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit8_conv1)
177 | stage2_unit8_relu1 = layers.PReLU(name='stage2_unit8_relu1')(stage2_unit8_bn2)
178 | stage2_unit8_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit8_relu1)
179 | stage2_unit8_conv2 = convolution(weights_dict, name='stage2_unit8_conv2', input=stage2_unit8_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
180 | stage2_unit8_bn3 = layers.BatchNormalization(name = 'stage2_unit8_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit8_conv2)
181 | plus10 = Lambda(lambda x: x[0] + x[1])([stage2_unit8_bn3, plus9])
182 | stage2_unit9_bn1 = layers.BatchNormalization(name = 'stage2_unit9_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus10)
183 | stage2_unit9_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit9_bn1)
184 | stage2_unit9_conv1 = convolution(weights_dict, name='stage2_unit9_conv1', input=stage2_unit9_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
185 | stage2_unit9_bn2 = layers.BatchNormalization(name = 'stage2_unit9_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit9_conv1)
186 | stage2_unit9_relu1 = layers.PReLU(name='stage2_unit9_relu1')(stage2_unit9_bn2)
187 | stage2_unit9_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit9_relu1)
188 | stage2_unit9_conv2 = convolution(weights_dict, name='stage2_unit9_conv2', input=stage2_unit9_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
189 | stage2_unit9_bn3 = layers.BatchNormalization(name = 'stage2_unit9_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit9_conv2)
190 | plus11 = Lambda(lambda x: x[0] + x[1])([stage2_unit9_bn3, plus10])
191 | stage2_unit10_bn1 = layers.BatchNormalization(name = 'stage2_unit10_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus11)
192 | stage2_unit10_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit10_bn1)
193 | stage2_unit10_conv1 = convolution(weights_dict, name='stage2_unit10_conv1', input=stage2_unit10_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
194 | stage2_unit10_bn2 = layers.BatchNormalization(name = 'stage2_unit10_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit10_conv1)
195 | stage2_unit10_relu1 = layers.PReLU(name='stage2_unit10_relu1')(stage2_unit10_bn2)
196 | stage2_unit10_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit10_relu1)
197 | stage2_unit10_conv2 = convolution(weights_dict, name='stage2_unit10_conv2', input=stage2_unit10_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
198 | stage2_unit10_bn3 = layers.BatchNormalization(name = 'stage2_unit10_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit10_conv2)
199 | plus12 = Lambda(lambda x: x[0] + x[1])([stage2_unit10_bn3, plus11])
200 | stage2_unit11_bn1 = layers.BatchNormalization(name = 'stage2_unit11_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus12)
201 | stage2_unit11_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit11_bn1)
202 | stage2_unit11_conv1 = convolution(weights_dict, name='stage2_unit11_conv1', input=stage2_unit11_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
203 | stage2_unit11_bn2 = layers.BatchNormalization(name = 'stage2_unit11_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit11_conv1)
204 | stage2_unit11_relu1 = layers.PReLU(name='stage2_unit11_relu1')(stage2_unit11_bn2)
205 | stage2_unit11_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit11_relu1)
206 | stage2_unit11_conv2 = convolution(weights_dict, name='stage2_unit11_conv2', input=stage2_unit11_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
207 | stage2_unit11_bn3 = layers.BatchNormalization(name = 'stage2_unit11_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit11_conv2)
208 | plus13 = Lambda(lambda x: x[0] + x[1])([stage2_unit11_bn3, plus12])
209 | stage2_unit12_bn1 = layers.BatchNormalization(name = 'stage2_unit12_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus13)
210 | stage2_unit12_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit12_bn1)
211 | stage2_unit12_conv1 = convolution(weights_dict, name='stage2_unit12_conv1', input=stage2_unit12_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
212 | stage2_unit12_bn2 = layers.BatchNormalization(name = 'stage2_unit12_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit12_conv1)
213 | stage2_unit12_relu1 = layers.PReLU(name='stage2_unit12_relu1')(stage2_unit12_bn2)
214 | stage2_unit12_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit12_relu1)
215 | stage2_unit12_conv2 = convolution(weights_dict, name='stage2_unit12_conv2', input=stage2_unit12_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
216 | stage2_unit12_bn3 = layers.BatchNormalization(name = 'stage2_unit12_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit12_conv2)
217 | plus14 = Lambda(lambda x: x[0] + x[1])([stage2_unit12_bn3, plus13])
218 | stage2_unit13_bn1 = layers.BatchNormalization(name = 'stage2_unit13_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus14)
219 | stage2_unit13_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit13_bn1)
220 | stage2_unit13_conv1 = convolution(weights_dict, name='stage2_unit13_conv1', input=stage2_unit13_conv1_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
221 | stage2_unit13_bn2 = layers.BatchNormalization(name = 'stage2_unit13_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit13_conv1)
222 | stage2_unit13_relu1 = layers.PReLU(name='stage2_unit13_relu1')(stage2_unit13_bn2)
223 | stage2_unit13_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage2_unit13_relu1)
224 | stage2_unit13_conv2 = convolution(weights_dict, name='stage2_unit13_conv2', input=stage2_unit13_conv2_input, group=1, conv_type='layers.Conv2D', filters=128, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
225 | stage2_unit13_bn3 = layers.BatchNormalization(name = 'stage2_unit13_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage2_unit13_conv2)
226 | plus15 = Lambda(lambda x: x[0] + x[1])([stage2_unit13_bn3, plus14])
227 | stage3_unit1_bn1 = layers.BatchNormalization(name = 'stage3_unit1_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus15)
228 | stage3_unit1_conv1sc = convolution(weights_dict, name='stage3_unit1_conv1sc', input=plus15, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(1, 1), strides=(2, 2), dilation_rate=(1, 1), padding='valid', use_bias=False)
229 | stage3_unit1_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit1_bn1)
230 | stage3_unit1_conv1 = convolution(weights_dict, name='stage3_unit1_conv1', input=stage3_unit1_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
231 | stage3_unit1_sc = layers.BatchNormalization(name = 'stage3_unit1_sc', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit1_conv1sc)
232 | stage3_unit1_bn2 = layers.BatchNormalization(name = 'stage3_unit1_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit1_conv1)
233 | stage3_unit1_relu1 = layers.PReLU(name='stage3_unit1_relu1')(stage3_unit1_bn2)
234 | stage3_unit1_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit1_relu1)
235 | stage3_unit1_conv2 = convolution(weights_dict, name='stage3_unit1_conv2', input=stage3_unit1_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(2, 2), dilation_rate=(1, 1), padding='valid', use_bias=False)
236 | stage3_unit1_bn3 = layers.BatchNormalization(name = 'stage3_unit1_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit1_conv2)
237 | plus16 = Lambda(lambda x: x[0] + x[1])([stage3_unit1_bn3, stage3_unit1_sc])
238 | stage3_unit2_bn1 = layers.BatchNormalization(name = 'stage3_unit2_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus16)
239 | stage3_unit2_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit2_bn1)
240 | stage3_unit2_conv1 = convolution(weights_dict, name='stage3_unit2_conv1', input=stage3_unit2_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
241 | stage3_unit2_bn2 = layers.BatchNormalization(name = 'stage3_unit2_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit2_conv1)
242 | stage3_unit2_relu1 = layers.PReLU(name='stage3_unit2_relu1')(stage3_unit2_bn2)
243 | stage3_unit2_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit2_relu1)
244 | stage3_unit2_conv2 = convolution(weights_dict, name='stage3_unit2_conv2', input=stage3_unit2_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
245 | stage3_unit2_bn3 = layers.BatchNormalization(name = 'stage3_unit2_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit2_conv2)
246 | plus17 = Lambda(lambda x: x[0] + x[1])([stage3_unit2_bn3, plus16])
247 | stage3_unit3_bn1 = layers.BatchNormalization(name = 'stage3_unit3_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus17)
248 | stage3_unit3_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit3_bn1)
249 | stage3_unit3_conv1 = convolution(weights_dict, name='stage3_unit3_conv1', input=stage3_unit3_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
250 | stage3_unit3_bn2 = layers.BatchNormalization(name = 'stage3_unit3_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit3_conv1)
251 | stage3_unit3_relu1 = layers.PReLU(name='stage3_unit3_relu1')(stage3_unit3_bn2)
252 | stage3_unit3_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit3_relu1)
253 | stage3_unit3_conv2 = convolution(weights_dict, name='stage3_unit3_conv2', input=stage3_unit3_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
254 | stage3_unit3_bn3 = layers.BatchNormalization(name = 'stage3_unit3_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit3_conv2)
255 | plus18 = Lambda(lambda x: x[0] + x[1])([stage3_unit3_bn3, plus17])
256 | stage3_unit4_bn1 = layers.BatchNormalization(name = 'stage3_unit4_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus18)
257 | stage3_unit4_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit4_bn1)
258 | stage3_unit4_conv1 = convolution(weights_dict, name='stage3_unit4_conv1', input=stage3_unit4_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
259 | stage3_unit4_bn2 = layers.BatchNormalization(name = 'stage3_unit4_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit4_conv1)
260 | stage3_unit4_relu1 = layers.PReLU(name='stage3_unit4_relu1')(stage3_unit4_bn2)
261 | stage3_unit4_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit4_relu1)
262 | stage3_unit4_conv2 = convolution(weights_dict, name='stage3_unit4_conv2', input=stage3_unit4_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
263 | stage3_unit4_bn3 = layers.BatchNormalization(name = 'stage3_unit4_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit4_conv2)
264 | plus19 = Lambda(lambda x: x[0] + x[1])([stage3_unit4_bn3, plus18])
265 | stage3_unit5_bn1 = layers.BatchNormalization(name = 'stage3_unit5_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus19)
266 | stage3_unit5_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit5_bn1)
267 | stage3_unit5_conv1 = convolution(weights_dict, name='stage3_unit5_conv1', input=stage3_unit5_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
268 | stage3_unit5_bn2 = layers.BatchNormalization(name = 'stage3_unit5_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit5_conv1)
269 | stage3_unit5_relu1 = layers.PReLU(name='stage3_unit5_relu1')(stage3_unit5_bn2)
270 | stage3_unit5_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit5_relu1)
271 | stage3_unit5_conv2 = convolution(weights_dict, name='stage3_unit5_conv2', input=stage3_unit5_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
272 | stage3_unit5_bn3 = layers.BatchNormalization(name = 'stage3_unit5_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit5_conv2)
273 | plus20 = Lambda(lambda x: x[0] + x[1])([stage3_unit5_bn3, plus19])
274 | stage3_unit6_bn1 = layers.BatchNormalization(name = 'stage3_unit6_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus20)
275 | stage3_unit6_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit6_bn1)
276 | stage3_unit6_conv1 = convolution(weights_dict, name='stage3_unit6_conv1', input=stage3_unit6_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
277 | stage3_unit6_bn2 = layers.BatchNormalization(name = 'stage3_unit6_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit6_conv1)
278 | stage3_unit6_relu1 = layers.PReLU(name='stage3_unit6_relu1')(stage3_unit6_bn2)
279 | stage3_unit6_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit6_relu1)
280 | stage3_unit6_conv2 = convolution(weights_dict, name='stage3_unit6_conv2', input=stage3_unit6_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
281 | stage3_unit6_bn3 = layers.BatchNormalization(name = 'stage3_unit6_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit6_conv2)
282 | plus21 = Lambda(lambda x: x[0] + x[1])([stage3_unit6_bn3, plus20])
283 | stage3_unit7_bn1 = layers.BatchNormalization(name = 'stage3_unit7_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus21)
284 | stage3_unit7_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit7_bn1)
285 | stage3_unit7_conv1 = convolution(weights_dict, name='stage3_unit7_conv1', input=stage3_unit7_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
286 | stage3_unit7_bn2 = layers.BatchNormalization(name = 'stage3_unit7_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit7_conv1)
287 | stage3_unit7_relu1 = layers.PReLU(name='stage3_unit7_relu1')(stage3_unit7_bn2)
288 | stage3_unit7_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit7_relu1)
289 | stage3_unit7_conv2 = convolution(weights_dict, name='stage3_unit7_conv2', input=stage3_unit7_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
290 | stage3_unit7_bn3 = layers.BatchNormalization(name = 'stage3_unit7_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit7_conv2)
291 | plus22 = Lambda(lambda x: x[0] + x[1])([stage3_unit7_bn3, plus21])
292 | stage3_unit8_bn1 = layers.BatchNormalization(name = 'stage3_unit8_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus22)
293 | stage3_unit8_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit8_bn1)
294 | stage3_unit8_conv1 = convolution(weights_dict, name='stage3_unit8_conv1', input=stage3_unit8_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
295 | stage3_unit8_bn2 = layers.BatchNormalization(name = 'stage3_unit8_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit8_conv1)
296 | stage3_unit8_relu1 = layers.PReLU(name='stage3_unit8_relu1')(stage3_unit8_bn2)
297 | stage3_unit8_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit8_relu1)
298 | stage3_unit8_conv2 = convolution(weights_dict, name='stage3_unit8_conv2', input=stage3_unit8_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
299 | stage3_unit8_bn3 = layers.BatchNormalization(name = 'stage3_unit8_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit8_conv2)
300 | plus23 = Lambda(lambda x: x[0] + x[1])([stage3_unit8_bn3, plus22])
301 | stage3_unit9_bn1 = layers.BatchNormalization(name = 'stage3_unit9_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus23)
302 | stage3_unit9_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit9_bn1)
303 | stage3_unit9_conv1 = convolution(weights_dict, name='stage3_unit9_conv1', input=stage3_unit9_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
304 | stage3_unit9_bn2 = layers.BatchNormalization(name = 'stage3_unit9_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit9_conv1)
305 | stage3_unit9_relu1 = layers.PReLU(name='stage3_unit9_relu1')(stage3_unit9_bn2)
306 | stage3_unit9_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit9_relu1)
307 | stage3_unit9_conv2 = convolution(weights_dict, name='stage3_unit9_conv2', input=stage3_unit9_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
308 | stage3_unit9_bn3 = layers.BatchNormalization(name = 'stage3_unit9_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit9_conv2)
309 | plus24 = Lambda(lambda x: x[0] + x[1])([stage3_unit9_bn3, plus23])
310 | stage3_unit10_bn1 = layers.BatchNormalization(name = 'stage3_unit10_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus24)
311 | stage3_unit10_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit10_bn1)
312 | stage3_unit10_conv1 = convolution(weights_dict, name='stage3_unit10_conv1', input=stage3_unit10_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
313 | stage3_unit10_bn2 = layers.BatchNormalization(name = 'stage3_unit10_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit10_conv1)
314 | stage3_unit10_relu1 = layers.PReLU(name='stage3_unit10_relu1')(stage3_unit10_bn2)
315 | stage3_unit10_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit10_relu1)
316 | stage3_unit10_conv2 = convolution(weights_dict, name='stage3_unit10_conv2', input=stage3_unit10_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
317 | stage3_unit10_bn3 = layers.BatchNormalization(name = 'stage3_unit10_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit10_conv2)
318 | plus25 = Lambda(lambda x: x[0] + x[1])([stage3_unit10_bn3, plus24])
319 | stage3_unit11_bn1 = layers.BatchNormalization(name = 'stage3_unit11_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus25)
320 | stage3_unit11_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit11_bn1)
321 | stage3_unit11_conv1 = convolution(weights_dict, name='stage3_unit11_conv1', input=stage3_unit11_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
322 | stage3_unit11_bn2 = layers.BatchNormalization(name = 'stage3_unit11_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit11_conv1)
323 | stage3_unit11_relu1 = layers.PReLU(name='stage3_unit11_relu1')(stage3_unit11_bn2)
324 | stage3_unit11_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit11_relu1)
325 | stage3_unit11_conv2 = convolution(weights_dict, name='stage3_unit11_conv2', input=stage3_unit11_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
326 | stage3_unit11_bn3 = layers.BatchNormalization(name = 'stage3_unit11_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit11_conv2)
327 | plus26 = Lambda(lambda x: x[0] + x[1])([stage3_unit11_bn3, plus25])
328 | stage3_unit12_bn1 = layers.BatchNormalization(name = 'stage3_unit12_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus26)
329 | stage3_unit12_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit12_bn1)
330 | stage3_unit12_conv1 = convolution(weights_dict, name='stage3_unit12_conv1', input=stage3_unit12_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
331 | stage3_unit12_bn2 = layers.BatchNormalization(name = 'stage3_unit12_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit12_conv1)
332 | stage3_unit12_relu1 = layers.PReLU(name='stage3_unit12_relu1')(stage3_unit12_bn2)
333 | stage3_unit12_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit12_relu1)
334 | stage3_unit12_conv2 = convolution(weights_dict, name='stage3_unit12_conv2', input=stage3_unit12_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
335 | stage3_unit12_bn3 = layers.BatchNormalization(name = 'stage3_unit12_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit12_conv2)
336 | plus27 = Lambda(lambda x: x[0] + x[1])([stage3_unit12_bn3, plus26])
337 | stage3_unit13_bn1 = layers.BatchNormalization(name = 'stage3_unit13_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus27)
338 | stage3_unit13_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit13_bn1)
339 | stage3_unit13_conv1 = convolution(weights_dict, name='stage3_unit13_conv1', input=stage3_unit13_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
340 | stage3_unit13_bn2 = layers.BatchNormalization(name = 'stage3_unit13_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit13_conv1)
341 | stage3_unit13_relu1 = layers.PReLU(name='stage3_unit13_relu1')(stage3_unit13_bn2)
342 | stage3_unit13_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit13_relu1)
343 | stage3_unit13_conv2 = convolution(weights_dict, name='stage3_unit13_conv2', input=stage3_unit13_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
344 | stage3_unit13_bn3 = layers.BatchNormalization(name = 'stage3_unit13_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit13_conv2)
345 | plus28 = Lambda(lambda x: x[0] + x[1])([stage3_unit13_bn3, plus27])
346 | stage3_unit14_bn1 = layers.BatchNormalization(name = 'stage3_unit14_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus28)
347 | stage3_unit14_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit14_bn1)
348 | stage3_unit14_conv1 = convolution(weights_dict, name='stage3_unit14_conv1', input=stage3_unit14_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
349 | stage3_unit14_bn2 = layers.BatchNormalization(name = 'stage3_unit14_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit14_conv1)
350 | stage3_unit14_relu1 = layers.PReLU(name='stage3_unit14_relu1')(stage3_unit14_bn2)
351 | stage3_unit14_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit14_relu1)
352 | stage3_unit14_conv2 = convolution(weights_dict, name='stage3_unit14_conv2', input=stage3_unit14_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
353 | stage3_unit14_bn3 = layers.BatchNormalization(name = 'stage3_unit14_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit14_conv2)
354 | plus29 = Lambda(lambda x: x[0] + x[1])([stage3_unit14_bn3, plus28])
355 | stage3_unit15_bn1 = layers.BatchNormalization(name = 'stage3_unit15_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus29)
356 | stage3_unit15_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit15_bn1)
357 | stage3_unit15_conv1 = convolution(weights_dict, name='stage3_unit15_conv1', input=stage3_unit15_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
358 | stage3_unit15_bn2 = layers.BatchNormalization(name = 'stage3_unit15_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit15_conv1)
359 | stage3_unit15_relu1 = layers.PReLU(name='stage3_unit15_relu1')(stage3_unit15_bn2)
360 | stage3_unit15_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit15_relu1)
361 | stage3_unit15_conv2 = convolution(weights_dict, name='stage3_unit15_conv2', input=stage3_unit15_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
362 | stage3_unit15_bn3 = layers.BatchNormalization(name = 'stage3_unit15_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit15_conv2)
363 | plus30 = Lambda(lambda x: x[0] + x[1])([stage3_unit15_bn3, plus29])
364 | stage3_unit16_bn1 = layers.BatchNormalization(name = 'stage3_unit16_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus30)
365 | stage3_unit16_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit16_bn1)
366 | stage3_unit16_conv1 = convolution(weights_dict, name='stage3_unit16_conv1', input=stage3_unit16_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
367 | stage3_unit16_bn2 = layers.BatchNormalization(name = 'stage3_unit16_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit16_conv1)
368 | stage3_unit16_relu1 = layers.PReLU(name='stage3_unit16_relu1')(stage3_unit16_bn2)
369 | stage3_unit16_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit16_relu1)
370 | stage3_unit16_conv2 = convolution(weights_dict, name='stage3_unit16_conv2', input=stage3_unit16_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
371 | stage3_unit16_bn3 = layers.BatchNormalization(name = 'stage3_unit16_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit16_conv2)
372 | plus31 = Lambda(lambda x: x[0] + x[1])([stage3_unit16_bn3, plus30])
373 | stage3_unit17_bn1 = layers.BatchNormalization(name = 'stage3_unit17_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus31)
374 | stage3_unit17_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit17_bn1)
375 | stage3_unit17_conv1 = convolution(weights_dict, name='stage3_unit17_conv1', input=stage3_unit17_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
376 | stage3_unit17_bn2 = layers.BatchNormalization(name = 'stage3_unit17_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit17_conv1)
377 | stage3_unit17_relu1 = layers.PReLU(name='stage3_unit17_relu1')(stage3_unit17_bn2)
378 | stage3_unit17_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit17_relu1)
379 | stage3_unit17_conv2 = convolution(weights_dict, name='stage3_unit17_conv2', input=stage3_unit17_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
380 | stage3_unit17_bn3 = layers.BatchNormalization(name = 'stage3_unit17_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit17_conv2)
381 | plus32 = Lambda(lambda x: x[0] + x[1])([stage3_unit17_bn3, plus31])
382 | stage3_unit18_bn1 = layers.BatchNormalization(name = 'stage3_unit18_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus32)
383 | stage3_unit18_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit18_bn1)
384 | stage3_unit18_conv1 = convolution(weights_dict, name='stage3_unit18_conv1', input=stage3_unit18_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
385 | stage3_unit18_bn2 = layers.BatchNormalization(name = 'stage3_unit18_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit18_conv1)
386 | stage3_unit18_relu1 = layers.PReLU(name='stage3_unit18_relu1')(stage3_unit18_bn2)
387 | stage3_unit18_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit18_relu1)
388 | stage3_unit18_conv2 = convolution(weights_dict, name='stage3_unit18_conv2', input=stage3_unit18_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
389 | stage3_unit18_bn3 = layers.BatchNormalization(name = 'stage3_unit18_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit18_conv2)
390 | plus33 = Lambda(lambda x: x[0] + x[1])([stage3_unit18_bn3, plus32])
391 | stage3_unit19_bn1 = layers.BatchNormalization(name = 'stage3_unit19_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus33)
392 | stage3_unit19_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit19_bn1)
393 | stage3_unit19_conv1 = convolution(weights_dict, name='stage3_unit19_conv1', input=stage3_unit19_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
394 | stage3_unit19_bn2 = layers.BatchNormalization(name = 'stage3_unit19_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit19_conv1)
395 | stage3_unit19_relu1 = layers.PReLU(name='stage3_unit19_relu1')(stage3_unit19_bn2)
396 | stage3_unit19_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit19_relu1)
397 | stage3_unit19_conv2 = convolution(weights_dict, name='stage3_unit19_conv2', input=stage3_unit19_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
398 | stage3_unit19_bn3 = layers.BatchNormalization(name = 'stage3_unit19_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit19_conv2)
399 | plus34 = Lambda(lambda x: x[0] + x[1])([stage3_unit19_bn3, plus33])
400 | stage3_unit20_bn1 = layers.BatchNormalization(name = 'stage3_unit20_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus34)
401 | stage3_unit20_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit20_bn1)
402 | stage3_unit20_conv1 = convolution(weights_dict, name='stage3_unit20_conv1', input=stage3_unit20_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
403 | stage3_unit20_bn2 = layers.BatchNormalization(name = 'stage3_unit20_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit20_conv1)
404 | stage3_unit20_relu1 = layers.PReLU(name='stage3_unit20_relu1')(stage3_unit20_bn2)
405 | stage3_unit20_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit20_relu1)
406 | stage3_unit20_conv2 = convolution(weights_dict, name='stage3_unit20_conv2', input=stage3_unit20_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
407 | stage3_unit20_bn3 = layers.BatchNormalization(name = 'stage3_unit20_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit20_conv2)
408 | plus35 = Lambda(lambda x: x[0] + x[1])([stage3_unit20_bn3, plus34])
409 | stage3_unit21_bn1 = layers.BatchNormalization(name = 'stage3_unit21_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus35)
410 | stage3_unit21_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit21_bn1)
411 | stage3_unit21_conv1 = convolution(weights_dict, name='stage3_unit21_conv1', input=stage3_unit21_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
412 | stage3_unit21_bn2 = layers.BatchNormalization(name = 'stage3_unit21_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit21_conv1)
413 | stage3_unit21_relu1 = layers.PReLU(name='stage3_unit21_relu1')(stage3_unit21_bn2)
414 | stage3_unit21_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit21_relu1)
415 | stage3_unit21_conv2 = convolution(weights_dict, name='stage3_unit21_conv2', input=stage3_unit21_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
416 | stage3_unit21_bn3 = layers.BatchNormalization(name = 'stage3_unit21_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit21_conv2)
417 | plus36 = Lambda(lambda x: x[0] + x[1])([stage3_unit21_bn3, plus35])
418 | stage3_unit22_bn1 = layers.BatchNormalization(name = 'stage3_unit22_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus36)
419 | stage3_unit22_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit22_bn1)
420 | stage3_unit22_conv1 = convolution(weights_dict, name='stage3_unit22_conv1', input=stage3_unit22_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
421 | stage3_unit22_bn2 = layers.BatchNormalization(name = 'stage3_unit22_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit22_conv1)
422 | stage3_unit22_relu1 = layers.PReLU(name='stage3_unit22_relu1')(stage3_unit22_bn2)
423 | stage3_unit22_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit22_relu1)
424 | stage3_unit22_conv2 = convolution(weights_dict, name='stage3_unit22_conv2', input=stage3_unit22_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
425 | stage3_unit22_bn3 = layers.BatchNormalization(name = 'stage3_unit22_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit22_conv2)
426 | plus37 = Lambda(lambda x: x[0] + x[1])([stage3_unit22_bn3, plus36])
427 | stage3_unit23_bn1 = layers.BatchNormalization(name = 'stage3_unit23_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus37)
428 | stage3_unit23_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit23_bn1)
429 | stage3_unit23_conv1 = convolution(weights_dict, name='stage3_unit23_conv1', input=stage3_unit23_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
430 | stage3_unit23_bn2 = layers.BatchNormalization(name = 'stage3_unit23_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit23_conv1)
431 | stage3_unit23_relu1 = layers.PReLU(name='stage3_unit23_relu1')(stage3_unit23_bn2)
432 | stage3_unit23_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit23_relu1)
433 | stage3_unit23_conv2 = convolution(weights_dict, name='stage3_unit23_conv2', input=stage3_unit23_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
434 | stage3_unit23_bn3 = layers.BatchNormalization(name = 'stage3_unit23_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit23_conv2)
435 | plus38 = Lambda(lambda x: x[0] + x[1])([stage3_unit23_bn3, plus37])
436 | stage3_unit24_bn1 = layers.BatchNormalization(name = 'stage3_unit24_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus38)
437 | stage3_unit24_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit24_bn1)
438 | stage3_unit24_conv1 = convolution(weights_dict, name='stage3_unit24_conv1', input=stage3_unit24_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
439 | stage3_unit24_bn2 = layers.BatchNormalization(name = 'stage3_unit24_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit24_conv1)
440 | stage3_unit24_relu1 = layers.PReLU(name='stage3_unit24_relu1')(stage3_unit24_bn2)
441 | stage3_unit24_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit24_relu1)
442 | stage3_unit24_conv2 = convolution(weights_dict, name='stage3_unit24_conv2', input=stage3_unit24_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
443 | stage3_unit24_bn3 = layers.BatchNormalization(name = 'stage3_unit24_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit24_conv2)
444 | plus39 = Lambda(lambda x: x[0] + x[1])([stage3_unit24_bn3, plus38])
445 | stage3_unit25_bn1 = layers.BatchNormalization(name = 'stage3_unit25_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus39)
446 | stage3_unit25_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit25_bn1)
447 | stage3_unit25_conv1 = convolution(weights_dict, name='stage3_unit25_conv1', input=stage3_unit25_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
448 | stage3_unit25_bn2 = layers.BatchNormalization(name = 'stage3_unit25_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit25_conv1)
449 | stage3_unit25_relu1 = layers.PReLU(name='stage3_unit25_relu1')(stage3_unit25_bn2)
450 | stage3_unit25_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit25_relu1)
451 | stage3_unit25_conv2 = convolution(weights_dict, name='stage3_unit25_conv2', input=stage3_unit25_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
452 | stage3_unit25_bn3 = layers.BatchNormalization(name = 'stage3_unit25_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit25_conv2)
453 | plus40 = Lambda(lambda x: x[0] + x[1])([stage3_unit25_bn3, plus39])
454 | stage3_unit26_bn1 = layers.BatchNormalization(name = 'stage3_unit26_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus40)
455 | stage3_unit26_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit26_bn1)
456 | stage3_unit26_conv1 = convolution(weights_dict, name='stage3_unit26_conv1', input=stage3_unit26_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
457 | stage3_unit26_bn2 = layers.BatchNormalization(name = 'stage3_unit26_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit26_conv1)
458 | stage3_unit26_relu1 = layers.PReLU(name='stage3_unit26_relu1')(stage3_unit26_bn2)
459 | stage3_unit26_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit26_relu1)
460 | stage3_unit26_conv2 = convolution(weights_dict, name='stage3_unit26_conv2', input=stage3_unit26_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
461 | stage3_unit26_bn3 = layers.BatchNormalization(name = 'stage3_unit26_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit26_conv2)
462 | plus41 = Lambda(lambda x: x[0] + x[1])([stage3_unit26_bn3, plus40])
463 | stage3_unit27_bn1 = layers.BatchNormalization(name = 'stage3_unit27_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus41)
464 | stage3_unit27_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit27_bn1)
465 | stage3_unit27_conv1 = convolution(weights_dict, name='stage3_unit27_conv1', input=stage3_unit27_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
466 | stage3_unit27_bn2 = layers.BatchNormalization(name = 'stage3_unit27_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit27_conv1)
467 | stage3_unit27_relu1 = layers.PReLU(name='stage3_unit27_relu1')(stage3_unit27_bn2)
468 | stage3_unit27_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit27_relu1)
469 | stage3_unit27_conv2 = convolution(weights_dict, name='stage3_unit27_conv2', input=stage3_unit27_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
470 | stage3_unit27_bn3 = layers.BatchNormalization(name = 'stage3_unit27_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit27_conv2)
471 | plus42 = Lambda(lambda x: x[0] + x[1])([stage3_unit27_bn3, plus41])
472 | stage3_unit28_bn1 = layers.BatchNormalization(name = 'stage3_unit28_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus42)
473 | stage3_unit28_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit28_bn1)
474 | stage3_unit28_conv1 = convolution(weights_dict, name='stage3_unit28_conv1', input=stage3_unit28_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
475 | stage3_unit28_bn2 = layers.BatchNormalization(name = 'stage3_unit28_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit28_conv1)
476 | stage3_unit28_relu1 = layers.PReLU(name='stage3_unit28_relu1')(stage3_unit28_bn2)
477 | stage3_unit28_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit28_relu1)
478 | stage3_unit28_conv2 = convolution(weights_dict, name='stage3_unit28_conv2', input=stage3_unit28_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
479 | stage3_unit28_bn3 = layers.BatchNormalization(name = 'stage3_unit28_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit28_conv2)
480 | plus43 = Lambda(lambda x: x[0] + x[1])([stage3_unit28_bn3, plus42])
481 | stage3_unit29_bn1 = layers.BatchNormalization(name = 'stage3_unit29_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus43)
482 | stage3_unit29_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit29_bn1)
483 | stage3_unit29_conv1 = convolution(weights_dict, name='stage3_unit29_conv1', input=stage3_unit29_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
484 | stage3_unit29_bn2 = layers.BatchNormalization(name = 'stage3_unit29_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit29_conv1)
485 | stage3_unit29_relu1 = layers.PReLU(name='stage3_unit29_relu1')(stage3_unit29_bn2)
486 | stage3_unit29_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit29_relu1)
487 | stage3_unit29_conv2 = convolution(weights_dict, name='stage3_unit29_conv2', input=stage3_unit29_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
488 | stage3_unit29_bn3 = layers.BatchNormalization(name = 'stage3_unit29_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit29_conv2)
489 | plus44 = Lambda(lambda x: x[0] + x[1])([stage3_unit29_bn3, plus43])
490 | stage3_unit30_bn1 = layers.BatchNormalization(name = 'stage3_unit30_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus44)
491 | stage3_unit30_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit30_bn1)
492 | stage3_unit30_conv1 = convolution(weights_dict, name='stage3_unit30_conv1', input=stage3_unit30_conv1_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
493 | stage3_unit30_bn2 = layers.BatchNormalization(name = 'stage3_unit30_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit30_conv1)
494 | stage3_unit30_relu1 = layers.PReLU(name='stage3_unit30_relu1')(stage3_unit30_bn2)
495 | stage3_unit30_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage3_unit30_relu1)
496 | stage3_unit30_conv2 = convolution(weights_dict, name='stage3_unit30_conv2', input=stage3_unit30_conv2_input, group=1, conv_type='layers.Conv2D', filters=256, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
497 | stage3_unit30_bn3 = layers.BatchNormalization(name = 'stage3_unit30_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage3_unit30_conv2)
498 | plus45 = Lambda(lambda x: x[0] + x[1])([stage3_unit30_bn3, plus44])
499 | stage4_unit1_bn1 = layers.BatchNormalization(name = 'stage4_unit1_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus45)
500 | stage4_unit1_conv1sc = convolution(weights_dict, name='stage4_unit1_conv1sc', input=plus45, group=1, conv_type='layers.Conv2D', filters=512, kernel_size=(1, 1), strides=(2, 2), dilation_rate=(1, 1), padding='valid', use_bias=False)
501 | stage4_unit1_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage4_unit1_bn1)
502 | stage4_unit1_conv1 = convolution(weights_dict, name='stage4_unit1_conv1', input=stage4_unit1_conv1_input, group=1, conv_type='layers.Conv2D', filters=512, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
503 | stage4_unit1_sc = layers.BatchNormalization(name = 'stage4_unit1_sc', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage4_unit1_conv1sc)
504 | stage4_unit1_bn2 = layers.BatchNormalization(name = 'stage4_unit1_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage4_unit1_conv1)
505 | stage4_unit1_relu1 = layers.PReLU(name='stage4_unit1_relu1')(stage4_unit1_bn2)
506 | stage4_unit1_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage4_unit1_relu1)
507 | stage4_unit1_conv2 = convolution(weights_dict, name='stage4_unit1_conv2', input=stage4_unit1_conv2_input, group=1, conv_type='layers.Conv2D', filters=512, kernel_size=(3, 3), strides=(2, 2), dilation_rate=(1, 1), padding='valid', use_bias=False)
508 | stage4_unit1_bn3 = layers.BatchNormalization(name = 'stage4_unit1_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage4_unit1_conv2)
509 | plus46 = Lambda(lambda x: x[0] + x[1])([stage4_unit1_bn3, stage4_unit1_sc])
510 | stage4_unit2_bn1 = layers.BatchNormalization(name = 'stage4_unit2_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus46)
511 | stage4_unit2_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage4_unit2_bn1)
512 | stage4_unit2_conv1 = convolution(weights_dict, name='stage4_unit2_conv1', input=stage4_unit2_conv1_input, group=1, conv_type='layers.Conv2D', filters=512, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
513 | stage4_unit2_bn2 = layers.BatchNormalization(name = 'stage4_unit2_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage4_unit2_conv1)
514 | stage4_unit2_relu1 = layers.PReLU(name='stage4_unit2_relu1')(stage4_unit2_bn2)
515 | stage4_unit2_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage4_unit2_relu1)
516 | stage4_unit2_conv2 = convolution(weights_dict, name='stage4_unit2_conv2', input=stage4_unit2_conv2_input, group=1, conv_type='layers.Conv2D', filters=512, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
517 | stage4_unit2_bn3 = layers.BatchNormalization(name = 'stage4_unit2_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage4_unit2_conv2)
518 | plus47 = Lambda(lambda x: x[0] + x[1])([stage4_unit2_bn3, plus46])
519 | stage4_unit3_bn1 = layers.BatchNormalization(name = 'stage4_unit3_bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus47)
520 | stage4_unit3_conv1_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage4_unit3_bn1)
521 | stage4_unit3_conv1 = convolution(weights_dict, name='stage4_unit3_conv1', input=stage4_unit3_conv1_input, group=1, conv_type='layers.Conv2D', filters=512, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
522 | stage4_unit3_bn2 = layers.BatchNormalization(name = 'stage4_unit3_bn2', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage4_unit3_conv1)
523 | stage4_unit3_relu1 = layers.PReLU(name='stage4_unit3_relu1')(stage4_unit3_bn2)
524 | stage4_unit3_conv2_input = layers.ZeroPadding2D(padding = ((1, 1), (1, 1)))(stage4_unit3_relu1)
525 | stage4_unit3_conv2 = convolution(weights_dict, name='stage4_unit3_conv2', input=stage4_unit3_conv2_input, group=1, conv_type='layers.Conv2D', filters=512, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(1, 1), padding='valid', use_bias=False)
526 | stage4_unit3_bn3 = layers.BatchNormalization(name = 'stage4_unit3_bn3', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(stage4_unit3_conv2)
527 | plus48 = Lambda(lambda x: x[0] + x[1])([stage4_unit3_bn3, plus47])
528 | bn1 = layers.BatchNormalization(name = 'bn1', axis = 3, epsilon = 1.9999999494757503e-05, center = True, scale = True)(plus48)
529 | flatten = layers.Flatten(name = 'flatten')(bn1)
530 | dropout0 = layers.Dropout(name = 'dropout0', rate = 0.4000000059604645, seed = None)(flatten)
531 | pre_fc1 = layers.Dense(name = 'pre_fc1', units = 512, use_bias = True)(dropout0)
532 | fc1 = layers.BatchNormalization(name = 'fc1', axis = 1, epsilon = 1.9999999494757503e-05, center = True, scale = False)(pre_fc1)
533 | model = Model(inputs = [data], outputs = [fc1])
534 | set_layer_weights(model, weights_dict)
535 | return model
536 |
537 |
538 | def convolution(weights_dict, name, input, group, conv_type, filters=None, **kwargs):
539 | if not conv_type.startswith('layer'):
540 | layer = layers.DepthwiseConv2D(name=name, **kwargs)(input)  # legacy conv_type fallback; tf.keras.applications no longer exports DepthwiseConv2D
541 | return layer
542 | elif conv_type == 'layers.DepthwiseConv2D':
543 | layer = layers.DepthwiseConv2D(name=name, **kwargs)(input)
544 | return layer
545 |
546 | inp_filters = K.int_shape(input)[-1]
547 | inp_grouped_channels = int(inp_filters / group)
548 | out_grouped_channels = int(filters / group)
549 | group_list = []
550 | if group == 1:
551 | func = getattr(layers, conv_type.split('.')[-1])
552 | layer = func(name = name, filters = filters, **kwargs)(input)
553 | return layer
554 | weight_groups = list()
555 | if weights_dict is not None:
556 | w = np.array(weights_dict[name]['weights'])
557 | weight_groups = np.split(w, indices_or_sections=group, axis=-1)
558 | for c in range(group):
559 | x = layers.Lambda(lambda z, c=c: z[..., c * inp_grouped_channels:(c + 1) * inp_grouped_channels])(input)  # bind c per group to avoid late-binding in the closure
560 | x = layers.Conv2D(name=name + "_" + str(c), filters=out_grouped_channels, **kwargs)(x)
561 | weights_dict[name + "_" + str(c)] = dict()
562 | weights_dict[name + "_" + str(c)]['weights'] = weight_groups[c]
563 | group_list.append(x)
564 | layer = layers.concatenate(group_list, axis = -1)
565 | if 'bias' in weights_dict[name]:
566 | b = K.variable(weights_dict[name]['bias'], name = name + "_bias")
567 | layer = layer + b
568 | return layer
569 |
570 | def mul_constant(weight_factor, layer_name):
571 | # scale the given tensor by a constant factor
572 | weight = Lambda(lambda x: x * weight_factor)
573 | return weight(layer_name)
574 |
575 |
576 |
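577 | # Usage sketch for the helpers above (added illustration, not original code;
578 | # 'demo_conv' is a placeholder name). `convolution` dispatches on `conv_type`
579 | # and, for group > 1, slices the input channels, convolves each slice, and
580 | # concatenates the results:
581 | #   y = convolution(weights_dict, name='demo_conv', input=x, group=1,
582 | #                   conv_type='layers.Conv2D', filters=64, kernel_size=(3, 3),
583 | #                   strides=(1, 1), dilation_rate=(1, 1), padding='valid',
584 | #                   use_bias=False)
585 | # `mul_constant` scales a tensor by a constant, e.g. emb = mul_constant(0.5, fc1).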
--------------------------------------------------------------------------------
/src/preprocessing.py:
--------------------------------------------------------------------------------
1 | # Preprocessing of Images
2 |
3 | # This script provides the image preprocessing functions using MTCNN
4 | # Parts of this file are strongly adapted from:
5 | # https://github.com/deepinsight/insightface/blob/master/src/common/face_preprocess.py
6 |
7 | # Author: Marco Huber, 2020
8 | # Fraunhofer IGD
9 | # marco.huber[at]igd.fraunhofer.de
10 |
11 |
12 | import cv2
13 | import numpy as np
14 |
15 | from mtcnn import MTCNN
16 | from skimage import transform
17 |
18 | def setup_img(img):
19 | """
20 | Prepares the input image
21 |
22 | Parameters
23 | ----------
24 | img : image array
25 | The raw input image to be aligned and preprocessed.
26 |
27 | Returns
28 | -------
29 | in_img : numpy ndarray
30 | Prepared image ready to be fed into the network.
31 |
32 | """
33 |
34 | # prepare image
35 | in_img = preprocess_img(img)
36 |
37 | if in_img is None:
38 | return None
39 |
40 | in_img = np.expand_dims(in_img, axis=0)
41 | in_img = np.moveaxis(in_img, 1, 3)
42 |
43 | return in_img
44 |
45 | def preprocess_img(img):
46 | """
47 | Aligns and preprocesses the provided image
48 |
49 | Parameters
50 | ----------
51 | img : image array
52 | The image to be aligned and preprocessed.
53 |
54 | Returns
55 | -------
56 | nimg : numpy ndarray
57 | Aligned and processed image.
58 |
59 | """
60 | # define thresholds
61 | thrs = [0.6,0.7,0.8]
62 |
63 | # get detector
64 | detector = MTCNN(steps_threshold=thrs)
65 |
66 | # detect face
67 | detected = detector.detect_faces(img)
68 |
69 | if detected is None or detected == []:
70 | print("MTCNN could not detected a face.")
71 | return None
72 |
73 | # get box and points
74 | bbox, points = detected[0]['box'], detected[0]['keypoints']
75 |
76 | # rearrange points
77 | p_points = []
78 | for v in points.values():
79 | p_points.append(v)
80 |
81 | p_points = np.asarray(p_points)
82 |
83 | # preprocess
84 | nimg = preprocess(img, bbox, p_points, image_size="112,112")
85 | nimg = cv2.cvtColor(nimg, cv2.COLOR_BGR2RGB)
86 |
87 | return np.transpose(nimg, (2,0,1))
88 |
89 |
90 | def read_image(img_path, **kwargs):
91 |
92 | mode = kwargs.get('mode', 'rgb')
93 | layout = kwargs.get('layout', 'HWC')
94 |
95 | if mode=='gray':
96 | img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
97 | else:
98 | img = cv2.imread(img_path, cv2.IMREAD_COLOR)
99 | if mode=='rgb':
100 | img = img[...,::-1]
101 | if layout=='CHW':
102 | img = np.transpose(img, (2,0,1))
103 | return img
104 |
105 | def preprocess(img, bbox=None, landmark=None, **kwargs):
106 |
107 | if isinstance(img, str):
108 | img = read_image(img, **kwargs)
109 |
110 | M = None
111 | image_size = []
112 | str_image_size = kwargs.get('image_size', '')
113 |
114 | if len(str_image_size)>0:
115 | image_size = [int(x) for x in str_image_size.split(',')]
116 | if len(image_size)==1:
117 | image_size = [image_size[0], image_size[0]]
118 | assert len(image_size)==2
119 | assert image_size[0]==112 or image_size[0]==160
120 | assert image_size[0]==112 or image_size[1]==96 or image_size[0]==160
121 |
122 | if landmark is not None:
123 | assert len(image_size)==2
124 | src = np.array([
125 | [30.2946, 51.6963],
126 | [65.5318, 51.5014],
127 | [48.0252, 71.7366],
128 | [33.5493, 92.3655],
129 | [62.7299, 92.2041] ], dtype=np.float32)
130 | if image_size[1]==112 or image_size[1]==160:
131 | src[:,0] += 8.0
132 | dst = landmark.astype(np.float32)
133 |
134 | tform = transform.SimilarityTransform()
135 | tform.estimate(dst, src)
136 | M = tform.params[0:2,:]
137 |
138 | if M is None:
139 | if bbox is None:
140 | det = np.zeros(4, dtype=np.int32)
141 | det[0] = int(img.shape[1]*0.0625)
142 | det[1] = int(img.shape[0]*0.0625)
143 | det[2] = img.shape[1] - det[0]
144 | det[3] = img.shape[0] - det[1]
145 | else:
146 | det = bbox
147 | margin = kwargs.get('margin', 44)
148 | bb = np.zeros(4, dtype=np.int32)
149 | bb[0] = np.maximum(det[0]-margin/2, 0)
150 | bb[1] = np.maximum(det[1]-margin/2, 0)
151 | bb[2] = np.minimum(det[2]+margin/2, img.shape[1])
152 | bb[3] = np.minimum(det[3]+margin/2, img.shape[0])
153 | ret = img[bb[1]:bb[3],bb[0]:bb[2],:]
154 | if len(image_size)>0:
155 | ret = cv2.resize(ret, (image_size[1], image_size[0]))
156 | return ret
157 | else:
158 | assert len(image_size)==2
159 | warped = cv2.warpAffine(img,M,(image_size[1],image_size[0]), borderValue = 0.0)
160 | return warped
161 |
162 |
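163 | # Usage sketch (added illustration; the file name is a placeholder):
164 | #   img = read_image("face.png", mode='rgb')   # HWC image, RGB order
165 | #   net_input = setup_img(img)                 # None if MTCNN finds no face
166 | # setup_img returns an array of shape (1, 112, 112, 3) ready for the network.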
--------------------------------------------------------------------------------
/src/serfiq.py:
--------------------------------------------------------------------------------
1 | # Implementation of SER-FIQ based on Keras
2 |
3 | # Author: Marco Huber, 2020
4 | # Fraunhofer IGD
5 | # marco.huber[at]igd.fraunhofer.de
6 |
7 | import numpy as np
8 |
9 | from tensorflow.keras import backend as K
10 | from sklearn.preprocessing import normalize
11 | from sklearn.metrics.pairwise import euclidean_distances
12 |
13 | from auxiliary_models import get_pre_dropout_state, get_stochastic_pass_model
14 |
15 | def get_scaled_quality(img, model, T, alpha, r):
16 | """
17 | Returns the scaled SER-FIQ quality of an image
18 |
19 | Performs Unsupervised Estimation of Face Image Quality Based on Stochastic
20 | Embedding Robustness (SER-FIQ) based on the arcface keras model.
21 |
22 | SER-FIQ was proposed by Terhörst, Kolf, Damer, Kirchbuchner and
23 | Kuijper at CVPR, 2020
24 |
25 | Parameters
26 | ----------
27 | img : preprocessed and aligned image
28 | The image to be processed.
29 | model : Keras model
30 | The model to be used.
31 | T : int
32 | Number of stochastic forward passes.
33 | alpha : float
34 | Scaling parameter.
35 | r : float
36 | Scaling parameter.
37 |
38 | Returns
39 | -------
40 | Robustness score: float64
41 | The scaled SER-FIQ score.
42 |
43 | """
44 |
45 | # get pre-dropout state
46 | state = get_pre_dropout_state(img, model)
47 |
48 | # get stochastic part
49 | stochastic_model = get_stochastic_pass_model(model)
50 |
51 | # repeat T times
52 | t_states = np.repeat(state, repeats=T, axis=0)
53 |
54 | # predict
55 | X = stochastic_model.predict(t_states)
56 |
57 | # normalize
58 | norm = normalize(X, axis=1)
59 |
60 | # calculate SER_FIQ quality
61 | eucl_dist = euclidean_distances(norm, norm)[np.triu_indices(T, k=1)]
62 | quality = 2*(1/(1+np.exp(np.mean(eucl_dist))))
63 |
64 | # scale
65 | quality = 1 / (1+np.exp(-(alpha * (quality - r))))
66 |
67 | # clear model
68 | del stochastic_model
69 | K.clear_session()
70 |
71 | return quality
72 |
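73 | # Usage sketch (added illustration; the model is assumed to be loaded already,
74 | # and the values T=100, alpha=130.0, r=0.88 follow the original SER-FIQ setup
75 | # rather than constants defined in this file):
76 | #   net_input = setup_img(read_image("face.png"))   # from preprocessing.py
77 | #   quality = get_scaled_quality(net_input, model, T=100, alpha=130.0, r=0.88)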
--------------------------------------------------------------------------------
/src/test_images/cap.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pterhoer/ExplainableFaceImageQuality/39e577b2bd45b76cb79a8dc276e9308dbad9f8ba/src/test_images/cap.png
--------------------------------------------------------------------------------
/src/test_images/mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pterhoer/ExplainableFaceImageQuality/39e577b2bd45b76cb79a8dc276e9308dbad9f8ba/src/test_images/mask.png
--------------------------------------------------------------------------------
/src/utils.py:
--------------------------------------------------------------------------------
1 | # Utility functions
2 |
3 | # Author: Marco Huber, 2020
4 | # Fraunhofer IGD
5 | # marco.huber[at]igd.fraunhofer.de
6 |
7 | import os
8 |
9 | def image_iter(path):
10 | """
11 | Takes the path to a folder of images and returns the path of every
12 | image as a string.
13 |
14 | Parameters
15 | ----------
16 | path : str
17 | The path of the folder.
18 |
19 | Returns
20 | -------
21 | image_paths : list of image paths
22 | List containing the path to every single image.
23 |
24 | """
25 | image_paths = []
26 |
27 | for root, subdirs, files in os.walk(path):
28 | for name in files:
29 | image_paths.append(os.path.join(root, name))
30 |
31 | return image_paths
32 |
33 |
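34 | # Usage sketch (added illustration; the folder name is a placeholder):
35 | #   for img_path in image_iter("./test_images"):
36 | #       print(img_path)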
--------------------------------------------------------------------------------