├── conv_names.csv
└── README.md
/conv_names.csv:
--------------------------------------------------------------------------------
Layer Number,Layer Name VGGFace2,Layer Name ImageNet/Scratch
1,conv1/7x7_s2,conv1
2,conv2_1_1x1_reduce,res2a_branch2a
3,conv2_1_3x3,res2a_branch2b
4,conv2_1_1x1_increase,res2a_branch2c
5,conv2_1_1x1_proj,res2a_branch1
6,conv2_2_1x1_reduce,res2b_branch2a
7,conv2_2_3x3,res2b_branch2b
8,conv2_2_1x1_increase,res2b_branch2c
9,conv2_3_1x1_reduce,res2c_branch2a
10,conv2_3_3x3,res2c_branch2b
11,conv2_3_1x1_increase,res2c_branch2c
12,conv3_1_1x1_reduce,res3a_branch2a
13,conv3_1_3x3,res3a_branch2b
14,conv3_1_1x1_increase,res3a_branch2c
15,conv3_1_1x1_proj,res3a_branch1
16,conv3_2_1x1_reduce,res3b_branch2a
17,conv3_2_3x3,res3b_branch2b
18,conv3_2_1x1_increase,res3b_branch2c
19,conv3_3_1x1_reduce,res3c_branch2a
20,conv3_3_3x3,res3c_branch2b
21,conv3_3_1x1_increase,res3c_branch2c
22,conv3_4_1x1_reduce,res3d_branch2a
23,conv3_4_3x3,res3d_branch2b
24,conv3_4_1x1_increase,res3d_branch2c
25,conv4_1_1x1_reduce,res4a_branch2a
26,conv4_1_3x3,res4a_branch2b
27,conv4_1_1x1_increase,res4a_branch2c
28,conv4_1_1x1_proj,res4a_branch1
29,conv4_2_1x1_reduce,res4b_branch2a
30,conv4_2_3x3,res4b_branch2b
31,conv4_2_1x1_increase,res4b_branch2c
32,conv4_3_1x1_reduce,res4c_branch2a
33,conv4_3_3x3,res4c_branch2b
34,conv4_3_1x1_increase,res4c_branch2c
35,conv4_4_1x1_reduce,res4d_branch2a
36,conv4_4_3x3,res4d_branch2b
37,conv4_4_1x1_increase,res4d_branch2c
38,conv4_5_1x1_reduce,res4e_branch2a
39,conv4_5_3x3,res4e_branch2b
40,conv4_5_1x1_increase,res4e_branch2c
41,conv4_6_1x1_reduce,res4f_branch2a
42,conv4_6_3x3,res4f_branch2b
43,conv4_6_1x1_increase,res4f_branch2c
44,conv5_1_1x1_reduce,res5a_branch2a
45,conv5_1_3x3,res5a_branch2b
46,conv5_1_1x1_increase,res5a_branch2c
47,conv5_1_1x1_proj,res5a_branch1
48,conv5_2_1x1_reduce,res5b_branch2a
49,conv5_2_3x3,res5b_branch2b
50,conv5_2_1x1_increase,res5b_branch2c
51,conv5_3_1x1_reduce,res5c_branch2a
52,conv5_3_3x3,res5c_branch2b
53,conv5_3_1x1_increase,res5c_branch2c
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# BTAS2019DeepFeatureExtraction
This repo contains the supplementary material for the 2019 BTAS submission "Deep Learning-Based Feature Extraction in Iris Recognition: Use Existing Models, Fine-tune or Train From Scratch?".

Trained models can be obtained by email (GitHub has a 100 MB file size limit, so I had to upload them to Google Drive and share the links on request):

Model trained from scratch using 363,512 iris images from 2000 classes: email aboyd3@nd.edu

Model fine-tuned from ImageNet weights on the same iris data as the Scratch network: email aboyd3@nd.edu

Model fine-tuned from VGGFace2 weights on the same iris data as the Scratch network: email aboyd3@nd.edu

This project was written in Python. Here is the code I used to split the CASIA-Iris-Thousand database (found here: http://biometrics.idealtest.org/dbDetailForUser.do?id=4) into a single 70% train / 30% test split:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=4242, stratify=y)
-> the random seed is important if you are trying to replicate my results.

Note: Depending on how successful your segmentation is, you may see slightly varying results. If you want to know my configuration for OSIRIS, send me an email at aboyd3@nd.edu and I can send you my config file so you can replicate the experiments exactly.

If you want to know how features were extracted by each layer, email me and I will send you my extraction code :)
But these are the important lines in the extraction (this is simplified to one image; you should create a list of all images and loop through them):

import numpy as np
from keras.models import Model
from keras.preprocessing import image

# load the iris image at the network's input size and add a batch dimension
img = image.load_img(image_path, target_size=(64, 512), color_mode="rgb")
img = image.img_to_array(img)
img = np.expand_dims(img, axis=0)

# build a model that outputs the activations of the chosen layer
intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer(layer).output)
features = intermediate_layer_model.predict(img)
np.save(PATH, features)  # make sure your filename ends with .npy, not .png or .jpg!
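
The layer names passed to model.get_layer() differ between the VGGFace2 network and the ImageNet/Scratch networks; conv_names.csv in this repo maps each layer number to both naming schemes. A minimal sketch of reading that mapping (the dictionary layout is my own choice, not code from the paper):

import csv

# layer number -> (VGGFace2 name, ImageNet/Scratch name), from conv_names.csv
with open("conv_names.csv") as f:
    layer_names = {int(row["Layer Number"]): (row["Layer Name VGGFace2"], row["Layer Name ImageNet/Scratch"])
                   for row in csv.DictReader(f)}
vgg_name, imagenet_name = layer_names[5]  # layer 5 -> ('conv2_1_1x1_proj', 'res2a_branch1')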

I then saved these features as .npy files, which are loaded by the classification program.
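
The loading side is not included in this repo; a sketch of reassembling the saved features into a single matrix could look like this (feature_dir, the one-file-per-image layout, and the flattening step are my assumptions, not the exact pipeline):

import os
import numpy as np

feature_dir = "features/conv1"  # hypothetical directory of per-image .npy files
# flatten each feature map and stack the images into one (n_images, n_features) matrix
X = np.stack([np.load(os.path.join(feature_dir, fname)).ravel()
              for fname in sorted(os.listdir(feature_dir))])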

I applied Min/Max scaling to the data before PCA; this can be done by:

from sklearn.preprocessing import MinMaxScaler

# fit the scaler on the training data only, then apply it to both splits
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

For PCA I used the scikit-learn implementation:

from sklearn.decomposition import PCA

and the code to run the PCA is:

pca = PCA(n_components=2000, svd_solver='randomized')
pca.fit(X_train)
X_train = pca.transform(X_train)
X_test = pca.transform(X_test)

# keep only the number of components covering 90% of the variance
coverage = 0
num_feats = 0
for val in pca.explained_variance_ratio_:
    coverage = coverage + val
    num_feats = num_feats + 1
    if coverage >= 0.9:
        break
X_train = X_train[:, :num_feats]
X_test = X_test[:, :num_feats]
-> sometimes covering 90% of the variance would require more than the 2000 retained components, in which case all 2000 are used.
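
Equivalently, the same cutoff can be computed without the explicit loop (this is just a compact restatement of the loop above, not the code used in the paper):

import numpy as np

# first index where the cumulative explained variance reaches 0.9, plus one component
num_feats = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.9) + 1)
num_feats = min(num_feats, X_train.shape[1])  # fall back to all components if 90% is never reached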

For classification I used the one-vs-rest SVM from scikit-learn; this can be implemented as follows:

from sklearn import multiclass, svm

clf = multiclass.OneVsRestClassifier(svm.SVC(kernel='linear', gamma='auto'), n_jobs=-1)
clf.fit(X_train, y_train)
estimates = clf.predict(X_test)
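
The snippet above stops at the predictions; a simple way to score them (my assumption about the evaluation, not necessarily the exact metric from the paper) is classification accuracy on the held-out 30%:

from sklearn.metrics import accuracy_score

# fraction of test images assigned their correct class
print("accuracy:", accuracy_score(y_test, estimates))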

I am more than happy to answer any questions about this work, so don't hesitate to email me!
--------------------------------------------------------------------------------