├── Convert_VGG_Annotations
│   ├── __init__.py
│   ├── annotations
│   │   ├── train
│   │   │   └── README.md
│   │   └── val
│   │       └── README.md
│   ├── fracture_images
│   │   ├── train
│   │   │   └── README.md
│   │   ├── val
│   │   │   └── README.md
│   │   └── test
│   │       ├── s17_3.jpg
│   │       ├── s1_1.jpg
│   │       ├── s23_4.jpg
│   │       ├── s29_1.jpg
│   │       ├── t12_2.jpg
│   │       ├── t13_4.jpg
│   │       ├── u22_2.jpg
│   │       └── README.md
│   ├── Export_annotations.py
│   └── Annotate.py
├── weights
│   ├── .gitattributes
│   └── README.md
├── images
│   ├── tb_unet.jpg
│   ├── VGG_annotator.jpg
│   ├── predictions.jpg
│   ├── SEM_Predictions.jpg
│   └── tensorboard_unet.jpg
├── test_dataset
│   ├── s4_1.jpg
│   ├── s4_2.jpg
│   ├── s4_4.jpg
│   ├── s4_5.jpg
│   ├── s4_6.jpg
│   ├── s4_7.jpg
│   ├── s4_8.jpg
│   ├── s4_9.jpg
│   ├── t8_7.jpg
│   ├── s17_11.jpg
│   ├── s4_10.jpg
│   ├── s4_11.jpg
│   └── t16_4.jpg
├── logs
│   └── README.md
├── predict.py
├── train.py
├── Utils.py
├── README.md
└── LICENSE
/Convert_VGG_Annotations/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/weights/.gitattributes:
--------------------------------------------------------------------------------
1 | trained_weights.hdf5 filter=lfs diff=lfs merge=lfs -text
2 |
--------------------------------------------------------------------------------
/images/tb_unet.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/images/tb_unet.jpg
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/annotations/train/README.md:
--------------------------------------------------------------------------------
1 | This folder contains the annotations of the training dataset.
2 |
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/annotations/val/README.md:
--------------------------------------------------------------------------------
1 | This folder contains the annotations of the validation dataset.
2 |
--------------------------------------------------------------------------------
/test_dataset/s4_1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/s4_1.jpg
--------------------------------------------------------------------------------
/test_dataset/s4_2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/s4_2.jpg
--------------------------------------------------------------------------------
/test_dataset/s4_4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/s4_4.jpg
--------------------------------------------------------------------------------
/test_dataset/s4_5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/s4_5.jpg
--------------------------------------------------------------------------------
/test_dataset/s4_6.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/s4_6.jpg
--------------------------------------------------------------------------------
/test_dataset/s4_7.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/s4_7.jpg
--------------------------------------------------------------------------------
/test_dataset/s4_8.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/s4_8.jpg
--------------------------------------------------------------------------------
/test_dataset/s4_9.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/s4_9.jpg
--------------------------------------------------------------------------------
/test_dataset/t8_7.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/t8_7.jpg
--------------------------------------------------------------------------------
/images/VGG_annotator.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/images/VGG_annotator.jpg
--------------------------------------------------------------------------------
/images/predictions.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/images/predictions.jpg
--------------------------------------------------------------------------------
/test_dataset/s17_11.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/s17_11.jpg
--------------------------------------------------------------------------------
/test_dataset/s4_10.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/s4_10.jpg
--------------------------------------------------------------------------------
/test_dataset/s4_11.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/s4_11.jpg
--------------------------------------------------------------------------------
/test_dataset/t16_4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/test_dataset/t16_4.jpg
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/fracture_images/train/README.md:
--------------------------------------------------------------------------------
1 | Download the training dataset from the dataset URL and save it here.
2 |
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/fracture_images/val/README.md:
--------------------------------------------------------------------------------
1 | Download the validation dataset from the dataset URL and save it here.
2 |
--------------------------------------------------------------------------------
/images/SEM_Predictions.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/images/SEM_Predictions.jpg
--------------------------------------------------------------------------------
/images/tensorboard_unet.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/images/tensorboard_unet.jpg
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/fracture_images/test/s17_3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/Convert_VGG_Annotations/fracture_images/test/s17_3.jpg
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/fracture_images/test/s1_1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/Convert_VGG_Annotations/fracture_images/test/s1_1.jpg
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/fracture_images/test/s23_4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/Convert_VGG_Annotations/fracture_images/test/s23_4.jpg
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/fracture_images/test/s29_1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/Convert_VGG_Annotations/fracture_images/test/s29_1.jpg
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/fracture_images/test/t12_2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/Convert_VGG_Annotations/fracture_images/test/t12_2.jpg
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/fracture_images/test/t13_4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/Convert_VGG_Annotations/fracture_images/test/t13_4.jpg
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/fracture_images/test/u22_2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SteliosTsop/QF-image-segmentation-keras/HEAD/Convert_VGG_Annotations/fracture_images/test/u22_2.jpg
--------------------------------------------------------------------------------
/weights/README.md:
--------------------------------------------------------------------------------
1 | The **trained weights** can be downloaded by clicking on [this link](https://technionmail-my.sharepoint.com/:u:/g/personal/tsopanidis_campus_technion_ac_il/EWlBxa-O8hlDg5v1_4NjNCwBpBWZCG7DV5mKg1erGwTmQg)
2 |
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/fracture_images/test/README.md:
--------------------------------------------------------------------------------
1 | This is the __test dataset__: a set of SEM fracture images that was used neither during the training nor during the validation process.
2 | The objective of having a test dataset is to allow us to evaluate the accuracy of the predictions of the code.
3 |
--------------------------------------------------------------------------------
/logs/README.md:
--------------------------------------------------------------------------------
1 | This folder contains the TensorBoard log files that can be used to monitor the training and validation accuracy during the training process.
2 |
3 | Run the following command:
4 |
5 | ```
6 | tensorboard --logdir=directory-of-log-files --host=127.0.0.1
7 | ```
8 |
9 | and open the localhost URL in your web browser.
10 |
11 |
12 |
13 |
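As a concrete example: assuming `train.py` was launched with `--logs=logs` and the default learning rate of 0.001 (so the event files end up under `logs/lr_0.001`), the command would be:

```
tensorboard --logdir=logs/lr_0.001 --host=127.0.0.1
```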
--------------------------------------------------------------------------------
/predict.py:
--------------------------------------------------------------------------------
1 | import argparse, os
2 | os.environ['KERAS_BACKEND'] = 'tensorflow'
3 | import Utils
4 | import glob
5 | import cv2
6 | from PIL import Image, ImageDraw
7 | import numpy as np
8 |
9 |
10 |
11 | # Parse command line arguments and assign them to parameters
12 | parser = argparse.ArgumentParser()
13 | parser.add_argument("--save_weights_path", type = str )
14 | parser.add_argument("--test_images", type = str , default = "")
15 | parser.add_argument("--output_path", type = str , default = "")
16 | parser.add_argument("--input_height", type=int , default = 224 )
17 | parser.add_argument("--input_width", type=int , default = 224 )
18 | parser.add_argument("--n_classes", type=int )
19 |
20 | args = parser.parse_args()
21 |
22 | n_classes = args.n_classes
23 | images_path = args.test_images
24 | input_width = args.input_width
25 | input_height = args.input_height
26 | trained_weights = args.save_weights_path
27 |
28 | # Initialize a model and load pre-trained weights
29 | keras_model = Utils.VGG16_Unet(n_classes, False, input_height=input_height, input_width=input_width)
30 | keras_model.load_weights(trained_weights)
31 |
32 |
33 | # Compile the model
34 | keras_model.compile(loss='categorical_crossentropy',optimizer= 'adam', metrics=['accuracy'])
35 | # keras_model.summary() # Display the CNN architecture
36 |
37 |
38 | # Define output dimensions
39 | output_height = keras_model.outputHeight
40 | output_width = keras_model.outputWidth
41 |
42 | # Define the image paths
43 | images = glob.glob(images_path + "*.jpg") + glob.glob(images_path + "*.png") + glob.glob(images_path + "*.tiff")
44 | images.sort()
45 |
46 | # Colors for each label class
47 | colors = [(0,0,0),(0,250,0),(250,0,0)]
48 |
49 |
50 | for imgName in images:
51 | outName = os.path.splitext(imgName)[0] + "_pred.jpg"  # handles .jpg, .png and .tiff extensions
52 | X = Utils.get_images(imgName, input_width, input_height)
53 | pr = keras_model.predict(np.array([X]))[0]
54 | pr = pr.reshape((output_height, output_width, n_classes)).argmax( axis=2 )
55 | img = cv2.imread(imgName, 1)
56 | seg_img = cv2.resize(img, (output_width, output_height))
57 | for c in range(1,n_classes):
58 | seg_img[:, :, 0] = np.where(pr[:, :] == c, seg_img[:, :, 0]*0.65 + 0.35*colors[c][0], seg_img[:, :, 0])
59 | seg_img[:, :, 1] = np.where(pr[:, :] == c, seg_img[:, :, 1]*0.65 + 0.35*colors[c][1], seg_img[:, :, 1])
60 | seg_img[:, :, 2] = np.where(pr[:, :] == c, seg_img[:, :, 2]*0.65 + 0.35*colors[c][2], seg_img[:, :, 2])
61 |
62 |
63 | seg_img = cv2.resize(seg_img, (input_width, input_height))
64 | cv2.imwrite(outName, seg_img)
65 |
66 | print("image {} is done!!!".format(outName))
67 |
68 |
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/Export_annotations.py:
--------------------------------------------------------------------------------
1 | """
2 | The main objective of this code is to convert the annotations exported in a json file by the
3 | VGG Image Annotator into a format that the main segmentation program can use.
4 | The format that the segmentation code uses for the annotations is an image where each pixel has a
5 | value equal to the id of the class it belongs to.
6 | """
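# Note on the output format (for illustration only): in the exported label image every pixel value
# is a class id (0 = background, 1 = "intergranular", 2 = "transgranular"), which is exactly the
# per-pixel format that Utils.get_labels() expects when it builds the one-hot segmentation targets.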
7 |
8 |
9 |
10 |
11 | import os
12 | import sys
13 | import numpy as np
14 | import cv2
15 | import scipy.misc
16 |
17 | from Annotate import Annotations
18 |
19 | # Root directory of the project
20 | ROOT_DIR = os.getcwd()
21 |
22 | sys.path.append(ROOT_DIR)
23 |
24 |
25 | # Annotations directory
26 | ANNOT_DIR = os.path.join(ROOT_DIR, "fracture_images", "train")  # join components for a portable path (avoids backslash escapes)
27 |
28 |
29 | # Create an Annotations object
30 | fracture_annotations = Annotations()
31 |
32 | # Load all the information from the json file produced by the VGG Image Annotator
33 | fracture_annotations.load(ANNOT_DIR)
34 |
35 | # organize the information and store it at the annotation object parameters
36 | fracture_annotations.organize()
37 |
38 | # Print the number of images and the existing classes
39 | print("Image Count: {}".format(len(fracture_annotations.img_ids)))
40 | print("Class Count: {}".format(fracture_annotations.num_classes))
41 | for i, info in enumerate(fracture_annotations.class_info):
42 | print("{:3}. {:50}".format(i, info['name']))
43 |
44 |
45 |
46 | img_ids = fracture_annotations.img_ids
47 |
48 | for img_id in img_ids:
49 | # Image information
50 | info = fracture_annotations.img_info[img_id]
51 | # Load the image
52 | img = fracture_annotations.load_img(img_id)
53 | # Store the coordinates of the polygons of each annotation in the mask and the corresponding class_ids
54 | mask, class_ids = fracture_annotations.load_mask(img_id)
55 | class_names = fracture_annotations.class_names
56 |
57 | # Pull masks of instances belonging to the same class.
58 | m1 = mask[:, :, np.where(class_ids == 1)[0]]
59 | m1 = np.sum(m1*255, -1)
60 | m2 = mask[:, :, np.where(class_ids == 2)[0]]
61 | m2 = np.sum(m2*80, -1)
62 | t = m1 + m2
63 | t = t.astype(np.uint8)
64 | t[t<=20] = 0
65 | t[(t>20) & (t<200)] = 1
66 | t[t>=200] = 2
67 |
68 | # Export the annotated images in the format that the segmentation code requires.
69 | img_dir = os.path.join(ROOT_DIR, "annotations", "train")  # join components for a portable path (avoids backslash escapes)
70 | img_name = info["id"][:-4] + ".png"
71 | out_dir = os.path.join(img_dir, img_name)
72 |
73 | cv2.imwrite(out_dir, t.astype(np.uint8))
74 | print("Image {} is converted !!!".format(info["id"]))
75 |
--------------------------------------------------------------------------------
/Convert_VGG_Annotations/Annotate.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import numpy as np
4 | import skimage.io, skimage.color, skimage.draw  # color and draw are used below and must be imported explicitly
5 | import json
6 |
7 |
8 | class Annotations:
9 |
10 | def __init__(self):
11 | self.img_ids = []
12 | self.img_info = []
13 |
14 |
15 | def load(self, dataset_dir):
16 |
17 |
18 | # Define the class names and the corresponding class ids
19 | self.class_info = [{"id": 0, "name": "BG"},
20 | {"id": 1, "name": "intergranular"},
21 | {"id": 2, "name": "transgranular"}]
22 |
23 |
24 | # load json file, created by VGG image annotator
25 | annotations = json.load(open(os.path.join(dataset_dir, "via_region_data.json")))
26 | # keep only the values
27 | annotations = list(annotations.values())
28 |
29 | # Keep only images that have been annotated
30 | annotations = [a for a in annotations if a['regions']]
31 |
32 | # Add images
33 | for a in annotations:
34 | # The x,y coordinates of each corner of a polygon are used to create masks for the different annotated areas.
35 | # These coordinates are stored in the shape_attributes
36 | polygons = [r['shape_attributes'] for r in a['regions']]
37 | names = [r['region_attributes'] for r in a['regions']]
38 | # The image dimensions are also needed for the creation of the mask
39 | img_path = os.path.join(dataset_dir, a['filename'])
40 | img = skimage.io.imread(img_path)
41 | height, width = img.shape[:2]
42 |
43 | self.img_info.append({ "id": a['filename'],
44 | "source": "fracture",
45 | "path": img_path,
46 | "width":width,
47 | "height":height,
48 | "polygons":polygons,
49 | "names":names})
50 | def organize(self):
51 |
52 | # Define the class parameters needed.
53 | self.num_classes = len(self.class_info)
54 | self.class_ids = np.arange(self.num_classes)
55 | self.class_names = [c["name"] for c in self.class_info]
56 | self.num_imgs = len(self.img_info)
57 | self.img_ids = np.arange(self.num_imgs)
58 |
59 |
60 |
61 |
62 | def load_img(self, img_id):
63 |
64 | # Load image
65 | img = skimage.io.imread(self.img_info[img_id]['path'])
66 | # If grayscale. Convert to RGB.
67 | if img.ndim != 3:
68 | img = skimage.color.gray2rgb(img)
69 | # If has an alpha channel, remove it.
70 | if img.shape[-1] == 4:
71 | img = img[..., :3]
72 | return img
73 |
74 |
75 | def load_mask(self, img_id):
76 | """This function returns the masks for each image
77 |
78 | masks: A bool array of shape [height, width, instance count] with
79 | one mask per instance.
80 | class_ids: a 1D array of class IDs of the instance masks.
81 | """
82 |
83 |
84 | info = self.img_info[img_id]
85 | class_names = info["names"]
86 | # we create an array of masks with dimensions equal to the image dimensions and zeros everywhere except
87 | # for the annotated areas. This mask array holds one mask for each annotation.
88 | mask = np.zeros([info["height"], info["width"], len(info["polygons"])], dtype=np.uint8)
89 | for i, p in enumerate(info["polygons"]):
90 | # The index i is the annotation number and p holds the x,y coordinates of the polygon corners
91 | # Each mask is an array with zeros everywhere except from the pixels that belong to annotated areas
92 | rr, cc = skimage.draw.polygon(p['all_points_y'], p['all_points_x'])
93 | mask[rr, cc, i] = 1
94 |
95 | # Assign class_ids
96 | class_ids = np.zeros([len(info["polygons"])])
97 | # In the fracture dataset, the regions are labeled with the names 'i' and 't', representing intergranular and transgranular fracture respectively.
98 | for j, p in enumerate(class_names):
99 | # "name" is the attribute name chosen when performing the annotation with the VGG annotator, e.g. 'region_attributes': {'name': 'i'}
100 | if p['name'] == 'i':
101 | class_ids[j] = 1
102 | elif p['name'] == 't':
103 | class_ids[j] = 2
104 |
105 | class_ids = class_ids.astype(int)
106 |
107 | return mask.astype(bool), class_ids  # astype(bool): np.bool is deprecated in recent NumPy versions
108 |
--------------------------------------------------------------------------------
/train.py:
--------------------------------------------------------------------------------
1 | import argparse,os
2 | os.environ['KERAS_BACKEND'] = 'tensorflow'
3 | # os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # uncomment this if you want to run in CPU
4 | # from model_size import get_model_memory_usage  # optional helper (not included in this repository); only needed for the commented-out memory-size check below
5 | from datetime import datetime
6 | from keras import optimizers
7 | from keras.callbacks import ModelCheckpoint,TensorBoard
8 | import Utils
9 |
10 |
11 |
12 | # Parse command line arguments
13 | parser = argparse.ArgumentParser()
14 | parser.add_argument("--save_weights_path", type = str)
15 | parser.add_argument("--train_images", type = str)
16 | parser.add_argument("--train_annotations", type = str)
17 | parser.add_argument("--logs", type = str, default = "")
18 | parser.add_argument("--n_classes", type=int )
19 | parser.add_argument("--input_height", type=int , default = 640 )
20 | parser.add_argument("--input_width", type=int , default = 640 )
21 |
22 | parser.add_argument("--val_images", type = str , default = "")
23 | parser.add_argument("--val_annotations", type = str , default = "")
24 |
25 | parser.add_argument("--start_epoch", type = int, default = 0)
26 | parser.add_argument("--end_epoch", type = int, default = 10)
27 | parser.add_argument("--batch_size", type = int, default = 1)
28 | parser.add_argument("--val_batch_size", type = int, default = 1)
29 | parser.add_argument("--load_weights", type = str , default = "")
30 |
31 | parser.add_argument("--optimizer_name", type = str , default = "")
32 | parser.add_argument("--init_learning_rate", type = float , default = 0.001)
33 | parser.add_argument("--epoch_steps", type = int , default = 200)
34 |
35 |
36 |
37 | # Assign command line arguments to parameter
38 | args = parser.parse_args()
39 |
40 | train_images_path = args.train_images
41 | train_labels_path = args.train_annotations
42 | train_batch_size = args.batch_size
43 | n_classes = args.n_classes
44 | input_height = args.input_height
45 | input_width = args.input_width
46 | logs_dir = args.logs
47 |
48 | save_weights_path = args.save_weights_path
49 | start_epoch = args.start_epoch
50 | end_epoch = args.end_epoch
51 | trained_weights = args.load_weights
52 |
53 | optimizer_name = args.optimizer_name
54 | learning_rate = float(args.init_learning_rate)
55 | steps = int(args.epoch_steps)
56 |
57 | val_images_path = args.val_images
58 | val_labels_path = args.val_annotations
59 | val_batch_size = args.val_batch_size
60 |
61 |
62 | # 1. Initialize a model and Load the weights in case you continue a previous training
63 | if len(trained_weights) > 0:
64 | keras_model = Utils.VGG16_Unet(n_classes, False, input_height=input_height, input_width=input_width)
65 | keras_model.load_weights(trained_weights)
66 | else:
67 | keras_model = Utils.VGG16_Unet(n_classes, True, input_height=input_height, input_width=input_width)
68 |
69 |
70 | # 2. Define the output size of your results.
71 | output_height = keras_model.outputHeight
72 | output_width = keras_model.outputWidth
73 | print("Model output shape", keras_model.output_shape)
74 |
75 | # Use these lines if you would like to determine the memory size of the model and see the model architecture
76 | # model_mem_size = get_model_memory_usage(train_batch_size, keras_model)
77 | # print("Model memory size is: {}GB".format(model_mem_size))
78 | # keras_model.summary()
79 |
80 |
81 | # 3. Load your batches for the train and validation datasets
82 | train_data = Utils.image_labels_generator(train_images_path, train_labels_path, train_batch_size, n_classes, input_height, input_width, output_height, output_width)
83 |
84 | val_data = Utils.image_labels_generator(val_images_path, val_labels_path, val_batch_size, n_classes, input_height, input_width, output_height, output_width)
85 |
86 |
87 |
88 | # 4. Select the optimizer and the learning rate (default option is Adam)
89 | if optimizer_name == 'rmsprop':
90 | optimizer = optimizers.RMSprop(lr=learning_rate, rho=0.9, epsilon=None, decay=0.0)
91 | elif optimizer_name == 'adadelta':
92 | optimizer = optimizers.Adadelta(lr=learning_rate, rho=0.95, epsilon=None, decay=0.0)
93 | elif optimizer_name == 'sgd':
94 | optimizer = optimizers.SGD(lr=learning_rate, decay=0.0)
95 | else:
96 | optimizer = optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
97 |
98 |
99 | # 5. Set the callbacks for saving the weights and the tensorboard
100 | weights = save_weights_path + "_lr_{}".format(round(learning_rate,8)) + "_{epoch:02d}.hdf5"
101 | checkpoint = ModelCheckpoint(weights, monitor='val_acc', verbose=0, save_best_only=True, save_weights_only=True)
102 | logs_dir = logs_dir + "/lr_{}".format(round(learning_rate,8))
103 | tensorboard = TensorBoard(log_dir=logs_dir, histogram_freq=0, write_images=False)
104 |
105 |
106 | # 6. Compile the model with the selected optimizer
107 | keras_model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
108 |
109 |
110 | # 7. Train your network
111 | keras_model.fit_generator(train_data, steps, validation_data=val_data, validation_steps=100, epochs=end_epoch, callbacks=[checkpoint,tensorboard], initial_epoch=start_epoch)
112 |
--------------------------------------------------------------------------------
/Utils.py:
--------------------------------------------------------------------------------
1 | from keras.models import *
2 | from keras.layers import *
3 | import os
4 | import numpy as np
5 | import cv2
6 | import glob
7 | import itertools,random
8 |
9 |
10 | # define the path to load the VGG16 weights (the ImageNet pre-trained "notop" VGG16 weights file, expected under ./data/)
11 | file_path = os.path.dirname( os.path.abspath(__file__) )
12 | VGG16_Weights_path = file_path + "/data/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5"
13 |
14 |
15 |
16 |
17 | def VGG16_Unet( n_classes , load_vgg, input_height, input_width, batch_norm = False ):
18 |
19 | # This function defines the architecture of the U-net.
20 | # It is used to initialize the keras model.
21 |
22 |
23 | assert input_height%32 == 0
24 | assert input_width%32 == 0
25 |
26 | img_input = Input(shape=(input_height,input_width,3))
27 |
28 |
29 | # ENCODER = VGG16 (without the top layers)
30 |
31 | # Block 1
32 | c1 = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
33 | c1 = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(c1)
34 | p1 = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(c1)
35 |
36 | # Block 2
37 | c2 = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(p1)
38 | c2 = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(c2)
39 | p2 = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(c2)
40 |
41 | # Block 3
42 | c3 = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(p2)
43 | c3 = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(c3)
44 | c3 = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(c3)
45 | p3 = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(c3)
46 |
47 | # Block 4
48 | c4 = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(p3)
49 | c4 = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(c4)
50 | c4 = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(c4)
51 | p4 = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(c4)
52 |
53 | # Block 5
54 | c5 = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(p4)
55 | c5 = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(c5)
56 | c5 = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(c5)
57 | p5 = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(c5)
58 |
59 | # In case the training is not continued from a previous pre-trained state,
60 | # it is necessary to load the pre-trained VGG16 weights for the encoder part
61 | if load_vgg:
62 | vgg = Model(img_input, p5)
63 | vgg.load_weights(VGG16_Weights_path, by_name=True)
64 | print("Loading VGG !!!")
65 |
66 | # DECODER
67 | c6 = Conv2D(1024, (3, 3), padding='same')(p5)
68 |
69 | u6 = UpSampling2D((2, 2))(c6)
70 | u6 = concatenate([u6, c5], axis=3) # concatenate the feature maps from the corresponding encoder layer (skip connection)
71 | c7 = Conv2D(1024, (3, 3), padding='same')(u6)
72 | c7 = Conv2D(1024, (3, 3), padding='same')(c7)
73 |
74 | u7 = UpSampling2D((2, 2))(c7)
75 | u7 = concatenate([u7, c4], axis=3) # concatenate the feature maps from the corresponding encoder layer (skip connection)
76 | c8 = Conv2D(512, (3, 3), padding='same')(u7)
77 | c8 = Conv2D(512, (3, 3), padding='same')(c8)
78 |
79 | u8 = UpSampling2D((2, 2))(c8)
80 | u8 = concatenate([u8, c3], axis=3)
81 | c9 = Conv2D(256, (3, 3), padding='same')(u8)
82 | c9 = Conv2D(256, (3, 3), padding='same')(c9)
83 |
84 | u9 = UpSampling2D((2, 2))(c9)
85 | u9 = concatenate([u9, c2], axis=3) # concatenate the feature maps from the corresponding encoder layer (skip connection)
86 | c10 = Conv2D(128, (3, 3), padding='same')(u9)
87 | c10 = Conv2D(128, (3, 3), padding='same')(c10)
88 |
89 | u10 = UpSampling2D((2, 2))(c10)
90 | u10 = concatenate([u10, c1], axis=3)
91 | c11 = Conv2D(128, (3, 3), padding='same')(u10)
92 | c11 = Conv2D(128, (3, 3), padding='same')(c11)
93 |
94 | o = Conv2D(n_classes, (3, 3), padding='valid')(c11)
95 |
96 | o_shape = Model(img_input, o).output_shape
97 | outputHeight = o_shape[1]
98 | outputWidth = o_shape[2]
99 |
100 | o = (Reshape((outputHeight * outputWidth, n_classes)))(o)
101 |
102 | o = (Activation('softmax'))(o)
103 | model = Model(img_input, o)
104 | model.outputWidth = outputWidth
105 | model.outputHeight = outputHeight
106 |
107 | return model
108 |
109 | def get_images(path, width, height, imgNorm="sub_mean"):
110 |
111 | # Load images
112 | img = cv2.imread(path, 1)
113 | img = cv2.resize(img, (width , height ))
114 | img = img.astype(np.float32)
115 | img[:,:,0] -= 103.939  # subtract the ImageNet channel means used by VGG16 (cv2 loads images in BGR order)
116 | img[:,:,1] -= 116.779
117 | img[:,:,2] -= 123.68
118 |
119 | return img
120 |
121 |
122 | def get_labels(path, nClasses, width, height):
123 |
124 | # Load labels
125 | seg_labels = np.zeros((height, width, nClasses))
126 | try:
127 | labels = cv2.imread(path, 1)
128 | labels = cv2.resize(labels, (width, height))
129 | labels = labels[:, : , 0]
130 |
131 |
132 | for c in range(nClasses):
133 | seg_labels[: , : , c ] = (labels == c ).astype(int)
134 |
135 | except Exception as e:
136 | print(e)
137 |
138 | seg_labels = np.reshape(seg_labels, ( width*height , nClasses ))
139 | return seg_labels
140 |
141 |
142 | def image_labels_generator(images_path, labels_path, batch_size, n_classes, input_height, input_width, output_height, output_width):
143 |
144 | # This function feeds the keras fit_generator function with the dataset (images and annotations)
145 |
146 | assert images_path[-1] == '/'
147 | assert labels_path[-1] == '/'
148 |
149 | images = glob.glob(images_path + "*.jpg") + glob.glob(images_path + "*.png")
150 | images.sort()
151 | labels = glob.glob(labels_path + "*.jpg") + glob.glob(labels_path + "*.png")
152 | labels.sort()
153 |
154 | assert len(images) == len(labels)
155 |
156 | z = list(zip(images,labels))
157 | random.shuffle(z)
158 | zipped = itertools.cycle(z)
159 |
160 | while True:
161 | X = []
162 | Y = []
163 | for _ in range(batch_size):
164 | img , label = next(zipped)
165 | X.append( get_images(img, input_width, input_height) )
166 | Y.append(get_labels(label, n_classes, output_width, output_height))
167 |
168 | yield np.array(X) , np.array(Y)
169 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Quantitative Fractography Semantic Segmentation
2 |
3 | This repository presents the training process and the prediction results published in the research work _"Toward quantitative fractography using convolutional neural networks"_ (https://arxiv.org/abs/1908.02242).
4 |
5 | The source code is a modification of the code published at [image-segmentation-keras](https://github.com/divamgupta/image-segmentation-keras), developed by Divam Gupta (https://divamgupta.com), with the addition of some extra tools needed for the training process.
6 |
7 | The main objective of publishing this work is to propose a new method for the topographic characterization of fracture surfaces based on Convolutional Neural Networks, and to attract the interest of the Fractography research community in order to build on this basis and develop tools that optimize Quantitative Fractography techniques.
8 |
9 | More specifically, after being trained on Scanning Electron Microscopy (SEM) images of fracture surfaces, the Convolutional Neural Network (CNN) model is able to identify the _intergranular_ or _transgranular_ fracture modes for any brittle material.
10 |
11 |
12 |
13 |
14 | ## Annotation of the training and validation datasets
15 |
16 | The first part of the training of every Convolutional Neural Network (CNN) model involves the annotation of the images. In our case the dataset is composed of SEM images of fracture surfaces.
17 |
18 | The annotation of the SEM fracture images has been performed with the online open source VGG Image Annotator (http://www.robots.ox.ac.uk/~vgg/software/via/via.html). Using the polygon tool it is possible to label the different areas of the SEM images as _intergranular_ or _transgranular_, while the areas that were more ambiguous or lay on the borders between adjacent areas were classified as _background_. Since image annotation is a very time-consuming task, the introduction of the _background_ label was necessary.
19 |
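The scripts in the __Convert_VGG_Annotations__ folder (__Annotate.py__ and __Export_annotations.py__) then convert the exported `via_region_data.json` file into label images in which every pixel holds the id of its class (0 = background, 1 = intergranular, 2 = transgranular). The sketch below illustrates how a single annotated region is turned into such a label map; the region values are made up for illustration, and the actual parsing is done in `Annotate.load_mask`:

```
import numpy as np
import skimage.draw

# One annotated region, as exported by the VGG Image Annotator (illustrative values)
region = {
    "shape_attributes": {"all_points_x": [10, 60, 60, 10], "all_points_y": [10, 10, 60, 60]},
    "region_attributes": {"name": "i"},  # 'i' = intergranular, 't' = transgranular
}

label_img = np.zeros((640, 640), dtype=np.uint8)  # 0 = background everywhere
class_id = {"i": 1, "t": 2}[region["region_attributes"]["name"]]
rr, cc = skimage.draw.polygon(region["shape_attributes"]["all_points_y"],
                              region["shape_attributes"]["all_points_x"])
label_img[rr, cc] = class_id  # every pixel inside the polygon gets its class id
```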
20 |
21 |
22 |
67 |
68 |
69 | ## Predictions
70 |
71 | Once the training is completed, the trained parameters of each layer have been stored in the .hdf5 weights file.
72 |
73 | By importing the weights into __predict.py__, it becomes possible to classify every pixel of any SEM fracture image of a brittle material as _intergranular_ or _transgranular_. The pixels that are not classified are considered _background_, as in the training process.
74 |
75 | To run the predict.py script, it is necessary to provide the path to the trained weights (__save_weights_path__), the number of classes (__n_classes__), the dimensions of the test images that you wish to classify (__input_height__ and __input_width__) and the directory of the test images. Note that the dimensions of the test images can be different from the dimensions of the images used during training (_640x640_); interestingly, the predictions on larger images (_1280x1280_) were equally or even more accurate than those on images of the same size as the training images.
76 |
77 | So, you can run the following command:
78 |
79 | ```
80 | python predict.py --save_weights_path="weights/trained_weights.hdf5" --test_images="test_dataset/" --n_classes=3 --input_height=640 --input_width=640
81 | ```
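
To predict on larger images instead (for example the _1280x1280_ case mentioned above), only the dimension flags need to change; an illustrative variant of the same command:

```
python predict.py --save_weights_path="weights/trained_weights.hdf5" --test_images="test_dataset/" --n_classes=3 --input_height=1280 --input_width=1280
```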
82 |
83 | And you have classified your fracture images !!!
84 |
85 |
86 |
87 |
88 | ### Prerequisites
89 |
90 | - Keras 2.0
91 | - OpenCV for Python
92 | - TensorFlow
93 | - Pillow
94 |
95 |
96 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 |
4 | Copyright (C) 2007 Free Software Foundation, Inc.