├── ml_api
│   ├── lib
│   │   ├── __init__.py
│   │   ├── test.jpg
│   │   ├── timelapse_video.py
│   │   └── detection_model.py
│   ├── model
│   │   ├── names
│   │   ├── model.meta
│   │   ├── model.weights.url
│   │   └── model.cfg
│   ├── .gitignore
│   ├── .gitattributes
│   ├── bin
│   │   ├── model_aarch64.so
│   │   ├── model_x86_64.so
│   │   ├── model_gpu_aarch64.so
│   │   └── model_gpu_x86_64.so
│   └── scripts
│       └── detect_timelapse.sh
├── api_keys.py
├── .gitignore
├── data_visualisation.py
├── gen_video.py
├── printcontrol.py
├── get_score.py
├── run
├── README.md
└── old_README.md

/ml_api/lib/__init__.py:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/ml_api/model/names:
--------------------------------------------------------------------------------
failure
--------------------------------------------------------------------------------
/ml_api/.gitignore:
--------------------------------------------------------------------------------
bin/model.so
model/model.weights
--------------------------------------------------------------------------------
/api_keys.py:
--------------------------------------------------------------------------------
URL = "Your URL"
API_KEY = "Your API_KEY"
--------------------------------------------------------------------------------
/ml_api/.gitattributes:
--------------------------------------------------------------------------------
*.weights filter=lfs diff=lfs merge=lfs -text
--------------------------------------------------------------------------------
/ml_api/model/model.meta:
--------------------------------------------------------------------------------
classes = 1
names = ./ml_api/model/names
--------------------------------------------------------------------------------
/ml_api/lib/test.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Manicben/3DPrintSaviour/HEAD/ml_api/lib/test.jpg
--------------------------------------------------------------------------------
/ml_api/bin/model_aarch64.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Manicben/3DPrintSaviour/HEAD/ml_api/bin/model_aarch64.so
--------------------------------------------------------------------------------
/ml_api/bin/model_x86_64.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Manicben/3DPrintSaviour/HEAD/ml_api/bin/model_x86_64.so
--------------------------------------------------------------------------------
/ml_api/model/model.weights.url:
--------------------------------------------------------------------------------
https://tsd-pub-static.s3.amazonaws.com/ml-models/3209.neg_32213.22300.weights
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
octoclient/
test_images/
__pycache__/
*.pyc
*.jpg
*.csv
*.log
.vscode/*
data/*
--------------------------------------------------------------------------------
/ml_api/bin/model_gpu_aarch64.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Manicben/3DPrintSaviour/HEAD/ml_api/bin/model_gpu_aarch64.so
--------------------------------------------------------------------------------
/ml_api/bin/model_gpu_x86_64.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Manicben/3DPrintSaviour/HEAD/ml_api/bin/model_gpu_x86_64.so
--------------------------------------------------------------------------------
/data_visualisation.py:
--------------------------------------------------------------------------------
from matplotlib import pyplot as plt

# Plot the score/deviance columns from a print's output.log
with open('output/11/output.log') as f:
    content = f.readlines()

i, score, deviance, diff_score, diff_deviance = [], [], [], [], []

for line in content:
    lst = line.split()
    i.append(int(lst[0]))
    if len(lst) == 5:
        score.append(float(lst[1]))
        deviance.append(float(lst[2]))
        diff_score.append(float(lst[3]))
        diff_deviance.append(float(lst[4]))
    else:
        # The background and first layers log "<layer> nan" only
        score.append(0)
        deviance.append(0)
        diff_score.append(0)
        diff_deviance.append(0)

score_plt, = plt.plot(i, score, '-r', label='score')
deviance_plt, = plt.plot(i, deviance, '-b', label='deviance')
diff_score_plt, = plt.plot(i, diff_score, '-g', label='diff_score')
diff_deviance_plt, = plt.plot(i, diff_deviance, '-y', label='diff_deviance')

plt.legend(loc='upper right')

plt.show()
--------------------------------------------------------------------------------
/ml_api/scripts/detect_timelapse.sh:
--------------------------------------------------------------------------------
#!/bin/bash -e
# Usage: ./scripts/detect_timelapse.sh <timelapse_file> <out_dir> [fps]
# Splits a timelapse video into JPEG frames, runs the detection model on
# each frame, and re-encodes the annotated frames into a new video.

TL_FILE=$1
OUT_DIR=$2
FPS=$3

TL_BN=$(basename "$TL_FILE")

JPG_IN_DIR="/tmp/in/$TL_BN"
JPG_OUT_DIR="/tmp/out/$TL_BN"
rm -rf "$JPG_IN_DIR"
mkdir -p "$JPG_IN_DIR"
rm -rf "$JPG_OUT_DIR"
mkdir -p "$JPG_OUT_DIR"

# If no FPS was given, derive one that caps the output at roughly 750 frames
if [ -z "${FPS}" ]; then
    FRM_NUM=$(ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 "$TL_FILE")

    if [ "$FRM_NUM" -gt 750 ]; then
        FPS=$((30*750/FRM_NUM))
    else
        FPS=30
    fi
fi

ffmpeg -i "$TL_FILE" -vf fps=$FPS -qscale:v 2 "$JPG_IN_DIR/%05d.jpg"

python -m lib.timelapse_video "$JPG_IN_DIR" "$JPG_OUT_DIR" model/model.weights 0.25

ffmpeg -i "$JPG_OUT_DIR/%05d.jpg" -c:v libx264 -vf fps=25 -pix_fmt yuv420p "$OUT_DIR/$TL_BN"
cp "$JPG_OUT_DIR/detections.json" "$OUT_DIR/$TL_BN.json"
cp "$JPG_OUT_DIR/00001.jpg" "$OUT_DIR/$TL_BN.poster.jpg"

rm -rf "$JPG_IN_DIR"
rm -rf "$JPG_OUT_DIR"
--------------------------------------------------------------------------------
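A typical invocation of the script above, run from the `ml_api` directory since it references `lib/` and `model/` relatively (the paths below are illustrative, and the trailing FPS argument is optional):

```
$ cd ml_api
$ ./scripts/detect_timelapse.sh ~/timelapses/benchy.mp4 /tmp/results 10
```
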
/gen_video.py:
--------------------------------------------------------------------------------
import cv2
import glob

img_names = glob.glob('img/*.jpg')
img_names.sort()
n = len(img_names)

out = cv2.VideoWriter('output1.avi', cv2.VideoWriter_fourcc('X', 'V', 'I', 'D'), 10.0, (1640, 1232))
log = open('img/detected_result.txt')
font = cv2.FONT_HERSHEY_SIMPLEX
RED = (0, 0, 255)
ORANGE = (0, 128, 255)
YELLOW = (0, 255, 255)

for i in range(n):
    curr_file = img_names[i]
    print(curr_file)
    curr_img = cv2.imread(curr_file)
    curr_img = cv2.resize(curr_img, (1640, 1232))
    # cv2.imshow('w', curr_img)
    if i > 6:
        detection = log.readline()
        # readline() returns '' at end of file, never None
        if detection.strip() != '':
            defect = ''
            if 'BUT' in detection:
                defect = 'SPAGHETTI'

            if detection[0] == '*':
                if detection[13:15] == 'Po':
                    defect = 'BREAKAGE'
                elif detection[13:15] == 'Fi':
                    defect = 'AIR PRINT'
                elif detection[13:15] == 'Pr':
                    defect = 'DETACHMENT'
                elif detection[13:15] == 'Sp':
                    defect = 'SPAGHETTI'

            if defect == 'SPAGHETTI':
                cv2.rectangle(curr_img, (25, 25), (1615, 1207), RED, 50)
                cv2.putText(curr_img, defect, (100, 1150), font, 8, RED, 20)

            if defect == 'DETACHMENT':
                cv2.rectangle(curr_img, (25, 25), (1615, 1207), ORANGE, 50)
                cv2.putText(curr_img, defect, (100, 1150), font, 8, ORANGE, 20)

            if defect == 'AIR PRINT':
                cv2.rectangle(curr_img, (25, 25), (1615, 1207), YELLOW, 50)
                cv2.putText(curr_img, defect, (100, 1150), font, 8, YELLOW, 20)

    out.write(curr_img)

out.release()
log.close()
--------------------------------------------------------------------------------
/ml_api/lib/timelapse_video.py:
--------------------------------------------------------------------------------
import sys, os
import cv2
from os import path
import json
import glob

from lib.detection_model import load_net, detect

EWM_ALPHA = 2/(9 + 1)  # 9 is the optimal EWM span found in a hyper-parameter grid search

def next_ewm_mean(p, current_ewm_mean):
    return p * EWM_ALPHA + current_ewm_mean * (1-EWM_ALPHA)

def sum_score(detections):
    return sum([d[1] for d in detections])

def overlay_detections(img, detections):
    for d in detections:
        score = '%.2f' % d[1]
        (xc, yc, w, h) = map(int, d[2])
        img = cv2.rectangle(img, (xc-w//2, yc-h//2), (xc+w//2, yc+h//2), (0, 255, 0), 3)
        #font = cv2.FONT_HERSHEY_SIMPLEX
        #img = cv2.putText(img, score, (xc-w//2, yc-h//2-10), font, 1, (255, 0, 0), 3, cv2.LINE_AA)
    return img

def video_detect(jpgs_path, save_frame_to=None, weights_path=path.join(path.dirname(__file__), "..", "model", "model.weights"), thresh=0.25):
    cfg_path = path.join(path.dirname(__file__), "..", "model", "model.cfg")
    meta_path = path.join(path.dirname(__file__), "..", "model", "model.meta")
    net_main, meta_main = load_net(cfg_path, weights_path, meta_path)

    if save_frame_to:
        if not path.exists(save_frame_to):
            os.makedirs(save_frame_to)

    jpg_files = sorted(glob.glob(path.join(jpgs_path, '*.jpg')))
    result = []
    # Exponentially weighted mean of the per-frame detection score; carried
    # across frames rather than reset on every iteration
    p = 0.0
    for idx, jpg_file in enumerate(jpg_files):
        custom_image_bgr = cv2.imread(jpg_file)
        detections = detect(net_main, meta_main, custom_image_bgr, thresh=thresh)
        img_file = "%05d.jpg" % idx
        if save_frame_to:
            cv2.imwrite(path.join(save_frame_to, img_file), overlay_detections(custom_image_bgr, detections))

        p = next_ewm_mean(sum_score(detections), p)

        result += [dict(frame=idx, p=p, detections=detections)]

    if save_frame_to:
        with open(path.join(save_frame_to, 'detections.json'), 'w') as outfile:
            json.dump(result, outfile)

    return result

if __name__ == "__main__":
    video_detect(sys.argv[1], save_frame_to=sys.argv[2], weights_path=sys.argv[3], thresh=float(sys.argv[4]))
--------------------------------------------------------------------------------
/printcontrol.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python

from sys import argv
from octoclient import OctoClient
from api_keys import URL, API_KEY
import os
from os.path import dirname

# Pause the printer
def pause_print():
    try:
        client = OctoClient(url=URL, apikey=API_KEY)
        flags = client.printer()['state']['flags']
        if flags['printing']:
            client.pause()
            print("Print paused.")
            print("Layer: " + str(LAYER))
        elif flags['paused'] or flags['pausing']:
            print("Print already paused.")
        else:
            print("Print cancelled or error occurred.")
    except Exception as e:
        print(e)

print(argv)
if len(argv) < 4:  # need at least LAYER, SCORE (or 'nan') and the image path
    exit()

if argv[2] == 'nan':  # Exit if SCORE is NaN; this occurs on the background and first layer
    img_file = str(argv[3])
    logfile = dirname(img_file) + '/output.log'
    # Strip the image-path lines (ending in 'g') that get_score echoes into the log
    os.system("sed -i '/g$/d' {}".format(logfile))
    exit()

# Get SCORE, DEVIANCE and current layer from get_score.py

LAYER = int(argv[1])
# Do nothing if it is the background or one of the first layers
if LAYER <= 7:
    img_file = str(argv[6])
    logfile = dirname(img_file) + '/output.log'
    os.system("sed -i '/g$/d' {}".format(logfile))
    quit()

SCORE = float(argv[2])
DEVIANCE = float(argv[3])
SCR_DIFF = float(argv[4])
DEV_DIFF = float(argv[5])
img_file = str(argv[6])

# Detachment thresholds
SCR_THRES = 1.0
DEV_THRES = 1.0

# Partial breakage thresholds for DIFF values
BR_SCR_THRES = 0.2
BR_DEV_THRES = 0.2

# Filament run-out/clog thresholds
FIL_SCR_THRES = 0.2
FIL_DEV_THRES = 0.2


# This indicates the model has detached from the bed
if SCORE > SCR_THRES and DEVIANCE > DEV_THRES:
    print("Cause: Print detached from bed")
    pause_print()
# This indicates a part of the model has broken off
elif SCR_DIFF > BR_SCR_THRES and DEV_DIFF > BR_DEV_THRES:
    print("Cause: Potential (partial) breakage")
    pause_print()
elif SCORE < FIL_SCR_THRES and DEVIANCE < FIL_DEV_THRES:
    print("Cause: Filament ran out or nozzle/extruder clog")
    pause_print()

else:
    # No rule-based failure detected: run the spaghetti detection model on the snapshot
    import cv2
    from ml_api.lib.detection_model import load_net, detect

    net_main, meta_main = load_net("./ml_api/model/model.cfg", "./ml_api/model/model.weights", "./ml_api/model/model.meta")
    img = cv2.imread(img_file)
    detection = detect(net_main, meta_main, img, thresh=0.3)
    print(len(detection))
    if len(detection) > 0:
        print("Cause: Spaghetti")
        pause_print()

logfile = dirname(img_file) + '/output.log'
os.system("sed -i '/g$/d' {}".format(logfile))
--------------------------------------------------------------------------------
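For reference, the `run` script invokes `printcontrol.py` with the whitespace-split stdout of `get_score.py`, so a full invocation looks like this (the values and image path are made up for illustration):

```
$ python3 printcontrol.py 42 0.31 0.87 0.02 0.05 output/11/print000042.jpg
```

i.e. `LAYER SCORE DEVIANCE SCR_DIFF DEV_DIFF IMAGE_PATH`.
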
/get_score.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python

# import the necessary packages
from skimage.metrics import normalized_root_mse
from sys import argv, exit, stderr
from os.path import dirname
import os
import cv2

# get filenames
fst_file = argv[1]
#fst_file = 'output/11/spaghetti_test_0.15mm_PLA_MK3S_40m000150.jpg'
if fst_file[-4:] != ".jpg":
    exit()

# Set current image and previous image
curr = int(fst_file[-10:-4])
prev = curr - 1

# For the 1st and 2nd images, output 'nan'
# 1st image is used as the background image
# 2nd image has no previous image (gives errors as prev is the background)
if curr == 0:
    print("0 nan")
    print("Background Image", file=stderr)
    exit()
if curr == 1:
    print("1 nan")
    print("First Image, no previous images", file=stderr)
    exit()

snd_file = fst_file[:-10] + str(prev).rjust(6, '0') + fst_file[-4:]

bg_file = fst_file[:-10] + '000000.jpg'

# load the two input images and the background image directly as grayscale
grayA = cv2.imread(fst_file, 0)
grayB = cv2.imread(snd_file, 0)
grayBG = cv2.imread(bg_file, 0)

# crop the images
grayA = grayA[:, 120:520]
grayB = grayB[:, 120:520]
grayBG = grayBG[:, 120:520]

# Remove background and threshold to remove shadow effects
threshold = 20

diffA = cv2.absdiff(grayA, grayBG)
thresA = cv2.threshold(diffA, threshold, 255, cv2.THRESH_BINARY)[1]

diffB = cv2.absdiff(grayB, grayBG)
thresB = cv2.threshold(diffB, threshold, 255, cv2.THRESH_BINARY)[1]

# compute the Normalised Root Mean-Squared Error (NRMSE) between the two
# images
score = normalized_root_mse(thresA, thresB)

# Compare the current image with the image from 5 layers ago
# This is used to check for filament run-out or huge deviance
deviance = 1.0
scr_diff = 0.0
dev_diff = 0.0
if curr > 5:
    trd_file = fst_file[:-10] + str(curr-5).rjust(6, '0') + fst_file[-4:]
    grayC = cv2.imread(trd_file, 0)
    grayC = grayC[:, 120:520]
    diffC = cv2.absdiff(grayC, grayBG)
    thresC = cv2.threshold(diffC, threshold, 255, cv2.THRESH_BINARY)[1]
    deviance = normalized_root_mse(thresA, thresC)

    # Calculate difference compared with previous layer score and deviance
    logfile = dirname(fst_file) + '/output.log'
    with open(logfile) as log:
        data = log.readlines()
    prev_layer = data[-1]
    fields = prev_layer.split()
    # Defensive: the 'nan' lines logged for the first layers only have two fields
    if len(fields) == 5:
        layer, scr, dev, s_diff, d_diff = fields
        scr_diff = abs(score - float(scr))
        dev_diff = abs(deviance - float(dev))

print("{} {} {} {} {}".format(curr, score, deviance, scr_diff, dev_diff))
print(fst_file)
print("Image: {:d}\t Score: {}\t Deviance: {}\tDiffs: {}/{}".format(curr, score, deviance, scr_diff, dev_diff), file=stderr)
--------------------------------------------------------------------------------
/ml_api/model/model.cfg:
--------------------------------------------------------------------------------
[net]
# Testing
batch=64
subdivisions=8
# Training
# batch=64
# subdivisions=8
height=416
width=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 50000
policy=steps
steps=40000,60000
scales=.1,.1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky


#######

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[route]
layers=-9

[convolutional]
batch_normalize=1
size=1
stride=1
pad=1
filters=64
activation=leaky

[reorg]
stride=2

[route]
layers=-1,-4

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=30
activation=linear


[region]
anchors = 1.3221, 1.73145, 3.19275, 4.00944, 5.05587, 8.09892, 9.47112, 4.84053, 11.2364, 10.0071
bias_match=1
classes=1
coords=4
num=5
softmax=1
jitter=.3
rescore=1

object_scale=5
noobject_scale=1
class_scale=1
coord_scale=1

absolute=1
thresh = .6
random=1
--------------------------------------------------------------------------------
/run:
--------------------------------------------------------------------------------
#!/bin/bash
# This script runs both Python scripts when a new file is created via lsyncd
# Note that 'moved_to' is used due to the way rsync works

DIR="$HOME/3DPrintSaviour/output" # path to your image folder
PRINTCTRL="$HOME/3DPrintSaviour/printcontrol.py" # path to printcontrol.py
GETSCORE="$HOME/3DPrintSaviour/get_score.py" # path to get_score.py

usage() { echo -e "$0 usage:\n\tNo flag provided - Runs default with logging, but no debugging." && grep "[[:space:]].)\ #" $0 | sed 's/#//' | sed -r 's/([a-z])\)/-\1/'; exit 0; }

echo "3D Print Saviour running..."

while getopts "dih" arg; do
  case $arg in
    d) # Debug Mode - Logs output to logfile, does not run print control.
      echo -e "DEBUG MODE. Logging Score output to logfile in image directory\nPrint Control disabled"
      inotifywait -m -r -e moved_to --format '%w%f' "$DIR" 2> /dev/null | while read -r f
      do
        RES_DIR=${f%/*}
        $GETSCORE "$f" >> "$RES_DIR/output.log"
      done
      ;;
    i) # Install - Installs dependencies, including OpenCV 3, scikit-image, numpy, scipy and OctoClient.
      echo -e "Installing dependencies, please note this will be a long process\nThis will NOT set up a virtualenv for OpenCV, follow this guide instead if you wish to use virtualenvs: https://www.pyimagesearch.com/2017/09/04/raspbian-stretch-install-opencv-3-python-on-your-raspberry-pi/"
      sudo apt-get update && sudo apt-get -y upgrade
      sudo apt-get install -y build-essential cmake pkg-config
      sudo apt-get install -y libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
      sudo apt-get install -y libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
      sudo apt-get install -y libxvidcore-dev libx264-dev
      sudo apt-get install -y libgtk2.0-dev libgtk-3-dev
      sudo apt-get install -y libatlas-base-dev gfortran
      sudo apt-get install -y python2.7-dev python3-dev
      sudo apt-get install -y inotify-tools

      mkdir ./snapshots

      echo "Now downloading weights for detection model"
      wget --quiet -O ml_api/model/model.weights $(cat ml_api/model/model.weights.url | tr -d '\r')

      # Download and unzip opencv 3.4.6 and opencv_contrib
      #echo "Installing OpenCV 3.4.6"
      #cd ~
      #wget -O opencv.zip https://github.com/opencv/opencv/archive/3.4.6.zip
      #unzip opencv.zip
      #wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/3.4.6.zip
      #unzip opencv_contrib.zip

      # Install pip
      wget https://bootstrap.pypa.io/get-pip.py
      sudo python get-pip.py
      sudo python3 get-pip.py

      pip3 install numpy

      #cd ~/opencv-3.4.6/
      #mkdir build
      #cd build
      #cmake -D CMAKE_BUILD_TYPE=RELEASE \
      #    -D CMAKE_INSTALL_PREFIX=/usr/local \
      #    -D INSTALL_PYTHON_EXAMPLES=ON \
      #    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.4.6/modules \
      #    -D BUILD_EXAMPLES=ON ..

      # Increase swapsize to allow quad-core compilation
      #sudo sed -i -e 's/CONF_SWAPSIZE\=100/CONF_SWAPSIZE\=1024/g' /etc/dphys-swapfile
      #sudo /etc/init.d/dphys-swapfile stop
      #sudo /etc/init.d/dphys-swapfile start

      # Compile OpenCV
      #make -j4

      # Install OpenCV
      #sudo make install
      #sudo ldconfig

      # Restore the original swapsize after compilation
      #sudo sed -i -e 's/CONF_SWAPSIZE\=1024/CONF_SWAPSIZE\=100/g' /etc/dphys-swapfile
      #sudo /etc/init.d/dphys-swapfile stop
      #sudo /etc/init.d/dphys-swapfile start

      #cd /usr/local/lib/python3.6/site-packages/
      #sudo mv cv2.cpython-36m-arm-linux-gnueabihf.so cv2.so
      #cd ~/.virtualenvs/cv/lib/python3.6/site-packages/
      #ln -s /usr/local/lib/python3.6/site-packages/cv2.so cv2.so

      #if [ $( python3 -c "exec(\"import cv2\nprint(cv2.__version__)\")" ) == "3.4.6" ];
      #then echo "OpenCV 3.4.6 installed successfully!";
      #else echo "OpenCV 3.4.6 did not install correctly, check the output above.";
      #fi

      echo "Now installing additional dependencies..."
      pip3 install scipy
      pip3 install scikit-image

      git clone https://github.com/hroncok/octoclient.git
      cd octoclient
      sudo python3 setup.py install
      echo "Setup complete!"
      ;;
    h) # Help - Displays help.
      usage
      exit 0
      ;;
  esac
done
if (( $OPTIND == 1 )); then
  inotifywait -m -r -e moved_to --format '%w%f' "$DIR" 2> /dev/null | while read -r f # change 'created' to 'moved_to' for testing
  do
    RES_DIR=${f%/*}
    python3 $PRINTCTRL `python3 $GETSCORE "$f" | tee -a "$RES_DIR/output.log"`
  done
fi

# ctrl+z, bg, disown
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# 3DPrintSaviour

## What is 3DPrintSaviour?
3DPrintSaviour (3DPS) is an automatic print failure detection system for 3D printers; it runs on an RPi plus another 64-bit computer. It uses Octoprint and Octolapse to capture timelapse images, which 3DPS uses to determine whether a failure has occurred.

## How does it work?
*Original method by Manicben:*

Octolapse generates amazing timelapse images where, from the camera's viewpoint, only the 3D model changes. These images are sent to the 64-bit computer (using lsyncd/rsync), where the arrival of a new image (detected with inotifywait) triggers the Python scripts. Using OpenCV, the previous layer image is compared with the current layer image and a score in the form of a Normalised Root Mean-Squared Error (NRMSE) value is calculated, which represents how similar the two images are; values over 1 represent a significant deviation. With shadow thresholding applied, this value stays consistently below 1 during a normal 3D print.
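In essence, the score computation boils down to the following (a condensed sketch of what `get_score.py` does; the function name and file paths are illustrative):

```
import cv2
from skimage.metrics import normalized_root_mse

def layer_score(curr_path, prev_path, bg_path, threshold=20):
    # Load as grayscale and crop to the print area
    curr = cv2.imread(curr_path, 0)[:, 120:520]
    prev = cv2.imread(prev_path, 0)[:, 120:520]
    bg = cv2.imread(bg_path, 0)[:, 120:520]
    # Subtract the background, then threshold away shadows
    curr = cv2.threshold(cv2.absdiff(curr, bg), threshold, 255, cv2.THRESH_BINARY)[1]
    prev = cv2.threshold(cv2.absdiff(prev, bg), threshold, 255, cv2.THRESH_BINARY)[1]
    return normalized_root_mse(curr, prev)
```
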
This was extended so that the current image is also compared with the image from 5 layers prior. This value, named the "deviance", is calculated in the same way as the score and aims to represent how much the print has deviated from past layers. The deviance is high when the current layer changes less than the layer from 5 layers prior did. For example, on a model with a large base, anything printed on top of the base is small in comparison, so the deviance will be high; once the print is 5 layers into the part on top of the base, the deviance will decrease again as there is less of a change.
Using both the score and deviance, it is possible to detect when a 3D print has either detached from the bed or a part has broken off, as well as when the filament has run out or clogged.
* Detachment - Score > 1.0 AND Deviance > 1.0
* Breakage - Score Diff > 0.2 AND Deviance Diff > 0.2
* Filament run-out/clog - Score < 0.2 AND Deviance < 0.2

The above threshold values are used to detect when a failure has occurred. If any of the above conditions is true, `printcontrol.py` sends a pause signal to the printer via the Octoprint REST API and notes down the layer at which the pause was issued and what potentially caused it. Please note that 3DPS will not be able to detect a failure within the first 7 layers, but if one has occurred, the system should detect that something went wrong later (e.g. the model is missing).
Please note that the threshold values are subject to change upon further experimentation. They have been chosen purely based on experiment observations.

*Added by Kevinskwk:*

In addition to image comparison, a spaghetti detection model from The Spaghetti Detective (https://github.com/TheSpaghettiDetective/TheSpaghettiDetective) has been implemented. This allows a machine learning model to examine each snapshot taken by Octolapse and detect spaghetti failures. The detection model has been integrated into `printcontrol.py`; if spaghetti is detected, the same pause procedure is invoked.

## How to use

### Installation on computer side
**Note:** The Spaghetti Detective only supports 64-bit operating systems!

Clone this repo to your home directory:
```
$ git clone https://github.com/Kevinskwk/3DPrintSaviour
```

In a terminal, navigate to the root directory of this repo and run the `run` script with the `-i` flag to install dependencies:
```
$ ./run -i
```
This may take quite some time as a number of packages are installed.

You can press ctrl-c to exit once you see "Setup complete!" in your terminal.

To make sure 3DPrintSaviour works, you also need **OpenCV** version **>= 3.4.6** installed. The commands in `run` that install OpenCV are commented out by default as they might not work on every system, so do check that you have the right version installed. I will not include the steps to install OpenCV here.

### Setting up Octoprint and Octolapse on the RPi
While waiting for the installation to finish, you can start to work on the RPi. Follow this tutorial to set up Octoprint on your RPi: https://octoprint.org/download/ (both the model 3 and 4 should be fine).

After you are done, set up Octolapse following this tutorial:
https://github.com/FormerLurker/Octolapse/wiki/Installation

Next, open `api_keys.py` with your preferred text editor and replace the URL and API_KEY with the ones for your Octoprint instance.

### Setting up rsync
Reference: https://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories-on-a-vps

First, make sure that both your computer and the RPi are connected to the same network and that both machines have rsync installed. Set up SSH access from your computer to the RPi; see https://itsfoss.com/ssh-into-raspberry/ if you are unsure how.

The `run` script you ran just now should have created a `snapshots` directory for you. In your `3DPrintSaviour` directory, execute the following command:
```
$ rsync -a [YOUR_RPI_USERNAME]@[YOUR_RPI_IP]:/home/[YOUR_RPI_USERNAME]/.octoprint/data/octolapse/snapshots ./snapshots
```
Enter the password for your RPi if prompted.
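The command above performs a one-off sync. For continuous syncing during a print you would normally configure lsyncd (as mentioned above); as a minimal stand-in, a simple polling loop also works (the interval is illustrative):
```
$ while true; do rsync -a [YOUR_RPI_USERNAME]@[YOUR_RPI_IP]:/home/[YOUR_RPI_USERNAME]/.octoprint/data/octolapse/snapshots ./snapshots; sleep 5; done
```
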
### Using
Before starting your print, start the run script on your computer:
```
$ ./run
```
In your Octoprint, set the Octolapse configuration you prefer; it is recommended to set the nozzle position for snapshots to a corner that is out of the camera's view. Don't forget to turn Octolapse on at the end.

Then just start the print through Octoprint. As snapshots are taken, the images are synchronised to the snapshots folder on your computer and the Python scripts are invoked.
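Each processed snapshot appends a line to `output.log` in the image directory in the format `LAYER SCORE DEVIANCE SCR_DIFF DEV_DIFF` (the numbers below are made up for illustration; `printcontrol.py` strips the echoed image-path lines from the log afterwards). `data_visualisation.py` can plot these columns after a print:
```
42 0.31 0.87 0.02 0.05
```
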
## Changelog

### 25/03/2020
* Implemented The Spaghetti Detective spaghetti detection model, adding support for spaghetti failure detection.
### 09/03/2020
* Lowered filament run-out score to 0.2, deviance to 0.2. Raised breakage score diff to 0.2, deviance diff to 0.2, for a lower false positive rate.
### 03/03/2020
* Updated the OpenCV installation version to 3.4.6, bug fixing.

Check [`old_README.md`](./old_README.md) for the past changelogs, Plan and Roadmap, and Acknowledgements and References.
--------------------------------------------------------------------------------
/ml_api/lib/detection_model.py:
--------------------------------------------------------------------------------
#!python3

#pylint: disable=R, W0401, W0614, W0703
from ctypes import *
import math
import random
import os, sys
import platform
import cv2

def sample(probs):
    s = sum(probs)
    probs = [a/s for a in probs]
    r = random.uniform(0, 1)
    for i in range(len(probs)):
        r = r - probs[i]
        if r <= 0:
            return i
    return len(probs)-1

def c_array(ctype, values):
    arr = (ctype*len(values))()
    arr[:] = values
    return arr

class BOX(Structure):
    _fields_ = [("x", c_float),
                ("y", c_float),
                ("w", c_float),
                ("h", c_float)]

class DETECTION(Structure):
    _fields_ = [("bbox", BOX),
                ("classes", c_int),
                ("prob", POINTER(c_float)),
                ("mask", POINTER(c_float)),
                ("objectness", c_float),
                ("sort_class", c_int)]


class IMAGE(Structure):
    _fields_ = [("w", c_int),
                ("h", c_int),
                ("c", c_int),
                ("data", POINTER(c_float))]

class METADATA(Structure):
    _fields_ = [("classes", c_int),
                ("names", POINTER(c_char_p))]


# Select the shared library matching this CPU architecture: bin/ ships
# model_{x86_64,aarch64}.so and model_gpu_{x86_64,aarch64}.so variants
if os.environ.get('HAS_GPU', 'False') == 'True':
    hasGPU = True
    so_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "..", "bin", "model_gpu_%s.so" % platform.machine())
else:
    hasGPU = False
    so_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "..", "bin", "model_%s.so" % platform.machine())

lib = CDLL(so_path, RTLD_GLOBAL)
lib.network_width.argtypes = [c_void_p]
lib.network_width.restype = c_int
lib.network_height.argtypes = [c_void_p]
lib.network_height.restype = c_int

predict = lib.network_predict
predict.argtypes = [c_void_p, POINTER(c_float)]
predict.restype = POINTER(c_float)

if hasGPU:
    set_gpu = lib.cuda_set_device
    set_gpu.argtypes = [c_int]

make_image = lib.make_image
make_image.argtypes = [c_int, c_int, c_int]
make_image.restype = IMAGE

get_network_boxes = lib.get_network_boxes
get_network_boxes.argtypes = [c_void_p, c_int, c_int, c_float, c_float, POINTER(c_int), c_int, POINTER(c_int), c_int]
get_network_boxes.restype = POINTER(DETECTION)

make_network_boxes = lib.make_network_boxes
make_network_boxes.argtypes = [c_void_p]
make_network_boxes.restype = POINTER(DETECTION)

free_detections = lib.free_detections
free_detections.argtypes = [POINTER(DETECTION), c_int]

free_ptrs = lib.free_ptrs
free_ptrs.argtypes = [POINTER(c_void_p), c_int]

network_predict = lib.network_predict
network_predict.argtypes = [c_void_p, POINTER(c_float)]

reset_rnn = lib.reset_rnn
reset_rnn.argtypes = [c_void_p]

load_net = lib.load_network
load_net.argtypes = [c_char_p, c_char_p, c_int]
load_net.restype = c_void_p

load_net_custom = lib.load_network_custom
load_net_custom.argtypes = [c_char_p, c_char_p, c_int, c_int]
load_net_custom.restype = c_void_p

do_nms_obj = lib.do_nms_obj
do_nms_obj.argtypes = [POINTER(DETECTION), c_int, c_int, c_float]

do_nms_sort = lib.do_nms_sort
do_nms_sort.argtypes = [POINTER(DETECTION), c_int, c_int, c_float]

free_image = lib.free_image
free_image.argtypes = [IMAGE]

letterbox_image = lib.letterbox_image
letterbox_image.argtypes = [IMAGE, c_int, c_int]
letterbox_image.restype = IMAGE

load_meta = lib.get_metadata
lib.get_metadata.argtypes = [c_char_p]
lib.get_metadata.restype = METADATA

load_image = lib.load_image_color
load_image.argtypes = [c_char_p, c_int, c_int]
load_image.restype = IMAGE

rgbgr_image = lib.rgbgr_image
rgbgr_image.argtypes = [IMAGE]

predict_image = lib.network_predict_image
predict_image.argtypes = [c_void_p, IMAGE]
predict_image.restype = POINTER(c_float)

def array_to_image(arr):
    import numpy as np
    # need to return old values to avoid python freeing memory
    arr = arr.transpose(2, 0, 1)
    c = arr.shape[0]
    h = arr.shape[1]
    w = arr.shape[2]
    arr = np.ascontiguousarray(arr.flat, dtype=np.float32) / 255.0
    data = arr.ctypes.data_as(POINTER(c_float))
    im = IMAGE(w, h, c, data)
    return im, arr

def classify(net, meta, im):
    out = predict_image(net, im)
    res = []
    for i in range(meta.classes):
        if alt_names is None:
            nameTag = meta.names[i]
        else:
            nameTag = alt_names[i]
        res.append((nameTag, out[i]))
    res = sorted(res, key=lambda x: -x[1])
    return res

def detect(net, meta, image, thresh=.5, hier_thresh=.5, nms=.45, debug=False):
    #pylint: disable= C0321
    custom_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    im, arr = array_to_image(custom_image) # you should comment line below: free_image(im)
    if debug: print("Loaded image")
    num = c_int(0)
    if debug: print("Assigned num")
    pnum = pointer(num)
    if debug: print("Assigned pnum")
    predict_image(net, im)
    if debug: print("did prediction")
    dets = get_network_boxes(net, custom_image.shape[1], custom_image.shape[0], thresh, hier_thresh, None, 0, pnum, 0) # OpenCV
    if debug: print("Got dets")
    num = pnum[0]
    if debug: print("got zeroth index of pnum")
    if nms:
        do_nms_sort(dets, num, meta.classes, nms)
    if debug: print("did sort")
    res = []
    if debug: print("about to range")
    for j in range(num):
        if debug: print("Ranging on "+str(j)+" of "+str(num))
        if debug: print("Classes: "+str(meta), meta.classes, meta.names)
        for i in range(meta.classes):
            if debug: print("Class-ranging on "+str(i)+" of "+str(meta.classes)+"= "+str(dets[j].prob[i]))
            if dets[j].prob[i] > 0:
                b = dets[j].bbox
                if alt_names is None:
                    nameTag = meta.names[i]
                else:
                    nameTag = alt_names[i]
                if debug:
                    print("Got bbox", b)
                    print(nameTag)
                    print(dets[j].prob[i])
                    print((b.x, b.y, b.w, b.h))
                res.append((nameTag, dets[j].prob[i], (b.x, b.y, b.w, b.h)))
    if debug: print("did range")
    res = sorted(res, key=lambda x: -x[1])
    if debug: print("did sort")
    free_detections(dets, num)
    if debug: print("freed detections")
    return res


net_main = None
meta_main = None
alt_names = None

def load_net(config_path, weight_path, meta_path):
    global meta_main, net_main, alt_names #pylint: disable=W0603
    if not os.path.exists(config_path):
        raise ValueError("Invalid config path `"+os.path.abspath(config_path)+"`")
    if not os.path.exists(weight_path):
        raise ValueError("Invalid weight path `"+os.path.abspath(weight_path)+"`")
    if not os.path.exists(meta_path):
        raise ValueError("Invalid data file path `"+os.path.abspath(meta_path)+"`")
    if net_main is None:
        net_main = load_net_custom(config_path.encode("ascii"), weight_path.encode("ascii"), 0, 1) # batch size = 1
    if meta_main is None:
        meta_main = load_meta(meta_path.encode("ascii"))
    if alt_names is None:
        # In Python 3, the metafile default access craps out on Windows (but not Linux)
        # Read the names file and create a list to feed to detect
        try:
            with open(meta_path) as metaFH:
                metaContents = metaFH.read()
                import re
                match = re.search("names *= *(.*)$", metaContents, re.IGNORECASE | re.MULTILINE)
                if match:
                    result = match.group(1)
                else:
                    result = None
                try:
                    if os.path.exists(result):
                        with open(result) as namesFH:
                            namesList = namesFH.read().strip().split("\n")
                            alt_names = [x.strip() for x in namesList]
                except TypeError:
                    pass
        except Exception:
            pass

    return net_main, meta_main


if __name__ == "__main__":
    net_main_1, meta_main_1 = load_net("../model/model.cfg", "../model/model.weights", "../model/model.meta")

    custom_image_bgr = cv2.imread(sys.argv[1])
    print(detect(net_main_1, meta_main_1, custom_image_bgr, thresh=0.3))
--------------------------------------------------------------------------------
/old_README.md:
--------------------------------------------------------------------------------
# 3DPrintSaviour

## What is 3DPrintSaviour?
3DPrintSaviour (3DPS) is an automatic print failure detection system for 3D printers that runs on Raspberry Pi 3 Models. It uses Octoprint and Octolapse to get timelapse images, which 3DPS uses to determine if a failure occurred.

## How does it work?
Octolapse generates amazing timelapse images where, from the camera's viewpoint, only the 3D model changes. These images are sent to another Pi (using lsyncd/rsync), where the arrival of a new image (detected with inotifywait) triggers the Python scripts. By comparing the previous layer image to the current layer image with OpenCV, a score in the form of a Normalised Root Mean-Squared Error (NRMSE) value is calculated, which represents how similar the two images are; values over 1 represent a significant deviation. With shadow thresholding applied, this value stays consistently below 1 during a simple 3D print.
This was extended so that the current image is also compared with the image from 5 layers prior. This value, named the "deviance", is calculated in the same way as the score and aims to represent how much the print has deviated from past layers. The deviance is high when the current layer changes less than the layer from 5 layers prior did. For example, on a model with a large base, anything printed on top of the base is small in comparison, so the deviance will be high; once the print is 5 layers into the part on top of the base, the deviance will decrease again as there is less of a change.
Using both the score and deviance, it is possible to detect when a 3D print has either detached from the bed or a part has broken off, as well as when the filament has run out or clogged.
* Detachment - Score > 1.0 AND Deviance > 1.0
* Breakage - Score Diff > 0.15 AND Deviance Diff > 0.10
* Filament run-out/clog - Score < 0.25 AND Deviance < 0.28

The above threshold values are used to detect when a failure has occurred. If any of the above conditions is true, printcontrol.py sends a pause signal to the printer via the Octoprint REST API and notes down the layer at which the pause was issued and what potentially caused it. Please note that 3DPS will not be able to detect a failure within the first 7 layers, but if one has occurred, the system should detect that something went wrong later (e.g. the model is missing).
Please note that the threshold values are subject to change upon further experimentation. They have been chosen purely based on experiment observations.

## Changelog
### 12/06/2018
* Printcontrol tweaked, lowered filament run-out score to 0.23. All affected tests have been redone.
### 11/06/2018
* Printcontrol slightly tweaked. Start checking after layer 7. Breakage dev\_diff lowered to 0.10, filament run-out deviance lowered to 0.28. All affected tests have been redone.
### 09/06/2018
* Printcontrol slightly changed. Raised Deviance filament threshold from 0.25 to 0.30. Affected tests have been redone.
### 08/06/2018
* Printcontrol slightly changed to only start checking values after layer 6 (there was a rare occurrence when dev\_diff was very high, scr\_diff fluctuated and the system triggered)
### 07/06/2018
* Stopped inotify output (not helpful), added echo to stdout when program starts
* Final adjustments made, testing now in progress
### 06/06/2018
* Breakage detection now works
* Breakage detection uses absolute differences compared to previous layer score/deviance
### 05/06/2018
* Breakage detection no longer working
* Default usage now always produces logfile
* Started work on using previous values from logfile and getting the absolute difference between the current score/deviance and previous layer values; this should allow for (better) breakage detection
### 04/06/2018 (3DPS V1)
* All work on 3DPS V2 has been indefinitely halted
* Support for filament run-out/clogs added
* Now supports wider range of filament colours, requires better positioning of camera (x-axis must be out of sight)
* Work started on refactoring code and easy installation
### 12/04/2018 (3DPS V1)
* Bugfixes related to previous commit
### 11/04/2018 (3DPS V1)
* Changed run script to use -d (Debug), -l (Logging), -h (Help) flags, as well as the default behaviour when no flags are given. This allows for running whilst logging output, either with (-l) or without (-d) print control, which is useful for data collection after a failure.
### 10/04/2018 (3DPS V1)
* Both SCORE and DEVIANCE values are logged with '-test' or '-t' and printcontrol.py is not run in test mode
* Added new detection based on SCORE and DEVIANCE that detects partial breakages. Needs further testing
* Added DEVIANCE calculation using current image and image from 5 layers prior
* Renamed variables to make more sense
### 08/04/2018 (3DPS V1)
* Renamed run.sh to run. Now uses BASH and can provide '-test' or '-t' to copy output from get\_scores.py to a logfile in the same directory as the images. Useful for gathering NRMSE data for complete prints and viewing how the values change for each layer
* Forgot to remove old URL and API\_KEY from octoclient\_test.py, has been replaced
* Renamed test scripts and folders (not part of 3DPS, just used to test the setup)

### 04/04/2018 (3DPS V1)
* Removed (and reset) API keys and URLs. Silly me!
* Refactored Python files to make more sense and be more usable
* Renamed Python files to make more sense and not rely on the score method
* Switched from using the Structural Similarity Index (SSIM) to the Normalised Root Mean-Squared Error (NRMSE) for the score value
* Changed SSIM threshold to 0.995
* Added background removal by subtracting grayscale image 0 from grayscale images (prev and current) with cv2.absdiff
* Added thresholding to foreground images to remove shadows

### 03/04/2018 (3DPS V1)
* Initial commit to GitHub
* Used SSIM to calculate the score to detect simple print failures (initial SSIM threshold of 0.95)
* Added Readme
* Changed SSIM threshold to 0.973

## Plan and Roadmap
This is a Final Year Project (FYP) at Imperial College London and therefore there are deadlines to meet. Please note that currently I am only testing this system on a modified Original Prusa i3 MK2S and I will not add support for other printers, although doing so should be trivial.

Currently planned are:
* **3DPS V1** Simple failure detection (is a model there or not) - TESTING
* **3DPS V2** Detection by comparing a top-down image with 2D gcode (should be more accurate, but the timelapses will not be as nice) - NOT STARTED

Other features being considered are:
* Email notification upon print failure with attached snapshot
* Conversion to an Octoprint plugin (will require a lot of work due to the size of OpenCV, but may be possible with an external server processing the images...)
* Attempt to put the entire system on the same Pi hosting Octoprint (probably won't do this, but it may be possible)

## Acknowledgements and References

Many thanks to:

**Dr Peter Cheung** - For giving me the opportunity to work on an awesome and useful project as part of my university degree.

**Gina Häußge** - [Octoprint](https://octoprint.org/) - The printer control software used here. An amazing project used by many 3D printer owners.

**FormerLurker** - [Octolapse](https://github.com/FormerLurker/Octolapse) - The Octoprint plugin that does amazing, customisable timelapses. Can be added to Octoprint through the Plugin Manager.

**Miro Hrončok** - [Octoclient](https://github.com/hroncok/octoclient) - A Python wrapper for the Octoprint REST API. Easy to use and definitely more readable than manual GET/POST requests.

**Axel Kittenberger** - [lsyncd](https://github.com/axkibe/lsyncd) - Lsyncd (Live Syncing Daemon) is used for syncing the Octolapse timelapse image directory to the Pi hosting the OpenCV code. It's pretty fast too!

**Radu Voicilas** - [inotify-tools](https://github.com/rvoicilas/inotify-tools) - C library and command line tools that extend inotify's functions. This project uses inotifywait, which is used as a trigger whenever new files are added to a watched directory. Extremely useful for file-based event handling.

**Stéfan van der Walt, Johannes L. Schönberger, Juan Nunez-Iglesias, François Boulogne, Joshua D. Warner, Neil Yager, Emmanuelle Gouillart, Tony Yu and the scikit-image contributors** - [scikit-image](http://scikit-image.org/) - scikit-image is a Python package for image processing. This project uses it for calculating the score, whether that be SSIM or NRMSE.

**OpenCV team** - [OpenCV](https://opencv.org/) - One of the most important components. OpenCV allows me, with minimal computer vision knowledge, to perform complex operations on images.

**Adrian Rosebrock** - [OpenCV on Pi](https://www.pyimagesearch.com/2017/09/04/raspbian-stretch-install-opencv-3-python-on-your-raspberry-pi/) + [Image Difference](https://www.pyimagesearch.com/2017/06/19/image-difference-with-opencv-and-python/) - This guy is THE authority on using OpenCV with Python, Raspberry Pis, etc. His tutorials are amazing, informative and very well-written. I used his guides on installing OpenCV for Python on Raspbian Stretch, as well as his Image Difference with OpenCV and Python tutorial. I went from an OpenCV n00b to somewhat understanding OpenCV, to the point where I could do some image processing myself. Definitely check out his site!

**Kim Salmi** - [Fall Detection](https://github.com/infr/falldetector-public/blob/master/thesis.md) - This guy is working on a CV solution for improving safety for home care patients, thus taking care of our aging population.
His thesis gave me the information needed to improve 3DPS V1 by simply adding the thresholding to remove shadows. Thanks so much; this saved me a lot of pain and effort, and I'm glad I chanced upon your thesis.

Regarding the open-source projects above, I'd also like to thank all of the contributors involved. I understand that such big projects cannot be built alone, and I really appreciate it.
--------------------------------------------------------------------------------