├── README.md
├── images
│   ├── example_test.png
│   ├── test_01.png
│   ├── test_02.png
│   ├── test_03.png
│   ├── test_04.png
│   └── test_05.png
├── multiple_choice_template.psd
├── outputs
│   ├── output1.jpg
│   └── output2.jpg
├── test_grader.py
└── test_grader_mine.py

/README.md:
--------------------------------------------------------------------------------
1 | # OMR Scanner and Test Grader using OpenCV
2 | ### Scans an OMR sheet and grades the responses
3 | 
4 | An OMR scanner and test grader built using just OpenCV and Python
5 | 
6 | All thanks to Adrian Rosebrock (from [pyimagesearch](https://www.pyimagesearch.com/)) for making
7 | great tutorials. This project is inspired by his blog post: [Bubble sheet multiple choice scanner and test grader using OMR, Python and OpenCV](https://www.pyimagesearch.com/2016/10/03/bubble-sheet-multiple-choice-scanner-and-test-grader-using-omr-python-and-opencv/). I have included both the author's code (`test_grader.py`) and the version I wrote myself (`test_grader_mine.py`).
8 | 
9 | ## **Key Points**
10 | 1. Combines the following techniques:
11 |     1. [Building a document scanner](https://github.com/practical-cv/Document-Scanner)
12 |     2. Sorting the contours
13 |     3. Perspective transform to get a top-down view
14 | 2. Steps involved:
15 |     1. Detect the OMR sheet in the image
16 |     2. Apply a perspective transform to get the top-down view of the sheet
17 |     3. Extract all the bubbles in the sheet
18 |     4. Sort them into rows
19 |     5. Determine the marked bubble in each row
20 |     6. Match it against the correct answer
21 |     7. Repeat for all questions (all rows)
22 | 3. Assumptions:
23 |     1. The OMR document being scanned is the main focus of the image.
24 |     2. All 4 edges of the OMR document are visible in the image.
25 |     3. The largest rectangle in the image is the OMR document.
26 | 4. Stored the correct answer key in a Python dict.
27 | 5. Used Canny edge detection for detecting the edges of the document and a Gaussian blur for reducing high-frequency noise.
28 | 6. Used OpenCV's four-point perspective transform to obtain the top-down view of the document.
29 | 7. Used Otsu's method for thresholding.
30 | 8. Identified bubbles as contours whose bounding rectangle has an aspect ratio of approximately one (1).
31 | 9. Used bitwise operations and masking to find the filled-in bubble from the number of shaded pixels inside each bubble.
32 | 
33 | ## **Requirements: (with the versions I tested on)**
34 | 1. python (3.7.3)
35 | 2. opencv (4.1.0)
36 | 3. numpy (1.16.4)
37 | 4. imutils (0.5.2)
38 | 
39 | ## **Commands to run the detection:**
40 | ```
41 | python test_grader.py --image images/test_02.png
42 | ```
43 | 
44 | 
45 | ## **Results:**
46 | The grading system works correctly on the included test images.
47 | 
48 | **Input**
49 | ___
50 | ![image1](images/test_02.png)
51 | 
52 | **Output**
53 | ___
54 | ![image1](outputs/output1.jpg)
55 | 
56 | **Input**
57 | ___
58 | ![image1](images/test_05.png)
59 | 
60 | **Output**
61 | ___
62 | ![image1](outputs/output2.jpg)
63 | 
64 | 
65 | ## **The limitations**
66 | 1. There is no logic to handle non-filled bubbles. (This can be solved by using a threshold value for marking a bubble as filled.)
67 | 2. Multiple bubble markings are not handled.
(This can also be solved by using a threshold for marking a bubble as filled and then counting the number and position of filled bubbles.)
68 | 
--------------------------------------------------------------------------------
/images/example_test.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Practical-CV/OMR-Scanner-and-Test-Grader-using-OpenCV/871ed8b4f1acf8e544f10420afe9a008087ff830/images/example_test.png
--------------------------------------------------------------------------------
/images/test_01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Practical-CV/OMR-Scanner-and-Test-Grader-using-OpenCV/871ed8b4f1acf8e544f10420afe9a008087ff830/images/test_01.png
--------------------------------------------------------------------------------
/images/test_02.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Practical-CV/OMR-Scanner-and-Test-Grader-using-OpenCV/871ed8b4f1acf8e544f10420afe9a008087ff830/images/test_02.png
--------------------------------------------------------------------------------
/images/test_03.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Practical-CV/OMR-Scanner-and-Test-Grader-using-OpenCV/871ed8b4f1acf8e544f10420afe9a008087ff830/images/test_03.png
--------------------------------------------------------------------------------
/images/test_04.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Practical-CV/OMR-Scanner-and-Test-Grader-using-OpenCV/871ed8b4f1acf8e544f10420afe9a008087ff830/images/test_04.png
--------------------------------------------------------------------------------
/images/test_05.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Practical-CV/OMR-Scanner-and-Test-Grader-using-OpenCV/871ed8b4f1acf8e544f10420afe9a008087ff830/images/test_05.png
--------------------------------------------------------------------------------
/multiple_choice_template.psd:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Practical-CV/OMR-Scanner-and-Test-Grader-using-OpenCV/871ed8b4f1acf8e544f10420afe9a008087ff830/multiple_choice_template.psd
--------------------------------------------------------------------------------
/outputs/output1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Practical-CV/OMR-Scanner-and-Test-Grader-using-OpenCV/871ed8b4f1acf8e544f10420afe9a008087ff830/outputs/output1.jpg
--------------------------------------------------------------------------------
/outputs/output2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Practical-CV/OMR-Scanner-and-Test-Grader-using-OpenCV/871ed8b4f1acf8e544f10420afe9a008087ff830/outputs/output2.jpg
--------------------------------------------------------------------------------
/test_grader.py:
--------------------------------------------------------------------------------
1 | # USAGE
2 | # python test_grader.py --image images/test_01.png
3 | 
4 | # import the necessary packages
5 | from imutils.perspective import four_point_transform
6 | from imutils import contours
7 | import numpy as np
8 | import argparse
9 | import imutils
10 | import cv2
11 | 
12 | # construct the argument parser and parse the arguments
13 | ap = argparse.ArgumentParser()
14 | ap.add_argument("-i", "--image", required=True,
15 | 	help="path to the input image")
16 | args = vars(ap.parse_args())
17 | 
18 | # define the answer key which maps the question number
19 | # to the correct answer
20 | ANSWER_KEY = {0: 1, 1: 4, 2: 0, 3: 3, 4: 1}
21 | 
22 | # load the image, convert it to grayscale, blur it
23 | # slightly, then find edges
24 | image = cv2.imread(args["image"])
25 | gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
26 | blurred = cv2.GaussianBlur(gray, (5, 5), 0)
27 | edged = cv2.Canny(blurred, 75, 200)
28 | 
29 | # find contours in the edge map, then initialize
30 | # the contour that corresponds to the document
31 | cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
32 | 	cv2.CHAIN_APPROX_SIMPLE)
33 | cnts = imutils.grab_contours(cnts)
34 | docCnt = None
35 | 
36 | # ensure that at least one contour was found
37 | if len(cnts) > 0:
38 | 	# sort the contours according to their size in
39 | 	# descending order
40 | 	cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
41 | 
42 | 	# loop over the sorted contours
43 | 	for c in cnts:
44 | 		# approximate the contour
45 | 		peri = cv2.arcLength(c, True)
46 | 		approx = cv2.approxPolyDP(c, 0.02 * peri, True)
47 | 
48 | 		# if our approximated contour has four points,
49 | 		# then we can assume we have found the paper
50 | 		if len(approx) == 4:
51 | 			docCnt = approx
52 | 			break
53 | 
54 | # apply a four point perspective transform to both the
55 | # original image and grayscale image to obtain a top-down
56 | # bird's-eye view of the paper
57 | paper = four_point_transform(image, docCnt.reshape(4, 2))
58 | warped = four_point_transform(gray, docCnt.reshape(4, 2))
59 | 
60 | # apply Otsu's thresholding method to binarize the warped
61 | # piece of paper
62 | thresh = cv2.threshold(warped, 0, 255,
63 | 	cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
64 | 
65 | # find contours in the thresholded image, then initialize
66 | # the list of contours that correspond to questions
67 | cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
68 | 	cv2.CHAIN_APPROX_SIMPLE)
69 | cnts = imutils.grab_contours(cnts)
70 | questionCnts = []
71 | 
72 | # loop over the contours
73 | for c in cnts:
74 | 	# compute the bounding box of the contour, then use the
75 | 	# bounding box to derive the aspect ratio
76 | 	(x, y, w, h) = cv2.boundingRect(c)
77 | 	ar = w / float(h)
78 | 
79 | 	# in order to label the contour as a question, the region
80 | 	# should be sufficiently wide, sufficiently tall, and
81 | 	# have an aspect ratio approximately equal to 1
82 | 	if w >= 20 and h >= 20 and ar >= 0.9 and ar <= 1.1:
83 | 		questionCnts.append(c)
84 | 
85 | # sort the question contours top-to-bottom, then initialize
86 | # the total number of correct answers
87 | questionCnts = contours.sort_contours(questionCnts,
88 | 	method="top-to-bottom")[0]
89 | correct = 0
90 | 
91 | # each question has 5 possible answers, so loop over the
92 | # questions in batches of 5
93 | for (q, i) in enumerate(np.arange(0, len(questionCnts), 5)):
94 | 	# sort the contours for the current question from
95 | 	# left to right, then initialize the index of the
96 | 	# bubbled answer
97 | 	cnts = contours.sort_contours(questionCnts[i:i + 5])[0]
98 | 	bubbled = None
99 | 
100 | 	# loop over the sorted contours
101 | 	for (j, c) in enumerate(cnts):
102 | 		# construct a mask that reveals only the current
103 | 		# "bubble" for the question
104 | 		mask = np.zeros(thresh.shape, dtype="uint8")
105 | 		cv2.drawContours(mask, [c], -1, 255, -1)
106 | 
107 | 		# apply the mask to the thresholded image, then
108 | 		# count the number of non-zero pixels in the
109 | 		# bubble area
110 | 		mask = cv2.bitwise_and(thresh, thresh, mask=mask)
111 | 		total = cv2.countNonZero(mask)
112 | 
113 | 		# if the current total is larger than the running
114 | 		# maximum of non-zero pixels, then we are examining
115 | 		# the currently bubbled-in answer
116 | 		if bubbled is None or total > bubbled[0]:
117 | 			bubbled = (total, j)
118 | 
119 | 	# initialize the contour color and the index of the
120 | 	# *correct* answer
121 | 	color = (0, 0, 255)
122 | 	k = ANSWER_KEY[q]
123 | 
124 | 	# check to see if the bubbled answer is correct
125 | 	if k == bubbled[1]:
126 | 		color = (0, 255, 0)
127 | 		correct += 1
128 | 
129 | 	# draw the outline of the correct answer on the test
130 | 	cv2.drawContours(paper, [cnts[k]], -1, color, 3)
131 | 
132 | # grab the test taker's score and draw it on the exam
133 | score = (correct / 5.0) * 100
134 | print("[INFO] score: {:.2f}%".format(score))
135 | cv2.putText(paper, "{:.2f}%".format(score), (10, 30),
136 | 	cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 0, 255), 2)
137 | cv2.imshow("Original", image)
138 | cv2.imshow("Exam", paper)
139 | cv2.waitKey(0)
140 | 
--------------------------------------------------------------------------------
/test_grader_mine.py:
--------------------------------------------------------------------------------
1 | from imutils.perspective import four_point_transform
2 | from imutils import contours
3 | import imutils
4 | import cv2
5 | import numpy as np
6 | import argparse
7 | import random
8 | 
9 | def show_images(images, titles, kill_later=True):
10 | 	for index, image in enumerate(images):
11 | 		cv2.imshow(titles[index], image)
12 | 	cv2.waitKey(0)
13 | 	if kill_later:
14 | 		cv2.destroyAllWindows()
15 | 
16 | ap = argparse.ArgumentParser()
17 | ap.add_argument("-i", "--image", required=True, help="path to the input image")
18 | args = vars(ap.parse_args())
19 | 
20 | ANSWER_KEY = {
21 | 	0: 1,
22 | 	1: 4,
23 | 	2: 0,
24 | 	3: 3,
25 | 	4: 1
26 | }
27 | 
28 | # edge detection
29 | image = cv2.imread(args['image'])
30 | gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
31 | blurred = cv2.GaussianBlur(gray, (5, 5), 0)
32 | edged = cv2.Canny(blurred, 75, 200)
33 | show_images([edged], ["Edged"])
34 | 
35 | 
36 | # find contours in the edge detected image
37 | cnts = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
38 | cnts = imutils.grab_contours(cnts)
39 | docCnt = None
40 | 
41 | allContourImage = image.copy()
42 | cv2.drawContours(allContourImage, cnts, -1, (0, 0, 255), 3)
43 | print("Total contours found after edge detection: {}".format(len(cnts)))
44 | show_images([allContourImage], ["All contours from edge detected image"])
45 | 
46 | # finding the document contour
47 | if len(cnts) > 0:
48 | 	cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
49 | 
50 | 	for c in cnts:
51 | 		peri = cv2.arcLength(c, closed=True)
52 | 		approx = cv2.approxPolyDP(c, epsilon=peri*0.02, closed=True)
53 | 
54 | 		if len(approx) == 4:
55 | 			docCnt = approx
56 | 			break
57 | 
58 | contourImage = image.copy()
59 | cv2.drawContours(contourImage, [docCnt], -1, (0, 0, 255), 2)
60 | show_images([contourImage], ["Outline"])
61 | 
62 | 
63 | # Getting the bird's-eye (top-down) view of the document
64 | paper = four_point_transform(image, docCnt.reshape(4, 2))
65 | warped = four_point_transform(gray, docCnt.reshape(4, 2))
66 | show_images([paper, warped], ["Paper", "Gray"])
67 | 
68 | 
69 | # Thresholding the document
70 | thresh = cv2.threshold(warped, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
71 | show_images([thresh], ["Thresh"])
72 | 
73 | 
74 | # Finding contours in the threshold image
75 | cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
76 | cnts = imutils.grab_contours(cnts)
77 | print("Total contours found after threshold: {}".format(len(cnts)))
78 | questionCnts = []
79 | 
80 | allContourImage = paper.copy()
81 | cv2.drawContours(allContourImage, cnts, -1, (0, 0, 255), 3)
82 | show_images([allContourImage], ["All contours from threshold image"])
83 | 
84 | # Finding the question contours
85 | for c in cnts:
86 | 	(x, y, w, h) = cv2.boundingRect(c)
87 | 	ar = w / float(h)
88 | 
89 | 	if w >= 20 and h >= 20 and ar >= 0.9 and ar <= 1.1:
90 | 		questionCnts.append(c)
91 | 
92 | print("Total question contours found: {}".format(len(questionCnts)))
93 | 
94 | questionsContourImage = paper.copy()
95 | cv2.drawContours(questionsContourImage, questionCnts, -1, (0, 0, 255), 3)
96 | show_images([questionsContourImage], ["All question contours after filtering"])
97 | 
98 | # Sorting the contours according to the question
99 | questionCnts = contours.sort_contours(questionCnts, method="top-to-bottom")[0]
100 | correct = 0
101 | questionsContourImage = paper.copy()
102 | 
103 | for (q, i) in enumerate(np.arange(0, len(questionCnts), 5)):
104 | 	cnts = contours.sort_contours(questionCnts[i: i+5])[0]
105 | 	cv2.drawContours(questionsContourImage, cnts, -1, (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255)), 2)
106 | 	bubbled = None
107 | 
108 | 	for (j, c) in enumerate(cnts):
109 | 		mask = np.zeros(thresh.shape, dtype="uint8")
110 | 		cv2.drawContours(mask, [c], -1, 255, -1)
111 | 
112 | 		mask = cv2.bitwise_and(thresh, thresh, mask=mask)
113 | 		show_images([mask], ["Mask of bubble {} for question {}".format(j+1, q+1)])
114 | 		total = cv2.countNonZero(mask)
115 | 
116 | 		if bubbled is None or total > bubbled[0]:
117 | 			bubbled = (total, j)
118 | 
119 | 	color = (0, 0, 255)
120 | 	k = ANSWER_KEY[q]
121 | 
122 | 	if k == bubbled[1]:
123 | 		color = (0, 255, 0)
124 | 		correct += 1
125 | 
126 | 	cv2.drawContours(paper, [cnts[k]], -1, color, 3)
127 | 
128 | show_images([questionsContourImage], ["All question contours with different colors"])
129 | 
130 | score = (correct / 5.0) * 100
131 | print("[INFO] score: {:.2f}%".format(score))
132 | cv2.putText(paper, "{:.2f}%".format(score), (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 0, 255), 2)
133 | show_images([image, paper], ["Original", "Exam"])
134 | 
135 | 
136 | 
137 | 
--------------------------------------------------------------------------------
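A possible extension for the limitations listed in the README: instead of simply taking the bubble with the maximum non-zero pixel count, threshold each bubble's shaded-pixel count and then count how many bubbles pass. The sketch below is not part of the original scripts; `FILL_THRESHOLD` is a hypothetical value that would need to be calibrated against real `cv2.countNonZero` outputs for the bubble masks.

```python
# Hedged sketch of the threshold-based fix suggested in the README's
# limitations section. FILL_THRESHOLD is an assumed tuning value, not
# something measured from the actual test images.

FILL_THRESHOLD = 450  # minimum shaded pixels to count a bubble as filled

def classify_row(pixel_counts, answer_key_index, threshold=FILL_THRESHOLD):
    """Classify one question row given per-bubble shaded-pixel counts.

    Returns one of: "correct", "incorrect", "blank", "multiple".
    """
    # indices of all bubbles whose shaded-pixel count clears the threshold
    filled = [j for j, total in enumerate(pixel_counts) if total >= threshold]
    if len(filled) == 0:
        return "blank"       # no bubble marked for this question
    if len(filled) > 1:
        return "multiple"    # more than one bubble marked
    return "correct" if filled[0] == answer_key_index else "incorrect"

if __name__ == "__main__":
    # counts as cv2.countNonZero(mask) might report for a 5-bubble row
    print(classify_row([900, 120, 100, 110, 130], 0))  # single mark matching the key
    print(classify_row([120, 110, 100, 115, 130], 2))  # nothing marked
    print(classify_row([900, 880, 100, 110, 130], 0))  # two marks
```

Rows classified as "blank" or "multiple" could then be drawn in a third color or excluded from the score instead of being silently graded on the largest count.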