├── README.md ├── blinkr.py ├── paper.bib └── requirements.txt /README.md: -------------------------------------------------------------------------------- 1 | # Blinkr 2 | Jetson Nano AI Blink Detection and Reminder Project 3 | 4 | ⭐️ Winner of the NVIDIA Jetson AI Specialist Award and NVIDIA Project of the Month ⭐️ 5 | 6 | ![ScreenShot](https://i.ibb.co/pQpCdX8/Are-tch-2.png) 7 | 8 | ## Inspiration for building Blinkr 9 | As many studies show, our blink rate drops while we look at a computer screen, and we usually do not realize it. Reduced blinking can lead to unwanted side effects such as dry, tired eyes. I wanted to find a way to make ourselves blink more while using the computer. 10 | 11 | ![ScreenShot](https://i.ibb.co/f2XzGk2/Are-tch-3.png) 12 | 13 | ## How Blinkr aims to solve this problem 14 | Blinkr is a device that utilizes [AI (Artificial Intelligence)](https://en.wikipedia.org/wiki/Artificial_intelligence) to detect blinks. Blinkr uses a camera that faces the user, counts the number of times the user blinks, and warns them if they are not blinking enough. An average adult blinks 10-20 times a minute. Blinkr tracks how much you are blinking per minute and tells you whether you need to blink more. 15 | 16 | ![ScreenShot](https://i.ibb.co/DCbsY5Z/Are-tch-5.png) 17 | 18 | ## How does Blinkr work 19 | The Blinkr device utilizes the [NVIDIA Jetson Nano AI Computer](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/). The NVIDIA Jetson Nano is a fast single-board computer designed for AI, and it runs the Blinkr code. Additionally, Blinkr uses a camera, a speaker, and a screen. 20 | 21 | ![ScreenShot](https://i.ibb.co/r6ZHpgQ/Are-tch-6.png) 22 | 23 | ## Make your own Blinkr 24 | Recreating your own Blinkr device is easy! Using the code repository and the instructions below you can make your own Blinkr and blink more. Follow the steps below carefully and you'll be on your way.
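Before the build steps, here is the detection idea in miniature. Each eye is described by six landmark points, and the Eye Aspect Ratio (EAR), the eye's vertical opening divided by its width, drops sharply during a blink. A minimal sketch (the coordinates here are made up for illustration, not real landmark data):

```python
from math import dist

def eye_aspect_ratio(eye):
    """EAR for six (x, y) eye landmarks, ordered p1..p6 as in the paper below."""
    a = dist(eye[1], eye[5])  # vertical distance p2-p6
    b = dist(eye[2], eye[4])  # vertical distance p3-p5
    c = dist(eye[0], eye[3])  # horizontal distance p1-p4
    return (a + b) / (2.0 * c)

open_eye = [(0, 3), (2, 5), (5, 5), (7, 3), (5, 1), (2, 1)]
closed_eye = [(0, 3), (2, 3.2), (5, 3.2), (7, 3), (5, 2.8), (2, 2.8)]

print(round(eye_aspect_ratio(open_eye), 2))    # 0.57, well above the 0.26 threshold
print(round(eye_aspect_ratio(closed_eye), 2))  # 0.06, well below it
```

The script in this repo uses the same formula (`calculate_EAR` in blinkr.py) with a blink threshold of 0.26.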
Below is a picture of a research paper that explains how to detect blinks: 25 | 26 | ![ScreenShot](https://i.ibb.co/4RY4cRH/Screen-Shot-2020-11-23-at-5-35-36-PM.png) 27 | ###### Real-Time Eye Blink Detection using Facial Landmarks - Tereza Soukupova and Jan Cech - Center for Machine Perception, Department of Cybernetics - Faculty of Electrical Engineering, Czech Technical University in Prague 28 | 29 | ***Here we can see how the EAR (Eye Aspect Ratio) changes when someone blinks. Using this change we can detect blinks.*** 30 | 31 | ## Materials needed to build Blinkr 32 | 33 | 1. [NVIDIA Jetson Nano 2GB Developer Kit](https://www.amazon.com/NVIDIA-Jetson-Nano-2GB-Developer/dp/B08J157LHH/ref=sr_1_3?dchild=1&keywords=nvidia+jetson+nano&qid=1606178473&sr=8-3) -- This is the Jetson Nano AI Computer that will be the core of the Blinkr device. It runs the code and performs all of the AI processing, and peripherals such as the camera connect to it. 34 | 35 | 2. [HDMI Display](https://www.amazon.com/Developer-Accessories-Powerful-Development-XYGStudy/dp/B08629Y5JR/ref=sr_1_1_sspa?dchild=1&keywords=nvidia%2Bjetson%2Bnano%2Bdisplay&qid=1606178640&sr=8-1-spons&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEzTkZDV1A2REZGVVhPJmVuY3J5cHRlZElkPUExMDM4NDgyMkdTS1dWSkNXWks0WSZlbmNyeXB0ZWRBZElkPUEwMzk0NjI2MzlVVUlZUzVFQkxVUCZ3aWRnZXROYW1lPXNwX2F0ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU&th=1) -- You can either use a large HDMI display or purchase a small HDMI display and make the device fully integrated. I have linked a small HDMI display compatible with the Jetson Nano; the product includes the display, cables, and more to get you started. 36 | 37 | 3. [Raspberry Pi Camera V2](https://www.amazon.com/gp/product/B07W5T3J5T/ref=ppx_yo_dt_b_asin_title_o05_s00?ie=UTF8&psc=1) -- This is the link to the Raspberry Pi Camera, but you can also use a webcam if you choose to do so.
The Raspberry Pi Camera is small and compact, so I decided to use it. 38 | 39 | 4. [5V Power Supply](https://www.amazon.com/gp/product/B07TYQRXTK/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1) -- You will need a good 5V power supply to power the Jetson Nano. I have chosen one that produces a stable source of power. 40 | 41 | 5. [Micro SD Card](https://www.amazon.com/SanDisk-Ultra-microSDXC-Memory-Adapter/dp/B073JWXGNT/ref=sr_1_2?dchild=1&keywords=sandisk+32gb&qid=1606179035&sr=8-2) -- You will need a Micro SD Card to put the OS (Operating System) onto the Jetson Nano. 42 | 43 | 6. [HDMI Cable](https://www.amazon.com/AmazonBasics-High-Speed-HDMI-Cable-1-Pack/dp/B014I8T0YQ/ref=sr_1_1_sspa?crid=20LGUYKA7TEIC&dchild=1&keywords=hdmi+cable+amazonbasics&qid=1606179113&sprefix=HDMI+Cable+amazon%2Caps%2C227&sr=8-1-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEzQTc5VDMxOFo0U0o3JmVuY3J5cHRlZElkPUEwNTIxMTExM0hGVURXN1ZaSzNHTyZlbmNyeXB0ZWRBZElkPUEwNzYzMTI2M0o3RFVOQ1NORVBJMCZ3aWRnZXROYW1lPXNwX2F0ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=) -- You will need an HDMI cable to connect a monitor/display to the Jetson Nano. 44 | 45 | 7. [Speaker](https://www.amazon.com/HONKYOB-Speaker-Computer-Multimedia-Notebook/dp/B075M7FHM1/ref=sr_1_3?dchild=1&keywords=usb+mini+speaker&qid=1606240300&sr=8-3) -- You will need a USB speaker for the audible blink reminder. 46 | 47 | ## Setting up the NVIDIA Jetson Nano 2GB Developer Kit 48 | Go to this [link](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/education-projects/) and follow the steps to get your NVIDIA Jetson Nano working. After following all of these steps and flashing the OS onto your Nano, continue on below. 49 | 50 | ## Connecting the Camera to the Jetson Nano 51 | 52 | A webcam can be connected via USB. To connect the Raspberry Pi Camera V2 to the Jetson Nano, follow this [video](https://www.youtube.com/watch?v=dHvb225Pw1s).
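Whichever camera you use, it is worth seeing how the CSI camera pipeline string used in blinkr.py is assembled: the width, height, and flip values are integers, so they must be converted with str() before being spliced into the string. A small helper makes this easier to check (a sketch; the `gst_pipeline` name is mine, not part of the repo):

```python
def gst_pipeline(width: int = 320, height: int = 240, flip: int = 1) -> str:
    """Build the GStreamer pipeline string for the Jetson Nano CSI camera."""
    return (
        "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, "
        "format=NV12, framerate=21/1 ! nvvidconv flip-method=" + str(flip) +
        " ! video/x-raw, width=" + str(width) + ", height=" + str(height) +
        ", format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# Print the assembled pipeline to eyeball it before wiring it into the script
print(gst_pipeline())
```

On the Jetson you would then open the camera with `cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)`.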
If you are using a webcam, comment out this line: 53 | ```python 54 | cam = cv2.VideoCapture('nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(width)+', height='+str(height)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER) 55 | ``` 56 | If you are using the Raspberry Pi Camera V2, uncomment the line above and comment out this line instead: 57 | ```python 58 | cam = cv2.VideoCapture(0) 59 | ``` 60 | Also define width, height, and flip variables above it. I recommend the following width and height (you will have to test the flip value and see which orientation works): 61 | ``` 62 | width = 320 63 | height = 240 64 | flip = 1 65 | ``` 66 | 67 | ## Installing packages 68 | 69 | Before installing Python packages, you will need to download the "shape_predictor_68_face_landmarks.dat" file and modify this line with the path to the file: 70 | 71 | ```python 72 | dlib_facelandmark = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat") 73 | ``` 74 | 75 | To download it, visit this [link](http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2). 76 | 77 | Before installing packages via pip, install pip itself by running this line in the LX Terminal: 78 | ``` 79 | sudo apt-get install python3-pip 80 | ``` 81 | These are the packages used in the Python script: 82 | ```python 83 | from scipy.spatial import distance 84 | from gtts import gTTS 85 | import playsound 86 | import time 87 | import dlib 88 | import cv2 89 | import sys 90 | import os 91 | ``` 92 | 93 | You can install these packages via pip using the [requirements.txt](https://github.com/AdritRao/Blinkr/blob/main/requirements.txt) file: 94 | ``` 95 | pip3 install -r requirements.txt 96 | ``` 97 | 98 | ## Time and Blink Count Modifications 99 | 100 | You can modify the length of each monitoring window in the timer loop.
Change the number of seconds (60 by default) in this line to set how long Blinkr waits before checking your blink count: 101 | 102 | ```python 103 | while time.time() - start < 60: 104 | ``` 105 | 106 | You can modify the number of blinks needed in this block: 107 | 108 | ```python 109 | if blink_count >= 10: 110 | print("Good, keep blinking") 111 | blink_count = 0 112 | os.execl(sys.executable, sys.executable, *sys.argv) 113 | else: 114 | speak("Blink more please") 115 | blink_count = 0 116 | os.execl(sys.executable, sys.executable, *sys.argv) 117 | ``` 118 | 119 | ## Eye Line and Blink Color Modifications 120 | 121 | You can change the color of the line drawn around the left and right eye by changing this line of code (it appears once for each eye, and OpenCV colors are in blue-green-red order): 122 | 123 | ```python 124 | cv2.line(frame,(x,y),(x2,y2),(0,255,0),1) 125 | ``` 126 | 127 | You can change the font color of "Blink" in this line: 128 | 129 | ```python 130 | cv2.putText(frame, "Blink", (20, 100), cv2.FONT_HERSHEY_SIMPLEX, 3, (255, 0, 0), 4) 131 | ``` 132 | 133 | ## Program Restart 134 | 135 | Every minute, the program restarts itself via this line of code: 136 | 137 | ```python 138 | os.execl(sys.executable, sys.executable, *sys.argv) 139 | ``` 140 | 141 | ## Final Step - Running the code on the Jetson Nano 142 | Download the code from this repo and save it onto your Jetson Nano. Open the LX Terminal, navigate to the folder, and run: 143 | ``` 144 | python3 blinkr.py 145 | ``` 146 | I hope you enjoy using Blinkr!
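P.S. If you want to tinker with the reminder logic without the device, the decision Blinkr makes at the end of each minute boils down to a simple threshold check. A simulated sketch (no camera or audio needed; the name `reminder_message` is illustrative, not a function in blinkr.py):

```python
def reminder_message(blink_count: int, blinks_needed: int = 10) -> str:
    """Decide what Blinkr should say once a one-minute window ends."""
    if blink_count >= blinks_needed:
        return "Good, keep blinking"
    return "Blink more please"

# Simulated one-minute windows
print(reminder_message(14))  # enough blinks
print(reminder_message(3))   # too few, so Blinkr speaks up
```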
👁 147 | 148 | 149 | 150 | -------------------------------------------------------------------------------- /blinkr.py: -------------------------------------------------------------------------------- 1 | from scipy.spatial import distance 2 | from gtts import gTTS 3 | import playsound 4 | import time 5 | import dlib 6 | import cv2 7 | import sys 8 | import os 9 | 10 | ## Blink count variable to count number of blinks 11 | blink_count = 0 12 | 13 | def speak(text: str): 14 | tts = gTTS(text=text, lang="en") 15 | filename = "sound.mp3" 16 | tts.save(filename) 17 | playsound.playsound(filename) 18 | 19 | 20 | def calculate_EAR(eye): 21 | A = distance.euclidean(eye[1], eye[5]) 22 | B = distance.euclidean(eye[2], eye[4]) 23 | C = distance.euclidean(eye[0], eye[3]) 24 | ear_aspect_ratio = (A+B)/(2.0*C) 25 | return ear_aspect_ratio 26 | 27 | ## This is if you are using a Web Cam. If 0 does not work change it to 1. 28 | cam = cv2.VideoCapture(0) 29 | 30 | ## This is if you are using a Raspberry Pi Camera V2 (uncomment it, comment out the line above, and define width, height and flip first). 31 | # cam = cv2.VideoCapture('nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw, width='+str(width)+', height='+str(height)+', format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER) 32 | 33 | hog_face_detector = dlib.get_frontal_face_detector() 34 | dlib_facelandmark = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat") 35 | 36 | start = time.time() 37 | 38 | ## Timer while loop -- code will restart after time is up 39 | while time.time() - start < 60: 40 | 41 | _, frame = cam.read() 42 | gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) 43 | 44 | faces = hog_face_detector(gray) 45 | for face in faces: 46 | 47 | face_landmarks = dlib_facelandmark(gray, face) 48 | leftEye = [] 49 | rightEye = [] 50 | ## Left eye detection 51 | for n in range(36,42): 52 | x = face_landmarks.part(n).x 53 | y = face_landmarks.part(n).y 54 | leftEye.append((x,y)) 55 | next_point = n+1 56 | if n == 41: 57 | next_point = 36 58 | x2 = face_landmarks.part(next_point).x 59 | y2 = face_landmarks.part(next_point).y 60 | cv2.line(frame,(x,y),(x2,y2),(0,255,0),1) 61 | 62 | ## Right eye detection 63 | for n in range(42,48): 64 | x = face_landmarks.part(n).x 65 | y = face_landmarks.part(n).y 66 | rightEye.append((x,y)) 67 | next_point = n+1 68 | if n == 47: 69 | next_point = 42 70 | x2 = face_landmarks.part(next_point).x 71 | y2 = face_landmarks.part(next_point).y 72 | cv2.line(frame,(x,y),(x2,y2),(0,255,0),1) 73 | 74 | left_ear = calculate_EAR(leftEye) 75 | right_ear = calculate_EAR(rightEye) 76 | 77 | EAR = (left_ear+right_ear)/2 78 | EAR = round(EAR,2) 79 | ## Check Eye Aspect Ratio for blink 80 | if EAR < 0.26: 81 | cv2.putText(frame, "Blink", (20, 100), cv2.FONT_HERSHEY_SIMPLEX, 3, (255, 0, 0), 4) 82 | blink_count = blink_count + 1 83 | print("Blinked " + str(blink_count)) 84 | 85 | 86 | cv2.imshow("Blinkr", frame) 87 | 88 | key = cv2.waitKey(1) ## waitKey returns an int, so compare against ord('q') 89 | if key == ord('q'): 90 | break 91 | ## Check blink count after time is up 92 | else: 93 | if blink_count >= 10: 94 | print("Good, keep blinking") 95 | blink_count = 0 96 | os.execl(sys.executable, sys.executable, *sys.argv) 97 | else: 98 | speak("Blink more please") 99 | blink_count = 0 100 | 
os.execl(sys.executable, sys.executable, *sys.argv) 101 | 102 | cam.release() 103 | cv2.destroyAllWindows() 104 | -------------------------------------------------------------------------------- /paper.bib: -------------------------------------------------------------------------------- 1 | 2 | @article{cech2016real, 3 | title={Real-time eye blink detection using facial landmarks}, 4 | author={Cech, Jan and Soukupova, Tereza}, 5 | journal={Cent. Mach. Perception, Dep. Cybern. Fac. Electr. Eng. Czech Tech. Univ. Prague}, 6 | pages={1--8}, 7 | year={2016} 8 | } 9 | 10 | @misc{dlib, title={Dlib C++ Library}, url={http://dlib.net/python/index.html#}, howpublished={Dlib C++ Library}} 11 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | gTTS==2.2.1 2 | scipy==1.5.4 3 | opencv_python==3.4.11.45 4 | playsound==1.2.2 5 | dlib==19.21.0 6 | --------------------------------------------------------------------------------