├── Examples
│   ├── .DS_Store
│   ├── Angles.png
│   ├── FacePan.gif
│   ├── CalculateAngles.png
│   └── FacialLandmarks.png
├── .gitignore
├── Documents
│   └── FacePoseDetection.pdf
├── requirements.txt
├── Dockerfile
├── README.md
├── DetectFacePose.py
└── Notebooks
    └── CV2_Face_Pose_Detection.ipynb

--------------------------------------------------------------------------------
/Examples/.DS_Store:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
Notebooks/.ipynb_checkpoints
Output_detection.jpg
.DS_Store
--------------------------------------------------------------------------------
/Examples/Angles.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nawafalageel/Side-Profile-Detection/HEAD/Examples/Angles.png
--------------------------------------------------------------------------------
/Examples/FacePan.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nawafalageel/Side-Profile-Detection/HEAD/Examples/FacePan.gif
--------------------------------------------------------------------------------
/Examples/CalculateAngles.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nawafalageel/Side-Profile-Detection/HEAD/Examples/CalculateAngles.png
--------------------------------------------------------------------------------
/Examples/FacialLandmarks.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nawafalageel/Side-Profile-Detection/HEAD/Examples/FacialLandmarks.png
--------------------------------------------------------------------------------
/Documents/FacePoseDetection.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nawafalageel/Side-Profile-Detection/HEAD/Documents/FacePoseDetection.pdf
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
torch==1.9.1
requests==2.25.1
# Note: the headless OpenCV build cannot open display windows; install
# opencv_python instead if you want the webcam preview (cv2.imshow).
opencv_python_headless==4.5.2.52
matplotlib==3.3.4
numpy==1.18.2
facenet_pytorch==2.5.2
Pillow==8.3.2
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
FROM python:3.7-slim

COPY . /app

WORKDIR /app

RUN pip install -r requirements.txt

RUN pip install jupyter

EXPOSE 8888

CMD jupyter notebook --no-browser --ip=0.0.0.0 --port 8888 --allow-root --NotebookApp.token='' --NotebookApp.allow_origin='*'
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
- [Installation](#installation)
  - [Natively](#natively)
  - [Docker](#docker)
- [How to use](#how-to-use)
- [Face Pose Detection](#face-pose-detection)

# Installation
## Natively
```pip install -r requirements.txt```

## Docker

The container runs a Jupyter notebook server with all requirements.

```docker build -t facepose:v0 .```

```docker run -p 10091:8888 facepose:v0```

To use the notebook with a GPU:

```docker run -p 10091:8888 --gpus all facepose:v0```

# How to use
See the example [notebook](Notebooks/CV2_Face_Pose_Detection.ipynb); the easiest way to get everything installed is Docker. Alternatively, install the requirements and run:

```python DetectFacePose.py -p <image-path>```

Use `-u <image-URL>` for URLs.

Run the command with no arguments and it will use your webcam:

```python DetectFacePose.py```

# Face Pose Detection
This repo looks into a technique that can help detect face orientation, or pose. I focus on detecting only three main poses:
- Frontal Face.
- Right Profile.
- Left Profile.

Detecting a face tilted forward or backward is out of the scope of this post. The technique covers only out-of-plane orientation estimation, i.e., face rotation about the y-axis (pan rotation). Since the technique relies heavily on the face detection model, I used MTCNN because it produces the facial landmarks we need to detect face poses. Simply explained: when a face is detected, we extract its landmarks (right eye, left eye, nose, left mouth corner, and right mouth corner), and from these points we calculate the angles between the right eye, left eye, and nose using a Cartesian coordinate system in 2D Euclidean space. Setting threshold ranges (found experimentally) for the right-eye and left-eye angles lets us estimate whether the face is a left profile, a right profile, or frontal.

![Facial Landmarks](Examples/FacialLandmarks.png)

Facial landmarks and an illustration of the eye angles. As shown in the figure, we are going to calculate the angles $\theta_1$ and $\theta_2$.

After the model produces the landmarks, we draw "imaginary" lines between the three points (right eye, left eye, and nose), forming a triangle, and from that triangle we calculate the angles $\theta_1$ and $\theta_2$. In this step each side of the triangle is treated as a vector: a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow or a line segment; its magnitude is its length, and its direction is the direction the line points to. For more details, I recommend [Manivannan](https://manivannan-ai.medium.com/find-the-angle-between-three-points-from-2d-using-python-348c513e2cd)'s post about how to calculate angles with Python; thanks to him, since his post helped me a lot. The next figure demonstrates how to calculate the angles:

![Calculating angles](Examples/CalculateAngles.png)

And here is an example from the
[Head Pose Image Database](http://crowley-coutaz.fr/Head%20Pose%20Image%20Database.html):

![HPID Example](Examples/FacePan.gif)

For more details see the full document [here](Documents/FacePoseDetection.pdf).
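
The angle at vertex $b$ of a triangle $abc$ is $\theta = \arccos\left(\frac{\vec{ba} \cdot \vec{bc}}{\lVert \vec{ba} \rVert \, \lVert \vec{bc} \rVert}\right)$. Here is a minimal, self-contained sketch of the pose test described above; it mirrors the `npAngle` helper and the experimentally chosen threshold ranges from `DetectFacePose.py`, with landmark coordinates made up purely for illustration:

```python
import numpy as np

def np_angle(a, b, c):
    # Angle (in degrees) at vertex b of the triangle a-b-c.
    ba = np.array(a) - np.array(b)
    bc = np.array(c) - np.array(b)
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(cosine))

# Hypothetical (x, y) landmark coordinates for one detected face.
left_eye, right_eye, nose = (120.0, 100.0), (180.0, 102.0), (152.0, 140.0)

ang_r = np_angle(left_eye, right_eye, nose)  # angle at the right eye
ang_l = np_angle(right_eye, left_eye, nose)  # angle at the left eye

# Threshold ranges found experimentally (same values as DetectFacePose.py).
if 35 <= int(ang_r) < 57 and 35 <= int(ang_l) < 58:
    print('Frontal')
elif ang_r < ang_l:
    print('Left Profile')
else:
    print('Right Profile')
```

With these roughly symmetric landmarks both eye angles fall inside the frontal ranges, so the sketch prints `Frontal`; as the head pans, the triangle skews and one eye angle leaves its range, which is what the profile branches pick up.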
--------------------------------------------------------------------------------
/DetectFacePose.py:
--------------------------------------------------------------------------------
from facenet_pytorch import MTCNN
from PIL import Image
from matplotlib import pyplot as plt
import numpy as np
import math
import requests
import argparse
import torch
import cv2

parser = argparse.ArgumentParser(description="Face pose detection for one face")
parser.add_argument("-p", "--path", help="To use an image path.", type=str)
parser.add_argument("-u", "--url", help="To use an image URL.", type=str)
args = parser.parse_args()

path = args.path
url = args.url

fontScale = 2
fontThickness = 3
lineColor = (255, 255, 0)

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(f'Running on device: {device}')

mtcnn = MTCNN(image_size=160,
              margin=0,
              min_face_size=20,
              thresholds=[0.6, 0.7, 0.7],  # MTCNN stage thresholds
              factor=0.709,
              post_process=True,
              device=device  # GPU if available, otherwise CPU
              )

# Landmark order: [left eye], [right eye], [nose], [left mouth], [right mouth]
def npAngle(a, b, c):
    # Angle (in degrees) at vertex b of the triangle a-b-c,
    # computed from the vectors b->a and b->c.
    ba = a - b
    bc = c - b

    cosine_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    angle = np.arccos(cosine_angle)

    return np.degrees(angle)

def visualize(image, landmarks_, angle_R_, angle_L_, pred_):
    fig, ax = plt.subplots(1, 1, figsize=(8, 8))

    leftCount = len([i for i in pred_ if i == 'Left Profile'])
    rightCount = len([i for i in pred_ if i == 'Right Profile'])
    frontalCount = len([i for i in pred_ if i == 'Frontal'])
    facesCount = len(pred_)  # Number of detected faces (above the threshold)
    ax.set_title(f"Number of detected faces = {facesCount} \n frontal = {frontalCount}, left = {leftCount}, right = {rightCount}")
    for landmarks, angle_R, angle_L, pred in zip(landmarks_, angle_R_, angle_L_, pred_):

        if pred == 'Frontal':
            color = 'white'
        elif pred == 'Right Profile':
            color = 'blue'
        else:
            color = 'red'

        # x/y coordinate pairs for the three edges of the eye-eye-nose triangle.
        point1 = [landmarks[0][0], landmarks[1][0]]
        point2 = [landmarks[0][1], landmarks[1][1]]

        point3 = [landmarks[2][0], landmarks[0][0]]
        point4 = [landmarks[2][1], landmarks[0][1]]

        point5 = [landmarks[2][0], landmarks[1][0]]
        point6 = [landmarks[2][1], landmarks[1][1]]

        for land in landmarks:
            ax.scatter(land[0], land[1])
        plt.plot(point1, point2, 'y', linewidth=3)
        plt.plot(point3, point4, 'y', linewidth=3)
        plt.plot(point5, point6, 'y', linewidth=3)
        plt.text(point1[0], point2[0], f"{pred} \n {math.floor(angle_L)}, {math.floor(angle_R)}",
                 size=20, ha="center", va="center", color=color)
    ax.imshow(image)
    fig.savefig('Output_detection.jpg')
    print('Done detecting; saved Output_detection.jpg')

def visualizeCV2(frame, landmarks_, angle_R_, angle_L_, pred_):

    for landmarks, angle_R, angle_L, pred in zip(landmarks_, angle_R_, angle_L_, pred_):

        if pred == 'Frontal':
            color = (0, 0, 0)
        elif pred == 'Right Profile':
            color = (255, 0, 0)
        else:
            color = (0, 0, 255)

        # Label anchor: the left-eye coordinates.
        point1 = [int(landmarks[0][0]), int(landmarks[1][0])]
        point2 = [int(landmarks[0][1]), int(landmarks[1][1])]

        for land in landmarks:
            cv2.circle(frame, (int(land[0]), int(land[1])), radius=5, color=(0, 255, 255), thickness=-1)
        # Draw the eye-eye-nose triangle.
        cv2.line(frame, (int(landmarks[0][0]), int(landmarks[0][1])), (int(landmarks[1][0]), int(landmarks[1][1])), lineColor, 3)
        cv2.line(frame, (int(landmarks[0][0]), int(landmarks[0][1])), (int(landmarks[2][0]), int(landmarks[2][1])), lineColor, 3)
        cv2.line(frame, (int(landmarks[1][0]), int(landmarks[1][1])), (int(landmarks[2][0]), int(landmarks[2][1])), lineColor, 3)

        cv2.putText(frame, pred, (point1[0], point2[0]), cv2.FONT_HERSHEY_PLAIN, fontScale, color, fontThickness, cv2.LINE_AA)

def predFacePose(frame):

    # The detection step produces bounding boxes, detection probabilities,
    # and the facial landmarks; all three are None when no face is found.
    bbox_, prob_, landmarks_ = mtcnn.detect(frame, landmarks=True)
    keptLandmarks = []
    angle_R_List = []
    angle_L_List = []
    predLabelList = []

    if bbox_ is None:
        print('No face detected in the image')
        return keptLandmarks, angle_R_List, angle_L_List, predLabelList

    for bbox, landmarks, prob in zip(bbox_, landmarks_, prob_):
        if prob > 0.9:  # Keep only faces detected with probability above 90%
            angR = npAngle(landmarks[0], landmarks[1], landmarks[2])  # Angle at the right eye
            angL = npAngle(landmarks[1], landmarks[0], landmarks[2])  # Angle at the left eye
            keptLandmarks.append(landmarks)
            angle_R_List.append(angR)
            angle_L_List.append(angL)
            if (int(angR) in range(35, 57)) and (int(angL) in range(35, 58)):
                predLabel = 'Frontal'
            elif angR < angL:
                predLabel = 'Left Profile'
            else:
                predLabel = 'Right Profile'
            predLabelList.append(predLabel)
        else:
            print('Detected face probability is below the 0.9 threshold')
    return keptLandmarks, angle_R_List, angle_L_List, predLabelList


def predFacePoseApp(path, url):

    if path is not None:
        try:
            im = Image.open(path)
            if im.mode != "RGB":  # MTCNN expects a 3-channel RGB image
                im = im.convert('RGB')
            landmarks_, angle_R_List, angle_L_List, predLabelList = predFacePose(im)
            visualize(im, landmarks_, angle_R_List, angle_L_List, predLabelList)
        except Exception as e:
            print(f"Issue with image path: {e}")
    elif url is not None:
        try:
            im = Image.open(requests.get(url, stream=True).raw)
            if im.mode != "RGB":  # MTCNN expects a 3-channel RGB image
                im = im.convert('RGB')
            landmarks_, angle_R_List, angle_L_List, predLabelList = predFacePose(im)
            visualize(im, landmarks_, angle_R_List, angle_L_List, predLabelList)
        except Exception as e:
            print(f"Issue with image URL: {e}")
    else:
        # No path or URL given: fall back to the webcam.
        source = 0
        video_cap = cv2.VideoCapture(source)

        # Create a named window for the video display.
        win_name = 'Video Preview'
        cv2.namedWindow(win_name)

        while True:
            # Read one frame at a time using the video capture object.
            has_frame, frame = video_cap.read()
            if not has_frame:
                break

            landmarks_, angle_R_List, angle_L_List, predLabelList = predFacePose(frame)

            # Annotate each video frame and show it.
            visualizeCV2(frame, landmarks_, angle_R_List, angle_L_List, predLabelList)
            cv2.imshow(win_name, frame)

            # Quit on `q`, `Q`, or Esc.
            key = cv2.waitKey(1)
            if key in (ord('Q'), ord('q'), 27):
                break

        video_cap.release()
        cv2.destroyWindow(win_name)


if __name__ == '__main__':
    predFacePoseApp(path, url)
--------------------------------------------------------------------------------
/Notebooks/CV2_Face_Pose_Detection.ipynb:
--------------------------------------------------------------------------------
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [],
   "source": [
    "from facenet_pytorch import MTCNN\n",
    "from PIL import Image\n",
    "from matplotlib import pyplot as plt\n",
    "import numpy as np\n",
    "import math\n",
    "import cv2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "mtcnn = MTCNN(image_size=160,\n",
    "              margin=0,\n",
    "              min_face_size=20,\n",
    "              thresholds=[0.6, 0.7, 0.7],  # MTCNN stage thresholds\n",
    "              factor=0.709,\n",
    "              post_process=True,\n",
    "              device='cpu'  # Use 'cuda' if a GPU is available\n",
    "              )\n",
    "\n",
    "# Drawing settings shared by the cells below.\n",
    "lineColor = (255, 255, 0)\n",
    "fontScale = 2\n",
    "fontThickness = 3"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Landmark order: [left eye], [right eye], [nose], [left mouth], [right mouth]\n",
    "def npAngle(a, b, c):\n",
    "    # Angle (in degrees) at vertex b of the triangle a-b-c.\n",
    "    ba = np.array(a) - np.array(b)\n",
    "    bc = np.array(c) - np.array(b)\n",
    "\n",
    "    cosine_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))\n",
    "    angle = np.arccos(cosine_angle)\n",
    "\n",
    "    return np.degrees(angle)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [],
   "source": [
    "def visualizeCV2(image, landmarks_, angle_R_, angle_L_, pred_):\n",
    "\n",
    "    for landmarks, angle_R, angle_L, pred in zip(landmarks_, angle_R_, angle_L_, pred_):\n",
    "\n",
    "        if pred == 'Frontal':\n",
    "            color = (0, 0, 0)\n",
    "        elif pred == 'Right Profile':\n",
    "            color = (255, 0, 0)\n",
    "        else:\n",
    "            color = (0, 0, 255)\n",
    "\n",
    "        # Label anchor: the left-eye coordinates.\n",
    "        point1 = [int(landmarks[0][0]), int(landmarks[1][0])]\n",
    "        point2 = [int(landmarks[0][1]), int(landmarks[1][1])]\n",
    "\n",
    "        for land in landmarks:\n",
    "            cv2.circle(image, (int(land[0]), int(land[1])), radius=5, color=(0, 255, 255), thickness=-1)\n",
    "        # Draw the eye-eye-nose triangle.\n",
    "        cv2.line(image, (int(landmarks[0][0]), int(landmarks[0][1])), (int(landmarks[1][0]), int(landmarks[1][1])), lineColor, 3)\n",
    "        cv2.line(image, (int(landmarks[0][0]), int(landmarks[0][1])), (int(landmarks[2][0]), int(landmarks[2][1])), lineColor, 3)\n",
    "        cv2.line(image, (int(landmarks[1][0]), int(landmarks[1][1])), (int(landmarks[2][0]), int(landmarks[2][1])), lineColor, 3)\n",
    "\n",
    "        cv2.putText(image, pred, (point1[0], point2[0]), cv2.FONT_HERSHEY_PLAIN, fontScale, color, fontThickness, cv2.LINE_AA)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "def predFacePoseCV2(frame):\n",
    "\n",
    "    # The detection step produces bounding boxes, detection probabilities,\n",
    "    # and the facial landmarks; all three are None when no face is found.\n",
    "    bbox_, prob_, landmarks_ = mtcnn.detect(frame, landmarks=True)\n",
    "    keptLandmarks = []\n",
    "    angle_R_List = []\n",
    "    angle_L_List = []\n",
    "    predLabelList = []\n",
    "\n",
    "    if bbox_ is None:\n",
    "        print('No face detected in the image')\n",
    "        return keptLandmarks, angle_R_List, angle_L_List, predLabelList\n",
    "\n",
    "    for bbox, landmarks, prob in zip(bbox_, landmarks_, prob_):\n",
    "        if prob > 0.9:  # Keep only faces detected with probability above 90%\n",
    "            angR = npAngle(landmarks[0], landmarks[1], landmarks[2])  # Angle at the right eye\n",
    "            angL = npAngle(landmarks[1], landmarks[0], landmarks[2])  # Angle at the left eye\n",
    "            keptLandmarks.append(landmarks)\n",
    "            angle_R_List.append(angR)\n",
    "            angle_L_List.append(angL)\n",
    "            if (int(angR) in range(35, 57)) and (int(angL) in range(35, 58)):\n",
    "                predLabel = 'Frontal'\n",
    "            elif angR < angL:\n",
    "                predLabel = 'Left Profile'\n",
    "            else:\n",
    "                predLabel = 'Right Profile'\n",
    "            predLabelList.append(predLabel)\n",
    "        else:\n",
    "            print('Detected face probability is below the 0.9 threshold')\n",
    "    return keptLandmarks, angle_R_List, angle_L_List, predLabelList"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
    "source = 0\n",
    "\n",
    "# Create a video capture object from the VideoCapture class.\n",
    "video_cap = cv2.VideoCapture(source)\n",
    "\n",
    "# Create a named window for the video display.\n",
    "win_name = 'Video Preview'\n",
    "cv2.namedWindow(win_name)\n",
    "\n",
    "while True:\n",
    "    # Read one frame at a time using the video capture object.\n",
    "    has_frame, frame = video_cap.read()\n",
    "    if not has_frame:\n",
    "        break\n",
    "\n",
    "    landmarks_, angle_R_List, angle_L_List, predLabelList = predFacePoseCV2(frame)\n",
    "\n",
    "    # Annotate each video frame and show it.\n",
    "    visualizeCV2(frame, landmarks_, angle_R_List, angle_L_List, predLabelList)\n",
    "    cv2.imshow(win_name, frame)\n",
    "\n",
    "    # Quit on `q`, `Q`, or Esc.\n",
    "    key = cv2.waitKey(1)\n",
    "    if key in (ord('Q'), ord('q'), 27):\n",
    "        break\n",
    "\n",
    "video_cap.release()\n",
    "cv2.destroyWindow(win_name)"
   ]
  }
 ],
"metadata": { 397 | "kernelspec": { 398 | "display_name": "Python 3", 399 | "language": "python", 400 | "name": "python3" 401 | }, 402 | "language_info": { 403 | "codemirror_mode": { 404 | "name": "ipython", 405 | "version": 3 406 | }, 407 | "file_extension": ".py", 408 | "mimetype": "text/x-python", 409 | "name": "python", 410 | "nbconvert_exporter": "python", 411 | "pygments_lexer": "ipython3", 412 | "version": "3.7.4" 413 | } 414 | }, 415 | "nbformat": 4, 416 | "nbformat_minor": 4 417 | } 418 | --------------------------------------------------------------------------------