20 | Gesture is a desktop application that allows the user to control various applications through hand gestures.
21 |
22 |
23 |
24 |
25 |
26 |
27 |
30 |
31 | ## Tech Stack
32 |
33 | **Frontend**
34 |
35 | - JavaScript
36 | - React
37 | - Electron
38 |
39 | **Backend**
40 |
41 | - Python
42 | - Flask
43 | - OpenCV
44 |
45 | ## Setup
46 |
47 | **Installation**
48 |
49 | ```bash
50 | git clone https://github.com/aggie-coding-club/Vision-Controls Gesture
51 | cd Gesture
52 | pip install -r requirements.txt
53 | cd frontend
54 | npm install
55 | ```
56 |
57 | Once setup is complete, run **npm run dev** from the `frontend` directory to launch the application.
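
The Electron main process (`frontend/main.js`) spawns the Flask backend (`flask/app.py`) automatically when the window is created. If you want to run the backend on its own (for example, to debug the gesture recognition), a typical invocation looks like this; note that the recognition code targets Python 3.7.x for MediaPipe compatibility:

```bash
cd flask
python app.py   # serves the video feed and configuration API on http://localhost:5000
```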
58 |
59 | ## Features
60 |
61 | - Full desktop UI built with React and Electron
62 | - Gesture recognition and mouse control through Python with OpenCV
63 | - Settings page for changing application preferences and gesture mappings
64 |
65 | ## Extra
66 |
67 | This project is managed by Aggie Coding Club.
68 |
--------------------------------------------------------------------------------
/assets/readme/blank.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/assets/readme/blank.png
--------------------------------------------------------------------------------
/assets/readme/main.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/assets/readme/main.png
--------------------------------------------------------------------------------
/flask/README.md:
--------------------------------------------------------------------------------
1 | (TO BE UPDATED)
2 |
3 | # Hand Gesture Recognition
4 |
5 | ## Dependencies
6 |
7 | Dependencies can be installed using: `pip install -r requirements.txt`
8 |
9 | ## Using Gesture Recognition
10 |
11 | ### Setup
12 |
13 | `camera_index` - The default is 0; if your camera is not recognized, try setting the index to 1
14 |
15 | ### Commands
16 |
17 | `python HandTracker.py`
18 |
19 | - Uses computer's default camera to detect hand gestures
20 |
21 | `python HandTracker.py -m anchorMouse`
22 |
23 | - Uses gesture recognition to control the mouse through an anchor point (`-m absoluteMouse` instead maps the hand position directly to the screen)
24 |
25 | ## Using features
26 |
27 | ### Basics
28 |
29 | - Keep your palm facing the camera when making hand gestures
30 |
31 | ### Mouse Control
32 |
33 | - Start HandTracker.py in mouse control mode with the `-m anchorMouse` flag
34 | - The mouse control currently works on an anchor system
35 | - When the camera sees a "Thumbs Up", it sets that position as the anchor
36 | - Keep a thumbs up and move your hand away from the anchor to see the mouse move.
37 | - Change to a "Fist" when you want to click
38 |
39 | ## Notes for Developers
40 |
41 | ### Averaging Frames
42 |
43 | - `frames_until_change`: the number of consecutive frames to look at before determining a hand gesture (all of them must match). Note the following:
44 |   - With a higher `frames_until_change`, the script may be slow to recognize a gesture.
45 |   - With a lower `frames_until_change`, the script will respond quickly, but may misclassify gestures.
46 |
47 | ### Adding / Editing Gestures
48 |
49 | - When adding or editing a gesture in the `gesture` function, keep the following in mind (see the sketch below):
50 | - `f[0]` = thumb, `f[1]` = index, `f[2]` = middle, `f[3]` = ring, `f[4]` = pinky
51 |
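As an illustration of the convention above, a new branch added to the `gesture` function might look like the following. This exact gesture is hypothetical and not part of the current code; a positive value means the finger is open, a negative value means it is closed:

```python
    # Hypothetical example: only the pinky is extended
    elif f[0] < 0 and f[1] < 0 and f[2] < 0 and f[3] < 0 and f[4] > 0:
        return "pinkyup"
```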
--------------------------------------------------------------------------------
/flask/app.py:
--------------------------------------------------------------------------------
1 | from flask import Flask
2 | from flask_sqlalchemy import SQLAlchemy
3 | from flask_cors import CORS
4 |
5 | db = SQLAlchemy()
6 |
7 | def create_app():
8 | app = Flask(__name__)
9 | CORS(app)
10 | app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///database.db'
11 | app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
12 |
13 | db.init_app(app)
14 |
15 | from webcam import feed
16 | from config import cf
17 | app.register_blueprint(feed)
18 | app.register_blueprint(cf)
19 |
20 | return app
21 |
22 | if __name__ == "__main__":
23 | app = create_app()
24 | app.run(port=5000)
--------------------------------------------------------------------------------
/flask/config.py:
--------------------------------------------------------------------------------
1 | from flask import Blueprint, jsonify, request
2 | from app import db
3 | from model import Configuration
4 |
5 | cf = Blueprint("config", __name__, url_prefix="/config")
6 |
7 | @cf.route("add_configuration", methods=["POST"])
8 | def addConfiguration():
9 | configData = request.get_json()
10 |
11 | newConfiguration = Configuration(
12 | id=configData["id"],
13 | hand=configData["hand"],
14 | gesture=configData["gesture"],
15 | action=configData["action"],
16 | alias=configData["alias"],
17 | )
18 |
19 | db.session.add(newConfiguration)
20 | db.session.commit()
21 |
22 | return "Added", 201
23 |
24 |
25 | @cf.route("/retrieve")
26 | def retrieve():
27 | configQuery = Configuration.query.all()
28 | configData = []
29 |
30 | for configuration in configQuery:
31 | if not(configuration.id):
32 | configuration.id = "none"
33 | configData.append({ "hand" : configuration.hand,
34 | "gesture" : configuration.gesture,
35 | "action" : configuration.action,
36 | "alias" : configuration.alias,
37 | "id": configuration.id})
38 |
39 | return jsonify({"config" : configData})
40 |
41 |
42 | @cf.route("update_configuration", methods=["POST"])
43 | def updateConfiguration():
44 | configData = request.get_json()
45 |
46 | configuration = Configuration.query.filter_by(alias=configData["alias"]).first()
47 | configuration.gesture = configData["gesture"]
48 | db.session.commit()
49 |
50 | return "Updated", 201
51 |
52 |
53 |
54 |
55 |
56 |
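# Example requests against this blueprint (assuming the app from app.py is running on
# http://localhost:5000; field names mirror the Configuration model and the values
# below are illustrative only):
#   POST /config/add_configuration     {"id": 1, "hand": "Right", "gesture": "thumbsup",
#                                       "action": "https://github.com", "alias": "Open GitHub"}
#   GET  /config/retrieve
#   POST /config/update_configuration  {"alias": "Open GitHub", "gesture": "fist"}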
--------------------------------------------------------------------------------
/flask/database.db:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/flask/database.db
--------------------------------------------------------------------------------
/flask/model.py:
--------------------------------------------------------------------------------
1 | from app import db
2 |
3 | class Configuration(db.Model):
4 | id = db.Column(db.Integer, primary_key=True)
5 | hand = db.Column(db.String(10))
6 | gesture = db.Column(db.String(20))
7 | action = db.Column(db.String(60))
8 | alias = db.Column(db.String(30))
--------------------------------------------------------------------------------
/flask/recognition/Actions.py:
--------------------------------------------------------------------------------
1 | from pymitter import EventEmitter
2 | import webbrowser
3 | import os, getpass
4 | from ctypes import cast, POINTER
5 | from comtypes import CLSCTX_ALL
6 | from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume
7 |
8 | event = EventEmitter()
9 | username = getpass.getuser()
10 |
11 | devices = AudioUtilities.GetSpeakers()
12 | interface = devices.Activate(
13 | IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
14 |
15 | volume = cast(interface, POINTER(IAudioEndpointVolume))
16 |
17 | # start event, when the gesture is first made
18 | # :hand: left or right
19 | # :gest: the gesture
20 | @event.on("start")
21 | def openProject(configData, hand, gest):
22 | for x in configData:
23 | if (x["gesture"] == gest and x["hand"] == hand):
24 | action = x["action"]
25 | alias = x["alias"]
26 | if (action == 'x'): return
27 | elif action.startswith("http"):
28 | webbrowser.open(action)
29 | elif alias == "Open Chrome":
30 | os.startfile(action)
31 | elif action.startswith("C://"):
32 | try:
33 | os.startfile(action)
34 |                 except OSError:
35 |                     print("Could not open", action, "- falling back to the user directory")
36 |                     os.startfile("C://Users//" + username + action)
37 | elif alias.startswith("Volume"):
38 | # current = volume.GetMasterVolumeLevel()
39 | if (alias[7] == 'M'):
40 | volume.SetMute(1, None)
41 | else:
42 | volume.SetMute(0, None)
43 |
--------------------------------------------------------------------------------
/flask/recognition/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/flask/recognition/__init__.py
--------------------------------------------------------------------------------
/flask/recognition/detection.py:
--------------------------------------------------------------------------------
1 | # Using python version 3.7.9 for media-pipe
2 | import cv2
3 | import mediapipe as mp
4 | import numpy as np
5 | import pyautogui
6 | import math
7 |
8 | import recognition.Actions as Actions
9 |
10 |
11 | settings = {
12 | "camera_index": 0, # 0 should be the default for built in cameras. If this doesn't work, try 1.
13 | }
14 |
15 | if (settings["camera_index"] == 0):
16 | cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
17 | else:
18 | cap = cv2.VideoCapture(1)
19 |
20 |
21 | switch = False
22 |
23 | if cap is None or not cap.isOpened():
24 | pyautogui.alert('Your camera is unavailable. Try to fix this issue and try again!', 'Error')
25 |
26 | # restricting webcam size / frame rate
27 | cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
28 | cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
29 | cap.set(cv2.CAP_PROP_FPS, 60)
30 |
31 |
32 | # Number of consecutive frames a gesture has to be detected before it changes
33 | # Lower the number the faster it changes, but the more jumpy it is
34 | # Higher the number the slower it changes, but the less jumpy it is
35 | frames_until_change = 10
36 | prevGestures = [] # gestures calculated in previous frames
37 |
38 | # Getting media-pipe ready
39 | mpHands = mp.solutions.hands
40 | hands = mpHands.Hands(static_image_mode=False, max_num_hands=2, min_detection_confidence=0.5, min_tracking_confidence=0.5)
41 | mpDraw = mp.solutions.drawing_utils
42 |
43 |
44 | def dotProduct(v1, v2):
45 | return v1[0]*v2[0] + v1[1]*v2[1]
46 |
47 | def normalize(v):
48 | mag = np.sqrt(v[0] ** 2 + v[1] ** 2)
49 | v[0] = v[0] / mag
50 | v[1] = v[1] / mag
51 | return v
52 |
53 | def angle_between(a,b,c):
54 | '''
55 | Gets angle ABC from points
56 |
57 | cos(theta) = (u*v)/ (|u| |v|)
58 | '''
59 | BA = (a.x - b.x, a.y-b.y, a.z-b.z)
60 | BC = (c.x - b.x, c.y-b.y, c.z-b.z)
61 |
62 | dot = BA[0] * BC[0] + BA[1] * BC[1] + BA[2] * BC[2]
63 | BA_mag = math.sqrt(BA[0]**2 + BA[1]**2 + BA[2]**2)
64 | BC_mag = math.sqrt(BC[0]**2 + BC[1]**2 + BC[2]**2)
65 |
66 | angle = math.acos(dot/(BA_mag*BC_mag))
67 | return angle
68 |
69 | def gesture(f, hand):
70 | """
71 | Uses the open fingers list to recognize gestures
72 | :param f: list of open fingers (+ num) and closed fingers (- num)
73 | :param hand: hand information
74 | :return: string representing the gesture that is detected
75 | """
76 |
77 | if f[1] > 0 and f[2] < 0 and f[3] < 0 and f[4] > 0:
78 | index_tip = hand.landmark[8]
79 | index_base = hand.landmark[5]
80 | if index_tip.y > index_base.y: # Y goes from top to bottom instead of bottom to top
81 | return "Horns Down"
82 | elif f[0] > 0:
83 | return "rockandroll"
84 | else:
85 | return "No Gesture"
86 | elif f[0] > 0 and (f[1] < 0 and f[2] < 0 and f[3] < 0 and f[4] < 0):
87 | thumb_tip = hand.landmark[4]
88 | thumb_base = hand.landmark[2]
89 | if thumb_tip.y < thumb_base.y: # Y goes from top to bottom instead of bottom to top
90 | return "thumbsup"
91 | else:
92 | return "thumbsdown"
93 | elif f[0] < 0 and f[1] > 0 and f[2] < 0 and (f[3] < 0 and f[4] < 0):
94 | return "onefinger"
95 | elif f[0] < 0 and f[1] > 0 and f[2] > 0 and (f[3] < 0 and f[4] < 0):
96 | return "twofinger"
97 | elif f[0] > 0 and f[1] > 0 and f[2] > 0 and f[3] > 0 and f[4] > 0:
98 | mid_tip = hand.landmark[12]
99 | ring_tip = hand.landmark[16]
100 | wrist = hand.landmark[0]
101 | if angle_between(mid_tip, wrist, ring_tip) > 0.3:
102 | return 'Vulcan Salute'
103 | else:
104 | return "openhand"
105 | elif f[0] < 0 and f[1] < 0 and f[2] < 0 and f[3] < 0 and f[4] < 0:
106 | return "fist"
107 | elif f[0] < 0 and f[1] > 0 and f[2] > 0 and f[3] > 0 and f[4] > 0:
108 | return "fourfinger"
109 | elif f[0] < 0 and f[1] > 0 and f[2] > 0 and f[3] > 0 and f[4] < 0:
110 | return "threefinger"
111 | else:
112 | return "No Gesture"
113 |
114 | def straightFingers(hand, img):
115 | """
116 | Calculates which fingers are open and which fingers are closed
117 | :param hand: media-pipe object of the hand
118 | :param img: frame with the hand in it
119 | :return: list of open (+ num) and closed (- num) fingers
120 | """
121 | fingerTipIDs = [4, 8, 12, 16, 20] # list of the id's for the finger tip landmarks
122 | openFingers = []
123 |     lms = hand.landmark # 2d list of all 21 landmarks with their respective x and y coordinates
124 |
125 | mpDraw.draw_landmarks(img, hand, connections=mpHands.HAND_CONNECTIONS, connection_drawing_spec=mpDraw.DrawingSpec(color=(255,0,0)))
126 |
127 | for id in fingerTipIDs:
128 | if id == 4: # This is for the thumb calculation, because it works differently than the other fingers
129 | x2, y2 = lms[id].x, lms[id].y # x, and y of the finger tip
130 | x1, y1 = lms[id-2].x, lms[id-2].y # x, and y of the joint 2 points below the finger tip
131 | x0, y0 = lms[0].x, lms[0].y # x, and y of the wrist
132 | fv = [x2-x1, y2-y1] # joint to finger tip vector
133 | fv = normalize(fv)
134 | pv = [x1-x0, y1-y0] # wrist to joint vector
135 | pv = normalize(pv)
136 |
137 | thumb = dotProduct(fv, pv)
138 | # Thumb that is greater than 0, but less than .65 is typically
139 | # folded across the hand, which should be calculated as "down"
140 | if thumb > .65:
141 | openFingers.append(thumb) # Calculates if the finger is open or closed
142 | else:
143 | openFingers.append(-1)
144 |
145 | else: # for any other finger (not thumb)
146 | x2, y2 = lms[id].x, lms[id].y # x, and y of the finger tip
147 | x1, y1 = lms[id-2].x, lms[id-2].y # x, and y of the joint 2 points below the finger tip
148 | x0, y0 = lms[0].x, lms[0].y # x, and y of the wrist
149 | fv = [x2-x1, y2-y1] # joint to finger tip vector
150 | fv = normalize(fv)
151 | pv = [x1-x0, y1-y0] # wrist to joint vector
152 | pv = normalize(pv)
153 | openFingers.append(dotProduct(fv, pv)) # Calculates if the finger is open or closed
154 |
155 | return openFingers
156 |
157 | def getHand(handedness):
158 | '''
159 | Mediapipe assumes that the camera is mirrored
160 | :param handedness: media-pipe object of handedness
161 | :return: string that is 'Left' or 'Right'
162 | '''
163 | hand = handedness.classification[0].label
164 |
165 | if(hand == 'Left'):
166 | return 'Right'
167 | else:
168 | return 'Left'
169 |
170 | def gen_video(configData):
171 | global switch
172 | # reopens camera after release
173 | if (settings["camera_index"] == 0):
174 |         cap.open(0, cv2.CAP_DSHOW)
175 |     else:
176 |         cap.open(1)
177 |
178 | prevGests = {
179 | "right": [],
180 | "left": [],
181 | }
182 | currGests = {
183 | "right": None,
184 | "left": None,
185 | }
186 |
187 | while True:
188 | """
189 | Main code loop
190 | """
191 |
192 | if switch:
193 | if (settings["camera_index"] == 0):
194 | cap.open(1)
195 | settings["camera_index"] = 1
196 | if cap.read()[1] is None:
197 | cap.open(0, cv2.CAP_DSHOW)
198 | settings["camera_index"] = 0
199 | else:
200 | cap.open(0, cv2.CAP_DSHOW)
201 | settings["camera_index"] = 0
202 |
203 | switch = False
204 |
205 | success, img = cap.read()
206 |
207 | if img is None:
208 | print("Video ended. Closing.")
209 | break
210 |
211 | imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
212 | results = hands.process(imgRGB)
213 |
214 | # if there are hands in frame, calculate which fingers are open and draw the landmarks for each hand
215 | if results.multi_hand_landmarks:
216 | gestures = {}
217 |
218 | for handLms, handedness in zip(results.multi_hand_landmarks, results.multi_handedness):
219 | fingers = straightFingers(handLms, img)
220 | hand = getHand(handedness)
221 | if hand == "Left":
222 | gestures['left'] = gesture(fingers, handLms)
223 | else:
224 | gestures['right'] = gesture(fingers, handLms)
225 |
226 | for hand in ['left', 'right']:
227 | if not hand in gestures:
228 | continue
229 |             # if the gesture differs from the current one and all of the last
230 |             # frames_until_change gestures agree with the new gesture, trigger the action
231 | if(gestures[hand] != currGests[hand] and all(x == gestures[hand] for x in prevGests[hand])):
232 | Actions.event.emit("start", configData=configData, hand=hand, gest=gestures[hand])
233 | currGests[hand] = gestures[hand]
234 |
235 |             # keep only the last frames_until_change gestures
236 | prevGests[hand].append(gestures[hand])
237 | prevGests[hand] = prevGests[hand][-frames_until_change:]
238 |
239 | ret, buffer = cv2.imencode('.jpg', img)
240 | frame = buffer.tobytes()
241 | yield (b'--frame\r\n'
242 | b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
243 |
244 |
245 | if cv2.waitKey(1) == 27:
246 | break
247 |
248 | def gen_off():
249 | img = cv2.imread("../frontend/src/assets/loading.png", 1)
250 | ret, buffer = cv2.imencode('.jpg', img)
251 | frame = buffer.tobytes()
252 | yield (b'--frame\r\n'
253 | b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
254 | cap.release()
255 |
256 |
257 | def switchWebcam():
258 | global switch
259 | if switch:
260 | switch = False
261 | else:
262 | switch = True
263 |
264 |
265 |
--------------------------------------------------------------------------------
/flask/recognition/not_in_use/Emitter.py:
--------------------------------------------------------------------------------
1 | from pymitter import EventEmitter
2 | import Actions
3 | import MultiGesture
4 |
5 | event = EventEmitter()
6 |
7 | # event.emit("start", hand="test", gest="test")
--------------------------------------------------------------------------------
/flask/recognition/not_in_use/HandTracker.py:
--------------------------------------------------------------------------------
1 | # Using python version 3.7.9 for media-pipe
2 | import cv2
3 | import mediapipe as mp
4 | import numpy as np
5 | import pyautogui
6 | import time
7 | import argparse
8 | import config
9 | import math
10 |
11 | from Emitter import event
12 |
13 | # Getting openCV ready
14 | cap = cv2.VideoCapture(config.settings["camera_index"])
15 | #Camera detection
16 | if cap is None or not cap.isOpened():
17 | pyautogui.alert('Your camera is unavailable. Try to fix this issue and try again!', 'Error')
18 | # Dimensions of the camera output window
19 | wCam = int(cap.get(3))
20 | hCam = int(cap.get(4))
21 |
22 | # For testing, write output to video
23 | #out = cv2.VideoWriter('output.mp4',cv2.VideoWriter_fourcc('M','J','P','G'), 30, (wCam,hCam))
24 |
25 | # Number of consecutive frames a gesture has to be detected before it changes
26 | # Lower the number the faster it changes, but the more jumpy it is
27 | # Higher the number the slower it changes, but the less jumpy it is
28 | frames_until_change = 3
29 | prevGestures = [] # gestures calculated in previous frames
30 |
31 | # Getting media-pipe ready
32 | mpHands = mp.solutions.hands
33 | hands = mpHands.Hands(min_detection_confidence=.7)
34 | mpDraw = mp.solutions.drawing_utils
35 |
36 | # Vars used to calculate avg fps
37 | prevTime = 0
38 | currTime = 0
39 | fpsList = []
40 |
41 | # Mouse movement anchor
42 | mouseAnchor = [-1,-1]
43 | wristPositionHistory = []
44 | pyautogui.PAUSE = 0
45 | pyautogui.FAILSAFE = False
46 |
47 | screenWidth, screenHeight = pyautogui.size()
48 |
49 | def parse_arguments():
50 | """Parses Arguments
51 | -m: mode that gesture will be recognized for
52 | """
53 | # Setting up the argument parser
54 | p = argparse.ArgumentParser(description='Used to parse options for hand tracking')
55 |
56 | # -v flag is the path to the video, -m flag is the background subtraction method
57 | p.add_argument('-m', type=str, help='The mode that the recognition will control for (ie. mouse)')
58 |
59 | return p.parse_args()
60 |
61 | def dotProduct(v1, v2):
62 | return v1[0]*v2[0] + v1[1]*v2[1]
63 |
64 | def normalize(v):
65 | mag = np.sqrt(v[0] ** 2 + v[1] ** 2)
66 | v[0] = v[0] / mag
67 | v[1] = v[1] / mag
68 | return v
69 |
70 | def angle_between(a,b,c):
71 | '''
72 | Gets angle ABC from points
73 |
74 | cos(theta) = (u*v)/ (|u| |v|)
75 | '''
76 | BA = (a.x - b.x, a.y-b.y, a.z-b.z)
77 | BC = (c.x - b.x, c.y-b.y, c.z-b.z)
78 |
79 | dot = BA[0] * BC[0] + BA[1] * BC[1] + BA[2] * BC[2]
80 | BA_mag = math.sqrt(BA[0]**2 + BA[1]**2 + BA[2]**2)
81 | BC_mag = math.sqrt(BC[0]**2 + BC[1]**2 + BC[2]**2)
82 |
83 | angle = math.acos(dot/(BA_mag*BC_mag))
84 | return angle
85 |
86 | def gesture(f, hand):
87 | """
88 | Uses the open fingers list to recognize gestures
89 | :param f: list of open fingers (+ num) and closed fingers (- num)
90 | :param hand: hand information
91 | :return: string representing the gesture that is detected
92 | """
93 |
94 | if f[1] > 0 and f[2] < 0 and f[3] < 0 and f[4] > 0:
95 | index_tip = hand.landmark[8]
96 | index_base = hand.landmark[5]
97 | if index_tip.y > index_base.y: # Y goes from top to bottom instead of bottom to top
98 | return "Horns Down"
99 | elif f[0] > 0:
100 | return "Rock and Roll"
101 | else:
102 | return "No Gesture"
103 | elif f[0] > 0 and (f[1] < 0 and f[2] < 0 and f[3] < 0 and f[4] < 0):
104 | thumb_tip = hand.landmark[4]
105 | thumb_base = hand.landmark[2]
106 | if thumb_tip.y < thumb_base.y: # Y goes from top to bottom instead of bottom to top
107 | return "Gig Em"
108 | else:
109 | return "Thumbs Down"
110 | elif f[0] < 0 and f[1] > 0 and f[2] < 0 and (f[3] < 0 and f[4] < 0):
111 | return "1 finger"
112 | elif f[0] < 0 and f[1] > 0 and f[2] > 0 and (f[3] < 0 and f[4] < 0):
113 | return "Peace"
114 | elif f[0] > 0 and f[1] > 0 and f[2] > 0 and f[3] > 0 and f[4] > 0:
115 | mid_tip = hand.landmark[12]
116 | ring_tip = hand.landmark[16]
117 | wrist = hand.landmark[0]
118 | if angle_between(mid_tip, wrist, ring_tip) > 0.3:
119 | return 'Vulcan Salute'
120 | else:
121 | return "Open Hand"
122 | elif f[0] < 0 and f[1] < 0 and f[2] < 0 and f[3] < 0 and f[4] < 0:
123 | return "Fist"
124 | elif f[0] < 0 and f[1] > 0 and f[2] > 0 and f[3] > 0 and f[4] > 0:
125 | return "4 fingers"
126 | elif f[0] < 0 and f[1] > 0 and f[2] > 0 and f[3] > 0 and f[4] < 0:
127 | return "3 fingers"
128 | else:
129 | return "No Gesture"
130 |
131 | def calcFPS(pt, ct, framelist):
132 | fps = 1 / (ct - pt)
133 | if len(framelist) < 30:
134 | framelist.append(fps)
135 | else:
136 | framelist.append(fps)
137 | framelist.pop(0)
138 | return framelist
139 |
140 | def findLandMarks(img):
141 | """
142 | Draws the landmarks on the hand (not being used currently)
143 | :param img: frame with the hand in it
144 | :return:
145 | """
146 | imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
147 | hands = mpHands.Hands()
148 | pHands = hands.process(imgRGB)
149 |
150 | if pHands.multi_hand_landmarks:
151 | for handlms in pHands.multi_hand_landmarks:
152 | # mpDraw.draw_landmarks(img, handlms, mpHands.HAND_CONNECTIONS)
153 | mpDraw.draw_landmarks(img, handlms)
154 |
155 | def straightFingers(hand, img):
156 | """
157 | Calculates which fingers are open and which fingers are closed
158 | :param hand: media-pipe object of the hand
159 | :param img: frame with the hand in it
160 | :return: list of open (+ num) and closed (- num) fingers
161 | """
162 | fingerTipIDs = [4, 8, 12, 16, 20] # list of the id's for the finger tip landmarks
163 | openFingers = []
164 |     lms = hand.landmark # 2d list of all 21 landmarks with their respective x and y coordinates
165 |
166 | # Draws the blue part
167 | palm_connections = filter(lambda x: x[1] in [0,1,2,5,6,9,10,13,14,17,18], mpHands.HAND_CONNECTIONS)
168 | mpDraw.draw_landmarks(img,hand, connections=palm_connections, connection_drawing_spec=mpDraw.DrawingSpec(color=(255,0,0)))
169 |
170 | for id in fingerTipIDs:
171 | if id == 4: # This is for the thumb calculation, because it works differently than the other fingers
172 | x2, y2 = lms[id].x, lms[id].y # x, and y of the finger tip
173 | x1, y1 = lms[id-2].x, lms[id-2].y # x, and y of the joint 2 points below the finger tip
174 | x0, y0 = lms[0].x, lms[0].y # x, and y of the wrist
175 | fv = [x2-x1, y2-y1] # joint to finger tip vector
176 | fv = normalize(fv)
177 | pv = [x1-x0, y1-y0] # wrist to joint vector
178 | pv = normalize(pv)
179 |
180 | thumb = dotProduct(fv, pv)
181 | # Thumb that is greater than 0, but less than .65 is typically
182 | # folded across the hand, which should be calculated as "down"
183 | if thumb > .65:
184 | openFingers.append(thumb) # Calculates if the finger is open or closed
185 | else:
186 | openFingers.append(-1)
187 |
188 | # Code below draws the two vectors from above
189 | cx, cy = int(lms[id].x * wCam), int(lms[id].y * hCam)
190 | cx2, cy2 = int(lms[id-2].x * wCam), int(lms[id-2].y * hCam)
191 | cx0, cy0 = int(lms[0].x * wCam), int(lms[0].y * hCam)
192 | finger_connections = filter(lambda x: id-2 <= x[0] and x[0] <= id, mpHands.HAND_CONNECTIONS) # gets the connections only for the thumb
193 | if dotProduct(fv, pv) >= .65:
194 | mpDraw.draw_landmarks(img,hand, connections=finger_connections)
195 | else:
196 | mpDraw.draw_landmarks(img,hand, connections=finger_connections, connection_drawing_spec=mpDraw.DrawingSpec(color=(0,0,255)))
197 |
198 | else: # for any other finger (not thumb)
199 | x2, y2 = lms[id].x, lms[id].y # x, and y of the finger tip
200 | x1, y1 = lms[id-2].x, lms[id-2].y # x, and y of the joint 2 points below the finger tip
201 | x0, y0 = lms[0].x, lms[0].y # x, and y of the wrist
202 | fv = [x2-x1, y2-y1] # joint to finger tip vector
203 | fv = normalize(fv)
204 | pv = [x1-x0, y1-y0] # wrist to joint vector
205 | pv = normalize(pv)
206 | openFingers.append(dotProduct(fv, pv)) # Calculates if the finger is open or closed
207 |
208 | # Code below draws the two vectors from above
209 | cx, cy = int(lms[id].x * wCam), int(lms[id].y * hCam)
210 | cx2, cy2 = int(lms[id-2].x * wCam), int(lms[id-2].y * hCam)
211 | cx0, cy0 = int(lms[0].x * wCam), int(lms[0].y * hCam)
212 |
213 | # Connections from tip to first knuckle from base
214 | finger_connections = [(id-1, id),
215 | (id-2, id-1)]
216 | if dotProduct(fv, pv) >= 0:
217 | mpDraw.draw_landmarks(img,hand, connections=finger_connections)
218 | else:
219 | mpDraw.draw_landmarks(img,hand, connections=finger_connections, connection_drawing_spec=mpDraw.DrawingSpec(color=(0,0,255)))
220 | # cv2.circle(img, (cx, cy), 15, (255, 0, 255), cv2.FILLED)
221 | return openFingers
222 |
223 | def getHand(handedness):
224 | '''
225 | Mediapipe assumes that the camera is mirrored
226 | :param handedness: media-pipe object of handedness
227 | :return: string that is 'Left' or 'Right'
228 | '''
229 | hand = handedness.classification[0].label
230 |
231 | if(hand == 'Left'):
232 | return 'Right'
233 | else:
234 | return 'Left'
235 |
236 | #Handles entering and exiting mouse-movement mode and also handles mouse clicks
237 | def mouseModeHandler(detectedHand, currGests, gestures, results, mouseHand):
238 | # Enters mouse movement mode on Gig Em gesture, setting a mouse anchor point at that position
239 | if(detectedHand == mouseHand and currGests[detectedHand] != "Gig Em" and currGests[detectedHand] != "Fist" and gestures[detectedHand] == "Gig Em"):
240 | print("Entering mouse mode at (" + str(results.multi_hand_landmarks[0].landmark[0].x) + ", " + str(results.multi_hand_landmarks[0].landmark[0].y) + ")")
241 | return [results.multi_hand_landmarks[0].landmark[0].x, results.multi_hand_landmarks[0].landmark[0].y]
242 |
243 | # Leave mouse mode when gesture isn't Gig Em or fist anymore
244 |     if (detectedHand == mouseHand and (currGests[detectedHand] == "Gig Em" or currGests[detectedHand] == "Fist") and gestures[detectedHand] != "Fist" and gestures[detectedHand] != "Gig Em"):
245 | print("Exiting mouse mode.")
246 | return [-1,-1]
247 |
248 | # Clicks the mouse upon a fist gesture while in mouse-movement mode
249 | if(detectedHand == mouseHand and currGests[detectedHand] == "Gig Em" and gestures[detectedHand] == "Fist"):
250 | pyautogui.click()
251 | print("Click!")
252 |
253 | return mouseAnchor
254 |
255 | #Moves the mouse
256 | #anchorMouse mode: While in mouse-movement mode (a.k.a. when mouseAnchor isn't [-1,-1]), when distance from mouse anchor point is far enough, start moving the mouse in that direction.
257 | #absoluteMouse mode: Moves mouse proportionately to screen size.
258 | def moveMouse(results):
259 | if(args.m == 'anchorMouse'):
260 | if(mouseAnchor != [-1,-1] and ((results.multi_hand_landmarks[0].landmark[0].x - mouseAnchor[0])**2 + (results.multi_hand_landmarks[0].landmark[0].y - mouseAnchor[1])**2)**0.5 > 0.025):
261 | pyautogui.moveTo(pyautogui.position()[0] - ((results.multi_hand_landmarks[0].landmark[0].x - mouseAnchor[0])*abs(results.multi_hand_landmarks[0].landmark[0].x - mouseAnchor[0])*1000), pyautogui.position()[1] + (((results.multi_hand_landmarks[0].landmark[0].y - mouseAnchor[1])*abs(results.multi_hand_landmarks[0].landmark[0].y - mouseAnchor[1]))*1000))
262 |
263 | if(args.m == 'absoluteMouse' and mouseAnchor != [-1,-1]):
264 | if(len(wristPositionHistory) == 10):
265 | wristPositionHistory.pop(0)
266 | wristPositionHistory.append((results.multi_hand_landmarks[0].landmark[0].x, results.multi_hand_landmarks[0].landmark[0].y))
267 | else:
268 | wristPositionHistory.append((results.multi_hand_landmarks[0].landmark[0].x, results.multi_hand_landmarks[0].landmark[0].y))
269 |
270 | avgx = 0
271 | avgy = 0
272 |
273 | for i in wristPositionHistory:
274 | avgx += i[0]
275 | avgy += i[1]
276 |
277 | avgx /= len(wristPositionHistory)
278 | avgy /= len(wristPositionHistory)
279 |
280 | pyautogui.moveTo(-(avgx - 0.5)*2*screenWidth + screenWidth/2, (avgy - 0.5)*2*screenHeight + screenHeight/2)
281 |
282 |
283 | # Preparing arguments for main
284 | args = parse_arguments() # parsing arguments
285 |
286 | prevGests = {
287 | "right": [],
288 | "left": [],
289 | }
290 | currGests = {
291 | "right": None,
292 | "left": None,
293 | }
294 | frame_count = 0
295 | while True:
296 | """
297 | Main code loop
298 | """
299 | # Gets the image from openCV and gets the hand data from media-pipe
300 | success, img = cap.read()
301 |
302 | # If there are no more frames, break loop
303 | if img is None:
304 | print("Video ended. Closing.")
305 | break
306 |
307 | imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
308 | results = hands.process(imgRGB)
309 |
310 | leftPrevGestures = []
311 | rightPrevGestures = []
312 | # if there are hands in frame, calculate which fingers are open and draw the landmarks for each hand
313 | if results.multi_hand_landmarks:
314 | gestures = {}
315 |
316 | for handLms, handedness in zip(results.multi_hand_landmarks, results.multi_handedness):
317 | fingers = straightFingers(handLms, img)
318 | hand = getHand(handedness)
319 | if hand == "Left":
320 | gestures['left'] = gesture(fingers, handLms)
321 | else:
322 | gestures['right'] = gesture(fingers, handLms)
323 | frame_count += 1
324 | #mpDraw.draw_landmarks(img, handLms, mpHands.HAND_CONNECTIONS)
325 | mpDraw.draw_landmarks(img, handLms)
326 |
327 | # print(f"{frame_count}, {gestures}, {len(results.multi_hand_landmarks)}")
328 | for hand in ['left', 'right']:
329 | if not hand in gestures:
330 | continue
331 |
332 | #Moves mouse if in mouse mode
333 | if (args.m == 'anchorMouse' or args.m == 'absoluteMouse'):
334 | moveMouse(results)
335 |
336 | # if gesture is diff from currGesture and the previous 3 gestures are the same as the current gesture
337 | # too much gesture, it is not a word anymore
338 | if(gestures[hand] != currGests[hand] and all(x == gestures[hand] for x in prevGests[hand])):
339 |
340 | print(f'{hand} : {gestures[hand]}')
341 |
342 | if (args.m == 'anchorMouse' or args.m == 'absoluteMouse'):
343 | # Handles mouse-movement mode through mouseModeHandler function
344 | mouseAnchor = mouseModeHandler(hand, currGests, gestures, results, "right")
345 | else:
346 | # event.emit("end", hand=hand, gest=currGests[hand]) ## doesn't do anything yet
347 | event.emit("start", hand=hand, gest=gestures[hand])
348 |
349 | currGests[hand] = gestures[hand]
350 |
351 | # keep only the 3 previous Gestures
352 | prevGests[hand].append(gestures[hand])
353 | prevGests[hand] = prevGests[hand][-frames_until_change:]
354 |
355 | # Used for fps calculation
356 | currTime = time.time()
357 | fpsList = calcFPS(prevTime, currTime, fpsList)
358 | prevTime = currTime
359 |
360 | # Displays the fps
361 | cv2.putText(img, str(int(np.average(fpsList))), (10, 70),
362 | cv2.FONT_HERSHEY_PLAIN, 2, (0, 255, 0), 3)
363 |
364 | cv2.imshow("Video with Hand Detection", img)
365 |
366 | # Used for testing, writing video to output
367 | #out.write(img)
368 |
369 | if cv2.waitKey(1) == 27:
370 | break
371 | cap.release()
372 | cv2.destroyAllWindows()
--------------------------------------------------------------------------------
/flask/recognition/not_in_use/MultiGesture.py:
--------------------------------------------------------------------------------
1 | from Emitter import event
2 | class MultiGesture():
3 | def __init__(self, name, gestures):
4 | self.name = name
5 | self.gestures = gestures
6 | self.on = 0
7 |
8 | def on_start_gest(self, hand, gest):
9 | # print(self.on)
10 | if type(self.gestures[self.on]) == str:
11 | if self.gestures[self.on] == gest: # doesn't matter which hand
12 | # print("Go to next gesture")
13 | self.on += 1
14 | elif self.gestures[self.on] == "No Gesture":
15 | return
16 | elif hand == self.gestures[self.on][0] and gest == self.gestures[self.on][1]: # does check which hand
17 | # print("Go to next gesture")
18 | self.on += 1
19 | elif self.gestures[self.on][0] == "No Gesture":
20 | return
21 | else:
22 | self.on = 0
23 |
24 | if self.on == len(self.gestures):
25 | event.emit("multigesture", gest=self.name)
26 | self.on = 0
27 |
28 | counting = ["1 finger", "Peace", "3 fingers", "4 fingers", "Open Hand"]
29 |
30 | countToFive = MultiGesture("Count to 5", counting)
31 | event.on("start", countToFive.on_start_gest)
32 |
33 | countDown = MultiGesture("Count Down from 5",list(reversed(counting)))
34 | event.on("start", countDown.on_start_gest)
35 | # %%
36 | # m = MultiGesture(["1 finger", "2 finger"], lambda: print("counting"))
37 | # m.on_keydown("left", "1 finger")
38 | # m.on_keydown("right", "2 finger")
39 |
40 | # m.on_keydown("right", "2 finger")
41 |
42 | # m.on_keydown("left", "1 finger")
43 | # m.on_keydown("right", "Thumbs Down")
44 | # m.on_keydown("right", "2 finger")
45 |
46 | # m.on_keydown("left", "1 finger")
47 | # m.on_keydown("right", "2 finger")
48 |
49 | # %%
50 | # m = MultiGesture([("right", "1 finger"), "2 finger"], lambda: print("counting"))
51 | # m.on_keydown("left", "1 finger")
52 | # m.on_keydown("right", "2 finger")
53 |
54 | # m.on_keydown("right", "1 finger")
55 | # m.on_keydown("right", "2 finger")
56 |
57 | # %%
58 | # m = MultiGesture([("right", "1 finger"), "2 finger"], lambda: print("counting"))
59 | # event.on("key down", m.on_keydown)
60 |
61 | # event.emit("key down", hand="right", gest="1 finger")
62 | # event.emit("key down", hand="right", gest="2 finger")
--------------------------------------------------------------------------------
/flask/recognition/not_in_use/config.py:
--------------------------------------------------------------------------------
1 |
2 | '''
3 | ### CONFIGURING ACTIONS ###
4 | Each action is formatted:
5 | "action": ["hand", "gesture", "path_to_executable"]
6 |
7 | Options for hands: Left, Right
8 |
9 | Options for gestures:
10 | 1 finger, Peace, 3 fingers, 4 fingers, Open Hand, Fist, Gig Em, Thumbs Down, Rock and Roll, Horns Down
11 |
12 | There are some examples already written below.
13 | '''
14 | actions = {
15 | "chrome": [
16 | "Right",
17 | "Peace",
18 | "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe"
19 | ],
20 | "Vision-Controls": [
21 | "Right",
22 | "Gig Em",
23 | "https://github.com/aggie-coding-club/Vision-Controls"
24 | ],
25 | "close": [ # Closes application that you are currently on. (Caution: you can close out of this application with this too)
26 | "Right",
27 | "1 finger",
28 | "x"
29 | ]
30 | }
31 |
32 | settings = {
33 | "camera_index": 0, # 0 should be the default for built in cameras. If this doesn't work, try 1.
34 | }
--------------------------------------------------------------------------------
/flask/recognition/not_in_use/mediapipe_hands.py:
--------------------------------------------------------------------------------
1 | # %%
2 | import cv2
3 | import mediapipe as mp
4 | from time import time
5 | import sys
6 | import pickle
7 | import math
8 |
9 | mp_drawing = mp.solutions.drawing_utils
10 | mp_hands = mp.solutions.hands
11 | target = sys.argv[1] if len(sys.argv) > 1 else 0 # use a file as input or the webcam
12 |
13 | # %%
14 | def vec_sub(a,b):
15 | return (a.x-b.x, a.y-b.y,a.z-b.z)
16 |
17 | def vec_dot(a,b):
18 |     return sum(i * j for i, j in zip(a, b))
19 |
20 | def vec_mag(a):
21 | return math.sqrt(sum([i**2 for i in a]))
22 |
23 | def finger_straightness(hand_landmarks, base_knuckle):
24 |     '''Higher values mean a straighter finger; typically ranges from ~3.9 to ~6'''
25 | knuckles = hand_landmarks[base_knuckle:base_knuckle+4] # 4 knuckles in finger
26 | # print(knuckles)
27 | bendyness = 0
28 | for i in range(1,len(knuckles)-1): # loop through list excluding first and last
29 | # cos(theta) = a*b/ |a||b|
30 | # A -> B -> C
31 | # a = BA
32 | # b = BC
33 | a = vec_sub(knuckles[i-1], knuckles[i])
34 | b = vec_sub(knuckles[i+1], knuckles[i])
35 | dot = vec_dot(a,b)
36 | theta = math.acos(dot/ (vec_mag(a) * vec_mag(b)))
37 |
38 | bendyness += theta
39 | return bendyness
40 |
41 | def is_finger_bent(hand_landmarks, base_knuckle):
42 | straightness = finger_straightness(hand_landmarks, base_knuckle)
43 | return straightness < 6
44 | # img_hand_detect(['../../test-vids/piece_sign.png'])
45 | # %%
46 | def img_hand_detect(file_list):
47 | # For static images:
48 | with mp_hands.Hands(
49 | static_image_mode=True,
50 | max_num_hands=2,
51 | min_detection_confidence=0.5) as hands:
52 | for idx, file in enumerate(file_list):
53 | # Read an image, flip it around y-axis for correct handedness output (see
54 | # above).
55 | image = cv2.flip(cv2.imread(file), 1)
56 | # Convert the BGR image to RGB before processing.
57 | results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
58 |
59 | # Print handedness and draw hand landmarks on the image.
60 | print('Handedness:', results.multi_handedness)
61 | if not results.multi_hand_landmarks:
62 | continue
63 | image_height, image_width, _ = image.shape
64 | annotated_image = image.copy()
65 | for hand_landmarks in results.multi_hand_landmarks:
66 | print('hand_landmarks:', hand_landmarks)
67 | # print('Finger straightness: ', finger_straightness(hand_landmarks.landmark, mp_hands.HandLandmark.INDEX_FINGER_MCP))
68 | print('Index bent:', is_finger_bent(hand_landmarks.landmark, mp_hands.HandLandmark.INDEX_FINGER_MCP))
69 | print(
70 | f'Index finger tip coordinates: (',
71 | f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].x * image_width}, '
72 | f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y * image_height})'
73 | )
74 | mp_drawing.draw_landmarks(
75 | annotated_image, hand_landmarks, mp_hands.HAND_CONNECTIONS)
76 | cv2.imwrite(
77 | '/tmp/annotated_image' + str(idx) + '.png', cv2.flip(annotated_image, 1))
78 |
79 | # %%
80 | hand_joints = {
81 | 'Index': mp_hands.HandLandmark.INDEX_FINGER_MCP,
82 | 'Middle': mp_hands.HandLandmark.MIDDLE_FINGER_MCP,
83 | 'Ring': mp_hands.HandLandmark.RING_FINGER_MCP,
84 | 'Pinky': mp_hands.HandLandmark.PINKY_MCP,
85 | 'Thumb': mp_hands.HandLandmark.THUMB_CMC,
86 | }
87 | def vid_hand_detect(target):
88 | # For webcam input:
89 | cap = cv2.VideoCapture(target)
90 |
91 | frames = 0
92 | t = 0
93 | start_t = time()
94 | with mp_hands.Hands(
95 | min_detection_confidence=0.5,
96 | min_tracking_confidence=0.5) as hands:
97 | while cap.isOpened():
98 | success, image = cap.read()
99 | if not success:
100 | cv2.destroyAllWindows()
101 | t = time()
102 | # If loading a video, use 'break' instead of 'continue'.
103 | break
104 |
105 | # Flip the image horizontally for a later selfie-view display, and convert
106 | # the BGR image to RGB.
107 | image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
108 | # To improve performance, optionally mark the image as not writeable to
109 | # pass by reference.
110 | image.flags.writeable = False
111 | results = hands.process(image)
112 |
113 | # Draw the hand annotations on the image.
114 | image.flags.writeable = True
115 | image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
116 | if results.multi_hand_landmarks:
117 | for hand_landmarks in results.multi_hand_landmarks:
118 | straight = []
119 | for finger, joint in hand_joints.items():
120 | if not is_finger_bent(hand_landmarks.landmark, joint):
121 | straight.append(finger)
122 | print(' '.join(straight))
123 |
124 | mp_drawing.draw_landmarks(
125 | image, hand_landmarks, mp_hands.HAND_CONNECTIONS)
126 | cv2.imshow('MediaPipe Hands', image)
127 | if cv2.waitKey(5) & 0xFF == 27:
128 | cv2.destroyAllWindows()
129 | t = time()
130 | break
131 | frames += 1
132 | cap.release()
133 |
134 | fps = frames / (t-start_t)
135 | print('FPS: ', fps)
136 |
137 | # %%
138 | vid_hand_detect(target)
139 |
--------------------------------------------------------------------------------
/flask/webcam.py:
--------------------------------------------------------------------------------
1 | from flask import Blueprint, Response
2 | import recognition.detection as dt
3 | from model import Configuration
4 |
5 | feed = Blueprint('video', __name__, url_prefix="/video")
6 |
7 | @feed.route('/feed')
8 | def video_feed():
9 | configQuery = Configuration.query.all()
10 | configData = []
11 |
12 | for configuration in configQuery:
13 | configData.append({ "hand" : configuration.hand,
14 | "gesture" : configuration.gesture,
15 | "action" : configuration.action,
16 | "alias" : configuration.alias})
17 |
18 | return Response(dt.gen_video(configData), mimetype='multipart/x-mixed-replace; boundary=frame')
19 |
20 |
21 | @feed.route('/off')
22 | def off():
23 | return Response(dt.gen_off(), mimetype='multipart/x-mixed-replace; boundary=frame')
24 |
25 |
26 | @feed.route('/switch')
27 | def switch():
28 | dt.switchWebcam()
29 | return "Switched", 200
30 |
31 |
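# Routes exposed by this blueprint (registered under the /video prefix in app.py):
#   GET /video/feed   - MJPEG stream with gesture detection, produced by detection.gen_video()
#   GET /video/off    - single "loading" frame and camera release via detection.gen_off()
#   GET /video/switch - toggles which camera index detection.py captures from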
--------------------------------------------------------------------------------
/frontend/.babelrc:
--------------------------------------------------------------------------------
1 | {
2 | "presets": ["react"]
3 | }
4 |
--------------------------------------------------------------------------------
/frontend/README.md:
--------------------------------------------------------------------------------
1 | # Vision Controls Frontend
2 |
3 |
4 | _Thanks to Keith Weaver for creating the Electron-React boilerplate used here:_
5 | https://github.com/keithweaver
6 |
7 | ### To get started:
8 | * Run `npm install` or `yarn install`
9 |
10 | ##### Development
11 | * Run `npm run dev` to start webpack-dev-server. Electron will launch automatically after compilation.
12 |
13 | ##### Production
14 | _You have two options, an automatic build or two manual steps_
15 |
16 | ###### One Shot
17 | * Run `npm run package` to have webpack compile your application into `dist/bundle.js` and `dist/index.html`, and then an electron-packager run will be triggered for the current platform/arch, outputting to `builds/`
18 |
19 | ###### Manual
20 | _Recommendation: Update the "postpackage" script call in package.json to specify parameters as you choose and use the `npm run package` command instead of running these steps manually_
21 | * Run `npm run build` to have webpack compile and output your bundle to `dist/bundle.js`
22 | * Then you can call electron-packager directly with any commands you choose
23 |
24 | If you want to test the production build (in case you think Babili might be breaking something), run `npm run build` and then call `npm run prod`. This will cause Electron to load from the `dist/` build instead of looking for the webpack-dev-server instance. Electron will launch automatically after compilation.
25 |
--------------------------------------------------------------------------------
/frontend/etc/constants.js:
--------------------------------------------------------------------------------
1 | module.exports = {
2 | CATCH_ON_MAIN: "catch-on-main",
3 | SEND_TO_RENDERER: "send-to-renderer",
4 | CREATE_FILE: "create-file",
5 | BUTTON_CLICK: "button-click",
6 | OPEN_FILE_EXPLORER: 'open-file-explorer',
7 | SEND_FILE_PATH: 'send-file-path',
8 | ADD_FILE_SETTING: 'add-file-setting'
9 | };
10 |
--------------------------------------------------------------------------------
/frontend/etc/helpers.js:
--------------------------------------------------------------------------------
1 | const path = require('path');
2 |
3 | // Helper functions
4 | function root(args) {
5 | args = Array.prototype.slice.call(arguments, 0);
6 | return path.join.apply(path, [__dirname].concat('../', ...args));
7 | }
8 |
9 | exports.root = root;
--------------------------------------------------------------------------------
/frontend/main.js:
--------------------------------------------------------------------------------
1 | "use strict";
2 |
3 | // Import parts of electron to use
4 | const { app, BrowserWindow, ipcMain, dialog } = require("electron");
5 | const path = require("path");
6 | const fs = require("fs");
7 | const url = require("url");
8 | const { SEND_TO_RENDERER, BUTTON_CLICK, OPEN_FILE_EXPLORER, SEND_FILE_PATH, ADD_FILE_SETTING } = require("./etc/constants");
9 |
10 | let mainWindow;
11 |
12 | // Dev mode
13 | let dev = false;
14 | if (process.defaultApp || /[\\/]electron-prebuilt[\\/]/.test(process.execPath) || /[\\/]electron[\\/]/.test(process.execPath)) {
15 | dev = true;
16 | }
17 |
18 | function createWindow() {
19 | // load flask webserver
20 | require("child_process").spawn("python", ["../flask/app.py"]);
21 |
22 | mainWindow = new BrowserWindow({
23 | width: 840,
24 | height: 512,
25 | autoHideMenuBar: true,
26 | frame: false,
27 | resizable: false,
28 | maximizable: false,
29 | fullscreenable: false,
30 | icon: "./src/assets/transparent.ico",
31 | webPreferences: {
32 | nodeIntegration: true,
33 | },
34 | });
35 |
36 | let indexPath;
37 | if (dev && process.argv.indexOf("--noDevServer") === -1) {
38 | indexPath = url.format({
39 | protocol: "http:",
40 | host: "localhost:4000",
41 | pathname: "index.html",
42 | slashes: true,
43 | });
44 | } else {
45 | indexPath = url.format({
46 | protocol: "file:",
47 | pathname: path.join(__dirname, "dist", "index.html"),
48 | slashes: true,
49 | });
50 | }
51 | mainWindow.loadURL(indexPath);
52 |
53 | // Don't show until we are ready and loaded
54 | mainWindow.once("ready-to-show", () => {
55 | mainWindow.show();
56 | // Open the DevTools automatically if developing
57 | // if (dev) {
58 | // mainWindow.webContents.openDevTools();
59 | // }
60 | });
61 |
62 | // Emitted when the window is closed.
63 | mainWindow.on("closed", function () {
64 | // Dereference the window object, usually you would store windows
65 | // in an array if your app supports multi windows, this is the time
66 | // when you should delete the corresponding element.
67 | mainWindow = null;
68 | });
69 | }
70 |
71 | //Catch home button being clicked and send message to console
72 | //...Electron receiving message from React...
73 | ipcMain.on(BUTTON_CLICK, (event, arg) => {
74 | console.log("This button was clicked", arg);
75 | //...Electron sending message to React...
76 | mainWindow.send(SEND_TO_RENDERER, "Button Click received by Electron");
77 | });
78 |
79 | //open file explorer
80 | ipcMain.on(OPEN_FILE_EXPLORER, (event, arg) => {
81 | dialog.showOpenDialog(function (filePaths) {
82 | if (filePaths) {
83 | mainWindow.send(SEND_FILE_PATH, filePaths[0]);
84 | }
85 | });
86 | });
87 |
88 | //add file setting
89 | ipcMain.on(ADD_FILE_SETTING, (event, arg) => {
90 | console.log("add file setting: ", arg);
91 | });
92 |
93 | //...................EXAMPLES................................
94 | // ipcMain.on(CATCH_ON_MAIN, (event, arg) => {
95 | // console.log('this button was clicked', arg);
96 | // mainWindow.send(SEND_TO_RENDERER, 'pong');
97 | // })
98 | //
99 | // ipcMain.on(CREATE_FILE, (event, arg) => {
100 | // console.log("writing file...");
101 | // fs.writeFile('tmp.js', arg, function (err) {
102 | // console.log(err);
103 | // });
104 | // })
105 | //...........................................................
106 |
107 | // This method will be called when Electron has finished
108 | // initialization and is ready to create browser windows.
109 | // Some APIs can only be used after this event occurs.
110 | app.on("ready", createWindow);
111 |
112 | // Quit when all windows are closed.
113 | app.on("window-all-closed", () => {
114 | // On macOS it is common for applications and their menu bar
115 | // to stay active until the user quits explicitly with Cmd + Q
116 | if (process.platform !== "darwin") {
117 | app.quit();
118 | }
119 | });
120 |
121 | app.on("activate", () => {
122 | // On macOS it's common to re-create a window in the app when the
123 | // dock icon is clicked and there are no other windows open.
124 | if (mainWindow === null) {
125 | createWindow();
126 | }
127 | });
128 |
--------------------------------------------------------------------------------
/frontend/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "new-frontend",
3 | "version": "0.7.0",
4 | "description": "Desktop Application that allows the user to control various applications through hand gestures",
5 | "author": "Aggie Coding Club",
6 | "homepage": "https://github.com/aggie-coding-club/Vision-Controls",
7 | "repository": {
8 | "type": "git",
9 | "url": "https://github.com/aggie-coding-club/Vision-Controls"
10 | },
11 | "main": "main.js",
12 | "scripts": {
13 | "prod": "webpack --config webpack.build.config.js && electron --noDevServer .",
14 | "dev": "webpack-dev-server --hot --host 127.0.0.1 --port 4000 --config=./webpack.dev.config.js",
15 | "build": "webpack --config webpack.build.config.js",
16 | "package": "webpack --config webpack.build.config.js",
17 | "postpackage": "electron-packager ./ --out=./builds"
18 | },
19 | "devDependencies": {
20 | "babel-core": "^6.24.1",
21 | "babel-loader": "^7.1.2",
22 | "babel-preset-react": "^6.24.1",
23 | "babili-webpack-plugin": "^0.1.2",
24 | "css-loader": "^0.28.1",
25 | "electron": "^1.7.8",
26 | "electron-packager": "^9.1.0",
27 | "extract-text-webpack-plugin": "^3.0.1",
28 | "file-loader": "^1.1.5",
29 | "html-webpack-plugin": "^2.28.0",
30 | "react": "^16.0.0",
31 | "react-dom": "^16.0.0",
32 | "react-modal": "^3.14.2",
33 | "style-loader": "^0.19.0",
34 | "webpack": "^3.6.0",
35 | "webpack-dev-server": "^2.4.5"
36 | },
37 | "dependencies": {
38 | "react-router-dom": "^5.2.0"
39 | },
40 | "proxy": "http://localhost:5000"
41 | }
42 |
--------------------------------------------------------------------------------
/frontend/public/index.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 | Gesture
8 |
9 |
10 |
11 |
12 |
13 |
--------------------------------------------------------------------------------
/frontend/src/assets/about.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/about.png
--------------------------------------------------------------------------------
/frontend/src/assets/aboutlogo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/aboutlogo.png
--------------------------------------------------------------------------------
/frontend/src/assets/blank.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/blank.png
--------------------------------------------------------------------------------
/frontend/src/assets/cambutton.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/cambutton.png
--------------------------------------------------------------------------------
/frontend/src/assets/camera-off.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/camera-off.png
--------------------------------------------------------------------------------
/frontend/src/assets/close.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/close.png
--------------------------------------------------------------------------------
/frontend/src/assets/gestures/fist.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/gestures/fist.png
--------------------------------------------------------------------------------
/frontend/src/assets/gestures/fourfinger.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/gestures/fourfinger.png
--------------------------------------------------------------------------------
/frontend/src/assets/gestures/onefinger.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/gestures/onefinger.png
--------------------------------------------------------------------------------
/frontend/src/assets/gestures/openhand.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/gestures/openhand.png
--------------------------------------------------------------------------------
/frontend/src/assets/gestures/rockandroll.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/gestures/rockandroll.png
--------------------------------------------------------------------------------
/frontend/src/assets/gestures/threefinger.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/gestures/threefinger.png
--------------------------------------------------------------------------------
/frontend/src/assets/gestures/thumbsdown.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/gestures/thumbsdown.png
--------------------------------------------------------------------------------
/frontend/src/assets/gestures/thumbsup.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/gestures/thumbsup.png
--------------------------------------------------------------------------------
/frontend/src/assets/gestures/twofinger.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/gestures/twofinger.png
--------------------------------------------------------------------------------
/frontend/src/assets/home.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/home.png
--------------------------------------------------------------------------------
/frontend/src/assets/loading.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/loading.png
--------------------------------------------------------------------------------
/frontend/src/assets/min.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/min.png
--------------------------------------------------------------------------------
/frontend/src/assets/profile.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/profile.png
--------------------------------------------------------------------------------
/frontend/src/assets/settings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/settings.png
--------------------------------------------------------------------------------
/frontend/src/assets/switch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/switch.png
--------------------------------------------------------------------------------
/frontend/src/assets/transparent.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/transparent.ico
--------------------------------------------------------------------------------
/frontend/src/assets/webcam.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aggie-coding-club/Gesture/a7c21fdf9f29f3e25a6c166fc67c49560fd2779c/frontend/src/assets/webcam.png
--------------------------------------------------------------------------------
/frontend/src/components/About/AboutLayout.js:
--------------------------------------------------------------------------------
1 | import React from "react";
2 | import SideBar from "./SideBar.js";
3 | import MenuBar from "../MenuBar/MenuBar";
4 | import aboutImage from "../../assets/aboutlogo.png";
5 |
6 | export default function AboutLayout() {
7 | const btnClick = (name) => {
8 | console.log("clicked", name);
9 | };
10 |
11 | const flexContainer = {
12 | display: "flex",
13 | margin: 0,
14 | padding: 0,
15 | backgroundColor: "#ececec",
16 | };
17 |
18 | const sideScreen = {
19 | flex: 1,
20 | };
21 |
22 | const aboutStyle = {
23 | margin: "0vh 1vh 1vh 0",
24 | padding: 0,
25 | flex: 3,
26 | height: "100vh",
27 | };
28 |
29 | const titleStyle = {
30 | textAlign: "center",
31 | color: "#111111",
32 | fontFamily: "Lobster Two",
33 | };
34 |
35 | const textStyle = {
36 | textAlign: "center",
37 | letterSpacing: "0.1em",
38 | color: "#111111",
39 | fontFamily: "Oxygen",
40 | fontWeight: "normal",
41 | fontSize: 15,
42 | margin: "-3vh 4vw 0 3vw",
43 | };
44 |
45 | const imageStyle = {
46 | width: "110vh",
47 | display: "block",
48 | marginTop: "6vh",
49 | marginLeft: "5vw",
50 | };
51 |
52 | return (
53 |
54 |
55 |
56 |
57 |
58 |
About
59 |
60 |
61 | Vision Controls is a desktop application that allows the user to control various applications through hand gestures. The purpose of this
62 | project is to provide students with a way to work in a team setting and achieve something while doing it. This project is managed by the
63 | Aggie Coding Club.
64 |