├── .gitignore
├── LICENSE
├── README.md
├── ball_tracking_example
│   ├── ball_tracking_example.mp4
│   ├── sequential.py
│   └── taskified.py
├── cloud_server_coordinator.py
├── iot_client_coordinator.py
├── requirements.txt
└── task_interface_example.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # IDE
2 | .idea/
3 | 
4 | # Byte-compiled / optimized / DLL files
5 | __pycache__/
6 | *.py[cod]
7 | *$py.class
8 | 
9 | # C extensions
10 | *.so
11 | 
12 | # Distribution / packaging
13 | .Python
14 | build/
15 | develop-eggs/
16 | dist/
17 | downloads/
18 | eggs/
19 | .eggs/
20 | lib/
21 | lib64/
22 | parts/
23 | sdist/
24 | var/
25 | wheels/
26 | *.egg-info/
27 | .installed.cfg
28 | *.egg
29 | MANIFEST
30 | 
31 | # PyInstaller
32 | # Usually these files are written by a python script from a template
33 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
34 | *.manifest
35 | *.spec
36 | 
37 | # Installer logs
38 | pip-log.txt
39 | pip-delete-this-directory.txt
40 | 
41 | # Unit test / coverage reports
42 | htmlcov/
43 | .tox/
44 | .coverage
45 | .coverage.*
46 | .cache
47 | nosetests.xml
48 | coverage.xml
49 | *.cover
50 | .hypothesis/
51 | .pytest_cache/
52 | 
53 | # Translations
54 | *.mo
55 | *.pot
56 | 
57 | # Django stuff:
58 | *.log
59 | local_settings.py
60 | db.sqlite3
61 | 
62 | # Flask stuff:
63 | instance/
64 | .webassets-cache
65 | 
66 | # Scrapy stuff:
67 | .scrapy
68 | 
69 | # Sphinx documentation
70 | docs/_build/
71 | 
72 | # PyBuilder
73 | target/
74 | 
75 | # Jupyter Notebook
76 | .ipynb_checkpoints
77 | 
78 | # pyenv
79 | .python-version
80 | 
81 | # celery beat schedule file
82 | celerybeat-schedule
83 | 
84 | # SageMath parsed files
85 | *.sage.py
86 | 
87 | # Environments
88 | .env
89 | .venv
90 | env/
91 | venv/
92 | ENV/
93 | env.bak/
94 | venv.bak/
95 | 
96 | # Spyder project settings
97 | .spyderproject
98 | .spyproject
99 | 
100 | # Rope project settings
101 | .ropeproject
102 | 
103 | # 
mkdocs documentation 104 | /site 105 | 106 | # mypy 107 | .mypy_cache/ 108 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Tejasvi (Teju) Nareddy 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Efficient IoT Compute Allocation Framework 2 | 3 | [![standard-readme compliant](https://img.shields.io/badge/standard--readme-OK-green.svg?style=flat-square)](https://github.com/RichardLitt/standard-readme) 4 | 5 | > A framework for IoT devices to offload tasks to the cloud, resulting in efficient computation and decreased cloud costs. 
6 | 
7 | ## Table of Contents
8 | 
9 | - [Background](#background)
10 | - [Code Structure](#code-structure)
11 | - [Prerequisites](#prerequisites)
12 | - [Install](#install)
13 | - [Usage](#usage)
14 | - [Maintainers](#maintainers)
15 | - [Contributing](#contributing)
16 | - [License](#license)
17 | 
18 | ## Background
19 | 
20 | Many IoT devices lack the compute resources to process and analyze data locally. This is not ideal, as most data collected by IoT devices is either noisy or requires computation to extract useful information. A common solution recommended by major cloud providers, such as AWS and GCP, is to send all raw data directly to the cloud. This allows data processing to be auto-scaled to the rate at which data is collected. Unfortunately, it also leads to underutilization of the CPU onboard the IoT device. In large-scale deployments of hundreds of IoT devices, collectively making use of all onboard CPU resources could reduce the cost of cloud processing by a large factor.
21 | 
22 | We implemented a framework that makes maximal use of the compute resources on the IoT device while still maintaining a high data-processing throughput. The framework runs data-processing tasks directly on the IoT device, but automatically offloads them to the cloud when throughput expectations are not met. By making maximal use of local compute resources, the framework reduces the cost of data processing in the cloud, while still allowing tasks to be offloaded to the cloud if greater throughput is required.
23 | 
24 | _See our CS 4365 [Final Report](https://docs.google.com/document/d/1Dh7aKAofPXTKovecV3e-9cyr0K4ERvkXMdTNScChcp8/edit?usp=sharing) for more details._
25 | 
26 | ## Code Structure
27 | 
28 | Our framework is designed to work with any IoT application that follows the **Task Interface**.
29 | Our final report contains more information on the design of the **Task Interface**.
30 | `task_interface_example.py` provides an explanation of the **Task Interface** via example code.
31 | 
32 | For the performance evaluation and demo, we have provided one example IoT application in the `ball_tracking_example` folder.
33 | `ball_tracking_example/sequential.py` contains the original IoT example application found in [a blog post](https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/) online.
34 | `ball_tracking_example/taskified.py` contains the modified IoT application that adheres to the **Task Interface**.
35 | 
36 | `iot_client_coordinator.py` contains the framework code that runs on the IoT device. This is responsible for:
37 | 
38 | - Collecting data from the IoT sensors (_example: camera_)
39 | - Running tasks on the IoT device
40 | - Collecting metrics on IoT device load
41 | - Offloading tasks to the server when required (_The Automatic Scheduler_)
42 | 
43 | `cloud_server_coordinator.py` contains the framework code that runs on the cloud server. This is responsible for:
44 | 
45 | - Receiving tasks from the IoT device
46 | - Running the remaining tasks on the server
47 | - Reporting results of computation
48 | 
49 | ## Prerequisites
50 | 
51 | This project was implemented using `Python 3.7`.
52 | This project should still run on other minor versions of `Python 3`, but we provide no guarantees.
53 | 
54 | `pip` is required to install Python dependencies.
55 | 
56 | ### Optional Requirements
57 | 
58 | To realistically calculate metrics, the following are required:
59 | - An IoT device with Python support (e.g. a Raspberry Pi Zero)
60 | - Compute resources on a cloud provider with Python support (e.g. GCP Compute Engine)
61 | 
62 | **These are optional:** the IoT device and Cloud server can instead be run as two separate processes on a single development machine.
63 | 64 | ## Install 65 | 66 | Run the following from the root folder: 67 | 68 | ```bash 69 | pip install -r requirements.txt 70 | ``` 71 | 72 | ## Usage 73 | 74 | This section describes multiple usage scenarios. 75 | 76 | ### IoT Client Only 77 | 78 | For debug/demo purposes, we can run all tasks on the client only. 79 | All computation will happen locally, and the IoT client will never connect to the cloud server. 80 | This run configuration does not represent the intended use-case. 81 | 82 | ```bash 83 | python3 iot_client_coordinator.py 84 | ``` 85 | 86 | ### IoT Client Offloading to Cloud Server 87 | 88 | To support offloading tasks to the cloud, the server must be run on a static IP address in the cloud: 89 | 90 | ```bash 91 | python3 cloud_server_coordinator.py 92 | ``` 93 | 94 | Please make note of the static IP address for the remaining run configurations. 95 | When running the IoT Client (in a new terminal session), 96 | set the `HOST` environment variable to this static IP address. Examples: 97 | 98 | ```bash 99 | export HOST='localhost' # When running server locally for testing 100 | export HOST='35.190.176.6' # Realistic server in the cloud 101 | ``` 102 | 103 | _Note_: In this example, there are a total of 7 tasks. 104 | 105 | _Note_: 106 | The first task must always run on the IoT Client, as it collects sensor data from the IoT device. 107 | The last task must always run on the Cloud Server, as it aggregates data across sensors. 108 | 109 | #### IoT Manual Configuration Mode 110 | 111 | _Manual Configuration Mode_ only runs a specific, pre-set number of tasks on the IoT Client. 112 | The remainder of the tasks are run on the Cloud Server. 113 | 114 | Pass in an argument with the number of tasks to run on the IoT client. 
Examples:
115 | 
116 | ```bash
117 | python3 iot_client_coordinator.py 1 # Run only the first task locally
118 | python3 iot_client_coordinator.py 6 # Run all but the last task locally
119 | python3 iot_client_coordinator.py 3 # Run the first 3 tasks locally
120 | ```
121 | 
122 | #### IoT Automatic Configuration Mode
123 | 
124 | _Automatic Configuration Mode_ starts off like _manual configuration mode_,
125 | but automatically re-adjusts the number of tasks offloaded to the cloud server
126 | in order to meet the expected throughput.
127 | 
128 | In addition to the argument for _manual configuration mode_,
129 | the expected throughput must also be configured via the arguments.
130 | 
131 | ```bash
132 | python3 iot_client_coordinator.py 6 17 # Start running 6 tasks locally, but re-adjust to meet 17 FPS
133 | python3 iot_client_coordinator.py 4 28 # Start running 4 tasks locally, but re-adjust to meet 28 FPS
134 | ```
135 | 
136 | ## Maintainers
137 | 
138 | [@nareddyt](https://github.com/nareddyt)
139 | [@rishiy15](https://github.com/rishiy15)
140 | 
141 | ## Contributing
142 | 
143 | If editing the README, please conform to the [standard-readme](https://github.com/RichardLitt/standard-readme) specification.
144 | 145 | ## License 146 | 147 | MIT © 2019 Tejasvi Nareddy 148 | -------------------------------------------------------------------------------- /ball_tracking_example/ball_tracking_example.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nareddyt/cs4365-task-offload-framework/a3b66beb3bff5a8aae63674d6f65254ff61e0e62/ball_tracking_example/ball_tracking_example.mp4 -------------------------------------------------------------------------------- /ball_tracking_example/sequential.py: -------------------------------------------------------------------------------- 1 | 2 | # import the necessary packages 3 | import cv2 4 | import imutils 5 | import time 6 | 7 | FPS_AVG_WINDOW = 5 8 | 9 | # define the lower and upper boundaries of the "green" 10 | # ball in the HSV color space, then initialize the 11 | # list of tracked points 12 | greenLower = (29, 86, 6) 13 | greenUpper = (64, 255, 255) 14 | 15 | # grab a reference to the video file 16 | vs = cv2.VideoCapture("./ball_tracking_example.mp4") 17 | 18 | start_time = time.time() 19 | fps_counter = 0 20 | frame_counter = 0 21 | 22 | # keep looping 23 | while True: 24 | # grab the current frame 25 | frame = vs.read()[1] 26 | frame_counter += 1 27 | 28 | # if we are viewing a video and we did not grab a frame, 29 | # then we have reached the end of the video 30 | if frame is None: 31 | break 32 | 33 | # resize the frame, blur it, and convert it to the HSV 34 | # color space 35 | frame = imutils.resize(frame, width=300) 36 | blurred = cv2.GaussianBlur(frame, (11, 11), 0) 37 | hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV) 38 | 39 | # construct a mask for the color "green", then perform 40 | # a series of dilations and erosions to remove any small 41 | # blobs left in the mask 42 | mask = cv2.inRange(hsv, greenLower, greenUpper) 43 | mask = cv2.erode(mask, None, iterations=2) 44 | mask = cv2.dilate(mask, None, iterations=2) 45 | 46 | # find contours in 
the mask and initialize the current 47 | # (x, y) center of the ball 48 | contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, 49 | cv2.CHAIN_APPROX_SIMPLE) 50 | contours = imutils.grab_contours(contours) 51 | center = None 52 | 53 | # only proceed if at least one contour was found 54 | if len(contours) > 0: 55 | # find the largest contour in the mask, then use 56 | # it to compute the minimum enclosing circle and 57 | # centroid 58 | c = max(contours, key=cv2.contourArea) 59 | ((x, y), radius) = cv2.minEnclosingCircle(c) 60 | M = cv2.moments(c) 61 | center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])) 62 | 63 | # only proceed if the radius meets a minimum size 64 | if radius > 10: 65 | # draw the circle and centroid on the frame, 66 | # then update the list of tracked points 67 | cv2.circle(frame, (int(x), int(y)), int(radius), 68 | (0, 255, 255), 2) 69 | cv2.circle(frame, center, 5, (0, 0, 255), -1) 70 | 71 | # show the frame to our screen 72 | cv2.imshow("Frame", frame) 73 | cv2.waitKey(1) 74 | 75 | # Calculate FPS 76 | fps_counter += 1 77 | end_time = time.time() 78 | if (end_time - start_time) > FPS_AVG_WINDOW: 79 | print("FPS: ", fps_counter / (end_time - start_time)) 80 | fps_counter = 0 81 | start_time = end_time 82 | 83 | # Stop the video stream 84 | vs.release() 85 | 86 | # close all windows 87 | cv2.destroyAllWindows() 88 | 89 | print('NUMBER OF FRAMES =', frame_counter) -------------------------------------------------------------------------------- /ball_tracking_example/taskified.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import imutils 3 | import threading 4 | 5 | # Define the lower and upper boundaries of the "green" ball in the HSV color space 6 | greenLower = (29, 86, 6) 7 | greenUpper = (64, 255, 255) 8 | 9 | # grab a reference to the video file 10 | vs = cv2.VideoCapture("./ball_tracking_example/ball_tracking_example.mp4") 11 | 12 | 13 | # Start defining tasks 14 | 15 | 
16 | def get_frame(): 17 | # Grab the current frame 18 | frame = vs.read()[1] 19 | 20 | if frame is None: 21 | # End of video, stop execution 22 | return False, None 23 | 24 | # Resize to save space 25 | frame = imutils.resize(frame, width=500) 26 | 27 | return True, frame 28 | 29 | 30 | def calculate_hsv(frame): 31 | # Blur and convert frame to the HSV color space 32 | blurred = cv2.GaussianBlur(frame, (11, 11), 0) 33 | hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV) 34 | 35 | return True, (frame, hsv) 36 | 37 | 38 | def calculate_mask(frame, hsv): 39 | # Construct a mask for the color "green" 40 | # Perform a series of dilations and erosions to remove any small blobs left in the mask 41 | mask = cv2.inRange(hsv, greenLower, greenUpper) 42 | mask = cv2.erode(mask, None, iterations=2) 43 | mask = cv2.dilate(mask, None, iterations=2) 44 | 45 | return True, (frame, mask) 46 | 47 | 48 | def find_contours(frame, mask): 49 | # Find contours in the mask 50 | contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) 51 | contours = imutils.grab_contours(contours) 52 | 53 | if len(contours) == 0: 54 | # No contours found, stop execution 55 | return False, None 56 | 57 | return True, (frame, contours) 58 | 59 | 60 | def calculate_circle(frame, contours): 61 | # Find the largest contour in the mask 62 | # Use it to compute the minimum enclosing circle and centroid 63 | c = max(contours, key=cv2.contourArea) 64 | ((x, y), radius) = cv2.minEnclosingCircle(c) 65 | M = cv2.moments(c) 66 | center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])) 67 | 68 | if radius <= 5: 69 | # Circle is false positive 70 | return False, None 71 | 72 | return True, (frame, x, y, radius, center) 73 | 74 | 75 | def draw_circle(frame, x, y, radius, center): 76 | # Draw the circle and centroid on the frame 77 | cv2.circle(frame, (int(x), int(y)), int(radius), 78 | (0, 255, 255), 2) 79 | cv2.circle(frame, center, 5, (0, 0, 255), -1) 80 | 81 | return True, frame 82 | 83 | 
84 | def show_frame(frame):
85 | 
86 |     # come up with a unique name
87 |     frame_name = "Local frame"
88 |     try:
89 |         frame_name = threading.current_thread().name
90 |     except Exception:
91 |         pass
92 | 
93 |     # show the frame to our screen
94 |     cv2.imshow(frame_name, frame)
95 | 
96 |     # Sleep a tiny amount before next draw
97 |     cv2.waitKey(1)
98 | 
99 |     return True, None
100 | 
101 | 
102 | # Export functions as tasks
103 | tasks = [
104 |     get_frame,
105 |     calculate_hsv,
106 |     calculate_mask,
107 |     find_contours,
108 |     calculate_circle,
109 |     draw_circle,
110 |     show_frame
111 | ]
112 | 
--------------------------------------------------------------------------------
/cloud_server_coordinator.py:
--------------------------------------------------------------------------------
1 | import pickle
2 | import socket
3 | import struct
4 | import threading
5 | 
6 | from ball_tracking_example.taskified import tasks
7 | 
8 | HOST = ''
9 | PORT = 8089
10 | 
11 | 
12 | def run_task(task_func, args):
13 |     # Call task
14 |     if args is None:
15 |         # No args to pass
16 |         return task_func()
17 |     elif type(args) is tuple:
18 |         # Unzip tuple into args
19 |         return task_func(*args)
20 |     else:
21 |         # Single arg
22 |         return task_func(args)
23 | 
24 | 
25 | def main():
26 |     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
27 |     print('Socket created')
28 | 
29 |     s.bind((HOST, PORT))
30 |     print('Socket bind complete')
31 |     s.listen(10)
32 |     print('Socket now listening on port', PORT)
33 | 
34 |     while True:
35 |         print('Waiting for client to connect')
36 | 
37 |         # Receive connection from client
38 |         client_socket, (client_ip, client_port) = s.accept()
39 |         print('Received connection from:', client_ip, client_port)
40 | 
41 |         # Start a new thread for the client.
Use daemon threads to make exiting the server easier 42 | # Set a unique name to display all images 43 | t = threading.Thread(target=on_new_client, args=[client_socket], daemon=True) 44 | t.setName(str(client_ip) + ':' + str(client_port)) 45 | t.start() 46 | print('Started thread with name:', t.getName()) 47 | 48 | 49 | # ========================================================================== # 50 | # Everything below this point is called via a new thread per request. # 51 | # ========================================================================== # 52 | 53 | def on_new_client(conn): 54 | data = b'' 55 | payload_size = struct.calcsize("L") 56 | 57 | try: 58 | while True: 59 | # Reset args list every loop 60 | next_task_args_list = [] 61 | 62 | # Retrieve number of args for next task 63 | while len(data) < payload_size: 64 | data += conn.recv(4096) 65 | 66 | packed_num_next_task_args = data[:payload_size] 67 | data = data[payload_size:] 68 | num_next_task_args = struct.unpack("L", packed_num_next_task_args)[0] 69 | 70 | # Retrieve the next task index 71 | while len(data) < payload_size: 72 | data += conn.recv(4096) 73 | 74 | packed_next_task_num = data[:payload_size] 75 | data = data[payload_size:] 76 | next_task_num = struct.unpack("L", packed_next_task_num)[0] 77 | 78 | # Retrieve all args per task 79 | for i in range(num_next_task_args): 80 | # Retrieve each argument size 81 | while len(data) < payload_size: 82 | data += conn.recv(4096) 83 | packed_argsize = data[:payload_size] 84 | data = data[payload_size:] 85 | argsize = struct.unpack("L", packed_argsize)[0] 86 | 87 | # Retrieve data based on arg size 88 | while len(data) < argsize: 89 | data += conn.recv(4096) 90 | 91 | next_arg_data = data[:argsize] 92 | data = data[argsize:] 93 | # Extract next arg 94 | next_arg = pickle.loads(next_arg_data) 95 | 96 | next_task_args_list.append(next_arg) 97 | 98 | # Set variables and args for running tasks 99 | next_task_run_index = next_task_num 100 | if 
len(next_task_args_list) == 0: 101 | # No args to pass 102 | next_task_args = None 103 | elif len(next_task_args_list) == 1: 104 | next_task_args = next_task_args_list[0] 105 | else: 106 | next_task_args = tuple(next_task_args_list) 107 | 108 | while True: 109 | task = tasks[next_task_run_index] 110 | to_continue, next_task_args = run_task(task_func=task, 111 | args=next_task_args) 112 | 113 | if to_continue is False or next_task_run_index == (len(tasks) - 1): 114 | # Done with this message, get next message by breaking out of loop 115 | break 116 | 117 | # Still working on this message, increment task num 118 | next_task_run_index += 1 119 | 120 | except ConnectionResetError: 121 | # Client disconnected 122 | print('Client disconnected') 123 | conn.close() 124 | 125 | 126 | if __name__ == '__main__': 127 | main() 128 | -------------------------------------------------------------------------------- /iot_client_coordinator.py: -------------------------------------------------------------------------------- 1 | import time 2 | import sys 3 | import os 4 | import socket 5 | import pickle 6 | import struct 7 | 8 | from ball_tracking_example.taskified import tasks 9 | 10 | # Show throughputs every given number of seconds 11 | DEFAULT_THROUGHPUT_PERIOD = 3 12 | 13 | 14 | def init_task_names(): 15 | return [task.__name__ for task in tasks] 16 | 17 | 18 | def parse_args(): 19 | # Default to all tasks 20 | if len(sys.argv) == 1: 21 | return len(tasks), None 22 | 23 | # Parse manual configuration init 24 | end_index = int(sys.argv[1]) 25 | 26 | # Check validity 27 | if not 0 < end_index < len(tasks): 28 | raise AssertionError('Manual Configuration number of tasks to run is not valid') 29 | 30 | # Check for automatic configuration 31 | if len(sys.argv) == 2: 32 | return end_index, None 33 | 34 | # Automatic configuration enabled, parse expected throughput 35 | expected_throughput = int(sys.argv[2]) 36 | 37 | # Check validity 38 | if expected_throughput <= 0: 39 | raise 
AssertionError('Automatic Configuration expected throughput is not valid') 40 | 41 | return end_index, expected_throughput 42 | 43 | 44 | def emulate_iot_device(): 45 | # Our computer's CPU is too fast 46 | # Waste some CPU resources to emulate an IoT device 47 | for i in range(0, 500000): 48 | pass 49 | 50 | 51 | def run_task(task_func, args): 52 | emulate_iot_device() 53 | 54 | # Call task 55 | if args is None: 56 | # No args to pass 57 | return task_func() 58 | elif type(args) is tuple: 59 | # Unzip tuple into args 60 | return task_func(*args) 61 | else: 62 | # Single arg 63 | return task_func(args) 64 | 65 | 66 | def reconfigure_with_throughput(task_names, loop_count, start_time, end_time, 67 | throughput_period, expected_throughput, num_client_tasks): 68 | # Calculate FPS of each task 69 | throughput = int(loop_count / (end_time - start_time)) 70 | 71 | # Debug 72 | print('Average Throughput over', throughput_period, 'seconds:', 73 | throughput, 'frames per second') 74 | 75 | # Don't re-adjust in manual mode 76 | if expected_throughput is None: 77 | print('Running in manual mode, no throughput re-adjustment') 78 | return num_client_tasks 79 | 80 | # Check if re-adjustment needed 81 | if throughput >= expected_throughput: 82 | return num_client_tasks 83 | 84 | # Last task must be offloaded! 
85 | offload_task_index = num_client_tasks - 1 86 | 87 | # Don't offload initial task 88 | if offload_task_index == 0: 89 | print('Cannot offload initial task!') 90 | return num_client_tasks 91 | 92 | # Offload task 93 | print('Offloaded task', task_names[offload_task_index]) 94 | return offload_task_index 95 | 96 | 97 | def offload_to_peer(next_task_num, next_task_args, client_socket): 98 | send_data = b'' 99 | next_arg_data = [] 100 | 101 | if next_task_args is not None: 102 | if type(next_task_args) is tuple: 103 | for arg in next_task_args: 104 | next_arg_data.append(arg) 105 | else: 106 | next_arg_data.append(next_task_args) 107 | 108 | # Send number of args 109 | send_data += struct.pack("L", len(next_arg_data)) 110 | 111 | # Send the next task's number 112 | send_data += struct.pack("L", next_task_num) 113 | 114 | if len(next_arg_data) > 0: 115 | for next_arg in next_arg_data: 116 | data = pickle.dumps(next_arg) 117 | arg_size = struct.pack("L", len(data)) 118 | send_data += arg_size 119 | send_data += data 120 | 121 | client_socket.sendall(send_data) 122 | 123 | 124 | def main(): 125 | # Args parse 126 | num_client_tasks, expected_throughput = parse_args() 127 | 128 | # Variables for task state 129 | task_index = 0 130 | task_names = init_task_names() 131 | 132 | # Variables for calculating throughput 133 | throughput_period = DEFAULT_THROUGHPUT_PERIOD 134 | loop_count = 0 135 | start_time = time.time() 136 | 137 | # Init tasks args 138 | next_task_args = None 139 | 140 | # Peer server connection for offload tasking 141 | # https://stackoverflow.com/questions/30988033/sending-live-video-frame-over-network-in-python-opencv 142 | client_socket = None 143 | if num_client_tasks < len(tasks): 144 | print('Running', num_client_tasks, 'out of', len(tasks), 145 | 'tasks on the client. 
Connecting to server to offload the remaining',
146 |               len(tasks) - num_client_tasks, 'tasks')
147 | 
148 |         # Ensure env var is present
149 |         if 'HOST' not in os.environ:
150 |             raise EnvironmentError(
151 |                 'HOST env var not set to server address. Please set it as described in the README.md')
152 | 
153 |         # Create connection
154 |         client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
155 |         client_socket.connect((os.environ['HOST'], 8089))
156 |     else:
157 |         print('Running all', len(tasks), 'tasks on IoT Client, not connecting to Cloud Server')
158 | 
159 |     # Keep running tasks in sequential order
160 |     while True:
161 | 
162 |         # Determine which task to run
163 |         task = tasks[task_index]
164 | 
165 |         # Run task
166 |         to_continue, next_task_args = run_task(task_func=task,
167 |                                                args=next_task_args)
168 | 
169 |         # Calculate fps
170 |         end_time = time.time()
171 |         if (end_time - start_time) > throughput_period:
172 |             # Reconfigure based on task throughput if needed
173 |             num_client_tasks = reconfigure_with_throughput(
174 |                 task_names=task_names,
175 |                 loop_count=loop_count,
176 |                 start_time=start_time,
177 |                 end_time=end_time,
178 |                 throughput_period=throughput_period,
179 |                 expected_throughput=expected_throughput,
180 |                 num_client_tasks=num_client_tasks
181 |             )
182 | 
183 |             # Reset vars for throughput
184 |             loop_count = 0
185 |             start_time = time.time()
186 | 
187 |         # No need to continue running tasks, end of stream
188 |         if to_continue is False and task_index == 0:
189 |             # Socket is absent when running in client-only mode
190 |             if client_socket is not None:
191 |                 client_socket.close()
192 |             break
193 | 
194 |         # Increment index (cyclical)
195 |         task_index += 1
196 | 
197 |         # Reset to first frame if more function calls are not needed
198 |         # or reached end of sequence
199 |         if to_continue is False or task_index >= num_client_tasks:
200 | 
201 |             if to_continue is not False and client_socket is not None:
202 |                 # Send frame to peer server
203 |                 offload_to_peer(next_task_num=task_index,
204 |                                 next_task_args=next_task_args,
205 |                                 client_socket=client_socket)
206 | 
207 | 
# Reset vars
                task_index = 0
                loop_count += 1
                next_task_args = None
                continue
 | 
 | 
 | if __name__ == '__main__':
 |     main()
 | 
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | imutils==0.5.2
2 | numpy==1.16.1
3 | opencv-python==4.0.0.21
--------------------------------------------------------------------------------
/task_interface_example.py:
--------------------------------------------------------------------------------
1 | # Define 4 tasks that run in sequence
2 | 
3 | 
4 | def task1():
5 |     """
6 |     Initial task that gathers data, therefore does not take any arguments.
7 |     """
8 | 
9 |     # Gather data from IoT sensor
10 |     pass
11 | 
12 |     # Return success and arg for next task
13 |     return True, 'arg1'
14 | 
15 | 
16 | def task2(arg1):
17 |     """
18 |     Second task that depends on the output of the first task.
19 |     """
20 | 
21 |     # Do computation
22 |     pass
23 | 
24 |     # Return success and multiple args for next task
25 |     return True, ('arg1', 'arg2', 'arg3')
26 | 
27 | 
28 | def task3(arg1, arg2, arg3):
29 |     """
30 |     Third task that depends on the output of the second task.
31 |     Performs some basic filtering on the data.
32 |     """
33 | 
34 |     # Do computation
35 |     pass
36 | 
37 |     # Do filter
38 |     if arg1 == arg2:
39 |         # Return failure and no args, next task will not be run
40 |         return False, None
41 | 
42 |     # Otherwise return success, args for next task
43 |     return True, 'arg4'
44 | 
45 | 
46 | def task4(arg4):
47 |     """
48 |     Last task that depends on the output of the third task.
49 |     Does not return any results, as this is the last task.
50 | """ 51 | 52 | # Report results 53 | pass 54 | 55 | # End of tasks 56 | return False 57 | 58 | 59 | # Export sequential ordering of tasks 60 | tasks = [ 61 | task1, 62 | task2, 63 | task3, 64 | task4 65 | ] 66 | --------------------------------------------------------------------------------