├── .github └── workflows │ └── publish-to-pypi.yml ├── .gitignore ├── LICENSE ├── README.md ├── deploy.sh ├── edge_impulse_linux ├── __init__.py ├── audio.py ├── image.py └── runner.py ├── examples ├── audio │ └── classify.py ├── custom │ ├── classify.py │ └── collect.py └── image │ ├── .gitignore │ ├── classify-full-frame.py │ ├── classify-image.py │ ├── classify-video.py │ ├── classify.py │ ├── device_patches.py │ └── resize_demo.py ├── pyproject.toml ├── requirements.txt ├── setup.cfg └── setup.py /.github/workflows/publish-to-pypi.yml: -------------------------------------------------------------------------------- 1 | name: Upload Python Package 2 | 3 | on: 4 | push: 5 | branches: 6 | - master 7 | 8 | permissions: 9 | contents: read 10 | 11 | jobs: 12 | release-build: 13 | runs-on: ubuntu-latest 14 | 15 | steps: 16 | - name: Checkout repo 17 | uses: actions/checkout@v4 18 | 19 | - name: Setup Python 20 | uses: actions/setup-python@v5 21 | with: 22 | python-version: "3.9" 23 | 24 | - name: Install build tools 25 | env: 26 | SB_API_KEY: ${{ secrets.SB_API_KEY }} 27 | run: | 28 | export PYPI_REGISTRY=https://$SB_API_KEY.pypimirror.stablebuild.com/2023-12-27/ 29 | curl https://$SB_API_KEY.httpcache.stablebuild.com/pip-cache-20231228/https://bootstrap.pypa.io/get-pip.py -o get-pip.py && \ 30 | python get-pip.py -i $PYPI_REGISTRY "pip==21.3.1" "setuptools==62.6.0" && \ 31 | rm get-pip.py 32 | python -m pip install -i https://$SB_API_KEY.pypimirror.stablebuild.com/2023-01-30/ twine 33 | 34 | - name: Build package 35 | run: | 36 | rm -rf build/ 37 | rm -rf dist/ 38 | python setup.py sdist 39 | 40 | - name: Publish package 41 | env: 42 | TWINE_USERNAME: __token__ 43 | TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }} 44 | run: | 45 | python -m twine upload dist/* 46 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__ 2 | dist 3 | edge_impulse_linux.egg-info 4 | build 5 | *.jpg 6 | .DS_Store 7 | .venv/ 8 | .vscode 9 | *.eim 10 | .act-secrets 11 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The Clear BSD License 2 | 3 | Copyright (c) 2025 EdgeImpulse Inc. 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted (subject to the limitations in the disclaimer 8 | below) provided that the following conditions are met: 9 | 10 | * Redistributions of source code must retain the above copyright notice, 11 | this list of conditions and the following disclaimer. 12 | 13 | * Redistributions in binary form must reproduce the above copyright 14 | notice, this list of conditions and the following disclaimer in the 15 | documentation and/or other materials provided with the distribution. 16 | 17 | * Neither the name of the copyright holder nor the names of its 18 | contributors may be used to endorse or promote products derived from this 19 | software without specific prior written permission. 20 | 21 | NO EXPRESS OR IMPLIED LICENSES TO ANY PARTY'S PATENT RIGHTS ARE GRANTED BY 22 | THIS LICENSE. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND 23 | CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 24 | LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A 25 | PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
26 | CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
27 | EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
28 | PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
29 | BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
30 | IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
31 | ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
32 | POSSIBILITY OF SUCH DAMAGE.
33 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Edge Impulse Linux SDK for Python
2 |
3 | This library lets you run machine learning models and collect sensor data on Linux machines using Python. This SDK is part of [Edge Impulse](https://www.edgeimpulse.com), where we enable developers to create the next generation of intelligent device solutions with embedded machine learning. [Start here to learn more and train your first model](https://docs.edgeimpulse.com).
4 |
5 | ## Installation guide
6 |
7 | 1. Install a recent version of [Python 3](https://www.python.org/downloads/) and `pip` tools.
8 | 1. Install the SDK:
9 |
10 | **Raspberry Pi**
11 |
12 | ```
13 | $ sudo apt-get install libatlas-base-dev libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev libopenjp2-7 libgtk-3-0 libswscale-dev libavformat58 libavcodec58
14 | $ pip3 install edge_impulse_linux -i https://pypi.python.org/simple
15 | ```
16 |
17 | **Other platforms**
18 |
19 | ```
20 | $ pip3 install edge_impulse_linux
21 | ```
22 |
23 | 1. Clone this repository to get the examples:
24 |
25 | ```
26 | $ git clone https://github.com/edgeimpulse/linux-sdk-python
27 | ```
28 |
29 | 1. Install pip dependencies:
30 |
31 | ```
32 | $ pip3 install -r requirements.txt
33 | ```
34 |
35 | For the computer vision examples you'll want `opencv-python>=4.5.1.48`.
36 | Note: on macOS on Apple silicon you will need a later version;
37 | 4.10.0.84 is tested and installs cleanly.
38 |
39 | ## Collecting data
40 |
41 | Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.
42 |
43 | ### Collecting data from the camera or microphone
44 |
45 | To collect data from the camera or microphone, follow the [getting started guide](https://docs.edgeimpulse.com/docs/edge-impulse-for-linux) for your development board.
46 |
47 | ### Collecting data from other sensors
48 |
49 | To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. [Here's an end-to-end example](https://github.com/edgeimpulse/linux-sdk-python/blob/master/examples/custom/collect.py); a condensed sketch follows below.
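Condensed from that example, the heart of the Data Acquisition format is a signed JSON document. This is a sketch, not a drop-in script: `HMAC_KEY`, `API_KEY` and the sensor values are placeholders you need to fill in from your own project (Dashboard > Keys).

```python
import json, time, hmac, hashlib, requests

data = {
    "protected": {"ver": "v1", "alg": "HS256", "iat": time.time()},
    "signature": "0" * 64,  # placeholder, replaced after signing below
    "payload": {
        "device_type": "LINUX_TEST",
        "interval_ms": 16,  # one reading every 16 ms
        "sensors": [{"name": "accX", "units": "m/s2"}],
        "values": [[0.1], [0.2], [0.3]],  # your real sensor readings go here
    },
}

# Sign the encoded message with your project's HMAC key, then embed the signature
encoded = json.dumps(data)
data["signature"] = hmac.new(bytes(HMAC_KEY, "utf-8"), msg=encoded.encode("utf-8"), digestmod=hashlib.sha256).hexdigest()

res = requests.post(
    url="https://ingestion.edgeimpulse.com/api/training/data",
    data=json.dumps(data),
    headers={"Content-Type": "application/json", "x-file-name": "idle01", "x-api-key": API_KEY},
)
print(res.status_code, res.content)
```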
50 |
51 | ## Classifying data
52 |
53 | To classify data (whether this is from the camera, the microphone, or a custom sensor) you'll need a model file. This model file contains all signal processing code, classical ML algorithms and neural networks - and typically contains hardware optimizations to run as fast as possible. To grab a model file:
54 |
55 | 1. Train your model in Edge Impulse.
56 | 1. Install the [Edge Impulse for Linux CLI](https://docs.edgeimpulse.com/docs/edge-impulse-for-linux).
57 | 1. Download the model file via:
58 |
59 | ```
60 | $ edge-impulse-linux-runner --download modelfile.eim
61 | ```
62 |
63 | This downloads the file into `modelfile.eim`. (Want to switch projects? Add `--clean`.)
64 |
65 | Then you can start classifying realtime sensor data. We have examples for:
66 |
67 | * [Audio](https://github.com/edgeimpulse/linux-sdk-python/blob/master/examples/audio/classify.py) - grabs data from the microphone and classifies it in realtime.
68 | * [Camera](https://github.com/edgeimpulse/linux-sdk-python/blob/master/examples/image/classify.py) - grabs data from a webcam and classifies it in realtime.
69 | * [Camera (full frame)](https://github.com/edgeimpulse/linux-sdk-python/blob/master/examples/image/classify-full-frame.py) - grabs data from a webcam and classifies it twice (once cut from the left, once cut from the right). This is useful if you have a wide-angle lens and don't want to miss any events.
70 | * [Still image](https://github.com/edgeimpulse/linux-sdk-python/blob/master/examples/image/classify-image.py) - classifies a still image from your hard drive.
71 | * [Video](https://github.com/edgeimpulse/linux-sdk-python/blob/master/examples/image/classify-video.py) - grabs frames from a video file on your hard drive and classifies them.
72 | * [Custom data](https://github.com/edgeimpulse/linux-sdk-python/blob/master/examples/custom/classify.py) - classifies custom sensor data. A minimal sketch of this flow follows after the troubleshooting notes below.
73 |
74 | ## Troubleshooting
75 |
76 | ### Collecting print output from the model
77 |
78 | To display the model's logging messages (i.e., the output you may be used to from other deployments), initialize the runner like so:
79 | ```
80 | # model_info = runner.init(debug=True) # to get debug print out
81 | ```
82 | This will pipe the model's stdout and stderr into the stdout and stderr of your own process.
83 |
84 |
85 | ### [Errno -9986] Internal PortAudio error (macOS)
86 |
87 | If you see this error you can re-install portaudio via:
88 |
89 | ```
90 | brew uninstall --ignore-dependencies portaudio
91 | brew install portaudio --HEAD
92 | ```
93 |
94 | ### Abort trap (6) (macOS)
95 |
96 | This error shows up when you try to access the camera or the microphone on macOS from a virtual shell (like the terminal in Visual Studio Code). Try to run the command from the normal Terminal.app.
97 |
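## Minimal classification sketch

The generic flow behind all the examples above, based on `examples/custom/classify.py`. This is a minimal sketch: `modelfile.eim` is assumed to sit next to the script, and the zero-filled `features` array is a stand-in for one window of real sensor values.

```python
import os
from edge_impulse_linux.runner import ImpulseRunner

modelfile = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'modelfile.eim')
runner = ImpulseRunner(modelfile)
try:
    model_info = runner.init()  # spawns the .eim process and performs the hello handshake
    print('Loaded runner for "%s / %s"' % (model_info['project']['owner'], model_info['project']['name']))
    features = [0.0] * model_info['model_parameters']['input_features_count']  # stand-in window
    res = runner.classify(features)
    print(res['result'], res['timing'])
finally:
    runner.stop()
```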
--------------------------------------------------------------------------------
/deploy.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -e
3 |
4 | SCRIPTPATH="$( cd "$(dirname "$0")" ; pwd -P )"
5 |
6 | cd "$SCRIPTPATH"
7 |
8 | if [ ! -d ".venv" ]; then
9 | if [ -z "$SB_API_KEY" ]; then
10 | echo "Missing SB_API_KEY, set to a StableBuild API Key (required to install pinned build dependencies)"
11 | exit 1
12 | fi
13 |
14 | python3.9 -m venv .venv
15 | source .venv/bin/activate
16 |
17 | pip3 install \
18 | -i https://$SB_API_KEY.pypimirror.stablebuild.com/2023-01-30/ \
19 | twine
20 | else
21 | source .venv/bin/activate
22 | fi
23 |
24 | rm -rf build/
25 | rm -rf dist/
26 | .venv/bin/python3 setup.py sdist
27 | .venv/bin/twine upload dist/*
28 |
29 | deactivate
30 |
--------------------------------------------------------------------------------
/edge_impulse_linux/__init__.py:
--------------------------------------------------------------------------------
1 | from edge_impulse_linux import runner
2 | from edge_impulse_linux import audio
3 | from edge_impulse_linux import image
4 |
--------------------------------------------------------------------------------
/edge_impulse_linux/audio.py:
--------------------------------------------------------------------------------
1 |
2 | import numpy as np
3 | import pyaudio
4 | import time
5 | from six.moves import queue
6 | from edge_impulse_linux.runner import ImpulseRunner
7 | CHUNK_SIZE = 1024
8 | OVERLAP = 0.25
9 |
10 | def now():
11 | return round(time.time() * 1000)
12 |
13 | class Microphone():
14 | def __init__(self, rate, chunk_size, device_id = None, channels = 1):
15 | self.buff = queue.Queue()
16 | self.chunk_size = chunk_size
17 | self.data = []
18 | self.rate = rate
19 | self.closed = True
20 | self.channels = channels
21 | self.interface = pyaudio.PyAudio()
22 | self.device_id = device_id
23 | self.zero_counter = 0
24 |
25 | while self.device_id is None or not self.checkDeviceModelCompatibility(self.device_id): # keep prompting until a compatible device is selected
26 | input_devices = self.listAvailableDevices()
27 | input_device_id = int(input("Type the id of the audio device you want to use: \n"))
28 | for device in input_devices:
29 | if device[0] == input_device_id:
30 | if self.checkDeviceModelCompatibility(input_device_id):
31 | self.device_id = input_device_id
32 | else:
33 | print('That device is not compatible')
34 |
35 | print('Selected audio device: %i' % self.device_id)
36 |
37 | def checkDeviceModelCompatibility(self, device_id):
38 | supported = False
39 | try:
40 | supported = self.interface.is_format_supported(self.rate,
41 | input_device=device_id,
42 | input_channels=self.channels,
43 | input_format=pyaudio.paInt16)
44 | except ValueError: # PyAudio raises ValueError when the format is not supported
45 | supported = False
46 | finally:
47 | return supported
48 |
49 |
50 |
51 | def listAvailableDevices(self):
52 | if not self.interface:
53 | self.interface = pyaudio.PyAudio()
54 |
55 | info = self.interface.get_host_api_info_by_index(0)
56 | numdevices = info.get('deviceCount')
57 | input_devices = []
58 | for i in range(0, numdevices):
59 | if self.interface.get_device_info_by_host_api_device_index(0, i).get('maxInputChannels') > 0:
60 | input_devices.append((i, self.interface.get_device_info_by_host_api_device_index(0, i).get('name')))
61 |
62 | if len(input_devices) == 0:
63 | raise Exception('There are no audio devices available')
64 |
65 | for i in range(0, len(input_devices)):
66 | print("%i --> %s" % input_devices[i])
67 |
68 | return input_devices
69 |
70 | def __enter__(self):
71 | if not self.interface:
72 | self.interface = pyaudio.PyAudio()
73 |
74 | self.stream = self.interface.open(
75 | input_device_index = self.device_id,
76 | format = pyaudio.paInt16,
77 | channels = self.channels,
78 | rate = self.rate,
79 | input = True,
80 | frames_per_buffer = self.chunk_size,
81 | stream_callback = self.fill_buffer
82 | )
83 | self.closed = False
84 | return self
85 |
86 | def __exit__(self, type, value, traceback):
87 | self.stream.stop_stream()
88 | self.stream.close()
89 | self.closed = True
90 | self.interface.terminate()
91 |
92 | def fill_buffer(self, in_data, frame_count, time_info, status_flags):
93 | zeros = bytes(self.chunk_size * 2) # one chunk of silence (2 bytes per 16-bit sample)
94 | if in_data != zeros:
95 | self.zero_counter = 0
96 | else:
97 | self.zero_counter += 1
98 |
99 | if self.zero_counter > self.rate / self.chunk_size: # roughly one second of all-zero buffers
100 | self.closed = True
101 | raise Exception('There is no audio data coming from the audio interface')
102 |
103 | self.buff.put(in_data)
104 | return None, pyaudio.paContinue
105 |
106 | def generator(self):
107 | while not self.closed:
108 | chunk = self.buff.get()
109 |
110 | if chunk is None:
111 | return
112 |
113 | data = [chunk]
114 | while True:
115 | try:
116 | chunk = self.buff.get(block=False)
117 | if chunk is None:
118 | return
119 | data.append(chunk)
120 | except queue.Empty:
121 | break
122 |
123 | yield b''.join(data)
124 |
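# Note: fill_buffer runs on PyAudio's callback thread and only enqueues raw
# chunks; generator() drains the queue on the consumer side, coalescing
# everything received since the last yield into a single bytes object. This
# keeps the audio callback non-blocking while classification downstream runs
# at its own pace.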
125 | class AudioImpulseRunner(ImpulseRunner):
126 | def __init__(self, model_path: str):
127 | super(AudioImpulseRunner, self).__init__(model_path)
128 | self.closed = True
129 | self.sampling_rate = 0
130 | self.window_size = 0
131 | self.labels = []
132 |
133 | def init(self, debug=False):
134 | model_info = super(AudioImpulseRunner, self).init(debug)
135 | if model_info['model_parameters']['frequency'] == 0:
136 | raise Exception('Model file "' + self._model_path + '" is not suitable for audio recognition')
137 |
138 | self.window_size = model_info['model_parameters']['input_features_count']
139 | self.sampling_rate = model_info['model_parameters']['frequency']
140 | self.labels = model_info['model_parameters']['labels']
141 |
142 | return model_info
143 |
144 | def __enter__(self):
145 | self.closed = False
146 | return self
147 |
148 | def __exit__(self, type, value, traceback):
149 | self.closed = True
150 |
151 | def classify(self, data):
152 | return super(AudioImpulseRunner, self).classify(data)
153 |
154 | def classifier(self, device_id = None):
155 | with Microphone(self.sampling_rate, CHUNK_SIZE, device_id=device_id) as mic:
156 | generator = mic.generator()
157 | features = np.array([], dtype=np.int16)
158 | while not self.closed:
159 | for audio in generator:
160 | data = np.frombuffer(audio, dtype=np.int16)
161 | features = np.concatenate((features, data), axis=0)
162 | while len(features) >= self.window_size:
163 | begin = now() # timing hook, currently unused
164 | res = self.classify(features[:self.window_size].tolist())
165 | features = features[int(self.window_size * OVERLAP):] # slide the window by 25%, so consecutive windows overlap by 75%
166 | yield res, audio
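# Worked example of the sliding window above (illustrative numbers, not API):
# for a model trained at 16000 Hz with input_features_count = 16000 (a one-
# second window), each classification consumes the newest 16000 samples, then
# 16000 * OVERLAP = 4000 samples are dropped, so a fresh result is produced
# roughly every 250 ms and consecutive windows share 75% of their samples.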
--------------------------------------------------------------------------------
/edge_impulse_linux/image.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import numpy as np
4 | import cv2
5 | from edge_impulse_linux.runner import ImpulseRunner
6 | import math
7 | import psutil
8 |
9 | class ImageImpulseRunner(ImpulseRunner):
10 | def __init__(self, model_path: str):
11 | super(ImageImpulseRunner, self).__init__(model_path)
12 | self.closed = True
13 | self.labels = []
14 | self.dim = (0, 0)
15 | self.videoCapture = cv2.VideoCapture()
16 | self.isGrayscale = False
17 | self.resizeMode = ''
18 |
19 | def init(self, debug=False):
20 | model_info = super(ImageImpulseRunner, self).init(debug)
21 | width = model_info['model_parameters']['image_input_width']
22 | height = model_info['model_parameters']['image_input_height']
23 |
24 | if width == 0 or height == 0:
25 | raise Exception('Model file "' + self._model_path + '" is not suitable for image recognition')
26 |
27 | self.dim = (width, height)
28 | self.labels = model_info['model_parameters']['labels']
29 | self.isGrayscale = model_info['model_parameters']['image_channel_count'] == 1
30 | self.resizeMode = model_info['model_parameters'].get('image_resize_mode', 'not-reported')
31 | return model_info
32 |
33 | def __enter__(self):
34 | self.videoCapture = cv2.VideoCapture()
35 | self.closed = False
36 | return self
37 |
38 | def __exit__(self, type, value, traceback):
39 | self.videoCapture.release()
40 | self.closed = True
41 |
42 | def classify(self, data):
43 | return super(ImageImpulseRunner, self).classify(data)
44 |
45 | # This returns images in RGB format (not BGR)
46 | def get_frames(self, videoDeviceId = 0):
47 | if psutil.OSX or psutil.MACOS:
48 | print('Make sure to grant this script access to your webcam.')
49 | print('If your webcam is not responding, try running "tccutil reset Camera" to reset the camera access privileges.')
50 |
51 | self.videoCapture = cv2.VideoCapture(videoDeviceId)
52 | while not self.closed:
53 | success, img = self.videoCapture.read()
54 | if success:
55 | img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # convert only frames that were actually read; img is None on failure
56 | yield img
57 |
58 | # This returns images in RGB format (not BGR)
59 | def classifier(self, videoDeviceId = 0):
60 | if psutil.OSX or psutil.MACOS:
61 | print('Make sure to grant this script access to your webcam.')
62 | print('If your webcam is not responding, try running "tccutil reset Camera" to reset the camera access privileges.')
63 |
64 | self.videoCapture = cv2.VideoCapture(videoDeviceId)
65 | while not self.closed:
66 | success, img = self.videoCapture.read()
67 | if success:
68 | img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
69 | features, cropped = self.get_features_from_image(img)
70 |
71 | res = self.classify(features)
72 | yield res, cropped
73 |
74 | # This expects images in RGB format (not BGR). DEPRECATED, use get_features_from_image_auto_studio_settings
75 | def get_features_from_image(self, img, crop_direction_x='center', crop_direction_y='center'):
76 | features = []
77 |
78 | EI_CLASSIFIER_INPUT_WIDTH = self.dim[0]
79 | EI_CLASSIFIER_INPUT_HEIGHT = self.dim[1]
80 |
81 | in_frame_cols = img.shape[1]
82 | in_frame_rows = img.shape[0]
83 |
84 | factor_w = EI_CLASSIFIER_INPUT_WIDTH / in_frame_cols
85 | factor_h = EI_CLASSIFIER_INPUT_HEIGHT / in_frame_rows
86 |
87 | # Maintain the same aspect ratio by scaling by the same factor for both dimensions
88 | largest_factor = factor_w if factor_w > factor_h else factor_h
89 |
90 | resize_size_w = int(math.ceil(largest_factor * in_frame_cols))
91 | resize_size_h = int(math.ceil(largest_factor * in_frame_rows))
92 | # One dim will match the classifier size, the other will be larger
93 | resize_size = (resize_size_w, resize_size_h)
94 |
95 | resized = cv2.resize(img, resize_size, interpolation=cv2.INTER_AREA)
96 |
97 | if (crop_direction_x == 'center'):
98 | crop_x = int((resize_size_w - EI_CLASSIFIER_INPUT_WIDTH) / 2) # 0 when same
99 | elif (crop_direction_x == 'left'):
100 | crop_x = 0
101 | elif (crop_direction_x == 'right'):
102 | crop_x = resize_size_w - EI_CLASSIFIER_INPUT_WIDTH # can't be negative b/c one size will match input and the other will be larger
103 | else:
104 | raise Exception('Invalid value for crop_direction_x, should be 
center, left or right') 105 | 106 | if (crop_direction_y == 'center'): 107 | crop_y = int((resize_size_h - resize_size_w) / 2) if resize_size_h > resize_size_w else 0 108 | elif (crop_direction_y == 'top'): 109 | crop_y = 0 110 | elif (crop_direction_y == 'bottom'): 111 | crop_y = resize_size_h - EI_CLASSIFIER_INPUT_HEIGHT 112 | else: 113 | raise Exception('Invalid value for crop_direction_y, should be center, top or bottom') 114 | 115 | cropped = resized[crop_y: crop_y + EI_CLASSIFIER_INPUT_HEIGHT, 116 | crop_x: crop_x + EI_CLASSIFIER_INPUT_WIDTH] 117 | 118 | if self.isGrayscale: 119 | cropped = cv2.cvtColor(cropped, cv2.COLOR_BGR2GRAY) 120 | pixels = np.array(cropped).flatten().tolist() 121 | 122 | for p in pixels: 123 | features.append((p << 16) + (p << 8) + p) 124 | else: 125 | pixels = np.array(cropped).flatten().tolist() 126 | 127 | for ix in range(0, len(pixels), 3): 128 | r = pixels[ix + 0] 129 | g = pixels[ix + 1] 130 | b = pixels[ix + 2] 131 | features.append((r << 16) + (g << 8) + b) 132 | 133 | return features, cropped 134 | 135 | def get_features_from_image_auto_studio_settings(self, img): 136 | if self.resizeMode == '': 137 | raise Exception( 138 | 'Runner has not initialized, please call init() first') 139 | if self.resizeMode == 'not-reported': 140 | raise Exception( 141 | 'Model file "' + self._model_path + '" does not report the image resize mode\n' 142 | 'Please update the model file via edge-impulse-linux-runner --download') 143 | return get_features_from_image_with_studio_mode(img, self.resizeMode, self.dim[0], self.dim[1], self.isGrayscale) 144 | 145 | 146 | def resize_image(image, size): 147 | """Resize an image to the given size using a common interpolation method. 148 | 149 | Args: 150 | image: The input image as a NumPy array. 151 | size: A tuple (width, height) specifying the desired output size. 152 | 153 | Returns: 154 | The resized image as a NumPy array. 155 | """ 156 | return cv2.resize(image, size, interpolation=cv2.INTER_AREA) 157 | 158 | def resize_with_letterbox(image, target_width, target_height): 159 | """Resize an image while maintaining aspect ratio using letterboxing. 160 | 161 | Args: 162 | image: The input image as a NumPy array. 163 | target_size: A tuple (width, height) specifying the desired output size. 164 | 165 | Returns: 166 | The resized image as a NumPy array and the letterbox dimensions. 167 | """ 168 | 169 | height, width = image.shape[:2] 170 | 171 | # Calculate scale factors to preserve aspect ratio 172 | scale_x = target_width / width 173 | scale_y = target_height / height 174 | scale = min(scale_x, scale_y) 175 | 176 | # Calculate new dimensions and padding 177 | new_width = int(width * scale) 178 | new_height = int(height * scale) 179 | top_pad = (target_height - new_height) // 2 180 | bottom_pad = target_height - new_height - top_pad 181 | left_pad = (target_width - new_width) // 2 182 | right_pad = target_width - new_width - left_pad 183 | 184 | # Resize image and add padding 185 | resized_image = resize_image(image, (new_width, new_height)) 186 | padded_image = cv2.copyMakeBorder(resized_image, top_pad, bottom_pad, left_pad, right_pad, cv2.BORDER_CONSTANT, value=0) 187 | 188 | return padded_image 189 | 190 | 191 | def get_features_from_image_with_studio_mode(img, mode, output_width, output_height, is_grayscale): 192 | """ 193 | Extract features from an image using different resizing modes suitable for Edge Impulse Studio. 194 | 195 | Args: 196 | img (numpy.ndarray): The input image as a NumPy array. 
197 | mode (str): The resizing mode to use. Options are 'fit-shortest', 'fit-longest', and 'squash'. 198 | output_width (int): The desired output width of the image. 199 | output_height (int): The desired output height of the image. 200 | is_grayscale (bool): Whether the output image should be converted to grayscale. 201 | 202 | Returns: 203 | tuple: A tuple containing: 204 | - features (list): A list of pixel values in the format (R << 16) + (G << 8) + B for color images, 205 | or (P << 16) + (P << 8) + P for grayscale images. 206 | - resized_img (numpy.ndarray): The resized image as a NumPy array. 207 | """ 208 | features = [] 209 | 210 | in_frame_cols = img.shape[1] 211 | in_frame_rows = img.shape[0] 212 | 213 | if mode == 'fit-shortest': 214 | aspect_ratio = output_width / output_height 215 | if in_frame_cols / in_frame_rows > aspect_ratio: 216 | # Image is wider than target aspect ratio 217 | new_width = int(in_frame_rows * aspect_ratio) 218 | offset = (in_frame_cols - new_width) // 2 219 | cropped_img = img[:, offset:offset + new_width] 220 | else: 221 | # Image is taller than target aspect ratio 222 | new_height = int(in_frame_cols / aspect_ratio) 223 | offset = (in_frame_rows - new_height) // 2 224 | cropped_img = img[offset:offset + new_height, :] 225 | 226 | resized_img = cv2.resize(cropped_img, (output_width, output_height), interpolation=cv2.INTER_AREA) 227 | elif mode == 'fit-longest': 228 | resized_img = resize_with_letterbox(img, output_width, output_height) 229 | elif mode == 'squash': 230 | resized_img = resize_image(img, (output_width, output_height)) 231 | else: 232 | raise ValueError(f"Unsupported mode: {mode}") 233 | 234 | if is_grayscale: 235 | resized_img = cv2.cvtColor(resized_img, cv2.COLOR_BGR2GRAY) 236 | pixels = np.array(resized_img).flatten().tolist() 237 | 238 | for p in pixels: 239 | features.append((p << 16) + (p << 8) + p) 240 | else: 241 | pixels = np.array(resized_img).flatten().tolist() 242 | 243 | for ix in range(0, len(pixels), 3): 244 | r = pixels[ix + 0] 245 | g = pixels[ix + 1] 246 | b = pixels[ix + 2] 247 | features.append((r << 16) + (g << 8) + b) 248 | 249 | return features, resized_img -------------------------------------------------------------------------------- /edge_impulse_linux/runner.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | import os.path 3 | import tempfile 4 | import shutil 5 | import time 6 | import signal 7 | import socket 8 | import json 9 | 10 | 11 | def now(): 12 | return round(time.time() * 1000) 13 | 14 | 15 | class ImpulseRunner: 16 | def __init__(self, model_path: str): 17 | self._model_path = model_path 18 | self._tempdir = None 19 | self._runner = None 20 | self._client = None 21 | self._ix = 0 22 | self._debug = False 23 | 24 | def init(self, debug=False): 25 | if not os.path.exists(self._model_path): 26 | raise Exception("Model file does not exist: " + self._model_path) 27 | 28 | if not os.access(self._model_path, os.X_OK): 29 | raise Exception('Model file "' + self._model_path + '" is not executable') 30 | 31 | self._debug = debug 32 | self._tempdir = tempfile.mkdtemp() 33 | socket_path = os.path.join(self._tempdir, "runner.sock") 34 | cmd = [self._model_path, socket_path] 35 | if debug: 36 | self._runner = subprocess.Popen(cmd) 37 | else: 38 | self._runner = subprocess.Popen( 39 | cmd, 40 | stdout=subprocess.PIPE, 41 | stderr=subprocess.PIPE, 42 | ) 43 | 44 | while not os.path.exists(socket_path) or self._runner.poll() is not None: 45 | time.sleep(0.1) 
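        # The wait above polls until the .eim process has created its Unix socket.
        # Everything that follows is a simple request/response protocol over that
        # socket: each request is a JSON object carrying an incrementing "id", and
        # each response is a JSON object terminated by a NUL (\x00) byte, matched
        # back to its request by that same id (see send_msg below).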
46 | 47 | if self._runner.poll() is not None: 48 | raise Exception("Failed to start runner (" + str(self._runner.poll()) + ")") 49 | 50 | self._client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) 51 | self._client.connect(socket_path) 52 | 53 | return self.hello() 54 | 55 | def stop(self): 56 | if self._tempdir: 57 | shutil.rmtree(self._tempdir) 58 | 59 | if self._client: 60 | self._client.close() 61 | 62 | if self._runner: 63 | os.kill(self._runner.pid, signal.SIGINT) 64 | # todo: in Node we send a SIGHUP after 0.5sec if process has not died, can we do this somehow here too? 65 | 66 | def hello(self): 67 | msg = {"hello": 1} 68 | return self.send_msg(msg) 69 | 70 | def classify(self, data): 71 | msg = {"classify": data} 72 | if self._debug: 73 | msg["debug"] = True 74 | return self.send_msg(msg) 75 | 76 | def send_msg(self, msg): 77 | t_send_msg = now() 78 | 79 | if not self._client: 80 | raise Exception("ImpulseRunner is not initialized (call init())") 81 | 82 | self._ix = self._ix + 1 83 | ix = self._ix 84 | 85 | msg["id"] = ix 86 | self._client.send(json.dumps(msg).encode("utf-8")) 87 | 88 | t_sent_msg = now() 89 | 90 | data = b"" 91 | while True: 92 | chunk = self._client.recv(1024) 93 | # end chunk has \x00 in the end 94 | if chunk[-1] == 0: 95 | data = data + chunk[:-1] 96 | break 97 | data = data + chunk 98 | 99 | t_received_msg = now() 100 | 101 | braces_open = 0 102 | braces_closed = 0 103 | line = "" 104 | resp = None 105 | 106 | for c in data.decode("utf-8"): 107 | if c == "{": 108 | line = line + c 109 | braces_open = braces_open + 1 110 | elif c == "}": 111 | line = line + c 112 | braces_closed = braces_closed + 1 113 | if braces_closed == braces_open: 114 | resp = json.loads(line) 115 | elif braces_open > 0: 116 | line = line + c 117 | 118 | if resp is not None: 119 | break 120 | 121 | if resp is None: 122 | raise Exception("No data or corrupted data received") 123 | 124 | if resp["id"] != ix: 125 | raise Exception("Wrong id, expected: " + str(ix) + " but got " + resp["id"]) 126 | 127 | if not resp["success"]: 128 | raise Exception(resp["error"]) 129 | 130 | del resp["id"] 131 | del resp["success"] 132 | 133 | t_parsed_msg = now() 134 | # print('sent', t_sent_msg - t_send_msg, 'received', t_received_msg - t_send_msg, 'parsed', t_parsed_msg - t_send_msg) 135 | return resp 136 | -------------------------------------------------------------------------------- /examples/audio/classify.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys, getopt 3 | import signal 4 | import time 5 | from edge_impulse_linux.audio import AudioImpulseRunner 6 | 7 | runner = None 8 | 9 | def signal_handler(sig, frame): 10 | print('Interrupted') 11 | if (runner): 12 | runner.stop() 13 | sys.exit(0) 14 | 15 | signal.signal(signal.SIGINT, signal_handler) 16 | 17 | def help(): 18 | print('python classify.py ' ) 19 | 20 | def main(argv): 21 | try: 22 | opts, args = getopt.getopt(argv, "h", ["--help"]) 23 | except getopt.GetoptError: 24 | help() 25 | sys.exit(2) 26 | 27 | for opt, arg in opts: 28 | if opt in ('-h', '--help'): 29 | help() 30 | sys.exit() 31 | 32 | if len(args) == 0: 33 | help() 34 | sys.exit(2) 35 | 36 | model = args[0] 37 | 38 | dir_path = os.path.dirname(os.path.realpath(__file__)) 39 | modelfile = os.path.join(dir_path, model) 40 | 41 | with AudioImpulseRunner(modelfile) as runner: 42 | try: 43 | model_info = runner.init() 44 | # model_info = runner.init(debug=True) # to get debug print out 45 | labels = 
model_info['model_parameters']['labels'] 46 | print('Loaded runner for "' + model_info['project']['owner'] + ' / ' + model_info['project']['name'] + '"') 47 | 48 | #Let the library choose an audio interface suitable for this model, or pass device ID parameter to manually select a specific audio interface 49 | selected_device_id = None 50 | if len(args) >= 2: 51 | selected_device_id=int(args[1]) 52 | print("Device ID "+ str(selected_device_id) + " has been provided as an argument.") 53 | 54 | for res, audio in runner.classifier(device_id=selected_device_id): 55 | print('Result (%d ms.) ' % (res['timing']['dsp'] + res['timing']['classification']), end='') 56 | for label in labels: 57 | score = res['result']['classification'][label] 58 | print('%s: %.2f\t' % (label, score), end='') 59 | print('', flush=True) 60 | 61 | finally: 62 | if (runner): 63 | runner.stop() 64 | 65 | if __name__ == '__main__': 66 | main(sys.argv[1:]) 67 | -------------------------------------------------------------------------------- /examples/custom/classify.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys, getopt 3 | import signal 4 | import time 5 | from edge_impulse_linux.runner import ImpulseRunner 6 | import io 7 | 8 | runner = None 9 | 10 | def signal_handler(sig, frame): 11 | print('Interrupted') 12 | if (runner): 13 | runner.stop() 14 | sys.exit(0) 15 | 16 | signal.signal(signal.SIGINT, signal_handler) 17 | 18 | def help(): 19 | print('python classify.py ') 20 | 21 | def main(argv): 22 | try: 23 | opts, args = getopt.getopt(argv, "h", ["--help"]) 24 | except getopt.GetoptError: 25 | help() 26 | sys.exit(2) 27 | 28 | for opt, arg in opts: 29 | if opt in ('-h', '--help'): 30 | help() 31 | sys.exit() 32 | 33 | if len(args) <= 1: 34 | help() 35 | sys.exit(2) 36 | 37 | model = args[0] 38 | 39 | 40 | features_file = io.open(args[1], 'r', encoding='utf8') 41 | features = features_file.read().strip().split(",") 42 | if '0x' in features[0]: 43 | features = [float(int(f, 16)) for f in features] 44 | else: 45 | features = [float(f) for f in features] 46 | 47 | 48 | dir_path = os.path.dirname(os.path.realpath(__file__)) 49 | modelfile = os.path.join(dir_path, model) 50 | 51 | print('MODEL: ' + modelfile) 52 | 53 | 54 | runner = ImpulseRunner(modelfile) 55 | try: 56 | model_info = runner.init() 57 | # model_info = runner.init(debug=True) # to get debug print out 58 | 59 | print('Loaded runner for "' + model_info['project']['owner'] + ' / ' + model_info['project']['name'] + '"') 60 | 61 | res = runner.classify(features) 62 | print("classification:") 63 | print(res["result"]) 64 | print("timing:") 65 | print(res["timing"]) 66 | 67 | finally: 68 | if (runner): 69 | runner.stop() 70 | 71 | if __name__ == '__main__': 72 | main(sys.argv[1:]) 73 | -------------------------------------------------------------------------------- /examples/custom/collect.py: -------------------------------------------------------------------------------- 1 | # First, install the dependencies via: 2 | # $ pip3 install requests 3 | 4 | import json 5 | import time, hmac, hashlib 6 | import requests 7 | import re, uuid 8 | import math 9 | 10 | # Your API & HMAC keys can be found here (go to your project > Dashboard > Keys to find this) 11 | HMAC_KEY = "fed53116f20684c067774ebf9e7bcbdc" 12 | API_KEY = "ei_fd83..." 13 | 14 | # empty signature (all zeros). 
HS256 gives 32 byte signature, and we encode in hex, so we need 64 characters here 15 | emptySignature = ''.join(['0'] * 64) 16 | 17 | # use MAC address of network interface as deviceId 18 | device_name =":".join(re.findall('..', '%012x' % uuid.getnode())) 19 | 20 | # here we have new data every 16 ms 21 | INTERVAL_MS = 16 22 | 23 | if INTERVAL_MS <= 0: 24 | raise Exception("Interval in miliseconds cannot be equal or lower than 0.") 25 | 26 | # here we'll collect 2 seconds of data at a frequency defined by interval_ms 27 | freq =1000/INTERVAL_MS 28 | values_list=[] 29 | for i in range (2*int(round(freq,0))): 30 | values_list.append([math.sin(i * 0.1) * 10, 31 | math.cos(i * 0.1) * 10, 32 | (math.sin(i * 0.1) + math.cos(i * 0.1)) * 10]) 33 | 34 | data = { 35 | "protected": { 36 | "ver": "v1", 37 | "alg": "HS256", 38 | "iat": time.time() # epoch time, seconds since 1970 39 | }, 40 | "signature": emptySignature, 41 | "payload": { 42 | "device_name": device_name, 43 | "device_type": "LINUX_TEST", 44 | "interval_ms": INTERVAL_MS, 45 | "sensors": [ 46 | { "name": "accX", "units": "m/s2" }, 47 | { "name": "accY", "units": "m/s2" }, 48 | { "name": "accZ", "units": "m/s2" } 49 | ], 50 | "values": values_list 51 | } 52 | } 53 | 54 | 55 | 56 | # encode in JSON 57 | encoded = json.dumps(data) 58 | 59 | # sign message 60 | signature = hmac.new(bytes(HMAC_KEY, 'utf-8'), msg = encoded.encode('utf-8'), digestmod = hashlib.sha256).hexdigest() 61 | 62 | # set the signature again in the message, and encode again 63 | data['signature'] = signature 64 | encoded = json.dumps(data) 65 | 66 | # and upload the file 67 | res = requests.post(url='https://ingestion.edgeimpulse.com/api/training/data', 68 | data=encoded, 69 | headers={ 70 | 'Content-Type': 'application/json', 71 | 'x-file-name': 'idle01', 72 | 'x-api-key': API_KEY 73 | }) 74 | if (res.status_code == 200): 75 | print('Uploaded file to Edge Impulse', res.status_code, res.content) 76 | else: 77 | print('Failed to upload file to Edge Impulse', res.status_code, res.content) -------------------------------------------------------------------------------- /examples/image/.gitignore: -------------------------------------------------------------------------------- 1 | img-mac-x86_64-v4.eim 2 | *.jpg 3 | __pycache__ 4 | -------------------------------------------------------------------------------- /examples/image/classify-full-frame.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import device_patches # Device specific patches for Jetson Nano (needs to be before importing cv2) 4 | 5 | import cv2 6 | import os 7 | import sys, getopt 8 | import signal 9 | import time 10 | from edge_impulse_linux.image import ImageImpulseRunner 11 | 12 | runner = None 13 | 14 | def now(): 15 | return round(time.time() * 1000) 16 | 17 | def get_webcams(): 18 | port_ids = [] 19 | for port in range(5): 20 | print("Looking for a camera in port %s:" %port) 21 | camera = cv2.VideoCapture(port) 22 | if camera.isOpened(): 23 | ret = camera.read()[0] 24 | if ret: 25 | backendName =camera.getBackendName() 26 | w = camera.get(3) 27 | h = camera.get(4) 28 | print("Camera %s (%s x %s) found in port %s " %(backendName,h,w, port)) 29 | port_ids.append(port) 30 | camera.release() 31 | return port_ids 32 | 33 | def sigint_handler(sig, frame): 34 | print('Interrupted') 35 | if (runner): 36 | runner.stop() 37 | sys.exit(0) 38 | 39 | signal.signal(signal.SIGINT, sigint_handler) 40 | 41 | def help(): 42 | print('python classify.py 
') 43 | 44 | def main(argv): 45 | try: 46 | opts, args = getopt.getopt(argv, "h", ["--help"]) 47 | except getopt.GetoptError: 48 | help() 49 | sys.exit(2) 50 | 51 | for opt, arg in opts: 52 | if opt in ('-h', '--help'): 53 | help() 54 | sys.exit() 55 | 56 | if len(args) == 0: 57 | help() 58 | sys.exit(2) 59 | 60 | model = args[0] 61 | 62 | dir_path = os.path.dirname(os.path.realpath(__file__)) 63 | modelfile = os.path.join(dir_path, model) 64 | 65 | print('MODEL: ' + modelfile) 66 | 67 | with ImageImpulseRunner(modelfile) as runner: 68 | try: 69 | model_info = runner.init() 70 | # model_info = runner.init(debug=True) # to get debug print out 71 | 72 | print('Loaded runner for "' + model_info['project']['owner'] + ' / ' + model_info['project']['name'] + '"') 73 | labels = model_info['model_parameters']['labels'] 74 | if len(args)>= 2: 75 | videoCaptureDeviceId = int(args[1]) 76 | else: 77 | port_ids = get_webcams() 78 | if len(port_ids) == 0: 79 | raise Exception('Cannot find any webcams') 80 | if len(args)<= 1 and len(port_ids)> 1: 81 | raise Exception("Multiple cameras found. Add the camera port ID as a second argument to use to this script") 82 | videoCaptureDeviceId = int(port_ids[0]) 83 | 84 | camera = cv2.VideoCapture(videoCaptureDeviceId) 85 | ret = camera.read()[0] 86 | if ret: 87 | backendName = camera.getBackendName() 88 | w = camera.get(3) 89 | h = camera.get(4) 90 | print("Camera %s (%s x %s) in port %s selected." %(backendName,h,w, videoCaptureDeviceId)) 91 | camera.release() 92 | else: 93 | raise Exception("Couldn't initialize selected camera.") 94 | 95 | next_frame = 0 # limit to ~10 fps here 96 | 97 | for img in runner.get_frames(videoCaptureDeviceId): 98 | if (next_frame > now()): 99 | time.sleep((next_frame - now()) / 1000) 100 | 101 | # make two cuts from the image, one on the left and one on the right 102 | features_l, cropped_l = runner.get_features_from_image(img, 'left') 103 | features_r, cropped_r = runner.get_features_from_image(img, 'right') 104 | 105 | # classify both 106 | res_l = runner.classify(features_l) 107 | res_r = runner.classify(features_r) 108 | 109 | cv2.imwrite('debug_l.jpg', cv2.cvtColor(cropped_l, cv2.COLOR_RGB2BGR)) 110 | cv2.imwrite('debug_r.jpg', cv2.cvtColor(cropped_r, cv2.COLOR_RGB2BGR)) 111 | 112 | def print_classification(res, tag): 113 | if "classification" in res["result"].keys(): 114 | print('%s: Result (%d ms.) 
' % (tag, res['timing']['dsp'] + res['timing']['classification']), end='') 115 | for label in labels: 116 | score = res['result']['classification'][label] 117 | print('%s: %.2f\t' % (label, score), end='') 118 | print('', flush=True) 119 | elif "bounding_boxes" in res["result"].keys(): 120 | print('%s: Found %d bounding boxes (%d ms.)' % (tag, len(res["result"]["bounding_boxes"]), res['timing']['dsp'] + res['timing']['classification'])) 121 | for bb in res["result"]["bounding_boxes"]: 122 | print('\t%s (%.2f): x=%d y=%d w=%d h=%d' % (bb['label'], bb['value'], bb['x'], bb['y'], bb['width'], bb['height'])) 123 | 124 | if "visual_anomaly_grid" in res["result"].keys(): 125 | print('Found %d visual anomalies (%d ms.)' % (len(res["result"]["visual_anomaly_grid"]), res['timing']['dsp'] + 126 | res['timing']['classification'] + 127 | res['timing']['anomaly'])) 128 | for grid_cell in res["result"]["visual_anomaly_grid"]: 129 | print('\t%s (%.2f): x=%d y=%d w=%d h=%d' % (grid_cell['label'], grid_cell['value'], grid_cell['x'], grid_cell['y'], grid_cell['width'], grid_cell['height'])) 130 | 131 | print_classification(res_l, 'LEFT') 132 | print_classification(res_r, 'RIGHT') 133 | 134 | next_frame = now() + 100 135 | 136 | finally: 137 | if (runner): 138 | runner.stop() 139 | 140 | if __name__ == "__main__": 141 | main(sys.argv[1:]) 142 | -------------------------------------------------------------------------------- /examples/image/classify-image.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import device_patches # Device specific patches for Jetson Nano (needs to be before importing cv2) # noqa: F401 4 | 5 | import cv2 6 | import os 7 | import sys 8 | import getopt 9 | from edge_impulse_linux.image import ImageImpulseRunner 10 | 11 | runner = None 12 | 13 | def help(): 14 | print('python classify-image.py ') 15 | 16 | def main(argv): 17 | try: 18 | opts, args = getopt.getopt(argv, "h", ["--help"]) 19 | except getopt.GetoptError: 20 | help() 21 | sys.exit(2) 22 | 23 | for opt, arg in opts: 24 | if opt in ('-h', '--help'): 25 | help() 26 | sys.exit() 27 | 28 | if len(args) != 2: 29 | help() 30 | sys.exit(2) 31 | 32 | model = args[0] 33 | 34 | dir_path = os.path.dirname(os.path.realpath(__file__)) 35 | modelfile = os.path.join(dir_path, model) 36 | 37 | print('MODEL: ' + modelfile) 38 | 39 | with ImageImpulseRunner(modelfile) as runner: 40 | try: 41 | model_info = runner.init() 42 | # model_info = runner.init(debug=True) # to get debug print out 43 | 44 | print('Loaded runner for "' + model_info['project']['owner'] + ' / ' + model_info['project']['name'] + '"') 45 | labels = model_info['model_parameters']['labels'] 46 | 47 | img = cv2.imread(args[1]) 48 | if img is None: 49 | print('Failed to load image', args[1]) 50 | exit(1) 51 | 52 | # imread returns images in BGR format, so we need to convert to RGB 53 | img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) 54 | 55 | # get_features_from_image also takes a crop direction arguments in case you don't have square images 56 | # features, cropped = runner.get_features_from_image(img) 57 | 58 | # this mode uses the same settings used in studio to crop and resize the input 59 | features, cropped = runner.get_features_from_image_auto_studio_settings(img) 60 | 61 | res = runner.classify(features) 62 | 63 | if "classification" in res["result"].keys(): 64 | print('Result (%d ms.) 
' % (res['timing']['dsp'] + res['timing']['classification']), end='') 65 | for label in labels: 66 | score = res['result']['classification'][label] 67 | print('%s: %.2f\t' % (label, score), end='') 68 | print('', flush=True) 69 | 70 | elif "bounding_boxes" in res["result"].keys(): 71 | print('Found %d bounding boxes (%d ms.)' % (len(res["result"]["bounding_boxes"]), res['timing']['dsp'] + res['timing']['classification'])) 72 | for bb in res["result"]["bounding_boxes"]: 73 | print('\t%s (%.2f): x=%d y=%d w=%d h=%d' % (bb['label'], bb['value'], bb['x'], bb['y'], bb['width'], bb['height'])) 74 | cropped = cv2.rectangle(cropped, (bb['x'], bb['y']), (bb['x'] + bb['width'], bb['y'] + bb['height']), (255, 0, 0), 1) 75 | 76 | if "visual_anomaly_grid" in res["result"].keys(): 77 | print('Found %d visual anomalies (%d ms.)' % (len(res["result"]["visual_anomaly_grid"]), res['timing']['dsp'] + 78 | res['timing']['classification'] + 79 | res['timing']['anomaly'])) 80 | for grid_cell in res["result"]["visual_anomaly_grid"]: 81 | print('\t%s (%.2f): x=%d y=%d w=%d h=%d' % (grid_cell['label'], grid_cell['value'], grid_cell['x'], grid_cell['y'], grid_cell['width'], grid_cell['height'])) 82 | cropped = cv2.rectangle(cropped, (grid_cell['x'], grid_cell['y']), (grid_cell['x'] + grid_cell['width'], grid_cell['y'] + grid_cell['height']), (255, 125, 0), 1) 83 | values = [grid_cell['value'] for grid_cell in res["result"]["visual_anomaly_grid"]] 84 | mean_value = sum(values) / len(values) 85 | max_value = max(values) 86 | print('Max value: %.2f' % max_value) 87 | print('Mean value: %.2f' % mean_value) 88 | 89 | # the image will be resized and cropped, save a copy of the picture here 90 | # so you can see what's being passed into the classifier 91 | cv2.imwrite('debug.jpg', cv2.cvtColor(cropped, cv2.COLOR_RGB2BGR)) 92 | 93 | finally: 94 | if (runner): 95 | runner.stop() 96 | 97 | if __name__ == "__main__": 98 | main(sys.argv[1:]) 99 | -------------------------------------------------------------------------------- /examples/image/classify-video.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import device_patches # Device specific patches for Jetson Nano (needs to be before importing cv2) 4 | 5 | import cv2 6 | import os 7 | import time 8 | import sys, getopt 9 | import numpy as np 10 | from edge_impulse_linux.image import ImageImpulseRunner 11 | 12 | runner = None 13 | # if you don't want to see a video preview, set this to False 14 | show_camera = True 15 | if (sys.platform == 'linux' and not os.environ.get('DISPLAY')): 16 | show_camera = False 17 | 18 | 19 | def help(): 20 | print('python classify-video.py ') 21 | 22 | def main(argv): 23 | try: 24 | opts, args = getopt.getopt(argv, "h", ["--help"]) 25 | except getopt.GetoptError: 26 | help() 27 | sys.exit(2) 28 | 29 | for opt, arg in opts: 30 | if opt in ('-h', '--help'): 31 | help() 32 | sys.exit() 33 | 34 | if len(args) != 2: 35 | help() 36 | sys.exit(2) 37 | 38 | model = args[0] 39 | 40 | dir_path = os.path.dirname(os.path.realpath(__file__)) 41 | modelfile = os.path.join(dir_path, model) 42 | 43 | print('MODEL: ' + modelfile) 44 | 45 | with ImageImpulseRunner(modelfile) as runner: 46 | try: 47 | model_info = runner.init() 48 | # model_info = runner.init(debug=True) # to get debug print out 49 | print('Loaded runner for "' + model_info['project']['owner'] + ' / ' + model_info['project']['name'] + '"') 50 | labels = model_info['model_parameters']['labels'] 51 | 52 | vidcap = 
cv2.VideoCapture(args[1]) 53 | sec = 0 54 | start_time = time.time() 55 | 56 | def getFrame(sec): 57 | vidcap.set(cv2.CAP_PROP_POS_MSEC,sec*1000) 58 | hasFrames,image = vidcap.read() 59 | if hasFrames: 60 | return image 61 | else: 62 | print('Failed to load frame', args[1]) 63 | exit(1) 64 | 65 | 66 | img = getFrame(sec) 67 | 68 | while img.size != 0: 69 | 70 | # imread returns images in BGR format, so we need to convert to RGB 71 | img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) 72 | 73 | # get_features_from_image also takes a crop direction arguments in case you don't have square images 74 | features, cropped = runner.get_features_from_image(img) 75 | 76 | # the image will be resized and cropped, save a copy of the picture here 77 | # so you can see what's being passed into the classifier 78 | cv2.imwrite('debug.jpg', cv2.cvtColor(cropped, cv2.COLOR_RGB2BGR)) 79 | 80 | res = runner.classify(features) 81 | 82 | if "classification" in res["result"].keys(): 83 | print('Result (%d ms.) ' % (res['timing']['dsp'] + res['timing']['classification']), end='') 84 | for label in labels: 85 | score = res['result']['classification'][label] 86 | print('%s: %.2f\t' % (label, score), end='') 87 | print('', flush=True) 88 | 89 | elif "bounding_boxes" in res["result"].keys(): 90 | print('Found %d bounding boxes (%d ms.)' % (len(res["result"]["bounding_boxes"]), res['timing']['dsp'] + res['timing']['classification'])) 91 | for bb in res["result"]["bounding_boxes"]: 92 | print('\t%s (%.2f): x=%d y=%d w=%d h=%d' % (bb['label'], bb['value'], bb['x'], bb['y'], bb['width'], bb['height'])) 93 | img = cv2.rectangle(cropped, (bb['x'], bb['y']), (bb['x'] + bb['width'], bb['y'] + bb['height']), (255, 0, 0), 1) 94 | 95 | if "visual_anomaly_grid" in res["result"].keys(): 96 | print('Found %d visual anomalies (%d ms.)' % (len(res["result"]["visual_anomaly_grid"]), res['timing']['dsp'] + 97 | res['timing']['classification'] + 98 | res['timing']['anomaly'])) 99 | for grid_cell in res["result"]["visual_anomaly_grid"]: 100 | print('\t%s (%.2f): x=%d y=%d w=%d h=%d' % (grid_cell['label'], grid_cell['value'], grid_cell['x'], grid_cell['y'], grid_cell['width'], grid_cell['height'])) 101 | img = cv2.rectangle(cropped, (grid_cell['x'], grid_cell['y']), (grid_cell['x'] + grid_cell['width'], grid_cell['y'] + grid_cell['height']), (255, 125, 0), 1) 102 | 103 | if (show_camera): 104 | cv2.imshow('edgeimpulse', cv2.cvtColor(cropped, cv2.COLOR_RGB2BGR)) 105 | if cv2.waitKey(1) == ord('q'): 106 | break 107 | 108 | sec = time.time() - start_time 109 | sec = round(sec, 2) 110 | print("Getting frame at: %.2f sec" % sec) 111 | img = getFrame(sec) 112 | finally: 113 | if (runner): 114 | runner.stop() 115 | 116 | if __name__ == "__main__": 117 | main(sys.argv[1:]) 118 | -------------------------------------------------------------------------------- /examples/image/classify.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import device_patches # Device specific patches for Jetson Nano (needs to be before importing cv2) 4 | 5 | import cv2 6 | import os 7 | import sys, getopt 8 | import signal 9 | import time 10 | from edge_impulse_linux.image import ImageImpulseRunner 11 | 12 | runner = None 13 | # if you don't want to see a camera preview, set this to False 14 | show_camera = True 15 | if (sys.platform == 'linux' and not os.environ.get('DISPLAY')): 16 | show_camera = False 17 | 18 | def now(): 19 | return round(time.time() * 1000) 20 | 21 | def get_webcams(): 22 | port_ids = 
[] 23 | for port in range(5): 24 | print("Looking for a camera in port %s:" %port) 25 | camera = cv2.VideoCapture(port) 26 | if camera.isOpened(): 27 | ret = camera.read()[0] 28 | if ret: 29 | backendName =camera.getBackendName() 30 | w = camera.get(3) 31 | h = camera.get(4) 32 | print("Camera %s (%s x %s) found in port %s " %(backendName,h,w, port)) 33 | port_ids.append(port) 34 | camera.release() 35 | return port_ids 36 | 37 | def sigint_handler(sig, frame): 38 | print('Interrupted') 39 | if (runner): 40 | runner.stop() 41 | sys.exit(0) 42 | 43 | signal.signal(signal.SIGINT, sigint_handler) 44 | 45 | def help(): 46 | print('python classify.py ') 47 | 48 | def main(argv): 49 | try: 50 | opts, args = getopt.getopt(argv, "h", ["--help"]) 51 | except getopt.GetoptError: 52 | help() 53 | sys.exit(2) 54 | 55 | for opt, arg in opts: 56 | if opt in ('-h', '--help'): 57 | help() 58 | sys.exit() 59 | 60 | if len(args) == 0: 61 | help() 62 | sys.exit(2) 63 | 64 | model = args[0] 65 | 66 | dir_path = os.path.dirname(os.path.realpath(__file__)) 67 | modelfile = os.path.join(dir_path, model) 68 | 69 | print('MODEL: ' + modelfile) 70 | 71 | with ImageImpulseRunner(modelfile) as runner: 72 | try: 73 | model_info = runner.init() 74 | # model_info = runner.init(debug=True) # to get debug print out 75 | print('Loaded runner for "' + model_info['project']['owner'] + ' / ' + model_info['project']['name'] + '"') 76 | labels = model_info['model_parameters']['labels'] 77 | if len(args)>= 2: 78 | videoCaptureDeviceId = int(args[1]) 79 | else: 80 | port_ids = get_webcams() 81 | if len(port_ids) == 0: 82 | raise Exception('Cannot find any webcams') 83 | if len(args)<= 1 and len(port_ids)> 1: 84 | raise Exception("Multiple cameras found. Add the camera port ID as a second argument to use to this script") 85 | videoCaptureDeviceId = int(port_ids[0]) 86 | 87 | camera = cv2.VideoCapture(videoCaptureDeviceId) 88 | ret = camera.read()[0] 89 | if ret: 90 | backendName = camera.getBackendName() 91 | w = camera.get(3) 92 | h = camera.get(4) 93 | print("Camera %s (%s x %s) in port %s selected." %(backendName,h,w, videoCaptureDeviceId)) 94 | camera.release() 95 | else: 96 | raise Exception("Couldn't initialize selected camera.") 97 | 98 | next_frame = 0 # limit to ~10 fps here 99 | 100 | for res, img in runner.classifier(videoCaptureDeviceId): 101 | if (next_frame > now()): 102 | time.sleep((next_frame - now()) / 1000) 103 | 104 | # print('classification runner response', res) 105 | 106 | if "classification" in res["result"].keys(): 107 | print('Result (%d ms.) 
' % (res['timing']['dsp'] + res['timing']['classification']), end='') 108 | for label in labels: 109 | score = res['result']['classification'][label] 110 | print('%s: %.2f\t' % (label, score), end='') 111 | print('', flush=True) 112 | elif "bounding_boxes" in res["result"].keys(): 113 | print('Found %d bounding boxes (%d ms.)' % (len(res["result"]["bounding_boxes"]), res['timing']['dsp'] + res['timing']['classification'])) 114 | for bb in res["result"]["bounding_boxes"]: 115 | print('\t%s (%.2f): x=%d y=%d w=%d h=%d' % (bb['label'], bb['value'], bb['x'], bb['y'], bb['width'], bb['height'])) 116 | img = cv2.rectangle(img, (bb['x'], bb['y']), (bb['x'] + bb['width'], bb['y'] + bb['height']), (255, 0, 0), 1) 117 | 118 | if (show_camera): 119 | cv2.imshow('edgeimpulse', cv2.cvtColor(img, cv2.COLOR_RGB2BGR)) 120 | if cv2.waitKey(1) == ord('q'): 121 | break 122 | 123 | next_frame = now() + 100 124 | finally: 125 | if (runner): 126 | runner.stop() 127 | 128 | if __name__ == "__main__": 129 | main(sys.argv[1:]) 130 | -------------------------------------------------------------------------------- /examples/image/device_patches.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | def get_device(): 4 | # On Jetson Nano `OPENBLAS_CORETYPE=ARMV8` needs to be set, otherwise including OpenCV 5 | # throws an illegal instruction error 6 | if (os.path.exists('/proc/device-tree/model')): 7 | with open('/proc/device-tree/model', 'r') as f: 8 | model = f.read() 9 | if ('NVIDIA Jetson Nano' in model): 10 | return 'jetson-nano' 11 | return 'unknown' 12 | 13 | device = get_device() 14 | if (device == 'jetson-nano'): 15 | os.environ['OPENBLAS_CORETYPE'] = 'ARMV8' 16 | -------------------------------------------------------------------------------- /examples/image/resize_demo.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import cv2 3 | from edge_impulse_linux.image import get_features_from_image_with_studio_mode 4 | 5 | 6 | def create_test_image(frame_buffer_cols, frame_buffer_rows): 7 | # Create an empty image with 3 channels (RGB) 8 | image_rgb888_packed = np.zeros((frame_buffer_rows, frame_buffer_cols, 3), dtype=np.uint8) 9 | 10 | for row in range(frame_buffer_rows): 11 | for col in range(frame_buffer_cols): 12 | # Change color a bit (light -> dark from top->down, so we know if the image looks good quickly) 13 | blue_intensity = int((255.0 / frame_buffer_rows) * row) 14 | green_intensity = int((255.0 / frame_buffer_cols) * col) 15 | 16 | # Set the pixel values 17 | image_rgb888_packed[row, col, 0] = blue_intensity # Blue channel 18 | image_rgb888_packed[row, col, 1] = green_intensity # Green channel 19 | image_rgb888_packed[row, col, 2] = 0 # Red channel is zero for test 20 | 21 | return image_rgb888_packed 22 | 23 | # %% 24 | 25 | def demo_mode(mode): 26 | frame_buffer_rows = 480 27 | frame_buffer_cols = 640 28 | test_image = create_test_image(frame_buffer_cols, frame_buffer_rows) 29 | 30 | _, test_image = get_features_from_image_with_studio_mode(test_image, mode, 200,200, False) 31 | 32 | # Display the image using OpenCV and ensure it stays open 33 | cv2.imshow(mode, test_image) 34 | cv2.waitKey(0) # Wait for a key press to close the image window 35 | cv2.destroyAllWindows() # Close the image window 36 | cv2.waitKey(1) # Patch for macOS 37 | 38 | demo_mode('fit-shortest') 39 | demo_mode('fit-longest') 40 | demo_mode('squash') 41 | 
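# What to expect from each window (per get_features_from_image_with_studio_mode
# in edge_impulse_linux/image.py): 'fit-shortest' center-crops the 640x480 test
# image to the target aspect ratio before resizing, so the gradient fills the
# whole 200x200 window with its left/right edges cut off; 'fit-longest'
# letterboxes, showing the full image with black padding top and bottom;
# 'squash' stretches the image to 200x200, distorting the aspect ratio.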
-------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = [ 3 | "setuptools>=42", 4 | "wheel" 5 | ] 6 | build-backend = "setuptools.build_meta" 7 | 8 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | numpy>=1.19 2 | PyAudio==0.2.11 3 | psutil>=5.8.0 4 | edge_impulse_linux 5 | six==1.16.0 -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [metadata] 2 | name = edge_impulse_linux 3 | version = 1.0.12 4 | author = EdgeImpulse Inc. 5 | author_email = hello@edgeimpulse.com 6 | description = Python runner for real-time ML classification 7 | long_description = file: README.md 8 | long_description_content_type = text/markdown 9 | url = https://github.com/edgeimpulse/linux-sdk-python 10 | project_urls = 11 | Bug Tracker = https://github.com/edgeimpulse/linux-sdk-python/issues 12 | classifiers = 13 | Programming Language :: Python :: 3 14 | Operating System :: OS Independent 15 | license_files = LICENSE 16 | [options] 17 | packages = find: 18 | python_requires = >=3.6 19 | install_requires = file: requirements.txt 20 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | import setuptools 2 | 3 | setuptools.setup() 4 | --------------------------------------------------------------------------------