├── LICENSE
├── README.md
├── session1
│   ├── README.md
│   └── slides.pdf
├── session2
│   ├── README.md
│   ├── ei-fan_status_detection_using_vibration-arduino-1.0.4.zip
│   ├── fan_status_detection
│   │   └── fan_status_detection.ino
│   ├── images
│   │   ├── arduino_nano.png
│   │   ├── inference_results.png
│   │   ├── install_accelerometer.png
│   │   ├── install_edgeimpulselibrary.png
│   │   └── serial_plotter.png
│   └── slides.pdf
└── session3
    ├── README.md
    ├── inferencing
    │   └── inferencing.ino
    ├── slides.pdf
    └── snoring_detection_inferencing.zip
/LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 
29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 
61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Introduction 2 | 3 | **MLT Edge AI Lab** was formed in 2020 when a team of researchers/engineers visited a local farm in Chiba to brainstorm solutions for problems faced by local farmers. The lab aims to create an OPEN environment where anyone can experiment with tools and apply their learnings to create quick prototypes. 4 | 5 | **MLT Edge AI Lab with Microcontrollers Series** is created by **Naveen Kumar** and **Yoovraj Shinde** to learn and share the steps in building EdgeAI applications. We will take a look at the general steps in the EdgeAI pipeline and the different hardware available for deploying the models. We will also take a look at two simple applications to strengthen our understanding of data collection, processing, model building, deployment, and inference.
6 | 7 | We are looking forward to active participation from the MLT community members to build cool apps together. We will do a short brainstorming session at the end to find good ideas that we can work on together throughout the series. 8 | 9 | Our goal is to help the MLT Edge AI community build their applications by the end of the series. 10 | 11 | Join us on the **#edge_ai_lab** channel on [MLT Slack](https://machinelearningtokyo.slack.com) 12 | 13 | [Feedback Survey](https://forms.gle/811apJr1yesA9EdK9) 14 | 15 | # Sessions 16 | | Date | Topic | Description | Presentation Link | Video | 17 | | ---- | ----- | ----------- | ----------------- | ----- | 18 | | 22 Aug 2021 | [Overview of EdgeAI Applications](session1/README.md) | Brief introduction to different hardware and a short talk about the pipelines in EdgeAI applications | [Slides](session1/slides.pdf) | [Video Recording](https://www.youtube.com/watch?v=S9Ejmi_3Vrw) | 19 | | 29 Aug 2021 | [Motion Based Application using IMU](session2/README.md) | Walkthrough of the different pipeline blocks for developing a motion-based Edge AI application, and the types of data that can be extracted from the IMU on the Arduino Nano BLE Sense board. Whiteboard & Brainstorming | [Slides](session2/slides.pdf) | [Video Recording](https://www.youtube.com/watch?v=jIzV5BJcH6Y) | 20 | | 05 Sep 2021 | [Audio Based Application using Microphone](session3/README.md) | Walkthrough of the different pipeline blocks for developing an audio-based Edge AI application, and the types of data that can be extracted from the microphone on the Arduino Nano BLE Sense board. Whiteboard & Brainstorming | [Slides](session3/slides.pdf) | [Video Recording](https://www.youtube.com/watch?v=Jxa_kI7ix5M) | 21 | | 19 Sep 2021 | Wrap-up Session | Summary and team presentations | TBA | TBA | 22 | 23 | # About Session Leads 24 | [Naveen Kumar](https://www.hackster.io/naveenbskumar) is a Senior Technical Scientist at RIKEN working on microbial DNA sequencing data analysis. 
He is a maker, tinkerer, embedded electronics hobbyist, and Edge AI enthusiast. In his free time, he enjoys watching movies, photography, and playing with microcontrollers. 25 | 26 | 27 | [Yoovraj Shinde](https://www.linkedin.com/in/yoovraj-shinde/) is an Engineering Team Manager in the Research Department of the Rakuten Institute of Technology, Tokyo. His background is in electronics engineering, and he loves to tinker with circuits, hardware, and robots for kids. He is interested in working at the intersection of machine learning and hardware, and in learning new things. 28 | 29 | # Code of Conduct 30 | MLT promotes an inclusive environment that values integrity, openness, and respect. https://github.com/Machine-Learning-Tokyo/MLT_starterkit 31 | 32 | -------------------------------------------------------------------------------- /session1/README.md: -------------------------------------------------------------------------------- 1 | # Overview of EdgeAI Applications 2 | Our first session gave a brief overview of what EdgeAI is and why it is important. We also discussed the generalized pipeline involved in EdgeAI applications. 3 | Two demo apps were also shared during the session. 4 | 5 | # Importance of EdgeAI 6 | - Offline Processing 7 | - User Privacy & Security 8 | - Low Power / Low Cost 9 | - Portability 10 | 11 | As the number of IoT devices increases, more and more data is generated on the user side. Sending this data to the cloud and running models there may not be a scalable approach as the number of data sources grows. Deploying models on the edge brings data and compute close together. 
12 | 13 | # Use cases for EdgeAI 14 | - Inertial Sensor/Environmental Sensor Analytics 15 | - Predictive Maintenance 16 | - Body Monitoring 17 | - Audio Analytics 18 | - Audio Scene Classification 19 | - Audio Event Detection 20 | - Keyword Recognition 21 | - Image Analytics 22 | - Surveillance and Monitoring 23 | - Autonomous Vehicles 24 | - Expression Analysis to improve shopping, advertising, or driving 25 | 26 | # General Blocks of EdgeAI pipeline 27 | Here are the general steps considered while designing EdgeAI applications. 28 | Note that depending on the use case, some blocks might not be necessary. 29 | ![image](https://user-images.githubusercontent.com/948498/130361281-dd5323e5-8708-4a29-9b93-4f585301d7ce.png) 30 | 31 | # Arduino Nano BLE Sense Board 32 | - 9-axis inertial sensor (accelerometer, gyroscope, magnetometer) 33 | - Humidity, barometric pressure, and temperature sensors 34 | - Gesture, proximity, and light color/intensity sensors 35 | - Microphone 36 | - Price ~30 USD / ~4000 JPY; find it at the [Arduino Website](https://store.arduino.cc/usa/nano-33-ble-sense) 37 | - 64 MHz clock speed 38 | - 1 MB flash memory 39 | - 256 KB SRAM 40 | ![image](https://user-images.githubusercontent.com/948498/130361392-5bb00523-ac4f-4d5e-8c4c-991a3faf45ac.png) 41 | 42 | # Helpful Links 43 | - [Getting Started with Arduino](https://www.arduino.cc/en/Guide) 44 | - [Arduino Nano BLE Sense](https://store.arduino.cc/usa/nano-33-ble-sense) 45 | 46 | 47 | -------------------------------------------------------------------------------- /session1/slides.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Machine-Learning-Tokyo/edgeai-lab-microcontroller-series/53aea1ad723ce7cef4e64a6cdf9253c480d09b50/session1/slides.pdf -------------------------------------------------------------------------------- /session2/README.md: -------------------------------------------------------------------------------- 1 | # Motion Based 
Application using IMU 2 | Our second session of the EdgeAI Lab with Microcontroller Series shows how to develop a simple motion-based application using the IMU on the Arduino Nano 33 BLE Sense board. 3 | 4 | # What is an IMU 5 | IMU stands for inertial measurement unit. It is an electronic device that measures and reports a body's specific force, angular rate, and orientation, using a combination of accelerometers, gyroscopes, and oftentimes magnetometers. 6 | ![image](images/arduino_nano.png) 7 | 8 | A simple tutorial to [Access IMU Data on Arduino Nano 33 BLE Sense Board](https://docs.arduino.cc/tutorials/nano-33-ble-sense/imu_accelerometer). 9 | 10 | A quick code snippet to visualize your data on the Arduino Serial Plotter: 11 | ``` 12 | void loop() { 13 | if (IMU.accelerationAvailable()) { 14 | float x, y, z; IMU.readAcceleration(x, y, z); 15 | Serial.print(x); Serial.print(","); 16 | Serial.print(y); Serial.print(","); 17 | Serial.println(z); 18 | delay(10); 19 | } 20 | } 21 | ``` 22 | ![image](images/serial_plotter.png) 23 | 24 | 25 | # Edge AI Industrial Use cases of Vibration Analysis 26 | - Predictive Maintenance of rotating machines (fans, motors, gears, etc.) as well as reciprocating machines (pistons, pumps, compressors, etc.) 27 | - Structural Health Monitoring of structures like bridges, pipes, turbine blades, etc. 28 | 29 | # Fan Status Detection Demo Project Using Vibration Analysis 30 | The aim of this project is to detect the speed state of a fan from the vibrations it creates. 31 | 32 | ## Background 33 | A fan consists of a single-phase induction motor connected to blades to create airflow. The speed of rotation is generally controlled by voltage-control circuits with a user interface that enables switching between states via simple buttons. 34 | 35 | The hypothesis is that the fan will generate a different set of vibrations at each motor speed, which can be used to distinguish the states. 
36 | 37 | ## Blocks of EdgeAI Pipeline 38 | Based on this, let us identify the different blocks of the generalized EdgeAI Pipeline. 39 | - Data Collection & Storage 40 | - The Arduino Nano 33 BLE Sense will be attached tightly to the fan. This helps the accelerometer capture stable vibration signals. 41 | - The vibration signals will be sent via USB serial to a laptop or local PC, and a program will then transfer the data to the cloud (Edge Impulse in our demo). 42 | ![image](https://user-images.githubusercontent.com/948498/131223377-8e8fef52-63ad-4aa2-b278-f0c1beac0c28.png) 43 | 44 | - Data Processing and EDA on the vibration data 45 | - Raw data collected from the sensor is in the time domain, so we need to process it appropriately and convert it to the frequency domain. 46 | - Standard steps like scaling and normalization will be carried out on the data. 47 | - A high-pass filter will be applied, since the vibration data we are collecting lies in a higher frequency range; this also eliminates low-frequency noise. 48 | - A Fast Fourier Transform (FFT) will then be applied to convert the data to the frequency domain. 49 | ![image](https://user-images.githubusercontent.com/948498/131224150-b33e4228-c2d8-4ff9-b754-847453c82448.png) 50 | 51 | 52 | - On-Device Deployment and Inference 53 | - Once the model is ready, create a deployment bundle for Arduino and export it. This can be imported into the Arduino IDE as a library. 54 | - For simple inference, we will use the Serial Monitor and the RGB LED to demonstrate how the code can be customized to adapt to any application. 55 | ![image](https://user-images.githubusercontent.com/948498/131223316-f4806959-2171-44aa-9110-742a47814386.png) 56 | 57 | 58 | ## Steps to reproduce the project 59 | ### Install Arduino IDE 1.8.14+ 60 | Please download the IDE from the [Arduino Website](https://www.arduino.cc/en/software). 
Install any version >= 1.8.14. 61 | 62 | Please check out our first session, [EdgeAI Lab with Microcontrollers Session #1: Overview of EdgeAI Applications](https://youtu.be/S9Ejmi_3Vrw?t=2412), if you want a quick video guide. 63 | 64 | 65 | ### Add Arduino Mbed OS Core for Nano 33 BLE Sense 66 | The next step is to install the libraries related to the Arduino Nano 33 BLE Sense board.
67 | There is a simple [QuickStart Guide](https://docs.arduino.cc/hardware/nano-33-ble-sense) to install the necessary packages. You can check whether the board can be programmed by trying out the simple LED blink example. 68 | 69 | 70 | ### Install Arduino_LSM9DS1 library using library manager 71 | Next, we need to install the [Arduino LSM9DS1 library](https://www.arduino.cc/en/Reference/ArduinoLSM9DS1) to be able to communicate with the accelerometer. The Arduino Nano 33 BLE Sense board uses the [LSM9DS1](https://content.arduino.cc/assets/Nano_BLE_Sense_lsm9ds1.pdf), which contains a 3D accelerometer, 3D gyroscope, and 3D magnetometer. 72 | 73 | You can install libraries from the Library Manager, found in the Arduino Tools menu > Manage Libraries. 74 | ![image](images/install_accelerometer.png) 75 | 76 | 77 | ### Include the Edge Impulse Library 78 | Please download the [Prebuilt Library](ei-fan_status_detection_using_vibration-arduino-1.0.4.zip). We need to add this to our project.
79 | To add it, select Sketch menu > Include Library > Add .ZIP Library ... 80 | If you get no errors, everything is fine. 81 | ![image](images/install_edgeimpulselibrary.png) 82 | 83 | ### Programming Arduino 84 | To run the example, open the file fan_status_detection.ino in Arduino. Since it is a .ino file, it will be opened automatically by the Arduino IDE if the IDE was installed successfully. If not, you can open it through the standard operation (File > Open > [Path to fan_status_detection.ino](fan_status_detection/fan_status_detection.ino)).
85 | Click the Verify button. The compilation and verification process might take a while. You should get the following message on successful completion. 86 | ``` 87 | Done Compiling 88 | 89 | Sketch uses 271672 bytes (27%) of program storage space. Maximum is 983040 bytes. 90 | Global variables use 47888 bytes (18%) of dynamic memory, leaving 214256 bytes for local variables. Maximum is 262144 bytes. 91 | ``` 92 | 93 | After that, click the Upload button to upload the firmware to the Arduino Nano board. 94 | ``` 95 | Sketch uses 271672 bytes (27%) of program storage space. Maximum is 983040 bytes. 96 | Global variables use 47888 bytes (18%) of dynamic memory, leaving 214256 bytes for local variables. Maximum is 262144 bytes. 97 | Device : nRF52840-QIAA 98 | Version : Arduino Bootloader (SAM-BA extended) 2.0 [Arduino:IKXYZ] 99 | Address : 0x0 100 | Pages : 256 101 | Page Size : 4096 bytes 102 | Total Size : 1024KB 103 | Planes : 1 104 | Lock Regions : 0 105 | Locked : none 106 | Security : false 107 | Erase flash 108 | 109 | Done in 0.001 seconds 110 | Write 271680 bytes to flash (67 pages) 111 | [==============================] 100% (67/67 pages) 112 | Done in 10.714 seconds 113 | ``` 114 | 115 | ### Inference from Arduino 116 | The inference results are sent via the serial port, configured with a baud rate of 115200 bps.
117 | You can open the Serial Monitor from the Tools menu. 118 | The probabilities of each class (Idle, High, Medium, and Low) are logged to the console. 119 | ![image](images/inference_results.png) 120 | 121 | -------------------------------------------------------------------------------- /session2/ei-fan_status_detection_using_vibration-arduino-1.0.4.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Machine-Learning-Tokyo/edgeai-lab-microcontroller-series/53aea1ad723ce7cef4e64a6cdf9253c480d09b50/session2/ei-fan_status_detection_using_vibration-arduino-1.0.4.zip -------------------------------------------------------------------------------- /session2/fan_status_detection/fan_status_detection.ino: -------------------------------------------------------------------------------- 1 | /* Includes ---------------------------------------------------------------- */ 2 | #include <Arduino_LSM9DS1.h> 3 | #include <fan_status_detection_using_vibration_inferencing.h> // Edge Impulse generated header; exact name assumed to match the deployed library 4 | 5 | /* Constant defines -------------------------------------------------------- */ 6 | #define CONVERT_G_TO_MS2 9.80665f 7 | 8 | /* Private variables ------------------------------------------------------- */ 9 | static bool debug_nn = false; // Set this to true to see e.g. 
features generated from the raw signal 10 | 11 | /** 12 | @brief Arduino setup function 13 | */ 14 | void setup() 15 | { 16 | // put your setup code here, to run once: 17 | Serial.begin(115200); 18 | Serial.println("Fan status detection inferencing demo"); 19 | 20 | pinMode(LEDR, OUTPUT); 21 | pinMode(LEDG, OUTPUT); 22 | pinMode(LEDB, OUTPUT); 23 | 24 | 25 | // all LEDs off 26 | digitalWrite(LEDR, HIGH); 27 | digitalWrite(LEDG, HIGH); 28 | digitalWrite(LEDB, HIGH); 29 | 30 | 31 | if (!IMU.begin()) { 32 | ei_printf("Failed to initialize IMU!\r\n"); 33 | } 34 | else { 35 | ei_printf("IMU initialized\r\n"); 36 | } 37 | 38 | if (EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME != 3) { 39 | ei_printf("ERR: EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME should be equal to 3 (the 3 sensor axes)\n"); 40 | return; 41 | } 42 | } 43 | 44 | /** 45 | @brief Printf function uses vsnprintf and output using Arduino Serial 46 | 47 | @param[in] format Variable argument list 48 | */ 49 | void ei_printf(const char *format, ...) { 50 | static char print_buf[1024] = { 0 }; 51 | 52 | va_list args; 53 | va_start(args, format); 54 | int r = vsnprintf(print_buf, sizeof(print_buf), format, args); 55 | va_end(args); 56 | 57 | if (r > 0) { 58 | Serial.write(print_buf); 59 | } 60 | } 61 | 62 | /** 63 | @brief Get data and run inferencing 64 | 65 | @param[in] debug Get debug info if true 66 | */ 67 | void loop() 68 | { 69 | ei_printf("\nStarting inferencing in 2 seconds...\n"); 70 | 71 | delay(2000); 72 | 73 | ei_printf("Sampling...\n"); 74 | 75 | // Allocate a buffer here for the values we'll read from the IMU 76 | float buffer[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { 0 }; 77 | 78 | for (size_t ix = 0; ix < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; ix += 3) { 79 | // Determine the next tick (and then sleep later) 80 | uint64_t next_tick = micros() + (EI_CLASSIFIER_INTERVAL_MS * 1000); 81 | 82 | IMU.readAcceleration(buffer[ix], buffer[ix + 1], buffer[ix + 2]); 83 | 84 | buffer[ix + 0] *= CONVERT_G_TO_MS2; 85 | buffer[ix + 1] 
*= CONVERT_G_TO_MS2; 86 | buffer[ix + 2] *= CONVERT_G_TO_MS2; 87 | 88 | delayMicroseconds(next_tick - micros()); 89 | } 90 | 91 | // Turn the raw buffer into a signal which we can then classify 92 | signal_t signal; 93 | int err = numpy::signal_from_buffer(buffer, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal); 94 | if (err != 0) { 95 | ei_printf("Failed to create signal from buffer (%d)\n", err); 96 | return; 97 | } 98 | 99 | // Run the classifier 100 | ei_impulse_result_t result = { 0 }; 101 | 102 | err = run_classifier(&signal, &result, debug_nn); 103 | if (err != EI_IMPULSE_OK) { 104 | ei_printf("ERR: Failed to run classifier (%d)\n", err); 105 | return; 106 | } 107 | 108 | // print the predictions 109 | ei_printf("Predictions "); 110 | ei_printf("(DSP: %d ms., Classification: %d ms.)", 111 | result.timing.dsp, result.timing.classification); 112 | ei_printf(": \n"); 113 | 114 | float max_val = 0.0f; 115 | int max_ix = 0; 116 | 117 | for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) { 118 | if (result.classification[ix].value > max_val) { 119 | max_val = result.classification[ix].value; 120 | max_ix = ix; 121 | } 122 | ei_printf(" %s: %.5f\n", result.classification[ix].label, result.classification[ix].value); 123 | } 124 | 125 | switch (max_ix) { 126 | case 0: // Fan status: high 127 | digitalWrite(LEDR, LOW); 128 | digitalWrite(LEDG, HIGH); 129 | digitalWrite(LEDB, HIGH); 130 | break; 131 | 132 | case 1: // Fan status: idle; all LEDs off 133 | digitalWrite(LEDR, HIGH); 134 | digitalWrite(LEDG, HIGH); 135 | digitalWrite(LEDB, HIGH); 136 | break; 137 | 138 | case 2: // Fan status: low 139 | digitalWrite(LEDR, HIGH); 140 | digitalWrite(LEDG, HIGH); 141 | digitalWrite(LEDB, LOW); 142 | break; 143 | 144 | case 3: // Fan status: medium 145 | digitalWrite(LEDR, HIGH); 146 | digitalWrite(LEDG, LOW); 147 | digitalWrite(LEDB, HIGH); 148 | break; 149 | default: 150 | ei_printf("Wrong class index\n"); 151 | } 152 | } 153 | 154 | #if !defined(EI_CLASSIFIER_SENSOR) || 
EI_CLASSIFIER_SENSOR != EI_CLASSIFIER_SENSOR_ACCELEROMETER 155 | #error "Invalid model for current sensor" 156 | #endif 157 | -------------------------------------------------------------------------------- /session2/images/arduino_nano.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Machine-Learning-Tokyo/edgeai-lab-microcontroller-series/53aea1ad723ce7cef4e64a6cdf9253c480d09b50/session2/images/arduino_nano.png -------------------------------------------------------------------------------- /session2/images/inference_results.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Machine-Learning-Tokyo/edgeai-lab-microcontroller-series/53aea1ad723ce7cef4e64a6cdf9253c480d09b50/session2/images/inference_results.png -------------------------------------------------------------------------------- /session2/images/install_accelerometer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Machine-Learning-Tokyo/edgeai-lab-microcontroller-series/53aea1ad723ce7cef4e64a6cdf9253c480d09b50/session2/images/install_accelerometer.png -------------------------------------------------------------------------------- /session2/images/install_edgeimpulselibrary.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Machine-Learning-Tokyo/edgeai-lab-microcontroller-series/53aea1ad723ce7cef4e64a6cdf9253c480d09b50/session2/images/install_edgeimpulselibrary.png -------------------------------------------------------------------------------- /session2/images/serial_plotter.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Machine-Learning-Tokyo/edgeai-lab-microcontroller-series/53aea1ad723ce7cef4e64a6cdf9253c480d09b50/session2/images/serial_plotter.png 
-------------------------------------------------------------------------------- /session2/slides.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Machine-Learning-Tokyo/edgeai-lab-microcontroller-series/53aea1ad723ce7cef4e64a6cdf9253c480d09b50/session2/slides.pdf -------------------------------------------------------------------------------- /session3/README.md: -------------------------------------------------------------------------------- 1 | # Audio Based Application using Microphone 2 | In our third session of the EdgeAI Lab with Microcontrollers series, we show how to develop a simple audio-based application using the digital microphone on the Arduino Nano 33 BLE Sense board. 3 | 4 | # What is a Digital Microphone 5 | Microphones are components that convert physical sound into digital data. They are commonly used in mobile terminals, speech recognition systems, and even gaming and virtual reality input devices. 6 | ![image](https://user-images.githubusercontent.com/948498/132099176-57e317a7-fd97-4a57-8e2a-73f0a84a62d4.png) 7 | 8 | Here is a simple tutorial to [Create Sound Meter using Microphone on Arduino Nano 33 BLE Sense Board](https://docs.arduino.cc/tutorials/nano-33-ble-sense/microphone_sensor). 9 | 10 | 11 | # Edge AI Applications using Microphone 12 | - Detect Audio Events 13 | - Detect Human Voice 14 | - Keyword Detection 15 | - Detect Respiratory Sounds 16 | 17 | # Snoring Sound Detection 18 | The aim of this project is to detect snoring during sleep using a simple microphone. 19 | 20 | ## Background 21 | Snoring is caused by the rattling and vibration of tissues near the airway in the back of the throat. These sounds occur in the low-frequency range and can be easily detected using simple models. 22 | 23 | ## Blocks of EdgeAI Pipeline 24 | Based on this, let us identify the different blocks of the generalized EdgeAI Pipeline. 
25 | - Data Collection 26 | - We will use the public [AudioSet](http://research.google.com/audioset/) dataset for training a 1D CNN model. 27 | 28 | - Digital Signal Processing 29 | - Annotating the snore frames in the raw audio data and extracting the data for training. 30 | 31 | - On Device Deployment and Inference 32 | - The code uses multithreading. One thread samples the audio stream and another thread performs DSP on the data window. A vibration motor is connected to indicate the results: it vibrates when snoring activity is detected. 33 | - The results are also sent via the serial port, so you can check them using the Serial Monitor. 34 | 35 | ## Steps to reproduce the project 36 | ### Install Arduino IDE 1.8.14+ 37 | Please download the IDE from the [Arduino Website](https://www.arduino.cc/en/software). Install any version >= 1.8.14. 38 | 39 | Please check out our first session [EdgeAI Lab with Microcontrollers Session #1: Overview of EdgeAI Applications](https://youtu.be/S9Ejmi_3Vrw?t=2412) if you want a quick video guide. 40 | 41 | 42 | ### Add Arduino Mbed OS Core for Nano 33 BLE Sense 43 | The next step is to install the board support package for the Arduino Nano 33 BLE Sense board.
44 | There is a simple [QuickStart Guide](https://docs.arduino.cc/hardware/nano-33-ble-sense) to install the necessary packages. You can check that the board can be programmed by trying out the simple LED Blink example. 45 | 46 | 47 | ### Install Libraries using the Library Manager 48 | #### RingBuf Library 49 | You can install libraries from the Library Manager, which you can open from the Tools menu > Manage Libraries > Library Manager. Search for "RingBuf" and install it. 50 | ![image](https://user-images.githubusercontent.com/948498/132113967-d86c06ff-8262-48f9-b668-9425cfc9b32d.png) 51 | 52 | 53 | ### Include the Edge Impulse Library 54 | Please download the [Prebuilt Library](snoring_detection_inferencing.zip). We need to add this to our project.
55 | To add it, select Sketch menu > Include Library > Add .ZIP Library ... 56 | If no error is reported, the library was added successfully. 57 | ![image](https://user-images.githubusercontent.com/948498/132113862-d9e25ed2-35f2-4eec-ba16-1f28bb485058.png) 58 | 59 | 60 | ### Programming Arduino 61 | To run the example, open the file inferencing.ino in Arduino. Since it is a .ino file, it will be opened automatically by the Arduino IDE if the IDE was installed successfully. If not, you can open it through the standard operation (File > Open > [Path to inferencing.ino](inferencing/inferencing.ino)).
62 | Click the Verify button. The compilation and verification process might take a while. You should get the following message on successful completion. 63 | ``` 64 | Done Compiling 65 | 66 | Sketch uses 523696 bytes (53%) of program storage space. Maximum is 983040 bytes. 67 | Global variables use 45856 bytes (17%) of dynamic memory, leaving 216288 bytes for local variables. Maximum is 262144 bytes. 68 | ``` 69 | 70 | After that, click the Upload button to upload the firmware to the Arduino Nano board. 71 | ``` 72 | Sketch uses 523696 bytes (53%) of program storage space. Maximum is 983040 bytes. 73 | Global variables use 45856 bytes (17%) of dynamic memory, leaving 216288 bytes for local variables. Maximum is 262144 bytes. 74 | Device : nRF52840-QIAA 75 | Version : Arduino Bootloader (SAM-BA extended) 2.0 [Arduino:IKXYZ] 76 | Address : 0x0 77 | Pages : 256 78 | Page Size : 4096 bytes 79 | Total Size : 1024KB 80 | Planes : 1 81 | Lock Regions : 0 82 | Locked : none 83 | Security : false 84 | Erase flash 85 | 86 | Done in 0.001 seconds 87 | Write 524432 bytes to flash (129 pages) 88 | [==============================] 100% (129/129 pages) 89 | Done in 20.619 seconds 90 | ``` 91 | 92 | ### Inference from Arduino 93 | The inference results are sent via the serial port configured with a baud rate of 115200 bps.
94 | You can open up the Serial Monitor from the Tools menu. 95 | 96 | 97 | 98 | 99 | -------------------------------------------------------------------------------- /session3/inferencing/inferencing.ino: -------------------------------------------------------------------------------- 1 | // If your target is limited in memory remove this macro to save 10K RAM 2 | #define EIDSP_QUANTIZE_FILTERBANK 0 3 | 4 | /** 5 | Define the number of slices per model window. E.g. a model window of 1000 ms 6 | with slices per model window set to 4 results in a slice size of 250 ms. 7 | For more info: https://docs.edgeimpulse.com/docs/continuous-audio-sampling 8 | */ 9 | #define EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW 3 10 | 11 | /* Includes ---------------------------------------------------------------- */ 12 | #include <PDM.h> // NOTE: header names were stripped from this dump; restored here from the APIs used below 13 | #include <snoring_detection_inferencing.h> 14 | #include <RingBuf.h> 15 | #include <Scheduler.h> 16 | 17 | /** Audio buffers, pointers and selectors */ 18 | typedef struct { 19 | signed short *buffers[2]; 20 | unsigned char buf_select; 21 | unsigned char buf_ready; 22 | unsigned int buf_count; 23 | unsigned int n_samples; 24 | } inference_t; 25 | 26 | static inference_t inference; 27 | static bool record_ready = false; 28 | static signed short *sampleBuffer; 29 | static bool debug_nn = false; // Set this to true to see e.g. 
features generated from the raw signal 30 | static int print_results = -(EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW); 31 | 32 | bool alert = false; 33 | 34 | RingBuf<uint8_t, 10> last_ten_predictions; // template arguments were stripped from this dump; type/size inferred from usage 35 | int greenLED = 23; 36 | int vibratorPin = 3; // Vibration motor connected to D3 PWM pin 37 | bool is_motor_running = false; 38 | 39 | void run_vibration() 40 | { 41 | if (alert) 42 | { 43 | is_motor_running = true; 44 | 45 | for (int i = 0; i < 2; i++) 46 | { 47 | analogWrite(vibratorPin, 30); 48 | delay(1000); 49 | analogWrite(vibratorPin, 0); 50 | delay(1500); 51 | } 52 | 53 | is_motor_running = false; 54 | } else { 55 | if (is_motor_running) 56 | { 57 | analogWrite(vibratorPin, 0); 58 | } 59 | } 60 | yield(); 61 | } 62 | 63 | 64 | 65 | /** 66 | @brief Printf function; uses vsnprintf and outputs using Arduino Serial 67 | 68 | @param[in] format Variable argument list 69 | */ 70 | void ei_printf(const char *format, ...) { 71 | static char print_buf[1024] = { 0 }; 72 | 73 | va_list args; 74 | va_start(args, format); 75 | int r = vsnprintf(print_buf, sizeof(print_buf), format, args); 76 | va_end(args); 77 | 78 | if (r > 0) { 79 | Serial.write(print_buf); 80 | } 81 | } 82 | 83 | /** 84 | @brief PDM buffer full callback 85 | Get data and call audio thread callback 86 | */ 87 | static void pdm_data_ready_inference_callback(void) 88 | { 89 | int bytesAvailable = PDM.available(); 90 | 91 | // read into the sample buffer 92 | int bytesRead = PDM.read((char *)&sampleBuffer[0], bytesAvailable); 93 | 94 | if (record_ready == true) { 95 | for (int i = 0; i < bytesRead >> 1; i++) { 96 | inference.buffers[inference.buf_select][inference.buf_count++] = sampleBuffer[i]; 97 | 98 | if (inference.buf_count >= inference.n_samples) { 99 | inference.buf_select ^= 1; 100 | inference.buf_count = 0; 101 | inference.buf_ready = 1; 102 | } 103 | } 104 | } 105 | } 106 | 107 | /** 108 | @brief Init inferencing struct and setup/start PDM 109 | 110 | @param[in] n_samples The n samples 111 | 112 | @return { 
true on success, false on failure } 113 | */ 114 | static bool microphone_inference_start(uint32_t n_samples) 115 | { 116 | inference.buffers[0] = (signed short *)malloc(n_samples * sizeof(signed short)); 117 | 118 | if (inference.buffers[0] == NULL) { 119 | return false; 120 | } 121 | 122 | inference.buffers[1] = (signed short *)malloc(n_samples * sizeof(signed short)); 123 | 124 | if (inference.buffers[1] == NULL) { 125 | free(inference.buffers[0]); 126 | return false; 127 | } 128 | 129 | sampleBuffer = (signed short *)malloc((n_samples >> 1) * sizeof(signed short)); 130 | 131 | if (sampleBuffer == NULL) { 132 | free(inference.buffers[0]); 133 | free(inference.buffers[1]); 134 | return false; 135 | } 136 | 137 | inference.buf_select = 0; 138 | inference.buf_count = 0; 139 | inference.n_samples = n_samples; 140 | inference.buf_ready = 0; 141 | 142 | // configure the data receive callback 143 | PDM.onReceive(&pdm_data_ready_inference_callback); 144 | 145 | PDM.setBufferSize((n_samples >> 1) * sizeof(int16_t)); 146 | 147 | // initialize PDM with: 148 | // - one channel (mono mode) 149 | // - a 16 kHz sample rate 150 | if (!PDM.begin(1, EI_CLASSIFIER_FREQUENCY)) { 151 | ei_printf("Failed to start PDM!"); 152 | } 153 | 154 | // set the gain, defaults to 20 155 | PDM.setGain(127); 156 | 157 | record_ready = true; 158 | 159 | return true; 160 | } 161 | 162 | /** 163 | @brief Wait on new data 164 | 165 | @return True when finished 166 | */ 167 | static bool microphone_inference_record(void) 168 | { 169 | bool ret = true; 170 | 171 | if (inference.buf_ready == 1) { 172 | ei_printf( 173 | "Error sample buffer overrun. 
Decrease the number of slices per model window " 174 | "(EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)\n"); 175 | ret = false; 176 | } 177 | 178 | while (inference.buf_ready == 0) { 179 | delay(1); 180 | } 181 | 182 | inference.buf_ready = 0; 183 | 184 | return ret; 185 | } 186 | 187 | /** 188 | Get raw audio signal data 189 | */ 190 | static int microphone_audio_signal_get_data(size_t offset, size_t length, float * out_ptr) 191 | { 192 | numpy::int16_to_float(&inference.buffers[inference.buf_select ^ 1][offset], out_ptr, length); 193 | 194 | return 0; 195 | } 196 | 197 | /** 198 | @brief Stop PDM and release buffers 199 | */ 200 | static void microphone_inference_end(void) 201 | { 202 | PDM.end(); 203 | free(inference.buffers[0]); 204 | free(inference.buffers[1]); 205 | free(sampleBuffer); 206 | } 207 | 208 | 209 | void setup() 210 | { 211 | Serial.begin(115200); 212 | 213 | pinMode(greenLED, OUTPUT); 214 | digitalWrite(greenLED, LOW); 215 | pinMode(vibratorPin, OUTPUT); // sets the pin as output 216 | 217 | // summary of inferencing settings (from model_metadata.h) 218 | ei_printf("Inferencing settings:\n"); 219 | ei_printf("\tInterval: %.2f ms.\n", (float)EI_CLASSIFIER_INTERVAL_MS); 220 | ei_printf("\tFrame size: %d\n", EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE); 221 | ei_printf("\tSample length: %d ms.\n", EI_CLASSIFIER_RAW_SAMPLE_COUNT / 16); 222 | ei_printf("\tNo. 
of classes: %d\n", sizeof(ei_classifier_inferencing_categories) / 223 | sizeof(ei_classifier_inferencing_categories[0])); 224 | 225 | run_classifier_init(); 226 | if (microphone_inference_start(EI_CLASSIFIER_SLICE_SIZE) == false) { 227 | ei_printf("ERR: Failed to setup audio sampling\r\n"); 228 | return; 229 | } 230 | 231 | Scheduler.startLoop(run_vibration); 232 | } 233 | 234 | void loop() 235 | { 236 | 237 | bool m = microphone_inference_record(); 238 | 239 | if (!m) { 240 | ei_printf("ERR: Failed to record audio...\n"); 241 | return; 242 | } 243 | 244 | signal_t signal; 245 | signal.total_length = EI_CLASSIFIER_SLICE_SIZE; 246 | signal.get_data = &microphone_audio_signal_get_data; 247 | ei_impulse_result_t result = {0}; 248 | 249 | EI_IMPULSE_ERROR r = run_classifier_continuous(&signal, &result, debug_nn); 250 | if (r != EI_IMPULSE_OK) { 251 | ei_printf("ERR: Failed to run classifier (%d)\n", r); 252 | return; 253 | } 254 | 255 | if (++print_results >= (EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)) { 256 | // print the predictions 257 | ei_printf("Predictions "); 258 | ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)", 259 | result.timing.dsp, result.timing.classification, result.timing.anomaly); 260 | ei_printf(": \n"); 261 | 262 | for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) { 263 | ei_printf(" %s: %.5f\n", result.classification[ix].label, 264 | result.classification[ix].value); 265 | 266 | if (ix == 1 && !is_motor_running && result.classification[ix].value > 0.9) { 267 | if (last_ten_predictions.isFull()) { 268 | uint8_t k; 269 | last_ten_predictions.pop(k); 270 | } 271 | 272 | last_ten_predictions.push(ix); 273 | 274 | uint8_t count = 0; 275 | 276 | for (uint8_t j = 0; j < last_ten_predictions.size(); j++) { 277 | count += last_ten_predictions[j]; 278 | //ei_printf("%d, ", last_ten_predictions[j]); 279 | } 280 | //ei_printf("\n"); 281 | ei_printf("Snoring\n"); 282 | digitalWrite(greenLED, HIGH); 283 | if (count >= 5) { 284 | ei_printf("Trigger 
vibration motor\n"); 285 | alert = true; 286 | } 287 | } else { 288 | ei_printf("Noise\n"); 289 | digitalWrite(greenLED, LOW); 290 | alert = false; 291 | } 292 | 293 | print_results = 0; 294 | } 295 | } 296 | } 297 | 298 | 299 | #if !defined(EI_CLASSIFIER_SENSOR) || EI_CLASSIFIER_SENSOR != EI_CLASSIFIER_SENSOR_MICROPHONE 300 | #error "Invalid model for current sensor." 301 | #endif 302 | -------------------------------------------------------------------------------- /session3/slides.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Machine-Learning-Tokyo/edgeai-lab-microcontroller-series/53aea1ad723ce7cef4e64a6cdf9253c480d09b50/session3/slides.pdf -------------------------------------------------------------------------------- /session3/snoring_detection_inferencing.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Machine-Learning-Tokyo/edgeai-lab-microcontroller-series/53aea1ad723ce7cef4e64a6cdf9253c480d09b50/session3/snoring_detection_inferencing.zip --------------------------------------------------------------------------------