├── .github ├── ISSUE_TEMPLATE │ ├── bug_report.md │ ├── feature-proposal.md │ ├── feature_request.md │ └── vulnerability-report.md ├── release-drafter.yml └── workflows │ ├── codeql-analysis.yml │ └── release-drafter.yml ├── CODE-OF-CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── arduino └── all_nano_33_ble_sense │ ├── all_model.cpp │ ├── all_model.h │ ├── all_nano_33_ble_sense.ino │ ├── arduino_main.cpp │ ├── main_functions.h │ ├── model_settings.cpp │ └── model_settings.h ├── assets └── images │ ├── all-arduino-nano-33-ble-classifier.gif │ ├── all-idb.jpg │ ├── bug-report.jpg │ ├── feature-proposals.jpg │ ├── feature-request.jpg │ ├── fork.jpg │ ├── project-banner.jpg │ └── repo-issues.jpg ├── classifier.py ├── configuration └── config.json ├── docs ├── img │ ├── arduino-ide.jpg │ ├── arduino-nano-33-ble-sense-sd_bb.jpg │ ├── plots │ │ ├── accuracy.png │ │ ├── auc.png │ │ ├── confusion-matrix.png │ │ ├── loss.png │ │ ├── precision.png │ │ └── recall.png │ └── project-banner.jpg ├── index.md ├── installation │ ├── arduino.md │ └── ubuntu.md └── usage │ ├── arduino.md │ ├── notebooks.md │ └── python.md ├── logs └── .gitkeep ├── mkdocs.yml ├── model ├── all_nano_33_ble_sense.cc ├── all_nano_33_ble_sense.h5 ├── all_nano_33_ble_sense.json ├── all_nano_33_ble_sense.tflite ├── data │ ├── README.md │ ├── test │ │ └── README.md │ └── train │ │ └── README.md └── plots │ ├── .gitkeep │ ├── accuracy.png │ ├── auc.png │ ├── confusion-matrix.png │ ├── loss.png │ ├── precision.png │ └── recall.png ├── modules ├── AbstractClassifier.py ├── AbstractData.py ├── AbstractModel.py ├── AbstractServer.py ├── __init__.py ├── augmentation.py ├── data.py ├── helpers.py ├── model.py └── server.py ├── notebooks └── classifier.ipynb └── scripts └── install.sh /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the bug** 11 | A clear and concise description of what the bug is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 22 | 23 | **Screenshots** 24 | If applicable, add screenshots to help explain your problem. 25 | 26 | **Desktop (please complete the following information):** 27 | - OS: [e.g. iOS] 28 | - Browser [e.g. chrome, safari] 29 | - Version [e.g. 22] 30 | 31 | **Smartphone (please complete the following information):** 32 | - Device: [e.g. iPhone6] 33 | - OS: [e.g. iOS8.1] 34 | - Browser [e.g. stock browser, safari] 35 | - Version [e.g. 22] 36 | 37 | **Additional context** 38 | Add any other context about the problem here. 39 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature-proposal.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature proposal 3 | about: Suggest an proposal for this project 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Is your feature proposal related to a problem? Please describe.** 11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 
12 | 13 | **Describe the solution you'd like to implement** 14 | A clear and concise description of what you want to make happen. 15 | 16 | **Describe alternatives you've considered** 17 | A clear and concise description of any alternative solutions or features you've considered. 18 | 19 | **Additional context** 20 | Add any other context or screenshots about the feature request here. 21 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Suggest an idea for this project 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Is your feature request related to a problem? Please describe.** 11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 12 | 13 | **Describe the solution you'd like** 14 | A clear and concise description of what you want to happen. 15 | 16 | **Describe alternatives you've considered** 17 | A clear and concise description of any alternative solutions or features you've considered. 18 | 19 | **Additional context** 20 | Add any other context or screenshots about the feature request here. 21 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/vulnerability-report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Vulnerability report 3 | about: Create a vulnerability report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the vulnerability ** 11 | A clear and concise description of what the vulnerability is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 22 | 23 | **Screenshots** 24 | If applicable, add screenshots to help explain your problem. 25 | 26 | **Desktop (please complete the following information):** 27 | - OS: [e.g. iOS] 28 | - Browser [e.g. chrome, safari] 29 | - Version [e.g. 22] 30 | 31 | **Smartphone (please complete the following information):** 32 | - Device: [e.g. iPhone6] 33 | - OS: [e.g. iOS8.1] 34 | - Browser [e.g. stock browser, safari] 35 | - Version [e.g. 22] 36 | 37 | **Additional context** 38 | Add any other context about the problem here. 39 | -------------------------------------------------------------------------------- /.github/release-drafter.yml: -------------------------------------------------------------------------------- 1 | name-template: 'v$RESOLVED_VERSION 🌈' 2 | tag-template: 'v$RESOLVED_VERSION' 3 | categories: 4 | - title: '🚀 Features' 5 | labels: 6 | - 'feature' 7 | - 'enhancement' 8 | - title: '🐛 Bug Fixes' 9 | labels: 10 | - 'fix' 11 | - 'bugfix' 12 | - 'bug' 13 | - title: '🧰 Maintenance' 14 | label: 'chore' 15 | change-template: '- $TITLE @$AUTHOR (#$NUMBER)' 16 | change-title-escapes: '\<*_&' # You can add # and @ to disable mentions, and add ` to disable code blocks. 
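# The version-resolver section below maps the labels on merged pull requests (major/minor/patch) to the next release version.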
17 | version-resolver: 18 | major: 19 | labels: 20 | - 'major' 21 | minor: 22 | labels: 23 | - 'minor' 24 | patch: 25 | labels: 26 | - 'patch' 27 | default: patch 28 | template: | 29 | ## Changes 30 | $CHANGES 31 | -------------------------------------------------------------------------------- /.github/workflows/codeql-analysis.yml: -------------------------------------------------------------------------------- 1 | # For most projects, this workflow file will not need changing; you simply need 2 | # to commit it to your repository. 3 | # 4 | # You may wish to alter this file to override the set of languages analyzed, 5 | # or to provide custom queries or build logic. 6 | # 7 | # ******** NOTE ******** 8 | # We have attempted to detect the languages in your repository. Please check 9 | # the `language` matrix defined below to confirm you have the correct set of 10 | # supported CodeQL languages. 11 | # 12 | name: "CodeQL" 13 | 14 | on: 15 | push: 16 | branches: [ main ] 17 | pull_request: 18 | # The branches below must be a subset of the branches above 19 | branches: [ main ] 20 | schedule: 21 | - cron: '32 14 * * 5' 22 | 23 | jobs: 24 | analyze: 25 | name: Analyze 26 | runs-on: ubuntu-latest 27 | permissions: 28 | actions: read 29 | contents: read 30 | security-events: write 31 | 32 | strategy: 33 | fail-fast: false 34 | matrix: 35 | language: [ 'cpp', 'python' ] 36 | # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ] 37 | # Learn more: 38 | # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed 39 | 40 | steps: 41 | - name: Checkout repository 42 | uses: actions/checkout@v2 43 | 44 | # Initializes the CodeQL tools for scanning. 45 | - name: Initialize CodeQL 46 | uses: github/codeql-action/init@v1 47 | with: 48 | languages: ${{ matrix.language }} 49 | # If you wish to specify custom queries, you can do so here or in a config file. 50 | # By default, queries listed here will override any specified in a config file. 51 | # Prefix the list here with "+" to use these queries and those in the config file. 52 | # queries: ./path/to/local/query, your-org/your-repo/queries@main 53 | 54 | # Autobuild attempts to build any compiled languages (C/C++, C#, or Java). 55 | # If this step fails, then you should remove it and run the build manually (see below) 56 | - name: Autobuild 57 | uses: github/codeql-action/autobuild@v1 58 | 59 | # ℹ️ Command-line programs to run using the OS shell. 
60 | # 📚 https://git.io/JvXDl 61 | 62 | # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines 63 | # and modify them (or add more) to build your code if your project 64 | # uses a compiled language 65 | 66 | #- run: | 67 | # make bootstrap 68 | # make release 69 | 70 | - name: Perform CodeQL Analysis 71 | uses: github/codeql-action/analyze@v1 72 | -------------------------------------------------------------------------------- /.github/workflows/release-drafter.yml: -------------------------------------------------------------------------------- 1 | name: Release Drafter 2 | 3 | on: 4 | push: 5 | # branches to consider in the event; optional, defaults to all 6 | branches: 7 | - master 8 | # pull_request event is required only for autolabeler 9 | pull_request: 10 | # Only following types are handled by the action, but one can default to all as well 11 | types: [opened, reopened, synchronize] 12 | 13 | jobs: 14 | update_release_draft: 15 | runs-on: ubuntu-latest 16 | steps: 17 | # (Optional) GitHub Enterprise requires GHE_HOST variable set 18 | #- name: Set GHE_HOST 19 | # run: | 20 | # echo "GHE_HOST=${GITHUB_SERVER_URL##https:\/\/}" >> $GITHUB_ENV 21 | 22 | # Drafts your next Release notes as Pull Requests are merged into "master" 23 | - uses: release-drafter/release-drafter@v5 24 | # (Optional) specify config name to use, relative to .github/. Default: release-drafter.yml 25 | # with: 26 | # config-name: my-config.yml 27 | # disable-autolabeler: true 28 | env: 29 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} 30 | -------------------------------------------------------------------------------- /CODE-OF-CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity 6 | and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. 7 | 8 | We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. 
9 | 10 | ## Our Standards 11 | 12 | Examples of behavior that contributes to a positive environment for our community include: 13 | 14 | - Demonstrating empathy and kindness toward other people 15 | - Being respectful of differing opinions, viewpoints, and experiences 16 | - Giving and gracefully accepting constructive feedback 17 | - Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience 18 | - Focusing on what is best not just for us as individuals, but for the overall community 19 | 20 | Examples of unacceptable behavior include: 21 | 22 | - The use of sexualized language or imagery, and sexual attention or advances of any kind 23 | - Trolling, insulting or derogatory comments, and personal or political attacks 24 | - Public or private harassment 25 | - Publishing others' private information, such as a physical or email address, without their explicit permission 26 | - Other conduct which could reasonably be considered inappropriate in a professional setting 27 | 28 | ## Enforcement Responsibilities 29 | 30 | Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, 31 | or harmful. 32 | 33 | Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. 34 | 35 | ## Scope 36 | 37 | This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting 38 | via an official social media account, or acting as an appointed representative at an online or offline event. 39 | 40 | ## Enforcement 41 | 42 | Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement. All complaints will be reviewed and investigated promptly and fairly. 43 | 44 | All community leaders are obligated to respect the privacy and security of the reporter of any incident. 45 | 46 | ## Enforcement Guidelines 47 | 48 | Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: 49 | 50 | ### 1. Correction 51 | 52 | **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. 53 | 54 | **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. 55 | 56 | ### 2. Warning 57 | 58 | **Community Impact**: A violation through a single incident or series of actions. 59 | 60 | **Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. 61 | 62 | ### 3. 
Temporary Ban 63 | 64 | **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. 65 | 66 | **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. 67 | 68 | ### 4. Permanent Ban 69 | 70 | **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. 71 | 72 | **Consequence**: A permanent ban from any sort of public interaction within the community. 73 | 74 | ## Attribution 75 | 76 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at 77 | https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. 78 | 79 | Community Impact Guidelines were inspired by 80 | [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity). 81 | 82 | [homepage]: https://www.contributor-covenant.org 83 | 84 | For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. 85 | Translations are available at https://www.contributor-covenant.org/translations. -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss projects 2 | 3 | Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss encourages and welcomes code contributions, bug fixes and enhancements from the Github community. 4 | 5 | ## Ground rules & expectations 6 | 7 | Before we get started, here are a few things we expect from you (and that you should expect from others): 8 | 9 | - Be kind and thoughtful in your conversations around this project. We all come from different backgrounds and projects, which means we likely have different perspectives on "how open source is done." Try to listen to others rather than convince them that your way is correct. 10 | - This project is released with a [Contributor Code of Conduct](CODE-OF-CONDUCT.md). By participating in this project, you agree to abide by its terms. 11 | - Please ensure that your contribution complies with this document. If it does not, you will need to address and fix all issues before we can merge your contribution. 12 | - When adding content, please consider if it is widely valuable. 13 | 14 | ## Overview 15 | 16 | Being an Open Source project, everyone can contribute, provided that you respect the following points: 17 | 18 | - Before contributing any code, the author must make sure all the tests work (see below how to launch the tests). 19 | - Developed code must adhere to the syntax guidelines enforced by the linters. 20 | - Code must be developed following the [SemVer (Semantic Versioning 2.0.0)](https://semver.org/) branching model. 21 | - For any new feature added, unit tests must be provided, following the example of the ones already created. 
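As a point of reference, the sketch below shows one shape such a unit test could take, assuming `pytest` as the test runner. The file name and checks are hypothetical, since the repository does not yet ship a test suite; it simply verifies that the project configuration exists and parses as JSON.

```python
# test_configuration.py - illustrative sketch only; file name and checks are hypothetical.
import json
import os

CONFIG_PATH = os.path.join("configuration", "config.json")


def test_config_file_exists():
    # The project configuration lives in configuration/config.json
    assert os.path.isfile(CONFIG_PATH)


def test_config_is_valid_json():
    # The file must parse as JSON so the helpers module can load it
    with open(CONFIG_PATH) as config_file:
        confs = json.load(config_file)
    assert isinstance(confs, dict)
```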
22 | 23 | ## How to contribute 24 | 25 | If you'd like to contribute, start by searching through the issues and pull requests to see whether someone else has raised a similar idea or question. 26 | 27 | If you don't see your idea listed, and you think it fits into the goals of this guide, do one of the following: 28 | 29 | - Bug Report 30 | - Feature Proposal 31 | - Feature Request 32 | 33 | ### Repository Issues 34 | 35 | The first step is to head to our repository issues tab and decide how you would like to contribute. 36 | 37 | ![Repository Issues](assets/images/repo-issues.jpg) 38 | 39 | ### Bug reports 40 | 41 | ![Bug Reports](assets/images/bug-report.jpg) 42 | 43 | If you would like to contribute bug fixes or make the team aware of bugs you have identified in the project, please raise a **Bug report** issue in the [issues section](issues/new/choose) section. A template is provided that will allow you to provide your suggestions for your bug report / bug fix(es) which will be reviewed by the team. 44 | 45 | Bug fix issues are the first step to creating a pull request for bug fixes, once you have created your issue and it has been approved you can proceed with your bug fixes. 46 | 47 | ### Feature proposals 48 | 49 | ![Feature Proposals](assets/images/feature-proposals.jpg) 50 | 51 | If you would like to contribute new features to the project, please raise a **Feature proposal** issue in the [issues section](issues/new/choose) section. A template is provided that will allow you to provide your suggestions for your feature proposal. 52 | 53 | Feature proposal issues are the first step to creating a pull request for feature proposals, once you have created your issue and it has been approved you can proceed with your feature proposal. 54 | 55 | ### Feature requests 56 | 57 | ![Feature requests](assets/images/feature-request.jpg) 58 | 59 | If you would like to suggest a new feature/new features for this project, please raise a **Feature request** issue in the [issues section](issues/new/choose) section. A template is provided that will allow you to provide your suggestions for your feature request. 60 | 61 | ### Community 62 | 63 | Discussions about the Open Source Guides take place on this repository's 64 | [Issues](issues) and [Pull Requests](pulls) sections, or the [discussions](discussions). Anybody is welcome to join these conversations. 65 | 66 | Wherever possible, do not take these conversations to private channels, including contacting the maintainers directly. Keeping communication public means everybody can benefit and learn from the conversation. 67 | 68 | ### Getting Started 69 | 70 | In order to start contributing: 71 | 72 | ![Fork](assets/images/fork.jpg) 73 | 74 | 1. Fork this repository clicking on the "Fork" button on the upper-right area of the page. 75 | 76 | 2. Clone your just forked repository: 77 | 78 | ```bash 79 | git clone https://github.com/YourAccount/ALL-Arduino-Nano-33-BLE-Sense-Classifier.git 80 | ``` 81 | 82 | 3. 
Add the main ALL-Arduino-Nano-33-BLE-Sense-Classifier repository as a remote to your forked repository: 83 | 84 | ```bash 85 | git remote add ALL-Arduino-Nano-33-BLE-Sense-Classifier https://github.com/AmlResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier.git 86 | ``` 87 | 88 | Before starting your contribution, remember to synchronize the latest `dev` branch in your forked repository with the `dev` branch in the main ALL-Arduino-Nano-33-BLE-Sense-Classifier repository, as specified by the **CURRENT DEV BRANCH** badge in the repository [README](README.md). To do this, follow these steps: 89 | 90 | 1. Change to your local `dev` branch (in case you are not in it already): 91 | 92 | ```bash 93 | git checkout 1.0.0 94 | ``` 95 | 96 | 2. Fetch the remote changes: 97 | 98 | ```bash 99 | git fetch ALL-Arduino-Nano-33-BLE-Sense-Classifier 100 | ``` 101 | 102 | 3. Rebase your branch onto them: 103 | 104 | ```bash 105 | git rebase ALL-Arduino-Nano-33-BLE-Sense-Classifier/1.0.0 106 | ``` 107 | 108 | Contributions following these guidelines will be added to the `dev` branch and released in the next version. The release process is explained in the _Releasing_ section below. 109 | 110 | ### Documentation 111 | 112 | Changes you make to the code in the repository, or new projects that you add, should be supported by documentation added to the **docs** directory. 113 | 114 | It is the contributor's responsibility to ensure that the documentation is up to date. If you are contributing to an existing repository, you must ensure that these documents are updated and/or extended to reflect your changes. 115 | 116 | We use [MKDocs](https://www.mkdocs.org/) along with [Read the Docs](https://docs.readthedocs.io/en/stable/index.html). Use the [Getting Started with MkDocs](https://docs.readthedocs.io/en/stable/intro/getting-started-with-mkdocs.html) guide to find out how to update/create documentation for the project. 117 | 118 | ### Repository structure 119 | 120 | The repository structure **must be followed exactly** for all contributions. Pull Requests that do not follow this structure will be rejected and closed with no further discussion.
121 | 122 | ``` 123 | - Project Root (Directory) 124 | - assets (Directory) 125 | - images (Directory) 126 | - project-banner.jpg (Image) 127 | - bug-report.jpg (Image) 128 | - feature-proposal.jpg (Image) 129 | - feature-request.jpg (Image) 130 | - fork.jpg (Image) 131 | - repo-issues.jpg (Image) 132 | - configuration (Directory) 133 | - config.json (File) 134 | - docs (Directory) 135 | - img (Directory) 136 | - project-banner.jpg (Image) 137 | - installation (Directory) 138 | - ubuntu.md (File) 139 | - usage (Directory) 140 | - ubuntu.md (File) 141 | - index.md (File) 142 | - logs (Directory) 143 | - Auto generated log files 144 | - modules (Directory) 145 | - AbstractClassifier.py (File) 146 | - AbstractData.py (File) 147 | - AbstractModel.py (File) 148 | - AbstractServer.py (File) 149 | - helpers.py (File) 150 | - augmentation.py (File) 151 | - data.py (File) 152 | - model.py (File) 153 | - server.py (File) 154 | - model (Directory) 155 | - data (Directory) 156 | - test (Directory) 157 | - train (Directory) 158 | - plots (Directory) 159 | - model.json (File) 160 | - weights.h (File) 161 | - notebooks (Directory) 162 | - classifier.ipynb (File) 163 | - scripts (Directory) 164 | - install.sh (File) 165 | - classifier.py (File) 166 | - CODE-OF-CONDUCT.md (File) 167 | - CONTRIBUTING.md (File) 168 | - LICENSE (File) 169 | - mkdocs.yml (File) 170 | - README.md (File) 171 | ``` 172 | 173 | **Directories and files may be added to the above structure as required, but none must be removed.** 174 | 175 | ### Abstract Classes 176 | 177 | Abstract classes are part of our "framework". Contributors may modify abstract classes by adding new methods or removing redundant methods. If you modify an abstract class, make sure to add your attribution to the contributors area in the header. 178 | 179 | ### Installation Scripts 180 | 181 | The default installation script is [install.sh](scripts/install.sh), found in the [scripts](scripts) directory. 182 | 183 | You must include the installation commands for all libraries required by the project, using apt/pip/make etc. Replace **# DEVELOPER TO ADD INSTALLATION COMMANDS FOR ALL REQUIRED LIBRARIES (apt/pip etc)** with the relevant installation commands. If you are contributing to an existing repository, you must ensure that these scripts are updated to reflect your changes. 184 | 185 | ### Logging 186 | 187 | The [helpers file](modules/helpers.py) handles all logging for the project. Logging should be used in all cases rather than `print()`. The following log types are supported: 188 | 189 | - all 190 | - error 191 | - warning 192 | 193 | ### Configuration 194 | 195 | The project configuration file [config.json](configuration/config.json) can be found in the [configuration](configuration) directory. 196 | 197 | All configurable variables should be held within this file and used wherever relevant throughout the project. 198 | 199 | The [helpers file](modules/helpers.py) loads the configuration and makes it available as `helpers.confs`. 200 | 201 | You may remove redundant objects/arrays/values from the configuration and/or add new ones. 202 | 203 | ### Project Images 204 | 205 | Images used in the project must be in **jpg** format. You must own the rights to any images you upload to the project, or include attribution. Contributors are solely responsible for any images they publish to our Github.
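To tie the logging and configuration conventions above together, the snippet below shows how a module would typically read values from `helpers.confs` and report progress through the helpers logger rather than calling `print()`. It is an illustrative sketch only: the `Helpers` class name matches [modules/helpers.py](modules/helpers.py), but the constructor argument, the `logger` attribute and the configuration keys shown are assumptions, not the confirmed API.

```python
# Illustrative sketch of the logging/configuration conventions; the constructor
# argument, logger attribute and configuration keys below are assumptions.
from modules.helpers import Helpers

helpers = Helpers("example")  # hypothetical instantiation

# Read configurable values from the loaded configuration instead of hard-coding them
epochs = helpers.confs["train"]["epochs"]  # hypothetical key

# Log through the project logger rather than print()
helpers.logger.info("Training configured for %s epochs", epochs)
```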
206 | 207 | ### Naming Scheme 208 | 209 | The following naming scheme must be used: 210 | 211 | - **Directories:** Snake case (snake_case) 212 | - **Abstract Files:** Camel case (CamelCase) 213 | - **Files:** Spinal case (spinal-case) 214 | - **Images:** Spinal case (spinal-case) 215 | 216 | Please use descriptive but short names, and make sure to not use spaces in directory and file names. 217 | 218 | ### Headers 219 | 220 | All Python files must include the following header, replacing **Module Title** with a short but descriptive title for the module, and **Module Description** with a paragraph explaining what the module is for. 221 | 222 | ``` 223 | #!/usr/bin/env python3 224 | """ Module Title. 225 | 226 | Module Description. 227 | 228 | MIT License 229 | 230 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial 231 | Para la Leucemia Peter Moss 232 | 233 | Permission is hereby granted, free of charge, to any person obtaining a copy 234 | of this software and associated documentation files(the "Software"), to deal 235 | in the Software without restriction, including without limitation the rights 236 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 237 | copies of the Software, and to permit persons to whom the Software is 238 | furnished to do so, subject to the following conditions: 239 | 240 | The above copyright notice and this permission notice shall be included in all 241 | copies or substantial portions of the Software. 242 | 243 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 244 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 245 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 246 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 247 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 248 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 249 | SOFTWARE. 250 | 251 | Contributors: 252 | 253 | """ 254 | ``` 255 | 256 | #### Attribution 257 | 258 | - When you create a new module you should add your name to the **Contributors** section. 259 | - When you make a change to an existing module you should add your name to the **Contributors** section below existing contributors. You must not remove existing contributors from a header. 260 | 261 | ### Footers 262 | 263 | All READMEs and documentation should include the following footer (Replace/add contributors as required): 264 | 265 | ``` 266 | # Contributing 267 | Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss encourages and welcomes code contributions, bug fixes and enhancements from the Github community. 268 | 269 | Please read the [CONTRIBUTING](CONTRIBUTING.md "CONTRIBUTING") document for a full guide to forking our repositories and submitting your pull requests. You will also find our code of conduct in the [Code of Conduct](CODE-OF-CONDUCT.md) document. 270 | 271 | ## Contributors 272 | - [Adam Milton-Barker](https://www.leukemiaairesearch.com/association/volunteers/adam-milton-barker "Adam Milton-Barker") - [Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss](https://www.leukemiaresearchassociation.ai "Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss") President/Founder & Lead Developer, Sabadell, Spain 273 | 274 |   275 | 276 | # Versioning 277 | We use [SemVer](https://semver.org/) for versioning. 
278 | 279 | &nbsp; 280 | 281 | # License 282 | This project is licensed under the **MIT License** - see the [LICENSE](LICENSE "LICENSE") file for details. 283 | 284 | &nbsp; 285 | 286 | # Bugs/Issues 287 | We use the [repo issues](issues "repo issues") to track bugs and general requests related to using this project. See [CONTRIBUTING](CONTRIBUTING.md "CONTRIBUTING") for more info on how to submit bugs, feature requests and proposals. 288 | ``` 289 | 290 | Remember to use **relative URLs**, and in the case of footers in the [docs](docs) folder, you must use **absolute URLs**. 291 | 292 | The contributors section should include a list of contributors that have contributed to the related document. In the case of the README footer, the Contributors section should include a list of contributors that have contributed to **any** part of the project. 293 | 294 | You should add your details below existing contributors. Details should include: 295 | 296 | - Name 297 | - Company/University etc 298 | - Position 299 | - City 300 | - Country 301 | 302 | ### Branching model 303 | 304 | There are two special branches in the repository: 305 | 306 | - `main`: contains the tagged and released versions 307 | - `1.0.0`: the current `dev` branch, containing the latest development code. New features and bugfixes are always merged into the current `dev` branch. 308 | 309 | In order to start developing a new feature or refactoring, a new branch will be created following the SemVer scheme: 310 | 311 | Given a version number MAJOR.MINOR.PATCH, increment the: 312 | 313 | - `MAJOR` version when you make incompatible code changes, 314 | - `MINOR` version when you add functionality in a backwards compatible manner, and 315 | - `PATCH` version when you make backwards compatible bug fixes. 316 | 317 | - While `MAJOR` is 0, the project is considered unstable and `MINOR` releases may include backwards incompatible changes. 318 | - From version 1.0.0 onwards, releases are considered stable. 319 | 320 | The branch will be created by our team depending on the nature of your issue. Once the new functionality has been completed, a Pull Request will be created from the feature branch to the relevant branch. Remember to run both the linters and 321 | the tests before creating the Pull Request. 322 | 323 | In order to contribute to the repository, the same scheme should be replicated in the forked repositories, so new features or fixes should all come from the current `dev` branch and end up in the current `dev` branch again. 324 | 325 | ### PEP 8 -- Style Guide for Python Code 326 | 327 | All Python projects must align with the [PEP 8 -- Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). 328 | 329 | ### CII Best Practices 330 | 331 | All projects must align with the [CII Best Practices](https://bestpractices.coreinfrastructure.org/) criteria. 332 | 333 | ### Changelog 334 | 335 | The project contains a changelog that is automatically created from the descriptions of the Pull Requests that have been merged into the current `dev` branch, thanks to the [Release Drafter GitHub action](https://github.com/marketplace/actions/release-drafter). 336 | 337 | ### Releasing 338 | 339 | The process of making a release simply consists of creating the release in Github and providing the new tag name; this task is carried out by our team. 340 | 341 | ### Version numbers 342 | 343 | The version number will change for each release, following the SemVer scheme described previously.
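As a concrete illustration of this scheme, starting from the repository's published 1.0.2 release: a backwards compatible bug fix would be released as 1.0.3, new backwards compatible functionality as 1.1.0, and an incompatible change as 2.0.0.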
344 | 345 | ### Bugfix in releases 346 | 347 | When a bug is found affecting a release, a branch will be created from the `main` branch. As part of the patch, the release version will be increased in its last number (`PATCH`). The patch will then be merged (via pull request (PR)) to the `main` branch, and a new version will be released. 348 | 349 | ### Commits 350 | 351 | Commits should be [atomic](https://en.wikipedia.org/wiki/Atomic_commit). Make commits to your fork until you have resolved/completed the work specified in your issue before submitting your PR; this keeps an easy-to-follow history of what work has been carried out and makes the review process easier and quicker. 352 | 353 | When making a commit, the subject of each commit should be one of the following, where #xxx represents the issue number: 354 | 355 | #### Commit Title 356 | 357 | ##### Bug Fixes 358 | 359 | - fix #xxx 360 | - fixes #xxx 361 | - fixed #xxx 362 | 363 | ##### Partial Resolutions 364 | 365 | - partially resolves #xxx 366 | - partially resolved #xxx 367 | 368 | ##### Resolution 369 | 370 | - resolves #xxx 371 | - resolved #xxx 372 | 373 | ##### Alignment 374 | 375 | - aligns with #xxx 376 | 377 | ##### Closure 378 | 379 | - close #xxx 380 | - closes #xxx 381 | - closed #xxx 382 | 383 | #### Commit Description 384 | 385 | Your commit description should include a detailed description of the changes that have been made. 386 | 387 | #### Committing 388 | 389 | When you are ready to commit, you should do the following: 390 | 391 | ##### Show The Status Of All Changed/Added/Deleted Files 392 | 393 | ``` 394 | git status 395 | ``` 396 | 397 | ##### Diff 398 | 399 | You may want to check the differences between changed files; you can do this using the following command. 400 | 401 | ``` 402 | git diff 403 | ``` 404 | 405 | ##### Add All Changes 406 | 407 | The following will add all changes shown by git status to your commit. 408 | 409 | ``` 410 | git add . 411 | ``` 412 | 413 | ##### Add One Or More Changes 414 | 415 | ``` 416 | git add file1 file2 file5 417 | ``` 418 | 419 | ##### Commit Added Changes 420 | 421 | Commit your added changes to your local repository, remembering to follow the [Commit Title](#commit-title) & [Commit Description](#commit-description) guides above. 422 | 423 | To create your commit with both a title and description, use the following command, which states that the commit fixes issue ID 1 and provides a detailed description: 424 | 425 | ``` 426 | git commit -m "fixes #1" -m "Fixes the documentation typos described in issue #1" 427 | ``` 428 | 429 | ### Push Your Changes 430 | 431 | When you have made your changes, ensured you have aligned with the procedures in this document, and made your commits to your local repository following the guide above, you need to push your changes to your forked repository. 432 | 433 | Push changes to your fork by using the following command: 434 | 435 | ``` 436 | git push 437 | ``` 438 | 439 | ### Pull Request protocol 440 | 441 | Contributions to the ALL-Arduino-Nano-33-BLE-Sense-Classifier repository are made using a PR. The detailed "protocol" used in such PRs is described below: 442 | 443 | * Direct commits to the `main` or `dev` branches (even single-line modifications) are not allowed.
Every modification has to come as a PR to the latest `dev` branch 444 | * PRs implement/fix submitted issues; the issue number has to be referenced in the subject of the relevant commit and PR 445 | * Anybody is welcome to provide comments on the PR (either direct comments or using the review feature offered by Github) 446 | * Use *code line comments* instead of *general comments*, for traceability reasons (see comments lifecycle below) 447 | * Comments lifecycle 448 | * A comment is created, initiating a *comment thread* 449 | * New comments can be added as responses to the original one, starting a discussion 450 | * After discussion, the comment thread ends in one of the following ways: 451 | * `Fixed in <commit hash>` in case the discussion involves a fix in the PR branch (whose commit hash is included as a reference) 452 | * `NTC`, if finally nothing needs to be done (NTC = Nothing To Change) 453 | * A PR can be merged when the following conditions are met: 454 | * All comment threads are closed 455 | * All the participants in the discussion have provided a `LGTM` general comment (LGTM = Looks good to me) 456 | * All documentation has been updated to reflect your changes. 457 | * No proprietary software or images have been added. 458 | 459 | * Self-merging is not allowed (except in rare and justified circumstances) 460 | 461 | Some additional remarks to take into account when contributing new PRs: 462 | 463 | * A PR must include not only code contributions, but also the corresponding documentation (new, or modifications to existing documentation) and tests 464 | * Documentation must be added to the **docs** folder 465 | * If empty directories need to be uploaded, add a `.gitkeep` file inside. 466 | * The project banner is included in all documentation 467 | * The Contributing, Versioning, Licensing and Bugs/Issues footer is included in all documentation 468 | * Contributors have been added to all Contributors footers 469 | * PR modifications must pass full regression based on existing tests, in addition to whichever new tests were added for the new functionality 470 | * A PR should be of an appropriate size that makes review achievable. Overly large PRs could be closed with a "please, redo the work in smaller pieces" without any further discussion -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research Project 2 | ## Acute Lymphoblastic Leukemia Arduino Nano 33 BLE Sense Classifier 3 | 4 | ![Acute Lymphoblastic Leukemia Arduino Nano 33 BLE Sense Classifier](assets/images/project-banner.jpg) 5 | 6 | [![CURRENT RELEASE](https://img.shields.io/badge/CURRENT%20RELEASE-1.0.2-blue.svg)](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/tree/1.0.2) [![UPCOMING RELEASE](https://img.shields.io/badge/CURRENT%20DEV%20BRANCH-2.0.0-blue.svg)](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/tree/2.0.0) [![Contributions Welcome!](https://img.shields.io/badge/Contributions-Welcome-lightgrey.svg)](CONTRIBUTING.md) [![Issues](https://img.shields.io/badge/Issues-Welcome-lightgrey.svg)](issues) 7 | 8 | [![PEP8](https://img.shields.io/badge/code%20style-pep8-orange.svg)](https://www.python.org/dev/peps/pep-0008/) [![Documentation Status](https://readthedocs.org/projects/all-arduino-nano-33-ble-sense-classifier/badge/?version=latest)](https://all-arduino-nano-33-ble-sense-classifier.readthedocs.io/en/latest/?badge=latest) [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/5065/badge)](https://bestpractices.coreinfrastructure.org/projects/5065) 9 | 10 | ![Unit Tests](https://img.shields.io/badge/Unit%20Tests-TODO-red) 11 | ![Functional Tests](https://img.shields.io/badge/Functional%20Tests-TODO-red) 12 | 13 | [![LICENSE](https://img.shields.io/badge/LICENSE-MIT-blue.svg)](LICENSE) 14 | 15 | &nbsp; 16 | 17 | # Table Of Contents 18 | 19 | - [Introduction](#introduction) 20 | - [DISCLAIMER](#disclaimer) 21 | - [Motivation](#motivation) 22 | - [Acute Lymphoblastic Leukemia](#acute-lymphoblastic-leukemia) 23 | - [ALL IDB](#all-idb) 24 | - [Segmentation](#segmentation) 25 | - [Getting Started](#getting-started) 26 | - [Contributing](#contributing) 27 | - [Contributors](#contributors) 28 | - [Versioning](#versioning) 29 | - [License](#license) 30 | - [Bugs/Issues](#bugs-issues) 31 | 32 | &nbsp; 33 | 34 | # Introduction 35 | 36 | The **Acute Lymphoblastic Leukemia Arduino Nano 33 BLE Sense Classifier** is an experiment to explore how low-powered microcontrollers, specifically the Arduino Nano 33 BLE Sense, can be used to detect Acute Lymphoblastic Leukemia. The [Arduino Nano 33 BLE Sense](https://store.arduino.cc/arduino-nano-33-ble-sense) is an Arduino board that supports TensorFlow Lite, allowing machine learning on Arduino. 37 | 38 | ![Acute Lymphoblastic Leukemia Arduino Nano 33 BLE Sense Classifier](assets/images/all-arduino-nano-33-ble-classifier.gif) 39 | 40 | The model you will train is a 6-layer Convolutional Neural Network trained using [Intel® Optimization for Tensorflow*](https://software.intel.com/content/www/us/en/develop/articles/intel-optimization-for-tensorflow-installation-guide.html) from the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit/download.html?operatingsystem=linux) to optimize and accelerate the training process.
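For orientation only, the snippet below sketches the general shape of a small convolutional network built with the TensorFlow Keras API. It is not the project's actual architecture, which is implemented in the project's modules directory; the input resolution, filter counts and layer choices here are illustrative assumptions.

```python
# Illustrative sketch only - not the repository's actual network.
# Input resolution, filter counts and layer choices are assumptions.
import tensorflow as tf


def build_example_cnn(input_shape=(100, 100, 3), classes=2):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(30, (5, 5), activation="relu", input_shape=input_shape),
        tf.keras.layers.AveragePooling2D(pool_size=(2, 2)),
        tf.keras.layers.Conv2D(30, (5, 5), activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(classes, activation="softmax"),
    ])


model = build_example_cnn()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

Once trained, the network is converted to TensorFlow Lite (the `all_nano_33_ble_sense.tflite` and `all_nano_33_ble_sense.cc` files in the model directory) so that it can run through the TensorFlow Lite Micro interpreter used in the Arduino sketch.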
41 | 42 | Check out the [official video](https://www.youtube.com/watch?v=CDJEXdj2KZs) for the project. 43 | 44 | &nbsp; 45 | 46 | # DISCLAIMER 47 | 48 | _This project should be used for research purposes only. The purpose of the project is to show the potential of Artificial Intelligence for medical support systems such as diagnostic systems._ 49 | 50 | _Although the model is accurate and shows good results both on paper and in real-world testing, it is trained on a small amount of data and needs to be trained on larger datasets to really evaluate its accuracy._ 51 | 52 | _Developers who have contributed to this repository have experience in using Artificial Intelligence for detecting certain types of cancer. They are not doctors, medical or cancer experts._ 53 | 54 | &nbsp; 55 | 56 | # Motivation 57 | 58 | The motivation for this project was to explore how low-powered devices such as the Arduino can be used to detect Acute Lymphoblastic Leukemia. The project will be submitted to the TensorFlow Microcontroller Challenge and the Eyes on Edge: tinyML Vision Challenge. 59 | 60 | &nbsp; 61 | 62 | # Acute Lymphoblastic Leukemia 63 | [Acute lymphoblastic leukemia (ALL)](https://www.leukemiaairesearch.com/research/leukemia), also known as acute lymphocytic leukemia, is a cancer that affects the lymphoid blood cell lineage. It is the most common leukemia in children, and it accounts for 10-20% of acute leukemias in adults. The prognosis for both adult and especially childhood ALL has improved substantially since the 1970s. The 5-year survival is approximately 95% in children. In adults, the 5-year survival varies between 25% and 75%, with more favorable results in younger than in older patients. 64 | 65 | For more information about Acute Lymphoblastic Leukemia please visit our [Leukemia Information Page](https://www.leukemiaairesearch.com/research/leukemia). 66 | 67 | &nbsp; 68 | 69 | # ALL-IDB 70 | 71 | ![Acute Lymphoblastic Leukemia Arduino Nano 33 BLE Sense Classifier](assets/images/all-idb.jpg) 72 | 73 | You need to be granted access to use the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset. You can find the application form and information about getting access to the dataset on [this page](https://homes.di.unimi.it/scotti/all/#download), as well as information on how to contribute back to the project [here](https://homes.di.unimi.it/scotti/all/results.php). If you are not able to obtain a copy of the dataset, please feel free to try this tutorial on your own dataset; we would be very happy to find additional AML & ALL datasets. 74 | 75 | &nbsp; 76 | 77 | # Getting Started 78 | 79 | To get started, follow the [official documentation](docs/index.md). 80 | 81 | &nbsp; 82 | 83 | # Contributing 84 | Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss encourages and welcomes code contributions, bug fixes and enhancements from the Github community. 85 | 86 | Please read the [CONTRIBUTING](CONTRIBUTING.md "CONTRIBUTING") document for a full guide to forking our repositories and submitting your pull requests. You will also find our code of conduct in the [Code of Conduct](CODE-OF-CONDUCT.md) document.
87 | 88 | ## Contributors 89 | - [Adam Milton-Barker](https://www.leukemiaairesearch.com/association/volunteers/adam-milton-barker "Adam Milton-Barker") - [Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss](https://www.leukemiaresearchassociation.ai "Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss") President/Founder & Lead Developer, Sabadell, Spain 90 | 91 |   92 | 93 | # Versioning 94 | We use [SemVer](https://semver.org/) for versioning. 95 | 96 |   97 | 98 | # License 99 | This project is licensed under the **MIT License** - see the [LICENSE](LICENSE "LICENSE") file for details. 100 | 101 |   102 | 103 | # Bugs/Issues 104 | We use the [repo issues](issues "repo issues") to track bugs and general requests related to using this project. See [CONTRIBUTING](CONTRIBUTING.md "CONTRIBUTING") for more info on how to submit bugs, feature requests and proposals. -------------------------------------------------------------------------------- /arduino/all_nano_33_ble_sense/all_model.h: -------------------------------------------------------------------------------- 1 | 2 | /* ALL Arduino Nano 33 BLE Sense Classifier 3 | 4 | An experiment to explore how low powered microcontrollers, specifically the 5 | Arduino Nano 33 BLE Sense, can be used to detect Acute Lymphoblastic Leukemia. 6 | 7 | MIT License 8 | 9 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial 10 | Para la Leucemia Peter Moss 11 | 12 | Permission is hereby granted, free of charge, to any person obtaining a copy 13 | of this software and associated documentation files(the "Software"), to deal 14 | in the Software without restriction, including without limitation the rights 15 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 16 | copies of the Software, and to permit persons to whom the Software is 17 | furnished to do so, subject to the following conditions: 18 | 19 | The above copyright notice and this permission notice shall be included in all 20 | copies or substantial portions of the Software. 21 | 22 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 23 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 24 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 25 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 26 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 27 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 28 | SOFTWARE. 29 | 30 | Contributors: 31 | - Adam Milton-Barker 32 | ==============================================================================*/ 33 | 34 | #ifndef TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MODEL_DATA_H_ 35 | #define TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MODEL_DATA_H_ 36 | 37 | extern const unsigned char all_model[]; 38 | extern const int all_model_len; 39 | 40 | #endif // TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MODEL_DATA_H_ 41 | -------------------------------------------------------------------------------- /arduino/all_nano_33_ble_sense/all_nano_33_ble_sense.ino: -------------------------------------------------------------------------------- 1 | 2 | /* ALL Arduino Nano 33 BLE Sense Classifier 3 | 4 | An experiment to explore how low powered microcontrollers, specifically the 5 | Arduino Nano 33 BLE Sense, can be used to detect Acute Lymphoblastic Leukemia. 
6 | 7 | MIT License 8 | 9 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial 10 | Para la Leucemia Peter Moss 11 | 12 | Permission is hereby granted, free of charge, to any person obtaining a copy 13 | of this software and associated documentation files(the "Software"), to deal 14 | in the Software without restriction, including without limitation the rights 15 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 16 | copies of the Software, and to permit persons to whom the Software is 17 | furnished to do so, subject to the following conditions: 18 | 19 | The above copyright notice and this permission notice shall be included in all 20 | copies or substantial portions of the Software. 21 | 22 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 23 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 24 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 25 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 26 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 27 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 28 | SOFTWARE. 29 | 30 | Contributors: 31 | - Adam Milton-Barker 32 | ==============================================================================*/ 33 | 34 | #include "Arduino.h" 35 | #include 36 | 37 | #include 38 | 39 | #include "main_functions.h" 40 | #include "all_model.h" 41 | #include "model_settings.h" 42 | 43 | #include "tensorflow/lite/micro/micro_error_reporter.h" 44 | #include "tensorflow/lite/micro/micro_interpreter.h" 45 | #include "tensorflow/lite/micro/micro_mutable_op_resolver.h" 46 | #include "tensorflow/lite/schema/schema_generated.h" 47 | #include "tensorflow/lite/version.h" 48 | 49 | #include 50 | 51 | String images[]={ 52 | "Im006_1.jpg", 53 | "Im020_1.jpg", 54 | "Im024_1.jpg", 55 | "Im026_1.jpg", 56 | "Im028_1.jpg", 57 | "Im031_1.jpg", 58 | "Im035_0.jpg", 59 | "Im041_0.jpg", 60 | "Im047_0.jpg", 61 | "Im053_1.jpg", 62 | "Im057_1.jpg", 63 | "Im060_1.jpg", 64 | "Im063_1.jpg", 65 | "Im069_0.jpg", 66 | "Im074_0.jpg", 67 | "Im088_0.jpg", 68 | "Im095_0.jpg", 69 | "Im099_0.jpg", 70 | "Im101_0.jpg", 71 | "Im106_0.jpg" 72 | }; 73 | 74 | int tp = 0; 75 | int fp = 0; 76 | int tn = 0; 77 | int fn = 0; 78 | 79 | namespace { 80 | tflite::ErrorReporter* error_reporter = nullptr; 81 | const tflite::Model* model = nullptr; 82 | tflite::MicroInterpreter* interpreter = nullptr; 83 | TfLiteTensor* input = nullptr; 84 | constexpr int kTensorArenaSize = 136 * 1024; 85 | static uint8_t tensor_arena[kTensorArenaSize]; 86 | } 87 | 88 | void setup() { 89 | 90 | Serial.begin(9600); 91 | while (!Serial) { 92 | ; 93 | } 94 | 95 | Serial.println(F("Initialising SD card...")); 96 | if (!SD.begin(10)) { 97 | Serial.println(F("Initialisation failed!")); 98 | return; 99 | } 100 | Serial.println(F("Initialisation done.")); 101 | 102 | static tflite::MicroErrorReporter micro_error_reporter; 103 | error_reporter = µ_error_reporter; 104 | 105 | model = tflite::GetModel(all_model); 106 | if (model->version() != TFLITE_SCHEMA_VERSION) { 107 | TF_LITE_REPORT_ERROR(error_reporter, 108 | "Model provided is schema version %d not equal " 109 | "to supported version %d.", 110 | model->version(), TFLITE_SCHEMA_VERSION); 111 | return; 112 | } 113 | 114 | static tflite::MicroMutableOpResolver<6> micro_op_resolver; 115 | micro_op_resolver.AddAveragePool2D(); 116 | micro_op_resolver.AddConv2D(); 117 | 
micro_op_resolver.AddDepthwiseConv2D(); 118 | micro_op_resolver.AddReshape(); 119 | micro_op_resolver.AddFullyConnected(); 120 | micro_op_resolver.AddSoftmax(); 121 | 122 | static tflite::MicroInterpreter static_interpreter( 123 | model, micro_op_resolver, tensor_arena, kTensorArenaSize, error_reporter); 124 | interpreter = &static_interpreter; 125 | 126 | TfLiteStatus allocate_status = interpreter->AllocateTensors(); 127 | if (allocate_status != kTfLiteOk) { 128 | TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed"); 129 | return; 130 | } 131 | 132 | input = interpreter->input(0); 133 | getInputInfo(input); 134 | 135 | for (int i = 0; i < 20; i++) { 136 | getImage(images[i], input->data.int8); 137 | TfLiteTensor* output = interpreter->output(0); 138 | int8_t all_score = output->data.int8[kAllIndex]; 139 | int8_t no_all_score = output->data.int8[kNotAllIndex]; 140 | processScores(all_score, no_all_score, images[i]); 141 | delay(2000); 142 | } 143 | 144 | Serial.print("True Positives: "); 145 | Serial.println(tp); 146 | Serial.print("False Positives: "); 147 | Serial.println(fp); 148 | Serial.print("True Negatives: "); 149 | Serial.println(tn); 150 | Serial.print("False Negatives: "); 151 | Serial.println(fn); 152 | } 153 | 154 | void getInputInfo(TfLiteTensor* input){ 155 | Serial.println(""); 156 | Serial.println("Model input info"); 157 | Serial.println("==============="); 158 | Serial.print("Dimensions: "); 159 | Serial.println(input->dims->size); 160 | Serial.print("Dim 1 size: "); 161 | Serial.println(input->dims->data[0]); 162 | Serial.print("Dim 2 size: "); 163 | Serial.println(input->dims->data[1]); 164 | Serial.print("Dim 3 size: "); 165 | Serial.println(input->dims->data[2]); 166 | Serial.print("Dim 4 size: "); 167 | Serial.println(input->dims->data[3]); 168 | Serial.print("Input type: "); 169 | Serial.println(input->type); 170 | Serial.println("==============="); 171 | Serial.println(""); 172 | } 173 | 174 | TfLiteStatus getImage(String filepath, int8_t* image_data){ 175 | File jpegFile = SD.open(filepath, FILE_READ); 176 | 177 | if ( !jpegFile ) { 178 | Serial.print("ERROR: File not found!"); 179 | return kTfLiteError; 180 | } 181 | 182 | boolean decoded = JpegDec.decodeSdFile(jpegFile); 183 | processImage(filepath, image_data); 184 | 185 | return kTfLiteOk; 186 | } 187 | 188 | void processImage(String filename, int8_t* image_data){ 189 | 190 | // Crop the image by keeping a certain number of MCUs in each dimension 191 | const int keep_x_mcus = kNumCols / JpegDec.MCUWidth; 192 | const int keep_y_mcus = kNumRows / JpegDec.MCUHeight; 193 | 194 | // Calculate how many MCUs we will throw away on the x axis 195 | const int skip_x_mcus = JpegDec.MCUSPerRow - keep_x_mcus; 196 | // Roughly center the crop by skipping half the throwaway MCUs at the 197 | // beginning of each row 198 | const int skip_start_x_mcus = skip_x_mcus / 2; 199 | // Index where we will start throwing away MCUs after the data 200 | const int skip_end_x_mcu_index = skip_start_x_mcus + keep_x_mcus; 201 | // Same approach for the columns 202 | const int skip_y_mcus = JpegDec.MCUSPerCol - keep_y_mcus; 203 | const int skip_start_y_mcus = skip_y_mcus / 2; 204 | const int skip_end_y_mcu_index = skip_start_y_mcus + keep_y_mcus; 205 | 206 | // Pointer to the current pixel 207 | uint16_t* pImg; 208 | // Color of the current pixel 209 | uint16_t color; 210 | 211 | // Loop over the MCUs 212 | while (JpegDec.read()) { 213 | // Skip over the initial set of rows 214 | if (JpegDec.MCUy < skip_start_y_mcus) { 215 | 
continue; 216 | } 217 | // Skip if we're on a column that we don't want 218 | if (JpegDec.MCUx < skip_start_x_mcus || 219 | JpegDec.MCUx >= skip_end_x_mcu_index) { 220 | continue; 221 | } 222 | // Skip if we've got all the rows we want 223 | if (JpegDec.MCUy >= skip_end_y_mcu_index) { 224 | continue; 225 | } 226 | // Pointer to the current pixel 227 | pImg = JpegDec.pImage; 228 | 229 | // The x and y indexes of the current MCU, ignoring the MCUs we skip 230 | int relative_mcu_x = JpegDec.MCUx - skip_start_x_mcus; 231 | int relative_mcu_y = JpegDec.MCUy - skip_start_y_mcus; 232 | 233 | // The coordinates of the top left of this MCU when applied to the output 234 | // image 235 | int x_origin = relative_mcu_x * JpegDec.MCUWidth; 236 | int y_origin = relative_mcu_y * JpegDec.MCUHeight; 237 | 238 | // Loop through the MCU's rows and columns 239 | for (int mcu_row = 0; mcu_row < JpegDec.MCUHeight; mcu_row++) { 240 | // The y coordinate of this pixel in the output index 241 | int current_y = y_origin + mcu_row; 242 | for (int mcu_col = 0; mcu_col < JpegDec.MCUWidth; mcu_col++) { 243 | // Read the color of the pixel as 16-bit integer 244 | color = *pImg++; 245 | // Extract the color values (5 red bits, 6 green, 5 blue) 246 | uint8_t r, g, b; 247 | r = ((color & 0xF800) >> 11) * 8; 248 | g = ((color & 0x07E0) >> 5) * 4; 249 | b = ((color & 0x001F) >> 0) * 8; 250 | // Convert to grayscale by calculating luminance 251 | // See https://en.wikipedia.org/wiki/Grayscale for magic numbers 252 | float gray_value = (0.2126 * r) + (0.7152 * g) + (0.0722 * b); 253 | 254 | // Convert to signed 8-bit integer by subtracting 128. 255 | gray_value -= 128; 256 | // The x coordinate of this pixel in the output image 257 | int current_x = x_origin + mcu_col; 258 | // The index of this pixel in our flat output buffer 259 | int index = (current_y * kNumCols) + current_x; 260 | image_data[index] = static_cast<int8_t>(gray_value); 261 | } 262 | } 263 | } 264 | } 265 | 266 | void processScores(int8_t all_score, int8_t no_all_score, String filename){ 267 | 268 | Serial.println(filename); 269 | Serial.println("==============="); 270 | Serial.print("ALL positive score: "); 271 | Serial.println(all_score); 272 | Serial.print("ALL negative score: "); 273 | Serial.println(no_all_score); 274 | if(all_score > no_all_score && filename.indexOf("_1") > 0){ 275 | Serial.println("True Positive"); 276 | tp = tp + 1; 277 | } 278 | else if(all_score > no_all_score && filename.indexOf("_0") > 0){ 279 | Serial.println("False Positive"); 280 | fp = fp + 1; 281 | } 282 | else if(all_score < no_all_score && filename.indexOf("_1") > 0){ 283 | Serial.println("False Negative"); 284 | fn = fn + 1; 285 | } 286 | else if(all_score < no_all_score && filename.indexOf("_0") > 0){ 287 | Serial.println("True Negative"); 288 | tn = tn + 1; 289 | } 290 | Serial.println(""); 291 | 292 | static bool is_initialized = false; 293 | if (!is_initialized) { 294 | pinMode(LEDR, OUTPUT); 295 | pinMode(LEDG, OUTPUT); 296 | pinMode(LEDB, OUTPUT); 297 | is_initialized = true; 298 | } 299 | 300 | digitalWrite(LEDG, HIGH); 301 | digitalWrite(LEDR, HIGH); 302 | 303 | digitalWrite(LEDB, LOW); 304 | delay(100); 305 | digitalWrite(LEDB, HIGH); 306 | 307 | if (all_score > no_all_score) { 308 | digitalWrite(LEDG, HIGH); 309 | digitalWrite(LEDR, LOW); 310 | delay(200); 311 | digitalWrite(LEDR, HIGH); 312 | } else { 313 | digitalWrite(LEDR, HIGH); 314 | digitalWrite(LEDG, LOW); 315 | delay(200); 316 | digitalWrite(LEDG, HIGH); 317 | } 318 | 319 | } 320 | 321 | void jpegInfo() { 322 |
323 | Serial.println("JPEG image info"); 324 | Serial.println("==============="); 325 | Serial.print("Width :"); 326 | Serial.println(JpegDec.width); 327 | Serial.print("Height :"); 328 | Serial.println(JpegDec.height); 329 | Serial.print("Components :"); 330 | Serial.println(JpegDec.comps); 331 | Serial.print("MCU / row :"); 332 | Serial.println(JpegDec.MCUSPerRow); 333 | Serial.print("MCU / col :"); 334 | Serial.println(JpegDec.MCUSPerCol); 335 | Serial.print("Scan type :"); 336 | Serial.println(JpegDec.scanType); 337 | Serial.print("MCU width :"); 338 | Serial.println(JpegDec.MCUWidth); 339 | Serial.print("MCU height :"); 340 | Serial.println(JpegDec.MCUHeight); 341 | Serial.println("==============="); 342 | Serial.println(""); 343 | } 344 | 345 | void loop() { 346 | } 347 | -------------------------------------------------------------------------------- /arduino/all_nano_33_ble_sense/arduino_main.cpp: -------------------------------------------------------------------------------- 1 | /* Copyright 2019 The TensorFlow Authors. All Rights Reserved. 2 | 3 | Licensed under the Apache License, Version 2.0 (the "License"); 4 | you may not use this file except in compliance with the License. 5 | You may obtain a copy of the License at 6 | 7 | http://www.apache.org/licenses/LICENSE-2.0 8 | 9 | Unless required by applicable law or agreed to in writing, software 10 | distributed under the License is distributed on an "AS IS" BASIS, 11 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | See the License for the specific language governing permissions and 13 | limitations under the License. 14 | ==============================================================================*/ 15 | 16 | #ifndef TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MAIN_FUNCTIONS_H_ 17 | #define TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MAIN_FUNCTIONS_H_ 18 | 19 | // Expose a C friendly interface for main functions. 20 | #ifdef __cplusplus 21 | extern "C" { 22 | #endif 23 | 24 | // Initializes all data needed for the example. The name is important, and needs 25 | // to be setup() for Arduino compatibility. 26 | void setup(); 27 | 28 | // Runs one iteration of data gathering and inference. This should be called 29 | // repeatedly from the application code. The name needs to be loop() for Arduino 30 | // compatibility. 31 | void loop(); 32 | 33 | #ifdef __cplusplus 34 | } 35 | #endif 36 | 37 | #endif // TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MAIN_FUNCTIONS_H_ 38 | -------------------------------------------------------------------------------- /arduino/all_nano_33_ble_sense/main_functions.h: -------------------------------------------------------------------------------- 1 | /* Copyright 2019 The TensorFlow Authors. All Rights Reserved. 2 | 3 | Licensed under the Apache License, Version 2.0 (the "License"); 4 | you may not use this file except in compliance with the License. 5 | You may obtain a copy of the License at 6 | 7 | http://www.apache.org/licenses/LICENSE-2.0 8 | 9 | Unless required by applicable law or agreed to in writing, software 10 | distributed under the License is distributed on an "AS IS" BASIS, 11 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | See the License for the specific language governing permissions and 13 | limitations under the License. 
14 | ==============================================================================*/ 15 | 16 | #ifndef TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MAIN_FUNCTIONS_H_ 17 | #define TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MAIN_FUNCTIONS_H_ 18 | 19 | // Expose a C friendly interface for main functions. 20 | #ifdef __cplusplus 21 | extern "C" { 22 | #endif 23 | 24 | // Initializes all data needed for the example. The name is important, and needs 25 | // to be setup() for Arduino compatibility. 26 | void setup(); 27 | 28 | // Runs one iteration of data gathering and inference. This should be called 29 | // repeatedly from the application code. The name needs to be loop() for Arduino 30 | // compatibility. 31 | void loop(); 32 | 33 | #ifdef __cplusplus 34 | } 35 | #endif 36 | 37 | #endif // TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MAIN_FUNCTIONS_H_ 38 | -------------------------------------------------------------------------------- /arduino/all_nano_33_ble_sense/model_settings.cpp: -------------------------------------------------------------------------------- 1 | /* Copyright 2019 The TensorFlow Authors. All Rights Reserved. 2 | 3 | Licensed under the Apache License, Version 2.0 (the "License"); 4 | you may not use this file except in compliance with the License. 5 | You may obtain a copy of the License at 6 | 7 | http://www.apache.org/licenses/LICENSE-2.0 8 | 9 | Unless required by applicable law or agreed to in writing, software 10 | distributed under the License is distributed on an "AS IS" BASIS, 11 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | See the License for the specific language governing permissions and 13 | limitations under the License. 14 | ==============================================================================*/ 15 | 16 | #include "model_settings.h" 17 | 18 | const char* kCategoryLabels[kCategoryCount] = { 19 | "negative", 20 | "positive", 21 | }; 22 | -------------------------------------------------------------------------------- /arduino/all_nano_33_ble_sense/model_settings.h: -------------------------------------------------------------------------------- 1 | 2 | /* ALL Arduino Nano 33 BLE Sense Classifier 3 | 4 | An experiment to explore how low powered microcontrollers, specifically the 5 | Arduino Nano 33 BLE Sense, can be used to detect Acute Lymphoblastic Leukemia. 6 | 7 | MIT License 8 | 9 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial 10 | Para la Leucemia Peter Moss 11 | 12 | Permission is hereby granted, free of charge, to any person obtaining a copy 13 | of this software and associated documentation files(the "Software"), to deal 14 | in the Software without restriction, including without limitation the rights 15 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 16 | copies of the Software, and to permit persons to whom the Software is 17 | furnished to do so, subject to the following conditions: 18 | 19 | The above copyright notice and this permission notice shall be included in all 20 | copies or substantial portions of the Software. 21 | 22 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 23 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 24 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 25 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 26 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 27 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 28 | SOFTWARE. 29 | 30 | Contributors: 31 | - Adam Milton-Barker 32 | ==============================================================================*/ 33 | 34 | #ifndef TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MODEL_SETTINGS_H_ 35 | #define TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MODEL_SETTINGS_H_ 36 | 37 | constexpr int kNumCols = 100; 38 | constexpr int kNumRows = 100; 39 | constexpr int kNumChannels = 3; 40 | 41 | constexpr int kMaxImageSize = kNumCols * kNumRows * kNumChannels; 42 | 43 | constexpr int kCategoryCount = 2; 44 | constexpr int kAllIndex = 1; 45 | constexpr int kNotAllIndex = 0; 46 | extern const char* kCategoryLabels[kCategoryCount]; 47 | 48 | #endif // TENSORFLOW_LITE_MICRO_ACUTE_LYMPHOBLASTIC_LEUKEMIA_MODEL_SETTINGS_H_ 49 | -------------------------------------------------------------------------------- /assets/images/all-arduino-nano-33-ble-classifier.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/assets/images/all-arduino-nano-33-ble-classifier.gif -------------------------------------------------------------------------------- /assets/images/all-idb.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/assets/images/all-idb.jpg -------------------------------------------------------------------------------- /assets/images/bug-report.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/assets/images/bug-report.jpg -------------------------------------------------------------------------------- /assets/images/feature-proposals.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/assets/images/feature-proposals.jpg -------------------------------------------------------------------------------- /assets/images/feature-request.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/assets/images/feature-request.jpg -------------------------------------------------------------------------------- /assets/images/fork.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/assets/images/fork.jpg -------------------------------------------------------------------------------- /assets/images/project-banner.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/assets/images/project-banner.jpg -------------------------------------------------------------------------------- /assets/images/repo-issues.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/assets/images/repo-issues.jpg -------------------------------------------------------------------------------- /classifier.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ ALL Arduino Nano 33 BLE Sense Classifier 3 | 4 | An experiment to explore how low powered microcontrollers, specifically the 5 | Arduino Nano 33 BLE Sense, can be used to detect Acute Lymphoblastic Leukemia. 6 | 7 | MIT License 8 | 9 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial 10 | Para la Leucemia Peter Moss 11 | 12 | Permission is hereby granted, free of charge, to any person obtaining a copy 13 | of this software and associated documentation files(the "Software"), to deal 14 | in the Software without restriction, including without limitation the rights 15 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 16 | copies of the Software, and to permit persons to whom the Software is 17 | furnished to do so, subject to the following conditions: 18 | 19 | The above copyright notice and this permission notice shall be included in all 20 | copies or substantial portions of the Software. 21 | 22 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 23 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 24 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 25 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 26 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 27 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 28 | SOFTWARE. 29 | 30 | Contributors: 31 | - Adam Milton-Barker 32 | 33 | """ 34 | 35 | import sys 36 | 37 | from abc import ABC, abstractmethod 38 | 39 | from modules.AbstractClassifier import AbstractClassifier 40 | 41 | from modules.helpers import helpers 42 | from modules.model import model 43 | from modules.server import server 44 | 45 | 46 | class classifier(AbstractClassifier): 47 | """ ALL Arduino Nano 33 BLE Sense Classifier 48 | 49 | Represents a HIAS AI Agent that processes data 50 | using the ALL Arduino Nano BLE Sense Classifier model. 51 | """ 52 | 53 | def train(self): 54 | """ Creates & trains the model. 
""" 55 | 56 | self.model.prepare_data() 57 | self.model.prepare_network() 58 | self.model.train() 59 | self.model.evaluate() 60 | 61 | def set_model(self): 62 | """ Loads the model class """ 63 | 64 | self.model = model(self.helpers) 65 | 66 | def load_model(self): 67 | """ Loads the trained model """ 68 | 69 | self.model.load() 70 | 71 | def inference(self): 72 | """ Classifies test data locally """ 73 | 74 | self.load_model() 75 | self.model.test() 76 | 77 | def server(self): 78 | """ Loads the API server """ 79 | 80 | self.load_model() 81 | self.server = server(self.helpers, self.model, 82 | self.model_type) 83 | self.server.start() 84 | 85 | def inference_http(self): 86 | """ Classifies test data via HTTP requests """ 87 | 88 | self.model.test_http() 89 | 90 | def signal_handler(self, signal, frame): 91 | self.helpers.logger.info("Disconnecting") 92 | sys.exit(1) 93 | 94 | 95 | classifier = classifier() 96 | 97 | 98 | def main(): 99 | 100 | if len(sys.argv) < 2: 101 | print("You must provide an argument") 102 | exit() 103 | elif sys.argv[1] not in classifier.helpers.confs["agent"]["params"]: 104 | print("Mode not supported! server, train or inference") 105 | exit() 106 | 107 | mode = sys.argv[1] 108 | 109 | if mode == "train": 110 | classifier.set_model() 111 | classifier.train() 112 | 113 | elif mode == "classify": 114 | classifier.set_model() 115 | classifier.inference() 116 | 117 | elif mode == "server": 118 | classifier.set_model() 119 | classifier.server() 120 | 121 | elif mode == "classify_http": 122 | classifier.set_model() 123 | classifier.inference_http() 124 | 125 | 126 | if __name__ == "__main__": 127 | main() 128 | -------------------------------------------------------------------------------- /configuration/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "agent": { 3 | "cores": 8, 4 | "ip": "", 5 | "port": 1234, 6 | "params": [ 7 | "train", 8 | "classify", 9 | "server", 10 | "classify_http" 11 | ] 12 | }, 13 | "data": { 14 | "dim": 100, 15 | "file_type": ".jpg", 16 | "labels": [0, 1], 17 | "rotations": 10, 18 | "seed": 2, 19 | "split": 0.255, 20 | "test": "model/data/test", 21 | "test_data": [ 22 | "Im006_1.jpg", 23 | "Im020_1.jpg", 24 | "Im024_1.jpg", 25 | "Im026_1.jpg", 26 | "Im028_1.jpg", 27 | "Im031_1.jpg", 28 | "Im035_0.jpg", 29 | "Im041_0.jpg", 30 | "Im047_0.jpg", 31 | "Im053_1.jpg", 32 | "Im057_1.jpg", 33 | "Im060_1.jpg", 34 | "Im063_1.jpg", 35 | "Im069_0.jpg", 36 | "Im074_0.jpg", 37 | "Im088_0.jpg", 38 | "Im095_0.jpg", 39 | "Im099_0.jpg", 40 | "Im101_0.jpg", 41 | "Im106_0.jpg" 42 | ], 43 | "train_dir": "model/data/train", 44 | "valid_types": [ 45 | ".jpg" 46 | ] 47 | }, 48 | "model": { 49 | "model": "model/all_nano_33_ble_sense.json", 50 | "model_c_array": "model/all_nano_33_ble_sense.cc", 51 | "tfmodel": "model/all_nano_33_ble_sense.tflite", 52 | "weights": "model/all_nano_33_ble_sense.h5" 53 | }, 54 | "train": { 55 | "batch": 100, 56 | "decay_adam": 1e-6, 57 | "epochs": 150, 58 | "learning_rate_adam": 1e-4, 59 | "val_steps": 10 60 | } 61 | } -------------------------------------------------------------------------------- /docs/img/arduino-ide.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/docs/img/arduino-ide.jpg -------------------------------------------------------------------------------- 
/docs/img/arduino-nano-33-ble-sense-sd_bb.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/docs/img/arduino-nano-33-ble-sense-sd_bb.jpg -------------------------------------------------------------------------------- /docs/img/plots/accuracy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/docs/img/plots/accuracy.png -------------------------------------------------------------------------------- /docs/img/plots/auc.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/docs/img/plots/auc.png -------------------------------------------------------------------------------- /docs/img/plots/confusion-matrix.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/docs/img/plots/confusion-matrix.png -------------------------------------------------------------------------------- /docs/img/plots/loss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/docs/img/plots/loss.png -------------------------------------------------------------------------------- /docs/img/plots/precision.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/docs/img/plots/precision.png -------------------------------------------------------------------------------- /docs/img/plots/recall.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/docs/img/plots/recall.png -------------------------------------------------------------------------------- /docs/img/project-banner.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/docs/img/project-banner.jpg -------------------------------------------------------------------------------- /docs/index.md: -------------------------------------------------------------------------------- 1 | # Documentation 2 | 3 | ![ALL Arduino Nano 33 BLE Sense Classifier](img/project-banner.jpg) 4 | 5 | # Welcome 6 | 7 | Welcome to the [ALL Arduino Nano 33 BLE Sense Classifier](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier) official documentation. 8 | 9 |   10 | 11 | #DISCLAIMER 12 | 13 | _This project should be used for research purposes only. 
The purpose of the project is to show the potential of Artificial Intelligence for medical support systems such as diagnostic systems._ 14 | 15 | _Although the model is accurate and shows good results both on paper and in real world testing, it is trained on a small amount of data and needs to be trained on larger datasets to properly evaluate its accuracy._ 16 | 17 | _Developers who have contributed to this repository have experience in using Artificial Intelligence for detecting certain types of cancer. They are not doctors, medical or cancer experts._ 18 | 19 |   20 | 21 | # Installation 22 | 23 | Use the following installation guides to set up your project: 24 | 25 | - [Ubuntu Installation Guide](installation/ubuntu.md) 26 | - [Arduino Installation Guide](installation/arduino.md) 27 | 28 |   29 | 30 | # Usage 31 | 32 | Use the following usage guides to train your classifier and use it on the Arduino: 33 | 34 | - [Python Usage Guide](usage/python.md) 35 | - [Jupyter Notebooks Usage Guide](usage/notebooks.md) 36 | - [Arduino Usage Guide](usage/arduino.md) 37 | 38 |   39 | 40 | # Contributing 41 | Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss encourages and welcomes code contributions, bug fixes and enhancements from the Github community. 42 | 43 | Please read the [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") document for a full guide to forking our repositories and submitting your pull requests. You will also find our code of conduct in the [Code of Conduct](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CODE-OF-CONDUCT.md) document. 44 | 45 | ## Contributors 46 | - [Adam Milton-Barker](https://www.leukemiaairesearch.com/association/volunteers/adam-milton-barker "Adam Milton-Barker") - [Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss](https://www.leukemiaresearchassociation.ai "Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss") President/Founder & Lead Developer, Sabadell, Spain 47 | 48 |   49 | 50 | # Versioning 51 | We use [SemVer](https://semver.org/) for versioning. 52 | 53 |   54 | 55 | # License 56 | This project is licensed under the **MIT License** - see the [LICENSE](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/LICENSE "LICENSE") file for details. 57 | 58 |   59 | 60 | # Bugs/Issues 61 | We use the [repo issues](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/issues "repo issues") to track bugs and general requests related to using this project. See [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") for more info on how to submit bugs, feature requests and proposals. -------------------------------------------------------------------------------- /docs/installation/arduino.md: -------------------------------------------------------------------------------- 1 | # Arduino Installation 2 | 3 | ![ALL Arduino Nano 33 BLE Sense Classifier](../img/project-banner.jpg) 4 | 5 | # Introduction 6 | This guide will take you through the installation process for the **ALL Arduino Nano 33 BLE Sense Classifier** Arduino project. 7 | 8 |   9 | 10 | # Operating System 11 | This project requires the Windows 10 operating system, but may work as described on other operating systems.
12 | 13 |   14 | 15 | # Hardware 16 | The following hardware is required for this project: 17 | 18 | - [Arduino Nano 33 BLE Sense](https://store.arduino.cc/arduino-nano-33-ble-sense) 19 | - [kwmobile Micro SD Card Module for Arduino](https://www.amazon.es/gp/product/B06XHJTGGC) 20 | - [SD Card](https://www.amazon.es/Tarjeta-Memoria-Kingston-32GB-Micro/dp/B00JRZIOIE) 21 | 22 |   23 | 24 | # Software 25 | The following Arduino software libraries are used with this project: 26 | 27 | - [Arduino IDE](https://www.arduino.cc/en/software) 28 | - [Arduino Tensorflow Lite For Microcontrollers](https://github.com/tensorflow/tflite-micro) 29 | - [JpegDecoder](https://github.com/Bodmer/JPEGDecoder) 30 | 31 |   32 | 33 | # Prerequisites 34 | You will need to ensure you have followed the provided guides below: 35 | 36 | - [Ubuntu Installation Guide](../installation/ubuntu.md) 37 | - [Python Usage Guide](../usage/python.md) or [Jupyter Notebooks Usage Guide](../usage/notebooks.md) 38 | - [Getting started with the Arduino Nano 33 BLE Sense](https://www.arduino.cc/en/Guide/NANO33BLESense) 39 | - [Why doesn't the 5V pin work in the Arduino Nano 33 BLE boards?](https://support.arduino.cc/hc/en-us/articles/360014779679-Why-doesn-t-the-5V-pin-work-in-the-Arduino-Nano-33-BLE-boards-) 40 | 41 |   42 | 43 | # Setup 44 | 45 | ![ALL Arduino Nano 33 BLE Sense Classifier](../img/arduino-nano-33-ble-sense-sd_bb.jpg) 46 | 47 | Follow the diagram above to connect your SD card reader to the Arduino Nano 33 BLE Sense. Remember you need to follow the steps in [Why doesn't the 5V pin work in the Arduino Nano 33 BLE boards?](https://support.arduino.cc/hc/en-us/articles/360014779679-Why-doesn-t-the-5V-pin-work-in-the-Arduino-Nano-33-BLE-boards-) to enable 5V on the Arduino Nano BLE Sense. 48 | 49 | Below is a pin guide to help. 50 | 51 | | Arduino Pin | SD Card Pin | 52 | | ---------- | ---------- | 53 | | D10 | CS | 54 | | D11 | MOSI | 55 | | D12 | MISO | 56 | | D13 | SCK | 57 | | 5v | VCC | 58 | | GND | GND | 59 | 60 |   61 | 62 | # Continue 63 | 64 | Now you are ready to use your Arduino Nano 33 BLE Sense. Head over to the [Arduino Usage Guide](../usage/arduino.md) for instructions on how to use your model with the Arduino. 65 | 66 |   67 | 68 | # Contributing 69 | Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss encourages and welcomes code contributions, bug fixes and enhancements from the Github community. 70 | 71 | Please read the [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") document for a full guide to forking our repositories and submitting your pull requests. You will also find our code of conduct in the [Code of Conduct](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CODE-OF-CONDUCT.md) document. 72 | 73 | ## Contributors 74 | - [Adam Milton-Barker](https://www.leukemiaairesearch.com/association/volunteers/adam-milton-barker "Adam Milton-Barker") - [Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss](https://www.leukemiaresearchassociation.ai "Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss") President/Founder & Lead Developer, Sabadell, Spain 75 | 76 |   77 | 78 | # Versioning 79 | We use [SemVer](https://semver.org/) for versioning. 
80 | 81 |   82 | 83 | # License 84 | This project is licensed under the **MIT License** - see the [LICENSE](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/LICENSE "LICENSE") file for details. 85 | 86 |   87 | 88 | # Bugs/Issues 89 | We use the [repo issues](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/issues "repo issues") to track bugs and general requests related to using this project. See [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") for more info on how to submit bugs, feature requests and proposals. -------------------------------------------------------------------------------- /docs/installation/ubuntu.md: -------------------------------------------------------------------------------- 1 | # Ubuntu Installation 2 | 3 | ![ALL Arduino Nano 33 BLE Sense Classifier](../img/project-banner.jpg) 4 | 5 | # Introduction 6 | This guide will take you through the installation process for the **ALL Arduino Nano 33 BLE Sense Classifier** trainer. 7 | 8 |   9 | 10 | # Operating System 11 | This project supports the following operating system(s), but may work as described on other OS. 12 | 13 | - [Ubuntu 20.04](https://releases.ubuntu.com/20.04/) 14 | 15 |   16 | 17 | # Software 18 | This project uses the following libraries. 19 | 20 | - Conda 21 | - Intel® oneAPI AI Analytics Toolkit 22 | - Jupyter Notebooks 23 | - NBConda 24 | - Mlxtend 25 | - Pillow 26 | - Opencv 27 | - Scipy 28 | - Scikit Image 29 | - Scikit Learn 30 | 31 |   32 | 33 | # Clone the repository 34 | 35 | Clone the [ALL Arduino Nano 33 BLE Sense Classifier](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier " ALL Arduino Nano 33 BLE Sense Classifier") repository from the [Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research Project](https://github.com/AMLResearchProject "Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research Project") Github Organization. 36 | 37 | To clone the repository and install the project, make sure you have Git installed. Now navigate to the directory you would like to clone the project to and then use the following command. 38 | 39 | ``` bash 40 | git clone https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier.git 41 | ``` 42 | 43 | This will clone the ALL Arduino Nano 33 BLE Sense Classifier repository. 44 | 45 | ``` bash 46 | ls 47 | ``` 48 | 49 | Using the ls command in your home directory should show you the following. 50 | 51 | ``` bash 52 | ALL-Arduino-Nano-33-BLE-Sense-Classifier 53 | ``` 54 | 55 | Navigate to the **ALL-Arduino-Nano-33-BLE-Sense-Classifier** directory, this is your project root directory for this tutorial. 56 | 57 | ## Developer forks 58 | 59 | Developers from the Github community that would like to contribute to the development of this project should first create a fork, and clone that repository. For detailed information please view the [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/master/CONTRIBUTING.md "CONTRIBUTING") guide. You should pull the latest code from the development branch. 60 | 61 | ``` bash 62 | git clone -b "2.0.0" https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier.git 63 | ``` 64 | 65 | The **-b "2.0.0"** parameter ensures you get the code from the latest master branch. 
Before using the below command please check our latest master branch in the button at the top of the project README. 66 | 67 |   68 | 69 | # Installation 70 | You are now ready to install the ALL Arduino Nano 33 BLE Sense Classifier trainer. All software requirements are included in **scripts/install.sh**. You can run this file on your machine from the project root in terminal. Use the following command: 71 | 72 | ``` bash 73 | sh scripts/install.sh 74 | ``` 75 | 76 | **WARNING:** This script assumes you have not already installed the oneAPI Basekit. 77 | 78 | **WARNING:** This script assumes you have not already installed the oneAPI AI Analytics Toolkit. 79 | 80 | **WARNING:** This script assumes you have an Intel GPU. 81 | 82 | **WARNING:** This script assumes you have already installed the Intel GPU drivers. 83 | 84 | **HINT:** If any of the above are not relevant to you, please comment out the relevant sections below before running this installation script. 85 | 86 |   87 | 88 | # Continue 89 | 90 | Choose one of the following usage guides to train your model: 91 | 92 | - [Python Usage Guide](../usage/python.md) 93 | - [Jupyter Notebooks Usage Guide](../usage/notebooks.md) 94 | 95 |   96 | 97 | # Contributing 98 | Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss encourages and welcomes code contributions, bug fixes and enhancements from the Github community. 99 | 100 | Please read the [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") document for a full guide to forking our repositories and submitting your pull requests. You will also find our code of conduct in the [Code of Conduct](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CODE-OF-CONDUCT.md) document. 101 | 102 | ## Contributors 103 | - [Adam Milton-Barker](https://www.leukemiaairesearch.com/association/volunteers/adam-milton-barker "Adam Milton-Barker") - [Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss](https://www.leukemiaresearchassociation.ai "Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss") President/Founder & Lead Developer, Sabadell, Spain 104 | 105 |   106 | 107 | # Versioning 108 | We use [SemVer](https://semver.org/) for versioning. 109 | 110 |   111 | 112 | # License 113 | This project is licensed under the **MIT License** - see the [LICENSE](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/LICENSE "LICENSE") file for details. 114 | 115 |   116 | 117 | # Bugs/Issues 118 | We use the [repo issues](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/issues "repo issues") to track bugs and general requests related to using this project. See [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") for more info on how to submit bugs, feature requests and proposals. -------------------------------------------------------------------------------- /docs/usage/arduino.md: -------------------------------------------------------------------------------- 1 | # Arduino Usage 2 | 3 | ![ALL Arduino Nano 33 BLE Sense Classifier](../img/project-banner.jpg) 4 | 5 | # Introduction 6 | This guide will take you through the using the **ALL Arduino Nano 33 BLE Sense Classifier** to detect Acute Lymphoblastic Leukemia. 
7 | 8 |   9 | 10 | # Installation 11 | First you need to install the required software for training the model and setup your Arduino Nano 33 BLE Sense. Below are the available installation guides: 12 | 13 | - [Ubuntu installation guide](../installation/ubuntu.md). 14 | - [Arduino installation guide](../installation/arduino.md). 15 | 16 |   17 | 18 | # Training 19 | Before you can start to use this tutorial you must have already trained your classifier, to do so use one of the following guides: 20 | 21 | - [Python Usage Guide](../usage/python.md). 22 | - [Jupyter Notebooks Usage Guide](../usage/notebooks.md) 23 | 24 |   25 | 26 | # Arduino IDE 27 | 28 | ![Arduino IDE](../img/arduino-ide.jpg) 29 | 30 | Open your Arduino IDE and open the **all_nano_33_ble_sense** sketch located in the Arduino folder in the project root. 31 | 32 |   33 | 34 | # C Array Model 35 | Now you need to import your C array model into the Arduino project. On your development machine navigate to the **model** dir located in the project root and open the **all_nano_33_ble_sense.cc** file. First you need to copy the model and replace everything within **all_model[]{}** with your newly created model. Next you need to replace **all_model_len** with the actual length of your model which is found at the bottom of your model file. 36 | 37 |   38 | 39 | # Test Data 40 | During training the test data was resized and moved to the **model/data/test/** directory. Before you can continue you need to upload these files to the SD card. 41 | 42 |   43 | 44 | # Run The Classifier 45 | Now it is time to run your classifier on the Arduino Nano 33 BLE Sense. Make sure you are connected to your Arduino and click on the **upload** button. Once the model is uploaded it will start to run, open your serial monitor and watch the output. You will see the onboard LED on the Arduino Nano 33 BLE Sense turn **red** if Acute Lymphoblastic Leukemia is detected and **green** if it is not. 46 | 47 | ``` bash 48 | 19:22:40.139 -> Initialising SD card... 49 | 19:22:40.148 -> Initialisation done. 
50 | 19:22:40.158 -> 51 | 19:22:40.163 -> Model input info 52 | 19:22:40.274 -> =============== 53 | 19:22:40.284 -> Dimensions: 4 54 | 19:22:40.299 -> Dim 1 size: 1 55 | 19:22:40.314 -> Dim 2 size: 100 56 | 19:22:40.328 -> Dim 3 size: 100 57 | 19:22:40.343 -> Dim 4 size: 3 58 | 19:22:40.354 -> Input type: 9 59 | 19:22:40.365 -> =============== 60 | 19:22:40.375 -> 61 | 19:22:40.381 -> Im006_1.jpg 62 | 19:22:40.458 -> =============== 63 | 19:22:40.468 -> ALL positive score: -7 64 | 19:22:40.483 -> ALL negative score: -18 65 | 19:22:40.504 -> True Positive 66 | 19:22:40.515 -> 67 | 19:22:40.521 -> Im020_1.jpg 68 | 19:22:43.194 -> =============== 69 | 19:22:43.201 -> ALL positive score: -14 70 | 19:22:43.223 -> ALL negative score: -6 71 | 19:22:43.229 -> False Negative 72 | 19:22:43.238 -> 73 | 19:22:43.241 -> Im024_1.jpg 74 | 19:22:45.916 -> =============== 75 | 19:22:45.922 -> ALL positive score: 18 76 | 19:22:45.928 -> ALL negative score: 24 77 | 19:22:45.938 -> False Negative 78 | 19:22:45.946 -> 79 | 19:22:45.950 -> Im026_1.jpg 80 | 19:22:48.680 -> =============== 81 | 19:22:48.685 -> ALL positive score: 27 82 | 19:22:48.695 -> ALL negative score: 24 83 | 19:22:48.705 -> True Positive 84 | 19:22:48.714 -> 85 | 19:22:48.719 -> Im028_1.jpg 86 | 19:22:51.409 -> =============== 87 | 19:22:51.416 -> ALL positive score: 13 88 | 19:22:51.427 -> ALL negative score: 18 89 | 19:22:51.439 -> False Negative 90 | 19:22:51.448 -> 91 | 19:22:51.454 -> Im031_1.jpg 92 | 19:22:54.138 -> =============== 93 | 19:22:54.148 -> ALL positive score: -13 94 | 19:22:54.164 -> ALL negative score: -16 95 | 19:22:54.179 -> True Positive 96 | 19:22:54.183 -> 97 | 19:22:54.188 -> Im035_0.jpg 98 | 19:22:56.883 -> =============== 99 | 19:22:56.890 -> ALL positive score: 12 100 | 19:22:56.901 -> ALL negative score: 20 101 | 19:22:56.908 -> True Negative 102 | 19:22:56.916 -> 103 | 19:22:56.921 -> Im041_0.jpg 104 | 19:22:59.631 -> =============== 105 | 19:22:59.640 -> ALL positive score: 14 106 | 19:22:59.653 -> ALL negative score: 6 107 | 19:22:59.663 -> False Positive 108 | 19:22:59.673 -> 109 | 19:22:59.679 -> Im047_0.jpg 110 | 19:23:02.365 -> =============== 111 | 19:23:02.373 -> ALL positive score: 25 112 | 19:23:02.384 -> ALL negative score: 20 113 | 19:23:02.393 -> False Positive 114 | 19:23:02.399 -> 115 | 19:23:02.404 -> Im053_1.jpg 116 | 19:23:05.160 -> =============== 117 | 19:23:05.174 -> ALL positive score: 39 118 | 19:23:05.190 -> ALL negative score: 5 119 | 19:23:05.202 -> True Positive 120 | 19:23:05.218 -> 121 | 19:23:05.223 -> Im057_1.jpg 122 | 19:23:07.881 -> =============== 123 | 19:23:07.896 -> ALL positive score: 6 124 | 19:23:07.912 -> ALL negative score: -1 125 | 19:23:07.928 -> True Positive 126 | 19:23:07.937 -> 127 | 19:23:07.942 -> Im060_1.jpg 128 | 19:23:10.618 -> =============== 129 | 19:23:10.630 -> ALL positive score: 25 130 | 19:23:10.648 -> ALL negative score: 12 131 | 19:23:10.661 -> True Positive 132 | 19:23:10.667 -> 133 | 19:23:10.673 -> Im063_1.jpg 134 | 19:23:13.359 -> =============== 135 | 19:23:13.368 -> ALL positive score: 23 136 | 19:23:13.382 -> ALL negative score: -52 137 | 19:23:13.400 -> True Positive 138 | 19:23:13.411 -> 139 | 19:23:13.417 -> Im069_0.jpg 140 | 19:23:16.097 -> =============== 141 | 19:23:16.108 -> ALL positive score: -4 142 | 19:23:16.129 -> ALL negative score: 34 143 | 19:23:16.148 -> True Negative 144 | 19:23:16.159 -> 145 | 19:23:16.164 -> Im074_0.jpg 146 | 19:23:18.812 -> =============== 147 | 19:23:18.819 -> ALL positive score: 22 148 | 19:23:18.834 -> 
ALL negative score: 18 149 | 19:23:18.850 -> False Positive 150 | 19:23:18.861 -> 151 | 19:23:18.867 -> Im088_0.jpg 152 | 19:23:21.564 -> =============== 153 | 19:23:21.575 -> ALL positive score: -21 154 | 19:23:21.594 -> ALL negative score: -24 155 | 19:23:21.613 -> False Positive 156 | 19:23:21.625 -> 157 | 19:23:21.630 -> Im095_0.jpg 158 | 19:23:24.274 -> =============== 159 | 19:23:24.284 -> ALL positive score: -33 160 | 19:23:24.302 -> ALL negative score: -38 161 | 19:23:24.321 -> False Positive 162 | 19:23:24.333 -> 163 | 19:23:24.339 -> Im099_0.jpg 164 | 19:23:27.014 -> =============== 165 | 19:23:27.025 -> ALL positive score: -46 166 | 19:23:27.042 -> ALL negative score: -22 167 | 19:23:27.062 -> True Negative 168 | 19:23:27.074 -> 169 | 19:23:27.080 -> Im101_0.jpg 170 | 19:23:29.769 -> =============== 171 | 19:23:29.779 -> ALL positive score: -17 172 | 19:23:29.796 -> ALL negative score: -14 173 | 19:23:29.816 -> True Negative 174 | 19:23:29.830 -> 175 | 19:23:29.837 -> Im106_0.jpg 176 | 19:23:32.530 -> =============== 177 | 19:23:32.545 -> ALL positive score: -42 178 | 19:23:32.562 -> ALL negative score: -45 179 | 19:23:32.587 -> False Positive 180 | 19:23:32.602 -> 181 | 19:23:32.609 -> True Positives: 7 182 | 19:23:34.833 -> False Positives: 6 183 | 19:23:34.844 -> True Negatives: 4 184 | 19:23:34.858 -> False Negatives: 3 185 | ``` 186 | 187 |   188 | 189 | # Conclusion 190 | 191 | We see that our model, which can correctly classify all twenty test images on the development machine, only gets 11/20 right when running on the Arduino. There are some additional testing steps, which will be introduced in V2, that will allow us to test the Arduino model on our development machine and help identify where the bug is coming from; a minimal sketch of such a check is provided further down this page. For now this is a good first attempt at building a classifier to detect Acute Lymphoblastic Leukemia on Arduino. If you would like to view the ongoing issue in the Tensorflow Micro repository, [click here](https://github.com/tensorflow/tflite-micro/issues/287). Thanks to [Advait Jain](https://github.com/advaitjain) for the assistance with this issue. 192 | 193 |   194 | 195 | # Continue 196 | 197 | Now you are ready to set up your Arduino Nano 33 BLE Sense. Head over to the [Arduino Installation Guide](../installation/arduino.md) to prepare your Arduino. 198 | 199 |   200 | 201 | # Contributing 202 | Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss encourages and welcomes code contributions, bug fixes and enhancements from the Github community. 203 | 204 | Please read the [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") document for a full guide to forking our repositories and submitting your pull requests. You will also find our code of conduct in the [Code of Conduct](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CODE-OF-CONDUCT.md) document. 205 | 206 | ## Contributors 207 | - [Adam Milton-Barker](https://www.leukemiaairesearch.com/association/volunteers/adam-milton-barker "Adam Milton-Barker") - [Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss](https://www.leukemiaresearchassociation.ai "Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss") President/Founder & Lead Developer, Sabadell, Spain 208 | 209 |   210 | 211 | # Versioning 212 | We use [SemVer](https://semver.org/) for versioning.
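
As mentioned in the conclusion above, one of the most useful follow-up tests is to run the exported quantized model on the development machine and compare its scores with the serial output for the same file. Below is a minimal sketch of such a check using the standard TensorFlow Lite Python interpreter. It assumes the exported model at **model/all_nano_33_ble_sense.tflite** and the resized test images in **model/data/test/**; the 0-1 scaling applied before quantization is an assumption and must match whatever preprocessing the training pipeline used, so treat this as a starting point rather than the project's own test code.

``` python
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the quantized model exported by the trainer.
interpreter = tf.lite.Interpreter(model_path="model/all_nano_33_ble_sense.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One of the twenty held-out test images (already resized to 100x100 during training).
img = Image.open("model/data/test/Im006_1.jpg").resize((100, 100))
x = np.asarray(img, dtype=np.float32) / 255.0  # assumed scaling, see note above

# Quantize the input using the scale/zero point stored in the model.
scale, zero_point = inp["quantization"]
x_q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(inp["dtype"])

interpreter.set_tensor(inp["index"], x_q[np.newaxis, ...])
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]

# Index 0 is the ALL negative class and index 1 the ALL positive class,
# matching kNotAllIndex and kAllIndex in model_settings.h.
print("negative:", scores[0], "positive:", scores[1])
```

If the scores printed here match the training-machine results but differ from what the Arduino prints for the same image, the discrepancy is most likely introduced by the on-device image decoding rather than by the model itself.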
213 | 214 |   215 | 216 | # License 217 | This project is licensed under the **MIT License** - see the [LICENSE](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/LICENSE "LICENSE") file for details. 218 | 219 |   220 | 221 | # Bugs/Issues 222 | We use the [repo issues](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/issues "repo issues") to track bugs and general requests related to using this project. See [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") for more info on how to submit bugs, feature requests and proposals. -------------------------------------------------------------------------------- /docs/usage/notebooks.md: -------------------------------------------------------------------------------- 1 | # Notebooks Usage 2 | 3 | ![ALL Arduino Nano 33 BLE Sense Classifier](../img/project-banner.jpg) 4 | 5 | # Introduction 6 | This guide will take you through using the **ALL Arduino Nano 33 BLE Sense Classifier** Jupyter Notebook to train and test your classifier. 7 | 8 |   9 | 10 | # Installation 11 | First you need to install the required software for training the model. Below are the available installation guides: 12 | 13 | - [Ubuntu installation guide](../installation/ubuntu.md). 14 | 15 |   16 | 17 | # Network Architecture 18 | We will build a Convolutional Neural Network with the following architecture: 19 | 20 | - Average pooling layer 21 | - Conv layer 22 | - Depthwise conv layer 23 | - Flatten layer 24 | - Fully connected layer 25 | - Softmax layer 26 | 27 |   28 | 29 | # Data 30 | You need to be granted access to use the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset. You can find the application form and information about getting access to the dataset on [this page](https://homes.di.unimi.it/scotti/all/#download) as well as information on how to contribute back to the project [here](https://homes.di.unimi.it/scotti/all/results.php). 31 | 32 | _If you are not able to obtain a copy of the dataset please feel free to try this tutorial on your own dataset._ 33 | 34 | Once you have your data you need to add it to the project filesystem. You will notice the data folder in the Model directory, **model/data**, inside you have **train** & **test**. Add all of the images from the ALL_IDB1 dataset to the **model/data/train** folder. 35 | 36 | ## Data Augmentation 37 | 38 | We will create an augmented dataset based on the [Leukemia Blood Cell Image Classification Using Convolutional Neural Network](http://www.ijcte.org/vol10/1198-H0012.pdf "Leukemia Blood Cell Image Classification Using Convolutional Neural Network") by T. T. P. Thanh, Caleb Vununu, Sukhrob Atoev, Suk-Hwan Lee, and Ki-Ryong Kwon. 39 | 40 | ## Application testing data 41 | 42 | In the data processing stage, ten negative images and ten positive images are removed from the dataset and moved to the **model/data/test/** directory. This data is not seen by the network during the training process, and is used to test the performance of the model. 43 | 44 | To ensure your model gets the same results, you should use the same test images. You can also try with your own image selection, however results may vary. 45 | 46 |   47 | 48 | # Start Jupyter Notebooks 49 | Now you need to start Jupyter Notebooks. In your project root execute the following command, replacing the IP and port as desired. 
50 | 51 | ``` bash 52 | jupyter notebook --ip YourIP --port 8888 53 | ``` 54 | 55 |   56 | 57 | # Open The Training Notebook 58 | 59 | Navigate to the URL provided when starting Jupyter Notebooks and you should be in the project root. Now navigate to **notebooks/classifier.ipynb**. With everything set up you can now begin training. Run the Jupyter Notebook and wait for it to finish. 60 | 61 |   62 | 63 | # Preparing For Arduino 64 | 65 | During training the model was converted to TFLite and optimized with full integer quantization. The TFLite model was then converted to C array ready to be deployed to our Arduino Nano 33 BLE Sense. The test data that was removed before training was converted to 100px x 100px so as not to require additional resizing on the Arduino. 66 | 67 |   68 | 69 | # Conclusion 70 | 71 | Here we trained a deep learning model for Acute Lymphoblastic Leukemia detection utilizing Intel® Optimization for Tensorflow* from the Intel® oneAPI AI Analytics Toolkit to optimize and accelarate training. We introduced a 6 layer deep learning model and applied data augmentation to increase the training data. 72 | 73 | We trained our model with a target of 150 epochs and used early stopping to avoid overfitting. The model trained for 27 epochs resulting in a fairly good fit, and accuracy/precision/recall and AUC are satisfying. In addition the model reacts well during testing classifying each of the twenty unseen test images correctly. 74 | 75 |   76 | 77 | # Continue 78 | 79 | Now you are ready to set up your Arduino Nano 33 BLE Sense. Head over to the [Arduino Installation Guide](../installation/arduino.md) to prepare your Arduino. 80 | 81 |   82 | 83 | # Contributing 84 | Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss encourages and welcomes code contributions, bug fixes and enhancements from the Github community. 85 | 86 | Please read the [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") document for a full guide to forking our repositories and submitting your pull requests. You will also find our code of conduct in the [Code of Conduct](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CODE-OF-CONDUCT.md) document. 87 | 88 | ## Contributors 89 | - [Adam Milton-Barker](https://www.leukemiaairesearch.com/association/volunteers/adam-milton-barker "Adam Milton-Barker") - [Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss](https://www.leukemiaresearchassociation.ai "Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss") President/Founder & Lead Developer, Sabadell, Spain 90 | 91 |   92 | 93 | # Versioning 94 | We use [SemVer](https://semver.org/) for versioning. 95 | 96 |   97 | 98 | # License 99 | This project is licensed under the **MIT License** - see the [LICENSE](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/LICENSE "LICENSE") file for details. 100 | 101 |   102 | 103 | # Bugs/Issues 104 | We use the [repo issues](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/issues "repo issues") to track bugs and general requests related to using this project. See [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") for more info on how to submit bugs, feature requests and proposals. 
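
Relating to the **Preparing For Arduino** section above: the notebook performs the TFLite conversion for you, but for reference the sketch below shows how a full integer quantization conversion is typically done with the standard TensorFlow Lite converter. The model and weight paths are taken from **configuration/config.json**; using the resized test images as the representative dataset and the 0-1 input scaling are assumptions, so check them against the notebook before relying on this.

``` python
import glob
import numpy as np
import tensorflow as tf
from PIL import Image

# Rebuild the trained Keras model from the saved architecture and weights.
with open("model/all_nano_33_ble_sense.json") as f:
    model = tf.keras.models.model_from_json(f.read())
model.load_weights("model/all_nano_33_ble_sense.h5")

def representative_dataset():
    # A handful of in-domain images calibrates the int8 ranges. The 0-1
    # scaling is an assumption and must match the training preprocessing.
    for path in sorted(glob.glob("model/data/test/*.jpg")):
        img = Image.open(path).resize((100, 100))
        x = np.asarray(img, dtype=np.float32) / 255.0
        yield [x[np.newaxis, ...]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("all_nano_33_ble_sense_check.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting .tflite file is then turned into a C array (the contents of **model/all_nano_33_ble_sense.cc**) with a tool such as `xxd -i`, ready to be pasted into the Arduino sketch as described in the Arduino Usage Guide.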
-------------------------------------------------------------------------------- /docs/usage/python.md: -------------------------------------------------------------------------------- 1 | # Python Usage 2 | 3 | ![ALL Arduino Nano 33 BLE Sense Classifier](../img/project-banner.jpg) 4 | 5 | # Introduction 6 | This guide will take you through the using the **ALL Arduino Nano 33 BLE Sense Classifier** Python trainer to train and test your classifier. 7 | 8 |   9 | 10 | # Installation 11 | First you need to install the required software for training the model. Below are the available installation guides: 12 | 13 | - [Ubuntu installation guide](../installation/ubuntu.md). 14 | 15 |   16 | 17 | # Network Architecture 18 | We will build a Convolutional Neural Network with the following architecture: 19 | 20 | - Average pooling layer 21 | - Conv layer 22 | - Depthwise conv layer 23 | - Flatten layer 24 | - Fully connected layer 25 | - Softmax layer 26 | 27 |   28 | 29 | # Data 30 | You need to be granted access to use the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset. You can find the application form and information about getting access to the dataset on [this page](https://homes.di.unimi.it/scotti/all/#download) as well as information on how to contribute back to the project [here](https://homes.di.unimi.it/scotti/all/results.php). 31 | 32 | _If you are not able to obtain a copy of the dataset please feel free to try this tutorial on your own dataset._ 33 | 34 | Once you have your data you need to add it to the project filesystem. You will notice the data folder in the Model directory, **model/data**, inside you have **train** & **test**. Add all of the images from the ALL_IDB1 dataset to the **model/data/train** folder. 35 | 36 | ## Data Augmentation 37 | 38 | We will create an augmented dataset based on the [Leukemia Blood Cell Image Classification Using Convolutional Neural Network](http://www.ijcte.org/vol10/1198-H0012.pdf "Leukemia Blood Cell Image Classification Using Convolutional Neural Network") by T. T. P. Thanh, Caleb Vununu, Sukhrob Atoev, Suk-Hwan Lee, and Ki-Ryong Kwon. 39 | 40 | ## Application testing data 41 | 42 | In the data processing stage, ten negative images and ten positive images are removed from the dataset and moved to the **model/data/test/** directory. This data is not seen by the network during the training process, and is used to test the performance of the model. 43 | 44 | To ensure your model gets the same results, you should use the same test images. You can also try with your own image selection, however results may vary. 45 | 46 | To specify which test images to use modify the [configuration/config.json](../configuration/config.json) file as shown below: 47 | 48 | ``` json 49 | "test_data": [ 50 | "Im006_1.jpg", 51 | "Im020_1.jpg", 52 | "Im024_1.jpg", 53 | "Im026_1.jpg", 54 | "Im028_1.jpg", 55 | "Im031_1.jpg", 56 | "Im035_0.jpg", 57 | "Im041_0.jpg", 58 | "Im047_0.jpg", 59 | "Im053_1.jpg", 60 | "Im057_1.jpg", 61 | "Im060_1.jpg", 62 | "Im063_1.jpg", 63 | "Im069_0.jpg", 64 | "Im074_0.jpg", 65 | "Im088_0.jpg", 66 | "Im095_0.jpg", 67 | "Im099_0.jpg", 68 | "Im101_0.jpg", 69 | "Im106_0.jpg" 70 | ], 71 | ``` 72 | 73 |   74 | 75 | # Configuration 76 | 77 | All configuration can be found in the **configuration/config.json** file. 
78 | 79 | ``` json 80 | { 81 | "agent": { 82 | "cores": 8, 83 | "ip": "", 84 | "port": 1234, 85 | "params": [ 86 | "train", 87 | "classify", 88 | "server", 89 | "classify_http" 90 | ] 91 | }, 92 | "data": { 93 | "dim": 100, 94 | "file_type": ".jpg", 95 | "labels": [0, 1], 96 | "rotations": 10, 97 | "seed": 2, 98 | "split": 0.255, 99 | "test": "model/data/test", 100 | "test_data": [ 101 | "Im006_1.jpg", 102 | "Im020_1.jpg", 103 | "Im024_1.jpg", 104 | "Im026_1.jpg", 105 | "Im028_1.jpg", 106 | "Im031_1.jpg", 107 | "Im035_0.jpg", 108 | "Im041_0.jpg", 109 | "Im047_0.jpg", 110 | "Im053_1.jpg", 111 | "Im057_1.jpg", 112 | "Im060_1.jpg", 113 | "Im063_1.jpg", 114 | "Im069_0.jpg", 115 | "Im074_0.jpg", 116 | "Im088_0.jpg", 117 | "Im095_0.jpg", 118 | "Im099_0.jpg", 119 | "Im101_0.jpg", 120 | "Im106_0.jpg" 121 | ], 122 | "train_dir": "model/data/train", 123 | "valid_types": [ 124 | ".jpg" 125 | ] 126 | }, 127 | "model": { 128 | "model": "model/all_nano_33_ble_sense.json", 129 | "model_c_array": "model/all_nano_33_ble_sense.cc", 130 | "tfmodel": "model/all_nano_33_ble_sense.tflite", 131 | "weights": "model/all_nano_33_ble_sense.h5" 132 | }, 133 | "train": { 134 | "batch": 100, 135 | "decay_adam": 1e-6, 136 | "epochs": 150, 137 | "learning_rate_adam": 1e-4, 138 | "val_steps": 10 139 | } 140 | } 141 | ``` 142 | 143 | You should update the following values: 144 | 145 | - **agent->cores** Should represent the amount of cores your CPU has. 146 | - **agent->ip** Should be the IP of the machine you will run your test server on. 147 | - **agent->port** Should be the port you will run your test server on. 148 | 149 | You can modify the values in the train object as required, however to ensure you achieve the same results you can leave them as they are. 150 | 151 | # Training 152 | Now you are ready to train your model. 153 | 154 | ## Metrics 155 | We can use metrics to measure the effectiveness of our model. In this network you will use the following metrics: 156 | 157 | ``` 158 | tf.keras.metrics.BinaryAccuracy(name='accuracy'), 159 | tf.keras.metrics.Precision(name='precision'), 160 | tf.keras.metrics.Recall(name='recall'), 161 | tf.keras.metrics.AUC(name='auc') 162 | ``` 163 | 164 | These metrics will be displayed and plotted once our model is trained. A useful tutorial while working on the metrics was the [Classification on imbalanced data](https://www.tensorflow.org/tutorials/structured_data/imbalanced_data) tutorial on Tensorflow's website. 165 | 166 | ## Start Training 167 | Ensuring you have completed all previous steps, you can start training using the following command. 168 | 169 | ``` bash 170 | python classifier.py train 171 | ``` 172 | 173 | This tells the application to start training the model. 174 | 175 | ## Training Data 176 | First the training and validation data will be prepared. 
177 | 178 | ``` bash 179 | 2021-07-18 17:42:26,075 - Classifier - INFO - Augmented data size: 1584 180 | 2021-07-18 17:42:26,075 - Classifier - INFO - Negative data size: 882 181 | 2021-07-18 17:42:26,076 - Classifier - INFO - Positive data size: 702 182 | 2021-07-18 17:42:26,076 - Classifier - INFO - Augmented data shape: (1584, 100, 100, 3) 183 | 2021-07-18 17:42:26,076 - Classifier - INFO - Labels shape: (1584, 2) 184 | 2021-07-18 17:42:26,267 - Classifier - INFO - Training data: (1180, 100, 100, 3) 185 | 2021-07-18 17:42:26,267 - Classifier - INFO - Training labels: (1180, 2) 186 | 2021-07-18 17:42:26,267 - Classifier - INFO - Validation data: (404, 100, 100, 3) 187 | 2021-07-18 17:42:26,267 - Classifier - INFO - Validation labels: (404, 2) 188 | 2021-07-18 17:42:26,267 - Classifier - INFO - Data preperation complete. 189 | ``` 190 | 191 | ### Model Summary 192 | 193 | Before the model begins training, you will be shown the model summary. 194 | 195 | ``` bash 196 | Model: "AllANBS" 197 | _________________________________________________________________ 198 | Layer (type) Output Shape Param # 199 | ================================================================= 200 | average_pooling2d (AveragePo (None, 50, 50, 3) 0 201 | _________________________________________________________________ 202 | conv2d (Conv2D) (None, 46, 46, 30) 2280 203 | _________________________________________________________________ 204 | depthwise_conv2d (DepthwiseC (None, 17, 17, 30) 27030 205 | _________________________________________________________________ 206 | flatten (Flatten) (None, 8670) 0 207 | _________________________________________________________________ 208 | dense (Dense) (None, 2) 17342 209 | _________________________________________________________________ 210 | activation (Activation) (None, 2) 0 211 | ================================================================= 212 | Total params: 46,652 213 | Trainable params: 46,652 214 | Non-trainable params: 0 215 | _________________________________________________________________ 216 | 2021-07-18 17:42:26,323 - Classifier - INFO - Network initialization complete. 217 | 2021-07-18 17:42:26,324 - Classifier - INFO - Using Adam Optimizer. 218 | 2021-07-18 17:42:26,324 - Classifier - INFO - Using Early Stopping. 219 | ``` 220 | 221 | ## Training Results 222 | Below are the training results for 28 epochs. 223 | 224 | ![Accuracy](../img/plots/accuracy.png) 225 | 226 | _Fig 1. Accuracy_ 227 | 228 | ![Loss](../img/plots/loss.png) 229 | 230 | _Fig 2. Loss_ 231 | 232 | ![Precision](../img/plots/precision.png) 233 | 234 | _Fig 3. Precision_ 235 | 236 | ![Recall](../img/plots/recall.png) 237 | 238 | _Fig 4. Recall_ 239 | 240 | ![AUC](../img/plots/auc.png) 241 | 242 | _Fig 5. AUC_ 243 | 244 | ![Confusion Matrix](../img/plots/confusion-matrix.png) 245 | 246 | _Fig 6. 
Confusion Matrix_ 247 | 248 | 249 | ``` bash 250 | 2021-07-18 17:47:55,953 - Classifier - INFO - Metrics: loss 0.2371470034122467 251 | 2021-07-18 17:47:55,953 - Classifier - INFO - Metrics: acc 0.9331682920455933 252 | 2021-07-18 17:47:55,953 - Classifier - INFO - Metrics: precision 0.9331682920455933 253 | 2021-07-18 17:47:55,953 - Classifier - INFO - Metrics: recall 0.9331682920455933 254 | 2021-07-18 17:47:55,953 - Classifier - INFO - Metrics: auc 0.9677298069000244 255 | 2021-07-18 17:47:56,536 - Classifier - INFO - Confusion Matrix: [[217 4] [ 23 160]] 256 | 2021-07-18 17:47:56,633 - Classifier - INFO - True Positives: 160(39.603960396039604%) 257 | 2021-07-18 17:47:56,633 - Classifier - INFO - False Positives: 4(0.9900990099009901%) 258 | 2021-07-18 17:47:56,633 - Classifier - INFO - True Negatives: 217(53.71287128712871%) 259 | 2021-07-18 17:47:56,633 - Classifier - INFO - False Negatives: 23(5.693069306930693%) 260 | 2021-07-18 17:47:56,633 - Classifier - INFO - Specificity: 0.9819004524886877 261 | 2021-07-18 17:47:56,633 - Classifier - INFO - Misclassification: 27(6.683168316831683%) 262 | ``` 263 | 264 | ## Metrics Overview 265 | | Accuracy | Recall | Precision | AUC/ROC | 266 | | ---------- | ---------- | ---------- | ---------- | 267 | | 0.9331682920455933 | 0.9331682920455933 | 0.9331682920455933 | 0.9677298069000244 | 268 | 269 | ## Figures Of Merit 270 | | Figures of merit | Amount/Value | Percentage | 271 | | -------------------- | ----- | ---------- | 272 | | True Positives | 160 | 39.603960396039604% | 273 | | False Positives | 4 | 0.9900990099009901% | 274 | | True Negatives | 217 | 53.71287128712871% | 275 | | False Negatives | 23 | 5.693069306930693% | 276 | | Misclassification | 27 | 6.683168316831683% | 277 | | Sensitivity / Recall | 0.9331682920455933 | 93% | 278 | | Specificity | 0.9819004524886877 | 98% | 279 | 280 |   281 | 282 | # Testing 283 | 284 | Now you will test the classifier on your training machine. You will use the 20 images that were removed from the training data in a previous part of this tutorial. 285 | 286 | To run the classifier in test mode use the following command: 287 | 288 | ``` 289 | python3 classifier.py classify 290 | ``` 291 | 292 | You should see the following which shows you the network architecture: 293 | 294 | ``` 295 | Model: "AllANBS" 296 | _________________________________________________________________ 297 | Layer (type) Output Shape Param # 298 | ================================================================= 299 | average_pooling2d (AveragePo (None, 50, 50, 3) 0 300 | _________________________________________________________________ 301 | conv2d (Conv2D) (None, 46, 46, 30) 2280 302 | _________________________________________________________________ 303 | depthwise_conv2d (DepthwiseC (None, 17, 17, 30) 27030 304 | _________________________________________________________________ 305 | flatten (Flatten) (None, 8670) 0 306 | _________________________________________________________________ 307 | dense (Dense) (None, 2) 17342 308 | _________________________________________________________________ 309 | activation (Activation) (None, 2) 0 310 | ================================================================= 311 | Total params: 46,652 312 | Trainable params: 46,652 313 | Non-trainable params: 0 314 | _________________________________________________________________ 315 | ``` 316 | 317 | Finally the application will start processing the test images and the results will be displayed in the console. 
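Under the hood the classifier loads the model architecture from **model/all_nano_33_ble_sense.json**, restores the weights from **model/all_nano_33_ble_sense.h5** and then loops through the **model/data/test** directory, resizing and reshaping each image before prediction. The snippet below is a simplified sketch of that loop, based on the `test()` method in `modules/model.py`; `tf_model` is assumed to be the restored Keras model.

``` python
import os
import time

import cv2
import numpy as np

def classify_directory(tf_model, test_dir="model/data/test", dim=100):
    """Simplified sketch of the per-image test loop in modules/model.py."""
    for test_file in os.listdir(test_dir):
        if not test_file.endswith(".jpg"):
            continue

        start = time.time()
        img = cv2.imread(os.path.join(test_dir, test_file)).astype(np.float32)
        img = cv2.resize(img, (dim, dim))
        dx, dy, dz = img.shape
        img = img.reshape((-1, dx, dy, dz)) / 255.0   # same reshape as model.reshape()

        prediction = int(np.argmax(tf_model.predict(img), axis=-1)[0])
        benchmark = time.time() - start

        # Ground truth is encoded in the filename: _1 = positive, _0 = negative.
        actual = 1 if "_1." in test_file else 0
        print(test_file, "prediction:", prediction, "actual:", actual,
              "time:", benchmark)
```

The results logged to the console will look similar to this: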
318 | 319 | ``` 320 | 2021-07-18 17:51:28,684 - Classifier - INFO - Loaded test image model/data/test/Im063_1.jpg 321 | 2021-07-18 17:51:28,804 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly detected (True Positive) in 0.12069535255432129 seconds. 322 | 2021-07-18 17:51:28,804 - Classifier - INFO - Loaded test image model/data/test/Im028_1.jpg 323 | 2021-07-18 17:51:28,838 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly detected (True Positive) in 0.03429007530212402 seconds. 324 | 2021-07-18 17:51:28,839 - Classifier - INFO - Loaded test image model/data/test/Im106_0.jpg 325 | 2021-07-18 17:51:28,872 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly not detected (True Negative) in 0.03346753120422363 seconds. 326 | 2021-07-18 17:51:28,872 - Classifier - INFO - Loaded test image model/data/test/Im101_0.jpg 327 | 2021-07-18 17:51:28,906 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly not detected (True Negative) in 0.034415245056152344 seconds. 328 | 2021-07-18 17:51:28,907 - Classifier - INFO - Loaded test image model/data/test/Im024_1.jpg 329 | 2021-07-18 17:51:28,939 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly detected (True Positive) in 0.032686471939086914 seconds. 330 | 2021-07-18 17:51:28,940 - Classifier - INFO - Loaded test image model/data/test/Im074_0.jpg 331 | 2021-07-18 17:51:28,972 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly not detected (True Negative) in 0.03266596794128418 seconds. 332 | 2021-07-18 17:51:28,973 - Classifier - INFO - Loaded test image model/data/test/Im035_0.jpg 333 | 2021-07-18 17:51:29,005 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly not detected (True Negative) in 0.032935142517089844 seconds. 334 | 2021-07-18 17:51:29,006 - Classifier - INFO - Loaded test image model/data/test/Im006_1.jpg 335 | 2021-07-18 17:51:29,039 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly detected (True Positive) in 0.03386235237121582 seconds. 336 | 2021-07-18 17:51:29,040 - Classifier - INFO - Loaded test image model/data/test/Im020_1.jpg 337 | 2021-07-18 17:51:29,076 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly detected (True Positive) in 0.0370943546295166 seconds. 338 | 2021-07-18 17:51:29,077 - Classifier - INFO - Loaded test image model/data/test/Im095_0.jpg 339 | 2021-07-18 17:51:29,109 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly not detected (True Negative) in 0.03277897834777832 seconds. 340 | 2021-07-18 17:51:29,110 - Classifier - INFO - Loaded test image model/data/test/Im069_0.jpg 341 | 2021-07-18 17:51:29,143 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly not detected (True Negative) in 0.03318381309509277 seconds. 342 | 2021-07-18 17:51:29,143 - Classifier - INFO - Loaded test image model/data/test/Im031_1.jpg 343 | 2021-07-18 17:51:29,175 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly detected (True Positive) in 0.03194856643676758 seconds. 344 | 2021-07-18 17:51:29,176 - Classifier - INFO - Loaded test image model/data/test/Im099_0.jpg 345 | 2021-07-18 17:51:29,208 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly not detected (True Negative) in 0.032364845275878906 seconds. 346 | 2021-07-18 17:51:29,208 - Classifier - INFO - Loaded test image model/data/test/Im026_1.jpg 347 | 2021-07-18 17:51:29,241 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly detected (True Positive) in 0.03291964530944824 seconds. 
348 | 2021-07-18 17:51:29,241 - Classifier - INFO - Loaded test image model/data/test/Im057_1.jpg 349 | 2021-07-18 17:51:29,276 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly detected (True Positive) in 0.035039663314819336 seconds. 350 | 2021-07-18 17:51:29,277 - Classifier - INFO - Loaded test image model/data/test/Im088_0.jpg 351 | 2021-07-18 17:51:29,312 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly not detected (True Negative) in 0.03563833236694336 seconds. 352 | 2021-07-18 17:51:29,313 - Classifier - INFO - Loaded test image model/data/test/Im060_1.jpg 353 | 2021-07-18 17:51:29,346 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly detected (True Positive) in 0.03394198417663574 seconds. 354 | 2021-07-18 17:51:29,347 - Classifier - INFO - Loaded test image model/data/test/Im053_1.jpg 355 | 2021-07-18 17:51:29,383 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly detected (True Positive) in 0.03695321083068848 seconds. 356 | 2021-07-18 17:51:29,384 - Classifier - INFO - Loaded test image model/data/test/Im041_0.jpg 357 | 2021-07-18 17:51:29,417 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly not detected (True Negative) in 0.03377509117126465 seconds. 358 | 2021-07-18 17:51:29,418 - Classifier - INFO - Loaded test image model/data/test/Im047_0.jpg 359 | 2021-07-18 17:51:29,450 - Classifier - INFO - Acute Lymphoblastic Leukemia correctly not detected (True Negative) in 0.03243684768676758 seconds. 360 | 2021-07-18 17:51:29,450 - Classifier - INFO - Images Classified: 20 361 | 2021-07-18 17:51:29,450 - Classifier - INFO - True Positives: 10 362 | 2021-07-18 17:51:29,450 - Classifier - INFO - False Positives: 0 363 | 2021-07-18 17:51:29,450 - Classifier - INFO - True Negatives: 10 364 | 2021-07-18 17:51:29,450 - Classifier - INFO - False Negatives: 0 365 | 2021-07-18 17:51:29,450 - Classifier - INFO - Total Time Taken: 0.7630934715270996 366 | ``` 367 | 368 |   369 | 370 | # Preparing For Arduino 371 | 372 | During training the model was converted to TFLite and optimized with full integer quantization. The TFLite model was then converted to a C array, ready to be deployed to our Arduino Nano 33 BLE Sense. The test data that was removed before training was resized to 100px x 100px so as not to require additional resizing on the Arduino. 373 | 374 |   375 | 376 | # Conclusion 377 | 378 | Here we trained a deep learning model for Acute Lymphoblastic Leukemia detection, utilizing Intel® Optimization for TensorFlow* from the Intel® oneAPI AI Analytics Toolkit to optimize and accelerate training. We introduced a 6-layer deep learning model and applied data augmentation to increase the training data. 379 | 380 | We trained our model with a target of 150 epochs and used early stopping to avoid overfitting. The model trained for 29 epochs, resulting in a slightly noisy fit; however, accuracy, precision, recall and AUC are satisfactory. In addition, the model performs well during testing, classifying each of the twenty unseen test images correctly. 381 | 382 |   383 | 384 | # Continue 385 | 386 | Now you are ready to set up your Arduino Nano 33 BLE Sense. Head over to the [Arduino Installation Guide](../installation/arduino.md) to prepare your Arduino. 387 | 388 |   389 | 390 | # Contributing 391 | Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss encourages and welcomes code contributions, bug fixes and enhancements from the GitHub community.
392 | 393 | Please read the [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") document for a full guide to forking our repositories and submitting your pull requests. You will also find our code of conduct in the [Code of Conduct](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CODE-OF-CONDUCT.md) document. 394 | 395 | ## Contributors 396 | - [Adam Milton-Barker](https://www.leukemiaairesearch.com/association/volunteers/adam-milton-barker "Adam Milton-Barker") - [Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss](https://www.leukemiaresearchassociation.ai "Asociación de Investigacion en Inteligencia Artificial Para la Leucemia Peter Moss") President/Founder & Lead Developer, Sabadell, Spain 397 | 398 |   399 | 400 | # Versioning 401 | We use [SemVer](https://semver.org/) for versioning. 402 | 403 |   404 | 405 | # License 406 | This project is licensed under the **MIT License** - see the [LICENSE](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/LICENSE "LICENSE") file for details. 407 | 408 |   409 | 410 | # Bugs/Issues 411 | We use the [repo issues](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/issues "repo issues") to track bugs and general requests related to using this project. See [CONTRIBUTING](https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier/blob/main/CONTRIBUTING.md "CONTRIBUTING") for more info on how to submit bugs, feature requests and proposals. -------------------------------------------------------------------------------- /logs/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/logs/.gitkeep -------------------------------------------------------------------------------- /mkdocs.yml: -------------------------------------------------------------------------------- 1 | site_name: ALL Arduino Nano 33 BLE Sense Classifier 2 | site_url: https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier 3 | nav: 4 | - Home: 'index.md' 5 | - 'Installation': 6 | - 'Ubuntu': 'installation/ubuntu.md' 7 | - 'Arduino': 'installation/arduino.md' 8 | - 'Usage': 9 | - 'Notebooks': 'usage/notebooks.md' 10 | - 'Python': 'usage/python.md' 11 | - 'Arduino': 'usage/arduino.md' -------------------------------------------------------------------------------- /model/all_nano_33_ble_sense.h5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/model/all_nano_33_ble_sense.h5 -------------------------------------------------------------------------------- /model/all_nano_33_ble_sense.json: -------------------------------------------------------------------------------- 1 | {"class_name": "Sequential", "config": {"name": "AllANBS", "layers": [{"class_name": "InputLayer", "config": {"batch_input_shape": [null, 100, 100, 3], "dtype": "float32", "sparse": false, "ragged": false, "name": "input_1"}}, {"class_name": "AveragePooling2D", "config": {"name": "average_pooling2d", "trainable": true, "dtype": "float32", "pool_size": [2, 2], "padding": "valid", "strides": [2, 2], "data_format": 
"channels_last"}}, {"class_name": "Conv2D", "config": {"name": "conv2d", "trainable": true, "dtype": "float32", "filters": 30, "kernel_size": [5, 5], "strides": [1, 1], "padding": "valid", "data_format": "channels_last", "dilation_rate": [1, 1], "groups": 1, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "DepthwiseConv2D", "config": {"name": "depthwise_conv2d", "trainable": true, "dtype": "float32", "kernel_size": [30, 30], "strides": [1, 1], "padding": "valid", "data_format": "channels_last", "dilation_rate": [1, 1], "groups": 1, "activation": "relu", "use_bias": true, "bias_initializer": {"class_name": "Zeros", "config": {}}, "bias_regularizer": null, "activity_regularizer": null, "bias_constraint": null, "depth_multiplier": 1, "depthwise_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "depthwise_regularizer": null, "depthwise_constraint": null}}, {"class_name": "Flatten", "config": {"name": "flatten", "trainable": true, "dtype": "float32", "data_format": "channels_last"}}, {"class_name": "Dense", "config": {"name": "dense", "trainable": true, "dtype": "float32", "units": 2, "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "Activation", "config": {"name": "activation", "trainable": true, "dtype": "float32", "activation": "softmax"}}]}, "keras_version": "2.4.0", "backend": "tensorflow"} -------------------------------------------------------------------------------- /model/all_nano_33_ble_sense.tflite: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/model/all_nano_33_ble_sense.tflite -------------------------------------------------------------------------------- /model/data/README.md: -------------------------------------------------------------------------------- 1 | # Gain access to ALL-IDB 2 | 3 | You you need to be granted access to use the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset. You can find the application form and information about getting access to the dataset on [this page](https://homes.di.unimi.it/scotti/all/#download) as well as information on how to contribute back to the project [here](https://homes.di.unimi.it/scotti/all/results.php). If you are not able to obtain a copy of the dataset please feel free to try this tutorial on your own dataset, we would be very happy to find additional AML & ALL datasets. -------------------------------------------------------------------------------- /model/data/test/README.md: -------------------------------------------------------------------------------- 1 | # Gain access to ALL-IDB 2 | 3 | You you need to be granted access to use the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset. 
You can find the application form and information about getting access to the dataset on [this page](https://homes.di.unimi.it/scotti/all/#download) as well as information on how to contribute back to the project [here](https://homes.di.unimi.it/scotti/all/results.php). If you are not able to obtain a copy of the dataset please feel free to try this tutorial on your own dataset, we would be very happy to find additional AML & ALL datasets. -------------------------------------------------------------------------------- /model/data/train/README.md: -------------------------------------------------------------------------------- 1 | # Gain access to ALL-IDB 2 | 3 | You you need to be granted access to use the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset. You can find the application form and information about getting access to the dataset on [this page](https://homes.di.unimi.it/scotti/all/#download) as well as information on how to contribute back to the project [here](https://homes.di.unimi.it/scotti/all/results.php). If you are not able to obtain a copy of the dataset please feel free to try this tutorial on your own dataset, we would be very happy to find additional AML & ALL datasets. -------------------------------------------------------------------------------- /model/plots/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/model/plots/.gitkeep -------------------------------------------------------------------------------- /model/plots/accuracy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/model/plots/accuracy.png -------------------------------------------------------------------------------- /model/plots/auc.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/model/plots/auc.png -------------------------------------------------------------------------------- /model/plots/confusion-matrix.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/model/plots/confusion-matrix.png -------------------------------------------------------------------------------- /model/plots/loss.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/model/plots/loss.png -------------------------------------------------------------------------------- /model/plots/precision.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/model/plots/precision.png -------------------------------------------------------------------------------- /model/plots/recall.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/model/plots/recall.png -------------------------------------------------------------------------------- /modules/AbstractClassifier.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ Abstract class representing an AI Classifier. 3 | 4 | Represents an AI Classifier. AI Classifiers process data using AI 5 | models. Based on HIAS AI Agents for future compatibility with 6 | the HIAS Network. 7 | 8 | MIT License 9 | 10 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial 11 | Para la Leucemia Peter Moss 12 | 13 | Permission is hereby granted, free of charge, to any person obtaining a copy 14 | of this software and associated documentation files(the "Software"), to deal 15 | in the Software without restriction, including without limitation the rights 16 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 17 | copies of the Software, and to permit persons to whom the Software is 18 | furnished to do so, subject to the following conditions: 19 | 20 | The above copyright notice and this permission notice shall be included in all 21 | copies or substantial portions of the Software. 22 | 23 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 24 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 25 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 26 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 27 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 28 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 29 | SOFTWARE. 30 | 31 | Contributors: 32 | - Adam Milton-Barker 33 | 34 | """ 35 | 36 | from abc import ABC, abstractmethod 37 | 38 | from modules.helpers import helpers 39 | from modules.model import model 40 | 41 | 42 | class AbstractClassifier(ABC): 43 | """ Abstract class representing an AI Classifier. 44 | 45 | Represents an AI Classifier. AI Classifiers process data using AI 46 | models. Based on HIAS AI Agents for future compatibility with 47 | the HIAS Network. 48 | """ 49 | 50 | def __init__(self): 51 | """ Initializes the AbstractClassifier object. """ 52 | super().__init__() 53 | 54 | self.helpers = helpers("Classifier") 55 | self.confs = self.helpers.confs 56 | self.model_type = None 57 | 58 | self.helpers.logger.info("Classifier initialization complete.") 59 | 60 | @abstractmethod 61 | def set_model(self): 62 | """ Loads the model class """ 63 | pass 64 | 65 | @abstractmethod 66 | def train(self): 67 | """ Creates & trains the model. """ 68 | pass 69 | 70 | @abstractmethod 71 | def load_model(self): 72 | """ Loads the AI model """ 73 | pass 74 | 75 | @abstractmethod 76 | def inference(self): 77 | """ Loads model and classifies test data """ 78 | pass 79 | 80 | @abstractmethod 81 | def server(self): 82 | """ Loads the API server """ 83 | pass 84 | 85 | @abstractmethod 86 | def inference_http(self): 87 | """ Classifies test data via HTTP requests """ 88 | pass -------------------------------------------------------------------------------- /modules/AbstractData.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ AI Model Data Abstract Class. 
3 | 4 | Provides the AI Model with the required required data 5 | processing functionality. 6 | 7 | MIT License 8 | 9 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial 10 | Para la Leucemia Peter Moss 11 | 12 | Permission is hereby granted, free of charge, to any person obtaining a copy 13 | of this software and associated documentation files(the "Software"), to deal 14 | in the Software without restriction, including without limitation the rights 15 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 16 | copies of the Software, and to permit persons to whom the Software is 17 | furnished to do so, subject to the following conditions: 18 | 19 | The above copyright notice and this permission notice shall be included in all 20 | copies or substantial portions of the Software. 21 | 22 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 23 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 24 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 25 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 26 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 27 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 28 | SOFTWARE. 29 | 30 | Contributors: 31 | - Adam Milton-Barker 32 | 33 | """ 34 | 35 | import cv2 36 | import pathlib 37 | import random 38 | 39 | from numpy.random import seed 40 | 41 | from abc import ABC, abstractmethod 42 | 43 | 44 | class AbstractData(ABC): 45 | """ AI Model Data Abstract Class. 46 | 47 | Provides the AI Model with the required required data 48 | processing functionality. 49 | """ 50 | 51 | def __init__(self, helpers): 52 | "Initializes the AbstractData object." 53 | super().__init__() 54 | 55 | self.helpers = helpers 56 | self.confs = self.helpers.confs 57 | 58 | self.seed = self.confs["data"]["seed"] 59 | self.dim = self.confs["data"]["dim"] 60 | 61 | seed(self.seed) 62 | random.seed(self.seed) 63 | 64 | self.data = [] 65 | self.labels = [] 66 | 67 | self.helpers.logger.info("Data class initialization complete.") 68 | 69 | def remove_testing(self): 70 | """ Removes the testing images from the dataset. """ 71 | 72 | for img in self.confs["data"]["test_data"]: 73 | original = "model/data/train/"+img 74 | destination = "model/data/test/"+img 75 | pathlib.Path(original).rename(destination) 76 | self.helpers.logger.info(original + " moved to " + destination) 77 | cv2.imwrite(destination, cv2.resize(cv2.imread(destination), 78 | (self.dim, self.dim))) 79 | self.helpers.logger.info("Resized " + destination) 80 | 81 | @abstractmethod 82 | def process(self): 83 | """ Processes the images. """ 84 | pass 85 | 86 | @abstractmethod 87 | def encode_labels(self): 88 | """ One Hot Encodes the labels. """ 89 | pass 90 | 91 | @abstractmethod 92 | def convert_data(self): 93 | """ Converts the training data to a numpy array. """ 94 | pass 95 | 96 | @abstractmethod 97 | def shuffle(self): 98 | """ Shuffles the data and labels. """ 99 | pass 100 | 101 | @abstractmethod 102 | def get_split(self): 103 | """ Splits the data and labels creating training and validation datasets. """ 104 | pass 105 | 106 | @abstractmethod 107 | def resize(self, path, dim): 108 | """ Resizes an image to the provided dimensions (dim). 
""" 109 | pass 110 | -------------------------------------------------------------------------------- /modules/AbstractModel.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ Abstract class representing a HIAS AI Model. 3 | 4 | Represents an AI Model. HIAS AI Models are used by AI Agents to process 5 | incoming data. Based on HIAS AI Models for future compatibility with 6 | the HIAS Network. 7 | 8 | MIT License 9 | 10 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial 11 | Para la Leucemia Peter Moss 12 | 13 | Permission is hereby granted, free of charge, to any person obtaining a copy 14 | of this software and associated documentation files(the "Software"), to deal 15 | in the Software without restriction, including without limitation the rights 16 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 17 | copies of the Software, and to permit persons to whom the Software is 18 | furnished to do so, subject to the following conditions: 19 | 20 | The above copyright notice and this permission notice shall be included in all 21 | copies or substantial portions of the Software. 22 | 23 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 24 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 25 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 26 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 27 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 28 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 29 | SOFTWARE. 30 | 31 | Contributors: 32 | - Adam Milton-Barker - First version - 2021-5-1 33 | 34 | """ 35 | 36 | import os 37 | import random 38 | 39 | from numpy.random import seed 40 | 41 | from abc import ABC, abstractmethod 42 | 43 | from modules.data import data 44 | 45 | class AbstractModel(ABC): 46 | """ Abstract class representing an AI Model. 47 | 48 | Represents an AI Model. HIAS AI Models are used by AI Agents 49 | to process incoming data. Based on HIAS AI Models for future 50 | compatibility with the HIAS Network. 51 | """ 52 | 53 | def __init__(self, helpers): 54 | """ Initializes the AbstractModel object. 
""" 55 | super().__init__() 56 | 57 | self.helpers = helpers 58 | self.confs = self.helpers.confs 59 | 60 | os.environ["KMP_BLOCKTIME"] = "1" 61 | os.environ["KMP_SETTINGS"] = "1" 62 | os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0" 63 | os.environ["OMP_NUM_THREADS"] = str( 64 | self.confs["agent"]["cores"]) 65 | 66 | self.data = data(self.helpers) 67 | 68 | self.testing_dir = self.confs["data"]["test"] 69 | self.valid = self.confs["data"]["valid_types"] 70 | self.seed = self.confs["data"]["seed"] 71 | 72 | random.seed(self.seed) 73 | seed(self.seed) 74 | 75 | self.weights_file_path = self.confs["model"]["weights"] 76 | self.json_model_path = self.confs["model"]["model"] 77 | self.tflite_model_path = self.confs["model"]["tfmodel"] 78 | self.c_array_model_path = self.confs["model"]["model_c_array"] 79 | 80 | self.helpers.logger.info("Model class initialization complete.") 81 | 82 | @abstractmethod 83 | def prepare_data(self): 84 | """ Prepares the model data """ 85 | pass 86 | 87 | @abstractmethod 88 | def prepare_network(self): 89 | """ Builds the network """ 90 | pass 91 | 92 | @abstractmethod 93 | def train(self): 94 | """ Trains the model """ 95 | pass 96 | 97 | @abstractmethod 98 | def save_model_as_json(self): 99 | """ Saves the model as JSON """ 100 | pass 101 | 102 | @abstractmethod 103 | def save_weights(self): 104 | """ Saves the model weights """ 105 | pass 106 | 107 | @abstractmethod 108 | def evaluate(self): 109 | """ Evaluates the model """ 110 | pass 111 | 112 | @abstractmethod 113 | def plot_accuracy(self): 114 | """ Plots the accuracy. """ 115 | pass 116 | 117 | @abstractmethod 118 | def plot_loss(self): 119 | """ Plots the loss. """ 120 | pass 121 | 122 | @abstractmethod 123 | def plot_auc(self): 124 | """ Plots the AUC curve. """ 125 | pass 126 | 127 | @abstractmethod 128 | def plot_precision(self): 129 | """ Plots the precision. """ 130 | pass 131 | 132 | @abstractmethod 133 | def plot_recall(self): 134 | """ Plots the recall. """ 135 | pass 136 | 137 | @abstractmethod 138 | def confusion_matrix(self): 139 | """ Prints/displays the confusion matrix. """ 140 | pass 141 | 142 | @abstractmethod 143 | def figures_of_merit(self): 144 | """ Calculates/prints the figures of merit. """ 145 | pass 146 | 147 | @abstractmethod 148 | def predictions(self): 149 | """ Makes predictions on the train & test sets. """ 150 | pass 151 | 152 | @abstractmethod 153 | def predict(self, img): 154 | """ Gets a prediction for an image. """ 155 | pass 156 | 157 | @abstractmethod 158 | def reshape(self, img): 159 | """ Reshapes an image. """ 160 | pass 161 | 162 | @abstractmethod 163 | def test(self): 164 | """Local test mode 165 | 166 | Loops through the test directory and classifies the images. 167 | """ 168 | pass 169 | 170 | @abstractmethod 171 | def http_reshape(self, img): 172 | """ Reshapes an image sent via HTTP. """ 173 | pass 174 | 175 | @abstractmethod 176 | def http_request(self): 177 | """ Sends image to the inference API endpoint. """ 178 | pass 179 | 180 | @abstractmethod 181 | def test_http(self): 182 | """Server test mode 183 | 184 | Loops through the test directory and sends the images to the classification server. 185 | """ 186 | pass -------------------------------------------------------------------------------- /modules/AbstractServer.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ Server/API abstract class. 3 | 4 | Abstract class for the classifier server/API. 
5 | 6 | MIT License 7 | 8 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial 9 | Para la Leucemia Peter Moss 10 | 11 | Permission is hereby granted, free of charge, to any person obtaining a copy 12 | of this software and associated documentation files(the "Software"), to deal 13 | in the Software without restriction, including without limitation the rights 14 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 15 | copies of the Software, and to permit persons to whom the Software is 16 | furnished to do so, subject to the following conditions: 17 | 18 | The above copyright notice and this permission notice shall be included in all 19 | copies or substantial portions of the Software. 20 | 21 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 22 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 23 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 24 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 25 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 26 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 27 | SOFTWARE. 28 | 29 | Contributors: 30 | - Adam Milton-Barker 31 | 32 | """ 33 | 34 | from abc import ABC, abstractmethod 35 | 36 | class AbstractServer(ABC): 37 | """ Server/API abstract class. 38 | 39 | Abstract class for the classifier server/API. 40 | """ 41 | 42 | def __init__(self, helpers, model, model_type): 43 | "Initializes the AbstractServer object." 44 | super().__init__() 45 | 46 | self.helpers = helpers 47 | self.confs = self.helpers.confs 48 | 49 | self.model = model 50 | self.model_type = model_type 51 | 52 | self.helpers.logger.info("Server initialization complete.") 53 | 54 | 55 | @abstractmethod 56 | def predict(self, req): 57 | """ Classifies an image sent via HTTP. """ 58 | pass 59 | 60 | @abstractmethod 61 | def start(self, img_path): 62 | """ Sends image to the inference API endpoint. """ 63 | pass 64 | -------------------------------------------------------------------------------- /modules/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMLResearchProject/all-arduino-nano-33-ble-sense-classifier/9e8f94e0e7350753525260e4d13679b30531e110/modules/__init__.py -------------------------------------------------------------------------------- /modules/augmentation.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ Data Augmentation Class. 3 | 4 | Provides data augmentation methods. 5 | 6 | Permission is hereby granted, free of charge, to any person obtaining a copy 7 | of this software and associated documentation files(the "Software"), to deal 8 | in the Software without restriction, including without limitation the rights 9 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 10 | copies of the Software, and to permit persons to whom the Software is 11 | furnished to do so, subject to the following conditions: 12 | 13 | The above copyright notice and this permission notice shall be included in all 14 | copies or substantial portions of the Software. 15 | 16 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 17 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 18 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 19 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 20 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 21 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 22 | SOFTWARE. 23 | 24 | Contributors: 25 | - Adam Milton-Barker 26 | 27 | """ 28 | 29 | import cv2 30 | import random 31 | 32 | import numpy as np 33 | 34 | from numpy.random import seed 35 | from scipy import ndimage 36 | from skimage import transform as tm 37 | 38 | class augmentation(): 39 | """ HIAS AI Model Data Augmentation Class 40 | 41 | Provides data augmentation methods. 42 | """ 43 | 44 | def __init__(self, helpers): 45 | """ Initializes the class. """ 46 | 47 | self.helpers = helpers 48 | 49 | self.seed = self.helpers.confs["data"]["seed"] 50 | seed(self.seed) 51 | 52 | self.helpers.logger.info("Augmentation class initialization complete.") 53 | 54 | def grayscale(self, data): 55 | """ Creates a grayscale copy. """ 56 | 57 | gray = cv2.cvtColor(data, cv2.COLOR_BGR2GRAY) 58 | return np.dstack([gray, gray, gray]).astype(np.float32)/255. 59 | 60 | def equalize_hist(self, data): 61 | """ Creates a histogram equalized copy. """ 62 | 63 | img_to_yuv = cv2.cvtColor(data, cv2.COLOR_BGR2YUV) 64 | img_to_yuv[:, :, 0] = cv2.equalizeHist(img_to_yuv[:, :, 0]) 65 | hist_equalization_result = cv2.cvtColor(img_to_yuv, cv2.COLOR_YUV2BGR) 66 | return hist_equalization_result.astype(np.float32)/255. 67 | 68 | def reflection(self, data): 69 | """ Creates a reflected copy. """ 70 | 71 | return cv2.flip(data, 0).astype(np.float32)/255., cv2.flip(data, 1).astype(np.float32)/255. 72 | 73 | def gaussian(self, data): 74 | """ Creates a gaussian blurred copy. """ 75 | 76 | return ndimage.gaussian_filter(data, sigma=5.11).astype(np.float32)/255. 77 | 78 | def translate(self, data): 79 | """ Creates transformed copy. """ 80 | 81 | cols, rows, chs = data.shape 82 | 83 | return cv2.warpAffine(data, np.float32([[1, 0, 84], [0, 1, 56]]), (rows, cols), 84 | borderMode=cv2.BORDER_CONSTANT, borderValue=(144, 159, 162)).astype(np.float32)/255. 85 | 86 | def rotation(self, data): 87 | """ Creates a rotated copy. """ 88 | 89 | cols, rows, chs = data.shape 90 | 91 | rand_deg = random.randint(-180, 180) 92 | matrix = cv2.getRotationMatrix2D((cols/2, rows/2), rand_deg, 0.70) 93 | rotated = cv2.warpAffine(data, matrix, (rows, cols), borderMode=cv2.BORDER_CONSTANT, 94 | borderValue=(144, 159, 162)) 95 | 96 | return rotated.astype(np.float32)/255. 97 | 98 | def shear(self, data): 99 | """ Creates a histogram equalized copy. """ 100 | 101 | at = tm.AffineTransform(shear=0.5) 102 | return tm.warp(data, inverse_map=at) 103 | -------------------------------------------------------------------------------- /modules/data.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ AI Model Data Class. 3 | 4 | Provides the AI Model with the required required data 5 | processing functionality. 
6 | 7 | Permission is hereby granted, free of charge, to any person obtaining a copy 8 | of this software and associated documentation files(the "Software"), to deal 9 | in the Software without restriction, including without limitation the rights 10 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 11 | copies of the Software, and to permit persons to whom the Software is 12 | furnished to do so, subject to the following conditions: 13 | 14 | The above copyright notice and this permission notice shall be included in all 15 | copies or substantial portions of the Software. 16 | 17 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 18 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 19 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 20 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 21 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 22 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 23 | SOFTWARE. 24 | 25 | Contributors: 26 | - Adam Milton-Barker 27 | 28 | """ 29 | 30 | import cv2 31 | import os 32 | import pathlib 33 | 34 | import numpy as np 35 | 36 | from numpy.random import seed 37 | from PIL import Image 38 | from sklearn.model_selection import train_test_split 39 | from sklearn.preprocessing import OneHotEncoder 40 | from sklearn.utils import shuffle 41 | 42 | from modules.AbstractData import AbstractData 43 | from modules.augmentation import augmentation 44 | 45 | class data(AbstractData): 46 | """ AI Model Data Class. 47 | 48 | Provides the AI Model with the required required data 49 | processing functionality. 50 | """ 51 | 52 | def process(self): 53 | """ Processes the images. """ 54 | 55 | aug = augmentation(self.helpers) 56 | 57 | data_dir = pathlib.Path(self.confs["data"]["train_dir"]) 58 | data = list(data_dir.glob( 59 | '*' + self.confs["data"]["file_type"])) 60 | 61 | count = 0 62 | neg_count = 0 63 | pos_count = 0 64 | 65 | augmented_data = [] 66 | self.labels = [] 67 | temp = [] 68 | 69 | for rimage in data: 70 | fpath = str(rimage) 71 | fname = os.path.basename(rimage) 72 | label = 0 if "_0" in fname else 1 73 | 74 | # Resize Image 75 | image = self.resize(fpath, self.dim) 76 | 77 | if image.shape[2] == 1: 78 | image = np.dstack( 79 | [image, image, image]) 80 | 81 | temp.append(image.astype(np.float32)/255.) 82 | 83 | self.data.append(image.astype(np.float32)/255.) 
84 | self.labels.append(label) 85 | 86 | # Grayscale 87 | self.data.append(aug.grayscale(image)) 88 | self.labels.append(label) 89 | 90 | # Histogram Equalization 91 | self.data.append(aug.equalize_hist(image)) 92 | self.labels.append(label) 93 | 94 | # Reflection 95 | horizontal, vertical = aug.reflection(image) 96 | self.data.append(horizontal) 97 | self.labels.append(label) 98 | self.data.append(vertical) 99 | self.labels.append(label) 100 | 101 | # Gaussian Blur 102 | self.data.append(aug.gaussian(image)) 103 | self.labels.append(label) 104 | 105 | # Translation 106 | self.data.append(aug.translate(image)) 107 | self.labels.append(label) 108 | 109 | # Shear 110 | self.data.append(aug.shear(image)) 111 | self.labels.append(label) 112 | 113 | # Rotation 114 | for i in range(0, self.helpers.confs["data"]["rotations"]): 115 | self.data.append(aug.rotation(image)) 116 | self.labels.append(label) 117 | if "_0" in fname: 118 | neg_count += 1 119 | else: 120 | pos_count += 1 121 | count += 1 122 | 123 | if "_0" in fname: 124 | neg_count += 8 125 | else: 126 | pos_count += 8 127 | count += 8 128 | 129 | self.shuffle() 130 | self.convert_data() 131 | self.encode_labels() 132 | 133 | self.helpers.logger.info("Augmented data size: " + str(count)) 134 | self.helpers.logger.info("Negative data size: " + str(neg_count)) 135 | self.helpers.logger.info("Positive data size: " + str(pos_count)) 136 | self.helpers.logger.info("Augmented data shape: " + str(self.data.shape)) 137 | self.helpers.logger.info("Labels shape: " + str(self.labels.shape)) 138 | 139 | self.X_train_arr = np.asarray(temp) 140 | 141 | self.get_split() 142 | 143 | def convert_data(self): 144 | """ Converts the training data to a numpy array. """ 145 | 146 | self.data = np.array(self.data) 147 | 148 | def encode_labels(self): 149 | """ One Hot Encodes the labels. """ 150 | 151 | encoder = OneHotEncoder(categories='auto') 152 | 153 | self.labels = np.reshape(self.labels, (-1, 1)) 154 | self.labels = encoder.fit_transform(self.labels).toarray() 155 | 156 | def shuffle(self): 157 | """ Shuffles the data and labels. """ 158 | 159 | self.data, self.labels = shuffle( 160 | self.data, self.labels, random_state=self.seed) 161 | 162 | def get_split(self): 163 | """ Splits the data and labels creating training and validation datasets. """ 164 | 165 | self.X_train, self.X_test, self.y_train, self.y_test = train_test_split( 166 | self.data, self.labels, test_size=self.helpers.confs["data"]["split"], 167 | random_state=self.seed) 168 | 169 | self.helpers.logger.info("Training data: " + str(self.X_train.shape)) 170 | self.helpers.logger.info("Training labels: " + str(self.y_train.shape)) 171 | self.helpers.logger.info("Validation data: " + str(self.X_test.shape)) 172 | self.helpers.logger.info("Validation labels: " + str(self.y_test.shape)) 173 | 174 | def resize(self, path, dim): 175 | """ Resizes an image to the provided dimensions (dim). """ 176 | 177 | return cv2.resize(cv2.imread(path), (dim, dim)) 178 | -------------------------------------------------------------------------------- /modules/helpers.py: -------------------------------------------------------------------------------- 1 | """ Helpers file. 2 | 3 | Configuration and logging functions. 
4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files(the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | 23 | Contributors: 24 | - Adam Milton-Barker 25 | 26 | """ 27 | 28 | import logging 29 | import logging.handlers as handlers 30 | import json 31 | import os 32 | import sys 33 | import time 34 | 35 | from datetime import datetime 36 | 37 | 38 | class helpers(): 39 | """ Helper Class 40 | 41 | Configuration and logging functions. 42 | """ 43 | 44 | def __init__(self, ltype, log=True): 45 | """ Initializes the Helpers Class. """ 46 | 47 | # Loads system configs 48 | self.confs = {} 49 | self.loadConfs() 50 | 51 | # Sets system logging 52 | self.logger = logging.getLogger(ltype) 53 | self.logger.setLevel(logging.INFO) 54 | 55 | formatter = logging.Formatter( 56 | '%(asctime)s - %(name)s - %(levelname)s - %(message)s') 57 | 58 | allLogHandler = handlers.TimedRotatingFileHandler( 59 | os.path.dirname(os.path.abspath(__file__)) + '/../logs/all.log', when='H', interval=1, backupCount=0) 60 | allLogHandler.setLevel(logging.INFO) 61 | allLogHandler.setFormatter(formatter) 62 | 63 | errorLogHandler = handlers.TimedRotatingFileHandler( 64 | os.path.dirname(os.path.abspath(__file__)) + '/../logs/error.log', when='H', interval=1, backupCount=0) 65 | errorLogHandler.setLevel(logging.ERROR) 66 | errorLogHandler.setFormatter(formatter) 67 | 68 | warningLogHandler = handlers.TimedRotatingFileHandler( 69 | os.path.dirname(os.path.abspath(__file__)) + '/../logs/warning.log', when='H', interval=1, backupCount=0) 70 | warningLogHandler.setLevel(logging.WARNING) 71 | warningLogHandler.setFormatter(formatter) 72 | 73 | consoleHandler = logging.StreamHandler(sys.stdout) 74 | consoleHandler.setFormatter(formatter) 75 | 76 | self.logger.addHandler(allLogHandler) 77 | self.logger.addHandler(errorLogHandler) 78 | self.logger.addHandler(warningLogHandler) 79 | self.logger.addHandler(consoleHandler) 80 | 81 | if log is True: 82 | self.logger.info("Helpers class initialization complete.") 83 | 84 | def loadConfs(self): 85 | """ Load the configuration. """ 86 | 87 | with open(os.path.dirname(os.path.abspath(__file__)) + '/../configuration/config.json') as confs: 88 | self.confs = json.loads(confs.read()) 89 | -------------------------------------------------------------------------------- /modules/model.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ Class representing a HIAS AI Model. 3 | 4 | Represents a HIAS AI Model. 
HIAS AI Models are used by AI Agents to process 5 | incoming data. 6 | 7 | MIT License 8 | 9 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial 10 | Para la Leucemia Peter Moss 11 | 12 | Permission is hereby granted, free of charge, to any person obtaining a copy 13 | of this software and associated documentation files(the "Software"), to deal 14 | in the Software without restriction, including without limitation the rights 15 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 16 | copies of the Software, and to permit persons to whom the Software is 17 | furnished to do so, subject to the following conditions: 18 | 19 | The above copyright notice and this permission notice shall be included in all 20 | copies or substantial portions of the Software. 21 | 22 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 23 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 24 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 25 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 26 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 27 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 28 | SOFTWARE. 29 | 30 | Contributors: 31 | - Adam Milton-Barker - First version - 2021-5-1 32 | 33 | """ 34 | 35 | import cv2 36 | import json 37 | import os 38 | import pathlib 39 | import requests 40 | import time 41 | 42 | import matplotlib.pyplot as plt 43 | import numpy as np 44 | import tensorflow as tf 45 | 46 | from PIL import Image 47 | from sklearn.metrics import confusion_matrix 48 | from tensorflow.keras import layers, models 49 | from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2 50 | from mlxtend.plotting import plot_confusion_matrix 51 | 52 | from modules.AbstractModel import AbstractModel 53 | 54 | plt.switch_backend('Agg') 55 | 56 | 57 | class model(AbstractModel): 58 | """ Class representing a HIAS AI Model. 59 | 60 | This object represents a HIAS AI Model.HIAS AI Models 61 | are used by AI Agents to process incoming data. 62 | """ 63 | 64 | def prepare_data(self): 65 | """ Creates/sorts dataset. """ 66 | 67 | self.data.remove_testing() 68 | self.data.process() 69 | 70 | self.helpers.logger.info("Data preperation complete.") 71 | 72 | def prepare_network(self): 73 | """ Builds the network. 74 | 75 | Replicates the networked outlined in the Acute Leukemia Classification 76 | Using Convolution Neural Network In Clinical Decision Support System paper. 77 | https://airccj.org/CSCP/vol7/csit77505.pdf 78 | """ 79 | 80 | self.tf_model = tf.keras.models.Sequential([ 81 | tf.keras.layers.InputLayer(input_shape=(self.data.X_train.shape[1:])), 82 | tf.keras.layers.AveragePooling2D( 83 | pool_size=(2, 2), strides=None, padding='valid'), 84 | tf.keras.layers.Conv2D(30, (5, 5), strides=1, 85 | padding="valid", activation='relu'), 86 | tf.keras.layers.DepthwiseConv2D(30, (1, 1), 87 | padding="valid", activation='relu'), 88 | tf.keras.layers.Flatten(), 89 | tf.keras.layers.Dense(2), 90 | tf.keras.layers.Activation('softmax') 91 | ], 92 | "AllANBS") 93 | self.tf_model.summary() 94 | 95 | self.helpers.logger.info("Network initialization complete.") 96 | 97 | def train(self): 98 | """ Trains the model 99 | 100 | Compiles and fits the model. 
101 | """ 102 | 103 | self.helpers.logger.info("Using Adam Optimizer.") 104 | optimizer = tf.keras.optimizers.Adam(learning_rate=self.confs["train"]["learning_rate_adam"], 105 | decay = self.confs["train"]["decay_adam"]) 106 | 107 | self.helpers.logger.info("Using Early Stopping.") 108 | callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', 109 | patience=3, 110 | verbose=0, 111 | mode='auto', 112 | restore_best_weights=True) 113 | 114 | self.tf_model.compile(optimizer=optimizer, 115 | loss='binary_crossentropy', 116 | metrics=[tf.keras.metrics.BinaryAccuracy(name='acc'), 117 | tf.keras.metrics.Precision(name='precision'), 118 | tf.keras.metrics.Recall(name='recall'), 119 | tf.keras.metrics.AUC(name='auc') ]) 120 | 121 | self.history = self.tf_model.fit(self.data.X_train, self.data.y_train, 122 | validation_data=(self.data.X_test, self.data.y_test), 123 | validation_steps=self.confs["train"]["val_steps"], 124 | epochs=self.confs["train"]["epochs"], callbacks=[callback]) 125 | 126 | print(self.history) 127 | print("") 128 | 129 | self.save_model_as_json() 130 | self.save_weights() 131 | self.convert_to_tflite() 132 | self.save_tflite_model() 133 | self.convert_to_c_array() 134 | 135 | def save_model_as_json(self): 136 | """ Saves the model as JSON """ 137 | 138 | with open(self.json_model_path, "w") as file: 139 | file.write(self.tf_model.to_json()) 140 | 141 | self.helpers.logger.info("Model JSON saved " + self.json_model_path) 142 | 143 | def save_weights(self): 144 | """ Saves the model weights """ 145 | 146 | self.tf_model.save_weights(self.weights_file_path) 147 | self.helpers.logger.info("Weights saved " + self.weights_file_path) 148 | 149 | def convert_to_tflite(self): 150 | """ Converts model to TFLite """ 151 | 152 | def representative_dataset(): 153 | 154 | for input_value in tf.data.Dataset.from_tensor_slices( 155 | self.data.X_train_arr).batch(1).take(100): 156 | yield [input_value] 157 | 158 | converter = tf.lite.TFLiteConverter.from_keras_model(self.tf_model) 159 | converter.optimizations = [tf.lite.Optimize.DEFAULT] 160 | converter.representative_dataset = representative_dataset 161 | converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] 162 | converter.inference_input_type = tf.int8 163 | converter.inference_output_type = tf.int8 164 | 165 | self.tflite_model = converter.convert() 166 | 167 | def save_tflite_model(self): 168 | """ Saves the TFLite model """ 169 | 170 | with open(self.tflite_model_path, "wb") as file: 171 | file.write(self.tflite_model) 172 | 173 | self.helpers.logger.info("Model TFLite saved " + self.tflite_model_path) 174 | 175 | def convert_to_c_array(self): 176 | """ Converts the TFLite model to C array """ 177 | 178 | os.system('xxd -i ' + self.tflite_model_path + ' > ' + self.c_array_model_path) 179 | self.helpers.logger.info("C array model created " + self.c_array_model_path) 180 | 181 | def predictions(self): 182 | """ Gets a prediction for an image. 
""" 183 | 184 | self.train_preds = self.tf_model.predict(self.data.X_train) 185 | self.test_preds = self.tf_model.predict(self.data.X_test) 186 | 187 | def evaluate(self): 188 | """ Evaluates the model """ 189 | 190 | self.predictions() 191 | 192 | metrics = self.tf_model.evaluate( 193 | self.data.X_test, self.data.y_test, verbose=0) 194 | for name, value in zip(self.tf_model.metrics_names, metrics): 195 | self.helpers.logger.info("Metrics: " + name + " " + str(value)) 196 | print() 197 | 198 | self.plot_accuracy() 199 | self.plot_loss() 200 | self.plot_auc() 201 | self.plot_precision() 202 | self.plot_recall() 203 | self.confusion_matrix() 204 | self.figures_of_merit() 205 | 206 | def plot_accuracy(self): 207 | """ Plots the accuracy. """ 208 | 209 | plt.plot(self.history.history['acc']) 210 | plt.plot(self.history.history['val_acc']) 211 | plt.title('Model Accuracy') 212 | plt.ylabel('Accuracy') 213 | plt.xlabel('Epoch') 214 | plt.ylim((0, 1)) 215 | plt.legend(['Train', 'Validate'], loc='upper left') 216 | plt.savefig('model/plots/accuracy.png') 217 | plt.show() 218 | plt.clf() 219 | 220 | def plot_loss(self): 221 | """ Plots the loss. """ 222 | 223 | plt.plot(self.history.history['loss']) 224 | plt.plot(self.history.history['val_loss']) 225 | plt.title('Model Loss') 226 | plt.ylabel('loss') 227 | plt.xlabel('Epoch') 228 | plt.legend(['Train', 'Validate'], loc='upper left') 229 | plt.savefig('model/plots/loss.png') 230 | plt.show() 231 | plt.clf() 232 | 233 | def plot_auc(self): 234 | """ Plots the AUC. """ 235 | 236 | plt.plot(self.history.history['auc']) 237 | plt.plot(self.history.history['val_auc']) 238 | plt.title('Model AUC') 239 | plt.ylabel('AUC') 240 | plt.xlabel('Epoch') 241 | plt.legend(['Train', 'Validate'], loc='upper left') 242 | plt.savefig('model/plots/auc.png') 243 | plt.show() 244 | plt.clf() 245 | 246 | def plot_precision(self): 247 | """ Plots the precision. """ 248 | 249 | plt.plot(self.history.history['precision']) 250 | plt.plot(self.history.history['val_precision']) 251 | plt.title('Model Precision') 252 | plt.ylabel('Precision') 253 | plt.xlabel('Epoch') 254 | plt.legend(['Train', 'Validate'], loc='upper left') 255 | plt.savefig('model/plots/precision.png') 256 | plt.show() 257 | plt.clf() 258 | 259 | def plot_recall(self): 260 | """ Plots the recall. """ 261 | 262 | plt.plot(self.history.history['recall']) 263 | plt.plot(self.history.history['val_recall']) 264 | plt.title('Model Recall') 265 | plt.ylabel('Recall') 266 | plt.xlabel('Epoch') 267 | plt.legend(['Train', 'Validate'], loc='upper left') 268 | plt.savefig('model/plots/recall.png') 269 | plt.show() 270 | plt.clf() 271 | 272 | def confusion_matrix(self): 273 | """ Plots the confusion matrix. """ 274 | 275 | self.matrix = confusion_matrix(self.data.y_test.argmax(axis=1), 276 | self.test_preds.argmax(axis=1)) 277 | 278 | self.helpers.logger.info("Confusion Matrix: " + str(self.matrix)) 279 | print("") 280 | 281 | plot_confusion_matrix(conf_mat=self.matrix) 282 | plt.savefig('model/plots/confusion-matrix.png') 283 | plt.show() 284 | plt.clf() 285 | 286 | def figures_of_merit(self): 287 | """ Calculates/prints the figures of merit. 
288 | 289 | https://homes.di.unimi.it/scotti/all/ 290 | """ 291 | 292 | test_len = len(self.data.X_test) 293 | 294 | TP = self.matrix[1][1] 295 | TN = self.matrix[0][0] 296 | FP = self.matrix[0][1] 297 | FN = self.matrix[1][0] 298 | 299 | TPP = (TP * 100)/test_len 300 | FPP = (FP * 100)/test_len 301 | FNP = (FN * 100)/test_len 302 | TNP = (TN * 100)/test_len 303 | 304 | specificity = TN/(TN+FP) 305 | 306 | misc = FP + FN 307 | miscp = (misc * 100)/test_len 308 | 309 | self.helpers.logger.info( 310 | "True Positives: " + str(TP) + "(" + str(TPP) + "%)") 311 | self.helpers.logger.info( 312 | "False Positives: " + str(FP) + "(" + str(FPP) + "%)") 313 | self.helpers.logger.info( 314 | "True Negatives: " + str(TN) + "(" + str(TNP) + "%)") 315 | self.helpers.logger.info( 316 | "False Negatives: " + str(FN) + "(" + str(FNP) + "%)") 317 | 318 | self.helpers.logger.info("Specificity: " + str(specificity)) 319 | self.helpers.logger.info("Misclassification: " + 320 | str(misc) + "(" + str(miscp) + "%)") 321 | 322 | def load(self): 323 | """ Loads the model """ 324 | 325 | with open(self.json_model_path) as file: 326 | m_json = file.read() 327 | 328 | self.tf_model = tf.keras.models.model_from_json(m_json) 329 | self.tf_model.load_weights(self.weights_file_path) 330 | 331 | self.helpers.logger.info("Model loaded ") 332 | 333 | self.tf_model.summary() 334 | 335 | def predict(self, img): 336 | """ Gets a prediction for an image. """ 337 | 338 | predictions = self.tf_model.predict(img) 339 | prediction = np.argmax(predictions, axis=-1) 340 | 341 | return prediction 342 | 343 | def reshape(self, img): 344 | """ Reshapes an image. """ 345 | 346 | dx, dy, dz = img.shape 347 | input_data = img.reshape((-1, dx, dy, dz)) 348 | input_data = input_data / 255.0 349 | 350 | return input_data 351 | 352 | def test(self): 353 | """ Test mode 354 | 355 | Loops through the test directory and classifies the images. 356 | """ 357 | 358 | files = 0 359 | tp = 0 360 | fp = 0 361 | tn = 0 362 | fn = 0 363 | totaltime = 0 364 | 365 | for testFile in os.listdir(self.testing_dir): 366 | if os.path.splitext(testFile)[1] in self.valid: 367 | files += 1 368 | fileName = self.testing_dir + "/" + testFile 369 | 370 | start = time.time() 371 | img = cv2.imread(fileName).astype(np.float32) 372 | self.helpers.logger.info("Loaded test image " + fileName) 373 | 374 | img = cv2.resize(img, (self.data.dim, 375 | self.data.dim)) 376 | img = self.reshape(img) 377 | 378 | prediction = self.predict(img) 379 | end = time.time() 380 | benchmark = end - start 381 | totaltime += benchmark 382 | 383 | msg = "" 384 | if prediction == 1 and "_1." in testFile: 385 | tp += 1 386 | msg = "Acute Lymphoblastic Leukemia correctly detected (True Positive) in " + str(benchmark) + " seconds." 387 | elif prediction == 1 and "_0." in testFile: 388 | fp += 1 389 | msg = "Acute Lymphoblastic Leukemia incorrectly detected (False Positive) in " + str(benchmark) + " seconds." 390 | elif prediction == 0 and "_0." in testFile: 391 | tn += 1 392 | msg = "Acute Lymphoblastic Leukemia correctly not detected (True Negative) in " + str(benchmark) + " seconds." 393 | elif prediction == 0 and "_1." in testFile: 394 | fn += 1 395 | msg = "Acute Lymphoblastic Leukemia incorrectly not detected (False Negative) in " + str(benchmark) + " seconds." 
396 | self.helpers.logger.info(msg) 397 | 398 | self.helpers.logger.info("Images Classified: " + str(files)) 399 | self.helpers.logger.info("True Positives: " + str(tp)) 400 | self.helpers.logger.info("False Positives: " + str(fp)) 401 | self.helpers.logger.info("True Negatives: " + str(tn)) 402 | self.helpers.logger.info("False Negatives: " + str(fn)) 403 | self.helpers.logger.info("Total Time Taken: " + str(totaltime)) 404 | 405 | def http_reshape(self, img): 406 | """ Resizes, normalizes and reshapes an image sent via HTTP. """ 407 | 408 | n, c, h, w = [1, 3, self.confs["data"]["dim"], 409 | self.confs["data"]["dim"]] 410 | processed = img.resize((h, w), resample=Image.BILINEAR) 411 | # Keep the channels-last (NHWC) layout the Keras model was trained on. 412 | processed = np.array(processed) / 255.0 413 | processed = processed.reshape((n, h, w, c)) 414 | 415 | return processed 416 | 417 | def http_request(self, img_path): 418 | """ Sends an image to the inference API endpoint. """ 419 | 420 | self.helpers.logger.info("Sending request for: " + img_path) 421 | 422 | _, img_encoded = cv2.imencode('.png', cv2.imread(img_path)) 423 | response = requests.post(self.addr, data=img_encoded.tobytes(), headers=self.headers) 424 | response = json.loads(response.text) 425 | 426 | return response 427 | 428 | def test_http(self): 429 | """ Server test mode 430 | 431 | Loops through the test directory and sends the images to the 432 | classification server. 433 | """ 434 | 435 | totaltime = 0 436 | files = 0 437 | 438 | tp = 0 439 | fp = 0 440 | tn = 0 441 | fn = 0 442 | 443 | self.addr = "http://" + self.helpers.confs["agent"]["ip"] + \ 444 | ':'+str(self.helpers.confs["agent"]["port"]) + '/Inference' 445 | self.headers = {'content-type': 'image/jpeg'} 446 | 447 | for testFile in os.listdir(self.testing_dir): 448 | if os.path.splitext(testFile)[1] in self.valid: 449 | 450 | start = time.time() 451 | prediction = self.http_request(self.testing_dir + "/" + testFile) 452 | end = time.time() 453 | benchmark = end - start 454 | totaltime += benchmark 455 | 456 | msg = "" 457 | status = "" 458 | outcome = "" 459 | # Ground truth is encoded in the filename: "_1." marks a positive sample, "_0." a negative sample. 460 | if prediction["Diagnosis"] == "Positive" and "_1." in testFile: 461 | tp += 1 462 | status = "correctly" 463 | outcome = "(True Positive)" 464 | elif prediction["Diagnosis"] == "Positive" and "_0." in testFile: 465 | fp += 1 466 | status = "incorrectly" 467 | outcome = "(False Positive)" 468 | elif prediction["Diagnosis"] == "Negative" and "_0." in testFile: 469 | tn += 1 470 | status = "correctly" 471 | outcome = "(True Negative)" 472 | elif prediction["Diagnosis"] == "Negative" and "_1." in testFile: 473 | fn += 1 474 | status = "incorrectly" 475 | outcome = "(False Negative)" 476 | 477 | files += 1 478 | self.helpers.logger.info("Acute Lymphoblastic Leukemia " + status + 479 | " detected " + outcome + " in " + str(benchmark) + " seconds.") 480 | 481 | self.helpers.logger.info("Images Classified: " + str(files)) 482 | self.helpers.logger.info("True Positives: " + str(tp)) 483 | self.helpers.logger.info("False Positives: " + str(fp)) 484 | self.helpers.logger.info("True Negatives: " + str(tn)) 485 | self.helpers.logger.info("False Negatives: " + str(fn)) 486 | self.helpers.logger.info("Total Time Taken: " + str(totaltime)) 487 | -------------------------------------------------------------------------------- /modules/server.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ Server/API class. 3 | 4 | Class for the classifier server/API.
5 | 6 | MIT License 7 | 8 | Copyright (c) 2021 Asociación de Investigacion en Inteligencia Artificial 9 | Para la Leucemia Peter Moss 10 | 11 | Permission is hereby granted, free of charge, to any person obtaining a copy 12 | of this software and associated documentation files(the "Software"), to deal 13 | in the Software without restriction, including without limitation the rights 14 | to use, copy, modify, merge, publish, distribute, sublicense, and / or sell 15 | copies of the Software, and to permit persons to whom the Software is 16 | furnished to do so, subject to the following conditions: 17 | 18 | The above copyright notice and this permission notice shall be included in all 19 | copies or substantial portions of the Software. 20 | 21 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 22 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 23 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 24 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 25 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 26 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 27 | SOFTWARE. 28 | 29 | Contributors: 30 | - Adam Milton-Barker 31 | 32 | """ 33 | 34 | import cv2 35 | import json 36 | import jsonpickle 37 | import os 38 | import requests 39 | import time 40 | 41 | import numpy as np 42 | 43 | from io import BytesIO 44 | from PIL import Image 45 | from flask import Flask, request, Response 46 | 47 | from modules.AbstractServer import AbstractServer 48 | 49 | class server(AbstractServer): 50 | """ Server/API class. 51 | 52 | Class for the classifier server/API. 53 | """ 54 | 55 | def predict(self, req): 56 | """ Classifies an image sent via HTTP. """ 57 | 58 | if len(req.files) != 0: 59 | img = Image.open(req.files['file'].stream) 60 | else: 61 | img = Image.open(BytesIO(req.data)) 62 | 63 | return self.model.predict(self.model.http_reshape(img)) 64 | 65 | def start(self): 66 | """ Starts the server. """ 67 | 68 | app = Flask("AllANBS") 69 | 70 | @app.route('/Inference', methods=['POST']) 71 | def Inference(): 72 | """ Responds to HTTP POST requests. """ 73 | 74 | prediction = self.predict(request) 75 | 76 | if prediction == 1: 77 | message = "Acute Lymphoblastic Leukemia detected!" 78 | diagnosis = "Positive" 79 | elif prediction == 0: 80 | message = "Acute Lymphoblastic Leukemia not detected!" 81 | diagnosis = "Negative" 82 | 83 | resp = jsonpickle.encode({ 84 | 'Response': 'OK', 85 | 'Message': message, 86 | 'Diagnosis': diagnosis 87 | }) 88 | 89 | return Response(response=resp, status=200, mimetype="application/json") 90 | 91 | app.run(host=self.helpers.confs["agent"]["ip"], 92 | port=self.helpers.confs["agent"]["port"]) -------------------------------------------------------------------------------- /scripts/install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | FMSG="Acute Lymphoblastic Leukemia Arduino Nano 33 BLE Sense Classifier trainer installation terminated!" 4 | 5 | echo "This script will install Acute Lymphoblastic Leukemia Arduino Nano 33 BLE Sense Classifier." 6 | echo "HINT: This script assumes Ubuntu 20.04." 7 | echo "WARNING: This script assumes you have not already installed the oneAPI Basekit." 8 | echo "WARNING: This script assumes you have not already installed the oneAPI AI Analytics Toolkit." 9 | echo "WARNING: This script assumes you have an Intel GPU." 
10 | echo "WARNING: This script assumes you have already installed the Intel GPU drivers." 11 | echo "HINT: If any of the above are not relevant to you, please comment out the relevant sections below before running this installation script." 12 | 13 | read -p "Proceed (y/n)? " proceed 14 | if [ "$proceed" = "Y" -o "$proceed" = "y" ]; then 15 | # Comment out the following if you have already installed the oneAPI Basekit 16 | wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB -O - | sudo apt-key add - 17 | echo "deb https://apt.repos.intel.com/oneapi all main" | sudo tee /etc/apt/sources.list.d/oneAPI.list 18 | sudo apt update 19 | sudo apt install intel-basekit 20 | sudo apt -y install cmake pkg-config build-essential 21 | echo 'source /opt/intel/oneapi/setvars.sh' >> ~/.bashrc 22 | source ~/.bashrc 23 | # Comment out the following if you have already installed the oneAPI AI Analytics Toolkit 24 | sudo apt install intel-aikit 25 | # Comment out the following if you have already installed the Intel GPU drivers 26 | # or do not have an Intel GPU on your training device 27 | sudo apt-get install -y gpg-agent wget 28 | wget -qO - https://repositories.intel.com/graphics/intel-graphics.key | 29 | sudo apt-key add - 30 | sudo apt-add-repository \ 31 | 'deb [arch=amd64] https://repositories.intel.com/graphics/ubuntu focal main' 32 | sudo apt-get update 33 | sudo apt-get install \ 34 | intel-opencl-icd \ 35 | intel-level-zero-gpu level-zero \ 36 | intel-media-va-driver-non-free libmfx1 37 | stat -c "%G" /dev/dri/render* 38 | groups ${USER} 39 | sudo gpasswd -a ${USER} render 40 | newgrp render 41 | sudo usermod -a -G video ${USER} 42 | # The following will install all other required packages 43 | conda create -n all-nano-33-ble-sense -c intel intel-aikit-tensorflow 44 | conda activate all-nano-33-ble-sense 45 | conda install jupyter 46 | conda install nb_conda 47 | conda install -c conda-forge mlxtend 48 | conda install matplotlib 49 | conda install Pillow 50 | conda install opencv 51 | conda install scipy 52 | conda install scikit-learn 53 | conda install scikit-image 54 | else 55 | echo $FMSG; 56 | exit 1 57 | fi --------------------------------------------------------------------------------
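Usage note: once the classifier server in modules/server.py is running, its /Inference route can be exercised from any HTTP client. The snippet below is a minimal sketch that mirrors http_request() in modules/model.py; the address, port and image path are illustrative assumptions only and should be replaced with the agent "ip"/"port" values from configuration/config.json and a real image from the test data directory.

import cv2
import json
import requests

# Assumed values for illustration: use the agent "ip" and "port" from
# configuration/config.json and an image from your test data directory.
addr = "http://192.168.1.10:8080/Inference"
img_path = "model/data/test/im_1.jpg"

# Encode the image and POST the raw bytes, as http_request() in modules/model.py does.
_, img_encoded = cv2.imencode(".png", cv2.imread(img_path))
response = requests.post(addr, data=img_encoded.tobytes(),
                         headers={"content-type": "image/jpeg"})

# The server replies with a JSON body containing Response, Message and Diagnosis.
print(json.loads(response.text))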