├── .gitignore
├── LICENSE
├── README.md
├── edgeai-mcu
│   ├── README.md
│   └── readme_sdk.md
├── edgeai-mpu
│   ├── README.md
│   ├── assets
│   │   ├── high-level-dev-flow.png
│   │   ├── workblocks_tools_software.png
│   │   ├── workflow_bring_your_own_data.png
│   │   ├── workflow_bring_your_own_model.png
│   │   └── workflow_train_your_own_model.png
│   ├── docs
│   │   └── release_notes.md
│   ├── getting_started.md
│   ├── readme_models-j6.md
│   ├── readme_publications.md
│   └── readme_sdk.md
└── make_release.sh
/.gitignore:
--------------------------------------------------------------------------------
1 | .idea
2 | .vscode
3 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) 2018-2021, Texas Instruments Incorporated
2 | All Rights Reserved.
3 |
4 | Redistribution and use in source and binary forms, with or without
5 | modification, are permitted provided that the following conditions are met:
6 |
7 | * Redistributions of source code must retain the above copyright notice, this
8 | list of conditions and the following disclaimer.
9 |
10 | * Redistributions in binary form must reproduce the above copyright notice,
11 | this list of conditions and the following disclaimer in the documentation
12 | and/or other materials provided with the distribution.
13 |
14 | * Neither the name of the copyright holder nor the names of its
15 | contributors may be used to endorse or promote products derived from
16 | this software without specific prior written permission.
17 |
18 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
19 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
20 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
21 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
22 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
23 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
24 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
25 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
26 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
27 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
28 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Edge AI Software And Development Tools
2 |
3 | ---
4 |
5 | ## Notice
6 | Our documentation landing pages are the following:
7 | - https://www.ti.com/edgeai : Technology page summarizing TI’s edge AI software/hardware products
8 | - https://github.com/TexasInstruments/edgeai : Landing page for developers to understand overall software and tools offering
9 |
10 | ---
11 |
12 | ## Edge AI Software for MPUs
13 | [Edge AI Software And Development Tools for Microprocessor devices with Linux and TIDL support](edgeai-mpu/)
14 |
15 | ---
16 |
17 | ## Edge AI Software for MCUs
18 | [Edge AI / Tiny ML Software And Development Tools for Microcontroller devices](edgeai-mcu/)
19 |
20 | ---
21 |
--------------------------------------------------------------------------------
/edgeai-mcu/README.md:
--------------------------------------------------------------------------------
1 | # Edge AI / Tiny ML Software And Development Tools for Microcontroller devices
2 |
3 | ## Introduction
4 |
5 | Analytics and AI tools for TI's application-specific microcontrollers (MCUs).
6 |
7 |
8 | ## Details of various tools
9 |
10 | The table below provides a detailed explanation of each tool:
11 |
12 | | Category | Tool/Link | Purpose | IS NOT |
13 | |---------------------------------------|-------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------|-----------------------|
14 | | **Edge AI Studio Model Composer** | [Edge AI Studio Model Composer](https://dev.ti.com/modelcomposer/) | GUI-based, no-code AI development for MCUs and MPUs - data capture, annotation, model training, compilation and live preview (MCU support coming in Nov 2024) | |
15 | | **Tiny ML model development for MCUs** | [tinyml-tensorlab](https://github.com/TexasInstruments/tinyml-tensorlab) | Command-line development for advanced users - Model Zoo, ModelMaker, model optimization tools, model training, compilation, examples and other tools. Browse the link for detailed documentation. | |
16 | | **Software Development Kit for MCUs** | [Devices & SDKs](readme_sdk.md) | SDK for compatible devices. Run ML inference on device. | |
17 | | **Neural Network Compilation Tools for MCUs** | [Neural Network Compiler for MCUs](https://software-dl.ti.com/mctools/nnc/mcu/users_guide/) | Compile neural network models for accelerated inference on TI MCUs | |
18 |
19 |
20 |
21 |
22 | ## What is New
23 | - [2025-January] Major feature update (version 1.0.0) of the software
24 |   - Tiny ML Modelmaker is now a pip-installable package!
25 |   - Feature-extraction transforms are now modular, and are compatible with C2000Ware 5.05 only
26 |   - Multiclass ROC-AUC graphs are auto-generated for better explainability of reports, and help select thresholds based on false-alarm/sensitivity preference
27 |   - A run now begins by displaying inference time, SRAM usage and flash usage on all devices for any model
28 |   - Supports Haar and Hadamard transforms
29 |   - The golden test vectors file has one set uncommented by default to work out of the box
30 |   - Existing models can be modified on the fly through a config file (see the Tiny ML Modelmaker docs)
31 |   - PCA graphs are auto-plotted for feature-extracted data → helps in identifying whether the feature extraction actually helped
32 | - [2024-November] Update (version 0.9.0) of the software
33 | - [2024-August] Release version 0.8.0 of the software
34 | - [2024-July] Release version 0.7.0 of the software
35 | - [2024-June] Release version 0.6.0 of the software
36 | - [2024-May] First release (version 0.5.0) of the software
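
As an illustration of the kind of feature-extraction transform mentioned in the 1.0.0 notes above ("Supports Haar and Hadamard transforms"), here is a minimal fast Walsh-Hadamard transform in plain Python. This is a generic textbook sketch, not Tiny ML Modelmaker's implementation; the function name and the unscaled convention are our own.

```python
def fwht(x):
    """Fast Walsh-Hadamard transform (unscaled); len(x) must be a power of two."""
    x = list(x)
    h = 1
    while h < len(x):
        # Butterfly pass: combine elements h apart with sums and differences.
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x
```

Because the transform is its own inverse up to a factor of the length, applying `fwht` twice returns the input scaled by `len(x)`, which makes a quick self-check.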
37 |
38 |
39 |
40 |
--------------------------------------------------------------------------------
/edgeai-mcu/readme_sdk.md:
--------------------------------------------------------------------------------
1 | # Devices and SDKs supported by our MCU Analytics tools
2 |
3 | ## F28P55
4 | * Product information: https://www.ti.com/product/TMS320F28P550SJ
5 | * Launchpad: https://www.ti.com/tool/LAUNCHXL-F28P55X
6 | * C2000 SDK: https://www.ti.com/tool/C2000WARE
7 |
8 | ## F28P65
9 | * Product information: https://www.ti.com/product/TMS320F28P650DK
10 | * Launchpad: https://www.ti.com/tool/LAUNCHXL-F28P65X
11 | * C2000 SDK: https://www.ti.com/tool/C2000WARE
12 |
13 | ## F2837
14 | * Product information: https://www.ti.com/product/TMS320F28377D
15 | * Launchpad: https://www.ti.com/tool/LAUNCHXL-F28379D
16 | * C2000 SDK: https://www.ti.com/tool/C2000WARE
17 |
18 | ## F28004
19 | * Product information: https://www.ti.com/product/TMS320F280049C
20 | * Launchpad: https://www.ti.com/tool/LAUNCHXL-F280049C
21 | * C2000 SDK: https://www.ti.com/tool/C2000WARE
22 |
23 | ## F28003
24 | * Product information: https://www.ti.com/product/TMS320F280039C
25 | * Launchpad: https://www.ti.com/tool/LAUNCHXL-F280039C
26 | * C2000 SDK: https://www.ti.com/tool/C2000WARE
27 |
28 |
29 |
30 | # Application Specific SDKs
31 |
32 | ## C2000 Motor Control SDK
33 | * https://www.ti.com/tool/C2000WARE-MOTORCONTROL-SDK
34 |
35 | ## C2000 Digital Power SDK
36 | * https://www.ti.com/tool/C2000WARE-DIGITALPOWER-SDK
37 |
38 |
39 |
--------------------------------------------------------------------------------
/edgeai-mpu/README.md:
--------------------------------------------------------------------------------
1 | # Edge AI Software And Development Tools for Microprocessor devices with Linux and TIDL support
2 |
3 |
4 |
5 | ## Release Notes
6 |
7 | - [2024 Dec ~ 2025 March] 10.1 release. SDKs, edgeai-tidl-tools and edgeai-tensorlab have been updated.
8 |
9 | Further details are in the [Release Notes](./docs/release_notes.md).
10 |
11 | Also see the SDKs' release notes, the [edgeai-tidl-tools release notes](https://github.com/TexasInstruments/edgeai-tidl-tools/releases) and the [edgeai-tensorlab release notes](https://github.com/TexasInstruments/edgeai-tensorlab/blob/main/docs/release_notes.md).
12 |
13 |
14 |
15 | ## Introduction
16 |
17 | Embedded inference of deep learning models is quite challenging due to high compute requirements. TI's comprehensive Edge AI software products help optimize and accelerate inference on TI's embedded devices. They support heterogeneous execution of DNNs across Cortex-A-based MPUs, TI's latest-generation C7x DSP and the DNN accelerator (MMA).
18 |
19 | TI's Edge AI solution simplifies the whole product life cycle of DNN development and deployment by providing a rich set of tools and optimized libraries.
20 |
21 | See our [Getting Started guide](./getting_started.md) for AM6xA and TDA4x with Edge AI and TI Deep Learning.
22 |
23 | ## Overview
24 |
25 | The figure below provides a high-level summary of the relevant tools:
26 |
27 |
28 |
29 | ## Details of various tools
30 |
31 | The table below provides a detailed explanation of each tool:
32 |
33 | | Category | Tool/Link| Purpose | IS NOT |
34 | |---------------------------------------------------------|----------|-------------|-----------------------|
35 | | **Inference (and compilation) Tools** | [edgeai-tidl-tools](https://github.com/TexasInstruments/edgeai-tidl-tools) | To get familiar with the model compilation and inference flow<br>- [Post-training quantization](https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/tidl_fsg_quantization.md)<br>- Benchmark latency with out-of-box example models (10+)<br>- Compile user/custom models for deployment<br>- Inference of compiled models on an x86 PC or TI SoC using file-based input and output<br>- Docker for easy development environment setup | - Does not support benchmarking model accuracy with TIDL on standard datasets (e.g. accuracy benchmarking of object detection models on MS COCO); please refer to edgeai-benchmark for that.<br>- Does not support camera, display and inference based end-to-end pipeline development; please refer to the Edge AI SDK for such usage |
36 | | **Model Selection Tool** | [Edge AI Studio: Model Selection Tool](https://www.ti.com/tool/EDGE-AI-STUDIO) | Understand performance statistics of models such as FPS, latency, accuracy & DDR bandwidth. Find the model from the TI Model Zoo that best meets your performance and accuracy goals on a TI processor. | |
37 | | **Integrated environment for training and compilation** | [Edge AI Studio: Model Analyzer](https://www.ti.com/tool/EDGE-AI-STUDIO) | Browser-based environment for model evaluation with the TI EVM farm<br>- Allows model evaluation without any software/hardware setup at the user end<br>- Users can reserve an EVM from the TI EVM farm and perform model evaluation using Jupyter notebooks<br>- **Model selection tool**: suggests suitable model architectures for TI devices | - Does not support camera, display and inference based end-to-end pipeline development; please refer to the Edge AI SDK for such usage |
38 | | ditto | [Edge AI Studio: Model Composer](https://www.ti.com/tool/EDGE-AI-STUDIO) | GUI-based integrated environment for dataset capture, annotation, training and compilation, with connectivity to a TI development board<br>- Bring/capture your own data, annotate, select a model, perform training and generate artifacts for deployment on the SDK<br>- Live preview for quick feedback | - Does not support the Bring Your Own Model workflow |
39 | | **Edge AI Software Development Kit** | [Devices & SDKs](readme_sdk.md) | SDK to develop end-to-end AI pipelines with camera, inference and display<br>- Different inference runtimes: TFLite RT, ONNX RT, Neo AI DLR, TIDL-RT<br>- Frameworks: OpenVX, GStreamer<br>- Device drivers: camera, display, networking<br>- OS: Linux, RTOS<br>- Many other software modules: codecs, OpenCV, … | |
 40 |
40 |
41 |
42 |
43 |
44 | | Category | Tool/Link | Purpose | IS NOT |
45 | |---------------------------------------------------------|-----------------|-------------------|-----------|
46 | | **Model Zoo, Model training, compilation/benchmark & associated tools** | [edgeai-tensorlab](https://github.com/TexasInstruments/edgeai-tensorlab) | To provide model training software, a collection of pretrained models with documentation, and compilation/benchmark scripts. Includes edgeai-modelzoo, edgeai-benchmark, edgeai-modeloptimization, edgeai-modelmaker, edgeai-torchvision, edgeai-mmdetection and similar repositories. | |
47 |
48 |
49 |
50 |
51 | ## Workflows
52 | Bring your own model (BYOM) workflow:
53 |
54 | Train your own model (TYOM) workflow:
55 |
56 | Bring your own data (BYOD) workflow:
57 |
58 |
59 |
60 | ## Tech Reports
61 |
62 | Technical documentation can be found in each repository. Here we have a collection of technical reports & tutorials that give a high-level overview of various topics.
63 |
64 | - [**Edge AI Tech Reports in edgeai-tensorlab**](https://github.com/TexasInstruments/edgeai-tensorlab/blob/main/docs/tech_reports/README.md)
65 |
66 |
67 |
68 | ## Publications
69 |
70 | - Read some of our [**Technical publications**](./readme_publications.md)
71 |
72 |
73 |
74 | ## Issue Trackers
75 | The **issue tracker for [Edge AI Studio](https://www.ti.com/tool/EDGE-AI-STUDIO)** is listed on its landing page.
76 |
77 | **[Issue tracker for TIDL](https://e2e.ti.com/support/processors/f/791/tags/TIDL)**: Please include the tag **TIDL** (as you create a new issue, there is a space to enter tags, at the bottom of the page).
78 |
79 | **[Issue tracker for edge AI SDK](https://e2e.ti.com/support/processors/f/791/tags/EDGEAI)** Please include the tag **EDGEAI** (as you create a new issue, there is a space to enter tags, at the bottom of the page).
80 |
81 | **[Issue tracker for ModelZoo, Model Benchmark & Deep Neural Network Training Software](https://e2e.ti.com/support/processors/f/791/tags/MODELZOO):** Please include the tag **MODELZOO** (as you create a new issue, there is a space to enter tags, at the bottom of the page).
82 |
83 |
84 |
85 | ## License
86 | Please see the [LICENSE](./LICENSE) file for more information about the license under which this landing repository is made available. The LICENSE file of each repository is inside that repository.
87 |
--------------------------------------------------------------------------------
/edgeai-mpu/assets/high-level-dev-flow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TexasInstruments/edgeai/8edd1bbf409d63d9667eeebc3fb2e8af5d1dff19/edgeai-mpu/assets/high-level-dev-flow.png
--------------------------------------------------------------------------------
/edgeai-mpu/assets/workblocks_tools_software.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TexasInstruments/edgeai/8edd1bbf409d63d9667eeebc3fb2e8af5d1dff19/edgeai-mpu/assets/workblocks_tools_software.png
--------------------------------------------------------------------------------
/edgeai-mpu/assets/workflow_bring_your_own_data.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TexasInstruments/edgeai/8edd1bbf409d63d9667eeebc3fb2e8af5d1dff19/edgeai-mpu/assets/workflow_bring_your_own_data.png
--------------------------------------------------------------------------------
/edgeai-mpu/assets/workflow_bring_your_own_model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TexasInstruments/edgeai/8edd1bbf409d63d9667eeebc3fb2e8af5d1dff19/edgeai-mpu/assets/workflow_bring_your_own_model.png
--------------------------------------------------------------------------------
/edgeai-mpu/assets/workflow_train_your_own_model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/TexasInstruments/edgeai/8edd1bbf409d63d9667eeebc3fb2e8af5d1dff19/edgeai-mpu/assets/workflow_train_your_own_model.png
--------------------------------------------------------------------------------
/edgeai-mpu/docs/release_notes.md:
--------------------------------------------------------------------------------
1 |
2 | ## Release Notes
3 | - [2024 Dec ~ 2025 March] 10.1 release. SDKs, edgeai-tidl-tools and edgeai-tensorlab are updated.
4 | - [2024 September] 10.0 release. SDKs, edgeai-tidl-tools and edgeai-tensorlab have been updated.
5 | - [2024 April] 9.2 release. Several repositories have been consolidated under edgeai-tensorlab
6 | - [2023 Dec] Updated link to Model Optimization Tools
7 | - [2023 May] Documentation update and restructure.
8 | - [2023 March] Several of these repositories have been updated
9 | - [2022 April] Several of these repositories have been updated
10 | - [2021 August] Several of our repositories are being moved from git.ti.com to github.com
11 | - [2021 December-21] Several of our repositories are being updated in preparation for the 8.1 (08_01_00_xx) release. These include edgeai-tidl-tools, edgeai-benchmark, edgeai-modelzoo and edgeai-torchvision. A new version of PROCESSOR-SDK-LINUX-SK-TDA4VM that corresponds to this will be available in a few days.
12 | - [2022-April-5] Several of the repositories are being updated in preparation for the 8.2 (08_02_00_xx) release.
--------------------------------------------------------------------------------
/edgeai-mpu/getting_started.md:
--------------------------------------------------------------------------------
1 | # Getting Started with Edge AI MPUs
2 |
3 | This guide points you to the right resources, tools, examples, and documents for new users of TI's Edge AI solution for microprocessor (MPU) devices. It is a high-level, technical document intended for developers.
4 |
5 | **Quick links for developers:**
6 | * [Skip straight to the details and start](#getting-started-with-your-selected-processor)
7 | * [Understand the AI model development flow](#ti-edge-ai-model-development-flow)
8 |
9 | Here we will focus on accelerated Edge AI devices from the AM6xA and TDA4x family of industrial and automotive processors. These feature the C7x-MMA neural network accelerator, which combines a SIMD-DSP and matrix multiplication accelerator for fast execution of neural nets.
10 | * Such devices are focused on computer vision and perception, featuring additional hardware acceleration for other vision functions like ISP, lens distortion correction, image scaling, stereo depth estimation[^1], and optical flow[^1].
11 |
12 | Please find our demo applications and use-cases on [ti.com/edgeaiprojects](https://www.ti.com/edgeaiprojects) and from our [TexasInstruments-Sandbox Github repositories](https://github.com/TexasInstruments-Sandbox?q=edgeai&type=all&language=&sort=)
13 |
14 | [^1]: Stereo depth and optical flow are part of the "DMPAC" accelerator, which is only on select processors
15 |
16 | ## Overview
17 |
18 | To use these AM6xA and TDA4x processors for your Edge AI system, follow these steps:
19 | 1) [Select a processor and acquire evaluation hardware](#select-a-processor-suited-to-your-use-case)
20 | 2) Setup the [Software Development Kit (SDK) for your device](./readme_sdk.md), install to an SD card, and boot the Edge AI SDK on the evaluation/development hardware
21 | 3) [Use TI Deep Learning (TIDL) to compile neural network models for the C7xMMA AI accelerator and run on the target device](#ti-edge-ai-model-development-flow)
22 | 4) Run models with open-source runtimes and integrate into an end-to-end application with Gstreamer
23 |
24 | ## Select a processor suited to your use-case
25 |
26 | TI has a scalable platform to cover a wide range of performance requirements. Our platform of AM6xA and TDA4x processors leverages common software so you may easily migrate to higher or lower performance devices.
27 |
28 | Key considerations include:
29 | 1) Type of AI processing, how many models, and their complexity
30 | * TOPS is the primary metric for Edge AI performance, but true benchmarks are needed because TOPS are not equivalent across the industry
31 | * **Note that TI Deep Learning (TIDL) software is optimized for vision-based models like CNNs and ViTs. Network architectures for language or time-series data may not be supported, e.g. LLMs and RNNs**
32 | 2) The number/type of sensors, e.g. cameras (and their resolution, framerate, etc.)
33 | 3) General purpose processing cores
34 | * Arm A-cores for high level operating system and general compute
35 | * MCU/Real-time cores for IO, functional safety
36 | 4) IO and peripherals for external devices and networking, e.g. USB, PCIe, SPI, Ethernet
37 | 5) Additional hardware acceleration needs, like GPU or stereo-depth accelerator
38 | 6) Power budget and thermal dissipation
39 |
40 | If you are not yet sure which processor to use, TI has resources to help decide without needing any development kits. The [following section](#low-touch-processor-evaluation-in-the-cloud) is geared to assist consideration #1 for AI processing capabilities.
41 |
42 | If you have selected a processor already, skip ahead to a later section and [evaluate with local hardware](#getting-started-with-your-selected-processor)
43 |
44 |
45 | #### Low-touch processor evaluation in the cloud
46 | You can learn about and evaluate TI's Edge AI processor options before committing to a particular one. [TI's Edge AI Studio](https://dev.ti.com/edgeaistudio/) features tools for evaluating without local hardware, in TI's low/no-code environment.
47 |
48 | Edge AI studio enables you to:
49 | 1) View and compare benchmarks on common network architectures to understand processing speed, accuracy on standard datasets, DDR bandwidth, and more with _Model Selection Tool_
50 | 2) Compile models in the cloud and run them on a cloud-hosted EVM with _Model Analyzer_
51 | 3) Curate a dataset, fine-tune a model, and compile for a target processor with _Model Composer_
52 |
53 | Please use the _Model Selection_ tool to understand achievable performance across TI's Edge AI processors and select the one most suited to your needs. If you'd like to see more of the programming interface and run a few benchmarks yourself, check out the _Model Analyzer_!
54 |
55 | ## Getting started with your selected processor
56 | Please note that some SDK documentation links will take you to the AM62A docs -- equivalent pages exist in the docs for other SoCs as well.
57 |
58 | Follow these guidelines for Edge AI development:
59 |
60 | 1) Acquire a development/evaluation board, often called a Starter Kit EVM.
61 | 2) Setup the [Software Development Kit (SDK)](./readme_sdk.md) on an SD card for the development board.
62 | 3) Evaluate the [out-of-box demo and sample end-to-end pipelines](https://software-dl.ti.com/processor-sdk-linux/esd/AM62AX/latest/exports/edgeai-docs/common/sample_apps.html) on models from [TI's model zoo](./README.md#details-of-various-tools).
63 | 4) Bring your AI task onto the processor by compiling and running models for your target hardware.
64 | * Compile models yourself with [edgeai-tidl-tools](https://github.com/TexasInstruments/edgeai-tidl-tools) -- Recommended to first try with [pre-validated examples](https://github.com/TexasInstruments/edgeai-tidl-tools?tab=readme-ov-file#validate-and-benchmark-out-of-box-examples), especially Python3 examples.
65 | * Bring your own model (BYOM) and compile for the processor with [edgeai-tidl-tools custom model flow](https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/custom_model_evaluation.md#custom-model-evaluation), using Python.
66 | * Please note: The more unique/custom your model is, the more likely you will need to modify the code to handle preprocessing and postprocessing.
67 | 5) Test and optimize your model for performance and accuracy
68 | * Embedded AI accelerators typically use fixed-point math rather than floating point to speed up processing time. Some accuracy loss is expected -- TI has tooling to mitigate this.
69 | * Ensure as many of your model's layers as possible are supported -- [see the list of supported operators](https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/supported_ops_rts_versions.md).
70 | * Optimize accuracy with more calibration data, QAT, and hybrid quantization modes -- learn more in the [edgeai-tidl-tools docs](https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master/docs) and [available compilation settings](https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master/examples/osrt_python#user-options-for-tidl-acceleration).
71 | 6) Integrate into application with Gstreamer or TI OpenVX
72 | * Start from [example dataflows](https://software-dl.ti.com/processor-sdk-linux/esd/AM62AX/latest/exports/edgeai-docs/common/edgeai_dataflows.html) to run your model in an end-to-end pipeline.
73 | * Linux systems should use [Gstreamer](https://github.com/TexasInstruments/edgeai-gst-apps); non-Linux systems and/or tightly optimized applications (including ADAS) may leverage [TIOVX](https://github.com/TexasInstruments/edgeai-tiovx-apps).
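
To make the fixed-point note in step 5 concrete, here is a generic symmetric 8-bit post-training quantization sketch in Python. It is illustrative only: this is not TIDL's actual calibration or quantization scheme (see the linked edgeai-tidl-tools docs for that), and the helper names are our own.

```python
def quantize_symmetric(values, num_bits=8):
    """Map floats onto signed integers using one symmetric scale factor."""
    qmax = 2 ** (num_bits - 1) - 1           # 127 for int8 (we skip -128 for symmetry)
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / qmax                   # real value represented by one integer step
    q = [round(v * qmax / max_abs) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate real values from the integers and the scale."""
    return [v * scale for v in q]
```

Rounding to the nearest integer step is what introduces the small accuracy loss mentioned above: each dequantized value can differ from the original by up to half a step, which is why calibration data and techniques like QAT matter.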
74 |
75 | See the next section on AI model development to learn more about steps #4 and #5.
76 |
77 | ## TI Edge AI model development flow
78 |
79 | At a high level, the Edge AI development flow features two stages.
80 | 1) On an x86 host machine (PC or server), import and compile a trained model for the C7xMMA AI accelerator
81 | * Compiled models can be run on PC with a bit-accurate emulator of the C7xMMA to evaluate and optimize accuracy
82 | 2) Transfer your compiled model to the embedded processor and accelerate with standard runtimes.
83 |
84 | 
85 |
86 | TI provides tools for various points in the process. See more in the [parent page](./README.md), which lists and provides detail on each of these tools.
87 | * For each GUI-based tool, there is a corresponding open source, programmatic / command-line tool
88 |
89 | After initial evaluation of a model architecture, developers will often iterate on their model to achieve optimal performance and accuracy.
90 | * The first stage of performance optimization is by ensuring all layers within a model have acceleration support. See our list of [supported operators](https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/supported_ops_rts_versions.md)
91 | * Accuracy optimization can be approached during or after model training. Please find supporting materials in our github repositories for topics on [post training quantization (PTQ)](https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/tidl_fsg_quantization.md) and [quantization-aware training (QAT)](https://github.com/TexasInstruments/edgeai-tensorlab/tree/main/edgeai-modeloptimization/torchmodelopt/edgeai_torchmodelopt/xmodelopt/quantization)
--------------------------------------------------------------------------------
/edgeai-mpu/readme_models-j6.md:
--------------------------------------------------------------------------------
1 | ## Model Training for Jacinto6 family of devices
2 |
3 | Deep learning and traditional ML examples for the Jacinto 6 family of devices (e.g. TDA2x, TDA3x). These older modules are not included as submodules in this repo, but can be obtained using the links below.
4 |
5 | **[Caffe-Jacinto](https://git.ti.com/cgit/jacinto-ai/caffe-jacinto/about/)**: Our Caffe fork for training sparse CNN models including Object Detection and Semantic Segmentation models.
6 |
7 | **[Caffe-Jacinto-Models](https://git.ti.com/cgit/jacinto-ai/caffe-jacinto-models/about/)**: Scripts and examples for training sparse CNN models for Image Classification, Object Detection and Semantic Segmentation.
8 |
9 | **[Acf-Jacinto](https://git.ti.com/cgit/jacinto-ai/acf-jacinto/about/)**: Training tool for HOG/ACF/AdaBoost Object Detector (traditional machine learning based)
10 |
--------------------------------------------------------------------------------
/edgeai-mpu/readme_publications.md:
--------------------------------------------------------------------------------
1 |
2 | We have the introduction, application notes and other documents listed on our main landing page https://www.ti.com/edgeai. Those are useful for getting a high-level understanding of our edge AI offering. The publications listed here are articles covering in-depth technical details.
3 |
4 | # Technical Articles
5 |
6 | - Prune Efficiently by Soft Pruning, Parakh Agarwal, Manu Mathew, Kunal Ranjan Patel, Varun Tripathi, Pramod Swami, Embedded Vision Workshop, CVPR 2024, https://openaccess.thecvf.com/content/CVPR2024W/EVW/papers/Agarwal_Prune_Efficiently_by_Soft_Pruning_CVPRW_2024_paper.pdf
7 |
8 | - YOLO-Pose: Enhancing YOLO for Multi Person Pose Estimation Using Object Keypoint Similarity Loss, Debapriya Maji, Soyeb Nagori, Manu Mathew, Deepak Poddar, https://arxiv.org/abs/2204.06806
9 |
10 | - Accelerated Point Cloud-based 3D Object Detection on Texas Instrument TDA4 Based Processor, https://medium.com/@deepak.kumar.poddar/accelerated-point-cloud-based-3d-object-detection-using-tda4-b20a413f3a41
11 |
12 | - SS3D: Single Shot 3D Object Detector, Aniket Limaye, Manu Mathew, Soyeb Nagori, Pramod Kumar Swami, Debapriya Maji, Kumar Desappan, https://arxiv.org/abs/2004.14674
13 |
14 | - Deep Learning based Parking Spot Detection and Classification in Fish-Eye Images, Deepak Poddar, Soyeb Nagori, Manu Mathew, Debapriya Maji, Hrushikesh Garud, 2019 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), 2019, pp. 1-5, doi: 10.1109/CONECCT47791.2019.9012933.
15 |
16 | - Efficient Semantic Segmentation using Gradual Grouping, Nikitha Vallurupalli, Sriharsha Annamaneni, Girish Varma, C V Jawahar, Manu Mathew, Soyeb Nagori, https://arxiv.org/abs/1806.08522
17 |
18 |
19 | # Other Resources
20 |
21 | - Process This: A Monthly Webinar Series - Monthly webinars on embedded processing topics from product announcements and technical trainings to market and design trends. https://training.ti.com/process-monthly-webinar-series
22 |
23 | - TI edge AI Academy - Become an expert in AI development in days. Learn with Free Cloud Tools. Build with an 8 TOPS processor starter kit. NO EXPERIENCE needed! https://ti.com/edgeaiacademy
24 |
25 | - TI edge AI demos - Add embedded intelligence to your design using TI edge AI and robotics software demos created with TDA4x processors, https://ti.com/edgeaiprojects
26 |
27 |
28 |
--------------------------------------------------------------------------------
/edgeai-mpu/readme_sdk.md:
--------------------------------------------------------------------------------
1 | #### Notes
2 | - Note 1: In the links below, replace the {version} field with a specific SDK version (for example, 09_01_00) or with latest. SDK versions can be found on the SDK download page of each device.
3 | - Note 2: Additional information is available here: https://github.com/TexasInstruments/edgeai-tidl-tools#supported-devices
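
As a small illustration of Note 1, the {version} field can be filled in programmatically. The helper name `sdk_doc_url` and the choice of the AM62A documentation URL are just for this example.

```python
# AM62A Edge AI SDK documentation URL template (the {version} field from Note 1).
SDK_DOC_URL = ("https://software-dl.ti.com/processor-sdk-linux/esd/AM62AX/"
               "{version}/exports/edgeai-docs/common/sdk_overview.html")

def sdk_doc_url(version="latest"):
    """Fill in the {version} field, e.g. '09_01_00' or 'latest'."""
    return SDK_DOC_URL.format(version=version)
```

For example, `sdk_doc_url("09_01_00")` yields the docs for that specific release, while the default points at the latest documentation.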
4 |
5 | ## Supported Devices & SDKs
6 |
7 | ### AM62A
8 | * Product information: https://www.ti.com/product/AM62A7
9 | * Development Board: https://www.ti.com/tool/SK-AM62A-LP
10 | * SDK landing page: https://www.ti.com/tool/PROCESSOR-SDK-AM62A
11 | * Edge AI Linux SDK: https://www.ti.com/tool/download/PROCESSOR-SDK-LINUX-AM62A
12 | * Edge AI SDK documentation: https://software-dl.ti.com/processor-sdk-linux/esd/AM62AX/{version}/exports/edgeai-docs/common/sdk_overview.html
13 |
14 |
15 |
16 | ### AM67A
17 | * Product information: https://www.ti.com/product/AM67A
18 | * Development Board: https://www.ti.com/tool/J722SXH01EVM
19 | * SDK landing page: https://www.ti.com/tool/PROCESSOR-SDK-AM67A
20 | * Edge AI Linux SDK: https://www.ti.com/tool/download/PROCESSOR-SDK-LINUX-AM67A
21 | * Edge AI SDK documentation: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-edgeai/AM67A/{version}/exports/docs/common/sdk_overview.html
22 |
23 | ### TDA4AEN
24 | * Product information: https://www.ti.com/product/TDA4AEN-Q1
25 | * Development Board: https://www.ti.com/product/TDA4AEN-Q1#design-development
26 | * SDK landing page: https://www.ti.com/tool/PROCESSOR-SDK-J722S
27 | * TIDL Documentation: https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-j722s/{version}/exports/docs/psdk_rtos/docs/user_guide/sdk_components_j722s.html#ti-deep-learning-product-tidl
28 | * **Note: Models compiled for AM67A can also be used on this device.**
29 |
30 |
31 |
32 | ### AM68A
33 | * Product information: https://www.ti.com/product/AM68A
34 | * Development Board: https://www.ti.com/tool/SK-AM68
35 | * SDK landing page: https://www.ti.com/tool/PROCESSOR-SDK-AM68A
36 | * Edge AI Linux SDK: https://www.ti.com/tool/download/PROCESSOR-SDK-LINUX-AM68A
37 | * Edge AI SDK documentation: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-edgeai/AM68A/{version}/exports/docs/common/sdk_overview.html
38 |
39 | ### TDA4AL, TDA4VE, TDA4VL
40 | * Product information: https://www.ti.com/product/TDA4AL-Q1, https://www.ti.com/product/TDA4VL-Q1, https://www.ti.com/product/TDA4VE-Q1
41 | * Development Board: https://www.ti.com/product/TDA4AL-Q1#design-development
42 | * SDK landing page: https://www.ti.com/tool/download/PROCESSOR-SDK-RTOS-J721S2
43 | * TIDL Documentation: https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-j721s2/{version}/exports/docs/psdk_rtos/docs/user_guide/sdk_components_j721s2.html#ti-deep-learning-product-tidl
44 | * **Note: Models compiled for AM68A can also be used on these devices.**
45 |
46 |
47 |
48 | ### AM69A
49 | * Product information: https://www.ti.com/product/AM69A
50 | * Development Board: https://www.ti.com/tool/SK-AM69
51 | * SDK landing page: https://www.ti.com/tool/PROCESSOR-SDK-AM69A
52 | * Edge AI Linux SDK: https://www.ti.com/tool/download/PROCESSOR-SDK-LINUX-AM69A
53 | * Edge AI SDK documentation: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-edgeai/AM69A/{version}/exports/docs/common/sdk_overview.html
54 |
55 |
56 | ### TDA4VH, TDA4AH, TDA4VP, TDA4AP
57 | * Product information: https://www.ti.com/product/TDA4VH-Q1, https://www.ti.com/product/TDA4AH-Q1, https://www.ti.com/product/TDA4VP-Q1, https://www.ti.com/product/TDA4AP-Q1
58 | * Development Board: https://www.ti.com/product/TDA4VH-Q1#design-development
59 | * SDK landing page: https://www.ti.com/tool/download/PROCESSOR-SDK-RTOS-J784S4
60 | * TIDL Documentation: https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-j784s4/{version}/exports/docs/psdk_rtos/docs/user_guide/sdk_components_j784s4.html#ti-deep-learning-product-tidl
61 | * **Note: Models compiled for AM69A can also be used on these devices.**
62 |
63 |
64 |
65 | ### TDA4VM
66 | * Product information: https://www.ti.com/product/TDA4VM
67 | * Development Board: https://www.ti.com/tool/SK-TDA4VM
68 | * SDK landing page: https://www.ti.com/tool/PROCESSOR-SDK-J721E
69 | * Edge AI Linux SDK: https://www.ti.com/tool/download/PROCESSOR-SDK-LINUX-SK-TDA4VM
70 | * Edge AI SDK documentation: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-sk-tda4vm/{version}/exports/edgeai-docs/common/sdk_overview.html
71 | * **Note: Also referred to as AM68PA.**
72 |
73 |
74 |
75 | ### AM62
76 | * Product information: https://www.ti.com/product/AM623, https://www.ti.com/product/AM625, https://www.ti.com/product/AM62P
77 | * Development Board: https://www.ti.com/tool/SK-AM62, https://www.ti.com/tool/SK-AM62-LP
78 | * SDK landing page: https://www.ti.com/tool/PROCESSOR-SDK-AM62X
79 | * Edge AI Linux SDK: https://www.ti.com/tool/download/PROCESSOR-SDK-LINUX-AM62X
80 | * Edge AI SDK documentation: https://software-dl.ti.com/processor-sdk-linux/esd/AM62AX/{version}/exports/edgeai-docs/common/sdk_overview.html
81 |
82 |
83 |
--------------------------------------------------------------------------------
/make_release.sh:
--------------------------------------------------------------------------------
1 | git push           # push the current branch to the default remote
2 | git push github    # also push to the 'github' remote
3 |
--------------------------------------------------------------------------------