├── README.md
├── demo
│   ├── float.GIF
│   └── quant.GIF
└── example_project
    └── Object_Detect_examples.tar.gz

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# 1. Tutorial_TFLite_android
A tutorial on how to deploy DNNs on Android devices using TFLite.

# 2. Content

- [1. Tutorial_TFLite_android](#1-tutorial_tflite_android)
- [2. Content](#2-content)
- [3. Quick Start](#3-quick-start)
  - [3.1. Quick Guide](#31-quick-guide)
  - [3.2. 8-bit Quantization and Deployment on Android](#32-8-bit-quantization-and-deployment-on-android)
- [4. Demo](#4-demo)
- [5. Object Detection](#5-object-detection)

# 3. Quick Start
## 3.1. Quick Guide
You can follow the official tutorial [here](https://www.tensorflow.org/lite/examples/). Unfortunately, only the classification demo works out of the box. When you run the object detection demo on your phone, it reports the runtime error "Object Detector not found", because a shared library is missing. [The Object Detection section](#5-object-detection) of this tutorial explains how to fix this.

## 3.2. 8-bit Quantization and Deployment on Android
The [tutorial](https://www.tensorflow.org/lite/examples/) mentioned above only offers example models such as SSD and MobileNetV1, but you can also deploy your own model. However, there is currently no thorough, detailed online tutorial on how to perform quantization and Android deployment, so this tutorial covers the process step by step.
1. If you are using another framework such as MXNet or PyTorch, the first thing you need to do is convert your DNN model into a TensorFlow model via [ONNX](https://onnx.ai/).
1. Before 8-bit quantization, you first need to insert fake quantization into the model, using `tf.contrib.quantize.create_training_graph` to record min/max ranges and `tf.contrib.quantize.create_eval_graph` for evaluation (see the [documentation](https://www.tensorflow.org/api_docs/python/tf/contrib/quantize/create_training_graph)). Then run the model on your calibration dataset.
1. Use the [TFLite Converter](https://www.tensorflow.org/api_docs/python/tf/lite/TFLiteConverter) to convert the fake-quantized model into a TFLite model. Note that the operator list needs to contain all the intermediate operators. You can do this either on the command line or in Python code.
1. Put your quantized model in the [`assets` folder](https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android/app/src/main/assets) and revise the model path in the code [here](https://github.com/tensorflow/examples/blob/34884ff54ffbba5e4466f87e1347000adabcd930/lite/examples/object_detection/android/app/src/main/java/org/tensorflow/lite/examples/detection/DetectorActivity.java#L55). Please also make sure the `label.txt` in the `assets` folder corresponds to your dataset.

# 4. Demo
A significant performance improvement can be seen when comparing the floating-point model with its 8-bit quantized counterpart.
| Floating-point Demo | 8-bit Demo |
| --- | --- |
| ![Floating-point demo](demo/float.GIF) | ![8-bit demo](demo/quant.GIF) |
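For reference, the command-line path for the TFLite conversion step in Section 3.2 looks roughly like the sketch below. This is a minimal, hypothetical invocation of the `tflite_convert` CLI shipped with TensorFlow 1.x (not a command taken from the official tutorial); the file name, tensor names, and quantization stats are placeholders that must match your own fake-quantized frozen graph.

```shell
# Sketch only: frozen_eval_graph.pb, the input/output array names, and the
# mean/std stats are placeholders for your own model.
# Requires TensorFlow 1.x, which provides the tflite_convert CLI.
tflite_convert \
  --graph_def_file=frozen_eval_graph.pb \
  --output_file=detect_quant.tflite \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays=outputs \
  --inference_type=QUANTIZED_UINT8 \
  --mean_values=128 \
  --std_dev_values=127
```

The equivalent Python path uses `tf.lite.TFLiteConverter.from_frozen_graph` and sets `converter.inference_type` to quantized uint8, together with `converter.quantized_input_stats` for the input mean/std.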