├── README.md ├── android_permissions.py ├── applayout.py ├── buildozer.spec ├── camerax_provider ├── camerax_src │ └── org │ │ └── kivy │ │ └── camerax │ │ ├── CallbackWrapper.java │ │ ├── CameraX.java │ │ ├── ImageAnalysisAnalyzer.java │ │ ├── ImageProxyOps.java │ │ ├── ImageSavedCallback.java │ │ ├── KivySurfaceProvider.java │ │ └── VideoSavedCallback.java └── gradle_options.py ├── classifyobject.py ├── example.jpg ├── icons ├── camera-flip-outline.png ├── camera_red.png ├── camera_white.png ├── cellphone-screenshot_red.png ├── cellphone-screenshot_white.png ├── flash-auto.png ├── flash-off.png ├── flash.png ├── monitor-screenshot_red.png ├── monitor-screenshot_white.png ├── video-off.png ├── video_red.png └── video_white.png ├── main.py ├── object_detection ├── efficientdet_lite0.tflite ├── efficientdet_lite0_edgetpu.tflite ├── model.tflite └── object_detector.py └── tfl_2_12_not_arm7 └── tflite-runtime ├── CMakeLists.patch ├── __init__.py └── build_with_cmake.patch /README.md: -------------------------------------------------------------------------------- 1 | Camera4Kivy Tensorflow Lite Example 2 | ================================== 3 | 4 | *Tensorflow Lite Image Analysis using Camera4Kivy* 5 | 6 | **2023-11-13 This repository is archived.** 7 | 8 | # Overview 9 | 10 | Uses Tensorflow Lite to classify objects in the image stream; classified objects are boxed and labeled in the Preview. 11 | 12 | Available on some of the [usual platforms](https://github.com/Android-for-Python/Camera4Kivy/#tested-examples-and-platforms). 13 | 14 | This example is based on a [Tensorflow Lite Object Detection Example](https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/raspberry_pi). That example was trained on the [COCO dataset](https://cocodataset.org/#home), which contains common household objects, for example chair, person, cup, etc. It exhibits false positives; for example, a picture on the wall may be classified as a TV. 
15 | 16 | ![Example](example.jpg) 17 | 18 | The original Google example depends on numpy and opencv. This derived example removes the `opencv.resize` usage and instead uses `self.auto_analyze_resolution` to set the resolution the example requires. This was done to decrease apk size (no opencv) and to increase performance on faster devices. 19 | 20 | This Google example was chosen because it was trained on a complex dataset and is thus slow to infer objects, making it a good test of any performance limitations. Detection update rates and detection confidence vary by platform, because platforms use different delegates. The NNAPI delegate, available on Android >= 11, seems to be the fastest. 21 | 22 | For this example, platform performance is approximately as follows (note: with the exception of arm7, the tflite versions listed are the highest I have actually tried, not the versions required): 23 | 24 | | Platform | tflite version | approx annotations per sec | 25 | |----------|----------------|-----------------------| 26 | | Windows i7 | 2.12.0 | 9 | 27 | | Emulator x86 (Windows i7) | 2.8.0 | 1 | 28 | | Mac i5 | 2.5.0.post1 | 3 | 29 | | Nexus5 Android 6 arm7 | 2.8.0 | 5 | 30 | | Windows + Coral | 2.5.0.post1 | 7 | 31 | | Pixel5 Android 12 arm8 | 2.12.0 | 27 | 32 | 33 | These are annotation update rates; in Camera4Kivy, background image frame update rates are mostly independent of them. Surprisingly, single-digit annotation rates provide acceptable behavior because of the stochastic nature of detection. Conversely, a high update rate can appear too fast, and tuning might be required. 34 | 35 | On the desktop we use the Python tensorflow package; on mobile devices, the Python tflite-runtime package. 36 | 37 | The example uses the recipe for `tflite-runtime` included in p4a. This uses tflite-runtime 2.11.0, and runs on arm7 and arm8 devices and the x86 (but not x86_64) emulator. 
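As noted above, this example removes the `opencv.resize` dependency by asking Camera4Kivy (via `auto_analyze_resolution`) to deliver frames at the resolution the model expects. For illustration only, a plain-numpy nearest-neighbour downscale achieves the same effect; the function name and frame shapes below are hypothetical, not code from this repository.

```python
import numpy as np

def nearest_resize(img, out_h, out_w):
    # Map each output pixel back to its nearest source pixel using
    # integer coordinate scaling, then gather with fancy indexing.
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h   # source row per output row
    cols = np.arange(out_w) * in_w // out_w   # source column per output column
    return img[rows[:, None], cols]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # a stand-in camera frame
small = nearest_resize(frame, 320, 320)          # 320x320 is EfficientDet-Lite0's input size
```

This keeps the apk free of opencv at the cost of lower-quality interpolation, which is usually acceptable for detector input.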
38 | 39 | # Image Analysis Architecture 40 | 41 | Tensorflow image analysis has four distinct components: 42 | 43 | - The tensorflow lite model, formatted to include tensor labels. Created, for example, with [Model Maker](https://www.tensorflow.org/lite/guide/model_maker); see also the [tflite_model_maker API](https://www.tensorflow.org/lite/api_docs/python/tflite_model_maker). A model is created using a training data set. 44 | 45 | - The Python `tflite-runtime` package. This performs inference based on the model and the input image. This package is available from Google on most desktops, and is provided by a python-for-android recipe on Android. 46 | 47 | - The model interface, specific to the tflite model, in this case [`object_detection/object_detector.py`](https://github.com/Android-for-Python/c4k_tflite_example/blob/main/object_detection/object_detector.py). This encodes the model input for the tflite runtime, executes the tflite runtime on one frame, and decodes the tflite runtime output. 48 | 49 | - The Camera4Kivy interface [classifyobject.py](https://github.com/Android-for-Python/c4k_tflite_example/blob/main/classifyobject.py). This passes the video frame to the model interface, and annotates the Preview widget canvas with the model interface output. 50 | 51 | # tflite-runtime Documentation 52 | 53 | [Python API](https://www.tensorflow.org/lite/api_docs/python/tf/lite), 54 | [Python quickstart](https://www.tensorflow.org/lite/guide/python), and 55 | [21 Python examples](https://github.com/tensorflow/examples/tree/master/lite/examples). 56 | 57 | # Install 58 | 59 | This example depends on [Camera4Kivy](https://github.com/Android-for-Python/Camera4Kivy#camera4kivy). Depending on the platform, you may need to install a [camera provider](https://github.com/Android-for-Python/Camera4Kivy#camera-provider). 
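The encode/execute/decode split described in the architecture section can be sketched generically. The decode step below is a hedged outline of how typical SSD/EfficientDet-style output tensors become labeled detections; the function name, tensor layouts, and the 0.5 threshold are assumptions for illustration, not this repository's exact code.

```python
import numpy as np

def decode_detections(boxes, classes, scores, labels, threshold=0.5):
    """Turn raw detector output tensors into (label, score, box) tuples.

    boxes:   (N, 4) array of normalized [ymin, xmin, ymax, xmax]
    classes: (N,)   array of class indices into `labels`
    scores:  (N,)   array of confidences in [0, 1]
    """
    results = []
    for box, cls, score in zip(boxes, classes, scores):
        if score >= threshold:  # keep only confident detections
            results.append((labels[int(cls)], float(score), box.tolist()))
    return results

# Fake output tensors shaped like a typical detector head.
boxes = np.array([[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.9, 0.9]])
classes = np.array([0.0, 56.0])
scores = np.array([0.83, 0.31])
labels = {0: 'person', 56: 'chair'}
detections = decode_detections(boxes, classes, scores, labels)
```

In the actual app, tuples like these are what the Camera4Kivy interface draws as boxes and labels on the Preview canvas.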
60 | 61 | ## Windows, MacOS x86_64, Linux 62 | `pip3 install numpy camera4kivy tensorflow` 63 | 64 | If you use a [Coral Accelerator](https://coral.ai/products/accelerator), set `enable_edgetpu = True` in `classifyobject.py`. 65 | 66 | ## Android 67 | 68 | The buildozer.spec has these characteristics: 69 | 70 | ``` 71 | source.include_exts = ...,tflite 72 | source.exclude_patterns = object_detection/efficient*.tflite 73 | requirements = python3,kivy,camera4kivy,gestures4kivy,numpy,tflite-runtime 74 | android.api = 33 75 | android.archs = arm64-v8a 76 | p4a.hook = camerax_provider/gradle_options.py 77 | ``` 78 | 79 | Note: the p4a recipe uses tflite 2.8.0; this has not been updated due to a Tensorflow issue on armeabi-v7a devices ([issue status](https://github.com/tensorflow/tensorflow/issues/59970)). However, it is possible to run 2.12.0 on other Android architectures; a recipe for 2.12.0 is included with this repository (but fails to build on armeabi-v7a). To use it, in buildozer.spec set: 80 | ``` 81 | p4a.local_recipes = tfl_2_12_not_arm7 82 | ``` 83 | 84 | Note armeabi-v7a: some very old armeabi-v7a devices may not have NEON instructions and will not work (the failure mechanism, if any, is unknown). Set `android.minapi = 23` to exclude these devices (and some devices that do have NEON instructions as well). 85 | 86 | Note x86_64: this recipe does not build for x86_64; for an emulator, use x86. Using x86_64 will result in one of these app run time error messages: "ModuleNotFoundError: No module named 'tensorflow'" or "ModuleNotFoundError: No module named 'tflite_runtime'". 
87 | 88 | ## iOS 89 | 90 | **This example is not available on iOS due to the lack of a tflite-runtime recipe.** 91 | 92 | -------------------------------------------------------------------------------- /android_permissions.py: -------------------------------------------------------------------------------- 1 | from kivy.utils import platform 2 | from kivy.clock import mainthread 3 | 4 | if platform == 'android': 5 | from kivy.uix.button import Button 6 | from kivy.uix.modalview import ModalView 7 | from kivy.clock import Clock 8 | from android import api_version, mActivity 9 | from android.permissions import request_permissions, check_permission, \ 10 | Permission 11 | 12 | 13 | ######################################################################### 14 | # 15 | # The start_app callback may occur up to two timesteps after this class 16 | # is instantiated. So the class must exist for two time steps; if not, the 17 | # callback will not be called. 18 | # 19 | # To defer garbage collection, instantiate this class with a class variable: 20 | # 21 | # def on_start(self): 22 | # self.dont_gc = AndroidPermissions(self.start_app) 23 | # 24 | # def start_app(self): 25 | # self.dont_gc = None 26 | # 27 | ########################################################################### 28 | # 29 | # Android Behavior: 30 | # 31 | # If the user selects "Don't Allow", the ONLY way to enable 32 | # the disallowed permission is with App Settings. 33 | # This class gives the user an additional chance if "Don't Allow" is 34 | # selected once. 
35 | # 36 | ########################################################################### 37 | 38 | class AndroidPermissions: 39 | def __init__(self, start_app = None): 40 | self.permission_dialog_count = 0 41 | self.start_app = start_app 42 | if platform == 'android': 43 | ################################################# 44 | # Customize run time permissions for the app here 45 | ################################################# 46 | self.permissions = [Permission.CAMERA] 47 | ################################################# 48 | self.permission_status([],[]) 49 | elif self.start_app: 50 | self.start_app() 51 | 52 | def permission_status(self, permissions, grants): 53 | granted = True 54 | for p in self.permissions: 55 | granted = granted and check_permission(p) 56 | if granted: 57 | if self.start_app: 58 | self.start_app() 59 | elif self.permission_dialog_count < 2: 60 | Clock.schedule_once(self.permission_dialog) 61 | else: 62 | self.no_permission_view() 63 | 64 | def permission_dialog(self, dt): 65 | self.permission_dialog_count += 1 66 | request_permissions(self.permissions, self.permission_status) 67 | 68 | @mainthread 69 | def no_permission_view(self): 70 | view = ModalView() 71 | view.add_widget(Button(text='Permission NOT granted.\n\n' +\ 72 | 'Tap to quit app.\n\n\n' +\ 73 | 'If you selected "Don\'t Allow",\n' +\ 74 | 'enable permission with App Settings.', 75 | on_press=self.bye)) 76 | view.open() 77 | 78 | def bye(self, instance): 79 | mActivity.finishAndRemoveTask() 80 | -------------------------------------------------------------------------------- /applayout.py: -------------------------------------------------------------------------------- 1 | from kivy.core.window import Window 2 | from kivy.lang import Builder 3 | from kivy.properties import StringProperty, ObjectProperty 4 | from kivy.uix.floatlayout import FloatLayout 5 | from kivy.uix.relativelayout import RelativeLayout 6 | from kivy.utils import platform 7 | from classifyobject import 
ClassifyObject 8 | 9 | class AppLayout(FloatLayout): 10 | detect = ObjectProperty() 11 | 12 | class ButtonsLayout(RelativeLayout): 13 | normal = StringProperty() 14 | down = StringProperty() 15 | 16 | def __init__(self, **kwargs): 17 | super().__init__(**kwargs) 18 | if platform == 'android': 19 | self.normal = 'icons/cellphone-screenshot_white.png' 20 | self.down = 'icons/cellphone-screenshot_red.png' 21 | else: 22 | self.normal = 'icons/monitor-screenshot_white.png' 23 | self.down = 'icons/monitor-screenshot_red.png' 24 | 25 | def on_size(self, layout, size): 26 | if platform == 'android': 27 | self.ids.screen.min_state_time = 0.5 28 | else: 29 | self.ids.screen.min_state_time = 1 30 | if Window.width < Window.height: 31 | self.pos = (0 , 0) 32 | self.size_hint = (1 , 0.2) 33 | self.ids.other.pos_hint = {'center_x':.3,'center_y':.5} 34 | self.ids.other.size_hint = (.2, None) 35 | self.ids.screen.pos_hint = {'center_x':.7,'center_y':.5} 36 | self.ids.screen.size_hint = (.2, None) 37 | else: 38 | self.pos = (Window.width * 0.8, 0) 39 | self.size_hint = (0.2 , 1) 40 | self.ids.other.pos_hint = {'center_x':.5,'center_y':.7} 41 | self.ids.other.size_hint = (None, .2) 42 | self.ids.screen.pos_hint = {'center_x':.5,'center_y':.3} 43 | self.ids.screen.size_hint = (None, .2) 44 | 45 | def screenshot(self): 46 | self.parent.detect.capture_screenshot() 47 | 48 | def select_camera(self, facing): 49 | self.parent.detect.select_camera(facing) 50 | 51 | Builder.load_string(""" 52 | : 53 | detect: self.ids.preview 54 | ClassifyObject: 55 | letterbox_color: 'steelblue' 56 | id:preview 57 | ButtonsLayout: 58 | id:buttons 59 | 60 | : 61 | normal: 62 | down: 63 | Button: 64 | id:other 65 | on_press: root.select_camera('toggle') 66 | height: self.width 67 | width: self.height 68 | background_normal: 'icons/camera-flip-outline.png' 69 | background_down: 'icons/camera-flip-outline.png' 70 | Button: 71 | id:screen 72 | on_press: root.screenshot() 73 | height: self.width 74 | width: 
self.height 75 | background_normal: root.normal 76 | background_down: root.down 77 | """) 78 | 79 | 80 | -------------------------------------------------------------------------------- /buildozer.spec: -------------------------------------------------------------------------------- 1 | [app] 2 | 3 | # (str) Title of your application 4 | title = TFL 5 | 6 | # (str) Package name 7 | package.name = c4k_tflite 8 | 9 | # (str) Package domain (needed for android/ios packaging) 10 | package.domain = org.example 11 | 12 | # (str) Source code where the main.py live 13 | source.dir = . 14 | 15 | # (list) Source files to include (let empty to include all the files) 16 | source.include_exts = py,png,jpg,kv,atlas,tflite 17 | 18 | # (list) List of inclusions using pattern matching 19 | #source.include_patterns = assets/*,images/*.png 20 | 21 | # (list) Source files to exclude (let empty to not exclude anything) 22 | #source.exclude_exts = spec 23 | 24 | # (list) List of directory to exclude (let empty to not exclude anything) 25 | #source.exclude_dirs = tests, bin, venv 26 | 27 | # (list) List of exclusions using pattern matching 28 | # Do not prefix with './' 29 | source.exclude_patterns = object_detection/efficient*.tflite 30 | 31 | # (str) Application versioning (method 1) 32 | version = 0.1 33 | 34 | # (str) Application versioning (method 2) 35 | # version.regex = __version__ = ['"](.*)['"] 36 | # version.filename = %(source.dir)s/main.py 37 | 38 | # (list) Application requirements 39 | # comma separated e.g. 
requirements = sqlite3,kivy 40 | requirements = python3,kivy,camera4kivy,gestures4kivy,numpy,tflite-runtime 41 | 42 | # (str) Custom source folders for requirements 43 | # Sets custom source for any requirements with recipes 44 | # requirements.source.kivy = ../../kivy 45 | 46 | # (str) Presplash of the application 47 | #presplash.filename = %(source.dir)s/data/presplash.png 48 | 49 | # (str) Icon of the application 50 | #icon.filename = %(source.dir)s/data/icon.png 51 | 52 | # (str) Supported orientation (one of landscape, sensorLandscape, portrait or all) 53 | orientation = portrait, landscape, portrait-reverse, landscape-reverse 54 | 55 | # (list) List of service to declare 56 | #services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY 57 | 58 | # 59 | # OSX Specific 60 | # 61 | 62 | # 63 | # author = © Copyright Info 64 | 65 | # change the major version of python used by the app 66 | osx.python_version = 3 67 | 68 | # Kivy version to use 69 | osx.kivy_version = 1.9.1 70 | 71 | # 72 | # Android specific 73 | # 74 | 75 | # (bool) Indicate if the application should be fullscreen or not 76 | fullscreen = 0 77 | 78 | # (string) Presplash background color (for android toolchain) 79 | # Supported formats are: #RRGGBB #AARRGGBB or one of the following names: 80 | # red, blue, green, black, white, gray, cyan, magenta, yellow, lightgray, 81 | # darkgray, grey, lightgrey, darkgrey, aqua, fuchsia, lime, maroon, navy, 82 | # olive, purple, silver, teal. 83 | #android.presplash_color = #FFFFFF 84 | 85 | # (string) Presplash animation using Lottie format. 86 | # see https://lottiefiles.com/ for examples and https://airbnb.design/lottie/ 87 | # for general documentation. 88 | # Lottie files can be created using various tools, like Adobe After Effect or Synfig. 
89 | #android.presplash_lottie = "path/to/lottie/file.json" 90 | 91 | # (str) Adaptive icon of the application (used if Android API level is 26+ at runtime) 92 | #icon.adaptive_foreground.filename = %(source.dir)s/data/icon_fg.png 93 | #icon.adaptive_background.filename = %(source.dir)s/data/icon_bg.png 94 | 95 | # (list) Permissions 96 | #android.permissions = INTERNET 97 | 98 | # (list) features (adds uses-feature -tags to manifest) 99 | #android.features = android.hardware.usb.host 100 | 101 | # (int) Target Android API, should be as high as possible. 102 | android.api = 33 103 | 104 | # (int) Minimum API your APK will support. 105 | #android.minapi = 21 106 | 107 | # (int) Android SDK version to use 108 | #android.sdk = 20 109 | 110 | # (str) Android NDK version to use 111 | #android.ndk = 19b 112 | 113 | # (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi. 114 | #android.ndk_api = 21 115 | 116 | # (bool) Use --private data storage (True) or --dir public storage (False) 117 | #android.private_storage = True 118 | 119 | # (str) Android NDK directory (if empty, it will be automatically downloaded.) 120 | #android.ndk_path = 121 | 122 | # (str) Android SDK directory (if empty, it will be automatically downloaded.) 123 | #android.sdk_path = 124 | 125 | # (str) ANT directory (if empty, it will be automatically downloaded.) 126 | #android.ant_path = 127 | 128 | # (bool) If True, then skip trying to update the Android sdk 129 | # This can be useful to avoid excess Internet downloads or save time 130 | # when an update is due and you just want to test/build your package 131 | # android.skip_update = False 132 | 133 | # (bool) If True, then automatically accept SDK license 134 | # agreements. This is intended for automation only. If set to False, 135 | # the default, you will be shown the license when first running 136 | # buildozer. 
137 | # android.accept_sdk_license = False 138 | 139 | # (str) Android entry point, default is ok for Kivy-based app 140 | #android.entrypoint = org.kivy.android.PythonActivity 141 | 142 | # (str) Full name including package path of the Java class that implements Android Activity 143 | # use that parameter together with android.entrypoint to set custom Java class instead of PythonActivity 144 | #android.activity_class_name = org.kivy.android.PythonActivity 145 | 146 | # (str) Extra xml to write directly inside the element of AndroidManifest.xml 147 | # use that parameter to provide a filename from where to load your custom XML code 148 | #android.extra_manifest_xml = ./src/android/extra_manifest.xml 149 | 150 | # (str) Extra xml to write directly inside the tag of AndroidManifest.xml 151 | # use that parameter to provide a filename from where to load your custom XML arguments: 152 | #android.extra_manifest_application_arguments = ./src/android/extra_manifest_application_arguments.xml 153 | 154 | # (str) Full name including package path of the Java class that implements Python Service 155 | # use that parameter to set custom Java class instead of PythonService 156 | #android.service_class_name = org.kivy.android.PythonService 157 | 158 | # (str) Android app theme, default is ok for Kivy-based app 159 | # android.apptheme = "@android:style/Theme.NoTitleBar" 160 | 161 | # (list) Pattern to whitelist for the whole project 162 | #android.whitelist = 163 | 164 | # (str) Path to a custom whitelist file 165 | #android.whitelist_src = 166 | 167 | # (str) Path to a custom blacklist file 168 | #android.blacklist_src = 169 | 170 | # (list) List of Java .jar files to add to the libs so that pyjnius can access 171 | # their classes. Don't add jars that you do not need, since extra jars can slow 172 | # down the build process. 
Allows wildcards matching, for example: 173 | # OUYA-ODK/libs/*.jar 174 | #android.add_jars = foo.jar,bar.jar,path/to/more/*.jar 175 | 176 | # (list) List of Java files to add to the android project (can be java or a 177 | # directory containing the files) 178 | #android.add_src = 179 | 180 | # (list) Android AAR archives to add 181 | #android.add_aars = 182 | 183 | # (list) Put these files or directories in the apk assets directory. 184 | # Either form may be used, and assets need not be in 'source.include_exts'. 185 | # 1) android.add_assets = source_asset_relative_path 186 | # 2) android.add_assets = source_asset_path:destination_asset_relative_path 187 | #android.add_assets = 188 | 189 | # (list) Gradle dependencies to add 190 | #android.gradle_dependencies = 191 | 192 | # (bool) Enable AndroidX support. Enable when 'android.gradle_dependencies' 193 | # contains an 'androidx' package, or any package from Kotlin source. 194 | # android.enable_androidx requires android.api >= 28 195 | #android.enable_androidx = False 196 | 197 | # (list) add java compile options 198 | # this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option 199 | # see https://developer.android.com/studio/write/java8-support for further information 200 | # android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8" 201 | 202 | # (list) Gradle repositories to add {can be necessary for some android.gradle_dependencies} 203 | # please enclose in double quotes 204 | # e.g. android.gradle_repositories = "maven { url 'https://kotlin.bintray.com/ktor' }" 205 | #android.add_gradle_repositories = 206 | 207 | # (list) packaging options to add 208 | # see https://google.github.io/android-gradle-dsl/current/com.android.build.gradle.internal.dsl.PackagingOptions.html 209 | # can be necessary to solve conflicts in gradle_dependencies 210 | # please enclose in double quotes 211 | # e.g. 
android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'" 212 | #android.add_packaging_options = 213 | 214 | # (list) Java classes to add as activities to the manifest. 215 | #android.add_activities = com.example.ExampleActivity 216 | 217 | # (str) OUYA Console category. Should be one of GAME or APP 218 | # If you leave this blank, OUYA support will not be enabled 219 | #android.ouya.category = GAME 220 | 221 | # (str) Filename of OUYA Console icon. It must be a 732x412 png image. 222 | #android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png 223 | 224 | # (str) XML file to include as an intent filters in tag 225 | #android.manifest.intent_filters = 226 | 227 | # (str) launchMode to set for the main activity 228 | #android.manifest.launch_mode = standard 229 | 230 | # (list) Android additional libraries to copy into libs/armeabi 231 | #android.add_libs_armeabi = libs/android/*.so 232 | #android.add_libs_armeabi_v7a = libs/android-v7/*.so 233 | #android.add_libs_arm64_v8a = libs/android-v8/*.so 234 | #android.add_libs_x86 = libs/android-x86/*.so 235 | #android.add_libs_mips = libs/android-mips/*.so 236 | 237 | # (bool) Indicate whether the screen should stay on 238 | # Don't forget to add the WAKE_LOCK permission if you set this to True 239 | #android.wakelock = False 240 | 241 | # (list) Android application meta-data to set (key=value format) 242 | #android.meta_data = 243 | 244 | # (list) Android library project to add (will be added in the 245 | # project.properties automatically.) 
246 | #android.library_references = 247 | 248 | # (list) Android shared libraries which will be added to AndroidManifest.xml using tag 249 | #android.uses_library = 250 | 251 | # (str) Android logcat filters to use 252 | #android.logcat_filters = *:S python:D 253 | 254 | # (bool) Android logcat only display log for activity's pid 255 | #android.logcat_pid_only = False 256 | 257 | # (str) Android additional adb arguments 258 | #android.adb_args = -H host.docker.internal 259 | 260 | # (bool) Copy library instead of making a libpymodules.so 261 | #android.copy_libs = 1 262 | 263 | # (str) The Android arch to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64 264 | android.archs = arm64-v8a 265 | 266 | # (int) overrides automatic versionCode computation (used in build.gradle) 267 | # this is not the same as app version and should only be edited if you know what you're doing 268 | # android.numeric_version = 1 269 | 270 | # (bool) enables Android auto backup feature (Android API >=23) 271 | android.allow_backup = True 272 | 273 | # (str) XML file for custom backup rules (see official auto backup documentation) 274 | # android.backup_rules = 275 | 276 | # (str) If you need to insert variables into your AndroidManifest.xml file, 277 | # you can do so with the manifestPlaceholders property. 278 | # This property takes a map of key-value pairs. 
(via a string) 279 | # Usage example : android.manifest_placeholders = [myCustomUrl:\"org.kivy.customurl\"] 280 | # android.manifest_placeholders = [:] 281 | 282 | # (bool) disables the compilation of py to pyc/pyo files when packaging 283 | # android.no-compile-pyo = True 284 | 285 | # 286 | # Python for android (p4a) specific 287 | # 288 | 289 | # (str) python-for-android URL to use for checkout 290 | #p4a.url = 291 | 292 | # (str) python-for-android fork to use in case if p4a.url is not specified, defaults to upstream (kivy) 293 | #p4a.fork = kivy 294 | 295 | # (str) python-for-android branch to use, defaults to master 296 | # p4a.branch = develop 297 | 298 | # (str) python-for-android specific commit to use, defaults to HEAD, must be within p4a.branch 299 | #p4a.commit = HEAD 300 | 301 | # (str) python-for-android git clone directory (if empty, it will be automatically cloned from github) 302 | #p4a.source_dir = 303 | 304 | # (str) The directory in which python-for-android should look for your own build recipes (if any) 305 | #p4a.local_recipes = 306 | 307 | # (str) Filename to the hook for p4a 308 | p4a.hook = camerax_provider/gradle_options.py 309 | 310 | # (str) Bootstrap to use for android builds 311 | # p4a.bootstrap = sdl2 312 | 313 | # (int) port number to specify an explicit --port= p4a argument (eg for bootstrap flask) 314 | #p4a.port = 315 | 316 | # Control passing the --use-setup-py vs --ignore-setup-py to p4a 317 | # "in the future" --use-setup-py is going to be the default behaviour in p4a, right now it is not 318 | # Setting this to false will pass --ignore-setup-py, true will pass --use-setup-py 319 | # NOTE: this is general setuptools integration, having pyproject.toml is enough, no need to generate 320 | # setup.py if you're using Poetry, but you need to add "toml" to source.include_exts. 
321 | #p4a.setup_py = false 322 | 323 | # (str) extra command line arguments to pass when invoking pythonforandroid.toolchain 324 | #p4a.extra_args = 325 | 326 | 327 | # 328 | # iOS specific 329 | # 330 | 331 | # (str) Path to a custom kivy-ios folder 332 | #ios.kivy_ios_dir = ../kivy-ios 333 | # Alternately, specify the URL and branch of a git checkout: 334 | ios.kivy_ios_url = https://github.com/kivy/kivy-ios 335 | ios.kivy_ios_branch = master 336 | 337 | # Another platform dependency: ios-deploy 338 | # Uncomment to use a custom checkout 339 | #ios.ios_deploy_dir = ../ios_deploy 340 | # Or specify URL and branch 341 | ios.ios_deploy_url = https://github.com/phonegap/ios-deploy 342 | ios.ios_deploy_branch = 1.10.0 343 | 344 | # (bool) Whether or not to sign the code 345 | ios.codesign.allowed = false 346 | 347 | # (str) Name of the certificate to use for signing the debug version 348 | # Get a list of available identities: buildozer ios list_identities 349 | #ios.codesign.debug = "iPhone Developer: ()" 350 | 351 | # (str) The development team to use for signing the debug version 352 | #ios.codesign.development_team.debug = 353 | 354 | # (str) Name of the certificate to use for signing the release version 355 | #ios.codesign.release = %(ios.codesign.debug)s 356 | 357 | # (str) The development team to use for signing the release version 358 | #ios.codesign.development_team.release = 359 | 360 | # (str) URL pointing to .ipa file to be installed 361 | # This option should be defined along with `display_image_url` and `full_size_image_url` options. 362 | #ios.manifest.app_url = 363 | 364 | # (str) URL pointing to an icon (57x57px) to be displayed during download 365 | # This option should be defined along with `app_url` and `full_size_image_url` options. 366 | #ios.manifest.display_image_url = 367 | 368 | # (str) URL pointing to a large icon (512x512px) to be used by iTunes 369 | # This option should be defined along with `app_url` and `display_image_url` options. 
370 | #ios.manifest.full_size_image_url = 371 | 372 | 373 | [buildozer] 374 | 375 | # (int) Log level (0 = error only, 1 = info, 2 = debug (with command output)) 376 | log_level = 2 377 | 378 | # (int) Display warning if buildozer is run as root (0 = False, 1 = True) 379 | warn_on_root = 1 380 | 381 | # (str) Path to build artifact storage, absolute or relative to spec file 382 | # build_dir = ./.buildozer 383 | 384 | # (str) Path to build output (i.e. .apk, .ipa) storage 385 | # bin_dir = ./bin 386 | 387 | # ----------------------------------------------------------------------------- 388 | # List as sections 389 | # 390 | # You can define all the "list" as [section:key]. 391 | # Each line will be considered as a option to the list. 392 | # Let's take [app] / source.exclude_patterns. 393 | # Instead of doing: 394 | # 395 | #[app] 396 | #source.exclude_patterns = license,data/audio/*.wav,data/images/original/* 397 | # 398 | # This can be translated into: 399 | # 400 | #[app:source.exclude_patterns] 401 | #license 402 | #data/audio/*.wav 403 | #data/images/original/* 404 | # 405 | 406 | 407 | # ----------------------------------------------------------------------------- 408 | # Profiles 409 | # 410 | # You can extend section / key with a profile 411 | # For example, you want to deploy a demo version of your application without 412 | # HD content. You could first change the title to add "(demo)" in the name 413 | # and extend the excluded directories to remove the HD content. 
414 | # 415 | #[app@demo] 416 | #title = My Application (demo) 417 | # 418 | #[app:source.exclude_patterns@demo] 419 | #images/hd/* 420 | # 421 | # Then, invoke the command line with the "demo" profile: 422 | # 423 | #buildozer --profile demo android debug 424 | -------------------------------------------------------------------------------- /camerax_provider/camerax_src/org/kivy/camerax/CallbackWrapper.java: -------------------------------------------------------------------------------- 1 | package org.kivy.camerax; 2 | 3 | import androidx.camera.core.ImageProxy; 4 | import android.graphics.Rect; 5 | import android.util.Size; 6 | import java.util.Dictionary; 7 | 8 | public interface CallbackWrapper { 9 | public void callback_string(String filepath); 10 | public void callback_image(ImageProxy image); 11 | public void callback_config(Rect croprect, Size resolution, int rotation); 12 | } 13 | 14 | 15 | 16 | -------------------------------------------------------------------------------- /camerax_provider/camerax_src/org/kivy/camerax/CameraX.java: -------------------------------------------------------------------------------- 1 | package org.kivy.camerax; 2 | 3 | //# General 4 | import java.io.File; 5 | import java.util.concurrent.Executor; 6 | import java.util.concurrent.Executors; 7 | import java.util.concurrent.ExecutionException; 8 | 9 | import com.google.common.util.concurrent.ListenableFuture; 10 | 11 | import org.kivy.android.PythonActivity; 12 | 13 | import android.app.Activity; 14 | import android.util.Size; 15 | import android.util.Rational; 16 | import android.graphics.Rect; 17 | import android.graphics.SurfaceTexture; 18 | import android.content.Context; 19 | import android.content.ContentResolver; 20 | import android.content.ContentValues; 21 | import android.net.Uri; 22 | import android.provider.MediaStore; 23 | 24 | //MediaActionSound = import android.media.MediaActionSound; 25 | 26 | import androidx.core.content.ContextCompat; 27 | import 
androidx.camera.core.Camera; 28 | import androidx.camera.core.CameraState; 29 | import androidx.camera.core.Preview; 30 | import androidx.camera.core.AspectRatio; 31 | import androidx.camera.core.ImageCapture; 32 | import androidx.camera.core.VideoCapture; 33 | import androidx.camera.core.FocusMeteringAction; 34 | import androidx.camera.core.MeteringPoint; 35 | import androidx.camera.core.SurfaceOrientedMeteringPointFactory; 36 | import androidx.camera.core.ImageAnalysis; 37 | import androidx.camera.lifecycle.ProcessCameraProvider; 38 | import androidx.camera.core.UseCaseGroup; 39 | import androidx.camera.core.CameraSelector; 40 | import androidx.camera.core.ZoomState; 41 | import androidx.camera.core.ViewPort; 42 | import androidx.lifecycle.ProcessLifecycleOwner; 43 | import androidx.lifecycle.LifecycleOwner; 44 | 45 | // Local Java 46 | import org.kivy.camerax.ImageSavedCallback; 47 | import org.kivy.camerax.VideoSavedCallback; 48 | import org.kivy.camerax.ImageAnalysisAnalyzer; 49 | import org.kivy.camerax.KivySurfaceProvider; 50 | import org.kivy.camerax.CallbackWrapper; 51 | 52 | class CameraX { 53 | 54 | // Initial State 55 | private boolean photo; 56 | private boolean video; 57 | private boolean analysis; 58 | private int lensFacing; 59 | private int [] cameraResolution; 60 | private int aspectRatio; 61 | private CallbackWrapper callbackClass; 62 | private int flashMode; 63 | private int imageOptimize; 64 | private float zoomScaleFront; 65 | private float zoomScaleBack; 66 | private int dataFormat; 67 | 68 | // Connect State 69 | private int viewPortWidth; 70 | private int viewPortHeight; 71 | private Rect cropRect; 72 | private Size resolution; 73 | private int rotation; 74 | 75 | // Persistent run time State 76 | // executor may persist after disconnect for capture to complete 77 | public static Executor executor = Executors.newSingleThreadExecutor(); 78 | 79 | // Run time State 80 | private UseCaseGroup useCaseGroup = null; 81 | private Preview preview = 
null; 82 | private ProcessCameraProvider cameraProvider = null; 83 | private ImageCapture imageCapture = null; 84 | private VideoCapture videoCapture = null; 85 | private Camera camera = null; 86 | private KivySurfaceProvider kivySurfaceProvider = null; 87 | private boolean videoIsRecording = false; 88 | private boolean selectingCamera = false; 89 | private boolean imageIsReady = false; 90 | 91 | public CameraX(boolean photo, 92 | boolean video, 93 | boolean analysis, 94 | String facing, 95 | int[] resolution, 96 | String aspect_ratio, 97 | CallbackWrapper callback_class, 98 | String flash, 99 | String optimize, 100 | float zoom_scale, 101 | String data_format) { 102 | 103 | this.photo = photo; 104 | this.video = video; 105 | this.analysis = analysis; 106 | if (this.analysis == true) { 107 | this.video = false; 108 | } 109 | if (facing.equals("front")) { 110 | this.lensFacing = CameraSelector.LENS_FACING_FRONT; 111 | } else { 112 | this.lensFacing = CameraSelector.LENS_FACING_BACK; 113 | } 114 | cameraResolution = resolution; 115 | 116 | if (aspect_ratio.equals("16:9")) { 117 | this.aspectRatio = AspectRatio.RATIO_16_9; 118 | } else { 119 | this.aspectRatio = AspectRatio.RATIO_4_3; 120 | } 121 | this.callbackClass = callback_class; 122 | 123 | if (flash.equals("on")) { 124 | this.flashMode = ImageCapture.FLASH_MODE_ON; 125 | } else if (flash.equals("auto")) { 126 | this.flashMode = ImageCapture.FLASH_MODE_AUTO; 127 | } else { 128 | this.flashMode = ImageCapture.FLASH_MODE_OFF; 129 | } 130 | 131 | if (optimize.equals("quality")) { 132 | this.imageOptimize = ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY; 133 | } else { 134 | this.imageOptimize = ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY; 135 | } 136 | 137 | this.zoomScaleFront = zoom_scale; 138 | this.zoomScaleBack = zoom_scale; 139 | 140 | if (data_format.equals("rgba")) { 141 | this.dataFormat = ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888; 142 | } else { 143 | this.dataFormat = 
ImageAnalysis.OUTPUT_IMAGE_FORMAT_YUV_420_888; 144 | } 145 | } 146 | 147 | public String providerVersion() { 148 | return "0.0.3"; // Tested in camera4kivy/preview_camerax.py 149 | } 150 | 151 | //############################## 152 | //# Android CameraX 153 | //############################## 154 | 155 | public void setViewPort(int []view_port_size) { 156 | this.viewPortWidth = view_port_size[0]; 157 | this.viewPortHeight = view_port_size[1]; 158 | } 159 | 160 | public void startCamera() { 161 | Context context = PythonActivity.mActivity.getApplicationContext(); 162 | 163 | final ListenableFuture cameraProviderFuture = 164 | ProcessCameraProvider.getInstance(context); 165 | 166 | cameraProviderFuture.addListener(new Runnable() { 167 | @Override 168 | public void run() { 169 | try { 170 | ProcessCameraProvider cameraProvider = 171 | cameraProviderFuture.get(); 172 | CameraX.this.cameraProvider = cameraProvider; 173 | configureCamera(); 174 | } catch (ExecutionException | InterruptedException e) { 175 | // No errors need to be handled for this Future. 176 | // This should never be reached. 
177 | } 178 | } 179 | }, ContextCompat.getMainExecutor(context)); 180 | } 181 | 182 | public void configureCameraMainThread() { 183 | Context context = PythonActivity.mActivity.getApplicationContext(); 184 | Executor executor = ContextCompat.getMainExecutor(context); 185 | executor.execute(new Runnable() { 186 | @Override 187 | public void run() { configureCamera(); } 188 | }); 189 | } 190 | 191 | private void configureCamera() { 192 | Activity mActivity = PythonActivity.mActivity; 193 | int rotation = 194 | mActivity.getWindowManager().getDefaultDisplay().getRotation(); 195 | 196 | // ImageAnalysis 197 | ImageAnalysis imageAnalysis = null; 198 | if (this.analysis) { 199 | int strategy = ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST; 200 | ImageAnalysis.Builder ib = new ImageAnalysis.Builder(); 201 | if (cameraResolution.length != 0) { 202 | ib.setTargetResolution(new Size(cameraResolution[0], 203 | cameraResolution[1])); 204 | } else { 205 | ib.setTargetAspectRatio(this.aspectRatio); 206 | } 207 | ib.setOutputImageFormat(this.dataFormat); 208 | ib.setBackpressureStrategy(strategy); 209 | ib.setTargetRotation(rotation); 210 | imageAnalysis = ib.build(); 211 | ImageAnalysisAnalyzer iaa = 212 | new ImageAnalysisAnalyzer(this.callbackClass); 213 | imageAnalysis.setAnalyzer(executor, iaa); 214 | } 215 | 216 | // ImageCapture 217 | if (this.video) { 218 | VideoCapture.Builder cb = new VideoCapture.Builder(); 219 | if (cameraResolution.length != 0) { 220 | cb.setTargetResolution(new Size(cameraResolution[0], 221 | cameraResolution[1])); 222 | } else { 223 | cb.setTargetAspectRatio(this.aspectRatio); 224 | } 225 | cb.setTargetRotation(rotation); 226 | this.videoCapture = cb.build(); 227 | } 228 | 229 | if (this.photo) { 230 | ImageCapture.Builder cb = new ImageCapture.Builder(); 231 | cb.setFlashMode(this.flashMode); 232 | cb.setCaptureMode(this.imageOptimize); 233 | if (cameraResolution.length != 0) { 234 | cb.setTargetResolution(new Size(cameraResolution[0], 235 | 
cameraResolution[1])); 236 | } else { 237 | cb.setTargetAspectRatio(this.aspectRatio); 238 | } 239 | cb.setTargetRotation(rotation); 240 | this.imageCapture = cb.build(); 241 | } 242 | 243 | // Preview; cast avoids integer division in the aspect test 244 | int aspect = this.aspectRatio; 245 | if (cameraResolution.length != 0) { 246 | if (((double)Math.max(cameraResolution[0],cameraResolution[1]) / 247 | Math.min(cameraResolution[0],cameraResolution[1]) > 1.5)) { 248 | aspect = AspectRatio.RATIO_16_9; 249 | } else { 250 | aspect = AspectRatio.RATIO_4_3; 251 | } 252 | } 253 | this.preview = new Preview.Builder() 254 | .setTargetAspectRatio(aspect) 255 | .build(); 256 | 257 | // ViewPort 258 | Rational vpAspect = new Rational(this.viewPortWidth, 259 | this.viewPortHeight); 260 | ViewPort viewPort = new ViewPort.Builder(vpAspect, rotation).build(); 261 | 262 | // UseCaseGroup 263 | UseCaseGroup.Builder ucgb = new UseCaseGroup.Builder(); 264 | ucgb.setViewPort(viewPort); 265 | ucgb.addUseCase(this.preview); 266 | if (this.video) { 267 | ucgb.addUseCase(this.videoCapture); 268 | } 269 | if (this.photo) { 270 | ucgb.addUseCase(this.imageCapture); 271 | } 272 | if (this.analysis) { 273 | ucgb.addUseCase(imageAnalysis); 274 | } 275 | this.useCaseGroup = ucgb.build(); 276 | 277 | bindPreview(); 278 | } 279 | 280 | private void bindPreview() { 281 | // CameraSelector 282 | CameraSelector cameraSelector; 283 | if (this.lensFacing == CameraSelector.LENS_FACING_BACK) { 284 | cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA; 285 | } else { 286 | cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA; 287 | } 288 | 289 | // Bind 290 | this.cameraProvider.unbindAll(); 291 | LifecycleOwner plo = ProcessLifecycleOwner.get(); 292 | this.camera = this.cameraProvider.bindToLifecycle(plo, 293 | cameraSelector, 294 | this.useCaseGroup); 295 | 296 | // Set touch state 297 | float zoom = zoomScaleFront; 298 | if (this.lensFacing == CameraSelector.LENS_FACING_BACK) { 299 | zoom = zoomScaleBack; 300 | } 301 | 
this.camera.getCameraControl().setLinearZoom(zoom); 302 | focus((float)0.5,(float)0.5); 303 | 304 | this.cropRect = this.preview.getResolutionInfo().getCropRect(); 305 | this.resolution = this.preview.getResolutionInfo().getResolution(); 306 | this.rotation = this.preview.getResolutionInfo().getRotationDegrees(); 307 | 308 | this.callbackClass.callback_config(this.cropRect, 309 | this.resolution, 310 | this.rotation); 311 | 312 | } 313 | 314 | public boolean imageReady() { 315 | if (this.imageIsReady) { 316 | if (this.kivySurfaceProvider != null) { 317 | kivySurfaceProvider.KivySurfaceTextureUpdate(); 318 | } 319 | return true; 320 | } 321 | return false; 322 | } 323 | 324 | public void setTexture(int texture_id, int[] size) { 325 | Context context = PythonActivity.mActivity.getApplicationContext(); 326 | Executor mainExecutor = ContextCompat.getMainExecutor(context); 327 | 328 | this.kivySurfaceProvider = new KivySurfaceProvider(texture_id, 329 | mainExecutor, 330 | size[0], 331 | size[1]); 332 | this.imageIsReady = false; 333 | this.kivySurfaceProvider.surfaceTexture.setOnFrameAvailableListener( 334 | new SurfaceTexture.OnFrameAvailableListener() { 335 | @Override 336 | public void onFrameAvailable(final SurfaceTexture surfaceTexture) { 337 | CameraX.this.imageIsReady = true; 338 | } 339 | }); 340 | } 341 | 342 | public void setSurfaceProvider(boolean enable) { 343 | if (enable) { 344 | this.preview.setSurfaceProvider(this.kivySurfaceProvider); 345 | } else { 346 | this.preview.setSurfaceProvider(null); 347 | } 348 | } 349 | 350 | public void unbind_camera() { 351 | if (this.cameraProvider != null) { 352 | this.cameraProvider.unbindAll(); 353 | this.cameraProvider = null; 354 | this.useCaseGroup = null; 355 | this.preview = null; 356 | this.camera = null; 357 | this.imageCapture = null; 358 | this.videoCapture = null; 359 | this.kivySurfaceProvider = null; 360 | this.videoIsRecording = false; 361 | this.selectingCamera = false; 362 | } 363 | } 364 | 365 | 
//############################## 366 | //# User Events 367 | //############################## 368 | 369 | public void select_camera(String facing) { 370 | if (!this.selectingCamera) { 371 | this.selectingCamera = true; 372 | this.lensFacing = CameraSelector.LENS_FACING_BACK; 373 | if (facing.equals("front")) { 374 | this.lensFacing = CameraSelector.LENS_FACING_FRONT; 375 | } 376 | bindPreview(); 377 | this.selectingCamera = false; 378 | } 379 | } 380 | 381 | 382 | public void focus(float x, float y) { 383 | SurfaceOrientedMeteringPointFactory factory = 384 | new SurfaceOrientedMeteringPointFactory((float)1.0,(float)1.0); 385 | MeteringPoint point = factory.createPoint(x / this.viewPortWidth, 386 | y / this.viewPortHeight); 387 | FocusMeteringAction action = 388 | new FocusMeteringAction.Builder(point).build(); 389 | if (this.camera != null) { 390 | this.camera.getCameraControl().startFocusAndMetering(action); 391 | } 392 | } 393 | 394 | public void zoom(float scale, boolean absolute) { 395 | if (this.camera != null) { 396 | ZoomState zs = 397 | this.camera.getCameraInfo().getZoomState().getValue(); 398 | float newScale = scale; 399 | if (!absolute) { 400 | newScale = zs.getZoomRatio() * scale; 401 | } 402 | newScale = Math.min(newScale,zs.getMaxZoomRatio()); 403 | newScale = Math.max(newScale,zs.getMinZoomRatio()); 404 | this.camera.getCameraControl().setZoomRatio(newScale); 405 | zs = this.camera.getCameraInfo().getZoomState().getValue(); 406 | if (this.lensFacing == CameraSelector.LENS_FACING_BACK) { 407 | this.zoomScaleBack = zs.getLinearZoom(); 408 | } else { 409 | this.zoomScaleFront = zs.getLinearZoom(); 410 | } 411 | } 412 | } 413 | 414 | public String flash(String mode) { 415 | if (this.photo) { 416 | this.flashMode = ImageCapture.FLASH_MODE_OFF; 417 | if (mode.equals("on")) { 418 | this.flashMode = ImageCapture.FLASH_MODE_ON; 419 | } else if (mode.equals("auto")) { 420 | this.flashMode = ImageCapture.FLASH_MODE_AUTO; 421 | } 422 | 
this.imageCapture.setFlashMode(this.flashMode); 423 | } 424 | return mode; 425 | } 426 | 427 | public String torch(String mode) { 428 | if (this.camera != null && this.camera.getCameraInfo().hasFlashUnit()) { 429 | if (mode.equals("on")) { 430 | this.camera.getCameraControl().enableTorch(true); 431 | } else { 432 | mode = "off"; 433 | this.camera.getCameraControl().enableTorch(false); 434 | } 435 | } else { 436 | mode = "off"; 437 | } 438 | return mode; 439 | } 440 | 441 | public void stop_capture_video() { 442 | if (this.video && this.videoIsRecording) { 443 | this.videoCapture.stopRecording(); 444 | this.videoIsRecording = false; 445 | } 446 | } 447 | 448 | public void capture_video(String location, String filename, 449 | boolean fileStorage) { 450 | if (this.video && !this.videoIsRecording) { 451 | this.videoIsRecording = true; 452 | VideoCapture.OutputFileOptions vcf; 453 | if (fileStorage) { 454 | String filePath = location + "/" + filename; 455 | File videoFile = new File(filePath); 456 | vcf = new VideoCapture.OutputFileOptions.Builder(videoFile) 457 | .build(); 458 | } else { 459 | ContentResolver cr = 460 | PythonActivity.mActivity.getContentResolver(); 461 | Uri collection = MediaStore.Video.Media.EXTERNAL_CONTENT_URI; 462 | ContentValues cv = new ContentValues(); 463 | cv.put(MediaStore.MediaColumns.DISPLAY_NAME, filename); 464 | cv.put(MediaStore.MediaColumns.MIME_TYPE, "video/mp4"); 465 | cv.put(MediaStore.MediaColumns.RELATIVE_PATH, location); 466 | vcf = new VideoCapture.OutputFileOptions.Builder(cr, 467 | collection, 468 | cv).build(); 469 | } 470 | VideoSavedCallback vsc = new VideoSavedCallback(this.callbackClass); 471 | this.videoCapture.startRecording(vcf,executor,vsc); 472 | } 473 | } 474 | 475 | public void capture_photo(String location, String filename, 476 | boolean fileStorage) { 477 | if (this.photo) { 478 | ImageCapture.OutputFileOptions icf; 479 | if (fileStorage) { 480 | String filePath = location + "/" + filename; 481 | File photoFile 
= new File(filePath); 482 | icf = new ImageCapture.OutputFileOptions.Builder(photoFile) 483 | .build(); 484 | } else { 485 | ContentResolver cr = PythonActivity.mActivity.getContentResolver(); 486 | Uri collection = MediaStore.Images.Media.EXTERNAL_CONTENT_URI; 487 | ContentValues cv = new ContentValues(); 488 | cv.put(MediaStore.MediaColumns.DISPLAY_NAME, filename); 489 | cv.put(MediaStore.MediaColumns.MIME_TYPE, "image/jpeg"); 490 | cv.put(MediaStore.MediaColumns.RELATIVE_PATH, location); 491 | icf = new ImageCapture.OutputFileOptions.Builder(cr, 492 | collection, 493 | cv).build(); 494 | } 495 | ImageSavedCallback isc = new ImageSavedCallback(this.callbackClass); 496 | this.imageCapture.takePicture(icf,executor,isc); 497 | } 498 | } 499 | } 500 | 501 | 502 | -------------------------------------------------------------------------------- /camerax_provider/camerax_src/org/kivy/camerax/ImageAnalysisAnalyzer.java: -------------------------------------------------------------------------------- 1 | package org.kivy.camerax; 2 | 3 | import androidx.camera.core.ImageAnalysis; 4 | import androidx.camera.core.ImageProxy; 5 | import org.kivy.camerax.CallbackWrapper; 6 | 7 | public class ImageAnalysisAnalyzer implements ImageAnalysis.Analyzer { 8 | 9 | private CallbackWrapper callback_wrapper; 10 | 11 | public ImageAnalysisAnalyzer(CallbackWrapper callback_wrapper) { 12 | this.callback_wrapper = callback_wrapper; 13 | } 14 | 15 | public void analyze(ImageProxy image) { 16 | this.callback_wrapper.callback_image(image); 17 | } 18 | } 19 | -------------------------------------------------------------------------------- /camerax_provider/camerax_src/org/kivy/camerax/ImageProxyOps.java: -------------------------------------------------------------------------------- 1 | package org.kivy.camerax; 2 | 3 | import androidx.camera.core.ImageProxy; 4 | import android.graphics.ImageFormat; 5 | import android.graphics.PixelFormat; 6 | import java.lang.IllegalArgumentException; 7 | 
import java.nio.ByteBuffer; 8 | 9 | public class ImageProxyOps { 10 | 11 | private byte[] bytes; 12 | 13 | public ImageProxyOps() { 14 | this.bytes = new byte[0]; 15 | } 16 | 17 | public byte[] copyYUVtoBytes(ImageProxy image) { 18 | if (image.getFormat() != ImageFormat.YUV_420_888) { 19 | throw new IllegalArgumentException("Invalid image format"); 20 | } 21 | 22 | ByteBuffer yBuffer = image.getPlanes()[0].getBuffer(); 23 | ByteBuffer uBuffer = image.getPlanes()[1].getBuffer(); 24 | ByteBuffer vBuffer = image.getPlanes()[2].getBuffer(); 25 | 26 | int ySize = yBuffer.remaining(); 27 | int uSize = uBuffer.remaining(); 28 | int vSize = vBuffer.remaining(); 29 | 30 | if (this.bytes.length != ySize + uSize + vSize) { 31 | this.bytes = new byte[ySize + uSize + vSize]; 32 | } 33 | 34 | // U and V are swapped 35 | yBuffer.get(this.bytes, 0, ySize); 36 | vBuffer.get(this.bytes, ySize, vSize); 37 | uBuffer.get(this.bytes, ySize + vSize, uSize); 38 | 39 | return this.bytes; 40 | } 41 | 42 | public byte[] copyRGBAtoBytes(ImageProxy image) { 43 | if (image.getFormat() != PixelFormat.RGBA_8888) { 44 | throw new IllegalArgumentException("Invalid image format"); 45 | } 46 | 47 | // RGBA bytes are in plane zero 48 | ByteBuffer buffer = image.getPlanes()[0].getBuffer(); 49 | 50 | int size = buffer.remaining(); 51 | 52 | if (this.bytes.length != size ) { 53 | this.bytes = new byte[size]; 54 | } 55 | 56 | buffer.get(this.bytes, 0, size); 57 | 58 | return this.bytes; 59 | } 60 | } 61 | -------------------------------------------------------------------------------- /camerax_provider/camerax_src/org/kivy/camerax/ImageSavedCallback.java: -------------------------------------------------------------------------------- 1 | package org.kivy.camerax; 2 | 3 | import androidx.camera.core.ImageCapture.OnImageSavedCallback; 4 | import androidx.camera.core.ImageCapture.OutputFileResults; 5 | import androidx.camera.core.ImageCaptureException; 6 | import android.net.Uri; 7 | import 
android.content.Context; 8 | import android.database.Cursor; 9 | import android.provider.MediaStore.MediaColumns; 10 | import org.kivy.camerax.CallbackWrapper; 11 | import org.kivy.android.PythonActivity; 12 | 13 | public class ImageSavedCallback implements OnImageSavedCallback { 14 | 15 | private CallbackWrapper callback_wrapper; 16 | 17 | public ImageSavedCallback(CallbackWrapper callback_wrapper) { 18 | this.callback_wrapper = callback_wrapper; 19 | } 20 | 21 | public void onImageSaved(OutputFileResults outputFileResults){ 22 | Uri saveuri = outputFileResults.getSavedUri(); 23 | String result = ""; 24 | if (saveuri != null) { 25 | if (saveuri.getScheme().equals("content")) { 26 | Context context = 27 | PythonActivity.mActivity.getApplicationContext(); 28 | Cursor cursor = 29 | context.getContentResolver().query(saveuri, null, 30 | null, null, null); 31 | String dn = MediaColumns.DISPLAY_NAME; 32 | String rp = MediaColumns.RELATIVE_PATH; 33 | int nameIndex = cursor.getColumnIndex(dn); 34 | int pathIndex = cursor.getColumnIndex(rp); 35 | cursor.moveToFirst(); 36 | String file_name = cursor.getString(nameIndex); 37 | String file_path = cursor.getString(pathIndex); 38 | cursor.close(); 39 | result = file_path + file_name; 40 | } 41 | } 42 | this.callback_wrapper.callback_string(result); 43 | } 44 | 45 | public void onError(ImageCaptureException exception) { 46 | //int id = exception.getImageCaptureError(); 47 | String msg = "Image Capture terminated by early camera disconnect."; 48 | this.callback_wrapper.callback_string(msg); 49 | } 50 | } 51 | -------------------------------------------------------------------------------- /camerax_provider/camerax_src/org/kivy/camerax/KivySurfaceProvider.java: -------------------------------------------------------------------------------- 1 | package org.kivy.camerax; 2 | 3 | import java.util.concurrent.Executor; 4 | import androidx.camera.core.Preview; 5 | import androidx.camera.core.SurfaceRequest; 6 | import 
android.graphics.SurfaceTexture; 7 | import android.view.Surface; 8 | 9 | // Ref https://developer.android.com/reference/androidx/camera/core/Preview.SurfaceProvider?hl=zh-cn#onSurfaceRequested(androidx.camera.core.SurfaceRequest) 10 | 11 | class KivySurfaceProvider implements Preview.SurfaceProvider { 12 | // This executor must have also been used with Preview.setSurfaceProvider() 13 | // to ensure onSurfaceRequested() is called on our GL thread. 14 | Executor mGlExecutor; 15 | Surface surface; 16 | public SurfaceTexture surfaceTexture; 17 | 18 | public KivySurfaceProvider(int id, Executor self_te, 19 | int width, int height) { 20 | surfaceTexture = new SurfaceTexture(id); 21 | surfaceTexture.setDefaultBufferSize(width, height); 22 | 23 | surface = new Surface(surfaceTexture); 24 | mGlExecutor = self_te; 25 | } 26 | 27 | public void KivySurfaceTextureUpdate() { 28 | surfaceTexture.updateTexImage(); 29 | } 30 | 31 | @Override 32 | public void onSurfaceRequested(SurfaceRequest request) { 33 | request.provideSurface(surface, mGlExecutor, (result) -> {}); 34 | } 35 | } 36 | 37 | -------------------------------------------------------------------------------- /camerax_provider/camerax_src/org/kivy/camerax/VideoSavedCallback.java: -------------------------------------------------------------------------------- 1 | package org.kivy.camerax; 2 | 3 | import androidx.camera.core.VideoCapture.OnVideoSavedCallback; 4 | import androidx.camera.core.VideoCapture.OutputFileResults; 5 | import android.net.Uri; 6 | import android.content.Context; 7 | import android.database.Cursor; 8 | import android.provider.MediaStore.MediaColumns; 9 | import org.kivy.camerax.CallbackWrapper; 10 | import org.kivy.android.PythonActivity; 11 | 12 | public class VideoSavedCallback implements OnVideoSavedCallback { 13 | 14 | private CallbackWrapper callback_wrapper; 15 | 16 | public VideoSavedCallback(CallbackWrapper callback_wrapper) { 17 | this.callback_wrapper = callback_wrapper; 18 | } 19 | 20 | 
public void onVideoSaved(OutputFileResults outputFileResults){ 21 | Uri saveuri = outputFileResults.getSavedUri(); 22 | String result = ""; 23 | if (saveuri != null) { 24 | if (saveuri.getScheme().equals("content")) { 25 | Context context = 26 | PythonActivity.mActivity.getApplicationContext(); 27 | Cursor cursor = 28 | context.getContentResolver().query(saveuri, null, 29 | null, null, null); 30 | String dn = MediaColumns.DISPLAY_NAME; 31 | String rp = MediaColumns.RELATIVE_PATH; 32 | int nameIndex = cursor.getColumnIndex(dn); 33 | int pathIndex = cursor.getColumnIndex(rp); 34 | cursor.moveToFirst(); 35 | String file_name = cursor.getString(nameIndex); 36 | String file_path = cursor.getString(pathIndex); 37 | cursor.close(); 38 | result = file_path + file_name; 39 | } 40 | } 41 | this.callback_wrapper.callback_string(result); 42 | } 43 | 44 | public void onError(int videoCaptureError, String message, Throwable cause){ 45 | String msg = "Video Capture terminated by early camera disconnect."; 46 | this.callback_wrapper.callback_string(msg); 47 | } 48 | } 49 | -------------------------------------------------------------------------------- /camerax_provider/gradle_options.py: -------------------------------------------------------------------------------- 1 | # 2 | # Add gradle options for CameraX 3 | # 4 | from pythonforandroid.recipe import info 5 | from os.path import dirname, join, exists 6 | 7 | def before_apk_build(toolchain): 8 | unprocessed_args = toolchain.args.unknown_args 9 | 10 | if '--enable-androidx' not in unprocessed_args: 11 | unprocessed_args.append('--enable-androidx') 12 | info('Camerax Provider: Add android.enable_androidx = True') 13 | 14 | if 'CAMERA' not in unprocessed_args: 15 | unprocessed_args.append('--permission') 16 | unprocessed_args.append('CAMERA') 17 | info('Camerax Provider: Add android.permissions = CAMERA') 18 | 19 | if 'RECORD_AUDIO' not in unprocessed_args: 20 | unprocessed_args.append('--permission') 21 | 
unprocessed_args.append('RECORD_AUDIO') 22 | info('Camerax Provider: Add android.permissions = RECORD_AUDIO') 23 | 24 | # Check the current versions of these camera Gradle dependencies here: 25 | #https://developer.android.com/jetpack/androidx/releases/camera#dependencies 26 | # and the other packages at https://mvnrepository.com/ 27 | required_depends = ['androidx.camera:camera-core:1.2.1', 28 | 'androidx.camera:camera-camera2:1.2.1', 29 | 'androidx.camera:camera-lifecycle:1.2.1', 30 | 'androidx.lifecycle:lifecycle-process:2.5.1', 31 | 'androidx.core:core:1.9.0'] 32 | existing_depends = [] 33 | read_next = False 34 | for ua in unprocessed_args: 35 | if read_next: 36 | existing_depends.append(ua) 37 | read_next = False 38 | if ua == '--depend': 39 | read_next = True 40 | 41 | message = False 42 | for rd in required_depends: 43 | name, version = rd.rsplit(':',1) 44 | found = False 45 | for ed in existing_depends: 46 | if name in ed: 47 | found = True 48 | break 49 | if not found: 50 | unprocessed_args.append('--depend') 51 | unprocessed_args.append('{}:{}'.format(name,version)) 52 | message = True 53 | if message: 54 | info('Camerax Provider: Add android.gradle_dependencies required ' +\ 55 | 'for CameraX') 56 | 57 | # Add the Java source 58 | camerax_java = join(dirname(__file__), 'camerax_src') 59 | if exists(camerax_java): 60 | unprocessed_args.append('--add-source') 61 | unprocessed_args.append(camerax_java) 62 | info('Camerax Provider: Add android.add_src = ' +\ 63 | './camerax_provider/camerax_src') 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | -------------------------------------------------------------------------------- /classifyobject.py: -------------------------------------------------------------------------------- 1 | # This example demonstrates Object Detection using Tensorflow Lite 2 | # It is based on: 3 | # https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection 4 | # 5 | # cv2 references in this file and
object_detection/object_detector.py from 6 | # the example above have been replaced with 'self.auto_analyze_resolution' 7 | 8 | import time 9 | from kivy.clock import mainthread 10 | from kivy.graphics import Color, Line, Rectangle 11 | from kivy.core.text import Label as CoreLabel 12 | from kivy.metrics import dp, sp 13 | from kivy.utils import platform 14 | import numpy as np 15 | from camera4kivy import Preview 16 | from object_detection.object_detector import ObjectDetector 17 | from object_detection.object_detector import ObjectDetectorOptions 18 | 19 | class ClassifyObject(Preview): 20 | 21 | def __init__(self, **kwargs): 22 | super().__init__(**kwargs) 23 | self.classified = [] 24 | num_threads = 4 25 | enable_edgetpu = False ## Change this for a Coral Accelerator 26 | if platform == 'android': 27 | model = 'object_detection/model.tflite' 28 | elif enable_edgetpu: 29 | model = 'object_detection/efficientdet_lite0_edgetpu.tflite' 30 | else: 31 | model = 'object_detection/efficientdet_lite0.tflite' 32 | options = ObjectDetectorOptions( 33 | num_threads = num_threads, 34 | score_threshold = 0.5, 35 | max_results = 3, 36 | enable_edgetpu = enable_edgetpu) 37 | self.detector = ObjectDetector(model_path=model, options=options) 38 | # Get the required analyze resolution from the detector, a 2 element list.
39 | # as a consequence, scale will be a 2 element list 40 | self.auto_analyze_resolution = self.detector._input_size 41 | self.start_time = time.time() 42 | 43 | #################################### 44 | # Analyze a Frame - NOT on UI Thread 45 | #################################### 46 | 47 | def analyze_pixels_callback(self, pixels, image_size, image_pos, 48 | image_scale, mirror): 49 | # Convert pixels to numpy rgb 50 | rgba = np.frombuffer(pixels, np.uint8).reshape(image_size[1], 51 | image_size[0], 4) 52 | rgb = rgba[:,:,:3] 53 | # detect 54 | detections = self.detector.detect(rgb) 55 | now = time.time() 56 | fps = 0 57 | if now - self.start_time: 58 | fps = 1 / (now - self.start_time) 59 | self.start_time = now 60 | found = [] 61 | for detection in detections: 62 | # Bounding box, pixel coordinates 63 | x = detection.bounding_box.left 64 | y = detection.bounding_box.top 65 | w = detection.bounding_box.right - x 66 | h = detection.bounding_box.bottom - y 67 | 68 | # Map tflite style coordinates to Kivy Preview coordinates 69 | y = max(image_size[1] - y - h, 0) 70 | if mirror: 71 | x = max(image_size[0] - x - w, 0) 72 | 73 | # Map Analysis Image coordinates to Preview coordinates 74 | # image_scale is a list because we used self.auto_analyze_resolution 75 | x = round(x * image_scale[0] + image_pos[0]) 76 | y = round(y * image_scale[1] + image_pos[1]) 77 | w = round(w * image_scale[0]) 78 | h = round(h * image_scale[1]) 79 | 80 | # Category text for canvas 81 | category = detection.categories[0] 82 | class_name = category.label 83 | probability = round(category.score, 2) 84 | result_text = class_name +\ 85 | ' (Probability: {:.2f} FPS: {:.1f} )'.format(probability,fps) 86 | label = CoreLabel(font_size = sp(20)) 87 | label.text = result_text 88 | label.refresh() 89 | 90 | # Thread safe result 91 | found.append({'x':x, 'y':y, 'w':w, 'h':h, 't': label.texture}) 92 | self.make_thread_safe(list(found)) ## A COPY of the list 93 | 94 | @mainthread 95 | def 
make_thread_safe(self, found): 96 | if self.camera_connected: 97 | self.classified = found 98 | else: 99 | # Clear local state so no thread related ghosts on re-connect 100 | self.classified = [] 101 | 102 | ################################ 103 | # Canvas Update - on UI Thread 104 | ################################ 105 | 106 | def canvas_instructions_callback(self, texture, tex_size, tex_pos): 107 | # Add the analysis annotations 108 | Color(0,1,0,1) 109 | for r in self.classified: 110 | # Draw box 111 | Line(rectangle=(r['x'], r['y'], r['w'], r['h']), width = dp(1.5)) 112 | # Draw text 113 | Rectangle(size = r['t'].size, 114 | pos = [r['x'] + dp(10), r['y'] + dp(10)], 115 | texture = r['t']) 116 | 117 | -------------------------------------------------------------------------------- /example.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/example.jpg -------------------------------------------------------------------------------- /icons/camera-flip-outline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/camera-flip-outline.png -------------------------------------------------------------------------------- /icons/camera_red.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/camera_red.png -------------------------------------------------------------------------------- /icons/camera_white.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/camera_white.png 
-------------------------------------------------------------------------------- /icons/cellphone-screenshot_red.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/cellphone-screenshot_red.png -------------------------------------------------------------------------------- /icons/cellphone-screenshot_white.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/cellphone-screenshot_white.png -------------------------------------------------------------------------------- /icons/flash-auto.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/flash-auto.png -------------------------------------------------------------------------------- /icons/flash-off.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/flash-off.png -------------------------------------------------------------------------------- /icons/flash.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/flash.png -------------------------------------------------------------------------------- /icons/monitor-screenshot_red.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/monitor-screenshot_red.png 
-------------------------------------------------------------------------------- /icons/monitor-screenshot_white.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/monitor-screenshot_white.png -------------------------------------------------------------------------------- /icons/video-off.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/video-off.png -------------------------------------------------------------------------------- /icons/video_red.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/video_red.png -------------------------------------------------------------------------------- /icons/video_white.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/icons/video_white.png -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | from kivy.app import App 2 | from kivy.core.window import Window 3 | from kivy.uix.boxlayout import BoxLayout 4 | from kivy.utils import platform 5 | from kivy.clock import Clock 6 | from applayout import AppLayout 7 | from android_permissions import AndroidPermissions 8 | 9 | if platform == 'android': 10 | from jnius import autoclass 11 | from android.runnable import run_on_ui_thread 12 | from android import mActivity 13 | View = autoclass('android.view.View') 14 | 15 | @run_on_ui_thread 16 | def 
hide_landscape_status_bar(instance, width, height): 17 | # width,height gives false layout events, on pinch/spread 18 | # so use Window.width and Window.height 19 | if Window.width > Window.height: 20 | # Hide status bar 21 | option = View.SYSTEM_UI_FLAG_FULLSCREEN 22 | else: 23 | # Show status bar 24 | option = View.SYSTEM_UI_FLAG_VISIBLE 25 | mActivity.getWindow().getDecorView().setSystemUiVisibility(option) 26 | elif platform != 'ios': 27 | # Dispose of that nasty red dot, required for gestures4kivy. 28 | from kivy.config import Config 29 | Config.set('input', 'mouse', 'mouse, disable_multitouch') 30 | 31 | class MyApp(App): 32 | 33 | def build(self): 34 | self.started = False 35 | if platform == 'android': 36 | Window.bind(on_resize=hide_landscape_status_bar) 37 | self.layout = AppLayout() 38 | return self.layout 39 | 40 | def on_start(self): 41 | self.dont_gc = AndroidPermissions(self.start_app) 42 | 43 | def start_app(self): 44 | self.dont_gc = None 45 | # Can't connect camera till after on_start() 46 | Clock.schedule_once(self.connect_camera) 47 | 48 | def connect_camera(self,dt): 49 | self.layout.detect.connect_camera(enable_analyze_pixels = True, 50 | enable_video = False) 51 | 52 | def on_stop(self): 53 | self.layout.detect.disconnect_camera() 54 | 55 | MyApp().run() 56 | 57 | -------------------------------------------------------------------------------- /object_detection/efficientdet_lite0.tflite: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/object_detection/efficientdet_lite0.tflite -------------------------------------------------------------------------------- /object_detection/efficientdet_lite0_edgetpu.tflite: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/object_detection/efficientdet_lite0_edgetpu.tflite -------------------------------------------------------------------------------- /object_detection/model.tflite: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Android-for-Python/c4k_tflite_example/3fb41c47d86c981e0415d9e72f9704ed9961ec1a/object_detection/model.tflite -------------------------------------------------------------------------------- /object_detection/object_detector.py: -------------------------------------------------------------------------------- 1 | # Copyright 2021 The TensorFlow Authors. All Rights Reserved. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | """A module to run object detection with a TensorFlow Lite model.""" 15 | 16 | import platform 17 | from typing import List, NamedTuple 18 | import zipfile 19 | 20 | #import cv2 ## remove cv2 ## 21 | import numpy as np 22 | 23 | # pylint: disable=g-import-not-at-top 24 | try: 25 | # Import TFLite interpreter from tflite_runtime package if it's available. 26 | from tflite_runtime.interpreter import Interpreter 27 | from tflite_runtime.interpreter import load_delegate 28 | except ImportError: 29 | # If not, fallback to use the TFLite interpreter from the full TF package. 
30 | import tensorflow as tf 31 | 32 | Interpreter = tf.lite.Interpreter 33 | load_delegate = tf.lite.experimental.load_delegate 34 | 35 | # pylint: enable=g-import-not-at-top 36 | 37 | 38 | class ObjectDetectorOptions(NamedTuple): 39 | """A config to initialize an object detector.""" 40 | 41 | enable_edgetpu: bool = False 42 | """Enable the model to run on EdgeTPU.""" 43 | 44 | label_allow_list: List[str] = None 45 | """The optional allow list of labels.""" 46 | 47 | label_deny_list: List[str] = None 48 | """The optional deny list of labels.""" 49 | 50 | max_results: int = -1 51 | """The maximum number of top-scored detection results to return.""" 52 | 53 | num_threads: int = 1 54 | """The number of CPU threads to be used.""" 55 | 56 | score_threshold: float = 0.0 57 | """The score threshold of detection results to return.""" 58 | 59 | 60 | class Rect(NamedTuple): 61 | """A rectangle in 2D space.""" 62 | left: float 63 | top: float 64 | right: float 65 | bottom: float 66 | 67 | 68 | class Category(NamedTuple): 69 | """A result of a classification task.""" 70 | label: str 71 | score: float 72 | index: int 73 | 74 | 75 | class Detection(NamedTuple): 76 | """A detected object as the result of an ObjectDetector.""" 77 | bounding_box: Rect 78 | categories: List[Category] 79 | 80 | 81 | def edgetpu_lib_name(): 82 | """Returns the library name of EdgeTPU in the current platform.""" 83 | return { 84 | 'Darwin': 'libedgetpu.1.dylib', 85 | 'Linux': 'libedgetpu.so.1', 86 | 'Windows': 'edgetpu.dll', 87 | }.get(platform.system(), None) 88 | 89 | 90 | class ObjectDetector: 91 | """A wrapper class for a TFLite object detection model.""" 92 | 93 | _mean = 127.5 94 | """Default mean normalization parameter for float model.""" 95 | _std = 127.5 96 | """Default std normalization parameter for float model.""" 97 | 98 | _OUTPUT_LOCATION_NAME = 'location' 99 | _OUTPUT_CATEGORY_NAME = 'category' 100 | _OUTPUT_SCORE_NAME = 'score' 101 | _OUTPUT_NUMBER_NAME = 'number of detections' 102 | 
103 | def __init__( 104 | self, 105 | model_path: str, 106 | options: ObjectDetectorOptions = ObjectDetectorOptions() 107 | ) -> None: 108 | """Initialize a TFLite object detection model. 109 | 110 | Args: 111 | model_path: Path to the TFLite model. 112 | options: The config to initialize an object detector. (Optional) 113 | 114 | Raises: 115 | ValueError: If the TFLite model is invalid. 116 | OSError: If the current OS isn't supported by EdgeTPU. 117 | """ 118 | 119 | # Load label list from metadata. 120 | try: 121 | with zipfile.ZipFile(model_path) as model_with_metadata: 122 | if not model_with_metadata.namelist(): 123 | raise ValueError('Invalid TFLite model: no label file found.') 124 | 125 | file_name = model_with_metadata.namelist()[0] 126 | with model_with_metadata.open(file_name) as label_file: 127 | label_list = label_file.read().splitlines() 128 | self._label_list = [label.decode('ascii') for label in label_list] 129 | except zipfile.BadZipFile: 130 | print( 131 | 'ERROR: Please use models trained with Model Maker or downloaded from TensorFlow Hub.' 132 | ) 133 | raise ValueError('Invalid TFLite model: no metadata found.') 134 | 135 | # Initialize TFLite model. 136 | if options.enable_edgetpu: 137 | if edgetpu_lib_name() is None: 138 | raise OSError("The current OS isn't supported by Coral EdgeTPU.") 139 | interpreter = Interpreter( 140 | model_path=model_path, 141 | experimental_delegates=[load_delegate(edgetpu_lib_name())], 142 | num_threads=options.num_threads) 143 | else: 144 | interpreter = Interpreter( 145 | model_path=model_path, num_threads=options.num_threads) 146 | 147 | interpreter.allocate_tensors() 148 | input_detail = interpreter.get_input_details()[0] 149 | 150 | # From TensorFlow 2.6, the order of the outputs became undefined. 151 | # Therefore we need to sort the tensor indices of TFLite outputs and to know 152 | # exactly the meaning of each output tensor. 
For example, if 153 | # output indices are [601, 599, 598, 600], tensor names and indices aligned 154 | # are: 155 | # - location: 598 156 | # - category: 599 157 | # - score: 600 158 | # - detection_count: 601 159 | # because of the op's ports of TFLITE_DETECTION_POST_PROCESS 160 | # (https://github.com/tensorflow/tensorflow/blob/a4fe268ea084e7d323133ed7b986e0ae259a2bc7/tensorflow/lite/kernels/detection_postprocess.cc#L47-L50). 161 | sorted_output_indices = sorted( 162 | [output['index'] for output in interpreter.get_output_details()]) 163 | self._output_indices = { 164 | self._OUTPUT_LOCATION_NAME: sorted_output_indices[0], 165 | self._OUTPUT_CATEGORY_NAME: sorted_output_indices[1], 166 | self._OUTPUT_SCORE_NAME: sorted_output_indices[2], 167 | self._OUTPUT_NUMBER_NAME: sorted_output_indices[3], 168 | } 169 | 170 | self._input_size = input_detail['shape'][2], input_detail['shape'][1] 171 | self._is_quantized_input = input_detail['dtype'] == np.uint8 172 | self._interpreter = interpreter 173 | self._options = options 174 | 175 | def detect(self, input_image: np.ndarray) -> List[Detection]: 176 | """Run detection on an input image. 177 | 178 | Args: 179 | input_image: A [height, width, 3] RGB image. In this example the image 180 | must already match the model's input size; the cv2 resize was removed 181 | and the caller sizes frames via self.auto_analyze_resolution. 182 | 183 | Returns: 184 | A list of Detection objects. 
185 | """ 186 | image_height, image_width, _ = input_image.shape 187 | 188 | input_tensor = self._preprocess(input_image) 189 | 190 | self._set_input_tensor(input_tensor) 191 | self._interpreter.invoke() 192 | 193 | # Get all output details 194 | boxes = self._get_output_tensor(self._OUTPUT_LOCATION_NAME) 195 | classes = self._get_output_tensor(self._OUTPUT_CATEGORY_NAME) 196 | scores = self._get_output_tensor(self._OUTPUT_SCORE_NAME) 197 | count = int(self._get_output_tensor(self._OUTPUT_NUMBER_NAME)) 198 | 199 | return self._postprocess(boxes, classes, scores, count, image_width, 200 | image_height) 201 | 202 | def _preprocess(self, input_image: np.ndarray) -> np.ndarray: 203 | """Preprocess the input image as required by the TFLite model.""" 204 | 205 | # Resize the input 206 | ## remove cv2 ## 207 | #input_tensor = cv2.resize(input_image, self._input_size) 208 | input_tensor = input_image 209 | 210 | # Normalize the input if it's a float model (aka. not quantized) 211 | if not self._is_quantized_input: 212 | input_tensor = (np.float32(input_tensor) - self._mean) / self._std 213 | 214 | # Add batch dimension 215 | input_tensor = np.expand_dims(input_tensor, axis=0) 216 | 217 | return input_tensor 218 | 219 | def _set_input_tensor(self, image): 220 | """Sets the input tensor.""" 221 | tensor_index = self._interpreter.get_input_details()[0]['index'] 222 | input_tensor = self._interpreter.tensor(tensor_index)()[0] 223 | input_tensor[:, :] = image 224 | 225 | def _get_output_tensor(self, name): 226 | """Returns the output tensor at the given index.""" 227 | output_index = self._output_indices[name] 228 | tensor = np.squeeze(self._interpreter.get_tensor(output_index)) 229 | return tensor 230 | 231 | def _postprocess(self, boxes: np.ndarray, classes: np.ndarray, 232 | scores: np.ndarray, count: int, image_width: int, 233 | image_height: int) -> List[Detection]: 234 | """Post-process the output of TFLite model into a list of Detection objects. 
235 | 236 | Args: 237 | boxes: Bounding boxes of detected objects from the TFLite model. 238 | classes: Class index of the detected objects from the TFLite model. 239 | scores: Confidence scores of the detected objects from the TFLite model. 240 | count: Number of detected objects from the TFLite model. 241 | image_width: Width of the input image. 242 | image_height: Height of the input image. 243 | 244 | Returns: 245 | A list of Detection objects detected by the TFLite model. 246 | """ 247 | results = [] 248 | 249 | # Parse the model output into a list of Detection entities. 250 | for i in range(count): 251 | if scores[i] >= self._options.score_threshold: 252 | y_min, x_min, y_max, x_max = boxes[i] 253 | bounding_box = Rect( 254 | top=int(y_min * image_height), 255 | left=int(x_min * image_width), 256 | bottom=int(y_max * image_height), 257 | right=int(x_max * image_width)) 258 | class_id = int(classes[i]) 259 | category = Category( 260 | score=scores[i], 261 | label=self._label_list[class_id], # 0 is reserved for background 262 | index=class_id) 263 | result = Detection(bounding_box=bounding_box, categories=[category]) 264 | results.append(result) 265 | 266 | # Sort detection results by score, descending 267 | sorted_results = sorted( 268 | results, 269 | key=lambda detection: detection.categories[0].score, 270 | reverse=True) 271 | 272 | # Filter out detections in deny list 273 | filtered_results = sorted_results 274 | if self._options.label_deny_list is not None: 275 | filtered_results = list( 276 | filter( 277 | lambda detection: detection.categories[0].label not in self. 278 | _options.label_deny_list, filtered_results)) 279 | 280 | # Keep only detections in allow list 281 | if self._options.label_allow_list is not None: 282 | filtered_results = list( 283 | filter( 284 | lambda detection: detection.categories[0].label in self._options. 285 | label_allow_list, filtered_results)) 286 | 287 | # Return at most max_results detections. 
288 | if self._options.max_results > 0: 289 | result_count = min(len(filtered_results), self._options.max_results) 290 | filtered_results = filtered_results[:result_count] 291 | 292 | return filtered_results 293 | -------------------------------------------------------------------------------- /tfl_2_12_not_arm7/tflite-runtime/CMakeLists.patch: -------------------------------------------------------------------------------- 1 | --- tflite-runtime/tensorflow/lite/CMakeLists.txt 2023-03-03 13:26:03.000000000 -1000 2 | +++ CMakeLists.txt 2023-03-12 02:46:48.466007316 -1000 3 | @@ -243,6 +243,9 @@ 4 | if(NOT "${CMAKE_SYSTEM_NAME}" STREQUAL "iOS") 5 | list(FILTER TFLITE_SRCS EXCLUDE REGEX ".*minimal_logging_ios\\.cc$") 6 | endif() 7 | +if("${CMAKE_SYSTEM_NAME}" STREQUAL "Android") 8 | + list(FILTER TFLITE_SRCS EXCLUDE REGEX ".*minimal_logging_default\\.cc$") 9 | +endif() 10 | populate_tflite_source_vars("core" TFLITE_CORE_SRCS) 11 | populate_tflite_source_vars("core/api" TFLITE_CORE_API_SRCS) 12 | populate_tflite_source_vars("core/c" TFLITE_CORE_C_SRCS) 13 | @@ -493,6 +496,7 @@ 14 | ${TFLITE_SOURCE_DIR}/profiling/root_profiler.h 15 | ${TFLITE_SOURCE_DIR}/profiling/root_profiler.cc 16 | ${TFLITE_SOURCE_DIR}/profiling/telemetry/profiler.cc 17 | + ${TFLITE_SOURCE_DIR}/profiling/telemetry/telemetry.cc 18 | ) 19 | if(CMAKE_SYSTEM_NAME MATCHES "Android") 20 | list(APPEND TFLITE_PROFILER_SRCS 21 | @@ -567,6 +571,7 @@ 22 | pthreadpool 23 | ${CMAKE_DL_LIBS} 24 | ${TFLITE_TARGET_DEPENDENCIES} 25 | + ${ANDROID_LOG_LIB} 26 | ) 27 | 28 | if (NOT BUILD_SHARED_LIBS) 29 | @@ -651,7 +656,8 @@ 30 | tensorflow-lite 31 | ${CMAKE_DL_LIBS} 32 | ) 33 | + 34 | target_compile_options(_pywrap_tensorflow_interpreter_wrapper 35 | PUBLIC ${TFLITE_TARGET_PUBLIC_OPTIONS} 36 | PRIVATE ${TFLITE_TARGET_PRIVATE_OPTIONS} 37 | -) 38 | \ No newline at end of file 39 | +) 40 | -------------------------------------------------------------------------------- /tfl_2_12_not_arm7/tflite-runtime/__init__.py: 
-------------------------------------------------------------------------------- 1 | from pythonforandroid.recipe import PythonRecipe, current_directory, \ 2 | shprint, info_main, warning 3 | from pythonforandroid.logger import error 4 | from os.path import join 5 | import sh 6 | 7 | 8 | class TFLiteRuntimeRecipe(PythonRecipe): 9 | ############################################################### 10 | # 11 | # tflite-runtime README: 12 | # https://github.com/Android-for-Python/c4k_tflite_example/blob/main/README.md 13 | # 14 | # Recipe build references: 15 | # https://developer.android.com/ndk/guides/cmake 16 | # https://developer.android.com/ndk/guides/cpu-arm-neon#cmake 17 | # https://www.tensorflow.org/lite/guide/build_cmake 18 | # https://www.tensorflow.org/lite/guide/build_cmake_arm 19 | # 20 | # Tested using cmake 3.22.1 requires cmake >= 3.15 21 | # 22 | ############################################################### 23 | 24 | version = '2.12.0' 25 | url = 'https://github.com/tensorflow/tensorflow/archive/refs/tags/v{version}.zip' 26 | depends = ['pybind11', 'numpy'] 27 | patches = ['CMakeLists.patch', 'build_with_cmake.patch'] 28 | site_packages_name = 'tflite-runtime' 29 | call_hostpython_via_targetpython = False 30 | 31 | def should_build(self, arch): 32 | name = self.folder_name.replace('-', '_') 33 | 34 | if self.ctx.has_package(name, arch): 35 | info_main('Python package already exists in site-packages') 36 | return False 37 | info_main('{} apparently isn\'t already in site-packages'.format(name)) 38 | return True 39 | 40 | def build_arch(self, arch): 41 | env = self.get_recipe_env(arch) 42 | 43 | # Directories 44 | root_dir = self.get_build_dir(arch.arch) 45 | script_dir = join(root_dir, 46 | 'tensorflow', 'lite', 'tools', 'pip_package') 47 | build_dir = join(script_dir, 'gen', 'tflite_pip', 'python3') 48 | 49 | # Includes 50 | python_include_dir = self.ctx.python_recipe.include_root(arch.arch) 51 | pybind11_recipe = self.get_recipe('pybind11', 
self.ctx) 52 | pybind11_include_dir = pybind11_recipe.get_include_dir(arch) 53 | numpy_include_dir = join(self.ctx.get_site_packages_dir(arch), 54 | 'numpy', 'core', 'include') 55 | includes = ' -I' + python_include_dir + \ 56 | ' -I' + numpy_include_dir + \ 57 | ' -I' + pybind11_include_dir 58 | 59 | # Scripts 60 | build_script = join(script_dir, 'build_pip_package_with_cmake.sh') 61 | toolchain = join(self.ctx.ndk_dir, 62 | 'build', 'cmake', 'android.toolchain.cmake') 63 | 64 | # Build 65 | ######## 66 | with current_directory(root_dir): 67 | env.update({ 68 | 'TENSORFLOW_TARGET': 'android', 69 | 'CMAKE_TOOLCHAIN_FILE': toolchain, 70 | 'ANDROID_PLATFORM': str(self.ctx.ndk_api), 71 | 'ANDROID_ABI': arch.arch, 72 | 'WRAPPER_INCLUDES': includes, 73 | 'CMAKE_SHARED_LINKER_FLAGS': env['LDFLAGS'], 74 | }) 75 | 76 | try: 77 | info_main('tflite-runtime is building...') 78 | info_main('Expect this to take at least 5 minutes...') 79 | cmd = sh.Command(build_script) 80 | cmd(_env=env) 81 | except sh.ErrorReturnCode as e: 82 | error(str(e.stderr)) 83 | exit(1) 84 | 85 | # Install 86 | ########## 87 | info_main('Installing tflite-runtime into site-packages') 88 | with current_directory(build_dir): 89 | hostpython = sh.Command(self.hostpython_location) 90 | install_dir = self.ctx.get_python_install_dir(arch.arch) 91 | env['PACKAGE_VERSION'] = self.version 92 | env['PROJECT_NAME'] = self.site_packages_name 93 | shprint(hostpython, 'setup.py', 'install', '-O2', 94 | '--root={}'.format(install_dir), 95 | '--install-lib=.', 96 | _env=env) 97 | 98 | 99 | recipe = TFLiteRuntimeRecipe() 100 | -------------------------------------------------------------------------------- /tfl_2_12_not_arm7/tflite-runtime/build_with_cmake.patch: -------------------------------------------------------------------------------- 1 | --- tflite-runtime/tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh 2023-02-13 14:49:26.000000000 -1000 2 | +++ build_pip_package_with_cmake.sh 2023-02-22 
12:56:29.555916074 -1000 3 | @@ -29,8 +29,8 @@ 4 | export TENSORFLOW_TARGET="armhf" 5 | fi 6 | PYTHON_INCLUDE=$(${PYTHON} -c "from sysconfig import get_paths as gp; print(gp()['include'])") 7 | -PYBIND11_INCLUDE=$(${PYTHON} -c "import pybind11; print (pybind11.get_include())") 8 | -NUMPY_INCLUDE=$(${PYTHON} -c "import numpy; print (numpy.get_include())") 9 | +# PYBIND11_INCLUDE=$(${PYTHON} -c "import pybind11; print (pybind11.get_include())") 10 | +# NUMPY_INCLUDE=$(${PYTHON} -c "import numpy; print (numpy.get_include())") 11 | export CROSSTOOL_PYTHON_INCLUDE_PATH=${PYTHON_INCLUDE} 12 | 13 | # Fix container image for cross build. 14 | @@ -60,7 +60,7 @@ 15 | "${TENSORFLOW_LITE_DIR}/python/metrics/metrics_portable.py" \ 16 | "${BUILD_DIR}/tflite_runtime" 17 | echo "__version__ = '${PACKAGE_VERSION}'" >> "${BUILD_DIR}/tflite_runtime/__init__.py" 18 | -echo "__git_version__ = '$(git -C "${TENSORFLOW_DIR}" describe)'" >> "${BUILD_DIR}/tflite_runtime/__init__.py" 19 | +echo "__git_version__ = '${PACKAGE_VERSION}'" >> "${BUILD_DIR}/tflite_runtime/__init__.py" 20 | 21 | # Build python interpreter_wrapper. 
22 | mkdir -p "${BUILD_DIR}/cmake_build" 23 | @@ -106,13 +106,25 @@ 24 | -DCMAKE_SYSTEM_PROCESSOR=aarch64 \ 25 | "${TENSORFLOW_LITE_DIR}" 26 | ;; 27 | + android) 28 | + BUILD_FLAGS=${BUILD_FLAGS:-"${WRAPPER_INCLUDES}"} 29 | + cmake \ 30 | + -DCMAKE_SYSTEM_NAME=Android \ 31 | + -DANDROID_ARM_NEON=ON \ 32 | + -DCMAKE_CXX_FLAGS="${BUILD_FLAGS}" \ 33 | + -DCMAKE_SHARED_LINKER_FLAGS="${CMAKE_SHARED_LINKER_FLAGS}" \ 34 | + -DCMAKE_TOOLCHAIN_FILE="${CMAKE_TOOLCHAIN_FILE}" \ 35 | + -DANDROID_PLATFORM="${ANDROID_PLATFORM}" \ 36 | + -DANDROID_ABI="${ANDROID_ABI}" \ 37 | + "${TENSORFLOW_LITE_DIR}" 38 | + ;; 39 | native) 40 | BUILD_FLAGS=${BUILD_FLAGS:-"-march=native -I${PYTHON_INCLUDE} -I${PYBIND11_INCLUDE} -I${NUMPY_INCLUDE}"} 41 | cmake \ 42 | -DCMAKE_C_FLAGS="${BUILD_FLAGS}" \ 43 | -DCMAKE_CXX_FLAGS="${BUILD_FLAGS}" \ 44 | "${TENSORFLOW_LITE_DIR}" 45 | - ;; 46 | + ;; 47 | *) 48 | BUILD_FLAGS=${BUILD_FLAGS:-"-I${PYTHON_INCLUDE} -I${PYBIND11_INCLUDE} -I${NUMPY_INCLUDE}"} 49 | cmake \ 50 | @@ -164,7 +176,7 @@ 51 | ${PYTHON} setup.py bdist --plat-name=${WHEEL_PLATFORM_NAME} \ 52 | bdist_wheel --plat-name=${WHEEL_PLATFORM_NAME} 53 | else 54 | - ${PYTHON} setup.py bdist bdist_wheel 55 | + ${PYTHON} setup.py bdist 56 | fi 57 | ;; 58 | esac 59 | --------------------------------------------------------------------------------
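Returning to object_detector.py above: the output-tensor ordering trick in `ObjectDetector.__init__` can be shown standalone. Since TF 2.6 the reported output order is undefined, but the ports of TFLITE_DETECTION_POST_PROCESS are always assigned in the order location, category, score, detection count, so sorting the tensor indices recovers the meaning of each output. A minimal sketch (the helper name `map_output_indices` is ours):

```python
def map_output_indices(output_details):
    """Given interpreter.get_output_details()-style dicts, return the
    tensor index for each detection output by sorting the indices;
    the post-process op assigns location < category < score < count."""
    ordered = sorted(output['index'] for output in output_details)
    return {
        'location': ordered[0],
        'category': ordered[1],
        'score': ordered[2],
        'number of detections': ordered[3],
    }

# The example from the comment in ObjectDetector.__init__: indices
# reported as [601, 599, 598, 600].
details = [{'index': i} for i in (601, 599, 598, 600)]
print(map_output_indices(details))
# → {'location': 598, 'category': 599, 'score': 600, 'number of detections': 601}
```

This is why `_get_output_tensor` looks tensors up by name through `self._output_indices` rather than by position in `get_output_details()`.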