├── .gitignore ├── LICENSE ├── README.md └── notebooks └── resnet18-app ├── CMakeLists.txt ├── CPP_Export.ipynb ├── model.pt └── resnet18-app.cpp /.gitignore: -------------------------------------------------------------------------------- 1 | # binaries 2 | libtorch 3 | 4 | # CMake 5 | CMakeLists.txt.user 6 | CMakeCache.txt 7 | CMakeFiles 8 | CMakeScripts 9 | Testing 10 | Makefile 11 | cmake_install.cmake 12 | install_manifest.txt 13 | compile_commands.json 14 | CTestTestfile.cmake 15 | _deps 16 | 17 | # Archive 18 | *.zip 19 | *.tar.gz 20 | 21 | # Jupyter 22 | .ipynb_checkpoints 23 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Cedric Chee 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Deprecation Notice 2 | 3 | As of [this announcement](https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/), PyTorch 1.3 now officially supports an end-to-end workflow from Python to deployment on iOS and Android through [PyTorch Mobile](https://pytorch.org/mobile/home/). Thank you for your interest in PyTorch Lite. 4 | 5 | --- 6 | 7 | # PyTorch Lite 8 | 9 | _~~Note: The Android port is not ready for general usage yet.~~_ 10 | 11 | ## What is PyTorch Lite? 12 | 13 | PyTorch Lite aims to be a machine learning framework for on-device inference. We are simplifying deployment of PyTorch models on mobile devices. 14 | 15 | ## The Project 16 | 17 | This project is my attempt to port libtorch to mobile (Android, for a start). PyTorch 1.0 gained support for [using it directly from C++](https://pytorch.org/tutorials/advanced/cpp_export.html) and deploying models there. 18 | 19 | With PyTorch Lite, you load your model and are ready to go — no more jumping through hoops; no cumbersome ONNX, no learning Caffe2. Just PyTorch and libtorch. 20 | 21 | ## Porting 22 | 23 | It would be nice to make it easier to build libtorch such that it can be used with Android Studio's NDK. PyTorch maintainers have ported libtorch to Android. 24 | 25 | Below are the steps to build mobile libtorch from source (master). 26 | 27 | ## Context 28 | 29 | Currently, the official/supported solution for mobile is PyTorch -> ONNX -> {Caffe2/CoreML/nnapi}. 30 | 31 | That being said, [@t-vi](https://github.com/t-vi) wants to take a crack at porting libtorch to run with the Android NDK. Read https://lernapparat.de/pytorch-android/ for more details.
32 | 33 | See some of the issues or feature requests: 34 | - [Android OSS fixes](https://github.com/pytorch/pytorch/pull/15509) to get the AICamera Android app working again on PyTorch master. 35 | - [Support the Android NDK with libtorch](https://github.com/pytorch/pytorch/issues/14258). 36 | - [Improve `build_android.sh` experience for Caffe2](https://github.com/pytorch/pytorch/issues/13116). 37 | 38 | PyTorch maintainers' commits to the codebase before May 4, 2019: 39 | - cmake: 40 | - new macro `FEATURE_TORCH_MOBILE` used by the libtorch mobile build to enable features that are not enabled by the Caffe2 mobile build. Use it only when necessary, as the PyTorch team is committed to converging the libtorch mobile and Caffe2 mobile builds and removing it eventually. 41 | - rename `BUILD_ATEN_MOBILE` to `INTERN_BUILD_MOBILE` and make it private. 42 | - PR for [CMakeLists changes to enable libtorch for Android](https://github.com/pytorch/pytorch/pull/19762): 43 | 44 | ## Guide 45 | 46 | This worked for me on Ubuntu 16.04, Python 3.6, Android NDK r19 and CMake 3.6.0. 47 | 48 | 1. git clone the PyTorch source and switch to commit 3ac4d928248a4b5949b5fafe0d2f43c649cd0cd9: 49 | 50 | ```sh 51 | git clone --recurse-submodules -j8 https://github.com/pytorch/pytorch.git 52 | cd pytorch 53 | 54 | # We are using this PyTorch master commit 3ac4d928248a4b5949b5fafe0d2f43c649cd0cd9 (May 8, 2019, 8:40 AM GMT+8, "tweak scripts/build_android.sh for ABI and header install") 55 | git checkout 3ac4d928248a4b5949b5fafe0d2f43c649cd0cd9 56 | ``` 57 | 58 | 2. You'll need to download the Android NDK (r18 or newer) if you have not already; CMake 3.6.0 or higher is also required. 59 | 60 | ```sh 61 | $ cat ~/m/dev/android/sdk/ndk-bundle/source.properties 62 | Pkg.Desc = Android NDK 63 | Pkg.Revision = 19.2.5345600 64 | ``` 65 | 66 | 3. Then we compile libtorch for the armeabi-v7a ABI and then for x86.
67 | 68 | Set build environment: 69 | - PyTorch folder is at $PYTORCH_ROOT 70 | - This repository folder is at $AICAMERA_ROOT 71 | - Android NDK folder is at $ANDROID_NDK 72 | 73 | ```sh 74 | # make sure $PYTORCH_ROOT, $AICAMERA_ROOT and $ANDROID_NDK are set 75 | export ANDROID_NDK=~/m/dev/android/sdk/ndk-bundle/ 76 | # export PYTORCH_ROOT=~/m/dev/scratch/repo/pytorch/ 77 | export AICAMERA_ROOT=~/m/dev/android/android-studio-projects/FastaiCamera/ 78 | ``` 79 | 80 | Then, do the following: 81 | 82 | ```sh 83 | # Only for my case - cmake OS level version is older than android-cmake 84 | # nano ~/.bashrc 85 | # Add android-cmake to PATH 86 | # export PATH="/home/cedric/m/dev/android/sdk/cmake/3.6.4111459/bin:/home/cedric/miniconda3/bin:$PATH" 87 | 88 | # Only for my case - activate Python environment 89 | # workon py36mixwork 90 | ``` 91 | 92 | Build libtorch: 93 | - This script will use Clang implicitly if NDK >= 18. 94 | - The architecture can be overridden via the `ANDROID_ABI` environment variable. By default, the architecture is "armeabi-v7a with NEON". 95 | - `BUILD_CAFFE2_MOBILE` is the master switch choosing between the libcaffe2 and libtorch mobile builds. When it's enabled, it builds the original libcaffe2 mobile library without ATen/TH ops or TorchScript support; when it's disabled, it builds the libtorch mobile library, which contains ATen/TH ops and native support for TorchScript models, but doesn't contain the not-yet-unified Caffe2 ops. 96 | 97 | ```sh 98 | build_args+=("-DBUILD_CAFFE2_MOBILE=OFF") 99 | build_args+=("-DCMAKE_PREFIX_PATH=$(python -c 'from distutils.sysconfig import get_python_lib; print(get_python_lib())')") 100 | build_args+=("-DPYTHON_EXECUTABLE=$(python -c 'import sys; print(sys.executable)')") 101 | ./scripts/build_android.sh "${build_args[@]}" 102 | ``` 103 | 104 | ```sh 105 | # Error experienced.
Fix by changing cmake to use the Android NDK version (adding to PATH above) 106 | Build with ANDROID_ABI[armeabi-v7a with NEON], ANDROID_NATIVE_API_LEVEL[21] 107 | Bash: GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu) 108 | Caffe2 path: /home/cedric/m/dev/scratch/repo/pytorch 109 | Using Android NDK at /home/cedric/m/dev/android/sdk/ndk-bundle/ 110 | Android NDK version: 19 111 | Building protoc 112 | ... ... ... 113 | ... ... ... 114 | CMake Error at /home/cedric/m/dev/android/sdk/ndk-bundle/build/cmake/android.toolchain.cmake:38 (cmake_minimum_required): 115 | CMake 3.6.0 or higher is required. You are running version 3.5.1 116 | ``` 117 | 118 | ```sh 119 | # Successful run 120 | Build with ANDROID_ABI[armeabi-v7a with NEON], ANDROID_NATIVE_API_LEVEL[21] 121 | Bash: GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu) 122 | Caffe2 path: /home/cedric/m/dev/scratch/repo/pytorch 123 | Using Android NDK at /home/cedric/m/dev/android/sdk/ndk-bundle/ 124 | Android NDK version: 19 125 | Building protoc 126 | -- Configuring done 127 | -- Generating done 128 | -- Build files have been written to: /home/cedric/m/dev/scratch/repo/pytorch/build_host_protoc/build 129 | [1/1] Install the project... 130 | -- Install configuration: "" 131 | -- Up-to-date: /home/cedric/m/dev/scratch/repo/pytorch/build_host_protoc/lib/libprotobuf-lite.a 132 | 133 | [796/796] Install the project... 134 | -- Install configuration: "Release" 135 | Installation completed, now you can copy the headers/libs from /home/cedric/m/dev/scratch/repo/pytorch/build_android/install to your Android project directory. 136 | ... ... ... 137 | ... ... ... 138 | -- Up-to-date: /home/cedric/m/dev/scratch/repo/pytorch/build_host_protoc/lib/cmake/protobuf/protobuf-config.cmake 139 | -- std::exception_ptr is supported. 140 | -- NUMA is disabled 141 | -- Turning off deprecation warning due to glog. 142 | -- Use custom protobuf build. 
143 | -- Caffe2 protobuf include directory: $$ 144 | -- Trying to find preferred BLAS backend of choice: Eigen 145 | -- Brace yourself, we are building NNPACK 146 | -- NNPACK backend is neon 147 | -- Using third party subdirectory Eigen. 148 | CMake Warning at cmake/Dependencies.cmake:676 (find_package): 149 | Could not find a package configuration file provided by "pybind11" with any 150 | of the following names: 151 | 152 | pybind11Config.cmake 153 | pybind11-config.cmake 154 | 155 | Add the installation prefix of "pybind11" to CMAKE_PREFIX_PATH or set 156 | "pybind11_DIR" to a directory containing one of the above files. If 157 | "pybind11" provides a separate development package or SDK, be sure it has 158 | been installed. 159 | Call Stack (most recent call first): 160 | CMakeLists.txt:258 (include) 161 | 162 | 163 | -- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR) 164 | -- Using third_party/pybind11. 165 | -- Using custom protoc executable 166 | -- 167 | -- ******** Summary ******** 168 | -- CMake version : 3.6.0-rc2 169 | -- CMake command : /home/cedric/m/dev/android/sdk/cmake/3.6.4111459/bin/cmake 170 | -- System : Android 171 | -- C++ compiler : /home/cedric/m/dev/android/sdk/ndk-bundle/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++ 172 | -- C++ compiler version : 8.0 173 | -- CXX flags : -g -DANDROID -fdata-sections -ffunction-sections -funwind-tables -fstack-protector-strong -no-canonical-prefixes -mfpu=vfpv3-d16 -fno-addrsig -march=armv7-a -mthumb -mfpu=neon -Wa,--noexecstack -Wformat -Werror=format-security -stdlib=libc++ -frtti -fexceptions -Wno-deprecated -fvisibility-inlines-hidden -Wnon-virtual-dtor 174 | -- Build type : Release 175 | -- Compile definitions : 176 | -- CMAKE_PREFIX_PATH : /home/cedric/.virtualenvs/py36mixwork/lib/python3.6/site-packages 177 | -- CMAKE_INSTALL_PREFIX : /home/cedric/m/dev/scratch/repo/pytorch/build_android/install 178 | -- CMAKE_MODULE_PATH : /home/cedric/m/dev/scratch/repo/pytorch/cmake/Modules 179 | -- 
180 | -- ONNX version : 1.5.0 181 | -- ONNX NAMESPACE : onnx_c2 182 | -- ONNX_BUILD_TESTS : OFF 183 | -- ONNX_BUILD_BENCHMARKS : OFF 184 | -- ONNX_USE_LITE_PROTO : OFF 185 | -- ONNXIFI_DUMMY_BACKEND : OFF 186 | -- ONNXIFI_ENABLE_EXT : OFF 187 | -- 188 | -- Protobuf compiler : 189 | -- Protobuf includes : 190 | -- Protobuf libraries : 191 | -- BUILD_ONNX_PYTHON : OFF 192 | -- 193 | -- ******** Summary ******** 194 | -- CMake version : 3.6.0-rc2 195 | -- CMake command : /home/cedric/m/dev/android/sdk/cmake/3.6.4111459/bin/cmake 196 | -- System : Android 197 | -- C++ compiler : /home/cedric/m/dev/android/sdk/ndk-bundle/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++ 198 | -- C++ compiler version : 8.0 199 | -- CXX flags : -g -DANDROID -fdata-sections -ffunction-sections -funwind-tables -fstack-protector-strong -no-canonical-prefixes -mfpu=vfpv3-d16 -fno-addrsig -march=armv7-a -mthumb -mfpu=neon -Wa,--noexecstack -Wformat -Werror=format-security -stdlib=libc++ -frtti -fexceptions -Wno-deprecated -fvisibility-inlines-hidden -Wnon-virtual-dtor 200 | -- Build type : Release 201 | -- Compile definitions : 202 | -- CMAKE_PREFIX_PATH : /home/cedric/.virtualenvs/py36mixwork/lib/python3.6/site-packages 203 | -- CMAKE_INSTALL_PREFIX : /home/cedric/m/dev/scratch/repo/pytorch/build_android/install 204 | -- CMAKE_MODULE_PATH : /home/cedric/m/dev/scratch/repo/pytorch/cmake/Modules 205 | -- 206 | -- ONNX version : 1.4.1 207 | -- ONNX NAMESPACE : onnx_c2 208 | -- ONNX_BUILD_TESTS : OFF 209 | -- ONNX_BUILD_BENCHMARKS : OFF 210 | -- ONNX_USE_LITE_PROTO : OFF 211 | -- ONNXIFI_DUMMY_BACKEND : OFF 212 | -- 213 | -- Protobuf compiler : 214 | -- Protobuf includes : 215 | -- Protobuf libraries : 216 | -- BUILD_ONNX_PYTHON : OFF 217 | -- don't use NUMA 218 | disabling CUDA because USE_CUDA is set false 219 | -- Looking for clock_gettime in rt 220 | -- Looking for clock_gettime in rt - not found 221 | -- Looking for mmap 222 | -- Looking for mmap - found 223 | -- Looking for shm_open 224 | 
-- Looking for shm_open - not found 225 | -- Looking for shm_unlink 226 | -- Looking for shm_unlink - not found 227 | -- Looking for malloc_usable_size 228 | -- Looking for malloc_usable_size - found 229 | -- Looking for sys/types.h 230 | -- Looking for sys/types.h - found 231 | -- Looking for stdint.h 232 | -- Looking for stdint.h - found 233 | -- Looking for stddef.h 234 | -- Looking for stddef.h - found 235 | -- Check size of void* 236 | -- Check size of void* - done 237 | -- NCCL operators skipped due to no CUDA support 238 | -- Excluding ideep operators as we are not using ideep 239 | -- Excluding image processing operators due to no opencv 240 | -- Excluding video processing operators due to no opencv 241 | -- MPI operators skipped due to no MPI support 242 | CMake Warning at CMakeLists.txt:457 (message): 243 | Generated cmake files are only fully tested if one builds with system glog, 244 | gflags, and protobuf. Other settings may generate files that are not well 245 | tested. 246 | 247 | 248 | CMake Warning at CMakeLists.txt:508 (message): 249 | Generated cmake files are only available when building shared libs. 
250 | 251 | 252 | -- 253 | -- ******** Summary ******** 254 | -- General: 255 | -- CMake version : 3.6.0-rc2 256 | -- CMake command : /home/cedric/m/dev/android/sdk/cmake/3.6.4111459/bin/cmake 257 | -- System : Android 258 | -- C++ compiler : /home/cedric/m/dev/android/sdk/ndk-bundle/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++ 259 | -- C++ compiler id : Clang 260 | -- C++ compiler version : 8.0 261 | -- BLAS : Eigen 262 | -- CXX flags : -g -DANDROID -fdata-sections -ffunction-sections -funwind-tables -fstack-protector-strong -no-canonical-prefixes -mfpu=vfpv3-d16 -fno-addrsig -march=armv7-a -mthumb -mfpu=neon -Wa,--noexecstack -Wformat -Werror=format-security -stdlib=libc++ -frtti -fexceptions -Wno-deprecated -fvisibility-inlines-hidden -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math 263 | -- Build type : Release 264 | -- Compile definitions : ONNX_NAMESPACE=onnx_c2 265 | -- CMAKE_PREFIX_PATH : /home/cedric/.virtualenvs/py36mixwork/lib/python3.6/site-packages 266 | -- CMAKE_INSTALL_PREFIX : /home/cedric/m/dev/scratch/repo/pytorch/build_android/install 267 | -- 268 | -- TORCH_VERSION : 1.0.0 269 | -- CAFFE2_VERSION : 1.0.0 270 | -- BUILD_CAFFE2_MOBILE : OFF 271 | -- BUILD_ATEN_ONLY : OFF 272 | -- BUILD_BINARY : OFF 273 | -- BUILD_CUSTOM_PROTOBUF : ON 274 | -- Protobuf compiler : 
275 | -- Protobuf includes : 276 | -- Protobuf libraries : 277 | -- BUILD_DOCS : OFF 278 | -- BUILD_PYTHON : OFF 279 | -- BUILD_CAFFE2_OPS : OFF 280 | -- BUILD_SHARED_LIBS : OFF 281 | -- BUILD_TEST : OFF 282 | -- INTERN_BUILD_MOBILE : ON 283 | -- USE_ASAN : OFF 284 | -- USE_CUDA : OFF 285 | -- USE_ROCM : OFF 286 | -- USE_EIGEN_FOR_BLAS : ON 287 | -- USE_FBGEMM : OFF 288 | -- USE_FFMPEG : OFF 289 | -- USE_GFLAGS : OFF 290 | -- USE_GLOG : OFF 291 | -- USE_LEVELDB : OFF 292 | -- USE_LITE_PROTO : OFF 293 | -- USE_LMDB : OFF 294 | -- USE_METAL : OFF 295 | -- USE_MKL : 296 | -- USE_MKLDNN : 297 | -- USE_NCCL : OFF 298 | -- USE_NNPACK : ON 299 | -- USE_NUMPY : ON 300 | -- USE_OBSERVERS : OFF 301 | -- USE_OPENCL : OFF 302 | -- USE_OPENCV : OFF 303 | -- USE_OPENMP : OFF 304 | -- USE_PROF : OFF 305 | -- USE_QNNPACK : ON 306 | -- USE_REDIS : OFF 307 | -- USE_ROCKSDB : OFF 308 | -- USE_ZMQ : OFF 309 | -- USE_DISTRIBUTED : OFF 310 | -- Public Dependencies : Threads::Threads 311 | -- Private Dependencies : qnnpack;nnpack;cpuinfo;fp16;log;foxi_loader;dl 312 | -- Configuring done 313 | -- Generating done 314 | -- Build files have been written to: /home/cedric/m/dev/scratch/repo/pytorch/build_android 315 | Will install headers and libs to /home/cedric/m/dev/scratch/repo/pytorch/build_android/install for further Android project usage. 316 | [71/796] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/proto_writer.cc.o 317 | In file included from /home/cedric/m/dev/scratch/repo/pytorch/third_party/protobuf/src/google/protobuf/util/internal/proto_writer.cc:31: 318 | ... ... ... 319 | ... ... ... 320 | [796/796] Install the project... 321 | -- Install configuration: "Release" 322 | Installation completed, now you can copy the headers/libs from /home/cedric/m/dev/scratch/repo/pytorch/build_android/install to your Android project directory. 323 | ``` 324 | 325 | 4. Test libtorch.so on Android device. 
326 | 327 | Copy the built headers and libraries into the AICamera Android project directory: 328 | 329 | ```sh 330 | # use my fork of AICamera 331 | git clone https://github.com/cedrickchee/pytorch-android.git 332 | cd pytorch-android 333 | 334 | # copy headers 335 | cp -r build_android/install/include/* $AICAMERA_ROOT/app/src/main/cpp/ 336 | 337 | # copy arm libs 338 | rm -rf $AICAMERA_ROOT/app/src/main/jniLibs/armeabi-v7a/ 339 | mkdir $AICAMERA_ROOT/app/src/main/jniLibs/armeabi-v7a 340 | cp -r build_android/lib/lib* $AICAMERA_ROOT/app/src/main/jniLibs/armeabi-v7a/ 341 | ``` 342 | 343 | ## Deploy Resnet-18 Model to PyTorch Lite 344 | 345 | This example gets you started deploying a PyTorch model using the C++ frontend, libtorch and PyTorch Lite. 346 | 347 | ```sh 348 | git clone https://github.com/cedrickchee/pytorch-lite.git 349 | 350 | cd pytorch-lite 351 | ``` 352 | 353 | First, export your Python model using Torch Script. To do that, you have to download and unzip the libtorch CPU distribution on the computer where you run your Jupyter Notebook. 354 | - 1.1, cuda 9: https://download.pytorch.org/libtorch/cu90/libtorch-shared-with-deps-latest.zip 355 | - 1.1, cpu: https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-latest.zip 356 | - Nightly, cuda 10: https://download.pytorch.org/libtorch/nightly/cu100/libtorch-shared-with-deps-latest.zip 357 | - Nightly, cpu: https://download.pytorch.org/libtorch/nightly/cpu/libtorch-shared-with-deps-latest.zip 358 | - 1.0, cpu: https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-1.0.0.zip 359 | 360 | Open the Jupyter Notebook `CPP_Export.ipynb` in the `notebooks/resnet18-app/` directory to see the full instructions for this step.
361 | 362 | Next, build the C++ application that loads the ScriptModule, from within the `resnet18-app/` directory: 363 | 364 | ```sh 365 | cd notebooks/resnet18-app/ 366 | 367 | # replace /path/to/libtorch with the full path 368 | # to the unzipped libtorch CPU distribution. 369 | cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch . 370 | 371 | # If all goes well, it will look something like this: 372 | -- The C compiler identification is GNU 5.4.0 373 | -- The CXX compiler identification is GNU 5.4.0 374 | -- Check for working C compiler: /usr/bin/cc 375 | -- Check for working C compiler: /usr/bin/cc -- works 376 | -- Detecting C compiler ABI info 377 | -- Detecting C compiler ABI info - done 378 | -- Detecting C compile features 379 | -- Detecting C compile features - done 380 | -- Check for working CXX compiler: /usr/bin/c++ 381 | -- Check for working CXX compiler: /usr/bin/c++ -- works 382 | -- Detecting CXX compiler ABI info 383 | -- Detecting CXX compiler ABI info - done 384 | -- Detecting CXX compile features 385 | -- Detecting CXX compile features - done 386 | -- Found torch: /path/to/libtorch 387 | -- Configuring done 388 | -- Generating done 389 | -- Build files have been written to: /home/cedric/m/dev/work/repo/pytorch-lite/notebooks/resnet18-app 390 | 391 | make 392 | ``` 393 | 394 | ## Android Development 395 | 396 | Open AICamera using Android Studio, connect your Android device and run the project to build and deploy the APK to your device. 397 | 398 | ## Analysis 399 | 400 | Even if libtorch does compile for Android, it's not really optimized for ARM64 right now, and the binary size will be pretty large.
401 | 402 | - More issues: https://github.com/pytorch/pytorch/issues/9925 403 | 404 | --- 405 | 406 | Alternatively, build mobile libtorch using [t-vi's PyTorch fork](https://github.com/t-vi/pytorch): 407 | 408 | ```sh 409 | git clone --recurse-submodules -j8 https://github.com/t-vi/pytorch.git t-vi-pytorch 410 | cd t-vi-pytorch 411 | git checkout libtorch_android 412 | git submodule init 413 | git submodule update 414 | 415 | export ANDROID_NDK=~/m/dev/android/sdk/ndk-bundle/ 416 | BUILD_LIBTORCH_PY=$PWD/tools/build_libtorch.py 417 | scripts/build_host_protoc.sh 418 | mkdir -p android-build 419 | pushd android-build 420 | VERBOSE=1 DEBUG=1 python $BUILD_LIBTORCH_PY --android-abi="armeabi-v7a with NEON" 421 | popd 422 | ``` 423 | -------------------------------------------------------------------------------- /notebooks/resnet18-app/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | cmake_minimum_required(VERSION 3.0 FATAL_ERROR) 2 | project(custom_ops) 3 | 4 | find_package(Torch REQUIRED) 5 | 6 | add_executable(resnet18-app resnet18-app.cpp) 7 | target_link_libraries(resnet18-app "${TORCH_LIBRARIES}") 8 | set_property(TARGET resnet18-app PROPERTY CXX_STANDARD 11) -------------------------------------------------------------------------------- /notebooks/resnet18-app/CPP_Export.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Loading a PyTorch Model in C++" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": 1, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import torch\n", 17 | "import torchvision" 18 | ] 19 | }, 20 | { 21 | "cell_type": "markdown", 22 | "metadata": {}, 23 | "source": [ 24 | "## Step 1: Converting Your PyTorch Model to Torch Script" 25 | ] 26 | }, 27 | { 28 | "cell_type": "code", 29 | "execution_count": 2, 30 | "metadata": {}, 
31 | "outputs": [], 32 | "source": [ 33 | "# An instance of your model.\n", 34 | "model = torchvision.models.resnet18()" 35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": 3, 40 | "metadata": {}, 41 | "outputs": [], 42 | "source": [ 43 | "# An example input you would normally provide to your model's forward() method.\n", 44 | "example = torch.rand(1, 3, 224, 224)" 45 | ] 46 | }, 47 | { 48 | "cell_type": "code", 49 | "execution_count": 4, 50 | "metadata": {}, 51 | "outputs": [], 52 | "source": [ 53 | "# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.\n", 54 | "traced_script_module = torch.jit.trace(model, example)" 55 | ] 56 | }, 57 | { 58 | "cell_type": "code", 59 | "execution_count": 5, 60 | "metadata": {}, 61 | "outputs": [], 62 | "source": [ 63 | "output = traced_script_module(torch.ones(1, 3, 224, 224))" 64 | ] 65 | }, 66 | { 67 | "cell_type": "code", 68 | "execution_count": 6, 69 | "metadata": { 70 | "scrolled": true 71 | }, 72 | "outputs": [ 73 | { 74 | "name": "stdout", 75 | "output_type": "stream", 76 | "text": [ 77 | "tensor([ 0.0808, 0.5042, 0.7030, -0.9465, 0.4538], grad_fn=<SliceBackward>)\n" 78 | ] 79 | } 80 | ], 81 | "source": [ 82 | "print(output[0, :5])" 83 | ] 84 | }, 85 | { 86 | "cell_type": "markdown", 87 | "metadata": {}, 88 | "source": [ 89 | "## Converting to Torch Script via Annotation" 90 | ] 91 | }, 92 | { 93 | "cell_type": "code", 94 | "execution_count": 7, 95 | "metadata": {}, 96 | "outputs": [], 97 | "source": [ 98 | "class MyModule(torch.nn.Module):\n", 99 | " def __init__(self, N, M):\n", 100 | " super(MyModule, self).__init__()\n", 101 | " self.weight = torch.nn.Parameter(torch.rand(N, M))\n", 102 | "\n", 103 | " def forward(self, input):\n", 104 | " if input.sum() > 0:\n", 105 | " output = self.weight.mv(input)\n", 106 | " else:\n", 107 | " output = self.weight + input\n", 108 | " return output" 109 | ] 110 | }, 111 | { 112 | "cell_type": "markdown", 113 | "metadata": {}, 114 | "source": [ 115 | "Because
the `forward` method of this module uses control flow that is dependent on the input, it is not suitable for tracing. Instead, we can convert it to a `ScriptModule` by subclassing it from `torch.jit.ScriptModule` and adding a `@torch.jit.script_method` annotation to the model’s `forward` method:" 116 | ] 117 | }, 118 | { 119 | "cell_type": "code", 120 | "execution_count": 8, 121 | "metadata": {}, 122 | "outputs": [], 123 | "source": [ 124 | "class MyModule(torch.jit.ScriptModule):\n", 125 | " def __init__(self, N, M):\n", 126 | " super(MyModule, self).__init__()\n", 127 | " self.weight = torch.nn.Parameter(torch.rand(N, M))\n", 128 | "\n", 129 | " @torch.jit.script_method\n", 130 | " def forward(self, input):\n", 131 | " if bool(input.sum() > 0):\n", 132 | " output = self.weight.mv(input)\n", 133 | " else:\n", 134 | " output = self.weight + input\n", 135 | " return output" 136 | ] 137 | }, 138 | { 139 | "cell_type": "code", 140 | "execution_count": 9, 141 | "metadata": {}, 142 | "outputs": [], 143 | "source": [ 144 | "my_script_module = MyModule(2, 3)" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "metadata": {}, 150 | "source": [ 151 | "## Step 2: Serializing Your Script Module to a File" 152 | ] 153 | }, 154 | { 155 | "cell_type": "markdown", 156 | "metadata": {}, 157 | "source": [ 158 | "Once you have a `ScriptModule` in your hands, either from tracing or annotating a PyTorch model, you are ready to serialize it to a file. Later on, you’ll be able to load the module from this file in C++ and execute it without any dependency on Python. Say we want to serialize the `ResNet18` model shown earlier in the tracing example. 
To perform this serialization, simply call save on the module and pass it a filename:" 159 | ] 160 | }, 161 | { 162 | "cell_type": "code", 163 | "execution_count": 10, 164 | "metadata": {}, 165 | "outputs": [], 166 | "source": [ 167 | "traced_script_module.save(\"model.pt\")" 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": {}, 173 | "source": [ 174 | "This will produce a `model.pt` file in your working directory. We have now officially left the realm of Python and are ready to cross over to the sphere of C++." 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "execution_count": 11, 180 | "metadata": {}, 181 | "outputs": [ 182 | { 183 | "name": "stdout", 184 | "output_type": "stream", 185 | "text": [ 186 | "total 45M\r\n", 187 | "drwxrwxr-x 3 ubuntu ubuntu 4.0K May 11 02:27 .\r\n", 188 | "drwxrwxr-x 4 ubuntu ubuntu 4.0K May 11 02:26 ..\r\n", 189 | "-rw-rw-r-- 1 ubuntu ubuntu 260 May 10 18:16 CMakeLists.txt\r\n", 190 | "-rw-rw-r-- 1 ubuntu ubuntu 7.3K May 10 17:48 CPP_Export.ipynb\r\n", 191 | "drwxrwxr-x 2 ubuntu ubuntu 4.0K May 11 02:26 .ipynb_checkpoints\r\n", 192 | "-rw-rw-r-- 1 ubuntu ubuntu 45M May 11 02:27 model.pt\r\n", 193 | "-rw-rw-r-- 1 ubuntu ubuntu 796 May 11 02:14 resnet18-app.cpp\r\n" 194 | ] 195 | } 196 | ], 197 | "source": [ 198 | "!ls -lah" 199 | ] 200 | }, 201 | { 202 | "cell_type": "markdown", 203 | "metadata": {}, 204 | "source": [ 205 | "## Step 3: Loading Your Script Module in C++" 206 | ] 207 | }, 208 | { 209 | "cell_type": "markdown", 210 | "metadata": {}, 211 | "source": [ 212 | "To load your serialized PyTorch model in C++, your application must depend on the PyTorch C++ API – also known as LibTorch. The LibTorch distribution encompasses a collection of shared libraries, header files and CMake build configuration files. While CMake is not a requirement for depending on LibTorch, it is the recommended approach and will be well supported into the future. 
For this tutorial, we will be building a minimal C++ application using CMake and LibTorch that simply loads and executes a serialized PyTorch model." 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "### A Minimal C++ Application" 220 | ] 221 | }, 222 | { 223 | "cell_type": "code", 224 | "execution_count": 14, 225 | "metadata": { 226 | "collapsed": true 227 | }, 228 | "outputs": [ 229 | { 230 | "name": "stdout", 231 | "output_type": "stream", 232 | "text": [ 233 | "#include <torch/script.h> // One-stop header.\r\n", 234 | "\r\n", 235 | "#include <iostream>\r\n", 236 | "#include <memory>\r\n", 237 | "\r\n", 238 | "int main(int argc, const char* argv[]) {\r\n", 239 | "  if (argc != 2) {\r\n", 240 | "    std::cerr << \"usage: example-app <path-to-exported-script-module>\\n\";\r\n", 241 | "    return -1;\r\n", 242 | "  }\r\n", 243 | "\r\n", 244 | "  // Deserialize the ScriptModule from a file using torch::jit::load().\r\n", 245 | "  std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);\r\n", 246 | "\r\n", 247 | "  assert(module != nullptr);\r\n", 248 | "  std::cout << \"ok\\n\";\r\n", 249 | "}" 250 | ] 251 | } 252 | ], 253 | "source": [ 254 | "!cat resnet18-app.cpp" 255 | ] 256 | }, 257 | { 258 | "cell_type": "markdown", 259 | "metadata": {}, 260 | "source": [ 261 | "### Depending on LibTorch and Building the Application" 262 | ] 263 | }, 264 | { 265 | "cell_type": "code", 266 | "execution_count": 15, 267 | "metadata": {}, 268 | "outputs": [ 269 | { 270 | "name": "stdout", 271 | "output_type": "stream", 272 | "text": [ 273 | "cmake_minimum_required(VERSION 3.0 FATAL_ERROR)\r\n", 274 | "project(custom_ops)\r\n", 275 | "\r\n", 276 | "find_package(Torch REQUIRED)\r\n", 277 | "\r\n", 278 | "add_executable(resnet18-app resnet18-app.cpp)\r\n", 279 | "target_link_libraries(resnet18-app \"${TORCH_LIBRARIES}\")\r\n", 280 | "set_property(TARGET resnet18-app PROPERTY CXX_STANDARD 11)" 281 | ] 282 | } 283 | ], 284 | "source": [ 285 | "!cat CMakeLists.txt" 286 | ] 287 | }, 288 | { 289 | "cell_type": 
"markdown", 290 | "metadata": {}, 291 | "source": [ 292 | "The last thing we need to build the example application is the LibTorch distribution. You can always grab the latest stable release from the [download page](https://pytorch.org/get-started/locally/#start-locally) on the PyTorch website. If you download and unzip the latest archive, you should receive a folder with the following directory structure:" 293 | ] 294 | }, 295 | { 296 | "cell_type": "code", 297 | "execution_count": 21, 298 | "metadata": { 299 | "collapsed": true 300 | }, 301 | "outputs": [ 302 | { 303 | "name": "stdout", 304 | "output_type": "stream", 305 | "text": [ 306 | "total 63364\r\n", 307 | "drwxr-xr-x 6 ubuntu ubuntu 4096 Dec 6 07:14 libtorch\r\n", 308 | "-rw-rw-r-- 1 ubuntu ubuntu 64875864 Dec 7 19:14 libtorch-shared-with-deps-1.0.0.zip\r\n", 309 | "drwxrwxr-x 4 ubuntu ubuntu 4096 May 11 02:26 notebooks\r\n", 310 | "total 20\r\n", 311 | "drwxr-xr-x 2 ubuntu ubuntu 4096 Dec 6 07:14 bin\r\n", 312 | "-rw-r--r-- 1 ubuntu ubuntu 6 Dec 6 07:14 build-version\r\n", 313 | "drwxr-xr-x 8 ubuntu ubuntu 4096 Dec 6 07:14 include\r\n", 314 | "drwxr-xr-x 2 ubuntu ubuntu 4096 Dec 6 07:38 lib\r\n", 315 | "drwxr-xr-x 3 ubuntu ubuntu 4096 Dec 6 07:14 share\r\n" 316 | ] 317 | } 318 | ], 319 | "source": [ 320 | "!ls -l ../../; ls -l ../../libtorch" 321 | ] 322 | }, 323 | { 324 | "cell_type": "markdown", 325 | "metadata": {}, 326 | "source": [ 327 | "- The `lib/` folder contains the shared libraries you must link against,\n", 328 | "- The `include/` folder contains header files your program will need to include,\n", 329 | "- The `share/` folder contains the necessary CMake configuration to enable the simple find_package(Torch) command above." 330 | ] 331 | }, 332 | { 333 | "cell_type": "markdown", 334 | "metadata": {}, 335 | "source": [ 336 | "The last step is building the application. 
For this, assume our example directory is laid out like this:" 337 | ] 338 | }, 339 | { 340 | "cell_type": "code", 341 | "execution_count": 25, 342 | "metadata": {}, 343 | "outputs": [ 344 | { 345 | "name": "stdout", 346 | "output_type": "stream", 347 | "text": [ 348 | "total 45780\r\n", 349 | "-rw-rw-r-- 1 ubuntu ubuntu 260 May 10 18:16 CMakeLists.txt\r\n", 350 | "-rw-rw-r-- 1 ubuntu ubuntu 10519 May 11 02:37 CPP_Export.ipynb\r\n", 351 | "-rw-rw-r-- 1 ubuntu ubuntu 46857749 May 11 02:27 model.pt\r\n", 352 | "-rw-rw-r-- 1 ubuntu ubuntu 468 May 11 02:30 resnet18-app.cpp\r\n" 353 | ] 354 | } 355 | ], 356 | "source": [ 357 | "!ls -l" 358 | ] 359 | }, 360 | { 361 | "cell_type": "markdown", 362 | "metadata": {}, 363 | "source": [ 364 | "We can now run the following commands to build the application from within the `resnet18-app/` folder:" 365 | ] 366 | }, 367 | { 368 | "cell_type": "code", 369 | "execution_count": 45, 370 | "metadata": { 371 | "collapsed": true 372 | }, 373 | "outputs": [ 374 | { 375 | "name": "stdout", 376 | "output_type": "stream", 377 | "text": [ 378 | "-- The C compiler identification is GNU 5.4.0\n", 379 | "-- The CXX compiler identification is GNU 5.4.0\n", 380 | "-- Check for working C compiler: /usr/bin/cc\n", 381 | "-- Check for working C compiler: /usr/bin/cc -- works\n", 382 | "-- Detecting C compiler ABI info\n", 383 | "-- Detecting C compiler ABI info - done\n", 384 | "-- Detecting C compile features\n", 385 | "-- Detecting C compile features - done\n", 386 | "-- Check for working CXX compiler: /usr/bin/c++\n", 387 | "-- Check for working CXX compiler: /usr/bin/c++ -- works\n", 388 | "-- Detecting CXX compiler ABI info\n", 389 | "-- Detecting CXX compiler ABI info - done\n", 390 | "-- Detecting CXX compile features\n", 391 | "-- Detecting CXX compile features - done\n", 392 | "-- Looking for pthread.h\n", 393 | "-- Looking for pthread.h - found\n", 394 | "-- Looking for pthread_create\n", 395 | "-- Looking for pthread_create - not found\n", 
396 | "-- Looking for pthread_create in pthreads\n", 397 | "-- Looking for pthread_create in pthreads - not found\n", 398 | "-- Looking for pthread_create in pthread\n", 399 | "-- Looking for pthread_create in pthread - found\n", 400 | "-- Found Threads: TRUE \n", 401 | "-- Found torch: /home/ubuntu/development/pytorch-lite/libtorch/lib/libtorch.so \n", 402 | "-- Configuring done\n", 403 | "-- Generating done\n", 404 | "-- Build files have been written to: /home/ubuntu/development/pytorch-lite/notebooks/resnet18-app\n" 405 | ] 406 | } 407 | ], 408 | "source": [ 409 | "!cmake -DCMAKE_PREFIX_PATH=../../libtorch" 410 | ] 411 | }, 412 | { 413 | "cell_type": "code", 414 | "execution_count": 46, 415 | "metadata": {}, 416 | "outputs": [ 417 | { 418 | "name": "stdout", 419 | "output_type": "stream", 420 | "text": [ 421 | "total 45812\r\n", 422 | "-rw-rw-r-- 1 ubuntu ubuntu 13002 May 11 02:43 CMakeCache.txt\r\n", 423 | "drwxrwxr-x 5 ubuntu ubuntu 4096 May 11 02:43 CMakeFiles\r\n", 424 | "-rw-rw-r-- 1 ubuntu ubuntu 1431 May 11 02:43 cmake_install.cmake\r\n", 425 | "-rw-rw-r-- 1 ubuntu ubuntu 260 May 10 18:16 CMakeLists.txt\r\n", 426 | "-rw-rw-r-- 1 ubuntu ubuntu 11696 May 11 02:43 CPP_Export.ipynb\r\n", 427 | "-rw-rw-r-- 1 ubuntu ubuntu 5145 May 11 02:43 Makefile\r\n", 428 | "-rw-rw-r-- 1 ubuntu ubuntu 46857749 May 11 02:27 model.pt\r\n", 429 | "-rw-rw-r-- 1 ubuntu ubuntu 468 May 11 02:30 resnet18-app.cpp\r\n" 430 | ] 431 | } 432 | ], 433 | "source": [ 434 | "!ls -l" 435 | ] 436 | }, 437 | { 438 | "cell_type": "code", 439 | "execution_count": 47, 440 | "metadata": {}, 441 | "outputs": [ 442 | { 443 | "name": "stdout", 444 | "output_type": "stream", 445 | "text": [ 446 | "\u001b[35m\u001b[1mScanning dependencies of target resnet18-app\u001b[0m\n", 447 | "[ 50%] \u001b[32mBuilding CXX object CMakeFiles/resnet18-app.dir/resnet18-app.cpp.o\u001b[0m\n", 448 | "[100%] \u001b[32m\u001b[1mLinking CXX executable resnet18-app\u001b[0m\n", 449 | "[100%] Built target resnet18-app\n" 450 
| ] 451 | } 452 | ], 453 | "source": [ 454 | "!make" 455 | ] 456 | }, 457 | { 458 | "cell_type": "code", 459 | "execution_count": 50, 460 | "metadata": {}, 461 | "outputs": [ 462 | { 463 | "name": "stdout", 464 | "output_type": "stream", 465 | "text": [ 466 | "-rwxrwxr-x 1 ubuntu ubuntu 83320 May 11 02:44 resnet18-app\r\n" 467 | ] 468 | } 469 | ], 470 | "source": [ 471 | "!ls -l resnet18-app" 472 | ] 473 | }, 474 | { 475 | "cell_type": "markdown", 476 | "metadata": {}, 477 | "source": [ 478 | "If we supply the path to the serialized `ResNet18` model we created earlier to the resulting `resnet18-app` binary, we should be rewarded with a friendly \"ok\":" 479 | ] 480 | }, 481 | { 482 | "cell_type": "code", 483 | "execution_count": 52, 484 | "metadata": {}, 485 | "outputs": [ 486 | { 487 | "name": "stdout", 488 | "output_type": "stream", 489 | "text": [ 490 | "usage: example-app <path-to-exported-script-module>\r\n" 491 | ] 492 | } 493 | ], 494 | "source": [ 495 | "!./resnet18-app" 496 | ] 497 | }, 498 | { 499 | "cell_type": "code", 500 | "execution_count": 53, 501 | "metadata": {}, 502 | "outputs": [ 503 | { 504 | "name": "stdout", 505 | "output_type": "stream", 506 | "text": [ 507 | "ok\r\n" 508 | ] 509 | } 510 | ], 511 | "source": [ 512 | "!./resnet18-app model.pt" 513 | ] 514 | }, 515 | { 516 | "cell_type": "markdown", 517 | "metadata": {}, 518 | "source": [ 519 | "## Step 4: Executing the Script Module in C++" 520 | ] 521 | }, 522 | { 523 | "cell_type": "markdown", 524 | "metadata": {}, 525 | "source": [ 526 | "Having successfully loaded our serialized `ResNet18` in C++, we are now just a couple lines of code away from executing it!
Let’s add those lines to our C++ application’s `main()` function:" 527 | ] 528 | }, 529 | { 530 | "cell_type": "code", 531 | "execution_count": 54, 532 | "metadata": { 533 | "collapsed": true 534 | }, 535 | "outputs": [ 536 | { 537 | "name": "stdout", 538 | "output_type": "stream", 539 | "text": [ 540 | "#include <torch/script.h> // One-stop header.\r\n", 541 | "\r\n", 542 | "#include <iostream>\r\n", 543 | "#include <memory>\r\n", 544 | "\r\n", 545 | "int main(int argc, const char* argv[]) {\r\n", 546 | " if (argc != 2) {\r\n", 547 | " std::cerr << \"usage: example-app <path-to-exported-script-module>\\n\";\r\n", 548 | " return -1;\r\n", 549 | " }\r\n", 550 | "\r\n", 551 | " // Deserialize the ScriptModule from a file using torch::jit::load().\r\n", 552 | " std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);\r\n", 553 | "\r\n", 554 | " assert(module != nullptr);\r\n", 555 | " std::cout << \"ok\\n\";\r\n", 556 | "\r\n", 557 | " // Create a vector of inputs.\r\n", 558 | " std::vector<torch::jit::IValue> inputs;\r\n", 559 | " inputs.push_back(torch::ones({1, 3, 224, 224}));\r\n", 560 | "\r\n", 561 | " // Execute the model and turn its output into a tensor.\r\n", 562 | " at::Tensor output = module->forward(inputs).toTensor();\r\n", 563 | "\r\n", 564 | " std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\\n';\r\n", 565 | "}" 566 | ] 567 | } 568 | ], 569 | "source": [ 570 | "!cat resnet18-app.cpp" 571 | ] 572 | }, 573 | { 574 | "cell_type": "markdown", 575 | "metadata": {}, 576 | "source": [ 577 | "Let’s try it out by re-compiling our application and running it with the same serialized model:" 578 | ] 579 | }, 580 | { 581 | "cell_type": "code", 582 | "execution_count": 55, 583 | "metadata": {}, 584 | "outputs": [ 585 | { 586 | "name": "stdout", 587 | "output_type": "stream", 588 | "text": [ 589 | "\u001b[35m\u001b[1mScanning dependencies of target resnet18-app\u001b[0m\n", 590 | "[ 50%] \u001b[32mBuilding CXX object CMakeFiles/resnet18-app.dir/resnet18-app.cpp.o\u001b[0m\n", 591 | "[100%] \u001b[32m\u001b[1mLinking CXX executable
resnet18-app\u001b[0m\n", 592 | "[100%] Built target resnet18-app\n" 593 | ] 594 | } 595 | ], 596 | "source": [ 597 | "!make" 598 | ] 599 | }, 600 | { 601 | "cell_type": "code", 602 | "execution_count": 57, 603 | "metadata": {}, 604 | "outputs": [ 605 | { 606 | "name": "stdout", 607 | "output_type": "stream", 608 | "text": [ 609 | "ok\n", 610 | " 0.0808 0.5042 0.7030 -0.9465 0.4538\n", 611 | "[ Variable[CPUFloatType]{1,5} ]\n" 612 | ] 613 | } 614 | ], 615 | "source": [ 616 | "!./resnet18-app model.pt" 617 | ] 618 | }, 619 | { 620 | "cell_type": "markdown", 621 | "metadata": {}, 622 | "source": [ 623 | "For reference, the output in Python previously was:\n", 624 | "\n", 625 | "```sh\n", 626 | "tensor([ 0.0808, 0.5042, 0.7030, -0.9465, 0.4538], grad_fn=<SliceBackward>)\n", 627 | "```" 628 | ] 629 | }, 630 | { 631 | "cell_type": "markdown", 632 | "metadata": {}, 633 | "source": [ 634 | "Looks like a good match!" 635 | ] 636 | }, 637 | { 638 | "cell_type": "markdown", 639 | "metadata": {}, 640 | "source": [ 641 | "## Step 5: Getting Help and Exploring the API" 642 | ] 643 | }, 644 | { 645 | "cell_type": "markdown", 646 | "metadata": {}, 647 | "source": [ 648 | "This tutorial has hopefully equipped you with a general understanding of a PyTorch model’s path from Python to C++. With the concepts described in this tutorial, you should be able to go from a vanilla, “eager” PyTorch model, to a compiled `ScriptModule` in Python, to a serialized file on disk and – to close the loop – to an executable `script::Module` in C++.\n", 649 | "\n", 650 | "Of course, there are many concepts we did not cover.
To learn more, go to: https://pytorch.org/tutorials/advanced/cpp_export.html#step-5-getting-help-and-exploring-the-api" 651 | ] 652 | } 653 | ], 654 | "metadata": { 655 | "kernelspec": { 656 | "display_name": "Python 3", 657 | "language": "python", 658 | "name": "python3" 659 | }, 660 | "language_info": { 661 | "codemirror_mode": { 662 | "name": "ipython", 663 | "version": 3 664 | }, 665 | "file_extension": ".py", 666 | "mimetype": "text/x-python", 667 | "name": "python", 668 | "nbconvert_exporter": "python", 669 | "pygments_lexer": "ipython3", 670 | "version": "3.6.7" 671 | } 672 | }, 673 | "nbformat": 4, 674 | "nbformat_minor": 2 675 | } 676 | -------------------------------------------------------------------------------- /notebooks/resnet18-app/model.pt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cedrickchee/pytorch-lite/f5f72c1949d9b152a27d59b2f3e327b450da56db/notebooks/resnet18-app/model.pt -------------------------------------------------------------------------------- /notebooks/resnet18-app/resnet18-app.cpp: -------------------------------------------------------------------------------- 1 | #include <torch/script.h> // One-stop header. 2 | 3 | #include <iostream> 4 | #include <memory> 5 | 6 | int main(int argc, const char* argv[]) { 7 | if (argc != 2) { 8 | std::cerr << "usage: example-app <path-to-exported-script-module>\n"; 9 | return -1; 10 | } 11 | 12 | // Deserialize the ScriptModule from a file using torch::jit::load(). 13 | //std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]); 14 | 15 | //assert(module != nullptr); 16 | std::cout << "ok\n"; 17 | 18 | // Create a vector of inputs. 19 | //std::vector<torch::jit::IValue> inputs; 20 | //inputs.push_back(torch::ones({1, 3, 224, 224})); 21 | 22 | // Execute the model and turn its output into a tensor.
23 | //at::Tensor output = module->forward(inputs).toTensor(); 24 | 25 | //std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n'; 26 | } 27 | --------------------------------------------------------------------------------