├── .gitignore ├── LICENSE ├── README.md ├── export_onnx.py ├── pc.jpg ├── test_python.py └── windows_sln ├── onnxruntime_windows.sln ├── onnxruntime_windows_c# ├── Program.cs └── onnxruntime_windows_c#.csproj ├── onnxruntime_windows_cpp_cpu ├── onnxruntime_windows_cpp_cpu.cpp ├── onnxruntime_windows_cpp_cpu.vcxproj └── onnxruntime_windows_cpp_cpu.vcxproj.filters ├── onnxruntime_windows_cpp_gpu ├── onnxruntime_windows_cpp_gpu.cpp ├── onnxruntime_windows_cpp_gpu.vcxproj └── onnxruntime_windows_cpp_gpu.vcxproj.filters └── openvino_windows_cpp ├── openvino_windows_cpp.cpp ├── openvino_windows_cpp.vcxproj └── openvino_windows_cpp.vcxproj.filters /.gitignore: -------------------------------------------------------------------------------- 1 | *.onnx 2 | *.bin 3 | *.xml 4 | __pycache__ 5 | 6 | .vs 7 | x64 8 | x86 9 | onnxruntime-win-x64-1.15.1 10 | onnxruntime-win-x64-gpu-1.15.1 11 | *.vcxproj.user 12 | bin 13 | obj 14 | openvino 15 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 Chao Zhang 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # ModelInferBench 2 | 3 | This tool benchmarks ONNX model inference speed across different deployment methods. 4 | 5 | ## My Results 6 | 7 | **Note**: My PC has two graphics cards: a GTX1070Ti and an Intel A770. My display cable is connected to the A770. In DirectML, device 0 corresponds to the A770 and device 1 to the GTX1070Ti.
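The repository's `test_python.py` leaves the DirectML path as an empty stub, so for reference, here is a minimal sketch of DirectML device selection from Python. It assumes the `onnxruntime-directml` package is installed and `model.onnx` has been exported; the `device_id` values follow the mapping above:

```
import numpy as np
import onnxruntime as ort

# DmlExecutionProvider ships with the onnxruntime-directml package.
# On the machine above: device_id 0 = A770, device_id 1 = GTX1070Ti.
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("DmlExecutionProvider", {"device_id": 0})],
)
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
print(sess.run(["output"], {"input": x})[0].shape)
```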
8 | 9 | ### System Configurations 10 | #### My PC: 11 | 12 | - CPU: i9-13900 13 | 14 | - Memory: 32GB DDR4 3000 MHz 15 | 16 | - GPUs: GTX1070Ti + A770 16G 17 | 18 | - OS: Windows 11 Pro 22H2 22621.2215 19 | 20 | - GPU Driver version: 536.99 & 31.0.101.4669 21 | 22 | - Python Version: 3.11.4 23 | 24 | - PyTorch Version: 2.0.1+cu118 25 | 26 | #### My Mac: 27 | 28 | - MacBook Pro 16-inch, 2021, A2485 29 | 30 | - CPU: Apple M1 Pro, 8P + 2E, 16-core GPU, 16-core Neural Engine 31 | 32 | - OS: Ventura 13.5 (22G74) 33 | 34 | - Python Version: 3.9.16 35 | 36 | - PyTorch Version: 2.0.0 37 | 38 | ### Test Parameters 39 | 40 | - **Test Model**: `torchvision.models.efficientnet_b4` 41 | 42 | - **Input Size**: `batch_size, 3, 224, 224` 43 | 44 | - **Inference Runs**: 20 times (the average of the last 10 runs is taken) 45 | 46 | - **Unit**: ms 47 | 48 | | PC/batch_size | 1 | 4 | 128 | 49 | |:------|:----:|:------:|:-:| 50 | | Python PyTorch CPU | 172 ms | 514 ms | * | 51 | | Python ONNX Runtime CPU | 12 ms | 30 ms | * | 52 | | Python OpenVINO CPU | 11 ms | 29 ms | * | 53 | | C++ ONNX Runtime CPU | 10 ms | 34 ms | 3800 ms | 54 | | C++ OpenVINO CPU | 10 ms | 26 ms | * | 55 | | C# ONNX Runtime CPU | 170 ms | 473 ms | 3876 ms | 56 | ||||| 57 | | Python PyTorch 1070Ti | 11 ms | 23 ms | * | 58 | | Python ONNX Runtime 1070Ti | 7 ms | 18 ms | 430 ms | 59 | | Python OpenVINO 1070Ti | 49 ms | * | * | 60 | | C++ ONNX Runtime 1070Ti | 7 ms | 17 ms | 424 ms | 61 | | C# ONNX Runtime 1070Ti | 7 ms | 17 ms | 427 ms | 62 | | C# DirectML 1070Ti | 12 ms | 31 ms | 812 ms | 63 | ||||| 64 | | Python OpenVINO A770 | 10 ms | 15 ms | 919 ms | 65 | | C++ OpenVINO A770 | 7 ms | 10 ms | 870 ms | 66 | | C# DirectML A770 | 9 ms | 19 ms | 485 ms | 67 | 68 | | MacBook/batch_size | 1 | 4 | 69 | |:------|:----:|:------:| 70 | | Python PyTorch CPU | 887 ms | 1207 ms | 71 | | Python PyTorch mps | 37 ms | 39 ms | 72 | | Python ONNX Runtime CPU | 59 ms | 208 ms | 73 | 74 | 75 | ## Instructions 76 | 77 | To test inference speed, either export an ONNX file using the provided Python script or use your own ONNX model. Depending on the model, you may also need to update the file path, input shape, input name, or data type in the code (a quick way to read these off a model is sketched right after this README). 78 | 79 | ### Python 80 | 81 | Run the following command to export an ONNX model: 82 | 83 | ``` 84 | python export_onnx.py 85 | ``` 86 | 87 | Run the following command to benchmark the exported model with the Python ONNX Runtime: 88 | 89 | ``` 90 | python test_python.py model.onnx 91 | ``` 92 | 93 | 94 | ### Windows/C++ 95 | #### Attention 96 | 97 | ONNX Runtime 1.15.1 is built against CUDA 11.8, so if you only have CUDA 12.2 installed you might experience crashes when trying to use the GPU. To avoid this, install CUDA 11.8 to ensure all necessary DLLs are available.
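Before benchmarking on the GPU it is worth confirming that the CUDA execution provider actually loads, since ONNX Runtime falls back to the CPU provider when the CUDA DLLs cannot be found. A minimal check from Python (a sketch; assumes the `onnxruntime-gpu` package and an exported `model.onnx`):

```
import onnxruntime as ort

# CUDAExecutionProvider must appear here, otherwise the GPU build is not installed.
print(ort.get_available_providers())

sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
# get_providers() reports what the session actually uses after any fallback.
print(sess.get_providers())
```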
98 | 99 | #### ONNX Runtime: 100 | 101 | 1 Download the ONNX Runtime release from https://github.com/microsoft/onnxruntime/releases/tag/v1.15.1 102 | 103 | 2 Download either `onnxruntime-win-x64-1.15.1.zip` or `onnxruntime-win-x64-gpu-1.15.1.zip` 104 | 105 | 3 Unzip the downloaded file and place its contents in either `windows_sln\onnxruntime_windows_cpp_cpu` or `windows_sln\onnxruntime_windows_cpp_gpu` 106 | 107 | 4 Depending on the version you downloaded, you may need to update the following project settings: 108 | - Properties -> C/C++ -> General -> Additional Include Directories 109 | - Properties -> Linker -> Input -> Additional Dependencies 110 | - Build Events -> Post-Build Event -> Command Line 111 | 112 | #### OpenVINO: 113 | 114 | 1 Download OpenVINO from https://www.intel.cn/content/www/cn/zh/developer/tools/openvino-toolkit/overview.html and unzip it. 115 | 116 | 2 Create a new folder named `openvino` in `windows_sln\openvino_windows_cpp\`. 117 | 118 | 3 Copy `openvino_toolkit\runtime\lib` and `openvino_toolkit\runtime\include` to `windows_sln\openvino_windows_cpp\openvino` 119 | 120 | 4 Build the project. 121 | 122 | 5 Copy all `.dll` files from `openvino_toolkit\runtime\bin\intel64\Release\` and `openvino_toolkit\runtime\3rdparty\tbb\bin\` to the output directory `windows_sln\x64\Release\`. 123 | 124 | ### Windows/C# 125 | 126 | Install one of the following NuGet packages: `Microsoft.ML.OnnxRuntime.DirectML`, `Microsoft.ML.OnnxRuntime`, or `Microsoft.ML.OnnxRuntime.Gpu`. 127 | 128 | After adding the ONNX file to the project, change its "Build Action" property to "Content". 129 | 130 | ![PC](/pc.jpg "PC") --------------------------------------------------------------------------------
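As the instructions note, a custom model may use a different input name, shape, or data type than the scripts assume. A quick way to inspect them before editing the code, a sketch using the same `onnxruntime` introspection calls as `test_python.py`:

```
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    # e.g. input ['batch_size', 3, 224, 224] tensor(float)
    print(inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print(out.name, out.shape, out.type)
```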
/export_onnx.py: -------------------------------------------------------------------------------- 1 | import time 2 | 3 | from pathlib import Path 4 | 5 | import onnx 6 | import openvino as ov 7 | import torch 8 | import torchvision 9 | 10 | skip_cpu = True  # set to False to also benchmark PyTorch on the CPU 11 | onnx_path = Path("model.onnx") 12 | batch_size = 1 13 | input_size = (batch_size, 3, 224, 224) 14 | 15 | # model = torchvision.models.efficientnet_v2_l() 16 | model = torchvision.models.efficientnet_b4() 17 | 18 | def export_onnx(model): 19 | model.eval().to("cpu") 20 | input = torch.randn(*input_size).to("cpu") 21 | torch.onnx.export( 22 | model, 23 | input, 24 | onnx_path, 25 | input_names=["input"], 26 | output_names=["output"], 27 | dynamic_axes={"input": {0: "batch_size"}, "output": {0: "batch_size"}},  # dynamic batch dimension 28 | ) 29 | onnx_model = onnx.load(onnx_path) 30 | onnx.checker.check_model(onnx_model) 31 | print(f"ONNX model saved: {onnx_path}") 32 | return onnx_path 33 | 34 | def export_openvino_ir(onnx_path): 35 | ov_model = ov.convert_model(onnx_path) 36 | ov_path = onnx_path.with_suffix(".xml") 37 | ov.save_model(ov_model, ov_path) 38 | print(f"OpenVINO IR model saved: {ov_path}") 39 | 40 | results = [] 41 | 42 | def test_Pytorch(model): 43 | devices = [] if skip_cpu else ["cpu"] 44 | if torch.cuda.is_available(): 45 | devices.append("cuda") 46 | if torch.backends.mps.is_available(): 47 | devices.append("mps") 48 | # TODO ROCm 49 | for device in devices: 50 | print(f"PyTorch {device}...") 51 | model.eval().to(device) 52 | input = torch.randn(*input_size).to(device) 53 | 54 | total_time = 0.0 55 | epoch = 20 56 | for i in range(epoch): 57 | start_time = time.time() 58 | output = model(input) 59 | end_time = time.time() 60 | elapsed_time = (end_time - start_time) * 1000 61 | print(f"PyTorch {device}. Running time: {elapsed_time:.0f} ms") 62 | if i >= epoch - 10:  # only the last 10 runs count towards the average 63 | total_time += elapsed_time 64 | total_time /= 10 65 | result = f"PyTorch {device}. Average running time in last 10 epochs: {total_time:.0f} ms" 66 | print(result) 67 | results.append(result) 68 | 69 | 70 | export_onnx(model) 71 | export_openvino_ir(onnx_path) 72 | 73 | test_Pytorch(model) 74 | print(f"\nResults (batch_size: {batch_size}):") 75 | list(map(print, results)) 76 | -------------------------------------------------------------------------------- /pc.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/zhangchaosd/ModelInferBench/970f3cd3f57fce446fd2048261a811f4533b9985/pc.jpg -------------------------------------------------------------------------------- /test_python.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import time 4 | 5 | import numpy as np 6 | import onnxruntime as ort 7 | import openvino as ov 8 | 9 | skip_cpu = False 10 | # onnx_path = "model.onnx" 11 | onnx_path = sys.argv[1] 12 | batch_size = 1 13 | sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"]) 14 | 15 | # get input infos 16 | input_name = sess.get_inputs()[0].name 17 | input_shape = sess.get_inputs()[0].shape 18 | if isinstance(input_shape[0], str):  # dynamic batch axis, e.g. "batch_size" 19 | input_shape[0] = batch_size 20 | input_type = sess.get_inputs()[0].type 21 | if input_type == "tensor(float)": 22 | np_type = np.float32 23 | elif input_type == "tensor(uint8)": 24 | np_type = np.uint8 25 | else: 26 | print(f"Unsupported input dtype: {input_type}") 27 | sys.exit(1) 28 | 29 | 30 | results = [] 31 | 32 | 33 | def test_ONNXRuntime(): 34 | available_providers = ort.get_available_providers() 35 | if "TensorrtExecutionProvider" in available_providers: available_providers.remove("TensorrtExecutionProvider")  # skip TensorRT; it requires a separate setup 36 | if skip_cpu: 37 | available_providers.remove("CPUExecutionProvider") 38 | print(f"ONNX Runtime available_providers: {available_providers}") 39 | for provider in available_providers: 40 | print(f"ONNX Runtime {provider}...") 41 | sess = ort.InferenceSession(onnx_path, providers=[provider]) 42 | input = np.random.rand(*input_shape).astype(np_type) 43 | times = [] 44 | epoch = 20 45 | for _ in range(epoch): 46 | start_time = time.time() 47 | output = sess.run(None, {input_name: input})[0] 48 | end_time = time.time() 49 | elapsed_time = (end_time - start_time) * 1000 50 | print(f"ONNX Runtime: {provider}. Running time: {elapsed_time:.0f} ms") 51 | times.append(elapsed_time) 52 | result = f"ONNX Runtime: {provider}. 
Average running time in last 10 epochs: {np.mean(times[-10:]):.0f} ms" 53 | print(result) 54 | results.append(result) 55 | 56 | 57 | def test_onnxruntime_directml(): 58 | # pip install onnxruntime-directml 59 | pass 60 | 61 | 62 | def test_OpenVINO(): 63 | # initialize OpenVINO 64 | core = ov.Core() 65 | 66 | # print available devices 67 | devices = core.available_devices 68 | for device in devices: 69 | device_name = core.get_property(device, "FULL_DEVICE_NAME") 70 | print(f"{device}: {device_name}") 71 | 72 | if skip_cpu: 73 | devices.remove("CPU") 74 | 75 | for device in devices: 76 | print(f"OpenVINO {device} {core.get_property(device, 'FULL_DEVICE_NAME')}...") 77 | 78 | # Construct input 79 | input_tensor = np.random.rand(*input_shape).astype(np_type) 80 | c_input_image = np.ascontiguousarray(input_tensor, dtype=np_type) 81 | input_tensor = ov.Tensor(c_input_image, shared_memory=True)  # zero-copy wrapper around the numpy array 82 | 83 | config = {"PERFORMANCE_HINT": "LATENCY"} 84 | if device == "CPU": 85 | config["INFERENCE_NUM_THREADS"] = os.cpu_count() 86 | ov_model = ov.convert_model(onnx_path) 87 | compiled_model = core.compile_model(ov_model, device, config=config) 88 | 89 | times = [] 90 | epoch = 20 91 | for _ in range(epoch): 92 | start_time = time.time() 93 | result = compiled_model(input_tensor)[compiled_model.output(0)][0] 94 | end_time = time.time() 95 | elapsed_time = (end_time - start_time) * 1000 96 | print(f"OpenVINO {device}. Running time: {elapsed_time:.0f} ms") 97 | times.append(elapsed_time) 98 | result = f"OpenVINO: {device}. Average running time in last 10 epochs: {np.mean(times[-10:]):.0f} ms" 99 | print(result) 100 | results.append(result) 101 | del compiled_model 102 | 103 | 104 | test_ONNXRuntime() 105 | test_OpenVINO() 106 | 107 | print(f"\nResults (input_shape: {input_shape}, dtype: {np_type}):") 108 | list(map(print, results)) 109 | -------------------------------------------------------------------------------- /windows_sln/onnxruntime_windows.sln: -------------------------------------------------------------------------------- 1 | 2 | Microsoft Visual Studio Solution File, Format Version 12.00 3 | # Visual Studio Version 17 4 | VisualStudioVersion = 17.7.34018.315 5 | MinimumVisualStudioVersion = 10.0.40219.1 6 | Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "onnxruntime_windows_cpp_cpu", "onnxruntime_windows_cpp_cpu\onnxruntime_windows_cpp_cpu.vcxproj", "{15A02801-C43B-409B-A7AE-AD39DCA3AB15}" 7 | EndProject 8 | Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "onnxruntime_windows_cpp_gpu", "onnxruntime_windows_cpp_gpu\onnxruntime_windows_cpp_gpu.vcxproj", "{E0359D5C-B860-4159-85E3-517AED065C18}" 9 | EndProject 10 | Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "onnxruntime_windows_c#", "onnxruntime_windows_c#\onnxruntime_windows_c#.csproj", "{EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}" 11 | EndProject 12 | Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "openvino_windows_cpp", "openvino_windows_cpp\openvino_windows_cpp.vcxproj", "{121470C6-94BA-46E2-B45C-37E535C62762}" 13 | EndProject 14 | Global 15 | GlobalSection(SolutionConfigurationPlatforms) = preSolution 16 | Debug|Any CPU = Debug|Any CPU 17 | Debug|x64 = Debug|x64 18 | Debug|x86 = Debug|x86 19 | Release|Any CPU = Release|Any CPU 20 | Release|x64 = Release|x64 21 | Release|x86 = Release|x86 22 | EndGlobalSection 23 | GlobalSection(ProjectConfigurationPlatforms) = postSolution 24 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Debug|Any CPU.ActiveCfg = Debug|x64 25 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Debug|Any CPU.Build.0 = 
Debug|x64 26 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Debug|x64.ActiveCfg = Debug|x64 27 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Debug|x64.Build.0 = Debug|x64 28 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Debug|x86.ActiveCfg = Debug|Win32 29 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Debug|x86.Build.0 = Debug|Win32 30 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Release|Any CPU.ActiveCfg = Release|x64 31 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Release|Any CPU.Build.0 = Release|x64 32 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Release|x64.ActiveCfg = Release|x64 33 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Release|x64.Build.0 = Release|x64 34 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Release|x86.ActiveCfg = Release|Win32 35 | {15A02801-C43B-409B-A7AE-AD39DCA3AB15}.Release|x86.Build.0 = Release|Win32 36 | {E0359D5C-B860-4159-85E3-517AED065C18}.Debug|Any CPU.ActiveCfg = Debug|x64 37 | {E0359D5C-B860-4159-85E3-517AED065C18}.Debug|Any CPU.Build.0 = Debug|x64 38 | {E0359D5C-B860-4159-85E3-517AED065C18}.Debug|x64.ActiveCfg = Debug|x64 39 | {E0359D5C-B860-4159-85E3-517AED065C18}.Debug|x64.Build.0 = Debug|x64 40 | {E0359D5C-B860-4159-85E3-517AED065C18}.Debug|x86.ActiveCfg = Debug|Win32 41 | {E0359D5C-B860-4159-85E3-517AED065C18}.Debug|x86.Build.0 = Debug|Win32 42 | {E0359D5C-B860-4159-85E3-517AED065C18}.Release|Any CPU.ActiveCfg = Release|x64 43 | {E0359D5C-B860-4159-85E3-517AED065C18}.Release|Any CPU.Build.0 = Release|x64 44 | {E0359D5C-B860-4159-85E3-517AED065C18}.Release|x64.ActiveCfg = Release|x64 45 | {E0359D5C-B860-4159-85E3-517AED065C18}.Release|x64.Build.0 = Release|x64 46 | {E0359D5C-B860-4159-85E3-517AED065C18}.Release|x86.ActiveCfg = Release|Win32 47 | {E0359D5C-B860-4159-85E3-517AED065C18}.Release|x86.Build.0 = Release|Win32 48 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Debug|Any CPU.ActiveCfg = Debug|Any CPU 49 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Debug|Any CPU.Build.0 = Debug|Any CPU 50 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Debug|x64.ActiveCfg = Debug|Any CPU 51 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Debug|x64.Build.0 = Debug|Any CPU 52 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Debug|x86.ActiveCfg = Debug|Any CPU 53 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Debug|x86.Build.0 = Debug|Any CPU 54 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Release|Any CPU.ActiveCfg = Release|Any CPU 55 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Release|Any CPU.Build.0 = Release|Any CPU 56 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Release|x64.ActiveCfg = Release|Any CPU 57 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Release|x64.Build.0 = Release|Any CPU 58 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Release|x86.ActiveCfg = Release|Any CPU 59 | {EE53FCFA-C3EC-45C0-B05C-B67A5727CB1B}.Release|x86.Build.0 = Release|Any CPU 60 | {121470C6-94BA-46E2-B45C-37E535C62762}.Debug|Any CPU.ActiveCfg = Debug|x64 61 | {121470C6-94BA-46E2-B45C-37E535C62762}.Debug|Any CPU.Build.0 = Debug|x64 62 | {121470C6-94BA-46E2-B45C-37E535C62762}.Debug|x64.ActiveCfg = Debug|x64 63 | {121470C6-94BA-46E2-B45C-37E535C62762}.Debug|x64.Build.0 = Debug|x64 64 | {121470C6-94BA-46E2-B45C-37E535C62762}.Debug|x86.ActiveCfg = Debug|Win32 65 | {121470C6-94BA-46E2-B45C-37E535C62762}.Debug|x86.Build.0 = Debug|Win32 66 | {121470C6-94BA-46E2-B45C-37E535C62762}.Release|Any CPU.ActiveCfg = Release|x64 67 | {121470C6-94BA-46E2-B45C-37E535C62762}.Release|Any CPU.Build.0 = Release|x64 68 | {121470C6-94BA-46E2-B45C-37E535C62762}.Release|x64.ActiveCfg = Release|x64 69 | {121470C6-94BA-46E2-B45C-37E535C62762}.Release|x64.Build.0 = Release|x64 70 | 
{121470C6-94BA-46E2-B45C-37E535C62762}.Release|x86.ActiveCfg = Release|Win32 71 | {121470C6-94BA-46E2-B45C-37E535C62762}.Release|x86.Build.0 = Release|Win32 72 | EndGlobalSection 73 | GlobalSection(SolutionProperties) = preSolution 74 | HideSolutionNode = FALSE 75 | EndGlobalSection 76 | GlobalSection(ExtensibilityGlobals) = postSolution 77 | SolutionGuid = {726F4CE9-73E8-4B2C-9083-211A2325D050} 78 | EndGlobalSection 79 | EndGlobal 80 | -------------------------------------------------------------------------------- /windows_sln/onnxruntime_windows_c#/Program.cs: -------------------------------------------------------------------------------- 1 | using Microsoft.ML.OnnxRuntime; 2 | using Microsoft.ML.OnnxRuntime.Tensors; 3 | using System; 4 | using System.Diagnostics; 5 | 6 | namespace OnnxRuntimeCSharpExample 7 | { 8 | class Program 9 | { 10 | enum Device 11 | { 12 | // You may need to change this 13 | Onnxruntime_cpu, 14 | Onnxruntime_gpu, 15 | DirectML_A770_16G, // Microsoft.ML.OnnxRuntime.DirectML 16 | DirectML_GTX1070Ti, // Microsoft.ML.OnnxRuntime.DirectML 17 | } 18 | static void Main(string[] args) 19 | { 20 | 21 | int batch_size = 1; 22 | Device device = Device.Onnxruntime_cpu; 23 | string modelPath = "model.onnx"; 24 | 25 | Console.WriteLine($"batch_size: {batch_size}"); 26 | 27 | 28 | var options = new SessionOptions(); 29 | // Note: you may need to change the device above. 30 | // Note: install the NuGet package that matches the chosen device first. 31 | switch (device) 32 | { 33 | case Device.Onnxruntime_cpu: 34 | break; 35 | case Device.Onnxruntime_gpu: 36 | options.AppendExecutionProvider_CUDA(0); 37 | break; 38 | case Device.DirectML_A770_16G: 39 | options.AppendExecutionProvider_DML(0); 40 | break; 41 | case Device.DirectML_GTX1070Ti: 42 | options.AppendExecutionProvider_DML(1); 43 | break; 44 | default: 45 | break; 46 | } 47 | using var session = new InferenceSession(modelPath, options); 48 | float[] inputData = new float[batch_size * 3 * 224 * 224]; 49 | 50 | var tensor = new DenseTensor<float>(inputData, new int[] { batch_size, 3, 224, 224 }); 51 | var inputs = new NamedOnnxValue[] { NamedOnnxValue.CreateFromTensor("input", tensor) }; 52 | 53 | 54 | long total_time = 0; 55 | for (int i = 0; i < 20; i++) 56 | { 57 | Stopwatch stopwatch = new Stopwatch(); 58 | stopwatch.Start(); 59 | using var results = session.Run(inputs); // results is IDisposable 60 | stopwatch.Stop(); 61 | long elapsedMilliseconds = stopwatch.ElapsedMilliseconds; 62 | Console.WriteLine($"{i}: Elapsed time: {elapsedMilliseconds} ms"); 63 | if (i >= 10) // only the last 10 runs count towards the average 64 | { 65 | total_time += elapsedMilliseconds; 66 | } 67 | } 68 | total_time /= 10; 69 | Console.WriteLine($"{device}: Average elapsed time: {total_time} ms"); 70 | Console.ReadLine(); 71 | } 72 | } 73 | } 74 | -------------------------------------------------------------------------------- /windows_sln/onnxruntime_windows_c#/onnxruntime_windows_c#.csproj: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Exe 5 | net6.0 6 | onnxruntime_windows_c_ 7 | enable 8 | enable 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | Always 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | -------------------------------------------------------------------------------- /windows_sln/onnxruntime_windows_cpp_cpu/onnxruntime_windows_cpp_cpu.cpp: -------------------------------------------------------------------------------- 1 | // onnxruntime_windows_cpp_cpu.cpp : This file contains the 'main' function. Program execution begins and ends there.
2 | // 3 | 4 | #include <chrono> 5 | #include <iostream> 6 | 7 | #include <onnxruntime_cxx_api.h> 8 | 9 | class Timer { 10 | public: 11 | Timer() : start_time(std::chrono::high_resolution_clock::now()) {} 12 | 13 | void reset() { 14 | start_time = std::chrono::high_resolution_clock::now(); 15 | } 16 | 17 | long long elapsedMilliseconds() { 18 | auto end_time = std::chrono::high_resolution_clock::now(); 19 | return std::chrono::duration_cast<std::chrono::milliseconds>(end_time - start_time).count(); 20 | } 21 | 22 | private: 23 | std::chrono::high_resolution_clock::time_point start_time; 24 | }; 25 | 26 | int main() 27 | { 28 | auto providers = Ort::GetAvailableProviders(); 29 | for (auto provider : providers) { 30 | std::cout << provider << std::endl; 31 | } 32 | 33 | const int64_t batch_size = 1; 34 | std::cout << "batch_size:" << batch_size << std::endl; 35 | 36 | // Initialize ONNX Runtime 37 | Ort::Env env; 38 | 39 | // Initialize session 40 | Ort::Session onnx_session(env, L"../../model.onnx", Ort::SessionOptions{ nullptr }); 41 | 42 | // Create input tensor objects (This might differ based on your model) 43 | auto memory_info = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU); 44 | std::unique_ptr<float_t[]> input_image_(new float_t[batch_size * 3 * 224 * 224]); 45 | std::array<int64_t, 4> input_shape_{ batch_size, 3, 224, 224 }; 46 | Ort::Value input_tensor = Ort::Value::CreateTensor<float>(memory_info, input_image_.get(), batch_size * 3 * 224 * 224, 47 | input_shape_.data(), input_shape_.size()); 48 | 49 | // Run model 50 | std::vector<const char*> input_node_names = { "input" }; 51 | std::vector<const char*> output_node_names = { "output" }; 52 | 53 | long long total_time = 0; 54 | Timer timer; 55 | for (int i = 0; i < 20; i++) { 56 | timer.reset(); 57 | auto outputs = onnx_session.Run(Ort::RunOptions{ nullptr }, input_node_names.data(), &input_tensor, 1, output_node_names.data(), 1); 58 | long long elapsed_time = timer.elapsedMilliseconds(); 59 | std::cout << i << " Elapsed time: " << elapsed_time << " ms" << std::endl; 60 | if (i >= 10) { // only the last 10 runs count towards the average 61 | total_time += elapsed_time; 62 | } 63 | } 64 | std::cout << "Running done" << std::endl; 65 | std::cout << "ONNX Runtime CPU: Average elapsed time: " << total_time / 10 << " ms" << std::endl; 66 | int a; 67 | std::cin >> a; 68 | 69 | return 0; 70 | } 71 | -------------------------------------------------------------------------------- /windows_sln/onnxruntime_windows_cpp_cpu/onnxruntime_windows_cpp_cpu.vcxproj: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Debug 6 | Win32 7 | 8 | 9 | Release 10 | Win32 11 | 12 | 13 | Debug 14 | x64 15 | 16 | 17 | Release 18 | x64 19 | 20 | 21 | 22 | 17.0 23 | Win32Proj 24 | {15a02801-c43b-409b-a7ae-ad39dca3ab15} 25 | onnxruntimewindowscppcpu 26 | 10.0 27 | 28 | 29 | 30 | Application 31 | true 32 | v143 33 | Unicode 34 | 35 | 36 | Application 37 | false 38 | v143 39 | true 40 | Unicode 41 | 42 | 43 | Application 44 | true 45 | v143 46 | Unicode 47 | 48 | 49 | Application 50 | false 51 | v143 52 | true 53 | Unicode 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | Level3 76 | true 77 | WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) 78 | true 79 | ./onnxruntime-win-x64-1.15.1/include;%(AdditionalIncludeDirectories) 80 | 81 | 82 | Console 83 | true 84 | ./onnxruntime-win-x64-1.15.1/lib/onnxruntime.lib;%(AdditionalDependencies) 85 | 86 | 87 | copy "$(ProjectDir)onnxruntime-win-x64-1.15.1\lib\onnxruntime.dll" "$(OutDir)" 88 | 89 | 90 | 91 | 92 | Level3 93 | true 94 | true 95 | true 96 | 
WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) 97 | true 98 | ./onnxruntime-win-x64-1.15.1/include;%(AdditionalIncludeDirectories) 99 | 100 | 101 | Console 102 | true 103 | true 104 | true 105 | ./onnxruntime-win-x64-1.15.1/lib/onnxruntime.lib;%(AdditionalDependencies) 106 | 107 | 108 | copy "$(ProjectDir)onnxruntime-win-x64-1.15.1\lib\onnxruntime.dll" "$(OutDir)" 109 | 110 | 111 | 112 | 113 | Level3 114 | true 115 | _DEBUG;_CONSOLE;%(PreprocessorDefinitions) 116 | true 117 | ./onnxruntime-win-x64-1.15.1/include;%(AdditionalIncludeDirectories) 118 | 119 | 120 | Console 121 | true 122 | ./onnxruntime-win-x64-1.15.1/lib/onnxruntime.lib;%(AdditionalDependencies) 123 | 124 | 125 | copy "$(ProjectDir)onnxruntime-win-x64-1.15.1\lib\onnxruntime.dll" "$(OutDir)" 126 | 127 | 128 | 129 | 130 | Level3 131 | true 132 | true 133 | true 134 | NDEBUG;_CONSOLE;%(PreprocessorDefinitions) 135 | true 136 | ./onnxruntime-win-x64-1.15.1/include;%(AdditionalIncludeDirectories) 137 | 138 | 139 | Console 140 | true 141 | true 142 | true 143 | ./onnxruntime-win-x64-1.15.1/lib/onnxruntime.lib;%(AdditionalDependencies) 144 | 145 | 146 | copy "$(ProjectDir)onnxruntime-win-x64-1.15.1\lib\onnxruntime.dll" "$(OutDir)" 147 | 148 | 149 | 150 | 151 | 152 | 153 | 154 | 155 | -------------------------------------------------------------------------------- /windows_sln/onnxruntime_windows_cpp_cpu/onnxruntime_windows_cpp_cpu.vcxproj.filters: -------------------------------------------------------------------------------- 1 |  2 | 3 | 4 | 5 | {4FC737F1-C7A5-4376-A066-2A32D752A2FF} 6 | cpp;c;cc;cxx;c++;cppm;ixx;def;odl;idl;hpj;bat;asm;asmx 7 | 8 | 9 | {93995380-89BD-4b04-88EB-625FBE52EBFB} 10 | h;hh;hpp;hxx;h++;hm;inl;inc;ipp;xsd 11 | 12 | 13 | {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} 14 | rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms 15 | 16 | 17 | 18 | 19 | Source Files 20 | 21 | 22 | -------------------------------------------------------------------------------- /windows_sln/onnxruntime_windows_cpp_gpu/onnxruntime_windows_cpp_gpu.cpp: -------------------------------------------------------------------------------- 1 | // onnxruntime_windows_cpp_gpu.cpp : This file contains the 'main' function. Program execution begins and ends there. 
2 | // 3 | 4 | #include <chrono> 5 | #include <iostream> 6 | 7 | #include <onnxruntime_cxx_api.h> 8 | 9 | class Timer { 10 | public: 11 | Timer() : start_time(std::chrono::high_resolution_clock::now()) {} 12 | 13 | void reset() { 14 | start_time = std::chrono::high_resolution_clock::now(); 15 | } 16 | 17 | long long elapsedMilliseconds() { 18 | auto end_time = std::chrono::high_resolution_clock::now(); 19 | return std::chrono::duration_cast<std::chrono::milliseconds>(end_time - start_time).count(); 20 | } 21 | 22 | private: 23 | std::chrono::high_resolution_clock::time_point start_time; 24 | }; 25 | 26 | int main() 27 | { 28 | auto providers = Ort::GetAvailableProviders(); 29 | for (auto provider : providers) { 30 | std::cout << provider << std::endl; 31 | } 32 | 33 | const int64_t batch_size = 1; 34 | std::cout << "batch_size:" << batch_size << std::endl; 35 | 36 | // Initialize ONNX Runtime 37 | Ort::Env env; 38 | 39 | Ort::SessionOptions session_options; 40 | // Appending the CUDA execution provider below is the only difference from the CPU version 41 | session_options.AppendExecutionProvider_CUDA(OrtCUDAProviderOptions{}); 42 | 43 | // Initialize session 44 | Ort::Session onnx_session(env, L"../../model.onnx", session_options); 45 | 46 | // Create input tensor objects (This might differ based on your model) 47 | auto memory_info = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU); 48 | std::unique_ptr<float_t[]> input_image_(new float_t[batch_size * 3 * 224 * 224]); 49 | std::array<int64_t, 4> input_shape_{ batch_size, 3, 224, 224 }; 50 | Ort::Value input_tensor = Ort::Value::CreateTensor<float>(memory_info, input_image_.get(), batch_size * 3 * 224 * 224, 51 | input_shape_.data(), input_shape_.size()); 52 | 53 | // Run model 54 | std::vector<const char*> input_node_names = { "input" }; 55 | std::vector<const char*> output_node_names = { "output" }; 56 | 57 | long long total_time = 0; 58 | Timer timer; 59 | for (int i = 0; i < 20; i++) { 60 | timer.reset(); 61 | auto outputs = onnx_session.Run(Ort::RunOptions{ nullptr }, input_node_names.data(), &input_tensor, 1, output_node_names.data(), 1); 62 | long long elapsed_time = timer.elapsedMilliseconds(); 63 | std::cout << i << " Elapsed time: " << elapsed_time << " ms" << std::endl; 64 | if (i >= 10) { // only the last 10 runs count towards the average 65 | total_time += elapsed_time; 66 | } 67 | } 68 | std::cout << "Running done" << std::endl; 69 | std::cout << "ONNX Runtime GPU: Average elapsed time: " << total_time / 10 << " ms" << std::endl; 70 | int a; 71 | std::cin >> a; 72 | 73 | return 0; 74 | } 75 | -------------------------------------------------------------------------------- /windows_sln/onnxruntime_windows_cpp_gpu/onnxruntime_windows_cpp_gpu.vcxproj: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Debug 6 | Win32 7 | 8 | 9 | Release 10 | Win32 11 | 12 | 13 | Debug 14 | x64 15 | 16 | 17 | Release 18 | x64 19 | 20 | 21 | 22 | 17.0 23 | Win32Proj 24 | {e0359d5c-b860-4159-85e3-517aed065c18} 25 | onnxruntimewindowscppgpu 26 | 10.0 27 | 28 | 29 | 30 | Application 31 | true 32 | v143 33 | Unicode 34 | 35 | 36 | Application 37 | false 38 | v143 39 | true 40 | Unicode 41 | 42 | 43 | Application 44 | true 45 | v143 46 | Unicode 47 | 48 | 49 | Application 50 | false 51 | v143 52 | true 53 | Unicode 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | Level3 76 | true 77 | WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) 78 | true 79 | ./onnxruntime-win-x64-gpu-1.15.1/include;%(AdditionalIncludeDirectories) 80 | 81 | 82 | Console 83 | true 84 | 
./onnxruntime-win-x64-gpu-1.15.1/lib/onnxruntime.lib;%(AdditionalDependencies) 85 | 86 | 87 | copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_shared.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_cuda.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_tensorrt.dll" "$(OutDir)" 88 | 89 | 90 | 91 | 92 | Level3 93 | true 94 | true 95 | true 96 | WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) 97 | true 98 | ./onnxruntime-win-x64-gpu-1.15.1/include;%(AdditionalIncludeDirectories) 99 | 100 | 101 | Console 102 | true 103 | true 104 | true 105 | ./onnxruntime-win-x64-gpu-1.15.1/lib/onnxruntime.lib;%(AdditionalDependencies) 106 | 107 | 108 | copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_shared.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_cuda.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_tensorrt.dll" "$(OutDir)" 109 | 110 | 111 | 112 | 113 | Level3 114 | true 115 | _DEBUG;_CONSOLE;%(PreprocessorDefinitions) 116 | true 117 | ./onnxruntime-win-x64-gpu-1.15.1/include;%(AdditionalIncludeDirectories) 118 | 119 | 120 | Console 121 | true 122 | ./onnxruntime-win-x64-gpu-1.15.1/lib/onnxruntime.lib;%(AdditionalDependencies) 123 | 124 | 125 | copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_shared.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_cuda.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_tensorrt.dll" "$(OutDir)" 126 | 127 | 128 | 129 | 130 | Level3 131 | true 132 | true 133 | true 134 | NDEBUG;_CONSOLE;%(PreprocessorDefinitions) 135 | true 136 | ./onnxruntime-win-x64-gpu-1.15.1/include;%(AdditionalIncludeDirectories) 137 | 138 | 139 | Console 140 | true 141 | true 142 | true 143 | ./onnxruntime-win-x64-gpu-1.15.1/lib/onnxruntime.lib;%(AdditionalDependencies) 144 | 145 | 146 | copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_shared.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_cuda.dll" "$(OutDir)" && copy "$(ProjectDir)onnxruntime-win-x64-gpu-1.15.1\lib\onnxruntime_providers_tensorrt.dll" "$(OutDir)" 147 | 148 | 149 | 150 | 151 | 152 | 153 | 154 | 155 | -------------------------------------------------------------------------------- /windows_sln/onnxruntime_windows_cpp_gpu/onnxruntime_windows_cpp_gpu.vcxproj.filters: -------------------------------------------------------------------------------- 1 |  2 | 3 | 4 | 5 | {4FC737F1-C7A5-4376-A066-2A32D752A2FF} 6 | cpp;c;cc;cxx;c++;cppm;ixx;def;odl;idl;hpj;bat;asm;asmx 7 | 8 | 9 | {93995380-89BD-4b04-88EB-625FBE52EBFB} 10 | h;hh;hpp;hxx;h++;hm;inl;inc;ipp;xsd 11 | 12 | 13 | {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} 14 | rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms 15 | 16 | 17 | 18 | 19 | Source Files 20 | 21 | 22 | -------------------------------------------------------------------------------- 
/windows_sln/openvino_windows_cpp/openvino_windows_cpp.cpp: -------------------------------------------------------------------------------- 1 | #include <algorithm> 2 | #include <chrono> 3 | #include <cstdio> 4 | #include <iostream> 5 | #include <iterator> 6 | #include <memory> 7 | #include <random> 8 | #include <string> 9 | #include <utility> 10 | #include <vector> 11 | 12 | #include "openvino/openvino.hpp" 13 | 14 | 15 | using namespace ov::preprocess; 16 | 17 | class Timer { 18 | public: 19 | Timer() : start_time(std::chrono::high_resolution_clock::now()) {} 20 | 21 | void reset() { 22 | start_time = std::chrono::high_resolution_clock::now(); 23 | } 24 | 25 | long long elapsedMilliseconds() { 26 | auto end_time = std::chrono::high_resolution_clock::now(); 27 | return std::chrono::duration_cast<std::chrono::milliseconds>(end_time - start_time).count(); 28 | } 29 | 30 | private: 31 | std::chrono::high_resolution_clock::time_point start_time; 32 | }; 33 | 34 | int main(int argc, char* argv[]) { 35 | std::cout << ov::get_openvino_version() << std::endl; 36 | 37 | const std::string model_path = "../../model.onnx"; 38 | size_t batch_size = 1; 39 | size_t input_width = 224; 40 | size_t input_height = 224; 41 | const std::string device_name = "CPU"; 42 | // const std::string device_name = "GPU"; 43 | const size_t shape[] = { batch_size, 3, input_height, input_width }; 44 | 45 | ov::Core core; 46 | 47 | std::cout << "Loading model files: " << model_path << std::endl; 48 | std::shared_ptr<ov::Model> model = core.read_model(model_path); 49 | 50 | OPENVINO_ASSERT(model->inputs().size() == 1, "Sample supports models with 1 input only"); 51 | OPENVINO_ASSERT(model->outputs().size() == 1, "Sample supports models with 1 output only"); 52 | 53 | std::string input_tensor_name = model->input().get_any_name(); 54 | //std::string output_tensor_name = model->output().get_any_name(); 55 | 56 | ov::CompiledModel compiled_model = core.compile_model(model, device_name); 57 | ov::InferRequest infer_request = compiled_model.create_infer_request(); 58 | 59 | const size_t total_size = shape[0] * shape[1] * shape[2] * shape[3]; 60 | std::shared_ptr<float> image_data(new float[total_size], std::default_delete<float[]>()); 61 | 62 | /*std::random_device rd; 63 | std::mt19937 gen(rd()); 64 | std::uniform_real_distribution<> dis(0, 1); 65 | float* raw_ptr = image_data.get(); 66 | for (size_t i = 0; i < total_size; ++i) { 67 | raw_ptr[i] = dis(gen); 68 | }*/ 69 | 70 | ov::Tensor input_tensor{ ov::element::f32, {batch_size, 3, input_height, input_width}, image_data.get() }; 71 | infer_request.set_tensor(input_tensor_name, input_tensor); 72 | 73 | long long total_time = 0; 74 | Timer timer; 75 | for (int i = 0; i < 20; i++) { 76 | timer.reset(); 77 | infer_request.infer(); 78 | long long elapsed_time = timer.elapsedMilliseconds(); 79 | std::cout << i << " Elapsed time: " << elapsed_time << " ms" << std::endl; 80 | if (i >= 10) { // only the last 10 runs count towards the average 81 | total_time += elapsed_time; 82 | } 83 | } 84 | std::cout << "Running done" << std::endl; 85 | std::cout << "OpenVINO C++ " << device_name << ": Average elapsed time: " << total_time / 10 << " ms" << std::endl; 86 | 87 | std::cout << "Infer done" << std::endl; 88 | int a; 89 | std::cin >> a; 90 | 91 | return 0; 92 | } 93 | 94 | -------------------------------------------------------------------------------- /windows_sln/openvino_windows_cpp/openvino_windows_cpp.vcxproj: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Debug 6 | Win32 7 | 8 | 9 | Release 10 | Win32 11 | 12 | 13 | Debug 14 | x64 15 | 16 | 17 | Release 18 | x64 19 | 20 | 21 | 22 | 17.0 23 | Win32Proj 24 | {121470c6-94ba-46e2-b45c-37e535c62762} 25 | 
openvinowindowscpp 26 | 10.0 27 | 28 | 29 | 30 | Application 31 | true 32 | v143 33 | Unicode 34 | 35 | 36 | Application 37 | false 38 | v143 39 | true 40 | Unicode 41 | 42 | 43 | Application 44 | true 45 | v143 46 | Unicode 47 | 48 | 49 | Application 50 | false 51 | v143 52 | true 53 | Unicode 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | Level3 76 | true 77 | WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) 78 | true 79 | openvino/include;%(AdditionalIncludeDirectories) 80 | 81 | 82 | Console 83 | true 84 | openvino\lib\intel64\Debug\openvinod.lib;%(AdditionalDependencies) 85 | 86 | 87 | 88 | 89 | Level3 90 | true 91 | true 92 | true 93 | WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) 94 | true 95 | openvino/include;%(AdditionalIncludeDirectories) 96 | 97 | 98 | Console 99 | true 100 | true 101 | true 102 | openvino\lib\intel64\Release\openvino.lib;%(AdditionalDependencies) 103 | 104 | 105 | 106 | 107 | Level3 108 | true 109 | _DEBUG;_CONSOLE;%(PreprocessorDefinitions) 110 | true 111 | openvino/include;%(AdditionalIncludeDirectories) 112 | 113 | 114 | Console 115 | true 116 | openvino\lib\intel64\Debug\openvinod.lib;%(AdditionalDependencies) 117 | 118 | 119 | 120 | 121 | Level3 122 | true 123 | true 124 | true 125 | NDEBUG;_CONSOLE;%(PreprocessorDefinitions) 126 | true 127 | openvino/include;%(AdditionalIncludeDirectories) 128 | 129 | 130 | Console 131 | true 132 | true 133 | true 134 | openvino\lib\intel64\Release\openvino.lib;%(AdditionalDependencies) 135 | 136 | 137 | 138 | 139 | 140 | 141 | 142 | 143 | -------------------------------------------------------------------------------- /windows_sln/openvino_windows_cpp/openvino_windows_cpp.vcxproj.filters: -------------------------------------------------------------------------------- 1 |  2 | 3 | 4 | 5 | {4FC737F1-C7A5-4376-A066-2A32D752A2FF} 6 | cpp;c;cc;cxx;c++;cppm;ixx;def;odl;idl;hpj;bat;asm;asmx 7 | 8 | 9 | {93995380-89BD-4b04-88EB-625FBE52EBFB} 10 | h;hh;hpp;hxx;h++;hm;inl;inc;ipp;xsd 11 | 12 | 13 | {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} 14 | rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms 15 | 16 | 17 | 18 | 19 | Source Files 20 | 21 | 22 | --------------------------------------------------------------------------------