├── README.md
├── auto_benchmark_app.py
├── requirements.txt
└── sample_result
└── result_0104-105607.csv
/README.md:
--------------------------------------------------------------------------------
1 | # Semi-automated OpenVINO benchmark_app with variable parameters
2 |
3 | ## Description
4 | This program lets you specify variable parameters for the OpenVINO benchmark_app and run the benchmark with all combinations of the given parameters automatically.
5 | The program generates a report file in CSV format with a date-and-time-coded file name ('`result_DDMM-HHMMSS.csv`'). You can analyze or visualize the benchmark results with MS Excel or another spreadsheet application.
6 |
7 | **The program is just a front-end for the official OpenVINO benchmark_app.**
8 | This program uses the benchmark_app as its core benchmark logic, so the performance results it measures are consistent with those measured by the benchmark_app directly.
9 | The command line parameters and their meanings are also compatible with the benchmark_app.
10 |
11 | ### Requirements
12 | - OpenVINO 2022.1 or higher
13 | This program is not compatible with OpenVINO 2021.
14 |
15 | ### How to run
16 | 1. Install required Python modules.
17 | ```sh
18 | python -m pip install --upgrade pip setuptools
19 | python -m pip install -r requirements.txt
20 | ```
21 |
22 | 2. Run the auto benchmark (command line example)
23 | ```sh
24 | python auto_benchmark_app.py -m resnet.xml -niter 100 -nthreads %1,2,4,8 -nstreams %1,2 -d %CPU,GPU -cdir cache
25 | ```
26 | With this command line, `-nthreads` has 4 options (1,2,4,8), `-nstreams` has 2 options (1,2), and `-d` has 2 options (CPU,GPU). As a result, 16 (4x2x2) benchmark runs will be performed in total.
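The combinations are just the Cartesian product of the variable parameters. A minimal, self-contained sketch of the expansion (plain Python, mirroring what the program does internally with `itertools.product`):

```python
import itertools

# Variable parameters from the example command line above
nthreads = [1, 2, 4, 8]    # from '%1,2,4,8'
nstreams = [1, 2]          # from '%1,2'
devices  = ['CPU', 'GPU']  # from '%CPU,GPU'

combinations = list(itertools.product(nthreads, nstreams, devices))
print(len(combinations))   # 16
print(combinations[0])     # (1, 1, 'CPU')
```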
27 |
28 | ### Parameter options
29 | You can specify variable parameters by adding one of the following prefixes to a parameter.
30 | |Prefix|Type|Description/Example|
31 | |---|---|---|
32 | |$|range|`$1,8,2` == `range(1,8,2)` => `[1,3,5,7]`<br>All `range()`-compatible expressions are possible, e.g. `$1,5` or `$5,1,-1`|
33 | |%|list|`%CPU,GPU` => `['CPU', 'GPU']`, `%1,2,4,8` => `[1,2,4,8]`|
34 | |@|ir-models|`@models` == IR models in the '`./models`' dir => `['resnet.xml', 'googlenet.xml', ...]`<br>This option recursively searches for '.xml' files in the specified directory.|
35 |
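For reference, the three prefixes expand roughly as follows. This is a simplified sketch of the parsing idea, not the exact code from `auto_benchmark_app.py` (which additionally verifies that each '.xml' has a matching '.bin'):

```python
import glob
import os

def expand_param(arg):
    """Expand a prefixed command line parameter into a list of values."""
    if arg.startswith('$'):   # range: '$1,8,2' -> range(1, 8, 2)
        return list(range(*[int(x) for x in arg[1:].split(',')]))
    if arg.startswith('%'):   # list: '%CPU,GPU' -> ['CPU', 'GPU']
        return arg[1:].split(',')
    if arg.startswith('@'):   # ir-models: recursive '.xml' search under the dir
        return glob.glob(os.path.join(arg[1:], '**', '*.xml'), recursive=True)
    return [arg]              # fixed parameter

print(expand_param('$1,8,2'))    # [1, 3, 5, 7]
print(expand_param('%CPU,GPU'))  # ['CPU', 'GPU']
```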
36 | ### Examples of command line
37 | `python auto_benchmark_app.py -cdir cache -m resnet.xml -nthreads $1,5 -nstreams %1,2,4,8 -d %CPU,GPU`
38 | - Run benchmark with `-nthreads`=`range(1,5)`=[1,2,3,4], `-nstreams`=[1,2,4,8], `-d`=['CPU','GPU']. Total 32 combinations.
39 |
40 | `python auto_benchmark_app.py -m @models -niter 100 -nthreads %1,2,4,8 -nstreams %1,2 -d CPU -cdir cache`
41 | - Run benchmark with `-m`=[all .xml files in the `models` directory], `-nthreads`=[1,2,4,8], `-nstreams`=[1,2]. Total 8 combinations per model.
42 |
43 | ### Example of a result file
44 | The last 4 items in each line are the performance data, in the order 'count', 'duration (ms)', 'latency AVG (ms)', and 'throughput (fps)'.
45 | ```
46 | #CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
47 | #MEM: 33947893760
48 | #OS: Windows-10-10.0.22000-SP0
49 | #OpenVINO: 2022.1.0-7019-cdb9bec7210-releases/2022/1
50 | #Last 4 items in the lines : test count, duration (ms), latency AVG (ms), and throughput (fps)
51 | benchmark_app.py,-m,models\FP16\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,772.55,30.20,129.44
52 | benchmark_app.py,-m,models\FP16\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,1917.62,75.06,52.15
53 | benchmark_app.py,-m,models\FP16\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,195.28,7.80,512.10
54 | benchmark_app.py,-m,models\FP16-INT8\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,337.09,24.75,308.53
55 | benchmark_app.py,-m,models\FP16-INT8\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,1000.39,38.85,99.96
56 | benchmark_app.py,-m,models\FP16-INT8\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,64.22,4.69,1619.38
57 | benchmark_app.py,-m,models\FP32\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,778.90,30.64,128.39
58 | benchmark_app.py,-m,models\FP32\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,1949.73,76.91,51.29
59 | benchmark_app.py,-m,models\FP32\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,182.59,7.58,547.69
60 | benchmark_app.py,-m,models\FP32-INT8\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,331.73,24.90,313.51
61 | benchmark_app.py,-m,models\FP32-INT8\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,968.38,38.45,103.27
62 | benchmark_app.py,-m,models\FP32-INT8\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,67.70,5.04,1536.23
63 | benchmark_app.py,-m,models\FP16\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1536.14,15.30,65.10
64 | benchmark_app.py,-m,models\FP16\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,3655.59,36.50,27.36
65 | benchmark_app.py,-m,models\FP16\squeezenet1.1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,366.73,3.68,272.68
66 | benchmark_app.py,-m,models\FP16-INT8\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,872.87,8.66,114.56
67 | benchmark_app.py,-m,models\FP16-INT8\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1963.67,19.54,50.93
68 | benchmark_app.py,-m,models\FP16-INT8\squeezenet1.1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,242.28,2.34,412.74
69 | benchmark_app.py,-m,models\FP32\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1506.14,14.96,66.39
70 | benchmark_app.py,-m,models\FP32\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,3593.88,35.88,27.83
71 | benchmark_app.py,-m,models\FP32\squeezenet1.1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,366.28,3.56,273.01
72 | benchmark_app.py,-m,models\FP32-INT8\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,876.52,8.69,114.09
73 | benchmark_app.py,-m,models\FP32-INT8\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1934.72,19.25,51.69
74 | ```
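Because the report is plain CSV with `#`-prefixed header lines, it is easy to post-process. A small sketch (the sample line below is taken from the output above; `load_results` is a hypothetical helper, not part of this repo):

```python
import csv
import io

sample = """#CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
#OpenVINO: 2022.1.0-7019-cdb9bec7210-releases/2022/1
benchmark_app.py,-m,models/FP16/googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,772.55,30.20,129.44
"""

def load_results(f):
    """Parse a result file, skipping the '#'-prefixed header lines."""
    return [row for row in csv.reader(f) if row and not row[0].startswith('#')]

rows = load_results(io.StringIO(sample))  # in practice: load_results(open('result_DDMM-HHMMSS.csv'))
for row in rows:
    count, duration, latency, throughput = row[-4:]
    print(throughput)  # '129.44'
```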
75 |
77 |
--------------------------------------------------------------------------------
/auto_benchmark_app.py:
--------------------------------------------------------------------------------
1 | # This program requires OpenVINO 2022.1 or higher.
2 |
3 | import sys
4 | import glob
5 | import re
6 | import os
7 | import datetime
8 | import platform
9 | import itertools
10 |
11 | import cpuinfo
12 | import psutil
13 |
14 | import openvino
15 | from openvino.tools.benchmark.main import main as benchmark_app
16 |
17 | dry_run = False
18 |
19 | def help():
20 | app_name = sys.argv[0]
21 | print('OpenVINO Benchmark_app extension - Parametric benchmark front end')
22 | print()
23 |     print('You can specify variable parameters by adding one of the following prefixes to the parameters.')
24 | print(' $ : range - $1,8,2 == range(1,8,2) => [1,3,5,7]')
25 | print(' % : list - %CPU,GPU => [\'CPU\', \'GPU\'], %1,2,4,8 => [1,2,4,8]')
26 | print(' @ : ir-models - @models == IR models in the \'models\' dir => [\'resnet.xml\', \'googlenet.xml\', ...]')
27 | print('Example:')
28 | print(' python {} -cdir cache -m resnet.xml -nthreads $1,6,2 -nstreams %1,2,4,8 -d %CPU,GPU'.format(app_name))
29 | print(' python {} -m @models -niter 100 -nthreads %1,2,4,8 -nstreams %1,2 -d %CPU -cdir cache'.format(app_name))
30 | print()
31 |     print('The program will generate a report file in CSV format. The file name is generated from the current date and time => \'result_DDMM-HHMMSS.csv\'')
32 |
33 | def search_ir_models(dir):
34 | xmls = glob.glob(dir+'/**/*.xml', recursive=True)
35 | models = []
36 | for xml in xmls:
37 | path, filename = os.path.split(xml)
38 | base, ext = os.path.splitext(filename)
39 | if os.path.isfile(os.path.join(path, base+'.bin')):
40 | models.append(xml)
41 | return models
42 |
43 | def find_result(filename, item):
44 | with open(filename, 'rt') as f:
45 | for line in f:
46 | line = line.rstrip('\n')
47 | res = re.findall(item + r'\s+([0-9\.]+)', line)
48 | if len(res)>0:
49 | return res[0]
50 | return None
51 |
52 |
53 | def main():
54 | global dry_run
55 | if len(sys.argv)<2:
56 | help()
57 | return 0
58 |
59 | argstr = ''
60 | params = []
61 | for i, arg in enumerate(sys.argv):
62 |         if arg == '-h' or arg == '--help':
63 | help()
64 | return 0
65 | if i == 0:
66 | argstr += 'benchmark_app.py'
67 | elif arg[0] == '$':
68 | params.append(list(eval('range({})'.format(arg[1:]))))
69 | argstr += ' {}'
70 | elif arg[0] == '%':
71 | params.append(list(arg[1:].split(',')))
72 | argstr += ' {}'
73 | elif arg[0] == '@':
74 | # search IR models
75 | ir_search_root_dir = arg[1:]
76 | if os.path.isdir(ir_search_root_dir):
77 | models = search_ir_models(ir_search_root_dir)
78 | print(models)
79 | else:
80 |                 print('The directory {} does not exist.'.format(ir_search_root_dir))
81 | return -1
82 | print('{} models found.'.format(len(models)))
83 | params.append(models)
84 | argstr += ' {}'
85 | else:
86 | argstr += ' '+arg
87 | print(argstr)
88 |
89 |     combinations = tuple(itertools.product(*params))
90 | print('Total number of parameter combinations:', len(combinations))
91 |
92 | tmplog='tmplog.txt'
93 | now = datetime.datetime.now()
94 | with open(now.strftime('result_%d%m-%H%M%S.csv'), 'wt') as log:
95 | cpu_info = cpuinfo.get_cpu_info()
96 | print('#CPU:', cpu_info['brand_raw'], file=log)
97 | print('#MEM:', psutil.virtual_memory().total, file=log)
98 | print('#OS:', platform.platform(), file=log)
99 | print('#OpenVINO:', openvino.runtime.get_version(), file=log)
100 |         print('#Last 4 items in the lines : test count, duration (ms), latency AVG (ms), and throughput (fps)', file=log)
101 | for combination in combinations:
102 | cmd_str = argstr.format(*combination)
103 |
104 | argv = cmd_str.split(' ')
105 | print(*argv, sep=',', end='', file=log)
106 | print(',', end='', file=log)
107 |
108 |             if dry_run:
109 | continue
110 |
111 | sys.argv = argv
112 | with open(tmplog, 'wt') as htmplog:
113 | sys.stdout = htmplog # redirect output
114 |                 sys.stderr = open(os.devnull, 'wt') # suppress stderr (assigning None would break any write to stderr)
115 |                 benchmark_app() # call benchmark_app
116 |                 sys.stderr.close(); sys.stderr = sys.__stderr__
117 |                 sys.stdout = sys.__stdout__
118 |
119 | count = find_result(tmplog, 'Count:')
120 | duration = find_result(tmplog, 'Duration:')
121 |             latency = find_result(tmplog, r'\s+AVG:') # Latency = Median, AVG, MAX, MIN
122 | throughput = find_result(tmplog, 'Throughput:')
123 |
124 | print(count, duration, latency, throughput, sep=',', file=log)
125 |
126 | if __name__ == '__main__':
127 | sys.exit(main())
128 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | psutil
2 | py-cpuinfo
3 | openvino==2022.1.0
4 | openvino-dev==2022.1.0
5 | #openvino-dev[tensorflow2,onnx,pytorch,caffe,kaldi]==2022.1.0
6 |
--------------------------------------------------------------------------------
/sample_result/result_0104-105607.csv:
--------------------------------------------------------------------------------
1 | #CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
2 | #MEM: 33947893760
3 | #OS: Windows-10-10.0.22000-SP0
4 | #OpenVINO: 2022.1.0-7019-cdb9bec7210-releases/2022/1
5 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,1,-nstreams,5,-d,CPU,-cdir=.,12,281.24,91.61,42.67
6 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,2,-nstreams,5,-d,CPU,-cdir=.,10,281.25,128.15,35.56
7 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,4,-nstreams,5,-d,CPU,-cdir=.,10,218.75,104.07,45.71
8 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,8,-nstreams,5,-d,CPU,-cdir=.,10,187.50,93.59,53.33
9 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,1,-nstreams,4,-d,CPU,-cdir=.,12,359.38,114.93,33.39
10 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,2,-nstreams,4,-d,CPU,-cdir=.,12,265.79,86.91,45.15
11 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,4,-nstreams,4,-d,CPU,-cdir=.,12,296.85,97.93,40.43
12 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,8,-nstreams,4,-d,CPU,-cdir=.,12,218.74,71.91,54.86
13 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,1,-nstreams,3,-d,CPU,-cdir=.,12,328.12,83.32,36.57
14 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,2,-nstreams,3,-d,CPU,-cdir=.,12,374.98,90.46,32.00
15 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,4,-nstreams,3,-d,CPU,-cdir=.,12,374.96,92.58,32.00
16 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,8,-nstreams,3,-d,CPU,-cdir=.,12,250.01,60.31,48.00
17 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,1,-nstreams,2,-d,CPU,-cdir=.,10,421.90,81.69,23.70
18 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,2,-nstreams,2,-d,CPU,-cdir=.,10,390.63,76.89,25.60
19 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,4,-nstreams,2,-d,CPU,-cdir=.,10,234.38,43.85,42.67
20 | benchmark_app.py,-m,n:\omz2022.1\public\resnet-50-tf\FP16\resnet-50-tf.xml,-niter,10,-nthreads,8,-nstreams,2,-d,CPU,-cdir=.,10,140.61,26.31,71.12
21 |
--------------------------------------------------------------------------------