├── Jupyter
│   ├── README.md
│   ├── inference.py
│   ├── store_aisle_monitor_jupyter.ipynb
│   └── store_aisle_monitor_jupyter.py
├── LICENSE
├── README.md
├── application
│   ├── inference.py
│   └── store_aisle_monitor.py
├── docs
│   └── images
│       ├── figure1.png
│       ├── figure2.png
│       ├── figure3.png
│       ├── jupy1.png
│       └── jupy2.png
├── resources
│   └── config.json
└── setup.sh
/Jupyter/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # Store Aisle Monitor
4 |
5 |
6 | | Details | |
7 | |-----------------------|------------------|
8 | | Target OS | Ubuntu\* 18.04 LTS |
9 | | Programming Language | Python* 3.6 |
10 | | Time to complete | 30 min |
11 |
12 |
13 | 
14 |
15 | ## Introduction
16 |
17 | This reference implementation counts the number of people present in an image and generates a motion heatmap. It takes input from a camera or a video file for processing. Snapshots of the output are taken at regular intervals and uploaded to the cloud. The snapshots are also stored locally.
18 |
19 | ## Requirements
20 |
21 | ### Hardware
22 | * 6th to 8th generation Intel® Core™ processors with Intel® Iris® Pro graphics or Intel® HD Graphics
23 |
24 | ### Software
25 |
26 | * [Ubuntu* 18.04](http://releases.ubuntu.com/18.04/)
27 | * OpenCL™ Runtime Package
28 | **Note**: We recommend using a 4.14+ kernel to run this software. Run the following command to determine your kernel version:
29 | ```
30 | uname -a
31 | ```
32 | * Intel® Distribution of OpenVINO™ toolkit 2020 R3 Release
33 | * Microsoft Azure* Python SDK
34 |
35 | ## How it Works
36 | - The application uses a video source, such as a camera or a video file, to grab the frames. The [OpenCV functions](https://docs.opencv.org/3.4/dd/d43/tutorial_py_video_display.html) are used to calculate the frame width, frame height and frames per second (fps) of the video source. The application counts the number of people and generates a motion heatmap.
37 |
38 | 
39 |
40 | - People counter: A trained neural network model detects the people in the frame, and bounding boxes are drawn around the people detected. This reference implementation uses the pre-trained model **person-detection-retail-0013**, which can be downloaded using the **model downloader** provided by the Intel® Distribution of OpenVINO™ toolkit.
41 |
42 | - Motion Heatmap generation: Every frame is preprocessed and added to an accumulated frame, which is used to generate the motion heatmap using [applyColorMap](https://docs.opencv.org/3.4/d3/d50/group__imgproc__colormap.html#gadf478a5e5ff49d8aa24e726ea6f65d15). The original frame and the heatmap frame are merged using [addWeighted](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_core/py_image_arithmetics/py_image_arithmetics.html) to visualize the movement patterns over time (a minimal sketch of this logic appears after the figure below).
43 |
44 | - The heatmap frame and the people counter frame are merged using [addWeighted](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_core/py_image_arithmetics/py_image_arithmetics.html), and this merged frame is saved locally at regular intervals. The output is saved in the *output_snapshots* directory of the project directory.
45 |
46 | - The application also uploads the results to the Microsoft Azure cloud at regular intervals, if a Microsoft Azure storage name and key are provided.
47 |
48 | 
49 |
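The following is a minimal, self-contained sketch of the heatmap logic described above (an illustration only: the input path is a placeholder, and the constants mirror the application code):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("path_to_video/video1.mp4")  # placeholder input path
mog = cv2.createBackgroundSubtractorMOG2()          # background subtractor
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
accumulated = np.zeros((h, w), np.uint8)            # running motion accumulator

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Foreground mask, thresholded to a small value so the accumulator saturates slowly
    mask = mog.apply(gray)
    binary = cv2.threshold(mask, 2, 2, cv2.THRESH_BINARY)[1]
    accumulated = cv2.add(binary, accumulated)
    heatmap = cv2.applyColorMap(accumulated, cv2.COLORMAP_HOT)
    # Blend the original frame and the heatmap (the weights sum to 1)
    overlay = cv2.addWeighted(frame, 0.65, heatmap, 0.35, 0)
cap.release()
```
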
50 | ## Setup
51 |
52 | ### Install Intel® Distribution of OpenVINO™ toolkit
53 |
54 | Refer to [https://software.intel.com/en-us/articles/OpenVINO-Install-Linux](https://software.intel.com/en-us/articles/OpenVINO-Install-Linux) for more information on how to install and set up the Intel® Distribution of OpenVINO™ toolkit.
55 |
56 | The OpenCL™ Runtime package is required to run the inference on a GPU. It is not mandatory for CPU inference.
57 |
58 | ### Other dependencies
59 | **Microsoft Azure python SDK**
60 | The Azure python SDK allows you to build applications against Microsoft Azure Storage. [Azure Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction) is Microsoft's cloud storage solution for modern data storage scenarios. Azure Storage offers a massively scalable object store for data objects, a file system service for the cloud, a messaging store for reliable messaging, and a NoSQL store.
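
As a concrete reference, the snippet below mirrors the Azure Storage calls this application makes (it uses the legacy `azure-storage-blob` 2.x `BlockBlobService` API; the account name, key and file names are placeholders):

```python
from azure.storage.blob import BlockBlobService, PublicAccess

# Placeholder credentials; copy the real values from the Azure portal
block_blob_service = BlockBlobService(account_name="myaccount", account_key="mykey")

# Create the container and make its blobs publicly readable
block_blob_service.create_container("store-aisle-monitor-snapshots")
block_blob_service.set_container_acl("store-aisle-monitor-snapshots",
                                     public_access=PublicAccess.Container)

# Upload a local snapshot, using the file name as the blob name
block_blob_service.create_blob_from_path("store-aisle-monitor-snapshots",
                                         "output_snapshot.png",
                                         "output_snapshots/output_snapshot.png")
```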
61 |
62 |
63 |
64 | ### Which model to use
65 |
66 | This application uses the [**person-detection-retail-0013**](https://docs.openvinotoolkit.org/2020.3/_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html) Intel® pre-trained model, which can be accessed using the **model downloader**. The **model downloader** downloads the __.xml__ and __.bin__ files that will be used by the application.
67 |
68 | To download the model and install the dependencies of the application, run the command below in the `store-aisle-monitor-python` directory:
69 | ```
70 | ./setup.sh
71 | ```
72 |
73 | ### The Config File
74 |
75 | The _resources/config.json_ file contains the path of the video that will be used by the application as input.
76 |
77 | For example:
78 | ```
79 | {
80 | "inputs": [
81 | {
82 | "video":"path_to_video/video1.mp4"
83 | }
84 | ]
85 | }
86 | ```
87 |
88 | Here, `path_to_video/video1.mp4` is the path to an input video file.
89 |
90 | ### Which Input Video to use
91 |
92 | We recommend using [store-aisle-detection](https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/store-aisle-detection.mp4).
93 | For example:
94 | ```
95 | {
96 | "inputs": [
97 | {
98 |             "video":"sample-videos/store-aisle-detection.mp4"
99 | }
100 | ]
101 | }
102 | ```
103 | To use any other video, provide its path in the config.json file.
104 |
105 |
106 | ### Using the Camera Stream instead of video
107 |
108 | To use a camera stream instead, replace the video path in the config.json file with the camera ID, where the ID is taken from the video device (the number X in /dev/videoX).
109 |
110 | On Ubuntu, to list all available video devices use the following command:
111 |
112 | ```
113 | ls /dev/video*
114 | ```
115 |
116 | For example, if the output of the above command is __/dev/video0__, then config.json would be:
117 |
118 | ```
119 | {
120 | "inputs": [
121 | {
122 | "video":"0"
123 | }
124 | ]
125 | }
126 | ```
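
Internally, a purely numeric value from config.json is treated as a camera index rather than a file path; this minimal sketch mirrors the application logic:

```python
import cv2

video = "0"  # value read from config.json
# A numeric string selects a camera by index; anything else is a video file path
cap = cv2.VideoCapture(int(video) if video.isdigit() else video)
```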
127 |
128 | ### Setup the environment
129 |
130 | Configure the environment to use the Intel® Distribution of OpenVINO™ toolkit one time per session by running the following command:
131 |
132 | source /opt/intel/openvino/bin/setupvars.sh
133 |
134 | **Note:** This command needs to be executed only once in the terminal where the application will be executed. If the terminal is closed, the command needs to be executed again.
135 |
136 |
137 | ### Run the application on Jupyter*
138 |
139 | Go to the store-aisle-monitor-python directory and open the Jupyter notebook by running the following commands:
140 |
141 | ```
142 |     cd store-aisle-monitor-python/Jupyter
143 | jupyter notebook
144 | ```
145 |
146 | #### Follow the steps to run the code on Jupyter*:
147 |
148 | 
149 |
150 |
151 | 1. Click the **New** button on the right side of the Jupyter window.
152 |
153 | 2. Select the **Python 3** option from the drop-down list.
154 |
155 | 3. In the first cell, type **import os** and press **Shift+Enter**.
156 |
157 | 4. Export the environment variables given below in the second cell of Jupyter and press **Shift+Enter**.
158 | ```
159 | %env MODEL = /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP32/person-detection-retail-0013.xml
160 | %env INPUT = resources/store-aisle-detection.mp4
161 | ```
162 | 5. Set the detection probability threshold (PROB_THRESHOLD) and the target device to infer on (DEVICE), if required.
163 | Export these environment variables as given below; otherwise skip this step and the default values are used (a sketch of how these defaults are resolved appears after these steps).
164 | ```
165 | %env DEVICE = CPU
166 | %env PROB_THRESHOLD = 0.7
167 | ```
168 |
169 | 6. To upload the results to the Microsoft Azure cloud (optional), export the environment variables below with an appropriate Microsoft Azure storage name and key.
170 | ```
171 | %env ACCOUNT_NAME =
172 | %env ACCOUNT_KEY =
173 | ```
174 | 7. To run the application in sync mode, export the environment variable **%env FLAG = sync**. By default, the application runs in async mode.
175 | 8. Copy the code from **store_aisle_monitor_jupyter.py** and paste it in the next cell and press **Shift+Enter**.
176 |
177 | 9. Alternatively, the code can be run in the following way:
178 |
179 | i. Click on the **store_aisle_monitor_jupyter.ipynb** notebook file from the Jupyter notebook home window.
180 |
181 |    ii. Click on the **Kernel** menu and then select **Restart & Run All** from the drop-down list.
182 |
183 | iii. On the pop-up window, click on **Restart and Run All Cells**.
184 |
185 | 
186 |
187 | **NOTE:**
188 |
189 | 1. To run the application on **GPU**:
190 | * With the floating point precision 32 (FP32), change the **%env DEVICE = CPU** to **%env DEVICE = GPU**
191 | * With the floating point precision 16 (FP16), change the environment variables as given below:
192 |
193 | %env DEVICE = GPU
194 | %env MODEL=/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml
195 |
196 | 2. To run the application on **Intel® Neural Compute Stick**:
197 | * Change the **%env DEVICE = CPU** to **%env DEVICE = MYRIAD**
198 | * The Intel® Neural Compute Stick can only run FP16 models. Hence change the environment variable for the model as shown below.
199 |
200 | %env MODEL=/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml
201 |
202 | 3. To run the application on **Intel® Movidius™ VPU**:
203 | * Change the **%env DEVICE = CPU** to **%env DEVICE = HDDL**
204 |    * The HDDL-R can only run FP16 models. Change the environment variable for the model as shown below; the model passed to the application must be of data type FP16.
205 |
206 | %env MODEL=/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml
207 |
208 |
217 |
218 | 4. To obtain the **account name** and **account key** from the **Azure portal**, refer to:
219 | https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python#copy-your-credentials-from-the-azure-portal
220 | 5. To view the uploaded snapshots on the cloud, refer to:
221 | https://docs.microsoft.com/en-us/azure/storage/blobs/storage-upload-process-images?tabs=net#verify-the-image-is-shown-in-the-storage-account
222 |
223 |
--------------------------------------------------------------------------------
/Jupyter/inference.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | """
3 | Copyright (c) 2018 Intel Corporation.
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining
6 | a copy of this software and associated documentation files (the
7 | "Software"), to deal in the Software without restriction, including
8 | without limitation the rights to use, copy, modify, merge, publish,
9 | distribute, sublicense, and/or sell copies of the Software, and to
10 | permit persons to whom the Software is furnished to do so, subject to
11 | the following conditions:
12 |
13 | The above copyright notice and this permission notice shall be
14 | included in all copies or substantial portions of the Software.
15 |
16 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
17 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
18 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
19 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
20 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
21 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
22 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
23 | """
24 |
25 | import os
26 | import sys
27 | import logging as log
28 | from openvino.inference_engine import IECore
29 |
30 |
31 | class Network:
32 | """
33 | Load and configure inference plugins for the specified target devices
34 | and performs synchronous and asynchronous modes for the specified infer requests.
35 | """
36 |
37 | def __init__(self):
38 | self.net = None
39 | self.plugin = None
40 | self.input_blob = None
41 | self.out_blob = None
42 | self.net_plugin = None
43 | self.infer_request_handle = None
44 |
45 | def load_model(self, model, device, input_size, output_size, num_requests, cpu_extension=None, plugin=None):
46 | """
47 |         Loads a network to the Inference Engine plugin.
48 | :param model: .xml file of pre trained model
49 | :param cpu_extension: extension for the CPU device
50 | :param device: Target device
51 | :param input_size: Number of input layers
52 | :param output_size: Number of output layers
53 |         :param num_requests: Number of infer requests. Limited to device capabilities.
54 | :param plugin: Plugin for specified device
55 |         :return: Plugin and shape of the input layer
56 | """
57 |
58 | model_xml = model
59 | model_bin = os.path.splitext(model_xml)[0] + ".bin"
60 | # Plugin initialization for specified device
61 | # and load extensions library if specified
62 | if not plugin:
63 | log.info("Initializing plugin for {} device...".format(device))
64 | self.plugin = IECore()
65 | else:
66 | self.plugin = plugin
67 |
68 | if cpu_extension and 'CPU' in device:
69 | self.plugin.add_extension(cpu_extension, "CPU")
70 |
71 | # Read IR
72 | log.info("Reading IR...")
73 | self.net = self.plugin.read_network(model=model_xml, weights=model_bin)
74 | log.info("Loading IR to the plugin...")
75 |
76 | if device == "CPU":
77 | supported_layers = self.plugin.query_network(self.net, "CPU")
78 | not_supported_layers = \
79 | [l for l in self.net.layers.keys() if l not in supported_layers]
80 | if len(not_supported_layers) != 0:
81 | log.error("Following layers are not supported by "
82 | "the plugin for specified device {}:\n {}".
83 | format(device,
84 | ', '.join(not_supported_layers)))
85 | log.error("Please try to specify cpu extensions library path"
86 | " in command line parameters using -l "
87 | "or --cpu_extension command line argument")
88 | sys.exit(1)
89 |
90 | if num_requests == 0:
91 | # Loads network read from IR to the plugin
92 | self.net_plugin = self.plugin.load_network(network=self.net, device_name=device)
93 | else:
94 | self.net_plugin = self.plugin.load_network(network=self.net, num_requests=num_requests, device_name=device)
95 | # log.error("num_requests != 0")
96 |
97 | self.input_blob = next(iter(self.net.inputs))
98 | self.out_blob = next(iter(self.net.outputs))
99 |         assert len(self.net.inputs.keys()) == input_size, \
100 |             "Supports only {} input topologies".format(input_size)
101 |         assert len(self.net.outputs) == output_size, \
102 |             "Supports only {} output topologies".format(output_size)
103 |
104 | return self.plugin, self.get_input_shape()
105 |
106 | def get_input_shape(self):
107 | """
108 | Gives the shape of the input layer of the network.
109 |         :return: Shape of the input layer
110 | """
111 | return self.net.inputs[self.input_blob].shape
112 |
113 | def performance_counter(self, request_id):
114 | """
115 | Queries performance measures per layer to get feedback of what is the
116 | most time consuming layer.
117 | :param request_id: Index of Infer request value. Limited to device capabilities
118 | :return: Performance of the layer
119 | """
120 | perf_count = self.net_plugin.requests[request_id].get_perf_counts()
121 | return perf_count
122 |
123 | def exec_net(self, request_id, frame):
124 | """
125 | Starts asynchronous inference for specified request.
126 | :param request_id: Index of Infer request value. Limited to device capabilities.
127 | :param frame: Input image
128 | :return: Instance of Executable Network class
129 | """
130 | self.infer_request_handle = self.net_plugin.start_async(
131 | request_id=request_id, inputs={self.input_blob: frame})
132 | return self.net_plugin
133 |
134 | def wait(self, request_id):
135 | """
136 | Waits for the result to become available.
137 | :param request_id: Index of Infer request value. Limited to device capabilities.
138 |         :return: Status of the inference request (0 on success)
139 | """
140 | wait_process = self.net_plugin.requests[request_id].wait(-1)
141 | return wait_process
142 |
143 | def get_output(self, request_id, output=None):
144 | """
145 | Gives a list of results for the output layer of the network.
146 | :param request_id: Index of Infer request value. Limited to device capabilities.
147 | :param output: Name of the output layer
148 | :return: Results for the specified request
149 | """
150 | if output:
151 | res = self.infer_request_handle.outputs[output]
152 | else:
153 | res = self.net_plugin.requests[request_id].outputs[self.out_blob]
154 | return res
155 |
156 | def clean(self):
157 | """
158 | Deletes all the instances
159 | :return: None
160 | """
161 | del self.net_plugin
162 | del self.plugin
163 | del self.net
164 |
--------------------------------------------------------------------------------
/Jupyter/store_aisle_monitor_jupyter.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": null,
6 | "metadata": {},
7 | "outputs": [],
8 | "source": [
9 | "import os"
10 | ]
11 | },
12 | {
13 | "cell_type": "code",
14 | "execution_count": null,
15 | "metadata": {},
16 | "outputs": [],
17 | "source": [
18 | "%env MODEL = /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP32/person-detection-retail-0013.xml "
19 | ]
20 | },
21 | {
22 | "cell_type": "code",
23 | "execution_count": null,
24 | "metadata": {},
25 | "outputs": [],
26 | "source": [
27 | "\"\"\"Store Aisle Monitor\"\"\"\n",
28 | "\n",
29 | "\"\"\"\n",
30 | " Copyright (c) 2018 Intel Corporation.\n",
31 | " Permission is hereby granted, free of charge, to any person obtaining\n",
32 | " a copy of this software and associated documentation files (the\n",
33 | " \"Software\"), to deal in the Software without restriction, including\n",
34 | " without limitation the rights to use, copy, modify, merge, publish,\n",
35 | " distribute, sublicense, and/or sell copies of the Software, and to\n",
36 |     " permit persons to whom the Software is furnished to do so, subject to\n",
37 | " the following conditions:\n",
38 | " The above copyright notice and this permission notice shall be\n",
39 | " included in all copies or substantial portions of the Software.\n",
40 | " THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n",
41 | " EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n",
42 | " MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\n",
43 | " NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE\n",
44 | " LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n",
45 | " OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\n",
46 | " WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n",
47 | "\"\"\"\n",
48 | "\n",
49 | "\n",
50 | "import os\n",
51 | "import sys\n",
52 | "import time\n",
54 | "import pathlib\n",
55 | "import cv2\n",
56 | "import json\n",
57 | "import numpy as np\n",
58 | "from azure.storage.blob import BlockBlobService, PublicAccess\n",
59 | "from inference import Network\n",
60 | "\n",
61 | "CONFIG_FILE = '../resources/config.json'\n",
62 | "\n",
63 | "# Weightage/ratio to merge (for Heatmap) original frame and colorMap frame(sum of both should be 1)\n",
64 | "INITIAL_FRAME_WEIGHTAGE = 0.65\n",
65 | "COLORMAP_FRAME_WEIGHTAGE = 0.35\n",
66 | "\n",
67 | "# Weightage/ratio to merge (for integrated output) people count frame and colorMap frame(sum of both should be 1)\n",
68 | "P_COUNT_FRAME_WEIGHTAGE = 0.65\n",
69 | "COLORMAP_FRAME_WEIGHTAGE_1 = 0.35\n",
70 | "\n",
71 | "# Multiplication factor to compute time interval for uploading snapshots to the cloud\n",
72 | "MULTIPLICATION_FACTOR = 5\n",
73 | "\n",
74 | "# Azure Blob container name\n",
75 | "CONTAINER_NAME = 'store-aisle-monitor-snapshots'\n",
76 | "\n",
77 | "# To get current working directory\n",
78 | "CWD = os.getcwd()\n",
79 | "# Creates subdirectories to save output videos and snapshots\n",
80 | "pathlib.Path(CWD + '/output_snapshots/').mkdir(parents=True, exist_ok=True)\n",
81 | "\n",
82 | "\n",
83 | "def apply_time_stamp_and_save(image, people_count, upload_azure):\n",
84 | " \"\"\"\n",
85 | " Saves snapshots with timestamps.\n",
86 | " \"\"\"\n",
87 | " current_date_time = time.strftime(\"%y-%m-%d_%H:%M:%S\", time.gmtime())\n",
88 | " file_name = current_date_time + \"_PCount_\" + str(people_count) + \".png\"\n",
89 | " file_path = CWD + \"/output_snapshots/\"\n",
90 | " local_file_name = \"output_\" + file_name\n",
91 | " file_name = file_path + local_file_name\n",
92 | " cv2.imwrite(file_name, image)\n",
93 |     "    if upload_azure == 1:\n",
94 | " upload_snapshot(file_path, local_file_name)\n",
95 | "\n",
96 | "\n",
97 | "def create_cloud_container(account_name, account_key):\n",
98 | " \"\"\"\n",
99 | " Creates a BlockBlobService container on cloud.\n",
100 | " \"\"\"\n",
101 | " global BLOCK_BLOB_SERVICE\n",
102 | "\n",
103 | " # Create the BlockBlobService to call the Blob service for the storage account\n",
104 | " BLOCK_BLOB_SERVICE = BlockBlobService(account_name, account_key)\n",
105 | " # Create BlockBlobService container\n",
106 | " BLOCK_BLOB_SERVICE.create_container(CONTAINER_NAME)\n",
107 | " # Set the permission so that the blobs are public\n",
108 | " BLOCK_BLOB_SERVICE.set_container_acl(CONTAINER_NAME, public_access=PublicAccess.Container)\n",
109 | "\n",
110 | "\n",
111 | "def upload_snapshot(file_path, local_file_name):\n",
112 | " \"\"\"\n",
113 | " Uploads snapshots to cloud storage container.\n",
114 | " \"\"\"\n",
115 | " try:\n",
116 | "\n",
117 | " full_path_to_file = file_path + local_file_name\n",
118 | " print(\"\\nUploading to cloud storage as blob : \" + local_file_name)\n",
119 | " # Upload the snapshot, with local_file_name as the blob name\n",
120 | " BLOCK_BLOB_SERVICE.create_blob_from_path(CONTAINER_NAME, local_file_name, full_path_to_file)\n",
121 | "\n",
122 | " except Exception as e:\n",
123 | " print(e)\n",
124 | "\n",
125 | "\n",
126 | "def main():\n",
127 | " global CONFIG_FILE\n",
128 | " model_xml = (os.environ[\"MODEL\"])\n",
129 | " device = os.environ['DEVICE'] if 'DEVICE' in os.environ.keys() else 'CPU'\n",
130 | " cpu_extension = os.environ[\n",
131 | " 'CPU_EXTENSION'] if 'CPU_EXTENSION' in os.environ.keys() else None\n",
132 | " try:\n",
133 | " # Probability threshold for detections filtering\n",
134 | " prob_threshold = float(os.environ['PROB_THRESHOLD'])\n",
135 | " except KeyError:\n",
136 | " prob_threshold = 0.5\n",
137 | " try:\n",
138 | " # Specify the azure storage name to upload results to cloud.\n",
139 | " account_name = os.environ['ACCOUNT_NAME']\n",
140 |     "    except KeyError:\n",
141 | " account_name = None\n",
142 | " try:\n",
143 | " # Specify the azure storage key to upload results to cloud.\n",
144 | " account_key = os.environ['ACCOUNT_KEY']\n",
145 |     "    except KeyError:\n",
146 | " account_key = None \n",
147 | "\n",
148 |     "    if account_name == \"\" or account_key == \"\":\n",
149 | " print(\"Invalid account name or account key!\")\n",
150 | " sys.exit(1)\n",
151 | " elif account_name is not None and account_key is None:\n",
152 | " print(\"Please provide account key using -ak option!\")\n",
153 | " sys.exit(1) \n",
154 | " elif account_name is None and account_key is not None:\n",
155 | " print(\"Please provide account name using -an option!\")\n",
156 | " sys.exit(1) \n",
157 | " elif account_name is None and account_key is None:\n",
158 | " upload_azure = 0\n",
159 | " else:\n",
160 | " print(\"Uploading the results to Azure storage \\\"\"+ account_name+ \"\\\"\" )\n",
161 | " upload_azure = 1\n",
162 | " create_cloud_container(account_name, account_key)\n",
163 | " assert os.path.isfile(CONFIG_FILE), \"{} file doesn't exist\".format(CONFIG_FILE)\n",
164 | " config = json.loads(open(CONFIG_FILE).read())\n",
165 | " for idx, item in enumerate(config['inputs']):\n",
166 | " if item['video'].isdigit():\n",
167 | " input_stream = int(item['video'])\n",
168 | " cap = cv2.VideoCapture(input_stream)\n",
169 | " if not cap.isOpened():\n",
170 | " print(\"\\nCamera not plugged in... Exiting...\\n\")\n",
171 | " sys.exit(0)\n",
172 | " else:\n",
173 | " input_stream = item['video']\n",
174 | " cap = cv2.VideoCapture(input_stream)\n",
175 | " if not cap.isOpened():\n",
176 | " print(\"\\nUnable to open video file... Exiting...\\n\")\n",
177 | " sys.exit(0)\n",
178 | " fps = cap.get(cv2.CAP_PROP_FPS)\n",
179 | " flag = os.environ['FLAG'] if 'FLAG' in os.environ.keys() else \"async\"\n",
180 | " # Initialise the class\n",
181 | " infer_network = Network()\n",
182 | " # Load the network to IE plugin to get shape of input layer\n",
183 | " n, c, h, w = infer_network.load_model(model_xml, device, 1, 1, 2, cpu_extension)[1]\n",
184 | "\n",
185 | " print(\"To stop the execution press Esc button\")\n",
186 | " initial_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))\n",
187 | " initial_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n",
188 | " fps = int(cap.get(cv2.CAP_PROP_FPS))\n",
189 | " frame_count = 1\n",
190 | " accumulated_image = np.zeros((initial_h, initial_w), np.uint8)\n",
191 | " mog = cv2.createBackgroundSubtractorMOG2()\n",
192 | " ret, frame = cap.read()\n",
193 | " cur_request_id = 0\n",
194 | " next_request_id = 1\n",
195 | " if flag == \"sync\":\n",
196 |     "        print('Application running in Sync mode')\n",
197 | " is_async_mode = False\n",
198 | " else:\n",
199 |     "        print('Application running in Async mode')\n",
200 | " is_async_mode = True\n",
201 | "\n",
202 | " while cap.isOpened():\n",
203 | " ret, next_frame = cap.read()\n",
204 | " if not ret:\n",
205 | " break\n",
206 | " frame_count = frame_count + 1\n",
207 | " in_frame = cv2.resize(next_frame, (w, h))\n",
208 | " # Change data layout from HWC to CHW\n",
209 | " in_frame = in_frame.transpose((2, 0, 1)) \n",
210 | " in_frame = in_frame.reshape((n, c, h, w))\n",
211 | "\n",
212 | " # Start asynchronous inference for specified request.\n",
213 | " inf_start = time.time()\n",
214 | " if is_async_mode:\n",
215 | " infer_network.exec_net(next_request_id, in_frame)\n",
216 | " else:\n",
217 | " infer_network.exec_net(cur_request_id, in_frame)\n",
218 | "\n",
219 | " # Wait for the result\n",
220 | " if infer_network.wait(cur_request_id) == 0:\n",
221 | " det_time = time.time() - inf_start\n",
222 | " people_count = 0\n",
223 | "\n",
224 | " # Converting to Grayscale\n",
225 | " gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\n",
226 | "\n",
227 | " # Remove the background\n",
228 | " fgbgmask = mog.apply(gray)\n",
229 | "\n",
230 | " # Thresholding the image\n",
231 | " thresh = 2\n",
232 | " max_value = 2\n",
233 | " threshold_image = cv2.threshold(fgbgmask, thresh, max_value,\n",
234 | " cv2.THRESH_BINARY)[1]\n",
235 | " # Adding to the accumulated image\n",
236 | " accumulated_image = cv2.add(threshold_image, accumulated_image)\n",
237 | " colormap_image = cv2.applyColorMap(accumulated_image, cv2.COLORMAP_HOT)\n",
238 | "\n",
239 | " # Results of the output layer of the network\n",
240 | " res = infer_network.get_output(cur_request_id)\n",
241 | " for obj in res[0][0]:\n",
242 | " # Draw only objects when probability more than specified threshold\n",
243 | " if obj[2] > prob_threshold:\n",
244 | " xmin = int(obj[3] * initial_w)\n",
245 | " ymin = int(obj[4] * initial_h)\n",
246 | " xmax = int(obj[5] * initial_w)\n",
247 | " ymax = int(obj[6] * initial_h)\n",
248 | " class_id = int(obj[1])\n",
249 | " # Draw bounding box\n",
250 | " color = (min(class_id * 12.5, 255), min(class_id * 7, 255),\n",
251 | " min(class_id * 5, 255))\n",
252 | " cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color, 2)\n",
253 | " people_count = people_count + 1\n",
254 | "\n",
255 | " people_count_message = \"People Count : \" + str(people_count)\n",
256 |     "        inf_time_message = \"Inference time: N/A for async mode\" if is_async_mode else \\\n",
257 | " \"Inference time: {:.3f} ms\".format(det_time * 1000)\n",
258 | " cv2.putText(frame, inf_time_message, (15, 25), cv2.FONT_HERSHEY_COMPLEX, 1,\n",
259 | " (255, 255, 255), 2)\n",
260 | " cv2.putText(frame, people_count_message, (15, 65), cv2.FONT_HERSHEY_COMPLEX, 1,\n",
261 | " (255, 255, 255), 2)\n",
262 | " final_result_overlay = cv2.addWeighted(frame, P_COUNT_FRAME_WEIGHTAGE,\n",
263 | " colormap_image,\n",
264 | " COLORMAP_FRAME_WEIGHTAGE_1, 0)\n",
265 | " cv2.imshow(\"Detection Results\", final_result_overlay)\n",
266 | "\n",
267 | " time_interval = MULTIPLICATION_FACTOR * fps\n",
268 | " if frame_count % time_interval == 0:\n",
269 | " apply_time_stamp_and_save(final_result_overlay, people_count, upload_azure)\n",
270 | "\n",
271 | " frame = next_frame\n",
272 | " if is_async_mode:\n",
273 | " cur_request_id, next_request_id = next_request_id, cur_request_id\n",
274 | " key = cv2.waitKey(1)\n",
275 | " if key == 27:\n",
276 | " break\n",
277 | " cap.release()\n",
278 | " cv2.destroyAllWindows()\n",
279 | " infer_network.clean()\n",
280 | "\n",
281 | "\n",
282 | "if __name__ == '__main__':\n",
283 | " sys.exit(main() or 0)\n"
284 | ]
285 | },
286 | {
287 | "cell_type": "code",
288 | "execution_count": null,
289 | "metadata": {},
290 | "outputs": [],
291 | "source": []
292 | }
293 | ],
294 | "metadata": {
295 | "kernelspec": {
296 | "display_name": "Python 3",
297 | "language": "python",
298 | "name": "python3"
299 | },
300 | "language_info": {
301 | "codemirror_mode": {
302 | "name": "ipython",
303 | "version": 3
304 | },
305 | "file_extension": ".py",
306 | "mimetype": "text/x-python",
307 | "name": "python",
308 | "nbconvert_exporter": "python",
309 | "pygments_lexer": "ipython3",
310 | "version": "3.6.9"
311 | }
312 | },
313 | "nbformat": 4,
314 | "nbformat_minor": 2
315 | }
316 |
--------------------------------------------------------------------------------
/Jupyter/store_aisle_monitor_jupyter.py:
--------------------------------------------------------------------------------
1 | """Store Aisle Monitor"""
2 |
3 | """
4 | Copyright (c) 2018 Intel Corporation.
5 | Permission is hereby granted, free of charge, to any person obtaining
6 | a copy of this software and associated documentation files (the
7 | "Software"), to deal in the Software without restriction, including
8 | without limitation the rights to use, copy, modify, merge, publish,
9 | distribute, sublicense, and/or sell copies of the Software, and to
10 | permit persons to whom the Software is furnished to do so, subject to
11 | the following conditions:
12 | The above copyright notice and this permission notice shall be
13 | included in all copies or substantial portions of the Software.
14 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
15 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
16 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
17 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
18 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
19 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
20 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
21 | """
22 |
23 |
24 | import os
25 | import sys
26 | import time
28 | import pathlib
29 | import cv2
30 | import json
31 | import numpy as np
32 | from azure.storage.blob import BlockBlobService, PublicAccess
33 | from inference import Network
34 |
35 | CONFIG_FILE = '../resources/config.json'
36 |
37 | # Weightage/ratio to merge (for Heatmap) original frame and colorMap frame(sum of both should be 1)
38 | INITIAL_FRAME_WEIGHTAGE = 0.65
39 | COLORMAP_FRAME_WEIGHTAGE = 0.35
40 |
41 | # Weightage/ratio to merge (for integrated output) people count frame and colorMap frame(sum of both should be 1)
42 | P_COUNT_FRAME_WEIGHTAGE = 0.65
43 | COLORMAP_FRAME_WEIGHTAGE_1 = 0.35
44 |
45 | # Multiplication factor to compute time interval for uploading snapshots to the cloud
46 | MULTIPLICATION_FACTOR = 5
47 |
48 | # Azure Blob container name
49 | CONTAINER_NAME = 'store-aisle-monitor-snapshots'
50 |
51 | # To get current working directory
52 | CWD = os.getcwd()
53 | # Creates subdirectories to save output videos and snapshots
54 | pathlib.Path(CWD + '/output_snapshots/').mkdir(parents=True, exist_ok=True)
55 |
56 |
57 | def apply_time_stamp_and_save(image, people_count, upload_azure):
58 | """
59 | Saves snapshots with timestamps.
60 | """
61 | current_date_time = time.strftime("%y-%m-%d_%H:%M:%S", time.gmtime())
62 | file_name = current_date_time + "_PCount_" + str(people_count) + ".png"
63 | file_path = CWD + "/output_snapshots/"
64 | local_file_name = "output_" + file_name
65 | file_name = file_path + local_file_name
66 | cv2.imwrite(file_name, image)
67 |     if upload_azure == 1:
68 | upload_snapshot(file_path, local_file_name)
69 |
70 |
71 | def create_cloud_container(account_name, account_key):
72 | """
73 | Creates a BlockBlobService container on cloud.
74 | """
75 | global BLOCK_BLOB_SERVICE
76 |
77 | # Create the BlockBlobService to call the Blob service for the storage account
78 | BLOCK_BLOB_SERVICE = BlockBlobService(account_name, account_key)
79 | # Create BlockBlobService container
80 | BLOCK_BLOB_SERVICE.create_container(CONTAINER_NAME)
81 | # Set the permission so that the blobs are public
82 | BLOCK_BLOB_SERVICE.set_container_acl(CONTAINER_NAME, public_access=PublicAccess.Container)
83 |
84 |
85 | def upload_snapshot(file_path, local_file_name):
86 | """
87 | Uploads snapshots to cloud storage container.
88 | """
89 | try:
90 |
91 | full_path_to_file = file_path + local_file_name
92 | print("\nUploading to cloud storage as blob : " + local_file_name)
93 | # Upload the snapshot, with local_file_name as the blob name
94 | BLOCK_BLOB_SERVICE.create_blob_from_path(CONTAINER_NAME, local_file_name, full_path_to_file)
95 |
96 | except Exception as e:
97 | print(e)
98 |
99 |
100 | def main():
101 | global CONFIG_FILE
102 | model_xml = (os.environ["MODEL"])
103 | device = os.environ['DEVICE'] if 'DEVICE' in os.environ.keys() else 'CPU'
104 | cpu_extension = os.environ[
105 | 'CPU_EXTENSION'] if 'CPU_EXTENSION' in os.environ.keys() else None
106 | try:
107 | # Probability threshold for detections filtering
108 | prob_threshold = float(os.environ['PROB_THRESHOLD'])
109 | except KeyError:
110 | prob_threshold = 0.5
111 | try:
112 | # Specify the azure storage name to upload results to cloud.
113 | account_name = os.environ['ACCOUNT_NAME']
114 | except:
115 | account_name = None
116 | try:
117 | # Specify the azure storage key to upload results to cloud.
118 | account_key = os.environ['ACCOUNT_KEY']
119 | except:
120 | account_key = None
121 |
122 |     if account_name == "" or account_key == "":
123 | print("Invalid account name or account key!")
124 | sys.exit(1)
125 | elif account_name is not None and account_key is None:
126 | print("Please provide account key using -ak option!")
127 | sys.exit(1)
128 | elif account_name is None and account_key is not None:
129 | print("Please provide account name using -an option!")
130 | sys.exit(1)
131 | elif account_name is None and account_key is None:
132 | upload_azure = 0
133 | else:
134 | print("Uploading the results to Azure storage \""+ account_name+ "\"" )
135 | upload_azure = 1
136 | create_cloud_container(account_name, account_key)
137 | assert os.path.isfile(CONFIG_FILE), "{} file doesn't exist".format(CONFIG_FILE)
138 | config = json.loads(open(CONFIG_FILE).read())
139 | for idx, item in enumerate(config['inputs']):
140 | if item['video'].isdigit():
141 | input_stream = int(item['video'])
142 | cap = cv2.VideoCapture(input_stream)
143 | if not cap.isOpened():
144 | print("\nCamera not plugged in... Exiting...\n")
145 | sys.exit(0)
146 | else:
147 | input_stream = item['video']
148 | cap = cv2.VideoCapture(input_stream)
149 | if not cap.isOpened():
150 | print("\nUnable to open video file... Exiting...\n")
151 | sys.exit(0)
152 | fps = cap.get(cv2.CAP_PROP_FPS)
153 | flag = os.environ['FLAG'] if 'FLAG' in os.environ.keys() else "async"
154 | # Initialise the class
155 | infer_network = Network()
156 | # Load the network to IE plugin to get shape of input layer
157 | n, c, h, w = infer_network.load_model(model_xml, device, 1, 1, 2, cpu_extension)[1]
158 |
159 | print("To stop the execution press Esc button")
160 | initial_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
161 | initial_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
162 | fps = int(cap.get(cv2.CAP_PROP_FPS))
163 | frame_count = 1
164 | accumulated_image = np.zeros((initial_h, initial_w), np.uint8)
165 | mog = cv2.createBackgroundSubtractorMOG2()
166 | ret, frame = cap.read()
167 | cur_request_id = 0
168 | next_request_id = 1
169 | if flag == "sync":
170 |         print('Application running in Sync mode')
171 | is_async_mode = False
172 | else:
173 |         print('Application running in Async mode')
174 | is_async_mode = True
175 |
176 | while cap.isOpened():
177 | ret, next_frame = cap.read()
178 | if not ret:
179 | break
180 | frame_count = frame_count + 1
181 | in_frame = cv2.resize(next_frame, (w, h))
182 | # Change data layout from HWC to CHW
183 | in_frame = in_frame.transpose((2, 0, 1))
184 | in_frame = in_frame.reshape((n, c, h, w))
185 |
186 | # Start asynchronous inference for specified request.
187 | inf_start = time.time()
188 | if is_async_mode:
189 | infer_network.exec_net(next_request_id, in_frame)
190 | else:
191 | infer_network.exec_net(cur_request_id, in_frame)
192 |
193 | # Wait for the result
194 | if infer_network.wait(cur_request_id) == 0:
195 | det_time = time.time() - inf_start
196 | people_count = 0
197 |
198 | # Converting to Grayscale
199 | gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
200 |
201 | # Remove the background
202 | fgbgmask = mog.apply(gray)
203 |
204 | # Thresholding the image
205 | thresh = 2
206 | max_value = 2
207 | threshold_image = cv2.threshold(fgbgmask, thresh, max_value,
208 | cv2.THRESH_BINARY)[1]
209 | # Adding to the accumulated image
210 | accumulated_image = cv2.add(threshold_image, accumulated_image)
211 | colormap_image = cv2.applyColorMap(accumulated_image, cv2.COLORMAP_HOT)
212 |
213 | # Results of the output layer of the network
214 | res = infer_network.get_output(cur_request_id)
215 | for obj in res[0][0]:
216 | # Draw only objects when probability more than specified threshold
217 | if obj[2] > prob_threshold:
218 | xmin = int(obj[3] * initial_w)
219 | ymin = int(obj[4] * initial_h)
220 | xmax = int(obj[5] * initial_w)
221 | ymax = int(obj[6] * initial_h)
222 | class_id = int(obj[1])
223 | # Draw bounding box
224 | color = (min(class_id * 12.5, 255), min(class_id * 7, 255),
225 | min(class_id * 5, 255))
226 | cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color, 2)
227 | people_count = people_count + 1
228 |
229 | people_count_message = "People Count : " + str(people_count)
230 |             inf_time_message = "Inference time: N/A for async mode" if is_async_mode else \
231 | "Inference time: {:.3f} ms".format(det_time * 1000)
232 | cv2.putText(frame, inf_time_message, (15, 25), cv2.FONT_HERSHEY_COMPLEX, 1,
233 | (255, 255, 255), 2)
234 | cv2.putText(frame, people_count_message, (15, 65), cv2.FONT_HERSHEY_COMPLEX, 1,
235 | (255, 255, 255), 2)
236 | final_result_overlay = cv2.addWeighted(frame, P_COUNT_FRAME_WEIGHTAGE,
237 | colormap_image,
238 | COLORMAP_FRAME_WEIGHTAGE_1, 0)
239 | cv2.imshow("Detection Results", final_result_overlay)
240 |
241 | time_interval = MULTIPLICATION_FACTOR * fps
242 | if frame_count % time_interval == 0:
243 | apply_time_stamp_and_save(final_result_overlay, people_count, upload_azure)
244 |
245 | frame = next_frame
246 | if is_async_mode:
247 | cur_request_id, next_request_id = next_request_id, cur_request_id
248 | key = cv2.waitKey(1)
249 | if key == 27:
250 | break
251 | cap.release()
252 | cv2.destroyAllWindows()
253 | infer_network.clean()
254 |
255 |
256 | if __name__ == '__main__':
257 | sys.exit(main() or 0)
258 |
259 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | BSD 3-Clause License
2 |
3 | Copyright (c) 2021, Intel Corporation
4 | All rights reserved.
5 |
6 | Redistribution and use in source and binary forms, with or without
7 | modification, are permitted provided that the following conditions are met:
8 |
9 | 1. Redistributions of source code must retain the above copyright notice, this
10 | list of conditions and the following disclaimer.
11 |
12 | 2. Redistributions in binary form must reproduce the above copyright notice,
13 | this list of conditions and the following disclaimer in the documentation
14 | and/or other materials provided with the distribution.
15 |
16 | 3. Neither the name of the copyright holder nor the names of its
17 | contributors may be used to endorse or promote products derived from
18 | this software without specific prior written permission.
19 |
20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # DISCONTINUATION OF PROJECT #
2 | This project will no longer be maintained by Intel.
3 | Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.
4 | Intel no longer accepts patches to this project.
5 |
6 |
7 | # Store Aisle Monitor
8 |
9 |
10 | | Details | |
11 | |-----------------------|------------------|
12 | | Target OS | Ubuntu\* 18.04 LTS |
13 | | Programming Language | Python* 3.6 |
14 | | Time to complete | 30 min |
15 |
16 | This reference implementation is also [available in C++](https://github.com/intel-iot-devkit/reference-implementation-private/blob/master/store-aisle-monitor/README.md)
17 |
18 | 
19 |
20 | ## Introduction
21 |
22 | This reference implementation counts the number of people present in an image and generates a motion heatmap. It takes input from a camera or a video file for processing. Snapshots of the output are taken at regular intervals and uploaded to the cloud. The snapshots are also stored locally.
23 |
24 | ## Requirements
25 |
26 | ### Hardware
27 | * 6th to 8th generation Intel® Core™ processors with Intel® Iris® Pro graphics or Intel® HD Graphics
28 |
29 | ### Software
30 |
31 | * [Ubuntu* 18.04](http://releases.ubuntu.com/18.04/)
32 | * OpenCL™ Runtime Package
33 | **Note**: We recommend using a 4.14+ kernel to run this software. Run the following command to determine your kernel version:
34 | ```
35 | uname -a
36 | ```
37 | * Intel® Distribution of OpenVINO™ toolkit 2020 R3 Release
38 | * Microsoft Azure* Python SDK
39 |
40 | ## How it Works
41 | - The application uses a video source, such as a camera or a video file, to grab the frames. The [OpenCV functions](https://docs.opencv.org/3.4/dd/d43/tutorial_py_video_display.html) are used to calculate the frame width, frame height and frames per second (fps) of the video source. The application counts the number of people and generates a motion heatmap.
42 | 
43 |
44 | - People counter: A trained neural network model detects the people in the frame, and bounding boxes are drawn around the people detected. This reference implementation uses the pre-trained model **person-detection-retail-0013**, which can be downloaded using the **model downloader** provided by the Intel® Distribution of OpenVINO™ toolkit (a minimal sketch of how the detector output is parsed appears after the figure below).
45 |
46 | - Motion Heatmap generation: Every frame is preprocessed and added to an accumulated frame, which is used to generate the motion heatmap using [applyColorMap](https://docs.opencv.org/3.4/d3/d50/group__imgproc__colormap.html#gadf478a5e5ff49d8aa24e726ea6f65d15). The original frame and the heatmap frame are merged using [addWeighted](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_core/py_image_arithmetics/py_image_arithmetics.html) to visualize the movement patterns over time.
47 |
48 | - The heatmap frame and the people counter frame are merged using [addWeighted](https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_core/py_image_arithmetics/py_image_arithmetics.html), and this merged frame is saved locally at regular intervals. The output is saved in the *application/output_snapshots* directory of the project directory.
49 |
50 | - The application also uploads the results to the Microsoft Azure cloud at regular intervals, if a Microsoft Azure storage name and key are provided.
51 |
52 | 
53 |
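The people counter walks the detector's output blob row by row; the function below is a minimal sketch of that parsing (it mirrors the application code, while the box color and default threshold are illustrative):

```python
import cv2

def count_people(frame, res, prob_threshold=0.7):
    """Draw boxes for detections above the threshold and return the count.

    res is the person-detection-retail-0013 output blob of shape
    [1, 1, N, 7]; each row holds [image_id, label, confidence,
    xmin, ymin, xmax, ymax] with coordinates normalized to [0, 1].
    """
    height, width = frame.shape[:2]
    count = 0
    for obj in res[0][0]:
        if obj[2] > prob_threshold:  # keep confident detections only
            xmin, ymin = int(obj[3] * width), int(obj[4] * height)
            xmax, ymax = int(obj[5] * width), int(obj[6] * height)
            cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
            count += 1
    return count
```
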
54 | ## Setup
55 | ### Get the code
56 | Clone the reference implementation
57 | ```
58 | sudo apt-get update && sudo apt-get install git
59 | git clone https://github.com/intel-iot-devkit/store-aisle-monitor-python.git
60 | ```
61 |
62 | ### Install Intel® Distribution of OpenVINO™ toolkit
63 |
64 | Refer to [https://software.intel.com/en-us/articles/OpenVINO-Install-Linux](https://software.intel.com/en-us/articles/OpenVINO-Install-Linux) for more information on how to install and set up the Intel® Distribution of OpenVINO™ toolkit.
65 |
66 | The OpenCL™ Runtime package is required to run the inference on a GPU. It is not mandatory for CPU inference.
67 |
68 | ### Other dependencies
69 | **Microsoft Azure python SDK**
70 | The Azure python SDK allows you to build applications against Microsoft Azure Storage. [Azure Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction) is Microsoft's cloud storage solution for modern data storage scenarios. Azure Storage offers a massively scalable object store for data objects, a file system service for the cloud, a messaging store for reliable messaging, and a NoSQL store.
71 |
72 |
73 |
74 | ### Which model to use
75 |
76 | This application uses the [**person-detection-retail-0013**](https://docs.openvinotoolkit.org/2020.3/_models_intel_person_detection_retail_0013_description_person_detection_retail_0013.html) Intel® pre-trained model, which can be accessed using the **model downloader**. The **model downloader** downloads the __.xml__ and __.bin__ files that will be used by the application.
77 |
78 | To download the model and install the dependencies of the application, run the command below in the `store-aisle-monitor-python` directory:
79 | ```
80 | ./setup.sh
81 | ```
82 |
83 | ### The Config File
84 |
85 | The _resources/config.json_ file contains the path of the video that will be used by the application as input.
86 |
87 | For example:
88 | ```
89 | {
90 | "inputs": [
91 | {
92 | "video":"path_to_video/video1.mp4"
93 | }
94 | ]
95 | }
96 | ```
97 |
98 | Here, `path_to_video/video1.mp4` is the path to an input video file.
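
A minimal sketch of how the application reads this file and opens the input (mirroring the code in application/store_aisle_monitor.py):

```python
import json
import cv2

# Load the config file shown above
with open("resources/config.json") as f:
    config = json.load(f)

for item in config["inputs"]:
    video = item["video"]
    # A numeric string selects a camera by index; anything else is a file path
    cap = cv2.VideoCapture(int(video) if video.isdigit() else video)
```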
99 |
100 | ### Which Input Video to use
101 |
102 | We recommend using [store-aisle-detection](https://raw.githubusercontent.com/intel-iot-devkit/sample-videos/master/store-aisle-detection.mp4).
103 | For example:
104 | ```
105 | {
106 | "inputs": [
107 | {
108 |             "video":"sample-videos/store-aisle-detection.mp4"
109 | }
110 | ]
111 | }
112 | ```
113 | To use any other video, provide its path in the config.json file.
114 |
115 |
116 | ### Using the Camera Stream instead of video
117 |
118 | To use a camera stream instead, replace the video path in the config.json file with the camera ID, where the ID is taken from the video device (the number X in /dev/videoX).
119 |
120 | On Ubuntu, to list all available video devices use the following command:
121 |
122 | ```
123 | ls /dev/video*
124 | ```
125 |
126 | For example, if the output of the above command is __/dev/video0__, then config.json would be:
127 |
128 | ```
129 | {
130 | "inputs": [
131 | {
132 | "video":"0"
133 | }
134 | ]
135 | }
136 | ```
137 |
138 | ### Setup the environment
139 |
140 | Configure the environment to use the Intel® Distribution of OpenVINO™ toolkit one time per session by running the following command:
141 |
142 | source /opt/intel/openvino/bin/setupvars.sh
143 |
144 | **Note:** This command needs to be executed only once in the terminal where the application will be executed. If the terminal is closed, the command needs to be executed again.
145 |
146 | ## Run the application
147 | Change the current directory to the git-cloned application code location on your system:
148 | ```
149 | cd store-aisle-monitor-python/application
150 | ```
151 | To see a list of the various options:
152 |
153 | python3 store_aisle_monitor.py --help
154 |
155 | A user can specify the target device to run on by using the device command-line argument `-d` followed by one of the values `CPU`, `GPU`, `HDDL` or `MYRIAD`.
156 | To run with multiple devices, use `-d MULTI:device1,device2`. For example: `-d MULTI:CPU,GPU,HDDL`.
157 | If no target device is specified, the application runs on the CPU by default.
158 |
159 | ### Run on the CPU
160 |
161 | Although the application runs on the CPU by default, this can also be explicitly specified through the `-d CPU` command-line argument:
162 | ```
163 | python3 store_aisle_monitor.py -m /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP32/person-detection-retail-0013.xml -d CPU -pt 0.7
164 | ```
165 | To run the application in sync mode, use `-f sync` as a command-line argument. By default, the application runs in async mode.
166 |
167 | ### Run on the Integrated GPU
168 |
169 | * To run on the integrated Intel® GPU in 32-bit mode, use the below command.
170 | ```
171 | python3 store_aisle_monitor.py -m /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP32/person-detection-retail-0013.xml -d GPU -pt 0.7
172 | ```
173 | **FP32**: FP32 is single-precision floating-point arithmetic that uses 32 bits to represent numbers: 1 bit for the sign, 8 bits for the exponent and 23 bits for the fraction. For more information, [click here](https://en.wikipedia.org/wiki/Single-precision_floating-point_format)
174 |
175 | * To run on the integrated Intel® GPU in 16-bit mode, use the below command.
176 | ```
177 | python3 store_aisle_monitor.py -m /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml -d GPU -pt 0.7
178 | ```
179 | **FP16**: FP16 is half-precision floating-point arithmetic that uses 16 bits to represent numbers: 1 bit for the sign, 5 bits for the exponent and 10 bits for the fraction. For more information, [click here](https://en.wikipedia.org/wiki/Half-precision_floating-point_format)
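
The practical difference is precision. This illustrative NumPy snippet (not part of the application) shows the same value rounded at the two precisions:

```python
import numpy as np

v32 = np.float32(1 / 3)  # ~7 significant decimal digits
v16 = np.float16(1 / 3)  # ~3 significant decimal digits
print(f"{v32:.10f}")     # 0.3333333433
print(f"{v16:.10f}")     # 0.3332519531
```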
180 |
181 | ### Run on the Intel® Neural Compute Stick
182 | To run on the Intel® Neural Compute Stick, use the ```-d MYRIAD``` command-line argument:
183 | ```
184 | python3 store_aisle_monitor.py -d MYRIAD -m /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml -pt 0.7
185 | ```
186 | **Note:** The Intel® Neural Compute Stick can only run FP16 models. The model that is passed to the application, through the `-m ` command-line argument, must be of data type FP16.
187 |
188 | ### Run on the Intel® Movidius™ VPU
189 | To run on the Intel® Movidius™ VPU, use the `-d HDDL` command-line argument:
190 |
191 | python3 store_aisle_monitor.py -m /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP16/person-detection-retail-0013.xml -d HDDL
192 |
193 | **Note:** The HDDL-R can only run FP16 models. The model that is passed to the application, through the `-m ` command-line argument, must be of data type FP16.
194 |
212 |
213 | ## (Optional) Saving snapshots to the Cloud
214 | To upload the results to the cloud, the Microsoft Azure storage name and storage key are provided as command-line arguments.
215 | Use the `-an` and `-ak` options to specify the Microsoft Azure storage name and storage key, respectively.
216 |
217 |     python3 store_aisle_monitor.py -m /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/person-detection-retail-0013/FP32/person-detection-retail-0013.xml -d CPU -pt 0.7 -an <account-name> -ak <account-key>
218 |
219 | **Note:**
220 | To obtain the account name and account key from the Microsoft Azure portal, refer to:
221 | https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python#copy-your-credentials-from-the-azure-portal
222 |
223 | To view the uploaded snapshots on the cloud, refer to:
224 | https://docs.microsoft.com/en-us/azure/storage/blobs/storage-upload-process-images?tabs=net#verify-the-image-is-shown-in-the-storage-account
225 |
--------------------------------------------------------------------------------
/application/inference.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | """
3 | Copyright (c) 2018 Intel Corporation.
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining
6 | a copy of this software and associated documentation files (the
7 | "Software"), to deal in the Software without restriction, including
8 | without limitation the rights to use, copy, modify, merge, publish,
9 | distribute, sublicense, and/or sell copies of the Software, and to
10 | permit persons to whom the Software is furnished to do so, subject to
11 | the following conditions:
12 |
13 | The above copyright notice and this permission notice shall be
14 | included in all copies or substantial portions of the Software.
15 |
16 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
17 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
18 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
19 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
20 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
21 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
22 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
23 | """
24 |
25 | import os
26 | import sys
27 | import logging as log
28 | from openvino.inference_engine import IECore
29 |
30 |
31 | class Network:
32 | """
33 | Load and configure inference plugins for the specified target devices
34 | and performs synchronous and asynchronous modes for the specified infer requests.
35 | """
36 |
37 | def __init__(self):
38 | self.net = None
39 | self.plugin = None
40 | self.input_blob = None
41 | self.out_blob = None
42 | self.net_plugin = None
43 | self.infer_request_handle = None
44 |
45 | def load_model(self, model, device, input_size, output_size, num_requests, cpu_extension=None, plugin=None):
46 | """
47 |         Loads a network to the Inference Engine plugin.
48 | :param model: .xml file of pre trained model
49 | :param cpu_extension: extension for the CPU device
50 | :param device: Target device
51 | :param input_size: Number of input layers
52 | :param output_size: Number of output layers
53 |         :param num_requests: Number of infer requests. Limited to device capabilities.
54 | :param plugin: Plugin for specified device
55 |         :return: Plugin and shape of the input layer
56 | """
57 |
58 | model_xml = model
59 | model_bin = os.path.splitext(model_xml)[0] + ".bin"
60 | # Plugin initialization for specified device
61 | # and load extensions library if specified
62 | if not plugin:
63 | log.info("Initializing plugin for {} device...".format(device))
64 | self.plugin = IECore()
65 | else:
66 | self.plugin = plugin
67 |
68 | if cpu_extension and 'CPU' in device:
69 | self.plugin.add_extension(cpu_extension, "CPU")
70 |
71 | # Read IR
72 | log.info("Reading IR...")
73 | self.net = self.plugin.read_network(model=model_xml, weights=model_bin)
74 | log.info("Loading IR to the plugin...")
75 |
76 | if device == "CPU":
77 | supported_layers = self.plugin.query_network(self.net, "CPU")
78 | not_supported_layers = \
79 | [l for l in self.net.layers.keys() if l not in supported_layers]
80 | if len(not_supported_layers) != 0:
81 | log.error("Following layers are not supported by "
82 | "the plugin for specified device {}:\n {}".
83 | format(device,
84 | ', '.join(not_supported_layers)))
85 | log.error("Please try to specify cpu extensions library path"
86 | " in command line parameters using -l "
87 | "or --cpu_extension command line argument")
88 | sys.exit(1)
89 |
90 | if num_requests == 0:
91 | # Loads network read from IR to the plugin
92 | self.net_plugin = self.plugin.load_network(network=self.net, device_name=device)
93 | else:
94 | self.net_plugin = self.plugin.load_network(network=self.net, num_requests=num_requests, device_name=device)
95 |
96 |
97 | self.input_blob = next(iter(self.net.inputs))
98 | self.out_blob = next(iter(self.net.outputs))
99 |         assert len(self.net.inputs.keys()) == input_size, \
100 |             "Supports only {} input topologies".format(input_size)
101 |         assert len(self.net.outputs) == output_size, \
102 |             "Supports only {} output topologies".format(output_size)
103 |
104 | return self.plugin, self.get_input_shape()
105 |
106 | def get_input_shape(self):
107 | """
108 | Gives the shape of the input layer of the network.
109 |         :return: Shape of the input layer
110 | """
111 | return self.net.inputs[self.input_blob].shape
112 |
113 | def performance_counter(self, request_id):
114 | """
115 | Queries performance measures per layer to get feedback of what is the
116 | most time consuming layer.
117 | :param request_id: Index of Infer request value. Limited to device capabilities
118 | :return: Performance of the layer
119 | """
120 | perf_count = self.net_plugin.requests[request_id].get_perf_counts()
121 | return perf_count
122 |
123 | def exec_net(self, request_id, frame):
124 | """
125 | Starts asynchronous inference for specified request.
126 | :param request_id: Index of Infer request value. Limited to device capabilities.
127 | :param frame: Input image
128 | :return: Instance of Executable Network class
129 | """
130 | self.infer_request_handle = self.net_plugin.start_async(
131 | request_id=request_id, inputs={self.input_blob: frame})
132 | return self.net_plugin
133 |
134 | def wait(self, request_id):
135 | """
136 | Waits for the result to become available.
137 | :param request_id: Index of Infer request value. Limited to device capabilities.
138 |         :return: Status of the inference request (0 when the result is ready)
139 | """
140 | wait_process = self.net_plugin.requests[request_id].wait(-1)
141 | return wait_process
142 |
143 | def get_output(self, request_id, output=None):
144 | """
145 | Gives a list of results for the output layer of the network.
146 | :param request_id: Index of Infer request value. Limited to device capabilities.
147 | :param output: Name of the output layer
148 | :return: Results for the specified request
149 | """
150 | if output:
151 | res = self.infer_request_handle.outputs[output]
152 | else:
153 | res = self.net_plugin.requests[request_id].outputs[self.out_blob]
154 | return res
155 |
156 | def clean(self):
157 | """
158 | Deletes all the instances
159 | :return: None
160 | """
161 | del self.net_plugin
162 | del self.plugin
163 | del self.net
164 |
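165 | # Minimal usage sketch (illustrative only; the model and image file names below
166 | # are assumptions, not files shipped with this module):
167 | #
168 | #   import cv2
169 | #   infer_network = Network()
170 | #   n, c, h, w = infer_network.load_model("person-detection-retail-0013.xml",
171 | #                                         "CPU", 1, 1, 2)[1]
172 | #   image = cv2.resize(cv2.imread("aisle.png"), (w, h))
173 | #   image = image.transpose((2, 0, 1)).reshape((n, c, h, w))  # HWC -> NCHW
174 | #   infer_network.exec_net(0, image)
175 | #   if infer_network.wait(0) == 0:
176 | #       detections = infer_network.get_output(0)
177 | #   infer_network.clean()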
--------------------------------------------------------------------------------
/application/store_aisle_monitor.py:
--------------------------------------------------------------------------------
1 | """Store Aisle Monitor"""
2 |
3 | """
4 | Copyright (c) 2018 Intel Corporation.
5 | Permission is hereby granted, free of charge, to any person obtaining
6 | a copy of this software and associated documentation files (the
7 | "Software"), to deal in the Software without restriction, including
8 | without limitation the rights to use, copy, modify, merge, publish,
9 | distribute, sublicense, and/or sell copies of the Software, and to
10 | permit persons to whom the Software is furnished to do so, subject to
11 | the following conditions:
12 | The above copyright notice and this permission notice shall be
13 | included in all copies or substantial portions of the Software.
14 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
15 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
16 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
17 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
18 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
19 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
20 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
21 | """
22 |
23 | import os
24 | import sys
25 | import time
26 | from argparse import ArgumentParser
27 | import pathlib
28 | import cv2
29 | import numpy as np
30 | import json
31 | from azure.storage.blob import BlockBlobService, PublicAccess
32 | from inference import Network
33 |
34 | is_async_mode = True  # Overridden in main() by the -f/--flag option
35 | CONFIG_FILE = '../resources/config.json'
36 |
37 | # Weights for blending the original frame and the colormap frame into the heatmap (the two must sum to 1)
38 | INITIAL_FRAME_WEIGHTAGE = 0.65
39 | COLORMAP_FRAME_WEIGHTAGE = 0.35
40 |
41 | # Weights for blending the people-count frame and the colormap frame into the integrated output (the two must sum to 1)
42 | P_COUNT_FRAME_WEIGHTAGE = 0.65
43 | COLORMAP_FRAME_WEIGHTAGE_1 = 0.35
44 |
45 | # Multiplication factor to compute time interval for uploading snapshots to the cloud
46 | MULTIPLICATION_FACTOR = 5
47 |
48 | # Azure Blob container name
49 | CONTAINER_NAME = 'store-aisle-monitor-snapshots'
50 |
51 | # To get current working directory
52 | CWD = os.getcwd()
53 |
54 | # Creates subdirectory to save output snapshots
55 | pathlib.Path(CWD + '/output_snapshots/').mkdir(parents=True, exist_ok=True)
56 |
57 |
58 | def build_argparser():
59 | parser = ArgumentParser()
60 | parser.add_argument("-m", "--model",
61 | help="Path to an .xml file with a trained model.",
62 | required=True, type=str)
63 | parser.add_argument("-l", "--cpu_extension",
64 | help="MKLDNN (CPU)-targeted custom layers. Absolute "
65 | "path to a shared library with the kernels impl.",
66 | type=str, default=None)
67 | parser.add_argument("-d", "--device",
68 | help="Specify the target device to infer on; "
69 | "CPU, GPU, FPGA, HDDL or MYRIAD is acceptable. Application"
70 | " will look for a suitable plugin for device "
71 | "specified (CPU by default)", default="CPU", type=str)
72 | parser.add_argument("-pt", "--prob_threshold",
73 | help="Probability threshold for detections filtering",
74 | default=0.5, type=float)
75 | parser.add_argument("-an", "--account_name",
76 | help="Account name of Azure cloud storage container",
77 | default=None, type=str)
78 | parser.add_argument("-ak", "--account_key",
79 | help="Account key of Azure cloud storage container",
80 | default=None, type=str)
81 | parser.add_argument("-f", "--flag", help="sync or async", default="async", type=str)
82 |
83 | return parser
84 |
85 |
86 | def apply_time_stamp_and_save(image, people_count, upload_azure):
87 | """
88 | Saves snapshots with timestamps.
89 | """
90 | current_date_time = time.strftime("%y-%m-%d_%H:%M:%S", time.gmtime())
91 | file_name = current_date_time + "_PCount_" + str(people_count) + ".png"
92 | file_path = CWD + "/output_snapshots/"
93 | local_file_name = "output_" + file_name
94 | file_name = file_path + local_file_name
95 | cv2.imwrite(file_name, image)
96 |     if upload_azure == 1:
97 | upload_snapshot(file_path, local_file_name)
98 |
99 |
100 | def create_cloud_container(account_name, account_key):
101 | """
102 | Creates a BlockBlobService container on cloud.
103 | """
104 | global BLOCK_BLOB_SERVICE
105 |
106 | # Create the BlockBlobService to call the Blob service for the storage account
107 | BLOCK_BLOB_SERVICE = BlockBlobService(account_name, account_key)
108 | # Create BlockBlobService container
109 | BLOCK_BLOB_SERVICE.create_container(CONTAINER_NAME)
110 | # Set the permission so that the blobs are public
111 | BLOCK_BLOB_SERVICE.set_container_acl(CONTAINER_NAME, public_access=PublicAccess.Container)
112 |
113 |
114 | def upload_snapshot(file_path, local_file_name):
115 | """
116 | Uploads snapshots to cloud storage container.
117 | """
118 | try:
119 |
120 | full_path_to_file = file_path + local_file_name
121 | print("\nUploading to cloud storage as blob : " + local_file_name)
122 | # Upload the snapshot, with local_file_name as the blob name
123 | BLOCK_BLOB_SERVICE.create_blob_from_path(CONTAINER_NAME, local_file_name, full_path_to_file)
124 |
125 | except Exception as e:
126 | print(e)
127 |
128 |
129 | def main():
130 | global CONFIG_FILE
131 | global is_async_mode
132 | args = build_argparser().parse_args()
133 |
134 | account_name = args.account_name
135 | account_key = args.account_key
136 |
137 |     if account_name == "" or account_key == "":
138 | print("Invalid account name or account key!")
139 | sys.exit(1)
140 | elif account_name is not None and account_key is None:
141 | print("Please provide account key using -ak option!")
142 | sys.exit(1)
143 | elif account_name is None and account_key is not None:
144 | print("Please provide account name using -an option!")
145 | sys.exit(1)
146 | elif account_name is None and account_key is None:
147 | upload_azure = 0
148 | else:
149 | print("Uploading the results to Azure storage \""+ account_name+ "\"" )
150 | upload_azure = 1
151 | create_cloud_container(account_name, account_key)
152 |
153 | assert os.path.isfile(CONFIG_FILE), "{} file doesn't exist".format(CONFIG_FILE)
154 | config = json.loads(open(CONFIG_FILE).read())
155 | for idx, item in enumerate(config['inputs']):
156 | if item['video'].isdigit():
157 | input_stream = int(item['video'])
158 | cap = cv2.VideoCapture(input_stream)
159 | if not cap.isOpened():
160 | print("\nCamera not plugged in... Exiting...\n")
161 | sys.exit(0)
162 | else:
163 | input_stream = item['video']
164 | cap = cv2.VideoCapture(input_stream)
165 | if not cap.isOpened():
166 | print("\nUnable to open video file... Exiting...\n")
167 | sys.exit(0)
168 | fps = cap.get(cv2.CAP_PROP_FPS)
169 | if args.flag == "async":
170 | is_async_mode = True
171 | print('Application running in async mode')
172 | else:
173 | is_async_mode = False
174 | print('Application running in sync mode')
175 |
176 | # Initialise the class
177 | infer_network = Network()
178 | # Load the network to IE plugin to get shape of input layer
179 | n, c, h, w = infer_network.load_model(args.model, args.device, 1, 1, 2, args.cpu_extension)[1]
180 |
181 | print("To stop the execution press Esc button")
182 | initial_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
183 | initial_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
184 | fps = int(cap.get(cv2.CAP_PROP_FPS))
185 | frame_count = 1
186 | accumulated_image = np.zeros((initial_h, initial_w), np.uint8)
187 | mog = cv2.createBackgroundSubtractorMOG2()
188 | ret, frame = cap.read()
189 | cur_request_id = 0
190 | next_request_id = 1
191 |
192 | while cap.isOpened():
193 | ret, next_frame = cap.read()
194 | if not ret:
195 | break
196 | frame_count = frame_count + 1
197 |         in_frame = cv2.resize(next_frame if is_async_mode else frame, (w, h))  # in sync mode, infer on the frame being displayed
198 | # Change data layout from HWC to CHW
199 | in_frame = in_frame.transpose((2, 0, 1))
200 | in_frame = in_frame.reshape((n, c, h, w))
201 |
202 | # Start asynchronous inference for specified request.
203 | inf_start = time.time()
204 |         if is_async_mode:
205 | infer_network.exec_net(next_request_id, in_frame)
206 | else:
207 | infer_network.exec_net(cur_request_id, in_frame)
208 | # Wait for the result
209 | if infer_network.wait(cur_request_id) == 0:
210 | det_time = time.time() - inf_start
211 |
212 | people_count = 0
213 |
214 | # Converting to Grayscale
215 | gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
216 |
217 | # Remove the background
218 | fgbgmask = mog.apply(gray)
219 | # Thresholding the image
220 |             thresh = 2  # Very low threshold: any foreground motion counts
221 |             max_value = 2  # Each moving pixel adds only 2 per frame, so the heatmap accumulates gradually
222 | threshold_image = cv2.threshold(fgbgmask, thresh, max_value,
223 | cv2.THRESH_BINARY)[1]
224 | # Adding to the accumulated image
225 | accumulated_image = cv2.add(threshold_image, accumulated_image)
226 | colormap_image = cv2.applyColorMap(accumulated_image, cv2.COLORMAP_HOT)
227 |
228 | # Results of the output layer of the network
229 | res = infer_network.get_output(cur_request_id)
230 | for obj in res[0][0]:
231 | # Draw only objects when probability more than specified threshold
232 | if obj[2] > args.prob_threshold:
233 | xmin = int(obj[3] * initial_w)
234 | ymin = int(obj[4] * initial_h)
235 | xmax = int(obj[5] * initial_w)
236 | ymax = int(obj[6] * initial_h)
237 | class_id = int(obj[1])
238 | # Draw bounding box
239 | color = (min(class_id * 12.5, 255), min(class_id * 7, 255),
240 | min(class_id * 5, 255))
241 | cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color, 2)
242 | people_count = people_count + 1
243 |
244 | people_count_message = "People Count : " + str(people_count)
245 |             inf_time_message = "Inference time: N/A for async mode" if is_async_mode else \
246 | "Inference time: {:.3f} ms".format(det_time * 1000)
247 | cv2.putText(frame, inf_time_message, (15, 25), cv2.FONT_HERSHEY_COMPLEX, 1,
248 | (255, 255, 255), 2)
249 | cv2.putText(frame, people_count_message, (15, 65), cv2.FONT_HERSHEY_COMPLEX, 1,
250 | (255, 255, 255), 2)
251 | final_result_overlay = cv2.addWeighted(frame, P_COUNT_FRAME_WEIGHTAGE,
252 | colormap_image,
253 | COLORMAP_FRAME_WEIGHTAGE_1, 0)
254 | cv2.imshow("Detection Results", final_result_overlay)
255 |
256 | time_interval = MULTIPLICATION_FACTOR * fps
257 | if frame_count % time_interval == 0:
258 | apply_time_stamp_and_save(final_result_overlay, people_count, upload_azure)
259 |
260 | frame = next_frame
261 |             if is_async_mode:
262 | cur_request_id, next_request_id = next_request_id, cur_request_id
263 |
264 | key = cv2.waitKey(1)
265 | if key == 27:
266 | break
267 | cap.release()
268 | cv2.destroyAllWindows()
269 | infer_network.clean()
270 |
271 |
272 | if __name__ == '__main__':
273 | sys.exit(main() or 0)
274 |
275 |
276 |
277 |
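278 | # Example invocation (sketch; the model path is an assumption and depends on
279 | # where the OpenVINO model downloader placed person-detection-retail-0013):
280 | #
281 | #   python3 store_aisle_monitor.py \
282 | #       -m <path-to>/person-detection-retail-0013.xml \
283 | #       -pt 0.7 -f async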
--------------------------------------------------------------------------------
/docs/images/figure1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/intel-iot-devkit/store-aisle-monitor-python/84bf0f528340ef6396626697303065d1579c9d1a/docs/images/figure1.png
--------------------------------------------------------------------------------
/docs/images/figure2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/intel-iot-devkit/store-aisle-monitor-python/84bf0f528340ef6396626697303065d1579c9d1a/docs/images/figure2.png
--------------------------------------------------------------------------------
/docs/images/figure3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/intel-iot-devkit/store-aisle-monitor-python/84bf0f528340ef6396626697303065d1579c9d1a/docs/images/figure3.png
--------------------------------------------------------------------------------
/docs/images/jupy1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/intel-iot-devkit/store-aisle-monitor-python/84bf0f528340ef6396626697303065d1579c9d1a/docs/images/jupy1.png
--------------------------------------------------------------------------------
/docs/images/jupy2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/intel-iot-devkit/store-aisle-monitor-python/84bf0f528340ef6396626697303065d1579c9d1a/docs/images/jupy2.png
--------------------------------------------------------------------------------
/resources/config.json:
--------------------------------------------------------------------------------
1 | {
2 | "inputs":[
3 | {
4 | "video":"../resources/store-aisle-detection.mp4"
5 | }
6 | ]
7 | }
8 |
--------------------------------------------------------------------------------
/setup.sh:
--------------------------------------------------------------------------------
1 | : '
2 | Copyright (c) 2018 Intel Corporation.
3 | *
4 | * Permission is hereby granted, free of charge, to any person obtaining
5 | * a copy of this software and associated documentation files (the
6 | * "Software"), to deal in the Software without restriction, including
7 | * without limitation the rights to use, copy, modify, merge, publish,
8 | * distribute, sublicense, and/or sell copies of the Software, and to
9 | * permit persons to whom the Software is furnished to do so, subject to
10 | * the following conditions:
11 | *
12 | * The above copyright notice and this permission notice shall be
13 | * included in all copies or substantial portions of the Software.
14 | *
15 | * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
16 | * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
17 | * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
18 | * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
19 | * LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
20 | * OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
21 | * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
22 | *
23 | '
24 |
25 | sudo apt-get update
26 | sudo apt-get install -y python3-pip
27 | sudo apt-get install -y python3-dev libssl-dev
28 | sudo pip3 install numpy jupyter
29 | sudo pip3 install azure-storage-blob==2.1.0 azure-mgmt-storage
30 |
31 | cd resources
32 | wget -O store-aisle-detection.mp4 https://github.com/intel-iot-devkit/sample-videos/raw/master/store-aisle-detection.mp4
33 |
34 | cd /opt/intel/openvino/deployment_tools/tools/model_downloader
35 | sudo ./downloader.py --name person-detection-retail-0013
36 |
37 |
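38 | # The application expects the OpenVINO environment to be initialized in the
39 | # shell that runs it (default install path; adjust if installed elsewhere):
40 | #   source /opt/intel/openvino/bin/setupvars.sh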
--------------------------------------------------------------------------------