├── .DS_Store
├── .gitignore
├── .vscode
└── settings.json
├── CHANGELOG.md
├── LICENSE
├── README.md
├── db
├── .git_ignore
└── empty
├── examples
└── stream.py
├── get_models.sh
├── images
└── README.md
├── install_as_service.sh
├── known_faces
└── README.md
├── make_changelog.sh
├── make_tag.sh
├── mlapi.py
├── mlapi.service
├── mlapi_dbuser.py
├── mlapi_face_train.py
├── mlapi_logrot.sh
├── mlapiconfig.ini
├── modules
├── .gitignore
├── __init__.py
├── common_params.py
├── db.py
├── log.py
└── utils.py
├── requirements.txt
├── secrets.ini
├── tools
└── config_edit.py
└── unknown_faces
└── README.md
/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ZoneMinder/mlapi/6b68dc15cc66b1fc20a587872369793dcf676fee/.DS_Store
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | db/db.json
3 | test/detect.sh
4 | build/
5 | dist/
6 | images/*.jpg
7 | *.egg-info/
8 | *.egg
9 | *.py[cod]
10 | **/__pycache__/
11 | *.so
12 | *~
13 | known_faces/*
14 | unknown_faces/*
15 | !known_faces/README.md
16 | models/
17 | # due to using tox and pytest
18 | .tox
19 | .cache
20 | models/*
21 | secrets.mine
22 |
--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------
1 | {
2 |     "python.formatting.provider": "autopep8",
3 |     "python.pythonPath": "/usr/bin/python"
4 | }
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
1 | # Changelog
2 |
3 | ## [v2.2.0](https://github.com/pliablepixels/mlapi/tree/v2.2.0) (2021-02-14)
4 |
5 | [Full Changelog](https://github.com/pliablepixels/mlapi/compare/v2.1.9...v2.2.0)
6 |
7 | **Implemented enhancements:**
8 |
9 | - Training known faces [\#13](https://github.com/pliablepixels/mlapi/issues/13)
10 |
11 | **Closed issues:**
12 |
13 | - fix crappy ml\_overrides approach from zm\_detect to mlapi [\#33](https://github.com/pliablepixels/mlapi/issues/33)
14 |
15 | **Merged pull requests:**
16 |
17 | - Monitor specific support [\#34](https://github.com/pliablepixels/mlapi/pull/34) ([pliablepixels](https://github.com/pliablepixels))
18 |
19 | ## [v2.1.9](https://github.com/pliablepixels/mlapi/tree/v2.1.9) (2021-01-26)
20 |
21 | [Full Changelog](https://github.com/pliablepixels/mlapi/compare/v2.1.8...v2.1.9)
22 |
23 | **Fixed bugs:**
24 |
25 | - show\_percent=no not taken into account by mlapi [\#31](https://github.com/pliablepixels/mlapi/issues/31)
26 |
27 | **Closed issues:**
28 |
29 | - ValueError: malformed node or string: \<\_ast.Name object at 0x7f0e4cec2390\> [\#30](https://github.com/pliablepixels/mlapi/issues/30)
30 | - missing yolov3\_classes.txt [\#28](https://github.com/pliablepixels/mlapi/issues/28)
31 |
32 | ## [v2.1.8](https://github.com/pliablepixels/mlapi/tree/v2.1.8) (2021-01-09)
33 |
34 | [Full Changelog](https://github.com/pliablepixels/mlapi/compare/v2.1.7...v2.1.8)
35 |
36 | **Closed issues:**
37 |
38 | - remote ML requirements question [\#29](https://github.com/pliablepixels/mlapi/issues/29)
39 |
40 | ## [v2.1.7](https://github.com/pliablepixels/mlapi/tree/v2.1.7) (2021-01-07)
41 |
42 | [Full Changelog](https://github.com/pliablepixels/mlapi/compare/v2.1.6...v2.1.7)
43 |
44 | ## [v2.1.6](https://github.com/pliablepixels/mlapi/tree/v2.1.6) (2021-01-03)
45 |
46 | [Full Changelog](https://github.com/pliablepixels/mlapi/compare/v2.1.5...v2.1.6)
47 |
48 | ## [v2.1.5](https://github.com/pliablepixels/mlapi/tree/v2.1.5) (2021-01-03)
49 |
50 | [Full Changelog](https://github.com/pliablepixels/mlapi/compare/v2.1.4...v2.1.5)
51 |
52 | ## [v2.1.4](https://github.com/pliablepixels/mlapi/tree/v2.1.4) (2021-01-03)
53 |
54 | [Full Changelog](https://github.com/pliablepixels/mlapi/compare/v2.1.3...v2.1.4)
55 |
56 | ## [v2.1.3](https://github.com/pliablepixels/mlapi/tree/v2.1.3) (2021-01-03)
57 |
58 | [Full Changelog](https://github.com/pliablepixels/mlapi/compare/v2.1.2...v2.1.3)
59 |
60 | ## [v2.1.2](https://github.com/pliablepixels/mlapi/tree/v2.1.2) (2021-01-02)
61 |
62 | [Full Changelog](https://github.com/pliablepixels/mlapi/compare/v2.1.1...v2.1.2)
63 |
64 | ## [v2.1.1](https://github.com/pliablepixels/mlapi/tree/v2.1.1) (2021-01-02)
65 |
66 | [Full Changelog](https://github.com/pliablepixels/mlapi/compare/v2.1.0...v2.1.1)
67 |
68 | ## [v2.1.0](https://github.com/pliablepixels/mlapi/tree/v2.1.0) (2021-01-02)
69 |
70 | [Full Changelog](https://github.com/pliablepixels/mlapi/compare/52262891ef0b5a235c1498889076f3cc17d53b58...v2.1.0)
71 |
72 | **Implemented enhancements:**
73 |
74 | - Support ML and stream sequences [\#27](https://github.com/pliablepixels/mlapi/issues/27)
75 | - Enhance to use a proper WSGI server [\#10](https://github.com/pliablepixels/mlapi/issues/10)
76 | - Add OpenCV 4.1.2 DNN support [\#7](https://github.com/pliablepixels/mlapi/issues/7)
77 | - RFE: crop and save \(unknown\) faces [\#5](https://github.com/pliablepixels/mlapi/issues/5)
78 |
79 | **Fixed bugs:**
80 |
81 | - Sending multiple requests in parallel messes up detection [\#2](https://github.com/pliablepixels/mlapi/issues/2)
82 |
83 | **Closed issues:**
84 |
85 | - Yolo v5 support [\#26](https://github.com/pliablepixels/mlapi/issues/26)
86 | - Error executing remote API: Extra data: line 1 column 329 \(char 328\) [\#24](https://github.com/pliablepixels/mlapi/issues/24)
87 | - ModuleNotFoundError: No module named 'cv2.cv2' [\#23](https://github.com/pliablepixels/mlapi/issues/23)
88 | - installing python3-edgetpu breaks the coral detection [\#21](https://github.com/pliablepixels/mlapi/issues/21)
89 | - dont getting gpu support running on jetson xavier agx [\#18](https://github.com/pliablepixels/mlapi/issues/18)
90 | - Early return in Detect\(\) prevents image deletion [\#16](https://github.com/pliablepixels/mlapi/issues/16)
91 | - Update utils.py to pull down yolo4 models [\#14](https://github.com/pliablepixels/mlapi/issues/14)
92 | - WORKER TIMEOUT, Booting worker [\#12](https://github.com/pliablepixels/mlapi/issues/12)
93 | - Edge TPU module [\#9](https://github.com/pliablepixels/mlapi/issues/9)
94 | - No images written, neither in zoneminder dir nor bbox debug [\#4](https://github.com/pliablepixels/mlapi/issues/4)
95 | - Disabling vibrate on push has no effect. [\#3](https://github.com/pliablepixels/mlapi/issues/3)
96 | - Return Image with boxes? [\#1](https://github.com/pliablepixels/mlapi/issues/1)
97 |
98 | **Merged pull requests:**
99 |
100 | - Remove early return in Detect\(\) so that images can be deleted. [\#17](https://github.com/pliablepixels/mlapi/pull/17) ([neillbell](https://github.com/neillbell))
101 | - Add fork link to README [\#11](https://github.com/pliablepixels/mlapi/pull/11) ([themoosman](https://github.com/themoosman))
102 | - Implement a simple health check endpoint [\#8](https://github.com/pliablepixels/mlapi/pull/8) ([themoosman](https://github.com/themoosman))
103 |
104 |
105 |
106 | \* *This Changelog was automatically generated by [github_changelog_generator](https://github.com/github-changelog-generator/github-changelog-generator)*
107 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | GPLv3
2 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | Note
2 | =====
3 | mlapi 2.1.1 and later requires ES 6.1.0.
4 | Starting with 2.1.1, the mlapiconfig.ini file has changed to support sequence structures. Please read
5 | [this](https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#understanding-detection-configuration) to understand what changed and how to make the best use of it.
6 |
7 | What
8 | =====
9 | An API gateway that you can install on your own server to do object and face recognition.
10 | It is easy to extend to other models. You can pass images as:
11 | - a local file
12 | - a remote url
13 |
14 | This can also be used as a remote face recognition and object detection server if you are using my [ZoneMinder Event Server](https://github.com/ZoneMinder/zmeventnotification)!
15 |
16 | This is an example of invoking `python ./stream.py video.mp4` ([video courtesy of pexels](https://www.pexels.com/video/people-walking-by-on-a-sidewalk-854100/))
17 |
18 |
19 |
20 |
21 | Why
22 | =====
23 | Wanted to learn how to write an API gateway easily. Object detection was a good use case since I use it extensively for other things (like my event server). This is the first time I've used flask/jwt/tinydb etc., so it's very likely there are improvements that can be made. Feel free to PR.
24 |
25 | Tip of the Hat
26 | ===============
27 | A tip of the hat to [Adrian Rosebrock](https://www.pyimagesearch.com/about/) for getting me started. His articles are great.
28 |
29 | Install
30 | =======
31 | - It's best to create a virtual environment with python3, but it's not mandatory
32 | - You need python3 for this to run
33 | - Face recognition requires cmake/gcc/standard Linux dev libraries (if you have gcc, you likely have everything else; you may need to install cmake on top of it)
34 | - If you plan on using TinyYOLO V4 or YOLO V4, you need OpenCV > 4.3
35 | - If you plan on using the Google Coral TPU, please make sure you have all the libs
36 | installed as per https://coral.ai/docs/accelerator/get-started/
37 |
38 |
39 | Note that this package also needs OpenCV, which the pip requirements step below does not install by default. This is because you may have a GPU and may want to use GPU support. If not, pip is fine. See [this page](https://zmeventnotification.readthedocs.io/en/latest/guides/hooks.html#opencv-install) on how to install OpenCV.
40 |
41 | Then:
42 | ```
43 | git clone https://github.com/ZoneMinder/mlapi
44 | cd mlapi
45 | sudo -H pip3 install -r requirements.txt
46 | ```
47 |
48 | Note: By default, `mlapiconfig.ini` uses the bjoern WSGI server. On Debian, the following
49 | dependencies are needed for bjoern:
50 | ```
51 | sudo apt install libev-dev libevdev2
52 | ```
53 | Alternatively, you can just comment out `wsgi_server` and it will fall back to using flask.
54 |
55 | Finally, you also need to get the inferencing models. Note that this step is ONLY needed if you
56 | don't already have the models downloaded. If you are running mlapi on the same server as ZMES,
57 | you likely already have the models in `/var/lib/zmeventnotification/models/`.
58 |
59 | To download all models, except coral edgetpu models:
60 | ```
61 | ./get_models.sh
62 | ```
63 |
64 | To download all models, including Coral EdgeTPU models
65 | (Coral models need the Coral device, so they are not downloaded by default):
66 | ```
67 | INSTALL_CORAL_EDGETPU=yes ./get_models.sh
68 | ```
69 |
70 | **Please make sure you edit `mlapiconfig.ini` to meet your needs**
71 |
72 |
73 | Running
74 | ========
75 |
76 | Before you run, you need to create at least one user. Use `python3 mlapi_dbuser.py` for that; run `python3 mlapi_dbuser.py --help` for options.
77 |
78 | Server: Manually
79 | ------------------
80 | To run the server:
81 | ```
82 | python3 ./mlapi.py -c mlapiconfig.ini
83 | ```
84 |
85 | Server: Automatically
86 | -----------------------
87 | Take a look at `mlapi.service` and customize it for your needs
88 |
89 |
90 | Client Side: From zm_detect
91 | -----------------------------
92 | One of the key uses of mlapi is to act as an API gateway for zm_detect, the ML
93 | python process for zmeventnotification. When run in this mode, zm_detect.py does not do local
94 | inferencing. Instead, it invokes an API call to mlapi. The big advantage is that mlapi only loads the model(s) once
95 | and keeps them in memory, greatly reducing total time for detection. If you downloaded mlapi to do this,
96 | read ``objectconfig.ini`` in ``/etc/zm/`` to set it up. It is as simple as configuring the ``[remote]``
97 | section of ``objectconfig.ini``.
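
As a rough sketch, the ``[remote]`` section typically looks like the following (the host, port, and credentials are placeholders; verify the exact key names against the ``objectconfig.ini`` shipped with your ZMES version):

```ini
[remote]
; hypothetical values - point these at your mlapi server
ml_gateway=http://192.168.1.21:5000/api/v1
ml_user=myuser
ml_password=mypassword
```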
98 |
99 | Client Side: From CLI
100 | ------------------------
101 |
102 | (Note: the response format returned to a CLI client is different from what is returned to zm_detect,
103 | which uses a format suited to its own needs.)
104 |
105 | To invoke detection from the CLI:
106 |
109 | (General note: I use [httpie](https://httpie.org) for command-line HTTP requests. Curl, while powerful, has too many quirks/oddities. That being said, given curl is everywhere, the examples are in curl. See later for a programmatic way.)
110 |
111 | - Get an access token
112 | ```
113 | curl -H "Content-Type:application/json" -XPOST -d '{"username":"", "password":""}' "http://localhost:5000/api/v1/login"
114 | ```
115 | This will return a JSON object like:
116 | ```
117 | {"access_token":"eyJ0eX","expires":3600}
118 | ```
119 |
120 | Now use that token like so:
121 |
122 | ```
123 | export ACCESS_TOKEN=
124 | ```
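
If you'd rather capture the token in one step, something like the following should work (a sketch: the username/password and URL are placeholders, and `python3` is used only to parse the JSON):

```shell
# log in and extract the JWT in one shot (credentials/URL are placeholders)
export ACCESS_TOKEN=$(curl -s -H "Content-Type:application/json" -XPOST \
  -d '{"username":"myuser", "password":"mypass"}' "http://localhost:5000/api/v1/login" \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["access_token"])')
```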
125 |
126 | Object detection for a remote image (via url):
127 |
128 | ```
129 | curl -H "Content-Type:application/json" -H "Authorization: Bearer ${ACCESS_TOKEN}" -XPOST -d "{\"url\":\"https://upload.wikimedia.org/wikipedia/commons/c/c4/Anna%27s_hummingbird.jpg\"}" http://localhost:5000/api/v1/detect/object
130 | ```
131 |
132 | **NOTE**: The payload shown below is what is returned when you invoke this from the command line. When it is invoked by
133 | `zm_detect`, a different format is returned that is compatible with the ES's needs.
134 |
135 | returns:
136 |
137 | ```
138 | [{"type": "bird", "confidence": "99.98%", "box": [433, 466, 2441, 1660]}]
139 | ```
140 |
141 | Object detection for a local image:
142 | ```
143 | curl -H "Authorization: Bearer ${ACCESS_TOKEN}" -XPOST -F"file=@IMG_1833.JPG" http://localhost:5000/api/v1/detect/object -v
144 | ```
145 |
146 | returns:
147 | ```
148 | [{"type": "person", "confidence": "99.77%", "box": [2034, 1076, 3030, 2344]}, {"type": "person", "confidence": "97.70%", "box": [463, 519, 1657, 1351]}, {"type": "cup", "confidence": "97.42%", "box": [1642, 978, 1780, 1198]}, {"type": "dining table", "confidence": "95.78%", "box": [636, 1088, 2370, 2262]}, {"type": "person", "confidence": "94.44%", "box": [22, 718, 640, 2292]}, {"type": "person", "confidence": "93.08%", "box": [408, 1002, 1254, 2016]}, {"type": "cup", "confidence": "92.57%", "box":[1918, 1272, 2110, 1518]}, {"type": "cup", "confidence": "90.04%", "box": [1384, 1768, 1564, 2044]}, {"type": "bowl", "confidence": "83.41%", "box": [882, 1760, 1134, 2024]}, {"type": "person", "confidence": "82.64%", "box": [2112, 984, 2508, 1946]}, {"type": "cup", "confidence": "50.14%", "box": [1854, 1520, 2072, 1752]}]
149 | ```
150 |
151 | Face detection for the same image above:
152 |
153 | ```
154 | curl -H "Authorization: Bearer ${ACCESS_TOKEN}" -XPOST -F"file=@IMG_1833.JPG" "http://localhost:5000/api/v1/detect/object?type=face"
155 | ```
156 |
157 | returns:
158 |
159 | ```
160 | [{"type": "face", "confidence": "52.87%", "box": [904, 1037, 1199, 1337]}]
161 | ```
162 |
163 | Object detection on a live ZoneMinder feed:
164 | (Note that ampersands have to be escaped as `%26` when passed as a data parameter)
165 |
166 | ```
167 | curl -XPOST "http://localhost:5000/api/v1/detect/object?delete=false" -d "url=https://demo.zoneminder.com/cgi-bin-zm/nph-zms?mode=single%26maxfps=5%26buffer=1000%26monitor=18%26user=zmuser%26pass=zmpass" \
168 | -H "Authorization: Bearer ${ACCESS_TOKEN}"
169 | ```
170 |
171 | returns:
172 |
173 | ```
174 | [{"type": "bear", "confidence": "99.40%", "box": [6, 184, 352, 436]}, {"type": "bear", "confidence": "72.66%", "box": [615, 219, 659, 279]}]
176 | ```
177 |
178 | Note that the server stores the images and the objects detected inside its `images/` folder. If you want the server to delete them after analysis, add `&delete=true` to the query parameters.
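
The JSON returned by these endpoints is easy to consume programmatically. A minimal sketch (the response string is hard-coded here for illustration; in practice it would come from the HTTP call, as in `examples/stream.py`):

```python
import json

# a sample response in the CLI format shown above
response = '[{"type": "bird", "confidence": "99.98%", "box": [433, 466, 2441, 1660]}]'

# parse and walk the detections; each box is [x1, y1, x2, y2]
for d in json.loads(response):
    x1, y1, x2, y2 = d['box']
    print(f"{d['type']} ({d['confidence']}) at ({x1},{y1})-({x2},{y2})")
```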
179 |
180 |
181 | Live Streams or Recorded Video files
182 | ======================================
183 | This is an image-based object detection API. If you want to process a video file or live stream,
184 | take a look at the full example below.
185 |
186 |
187 | Full Example
188 | =============
189 | Take a look at [stream.py](https://github.com/ZoneMinder/mlapi/blob/master/examples/stream.py). This program reads any media source (a file, stream URL, or webcam) and invokes detection via the API gateway.
190 |
191 |
192 |
--------------------------------------------------------------------------------
/db/.git_ignore:
--------------------------------------------------------------------------------
1 | db.json
2 |
--------------------------------------------------------------------------------
/db/empty:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ZoneMinder/mlapi/6b68dc15cc66b1fc20a587872369793dcf676fee/db/empty
--------------------------------------------------------------------------------
/examples/stream.py:
--------------------------------------------------------------------------------
1 | # Example of how you can use the MLAPI gateway
2 | # with live streams
3 |
4 | # Usage:
5 | #   python3 ./stream.py [video_file]
6 | # If you leave out the local video file, it will open your webcam
7 |
8 | import cv2
9 | import requests
10 | import json
11 | import imutils
12 | import sys
13 |
14 | #--------- Change to your needs---------------
15 | BASE_API_URL='http://localhost:5000/api/v1'
16 | USER='pp'
17 | PASSWORD='abc123'
18 | FRAME_SKIP = 5
19 |
20 | # if you want face detection
21 | #PARAMS = {'delete':'true', 'type':'face'}
22 |
23 | # if you want object detection
24 | PARAMS = {'delete':'true'}
25 | # If you want to use the webcam
26 | CAPTURE_SRC=0
27 | # you can also point it to any media URL, like an RTSP one, or a file
28 | #CAPTURE_SRC='rtsp://whatever'
29 |
30 | # If you want to use ZM
31 | # note your URL may need /cgi-bin/zm/nph-zms - make sure you specify it correctly
32 | # CAPTURE_SRC='https://demo.zoneminder.com/cgi-bin-zm/nph-zms?mode=jpeg&maxfps=5&buffer=1000&monitor=18&user=zmuser&pass=zmpass'
33 | #--------- end ----------------------------
34 |
35 | if len(sys.argv) > 1:
36 |     CAPTURE_SRC=sys.argv[1]
37 |
38 | login_url = BASE_API_URL+'/login'
39 | object_url = BASE_API_URL+'/detect/object'
40 | access_token = None
41 | auth_header = None
42 |
43 | # Get API access token
44 | r = requests.post(url=login_url,
45 |                   data=json.dumps({'username':USER, 'password':PASSWORD}),
46 |                   headers={'content-type': 'application/json'})
47 | data = r.json()
48 | access_token = data.get('access_token')
49 | if not access_token:
50 |     print(data)
51 |     print('Error retrieving access token')
52 |     exit()
53 |
54 | # subsequent requests need this JWT token
55 | # Note it will expire in 2 hrs by default
56 | auth_header = {'Authorization': 'Bearer '+access_token}
57 |
58 | # Draws bounding boxes around detections
59 | def draw_boxes(frame,data):
60 |     color = (0,255,0) # bgr
61 |     for item in data:
62 |         bbox = item.get('box')
63 |         label = item.get('type')
64 |
65 |         cv2.rectangle(frame, (bbox[0],bbox[1]), (bbox[2],bbox[3]), color, 2)
66 |         cv2.putText(frame, label, (bbox[0],bbox[1]-10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
67 |
68 |
69 | video_source = cv2.VideoCapture(CAPTURE_SRC)
70 | frame_cnt = 0
71 |
72 | if not video_source.isOpened():
73 |     print("Could not open video_source")
74 |     exit()
75 |
76 |
77 | # read the video source, frame by frame and process it
78 | while video_source.isOpened():
79 |     status, frame = video_source.read()
80 |     if not status:
81 |         print('No more frames (or error reading frame)')
82 |         break
83 |
84 |     # resize width down to 800px before analysis
85 |     # don't need more
86 |     frame = imutils.resize(frame,width=800)
87 |     frame_cnt+=1
88 |     if frame_cnt % FRAME_SKIP:
89 |         continue
90 |
91 |     frame_cnt = 0
92 |     # The API expects non-raw images, so let's convert to jpg
93 |     ret, jpeg = cv2.imencode('.jpg', frame)
94 |     # filename is important because the API checks the filename extension
95 |     files = {'file': ('image.jpg', jpeg.tobytes())}
96 |     r = requests.post(url=object_url, headers=auth_header,params=PARAMS,files=files)
97 |     data = r.json()
98 |     draw_boxes(frame,data)
99 |
100 |     cv2.imshow('Object detection via MLAPI', frame)
101 |
102 |     if cv2.waitKey(1) & 0xFF == ord('q'):
103 |         break
104 |
105 | # release resources
106 | video_source.release()
107 | cv2.destroyAllWindows()
108 |
--------------------------------------------------------------------------------
/get_models.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #-----------------------------------------------------
4 | # Download required ML models
5 | #
6 | #
7 | #-----------------------------------------------------
8 |
9 | # --- Change these if you want --
10 |
11 | PYTHON=python3
12 | PIP=pip3
13 |
14 | INSTALL_YOLOV3=${INSTALL_YOLOV3:-yes}
15 | INSTALL_TINYYOLOV3=${INSTALL_TINYYOLOV3:-yes}
16 | INSTALL_YOLOV4=${INSTALL_YOLOV4:-yes}
17 | INSTALL_TINYYOLOV4=${INSTALL_TINYYOLOV4:-yes}
18 | INSTALL_CORAL_EDGETPU=${INSTALL_CORAL_EDGETPU:-no}
19 |
20 | TARGET_DIR='./models'
21 | WGET=$(which wget)
22 |
23 | # utility functions for color coded pretty printing
24 | print_error() {
25 | COLOR="\033[1;31m"
26 | NOCOLOR="\033[0m"
27 | echo -e "${COLOR}ERROR:${NOCOLOR}$1"
28 | }
29 |
30 | print_important() {
31 | COLOR="\033[0;34m"
32 | NOCOLOR="\033[0m"
33 | echo -e "${COLOR}IMPORTANT:${NOCOLOR}$1"
34 | }
35 |
36 | print_warning() {
37 | COLOR="\033[0;33m"
38 | NOCOLOR="\033[0m"
39 | echo -e "${COLOR}WARNING:${NOCOLOR}$1"
40 | }
41 |
42 | print_success() {
43 | COLOR="\033[1;32m"
44 | NOCOLOR="\033[0m"
45 | echo -e "${COLOR}Success:${NOCOLOR}$1"
46 | }
47 |
48 |
49 | mkdir -p "${TARGET_DIR}/yolov3" 2>/dev/null
50 | mkdir -p "${TARGET_DIR}/yolov4" 2>/dev/null
51 | mkdir -p "${TARGET_DIR}/tinyyolov3" 2>/dev/null
52 | mkdir -p "${TARGET_DIR}/tinyyolov4" 2>/dev/null
54 | mkdir -p "${TARGET_DIR}/coral_edgetpu" 2>/dev/null
55 |
56 | if [ "${INSTALL_CORAL_EDGETPU}" == "yes" ]
57 | then
58 | # Coral files
59 | echo
60 |
61 | echo 'Checking for Google Coral Edge TPU data files...'
62 | targets=( 'coco_indexed.names' 'ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite' 'ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite' 'ssd_mobilenet_v2_face_quant_postprocess_edgetpu.tflite')
63 | sources=('https://dl.google.com/coral/canned_models/coco_labels.txt'
64 | 'https://github.com/google-coral/edgetpu/raw/master/test_data/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'
65 | 'https://github.com/google-coral/test_data/raw/master/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite'
66 | 'https://github.com/google-coral/test_data/raw/master/ssd_mobilenet_v2_face_quant_postprocess_edgetpu.tflite'
67 | )
68 |
69 |
70 | for ((i=0;i<${#targets[@]};++i))
71 | do
72 | if [ ! -f "${TARGET_DIR}/coral_edgetpu/${targets[i]}" ]
73 | then
74 | ${WGET} "${sources[i]}" -O"${TARGET_DIR}/coral_edgetpu/${targets[i]}"
75 | else
76 | echo "${targets[i]} exists, no need to download"
77 |
78 | fi
79 | done
80 | fi
81 |
82 | if [ "${INSTALL_YOLOV3}" == "yes" ]
83 | then
84 | # If you don't already have data files, get them
85 | # First YOLOV3
86 | echo 'Checking for YoloV3 data files....'
87 | targets=('yolov3.cfg' 'coco.names' 'yolov3.weights')
88 | sources=('https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg'
89 | 'https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names'
90 | 'https://pjreddie.com/media/files/yolov3.weights')
91 |
92 | [ -f "${TARGET_DIR}/yolov3/yolov3_classes.txt" ] && rm "${TARGET_DIR}/yolov3/yolov3_classes.txt"
93 |
94 |
95 | for ((i=0;i<${#targets[@]};++i))
96 | do
97 | if [ ! -f "${TARGET_DIR}/yolov3/${targets[i]}" ]
98 | then
99 | ${WGET} "${sources[i]}" -O"${TARGET_DIR}/yolov3/${targets[i]}"
100 | else
101 | echo "${targets[i]} exists, no need to download"
102 |
103 | fi
104 | done
105 | fi
106 |
107 | if [ "${INSTALL_TINYYOLOV3}" == "yes" ]
108 | then
109 | # Next up, TinyYOLOV3
110 | echo
111 | echo 'Checking for TinyYOLOV3 data files...'
112 | targets=('yolov3-tiny.cfg' 'coco.names' 'yolov3-tiny.weights')
113 | sources=('https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3-tiny.cfg'
114 | 'https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names'
115 | 'https://pjreddie.com/media/files/yolov3-tiny.weights')
116 |
117 | [ -f "${TARGET_DIR}/tinyyolov3/yolov3-tiny.txt" ] && rm "${TARGET_DIR}/tinyyolov3/yolov3-tiny.txt"
118 |
119 | for ((i=0;i<${#targets[@]};++i))
120 | do
121 | if [ ! -f "${TARGET_DIR}/tinyyolov3/${targets[i]}" ]
122 | then
123 | ${WGET} "${sources[i]}" -O"${TARGET_DIR}/tinyyolov3/${targets[i]}"
124 | else
125 | echo "${targets[i]} exists, no need to download"
126 |
127 | fi
128 | done
129 | fi
130 |
131 | if [ "${INSTALL_TINYYOLOV4}" == "yes" ]
132 | then
133 | # Next up, TinyYOLOV4
134 | echo
135 | echo 'Checking for TinyYOLOV4 data files...'
136 | targets=('yolov4-tiny.cfg' 'coco.names' 'yolov4-tiny.weights')
137 | sources=('https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-tiny.cfg'
138 | 'https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names'
139 | 'https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights')
140 |
141 | for ((i=0;i<${#targets[@]};++i))
142 | do
143 | if [ ! -f "${TARGET_DIR}/tinyyolov4/${targets[i]}" ]
144 | then
145 | ${WGET} "${sources[i]}" -O"${TARGET_DIR}/tinyyolov4/${targets[i]}"
146 | else
147 | echo "${targets[i]} exists, no need to download"
148 |
149 | fi
150 | done
151 | fi
152 |
153 | if [ "${INSTALL_YOLOV4}" == "yes" ]
154 | then
155 |
156 | echo
157 | echo 'Checking for YOLOV4 data files...'
158 | print_warning 'Note, you need OpenCV > 4.3 for Yolov4 to work'
159 | targets=('yolov4.cfg' 'coco.names' 'yolov4.weights')
160 | sources=('https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg'
161 | 'https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names'
162 | 'https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights'
163 | )
164 |
165 | for ((i=0;i<${#targets[@]};++i))
166 | do
167 | if [ ! -f "${TARGET_DIR}/yolov4/${targets[i]}" ]
168 | then
169 | ${WGET} "${sources[i]}" -O"${TARGET_DIR}/yolov4/${targets[i]}"
170 | else
171 | echo "${targets[i]} exists, no need to download"
172 |
173 | fi
174 | done
175 | fi
176 |
--------------------------------------------------------------------------------
/images/README.md:
--------------------------------------------------------------------------------
1 | This is where images will be stored, either temporarily or permanently
2 |
3 |
--------------------------------------------------------------------------------
/install_as_service.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #-----------------------------------------------------
4 | # Install script to make mlapi run as a service
5 | # Only for ubuntu
6 | #-----------------------------------------------------
7 |
8 |
9 | TARGET_MLAPI_DIR="/var/lib/zmeventnotification/mlapi"
10 | RSYNC="rsync -av --progress"
11 |
12 | echo "This is really just my internal script to run mlapi as a service"
13 | echo "You probably need to modify mlapi.service and this script"
14 | echo "Which means, you likely don't want to run this..."
15 | echo
16 | read -p "Meh. I know what I'm doing. INSTALL! (Press some key...)"
17 |
18 | if [[ $EUID -ne 0 ]]
19 | then
20 | echo
21 | echo "********************************************************************************"
22 | echo " This script needs to run as root"
23 | echo "********************************************************************************"
24 | echo
25 | exit
26 | fi
27 |
28 | # Same dir check
29 | touch .temp-dir-check.txt 2>/dev/null
30 | if [ -f ${TARGET_MLAPI_DIR}/.temp-dir-check.txt ]
31 | then
32 | echo "*** Error, the target and source directory seem to be the same! **"
33 | rm -f .temp-dir-check.txt 2>/dev/null
34 | exit 1
35 | else
36 | rm -f .temp-dir-check.txt 2>/dev/null
37 | fi
38 |
39 | if [ ! -d "${TARGET_MLAPI_DIR}" ]
40 | then
41 | echo "Creating ${TARGET_MLAPI_DIR}"
42 | mkdir -p "${TARGET_MLAPI_DIR}"
43 | fi
44 |
45 | echo "Syncing files to ${TARGET_MLAPI_DIR}"
46 | EXCLUDE_PATTERN="--exclude .git"
47 |
48 | if [ -d "${TARGET_MLAPI_DIR}/db" ]
49 | then
50 | echo "Skipping db directory as it already exists in: ${TARGET_MLAPI_DIR}"
51 | EXCLUDE_PATTERN="${EXCLUDE_PATTERN} --exclude db"
52 | fi
53 |
54 | if [ -d "${TARGET_MLAPI_DIR}/known_faces" ]
55 | then
56 | echo "Skipping known_faces directory as it already exists in: ${TARGET_MLAPI_DIR}"
57 | EXCLUDE_PATTERN="${EXCLUDE_PATTERN} --exclude known_faces"
58 | fi
59 |
60 | if [ -d "${TARGET_MLAPI_DIR}/unknown_faces" ]
61 | then
62 | echo "Skipping unknown_faces directory as it already exists in: ${TARGET_MLAPI_DIR}"
63 | EXCLUDE_PATTERN="${EXCLUDE_PATTERN} --exclude unknown_faces"
64 | fi
65 |
66 | if [ -f "${TARGET_MLAPI_DIR}/mlapiconfig.ini" ]
67 | then
68 | echo "Skipping mlapiconfig.ini file as it already exists in: ${TARGET_MLAPI_DIR}"
69 | EXCLUDE_PATTERN="${EXCLUDE_PATTERN} --exclude mlapiconfig.ini"
70 | fi
71 |
72 | if [ -f "${TARGET_MLAPI_DIR}/secrets.ini" ]
73 | then
74 | echo "Skipping secrets.ini file as it already exists in: ${TARGET_MLAPI_DIR}"
75 | EXCLUDE_PATTERN="${EXCLUDE_PATTERN} --exclude secrets.ini"
76 | fi
77 |
78 |
79 | echo ${RSYNC} . ${TARGET_MLAPI_DIR} ${EXCLUDE_PATTERN}
80 | ${RSYNC} . ${TARGET_MLAPI_DIR} ${EXCLUDE_PATTERN}
81 |
82 | #cp -R * "${TARGET_MLAPI_DIR}/"
83 | install -m 755 -o "www-data" -g "www-data" mlapi.py "${TARGET_MLAPI_DIR}"
84 | install -m 755 -o "www-data" -g "www-data" mlapi_logrot.sh "${TARGET_MLAPI_DIR}"
85 | install -m 755 -o "www-data" -g "www-data" mlapi_face_train.py "${TARGET_MLAPI_DIR}"
86 |
87 |
88 | chown -R www-data:www-data ${TARGET_MLAPI_DIR}
89 |
90 | echo "Copying service file"
91 | cp mlapi.service /etc/systemd/system
92 | chmod 644 /etc/systemd/system/mlapi.service
93 | systemctl enable mlapi.service
94 |
95 |
96 | echo "Starting mlapi service"
97 | systemctl daemon-reload
98 | systemctl restart mlapi
99 |
100 |
--------------------------------------------------------------------------------
/known_faces/README.md:
--------------------------------------------------------------------------------
1 | Put your face recognition training images here.
2 | One directory per person, with images of that person's face inside it.
3 | Example:
4 | ```
5 | known_faces/
6 | +----------bruce_lee/
7 | +------1.jpg
8 | +------2.jpg
9 | +----------david_gilmour/
10 | +------1.jpg
11 | +------img2.jpg
12 | +------3.jpg
13 | +----------ramanujan/
14 | +------face1.jpg
15 | +------face2.jpg
16 | ```
17 |
--------------------------------------------------------------------------------
/make_changelog.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | if [ -z "$1" ]; then
3 | TAGVER=`python3 ./mlapi.py --version`
4 | else
5 | TAGVER=$1
6 | fi
7 | VER="${TAGVER/v/}"
8 | read -p "Future release is v${VER}. Please press any key to confirm..."
9 | github_changelog_generator -u pliablepixels -p mlapi --future-release v${VER}
10 | #github_changelog_generator --future-release v${VER}
11 |
12 |
--------------------------------------------------------------------------------
/make_tag.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | if [ -z "$1" ]; then
3 | echo "Inferring version name from modules/__init__.py"
4 | if [[ `cat modules/__init__.py` =~ ^__version__\ =\ \"(.*)\" ]];
5 | then
6 | TAGVER=${BASH_REMATCH[1]}
7 | else
8 | echo "Bad version parsing"
9 |     exit 1
10 | fi
11 | else
12 | TAGVER=$1
13 | fi
14 | VER="${TAGVER/v/}"
15 | echo "Creating tag:v$VER"
16 |
17 | read -p "Please generate CHANGELOG and commit it BEFORE you tag. Press a key when ready..."
18 | read -p "Press any key to create the tag or Ctrl-C to break..." -n1
19 | git tag -fa v$VER -m"v$VER"
20 | git push -f --tags
21 | git push upstream -f --tags
22 |
--------------------------------------------------------------------------------
/mlapi.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 |
3 | from flask import Flask, send_file, request, jsonify, render_template
4 | import requests as py_requests
5 | from flask_restful import Resource, Api, reqparse, abort, inputs
6 | from flask_jwt_extended import (JWTManager, jwt_required, create_access_token, get_jwt_identity)
7 | from werkzeug.utils import secure_filename
8 | from werkzeug.exceptions import HTTPException, default_exceptions
9 | from werkzeug.datastructures import FileStorage
10 | from functools import wraps
11 | from mimetypes import guess_extension
12 | #from collections import deque
13 |
14 | import os
15 |
16 | import cv2
17 | import uuid
18 | import numpy as np
19 | import argparse
20 | import copy
21 |
22 | import modules.common_params as g
23 | import modules.db as Database
24 | import modules.utils as utils
25 | from modules.__init__ import __version__
26 | from pyzm import __version__ as pyzm_version
27 |
28 |
29 | from pyzm.ml.detect_sequence import DetectSequence
30 | import pyzm.helpers.utils as pyzmutils
31 | import ast
32 | import pyzm.api as zmapi
33 |
34 |
35 | def file_ext(str):
36 | f,e = os.path.splitext(str)
37 | return e.lower()
38 |
39 | # Checks if filename is allowed
40 | def allowed_ext(ext):
41 | return ext.lower() in g.ALLOWED_EXTENSIONS
42 |
43 | def parse_args():
44 | parser = reqparse.RequestParser()
45 | parser.add_argument('type', location='args', default=None)
46 | parser.add_argument('response_format', location='args', default='legacy')
47 | parser.add_argument('delete', location='args', type=inputs.boolean, default=False)
48 | parser.add_argument('download', location='args', type=inputs.boolean, default=False)
49 | parser.add_argument('url', default=False)
50 | parser.add_argument('file', type=FileStorage, location='files')
51 | return parser.parse_args()
52 |
53 | def get_file(args):
54 | # Assigns a unique name to the image and saves it locally for analysis
55 | unique_filename = str(uuid.uuid4())
56 | file_with_path_no_ext = os.path.join(app.config['UPLOAD_FOLDER'], unique_filename)
57 | ext = None
58 |
59 | # uploaded as multipart data
60 | if args['file']:
61 | file = args['file']
62 | ext = file_ext(file.filename)
63 | if file.filename and allowed_ext(ext):
64 | file.save(file_with_path_no_ext+ext)
65 | else:
66 | abort (500, msg='Bad file type {}'.format(file.filename))
67 |
68 | # passed as a payload url
69 | elif args['url']:
70 | url = args['url']
71 | g.log.Debug (1,'Got url:{}'.format(url))
72 | ext = file_ext(url)
73 | r = py_requests.get(url, allow_redirects=True)
74 |
75 | cd = r.headers.get('content-disposition')
76 | ct = r.headers.get('content-type')
77 | if cd:
78 | ext = file_ext(cd)
79 | g.log.Debug (1,'extension {} derived from {}'.format(ext,cd))
80 | elif ct:
81 | ext = guess_extension(ct.partition(';')[0].strip())
82 | if ext == '.jpe':
83 | ext = '.jpg'
84 | g.log.Debug (1,'extension {} derived from {}'.format(ext,ct))
85 | if not allowed_ext(ext):
86 | abort(400, msg='filetype {} not allowed'.format(ext))
87 | else:
88 | ext = '.jpg'
89 | open(file_with_path_no_ext+ext, 'wb').write(r.content)
90 | else:
91 | abort(400, msg='could not determine file type')
92 |
93 | g.log.Debug (1,'get_file returned: {}{}'.format(file_with_path_no_ext,ext))
94 | return file_with_path_no_ext, ext
95 | # general argument processing
96 |
97 |
98 | class Detect(Resource):
99 | @jwt_required()
100 | def post(self):
101 | args = parse_args()
102 | req = request.get_json()
103 |
104 | fi = None
105 | stream_options={}
106 | stream = None
107 | ml_overrides = {}
108 | config_copy = None
109 | poly_copy = None
110 | ml_options = None
111 | mid = None
112 |
113 | if not req:
114 | req = {}
115 |
116 | if req.get('mid') and str(req.get('mid')) in g.monitor_config:
117 | mid = str(req.get('mid'))
118 | g.logger.Debug (1, 'Monitor ID {} provided & matching config found in mlapi, ignoring objectconfig.ini'.format(mid))
119 | config_copy = copy.copy(g.config)
120 | poly_copy = copy.copy(g.polygons)
121 | g.polygons = copy.copy(g.monitor_polygons[mid])
122 |
123 | for key in g.monitor_config[mid]:
124 | # This will also take care of copying over mid specific stream_options
125 | g.logger.Debug(2, 'Overriding global {} with {}...'.format(key, g.monitor_config[mid][key][:30]))
126 | g.config[key] = g.monitor_config[mid][key]
127 |
128 | # stupid mlapi and zm_detect config incompatibility
129 | if not g.config.get('image_path') and g.config.get('images_path'):
130 | g.config['image_path'] = g.config['images_path']
131 |
132 | # At this stage, polygons has a copy of that monitor polygon set
133 | # g.config has overriden values of config from the mid
134 |
135 | r = req.get('reason')
136 | if r and g.config['only_triggered_zm_zones'] == 'yes' and g.config['import_zm_zones'] == 'yes':
137 | g.logger.Debug(2, 'Only filtering polygon names that have {}'.format(r))
138 | g.logger.Debug(2, 'Original polygons being used: {}'.format(g.polygons))
139 |
140 | g.polygons[:] = [item for item in g.polygons if utils.findWholeWord(item['name'])(r)]
141 | g.logger.Debug(2, 'Final polygons being used: {}'.format(g.polygons))
142 | if g.config['ml_sequence'] and g.config['use_sequence'] == 'yes':
143 | g.log.Debug(2,'using ml_sequence')
144 | ml_options = g.config['ml_sequence']
145 | secrets = pyzmutils.read_config(g.config['secrets'])
146 | ml_options = pyzmutils.template_fill(input_str=ml_options, config=None, secrets=secrets._sections.get('secrets'))
147 | ml_options = ast.literal_eval(ml_options)
148 | #print (ml_options)
149 | else:
150 | g.logger.Debug(2,'mapping legacy ml data from config')
151 | ml_options = utils.convert_config_to_ml_sequence()
152 |
153 | g.logger.Debug (2, 'Overwriting ml_sequence of pre loaded model')
154 | m.set_ml_options(ml_options)
155 | else:
156 | g.logger.Debug(1,'Monitor ID not specified, or not found in mlapi config, using zm_detect overrides')
157 | ml_overrides = req.get('ml_overrides',{})
158 | if g.config['ml_sequence'] and g.config['use_sequence'] == 'yes':
159 | g.log.Debug(2,'using ml_sequence')
160 | ml_options = g.config['ml_sequence']
161 | secrets = pyzmutils.read_config(g.config['secrets'])
162 | ml_options = pyzmutils.template_fill(input_str=ml_options, config=None, secrets=secrets._sections.get('secrets'))
163 | ml_options = ast.literal_eval(ml_options)
164 | #print (ml_options)
165 | else:
166 | g.logger.Debug(2,'mapping legacy ml data from config')
167 | ml_options = utils.convert_config_to_ml_sequence()
168 | if 'polygons' in req.get('stream_options', {}):
169 | g.logger.Debug(2, "Set polygons from request")
170 | g.polygons = req.get('stream_options')['polygons']
171 | poly_copy = copy.deepcopy(g.polygons)
172 |
173 | if g.config.get('stream_sequence'):
174 | g.logger.Debug(2, 'Found stream_sequence in mlapi config, ignoring objectconfig.ini')
175 | stream_options = ast.literal_eval(g.config.get('stream_sequence'))
176 | else:
177 | stream_options = req.get('stream_options')
178 | if not stream_options:
179 | if config_copy:
180 | g.log.Debug(2, 'Restoring global config & ml_options')
181 | g.config = config_copy
182 | g.polygons = poly_copy
183 | abort(400, msg='No stream options found')
184 | stream_options['api'] = zmapi
185 | stream_options['polygons'] = g.polygons
186 |
187 | stream = req.get('stream')
188 |
189 | if args['type'] == 'face':
190 | g.log.Debug(1, 'Face Recognition requested')
191 |
192 | elif args['type'] == 'alpr':
193 | g.log.Debug(1, 'ALPR requested')
194 |
195 | elif args['type'] in [None, 'object']:
196 | g.log.Debug(1, 'Object Recognition requested')
197 | #m = ObjectDetect.Object()
198 | else:
199 | if config_copy:
200 | g.log.Debug(2, 'Restoring global config & ml_options')
201 | g.config = config_copy
202 | g.polygons = poly_copy
203 | abort(400, msg='Invalid Model:{}'.format(args['type']))
204 |
205 | if not stream:
206 | g.log.Debug (1, 'Stream info not found, looking at args...')
207 | fip,ext = get_file(args)
208 | fi = fip+ext
209 | stream = fi
210 |
211 | #image = cv2.imread(fi)
212 | #bbox,label,conf = m.detect(image)
213 |
214 | stream_options['mid'] = mid
215 | if not stream_options.get('delay') and g.config.get('wait'):
216 | stream_options['delay'] = g.config.get('wait')
217 | g.log.Debug(1, 'Calling detect streams')
218 | matched_data,all_matches = m.detect_stream(stream=stream, options=stream_options, ml_overrides=ml_overrides)
219 |
220 | #if matched_data['image_dimensions'] and matched_data['image_dimensions']['original']:
221 | #oldh = matched_data['image_dimensions']['original'][0]
222 | #oldw = matched_data['image_dimensions']['original'][1]
223 |
224 | if config_copy:
225 | g.log.Debug(2, 'Restoring global config & ml_options')
226 | g.config = config_copy
227 | g.polygons = poly_copy
228 |
229 | matched_data['image'] = None
230 | if args.get('response_format') == 'zm_detect':
231 | resp_obj= {
232 | 'matched_data': matched_data,
233 | 'all_matches': all_matches,
234 | }
235 | g.log.Debug (1, 'Returning {}'.format(resp_obj))
236 | return resp_obj
237 |
238 | # legacy format
239 | bbox = matched_data['boxes']
240 | label = matched_data['labels']
241 | conf = matched_data['confidences']
242 |
243 |
244 | detections=[]
245 | for l, c, b in zip(label, conf, bbox):
246 | c = "{:.2f}%".format(c * 100)
247 | obj = {
248 | 'type': 'object',
249 | 'label': l,
250 | 'confidence': c,
251 | 'box': b
252 | }
253 | detections.append(obj)
254 |
255 | if args['delete'] and fi:
256 | #pass
257 | os.remove(fi)
258 | return detections
259 |
260 |
261 | # generates a JWT token to use for auth
262 | class Login(Resource):
263 | def post(self):
264 | if not request.is_json:
265 | abort(400, msg='Missing JSON in request')
266 |
267 | username = request.json.get('username', None)
268 | password = request.json.get('password', None)
269 | if not username:
270 | abort(400, message='Missing username')
271 |
272 | if not password:
273 | abort(400, message='Missing password')
274 |
275 | if not db.check_credentials(username,password):
276 | abort(401, message='incorrect credentials')
277 | # Identity can be any data that is json serializable
278 | access_token = create_access_token(identity=username)
279 | response = jsonify(access_token=access_token, expires=g.ACCESS_TOKEN_EXPIRES)
280 | response.status_code = 200
281 | return response
282 |
283 | # implement a basic health check.
284 | class Health(Resource):
285 | def get(self):
286 | response = jsonify("ok")
287 | response.status_code = 200
288 | return response
289 |
290 | def get_http_exception_handler(app):
291 | """Overrides the default http exception handler to return JSON."""
292 | handle_http_exception = app.handle_http_exception
293 | @wraps(handle_http_exception)
294 | def ret_val(exception):
295 | exc = handle_http_exception(exception)
296 | return jsonify({'code': exc.code, 'msg': exc.description}), exc.code
297 | return ret_val
298 |
299 |
300 | #-----------------------------------------------
301 | # main init
302 | #-----------------------------------------------
303 |
304 | ap = argparse.ArgumentParser()
305 | ap.add_argument('-c', '--config', help='config file with path')
306 | ap.add_argument('-vv', '--verboseversion', action='store_true', help='print version and exit')
307 | ap.add_argument('-v', '--version', action='store_true', help='print mlapi version and exit')
308 | ap.add_argument('-d', '--debug', help='enables debug on console', action='store_true')
309 | ap.add_argument('-g', '--gpu', type=int, help='specify which GPU to use if multiple are present')
310 |
311 | args, u = ap.parse_known_args()
312 | args = vars(args)
313 |
314 | if args.get('version'):
315 | print('{}'.format(__version__))
316 | exit(0)
317 |
318 | if not args.get('config'):
319 | print ('--config required')
320 | exit(1)
321 |
322 | utils.process_config(args)
323 |
324 | app = Flask(__name__)
325 | # Override the HTTP exception handler.
326 | app.handle_http_exception = get_http_exception_handler(app)
327 | api = Api(app, prefix='/api/v1')
328 | app.config['UPLOAD_FOLDER'] = g.config['images_path']
329 | app.config['MAX_CONTENT_LENGTH'] = g.MAX_FILE_SIZE_MB * 1024 * 1024
330 | app.config['JWT_SECRET_KEY'] = g.config['mlapi_secret_key']
331 | app.config['JWT_ACCESS_TOKEN_EXPIRES'] = g.ACCESS_TOKEN_EXPIRES
332 | app.config['PROPAGATE_EXCEPTIONS'] = True
333 | app.debug = False
334 | jwt = JWTManager(app)
335 | db = Database.Database()
336 | api.add_resource(Login, '/login')
337 | api.add_resource(Detect, '/detect/object')
338 | api.add_resource(Health, '/health')
339 |
340 | secrets_conf = pyzmutils.read_config(g.config['secrets'])
341 |
342 | g.config['api_portal'] = g.config['api_portal'] or pyzmutils.get(key='ZM_API_PORTAL', section='secrets', conf=secrets_conf)
343 | g.config['portal'] = g.config['portal'] or pyzmutils.get(key='ZM_PORTAL', section='secrets', conf=secrets_conf)
344 | g.config['user'] = g.config['user'] or pyzmutils.get(key='ZM_USER', section='secrets', conf=secrets_conf)
345 | g.config['password'] = g.config['password'] or pyzmutils.get(key='ZM_PASSWORD', section='secrets', conf=secrets_conf)
346 |
347 | if g.config['auth_enabled'] == 'no':
348 | g.config['user'] = None
349 | g.config['password'] = None
350 | g.logger.Info('Turning off auth for mlapi')
351 |
352 | api_options = {
353 | 'apiurl': g.config['api_portal'],
354 | 'portalurl':g.config['portal'],
355 | 'user':g.config['user'] ,
356 | 'password': g.config['password'],
357 | 'basic_auth_user': g.config['basic_auth_user'],
358 | 'basic_auth_password': g.config['basic_auth_password'],
359 | 'disable_ssl_cert_check':False if g.config['allow_self_signed']=='no' else True
360 | }
361 |
362 | g.log.set_level(5)
363 |
364 | if not api_options.get('apiurl') or not api_options.get('portalurl'):
365 | g.log.Info('Missing API and/or Portal URLs. Your secrets file probably doesn\'t have these values')
366 | else:
367 | zmapi = zmapi.ZMApi(options=api_options)
368 | utils.check_and_import_zones(zmapi)
369 |
370 | ml_options = {}
371 | stream_options = {}
372 |
373 | if g.config['ml_sequence'] and g.config['use_sequence'] == 'yes':
374 | g.log.Debug(2,'using ml_sequence')
375 | ml_options = g.config['ml_sequence']
376 | secrets = pyzmutils.read_config(g.config['secrets'])
377 | ml_options = pyzmutils.template_fill(input_str=ml_options, config=None, secrets=secrets._sections.get('secrets'))
378 | #print (ml_options)
379 | ml_options = ast.literal_eval(ml_options)
380 | #print (ml_options)
381 | else:
382 | g.logger.Debug(2,'mapping legacy ml data from config')
383 | ml_options = utils.convert_config_to_ml_sequence()
384 | g.config['ml_options'] = ml_options
385 |
386 | # stream options will come from zm_detect
387 |
388 | #print(ml_options)
389 |
390 | m = DetectSequence(options=ml_options, global_config=g.config)
391 |
392 |
393 | if __name__ == '__main__':
394 | g.log.Info ('--------| mlapi version:{}, pyzm version:{} |--------'.format(__version__, pyzm_version))
395 |
396 | #app.run(host='0.0.0.0', port=5000, threaded=True)
397 | #app.run(host='0.0.0.0', port=g.config['port'], threaded=False, processes=g.config['processes'])
398 |
399 | cudaDeviceCount = cv2.cuda.getCudaEnabledDeviceCount()
400 | if cudaDeviceCount == 1:
401 | deviceCountPlural = ''
402 | else:
403 | deviceCountPlural = 's'
404 |
405 | g.log.Debug (1, '{} CUDA-enabled device{} found'.format(cudaDeviceCount, deviceCountPlural))
406 |
407 | if args.get('gpu'):
408 | selectedGPU = int(args.get('gpu'))
409 | if selectedGPU in range(0,cudaDeviceCount):
410 | g.log.Debug(1, 'Using GPU #{}'.format(selectedGPU))
411 | cv2.cuda.setDevice(selectedGPU)
412 | else:
413 | g.log.Warning('Invalid CUDA GPU #{} selected, ignoring'.format(selectedGPU))
414 | if (cudaDeviceCount > 1):
415 | g.log.Info('Valid options for GPU are 0-{}:'.format(cudaDeviceCount-1))
416 | for cudaDevice in range(0,cudaDeviceCount):
417 | cv2.cuda.printShortCudaDeviceInfo(cudaDevice)
418 |
419 |
420 | if g.config['wsgi_server'] == 'bjoern':
421 | g.log.Info ('Using bjoern as WSGI server')
422 | import bjoern
423 | bjoern.run(app,host='0.0.0.0',port=g.config['port'])
424 | else:
425 | g.log.Info ('Using flask as WSGI server')
426 | g.log.Info ('Starting server with max:{} processes'.format(g.config['processes']))
427 | app.run(host='0.0.0.0', port=g.config['port'], threaded=False, processes=g.config['processes'])
428 |
429 |
--------------------------------------------------------------------------------
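The Login/Detect/Health resources above are mounted under the `/api/v1` prefix. A minimal client-side sketch of the request pieces (host, port and credentials here are placeholders, not values from this repo; this builds the payloads only and does not contact a server):

```python
# mlapi listens on port 5000 by default; adjust BASE to your deployment
BASE = "http://localhost:5000/api/v1"

def login_payload(username, password):
    # POST this as JSON to BASE + '/login'; the JSON response carries
    # 'access_token' and 'expires'
    return {"username": username, "password": password}

def auth_header(access_token):
    # flask_jwt_extended expects the JWT in a Bearer authorization header
    return {"Authorization": "Bearer {}".format(access_token)}

# A detection request then POSTs to BASE + '/detect/object', passing the
# image either as a multipart 'file' upload or a 'url' argument, with
# auth_header(token) attached.
```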
/mlapi.service:
--------------------------------------------------------------------------------
1 | # This is a systemd startup file if you are on a system that
2 | # supports systemd and you want mlapi to run as an always-on
3 | # service
4 |
5 | # Please make sure you run mlapi manually first
6 | # to create a user/password for access and then enable
7 | # this service
8 |
9 | # To make this persistent
10 | # sudo cp mlapi.service /etc/systemd/system
11 | # sudo chmod 644 /etc/systemd/system/mlapi.service
12 | # sudo systemctl enable mlapi.service
13 |
14 | # To start,
15 | # sudo systemctl start mlapi
16 |
17 | [Unit]
18 | Description=Machine Learning API service
19 | After=network.target
20 | StartLimitIntervalSec=0
21 |
22 | [Service]
23 | Type=simple
24 | Restart=always
25 | RestartSec=5
26 | # We need this to get logs correctly
27 | Environment=PYTHONUNBUFFERED=1
28 |
29 | # change this
30 | WorkingDirectory=/var/lib/zmeventnotification/mlapi
31 | # Change to your username
32 | User=www-data
33 | #Change paths if needed
34 | ExecStart=/var/lib/zmeventnotification/mlapi/mlapi.py -c ./mlapiconfig.ini
35 | ExecStartPost=+/bin/sh -c 'umask 022; pgrep mlapi.py > /var/run/mlapi.pid'
36 |
37 |
38 | # Note that if you enable use_zm_logs=yes in mlapiconfig.ini
39 | # you can comment these out. If you enable use_zm_logs, the logs
40 | # will be written in ZM log format to /zm_mlapi.log
41 | StandardOutput=file:/var/log/zm/mlapi.log
42 | StandardError=file:/var/log/zm/mlapi_error.log
43 |
44 | [Install]
45 | WantedBy=multi-user.target
--------------------------------------------------------------------------------
/mlapi_dbuser.py:
--------------------------------------------------------------------------------
1 | from tinydb import TinyDB, Query, where
2 | from passlib.hash import sha256_crypt
3 | import modules.common_params as g
4 | import getpass
5 |
6 | import modules.db as Database
7 | import argparse
8 |
9 | ap = argparse.ArgumentParser()
10 | ap.add_argument('-u', '--user', help='username to create')
11 | ap.add_argument('-p', '--password', help='password of user')
12 | ap.add_argument('-d', '--dbpath', default='./db', help='path to DB')
13 | ap.add_argument('-f', '--force', help='force overwrite user', action='store_true')
14 | ap.add_argument('-l', '--list', help='list all users', action='store_true')
15 | ap.add_argument('-r', '--remove', help='remove user' )
16 |
17 | args, u = ap.parse_known_args()
18 | args = vars(args)
19 |
20 |
21 | g.config['db_path']= args.get('dbpath')
22 |
23 | db = Database.Database(prompt_to_create=False)
24 |
25 | if args.get('list'):
26 | print ('----- Configured users ---------------')
27 | for i in db.get_all_users():
28 | print ('User: {}'.format(i.get('name')))
29 | exit(0)
30 |
31 | if args.get('remove'):
32 | u = args.get('remove')
33 | if not db.get_user(u):
34 | print ('User: {} not found'.format(u))
35 | else:
36 | db.delete_user(args.get('remove'))
37 | print ('OK')
38 | exit(0)
39 |
40 |
41 | if not args.get('user') or not args.get('password'):
42 | print ('--------------- User Creation ------------')
43 | while True:
44 | name = input ('\nuser name (Ctrl+C to exit):')
45 | if not name:
46 | print ('Error: username needed')
47 | continue
48 | p1 = getpass.getpass('Please enter password:')
49 | if not p1:
50 | print ('Error: password cannot be empty')
51 | continue
52 | p2 = getpass.getpass('Please re-enter password:')
53 | if p1 != p2:
54 | print ('Passwords do not match, please re-try')
55 | continue
56 | break
57 | else:
58 | name = args.get('user')
59 | p1 = args.get('password')
60 |
61 | if not db.get_user(name) or args.get('force'):
62 | db.add_user(name,p1)
63 | print ('User: {} created'.format(name))
64 | else:
65 | print ('User: {} already exists. Use --force to override or --remove to remove old user first'.format(name))
66 |
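
# Usage sketches for this script (assuming the default ./db path):
#   python3 mlapi_dbuser.py -u admin -p secret     # create a user
#   python3 mlapi_dbuser.py -l                     # list configured users
#   python3 mlapi_dbuser.py -r admin               # remove a user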
--------------------------------------------------------------------------------
/mlapi_face_train.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 |
3 | import modules.utils as utils
4 | import modules.common_params as g
5 | import argparse
6 | import pyzm.ml.face_train as train
7 |
8 |
9 | ap = argparse.ArgumentParser()
10 | ap.add_argument('-c',
11 | '--config',
12 | default='./mlapiconfig.ini',
13 | help='config file with path')
14 |
15 | ap.add_argument('-s',
16 | '--size',
17 | type=int,
18 | help='resize amount (if you run out of memory)')
19 |
20 | ap.add_argument('-d', '--debug', help='enables debug on console', action='store_true')
21 |
22 |
23 | args, u = ap.parse_known_args()
24 | args = vars(args)
25 | utils.process_config(args)
26 | train.FaceTrain(options=g.config).train(size=args['size'])
27 |
28 |
--------------------------------------------------------------------------------
/mlapi_logrot.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # To enable log rotation being processed in mlapi
4 | # I edited /etc/logrotate.d/zoneminder to add an extra line
5 |
6 | # /var/lib/zmeventnotification/mlapi/mlapi_logrot.sh inside postrotate
7 | # at the end
8 |
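# A sketch of what that postrotate hook can look like (the glob and
# surrounding directives are assumptions; match them to your actual
# /etc/logrotate.d/zoneminder entry):
#
#   /var/log/zm/*.log {
#       ...
#       postrotate
#           /var/lib/zmeventnotification/mlapi/mlapi_logrot.sh
#       endscript
#   }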
9 | #-----------------------------------------------------
10 | # Handles HUP for logrot
11 | #-----------------------------------------------------
12 |
13 |
14 | if [ -f "/var/run/mlapi.pid" ]
15 | then
16 | kill -HUP `cat /var/run/mlapi.pid`
17 | fi
18 |
--------------------------------------------------------------------------------
/mlapiconfig.ini:
--------------------------------------------------------------------------------
1 | [general]
2 | # The secrets file is optional
3 | # If specified, you can define tokens with secret values in that file
4 | # and only refer to the tokens in your main config file
5 |
6 | #secrets=./secrets.ini
7 | secrets=/etc/zm/secrets.ini
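# A hypothetical sketch of that secrets file (values are placeholders;
# the token names match the !TOKEN references used in this config):
# [secrets]
# ZM_PORTAL=https://yourserver/zm
# ZM_API_PORTAL=https://yourserver/zm/api
# ZM_USER=admin
# ZM_PASSWORD=changeme
# MLAPI_SECRET_KEY=some-long-random-string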
8 |
9 | # portal/user/password are needed if you plan on using ZM's
10 | # auth mechanism to get images
11 | portal=!ZM_PORTAL
12 | user=!ZM_USER
13 | password=!ZM_PASSWORD
14 | #basic_auth_user=username
15 | #basic_auth_password=password
16 |
17 | # api portal is needed if you plan to use tokens to get images
18 | # requires ZM 1.33 or above
19 | api_portal=!ZM_API_PORTAL
20 |
21 | # make this no, if you don't plan to use auth. Default is yes.
22 | auth_enabled=yes
23 |
24 | # port that mlapi will listen on. Default 5000
25 | port=5000
26 |
27 | # Maximum # of processes that will be forked
28 | # to handle requests. Note that each process will
29 | # have its own copy of the model, so memory can
30 | # build up very quickly
31 | # This number also dictates how many requests will be executed in parallel
32 | # The rest will be queued
33 |
34 | # default: flask
35 | wsgi_server=bjoern
36 |
37 | # if yes, will use ZM logs. Default no
38 | #use_zm_logs=no
39 | use_zm_logs=yes
40 | pyzm_overrides={'log_level_debug':5}
41 |
42 | # If you are using bjoern, processes is always 1
43 | # For now, keep this to 1 if you are on a GPU
44 | processes=1
45 |
46 | # the secret key that will be used to sign
47 | # JWT tokens. Make sure you change the value
48 | # in your secrets.ini
49 | mlapi_secret_key=!MLAPI_SECRET_KEY
50 |
51 | # base data path for various files the ES+OD needs
52 | # we support config variable substitution as well
53 | base_data_path=/var/lib/zmeventnotification
54 | #base_data_path=.
55 | # folder where images will be uploaded
56 | # default ./images
57 | images_path={{base_data_path}}/images
58 |
59 | # folder where the user DB will be stored
60 | db_path=./db
61 |
62 |
63 | # If yes, will allow connections to self signed certificates
64 | # Default yes
65 | allow_self_signed=yes
66 |
67 |
68 | # You can now limit the # of detection processes
69 | # per target processor. If not specified, default is 1
70 | # Other detection processes will wait to acquire lock
71 |
72 | cpu_max_processes=3
73 | tpu_max_processes=1
74 | gpu_max_processes=1
75 |
76 | # NEW: Time in seconds to wait for a processor to be free, before
77 | # erroring out. Default is 120 (2 mins)
78 | cpu_max_lock_wait=120
79 | tpu_max_lock_wait=120
80 | gpu_max_lock_wait=120
81 |
82 | model_sequence=object,face,alpr
83 |
84 |
85 | # If yes, will import zm zones defined for monitors. Default is no
86 | #import_zm_zones=yes
87 |
88 | # If enabled, will only keep zones whose names match the alarm cause
89 | # This is useful if you only want to report detections where motion
90 | # was detected by ZM. Default no
91 | #only_triggered_zm_zones=no
92 |
93 | # if yes, last detection will be stored for monitors
94 | # and bounding boxes that match, along with labels
95 | # will be discarded for new detections. This may be helpful
96 | # in getting rid of static objects that get detected
97 | # due to some motion.
98 | match_past_detections=no
99 |
100 | # The max difference in area between the objects if match_past_detections is on
101 | # can also be specified in px like 300px. Default is 5%. Basically, bounding boxes of the same
102 | # object can differ ever so slightly between detections. Contributor @neillbell put in this PR
103 | # to calculate the difference in areas and based on his tests, 5% worked well. YMMV. Change it if needed.
104 | # Note: You can specify label/object specific max_diff_areas as well. If present, they override this value
105 | # example:
106 | # person_past_det_max_diff_area=5%
107 | # car_past_det_max_diff_area=5000px
108 | past_det_max_diff_area=5%
109 |
110 | # this is the maximum size a detected object can have. You can specify it in px or % just like past_det_max_diff_area
111 | # This is pretty useful to eliminate bogus detection. In my case, depending on shadows and other lighting conditions,
112 | # I sometimes see "car" or "person" detected that covers most of my driveway view. That is practically impossible
113 | # and therefore I set mine to 70% because I know any valid detected object cannot be larger than that area
114 |
115 | max_detection_size=90%
116 |
117 | # config for object
118 | [object]
119 |
120 | # If you are using legacy format (use_sequence=no) then these parameters will
121 | # be used during ML inferencing
122 | #object_detection_pattern=.*
123 | object_detection_pattern=(person|car|motorbike|bus|truck|boat)
124 | object_min_confidence=0.3
125 | object_framework=coral_edgetpu
126 | object_processor=tpu
127 | object_weights={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
128 | object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names
129 |
130 | # If you are using the new ml_sequence format (use_sequence=yes) then
131 | # you can fiddle with these parameters and look at ml_sequence later
132 | # Note that these can be named anything. You can add custom variables, ad infinitum
133 |
134 |
135 | # This is a useful debugging trick. If you are changing models and want to know which
136 | # model detected an object, make this yes. When yes, it will prefix the model name before the
137 | # detected object. Example: Instead of 'person', it will say '(yolo) person'
138 | show_models=no
139 |
140 | # Google Coral
141 | # The mobiledet model came out in Nov 2020 and is supposed to be faster and more accurate but YMMV
142 | tpu_object_weights_mobiledet={{base_data_path}}/models/coral_edgetpu/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite
143 | tpu_object_weights_mobilenet={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
144 | tpu_object_weights_yolov5={{base_data_path}}/models/coral_edgetpu/yolov5s-int8_edgetpu.tflite
145 | tpu_object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names
146 | tpu_object_framework=coral_edgetpu
147 | tpu_object_processor=tpu
148 | tpu_min_confidence=0.6
149 |
150 |
151 | # Yolo v4 on GPU (falls back to CPU if no GPU)
152 | yolo4_object_weights={{base_data_path}}/models/yolov4/yolov4.weights
153 | yolo4_object_labels={{base_data_path}}/models/yolov4/coco.names
154 | yolo4_object_config={{base_data_path}}/models/yolov4/yolov4.cfg
155 | yolo4_object_framework=opencv
156 | yolo4_object_processor=gpu
157 |
158 | # Yolo v3 on GPU (falls back to CPU if no GPU)
159 | yolo3_object_weights={{base_data_path}}/models/yolov3/yolov3.weights
160 | yolo3_object_labels={{base_data_path}}/models/yolov3/coco.names
161 | yolo3_object_config={{base_data_path}}/models/yolov3/yolov3.cfg
162 | yolo3_object_framework=opencv
163 | yolo3_object_processor=gpu
164 |
165 | # Tiny Yolo V4 on GPU (falls back to CPU if no GPU)
166 | tinyyolo_object_config={{base_data_path}}/models/tinyyolov4/yolov4-tiny.cfg
167 | tinyyolo_object_weights={{base_data_path}}/models/tinyyolov4/yolov4-tiny.weights
168 | tinyyolo_object_labels={{base_data_path}}/models/tinyyolov4/coco.names
169 | tinyyolo_object_framework=opencv
170 | tinyyolo_object_processor=gpu
171 |
172 |
173 |
174 | [face]
175 |
176 | # NOTE: None of these are used if use_sequence is enabled. If enabled,
177 | # only values in ml_sequence are processed
178 |
179 |
180 | face_detection_framework=dlib
181 | face_recognition_framework=dlib
182 | face_num_jitters=0
183 | face_upsample_times=1
184 | face_model=cnn
185 | face_train_model=cnn
186 | face_recog_dist_threshold=0.6
187 | face_recog_knn_algo=ball_tree
188 | known_images_path={{base_data_path}}/known_faces
189 | unknown_images_path={{base_data_path}}/unknown_faces
190 |
191 | unknown_face_name=unknown face
192 | save_unknown_faces=yes
193 | save_unknown_faces_leeway_pixels=50
194 |
195 | [alpr]
196 |
197 | # NOTE: None of these are used if use_sequence is enabled. If enabled,
198 | # only values in ml_sequence are processed
199 |
200 |
201 | alpr_use_after_detection_only=yes
202 | alpr_api_type=cloud
203 |
204 | # -----| If you are using plate recognizer | ------
205 | alpr_service=plate_recognizer
206 | alpr_key=!PLATEREC_ALPR_KEY
207 | platerec_stats=yes
208 | #platerec_regions=['us','cn','kr']
209 | platerec_min_dscore=0.1
210 | platerec_min_score=0.2
211 |
212 | # ----| If you are using openALPR |-----
213 | #alpr_service=open_alpr
214 | #alpr_key=!OPENALPR_ALPR_KEY
215 | #openalpr_recognize_vehicle=1
216 | #openalpr_country=us
217 | #openalpr_state=ca
218 | # openalpr returns percentages, which we convert to a 0-1 scale
219 | #openalpr_min_confidence=0.3
220 |
221 | # ----| If you are using openALPR command line |-----
222 | openalpr_cmdline_binary=alpr
223 | openalpr_cmdline_params=-j -d
224 | openalpr_cmdline_min_confidence=0.3
225 |
226 |
227 | ## Monitor specific settings
228 | # You can override any parameter on a per monitor basis
229 | # The format is [monitor-N] where N is the monitor id
230 |
231 | [monitor-9998]
232 | # doorbell
233 | model_sequence=face
234 | object_detection_pattern=(person|monitor_doorbell)
235 | valid_face_area=184,235 1475,307 1523,1940 146,1940
236 | match_past_detections=yes
237 |
238 |
239 | [monitor-9999]
240 | #deck
241 | object_detection_pattern=(person|monitor_deck)
242 | stream_sequence = {
243 | 'frame_strategy': 'most_models',
244 | 'frame_set': 'alarm',
245 | 'contig_frames_before_error': 5,
246 | 'max_attempts': 3,
247 | 'sleep_between_attempts': 4,
248 | 'resize':800
249 |
250 | }
251 |
252 | [ml]
253 | # if enabled, will not grab exclusive locks before running inferencing
254 | # locking seems to cause issues on some unique file systems
255 | disable_locks = no
256 | my_frame_strategy = most_models
257 |
258 | use_sequence = yes
259 |
260 | stream_sequence = {
261 | 'frame_strategy': '{{my_frame_strategy}}',
262 | 'frame_set': 'snapshot,alarm',
263 | 'contig_frames_before_error': 5,
264 | 'max_attempts': 3,
265 | 'sleep_between_attempts': 4,
266 | 'resize':800,
267 | # if yes, will convert 'snapshot' to a specific frame id
268 | # This is useful because you may see boxes drawn at the wrong places when using mlapi
269 | # This is because when mlapi detects an image, a 'snapshot' could point to, say, frame 45
270 | # But when zm_detect gets the detections back and draws the boxes, snapshot could have moved
271 | # to frame 50 (example). Enabling this makes sure mlapi tells zm_detect which frame id to use
272 | # default is 'no'
273 | 'convert_snapshot_to_fid': 'yes',
274 |
275 | } # very important - this brace needs to be indented inside stream_sequence
276 |
277 | ml_sequence= {
278 | 'general': {
279 | 'model_sequence': '{{model_sequence}}',
280 | 'disable_locks': '{{disable_locks}}',
281 | 'match_past_detections': '{{match_past_detections}}',
282 | 'past_det_max_diff_area': '5%',
283 | 'car_past_det_max_diff_area': '10%',
284 | #'ignore_past_detection_labels': ['dog', 'cat']
285 | # when matching past detections, names in a group are treated the same
286 | 'aliases': [['car','bus','truck','boat'], ['broccoli', 'pottedplant']]
287 |
288 | },
289 | 'object': {
290 | 'general':{
291 | 'pattern':'{{object_detection_pattern}}',
292 |             'same_model_sequence_strategy': 'most_unique', # also 'first', 'most'
293 | },
294 | 'sequence': [{
295 | #First run on TPU with higher confidence
296 | #'maxsize':320,
297 | 'name': 'TPU object detection',
298 | 'enabled': 'no',
299 | 'object_weights':'{{tpu_object_weights_mobiledet}}',
300 | 'object_labels': '{{tpu_object_labels}}',
301 | 'object_min_confidence': {{tpu_min_confidence}},
302 | 'object_framework':'{{tpu_object_framework}}',
303 | 'tpu_max_processes': {{tpu_max_processes}},
304 | 'tpu_max_lock_wait': {{tpu_max_lock_wait}},
305 | 'max_detection_size':'{{max_detection_size}}',
306 | 'show_models':'{{show_models}}',
307 |
308 | },
309 | {
310 |             # YoloV4 on CPU/GPU, runs after the TPU model above (which is disabled by default)
311 | 'name': 'CPU/GPU Yolov4 Object Detection',
312 | 'enabled': 'yes',
313 | 'object_config':'{{yolo4_object_config}}',
314 | 'object_weights':'{{yolo4_object_weights}}',
315 | 'object_labels': '{{yolo4_object_labels}}',
316 | 'object_min_confidence': {{object_min_confidence}},
317 | 'object_framework':'{{yolo4_object_framework}}',
318 | 'object_processor': '{{yolo4_object_processor}}',
319 | 'gpu_max_processes': {{gpu_max_processes}},
320 | 'gpu_max_lock_wait': {{gpu_max_lock_wait}},
321 | 'cpu_max_processes': {{cpu_max_processes}},
322 | 'cpu_max_lock_wait': {{cpu_max_lock_wait}},
323 | 'max_detection_size':'{{max_detection_size}}',
324 | 'match_past_detections': 'yes',
325 | 'past_det_max_diff_area': '5%',
326 | 'show_models':'{{show_models}}'
327 |
328 | }]
329 | },
330 | 'face': {
331 | 'general':{
332 | 'pattern': '{{face_detection_pattern}}',
333 | #'pre_existing_labels': ['person'], # when put in general section, it will check if a previous detection type (like object) found this label
334 | 'same_model_sequence_strategy': 'union' # combine results below
335 | },
336 | 'sequence': [
337 | {
338 | 'name': 'Face Detection (TPU)',
339 | 'enabled': 'no', # make this yes if you want face detection with TPU first
340 | 'face_detection_framework': 'tpu',
341 | 'face_weights':'/var/lib/zmeventnotification/models/coral_edgetpu/ssd_mobilenet_v2_face_quant_postprocess_edgetpu.tflite',
342 | 'face_min_confidence': 0.3
343 | },
344 | {
345 | 'name':'Face Recognition (Dlib)', # optional
346 | 'enabled': 'yes', # optional
347 | # 'pre_existing_labels': ['face'], # If you use TPU detection first, we can run this ONLY if TPU detects a face first
348 | 'save_unknown_faces':'{{save_unknown_faces}}',
349 | 'save_unknown_faces_leeway_pixels':{{save_unknown_faces_leeway_pixels}},
350 | 'face_detection_framework': '{{face_detection_framework}}',
351 | 'known_images_path': '{{known_images_path}}',
352 | 'unknown_images_path': '{{unknown_images_path}}',
353 | 'face_model': '{{face_model}}',
354 | 'face_train_model': '{{face_train_model}}',
355 | 'face_recog_dist_threshold': {{face_recog_dist_threshold}},
356 | 'face_num_jitters': {{face_num_jitters}},
357 | 'face_upsample_times':{{face_upsample_times}},
358 | 'gpu_max_processes': {{gpu_max_processes}},
359 | 'gpu_max_lock_wait': {{gpu_max_lock_wait}},
360 | 'cpu_max_processes': {{cpu_max_processes}},
361 | 'cpu_max_lock_wait': {{cpu_max_lock_wait}},
362 | 'max_size':800
363 | }]
364 | },
365 |
366 | 'alpr': {
367 | 'general':{
368 | 'same_model_sequence_strategy': 'first',
369 | 'pre_existing_labels':['car', 'motorbike', 'bus', 'truck', 'boat'],
370 | 'pattern': '{{alpr_detection_pattern}}'
371 |
372 | },
373 | 'sequence': [{
374 | 'name': 'Platerecognizer Cloud Service',
375 | 'enabled': 'yes',
376 | 'alpr_api_type': '{{alpr_api_type}}',
377 | 'alpr_service': '{{alpr_service}}',
378 | 'alpr_key': '{{alpr_key}}',
379 |                 'platerec_stats': '{{platerec_stats}}',
380 | 'platerec_min_dscore': {{platerec_min_dscore}},
381 | 'platerec_min_score': {{platerec_min_score}},
382 | 'max_size':1600,
383 | #'platerec_payload': {
384 | #'regions':['us'],
385 | #'camera_id':12,
386 | #},
387 | #'platerec_config': {
388 | # 'region':'strict',
389 | # 'mode': 'fast'
390 | #}
391 | }]
392 | }
393 | } # very important - this brace needs to be indented inside ml_sequence
394 |
395 |
--------------------------------------------------------------------------------
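The `{{token}}` placeholders used throughout `mlapiconfig.ini` are resolved at load time by a regex substitution pass (see `process_config` in `modules/utils.py`). A minimal sketch of that pass, using an illustrative `config` dict rather than the real parsed file:

```python
import re

# Illustrative config dict; the real one is built from mlapiconfig.ini.
config = {
    'my_frame_strategy': 'most_models',
    'stream_sequence': "{'frame_strategy': '{{my_frame_strategy}}'}",
}

pattern = r'{{(\w+?)}}'
for key in list(config):
    value = str(config[key])
    # Keep substituting until no known tokens remain (values may chain).
    while True:
        replaced = False
        for token in re.findall(pattern, value):
            if token in config:
                value = value.replace('{{' + token + '}}', str(config[token]))
                replaced = True
        if not replaced:
            break
    config[key] = value

print(config['stream_sequence'])  # {'frame_strategy': 'most_models'}
```

After substitution, dict-valued settings such as `stream_sequence` are parsed with `ast.literal_eval`, which is why the trailing brace must stay indented inside the option.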
/modules/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 | __pycache__
3 |
4 |
--------------------------------------------------------------------------------
/modules/__init__.py:
--------------------------------------------------------------------------------
1 | __version__ = "2.2.24"
2 | VERSION=__version__
3 |
--------------------------------------------------------------------------------
/modules/common_params.py:
--------------------------------------------------------------------------------
1 | import modules.log as g_log
2 |
3 |
4 |
5 | MAX_FILE_SIZE_MB = 5
6 | ALLOWED_EXTENSIONS = set(['.png', '.jpg', '.jpeg'])
7 | ACCESS_TOKEN_EXPIRES = 60 * 60 # 1 hr
8 |
9 | log = g_log.ConsoleLog()
10 | logger = log
11 |
12 | config = {}
13 | monitor_config = {}
14 | monitor_polygons = {}
15 | monitor_zone_patterns={}
16 | config_vals = {
17 | 'secrets':{
18 | 'section': 'general',
19 | 'default': None,
20 | 'type': 'string',
21 | },
22 | 'auth_enabled':{
23 | 'section': 'general',
24 | 'default': 'yes',
25 | 'type': 'string',
26 | },
27 | 'portal':{
28 | 'section': 'general',
29 | 'default': '',
30 | 'type': 'string',
31 | },
32 | 'api_portal':{
33 | 'section': 'general',
34 | 'default': '',
35 | 'type': 'string',
36 | },
37 | 'user':{
38 | 'section': 'general',
39 | 'default': None,
40 | 'type': 'string'
41 | },
42 | 'password':{
43 | 'section': 'general',
44 | 'default': None,
45 | 'type': 'string'
46 | },
47 | 'basic_auth_user':{
48 | 'section': 'general',
49 | 'default': None,
50 | 'type': 'string'
51 | },
52 | 'basic_auth_password':{
53 | 'section': 'general',
54 | 'default': None,
55 | 'type': 'string'
56 | },
57 | 'import_zm_zones':{
58 | 'section': 'general',
59 | 'default': 'no',
60 | 'type': 'string',
61 | },
62 | 'only_triggered_zm_zones':{
63 | 'section': 'general',
64 | 'default': 'no',
65 | 'type': 'string',
66 | },
67 | 'cpu_max_processes':{
68 | 'section': 'general',
69 | 'default': '1',
70 | 'type': 'int',
71 | },
72 | 'gpu_max_processes':{
73 | 'section': 'general',
74 | 'default': '1',
75 | 'type': 'int',
76 | },
77 | 'tpu_max_processes':{
78 | 'section': 'general',
79 | 'default': '1',
80 | 'type': 'int',
81 | },
82 |
83 | 'cpu_max_lock_wait':{
84 | 'section': 'general',
85 | 'default': '120',
86 | 'type': 'int',
87 | },
88 |
89 | 'gpu_max_lock_wait':{
90 | 'section': 'general',
91 | 'default': '120',
92 | 'type': 'int',
93 | },
94 | 'tpu_max_lock_wait':{
95 | 'section': 'general',
96 | 'default': '120',
97 | 'type': 'int',
98 | },
99 |
100 | 'processes':{
101 | 'section': 'general',
102 | 'default': '1',
103 | 'type': 'int',
104 | },
105 | 'port':{
106 | 'section': 'general',
107 | 'default': '5000',
108 | 'type': 'int',
109 | },
110 |
111 | 'wsgi_server':{
112 | 'section': 'general',
113 | 'default': 'flask',
114 | 'type': 'string',
115 | },
116 |
117 | 'images_path':{
118 | 'section': 'general',
119 | 'default': './images',
120 | 'type': 'string',
121 | },
122 | 'db_path':{
123 | 'section': 'general',
124 | 'default': './db',
125 | 'type': 'string',
126 | },
127 | 'wait': {
128 | 'section': 'general',
129 | 'default':'0',
130 | 'type': 'int'
131 | },
132 | 'mlapi_secret_key':{
133 | 'section': 'general',
134 | 'default': None,
135 | 'type': 'string',
136 | },
137 |
138 | 'max_detection_size':{
139 | 'section': 'general',
140 | 'default': '100%',
141 | 'type': 'string',
142 | },
143 | 'detection_sequence':{
144 | 'section': 'general',
145 | 'default': 'object',
146 | 'type': 'str_split'
147 | },
148 | 'detection_mode': {
149 | 'section':'general',
150 | 'default':'all',
151 | 'type':'string'
152 | },
153 | 'use_zm_logs': {
154 | 'section':'general',
155 | 'default':'no',
156 | 'type':'string'
157 | },
158 |
159 | 'pyzm_overrides': {
160 | 'section': 'general',
161 | 'default': "{}",
162 | 'type': 'dict',
163 |
164 | },
165 |
166 | 'allow_self_signed':{
167 | 'section': 'general',
168 | 'default': 'yes',
169 | 'type': 'string'
170 | },
171 |
172 | 'resize':{
173 | 'section': 'general',
174 | 'default': 'no',
175 | 'type': 'string'
176 | },
177 |
178 | 'object_framework':{
179 | 'section': 'object',
180 | 'default': 'opencv',
181 | 'type': 'string'
182 | },
183 |
184 | 'disable_locks': {
185 | 'section': 'ml',
186 | 'default': 'no',
187 | 'type': 'string'
188 | },
189 |
190 | 'use_sequence': {
191 | 'section': 'ml',
192 | 'default': 'no',
193 | 'type': 'string'
194 | },
195 | 'ml_sequence': {
196 | 'section': 'ml',
197 | 'default': None,
198 | 'type': 'string'
199 | },
200 | 'stream_sequence': {
201 | 'section': 'ml',
202 | 'default': None,
203 | 'type': 'string'
204 | },
205 |
206 |
207 |
208 | 'object_processor':{
209 | 'section': 'object',
210 | 'default': 'cpu',
211 | 'type': 'string'
212 | },
213 | 'object_config':{
214 | 'section': 'object',
215 | 'default': 'models/yolov3/yolov3.cfg',
216 | 'type': 'string'
217 | },
218 | 'object_weights':{
219 | 'section': 'object',
220 | 'default': 'models/yolov3/yolov3.weights',
221 | 'type': 'string'
222 | },
223 | 'object_labels':{
224 | 'section': 'object',
225 | 'default': 'models/yolov3/coco.names',
226 | 'type': 'string'
227 | },
228 |
229 |
230 | 'object_min_confidence': {
231 | 'section': 'object',
232 | 'default': '0.4',
233 | 'type': 'float'
234 | },
235 | 'object_detection_pattern': {
236 | 'section': 'object',
237 | 'default': '.*',
238 | 'type': 'string'
239 | },
240 | # Face
241 | 'face_detection_framework':{
242 | 'section': 'face',
243 | 'default': 'dlib',
244 | 'type': 'string'
245 | },
246 | 'face_recognition_framework':{
247 | 'section': 'face',
248 | 'default': 'dlib',
249 | 'type': 'string'
250 | },
251 | 'face_num_jitters':{
252 | 'section': 'face',
253 | 'default': '0',
254 | 'type': 'int',
255 | },
256 | 'face_upsample_times':{
257 | 'section': 'face',
258 | 'default': '1',
259 | 'type': 'int',
260 | },
261 | 'face_model':{
262 | 'section': 'face',
263 | 'default': 'hog',
264 | 'type': 'string',
265 | },
266 | 'face_train_model':{
267 | 'section': 'face',
268 | 'default': 'hog',
269 | 'type': 'string',
270 | },
271 | 'face_recog_dist_threshold': {
272 | 'section': 'face',
273 | 'default': '0.6',
274 | 'type': 'float'
275 | },
276 | 'face_recog_knn_algo': {
277 | 'section': 'face',
278 | 'default': 'ball_tree',
279 | 'type': 'string'
280 | },
281 | 'known_images_path':{
282 | 'section': 'face',
283 | 'default': './known_faces',
284 | 'type': 'string',
285 | },
286 | 'unknown_images_path':{
287 | 'section': 'face',
288 | 'default': './unknown_faces',
289 | 'type': 'string',
290 | },
291 | 'unknown_face_name':{
292 | 'section': 'face',
293 | 'default': 'unknown face',
294 | 'type': 'string',
295 | },
296 | 'save_unknown_faces':{
297 | 'section': 'face',
298 | 'default': 'yes',
299 | 'type': 'string',
300 | },
301 |
302 | 'save_unknown_faces_leeway_pixels':{
303 | 'section': 'face',
304 | 'default': '50',
305 | 'type': 'int',
306 | },
307 | 'face_detection_pattern': {
308 | 'section': 'face',
309 | 'default': '.*',
310 | 'type': 'string'
311 | },
312 | # ALPR
313 |
314 | 'alpr_service': {
315 | 'section': 'alpr',
316 | 'default': 'plate_recognizer',
317 | 'type': 'string',
318 | },
319 | 'alpr_url': {
320 | 'section': 'alpr',
321 | 'default': None,
322 | 'type': 'string',
323 | },
324 | 'alpr_key': {
325 | 'section': 'alpr',
326 | 'default': '',
327 | 'type': 'string',
328 | },
329 | 'alpr_use_after_detection_only': {
330 | 'section': 'alpr',
331 | 'type': 'string',
332 | 'default': 'yes',
333 | },
334 |
335 | 'alpr_detection_pattern':{
336 | 'section': 'general',
337 | 'default': '.*',
338 | 'type': 'string'
339 | },
340 | 'alpr_api_type':{
341 | 'section': 'alpr',
342 | 'default': 'cloud',
343 | 'type': 'string'
344 | },
345 |
346 | # Plate recognition specific
347 | 'platerec_stats':{
348 | 'section': 'alpr',
349 | 'default': 'no',
350 | 'type': 'string'
351 | },
352 |
353 |
354 | 'platerec_regions':{
355 | 'section': 'alpr',
356 | 'default': None,
357 | 'type': 'eval'
358 | },
359 | 'platerec_payload':{
360 | 'section': 'alpr',
361 | 'default': None,
362 | 'type': 'eval'
363 | },
364 | 'platerec_config':{
365 | 'section': 'alpr',
366 | 'default': None,
367 | 'type': 'eval'
368 | },
369 | 'platerec_min_dscore':{
370 | 'section': 'alpr',
371 | 'default': '0.3',
372 | 'type': 'float'
373 | },
374 |
375 | 'platerec_min_score':{
376 | 'section': 'alpr',
377 | 'default': '0.5',
378 | 'type': 'float'
379 | },
380 |
381 | # OpenALPR specific
382 | 'openalpr_recognize_vehicle':{
383 | 'section': 'alpr',
384 | 'default': '0',
385 | 'type': 'int'
386 | },
387 | 'openalpr_country':{
388 | 'section': 'alpr',
389 | 'default': 'us',
390 | 'type': 'string'
391 | },
392 | 'openalpr_state':{
393 | 'section': 'alpr',
394 | 'default': None,
395 | 'type': 'string'
396 | },
397 |
398 | 'openalpr_min_confidence': {
399 | 'section': 'alpr',
400 | 'default': '0.3',
401 | 'type': 'float'
402 | },
403 |
404 | # OpenALPR command line specific
405 |
406 | 'openalpr_cmdline_binary':{
407 | 'section': 'alpr',
408 | 'default': 'alpr',
409 | 'type': 'string'
410 | },
411 |
412 | 'openalpr_cmdline_params':{
413 | 'section': 'alpr',
414 | 'default': '-j',
415 | 'type': 'string'
416 | },
417 | 'openalpr_cmdline_min_confidence': {
418 | 'section': 'alpr',
419 | 'default': '0.3',
420 | 'type': 'float'
421 | },
422 |
423 |
424 |
425 | }
426 |
427 |
--------------------------------------------------------------------------------
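Each entry in `config_vals` pairs a section and default with a `type` tag; `process_config` in `modules/utils.py` coerces raw INI strings through that tag. A small self-contained sketch of the coercion step (`correct_type` and the tiny `schema` are illustrative stand-ins for `_correct_type` and `config_vals`):

```python
import ast

def correct_type(val, t):
    """Coerce a raw config string according to its declared type tag."""
    if t == 'int':
        return int(val)
    elif t in ('eval', 'dict'):
        return ast.literal_eval(val) if val else None
    elif t == 'str_split':
        return [x.strip() for x in val.split(',')] if val else None
    elif t == 'float':
        return float(val)
    return val  # 'string' and anything unrecognized pass through unchanged

schema = {
    'port':               {'default': '5000',        'type': 'int'},
    'detection_sequence': {'default': 'object,face', 'type': 'str_split'},
    'pyzm_overrides':     {'default': '{}',          'type': 'dict'},
}

# Fill the config with typed defaults, exactly as process_config does first.
config = {k: correct_type(v['default'], v['type']) for k, v in schema.items()}
print(config)  # {'port': 5000, 'detection_sequence': ['object', 'face'], 'pyzm_overrides': {}}
```

Values read later from the INI file (or from a `[monitor-N]` override) go through the same function, so a monitor-specific `port=5001` would also arrive as an `int`.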
/modules/db.py:
--------------------------------------------------------------------------------
1 | from tinydb import TinyDB, Query, where
2 | from passlib.hash import bcrypt
3 | import modules.common_params as g
4 | import getpass
5 |
6 | class Database:
7 |
8 | def _get_hash(self,password):
9 | return bcrypt.hash(password)
10 |
11 | def __init__(self, prompt_to_create=True):
12 | db = g.config['db_path']+'/db.json'
13 | g.log.Debug (1,'Opening DB at {}'.format(db))
14 | self.db = TinyDB(db)
15 | self.users = self.db.table('users')
16 | self.query = Query()
17 | g.log.Debug (1,'DB engine ready')
18 | if not len(self.users) and prompt_to_create:
19 | g.log.Debug (1,'Initializing default users')
20 |
21 | print ('--------------- User Creation ------------')
22 |             print ('Please configure at least one user:')
23 | while True:
24 | name = input ('user name:')
25 | if not name:
26 | print ('Error: username needed')
27 | continue
28 | p1 = getpass.getpass('Please enter password:')
29 | if not p1:
30 | print ('Error: password cannot be empty')
31 | continue
32 | p2 = getpass.getpass('Please re-enter password:')
33 | if p1 != p2:
34 | print ('Passwords do not match, please re-try')
35 | continue
36 | break
37 | self.users.insert({'name':name, 'password':self._get_hash(p1)})
38 | print ('------- User: {} created ----------------'.format(name))
39 |
40 |
41 | def check_credentials(self,user, supplied_password):
42 | user_object = self.get_user(user)
43 | if not user_object:
44 | return False # user doesn't exist
45 | stored_password_hash = user_object.get('password')
46 |
47 | if not bcrypt.verify(supplied_password, stored_password_hash):
48 | g.log.Debug (1,'Hashes do NOT match: incorrect password')
49 | return False
50 | else:
51 | g.log.Debug (1,'Hashes are correct: password matched')
52 | return True
53 |
54 |
55 | def get_all_users(self):
56 | return self.users.all()
57 |
58 | def get_user(self, user):
59 | return self.users.get(self.query.name == user)
60 |
61 | def delete_user(self,user):
62 | return self.users.remove(where('name')==user)
63 |
64 | def add_user(self, user,password):
65 | hashed_password = self._get_hash(password)
66 | return self.users.upsert({'name':user, 'password':hashed_password}, self.query.name == user)
67 |
68 |
69 |
70 |
71 |
72 |
73 |
74 |
75 |
--------------------------------------------------------------------------------
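`Database` never stores plaintext passwords: `add_user` stores a bcrypt hash and `check_credentials` verifies against it. The same store-hash/verify pattern can be sketched with only the standard library (`pbkdf2_hmac` here is a stand-in for passlib's bcrypt, and the `users` dict stands in for the TinyDB table):

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    # Stand-in for bcrypt.hash(): salted, deliberately slow key derivation.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt, digest

def check_credentials(users, name, supplied_password):
    # Mirrors Database.check_credentials(): unknown user or bad hash -> False.
    record = users.get(name)
    if not record:
        return False
    salt, stored = record
    _, candidate = hash_password(supplied_password, salt)
    return hmac.compare_digest(stored, candidate)

users = {'admin': hash_password('s3cret')}
print(check_credentials(users, 'admin', 's3cret'))   # True
print(check_credentials(users, 'admin', 'wrong'))    # False
print(check_credentials(users, 'nobody', 's3cret'))  # False
```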
/modules/log.py:
--------------------------------------------------------------------------------
1 | from datetime import datetime
2 |
3 | class Log:
4 | def __init__(self):
5 | print('Initializing log')
6 |
7 | def debug(self, message):
8 | print('DEBUG: {}'.format(message))
9 |
10 | def error(self, message):
11 | print('ERROR: {}'.format(message))
12 |
13 | def info(self, message):
14 | print('INFO: {}'.format(message))
15 |
16 | class ConsoleLog:
17 |     '''Console-based logging class, used if no logging handler is passed'''
18 | def __init__(self):
19 | self.dtformat = "%b %d %Y %H:%M:%S.%f"
20 | self.level = 5
21 |
22 | def set_level(self,level):
23 | self.level = level
24 |
25 | def get_level(self):
26 | return self.level
27 |
28 | def Debug (self,level, message, caller=None):
29 | if level <= self.level:
30 | dt = datetime.now().strftime(self.dtformat)
31 | print ('{} [DBG {}] {}'.format(dt, level, message))
32 |
33 | def Info (self,message, caller=None):
34 | dt = datetime.now().strftime(self.dtformat)
35 | print ('{} [INF] {}'.format( dt, message))
36 |
37 | def Warning (self,message, caller=None):
38 | dt = datetime.now().strftime(self.dtformat)
39 | print ('{} [WAR] {}'.format( dt, message))
40 |
41 | def Error (self,message, caller=None):
42 | dt = datetime.now().strftime(self.dtformat)
43 | print ('{} [ERR] {}'.format(dt, message))
44 |
45 | def Fatal (self,message, caller=None):
46 | dt = datetime.now().strftime(self.dtformat)
47 | print ('{} [FAT] {}'.format(dt, message))
48 | exit(-1)
49 |
50 | def Panic (self,message, caller=None):
51 | dt = datetime.now().strftime(self.dtformat)
52 | print ('{} [PNC] {}'.format(dt, message))
53 | exit(-2)
54 |
--------------------------------------------------------------------------------
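`ConsoleLog.Debug` only prints when the message level is at or below the logger's configured verbosity. A condensed, self-contained copy of that gating (the boolean return value is added here purely for illustration; the real method returns `None`):

```python
from datetime import datetime

class ConsoleLog:
    # Condensed from modules/log.py: level-gated console logging.
    def __init__(self, level=5):
        self.level = level
        self.dtformat = "%b %d %Y %H:%M:%S.%f"

    def Debug(self, level, message):
        # Emit only when the message level is within the configured verbosity.
        if level <= self.level:
            print('{} [DBG {}] {}'.format(
                datetime.now().strftime(self.dtformat), level, message))
            return True
        return False

log = ConsoleLog(level=2)
print(log.Debug(1, 'shown'))      # True  (1 <= 2)
print(log.Debug(3, 'suppressed')) # False (3 >  2)
```

This is why `g.log` can be swapped wholesale for `pyzm.ZMLog` in `process_config`: both expose the same `Debug(level, message)`-style interface.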
/modules/utils.py:
--------------------------------------------------------------------------------
1 |
2 | from configparser import ConfigParser
3 | import modules.common_params as g
4 | import requests
5 | import progressbar as pb
6 | import os
7 | import cv2
8 | import re
9 | import ast
10 | import traceback
11 | import pyzm.helpers.utils as pyzmutils
12 |
13 | g.config = {}
14 |
15 | def str2tuple(coords): # parse 'x1,y1 x2,y2 ...' into a list of (x,y) tuples
16 |     m = [tuple(map(int, x.strip().split(','))) for x in coords.split(' ')]
17 | if len(m) < 3:
18 | raise ValueError ('{} formed an invalid polygon. Needs to have at least 3 points'.format(m))
19 | else:
20 | return m
21 |
22 | # credit: https://stackoverflow.com/a/5320179
23 | def findWholeWord(w):
24 | return re.compile(r'\b({0})\b'.format(w), flags=re.IGNORECASE).search
25 |
26 | def check_and_import_zones(api):
27 |
28 | url = '{}/api/zones.json'.format(g.config.get('portal'))
29 | try:
30 | j = api._make_request(url=url, type='get')
31 | except Exception as e:
32 | g.logger.Error ('Zone API error: {}'.format(e))
33 | return
34 |
35 | for item in j.get('zones'):
36 | mid = item['Zone']['MonitorId']
37 |
38 | # if we have a 'no' inside local monitor section, don't import
39 | if mid in g.monitor_config and g.monitor_config[mid].get('import_zm_zones') == 'no':
40 | g.logger.Debug(1,'Not importing zones for monitor:{} as the monitor specific section says no'.format(mid))
41 | continue
42 | # else if global is no, and there is no local, don't import
43 | elif g.config['import_zm_zones'] == 'no' and (mid not in g.monitor_config or not g.monitor_config[mid].get('import_zm_zones')):
44 | g.logger.Debug(1,'Not importing zone:{} for monitor:{} as the global setting says no and there is no local override'.format(item['Zone']['Name'], mid))
45 | continue
46 |
47 | # At this stage, global is 'yes' and local is either unspecified or has 'yes'
48 | if not mid in g.monitor_config:
49 | g.monitor_config[mid]={}
50 | g.monitor_zone_patterns[mid] = {}
51 | g.monitor_polygons[mid] = []
52 |
53 | if item['Zone']['Type'] == 'Inactive':
54 | g.logger.Debug(1, 'Skipping {} as it is inactive'.format(item['Zone']['Name']))
55 | continue
56 |
57 | item['Zone']['Name'] = item['Zone']['Name'].replace(' ','_').lower()
58 | g.logger.Debug(2,'For monitor:{} importing zoneminder polygon: {} [{}]'.format(mid,item['Zone']['Name'], item['Zone']['Coords']))
59 | g.monitor_polygons[mid].append({
60 | 'name': item['Zone']['Name'],
61 | 'value': str2tuple(item['Zone']['Coords']),
62 | 'pattern': None
63 | })
64 |
65 | # Now copy over pending zone patterns from process_config
66 | for mid in g.monitor_polygons:
67 | for poly in g.monitor_polygons[mid]:
68 | for zone_name in g.monitor_zone_patterns[mid]:
69 | if poly['name'] == zone_name:
70 | poly['pattern'] = g.monitor_zone_patterns[mid][zone_name]
71 | g.logger.Debug(2, 'For monitor:{} replacing match pattern for polygon:{} with: {}'.format( mid,poly['name'],poly['pattern'] ))
72 |
73 | def convert_config_to_ml_sequence():
74 | ml_options={}
75 |
76 | for ds in g.config['detection_sequence']:
77 | if ds == 'object':
78 |
79 | ml_options['object'] = {
80 | 'general':{
81 | 'pattern': g.config['object_detection_pattern'],
82 | 'disable_locks': g.config['disable_locks'],
83 | 'same_model_sequence_strategy': 'first' # 'first' 'most', 'most_unique'
84 |
85 | },
86 | 'sequence': [{
87 | 'tpu_max_processes': g.config['tpu_max_processes'],
88 | 'tpu_max_lock_wait': g.config['tpu_max_lock_wait'],
89 | 'gpu_max_processes': g.config['gpu_max_processes'],
90 | 'gpu_max_lock_wait': g.config['gpu_max_lock_wait'],
91 | 'cpu_max_processes': g.config['cpu_max_processes'],
92 | 'cpu_max_lock_wait': g.config['cpu_max_lock_wait'],
93 | 'max_detection_size': g.config['max_detection_size'],
94 | 'object_config':g.config['object_config'],
95 | 'object_weights':g.config['object_weights'],
96 | 'object_labels': g.config['object_labels'],
97 | 'object_min_confidence': g.config['object_min_confidence'],
98 | 'object_framework':g.config['object_framework'],
99 | 'object_processor': g.config['object_processor'],
100 | }]
101 | }
102 | elif ds == 'face':
103 | ml_options['face'] = {
104 | 'general':{
105 | 'pattern': g.config['face_detection_pattern'],
106 | 'same_model_sequence_strategy': 'first',
107 | # 'pre_existing_labels':['person'],
108 | },
109 | 'sequence': [{
110 | 'tpu_max_processes': g.config['tpu_max_processes'],
111 | 'tpu_max_lock_wait': g.config['tpu_max_lock_wait'],
112 | 'gpu_max_processes': g.config['gpu_max_processes'],
113 | 'gpu_max_lock_wait': g.config['gpu_max_lock_wait'],
114 | 'cpu_max_processes': g.config['cpu_max_processes'],
115 | 'cpu_max_lock_wait': g.config['cpu_max_lock_wait'],
116 | 'face_detection_framework': g.config['face_detection_framework'],
117 | 'face_recognition_framework': g.config['face_recognition_framework'],
118 | 'face_processor': g.config['face_processor'],
119 | 'known_images_path': g.config['known_images_path'],
120 | 'face_model': g.config['face_model'],
121 | 'face_train_model':g.config['face_train_model'],
122 | 'unknown_images_path': g.config['unknown_images_path'],
123 | 'unknown_face_name': g.config['unknown_face_name'],
124 | 'save_unknown_faces': g.config['save_unknown_faces'],
125 | 'save_unknown_faces_leeway_pixels': g.config['save_unknown_faces_leeway_pixels'],
126 | 'face_recog_dist_threshold': g.config['face_recog_dist_threshold'],
127 | 'face_num_jitters': g.config['face_num_jitters'],
128 | 'face_upsample_times':g.config['face_upsample_times']
129 | }]
130 |
131 | }
132 | elif ds == 'alpr':
133 | ml_options['alpr'] = {
134 | 'general':{
135 | 'pattern': g.config['alpr_detection_pattern'],
136 | 'same_model_sequence_strategy': 'first',
137 | # 'pre_existing_labels':['person'],
138 | },
139 | 'sequence': [{
140 | 'tpu_max_processes': g.config['tpu_max_processes'],
141 | 'tpu_max_lock_wait': g.config['tpu_max_lock_wait'],
142 | 'gpu_max_processes': g.config['gpu_max_processes'],
143 | 'gpu_max_lock_wait': g.config['gpu_max_lock_wait'],
144 | 'cpu_max_processes': g.config['cpu_max_processes'],
145 | 'cpu_max_lock_wait': g.config['cpu_max_lock_wait'],
146 | 'alpr_service': g.config['alpr_service'],
147 | 'alpr_url': g.config['alpr_url'],
148 | 'alpr_key': g.config['alpr_key'],
149 | 'alpr_api_type': g.config['alpr_api_type'],
150 | 'platerec_stats': g.config['platerec_stats'],
151 | 'platerec_regions': g.config['platerec_regions'],
152 | 'platerec_min_dscore': g.config['platerec_min_dscore'],
153 | 'platerec_min_score': g.config['platerec_min_score'],
154 | 'openalpr_recognize_vehicle': g.config['openalpr_recognize_vehicle'],
155 | 'openalpr_country': g.config['openalpr_country'],
156 | 'openalpr_state': g.config['openalpr_state'],
157 | 'openalpr_min_confidence': g.config['openalpr_min_confidence'],
158 | 'openalpr_cmdline_binary': g.config['openalpr_cmdline_binary'],
159 | 'openalpr_cmdline_params': g.config['openalpr_cmdline_params'],
160 | 'openalpr_cmdline_min_confidence': g.config['openalpr_cmdline_min_confidence'],
161 | }]
162 |
163 | }
164 | ml_options['general'] = {
165 | 'model_sequence': ','.join(str(e) for e in g.config['detection_sequence'])
166 | #'model_sequence': 'object,face',
167 | }
168 | if g.config['detection_mode'] == 'all':
169 | g.logger.Debug(3, 'Changing detection_mode from all to most_models to adapt to new features')
170 | g.config['detection_mode'] = 'most_models'
171 | return ml_options
172 |
173 |
174 | def str_split(my_str):
175 | return [x.strip() for x in my_str.split(',')]
176 |
177 | def process_config(args):
178 | # parse config file into a dictionary with defaults
179 |
180 | g.config = {}
181 |
182 | has_secrets = False
183 | secrets_file = None
184 |
185 | def _correct_type(val,t):
186 | if t == 'int':
187 | return int(val)
188 | elif t == 'eval' or t == 'dict':
189 | return ast.literal_eval(val) if val else None
190 | elif t == 'str_split':
191 | return str_split(val) if val else None
192 | elif t == 'string':
193 | return val
194 | elif t == 'float':
195 | return float(val)
196 | else:
197 |             g.logger.Error ('Unknown conversion type {} for config value:{}'.format(t, val))
198 | return val
199 |
200 | def _set_config_val(k,v):
201 | # internal function to parse all keys
202 | val = config_file[v['section']].get(k,v['default'])
203 |
204 | if val and val[0] == '!': # its a secret token, so replace
205 |             g.logger.Debug (1,'Secret token found in config: {}'.format(val))
206 | if not has_secrets:
207 | raise ValueError('Secret token found, but no secret file specified')
208 | if secrets_file.has_option('secrets', val[1:]):
209 | vn = secrets_file.get('secrets', val[1:])
210 | #g.logger.Debug (1,'Replacing {} with {}'.format(val,vn))
211 | val = vn
212 | else:
213 | raise ValueError ('secret token {} not found in secrets file {}'.format(val,secrets_filename))
214 |
215 |
216 | g.config[k] = _correct_type(val, v['type'])
217 | if k.find('password') == -1:
218 | dval = g.config[k]
219 | else:
220 | dval = '***********'
221 | #g.logger.Debug (1,'Config: setting {} to {}'.format(k,dval))
222 |
223 | # main
224 | try:
225 | config_file = ConfigParser(interpolation=None, inline_comment_prefixes='#')
226 | config_file.read(args['config'])
227 |
228 | g.config['pyzm_overrides'] = {}
229 | if config_file.has_option('general', 'pyzm_overrides'):
230 | pyzm_overrides = config_file.get('general', 'pyzm_overrides')
231 | g.config['pyzm_overrides'] = ast.literal_eval(pyzm_overrides) if pyzm_overrides else {}
232 | if args.get('debug'):
233 | g.config['pyzm_overrides']['dump_console'] = True
234 | g.config['pyzm_overrides']['log_debug'] = True
235 | g.config['pyzm_overrides']['log_level_debug'] = 5
236 | g.config['pyzm_overrides']['log_debug_target'] = None
237 |
238 | if config_file.has_option('general', 'use_zm_logs'):
239 | use_zm_logs = config_file.get('general', 'use_zm_logs')
240 | if use_zm_logs == 'yes':
241 | try:
242 | import pyzm.ZMLog as zmlog
243 | zmlog.init(name='zm_mlapi',override=g.config['pyzm_overrides'])
244 | except Exception as e:
245 | g.logger.Error ('Not able to switch to ZM logs: {}'.format(e))
246 | else:
247 | g.log = zmlog
248 | g.logger=g.log
249 | g.logger.Info('Switched to ZM logs')
250 |
251 |
252 | g.logger.Info('Reading config from: {}'.format(args.get('config')))
253 |
254 |
255 | if config_file.has_option('general','secrets'):
256 | secrets_filename = config_file.get('general', 'secrets')
257 | g.config['secrets'] = secrets_filename
258 | g.logger.Info('Reading secrets from: {}'.format(secrets_filename))
259 | has_secrets = True
260 | secrets_file = ConfigParser(interpolation = None, inline_comment_prefixes='#')
261 | try:
262 | with open(secrets_filename) as f:
263 | secrets_file.read_file(f)
264 | except:
265 | raise
266 | else:
267 | g.logger.Debug (1,'No secrets file configured')
268 | # now read config values
269 |
270 | g.polygons = []
271 | # first, fill in config with default values
272 | for k,v in g.config_vals.items():
273 | val = v.get('default', None)
274 | g.config[k] = _correct_type(val, v['type'])
275 | #print ('{}={}'.format(k,g.config[k]))
276 |
277 |
278 | # now iterate the file
279 | for sec in config_file.sections():
280 | if sec == 'secrets':
281 | continue
282 |
283 | # Move monitor specific stuff to a different structure
284 | if sec.lower().startswith('monitor-'):
285 | ts = sec.split('-')
286 | if len(ts) != 2:
287 |                     g.logger.Error('Skipping section:{} - could not derive monitor name. Expecting monitor-NUM format'.format(sec))
288 | continue
289 |
290 | mid=ts[1]
291 | g.logger.Debug (2,'Found monitor specific section for monitor: {}'.format(mid))
292 |
293 | g.monitor_polygons[mid] = []
294 | g.monitor_config[mid] = {}
295 | g.monitor_zone_patterns[mid] = {}
296 | # Copy the sequence into each monitor because when we do variable subs
297 | # later, we will use this for monitor specific work
298 | try:
299 | ml = config_file.get('ml', 'ml_sequence')
300 | g.monitor_config[mid]['ml_sequence']=ml
301 | except:
302 | g.logger.Debug (2, 'ml sequence not found in globals')
303 |
304 | try:
305 | ss = config_file.get('ml', 'stream_sequence')
306 | g.monitor_config[mid]['stream_sequence']=ss
307 | except:
308 | g.logger.Debug (2, 'stream sequence not found in globals')
309 |
310 | for item in config_file[sec].items():
311 | k = item[0]
312 | v = item[1]
313 | if k.endswith('_zone_detection_pattern'):
314 | zone_name = k.split('_zone_detection_pattern')[0]
315 | g.logger.Debug(2, 'found zone specific pattern:{} storing'.format(zone_name))
316 | g.monitor_zone_patterns[mid][zone_name] = v
317 | continue
318 | else:
319 | if k in g.config_vals:
320 |                     # This means it's a legitimate config key that needs to be overridden
321 | g.logger.Debug(2,'[{}] overrides key:{} with value:{}'.format(sec, k, v))
322 | g.monitor_config[mid][k]=_correct_type(v,g.config_vals[k]['type'])
323 | # g.monitor_config[mid].append({ 'key':k, 'value':_correct_type(v,g.config_vals[k]['type'])})
324 | else:
325 | if k.startswith(('object_','face_', 'alpr_')):
326 | g.logger.Debug(2,'assuming {} is an ML sequence'.format(k))
327 | g.monitor_config[mid][k] = v
328 | else:
329 | try:
330 | p = str2tuple(v) # if not poly, exception will be thrown
331 | g.monitor_polygons[mid].append({'name': k, 'value': p,'pattern': None})
332 | g.logger.Debug(2,'adding polygon: {} [{}]'.format(k, v ))
333 | except Exception as e:
334 | g.logger.Debug(2,'{} is not a polygon, adding it as unknown string key'.format(k))
335 | g.monitor_config[mid][k]=v
336 |
337 |
338 | # TBD only_triggered_zones
339 |
340 | # Not monitor specific stuff
341 | else:
342 | for (k, v) in config_file.items(sec):
343 | if k in g.config_vals:
344 | _set_config_val(k,g.config_vals[k] )
345 | else:
346 | g.config[k] = v
347 |
348 |
349 |
350 |
351 | # Parameter substitution
352 |
353 | g.logger.Debug (2,'Doing parameter substitution for globals')
354 | p = r'{{(\w+?)}}'
355 | for gk, gv in g.config.items():
356 | #input ('Continue')
357 | gv = '{}'.format(gv)
358 | #if not isinstance(gv, str):
359 | # continue
360 | while True:
361 | matches = re.findall(p,gv)
362 | replaced = False
363 | for match_key in matches:
364 | if match_key in g.config:
365 | replaced = True
366 |                 new_val = gv.replace('{{' + match_key + '}}',str(g.config[match_key]))
367 | g.config[gk] = new_val
368 | gv = new_val
369 | else:
370 | g.logger.Debug(2, 'substitution key: {} not found'.format(match_key))
371 | if not replaced:
372 | break
373 |
374 | g.logger.Debug (2,'Doing parameter substitution for monitor specific entities')
375 | p = r'{{(\w+?)}}'
376 | for mid in g.monitor_config:
377 | for key in g.monitor_config[mid]:
378 | #input ('Continue')
379 | gk = key
380 | gv = g.monitor_config[mid][key]
381 | gv = '{}'.format(gv)
382 | #if not isinstance(gv, str):
383 | # continue
384 | while True:
385 | matches = re.findall(p,gv)
386 | replaced = False
387 | for match_key in matches:
388 | if match_key in g.monitor_config[mid]:
389 | replaced = True
390 | new_val =gv.replace('{{' + match_key + '}}',str(g.monitor_config[mid][match_key]))
391 | gv = new_val
392 | g.monitor_config[mid][key] = gv
393 | elif match_key in g.config:
394 | replaced = True
395 | new_val =gv.replace('{{' + match_key + '}}',str(g.config[match_key]))
396 | gv = new_val
397 | g.monitor_config[mid][key] = gv
398 | else:
399 | g.logger.Debug(2, 'substitution key: {} not found'.format(match_key))
400 | if not replaced:
401 | break
402 |
403 | secrets = pyzmutils.read_config(g.config['secrets'])
404 | #g.monitor_config[mid]['ml_sequence'] = pyzmutils.template_fill(input_str=g.monitor_config[mid]['ml_sequence'], config=None, secrets=secrets._sections.get('secrets'))
405 | #g.monitor_config[mid]['ml_sequence'] = ast.literal_eval(g.monitor_config[mid]['ml_sequence'])
406 |
407 | #g.monitor_config[mid]['stream_sequence'] = pyzmutils.template_fill(input_str=g.monitor_config[mid]['stream_sequence'], config=None, secrets=secrets._sections.get('secrets'))
408 | #g.monitor_config[mid]['stream_sequence'] = ast.literal_eval(g.monitor_config[mid]['stream_sequence'])
409 |
410 |
411 | #print ("GLOBALS={}".format(g.config))
412 | #print ("\n\nMID_SPECIFIC={}".format(g.monitor_config))
413 | #print ("\n\nMID POLYPATTERNS={}".format(g.monitor_polypatterns))
414 | #print ('FINAL POLYS={}'.format(g.monitor_polygons))
415 | #exit(0)
416 | except Exception as e:
417 | g.logger.Error('Error parsing config:{}'.format(args['config']))
418 | g.logger.Error('Error was:{}'.format(e))
419 | g.logger.Fatal('error: Traceback:{}'.format(traceback.format_exc()))
420 |         exit(1)  # non-zero exit status on fatal config errors
421 |
422 |
423 |
424 |
425 |
426 | def draw_bbox(img, bbox, labels, classes, confidence, color=None, write_conf=True):
427 |
428 | # g.logger.Debug (1,"DRAW BBOX={} LAB={}".format(bbox,labels))
429 | slate_colors = [
430 | (39, 174, 96),
431 | (142, 68, 173),
432 | (0,129,254),
433 | (254,60,113),
434 | (243,134,48),
435 | (91,177,47)
436 | ]
437 |
438 |     arr_len = len(slate_colors)
439 |     for i, label in enumerate(labels):
440 |         #g.logger.Debug (1,'drawing box for: {}'.format(label))
441 |         color = slate_colors[i % arr_len]
442 | if write_conf and confidence:
443 | label += ' ' + str(format(confidence[i] * 100, '.2f')) + '%'
444 |
445 | cv2.rectangle(img, (bbox[i][0], bbox[i][1]), (bbox[i][2], bbox[i][3]), color, 2)
446 |
447 | # write text
448 | font_scale = 0.8
449 | font_type = cv2.FONT_HERSHEY_SIMPLEX
450 | font_thickness = 1
451 | #cv2.getTextSize(text, font, font_scale, thickness)
452 | text_size = cv2.getTextSize(label, font_type, font_scale , font_thickness)[0]
453 | text_width_padded = text_size[0] + 4
454 | text_height_padded = text_size[1] + 4
455 |
456 | r_top_left = (bbox[i][0], bbox[i][1] - text_height_padded)
457 | r_bottom_right = (bbox[i][0] + text_width_padded, bbox[i][1])
458 |         cv2.rectangle(img, r_top_left, r_bottom_right, color, -1)  # filled label background
459 | cv2.putText(img, label, (bbox[i][0] + 2, bbox[i][1] - 2), font_type, font_scale, [255, 255, 255], font_thickness)
460 |
461 | return img
462 |
--------------------------------------------------------------------------------
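The `{{key}}` parameter substitution loops in mlapi.py above can be exercised in isolation. Below is a minimal standalone sketch of the same expand-until-stable idea; the `substitute` helper and the sample dict are illustrative, not part of mlapi (the real code operates on `g.config` and `g.monitor_config`):

```python
import re

def substitute(config):
    # Repeatedly expand {{key}} references against the same dict until no
    # replacement happens, mirroring the substitution loop in mlapi.py.
    pattern = r'{{(\w+?)}}'
    for key in config:
        val = str(config[key])
        while True:
            replaced = False
            for match_key in re.findall(pattern, val):
                if match_key in config:
                    replaced = True
                    val = val.replace('{{' + match_key + '}}', str(config[match_key]))
            if not replaced:
                break
        config[key] = val
    return config

cfg = substitute({'base_path': '/var/lib/zmeventnotification',
                  'model_dir': '{{base_path}}/models'})
print(cfg['model_dir'])  # /var/lib/zmeventnotification/models
```

Note that, as in the original, a self-referential key (e.g. `a={{a}}`) would loop forever; the code relies on configs not doing that.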
/requirements.txt:
--------------------------------------------------------------------------------
1 | requests
2 | passlib
3 | bcrypt
4 | Werkzeug
5 | tinydb
6 | imutils
7 | numpy
8 | flask_jwt_extended
9 | flask_restful
10 | flask
11 | progressbar33
12 | scikit_learn
13 | face_recognition
14 | pyzm>=0.3.56
15 | bjoern
16 | configupdater
17 | markupsafe
18 |
--------------------------------------------------------------------------------
/secrets.ini:
--------------------------------------------------------------------------------
1 | [secrets]
2 | ZM_USER=someuser
3 | ZM_PASSWORD=somepassword
4 | ZM_PORTAL=https://server/zm
5 | ZM_API_PORTAL=https://server/zm/api
6 | MLAPI_SECRET_KEY=change_to_some_secret_key
7 | PLATEREC_ALPR_KEY=something
8 | OPENALPR_ALPR_KEY=something
9 |
10 |
--------------------------------------------------------------------------------
/tools/config_edit.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 |
3 | # argparse k,v handling: https://stackoverflow.com/a/52014520/1361529
4 | import sys
5 | from configupdater import ConfigUpdater
6 | import argparse
7 | import logging
8 |
9 | def parse_var(s):
10 | items = s.split('=',1)
11 | sk = items[0].split(':',1)
12 | if len(sk) == 1:
13 | key = sk[0]
14 | section = '_global_'
15 | else:
16 | key=sk[1]
17 | section=sk[0]
18 |
19 |     key = key.strip() # remove blanks around the key
20 |     section = section.strip() # remove blanks around the section name
21 |
22 |     # split('=', 1) yields at most 2 items; with no '=', use an empty value
23 |     value = items[1] if len(items) > 1 else ''
24 |     return (section, key, value)
26 |
27 |
28 | def parse_vars(items):
29 | d = {}
30 | if items:
31 | for item in items:
32 | section, key, value = parse_var(item)
33 | #logger.debug ('Updating section:{} key:{} to value:{}'.format(section,key,value))
34 | if not d.get(section):
35 | d[section]={}
36 | d[section][key] = value
37 | return d
38 |
39 |
40 | # main
41 | logger = logging.getLogger()
42 | handler = logging.StreamHandler()
43 | formatter = logging.Formatter('[%(asctime)s] [%(filename)s:%(lineno)d] %(levelname)s - %(message)s','%m-%d %H:%M:%S')
44 |
45 | handler.setFormatter(formatter)
46 | logger.addHandler(handler)
47 |
48 |
49 | ap = argparse.ArgumentParser(
50 | description='config editing script',
51 | epilog='''
52 | Example:
53 | %(prog)s --config /etc/zm/zmeventnotification.ini --set network:address=comment_out general:restart_interval=60 network:port=9999 general:base_data_path='/my new/path with/spaces'
54 |
55 | '''
56 | )
57 | ap.add_argument('-c', '--config', help='input ini file with path', required=True)
58 | ap.add_argument('-o', '--output', help='output file with path')
59 | ap.add_argument('--nologs', action='store_true', help='disable logs')
60 | ap.add_argument('--set',
61 | metavar='[SECTION:]KEY=VALUE',
62 | nargs='+',
63 | help='''
64 | Set a number of key-value pairs.
65 | (do not put spaces before or after the = sign).
66 | If a value contains spaces, you should define
67 | it witin quotes. If you omit the SECTION:, all keys in all
68 | sections that match your key will be updated.
69 | If you do specify a section, remember to add the : after it.
70 | Finally, use the special keyword 'comment_out' if you want
71 | to comment out a key. There is no way to 'uncomment': once it is
72 | a comment, it won't be found as a key.
73 | ''')
74 |
75 | args, u = ap.parse_known_args()
76 | args = vars(args)
77 |
78 | if args.get('nologs'):
79 | logger.setLevel(logging.CRITICAL + 1)
80 | else:
81 | logger.setLevel(logging.DEBUG)
82 |
83 | values = parse_vars(args['set'])
84 |
85 |
86 | input_file = args['config']
87 | updater = ConfigUpdater(space_around_delimiters=False)
88 | updater.read(input_file)
89 |
90 |
91 | for sec in values:
92 | if sec == '_global_':
93 | continue
94 | for key in values[sec]:
95 | if values[sec][key]=='comment_out' and updater[sec].get(key):
96 | logger.debug ('commenting out [{}]->{}={}'.format(sec,key,updater[sec][key].value))
97 | updater[sec][key].key = '#{}'.format(key)
98 |
99 | else:
100 | logger.debug ('setting [{}]->{}={}'.format(sec,key,values[sec][key]))
101 | updater[sec][key] = values[sec][key]
102 |
103 | if values.get('_global_'):
104 | for key in values.get('_global_'):
105 | for secname in updater.sections():
106 | if updater.has_option(secname,key):
107 | if values['_global_'][key]=='comment_out' and updater[secname].get(key):
108 | logger.debug ('commenting out [{}]->{}={}'.format(secname,key,updater[secname][key].value))
109 | updater[secname][key].key = '#{}'.format(key)
110 | else:
111 | updater[secname][key] = values['_global_'][key]
112 | logger.debug ('{} found in [{}] setting to {}'.format(key,secname,values['_global_'][key]))
113 |
114 |
115 |
116 | output_file_handle = open(args['output'],'w') if args.get('output') else sys.stdout
117 | updater.write(output_file_handle)
118 | if output_file_handle is not sys.stdout:
119 | output_file_handle.close()
120 |
121 |
--------------------------------------------------------------------------------
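The `[SECTION:]KEY=VALUE` splitting done by `parse_var` in tools/config_edit.py above can also be sketched with `str.partition`. This standalone version is illustrative only (not a drop-in replacement), but shows the intended behavior, including the `_global_` fallback when no section is given:

```python
def parse_var(s):
    # Split "SECTION:KEY=VALUE" (section optional) into (section, key, value),
    # mirroring the parsing logic in tools/config_edit.py.
    key_part, _, value = s.partition('=')
    section, sep, key = key_part.partition(':')
    if not sep:
        # no "SECTION:" prefix; applies the key across all sections
        section, key = '_global_', key_part
    return section.strip(), key.strip(), value

print(parse_var('network:port=9999'))    # ('network', 'port', '9999')
print(parse_var('restart_interval=60'))  # ('_global_', 'restart_interval', '60')
```

Because `partition('=')` only splits on the first `=`, values containing `=` signs survive intact, matching the `'='.join(items[1:])` rejoin in the original.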
/unknown_faces/README.md:
--------------------------------------------------------------------------------
1 | This is where MLAPI will store unknown faces.
2 |
3 |
--------------------------------------------------------------------------------