├── .gitignore ├── LICENSE ├── README.md ├── Sys_flowchart.jpg ├── cfg ├── IWR1843_2D.cfg ├── IWR1843_3D_10fps_15db.cfg ├── IWR1843_3D_10fps_9db.cfg ├── IWR1843_3D_20fps_15db.cfg ├── IWR1843_3D_20fps_15db_2.cfg ├── IWR1843_3D_20fps_9db.cfg ├── IWR1843_3D_5fps_9db.cfg ├── config_demo.py └── config_demo_3R.py ├── library ├── DBSCAN_generator.py ├── TI │ ├── mmw_demo_example_script.py │ └── parser_mmw_demo.py ├── __init__.py ├── bgnoise_filter.py ├── camera.py ├── data_processor.py ├── email_notifier.py ├── frame_early_processor.py ├── frame_post_processor.py ├── human_object.py ├── human_tracking.py ├── radar_reader.py ├── save_center.py ├── sync_monitor.py ├── utils.py ├── video_compressor.py └── visualizer.py ├── main.py ├── requirements.txt └── tools ├── data_load.py ├── draw_path.py ├── png_to_mp4.py ├── rawdata_load.py └── rename_files.py /.gitignore: -------------------------------------------------------------------------------- 1 | # ignore __pycache__ folder 2 | *__pycache__/ 3 | 4 | # ignore .idea folder 5 | *.idea/ 6 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 DarkSZChao 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Millimeter-Wave Radar-Based Multi-Human Tracking and Fall Detection System 2 | 3 | ## Overview 4 | This repository contains the implementation and code resources for our paper: **Advanced Millimeter-Wave Radar System for Real-Time Multiple-Human Tracking and Fall Detection** (link: https://doi.org/10.3390/s24113660). The study explores an indoor system that employs Millimeter-Wave radars to track multiple humans and detect falls in real time. By integrating signals from non-intrusive radars, our framework addresses challenges such as mobility inconvenience, lighting conditions, and privacy issues inherent in wearable or camera-based systems. 5 | 6 | ### Key Features 7 | - **Multi-Human Tracking**: Tracks multiple humans simultaneously with high precision. 8 | - **Real-Time Fall Detection**: Accurately predicts and classifies human body statuses, including falls. 
9 | - **Advanced Signal Processing**: Employs Dynamic DBSCAN clustering and innovative feedback loops for enhanced accuracy. 10 | - **Privacy and Accessibility**: Operates without cameras or wearables, ensuring non-intrusive monitoring. The camera module in this project is used only for ground-truth collection. 11 | 12 | --- 13 | 14 | ## Table of Contents 15 | 1. [System Architecture](#system-architecture) 16 | 2. [Installation and Usage](#installation-and-usage) 17 | 3. [License](#license) 18 | 19 | --- 20 | 21 | ## System Architecture 22 | ### Components 23 | - **Radar Hardware**: three **Millimeter-Wave radars** (Texas Instruments IWR1843). 24 | - **Real-Time Framework**: Integrates radar signals to track and classify human activity. 25 | 26 | ### Workflow 27 | ![System Flowchart Diagram](Sys_flowchart.jpg) 28 | 29 | --- 30 | 31 | ## Installation and Usage 32 | ### Prerequisites 33 | - Python 3.8 or higher 34 | - Libraries: `numpy`, `Send2Trash`, `scipy`, `pyserial`, `matplotlib`, `scikit-learn`, `opencv-python`, `google-api-python-client`, `google-auth-oauthlib`, `func-timeout`, `moviepy` 35 | 36 | ### Steps 37 | 1. Clone this repository: 38 | ```bash 39 | git clone https://github.com/DarkSZChao/MMWave_Radar_Human_Tracking_and_Fall_detection.git 40 | cd MMWave_Radar_Human_Tracking_and_Fall_detection 41 | ``` 42 | 2. Install dependencies: 43 | ```bash 44 | pip install -r requirements.txt 45 | ``` 46 | 3. Connect the radars. 47 | 48 | 4. Check which COM ports the radars are using (see the port-listing sketch in the appendix at the end of this README). 49 | 50 | 5. Configure the parameters: 51 | ```bash 52 | cd cfg 53 | ``` 54 | Open **config_demo.py** and, under the **RADAR_CFG_LIST** parameter, update **cfg_port_name** and **data_port_name** for each radar. 55 | 56 | 6. Go back to the root folder and start the system by running **main.py**. 57 | 58 | --- 59 | 60 | ## License 61 | This project is licensed under the MIT License. 62 | 63 | You are free to use, modify, and distribute this project, provided that you include the original copyright and license notice in any copy of the project or substantial portions of it. 64 | 65 | See the [LICENSE](LICENSE) file for more details.
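### Appendix: Finding the radar COM ports

A minimal sketch for step 4, assuming only that `pyserial` (already listed in the prerequisites) is installed; the script name is hypothetical and not part of this repository. On the IWR1843 EVM the two ports typically enumerate as an XDS110 "Application/User UART" (configuration port) and an XDS110 "Auxiliary Data Port" (data port), but verify this against your own port listing or Device Manager.

```python
# list_radar_ports.py -- hypothetical helper, not part of this repo.
# Lists every serial port on the machine so the radar CFG/DATA ports
# can be identified by their descriptions.
from serial.tools import list_ports

for port in list_ports.comports():
    # port.device is e.g. 'COM3' on Windows or '/dev/ttyACM0' on Linux
    print(f'{port.device}: {port.description}')
```

The same listing is also available via `python -m serial.tools.list_ports`. Copy the two port names for each radar into **cfg_port_name** and **data_port_name** in **config_demo.py**.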
66 | 67 | -------------------------------------------------------------------------------- /Sys_flowchart.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DarkSZChao/MMWave_Radar_Human_Tracking_and_Fall_detection/88a055908d498e029ec3852a1c50e0de27f1a93a/Sys_flowchart.jpg -------------------------------------------------------------------------------- /cfg/IWR1843_2D.cfg: -------------------------------------------------------------------------------- 1 | % *************************************************************** 2 | % Created for SDK ver:03.05 3 | % Created using Visualizer ver:3.5.0.0 4 | % Frequency:77 5 | % Platform:xWR18xx 6 | % Scene Classifier:best_range_res 7 | % Azimuth Resolution(deg):15 8 | % Range Resolution(m):0.044 9 | % Maximum unambiguous Range(m):5.2 10 | % Maximum Radial Velocity(m/s):1 11 | % Radial velocity resolution(m/s):0.13 12 | % Frame Duration(msec):33.333 13 | % RF calibration data:None 14 | % Range Detection Threshold (dB):15 15 | % Doppler Detection Threshold (dB):15 16 | % Range Peak Grouping:enabled 17 | % Doppler Peak Grouping:enabled 18 | % Static clutter removal:disabled 19 | % Angle of Arrival FoV: Full FoV 20 | % Range FoV: Full FoV 21 | % Doppler FoV: Full FoV 22 | % *************************************************************** 23 | sensorStop 24 | flushCfg 25 | dfeDataOutputMode 1 26 | channelCfg 15 5 0 27 | adcCfg 2 1 28 | adcbufCfg -1 0 1 1 1 29 | profileCfg 0 77 429 7 57.14 0 0 70 1 144 2930 0 0 30 30 | chirpCfg 0 0 0 0 0 0 0 1 31 | chirpCfg 1 1 0 0 0 0 0 4 32 | frameCfg 0 1 16 0 33.333 1 0 33 | lowPower 0 0 34 | guiMonitor -1 1 0 0 0 0 0 35 | cfarCfg -1 0 2 8 4 3 0 15 1 36 | cfarCfg -1 1 0 4 2 3 1 15 1 37 | multiObjBeamForming -1 1 0.5 38 | clutterRemoval -1 0 39 | calibDcRangeSig -1 0 -5 8 256 40 | extendedMaxVelocity -1 0 41 | lvdsStreamCfg -1 0 0 0 42 | compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 43 | measureRangeBiasAndRxChanPhase 0 1.5 0.2 44 | CQRxSatMonitor 0 3 5 121 0 45 | CQSigImgMonitor 0 71 4 46 | analogMonitor 0 0 47 | aoaFovCfg -1 -90 90 -90 90 48 | cfarFovCfg -1 0 0 5.02 49 | cfarFovCfg -1 1 -1 1.00 50 | calibData 0 0 0 51 | sensorStart 52 | -------------------------------------------------------------------------------- /cfg/IWR1843_3D_10fps_15db.cfg: -------------------------------------------------------------------------------- 1 | % *************************************************************** 2 | % Created for SDK ver:03.05 3 | % Created using Visualizer ver:3.5.0.0 4 | % Frequency:77 5 | % Platform:xWR18xx 6 | % Scene Classifier:best_range_res 7 | % Azimuth Resolution(deg):15 + Elevation 8 | % Range Resolution(m):0.044 9 | % Maximum unambiguous Range(m):4.99 10 | % Maximum Radial Velocity(m/s):1 11 | % Radial velocity resolution(m/s):0.13 12 | % Frame Duration(msec):100 13 | % RF calibration data:None 14 | % Range Detection Threshold (dB):15 15 | % Doppler Detection Threshold (dB):15 16 | % Range Peak Grouping:enabled 17 | % Doppler Peak Grouping:enabled 18 | % Static clutter removal:disabled 19 | % Angle of Arrival FoV: Full FoV 20 | % Range FoV: Full FoV 21 | % Doppler FoV: Full FoV 22 | % *************************************************************** 23 | sensorStop 24 | flushCfg 25 | dfeDataOutputMode 1 26 | channelCfg 15 7 0 27 | adcCfg 2 1 28 | adcbufCfg -1 0 1 1 1 29 | profileCfg 0 77 267 7 57.14 0 0 70 1 128 2604 0 0 30 30 | chirpCfg 0 0 0 0 0 0 0 1 31 | chirpCfg 1 1 0 0 0 0 0 4 32 | chirpCfg 2 2 0 0 0 0 
0 2 33 | frameCfg 0 2 16 0 100 1 0 34 | lowPower 0 0 35 | guiMonitor -1 1 0 0 0 0 0 36 | cfarCfg -1 0 2 8 4 3 0 15 1 37 | cfarCfg -1 1 0 4 2 3 1 15 1 38 | multiObjBeamForming -1 1 0.5 39 | clutterRemoval -1 0 40 | calibDcRangeSig -1 0 -5 8 256 41 | extendedMaxVelocity -1 0 42 | lvdsStreamCfg -1 0 0 0 43 | compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 44 | measureRangeBiasAndRxChanPhase 0 1.5 0.2 45 | CQRxSatMonitor 0 3 5 121 0 46 | CQSigImgMonitor 0 63 4 47 | analogMonitor 0 0 48 | aoaFovCfg -1 -90 90 -90 90 49 | cfarFovCfg -1 0 0 5 50 | cfarFovCfg -1 1 -1 1 51 | calibData 0 0 0 52 | sensorStart 53 | -------------------------------------------------------------------------------- /cfg/IWR1843_3D_10fps_9db.cfg: -------------------------------------------------------------------------------- 1 | % *************************************************************** 2 | % Created for SDK ver:03.05 3 | % Created using Visualizer ver:3.5.0.0 4 | % Frequency:77 5 | % Platform:xWR18xx 6 | % Scene Classifier:best_range_res 7 | % Azimuth Resolution(deg):15 + Elevation 8 | % Range Resolution(m):0.044 9 | % Maximum unambiguous Range(m):4.99 10 | % Maximum Radial Velocity(m/s):1 11 | % Radial velocity resolution(m/s):0.13 12 | % Frame Duration(msec):100 13 | % RF calibration data:None 14 | % Range Detection Threshold (dB):9 15 | % Doppler Detection Threshold (dB):9 16 | % Range Peak Grouping:enabled 17 | % Doppler Peak Grouping:enabled 18 | % Static clutter removal:disabled 19 | % Angle of Arrival FoV: Full FoV 20 | % Range FoV: Full FoV 21 | % Doppler FoV: Full FoV 22 | % *************************************************************** 23 | sensorStop 24 | flushCfg 25 | dfeDataOutputMode 1 26 | channelCfg 15 7 0 27 | adcCfg 2 1 28 | adcbufCfg -1 0 1 1 1 29 | profileCfg 0 77 267 7 57.14 0 0 70 1 128 2604 0 0 30 30 | chirpCfg 0 0 0 0 0 0 0 1 31 | chirpCfg 1 1 0 0 0 0 0 4 32 | chirpCfg 2 2 0 0 0 0 0 2 33 | frameCfg 0 2 16 0 100 1 0 34 | lowPower 0 0 35 | guiMonitor -1 1 0 0 0 0 0 36 | cfarCfg -1 0 2 8 4 3 0 9 1 37 | cfarCfg -1 1 0 4 2 3 1 9 1 38 | multiObjBeamForming -1 1 0.5 39 | clutterRemoval -1 0 40 | calibDcRangeSig -1 0 -5 8 256 41 | extendedMaxVelocity -1 0 42 | lvdsStreamCfg -1 0 0 0 43 | compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 44 | measureRangeBiasAndRxChanPhase 0 1.5 0.2 45 | CQRxSatMonitor 0 3 5 121 0 46 | CQSigImgMonitor 0 63 4 47 | analogMonitor 0 0 48 | aoaFovCfg -1 -90 90 -90 90 49 | cfarFovCfg -1 0 0 5 50 | cfarFovCfg -1 1 -1 1 51 | calibData 0 0 0 52 | sensorStart 53 | -------------------------------------------------------------------------------- /cfg/IWR1843_3D_20fps_15db.cfg: -------------------------------------------------------------------------------- 1 | % *************************************************************** 2 | % Created for SDK ver:03.05 3 | % Created using Visualizer ver:3.5.0.0 4 | % Frequency:77 5 | % Platform:xWR18xx 6 | % Scene Classifier:best_range_res 7 | % Azimuth Resolution(deg):15 + Elevation 8 | % Range Resolution(m):0.044 9 | % Maximum unambiguous Range(m):4.99 10 | % Maximum Radial Velocity(m/s):1 11 | % Radial velocity resolution(m/s):0.13 12 | % Frame Duration(msec):50 13 | % RF calibration data:None 14 | % Range Detection Threshold (dB):15 15 | % Doppler Detection Threshold (dB):15 16 | % Range Peak Grouping:enabled 17 | % Doppler Peak Grouping:enabled 18 | % Static clutter removal:disabled 19 | % Angle of Arrival FoV: Full FoV 20 | % Range FoV: Full FoV 21 | % Doppler FoV: Full FoV 
22 | % *************************************************************** 23 | sensorStop 24 | flushCfg 25 | dfeDataOutputMode 1 26 | channelCfg 15 7 0 27 | adcCfg 2 1 28 | adcbufCfg -1 0 1 1 1 29 | profileCfg 0 77 267 7 57.14 0 0 70 1 128 2604 0 0 30 30 | chirpCfg 0 0 0 0 0 0 0 1 31 | chirpCfg 1 1 0 0 0 0 0 4 32 | chirpCfg 2 2 0 0 0 0 0 2 33 | frameCfg 0 2 16 0 50 1 0 34 | lowPower 0 0 35 | guiMonitor -1 1 0 0 0 0 0 36 | cfarCfg -1 0 2 8 4 3 0 15 1 37 | cfarCfg -1 1 0 4 2 3 1 15 1 38 | multiObjBeamForming -1 1 0.5 39 | clutterRemoval -1 0 40 | calibDcRangeSig -1 0 -5 8 256 41 | extendedMaxVelocity -1 0 42 | lvdsStreamCfg -1 0 0 0 43 | compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 44 | measureRangeBiasAndRxChanPhase 0 1.5 0.2 45 | CQRxSatMonitor 0 3 5 121 0 46 | CQSigImgMonitor 0 63 4 47 | analogMonitor 0 0 48 | aoaFovCfg -1 -90 90 -90 90 49 | cfarFovCfg -1 0 0 5 50 | cfarFovCfg -1 1 -1 1 51 | calibData 0 0 0 52 | sensorStart 53 | -------------------------------------------------------------------------------- /cfg/IWR1843_3D_20fps_15db_2.cfg: -------------------------------------------------------------------------------- 1 | % *************************************************************** 2 | % Created for SDK ver:03.05 3 | % Created using Visualizer ver:3.5.0.0 4 | % Frequency:77 5 | % Platform:xWR18xx 6 | % Scene Classifier:best_range_res 7 | % Azimuth Resolution(deg):15 + Elevation 8 | % Range Resolution(m):0.044 9 | % Maximum unambiguous Range(m):5.01 10 | % Maximum Radial Velocity(m/s):2 11 | % Radial velocity resolution(m/s):0.26 12 | % Frame Duration(msec):50 13 | % RF calibration data:None 14 | % Range Detection Threshold (dB):15 15 | % Doppler Detection Threshold (dB):15 16 | % Range Peak Grouping:enabled 17 | % Doppler Peak Grouping:enabled 18 | % Static clutter removal:disabled 19 | % Angle of Arrival FoV: Full FoV 20 | % Range FoV: Full FoV 21 | % Doppler FoV: Full FoV 22 | % *************************************************************** 23 | sensorStop 24 | flushCfg 25 | dfeDataOutputMode 1 26 | channelCfg 15 7 0 27 | adcCfg 2 1 28 | adcbufCfg -1 0 1 1 1 29 | profileCfg 0 77 105 7 57.14 0 0 70 1 128 2604 0 0 30 30 | chirpCfg 0 0 0 0 0 0 0 1 31 | chirpCfg 1 1 0 0 0 0 0 4 32 | chirpCfg 2 2 0 0 0 0 0 2 33 | frameCfg 0 2 32 0 50 1 0 34 | lowPower 0 0 35 | guiMonitor -1 1 1 0 0 0 1 36 | cfarCfg -1 0 2 8 4 3 0 15 1 37 | cfarCfg -1 1 0 4 2 3 1 15 1 38 | multiObjBeamForming -1 1 0.5 39 | clutterRemoval -1 0 40 | calibDcRangeSig -1 0 -5 8 256 41 | extendedMaxVelocity -1 0 42 | lvdsStreamCfg -1 0 0 0 43 | compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 44 | measureRangeBiasAndRxChanPhase 0 1.5 0.2 45 | CQRxSatMonitor 0 3 5 121 0 46 | CQSigImgMonitor 0 63 4 47 | analogMonitor 0 0 48 | aoaFovCfg -1 -90 90 -90 90 49 | cfarFovCfg -1 0 0 5.01 50 | cfarFovCfg -1 1 -2 2.00 51 | calibData 0 0 0 52 | sensorStart 53 | -------------------------------------------------------------------------------- /cfg/IWR1843_3D_20fps_9db.cfg: -------------------------------------------------------------------------------- 1 | % *************************************************************** 2 | % Created for SDK ver:03.05 3 | % Created using Visualizer ver:3.5.0.0 4 | % Frequency:77 5 | % Platform:xWR18xx 6 | % Scene Classifier:best_range_res 7 | % Azimuth Resolution(deg):15 + Elevation 8 | % Range Resolution(m):0.044 9 | % Maximum unambiguous Range(m):4.99 10 | % Maximum Radial Velocity(m/s):1 11 | % Radial velocity resolution(m/s):0.13 12 | % 
Frame Duration(msec):50 13 | % RF calibration data:None 14 | % Range Detection Threshold (dB):9 15 | % Doppler Detection Threshold (dB):9 16 | % Range Peak Grouping:enabled 17 | % Doppler Peak Grouping:enabled 18 | % Static clutter removal:disabled 19 | % Angle of Arrival FoV: Full FoV 20 | % Range FoV: Full FoV 21 | % Doppler FoV: Full FoV 22 | % *************************************************************** 23 | sensorStop 24 | flushCfg 25 | dfeDataOutputMode 1 26 | channelCfg 15 7 0 27 | adcCfg 2 1 28 | adcbufCfg -1 0 1 1 1 29 | profileCfg 0 77 267 7 57.14 0 0 70 1 128 2604 0 0 30 30 | chirpCfg 0 0 0 0 0 0 0 1 31 | chirpCfg 1 1 0 0 0 0 0 4 32 | chirpCfg 2 2 0 0 0 0 0 2 33 | frameCfg 0 2 16 0 50 1 0 34 | lowPower 0 0 35 | guiMonitor -1 1 0 0 0 0 0 36 | cfarCfg -1 0 2 8 4 3 0 9 1 37 | cfarCfg -1 1 0 4 2 3 1 9 1 38 | multiObjBeamForming -1 1 0.5 39 | clutterRemoval -1 0 40 | calibDcRangeSig -1 0 -5 8 256 41 | extendedMaxVelocity -1 0 42 | lvdsStreamCfg -1 0 0 0 43 | compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 44 | measureRangeBiasAndRxChanPhase 0 1.5 0.2 45 | CQRxSatMonitor 0 3 5 121 0 46 | CQSigImgMonitor 0 63 4 47 | analogMonitor 0 0 48 | aoaFovCfg -1 -90 90 -90 90 49 | cfarFovCfg -1 0 0 5 50 | cfarFovCfg -1 1 -1 1 51 | calibData 0 0 0 52 | sensorStart 53 | -------------------------------------------------------------------------------- /cfg/IWR1843_3D_5fps_9db.cfg: -------------------------------------------------------------------------------- 1 | % *************************************************************** 2 | % Created for SDK ver:03.05 3 | % Created using Visualizer ver:3.5.0.0 4 | % Frequency:77 5 | % Platform:xWR18xx 6 | % Scene Classifier:best_range_res 7 | % Azimuth Resolution(deg):15 + Elevation 8 | % Range Resolution(m):0.044 9 | % Maximum unambiguous Range(m):4.99 10 | % Maximum Radial Velocity(m/s):1 11 | % Radial velocity resolution(m/s):0.13 12 | % Frame Duration(msec):200 13 | % RF calibration data:None 14 | % Range Detection Threshold (dB):9 15 | % Doppler Detection Threshold (dB):9 16 | % Range Peak Grouping:enabled 17 | % Doppler Peak Grouping:enabled 18 | % Static clutter removal:disabled 19 | % Angle of Arrival FoV: Full FoV 20 | % Range FoV: Full FoV 21 | % Doppler FoV: Full FoV 22 | % *************************************************************** 23 | sensorStop 24 | flushCfg 25 | dfeDataOutputMode 1 26 | channelCfg 15 7 0 27 | adcCfg 2 1 28 | adcbufCfg -1 0 1 1 1 29 | profileCfg 0 77 267 7 57.14 0 0 70 1 128 2604 0 0 30 30 | chirpCfg 0 0 0 0 0 0 0 1 31 | chirpCfg 1 1 0 0 0 0 0 4 32 | chirpCfg 2 2 0 0 0 0 0 2 33 | frameCfg 0 2 16 0 200 1 0 34 | lowPower 0 0 35 | guiMonitor -1 1 0 0 0 0 0 36 | cfarCfg -1 0 2 8 4 3 0 9 1 37 | cfarCfg -1 1 0 4 2 3 1 9 1 38 | multiObjBeamForming -1 1 0.5 39 | clutterRemoval -1 0 40 | calibDcRangeSig -1 0 -5 8 256 41 | extendedMaxVelocity -1 0 42 | lvdsStreamCfg -1 0 0 0 43 | compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 44 | measureRangeBiasAndRxChanPhase 0 1.5 0.2 45 | CQRxSatMonitor 0 3 5 121 0 46 | CQSigImgMonitor 0 63 4 47 | analogMonitor 0 0 48 | aoaFovCfg -1 -90 90 -90 90 49 | cfarFovCfg -1 0 0 5 50 | cfarFovCfg -1 1 -1 1 51 | calibData 0 0 0 52 | sensorStart 53 | -------------------------------------------------------------------------------- /cfg/config_demo.py: -------------------------------------------------------------------------------- 1 | """ 2 | Config file for the system 3 | """ 4 | 5 | # global save parameters 6 | EXPERIMENT_NAME = 'demo' 7 | 
RADAR_FPS = 20 # 20 frames per second, 50ms per frame 8 | CAMERA_FPS = 30 # 30 frames per second, lower under poor light conditions 9 | 10 | # do not enable both save methods at the same time 11 | # manual save 12 | MANSAVE_ENABLE = True # this controls the flag from the source 13 | MANSAVE_PERIOD = 30 # second, the time period saved for manual save 14 | # auto save 15 | AUTOSAVE_ENABLE = False # auto save function requires the tracking system 16 | AUTOSAVE_PERIOD = 600 # second, the max time period saved for auto save (radar) 17 | 18 | # multiple class instantiated, multiple config used 19 | RADAR_CFG_LIST = [ 20 | {'name' : 'IWR1843_Ori', 21 | 'cfg_port_name' : 'COM3', 22 | 'data_port_name': 'COM4', 23 | 'cfg_file_name' : './cfg/IWR1843_3D_20fps_9db.cfg', # always use 3D data as input 24 | 'xlim' : None, # the x-direction limit for cloud points from this single radar, set as [a, b), from radar view 25 | 'ylim' : (0.25, 5), 26 | 'zlim' : None, 27 | 'pos_offset' : (0, 0, 1), # default pos_offset is (0, 0, 0) 28 | 'facing_angle' : {'angle': (0, 0, 0), 'sequence': None}, # right-hand global coord-sys, (x, y, z): [-180, 180] positive counted anti-clockwise when facing from the axis end towards the origin, default rotation sequence: zyx 29 | 'ES_threshold' : {'range': (200, None), 'speed_none_0_exception': True}, # if speed_none_0_exception is True, then data with low ES but non-zero speed will be retained 30 | }, 31 | ] 32 | 33 | # multiple class instantiated, single config used 34 | FRAME_EARLY_PROCESSOR_CFG = { # early process config 35 | 'FEP_frame_deque_length': 10, # the number of frames stacked 36 | } 37 | 38 | # single class instantiated, single config used 39 | VISUALIZER_CFG = { 40 | 'dimension' : '3D', # only affects the visualizer demo 41 | 'VIS_xlim' : (-2, 2), 42 | 'VIS_ylim' : (0, 4), 43 | 'VIS_zlim' : (0, 2), 44 | 45 | 'auto_inactive_skip_frame': int(4 * RADAR_FPS), # frames; when no object is detected, skip this many radar frames and process one (short skip) 46 | } 47 | 48 | # single class instantiated, single config used 49 | FRAME_POST_PROCESSOR_CFG = { # post process config 50 | # cloud point filter para 51 | 'FPP_global_xlim' : (-2, 2), # the x-direction limit for merged cloud points from all radars, set as [a, b), from global view 52 | 'FPP_global_ylim' : (0, 4), 53 | 'FPP_global_zlim' : (0.1, 2), 54 | 'FPP_ES_threshold': {'range': None, 'speed_none_0_exception': True}, # the points in this energy strength range will be preserved; if speed_none_0_exception is True, then data with low ES but non-zero speed will be retained 55 | } 56 | 57 | # single class instantiated, single config used 58 | DBSCAN_GENERATOR_CFG = { # DBSCAN para config 59 | 'Default' : { 60 | 'DBS_eps' : 0.3, # maximum distance; larger means farther points can be clustered, smaller means the points need to be closer 61 | 'DBS_min_samples': 10, # minimum samples; larger means more points are needed to form a cluster, 1 - each point can be treated as a cluster, no noise 62 | 63 | # DBSCAN filter para 64 | 'DBS_cp_pos_xlim': None, # the position limit in x-direction for central points of clusters 65 | 'DBS_cp_pos_ylim': None, 66 | 'DBS_cp_pos_zlim': (0, 1.8), 67 | 'DBS_size_xlim' : (0.2, 1), # the cluster size limit in x-direction 68 | 'DBS_size_ylim' : (0.2, 1), 69 | 'DBS_size_zlim' : (0.2, 2), 70 | 'DBS_sort' : None, # if sorting is required, set it to a number N to acquire the N largest clusters 71 | }, 72 | 73 | # DBS_dynamic_para, allows running multiple DBSCAN passes with different parameters on each data frame 74 | 'Dynamic_ES_0_above' :
{ # for data points with energy above 0 75 | 'DBS_eps' : 0.2, 76 | 'DBS_min_samples': 15, 77 | }, 78 | 'Dynamic_ES_100_above': { 79 | 'DBS_eps' : 0.8, 80 | 'DBS_min_samples': 6, 81 | }, 82 | 'Dynamic_ES_200_above': { 83 | 'DBS_eps' : 1, 84 | 'DBS_min_samples': 3, 85 | }, 86 | 'Dynamic_ES_300_above': { 87 | 'DBS_eps' : 1.2, 88 | 'DBS_min_samples': 2, 89 | 'DBS_size_xlim' : (0.1, 0.8), # the cluster size limit in x-direction 90 | 'DBS_size_ylim' : (0.1, 0.8), 91 | 'DBS_size_zlim' : (0.1, 2), 92 | }, 93 | 'Dynamic_ES_400_above': { 94 | 'DBS_eps' : 1.5, 95 | 'DBS_min_samples': 2, 96 | 'DBS_size_xlim' : (0.1, 0.8), # the cluster size limit in x-direction 97 | 'DBS_size_ylim' : (0.1, 0.8), 98 | 'DBS_size_zlim' : (0.1, 2), 99 | }, 100 | } 101 | 102 | # single class instantiated, single config used 103 | BGNOISE_FILTER_CFG = { # Background noise filter config 104 | 'BGN_enable' : False, 105 | 106 | # BGN processing para 107 | 'BGN_deque_length' : 150, 108 | 'BGN_accept_ES_threshold': (None, 200), # the noise within this ES range will be accepted when the BGN updates; it is for DBS noise 109 | 'BGN_filter_ES_threshold': (None, 200), # the noise within this ES range will be filtered by the BGN filter 110 | 'BGN_DBS_window_step' : 20, 111 | 'BGN_DBS_eps' : 0.02, 112 | 'BGN_DBS_min_samples' : 20, 113 | 'BGN_cluster_tf' : 0.15, # the threshold factor of the data number used to select clusters 114 | 'BGN_cluster_xextension' : 0.05, 115 | 'BGN_cluster_yextension' : 0.05, 116 | 'BGN_cluster_zextension' : 0.01, 117 | } 118 | 119 | # single class instantiated, single config used 120 | HUMAN_TRACKING_CFG = { # tracking system config 121 | 'TRK_enable' : True, 122 | 123 | # Tracking system para 124 | 'TRK_obj_bin_number' : 2, # the maximum number of objects which can be detected 125 | 'TRK_poss_clus_deque_length' : 3, # the number of possible clusters stacked before calculating the poss matrix 126 | 'TRK_redundant_clus_remove_cp_dis': 1, # the distance for removing redundant clusters close to the updated one, for the multiple-obj-bin case 127 | } 128 | 129 | # multiple class instantiated, single config used 130 | HUMAN_OBJECT_CFG = { # human object config for each object bin 131 | 'obj_deque_length' : 60, # the length of the central point, size and status info timeline stored in the object bin 132 | 133 | # object update possibility config 134 | # related_possibility, hard limit 135 | 'dis_diff_threshold' : { 136 | 'threshold' : 0.8, # the distance threshold(m) between the current cp and the previous one for object info update 137 | 'dynamic_ratio': 0.2, # the speed ratio for the dynamic distance threshold, 0 - do nothing 138 | }, 139 | 'size_diff_threshold' : 1, # the size diff threshold(m^3) between the current cp and the previous one for object info update 140 | # self_possibility, soft limit (the hard limit is set in DBSCAN_min/max_size); this is also used for classifying the status 141 | 'expect_pos' : { 142 | 'default' : (None, None, 1.1), 143 | 'standing': (None, None, 1.1), 144 | 'sitting' : (None, None, 0.7), 145 | 'lying' : (None, None, 0.5), 146 | }, 147 | 'expect_shape' : { 148 | 'default' : (0.8, 0.8, 1.8), 149 | 'standing': (0.7, 0.7, 1.5), 150 | 'sitting' : (0.3, 0.3, 0.6), 151 | 'lying' : (0.8, 0.8, 0.4), 152 | }, 153 | 'sub_possibility_proportion': (1, 1, 1, 1), # the coefficients for the possibility proportion 154 | 'inactive_timeout' : 5, # second, if timeout, the object bin status goes inactive 155 | 'obj_delete_timeout' : 60, # second, if timeout, delete this object bin 156 | 157 | # an entrance zone, where an object bin starts picking up an
object 158 | 'fuzzy_boundary_enter' : False, 159 | 'fuzzy_boundary_threshold' : 0.5, 160 | 'scene_xlim' : FRAME_POST_PROCESSOR_CFG['FPP_global_xlim'], 161 | 'scene_ylim' : FRAME_POST_PROCESSOR_CFG['FPP_global_ylim'], 162 | 'scene_zlim' : FRAME_POST_PROCESSOR_CFG['FPP_global_zlim'], 163 | 164 | # object status threshold 165 | 'standing_sitting_threshold': 0.9, 166 | 'sitting_lying_threshold' : 0.4, 167 | 168 | # get last update 2-5 info to show the current position and status 169 | 'get_fuzzy_pos_No' : 20, 170 | 'get_fuzzy_status_No' : 40, 171 | } 172 | 173 | # single class instantiated, single config used 174 | SAVE_CENTER_CFG = { 175 | 'file_save_dir' : './data/CP_107/', 176 | 'experiment_name' : EXPERIMENT_NAME, 177 | # time saved in filename is the end time for manual mode 178 | # time saved in filename is the start time for auto mode 179 | 180 | # manual save 181 | 'mansave_period' : MANSAVE_PERIOD, # the time period saved for manual save 182 | # for radar 183 | 'mansave_rdr_frame_max' : int(MANSAVE_PERIOD * RADAR_FPS * 1.2), # *1.2 to guarantee the sequence integrity 184 | # for camera 185 | 'mansave_cam_frame_max' : int(MANSAVE_PERIOD * CAMERA_FPS * 1.2), # *1.2 to guarantee the sequence integrity 186 | 187 | # auto save 188 | # for radar 189 | 'autosave_rdr_frame_max' : int(AUTOSAVE_PERIOD * RADAR_FPS), 190 | 'autosave_end_remove_period': HUMAN_OBJECT_CFG['obj_delete_timeout'] - HUMAN_OBJECT_CFG['inactive_timeout'], # to remove the empty period for auto save 191 | # for camera 192 | 'autosave_cam_buffer' : 3 * CAMERA_FPS, 193 | } 194 | 195 | # single class instantiated, single config used 196 | CAMERA_CFG = { 197 | 'name' : 'Camera', 198 | 'camera_index' : 1, 199 | 'capture_resolution' : (1280, 720), # (1280, 720) or (960, 540) 200 | 'window_enable' : False, # if True, the radar data and camera data can not be saved together 201 | 202 | 'auto_inactive_skip_enable': True if VISUALIZER_CFG['auto_inactive_skip_frame'] > 0 else False, # long skip all camera frames when no object is detected 203 | } 204 | 205 | # single class instantiated, single config used 206 | RADAR_LOCATION = 'CP 107' 207 | EMAIL_ADDRESS = '1740781310szc@gmail.com' # multiple target addresses 'xxxx@gmail.com, xxxx@qq.com' 208 | EMAIL_NOTIFIER_CFG = { 209 | 'manual_token_path' : './library/email_notifier_token/manual_token.json', 210 | 211 | # message sent when starts 212 | 'message_sys_start' : { 213 | 'to' : EMAIL_ADDRESS, # multiple target addresses 'xxxx@gmail.com, xxxx@qq.com' 214 | 'subject' : f'{RADAR_LOCATION} PC_RADAR Auto_mode Activated Successfully!!', 215 | 'text' : 216 | f""" 217 | The python script on PC_radar in room {RADAR_LOCATION} is activated as auto_mode successfully!!. 218 | """, 219 | 'image_in_text': [], 220 | 'attachment' : [], 221 | } if AUTOSAVE_ENABLE else None, 222 | 223 | # message sent when object detected 224 | 'message_obj_detected' : { 225 | 'to' : EMAIL_ADDRESS, # multiple target addresses 'xxxx@gmail.com, xxxx@qq.com' 226 | 'subject' : f'Human detected in {RADAR_LOCATION}!', 227 | 'text' : 228 | """ 229 | Human detected! See below: 230 | """, 231 | 'image_in_text': [], 232 | 'attachment' : [], 233 | }, 234 | 235 | # message sent for daily check 236 | 'message_daily_check' : { 237 | 'to' : EMAIL_ADDRESS, # multiple target addresses 'xxxx@gmail.com, xxxx@qq.com' 238 | 'subject' : f'{RADAR_LOCATION} PC_RADAR is still ON!!', 239 | 'text' : 240 | f""" 241 | The python script is still running on PC_radar in room {RADAR_LOCATION}. 242 | Another happy day! 
243 | """, 244 | 'image_in_text': [], 245 | 'attachment' : [], 246 | } if AUTOSAVE_ENABLE else None, 247 | 'message_daily_check_send_time': ['08-00'] if AUTOSAVE_ENABLE else [], 248 | } 249 | 250 | # single class instantiated, single config used 251 | SYNC_MONITOR_CFG = { 252 | 'rd_qsize_warning': 5, # raw data queue size 253 | 'sc_qsize_warning': 20, # save center queue size 254 | } 255 | -------------------------------------------------------------------------------- /cfg/config_demo_3R.py: -------------------------------------------------------------------------------- 1 | # global save parameters 2 | EXPERIMENT_NAME = 'szc' 3 | RADAR_FPS = 20 # 20 frames per second, 50ms per frame 4 | CAMERA_FPS = 30 # 30 frames per second, lower under worse light condition 5 | # manual save 6 | MANSAVE_ENABLE = True # this controls the flag from the source 7 | MANSAVE_PERIOD = 60 # second, the time period saved for manual save 8 | # auto save 9 | AUTOSAVE_ENABLE = False # auto save function requires tracking system 10 | AUTOSAVE_PERIOD = 600 # second, the max time period saved for auto save (radar) 11 | 12 | # multiple class instantiated, multiple config used 13 | RADAR_CFG_LIST = [ 14 | {'name' : 'IWR1843_Ori', 15 | 'cfg_port_name' : 'COM6', 16 | 'data_port_name': 'COM5', 17 | 'cfg_file_name' : './cfg/IWR1843_3D_20fps_9db.cfg', # always use 3D data as input 18 | 'xlim' : None, # the x-direction limit for cloud points from this single radar, set as [a, b), from radar view 19 | 'ylim' : (0.25, 4), 20 | 'zlim' : None, 21 | 'pos_offset' : (0, 0, 1), # default pos_offset is (0, 0, 0) 22 | 'facing_angle' : {'angle': (0, 0, 0), 'sequence': None}, # right-hand global coord-sys, (x, y, z): [-180, 180] positive counted anti-clockwise when facing from axis end towards origin, default rotation sequence: zyx 23 | 'ES_threshold' : {'range': (0, None), 'speed_none_0_exception': False}, # if speed_none_0_exception is True, then the data with low ES but with speed will be reserved 24 | }, 25 | {'name' : 'IWR1843_Side', 26 | 'cfg_port_name' : 'COM4', 27 | 'data_port_name': 'COM3', 28 | 'cfg_file_name' : './cfg/IWR1843_3D_20fps_9db.cfg', # always use 3D data as input 29 | 'xlim' : None, # the x-direction limit for cloud points from this single radar, set as [a, b), from radar view 30 | 'ylim' : (0.25, 4), 31 | 'zlim' : None, 32 | 'pos_offset' : (-0.15, 0.2, 2.5), # default pos_offset is (0, 0, 0) 33 | 'facing_angle' : {'angle': (-60, 0, 0), 'sequence': None}, # right-hand global coord-sys, (x, y, z): [-180, 180] positive counted anti-clockwise when facing from axis end towards origin, default rotation sequence: zyx 34 | 'ES_threshold' : {'range': (0, None), 'speed_none_0_exception': False}, # if speed_none_0_exception is True, then the data with low ES but with speed will be reserved 35 | }, 36 | {'name' : 'IWR1843_Top', 37 | 'cfg_port_name' : 'COM7', 38 | 'data_port_name': 'COM8', 39 | 'cfg_file_name' : './cfg/IWR1843_3D_20fps_9db.cfg', # always use 3D data as input 40 | 'xlim' : None, # the x-direction limit for cloud points from this single radar, set as [a, b), from radar view 41 | 'ylim' : (0.25, 4), 42 | 'zlim' : None, 43 | 'pos_offset' : (-0.15, 1.7, 2.5), # default pos_offset is (0, 0, 0) 44 | 'facing_angle' : {'angle': (-90, 0, 0), 'sequence': None}, # right-hand global coord-sys, (x, y, z): [-180, 180] positive counted anti-clockwise when facing from axis end towards origin, default rotation sequence: zyx 45 | 'ES_threshold' : {'range': (0, None), 'speed_none_0_exception': False}, # if speed_none_0_exception 
is True, then the data with low ES but with speed will be reserved 46 | }, 47 | ] 48 | 49 | # multiple class instantiated, single config used 50 | FRAME_EARLY_PROCESSOR_CFG = { # early process config 51 | 'FEP_frame_deque_length': 10, # the number of frame stacked 52 | } 53 | 54 | # single class instantiated, single config used 55 | VISUALIZER_CFG = { 56 | 'dimension' : '3D', # only effect visualizer demo, 57 | 'VIS_xlim' : (-2, 2), 58 | 'VIS_ylim' : (0, 4), 59 | 'VIS_zlim' : (0, 2), 60 | 61 | 'auto_inactive_skip_frame': int(1 * RADAR_FPS), # frames, short skip radar frames and process one when no object is detected 62 | } 63 | 64 | # single class instantiated, single config used 65 | FRAME_POST_PROCESSOR_CFG = { # post process config 66 | # cloud point filter para 67 | 'FPP_global_xlim' : (-1, 1), # the x-direction limit for merged cloud points from all radars, set as [a, b), from global view 68 | 'FPP_global_ylim' : (0, 2.5), 69 | 'FPP_global_zlim' : (-0.5, 2), 70 | 'FPP_ES_threshold': {'range': None, 'speed_none_0_exception': True}, # the points in this energy strength range will be preserved, if speed_none_0_exception is True, then the data with low ES but with speed will be reserved 71 | } 72 | 73 | # single class instantiated, single config used 74 | DBSCAN_GENERATOR_CFG = { # DBSCAN para config 75 | 'Default' : { 76 | 'DBS_eps' : 0.3, # maximum distance, larger means the further points can be clustered, smaller means the points need to be closer 77 | 'DBS_min_samples': 10, # minimum samples, larger means more points are needed to form a cluster, 1-each point can be treated as a cluster, no noise 78 | 79 | # DBSCAN filter para 80 | 'DBS_cp_pos_xlim': None, # the position limit in x-direction for central points of clusters 81 | 'DBS_cp_pos_ylim': None, 82 | 'DBS_cp_pos_zlim': (0, 1.8), 83 | 'DBS_size_xlim' : (0.2, 1), # the cluster size limit in x-direction 84 | 'DBS_size_ylim' : (0.2, 1), 85 | 'DBS_size_zlim' : (0.2, 2), 86 | 'DBS_sort' : None, # if sort is required, set it to a number for acquiring this number of the largest cluster 87 | }, 88 | 89 | # DBS_dynamic_para, it allows run multiple DBSCAN with diff para for each data frame 90 | 'Dynamic_ES_0_above' : { # for data points with energy above 0 91 | 'DBS_eps' : 0.2, 92 | 'DBS_min_samples': 15, 93 | }, 94 | 'Dynamic_ES_100_above': { 95 | 'DBS_eps' : 0.3, 96 | 'DBS_min_samples': 12, 97 | }, 98 | 'Dynamic_ES_200_above': { 99 | 'DBS_eps' : 0.5, 100 | 'DBS_min_samples': 6, 101 | }, 102 | 'Dynamic_ES_300_above': { 103 | 'DBS_eps' : 0.8, 104 | 'DBS_min_samples': 3, 105 | 'DBS_size_xlim' : (0.1, 0.8), # the cluster size limit in x-direction 106 | 'DBS_size_ylim' : (0.1, 0.8), 107 | 'DBS_size_zlim' : (0.1, 2), 108 | }, 109 | 'Dynamic_ES_400_above': { 110 | 'DBS_eps' : 1, 111 | 'DBS_min_samples': 2, 112 | 'DBS_size_xlim' : (0.1, 0.8), # the cluster size limit in x-direction 113 | 'DBS_size_ylim' : (0.1, 0.8), 114 | 'DBS_size_zlim' : (0.1, 2), 115 | }, 116 | } 117 | 118 | # single class instantiated, single config used 119 | BGNOISE_FILTER_CFG = { # Background noise filter config 120 | 'BGN_enable' : False, 121 | 122 | # BGN processing para 123 | 'BGN_deque_length' : 150, 124 | 'BGN_accept_ES_threshold': (None, 200), # the noise with this ES range will be accepted when BGN update, it is for DBS noise 125 | 'BGN_filter_ES_threshold': (None, 200), # the noise with this ES range will be filtered when BGN filter 126 | 'BGN_DBS_window_step' : 20, 127 | 'BGN_DBS_eps' : 0.02, 128 | 'BGN_DBS_min_samples' : 20, 129 | 'BGN_cluster_tf' : 0.15, # 
the threshold factor of data number used to select cluster 130 | 'BGN_cluster_xextension' : 0.05, 131 | 'BGN_cluster_yextension' : 0.05, 132 | 'BGN_cluster_zextension' : 0.01, 133 | } 134 | 135 | # single class instantiated, single config used 136 | HUMAN_TRACKING_CFG = { # tracking system config 137 | 'TRK_enable' : True, 138 | 139 | # Tracking system para 140 | 'TRK_obj_bin_number' : 2, # the maximum number of object which can be detected 141 | 'TRK_poss_clus_deque_length' : 3, # the number of possible clusters stacked before calculating the poss matrix 142 | 'TRK_redundant_clus_remove_cp_dis': 1, # the distance for remove redundant clusters closed to the updated one for multiple obj bin purpose 143 | } 144 | 145 | # multiple class instantiated, single config used 146 | HUMAN_OBJECT_CFG = { # human object config for each object bin 147 | 'obj_deque_length' : 60, # the length of central point, size and status info in timeline stored in object bin 148 | 149 | # object update possibility config 150 | # related_possibility, hard limit 151 | 'dis_diff_threshold' : { 152 | 'threshold' : 0.8, # the distance threshold(m) between the current cp and previous one for object info update 153 | 'dynamic_ratio': 0.2, # the speed ratio for dynamic distance threshold, 0-do nothing 154 | }, 155 | 'size_diff_threshold' : 1, # the size diff threshold(m^3) between the current cp and previous one for object info update 156 | # self_possibility, soft limit, (hard limit is set in DBSCAN_min/max_size), this is also used for classify the status 157 | 'expect_pos' : { 158 | 'default' : (None, None, 1.1), 159 | 'standing': (None, None, 1.1), 160 | 'sitting' : (None, None, 0.7), 161 | 'lying' : (None, None, 0.5), 162 | }, 163 | 'expect_shape' : { 164 | 'default' : (0.8, 0.8, 1.8), 165 | 'standing': (0.7, 0.7, 1.5), 166 | 'sitting' : (0.3, 0.3, 0.6), 167 | 'lying' : (0.8, 0.8, 0.4), 168 | }, 169 | 'sub_possibility_proportion': (1.8, 1.8, 2, 0.5), # the coefficient for the possibility proportion 170 | 'inactive_timeout' : 5, # second, if timeout, object bin status goes inactive 171 | 'obj_delete_timeout' : 60, # second, if timeout, delete this object bin 172 | 173 | # an entrance zone, for object bin start picking up an object 174 | 'fuzzy_boundary_enter' : False, 175 | 'fuzzy_boundary_threshold' : 0.5, 176 | 'scene_xlim' : FRAME_POST_PROCESSOR_CFG['FPP_global_xlim'], 177 | 'scene_ylim' : FRAME_POST_PROCESSOR_CFG['FPP_global_ylim'], 178 | 'scene_zlim' : FRAME_POST_PROCESSOR_CFG['FPP_global_zlim'], 179 | 180 | # object status threshold 181 | 'standing_sitting_threshold': 0.9, 182 | 'sitting_lying_threshold' : 0.4, 183 | 184 | # get last update 2-5 info to show the current position and status 185 | 'get_fuzzy_pos_No' : 20, 186 | 'get_fuzzy_status_No' : 40, 187 | } 188 | 189 | # single class instantiated, single config used 190 | SAVE_CENTER_CFG = { 191 | 'file_save_dir' : './data/MVB_340/', 192 | 'experiment_name' : EXPERIMENT_NAME, 193 | # time saved in filename is the end time for manual mode 194 | # time saved in filename is the start time for auto mode 195 | 196 | # manual save 197 | 'mansave_period' : MANSAVE_PERIOD, # the time period saved for manual save 198 | # for radar 199 | 'mansave_rdr_frame_max' : int(MANSAVE_PERIOD * RADAR_FPS * 1.2), # *1.2 to guarantee the sequence integrity 200 | # for camera 201 | 'mansave_cam_frame_max' : int(MANSAVE_PERIOD * CAMERA_FPS * 1.2), # *1.2 to guarantee the sequence integrity 202 | 203 | # auto save 204 | # for radar 205 | 'autosave_rdr_frame_max' : int(AUTOSAVE_PERIOD * 
RADAR_FPS), 206 | 'autosave_end_remove_period': HUMAN_OBJECT_CFG['obj_delete_timeout'] - HUMAN_OBJECT_CFG['inactive_timeout'], # to remove the empty period for auto save 207 | # for camera 208 | 'autosave_cam_buffer' : 3 * CAMERA_FPS, 209 | } 210 | 211 | # single class instantiated, single config used 212 | CAMERA_CFG = { 213 | 'name' : 'Camera', 214 | 'camera_index' : 2, 215 | 'capture_resolution' : (1280, 720), # (1280, 720) or (960, 540) 216 | 'window_enable' : False, # if True, the radar data and camera data can not be saved together 217 | 218 | 'auto_inactive_skip_enable': True if VISUALIZER_CFG['auto_inactive_skip_frame'] > 0 else False, # long skip all camera frames when no object is detected 219 | } 220 | 221 | # single class instantiated, single config used 222 | RADAR_LOCATION = 'MVB 340' 223 | EMAIL_ADDRESS = 'xxxx@gmail.com' # multiple target addresses 'xxxx@gmail.com, xxxx@qq.com' 224 | EMAIL_NOTIFIER_CFG = { 225 | 'manual_token_path' : './library/email_notifier_token/manual_token.json', 226 | 227 | # message sent when starts 228 | 'message_sys_start' : { 229 | 'to' : EMAIL_ADDRESS, # multiple target addresses 'xxxx@gmail.com, xxxx@qq.com' 230 | 'subject' : f'{RADAR_LOCATION} PC_RADAR Auto_mode Activated Successfully!!', 231 | 'text' : 232 | f""" 233 | The python script on PC_radar in room {RADAR_LOCATION} is activated as auto_mode successfully!!. 234 | """, 235 | 'image_in_text': [], 236 | 'attachment' : [], 237 | } if AUTOSAVE_ENABLE else None, 238 | 239 | # message sent when object detected 240 | 'message_obj_detected' : { 241 | 'to' : EMAIL_ADDRESS, # multiple target addresses 'xxxx@gmail.com, xxxx@qq.com' 242 | 'subject' : f'Human detected in {RADAR_LOCATION}!', 243 | 'text' : 244 | """ 245 | Human detected! See below: 246 | """, 247 | 'image_in_text': [], 248 | 'attachment' : [], 249 | }, 250 | 251 | # message sent for daily check 252 | 'message_daily_check' : { 253 | 'to' : EMAIL_ADDRESS, # multiple target addresses 'xxxx@gmail.com, xxxx@qq.com' 254 | 'subject' : f'{RADAR_LOCATION} PC_RADAR is still ON!!', 255 | 'text' : 256 | f""" 257 | The python script is still running on PC_radar in room {RADAR_LOCATION}. 258 | Another happy day! 259 | """, 260 | 'image_in_text': [], 261 | 'attachment' : [], 262 | } if AUTOSAVE_ENABLE else None, 263 | 'message_daily_check_send_time': ['08-00'] if AUTOSAVE_ENABLE else [], 264 | } 265 | 266 | # single class instantiated, single config used 267 | SYNC_MONITOR_CFG = { 268 | 'rd_qsize_warning': 5, # raw data queue size 269 | 'sc_qsize_warning': 20, # save center queue size 270 | } 271 | -------------------------------------------------------------------------------- /library/DBSCAN_generator.py: -------------------------------------------------------------------------------- 1 | """ 2 | DBSCAN generator, abbr. 
DBS 3 | """ 4 | 5 | import numpy as np 6 | from sklearn.cluster import DBSCAN 7 | 8 | from library.data_processor import DataProcessor 9 | 10 | 11 | class DBSCANGenerator(DataProcessor): 12 | def __init__(self, **kwargs_CFG): 13 | """ 14 | pass config static parameters 15 | """ 16 | """ module own config """ 17 | DBS_CFG = kwargs_CFG['DBSCAN_GENERATOR_CFG'] 18 | self.DBS_CFG = DBS_CFG 19 | 20 | # get default DBSCAN para 21 | self.DBS_eps = DBS_CFG['Default']['DBS_eps'] 22 | self.DBS_min_samples = DBS_CFG['Default']['DBS_min_samples'] 23 | self.DBS_cp_pos_xlim = DBS_CFG['Default']['DBS_cp_pos_xlim'] 24 | self.DBS_cp_pos_ylim = DBS_CFG['Default']['DBS_cp_pos_ylim'] 25 | self.DBS_cp_pos_zlim = DBS_CFG['Default']['DBS_cp_pos_zlim'] 26 | self.DBS_size_xlim = DBS_CFG['Default']['DBS_size_xlim'] 27 | self.DBS_size_ylim = DBS_CFG['Default']['DBS_size_ylim'] 28 | self.DBS_size_zlim = DBS_CFG['Default']['DBS_size_zlim'] 29 | self.DBS_sort = DBS_CFG['Default']['DBS_sort'] 30 | 31 | # get DBSCAN dynamic level listed in config 32 | self.DBS_dynamic_ES_list = [] 33 | for para in DBS_CFG: 34 | if para.split('_')[0] == 'Dynamic': 35 | ES_level = int(para.split('_')[2]) 36 | self.DBS_dynamic_ES_list.append(ES_level) 37 | 38 | """ 39 | inherit father class __init__ para 40 | """ 41 | super().__init__(**kwargs_CFG) 42 | 43 | # generate DBSCAN boxes dynamically 44 | def DBS_dynamic_ES(self, data_points): 45 | """ 46 | :param data_points: (ndarray) data_numbers(n) * channels(5) for a dozen of data frames 47 | :return: vertices_list: (list-ndarray) list of data_numbers(n) * channels(3) for vertices of 3D hull 48 | valid_points_list: (list-ndarray) list of data_numbers(n) * channels(c) for valid points 49 | valid_points: (ndarray) data_numbers(n) * channels(c) for total valid points 50 | noise: (ndarray) data_numbers(n) * channels(c) for total noise points 51 | """ 52 | # initial values 53 | vertices_list_total = [] 54 | valid_points_list_total = [] 55 | valid_points_total = np.ndarray([0, 5], dtype=np.float16) 56 | noise_total = np.ndarray([0, 5], dtype=np.float16) 57 | 58 | if data_points.shape[0] > 0: 59 | # use ES_level_list to minimize the DBSCAN times and lower computation cost by avoiding calculate 2 times for 2 levels which have same data 60 | ES_level_list = [] 61 | prev_ES_dp_No = 0 62 | for ES_level in self.DBS_dynamic_ES_list[::-1]: # get reversed ES level list 63 | curr_ES_dp_No = len(self.DP_np_filter(data_points, axis=4, range_lim=(ES_level, None))[0]) # find how many points within this level 64 | if curr_ES_dp_No != prev_ES_dp_No: # if the number of data points at curr level is diff with prev one 65 | ES_level_list.append(ES_level) # append this level 66 | prev_ES_dp_No = curr_ES_dp_No # update number for prev level 67 | 68 | # run DBSCAN multiple times with diff para 69 | # for ES_level in self.DBSCAN_dynamic_ES_list: 70 | for ES_level in ES_level_list: 71 | # set default DBSCAN para 72 | self.DBS_eps = self.DBS_CFG['Default']['DBS_eps'] 73 | self.DBS_min_samples = self.DBS_CFG['Default']['DBS_min_samples'] 74 | self.DBS_cp_pos_xlim = self.DBS_CFG['Default']['DBS_cp_pos_xlim'] 75 | self.DBS_cp_pos_ylim = self.DBS_CFG['Default']['DBS_cp_pos_ylim'] 76 | self.DBS_cp_pos_zlim = self.DBS_CFG['Default']['DBS_cp_pos_zlim'] 77 | self.DBS_size_xlim = self.DBS_CFG['Default']['DBS_size_xlim'] 78 | self.DBS_size_ylim = self.DBS_CFG['Default']['DBS_size_ylim'] 79 | self.DBS_size_zlim = self.DBS_CFG['Default']['DBS_size_zlim'] 80 | self.DBS_sort = self.DBS_CFG['Default']['DBS_sort'] 81 | 82 | # set DBSCAN 
para for each subgroup 83 | index = f'Dynamic_ES_{ES_level}_above' 84 | for p in self.DBS_CFG[index]: 85 | exec('self.%s = self.DBS_CFG[index][\'%s\']' % (p, p)) 86 | 87 | # filter the points lower than energy strength level and feed into DBS 88 | data_points_sub, _ = self.DP_np_filter(data_points, axis=4, range_lim=(ES_level, None)) 89 | vertices_list, valid_points_list, valid_points, noise = self.DBS(data_points_sub) 90 | vertices_list_total = vertices_list_total + vertices_list 91 | valid_points_list_total = valid_points_list_total + valid_points_list 92 | valid_points_total = np.concatenate([valid_points_total, valid_points]) 93 | noise_total = np.concatenate([noise_total, noise]) 94 | return vertices_list_total, valid_points_list_total, valid_points_total, noise_total 95 | 96 | # basic DBSCAN function 97 | def DBS(self, data_points): 98 | """ 99 | :param data_points: (ndarray) data_numbers(n) * channels(c>=3) for a dozen of data frames 100 | :return: vertices_list: (list-ndarray) list of data_numbers(n) * channels(3) for vertices of 3D hull 101 | valid_points_list: (list-ndarray) list of data_numbers(n) * channels(c) for valid points 102 | valid_points: (ndarray) data_numbers(n) * channels(c) for total valid points 103 | noise: (ndarray) data_numbers(n) * channels(c) for total noise points 104 | """ 105 | # initial values 106 | vertices_list = [] 107 | valid_points_list = [] 108 | valid_points = np.ndarray([0, 5], dtype=np.float16) 109 | noise = np.ndarray([0, 5], dtype=np.float16) 110 | 111 | if data_points.shape[0] >= int(self.DBS_min_samples * 1): # guarantee enough points and speed up when factor>1 112 | # DBSCAN find clusters 113 | labels = DBSCAN(eps=self.DBS_eps, min_samples=self.DBS_min_samples).fit_predict(data_points[:, 0:3]) # only feed xyz coords 114 | # filter DBSCAN noise 115 | noise = data_points[labels == -1] 116 | valid_points = data_points[labels != -1] 117 | valid_labels = labels[labels != -1] 118 | 119 | # get info for each cluster including central point position, size and label 120 | cluster_info_total = np.ndarray([0, 7], dtype=np.float16) # (cp_pos_x, cp_pos_y, cp_pos_z, size_x, size_y, size_z, label) 121 | valid_labels_unique = np.unique(valid_labels) 122 | for j in range(len(valid_labels_unique)): 123 | label = valid_labels_unique[j] 124 | points = valid_points[valid_labels == label] 125 | x, y, z = self.DP_boundary_calculator(points, axis=range(3)) 126 | cp_pos = np.array([sum(x) / 2, sum(y) / 2, sum(z) / 2], dtype=np.float16) 127 | size = np.concatenate([np.diff(x), np.diff(y), np.diff(z)]) 128 | cluster_info = np.concatenate([cp_pos, size, np.array([label], dtype=np.float16)])[np.newaxis, :] 129 | cluster_info_total = np.concatenate([cluster_info_total, cluster_info]) 130 | # apply filters 131 | cluster_info_total, _ = self.DP_np_filter(cluster_info_total, axis=0, range_lim=self.DBS_cp_pos_xlim) 132 | cluster_info_total, _ = self.DP_np_filter(cluster_info_total, axis=1, range_lim=self.DBS_cp_pos_ylim) 133 | cluster_info_total, _ = self.DP_np_filter(cluster_info_total, axis=2, range_lim=self.DBS_cp_pos_zlim) 134 | cluster_info_total, _ = self.DP_np_filter(cluster_info_total, axis=3, range_lim=self.DBS_size_xlim) 135 | cluster_info_total, _ = self.DP_np_filter(cluster_info_total, axis=4, range_lim=self.DBS_size_ylim) 136 | cluster_info_total, _ = self.DP_np_filter(cluster_info_total, axis=5, range_lim=self.DBS_size_zlim) 137 | # get index of cluster points that passed filters 138 | index = np.zeros(len(valid_labels), dtype=bool) 139 | for info in 
cluster_info_total: 140 | cluster_index = valid_labels == info[-1] 141 | index = np.logical_or(index, cluster_index) 142 | # update the valid points and labels 143 | valid_points = valid_points[index] 144 | valid_labels = valid_labels[index] 145 | 146 | # DBSCAN sort process 147 | if self.DBS_sort: 148 | # sort the DBSCAN labels based on the point number of cluster, high to low 149 | unique, counts = np.unique(valid_labels, return_counts=True) 150 | unique_sorted = [i[0] for i in sorted(tuple(zip(unique, counts)), key=lambda item: item[1], reverse=True)] 151 | # find the envelope of the biggest several clusters 152 | for i in range(len(unique_sorted)): 153 | if i < self.DBS_sort: # only choose the biggest several clusters 154 | cluster = valid_points[valid_labels == unique_sorted[i]] 155 | # vertices_list.append(self._convexhull(cluster)) # give cluster convexhull vertices 156 | vertices_list.append(self.DP_cubehull(cluster)) # give cluster cubehull vertices 157 | valid_points_list.append(cluster) 158 | else: 159 | break 160 | else: 161 | valid_labels_unique = np.unique(valid_labels) 162 | for i in range(len(valid_labels_unique)): 163 | cluster = valid_points[valid_labels == valid_labels_unique[i]] 164 | # vertices_list.append(self._convexhull(cluster)) # give cluster convexhull vertices 165 | vertices_list.append(self.DP_cubehull(cluster)) # give cluster cubehull vertices 166 | valid_points_list.append(cluster) 167 | 168 | return vertices_list, valid_points_list, valid_points, noise 169 | -------------------------------------------------------------------------------- /library/TI/mmw_demo_example_script.py: -------------------------------------------------------------------------------- 1 | # **************************************************************************** 2 | # * (C) Copyright 2020, Texas Instruments Incorporated. - www.ti.com 3 | # **************************************************************************** 4 | # * 5 | # * Redistribution and use in source and binary forms, with or without 6 | # * modification, are permitted provided that the following conditions are 7 | # * met: 8 | # * 9 | # * Redistributions of source code must retain the above copyright notice, 10 | # * this list of conditions and the following disclaimer. 11 | # * 12 | # * Redistributions in binary form must reproduce the above copyright 13 | # * notice, this list of conditions and the following disclaimer in the 14 | # * documentation and/or other materials provided with the distribution. 15 | # * 16 | # * Neither the name of Texas Instruments Incorporated nor the names of its 17 | # * contributors may be used to endorse or promote products derived from 18 | # * this software without specific prior written permission. 19 | # * 20 | # * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 21 | # * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED 22 | # * PARTICULAR TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 23 | # * A PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR 24 | # * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, 25 | # * EXEMPLARY, ORCONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, 26 | # * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR 27 | # * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF 28 | # * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING 29 | # * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 30 | # * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 31 | # * 32 | # **************************************************************************** 33 | 34 | 35 | # **************************************************************************** 36 | # Sample mmW demo UART output parser script - should be invoked using python3 37 | # ex: python3 mmw_demo_example_script.py .dat 38 | # 39 | # Notes: 40 | # 1. The parser_mmw_demo script will output the text version 41 | # of the captured files on stdio. User can redirect that output to a log file, if desired 42 | # 2. This example script also outputs the detected point cloud data in mmw_demo_output.csv 43 | # to showcase how to use the output of parser_one_mmw_demo_output_packet 44 | # **************************************************************************** 45 | 46 | import os 47 | import sys 48 | # import the parser function 49 | from parser_mmw_demo import parser_one_mmw_demo_output_packet 50 | 51 | ################################################################################## 52 | # INPUT CONFIGURATION 53 | ################################################################################## 54 | # get the captured file name (obtained from Visualizer via 'Record Start') 55 | # if (len(sys.argv) > 1): 56 | # capturedFileName=sys.argv[1] 57 | # else: 58 | # print ("Error: provide file name of the saved stream from Visualizer for OOB demo") 59 | # exit() 60 | 61 | capturedFileName = 'xwr18xx_processed_stream_2022_05_28T18_43_54_289.dat' 62 | 63 | ################################################################################## 64 | # USE parser_mmw_demo SCRIPT TO PARSE ABOVE INPUT FILES 65 | ################################################################################## 66 | # Read the entire file 67 | fp = open(capturedFileName, 'rb') 68 | readNumBytes = os.path.getsize(capturedFileName) 69 | print("readNumBytes: ", readNumBytes) 70 | allBinData = fp.read() 71 | print("allBinData: ", allBinData[0], allBinData[1], allBinData[2], allBinData[3]) 72 | fp.close() 73 | 74 | # init local variables 75 | totalBytesParsed = 0 76 | numFramesParsed = 0 77 | 78 | # parser_one_mmw_demo_output_packet extracts only one complete frame at a time 79 | # so call this in a loop till end of file 80 | while totalBytesParsed < readNumBytes: 81 | 82 | # parser_one_mmw_demo_output_packet function already prints the 83 | # parsed data to stdio. 
So here we only showcase saving the data to arrays 84 | # for further custom processing 85 | parser_result, \ 86 | headerStartIndex, \ 87 | totalPacketNumBytes, \ 88 | numDetObj, \ 89 | numTlv, \ 90 | subFrameNumber, \ 91 | detectedX_array, \ 92 | detectedY_array, \ 93 | detectedZ_array, \ 94 | detectedV_array, \ 95 | detectedRange_array, \ 96 | detectedAzimuth_array, \ 97 | detectedElevation_array, \ 98 | detectedSNR_array, \ 99 | detectedNoise_array = parser_one_mmw_demo_output_packet(allBinData[totalBytesParsed::1], readNumBytes - totalBytesParsed) 100 | 101 | # Check the parser result 102 | print("Parser result: ", parser_result) 103 | if parser_result == 0: 104 | totalBytesParsed += (headerStartIndex + totalPacketNumBytes) 105 | numFramesParsed += 1 106 | print("totalBytesParsed: ", totalBytesParsed) 107 | ################################################################################## 108 | # TODO: use the arrays returned by above parser as needed. 109 | # For array dimensions, see help(parser_one_mmw_demo_output_packet) 110 | # help(parser_one_mmw_demo_output_packet) 111 | ################################################################################## 112 | 113 | # For example, dump all S/W objects to a csv file 114 | import csv 115 | 116 | if numFramesParsed == 1: 117 | democsvfile = open('mmw_demo_output.csv', 'w', newline='') 118 | demoOutputWriter = csv.writer(democsvfile, delimiter=',', 119 | quotechar='', quoting=csv.QUOTE_NONE) 120 | demoOutputWriter.writerow(["frame", "DetObj#", "x", "y", "z", "v", "snr", "noise"]) 121 | 122 | for obj in range(numDetObj): 123 | demoOutputWriter.writerow([numFramesParsed - 1, obj, detectedX_array[obj], \ 124 | detectedY_array[obj], \ 125 | detectedZ_array[obj], \ 126 | detectedV_array[obj], \ 127 | detectedSNR_array[obj], \ 128 | detectedNoise_array[obj]]) 129 | 130 | else: 131 | # error in parsing; exit the loop 132 | break 133 | 134 | # All processing done; Exit 135 | print("numFramesParsed: ", numFramesParsed) 136 | -------------------------------------------------------------------------------- /library/TI/parser_mmw_demo.py: -------------------------------------------------------------------------------- 1 | """ 2 | ~/ti/mmwave_sdk_03_05_00_04/packages/ti/demo/xwr16xx/mmw/docs/doxygen/html/struct_mmw_demo__output__message__header__t.html 3 | """ 4 | # **************************************************************************** 5 | # * (C) Copyright 2020, Texas Instruments Incorporated. - www.ti.com 6 | # **************************************************************************** 7 | # * 8 | # * Redistribution and use in source and binary forms, with or without 9 | # * modification, are permitted provided that the following conditions are 10 | # * met: 11 | # * 12 | # * Redistributions of source code must retain the above copyright notice, 13 | # * this list of conditions and the following disclaimer. 14 | # * 15 | # * Redistributions in binary form must reproduce the above copyright 16 | # * notice, this list of conditions and the following disclaimer in the 17 | # * documentation and/or other materials provided with the distribution. 18 | # * 19 | # * Neither the name of Texas Instruments Incorporated nor the names of its 20 | # * contributors may be used to endorse or promote products derived from 21 | # * this software without specific prior written permission. 
22 | # * 23 | # * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 24 | # * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED 25 | # * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 26 | # * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR 27 | # * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, 28 | # * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, 29 | # * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR 30 | # * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF 31 | # * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING 32 | # * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 33 | # * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 34 | # * 35 | 36 | # import the required Python packages 37 | import struct 38 | import math 39 | import binascii 40 | import codecs 41 | 42 | # definitions for parser pass/fail 43 | TC_PASS = 0 44 | TC_FAIL = 1 45 | 46 | 47 | def getUint32(data): 48 | """! 49 | This function converts 4 bytes to a 32-bit unsigned integer. 50 | 51 | @param data : 1-dimension byte array 52 | @return : 32-bit unsigned integer 53 | """ 54 | return (data[0] + 55 | data[1]*256 + 56 | data[2]*65536 + 57 | data[3]*16777216) 58 | 59 | 60 | def getUint16(data): 61 | """! 62 | This function converts 2 bytes to a 16-bit unsigned integer. 63 | 64 | @param data : 1-dimension byte array 65 | @return : 16-bit unsigned integer 66 | """ 67 | return (data[0] + 68 | data[1]*256) 69 | 70 | 71 | def getHex(data): 72 | """! 73 | This function converts 4 bytes to a 32-bit unsigned integer in hex. 74 | 75 | @param data : 1-dimension byte array 76 | @return : 32-bit unsigned integer in hex 77 | """ 78 | return binascii.hexlify(data[::-1]) 79 | 80 | 81 | def checkMagicPattern(data): 82 | """! 83 | This function checks whether the data array contains the magic pattern that marks the start of one mmw demo output packet. 84 | 85 | @param data : 1-dimension byte array 86 | @return : 1 if magic pattern is found 87 | 0 if magic pattern is not found 88 | """ 89 | found = 0 90 | if data[0] == 2 and data[1] == 1 and data[2] == 4 and data[3] == 3 and data[4] == 6 and data[5] == 5 and data[6] == 8 and data[7] == 7: 91 | found = 1 92 | return found 93 | 94 | 95 | def parser_helper(data, readNumBytes, print_enable=1): 96 | """! 97 | This function is called by the parser_one_mmw_demo_output_packet() function or the application to read the input buffer, find the magic number and header location, and extract the frame length, the number of detected objects and the number of TLVs contained in this mmw demo output packet. 98 | 99 | @param data : 1-dimension byte array holding the data read from the mmw demo output. It is agnostic to whether the data comes directly from UART or from a file read. 
100 | @param readNumBytes : the number of bytes contained in this input byte array 101 | 102 | @return headerStartIndex : the mmw demo output packet header start location 103 | @return totalPacketNumBytes : the mmw demo output packet length 104 | @return numDetObj : the number of detected objects contained in this mmw demo output packet 105 | @return numTlv : the number of TLVs contained in this mmw demo output packet 106 | @return subFrameNumber : the subframe index (0,1,2 or 3) of the frame contained in this mmw demo output packet 107 | """ 108 | 109 | headerStartIndex = -1 110 | 111 | for index in range(readNumBytes): 112 | if checkMagicPattern(data[index:index+8:1]) == 1: 113 | headerStartIndex = index 114 | break 115 | 116 | if headerStartIndex == -1: # did not find the magic number, i.e. the output packet header 117 | totalPacketNumBytes = -1 118 | numDetObj = -1 119 | numTlv = -1 120 | subFrameNumber = -1 121 | platform = -1 122 | frameNumber = -1 123 | timeCpuCycles = -1 124 | else: # found the magic number, i.e. the output packet header 125 | totalPacketNumBytes = getUint32(data[headerStartIndex+12:headerStartIndex+16:1]) 126 | platform = getHex(data[headerStartIndex+16:headerStartIndex+20:1]) 127 | frameNumber = getUint32(data[headerStartIndex+20:headerStartIndex+24:1]) 128 | timeCpuCycles = getUint32(data[headerStartIndex+24:headerStartIndex+28:1]) 129 | numDetObj = getUint32(data[headerStartIndex+28:headerStartIndex+32:1]) 130 | numTlv = getUint32(data[headerStartIndex+32:headerStartIndex+36:1]) 131 | subFrameNumber = getUint32(data[headerStartIndex+36:headerStartIndex+40:1]) 132 | 133 | if print_enable == 1: 134 | print("headerStartIndex = %d" % headerStartIndex) 135 | print("totalPacketNumBytes = %d" % totalPacketNumBytes) 136 | print("platform = %s" % platform) 137 | print("frameNumber = %d" % frameNumber) 138 | print("timeCpuCycles = %d" % timeCpuCycles) 139 | print("numDetObj = %d" % numDetObj) 140 | print("numTlv = %d" % numTlv) 141 | print("subFrameNumber = %d" % subFrameNumber) 142 | 143 | return headerStartIndex, totalPacketNumBytes, numDetObj, numTlv, subFrameNumber 144 | 145 | 146 | def parser_one_mmw_demo_output_packet(data, readNumBytes, print_enable=1): 147 | """! 148 | This function is called by the application. First it calls the parser_helper() function to find the start location of the mmw demo output packet, then it extracts the contents from the output packet. 149 | Each invocation of this function handles only one frame at a time, and the user needs to loop around it to parse data for multiple frames. 150 | 151 | @param data : 1-dimension byte array holding the data read from the mmw demo output. It is agnostic to whether the data comes directly from UART or from a file read. 152 | @param readNumBytes : the number of bytes contained in this input byte array 153 | 154 | @return result : parser result. 
0 = pass, otherwise fail 155 | @return headerStartIndex : the mmw demo output packet header start location 156 | @return totalPacketNumBytes : the mmw demo output packet length 157 | @return numDetObj : the number of detected objects contained in this mmw demo output packet 158 | @return numTlv : the number of TLVs contained in this mmw demo output packet 159 | @return subFrameNumber : the subframe index (0,1,2 or 3) of the frame contained in this mmw demo output packet 160 | @return detectedX_array : 1-dimension array holding each detected target's x of the mmw demo output packet 161 | @return detectedY_array : 1-dimension array holding each detected target's y of the mmw demo output packet 162 | @return detectedZ_array : 1-dimension array holding each detected target's z of the mmw demo output packet 163 | @return detectedV_array : 1-dimension array holding each detected target's v of the mmw demo output packet 164 | @return detectedRange_array : 1-dimension array holding each detected target's range profile of the mmw demo output packet 165 | @return detectedAzimuth_array : 1-dimension array holding each detected target's azimuth of the mmw demo output packet 166 | @return detectedElevAngle_array : 1-dimension array holding each detected target's elevAngle of the mmw demo output packet 167 | @return detectedSNR_array : 1-dimension array holding each detected target's snr of the mmw demo output packet 168 | @return detectedNoise_array : 1-dimension array holding each detected target's noise of the mmw demo output packet 169 | """ 170 | 171 | headerNumBytes = 40 172 | 173 | PI = 3.14159265 174 | 175 | detectedX_array = [] 176 | detectedY_array = [] 177 | detectedZ_array = [] 178 | detectedV_array = [] 179 | detectedRange_array = [] 180 | detectedAzimuth_array = [] 181 | detectedElevAngle_array = [] 182 | detectedSNR_array = [] 183 | detectedNoise_array = [] 184 | 185 | result = TC_PASS 186 | 187 | # call parser_helper() function to find the output packet header start location and packet size 188 | (headerStartIndex, totalPacketNumBytes, numDetObj, numTlv, subFrameNumber) = parser_helper(data, readNumBytes, print_enable) 189 | 190 | if headerStartIndex == -1: 191 | result = TC_FAIL 192 | # print("************ Frame Fail, cannot find the magic words *****************") 193 | else: 194 | nextHeaderStartIndex = headerStartIndex + totalPacketNumBytes 195 | 196 | if headerStartIndex + totalPacketNumBytes > readNumBytes: 197 | result = TC_FAIL 198 | # print("********** Frame Fail, readNumBytes may not be long enough ***********") 199 | elif nextHeaderStartIndex + 8 < readNumBytes and checkMagicPattern(data[nextHeaderStartIndex:nextHeaderStartIndex+8:1]) == 0: 200 | result = TC_FAIL 201 | # print("********** Frame Fail, incomplete packet **********") 202 | elif numDetObj <= 0: 203 | result = 2 204 | # print("************ Frame Fail, numDetObj = %d *****************" % numDetObj) 205 | elif subFrameNumber > 3: 206 | result = TC_FAIL 207 | # print("************ Frame Fail, subFrameNumber = %d *****************" % subFrameNumber) 208 | else: 209 | # process the 1st TLV 210 | tlvStart = headerStartIndex + headerNumBytes 211 | 212 | tlvType = getUint32(data[tlvStart+0:tlvStart+4:1]) 213 | tlvLen = getUint32(data[tlvStart+4:tlvStart+8:1]) 214 | offset = 8 215 | 216 | if print_enable == 1: 217 | print("The 1st TLV") 218 | print(" type %d" % tlvType) 219 | print(" len %d bytes" % tlvLen) 220 | 221 | # the 1st TLV must be type 1 222 | if tlvType == 1 and tlvLen < totalPacketNumBytes: # MMWDEMO_UART_MSG_DETECTED_POINTS 223 | 224 | 
# TLV type 1 contains x, y, z, v values of all detected objects. 225 | # each x, y, z, v is a 32-bit float in IEEE 754 single-precision binary floating-point format, so every 16 bytes represent the x, y, z, v values of one detected object. 226 | 227 | # for each detected object, extract/convert the float x, y, z, v values and calculate the range profile and azimuth 228 | for obj in range(numDetObj): 229 | # convert byte0 to byte3 to float x value 230 | x = struct.unpack('<f', codecs.decode(binascii.hexlify(data[tlvStart + offset + 0:tlvStart + offset + 4:1]), 'hex'))[0] 231 | 232 | # convert byte4 to byte7 to float y value 233 | y = struct.unpack('<f', codecs.decode(binascii.hexlify(data[tlvStart + offset + 4:tlvStart + offset + 8:1]), 'hex'))[0] 234 | 235 | # convert byte8 to byte11 to float z value 236 | z = struct.unpack('<f', codecs.decode(binascii.hexlify(data[tlvStart + offset + 8:tlvStart + offset + 12:1]), 'hex'))[0] 237 | 238 | # convert byte12 to byte15 to float v value 239 | v = struct.unpack('<f', codecs.decode(binascii.hexlify(data[tlvStart + offset + 12:tlvStart + offset + 16:1]), 'hex'))[0] 240 | 241 | # calculate range profile from x, y, z 242 | compDetectedRange = math.sqrt((x * x)+(y * y)+(z * z)) 243 | 244 | # calculate azimuth from x, y 245 | if y == 0: 246 | if x >= 0: 247 | detectedAzimuth = 90 248 | else: 249 | detectedAzimuth = -90 250 | else: 251 | detectedAzimuth = math.atan(x/y) * 180 / PI 252 | 253 | # calculate elevation angle from x, y, z 254 | if x == 0 and y == 0: 255 | if z >= 0: 256 | detectedElevAngle = 90 257 | else: 258 | detectedElevAngle = -90 259 | else: 260 | detectedElevAngle = math.atan(z/math.sqrt((x * x)+(y * y))) * 180 / PI 261 | 262 | detectedX_array.append(x) 263 | detectedY_array.append(y) 264 | detectedZ_array.append(z) 265 | detectedV_array.append(v) 266 | detectedRange_array.append(compDetectedRange) 267 | detectedAzimuth_array.append(detectedAzimuth) 268 | detectedElevAngle_array.append(detectedElevAngle) 269 | 270 | offset = offset + 16 271 | # end of for obj in range(numDetObj) for 1st TLV 272 | 273 | # Process the 2nd TLV 274 | tlvStart = tlvStart + 8 + tlvLen 275 | 276 | tlvType = getUint32(data[tlvStart+0:tlvStart+4:1]) 277 | tlvLen = getUint32(data[tlvStart+4:tlvStart+8:1]) 278 | offset = 8 279 | 280 | if print_enable == 1: 281 | print("The 2nd TLV") 282 | print(" type %d" % tlvType) 283 | print(" len %d bytes" % tlvLen) 284 | 285 | if tlvType == 7: 286 | 287 | # TLV type 7 contains snr and noise of all detected objects. 288 | # each snr and noise is a 16-bit integer represented by 2 bytes, so every 4 bytes represent the snr and noise of one detected object. 289 | 290 | # for each detected object, extract snr and noise 291 | for obj in range(numDetObj): 292 | # byte0 and byte1 represent snr. convert 2 bytes to 16-bit integer 293 | snr = getUint16(data[tlvStart + offset + 0:tlvStart + offset + 2:1]) 294 | # byte2 and byte3 represent noise. 
convert 2 bytes to 16-bit integer 295 | noise = getUint16(data[tlvStart + offset + 2:tlvStart + offset + 4:1]) 296 | 297 | detectedSNR_array.append(snr) 298 | detectedNoise_array.append(noise) 299 | 300 | offset = offset + 4 301 | else: 302 | for obj in range(numDetObj): 303 | detectedSNR_array.append(0) 304 | detectedNoise_array.append(0) 305 | # end of if tlvType == 7 306 | 307 | if print_enable == 1: 308 | print(" x(m) y(m) z(m) v(m/s) Com0range(m) azimuth(deg) elevAngle(deg) snr(0.1dB) noise(0.1dB)") 309 | for obj in range(numDetObj): 310 | print(" obj%3d: %12f %12f %12f %12f %12f %12f %12d %12d %12d" % (obj, detectedX_array[obj], detectedY_array[obj], detectedZ_array[obj], detectedV_array[obj], detectedRange_array[obj], detectedAzimuth_array[obj], detectedElevAngle_array[obj], detectedSNR_array[obj], detectedNoise_array[obj])) 311 | 312 | return result, headerStartIndex, totalPacketNumBytes, numDetObj, numTlv, subFrameNumber, detectedX_array, detectedY_array, detectedZ_array, detectedV_array, detectedRange_array, detectedAzimuth_array, detectedElevAngle_array, detectedSNR_array, detectedNoise_array 313 | -------------------------------------------------------------------------------- /library/__init__.py: -------------------------------------------------------------------------------- 1 | # essential modules 2 | from .utils import * 3 | from .data_processor import * 4 | 5 | from .radar_reader import * 6 | from .frame_early_processor import * 7 | 8 | from .DBSCAN_generator import * 9 | from .bgnoise_filter import * 10 | from .human_object import * 11 | from .human_tracking import * 12 | from .frame_post_processor import * 13 | from .visualizer import * 14 | from .sync_monitor import * 15 | 16 | # optional modules 17 | try: 18 | from .save_center import * 19 | except: 20 | pass 21 | try: 22 | from .video_compressor import * 23 | except: 24 | pass 25 | try: 26 | from .camera import * 27 | except: 28 | pass 29 | try: 30 | from .email_notifier import * 31 | except: 32 | pass 33 | 34 | 35 | 36 | -------------------------------------------------------------------------------- /library/bgnoise_filter.py: -------------------------------------------------------------------------------- 1 | """ 2 | Background noise filter, abbr. 
BGN 3 | """ 4 | 5 | from collections import deque 6 | 7 | import numpy as np 8 | from sklearn.cluster import DBSCAN 9 | 10 | from library.data_processor import DataProcessor 11 | 12 | 13 | class BGNoiseFilter(DataProcessor): 14 | def __init__(self, **kwargs_CFG): 15 | """ 16 | pass config static parameters 17 | """ 18 | """ module own config """ 19 | BGN_CFG = kwargs_CFG['BGNOISE_FILTER_CFG'] 20 | self.BGN_enable = BGN_CFG['BGN_enable'] 21 | 22 | # get BGN processing para 23 | self.BGN_deque = deque([], BGN_CFG['BGN_deque_length']) 24 | self.BGN_accept_ES_threshold = BGN_CFG['BGN_accept_ES_threshold'] 25 | self.BGN_filter_ES_threshold = BGN_CFG['BGN_filter_ES_threshold'] 26 | self.BGN_DBS_window_step = BGN_CFG['BGN_DBS_window_step'] 27 | self.BGN_DBS_eps = BGN_CFG['BGN_DBS_eps'] 28 | self.BGN_DBS_min_samples = BGN_CFG['BGN_DBS_min_samples'] 29 | self.BGN_cluster_tf = BGN_CFG['BGN_cluster_tf'] 30 | self.BGN_cluster_xextension = BGN_CFG['BGN_cluster_xextension'] 31 | self.BGN_cluster_yextension = BGN_CFG['BGN_cluster_yextension'] 32 | self.BGN_cluster_zextension = BGN_CFG['BGN_cluster_zextension'] 33 | 34 | """ 35 | self content 36 | """ 37 | self.BGN_cluster_boundary = np.ndarray([0, 6], dtype=np.float16) # (xmin, xmax, ymin, ymax, zmin, zmax) 38 | 39 | """ 40 | inherit father class __init__ para 41 | """ 42 | super().__init__(**kwargs_CFG) 43 | 44 | # filter the background noise 45 | def BGN_filter(self, data_points): 46 | for c in self.BGN_cluster_boundary: 47 | xboundary = (c[0], c[1]) 48 | yboundary = (c[2], c[3]) 49 | zboundary = (c[4], c[5]) 50 | # identify the noise inside each cluster 51 | noise, _ = self.DP_np_filter(data_points, axis=0, range_lim=xboundary, mode=0) 52 | noise, _ = self.DP_np_filter(noise, axis=1, range_lim=yboundary, mode=0) 53 | noise, _ = self.DP_np_filter(noise, axis=2, range_lim=zboundary, mode=0) 54 | # identify the noise with no speed and low ES 55 | noise, _ = self.DP_np_filter(noise, axis=3, range_lim=0) 56 | noise, _ = self.DP_np_filter(noise, axis=4, range_lim=self.BGN_filter_ES_threshold) 57 | # remove noise points specifically 58 | data_points = self.DP_np_2D_row_removal(data_points, noise) 59 | return data_points 60 | 61 | # get background noise area 62 | def BGN_get_filter_area(self): 63 | BGN_block_list = [] 64 | for bb in self.BGN_cluster_boundary: 65 | BGN_block_list.append(self.DP_cubehull(None, (bb[0], bb[1]), (bb[2], bb[3]), (bb[4], bb[5]))) 66 | return BGN_block_list 67 | 68 | # background noise area update 69 | def BGN_update(self, bg_noise): 70 | # identify the noise with no speed and low ES 71 | bg_noise, _ = self.DP_np_filter(bg_noise, axis=3, range_lim=0) 72 | bg_noise, _ = self.DP_np_filter(bg_noise, axis=4, range_lim=self.BGN_accept_ES_threshold) 73 | 74 | # append the BG points 75 | self.BGN_deque.append(bg_noise) 76 | BGN_points = np.concatenate(self.BGN_deque) 77 | 78 | # reach the window length 79 | if len(self.BGN_deque) == self.BGN_deque.maxlen and len(BGN_points) > 1000: 80 | labels = DBSCAN(eps=self.BGN_DBS_eps, min_samples=self.BGN_DBS_min_samples).fit_predict(BGN_points[:, 0:3]) # only feed xyz coords 81 | # filter out DBSCAN noise 82 | valid_points = BGN_points[labels != -1] 83 | valid_labels = labels[labels != -1] 84 | 85 | # select the clusters with point number more than threshold 86 | unique, counts = np.unique(valid_labels, return_counts=True) 87 | unique = unique[counts > len(BGN_points) * self.BGN_cluster_tf] 88 | for u in unique: 89 | cluster = valid_points[valid_labels == u] 90 | xyz = 
self.DP_boundary_calculator(cluster, axis=range(3)) # get boundary info of cluster 91 | xyz_extended = np.array([xyz[0][0] - self.BGN_cluster_xextension, 92 | xyz[0][1] + self.BGN_cluster_xextension, 93 | xyz[1][0] - self.BGN_cluster_yextension, 94 | xyz[1][1] + self.BGN_cluster_yextension, 95 | xyz[2][0] - self.BGN_cluster_zextension, 96 | xyz[2][1] + self.BGN_cluster_zextension], dtype=np.float16) 97 | 98 | # check whether this cluster should be saved by comparing with the database clusters' area 99 | self.BGN_cluster_boundary = self._check_area_overlap(self.BGN_cluster_boundary, xyz_extended) 100 | 101 | # delete previous points for sliding the window 102 | _ = [self.BGN_deque.popleft() for i in range(self.BGN_DBS_window_step)] 103 | 104 | def _check_area_overlap(self, database, data_point): 105 | """ 106 | check whether incoming cluster cube is inside one of the stored cluster cubes, 107 | if it is inside then don't save, 108 | if it is cross then save, 109 | if it contains then save and delete the clusters which are contained 110 | :param database: (ndarray) data_numbers(n) * channels(6), (xmin, xmax, ymin, ymax, zmin, zmax) for each cluster stored 111 | :param data_point: (ndarray) channels(6), incoming cluster boundary info 112 | :return: database: (ndarray) data_numbers(n) * channels(6), updated database 113 | """ 114 | # check if data_point is inside any of database 115 | idx0, _ = self.DP_get_idx_bool(database, axis=0, range_lim=(None, data_point[0]), mode=0) # xmin 116 | idx1, _ = self.DP_get_idx_bool(database, axis=1, range_lim=(data_point[1], None), mode=0) # xmax 117 | idx2, _ = self.DP_get_idx_bool(database, axis=2, range_lim=(None, data_point[2]), mode=0) # ymin 118 | idx3, _ = self.DP_get_idx_bool(database, axis=3, range_lim=(data_point[3], None), mode=0) # ymax 119 | idx4, _ = self.DP_get_idx_bool(database, axis=4, range_lim=(None, data_point[4]), mode=0) # zmin 120 | idx5, _ = self.DP_get_idx_bool(database, axis=5, range_lim=(data_point[5], None), mode=0) # zmax 121 | idx_inside = (idx0 & idx1 & idx2 & idx3 & idx4 & idx5).any() # if any points found 122 | 123 | # check if data_point contains any of database 124 | idx0, _ = self.DP_get_idx_bool(database, axis=0, range_lim=(data_point[0], None), mode=0) # xmin 125 | idx1, _ = self.DP_get_idx_bool(database, axis=1, range_lim=(None, data_point[1]), mode=0) # xmax 126 | idx2, _ = self.DP_get_idx_bool(database, axis=2, range_lim=(data_point[2], None), mode=0) # ymin 127 | idx3, _ = self.DP_get_idx_bool(database, axis=3, range_lim=(None, data_point[3]), mode=0) # ymax 128 | idx4, _ = self.DP_get_idx_bool(database, axis=4, range_lim=(data_point[4], None), mode=0) # zmin 129 | idx5, _ = self.DP_get_idx_bool(database, axis=5, range_lim=(None, data_point[5]), mode=0) # zmax 130 | idx_contains_np = idx0 & idx1 & idx2 & idx3 & idx4 & idx5 131 | idx_contains = idx_contains_np.any() # if any points found 132 | 133 | # update database, if both idx_inside and idx_contains are True, then data_point is identical with one of database, then pass 134 | if idx_inside: 135 | pass 136 | elif idx_contains: 137 | database = database[~idx_contains_np] 138 | database = np.concatenate([database, data_point[np.newaxis, :]]) 139 | else: 140 | database = np.concatenate([database, data_point[np.newaxis, :]]) 141 | return database 142 | 143 | 144 | if __name__ == '__main__': 145 | bgn = BGNoiseFilter() 146 | datab = np.array([[0.34, 0.34, 3.53, 3.58, -0.40, -0.34], 147 | [0.34, 0.34, 3.53, 3.56, -0.40, -0.04], 148 | ]) 149 | data1 = np.array([0.34, 0.34, 
3.53, 3.58, -0.40, -0.34]) 150 | data2 = np.array([0.34, 0.34, 3.53, 3.56, -0.40, -0.04]) 151 | data3 = np.array([0.34, 0.34, 3.53, 3.56, -0.40, -0.14]) 152 | data4 = np.array([0.34, 0.34, 3.53, 3.56, -0.40, -0.04]) 153 | datab = bgn._check_area_overlap(datab, data1) 154 | print(datab) 155 | datab = bgn._check_area_overlap(datab, data2) 156 | print(datab) 157 | datab = bgn._check_area_overlap(datab, data3) 158 | print(datab) 159 | datab = bgn._check_area_overlap(datab, data4) 160 | print(datab) 161 | pass 162 | -------------------------------------------------------------------------------- /library/camera.py: -------------------------------------------------------------------------------- 1 | """ 2 | Designed to capture the image by using web camera, abbr. CAM 3 | """ 4 | 5 | import time 6 | from datetime import datetime 7 | from multiprocessing import Process, Manager 8 | 9 | import cv2 10 | 11 | 12 | class Camera: 13 | def __init__(self, run_flag, shared_param_dict, **kwargs_CFG): 14 | """ 15 | get shared values and queues 16 | """ 17 | self.run_flag = run_flag 18 | # shared params 19 | self.save_queue = shared_param_dict['save_queue'] 20 | self.autosave_flag = shared_param_dict['autosave_flag'] 21 | self.status = shared_param_dict['proc_status_dict'] 22 | self.status['Module_CAM'] = True 23 | """ 24 | pass config static parameters 25 | """ 26 | """ module own config """ 27 | CAM_CFG = kwargs_CFG['CAMERA_CFG'] 28 | self.name = CAM_CFG['name'] 29 | self.capture = cv2.VideoCapture(CAM_CFG['camera_index'], cv2.CAP_DSHOW) # create a camera capture 30 | self.capture.set(cv2.CAP_PROP_FRAME_WIDTH, CAM_CFG['capture_resolution'][0]) 31 | self.capture.set(cv2.CAP_PROP_FRAME_HEIGHT, CAM_CFG['capture_resolution'][1]) 32 | self.window_enable = CAM_CFG['window_enable'] 33 | 34 | self.auto_inactive_skip_enable = CAM_CFG['auto_inactive_skip_enable'] 35 | 36 | """ other configs """ 37 | self.AUTOSAVE_ENABLE = kwargs_CFG['AUTOSAVE_ENABLE'] 38 | 39 | """ 40 | self content 41 | """ 42 | self.capture.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG')) # set video coding format 43 | # self.capture.set(cv2.CAP_PROP_FPS, 2) # not work for real-time capture 44 | self.w = int(self.capture.get(cv2.CAP_PROP_FRAME_WIDTH)) 45 | self.h = int(self.capture.get(cv2.CAP_PROP_FRAME_HEIGHT)) 46 | self.fps = self.capture.get(cv2.CAP_PROP_FPS) 47 | 48 | self._log(f'Start...\tWidth: {self.w} Height: {self.h} FPS: {self.fps}') 49 | 50 | # module entrance 51 | def run(self): 52 | if self.window_enable: 53 | # create window and set as top window 54 | cv2.namedWindow(self.name, cv2.WINDOW_NORMAL) 55 | cv2.resizeWindow(self.name, self.w, self.h) 56 | cv2.setWindowProperty(self.name, cv2.WND_PROP_TOPMOST, cv2.WINDOW_FULLSCREEN) 57 | 58 | # read and show frames 59 | while self.run_flag.value: 60 | # adaptive long skip when no object is detected 61 | while self.auto_inactive_skip_enable: 62 | if self.run_flag.value and self.AUTOSAVE_ENABLE and not self.autosave_flag.value: 63 | time.sleep(0.1) 64 | else: 65 | break 66 | 67 | rval, frame = self.capture.read() 68 | # Break the loop if the video capture object is not working 69 | if not rval: 70 | self._log('Camera capture failed!') 71 | break 72 | 73 | # put frame and time into queue 74 | self.save_queue.put({'source' : 'camera', 75 | 'data' : frame, 76 | 'timestamp': time.time(), 77 | }) 78 | 79 | # display window 80 | if self.window_enable: 81 | cv2.imshow(self.name, frame) # show frames 82 | cv2.waitKey(1) # must use cv2 builtin keyboard functions if show frames in real-time 
83 | 84 | # close window 85 | cv2.destroyAllWindows() 86 | self._log('Camera is off.') 87 | 88 | def _log(self, txt): # print with device name 89 | print(f'[{self.__class__.__name__}]\t{txt}') 90 | 91 | def __del__(self): 92 | self.capture.release() 93 | self._log(f"Closed. Timestamp: {datetime.now().strftime('%Y-%m-%d_%H:%M:%S')}") 94 | self.status['Module_CAM'] = False 95 | 96 | 97 | def cam_proc_method(_run_flag, _shared_param_dict, _kwargs_CFG): 98 | c = Camera(_run_flag, _shared_param_dict, **_kwargs_CFG) 99 | c.run() 100 | 101 | 102 | def monitor_method(_run_flag, _shared_param_dict): 103 | while _run_flag.value: 104 | _shared_param_dict['save_queue'].get() 105 | 106 | 107 | if __name__ == '__main__': 108 | CAMERA_CFG = { 109 | 'name' : 'Camera', 110 | 'camera_index' : 1, 111 | 'capture_resolution' : (1280, 720), # (1280, 720) or (960, 540) 112 | 'window_enable' : True, # if True, the radar data and camera data can not be saved together 113 | 114 | 'auto_inactive_skip_enable': False, # long skip all camera frames when no object is detected 115 | } 116 | kwargs_CFG = {'CAMERA_CFG' : CAMERA_CFG, 117 | 'AUTOSAVE_ENABLE': False} 118 | 119 | run_f = Manager().Value('b', True) 120 | # generate save flag dict 121 | shared_param_d = {'save_queue' : Manager().Queue(), 122 | 'mansave_flag' : Manager().Value('c', None), # set as None, 'image' or 'video', only triggered at the end of recording 123 | 'autosave_flag' : Manager().Value('b', False), # set as False, True or False, constantly high from the beginning to the end of recording 124 | 'compress_video_file': Manager().Value('c', None), # the record video file waiting to be compressed 125 | 'email_image' : Manager().Value('f', None), # for image info from save_center to email_notifier module 126 | 'proc_status_dict': Manager().dict(), # for process status 127 | } 128 | proc_list = [] 129 | 130 | camera_proc = Process(target=cam_proc_method, args=(run_f, shared_param_d, kwargs_CFG)) 131 | proc_list.append(camera_proc) 132 | monitor_proc = Process(target=monitor_method, args=(run_f, shared_param_d)) 133 | proc_list.append(monitor_proc) 134 | 135 | # start the processes and wait to finish 136 | for t in proc_list: 137 | t.start() 138 | for t in proc_list: 139 | t.join() 140 | 141 | # capture = cv2.VideoCapture(1, cv2.CAP_DSHOW) # index 0 is the built-in camera 142 | # cv2.namedWindow('video', cv2.WINDOW_NORMAL) 143 | # while 1: 144 | # ret, frame = capture.read() # read from the camera: ret is True/False for whether a frame was grabbed, frame is the current video frame 145 | # # frame = cv2.flip(frame, 1) # the camera faces the user, so flip the image horizontally for a natural display 146 | # cv2.imshow("video", frame) 147 | # c = cv2.waitKey(1) 148 | # if c == 27: 149 | # break 150 | # 151 | # if cv2.waitKey(1) & 0xFF == ord('q'): # if 'q' is pressed, save a screenshot and exit 152 | # cv2.imwrite("test.png", frame) # save path 153 | # break 154 | # 155 | # capture.release() 156 | # cv2.destroyAllWindows() 157 | -------------------------------------------------------------------------------- /library/data_processor.py: -------------------------------------------------------------------------------- 1 | """ 2 | Designed for processing the data, basic functions, abbr. 
DP 3 | data(ndarray) = data_numbers(n) * channels(x, y, z, v, SNR) 4 | """ 5 | 6 | import numpy as np 7 | from scipy.spatial import ConvexHull 8 | 9 | 10 | class DataProcessor: 11 | def DP_list_nesting_remover(self, input_list, output_list=None): 12 | """ 13 | to extract each element inside a list with deep nesting level 14 | :param input_list: (list/element) a list with multiple nesting level 15 | :param output_list: (list) a cumulated list during iteration 16 | :return: output_list: (list) a non-nesting list 17 | """ 18 | # list_nesting_remover = lambda list_in: [list_out for i in list_in for list_out in list_nesting_remover(i)] if type(list_in) is list else [list_in] 19 | 20 | if output_list is None: 21 | output_list = [] 22 | if type(input_list) is list: 23 | for i in input_list: 24 | output_list = self.DP_list_nesting_remover(i, output_list) 25 | else: 26 | output_list.append(input_list) 27 | return output_list 28 | 29 | def DP_ES_Speed_filter(self, data_points, ES_threshold): 30 | """ 31 | :param data_points: (ndarray) data_numbers(n) * channels(c=5) 32 | :param ES_threshold: (dict) the ES threshold 33 | :return: data_points: (ndarray) data_numbers(n) * channels(c=5) 34 | noise: (ndarray) data_numbers(n) * channels(c=5) 35 | """ 36 | # remove points with low energy strength 37 | data_points, noise = self.DP_np_filter(data_points, axis=4, range_lim=ES_threshold['range']) 38 | 39 | # identify the noise with speed 40 | if len(noise) > 0 and ES_threshold['speed_none_0_exception']: 41 | noise, noise_with_speed = self.DP_np_filter(noise, axis=3, range_lim=0) 42 | data_points = np.concatenate([data_points, noise_with_speed]) 43 | return data_points, noise 44 | 45 | def DP_np_filter(self, data_points, axis, range_lim, mode=1): 46 | """ 47 | only one axis can be processed in one call 48 | :param data_points: (ndarray) data_numbers(n) * channels(c) 49 | :param axis: (int) the axis number 50 | :param range_lim: (tuple/int/float) (bottom_lim, upper_lim) element can be None, the range for preserved data 51 | :param mode: (int) 0-[min, max], 1-[min, max), 2-(min, max], 3-(min, max), include boundary or not 52 | :return: data_preserved: (ndarray) data_numbers(n) * channels(c) 53 | data_removed: (ndarray) data_numbers(n) * channels(c) 54 | """ 55 | preserved_index, removed_index = self.DP_get_idx_bool(data_points, axis, range_lim, mode) 56 | # get data and noise 57 | data_preserved = data_points[preserved_index] 58 | data_removed = data_points[removed_index] 59 | return data_preserved, data_removed 60 | 61 | def DP_get_idx_bool(self, data_points, axis, range_lim, mode=1): 62 | """ 63 | only one axis can be processed in one call 64 | :param data_points: (ndarray) data_numbers(n) * channels(c) 65 | :param axis: (int) the axis number 66 | :param range_lim: (tuple/int/float) (bottom_lim, upper_lim) element can be None, the range for preserved data 67 | :param mode: (int) 0-[min, max], 1-[min, max), 2-(min, max], 3-(min, max), include boundary or not 68 | :return: preserved_index: (ndarray-bool) data_numbers(n) 69 | removed_index: (ndarray-bool) data_numbers(n) 70 | """ 71 | # initialize the index array 72 | preserved_index = np.ones(len(data_points), dtype=bool) 73 | removed_index = np.zeros(len(data_points), dtype=bool) 74 | 75 | if range_lim is not None: 76 | if type(range_lim) is tuple or type(range_lim) is list: # expect list type 77 | if range_lim[0] is not None: 78 | if mode == 0 or mode == 1: 79 | index = data_points[:, axis] >= range_lim[0] 80 | else: 81 | index = data_points[:, axis] > 
range_lim[0] 82 | # update the index 83 | preserved_index = preserved_index & index 84 | removed_index = removed_index | ~index 85 | if range_lim[1] is not None: 86 | if mode == 0 or mode == 2: 87 | index = data_points[:, axis] <= range_lim[1] 88 | else: 89 | index = data_points[:, axis] < range_lim[1] 90 | # update the index 91 | preserved_index = preserved_index & index 92 | removed_index = removed_index | ~index 93 | else: # expect int/float type 94 | index = data_points[:, axis] == range_lim 95 | # update the index 96 | preserved_index = preserved_index & index 97 | removed_index = removed_index | ~index 98 | return preserved_index, removed_index 99 | 100 | def DP_np_2D_row_removal(self, database, data_remove): 101 | """ 102 | remove those data rows in the data_remove from the database 103 | :param database: (ndarray) data_numbers(n) * channels(c), original database 104 | :param data_remove: (ndarray) data_numbers(n) * channels(c), need to be removed from database 105 | :return: database: (ndarray) data_numbers(n) * channels(c), updated database 106 | """ 107 | # locate the data_remove in database first and get boolean index 108 | database_index = np.zeros(len(database), dtype=bool) 109 | for d in data_remove: 110 | # locate one data row which needs to be removed in database 111 | row_index = np.ones(len(database), dtype=bool) 112 | for i in range(data_remove.shape[1]): 113 | idx, _ = self.DP_get_idx_bool(database, axis=i, range_lim=d[i]) 114 | row_index = row_index & idx 115 | database_index = database_index | row_index 116 | # reverse the index to remove data_remove and output database 117 | return database[~database_index] 118 | 119 | def DP_boundary_calculator(self, data_points, axis): 120 | """ 121 | multiple axes can be processed in one call 122 | :param data_points: (ndarray) data_numbers(n>0) * channels(c) 123 | :param axis: (int/tuple/list/range) the axis number 124 | :return: result: (tuple-tuple) boundaries, (min, max) 125 | """ 126 | result = [] 127 | axis_list = [axis] if type(axis) == int else axis 128 | for axis in axis_list: 129 | dmin, dmax = data_points[:, axis].min(), data_points[:, axis].max() 130 | result.append((dmin, dmax)) 131 | return tuple(result) if len(result) > 1 else result[0] 132 | 133 | # def repeated_points_removal(self, data_points): 134 | # """ 135 | # :param data_points: (ndarray) data_numbers(n) * channels(c) 136 | # :return: (ndarray) data_numbers(n) * channels(c) 137 | # """ 138 | # # lower the precision to speed up 139 | # data_points = data_points.astype(np.float16) 140 | # data_points = np.around(data_points, decimals=2) # not working when the number is too big (more than 500) 141 | # # remove repeated points 142 | # data_points = np.unique(data_points, axis=0) 143 | # return data_points 144 | 145 | def DP_convexhull(self, cluster): 146 | """ 147 | :param cluster: (ndarray) data_numbers(n) * channels(c>=3) for cluster data 148 | :return: (ndarray) data_numbers(m) * channels(3) for hull vertices, with the first vertex repeated at the end to close the shape 149 | """ 150 | try: # just in case of insufficient points or all points being collinear 151 | vertices_index = ConvexHull(cluster[:, 0:3]).vertices 152 | vertices_index = np.concatenate([vertices_index, vertices_index[0:1]]) # connect end to end for drawing closed shape 153 | except: 154 | vertices_index = [] 155 | return cluster[vertices_index] 156 | 157 | def DP_cubehull(self, cluster, *args): 158 | """ 159 | :param cluster: (ndarray) data_numbers(n) * channels(c>=3) for cluster data 160 | :return: (ndarray) data_numbers(16) * channels(3) for hull 
vertices 161 | """ 162 | try: 163 | xboundary, yboundary, zboundary = args 164 | except: 165 | xboundary, yboundary, zboundary = self.DP_boundary_calculator(cluster, axis=range(3)) 166 | xmin, xmax = xboundary 167 | ymin, ymax = yboundary 168 | zmin, zmax = zboundary 169 | x = [xmin, xmax, xmax, xmin, xmin, xmin, xmax, xmax, xmin, xmin, xmax, xmax, xmax, xmax, xmin, xmin] 170 | y = [ymin, ymin, ymax, ymax, ymin, ymin, ymin, ymax, ymax, ymin, ymin, ymin, ymax, ymax, ymax, ymax] 171 | z = [zmin, zmin, zmin, zmin, zmin, zmax, zmax, zmax, zmax, zmax, zmax, zmin, zmin, zmax, zmax, zmin] 172 | return np.array([x, y, z]).T 173 | 174 | 175 | if __name__ == '__main__': 176 | # frame_group = np.zeros([5, 3]).astype(np.float16) 177 | # frame_group[0, :] = [2, 1, 1] 178 | # frame_group[1, :] = [1, -1, 1] 179 | # frame_group[2, :] = [1, 0, 0] 180 | # frame_group[-1, :] = [0, 1, 10] 181 | # 182 | # dp = DataProcessor() 183 | # a, b, c = dp.boundary_calculator(frame_group, axis=range(3)) 184 | 185 | a = [1, [12, [1, [1, 23, 4, [1, 2, [3]]]]], [3, [1, [2, 645, [3, 5, [456, [4, [45, 7], [2, [[5]]]]]]]], 4]] 186 | b = [[1, [12, 1, [1, 23, 4]]]] 187 | dp = DataProcessor() 188 | print(dp.DP_list_nesting_remover(a)) 189 | # print(dp.list_nesting_remover(b)) 190 | 191 | pass 192 | -------------------------------------------------------------------------------- /library/email_notifier.py: -------------------------------------------------------------------------------- 1 | """ 2 | Designed to send email notifications, abbr. EMN 3 | based on gmail services and google oauth2 credentials 4 | """ 5 | 6 | import base64 7 | import copy 8 | import os.path 9 | import time 10 | from datetime import datetime 11 | from email.mime.application import MIMEApplication 12 | from email.mime.image import MIMEImage 13 | from email.mime.multipart import MIMEMultipart 14 | from email.mime.text import MIMEText 15 | 16 | import cv2 17 | from func_timeout import func_set_timeout 18 | from google.auth.transport.requests import Request 19 | from google.oauth2.credentials import Credentials 20 | from google_auth_oauthlib.flow import InstalledAppFlow 21 | from googleapiclient.discovery import build 22 | 23 | 24 | class Gmail: 25 | def __init__(self, manual_token_path): 26 | self.manual_token_path = manual_token_path 27 | # check manual_token_path is available 28 | if os.path.exists(manual_token_path): 29 | self.auto_token_path = os.path.join(os.path.dirname(manual_token_path), 'auto_token.json') 30 | else: 31 | raise FileExistsError(f'{manual_token_path} does not exist') 32 | 33 | def send(self, message): 34 | if message is not None: 35 | # get credentials 36 | self._get_credentials() 37 | # create message 38 | to = message['to'] 39 | subject = message['subject'] 40 | text = ''.join(message['text'].split(' ')) 41 | image_in_text = message['image_in_text'] 42 | attachment = message['attachment'] 43 | message = self._create_message(to, subject, text) 44 | message = self._attach_with_image_in_text(message, image_in_text) 45 | message = self._attach_with_attachment(message, attachment) 46 | 47 | # send message 48 | self._send_message({'raw': base64.urlsafe_b64encode(message.as_bytes()).decode()}) 49 | 50 | @staticmethod 51 | def _create_message(to, subject, text): 52 | _message = MIMEMultipart() 53 | _message['to'] = to 54 | _message['subject'] = subject 55 | _message.attach(MIMEText(text)) 56 | return _message 57 | 58 | @staticmethod 59 | def _attach_with_image_in_text(_message, image_path): 60 | image_path = [image_path] if type(image_path) 
is not list else image_path 61 | for idx, img_p in enumerate(image_path): 62 | html = f""" 63 | <html> 64 | <body> 65 | <img src="cid:image{idx}"> 66 | </body> 67 | </html> 68 | """ 69 | # attach the image referenced by the cid in the html above 70 | with open(img_p, 'rb') as f: 71 | img = MIMEImage(f.read()) 72 | img.add_header('Content-ID', f'<image{idx}>') 73 | 74 | _message.attach(MIMEText(html, 'html')) 75 | _message.attach(img) 76 | return _message 77 | 78 | @staticmethod 79 | def _attach_with_attachment(_message, file_path): 80 | file_path = [file_path] if type(file_path) is not list else file_path 81 | # add the file attachments 82 | for f_p in file_path: 83 | with open(f_p, 'rb') as f: 84 | attachment = MIMEApplication(f.read()) 85 | attachment.add_header('Content-Disposition', 'attachment', filename=os.path.basename(f_p)) 86 | _message.attach(attachment) 87 | return _message 88 | 89 | @func_set_timeout(60) 90 | def _get_credentials(self): 91 | SCOPES = ['https://mail.google.com/'] 92 | 93 | def _manual_auth(_manual_token_path, _SCOPES): 94 | _flow = InstalledAppFlow.from_client_secrets_file(_manual_token_path, _SCOPES) 95 | _credentials = _flow.run_local_server(port=0) 96 | self._log('access granted') 97 | return _credentials 98 | 99 | # try to get credentials if there is an existing auto_token 100 | if os.path.exists(self.auto_token_path): 101 | self._log('auto_token exists') 102 | credentials = Credentials.from_authorized_user_file(self.auto_token_path, SCOPES) 103 | # if invalid, try auto refresh 104 | if not credentials.valid: 105 | try: 106 | credentials.refresh(Request()) 107 | self._log('credentials auto refresh') 108 | except: 109 | # get user manual authorization 110 | self._log('auto refresh failed, need manual authorization') 111 | credentials = _manual_auth(self.manual_token_path, SCOPES) 112 | else: 113 | # get user manual authorization 114 | self._log('no existing auto_token, need manual authorization') 115 | credentials = _manual_auth(self.manual_token_path, SCOPES) 116 | 117 | # save the new token for next time 118 | with open(self.auto_token_path, 'w') as t: 119 | t.write(credentials.to_json()) 120 | self._log('new auto_token is saved') 121 | self.gmail_service = build('gmail', 'v1', credentials=credentials) 122 | 123 | def _send_message(self, _message): 124 | _message = (self.gmail_service.users().messages().send(userId='me', body=_message).execute()) 125 | self._log(f'sent message to {_message}') 126 | 127 | def _log(self, txt): # print with device name 128 | print(f'[{self.__class__.__name__}]\t{txt}') 129 | 130 | def __del__(self): 131 | self._log('Closed.') 132 | 133 | 134 | class EmailNotifier(Gmail): 135 | def __init__(self, run_flag, shared_param_dict, **kwargs_CFG): 136 | """ 137 | get shared values and queues 138 | """ 139 | self.run_flag = run_flag 140 | # shared params 141 | self.autosave_flag = shared_param_dict['autosave_flag'] 142 | self.email_image = shared_param_dict['email_image'] # sent from save_center 143 | self.status = shared_param_dict['proc_status_dict'] 144 | self.status['Module_EMN'] = True 145 | 146 | """ 147 | pass config static parameters 148 | """ 149 | """ module own config """ 150 | EMN_CFG = kwargs_CFG['EMAIL_NOTIFIER_CFG'] 151 | try: 152 | self.message_sys_start = EMN_CFG['message_sys_start'] 153 | self.message_obj_detected = EMN_CFG['message_obj_detected'] 154 | self.message_daily_check = EMN_CFG['message_daily_check'] 155 | self.message_daily_check_send_time = EMN_CFG['message_daily_check_send_time'] 156 | except: 157 | self.message_sys_start = None 158 | self.message_obj_detected = None 159 | self.message_daily_check = None 160 | 
self.message_daily_check_send_time = [] 161 | 162 | """ 163 | self content 164 | """ 165 | self.message_sys_start_send_flag = 0 166 | self.cache_image_path = './cache.jpg' 167 | self.message_obj_detected_send_flag = True 168 | self.message_daily_check_send_flag = True 169 | 170 | self._log('Start...') 171 | 172 | """ 173 | inherit father class __init__ para 174 | """ 175 | super().__init__(EMN_CFG['manual_token_path']) 176 | 177 | def run(self): 178 | while self.run_flag.value: 179 | # send sys start message with a 15 sec delay 180 | delay = 15 # set delay for waiting other modules 181 | if self.message_sys_start_send_flag == delay and self.message_sys_start is not None: 182 | self.send(self.message_sys_start) 183 | self.message_sys_start_send_flag = False 184 | elif type(self.message_sys_start_send_flag) is int: 185 | self.message_sys_start_send_flag += 1 186 | 187 | # send object detection message 188 | if self.autosave_flag.value and self.message_obj_detected_send_flag and self.message_obj_detected is not None: 189 | time.sleep(2) 190 | # saved cache image 191 | cv2.imwrite(self.cache_image_path, self.email_image.value) 192 | # send the email 193 | message = copy.deepcopy(self.message_obj_detected) 194 | message['image_in_text'].append(self.cache_image_path) 195 | self.send(message) 196 | os.remove(self.cache_image_path) # clear the cache image 197 | self.message_obj_detected_send_flag = False 198 | # prevent duplicate sending 199 | elif not self.autosave_flag.value: 200 | self.message_obj_detected_send_flag = True 201 | 202 | # send daily status message 203 | if time.strftime('%H-%M', time.localtime()) in self.message_daily_check_send_time and self.message_daily_check_send_flag: 204 | self.send(self.message_daily_check) 205 | self.message_daily_check_send_flag = False 206 | # prevent duplicate sending 207 | elif time.strftime('%H-%M', time.localtime()) not in self.message_daily_check_send_time: 208 | self.message_daily_check_send_flag = True 209 | 210 | # check once per second 211 | time.sleep(1) 212 | 213 | def __del__(self): 214 | self._log(f"Closed. Timestamp: {datetime.now().strftime('%Y-%m-%d_%H:%M:%S')}") 215 | self.status['Module_EMN'] = False 216 | 217 | 218 | if __name__ == '__main__': 219 | mes = { 220 | 'to' : '1740781310szc@gmail.com', 221 | 'subject' : 'Test Email from Gmail API', 222 | 'text' : 223 | """ 224 | Hello, this is a test email 225 | sent using the Gmail API. 226 | """, 227 | 'image_in_text': [], 228 | 'attachment' : [], 229 | } 230 | Gmail('./email_notifier_token/manual_token.json').send(mes) 231 | print('Email sent successfully') 232 | -------------------------------------------------------------------------------- /library/frame_early_processor.py: -------------------------------------------------------------------------------- 1 | """ 2 | Designed for data frame from each radar, abbr. 
FEP 3 | data(ndarray) = data_numbers(n) * channels(x, y, z, v, SNR) 4 | """ 5 | 6 | from collections import deque 7 | 8 | import numpy as np 9 | from matplotlib import pyplot as plt 10 | from numpy import sin, cos 11 | 12 | from library.data_processor import DataProcessor 13 | 14 | 15 | class FrameEProcessor(DataProcessor): # early processing for frame of each radar before merging 16 | def __init__(self, **kwargs_CFG): 17 | """ 18 | pass config static parameters 19 | """ 20 | """ module own config """ 21 | FEP_CFG = kwargs_CFG['FRAME_EARLY_PROCESSOR_CFG'] 22 | self.FEP_frame_group_deque = deque([], FEP_CFG['FEP_frame_deque_length']) 23 | 24 | """ other configs """ 25 | RDR_CFG = kwargs_CFG['RADAR_CFG'] 26 | self.name = RDR_CFG['name'] 27 | self.xlim = RDR_CFG['xlim'] 28 | self.ylim = RDR_CFG['ylim'] 29 | self.zlim = RDR_CFG['zlim'] 30 | self.pos_offset = RDR_CFG['pos_offset'] 31 | self.facing_angle = RDR_CFG['facing_angle'] 32 | 33 | """ 34 | inherit father class __init__ para 35 | """ 36 | super().__init__() 37 | 38 | def FEP_accumulate_update(self, frame): # ndarray(points, channels=5) of 1 frame 39 | # append frames 40 | self.FEP_frame_group_deque.append(frame) 41 | frame_group = np.concatenate(self.FEP_frame_group_deque).astype(np.float16) # concatenate the deque list 42 | 43 | # apply boundary filter 44 | frame_group = self.FEP_boundary_filter(frame_group) 45 | 46 | # apply angle shift and position updates 47 | frame_group = np.concatenate([self.FEP_trans_rotation_3D(frame_group[:, 0:3]), frame_group[:, 3:5]], axis=1) 48 | frame_group = np.concatenate([self.FEP_trans_position_3D(frame_group[:, 0:3]), frame_group[:, 3:5]], axis=1) 49 | 50 | return frame_group # ndarray(points, channels) of a dozen of frames 51 | 52 | def FEP_boundary_filter(self, data_points): 53 | """ 54 | :param data_points: (ndarray) data_numbers(n) * channels(c>3) 55 | :return: data_points: (ndarray) data_numbers(n) * channels(c>3) 56 | """ 57 | # remove out-ranged points 58 | data_points, _ = self.DP_np_filter(data_points, axis=0, range_lim=self.xlim) 59 | data_points, _ = self.DP_np_filter(data_points, axis=1, range_lim=self.ylim) 60 | data_points, _ = self.DP_np_filter(data_points, axis=2, range_lim=self.zlim) 61 | return data_points 62 | 63 | def FEP_trans_rotation_3D(self, data_points): 64 | """ 65 | update the data from radar based on its facing angle 66 | based on right-hand coord-sys (global coord-sys is used, means the xyz axes are frozen during rotation transformation), 67 | 3 rotation matrices for xyz axes are used, default rotation sequence: dot matrix of z-axis first, followed by y and x-axis 68 | 69 | :param data_points: (ndarray) data_numbers(n) * channels(3) 70 | :return: (ndarray) data_numbers(n) * channels(3) 71 | """ 72 | ref_point = (0, 0, 0) # reference point, (0, 0, 0) by default not using 73 | rpx, rpy, rpz = ref_point 74 | alpha, beta, gamma = np.deg2rad(self.facing_angle['angle']) # tuple of degrees for xyz axes 75 | 76 | # define rotation matrices for each axis 77 | Rx = np.array([[1, 0, 0, 0], 78 | [0, cos(alpha), -sin(alpha), rpy * (1 - cos(alpha)) + rpz * sin(alpha)], 79 | [0, sin(alpha), cos(alpha), rpz * (1 - cos(alpha)) - rpy * sin(alpha)], 80 | [0, 0, 0, 1]], 81 | dtype=data_points.dtype) 82 | Ry = np.array([[cos(beta), 0, sin(beta), rpx * (1 - cos(beta)) - rpz * sin(beta)], 83 | [0, 1, 0, 0], 84 | [-sin(beta), 0, cos(beta), rpz * (1 - cos(beta)) + rpx * sin(beta)], 85 | [0, 0, 0, 1]], 86 | dtype=data_points.dtype) 87 | Rz = np.array([[cos(gamma), -sin(gamma), 0, rpx * (1 - 
cos(gamma)) + rpy * sin(gamma)], 88 | [sin(gamma), cos(gamma), 0, rpy * (1 - cos(gamma)) - rpx * sin(gamma)], 89 | [0, 0, 1, 0], 90 | [0, 0, 0, 1]], 91 | dtype=data_points.dtype) 92 | 93 | # add one row of 1 to be compatible with matrices 94 | data_points = np.concatenate([data_points, np.ones([data_points.shape[0], 1], dtype=data_points.dtype)], axis=1).T 95 | 96 | # choose rotation sequence 97 | if self.facing_angle['sequence'] == 'xyz': 98 | data_points_transformed = np.dot(np.dot(np.dot(Rz, Ry), Rx), data_points) 99 | elif self.facing_angle['sequence'] == 'xzy': 100 | data_points_transformed = np.dot(np.dot(np.dot(Ry, Rz), Rx), data_points) 101 | elif self.facing_angle['sequence'] == 'yxz': 102 | data_points_transformed = np.dot(np.dot(np.dot(Rz, Rx), Ry), data_points) 103 | elif self.facing_angle['sequence'] == 'yzx': 104 | data_points_transformed = np.dot(np.dot(np.dot(Rx, Rz), Ry), data_points) 105 | elif self.facing_angle['sequence'] == 'zxy': 106 | data_points_transformed = np.dot(np.dot(np.dot(Ry, Rx), Rz), data_points) 107 | else: # default sequence is zyx 108 | data_points_transformed = np.dot(np.dot(np.dot(Rx, Ry), Rz), data_points) 109 | return data_points_transformed.T[:, 0:3] 110 | 111 | def FEP_trans_position_3D(self, data_points): 112 | """ 113 | update the data from radar based on its position offset 114 | :param data_points: (ndarray) data_numbers(n) * channels(3) 115 | :return: (ndarray) data_numbers(n) * channels(3) 116 | """ 117 | dx, dy, dz = self.pos_offset # tuple of distance for xyz axes 118 | 119 | # define distance matrix for each axis 120 | T = np.array([[1, 0, 0, dx], 121 | [0, 1, 0, dy], 122 | [0, 0, 1, dz], 123 | [0, 0, 0, 1]], 124 | dtype=data_points.dtype) 125 | 126 | # add one row of 1 to be compatible with matrices 127 | data_points = np.concatenate([data_points, np.ones([data_points.shape[0], 1], dtype=data_points.dtype)], axis=1).T 128 | data_points_transformed = np.dot(T, data_points) 129 | return data_points_transformed.T[:, 0:3] 130 | 131 | 132 | if __name__ == '__main__': 133 | RADAR_CFG = {'name' : 'test', 134 | 'cfg_port_name' : 'COM3', 135 | 'data_port_name': 'COM4', 136 | 'cfg_file_name' : './cfg/IWR1843_3D.cfg', # always use 3D data as input 137 | 'xlim' : None, # the x-direction limit for cloud points from this single radar, set as [a, b), from radar view 138 | 'ylim' : (0.25, 4), 139 | 'zlim' : None, 140 | 'pos_offset' : (0, 0, 0.8), # default pos_offset is (0, 0, 0) 141 | 'facing_angle' : {'angle': (0, -60, 90), 'sequence': None}, # right-hand global coord-sys, (x, y, z): [-180, 180] positive counted anti-clockwise when facing from axis vertex towards origin, default rotation sequence: zyx 142 | 'ES_threshold' : {'range': (200, None), 'speed_none_0_exception': True}, # if speed_none_0_exception is True, then the data with low ES but with speed will be preserved 143 | } 144 | FRAME_EARLY_PROCESSOR_CFG = { # early process config 145 | 'FEP_frame_deque_length': 10, # the number of frames stacked 146 | } 147 | kwargs_CFG = {'RADAR_CFG' : RADAR_CFG, 148 | 'FRAME_EARLY_PROCESSOR_CFG': FRAME_EARLY_PROCESSOR_CFG} 149 | 150 | # frame = np.array(([], [], [])) 151 | f = FrameEProcessor(**kwargs_CFG) 152 | 153 | """ 154 | This example shows a surface which can be treated as the radar surface facing towards the center. 155 | If the radar is placed facing down, the received point cloud needs to be lifted to compensate for the angle. 156 | Following the convention (global coord-sys, positive counted anti-clockwise), it is easy to rotate. 
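As a quick numerical check (hypothetical values, not the demo config below): with facing_angle = {'angle': (0, 0, 90), 'sequence': None} and the default reference point (0, 0, 0), only Rz acts, so FEP_trans_rotation_3D maps the point (1, 0, 0) to (0, 1, 0) — wait, written without dashes: the point (1, 0, 0) goes to (0, 1, 0), a 90-degree anti-clockwise turn about the z-axis when viewed from the +z vertex towards the origin.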
157 | """ 158 | # data 159 | points = 1000 160 | x = np.random.uniform(-2, 2, points) 161 | y = np.zeros(points) - 1 162 | z = np.random.uniform(-2, 2, points) 163 | x = np.concatenate([x, np.linspace(-2, 2, points)]) 164 | y = np.concatenate([y, np.zeros(points) - 1]) 165 | z = np.concatenate([z, np.linspace(-2, 2, points)]) 166 | data = f.FEP_trans_rotation_3D(np.concatenate([x[:, np.newaxis], y[:, np.newaxis], z[:, np.newaxis]], axis=1)) 167 | 168 | # create a figure 169 | fig = plt.figure(figsize=(9, 9)) 170 | ax1 = fig.add_subplot(111, projection='3d') 171 | ax1.set_xlim(-2, 2) 172 | ax1.set_ylim(-2, 2) 173 | ax1.set_zlim(-2, 2) 174 | ax1.set_xlabel('x') 175 | ax1.set_ylabel('y') 176 | ax1.set_zlabel('z') 177 | 178 | # draw axis 179 | x_axis = np.arange(-2, 2, 0.01) 180 | y_axis = np.zeros(400) 181 | z_axis = np.zeros(400) 182 | x_axis = np.concatenate([x_axis, np.zeros(400)]) 183 | y_axis = np.concatenate([y_axis, np.arange(-2, 2, 0.01)]) 184 | z_axis = np.concatenate([z_axis, np.zeros(400)]) 185 | x_axis = np.concatenate([x_axis, np.zeros(400)]) 186 | y_axis = np.concatenate([y_axis, np.zeros(400)]) 187 | z_axis = np.concatenate([z_axis, np.arange(-2, 2, 0.01)]) 188 | ax1.plot3D(x_axis, y_axis, z_axis, marker='o', linestyle='None', color='r') 189 | 190 | # draw the surface 191 | ax1.plot3D(data[:, 0], data[:, 1], data[:, 2], marker='.', linestyle='None', color='g') 192 | 193 | plt.show() 194 | -------------------------------------------------------------------------------- /library/frame_post_processor.py: -------------------------------------------------------------------------------- 1 | """ 2 | Designed for post-processing of data frame merged from all radars, abbr. FPP 3 | data(ndarray) = data_numbers(n) * channels(x, y, z, v, SNR) 4 | """ 5 | 6 | from library.DBSCAN_generator import DBSCANGenerator 7 | from library.bgnoise_filter import BGNoiseFilter 8 | from library.human_tracking import HumanTracking 9 | 10 | 11 | class FramePProcessor(DBSCANGenerator, BGNoiseFilter, HumanTracking): 12 | def __init__(self, **kwargs_CFG): 13 | """ 14 | pass config static parameters 15 | """ 16 | """ module own config """ 17 | FPP_CFG = kwargs_CFG['FRAME_POST_PROCESSOR_CFG'] 18 | self.FPP_global_xlim = FPP_CFG['FPP_global_xlim'] 19 | self.FPP_global_ylim = FPP_CFG['FPP_global_ylim'] 20 | self.FPP_global_zlim = FPP_CFG['FPP_global_zlim'] 21 | self.FPP_ES_threshold = FPP_CFG['FPP_ES_threshold'] 22 | 23 | """ 24 | inherit father class __init__ para 25 | """ 26 | super().__init__(**kwargs_CFG) 27 | 28 | def FPP_boundary_filter(self, data_points): 29 | """ 30 | :param data_points: (ndarray) data_numbers(n) * channels(c>3) 31 | :return: data_points: (ndarray) data_numbers(n) * channels(c>3) 32 | """ 33 | # remove out-ranged points 34 | data_points, _ = self.DP_np_filter(data_points, axis=0, range_lim=self.FPP_global_xlim) 35 | data_points, _ = self.DP_np_filter(data_points, axis=1, range_lim=self.FPP_global_ylim) 36 | data_points, _ = self.DP_np_filter(data_points, axis=2, range_lim=self.FPP_global_zlim) 37 | return data_points 38 | 39 | def FPP_ES_Speed_filter(self, data_points): 40 | """ 41 | :param data_points: (ndarray) data_numbers(n) * channels(c=5) 42 | :return: data_points: (ndarray) data_numbers(n) * channels(c=5) 43 | noise: (ndarray) data_numbers(n) * channels(c=5) 44 | """ 45 | data_points, noise = self.DP_ES_Speed_filter(data_points, self.FPP_ES_threshold) 46 | return data_points, noise 47 | -------------------------------------------------------------------------------- 
/library/human_object.py: -------------------------------------------------------------------------------- 1 | """ 2 | Human object, abbr. OBJ 3 | """ 4 | import time 5 | from collections import deque 6 | from math import hypot 7 | 8 | import numpy as np 9 | from scipy import stats 10 | 11 | 12 | class HumanObject: 13 | def __init__(self, name, **kwargs_CFG): 14 | """ 15 | pass config static parameters 16 | """ 17 | """ module own config """ 18 | OBJ_CFG = kwargs_CFG['HUMAN_OBJECT_CFG'] 19 | # store the previous data 20 | self.obj_cp_deque = deque([], OBJ_CFG['obj_deque_length']) 21 | self.obj_size_deque = deque([], OBJ_CFG['obj_deque_length']) 22 | self.obj_status_deque = deque(np.ones(OBJ_CFG['obj_deque_length'], dtype=int), OBJ_CFG['obj_deque_length']) # set as standing as default 23 | self.obj_speed_deque = deque([], OBJ_CFG['obj_deque_length']) 24 | self.obj_timestamp_deque = deque([], OBJ_CFG['obj_deque_length']) 25 | self.obj_height_dict = {'height': 0, 'count': 0, } # fixed property 26 | 27 | # the threshold for data update 28 | self.dis_diff_threshold = OBJ_CFG['dis_diff_threshold']['threshold'] 29 | self.dis_diff_threshold_dr = OBJ_CFG['dis_diff_threshold']['dynamic_ratio'] 30 | self.size_diff_threshold = OBJ_CFG['size_diff_threshold'] 31 | self.expect_pos = OBJ_CFG['expect_pos'] 32 | self.expect_shape = OBJ_CFG['expect_shape'] 33 | 34 | self.sub_possibility_proportion = OBJ_CFG['sub_possibility_proportion'] 35 | self.inactive_timeout = OBJ_CFG['inactive_timeout'] 36 | self.obj_delete_timeout = OBJ_CFG['obj_delete_timeout'] 37 | 38 | # for object entering the scene 39 | self.fuzzy_boundary_enter = OBJ_CFG['fuzzy_boundary_enter'] 40 | self.fuzzy_boundary_threshold = OBJ_CFG['fuzzy_boundary_threshold'] 41 | self.scene_xlim = OBJ_CFG['scene_xlim'] 42 | self.scene_ylim = OBJ_CFG['scene_ylim'] 43 | self.scene_zlim = OBJ_CFG['scene_zlim'] 44 | 45 | # for object status 46 | self.standing_sitting_threshold = OBJ_CFG['standing_sitting_threshold'] 47 | self.sitting_lying_threshold = OBJ_CFG['sitting_lying_threshold'] 48 | 49 | # get mean position and other info from the last several data 50 | self.get_fuzzy_pos_No = OBJ_CFG['get_fuzzy_pos_No'] 51 | self.get_fuzzy_status_No = OBJ_CFG['get_fuzzy_status_No'] 52 | 53 | """ 54 | self content 55 | """ 56 | self.name = 'obj_' + name 57 | 58 | def check_clus_possibility(self, obj_cp, obj_size): 59 | """ 60 | :param obj_cp: (ndarray) channels(3), xyz, central point of cluster 61 | :param obj_size: (ndarray) channels(3), xyz, size of cluster 62 | :return: (float) the possibility that this cluster should be assigned to this object 63 | """ 64 | # if no previous points saved 65 | if len(self.obj_cp_deque) == 0: 66 | dis_possibility = 0 67 | size_possibility = 0 68 | # a new object may only start around the scene boundaries 69 | if self.fuzzy_boundary_enter and not self._boundary_fuzzy_area(obj_cp): 70 | return 0 71 | # if there are previous points 72 | else: 73 | # # use z-score 74 | # if len(self.obj_cp_deque) > 10: 75 | # obj_cp_np = np.array(self.obj_cp_deque).reshape([-1, 3]) 76 | # obj_cp_np = np.concatenate([obj_cp_np, obj_cp]) 77 | # x = obj_cp_np[:, 0] 78 | # y = obj_cp_np[:, 1] 79 | # z = obj_cp_np[:, 2] 80 | # dis_possibility_x = np.exp(-abs(stats.zscore(x)[-1])) 81 | # dis_possibility_y = np.exp(-abs(stats.zscore(y)[-1])) 82 | # dis_possibility_z = np.exp(-abs(stats.zscore(z)[-1])) 83 | # dis_possibility = sum(np.array([dis_possibility_x, dis_possibility_y, dis_possibility_z]) * [0.3, 0.3, 0.4]) 84 | # 85 | # obj_size_np =
np.array(self.obj_size_deque).reshape([-1, 3]) 86 | # obj_size_np = np.concatenate([obj_size_np, obj_size]) 87 | # x = obj_size_np[:, 0] 88 | # y = obj_size_np[:, 1] 89 | # z = obj_size_np[:, 2] 90 | # size_possibility_x = np.exp(-abs(stats.zscore(x)[-1])) 91 | # size_possibility_y = np.exp(-abs(stats.zscore(y)[-1])) 92 | # size_possibility_z = np.exp(-abs(stats.zscore(z)[-1])) 93 | # size_possibility = sum(np.array([dis_possibility_x, dis_possibility_y, dis_possibility_z]) * [0.2, 0.2, 0.6]) 94 | 95 | # based on distance between the current and the last 96 | diff = obj_cp - self.obj_cp_deque[-1] 97 | dis_diff = hypot(diff[0], diff[1], diff[2]) 98 | # get dynamic distance threshold by adding object speed of the last frame 99 | dyn_dis_threshold = self.dis_diff_threshold + abs(self.obj_speed_deque[-1]) * self.dis_diff_threshold_dr 100 | # get distance possibility 101 | if dis_diff < dyn_dis_threshold: # if the current point is close enough to previous one 102 | dis_possibility = (dyn_dis_threshold - dis_diff) / dyn_dis_threshold 103 | else: 104 | return 0 105 | 106 | # based on the object size difference between the current and the last 107 | size_diff = abs(np.prod(obj_size) - np.prod(self.obj_size_deque[-1])) 108 | # get size possibility 109 | if size_diff < self.size_diff_threshold: 110 | size_possibility = (self.size_diff_threshold - size_diff) / self.size_diff_threshold 111 | else: 112 | return 0 113 | 114 | # get self_possibility 115 | # status = ['default', 'standing', 'sitting', 'lying'][self.get_info()[1]] 116 | # pos_possibility, shape_possibility = self._get_self_possibility(obj_cp, obj_size, self.expect_pos[status], self.expect_shape[status]) 117 | pos_possibility, shape_possibility = self._get_self_possibility(obj_cp, obj_size, self.expect_pos['default'], self.expect_shape['default']) 118 | 119 | # calculate total possibility 120 | point_taken_possibility = sum(np.array([dis_possibility, size_possibility, pos_possibility, shape_possibility]) * np.array(self.sub_possibility_proportion)) 121 | return point_taken_possibility 122 | 123 | def update_info(self, obj, obj_cp, obj_size): 124 | """ 125 | :param obj: (ndarray) data_numbers(n) * channels(5), cluster data points 126 | :param obj_cp: (ndarray) channels(3), xyz, central point of cluster 127 | :param obj_size: (ndarray) channels(3), xyz, size of cluster 128 | """ 129 | # update the cluster info including central point, size as an object 130 | self.obj_cp_deque.append(obj_cp) 131 | self.obj_size_deque.append(obj_size) 132 | self._update_height(obj_cp, obj_size) 133 | self.obj_status_deque.append(self._get_status(obj_cp, obj_size)) 134 | self.obj_speed_deque.append(self._get_speed(obj)) 135 | self.obj_timestamp_deque.append(time.time()) 136 | 137 | # # adjust data deque based on outlier detection 138 | # 'data_adjusted_range': None, 139 | # # based on the distribution between current point and all sequence 140 | # data = np.squeeze(np.array(list(self.obj_cp_deque))) 141 | # mean_x = np.mean(data[:, 0]) 142 | # std = np.std(data[:, 0]) 143 | 144 | # this function runs every frame loop 145 | def get_info(self): 146 | """ 147 | :return: obj_cp (ndarray) data_numbers(1) * channels(3), the central point of the object (this shape is for plot when no points) 148 | obj_size (ndarray) data_numbers(1) * channels(3) 149 | obj_status (int) 0-losing target, 1-standing, 2-sitting, 3-lying 150 | """ 151 | # default is empty 152 | obj_cp = np.ndarray([0, 3], dtype=np.float16) 153 | obj_size = np.ndarray([0, 3], dtype=np.float16) 154 | 
obj_status = -1 155 | 156 | # if there are previous points 157 | if len(self.obj_cp_deque) > 0: 158 | # check if long time no update 159 | if self._check_timeout(self.obj_delete_timeout): 160 | self.obj_cp_deque.clear() 161 | self.obj_size_deque.clear() 162 | self.obj_status_deque = deque(np.ones(self.obj_status_deque.maxlen, dtype=int), self.obj_status_deque.maxlen) # set as standing as default 163 | self.obj_speed_deque.clear() 164 | self.obj_timestamp_deque.clear() 165 | 166 | self.obj_height_dict = {'height': 0, 'count': 0, } 167 | else: 168 | # get obj position 169 | if self.get_fuzzy_pos_No: # get comprehensive info based on previous value sequence 170 | # calculate the mean position from the last few data 171 | obj_cp_np = np.array(list(self.obj_cp_deque))[-self.get_fuzzy_pos_No:] 172 | obj_cp = np.array([[np.mean(obj_cp_np[:, 0]), np.mean(obj_cp_np[:, 1]), np.mean(obj_cp_np[:, 2])]]) 173 | else: # get latest info 174 | obj_cp = self.obj_cp_deque[-1][np.newaxis, :] 175 | 176 | # get obj size 177 | if self.get_fuzzy_pos_No: # get comprehensive info based on previous value sequence 178 | obj_size_np = np.array(list(self.obj_size_deque))[-self.get_fuzzy_pos_No:] 179 | obj_size = np.array([[np.mean(obj_size_np[:, 0]), np.mean(obj_size_np[:, 1]), np.mean(obj_size_np[:, 2])]]) 180 | else: # get latest info 181 | obj_size = self.obj_size_deque[-1][np.newaxis, :] 182 | 183 | # get obj status 184 | if self.get_fuzzy_status_No: # get comprehensive info based on previous value sequence 185 | # sort the status labels, high to low 186 | unique, counts = np.unique(list(self.obj_status_deque)[-self.get_fuzzy_status_No:], return_counts=True) 187 | unique_sorted = [i[0] for i in sorted(tuple(zip(unique, counts)), key=lambda item: item[1], reverse=True)] 188 | obj_status = unique_sorted[0] # use the label with maximum number 189 | else: # get latest info 190 | obj_status = self.obj_status_deque[-1] 191 | 192 | # check if temporarily lose the target 193 | if self._check_timeout(self.inactive_timeout): 194 | obj_status = 0 195 | 196 | return obj_cp, obj_size, obj_status 197 | 198 | def _get_status(self, obj_cp, obj_size): 199 | """ 200 | :return: (int) 0-unknown, 1-standing, 2-sitting, 3-lying 201 | """ 202 | # current_height = obj_cp[2] + obj_size[2] / 2 203 | # current_volume = np.prod(obj_size) 204 | 205 | status_possibility_list = [] 206 | for s in ['standing', 'sitting', 'lying']: 207 | pos_possibility, shape_possibility = self._get_self_possibility(obj_cp, obj_size, self.expect_pos[s], self.expect_shape[s]) 208 | status_possibility_list.append(sum(np.array([pos_possibility, shape_possibility]) * np.array(self.sub_possibility_proportion)[2:4])) 209 | status = status_possibility_list.index(max(status_possibility_list)) + 1 210 | 211 | # # for big cluster cube 212 | # if current_volume > 0.25 and obj_size[2] > 1: 213 | # if obj_cp[2] >= self.standing_sitting_threshold and current_height >= self.obj_height_dict['height'] * 0.8: 214 | # status = 1 # standing 215 | # # for big cluster cube 216 | # elif current_volume > 0.04 and obj_size[2] >= 0.2: 217 | # if self.sitting_lying_threshold <= obj_cp[2] < self.standing_sitting_threshold or self.obj_height_dict['height'] * 0.25 <= current_height < self.obj_height_dict['height'] * 0.8: 218 | # status = 2 # sitting 219 | # elif obj_cp[2] < self.sitting_lying_threshold or current_height < self.obj_height_dict['height'] * 0.25: 220 | # status = 3 # lying 221 | return status 222 | 223 | def _get_speed(self, data_points): 224 | """ 225 | :param data_points: 
(ndarray) data_numbers(n) * channels(5), cluster data points 226 | :return: (float) cluster average speed 227 | """ 228 | # get the cluster average speed from all point speeds, excluding 0s 229 | speed_np = data_points[:, 3] 230 | speed_np = speed_np[speed_np != 0] # filter 0 values 231 | if speed_np.size == 0: 232 | speed = 0 233 | else: 234 | speed = float(np.mean(speed_np)) 235 | return speed 236 | 237 | def _update_height(self, obj_cp, obj_size): 238 | current_height = obj_cp[2] + obj_size[2] / 2 239 | if current_height > self.obj_height_dict['height']: 240 | self.obj_height_dict['height'] = current_height 241 | 242 | # height_range = (1.7, 1.9) 243 | # current_height = obj_cp[2] + obj_size[2] / 2 244 | # current_count = self.obj_height_dict['count'] 245 | # 246 | # if height_range[0] < current_height < height_range[1]: 247 | # self.obj_height_dict['height'] = (self.obj_height_dict['height'] * current_count + current_height) / (current_count + 1) 248 | # self.obj_height_dict['count'] = current_count + 1 249 | # print(obj_cp, obj_size, current_height, current_count, self.obj_height_dict) 250 | 251 | def _get_self_possibility(self, obj_cp, obj_size, expect_pos, expect_shape): 252 | """ 253 | :param obj_cp: (ndarray) channels(3), xyz, central point of cluster 254 | :param obj_size: (ndarray) channels(3), xyz, size of cluster 255 | :param expect_pos: (list), xyz, expected central point of cluster 256 | :param expect_shape: (list), xyz, expected size of cluster 257 | :return: pos_possibility: (float) 258 | shape_possibility: (float) 259 | """ 260 | # based on the object's current position 261 | pos_diff_list = [] 262 | for i in range(len(expect_pos)): 263 | if expect_pos[i]: 264 | pos_diff_axis = abs(obj_cp[i] - expect_pos[i]) 265 | else: 266 | pos_diff_axis = 0 267 | pos_diff_list.append(pos_diff_axis) 268 | pos_diff = hypot(pos_diff_list[0], pos_diff_list[1], pos_diff_list[2]) 269 | pos_possibility = np.exp(-pos_diff) 270 | 271 | # based on the object's current shape under different statuses 272 | shape_diff_list = [] 273 | for i in range(len(expect_shape)): 274 | if expect_shape[i]: 275 | shape_diff_axis = abs(obj_size[i] - expect_shape[i]) 276 | else: 277 | shape_diff_axis = 0 278 | shape_diff_list.append(shape_diff_axis) 279 | shape_diff = np.array(shape_diff_list, dtype=np.float16) 280 | shape_possibility = sum(np.exp(-shape_diff) * [0.2, 0.2, 0.6]) 281 | 282 | return pos_possibility, shape_possibility 283 | 284 | def _boundary_fuzzy_area(self, data_point): 285 | """ 286 | :param data_point: (ndarray) channels(3), the central point of the object 287 | :return: (boolean) True - in the boundary fuzzy area, False - not in it 288 | """ 289 | x_fuzzy1 = (self.scene_xlim[0] - self.fuzzy_boundary_threshold, self.scene_xlim[0] + self.fuzzy_boundary_threshold) 290 | x_fuzzy2 = (self.scene_xlim[1] - self.fuzzy_boundary_threshold, self.scene_xlim[1] + self.fuzzy_boundary_threshold) 291 | y_fuzzy1 = (self.scene_ylim[0] - self.fuzzy_boundary_threshold, self.scene_ylim[0] + self.fuzzy_boundary_threshold) 292 | y_fuzzy2 = (self.scene_ylim[1] - self.fuzzy_boundary_threshold, self.scene_ylim[1] + self.fuzzy_boundary_threshold) 293 | z_fuzzy1 = (self.scene_zlim[0] - self.fuzzy_boundary_threshold, self.scene_zlim[0] + self.fuzzy_boundary_threshold) 294 | z_fuzzy2 = (self.scene_zlim[1] - self.fuzzy_boundary_threshold, self.scene_zlim[1] + self.fuzzy_boundary_threshold) 295 | 296 | x = data_point[0] 297 | y = data_point[1] 298 | z = data_point[2] 299 | if (x_fuzzy1[0] <=
x < x_fuzzy1[1]) or (x_fuzzy2[0] <= x < x_fuzzy2[1]) or \ 300 | (y_fuzzy1[0] <= y < y_fuzzy1[1]) or (y_fuzzy2[0] <= y < y_fuzzy2[1]) or \ 301 | (z_fuzzy1[0] <= z < z_fuzzy1[1]) or (z_fuzzy2[0] <= z < z_fuzzy2[1]): 302 | in_boundary_fuzzy_area = True 303 | else: 304 | in_boundary_fuzzy_area = False 305 | return in_boundary_fuzzy_area 306 | 307 | def _check_timeout(self, time_threshold): 308 | """ 309 | :param time_threshold: (float/int) unit is second 310 | :return: (boolean) is timeout or not 311 | """ 312 | if (time.time() - self.obj_timestamp_deque[-1]) >= time_threshold: 313 | timeout = True 314 | else: 315 | timeout = False 316 | return timeout 317 | -------------------------------------------------------------------------------- /library/human_tracking.py: -------------------------------------------------------------------------------- 1 | """ 2 | Human tracking, abbr. TRK 3 | """ 4 | 5 | from collections import deque 6 | from math import hypot 7 | 8 | import numpy as np 9 | 10 | from library.data_processor import DataProcessor 11 | from library.human_object import HumanObject 12 | 13 | 14 | class HumanTracking(DataProcessor): 15 | def __init__(self, **kwargs_CFG): 16 | """ 17 | pass config static parameters 18 | """ 19 | """ module own config """ 20 | TRK_CFG = kwargs_CFG['HUMAN_TRACKING_CFG'] 21 | self.TRK_enable = TRK_CFG['TRK_enable'] 22 | 23 | # get TRK processing para 24 | self.TRK_people_list = [] 25 | for i in range(TRK_CFG['TRK_obj_bin_number']): # create objects based on the maximum number 26 | self.TRK_people_list.append(HumanObject(name=str(i), **kwargs_CFG)) 27 | self.TRK_poss_clus_deque = deque([], TRK_CFG['TRK_poss_clus_deque_length']) 28 | self.TRK_redundant_clus_remove_cp_dis = TRK_CFG['TRK_redundant_clus_remove_cp_dis'] 29 | 30 | """inherit father class __init__ para""" 31 | super().__init__() 32 | 33 | # calculate possibility matrix 34 | def TRK_update_poss_matrix(self, valid_points_list): 35 | # stack cluster valid points 36 | self.TRK_poss_clus_deque.append(valid_points_list) 37 | 38 | # get stacked cluster valid points and remove list nesting 39 | poss_clus_list = self.DP_list_nesting_remover(list(self.TRK_poss_clus_deque)) 40 | 41 | # initial values 42 | obj_cp_total = np.ndarray([0, 3], dtype=np.float16) # (cp_pos_x, cp_pos_y, cp_pos_z) 43 | obj_size_total = np.ndarray([0, 3], dtype=np.float16) # (size_x, size_y, size_z) 44 | # get central point and size for all clusters 45 | for clus_valid_points in poss_clus_list: 46 | x, y, z = self.DP_boundary_calculator(clus_valid_points, axis=range(3)) 47 | obj_cp = np.array([[sum(x) / 2, sum(y) / 2, sum(z) / 2]], dtype=np.float16) 48 | obj_size = np.concatenate([np.diff(x), np.diff(y), np.diff(z)])[np.newaxis, :] 49 | obj_cp_total = np.concatenate([obj_cp_total, obj_cp]) 50 | obj_size_total = np.concatenate([obj_size_total, obj_size]) 51 | 52 | # calculate possibility matrix for each cluster and each object bin 53 | point_taken_poss_matrix = np.zeros([len(poss_clus_list), len(self.TRK_people_list)], dtype=np.float16) 54 | for c in range(len(poss_clus_list)): # for each cluster 55 | for p in range(len(self.TRK_people_list)): # for each object bin 56 | point_taken_poss_matrix[c, p] = self.TRK_people_list[p].check_clus_possibility(obj_cp_total[c], obj_size_total[c]) 57 | 58 | # keep finding the global maximum value of the possibility matrix until no values above 0 59 | while point_taken_poss_matrix.size > 0 and np.max(point_taken_poss_matrix) > 0: 60 | max_index = divmod(np.argmax(point_taken_poss_matrix), 
point_taken_poss_matrix.shape[1]) 61 | c = max_index[0] 62 | p = max_index[1] 63 | # append the central point and size to the corresponding object 64 | self.TRK_people_list[p].update_info(poss_clus_list[c], obj_cp_total[c], obj_size_total[c]) 65 | 66 | # set the poss_matrix row & column to 0 to remove redundant clusters close to the updated one (including itself), for the multiple-object-bin case 67 | obj_cp_used = obj_cp_total[c] 68 | for i in range(len(obj_cp_total)): 69 | diff = obj_cp_total[i] - obj_cp_used 70 | dis_diff = hypot(diff[0], diff[1]) 71 | if dis_diff < self.TRK_redundant_clus_remove_cp_dis: 72 | point_taken_poss_matrix[i, :] = 0 73 | point_taken_poss_matrix[:, p] = 0 74 | -------------------------------------------------------------------------------- /library/radar_reader.py: -------------------------------------------------------------------------------- 1 | """ 2 | Designed for data collection from radars, abbr. RDR 3 | """ 4 | 5 | import multiprocessing 6 | import re 7 | import time 8 | from datetime import datetime 9 | 10 | import numpy as np 11 | import serial 12 | 13 | from library.TI.parser_mmw_demo import parser_one_mmw_demo_output_packet 14 | from library.frame_early_processor import FrameEProcessor 15 | 16 | header_length = 8 + 32 17 | magic_word = b'\x02\x01\x04\x03\x06\x05\x08\x07' 18 | stop_word = 'sensorStop' 19 | 20 | 21 | class RadarReader: 22 | def __init__(self, run_flag, radar_rd_queue, shared_param_dict, **kwargs_CFG): 23 | """ 24 | get shared values and queues 25 | """ 26 | self.run_flag = run_flag 27 | self.radar_rd_queue = radar_rd_queue 28 | 29 | self.status = shared_param_dict['proc_status_dict'] 30 | self.status[kwargs_CFG['RADAR_CFG']['name']] = True 31 | 32 | """ 33 | pass config static parameters 34 | """ 35 | """ module own config """ 36 | RDR_CFG = kwargs_CFG['RADAR_CFG'] 37 | self.name = RDR_CFG['name'] 38 | self.cfg_port_name = RDR_CFG['cfg_port_name'] 39 | self.data_port_name = RDR_CFG['data_port_name'] 40 | self.cfg_file_name = RDR_CFG['cfg_file_name'] 41 | 42 | """self content""" 43 | self.fep = FrameEProcessor(**kwargs_CFG) # call other class 44 | self.cfg_port = None 45 | self.data_port = None 46 | 47 | self._log('Start...') 48 | 49 | # radar connection 50 | def connect(self) -> bool: 51 | try: 52 | cfg_port, data_port = self._connect_port(self.cfg_port_name, self.data_port_name) 53 | self._send_cfg(self._read_cfg(self.cfg_file_name), cfg_port, print_enable=1) 54 | except: 55 | return False 56 | # set property value 57 | self.cfg_port = cfg_port 58 | self.data_port = data_port 59 | return True 60 | 61 | # module entrance 62 | def run(self): 63 | if not self.connect(): 64 | self._log(f"Radar {self.name} Connection Failed") 65 | self.run_flag.value = False 66 | 67 | data = b'' 68 | while self.run_flag.value: 69 | data += self.data_port.read(self.data_port.inWaiting()) # may contain an incomplete frame whose length is not a multiple of 32 bytes 70 | 71 | # guarantee at least 2 headers, i.e. 1 complete frame, in this data buffer 72 | if magic_word in data: 73 | data_cache = data[data.index(magic_word) + header_length:] 74 | if magic_word in data_cache: 75 | # parse the data 76 | parser_result, \ 77 | headerStartIndex, \ 78 | totalPacketNumBytes, \ 79 | numDetObj, \ 80 | numTlv, \ 81 | subFrameNumber, \ 82 | detectedX_array, \ 83 | detectedY_array, \ 84 | detectedZ_array, \ 85 | detectedV_array, \ 86 | detectedRange_array, \ 87 | detectedAzimuth_array, \ 88 | detectedElevation_array, \ 89 | detectedSNR_array, \ 90 | detectedNoise_array =
parser_one_mmw_demo_output_packet(data, len(data), print_enable=0) 91 | # put data into queue, convert list to nparray, and transpose from (channels, points) to (points, channels) 92 | try: 93 | frame = self.fep.FEP_accumulate_update(np.array((detectedX_array, detectedY_array, detectedZ_array, detectedV_array, detectedSNR_array)).transpose()) 94 | self.radar_rd_queue.put(frame) 95 | except: 96 | # skip broken packets: frame would be undefined if the very first parse failed 97 | pass 98 | # self._log(str(len(data)) + '\t' + str(data)) 99 | # winsound.Beep(500, 200) 100 | 101 | # remove the first header and get ready for next header detection 102 | data = data_cache 103 | 104 | # connect the ports 105 | def _connect_port(self, cfg_port_name, data_port_name): 106 | try: 107 | cfg_port = serial.Serial(cfg_port_name, baudrate=115200) 108 | data_port = serial.Serial(data_port_name, baudrate=921600) 109 | assert cfg_port.is_open and data_port.is_open 110 | self._log('Hardware connected') 111 | return cfg_port, data_port 112 | except serial.serialutil.SerialException: 113 | return 114 | 115 | # read cfg file 116 | def _read_cfg(self, _cfg_file_name): 117 | cfg_list = [] 118 | with open(_cfg_file_name) as f: 119 | lines = f.read().split('\n') 120 | for line in lines: 121 | if not (line.startswith('%') or line == ''): 122 | cfg_list.append(line) 123 | return cfg_list 124 | 125 | # send cfg list 126 | def _send_cfg(self, cfg_list, cfg_port, print_enable=1): 127 | for line in cfg_list: 128 | # send cfg line by line 129 | line = (line + '\n').encode() 130 | cfg_port.write(line) 131 | # wait for port response 132 | while cfg_port.inWaiting() <= 20: 133 | pass 134 | time.sleep(0.01) 135 | res_str = cfg_port.read(cfg_port.inWaiting()).decode() 136 | res_list = [i for i in re.split('\n|\r', res_str) if i != ''] 137 | if print_enable == 1: 138 | self._log('\t'.join(res_list[-1:] + res_list[0:-1])) 139 | self._log('cfg SENT\n') 140 | 141 | # print with device name 142 | def _log(self, txt): 143 | print(f'[{self.name}]\t{txt}') 144 | 145 | def __del__(self): 146 | # stop the radar 147 | try: 148 | self._send_cfg([stop_word], self.cfg_port) 149 | self.cfg_port.close() 150 | self.data_port.close() 151 | except: 152 | pass 153 | self._log(f"Closed. Timestamp: {datetime.now().strftime('%Y-%m-%d_%H:%M:%S')}") 154 | self.status[self.name] = False # mark this radar process as closed, consistent with the other modules 155 | self.run_flag.value = False 156 | 157 | 158 | if __name__ == '__main__': 159 | config = {'name' : 'Test', 160 | 'cfg_port_name' : 'COM4', 161 | 'data_port_name': 'COM5', 162 | 'cfg_file_name' : '../cfg/IWR1843_3D.cfg', 163 | 'xlim' : (-2, 2), 164 | 'ylim' : (0.2, 4), 165 | 'zlim' : (-2, 2), 166 | 'pos_offset' : (0, 0, 0), # default pos_offset is (0, 0, 0) 167 | 'facing_angle' : {'angle': (0, 0, 0), 'sequence': None}, # right-hand global coord-sys, default rotation sequence: zyx 168 | 'enable_save' : True, # data saved for single radar 169 | 'save_length' : 10, # saved frame length for single radar 170 | } 171 | v = multiprocessing.Manager().Value('b', True) 172 | q = multiprocessing.Manager().Queue() 173 | r = RadarReader(v, q, {'proc_status_dict': {}}, **{'RADAR_CFG': config, 'FRAME_EARLY_PROCESSOR_CFG': {'FEP_frame_deque_length': 10}}) # standalone test: minimal shared params and FEP config 174 | success = r.connect() 175 | if not success: 176 | raise ValueError(f'Radar {config["name"]} Connection Failed') 177 | r.run() 178 | -------------------------------------------------------------------------------- /library/save_center.py: -------------------------------------------------------------------------------- 1 | """ 2 | Designed to collect the data from modules and save them, abbr.
SVC 3 | """ 4 | 5 | import os.path 6 | import pickle 7 | import queue 8 | import time 9 | from collections import deque 10 | from datetime import datetime 11 | 12 | import cv2 13 | import numpy as np 14 | import winsound 15 | 16 | from library.utils import folder_create_with_curmonth 17 | 18 | 19 | class SaveCenter: 20 | def __init__(self, run_flag, shared_param_dict, **_kwargs_CFG): 21 | """ 22 | get shared values and queues 23 | """ 24 | self.run_flag = run_flag 25 | # shared params 26 | self.save_queue = shared_param_dict['save_queue'] 27 | self.mansave_flag = shared_param_dict['mansave_flag'] 28 | self.autosave_flag = shared_param_dict['autosave_flag'] 29 | self.compress_video_file = shared_param_dict['compress_video_file'] 30 | self.email_image = shared_param_dict['email_image'] 31 | self.status = shared_param_dict['proc_status_dict'] 32 | self.status['Module_SVC'] = True 33 | 34 | """ 35 | pass config static parameters 36 | """ 37 | """ module own config """ 38 | SVC_CFG = _kwargs_CFG['SAVE_CENTER_CFG'] 39 | self.file_save_dir = SVC_CFG['file_save_dir'] 40 | self.experiment_name = SVC_CFG['experiment_name'] 41 | 42 | # manual save 43 | self.mansave_period = SVC_CFG['mansave_period'] 44 | # for radar 45 | self.mansave_rdr_data_deque = deque([], SVC_CFG['mansave_rdr_frame_max']) 46 | self.mansave_rdr_timestamp_deque = deque([], SVC_CFG['mansave_rdr_frame_max']) 47 | # for camera 48 | self.mansave_cam_data_deque = deque([], SVC_CFG['mansave_cam_frame_max']) 49 | self.mansave_cam_timestamp_deque = deque([], SVC_CFG['mansave_cam_frame_max']) 50 | 51 | # auto save 52 | # for radar 53 | self.autosave_rdr_data_deque = deque([], SVC_CFG['autosave_rdr_frame_max']) # radar auto save data deque 54 | self.autosave_rdr_timestamp_deque = deque([], SVC_CFG['autosave_rdr_frame_max']) 55 | self.autosave_end_remove_period = SVC_CFG['autosave_end_remove_period'] 56 | # for camera 57 | self.autosave_cam_buffer_deque = deque([], SVC_CFG['autosave_cam_buffer']) 58 | self.autosave_cam_timestamp_deque = deque([], SVC_CFG['autosave_cam_buffer']) 59 | 60 | """ 61 | self content 62 | """ 63 | # create a VideoWriter object to save the video 64 | self.videowriter = cv2.VideoWriter() 65 | 66 | # for camera auto save 67 | self.videowriter_status = False 68 | self.buffer_status = False 69 | self.asave_info = { 70 | 'file_label' : 'auto_CameraVideo', 71 | 'file_time' : time.strftime('%b-%d-%H-%M-%S', time.localtime()), 72 | 'file_format': '.mp4', 73 | 'file_path' : folder_create_with_curmonth(self.file_save_dir) + f"{self.experiment_name}_{'auto_CameraVideo'}_{time.strftime('%b-%d-%H-%M-%S', time.localtime())}{'.mp4'}", # whole path includes name 74 | 'fps' : 30, 75 | } 76 | 77 | self._log('Start...') 78 | 79 | # module entrance 80 | def run(self): 81 | while self.run_flag.value: 82 | # get saved data and classify 83 | try: 84 | packet = self.save_queue.get(block=True, timeout=10) 85 | except queue.Empty: 86 | self._log('Save Center Queue Empty.') 87 | break 88 | 89 | """manual save""" 90 | # save data for manual save 91 | if packet['source'] == 'radar': 92 | self.mansave_rdr_data_deque.append(packet['data']) 93 | self.mansave_rdr_timestamp_deque.append(packet['timestamp']) 94 | elif packet['source'] == 'camera': 95 | self.mansave_cam_data_deque.append(packet['data']) 96 | self.mansave_cam_timestamp_deque.append(packet['timestamp']) 97 | # manual save, only triggered at the end of recording 98 | if self.mansave_flag.value == 'image': 99 | msave_time = time.time() 100 | # for radar 101 | if 
len(self.mansave_rdr_data_deque) > 0: 102 | self._pickle_save([self.mansave_rdr_data_deque[-1]], 103 | file_label='manual_RadarSnapshot', 104 | file_time=time.strftime('%b-%d-%H-%M-%S', time.localtime(msave_time)), 105 | file_dir=folder_create_with_curmonth(self.file_save_dir)) 106 | # for camera 107 | if len(self.mansave_cam_data_deque) > 0: 108 | self._opencv_save(self.mansave_cam_data_deque[-1], 109 | None, 110 | file_label='manual_CameraImage', 111 | file_time=time.strftime('%b-%d-%H-%M-%S', time.localtime(msave_time)), 112 | file_format='.jpg', 113 | file_dir=folder_create_with_curmonth(self.file_save_dir)) 114 | # deactivate flag 115 | self.mansave_flag.value = None 116 | elif self.mansave_flag.value == 'video': 117 | # set start and end save time for asynchronous modules 118 | msave_end_time = time.time() 119 | msave_start_time = msave_end_time - self.mansave_period 120 | # for radar 121 | if len(self.mansave_rdr_data_deque) > 0: 122 | start_index = abs(np.array(self.mansave_rdr_timestamp_deque) - msave_start_time).argmin() 123 | self._pickle_save(list(self.mansave_rdr_data_deque)[start_index:], 124 | file_label='manual_RadarSeq', 125 | file_time=time.strftime('%b-%d-%H-%M-%S', time.localtime(msave_end_time)), 126 | file_dir=folder_create_with_curmonth(self.file_save_dir)) 127 | # clear the save deque 128 | self.mansave_rdr_data_deque.clear() 129 | self.mansave_rdr_timestamp_deque.clear() 130 | # for camera 131 | if len(self.mansave_cam_data_deque) > 0: 132 | start_index = abs(np.array(self.mansave_cam_timestamp_deque) - msave_start_time).argmin() 133 | self._opencv_save(list(self.mansave_cam_data_deque)[start_index:], 134 | list(self.mansave_cam_timestamp_deque)[start_index:], 135 | file_label='manual_CameraVideo', 136 | file_time=time.strftime('%b-%d-%H-%M-%S', time.localtime(msave_end_time)), 137 | file_format='.mp4', 138 | file_dir=folder_create_with_curmonth(self.file_save_dir)) 139 | # clear the save deque 140 | self.mansave_cam_data_deque.clear() 141 | self.mansave_cam_timestamp_deque.clear() 142 | # deactivate flag 143 | self.mansave_flag.value = None 144 | 145 | """auto save""" 146 | # auto save, constantly high from the beginning to the end of recording 147 | if self.autosave_flag.value: 148 | # for email notification 149 | if packet['source'] == 'camera': 150 | self.email_image.value = packet['data'] 151 | 152 | # save data for auto save 153 | if packet['source'] == 'radar': 154 | self.autosave_rdr_data_deque.append(packet['data']) 155 | self.autosave_rdr_timestamp_deque.append(packet['timestamp']) 156 | elif packet['source'] == 'camera': 157 | # take some beginning buffer frames for video recording info 158 | if not self.buffer_status: 159 | self.autosave_cam_buffer_deque.append(packet['data']) 160 | self.autosave_cam_timestamp_deque.append(packet['timestamp']) 161 | 162 | # open VideoWriter for camera after buffer deque is full 163 | if len(self.autosave_cam_buffer_deque) == self.autosave_cam_buffer_deque.maxlen and not self.videowriter_status: 164 | self.asave_info['file_time'] = time.strftime('%b-%d-%H-%M-%S', time.localtime()) 165 | self.asave_info['fps'] = len(self.autosave_cam_timestamp_deque) / (self.autosave_cam_timestamp_deque[-1] - self.autosave_cam_timestamp_deque[0]) 166 | self.asave_info['file_path'] = folder_create_with_curmonth(self.file_save_dir) + f"{self.experiment_name}_{self.asave_info['file_label']}_{self.asave_info['file_time']}{self.asave_info['file_format']}" 167 | width = self.autosave_cam_buffer_deque[0].shape[1] 168 | height = 
self.autosave_cam_buffer_deque[0].shape[0] 169 | self.videowriter.open(self.asave_info['file_path'], cv2.VideoWriter_fourcc(*'mp4v'), self.asave_info['fps'], (width, height)) 170 | # set status as done 171 | self.videowriter_status = True 172 | self.buffer_status = True 173 | 174 | if self.videowriter_status: 175 | # pop all buffer frames to VideoWriter 176 | for _ in range(len(self.autosave_cam_buffer_deque)): 177 | self.videowriter.write(self.autosave_cam_buffer_deque.popleft()) 178 | # write current frame as usual 179 | self.videowriter.write(packet['data']) 180 | else: 181 | # for radar 182 | if len(self.autosave_rdr_data_deque) > 0: 183 | # set start and end save time for asynchronous modules 184 | asave_end_time = time.time() - self.autosave_end_remove_period 185 | end_index = abs(np.array(self.autosave_rdr_timestamp_deque) - asave_end_time).argmin() + 1 186 | self._pickle_save(list(self.autosave_rdr_data_deque)[:end_index], 187 | file_label='auto_RadarSeq', 188 | file_time=self.asave_info['file_time'], 189 | file_dir=folder_create_with_curmonth(self.file_save_dir)) 190 | # clear the save deque 191 | self.autosave_rdr_data_deque.clear() 192 | self.autosave_rdr_timestamp_deque.clear() 193 | # for camera 194 | if self.videowriter_status: 195 | self.videowriter.release() 196 | self._log(f"{os.path.basename(self.asave_info['file_path'])} saved with FPS: {round(self.asave_info['fps'], 2)}.") 197 | self.videowriter_status = False 198 | self.buffer_status = False 199 | # send file name for compression 200 | self.compress_video_file.value = self.asave_info['file_path'] 201 | winsound.Beep(800, 100) 202 | winsound.Beep(500, 100) 203 | 204 | # used for radar data save of all cases 205 | def _pickle_save(self, data, file_label, file_time, file_dir='./'): 206 | file_name = f'{self.experiment_name}_{file_label}_{file_time}' 207 | with open(file_dir + file_name, 'wb') as file: 208 | pickle.dump(data, file) 209 | self._log(f'{file_name} saved.') 210 | winsound.Beep(500, 100) 211 | winsound.Beep(800, 100) 212 | 213 | # used for camera data save of manual case only 214 | def _opencv_save(self, data, timestamp, file_label, file_time, file_format, file_dir='./'): 215 | file_name = f'{self.experiment_name}_{file_label}_{file_time}{file_format}' 216 | # saved as image 217 | if file_format == '.jpg' or file_format == '.png': 218 | cv2.imwrite(file_dir + file_name, data) 219 | self._log(f'{file_name} saved.') 220 | # saved as video 221 | elif file_format == '.mp4': 222 | try: # get video fps 223 | fps = len(timestamp) / (timestamp[-1] - timestamp[0]) 224 | except: 225 | fps = 30 # for case of time_length = 0 by fast saving 226 | 227 | # open VideoWriter and save video 228 | self.videowriter.open(file_dir + file_name, cv2.VideoWriter_fourcc(*'mp4v'), fps, (data[0].shape[1], data[0].shape[0])) 229 | for f in data: 230 | self.videowriter.write(f) 231 | self.videowriter.release() 232 | self._log(f'{file_name} saved with FPS: {round(fps, 2)}.') 233 | self.compress_video_file.value = file_dir + file_name 234 | else: 235 | self._log(f'{file_name} save failed! 
Unsupported file type.') 236 | winsound.Beep(800, 100) 237 | winsound.Beep(500, 100) 238 | 239 | def _log(self, txt): # print with device name 240 | print(f'[{self.__class__.__name__}]\t{txt}') 241 | 242 | def __del__(self): 243 | if self.autosave_flag.value: 244 | # for radar 245 | if len(self.autosave_rdr_data_deque) > 0: 246 | # set start and end save time for asynchronous modules 247 | asave_end_time = time.time() 248 | end_index = abs(np.array(self.autosave_rdr_timestamp_deque) - asave_end_time).argmin() + 1 249 | self._pickle_save(list(self.autosave_rdr_data_deque)[:end_index], 250 | file_label='auto_RadarSeq', 251 | file_time=self.asave_info['file_time'], 252 | file_dir=folder_create_with_curmonth(self.file_save_dir)) 253 | # for camera 254 | if self.videowriter_status: 255 | self.videowriter.release() 256 | self._log(f"{os.path.basename(self.asave_info['file_path'])} saved with FPS: {round(self.asave_info['fps'], 2)}.") 257 | winsound.Beep(800, 100) 258 | winsound.Beep(500, 100) 259 | else: 260 | self.videowriter.release() 261 | 262 | self._log(f"Closed. Timestamp: {datetime.now().strftime('%Y-%m-%d_%H:%M:%S')}") 263 | self.status['Module_SVC'] = False 264 | -------------------------------------------------------------------------------- /library/sync_monitor.py: -------------------------------------------------------------------------------- 1 | """ 2 | Designed to monitor and sync the queues, abbr. SCM 3 | """ 4 | 5 | from datetime import datetime 6 | from multiprocessing import Manager 7 | from time import sleep 8 | 9 | import numpy as np 10 | 11 | 12 | class SyncMonitor: 13 | def __init__(self, run_flag, radar_rd_queue_list, shared_param_dict, **kwargs_CFG): 14 | """ 15 | get shared values and queues 16 | """ 17 | self.run_flag = run_flag 18 | # radar rawdata queue list 19 | self.radar_rd_queue_list = radar_rd_queue_list 20 | # shared params 21 | try: 22 | self.save_queue = shared_param_dict['save_queue'] 23 | except: 24 | self.save_queue = Manager().Queue(maxsize=0) 25 | self.status = shared_param_dict['proc_status_dict'] 26 | self.status['Module_SCM'] = True 27 | 28 | """ 29 | pass config static parameters 30 | """ 31 | """ module own config """ 32 | SCM_CFG = kwargs_CFG['SYNC_MONITOR_CFG'] 33 | self.rd_qsize_warning = SCM_CFG['rd_qsize_warning'] 34 | self.sc_qsize_warning = SCM_CFG['sc_qsize_warning'] 35 | 36 | """ 37 | self content 38 | """ 39 | self._log('Start...') 40 | 41 | def run(self): 42 | while self.run_flag.value: 43 | # monitor radar queues 44 | rd_qsize_np = np.array([self.radar_rd_queue_list[i].qsize() for i in range(len(self.radar_rd_queue_list))]) 45 | if sum(rd_qsize_np) > self.rd_qsize_warning: 46 | self._log(f'Radar queue size: {list(rd_qsize_np)}, pls save data until no traffic!') 47 | # sync the queue list when there are multiple queue 48 | if len(rd_qsize_np) > 1: 49 | diff_np = rd_qsize_np - min(rd_qsize_np) 50 | # need calibration when out of sync 51 | if max(diff_np) > 1 and min(rd_qsize_np) == 0: 52 | diff_np = diff_np - 1 # allow diff no more than 1 53 | diff_np[diff_np < 0] = 0 54 | # for each queue 55 | for q_index in range(len(self.radar_rd_queue_list)): 56 | for _ in range(diff_np[q_index]): 57 | self.radar_rd_queue_list[q_index].get(block=True, timeout=1) 58 | self._log(f'Self Sync Calibration: Queue {q_index}') 59 | 60 | # monitor SaveCenter queue 61 | sc_qsize = self.save_queue.qsize() 62 | if sc_qsize > self.sc_qsize_warning: 63 | self._log(f'SaveCenter queue size: {sc_qsize}, pls save data until no traffic!') 64 | 65 | sleep(2) 66 | 67 | 
def _log(self, txt): # print with device name 68 | print(f'[{self.__class__.__name__}]\t{txt}') 69 | 70 | def __del__(self): 71 | self._log(f"Closed. Timestamp: {datetime.now().strftime('%Y-%m-%d_%H:%M:%S')}") 72 | self.status['Module_SCM'] = False 73 | self.run_flag.value = False 74 | -------------------------------------------------------------------------------- /library/utils.py: -------------------------------------------------------------------------------- 1 | """ 2 | Designed for basic function, can replace the module data_processor step by step 3 | """ 4 | 5 | import os 6 | import shutil 7 | import time 8 | from math import ceil 9 | 10 | import numpy as np 11 | from send2trash import send2trash 12 | 13 | 14 | # file and folder functions 15 | def folder_create(folderpath): 16 | """ 17 | create the folder 18 | :param folderpath: the folderpath 19 | :return: 20 | """ 21 | try: 22 | os.makedirs(folderpath) 23 | except: 24 | pass 25 | 26 | 27 | def folder_create_with_curmonth(folderpath): 28 | """ 29 | create the folder with current date (Year-Month) named 30 | :param folderpath: the folderpath input 31 | :return: folderpath 32 | """ 33 | folderpath = folderpath + time.strftime("%Y_%b", time.localtime()) + '/' 34 | try: 35 | os.makedirs(folderpath) 36 | except: 37 | pass 38 | return folderpath 39 | 40 | 41 | def folder_clean_recreate(folderpath): 42 | """ 43 | clean the folder and recreate it 44 | :param folderpath: the folderpath 45 | :return: 46 | """ 47 | try: 48 | shutil.rmtree(folderpath) 49 | os.makedirs(folderpath) 50 | except: 51 | os.makedirs(folderpath) 52 | 53 | 54 | def folder_safeclean_recreate(folderpath): 55 | """ 56 | clean the folder to trash and recreate it 57 | :param folderpath: the folderpath 58 | :return: 59 | """ 60 | try: 61 | send2trash(folderpath) 62 | os.makedirs(folderpath) 63 | except: 64 | os.makedirs(folderpath) 65 | 66 | 67 | # list processing functions 68 | def list_nesting_remover(input_list, output_list=None): 69 | """ 70 | to extract each element inside a list with deep nesting level 71 | :param input_list: (list/element) a list with multiple nesting level 72 | :param output_list: (list) a cumulated list during iteration 73 | :return: output_list: (list) a non-nesting list 74 | """ 75 | # list_nesting_remover = lambda list_in: [list_out for i in list_in for list_out in list_nesting_remover(i)] if type(list_in) is list else [list_in] 76 | 77 | if output_list is None: 78 | output_list = [] 79 | if type(input_list) is list: 80 | for i in input_list: 81 | output_list = list_nesting_remover(i, output_list) 82 | else: 83 | output_list.append(input_list) 84 | return output_list 85 | 86 | 87 | def random_split(data, split_ratio): 88 | """ 89 | lighter version of torch.torch.utils.data.random_split 90 | split dataset into 2 subsets at 1st dimension 91 | :param data: (1D-list, ndarray) the dataset need to be split 92 | :param split_ratio: (float) the ratio for split 93 | :return: subdata1: data subset1 94 | subdata2: data subset2 95 | """ 96 | # convert input data to numpy array 97 | data = np.array(data) 98 | # get split index of the data 99 | subdata1_idx = np.random.choice(len(data), int(split_ratio * len(data)), replace=False) 100 | subdata2_idx = np.delete(np.arange(len(data)), subdata1_idx) 101 | subdata1 = data[subdata1_idx] 102 | subdata2 = data[subdata2_idx] 103 | return subdata1, subdata2 104 | 105 | 106 | def dataset_split(data, split_ratio, random=True): 107 | """ 108 | lighter version of torch.torch.utils.data.random_split 109 | split dataset into 
multiple subsets at 1st dimension randomly by default 110 | :param data: (1D-list, ndarray) the dataset need to be split 111 | :param split_ratio: (tuple/list - float) the ratio for split, e.g., (0.7, 0.2, 0.1) 112 | :param random: (Boolean) the enable for random the dataset 113 | :return: subdata_list: data subsets 114 | """ 115 | # convert input data to numpy array 116 | data = np.array(data) 117 | datalen = len(data) 118 | subdata_list = [] 119 | 120 | split_ratio_cum = np.cumsum(split_ratio) 121 | if split_ratio_cum[-1] < 1: 122 | split_ratio_cum = np.concatenate([split_ratio_cum, [1]]) 123 | 124 | if random: 125 | np.random.shuffle(data) 126 | 127 | ra1 = 0 128 | for ra2 in split_ratio_cum: 129 | # get split index of the data 130 | subdata_idx = np.arange(round(datalen * ra1), round(datalen * ra2), 1) 131 | subdata = data[subdata_idx] 132 | 133 | # update split index 134 | ra1 = ra2 135 | 136 | # append the subdata to the list 137 | subdata_list.append(subdata) 138 | 139 | # remove empty nparray 140 | if len(subdata_list[-1]) == 0: 141 | subdata_list.pop(-1) 142 | 143 | return subdata_list 144 | 145 | 146 | # numpy 2D data processing functions 147 | def np_get_idx_bool(data, axis, range_lim, mode=1): 148 | """ 149 | only one axis can be processed in one call 150 | :param data: (ndarray) data_numbers(n) * channels(c) 151 | :param axis: (int) the axis number 152 | :param range_lim: (tuple/int/float) (bottom_lim, upper_lim) element can be None, the range for preserved data 153 | :param mode: (int) 0-[min, max], 1-[min, max), 2-(min, max], 3-(min, max), include boundary or not 154 | :return: preserved_index: (ndarray-bool) data_numbers(n) 155 | removed_index: (ndarray-bool) data_numbers(n) 156 | """ 157 | # initialize the index array 158 | preserved_index = np.ones(len(data), dtype=bool) 159 | removed_index = np.zeros(len(data), dtype=bool) 160 | 161 | if range_lim is not None: 162 | if type(range_lim) is tuple or type(range_lim) is list: # expect list type 163 | if range_lim[0] is not None: 164 | if mode == 0 or mode == 1: 165 | index = data[:, axis] >= range_lim[0] 166 | else: 167 | index = data[:, axis] > range_lim[0] 168 | # update the index 169 | preserved_index = preserved_index & index 170 | removed_index = removed_index | ~index 171 | if range_lim[1] is not None: 172 | if mode == 0 or mode == 2: 173 | index = data[:, axis] <= range_lim[1] 174 | else: 175 | index = data[:, axis] < range_lim[1] 176 | # update the index 177 | preserved_index = preserved_index & index 178 | removed_index = removed_index | ~index 179 | else: # expect int/float type 180 | index = data[:, axis] == range_lim 181 | # update the index 182 | preserved_index = preserved_index & index 183 | removed_index = removed_index | ~index 184 | return preserved_index, removed_index 185 | 186 | 187 | def np_filter(data, axis, range_lim, mode=1): 188 | """ 189 | only one axis can be processed in one call 190 | :param data: (ndarray) data_numbers(n) * channels(c) 191 | :param axis: (int) the axis number 192 | :param range_lim: (tuple/int/float) (bottom_lim, upper_lim) element can be None, the range for preserved data 193 | :param mode: (int) 0-[min, max], 1-[min, max), 2-(min, max], 3-(min, max), include boundary or not 194 | :return: data_preserved: (ndarray) data_numbers(n) * channels(c) 195 | data_removed: (ndarray) data_numbers(n) * channels(c) 196 | """ 197 | preserved_index, removed_index = np_get_idx_bool(data, axis, range_lim, mode) 198 | # get data and noise 199 | data_preserved = data[preserved_index] 200 | 
data_removed = data[removed_index] 201 | return data_preserved, data_removed 202 | 203 | 204 | def ES_speed_filter(data_points, ES_threshold): 205 | """ 206 | :param data_points: (ndarray) data_numbers(n) * channels(c=5) 207 | :param ES_threshold: (dict) the ES threshold 208 | :return: data_points: (ndarray) data_numbers(n) * channels(c=5) 209 | noise: (ndarray) data_numbers(n) * channels(c=5) 210 | """ 211 | # remove points with low energy strength 212 | data_points, noise = np_filter(data_points, axis=4, range_lim=ES_threshold['range']) 213 | 214 | # identify the noise with speed 215 | if len(noise) > 0 and ES_threshold['speed_none_0_exception']: 216 | noise, noise_with_speed = np_filter(noise, axis=3, range_lim=0) 217 | data_points = np.concatenate([data_points, noise_with_speed]) 218 | return data_points, noise 219 | 220 | 221 | def np_2D_set_operations(dataA, dataB, ops='intersection'): 222 | """ 223 | set operations for 2D ndarray, provide intersection, subtract, union 224 | :param dataA: (ndarray) data_numbers(n) * channels(c) 225 | :param dataB: (ndarray) data_numbers(m) * channels(c) 226 | :param ops: (str) operation name 227 | :return: data_preserved: (ndarray) data_numbers(n) * channels(c) 228 | """ 229 | maskA = np.all(dataA[:, np.newaxis] == dataB, axis=-1).any(axis=1) 230 | maskB = np.all(dataB[:, np.newaxis] == dataA, axis=-1).any(axis=1) 231 | 232 | intersection = dataA[maskA] # or dataB[maskB] 233 | dataA_subtract = dataA[~maskA] 234 | dataB_subtract = dataB[~maskB] 235 | 236 | if ops == 'intersection': 237 | return intersection 238 | elif ops == 'subtract': 239 | return dataA_subtract 240 | elif ops == 'subtract_both': 241 | return dataA_subtract, dataB_subtract 242 | elif ops == 'exclusive_or': 243 | exclusive_or = np.concatenate([dataA_subtract, dataB_subtract]) 244 | return exclusive_or 245 | elif ops == 'union': 246 | union = np.concatenate([dataA_subtract, intersection, dataB_subtract]) 247 | return union 248 | else: 249 | raise ValueError('ops can be intersection, subtract, subtract_both, exclusive_or, union') 250 | 251 | 252 | def np_repeated_points_removal(data, axes=None): 253 | """ 254 | remove the repeated points, by default compare all axes (only remove if values in all axes are repeated), 255 | compare the designated axes if axes are listed (only remove if values in listed axes are repeated) 256 | :param data: (ndarray) data_numbers(n) * channels(c) 257 | :param axes: (tuple/list) axes used to compare the data, if None then means all axes will be used to compare 258 | :return: data_points_unique (ndarray) data_numbers(n) * channels(c) 259 | """ 260 | # lower the precision to speed up 261 | data = data.astype(np.float16) 262 | # data = np.around(data, decimals=2) # not working when the number is too big more than 500 263 | 264 | # remove repeated points 265 | if axes is None: 266 | data_points_unique = np.unique(data, axis=0) 267 | elif type(axes) is tuple or type(axes) is list: 268 | data_points_unique = np.ndarray([0, data.shape[1]]) 269 | # get all rows according to the axes and do unique 270 | data_sub = data[:, axes] 271 | data_sub = np.unique(data_sub, axis=0) 272 | # extract from the original data 273 | for ds in data_sub: 274 | temp = data 275 | # for each unique data_sub, relocate it in data 276 | for i, axis in enumerate(axes): 277 | temp, _ = np_filter(temp, axis=axis, range_lim=ds[i]) 278 | # get average values for the rest axis 279 | data_points_unique = np.concatenate([data_points_unique, np.average(temp, axis=0)[np.newaxis]]) 280 | else: 281 | 
raise Exception('axes type is not supported') 282 | return data_points_unique 283 | 284 | 285 | def np_window_sliding(data, window_length, step): 286 | """ 287 | window slide and stack at 1st dimension of data, 288 | if the last window step can not be formed due to insufficient rest data, it will be dropped. 289 | :param data: (ndarray) data_numbers(n) * channels(c) 290 | :param window_length: (int) the stacked length for 2nd dimension time 291 | :param step: (int) >0, the number of data skipped for each sliding 292 | :return: data_stacked: (ndarray) data_numbers(n) * time(t) * channels(c) 293 | """ 294 | total_step = ceil((len(data) - window_length) / step) 295 | if total_step > 0: 296 | data_stacked = np.ndarray([0, window_length] + list(data.shape)[1:]) 297 | for i in range(total_step): 298 | data_window = data[i * step:i * step + window_length][np.newaxis] 299 | data_stacked = np.concatenate([data_stacked, data_window]) 300 | else: 301 | raise Exception(f'The data length is not long enough!') 302 | return data_stacked 303 | 304 | 305 | if __name__ == '__main__': 306 | # dataset = [2, 6, 9, 3, 7, 8, 10, 199, 10] 307 | dataset = np.arange(0, 30, 1).reshape(10, -1) 308 | a, b, c = dataset_split(dataset, (0.7, 0.2, 0.1), True) 309 | pass 310 | -------------------------------------------------------------------------------- /library/video_compressor.py: -------------------------------------------------------------------------------- 1 | """ 2 | Designed to compress the video after save, abbr. VDC 3 | """ 4 | 5 | import os 6 | import time 7 | from datetime import datetime 8 | from multiprocessing import Manager 9 | 10 | from moviepy.editor import VideoFileClip 11 | 12 | 13 | class VideoCompressor: 14 | def __init__(self, run_flag, shared_param_dict, **kwargs_CFG): 15 | """ 16 | get shared values and queues 17 | """ 18 | self.run_flag = run_flag 19 | # shared params 20 | self.compress_video_file = shared_param_dict['compress_video_file'] # sent from save_center 21 | self.status = shared_param_dict['proc_status_dict'] 22 | self.status['Module_VDC'] = True 23 | 24 | """ 25 | pass config static parameters 26 | """ 27 | """ module own config """ 28 | VDC_CFG = kwargs_CFG['VIDEO_COMPRESSOR_CFG'] 29 | self.target_bitrate = VDC_CFG['target_bitrate'] 30 | 31 | """ 32 | self content 33 | """ 34 | self._log('Start...') 35 | 36 | # module entrance 37 | def run(self): 38 | while self.run_flag.value: 39 | if self.compress_video_file.value: 40 | input_file = self.compress_video_file.value 41 | output_file = os.path.join(os.path.dirname(input_file), '.'.join([os.path.basename(input_file).split('.')[0] + '_compressed', os.path.basename(input_file).split('.')[-1]])) 42 | self.compress_video_file.value = None 43 | self.compress_video(input_file, output_file) 44 | time.sleep(1) 45 | 46 | def compress_video(self, _input_file, _output_file): 47 | video = VideoFileClip(_input_file) 48 | video.write_videofile(_output_file, bitrate=self.target_bitrate) 49 | 50 | def _log(self, txt): # print with device name 51 | print(f'[{self.__class__.__name__}]\t{txt}') 52 | 53 | def __del__(self): 54 | self._log(f"Closed.") 55 | self.status['Module_VDC'] = False 56 | 57 | 58 | if __name__ == '__main__': 59 | # generate save flag dict 60 | shared_param = {'compress_video_file': None, # the record video file waiting to be compressed 61 | } 62 | VIDEO_COMPRESSOR_CFG = { 63 | # video quality 64 | 'target_bitrate': '2000k', 65 | } 66 | vc = VideoCompressor(Manager().Value('b', True), shared_param, **{'VIDEO_COMPRESSOR_CFG': 
VIDEO_COMPRESSOR_CFG}) 67 | 68 | import glob 69 | from send2trash import send2trash 70 | 71 | file_dir = '../data/MVB_501/**/' 72 | file_path_list = glob.glob(os.path.join(file_dir, '*.mp4'), recursive=True) 73 | for i, f_path in enumerate(file_path_list): 74 | f_dir = os.path.dirname(f_path) 75 | f_name = os.path.basename(f_path).split('.') 76 | f_name.insert(-1, '_comp.') 77 | f_outputpath = os.path.join(f_dir, ''.join(f_name)) 78 | vc.compress_video(f_path, f_outputpath) 79 | time.sleep(2) 80 | send2trash(f_path) 81 | time.sleep(2) 82 | os.renames(f_outputpath, f_path) 83 | print(f'Finish [{i + 1}/{len(file_path_list)}]') 84 | -------------------------------------------------------------------------------- /library/visualizer.py: -------------------------------------------------------------------------------- 1 | """ 2 | Designed for data visualization, abbr. VIS 3 | """ 4 | 5 | import math 6 | import queue 7 | import time 8 | from datetime import datetime 9 | from multiprocessing import Manager 10 | 11 | import matplotlib 12 | import numpy as np 13 | import winsound 14 | from matplotlib import pyplot as plt 15 | from matplotlib.ticker import LinearLocator 16 | 17 | from library.frame_post_processor import FramePProcessor 18 | 19 | RP_colormap = ['C5', 'C7', 'C8'] # the colormap for radar raw points 20 | ES_colormap = ['lavender', 'thistle', 'violet', 'darkorchid', 'indigo'] # the colormap for radar energy strength 21 | OS_colormap = ['grey', 'green', 'gold', 'red'] # the colormap for object status 22 | 23 | 24 | class Visualizer: 25 | def __init__(self, run_flag, radar_rd_queue_list, shared_param_dict, **kwargs_CFG): 26 | """ 27 | get shared values and queues 28 | """ 29 | self.run_flag = run_flag 30 | # radar rawdata queue list 31 | self.radar_rd_queue_list = radar_rd_queue_list 32 | # shared params 33 | try: 34 | self.save_queue = shared_param_dict['save_queue'] 35 | except: 36 | self.save_queue = Manager().Queue(maxsize=0) 37 | self.mansave_flag = shared_param_dict['mansave_flag'] 38 | self.autosave_flag = shared_param_dict['autosave_flag'] 39 | self.status = shared_param_dict['proc_status_dict'] 40 | self.status['Module_VIS'] = True 41 | """ 42 | pass config static parameters 43 | """ 44 | """ module own config """ 45 | VIS_CFG = kwargs_CFG['VISUALIZER_CFG'] 46 | self.dimension = VIS_CFG['dimension'] 47 | self.VIS_xlim = VIS_CFG['VIS_xlim'] 48 | self.VIS_ylim = VIS_CFG['VIS_ylim'] 49 | self.VIS_zlim = VIS_CFG['VIS_zlim'] 50 | 51 | self.auto_inactive_skip_frame = VIS_CFG['auto_inactive_skip_frame'] 52 | 53 | """ other configs """ 54 | self.MANSAVE_ENABLE = kwargs_CFG['MANSAVE_ENABLE'] 55 | self.AUTOSAVE_ENABLE = kwargs_CFG['AUTOSAVE_ENABLE'] 56 | self.RDR_CFG_LIST = kwargs_CFG['RADAR_CFG_LIST'] 57 | 58 | """ 59 | self content 60 | """ 61 | self.fpp = FramePProcessor(**kwargs_CFG) # call other class 62 | 63 | # setup for matplotlib plot 64 | matplotlib.use('TkAgg') # set matplotlib backend 65 | plt.rcParams['toolbar'] = 'None' # disable the toolbar 66 | # create a figure 67 | self.fig = plt.figure() 68 | # adjust figure position 69 | mngr = plt.get_current_fig_manager() 70 | mngr.window.wm_geometry('+30+30') 71 | # draws a completely frameless window 72 | win = plt.gcf().canvas.manager.window 73 | win.overrideredirect(1) 74 | # interactive mode on, no need plt.show() 75 | plt.ion() 76 | 77 | self._log('Start...') 78 | 79 | # module entrance 80 | def run(self): 81 | if self.dimension == '2D': 82 | # create a plot 83 | ax1 = self.fig.add_subplot(111) 84 | 85 | while 
self.run_flag.value: 86 | # clear and reset 87 | plt.cla() 88 | ax1.set_xlim(self.VIS_xlim[0], self.VIS_xlim[1]) 89 | ax1.set_ylim(self.VIS_ylim[0], self.VIS_ylim[1]) 90 | ax1.xaxis.set_major_locator(LinearLocator(5)) # set axis scale 91 | ax1.yaxis.set_major_locator(LinearLocator(5)) 92 | ax1.set_xlabel('x') 93 | ax1.set_ylabel('y') 94 | ax1.set_title('Radar') 95 | # update the canvas 96 | self._update_canvas(ax1) 97 | 98 | elif self.dimension == '3D': 99 | # create a plot 100 | ax1 = self.fig.add_subplot(111, projection='3d') 101 | 102 | spin = 0 103 | while self.run_flag.value: 104 | # clear and reset 105 | plt.cla() 106 | ax1.set_xlim(self.VIS_xlim[0], self.VIS_xlim[1]) 107 | ax1.set_ylim(self.VIS_ylim[0], self.VIS_ylim[1]) 108 | ax1.set_zlim(self.VIS_zlim[0], self.VIS_zlim[1]) 109 | ax1.xaxis.set_major_locator(LinearLocator(3)) # set axis scale 110 | ax1.yaxis.set_major_locator(LinearLocator(3)) 111 | ax1.zaxis.set_major_locator(LinearLocator(3)) 112 | ax1.set_xlabel('x') 113 | ax1.set_ylabel('y') 114 | ax1.set_zlabel('z') 115 | ax1.set_title('Radar') 116 | spin += 0.04 117 | ax1.view_init(ax1.elev - 0.5 * math.sin(spin), ax1.azim - 0.3 * math.sin(1.5 * spin)) # spin the view angle 118 | # update the canvas 119 | self._update_canvas(ax1) 120 | else: 121 | while self.run_flag.value: 122 | for q in self.radar_rd_queue_list: 123 | _ = q.get(block=True, timeout=5) 124 | 125 | def _update_canvas(self, ax1): 126 | # draw radar point 127 | for RDR_CFG in self.RDR_CFG_LIST: 128 | self._plot(ax1, [RDR_CFG['pos_offset'][0]], [RDR_CFG['pos_offset'][1]], [RDR_CFG['pos_offset'][2]], marker='o', color='DarkRed') 129 | 130 | # get values from queues of all radars 131 | val_data_allradar = np.ndarray([0, 5], dtype=np.float16) 132 | ES_noise_allradar = np.ndarray([0, 5], dtype=np.float16) 133 | save_data_frame = {} 134 | try: 135 | # adaptive short skip when no object is detected 136 | if self.AUTOSAVE_ENABLE and not self.autosave_flag.value: 137 | for _ in range(self.auto_inactive_skip_frame): 138 | for q in self.radar_rd_queue_list: 139 | _ = q.get(block=True, timeout=5) 140 | 141 | for i, RDR_CFG in enumerate(self.RDR_CFG_LIST): 142 | data_1radar = self.radar_rd_queue_list[i].get(block=True, timeout=5) 143 | # apply ES and speed filter for each radar channel 144 | val_data, ES_noise = self.fpp.DP_ES_Speed_filter(data_1radar, RDR_CFG['ES_threshold']) 145 | val_data_allradar = np.concatenate([val_data_allradar, val_data]) 146 | ES_noise_allradar = np.concatenate([ES_noise_allradar, ES_noise]) 147 | # # draw raw point cloud 148 | # self._plot(ax1, val_data[:, 0], val_data[:, 1], val_data[:, 2], marker='.', linestyle='None', color=RP_colormap[i]) 149 | 150 | # save the frames 151 | save_data_frame[RDR_CFG['name']] = data_1radar 152 | 153 | except queue.Empty: 154 | self._log('Raw Data Queue Empty.') 155 | self.run_flag.value = False 156 | 157 | # put frame and time into queue 158 | self.save_queue.put({'source' : 'radar', 159 | 'data' : save_data_frame, 160 | 'timestamp': time.time(), 161 | }) 162 | 163 | # apply global boundary filter 164 | val_data_allradar = self.fpp.FPP_boundary_filter(val_data_allradar) 165 | ES_noise_allradar = self.fpp.FPP_boundary_filter(ES_noise_allradar) 166 | # apply global energy strength filter 167 | val_data_allradar, global_ES_noise = self.fpp.FPP_ES_Speed_filter(val_data_allradar) 168 | # apply background noise filter 169 | val_data_allradar = self.fpp.BGN_filter(val_data_allradar) 170 | 171 | # draw signal energy strength 172 | for i in range(len(ES_colormap)): 173 | 
val_data_allradar_ES, _ = self.fpp.DP_np_filter(val_data_allradar, axis=4, range_lim=(i * 100, (i + 1) * 100))
174 | self._plot(ax1, val_data_allradar_ES[:, 0], val_data_allradar_ES[:, 1], val_data_allradar_ES[:, 2], marker='.', color=ES_colormap[i])
175 | 
176 | # draw valid point, DBSCAN envelope
177 | # vertices_list, valid_points_list, _, DBS_noise = self.fpp.DBS(val_data_allradar)
178 | vertices_list, valid_points_list, _, DBS_noise = self.fpp.DBS_dynamic_ES(val_data_allradar)
179 | for vertices in vertices_list:
180 | self._plot(ax1, vertices[:, 0], vertices[:, 1], vertices[:, 2], linestyle='-', color='salmon')
181 | 
182 | # background noise filter
183 | if self.fpp.BGN_enable:
184 | # update the background noise
185 | if len(vertices_list) > 0:
186 | self.fpp.BGN_update(np.concatenate([ES_noise_allradar, global_ES_noise, DBS_noise]))
187 | else:
188 | self.fpp.BGN_update(np.concatenate([ES_noise_allradar, global_ES_noise]))
189 | # draw BGN area
190 | BGN_block_list = self.fpp.BGN_get_filter_area()
191 | for bgn in BGN_block_list:
192 | self._plot(ax1, bgn[:, 0], bgn[:, 1], bgn[:, 2], marker='.', linestyle='-', color='g')
193 | 
194 | # tracking system
195 | if self.fpp.TRK_enable:
196 | self.fpp.TRK_update_poss_matrix(valid_points_list)
197 | # draw object central points
198 | obj_status_list = []
199 | for person in self.fpp.TRK_people_list:
200 | obj_cp, _, obj_status = person.get_info()
201 | obj_status_list.append(obj_status)
202 | self._plot(ax1, obj_cp[:, 0], obj_cp[:, 1], obj_cp[:, 2], marker='o', color=OS_colormap[obj_status])
203 | # if obj_status == 3: # warning when object falls
204 | # winsound.Beep(1000, 20)
205 | 
206 | # auto save based on object detection
207 | if self.AUTOSAVE_ENABLE:
208 | if len(obj_status_list) > 0 and max(obj_status_list) >= 0: # object detected (guard the empty-list case, as max() raises on an empty list)
209 | # activate flag
210 | self.autosave_flag.value = True
211 | else:
212 | # deactivate flag
213 | self.autosave_flag.value = False
214 | 
215 | # wait at the end of each loop
216 | # plt.pause(0.001)
217 | self._detect_key_press(0.001)
218 | 
219 | def _plot(self, ax, x, y, z, fmt='', **kwargs):
220 | """
221 | :param ax: the current canvas
222 | :param x: data in x-axis
223 | :param y: data in y-axis
224 | :param z: data in z-axis
225 | :param fmt: plot and plot3D fmt
226 | :param kwargs: plot and plot3D marker, linestyle, color
227 | :return: None
228 | """
229 | if len(fmt) > 0: # if fmt is given
230 | if self.dimension == '2D':
231 | ax.plot(x, y, fmt)
232 | elif self.dimension == '3D':
233 | ax.plot3D(x, y, z, fmt)
234 | else: # if keyword parameters are given
235 | for i in ['marker', 'linestyle', 'color']:
236 | if not (i in kwargs):
237 | kwargs[i] = 'None'
238 | if self.dimension == '2D':
239 | ax.plot(x, y, marker=kwargs['marker'], linestyle=kwargs['linestyle'], color=kwargs['color'])
240 | elif self.dimension == '3D':
241 | ax.plot3D(x, y, z, marker=kwargs['marker'], linestyle=kwargs['linestyle'], color=kwargs['color'])
242 | 
243 | def _detect_key_press(self, timeout): # a press in the very first loop arrives before _press is connected, so it is guarded below
244 | keyPressed = plt.waitforbuttonpress(timeout=timeout) # detect whether key is pressed or not
245 | plt.gcf().canvas.mpl_connect('key_press_event', self._press) # detect which key is pressed
246 | if keyPressed and hasattr(self, 'the_key'): # ignore a press that arrives before any key has been registered
247 | if self.the_key == 'escape':
248 | self.run_flag.value = False
249 | # manual save trigger
250 | if self.MANSAVE_ENABLE:
251 | if self.the_key == '+':
252 | # activate flag
253 | self.mansave_flag.value = 'image'
254 | elif self.the_key == '0':
255 | # activate flag
256 | 
self.mansave_flag.value = 'video'
257 | 
258 | def _press(self, event):
259 | self.the_key = event.key
260 | 
261 | def _log(self, txt): # print with class name
262 | print(f'[{self.__class__.__name__}]\t{txt}')
263 | 
264 | def __del__(self):
265 | plt.close(self.fig)
266 | self._log(f"Closed. Timestamp: {datetime.now().strftime('%Y-%m-%d_%H:%M:%S')}")
267 | self.status['Module_VIS'] = False
268 | self.run_flag.value = False
269 | 
-------------------------------------------------------------------------------- /main.py: --------------------------------------------------------------------------------
1 | """
2 | Author: DarkSZChao
3 | Date: 20/09/2024
4 | 
5 | In this project:
6 | file_name is os.path.basename()
7 | file_dir is os.path.dirname()
8 | file_path is dir+name
9 | 
10 | All the limits are set as [a, b)
11 | """
12 | 
13 | import socket
14 | import tkinter as tk
15 | import tkinter.font as tkf
16 | import warnings
17 | from multiprocessing import Process, Manager
18 | from time import sleep
19 | 
20 | import winsound
21 | 
22 | # import essential modules
23 | from library import RadarReader, Visualizer, SyncMonitor
24 | 
25 | # import optional modules, disabling a module cleanly when its import fails
26 | try:
27 | from library import SaveCenter
28 | 
29 | SVC_enable = True
30 | except ImportError:
31 | SVC_enable = False
32 | try:
33 | from library import Camera
34 | 
35 | CAM_enable = True
36 | except ImportError:
37 | CAM_enable = False
38 | try:
39 | from library import EmailNotifier
40 | 
41 | EMN_enable = True
42 | except ImportError:
43 | EMN_enable = False
44 | try:
45 | from library import VideoCompressor
46 | 
47 | VDC_enable = True
48 | except ImportError:
49 | VDC_enable = False
50 | 
51 | # import module configs
52 | hostname = socket.gethostname()
53 | if hostname == 'IT077979RTX2080':
54 | from cfg.config_queens059 import *
55 | elif hostname == 'IT080027':
56 | from cfg.config_demo import *
57 | elif hostname == 'SZC-LAPTOP-Pro':
58 | from cfg.config_demo import *
59 | # from cfg.config_cp107 import *
60 | else:
61 | from cfg.config_demo import *
62 | warnings.warn('Hostname is not found! 
Default config is applied.') 63 | 64 | 65 | # start the instance for the process 66 | def radar_proc_method(_run_flag, _radar_rd_queue, _shared_param_dict, **_kwargs_CFG): 67 | radar = RadarReader(run_flag=_run_flag, radar_rd_queue=_radar_rd_queue, shared_param_dict=_shared_param_dict, **_kwargs_CFG) 68 | radar.run() 69 | 70 | 71 | def vis_proc_method(_run_flag, _radar_rd_queue_list, _shared_param_dict, **_kwargs_CFG): 72 | vis = Visualizer(run_flag=_run_flag, radar_rd_queue_list=_radar_rd_queue_list, shared_param_dict=_shared_param_dict, **_kwargs_CFG) 73 | vis.run() 74 | 75 | 76 | def monitor_proc_method(_run_flag, _radar_rd_queue_list, _shared_param_dict, **_kwargs_CFG): 77 | sync = SyncMonitor(run_flag=_run_flag, radar_rd_queue_list=_radar_rd_queue_list, shared_param_dict=_shared_param_dict, **_kwargs_CFG) 78 | sync.run() 79 | 80 | 81 | def save_proc_method(_run_flag, _shared_param_dict, **_kwargs_CFG): 82 | save = SaveCenter(run_flag=_run_flag, shared_param_dict=_shared_param_dict, **_kwargs_CFG) 83 | save.run() 84 | 85 | 86 | def camera_proc_method(_run_flag, _shared_param_dict, **_kwargs_CFG): 87 | cam = Camera(run_flag=_run_flag, shared_param_dict=_shared_param_dict, **_kwargs_CFG) 88 | cam.run() 89 | 90 | 91 | def email_proc_method(_run_flag, _shared_param_dict, **_kwargs_CFG): 92 | email = EmailNotifier(run_flag=_run_flag, shared_param_dict=_shared_param_dict, **_kwargs_CFG) 93 | email.run() 94 | 95 | 96 | def vidcompress_proc_method(_run_flag, _shared_param_dict, **_kwargs_CFG): 97 | vidcompress = VideoCompressor(run_flag=_run_flag, shared_param_dict=_shared_param_dict, **_kwargs_CFG) 98 | vidcompress.run() 99 | 100 | 101 | def test_proc_method(_run_flag, _shared_param_dict): 102 | _run_flag = _run_flag 103 | _manual_save_flag = _shared_param_dict['mansave_flag'] 104 | _auto_save_flag = _shared_param_dict['autosave_flag'] 105 | 106 | def gui_button_style(): 107 | style = tkf.Font(family='Calibri', size=16, weight=tkf.BOLD, underline=False, overstrike=False) 108 | return style 109 | 110 | def gui_label_style(): 111 | style = tkf.Font(family='Calibri', size=18, weight=tkf.BOLD, underline=False, overstrike=False) 112 | return style 113 | 114 | def manual_save_image(): 115 | _manual_save_flag.value = 'image' 116 | 117 | def manual_save_video(): 118 | _manual_save_flag.value = 'video' 119 | 120 | def quit(): 121 | _run_flag.value = False 122 | root.destroy() 123 | 124 | # create main window for display 125 | root = tk.Tk() 126 | root.withdraw() 127 | 128 | # sub-window for buttons 129 | window1 = tk.Toplevel() 130 | window1.wm_attributes('-topmost', 1) # keep the window at the top 131 | window1.overrideredirect(True) # remove label area 132 | window1.geometry('+40+510') 133 | window1.resizable(False, False) 134 | tk.Button(window1, text='Save_image', bg='lightblue', width=12, font=gui_button_style(), command=manual_save_image).grid(row=1, column=1) 135 | tk.Button(window1, text='Save_video', bg='lightblue', width=12, font=gui_button_style(), command=manual_save_video).grid(row=1, column=2) 136 | tk.Button(window1, text='Quit', bg='firebrick', width=12, font=gui_button_style(), command=quit).grid(row=1, column=3) 137 | 138 | # sub-window for process status 139 | window2 = tk.Toplevel() 140 | window2.wm_attributes('-topmost', 1) # keep the window at the top 141 | window2.overrideredirect(True) # remove label area 142 | window2.geometry('+670+60') 143 | window2.resizable(False, False) 144 | 145 | # create labels 146 | proc_status_label_dict = {} 147 | for _proc, _status in 
_shared_param_dict['proc_status_dict'].items(): 148 | label_status = tk.Label(window2, text='', font=gui_label_style(), anchor="w") 149 | label_status.pack(fill="x", padx=20, pady=5) 150 | proc_status_label_dict[_proc] = label_status 151 | 152 | def update_status(): 153 | for __proc, __status in _shared_param_dict['proc_status_dict'].items(): 154 | proc_status_label_dict[__proc].config(text=f"{__proc}:\t{'ON' if __status else 'OFF'}") 155 | if __status: 156 | proc_status_label_dict[__proc].config(fg='green') 157 | else: 158 | proc_status_label_dict[__proc].config(fg='red') 159 | 160 | root.after(50, update_status) # update 161 | 162 | # for update 163 | update_status() 164 | # for display 165 | root.mainloop() 166 | 167 | 168 | # program entrance 169 | if __name__ == '__main__': 170 | # generate shared variables between processes 171 | run_flag = Manager().Value('b', True) # this flag control whole system running 172 | # generate save flag dict 173 | shared_param_dict = {'mansave_flag' : Manager().Value('c', None), # set as None, 'image' or 'video', only triggered at the end of recording 174 | 'autosave_flag' : Manager().Value('b', False), # set as False, True or False, constantly high from the beginning to the end of recording 175 | 'compress_video_file': Manager().Value('c', None), # the record video file waiting to be compressed 176 | 'email_image' : Manager().Value('f', None), # for image info from save_center to email_notifier module 177 | 'proc_status_dict' : Manager().dict(), # for process status 178 | } 179 | 180 | # generate shared queues 181 | radar_rd_queue_list = [] # radar rawdata queue list 182 | proc_list = [] 183 | for RADAR_CFG in RADAR_CFG_LIST: 184 | radar_rd_queue = Manager().Queue() 185 | kwargs_CFG = {'RADAR_CFG': RADAR_CFG, 'FRAME_EARLY_PROCESSOR_CFG': FRAME_EARLY_PROCESSOR_CFG} 186 | radar_proc = Process(target=radar_proc_method, args=(run_flag, radar_rd_queue, shared_param_dict), kwargs=kwargs_CFG, name=RADAR_CFG['name']) 187 | radar_rd_queue_list.append(radar_rd_queue) 188 | proc_list.append(radar_proc) 189 | kwargs_CFG = {'VISUALIZER_CFG' : VISUALIZER_CFG, 190 | 'RADAR_CFG_LIST' : RADAR_CFG_LIST, 191 | 'MANSAVE_ENABLE' : MANSAVE_ENABLE, 192 | 'AUTOSAVE_ENABLE' : AUTOSAVE_ENABLE, 193 | 'FRAME_POST_PROCESSOR_CFG': FRAME_POST_PROCESSOR_CFG, 194 | 'DBSCAN_GENERATOR_CFG' : DBSCAN_GENERATOR_CFG, 195 | 'BGNOISE_FILTER_CFG' : BGNOISE_FILTER_CFG, 196 | 'HUMAN_TRACKING_CFG' : HUMAN_TRACKING_CFG, 197 | 'HUMAN_OBJECT_CFG' : HUMAN_OBJECT_CFG, 198 | 'SYNC_MONITOR_CFG' : SYNC_MONITOR_CFG} 199 | # start essential processes 200 | vis_proc = Process(target=vis_proc_method, args=(run_flag, radar_rd_queue_list, shared_param_dict), kwargs=kwargs_CFG, name='Module_VIS') # visualization process 201 | proc_list.append(vis_proc) 202 | monitor_proc = Process(target=monitor_proc_method, args=(run_flag, radar_rd_queue_list, shared_param_dict), kwargs=kwargs_CFG, name='Module_SCM') # queue monitor process 203 | proc_list.append(monitor_proc) 204 | 205 | # optional processes, can be disabled 206 | try: 207 | kwargs_CFG.update({'SAVE_CENTER_CFG': SAVE_CENTER_CFG}) 208 | shared_param_dict.update({'save_queue': Manager().Queue(maxsize=2000)}) 209 | if SVC_enable: 210 | save_proc = Process(target=save_proc_method, args=(run_flag, shared_param_dict), kwargs=kwargs_CFG, name='Module_SVC') # save center process 211 | proc_list.append(save_proc) 212 | except: 213 | pass 214 | try: 215 | kwargs_CFG.update({'CAMERA_CFG': CAMERA_CFG}) 216 | if CAM_enable: 217 | camera_proc = 
Process(target=camera_proc_method, args=(run_flag, shared_param_dict), kwargs=kwargs_CFG, name='Module_CAM') # camera process 218 | proc_list.append(camera_proc) 219 | except: 220 | pass 221 | try: 222 | kwargs_CFG.update({'EMAIL_NOTIFIER_CFG': EMAIL_NOTIFIER_CFG}) 223 | if EMN_enable: 224 | email_proc = Process(target=email_proc_method, args=(run_flag, shared_param_dict), kwargs=kwargs_CFG, name='Module_EMN') # email notifier process 225 | proc_list.append(email_proc) 226 | except: 227 | pass 228 | # try: 229 | # kwargs_CFG.update({'VIDEO_COMPRESSOR_CFG': VIDEO_COMPRESSOR_CFG}) 230 | # if VDC_enable: 231 | # vidcompress_proc = Process(target=vidcompress_proc_method, args=(run_flag, shared_param_dict), kwargs=kwargs_CFG) # video compress process 232 | # proc_list.append(vidcompress_proc) 233 | # except: 234 | # pass 235 | 236 | test_proc = Process(target=test_proc_method, args=(run_flag, shared_param_dict), name='Module_GUI') # GUI process 237 | proc_list.insert(0, test_proc) 238 | for proc in proc_list: 239 | if proc.name == 'Module_GUI': 240 | shared_param_dict['proc_status_dict'][proc.name] = True 241 | else: 242 | shared_param_dict['proc_status_dict'][proc.name] = False 243 | 244 | # start the processes and wait to finish 245 | for t in proc_list: 246 | t.start() 247 | sleep(0.5) 248 | for t in proc_list: 249 | t.join() 250 | sleep(0.2) 251 | 252 | winsound.Beep(1000, 500) 253 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | # Basic 2 | numpy 3 | Send2Trash 4 | scipy 5 | pyserial 6 | matplotlib 7 | scikit-learn 8 | 9 | # Camera 10 | opencv-python 11 | 12 | # Gmail 13 | google-api-python-client 14 | google-auth-oauthlib 15 | func-timeout 16 | 17 | # Video Compressor 18 | moviepy -------------------------------------------------------------------------------- /tools/data_load.py: -------------------------------------------------------------------------------- 1 | import math 2 | import pickle 3 | 4 | import matplotlib 5 | import numpy as np 6 | from matplotlib import pyplot as plt 7 | from matplotlib.ticker import LinearLocator 8 | 9 | from library import FramePProcessor, folder_clean_recreate 10 | 11 | RP_colormap = ['C5', 'C7', 'C8'] # the colormap for radar raw points 12 | ES_colormap = ['lavender', 'thistle', 'violet', 'darkorchid', 'indigo'] # the colormap for radar energy strength 13 | OS_colormap = ['grey', 'green', 'gold', 'red'] # the colormap for object status 14 | 15 | 16 | class Visualizer: 17 | def __init__(self, cur_dataset, interval=0.1, image_output_enable=False, **kwargs_CFG): 18 | """ 19 | pass config static parameters 20 | """ 21 | """ module own config """ 22 | VIS_CFG = kwargs_CFG['VISUALIZER_CFG'] 23 | self.dimension = VIS_CFG['dimension'] 24 | self.VIS_xlim = VIS_CFG['VIS_xlim'] 25 | self.VIS_ylim = VIS_CFG['VIS_ylim'] 26 | self.VIS_zlim = VIS_CFG['VIS_zlim'] 27 | 28 | """ other configs """ 29 | self.RDR_CFG_LIST = kwargs_CFG['RADAR_CFG_LIST'] 30 | 31 | """self content""" 32 | self.cur_dataset = cur_dataset 33 | self.interval = interval 34 | self.obj_path_saved = np.ndarray([0, 5]) 35 | self.image_output_enable = image_output_enable 36 | if self.image_output_enable: 37 | self.image_dir = './temp/' 38 | folder_clean_recreate(self.image_dir) 39 | self.image_dpi = 150 40 | 41 | self.fpp = FramePProcessor(**kwargs_CFG) # call other class 42 | 43 | # setup for matplotlib plot 44 | matplotlib.use('TkAgg') # set matplotlib backend 45 | 
# plt.rcParams['toolbar'] = 'None' # disable the toolbar 46 | # create a figure 47 | self.fig = plt.figure(figsize=(9, 9)) 48 | # adjust figure position 49 | mngr = plt.get_current_fig_manager() 50 | mngr.window.wm_geometry('+150+30') 51 | # draws a completely frameless window 52 | win = plt.gcf().canvas.manager.window 53 | win.overrideredirect(1) 54 | # interactive mode on, no need plt.show() 55 | plt.ion() 56 | 57 | def run(self): 58 | if self.dimension == '2D': 59 | # create a plot 60 | ax1 = self.fig.add_subplot(111) 61 | 62 | for idx, cur_data in enumerate(self.cur_dataset): 63 | # clear and reset 64 | plt.cla() 65 | ax1.set_xlim(self.VIS_xlim[0], self.VIS_xlim[1]) 66 | ax1.set_ylim(self.VIS_ylim[0], self.VIS_ylim[1]) 67 | ax1.xaxis.set_major_locator(LinearLocator(5)) # set axis scale 68 | ax1.yaxis.set_major_locator(LinearLocator(5)) 69 | ax1.set_xlabel('x') 70 | ax1.set_ylabel('y') 71 | ax1.set_title('Radar') 72 | # update the canvas 73 | self._update_canvas(ax1, cur_data) 74 | print(idx) 75 | if self.image_output_enable: 76 | self.fig.savefig(self.image_dir + f'{idx:03d}.png', dpi=self.image_dpi) 77 | 78 | elif self.dimension == '3D': 79 | # create a plot 80 | ax1 = self.fig.add_subplot(111, projection='3d') 81 | 82 | spin = 0 83 | for idx, cur_data in enumerate(self.cur_dataset): 84 | # clear and reset 85 | plt.cla() 86 | ax1.set_xlim(self.VIS_xlim[0], self.VIS_xlim[1]) 87 | ax1.set_ylim(self.VIS_ylim[0], self.VIS_ylim[1]) 88 | ax1.set_zlim(self.VIS_zlim[0], self.VIS_zlim[1]) 89 | ax1.xaxis.set_major_locator(LinearLocator(3)) # set axis scale 90 | ax1.yaxis.set_major_locator(LinearLocator(3)) 91 | ax1.zaxis.set_major_locator(LinearLocator(3)) 92 | ax1.set_xlabel('x') 93 | ax1.set_ylabel('y') 94 | ax1.set_zlabel('z') 95 | ax1.set_title('Radar') 96 | spin += 0.04 97 | ax1.view_init(ax1.elev - 0.5 * math.sin(spin), ax1.azim - 0.3 * math.sin(0.2 * spin)) # spin the view angle 98 | # update the canvas 99 | self._update_canvas(ax1, cur_data) 100 | print(idx) 101 | if self.image_output_enable: 102 | self.fig.savefig(self.image_dir + f'{idx:03d}.png', dpi=self.image_dpi) 103 | 104 | def _update_canvas(self, ax1, cur_data): 105 | # draw radar point 106 | for RDR_CFG in self.RDR_CFG_LIST: 107 | self._plot(ax1, [RDR_CFG['pos_offset'][0]], [RDR_CFG['pos_offset'][1]], [RDR_CFG['pos_offset'][2]], marker='o', color='DarkRed') 108 | 109 | # get values from queues of all radars 110 | val_data_allradar = np.ndarray([0, 5], dtype=np.float16) 111 | ES_noise_allradar = np.ndarray([0, 5], dtype=np.float16) 112 | for i, RDR_CFG in enumerate(self.RDR_CFG_LIST): 113 | data_1radar = cur_data[RDR_CFG['name']] 114 | 115 | # apply ES and speed filter for each radar channel 116 | val_data, ES_noise = self.fpp.DP_ES_Speed_filter(data_1radar, RDR_CFG['ES_threshold']) 117 | val_data_allradar = np.concatenate([val_data_allradar, val_data]) 118 | ES_noise_allradar = np.concatenate([ES_noise_allradar, ES_noise]) 119 | 120 | # apply global boundary filter 121 | val_data_allradar = self.fpp.FPP_boundary_filter(val_data_allradar) 122 | ES_noise_allradar = self.fpp.FPP_boundary_filter(ES_noise_allradar) 123 | # apply global energy strength filter 124 | val_data_allradar, global_ES_noise = self.fpp.FPP_ES_Speed_filter(val_data_allradar) 125 | # apply background noise filter 126 | val_data_allradar = self.fpp.BGN_filter(val_data_allradar) 127 | 128 | # val_data_allradar[:, 0] = -val_data_allradar[:, 0] 129 | # val_data_allradar[:, 1] = 4 - val_data_allradar[:, 1] 130 | 131 | # draw signal energy strength 132 | for i in 
range(len(ES_colormap)):
133 | val_data_allradar_ES, _ = self.fpp.DP_np_filter(val_data_allradar, axis=4, range_lim=(i * 100, (i + 1) * 100))
134 | self._plot(ax1, val_data_allradar_ES[:, 0], val_data_allradar_ES[:, 1], val_data_allradar_ES[:, 2], marker='.', color=ES_colormap[i])
135 | 
136 | # draw valid point, DBSCAN envelope
137 | # vertices_list, valid_points_list, _, DBS_noise = self.fpp.DBS(val_data_allradar)
138 | vertices_list, valid_points_list, _, DBS_noise = self.fpp.DBS_dynamic_ES(val_data_allradar)
139 | for vertices in vertices_list:
140 | self._plot(ax1, vertices[:, 0], vertices[:, 1], vertices[:, 2], linestyle='-', color='salmon')
141 | 
142 | # background noise filter
143 | if self.fpp.BGN_enable:
144 | # update the background noise
145 | if len(vertices_list) > 0:
146 | self.fpp.BGN_update(np.concatenate([ES_noise_allradar, global_ES_noise, DBS_noise]))
147 | else:
148 | self.fpp.BGN_update(np.concatenate([ES_noise_allradar, global_ES_noise]))
149 | # draw BGN area
150 | BGN_block_list = self.fpp.BGN_get_filter_area()
151 | for bgn in BGN_block_list:
152 | self._plot(ax1, bgn[:, 0], bgn[:, 1], bgn[:, 2], marker='.', linestyle='-', color='g')
153 | 
154 | # tracking system
155 | if self.fpp.TRK_enable:
156 | self.fpp.TRK_update_poss_matrix(valid_points_list)
157 | # draw object central points
158 | for person in self.fpp.TRK_people_list:
159 | obj_cp, obj_status = person.get_info()
160 | self._plot(ax1, obj_cp[:, 0], obj_cp[:, 1], obj_cp[:, 2], marker='o', color=OS_colormap[obj_status])
161 | # if obj_status == 3: # warning when object falls
162 | # winsound.Beep(1000, 20)
163 | 
164 | # if obj_cp.size > 0 and person.name == 'obj_0':
165 | if obj_cp.size > 0: # record the object path as [x, y, z, status, name]
166 | self.obj_path_saved = np.concatenate([self.obj_path_saved, np.concatenate([obj_cp, [[obj_status]], [[person.name]]], axis=1)])
167 | 
168 | # wait at the end of each loop
169 | plt.pause(self.interval)
170 | 
171 | def _plot(self, ax, x, y, z, fmt='', **kwargs):
172 | """
173 | :param ax: the current canvas
174 | :param x: data in x-axis
175 | :param y: data in y-axis
176 | :param z: data in z-axis
177 | :param fmt: plot and plot3D fmt
178 | :param kwargs: plot and plot3D marker, linestyle, color
179 | :return: None
180 | """
181 | if len(fmt) > 0: # if fmt is given
182 | if self.dimension == '2D':
183 | # ax.plot(x, y, fmt)
184 | ax.plot(-np.array(x), 4-np.array(y), fmt) # map (x, y) to (-x, 4-y) to realign this dataset's coordinates
185 | elif self.dimension == '3D':
186 | ax.plot3D(x, y, z, fmt)
187 | else: # if keyword parameters are given
188 | for i in ['marker', 'linestyle', 'color']:
189 | if not (i in kwargs):
190 | kwargs[i] = 'None'
191 | if self.dimension == '2D':
192 | # ax.plot(x, y, marker=kwargs['marker'], linestyle=kwargs['linestyle'], color=kwargs['color'])
193 | ax.plot(-np.array(x), 4-np.array(y), marker=kwargs['marker'], linestyle=kwargs['linestyle'], color=kwargs['color']) # same coordinate realignment as above
194 | elif self.dimension == '3D':
195 | ax.plot3D(x, y, z, marker=kwargs['marker'], linestyle=kwargs['linestyle'], color=kwargs['color'])
196 | 
197 | 
198 | if __name__ == '__main__':
199 | # from cfg.config_maggs303 import *
200 | # kwargs_CFG = {'VISUALIZER_CFG' : VISUALIZER_CFG,
201 | # 'RADAR_CFG_LIST' : RADAR_CFG_LIST,
202 | # 'FRAME_POST_PROCESSOR_CFG': FRAME_POST_PROCESSOR_CFG,
203 | # 'DBSCAN_GENERATOR_CFG' : DBSCAN_GENERATOR_CFG,
204 | # 'BGNOISE_FILTER_CFG' : BGNOISE_FILTER_CFG,
205 | # 'HUMAN_TRACKING_CFG' : HUMAN_TRACKING_CFG,
206 | # 'HUMAN_OBJECT_CFG' : HUMAN_OBJECT_CFG}
207 | #
208 | # # Load database
209 | # with open('../data/Maggs_303/2perple_AllRadar_Jan-25-16-50-55', 'rb') as file: 
210 | # data = pickle.load(file) 211 | # 212 | # vis = Visualizer(data[80:], interval=0.05, image_output_enable=False, **kwargs_CFG) 213 | # vis.run() 214 | # 215 | # with open(f'obj_path', 'wb') as file: 216 | # pickle.dump(vis.obj_path_saved, file) 217 | 218 | # from cfg.config_maggs307 import * 219 | # kwargs_CFG = {'VISUALIZER_CFG' : VISUALIZER_CFG, 220 | # 'RADAR_CFG_LIST' : RADAR_CFG_LIST, 221 | # 'FRAME_POST_PROCESSOR_CFG': FRAME_POST_PROCESSOR_CFG, 222 | # 'DBSCAN_GENERATOR_CFG' : DBSCAN_GENERATOR_CFG, 223 | # 'BGNOISE_FILTER_CFG' : BGNOISE_FILTER_CFG, 224 | # 'HUMAN_TRACKING_CFG' : HUMAN_TRACKING_CFG, 225 | # 'HUMAN_OBJECT_CFG' : HUMAN_OBJECT_CFG} 226 | # 227 | # # Load database 228 | # with open('../data/Maggs_307/SZC/SZC_auto_RadarSeq_Jun-04-16-59-54', 'rb') as file: 229 | # data = pickle.load(file) 230 | # 231 | # vis = Visualizer(data[0:], interval=0.05, image_output_enable=False, **kwargs_CFG) 232 | # vis.run() 233 | 234 | # from cfg.config_mvb340 import * 235 | # kwargs_CFG = {'VISUALIZER_CFG' : VISUALIZER_CFG, 236 | # 'RADAR_CFG_LIST' : RADAR_CFG_LIST, 237 | # 'FRAME_POST_PROCESSOR_CFG': FRAME_POST_PROCESSOR_CFG, 238 | # 'DBSCAN_GENERATOR_CFG' : DBSCAN_GENERATOR_CFG, 239 | # 'BGNOISE_FILTER_CFG' : BGNOISE_FILTER_CFG, 240 | # 'HUMAN_TRACKING_CFG' : HUMAN_TRACKING_CFG, 241 | # 'HUMAN_OBJECT_CFG' : HUMAN_OBJECT_CFG} 242 | # 243 | # # Load database 244 | # with open('../data/radar_placement/60deg_2', 'rb') as file: 245 | # data = pickle.load(file) 246 | # 247 | # vis = Visualizer(data[400:], interval=0.05, image_output_enable=False, **kwargs_CFG) 248 | # vis.run() 249 | 250 | # from cfg.config_mvb340_3R import * 251 | # kwargs_CFG = {'VISUALIZER_CFG' : VISUALIZER_CFG, 252 | # 'RADAR_CFG_LIST' : RADAR_CFG_LIST, 253 | # 'FRAME_POST_PROCESSOR_CFG': FRAME_POST_PROCESSOR_CFG, 254 | # 'DBSCAN_GENERATOR_CFG' : DBSCAN_GENERATOR_CFG, 255 | # 'BGNOISE_FILTER_CFG' : BGNOISE_FILTER_CFG, 256 | # 'HUMAN_TRACKING_CFG' : HUMAN_TRACKING_CFG, 257 | # 'HUMAN_OBJECT_CFG' : HUMAN_OBJECT_CFG} 258 | # 259 | # # Load database 260 | # with open('../data/fall_posture/4', 'rb') as file: 261 | # data = pickle.load(file) 262 | # 263 | # vis = Visualizer(data[:], interval=0.05, image_output_enable=False, **kwargs_CFG) 264 | # vis.run() 265 | 266 | # from data.fall_detection.config_maggs307 import * 267 | # kwargs_CFG = {'VISUALIZER_CFG' : VISUALIZER_CFG, 268 | # 'RADAR_CFG_LIST' : RADAR_CFG_LIST, 269 | # 'FRAME_POST_PROCESSOR_CFG': FRAME_POST_PROCESSOR_CFG, 270 | # 'DBSCAN_GENERATOR_CFG' : DBSCAN_GENERATOR_CFG, 271 | # 'BGNOISE_FILTER_CFG' : BGNOISE_FILTER_CFG, 272 | # 'HUMAN_TRACKING_CFG' : HUMAN_TRACKING_CFG, 273 | # 'HUMAN_OBJECT_CFG' : HUMAN_OBJECT_CFG} 274 | # 275 | # # Load database 276 | # with open('../data/fall_detection/test_auto_RadarSeq_Jun-10-15-48-31', 'rb') as file: 277 | # data = pickle.load(file) 278 | # 279 | # vis = Visualizer(data[:], interval=0.1, image_output_enable=False, **kwargs_CFG) 280 | # vis.run() 281 | # 282 | # with open(f'./obj_path', 'wb') as file: 283 | # pickle.dump(vis.obj_path_saved, file) 284 | 285 | # from data.multiple_tracking.Tpeople.config_maggs307 import * 286 | # kwargs_CFG = {'VISUALIZER_CFG' : VISUALIZER_CFG, 287 | # 'RADAR_CFG_LIST' : RADAR_CFG_LIST, 288 | # 'FRAME_POST_PROCESSOR_CFG': FRAME_POST_PROCESSOR_CFG, 289 | # 'DBSCAN_GENERATOR_CFG' : DBSCAN_GENERATOR_CFG, 290 | # 'BGNOISE_FILTER_CFG' : BGNOISE_FILTER_CFG, 291 | # 'HUMAN_TRACKING_CFG' : HUMAN_TRACKING_CFG, 292 | # 'HUMAN_OBJECT_CFG' : HUMAN_OBJECT_CFG} 293 | # 294 | # # Load database 295 | # with 
open('../data/multiple_tracking/Tpeople/test_auto_RadarSeq_Mar-31-16-25-48', 'rb') as file:
296 | # data = pickle.load(file)
297 | #
298 | # vis = Visualizer(data[2000:], interval=0.01, image_output_enable=False, **kwargs_CFG)
299 | # vis.run()
300 | #
301 | # with open(f'./obj_path', 'wb') as file:
302 | # pickle.dump(vis.obj_path_saved, file)
303 | 
304 | from cfg.config_mvb501 import *
305 | kwargs_CFG = {'VISUALIZER_CFG' : VISUALIZER_CFG,
306 | 'RADAR_CFG_LIST' : RADAR_CFG_LIST,
307 | 'FRAME_POST_PROCESSOR_CFG': FRAME_POST_PROCESSOR_CFG,
308 | 'DBSCAN_GENERATOR_CFG' : DBSCAN_GENERATOR_CFG,
309 | 'BGNOISE_FILTER_CFG' : BGNOISE_FILTER_CFG,
310 | 'HUMAN_TRACKING_CFG' : HUMAN_TRACKING_CFG,
311 | 'HUMAN_OBJECT_CFG' : HUMAN_OBJECT_CFG}
312 | 
313 | # Load database
314 | with open('../data/MVB_501/2023_Nov/test_manual_RadarSeq_Nov-18-16-54-36', 'rb') as file:
315 | data = pickle.load(file)
316 | 
317 | vis = Visualizer(data[:], interval=0.01, image_output_enable=False, **kwargs_CFG)
318 | vis.run()
319 | 
320 | 
321 | -------------------------------------------------------------------------------- /tools/draw_path.py: --------------------------------------------------------------------------------
1 | import pickle
2 | 
3 | import matplotlib
4 | from matplotlib import pyplot as plt
5 | from matplotlib.ticker import LinearLocator
6 | 
7 | with open('./obj_path', 'rb') as file:
8 | data = pickle.load(file)
9 | 
10 | OS_colormap = ['grey', 'green', 'gold', 'red'] # the colormap for object status
11 | OS_colormap2 = ['green', 'purple', 'deepskyblue'] # the colormap for object identities
12 | 
13 | 
14 | def vis_thread():
15 | # setup for matplotlib plot
16 | matplotlib.use('TkAgg') # set matplotlib backend
17 | # plt.rcParams['toolbar'] = 'None' # disable the toolbar
18 | # create a figure
19 | fig = plt.figure(figsize=(9, 9))
20 | ax1 = fig.add_subplot(111)
21 | # adjust figure position
22 | mngr = plt.get_current_fig_manager()
23 | mngr.window.wm_geometry('+300+30')
24 | # draws a completely frameless window
25 | win = plt.gcf().canvas.manager.window
26 | win.overrideredirect(1)
27 | 
28 | ax1.set_xlim(-2, 2)
29 | ax1.set_ylim(0, 4)
30 | ax1.xaxis.set_major_locator(LinearLocator(5)) # set axis scale
31 | ax1.yaxis.set_major_locator(LinearLocator(5))
32 | ax1.set_xlabel('x')
33 | ax1.set_ylabel('y')
34 | ax1.set_title('Object Path')
35 | 
36 | for d in data:
37 | # plot position & status
38 | ax1.plot(float(d[0]), float(d[1]), marker='.', linestyle='None', color=OS_colormap[int(d[-2])])
39 | # ax1.plot(-float(d[0]), 4-float(d[1]), marker='.', linestyle='None', color=OS_colormap[int(d[-2])])
40 | # plot position & objects
41 | # ax1.plot(-float(d[0]), 4-float(d[1]), marker='.', linestyle='None', color=OS_colormap2[int(d[-1].split('_')[-1])])
42 | # if d[-1] == 'obj_0':
43 | # ax1.plot(-float(d[0]), 4-float(d[1]), marker='.', linestyle='None', color=OS_colormap2[int(d[-1].split('_')[-1])])
44 | 
45 | plt.show()
46 | 
47 | 
48 | vis_thread()
49 | -------------------------------------------------------------------------------- /tools/png_to_mp4.py: --------------------------------------------------------------------------------
1 | from moviepy.editor import ImageSequenceClip
2 | import glob
3 | 
4 | 
5 | image_paths = sorted(glob.glob('./temp/*.png')) # sort the frames by filename, since glob order is not guaranteed
6 | 
7 | # Create a video clip from the PNG files
8 | image_clip = ImageSequenceClip(image_paths, fps=15)
9 | 
10 | # Write the final video file
11 | image_clip.write_videofile('output_video.mp4')
12 | --------------------------------------------------------------------------------
/tools/rawdata_load.py: --------------------------------------------------------------------------------
1 | import pickle
2 | 
3 | import matplotlib
4 | from matplotlib import pyplot as plt
5 | from matplotlib.ticker import LinearLocator
6 | 
7 | with open('../data/FAUSTSim/faust_pred_pointcloud_10000_dis_randspeed', 'rb') as file:
8 | data, _ = pickle.load(file)
9 | 
10 | idx = [78, 1188, 2288]
11 | 
12 | 
13 | def plot(cur_dataset):
14 | # setup for matplotlib plot
15 | matplotlib.use('TkAgg') # set matplotlib backend
16 | # plt.rcParams['toolbar'] = 'None' # disable the toolbar
17 | # create a figure
18 | fig = plt.figure(figsize=(9, 9))
19 | ax1 = fig.add_subplot(111, projection='3d')
20 | # adjust figure position
21 | mngr = plt.get_current_fig_manager()
22 | mngr.window.wm_geometry('+300+30')
23 | 
24 | ax1.xaxis.set_major_locator(LinearLocator(3)) # set axis scale
25 | ax1.yaxis.set_major_locator(LinearLocator(3))
26 | ax1.zaxis.set_major_locator(LinearLocator(3))
27 | ax1.set_xlabel('x')
28 | ax1.set_ylabel('y')
29 | ax1.set_zlabel('z')
30 | 
31 | ax1.plot3D(cur_dataset[:, 0], cur_dataset[:, 1], cur_dataset[:, 2], marker='o', linestyle='None')
32 | 
33 | 
34 | for i in idx:
35 | plot(data[i])
36 | plt.show()
37 | -------------------------------------------------------------------------------- /tools/rename_files.py: --------------------------------------------------------------------------------
1 | import glob
2 | import os
3 | 
4 | file_dir = 'C:/SZC/PhD/MMWave_Radar/ID/data/MVB_Hall/Resolution_2/' # a directory, named per the project convention in main.py
5 | files_list = glob.glob(f'{file_dir}*')
6 | 
7 | for idx, file in enumerate(files_list):
8 | os.rename(file, file_dir + str(idx)) # rename each file to its list index (note: the original extension is dropped)
9 | 
10 | --------------------------------------------------------------------------------