# frigate-proxmox-docker-openvino

Complete setup for OpenVINO hardware acceleration in Frigate, as an alternative to a Coral TPU.

This tutorial is adapted to the Docker version of Frigate installed in a Proxmox LXC, and deals mainly with the GPU passthroughs.

# Prerequisites

- An Intel Core iX CPU, 6th generation or newer (i.e. compatible with OpenVINO acceleration)
- A working Proxmox installation

Check in your PVE shell that `/dev/dri/renderD128` is available:

```
ls /dev/dri
```

Optionally, install the Intel GPU tools:

```
apt install intel-gpu-tools
```

Now you can check GPU access/usage:

```
intel_gpu_top
```

which should lead to something like this:

![image](https://github.com/user-attachments/assets/0474d76c-e4c7-45df-8023-5dc10809c01c)


# Create the Docker LXC

The easiest way is to use [Tteck's scripts](https://community-scripts.github.io/ProxmoxVE/).

First, in the PVE console, launch the script that installs a new Docker LXC:

```
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/docker.sh)"
```

During installation:
- switch to "advanced mode"
- select Debian 12
- make the LXC **PRIVILEGED**
- you'd better choose 8 GB of RAM and 2 or 4 cores
- add Portainer if needed
- add Docker Compose

Once the LXC is created, you can also install intel-gpu-tools **inside** the LXC:

```
apt install intel-gpu-tools
```

Next, you have to add a GPU passthrough to the LXC so that Frigate can access the OpenVINO acceleration. In your LXC "Resources" tab, add a "Device Passthrough":

![image](https://github.com/user-attachments/assets/071007bb-ad90-43c9-92ac-0c79313b83eb)

and specify the path you want to add: `/dev/dri/renderD128`

**Reboot**

Now your LXC has access to the GPU.
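As an alternative to the GUI, the same passthrough can be declared directly in the container configuration file on the PVE host. This is a minimal sketch, assuming a privileged container with the (hypothetical) ID 100; `card0` and `renderD128` are DRM devices with major number 226. Since the Docker Compose file further down also maps `/dev/dri/card0` into the Frigate container, it can be useful to pass that device through as well:

```
# /etc/pve/lxc/100.conf  (100 is an example container ID, adjust to yours)
# allow access to the DRM devices (major 226) and bind-mount them into the LXC
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```

Reboot the LXC afterwards and check with `ls /dev/dri` inside the container that the devices are visible.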
# Frigate Docker

## Create folders

In the LXC shell, create folders to organize your Frigate storage for videos/captures, models and configs.

Here are my usual settings:

```
mkdir /opt/frigate
mkdir /opt/frigate/media
mkdir /opt/frigate/config
```

Create the folders according to your needs.

Next we will build the Docker container. Create a `docker-compose.yml` in the root folder:

```
cd /opt/frigate
nano docker-compose.yml
```

or create a stack in Portainer:

![image](https://github.com/user-attachments/assets/3a4add3d-38ec-4313-a9b1-d6761057726c)

and add:

```
services:
  frigate:
    container_name: frigate
    privileged: true
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:0.14.1
    cap_add:
      - CAP_PERFMON
    shm_size: "256mb"
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
      - /dev/dri/card0:/dev/dri/card0
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /opt/frigate/config:/config
      - /opt/frigate/media:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1G
    ports:
      - "5000:5000"
      - "8971:8971"
      - "1984:1984"
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over tcp
      - "8555:8555/udp" # WebRTC over udp
    environment:
      FRIGATE_RTSP_PASSWORD: ****
      PLUS_API_KEY: ****
```

As you can see:

- the container is **privileged**
- `/dev/dri/renderD128` is passed through from the LXC to the container
- the folders created earlier are bound to Frigate's usual folders
- `shm_size` has to be set according to the [documentation](https://docs.frigate.video/frigate/installation/#calculating-required-shm-size)
- the tmpfs size has to be adjusted to your configuration, see the [documentation](https://docs.frigate.video/frigate/installation/#storage)
- the ports for the UI, RTSP and WebRTC are forwarded
- define `FRIGATE_RTSP_PASSWORD` and `PLUS_API_KEY` if needed

The Docker container is now defined and has access to the GPU.

**Do not start it right away, as you still have to provide the Frigate configuration!**

## Setup Frigate for OpenVINO acceleration

Add your Frigate configuration:

```
cd /opt/frigate/config
nano config.yml
```

Edit it according to your setup, then add the [following lines](https://docs.frigate.video/configuration/object_detectors/#openvino-detector) to your Frigate config:

```
detectors:
  ov:
    type: openvino
    device: GPU

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  path: /openvino-model/ssdlite_mobilenet_v2.xml
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```
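If you are starting from an empty `config.yml`, here is a minimal sketch of what the rest of the file could look like. The camera name, IP address and stream path below are made up, so adapt them to your own cameras; `{FRIGATE_RTSP_PASSWORD}` is substituted by Frigate from the environment variable defined in the compose file:

```
mqtt:
  enabled: false

cameras:
  front_door:                # example camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:{FRIGATE_RTSP_PASSWORD}@192.168.1.10:554/stream1   # example RTSP URL
          roles:
            - detect
    detect:
      width: 1280
      height: 720
      fps: 5
```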
Once your `config.yml` is ready, start the container with either `docker compose up -d` or "Deploy the stack" if you're using Portainer.


Reboot everything, then go to the Frigate UI to check that everything is working:

![image](https://github.com/user-attachments/assets/abad95f0-f0c9-4b59-853f-56a252e6bb65)

You should see:
- a low inference time: ~20 ms
- low CPU usage
- some GPU usage

You can also check with `intel_gpu_top` inside the LXC console that the Render/3D engine shows some load matching Frigate's detections:

![image](https://github.com/user-attachments/assets/ce307fb5-e856-4846-b1ee-94a6bda9758a)

And on your Proxmox dashboard, you can see that the CPU load of the LXC is drastically lower:

![image](https://github.com/user-attachments/assets/365406eb-f0bc-4367-ba31-42609c587d87)

# Extra settings

## CPU load

I experimentally found that running these two Tteck scripts in the PVE console greatly reduces CPU consumption in "idle mode" (i.e. when Frigate only "observes" and has no detection running):

- [Filesystem Trim](https://tteck.github.io/Proxmox/#proxmox-ve-lxc-filesystem-trim)
- [CPU Scaling Governor](https://tteck.github.io/Proxmox/#proxmox-ve-cpu-scaling-governor): set the governor to **powersave** (see the sketch after this list to do it by hand)

Experiment on your own!
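If you want to check or change the governor by hand on the PVE host, instead of (or before) using the script, here is a quick sketch using the standard Linux sysfs interface; note that this change is not persistent across reboots:

```
# show the current governor of the first core
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# list the governors supported by your CPU
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

# switch every core to powersave
echo powersave | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```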
## YOLO NAS models

Besides the default SSDLite model, the [YOLO NAS](https://github.com/Deci-AI/super-gradients) model is also [available for OpenVINO acceleration](https://docs.frigate.video/configuration/object_detectors/#yolo-nas).

To use it, you have to export the model in a format compatible with Frigate. This can easily be done with the dedicated [Google Colab notebook](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).

The only things to set are the dimensions of the input image shape (320x320 leads to a higher inference time, so I'd use 256x256):

```
input_image_shape=(256,256),
```

and the model variant (the **S** version is good enough, **M** induces a much higher inference time):

```
model = models.get(Models.YOLO_NAS_S, pretrained_weights="coco")
```

*NOTE: run a few tests to find the right combination for your hardware; try to keep the inference time around 20 ms.*

Then specify the name of the model file you will generate:

```
files.download('yolo_nas_s.onnx')
```

Now simply run all the steps of the Colab notebook, one by one, and it will download the model file:

![image](https://github.com/user-attachments/assets/53a211a9-c2e9-4a9d-bff2-946ce674fe27)

Copy the generated model file to your Frigate config folder `/opt/frigate/config`,

then change your detector configuration and adapt it to your settings:

```
detectors:
  ov:
    type: openvino
    device: GPU

model:
  model_type: yolonas
  width: 256                           # <--- should match whatever was set in the notebook
  height: 256                          # <--- should match whatever was set in the notebook
  input_tensor: nchw                   # <--- careful, this differs from the SSDLite model setting!
  input_pixel_format: bgr
  path: /config/yolo_nas_s_256.onnx    # <--- should match the path and name of your model file
  labelmap_path: /labelmap/coco-80.txt # <--- should match the name and location of the COCO80 labelmap file
```

*NOTE: YOLO NAS uses the COCO80 labelmap instead of COCO91.*

Restart... and VOILA!
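To double-check that the new model is actually loaded, you can skim the container logs from the LXC shell and keep an eye on the inference time in the Frigate UI (the exact log wording varies between Frigate versions):

```
# follow Frigate's startup logs
docker logs -f frigate
```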