├── README.md
├── alternative function and wslcofig
│   ├── .wslconfig
│   └── alternative-function.yaml
├── custom-yolov8n.pt
├── function.yaml
├── main.py
└── screenshot.png

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Integrate a custom YOLOv8 model into CVAT for automatic annotation

![screenshot](screenshot.png)

## Installation (Linux Ubuntu)

This generally follows the official documentation: https://opencv.github.io/cvat/docs/administration/advanced/installation_automatic_annotation/

In the CVAT directory, run:

1. Stop all running containers first, if any.

```
docker compose down
```

1. Start CVAT together with the serverless components used for the AI automatic annotation assistant.

```
docker compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml up -d
```

1. Create a superuser account.

```
docker exec -it cvat_server bash -ic 'python3 ~/manage.py createsuperuser'
```

1. Install `nuctl`.* Replace `<version>` with the nuclio version recommended for your CVAT release.

```
wget https://github.com/nuclio/nuclio/releases/download/<version>/nuctl-<version>-linux-amd64
```

1. After downloading `nuctl`, make it executable and create a symlink.*

```
sudo chmod +x nuctl-<version>-linux-amd64
sudo ln -sf $(pwd)/nuctl-<version>-linux-amd64 /usr/local/bin/nuctl
```

1. Build the Docker image and run the container. Once it is done, the model is available in CVAT right away.

```
./serverless/deploy_cpu.sh path/to/this/folder/
```

1. 
Troubleshooting

- Check that your `nuclio` function is running correctly with `docker ps --filter NAME=custom-model-yolov8`:

```
CONTAINER ID   IMAGE                        COMMAND       CREATED          STATUS                    PORTS                                         NAMES
3dc54494bbb8   custom-model-yolov8:latest   "processor"   52 minutes ago   Up 52 minutes (healthy)   0.0.0.0:32896->8080/tcp, :::32896->8080/tcp   nuclio-nuclio-custom-model-yolov8
```

- If it is not running, check the container's log; it might reveal what is going wrong: `docker logs 3dc54494bbb8` (use the CONTAINER ID)

```
...
24.01.31 12:28:35.522 processor (D) Processor started
24.01.31 12:29:27.171 sor.http.w0.python.logger (I) Run custom-model-yolov8 model {"worker_id": "0"}
```

- If installing on WSL2, update the `.wslconfig` file to allow WSL2 at least 6 GB of RAM. Put the `.wslconfig` file in the `C:\Users\<username>\` folder. If Docker crashes frequently, or auto annotation fails immediately after starting, it is probably because of low RAM. Use the alternative function if the installation fails (this worked for a CVAT 2.16.2 installation in WSL2, with 6 GB allowed to WSL2).

Note: steps marked with * only need to be done once.

## File Structure

- [`function.yaml`](function.yaml): Declares the model so it can be understood by CVAT, and sets up the Docker build environment.

- [`main.py`](main.py): Contains the `handler` function that serves as the endpoint CVAT calls to run detection.

- `custom-yolov8n.pt`: Your custom YOLOv8 model weights.

## References

1. https://opencv.github.io/cvat/docs/manual/advanced/serverless-tutorial/#adding-your-own-dl-models

    Official documentation on how to add a custom model.

1. 
https://stephencowchau.medium.com/journey-using-cvat-semi-automatic-annotation-with-a-partially-trained-model-to-tag-additional-8057c76bcee2

--------------------------------------------------------------------------------
/alternative function and wslcofig/.wslconfig:
--------------------------------------------------------------------------------
[wsl2]
memory=6GB
--------------------------------------------------------------------------------
/alternative function and wslcofig/alternative-function.yaml:
--------------------------------------------------------------------------------
metadata:
  name: custom-model-yolov8
  namespace: cvat
  annotations:
    name: custom-model-yolov8
    type: detector
    framework: pytorch
    # change this according to your model's output/classes
    spec: |
      [
        {"id": 0, "name": "tench"},
        {"id": 1, "name": "goldfish"},
        ...
        {"id": 998, "name": "ear"},
        {"id": 999, "name": "toilet paper"}
      ]

spec:
  description: custom-model-yolov8
  runtime: 'python:3.9'
  handler: main:handler
  eventTimeout: 30s

  build:
    image: custom-model-yolov8
    baseImage: ubuntu:22.04

    directives:
      preCopy:
        - kind: ENV
          value: DEBIAN_FRONTEND=noninteractive
        - kind: USER
          value: root
        - kind: RUN
          value: apt-get update && apt-get -y install curl git python3 python3-pip libgl1-mesa-glx libglib2.0-dev
        - kind: WORKDIR
          value: /opt/nuclio
        # make sure that for the next step (at least) the ultralytics package version
        # is compatible with the ultralytics version used to train the custom model
        - kind: RUN
          value: pip3 install --no-cache-dir ultralytics==8.0.114 opencv-python==4.7.0.72 numpy==1.24.3
        - kind: RUN
          value: ln -s /usr/bin/pip3 /usr/local/bin/pip
        - kind: RUN
          value: ln -s /usr/bin/python3 /usr/local/bin/python

  triggers:
    myHttpTrigger:
      maxWorkers: 2
      kind: 'http'
      workerAvailabilityTimeoutMilliseconds: 10000
      attributes:
        maxRequestBodySize: 33554432 # 32MB

  platform:
    attributes:
      restartPolicy:
        name: always
        maximumRetryCount: 3
      mountMode: volume
--------------------------------------------------------------------------------
/custom-yolov8n.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kurkurzz/custom-yolov8-auto-annotation-cvat-blueprint/b98a71ea4ba9e1fb618f6e646f37d6ddefdf2aff/custom-yolov8n.pt
--------------------------------------------------------------------------------
/function.yaml:
--------------------------------------------------------------------------------
metadata:
  name: custom-model-yolov8
  namespace: cvat
  annotations:
    name: custom-model-yolov8
    type: detector
    framework: pytorch
    # change this according to your model's output/classes
    spec: |
      [
        {"id": 0, "name": "tench"},
        {"id": 1, "name": "goldfish"},
        ...
        {"id": 998, "name": "ear"},
        {"id": 999, "name": "toilet paper"}
      ]

spec:
  description: custom-model-yolov8
  runtime: 'python:3.9'
  handler: main:handler
  eventTimeout: 30s

  build:
    image: custom-model-yolov8
    baseImage: ubuntu:22.04

    directives:
      preCopy:
        - kind: ENV
          value: DEBIAN_FRONTEND=noninteractive
        - kind: RUN
          value: apt-get update && apt-get -y install curl git python3 python3-pip
        - kind: RUN
          value: apt-get -y install libgl1-mesa-glx libglib2.0-dev
        - kind: WORKDIR
          value: /opt/nuclio
        # make sure that for the next step (at least) the ultralytics package version
        # is compatible with the ultralytics version used to train the custom model
        - kind: RUN
          value: pip3 install ultralytics==8.0.114 opencv-python==4.7.0.72 numpy==1.24.3
        - kind: RUN
          value: ln -s /usr/bin/pip3 /usr/local/bin/pip
        - kind: RUN
          value: ln -s /usr/bin/python3 /usr/local/bin/python

  triggers:
    myHttpTrigger:
      maxWorkers: 1
      kind: 'http'
      workerAvailabilityTimeoutMilliseconds: 10000
      attributes:
        maxRequestBodySize: 33554432 # 32MB

  platform:
    attributes:
      restartPolicy:
        name: always
        maximumRetryCount: 3
      mountMode: volume
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
import io
import base64
import json

import cv2
import numpy as np
from ultralytics import YOLO

# Initialize your model
def init_context(context):
    context.logger.info('Init context... 
0%')
    model = YOLO('custom-yolov8n.pt')
    context.user_data.model_handler = model
    context.logger.info('Init context...100%')

# Inference endpoint
def handler(context, event):
    context.logger.info('Run custom yolov8 model')
    data = event.body
    # the image arrives base64-encoded in the request body
    image_buffer = io.BytesIO(base64.b64decode(data['image']))
    image = cv2.imdecode(np.frombuffer(image_buffer.getvalue(), np.uint8), cv2.IMREAD_COLOR)

    results = context.user_data.model_handler(image)
    result = results[0]

    boxes = result.boxes.data[:, :4]  # xyxy coordinates
    confs = result.boxes.conf
    clss = result.boxes.cls
    class_name = result.names

    detections = []
    threshold = 0.1
    for box, conf, cls in zip(boxes, confs, clss):
        label = class_name[int(cls)]
        if conf >= threshold:
            # must be in this format for CVAT to parse the detections
            detections.append({
                'confidence': str(float(conf)),
                'label': label,
                'points': box.tolist(),
                'type': 'rectangle',
            })

    return context.Response(body=json.dumps(detections), headers={},
                            content_type='application/json', status_code=200)
--------------------------------------------------------------------------------
/screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kurkurzz/custom-yolov8-auto-annotation-cvat-blueprint/b98a71ea4ba9e1fb618f6e646f37d6ddefdf2aff/screenshot.png
--------------------------------------------------------------------------------
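
Once the function is deployed, it can be smoke-tested outside CVAT by posting a base64-encoded image to the nuclio HTTP endpoint, mirroring what `handler` in `main.py` expects (`data['image']` holding the base64 string). The sketch below uses only the standard library; the port (`32896`, taken from the `docker ps` output above) and the image path `test.jpg` are illustrative assumptions you must adjust to your setup.

```python
import base64
import json
import urllib.request

def build_payload(image_bytes: bytes) -> dict:
    """Wrap raw image bytes the way CVAT sends them to the handler:
    a JSON body with a base64-encoded 'image' field."""
    return {'image': base64.b64encode(image_bytes).decode('ascii')}

def invoke(url: str, image_path: str) -> list:
    """POST an image to the deployed nuclio function and return the
    parsed list of detections (label, confidence, points, type)."""
    with open(image_path, 'rb') as f:
        payload = build_payload(f.read())
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example usage (port is whatever `docker ps` shows mapped to the
# container's 8080; the image path is a placeholder):
# for det in invoke('http://localhost:32896', 'test.jpg'):
#     print(det['label'], det['confidence'], det['points'])
```

If this returns a non-empty list for an image your model was trained on, the function is wired correctly and the CVAT UI should be able to use it for automatic annotation.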