# Omni-LPR

[Tests](https://github.com/habedi/omni-lpr/actions/workflows/tests.yml)
[Code Coverage](https://codecov.io/gh/habedi/omni-lpr)
[CodeFactor](https://www.codefactor.io/repository/github/habedi/omni-lpr)
[GitHub](https://github.com/habedi/omni-lpr)
[PyPI](https://pypi.org/project/omni-lpr/)
[License](https://github.com/habedi/omni-lpr/blob/main/LICENSE)

[Documentation](https://github.com/habedi/omni-lpr/tree/main/docs)
[Examples](https://github.com/habedi/omni-lpr/tree/main/examples)
[Docker (CPU)](https://github.com/habedi/omni-lpr/pkgs/container/omni-lpr-cpu)
[Docker (OpenVINO)](https://github.com/habedi/omni-lpr/pkgs/container/omni-lpr-openvino)
[Docker (CUDA)](https://github.com/habedi/omni-lpr/pkgs/container/omni-lpr-cuda)

A multi-interface (REST and MCP) server for automatic license plate recognition
---

Omni-LPR is a self-hostable server that provides automatic license plate recognition (ALPR) capabilities via a REST API
and the Model Context Protocol (MCP). It can be used both as a standalone ALPR microservice and as an ALPR toolbox for
AI agents and large language models (LLMs).
### Why Omni-LPR?

Using Omni-LPR offers the following benefits:

- **Decoupling.** Your main application can be written in any programming language. It doesn't need to be tangled up
  with Python or specific ML dependencies, because the server handles all of that.

- **Multiple Interfaces.** You aren't locked into one way of communicating. You can use a standard REST API from any
  application, or MCP, which is designed for AI agent integration.

- **Ready-to-Deploy.** You don't have to build anything from scratch. Pre-built Docker images are available and easy to
  deploy and start using immediately.

- **Hardware Acceleration.** The server is optimized for the hardware you have. It supports generic CPUs (ONNX), Intel
  CPUs (OpenVINO), and NVIDIA GPUs (CUDA).

- **Asynchronous I/O.** It's built on Starlette, giving it high-performance, non-blocking I/O that can handle many
  concurrent requests without getting bogged down.

- **Scalability.** Because it's a separate service, it can be scaled independently of your main application. If you
  suddenly need more ALPR capacity, you can scale Omni-LPR up without touching anything else.

See the [ROADMAP.md](ROADMAP.md) for the list of implemented and planned features.
> [!IMPORTANT]
> Omni-LPR is in early development, so bugs and breaking API changes are expected.
> Please use the [issues page](https://github.com/habedi/omni-lpr/issues) to report bugs or request features.

---
### Quickstart

You can get started with Omni-LPR in a few minutes by following the steps described below.

#### 1. Install the Server

You can install Omni-LPR using `pip`:

```sh
pip install omni-lpr
```
#### 2. Start the Server

Once installed, start the server with a single command:

```sh
omni-lpr
```

By default, the server listens on `http://127.0.0.1:8000`.
You can confirm it's running by calling the health check endpoint:

```sh
curl http://127.0.0.1:8000/api/health
# Sample expected output: {"status": "ok", "version": "0.3.4"}
```
#### 3. Recognize a License Plate

Now you can make a request to recognize a license plate from an image.
The example below uses a publicly available image URL.

```sh
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"path": "https://www.olavsplates.com/foto_n/n_cx11111.jpg"}' \
  http://127.0.0.1:8000/api/v1/tools/detect_and_recognize_plate_from_path/invoke
```

You should receive a JSON response with the detected license plate information.
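The same call can be made from Python. Below is a minimal sketch using only the standard library; the endpoint and request body mirror the curl command above, and `BASE_URL` assumes the default server address from step 2.

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8000"  # default address from the Quickstart
TOOL = "detect_and_recognize_plate_from_path"


def build_request(image_path: str) -> urllib.request.Request:
    """Build the POST request that invokes the tool on an image URL or local path."""
    url = f"{BASE_URL}/api/v1/tools/{TOOL}/invoke"
    body = json.dumps({"path": image_path}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )


def invoke(image_path: str) -> dict:
    """Send the request and return the parsed JSON response (needs a running server)."""
    with urllib.request.urlopen(build_request(image_path)) as resp:
        return json.load(resp)
```

For example, `invoke("https://www.olavsplates.com/foto_n/n_cx11111.jpg")` returns the same JSON the curl command prints.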
### Usage

Omni-LPR exposes its capabilities as "tools" that can be called via the REST API or over MCP.

#### Available Tools

The server provides tools for listing models, recognizing plates from image data, and recognizing plates from a path.

- `list_models`: Lists the available detector and OCR models.

- **Tools that process image data** (provided as Base64 or file upload):
  - `recognize_plate`: Recognizes text from a pre-cropped license plate image.
  - `detect_and_recognize_plate`: Detects and recognizes all license plates in a full image.

- **Tools that process an image path** (a URL or local file path):
  - `recognize_plate_from_path`: Recognizes text from a pre-cropped license plate image at a given path.
  - `detect_and_recognize_plate_from_path`: Detects and recognizes plates in a full image at a given path.

For more details on how to use the different tools and provide image data, please see the
[API Documentation](docs/README.md).
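For the tools that take image data rather than a path, the image bytes travel inside the JSON body as Base64. The sketch below shows the encoding step; note that the JSON field name (`image` here) is a placeholder assumption, so check the API documentation for the exact request schema.

```python
import base64
import json


def encode_image(data: bytes) -> str:
    """Base64-encode raw image bytes so they can travel in a JSON body."""
    return base64.b64encode(data).decode("ascii")


def build_payload(image_bytes: bytes) -> str:
    # NOTE: the "image" field name is a placeholder; see the API
    # documentation for the schema recognize_plate actually expects.
    return json.dumps({"image": encode_image(image_bytes)})
```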
#### REST API

The REST API provides a standard way to interact with the server. All tool endpoints are available under the `/api/v1`
prefix. Once the server is running, you can browse the interactive API documentation (Swagger UI)
at http://127.0.0.1:8000/api/v1/apidoc/swagger.
#### MCP Interface

The server also exposes its tools over MCP for integration with AI agents and LLMs. The MCP endpoint is available at
http://127.0.0.1:8000/mcp/ via streamable HTTP.

You can use a tool like [MCP Inspector](https://github.com/modelcontextprotocol/inspector) to explore the available MCP
tools.
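For programmatic access, an agent framework will usually handle the MCP handshake for you, but the connection can also be sketched directly with the official `mcp` Python SDK (a separate dependency, not installed with Omni-LPR). The endpoint URL below is the default from the Quickstart, and a running server is required to actually execute the function.

```python
async def list_mcp_tools(endpoint: str = "http://127.0.0.1:8000/mcp/") -> list[str]:
    """Connect over streamable HTTP and return the names of the exposed tools."""
    # Imports are local so this sketch only needs the `mcp` package when run.
    from mcp import ClientSession
    from mcp.client.streamable_http import streamablehttp_client

    async with streamablehttp_client(endpoint) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            result = await session.list_tools()
            return [tool.name for tool in result.tools]
```

Running it with `asyncio.run(list_mcp_tools())` against a live server should return the tool names listed in the Available Tools section above.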