` with the name of your project on Ikomia SCALE.\n",
223 | "\n",
224 | "## Step 5: Deploying a Workflow\n",
225 | "\n",
226 | "Now that your workflow is in your project on Ikomia SCALE, you can deploy it on a dedicated endpoint.\n",
227 | "\n",
228 | "1. Log in to your [Ikomia SCALE account](https://app.ikomia.ai)\n",
229 | "\n",
230 | "2. Go to the project and select your workflow\n",
231 | "\n",
232 | "3. Configure and deploy your workflow\n",
233 | "\n",
234 | "4. Wait until the deployment is running (this can take several minutes)\n",
235 | "5. Click on the endpoint URL to test it online (or test it programmatically, as sketched below)\n",
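"\n",
"Once the deployment is running, you can also call the endpoint from Python. As a rough illustration, here is a minimal sketch using the `requests` package; the URL is a placeholder and the payload layout is an assumption, so check your deployment page and the SCALE documentation for the exact request format:\n",
"\n",
"```python\n",
"# Minimal sketch (hypothetical endpoint contract): send an image to your\n",
"# deployed workflow and print the raw response.\n",
"import requests\n",
"\n",
"ENDPOINT_URL = \"https://<your-endpoint-url>\"  # <-- copy it from your deployment page\n",
"\n",
"with open(\"image.jpg\", \"rb\") as f:\n",
"    response = requests.post(ENDPOINT_URL, files={\"image\": f}, timeout=60)\n",
"\n",
"print(response.status_code)\n",
"print(response.text)\n",
"```\n",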
236 | "\n",
237 | "## Conclusion\n",
238 | "\n",
239 | "Congratulations! You have successfully onboarded to Ikomia SCALE, created a project, and deployed your first workflow. \n",
240 | "\n",
241 | "You can now use the power of Ikomia to scale your image processing tasks effortlessly.\n",
242 | "\n",
243 | "* Any questions? Contact [the Ikomia Team](mailto:team@ikomia.ai)\n",
244 | "\n",
245 | "* Any technical problems? Contact [Ikomia Support](mailto:support@ikomia.ai)\n",
246 | "\n",
247 | "* Want to chat with us? Join our [Discord channel](https://discord.com/invite/82Tnw9UGGc)!\n",
248 | "\n",
249 | "## Next steps\n",
250 | "\n",
251 | "* Explore more algorithms on [Ikomia HUB](https://app.ikomia.ai/hub)\n",
252 | "* Learn more about Ikomia workflows in the [API documentation](https://ikomia-dev.github.io/python-api-documentation/getting_started.html)\n",
253 | "\n",
254 | "## Additional Resources\n",
255 | "\n",
256 | "- [Ikomia Website](https://www.ikomia.ai/)\n",
257 | "- [Ikomia blog](https://www.ikomia.ai/blog)\n",
258 | "- [Ikomia API](https://github.com/Ikomia-dev/IkomiaApi)"
259 | ]
260 | }
261 | ],
262 | "metadata": {
263 | "kernelspec": {
264 | "display_name": "Python 3 (ipykernel)",
265 | "language": "python",
266 | "name": "python3"
267 | },
268 | "language_info": {
269 | "codemirror_mode": {
270 | "name": "ipython",
271 | "version": 3
272 | },
273 | "file_extension": ".py",
274 | "mimetype": "text/x-python",
275 | "name": "python",
276 | "nbconvert_exporter": "python",
277 | "pygments_lexer": "ipython3",
278 | "version": "3.8.18"
279 | }
280 | },
281 | "nbformat": 4,
282 | "nbformat_minor": 4
283 | }
284 |
--------------------------------------------------------------------------------
/examples/HOWTO_make_a_simple_workflow_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "GyGbD_GxAOI0"
7 | },
8 | "source": [
9 | "\n",
10 | "\n",
11 | "\n"
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {
17 | "id": "oNcONqwxwgkv"
18 | },
19 | "source": [
20 | "# How to make a simple workflow with Ikomia API"
21 | ]
22 | },
23 | {
24 | "cell_type": "markdown",
25 | "metadata": {
26 | "id": "AKTe7F5nwXI4"
27 | },
28 | "source": [
29 | "This tutorial is made for beginners: you will learn how to use the Ikomia API to easily prototype Computer Vision workflows.\n",
30 | "\n",
31 | "In a few lines of code, you can test and chain different computer vision algorithms.\n",
32 | "\n",
33 | "If you like this tutorial, you can support our project here: [Ikomia API GitHub](https://github.com/Ikomia-dev/IkomiaApi).\n",
34 | "\n",
35 | "## ENJOY 🥰 !!\n",
36 | "\n",
37 | "\n",
38 | "\n"
41 | ]
42 | },
43 | {
44 | "cell_type": "markdown",
45 | "metadata": {
46 | "id": "x4CdI0J1ej5b"
47 | },
48 | "source": [
49 | "## Setup"
50 | ]
51 | },
52 | {
53 | "cell_type": "markdown",
54 | "metadata": {
55 | "id": "NBmJN2AaDmcI"
56 | },
57 | "source": [
58 | "You need to install the Ikomia Python API with pip."
59 | ]
60 | },
61 | {
62 | "cell_type": "code",
63 | "execution_count": null,
64 | "metadata": {
65 | "colab": {
66 | "base_uri": "https://localhost:8080/",
67 | "height": 1000
68 | },
69 | "id": "8eSnQYJygrDy",
70 | "outputId": "7a1ff895-63a8-4b6e-8f1d-a2823f31366a"
71 | },
72 | "outputs": [],
73 | "source": [
74 | "!pip install ikomia"
75 | ]
76 | },
77 | {
78 | "cell_type": "markdown",
79 | "metadata": {
80 | "id": "kVvL0vVfUGN5"
81 | },
82 | "source": [
83 | "\n",
84 | "\n",
85 | "---\n",
86 | "\n",
87 | "\n",
88 | "**-Google Colab ONLY- Restart runtime**\n",
89 | "\n",
90 | "Some Python packages have been updated. Please click on the \"RESTART RUNTIME\" button at the end of the previous window.\n",
91 | "\n",
92 | "\n",
93 | "\n"
94 | ]
95 | },
96 | {
97 | "cell_type": "markdown",
98 | "metadata": {
99 | "id": "2hS1T6ky1Wcw"
100 | },
101 | "source": [
102 | "---"
103 | ]
104 | },
105 | {
106 | "cell_type": "markdown",
107 | "metadata": {
108 | "id": "JJsRFzl9Au1c"
109 | },
110 | "source": [
111 | "The Ikomia API already has more than 180 pre-integrated algorithms (mainly OpenCV), but the most interesting algorithms are in [Ikomia HUB](https://github.com/Ikomia-hub). \n",
112 | "\n",
113 | "We regularly push state-of-the-art algorithms from individual repositories (think of YOLOv7, for example) or from companies (Facebook Detectron2 or Ultralytics/YOLOv5, for example)."
114 | ]
115 | },
116 | {
117 | "cell_type": "markdown",
118 | "metadata": {
119 | "id": "jEdZ_uDYDqjH"
120 | },
121 | "source": [
122 | "## Create your workflow"
123 | ]
124 | },
125 | {
126 | "cell_type": "markdown",
127 | "metadata": {
128 | "id": "5O-fpfWfiNfW"
129 | },
130 | "source": [
131 | "First, you create a new workflow from scratch. \n",
132 | "\n",
133 | "Then we use the YOLOv7 algorithm to detect objects in the image and apply the stylization filter on every detected object.\n",
134 | "\n",
135 | "It will automagically download the YOLOv7 algorithm from Ikomia HUB and install all the Python dependencies (the first time, this can take a while, so be patient!)."
136 | ]
137 | },
138 | {
139 | "cell_type": "code",
140 | "execution_count": null,
141 | "metadata": {
142 | "colab": {
143 | "base_uri": "https://localhost:8080/",
144 | "height": 1000
145 | },
146 | "id": "bRPYGcRd1Pwh",
147 | "outputId": "9b14f113-82a9-4377-c1fb-67fb6b45d61d",
148 | "tags": []
149 | },
150 | "outputs": [],
151 | "source": [
152 | "from ikomia.dataprocess.workflow import Workflow\n",
153 | "\n",
154 | "# Create your workflow\n",
155 | "wf = Workflow() \n",
156 | "\n",
157 | "# Add an object detector\n",
158 | "yolo = wf.add_task(name=\"infer_yolo_v7\", auto_connect=True) \n",
159 | "# Add the OpenCV stylization algorithm\n",
160 | "stylize = wf.add_task(name=\"ocv_stylization\", auto_connect=True) "
161 | ]
162 | },
163 | {
164 | "cell_type": "markdown",
165 | "metadata": {},
166 | "source": [
167 | "## Run and display your results"
168 | ]
169 | },
170 | {
171 | "cell_type": "code",
172 | "execution_count": null,
173 | "metadata": {},
174 | "outputs": [],
175 | "source": [
176 | "from ikomia.utils.displayIO import display\n",
177 | "from PIL import ImageShow\n",
178 | "ImageShow.register(ImageShow.IPythonViewer(), 0) # <-- Specific for displaying in notebooks\n",
179 | "\n",
180 | "# Run\n",
181 | "wf.run_on(url=\"https://cdn.pixabay.com/photo/2020/01/26/18/52/porsche-4795517_960_720.jpg\") # <-- Change image url here if you want\n",
182 | "\n",
183 | "# YOLO output image with bounding boxes\n",
184 | "img_detect = yolo.get_image_with_graphics()\n",
185 | "# Stylization output image\n",
186 | "img_final = stylize.get_output(0).get_image()\n",
187 | "\n",
188 | "display(img_detect)\n",
189 | "display(img_final)"
190 | ]
191 | },
192 | {
193 | "cell_type": "markdown",
194 | "metadata": {
195 | "id": "N0xG6b0k1gZJ"
196 | },
197 | "source": [
198 | "## More advanced workflow using the `ik` auto-completion system"
199 | ]
200 | },
201 | {
202 | "cell_type": "markdown",
203 | "metadata": {},
204 | "source": [
205 | "`ik` is an auto-completion system designed to help developers find available algorithms in [Ikomia HUB](https://github.com/Ikomia-hub). See the [documentation](https://ikomia-dev.github.io/python-api-documentation/) for more explanations."
206 | ]
207 | },
208 | {
209 | "cell_type": "code",
210 | "execution_count": null,
211 | "metadata": {
212 | "colab": {
213 | "base_uri": "https://localhost:8080/",
214 | "height": 1000
215 | },
216 | "id": "lIcEQ5HD1E1O",
217 | "outputId": "9d3a7b7b-99fd-4fc7-8c62-d8b3f08fb942"
218 | },
219 | "outputs": [],
220 | "source": [
221 | "from ikomia.dataprocess.workflow import Workflow\n",
222 | "from ikomia.utils import ik\n",
223 | "from ikomia.utils.displayIO import display\n",
224 | "from PIL import ImageShow\n",
225 | "ImageShow.register(ImageShow.IPythonViewer(), 0) # <-- Specific for displaying in notebooks\n",
226 | "\n",
227 | "# Create your workflow\n",
228 | "wf = Workflow()\n",
229 | "\n",
230 | "# Detect objects with pre-trained model on COCO\n",
231 | "yolo = wf.add_task(ik.infer_yolo_v7(), auto_connect=True) \n",
232 | "\n",
233 | "# Filter objects by name\n",
234 | "obj_filter = wf.add_task(ik.object_detection_filter(categories=\"zebra\", confidence=\"0.3\"), auto_connect=True)\n",
235 | "\n",
236 | "# Run\n",
237 | "wf.run_on(url=\"https://cdn.pixabay.com/photo/2016/01/30/17/58/zebra-1170177_960_720.jpg\") # <-- change your input image here\n",
238 | "\n",
239 | "# YOLO output image with bounding boxes\n",
240 | "img_detect = yolo.get_image_with_graphics()\n",
241 | "\n",
242 | "display(img_detect)\n",
243 | "\n",
244 | "print(f\"There are {len(obj_filter.get_output(1).get_objects())} zebras\")"
245 | ]
246 | },
247 | {
248 | "cell_type": "markdown",
249 | "metadata": {
250 | "id": "rpsaQoYSwma8"
251 | },
252 | "source": [
253 | "## -Google Colab ONLY- Save your custom image in your Google Drive space"
254 | ]
255 | },
256 | {
257 | "cell_type": "code",
258 | "execution_count": null,
259 | "metadata": {
260 | "colab": {
261 | "base_uri": "https://localhost:8080/"
262 | },
263 | "id": "pKPQ1JUCwdGW",
264 | "outputId": "72dde86f-aaaa-421e-c8e6-b9311206dfc7"
265 | },
266 | "outputs": [],
267 | "source": [
268 | "# Uncomment these lines if you're working on Colab\n",
269 | "\"\"\" import cv2\n",
270 | "from google.colab import drive\n",
270 | "drive.mount('/content/gdrive')\n",
271 | "\n",
272 | "cv2.imwrite(\"/content/gdrive/MyDrive/img_detect.png\", img_detect) \"\"\""
273 | ]
274 | },
275 | {
276 | "cell_type": "markdown",
277 | "metadata": {
278 | "id": "DyS-Lak6kntB"
279 | },
280 | "source": [
281 | "## -Google Colab ONLY- Download directly your custom image"
282 | ]
283 | },
284 | {
285 | "cell_type": "code",
286 | "execution_count": null,
287 | "metadata": {
288 | "colab": {
289 | "base_uri": "https://localhost:8080/",
290 | "height": 17
291 | },
292 | "id": "s_E2W_3hk07U",
293 | "outputId": "e639ba39-14aa-4b99-8c0b-3034734f09c6"
294 | },
295 | "outputs": [],
296 | "source": [
297 | "# Uncomment these lines if you're working on Colab\n",
298 | "\"\"\" import cv2\n",
299 | "from google.colab import files\n",
299 | "cv2.imwrite(\"/content/img_detect.png\", img_detect)\n",
300 | "files.download('/content/img_detect.png') \"\"\""
301 | ]
302 | }
303 | ],
304 | "metadata": {
305 | "colab": {
306 | "collapsed_sections": [],
307 | "provenance": []
308 | },
309 | "gpuClass": "standard",
310 | "kernelspec": {
311 | "display_name": "Python 3 (ipykernel)",
312 | "language": "python",
313 | "name": "python3"
314 | },
315 | "language_info": {
316 | "codemirror_mode": {
317 | "name": "ipython",
318 | "version": 3
319 | },
320 | "file_extension": ".py",
321 | "mimetype": "text/x-python",
322 | "name": "python",
323 | "nbconvert_exporter": "python",
324 | "pygments_lexer": "ipython3",
325 | "version": "3.10.1"
326 | }
327 | },
328 | "nbformat": 4,
329 | "nbformat_minor": 4
330 | }
331 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_Camera_Stream_Processing_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "# Real-time Object Detection with the Ikomia API"
9 | ]
10 | },
11 | {
12 | "attachments": {},
13 | "cell_type": "markdown",
14 | "metadata": {},
15 | "source": [
16 | "
"
17 | ]
18 | },
19 | {
20 | "attachments": {},
21 | "cell_type": "markdown",
22 | "metadata": {},
23 | "source": [
24 | "### Camera Stream Processing \n",
25 | "Camera stream processing involves the real-time analysis and manipulation of images and video streams captured from a camera. This technique finds widespread application in diverse fields such as computer vision, surveillance, robotics, and entertainment.\n",
26 | "In Computer Vision, camera stream processing plays a pivotal role in tasks like object detection and recognition, face detection, motion tracking, and image segmentation.\n",
27 | "\n",
28 | "- For surveillance purposes, camera stream processing supports tasks such as intrusion detection and crowd behavior analysis.\n",
29 | "\n",
30 | "- In the realm of robotics, camera stream processing facilitates autonomous navigation, object detection, and obstacle avoidance.\n",
31 | "\n",
32 | "- The entertainment industry leverages camera stream processing for exciting applications like augmented reality, virtual reality, and gesture recognition.\n",
33 | "\n",
34 | "In essence, camera stream processing plays a critical role across various domains, enabling numerous exciting applications that were once considered unattainable.\n",
35 | "To get started with camera stream processing, we will use OpenCV's VideoCapture together with the YOLOv7 algorithm.\n"
36 | ]
37 | },
38 | {
39 | "attachments": {},
40 | "cell_type": "markdown",
41 | "metadata": {},
42 | "source": [
43 | "### Setup\n",
44 | "\n",
45 | "You need to install the Ikomia Python API with pip."
46 | ]
47 | },
48 | {
49 | "cell_type": "code",
50 | "execution_count": null,
51 | "metadata": {},
52 | "outputs": [],
53 | "source": [
54 | "!pip install ikomia"
55 | ]
56 | },
57 | {
58 | "attachments": {},
59 | "cell_type": "markdown",
60 | "metadata": {},
61 | "source": [
62 | "### Run Real-Time Object Detection from your Webcam"
63 | ]
64 | },
65 | {
66 | "cell_type": "code",
67 | "execution_count": null,
68 | "metadata": {},
69 | "outputs": [],
70 | "source": [
71 | "from ikomia.dataprocess.workflow import Workflow\n",
72 | "from ikomia.utils import ik\n",
73 | "from ikomia.utils.displayIO import display\n",
74 | "import cv2\n",
75 | "\n",
76 | "stream = cv2.VideoCapture(0)\n",
77 | "\n",
78 | "# Init the workflow\n",
79 | "wf = Workflow()\n",
80 | "\n",
81 | "# Add color conversion\n",
82 | "cvt = wf.add_task(ik.ocv_color_conversion(code=str(cv2.COLOR_BGR2RGB)), auto_connect=True)\n",
83 | "\n",
84 | "# Add YOLOv7 detection\n",
85 | "yolo = wf.add_task(ik.infer_yolo_v7(conf_thres=\"0.6\"), auto_connect=True)\n",
86 | "\n",
87 | "\n",
88 | "while True:\n",
89 | " ret, frame = stream.read()\n",
90 | " \n",
91 | " # Test if streaming is OK\n",
92 | " if not ret:\n",
93 | " continue\n",
94 | "\n",
95 | " # Run workflow on image\n",
96 | " wf.run_on(frame)\n",
97 | "\n",
98 | " # Display the detection results\n",
99 | " display(\n",
100 | " yolo.get_image_with_graphics(),\n",
101 | " title=\"Object Detection - press 'q' to quit\",\n",
102 | " viewer=\"opencv\"\n",
103 | " )\n",
104 | "\n",
105 | " # Press 'q' to quit the streaming process\n",
106 | " if cv2.waitKey(1) & 0xFF == ord('q'):\n",
107 | " break\n",
108 | "\n",
109 | "# After the loop release the stream object\n",
110 | "stream.release()\n",
111 | "\n",
112 | "# Destroy all windows\n",
113 | "cv2.destroyAllWindows()"
114 | ]
115 | }
116 | ],
117 | "metadata": {
118 | "kernelspec": {
119 | "display_name": "venvapi",
120 | "language": "python",
121 | "name": "venvapi"
122 | },
123 | "language_info": {
124 | "codemirror_mode": {
125 | "name": "ipython",
126 | "version": 3
127 | },
128 | "file_extension": ".py",
129 | "mimetype": "text/x-python",
130 | "name": "python",
131 | "nbconvert_exporter": "python",
132 | "pygments_lexer": "ipython3",
133 | "version": "3.9.13"
134 | },
135 | "orig_nbformat": 4
136 | },
137 | "nbformat": 4,
138 | "nbformat_minor": 2
139 | }
140 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_DeepLabPlus_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run DeepLabV3+ with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "**DeepLabV3** is an advanced neural network architecture designed for the task of semantic image segmentation. This technique involves labeling each pixel in an image with a class, corresponding to what that pixel represents. \n",
27 | "\n",
28 | ""
29 | ]
30 | },
31 | {
32 | "attachments": {},
33 | "cell_type": "markdown",
34 | "metadata": {},
35 | "source": [
36 | "## Setup"
37 | ]
38 | },
39 | {
40 | "attachments": {},
41 | "cell_type": "markdown",
42 | "metadata": {},
43 | "source": [
44 | "You need to install the Ikomia Python API with pip.\n"
45 | ]
46 | },
47 | {
48 | "cell_type": "code",
49 | "execution_count": null,
50 | "metadata": {},
51 | "outputs": [],
52 | "source": [
53 | "!pip install ikomia"
54 | ]
55 | },
56 | {
57 | "cell_type": "markdown",
58 | "metadata": {},
59 | "source": [
60 | "---\n",
61 | "\n",
62 | "**-Google Colab ONLY- Restart runtime**\n",
63 | "\n",
64 | "Click on the \"RESTART RUNTIME\" button at the end of the previous window.\n",
65 | "\n",
66 | "---"
67 | ]
68 | },
69 | {
70 | "attachments": {},
71 | "cell_type": "markdown",
72 | "metadata": {},
73 | "source": [
74 | "## Run DeepLabV3+ on your image"
75 | ]
76 | },
77 | {
78 | "cell_type": "code",
79 | "execution_count": null,
80 | "metadata": {},
81 | "outputs": [],
82 | "source": [
83 | "from ikomia.dataprocess.workflow import Workflow\n",
84 | "\n",
85 | "\n",
86 | "# Init your workflow\n",
87 | "wf = Workflow()\n",
88 | "\n",
89 | "# Add the Deeplab algorithm\n",
90 | "deeplab = wf.add_task(name=\"infer_detectron2_deeplabv3plus\", auto_connect=True)\n",
91 | "deeplab.set_parameters({\"dataset\": \"Cityscapes\"})\n",
92 | "\n",
93 | "# Run on your image \n",
94 | "wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_city.jpeg?raw=true\")\n"
95 | ]
96 | },
97 | {
98 | "cell_type": "code",
99 | "execution_count": null,
100 | "metadata": {},
101 | "outputs": [],
102 | "source": [
103 | "from ikomia.utils.displayIO import display\n",
104 | "\n",
105 | "from PIL import ImageShow\n",
106 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
107 | "\n",
108 | "# Inspect your result\n",
109 | "display(deeplab.get_image_with_mask())"
110 | ]
111 | }
112 | ],
113 | "metadata": {
114 | "kernelspec": {
115 | "display_name": "venvapi",
116 | "language": "python",
117 | "name": "venvapi"
118 | },
119 | "language_info": {
120 | "codemirror_mode": {
121 | "name": "ipython",
122 | "version": 3
123 | },
124 | "file_extension": ".py",
125 | "mimetype": "text/x-python",
126 | "name": "python",
127 | "nbconvert_exporter": "python",
128 | "pygments_lexer": "ipython3",
129 | "version": "3.9.13"
130 | },
131 | "orig_nbformat": 4
132 | },
133 | "nbformat": 4,
134 | "nbformat_minor": 2
135 | }
136 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_Face_Detection_and_Blur_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run the Kornia face detector and face blurring with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "Kornia is an open-source computer vision library for Python, specifically designed for use with the PyTorch deep learning framework. It provides differentiable computer vision applications, such as Deep Edge detection, Semantic and Panoptic segmentation, Object Detection and Tracking, Image classification…\n",
27 | "\n",
28 | "The Kornia face detection uses a lightweight deep learning model named YuNet. It offers millisecond-level detection speed, making it ideal for edge computing.\n",
29 | "\n",
30 | "For more info on Kornia: https://github.com/kornia/kornia"
31 | ]
32 | },
33 | {
34 | "attachments": {},
35 | "cell_type": "markdown",
36 | "metadata": {},
37 | "source": [
38 | "## Setup"
39 | ]
40 | },
41 | {
42 | "attachments": {},
43 | "cell_type": "markdown",
44 | "metadata": {},
45 | "source": [
46 | "You need to install the Ikomia Python API with pip (and update numpy in Colab).\n"
47 | ]
48 | },
49 | {
50 | "cell_type": "code",
51 | "execution_count": null,
52 | "metadata": {},
53 | "outputs": [],
54 | "source": [
55 | "!pip install ikomia"
56 | ]
57 | },
58 | {
59 | "attachments": {},
60 | "cell_type": "markdown",
61 | "metadata": {},
62 | "source": [
63 | "## Run the face detector and blurring on your image"
64 | ]
65 | },
66 | {
67 | "cell_type": "code",
68 | "execution_count": 15,
69 | "metadata": {},
70 | "outputs": [
71 | {
72 | "name": "stdout",
73 | "output_type": "stream",
74 | "text": [
75 | "Will run on cuda\n",
76 | "Workflow Untitled run successfully in 256.8126 ms.\n"
77 | ]
78 | }
79 | ],
80 | "source": [
81 | "from ikomia.dataprocess.workflow import Workflow\n",
82 | "from ikomia.utils.displayIO import display\n",
83 | "\n",
84 | "# Init the workflow\n",
85 | "wf = Workflow()\n",
86 | "\n",
87 | "# Add and connect algorithms\n",
88 | "face = wf.add_task(name=\"infer_face_detection_kornia\", auto_connect=True)\n",
89 | "face.set_parameters({\n",
90 | " \"conf_thres\": \"0.5\",\n",
91 | " })\n",
92 | "\n",
93 | "blur = wf.add_task(name=\"ocv_blur\", auto_connect=True)\n",
94 | "\n",
95 | "# Set parameters\n",
96 | "blur.set_parameters({\n",
97 | " \"kSizeWidth\": \"61\", \n",
98 | " \"kSizeHeight\":\"61\"\n",
99 | " })\n",
100 | "\n",
101 | "# Run on your image\n",
102 | "wf.run_on(path=\"path/to/your/image.png\")\n",
103 | "\n",
104 | "# Inspect results\n",
105 | "display(face.get_image_with_graphics())\n",
106 | "display(blur.get_output(0).get_image())\n"
107 | ]
108 | },
109 | {
110 | "attachments": {},
111 | "cell_type": "markdown",
112 | "metadata": {},
113 | "source": [
114 | "## Run on your webcam (Jupyter notebook)"
115 | ]
116 | },
117 | {
118 | "cell_type": "code",
119 | "execution_count": null,
120 | "metadata": {},
121 | "outputs": [],
122 | "source": [
123 | "from ikomia.dataprocess.workflow import Workflow\n",
124 | "from ikomia.utils.displayIO import display\n",
125 | "import cv2\n",
126 | "\n",
127 | "stream = cv2.VideoCapture(0)\n",
128 | "\n",
129 | "# Init the workflow\n",
130 | "wf = Workflow()\n",
131 | "\n",
132 | "# Add and connect algorithms\n",
133 | "face = wf.add_task(name=\"infer_face_detection_kornia\", auto_connect=True)\n",
134 | "face.set_parameters({\n",
135 | " \"conf_thres\": \"0.5\",\n",
136 | " })\n",
137 | "\n",
138 | "blur = wf.add_task(name=\"ocv_blur\", auto_connect=True)\n",
139 | "\n",
140 | "# Set parameters\n",
141 | "blur.set_parameters({\n",
142 | " \"kSizeWidth\": \"61\", \n",
143 | " \"kSizeHeight\":\"61\"\n",
144 | " })\n",
145 | "\n",
146 | "\n",
147 | "while True:\n",
148 | " # Read image from stream\n",
149 | " ret, frame = stream.read()\n",
150 | "\n",
151 | " # Test if streaming is OK\n",
152 | " if not ret:\n",
153 | " continue\n",
154 | " \n",
155 | " # Convert the frame from BGR to RGB\n",
156 | " frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n",
157 | "\n",
158 | " # Run workflow on image\n",
159 | " wf.run_on(frame)\n",
160 | " \n",
161 | " # Display the blurred result\n",
162 | " display(blur.get_output(0).get_image(), title=\"Demo - press 'q' to quit \", viewer=\"opencv\")\n",
163 | "\n",
164 | " # Press 'q' to quit the streaming process\n",
165 | " if cv2.waitKey(1) & 0xFF == ord('q'):\n",
166 | " break\n",
167 | "\n",
168 | "# After the loop release the stream object\n",
169 | "stream.release()\n",
170 | "# Destroy all windows\n",
171 | "cv2.destroyAllWindows()\n"
172 | ]
173 | },
174 | {
175 | "attachments": {},
176 | "cell_type": "markdown",
177 | "metadata": {},
178 | "source": [
179 | "## -Google Colab ONLY- Save your custom image in your Google Drive space"
180 | ]
181 | },
182 | {
183 | "cell_type": "code",
184 | "execution_count": null,
185 | "metadata": {},
186 | "outputs": [],
187 | "source": [
188 | "# Uncomment these lines if you're working on Colab\n",
189 | "\"\"\" from google.colab import drive\n",
190 | "drive.mount('/content/gdrive')\n",
191 | "\n",
192 | "cv2.imwrite(\"/content/gdrive/MyDrive/img_blur.png\", blur.get_output(0).get_image()) \"\"\""
193 | ]
194 | },
195 | {
196 | "attachments": {},
197 | "cell_type": "markdown",
198 | "metadata": {},
199 | "source": [
200 | "## -Google Colab ONLY- Download directly your custom image"
201 | ]
202 | },
203 | {
204 | "cell_type": "code",
205 | "execution_count": null,
206 | "metadata": {},
207 | "outputs": [],
208 | "source": [
209 | "# Uncomment these lines if you're working on Colab\n",
210 | "\"\"\" from google.colab import files\n",
211 | "cv2.imwrite(\"/content/img_blur.png\", blur.get_output(0).get_image())\n",
212 | "files.download('/content/img_blur.png') \"\"\""
213 | ]
214 | }
215 | ],
216 | "metadata": {
217 | "kernelspec": {
218 | "display_name": "venvapi",
219 | "language": "python",
220 | "name": "venvapi"
221 | },
222 | "language_info": {
223 | "codemirror_mode": {
224 | "name": "ipython",
225 | "version": 3
226 | },
227 | "file_extension": ".py",
228 | "mimetype": "text/x-python",
229 | "name": "python",
230 | "nbconvert_exporter": "python",
231 | "pygments_lexer": "ipython3",
232 | "version": "3.9.13"
233 | },
234 | "orig_nbformat": 4
235 | },
236 | "nbformat": 4,
237 | "nbformat_minor": 2
238 | }
239 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_Faster_R-CNN_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run Faster R-CNN with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "\n",
27 | "**Faster R-CNN** is an advanced machine learning model that efficiently detects and classifies objects in images by integrating a Region Proposal Network (RPN) with Fast R-CNN for faster and more accurate object detection.\n",
28 | "\n",
29 | ""
30 | ]
31 | },
32 | {
33 | "attachments": {},
34 | "cell_type": "markdown",
35 | "metadata": {},
36 | "source": [
37 | "## Setup"
38 | ]
39 | },
40 | {
41 | "attachments": {},
42 | "cell_type": "markdown",
43 | "metadata": {},
44 | "source": [
45 | "You need to install the Ikomia Python API with pip.\n"
46 | ]
47 | },
48 | {
49 | "cell_type": "code",
50 | "execution_count": null,
51 | "metadata": {},
52 | "outputs": [],
53 | "source": [
54 | "!pip install ikomia"
55 | ]
56 | },
57 | {
58 | "cell_type": "markdown",
59 | "metadata": {},
60 | "source": [
61 | "---\n",
62 | "\n",
63 | "**-Google Colab ONLY- Restart runtime**\n",
64 | "\n",
65 | "Click on the \"RESTART RUNTIME\" button at the end of the previous window.\n",
66 | "\n",
67 | "---"
68 | ]
69 | },
70 | {
71 | "attachments": {},
72 | "cell_type": "markdown",
73 | "metadata": {},
74 | "source": [
75 | "## Run Faster R-CNN on your image"
76 | ]
77 | },
78 | {
79 | "cell_type": "code",
80 | "execution_count": null,
81 | "metadata": {},
82 | "outputs": [],
83 | "source": [
84 | "from ikomia.dataprocess.workflow import Workflow\n",
85 | "from ikomia.utils import ik\n",
86 | "\n",
87 | "\n",
88 | "# Init your workflow\n",
89 | "wf = Workflow()\n",
90 | "\n",
91 | "# Add algorithm\n",
92 | "algo = wf.add_task(ik.infer_torchvision_faster_rcnn(conf_thres='0.5'), auto_connect=True)\n",
93 | "\n",
94 | "# Run on your image\n",
95 | "wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_city.jpeg?raw=true\")"
96 | ]
97 | },
98 | {
99 | "cell_type": "code",
100 | "execution_count": null,
101 | "metadata": {},
102 | "outputs": [],
103 | "source": [
104 | "from ikomia.utils.displayIO import display\n",
105 | "from PIL import ImageShow\n",
106 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
107 | "\n",
108 | "# Inspect your result\n",
109 | "display(algo.get_image_with_graphics())"
110 | ]
111 | }
112 | ],
113 | "metadata": {
114 | "kernelspec": {
115 | "display_name": "venvapi",
116 | "language": "python",
117 | "name": "venvapi"
118 | },
119 | "language_info": {
120 | "codemirror_mode": {
121 | "name": "ipython",
122 | "version": 3
123 | },
124 | "file_extension": ".py",
125 | "mimetype": "text/x-python",
126 | "name": "python",
127 | "nbconvert_exporter": "python",
128 | "pygments_lexer": "ipython3",
129 | "version": "3.9.13"
130 | },
131 | "orig_nbformat": 4
132 | },
133 | "nbformat": 4,
134 | "nbformat_minor": 2
135 | }
136 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_Grounding_DINO_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run Grounding DINO with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "[GroundingDINO](https://github.com/IDEA-Research/GroundingDINO) is a cutting-edge zero-shot object detection model that marries the powerful [DINO](https://github.com/facebookresearch/dino) architecture with grounded pre-training. \n",
27 | "\n",
28 | "Developed by [IDEA-Research](https://www.idea.edu.cn/), GroundingDINO can detect arbitrary objects based on human inputs, such as category names or referring expressions.\n",
29 | "\n",
30 | "\n"
31 | ]
32 | },
33 | {
34 | "attachments": {},
35 | "cell_type": "markdown",
36 | "metadata": {},
37 | "source": [
38 | "## Setup"
39 | ]
40 | },
41 | {
42 | "cell_type": "markdown",
43 | "metadata": {},
44 | "source": [
45 | "Please use a GPU for this tutorial.\n",
46 | "\n",
47 | "In the Google Colab menu, select \"Runtime\", then \"Change runtime type\" and choose GPU under \"Hardware accelerator\".\n",
48 | "\n",
49 | "Check your GPU with the following command:"
50 | ]
51 | },
52 | {
53 | "cell_type": "code",
54 | "execution_count": null,
55 | "metadata": {},
56 | "outputs": [],
57 | "source": [
58 | "!nvidia-smi"
59 | ]
60 | },
61 | {
62 | "attachments": {},
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "First, you need to install the Ikomia API pip package."
67 | ]
68 | },
69 | {
70 | "cell_type": "code",
71 | "execution_count": null,
72 | "metadata": {},
73 | "outputs": [],
74 | "source": [
75 | "!pip install ikomia"
76 | ]
77 | },
78 | {
79 | "attachments": {},
80 | "cell_type": "markdown",
81 | "metadata": {},
82 | "source": [
83 | "## Run Grounding DINO on your image"
84 | ]
85 | },
86 | {
87 | "cell_type": "code",
88 | "execution_count": null,
89 | "metadata": {},
90 | "outputs": [],
91 | "source": [
92 | "from ikomia.dataprocess.workflow import Workflow\n",
93 | "from ikomia.utils.displayIO import display\n",
94 | "\n",
95 | "\n",
96 | "# Init your workflow\n",
97 | "wf = Workflow() \n",
98 | "\n",
99 | "# Add the Grounding DINO Object Detector\n",
100 | "dino = wf.add_task(name=\"infer_grounding_dino\", auto_connect=True)\n",
101 | "\n",
102 | "# Set the parameters\n",
103 | "dino.set_parameters({\n",
104 | " \"model_name\": \"Swin-B\",\n",
105 | " \"prompt\": \"laptops . smartphone . headphone .\",\n",
106 | " \"conf_thres\": \"0.35\",\n",
107 | " \"conf_thres_text\": \"0.25\"\n",
108 | "})\n",
109 | "\n",
110 | "\n",
111 | "# Run on your image \n",
112 | "# wf.run_on(path=\"path/to/your/image.png\")\n",
113 | "wf.run_on(url=\"https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg\")\n",
114 | "\n",
115 | "\n",
116 | "# Inspect your results\n",
117 | "display(dino.get_image_with_graphics())"
118 | ]
119 | },
120 | {
121 | "cell_type": "code",
122 | "execution_count": null,
123 | "metadata": {},
124 | "outputs": [],
125 | "source": [
126 | "# Display image on Google Colab\n",
127 | "from PIL import ImageShow\n",
128 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
129 | "\n",
130 | "display(dino.get_image_with_graphics())"
131 | ]
132 | }
133 | ],
134 | "metadata": {
135 | "kernelspec": {
136 | "display_name": "venvapi",
137 | "language": "python",
138 | "name": "venvapi"
139 | },
140 | "language_info": {
141 | "codemirror_mode": {
142 | "name": "ipython",
143 | "version": 3
144 | },
145 | "file_extension": ".py",
146 | "mimetype": "text/x-python",
147 | "name": "python",
148 | "nbconvert_exporter": "python",
149 | "pygments_lexer": "ipython3",
150 | "version": "3.9.13"
151 | },
152 | "orig_nbformat": 4
153 | },
154 | "nbformat": 4,
155 | "nbformat_minor": 2
156 | }
157 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_Kandinsky_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "9XIBGlEafDQf"
7 | },
8 | "source": [
9 | "\n",
10 | "\n",
11 | "\n"
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {
17 | "id": "DvouGVeYfDQg"
18 | },
19 | "source": [
20 | "# Create your image with Kandinsky 2.2\n",
21 | "\n",
22 | "\n",
23 | "**Kandinsky 2.2** is a text-conditional diffusion model based on unCLIP and latent diffusion. This model series, developed by a team from Russia, has evolved through several iterations, each bringing new features and improvements in image synthesis from text descriptions.\n",
24 | "\n",
25 | "\n",
26 | ""
27 | ]
28 | },
29 | {
30 | "cell_type": "markdown",
31 | "metadata": {
32 | "id": "-STLXz8ifDQh"
33 | },
34 | "source": [
35 | "## Setup"
36 | ]
37 | },
38 | {
39 | "cell_type": "markdown",
40 | "metadata": {},
41 | "source": [
42 | "Please use a GPU for this tutorial.\n",
43 | "\n",
44 | "In the Google Colab menu, select \"Runtime\", then \"Change runtime type\" and choose GPU under \"Hardware accelerator\".\n",
45 | "\n",
46 | "Check your GPU with the following command:"
47 | ]
48 | },
49 | {
50 | "cell_type": "code",
51 | "execution_count": null,
52 | "metadata": {},
53 | "outputs": [],
54 | "source": [
55 | "!nvidia-smi"
56 | ]
57 | },
58 | {
59 | "cell_type": "markdown",
60 | "metadata": {
61 | "id": "cV0_2S0SfDQh"
62 | },
63 | "source": [
64 | "You need to install the Ikomia Python API with pip.\n"
65 | ]
66 | },
67 | {
68 | "cell_type": "code",
69 | "execution_count": null,
70 | "metadata": {
71 | "colab": {
72 | "base_uri": "https://localhost:8080/",
73 | "height": 1000
74 | },
75 | "id": "cbvRlv_ufDQh",
76 | "outputId": "e3893478-603b-467b-96a3-b272121649b3"
77 | },
78 | "outputs": [],
79 | "source": [
80 | "!pip install ikomia"
81 | ]
82 | },
83 | {
84 | "cell_type": "markdown",
85 | "metadata": {
86 | "id": "-j3VbsAYfDQi"
87 | },
88 | "source": [
89 | "---\n",
90 | "\n",
91 | "**-Google Colab ONLY- Restart runtime**\n",
92 | "\n",
93 | "Click on the \"RESTART RUNTIME\" button at the end of the previous window.\n",
94 | "\n",
95 | "---"
96 | ]
97 | },
98 | {
99 | "cell_type": "markdown",
100 | "metadata": {},
101 | "source": [
102 | "## Run Kandinsky 2.2 text2img"
103 | ]
104 | },
105 | {
106 | "cell_type": "code",
107 | "execution_count": 1,
108 | "metadata": {
109 | "id": "uBp98pWxiHXq"
110 | },
111 | "outputs": [],
112 | "source": [
113 | "from ikomia.dataprocess.workflow import Workflow\n",
114 | "\n",
115 | "\n",
116 | "# Init your workflow\n",
117 | "wf = Workflow()\n",
118 | "\n",
119 | "# Add algorithm\n",
120 | "algo = wf.add_task(name = \"infer_kandinsky_2\", auto_connect=False)\n",
121 | "\n",
122 | "# Edit parameters\n",
123 | "algo.set_parameters({\n",
124 | " 'model_name': 'kandinsky-community/kandinsky-2-2-decoder',\n",
125 | " 'prompt': 'A Woman Jedi fighter performs a beautiful move with one lightsabre, full body, dark galaxy background, look at camera, Ancient Chinese style, cinematic, 4K.',\n",
126 | " 'negative_prompt': 'low quality, bad quality',\n",
127 | " 'prior_num_inference_steps': '25',\n",
128 | " 'prior_guidance_scale': '4.0',\n",
129 | " 'num_inference_steps': '100',\n",
130 | " 'guidance_scale': '1.0',\n",
131 | " 'seed': '-1',\n",
132 | " 'width': '1280',\n",
133 | " 'height': '768',\n",
134 | " })\n",
135 | "\n",
136 | "\n",
137 | "# Generate your image\n",
138 | "wf.run()"
139 | ]
140 | },
141 | {
142 | "cell_type": "code",
143 | "execution_count": null,
144 | "metadata": {},
145 | "outputs": [],
146 | "source": [
147 | "from ikomia.utils.displayIO import display\n",
148 | "\n",
149 | "from PIL import ImageShow\n",
150 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
151 | "\n",
152 | "# Display the image\n",
153 | "display(algo.get_output(0).get_image())"
154 | ]
155 | },
156 | {
157 | "cell_type": "markdown",
158 | "metadata": {},
159 | "source": [
160 | "### List of parameters\n",
161 | "\n",
162 | "- **model_name** (str) - default 'kandinsky-community/kandinsky-2-2-decoder': Name of the latent diffusion model. \n",
163 | "- **prompt** (str) - default 'portrait of a young women, blue eyes, cinematic': Text prompt to guide the image generation.\n",
164 | "- **negative_prompt** (str, *optional*) - default 'low quality, bad quality': The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).\n",
165 | "- **prior_num_inference_steps** (int) - default '25': Number of denoising steps of the prior model (CLIP).\n",
166 | "- **prior_guidance_scale** (float) - default '4.0': A higher guidance scale encourages the model to generate images closely linked to the text prompt, usually at the expense of lower image quality (minimum: 1; maximum: 20).\n",
167 | "- **num_inference_steps** (int) - default '100': The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.\n",
168 | "- **guidance_scale** (float) - default '1.0': A higher guidance scale encourages the model to generate images closely linked to the text prompt, usually at the expense of lower image quality (minimum: 1; maximum: 20).\n",
169 | "- **height** (int) - default '768': The height in pixels of the generated image.\n",
170 | "- **width** (int) - default '768': The width in pixels of the generated image.\n",
171 | "- **seed** (int) - default '-1': Seed value. '-1' generates a random number between 0 and 191965535.\n",
172 | "\n",
173 | "\n",
174 | "*Note: the \"prior model\" interprets and encodes the input text to understand the desired image content, while the \"decoder model\" translates this encoded information into the actual visual representation, effectively generating the image based on the text description.*"
175 | ]
176 | }
177 | ],
178 | "metadata": {
179 | "accelerator": "GPU",
180 | "colab": {
181 | "gpuType": "V100",
182 | "provenance": []
183 | },
184 | "kernelspec": {
185 | "display_name": "venvapi",
186 | "language": "python",
187 | "name": "venvapi"
188 | },
189 | "language_info": {
190 | "codemirror_mode": {
191 | "name": "ipython",
192 | "version": 3
193 | },
194 | "file_extension": ".py",
195 | "mimetype": "text/x-python",
196 | "name": "python",
197 | "nbconvert_exporter": "python",
198 | "pygments_lexer": "ipython3",
199 | "version": "3.9.13"
200 | },
201 | "orig_nbformat": 4
202 | },
203 | "nbformat": 4,
204 | "nbformat_minor": 0
205 | }
206 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_MODNet_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run MODNet with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "**MODNet** is a lightweight matting objective decomposition network that can perform portrait matting on a single input image in real time.\n",
27 | "\n",
28 | "- MODNet operates at an impressive rate of 67 frames per second (1080Ti GPU), showcasing its ability to handle high-speed matting tasks with remarkable efficiency.\n",
29 | "- MODNet achieves remarkable results on daily photos and videos.\n",
30 | "- MODNet is easy to train in an end-to-end fashion.\n",
31 | "\n",
32 | "Being simple, fast, and effective, MODNet makes it possible to avoid using a green screen for real-time portrait matting.\n",
33 | "\n",
34 | ""
35 | ]
36 | },
37 | {
38 | "attachments": {},
39 | "cell_type": "markdown",
40 | "metadata": {},
41 | "source": [
42 | "## Setup"
43 | ]
44 | },
45 | {
46 | "attachments": {},
47 | "cell_type": "markdown",
48 | "metadata": {},
49 | "source": [
50 | "You need to install the Ikomia Python API with pip.\n"
51 | ]
52 | },
53 | {
54 | "cell_type": "code",
55 | "execution_count": null,
56 | "metadata": {},
57 | "outputs": [],
58 | "source": [
59 | "!pip install ikomia"
60 | ]
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "metadata": {},
65 | "source": [
66 | "---\n",
67 | "\n",
68 | "**-Google Colab ONLY- Restart runtime**\n",
69 | "\n",
70 | "Click on the \"RESTART RUNTIME\" button at the end of the previous window.\n",
71 | "\n",
72 | "---"
73 | ]
74 | },
75 | {
76 | "attachments": {},
77 | "cell_type": "markdown",
78 | "metadata": {},
79 | "source": [
80 | "## Run MODNet on your image"
81 | ]
82 | },
83 | {
84 | "cell_type": "code",
85 | "execution_count": null,
86 | "metadata": {},
87 | "outputs": [],
88 | "source": [
89 | "from ikomia.dataprocess.workflow import Workflow\n",
90 | "\n",
91 | "# Init your workflow\n",
92 | "wf = Workflow() \n",
93 | "\n",
94 | "# Add the MODNet process to the workflow\n",
95 | "modnet = wf.add_task(name=\"infer_modnet_portrait_matting\", auto_connect=True)\n",
96 | "\n",
97 | "# Set process parameters\n",
98 | "modnet.set_parameters({\n",
99 | " \"input_size\":\"800\",\n",
100 | " \"cuda\":\"True\"})\n",
101 | "\n",
102 | "# Run workflow on the image\n",
103 | "wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_portrait_3.jpeg?raw=true\")"
104 | ]
105 | },
106 | {
107 | "cell_type": "code",
108 | "execution_count": null,
109 | "metadata": {},
110 | "outputs": [],
111 | "source": [
112 | "from ikomia.utils.displayIO import display\n",
113 | "\n",
114 | "from PIL import ImageShow\n",
115 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
116 | "\n",
117 | "# Display the image\n",
118 | "display(modnet.get_output(1).get_image())"
119 | ]
120 | }
121 | ],
122 | "metadata": {
123 | "kernelspec": {
124 | "display_name": "venvapi",
125 | "language": "python",
126 | "name": "venvapi"
127 | },
128 | "language_info": {
129 | "codemirror_mode": {
130 | "name": "ipython",
131 | "version": 3
132 | },
133 | "file_extension": ".py",
134 | "mimetype": "text/x-python",
135 | "name": "python",
136 | "nbconvert_exporter": "python",
137 | "pygments_lexer": "ipython3",
138 | "version": "3.9.13"
139 | },
140 | "orig_nbformat": 4
141 | },
142 | "nbformat": 4,
143 | "nbformat_minor": 2
144 | }
145 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_Mask_R-CNN_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run Mask R-CNN with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "\n",
27 | "**Mask R-CNN** (Region-based Convolutional Neural Network) is an extension of Faster R-CNN, a popular object detection model. While Faster R-CNN efficiently locates objects in an image, **Mask R-CNN goes a step further by generating a high-quality segmentation mask for each instance**.\n",
29 | "\n",
30 | "This model is thus not only able to pinpoint the location of objects within an image but also to precisely outline the shape of each object.\n",
31 | ""
32 | ]
33 | },
34 | {
35 | "attachments": {},
36 | "cell_type": "markdown",
37 | "metadata": {},
38 | "source": [
39 | "## Setup"
40 | ]
41 | },
42 | {
43 | "attachments": {},
44 | "cell_type": "markdown",
45 | "metadata": {},
46 | "source": [
47 | "You need to install the Ikomia Python API with pip.\n"
48 | ]
49 | },
50 | {
51 | "cell_type": "code",
52 | "execution_count": null,
53 | "metadata": {},
54 | "outputs": [],
55 | "source": [
56 | "!pip install ikomia"
57 | ]
58 | },
59 | {
60 | "cell_type": "markdown",
61 | "metadata": {},
62 | "source": [
63 | "---\n",
64 | "\n",
65 | "**-Google Colab ONLY- Restart runtime**\n",
66 | "\n",
67 | "Click on the \"RESTART RUNTIME\" button at the end of the previous window.\n",
68 | "\n",
69 | "---"
70 | ]
71 | },
72 | {
73 | "attachments": {},
74 | "cell_type": "markdown",
75 | "metadata": {},
76 | "source": [
77 | "## Run Mask R-CNN on your image"
78 | ]
79 | },
80 | {
81 | "cell_type": "code",
82 | "execution_count": null,
83 | "metadata": {},
84 | "outputs": [],
85 | "source": [
86 | "from ikomia.dataprocess.workflow import Workflow\n",
87 | "from ikomia.utils import ik\n",
88 | "\n",
89 | "# Init your workflow\n",
90 | "wf = Workflow()\n",
91 | "\n",
92 | "# Add algorithm\n",
93 | "algo = wf.add_task(ik.infer_torchvision_mask_rcnn(\n",
94 | " conf_thres='0.5',\n",
95 | " iou_thres='0.5'\n",
96 | " ),\n",
97 | " auto_connect=True\n",
98 | ")\n",
99 | "\n",
100 | "# Run on your image\n",
101 | "wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_living_room.jpg?raw=true\")\n",
102 | "\n"
103 | ]
104 | },
105 | {
106 | "cell_type": "code",
107 | "execution_count": null,
108 | "metadata": {},
109 | "outputs": [],
110 | "source": [
111 | "from ikomia.utils.displayIO import display\n",
112 | "from PIL import ImageShow\n",
113 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
114 | "\n",
115 | "# Inspect your result\n",
116 | "display(algo.get_image_with_mask_and_graphics())"
117 | ]
118 | }
119 | ],
120 | "metadata": {
121 | "kernelspec": {
122 | "display_name": "venvapi",
123 | "language": "python",
124 | "name": "venvapi"
125 | },
126 | "language_info": {
127 | "codemirror_mode": {
128 | "name": "ipython",
129 | "version": 3
130 | },
131 | "file_extension": ".py",
132 | "mimetype": "text/x-python",
133 | "name": "python",
134 | "nbconvert_exporter": "python",
135 | "pygments_lexer": "ipython3",
136 | "version": "3.9.13"
137 | },
138 | "orig_nbformat": 4
139 | },
140 | "nbformat": 4,
141 | "nbformat_minor": 2
142 | }
143 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_MnasNet_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run MnasNet with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "**MnasNet** is a model family produced by a multi-objective neural architecture search framework designed to create neural network models that optimize both accuracy and efficiency.\n"
27 | ]
28 | },
29 | {
30 | "attachments": {},
31 | "cell_type": "markdown",
32 | "metadata": {},
33 | "source": [
34 | "## Setup"
35 | ]
36 | },
37 | {
38 | "attachments": {},
39 | "cell_type": "markdown",
40 | "metadata": {},
41 | "source": [
42 | "You need to install the Ikomia Python API with pip.\n"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": null,
48 | "metadata": {},
49 | "outputs": [],
50 | "source": [
51 | "!pip install ikomia"
52 | ]
53 | },
54 | {
55 | "cell_type": "markdown",
56 | "metadata": {},
57 | "source": [
58 | "---\n",
59 | "\n",
60 | "**-Google Colab ONLY- Restart runtime**\n",
61 | "\n",
62 | "Click on the \"RESTART RUNTIME\" button at the end of the previous window.\n",
63 | "\n",
64 | "---"
65 | ]
66 | },
67 | {
68 | "attachments": {},
69 | "cell_type": "markdown",
70 | "metadata": {},
71 | "source": [
72 | "## Run MnasNet on your image"
73 | ]
74 | },
75 | {
76 | "cell_type": "code",
77 | "execution_count": null,
78 | "metadata": {},
79 | "outputs": [],
80 | "source": [
81 | "from ikomia.dataprocess.workflow import Workflow\n",
82 | "\n",
83 | "# Init your workflow\n",
84 | "wf = Workflow()\n",
85 | "\n",
86 | "# Add algorithm\n",
87 | "algo = wf.add_task(name=\"infer_torchvision_mnasnet\", auto_connect=True)\n",
88 | "algo.set_parameters({\"input_size\": \"224\"})\n",
89 | "\n",
90 | "# Run directly on your image\n",
91 | "wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_taxi.jpg?raw=true\")"
95 | ]
96 | },
97 | {
98 | "cell_type": "code",
99 | "execution_count": null,
100 | "metadata": {},
101 | "outputs": [],
102 | "source": [
103 | "from ikomia.utils.displayIO import display\n",
104 | "\n",
105 | "from PIL import ImageShow\n",
106 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
107 | "\n",
108 | "# Inspect your result\n",
109 | "display(algo.get_image_with_graphics())"
110 | ]
111 | },
112 | {
113 | "cell_type": "markdown",
114 | "metadata": {},
115 | "source": [
116 | "## List of parameters\n",
117 | "\n",
118 | "\n",
119 | "- **input_size** (int) - default '224': Size of the input image.\n",
120 | "\n",
121 | "If you are using a custom MnasNet model (see the sketch below): \n",
122 | "- **model_weight_file** (str, optional): Path to model weights file.\n",
123 | "- **class_file** (str, optional): Path to text file (.txt) containing class names. \n",
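"\n",
"For instance, a custom model could be plugged in as follows (a minimal sketch; both paths are hypothetical placeholders to adapt to your own files):\n",
"\n",
"```python\n",
"# Sketch with placeholder paths: point the algorithm to your own weights\n",
"# and class list instead of the default pre-trained model.\n",
"algo.set_parameters({\n",
"    \"input_size\": \"224\",\n",
"    \"model_weight_file\": \"path/to/your/model.pth\",\n",
"    \"class_file\": \"path/to/your/classes.txt\"\n",
"})\n",
"```\n",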
124 | "\n"
125 | ]
126 | }
127 | ],
128 | "metadata": {
129 | "kernelspec": {
130 | "display_name": "venv310",
131 | "language": "python",
132 | "name": "python3"
133 | },
134 | "language_info": {
135 | "codemirror_mode": {
136 | "name": "ipython",
137 | "version": 3
138 | },
139 | "file_extension": ".py",
140 | "mimetype": "text/x-python",
141 | "name": "python",
142 | "nbconvert_exporter": "python",
143 | "pygments_lexer": "ipython3",
144 | "version": "3.10.11"
145 | },
146 | "orig_nbformat": 4
147 | },
148 | "nbformat": 4,
149 | "nbformat_minor": 2
150 | }
151 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_Neural_Style_Transfer_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "GyGbD_GxAOI0"
7 | },
8 | "source": [
9 | "\n",
10 | "\n",
11 | "\n"
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {
17 | "id": "oNcONqwxwgkv"
18 | },
19 | "source": [
20 | "# How to run Neural Style Transfer with Ikomia API in less than 2 minutes"
21 | ]
22 | },
23 | {
24 | "cell_type": "markdown",
25 | "metadata": {
26 | "id": "AKTe7F5nwXI4"
27 | },
28 | "source": [
29 | "Neural Style Transfer is an AI technique that renders your image in the style of another image. It was first introduced by researchers in the paper [\"A Neural Algorithm of Artistic Style\"](https://arxiv.org/abs/1508.06576) by Leon Gatys et al. (2015).\n",
30 | "\n",
31 | "\n",
32 | "In this demo, we use master paintings as style images and show how easy it is to use this technology thanks to the Ikomia API and Ikomia HUB. With a few lines of code, every developer can turn photos into artworks!\n",
33 | "\n",
34 | "If you like this tutorial, you can support our project here: [Ikomia API GitHub](https://github.com/Ikomia-dev/IkomiaApi).\n",
35 | "\n",
36 | "## ENJOY 🥰 !!\n",
37 | "\n",
38 | "\n",
39 | "\n"
43 | ]
44 | },
45 | {
46 | "cell_type": "markdown",
47 | "metadata": {
48 | "id": "x4CdI0J1ej5b"
49 | },
50 | "source": [
51 | "## Setup"
52 | ]
53 | },
54 | {
55 | "cell_type": "markdown",
56 | "metadata": {
57 | "id": "NBmJN2AaDmcI"
58 | },
59 | "source": [
60 | "You need to install the Ikomia Python API with pip."
61 | ]
62 | },
63 | {
64 | "cell_type": "code",
65 | "execution_count": null,
66 | "metadata": {
67 | "colab": {
68 | "base_uri": "https://localhost:8080/",
69 | "height": 1000
70 | },
71 | "id": "8eSnQYJygrDy",
72 | "outputId": "47dd7506-8fdb-4f12-fe98-f03e75a79fb5"
73 | },
74 | "outputs": [],
75 | "source": [
76 | "!pip install ikomia numpy==1.23.5"
77 | ]
78 | },
79 | {
80 | "cell_type": "markdown",
81 | "metadata": {
82 | "id": "kVvL0vVfUGN5"
83 | },
84 | "source": [
85 | "\n",
86 | "\n",
87 | "---\n",
88 | "\n",
89 | "\n",
90 | "**-Google Colab ONLY- Restart runtime**\n",
91 | "\n",
92 | "Some Python packages have been updated. Please click on the \"RESTART RUNTIME\" button at the end of the previous window.\n",
93 | "\n",
94 | "\n",
95 | "\n"
96 | ]
97 | },
98 | {
99 | "cell_type": "markdown",
100 | "metadata": {
101 | "id": "2hS1T6ky1Wcw"
102 | },
103 | "source": [
104 | "---"
105 | ]
106 | },
107 | {
108 | "cell_type": "markdown",
109 | "metadata": {
110 | "id": "JJsRFzl9Au1c"
111 | },
112 | "source": [
113 | "The Ikomia API already has more than 180 pre-integrated algorithms (mainly OpenCV), but the most interesting algorithms are in [Ikomia HUB](https://github.com/Ikomia-hub). \n",
114 | "\n",
115 | "We regularly push state-of-the-art algorithms from individual repositories (think of YOLOv7, for example) or from companies (Facebook Detectron2 or Ultralytics/YOLOv5, for example)."
116 | ]
117 | },
118 | {
119 | "cell_type": "markdown",
120 | "metadata": {
121 | "id": "jEdZ_uDYDqjH"
122 | },
123 | "source": [
124 | "## Apply Neural Style Transfer (NST) on your images"
125 | ]
126 | },
127 | {
128 | "cell_type": "markdown",
129 | "metadata": {
130 | "id": "5O-fpfWfiNfW"
131 | },
132 | "source": [
133 | "Create a new workflow from scratch and add the NST algorithm to your workflow.\n",
134 | "It will automagically download the algorithm from Ikomia Hub and install all the Python dependencies."
135 | ]
136 | },
137 | {
138 | "cell_type": "code",
139 | "execution_count": null,
140 | "metadata": {
141 | "colab": {
142 | "base_uri": "https://localhost:8080/"
143 | },
144 | "id": "bRPYGcRd1Pwh",
145 | "outputId": "83005fbf-ef97-413b-fc6a-18558597d0ed"
146 | },
147 | "outputs": [],
148 | "source": [
149 | "from ikomia.dataprocess.workflow import Workflow\n",
150 | "\n",
151 | "# Create your workflow\n",
152 | "wf = Workflow() \n",
153 | "\n",
154 | "# Add the NST algorithm to your workflow\n",
155 | "nst = wf.add_task(name=\"infer_neural_style_transfer\", auto_connect=True) "
156 | ]
157 | },
158 | {
159 | "cell_type": "markdown",
160 | "metadata": {
161 | "id": "9LX24Eg5irYO"
162 | },
163 | "source": [
164 | "Then, you can change the NST algorithm parameters to switch between paintings.\n",
165 | "\n",
166 | "Get the parameters from the NST algorithm and change them to see a different painting style.\n",
167 | "\n",
168 | "The method \"instance_norm\" provides the following paintings:\n",
169 | "\n",
170 | "* candy\n",
171 | "* la_muse\n",
172 | "* mosaic\n",
173 | "* feathers\n",
174 | "* the_scream\n",
175 | "* udnie\n",
176 | "\n",
177 | "The method \"eccv16\" provides the following paintings:\n",
178 | "\n",
179 | "* the_wave\n",
180 | "* starry_night\n",
181 | "* la_muse\n",
182 | "* composition_vii\n",
183 | "\n",
184 | "The algorithm automatically downloads the model and the corresponding painting."
185 | ]
186 | },
187 | {
188 | "cell_type": "code",
189 | "execution_count": 2,
190 | "metadata": {
191 | "colab": {
192 | "background_save": true,
193 | "base_uri": "https://localhost:8080/"
194 | },
195 | "id": "URu9U_jWw-oK",
196 | "outputId": "14e44826-4b7e-4922-b10a-d5607ea4bc65"
197 | },
198 | "outputs": [],
199 | "source": [
200 | "nst_params = {\n",
201 | " \"method\": \"instance_norm\", # <-- change method here\n",
202 | " \"model_name\": \"mosaic\" # <-- change painting here\n",
203 | "}\n",
204 | "nst.set_parameters(nst_params)"
205 | ]
206 | },
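{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can compare several styles in one go. The cell below is a minimal sketch (not part of the original tutorial) that only reuses the calls shown above; each iteration re-runs the workflow and downloads the corresponding model on first use."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: generate one stylized image per \"instance_norm\" painting\n",
"paintings = [\"candy\", \"la_muse\", \"mosaic\", \"feathers\", \"the_scream\", \"udnie\"]\n",
"results = {}\n",
"\n",
"for painting in paintings:\n",
"    nst.set_parameters({\"method\": \"instance_norm\", \"model_name\": painting})\n",
"    # Re-run the workflow on the same input image for each painting\n",
"    wf.run_on(url=\"https://img.lemde.fr/2022/05/19/0/0/5571/3687/664/0/75/0/e355ed2_1652966874932-pns-3790466.jpg\")\n",
"    results[painting] = nst.get_output(0).get_image()"
]
},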
207 | {
208 | "cell_type": "markdown",
209 | "metadata": {},
210 | "source": [
211 | "## Run and display your results"
212 | ]
213 | },
214 | {
215 | "cell_type": "code",
216 | "execution_count": null,
217 | "metadata": {},
218 | "outputs": [],
219 | "source": [
220 | "from ikomia.utils.displayIO import display\n",
221 | "from PIL import ImageShow\n",
222 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
223 | "\n",
224 | "# Run\n",
225 | "wf.run_on(url=\"https://img.lemde.fr/2022/05/19/0/0/5571/3687/664/0/75/0/e355ed2_1652966874932-pns-3790466.jpg\") # <-- change input image\n",
226 | "\n",
227 | "# Get NST image result\n",
228 | "img_paint = nst.get_output(0).get_image() # First image is the result\n",
229 | "img_model = nst.get_output(1).get_image() # Second image is the reference painting\n",
230 | "\n",
231 | "# Display images\n",
232 | "display(img_paint)\n",
233 | "display(img_model)"
234 | ]
235 | },
236 | {
237 | "cell_type": "markdown",
238 | "metadata": {
239 | "id": "rpsaQoYSwma8"
240 | },
241 | "source": [
242 | "## -Google Colab ONLY- Save your custom image in your Google Drive space"
243 | ]
244 | },
245 | {
246 | "cell_type": "code",
247 | "execution_count": null,
248 | "metadata": {
249 | "colab": {
250 | "base_uri": "https://localhost:8080/"
251 | },
252 | "id": "pKPQ1JUCwdGW",
253 | "outputId": "0ba86136-ef9f-4e40-bc54-0a9367bed649"
254 | },
255 | "outputs": [],
256 | "source": [
257 | "# Uncomment these lines if you're working on Colab\n",
258 | "\"\"\" from google.colab import drive\n",
259 | "drive.mount('/content/gdrive')\n",
260 | "\n",
261 | "cv2.imwrite(\"/content/gdrive/MyDrive/paint_img.png\", img_paint) \"\"\""
262 | ]
263 | },
264 | {
265 | "cell_type": "markdown",
266 | "metadata": {
267 | "id": "DyS-Lak6kntB"
268 | },
269 | "source": [
270 | "## -Google Colab ONLY- Download directly your custom image"
271 | ]
272 | },
273 | {
274 | "cell_type": "code",
275 | "execution_count": null,
276 | "metadata": {
277 | "colab": {
278 | "base_uri": "https://localhost:8080/",
279 | "height": 17
280 | },
281 | "id": "s_E2W_3hk07U",
282 | "outputId": "e639ba39-14aa-4b99-8c0b-3034734f09c6"
283 | },
284 | "outputs": [],
285 | "source": [
286 | "# Uncomment these lines if you're working on Colab\n",
287 | "\"\"\" from google.colab import files\n",
288 | "cv2.imwrite(\"/content/paint_img.png\", img_paint)\n",
289 | "files.download('/content/paint_img.png') \"\"\""
290 | ]
291 | }
292 | ],
293 | "metadata": {
294 | "colab": {
295 | "collapsed_sections": [],
296 | "provenance": []
297 | },
298 | "gpuClass": "standard",
299 | "kernelspec": {
300 | "display_name": "venv37",
301 | "language": "python",
302 | "name": "venv37"
303 | },
304 | "language_info": {
305 | "codemirror_mode": {
306 | "name": "ipython",
307 | "version": 3
308 | },
309 | "file_extension": ".py",
310 | "mimetype": "text/x-python",
311 | "name": "python",
312 | "nbconvert_exporter": "python",
313 | "pygments_lexer": "ipython3",
314 | "version": "3.7.9"
315 | }
316 | },
317 | "nbformat": 4,
318 | "nbformat_minor": 4
319 | }
320 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_P3M_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run P3M portrait matting with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "**P3M, Privacy-Preserving Portrait Matting**, is an innovative approach that combines the power of deep learning with the necessity of protecting individuals' privacy in digital images. It specifically addresses the challenge of separating a portrait subject from its background (matting) without compromising the individual's identity.\n",
27 | "\n",
28 | "
\n"
29 | ]
30 | },
31 | {
32 | "attachments": {},
33 | "cell_type": "markdown",
34 | "metadata": {},
35 | "source": [
36 | "## Setup"
37 | ]
38 | },
39 | {
40 | "attachments": {},
41 | "cell_type": "markdown",
42 | "metadata": {},
43 | "source": [
44 | "You need to install Ikomia Python API with pip\n"
45 | ]
46 | },
47 | {
48 | "cell_type": "code",
49 | "execution_count": null,
50 | "metadata": {},
51 | "outputs": [],
52 | "source": [
53 | "!pip install ikomia"
54 | ]
55 | },
56 | {
57 | "cell_type": "markdown",
58 | "metadata": {},
59 | "source": [
60 | "---\n",
61 | "\n",
62 | "**-Google Colab ONLY- Restart runtime**\n",
63 | "\n",
64 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
65 | "\n",
66 | "---"
67 | ]
68 | },
69 | {
70 | "attachments": {},
71 | "cell_type": "markdown",
72 | "metadata": {},
73 | "source": [
74 | "## Run P3M on your image"
75 | ]
76 | },
77 | {
78 | "cell_type": "code",
79 | "execution_count": null,
80 | "metadata": {},
81 | "outputs": [],
82 | "source": [
83 | "from ikomia.dataprocess.workflow import Workflow\n",
84 | "from ikomia.utils import ik\n",
85 | "\n",
86 | "# Init your workflow\n",
87 | "wf = Workflow() \n",
88 | "\n",
89 | "# Add the p3m process to the workflow\n",
90 | "p3m = wf.add_task(ik.infer_p3m_portrait_matting(\n",
91 | " model_name=\"resnet34\",\n",
92 | " input_size=\"1024\",\n",
93 | " method='HYBRID',\n",
94 | " cuda=\"True\"), auto_connect=True)\n",
95 | "\n",
96 | "# Run workflow on the image\n",
97 | "wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_portrait_4.jpg?raw=true\")\n"
98 | ]
99 | },
100 | {
101 | "cell_type": "code",
102 | "execution_count": null,
103 | "metadata": {},
104 | "outputs": [],
105 | "source": [
106 | "from ikomia.utils.displayIO import display\n",
107 | "\n",
108 | "from PIL import ImageShow\n",
109 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
110 | "\n",
111 | "# Inspect your results\n",
112 | "display(p3m.get_input(0).get_image()) \n",
113 | "display(p3m.get_output(0).get_image())\n",
114 | "display(p3m.get_output(1).get_image())"
115 | ]
116 | }
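,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to keep the matting results, here is a minimal sketch (not part of the original tutorial) that saves the first output to disk. It assumes get_image() returns an RGB array, as it does for display above, hence the RGB to BGR conversion for OpenCV."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cv2\n",
"\n",
"# Sketch: save the first P3M output (assumed RGB) as a PNG file\n",
"result = p3m.get_output(0).get_image()\n",
"cv2.imwrite(\"p3m_result.png\", cv2.cvtColor(result, cv2.COLOR_RGB2BGR))"
]
}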
117 | ],
118 | "metadata": {
119 | "kernelspec": {
120 | "display_name": "venvapi",
121 | "language": "python",
122 | "name": "venvapi"
123 | },
124 | "language_info": {
125 | "codemirror_mode": {
126 | "name": "ipython",
127 | "version": 3
128 | },
129 | "file_extension": ".py",
130 | "mimetype": "text/x-python",
131 | "name": "python",
132 | "nbconvert_exporter": "python",
133 | "pygments_lexer": "ipython3",
134 | "version": "3.9.13"
135 | },
136 | "orig_nbformat": 4
137 | },
138 | "nbformat": 4,
139 | "nbformat_minor": 2
140 | }
141 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_ResNext_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run ResNext with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "**ResNeXt** stands for Residual Networks with Aggregated Transformations. It's a type of CNN that was introduced to address some of the limitations found in traditional CNN architectures. \n",
27 | "\n",
28 | ""
29 | ]
30 | },
31 | {
32 | "attachments": {},
33 | "cell_type": "markdown",
34 | "metadata": {},
35 | "source": [
36 | "## Setup"
37 | ]
38 | },
39 | {
40 | "attachments": {},
41 | "cell_type": "markdown",
42 | "metadata": {},
43 | "source": [
44 | "You need to install Ikomia Python API with pip\n"
45 | ]
46 | },
47 | {
48 | "cell_type": "code",
49 | "execution_count": null,
50 | "metadata": {},
51 | "outputs": [],
52 | "source": [
53 | "!pip install ikomia"
54 | ]
55 | },
56 | {
57 | "cell_type": "markdown",
58 | "metadata": {},
59 | "source": [
60 | "---\n",
61 | "\n",
62 | "**-Google Colab ONLY- Restart runtime**\n",
63 | "\n",
64 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
65 | "\n",
66 | "---"
67 | ]
68 | },
69 | {
70 | "attachments": {},
71 | "cell_type": "markdown",
72 | "metadata": {},
73 | "source": [
74 | "## Run ResNext on your image"
75 | ]
76 | },
77 | {
78 | "cell_type": "code",
79 | "execution_count": null,
80 | "metadata": {},
81 | "outputs": [],
82 | "source": [
83 | "from ikomia.dataprocess.workflow import Workflow\n",
84 | "\n",
85 | "# Init your workflow\n",
86 | "wf = Workflow()\n",
87 | "\n",
88 | "# Add algorithm\n",
89 | "algo = wf.add_task(name=\"infer_torchvision_resnext\", auto_connect=True)\n",
90 | "\n",
91 | "algo.set_parameters({\n",
92 | " \"model_name\": \"resnext101\",\n",
93 | " \"input_size\": \"224\",\n",
94 | "})\n",
95 | "\n",
96 | "# Run directly on your image\n",
97 | "wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_bike_rider_2.jpg?raw=true\")"
98 | ]
99 | },
100 | {
101 | "cell_type": "code",
102 | "execution_count": null,
103 | "metadata": {},
104 | "outputs": [],
105 | "source": [
106 | "from ikomia.utils.displayIO import display\n",
107 | "\n",
108 | "from PIL import ImageShow\n",
109 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
110 | "\n",
111 | "# Inpect your result\n",
112 | "display(algo.get_image_with_graphics())"
113 | ]
114 | },
115 | {
116 | "cell_type": "markdown",
117 | "metadata": {},
118 | "source": [
119 | "## List of parameters\n",
120 | "\n",
121 | "- **model_name** (str) - default 'resnext50': Name of the pre-trained model. Other option 'resnext101'\n",
122 | "\n",
123 | "- **input_size** (int) - default '224': Size of the input image.\n",
124 | "\n",
125 | "- **model_weight_file** (str, optional): Path to model weights file.\n",
126 | "\n",
127 | "- **class_file** (str, optional): Path to text file (.txt) containing class names. (If using a custom model)\n"
128 | ]
129 | }
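,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use a custom model, you can combine the optional parameters above. The cell below is a hypothetical sketch: the file paths are placeholders to replace with your own weights and class file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical sketch: run ResNext with your own fine-tuned weights.\n",
"# Both paths below are placeholders, not real files.\n",
"algo.set_parameters({\n",
"    \"model_weight_file\": \"path/to/your/weights.pth\",\n",
"    \"class_file\": \"path/to/your/classes.txt\",\n",
"    \"input_size\": \"224\",\n",
"})\n",
"\n",
"# wf.run_on(path=\"path/to/your/image.jpg\")"
]
}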
130 | ],
131 | "metadata": {
132 | "kernelspec": {
133 | "display_name": "venv310",
134 | "language": "python",
135 | "name": "python3"
136 | },
137 | "language_info": {
138 | "codemirror_mode": {
139 | "name": "ipython",
140 | "version": 3
141 | },
142 | "file_extension": ".py",
143 | "mimetype": "text/x-python",
144 | "name": "python",
145 | "nbconvert_exporter": "python",
146 | "pygments_lexer": "ipython3",
147 | "version": "3.10.11"
148 | },
149 | "orig_nbformat": 4
150 | },
151 | "nbformat": 4,
152 | "nbformat_minor": 2
153 | }
154 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_SparseInst_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run SparseInst with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "**SparseInst** is an innovative approach to instance segmentation that deviates from traditional dense prediction methods. It introduces a more efficient and focused strategy by predicting a sparse set of instance candidates, thus enhancing computational efficiency and performance.\n",
27 | "\n",
28 | ""
29 | ]
30 | },
31 | {
32 | "attachments": {},
33 | "cell_type": "markdown",
34 | "metadata": {},
35 | "source": [
36 | "## Setup"
37 | ]
38 | },
39 | {
40 | "attachments": {},
41 | "cell_type": "markdown",
42 | "metadata": {},
43 | "source": [
44 | "You need to install Ikomia Python API with pip\n"
45 | ]
46 | },
47 | {
48 | "cell_type": "code",
49 | "execution_count": null,
50 | "metadata": {},
51 | "outputs": [],
52 | "source": [
53 | "!pip install ikomia"
54 | ]
55 | },
56 | {
57 | "cell_type": "markdown",
58 | "metadata": {},
59 | "source": [
60 | "---\n",
61 | "\n",
62 | "**-Google Colab ONLY- Restart runtime**\n",
63 | "\n",
64 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
65 | "\n",
66 | "---"
67 | ]
68 | },
69 | {
70 | "attachments": {},
71 | "cell_type": "markdown",
72 | "metadata": {},
73 | "source": [
74 | "## Run SparseInst on your image"
75 | ]
76 | },
77 | {
78 | "cell_type": "code",
79 | "execution_count": null,
80 | "metadata": {},
81 | "outputs": [],
82 | "source": [
83 | "from ikomia.dataprocess.workflow import Workflow\n",
84 | "from ikomia.utils import ik\n",
85 | "\n",
86 | "# Init your workflow\n",
87 | "wf = Workflow()\n",
88 | "\n",
89 | "# Add algorithm\n",
90 | "algo = wf.add_task(ik.infer_sparseinst(\n",
91 | " model_name='sparse_inst_r101_dcn_giam',\n",
92 | " conf_thres='0.4'\n",
93 | " ),\n",
94 | " auto_connect=True\n",
95 | ")\n",
96 | "\n",
97 | "# Run on your image\n",
98 | "wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_city_2.jpg?raw=true\")"
99 | ]
100 | },
101 | {
102 | "cell_type": "code",
103 | "execution_count": null,
104 | "metadata": {},
105 | "outputs": [],
106 | "source": [
107 | "from ikomia.utils.displayIO import display\n",
108 | "\n",
109 | "from PIL import ImageShow\n",
110 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
111 | "\n",
112 | "# Inpect your result\n",
113 | "display(algo.get_image_with_mask_and_graphics())"
114 | ]
115 | },
116 | {
117 | "cell_type": "markdown",
118 | "metadata": {},
119 | "source": [
120 | "## List of parameters\n",
121 | "\n",
122 | "**model_name** (str) - default 'sparse_inst_r50_giam_aug': Name of the SparseInst model. Additional models are available:\n",
123 | " - sparse_inst_r50vd_base\n",
124 | "\n",
125 | " - sparse_inst_r50_giam\n",
126 | "\n",
127 | " - sparse_inst_r50_giam_soft\n",
128 | "\n",
129 | " - sparse_inst_r50_giam_aug\n",
130 | "\n",
131 | " - sparse_inst_r50_dcn_giam_aug\n",
132 | "\n",
133 | " - sparse_inst_r50vd_giam_aug\n",
134 | "\n",
135 | " - sparse_inst_r50vd_dcn_giam_aug\n",
136 | "\n",
137 | " - sparse_inst_r101_giam\n",
138 | "\n",
139 | " - sparse_inst_r101_dcn_giam\n",
140 | "\n",
141 | " - sparse_inst_pvt_b1_giam\n",
142 | "\n",
143 | " - sparse_inst_pvt_b2_li_giam\n",
144 | "\n",
145 | "\n",
146 | "\n",
147 | "**conf_thres** (float) default '0.5': Confidence threshold for the prediction [0,1]\n",
148 | "\n",
149 | "**config_file** (str, optional): Path to the .yaml config file.\n",
150 | "\n",
151 | "**model_weight_file** (str, optional): Path to model weights file .pth"
152 | ]
153 | }
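,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick experiment, you can pick a lighter model from the list above and adjust the confidence threshold. The cell below is a minimal sketch reusing only the calls already shown in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: switch to a lighter ResNet-50 backbone and the default threshold\n",
"algo.set_parameters({\n",
"    \"model_name\": \"sparse_inst_r50_giam_aug\",\n",
"    \"conf_thres\": \"0.5\"\n",
"})\n",
"\n",
"# Re-run and inspect the new result\n",
"wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_city_2.jpg?raw=true\")\n",
"display(algo.get_image_with_mask_and_graphics())"
]
}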
154 | ],
155 | "metadata": {
156 | "kernelspec": {
157 | "display_name": "venv310",
158 | "language": "python",
159 | "name": "python3"
160 | },
161 | "language_info": {
162 | "codemirror_mode": {
163 | "name": "ipython",
164 | "version": 3
165 | },
166 | "file_extension": ".py",
167 | "mimetype": "text/x-python",
168 | "name": "python",
169 | "nbconvert_exporter": "python",
170 | "pygments_lexer": "ipython3",
171 | "version": "3.10.11"
172 | },
173 | "orig_nbformat": 4
174 | },
175 | "nbformat": 4,
176 | "nbformat_minor": 2
177 | }
178 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_Stable_Cascade_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "9XIBGlEafDQf"
7 | },
8 | "source": [
9 | "
\n",
10 | "\n",
11 | "\n"
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {
17 | "id": "DvouGVeYfDQg"
18 | },
19 | "source": [
20 | "# Create your image Stable Cascade\n",
21 | "\n",
22 | "Stable cascade is a diffusion model generating images from text descriptions. It's developed by Stability AI, the developer of Stable Diffusion and is known for being faster, more affordable, and potentially easier to use than previous models like Stable Diffusion XL (SDXL).\n",
23 | "\n",
24 | ""
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "-STLXz8ifDQh"
31 | },
32 | "source": [
33 | "## Setup"
34 | ]
35 | },
36 | {
37 | "cell_type": "markdown",
38 | "metadata": {},
39 | "source": [
40 | "Please use a GPU for this tutorial.\n",
41 | "\n",
42 | "**Note: Stable Cascade requires 17Gb VRAM to run**.\n",
43 | "\n",
44 | "In the menu, select \"Runtime\" then \"Change runtime type\", choose GPU in \"Hardware accelerator\".\n",
45 | "\n",
46 | "Check your GPU with the following command:"
47 | ]
48 | },
49 | {
50 | "cell_type": "code",
51 | "execution_count": null,
52 | "metadata": {},
53 | "outputs": [],
54 | "source": [
55 | "!nvidia-smi"
56 | ]
57 | },
58 | {
59 | "cell_type": "markdown",
60 | "metadata": {
61 | "id": "cV0_2S0SfDQh"
62 | },
63 | "source": [
64 | "You need to install Ikomia Python API with pip\n"
65 | ]
66 | },
67 | {
68 | "cell_type": "code",
69 | "execution_count": null,
70 | "metadata": {
71 | "colab": {
72 | "base_uri": "https://localhost:8080/",
73 | "height": 1000
74 | },
75 | "id": "cbvRlv_ufDQh",
76 | "outputId": "e3893478-603b-467b-96a3-b272121649b3"
77 | },
78 | "outputs": [],
79 | "source": [
80 | "!pip install ikomia"
81 | ]
82 | },
83 | {
84 | "cell_type": "markdown",
85 | "metadata": {
86 | "id": "-j3VbsAYfDQi"
87 | },
88 | "source": [
89 | "---\n",
90 | "\n",
91 | "**-Google Colab ONLY- Restart runtime**\n",
92 | "\n",
93 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
94 | "\n",
95 | "---"
96 | ]
97 | },
98 | {
99 | "cell_type": "markdown",
100 | "metadata": {},
101 | "source": [
102 | "## Run Stable Cascade"
103 | ]
104 | },
105 | {
106 | "cell_type": "code",
107 | "execution_count": 1,
108 | "metadata": {
109 | "id": "uBp98pWxiHXq"
110 | },
111 | "outputs": [],
112 | "source": [
113 | "from ikomia.dataprocess.workflow import Workflow\n",
114 | "from ikomia.utils.displayIO import display\n",
115 | "\n",
116 | "# Init your workflow\n",
117 | "wf = Workflow()\n",
118 | "\n",
119 | "# Add algorithm\n",
120 | "algo = wf.add_task(name = \"infer_stable_cascade\", auto_connect=False)\n",
121 | "\n",
122 | "algo.set_parameters({\n",
123 | " 'prompt': 'Anthropomorphic cat dressed as a pilot',\n",
124 | " 'negative_prompt': '',\n",
125 | " 'prior_num_inference_steps': '20',\n",
126 | " 'prior_guidance_scale': '4.0',\n",
127 | " 'num_inference_steps': '30',\n",
128 | " 'guidance_scale': '0.0',\n",
129 | " 'seed': '-1',\n",
130 | " 'width': '1024',\n",
131 | " 'height': '1024',\n",
132 | " 'num_images_per_prompt':'1',\n",
133 | " })\n",
134 | "\n",
135 | "# Generate your image\n",
136 | "wf.run()\n",
137 | "\n"
138 | ]
139 | },
140 | {
141 | "cell_type": "code",
142 | "execution_count": null,
143 | "metadata": {},
144 | "outputs": [],
145 | "source": [
146 | "from ikomia.utils.displayIO import display\n",
147 | "\n",
148 | "from PIL import ImageShow\n",
149 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
150 | "\n",
151 | "# Display the image\n",
152 | "display(algo.get_output(0).get_image())"
153 | ]
154 | }
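,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To keep the generated image, here is a minimal sketch (not part of the original tutorial) that writes it to disk with Pillow, assuming get_image() returns an RGB array as displayed above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from PIL import Image\n",
"\n",
"# Sketch: persist the generated image (assumed RGB array) as a PNG file\n",
"Image.fromarray(algo.get_output(0).get_image()).save(\"stable_cascade_result.png\")"
]
}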
155 | ],
156 | "metadata": {
157 | "accelerator": "GPU",
158 | "colab": {
159 | "gpuType": "V100",
160 | "provenance": []
161 | },
162 | "kernelspec": {
163 | "display_name": "venvapi",
164 | "language": "python",
165 | "name": "venvapi"
166 | },
167 | "language_info": {
168 | "codemirror_mode": {
169 | "name": "ipython",
170 | "version": 3
171 | },
172 | "file_extension": ".py",
173 | "mimetype": "text/x-python",
174 | "name": "python",
175 | "nbconvert_exporter": "python",
176 | "pygments_lexer": "ipython3",
177 | "version": "3.9.13"
178 | },
179 | "orig_nbformat": 4
180 | },
181 | "nbformat": 4,
182 | "nbformat_minor": 0
183 | }
184 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_Stable_Diffusion_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "9XIBGlEafDQf"
7 | },
8 | "source": [
9 | "
\n",
10 | "\n",
11 | "\n"
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {
17 | "id": "DvouGVeYfDQg"
18 | },
19 | "source": [
20 | "# Create your image Stable Diffusion\n",
21 | "\n",
22 | "Stable Diffusion is a highly advanced text-to-image model. When provided with a text prompt, it employs deep learning techniques to generate a corresponding AI-generated image. \n",
23 | "\n",
24 | "\n",
25 | ""
26 | ]
27 | },
28 | {
29 | "cell_type": "markdown",
30 | "metadata": {
31 | "id": "-STLXz8ifDQh"
32 | },
33 | "source": [
34 | "## Setup"
35 | ]
36 | },
37 | {
38 | "cell_type": "markdown",
39 | "metadata": {},
40 | "source": [
41 | "Please use a GPU for this tutorial.\n",
42 | "\n",
43 | "In the menu, select \"Runtime\" then \"Change runtime type\", choose GPU in \"Hardware accelerator\".\n",
44 | "\n",
45 | "Check your GPU with the following command:"
46 | ]
47 | },
48 | {
49 | "cell_type": "code",
50 | "execution_count": null,
51 | "metadata": {},
52 | "outputs": [],
53 | "source": [
54 | "!nvidia-smi"
55 | ]
56 | },
57 | {
58 | "cell_type": "markdown",
59 | "metadata": {
60 | "id": "cV0_2S0SfDQh"
61 | },
62 | "source": [
63 | "You need to install Ikomia Python API with pip\n"
64 | ]
65 | },
66 | {
67 | "cell_type": "code",
68 | "execution_count": null,
69 | "metadata": {
70 | "colab": {
71 | "base_uri": "https://localhost:8080/",
72 | "height": 1000
73 | },
74 | "id": "cbvRlv_ufDQh",
75 | "outputId": "e3893478-603b-467b-96a3-b272121649b3"
76 | },
77 | "outputs": [],
78 | "source": [
79 | "!pip install ikomia"
80 | ]
81 | },
82 | {
83 | "cell_type": "markdown",
84 | "metadata": {
85 | "id": "-j3VbsAYfDQi"
86 | },
87 | "source": [
88 | "---\n",
89 | "\n",
90 | "**-Google Colab ONLY- Restart runtime**\n",
91 | "\n",
92 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
93 | "\n",
94 | "---"
95 | ]
96 | },
97 | {
98 | "cell_type": "markdown",
99 | "metadata": {},
100 | "source": [
101 | "## Run Stable Diffusion xl"
102 | ]
103 | },
104 | {
105 | "cell_type": "code",
106 | "execution_count": 1,
107 | "metadata": {
108 | "id": "uBp98pWxiHXq"
109 | },
110 | "outputs": [],
111 | "source": [
112 | "from ikomia.dataprocess.workflow import Workflow\n",
113 | "from ikomia.utils import ik\n",
114 | "\n",
115 | "\n",
116 | "# Init your workflow\n",
117 | "wf = Workflow()\n",
118 | "\n",
119 | "# Add algorithm\n",
120 | "stable_diff = wf.add_task(ik.infer_hf_stable_diffusion(\n",
121 | " model_name='stabilityai/stable-diffusion-xl-base-1.0',\n",
122 | " prompt='Super Mario style jumping, vibrant, cute, cartoony, fantasy, playful, reminiscent of Super Mario series',\n",
123 | " guidance_scale='7.5',\n",
124 | " negative_prompt='low resolution, ugly, deformed',\n",
125 | " num_inference_steps='50',\n",
126 | " seed='19632893',\n",
127 | " use_refiner='True'\n",
128 | " )\n",
129 | ")\n",
130 | "\n",
131 | "# Run your workflow\n",
132 | "wf.run()"
133 | ]
134 | },
135 | {
136 | "cell_type": "code",
137 | "execution_count": null,
138 | "metadata": {},
139 | "outputs": [],
140 | "source": [
141 | "from ikomia.utils.displayIO import display\n",
142 | "\n",
143 | "from PIL import ImageShow\n",
144 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
145 | "\n",
146 | "# Display the image\n",
147 | "display(stable_diff.get_output(0).get_image())"
148 | ]
149 | }
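,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the generation is seeded, you can explore variations of the same prompt by changing only the seed. The cell below is a minimal sketch; it assumes set_parameters() can update a subset of parameters, as done elsewhere in these notebooks, and each wf.run() is a full (and therefore slow) generation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: generate three variations of the same prompt with different seeds\n",
"for seed in [\"1\", \"42\", \"1234\"]:\n",
"    stable_diff.set_parameters({\"seed\": seed})\n",
"    wf.run()\n",
"    display(stable_diff.get_output(0).get_image())"
]
}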
150 | ],
151 | "metadata": {
152 | "accelerator": "GPU",
153 | "colab": {
154 | "gpuType": "V100",
155 | "provenance": []
156 | },
157 | "kernelspec": {
158 | "display_name": "venvapi",
159 | "language": "python",
160 | "name": "venvapi"
161 | },
162 | "language_info": {
163 | "codemirror_mode": {
164 | "name": "ipython",
165 | "version": 3
166 | },
167 | "file_extension": ".py",
168 | "mimetype": "text/x-python",
169 | "name": "python",
170 | "nbconvert_exporter": "python",
171 | "pygments_lexer": "ipython3",
172 | "version": "3.9.13"
173 | },
174 | "orig_nbformat": 4
175 | },
176 | "nbformat": 4,
177 | "nbformat_minor": 0
178 | }
179 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_SwinIR_super_resolution_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run SwinIR with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "cell_type": "markdown",
23 | "metadata": {},
24 | "source": [
25 | "SwinIR is an open-source model that ranks among the best for various super-resolution tasks, showcasing remarkable effectiveness, showcasing remarkable effectiveness and adaptability across diverse real-world degradation scenarios.\n",
26 | ".jpg)\n"
27 | ]
28 | },
29 | {
30 | "attachments": {},
31 | "cell_type": "markdown",
32 | "metadata": {},
33 | "source": [
34 | "## Setup"
35 | ]
36 | },
37 | {
38 | "cell_type": "markdown",
39 | "metadata": {},
40 | "source": [
41 | "Please use a GPU for this tutorial.\n",
42 | "\n",
43 | "In the Google colab menu, select \"Runtime\" then \"Change runtime type\", choose GPU in \"Hardware accelerator\".\n",
44 | "\n",
45 | "Check your GPU with the following command:"
46 | ]
47 | },
48 | {
49 | "cell_type": "code",
50 | "execution_count": null,
51 | "metadata": {},
52 | "outputs": [],
53 | "source": [
54 | "!nvidia-smi"
55 | ]
56 | },
57 | {
58 | "attachments": {},
59 | "cell_type": "markdown",
60 | "metadata": {},
61 | "source": [
62 | "First, you need to install Ikomia API pip package."
63 | ]
64 | },
65 | {
66 | "cell_type": "code",
67 | "execution_count": null,
68 | "metadata": {},
69 | "outputs": [],
70 | "source": [
71 | "!pip install ikomia"
72 | ]
73 | },
74 | {
75 | "cell_type": "markdown",
76 | "metadata": {},
77 | "source": [
78 | "---\n",
79 | "\n",
80 | "**-Google Colab ONLY- Restart runtime**\n",
81 | "\n",
82 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
83 | "\n",
84 | "---"
85 | ]
86 | },
87 | {
88 | "attachments": {},
89 | "cell_type": "markdown",
90 | "metadata": {},
91 | "source": [
92 | "## Run the face detector and blurring on your image"
93 | ]
94 | },
95 | {
96 | "cell_type": "code",
97 | "execution_count": null,
98 | "metadata": {},
99 | "outputs": [],
100 | "source": [
101 | "from ikomia.dataprocess.workflow import Workflow\n",
102 | "from ikomia.utils.displayIO import display\n",
103 | "\n",
104 | "\n",
105 | "# Init your workflow\n",
106 | "wf = Workflow() \n",
107 | "\n",
108 | "\n",
109 | "# Add the SwinIR algorithm\n",
110 | "swinir = wf.add_task(name=\"infer_swinir_super_resolution\", auto_connect=True)\n",
111 | "\n",
112 | "swinir.set_parameters({\n",
113 | " \"use_gan\": \"True\",\n",
114 | " \"large_model\": \"True\",\n",
115 | " \"cuda\": \"True\",\n",
116 | " \"tile\": \"256\",\n",
117 | " \"overlap_ratio\": \"0.1\",\n",
118 | " \"scale\": \"4\"\n",
119 | "})\n",
120 | "\n",
121 | "# Run on your image \n",
122 | "wf.run_on(url=\"https://github.com/JingyunLiang/SwinIR/blob/main/figs/ETH_LR.png?raw=true\")"
123 | ]
124 | },
125 | {
126 | "cell_type": "code",
127 | "execution_count": null,
128 | "metadata": {},
129 | "outputs": [],
130 | "source": [
131 | "# Display image on Google Colab\n",
132 | "from PIL import ImageShow\n",
133 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
134 | "\n",
135 | "display(swinir.get_input(0).get_image())"
136 | ]
137 | },
138 | {
139 | "cell_type": "code",
140 | "execution_count": null,
141 | "metadata": {},
142 | "outputs": [],
143 | "source": [
144 | "display(swinir.get_output(0).get_image())"
145 | ]
146 | }
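,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, you can compare the input and output shapes: with \"scale\" set to \"4\" above, the output height and width should be four times those of the input."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: verify the upscaling factor on the array shapes\n",
"img_in = swinir.get_input(0).get_image()\n",
"img_out = swinir.get_output(0).get_image()\n",
"print(\"input shape :\", img_in.shape)\n",
"print(\"output shape:\", img_out.shape)  # expect 4x height and width here"
]
}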
147 | ],
148 | "metadata": {
149 | "kernelspec": {
150 | "display_name": "venvapi",
151 | "language": "python",
152 | "name": "venvapi"
153 | },
154 | "language_info": {
155 | "codemirror_mode": {
156 | "name": "ipython",
157 | "version": 3
158 | },
159 | "file_extension": ".py",
160 | "mimetype": "text/x-python",
161 | "name": "python",
162 | "nbconvert_exporter": "python",
163 | "pygments_lexer": "ipython3",
164 | "version": "3.9.13"
165 | },
166 | "orig_nbformat": 4
167 | },
168 | "nbformat": 4,
169 | "nbformat_minor": 2
170 | }
171 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_YOLACT_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run YOLACT with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "**YOLACT**, which stands for \"You Only Look At Coefficients,\" is a groundbreaking approach in the field of computer vision, particularly for real-time instance segmentation. This innovative technique, detailed in the paper titled \"YOLACT: Real-time Instance Segmentation,\" has been a game-changer due to its unique blend of efficiency and accuracy. \n",
27 | ""
28 | ]
29 | },
30 | {
31 | "attachments": {},
32 | "cell_type": "markdown",
33 | "metadata": {},
34 | "source": [
35 | "## Setup"
36 | ]
37 | },
38 | {
39 | "attachments": {},
40 | "cell_type": "markdown",
41 | "metadata": {},
42 | "source": [
43 | "You need to install Ikomia Python API with pip\n"
44 | ]
45 | },
46 | {
47 | "cell_type": "code",
48 | "execution_count": null,
49 | "metadata": {},
50 | "outputs": [],
51 | "source": [
52 | "!pip install ikomia"
53 | ]
54 | },
55 | {
56 | "cell_type": "markdown",
57 | "metadata": {},
58 | "source": [
59 | "---\n",
60 | "\n",
61 | "**-Google Colab ONLY- Restart runtime**\n",
62 | "\n",
63 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
64 | "\n",
65 | "---"
66 | ]
67 | },
68 | {
69 | "attachments": {},
70 | "cell_type": "markdown",
71 | "metadata": {},
72 | "source": [
73 | "## Run YOLACT on your image"
74 | ]
75 | },
76 | {
77 | "cell_type": "code",
78 | "execution_count": null,
79 | "metadata": {},
80 | "outputs": [],
81 | "source": [
82 | "from ikomia.dataprocess.workflow import Workflow\n",
83 | "\n",
84 | "# Init your workflow\n",
85 | "wf = Workflow()\n",
86 | "\n",
87 | "# Add algorithm\n",
88 | "yolact = wf.add_task(name=\"infer_yolact\", auto_connect=True)\n",
89 | "\n",
90 | "yolact.set_parameters({\n",
91 | " \"conf_thres\": \"0.5\",\n",
92 | " \"top_k\": \"10\",\n",
93 | "})\n",
94 | "\n",
95 | "# Run on your image \n",
96 | "wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_zebra.jpeg?raw=true\")"
97 | ]
98 | },
99 | {
100 | "cell_type": "code",
101 | "execution_count": null,
102 | "metadata": {},
103 | "outputs": [],
104 | "source": [
105 | "from ikomia.utils.displayIO import display\n",
106 | "\n",
107 | "from PIL import ImageShow\n",
108 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
109 | "\n",
110 | "# Inpect your result\n",
111 | "display(yolact.get_image_with_mask_and_graphics())"
112 | ]
113 | }
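,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can play with the two parameters to filter the result. The cell below is a minimal sketch that keeps only the three most confident instances by raising conf_thres and lowering top_k, then re-runs on the same image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: stricter filtering of the detected instances\n",
"yolact.set_parameters({\n",
"    \"conf_thres\": \"0.7\",\n",
"    \"top_k\": \"3\",\n",
"})\n",
"\n",
"wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_zebra.jpeg?raw=true\")\n",
"display(yolact.get_image_with_mask_and_graphics())"
]
}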
114 | ],
115 | "metadata": {
116 | "kernelspec": {
117 | "display_name": "venvapi",
118 | "language": "python",
119 | "name": "venvapi"
120 | },
121 | "language_info": {
122 | "codemirror_mode": {
123 | "name": "ipython",
124 | "version": 3
125 | },
126 | "file_extension": ".py",
127 | "mimetype": "text/x-python",
128 | "name": "python",
129 | "nbconvert_exporter": "python",
130 | "pygments_lexer": "ipython3",
131 | "version": "3.9.13"
132 | },
133 | "orig_nbformat": 4
134 | },
135 | "nbformat": 4,
136 | "nbformat_minor": 2
137 | }
138 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_YOLOP_v2_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# Easy Panoptic Driving Perception with YOLOP v2\n",
19 | "\n",
20 | "**YOLOP v2** is a highly efficient multi-task network designed to carry out **three critical functions for autonomous driving** systems: **detecting traffic objects**, **segmenting drivable areas** of the road, and **identifying lane** markings, all in **real-time**.\n",
21 | "\n",
22 | ""
23 | ]
24 | },
25 | {
26 | "attachments": {},
27 | "cell_type": "markdown",
28 | "metadata": {},
29 | "source": [
30 | "## Setup"
31 | ]
32 | },
33 | {
34 | "attachments": {},
35 | "cell_type": "markdown",
36 | "metadata": {},
37 | "source": [
38 | "You need to install Ikomia Python API with pip\n"
39 | ]
40 | },
41 | {
42 | "cell_type": "code",
43 | "execution_count": null,
44 | "metadata": {},
45 | "outputs": [],
46 | "source": [
47 | "!pip install ikomia"
48 | ]
49 | },
50 | {
51 | "attachments": {},
52 | "cell_type": "markdown",
53 | "metadata": {},
54 | "source": [
55 | "## Run YOLOP v2 on your image"
56 | ]
57 | },
58 | {
59 | "cell_type": "markdown",
60 | "metadata": {},
61 | "source": [
62 | "---\n",
63 | "\n",
64 | "**-Google Colab ONLY- Restart runtime after the first run of the workflow below** \n",
65 | "\n",
66 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
67 | "\n",
68 | "---"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": null,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "from ikomia.dataprocess.workflow import Workflow\n",
78 | "from ikomia.utils.displayIO import display\n",
79 | "\n",
80 | "\n",
81 | "# Init your workflow\n",
82 | "wf = Workflow()\n",
83 | "\n",
84 | "# Add algorithm\n",
85 | "algo = wf.add_task(name=\"infer_yolop_v2\", auto_connect=True)\n",
86 | "\n",
87 | "algo.set_parameters({\n",
88 | " \"input_size\": \"640\",\n",
89 | " \"conf_thres\": \"0.2\",\n",
90 | " \"iou_thres\": \"0.45\",\n",
91 | " \"object\": \"True\",\n",
92 | " \"road_lane\": \"True\"\n",
93 | "})\n",
94 | "\n",
95 | "# Run on your image \n",
96 | "wf.run_on(url=\"https://www.cnet.com/a/img/resize/4797a22dd672697529df19c2658364a85e0f9eb4/hub/2023/02/14/9406d927-a754-4fa9-8251-8b1ccd010d5a/ring-car-cam-2023-02-14-14h09m20s720.png?auto=webp&width=1920\")\n",
97 | "\n",
98 | "# # Inpect your result\n",
99 | "display(algo.get_image_with_graphics())\n",
100 | "display(algo.get_output(0).get_overlay_mask())"
101 | ]
102 | },
103 | {
104 | "cell_type": "code",
105 | "execution_count": null,
106 | "metadata": {},
107 | "outputs": [],
108 | "source": [
109 | "from ikomia.utils.displayIO import display\n",
110 | "from PIL import ImageShow\n",
111 | "\n",
112 | "# Display the keypoints\n",
113 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
114 | "\n",
115 | "# Display results\n",
116 | "display(algo.get_image_with_graphics())"
117 | ]
118 | },
119 | {
120 | "cell_type": "code",
121 | "execution_count": null,
122 | "metadata": {},
123 | "outputs": [],
124 | "source": [
125 | "display(algo.get_output(0).get_overlay_mask())"
126 | ]
127 | }
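,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The \"object\" and \"road_lane\" flags select which of the three tasks are run. As a minimal sketch, you can disable object detection and keep only the drivable-area and lane segmentation, then re-run on the same frame."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: segmentation only, no traffic-object detection\n",
"algo.set_parameters({\n",
"    \"object\": \"False\",\n",
"    \"road_lane\": \"True\"\n",
"})\n",
"\n",
"wf.run_on(url=\"https://www.cnet.com/a/img/resize/4797a22dd672697529df19c2658364a85e0f9eb4/hub/2023/02/14/9406d927-a754-4fa9-8251-8b1ccd010d5a/ring-car-cam-2023-02-14-14h09m20s720.png?auto=webp&width=1920\")\n",
"display(algo.get_output(0).get_overlay_mask())"
]
}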
128 | ],
129 | "metadata": {
130 | "kernelspec": {
131 | "display_name": "venvapi",
132 | "language": "python",
133 | "name": "venvapi"
134 | },
135 | "language_info": {
136 | "codemirror_mode": {
137 | "name": "ipython",
138 | "version": 3
139 | },
140 | "file_extension": ".py",
141 | "mimetype": "text/x-python",
142 | "name": "python",
143 | "nbconvert_exporter": "python",
144 | "pygments_lexer": "ipython3",
145 | "version": "3.9.13"
146 | },
147 | "orig_nbformat": 4
148 | },
149 | "nbformat": 4,
150 | "nbformat_minor": 2
151 | }
152 |
--------------------------------------------------------------------------------
/examples/HOWTO_run_YOLO_v8_pose_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# Easy YOLOv8 Pose Estimation\n",
19 | "\n",
20 | "**YOLOv8 Pose Estimation** is a cutting-edge technology within the field of computer vision, specifically tailored for identifying and mapping human body keypoints in images or video frames. \n",
21 | "\n",
22 | ""
23 | ]
24 | },
25 | {
26 | "attachments": {},
27 | "cell_type": "markdown",
28 | "metadata": {},
29 | "source": [
30 | "## Setup"
31 | ]
32 | },
33 | {
34 | "attachments": {},
35 | "cell_type": "markdown",
36 | "metadata": {},
37 | "source": [
38 | "You need to install Ikomia Python API with pip\n"
39 | ]
40 | },
41 | {
42 | "cell_type": "code",
43 | "execution_count": null,
44 | "metadata": {},
45 | "outputs": [],
46 | "source": [
47 | "!pip install ikomia"
48 | ]
49 | },
50 | {
51 | "attachments": {},
52 | "cell_type": "markdown",
53 | "metadata": {},
54 | "source": [
55 | "## Run YOLOv8 Pose Estimation on your image"
56 | ]
57 | },
58 | {
59 | "cell_type": "markdown",
60 | "metadata": {},
61 | "source": [
62 | "---\n",
63 | "\n",
64 | "**-Google Colab ONLY- Restart runtime after the first run of the workflow below** \n",
65 | "\n",
66 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
67 | "\n",
68 | "---"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": null,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "from ikomia.dataprocess.workflow import Workflow\n",
78 | "from ikomia.utils import ik\n",
79 | "\n",
80 | "\n",
81 | "# Init your workflow\n",
82 | "wf = Workflow()\n",
83 | "\n",
84 | "# Add algorithm\n",
85 | "algo = wf.add_task(ik.infer_yolo_v8_pose_estimation(\n",
86 | " conf_thres='0.5',\n",
87 | " iou_thres='0.25',\n",
88 | " input_size='640' ), auto_connect=True)\n",
89 | "\n",
90 | "# Run on your image \n",
91 | "wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_runners.jpg?raw=true\")"
92 | ]
93 | },
94 | {
95 | "cell_type": "code",
96 | "execution_count": null,
97 | "metadata": {},
98 | "outputs": [],
99 | "source": [
100 | "from ikomia.utils.displayIO import display\n",
101 | "from PIL import ImageShow\n",
102 | "\n",
103 | "# Display the keypoints\n",
104 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
105 | "\n",
106 | "# Display results\n",
107 | "display(algo.get_image_with_graphics())"
108 | ]
109 | }
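,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To keep the annotated image, here is a minimal sketch (not part of the original tutorial). It assumes get_image_with_graphics() returns an RGB array, hence the conversion to BGR before writing with OpenCV."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cv2\n",
"\n",
"# Sketch: save the image annotated with the detected keypoints\n",
"annotated = algo.get_image_with_graphics()\n",
"cv2.imwrite(\"pose_result.jpg\", cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR))"
]
}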
110 | ],
111 | "metadata": {
112 | "kernelspec": {
113 | "display_name": "venvapi",
114 | "language": "python",
115 | "name": "venvapi"
116 | },
117 | "language_info": {
118 | "codemirror_mode": {
119 | "name": "ipython",
120 | "version": 3
121 | },
122 | "file_extension": ".py",
123 | "mimetype": "text/x-python",
124 | "name": "python",
125 | "nbconvert_exporter": "python",
126 | "pygments_lexer": "ipython3",
127 | "version": "3.9.13"
128 | },
129 | "orig_nbformat": 4
130 | },
131 | "nbformat": 4,
132 | "nbformat_minor": 2
133 | }
134 |
--------------------------------------------------------------------------------
/examples/HOWTO_use_ByteTrack_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "9XIBGlEafDQf"
7 | },
8 | "source": [
9 | "
\n",
10 | "\n",
11 | "\n"
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {
17 | "id": "DvouGVeYfDQg"
18 | },
19 | "source": [
20 | "# Easy Object Tracking with ByteTrack\n",
21 | "\n",
22 | "**ByteTrack** is a Computer Vision algorithm specifically designed for the task of multi-object tracking (MOT). Using ByteTrack, you can assign unique identifiers to objects within a video, enabling the consistent and accurate tracking of each object over time.\n",
23 | "\n",
24 | ""
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "-STLXz8ifDQh"
31 | },
32 | "source": [
33 | "## Setup"
34 | ]
35 | },
36 | {
37 | "cell_type": "markdown",
38 | "metadata": {
39 | "id": "cV0_2S0SfDQh"
40 | },
41 | "source": [
42 | "You need to install Ikomia Python API with pip\n"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": null,
48 | "metadata": {
49 | "colab": {
50 | "base_uri": "https://localhost:8080/",
51 | "height": 1000
52 | },
53 | "id": "cbvRlv_ufDQh",
54 | "outputId": "e3893478-603b-467b-96a3-b272121649b3"
55 | },
56 | "outputs": [],
57 | "source": [
58 | "!pip install ikomia"
59 | ]
60 | },
61 | {
62 | "cell_type": "markdown",
63 | "metadata": {
64 | "id": "-j3VbsAYfDQi"
65 | },
66 | "source": [
67 | "---\n",
68 | "\n",
69 | "*Note: The script is not compatible with Google Colab as they have disabled cv2.imshow()*\n",
70 | "\n",
71 | "---"
72 | ]
73 | },
74 | {
75 | "cell_type": "markdown",
76 | "metadata": {},
77 | "source": [
78 | "## Download video and cut example video"
79 | ]
80 | },
81 | {
82 | "cell_type": "code",
83 | "execution_count": 1,
84 | "metadata": {
85 | "id": "uBp98pWxiHXq"
86 | },
87 | "outputs": [],
88 | "source": [
89 | "import requests\n",
90 | "import cv2\n",
91 | "\n",
92 | "url = \"https://www.pexels.com/download/video/12116094/?fps=29.97&h=720&w=1280\"\n",
93 | "\n",
94 | "# Define headers to mimic a browser request\n",
95 | "headers = {\n",
96 | " \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3\"\n",
97 | "}\n",
98 | "\n",
99 | "response = requests.get(url, headers=headers, stream=True)\n",
100 | "with open(\"video.mp4\", \"wb\") as f:\n",
101 | " for chunk in response.iter_content(chunk_size=1024):\n",
102 | " f.write(chunk)\n",
103 | "\n",
104 | "# Replace with the path to your downloaded video\n",
105 | "video_path = \"video.mp4\"\n",
106 | "\n",
107 | "# Open the video\n",
108 | "cap = cv2.VideoCapture(video_path)\n",
109 | "\n",
110 | "# Check if the video has opened successfully\n",
111 | "if not cap.isOpened():\n",
112 | " print(\"Error: Could not open video.\")\n",
113 | " exit()\n",
114 | "\n",
115 | "# Get video properties\n",
116 | "fps = cap.get(cv2.CAP_PROP_FPS)\n",
117 | "frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))\n",
118 | "duration = frame_count / fps\n",
119 | "cut_frame = int(frame_count / 4) # Frame to cut the video at\n",
120 | "\n",
121 | "# Define the codec and create VideoWriter object\n",
122 | "fourcc = cv2.VideoWriter_fourcc(*'mp4v') \n",
123 | "out = cv2.VideoWriter('short_video.mp4', fourcc, fps, (int(cap.get(3)), int(cap.get(4))))\n",
124 | "\n",
125 | "# Read and write frames until the cut point\n",
126 | "frame_num = 0\n",
127 | "while True:\n",
128 | " ret, frame = cap.read()\n",
129 | " if not ret or frame_num == cut_frame:\n",
130 | " break\n",
131 | " out.write(frame)\n",
132 | " frame_num += 1\n",
133 | "\n",
134 | "# Release everything\n",
135 | "cap.release()\n",
136 | "out.release()"
137 | ]
138 | },
139 | {
140 | "cell_type": "markdown",
141 | "metadata": {},
142 | "source": [
143 | "## Run YOLOv8 and Bytetrack on your video"
144 | ]
145 | },
146 | {
147 | "cell_type": "code",
148 | "execution_count": null,
149 | "metadata": {
150 | "colab": {
151 | "base_uri": "https://localhost:8080/",
152 | "height": 1000
153 | },
154 | "id": "Q8dA1vWbfDQi",
155 | "outputId": "6e55cb62-36d7-4ee8-840d-7289582d6b71"
156 | },
157 | "outputs": [],
158 | "source": [
159 | "from ikomia.dataprocess.workflow import Workflow\n",
160 | "from ikomia.utils.displayIO import display\n",
161 | "import cv2\n",
162 | "\n",
163 | "\n",
164 | "# Replace 'your_video_path.mp4' with the actual video file path\n",
165 | "input_video_path = 'short_video.mp4'\n",
166 | "output_video_path = 'bytetrack_output_video.avi'\n",
167 | "\n",
168 | "# Init your workflow\n",
169 | "wf = Workflow()\n",
170 | "\n",
171 | "# Add object detection algorithm\n",
172 | "detector = wf.add_task(name=\"infer_yolo_v8\", auto_connect=True)\n",
173 | "\n",
174 | "# Add ByteTrack tracking algorithm\n",
175 | "tracking = wf.add_task(name=\"infer_bytetrack\", auto_connect=True)\n",
176 | "\n",
177 | "tracking.set_parameters({\n",
178 | " \"categories\": \"all\",\n",
179 | " \"conf_thres\": \"0.5\",\n",
180 | "})\n",
181 | "\n",
182 | "# Open the video file\n",
183 | "stream = cv2.VideoCapture(input_video_path)\n",
184 | "if not stream.isOpened():\n",
185 | " print(\"Error: Could not open video.\")\n",
186 | " exit()\n",
187 | "\n",
188 | "# Get video properties for the output\n",
189 | "frame_width = int(stream.get(cv2.CAP_PROP_FRAME_WIDTH))\n",
190 | "frame_height = int(stream.get(cv2.CAP_PROP_FRAME_HEIGHT))\n",
191 | "frame_rate = stream.get(cv2.CAP_PROP_FPS)\n",
192 | "\n",
193 | "# Define the codec and create VideoWriter object\n",
194 | "# The 'XVID' codec is widely supported and provides good quality\n",
195 | "fourcc = cv2.VideoWriter_fourcc(*'XVID')\n",
196 | "out = cv2.VideoWriter(output_video_path, fourcc, frame_rate, (frame_width, frame_height))\n",
197 | "\n",
198 | "while True:\n",
199 | " # Read image from stream\n",
200 | " ret, frame = stream.read()\n",
201 | "\n",
202 | " # Test if the video has ended or there is an error\n",
203 | " if not ret:\n",
204 | " print(\"Info: End of video or error.\")\n",
205 | " break\n",
206 | "\n",
207 | " # Run the workflow on current frame\n",
208 | " wf.run_on(array=frame)\n",
209 | "\n",
210 | " # Get results\n",
211 | " image_out = tracking.get_output(0)\n",
212 | " obj_detect_out = tracking.get_output(1)\n",
213 | "\n",
214 | " # Convert the result to BGR color space for displaying\n",
215 | " img_out = image_out.get_image_with_graphics(obj_detect_out)\n",
216 | " img_res = cv2.cvtColor(img_out, cv2.COLOR_RGB2BGR)\n",
217 | "\n",
218 | " # Save the resulting frame\n",
219 | " out.write(img_out)\n",
220 | "\n",
221 | " # Display\n",
222 | " display(img_res, title=\"ByteTrack\", viewer=\"opencv\")\n",
223 | "\n",
224 | " # Press 'q' to quit the video processing\n",
225 | " if cv2.waitKey(1) & 0xFF == ord('q'):\n",
226 | " break\n",
227 | "\n",
228 | "# After the loop release everything\n",
229 | "stream.release()\n",
230 | "out.release()\n",
231 | "cv2.destroyAllWindows()"
232 | ]
233 | }
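,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tracker is configured with \"categories\": \"all\" above. The cell below is a hypothetical sketch: it assumes the \"categories\" parameter also accepts specific class names (check the algorithm card on Ikomia HUB to confirm); if so, you could restrict tracking to people only before processing the video."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical sketch: track a single category instead of all of them.\n",
"# Assumption: \"categories\" accepts class names, not only \"all\".\n",
"tracking.set_parameters({\n",
"    \"categories\": \"person\",\n",
"    \"conf_thres\": \"0.5\",\n",
"})"
]
}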
234 | ],
235 | "metadata": {
236 | "accelerator": "GPU",
237 | "colab": {
238 | "gpuType": "V100",
239 | "provenance": []
240 | },
241 | "kernelspec": {
242 | "display_name": "venvapi",
243 | "language": "python",
244 | "name": "venvapi"
245 | },
246 | "language_info": {
247 | "codemirror_mode": {
248 | "name": "ipython",
249 | "version": 3
250 | },
251 | "file_extension": ".py",
252 | "mimetype": "text/x-python",
253 | "name": "python",
254 | "nbconvert_exporter": "python",
255 | "pygments_lexer": "ipython3",
256 | "version": "3.9.13"
257 | },
258 | "orig_nbformat": 4
259 | },
260 | "nbformat": 4,
261 | "nbformat_minor": 0
262 | }
263 |
--------------------------------------------------------------------------------
/examples/HOWTO_use_Canny_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run Canny edge detection (OpenCV) with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "Edge detection is an essential image processing technique commonly employed in various computer vision applications, including data extraction, image segmentation, feature extraction, and pattern recognition. This technique helps reduce the amount of noise and irrelevant details in an image while retaining its structural information. As a result, edge detection plays a crucial role in enhancing the accuracy and performance of computer vision algorithms. Whether you're working on object detection, image recognition, or other computer vision tasks, edge detection is a critical step in your workflow.\n",
27 | "\n",
28 | "Canny edge detection is widely regarded as one of the most popular and effective methods for edge detection in computer vision. It uses a multi-stage algorithm to detect a wide range of edges in images. This algorithm can be broken down into four basic steps:\n",
29 | "1. Noise redution\n",
30 | "2. Gradient calculation\n",
31 | "3. Non-maximum suppression\n",
32 | "4. Hysteresis thresholding"
33 | ]
34 | },
35 | {
36 | "attachments": {},
37 | "cell_type": "markdown",
38 | "metadata": {},
39 | "source": [
40 | "## Setup"
41 | ]
42 | },
43 | {
44 | "attachments": {},
45 | "cell_type": "markdown",
46 | "metadata": {},
47 | "source": [
48 | "You need to install Ikomia Python API with pip\n"
49 | ]
50 | },
51 | {
52 | "cell_type": "code",
53 | "execution_count": null,
54 | "metadata": {},
55 | "outputs": [],
56 | "source": [
57 | "!pip install ikomia"
58 | ]
59 | },
60 | {
61 | "attachments": {},
62 | "cell_type": "markdown",
63 | "metadata": {},
64 | "source": [
65 | "## Run the Canny edge dectector on your image"
66 | ]
67 | },
68 | {
69 | "cell_type": "code",
70 | "execution_count": null,
71 | "metadata": {},
72 | "outputs": [],
73 | "source": [
74 | "from ikomia.dataprocess.workflow import Workflow\n",
75 | "from ikomia.utils.displayIO import display\n",
76 | "\n",
77 | "# Init your workflow\n",
78 | "wf = Workflow()\n",
79 | "\n",
80 | "# Add the Canny Edge Detector\n",
81 | "canny = wf.add_task(name=\"ocv_canny\", auto_connect=True)\n",
82 | "\n",
83 | "\n",
84 | "canny.set_parameters({\n",
85 | " \"threshold1\":\"0\",\n",
86 | " \"threshold2\":\"255\",\n",
87 | " \"apertureSize\": \"3\",\n",
88 | " \"L2gradient\": \"0\",\n",
89 | "})\n",
90 | "\n",
91 | "# Run on your image \n",
92 | "# wf.run_on(path=\"path/to/your/image.png\")\n",
93 | "wf.run_on(url=\"https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg\")\n",
94 | "\n",
95 | "\n",
96 | "# Inspect your results\n",
97 | "display(canny.get_input(0).get_image())\n",
98 | "display(canny.get_output(0).get_image())"
99 | ]
100 | },
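{
"cell_type": "markdown",
"metadata": {},
"source": [
"The two thresholds drive the hysteresis step (step 4 above): pixels with a gradient above threshold2 become strong edges, and pixels between threshold1 and threshold2 are kept only if they connect to a strong edge. The cell below is a minimal sketch that tightens the band so you can compare with the permissive 0/255 setting used above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: stricter hysteresis thresholds -> fewer, cleaner edges\n",
"canny.set_parameters({\n",
"    \"threshold1\": \"100\",\n",
"    \"threshold2\": \"200\",\n",
"})\n",
"\n",
"wf.run_on(url=\"https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg\")\n",
"display(canny.get_output(0).get_image())"
]
},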
101 | {
102 | "attachments": {},
103 | "cell_type": "markdown",
104 | "metadata": {},
105 | "source": [
106 | "## Run the Canny edge dectector on your webcam (Jupyter notebook only)"
107 | ]
108 | },
109 | {
110 | "cell_type": "code",
111 | "execution_count": null,
112 | "metadata": {},
113 | "outputs": [],
114 | "source": [
115 | "from ikomia.dataprocess.workflow import Workflow\n",
116 | "from ikomia.utils.displayIO import display\n",
117 | "import cv2\n",
118 | "\n",
119 | "stream = cv2.VideoCapture(0)\n",
120 | "\n",
121 | "# Init the workflow\n",
122 | "wf = Workflow()\n",
123 | "\n",
124 | "# Add color conversion to workflow\n",
125 | "canny = wf.add_task(name=\"ocv_canny\", auto_connect=True)\n",
126 | "\n",
127 | "\n",
128 | "canny.set_parameters({\n",
129 | " \"threshold1\":\"0\",\n",
130 | " \"threshold2\":\"255\",\n",
131 | " \"apertureSize\": \"3\",\n",
132 | " \"L2gradient\": \"0\",\n",
133 | "})\n",
134 | "\n",
135 | "\n",
136 | "while True:\n",
137 | " # Read image from stream\n",
138 | " ret, frame = stream.read()\n",
139 | "\n",
140 | " # Test if streaming is OK\n",
141 | " if not ret:\n",
142 | " continue\n",
143 | " \n",
144 | " # Run workflow on image\n",
145 | " wf.run_on(frame)\n",
146 | " \n",
147 | " # Display results from \"cvt\"\n",
148 | " display(canny.get_output(0).get_image(), title=\"Demo - press 'q' to quit \", viewer=\"opencv\")\n",
149 | "\n",
150 | " # Press 'q' to quit the streaming process\n",
151 | " if cv2.waitKey(1) & 0xFF == ord('q'):\n",
152 | " break\n",
153 | "\n",
154 | "# After the loop release the stream object\n",
155 | "stream.release()\n",
156 | "# Destroy all windows\n",
157 | "cv2.destroyAllWindows()"
158 | ]
159 | },
160 | {
161 | "attachments": {},
162 | "cell_type": "markdown",
163 | "metadata": {},
164 | "source": [
165 | "## -Google Colab ONLY- Save your custom image in your Google Drive space"
166 | ]
167 | },
168 | {
169 | "cell_type": "code",
170 | "execution_count": null,
171 | "metadata": {},
172 | "outputs": [],
173 | "source": [
174 | "# Uncomment these lines if you're working on Colab\n",
175 | "\"\"\" from google.colab import drive\n",
176 | "drive.mount('/content/gdrive')\n",
177 | "\n",
178 | "cv2.imwrite(\"/content/gdrive/MyDrive/paint_img.png\", img_paint) \"\"\""
179 | ]
180 | },
181 | {
182 | "attachments": {},
183 | "cell_type": "markdown",
184 | "metadata": {},
185 | "source": [
186 | "## -Google Colab ONLY- Download directly your custom image"
187 | ]
188 | },
189 | {
190 | "cell_type": "code",
191 | "execution_count": null,
192 | "metadata": {},
193 | "outputs": [],
194 | "source": [
195 | "# Uncomment these lines if you're working on Colab\n",
196 | "\"\"\" from google.colab import files\n",
197 | "cv2.imwrite(\"/content/paint_img.png\", img_paint)\n",
198 | "files.download('/content/paint_img.png') \"\"\""
199 | ]
200 | }
201 | ],
202 | "metadata": {
203 | "kernelspec": {
204 | "display_name": "venvapi",
205 | "language": "python",
206 | "name": "venvapi"
207 | },
208 | "language_info": {
209 | "codemirror_mode": {
210 | "name": "ipython",
211 | "version": 3
212 | },
213 | "file_extension": ".py",
214 | "mimetype": "text/x-python",
215 | "name": "python",
216 | "nbconvert_exporter": "python",
217 | "pygments_lexer": "ipython3",
218 | "version": "3.9.13"
219 | },
220 | "orig_nbformat": 4
221 | },
222 | "nbformat": 4,
223 | "nbformat_minor": 2
224 | }
225 |
--------------------------------------------------------------------------------
/examples/HOWTO_use_DeepSORT_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "9XIBGlEafDQf"
7 | },
8 | "source": [
9 | "
\n",
10 | "\n",
11 | "\n"
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {
17 | "id": "DvouGVeYfDQg"
18 | },
19 | "source": [
20 | "# Easy Object Tracking with DeepSORT\n",
21 | "\n",
22 | "**DeepSORT** (Simple Online and Realtime Tracking with a Deep Association Metric) is an extension of the original SORT (Simple Real-time Tracker) algorithm, which is considered an elegant and widely used framework for object tracking. It incorporates a deep learning methodology to address real-world tracking challenges such as occlusions and different viewpoints.\n",
23 | "\n",
24 | ""
25 | ]
26 | },
27 | {
28 | "cell_type": "markdown",
29 | "metadata": {
30 | "id": "-STLXz8ifDQh"
31 | },
32 | "source": [
33 | "## Setup"
34 | ]
35 | },
36 | {
37 | "cell_type": "markdown",
38 | "metadata": {
39 | "id": "cV0_2S0SfDQh"
40 | },
41 | "source": [
42 | "You need to install Ikomia Python API with pip\n"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": null,
48 | "metadata": {
49 | "colab": {
50 | "base_uri": "https://localhost:8080/",
51 | "height": 1000
52 | },
53 | "id": "cbvRlv_ufDQh",
54 | "outputId": "e3893478-603b-467b-96a3-b272121649b3"
55 | },
56 | "outputs": [],
57 | "source": [
58 | "!pip install ikomia"
59 | ]
60 | },
61 | {
62 | "cell_type": "markdown",
63 | "metadata": {
64 | "id": "-j3VbsAYfDQi"
65 | },
66 | "source": [
67 | "---\n",
68 | "\n",
69 | "*Note: The script is not compatible with Google Colab as they have disabled cv2.imshow()*\n",
70 | "\n",
71 | "---"
72 | ]
73 | },
74 | {
75 | "cell_type": "markdown",
76 | "metadata": {},
77 | "source": [
78 | "## Download video and cut example video"
79 | ]
80 | },
81 | {
82 | "cell_type": "code",
83 | "execution_count": 1,
84 | "metadata": {
85 | "id": "uBp98pWxiHXq"
86 | },
87 | "outputs": [],
88 | "source": [
89 | "\n",
90 | "import requests\n",
91 | "import cv2\n",
92 | "\n",
93 | "url = \"https://www.pexels.com/download/video/12116094/?fps=29.97&h=720&w=1280\"\n",
94 | "\n",
95 | "# Define headers to mimic a browser request\n",
96 | "headers = {\n",
97 | " \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3\"\n",
98 | "}\n",
99 | "\n",
100 | "response = requests.get(url, headers=headers, stream=True)\n",
101 | "with open(\"video.mp4\", \"wb\") as f:\n",
102 | " for chunk in response.iter_content(chunk_size=1024):\n",
103 | " f.write(chunk)\n",
104 | "\n",
105 | "# Replace with the path to your downloaded video\n",
106 | "video_path = \"video.mp4\"\n",
107 | "\n",
108 | "# Open the video\n",
109 | "cap = cv2.VideoCapture(video_path)\n",
110 | "\n",
111 | "# Check if the video has opened successfully\n",
112 | "if not cap.isOpened():\n",
113 | " print(\"Error: Could not open video.\")\n",
114 | " exit()\n",
115 | "\n",
116 | "# Get video properties\n",
117 | "fps = cap.get(cv2.CAP_PROP_FPS)\n",
118 | "frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))\n",
119 | "duration = frame_count / fps\n",
120 | "cut_frame = int(frame_count / 4) # Frame to cut the video at\n",
121 | "\n",
122 | "# Define the codec and create VideoWriter object\n",
123 | "fourcc = cv2.VideoWriter_fourcc(*'mp4v') \n",
124 | "out = cv2.VideoWriter('short_video.mp4', fourcc, fps, (int(cap.get(3)), int(cap.get(4))))\n",
125 | "\n",
126 | "# Read and write frames until the cut point\n",
127 | "frame_num = 0\n",
128 | "while True:\n",
129 | " ret, frame = cap.read()\n",
130 | " if not ret or frame_num == cut_frame:\n",
131 | " break\n",
132 | " out.write(frame)\n",
133 | " frame_num += 1\n",
134 | "\n",
135 | "# Release everything\n",
136 | "cap.release()\n",
137 | "out.release()"
138 | ]
139 | },
140 | {
141 | "cell_type": "markdown",
142 | "metadata": {},
143 | "source": [
144 | "## Run YOLOv7 and DeepSORT on your video"
145 | ]
146 | },
147 | {
148 | "cell_type": "code",
149 | "execution_count": null,
150 | "metadata": {
151 | "colab": {
152 | "base_uri": "https://localhost:8080/",
153 | "height": 1000
154 | },
155 | "id": "Q8dA1vWbfDQi",
156 | "outputId": "6e55cb62-36d7-4ee8-840d-7289582d6b71"
157 | },
158 | "outputs": [],
159 | "source": [
160 | "from ikomia.dataprocess.workflow import Workflow\n",
161 | "from ikomia.utils.displayIO import display\n",
162 | "import cv2\n",
163 | "\n",
164 | "\n",
165 | "# Replace 'your_video_path.mp4' with the actual video file path\n",
166 | "input_video_path = 'short_video.mp4'\n",
167 | "output_video_path = 'deepsort_output_video.avi'\n",
168 | "\n",
169 | "# Init your workflow\n",
170 | "wf = Workflow()\n",
171 | "\n",
172 | "# Add object detection algorithm\n",
173 | "detector = wf.add_task(name=\"infer_yolo_v7\", auto_connect=True)\n",
174 | "\n",
175 | "# Add ByteTrack tracking algorithm\n",
176 | "tracking = wf.add_task(name=\"infer_deepsort\", auto_connect=True)\n",
177 | "\n",
178 | "tracking.set_parameters({\n",
179 | " \"categories\": \"all\",\n",
180 | " \"conf_thres\": \"0.5\",\n",
181 | "})\n",
182 | "\n",
183 | "# Open the video file\n",
184 | "stream = cv2.VideoCapture(input_video_path)\n",
185 | "if not stream.isOpened():\n",
186 | " print(\"Error: Could not open video.\")\n",
187 | " exit()\n",
188 | "\n",
189 | "# Get video properties for the output\n",
190 | "frame_width = int(stream.get(cv2.CAP_PROP_FRAME_WIDTH))\n",
191 | "frame_height = int(stream.get(cv2.CAP_PROP_FRAME_HEIGHT))\n",
192 | "frame_rate = stream.get(cv2.CAP_PROP_FPS)\n",
193 | "\n",
194 | "# Define the codec and create VideoWriter object\n",
195 | "# The 'XVID' codec is widely supported and provides good quality\n",
196 | "fourcc = cv2.VideoWriter_fourcc(*'XVID')\n",
197 | "out = cv2.VideoWriter(output_video_path, fourcc, frame_rate, (frame_width, frame_height))\n",
198 | "\n",
199 | "while True:\n",
200 | " # Read image from stream\n",
201 | " ret, frame = stream.read()\n",
202 | "\n",
203 | " # Test if the video has ended or there is an error\n",
204 | " if not ret:\n",
205 | " print(\"Info: End of video or error.\")\n",
206 | " break\n",
207 | "\n",
208 | " # Run the workflow on current frame\n",
209 | " wf.run_on(array=frame)\n",
210 | "\n",
211 | " # Get results\n",
212 | " image_out = tracking.get_output(0)\n",
213 | " obj_detect_out = tracking.get_output(1)\n",
214 | "\n",
215 | " # Convert the result to BGR color space for displaying\n",
216 | " img_out = image_out.get_image_with_graphics(obj_detect_out)\n",
217 | " img_res = cv2.cvtColor(img_out, cv2.COLOR_RGB2BGR)\n",
218 | "\n",
219 | " # Save the resulting frame\n",
220 | " out.write(img_out)\n",
221 | "\n",
222 | " # Display\n",
223 | " display(img_res, title=\"DeepSORT\", viewer=\"opencv\")\n",
224 | "\n",
225 | " # Press 'q' to quit the video processing\n",
226 | " if cv2.waitKey(1) & 0xFF == ord('q'):\n",
227 | " break\n",
228 | "\n",
229 | "# After the loop release everything\n",
230 | "stream.release()\n",
231 | "out.release()\n",
232 | "cv2.destroyAllWindows()"
233 | ]
234 | }
235 | ],
236 | "metadata": {
237 | "accelerator": "GPU",
238 | "colab": {
239 | "gpuType": "V100",
240 | "provenance": []
241 | },
242 | "kernelspec": {
243 | "display_name": "venvapi",
244 | "language": "python",
245 | "name": "venvapi"
246 | },
247 | "language_info": {
248 | "codemirror_mode": {
249 | "name": "ipython",
250 | "version": 3
251 | },
252 | "file_extension": ".py",
253 | "mimetype": "text/x-python",
254 | "name": "python",
255 | "nbconvert_exporter": "python",
256 | "pygments_lexer": "ipython3",
257 | "version": "3.9.13"
258 | },
259 | "orig_nbformat": 4
260 | },
261 | "nbformat": 4,
262 | "nbformat_minor": 0
263 | }
264 |
--------------------------------------------------------------------------------
/examples/HOWTO_use_Detectron2_Object_Detection_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "9jhW6_ynZ2pS"
7 | },
8 | "source": [
9 | "
\n",
10 | "\n",
11 | "\n"
12 | ]
13 | },
14 | {
15 | "cell_type": "markdown",
16 | "metadata": {
17 | "id": "hQfRa5xGZ2pX"
18 | },
19 | "source": [
20 | "# How to use Detectron2 Object Detection with Ikomia API"
21 | ]
22 | },
23 | {
24 | "cell_type": "markdown",
25 | "metadata": {
26 | "id": "5vJS53BiZ2pa"
27 | },
28 | "source": [
29 | "[Detectron2](https://github.com/facebookresearch/detectron2) is Python Library created by Facebook and providing many algorithms for object detection, object segmentation or pose estimation.\n",
30 | "\n",
31 | "Detectron2 is open source, maintained by Facebook and you can built your own project on top of it.\n",
32 | "\n",
33 | "In this tutorial, we present how it can be very easy to use Detectron2 Object Detection algorithms with a few lines of code.\n",
34 | "\n",
35 | "If you like this tutorial, you can support our project here [Ikomia API GitHub](https://github.com/Ikomia-dev/IkomiaApi).\n",
36 | "\n",
37 | "## ENJOY 🥰 !!\n",
38 | "\n",
39 | "\n",
40 | "
\n",
41 | "
\n",
42 | "
"
43 | ]
44 | },
45 | {
46 | "cell_type": "markdown",
47 | "metadata": {
48 | "id": "x4CdI0J1ej5b"
49 | },
50 | "source": [
51 | "## Setup"
52 | ]
53 | },
54 | {
55 | "cell_type": "markdown",
56 | "metadata": {
57 | "id": "NBmJN2AaDmcI"
58 | },
59 | "source": [
60 | "You need to install Ikomia Python API on Google Colab with pip."
61 | ]
62 | },
63 | {
64 | "cell_type": "code",
65 | "execution_count": null,
66 | "metadata": {
67 | "colab": {
68 | "base_uri": "https://localhost:8080/",
69 | "height": 1000
70 | },
71 | "id": "8eSnQYJygrDy",
72 | "outputId": "5588f89a-c9cb-4dde-bcea-a96e00745919"
73 | },
74 | "outputs": [],
75 | "source": [
76 | "!pip install ikomia"
77 | ]
78 | },
79 | {
80 | "cell_type": "markdown",
81 | "metadata": {
82 | "id": "ktbA-VPOATgP"
83 | },
84 | "source": [
85 | "\n",
86 | "\n",
87 | "---\n",
88 | "\n",
89 | "\n",
90 | "**-Google Colab ONLY- Restart runtime**\n",
91 | "\n",
92 | "Some Python packages have been updated. Please click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
93 | "\n",
94 | "\n",
95 | "\n"
96 | ]
97 | },
98 | {
99 | "cell_type": "markdown",
100 | "metadata": {
101 | "id": "2hS1T6ky1Wcw"
102 | },
103 | "source": [
104 | "---"
105 | ]
106 | },
107 | {
108 | "cell_type": "markdown",
109 | "metadata": {
110 | "id": "JJsRFzl9Au1c"
111 | },
112 | "source": [
113 | "Ikomia API has already more than 180 pre-integrated algorithms (mainly OpenCV) but the most interesting algorithms are in [Ikomia HUB](https://github.com/Ikomia-hub). \n",
114 | "\n",
115 | "We push regularly state-of-the-art algorithms from individual repos (think of YOLO v7 for example) or from companies (Facebook Detectron2 or Ultralytics/YOLOv5 for example)."
116 | ]
117 | },
118 | {
119 | "cell_type": "markdown",
120 | "metadata": {
121 | "id": "jEdZ_uDYDqjH"
122 | },
123 | "source": [
124 | "## Apply Detectron2 Object Detection algorithms on your images\n",
125 | "\n",
126 | "First, you create a new workflow from scratch.\n",
127 | "\n",
128 | "Then you add the Detectron2 algorithm and it will automagically download the algorithm from Ikomia Hub and install all the Python dependencies (the 1st time, it can take a while, be patient ! )."
129 | ]
130 | },
131 | {
132 | "cell_type": "code",
133 | "execution_count": null,
134 | "metadata": {
135 | "id": "bRPYGcRd1Pwh"
136 | },
137 | "outputs": [],
138 | "source": [
139 | "from ikomia.dataprocess.workflow import Workflow\n",
140 | "from ikomia.utils import ik\n",
141 | "\n",
142 | "# Create workflow from scratch\n",
143 | "wf = Workflow()\n",
144 | "\n",
145 | "# Add algorithms to your workflow\n",
146 | "d2 = wf.add_task(ik.infer_detectron2_detection(), auto_connect=True)"
147 | ]
148 | },
149 | {
150 | "cell_type": "markdown",
151 | "metadata": {
152 | "id": "_MvDkGjQCtRV"
153 | },
154 | "source": [
155 | "Once Detectron2 is installed, you can check the available pre-trained models by code."
156 | ]
157 | },
158 | {
159 | "cell_type": "code",
160 | "execution_count": null,
161 | "metadata": {
162 | "colab": {
163 | "base_uri": "https://localhost:8080/"
164 | },
165 | "id": "IS0PJjHYvwVt",
166 | "outputId": "186064e8-0b9e-411c-8988-456888d480b2"
167 | },
168 | "outputs": [],
169 | "source": [
170 | "import detectron2\n",
171 | "import os\n",
172 | "\n",
173 | "config_paths = os.path.dirname(detectron2.__file__) + \"/model_zoo\"\n",
174 | "\n",
175 | "available_cfg = []\n",
176 | "for root, dirs, files in os.walk(config_paths, topdown=False):\n",
177 | " for name in files:\n",
178 | " file_path = os.path.join(root, name)\n",
179 | " possible_cfg = os.path.join(*file_path.split('/')[-2:])\n",
180 | " if \"Detection\" in possible_cfg and possible_cfg.endswith('.yaml') and 'rpn' not in possible_cfg:\n",
181 | " available_cfg.append(possible_cfg.replace('.yaml', ''))\n",
182 | "for model_name in available_cfg:\n",
183 | " print(model_name)"
184 | ]
185 | },
186 | {
187 | "cell_type": "markdown",
188 | "metadata": {
189 | "id": "NKPLZWQ1LtQp"
190 | },
191 | "source": [
192 | "Select your image by changing the url."
193 | ]
194 | },
195 | {
196 | "cell_type": "code",
197 | "execution_count": null,
198 | "metadata": {
199 | "colab": {
200 | "base_uri": "https://localhost:8080/"
201 | },
202 | "id": "QYNohwcoKU4n",
203 | "outputId": "5a5ddc60-53a0-41f8-b8ca-d45bbdf1eb26"
204 | },
205 | "outputs": [],
206 | "source": [
207 | "import requests\n",
208 | "\n",
209 | "# Download the image\n",
210 | "url = \"http://images.cocodataset.org/val2017/000000439715.jpg\"\n",
211 | "response = requests.get(url, stream=True)\n",
212 | "with open(\"image.jpg\", \"wb\") as file:\n",
213 | " for chunk in response.iter_content(chunk_size=8192):\n",
214 | " file.write(chunk)"
215 | ]
216 | },
217 | {
218 | "cell_type": "markdown",
219 | "metadata": {
220 | "id": "7eX8QVkgMAF3"
221 | },
222 | "source": [
223 | "Now select your preferred model. Then run and test !"
224 | ]
225 | },
226 | {
227 | "cell_type": "code",
228 | "execution_count": null,
229 | "metadata": {
230 | "colab": {
231 | "base_uri": "https://localhost:8080/",
232 | "height": 497
233 | },
234 | "id": "8GnJyWEUwoOF",
235 | "outputId": "408ca135-820a-4a09-8918-7265a8521ad2"
236 | },
237 | "outputs": [],
238 | "source": [
239 | "# Set your preferred model\n",
240 | "d2_params = {\n",
241 | " ik.infer_detectron2_detection.model_name: \"COCO-Detection/faster_rcnn_R_50_C4_3x\" # <-- change your model here\n",
242 | "}\n",
243 | "d2.set_parameters(d2_params)"
244 | ]
245 | },
246 | {
247 | "cell_type": "markdown",
248 | "metadata": {},
249 | "source": [
250 | "## Run and display your results"
251 | ]
252 | },
253 | {
254 | "cell_type": "code",
255 | "execution_count": null,
256 | "metadata": {},
257 | "outputs": [],
258 | "source": [
259 | "from ikomia.utils.displayIO import display\n",
260 | "from PIL import ImageShow\n",
261 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
262 | "\n",
263 | "# Run\n",
264 | "wf.run_on(path=os.getcwd()+\"/image.jpg\")\n",
265 | "\n",
266 | "# Display\n",
267 | "img_d2 = d2.get_image_with_graphics()\n",
268 | "\n",
269 | "display(img_d2)"
270 | ]
271 | },
272 | {
273 | "cell_type": "markdown",
274 | "metadata": {
275 | "id": "rpsaQoYSwma8",
276 | "tags": []
277 | },
278 | "source": [
279 | "## -Google Colab ONLY- Save your custom image in your Google Drive space"
280 | ]
281 | },
282 | {
283 | "cell_type": "code",
284 | "execution_count": null,
285 | "metadata": {
286 | "colab": {
287 | "base_uri": "https://localhost:8080/"
288 | },
289 | "id": "pKPQ1JUCwdGW",
290 | "outputId": "afcb97f5-8fc8-4da2-dae5-446d17246ad9"
291 | },
292 | "outputs": [],
293 | "source": [
294 | "# Uncomment these lines if you're working on Colab\n",
295 | "# from google.colab import drive\n",
296 | "# drive.mount('/content/gdrive')\n",
297 | "\n",
298 | "# cv2.imwrite(\"/content/gdrive/MyDrive/img_d2.png\", img_d2)"
299 | ]
300 | },
301 | {
302 | "cell_type": "markdown",
303 | "metadata": {
304 | "id": "DyS-Lak6kntB"
305 | },
306 | "source": [
307 | "## -Google Colab ONLY- Download directly your custom image"
308 | ]
309 | },
310 | {
311 | "cell_type": "code",
312 | "execution_count": null,
313 | "metadata": {
314 | "colab": {
315 | "base_uri": "https://localhost:8080/",
316 | "height": 17
317 | },
318 | "id": "s_E2W_3hk07U",
319 | "outputId": "e639ba39-14aa-4b99-8c0b-3034734f09c6"
320 | },
321 | "outputs": [],
322 | "source": [
323 | "# Uncomment these lines if you're working on Colab\n",
324 | "# from google.colab import files\n",
325 | "# cv2.imwrite(\"/content/img_d2.png\", img_d2)\n",
326 | "# files.download('/content/img_d2.png')"
327 | ]
328 | }
329 | ],
330 | "metadata": {
331 | "colab": {
332 | "collapsed_sections": [],
333 | "provenance": []
334 | },
335 | "gpuClass": "standard",
336 | "kernelspec": {
337 | "display_name": "venv37",
338 | "language": "python",
339 | "name": "venv37"
340 | },
341 | "language_info": {
342 | "codemirror_mode": {
343 | "name": "ipython",
344 | "version": 3
345 | },
346 | "file_extension": ".py",
347 | "mimetype": "text/x-python",
348 | "name": "python",
349 | "nbconvert_exporter": "python",
350 | "pygments_lexer": "ipython3",
351 | "version": "3.7.9"
352 | }
353 | },
354 | "nbformat": 4,
355 | "nbformat_minor": 4
356 | }
357 |
--------------------------------------------------------------------------------
/examples/HOWTO_use_Google_Cloud_Vision_API_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to use the Google Cloud Vision API"
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "**Google Cloud Vision API** is a part of the Google Cloud suite, a set of powerful AI tools and services. It allows developers to integrate vision detection features within applications, including image labeling, face and landmark detection, optical character recognition (OCR), and tagging of explicit content.\n",
27 | "\n",
28 | "- [List of available algorithms](https://app.ikomia.ai/hub/?q=google)\n",
29 | "\n",
30 | "\n",
31 | ""
32 | ]
33 | },
34 | {
35 | "attachments": {},
36 | "cell_type": "markdown",
37 | "metadata": {},
38 | "source": [
39 | "## Setup"
40 | ]
41 | },
42 | {
43 | "attachments": {},
44 | "cell_type": "markdown",
45 | "metadata": {},
46 | "source": [
47 | "First, you need to install Ikomia Python API with pip\n"
48 | ]
49 | },
50 | {
51 | "cell_type": "code",
52 | "execution_count": null,
53 | "metadata": {},
54 | "outputs": [],
55 | "source": [
56 | "!pip install ikomia"
57 | ]
58 | },
59 | {
60 | "cell_type": "markdown",
61 | "metadata": {},
62 | "source": [
63 | "To Use the Google Cloud Vision API, you must first activate the Vision API within your Google Cloud project and generate a Google Cloud Vision API Key. \n",
64 | "\n",
65 | "This process is straightforward and can be guided by the following resources:\n",
66 | "- For a visual and step-by-step guide, consider watching this [tutorial on YouTube](https://www.youtube.com/watch?v=kZ3OL3AN_IA&t=157s). \n",
67 | "- If you prefer reading and like to go at your own pace, a [blog post tutorial](https://daminion.net/docs/how-to-get-google-cloud-vision-api-key/) might be more suitable. "
68 | ]
69 | },
70 | {
71 | "cell_type": "markdown",
72 | "metadata": {},
73 | "source": [
74 | "---\n",
75 | "\n",
76 | "**-Google Colab ONLY- Restart runtime**\n",
77 | "\n",
78 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
79 | "\n",
80 | "---"
81 | ]
82 | },
83 | {
84 | "attachments": {},
85 | "cell_type": "markdown",
86 | "metadata": {},
87 | "source": [
88 | "## Use the Google Cloud Vision API with a few lines of code"
89 | ]
90 | },
91 | {
92 | "cell_type": "markdown",
93 | "metadata": {},
94 | "source": [
95 | "### Text Detection"
96 | ]
97 | },
98 | {
99 | "cell_type": "code",
100 | "execution_count": null,
101 | "metadata": {},
102 | "outputs": [],
103 | "source": [
104 | "from ikomia.dataprocess.workflow import Workflow\n",
105 | "from ikomia.utils import ik\n",
106 | "\n",
107 | "\n",
108 | "api_key_path = 'PATH/TO/YOUR/GOOGLE/CLOUD/VISION/API/KEY.json'\n",
109 | "\n",
110 | "# Init your workflow\n",
111 | "wf = Workflow()\n",
112 | "\n",
113 | "# Add algorithm\n",
114 | "algo = wf.add_task(ik.infer_google_vision_ocr(google_application_credentials=api_key_path), auto_connect=True)\n",
115 | "\n",
116 | "# Run on your image\n",
117 | "wf.run_on(url='https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_text_inspiration.jpg?raw=true')\n",
118 | "\n",
119 | "# Display your result\n",
120 | "img_output = algo.get_output(0)\n",
121 | "recognition_output = algo.get_output(1)\n"
122 | ]
123 | },
124 | {
125 | "cell_type": "code",
126 | "execution_count": null,
127 | "metadata": {},
128 | "outputs": [],
129 | "source": [
130 | "from ikomia.utils.displayIO import display\n",
131 | "\n",
132 | "# Display segmentation mask\n",
133 | "from PIL import ImageShow\n",
134 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
135 | "\n",
136 | "display(img_output.get_image_with_mask_and_graphics(recognition_output), title=\"Google Vision OCR\")"
137 | ]
138 | },
139 | {
140 | "cell_type": "markdown",
141 | "metadata": {},
142 | "source": [
143 | "### Face Detection"
144 | ]
145 | },
146 | {
147 | "cell_type": "code",
148 | "execution_count": null,
149 | "metadata": {},
150 | "outputs": [],
151 | "source": [
152 | "# Init your workflow\n",
153 | "wf = Workflow()\n",
154 | "\n",
155 | "# Add algorithm\n",
156 | "algo = wf.add_task(ik.infer_google_vision_face_detection(google_application_credentials=api_key_path), auto_connect=True)\n",
157 | "\n",
158 | "\n",
159 | "# Run on your image\n",
160 | "wf.run_on(url='https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_portrait_4.jpg?raw=true')\n",
161 | "\n",
162 | "# Display your result\n",
163 | "display(algo.get_image_with_graphics())"
164 | ]
165 | },
166 | {
167 | "cell_type": "markdown",
168 | "metadata": {},
169 | "source": [
170 | "### Label detection"
171 | ]
172 | },
173 | {
174 | "cell_type": "code",
175 | "execution_count": null,
176 | "metadata": {},
177 | "outputs": [],
178 | "source": [
179 | "# Init your workflow\n",
180 | "wf = Workflow()\n",
181 | "\n",
182 | "# Add algorithm\n",
183 | "algo = wf.add_task(ik.infer_google_vision_label_detection(google_application_credentials=api_key_path), auto_connect=True)\n",
184 | "\n",
185 | "# Run on your image\n",
186 | "wf.run_on(url='https://cloud.google.com/static/vision/docs/images/setagaya_small.jpeg')\n",
187 | "\n",
188 | "# Display results\n",
189 | "label_output = algo.get_output(1)\n",
190 | "print(label_output.data)\n",
191 | "label_output.save('label_detection.json')"
192 | ]
193 | }
194 | ],
195 | "metadata": {
196 | "kernelspec": {
197 | "display_name": "venv310",
198 | "language": "python",
199 | "name": "python3"
200 | },
201 | "language_info": {
202 | "codemirror_mode": {
203 | "name": "ipython",
204 | "version": 3
205 | },
206 | "file_extension": ".py",
207 | "mimetype": "text/x-python",
208 | "name": "python",
209 | "nbconvert_exporter": "python",
210 | "pygments_lexer": "ipython3",
211 | "version": "3.10.11"
212 | },
213 | "orig_nbformat": 4
214 | },
215 | "nbformat": 4,
216 | "nbformat_minor": 2
217 | }
218 |
--------------------------------------------------------------------------------
/examples/HOWTO_use_MMOCR_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# Easy text extraction with MMOCR "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "MMOCR shines as a top-tier Optical Character Recognition (OCR) toolbox, especially within the Python community.\n",
27 | "\n",
28 | "\n",
29 | ""
30 | ]
31 | },
32 | {
33 | "attachments": {},
34 | "cell_type": "markdown",
35 | "metadata": {},
36 | "source": [
37 | "## Setup"
38 | ]
39 | },
40 | {
41 | "attachments": {},
42 | "cell_type": "markdown",
43 | "metadata": {},
44 | "source": [
45 | "You need to install Ikomia Python API with pip\n"
46 | ]
47 | },
48 | {
49 | "cell_type": "code",
50 | "execution_count": null,
51 | "metadata": {},
52 | "outputs": [],
53 | "source": [
54 | "!pip install ikomia"
55 | ]
56 | },
57 | {
58 | "attachments": {},
59 | "cell_type": "markdown",
60 | "metadata": {},
61 | "source": [
62 | "## Run MMOCR on your image"
63 | ]
64 | },
65 | {
66 | "cell_type": "markdown",
67 | "metadata": {},
68 | "source": [
69 | "---\n",
70 | "\n",
71 | "**-Google Colab ONLY- Restart runtime after the first run of the workflow below** \n",
72 | "\n",
73 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
74 | "\n",
75 | "---"
76 | ]
77 | },
78 | {
79 | "cell_type": "code",
80 | "execution_count": null,
81 | "metadata": {},
82 | "outputs": [],
83 | "source": [
84 | "from ikomia.dataprocess.workflow import Workflow\n",
85 | "\n",
86 | "# Init your workflow\n",
87 | "wf = Workflow()\n",
88 | "\n",
89 | "# Add text detection algorithm\n",
90 | "text_det = wf.add_task(name=\"infer_mmlab_text_detection\", auto_connect=True)\n",
91 | "\n",
92 | "# Add text recognition algorithm\n",
93 | "text_rec = wf.add_task(name=\"infer_mmlab_text_recognition\", auto_connect=True)\n",
94 | "\n",
95 | "# Run the workflow on image\n",
96 | "wf.run_on(url=\"https://aforismi.meglio.it/img/frasi/doing-nothing-nothing-to-do.jpg\")\n"
97 | ]
98 | },
99 | {
100 | "cell_type": "code",
101 | "execution_count": null,
102 | "metadata": {},
103 | "outputs": [],
104 | "source": [
105 | "from PIL import Image\n",
106 | "from IPython.display import display\n",
107 | "\n",
108 | "# Display results\n",
109 | "img_output = text_rec.get_output(0)\n",
110 | "recognition_output = text_rec.get_output(1)\n",
111 | "image_data = img_output.get_image_with_mask_and_graphics(recognition_output)\n",
112 | "\n",
113 | "# Create a PIL Image from the NumPy array\n",
114 | "image = Image.fromarray(image_data)\n",
115 | "\n",
116 | "# Display the image\n",
117 | "display(image)"
118 | ]
119 | }
120 | ],
121 | "metadata": {
122 | "kernelspec": {
123 | "display_name": "venvapi",
124 | "language": "python",
125 | "name": "venvapi"
126 | },
127 | "language_info": {
128 | "codemirror_mode": {
129 | "name": "ipython",
130 | "version": 3
131 | },
132 | "file_extension": ".py",
133 | "mimetype": "text/x-python",
134 | "name": "python",
135 | "nbconvert_exporter": "python",
136 | "pygments_lexer": "ipython3",
137 | "version": "3.9.13"
138 | },
139 | "orig_nbformat": 4
140 | },
141 | "nbformat": 4,
142 | "nbformat_minor": 2
143 | }
144 |
--------------------------------------------------------------------------------
/examples/HOWTO_use_MMPose_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# Easy pose estimation with MMPose "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "In the world of Computer Vision, pose estimation aims to determine the position and orientation of predefined keypoints on objects or body parts. \n",
27 | "\n",
28 | "For instance, in human pose estimation, the goal is to locate specific keypoints on a person's body, such as the elbows, knees, and shoulders.\n",
29 | "\n",
30 | "\n",
31 | "MMPose, a part of the OpenMMLab's ecosystem, is a cutting-edge library that provides tools and frameworks specifically designed for various pose estimation tasks.\n",
32 | "\n",
33 | "\n",
34 | ""
35 | ]
36 | },
37 | {
38 | "attachments": {},
39 | "cell_type": "markdown",
40 | "metadata": {},
41 | "source": [
42 | "## Setup"
43 | ]
44 | },
45 | {
46 | "attachments": {},
47 | "cell_type": "markdown",
48 | "metadata": {},
49 | "source": [
50 | "You need to install Ikomia Python API with pip\n"
51 | ]
52 | },
53 | {
54 | "cell_type": "code",
55 | "execution_count": null,
56 | "metadata": {},
57 | "outputs": [],
58 | "source": [
59 | "!pip install ikomia"
60 | ]
61 | },
62 | {
63 | "attachments": {},
64 | "cell_type": "markdown",
65 | "metadata": {},
66 | "source": [
67 | "## Run MMPose on your image"
68 | ]
69 | },
70 | {
71 | "cell_type": "markdown",
72 | "metadata": {},
73 | "source": [
74 | "---\n",
75 | "\n",
76 | "**-Google Colab ONLY- Restart runtime after the first run of the workflow below** \n",
77 | "\n",
78 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
79 | "\n",
80 | "---"
81 | ]
82 | },
83 | {
84 | "cell_type": "code",
85 | "execution_count": null,
86 | "metadata": {},
87 | "outputs": [],
88 | "source": [
89 | "from ikomia.dataprocess.workflow import Workflow\n",
90 | "from ikomia.utils import ik\n",
91 | "\n",
92 | "# Init your workflow\n",
93 | "wf = Workflow()\n",
94 | "\n",
95 | "# Add the MMpose algorithm\n",
96 | "pose = wf.add_task(ik.infer_mmlab_pose_estimation(\n",
97 | " config_file = \"configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_vipnas-mbv3_8xb64-210e_coco-256x192.py\",\n",
98 | " conf_thres = '0.5',\n",
99 | " conf_kp_thres = '0.3',\n",
100 | " detector = \"Person\"\n",
101 | " ),\n",
102 | " auto_connect=True\n",
103 | ")\n",
104 | "\n",
105 | "# Run directly on your image\n",
106 | "wf.run_on(url=\"https://cdn.nba.com/teams/legacy/www.nba.com/bulls/sites/bulls/files/jordan_vs_indiana.jpg\")"
107 | ]
108 | },
109 | {
110 | "cell_type": "code",
111 | "execution_count": null,
112 | "metadata": {},
113 | "outputs": [],
114 | "source": [
115 | "from ikomia.utils.displayIO import display\n",
116 | "from PIL import ImageShow\n",
117 | "\n",
118 | "# Display the keypoints\n",
119 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
120 | "\n",
121 | "display(pose.get_image_with_graphics())"
122 | ]
123 | }
124 | ],
125 | "metadata": {
126 | "kernelspec": {
127 | "display_name": "venvapi",
128 | "language": "python",
129 | "name": "venvapi"
130 | },
131 | "language_info": {
132 | "codemirror_mode": {
133 | "name": "ipython",
134 | "version": 3
135 | },
136 | "file_extension": ".py",
137 | "mimetype": "text/x-python",
138 | "name": "python",
139 | "nbconvert_exporter": "python",
140 | "pygments_lexer": "ipython3",
141 | "version": "3.9.13"
142 | },
143 | "orig_nbformat": 4
144 | },
145 | "nbformat": 4,
146 | "nbformat_minor": 2
147 | }
148 |
--------------------------------------------------------------------------------
/examples/HOWTO_use_MMSegmentation_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# Easy semantic segmentation with MMSegmentation"
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "**MMSegmentation** is part of the OpenMMLab project and is developed by the Multimedia Laboratory at the Chinese University of Hong Kong. It specializes in semantic segmentation, a vital component in the field of computer vision. \n",
27 | "\n",
28 | "It offers an extensive collection of segmentation models and algorithms, making it a go-to choice for both researchers and practitioners in the field.\n",
29 | "\n",
30 | "\n",
31 | "\n",
32 | ""
33 | ]
34 | },
35 | {
36 | "attachments": {},
37 | "cell_type": "markdown",
38 | "metadata": {},
39 | "source": [
40 | "## Setup"
41 | ]
42 | },
43 | {
44 | "attachments": {},
45 | "cell_type": "markdown",
46 | "metadata": {},
47 | "source": [
48 | "You need to install Ikomia Python API with pip\n"
49 | ]
50 | },
51 | {
52 | "cell_type": "code",
53 | "execution_count": null,
54 | "metadata": {},
55 | "outputs": [],
56 | "source": [
57 | "!pip install ikomia"
58 | ]
59 | },
60 | {
61 | "attachments": {},
62 | "cell_type": "markdown",
63 | "metadata": {},
64 | "source": [
65 | "## Run MMSegmentation on your image"
66 | ]
67 | },
68 | {
69 | "cell_type": "markdown",
70 | "metadata": {},
71 | "source": [
72 | "---\n",
73 | "\n",
74 | "**-Google Colab ONLY- Restart runtime after the first run of the workflow below** \n",
75 | "\n",
76 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
77 | "\n",
78 | "---"
79 | ]
80 | },
81 | {
82 | "cell_type": "code",
83 | "execution_count": null,
84 | "metadata": {},
85 | "outputs": [],
86 | "source": [
87 | "from ikomia.dataprocess.workflow import Workflow\n",
88 | "\n",
89 | "# Init your workflow\n",
90 | "wf = Workflow()\n",
91 | "\n",
92 | "# Add algorithm\n",
93 | "segmentor = wf.add_task(name=\"infer_mmlab_segmentation\", auto_connect=True)\n",
94 | "\n",
95 | "segmentor.set_parameters({\n",
96 | " \"model_name\": \"pspnet\",\n",
97 | " \"model_config\": \"pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py\",\n",
98 | " \"cuda\": \"True\",\n",
99 | " })\n",
100 | "\n",
101 | "\n",
102 | "# Run the workflow on image\n",
103 | "wf.run_on(url=\"https://github.com/open-mmlab/mmsegmentation/blob/main/demo/demo.png?raw=true\")\n"
104 | ]
105 | },
106 | {
107 | "cell_type": "code",
108 | "execution_count": null,
109 | "metadata": {},
110 | "outputs": [],
111 | "source": [
112 | "from ikomia.core import IODataType\n",
113 | "from ikomia.utils.displayIO import display\n",
114 | "\n",
115 | "from PIL import ImageShow\n",
116 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
117 | "\n",
118 | "# Display the results\n",
119 | "display(segmentor.get_image_with_mask())"
120 | ]
121 | },
122 | {
123 | "cell_type": "code",
124 | "execution_count": null,
125 | "metadata": {},
126 | "outputs": [],
127 | "source": [
128 | "# Display legend\n",
129 | "results = segmentor.get_results()\n",
130 | "display(results.get_legend())"
131 | ]
132 | },
133 | {
134 | "cell_type": "markdown",
135 | "metadata": {},
136 | "source": [
137 | "## List of parameters\n",
138 | "\n",
139 | "- **model_name** (str, default=\"maskformer\"): model name. \n",
140 | "- **model_config** (str, default=\"maskformer_r50-d32_8xb2-160k_ade20k-512x512\"): name of the model configuration file.\n",
141 | "- **config_file** (str, default=\"\"): path to model config file (only if *use_custom_model=True*). The file is generated at the end of a custom training. Use algorithm ***train_mmlab_detection*** from Ikomia HUB to train custom model.\n",
142 | "- **model_weight_file** (str, default=\"\"): path to model weights file (.pt) (only if *use_custom_model=True*). The file is generated at the end of a custom training.\n",
143 | "- **cuda** (bool, default=True): CUDA acceleration if True, run on CPU otherwise."
144 | ]
145 | },
146 | {
147 | "cell_type": "markdown",
148 | "metadata": {},
149 | "source": [
150 | "MMLab framework for object detection and instance segmentation offers a large range of models. To ease the choice of couple (model_name/model_config), you can call the function *get_model_zoo()* to get a list of possible values.\n"
151 | ]
152 | },
153 | {
154 | "cell_type": "code",
155 | "execution_count": null,
156 | "metadata": {},
157 | "outputs": [],
158 | "source": [
159 | "from ikomia.dataprocess.workflow import Workflow\n",
160 | "\n",
161 | "# Init your workflow\n",
162 | "wf = Workflow()\n",
163 | "\n",
164 | "# Add algorithm\n",
165 | "segmentor = wf.add_task(name=\"infer_mmlab_segmentation\", auto_connect=True)\n",
166 | "\n",
167 | "# Get list of possible models (model_name, model_config)\n",
168 | "models = segmentor.get_model_zoo()\n",
169 | "\n",
170 | "for model in models:\n",
171 | " print(model)"
172 | ]
173 | },
174 | {
175 | "cell_type": "markdown",
176 | "metadata": {},
177 | "source": [
178 | "### Run MMSegmentation on Video\n",
179 | "*Note: The video stream will work on local only, not on Google Colab*"
180 | ]
181 | },
182 | {
183 | "cell_type": "code",
184 | "execution_count": null,
185 | "metadata": {},
186 | "outputs": [],
187 | "source": [
188 | "from ikomia.dataprocess.workflow import Workflow\n",
189 | "from ikomia.utils.displayIO import display\n",
190 | "import cv2\n",
191 | "\n",
192 | "# Use a path to your video\n",
193 | "video_input = 'PATH/TO/YOUR/VIDEO.mp4' # Example video from Pexels: https://www.pexels.com/video/busy-street-in-new-york-854614/\n",
194 | "output_path = 'video_output.mp4'\n",
195 | "\n",
196 | "# Init your workflow\n",
197 | "wf = Workflow()\n",
198 | "\n",
199 | "# Add algorithm\n",
200 | "segmentor = wf.add_task(name=\"infer_mmlab_segmentation\", auto_connect=True)\n",
201 | "\n",
202 | "segmentor.set_parameters({\n",
203 | " \"model_name\": \"segformer\",\n",
204 | " \"model_config\": \"segformer_mit-b0_8xb1-160k_cityscapes-1024x1024\",\n",
205 | " \"cuda\": \"True\",\n",
206 | "})\n",
207 | "\n",
208 | "# Open the video file from URL\n",
209 | "stream = cv2.VideoCapture(video_input)\n",
210 | "if not stream.isOpened():\n",
211 | " print(\"Error: Could not open video.\")\n",
212 | " exit()\n",
213 | "\n",
214 | "# Get video properties for the output\n",
215 | "frame_width = int(stream.get(cv2.CAP_PROP_FRAME_WIDTH))\n",
216 | "frame_height = int(stream.get(cv2.CAP_PROP_FRAME_HEIGHT))\n",
217 | "frame_rate = stream.get(cv2.CAP_PROP_FPS)\n",
218 | "\n",
219 | "# Define the codec and create VideoWriter object\n",
220 | "fourcc = cv2.VideoWriter_fourcc(*'XVID')\n",
221 | "out = cv2.VideoWriter(output_path, fourcc, frame_rate, (frame_width, frame_height))\n",
222 | "\n",
223 | "while True:\n",
224 | " # Read image from stream\n",
225 | " ret, frame = stream.read()\n",
226 | " if not ret:\n",
227 | " print(\"Info: End of video or error.\")\n",
228 | " break\n",
229 | "\n",
230 | " # Run the workflow on current frame\n",
231 | " wf.run_on(array=frame)\n",
232 | "\n",
233 | " # Get results\n",
234 | " image_out = segmentor.get_output(0)\n",
235 | " obj_detect_out = segmentor.get_output(1)\n",
236 | "\n",
237 | " # Convert the result to BGR color space for displaying\n",
238 | " img_out = image_out.get_image_with_mask_and_graphics(obj_detect_out)\n",
239 | " img_res = cv2.cvtColor(img_out, cv2.COLOR_RGB2BGR)\n",
240 | "\n",
241 | " # Save the resulting frame\n",
242 | " out.write(img_res) # This should be img_res instead of img_out if you intend to save the converted BGR image\n",
243 | "\n",
244 | " # Display\n",
245 | " display(img_res, title=\"MMSeg Semantic Segmentation\", viewer=\"opencv\")\n",
246 | "\n",
247 | " # Press 'q' to quit the video processing\n",
248 | " if cv2.waitKey(1) & 0xFF == ord('q'):\n",
249 | " break\n",
250 | "\n",
251 | "# Release everything after the loop\n",
252 | "stream.release()\n",
253 | "out.release()\n",
254 | "cv2.destroyAllWindows()"
255 | ]
256 | }
257 | ],
258 | "metadata": {
259 | "kernelspec": {
260 | "display_name": "venv310",
261 | "language": "python",
262 | "name": "python3"
263 | },
264 | "language_info": {
265 | "codemirror_mode": {
266 | "name": "ipython",
267 | "version": 3
268 | },
269 | "file_extension": ".py",
270 | "mimetype": "text/x-python",
271 | "name": "python",
272 | "nbconvert_exporter": "python",
273 | "pygments_lexer": "ipython3",
274 | "version": "3.10.11"
275 | },
276 | "orig_nbformat": 4
277 | },
278 | "nbformat": 4,
279 | "nbformat_minor": 2
280 | }
281 |
--------------------------------------------------------------------------------
/examples/HOWTO_use_MobileSAM_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run MobileSAM with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "**MobileSAM** (Faster Segment Anything) is a streamlined and efficient variant of the Segment Anything Model (SAM), optimized for mobile applications. \n",
27 | "\n",
28 | "The innovation primarily addresses the challenge posed by the original SAM's resource-intensive image encoder. MobileSAM introduces a lightweight image encoder, significantly reducing the model's size and computational demands without compromising performance.\n",
29 | "\n",
30 | ""
31 | ]
32 | },
33 | {
34 | "attachments": {},
35 | "cell_type": "markdown",
36 | "metadata": {},
37 | "source": [
38 | "## Setup"
39 | ]
40 | },
41 | {
42 | "attachments": {},
43 | "cell_type": "markdown",
44 | "metadata": {},
45 | "source": [
46 | "You need to install Ikomia Python API with pip\n"
47 | ]
48 | },
49 | {
50 | "cell_type": "code",
51 | "execution_count": null,
52 | "metadata": {},
53 | "outputs": [],
54 | "source": [
55 | "!pip install ikomia"
56 | ]
57 | },
58 | {
59 | "cell_type": "markdown",
60 | "metadata": {},
61 | "source": [
62 | "---\n",
63 | "\n",
64 | "**-Google Colab ONLY- Restart runtime**\n",
65 | "\n",
66 | "Click on the \"RESTART RUNTIME\" button at the end the previous window.\n",
67 | "\n",
68 | "---"
69 | ]
70 | },
71 | {
72 | "attachments": {},
73 | "cell_type": "markdown",
74 | "metadata": {},
75 | "source": [
76 | "## Run MobileSAM on your image"
77 | ]
78 | },
79 | {
80 | "cell_type": "markdown",
81 | "metadata": {},
82 | "source": [
83 | "### Box prompt"
84 | ]
85 | },
86 | {
87 | "cell_type": "code",
88 | "execution_count": null,
89 | "metadata": {},
90 | "outputs": [],
91 | "source": [
92 | "from ikomia.dataprocess.workflow import Workflow\n",
93 | "\n",
94 | "# Init your workflow\n",
95 | "wf = Workflow()\n",
96 | "\n",
97 | "# Add algorithm\n",
98 | "algo = wf.add_task(name = \"infer_mobile_segment_anything\", auto_connect=True)\n",
99 | "\n",
100 | "# Setting parameters: boxes on the wheels\n",
101 | "algo.set_parameters({\n",
102 | " \"input_box\": \"[[425, 600, 700, 875], [1240, 675, 1400, 750], [1375, 550, 1650, 800]]\"\n",
103 | "})\n",
104 | "\n",
105 | "# Run directly on your image\n",
106 | "wf.run_on(url=\"https://github.com/facebookresearch/segment-anything/blob/main/notebooks/images/truck.jpg?raw=true\")\n"
107 | ]
108 | },
109 | {
110 | "cell_type": "code",
111 | "execution_count": null,
112 | "metadata": {},
113 | "outputs": [],
114 | "source": [
115 | "from ikomia.utils.displayIO import display\n",
116 | "\n",
117 | "# Display segmentation mask\n",
118 | "from PIL import ImageShow\n",
119 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
120 | "\n",
121 | "display(algo.get_image_with_mask())"
122 | ]
123 | },
124 | {
125 | "cell_type": "markdown",
126 | "metadata": {},
127 | "source": [
128 | "### Point prompt (select mask output)"
129 | ]
130 | },
131 | {
132 | "cell_type": "code",
133 | "execution_count": null,
134 | "metadata": {},
135 | "outputs": [],
136 | "source": [
137 | "# Init your workflow\n",
138 | "wf = Workflow()\n",
139 | "\n",
140 | "# Add algorithm\n",
141 | "algo = wf.add_task(name = \"infer_mobile_segment_anything\", auto_connect=True)\n",
142 | "\n",
143 | "# Setting parameters: boxes on the wheels\n",
144 | "algo.set_parameters({\n",
145 | " \"input_point\": \"[500, 375]\",\n",
146 | " \"mask_id\":\"1\"\n",
147 | "})\n",
148 | "\n",
149 | "# Run directly on your image\n",
150 | "wf.run_on(url=\"https://github.com/facebookresearch/segment-anything/blob/main/notebooks/images/truck.jpg?raw=true\")\n",
151 | "\n",
152 | "# Display your image\n",
153 | "display(algo.get_image_with_mask())"
154 | ]
155 | },
156 | {
157 | "cell_type": "code",
158 | "execution_count": null,
159 | "metadata": {},
160 | "outputs": [],
161 | "source": [
162 | "# Init your workflow\n",
163 | "wf = Workflow()\n",
164 | "\n",
165 | "# Add algorithm\n",
166 | "algo = wf.add_task(name = \"infer_mobile_segment_anything\", auto_connect=True)\n",
167 | "\n",
168 | "# Setting parameters: boxes on the wheels\n",
169 | "algo.set_parameters({\n",
170 | " \"input_point\": \"[500, 375]\",\n",
171 | " \"mask_id\":\"3\"\n",
172 | "})\n",
173 | "\n",
174 | "# Run directly on your image\n",
175 | "wf.run_on(url=\"https://github.com/facebookresearch/segment-anything/blob/main/notebooks/images/truck.jpg?raw=true\")\n",
176 | "\n",
177 | "# Display your image\n",
178 | "display(algo.get_image_with_mask())"
179 | ]
180 | },
181 | {
182 | "cell_type": "code",
183 | "execution_count": null,
184 | "metadata": {},
185 | "outputs": [],
186 | "source": [
187 | "# Init your workflow\n",
188 | "wf = Workflow()\n",
189 | "\n",
190 | "# Add algorithm\n",
191 | "algo = wf.add_task(name = \"infer_mobile_segment_anything\", auto_connect=True)\n",
192 | "\n",
193 | "# Setting parameters: boxes on the wheels\n",
194 | "algo.set_parameters({\n",
195 | " \"input_box\": \"[425, 600, 700, 875]\",\n",
196 | " \"input_point\": \"[500, 375]\",\n",
197 | " \"input_point_label\": \"0\"\n",
198 | "})\n",
199 | "\n",
200 | "# Run directly on your image\n",
201 | "wf.run_on(url=\"https://github.com/facebookresearch/segment-anything/blob/main/notebooks/images/truck.jpg?raw=true\")\n",
202 | "\n",
203 | "# Display your image\n",
204 | "display(algo.get_image_with_mask())"
205 | ]
206 | },
207 | {
208 | "cell_type": "markdown",
209 | "metadata": {},
210 | "source": [
211 | "### Automatic mask generator"
212 | ]
213 | },
214 | {
215 | "cell_type": "code",
216 | "execution_count": null,
217 | "metadata": {},
218 | "outputs": [],
219 | "source": [
220 | "# Init your workflow\n",
221 | "wf = Workflow()\n",
222 | "\n",
223 | "# Add algorithm\n",
224 | "algo = wf.add_task(name = \"infer_mobile_segment_anything\", auto_connect=True)\n",
225 | "\n",
226 | "# Setting parameters: boxes on the wheels\n",
227 | "algo.set_parameters({\n",
228 | " \"points_per_side\": \"16\",\n",
229 | "})\n",
230 | "\n",
231 | "# Run directly on your image\n",
232 | "wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_work.jpg?raw=true\")\n",
233 | "\n",
234 | "# Display your image\n",
235 | "display(algo.get_image_with_mask())"
236 | ]
237 | },
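{
"cell_type": "markdown",
"metadata": {},
"source": [
"In automatic mode you can also tighten the quality filters. A minimal sketch with illustrative values only (the full parameter list is described below):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Init your workflow\n",
"wf = Workflow()\n",
"\n",
"# Add algorithm\n",
"algo = wf.add_task(name = \"infer_mobile_segment_anything\", auto_connect=True)\n",
"\n",
"# Illustrative values: fewer sampled points, stricter mask quality filtering\n",
"algo.set_parameters({\n",
"    \"points_per_side\": \"16\",\n",
"    \"stability_score_thres\": \"0.9\",\n",
"    \"iou_thres\": \"0.9\"\n",
"})\n",
"\n",
"# Run directly on your image\n",
"wf.run_on(url=\"https://github.com/Ikomia-dev/notebooks/blob/main/examples/img/img_work.jpg?raw=true\")\n",
"\n",
"# Display your image\n",
"display(algo.get_image_with_mask())"
]
},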
238 | {
239 | "cell_type": "markdown",
240 | "metadata": {},
241 | "source": [
242 | "#### List of parameters\n",
243 | "\n",
244 | "- **input_box** (list): A Nx4 array of given box prompts to the model, in [XYXY] or [[XYXY], [XYXY]] format.\n",
245 | "- **draw_graphic_input** (Boolean): When set to True, it allows you to draw graphics (box or point) over the object you wish to segment. If set to False, MobileSAM will automatically generate masks for the entire image.\n",
246 | "- **mask_id** (int) - default '1': When [a single graphic point](https://github.com/Ikomia-hub/infer_mobile_segment_anything#a-single-point) is selected, MobileSAM with generate three outputs given a single point (3 best scores). You can select which mask to output using the mask_id parameters (1, 2 or 3). \n",
247 | "- **input_point** (list, *optional*): A Nx2 array of point prompts to the model. Each point is in [X,Y] in pixels.\n",
248 | "- **input_point_label** (list, *optional*): A length N array of labels for the point prompts. 1 indicates a foreground point and 0 indicates a background point\n",
249 | "- **points_per_side** (int) - default '32' : (Automatic detection mode). The number of points to be sampled along one side of the image. The total number of points is points_per_side**2. \n",
250 | "- **points_per_batch** (int) - default '64': (Automatic detection mode). Sets the number of points run simultaneously by the model. Higher numbers may be faster but use more GPU memory.\n",
251 | "- **stability_score_thres** (float) - default '0.95': Filtering threshold in [0,1], using the stability of the mask under changes to the cutoff used to binarize the model's mask predictions.\n",
252 | "- **box_nms_thres** (float) - default '0.7': The box IoU cutoff used by non-maximal suppression to filter duplicate masks.\n",
253 | "- **iou_thres** (float) - default '0.88': A filtering threshold in [0,1], using the model's predicted mask quality.\n",
254 | "- **crop_n_layers** (int) - default '0' : If >0, mask prediction will be run again oncrops of the image. Sets the number of layers to run, where each layer has 2**i_layer number of image crops.\n",
255 | "- **crop_nms_thres** (float) - default '0': The box IoU cutoff used by non-maximal suppression to filter duplicate masks between different crops.\n",
256 | "- **crop_overlap_ratio** (float) default 'float(512 / 1500)'\n",
257 | "- **crop_n_points_downscale_factor** (int) - default '1' : The number of points-per-side sampled in layer n is scaled down by crop_n_points_downscale_factor**n.\n",
258 | "- **min_mask_region_area** (int) - default '0': op layer. Exclusive with points_per_side. min_mask_region_area (int): If >0, postprocessing will be applied to remove disconnected regions and holes in masks with area smaller than min_mask_region_area. \n",
259 | "- **input_size_percent** (int) - default '100': Percentage size of the input image. Can be reduce to save memory usage. "
260 | ]
261 | }
262 | ],
263 | "metadata": {
264 | "kernelspec": {
265 | "display_name": "venv310",
266 | "language": "python",
267 | "name": "python3"
268 | },
269 | "language_info": {
270 | "codemirror_mode": {
271 | "name": "ipython",
272 | "version": 3
273 | },
274 | "file_extension": ".py",
275 | "mimetype": "text/x-python",
276 | "name": "python",
277 | "nbconvert_exporter": "python",
278 | "pygments_lexer": "ipython3",
279 | "version": "3.10.11"
280 | },
281 | "orig_nbformat": 4
282 | },
283 | "nbformat": 4,
284 | "nbformat_minor": 2
285 | }
286 |
--------------------------------------------------------------------------------
/examples/HOWTO_use_SAM_and_SD_inpaint_with_Ikomia_API.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "attachments": {},
5 | "cell_type": "markdown",
6 | "metadata": {},
7 | "source": [
8 | "
\n",
9 | "\n",
10 | "\n"
11 | ]
12 | },
13 | {
14 | "attachments": {},
15 | "cell_type": "markdown",
16 | "metadata": {},
17 | "source": [
18 | "# How to run SAM and stable diffusion inpainting with the Ikomia API "
19 | ]
20 | },
21 | {
22 | "attachments": {},
23 | "cell_type": "markdown",
24 | "metadata": {},
25 | "source": [
26 | "Image segmentation is a critical task in Computer Vision, enabling machines to understand and analyze the contents of images at a pixel level. The Segment Anything Model (SAM) is a groundbreaking instance segmentation model developed by Meta Research, which has taken the field by storm since its release in April 2023. SAM offers unparalleled versatility and efficiency in image analysis tasks, making it a powerful tool for a wide range of applications.\n",
27 | "\n",
28 | "SAM's Promptable Features\n",
29 | "SAM was specifically designed to address the limitations of existing image segmentation models and to introduce new capabilities that revolutionize the field. One of SAM's standout features is its promptable segmentation task, which allows users to generate valid segmentation masks by providing prompts such as spatial or text clues (feature not yet released at the time of writing) that identify specific objects within an image. This flexibility empowers users to obtain precise and tailored segmentation results effortlessly:\n",
30 | "1.\tGenerate segmentation masks for all objects SAM can detect.\n",
31 | "2.\tProvide boxes to guide SAM in generating a mask for specific objects in an image.\n",
32 | "3.\tProvide a box and a point to guide SAM in generating a mask with an area to exclude."
33 | ]
34 | },
35 | {
36 | "attachments": {},
37 | "cell_type": "markdown",
38 | "metadata": {},
39 | "source": [
40 | "## Setup"
41 | ]
42 | },
43 | {
44 | "attachments": {},
45 | "cell_type": "markdown",
46 | "metadata": {},
47 | "source": [
48 | "You need to install Ikomia Python API with pip\n"
49 | ]
50 | },
51 | {
52 | "cell_type": "code",
53 | "execution_count": null,
54 | "metadata": {},
55 | "outputs": [],
56 | "source": [
57 | "!pip install ikomia"
58 | ]
59 | },
60 | {
61 | "attachments": {},
62 | "cell_type": "markdown",
63 | "metadata": {},
64 | "source": [
65 | "## Run the SAM and stable diffusion inpaint algorithms on your image"
66 | ]
67 | },
68 | {
69 | "cell_type": "markdown",
70 | "metadata": {},
71 | "source": [
72 | "_Note: The workflow outlined requires 6.1 GB of GPU RAM. However, by choosing the smallest SAM model, the memory usage can be decreased to 4.9 GB of GPU RAM._"
73 | ]
74 | },
75 | {
76 | "cell_type": "code",
77 | "execution_count": null,
78 | "metadata": {},
79 | "outputs": [],
80 | "source": [
81 | "from ikomia.dataprocess.workflow import Workflow\n",
82 | "from ikomia.utils import ik\n",
83 | "from ikomia.utils.displayIO import display\n",
84 | "\n",
85 | "# Init your workflow\n",
86 | "wf = Workflow()\n",
87 | "\n",
88 | "sam = wf.add_task(ik.infer_segment_anything(\n",
89 | " model_name='vit_l',\n",
90 | " input_box = '[204.8, 221.8, 769.7, 928.5]',\n",
91 | " # draw_graphic_input = 'True', # LOCAL RUN ONLY, Set to true for drawing graphics using the API\n",
92 | " ),\n",
93 | " auto_connect=True\n",
94 | ")\n",
95 | "\n",
96 | "sd_inpaint = wf.add_task(ik.infer_hf_stable_diffusion_inpaint(\n",
97 | " model_name = 'stabilityai/stable-diffusion-2-inpainting',\n",
98 | " prompt = 'dog, high resolution',\n",
99 | " negative_prompt = 'low quality',\n",
100 | " num_inference_steps = '100',\n",
101 | " guidance_scale = '7.5',\n",
102 | " num_images_per_prompt = '1'),\n",
103 | " auto_connect=True\n",
104 | ")\n",
105 | "\n",
106 | "# Run directly on your image\n",
107 | "wf.run_on(url=\"https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_cat.jpg\")"
108 | ]
109 | },
110 | {
111 | "cell_type": "code",
112 | "execution_count": null,
113 | "metadata": {},
114 | "outputs": [],
115 | "source": [
116 | "# Display segmentation mask\n",
117 | "from PIL import ImageShow\n",
118 | "ImageShow.register(ImageShow.IPythonViewer(), 0)\n",
119 | "\n",
120 | "display(sam.get_image_with_mask())"
121 | ]
122 | },
123 | {
124 | "cell_type": "code",
125 | "execution_count": null,
126 | "metadata": {},
127 | "outputs": [],
128 | "source": [
129 | "# Display inpainting output\n",
130 | "display(sd_inpaint.get_output(0).get_image())"
131 | ]
132 | },
133 | {
134 | "attachments": {},
135 | "cell_type": "markdown",
136 | "metadata": {},
137 | "source": [
138 | "## -Google Colab ONLY- Save your custom image in your Google Drive space"
139 | ]
140 | },
141 | {
142 | "cell_type": "code",
143 | "execution_count": null,
144 | "metadata": {},
145 | "outputs": [],
146 | "source": [
147 | "# Uncomment these lines if you're working on Colab\n",
148 | "\"\"\" from google.colab import drive\n",
149 | "drive.mount('/content/gdrive')\n",
150 | "\n",
151 | "cv2.imwrite(\"/content/gdrive/MyDrive/paint_img.png\", img_paint) \"\"\""
152 | ]
153 | },
154 | {
155 | "attachments": {},
156 | "cell_type": "markdown",
157 | "metadata": {},
158 | "source": [
159 | "## -Google Colab ONLY- Download directly your custom image"
160 | ]
161 | },
162 | {
163 | "cell_type": "code",
164 | "execution_count": null,
165 | "metadata": {},
166 | "outputs": [],
167 | "source": [
168 | "# Uncomment these lines if you're working on Colab\n",
169 | "\"\"\" from google.colab import files\n",
170 | "cv2.imwrite(\"/content/paint_img.png\", img_paint)\n",
171 | "files.download('/content/paint_img.png') \"\"\""
172 | ]
173 | }
174 | ],
175 | "metadata": {
176 | "kernelspec": {
177 | "display_name": "venvapi",
178 | "language": "python",
179 | "name": "venvapi"
180 | },
181 | "language_info": {
182 | "codemirror_mode": {
183 | "name": "ipython",
184 | "version": 3
185 | },
186 | "file_extension": ".py",
187 | "mimetype": "text/x-python",
188 | "name": "python",
189 | "nbconvert_exporter": "python",
190 | "pygments_lexer": "ipython3",
191 | "version": "3.9.13"
192 | },
193 | "orig_nbformat": 4
194 | },
195 | "nbformat": 4,
196 | "nbformat_minor": 2
197 | }
198 |
--------------------------------------------------------------------------------
/examples/Ikomia_SCALE_welcome_on_board.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "
\n",
8 | "\n",
9 | "\n"
10 | ]
11 | },
12 | {
13 | "cell_type": "markdown",
14 | "metadata": {},
15 | "source": [
16 | "# Tutorial: Onboarding to Ikomia SCALE\n",
17 | "\n",
18 | "Deploying Ikomia Workflows with `ikomia` and `ikomia-cli` Python packages.\n",
19 | "\n",
20 | "## Introduction\n",
21 | "\n",
22 | "In this tutorial, you will learn how to get started with Ikomia SCALE, a SaaS platform for deploying Ikomia workflows on dedicated endpoints. \n",
23 | "\n",
24 | "We will walk you through the process of:\n",
25 | "\n",
26 | "1. Setting up your environment\n",
27 | "2. Generating access tokens\n",
28 | "3. Creating a project on Ikomia SCALE\n",
29 | "4. Deploying workflows\n",
30 | "\n",
31 | "### Prerequisites\n",
32 | "\n",
33 | "Before you begin, make sure you have the following prerequisites in place:\n",
34 | "\n",
35 | "- An Ikomia SCALE account. If you don't have one, sign up at [Ikomia SCALE signup](https://app.ikomia.ai/signup).\n",
36 | "- Python installed on your local machine.\n",
37 | "\n",
38 | "## Step 1: Installation\n",
39 | "\n",
40 | "### Install the Ikomia CLI (Command Line Interface)\n",
41 | "\n",
42 | "The Ikomia CLI is a tool that allows you to interact with Ikomia SCALE from the command line. You can install it using pip:"
43 | ]
44 | },
45 | {
46 | "cell_type": "code",
47 | "execution_count": null,
48 | "metadata": {},
49 | "outputs": [],
50 | "source": [
51 | "!pip install \"ikomia-cli[full]\""
52 | ]
53 | },
54 | {
55 | "cell_type": "markdown",
56 | "metadata": {},
57 | "source": [
58 | "## Step 2: Generating Access Tokens\n",
59 | "\n",
60 | "Access tokens are required to authenticate with Ikomia SCALE. You can generate a token by running the following command:\n",
61 | "\n",
62 | "```bash\n",
63 | "ikcli login --token-ttl \n",
64 | "```\n",
65 | "\n",
66 | "Replace `` with the desired duration for the token's validity. For example, to generate a token that is valid for 1 hour (3 600 seconds), you can run:"
67 | ]
68 | },
69 | {
70 | "cell_type": "code",
71 | "execution_count": null,
72 | "metadata": {},
73 | "outputs": [],
74 | "source": [
75 | "!ikcli login --token-ttl 3600 --username \"\" --password \"\""
76 | ]
77 | },
78 | {
79 | "cell_type": "markdown",
80 | "metadata": {},
81 | "source": [
82 | "Where `` and `` are your credentials on Ikomia SCALE.\n",
83 | "\n",
84 | "Then, export your token as an environment variable:"
85 | ]
86 | },
87 | {
88 | "cell_type": "code",
89 | "execution_count": null,
90 | "metadata": {},
91 | "outputs": [],
92 | "source": [
93 | "%env IKOMIA_TOKEN="
94 | ]
95 | },
96 | {
97 | "cell_type": "markdown",
98 | "metadata": {},
99 | "source": [
100 | "Where `` is the access token you generated."
101 | ]
102 | },
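  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you are running outside Jupyter (where the `%env` magic is unavailable), here is a minimal sketch of the same step in plain Python; `<token>` is the placeholder to replace:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: set the IKOMIA_TOKEN environment variable from Python\n",
    "# (equivalent to the %env cell above; replace <token> with your token)\n",
    "import os\n",
    "os.environ[\"IKOMIA_TOKEN\"] = \"<token>\""
   ]
  },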
103 | {
104 | "cell_type": "markdown",
105 | "metadata": {},
106 | "source": [
107 | "## Step 3: Creating a Project on Ikomia SCALE\n",
108 | "\n",
109 | "Before deploying a workflow, you need to create a project on Ikomia SCALE. You can create a project using the Ikomia SCALE web interface.\n",
110 | "\n",
111 | "1. Log in to your [Ikomia SCALE account](https://app.ikomia.ai/).\n",
112 | "\n",
113 | "2. Click on the \"New project\" button.\n",
114 | "\n",
115 | "4. Provide a name and description for your project.\n",
116 | "\n",
117 | "5. Click the \"Create project\" button.\n",
118 | "\n",
119 | "## Step 4: Pushing a Workflow to Your Project\n",
120 | "\n",
121 | "Now that you have a project on Ikomia SCALE, you can push a workflow to it using the Ikomia CLI.\n",
122 | "\n",
123 | "1. Create a workflow from scratch with [Ikomia API](https://github.com/Ikomia-dev/IkomiaApi) and [Ikomia HUB](https://app.ikomia.ai/hub), or use the following example:"
124 | ]
125 | },
126 | {
127 | "cell_type": "code",
128 | "execution_count": null,
129 | "metadata": {},
130 | "outputs": [],
131 | "source": [
132 | "from ikomia.dataprocess.workflow import Workflow\n",
133 | "from ikomia.utils import ik\n",
134 | "\n",
135 | "#*********************************************\n",
136 | "## CREATE\n",
137 | "#*********************************************\n",
138 | "# Init your workflow\n",
139 | "# In this example, \"My Workflow\" is the name of your workflow that will appear in your dashboard in Ikomia SCALE.\n",
140 | "# The description is not mandatory and will appear in your dashboard in Ikomia SCALE.\n",
141 | "\n",
142 | "wf = Workflow(\"My Workflow\")\n",
143 | "\n",
144 | "wf.description = \"This workflow transforms your photo into a visually stunning artistic masterpiece.\"\n",
145 | "\n",
146 | "# Add Neural Style Transfer to your workflow\n",
147 | "# More information about this algorithm here https://github.com/Ikomia-hub/infer_neural_style_transfer\n",
148 | "nst = wf.add_task(ik.infer_neural_style_transfer(), auto_connect=True)\n",
149 | "\n",
150 | "#*********************************************\n",
151 | "## RUN\n",
152 | "#*********************************************\n",
153 | "# Run on a web image\n",
154 | "# It will download the model 'candy.t7' (Kandinsky style)\n",
155 | "wf.run_on(url=\"https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg\")"
156 | ]
157 | },
158 | {
159 | "cell_type": "markdown",
160 | "metadata": {},
161 | "source": [
162 | "If you want to visualize the results, you can run the following cell:"
163 | ]
164 | },
165 | {
166 | "cell_type": "code",
167 | "execution_count": null,
168 | "metadata": {},
169 | "outputs": [],
170 | "source": [
171 | "# Display your your image in the notebook\n",
172 | "from ikomia.utils.displayIO import display\n",
173 | "from PIL import ImageShow\n",
174 | "ImageShow.register(ImageShow.IPythonViewer(), 0) # <-- Specific for displaying in notebooks\n",
175 | "\n",
176 | "display(nst.get_input(0).get_image())\n",
177 | "display(nst.get_output(0).get_image())"
178 | ]
179 | },
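  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, you can also save the stylized result to disk. A minimal sketch using OpenCV, assuming `nst` from the cell above and that the output array is RGB (OpenCV writes BGR):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n",
    "\n",
    "# Fetch the stylized image from the task output (an RGB numpy array, by assumption)\n",
    "img_out = nst.get_output(0).get_image()\n",
    "# Convert RGB -> BGR before writing, since cv2.imwrite expects BGR\n",
    "cv2.imwrite(\"./stylized_img.png\", cv2.cvtColor(img_out, cv2.COLOR_RGB2BGR))"
   ]
  },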
180 | {
181 | "cell_type": "markdown",
182 | "metadata": {},
183 | "source": [
184 | "2. Save your workflow by exporting it as a JSON file that contains a description of its structure:"
185 | ]
186 | },
187 | {
188 | "cell_type": "code",
189 | "execution_count": null,
190 | "metadata": {},
191 | "outputs": [],
192 | "source": [
193 | "#*********************************************\n",
194 | "## EXPORT\n",
195 | "#*********************************************\n",
196 | "# Save the workflow as a JSON file in your current folder\n",
197 | "wf.save(\"./my_workflow.json\")"
198 | ]
199 | },
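  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check before pushing, you can load the exported file back and list its top-level keys (a sketch that makes no assumption about the exact schema):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# The export should parse as plain JSON\n",
    "with open(\"./my_workflow.json\") as f:\n",
    "    wf_def = json.load(f)\n",
    "\n",
    "print(list(wf_def.keys()))"
   ]
  },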
200 | {
201 | "cell_type": "markdown",
202 | "metadata": {},
203 | "source": [
204 | "3. Push the workflow to your project on Ikomia SCALE."
205 | ]
206 | },
207 | {
208 | "cell_type": "code",
209 | "execution_count": null,
210 | "metadata": {},
211 | "outputs": [],
212 | "source": [
213 | "!ikcli project push my_workflow.json"
214 | ]
215 | },
216 | {
217 | "cell_type": "markdown",
218 | "metadata": {},
219 | "source": [
220 | "Replace `` with the name of your project on Ikomia SCALE.\n",
221 | "\n",
222 | "## Step 5: Deploying a Workflow\n",
223 | "\n",
224 | "Now that your workflow is in your project on Ikomia SCALE, you can deploy it on a dedicated endpoint.\n",
225 | "\n",
226 | "1. Log in to your [Ikomia SCALE account](https://app.ikomia.ai)\n",
227 | "\n",
228 | "2. Go to the project and select your workflow\n",
229 | "\n",
230 | "3. Configure and deploy your workflow\n",
231 | "\n",
232 | "6. Wait until it's running (it can take several minutes)\n",
233 | "7. Click on the endpoint URL to test it online\n",
234 | "\n",
235 | "## Conclusion\n",
236 | "\n",
237 | "Congratulations! You have successfully onboarded to Ikomia SCALE, created a project, and deployed your first workflow. \n",
238 | "\n",
239 | "You can now use the power of Ikomia to scale your image processing tasks effortlessly.\n",
240 | "\n",
241 | "* Any questions? Contact [the Ikomia Team](mailto:team@ikomia.ai)\n",
242 | "\n",
243 | "* Any technical problems? Contact [the Ikomia Support](mailto:support@ikomia.ai)\n",
244 | "\n",
245 | "* Want to discuss with us? Come on our [discord channel](https://discord.com/invite/82Tnw9UGGc)!\n",
246 | "\n",
247 | "## Next steps\n",
248 | "\n",
249 | "* More algorithms on [Ikomia HUB](https://app.ikomia.ai/hub)\n",
250 | "* More explanations on Ikomia workflows in the [API documentation](https://ikomia-dev.github.io/python-api-documentation/getting_started.html)\n",
251 | "\n",
252 | "## Additional Resources\n",
253 | "\n",
254 | "- [Ikomia Website](https://www.ikomia.ai/)\n",
255 | "- [Ikomia blog](https://www.ikomia.ai/blog)\n",
256 | "- [Ikomia API](https://github.com/Ikomia-dev/IkomiaApi)"
257 | ]
258 | },
259 | {
260 | "cell_type": "code",
261 | "execution_count": null,
262 | "metadata": {},
263 | "outputs": [],
264 | "source": []
265 | }
266 | ],
267 | "metadata": {
268 | "kernelspec": {
269 | "display_name": "Python 3 (ipykernel)",
270 | "language": "python",
271 | "name": "python3"
272 | },
273 | "language_info": {
274 | "codemirror_mode": {
275 | "name": "ipython",
276 | "version": 3
277 | },
278 | "file_extension": ".py",
279 | "mimetype": "text/x-python",
280 | "name": "python",
281 | "nbconvert_exporter": "python",
282 | "pygments_lexer": "ipython3",
283 | "version": "3.8.18"
284 | }
285 | },
286 | "nbformat": 4,
287 | "nbformat_minor": 4
288 | }
--------------------------------------------------------------------------------
/examples/img/banner_ikomia.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/banner_ikomia.png
--------------------------------------------------------------------------------
/examples/img/display_inst_seg.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/display_inst_seg.png
--------------------------------------------------------------------------------
/examples/img/img_LR.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_LR.jpg
--------------------------------------------------------------------------------
/examples/img/img_LR_vangogh.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_LR_vangogh.png
--------------------------------------------------------------------------------
/examples/img/img_aerial.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_aerial.jpg
--------------------------------------------------------------------------------
/examples/img/img_aerial_bbox.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_aerial_bbox.png
--------------------------------------------------------------------------------
/examples/img/img_bike_rider.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_bike_rider.jpeg
--------------------------------------------------------------------------------
/examples/img/img_bike_rider_2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_bike_rider_2.jpg
--------------------------------------------------------------------------------
/examples/img/img_cars_city.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_cars_city.jpg
--------------------------------------------------------------------------------
/examples/img/img_cars_road.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_cars_road.jpg
--------------------------------------------------------------------------------
/examples/img/img_cat.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_cat.jpg
--------------------------------------------------------------------------------
/examples/img/img_city.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_city.jpeg
--------------------------------------------------------------------------------
/examples/img/img_city_2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_city_2.jpg
--------------------------------------------------------------------------------
/examples/img/img_d2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_d2.png
--------------------------------------------------------------------------------
/examples/img/img_d2_original.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_d2_original.jpg
--------------------------------------------------------------------------------
/examples/img/img_dog.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_dog.png
--------------------------------------------------------------------------------
/examples/img/img_face.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_face.jpg
--------------------------------------------------------------------------------
/examples/img/img_fireman.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_fireman.jpg
--------------------------------------------------------------------------------
/examples/img/img_fireman_bbox.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_fireman_bbox.png
--------------------------------------------------------------------------------
/examples/img/img_fireman_pose.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_fireman_pose.png
--------------------------------------------------------------------------------
/examples/img/img_foot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_foot.png
--------------------------------------------------------------------------------
/examples/img/img_foot_bbox.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_foot_bbox.png
--------------------------------------------------------------------------------
/examples/img/img_living_room.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_living_room.jpg
--------------------------------------------------------------------------------
/examples/img/img_man_living_room.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_man_living_room.jpg
--------------------------------------------------------------------------------
/examples/img/img_people.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_people.jpg
--------------------------------------------------------------------------------
/examples/img/img_people_living_room.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_people_living_room.jpg
--------------------------------------------------------------------------------
/examples/img/img_people_workspace.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_people_workspace.jpg
--------------------------------------------------------------------------------
/examples/img/img_porsche.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_porsche.jpg
--------------------------------------------------------------------------------
/examples/img/img_porsche_res.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_porsche_res.png
--------------------------------------------------------------------------------
/examples/img/img_portrait.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_portrait.jpg
--------------------------------------------------------------------------------
/examples/img/img_portrait_3.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_portrait_3.jpeg
--------------------------------------------------------------------------------
/examples/img/img_portrait_4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_portrait_4.jpg
--------------------------------------------------------------------------------
/examples/img/img_portrait_5.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_portrait_5.jpg
--------------------------------------------------------------------------------
/examples/img/img_protrait_2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_protrait_2.jpg
--------------------------------------------------------------------------------
/examples/img/img_runners.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_runners.jpg
--------------------------------------------------------------------------------
/examples/img/img_starry_night.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_starry_night.jpg
--------------------------------------------------------------------------------
/examples/img/img_taxi.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_taxi.jpg
--------------------------------------------------------------------------------
/examples/img/img_text_inspiration.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_text_inspiration.jpg
--------------------------------------------------------------------------------
/examples/img/img_train.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_train.jpg
--------------------------------------------------------------------------------
/examples/img/img_work.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_work.jpg
--------------------------------------------------------------------------------
/examples/img/img_zebra.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ikomia-dev/notebooks/debc9b912b469ce2a845cbb0967dce1d17822f6f/examples/img/img_zebra.jpeg
--------------------------------------------------------------------------------