├── .github
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   └── feature_request.md
│   └── PULL_REQUEST_TEMPLATE.md
├── .gitignore
├── README.md
├── assets
│   ├── config.png
│   ├── gazebo.png
│   └── rqt_yolov8.png
├── contributing.md
├── detector_plugin.md
├── object_detection
│   ├── config
│   │   └── params.yaml
│   ├── launch
│   │   └── object_detection.launch.py
│   ├── object_detection
│   │   ├── DetectorBase.py
│   │   ├── Detectors
│   │   │   ├── EfficientDet.py
│   │   │   ├── RetinaNet.py
│   │   │   ├── YOLOv5.py
│   │   │   ├── YOLOv8.py
│   │   │   └── __init__.py
│   │   ├── ObjectDetection.py
│   │   └── __init__.py
│   ├── package.xml
│   ├── requirements.txt
│   ├── resource
│   │   └── object_detection
│   ├── setup.cfg
│   ├── setup.py
│   └── test
│       ├── test_copyright.py
│       ├── test_flake8.py
│       └── test_pep257.py
└── perception_bringup
    ├── CMakeLists.txt
    ├── config
    │   └── bridge.yaml
    ├── env_hooks
    │   └── perception_bringup.sh.in
    ├── launch
    │   └── playground.launch.py
    ├── models
    │   ├── adhesive
    │   │   ├── meshes
    │   │   │   └── adhesive.dae
    │   │   ├── model.config
    │   │   └── model.sdf
    │   ├── first_aid
    │   │   ├── model.config
    │   │   └── model.sdf
    │   ├── medicine
    │   │   ├── materials
    │   │   │   └── textures
    │   │   │       └── texture.png
    │   │   ├── meshes
    │   │   │   ├── model.mtl
    │   │   │   └── model.obj
    │   │   ├── model.config
    │   │   ├── model.sdf
    │   │   └── thumbnails
    │   │       ├── 0.jpg
    │   │       ├── 1.jpg
    │   │       ├── 2.jpg
    │   │       ├── 3.jpg
    │   │       └── 4.jpg
    │   ├── plastic_cup
    │   │   ├── meshes
    │   │   │   └── plastic_cup.dae
    │   │   ├── model.config
    │   │   ├── model.sdf
    │   │   └── thumbnails
    │   │       ├── 1.png
    │   │       ├── 2.png
    │   │       ├── 3.png
    │   │       ├── 4.png
    │   │       └── 5.png
    │   └── table
    │       ├── Table_Diffuse.jpg
    │       ├── model-1_2.sdf
    │       ├── model-1_3.sdf
    │       ├── model-1_4.sdf
    │       ├── model.config
    │       ├── model.sdf
    │       └── thumbnails
    │           ├── 1.png
    │           ├── 2.png
    │           ├── 3.png
    │           ├── 4.png
    │           └── 5.png
    ├── package.xml
    └── worlds
        └── playground.sdf
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Create a report to help us improve
4 | title: ''
5 | labels: bug
6 | assignees: ''
7 |
8 | ---
9 |
15 |
16 | ## Environment 🖥️
17 | * OS Version:
18 |
19 | * Python Version:
20 |
21 | * Do you have a dedicated GPU?
22 |
23 | * NVIDIA Driver Version (if applicable):
24 |
25 | * CUDA Version (if applicable):
26 |
27 | * Are you running this project natively or using Docker?
28 |
29 | * Is the issue present in the main branch or any of the development branches?
30 |
31 |
32 |
39 | ## Description 📖
40 | * Expected behavior:
41 | * Actual behavior:
42 |
43 | ## Steps to reproduce 👀
44 |
45 |
46 | 1.
47 | 2.
48 | 3.
49 |
50 | ## Output 💥
51 |
52 | ```
53 | # paste log here
54 | ```
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature Request Template
3 | about: "For feature requests. Please search for existing issues first. Also see CONTRIBUTING."
4 |
5 | ---
6 |
7 | **Please Describe The Problem To Be Solved**
8 | (Replace This Text: Please present a concise description of the problem to be addressed by this feature request. Please be clear what parts of the problem are considered to be in-scope and out-of-scope.)
9 |
10 | **(Optional): Suggest A Solution**
11 | (Replace This Text: A concise description of your preferred solution. Things to address include:
12 | * Details of the technical implementation
13 | * Tradeoffs made in design decisions
14 | * Caveats and considerations for the future
15 |
16 | If there are multiple solutions, please present each one separately. Save comparisons for the very end.)
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Pull Request template
3 | about: Mention details about your Pull Request
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 | # Description 📖
11 |
12 | Please include a summary of the changes and the related issue. Please also include relevant motivation and context. List any dependencies that are required for this change.
13 |
14 | Fixes # (issue)
15 |
16 | ## Type of change 📜
17 |
18 | Please delete options that are not relevant.
19 |
20 | - [ ] Bug fix (non-breaking change which fixes an issue)
21 | - [ ] New feature (non-breaking change which adds functionality)
22 | - [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
23 | - [ ] This change requires a documentation update
24 | - [ ] This change requires testing before it can be merged into the main/development branch
25 |
26 | # How Has This Been Tested? 👀
27 |
28 | Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce them. Please also list any relevant details for your test configuration.
29 |
30 | - [ ] Test A
31 | - [ ] Test B
32 |
33 | **Test Configuration** 🖥️
34 | * OS version:
35 | * Hardware:
36 | * NVIDIA Driver:
37 | * CUDA Version:
38 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Ignore the test node
2 | /object_detection/object_detection/test_node.py
3 |
4 | # Ignore pycache dirs
5 | object_detection/object_detection/Detectors/__pycache__/
6 | object_detection/object_detection/__pycache__/
7 |
8 | # Ignore .vscode dir
9 | .vscode
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ROS-Perception-Pipeline
2 |
3 |
20 | [![Contributors][contributors-shield]][contributors-url]
21 | [![Forks][forks-shield]][forks-url]
22 | [![Stargazers][stars-shield]][stars-url]
23 | [![Issues][issues-shield]][issues-url]
24 | [![LinkedIn][linkedin-shield]][linkedin-url]
25 |
26 |
27 |
28 |
29 |
30 |
48 |
49 |
50 |
51 |
52 |
53 | ## Table of Contents
54 |
55 | - [About The Project](#about-the-project)
56 |   - [Built With](#built-with)
57 | - [Getting Started](#getting-started)
58 |   - [Prerequisites](#prerequisites)
59 |   - [Installation](#installation)
60 | - [Usage](#usage)
61 | - [Contributing](#contributing)
62 | - [License](#license)
63 | - [Contact](#contact)
64 | - [Acknowledgments](#acknowledgments)
79 |
80 |
81 |
82 |
83 |
84 | ## About The Project
85 |
86 | Our aim is to build a one-stop solution to all the problems related to Robotics-Perception. We are creating a plug-and-play ROS 2 based perception-pipeline which can be customized
87 | for user-specific custom tasks in the blink of an eye. We are in the process of creating different components for tasks like Object Detection, Image Pre-Processing, Image Segmentation etc.
88 | These components can be stitched together to make a custom pipeline for any use-case, just like how we play with LEGO bricks.
89 |
90 | (back to top)
91 |
92 | ### Built With
93 |
94 | * [Sphinx](https://www.sphinx-docs.org)
95 | * [OpenCV](https://opencv.org/)
96 | * [Ubuntu](https://ubuntu.com/)
97 | * [Python](https://www.python.org/)
98 |
99 | (back to top)
100 |
101 |
102 | ## Getting Started
103 |
104 | Follow these steps to set up this project on your system.
105 |
106 | ### Prerequisites
107 |
108 | Follow these steps to install ROS Humble and OpenCV
109 | * ROS Humble
110 | Refer to the official [ROS 2 installation guide](https://docs.ros.org/en/humble/Installation.html)
111 |
112 | * OpenCV
113 | ```bash
114 | pip install opencv-contrib-python
115 | ```
116 |
117 | ### Installation
118 |
119 | 1. Make a new workspace
120 | ```bash
121 | mkdir -p percep_ws/src
122 | ```
123 |
124 | 2. Clone the ROS-Perception-Pipeline repository
125 |
126 | Now go ahead and clone this repository inside the "src" folder of the workspace you just created.
127 |
128 | ```bash
129 | cd percep_ws/src
130 | git clone git@github.com:atom-robotics-lab/ros-perception-pipeline.git
131 | ```
132 |
133 | 3. Compile the package
134 |
135 | Go back to the root of your workspace and run the following command to build the package
136 |
137 | ```bash
138 | cd .. && colcon build --symlink-install
139 | ```
140 |
141 | 4. Source your workspace
142 | ```bash
143 | source install/local_setup.bash
144 | ```
145 |
146 |
147 |
148 |
149 | ## Usage
150 |
151 |
152 |
153 | ### 1. Launch the Playground simulation
154 | We have made a demo playground world to test our pipeline. To launch this world, run the command
155 | given below
156 |
157 | ```bash
158 | ros2 launch perception_bringup playground.launch.py
159 | ```
160 | The above command will launch the playground world as shown below:
161 |
162 | ![Playground world in Gazebo](assets/gazebo.png)
163 |
164 |
165 | Don't forget to click on the **play** button on the bottom left corner of the Ignition Gazebo window.
166 |
167 |
168 |
169 | ### 2. Launch the Object Detection node
170 |
171 |
172 | Use the pip install command as shown below to install the required packages.
173 | ```bash
174 | pip install -r src/ros-perception-pipeline/object_detection/requirements.txt
175 | ```
176 |
177 | Use the command given below to run the ObjectDetection node. Remember to change the path of the params.yaml
178 | file according to your present working directory
179 |
180 | ```bash
181 | ros2 run object_detection ObjectDetection --ros-args --params-file src/ros-perception-pipeline/object_detection/config/params.yaml
182 | ```
183 |
184 | ### 3. Changing the Detector
185 |
186 | To change the object detector being used, you can change the parameters inside the params.yaml file located inside
187 | the **config** folder, as shown in the example below.
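For reference, the default `config/params.yaml` shipped with the package looks like this; `detector_type` selects the plugin, and `model_dir_path`/`weight_file_name` point at your model files:

```yaml
object_detection:
  ros__parameters:
    input_img_topic: color_camera/image_raw
    output_bb_topic: object_detection/img_bb
    output_img_topic: object_detection/img
    model_params:
      detector_type: YOLOv5
      model_dir_path: models/
      weight_file_name: auto_final.onnx
      confidence_threshold: 0.7
      show_fps: 1
```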
188 |
189 | ![Changing the detector parameters in params.yaml](assets/config.png)
190 |
191 |
192 | ## Testing
193 |
194 | Now, to see the inference results, open a new terminal and enter the command given below
195 |
196 | ```bash
197 | ros2 run rqt_image_view rqt_image_view
198 | ```
199 |
200 | ![Inference results in rqt_image_view](assets/rqt_yolov8.png)
201 |
202 | (back to top)
203 |
204 |
205 |
219 |
220 |
221 |
222 |
223 | ## Contributing
224 |
225 | Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
226 |
227 | 1. Fork the Project
228 | 2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
229 | 3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
230 | 4. Push to the Branch (`git push origin feature/AmazingFeature`)
231 | 5. Open a Pull Request
232 |
233 | (back to top)
234 |
235 |
236 | ## License
237 |
238 | Distributed under the MIT License. See `LICENSE` for more information.
239 |
240 |
241 |
242 |
243 | ## Contact
244 |
245 | Our Socials - [Linktree](https://linktr.ee/atomlabs) - atom@inventati.org
246 |
247 | (back to top)
248 |
249 |
250 | ## Acknowledgments
251 |
252 | * [Our wiki](https://atom-robotics-lab.github.io/wiki)
253 | * [ROS Official Documentation](http://wiki.ros.org/Documentation)
254 | * [OpenCV Official Documentation](https://docs.opencv.org/4.x/)
255 | * [RViz Documentation](http://wiki.ros.org/rviz)
256 | * [Gazebo Tutorials](https://classic.gazebosim.org/tutorials)
257 | * [Ubuntu Installation guide](https://ubuntu.com/tutorials/install-ubuntu-desktop#1-overview)
258 |
259 | (back to top)
260 |
261 |
262 |
263 |
264 |
265 | [contributors-shield]: https://img.shields.io/github/contributors/atom-robotics-lab/ros-perception-pipeline.svg?style=for-the-badge
266 | [contributors-url]: https://github.com/atom-robotics-lab/ros-perception-pipeline/graphs/contributors
267 | [forks-shield]: https://img.shields.io/github/forks/atom-robotics-lab/ros-perception-pipeline.svg?style=for-the-badge
268 | [forks-url]: https://github.com/atom-robotics-lab/ros-perception-pipeline/network/members
269 | [stars-shield]: https://img.shields.io/github/stars/atom-robotics-lab/ros-perception-pipeline.svg?style=for-the-badge
270 | [stars-url]: https://github.com/atom-robotics-lab/ros-perception-pipeline/stargazers
271 | [issues-shield]: https://img.shields.io/github/issues/atom-robotics-lab/ros-perception-pipeline.svg?style=for-the-badge
272 | [issues-url]: https://github.com/atom-robotics-lab/ros-perception-pipeline/issues
273 | [linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=for-the-badge&logo=linkedin&colorB=555
274 | [linkedin-url]: https://www.linkedin.com/company/a-t-o-m-robotics-lab/
--------------------------------------------------------------------------------
/assets/config.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/assets/config.png
--------------------------------------------------------------------------------
/assets/gazebo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/assets/gazebo.png
--------------------------------------------------------------------------------
/assets/rqt_yolov8.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/assets/rqt_yolov8.png
--------------------------------------------------------------------------------
/contributing.md:
--------------------------------------------------------------------------------
1 | ## Contributing
2 |
3 | Hi there! We're thrilled that you'd like to contribute to this project. Your help is essential for keeping it great.
4 |
5 |
6 | ## Opening an issue
7 |
8 | Thank you for taking the time to open an issue; your feedback helps make this project better.
9 | Before opening an issue, please make sure that your issue hasn't already been reported by using [GitHub search](https://help.github.com/articles/searching-issues/)
10 |
11 | Here are a few things that will help us resolve your issue:
12 |
13 | - A descriptive title that gives an idea of what your issue refers to
14 | - A thorough description of the issue (one-word descriptions are very hard to understand)
15 | - Screenshots (if appropriate)
16 | - Links (if appropriate)
17 |
18 | ## Submitting a pull request
19 |
20 | * Clone the repository using:
21 | ```bash
22 | git clone git@github.com:atom-robotics-lab/ros-perception-pipeline.git
23 | ```
24 | * This project is built on ROS, an open-source robotics framework. Use this [link](http://wiki.ros.org/) to install and set up ROS
25 |
26 | * Refer to the section below if your task requires you to make changes to the repository.
27 |
28 | * Fork the repo and create a new branch:
29 | ```bash
30 | git checkout -b my-branch-name
31 | ```
32 | * Make your changes, commit, and push to your branch. Then [submit a Pull Request](https://github.com/atom-robotics-lab/ros-perception-pipeline/pulls)
33 |
34 | * Wait for your pull request to be reviewed and merged!
35 |
36 | ## Resources
37 |
38 | - [Contributing to Open Source on GitHub](https://guides.github.com/activities/contributing-to-open-source/)
39 | - [Using Pull Requests](https://help.github.com/articles/using-pull-requests/)
40 | - [GitHub Help](https://help.github.com)
41 | - [ROS Wiki](http://wiki.ros.org/)
--------------------------------------------------------------------------------
/detector_plugin.md:
--------------------------------------------------------------------------------
1 | # Detector Plugin Architecture
2 |
3 | The `object_detection` package follows a plugin-based architecture for allowing the use of different object detection models. These can be loaded at launch time by setting the [`detector_type`](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/2c8152d6a5ae5b5f6e3541648ae97d9ba79ac6a9/object_detection/config/params.yaml#L7)
4 | param in the `config/params.yaml` file of the package. Currently the package supports the following detectors out of the box:
5 | * YOLOv5
6 | * YOLOv8
7 | * RetinaNet
8 | * EfficientDet
9 |
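For example, switching the pipeline over to the YOLOv8 plugin only requires changing one value in `config/params.yaml` (the remaining `model_params` entries stay as they are):

```yaml
model_params:
  detector_type: YOLOv8
```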
10 | The package provides a [`DetectorBase`](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/detector_plugin_architecture/object_detection/object_detection/DetectorBase.py) class, which is an abstract class.
11 | It uses Python's built-in `abc.ABC` to define the abstract class and the `abc.abstractmethod` decorator to define the blueprint for the methods that each plugin should implement.
12 | All the detector plugin classes are stored in the [Detectors](https://github.com/atom-robotics-lab/ros-perception-pipeline/tree/detector_plugin_architecture/object_detection/object_detection/Detectors) directory of the
13 | `object_detection` package.
14 |
15 | ## Creating Your own Detector Plugin
16 | To create your own detector plugin, follow the steps below:
17 | * Create a file for your Detector class inside the [Detectors](https://github.com/atom-robotics-lab/ros-perception-pipeline/tree/detector_plugin_architecture/object_detection/object_detection/Detectors) directory.
18 | * The file should import the `DetectorBase` class from the [`DetectorBase.py`](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/detector_plugin_architecture/object_detection/object_detection/DetectorBase.py) module. You can create a class for your Detector in this file which should inherit the `DetectorBase` abstract class.
19 |
20 | > **Note:** The name of the file and class should be the same. This is required in order to allow the [`object_detection node`](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/071c63aa4bc71913d4bf5a4c7f9b4fd03b136338/object_detection/object_detection/ObjectDetection.py#L79) to load the module and its class using the value of the `detector_type` param.
21 | * Inside the class constructor, make sure to call the constructor of the `DetectorBase` class using `super().__init__()`. This initializes an empty `predictions` list that is used to store the predictions later (explained below).
22 |
23 | * After this, the Detector plugin class needs to implement the abstract methods listed below. These are defined in the `DetectorBase` class and provide a signature for the function's implementations. These abstract methods act as a standard API between the Detector plugins and the ObjectDetection node. The plugins only need to match the function signature (parameter and return types) to allow ObjectDetection node to use them.
24 |     * [`build_model()`](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/071c63aa4bc71913d4bf5a4c7f9b4fd03b136338/object_detection/object_detection/DetectorBase.py#L21): It takes 2 strings as parameters: `model_dir_path` and `weight_file_name`. `model_dir_path` is the path which contains the model file and class file. The `weight_file_name` is the name of the weights file (like `.onnx` in the case of YOLOv5 models). This function should not return anything and is used by the ObjectDetection node [here](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/071c63aa4bc71913d4bf5a4c7f9b4fd03b136338/object_detection/object_detection/ObjectDetection.py#L83). While creating the plugin, you need not worry about the parameters as they are provided by the node through the ROS 2 params. You just need to use their values inside the functions according to your Detector's requirements.
25 |
26 | * [`load_classes()`](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/071c63aa4bc71913d4bf5a4c7f9b4fd03b136338/object_detection/object_detection/DetectorBase.py#L25): This is similar to the `build_model()` function. It should load the classes file as per the requirement using the provided `model_dir_path` parameter.
27 |
28 | * [`get_predictions()`](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/071c63aa4bc71913d4bf5a4c7f9b4fd03b136338/object_detection/object_detection/DetectorBase.py#L29): This function is [used by the ObjectDetection node](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/071c63aa4bc71913d4bf5a4c7f9b4fd03b136338/object_detection/object_detection/ObjectDetection.py#L92) in the subscriber callback to get the predictions for each frame passed. This function should take an opencv image (which is essentially a numpy array) as a parameter and return a list of dictionaries that contain the predictions. This function can implement any kind of checks, formatting, preprocessing, etc. on the frame (see [this](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/071c63aa4bc71913d4bf5a4c7f9b4fd03b136338/object_detection/object_detection/Detectors/YOLOv5.py#L131) for example). It only strictly needs to follow the signature described by the abstract method definition in `DetectorBase`. To create the predictions list, the function should call the [`create_predictions_list()`](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/071c63aa4bc71913d4bf5a4c7f9b4fd03b136338/object_detection/object_detection/DetectorBase.py#L10) function from the `DetectorBase` class like [this](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/071c63aa4bc71913d4bf5a4c7f9b4fd03b136338/object_detection/object_detection/Detectors/YOLOv5.py#L144). Using the `create_predictions_list()` function is necessary as it arranges the prediction data in a standard format that the `ObjectDetection` node expects.
29 |
30 | > **Note:** For reference, you can go through the [YOLOv5 plugin class](https://github.com/atom-robotics-lab/ros-perception-pipeline/blob/detector_plugin_architecture/object_detection/object_detection/Detectors/YOLOv5.py#L131) to see how it implements all the abstract methods.
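To make these steps concrete, here is a minimal sketch of a plugin file. The class name `MyDetector` and everything inside the method bodies are hypothetical placeholders; only the `DetectorBase` API it implements comes from the package:

```python
import numpy as np

from ..DetectorBase import DetectorBase


class MyDetector(DetectorBase):  # hypothetical plugin; the file must be named MyDetector.py

    def __init__(self):
        super().__init__()  # initializes the empty predictions list

    def build_model(self, model_dir_path: str, weight_file_name: str) -> None:
        # load your model weights, e.g. from os.path.join(model_dir_path, weight_file_name)
        pass

    def load_classes(self, model_dir_path: str) -> None:
        # load the label mapping, e.g. from model_dir_path + "/classes.txt"
        pass

    def get_predictions(self, cv_image: np.ndarray) -> list[dict]:
        self.predictions = []
        # run inference on cv_image here and fill these three lists,
        # one entry per detection, from your model's raw output
        class_ids, confidences, boxes = [], [], []
        # arranges the data in the standard format the ObjectDetection node expects
        self.create_predictions_list(class_ids, confidences, boxes)
        return self.predictions
```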
--------------------------------------------------------------------------------
/object_detection/config/params.yaml:
--------------------------------------------------------------------------------
1 | object_detection:
2 | ros__parameters:
3 | input_img_topic: color_camera/image_raw
4 | output_bb_topic: object_detection/img_bb
5 | output_img_topic: object_detection/img
6 | model_params:
7 | detector_type: YOLOv5
8 | model_dir_path: models/
9 | weight_file_name: auto_final.onnx
10 |       confidence_threshold: 0.7
11 |       show_fps: 1
--------------------------------------------------------------------------------
/object_detection/launch/object_detection.launch.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2018 Intel Corporation
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | import os
16 |
17 | from ament_index_python.packages import get_package_share_directory
18 |
19 | from launch import LaunchDescription
20 | from launch_ros.actions import Node
25 |
26 |
27 | def generate_launch_description():
28 | pkg_object_detection = get_package_share_directory("object_detection")
29 |
30 | params = os.path.join(
31 | pkg_object_detection,
32 | 'config',
33 | 'params.yaml'
34 | )
35 |
36 |     node = Node(
37 |         package='object_detection',
38 |         name='object_detection',
39 |         executable='ObjectDetection',
40 |         parameters=[params],
41 |         output='screen'
42 |     )
43 |
44 |
45 | return LaunchDescription([node])
46 |
--------------------------------------------------------------------------------
/object_detection/object_detection/DetectorBase.py:
--------------------------------------------------------------------------------
1 | from abc import ABC, abstractmethod
2 | import numpy as np
3 |
4 |
5 | class DetectorBase(ABC):
6 |
7 | def __init__(self) -> None:
8 | self.predictions = []
9 |
10 | def create_predictions_list(self, class_ids, confidences, boxes):
11 | for i in range(len(class_ids)):
12 | obj_dict = {
13 | "class_id": class_ids[i],
14 | "confidence": confidences[i],
15 | "box": boxes[i]
16 | }
17 |
18 | self.predictions.append(obj_dict)
19 |
20 | @abstractmethod
21 | def build_model(self, model_dir_path: str, weight_file_name: str) -> None:
22 | pass
23 |
24 | @abstractmethod
25 | def load_classes(self, model_dir_path: str) -> None:
26 | pass
27 |
28 | @abstractmethod
29 | def get_predictions(self, cv_image: np.ndarray) -> list[dict]:
30 | pass
--------------------------------------------------------------------------------
/object_detection/object_detection/Detectors/EfficientDet.py:
--------------------------------------------------------------------------------
1 | import time  # for measuring inference time and FPS
2 |
3 | import cv2
4 | import numpy as np
5 | import tensorflow as tf
6 | import tensorflow_hub as hub
7 |
8 | # For drawing onto the image.
9 | from PIL import Image
10 | from PIL import ImageColor
11 | from PIL import ImageDraw
12 | from PIL import ImageFont
22 | class EfficientDet:
23 | def __init__(self, model_dir_path, weight_file_name, conf_threshold = 0.7, score_threshold = 0.25, nms_threshold = 0.4):
24 |
25 | self.frame_count = 0
26 | self.total_frames = 0
27 | self.fps = -1
28 | self.start = time.time_ns()
29 | self.frame = None
30 |
31 | self.model_dir_path = model_dir_path
32 | self.weight_file_name = weight_file_name
33 | self.conf=conf_threshold
34 |
35 | # Resizing image
36 | self.img_height=800
37 | self.img_width=800
38 | self.predictions=[]
39 |
40 | self.build_model()
41 | self.load_classes()
42 |
43 | def build_model(self) :
44 | module_handle="https://tfhub.dev/tensorflow/efficientdet/d0/1"
45 | # Loading model directly from TensorFlow Hub
46 | self.detector = hub.load(module_handle)
47 |
48 |
49 | def load_classes(self):
50 | self.labels = []
51 | with open(self.model_dir_path + "/classes.txt", "r") as f:
52 | self.labels = [cname.strip() for cname in f.readlines()]
53 | return self.labels
54 |
55 | def display_image(self,image):
56 | cv2.imshow("result", image)
57 | cv2.waitKey(0)
58 | cv2.destroyAllWindows()
59 |
60 | def draw_bounding_box_on_image(self,image,ymin,xmin,ymax,xmax,color,font,thickness=4,display_str_list=()):
61 | """Adds a bounding box to an image."""
62 | draw = ImageDraw.Draw(image)
63 | im_width, im_height = image.size
64 | (left, right, top, bottom) = (xmin * im_width, xmax * im_width,
65 | ymin * im_height, ymax * im_height)
66 | draw.line([(left, top), (left, bottom), (right, bottom), (right, top),
67 | (left, top)],
68 | width=thickness,
69 | fill=color)
70 | # If the total height of the display strings added to the top of the bounding
71 | # box exceeds the top of the image, stack the strings below the bounding box
72 | # instead of above.
73 | display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]
74 | # Each display_str has a top and bottom margin of 0.05x.
75 | total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)
76 |
77 | if top > total_display_str_height:
78 | text_bottom = top
79 | else:
80 | text_bottom = top + total_display_str_height
81 | # Reverse list and print from bottom to top.
82 | for display_str in display_str_list[::-1]:
83 | text_width, text_height = font.getsize(display_str)
84 | margin = np.ceil(0.05 * text_height)
85 | draw.rectangle([(left, text_bottom - text_height - 2 * margin),
86 | (left + text_width, text_bottom)],
87 | fill=color)
88 | draw.text((left + margin, text_bottom - text_height - margin),
89 | display_str,
90 | fill="black",
91 | font=font)
92 | text_bottom -= text_height - 2 * margin
93 |
94 | # create list of dictionary containing predictions
95 | def create_predictions_list(self, class_ids, confidences, boxes):
96 | for i in range(len(class_ids)):
97 | obj_dict = {
98 | "class_id": class_ids[i],
99 | "confidence": confidences[i],
100 | "box": boxes[i]
101 | }
102 | self.predictions.append(obj_dict)
103 |
104 | def draw_boxes(self,image,boxes,class_ids,confidences,max_boxes=10):
105 | """Overlay labeled boxes on an image with formatted scores and label names."""
106 | colors = list(ImageColor.colormap.values())
107 |
108 | try:
109 | font = ImageFont.truetype("/usr/share/fonts/truetype/liberation/LiberationSansNarrow-Regular.ttf",
110 | 25)
111 | except IOError:
112 | print("Font not found, using default font.")
113 | font = ImageFont.load_default()
114 |
115 | for i in range(min(boxes.shape[0], max_boxes)):
116 | if confidences[i] >= self.conf:
117 | ymin, xmin, ymax, xmax = tuple(boxes[i])
118 | display_str = "{}: {}%".format(self.labels[class_ids[i]], int(100 * confidences[i]))
119 | color = colors[hash(class_ids[i]) % len(colors)]
120 | image_pil = Image.fromarray(np.uint8(image)).convert("RGB")
121 | self.draw_bounding_box_on_image(image_pil,ymin,xmin,ymax,xmax,color,font,display_str_list=[display_str])
122 | np.copyto(image, np.array(image_pil))
123 | return image
124 |
125 | def load_img(self,path):
126 | img = tf.io.read_file(path)
127 | img = tf.image.decode_jpeg(img, channels=3)
128 | return img
129 |
130 | def get_predictions(self,cv_image):
131 |
132 | if cv_image is None:
133 | # TODO: show warning message (different color, maybe)
134 | return None,None
135 |
136 | else :
137 | #Convert img to RGB
138 | self.frame = cv_image
139 |
140 | self.frame_count += 1
141 | self.total_frames += 1
142 |
143 | rgb = cv2.cvtColor(self.frame, cv2.COLOR_BGR2RGB)
144 |
145 |             # Converting to uint8
146 | rgb_tensor = tf.convert_to_tensor(rgb, dtype=tf.uint8)
147 |
148 | #Add dims to rgb_tensor
149 | rgb_tensor = tf.expand_dims(rgb_tensor , 0)
150 |
151 |
152 | result = self.detector(rgb_tensor)
153 | result = {key:value.numpy() for key,value in result.items()}
154 |
155 |             self.create_predictions_list(result["detection_classes"][0], result["detection_scores"][0], result["detection_boxes"][0])
156 | image_with_boxes = self.draw_boxes(cv_image,result["detection_boxes"][0],result["detection_classes"][0], result["detection_scores"][0])
157 |
158 | # fps
159 | if self.frame_count >= 30:
160 | self.end = time.time_ns()
161 | self.fps = 1000000000 * self.frame_count / (self.end - self.start)
162 | self.frame_count = 0
163 | self.start = time.time_ns()
164 |
165 | if self.fps > 0:
166 | self.fps_label = "FPS: %.2f" % self.fps
167 | cv2.putText(self.frame, self.fps_label, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
168 |
169 | return [self.predictions, image_with_boxes]
170 |
171 |
172 |     def detect_img(self, image_url):
173 |         # load the image from disk and run the detector on it
174 |         cv_image = cv2.imread(image_url)
175 |         start_time = time.time()
176 |         predictions, image_with_boxes = self.get_predictions(cv_image)
177 |         end_time = time.time()
178 |         print("Found %d objects." % len(predictions))
179 |         print("Inference time: ", end_time - start_time)
180 |         self.display_image(image_with_boxes)
192 |
193 | if __name__=='__main__':
194 |     # Load model (values below are placeholders; model_dir_path must contain a classes.txt file)
195 |     det = EfficientDet(model_dir_path=".", weight_file_name="")
196 |     det.detect_img("/home/sanchay/yolo_catkin/src/yolov8_test/scripts/dog_cat.jpg")
--------------------------------------------------------------------------------
/object_detection/object_detection/Detectors/RetinaNet.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | from tensorflow import keras
4 | from keras_retinanet import models
5 | from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image
6 | from keras_retinanet.utils.visualization import draw_box, draw_caption
7 | from keras_retinanet.utils.colors import label_color
8 | import cv2
9 | import os
10 | import numpy as np
11 | import time
14 |
15 |
16 | class RetinaNet:
17 | def __init__(self, model_dir_path, weight_file_name, conf_threshold = 0.7,
18 | score_threshold = 0.4, nms_threshold = 0.25, is_cuda = 0, show_fps = 1):
19 |
20 | self.model_dir_path = model_dir_path
21 | self.weight_file_name = weight_file_name
22 |
23 | self.predictions = []
24 | self.conf_threshold = conf_threshold
25 | self.show_fps = show_fps
26 | self.is_cuda = is_cuda
27 |
28 |         # FPS bookkeeping (frame counters are used even when show_fps is off)
29 |         self.frame_count = 0
30 |         self.total_frames = 0
31 |         self.fps = -1
32 |         self.start = time.time_ns()
33 |
34 | self.labels_to_names = self.load_classes()
35 | self.build_model()
36 |
37 |     def build_model(self) :
38 |         self.model_path = os.path.join(self.model_dir_path, self.weight_file_name)
39 |
40 |         try :
41 |             self.model = models.load_model(self.model_path, backbone_name='resnet50')
42 |         except Exception:
43 |             raise Exception("Error loading given model from path: {}. Maybe the file doesn't exist?".format(self.model_path))
45 |
46 |
47 | def load_classes(self):
48 | self.class_list = []
49 |
50 | with open(self.model_dir_path + "/classes.txt", "r") as f:
51 | self.class_list = [cname.strip() for cname in f.readlines()]
52 |
53 | return self.class_list
54 |
55 | def create_predictions_list(self, class_ids, confidences, boxes):
56 | for i in range(len(class_ids)):
57 | obj_dict = {
58 | "class_id": class_ids[i],
59 | "confidence": confidences[i],
60 | "box": boxes[i]
61 | }
62 |
63 | self.predictions.append(obj_dict)
64 |
65 |
66 | def get_predictions(self, cv_image):
67 |
68 | if cv_image is None:
69 | # TODO: show warning message (different color, maybe)
70 | return None,None
71 |
72 | else :
73 |
74 | # copy to draw on
75 | self.frame = cv_image.copy()
76 | # preprocess image for network
77 |             model_input = preprocess_image(self.frame)
78 |             model_input, scale = resize_image(model_input)
79 |
80 | self.frame_count += 1
81 | self.total_frames += 1
82 |
83 | # process image
84 | start = time.time()
85 |             boxes, scores, labels = self.model.predict_on_batch(np.expand_dims(model_input, axis=0))
86 | #print("processing time: ", time.time() - start)
87 |
88 | # correct for image scale
89 | boxes /= scale
90 |
91 |             self.create_predictions_list(labels[0], scores[0], boxes[0])
92 |
93 | # visualize detections
94 | for box, score, label in zip(boxes[0], scores[0], labels[0]):
95 | # scores are sorted so we can break
96 | if score < self.conf_threshold:
97 | break
98 |
99 | color = label_color(label)
100 |
101 | b = box.astype(int)
102 | draw_box(self.frame, b, color=color)
103 |
104 | caption = "{} {:.3f}".format(self.labels_to_names[label], score)
105 | #print(self.labels_to_names[label])
106 | draw_caption(self.frame, b, caption)
107 |
108 | if self.show_fps :
109 | if self.frame_count >= 30:
110 | self.end = time.time_ns()
111 | self.fps = 1000000000 * self.frame_count / (self.end - self.start)
112 | self.frame_count = 0
113 | self.start = time.time_ns()
114 |
115 | if self.fps > 0:
116 | self.fps_label = "FPS: %.2f" % self.fps
117 | cv2.putText(self.frame, self.fps_label, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
118 |
119 | return (self.predictions, self.frame)
120 |
121 |
122 |
--------------------------------------------------------------------------------
/object_detection/object_detection/Detectors/YOLOv5.py:
--------------------------------------------------------------------------------
1 | import os
2 | import cv2
3 | import numpy as np
5 |
6 | from ..DetectorBase import DetectorBase
7 |
8 |
9 | class YOLOv5(DetectorBase):
10 | def __init__(self, conf_threshold = 0.7, score_threshold = 0.4, nms_threshold = 0.25, is_cuda = 0):
11 |
12 | super().__init__()
13 |
14 | # opencv img input
15 | self.frame = None
16 | self.net = None
17 | self.INPUT_WIDTH = 640
18 | self.INPUT_HEIGHT = 640
19 | self.CONFIDENCE_THRESHOLD = conf_threshold
20 |
21 | self.is_cuda = is_cuda
22 |
23 |
24 | # load model and prepare its backend to either run on GPU or CPU, see if it can be added in constructor
25 | def build_model(self, model_dir_path, weight_file_name):
26 | model_path = os.path.join(model_dir_path, weight_file_name)
27 |
28 | try:
29 | self.net = cv2.dnn.readNet(model_path)
30 | except:
31 | raise Exception("Error loading given model from path: {}. Maybe the file doesn't exist?".format(model_path))
32 |
33 | if self.is_cuda:
34 | print("is_cuda was set to True. Attempting to use CUDA")
35 | self.net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
36 | self.net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)
37 | else:
38 | print("is_cuda was set to False. Running on CPU")
39 | self.net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
40 | self.net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
41 |
42 |
43 | # load classes.txt that contains mapping of model with labels
44 | # TODO: add try/except to raise exception that tells the use to check the name if it is classes.txt
45 | def load_classes(self, model_dir_path):
46 | self.class_list = []
47 | with open(model_dir_path + "/classes.txt", "r") as f:
48 | self.class_list = [cname.strip() for cname in f.readlines()]
49 | return self.class_list
50 |
51 |
52 | def detect(self, image):
53 | # convert image to 640x640
54 | blob = cv2.dnn.blobFromImage(image, 1/255.0, (self.INPUT_WIDTH, self.INPUT_HEIGHT), swapRB=True, crop=False)
55 | self.net.setInput(blob)
56 | preds = self.net.forward()
57 | return preds
58 |
59 |
60 | # extract bounding box, class IDs and confidences of detected objects
61 | # YOLOv5 returns a 3D tensor of dimension 25200*(5 + n_classes)
62 | def wrap_detection(self, input_image, output_data):
63 | class_ids = []
64 | confidences = []
65 | boxes = []
66 |
67 | rows = output_data.shape[0]
68 |
69 |         image_height, image_width, _ = input_image.shape
70 |
71 | x_factor = image_width / self.INPUT_WIDTH
72 | y_factor = image_height / self.INPUT_HEIGHT
73 |
74 | # Iterate through all the 25200 vectors
75 | for r in range(rows):
76 | row = output_data[r]
77 |
78 | # Continue only if Pc > conf_threshold
79 | confidence = row[4]
80 | if confidence >= self.CONFIDENCE_THRESHOLD:
81 |
82 | # One-hot encoded vector representing class of object
83 | classes_scores = row[5:]
84 |
85 |                 # Returns min and max values in an array along with their indices
86 | _, _, _, max_indx = cv2.minMaxLoc(classes_scores)
87 |
88 | # Extract the column index of the maximum values in classes_scores
89 | class_id = max_indx[1]
90 |
91 |                 # Continue only if the class score is greater than a threshold
92 | # class_score represents the probability of an object belonging to that class
93 | if (classes_scores[class_id] > .25):
94 |
95 | confidences.append(confidence)
96 |
97 | class_ids.append(class_id)
98 |
99 | x, y, w, h = row[0].item(), row[1].item(), row[2].item(), row[3].item()
100 | left = int((x - 0.5 * w) * x_factor)
101 | top = int((y - 0.5 * h) * y_factor)
102 | width = int(w * x_factor)
103 | height = int(h * y_factor)
104 | box = np.array([left, top, width, height])
105 | boxes.append(box)
106 |
107 | # removing intersecting bounding boxes
108 | indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.25, 0.45)
109 |
110 | result_class_ids = []
111 | result_confidences = []
112 | result_boxes = []
113 |
114 | for i in indexes:
115 | result_confidences.append(confidences[i])
116 | result_class_ids.append(class_ids[i])
117 | result_boxes.append(boxes[i])
118 |
119 | return result_class_ids, result_confidences, result_boxes
120 |
121 |
122 | # makes image square with dimension max(h, w)
123 | def format_yolov5(self):
124 | row, col, _ = self.frame.shape
125 | _max = max(col, row)
126 | result = np.zeros((_max, _max, 3), np.uint8)
127 | result[0:row, 0:col] = self.frame
128 | return result
129 |
130 |
131 | def get_predictions(self, cv_image):
132 | #Clear list
133 | self.predictions = []
134 |
135 |         if cv_image is None:
136 |             # TODO: show warning message (different color, maybe)
137 |             return None
138 | else:
139 | self.frame = cv_image
140 |
141 | # make image square
142 | inputImage = self.format_yolov5()
143 |
144 | outs = self.detect(inputImage)
145 | class_ids, confidences, boxes = self.wrap_detection(inputImage, outs[0])
146 |
147 | super().create_predictions_list(class_ids, confidences, boxes)
148 |
149 | print("Detected ids: ", class_ids)
150 |
151 | return self.predictions
--------------------------------------------------------------------------------
/object_detection/object_detection/Detectors/YOLOv8.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | from ultralytics import YOLO
3 | import os
4 | import time
5 |
6 | class YOLOv8:
7 | def __init__(self, model_dir_path, weight_file_name, conf_threshold = 0.7,
8 | score_threshold = 0.4, nms_threshold = 0.25,
9 | show_fps = 1, is_cuda = 0):
10 |
11 | self.model_dir_path = model_dir_path
12 | self.weight_file_name = weight_file_name
13 |
14 |
15 | self.conf_threshold = conf_threshold
16 | self.show_fps = show_fps
17 | self.is_cuda = is_cuda
18 |
19 |         # FPS bookkeeping (frame counters are used even when show_fps is off)
20 |         self.frame_count = 0
21 |         self.total_frames = 0
22 |         self.fps = -1
23 |         self.start = time.time_ns()
25 | self.frame = None
26 |
27 |
28 | self.predictions = []
29 | self.build_model()
30 | self.load_classes()
31 |
32 |
33 |     def build_model(self) :
34 |         model_path = os.path.join(self.model_dir_path, self.weight_file_name)
35 |
36 |         try :
37 |             self.model = YOLO(model_path)
38 |         except Exception:
39 |             raise Exception("Error loading given model from path: {}. Maybe the file doesn't exist?".format(model_path))
41 |
42 | def load_classes(self):
43 |
44 | self.class_list = []
45 |
46 | with open(self.model_dir_path + "/classes.txt", "r") as f:
47 | self.class_list = [cname.strip() for cname in f.readlines()]
48 |
49 | return self.class_list
50 |
51 | # create list of dictionary containing predictions
52 | def create_predictions_list(self, class_ids, confidences, boxes):
53 |
54 | for i in range(len(class_ids)):
55 | obj_dict = {
56 | "class_id": class_ids[i],
57 | "confidence": confidences[i],
58 | "box": boxes[i]
59 | }
60 | self.predictions.append(obj_dict)
61 |
62 | def get_predictions(self, cv_image):
63 |
64 | if cv_image is None:
65 | # TODO: show warning message (different color, maybe)
66 | return None,None
67 |
68 | else :
69 | self.frame = cv_image
70 | self.frame_count += 1
71 | self.total_frames += 1
72 |
73 | class_id = []
74 | confidence = []
75 | bb = []
76 | result = self.model.predict(self.frame, conf = self.conf_threshold) # Perform object detection on image
77 | row = result[0].boxes
78 |
79 | for box in row:
80 | class_id.append(box.cls)
81 | confidence.append(box.conf)
82 | bb.append(box.xyxy)
83 |
84 |             self.create_predictions_list(class_id, confidence, bb)
85 |             # reuse the result of the predict call above instead of running inference twice
86 |             output_frame = result[0].plot() # Frame with bounding boxes
87 |
88 | print("frame_count : ", self.frame_count)
89 |
90 |
91 | if self.show_fps :
92 | if self.frame_count >= 30:
93 | self.end = time.time_ns()
94 | self.fps = 1000000000 * self.frame_count / (self.end - self.start)
95 | self.frame_count = 0
96 | self.start = time.time_ns()
97 |
98 | if self.fps > 0:
99 | self.fps_label = "FPS: %.2f" % self.fps
100 | cv2.putText(output_frame, self.fps_label, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
101 |
102 | return self.predictions, output_frame
103 |
--------------------------------------------------------------------------------
/object_detection/object_detection/Detectors/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/object_detection/object_detection/Detectors/__init__.py
--------------------------------------------------------------------------------
/object_detection/object_detection/ObjectDetection.py:
--------------------------------------------------------------------------------
1 | #! /usr/bin/env python3
2 |
3 | import os
4 | import importlib
5 |
6 | import rclpy
7 | from rclpy.node import Node
8 |
9 | from sensor_msgs.msg import Image
10 | #from vision_msgs.msg import BoundingBox2D
11 |
12 | from cv_bridge import CvBridge
13 | import cv2
14 |
15 |
16 | class ObjectDetection(Node):
17 | def __init__(self):
18 | super().__init__('object_detection')
19 |
20 |         # create an empty list that will hold the names of all available detectors
21 | self.available_detectors = []
22 |
23 | # fill available_detectors with the detectors from Detectors dir
24 | self.discover_detectors()
25 |
26 | self.declare_parameters(
27 | namespace='',
28 | parameters=[
29 |
30 | ('input_img_topic', ""),
31 | ('output_bb_topic', ""),
32 | ('output_img_topic', ""),
33 | ('model_params.detector_type', ""),
34 | ('model_params.model_dir_path', ""),
35 | ('model_params.weight_file_name', ""),
36 | ('model_params.confidence_threshold', 0.7),
37 | ('model_params.show_fps', 1),
38 | ]
39 | )
40 |
41 | # node params
42 | self.input_img_topic = self.get_parameter('input_img_topic').value
43 | self.output_bb_topic = self.get_parameter('output_bb_topic').value
44 | self.output_img_topic = self.get_parameter('output_img_topic').value
45 |
46 | # model params
47 | self.detector_type = self.get_parameter('model_params.detector_type').value
48 | self.model_dir_path = self.get_parameter('model_params.model_dir_path').value
49 | self.weight_file_name = self.get_parameter('model_params.weight_file_name').value
50 | self.confidence_threshold = self.get_parameter('model_params.confidence_threshold').value
51 | self.show_fps = self.get_parameter('model_params.show_fps').value
52 |
53 | # raise an exception if specified detector was not found
54 | if self.detector_type not in self.available_detectors:
55 | raise ModuleNotFoundError(self.detector_type + " Detector specified in config was not found. " +
56 | "Check the Detectors dir for available detectors.")
57 | else:
58 | self.load_detector()
59 |
60 |
61 | self.img_pub = self.create_publisher(Image, self.output_img_topic, 10)
62 | self.bb_pub = None
63 | self.img_sub = self.create_subscription(Image, self.input_img_topic, self.detection_cb, 10)
64 |
65 | self.bridge = CvBridge()
66 |
67 |
68 | def discover_detectors(self):
69 | curr_dir = os.path.dirname(__file__)
70 | dir_contents = os.listdir(curr_dir + "/Detectors")
71 |
72 | for entity in dir_contents:
73 | if entity.endswith('.py'):
74 | self.available_detectors.append(entity[:-3])
75 |
76 | self.available_detectors.remove('__init__')
77 |
78 |
79 | def load_detector(self):
80 | detector_mod = importlib.import_module(".Detectors." + self.detector_type, "object_detection")
81 | detector_class = getattr(detector_mod, self.detector_type)
82 | self.detector = detector_class()
83 |
84 | self.detector.build_model(self.model_dir_path, self.weight_file_name)
85 | self.detector.load_classes(self.model_dir_path)
86 |
87 | print("Your detector : {} has been loaded !".format(self.detector_type))
88 |
89 |
90 | def detection_cb(self, img_msg):
91 | cv_image = self.bridge.imgmsg_to_cv2(img_msg, "bgr8")
92 |
93 | predictions = self.detector.get_predictions(cv_image=cv_image)
94 |
95 |         if predictions is None :
96 | print("Image input from topic : {} is empty".format(self.input_img_topic))
97 | else :
98 | for prediction in predictions:
99 | left, top, width, height = prediction['box']
100 | right = left + width
101 | bottom = top + height
102 |
103 | #Draw the bounding box
104 | cv_image = cv2.rectangle(cv_image,(left,top),(right, bottom),(0,255,0),1)
105 |
106 | output = self.bridge.cv2_to_imgmsg(cv_image, "bgr8")
107 | self.img_pub.publish(output)
108 | print(predictions)
109 |
110 |
111 | def main():
112 |     rclpy.init()
113 |     od = ObjectDetection()
114 |     try :
115 |         rclpy.spin(od)
116 |     except Exception as e:
117 |         print(e)
118 |     finally:
119 |         od.destroy_node()
120 |         rclpy.shutdown()
121 | if __name__=="__main__" :
122 | main()
123 |
124 |
125 |
126 |
--------------------------------------------------------------------------------
/object_detection/object_detection/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/object_detection/object_detection/__init__.py
--------------------------------------------------------------------------------
/object_detection/package.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0"?>
2 | <?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
3 | <package format="3">
4 |   <name>object_detection</name>
5 |   <version>0.0.0</version>
6 |   <description>TODO: Package description</description>
7 |   <maintainer email="jasmeet0915@gmail.com">singh</maintainer>
8 |   <license>TODO: License declaration</license>
9 |
10 |   <test_depend>ament_copyright</test_depend>
11 |   <test_depend>ament_flake8</test_depend>
12 |   <test_depend>ament_pep257</test_depend>
13 |   <test_depend>python3-pytest</test_depend>
14 |
15 |   <export>
16 |     <build_type>ament_python</build_type>
17 |   </export>
18 | </package>
19 |
--------------------------------------------------------------------------------
/object_detection/requirements.txt:
--------------------------------------------------------------------------------
1 | keras-retinanet==1.0.0
2 | matplotlib==3.5.4
3 | numpy==1.25.0
4 | opencv-python==4.7.0.72
5 | pandas==2.0.3
6 | pillow==9.5.0
7 | tensorflow==2.12.0
8 | tensorflow-hub==0.13.0
9 | ultralytics==8.0.124
--------------------------------------------------------------------------------
/object_detection/resource/object_detection:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/object_detection/resource/object_detection
--------------------------------------------------------------------------------
/object_detection/setup.cfg:
--------------------------------------------------------------------------------
1 | [develop]
2 | script_dir=$base/lib/object_detection
3 | [install]
4 | install_scripts=$base/lib/object_detection
5 |
--------------------------------------------------------------------------------
/object_detection/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup
2 | import os
3 | from glob import glob
4 |
5 | package_name = 'object_detection'
6 |
7 | setup(
8 | name=package_name,
9 | version='0.0.0',
10 | packages=[package_name],
11 | data_files=[
12 | ('share/ament_index/resource_index/packages',
13 | ['resource/' + package_name]),
14 | ('share/' + package_name, ['package.xml']),
15 | (os.path.join('share', package_name, 'config'), glob('config/*.yaml')),
16 | (os.path.join('share', package_name, 'launch'), glob('launch/*.launch.py')),
17 |
18 | ],
19 | install_requires=['setuptools'],
20 | zip_safe=True,
21 | maintainer='singh',
22 | maintainer_email='jasmeet0915@gmail.com',
23 | description='TODO: Package description',
24 | license='TODO: License declaration',
25 | tests_require=['pytest'],
26 | entry_points={
27 | 'console_scripts': [
28 | 'ObjectDetection = object_detection.ObjectDetection:main',
29 | ],
30 | },
31 | )
32 |
--------------------------------------------------------------------------------
/object_detection/test/test_copyright.py:
--------------------------------------------------------------------------------
1 | # Copyright 2015 Open Source Robotics Foundation, Inc.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | from ament_copyright.main import main
16 | import pytest
17 |
18 |
19 | # Remove the `skip` decorator once the source file(s) have a copyright header
20 | @pytest.mark.skip(reason='No copyright header has been placed in the generated source file.')
21 | @pytest.mark.copyright
22 | @pytest.mark.linter
23 | def test_copyright():
24 | rc = main(argv=['.', 'test'])
25 | assert rc == 0, 'Found errors'
26 |
--------------------------------------------------------------------------------
/object_detection/test/test_flake8.py:
--------------------------------------------------------------------------------
1 | # Copyright 2017 Open Source Robotics Foundation, Inc.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | from ament_flake8.main import main_with_errors
16 | import pytest
17 |
18 |
19 | @pytest.mark.flake8
20 | @pytest.mark.linter
21 | def test_flake8():
22 | rc, errors = main_with_errors(argv=[])
23 | assert rc == 0, \
24 | 'Found %d code style errors / warnings:\n' % len(errors) + \
25 | '\n'.join(errors)
26 |
--------------------------------------------------------------------------------
/object_detection/test/test_pep257.py:
--------------------------------------------------------------------------------
1 | # Copyright 2015 Open Source Robotics Foundation, Inc.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | from ament_pep257.main import main
16 | import pytest
17 |
18 |
19 | @pytest.mark.linter
20 | @pytest.mark.pep257
21 | def test_pep257():
22 | rc = main(argv=['.', 'test'])
23 | assert rc == 0, 'Found code style errors / warnings'
24 |
--------------------------------------------------------------------------------
/perception_bringup/CMakeLists.txt:
--------------------------------------------------------------------------------
1 | cmake_minimum_required(VERSION 3.8)
2 | project(perception_bringup)
3 |
4 | if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
5 | add_compile_options(-Wall -Wextra -Wpedantic)
6 | endif()
7 |
8 | # find dependencies
9 | find_package(ament_cmake REQUIRED)
10 | # uncomment the following section in order to fill in
11 | # further dependencies manually.
12 | # find_package( REQUIRED)
13 |
14 | if(BUILD_TESTING)
15 | find_package(ament_lint_auto REQUIRED)
16 | # the following line skips the linter which checks for copyrights
17 | # comment the line when a copyright and license is added to all source files
18 | set(ament_cmake_copyright_FOUND TRUE)
19 | # the following line skips cpplint (only works in a git repo)
20 | # comment the line when this package is in a git repo and when
21 | # a copyright and license is added to all source files
22 | set(ament_cmake_cpplint_FOUND TRUE)
23 | ament_lint_auto_find_test_dependencies()
24 | endif()
25 |
26 | install(
27 | DIRECTORY launch models worlds config
28 | DESTINATION share/${PROJECT_NAME}
29 | )
30 |
31 | ament_environment_hooks("${CMAKE_CURRENT_SOURCE_DIR}/env_hooks/${PROJECT_NAME}.sh.in")
32 |
33 | ament_package()
34 |
--------------------------------------------------------------------------------
/perception_bringup/config/bridge.yaml:
--------------------------------------------------------------------------------
1 | - gz_topic_name: "/color_camera/image_raw"
2 | ros_topic_name: "/color_camera/image_raw"
3 | ros_type_name: "sensor_msgs/msg/Image"
4 | gz_type_name: "gz.msgs.Image"
5 | direction: GZ_TO_ROS
--------------------------------------------------------------------------------
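The bridge config above maps a single Gazebo image topic into ROS 2. Once the bridge is running, the frames can be consumed like any other `sensor_msgs/msg/Image` stream; a minimal subscriber sketch (node and callback names are illustrative, not part of this repo):

```python
# Minimal rclpy subscriber for the bridged camera topic.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class CameraListener(Node):
    def __init__(self):
        super().__init__('camera_listener')
        self.create_subscription(Image, '/color_camera/image_raw',
                                 self.on_image, 10)

    def on_image(self, msg):
        self.get_logger().info(f'got {msg.width}x{msg.height} frame')


def main():
    rclpy.init()
    rclpy.spin(CameraListener())


if __name__ == '__main__':
    main()
```
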
/perception_bringup/env_hooks/perception_bringup.sh.in:
--------------------------------------------------------------------------------
1 | ament_prepend_unique_value GZ_SIM_RESOURCE_PATH "@CMAKE_CURRENT_SOURCE_DIR@/models:@CMAKE_CURRENT_SOURCE_DIR@/worlds"
--------------------------------------------------------------------------------
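This hook prepends the package's source-tree `models` and `worlds` directories to `GZ_SIM_RESOURCE_PATH` (note it uses `@CMAKE_CURRENT_SOURCE_DIR@`, i.e. the source checkout rather than the install space), which is what lets the `model://...` URIs used in the worlds and models below resolve. A quick sanity check after sourcing the workspace:

```python
# Print the GZ_SIM_RESOURCE_PATH entries contributed by this package.
import os

for path in os.environ.get('GZ_SIM_RESOURCE_PATH', '').split(':'):
    if 'perception_bringup' in path:
        print(path)
```
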
/perception_bringup/launch/playground.launch.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2018 Intel Corporation
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | import os
16 | import sys
17 | 
18 | from ament_index_python.packages import get_package_share_directory
19 | 
20 | from launch import LaunchDescription
21 | from launch.actions import DeclareLaunchArgument, IncludeLaunchDescription
22 | from launch.launch_description_sources import PythonLaunchDescriptionSource
23 | from launch_ros.actions import Node
24 | 
25 | 
26 | def generate_launch_description():
27 | 
28 |     pkg_perception_bringup = get_package_share_directory("perception_bringup")
29 |     pkg_ros_gz_sim = get_package_share_directory("ros_gz_sim")
30 | 
31 |     # Allow overriding the world via `world:=<name>` on the command line.
32 |     world_name = "playground"
33 |     for arg in sys.argv:
34 |         if arg.startswith("world:="):
35 |             world_name = arg.split(":=")[1]
36 | 
37 |     world_sdf = os.path.join(pkg_perception_bringup, "worlds", world_name + ".sdf")
38 | 
39 |     gz_sim = IncludeLaunchDescription(
40 |         PythonLaunchDescriptionSource(
41 |             os.path.join(pkg_ros_gz_sim, "launch", "gz_sim.launch.py")),
42 |         launch_arguments={
43 |             "gz_args": world_sdf
44 |         }.items()
45 |     )
46 | 
47 |     # Bridge the Gazebo camera topic into ROS 2 using config/bridge.yaml.
48 |     parameter_bridge = Node(
49 |         package="ros_gz_bridge",
50 |         executable="parameter_bridge",
51 |         parameters=[
52 |             {"config_file": os.path.join(pkg_perception_bringup, "config", "bridge.yaml")}
53 |         ]
54 |     )
55 | 
56 |     arg_gz_sim = DeclareLaunchArgument("gz_args", default_value=world_sdf)
57 |     arg_world_name = DeclareLaunchArgument("world", default_value="playground_world")
58 | 
59 |     args = [arg_gz_sim, arg_world_name]
60 |     launch = [gz_sim, parameter_bridge]
61 | 
62 |     return LaunchDescription(args + launch)
63 | 
--------------------------------------------------------------------------------
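The launch file reads `world:=<name>` by scanning `sys.argv`, which works but bypasses the launch system, so the declared `world` argument never actually feeds `world_sdf`. A sketch of the substitution-based alternative (package and file names mirror the launch file above; this variant is an assumption, not code from the repo):

```python
# Resolve the world file through launch substitutions instead of sys.argv.
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration, PathJoinSubstitution
from launch_ros.substitutions import FindPackageShare


def generate_launch_description():
    world = LaunchConfiguration('world')
    worlds_dir = PathJoinSubstitution(
        [FindPackageShare('perception_bringup'), 'worlds'])

    gz_sim = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            PathJoinSubstitution(
                [FindPackageShare('ros_gz_sim'), 'launch', 'gz_sim.launch.py'])),
        # A list of substitutions is concatenated into the final argument value.
        launch_arguments={'gz_args': [worlds_dir, '/', world, '.sdf']}.items())

    return LaunchDescription([
        DeclareLaunchArgument('world', default_value='playground'),
        gz_sim,
    ])
```
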
/perception_bringup/models/adhesive/model.config:
--------------------------------------------------------------------------------
1 | <?xml version="1.0"?>
2 | <model>
3 |   <name>adhesive</name>
4 |   <version>1.0</version>
5 |   <sdf>model.sdf</sdf>
6 |   <author>
7 |     <name>atom</name>
8 |     <email>atom@atom</email>
9 |   </author>
10 |   <description>
11 |   </description>
12 | </model>
--------------------------------------------------------------------------------
/perception_bringup/models/adhesive/model.sdf:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" ?>
2 | <sdf version="1.6">
3 |   <model name="adhesive">
4 |     <link name="link">
5 |       <inertial>
6 |         <pose>0 0 0 0 0 0</pose>
7 |         <mass>0.08</mass>
8 |       </inertial>
9 |       <collision name="collision">
10 |         <pose>0 0 0 0 0 0</pose>
11 |         <geometry>
12 |           <mesh>
13 |             <uri>model://adhesive/meshes/adhesive.dae</uri>
14 |           </mesh>
15 |         </geometry>
16 |       </collision>
17 |       <visual name="visual">
18 |         <cast_shadows>true</cast_shadows>
19 |         <pose>0 0 0 0 0 0</pose>
20 |         <geometry>
21 |           <mesh>
22 |             <uri>model://adhesive/meshes/adhesive.dae</uri>
23 |           </mesh>
24 |         </geometry>
25 |         <material>
26 |           <ambient>0 0.5 0.5 1</ambient>
27 |           <diffuse>0 0.5 0.5 1</diffuse>
28 |           <specular>0 0.5 0.5 1</specular>
29 |         </material>
30 |       </visual>
31 |     </link>
32 |   </model>
33 | </sdf>
--------------------------------------------------------------------------------
/perception_bringup/models/first_aid/model.config:
--------------------------------------------------------------------------------
1 | <?xml version="1.0"?>
2 | <model>
3 |   <name>first aid</name>
4 |   <author>
5 |     <name>atom</name>
6 |     <email>atom@atom</email>
7 |   </author>
8 |   <version>1.0</version>
9 |   <sdf>model.sdf</sdf>
10 |   <description>First Aid Box</description>
11 | </model>
--------------------------------------------------------------------------------
/perception_bringup/models/first_aid/model.sdf:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" ?>
2 | <sdf version="1.6">
3 |   <model name="first_aid">
4 |     <link name="link">
5 |       <visual name="visual">
6 |         <geometry>
7 |           <box>
8 |             <size>0.2 0.1 0.20</size>
9 |           </box>
10 |         </geometry>
11 |         <material>
12 |           <ambient>1 1 1 1</ambient>
13 |           <diffuse>1 1 1 1</diffuse>
14 |           <specular>0 0 0 1</specular>
15 |           <emissive>0 0 0 1</emissive>
16 |         </material>
17 |       </visual>
18 |       <collision name="collision">
19 |         <geometry>
20 |           <box>
21 |             <size>0.2 0.1 0.20</size>
22 |           </box>
23 |         </geometry>
24 |       </collision>
25 |     </link>
26 |   </model>
27 | </sdf>
--------------------------------------------------------------------------------
/perception_bringup/models/medicine/materials/textures/texture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/medicine/materials/textures/texture.png
--------------------------------------------------------------------------------
/perception_bringup/models/medicine/meshes/model.mtl:
--------------------------------------------------------------------------------
1 | # Copyright 2020 Google LLC.
2 | #
3 | # This work is licensed under the Creative Commons Attribution 4.0
4 | # International License. To view a copy of this license, visit
5 | # http://creativecommons.org/licenses/by/4.0/ or send a letter
6 | # to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
7 | newmtl material_0
8 | # shader_type beckmann
9 | map_Kd texture.png
10 |
11 | # Kd: Diffuse reflectivity.
12 | Kd 1.000000 1.000000 1.000000
13 |
--------------------------------------------------------------------------------
/perception_bringup/models/medicine/model.config:
--------------------------------------------------------------------------------
1 | <?xml version="1.0"?>
2 | <!--
3 |   Copyright 2020 Google LLC.
4 | 
5 |   This work is licensed under the Creative Commons Attribution 4.0
6 |   International License. To view a copy of this license, visit
7 |   http://creativecommons.org/licenses/by/4.0/ or send a letter
8 |   to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
9 | -->
10 | <model>
11 |   <name>Weston_No_22_Cajun_Jerky_Tonic_12_fl_oz_nLj64ZnGwDh</name>
12 |   <author>
13 |     <name>Google</name>
14 |     <email>scanned-objects@google.com</email>
15 |   </author>
16 |   <version>1.0</version>
17 |   <sdf version="1.6">model.sdf</sdf>
18 |   <description>
19 |     Weston No. 22 Cajun Jerky Tonic - 12 fl oz
20 |     Weston Premium Jerky Tonic - the cure for the tired tastebud! Our bottles let you use only what you need. Mix one tablespoon of seasoning with one tablespoon of water for every one pound of meat.
21 |   </description>
22 | </model>
--------------------------------------------------------------------------------
/perception_bringup/models/medicine/model.sdf:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" ?>
2 | <!--
3 |   Copyright 2020 Google LLC.
4 | 
5 |   This work is licensed under the Creative Commons Attribution 4.0
6 |   International License. To view a copy of this license, visit
7 |   http://creativecommons.org/licenses/by/4.0/ or send a letter
8 |   to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
9 | -->
10 | <sdf version="1.6">
11 |   <model name="medicine">
12 |     <link name="link">
13 |       <collision name="collision">
14 |         <geometry>
15 |           <mesh>
16 |             <uri>meshes/model.obj</uri>
17 |           </mesh>
18 |         </geometry>
19 |       </collision>
20 |       <visual name="visual">
21 |         <geometry>
22 |           <mesh>
23 |             <uri>meshes/model.obj</uri>
24 |           </mesh>
25 |         </geometry>
26 |       </visual>
27 |     </link>
28 |   </model>
29 | </sdf>
--------------------------------------------------------------------------------
/perception_bringup/models/medicine/thumbnails/0.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/medicine/thumbnails/0.jpg
--------------------------------------------------------------------------------
/perception_bringup/models/medicine/thumbnails/1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/medicine/thumbnails/1.jpg
--------------------------------------------------------------------------------
/perception_bringup/models/medicine/thumbnails/2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/medicine/thumbnails/2.jpg
--------------------------------------------------------------------------------
/perception_bringup/models/medicine/thumbnails/3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/medicine/thumbnails/3.jpg
--------------------------------------------------------------------------------
/perception_bringup/models/medicine/thumbnails/4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/medicine/thumbnails/4.jpg
--------------------------------------------------------------------------------
/perception_bringup/models/plastic_cup/meshes/plastic_cup.dae:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="utf-8"?>
2 | <COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
3 |   <asset>
4 |     <contributor>
5 |       <author>Wings3D Collada Exporter</author>
6 |       <authoring_tool>Wings3D 1.5.3 Collada Exporter</authoring_tool>
7 |     </contributor>
8 |     <created>2014-10-08T21:39:06</created>
9 |     <modified>2014-10-08T21:39:06</modified>
10 |     <up_axis>Y_UP</up_axis>
11 |   </asset>
12 |   <!-- Cup material: emission 0 0 0 1, ambient/diffuse 0.789854 0.813333 0.694044 1,
13 |        specular 0 0 0 1, shininess 0. -->
14 |   <!-- Mesh geometry (vertex position/normal float arrays and polylist
15 |        vertex-count/index arrays, several thousand values) omitted from this
16 |        listing; the scene node places the mesh at the origin with identity
17 |        rotation and unit scale. -->
18 | </COLLADA>
--------------------------------------------------------------------------------
/perception_bringup/models/plastic_cup/model.config:
--------------------------------------------------------------------------------
1 | <?xml version="1.0"?>
2 | <model>
3 |   <name>Plastic Cup</name>
4 |   <version>1.0</version>
5 |   <sdf version="1.5">model.sdf</sdf>
6 |   <author>
7 |     <name>Jackie Kay</name>
8 |     <email>jackie@osrfoundation.org</email>
9 |   </author>
10 |   <description>
11 |     A 13cm high plastic cup.
12 |   </description>
13 | </model>
--------------------------------------------------------------------------------
/perception_bringup/models/plastic_cup/model.sdf:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" ?>
2 | <sdf version="1.5">
3 |   <model name="plastic_cup">
4 |     <static>1</static>
5 |     <link name="link">
6 |       <pose>0 0 0.065 0 0 0</pose>
7 |       <collision name="collision">
8 |         <geometry>
9 |           <mesh>
10 |             <uri>model://plastic_cup/meshes/plastic_cup.dae</uri>
11 |           </mesh>
12 |         </geometry>
13 |       </collision>
14 |       <visual name="visual">
15 |         <geometry>
16 |           <mesh>
17 |             <uri>model://plastic_cup/meshes/plastic_cup.dae</uri>
18 |           </mesh>
19 |         </geometry>
20 |         <transparency>0.5</transparency>
21 |       </visual>
22 |     </link>
23 |   </model>
24 | </sdf>
--------------------------------------------------------------------------------
/perception_bringup/models/plastic_cup/thumbnails/1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/plastic_cup/thumbnails/1.png
--------------------------------------------------------------------------------
/perception_bringup/models/plastic_cup/thumbnails/2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/plastic_cup/thumbnails/2.png
--------------------------------------------------------------------------------
/perception_bringup/models/plastic_cup/thumbnails/3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/plastic_cup/thumbnails/3.png
--------------------------------------------------------------------------------
/perception_bringup/models/plastic_cup/thumbnails/4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/plastic_cup/thumbnails/4.png
--------------------------------------------------------------------------------
/perception_bringup/models/plastic_cup/thumbnails/5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/plastic_cup/thumbnails/5.png
--------------------------------------------------------------------------------
/perception_bringup/models/table/Table_Diffuse.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/table/Table_Diffuse.jpg
--------------------------------------------------------------------------------
/perception_bringup/models/table/model-1_2.sdf:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" ?>
2 | <sdf version="1.2">
3 |   <model name="table">
4 |     <static>false</static>
5 |     <link name="link">
6 |       <collision name="surface">
7 |         <pose>0 0 1.0 0 0 0</pose>
8 |         <geometry>
9 |           <box>
10 |             <size>1.5 0.8 0.03</size>
11 |           </box>
12 |         </geometry>
13 |         <surface>
14 |           <friction>
15 |             <ode>
16 |               <mu>0.6</mu>
17 |               <mu2>0.6</mu2>
18 |             </ode>
19 |           </friction>
20 |         </surface>
21 |       </collision>
22 |       <visual name="visual1">
23 |         <pose>0 0 1.0 0 0 0</pose>
24 |         <geometry>
25 |           <box>
26 |             <size>1.4 0.8 0.04</size>
27 |           </box>
28 |         </geometry>
29 |         <material>
30 |           <script>Gazebo/Wood</script>
31 |         </material>
32 |       </visual>
33 |       <collision name="front_left_leg">
34 |         <pose>0.68 0.38 0.5 0 0 0</pose>
35 |         <geometry>
36 |           <cylinder>
37 |             <radius>0.02</radius>
38 |             <length>1.0</length>
39 |           </cylinder>
40 |         </geometry>
41 |       </collision>
42 |       <visual name="front_left_leg">
43 |         <pose>0.68 0.38 0.5 0 0 0</pose>
44 |         <geometry>
45 |           <cylinder>
46 |             <radius>0.02</radius>
47 |             <length>1.0</length>
48 |           </cylinder>
49 |         </geometry>
50 |         <material>
51 |           <script>Gazebo/Grey</script>
52 |         </material>
53 |       </visual>
54 |       <collision name="front_right_leg">
55 |         <pose>0.68 -0.38 0.5 0 0 0</pose>
56 |         <geometry>
57 |           <cylinder>
58 |             <radius>0.02</radius>
59 |             <length>1.0</length>
60 |           </cylinder>
61 |         </geometry>
62 |       </collision>
63 |       <visual name="front_right_leg">
64 |         <pose>0.68 -0.38 0.5 0 0 0</pose>
65 |         <geometry>
66 |           <cylinder>
67 |             <radius>0.02</radius>
68 |             <length>1.0</length>
69 |           </cylinder>
70 |         </geometry>
71 |         <material>
72 |           <script>Gazebo/Grey</script>
73 |         </material>
74 |       </visual>
75 |       <collision name="back_right_leg">
76 |         <pose>-0.68 -0.38 0.5 0 0 0</pose>
77 |         <geometry>
78 |           <cylinder>
79 |             <radius>0.02</radius>
80 |             <length>1.0</length>
81 |           </cylinder>
82 |         </geometry>
83 |       </collision>
84 |       <visual name="back_right_leg">
85 |         <pose>-0.68 -0.38 0.5 0 0 0</pose>
86 |         <geometry>
87 |           <cylinder>
88 |             <radius>0.02</radius>
89 |             <length>1.0</length>
90 |           </cylinder>
91 |         </geometry>
92 |         <material>
93 |           <script>Gazebo/Grey</script>
94 |         </material>
95 |       </visual>
96 |       <collision name="back_left_leg">
97 |         <pose>-0.68 0.38 0.5 0 0 0</pose>
98 |         <geometry>
99 |           <cylinder>
100 |             <radius>0.02</radius>
101 |             <length>1.0</length>
102 |           </cylinder>
103 |         </geometry>
104 |       </collision>
105 |       <visual name="back_left_leg">
106 |         <pose>-0.68 0.38 0.5 0 0 0</pose>
107 |         <geometry>
108 |           <cylinder>
109 |             <radius>0.02</radius>
110 |             <length>1.0</length>
111 |           </cylinder>
112 |         </geometry>
113 |         <material>
114 |           <script>Gazebo/Grey</script>
115 |         </material>
116 |       </visual>
117 |     </link>
118 |   </model>
119 | </sdf>
--------------------------------------------------------------------------------
/perception_bringup/models/table/model-1_3.sdf:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" ?>
2 | <sdf version="1.3">
3 |   <model name="table">
4 |     <static>true</static>
5 |     <link name="link">
6 |       <collision name="surface">
7 |         <pose>0 0 1.0 0 0 0</pose>
8 |         <geometry>
9 |           <box>
10 |             <size>1.5 0.8 0.03</size>
11 |           </box>
12 |         </geometry>
13 |         <surface>
14 |           <friction>
15 |             <ode>
16 |               <mu>0.6</mu>
17 |               <mu2>0.6</mu2>
18 |             </ode>
19 |           </friction>
20 |         </surface>
21 |       </collision>
22 |       <visual name="visual1">
23 |         <pose>0 0 1.0 0 0 0</pose>
24 |         <geometry>
25 |           <box>
26 |             <size>1.5 0.8 0.03</size>
27 |           </box>
28 |         </geometry>
29 |         <material>
30 |           <script>Gazebo/Wood</script>
31 |         </material>
32 |       </visual>
33 |       <collision name="front_left_leg">
34 |         <pose>0.68 0.38 0.5 0 0 0</pose>
35 |         <geometry>
36 |           <cylinder>
37 |             <radius>0.02</radius>
38 |             <length>1.0</length>
39 |           </cylinder>
40 |         </geometry>
41 |       </collision>
42 |       <visual name="front_left_leg">
43 |         <pose>0.68 0.38 0.5 0 0 0</pose>
44 |         <geometry>
45 |           <cylinder>
46 |             <radius>0.02</radius>
47 |             <length>1.0</length>
48 |           </cylinder>
49 |         </geometry>
50 |         <material>
51 |           <script>Gazebo/Grey</script>
52 |         </material>
53 |       </visual>
54 |       <collision name="front_right_leg">
55 |         <pose>0.68 -0.38 0.5 0 0 0</pose>
56 |         <geometry>
57 |           <cylinder>
58 |             <radius>0.02</radius>
59 |             <length>1.0</length>
60 |           </cylinder>
61 |         </geometry>
62 |       </collision>
63 |       <visual name="front_right_leg">
64 |         <pose>0.68 -0.38 0.5 0 0 0</pose>
65 |         <geometry>
66 |           <cylinder>
67 |             <radius>0.02</radius>
68 |             <length>1.0</length>
69 |           </cylinder>
70 |         </geometry>
71 |         <material>
72 |           <script>Gazebo/Grey</script>
73 |         </material>
74 |       </visual>
75 |       <collision name="back_right_leg">
76 |         <pose>-0.68 -0.38 0.5 0 0 0</pose>
77 |         <geometry>
78 |           <cylinder>
79 |             <radius>0.02</radius>
80 |             <length>1.0</length>
81 |           </cylinder>
82 |         </geometry>
83 |       </collision>
84 |       <visual name="back_right_leg">
85 |         <pose>-0.68 -0.38 0.5 0 0 0</pose>
86 |         <geometry>
87 |           <cylinder>
88 |             <radius>0.02</radius>
89 |             <length>1.0</length>
90 |           </cylinder>
91 |         </geometry>
92 |         <material>
93 |           <script>Gazebo/Grey</script>
94 |         </material>
95 |       </visual>
96 |       <collision name="back_left_leg">
97 |         <pose>-0.68 0.38 0.5 0 0 0</pose>
98 |         <geometry>
99 |           <cylinder>
100 |             <radius>0.02</radius>
101 |             <length>1.0</length>
102 |           </cylinder>
103 |         </geometry>
104 |       </collision>
105 |       <visual name="back_left_leg">
106 |         <pose>-0.68 0.38 0.5 0 0 0</pose>
107 |         <geometry>
108 |           <cylinder>
109 |             <radius>0.02</radius>
110 |             <length>1.0</length>
111 |           </cylinder>
112 |         </geometry>
113 |         <material>
114 |           <script>Gazebo/Grey</script>
115 |         </material>
116 |       </visual>
117 |     </link>
118 |   </model>
119 | </sdf>
--------------------------------------------------------------------------------
/perception_bringup/models/table/model-1_4.sdf:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" ?>
2 | <sdf version="1.4">
3 |   <model name="table">
4 |     <static>true</static>
5 |     <link name="link">
6 |       <collision name="surface">
7 |         <pose>0 0 1.0 0 0 0</pose>
8 |         <geometry>
9 |           <box>
10 |             <size>1.5 0.8 0.03</size>
11 |           </box>
12 |         </geometry>
13 |         <surface>
14 |           <friction>
15 |             <ode>
16 |               <mu>0.6</mu>
17 |               <mu2>0.6</mu2>
18 |             </ode>
19 |           </friction>
20 |         </surface>
21 |       </collision>
22 |       <visual name="visual1">
23 |         <pose>0 0 1.0 0 0 0</pose>
24 |         <geometry>
25 |           <box>
26 |             <size>1.5 0.8 0.03</size>
27 |           </box>
28 |         </geometry>
29 |         <material>
30 |           <script>Gazebo/Wood</script>
31 |         </material>
32 |       </visual>
33 |       <collision name="front_left_leg">
34 |         <pose>0.68 0.38 0.5 0 0 0</pose>
35 |         <geometry>
36 |           <cylinder>
37 |             <radius>0.02</radius>
38 |             <length>1.0</length>
39 |           </cylinder>
40 |         </geometry>
41 |       </collision>
42 |       <visual name="front_left_leg">
43 |         <pose>0.68 0.38 0.5 0 0 0</pose>
44 |         <geometry>
45 |           <cylinder>
46 |             <radius>0.02</radius>
47 |             <length>1.0</length>
48 |           </cylinder>
49 |         </geometry>
50 |         <material>
51 |           <script>Gazebo/Grey</script>
52 |         </material>
53 |       </visual>
54 |       <collision name="front_right_leg">
55 |         <pose>0.68 -0.38 0.5 0 0 0</pose>
56 |         <geometry>
57 |           <cylinder>
58 |             <radius>0.02</radius>
59 |             <length>1.0</length>
60 |           </cylinder>
61 |         </geometry>
62 |       </collision>
63 |       <visual name="front_right_leg">
64 |         <pose>0.68 -0.38 0.5 0 0 0</pose>
65 |         <geometry>
66 |           <cylinder>
67 |             <radius>0.02</radius>
68 |             <length>1.0</length>
69 |           </cylinder>
70 |         </geometry>
71 |         <material>
72 |           <script>Gazebo/Grey</script>
73 |         </material>
74 |       </visual>
75 |       <collision name="back_right_leg">
76 |         <pose>-0.68 -0.38 0.5 0 0 0</pose>
77 |         <geometry>
78 |           <cylinder>
79 |             <radius>0.02</radius>
80 |             <length>1.0</length>
81 |           </cylinder>
82 |         </geometry>
83 |       </collision>
84 |       <visual name="back_right_leg">
85 |         <pose>-0.68 -0.38 0.5 0 0 0</pose>
86 |         <geometry>
87 |           <cylinder>
88 |             <radius>0.02</radius>
89 |             <length>1.0</length>
90 |           </cylinder>
91 |         </geometry>
92 |         <material>
93 |           <script>Gazebo/Grey</script>
94 |         </material>
95 |       </visual>
96 |       <collision name="back_left_leg">
97 |         <pose>-0.68 0.38 0.5 0 0 0</pose>
98 |         <geometry>
99 |           <cylinder>
100 |             <radius>0.02</radius>
101 |             <length>1.0</length>
102 |           </cylinder>
103 |         </geometry>
104 |       </collision>
105 |       <visual name="back_left_leg">
106 |         <pose>-0.68 0.38 0.5 0 0 0</pose>
107 |         <geometry>
108 |           <cylinder>
109 |             <radius>0.02</radius>
110 |             <length>1.0</length>
111 |           </cylinder>
112 |         </geometry>
113 |         <material>
114 |           <script>Gazebo/Grey</script>
115 |         </material>
116 |       </visual>
117 |     </link>
118 |   </model>
119 | </sdf>
--------------------------------------------------------------------------------
/perception_bringup/models/table/model.config:
--------------------------------------------------------------------------------
1 | <?xml version="1.0"?>
2 | <model>
3 |   <name>Table</name>
4 |   <version>1.0</version>
5 |   <sdf version="1.2">model-1_2.sdf</sdf>
6 |   <sdf version="1.3">model-1_3.sdf</sdf>
7 |   <sdf version="1.4">model-1_4.sdf</sdf>
8 |   <sdf version="1.6">model.sdf</sdf>
9 |   <author>
10 |     <name>Nate Koenig</name>
11 |     <email>nate@osrfoundation.org</email>
12 |   </author>
13 |   <description>
14 |     A wooden table.
15 |   </description>
16 | </model>
--------------------------------------------------------------------------------
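A `model.config` like the one above can advertise several SDF files, one per SDF version, so a simulator can pick the newest variant it supports. A small sketch for listing the variants (the path is an assumption, relative to the repository root):

```python
# List the SDF variants a Gazebo model.config advertises.
import xml.etree.ElementTree as ET

root = ET.parse('perception_bringup/models/table/model.config').getroot()
for sdf in root.findall('sdf'):
    print(sdf.get('version'), sdf.text)
```
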
/perception_bringup/models/table/model.sdf:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" ?>
2 | <sdf version="1.6">
3 |   <model name="table">
4 |     <static>true</static>
5 |     <link name="link">
6 |       <collision name="surface">
7 |         <pose>0 0 1.0 0 0 0</pose>
8 |         <geometry>
9 |           <box>
10 |             <size>1.5 0.8 0.03</size>
11 |           </box>
12 |         </geometry>
13 |         <surface>
14 |           <friction>
15 |             <ode>
16 |               <mu>0.6</mu>
17 |               <mu2>0.6</mu2>
18 |             </ode>
19 |           </friction>
20 |         </surface>
21 |       </collision>
22 |       <visual name="visual1">
23 |         <pose>0 0 1.0 0 0 0</pose>
24 |         <geometry>
25 |           <box>
26 |             <size>1.5 0.8 0.03</size>
27 |           </box>
28 |         </geometry>
29 |         <material>
30 |           <diffuse>1.0 1.0 1.0</diffuse>
31 |           <pbr>
32 |             <metal>
33 |               <albedo_map>model://table/Table_Diffuse.jpg</albedo_map>
34 |             </metal>
35 |           </pbr>
36 |         </material>
37 |       </visual>
38 |       <collision name="front_left_leg">
39 |         <pose>0.68 0.38 0.5 0 0 0</pose>
40 |         <geometry>
41 |           <cylinder>
42 |             <radius>0.02</radius>
43 |             <length>1.0</length>
44 |           </cylinder>
45 |         </geometry>
46 |       </collision>
47 |       <visual name="front_left_leg">
48 |         <pose>0.68 0.38 0.5 0 0 0</pose>
49 |         <geometry>
50 |           <cylinder>
51 |             <radius>0.02</radius>
52 |             <length>1.0</length>
53 |           </cylinder>
54 |         </geometry>
55 |         <material>
56 |           <diffuse>0.5 0.5 0.5</diffuse>
57 |         </material>
58 |       </visual>
59 |       <collision name="front_right_leg">
60 |         <pose>0.68 -0.38 0.5 0 0 0</pose>
61 |         <geometry>
62 |           <cylinder>
63 |             <radius>0.02</radius>
64 |             <length>1.0</length>
65 |           </cylinder>
66 |         </geometry>
67 |       </collision>
68 |       <visual name="front_right_leg">
69 |         <pose>0.68 -0.38 0.5 0 0 0</pose>
70 |         <geometry>
71 |           <cylinder>
72 |             <radius>0.02</radius>
73 |             <length>1.0</length>
74 |           </cylinder>
75 |         </geometry>
76 |         <material>
77 |           <diffuse>0.5 0.5 0.5</diffuse>
78 |         </material>
79 |       </visual>
80 |       <collision name="back_right_leg">
81 |         <pose>-0.68 -0.38 0.5 0 0 0</pose>
82 |         <geometry>
83 |           <cylinder>
84 |             <radius>0.02</radius>
85 |             <length>1.0</length>
86 |           </cylinder>
87 |         </geometry>
88 |       </collision>
89 |       <visual name="back_right_leg">
90 |         <pose>-0.68 -0.38 0.5 0 0 0</pose>
91 |         <geometry>
92 |           <cylinder>
93 |             <radius>0.02</radius>
94 |             <length>1.0</length>
95 |           </cylinder>
96 |         </geometry>
97 |         <material>
98 |           <diffuse>0.5 0.5 0.5</diffuse>
99 |         </material>
100 |       </visual>
101 |       <collision name="back_left_leg">
102 |         <pose>-0.68 0.38 0.5 0 0 0</pose>
103 |         <geometry>
104 |           <cylinder>
105 |             <radius>0.02</radius>
106 |             <length>1.0</length>
107 |           </cylinder>
108 |         </geometry>
109 |       </collision>
110 |       <visual name="back_left_leg">
111 |         <pose>-0.68 0.38 0.5 0 0 0</pose>
112 |         <geometry>
113 |           <cylinder>
114 |             <radius>0.02</radius>
115 |             <length>1.0</length>
116 |           </cylinder>
117 |         </geometry>
118 |         <material>
119 |           <diffuse>0.5 0.5 0.5</diffuse>
120 |         </material>
121 |       </visual>
122 |     </link>
123 |   </model>
124 | </sdf>
--------------------------------------------------------------------------------
/perception_bringup/models/table/thumbnails/1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/table/thumbnails/1.png
--------------------------------------------------------------------------------
/perception_bringup/models/table/thumbnails/2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/table/thumbnails/2.png
--------------------------------------------------------------------------------
/perception_bringup/models/table/thumbnails/3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/table/thumbnails/3.png
--------------------------------------------------------------------------------
/perception_bringup/models/table/thumbnails/4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/table/thumbnails/4.png
--------------------------------------------------------------------------------
/perception_bringup/models/table/thumbnails/5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-robotics-lab/ros-perception-pipeline/fcf6b6d92d50b33a6cabad83f7bfb4628810c38c/perception_bringup/models/table/thumbnails/5.png
--------------------------------------------------------------------------------
/perception_bringup/package.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0"?>
2 | <?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
3 | <package format="3">
4 |   <name>perception_bringup</name>
5 |   <version>0.0.0</version>
6 |   <description>TODO: Package description</description>
7 |   <maintainer email="...">singh</maintainer>
8 |   <license>TODO: License declaration</license>
9 | 
10 |   <buildtool_depend>ament_cmake</buildtool_depend>
11 | 
12 |   <test_depend>ament_lint_auto</test_depend>
13 |   <test_depend>ament_lint_common</test_depend>
14 | 
15 |   <export>
16 |     <build_type>ament_cmake</build_type>
17 |   </export>
18 | </package>
/perception_bringup/worlds/playground.sdf:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" ?>
2 | <sdf version="1.8">
3 |   <world name="playground_world">
4 | 
5 |     <gui fullscreen="false">
6 | 
7 |       <!-- Scene view. Plugin filename attributes did not survive extraction;
8 |            the titles and properties below are the values preserved in the file. -->
9 |       <plugin name="3D View">
10 |         <gz-gui>
11 |           <title>3D View</title>
12 |           <property type="bool" key="showTitleBar">false</property>
13 |           <property type="string" key="state">docked</property>
14 |         </gz-gui>
15 |         <engine>ogre2</engine>
16 |         <scene>scene</scene>
17 |         <ambient_light>0.4 0.4 0.4</ambient_light>
18 |         <background_color>0.8 0.8 0.8</background_color>
19 |         <camera_pose>-6 0 6 0 0.5 0</camera_pose>
20 |       </plugin>
21 | 
22 |       <!-- Eight helper plugins follow in the original (5x5, floating, title
23 |            bars hidden); their names were lost in extraction. -->
24 | 
25 |       <plugin name="World control">
26 |         <gz-gui>
27 |           <title>World control</title>
28 |           <property type="bool" key="showTitleBar">false</property>
29 |           <property type="bool" key="resizable">false</property>
30 |           <property type="double" key="height">72</property>
31 |           <property type="double" key="z">1</property>
32 |           <property type="string" key="state">floating</property>
33 |         </gz-gui>
34 |         <play_pause>true</play_pause>
35 |         <step>true</step>
36 |         <start_paused>true</start_paused>
37 |         <use_event>true</use_event>
38 |       </plugin>
39 | 
40 |       <plugin name="World stats">
41 |         <gz-gui>
42 |           <title>World stats</title>
43 |           <property type="bool" key="showTitleBar">false</property>
44 |           <property type="bool" key="resizable">false</property>
45 |           <property type="double" key="width">110</property>
46 |           <property type="double" key="height">290</property>
47 |           <property type="double" key="z">1</property>
48 |           <property type="string" key="state">floating</property>
49 |         </gz-gui>
50 |         <sim_time>true</sim_time>
51 |         <real_time>true</real_time>
52 |         <real_time_factor>true</real_time_factor>
53 |         <iterations>true</iterations>
54 |       </plugin>
55 | 
56 |       <!-- Three floating toolbars (250x50 at 0,0 and 150x50 at 250,0 in
57 |            #666666; 250x50 at 0,50 in #777777) and three docked panels, the
58 |            last one displaying the color_camera/image_raw topic, complete the
59 |            GUI; their plugin names were also lost in extraction. -->
60 | 
61 |     </gui>
62 | 
63 |     <physics>
64 |       <max_step_size>0.001</max_step_size>
65 |       <real_time_factor>1</real_time_factor>
66 |       <real_time_update_rate>1000</real_time_update_rate>
67 |     </physics>
68 | 
69 |     <!-- Four world system plugins follow in the original (plugin attributes
70 |          lost); only the sensors render-engine setting survived: -->
71 |     <plugin>
72 |       <render_engine>ogre2</render_engine>
73 |     </plugin>
74 | 
75 |     <gravity>0 0 -9.8000000000000007</gravity>
76 |     <magnetic_field>5.5644999999999998e-06 2.2875799999999999e-05 -4.2388400000000002e-05</magnetic_field>
77 | 
78 |     <scene>
79 |       <ambient>0.400000006 0.400000006 0.400000006 1</ambient>
80 |       <background>0.699999988 0.699999988 0.699999988 1</background>
81 |       <shadows>true</shadows>
82 |     </scene>
83 | 
84 |     <model name="ground_plane">
85 |       <static>true</static>
86 |       <link name="link">
87 |         <collision name="collision">
88 |           <geometry>
89 |             <plane>
90 |               <normal>0 0 1</normal>
91 |               <size>100 100</size>
92 |             </plane>
93 |           </geometry>
94 |         </collision>
95 |         <visual name="visual">
96 |           <geometry>
97 |             <plane>
98 |               <normal>0 0 1</normal>
99 |               <size>100 100</size>
100 |             </plane>
101 |           </geometry>
102 |           <material>
103 |             <ambient>0.800000012 0.800000012 0.800000012 1</ambient>
104 |             <diffuse>0.800000012 0.800000012 0.800000012 1</diffuse>
105 |             <specular>0.800000012 0.800000012 0.800000012 1</specular>
106 |           </material>
107 |         </visual>
108 |         <pose>0 0 0 0 0 0</pose>
109 |         <inertial>
110 |           <pose>0 0 0 0 0 0</pose>
111 |           <mass>1</mass>
112 |           <inertia>
113 |             <ixx>1</ixx>
114 |             <ixy>0</ixy>
115 |             <ixz>0</ixz>
116 |             <iyy>1</iyy>
117 |             <iyz>0</iyz>
118 |             <izz>1</izz>
119 |           </inertia>
120 |         </inertial>
121 |         <enable_wind>false</enable_wind>
122 |       </link>
123 |       <pose>0 0 0 0 0 0</pose>
124 |       <self_collide>false</self_collide>
125 |     </model>
126 | 
127 |     <model name="camera">
128 |       <static>true</static>
129 |       <pose>-0.77 0.20 1.41 0 0.37 1.57</pose>
130 |       <link name="link">
131 |         <pose>0 0 0 0 0 0</pose>
132 |         <inertial>
133 |           <mass>0.1</mass>
134 |           <inertia>
135 |             <ixx>0.000166667</ixx>
136 |             <iyy>0.000166667</iyy>
137 |             <izz>0.000166667</izz>
138 |           </inertia>
139 |         </inertial>
140 |         <collision name="collision">
141 |           <geometry>
142 |             <box>
143 |               <size>0.1 0.1 0.1</size>
144 |             </box>
145 |           </geometry>
146 |         </collision>
147 |         <visual name="visual">
148 |           <geometry>
149 |             <box>
150 |               <size>0.05 0.05 0.05</size>
151 |             </box>
152 |           </geometry>
153 |         </visual>
154 |         <sensor name="camera" type="camera">
155 |           <camera>
156 |             <horizontal_fov>1.047</horizontal_fov>
157 |             <image>
158 |               <width>320</width>
159 |               <height>240</height>
160 |             </image>
161 |             <clip>
162 |               <near>0.1</near>
163 |               <far>100</far>
164 |             </clip>
165 |           </camera>
166 |           <always_on>1</always_on>
167 |           <update_rate>30</update_rate>
168 |           <visualize>true</visualize>
169 |           <topic>color_camera/image_raw</topic>
170 |         </sensor>
171 |       </link>
172 |     </model>
173 | 
174 |     <include>
175 |       <uri>model://table</uri>
176 |       <name>Table</name>
177 |       <pose>-0.78708818628089183 1.3378657486190808 0 0 0 0</pose>
178 |     </include>
179 | 
180 |     <include>
181 |       <uri>model://first_aid</uri>
182 |       <name>first_aid</name>
183 |       <pose>-1.1212886571884155 1.1123484373092651 1.1133700370788574 0 0 0.3856180019829511</pose>
184 |     </include>
185 | 
186 |     <include>
187 |       <uri>model://adhesive</uri>
188 |       <name>adhesive</name>
189 |       <pose>-0.77862358093261719 1.1180200576782227 1.1701400279998779 0 0 0</pose>
190 |     </include>
191 | 
192 |     <include>
193 |       <uri>model://plastic_cup</uri>
194 |       <name>plastic_cup</name>
195 |       <pose>-0.47838106751441956 1.0659799575805664 1.0227999687194824 0 0 0</pose>
196 |     </include>
197 | 
198 |     <light name="sun" type="directional">
199 |       <pose>0 0 10 0 0 0</pose>
200 |       <cast_shadows>true</cast_shadows>
201 |       <intensity>1</intensity>
202 |       <direction>-0.5 0.10000000000000001 -0.90000000000000002</direction>
203 |       <diffuse>0.800000012 0.800000012 0.800000012 1</diffuse>
204 |       <specular>0.200000003 0.200000003 0.200000003 1</specular>
205 |       <attenuation>
206 |         <range>1000</range>
207 |         <linear>0.01</linear>
208 |         <constant>0.90000000000000002</constant>
209 |         <quadratic>0.001</quadratic>
210 |       </attenuation>
211 |       <spot>
212 |         <inner_angle>0</inner_angle>
213 |         <outer_angle>0</outer_angle>
214 |         <falloff>0</falloff>
215 |       </spot>
216 |     </light>
217 | 
218 |   </world>
219 | </sdf>
--------------------------------------------------------------------------------