├── LICENSE.md
├── README.md
├── docker
│   ├── README.md
│   ├── docker_build.sh
│   ├── docker_run.sh
│   ├── dockerfile.ros.eloquent.trt_pose
│   └── packages
│       └── ros_entrypoint.sh
├── giistr-cla.md
├── images
│   └── detected_output.png
├── pose_msgs
│   ├── CMakeLists.txt
│   ├── msg
│   │   ├── BodypartDetection.msg
│   │   └── PersonDetection.msg
│   └── package.xml
└── ros2_trt_pose
    ├── launch
    │   ├── pose-estimation.launch.py
    │   └── pose_estimation.rviz
    ├── package.xml
    ├── resource
    │   └── ros2_trt_pose
    ├── ros2_trt_pose
    │   ├── __init__.py
    │   ├── helper.py
    │   ├── live_demo.py
    │   └── utils.py
    ├── setup.cfg
    ├── setup.py
    └── test
        ├── test_copyright.py
        ├── test_flake8.py
        └── test_pep257.py
/LICENSE.md:
--------------------------------------------------------------------------------
1 | Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
2 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
3 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
4 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
5 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ros2_trt_pose
2 | [NVIDIA Developer Blog](https://developer.nvidia.com/blog/implementing-robotics-applications-with-ros-2-and-ai-on-jetson-platform-2/)
3 |
4 | This repository provides a ROS 2 wrapper for [trt_pose](https://github.com/NVIDIA-AI-IOT/trt_pose), enabling real-time pose estimation on NVIDIA Jetson.
5 |
6 | ## Package outputs:
7 | - Pose message
8 | - Pose detection image message
9 | - Visualization markers
10 | - Launch file for RViz
11 |
12 | 
13 |
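The outputs above are published as ROS 2 topics once the node is running (see the Run section below); for example:
```
$ ros2 topic list
$ ros2 topic echo /pose_msgs
```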
14 | ## Requirements:
15 | - ROS 2 Eloquent:
16 | - [Docker Support](https://github.com/NVIDIA-AI-IOT/ros2_trt_pose/blob/main/docker/README.md)
17 | - [Install Instructions](https://index.ros.org/doc/ros2/Installation/Eloquent/Linux-Development-Setup/)
18 | - trt_pose
19 | - [Dependencies for trt_pose](https://github.com/NVIDIA-AI-IOT/trt_pose#step-1---install-dependencies)
20 | - [Install trt_pose](https://github.com/NVIDIA-AI-IOT/trt_pose#step-2---install-trt_pose)
21 |
22 | ## Build:
23 | - Clone this repository into your ROS 2 workspace
24 | ```
25 | $ cd ros2_ws/src/
26 | $ git clone https://github.com/NVIDIA-AI-IOT/ros2_trt_pose.git
27 | ```
28 | - Install requirements using ```rosdep```
29 | ```
30 | $ rosdep install --from-paths src --ignore-src --rosdistro eloquent -y
31 | ```
32 | - Build and Install ros2_trt_pose package
33 | ```
34 | $ colcon build
35 | $ source install/local_setup.sh
36 | ```
37 |
38 | ## Run
39 | - Change the Jetson power mode
40 | ``` sudo nvpmodel -m2 ``` (for Jetson Xavier NX)
41 | ``` sudo nvpmodel -m0 ``` (for Jetson Xavier and Jetson Nano)
42 | - Input images are captured with the ```image_tools``` package
43 | ``` ros2 run image_tools cam2image ```
44 | - Keep the ```trt_pose``` model files in ```base_dir```; it should contain:
45 | - Model weights for resnet18 or densenet121 ([download link](https://github.com/NVIDIA-AI-IOT/trt_pose#step-3---run-the-example-notebook)); ```utils.py``` expects the filenames ```resnet18_baseline_att_224x224_A_epoch_249.pth``` / ```densenet121_baseline_att_256x256_B_epoch_160.pth```
46 | - Human pose points [json file](https://github.com/NVIDIA-AI-IOT/trt_pose/blob/master/tasks/human_pose/human_pose.json), saved as ```human_pose.json```
47 | - To run, you can use the launch file or start each node manually:
48 | - Run using Launch file
49 | ```
50 | $ ros2 launch ros2_trt_pose pose-estimation.launch.py
51 | ```
52 | *Note: update the RViz config path (```default_rviz_config_path```) in* ```launch/pose-estimation.launch.py```
53 |
54 | - Run ```ros2_trt_pose``` node
55 | ```
56 | $ ros2 run ros2_trt_pose pose-estimation --ros-args -p base_dir:='<path-to-base_dir>'
57 | ```
58 | - Run each of the following in a separate terminal:
59 | - See the pose message (a minimal Python subscriber sketch follows this list)
60 | ```
61 | $ source install/local_setup.sh
62 | $ ros2 run rqt_topic rqt_topic
63 | ```
64 | - Visualize markers
65 | ```
66 | $ ros2 run rviz2 rviz2 -d launch/pose_estimation.rviz
67 | ```
68 |
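To consume the pose message programmatically rather than through `rqt_topic`, a minimal subscriber sketch could look like the following (the node and file names are illustrative; `PersonDetection` and the `pose_msgs` topic come from this repository):

```python
# pose_listener.py -- minimal sketch of a PersonDetection subscriber
import math

import rclpy
from rclpy.node import Node
from pose_msgs.msg import PersonDetection


class PoseListener(Node):
    def __init__(self):
        super().__init__('pose_listener')
        # 'pose_msgs' is the topic name used by the ros2_trt_pose node
        self.create_subscription(PersonDetection, 'pose_msgs', self.callback, 10)

    def callback(self, msg):
        # Undetected body parts are initialized to NaN by the publisher
        if not math.isnan(msg.nose.x):
            self.get_logger().info('Person {}: nose at ({:.1f}, {:.1f})'.format(
                msg.person_id, msg.nose.x, msg.nose.y))


def main(args=None):
    rclpy.init(args=args)
    node = PoseListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```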
69 | ## Other related ROS 2 projects
70 | - [ros2_torch_trt](https://github.com/NVIDIA-AI-IOT/ros2_torch_trt) : ROS2 Real Time Classification and Detection
71 | - [ros2_deepstream](https://github.com/NVIDIA-AI-IOT/ros2_deepstream) : ROS2 nodes for DeepStream applications
72 | - [ros2_jetson_stats](https://github.com/NVIDIA-AI-IOT/ros2_jetson_stats) : ROS 2 package for monitoring and controlling NVIDIA Jetson Platform resources
73 |
74 |
--------------------------------------------------------------------------------
/docker/README.md:
--------------------------------------------------------------------------------
1 | # Run ros2_trt_pose in Docker
2 |
3 | For more Jetson containers, have a look at the [jetson-containers](https://github.com/dusty-nv/jetson-containers) GitHub repository.
4 |
5 | ## Docker Default Runtime
6 |
7 | To enable access to the CUDA compiler (nvcc) during `docker build` operations, add `"default-runtime": "nvidia"` to your `/etc/docker/daemon.json` configuration file before attempting to build the containers:
8 |
9 | ``` json
10 | {
11 | "runtimes": {
12 | "nvidia": {
13 | "path": "nvidia-container-runtime",
14 | "runtimeArgs": []
15 | }
16 | },
17 |
18 | "default-runtime": "nvidia"
19 | }
20 | ```
21 |
22 | You will then want to restart the Docker service (for example, ```sudo systemctl restart docker```) or reboot your system before proceeding.
23 |
24 | ## Building the Containers
25 | ```
26 | $ cp /etc/apt/trusted.gpg.d/jetson-ota-public.asc .
27 | $ sh docker_build.sh
28 | ```
29 | Once the build succeeds, you will have a ROS 2 Eloquent container with all the packages required for this repository.
30 | ## Run Container
31 |
32 | ```
33 | $ sh docker_run.sh
34 | ```
35 | This starts the container. Inside it, clone this repository as shown below and follow the build and run instructions for the ROS 2 packages from the top-level README.
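For example, using the same clone command as in the top-level README:
```
$ git clone https://github.com/NVIDIA-AI-IOT/ros2_trt_pose.git
```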
34 |
35 |
36 |
37 |
--------------------------------------------------------------------------------
/docker/docker_build.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Build the ROS 2 Eloquent + trt_pose image (expects jetson-ota-public.asc in this directory; see README)
3 | sudo docker build -t ros2_trt_pose_base:jp44 -f dockerfile.ros.eloquent.trt_pose .
4 |
--------------------------------------------------------------------------------
/docker/docker_run.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Allow the container's root user to access the host X server (for display windows)
3 | sudo xhost +si:localuser:root
4 | # Expose the camera and X11 socket; adjust the /home/ak-nv/ros_docker host path to your own work directory
5 | sudo docker run -it --rm --runtime nvidia --device="/dev/video0:/dev/video0" --network host -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /home/ak-nv/ros_docker:/workdir ros2_trt_pose_base:jp44
--------------------------------------------------------------------------------
/docker/dockerfile.ros.eloquent.trt_pose:
--------------------------------------------------------------------------------
1 | #
2 | # this dockerfile roughly follows the 'Install ROS2 Via Debian Packages' from:
3 | # https://index.ros.org/doc/ros2/Installation/Eloquent/Linux-Install-Debians/
4 | #
5 | ARG BASE_IMAGE=nvcr.io/nvidia/l4t-base:r32.4.3
6 | FROM ${BASE_IMAGE}
7 |
8 | ARG ROS_PKG=ros_base
9 | ENV ROS_DISTRO=eloquent
10 | ENV ROS_ROOT=/opt/ros/${ROS_DISTRO}
11 |
12 | ENV DEBIAN_FRONTEND=noninteractive
13 |
14 | WORKDIR /workspace
15 |
16 | # change the locale from POSIX to UTF-8
17 | RUN locale-gen en_US en_US.UTF-8 && update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
18 | ENV LANG=en_US.UTF-8
19 |
20 | # add the ROS deb repo to the apt sources list
21 | RUN apt-get update && \
22 | apt-get install -y --no-install-recommends \
23 | git \
24 | cmake \
25 | build-essential \
26 | curl \
27 | wget \
28 | gnupg2 \
29 | lsb-release \
30 | && rm -rf /var/lib/apt/lists/*
31 |
32 | RUN wget --no-check-certificate https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc && apt-key add ros.asc
33 | RUN sh -c 'echo "deb [arch=$(dpkg --print-architecture)] http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" > /etc/apt/sources.list.d/ros2-latest.list'
34 |
35 | # install ROS packages
36 | RUN apt-get update && \
37 | apt-get install -y --no-install-recommends \
38 | ros-eloquent-ros-base \
39 | ros-eloquent-launch-xml \
40 | ros-eloquent-launch-yaml \
41 | ros-eloquent-vision-msgs \
42 | ros-eloquent-image-tools \
43 | libpython3-dev \
44 | python3-colcon-common-extensions \
45 | python3-rosdep \
46 | && rm -rf /var/lib/apt/lists/*
47 |
48 | # init/update rosdep
49 | RUN apt-get update && \
50 | cd ${ROS_ROOT} && \
51 | rosdep init && \
52 | rosdep update && \
53 | rm -rf /var/lib/apt/lists/*
54 |
55 | # compile yaml-cpp-0.6, which some ROS packages may use (but is not in the 18.04 apt repo)
56 | RUN git clone --branch yaml-cpp-0.6.0 https://github.com/jbeder/yaml-cpp yaml-cpp-0.6 && \
57 | cd yaml-cpp-0.6 && \
58 | mkdir build && \
59 | cd build && \
60 | cmake -DBUILD_SHARED_LIBS=ON .. && \
61 | make -j$(nproc) && \
62 | cp libyaml-cpp.so.0.6.0 /usr/lib/aarch64-linux-gnu/ && \
63 | ln -s /usr/lib/aarch64-linux-gnu/libyaml-cpp.so.0.6.0 /usr/lib/aarch64-linux-gnu/libyaml-cpp.so.0.6
64 |
65 | # setup entrypoint
66 | COPY ./packages/ros_entrypoint.sh /ros_entrypoint.sh
67 | RUN echo 'source ${ROS_ROOT}/setup.bash' >> /root/.bashrc
68 | RUN chmod +x /ros_entrypoint.sh
69 | ENTRYPOINT ["/ros_entrypoint.sh"]
70 | CMD ["bash"]
71 | WORKDIR /
72 |
73 | #
74 | # install OpenCV (with GStreamer support)
75 | #
76 | COPY jetson-ota-public.asc /etc/apt/trusted.gpg.d/jetson-ota-public.asc
77 |
78 | RUN echo "deb https://repo.download.nvidia.com/jetson/common r32.4 main" > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list && \
79 | apt-get update && \
80 | apt-get install -y --no-install-recommends \
81 | libopencv-python \
82 | && rm /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
83 | && rm -rf /var/lib/apt/lists/*
84 |
85 | # -----------------------------
86 | # PyTorch Installations
87 | # ----------------------------
88 | #
89 | # install prerequisites (many of these are for numpy)
90 | #
91 | ENV PATH="/usr/local/cuda-10.2/bin:${PATH}"
92 | ENV LD_LIBRARY_PATH="/usr/local/cuda-10.2/lib64:/usr/local/cuda-10.2/extras/CUPTI/lib64:${LD_LIBRARY_PATH}"
93 |
94 |
95 | RUN apt-get update && \
96 | ldconfig && \
97 | apt-get install -y --no-install-recommends \
98 | python3-pip \
99 | python3-dev \
100 | libopenblas-dev \
101 | libopenmpi2 \
102 | openmpi-bin \
103 | openmpi-common \
104 | gfortran \
105 | && rm -rf /var/lib/apt/lists/*
106 |
107 | RUN pip3 install setuptools Cython wheel
108 | RUN pip3 install numpy --verbose
109 |
110 | #
111 | # PyTorch (for JetPack 4.4 DP)
112 | #
113 | # PyTorch v1.2.0 https://nvidia.box.com/shared/static/lufbgr3xu2uha40cs9ryq1zn4kxsnogl.whl (torch-1.2.0-cp36-cp36m-linux_aarch64.whl)
114 | # PyTorch v1.3.0 https://nvidia.box.com/shared/static/017sci9z4a0xhtwrb4ps52frdfti9iw0.whl (torch-1.3.0-cp36-cp36m-linux_aarch64.whl)
115 | # PyTorch v1.4.0 https://nvidia.box.com/shared/static/c3d7vm4gcs9m728j6o5vjay2jdedqb55.whl (torch-1.4.0-cp36-cp36m-linux_aarch64.whl)
116 | # PyTorch v1.5.0 https://nvidia.box.com/shared/static/3ibazbiwtkl181n95n9em3wtrca7tdzp.whl (torch-1.5.0-cp36-cp36m-linux_aarch64.whl)
117 | #
118 | ARG PYTORCH_URL=https://nvidia.box.com/shared/static/9eptse6jyly1ggt9axbja2yrmj6pbarc.whl
119 | ARG PYTORCH_WHL=torch-1.6.0-cp36-cp36m-linux_aarch64.whl
120 |
121 | RUN wget --quiet --show-progress --progress=bar:force:noscroll --no-check-certificate ${PYTORCH_URL} -O ${PYTORCH_WHL} && \
122 | pip3 install ${PYTORCH_WHL} --verbose && \
123 | rm ${PYTORCH_WHL}
124 |
125 |
126 | #
127 | # torchvision 0.7
128 | #
129 | ARG TORCHVISION_VERSION=v0.7.0
130 | #ARG PILLOW_VERSION="pillow<7"
131 | ARG TORCH_CUDA_ARCH_LIST="7.2"
132 |
133 | #RUN printenv && echo "torchvision version = $TORCHVISION_VERSION" && echo "pillow version = $PILLOW_VERSION" && echo "TORCH_CUDA_ARCH_LIST = $TORCH_CUDA_ARCH_LIST"
134 |
135 | RUN apt-get update && \
136 | apt-get install -y --no-install-recommends \
137 | git \
138 | build-essential \
139 | libjpeg-dev \
140 | zlib1g-dev \
141 | && rm -rf /var/lib/apt/lists/*
142 |
143 | RUN git clone -b ${TORCHVISION_VERSION} https://github.com/pytorch/vision torchvision && \
144 | cd torchvision && \
145 | python3 setup.py install && \
146 | cd ../ && \
147 | rm -rf torchvision
148 |
149 | #
150 | # PyCUDA
151 | #
152 | ENV PATH="/usr/local/cuda/bin:${PATH}"
153 | ENV LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
154 | RUN echo "$PATH" && echo "$LD_LIBRARY_PATH"
155 |
156 | RUN pip3 install pycuda --verbose
157 | # -------------------
158 | # torch2trt installations
159 | # -------------------
160 | RUN git clone https://github.com/NVIDIA-AI-IOT/torch2trt && \
161 | cd torch2trt && \
162 | python3 setup.py install --plugins
163 | # -------------------
164 | # trt_pose installation
165 | # -------------------
166 | RUN apt-get update && \
167 | apt-get install -y python3-matplotlib
168 | RUN pip3 install tqdm cython pycocotools
169 |
170 | RUN git clone https://github.com/NVIDIA-AI-IOT/trt_pose && \
171 | cd trt_pose && \
172 | python3 setup.py install
173 |
174 | # setup entrypoint
175 | #COPY ./packages/ros_entrypoint.sh /ros_entrypoint.sh
176 | #RUN echo 'source ${ROS_ROOT}/setup.bash' >> /root/.bashrc
177 | #ENTRYPOINT ["/ros_entrypoint.sh"]
178 | #CMD ["bash"]
179 | #WORKDIR /
180 |
--------------------------------------------------------------------------------
/docker/packages/ros_entrypoint.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -e
3 |
4 | # setup ros2 environment
5 | source "/opt/ros/$ROS_DISTRO/setup.bash"
6 | exec "$@"
7 |
--------------------------------------------------------------------------------
/giistr-cla.md:
--------------------------------------------------------------------------------
1 | ## Individual Contributor License Agreement (CLA)
2 |
3 | **Thank you for submitting your contributions to this project.**
4 |
5 | By signing this CLA, you agree that the following terms apply to all of your past, present and future contributions
6 | to the project.
7 |
8 | ### License.
9 |
10 | You hereby represent that all present, past and future contributions are governed by the
11 | [MIT License](https://opensource.org/licenses/MIT)
12 | copyright statement.
13 |
14 | This entails that to the extent possible under law, you transfer all copyright and related or neighboring rights
15 | of the code or documents you contribute to the project itself or its maintainers.
16 | Furthermore, you also represent that you have the authority to perform the above waiver
17 | with respect to the entirety of your contributions.
18 |
19 | ### Moral Rights.
20 |
21 | To the fullest extent permitted under applicable law, you hereby waive, and agree not to
22 | assert, all of your “moral rights” in or relating to your contributions for the benefit of the project.
23 |
24 | ### Third Party Content.
25 |
26 | If your Contribution includes or is based on any source code, object code, bug fixes, configuration changes, tools,
27 | specifications, documentation, data, materials, feedback, information or other works of authorship that were not
28 | authored by you (“Third Party Content”) or if you are aware of any third party intellectual property or proprietary
29 | rights associated with your Contribution (“Third Party Rights”),
30 | then you agree to include with the submission of your Contribution full details respecting such Third Party
31 | Content and Third Party Rights, including, without limitation, identification of which aspects of your
32 | Contribution contain Third Party Content or are associated with Third Party Rights, the owner/author of the
33 | Third Party Content and Third Party Rights, where you obtained the Third Party Content, and any applicable
34 | third party license terms or restrictions respecting the Third Party Content and Third Party Rights. For greater
35 | certainty, the foregoing obligations respecting the identification of Third Party Content and Third Party Rights
36 | do not apply to any portion of a Project that is incorporated into your Contribution to that same Project.
37 |
38 | ### Representations.
39 |
40 | You represent that, other than the Third Party Content and Third Party Rights identified by
41 | you in accordance with this Agreement, you are the sole author of your Contributions and are legally entitled
42 | to grant the foregoing licenses and waivers in respect of your Contributions. If your Contributions were
43 | created in the course of your employment with your past or present employer(s), you represent that such
44 | employer(s) has authorized you to make your Contributions on behalf of such employer(s) or such employer
45 | (s) has waived all of their right, title or interest in or to your Contributions.
46 |
47 | ### Disclaimer.
48 |
49 | To the fullest extent permitted under applicable law, your Contributions are provided on an "as is"
50 | basis, without any warranties or conditions, express or implied, including, without limitation, any implied
51 | warranties or conditions of non-infringement, merchantability or fitness for a particular purpose. You are not
52 | required to provide support for your Contributions, except to the extent you desire to provide support.
53 |
54 | ### No Obligation.
55 |
56 | You acknowledge that the maintainers of this project are under no obligation to use or incorporate your contributions
57 | into the project. The decision to use or incorporate your contributions into the project will be made at the
58 | sole discretion of the maintainers or their authorized delegates.
59 |
--------------------------------------------------------------------------------
/images/detected_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA-AI-IOT/ros2_trt_pose/4a3cc9d9aa1cc3d20ac5e6e9e1dcec833f18e58f/images/detected_output.png
--------------------------------------------------------------------------------
/pose_msgs/CMakeLists.txt:
--------------------------------------------------------------------------------
1 | cmake_minimum_required(VERSION 3.5)
2 | project(pose_msgs)
3 |
4 | # Default to C99
5 | if(NOT CMAKE_C_STANDARD)
6 | set(CMAKE_C_STANDARD 99)
7 | endif()
8 |
9 | # Default to C++14
10 | if(NOT CMAKE_CXX_STANDARD)
11 | set(CMAKE_CXX_STANDARD 14)
12 | endif()
13 |
14 | if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
15 | add_compile_options(-Wall -Wextra -Wpedantic)
16 | endif()
17 |
18 | # find dependencies
19 | find_package(ament_cmake REQUIRED)
20 | # uncomment the following section in order to fill in
21 | # further dependencies manually.
22 | # find_package( REQUIRED)
23 | find_package(rosidl_default_generators REQUIRED)
24 |
25 | rosidl_generate_interfaces(${PROJECT_NAME}
26 | "msg/BodypartDetection.msg"
27 | "msg/PersonDetection.msg"
28 | )
29 | if(BUILD_TESTING)
30 | find_package(ament_lint_auto REQUIRED)
31 | # the following line skips the linter which checks for copyrights
32 | # uncomment the line when a copyright and license is not present in all source files
33 | #set(ament_cmake_copyright_FOUND TRUE)
34 | # the following line skips cpplint (only works in a git repo)
35 | # uncomment the line when this package is not in a git repo
36 | #set(ament_cmake_cpplint_FOUND TRUE)
37 | ament_lint_auto_find_test_dependencies()
38 | endif()
39 |
40 | ament_package()
41 |
--------------------------------------------------------------------------------
/pose_msgs/msg/BodypartDetection.msg:
--------------------------------------------------------------------------------
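# Image-pixel location and confidence of one body part.
# The publisher (ros2_trt_pose/helper.py) initializes undetected parts to NaN.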
1 | float32 x
2 | float32 y
3 | float32 confidence
4 |
--------------------------------------------------------------------------------
/pose_msgs/msg/PersonDetection.msg:
--------------------------------------------------------------------------------
1 | uint32 num_people_detected
2 | uint32 person_id
3 | BodypartDetection nose
4 | BodypartDetection neck
5 | BodypartDetection right_shoulder
6 | BodypartDetection right_elbow
7 | BodypartDetection right_wrist
8 | BodypartDetection left_shoulder
9 | BodypartDetection left_elbow
10 | BodypartDetection left_wrist
11 | BodypartDetection right_hip
12 | BodypartDetection right_knee
13 | BodypartDetection right_ankle
14 | BodypartDetection left_hip
15 | BodypartDetection left_knee
16 | BodypartDetection left_ankle
17 | BodypartDetection right_eye
18 | BodypartDetection left_eye
19 | BodypartDetection right_ear
20 | BodypartDetection left_ear
21 | BodypartDetection chest
22 |
--------------------------------------------------------------------------------
/pose_msgs/package.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0"?>
2 | <package format="3">
3 |   <name>pose_msgs</name>
4 |   <version>0.0.0</version>
5 |   <description>TODO: Package description</description>
6 |   <maintainer email="user@todo.todo">nvidia</maintainer>
7 |   <license>TODO: License declaration</license>
8 |
9 |   <buildtool_depend>ament_cmake</buildtool_depend>
10 |   <buildtool_depend>rosidl_default_generators</buildtool_depend>
11 |
12 |   <exec_depend>rosidl_default_runtime</exec_depend>
13 |
14 |   <member_of_group>rosidl_interface_packages</member_of_group>
15 |
16 |   <test_depend>ament_lint_auto</test_depend>
17 |   <test_depend>ament_lint_common</test_depend>
18 |
19 |   <export>
20 |     <build_type>ament_cmake</build_type>
21 |   </export>
22 | </package>
--------------------------------------------------------------------------------
/ros2_trt_pose/launch/pose-estimation.launch.py:
--------------------------------------------------------------------------------
1 | # ---------------------------------------------------------------------------------------
2 | # Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
3 | # Permission is hereby granted, free of charge, to any person obtaining
4 | # a copy of this software and associated documentation files (the "Software"),
5 | # to deal in the Software without restriction, including without limitation
6 | # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 | # and/or sell copies of the Software, and to permit persons to whom the
8 | # Software is furnished to do so, subject to the following conditions:
9 | # The above copyright notice and this permission notice shall be included
10 | # in all copies or substantial portions of the Software.
11 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
12 | # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
13 | # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
14 | # DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
15 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 | # ---------------------------------------------------------------------------------------
17 |
18 | import launch
19 | import launch_ros
20 | from launch import LaunchDescription
21 | from launch_ros.actions import Node
22 | from launch.substitutions import LaunchConfiguration
23 | import os
24 | def generate_launch_description():
25 | pkg_share = launch_ros.substitutions.FindPackageShare(package='ros2_trt_pose').find('ros2_trt_pose')
26 | print(pkg_share)
27 | default_rviz_config_path = 'src/ros2_trt_pose/ros2_trt_pose/launch/pose_estimation.rviz'  # update to match your workspace layout (see README)
28 |
29 | trt_pose_node = Node(
30 | package="ros2_trt_pose",
31 | node_executable="pose-estimation",
32 | node_name="pose_estimation",
33 | output="screen",
34 | parameters = [{
35 | 'base_dir':'/home/ak-nv/trt_pose/tasks/human_pose',
36 | 'model': 'resnet18',
37 | 'point_range' : 10,
38 | 'show_image' : False,
39 | }],
40 | )
41 | cam2image_node = Node(
42 | package="image_tools",
43 | node_executable="cam2image",
44 | node_name="cam",
45 | )
46 |
47 | rviz_node = Node(
48 | package="rviz2",
49 | node_executable="rviz2",
50 | node_name="rviz2",
51 | arguments=['-d', LaunchConfiguration('rvizconfig')],
52 | )
53 |
54 | return LaunchDescription([
55 | launch.actions.DeclareLaunchArgument(name='rvizconfig', default_value=default_rviz_config_path),
56 | trt_pose_node,
57 | cam2image_node,
58 | rviz_node
59 | ])
60 |
--------------------------------------------------------------------------------
/ros2_trt_pose/launch/pose_estimation.rviz:
--------------------------------------------------------------------------------
1 | Panels:
2 | - Class: rviz_common/Displays
3 | Help Height: 78
4 | Name: Displays
5 | Property Tree Widget:
6 | Expanded:
7 | - /Global Options1
8 | - /Status1
9 | Splitter Ratio: 0.5
10 | Tree Height: 404
11 | - Class: rviz_common/Selection
12 | Name: Selection
13 | - Class: rviz_common/Tool Properties
14 | Expanded:
15 | - /2D Goal Pose1
16 | - /Publish Point1
17 | Name: Tool Properties
18 | Splitter Ratio: 0.5886790156364441
19 | - Class: rviz_common/Views
20 | Expanded:
21 | - /Current View1
22 | Name: Views
23 | Splitter Ratio: 0.5
24 | Visualization Manager:
25 | Class: ""
26 | Displays:
27 | - Alpha: 0.5
28 | Cell Size: 1
29 | Class: rviz_default_plugins/Grid
30 | Color: 160; 160; 164
31 | Enabled: true
32 | Line Style:
33 | Line Width: 0.029999999329447746
34 | Value: Lines
35 | Name: Grid
36 | Normal Cell Count: 0
37 | Offset:
38 | X: 0
39 | Y: 0
40 | Z: 0
41 | Plane: XY
42 | Plane Cell Count: 10
43 | Reference Frame:
44 | Value: true
45 | - Class: rviz_default_plugins/Marker
46 | Enabled: true
47 | Name: Marker
48 | Namespaces:
49 | joints: true
50 | Topic:
51 | Depth: 5
52 | Durability Policy: Volatile
53 | History Policy: Keep Last
54 | Reliability Policy: Reliable
55 | Value: /body_joints
56 | Value: true
57 | - Class: rviz_default_plugins/Marker
58 | Enabled: true
59 | Name: Marker
60 | Namespaces:
61 | joint_line: true
62 | Topic:
63 | Depth: 5
64 | Durability Policy: Volatile
65 | History Policy: Keep Last
66 | Reliability Policy: Reliable
67 | Value: /body_skeleton
68 | Value: true
69 | - Class: rviz_default_plugins/Image
70 | Enabled: true
71 | Max Value: 1
72 | Median window: 5
73 | Min Value: 0
74 | Name: Image
75 | Normalize Range: true
76 | Topic:
77 | Depth: 5
78 | Durability Policy: Volatile
79 | History Policy: Keep Last
80 | Reliability Policy: Reliable
81 | Value: /detections_image
82 | Value: true
83 | Enabled: true
84 | Global Options:
85 | Background Color: 48; 48; 48
86 | Fixed Frame: map
87 | Frame Rate: 30
88 | Name: root
89 | Tools:
90 | - Class: rviz_default_plugins/Interact
91 | Hide Inactive Objects: true
92 | - Class: rviz_default_plugins/MoveCamera
93 | - Class: rviz_default_plugins/Select
94 | - Class: rviz_default_plugins/FocusCamera
95 | - Class: rviz_default_plugins/Measure
96 | Line color: 128; 128; 0
97 | - Class: rviz_default_plugins/SetInitialPose
98 | Topic:
99 | Depth: 5
100 | Durability Policy: Volatile
101 | History Policy: Keep Last
102 | Reliability Policy: Reliable
103 | Value: /initialpose
104 | - Class: rviz_default_plugins/SetGoal
105 | Topic:
106 | Depth: 5
107 | Durability Policy: Volatile
108 | History Policy: Keep Last
109 | Reliability Policy: Reliable
110 | Value: /goal_pose
111 | - Class: rviz_default_plugins/PublishPoint
112 | Single click: true
113 | Topic:
114 | Depth: 5
115 | Durability Policy: Volatile
116 | History Policy: Keep Last
117 | Reliability Policy: Reliable
118 | Value: /clicked_point
119 | Transformation:
120 | Current:
121 | Class: rviz_default_plugins/TF
122 | Value: true
123 | Views:
124 | Current:
125 | Class: rviz_default_plugins/Orbit
126 | Distance: 43.63492965698242
127 | Enable Stereo Rendering:
128 | Stereo Eye Separation: 0.05999999865889549
129 | Stereo Focal Distance: 1
130 | Swap Stereo Eyes: false
131 | Value: false
132 | Focal Point:
133 | X: 0
134 | Y: 0
135 | Z: 0
136 | Focal Shape Fixed Size: true
137 | Focal Shape Size: 0.05000000074505806
138 | Invert Z Axis: false
139 | Name: Current View
140 | Near Clip Distance: 0.009999999776482582
141 | Pitch: 0.5653981566429138
142 | Target Frame:
143 | Value: Orbit (rviz)
144 | Yaw: 1.4603978395462036
145 | Saved: ~
146 | Window Geometry:
147 | Displays:
148 | collapsed: false
149 | Height: 1056
150 | Hide Left Dock: false
151 | Hide Right Dock: false
152 | Image:
153 | collapsed: false
154 | QMainWindow State: 000000ff00000000fd0000000400000000000001fc000003dcfc0200000009fb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c00610079007301000000270000021f000000c900fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000000a0049006d006100670065010000024c000001b70000002800ffffff000000010000010f000003dcfc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a005600690065007700730100000027000003dc000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000004420000003efc0100000002fb0000000800540069006d00650100000000000004420000000000000000fb0000000800540069006d0065010000000000000450000000000000000000000428000003dc00000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
155 | Selection:
156 | collapsed: false
157 | Tool Properties:
158 | collapsed: false
159 | Views:
160 | collapsed: false
161 | Width: 1855
162 | X: 65
163 | Y: 24
164 |
--------------------------------------------------------------------------------
/ros2_trt_pose/package.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0"?>
2 | <package format="3">
3 |   <name>ros2_trt_pose</name>
4 |   <version>0.0.0</version>
5 |   <description>TODO: Package description</description>
6 |   <maintainer email="user@todo.todo">nvidia</maintainer>
7 |   <license>TODO: License declaration</license>
8 |
9 |   <exec_depend>pose_msgs</exec_depend>
10 |   <exec_depend>image_tools</exec_depend>
11 |
12 |   <test_depend>ament_copyright</test_depend>
13 |   <test_depend>ament_flake8</test_depend>
14 |   <test_depend>ament_pep257</test_depend>
15 |   <test_depend>python3-pytest</test_depend>
16 |
17 |   <export>
18 |     <build_type>ament_python</build_type>
19 |   </export>
20 | </package>
--------------------------------------------------------------------------------
/ros2_trt_pose/resource/ros2_trt_pose:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA-AI-IOT/ros2_trt_pose/4a3cc9d9aa1cc3d20ac5e6e9e1dcec833f18e58f/ros2_trt_pose/resource/ros2_trt_pose
--------------------------------------------------------------------------------
/ros2_trt_pose/ros2_trt_pose/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NVIDIA-AI-IOT/ros2_trt_pose/4a3cc9d9aa1cc3d20ac5e6e9e1dcec833f18e58f/ros2_trt_pose/ros2_trt_pose/__init__.py
--------------------------------------------------------------------------------
/ros2_trt_pose/ros2_trt_pose/helper.py:
--------------------------------------------------------------------------------
1 | # ---------------------------------------------------------------------------------------
2 | # Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
3 | # Permission is hereby granted, free of charge, to any person obtaining
4 | # a copy of this software and associated documentation files (the "Software"),
5 | # to deal in the Software without restriction, including without limitation
6 | # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 | # and/or sell copies of the Software, and to permit persons to whom the
8 | # Software is furnished to do so, subject to the following conditions:
9 | # The above copyright notice and this permission notice shall be included
10 | # in all copies or substantial portions of the Software.
11 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
12 | # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
13 | # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
14 | # DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
15 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 | # ---------------------------------------------------------------------------------------
17 |
18 | # ROS2 related
19 | import rclpy
20 | from rclpy.node import Node
21 | from sensor_msgs.msg import Image as ImageMsg
22 | from visualization_msgs.msg import Marker
23 | from geometry_msgs.msg import Point
24 | from pose_msgs.msg import BodypartDetection, PersonDetection # For pose_msgs
25 | from rclpy.duration import Duration
26 |
27 | # TRT_pose related
28 | import cv2
29 | import numpy as np
30 | import math
31 | import os
32 | from ros2_trt_pose.utils import preprocess, load_params, load_model, draw_objects
33 |
34 |
35 | class TRTPose(Node):
36 | def __init__(self):
37 | super().__init__('trt_pose')
38 | self.hp_json_file = None
39 | self.model_weights = None
40 | self.width = 224
41 | self.height = 224
42 | self.i = 0
43 | self.image = None
44 | self.model_trt = None
45 | self.annotated_image = None
46 | self.counts = None
47 | self.peaks = None
48 | self.objects = None
49 | self.topology = None
50 | self.xy_circles = []
51 | self.p = None
52 | # ROS2 parameters
53 | self.declare_parameter('base_dir', '/home/ak-nv/trt_pose/tasks/human_pose')
54 | # base_dir must contain the model weights (resnet/densenet) and the human_pose json file
55 | self.declare_parameter('model', 'resnet18') # default to Resnet18
56 | self.declare_parameter('point_range', 10) # default range is 0 to 10
57 | self.declare_parameter('show_image', False) # Show image in cv2.imshow
58 | self.base_dir = self.get_parameter('base_dir').value
59 | self.model_name = self.get_parameter('model').value
60 | self.point_range = self.get_parameter('point_range').value
61 | self.show_image_param = self.get_parameter('show_image').value
62 |
63 | # ROS2 related init
64 | # Image subscriber from cam2image
65 | self.subscriber_ = self.create_subscription(ImageMsg, 'image', self.read_cam_callback, 10)
66 | self.image_pub = self.create_publisher(ImageMsg, 'detections_image', 10)
67 | # Publisher for Body Joints and Skeleton
68 | self.body_joints_pub = self.create_publisher(Marker, 'body_joints', 1000)
69 | self.body_skeleton_pub = self.create_publisher(Marker, 'body_skeleton', 10)
70 | # Publishing pose Message
71 | self.publish_pose = self.create_publisher(PersonDetection, 'pose_msgs', 100)
72 |
73 | def start(self):
74 | # Convert to TRT and Load Params
75 | json_file = os.path.join(self.base_dir, 'human_pose.json')
76 | self.get_logger().info("Loading model weights\n")
77 | self.num_parts, self.num_links, self.model_weights, self.parse_objects, self.topology = load_params(base_dir=self.base_dir,
78 | human_pose_json=json_file,
79 | model_name=self.model_name)
80 | self.model_trt, self.height, self.width = load_model(base_dir=self.base_dir, model_name=self.model_name, num_parts=self.num_parts, num_links=self.num_links,
81 | model_weights=self.model_weights)
82 | self.get_logger().info("Model weights loaded...\n Waiting for images...\n")
83 |
84 | def execute(self):
85 | data = preprocess(image=self.image, width=self.width, height=self.height)
86 | cmap, paf = self.model_trt(data)
87 | cmap, paf = cmap.detach().cpu(), paf.detach().cpu()
88 | self.counts, self.objects, self.peaks = self.parse_objects(cmap,
89 | paf) # , cmap_threshold=0.15, link_threshold=0.15)
90 | annotated_image = draw_objects(image=self.image, object_counts=self.counts, objects=self.objects, normalized_peaks=self.peaks, topology=self.topology)
91 | self.parse_k()
92 |
93 | return annotated_image
94 |
95 | # Subscribe and Publish to image topic
96 | def read_cam_callback(self, msg):
97 | img = np.asarray(msg.data)
98 | self.image = np.reshape(img, (msg.height, msg.width, 3))
99 | self.annotated_image = self.execute()
100 |
101 | image_msg = self.image_np_to_image_msg(self.annotated_image)
102 | self.image_pub.publish(image_msg)
103 | if self.show_image_param:
104 | cv2.imshow('frame', self.annotated_image)
105 | cv2.waitKey(1)
106 |
107 | # Borrowed from OpenPose-ROS repo
108 | def image_np_to_image_msg(self, image_np):
109 | image_msg = ImageMsg()
110 | image_msg.height = image_np.shape[0]
111 | image_msg.width = image_np.shape[1]
112 | image_msg.encoding = 'bgr8'
113 | image_msg.data = image_np.tobytes()
114 | image_msg.step = len(image_msg.data) // image_msg.height
115 | image_msg.header.frame_id = 'map'
116 | return image_msg
117 |
118 | def init_body_part_msg(self):
119 | bodypart = BodypartDetection()
120 | bodypart.x = float('NaN')
121 | bodypart.y = float('NaN')
122 | bodypart.confidence = float('NaN')
123 | return bodypart
124 |
125 | def write_body_part_msg(self, pixel_location):
126 | body_part_pixel_loc = BodypartDetection()
127 | body_part_pixel_loc.y = float(pixel_location[0] * self.height)
128 | body_part_pixel_loc.x = float(pixel_location[1] * self.width)
129 | return body_part_pixel_loc
130 |
131 | def init_markers_spheres(self):
132 | marker_joints = Marker()
133 | marker_joints.header.frame_id = '/map'
134 | marker_joints.id = 1
135 | marker_joints.ns = "joints"
136 | marker_joints.type = marker_joints.SPHERE_LIST
137 | marker_joints.action = marker_joints.ADD
138 | marker_joints.scale.x = 0.7
139 | marker_joints.scale.y = 0.7
140 | marker_joints.scale.z = 0.7
141 | marker_joints.color.a = 1.0
142 | marker_joints.color.r = 1.0
143 | marker_joints.color.g = 0.0
144 | marker_joints.color.b = 0.0
145 | marker_joints.lifetime = Duration(seconds=3, nanoseconds=5e2).to_msg()
146 | return marker_joints
147 |
148 | def init_markers_lines(self):
149 | marker_line = Marker()
150 | marker_line.header.frame_id = '/map'
151 | marker_line.id = 1
152 | marker_line.ns = "joint_line"
153 | marker_line.header.stamp = self.get_clock().now().to_msg()
154 | marker_line.type = marker_line.LINE_LIST
155 | marker_line.action = marker_line.ADD
156 | marker_line.scale.x = 0.1
157 | marker_line.scale.y = 0.1
158 | marker_line.scale.z = 0.1
159 | marker_line.color.a = 1.0
160 | marker_line.color.r = 0.0
161 | marker_line.color.g = 1.0
162 | marker_line.color.b = 0.0
163 | marker_line.lifetime = Duration(seconds=3, nanoseconds=5e2).to_msg()
164 | return marker_line
165 |
166 | def init_all_body_msgs(self, _msg, count):
167 | _msg.person_id = count
168 | _msg.nose = self.init_body_part_msg()
169 | _msg.neck = self.init_body_part_msg()
170 | _msg.right_shoulder = self.init_body_part_msg()
171 | _msg.right_elbow = self.init_body_part_msg()
172 | _msg.right_wrist = self.init_body_part_msg()
173 | _msg.left_shoulder = self.init_body_part_msg()
174 | _msg.left_elbow = self.init_body_part_msg()
175 | _msg.left_wrist = self.init_body_part_msg()
176 | _msg.right_hip = self.init_body_part_msg()
177 | _msg.right_knee = self.init_body_part_msg()
178 | _msg.right_ankle = self.init_body_part_msg()
179 | _msg.left_hip = self.init_body_part_msg()
180 | _msg.left_knee = self.init_body_part_msg()
181 | _msg.left_ankle = self.init_body_part_msg()
182 | _msg.right_eye = self.init_body_part_msg()
183 | _msg.left_eye = self.init_body_part_msg()
184 | _msg.right_ear = self.init_body_part_msg()
185 | _msg.left_ear = self.init_body_part_msg()
186 | return _msg
187 |
188 | def add_point_to_marker(self, body_part_msg):
189 | p = Point()
190 | p.x = float((body_part_msg.x / self.width) * self.point_range)
191 | p.y = float((body_part_msg.y / self.height) * self.point_range)
192 | p.z = 0.0
193 | return p
194 |
195 | def valid_marker_point(self, body_part_msg):
196 | if math.isnan(body_part_msg.x) or math.isnan(body_part_msg.y):
197 | return False
198 | return True
199 |
200 | def parse_k(self):
201 | image_idx = 0
202 | try:
203 | count = int(self.counts[image_idx])
204 | primary_msg = PersonDetection()
205 | primary_msg.num_people_detected = count
206 | for i in range(count):
207 | primary_msg.person_id = i
208 | primary_msg = self.init_all_body_msgs(_msg=primary_msg, count=i)
209 | marker_joints = self.init_markers_spheres()
210 | marker_skeleton = self.init_markers_lines()
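# k indexes the 18 keypoints of human_pose.json, in the order handled below:
# 0 nose, 1 left_eye, 2 right_eye, 3 left_ear, 4 right_ear, 5 left_shoulder,
# 6 right_shoulder, 7 left_elbow, 8 right_elbow, 9 left_wrist, 10 right_wrist,
# 11 left_hip, 12 right_hip, 13 left_knee, 14 right_knee, 15 left_ankle,
# 16 right_ankle, 17 neck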
211 | for k in range(18):
212 | _idx = self.objects[image_idx, i, k]
213 | if _idx >= 0:
214 | _location = self.peaks[image_idx, k, _idx, :]
215 | if k == 0:
216 | primary_msg.nose = self.write_body_part_msg(_location)
217 | marker_joints.points.append(self.add_point_to_marker(primary_msg.nose))
218 | self.get_logger().info(
219 | "Body Part Detected: nose at X:{}, Y:{}".format(primary_msg.nose.x, primary_msg.nose.y))
220 | if k == 1:
221 | primary_msg.left_eye = self.write_body_part_msg(_location)
222 | marker_joints.points.append(self.add_point_to_marker(primary_msg.left_eye))
223 | self.get_logger().info(
224 | "Body Part Detected: left_eye at X:{}, Y:{}".format(primary_msg.left_eye.x,
225 | primary_msg.left_eye.y))
226 | if self.valid_marker_point(primary_msg.nose):
227 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.nose))
228 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_eye))
229 |
230 | if k == 2:
231 | primary_msg.right_eye = self.write_body_part_msg(_location)
232 | marker_joints.points.append(self.add_point_to_marker(primary_msg.right_eye))
233 | self.get_logger().info(
234 | "Body Part Detected: right_eye at X:{}, Y:{}".format(primary_msg.right_eye.x,
235 | primary_msg.right_eye.y))
236 | if self.valid_marker_point(primary_msg.nose):
237 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.nose))
238 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_eye))
239 | if self.valid_marker_point(primary_msg.left_eye):
240 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_eye))
241 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_eye))
242 |
243 | if k == 3:
244 | primary_msg.left_ear = self.write_body_part_msg(_location)
245 | marker_joints.points.append(self.add_point_to_marker(primary_msg.left_ear))
246 | self.get_logger().info(
247 | "Body Part Detected: left_ear at X:{}, Y:{}".format(primary_msg.left_ear.x,
248 | primary_msg.left_ear.y))
249 | if self.valid_marker_point(primary_msg.left_eye):
250 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_eye))
251 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_ear))
252 |
253 | if k == 4:
254 | primary_msg.right_ear = self.write_body_part_msg(_location)
255 | marker_joints.points.append(self.add_point_to_marker(primary_msg.right_ear))
256 | self.get_logger().info(
257 | "Body Part Detected: right_ear at X:{}, Y:{}".format(primary_msg.right_ear.x,
258 | primary_msg.right_ear.y))
259 |
260 | if self.valid_marker_point(primary_msg.right_eye):
261 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_eye))
262 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_ear))
263 |
264 | if k == 5:
265 | primary_msg.left_shoulder = self.write_body_part_msg(_location)
266 | marker_joints.points.append(self.add_point_to_marker(primary_msg.left_shoulder))
267 | self.get_logger().info(
268 | "Body Part Detected: left_shoulder at X:{}, Y:{}".format(primary_msg.left_shoulder.x,
269 | primary_msg.left_shoulder.y))
270 | if self.valid_marker_point(primary_msg.left_ear):
271 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_ear))
272 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_shoulder))
273 |
274 | if k == 6:
275 | primary_msg.right_shoulder = self.write_body_part_msg(_location)
276 | marker_joints.points.append(self.add_point_to_marker(primary_msg.right_shoulder))
277 | self.get_logger().info(
278 | "Body Part Detected: right_shoulder at X:{}, Y:{}".format(primary_msg.right_shoulder.x,
279 | primary_msg.right_shoulder.y))
280 |
281 | if self.valid_marker_point(primary_msg.right_ear):
282 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_ear))
283 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_shoulder))
284 |
285 | if k == 7:
286 | primary_msg.left_elbow = self.write_body_part_msg(_location)
287 | marker_joints.points.append(self.add_point_to_marker(primary_msg.left_elbow))
288 | self.get_logger().info(
289 | "Body Part Detected: left_elbow at X:{}, Y:{}".format(primary_msg.left_elbow.x,
290 | primary_msg.left_elbow.y))
291 |
292 | if self.valid_marker_point(primary_msg.left_shoulder):
293 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_elbow))
294 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_shoulder))
295 |
296 | if k == 8:
297 | primary_msg.right_elbow = self.write_body_part_msg(_location)
298 | marker_joints.points.append(self.add_point_to_marker(primary_msg.right_elbow))
299 | self.get_logger().info(
300 | "Body Part Detected: right_elbow at X:{}, Y:{}".format(primary_msg.right_elbow.x,
301 | primary_msg.right_elbow.y))
302 |
303 | if self.valid_marker_point(primary_msg.right_shoulder):
304 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_elbow))
305 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_shoulder))
306 |
307 | if k == 9:
308 | primary_msg.left_wrist = self.write_body_part_msg(_location)
309 | marker_joints.points.append(self.add_point_to_marker(primary_msg.left_wrist))
310 | self.get_logger().info(
311 | "Body Part Detected: left_wrist at X:{}, Y:{}".format(primary_msg.left_wrist.x,
312 | primary_msg.left_wrist.y))
313 |
314 | if self.valid_marker_point(primary_msg.left_elbow):
315 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_elbow))
316 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_wrist))
317 |
318 | if k == 10:
319 | primary_msg.right_wrist = self.write_body_part_msg(_location)
320 | marker_joints.points.append(self.add_point_to_marker(primary_msg.right_wrist))
321 | self.get_logger().info(
322 | "Body Part Detected: right_wrist at X:{}, Y:{}".format(primary_msg.right_wrist.x,
323 | primary_msg.right_wrist.y))
324 |
325 | if self.valid_marker_point(primary_msg.right_elbow):
326 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_elbow))
327 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_wrist))
328 |
329 | if k == 11:
330 | primary_msg.left_hip = self.write_body_part_msg(_location)
331 | marker_joints.points.append(self.add_point_to_marker(primary_msg.left_hip))
332 | self.get_logger().info(
333 | "Body Part Detected: left_hip at X:{}, Y:{}".format(primary_msg.left_hip.x,
334 | primary_msg.left_hip.y))
335 |
336 | if k == 12:
337 | primary_msg.right_hip = self.write_body_part_msg(_location)
338 | marker_joints.points.append(self.add_point_to_marker(primary_msg.right_hip))
339 | self.get_logger().info(
340 | "Body Part Detected: right_hip at X:{}, Y:{}".format(primary_msg.right_hip.x,
341 | primary_msg.right_hip.y))
342 |
343 | if self.valid_marker_point(primary_msg.left_hip):
344 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_hip))
345 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_hip))
346 |
347 | if k == 13:
348 | primary_msg.left_knee = self.write_body_part_msg(_location)
349 | marker_joints.points.append(self.add_point_to_marker(primary_msg.left_knee))
350 | self.get_logger().info(
351 | "Body Part Detected: left_knee at X:{}, Y:{}".format(primary_msg.left_knee.x,
352 | primary_msg.left_knee.y))
353 |
354 | if self.valid_marker_point(primary_msg.left_hip):
355 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_hip))
356 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_knee))
357 |
358 | if k == 14:
359 | primary_msg.right_knee = self.write_body_part_msg(_location)
360 | marker_joints.points.append(self.add_point_to_marker(primary_msg.right_knee))
361 | self.get_logger().info(
362 | "Body Part Detected: right_knee at X:{}, Y:{}".format(primary_msg.right_knee.x,
363 | primary_msg.right_knee.y))
364 |
365 | if self.valid_marker_point(primary_msg.right_hip):
366 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_hip))
367 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_knee))
368 |
369 | if k == 15:
370 | primary_msg.left_ankle = self.write_body_part_msg(_location)
371 | marker_joints.points.append(self.add_point_to_marker(primary_msg.left_ankle))
372 | self.get_logger().info(
373 | "Body Part Detected: left_ankle at X:{}, Y:{}".format(primary_msg.left_ankle.x,
374 | primary_msg.left_ankle.y))
375 |
376 | if self.valid_marker_point(primary_msg.left_knee):
377 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_ankle))
378 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_knee))
379 | if k == 16:
380 | primary_msg.right_ankle = self.write_body_part_msg(_location)
381 | marker_joints.points.append(self.add_point_to_marker(primary_msg.right_ankle))
382 | self.get_logger().info(
383 | "Body Part Detected: right_ankle at X:{}, Y:{}".format(primary_msg.right_ankle.x,
384 | primary_msg.right_ankle.y))
385 |
386 | if self.valid_marker_point(primary_msg.right_knee):
387 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_ankle))
388 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_knee))
389 |
390 | if k == 17:
391 | primary_msg.neck = self.write_body_part_msg(_location)
392 | marker_joints.points.append(self.add_point_to_marker(primary_msg.neck))
393 | self.get_logger().info(
394 | "Body Part Detected: neck at X:{}, Y:{}".format(primary_msg.neck.x, primary_msg.neck.y))
395 |
396 | if self.valid_marker_point(primary_msg.nose):
397 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.nose))
398 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.neck))
399 | if self.valid_marker_point(primary_msg.right_shoulder):
400 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_shoulder))
401 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.neck))
402 | if self.valid_marker_point(primary_msg.right_hip):
403 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.right_hip))
404 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.neck))
405 | if self.valid_marker_point(primary_msg.left_shoulder):
406 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_shoulder))
407 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.neck))
408 | if self.valid_marker_point(primary_msg.left_hip):
409 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.left_hip))
410 | marker_skeleton.points.append(self.add_point_to_marker(primary_msg.neck))
411 |
412 | self.publish_pose.publish(primary_msg)
413 | self.body_skeleton_pub.publish(marker_skeleton)
414 | self.body_joints_pub.publish(marker_joints)
415 |
416 | self.get_logger().info("Published Message for Person ID:{}".format(primary_msg.person_id))
417 | except Exception as e:  # counts/objects are not populated until the first image arrives; skip this frame
418 | self.get_logger().debug('parse_k skipped: {}'.format(e))
419 |
--------------------------------------------------------------------------------
/ros2_trt_pose/ros2_trt_pose/live_demo.py:
--------------------------------------------------------------------------------
1 | # ---------------------------------------------------------------------------------------
2 | # Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
3 | # Permission is hereby granted, free of charge, to any person obtaining
4 | # a copy of this software and associated documentation files (the "Software"),
5 | # to deal in the Software without restriction, including without limitation
6 | # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 | # and/or sell copies of the Software, and to permit persons to whom the
8 | # Software is furnished to do so, subject to the following conditions:
9 | # The above copyright notice and this permission notice shall be included
10 | # in all copies or substantial portions of the Software.
11 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
12 | # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
13 | # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
14 | # DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
15 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 | # ---------------------------------------------------------------------------------------
17 |
18 | # ROS2 related
19 | import rclpy
20 | from ros2_trt_pose.helper import TRTPose
21 |
22 | def main(args=None):
23 | rclpy.init(args=args)
24 | trt_pose = TRTPose()
25 | trt_pose.start()
26 | rclpy.spin(trt_pose)
27 |
28 | trt_pose.destroy_node()
29 | rclpy.shutdown()
30 |
31 | if __name__ == '__main__':
32 | main()
33 |
--------------------------------------------------------------------------------
/ros2_trt_pose/ros2_trt_pose/utils.py:
--------------------------------------------------------------------------------
1 | # ---------------------------------------------------------------------------------------
2 | # Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
3 | # Permission is hereby granted, free of charge, to any person obtaining
4 | # a copy of this software and associated documentation files (the "Software"),
5 | # to deal in the Software without restriction, including without limitation
6 | # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 | # and/or sell copies of the Software, and to permit persons to whom the
8 | # Software is furnished to do so, subject to the following conditions:
9 | # The above copyright notice and this permission notice shall be included
10 | # in all copies or substantial portions of the Software.
11 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
12 | # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
13 | # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
14 | # DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
15 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 | # ---------------------------------------------------------------------------------------
17 |
18 | import torch
19 | import torchvision.transforms as transforms
20 | import PIL.Image
21 | import cv2
22 | import os
23 | import json
24 | import trt_pose.coco
25 | import trt_pose.models
26 | from trt_pose.parse_objects import ParseObjects
27 | import torch2trt
28 | from torch2trt import TRTModule
29 |
30 | # Pre-process image message received from cam2image
31 | def preprocess(image, width, height):
32 | global device
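# standard ImageNet mean/std normalization constants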
33 | mean = torch.Tensor([0.485, 0.456, 0.406]).cuda()
34 | std = torch.Tensor([0.229, 0.224, 0.225]).cuda()
35 | device = torch.device('cuda')
36 | image = cv2.resize(image, (width, height))
37 | image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
38 | image = PIL.Image.fromarray(image)
39 | image = transforms.functional.to_tensor(image).to(device)
40 | image.sub_(mean[:, None, None]).div_(std[:, None, None])
41 | return image[None, ...]
42 |
43 | def load_params(base_dir, human_pose_json, model_name):
44 | hp_json_file = os.path.join(base_dir,human_pose_json)
45 | if model_name == 'resnet18':
46 | MODEL_WEIGHTS = 'resnet18_baseline_att_224x224_A_epoch_249.pth'
47 | elif model_name == 'densenet121':
48 | MODEL_WEIGHTS = 'densenet121_baseline_att_256x256_B_epoch_160.pth'
49 | else:
50 | raise ValueError("model must be 'resnet18' or 'densenet121', got '{}'".format(model_name))
49 |
50 | model_weights = os.path.join(base_dir, MODEL_WEIGHTS)
51 |
52 | with open(hp_json_file,'r') as f:
53 | human_pose = json.load(f)
54 |
55 | topology = trt_pose.coco.coco_category_to_topology(human_pose)
56 | num_parts = len(human_pose['keypoints'])  # number of body-part keypoints
57 | num_links = len(human_pose['skeleton'])  # number of skeleton links between keypoints
58 | parse_objects = ParseObjects(topology)
59 | return num_parts, num_links, model_weights, parse_objects, topology
60 |
61 | def load_model(base_dir, model_name, num_parts, num_links, model_weights):
62 |     # Build the requested backbone, load its PyTorch weights, and pick the
63 |     # matching TensorRT engine cache name and input resolution.
64 |     if model_name == 'resnet18':
65 |         model = trt_pose.models.resnet18_baseline_att(num_parts, 2 * num_links).cuda().eval()
66 |         model.load_state_dict(torch.load(model_weights))
67 |         OPTIMIZED_MODEL = 'resnet18_baseline_att_224x224_A_epoch_249_trt.pth'
68 |         height, width = 224, 224
69 |     elif model_name == 'densenet121':
70 |         model = trt_pose.models.densenet121_baseline_att(num_parts, 2 * num_links).cuda().eval()
71 |         model.load_state_dict(torch.load(model_weights))
72 |         OPTIMIZED_MODEL = 'densenet121_baseline_att_256x256_B_epoch_160_trt.pth'
73 |         height, width = 256, 256
74 |     else: raise ValueError('Unsupported model_name: ' + model_name)
75 | 
76 |     model_file_path = os.path.join(base_dir, OPTIMIZED_MODEL)
77 |     if not os.path.isfile(model_file_path):  # first run: build and cache the TensorRT engine
78 |         data = torch.zeros((1, 3, height, width)).cuda()
79 |         model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True, max_workspace_size=1 << 25)
80 |         torch.save(model_trt.state_dict(), model_file_path)
81 |     model_trt = TRTModule()
82 |     model_trt.load_state_dict(torch.load(model_file_path))
83 | 
84 |     return model_trt, height, width
85 |
86 | # Draw the detected keypoints and skeleton links for every person in the frame
87 | def draw_objects(image, object_counts, objects, normalized_peaks, topology):
88 |     height = image.shape[0]
89 |     width = image.shape[1]
90 |     count = int(object_counts[0])  # number of people detected in this frame
91 |     K = topology.shape[0]          # number of skeleton links in the topology
92 | for i in range(count):
93 | color = (0, 255, 0)
94 | obj = objects[0][i]
95 | C = obj.shape[0]
96 | for j in range(C):
97 | k = int(obj[j])
98 | if k >= 0:
99 | peak = normalized_peaks[0][j][k]
100 | x = round(float(peak[1]) * width)
101 | y = round(float(peak[0]) * height)
102 | cv2.circle(image, (x, y), 3, color, 2)
103 |
104 |         for k in range(K):
105 |             c_a = topology[k][2]  # index of the first keypoint of link k
106 |             c_b = topology[k][3]  # index of the second keypoint of link k
107 | if obj[c_a] >= 0 and obj[c_b] >= 0:
108 | peak0 = normalized_peaks[0][c_a][obj[c_a]]
109 | peak1 = normalized_peaks[0][c_b][obj[c_b]]
110 | x0 = round(float(peak0[1]) * width)
111 | y0 = round(float(peak0[0]) * height)
112 | x1 = round(float(peak1[1]) * width)
113 | y1 = round(float(peak1[0]) * height)
114 | cv2.line(image, (x0, y0), (x1, y1), color, 2)
115 |
116 | return image
117 |
--------------------------------------------------------------------------------
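Taken together, the helpers in `utils.py` form a small inference pipeline: load the topology and weights, build (or reload) the TensorRT engine, normalize a frame, run the network, parse peaks into people, and draw the result. The sketch below chains them for a single frame outside of ROS; `base_dir` and the image path are hypothetical stand-ins for whatever the node is configured with:

```python
# Sketch: one-frame inference using the utils above (paths are hypothetical).
import cv2
from ros2_trt_pose.utils import load_params, load_model, preprocess, draw_objects

base_dir = '/home/nvidia/trt_pose_data'   # assumed location of weights + JSON
num_parts, num_links, weights, parse_objects, topology = load_params(
    base_dir, 'human_pose.json', 'resnet18')
model_trt, height, width = load_model(
    base_dir, 'resnet18', num_parts, num_links, weights)

frame = cv2.imread('person.jpg')           # any BGR test image
tensor = preprocess(frame, width, height)  # 1 x 3 x H x W normalized CUDA tensor
cmap, paf = model_trt(tensor)              # part heatmaps and part-affinity fields
counts, objects, peaks = parse_objects(cmap.detach().cpu(), paf.detach().cpu())
cv2.imwrite('detected_output.png',
            draw_objects(frame, counts, objects, peaks, topology))
```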
/ros2_trt_pose/setup.cfg:
--------------------------------------------------------------------------------
1 | [develop]
2 | script-dir=$base/lib/ros2_trt_pose
3 | [install]
4 | install-scripts=$base/lib/ros2_trt_pose
5 |
--------------------------------------------------------------------------------
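For context: the `script-dir`/`install-scripts` settings in `setup.cfg` route the package's console scripts into `lib/ros2_trt_pose` rather than the default `bin/`, which is where `ros2 run` expects to find the executables of an ament_python package.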
/ros2_trt_pose/setup.py:
--------------------------------------------------------------------------------
1 | # ---------------------------------------------------------------------------------------
2 | # Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
3 | # Permission is hereby granted, free of charge, to any person obtaining
4 | # a copy of this software and associated documentation files (the "Software"),
5 | # to deal in the Software without restriction, including without limitation
6 | # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 | # and/or sell copies of the Software, and to permit persons to whom the
8 | # Software is furnished to do so, subject to the following conditions:
9 | # The above copyright notice and this permission notice shall be included
10 | # in all copies or substantial portions of the Software.
11 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
12 | # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
13 | # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
14 | # DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
15 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 | # ---------------------------------------------------------------------------------------
17 |
18 | from setuptools import setup
19 | from glob import glob
20 | import os
21 |
22 | package_name = 'ros2_trt_pose'
23 |
24 | setup(
25 | name=package_name,
26 | version='0.0.0',
27 | packages=[package_name],
28 | data_files=[
29 | ('share/ament_index/resource_index/packages',
30 | ['resource/' + package_name]),
31 | ('share/' + package_name, ['package.xml']),
32 | (os.path.join('share', package_name), glob('launch/*.launch.py')),
33 | ],
34 | install_requires=['setuptools'],
35 | zip_safe=True,
36 | maintainer='nvidia',
37 | maintainer_email='ameykulkarni@nvidia.com',
38 |     description='ROS 2 wrapper around trt_pose for real-time pose estimation on NVIDIA Jetson',
39 |     license='MIT',
40 | tests_require=['pytest'],
41 | entry_points={
42 | 'console_scripts': [
43 | 'pose-estimation = ros2_trt_pose.live_demo:main',
44 | ],
45 | },
46 | )
47 |
--------------------------------------------------------------------------------
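Because `entry_points` maps `pose-estimation` to `ros2_trt_pose.live_demo:main`, the demo node can be started after building and sourcing the workspace with `ros2 run ros2_trt_pose pose-estimation`, or through the provided launch file.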
/ros2_trt_pose/test/test_copyright.py:
--------------------------------------------------------------------------------
1 | # Copyright 2015 Open Source Robotics Foundation, Inc.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | from ament_copyright.main import main
16 | import pytest
17 |
18 |
19 | @pytest.mark.copyright
20 | @pytest.mark.linter
21 | def test_copyright():
22 | rc = main(argv=['.', 'test'])
23 | assert rc == 0, 'Found errors'
24 |
--------------------------------------------------------------------------------
/ros2_trt_pose/test/test_flake8.py:
--------------------------------------------------------------------------------
1 | # Copyright 2017 Open Source Robotics Foundation, Inc.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | from ament_flake8.main import main
16 | import pytest
17 |
18 |
19 | @pytest.mark.flake8
20 | @pytest.mark.linter
21 | def test_flake8():
22 | rc = main(argv=[])
23 | assert rc == 0, 'Found errors'
24 |
--------------------------------------------------------------------------------
/ros2_trt_pose/test/test_pep257.py:
--------------------------------------------------------------------------------
1 | # Copyright 2015 Open Source Robotics Foundation, Inc.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | from ament_pep257.main import main
16 | import pytest
17 |
18 |
19 | @pytest.mark.linter
20 | @pytest.mark.pep257
21 | def test_pep257():
22 | rc = main(argv=['.', 'test'])
23 | assert rc == 0, 'Found code style errors / warnings'
24 |
--------------------------------------------------------------------------------