├── .gitmodules
├── README.md
├── dependency_hints.txt
├── install_openpose_and_enable_package.sh
├── openpose_ros_msgs
│   ├── CMakeLists.txt
│   ├── README.md
│   ├── msg
│   │   ├── BodypartDetection.msg
│   │   ├── BodypartDetection_3d.msg
│   │   ├── PersonDetection.msg
│   │   └── PersonDetection_3d.msg
│   ├── package.xml
│   └── srv
│       └── GetPersons.srv
├── openpose_ros_pkg
│   ├── CMakeLists.txt
│   ├── README.md
│   ├── examples
│   │   └── COCO_val2014_000000000192.jpg
│   ├── package.xml
│   └── src
│       ├── openpose.cpp
│       ├── openpose_ros_node.cpp
│       ├── openpose_ros_node_3d.cpp
│       ├── openpose_ros_node_firephinx.cpp
│       └── test_openpose_ros_service_call.cpp
├── readme_images
│   ├── Fig1.png
│   ├── Fig10.png
│   ├── Fig11.png
│   ├── Fig12.png
│   ├── Fig13.png
│   ├── Fig14.png
│   ├── Fig15.png
│   ├── Fig16.png
│   ├── Fig17.png
│   ├── Fig18.png
│   ├── Fig19.png
│   ├── Fig2.png
│   ├── Fig20.png
│   ├── Fig21.png
│   ├── Fig3.png
│   ├── Fig4.png
│   ├── Fig5.png
│   ├── Fig6.png
│   ├── Fig7.png
│   ├── Fig8.png
│   └── Fig9.png
└── skeleton_extract_3d
    ├── CMakeLists.txt
    ├── README.md
    ├── launch
    │   └── openpose_skeleton_extract.launch
    ├── package.xml
    └── src
        ├── skeleton_extract_3d_node.cpp
        └── skeleton_extract_3d_visualization_node.cpp

--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
[submodule "openpose"]
	path = openpose
	url = https://github.com/CMU-Perceptual-Computing-Lab/openpose.git
	ignore = dirty

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# OpenPose ROS
ROS wrapper for OpenPose (https://github.com/CMU-Perceptual-Computing-Lab/openpose) that allows the estimation of 2d multi-person poses from a single RGB camera (see commit `a1e0a5f4136e702b5731a268c2993fb75ca4753c`). Based on https://github.com/stevenjj/openpose_ros.

When a depth image synchronized with the RGB image is available (an RGB-D image), a 3d extractor node can obtain a 3d pose estimation from the 2d pose estimation given by OpenPose, by projecting the 2d keypoints onto the point cloud of the depth image. A visualization node for the 3d results has also been implemented.

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig1.png)

# Installing Openpose ROS Wrapper

1. Install openpose and its dependencies (https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation.md).

   **NOTE**: OpenCV 3.2 is recommended, as OpenCV 2.4 might cause some errors.

2. Install PCL (http://pointclouds.org/downloads/).

3. Enable the package by running, in the openpose_ros directory:
````
./install_openpose_and_enable_package.sh
````
4. If it succeeds, compile:
````
cd ~/catkin_ws/src
catkin build
cd ..
source devel/setup.bash
````

# Running OpenPose ROS Wrapper and 3d Pose Extractor
The package can be divided into two modules that work independently: one for 2d pose detection, with a visualization tool like the one in OpenPose but implemented in ROS, and another for 3d pose detection, with a visualization node to view the results in RViz. Both modules use the same node to obtain the OpenPose 2d detections, but the node and the services it provides are duplicated under different names, so that the 2d visualization tool and the 3d extractor node can call it simultaneously without conflicts.
## 2d Detection Module
This module is composed of a service node for getting the 2d detections and a node for visualizing the output.

First of all, you might want to change some things in the code to adapt it to your needs:

* Go to `/openpose_ros_pkg/src/openpose_ros_node_firephinx.cpp` and change `/usb_cam/image_raw` to the topic your camera is publishing the `sensor_msgs::Image` messages to:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig2.png)

* You may change the output image resolution. To do so, go to `/openpose_ros_pkg/src/openpose_ros_node_firephinx.cpp` and change:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig3.png)

Once you have set those parameters, repeat step 4 of the installation.

Now you can run the code. First, connect an RGB camera and run the corresponding ROS drivers to start publishing the images (they must be published as `image_raw`). For example, you can connect a webcam and use https://github.com/ros-drivers/usb_cam. With these drivers, run:
````
roslaunch usb_cam usb_cam-test.launch
````

You should get something like:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig4.png)

Then, initialize the 2d detection service node:
````
rosrun openpose_ros_pkg openpose_ros_node
````
If everything works fine you should see the following output in your shell:
````
[ INFO] [1501140533.950685432]: Initialization Successful!
````
Finally, to get the 2d poses of the images from the camera and visualize the output, run:
````
rosrun openpose_ros_pkg openpose_ros_node_firephinx
````
You should obtain something similar to:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig5.png)

### Topics

If everything is running correctly, the package should be publishing to the topics:
````
/openpose_ros/input_image
/openpose_ros/detected_poses_image
/openpose_ros/detected_poses_keypoints
````

* **/openpose_ros/input_image:** The images the 2d detection node takes as input for the segmentation are published here.

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig6.png)

* **/openpose_ros/detected_poses_image:** Images with the detected skeleton drawn on them are published here.

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig7.png)

* **/openpose_ros/detected_poses_keypoints:** The messages with the 2d detection keypoints (OpenPose body joints) are published in this topic (a minimal subscriber sketch follows this list). The messages have the following fields:
  * num_people_detected: number of people in the image.
  * person_ID: ID number assigned to each person.
  * bodypart (e.g. nose): each bodypart has the following fields:
    * x: x pixel coordinate of the OpenPose keypoint.
    * y: y pixel coordinate of the OpenPose keypoint.
    * confidence: confidence of the detection.
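As a quick sanity check of these messages, the following minimal listener subscribes to the keypoints topic and prints the nose position of each detected person. This is only a sketch and is not part of the package; the node name `keypoint_listener` and the queue size are arbitrary choices.
````
// Minimal sketch of a subscriber for the 2d keypoint messages (hypothetical
// node, not part of this repository).
#include <ros/ros.h>
#include <openpose_ros_msgs/PersonDetection.h>

void personCallback(const openpose_ros_msgs::PersonDetection::ConstPtr& msg)
{
  // x and y are pixel coordinates (uint32); confidence is a float32.
  ROS_INFO("Person %u of %u: nose at (%u, %u) with confidence %f",
           msg->person_ID, msg->num_people_detected,
           msg->nose.x, msg->nose.y, msg->nose.confidence);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "keypoint_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe("/openpose_ros/detected_poses_keypoints", 10, personCallback);
  ros::spin();
  return 0;
}
````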
If you write the data in a .csv, it should look like this:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig8.png)

## 3d Detection Module
This module is composed of the same 2d extractor node described in the previous section, a node for obtaining the 3d pose detections, and a node for visualizing the output in RViz. With the visualization node we can see the resulting 3d human skeleton, the resulting 3d detections for the joints, or both at the same time. An RGB-D camera is needed to run this module.

First of all, you might want to change some things in the code to adapt it to your needs:

* Go to `/skeleton_extract_3d/launch/openpose_skeleton_extract.launch`. You will see this:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig9.png)

  Here you should change `/usb_cam_3d/image_raw/` to the topic where your camera publishes the `sensor_msgs::Image` messages (RGB images). You should also change `/kinect2/qhd/points` to the topic where your camera publishes the `sensor_msgs::PointCloud2` messages (depth images).

* Go to `/skeleton_extract_3d/src/skeleton_extract_3d_node.cpp` and set the input resolution of the images (the resolution of the depth and the RGB images must be the same):

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig10.png)

* Go to `/skeleton_extract_3d/src/skeleton_extract_3d_visualization_node.cpp`. You might want to change the color, shape, size, etc. of the markers. To see the available options, go to http://wiki.ros.org/rviz/DisplayTypes/Marker. To set the options of the body joints:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig11.png)

To set the options of the skeleton, go to:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig12.png)

Once you have set the options, repeat step 4 of the installation process. Now that you have configured everything, you can run the code. First of all, connect your RGB-D camera and run the corresponding ROS drivers.

For example, you can use a Kinect One and the https://github.com/code-iai/iai_kinect2 ROS drivers. To initialize the camera with these drivers, run:
````
roslaunch kinect2_bridge kinect2_bridge.launch
rosrun kinect2_viewer kinect2_viewer
````

Then you will see the camera output:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig13.png)

Once your camera is publishing, launch the 2d extractor node and the 3d extractor node by running:
````
roslaunch skeleton_extract_3d openpose_skeleton_extract.launch
````
If everything is working fine you should have something similar to:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig14.png)

Then you can run the visualization node:
````
rosrun skeleton_extract_3d skeleton_extract_3d_visualization_node
````
**NOTE**: To have the fixed frame for visualization you must run:
````
rosrun kinect2_bridge kinect2_bridge _publish_tf:=true
````
Now open RViz and select as the fixed frame the one you have set for the markers.
For example, I have chosen:
````
kinect2_ir_optical_frame
````
Select the topics:
````
/openpose_ros/skeleton_3d/visualization_markers
/openpose_ros/skeleton_3d/visualization_skeleton
````
and you should have something similar to:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig15.png)

You can also select the point cloud at the same time:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig16.png)

### Topics
````
/openpose_ros/skeleton_3d/detected_poses_image
/openpose_ros/skeleton_3d/detected_poses_keypoints
/openpose_ros/skeleton_3d/detected_poses_keypoints_3d
/openpose_ros/skeleton_3d/input_pointcloud
/openpose_ros/skeleton_3d/input_rgb
/openpose_ros/skeleton_3d/visualization_markers
/openpose_ros/skeleton_3d/visualization_skeleton
````

* **/openpose_ros/skeleton_3d/detected_poses_image:** The same kind of messages are published in this topic as in `/openpose_ros/detected_poses_image` in the 2d module.

* **/openpose_ros/skeleton_3d/detected_poses_keypoints:** The same kind of messages are published in this topic as in `/openpose_ros/detected_poses_keypoints` in the 2d module.

* **/openpose_ros/skeleton_3d/detected_poses_keypoints_3d:** The 3d detections are published in this topic (see the projection sketch at the end of this section for how they are obtained). The messages have the same structure as those published in `/openpose_ros/skeleton_3d/detected_poses_keypoints`, but the fields of each bodypart change. Now they are:
  * x: x real-world coordinate of the joint.
  * y: y real-world coordinate of the joint.
  * z: depth real-world coordinate of the joint.
  * confidence: confidence of the 2d detection.

If you write the message to a .csv, it should look like this:

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig17.png)

* **/openpose_ros/skeleton_3d/input_pointcloud:** The point cloud synchronized with the RGB image, from which we extract the x, y, z real-world coordinates of the keypoints, is published here.

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig18.png)

* **/openpose_ros/skeleton_3d/input_rgb:** The RGB image used to make the 2d detections is published in this topic; it is synchronized with the input point cloud.

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig19.png)

* **/openpose_ros/skeleton_3d/visualization_markers:** The markers for visualizing the 3d joint detections in RViz are published in this topic.

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig20.png)

* **/openpose_ros/skeleton_3d/visualization_skeleton:** The skeleton for visualizing the 3d detections in RViz is published in this topic.

![alt text](https://raw.githubusercontent.com/MiguelARD/openpose_ros-1/master/readme_images/Fig21.png)
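All the 3d coordinates above come from a simple idea: since the RGB image and the point cloud are required to have the same resolution, each 2d keypoint pixel directly indexes a 3d point in the organized cloud. The sketch below illustrates this projection; it is not the actual code of `skeleton_extract_3d_node.cpp`, and the helper name `keypointTo3d` is purely illustrative.
````
// Illustrative sketch of projecting a 2d keypoint (u, v) onto an organized
// point cloud. Hypothetical helper, not the actual node code.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <sensor_msgs/PointCloud2.h>

pcl::PointXYZRGB keypointTo3d(const sensor_msgs::PointCloud2& cloud_msg,
                              unsigned int u, unsigned int v)
{
  pcl::PointCloud<pcl::PointXYZRGB> cloud;
  pcl::fromROSMsg(cloud_msg, cloud);
  // In an organized cloud, (u, v) indexes the same pixel grid as the RGB
  // image, so the 3d joint is the cloud point under the keypoint's pixel.
  // The returned point may contain NaNs where the sensor had no depth.
  return cloud.at(u, v);
}
````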
# Author
Miguel Arduengo García

Undergraduate student at CFIS-UPC (Barcelona Tech, Spain).

CFIS-UPC: https://cfis.upc.edu/es

Barcelona Tech: http://www.upc.edu/?set_language=en

# Citation
Please cite the tech report if it helps in your publications:

Arduengo, M., Jorgensen, S.J., Hambuchen, K., Sentis, L., Moreno-Noguer, F., Alenyà, G. "ROS wrapper for real-time multi-person pose estimation with a single camera". Technical Report IRI-TR-17-02, Institut de Robòtica i Informàtica Industrial, CSIC-UPC, 2017.

https://upcommons.upc.edu/handle/2117/110929
--------------------------------------------------------------------------------
/dependency_hints.txt:
--------------------------------------------------------------------------------
The following notes are the commands I used to install the NVIDIA binary drivers, CUDA, cuDNN, and ROS Indigo.
After running these commands, openpose and its dependencies compiled for me quite easily.

sudo apt-get update
sudo apt-get upgrade

# NVIDIA Titan XP Binary Driver Installation

Installed the following driver for the NVIDIA Titan XP: http://www.geforce.com/drivers/results/117741
Following the instructions at https://ubuntuforums.org/showthread.php?t=2081649 using the binary instructions

1) Uninstall previous nvidia drivers

   sudo apt-get purge nvidia-current nvidia-settings

2) sudo service lightdm stop

3) ctrl-alt-f1

4) Login

5) Then switch to the directory where your drivers are

   cd Downloads

6) Then install the driver:

   sudo ./N

# Fix no-TTY terminal on ctrl-alt-F1 with Nvidia drivers

https://forums.linuxmint.com/viewtopic.php?t=168108

sudo nano /etc/default/grub

Uncomment the lines

#GRUB_TERMINAL=console
#GRUB_GFXMODE=640x480

And add nomodeset via:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

# CUDA 8.0 Install

Download the .run installer from https://developer.nvidia.com/cuda-downloads (NOT the .deb file)

Perform the pre-installation instructions: http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#pre-installation-actions
(so far, I have been able to skip the pre-installation instructions)

Following: http://www.insiderattack.net/2014/12/installing-cuda-toolkit-65-in-ubuntu.html

Install CUDA via:

sudo chmod a+x cuda_8.0....run
sudo ./cuda_8.0.....run

Say no to installing the driver (since we already did it). Say yes to everything else.

Then perform the post-installation instructions: http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions

Add these to your ~/.bashrc:

export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

See if you can compile the examples:

cd ~/NVIDIA_"examples"
make -j8

I had an error with g++ so I just installed it via:

sudo apt-get install g++

# Install cuDNN

https://askubuntu.com/questions/767269/how-can-i-install-cudnn-on-ubuntu-16-04

Step 0: Install cuda from the standard repositories. (See How can I install CUDA on Ubuntu 16.04?)

Step 1: Register an nvidia developer account and download cudnn here (about 80 MB)

Step 2: Check where your cuda installation is.
For the installation from the repository it is /usr/lib/... and /usr/include. Otherwise, it will be /usr/local/cuda/ or /usr/local/cuda-<version>. You can check it with `which nvcc` or `ldconfig -p | grep cuda`.

Step 3: Copy the files:

$ cd folder/extracted/contents
$ sudo cp -P include/cudnn.h /usr/include
$ sudo cp -P lib64/libcudnn* /usr/lib/x86_64-linux-gnu/
$ sudo chmod a+r /usr/lib/x86_64-linux-gnu/libcudnn*

For a local install of cuda (not using the deb):

cd Downloads/cuda/
sudo cp -P include/cudnn.h /usr/local/include/
sudo cp -P lib64/libcudnn* /usr/local/cuda-8.0/lib64/
sudo chmod a+r /usr/local/cuda-8.0/lib64/libcudnn*

# Install ROS Indigo

http://wiki.ros.org/indigo/Installation/Ubuntu
--------------------------------------------------------------------------------
/install_openpose_and_enable_package.sh:
--------------------------------------------------------------------------------
#!/bin/bash
echo "Adding CATKIN_IGNORE to this package. Will remove if install script successfully runs"
touch CATKIN_IGNORE
echo "------------------------- Install Script -------------------------"
echo "This will run the original install scripts from openpose."
cd openpose
./install_caffe_and_openpose.sh
cd ../
echo "------------------------- Enabling Package -------------------------"
echo "Assuming everything in openpose built properly, we remove CATKIN_IGNORE"
rm CATKIN_IGNORE
--------------------------------------------------------------------------------
/openpose_ros_msgs/CMakeLists.txt:
--------------------------------------------------------------------------------
cmake_minimum_required(VERSION 2.8.3)
project(openpose_ros_msgs)

## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)

## Find catkin macros and libraries
find_package(catkin REQUIRED COMPONENTS
  std_msgs
  sensor_msgs
  message_generation
)

## Generate messages in the 'msg' folder
add_message_files(
  FILES
  BodypartDetection.msg
  PersonDetection.msg
  BodypartDetection_3d.msg
  PersonDetection_3d.msg
)

## Generate services in the 'srv' folder
add_service_files(
  FILES
  GetPersons.srv
)

## Generate added messages and services with any dependencies listed here
generate_messages(
  DEPENDENCIES
  std_msgs
  sensor_msgs
)

###################################
## catkin specific configuration ##
###################################
## CATKIN_DEPENDS: catkin_packages dependent projects also need
catkin_package(CATKIN_DEPENDS
  message_runtime
  std_msgs
  sensor_msgs)
--------------------------------------------------------------------------------
/openpose_ros_msgs/README.md:
--------------------------------------------------------------------------------
See http://docs.ros.org/latest/api/catkin/html/howto/format2/building_msgs.html for building messages.
--------------------------------------------------------------------------------
/openpose_ros_msgs/msg/BodypartDetection.msg:
--------------------------------------------------------------------------------
uint32 x
uint32 y
float32 confidence
--------------------------------------------------------------------------------
/openpose_ros_msgs/msg/BodypartDetection_3d.msg:
--------------------------------------------------------------------------------
float32 x
float32 y
float32 z
float32 confidence
--------------------------------------------------------------------------------
/openpose_ros_msgs/msg/PersonDetection.msg:
--------------------------------------------------------------------------------
uint32 num_people_detected
uint32 person_ID
BodypartDetection nose
BodypartDetection neck
BodypartDetection right_shoulder
BodypartDetection right_elbow
BodypartDetection right_wrist
BodypartDetection left_shoulder
BodypartDetection left_elbow
BodypartDetection left_wrist
BodypartDetection right_hip
BodypartDetection right_knee
BodypartDetection right_ankle
BodypartDetection left_hip
BodypartDetection left_knee
BodypartDetection left_ankle
BodypartDetection right_eye
BodypartDetection left_eye
BodypartDetection right_ear
BodypartDetection left_ear
BodypartDetection chest
--------------------------------------------------------------------------------
/openpose_ros_msgs/msg/PersonDetection_3d.msg:
--------------------------------------------------------------------------------
uint32 num_people_detected
uint32 person_ID
BodypartDetection_3d nose
BodypartDetection_3d neck
BodypartDetection_3d right_shoulder
BodypartDetection_3d right_elbow
BodypartDetection_3d right_wrist
BodypartDetection_3d left_shoulder
BodypartDetection_3d left_elbow
BodypartDetection_3d left_wrist
BodypartDetection_3d right_hip
BodypartDetection_3d right_knee
BodypartDetection_3d right_ankle
BodypartDetection_3d left_hip
BodypartDetection_3d left_knee
BodypartDetection_3d left_ankle
BodypartDetection_3d right_eye
BodypartDetection_3d left_eye
BodypartDetection_3d right_ear
BodypartDetection_3d left_ear
BodypartDetection_3d chest
--------------------------------------------------------------------------------
/openpose_ros_msgs/package.xml:
--------------------------------------------------------------------------------
<?xml version="1.0"?>
<package>
  <name>openpose_ros_msgs</name>
  <version>0.0.0</version>
  <description>The openpose_ros_msgs package</description>

  <maintainer>stevenjj</maintainer>

  <license>TODO</license>

  <buildtool_depend>catkin</buildtool_depend>
  <build_depend>message_generation</build_depend>
  <build_depend>std_msgs</build_depend>
  <build_depend>sensor_msgs</build_depend>

  <run_depend>message_runtime</run_depend>
  <run_depend>std_msgs</run_depend>
  <run_depend>sensor_msgs</run_depend>
</package>
--------------------------------------------------------------------------------
/openpose_ros_msgs/srv/GetPersons.srv:
--------------------------------------------------------------------------------
sensor_msgs/Image image
---
PersonDetection[] detections
--------------------------------------------------------------------------------
/openpose_ros_pkg/CMakeLists.txt:
--------------------------------------------------------------------------------
cmake_minimum_required(VERSION 2.8.3)
project(openpose_ros_pkg)

set(OPENPOSE_INCLUDE_DIRS ../openpose/include
                          ../openpose/src
                          ../openpose/3rdparty/caffe/distribute/include
                          /usr/local/cuda/include)
add_compile_options(-std=c++11)

## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
  std_msgs
  sensor_msgs
  roscpp
  cv_bridge
  openpose_ros_msgs
  roslib
  image_transport
)

###################################
## catkin specific configuration ##
###################################
## The catkin_package macro generates cmake config files for your package
## Declare things to be passed to dependent projects
## INCLUDE_DIRS: uncomment this if your package contains header files
## LIBRARIES: libraries you create in this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
catkin_package(
  CATKIN_DEPENDS std_msgs
                 roscpp
                 cv_bridge
                 openpose_ros_msgs
                 roslib
                 image_transport
)

###########
## Build ##
###########
include_directories(
  # include
  ${catkin_INCLUDE_DIRS}
  ${OPENPOSE_INCLUDE_DIRS}
)

add_definitions(-DUSE_CAFFE)

FIND_LIBRARY(OPENPOSE_LIBRARY openpose ../openpose/build/lib)
FIND_LIBRARY(CAFFE_LIBRARY caffe ../openpose/3rdparty/caffe/distribute/lib)
FIND_LIBRARY(CUDA_LIBRARY cuda ../openpose/build/lib)
FIND_LIBRARY(GFLAGS_LIBRARY gflags /usr/lib/x86_64-linux-gnu)
FIND_LIBRARY(GLOG_LIBRARY glog /usr/lib/x86_64-linux-gnu)

add_executable(openpose_ros src/openpose.cpp)
target_link_libraries(openpose_ros ${catkin_LIBRARIES}
                                   ${OPENPOSE_LIBRARY}
                                   ${CAFFE_LIBRARY}
                                   ${CUDA_LIBRARY}
                                   ${GFLAGS_LIBRARY}
                                   ${GLOG_LIBRARY})

add_executable(openpose_ros_node src/openpose_ros_node.cpp)
target_link_libraries(openpose_ros_node ${catkin_LIBRARIES}
                                        ${OPENPOSE_LIBRARY}
                                        ${CAFFE_LIBRARY}
                                        ${CUDA_LIBRARY}
)
add_dependencies(openpose_ros_node ${catkin_EXPORTED_TARGETS})

add_executable(openpose_ros_node_3d src/openpose_ros_node_3d.cpp)
target_link_libraries(openpose_ros_node_3d ${catkin_LIBRARIES}
                                           ${OPENPOSE_LIBRARY}
                                           ${CAFFE_LIBRARY}
                                           ${CUDA_LIBRARY}
)
add_dependencies(openpose_ros_node_3d ${catkin_EXPORTED_TARGETS})

add_executable(openpose_ros_node_firephinx src/openpose_ros_node_firephinx.cpp)
target_link_libraries(openpose_ros_node_firephinx ${catkin_LIBRARIES}
                                                  ${OPENPOSE_LIBRARY}
                                                  ${CAFFE_LIBRARY}
                                                  ${CUDA_LIBRARY}
                                                  ${GFLAGS_LIBRARY}
                                                  ${GLOG_LIBRARY})
add_dependencies(openpose_ros_node_firephinx ${catkin_EXPORTED_TARGETS})

add_executable(test_openpose_ros_service_call src/test_openpose_ros_service_call.cpp)
target_link_libraries(test_openpose_ros_service_call ${catkin_LIBRARIES})
add_dependencies(test_openpose_ros_service_call ${catkin_EXPORTED_TARGETS})
--------------------------------------------------------------------------------
/openpose_ros_pkg/README.md:
--------------------------------------------------------------------------------
This package runs openpose as a ROS executable (see src/openpose.cpp).

The executable `openpose_ros_node_firephinx` is based on https://github.com/firephinx/openpose_ros, slightly changed to compile with this repo's CMakeLists.txt and extended to find the model_path automatically using the ROS package path.

The ROS service `openpose_ros_node` was originally based on https://github.com/tue-robotics/openpose_ros but has since been modified to reflect newer changes in openpose.
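The `GetPersons` service defined in `openpose_ros_msgs` takes a `sensor_msgs/Image` and returns one `PersonDetection` per person found. A minimal client could look like the sketch below. This is not the packaged test client (see `src/test_openpose_ros_service_call.cpp` for that), and the service name `detect_poses` is an assumption: use whatever name `openpose_ros_node` actually advertises.
````
// Minimal sketch of a GetPersons client. The service name "detect_poses" is
// an assumption; check the advertiseService() call in openpose_ros_node.cpp.
#include <ros/ros.h>
#include <openpose_ros_msgs/GetPersons.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "get_persons_client");
  ros::NodeHandle nh;
  ros::ServiceClient client =
      nh.serviceClient<openpose_ros_msgs::GetPersons>("detect_poses");

  openpose_ros_msgs::GetPersons srv;
  // srv.request.image must be filled with a sensor_msgs/Image, e.g. one
  // converted from a cv::Mat via cv_bridge, before calling.
  if (client.call(srv))
    ROS_INFO("Detected %d persons", (int) srv.response.detections.size());
  else
    ROS_ERROR("GetPersons service call failed");
  return 0;
}
````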
--------------------------------------------------------------------------------
/openpose_ros_pkg/examples/COCO_val2014_000000000192.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/openpose_ros_pkg/examples/COCO_val2014_000000000192.jpg
--------------------------------------------------------------------------------
/openpose_ros_pkg/package.xml:
--------------------------------------------------------------------------------
<?xml version="1.0"?>
<package>
  <name>openpose_ros_pkg</name>
  <version>0.0.0</version>
  <description>The openpose_ros_pkg package</description>

  <maintainer>stevenjj</maintainer>

  <license>TODO</license>

  <buildtool_depend>catkin</buildtool_depend>

  <build_depend>std_msgs</build_depend>
  <build_depend>roscpp</build_depend>
  <build_depend>cv_bridge</build_depend>
  <build_depend>openpose_ros_msgs</build_depend>
  <build_depend>roslib</build_depend>
  <build_depend>image_transport</build_depend>

  <run_depend>std_msgs</run_depend>
  <run_depend>roscpp</run_depend>
  <run_depend>cv_bridge</run_depend>
  <run_depend>openpose_ros_msgs</run_depend>
  <run_depend>roslib</run_depend>
  <run_depend>image_transport</run_depend>
</package>
--------------------------------------------------------------------------------
/openpose_ros_pkg/src/openpose.cpp:
--------------------------------------------------------------------------------
1 | // ------------------------- OpenPose Library Tutorial - Real Time Pose Estimation -------------------------
2 | // If the user wants to learn to use the OpenPose library, we highly recommend to start with the `examples/tutorial_*/` folders.
3 | // This example summarizes all the functionality of the OpenPose library:
4 | // 1. Read folder of images / video / webcam (`producer` module)
5 | // 2. Extract and render body keypoint / heatmap / PAF of that image (`pose` module)
6 | // 3. Extract and render face keypoint / heatmap / PAF of that image (`face` module)
7 | // 4. Save the results on disc (`filestream` module)
8 | // 5. Display the rendered pose (`gui` module)
9 | // Everything in a multi-thread scenario (`thread` module)
10 | // Points 2 to 5 are included in the `wrapper` module
11 | // In addition to the previous OpenPose modules, we also need to use:
12 | // 1. `core` module:
13 | // For the Array class that the `pose` module needs
14 | // For the Datum struct that the `thread` module sends between the queues
15 | // 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively
16 | // This file should only be used for the user to take specific examples.
17 |
18 | // C++ std library dependencies
19 | #include
20 | #include // `std::chrono::` functions and classes, e.g. std::chrono::milliseconds
21 | #include // sscanf
22 | #include
23 | #include // std::this_thread
24 | #include
25 | // Other 3rdparty dependencies
26 | #include // DEFINE_bool, DEFINE_int32, DEFINE_int64, DEFINE_uint64, DEFINE_double, DEFINE_string
27 | #include // google::InitGoogleLogging
28 |
29 | // OpenPose dependencies
30 | // Option a) Importing all modules
31 | #include
32 | // Option b) Manually importing the desired modules. Recommended if you only intend to use a few modules.
33 | // #include
34 | // #include
35 | // #include
36 | // #include
37 | // #include
38 | // #include
39 | // #include
40 | // #include
41 | // #include
42 | // #include
43 |
44 | // See all the available parameter options with the `--help` flag. E.g. `./build/examples/openpose/openpose.bin --help`.
45 | // Note: This command will show you flags for other unnecessary 3rdparty files.
Check only the flags for the OpenPose 46 | // executable. E.g. for `openpose.bin`, look for `Flags from examples/openpose/openpose.cpp:`. 47 | // Debugging 48 | DEFINE_int32(logging_level, 4, "The logging level. Integer in the range [0, 255]. 0 will output any log() message, while" 49 | " 255 will not output any. Current OpenPose library messages are in the range 0-4: 1 for" 50 | " low priority messages and 4 for important ones."); 51 | // Producer 52 | DEFINE_int32(camera, 0, "The camera index for cv::VideoCapture. Integer in the range [0, 9]."); 53 | DEFINE_string(camera_resolution, "1280x720", "Size of the camera frames to ask for."); 54 | DEFINE_double(camera_fps, 30.0, "Frame rate for the webcam (only used when saving video from webcam). Set this value to the" 55 | " minimum value between the OpenPose displayed speed and the webcam real frame rate."); 56 | DEFINE_string(video, "", "Use a video file instead of the camera. Use `examples/media/video.avi` for our default" 57 | " example video."); 58 | DEFINE_string(image_dir, "", "Process a directory of images. Use `examples/media/` for our default example folder with 20" 59 | " images."); 60 | DEFINE_uint64(frame_first, 0, "Start on desired frame number. Indexes are 0-based, i.e. the first frame has index 0."); 61 | DEFINE_uint64(frame_last, -1, "Finish on desired frame number. Select -1 to disable. Indexes are 0-based, e.g. if set to" 62 | " 10, it will process 11 frames (0-10)."); 63 | DEFINE_bool(frame_flip, false, "Flip/mirror each frame (e.g. for real time webcam demonstrations)."); 64 | DEFINE_int32(frame_rotate, 0, "Rotate each frame, 4 possible values: 0, 90, 180, 270."); 65 | DEFINE_bool(frames_repeat, false, "Repeat frames when finished."); 66 | // OpenPose 67 | DEFINE_string(model_folder, "models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located."); 68 | DEFINE_string(resolution, "1280x720", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the" 69 | " default images resolution."); 70 | DEFINE_int32(num_gpu, -1, "The number of GPU devices to use. If negative, it will use all the available GPUs in your" 71 | " machine."); 72 | DEFINE_int32(num_gpu_start, 0, "GPU device start number."); 73 | DEFINE_int32(keypoint_scale, 0, "Scaling of the (x,y) coordinates of the final pose data array, i.e. the scale of the (x,y)" 74 | " coordinates that will be saved with the `write_keypoint` & `write_keypoint_json` flags." 75 | " Select `0` to scale it to the original source resolution, `1`to scale it to the net output" 76 | " size (set with `net_resolution`), `2` to scale it to the final output size (set with" 77 | " `resolution`), `3` to scale it in the range [0,1], and 4 for range [-1,1]. Non related" 78 | " with `num_scales` and `scale_gap`."); 79 | // OpenPose Body Pose 80 | DEFINE_string(model_pose, "COCO", "Model to be used (e.g. COCO, MPI, MPI_4_layers)."); 81 | DEFINE_string(net_resolution, "656x368", "Multiples of 16. If it is increased, the accuracy usually increases. If it is decreased," 82 | " the speed increases."); 83 | DEFINE_int32(num_scales, 1, "Number of scales to average."); 84 | DEFINE_double(scale_gap, 0.3, "Scale gap between scales. No effect unless num_scales>1. Initial scale is always 1. 
If you" 85 | " want to change the initial scale, you actually want to multiply the `net_resolution` by" 86 | " your desired initial scale."); 87 | DEFINE_bool(heatmaps_add_parts, false, "If true, it will add the body part heatmaps to the final op::Datum::poseHeatMaps array" 88 | " (program speed will decrease). Not required for our library, enable it only if you intend" 89 | " to process this information later. If more than one `add_heatmaps_X` flag is enabled, it" 90 | " will place then in sequential memory order: body parts + bkg + PAFs. It will follow the" 91 | " order on POSE_BODY_PART_MAPPING in `include/openpose/pose/poseParameters.hpp`."); 92 | DEFINE_bool(heatmaps_add_bkg, false, "Same functionality as `add_heatmaps_parts`, but adding the heatmap corresponding to" 93 | " background."); 94 | DEFINE_bool(heatmaps_add_PAFs, false, "Same functionality as `add_heatmaps_parts`, but adding the PAFs."); 95 | // OpenPose Face 96 | DEFINE_bool(face, false, "Enables face keypoint detection. It will share some parameters from the body pose, e.g." 97 | " `model_folder`."); 98 | DEFINE_string(face_net_resolution, "368x368", "Multiples of 16. Analogous to `net_resolution` but applied to the face keypoint detector." 99 | " 320x320 usually works fine while giving a substantial speed up when multiple faces on the" 100 | " image."); 101 | // OpenPose Hand 102 | DEFINE_bool(hand, false, "Enables hand keypoint detection. It will share some parameters from the body pose, e.g." 103 | " `model_folder`."); 104 | DEFINE_string(hand_net_resolution, "368x368", "Multiples of 16. Analogous to `net_resolution` but applied to the hand keypoint detector."); 105 | DEFINE_int32(hand_detection_mode, -1, "Set to 0 to perform 1-time keypoint detection (fastest), 1 for iterative detection" 106 | " (recommended for images and fast videos, slow method), 2 for tracking (recommended for" 107 | " webcam if the frame rate is >10 FPS per GPU used and for video, in practice as fast as" 108 | " 1-time detection), 3 for both iterative and tracking (recommended for webcam if the" 109 | " resulting frame rate is still >10 FPS and for video, ideally best result but slower), or" 110 | " -1 (default) for automatic selection (fast method for webcam, tracking for video and" 111 | " iterative for images)."); 112 | // OpenPose Rendering 113 | DEFINE_int32(part_to_show, 0, "Part to show from the start."); 114 | DEFINE_bool(disable_blending, false, "If blending is enabled, it will merge the results with the original frame. If disabled, it" 115 | " will only display the results."); 116 | // OpenPose Rendering Pose 117 | DEFINE_int32(render_pose, 2, "Set to 0 for no rendering, 1 for CPU rendering (slightly faster), and 2 for GPU rendering" 118 | " (slower but greater functionality, e.g. `alpha_X` flags). If rendering is enabled, it will" 119 | " render both `outputData` and `cvOutputData` with the original image and desired body part" 120 | " to be shown (i.e. keypoints, heat maps or PAFs)."); 121 | DEFINE_double(alpha_pose, 0.6, "Blending factor (range 0-1) for the body part rendering. 1 will show it completely, 0 will" 122 | " hide it. Only valid for GPU rendering."); 123 | DEFINE_double(alpha_heatmap, 0.7, "Blending factor (range 0-1) between heatmap and original frame. 1 will only show the" 124 | " heatmap, 0 will only show the frame. Only valid for GPU rendering."); 125 | // OpenPose Rendering Face 126 | DEFINE_int32(render_face, -1, "Analogous to `render_pose` but applied to the face. 
Extra option: -1 to use the same" 127 | " configuration that `render_pose` is using."); 128 | DEFINE_double(alpha_face, 0.6, "Analogous to `alpha_pose` but applied to face."); 129 | DEFINE_double(alpha_heatmap_face, 0.7, "Analogous to `alpha_heatmap` but applied to face."); 130 | // OpenPose Rendering Hand 131 | DEFINE_int32(render_hand, -1, "Analogous to `render_pose` but applied to the hand. Extra option: -1 to use the same" 132 | " configuration that `render_pose` is using."); 133 | DEFINE_double(alpha_hand, 0.6, "Analogous to `alpha_pose` but applied to hand."); 134 | DEFINE_double(alpha_heatmap_hand, 0.7, "Analogous to `alpha_heatmap` but applied to hand."); 135 | // Display 136 | DEFINE_bool(fullscreen, false, "Run in full-screen mode (press f during runtime to toggle)."); 137 | DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is" 138 | " too long, it will skip frames. If it is too fast, it will slow it down."); 139 | DEFINE_bool(no_gui_verbose, false, "Do not write text on output images on GUI (e.g. number of current frame and people). It" 140 | " does not affect the pose rendering."); 141 | DEFINE_bool(no_display, false, "Do not open a display window."); 142 | // Result Saving 143 | DEFINE_string(write_images, "", "Directory to write rendered frames in `write_images_format` image format."); 144 | DEFINE_string(write_images_format, "png", "File extension and format for `write_images`, e.g. png, jpg or bmp. Check the OpenCV" 145 | " function cv::imwrite for all compatible extensions."); 146 | DEFINE_string(write_video, "", "Full file path to write rendered frames in motion JPEG video format. It might fail if the" 147 | " final path does not finish in `.avi`. It internally uses cv::VideoWriter."); 148 | DEFINE_string(write_keypoint, "", "Directory to write the people body pose keypoint data. Set format with `write_keypoint_format`."); 149 | DEFINE_string(write_keypoint_format, "yml", "File extension and format for `write_keypoint`: json, xml, yaml & yml. Json not available" 150 | " for OpenCV < 3.0, use `write_keypoint_json` instead."); 151 | DEFINE_string(write_keypoint_json, "", "Directory to write people pose data in *.json format, compatible with any OpenCV version."); 152 | DEFINE_string(write_coco_json, "", "Full file path to write people pose data with *.json COCO validation format."); 153 | DEFINE_string(write_heatmaps, "", "Directory to write heatmaps in *.png format. At least 1 `add_heatmaps_X` flag must be" 154 | " enabled."); 155 | DEFINE_string(write_heatmaps_format, "png", "File extension and format for `write_heatmaps`, analogous to `write_images_format`." 
156 | " Recommended `png` or any compressed and lossless format."); 157 | 158 | op::PoseModel gflagToPoseModel(const std::string& poseModeString) 159 | { 160 | op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__); 161 | if (poseModeString == "COCO") 162 | return op::PoseModel::COCO_18; 163 | else if (poseModeString == "MPI") 164 | return op::PoseModel::MPI_15; 165 | else if (poseModeString == "MPI_4_layers") 166 | return op::PoseModel::MPI_15_4; 167 | else if (poseModeString == "BODY_22") 168 | return op::PoseModel::BODY_22; 169 | else 170 | { 171 | op::error("String does not correspond to any model (COCO, MPI, MPI_4_layers)", __LINE__, __FUNCTION__, __FILE__); 172 | return op::PoseModel::COCO_18; 173 | } 174 | } 175 | 176 | op::ScaleMode gflagToScaleMode(const int keypointScale) 177 | { 178 | op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__); 179 | if (keypointScale == 0) 180 | return op::ScaleMode::InputResolution; 181 | else if (keypointScale == 1) 182 | return op::ScaleMode::NetOutputResolution; 183 | else if (keypointScale == 2) 184 | return op::ScaleMode::OutputResolution; 185 | else if (keypointScale == 3) 186 | return op::ScaleMode::ZeroToOne; 187 | else if (keypointScale == 4) 188 | return op::ScaleMode::PlusMinusOne; 189 | else 190 | { 191 | const std::string message = "String does not correspond to any scale mode: (0, 1, 2, 3, 4) for (InputResolution," 192 | " NetOutputResolution, OutputResolution, ZeroToOne, PlusMinusOne)."; 193 | op::error(message, __LINE__, __FUNCTION__, __FILE__); 194 | return op::ScaleMode::InputResolution; 195 | } 196 | } 197 | 198 | // Determine type of frame source 199 | op::ProducerType gflagsToProducerType(const std::string& imageDirectory, const std::string& videoPath, const int webcamIndex) 200 | { 201 | op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__); 202 | // Avoid duplicates (e.g. selecting at the time camera & video) 203 | if (!imageDirectory.empty() && !videoPath.empty()) 204 | op::error("Selected simultaneously image directory and video. Please, select only one.", __LINE__, __FUNCTION__, __FILE__); 205 | else if (!imageDirectory.empty() && webcamIndex != 0) 206 | op::error("Selected simultaneously image directory and webcam. Please, select only one.", __LINE__, __FUNCTION__, __FILE__); 207 | else if (!videoPath.empty() && webcamIndex != 0) 208 | op::error("Selected simultaneously video and webcam. 
Please, select only one.", __LINE__, __FUNCTION__, __FILE__); 209 | 210 | // Get desired op::ProducerType 211 | if (!imageDirectory.empty()) 212 | return op::ProducerType::ImageDirectory; 213 | else if (!videoPath.empty()) 214 | return op::ProducerType::Video; 215 | else 216 | return op::ProducerType::Webcam; 217 | } 218 | 219 | std::shared_ptr gflagsToProducer(const std::string& imageDirectory, const std::string& videoPath, const int webcamIndex, 220 | const op::Point webcamResolution, const double webcamFps) 221 | { 222 | op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__); 223 | const auto type = gflagsToProducerType(imageDirectory, videoPath, webcamIndex); 224 | 225 | if (type == op::ProducerType::ImageDirectory) 226 | return std::make_shared(imageDirectory); 227 | else if (type == op::ProducerType::Video) 228 | return std::make_shared(videoPath); 229 | else if (type == op::ProducerType::Webcam) 230 | return std::make_shared(webcamIndex, webcamResolution, webcamFps); 231 | else 232 | { 233 | op::error("Undefined Producer selected.", __LINE__, __FUNCTION__, __FILE__); 234 | return std::shared_ptr{}; 235 | } 236 | } 237 | 238 | std::vector gflagToHeatMaps(const bool heatMapsAddParts, const bool heatMapsAddBkg, const bool heatMapsAddPAFs) 239 | { 240 | std::vector heatMapTypes; 241 | if (heatMapsAddParts) 242 | heatMapTypes.emplace_back(op::HeatMapType::Parts); 243 | if (heatMapsAddBkg) 244 | heatMapTypes.emplace_back(op::HeatMapType::Background); 245 | if (heatMapsAddPAFs) 246 | heatMapTypes.emplace_back(op::HeatMapType::PAFs); 247 | return heatMapTypes; 248 | } 249 | 250 | op::DetectionMode gflagToDetectionMode(const int handDetectionModeFlag, const std::shared_ptr& producer = nullptr) 251 | { 252 | if (handDetectionModeFlag == -1) 253 | { 254 | if (producer == nullptr) 255 | op::error("Since there is no default producer, `hand_detection_mode` must be set.", __LINE__, __FUNCTION__, __FILE__); 256 | const auto producerType = producer->getType(); 257 | if (producerType == op::ProducerType::Webcam) 258 | return op::DetectionMode::Fast; 259 | else if (producerType == op::ProducerType::ImageDirectory) 260 | return op::DetectionMode::Iterative; 261 | else if (producerType == op::ProducerType::Video) 262 | return op::DetectionMode::Tracking; 263 | } 264 | else if (handDetectionModeFlag == 0) 265 | return op::DetectionMode::Fast; 266 | else if (handDetectionModeFlag == 1) 267 | return op::DetectionMode::Iterative; 268 | else if (handDetectionModeFlag == 2) 269 | return op::DetectionMode::Tracking; 270 | else if (handDetectionModeFlag == 3) 271 | return op::DetectionMode::IterativeAndTracking; 272 | // else 273 | op::error("Undefined DetectionMode selected.", __LINE__, __FUNCTION__, __FILE__); 274 | return op::DetectionMode::Fast; 275 | } 276 | 277 | op::RenderMode gflagToRenderMode(const int renderFlag, const int renderPoseFlag = -2) 278 | { 279 | if (renderFlag == -1 && renderPoseFlag != -2) 280 | return gflagToRenderMode(renderPoseFlag, -2); 281 | else if (renderFlag == 0) 282 | return op::RenderMode::None; 283 | else if (renderFlag == 1) 284 | return op::RenderMode::Cpu; 285 | else if (renderFlag == 2) 286 | return op::RenderMode::Gpu; 287 | else 288 | { 289 | op::error("Undefined RenderMode selected.", __LINE__, __FUNCTION__, __FILE__); 290 | return op::RenderMode::None; 291 | } 292 | } 293 | 294 | // Google flags into program variables 295 | std::tuple, op::Point, op::Point, op::Point, std::shared_ptr, op::PoseModel, op::ScaleMode, 296 | std::vector> gflagsToOpParameters() 297 
| { 298 | op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__); 299 | // cameraFrameSize 300 | op::Point cameraFrameSize; 301 | auto nRead = sscanf(FLAGS_camera_resolution.c_str(), "%dx%d", &cameraFrameSize.x, &cameraFrameSize.y); 302 | op::checkE(nRead, 2, "Error, camera resolution format (" + FLAGS_camera_resolution + ") invalid, should be e.g., 1280x720", 303 | __LINE__, __FUNCTION__, __FILE__); 304 | // outputSize 305 | op::Point outputSize; 306 | nRead = sscanf(FLAGS_resolution.c_str(), "%dx%d", &outputSize.x, &outputSize.y); 307 | op::checkE(nRead, 2, "Error, resolution format (" + FLAGS_resolution + ") invalid, should be e.g., 960x540 ", 308 | __LINE__, __FUNCTION__, __FILE__); 309 | // netInputSize 310 | op::Point netInputSize; 311 | nRead = sscanf(FLAGS_net_resolution.c_str(), "%dx%d", &netInputSize.x, &netInputSize.y); 312 | op::checkE(nRead, 2, "Error, net resolution format (" + FLAGS_net_resolution + ") invalid, should be e.g., 656x368 (multiples of 16)", 313 | __LINE__, __FUNCTION__, __FILE__); 314 | // faceNetInputSize 315 | op::Point faceNetInputSize; 316 | nRead = sscanf(FLAGS_face_net_resolution.c_str(), "%dx%d", &faceNetInputSize.x, &faceNetInputSize.y); 317 | op::checkE(nRead, 2, "Error, face net resolution format (" + FLAGS_face_net_resolution 318 | + ") invalid, should be e.g., 368x368 (multiples of 16)", __LINE__, __FUNCTION__, __FILE__); 319 | // handNetInputSize 320 | op::Point handNetInputSize; 321 | nRead = sscanf(FLAGS_hand_net_resolution.c_str(), "%dx%d", &handNetInputSize.x, &handNetInputSize.y); 322 | op::checkE(nRead, 2, "Error, hand net resolution format (" + FLAGS_hand_net_resolution 323 | + ") invalid, should be e.g., 368x368 (multiples of 16)", __LINE__, __FUNCTION__, __FILE__); 324 | // producerType 325 | const auto producerSharedPtr = gflagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_camera, cameraFrameSize, FLAGS_camera_fps); 326 | // poseModel 327 | const auto poseModel = gflagToPoseModel(FLAGS_model_pose); 328 | // keypointScale 329 | const auto keypointScale = gflagToScaleMode(FLAGS_keypoint_scale); 330 | // heatmaps to add 331 | const auto heatMapTypes = gflagToHeatMaps(FLAGS_heatmaps_add_parts, FLAGS_heatmaps_add_bkg, FLAGS_heatmaps_add_PAFs); 332 | // Return 333 | return std::make_tuple(outputSize, netInputSize, faceNetInputSize, handNetInputSize, producerSharedPtr, poseModel, keypointScale, heatMapTypes); 334 | } 335 | 336 | int opRealTimePoseDemo() 337 | { 338 | // logging_level 339 | op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.", __LINE__, __FUNCTION__, __FILE__); 340 | op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level); 341 | // op::ConfigureLog::setPriorityThreshold(op::Priority::None); // To print all logging messages 342 | 343 | op::log("Starting pose estimation demo.", op::Priority::High); 344 | const auto timerBegin = std::chrono::high_resolution_clock::now(); 345 | 346 | // Applying user defined configuration 347 | op::Point outputSize; 348 | op::Point netInputSize; 349 | op::Point faceNetInputSize; 350 | op::Point handNetInputSize; 351 | std::shared_ptr producerSharedPtr; 352 | op::PoseModel poseModel; 353 | op::ScaleMode keypointScale; 354 | std::vector heatMapTypes; 355 | std::tie(outputSize, netInputSize, faceNetInputSize, handNetInputSize, producerSharedPtr, poseModel, keypointScale, 356 | heatMapTypes) = gflagsToOpParameters(); 357 | 358 | // OpenPose wrapper 359 | op::log("Configuring OpenPose wrapper.", op::Priority::Low, __LINE__, 
__FUNCTION__, __FILE__); 360 | op::Wrapper> opWrapper; 361 | // Pose configuration (use WrapperStructPose{} for default and recommended configuration) 362 | const op::WrapperStructPose wrapperStructPose{netInputSize, outputSize, keypointScale, FLAGS_num_gpu, FLAGS_num_gpu_start, 363 | FLAGS_num_scales, (float)FLAGS_scale_gap, gflagToRenderMode(FLAGS_render_pose), poseModel, 364 | !FLAGS_disable_blending, (float)FLAGS_alpha_pose, (float)FLAGS_alpha_heatmap, 365 | FLAGS_part_to_show, FLAGS_model_folder, heatMapTypes, op::ScaleMode::UnsignedChar}; 366 | // Face configuration (use op::WrapperStructFace{} to disable it) 367 | const op::WrapperStructFace wrapperStructFace{FLAGS_face, faceNetInputSize, gflagToRenderMode(FLAGS_render_face, FLAGS_render_pose), 368 | (float)FLAGS_alpha_face, (float)FLAGS_alpha_heatmap_face}; 369 | // Hand configuration (use op::WrapperStructHand{} to disable it) 370 | const op::WrapperStructHand wrapperStructHand{FLAGS_hand, handNetInputSize, gflagToDetectionMode(FLAGS_hand_detection_mode, producerSharedPtr), 371 | gflagToRenderMode(FLAGS_render_hand, FLAGS_render_pose), (float)FLAGS_alpha_hand, 372 | (float)FLAGS_alpha_heatmap_hand}; 373 | // Producer (use default to disable any input) 374 | const op::WrapperStructInput wrapperStructInput{producerSharedPtr, FLAGS_frame_first, FLAGS_frame_last, FLAGS_process_real_time, 375 | FLAGS_frame_flip, FLAGS_frame_rotate, FLAGS_frames_repeat}; 376 | // Consumer (comment or use default argument to disable any output) 377 | const op::WrapperStructOutput wrapperStructOutput{!FLAGS_no_display, !FLAGS_no_gui_verbose, FLAGS_fullscreen, FLAGS_write_keypoint, 378 | op::stringToDataFormat(FLAGS_write_keypoint_format), FLAGS_write_keypoint_json, 379 | FLAGS_write_coco_json, FLAGS_write_images, FLAGS_write_images_format, FLAGS_write_video, 380 | FLAGS_write_heatmaps, FLAGS_write_heatmaps_format}; 381 | // Configure wrapper 382 | opWrapper.configure(wrapperStructPose, wrapperStructFace, wrapperStructHand, wrapperStructInput, wrapperStructOutput); 383 | // Set to single-thread running (e.g. for debugging purposes) 384 | // opWrapper.disableMultiThreading(); 385 | 386 | // Start processing 387 | // Two different ways of running the program on multithread environment 388 | op::log("Starting thread(s)", op::Priority::High); 389 | // Option a) Recommended - Also using the main thread (this thread) for processing (it saves 1 thread) 390 | // Start, run & stop threads 391 | opWrapper.exec(); // It blocks this thread until all threads have finished 392 | 393 | // // Option b) Keeping this thread free in case you want to do something else meanwhile, e.g. profiling the GPU memory 394 | // // VERY IMPORTANT NOTE: if OpenCV is compiled with Qt support, this option will not work. 
Qt needs the main thread to 395 | // // plot visual results, so the final GUI (which uses OpenCV) would return an exception similar to: 396 | // // `QMetaMethod::invoke: Unable to invoke methods with return values in queued connections` 397 | // // Start threads 398 | // opWrapper.start(); 399 | // // Profile used GPU memory 400 | // // 1: wait ~10sec so the memory has been totally loaded on GPU 401 | // // 2: profile the GPU memory 402 | // const auto sleepTimeMs = 10; 403 | // for (auto i = 0 ; i < 10000/sleepTimeMs && opWrapper.isRunning() ; i++) 404 | // std::this_thread::sleep_for(std::chrono::milliseconds{sleepTimeMs}); 405 | // op::Profiler::profileGpuMemory(__LINE__, __FUNCTION__, __FILE__); 406 | // // Keep program alive while running threads 407 | // while (opWrapper.isRunning()) 408 | // std::this_thread::sleep_for(std::chrono::milliseconds{sleepTimeMs}); 409 | // // Stop and join threads 410 | // op::log("Stopping thread(s)", op::Priority::High); 411 | // opWrapper.stop(); 412 | 413 | // Measuring total time 414 | const auto now = std::chrono::high_resolution_clock::now(); 415 | const auto totalTimeSec = (double)std::chrono::duration_cast(now-timerBegin).count() * 1e-9; 416 | const auto message = "Real-time pose estimation demo successfully finished. Total time: " + std::to_string(totalTimeSec) + " seconds."; 417 | op::log(message, op::Priority::High); 418 | 419 | return 0; 420 | } 421 | 422 | int main(int argc, char *argv[]) 423 | { 424 | // Initializing google logging (Caffe uses it for logging) 425 | google::InitGoogleLogging("opRealTimePoseDemo"); 426 | 427 | // Parsing command line flags 428 | gflags::ParseCommandLineFlags(&argc, &argv, true); 429 | 430 | // Running opRealTimePoseDemo 431 | return opRealTimePoseDemo(); 432 | } 433 | -------------------------------------------------------------------------------- /openpose_ros_pkg/src/openpose_ros_node.cpp: -------------------------------------------------------------------------------- 1 | //#define USE_CAFFE 2 | /*#include 3 | #include 4 | #include 5 | 6 | #include 7 | #include 8 | #include 9 | #include 10 | #include 11 | */ 12 | #include 13 | 14 | #include 15 | #include 16 | #include 17 | #include 18 | #include 19 | 20 | #include 21 | 22 | #include 23 | #include 24 | 25 | 26 | std::shared_ptr g_pose_extractor; 27 | std::shared_ptr poseRenderer; 28 | std::map g_bodypart_map; 29 | op::Point g_net_input_size; 30 | op::Point output_size; 31 | int g_num_scales; 32 | double g_scale_gap; 33 | 34 | 35 | ros::Publisher image_skeleton_pub; 36 | 37 | //! 38 | //! \brief getParam Get parameter from node handle 39 | //! \param nh The nodehandle 40 | //! \param param_name Key string 41 | //! \param default_value Default value if not found 42 | //! \return The parameter value 43 | //! 
44 | template 45 | T getParam(const ros::NodeHandle& nh, const std::string& param_name, T default_value) 46 | { 47 | T value; 48 | if (nh.hasParam(param_name)) 49 | { 50 | nh.getParam(param_name, value); 51 | } 52 | else 53 | { 54 | ROS_WARN_STREAM("Parameter '" << param_name << "' not found, defaults to '" << default_value << "'"); 55 | value = default_value; 56 | } 57 | return value; 58 | } 59 | 60 | 61 | //declare publisher keypoints_pub 62 | ros::Publisher keypoints_pub; 63 | 64 | template 65 | K getParamK(const ros::NodeHandle& nh, const std::string& param_name, K default_value) 66 | { 67 | K value; 68 | if (nh.hasParam(param_name)) 69 | { 70 | nh.getParam(param_name, value); 71 | } 72 | else 73 | { 74 | ROS_WARN_STREAM("Parameter '" << param_name << "' not found, defaults to '" << default_value << "'"); 75 | value = default_value; 76 | } 77 | return value; 78 | } 79 | 80 | 81 | op::PoseModel stringToPoseModel(const std::string& pose_model_string) 82 | { 83 | if (pose_model_string == "COCO") 84 | return op::PoseModel::COCO_18; 85 | else if (pose_model_string == "MPI") 86 | return op::PoseModel::MPI_15; 87 | else if (pose_model_string == "MPI_4_layers") 88 | return op::PoseModel::MPI_15_4; 89 | else 90 | { 91 | ROS_ERROR("String does not correspond to any model (COCO, MPI, MPI_4_layers)"); 92 | return op::PoseModel::COCO_18; 93 | } 94 | } 95 | 96 | std::map getBodyPartMapFromPoseModel(const op::PoseModel& pose_model) 97 | { 98 | if (pose_model == op::PoseModel::COCO_18) 99 | { 100 | return op::POSE_COCO_BODY_PARTS; 101 | } 102 | else if (pose_model == op::PoseModel::MPI_15 || pose_model == op::PoseModel::MPI_15_4) 103 | { 104 | return op::POSE_MPI_BODY_PARTS; 105 | } 106 | else 107 | { 108 | ROS_FATAL("Invalid pose model, not map present"); 109 | exit(1); 110 | } 111 | } 112 | 113 | 114 | openpose_ros_msgs::BodypartDetection getBodyPartDetectionFromArrayAndIndex(const op::Array& array, size_t idx) 115 | { 116 | openpose_ros_msgs::BodypartDetection bodypart; 117 | 118 | bodypart.x = array[idx]; 119 | bodypart.y = array[idx+1]; 120 | bodypart.confidence = array[idx+2]; 121 | return bodypart; 122 | } 123 | 124 | openpose_ros_msgs::BodypartDetection getNANBodypart() 125 | { 126 | openpose_ros_msgs::BodypartDetection bodypart; 127 | bodypart.confidence = NAN; 128 | return bodypart; 129 | } 130 | 131 | bool detectPosesCallback(openpose_ros_msgs::GetPersons::Request& req, openpose_ros_msgs::GetPersons::Response& res) 132 | { 133 | ROS_INFO("detectPosesCallback"); 134 | 135 | // Convert ROS message to opencv image 136 | cv_bridge::CvImagePtr cv_ptr; 137 | try 138 | { 139 | cv_ptr = cv_bridge::toCvCopy(req.image, req.image.encoding); 140 | } 141 | catch (cv_bridge::Exception& e) 142 | { 143 | ROS_ERROR("detectPosesCallback cv_bridge exception: %s", e.what()); 144 | return false; 145 | } 146 | 147 | 148 | cv::Mat image = cv_ptr->image; 149 | 150 | // std::string path = ros::package::getPath("openpose_ros_pkg"); 151 | // std::string filename = path + "/examples/COCO_val2014_000000000192.jpg"; 152 | // cv::Mat image = op::loadImage(filename, CV_LOAD_IMAGE_COLOR); 153 | 154 | ROS_INFO("Parsed image"); 155 | ROS_INFO_STREAM("Perform forward pass with the following settings:"); 156 | ROS_INFO_STREAM("- net_input_size: " << g_net_input_size.x << " " << g_net_input_size.y); 157 | ROS_INFO_STREAM("- num_scales: " << g_num_scales); 158 | ROS_INFO_STREAM("- scale_gap: " << g_scale_gap); 159 | ROS_INFO_STREAM("- image_size: " << image.size()); 160 | op::CvMatToOpInput cv_mat_to_op_input(g_net_input_size, 
g_num_scales, g_scale_gap); 161 | // g_pose_extractor->forwardPass(cv_mat_to_op_input.format(image), image.size()); 162 | 163 | ROS_INFO("Initialized Net Size"); 164 | 165 | op::Array netInputArray; 166 | std::vector scaleRatios; 167 | 168 | std::tie(netInputArray, scaleRatios) = cv_mat_to_op_input.format(image); 169 | ROS_INFO("Preparing for forward pass"); 170 | 171 | 172 | g_pose_extractor->forwardPass(netInputArray, {image.cols, image.rows}, scaleRatios); 173 | 174 | ROS_INFO("g_pose_extractor->forwardPass done"); 175 | 176 | 177 | 178 | 179 | //const op::Array poses; 180 | const op::Array poses = g_pose_extractor->getPoseKeypoints(); 181 | 182 | // VISUALIZE OUTPUT 183 | //const auto poseKeypoints = g_pose_extractor->getPoseKeypoints(); 184 | 185 | op::Point outputSize(output_size.x, output_size.y); 186 | op::CvMatToOpOutput cvMatToOpOutput{outputSize}; 187 | op::OpOutputToCvMat opOutputToCvMat{outputSize}; 188 | 189 | const op::Point windowedSize = outputSize; 190 | op::FrameDisplayer frameDisplayer{windowedSize, "OpenPose Example"}; 191 | 192 | double scaleInputToOutput; 193 | op::Array outputArray; 194 | std::tie(scaleInputToOutput, outputArray) = cvMatToOpOutput.format(image); 195 | 196 | //poseRenderer->renderPose(outputArray, poseKeypoints); 197 | poseRenderer->renderPose(outputArray, poses); 198 | auto outputImage = opOutputToCvMat.formatToCvMat(outputArray); 199 | 200 | 201 | sensor_msgs::Image ros_image; 202 | cv_bridge::CvImagePtr cv_ptr_out = cv_bridge::toCvCopy(req.image, req.image.encoding); 203 | cv_ptr_out->image = outputImage; 204 | ros_image = *(cv_ptr_out->toImageMsg()); 205 | 206 | 207 | //frameDisplayer.displayFrame(outputImage, 0); // Alternative: cv::imshow(outputImage) + cv::waitKey(0) 208 | image_skeleton_pub.publish(ros_image); 209 | 210 | // End Visualize Output 211 | 212 | 213 | if (!poses.empty() && poses.getNumberDimensions() != 3) 214 | { 215 | ROS_ERROR("pose.getNumberDimensions(): %d != 3", (int) poses.getNumberDimensions()); 216 | return false; 217 | } 218 | 219 | int num_people = poses.getSize(0); 220 | int num_bodyparts = poses.getSize(1); 221 | 222 | ROS_INFO("num people: %d", num_people); 223 | 224 | for (size_t person_idx = 0; person_idx < num_people; person_idx++) 225 | { 226 | ROS_INFO(" Person ID: %zu", person_idx); 227 | 228 | openpose_ros_msgs::PersonDetection person_msg; 229 | 230 | //add number of people detected 231 | person_msg.num_people_detected = num_people; 232 | 233 | //add person ID 234 | person_msg.person_ID = person_idx; 235 | 236 | 237 | // Initialize all bodyparts with nan 238 | // (commented this line) --> openpose_ros_msgs::PersonDetection person_msg; 239 | person_msg.nose = getNANBodypart(); 240 | person_msg.neck = getNANBodypart(); 241 | person_msg.right_shoulder = getNANBodypart(); 242 | person_msg.right_elbow = getNANBodypart(); 243 | person_msg.right_wrist = getNANBodypart(); 244 | person_msg.left_shoulder = getNANBodypart(); 245 | person_msg.left_elbow = getNANBodypart(); 246 | person_msg.left_wrist = getNANBodypart(); 247 | person_msg.right_hip = getNANBodypart(); 248 | person_msg.right_knee = getNANBodypart(); 249 | person_msg.right_ankle = getNANBodypart(); 250 | person_msg.left_hip = getNANBodypart(); 251 | person_msg.left_knee = getNANBodypart(); 252 | person_msg.left_ankle = getNANBodypart(); 253 | person_msg.right_eye = getNANBodypart(); 254 | person_msg.left_eye = getNANBodypart(); 255 | person_msg.right_ear = getNANBodypart(); 256 | person_msg.left_ear = getNANBodypart(); 257 | person_msg.chest = 
getNANBodypart(); 258 | 259 | for (size_t bodypart_idx = 0; bodypart_idx < num_bodyparts; bodypart_idx++) 260 | { 261 | size_t final_idx = 3*(person_idx*num_bodyparts + bodypart_idx); 262 | 263 | std::string body_part_string = g_bodypart_map[bodypart_idx]; 264 | 265 | openpose_ros_msgs::BodypartDetection bodypart_detection = getBodyPartDetectionFromArrayAndIndex(poses, final_idx); 266 | 267 | if (body_part_string == "Nose") person_msg.nose = bodypart_detection; 268 | else if (body_part_string == "Neck") person_msg.neck = bodypart_detection; 269 | else if (body_part_string == "RShoulder") person_msg.right_shoulder = bodypart_detection; 270 | else if (body_part_string == "RElbow") person_msg.right_elbow = bodypart_detection; 271 | else if (body_part_string == "RWrist") person_msg.right_wrist = bodypart_detection; 272 | else if (body_part_string == "LShoulder") person_msg.left_shoulder = bodypart_detection; 273 | else if (body_part_string == "LElbow") person_msg.left_elbow = bodypart_detection; 274 | else if (body_part_string == "LWrist") person_msg.left_wrist = bodypart_detection; 275 | else if (body_part_string == "RHip") person_msg.right_hip = bodypart_detection; 276 | else if (body_part_string == "RKnee") person_msg.right_knee = bodypart_detection; 277 | else if (body_part_string == "RAnkle") person_msg.right_ankle = bodypart_detection; 278 | else if (body_part_string == "LHip") person_msg.left_hip = bodypart_detection; 279 | else if (body_part_string == "LKnee") person_msg.left_knee = bodypart_detection; 280 | else if (body_part_string == "LAnkle") person_msg.left_ankle = bodypart_detection; 281 | else if (body_part_string == "REye") person_msg.right_eye = bodypart_detection; 282 | else if (body_part_string == "LEye") person_msg.left_eye = bodypart_detection; 283 | else if (body_part_string == "REar") person_msg.right_ear = bodypart_detection; 284 | else if (body_part_string == "LEar") person_msg.left_ear = bodypart_detection; 285 | else if (body_part_string == "Chest") person_msg.chest = bodypart_detection; 286 | else 287 | { 288 | ROS_ERROR("Unknown bodypart %s, this should never happen!", body_part_string.c_str()); 289 | } 290 | 291 | ROS_INFO(" body part: %s", body_part_string.c_str()); 292 | ROS_INFO(" (x, y, confidence): %i, %i, %f", bodypart_detection.x, bodypart_detection.y, bodypart_detection.confidence); 293 | 294 | } 295 | 296 | //publish keypoints data of person_msg 297 | keypoints_pub.publish(person_msg); 298 | 299 | res.detections.push_back(person_msg); 300 | 301 | } 302 | 303 | ROS_INFO("Detected %d persons", (int) res.detections.size()); 304 | 305 | return true; 306 | } 307 | 308 | int main(int argc, char** argv) 309 | { 310 | ros::init(argc, argv, "openpose_ros_service_node"); 311 | 312 | ros::NodeHandle local_nh("~"); 313 | // g_net_input_size = op::Point(getParam(local_nh, "net_input_width", 656), getParam(local_nh, "net_input_height", 368)); 314 | 315 | g_net_input_size.x = getParam(local_nh, "net_input_width", 656); 316 | g_net_input_size.y = getParam(local_nh, "net_input_height", 368); 317 | 318 | op::Point net_output_size(getParam(local_nh, "net_output_width", 656), getParam(local_nh, "net_output_height", 368)); 319 | //op::Point output_size(getParam(local_nh, "output_width", 1280), getParam(local_nh, "output_height", 720)); 320 | //op::Point output_size(getParam(local_nh, "output_width", 1024), getParam(local_nh, "output_height", 1024)); 321 | 322 | output_size.x = getParam(local_nh, "output_width", 1024); 323 | output_size.y = getParam(local_nh, "output_height", 
1024); 324 | 325 | g_num_scales = getParam(local_nh, "num_scales", 1); 326 | g_scale_gap = getParam(local_nh, "scale_gap", 0.3); 327 | unsigned int num_gpu_start = getParam(local_nh, "num_gpu_start", 0); 328 | 329 | 330 | std::string package_path = ros::package::getPath("openpose_ros_pkg"); 331 | std::string folder_location = package_path + "/../openpose/models/"; 332 | 333 | std::string model_folder = getParam(local_nh, "model_folder", folder_location); 334 | op::PoseModel pose_model = stringToPoseModel(getParam(local_nh, "pose_model", std::string("COCO"))); 335 | g_bodypart_map = getBodyPartMapFromPoseModel(pose_model); 336 | 337 | ros::NodeHandle nh; 338 | 339 | ros::ServiceServer service = nh.advertiseService("detect_poses", detectPosesCallback); 340 | 341 | image_skeleton_pub = nh.advertise( "/openpose_ros/detected_poses_image", 0 ); 342 | 343 | //declare publisher of type openpose_ros_msgs::PersonDetection in topic /openpose_ros/detected_poses_keypoints 344 | keypoints_pub = nh.advertise( "/openpose_ros/detected_poses_keypoints" , 0 ); 345 | 346 | g_pose_extractor = std::shared_ptr( 347 | /* new op::PoseExtractorCaffe(g_net_input_size, net_output_size, output_size, g_num_scales, 348 | g_scale_gap, pose_model, model_folder, num_gpu_start));*/ 349 | 350 | new op::PoseExtractorCaffe(g_net_input_size, net_output_size, output_size, g_num_scales, pose_model, 351 | model_folder, num_gpu_start)); 352 | 353 | poseRenderer = std::shared_ptr( 354 | new op::PoseRenderer(net_output_size, output_size, pose_model, nullptr, true, 0.6)); 355 | //poseRenderer{netOutputSize, outputSize, poseModel, nullptr, !FLAGS_disable_blending, (float)FLAGS_alpha_pose}; 356 | 357 | g_pose_extractor->initializationOnThread(); 358 | poseRenderer->initializationOnThread(); 359 | 360 | ROS_INFO("Initialization Successful!"); 361 | ros::spin(); 362 | 363 | return 0; 364 | } 365 | 366 | 367 | -------------------------------------------------------------------------------- /openpose_ros_pkg/src/openpose_ros_node_3d.cpp: -------------------------------------------------------------------------------- 1 | //#define USE_CAFFE 2 | /*#include 3 | #include 4 | #include 5 | 6 | #include 7 | #include 8 | #include 9 | #include 10 | #include 11 | */ 12 | #include 13 | 14 | #include 15 | #include 16 | #include 17 | #include 18 | #include 19 | 20 | #include 21 | 22 | #include 23 | #include 24 | 25 | 26 | std::shared_ptr g_pose_extractor; 27 | std::shared_ptr poseRenderer; 28 | std::map g_bodypart_map; 29 | op::Point g_net_input_size; 30 | op::Point output_size; 31 | int g_num_scales; 32 | double g_scale_gap; 33 | 34 | 35 | ros::Publisher image_skeleton_pub; 36 | 37 | //! 38 | //! \brief getParam Get parameter from node handle 39 | //! \param nh The nodehandle 40 | //! \param param_name Key string 41 | //! \param default_value Default value if not found 42 | //! \return The parameter value 43 | //! 
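//!
//! Note: this copy of the node advertises its service as "detect_poses_3d" and
//! publishes under /openpose_ros/skeleton_3d/ (see main() below), so a client
//! for it would be created as, e.g. (illustrative sketch):
//!
//!   ros::ServiceClient client =
//!       nh.serviceClient<openpose_ros_msgs::GetPersons>("detect_poses_3d");
//!   openpose_ros_msgs::GetPersons srv;
//!   srv.request.image = *image_msg;
//!   bool ok = client.call(srv);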
44 | template 45 | T getParam(const ros::NodeHandle& nh, const std::string& param_name, T default_value) 46 | { 47 | T value; 48 | if (nh.hasParam(param_name)) 49 | { 50 | nh.getParam(param_name, value); 51 | } 52 | else 53 | { 54 | ROS_WARN_STREAM("Parameter '" << param_name << "' not found, defaults to '" << default_value << "'"); 55 | value = default_value; 56 | } 57 | return value; 58 | } 59 | 60 | //declare publisher keypoints_pub 61 | ros::Publisher keypoints_pub; 62 | 63 | template 64 | K getParamK(const ros::NodeHandle& nh, const std::string& param_name, K default_value) 65 | { 66 | K value; 67 | if (nh.hasParam(param_name)) 68 | { 69 | nh.getParam(param_name, value); 70 | } 71 | else 72 | { 73 | ROS_WARN_STREAM("Parameter '" << param_name << "' not found, defaults to '" << default_value << "'"); 74 | value = default_value; 75 | } 76 | return value; 77 | } 78 | 79 | op::PoseModel stringToPoseModel(const std::string& pose_model_string) 80 | { 81 | if (pose_model_string == "COCO") 82 | return op::PoseModel::COCO_18; 83 | else if (pose_model_string == "MPI") 84 | return op::PoseModel::MPI_15; 85 | else if (pose_model_string == "MPI_4_layers") 86 | return op::PoseModel::MPI_15_4; 87 | else 88 | { 89 | ROS_ERROR("String does not correspond to any model (COCO, MPI, MPI_4_layers)"); 90 | return op::PoseModel::COCO_18; 91 | } 92 | } 93 | 94 | std::map getBodyPartMapFromPoseModel(const op::PoseModel& pose_model) 95 | { 96 | if (pose_model == op::PoseModel::COCO_18) 97 | { 98 | return op::POSE_COCO_BODY_PARTS; 99 | } 100 | else if (pose_model == op::PoseModel::MPI_15 || pose_model == op::PoseModel::MPI_15_4) 101 | { 102 | return op::POSE_MPI_BODY_PARTS; 103 | } 104 | else 105 | { 106 | ROS_FATAL("Invalid pose model, not map present"); 107 | exit(1); 108 | } 109 | } 110 | 111 | 112 | openpose_ros_msgs::BodypartDetection getBodyPartDetectionFromArrayAndIndex(const op::Array& array, size_t idx) 113 | { 114 | openpose_ros_msgs::BodypartDetection bodypart; 115 | 116 | bodypart.x = array[idx]; 117 | bodypart.y = array[idx+1]; 118 | bodypart.confidence = array[idx+2]; 119 | return bodypart; 120 | } 121 | 122 | openpose_ros_msgs::BodypartDetection getNANBodypart() 123 | { 124 | openpose_ros_msgs::BodypartDetection bodypart; 125 | bodypart.confidence = NAN; 126 | return bodypart; 127 | } 128 | 129 | bool detectPosesCallback(openpose_ros_msgs::GetPersons::Request& req, openpose_ros_msgs::GetPersons::Response& res) 130 | { 131 | ROS_INFO("detectPosesCallback"); 132 | 133 | // Convert ROS message to opencv image 134 | cv_bridge::CvImagePtr cv_ptr; 135 | try 136 | { 137 | cv_ptr = cv_bridge::toCvCopy(req.image, req.image.encoding); 138 | } 139 | catch (cv_bridge::Exception& e) 140 | { 141 | ROS_ERROR("detectPosesCallback cv_bridge exception: %s", e.what()); 142 | return false; 143 | } 144 | 145 | 146 | cv::Mat image = cv_ptr->image; 147 | 148 | // std::string path = ros::package::getPath("openpose_ros_pkg"); 149 | // std::string filename = path + "/examples/COCO_val2014_000000000192.jpg"; 150 | // cv::Mat image = op::loadImage(filename, CV_LOAD_IMAGE_COLOR); 151 | 152 | ROS_INFO("Parsed image"); 153 | ROS_INFO_STREAM("Perform forward pass with the following settings:"); 154 | ROS_INFO_STREAM("- net_input_size: " << g_net_input_size.x << " " << g_net_input_size.y); 155 | ROS_INFO_STREAM("- num_scales: " << g_num_scales); 156 | ROS_INFO_STREAM("- scale_gap: " << g_scale_gap); 157 | ROS_INFO_STREAM("- image_size: " << image.size()); 158 | op::CvMatToOpInput cv_mat_to_op_input(g_net_input_size, g_num_scales, 
g_scale_gap); 159 | // g_pose_extractor->forwardPass(cv_mat_to_op_input.format(image), image.size()); 160 | 161 | ROS_INFO("Initialized Net Size"); 162 | 163 | op::Array netInputArray; 164 | std::vector scaleRatios; 165 | 166 | std::tie(netInputArray, scaleRatios) = cv_mat_to_op_input.format(image); 167 | ROS_INFO("Preparing for forward pass"); 168 | 169 | 170 | g_pose_extractor->forwardPass(netInputArray, {image.cols, image.rows}, scaleRatios); 171 | 172 | ROS_INFO("g_pose_extractor->forwardPass done"); 173 | 174 | 175 | 176 | 177 | //const op::Array poses; 178 | const op::Array poses = g_pose_extractor->getPoseKeypoints(); 179 | 180 | // VISUALIZE OUTPUT 181 | //const auto poseKeypoints = g_pose_extractor->getPoseKeypoints(); 182 | 183 | op::Point outputSize(output_size.x, output_size.y); 184 | op::CvMatToOpOutput cvMatToOpOutput{outputSize}; 185 | op::OpOutputToCvMat opOutputToCvMat{outputSize}; 186 | 187 | const op::Point windowedSize = outputSize; 188 | op::FrameDisplayer frameDisplayer{windowedSize, "OpenPose Example"}; 189 | 190 | double scaleInputToOutput; 191 | op::Array outputArray; 192 | std::tie(scaleInputToOutput, outputArray) = cvMatToOpOutput.format(image); 193 | 194 | //poseRenderer->renderPose(outputArray, poseKeypoints); 195 | poseRenderer->renderPose(outputArray, poses); 196 | auto outputImage = opOutputToCvMat.formatToCvMat(outputArray); 197 | 198 | 199 | sensor_msgs::Image ros_image; 200 | cv_bridge::CvImagePtr cv_ptr_out = cv_bridge::toCvCopy(req.image, req.image.encoding); 201 | cv_ptr_out->image = outputImage; 202 | ros_image = *(cv_ptr_out->toImageMsg()); 203 | 204 | 205 | //frameDisplayer.displayFrame(outputImage, 0); // Alternative: cv::imshow(outputImage) + cv::waitKey(0) 206 | image_skeleton_pub.publish(ros_image); 207 | 208 | // End Visualize Output 209 | 210 | 211 | if (!poses.empty() && poses.getNumberDimensions() != 3) 212 | { 213 | ROS_ERROR("pose.getNumberDimensions(): %d != 3", (int) poses.getNumberDimensions()); 214 | return false; 215 | } 216 | 217 | int num_people = poses.getSize(0); 218 | int num_bodyparts = poses.getSize(1); 219 | 220 | ROS_INFO("num people: %d", num_people); 221 | 222 | for (size_t person_idx = 0; person_idx < num_people; person_idx++) 223 | { 224 | ROS_INFO(" Person ID: %zu", person_idx); 225 | 226 | openpose_ros_msgs::PersonDetection person_msg; 227 | 228 | //add number of people detected 229 | person_msg.num_people_detected = num_people; 230 | 231 | //add person ID 232 | person_msg.person_ID = person_idx; 233 | 234 | // Initialize all bodyparts with nan 235 | // (commented this line) --> openpose_ros_msgs::PersonDetection person_msg; 236 | person_msg.nose = getNANBodypart(); 237 | person_msg.neck = getNANBodypart(); 238 | person_msg.right_shoulder = getNANBodypart(); 239 | person_msg.right_elbow = getNANBodypart(); 240 | person_msg.right_wrist = getNANBodypart(); 241 | person_msg.left_shoulder = getNANBodypart(); 242 | person_msg.left_elbow = getNANBodypart(); 243 | person_msg.left_wrist = getNANBodypart(); 244 | person_msg.right_hip = getNANBodypart(); 245 | person_msg.right_knee = getNANBodypart(); 246 | person_msg.right_ankle = getNANBodypart(); 247 | person_msg.left_hip = getNANBodypart(); 248 | person_msg.left_knee = getNANBodypart(); 249 | person_msg.left_ankle = getNANBodypart(); 250 | person_msg.right_eye = getNANBodypart(); 251 | person_msg.left_eye = getNANBodypart(); 252 | person_msg.right_ear = getNANBodypart(); 253 | person_msg.left_ear = getNANBodypart(); 254 | person_msg.chest = getNANBodypart(); 255 | 256 | for 
(size_t bodypart_idx = 0; bodypart_idx < num_bodyparts; bodypart_idx++) 257 | { 258 | size_t final_idx = 3*(person_idx*num_bodyparts + bodypart_idx); 259 | 260 | std::string body_part_string = g_bodypart_map[bodypart_idx]; 261 | 262 | openpose_ros_msgs::BodypartDetection bodypart_detection = getBodyPartDetectionFromArrayAndIndex(poses, final_idx); 263 | 264 | if (body_part_string == "Nose") person_msg.nose = bodypart_detection; 265 | else if (body_part_string == "Neck") person_msg.neck = bodypart_detection; 266 | else if (body_part_string == "RShoulder") person_msg.right_shoulder = bodypart_detection; 267 | else if (body_part_string == "RElbow") person_msg.right_elbow = bodypart_detection; 268 | else if (body_part_string == "RWrist") person_msg.right_wrist = bodypart_detection; 269 | else if (body_part_string == "LShoulder") person_msg.left_shoulder = bodypart_detection; 270 | else if (body_part_string == "LElbow") person_msg.left_elbow = bodypart_detection; 271 | else if (body_part_string == "LWrist") person_msg.left_wrist = bodypart_detection; 272 | else if (body_part_string == "RHip") person_msg.right_hip = bodypart_detection; 273 | else if (body_part_string == "RKnee") person_msg.right_knee = bodypart_detection; 274 | else if (body_part_string == "RAnkle") person_msg.right_ankle = bodypart_detection; 275 | else if (body_part_string == "LHip") person_msg.left_hip = bodypart_detection; 276 | else if (body_part_string == "LKnee") person_msg.left_knee = bodypart_detection; 277 | else if (body_part_string == "LAnkle") person_msg.left_ankle = bodypart_detection; 278 | else if (body_part_string == "REye") person_msg.right_eye = bodypart_detection; 279 | else if (body_part_string == "LEye") person_msg.left_eye = bodypart_detection; 280 | else if (body_part_string == "REar") person_msg.right_ear = bodypart_detection; 281 | else if (body_part_string == "LEar") person_msg.left_ear = bodypart_detection; 282 | else if (body_part_string == "Chest") person_msg.chest = bodypart_detection; 283 | else 284 | { 285 | ROS_ERROR("Unknown bodypart %s, this should never happen!", body_part_string.c_str()); 286 | } 287 | 288 | ROS_INFO(" body part: %s", body_part_string.c_str()); 289 | ROS_INFO(" (x, y, confidence): %i, %i, %f", bodypart_detection.x, bodypart_detection.y, bodypart_detection.confidence); 290 | 291 | } 292 | 293 | //publish keypoints data of person_msg 294 | keypoints_pub.publish(person_msg); 295 | 296 | res.detections.push_back(person_msg); 297 | 298 | } 299 | 300 | ROS_INFO("Detected %d persons", (int) res.detections.size()); 301 | 302 | return true; 303 | } 304 | 305 | int main(int argc, char** argv) 306 | { 307 | ros::init(argc, argv, "openpose_ros_service_node_3d"); 308 | 309 | ros::NodeHandle local_nh("~"); 310 | // g_net_input_size = op::Point(getParam(local_nh, "net_input_width", 656), getParam(local_nh, "net_input_height", 368)); 311 | 312 | g_net_input_size.x = getParam(local_nh, "net_input_width", 656); 313 | g_net_input_size.y = getParam(local_nh, "net_input_height", 368); 314 | 315 | op::Point net_output_size(getParam(local_nh, "net_output_width", 656), getParam(local_nh, "net_output_height", 368)); 316 | //op::Point output_size(getParam(local_nh, "output_width", 1280), getParam(local_nh, "output_height", 720)); 317 | //op::Point output_size(getParam(local_nh, "output_width", 1024), getParam(local_nh, "output_height", 1024)); 318 | 319 | output_size.x = getParam(local_nh, "output_width", 1024); 320 | output_size.y = getParam(local_nh, "output_height", 1024); 321 | 322 | g_num_scales 
= getParam(local_nh, "num_scales", 1); 323 | g_scale_gap = getParam(local_nh, "scale_gap", 0.3); 324 | unsigned int num_gpu_start = getParam(local_nh, "num_gpu_start", 0); 325 | 326 | 327 | std::string package_path = ros::package::getPath("openpose_ros_pkg"); 328 | std::string folder_location = package_path + "/../openpose/models/"; 329 | 330 | std::string model_folder = getParam(local_nh, "model_folder", folder_location); 331 | op::PoseModel pose_model = stringToPoseModel(getParam(local_nh, "pose_model", std::string("COCO"))); 332 | g_bodypart_map = getBodyPartMapFromPoseModel(pose_model); 333 | 334 | ros::NodeHandle nh; 335 | 336 | ros::ServiceServer service = nh.advertiseService("detect_poses_3d", detectPosesCallback); 337 | 338 | image_skeleton_pub = nh.advertise( "/openpose_ros/skeleton_3d/detected_poses_image", 0 ); 339 | 340 | //declare publisher of type openpose_ros_msgs::PersonDetection in topic /openpose_ros/detected_poses_keypoints 341 | keypoints_pub = nh.advertise( "/openpose_ros/skeleton_3d/detected_poses_keypoints" , 0 ); 342 | 343 | g_pose_extractor = std::shared_ptr( 344 | /* new op::PoseExtractorCaffe(g_net_input_size, net_output_size, output_size, g_num_scales, 345 | g_scale_gap, pose_model, model_folder, num_gpu_start));*/ 346 | 347 | new op::PoseExtractorCaffe(g_net_input_size, net_output_size, output_size, g_num_scales, pose_model, 348 | model_folder, num_gpu_start)); 349 | 350 | poseRenderer = std::shared_ptr( 351 | new op::PoseRenderer(net_output_size, output_size, pose_model, nullptr, true, 0.6)); 352 | //poseRenderer{netOutputSize, outputSize, poseModel, nullptr, !FLAGS_disable_blending, (float)FLAGS_alpha_pose}; 353 | 354 | g_pose_extractor->initializationOnThread(); 355 | poseRenderer->initializationOnThread(); 356 | 357 | ROS_INFO("Initialization Successful!"); 358 | ros::spin(); 359 | 360 | return 0; 361 | } 362 | 363 | 364 | -------------------------------------------------------------------------------- /openpose_ros_pkg/src/openpose_ros_node_firephinx.cpp: -------------------------------------------------------------------------------- 1 | //#define USE_CAFFE 2 | 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | 9 | #include // `std::chrono::` functions and classes, e.g. std::chrono::milliseconds 10 | #include // std::string 11 | 12 | #include // cv::Mat & cv::Size 13 | 14 | // ------------------------- OpenPose Library Tutorial - Pose - Example 1 - Extract from Image ------------------------- 15 | // This first example shows the user how to: 16 | // 1. Load an image (`filestream` module) 17 | // 2. Extract the pose of that image (`pose` module) 18 | // 3. Render the pose on a resized copy of the input image (`pose` module) 19 | // 4. Display the rendered pose (`gui` module) 20 | // In addition to the previous OpenPose modules, we also need to use: 21 | // 1. `core` module: for the Array class that the `pose` module needs 22 | // 2. `utilities` module: for the error & logging functions, i.e. op::error & op::log respectively 23 | 24 | // 3rdparty dependencies 25 | #include // DEFINE_bool, DEFINE_int32, DEFINE_int64, DEFINE_uint64, DEFINE_double, DEFINE_string 26 | #include // google::InitGoogleLogging 27 | // OpenPose dependencies 28 | #include 29 | #include 30 | #include 31 | #include 32 | #include 33 | 34 | #include 35 | #include 36 | #include 37 | #include 38 | #include 39 | #include 40 | 41 | 42 | 43 | //#include 44 | 45 | // See all the available parameter options withe the `--help` flag. E.g. 
`./build/examples/openpose/openpose.bin --help`. 46 | // Note: This command will show you flags for other unnecessary 3rdparty files. Check only the flags for the OpenPose 47 | // executable. E.g. for `openpose.bin`, look for `Flags from examples/openpose/openpose.cpp:`. 48 | // Debugging 49 | DEFINE_int32(logging_level, 4, "The logging level. Integer in the range [0, 255]. 0 will output any log() message, while" 50 | " 255 will not output any. Current OpenPose library messages are in the range 0-4: 1 for" 51 | " low priority messages and 4 for important ones."); 52 | // Camera Topic 53 | DEFINE_string(camera_topic, "/usb_cam/image_raw", "Image topic that OpenPose will process."); 54 | // OpenPose 55 | std::string package_path = ros::package::getPath("openpose_ros_pkg"); 56 | std::string model_folder_location = package_path + "/../openpose/models/"; 57 | 58 | //DEFINE_string(model_folder, "/home/stevenjj/nstrf_ws/src/openpose_ros/openpose/models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located."); 59 | 60 | /*DEFINE_string(net_resolution, "656x368", "Multiples of 16. If it is increased, the accuracy usually increases. If it is decreased," 61 | " the speed increases."); 62 | // "Multiples of 16. the accuracy usually increases. If it is decreased, the speed increases */ 63 | #define NET_RES_X 656 64 | #define NET_RES_Y 368 65 | /*DEFINE_string(resolution, "1280x720", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the" 66 | " default images resolution.");*/ 67 | #define OUTPUT_RES_X 1280 // Display Resolution Output Width 68 | #define OUTPUT_RES_Y 720 // Display Resolution Output Height 69 | 70 | //DEFINE_string(model_pose, "COCO", "Model to be used (e.g. COCO, MPI, MPI_4_layers)."); 71 | #define MODEL_POSE "COCO" //"Model to be used (e.g. COCO, MPI, MPI_4_layers)."; 72 | DEFINE_int32(num_gpu_start, 0, "GPU device start number."); 73 | DEFINE_double(scale_gap, 0.3, "Scale gap between scales. No effect unless num_scales>1. Initial scale is always 1. If you" 74 | " want to change the initial scale, you actually want to multiply the `net_resolution` by" 75 | " your desired initial scale."); 76 | DEFINE_int32(num_scales, 1, "Number of scales to average."); 77 | // OpenPose Rendering 78 | DEFINE_bool(disable_blending, false, "If blending is enabled, it will merge the results with the original frame. If disabled, it" 79 | " will only display the results."); 80 | DEFINE_double(alpha_pose, 0.6, "Blending factor (range 0-1) for the body part rendering. 1 will show it completely, 0 will" 81 | " hide it. 
Only valid for GPU rendering."); 82 | op::PoseModel stringToPoseModel(const std::string& pose_model_string) 83 | { 84 | if (pose_model_string == "COCO") 85 | return op::PoseModel::COCO_18; 86 | else if (pose_model_string == "MPI") 87 | return op::PoseModel::MPI_15; 88 | else if (pose_model_string == "MPI_4_layers") 89 | return op::PoseModel::MPI_15_4; 90 | else 91 | { 92 | ROS_ERROR("String does not correspond to any model (COCO, MPI, MPI_4_layers)"); 93 | return op::PoseModel::COCO_18; 94 | } 95 | } 96 | 97 | 98 | class RosImgSub 99 | { 100 | private: 101 | ros::NodeHandle nh_; 102 | image_transport::ImageTransport it_; 103 | image_transport::Subscriber image_sub_; 104 | cv_bridge::CvImagePtr cv_img_ptr_; 105 | 106 | public: 107 | RosImgSub(const std::string& image_topic): it_(nh_) 108 | { 109 | // Subscribe to input video feed and publish output video feed 110 | image_sub_ = it_.subscribe(image_topic, 1, &RosImgSub::convertImage, this); 111 | cv_img_ptr_ = nullptr; 112 | } 113 | 114 | ~RosImgSub(){} 115 | 116 | void convertImage(const sensor_msgs::ImageConstPtr& msg) 117 | { 118 | try 119 | { 120 | cv_img_ptr_ = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8); 121 | } 122 | catch (cv_bridge::Exception& e) 123 | { 124 | ROS_ERROR("cv_bridge exception: %s", e.what()); 125 | return; 126 | } 127 | } 128 | 129 | cv_bridge::CvImagePtr& getCvImagePtr() 130 | { 131 | return cv_img_ptr_; 132 | } 133 | 134 | }; 135 | 136 | int openPoseROSTutorial() 137 | { 138 | op::log("OpenPose ROS Node", op::Priority::High); 139 | // ------------------------- INITIALIZATION ------------------------- 140 | // Step 1 - Set logging level 141 | // - 0 will output all the logging messages 142 | // - 255 will output nothing 143 | op::check(0 <= FLAGS_logging_level && FLAGS_logging_level <= 255, "Wrong logging_level value.", __LINE__, __FUNCTION__, __FILE__); 144 | op::ConfigureLog::setPriorityThreshold((op::Priority)FLAGS_logging_level); 145 | op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__); 146 | // Step 2 - Read Google flags (user defined configuration) 147 | // outputSize 148 | //const auto outputSize = op::flagsToPoint(FLAGS_resolution, "1280x720"); 149 | op::Point outputSize; 150 | outputSize.x = OUTPUT_RES_X; 151 | outputSize.y = OUTPUT_RES_Y; 152 | 153 | op::Point netInputSize; 154 | netInputSize.x = NET_RES_X; //656; 155 | netInputSize.y = NET_RES_Y; //368; 156 | 157 | // Declare Node Handle 158 | ros::NodeHandle nh; 159 | 160 | // Declare Service 161 | ros::ServiceClient client = nh.serviceClient("detect_poses"); 162 | openpose_ros_msgs::GetPersons srv; 163 | 164 | // Declare Publisher 165 | ros::Publisher input_image_pub = nh.advertise( "/openpose_ros/input_image", 0 ); 166 | 167 | // Initialize cv_ptr 168 | sensor_msgs::Image ros_image; 169 | ros_image.encoding = sensor_msgs::image_encodings::BGR8; 170 | cv_bridge::CvImagePtr cv_ptr; 171 | cv_ptr = cv_bridge::toCvCopy(ros_image, sensor_msgs::image_encodings::BGR8); 172 | 173 | // netInputSize 174 | //const auto netInputSize = op::flagsToPoint(FLAGS_net_resolution, "656x368"); 175 | // netOutputSize 176 | const auto netOutputSize = netInputSize; 177 | // poseModel 178 | //const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose); 179 | op::PoseModel poseModel = stringToPoseModel(std::string(MODEL_POSE)); 180 | // Check no contradictory flags enabled 181 | if (FLAGS_alpha_pose < 0. || FLAGS_alpha_pose > 1.) 
182 | op::error("Alpha value for blending must be in the range [0,1].", __LINE__, __FUNCTION__, __FILE__); 183 | if (FLAGS_scale_gap <= 0. && FLAGS_num_scales > 1) 184 | op::error("Incompatible flag configuration: scale_gap must be greater than 0 or num_scales = 1.", __LINE__, __FUNCTION__, __FILE__); 185 | // Logging 186 | op::log("", op::Priority::Low, __LINE__, __FUNCTION__, __FILE__); 187 | // Step 3 - Initialize all required classes 188 | op::CvMatToOpInput cvMatToOpInput{netInputSize, FLAGS_num_scales, (float)FLAGS_scale_gap}; 189 | op::CvMatToOpOutput cvMatToOpOutput{outputSize}; 190 | op::PoseExtractorCaffe poseExtractorCaffe{netInputSize, netOutputSize, outputSize, FLAGS_num_scales, poseModel, 191 | model_folder_location, FLAGS_num_gpu_start}; 192 | op::PoseRenderer poseRenderer{netOutputSize, outputSize, poseModel, nullptr, !FLAGS_disable_blending, (float)FLAGS_alpha_pose}; 193 | //op::PoseRenderer poseRenderer{netOutputSize, outputSize, poseModel, nullptr, true, 0.6}; 194 | 195 | op::OpOutputToCvMat opOutputToCvMat{outputSize}; 196 | // Step 4 - Initialize resources on desired thread (in this case single thread, i.e. we init resources here) 197 | poseExtractorCaffe.initializationOnThread(); 198 | poseRenderer.initializationOnThread(); 199 | ROS_INFO("Initialization Success"); 200 | 201 | // Step 5 - Initialize the image subscriber 202 | RosImgSub ris(FLAGS_camera_topic); 203 | 204 | int frame_count = 0; 205 | const std::chrono::high_resolution_clock::time_point timerBegin = std::chrono::high_resolution_clock::now(); 206 | 207 | ros::spinOnce(); 208 | 209 | // Step 6 - Continuously process images from image subscriber 210 | while (ros::ok()) 211 | { 212 | // ------------------------- POSE ESTIMATION AND RENDERING ------------------------- 213 | // Step 1 - Get cv_image ptr and check that it is not null 214 | cv_bridge::CvImagePtr cvImagePtr = ris.getCvImagePtr(); 215 | if(cvImagePtr != nullptr) 216 | { 217 | cv::Mat inputImage = cvImagePtr->image; 218 | 219 | // Convert to ros Image msg 220 | ros_image = *(cvImagePtr->toImageMsg()); 221 | 222 | // Step 2 - Format input image to OpenPose input and output formats 223 | op::Array netInputArray; 224 | std::vector scaleRatios; 225 | std::tie(netInputArray, scaleRatios) = cvMatToOpInput.format(inputImage); 226 | double scaleInputToOutput; 227 | op::Array outputArray; 228 | std::tie(scaleInputToOutput, outputArray) = cvMatToOpOutput.format(inputImage); 229 | // Step 3 - Estimate poseKeypoints 230 | ROS_INFO("Performing Forward Pass"); 231 | 232 | // Begin Service Call 233 | // You need to run the node openpose_ros_node for the service to work 234 | srv.request.image = ros_image; 235 | if (client.call(srv)) 236 | { 237 | 238 | //publish input image, subscribe /openpose_ros/input_image to get the input image 239 | input_image_pub.publish(ros_image); 240 | 241 | //subscribe to /openpose_ros/detected_poses_image to get the images 242 | //subscribe to /openpose_ros/detected_poses_keypoints to get the keypoints in (x,y,score) format 243 | ROS_INFO("Call Successful"); 244 | } 245 | else 246 | { 247 | ROS_ERROR("Failed to call service detect_poses"); 248 | } 249 | 250 | poseExtractorCaffe.forwardPass(netInputArray, {inputImage.cols, inputImage.rows}, scaleRatios); 251 | std::cout << "Forward Pass Success" << std::endl; 252 | 253 | const auto poseKeypoints = poseExtractorCaffe.getPoseKeypoints(); 254 | std::cout << " Got Keypoints" << std::endl; 255 | // Step 4 - Render poseKeypoints 256 | poseRenderer.renderPose(outputArray, poseKeypoints); 257 
| std::cout << " Rendering Pose" << std::endl; 258 | // Step 5 - OpenPose output format to cv::Mat 259 | auto outputImage = opOutputToCvMat.formatToCvMat(outputArray); 260 | std::cout << " Outputing Image" << std::endl; 261 | 262 | // ------------------------- SHOWING RESULT AND CLOSING ------------------------- 263 | // Step 1 - Show results 264 | cv::imshow("OpenPose ROS", outputImage); 265 | cv::waitKey(1); 266 | frame_count++; 267 | } 268 | ros::spinOnce(); 269 | } 270 | 271 | // Measuring total time 272 | const double totalTimeSec = (double)std::chrono::duration_cast 273 | (std::chrono::high_resolution_clock::now()-timerBegin).count() * 1e-9; 274 | const std::string message = "Real-time pose estimation demo successfully finished. Total time: " 275 | + std::to_string(totalTimeSec) + " seconds. " + std::to_string(frame_count) 276 | + " frames processed. Average fps is " + std::to_string(frame_count/totalTimeSec) + "."; 277 | op::log(message, op::Priority::Max); 278 | 279 | // Return successful message 280 | return 0; 281 | } 282 | 283 | int main(int argc, char** argv) 284 | { 285 | google::InitGoogleLogging("openpose_ros_node"); 286 | gflags::ParseCommandLineFlags(&argc, &argv, true); 287 | ros::init(argc, argv, "openpose_ros_node"); 288 | 289 | return openPoseROSTutorial(); 290 | } 291 | -------------------------------------------------------------------------------- /openpose_ros_pkg/src/test_openpose_ros_service_call.cpp: -------------------------------------------------------------------------------- 1 | // This code tests the openpose rosservice node by loading an image and calling the service.#include 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | 9 | int main(int argc, char** argv){ 10 | // Initialize ROS 11 | ros::init(argc, argv, "test_openpose_ros_service_call"); 12 | // Declare Node Handle 13 | ros::NodeHandle nh; 14 | 15 | // Declare Service 16 | ros::ServiceClient client = nh.serviceClient("detect_poses"); 17 | openpose_ros_msgs::GetPersons srv; 18 | 19 | // Declare Publisher 20 | ros::Publisher input_image_pub = nh.advertise( "/openpose_ros/input_image", 0 ); 21 | 22 | // Initialize cv_ptr 23 | sensor_msgs::Image ros_image; 24 | ros_image.encoding = sensor_msgs::image_encodings::BGR8; 25 | cv_bridge::CvImagePtr cv_ptr; 26 | cv_ptr = cv_bridge::toCvCopy(ros_image, sensor_msgs::image_encodings::BGR8); 27 | 28 | // Read Image 29 | std::string path = ros::package::getPath("openpose_ros_pkg"); 30 | std::string filename = path + "/examples/COCO_val2014_000000000192.jpg"; 31 | 32 | cv::Mat image = cv::imread(filename, CV_LOAD_IMAGE_COLOR); 33 | cv_ptr->image = image; 34 | 35 | // Convert to ros Image msg 36 | ros_image = *(cv_ptr->toImageMsg()); 37 | 38 | // Publish Input Msg 39 | while (true){ 40 | input_image_pub.publish(ros_image); 41 | 42 | // Begin Service Call 43 | srv.request.image = ros_image; 44 | if (client.call(srv)) 45 | { 46 | ROS_INFO("Call Successful"); //subscribe to /openpose_ros/detected_poses_image to view the result 47 | } 48 | else 49 | { 50 | ROS_ERROR("Failed to call service detect_poses"); 51 | } 52 | 53 | ros::spinOnce(); 54 | } 55 | 56 | return 0; 57 | } 58 | -------------------------------------------------------------------------------- /readme_images/Fig1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig1.png 
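A quick note on `test_openpose_ros_service_call.cpp` above: its publish loop is `while (true)`, so it calls the service as fast as it can and never honors Ctrl-C. A throttled, shutdown-aware variant (a sketch reusing the same client, publisher, and message names from that file, not the shipped code) could look like:

````
// Sketch: same test client, but rate-limited and shutdown-aware.
ros::Rate rate(1.0);               // one service call per second
while (ros::ok())                  // exits cleanly on SIGINT / rosnode kill
{
    input_image_pub.publish(ros_image);
    srv.request.image = ros_image;
    if (client.call(srv))
        ROS_INFO("Call Successful");
    else
        ROS_ERROR("Failed to call service detect_poses");
    ros::spinOnce();
    rate.sleep();
}
````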
-------------------------------------------------------------------------------- /readme_images/Fig10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig10.png -------------------------------------------------------------------------------- /readme_images/Fig11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig11.png -------------------------------------------------------------------------------- /readme_images/Fig12.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig12.png -------------------------------------------------------------------------------- /readme_images/Fig13.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig13.png -------------------------------------------------------------------------------- /readme_images/Fig14.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig14.png -------------------------------------------------------------------------------- /readme_images/Fig15.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig15.png -------------------------------------------------------------------------------- /readme_images/Fig16.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig16.png -------------------------------------------------------------------------------- /readme_images/Fig17.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig17.png -------------------------------------------------------------------------------- /readme_images/Fig18.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig18.png -------------------------------------------------------------------------------- /readme_images/Fig19.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig19.png -------------------------------------------------------------------------------- /readme_images/Fig2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig2.png -------------------------------------------------------------------------------- /readme_images/Fig20.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig20.png -------------------------------------------------------------------------------- /readme_images/Fig21.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig21.png -------------------------------------------------------------------------------- /readme_images/Fig3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig3.png -------------------------------------------------------------------------------- /readme_images/Fig4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig4.png -------------------------------------------------------------------------------- /readme_images/Fig5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig5.png -------------------------------------------------------------------------------- /readme_images/Fig6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig6.png -------------------------------------------------------------------------------- /readme_images/Fig7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig7.png -------------------------------------------------------------------------------- /readme_images/Fig8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig8.png -------------------------------------------------------------------------------- /readme_images/Fig9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stevenjj/openpose_ros/fbd7654457d59e5e6767fe0dd5449e78d47b0b3b/readme_images/Fig9.png -------------------------------------------------------------------------------- /skeleton_extract_3d/CMakeLists.txt: -------------------------------------------------------------------------------- 1 | cmake_minimum_required(VERSION 2.8.3) 2 | project(skeleton_extract_3d) 3 | 4 | ## Compile as C++11, supported in ROS Kinetic and newer 5 | # add_compile_options(-std=c++11) 6 | 7 | find_package(catkin REQUIRED COMPONENTS 8 | roscpp 9 | sensor_msgs 10 | pcl_ros 11 | pcl_conversions 12 | cv_bridge 13 | message_filters 14 | openpose_ros_msgs 15 | visualization_msgs 16 | ) 17 | 18 | find_package(PCL 1.7 REQUIRED) 19 | 20 | catkin_package( 21 | # CATKIN_DEPENDS other_catkin_pkg 22 | ) 23 | 24 | include_directories( 25 | ${PCL_INCLUDE_DIRS} 26 | ${catkin_INCLUDE_DIRS} 27 | ) 28 | 29 | 30 | add_executable(skeleton_extract_3d_node src/skeleton_extract_3d_node.cpp) 31 | 
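# Note (illustrative): the sources below rely on C++11 features such as
# std::isnan in skeleton_extract_3d_node.cpp, so on older compilers you may
# need to uncomment the add_compile_options(-std=c++11) line above.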
target_link_libraries(skeleton_extract_3d_node ${catkin_LIBRARIES} ${PCL_LIBRARIES}) 32 | add_dependencies(skeleton_extract_3d_node ${catkin_EXPORTED_TARGETS}) 33 | 34 | add_executable(skeleton_extract_3d_visualization_node src/skeleton_extract_3d_visualization_node.cpp) 35 | target_link_libraries(skeleton_extract_3d_visualization_node ${catkin_LIBRARIES}) 36 | add_dependencies(skeleton_extract_3d_visualization_node ${catkin_EXPORTED_TARGETS}) 37 | -------------------------------------------------------------------------------- /skeleton_extract_3d/README.md: -------------------------------------------------------------------------------- 1 | Launches the `openpose_ros_node` and the 3D extractor node 2 | ```` 3 | roslaunch skeleton_extract_3d openpose_skeleton_extract.launch 4 | ```` -------------------------------------------------------------------------------- /skeleton_extract_3d/launch/openpose_skeleton_extract.launch: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | -------------------------------------------------------------------------------- /skeleton_extract_3d/package.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | skeleton_extract_3d 4 | 0.0.0 5 | The skeleton_extract_3d package 6 | 7 | 8 | 9 | 10 | stevenjj 11 | 12 | 13 | 14 | 15 | 16 | TODO 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | catkin 43 | 44 | catkin 45 | roscpp 46 | sensor_msgs 47 | pcl_ros 48 | pcl_conversions 49 | cv_bridge 50 | message_filters 51 | openpose_ros_msgs 52 | visualization_msgs 53 | 54 | roscpp 55 | sensor_msgs 56 | pcl_ros 57 | pcl_conversions 58 | cv_bridge 59 | message_filters 60 | openpose_ros_msgs 61 | visualization_msgs 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | -------------------------------------------------------------------------------- /skeleton_extract_3d/src/skeleton_extract_3d_node.cpp: -------------------------------------------------------------------------------- 1 | // This code synchronizes the 3d point cloud and 2D image and publishes 3d locations of human skeletons 2 | #include 3 | #include 4 | #include 5 | #include 6 | #include 7 | 8 | #include 9 | #include 10 | #include 11 | 12 | #include 13 | 14 | #include 15 | 16 | #include 17 | #include 18 | #include 19 | 20 | #include 21 | #include 22 | 23 | #include 24 | #include 25 | 26 | 27 | // Set resolution 28 | 29 | #define width 960 30 | #define height 540 31 | 32 | 33 | 34 | // Declare Publishers 35 | ros::Publisher pc_pub; 36 | ros::Publisher image_pub; 37 | 38 | 39 | // Declare 3d keypoints publisher 40 | ros::Publisher keypoints_3d_pub; 41 | 42 | 43 | // Function to initialize all skeletal detections 44 | 45 | openpose_ros_msgs::BodypartDetection_3d getNANBodypart() 46 | { 47 | openpose_ros_msgs::BodypartDetection_3d bodypart_depth; 48 | bodypart_depth.x = NAN; 49 | bodypart_depth.y = NAN; 50 | bodypart_depth.z = NAN; 51 | bodypart_depth.confidence = NAN; 52 | return bodypart_depth; 53 | } 54 | 55 | 56 | // Function for when a bodypart is not detected 57 | void notDetectedBodyPart(std::string bodypart_name) 58 | { 59 | std::cerr << bodypart_name << " not detected!!" 
<< std::endl; 60 | 61 | std::cerr << bodypart_name << " pixel coordinates (x,y,z): " << std::endl; 62 | std::cerr << "( "<< "nan" << ", " 63 | << "nan" << ", " 64 | << "nan" << ")" << std::endl; 65 | 66 | std::cerr << bodypart_name << " real world coordinates (x,y,z): " << std::endl; 67 | std::cerr << "( " << "nan" << ", " 68 | << "nan" << ", " 69 | << "nan" << ")" << std::endl; 70 | 71 | } 72 | 73 | 74 | // Function to compute the average of a vector 75 | 76 | double Average(std::vector<double> v) 77 | { 78 | double total = 0.0; 79 | double size = 0.0; 80 | for (int n = 0; n < v.size(); n++){total += v[n]; size++;} 81 | 82 | return total/size; 83 | 84 | } 85 | 86 | 87 | // Function to get 3d detections 88 | 89 | openpose_ros_msgs::BodypartDetection_3d get3dcoordinates(const openpose_ros_msgs::BodypartDetection bodypart_2d, const pcl::PointCloud<pcl::PointXYZ>::Ptr temp_cloud, const double maxx, const double minx, const double maxy, const double miny, const double maxz, const double minz, const std::string bodypart_name) 90 | { 91 | openpose_ros_msgs::BodypartDetection_3d bodypart_depth; 92 | 93 | // Copy the 2d detection confidence into the 3d detection message 94 | bodypart_depth.confidence = bodypart_2d.confidence; 95 | 96 | // If the body part was not detected 97 | if (std::isnan(bodypart_depth.confidence) || bodypart_depth.confidence == 0.0 || (bodypart_2d.x == 0 && bodypart_2d.y == 0) || bodypart_2d.x > width || bodypart_2d.y > height ) 98 | { 99 | notDetectedBodyPart(bodypart_name); 100 | bodypart_depth.x = NAN; 101 | bodypart_depth.y = NAN; 102 | bodypart_depth.z = NAN; 103 | } 104 | 105 | // If the body part was detected 106 | else 107 | { 108 | 109 | // Get keypoint pixel coordinates 110 | unsigned long long int x_pixel = bodypart_2d.x; 111 | unsigned long long int y_pixel = bodypart_2d.y; 112 | 113 | // Vector for storing the keypoint index and its surrounding indices 114 | std::vector<int> indices; 115 | int index = 0; 116 | 117 | // Number of columns and rows of indices surrounding the keypoint (both must be odd) 118 | int rows = 3; 119 | int columns = 3; 120 | 121 | // Store in the vector the indices surrounding the keypoint 122 | for (int i = -(rows-1)/2 ; i <= (rows-1)/2; i++) 123 | { 124 | for (int j = -(columns-1)/2; j <= (columns-1)/2; j++) 125 | { 126 | index = width*(y_pixel+i)+x_pixel+j+1; 127 | indices.push_back(index); 128 | } 129 | } 130 | 131 | // Vectors for storing candidate world coordinates of the indices in the cluster 132 | std::vector<double> possible_x; 133 | std::vector<double> possible_y; 134 | std::vector<double> possible_z; 135 | 136 | // Keep coordinates only if they are valid 137 | for(int n=0; n < indices.size(); n++) 138 | { 139 | if (not std::isnan(temp_cloud->points[indices[n]].x) && not std::isnan(temp_cloud->points[indices[n]].y) && not std::isnan(temp_cloud->points[indices[n]].z)) 140 | { 141 | if (temp_cloud->points[indices[n]].x <= maxx && temp_cloud->points[indices[n]].x >= minx) 142 | { 143 | if (temp_cloud->points[indices[n]].y <= maxy && temp_cloud->points[indices[n]].y >= miny) 144 | { 145 | if (temp_cloud->points[indices[n]].z <= maxz && temp_cloud->points[indices[n]].z >= minz) 146 | { 147 | possible_x.push_back(temp_cloud->points[indices[n]].x); 148 | possible_y.push_back(temp_cloud->points[indices[n]].y); 149 | possible_z.push_back(temp_cloud->points[indices[n]].z); 150 | } 151 | } 152 | } 153 | 154 | } 155 | } 156 | 157 | // Check if the vectors are empty 158 | if (possible_x.size() == 0 || possible_y.size() == 0 || possible_z.size() == 0) 159 | { 160 | notDetectedBodyPart(bodypart_name); 161 | bodypart_depth.x = NAN; 162 |
bodypart_depth.y = NAN; 163 | bodypart_depth.z = NAN; 164 | } 165 | else 166 | { 167 | // Average each coordinate over the valid neighbours 168 | bodypart_depth.x = Average(possible_x); 169 | bodypart_depth.y = Average(possible_y); 170 | bodypart_depth.z = Average(possible_z); 171 | 172 | 173 | std::cerr << bodypart_name << " pixel coordinates (x,y,z): " << std::endl; 174 | std::cerr << "( "<< bodypart_2d.x << ", " 175 | << bodypart_2d.y << ", " 176 | << bodypart_depth.z << ")" << std::endl; 177 | 178 | std::cerr << bodypart_name << " real world coordinates (x,y,z): " << std::endl; 179 | std::cerr << "( " << bodypart_depth.x << ", " 180 | << bodypart_depth.y << ", " 181 | << bodypart_depth.z << ")" << std::endl; 182 | 183 | 184 | } 185 | 186 | } 187 | 188 | return bodypart_depth; 189 | 190 | } 191 | // Declare Service Client 192 | ros::ServiceClient client; 193 | openpose_ros_msgs::GetPersons srv; 194 | 195 | typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::PointCloud2, sensor_msgs::Image> MySyncPolicy; 196 | 197 | // Declare Callback 198 | void callback(const sensor_msgs::PointCloud2ConstPtr& cloud_msg, const sensor_msgs::ImageConstPtr& image_msg){ 199 | ROS_INFO("Cloud and Image Messages Received!"); 200 | ROS_INFO(" Cloud Time Stamp: %f", cloud_msg->header.stamp.toSec()); 201 | ROS_INFO(" Image Time Stamp: %f", image_msg->header.stamp.toSec()); 202 | 203 | 204 | // Publish input pointcloud and image 205 | pc_pub.publish(*cloud_msg); 206 | image_pub.publish(*image_msg); 207 | 208 | srv.request.image = *image_msg; 209 | if (client.call(srv)) 210 | { 211 | ROS_INFO("ROS Service call Successful"); 212 | // Prepare a new ROS Message for all skeletal detections 213 | 214 | 215 | // Initialize message for skeletal detections 216 | openpose_ros_msgs::PersonDetection_3d person_msg; 217 | 218 | // Number of people detected 219 | int num_people = srv.response.detections.size(); 220 | 221 | // Number of body parts; we assume the COCO model (18 keypoints) 222 | int num_bodyparts = 18; 223 | 224 | // For each detection (person) 225 | 226 | for (size_t person_idx = 0; person_idx < num_people; person_idx++) 227 | { 228 | 229 | // Prepare a new ROS Message for this skeletal detection 230 | 231 | // Add number of people detected 232 | person_msg.num_people_detected = num_people; 233 | 234 | // Add person ID 235 | person_msg.person_ID = person_idx; 236 | 237 | // Initialize all body parts (x, y, z and confidence = NaN) 238 | 239 | person_msg.nose = getNANBodypart(); 240 | person_msg.neck = getNANBodypart(); 241 | person_msg.right_shoulder = getNANBodypart(); 242 | person_msg.right_elbow = getNANBodypart(); 243 | person_msg.right_wrist = getNANBodypart(); 244 | person_msg.left_shoulder = getNANBodypart(); 245 | person_msg.left_elbow = getNANBodypart(); 246 | person_msg.left_wrist = getNANBodypart(); 247 | person_msg.right_hip = getNANBodypart(); 248 | person_msg.right_knee = getNANBodypart(); 249 | person_msg.right_ankle = getNANBodypart(); 250 | person_msg.left_hip = getNANBodypart(); 251 | person_msg.left_knee = getNANBodypart(); 252 | person_msg.left_ankle = getNANBodypart(); 253 | person_msg.right_eye = getNANBodypart(); 254 | person_msg.left_eye = getNANBodypart(); 255 | person_msg.right_ear = getNANBodypart(); 256 | person_msg.left_ear = getNANBodypart(); 257 | person_msg.chest = getNANBodypart(); 258 | 259 | 260 | // Declare pcl pointcloud 261 | pcl::PointCloud<pcl::PointXYZ>::Ptr temp_cloud (new pcl::PointCloud<pcl::PointXYZ>); 262 | 263 | // Load pointcloud data from cloud_msg (sensor_msgs::PointCloud2) into temp_cloud (pcl::PointCloud) 264 | pcl::fromROSMsg (*cloud_msg, 
*temp_cloud); 265 | 266 | // Get max and min coordinates 267 | 268 | // Declare pcl points 269 | pcl::PointXYZ min; 270 | pcl::PointXYZ max; 271 | 272 | // Get min and max depth coordinate 273 | pcl::getMinMax3D(*temp_cloud,min,max); 274 | 275 | // Get minimun x,y,z in the screen 276 | std::cerr << "Minimum pointcloud (x,y,z) coordinates: " << std::endl; 277 | std::cerr << " (" << min.x << ", " 278 | << min.y << ", " 279 | << min.z << " )"<< std::endl; 280 | 281 | // Get maximum x,y,z in the screen 282 | std::cerr << "Maximum pointcloud (x,y,z) coordinates: " << std::endl; 283 | std::cerr << " (" << max.x << ", " 284 | << max.y << ", " 285 | << max.z << " )"<< std::endl; 286 | 287 | // for each body part 288 | 289 | for (size_t bodypart_idx = 0; bodypart_idx < num_bodyparts; bodypart_idx++) 290 | { 291 | // Get corresponding 2d skeleton detection for person with ID idx 292 | openpose_ros_msgs::PersonDetection skeleton_detections = srv.response.detections[person_idx]; 293 | 294 | // Initialize bodypart msg 295 | openpose_ros_msgs::BodypartDetection bodypart_detections; 296 | 297 | if (bodypart_idx == 0) 298 | { 299 | bodypart_detections = skeleton_detections.nose; 300 | person_msg.nose = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Nose"); 301 | } 302 | else if (bodypart_idx == 1) 303 | { 304 | bodypart_detections = skeleton_detections.neck; 305 | person_msg.neck = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Neck"); 306 | } 307 | else if (bodypart_idx == 2) 308 | { 309 | bodypart_detections = skeleton_detections.right_shoulder; 310 | person_msg.right_shoulder = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right shoulder"); 311 | } 312 | else if (bodypart_idx == 3) 313 | { 314 | bodypart_detections = skeleton_detections.right_elbow; 315 | person_msg.right_elbow = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right elbow"); 316 | } 317 | else if (bodypart_idx == 4) 318 | { 319 | bodypart_detections = skeleton_detections.right_wrist; 320 | person_msg.right_wrist = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right wrist"); 321 | } 322 | else if (bodypart_idx == 5) 323 | { 324 | bodypart_detections = skeleton_detections.left_shoulder; 325 | person_msg.left_shoulder = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left shoulder"); 326 | } 327 | else if (bodypart_idx == 6) 328 | { 329 | bodypart_detections = skeleton_detections.left_elbow; 330 | person_msg.left_elbow = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left elbow"); 331 | } 332 | else if (bodypart_idx == 7) 333 | { 334 | bodypart_detections = skeleton_detections.left_wrist; 335 | person_msg.left_wrist = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left wrist"); 336 | } 337 | else if (bodypart_idx == 8) 338 | { 339 | bodypart_detections = skeleton_detections.right_hip; 340 | person_msg.right_hip = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right hip"); 341 | } 342 | else if (bodypart_idx == 9) 343 | { 344 | bodypart_detections = skeleton_detections.right_knee; 345 | person_msg.right_knee = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right 
knee"); 346 | } 347 | else if (bodypart_idx == 10) 348 | { 349 | bodypart_detections = skeleton_detections.right_ankle; 350 | person_msg.right_ankle = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right ankle"); 351 | } 352 | else if (bodypart_idx == 11) 353 | { 354 | bodypart_detections = skeleton_detections.left_hip; 355 | person_msg.left_hip = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left hip"); 356 | } 357 | else if (bodypart_idx == 12) 358 | { 359 | bodypart_detections = skeleton_detections.left_knee; 360 | person_msg.left_knee = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left knee"); 361 | } 362 | else if (bodypart_idx == 13) 363 | { 364 | bodypart_detections = skeleton_detections.left_ankle; 365 | person_msg.left_ankle = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left ankle"); 366 | } 367 | else if (bodypart_idx == 14) 368 | { 369 | bodypart_detections = skeleton_detections.right_eye; 370 | person_msg.right_eye = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right eye"); 371 | } 372 | else if (bodypart_idx == 15) 373 | { 374 | bodypart_detections = skeleton_detections.left_eye; 375 | person_msg.left_eye = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left eye"); 376 | } 377 | else if (bodypart_idx == 16) 378 | { 379 | bodypart_detections = skeleton_detections.right_ear; 380 | person_msg.right_ear = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Right ear"); 381 | } 382 | else if (bodypart_idx == 17) 383 | { 384 | bodypart_detections = skeleton_detections.left_ear; 385 | person_msg.left_ear = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Left ear"); 386 | } 387 | /* else if (bodypart_idx == 18) 388 | { 389 | bodypart_detections = skeleton_detections.chest; 390 | person_msg.chest = get3dcoordinates(bodypart_detections , temp_cloud, max.x, min.x, max.y, min.y, max.z , min.z, "Chest"); 391 | } 392 | */ 393 | } 394 | 395 | // Publish 3D detection 396 | keypoints_3d_pub.publish(person_msg); 397 | 398 | } 399 | } 400 | else 401 | { 402 | ROS_ERROR("Failed to call service detect_poses_3d"); 403 | ROS_ERROR("Did you remap the service and client names?"); 404 | ROS_ERROR("This node expects a service called skeleton_2d_detector in which a launch file should have handled the remapping"); 405 | } 406 | 407 | } 408 | 409 | int main(int argc, char** argv){ 410 | // Initialize ROS 411 | ros::init(argc, argv, "skeleton_extract_3d_node"); 412 | 413 | // Declare Node Handle 414 | ros::NodeHandle nh("~"); 415 | 416 | // Declare Subscribers 417 | // Synchronize Point Cloud and Image Subscription Received Messages 418 | message_filters::Subscriber cloud_sub(nh, "point_cloud", 1); 419 | message_filters::Subscriber image_sub(nh, "image", 1); 420 | client = nh.serviceClient("skeleton_2d_detector"); 421 | 422 | 423 | // Pointcloud publisher topic /openpose_ros/input_pointcloud 424 | pc_pub = nh.advertise( "/openpose_ros/skeleton_3d/input_pointcloud", 0); 425 | 426 | // Image publisher topic /openpose_ros/input_rgb 427 | image_pub = nh.advertise( "/openpose_ros/skeleton_3d/input_rgb", 0); 428 | 429 | // Keypoints in 3D topic /openpose_ros/detected_poses_keypoints_3d 430 | keypoints_3d_pub = nh.advertise( 
"/openpose_ros/skeleton_3d/detected_poses_keypoints_3d", 0); 431 | 432 | 433 | // ApproximateTime takes a queue size as its constructor argument, hence MySyncPolicy(10) 434 | message_filters::Synchronizer sync(MySyncPolicy(10), cloud_sub, image_sub); 435 | sync.registerCallback(boost::bind(&callback, _1, _2)); 436 | 437 | // Spin Forever 438 | ros::spin(); 439 | 440 | return 0; 441 | } 442 | 443 | -------------------------------------------------------------------------------- /skeleton_extract_3d/src/skeleton_extract_3d_visualization_node.cpp: -------------------------------------------------------------------------------- 1 | #include 2 | #include 3 | 4 | #include 5 | #include 6 | #include 7 | #include 8 | 9 | #include 10 | #include 11 | #include 12 | #include 13 | #include 14 | 15 | #include 16 | #include 17 | 18 | // Declare marker and skeleton publishers 19 | 20 | ros::Publisher marker_pub; 21 | ros::Publisher skeleton_pub; 22 | 23 | // Function to check if 3d detection is not NAN 24 | bool PointISValid(const openpose_ros_msgs::BodypartDetection_3d bodypart){ 25 | if (std::isnan(bodypart.x) || std::isnan(bodypart.y) || std::isnan(bodypart.z)){return false;} 26 | else{return true;} 27 | } 28 | 29 | // Function to add joint coordinates to each point marker 30 | geometry_msgs::Point AddPoint(const openpose_ros_msgs::BodypartDetection_3d bodypart){ 31 | 32 | geometry_msgs::Point p; 33 | p.x = bodypart.x; 34 | p.y = bodypart.y; 35 | p.z = bodypart.z; 36 | 37 | return p; 38 | } 39 | 40 | 41 | // Declare class to subscribe to 3d detections and publish visualization markers 42 | class SubscribeAndPublish 43 | { 44 | public: 45 | SubscribeAndPublish() 46 | { 47 | //Topics you want to publish 48 | marker_pub = n_.advertise("/openpose_ros/skeleton_3d/visualization_markers", 1); 49 | skeleton_pub = n_.advertise("/openpose_ros/skeleton_3d/visualization_skeleton", 1); 50 | 51 | //Topics you want to subscribe 52 | sub_ = n_.subscribe("/openpose_ros/skeleton_3d/detected_poses_keypoints_3d", 1, &SubscribeAndPublish::callback, this); 53 | } 54 | 55 | // Declare callback for subscriber 56 | void callback(const openpose_ros_msgs::PersonDetection_3d& person_msg) 57 | { 58 | 59 | ROS_INFO("3D Detection Received!"); 60 | 61 | 62 | // Declare marker for the body joints 63 | visualization_msgs::Marker marker; 64 | 65 | // Set boyjoints markers 66 | marker.header.frame_id = "/kinect2_ir_optical_frame"; 67 | marker.id = person_msg.person_ID; 68 | marker.ns = "joints"; 69 | marker.header.stamp = ros::Time(); 70 | // Markers will be spheres 71 | marker.type = visualization_msgs::Marker::SPHERE_LIST; 72 | marker.action = visualization_msgs::Marker::ADD; 73 | marker.scale.x = 0.05; 74 | marker.scale.y = 0.05; 75 | marker.scale.z = 0.05; 76 | // Joints are red 77 | marker.color.a = 1.0; 78 | marker.color.r = 1.0; 79 | marker.color.g = 0.0; 80 | marker.color.b = 0.0; 81 | 82 | // Set marker duration in 150ms 83 | marker.lifetime = ros::Duration(0.15); 84 | 85 | // Declare line strip marker for the skeleton 86 | visualization_msgs::Marker skeleton; 87 | 88 | skeleton.id = person_msg.person_ID; 89 | skeleton.header.frame_id = "/kinect2_ir_optical_frame"; 90 | skeleton.ns = "skeleton"; 91 | skeleton.header.stamp = ros::Time(); 92 | // Skeleton will be lines 93 | skeleton.type = visualization_msgs::Marker::LINE_LIST; 94 | skeleton.scale.x = 0.03; 95 | skeleton.scale.y = 0.03; 96 | skeleton.scale.z = 0.03; 97 | // Skeleton is blue 98 | skeleton.color.a = 1.0; 99 | skeleton.color.r = 0.0; 100 | skeleton.color.g = 0.0; 101 
// Class that subscribes to the 3d detections and publishes visualization markers
class SubscribeAndPublish
{
public:
  SubscribeAndPublish()
  {
    // Topics to publish to
    marker_pub = n_.advertise<visualization_msgs::Marker>("/openpose_ros/skeleton_3d/visualization_markers", 1);
    skeleton_pub = n_.advertise<visualization_msgs::Marker>("/openpose_ros/skeleton_3d/visualization_skeleton", 1);

    // Topic to subscribe to
    sub_ = n_.subscribe("/openpose_ros/skeleton_3d/detected_poses_keypoints_3d", 1, &SubscribeAndPublish::callback, this);
  }

  // Callback for the subscriber
  void callback(const openpose_ros_msgs::PersonDetection_3d& person_msg)
  {
    ROS_INFO("3D Detection Received!");

    // Declare the marker for the body joints
    visualization_msgs::Marker marker;

    // Set up the body joint markers
    marker.header.frame_id = "/kinect2_ir_optical_frame";
    marker.id = person_msg.person_ID;
    marker.ns = "joints";
    marker.header.stamp = ros::Time();
    // Joints are drawn as spheres
    marker.type = visualization_msgs::Marker::SPHERE_LIST;
    marker.action = visualization_msgs::Marker::ADD;
    marker.scale.x = 0.05;
    marker.scale.y = 0.05;
    marker.scale.z = 0.05;
    // Joints are red
    marker.color.a = 1.0;
    marker.color.r = 1.0;
    marker.color.g = 0.0;
    marker.color.b = 0.0;

    // Set the marker lifetime to 150 ms
    marker.lifetime = ros::Duration(0.15);

    // Declare the line-list marker for the skeleton
    visualization_msgs::Marker skeleton;

    skeleton.id = person_msg.person_ID;
    skeleton.header.frame_id = "/kinect2_ir_optical_frame";
    skeleton.ns = "skeleton";
    skeleton.header.stamp = ros::Time();
    // Bones are drawn as lines
    skeleton.type = visualization_msgs::Marker::LINE_LIST;
    skeleton.action = visualization_msgs::Marker::ADD;
    skeleton.scale.x = 0.03;
    skeleton.scale.y = 0.03;
    skeleton.scale.z = 0.03;
    // The skeleton is blue
    skeleton.color.a = 1.0;
    skeleton.color.r = 0.0;
    skeleton.color.g = 0.0;
    skeleton.color.b = 1.0;

    // Set the skeleton lifetime to 150 ms
    skeleton.lifetime = ros::Duration(0.15);

    // Assign the 3d joint coordinates to the markers. A joint is added only
    // if it is valid; a bone is drawn only if both of its joints are valid.
    for (int bodypart_idx = 0; bodypart_idx < 18; bodypart_idx++){

      if (bodypart_idx == 0)
      {
        if (PointISValid(person_msg.nose))
        {marker.points.push_back(AddPoint(person_msg.nose));}
      }
      else if (bodypart_idx == 1)
      {
        if (PointISValid(person_msg.neck)){
          marker.points.push_back(AddPoint(person_msg.neck));
          // Draw a line between the neck and the nose
          if (PointISValid(person_msg.nose)){
            skeleton.points.push_back(AddPoint(person_msg.nose));
            skeleton.points.push_back(AddPoint(person_msg.neck));
          }
        }
      }
      else if (bodypart_idx == 2)
      {
        if (PointISValid(person_msg.right_shoulder)){
          marker.points.push_back(AddPoint(person_msg.right_shoulder));
          // Draw a line between the right shoulder and the neck
          if (PointISValid(person_msg.neck)){
            skeleton.points.push_back(AddPoint(person_msg.neck));
            skeleton.points.push_back(AddPoint(person_msg.right_shoulder));
          }
        }
      }
      else if (bodypart_idx == 3)
      {
        if (PointISValid(person_msg.right_elbow)){
          marker.points.push_back(AddPoint(person_msg.right_elbow));
          // Draw a line between the right shoulder and the right elbow
          if (PointISValid(person_msg.right_shoulder)){
            skeleton.points.push_back(AddPoint(person_msg.right_shoulder));
            skeleton.points.push_back(AddPoint(person_msg.right_elbow));
          }
        }
      }
      else if (bodypart_idx == 4)
      {
        if (PointISValid(person_msg.right_wrist)){
          marker.points.push_back(AddPoint(person_msg.right_wrist));
          // Draw a line between the right elbow and the right wrist
          if (PointISValid(person_msg.right_elbow)){
            skeleton.points.push_back(AddPoint(person_msg.right_elbow));
            skeleton.points.push_back(AddPoint(person_msg.right_wrist));
          }
        }
      }
      else if (bodypart_idx == 5)
      {
        if (PointISValid(person_msg.left_shoulder)){
          marker.points.push_back(AddPoint(person_msg.left_shoulder));
          // Draw a line between the left shoulder and the neck
          if (PointISValid(person_msg.neck)){
            skeleton.points.push_back(AddPoint(person_msg.neck));
            skeleton.points.push_back(AddPoint(person_msg.left_shoulder));
          }
        }
      }
      else if (bodypart_idx == 6)
      {
        if (PointISValid(person_msg.left_elbow)){
          marker.points.push_back(AddPoint(person_msg.left_elbow));
          // Draw a line between the left shoulder and the left elbow
          if (PointISValid(person_msg.left_shoulder)){
            skeleton.points.push_back(AddPoint(person_msg.left_shoulder));
            skeleton.points.push_back(AddPoint(person_msg.left_elbow));
          }
        }
      }
      else if (bodypart_idx == 7)
      {
        if (PointISValid(person_msg.left_wrist)){
          marker.points.push_back(AddPoint(person_msg.left_wrist));
          // Draw a line between the left elbow and the left wrist
          if (PointISValid(person_msg.left_elbow)){
            skeleton.points.push_back(AddPoint(person_msg.left_elbow));
            skeleton.points.push_back(AddPoint(person_msg.left_wrist));
          }
        }
      }
      else if (bodypart_idx == 8)
      {
        if (PointISValid(person_msg.right_hip)){
          marker.points.push_back(AddPoint(person_msg.right_hip));
          // Draw a line between the right hip and the neck
          if (PointISValid(person_msg.neck)){
            skeleton.points.push_back(AddPoint(person_msg.neck));
            skeleton.points.push_back(AddPoint(person_msg.right_hip));
          }
        }
      }
      else if (bodypart_idx == 9)
      {
        if (PointISValid(person_msg.right_knee)){
          marker.points.push_back(AddPoint(person_msg.right_knee));
          // Draw a line between the right hip and the right knee
          if (PointISValid(person_msg.right_hip)){
            skeleton.points.push_back(AddPoint(person_msg.right_hip));
            skeleton.points.push_back(AddPoint(person_msg.right_knee));
          }
        }
      }
      else if (bodypart_idx == 10)
      {
        if (PointISValid(person_msg.right_ankle)){
          marker.points.push_back(AddPoint(person_msg.right_ankle));
          // Draw a line between the right knee and the right ankle. The knee
          // must be valid, since it is the other endpoint of this segment.
          if (PointISValid(person_msg.right_knee)){
            skeleton.points.push_back(AddPoint(person_msg.right_knee));
            skeleton.points.push_back(AddPoint(person_msg.right_ankle));
          }
        }
      }
      else if (bodypart_idx == 11)
      {
        if (PointISValid(person_msg.left_hip)){
          marker.points.push_back(AddPoint(person_msg.left_hip));
          // Draw a line between the left hip and the neck
          if (PointISValid(person_msg.neck)){
            skeleton.points.push_back(AddPoint(person_msg.neck));
            skeleton.points.push_back(AddPoint(person_msg.left_hip));
          }
        }
      }
      else if (bodypart_idx == 12)
      {
        if (PointISValid(person_msg.left_knee)){
          marker.points.push_back(AddPoint(person_msg.left_knee));
          // Draw a line between the left hip and the left knee
          if (PointISValid(person_msg.left_hip)){
            skeleton.points.push_back(AddPoint(person_msg.left_hip));
            skeleton.points.push_back(AddPoint(person_msg.left_knee));
          }
        }
      }
      else if (bodypart_idx == 13)
      {
        if (PointISValid(person_msg.left_ankle)){
          marker.points.push_back(AddPoint(person_msg.left_ankle));
          // Draw a line between the left knee and the left ankle
          if (PointISValid(person_msg.left_knee)){
            skeleton.points.push_back(AddPoint(person_msg.left_knee));
            skeleton.points.push_back(AddPoint(person_msg.left_ankle));
          }
        }
      }
      else if (bodypart_idx == 14)
      {
        if (PointISValid(person_msg.right_eye)){
          marker.points.push_back(AddPoint(person_msg.right_eye));
          // Draw a line between the right eye and the nose
          if (PointISValid(person_msg.nose)){
            skeleton.points.push_back(AddPoint(person_msg.nose));
            skeleton.points.push_back(AddPoint(person_msg.right_eye));
          }
        }
      }
      else if (bodypart_idx == 15)
      {
        if (PointISValid(person_msg.left_eye)){
          marker.points.push_back(AddPoint(person_msg.left_eye));
          // Draw a line between the left eye and the nose
          if (PointISValid(person_msg.nose)){
            skeleton.points.push_back(AddPoint(person_msg.nose));
            skeleton.points.push_back(AddPoint(person_msg.left_eye));
          }
        }
      }
      else if (bodypart_idx == 16)
      {
        if (PointISValid(person_msg.right_ear)){
          marker.points.push_back(AddPoint(person_msg.right_ear));
          // Draw a line between the right eye and the right ear
          if (PointISValid(person_msg.right_eye)){
            skeleton.points.push_back(AddPoint(person_msg.right_eye));
            skeleton.points.push_back(AddPoint(person_msg.right_ear));
          }
        }
      }
      else if (bodypart_idx == 17)
      {
        if (PointISValid(person_msg.left_ear)){
          marker.points.push_back(AddPoint(person_msg.left_ear));
          // Draw a line between the left eye and the left ear
          if (PointISValid(person_msg.left_eye)){
            skeleton.points.push_back(AddPoint(person_msg.left_eye));
            skeleton.points.push_back(AddPoint(person_msg.left_ear));
          }
        }
      }
      /* else if (bodypart_idx == 18)
      {
        if (PointISValid(person_msg.chest))
        {marker.points.push_back(AddPoint(person_msg.chest));}
      }
      */
    }

    // Publish the markers
    marker_pub.publish(marker);
    skeleton_pub.publish(skeleton);
  }

private:
  ros::NodeHandle n_;
  ros::Publisher marker_pub;
  ros::Publisher skeleton_pub;
  ros::Subscriber sub_;

}; // End of class SubscribeAndPublish

int main(int argc, char **argv)
{
  // Initialize ROS
  ros::init(argc, argv, "skeleton_extract_3d_visualization_node");

  // Create a SubscribeAndPublish object; its constructor wires up the
  // publishers and the subscriber
  SubscribeAndPublish SAPObject;

  ros::spin();

  return 0;
}

--------------------------------------------------------------------------------
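For reference, a downstream node can consume the 3d keypoints the same way the visualization node does: subscribe to the topic and treat NaN coordinates as "joint not detected". A minimal sketch follows; the node and callback names are illustrative, not part of the package:

````
#include <cmath>

#include <ros/ros.h>
#include <openpose_ros_msgs/BodypartDetection_3d.h>
#include <openpose_ros_msgs/PersonDetection_3d.h>

// Hypothetical consumer: print each person's nose position, skipping joints
// the extractor marked as invalid (NaN coordinates).
void personCallback(const openpose_ros_msgs::PersonDetection_3d& person)
{
  const openpose_ros_msgs::BodypartDetection_3d& nose = person.nose;
  if (std::isnan(nose.x) || std::isnan(nose.y) || std::isnan(nose.z))
  {
    ROS_INFO_STREAM("Person " << static_cast<int>(person.person_ID) << ": nose not visible");
    return;
  }
  ROS_INFO_STREAM("Person " << static_cast<int>(person.person_ID)
                  << ": nose at (" << nose.x << ", " << nose.y << ", " << nose.z << ")");
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "keypoints_3d_listener");  // node name is illustrative
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/openpose_ros/skeleton_3d/detected_poses_keypoints_3d", 1, personCallback);
  ros::spin();
  return 0;
}
````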