# VINS-Fusion with Cerebro
This is the cerebro module for VINS-Fusion. The aim of this project
is better loop detection and recovery from kidnap. The cerebro node connects to
the vins_estimator nodes of VINS-Fusion
(through the ROS interface). The DataManager class handles
all the incoming data, the Visualization class handles all the visualization,
and the Cerebro class handles the loop-closure intelligence part. It publishes a LoopMsg,
which contains the timestamps of the identified loop candidate along with the
computed relative pose between the pair; a stereo pair is needed for
reliable pose computation.
This is a multi-threaded, object-oriented implementation, and I observe a CPU load factor
of about 2.0. A separate node handles the pose-graph solver (it is in [github-repo](https://github.com/mpkuse/solve_keyframe_pose_graph)).
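The threads communicate in a producer-consumer style: ros-callbacks push incoming data, and worker threads consume it (see the thread lists further below). As a flavor of the pattern, here is a minimal, self-contained sketch; the `Frame` type and the thread bodies are illustrative stand-ins, not the actual cerebro classes:

```
// Illustrative producer-consumer sketch (stand-in names, not the actual cerebro classes):
// one thread pushes incoming frames, a worker consumes them, e.g. to compute descriptors.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

struct Frame { int seq; };  // stand-in for an incoming image

int main() {
    std::queue<Frame> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    std::thread producer([&] {  // plays the role of a ros-callback
        for (int i = 0; i < 5; ++i) {
            { std::lock_guard<std::mutex> lk(m); q.push({i}); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
    });

    std::thread consumer([&] {  // plays the role of, say, a descriptor thread
        while (true) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !q.empty() || done; });
            if (q.empty()) break;  // producer finished and queue drained
            Frame f = q.front(); q.pop();
            lk.unlock();           // release the lock before the heavy work
            std::cout << "processing frame " << f.seq << "\n";
        }
    });

    producer.join();
    consumer.join();
    return 0;
}
```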
**A MORE UP-TO-DATE README IS AVAILABLE IN THE ORIGINAL REPO ([mpkuse/cerebro](https://github.com/mpkuse/cerebro))**

**Manuscript**:
Preprint: [https://arxiv.org/abs/1904.06962](https://arxiv.org/abs/1904.06962)


## Highlight Video
[![IMAGE ALT TEXT](http://img.youtube.com/vi/lDzDHZkInos/0.jpg)](http://www.youtube.com/watch?v=lDzDHZkInos "Video Title")

Alternate link to highlight video:
[Dailymotion](https://www.dailymotion.com/video/x78chs4)


## AR Demo Under Kidnap
We show our system's performance for AR tasks under kidnap. We are able to track and relocalize despite long kidnaps.
[![IMAGE ALT TEXT](http://img.youtube.com/vi/HL7Nk-fBNqM/0.jpg)](http://www.youtube.com/watch?v=HL7Nk-fBNqM "Video Title")


## Save Map to Disk and Relocalize from Loaded Map (Teach-Repeat)
Our system is able to save the constructed map to disk. It is also able to load a previously stored map and relocalize live, in realtime, from that map. We demo the effect of relocalization from a map with [our group's SurfelMapping code](https://github.com/HKUST-Aerial-Robotics/DenseSurfelMapping).

[![IMAGE ALT TEXT](http://img.youtube.com/vi/OViEEB3rINo/0.jpg)](http://www.youtube.com/watch?v=OViEEB3rINo "Video Title")


-------


## MyntEye Demo (Using VINS-Fusion as Odometry Estimator)
[![IMAGE ALT TEXT](http://img.youtube.com/vi/3YQF4_v7AEg/0.jpg)](http://www.youtube.com/watch?v=3YQF4_v7AEg "Video Title")

[![IMAGE ALT TEXT](http://img.youtube.com/vi/sTd_rZdW4DQ/0.jpg)](http://www.youtube.com/watch?v=sTd_rZdW4DQ "Video Title")


## MyntEye Demo (Using VINS-Mono as Odometry Estimator)
[![IMAGE ALT TEXT](http://img.youtube.com/vi/KDRo9LpL6Hs/0.jpg)](http://www.youtube.com/watch?v=KDRo9LpL6Hs "Video Title")

[![IMAGE ALT TEXT](http://img.youtube.com/vi/XvoCrLFq99I/0.jpg)](http://www.youtube.com/watch?v=XvoCrLFq99I "Video Title")


## EuRoC MAV Dataset live merge MH-01, ... MH-05
[![IMAGE ALT TEXT](http://img.youtube.com/vi/mnnoAlAIsN8/0.jpg)](http://www.youtube.com/watch?v=mnnoAlAIsN8 "Video Title")


## EuRoC MAV Dataset live merge V1_01, V1_02, V1_03, V2_01, V2_02
[![IMAGE ALT TEXT](http://img.youtube.com/vi/rIaANkd74cQ/0.jpg)](http://www.youtube.com/watch?v=rIaANkd74cQ "Video Title")


For more demonstrations, have a look at my [youtube playlist](https://www.youtube.com/playlist?list=PLWyydx20vdPzs5VVhZu0TGsReT7U17Fxp).


## Visual-Inertial Datasets
- [ETHZ EuRoC Dataset](https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets)
- [TUM Visual-Inertial Dataset](https://vision.in.tum.de/data/datasets/visual-inertial-dataset)
- UPenn - [PennCOSYVIO](https://github.com/daniilidis-group/penncosyvio)
- [ADVIO](https://github.com/AaltoVision/ADVIO), with ARKit and Tango logging.
- Our MyntEye (Stereo+IMU) recordings: [One-drive-link](https://hkustconnect-my.sharepoint.com/:f:/g/personal/mpkuse_connect_ust_hk/EkTisuLkXLFBs_WHYkxoH2oBeVIkdLc3-5a_t1J9c_4wkg?e=h9cifx)

## How to run - Docker
I highly recommend using the already-deployed packages with docker. Run roscore
on your host PC; all the packages run inside
the docker container, while rviz runs on the host PC.

I assume you have a PC with a graphics card and CUDA 9 working smoothly
and nvidia-docker installed.
```
$(host) export ROS_HOSTNAME=`hostname`
$(host) roscore
# assume that host has the ip address 172.17.0.1 in docker-network aka docker0
$(host) docker run --runtime=nvidia -it \
        --add-host `hostname`:172.17.0.1 \
        --env ROS_MASTER_URI=http://`hostname`:11311/ \
        --env CUDA_VISIBLE_DEVICES=0 \
        --hostname happy_go \
        --name happy_go \
        mpkuse/kusevisionkit:vins-kidnap bash
$(host) rviz # inside rviz open config cerebro/config/good-viz.rviz. If you open rviz in a new tab you might need to set ROS_HOSTNAME again.
$(docker) roslaunch cerebro mynteye_vinsfusion.launch
    OR
$(docker) roslaunch cerebro euroc_vinsfusion.launch
$(host) rosbag play 1.bag
```

Edit the launch file as needed.


If you are unfamiliar with docker, you may want to read [my blog post](https://kusemanohar.wordpress.com/2018/10/03/docker-for-computer-vision-researchers/)
on using docker for computer vision researchers.
You might also want to have a look at my test ros-package, [docker_ros_test](https://github.com/mpkuse/docker_ros_test), to ensure things work with docker.


## How to compile (from scratch)
You will need a) VINS-Fusion (with modifications for reset by mpkuse), b) cerebro,
and c) solve_keyframe_pose_graph. Be sure to set up a `catkin_ws` and make sure
your ROS works correctly.

### Dependencies
- ROS Kinetic
- Eigen3
- Ceres
- OpenCV3 (should also work with 2.4 (not tested), 3.3 and 3.4)
- [Theia-sfm](http://theia-sfm.org/)
- [OpenImageIO](https://github.com/OpenImageIO/oiio), tested with version Release-1.7.6RC1
- [RocksDB](https://github.com/facebook/rocksdb), tested with version v5.9.2
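Before compiling, it can help to sanity-check that the core dependencies are discoverable by your toolchain. Here is a minimal check program, assuming the OpenCV 3.x header layout (for OpenCV 2.4 the include would be `opencv2/core/core.hpp` instead); this is just an illustration, not part of the package:

```
// Dependency sanity check: prints the versions the installed headers resolve to.
#include <iostream>
#include <Eigen/Core>
#include <opencv2/core.hpp>   // OpenCV 3.x layout; use opencv2/core/core.hpp on 2.4
#include <ceres/version.h>

int main() {
    std::cout << "Eigen  " << EIGEN_WORLD_VERSION << "."
              << EIGEN_MAJOR_VERSION << "." << EIGEN_MINOR_VERSION << "\n";
    std::cout << "OpenCV " << CV_VERSION << "\n";
    std::cout << "Ceres  " << CERES_VERSION_STRING << "\n";
    return 0;
}
```

If this compiles, links, and runs, Eigen, OpenCV, and Ceres are at least visible to your compiler.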
### Get VINS-Fusion Working [GIT](https://github.com/HKUST-Aerial-Robotics/VINS-Fusion.git)
I recommend you use my fork of VINS-Fusion, in which I have fixed some bugs
and added a mechanism for resetting the VINS.
```
cd catkin_ws/src
#git clone https://github.com/HKUST-Aerial-Robotics/VINS-Fusion.git
git clone https://github.com/mpkuse/VINS-Fusion
cd ../
catkin_make
source catkin_ws/devel/setup.bash
```

Make sure your vins-fusion can compile and run correctly. See the vins-fusion github repo
for the latest information on prerequisites and compilation instructions.
For compatibility I recommend using my fork of vins-mono/vins-fusion, to which I have made
some minor modifications for handling kidnap cases.

### Cerebro [GIT](https://github.com/mpkuse/cerebro)
```
cd catkin_ws/src/
git clone https://github.com/mpkuse/cerebro
cd ../
catkin_make
```

This has 2 executables: **a)** a ros server that takes an image as input and
returns an image descriptor, and **b)** cerebro_node, which
finds the loop candidates and computes the relative poses. I have also
included my trained model (about 4 MB) in this package (located at scripts/keras.model). The pose computation
uses the stereo pair in this node. This node publishes the loop candidate's relative pose,
which is expected to be consumed by the pose-graph solver.

If you wish to train your own model, you may use [my learning code here](https://github.com/mpkuse/cartwheel_train).

**Threads:**
- *Main Thread* : ros-callbacks.
- *data_association_th* : Syncs the incoming image data with the incoming data from vins_estimator.
- *desc_th* : Consumes the images to produce whole-image-descriptors.
- *dot_product_th* : Dot product of the current image descriptor with all the previous ones.
- *loopcandidate_consumer_th* : Computes the relative pose at the loop candidates. Publishes the LoopEdge.
- *kidnap_th* : Identifies kidnap. On kidnap, publishes the reset signals for vins_estimator.
- *viz_th* : Publishes the image-pair, and more things for debugging and analysis.


### Pose Graph Solver [GIT](https://github.com/mpkuse/solve_keyframe_pose_graph)
Use my pose-graph solver, [github-repo](https://github.com/mpkuse/solve_keyframe_pose_graph).
The differences between this implementation and the
original from VINS-Fusion are that this one can handle kidnap cases,
handles multiple world co-ordinate frames, and
uses a switch-constraint formulation of the pose-graph problem.
It uses a disjoint-set forest to maintain the set association of world co-ordinate systems (see the sketch after the thread list below).
```
cd catkin_ws/src/
git clone https://github.com/mpkuse/solve_keyframe_pose_graph
cd ../
catkin_make
```

**Threads:**
- *Main thread* : ros-callbacks for odometry poses (from vins_estimator) and LoopMsg (from cerebro).
- *SLAM* : Monitors the node-poses and loop-edges; on new loop-edges, constructs and solves the pose-graph optimization problem.
- *th4* : Publishes the latest odometry.
- *th5* : Displays an image to visualize the disjoint-set datastructure.
- *th6* : Publishes corrected poses, with a different color for nodes in different co-ordinate systems.
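The disjoint-set bookkeeping is simple to state: every kidnap starts a new world co-ordinate system, and every inter-world loop closure merges two worlds into one set, so any two worlds in the same set are known to be the same physical space. Here is a minimal standalone union-find sketch over integer world IDs (an illustration of the idea, not the solver's actual class):

```
// Minimal disjoint-set forest (union-find) over integer world IDs.
#include <iostream>
#include <numeric>
#include <vector>

class DisjointSetForest {
public:
    explicit DisjointSetForest(int n) : parent_(n) {
        // initially every world is its own set
        std::iota(parent_.begin(), parent_.end(), 0);
    }
    int find(int x) {  // representative of x's set, with path compression
        return parent_[x] == x ? x : parent_[x] = find(parent_[x]);
    }
    void unite(int a, int b) {  // a loop closure across worlds merges their sets
        parent_[find(a)] = find(b);
    }
private:
    std::vector<int> parent_;
};

int main() {
    DisjointSetForest worlds(4);  // suppose kidnaps produced worlds 0..3
    worlds.unite(2, 0);           // a loop closure relates world-2 to world-0
    std::cout << (worlds.find(2) == worlds.find(0)) << "\n";  // 1: same set
    std::cout << (worlds.find(3) == worlds.find(0)) << "\n";  // 0: world-3 still separate
    return 0;
}
```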
### AR Demo [GIT](https://github.com/mpkuse/ar_demo)
To make an AR demo similar to my video above, you could use this package. It takes the corrected pose from the pose-graph solver (along with the worldID) and renders a polygonal mesh on the camera-image. It also has support for ground-plane estimation; check out the readme of that package.
```
cd catkin_ws/src
git clone https://github.com/mpkuse/ar_demo
cd ../
catkin_make
```

### vins_mono_debug_pkg (optional, needed only if you wish to debug vins-mono/vins-fusion)
With the cerebro node it is possible to run vins live and make it log all the
details to file for further analysis/debugging. This might be useful
for researchers and Ph.D. students helping to improve VINS-Fusion further.
See [github/mpkuse/vins_mono](https://github.com/mpkuse/vins_mono_debug_pkg).
It basically contains unit tests and some standalone tools which might come in handy.
If you are looking to help improve VINS-Fusion or cerebro, also look at 'Development Guidelines'.

## How to run the Full System
```
roslaunch cerebro mynteye_vinsfusion.launch
```
You can get some of my bag files collected with the mynteye camera HERE. More
example launch files are in the folder `launch`; all the config files, which contain
the calibration info, are in the folder `config`.


## Development Guidelines
If you are developing, I still recommend using docker. With the -v flags in docker you can mount your
PC's folders in the docker container. I recommend keeping all the packages in the folder `docker_ws_slam/catkin_ws/src`
on your host PC, and all the rosbags in the folder `/media/mpkuse/Bulk_Data`, and then mounting these
two folders in the docker container. Edit the following command as needed.


```
docker run --runtime=nvidia -it -v /media/mpkuse/Bulk_Data/:/Bulk_Data -v /home/mpkuse/docker_ws_slam:/app --add-host `hostname`:172.17.0.1 --env ROS_MASTER_URI=http://`hostname`:11311/ --env CUDA_VISIBLE_DEVICES=0 --hostname happy_go --name happy_go mpkuse/kusevisionkit:ros-kinetic-vins bash
```

Each of my classes can export the data they hold as json objects and image files. Look at the
end of `main()` in `cerebro_node.cpp` and modify as needed to extract more debug data. Similarly,
the pose-graph solver can also be debugged.
For streamlined printing of messages,
I have preprocessor macros at the start of the function implementations (of the classes DataManager and Cerebro);
read the comments there and edit as per need. Try to implement your algorithms in an
object-oriented way, using the producer-consumer paradigm (a minimal sketch of the pattern appears near the top of this README). Look at my thread-mains for examples.

Finally, sensible PRs with bug fixes and enhancements are welcome!
![](doc/rosgraph.png)

## Authors
Manohar Kuse