├── .assets ├── EuRoc数据集格式.png ├── NH_数据集图片.png ├── VIODE方法架构图-16501021690221.png ├── VIODE方法架构图.png ├── image-20220416192259756.png ├── image-20220416192832064.png ├── image-20220416193236557.png ├── image-20220416193336130.png ├── image-20220416204814112.png └── myVIODE数据内容展示.jpg ├── README.md ├── calibration_files ├── README.md └── airsim_config.yaml ├── json ├── CV.json ├── imu_pose.json └── px4.json ├── scripts ├── euroc2rosbag.py ├── keyboardControl.py └── printImageInfo.py ├── step_one.cpp └── step_two.cpp /.assets/EuRoc数据集格式.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/.assets/EuRoc数据集格式.png -------------------------------------------------------------------------------- /.assets/NH_数据集图片.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/.assets/NH_数据集图片.png -------------------------------------------------------------------------------- /.assets/VIODE方法架构图-16501021690221.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/.assets/VIODE方法架构图-16501021690221.png -------------------------------------------------------------------------------- /.assets/VIODE方法架构图.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/.assets/VIODE方法架构图.png -------------------------------------------------------------------------------- /.assets/image-20220416192259756.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/.assets/image-20220416192259756.png -------------------------------------------------------------------------------- /.assets/image-20220416192832064.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/.assets/image-20220416192832064.png -------------------------------------------------------------------------------- /.assets/image-20220416193236557.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/.assets/image-20220416193236557.png -------------------------------------------------------------------------------- /.assets/image-20220416193336130.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/.assets/image-20220416193336130.png -------------------------------------------------------------------------------- /.assets/image-20220416204814112.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/.assets/image-20220416204814112.png -------------------------------------------------------------------------------- /.assets/myVIODE数据内容展示.jpg: 
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/.assets/myVIODE数据内容展示.jpg
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# AirSim-Dataset-Gain

#### 1 Introduction

This project reproduces the data-collection approach of the [VIODE](https://github.com/kminoda/VIODE) dataset and can record high-frequency, high-quality IMU and image data.

**What this project provides**

* It currently supports recording `IMU`, `monocular Image`, and `GroundTruth` data in the `EuRoc` ASL folder format, and ships a Python [script](./scripts/euroc2rosbag.py) to convert a recording into a `rosbag` (a minimal loading sketch for the ASL layout is given at the end of this section)

* It provides the required source code and json configuration files (for the hardware-in-the-loop setup)

* It provides the [reference configuration file](./calibration_files/airsim_config.yaml) needed to run `vins-mono`, plus a [companion article](https://zhuanlan.zhihu.com/p/482098440) explaining how to compute your own intrinsics and extrinsics

* It also documents how to evaluate results on the dataset with the evo toolkit: [usage guide]() **(link missing, to be added later)**

* It shares some of the datasets I recorded myself:


Download link for this environment: [AirSimNH](https://github.com/microsoft/AirSim/releases/download/v1.6.0-windows/AirSimNH.zip)

![NH_数据集图片](.assets/NH_数据集图片.png)


Download link for this environment: [001](https://github.com/microsoft/AirSim/releases/download/v1.6.0-windows/CityEnviron.zip.001), [002](https://github.com/microsoft/AirSim/releases/download/v1.6.0-windows/CityEnviron.zip.002)

![myVIODE数据内容展示](.assets/myVIODE数据内容展示.jpg)
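The recordings above follow the EuRoc/ASL layout (`mav0/cam0/data/<timestamp>.png`, `mav0/imu0/data.csv`, `mav0/state_groundtruth_estimate0/data.csv`). The snippet below is only a minimal sanity-check sketch, not part of the collection pipeline; the dataset path is a placeholder, and the assumed IMU column order (timestamp [ns], gyro xyz, accel xyz) is the one `scripts/euroc2rosbag.py` reads.

```python
import csv
import os

dataset = r"D:\AirSim\dataset\2022-04-16_19-22-59"  # placeholder: replace with your recording folder

# Count the recorded images.
cam_dir = os.path.join(dataset, "mav0", "cam0", "data")
n_images = len([f for f in os.listdir(cam_dir) if f.endswith(".png")])

# Read the IMU csv: timestamp [ns], wx, wy, wz, ax, ay, az (the order euroc2rosbag.py assumes).
with open(os.path.join(dataset, "mav0", "imu0", "data.csv")) as f:
    reader = csv.reader(f)
    next(reader)              # skip the header line
    rows = list(reader)

t0, t1 = int(rows[0][0]), int(rows[-1][0])
rate = (len(rows) - 1) / ((t1 - t0) * 1e-9)
print("images: %d  imu samples: %d  imu rate: %.1f Hz" % (n_images, len(rows), rate))
```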
#### 2 System architecture

![VIODE方法架构图](.assets/VIODE方法架构图.png)

> **You do not have to use hardware-in-the-loop; any method that lets you control the motion of the drone in AirSim will work**

#### 3 Environment

This project is built on AirSim, UE4, etc. Specific versions:

* AirSim: 1.6.0
* UE4: 4.25 (the exact version does not matter much)
* OS: Windows 10
* IDE: Visual Studio 2019 ([able to build AirSim normally](https://microsoft.github.io/AirSim/build_windows/))


#### 4 Installation

##### Prerequisites

1. Finish installing AirSim on Windows, see the [reference](https://microsoft.github.io/AirSim/build_windows/).

2. Download this project:

   ```bash
   git clone https://github.com/jike5/AirSim-VIO-Dataset-Gain.git
   ```

#### 5 Usage

1. Add the files of this project to the `AirSim` solution

   To avoid configuring a new `Visual Studio` project, the following method is strongly recommended:

   * Open the `AirSim` project folder and go into the `HelloDrone` folder

   * Copy the `*.cpp` files of this project into the `HelloDrone` directory

   ![image-20220416192259756](.assets/image-20220416192259756.png)

2. Open the AirSim solution with Visual Studio 2019

   Double-click `AirSim/AirSim.sln` to open VS.

   Set the startup project:

   ![image-20220416192832064](.assets/image-20220416192832064.png)

   Include `step_one.cpp` and `step_two.cpp`:

   ![image-20220416193236557](.assets/image-20220416193236557.png)

   Right-click `main.cpp` --> `Properties` --> Excluded From Build: Yes

   ![image-20220416193336130](.assets/image-20220416193336130.png)

   In the same way set `step_one.cpp` to `Excluded From Build: No` and `step_two.cpp` to `Excluded From Build: Yes`.

   Now `F5` starts the `main` function of `step_one.cpp` in the `HelloDrone` project. When you later switch to the second step, simply exclude `step_one.cpp` from the build and include `step_two.cpp`. Before launching the program you still need to configure hardware-in-the-loop control.

3. Set up `settings.json`

   **The `json/` directory of this project contains all the settings.json files I used**

   The `json/` directory provides the hardware-in-the-loop json file: `px4.json`

   **If you do not want to control the drone via hardware-in-the-loop, you can also use the [API](https://microsoft.github.io/AirSim/apis/#apis-for-multirotor) or just a [radio transmitter](https://microsoft.github.io/AirSim/remote_control/#rc-setup-for-default-config). In that case, follow the linked documentation to complete the setup and skip this item (a minimal API-control sketch is also given right after this list).**

   **Why this project uses hardware-in-the-loop:**

   * No suitable transmitter at hand; the one officially supported by AirSim is the [FrSky Taranis X9D Plus](https://hobbyking.com/en_us/frsky-2-4ghz-accst-taranis-x9d-plus-and-x8r-combo-digital-telemetry-radio-system-mode-2.html), while I only have a *FlySky i6* and a *Radiolink AT9S Pro*, and in my tests neither performs as well as the FrSky Taranis X9D Plus.
   * The other option, writing control code with the API, is relatively involved; for simple trajectories such as circles or squares the effort is acceptable. On the other hand, since the data collected here serves SLAM and visual initialization needs a good excitation motion, I personally find flying with an RC transmitter more convenient than API control.

   **[Hardware-in-the-loop setup steps](https://microsoft.github.io/AirSim/px4_setup/):**

   * Connect the PX4 USB port to the PC
   * Open QGC; if the connection shows up normally, the link is working
   * Make sure the transmitter and receiver are bound; you can verify this on the QGC radio-calibration page
   * Copy the contents of `json/px4.json` into your AirSim configuration file, usually the file named `settings.json` under `C:\Users\yourname\Documents\AirSim`
   * Open a compiled AirSim environment (e.g. Blocks); if you can arm, take off and fly with the transmitter, the configuration works

   (The description above is brief; if you have never used a flight controller, refer to the [AirSim docs](https://microsoft.github.io/AirSim/px4_setup/) and the [PX4 docs](https://docs.px4.io/master/en/), or consider the **API** or **direct RC** options mentioned earlier)

4. Start the IMU collection program

   Back in Visual Studio 2019, press `F5` to start the program. By default the data is saved under `D:\\AirSim\\dataset`, in a folder named after the current date and time, with the same layout as the [EuRoc](https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets#the_euroc_mav_dataset) dataset

   ![EuRoc数据集格式](.assets/EuRoc数据集格式.png)

5. Collect the camera data

   (1) Before collecting camera data, go to the default save directory (`D:\\AirSim\\dataset`) and copy the name of the folder that already holds the `IMU` and `GT` data:

   ![image-20220416204814112](.assets/image-20220416204814112.png)

   This is what keeps the data in the same directory.

   (2) Set the json: copy the contents of `json/CV.json` into `settings.json` to put AirSim into ComputerVision mode. `CV.json` defaults to a level, forward-looking camera at `752*480` pixels; you can modify the extrinsics and intrinsics yourself, see [reference 1](https://zhuanlan.zhihu.com/p/482098440) and [reference 2](./calibration_files/airsim_config.yaml)

   (3) Set `step_two.cpp` to `Excluded From Build: No` and exclude `step_one.cpp` from the build

   (4) Open an AirSim environment, go back to Visual Studio and press `F5`; you will see the AirSim window capturing image data frame by frame.

6. If you need the `rosbag` format, you need a machine with ROS installed. The conversion script is provided in `scripts/euroc2rosbag.py`, used as follows:

   ```
   cd scripts
   python euroc2rosbag.py --folder MH_01 --output-bag MH_01.bag
   ```
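If you would rather control the drone through the API (the alternative mentioned in step 3 above), the sketch below is a minimal starting point rather than this project's own control code (`scripts/keyboardControl.py` is the closest equivalent in the repository). It assumes the `airsim` Python package and a vehicle that accepts API control; the velocities and durations are placeholders to tune for a good VIO excitation motion.

```python
import time

import airsim

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()

# A simple excitation pattern: short velocity bursts in different directions.
# vz < 0 means "up" because AirSim uses the NED convention.
for vx, vy, vz in [(2, 0, -0.5), (0, 2, 0.5), (-2, 0, -0.5), (0, -2, 0.5)]:
    client.moveByVelocityAsync(vx, vy, vz, 2).join()
    time.sleep(0.2)

client.hoverAsync().join()
client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```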
#### Contributing

[How to submit PRs and issues on gitee: workflow and notes](https://xie.infoq.cn/article/3c5e8757345d4a3208aa48ca6)

1. Fork this repository
2. Create a Feat_xxx branch
3. Commit your code
4. Open a Pull Request

#### TODO

Stereo, depth and semantic data may be added later; I currently have no need for them myself

> **Contact:**
>
> Email: zhenxinzhu163@163.com
>
> Bilibili / Zhihu / WeChat Official Account: 智能之欣
>

--------------------------------------------------------------------------------
/calibration_files/README.md:
--------------------------------------------------------------------------------
## Reference configuration file

`airsim_config.yaml` is the configuration file required to run VINS-Mono.

If you are not using my `CV.json`, i.e. you have modified the AirSim camera parameters, refer to my blog post [AirSim相机、IMU内外参分析(VIO、vSLAM) - 知乎 (zhihu.com)](https://zhuanlan.zhihu.com/p/482098440) and recompute the camera intrinsics and extrinsics; a short computation sketch follows below.
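As a quick aid, for AirSim's pinhole camera the focal length follows from the image width and the `FOV_Degrees` value in `CV.json`. The sketch below reproduces the numbers used in `airsim_config.yaml` (752x480 at 90°) under the assumptions that `FOV_Degrees` is the horizontal field of view and that pixels are square; it is only a helper for recomputing your own values.

```python
import math

def pinhole_intrinsics(width, height, fov_degrees):
    """Pinhole intrinsics assuming a horizontal FOV and square pixels (fx == fy)."""
    fx = width / (2.0 * math.tan(math.radians(fov_degrees) / 2.0))
    fy = fx
    cx = width / 2.0
    cy = height / 2.0
    return fx, fy, cx, cy

# CV.json defaults: 752x480 at 90 degrees -> fx = fy = 376, cx = 376, cy = 240
print(pinhole_intrinsics(752, 480, 90))
```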
--------------------------------------------------------------------------------
/calibration_files/airsim_config.yaml:
--------------------------------------------------------------------------------
%YAML:1.0

#common parameters
imu_topic: "/imu0"
image_topic: "/cam0/image_raw"
output_path: "~/output/"

#camera calibration
model_type: PINHOLE
camera_name: camera
image_width: 752
image_height: 480
distortion_parameters:
   k1: 0.0
   k2: 0.0
   p1: 0.0
   p2: 0.0
projection_parameters:
   fx: 376.0
   fy: 376.0
   cx: 376.0
   cy: 240.0

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                        # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.
                        # 2  Don't know anything about extrinsic parameters. You don't need to give R,T. We will try to calibrate it. Do some rotation movement at beginning.
#If you choose 0 or 1, you should write down the following matrix.
#Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [0.0, 0.0, 1.0,
          1.0, 0.0, 0.0,
          0.0, 1.0, 0.0]
#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [0.0, 0.0, 0.0]

#feature tracker parameters
max_cnt: 120            # max feature number in feature tracking
min_dist: 30            # min distance between two features
freq: 10                # frequency (Hz) at which the tracking result is published. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
equalize: 1             # if image is too dark or too bright, turn on equalize to find enough features
fisheye: 0              # if using a fisheye lens, turn it on; a circle mask will be loaded to remove edge noisy points

#optimization parameters
max_solver_time: 0.04   # max solver iteration time (ms), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters       The more accurate parameters you provide, the better performance
acc_n: 0.2          # accelerometer measurement noise standard deviation.  #0.2   0.04
gyr_n: 0.05         # gyroscope measurement noise standard deviation.      #0.05  0.004
acc_w: 0.02         # accelerometer bias random walk noise standard deviation.  #0.02
gyr_w: 4.0e-5       # gyroscope bias random walk noise standard deviation.      #4.0e-5
g_norm: 9.81007     # gravity magnitude

#loop closure parameters
loop_closure: 1                    # start loop closure
load_previous_pose_graph: 0        # load and reuse previous pose graph; load from 'pose_graph_save_path'
fast_relocalization: 0             # useful in real-time and large project
pose_graph_save_path: "~/output/pose_graph/" # save and load path

#unsynchronization parameters
estimate_td: 0                      # online estimate time offset between camera and imu
td: 0.0                             # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)

#rolling shutter parameters
rolling_shutter: 0                  # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0               # unit: s. rolling shutter read out time per frame (from data sheet).

#visualization parameters
save_image: 1                   # save image in pose graph for visualization purpose; you can close this function by setting 0
visualize_imu_forward: 0        # output imu forward propagation to achieve low latency and high frequency results
visualize_camera_size: 0.4      # size of camera marker in RVIZ
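The `extrinsicRotation` above encodes how the camera axes map into the IMU/body frame. The small numpy check below is my own verification aid, not part of the project: it confirms the matrix is a proper rotation and that it sends the camera's z (forward), x (right) and y (down) axes to the body's x, y and z axes, as expected for a level, forward-looking camera in AirSim's NED body frame.

```python
import numpy as np

# imu^R_cam from calibration_files/airsim_config.yaml
R = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# A proper rotation satisfies R^T R = I and det(R) = +1.
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))

# Where each camera axis ends up in the body frame.
for name, axis in [("cam x (right)", [1, 0, 0]),
                   ("cam y (down)", [0, 1, 0]),
                   ("cam z (forward)", [0, 0, 1])]:
    print(name, "->", R @ np.array(axis))
```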
--------------------------------------------------------------------------------
/json/CV.json:
--------------------------------------------------------------------------------
{
    "SettingsVersion": 1.2,
    "SimMode": "ComputerVision",
    "CameraDefaults": {
        "CaptureSettings": [
            {
                "ImageType": 0,
                "Width": 752,
                "Height": 480,
                "FOV_Degrees": 90
            }
        ]
    }
}
--------------------------------------------------------------------------------
/json/imu_pose.json:
--------------------------------------------------------------------------------
{
    "SeeDocsAt": "https://github.com/Microsoft/AirSim/blob/master/docs/settings.md",
    "SettingsVersion": 1.2,
    "SimMode": "Multirotor",
    "ViewMode": "SpringArmChase",
    "ClockSpeed": 1.0,
    "Vehicles": {
        "drone_1": {
            "VehicleType": "SimpleFlight",
            "DefaultVehicleState": "Armed",
            "EnableCollisionPassthrogh": false,
            "EnableCollisions": true,
            "AllowAPIAlways": true,
            "RC": {
                "RemoteControlID": 0,
                "AllowAPIWhenDisconnected": false
            },
            "Sensors": {
                "Imu": {
                    "SensorType": 2,
                    "Enabled": true
                }
            },
            "X": 0, "Y": 0, "Z": -2,
            "Pitch": 0, "Roll": 0, "Yaw": 0
        }
    }
}
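`imu_pose.json` enables a SimpleFlight vehicle with an IMU sensor; the actual high-rate recording is done by `step_one.cpp` / `step_two.cpp` (included below only as links to the raw files). For orientation, the sketch below shows roughly the same polling idea through the `airsim` Python client; the loop rate and output handling are my own placeholders, not the project's C++ implementation.

```python
import time

import airsim

client = airsim.MultirotorClient()
client.confirmConnection()

# Poll the IMU and the ground-truth kinematics for a few seconds and print them.
# step_one.cpp does the equivalent in C++ at a much higher rate and writes EuRoc-style csv files.
t_end = time.time() + 3.0
while time.time() < t_end:
    imu = client.getImuData()                   # angular_velocity, linear_acceleration, time_stamp
    gt = client.simGetGroundTruthKinematics()   # position, orientation, velocities
    print(imu.time_stamp,
          imu.angular_velocity.x_val, imu.angular_velocity.y_val, imu.angular_velocity.z_val,
          imu.linear_acceleration.x_val, imu.linear_acceleration.y_val, imu.linear_acceleration.z_val,
          gt.position.x_val, gt.position.y_val, gt.position.z_val)
    time.sleep(0.01)
```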
--------------------------------------------------------------------------------
/json/px4.json:
--------------------------------------------------------------------------------
{
    "SettingsVersion": 1.2,
    "SimMode": "Multirotor",
    "ClockType": "SteppableClock",
    "Vehicles": {
        "PX4": {
            "VehicleType": "PX4Multirotor",
            "UseSerial": true,
            "LockStep": true,
            "Sensors": {
                "Barometer": {
                    "SensorType": 1,
                    "Enabled": true,
                    "PressureFactorSigma": 0.0001825
                }
            },
            "Parameters": {
                "NAV_RCL_ACT": 0,
                "NAV_DLL_ACT": 0,
                "COM_OBL_ACT": 1,
                "LPE_LAT": 47.641468,
                "LPE_LON": -122.140165
            }
        }
    }
}
--------------------------------------------------------------------------------
/scripts/euroc2rosbag.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
print("importing libraries")

print("usage: python2 euroc2rosbag.py --folder MH_01 --output-bag MH_01.bag")

import rosbag
import rospy
from sensor_msgs.msg import Image
from sensor_msgs.msg import Imu
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Point, Pose, Quaternion, Twist, Vector3

#import ImageFile
import time, sys, os
import argparse
import cv2
import numpy as np
import csv

print("import libraries success!")

#structure
# dataset/mav0/cam0/data/TIMESTAMP.png
# dataset/mav0/imu0/data.csv
# dataset/mav0/state_groundtruth_estimate0/data.csv

#setup the argument list
parser = argparse.ArgumentParser(description='Create a ROS bag using the images and imu data.')
parser.add_argument('--folder', metavar='folder', nargs='?', help='Data folder')
parser.add_argument('--output-bag', metavar='output_bag', default="output.bag", help='ROS bag file %(default)s')

#print help if no argument is specified
if len(sys.argv) < 2:
    parser.print_help()
    sys.exit(0)

#parse the args
parsed = parser.parse_args()

def getImageFilesFromDir(dir):
    '''Generates a list of image files from the directory, sorted by timestamp'''
    image_files = list()
    timestamps = list()
    if os.path.exists(dir):
        for path, names, files in os.walk(dir):
            for f in files:
                if os.path.splitext(f)[1] in ['.bmp', '.png', '.jpg']:
                    image_files.append(os.path.join(path, f))
                    timestamps.append(os.path.splitext(f)[0])
    #sort by timestamp
    sort_list = sorted(zip(timestamps, image_files))
    image_files = [file[1] for file in sort_list]
    return image_files

def loadImageToRosMsg(filename):
    image_np = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)

    # the filename (without extension) is the timestamp in nanoseconds
    timestamp_nsecs = os.path.splitext(os.path.basename(filename))[0]
    timestamp = rospy.Time(secs=int(timestamp_nsecs[0:-9]), nsecs=int(timestamp_nsecs[-9:]))

    rosimage = Image()
    rosimage.header.stamp = timestamp
    rosimage.height = image_np.shape[0]
    rosimage.width = image_np.shape[1]
    rosimage.step = rosimage.width  #only with mono8! (step = width * byteperpixel * numChannels)
    rosimage.encoding = "mono8"
    rosimage.data = image_np.tostring()

    return rosimage, timestamp

def createImuMessage(timestamp_int, omega, alpha):
    timestamp_nsecs = str(timestamp_int)
    timestamp = rospy.Time(int(timestamp_nsecs[0:-9]), int(timestamp_nsecs[-9:]))

    rosimu = Imu()
    rosimu.header.stamp = timestamp
    rosimu.angular_velocity.x = float(omega[0])
    rosimu.angular_velocity.y = float(omega[1])
    rosimu.angular_velocity.z = float(omega[2])
    rosimu.linear_acceleration.x = float(alpha[0])
    rosimu.linear_acceleration.y = float(alpha[1])
    rosimu.linear_acceleration.z = float(alpha[2])

    return rosimu, timestamp

def createOdometryMsg(timestamp_int, position, orientation, linear, angular):
    timestamp_nsecs = str(timestamp_int)
    timestamp = rospy.Time(int(timestamp_nsecs[0:-9]), int(timestamp_nsecs[-9:]))

    rosodom = Odometry()
    rosodom.header.stamp = timestamp
    rosodom.header.frame_id = "drone"
    # set the position
    rosodom.pose.pose.position.x = float(position[0])
    rosodom.pose.pose.position.y = float(position[1])
    rosodom.pose.pose.position.z = float(position[2])

    rosodom.pose.pose.orientation.w = float(orientation[0])
    rosodom.pose.pose.orientation.x = float(orientation[1])
    rosodom.pose.pose.orientation.y = float(orientation[2])
    rosodom.pose.pose.orientation.z = float(orientation[3])

    # set the velocity
    rosodom.child_frame_id = "drone/odom_local_ned"
    rosodom.twist.twist.linear.x = float(linear[0])
    rosodom.twist.twist.linear.y = float(linear[1])
    rosodom.twist.twist.linear.z = float(linear[2])

    rosodom.twist.twist.angular.x = float(angular[0])
    rosodom.twist.twist.angular.y = float(angular[1])
    rosodom.twist.twist.angular.z = float(angular[2])

    return rosodom, timestamp


#create the bag
bag = rosbag.Bag(parsed.output_bag, 'w')
try:
    #write images
    camfolders = "/mav0/cam0/data"
    camdir = parsed.folder + camfolders
    image_files = getImageFilesFromDir(camdir)
    for image_filename in image_files:
        image_msg, timestamp = loadImageToRosMsg(image_filename)
        bag.write("/cam0/image_raw", image_msg, timestamp)

    #write imu data: timestamp [ns], gyro xyz, accel xyz
    imufile = parsed.folder + "/mav0/imu0/data.csv"
    with open(imufile, 'rb') as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        headers = next(reader, None)
        for row in reader:
            imumsg, timestamp = createImuMessage(row[0], row[1:4], row[4:7])
            bag.write("/imu0", imumsg, timestamp)

    #write odometry (ground truth) data: timestamp [ns], position, quaternion (w x y z), linear velocity, angular velocity
    odometryfile = parsed.folder + "/mav0/state_groundtruth_estimate0/data.csv"
    with open(odometryfile, "rb") as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        headers = next(reader, None)
        for row in reader:
            odometrymsg, timestamp = createOdometryMsg(row[0], row[1:4], row[4:8], row[8:11], row[11:14])
            bag.write("/odometry", odometrymsg, timestamp)

finally:
    bag.close()
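After the conversion, a quick way to confirm the bag looks right (topic names, message counts, approximate rates) is the `rosbag` Python API. The snippet below is a small optional check to run on the same ROS machine; the bag filename is a placeholder.

```python
import rosbag

bag = rosbag.Bag("MH_01.bag")
info = bag.get_type_and_topic_info()
for topic, meta in info.topics.items():
    # message type, number of messages, and approximate frequency per topic
    print(topic, meta.msg_type, meta.message_count, meta.frequency)
bag.close()
```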
--------------------------------------------------------------------------------
/scripts/keyboardControl.py:
--------------------------------------------------------------------------------
import time

import airsim
import keyboard  # using module keyboard (install using pip3 install keyboard)

vx = 0
vy = 0
vz = 0

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)


while True:  # main control loop
    try:
        # x velocity (NED world frame): w forward, s backward
        if keyboard.is_pressed('w'):
            vx = 5
        elif keyboard.is_pressed('s'):
            vx = -5
        else:
            vx = 0
        # y velocity: d right, a left
        if keyboard.is_pressed('d'):
            vy = 5
        elif keyboard.is_pressed('a'):
            vy = -5
        else:
            vy = 0
        # hold space to climb (NED: negative z is up), otherwise descend
        if keyboard.is_pressed('space'):
            vz = -5
        else:
            vz = 5
        print("vx " + str(vx) + " vy " + str(vy) + " vz " + str(vz))
        client.moveByVelocityAsync(vx, vy, vz, 12)
        time.sleep(0.05)  # avoid spinning the loop at full speed
    except:
        break  # exit the loop on any error (including Ctrl+C)
--------------------------------------------------------------------------------
/scripts/printImageInfo.py:
--------------------------------------------------------------------------------
import cv2
import rospy
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

class image_converter:
    def __init__(self):
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber("/airsim_node/drone_1/front_center_custom/Scene", Image, self.callback)

    def callback(self, data):
        try:
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print(e)
            return

        # print image shape
        (rows, cols, channels) = cv_image.shape
        print("rows: %d cols: %d channels: %d" % (rows, cols, channels))

        cv_image_gray = cv2.cvtColor(cv_image, cv2.COLOR_BGR2GRAY)

        cv2.imshow("Color window", cv_image)
        cv2.imshow("gray window", cv_image_gray)
        cv2.waitKey(3)

if __name__ == "__main__":
    try:
        rospy.init_node("cv_print_shape")
        rospy.loginfo("starting cv_print_shape node")
        converter = image_converter()  # keep a reference so the subscriber stays alive
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down cv_print_shape node")
        cv2.destroyAllWindows()
--------------------------------------------------------------------------------
/step_one.cpp:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/step_one.cpp
--------------------------------------------------------------------------------
/step_two.cpp:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/jike5/AirSim-VIO-Dataset-Gain/e412282aa7a5c919908daef1262f74df91fa1ad8/step_two.cpp
--------------------------------------------------------------------------------