├── .gitignore
├── README.md
├── cfg
│   └── sim.yaml
├── ckpts
│   └── unitree_go2
│       ├── flat_model_6800.pt
│       └── rough_model_7850.pt
├── env
│   ├── sim_env.py
│   ├── terrain.py
│   └── terrain_cfg.py
├── go2
│   ├── go2_ctrl.py
│   ├── go2_ctrl_cfg.py
│   ├── go2_env.py
│   └── go2_sensors.py
├── isaac_go2_ros2.py
├── media
│   ├── sim-demo1.gif
│   └── sim-demo2.gif
├── ros2
│   └── go2_ros2_bridge.py
└── rviz
    └── go2.rviz
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
2 | outputs
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Isaac Sim Unitree Go2 ROS2
2 | Tested with [Python 3.10](https://docs.python.org/3/whatsnew/3.10.html), [ROS 2 Humble](https://docs.ros.org/en/humble/index.html), [Isaac Sim 4.2.0](https://docs.omniverse.nvidia.com/isaacsim/latest/overview.html), [Isaac Lab 1.2.0](https://docs.omniverse.nvidia.com/isaacsim/latest/overview.html), and [Ubuntu 22.04](https://releases.ubuntu.com/22.04/).
7 |
8 | Welcome to the Isaac Sim Unitree Go2 repository! This repository provides a Unitree Go2 quadruped robot simulation, leveraging the Isaac Sim/Isaac Lab framework and integrating seamlessly with a ROS 2 interface. It offers a flexible platform for testing navigation, decision-making, and other autonomous tasks in various scenarios.
9 |
10 |
11 | ![sim-demo1](media/sim-demo1.gif) ![sim-demo2](media/sim-demo2.gif)
12 |
19 | ## Installation Guide
20 | **Step I:** Please follow the [Isaac Lab official documentation](https://isaac-sim.github.io/IsaacLab/main/source/setup/installation/binaries_installation.html) to install Isaac Sim 4.2.0 and Isaac Lab 1.2.0.
21 |
22 | **Step II:** Please install [ROS2 Humble](https://docs.ros.org/en/humble/index.html) following the official installation guide.
23 |
24 | **Step III:** Install the prerequisite C++ runtime library (`libstdcxx-ng`) in the conda environment. [reference link](https://stackoverflow.com/questions/58424974/anaconda-importerror-usr-lib64-libstdc-so-6-version-glibcxx-3-4-21-not-fo)
25 | ```
26 | # default conda env for Isaac Lab
27 | conda activate isaaclab
28 | conda install -c conda-forge libstdcxx-ng
29 | ```
30 |
31 | **Step IV:** Clone this repo to your local directory.
32 | ```
33 | git clone https://github.com/Zhefan-Xu/isaac-go2-ros2.git
34 | ```
35 |
36 | ## Run Unitree Go2 Simulation
37 | To run the simulation, please use the following command:
38 | ```
39 | conda activate isaaclab
40 | python isaac_go2_ros2.py
41 | ```
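
By default, ```isaac_go2_ros2.py``` runs the rough-terrain RL policy; the flat-terrain checkpoint can be used instead by swapping one line in ```isaac_go2_ros2.py```:
```
# isaac_go2_ros2.py: select the locomotion policy (rough terrain is the default)
# env, policy = go2_ctrl.get_rsl_flat_policy(go2_env_cfg)  # flat-terrain checkpoint
env, policy = go2_ctrl.get_rsl_rough_policy(go2_env_cfg)   # rough-terrain checkpoint
```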
42 | Once the simulation is loaded, the robot can be teleoperated with the keyboard:
43 |
44 | ```W```: Forward, ```A```: Left, ```S```: Backward, ```D```: Right, ```Z```: Left Turn, ```C```: Right Turn.
45 | When running two robots (see ```num_envs``` below), the second robot is driven analogously with ```I```, ```J```, ```K```, ```L```, ```M```, and ```>```.
46 |
47 | https://github.com/user-attachments/assets/7abb41fd-26f7-4e5d-bc7f-98ee10467a6a
48 |
49 |
50 | ## ROS2 Topics and Visualization
51 | After launching the simulation, the ROS 2 data can be visualized in ```RViz2```:
52 | ```
53 | rviz2 -d /path/to/isaac-go2-ros2/rviz/go2.rviz
54 | ```
55 | 
56 |
57 | Here is a categorized list of ROS 2 topics available for the Unitree Go2:
58 |
59 | **Command and Control**
60 | - `/unitree_go2/cmd_vel`: Topic to send velocity commands to the robot for motion control (see the example after this list).
61 |
62 | **Front Camera**
63 | - `/unitree_go2/front_cam/color_image`: Publishes RGB color images captured by the front camera.
64 | - `/unitree_go2/front_cam/depth_image`: Publishes depth images from the front camera.
65 | - `/unitree_go2/front_cam/semantic_segmentation_image`: Publishes semantic segmentation images from the front camera.
66 | - `/unitree_go2/front_cam/info`: Publishes camera information, including intrinsic parameters.
67 |
68 | **LIDAR**
69 | - `/unitree_go2/lidar/point_cloud`: Publishes a point cloud generated by the robot's LIDAR sensor.
70 |
71 | **Odometry and Localization**
72 | - `/unitree_go2/odom`: Publishes odometry data, including the robot's position, orientation, and velocity.
73 | - `/unitree_go2/pose`: Publishes the current pose of the robot in the world frame.
74 |
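
The same velocity interface is also exposed over ROS 2 for programmatic control. Below is a minimal sketch of an external node that publishes to ```/unitree_go2/cmd_vel``` and echoes ```/unitree_go2/odom```; the node name and command values are illustrative, and it assumes a single robot (```num_envs: 1```) so topics are not index-namespaced:

```
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry

class Go2CmdExample(Node):  # illustrative example node, not part of this repo
    def __init__(self):
        super().__init__("go2_cmd_example")
        self.cmd_pub = self.create_publisher(Twist, "/unitree_go2/cmd_vel", 10)
        self.create_subscription(Odometry, "/unitree_go2/odom", self.odom_cb, 10)
        self.create_timer(0.1, self.send_cmd)  # publish commands at 10 Hz

    def send_cmd(self):
        cmd = Twist()
        cmd.linear.x = 0.5   # forward velocity (m/s)
        cmd.angular.z = 0.2  # yaw rate (rad/s)
        self.cmd_pub.publish(cmd)

    def odom_cb(self, msg):
        p = msg.pose.pose.position
        self.get_logger().info(f"robot position: ({p.x:.2f}, {p.y:.2f}, {p.z:.2f})")

rclpy.init()
rclpy.spin(Go2CmdExample())
```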
75 |
76 | ## Simulation Environments & Settings
77 | The simulation environments and settings can be changed in the ```isaac-go2-ros2/cfg/sim.yaml``` config file.
78 |
79 | #### Launch different simulation environments
80 | The current implementation contains several environments, defined in ```isaac-go2-ros2/env/sim_env.py``` using the standard Isaac Sim method for importing USD environments. To change the environment, set ```env_name``` in the config file ```isaac-go2-ros2/cfg/sim.yaml``` (a sketch for adding your own environment follows the list). Currently available environments:
81 | - ```warehouse```: A simple warehouse environment in Isaac Sim.
82 | - ```warehouse-forklifts```: A warehouse environment with forklifts.
83 | - ```warehouse-shelves```: A warehouse environment with shelves.
84 | - ```full-warehouse```: A full warehouse environment containing everything.
85 | - ```obstacle-sparse```: A sparse obstacle field environment.
86 | - ```obstacle-medium```: A medium obstacle field environment.
87 | - ```obstacle-dense```: A dense obstacle field environment.
88 |
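Each entry above corresponds to a ```create_*_env()``` function in ```env/sim_env.py```, so a custom USD scene can be added by following the same pattern. A minimal sketch, in which ```create_custom_env``` and the USD path are hypothetical placeholders:

```
# env/sim_env.py (sketch): register an additional USD environment.
# Uses the imports already at the top of sim_env.py (define_prim, get_assets_root_path).
def create_custom_env():
    add_semantic_label()
    assets_root_path = get_assets_root_path()
    prim = define_prim("/World/Custom", "Xform")
    # hypothetical asset path; replace with your own USD scene
    prim.GetReferences().AddReference(assets_root_path + "/Isaac/Environments/MyScene/my_scene.usd")
```

A matching branch for the new ```env_name``` value would also need to be added to the environment selection in ```isaac_go2_ros2.py```.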
89 |
90 | #### Launch multiple robots in the environment
91 | This repository supports running multiple Unitree Go2 robots; the number of robots can be changed via the ```num_envs``` parameter in the config file ```isaac-go2-ros2/cfg/sim.yaml```. With multiple robots, each robot's ROS 2 topics are namespaced by index (e.g., ```unitree_go2_0/cmd_vel```, ```unitree_go2_1/odom```). The following video shows an example.
92 |
93 | https://github.com/user-attachments/assets/47ef05c1-5124-4feb-afc8-a3f2c306a212
94 |
95 |
96 |
97 |
98 | ## Example Usage
99 | The video shows an example of using this repo with an [RL agent](https://github.com/Zhefan-Xu/NavRL) to achieve navigation and collision avoidance:
100 |
101 |
102 | https://github.com/user-attachments/assets/ccc986c6-bf94-41fe-a4d5-3417ce8b3384
103 |
104 | ## Acknowledgement
105 | The Go2 controller is based on the RL controller implemented in [go2_omniverse](https://github.com/abizovnuralem/go2_omniverse).
106 |
107 |
108 |
109 |
110 |
111 |
--------------------------------------------------------------------------------
/cfg/sim.yaml:
--------------------------------------------------------------------------------
1 | defaults:
2 | - _self_
3 |
4 | num_envs: 1
5 | freq: 40 # control frequency (Hz); must divide the 200 Hz physics rate evenly
6 | camera_follow: True
7 | env_name: obstacle-dense
8 | # env_name: full-warehouse
9 |
10 | sensor:
11 | enable_lidar: True
12 | enable_camera: True
13 | color_image: True
14 | depth_image: True
15 | semantic_segmentation: True
16 |
17 | sim_app:
18 | anti_aliasing: 0
19 | width: 1280
20 | height: 720
21 | hide_ui: False
--------------------------------------------------------------------------------
/ckpts/unitree_go2/flat_model_6800.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Zhefan-Xu/isaac-go2-ros2/be8e4ccfe78f77f24221ab7d59b79b08d55b8acd/ckpts/unitree_go2/flat_model_6800.pt
--------------------------------------------------------------------------------
/ckpts/unitree_go2/rough_model_7850.pt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Zhefan-Xu/isaac-go2-ros2/be8e4ccfe78f77f24221ab7d59b79b08d55b8acd/ckpts/unitree_go2/rough_model_7850.pt
--------------------------------------------------------------------------------
/env/sim_env.py:
--------------------------------------------------------------------------------
1 | from omni.isaac.core.utils.prims import define_prim, get_prim_at_path
2 | from omni.isaac.nucleus import get_assets_root_path
3 | from omni.isaac.lab.terrains import TerrainImporterCfg, TerrainImporter
4 | from omni.isaac.lab.terrains import TerrainGeneratorCfg
5 | from env.terrain_cfg import HfUniformDiscreteObstaclesTerrainCfg
6 | import omni.replicator.core as rep
7 |
8 | def add_semantic_label():
9 | ground_plane = rep.get.prims("/World/ground")
10 | with ground_plane:
11 | # Add a semantic label
12 | rep.modify.semantics([("class", "floor")])
13 |
14 | def create_obstacle_sparse_env():
15 | add_semantic_label()
16 | # Terrain
17 | terrain = TerrainImporterCfg(
18 | prim_path="/World/obstacleTerrain",
19 | terrain_type="generator",
20 | terrain_generator=TerrainGeneratorCfg(
21 | seed=0,
22 | size=(50, 50),
23 | color_scheme="height",
24 | sub_terrains={"t1": HfUniformDiscreteObstaclesTerrainCfg(
25 | seed=0,
26 | size=(50, 50),
27 | obstacle_width_range=(0.5, 1.0),
28 | obstacle_height_range=(1.0, 2.0),
29 | num_obstacles=100,
30 | obstacles_distance=2.0,
31 | border_width=5,
32 | avoid_positions=[[0, 0]]
33 | )},
34 | ),
35 | visual_material=None,
36 | )
37 | TerrainImporter(terrain)
38 |
39 | def create_obstacle_medium_env():
40 | add_semantic_label()
41 | # Terrain
42 | terrain = TerrainImporterCfg(
43 | prim_path="/World/obstacleTerrain",
44 | terrain_type="generator",
45 | terrain_generator=TerrainGeneratorCfg(
46 | seed=0,
47 | size=(50, 50),
48 | color_scheme="height",
49 | sub_terrains={"t1": HfUniformDiscreteObstaclesTerrainCfg(
50 | seed=0,
51 | size=(50, 50),
52 | obstacle_width_range=(0.5, 1.0),
53 | obstacle_height_range=(1.0, 2.0),
54 | num_obstacles=200,
55 | obstacles_distance=2.0,
56 | border_width=5,
57 | avoid_positions=[[0, 0]]
58 | )},
59 | ),
60 | visual_material=None,
61 | )
62 | TerrainImporter(terrain)
63 |
64 |
65 | def create_obstacle_dense_env():
66 | add_semantic_label()
67 | # Terrain
68 | terrain = TerrainImporterCfg(
69 | prim_path="/World/obstacleTerrain",
70 | terrain_type="generator",
71 | terrain_generator=TerrainGeneratorCfg(
72 | seed=0,
73 | size=(50, 50),
74 | color_scheme="height",
75 | sub_terrains={"t1": HfUniformDiscreteObstaclesTerrainCfg(
76 | seed=0,
77 | size=(50, 50),
78 | obstacle_width_range=(0.5, 1.0),
79 | obstacle_height_range=(1.0, 2.0),
80 | num_obstacles=400,
81 | obstacles_distance=2.0,
82 | border_width=5,
83 | avoid_positions=[[0, 0]]
84 | )},
85 | ),
86 | visual_material=None,
87 | )
88 | TerrainImporter(terrain)
89 |
90 | def create_warehouse_env():
91 | add_semantic_label()
92 | assets_root_path = get_assets_root_path()
93 | prim = get_prim_at_path("/World/Warehouse")
94 | if not prim.IsValid():  # create and reference the asset only once
95 |     prim = define_prim("/World/Warehouse", "Xform")
96 |     prim.GetReferences().AddReference(assets_root_path + "/Isaac/Environments/Simple_Warehouse/warehouse.usd")
97 |
98 | def create_warehouse_forklifts_env():
99 | add_semantic_label()
100 | assets_root_path = get_assets_root_path()
101 | prim = get_prim_at_path("/World/Warehouse")
102 | if not prim.IsValid():  # create and reference the asset only once
103 |     prim = define_prim("/World/Warehouse", "Xform")
104 |     prim.GetReferences().AddReference(assets_root_path + "/Isaac/Environments/Simple_Warehouse/warehouse_with_forklifts.usd")
105 |
106 | def create_warehouse_shelves_env():
107 | add_semantic_label()
108 | assets_root_path = get_assets_root_path()
109 | prim = get_prim_at_path("/World/Warehouse")
110 | if not prim.IsValid():  # create and reference the asset only once
111 |     prim = define_prim("/World/Warehouse", "Xform")
112 |     prim.GetReferences().AddReference(assets_root_path + "/Isaac/Environments/Simple_Warehouse/warehouse_multiple_shelves.usd")
113 |
114 | def create_full_warehouse_env():
115 | add_semantic_label()
116 | assets_root_path = get_assets_root_path()
117 | prim = get_prim_at_path("/World/Warehouse")
118 | if not prim.IsValid():  # create and reference the asset only once
119 |     prim = define_prim("/World/Warehouse", "Xform")
120 |     prim.GetReferences().AddReference(assets_root_path + "/Isaac/Environments/Simple_Warehouse/full_warehouse.usd")
121 |
122 | def create_hospital_env():
123 | add_semantic_label()
124 | assets_root_path = get_assets_root_path()
125 | prim = get_prim_at_path("/World/Hospital")
126 | if not prim.IsValid():  # create and reference the asset only once
127 |     prim = define_prim("/World/Hospital", "Xform")
128 |     prim.GetReferences().AddReference(assets_root_path + "/Isaac/Environments/Hospital/hospital.usd")
129 |
130 | def create_office_env():
131 | add_semantic_label()
132 | assets_root_path = get_assets_root_path()
133 | prim = get_prim_at_path("/World/Office")
134 | if not prim.IsValid():  # create and reference the asset only once
135 |     prim = define_prim("/World/Office", "Xform")
136 |     prim.GetReferences().AddReference(assets_root_path + "/Isaac/Environments/Office/office.usd")
--------------------------------------------------------------------------------
/env/terrain.py:
--------------------------------------------------------------------------------
1 | from omni.isaac.lab.terrains.height_field.utils import height_field_to_mesh
2 | import numpy as np
3 | import time
4 |
5 | @height_field_to_mesh
6 | def uniform_discrete_obstacles_terrain(difficulty: float, cfg) -> np.ndarray:
7 | np.random.seed(cfg.seed)
8 | def is_good_position(obs_list, obs_pos, min_dist):
9 | for i in range(len(obs_list)):
10 | obs_pos_i = obs_list[i]
11 | dist = ((obs_pos[0] - obs_pos_i[0])**2 + (obs_pos[1] - obs_pos_i[1])**2)**0.5
12 | if (dist < min_dist):
13 | return False
14 | return True
15 |
16 | # switch parameters to discrete units
17 | # -- terrain
18 | width_pixels = int(cfg.size[0] / cfg.horizontal_scale)
19 | length_pixels = int(cfg.size[1] / cfg.horizontal_scale)
20 | # -- obstacles
21 | obs_height_min = cfg.obstacle_height_range[0]
22 | obs_height_max = cfg.obstacle_height_range[1]
23 | obs_width_min = cfg.obstacle_width_range[0]
24 | obs_width_max = cfg.obstacle_width_range[1]
25 | # -- center of the terrain
26 | platform_width = int(cfg.platform_width / cfg.horizontal_scale)
27 |
28 | # create discrete ranges for the obstacles
29 | # -- position
30 | obs_x_range = np.arange(0, width_pixels, 4)
31 | obs_y_range = np.arange(0, length_pixels, 4)
32 |
33 | # create a terrain with a flat platform at the center
34 | hf_raw = np.zeros((width_pixels, length_pixels))
35 | obs_dist = cfg.obstacles_distance
36 | stop_sampling = False
37 | # generate the obstacles
38 | obs_list = list(cfg.avoid_positions)  # copy so repeated calls do not mutate the config
39 | for _ in range(cfg.num_obstacles):
40 | # sample size
41 | height = int(np.random.uniform(obs_height_min, obs_height_max) / cfg.vertical_scale)
42 | width = int(np.random.uniform(obs_width_min, obs_width_max) / cfg.horizontal_scale)
43 | length = int(np.random.uniform(obs_width_min, obs_width_max) / cfg.horizontal_scale)
44 |
45 | # sample position
46 | start_time = time.time()
47 | good_position = False
48 | while (not good_position):
49 | x_start = int(np.random.choice(obs_x_range))
50 | y_start = int(np.random.choice(obs_y_range))
51 | x_scale = x_start * cfg.horizontal_scale
52 | y_scale = y_start * cfg.horizontal_scale
53 | good_position = is_good_position(obs_list, [x_scale, y_scale], obs_dist)
54 | sample_time = time.time()
55 | if (sample_time - start_time) > 0.2:
56 | stop_sampling = True
57 | break
58 | if (stop_sampling):
59 | break
60 |
61 | obs_list.append([x_scale, y_scale])
62 | # clip start position to the terrain
63 | if x_start + width > width_pixels:
64 | x_start = width_pixels - width
65 | if y_start + length > length_pixels:
66 | y_start = length_pixels - length
67 | # add to terrain
68 | hf_raw[x_start : x_start + width, y_start : y_start + length] = height
69 |
70 |
71 | # clip the terrain to the platform
72 | x1 = (width_pixels - platform_width) // 2
73 | x2 = (width_pixels + platform_width) // 2
74 | y1 = (length_pixels - platform_width) // 2
75 | y2 = (length_pixels + platform_width) // 2
76 | hf_raw[x1:x2, y1:y2] = 0
77 | # round off the heights to the nearest vertical step
78 | return np.rint(hf_raw).astype(np.int16)
79 |
80 |
--------------------------------------------------------------------------------
/env/terrain_cfg.py:
--------------------------------------------------------------------------------
1 | from omni.isaac.lab.terrains.height_field.hf_terrains_cfg import HfTerrainBaseCfg
2 | import env.terrain as terrain
3 | from omni.isaac.lab.utils import configclass
4 | from dataclasses import MISSING
5 |
6 | @configclass
7 | class HfUniformDiscreteObstaclesTerrainCfg(HfTerrainBaseCfg):
8 | """Configuration for a discrete obstacles height field terrain."""
9 |
10 | function = terrain.uniform_discrete_obstacles_terrain
11 | seed: int = 0
12 | """Seed for the random obstacle placement."""
13 | obstacle_width_range: tuple[float, float] = MISSING
14 | """The minimum and maximum width of the obstacles (in m)."""
15 | obstacle_height_range: tuple[float, float] = MISSING
16 | """The minimum and maximum height of the obstacles (in m)."""
17 | num_obstacles: int = MISSING
18 | """The number of obstacles to generate."""
19 | obstacles_distance: float = MISSING
20 | """The minimum distance between obstacles (in m)."""
21 | platform_width: float = 1.0
22 | """The width of the square platform at the center of the terrain. Defaults to 1.0."""
23 | avoid_positions: list[list[float]] = []
24 | """List of (x, y) positions to keep free of obstacles. Defaults to an empty list."""
--------------------------------------------------------------------------------
/go2/go2_ctrl.py:
--------------------------------------------------------------------------------
1 | import os
2 | import torch
3 | import carb
4 | import gymnasium as gym
5 | from omni.isaac.lab.envs import ManagerBasedEnv
6 | from go2.go2_ctrl_cfg import unitree_go2_flat_cfg, unitree_go2_rough_cfg
7 | from omni.isaac.lab_tasks.utils.wrappers.rsl_rl import RslRlVecEnvWrapper, RslRlOnPolicyRunnerCfg
8 | from omni.isaac.lab_tasks.utils import get_checkpoint_path
9 | from rsl_rl.runners import OnPolicyRunner
10 |
11 | base_vel_cmd_input = None
12 |
13 | # Initialize base_vel_cmd_input as a tensor when created
14 | def init_base_vel_cmd(num_envs):
15 | global base_vel_cmd_input
16 | base_vel_cmd_input = torch.zeros((num_envs, 3), dtype=torch.float32)
17 |
18 | # Modify base_vel_cmd to use the tensor directly
19 | def base_vel_cmd(env: ManagerBasedEnv) -> torch.Tensor:
20 | global base_vel_cmd_input
21 | return base_vel_cmd_input.clone().to(env.device)
22 |
23 | # Update sub_keyboard_event to modify specific rows of the tensor based on key inputs
24 | def sub_keyboard_event(event) -> bool:
25 | global base_vel_cmd_input
26 | lin_vel = 1.5
27 | ang_vel = 1.5
28 |
29 | if base_vel_cmd_input is not None:
30 | if event.type == carb.input.KeyboardEventType.KEY_PRESS:
31 | # Update tensor values for environment 0
32 | if event.input.name == 'W':
33 | base_vel_cmd_input[0] = torch.tensor([lin_vel, 0, 0], dtype=torch.float32)
34 | elif event.input.name == 'S':
35 | base_vel_cmd_input[0] = torch.tensor([-lin_vel, 0, 0], dtype=torch.float32)
36 | elif event.input.name == 'A':
37 | base_vel_cmd_input[0] = torch.tensor([0, lin_vel, 0], dtype=torch.float32)
38 | elif event.input.name == 'D':
39 | base_vel_cmd_input[0] = torch.tensor([0, -lin_vel, 0], dtype=torch.float32)
40 | elif event.input.name == 'Z':
41 | base_vel_cmd_input[0] = torch.tensor([0, 0, ang_vel], dtype=torch.float32)
42 | elif event.input.name == 'C':
43 | base_vel_cmd_input[0] = torch.tensor([0, 0, -ang_vel], dtype=torch.float32)
44 |
45 | # If there are multiple environments, handle inputs for env 1
46 | if base_vel_cmd_input.shape[0] > 1:
47 | if event.input.name == 'I':
48 | base_vel_cmd_input[1] = torch.tensor([lin_vel, 0, 0], dtype=torch.float32)
49 | elif event.input.name == 'K':
50 | base_vel_cmd_input[1] = torch.tensor([-lin_vel, 0, 0], dtype=torch.float32)
51 | elif event.input.name == 'J':
52 | base_vel_cmd_input[1] = torch.tensor([0, lin_vel, 0], dtype=torch.float32)
53 | elif event.input.name == 'L':
54 | base_vel_cmd_input[1] = torch.tensor([0, -lin_vel, 0], dtype=torch.float32)
55 | elif event.input.name == 'M':
56 | base_vel_cmd_input[1] = torch.tensor([0, 0, ang_vel], dtype=torch.float32)
57 | elif event.input.name == '>':
58 | base_vel_cmd_input[1] = torch.tensor([0, 0, -ang_vel], dtype=torch.float32)
59 |
60 | # Reset commands to zero on key release
61 | elif event.type == carb.input.KeyboardEventType.KEY_RELEASE:
62 | base_vel_cmd_input.zero_()
63 | return True
64 |
65 | def get_rsl_flat_policy(cfg):
66 | cfg.observations.policy.height_scan = None
67 | env = gym.make("Isaac-Velocity-Flat-Unitree-Go2-v0", cfg=cfg)
68 | env = RslRlVecEnvWrapper(env)
69 |
70 | # Low level control: rsl control policy
71 | agent_cfg: RslRlOnPolicyRunnerCfg = unitree_go2_flat_cfg
72 | ckpt_path = get_checkpoint_path(log_path=os.path.abspath("ckpts"),
73 | run_dir=agent_cfg["load_run"],
74 | checkpoint=agent_cfg["load_checkpoint"])
75 | ppo_runner = OnPolicyRunner(env, agent_cfg, log_dir=None, device=agent_cfg["device"])
76 | ppo_runner.load(ckpt_path)
77 | policy = ppo_runner.get_inference_policy(device=agent_cfg["device"])
78 | return env, policy
79 |
80 | def get_rsl_rough_policy(cfg):
81 | env = gym.make("Isaac-Velocity-Rough-Unitree-Go2-v0", cfg=cfg)
82 | env = RslRlVecEnvWrapper(env)
83 |
84 | # Low level control: rsl control policy
85 | agent_cfg: RslRlOnPolicyRunnerCfg = unitree_go2_rough_cfg
86 | ckpt_path = get_checkpoint_path(log_path=os.path.abspath("ckpts"),
87 | run_dir=agent_cfg["load_run"],
88 | checkpoint=agent_cfg["load_checkpoint"])
89 | ppo_runner = OnPolicyRunner(env, agent_cfg, log_dir=None, device=agent_cfg["device"])
90 | ppo_runner.load(ckpt_path)
91 | policy = ppo_runner.get_inference_policy(device=agent_cfg["device"])
92 | return env, policy
--------------------------------------------------------------------------------
/go2/go2_ctrl_cfg.py:
--------------------------------------------------------------------------------
1 | unitree_go2_flat_cfg = {
2 | 'seed': 42,
3 | 'device': 'cuda:0',
4 | 'num_steps_per_env': 24,
5 | 'max_iterations': 1500,
6 | 'empirical_normalization': False,
7 | 'policy': {
8 | 'class_name': 'ActorCritic',
9 | 'init_noise_std': 1.0,
10 | 'actor_hidden_dims': [128, 128, 128],
11 | 'critic_hidden_dims': [128, 128, 128],
12 | 'activation': 'elu'
13 | },
14 | 'algorithm': {
15 | 'class_name': 'PPO',
16 | 'value_loss_coef': 1.0,
17 | 'use_clipped_value_loss': True,
18 | 'clip_param': 0.2,
19 | 'entropy_coef': 0.01,
20 | 'num_learning_epochs': 5,
21 | 'num_mini_batches': 4,
22 | 'learning_rate': 0.001,
23 | 'schedule': 'adaptive',
24 | 'gamma': 0.99,
25 | 'lam': 0.95,
26 | 'desired_kl': 0.01,
27 | 'max_grad_norm': 1.0
28 | },
29 | 'save_interval': 50,
30 | 'experiment_name': 'unitree_go2_flat',
31 | 'run_name': '',
32 | 'logger': 'tensorboard',
33 | 'neptune_project': 'isaaclab',
34 | 'wandb_project': 'isaaclab',
35 | 'resume': False,
36 | 'load_run': 'unitree_go2',
37 | 'load_checkpoint': 'flat_model_6800.pt'
38 | }
39 |
40 | unitree_go2_rough_cfg = {
41 | 'seed': 42,
42 | 'device': 'cuda',
43 | 'num_steps_per_env': 24,
44 | 'max_iterations': 15000,
45 | 'empirical_normalization': False,
46 | 'policy': {
47 | 'class_name': 'ActorCritic',
48 | 'init_noise_std': 1.0,
49 | 'actor_hidden_dims': [512, 256, 128],
50 | 'critic_hidden_dims': [512, 256, 128],
51 | 'activation': 'elu'
52 | },
53 | 'algorithm': {
54 | 'class_name': 'PPO',
55 | 'value_loss_coef': 1.0,
56 | 'use_clipped_value_loss': True,
57 | 'clip_param': 0.2,
58 | 'entropy_coef': 0.01,
59 | 'num_learning_epochs': 5,
60 | 'num_mini_batches': 4,
61 | 'learning_rate': 0.001,
62 | 'schedule': 'adaptive',
63 | 'gamma': 0.99,
64 | 'lam': 0.95,
65 | 'desired_kl': 0.01,
66 | 'max_grad_norm': 1.0
67 | },
68 | 'save_interval': 50,
69 | 'experiment_name': 'unitree_go2_rough',
70 | 'run_name': '',
71 | 'logger': 'tensorboard',
72 | 'neptune_project': 'orbit',
73 | 'wandb_project': 'orbit',
74 | 'resume': False,
75 | 'load_run': 'unitree_go2',
76 | 'load_checkpoint': 'rough_model_7850.pt'
77 | }
78 |
--------------------------------------------------------------------------------
/go2/go2_env.py:
--------------------------------------------------------------------------------
1 | from omni.isaac.lab.scene import InteractiveSceneCfg
2 | from omni.isaac.lab_assets.unitree import UNITREE_GO2_CFG
3 | from omni.isaac.lab.sensors import RayCasterCfg, patterns, ContactSensorCfg
4 | from omni.isaac.lab.utils import configclass
5 | from omni.isaac.lab.assets import ArticulationCfg, AssetBaseCfg
6 | import omni.isaac.lab.sim as sim_utils
7 | import omni.isaac.lab.envs.mdp as mdp
8 | from omni.isaac.lab.managers import ObservationGroupCfg as ObsGroup
9 | from omni.isaac.lab.managers import ObservationTermCfg as ObsTerm
10 | from omni.isaac.lab.envs import ManagerBasedRLEnvCfg
11 | from omni.isaac.lab.managers import SceneEntityCfg
12 | from omni.isaac.lab.utils.noise import UniformNoiseCfg
13 | from omni.isaac.core.utils.viewports import set_camera_view
14 | import numpy as np
15 | from scipy.spatial.transform import Rotation as R
16 | import go2.go2_ctrl as go2_ctrl
17 |
18 |
19 | @configclass
20 | class Go2SimCfg(InteractiveSceneCfg):
21 | # ground plane
22 | ground = AssetBaseCfg(prim_path="/World/ground",
23 | spawn=sim_utils.GroundPlaneCfg(color=(0.1, 0.1, 0.1), size=(300., 300.)),
24 | init_state=AssetBaseCfg.InitialStateCfg(
25 | pos=(0, 0, 1e-4)
26 | ))
27 |
28 | # Lights
29 | light = AssetBaseCfg(
30 | prim_path="/World/light",
31 | spawn=sim_utils.DistantLightCfg(color=(0.75, 0.75, 0.75), intensity=3000.0),
32 | )
33 | sky_light = AssetBaseCfg(
34 | prim_path="/World/skyLight",
35 | spawn=sim_utils.DomeLightCfg(color=(0.2, 0.2, 0.3), intensity=2000.0),
36 | )
37 |
38 | # Go2 Robot
39 | unitree_go2: ArticulationCfg = UNITREE_GO2_CFG.replace(prim_path="{ENV_REGEX_NS}/Go2")
40 |
41 | # Go2 foot contact sensor
42 | contact_forces = ContactSensorCfg(prim_path="{ENV_REGEX_NS}/Go2/.*_foot", history_length=3, track_air_time=True)
43 |
44 | # Go2 height scanner
45 | height_scanner = RayCasterCfg(
46 | prim_path="{ENV_REGEX_NS}/Go2/base",
47 | offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20)),
48 | attach_yaw_only=True,
49 | pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=[1.6, 1.0]),
50 | debug_vis=False,
51 | mesh_prim_paths=["/World/ground"],
52 | )
53 |
54 | @configclass
55 | class ActionsCfg:
56 | """Action specifications for the environment."""
57 | joint_pos = mdp.JointPositionActionCfg(asset_name="unitree_go2", joint_names=[".*"])
58 |
59 | @configclass
60 | class ObservationsCfg:
61 | """Observation specifications for the environment."""
62 |
63 | @configclass
64 | class PolicyCfg(ObsGroup):
65 | """Observations for policy group."""
66 |
67 | # observation terms (order preserved)
68 | base_lin_vel = ObsTerm(func=mdp.base_lin_vel,
69 | params={"asset_cfg": SceneEntityCfg(name="unitree_go2")})
70 | base_ang_vel = ObsTerm(func=mdp.base_ang_vel,
71 | params={"asset_cfg": SceneEntityCfg(name="unitree_go2")})
72 | projected_gravity = ObsTerm(func=mdp.projected_gravity,
73 | params={"asset_cfg": SceneEntityCfg(name="unitree_go2")},
74 | noise=UniformNoiseCfg(n_min=-0.05, n_max=0.05))
75 | # velocity command
76 | base_vel_cmd = ObsTerm(func=go2_ctrl.base_vel_cmd)
77 |
78 | joint_pos = ObsTerm(func=mdp.joint_pos_rel,
79 | params={"asset_cfg": SceneEntityCfg(name="unitree_go2")})
80 | joint_vel = ObsTerm(func=mdp.joint_vel_rel,
81 | params={"asset_cfg": SceneEntityCfg(name="unitree_go2")})
82 | actions = ObsTerm(func=mdp.last_action)
83 |
84 | # Height scan
85 | height_scan = ObsTerm(func=mdp.height_scan,
86 | params={"sensor_cfg": SceneEntityCfg("height_scanner")},
87 | clip=(-1.0, 1.0))
88 |
89 | def __post_init__(self) -> None:
90 | self.enable_corruption = False
91 | self.concatenate_terms = True
92 |
93 | # observation groups
94 | policy: PolicyCfg = PolicyCfg()
95 |
96 | @configclass
97 | class CommandsCfg:
98 | """Command specifications for the MDP."""
99 | base_vel_cmd = mdp.UniformVelocityCommandCfg(
100 | asset_name="unitree_go2",
101 | resampling_time_range=(0.0, 0.0),
102 | debug_vis=True,
103 | ranges=mdp.UniformVelocityCommandCfg.Ranges(
104 | lin_vel_x=(0.0, 0.0), lin_vel_y=(0.0, 0.0), ang_vel_z=(0.0, 0.0), heading=(0, 0)
105 | ),
106 | )
107 |
108 | @configclass
109 | class EventCfg:
110 | """Configuration for events."""
111 | pass
112 |
113 | @configclass
114 | class RewardsCfg:
115 | """Reward terms for the MDP."""
116 | pass
117 |
118 |
119 | @configclass
120 | class TerminationsCfg:
121 | """Termination terms for the MDP."""
122 | pass
123 |
124 | @configclass
125 | class CurriculumCfg:
126 | """Curriculum terms for the MDP."""
127 | pass
128 |
129 |
130 |
131 | @configclass
132 | class Go2RSLEnvCfg(ManagerBasedRLEnvCfg):
133 | """Configuration for the Go2 environment."""
134 | # scene settings
135 | scene = Go2SimCfg(num_envs=2, env_spacing=2.0)
136 |
137 | # basic settings
138 | observations = ObservationsCfg()
139 | actions = ActionsCfg()
140 |
141 | # dummy settings
142 | commands = CommandsCfg()
143 | rewards = RewardsCfg()
144 | terminations = TerminationsCfg()
145 | events = EventCfg()
146 | curriculum = CurriculumCfg()
147 |
148 | def __post_init__(self):
149 | # viewer settings
150 | self.viewer.eye = [-4.0, 0.0, 5.0]
151 | self.viewer.lookat = [0.0, 0.0, 0.0]
152 |
153 | # step settings
154 | self.decimation = 8  # control decimation: the policy acts once every 8 sim steps (overridden from cfg.freq at launch)
155 |
156 | # simulation settings
157 | self.sim.dt = 0.005  # simulation timestep: 0.005 s (200 Hz physics)
158 | self.sim.render_interval = self.decimation
159 | self.sim.disable_contact_processing = True
160 | self.sim.render.antialiasing_mode = None
161 | # self.sim.physics_material = self.scene.terrain.physics_material
162 |
163 | # settings for rsl env control
164 | self.episode_length_s = 20.0 # can be ignored
165 | self.is_finite_horizon = False
166 | self.actions.joint_pos.scale = 0.25
167 |
168 | if self.scene.height_scanner is not None:
169 | self.scene.height_scanner.update_period = self.decimation * self.sim.dt
170 |
171 | def camera_follow(env):
172 | if (env.unwrapped.scene.num_envs == 1):
173 | robot_position = env.unwrapped.scene["unitree_go2"].data.root_state_w[0, :3].cpu().numpy()
174 | robot_orientation = env.unwrapped.scene["unitree_go2"].data.root_state_w[0, 3:7].cpu().numpy()
175 | rotation = R.from_quat([robot_orientation[1], robot_orientation[2],
176 | robot_orientation[3], robot_orientation[0]])
177 | yaw = rotation.as_euler('zyx')[0]
178 | yaw_rotation = R.from_euler('z', yaw).as_matrix()
179 | set_camera_view(
180 | yaw_rotation.dot(np.asarray([-4.0, 0.0, 5.0])) + robot_position,
181 | robot_position
182 | )
--------------------------------------------------------------------------------
/go2/go2_sensors.py:
--------------------------------------------------------------------------------
1 | import omni
2 | import numpy as np
3 | from pxr import Gf
4 | import omni.replicator.core as rep
5 | from omni.isaac.sensor import Camera
6 | import omni.isaac.core.utils.numpy.rotations as rot_utils
7 |
8 | class SensorManager:
9 | def __init__(self, num_envs):
10 | self.num_envs = num_envs
11 |
12 | def add_rtx_lidar(self):
13 | lidar_annotators = []
14 | for env_idx in range(self.num_envs):
15 | _, sensor = omni.kit.commands.execute(
16 | "IsaacSensorCreateRtxLidar",
17 | path="/lidar",
18 | parent=f"/World/envs/env_{env_idx}/Go2/base",
19 | config="Hesai_XT32_SD10",
20 | # config="Velodyne_VLS128",
21 | translation=(0.2, 0, 0.2),
22 | orientation=Gf.Quatd(1.0, 0.0, 0.0, 0.0), # Gf.Quatd is w,i,j,k
23 | )
24 |
25 | annotator = rep.AnnotatorRegistry.get_annotator("RtxSensorCpuIsaacCreateRTXLidarScanBuffer")
26 | hydra_texture = rep.create.render_product(sensor.GetPath(), [1, 1], name="Isaac")
27 | annotator.attach(hydra_texture.path)
28 | lidar_annotators.append(annotator)
29 | return lidar_annotators
30 |
31 | def add_camera(self, freq):
32 | cameras = []
33 | for env_idx in range(self.num_envs):
34 | camera = Camera(
35 | prim_path=f"/World/envs/env_{env_idx}/Go2/base/front_cam",
36 | translation=np.array([0.4, 0.0, 0.2]),
37 | frequency=freq,
38 | resolution=(640, 480),
39 | orientation=rot_utils.euler_angles_to_quats(np.array([0, 0, 0]), degrees=True),
40 | )
41 | camera.initialize()
42 | camera.set_focal_length(1.5)
43 | cameras.append(camera)
44 | return cameras
--------------------------------------------------------------------------------
/isaac_go2_ros2.py:
--------------------------------------------------------------------------------
1 | from isaacsim import SimulationApp
2 | import os
3 | import hydra
4 | import rclpy
5 | import torch
6 | import time
7 | import math
8 |
9 | FILE_PATH = os.path.join(os.path.dirname(__file__), "cfg")
10 | @hydra.main(config_path=FILE_PATH, config_name="sim", version_base=None)
11 | def run_simulator(cfg):
12 | # launch omniverse app
13 | simulation_app = SimulationApp({"headless": False, "anti_aliasing": cfg.sim_app.anti_aliasing,
14 | "width": cfg.sim_app.width, "height": cfg.sim_app.height,
15 | "hide_ui": cfg.sim_app.hide_ui})
16 |
17 | import omni
18 | import carb
19 | import go2.go2_ctrl as go2_ctrl
20 | import ros2.go2_ros2_bridge as go2_ros2_bridge
21 | from go2.go2_env import Go2RSLEnvCfg, camera_follow
22 | import env.sim_env as sim_env
23 | import go2.go2_sensors as go2_sensors
24 |
25 |
26 | # Go2 Environment setup
27 | go2_env_cfg = Go2RSLEnvCfg()
28 | go2_env_cfg.scene.num_envs = cfg.num_envs
29 | go2_env_cfg.decimation = math.ceil(1./go2_env_cfg.sim.dt/cfg.freq)
30 | go2_env_cfg.sim.render_interval = go2_env_cfg.decimation
31 | go2_ctrl.init_base_vel_cmd(cfg.num_envs)
32 | # env, policy = go2_ctrl.get_rsl_flat_policy(go2_env_cfg)
33 | env, policy = go2_ctrl.get_rsl_rough_policy(go2_env_cfg)
34 |
35 | # Simulation environment
36 | if (cfg.env_name == "obstacle-dense"):
37 | sim_env.create_obstacle_dense_env() # obstacles dense
38 | elif (cfg.env_name == "obstacle-medium"):
39 | sim_env.create_obstacle_medium_env() # obstacles medium
40 | elif (cfg.env_name == "obstacle-sparse"):
41 | sim_env.create_obstacle_sparse_env() # obstacles sparse
42 | elif (cfg.env_name == "warehouse"):
43 | sim_env.create_warehouse_env() # warehouse
44 | elif (cfg.env_name == "warehouse-forklifts"):
45 | sim_env.create_warehouse_forklifts_env() # warehouse forklifts
46 | elif (cfg.env_name == "warehouse-shelves"):
47 | sim_env.create_warehouse_shelves_env() # warehouse shelves
48 | elif (cfg.env_name == "full-warehouse"):
49 | sim_env.create_full_warehouse_env() # full warehouse
50 |
51 | # Sensor setup
52 | sm = go2_sensors.SensorManager(cfg.num_envs)
53 | lidar_annotators = sm.add_rtx_lidar()
54 | cameras = sm.add_camera(cfg.freq)
55 |
56 | # Keyboard control
57 | system_input = carb.input.acquire_input_interface()
58 | system_input.subscribe_to_keyboard_events(
59 | omni.appwindow.get_default_app_window().get_keyboard(), go2_ctrl.sub_keyboard_event)
60 |
61 | # ROS2 Bridge
62 | rclpy.init()
63 | dm = go2_ros2_bridge.RobotDataManager(env, lidar_annotators, cameras, cfg)
64 |
65 | # Run simulation
66 | sim_step_dt = float(go2_env_cfg.sim.dt * go2_env_cfg.decimation)
67 | obs, _ = env.reset()
68 | while simulation_app.is_running():
69 | start_time = time.time()
70 | with torch.inference_mode():
71 | # control joints
72 | actions = policy(obs)
73 |
74 | # step the environment
75 | obs, _, _, _ = env.step(actions)
76 |
77 | # ROS2 data
78 | dm.pub_ros2_data()
79 | rclpy.spin_once(dm)
80 |
81 | # Camera follow
82 | if (cfg.camera_follow):
83 | camera_follow(env)
84 |
85 | # limit loop time
86 | elapsed_time = time.time() - start_time
87 | if elapsed_time < sim_step_dt:
88 | sleep_duration = sim_step_dt - elapsed_time
89 | time.sleep(sleep_duration)
90 | actual_loop_time = time.time() - start_time
91 | rtf = min(1.0, sim_step_dt/elapsed_time)
92 | print(f"\rStep time: {actual_loop_time*1000:.2f}ms, Real Time Factor: {rtf:.2f}", end='', flush=True)
93 |
94 | dm.destroy_node()
95 | rclpy.shutdown()
96 | simulation_app.close()
97 |
98 | if __name__ == "__main__":
99 | run_simulator()
100 |
--------------------------------------------------------------------------------
/media/sim-demo1.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Zhefan-Xu/isaac-go2-ros2/be8e4ccfe78f77f24221ab7d59b79b08d55b8acd/media/sim-demo1.gif
--------------------------------------------------------------------------------
/media/sim-demo2.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Zhefan-Xu/isaac-go2-ros2/be8e4ccfe78f77f24221ab7d59b79b08d55b8acd/media/sim-demo2.gif
--------------------------------------------------------------------------------
/ros2/go2_ros2_bridge.py:
--------------------------------------------------------------------------------
1 | import rclpy
2 | from rclpy.node import Node
3 | from nav_msgs.msg import Odometry
4 | from geometry_msgs.msg import PoseStamped, Twist, TransformStamped
5 | from sensor_msgs.msg import PointCloud2, PointField, Image
6 | from sensor_msgs_py import point_cloud2
7 | from tf2_ros import TransformBroadcaster
8 | from tf2_ros.static_transform_broadcaster import StaticTransformBroadcaster
9 | import numpy as np
10 | from cv_bridge import CvBridge
11 | import cv2
12 | import omni
13 | import omni.graph.core as og
14 | import omni.replicator.core as rep
15 | import omni.syntheticdata._syntheticdata as sd
16 | import subprocess
17 | import time
18 | import go2.go2_ctrl as go2_ctrl
19 |
20 | ext_manager = omni.kit.app.get_app().get_extension_manager()
21 | ext_manager.set_extension_enabled_immediate("omni.isaac.ros2_bridge", True)
22 | from omni.isaac.ros2_bridge import read_camera_info
23 |
24 |
25 | class RobotDataManager(Node):
26 | def __init__(self, env, lidar_annotators, cameras, cfg):
27 | super().__init__("robot_data_manager")
28 | self.cfg = cfg
29 | self.create_ros_time_graph()
30 | sim_time_set = False
31 | while (rclpy.ok() and not sim_time_set):
32 | sim_time_set = self.use_sim_time()
33 |
34 | self.env = env
35 | self.num_envs = env.unwrapped.scene.num_envs
36 | self.lidar_annotators = lidar_annotators
37 | self.cameras = cameras
38 | self.points = []
39 |
40 | # ROS2 Broadcaster
41 | self.broadcaster = TransformBroadcaster(self)
42 |
43 | # ROS2 Publisher
44 | self.odom_pub = []
45 | self.pose_pub = []
46 | self.lidar_pub = []
47 | self.semantic_seg_img_vis_pub = []
48 |
49 | # ROS2 Subscriber
50 | self.cmd_vel_sub = []
51 | self.color_img_sub = []
52 | self.depth_img_sub = []
53 | self.semantic_seg_img_sub = []
54 |
55 | # ROS2 Timer
56 | self.lidar_publish_timer = []
57 | for i in range(self.num_envs):
58 | if (self.num_envs == 1):
59 | self.odom_pub.append(
60 | self.create_publisher(Odometry, "unitree_go2/odom", 10))
61 | self.pose_pub.append(
62 | self.create_publisher(PoseStamped, "unitree_go2/pose", 10))
63 | self.lidar_pub.append(
64 | self.create_publisher(PointCloud2, "unitree_go2/lidar/point_cloud", 10)
65 | )
66 | self.semantic_seg_img_vis_pub.append(
67 | self.create_publisher(Image, "unitree_go2/front_cam/semantic_segmentation_image_vis", 10)
68 | )
69 | self.cmd_vel_sub.append(
70 | self.create_subscription(Twist, "unitree_go2/cmd_vel",
71 | lambda msg: self.cmd_vel_callback(msg, 0), 10)
72 | )
73 | self.semantic_seg_img_sub.append(
74 | self.create_subscription(Image, "/unitree_go2/front_cam/semantic_segmentation_image",
75 | lambda msg: self.semantic_segmentation_callback(msg, 0), 10)
76 | )
77 | else:
78 | self.odom_pub.append(
79 | self.create_publisher(Odometry, f"unitree_go2_{i}/odom", 10))
80 | self.pose_pub.append(
81 | self.create_publisher(PoseStamped, f"unitree_go2_{i}/pose", 10))
82 | self.lidar_pub.append(
83 | self.create_publisher(PointCloud2, f"unitree_go2_{i}/lidar/point_cloud", 10)
84 | )
85 | self.semantic_seg_img_vis_pub.append(
86 | self.create_publisher(Image, f"unitree_go2_{i}/front_cam/semantic_segmentation_image_vis", 10)
87 | )
88 | self.cmd_vel_sub.append(
89 | self.create_subscription(Twist, f"unitree_go2_{i}/cmd_vel",
90 | lambda msg, env_idx=i: self.cmd_vel_callback(msg, env_idx), 10)
91 | )
92 | self.semantic_seg_img_sub.append(
93 | self.create_subscription(Image, f"/unitree_go2_{i}/front_cam/semantic_segmentation_image",
94 | lambda msg, env_idx=i: self.semantic_segmentation_callback(msg, env_idx), 10)
95 | )
96 |
97 | # self.create_timer(0.1, self.pub_ros2_data_callback)
98 | # self.create_timer(0.1, self.pub_lidar_data_callback)
99 |
100 | # use wall time for lidar and odom pub
101 | self.odom_pose_freq = 50.0
102 | self.lidar_freq = 15.0
103 | self.odom_pose_pub_time = time.time()
104 | self.lidar_pub_time = time.time()
105 | self.create_static_transform()
106 | self.create_camera_publisher()
107 |
108 |
109 | def create_ros_time_graph(self):
110 | og.Controller.edit(
111 | {"graph_path": "/ClockGraph", "evaluator_name": "execution"},
112 | {
113 | og.Controller.Keys.CREATE_NODES: [
114 | ("ReadSimTime", "omni.isaac.core_nodes.IsaacReadSimulationTime"),
115 | ("PublishClock", "omni.isaac.ros2_bridge.ROS2PublishClock"),
116 | ("OnPlayBack", "omni.graph.action.OnPlaybackTick"),
117 | ],
118 | og.Controller.Keys.CONNECT: [
119 | # Connecting execution of OnImpulseEvent node to PublishClock so it will only publish when an impulse event is triggered
120 | ("OnPlayBack.outputs:tick", "PublishClock.inputs:execIn"),
121 | # Connecting simulationTime data of ReadSimTime to the clock publisher node
122 | ("ReadSimTime.outputs:simulationTime", "PublishClock.inputs:timeStamp"),
123 | ],
124 | og.Controller.Keys.SET_VALUES: [
125 | # Assigning topic name to clock publisher
126 | ("PublishClock.inputs:topicName", "/clock"),
127 | ],
128 | },
129 | )
130 |
131 | def use_sim_time(self):
132 | # Define the command as a list
133 | command = ["ros2", "param", "set", "/robot_data_manager", "use_sim_time", "true"]
134 |
135 | # Run the command in a non-blocking way
136 | subprocess.Popen(command)
137 | return True
138 |
139 | def create_static_transform(self):
140 | for i in range(self.num_envs):
141 | # LiDAR
142 | # Create and publish the transform
143 | lidar_broadcaster = StaticTransformBroadcaster(self)
144 | base_lidar_transform = TransformStamped()
145 | base_lidar_transform.header.stamp = self.get_clock().now().to_msg()
146 | if (self.num_envs == 1):
147 | base_lidar_transform.header.frame_id = "unitree_go2/base_link"
148 | base_lidar_transform.child_frame_id = "unitree_go2/lidar_frame"
149 | else:
150 | base_lidar_transform.header.frame_id = f"unitree_go2_{i}/base_link"
151 | base_lidar_transform.child_frame_id = f"unitree_go2_{i}/lidar_frame"
152 |
153 | # Translation
154 | base_lidar_transform.transform.translation.x = 0.2
155 | base_lidar_transform.transform.translation.y = 0.0
156 | base_lidar_transform.transform.translation.z = 0.2
157 |
158 | # Rotation
159 | base_lidar_transform.transform.rotation.x = 0.0
160 | base_lidar_transform.transform.rotation.y = 0.0
161 | base_lidar_transform.transform.rotation.z = 0.0
162 | base_lidar_transform.transform.rotation.w = 1.0
163 |
164 | # Publish the transform
165 | lidar_broadcaster.sendTransform(base_lidar_transform)
166 |
167 | # -------------------------------------------------------------
168 | # Camera
169 | # Create and publish the transform
170 | camera_broadcaster = StaticTransformBroadcaster(self)
171 | base_cam_transform = TransformStamped()
172 | # base_cam_transform.header.stamp = self.get_clock().now().to_msg()
173 | if (self.num_envs == 1):
174 | base_cam_transform.header.frame_id = "unitree_go2/base_link"
175 | base_cam_transform.child_frame_id = "unitree_go2/front_cam"
176 | else:
177 | base_cam_transform.header.frame_id = f"unitree_go2_{i}/base_link"
178 | base_cam_transform.child_frame_id = f"unitree_go2_{i}/front_cam"
179 |
180 | # Translation
181 | base_cam_transform.transform.translation.x = 0.4
182 | base_cam_transform.transform.translation.y = 0.0
183 | base_cam_transform.transform.translation.z = 0.2
184 |
185 | # Rotation
186 | base_cam_transform.transform.rotation.x = -0.5
187 | base_cam_transform.transform.rotation.y = 0.5
188 | base_cam_transform.transform.rotation.z = -0.5
189 | base_cam_transform.transform.rotation.w = 0.5
190 |
191 | # Publish the transform
192 | camera_broadcaster.sendTransform(base_cam_transform)
193 |
194 | def create_camera_publisher(self):
195 | # self.pub_image_graph()
196 | if (self.cfg.sensor.enable_camera):
197 | if (self.cfg.sensor.color_image):
198 | self.pub_color_image()
199 | if (self.cfg.sensor.depth_image):
200 | self.pub_depth_image()
201 | if (self.cfg.sensor.semantic_segmentation):
202 | self.pub_semantic_image()
203 | # self.pub_cam_depth_cloud()
204 | self.publish_camera_info()
205 |
206 | def publish_odom(self, base_pos, base_rot, base_lin_vel_b, base_ang_vel_b, env_idx):
207 | odom_msg = Odometry()
208 | odom_msg.header.stamp = self.get_clock().now().to_msg()
209 | odom_msg.header.frame_id = "map"
210 | if (self.num_envs == 1):
211 | odom_msg.child_frame_id = "base_link"
212 | else:
213 | odom_msg.child_frame_id = f"unitree_go2_{env_idx}/base_link"
214 | odom_msg.pose.pose.position.x = base_pos[0].item()
215 | odom_msg.pose.pose.position.y = base_pos[1].item()
216 | odom_msg.pose.pose.position.z = base_pos[2].item()
217 | odom_msg.pose.pose.orientation.x = base_rot[1].item()
218 | odom_msg.pose.pose.orientation.y = base_rot[2].item()
219 | odom_msg.pose.pose.orientation.z = base_rot[3].item()
220 | odom_msg.pose.pose.orientation.w = base_rot[0].item()
221 | odom_msg.twist.twist.linear.x = base_lin_vel_b[0].item()
222 | odom_msg.twist.twist.linear.y = base_lin_vel_b[1].item()
223 | odom_msg.twist.twist.linear.z = base_lin_vel_b[2].item()
224 | odom_msg.twist.twist.angular.x = base_ang_vel_b[0].item()
225 | odom_msg.twist.twist.angular.y = base_ang_vel_b[1].item()
226 | odom_msg.twist.twist.angular.z = base_ang_vel_b[2].item()
227 | self.odom_pub[env_idx].publish(odom_msg)
228 |
229 | # transform
230 | map_base_trans = TransformStamped()
231 | map_base_trans.header.stamp = self.get_clock().now().to_msg()
232 | map_base_trans.header.frame_id = "map"
233 | if (self.num_envs == 1):
234 | map_base_trans.child_frame_id = "unitree_go2/base_link"
235 | else:
236 | map_base_trans.child_frame_id = f"unitree_go2_{env_idx}/base_link"
237 | map_base_trans.transform.translation.x = base_pos[0].item()
238 | map_base_trans.transform.translation.y = base_pos[1].item()
239 | map_base_trans.transform.translation.z = base_pos[2].item()
240 | map_base_trans.transform.rotation.x = base_rot[1].item()
241 | map_base_trans.transform.rotation.y = base_rot[2].item()
242 | map_base_trans.transform.rotation.z = base_rot[3].item()
243 | map_base_trans.transform.rotation.w = base_rot[0].item()
244 | self.broadcaster.sendTransform(map_base_trans)
245 |
246 | def publish_pose(self, base_pos, base_rot, env_idx):
247 | pose_msg = PoseStamped()
248 | pose_msg.header.stamp = self.get_clock().now().to_msg()
249 | pose_msg.header.frame_id = "map"
250 | pose_msg.pose.position.x = base_pos[0].item()
251 | pose_msg.pose.position.y = base_pos[1].item()
252 | pose_msg.pose.position.z = base_pos[2].item()
253 | pose_msg.pose.orientation.x = base_rot[1].item()
254 | pose_msg.pose.orientation.y = base_rot[2].item()
255 | pose_msg.pose.orientation.z = base_rot[3].item()
256 | pose_msg.pose.orientation.w = base_rot[0].item()
257 | self.pose_pub[env_idx].publish(pose_msg)
258 |
259 | def publish_lidar_data(self, points, env_idx):
260 | point_cloud = PointCloud2()
261 | if (self.num_envs == 1):
262 | point_cloud.header.frame_id = "unitree_go2/lidar_frame"
263 | else:
264 | point_cloud.header.frame_id = f"unitree_go2_{env_idx}/lidar_frame"
265 | point_cloud.header.stamp = self.get_clock().now().to_msg()
266 | fields = [
267 | PointField(name='x', offset=0, datatype=PointField.FLOAT32, count=1),
268 | PointField(name='y', offset=4, datatype=PointField.FLOAT32, count=1),
269 | PointField(name='z', offset=8, datatype=PointField.FLOAT32, count=1),
270 | ]
271 | point_cloud = point_cloud2.create_cloud(point_cloud.header, fields, points)
272 | self.lidar_pub[env_idx].publish(point_cloud)
273 |
274 | def pub_ros2_data_callback(self):
275 | robot_data = self.env.unwrapped.scene["unitree_go2"].data
276 | for i in range(self.num_envs):
277 | self.publish_odom(robot_data.root_state_w[i, :3],
278 | robot_data.root_state_w[i, 3:7],
279 | robot_data.root_lin_vel_b[i],
280 | robot_data.root_ang_vel_b[i],
281 | i)
282 | self.publish_pose(robot_data.root_state_w[i, :3],
283 | robot_data.root_state_w[i, 3:7], i)
284 |
285 | def pub_lidar_data_callback(self):
286 | for i in range(self.num_envs):
287 | self.publish_lidar_data(self.lidar_annotators[i].get_data()["data"].reshape(-1, 3), i)
288 |
289 | def pub_ros2_data(self):
290 | pub_odom_pose = False
291 | pub_lidar = False
292 | dt_odom_pose = time.time() - self.odom_pose_pub_time
293 | dt_lidar = time.time() - self.lidar_pub_time
294 | if (dt_odom_pose >= 1./self.odom_pose_freq):
295 | pub_odom_pose = True
296 |
297 | if (dt_lidar >= 1./self.lidar_freq):
298 | pub_lidar = True
299 |
300 | if (pub_odom_pose):
301 | self.odom_pose_pub_time = time.time()
302 | robot_data = self.env.unwrapped.scene["unitree_go2"].data
303 | for i in range(self.num_envs):
304 | self.publish_odom(robot_data.root_state_w[i, :3],
305 | robot_data.root_state_w[i, 3:7],
306 | robot_data.root_lin_vel_b[i],
307 | robot_data.root_ang_vel_b[i],
308 | i)
309 | self.publish_pose(robot_data.root_state_w[i, :3],
310 | robot_data.root_state_w[i, 3:7], i)
311 | if (self.cfg.sensor.enable_lidar):
312 | if (pub_lidar):
313 | self.lidar_pub_time = time.time()
314 | for i in range(self.num_envs):
315 | self.publish_lidar_data(self.lidar_annotators[i].get_data()["data"].reshape(-1, 3), i)
316 |
317 | def cmd_vel_callback(self, msg, env_idx):
318 | go2_ctrl.base_vel_cmd_input[env_idx][0] = msg.linear.x
319 | go2_ctrl.base_vel_cmd_input[env_idx][1] = msg.linear.y
320 | go2_ctrl.base_vel_cmd_input[env_idx][2] = msg.angular.z
321 |
322 | def semantic_segmentation_callback(self, img, env_idx):
323 | bridge = CvBridge()
324 | semantic_image = bridge.imgmsg_to_cv2(img, desired_encoding='passthrough')
325 | max_label = max(int(semantic_image.max()), 1)  # guard against an all-zero label image
326 | semantic_image_normalized = (semantic_image / max_label * 255).astype(np.uint8)
327 |
328 | # Apply a predefined colormap for visualization
329 | color_mapped_image = cv2.applyColorMap(semantic_image_normalized, cv2.COLORMAP_JET)
330 | # Convert back to a ROS Image and publish
331 | image_msg = bridge.cv2_to_imgmsg(color_mapped_image, encoding='rgb8')
332 | self.semantic_seg_img_vis_pub[env_idx].publish(image_msg)
333 |
334 |
335 | def pub_image_graph(self):
336 | for i in range(self.num_envs):
337 | if (self.num_envs == 1):
338 | color_topic_name = "unitree_go2/front_cam/color_image"
339 | depth_topic_name = "unitree_go2/front_cam/depth_image"
340 | # segmentation_topic_name = "unitree_go2/front_cam/segmentation_image"
341 | # depth_cloud_topic_name = "unitree_go2/front_cam/depth_cloud"
342 | frame_id = "unitree_go2/front_cam"
343 | else:
344 | color_topic_name = f"unitree_go2_{i}/front_cam/color_image"
345 | depth_topic_name = f"unitree_go2_{i}/front_cam/depth_image"
346 | # segmentation_topic_name = f"unitree_go2_{i}/front_cam/segmentation_image"
347 | # depth_cloud_topic_name = f"unitree_go2_{i}/front_cam/depth_cloud"
348 | frame_id = f"unitree_go2_{i}/front_cam"
349 | keys = og.Controller.Keys
350 | og.Controller.edit(
351 | {
352 | "graph_path": "/CameraROS2Graph",
353 | "evaluator_name": "execution",
354 | },
355 | {
356 | keys.CREATE_NODES: [
357 | ("OnPlaybackTick", "omni.graph.action.OnPlaybackTick"),
358 | ("IsaacCreateRenderProduct", "omni.isaac.core_nodes.IsaacCreateRenderProduct"),
359 | ("ROS2CameraHelperColor", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
360 | ("ROS2CameraHelperDepth", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
361 | # ("ROS2CameraHelperSegmentation", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
362 | # ("ROS2CameraHelperDepthCloud", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
363 | ],
364 |
365 | keys.SET_VALUES: [
366 | ("IsaacCreateRenderProduct.inputs:cameraPrim", f"/World/envs/env_{i}/Go2/base/front_cam"),
367 | ("IsaacCreateRenderProduct.inputs:enabled", True),
368 | ("IsaacCreateRenderProduct.inputs:height", 480),
369 | ("IsaacCreateRenderProduct.inputs:width", 640),
370 |
371 | # color camera
372 | ("ROS2CameraHelperColor.inputs:type", "rgb"),
373 | ("ROS2CameraHelperColor.inputs:topicName", color_topic_name),
374 | ("ROS2CameraHelperColor.inputs:frameId", frame_id),
375 | ("ROS2CameraHelperColor.inputs:useSystemTime", False),
376 |
377 | # depth camera
378 | ("ROS2CameraHelperDepth.inputs:type", "depth"),
379 | ("ROS2CameraHelperDepth.inputs:topicName", depth_topic_name),
380 | ("ROS2CameraHelperDepth.inputs:frameId", frame_id),
381 | ("ROS2CameraHelperDepth.inputs:useSystemTime", False),
382 |
383 | # semantic camera
384 | # ("ROS2CameraHelperSegmentation.inputs:type", "semantic_segmentation"),
385 | # ("ROS2CameraHelperSegmentation.inputs:enableSemanticLabels", True),
386 | # ("ROS2CameraHelperSegmentation.inputs:topicName", segmentation_topic_name),
387 | # ("ROS2CameraHelperSegmentation.inputs:frameId", frame_id),
388 | # ("ROS2CameraHelperSegmentation.inputs:useSystemTime", False),
389 |
390 | # depth camera cloud
391 | # ("ROS2CameraHelperDepthCloud.inputs:type", "depth_pcl"),
392 | # ("ROS2CameraHelperDepthCloud.inputs:topicName", depth_cloud_topic_name),
393 | # ("ROS2CameraHelperDepthCloud.inputs:frameId", frame_id),
394 | # ("ROS2CameraHelperDepthCloud.inputs:useSystemTime", True),
395 | ],
396 |
397 | keys.CONNECT: [
398 | ("OnPlaybackTick.outputs:tick", "IsaacCreateRenderProduct.inputs:execIn"),
399 | ("IsaacCreateRenderProduct.outputs:execOut", "ROS2CameraHelperColor.inputs:execIn"),
400 | ("IsaacCreateRenderProduct.outputs:renderProductPath", "ROS2CameraHelperColor.inputs:renderProductPath"),
401 | ("IsaacCreateRenderProduct.outputs:execOut", "ROS2CameraHelperDepth.inputs:execIn"),
402 | ("IsaacCreateRenderProduct.outputs:renderProductPath", "ROS2CameraHelperDepth.inputs:renderProductPath"),
403 | # ("IsaacCreateRenderProduct.outputs:execOut", "ROS2CameraHelperSegmentation.inputs:execIn"),
404 | # ("IsaacCreateRenderProduct.outputs:renderProductPath", "ROS2CameraHelperSegmentation.inputs:renderProductPath"),
405 | # ("IsaacCreateRenderProduct.outputs:execOut", "ROS2CameraHelperDepthCloud.inputs:execIn"),
406 | # ("IsaacCreateRenderProduct.outputs:renderProductPath", "ROS2CameraHelperDepthCloud.inputs:renderProductPath"),
407 | ],
408 |
409 | },
410 | )
411 |
412 | def pub_color_image(self):
413 | for i in range(self.num_envs):
414 | # The following code will link the camera's render product and publish the data to the specified topic name.
415 | render_product = self.cameras[i]._render_product_path
416 | step_size = 1
417 | if (self.num_envs == 1):
418 | topic_name = "unitree_go2/front_cam/color_image"
419 | frame_id = "unitree_go2/front_cam"
420 | else:
421 | topic_name = f"unitree_go2_{i}/front_cam/color_image"
422 | frame_id = f"unitree_go2_{i}/front_cam"
423 | node_namespace = ""
424 | queue_size = 1
425 |
426 | rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(sd.SensorType.Rgb.name)
427 |
428 | writer = rep.writers.get(rv + "ROS2PublishImage")
429 | writer.initialize(
430 | frameId=frame_id,
431 | nodeNamespace=node_namespace,
432 | queueSize=queue_size,
433 | topicName=topic_name,
434 | )
435 | writer.attach([render_product])
436 |
437 | # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
438 | gate_path = omni.syntheticdata.SyntheticData._get_node_path(
439 | rv + "IsaacSimulationGate", render_product
440 | )
441 | og.Controller.attribute(gate_path + ".inputs:step").set(step_size)
442 |
443 | def pub_depth_image(self):
444 | for i in range(self.num_envs):
445 | # The following code will link the camera's render product and publish the data to the specified topic name.
446 | render_product = self.cameras[i]._render_product_path
447 | step_size = 1
448 | if (self.num_envs == 1):
449 | topic_name = "unitree_go2/front_cam/depth_image"
450 | frame_id = "unitree_go2/front_cam"
451 | else:
452 | topic_name = f"unitree_go2_{i}/front_cam/depth_image"
453 | frame_id = f"unitree_go2_{i}/front_cam"
454 | node_namespace = ""
455 | queue_size = 1
456 |
457 | rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(
458 | sd.SensorType.DistanceToImagePlane.name
459 | )
460 | writer = rep.writers.get(rv + "ROS2PublishImage")
461 | writer.initialize(
462 | frameId=frame_id,
463 | nodeNamespace=node_namespace,
464 | queueSize=queue_size,
465 | topicName=topic_name
466 | )
467 | writer.attach([render_product])
468 |
469 | # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
470 | gate_path = omni.syntheticdata.SyntheticData._get_node_path(
471 | rv + "IsaacSimulationGate", render_product
472 | )
473 | og.Controller.attribute(gate_path + ".inputs:step").set(step_size)
474 |
475 | def pub_semantic_image(self):
476 | for i in range(self.num_envs):
477 | # The following code will link the camera's render product and publish the data to the specified topic name.
478 | render_product = self.cameras[i]._render_product_path
479 | step_size = 1
480 | if (self.num_envs == 1):
481 | topic_name = "unitree_go2/front_cam/semantic_segmentation_image"
482 | label_topic_name = "unitree_go2/front_cam/semantic_segmentation_label"
483 | frame_id = "unitree_go2/front_cam"
484 | else:
485 | topic_name = f"unitree_go2_{i}/front_cam/semantic_segmentation_image"
486 | label_topic_name = f"unitree_go2_{i}/front_cam/semantic_segmentation_label"
487 | frame_id = f"unitree_go2_{i}/front_cam"
488 | node_namespace = ""
489 | queue_size = 1
490 |
491 | rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(
492 | sd.SensorType.SemanticSegmentation.name
493 | )
494 | writer = rep.writers.get("ROS2PublishSemanticSegmentation")
495 | writer.initialize(
496 | frameId=frame_id,
497 | nodeNamespace=node_namespace,
498 | queueSize=queue_size,
499 | topicName=topic_name,
500 | )
501 | writer.attach([render_product])
502 |
503 | semantic_writer = rep.writers.get(
504 | "SemanticSegmentationSD" + "ROS2PublishSemanticLabels"
505 | )
506 | semantic_writer.initialize(
507 | nodeNamespace=node_namespace,
508 | queueSize=queue_size,
509 | topicName=label_topic_name,
510 | )
511 | semantic_writer.attach([render_product])
512 |
513 | # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
514 | gate_path = omni.syntheticdata.SyntheticData._get_node_path(
515 | rv + "IsaacSimulationGate", render_product
516 | )
517 | og.Controller.attribute(gate_path + ".inputs:step").set(step_size)
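    | # Note: the labels writer above publishes the id-to-class mapping that decodes
    | # the segmentation image (typically a JSON string on the ..._label topic), so
    | # both topics are needed to interpret the per-pixel ids.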
518 |
519 | def pub_cam_depth_cloud(self):
520 | for i in range(self.num_envs):
521 | # Link this camera's render product to a ROS 2 point cloud writer and publish on the topic below.
522 | render_product = self.cameras[i]._render_product_path
523 | step_size = 1
524 | if self.num_envs == 1:
525 | topic_name = "unitree_go2/front_cam/depth_cloud"
526 | frame_id = "unitree_go2/front_cam"
527 | else:
528 | topic_name = f"unitree_go2_{i}/front_cam/depth_cloud"
529 | frame_id = f"unitree_go2_{i}/front_cam"
530 | node_namespace = ""
531 | queue_size = 1
532 |
533 | # Note: this point cloud publisher simply back-projects the depth image into a point cloud using the camera intrinsics.
534 | # This point cloud generation method does not support semantically labelled objects.
535 | rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(
536 | sd.SensorType.DistanceToImagePlane.name
537 | )
538 |
539 | writer = rep.writers.get(rv + "ROS2PublishPointCloud")
540 | writer.initialize(
541 | frameId=frame_id,
542 | nodeNamespace=node_namespace,
543 | queueSize=queue_size,
544 | topicName=topic_name,
545 | )
546 | writer.attach([render_product])
547 |
548 | # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
549 | gate_path = omni.syntheticdata.SyntheticData._get_node_path(
550 | rv + "IsaacSimulationGate", render_product
551 | )
552 |
553 | og.Controller.attribute(gate_path + ".inputs:step").set(step_size)
554 |
555 | def publish_camera_info(self):
556 | for i in range(self.num_envs):
557 | # Link this camera's render product to a ROS 2 CameraInfo writer and publish on the topic below.
558 | render_product = self.cameras[i]._render_product_path
559 | step_size = 1
560 | if self.num_envs == 1:
561 | topic_name = "unitree_go2/front_cam/info"
562 | else:
563 | topic_name = f"unitree_go2_{i}/front_cam/info"
564 | queue_size = 1
565 | node_namespace = ""
566 | frame_id = self.cameras[i].prim_path.split("/")[-1] # This matches what the TF tree is publishing.
567 |
568 | writer = rep.writers.get("ROS2PublishCameraInfo")
569 | camera_info = read_camera_info(render_product_path=render_product)
570 | writer.initialize(
571 | frameId=frame_id,
572 | nodeNamespace=node_namespace,
573 | queueSize=queue_size,
574 | topicName=topic_name,
575 | width=camera_info["width"],
576 | height=camera_info["height"],
577 | projectionType=camera_info["projectionType"],
578 | k=camera_info["k"].reshape([1, 9]),
579 | r=camera_info["r"].reshape([1, 9]),
580 | p=camera_info["p"].reshape([1, 12]),
581 | physicalDistortionModel=camera_info["physicalDistortionModel"],
582 | physicalDistortionCoefficients=camera_info["physicalDistortionCoefficients"],
583 | )
584 | writer.attach([render_product])
585 |
586 | # Set step input of the Isaac Simulation Gate nodes upstream of ROS publishers to control their execution rate
587 | gate_path = omni.syntheticdata.SyntheticData._get_node_path(
588 | "PostProcessDispatch" + "IsaacSimulationGate", render_product
589 | )
590 | og.Controller.attribute(gate_path + ".inputs:step").set(step_size)
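    | # Usage sketch (hypothetical call site -- actual registration happens elsewhere
    | # in this class): each publisher only needs to be set up once, after the cameras
    | # and their render products exist, e.g.:
    | #   self.pub_color_image()
    | #   self.pub_depth_image()
    | #   self.pub_semantic_image()
    | #   self.pub_cam_depth_cloud()
    | #   self.publish_camera_info()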
--------------------------------------------------------------------------------
/rviz/go2.rviz:
--------------------------------------------------------------------------------
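  | # Note: this config assumes the single-robot topic layout (/unitree_go2/...).
  | # When the bridge runs with num_envs > 1 it publishes under unitree_go2_<i>/...
  | # instead, so the display topics below would need to be updated to match.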
1 | Panels:
2 | - Class: rviz_common/Displays
3 | Help Height: 78
4 | Name: Displays
5 | Property Tree Widget:
6 | Expanded:
7 | - /depth_image1/Topic1
8 | Splitter Ratio: 0.5
9 | Tree Height: 435
10 | - Class: rviz_common/Selection
11 | Name: Selection
12 | - Class: rviz_common/Tool Properties
13 | Expanded:
14 | - /2D Goal Pose1
15 | - /Publish Point1
16 | Name: Tool Properties
17 | Splitter Ratio: 0.5886790156364441
18 | - Class: rviz_common/Views
19 | Expanded:
20 | - /Current View1
21 | Name: Views
22 | Splitter Ratio: 0.5
23 | - Class: rviz_common/Time
24 | Experimental: false
25 | Name: Time
26 | SyncMode: 0
27 | SyncSource: lidar_point_cloud
28 | Visualization Manager:
29 | Class: ""
30 | Displays:
31 | - Alpha: 0.5
32 | Cell Size: 1
33 | Class: rviz_default_plugins/Grid
34 | Color: 0; 0; 0
35 | Enabled: true
36 | Line Style:
37 | Line Width: 0.029999999329447746
38 | Value: Lines
39 | Name: Grid
40 | Normal Cell Count: 0
41 | Offset:
42 | X: 0
43 | Y: 0
44 | Z: 0
45 | Plane: XY
46 | Plane Cell Count: 200
47 | Reference Frame:
48 | Value: true
49 | - Class: rviz_default_plugins/Axes
50 | Enabled: true
51 | Length: 1
52 | Name: origin
53 | Radius: 0.10000000149011612
54 | Reference Frame:
55 | Value: true
56 | - Alpha: 1
57 | Autocompute Intensity Bounds: true
58 | Autocompute Value Bounds:
59 | Max Value: 1.943721055984497
60 | Min Value: -0.0016210675239562988
61 | Value: true
62 | Axis: Z
63 | Channel Name: intensity
64 | Class: rviz_default_plugins/PointCloud2
65 | Color: 255; 255; 255
66 | Color Transformer: AxisColor
67 | Decay Time: 0
68 | Enabled: true
69 | Invert Rainbow: false
70 | Max Color: 255; 255; 255
71 | Max Intensity: 4096
72 | Min Color: 0; 0; 0
73 | Min Intensity: 0
74 | Name: lidar_point_cloud
75 | Position Transformer: XYZ
76 | Selectable: true
77 | Size (Pixels): 3
78 | Size (m): 0.10000000149011612
79 | Style: Flat Squares
80 | Topic:
81 | Depth: 5
82 | Durability Policy: Volatile
83 | Filter size: 10
84 | History Policy: Keep Last
85 | Reliability Policy: Reliable
86 | Value: /unitree_go2/lidar/point_cloud
87 | Use Fixed Frame: true
88 | Use rainbow: true
89 | Value: true
90 | - Alpha: 1
91 | Autocompute Intensity Bounds: true
92 | Autocompute Value Bounds:
93 | Max Value: 244.01438903808594
94 | Min Value: 0.39650601148605347
95 | Value: true
96 | Axis: Z
97 | Channel Name: intensity
98 | Class: rviz_default_plugins/PointCloud2
99 | Color: 255; 255; 255
100 | Color Transformer: AxisColor
101 | Decay Time: 0
102 | Enabled: true
103 | Invert Rainbow: false
104 | Max Color: 255; 255; 255
105 | Max Intensity: 4096
106 | Min Color: 0; 0; 0
107 | Min Intensity: 0
108 | Name: depth_cloud
109 | Position Transformer: XYZ
110 | Selectable: true
111 | Size (Pixels): 3
112 | Size (m): 0.10000000149011612
113 | Style: Flat Squares
114 | Topic:
115 | Depth: 5
116 | Durability Policy: Volatile
117 | Filter size: 10
118 | History Policy: Keep Last
119 | Reliability Policy: Reliable
120 | Value: /unitree_go2/front_cam/depth_cloud
121 | Use Fixed Frame: true
122 | Use rainbow: true
123 | Value: true
124 | - Alpha: 1
125 | Axes Length: 0.5
126 | Axes Radius: 0.10000000149011612
127 | Class: rviz_default_plugins/Pose
128 | Color: 255; 25; 0
129 | Enabled: true
130 | Head Length: 0.30000001192092896
131 | Head Radius: 0.10000000149011612
132 | Name: Go2 Pose
133 | Shaft Length: 1
134 | Shaft Radius: 0.05000000074505806
135 | Shape: Axes
136 | Topic:
137 | Depth: 5
138 | Durability Policy: Volatile
139 | Filter size: 10
140 | History Policy: Keep Last
141 | Reliability Policy: Reliable
142 | Value: /unitree_go2/pose
143 | Value: true
144 | - Class: rviz_default_plugins/Image
145 | Enabled: true
146 | Max Value: 1
147 | Median window: 5
148 | Min Value: 0
149 | Name: depth_image
150 | Normalize Range: true
151 | Topic:
152 | Depth: 5
153 | Durability Policy: Volatile
154 | History Policy: Keep Last
155 | Reliability Policy: Reliable
156 | Value: /unitree_go2/front_cam/depth_image
157 | Value: true
158 | - Class: rviz_default_plugins/Image
159 | Enabled: true
160 | Max Value: 1
161 | Median window: 5
162 | Min Value: 0
163 | Name: color_image
164 | Normalize Range: true
165 | Topic:
166 | Depth: 5
167 | Durability Policy: Volatile
168 | History Policy: Keep Last
169 | Reliability Policy: Reliable
170 | Value: /unitree_go2/front_cam/color_image
171 | Value: true
172 | Enabled: true
173 | Global Options:
174 | Background Color: 222; 221; 218
175 | Fixed Frame: map
176 | Frame Rate: 30
177 | Name: root
178 | Tools:
179 | - Class: rviz_default_plugins/Interact
180 | Hide Inactive Objects: true
181 | - Class: rviz_default_plugins/MoveCamera
182 | - Class: rviz_default_plugins/Select
183 | - Class: rviz_default_plugins/FocusCamera
184 | - Class: rviz_default_plugins/Measure
185 | Line color: 128; 128; 0
186 | - Class: rviz_default_plugins/SetInitialPose
187 | Covariance x: 0.25
188 | Covariance y: 0.25
189 | Covariance yaw: 0.06853891909122467
190 | Topic:
191 | Depth: 5
192 | Durability Policy: Volatile
193 | History Policy: Keep Last
194 | Reliability Policy: Reliable
195 | Value: /initialpose
196 | - Class: rviz_default_plugins/SetGoal
197 | Topic:
198 | Depth: 5
199 | Durability Policy: Volatile
200 | History Policy: Keep Last
201 | Reliability Policy: Reliable
202 | Value: /goal_pose
203 | - Class: rviz_default_plugins/PublishPoint
204 | Single click: true
205 | Topic:
206 | Depth: 5
207 | Durability Policy: Volatile
208 | History Policy: Keep Last
209 | Reliability Policy: Reliable
210 | Value: /clicked_point
211 | Transformation:
212 | Current:
213 | Class: rviz_default_plugins/TF
214 | Value: true
215 | Views:
216 | Current:
217 | Class: rviz_default_plugins/Orbit
218 | Distance: 15.57160758972168
219 | Enable Stereo Rendering:
220 | Stereo Eye Separation: 0.05999999865889549
221 | Stereo Focal Distance: 1
222 | Swap Stereo Eyes: false
223 | Value: false
224 | Focal Point:
225 | X: -0.9345346689224243
226 | Y: -0.7097622752189636
227 | Z: 0.8943328261375427
228 | Focal Shape Fixed Size: true
229 | Focal Shape Size: 0.05000000074505806
230 | Invert Z Axis: false
231 | Name: Current View
232 | Near Clip Distance: 0.009999999776482582
233 | Pitch: 1.0053977966308594
234 | Target Frame:
235 | Value: Orbit (rviz)
236 | Yaw: 3.1335415840148926
237 | Saved: ~
238 | Window Geometry:
239 | Displays:
240 | collapsed: false
241 | Height: 1376
242 | Hide Left Dock: false
243 | Hide Right Dock: true
244 | QMainWindow State: 000000ff00000000fd000000040000000000000156000004c2fc020000000efb0000001200530065006c0065006300740069006f006e00000001e10000009b0000005c00fffffffb0000001e0054006f006f006c002000500072006f007000650072007400690065007302000001ed000001df00000185000000a3fb000000120056006900650077007300200054006f006f02000001df000002110000018500000122fb000000200054006f006f006c002000500072006f0070006500720074006900650073003203000002880000011d000002210000017afb000000100044006900730070006c006100790073010000003d0000023e000000c900fffffffb0000002000730065006c0065006300740069006f006e00200062007500660066006500720200000138000000aa0000023a00000294fb00000014005700690064006500530074006500720065006f02000000e6000000d2000003ee0000030bfb0000000c004b0069006e0065006300740200000186000001060000030c00000261fb0000001600640065007000740068005f0069006d00610067006501000002810000013c0000002800fffffffb000000160063006f006c006f0072005f0069006d00610067006501000003c30000013c0000002800fffffffb0000000a0049006d0061006700650100000380000000160000000000000000fb0000000a0049006d006100670065010000039c000000160000000000000000fb0000001200520047004200200049006d0061006700650100000380000000160000000000000000fb000000160044006500700074006800200049006d006100670065010000039c000000160000000000000000000000010000010f000002b0fc0200000003fb0000001e0054006f006f006c002000500072006f00700065007200740069006500730100000041000000780000000000000000fb0000000a00560069006500770073000000003d000002b0000000a400fffffffb0000001200530065006c0065006300740069006f006e010000025a000000b200000000000000000000000200000490000000a9fc0100000001fb0000000a00560069006500770073030000004e00000080000002e10000019700000003000009ba0000003efc0100000002fb0000000800540069006d00650100000000000009ba000002fb00fffffffb0000000800540069006d006501000000000000045000000000000000000000085e000004c200000004000000040000000800000008fc0000000100000002000000010000000a0054006f006f006c00730100000000ffffffff0000000000000000
245 | Selection:
246 | collapsed: false
247 | Time:
248 | collapsed: false
249 | Tool Properties:
250 | collapsed: false
251 | Views:
252 | collapsed: true
253 | Width: 2490
254 | X: 70
255 | Y: 27
256 | color_image:
257 | collapsed: false
258 | depth_image:
259 | collapsed: false
260 |
--------------------------------------------------------------------------------