├── _config.yml
├── .gitattributes
├── images
│   └── im2.png
├── data
│   └── mogaze.7z
├── README.md
└── getting_started.md

/_config.yml:
--------------------------------------------------------------------------------
theme: jekyll-theme-minimal
--------------------------------------------------------------------------------
/.gitattributes:
--------------------------------------------------------------------------------
*.7z filter=lfs diff=lfs merge=lfs -text
--------------------------------------------------------------------------------
/images/im2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/humans-to-robots-motion/mogaze/HEAD/images/im2.png
--------------------------------------------------------------------------------
/data/mogaze.7z:
--------------------------------------------------------------------------------
version https://git-lfs.github.com/spec/v1
oid sha256:740b1f837c46a5b75056b45bfee5ab5d74ec5110ce9cc79e0213b05ddc5ce958
size 984442583
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
This dataset captures long sequences of full-body everyday manipulation tasks together with eye-gaze.

![sample](https://raw.githubusercontent.com/humans-to-robots-motion/mogaze/master/images/im2.png)

A detailed description can be found on [arXiv](https://arxiv.org/abs/2011.11552).

The motion data was captured using a traditional motion capture system based on reflective markers.
We additionally captured eye-gaze using a wearable pupil-tracking device.
The dataset can be used for the design and evaluation of full-body motion prediction algorithms.
Furthermore, our experiments show that eye-gaze is a powerful predictor of human intent.
The dataset includes 180 min of motion capture data in which 1627 pick-and-place actions are performed.

# Citation

When using this dataset, please cite the following paper in your work:
```bibtex
@article{kratzer2020mogaze,
  title={MoGaze: A Dataset of Full-Body Motions that Includes Workspace Geometry and Eye-Gaze},
  author={Kratzer, Philipp and Bihlmaier, Simon and Balachandra Midlagajni, Niteesh and Prakash, Rohit and Toussaint, Marc and Mainprice, Jim},
  journal={IEEE Robotics and Automation Letters (RAL)},
  year={2020}
}
```

# Data
All data is available in the single file "data/mogaze.7z".

# Getting Started
You can play back the data using the [humoro](https://github.com/PhilippJKratzer/humoro) library.

Here is a tutorial on getting started with it: [Getting Started](https://humans-to-robots-motion.github.io/mogaze/getting_started)

# Visualization

If you just want to visualize the data files, you can use the play\_traj.py example.
For instance, the following command plays the first file of participant one:
```bash
python3 examples/playback/play_traj.py mogaze/p1_1_human_data.hdf5 --gaze mogaze/p1_1_gaze_data.hdf5 --obj mogaze/p1_1_object_data.hdf5 --segfile mogaze/p1_1_segmentations.hdf5 --scene mogaze/scene.xml
```
--------------------------------------------------------------------------------
/getting_started.md:
--------------------------------------------------------------------------------
# Getting started: Display MoGaze data with humoro
This tutorial describes how to get started using the [MoGaze dataset](https://humans-to-robots-motion.github.io/mogaze/) with our pybullet-based library [humoro](https://github.com/PhilippJKratzer/humoro).

There is an [IPython notebook version](https://github.com/PhilippJKratzer/humoro/blob/master/examples/getting_started.ipynb) of this tutorial available.

## Installation
The installation has been tested on Ubuntu 18.04.

The packages python3 and python3-pip need to be installed and upgraded (skip this step if Python is already installed):
```bash
sudo apt install python3
sudo apt install python3-pip
python3 -m pip install --upgrade pip --user
```

Parts of the software use Qt5, which can be installed with:
```bash
sudo apt install qt5-default
```

Clone the repository:
```bash
git clone https://github.com/PhilippJKratzer/humoro.git
```

The Python requirements can be installed using:
```bash
cd humoro
python3 -m pip install -r requirements.txt --user
```

Finally, you can install humoro system-wide using:
```bash
sudo python3 setup.py install
```

Download and extract the dataset files:
```bash
wget https://github.com/humans-to-robots-motion/mogaze/raw/master/data/mogaze.7z
7z x mogaze.7z
```

## Playback Human data
Let's first take a closer look at the human data only. We can load a trajectory from file as follows:

```python
from humoro.trajectory import Trajectory

full_traj = Trajectory()
full_traj.loadTrajHDF5("mogaze/p1_1_human_data.hdf5")
```

The trajectory contains a data array, a description of the joints and some fixed joints used for scaling:

```python
print("The data has dimension timeframe, state_size:")
print(full_traj.data.shape)
print("")
print("This is a list of joint names (from the urdf) corresponding to the state dimensions:")
print(list(full_traj.description))
print("")
print("Some joints are used for scaling the human and do not change over time.")
print("They are available in a dictionary:")
print(full_traj.data_fixed)
```

To play the trajectory using the pybullet player, we spawn a human and add the trajectory to it:
```python
from humoro.player_pybullet import Player

pp = Player()
pp.spawnHuman("Human1")
pp.addPlaybackTraj(full_traj, "Human1")
```

A specific frame can be displayed with:
```python
pp.showFrame(3000)
```

Or a sequence of frames can be played using:
```python
pp.play(duration=360, startframe=3000)
```
It is also possible to use a Qt5 widget (pp.play_controls()) that allows fast-forwarding and skipping through the file. It also has some options for segmenting the data; we explain these in the segmentation section.
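Because `full_traj.data` is indexed by frame and state dimension and `full_traj.description` names those dimensions, individual joints can also be inspected directly. The following is a minimal sketch rather than part of humoro itself; it assumes the description entries decode to plain strings and that the joint name `rWristRotZ` (which also appears in the kinematics example below) is contained in the description:

```python
import numpy as np

# map joint names to state dimensions (decode in case the HDF5 strings are returned as bytes)
joint_names = [n.decode() if isinstance(n, bytes) else str(n) for n in full_traj.description]
idx = joint_names.index("rWristRotZ")

# extract this joint's values over the segment played above (frames 3000-3360)
angles = np.asarray(full_traj.data)[3000:3360, idx]
print("rWristRotZ over this segment: min %.3f, max %.3f" % (angles.min(), angles.max()))
```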
## Playback multiple humans at the same time
Often it is useful to display multiple human trajectories simultaneously, for example to show the output of a prediction and the ground truth side by side.

This can be achieved by spawning a second human and adding a trajectory to it. A trajectory also has an attribute *startframe*, which tells the player when the trajectory starts.

```python
pp.spawnHuman("Human2", color=[0., 1., 0., 1.])
# this extracts a subtrajectory from the full trajectory:
sub_traj = full_traj.subTraj(3000, 3360)
# we change the startframe of the sub_traj,
# thus the player will play it at a different time:
sub_traj.startframe = 4000
pp.addPlaybackTraj(sub_traj, "Human2")
pp.play(duration=360, startframe=4000)
```

## Loading Objects
There is a helper function that directly spawns the objects and adds their playback trajectories to the player:
```python
from humoro.load_scenes import autoload_objects

obj_trajs, obj_names = autoload_objects(pp, "mogaze/p1_1_object_data.hdf5", "mogaze/scene.xml")
pp.play(duration=360, startframe=3000)
```

You can access the object trajectories and names:
```python
print("objects:")
print(obj_names)
print("data shape for first object:")
print(obj_trajs[0].data.shape)  # 7 dimensions: 3 position + 4 quaternion rotation
```

## Loading Gaze
The gaze data can be loaded in the following way. Only a trajectory of gaze direction points is loaded; the start point comes from the "goggles" object.
```python
from humoro.gaze import load_gaze

gaze_traj = load_gaze("mogaze/p1_1_gaze_data.hdf5")
pp.addPlaybackTrajGaze(gaze_traj)
pp.play(duration=360, startframe=3000)
```

If you want to use the raw gaze data, the direction points need to be rotated by the calibration rotation:
```python
print("calibration rotation quaternion:")
print(gaze_traj.data_fixed['calibration'])
```

## Segmentations
The following loads a small Qt5 application that displays a time axis with the segmentations. The file is segmented according to when an object moves.

Note that while the QApplication is open, no new objects can be spawned in pybullet.
```python
pp.play_controls("mogaze/p1_1_segmentations.hdf5")
```

The label "null" means that no object is moving at the moment (e.g. when the human moves towards an object to pick it up).

It is also possible to use the segmentation file directly; it contains elements of the form (startframe, endframe, label):
```python
import h5py

with h5py.File("mogaze/p1_1_segmentations.hdf5", "r") as segfile:
    # print first 5 segments:
    for i in range(5):
        print(segfile["segments"][i])
```
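The segment boundaries can also be combined with the player from above to replay a single action. The snippet below is a small sketch rather than humoro functionality; it only assumes that each segment element unpacks into (startframe, endframe, label) as described above and that the labels may be stored as byte strings:

```python
import h5py

# collect the segments as plain tuples
with h5py.File("mogaze/p1_1_segmentations.hdf5", "r") as segfile:
    segments = [tuple(seg) for seg in segfile["segments"]]

# find the first segment in which an object moves and play it back with the player from above
for startframe, endframe, label in segments:
    if isinstance(label, bytes):
        label = label.decode()
    if label != "null":
        print("playing segment:", startframe, endframe, label)
        pp.play(duration=int(endframe) - int(startframe), startframe=int(startframe))
        break
```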
## Kinematics
In order to compute positions from the joint angle trajectory, pybullet can be used. We have a small helper class, which can be used like this:
```python
from humoro.kin_pybullet import HumanKin

kinematics = HumanKin()
kinematics.set_state(full_traj, 100)  # set state at frame 100
print("position of right wrist:")
wrist_id = kinematics.inv_index["rWristRotZ"]
pos = kinematics.get_position(wrist_id)
print(pos)
```

The Jacobian can be retrieved with:
```python
print(kinematics.get_jacobian(wrist_id))
```
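The Jacobian relates joint velocities to the Cartesian velocity of the wrist. As a rough usage sketch (not part of the tutorial itself), the snippet below multiplies it with a finite difference of two consecutive frames of `full_traj`; it assumes the Jacobian columns line up with the trajectory's state dimensions, which is why the shapes are checked first:

```python
import numpy as np

# approximate the wrist displacement between frames 100 and 101 (frame 100 was set above)
J = np.asarray(kinematics.get_jacobian(wrist_id))
dq = np.asarray(full_traj.data)[101] - np.asarray(full_traj.data)[100]
print("Jacobian shape:", J.shape, "state size:", dq.shape[0])

if J.shape[-1] == dq.shape[0]:
    # each row of J is assumed to correspond to one Cartesian component of the wrist motion
    print("approximate wrist displacement per frame:", J.dot(dq))
else:
    print("Jacobian columns do not match the trajectory state; map the dimensions first.")
```
--------------------------------------------------------------------------------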