├── LICENSE
└── README.md

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2024 Qineng Wang (Aiden)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# Awesome-embodied-world-models-papers

A curated list of papers on world models and generative video models for embodied agents.
Papers with **real-robot experiments** are marked with 🤖. Papers with **open-source code** are marked with 🌟.

## Survey Papers

+ [arXiv 2024.11] **Understanding World or Predicting Future? A Comprehensive Survey of World Models** [[paper](https://arxiv.org/pdf/2411.14499)]

+ [arXiv 2024.07] **Aligning Cyber Space with Physical World: A Comprehensive Survey on Embodied AI** [[paper](https://arxiv.org/abs/2407.06886)] [[repo](https://github.com/HCPLab-SYSU/Embodied_AI_Paper_List)]

+ [arXiv 2024.05] **Is Sora a World Simulator? A Comprehensive Survey on General World Models and Beyond** [[paper](https://arxiv.org/abs/2405.03520)] [[repo](https://github.com/GigaAI-research/General-World-Models-Survey)]

## Method Papers

+ 🌟[arXiv 2025.01] **GameFactory: Creating New Games with Generative Interactive Videos** [[paper](https://arxiv.org/abs/2501.08325)] [[website](https://vvictoryuki.github.io/gamefactory/)] [[code](https://github.com/KwaiVGI/GameFactory)]

+ 🌟[paper 2025.01] **Cosmos World Foundation Model Platform for Physical AI** [[paper](https://d1qx31qr3h6wln.cloudfront.net/publications/NVIDIA%20Cosmos_4.pdf)] [[website](https://www.nvidia.com/en-us/ai/cosmos/)] [[code](https://github.com/NVIDIA/Cosmos)]

+ [arXiv 2025.01] **EnerVerse: Envisioning Embodied Future Space for Robotics Manipulation** [[paper](https://arxiv.org/abs/2501.01895)] [[website](https://sites.google.com/view/enerverse)]

+ [arXiv 2024.12] **GenEx: Generating an Explorable World** [[paper](https://arxiv.org/abs/2412.09624)] [[website](https://www.genex.world/)]

+ [blog 2024.12] **Genie 2: A large-scale foundation world model** [[blog](https://deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/)]

+ 🌟[arXiv 2024.12] **PlayGen: Playable Game Generation** [[paper](https://arxiv.org/abs/2412.00887)] [[website](http://124.156.151.207/)] [[code](https://github.com/GreatX3/Playable-Game-Generation)]

+ 🌟[arXiv 2024.12] **Motion Dreamer: Realizing Physically Coherent Video Generation through Scene-Aware Motion Reasoning** [[paper](https://arxiv.org/abs/2412.00547)] [[website](https://envision-research.github.io/MotionDreamer/)] [[code](https://github.com/EnVision-Research/MotionDreamer)]

+ [arXiv 2024.10] **EVA: An Embodied World Model for Future Video Anticipation** [[paper](https://arxiv.org/abs/2410.15461)]

+ 🌟[arXiv 2024.10] **AVID: Adapting Video Diffusion Models to World Models** [[paper](https://arxiv.org/abs/2410.12822)] [[website](https://sites.google.com/view/avid-world-model-adapters/home)] [[code](https://github.com/microsoft/causica/tree/main/research_experiments/avid)]

+ 🌟[blog 2024.10] **Oasis: A Universe in a Transformer** [[blog](https://decart.ai/articles/oasis-interactive-ai-video-game-model)] [[website](https://oasis-model.github.io/)] [[code](https://github.com/etched-ai/open-oasis)]

+ [arXiv 2024.08] **GameNGen: Diffusion Models Are Real-Time Game Engines** [[paper](https://arxiv.org/abs/2408.14837)] [[website](https://gamengen.github.io/)]

+ 🌟[arXiv 2024.06] **IRASim: Learning Interactive Real-Robot Action Simulators** [[paper](https://arxiv.org/abs/2406.14540)] [[website](https://gen-irasim.github.io/)] [[code](https://github.com/bytedance/IRASim)]

+ 🌟[arXiv 2024.06] **Pandora: Towards General World Model with Natural Language Actions and Video States** [[paper](https://arxiv.org/abs/2406.09455)] [[website](https://world-model.maitrix.org/)] [[code](https://github.com/maitrix-org/Pandora)]

+ 🌟[arXiv 2024.05] **iVideoGPT: Interactive VideoGPTs are Scalable World Models** [[paper](https://arxiv.org/abs/2405.15223)] [[website](https://thuml.github.io/iVideoGPT/)] [[code](https://github.com/thuml/iVideoGPT)] `NeurIPS 2024`

+ 🌟[arXiv 2024.05] **DIAMOND: Diffusion for World Modeling: Visual Details Matter in Atari** [[paper](https://arxiv.org/abs/2405.12399)] [[website](https://diamond-wm.github.io/)] [[code](https://github.com/eloialonso/diamond)] `NeurIPS 2024 Spotlight`

+ 🌟[arXiv 2024.04] **RoboDreamer: Learning Compositional World Models for Robot Imagination** [[paper](https://arxiv.org/abs/2404.12377)] [[website](https://robovideo.github.io/)] [[code](https://github.com/rainbow979/robodreamer)] `ICML 2024`

+ 🌟[arXiv 2024.03] **3D-VLA: A 3D Vision-Language-Action Generative World Model** [[paper](https://arxiv.org/abs/2403.09631)] [[website](https://vis-www.cs.umass.edu/3dvla/)] [[code](https://github.com/UMass-Foundation-Model/3D-VLA)] `ICML 2024`

+ [arXiv 2024.02] **Genie: Generative Interactive Environments** [[paper](https://arxiv.org/abs/2402.15391)] [[website](https://sites.google.com/view/genie-2024/?pli=1)] `ICML 2024 Best Paper`

+ 🤖[arXiv 2023.10] **UniSim: Learning Interactive Real-World Simulators** [[paper](https://arxiv.org/abs/2310.06114)] [[website](https://universal-simulator.github.io/unisim/)] `ICLR 2024`

+ [arXiv 2023.02] **UniPi: Learning Universal Policies via Text-Guided Video Generation** [[paper](https://arxiv.org/pdf/2302.00111)] [[website](https://universal-policy.github.io/unipi/)] `NeurIPS 2023`
--------------------------------------------------------------------------------