├── .gitignore
├── README.md
├── config
│   └── Denoiser.py
├── dataset
│   ├── Grasp6DDataset.py
│   └── __init__.py
├── demo
│   └── intro.png
├── eval.py
├── generate.py
├── models
│   ├── __init__.py
│   ├── denoiser.py
│   ├── loss.py
│   ├── noise_predictor.py
│   ├── pointnet_utils.py
│   └── utils.py
├── requirements.txt
├── robot_exp.py
├── train.py
├── utils
│   ├── __init__.py
│   ├── builder.py
│   ├── config_utils.py
│   ├── test_utils.py
│   └── trainer.py
└── visualize.py
/.gitignore:
--------------------------------------------------------------------------------
log/
__pycache__/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Language-Driven 6-DoF Grasp Detection Using Negative Prompt Guidance

[arXiv](https://arxiv.org/abs/2407.13842)
[Project page](https://airvlab.github.io/grasp-anything/)

ECCV 2024 Oral

We address the task of language-driven 6-DoF grasp detection in cluttered point clouds. We introduce a novel diffusion model that incorporates the new concept of negative prompt guidance learning. The proposed negative prompt guidance tackles the fine-grained challenge of language-driven grasp detection by directing the detection process toward the desired object while steering it away from undesired ones.
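At a high level, this negative prompt guidance is reminiscent of classifier-free guidance: the denoiser's noise prediction conditioned on the desired object is contrasted with a prediction conditioned on undesired objects. The snippet below is a minimal, hypothetical sketch of such a guided denoising step; the function name `guided_noise`, the pose dimensionality, and the guidance weight `w` are illustrative assumptions, not the exact formulation used in this codebase.

```python
import torch

def guided_noise(noise_predictor, x_t, t, pos_embed, neg_embed, w=3.0):
    """One negative-prompt-guided denoising step (hypothetical sketch).

    noise_predictor(x_t, t, cond) -> predicted noise with the same shape as x_t.
    pos_embed / neg_embed: text embeddings of the desired / undesired objects.
    w: guidance weight; larger values steer the sample harder toward pos_embed.
    """
    eps_pos = noise_predictor(x_t, t, pos_embed)  # conditioned on the target prompt
    eps_neg = noise_predictor(x_t, t, neg_embed)  # conditioned on the negative prompt
    # Classifier-free-guidance-style combination: start from the negative-conditioned
    # prediction and move along the direction toward the positive-conditioned one.
    return eps_neg + w * (eps_pos - eps_neg)


if __name__ == "__main__":
    # Tiny smoke test with a dummy predictor standing in for the real noise predictor.
    dummy = lambda x, t, c: 0.1 * x + c.mean()
    x_t = torch.randn(4, 7)  # e.g. a batch of noisy grasp poses (dimensionality assumed)
    eps = guided_noise(dummy, x_t, t=10,
                       pos_embed=torch.randn(512), neg_embed=torch.randn(512))
    print(eps.shape)
```
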
## 1. Setup
Create a new conda environment and install the necessary packages:

    conda create -n l6gd python=3.9
    conda activate l6gd
    conda install pip
    pip install -r requirements.txt

## 2. Download Grasp-Anything-6D dataset
You can request access to our HuggingFace dataset at [our project page](https://airvlab.github.io/grasp-anything/).

## 3. Training
To start training the model, run

    python3 train.py --config
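
The `--config` flag expects a path to a configuration file. Purely as an illustration, assuming the `config/Denoiser.py` file listed in the directory tree above is the intended training configuration (an assumption, not confirmed by this section), the invocation might look like:

    # hypothetical example; substitute the actual config path for your run
    python3 train.py --config ./config/Denoiser.py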