├── LICENSE
└── README.md


/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2024 OpenGVLab

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model

[![MIT license](https://img.shields.io/badge/License-MIT-blue.svg)](https://lbesson.mit-license.org/) [![arXiv](https://img.shields.io/badge/arXiv-2404.01342-red)](https://arxiv.org/abs/2404.01342)

## Abstract

<details><summary>CLICK for the full abstract</summary>

> Text-to-image (T2I) generative models have attracted significant attention and found extensive applications within and beyond academic research. For example, the Civitai community, a platform for T2I innovation, currently hosts an impressive array of 74,492 distinct models. However, this diversity presents a formidable challenge in selecting the most appropriate model and parameters, a process that typically requires numerous trials. Drawing inspiration from the tool usage research of large language models (LLMs), we introduce DiffAgent, an LLM agent designed to screen the accurate selection in seconds via API calls. DiffAgent leverages a novel two-stage training framework, SFTA, enabling it to accurately align T2I API responses with user input in accordance with human preferences. To train and evaluate DiffAgent's capabilities, we present DABench, a comprehensive dataset encompassing an extensive range of T2I APIs from the community. Our evaluations reveal that DiffAgent not only excels in identifying the appropriate T2I API but also underscores the effectiveness of the SFTA training framework.

</details>

We are open to any suggestions and discussions; feel free to contact us at [liruizhao@stu.xmu.edu.cn](mailto:liruizhao@stu.xmu.edu.cn).

## TODO

- [x] dataset
- [ ] data collection script
- [ ] pretrained model
- [ ] training code

## News

- 2024/04/15 - Our dataset DABench is now publicly accessible and can be retrieved from [Google Drive](https://drive.google.com/file/d/1-zqkHbuD1Di5eqLUspE3mzkRAmOCZYtZ/view?usp=sharing)!

## Contents

- [Install](#install)
- [Dataset](#dataset)
- [Usage](#usage)
- [Citation](#citation)

## Install

```
conda create -n diffagent python=3.9.17
conda activate diffagent
git clone https://github.com/OpenGVLab/DiffAgent.git
cd DiffAgent
pip install -r requirements.txt
```

## Dataset

Our work introduces a high-quality dataset, DABench, available via [Google Drive](https://drive.google.com/file/d/1-zqkHbuD1Di5eqLUspE3mzkRAmOCZYtZ/view?usp=sharing). It contains 50,482 Instruction-API pairs in total, covering both SD 1.5 and SD XL.
We also provide the corresponding mapping dictionaries to facilitate downloading the underlying models and reconstructing the full API information (see the loading sketch at the end of this document).

DABench is collected from Civitai ([license](https://github.com/civitai/civitai/blob/main/LICENSE)). The license carries potential legal implications for commercial use of this data, so any entity intending to use it commercially should first obtain explicit authorization from the website or the relevant authors.

## Usage


## Citation

If you use our work or the dataset in this repo, or find them helpful, please consider citing:

```
@article{zhao2024diffagent,
  title={DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model},
  author={Zhao, Lirui and Yang, Yue and Zhang, Kaipeng and Shao, Wenqi and Zhang, Yuxin and Qiao, Yu and Luo, Ping and Ji, Rongrong},
  journal={arXiv preprint arXiv:2404.01342},
  year={2024}
}
```

--------------------------------------------------------------------------------
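As referenced in the Dataset section, the snippet below shows one way the downloaded DABench files might be loaded and summarized. It is a minimal sketch only: the extracted directory name `DABench`, the per-file JSON layout, and the `model_id` field are illustrative assumptions rather than the dataset's documented schema, so adjust the path and field names after inspecting the actual files.

```
# Hypothetical loading sketch -- the real archive layout and JSON schema of
# DABench may differ; adjust the path, glob pattern, and field names as needed.
import json
from collections import Counter
from pathlib import Path


def load_dabench(root: str) -> list:
    """Collect records from every JSON file under the extracted DABench folder."""
    records = []
    for path in sorted(Path(root).rglob("*.json")):
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
        # A file may hold a list of records or a dict keyed by some identifier.
        records.extend(data if isinstance(data, list) else list(data.values()))
    return records


if __name__ == "__main__":
    pairs = load_dabench("DABench")  # path to the extracted Google Drive archive
    print(f"Loaded {len(pairs)} Instruction-API pairs")
    # Tally an assumed "model_id" field to see which APIs appear most often.
    counts = Counter(p.get("model_id", "unknown") for p in pairs if isinstance(p, dict))
    print(counts.most_common(10))
```

Run it from the directory that contains the extracted archive; the printed totals and per-model counts give a quick sanity check that the download and extraction succeeded.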