├── vllm_group
│   ├── __init__.py
│   └── llms.py
├── setup.py
├── .pre-commit-config.yaml
├── LICENSE
├── README.md
└── .gitignore

/vllm_group/__init__.py:
--------------------------------------------------------------------------------
from .llms import LLMs

--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
from setuptools import setup, find_packages

setup(
    name="vllm_group",
    version="0.1",
    author="zhuzilin",
    url="https://github.com/zhuzilin/vllm-group",
    packages=find_packages(),
)

--------------------------------------------------------------------------------
/.pre-commit-config.yaml:
--------------------------------------------------------------------------------
repos:
  # Using this mirror lets us use mypyc-compiled black, which is about 2x faster
  - repo: https://github.com/psf/black-pre-commit-mirror
    rev: 24.2.0
    hooks:
      - id: black
        # It is recommended to specify the latest version of Python
        # supported by your project here, or alternatively use
        # pre-commit's default_language_version, see
        # https://pre-commit.com/#top_level-default_language_version
        language_version: python3.10
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
Copyright 2024 Zilin Zhu

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
## vLLM Group

In many use cases of [vllm-project/vllm](https://github.com/vllm-project/vllm), we want to use all the GPUs on the local node with data parallelism instead of an over-extended tensor parallelism. This project aims to help with that. You can use the following code to initialize `torch.cuda.device_count() // tensor_parallel_size` models with vLLM directly through Ray.

```python
from vllm_group import LLMs

# Note that currently the model needs to be downloaded to a local directory first.
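# For example, assuming the Hugging Face CLI is available, something like:
#   huggingface-cli download Qwen/Qwen2.5-7B-Instruct --local-dir /root/Qwen2.5-7B-Instruct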
llms = LLMs(
    "/root/Qwen2.5-7B-Instruct",
    tensor_parallel_size=2,
    trust_remote_code=True,
    func_of_seed=lambda idx: idx,
)
```

- Note that the value of any argument whose name starts with `func_of_` needs to be callable, so that each LLM can receive a different value for that argument. For example, with `func_of_seed=lambda idx: idx`, each LLM receives its own index as its `seed` argument.

You can then generate with each LLM asynchronously:

```python
import ray

outputs = []
for i in range(5):
    outputs.append(llms[i % len(llms)].generate.remote(
        "Hey, how are you doing?",
    ))

outputs = [ray.get(output) for output in outputs]
```

This project is heavily inspired by [OpenRLHF/OpenRLHF](https://github.com/OpenRLHF/OpenRLHF).

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
--------------------------------------------------------------------------------
/vllm_group/llms.py:
--------------------------------------------------------------------------------
import vllm
import ray
import torch


@ray.remote
class LLM:
    def __init__(self, *args, **kwargs):
        self.llm = vllm.LLM(*args, **kwargs)

    def generate(self, *args, **kwargs):
        return self.llm.generate(*args, **kwargs)

    def encode(self, *args, **kwargs):
        return self.llm.encode(*args, **kwargs)


class LLMs:
    def __init__(
        self, *args, tensor_parallel_size=1, pipeline_parallel_size=1, **kwargs
    ):
        num_gpu = torch.cuda.device_count()
        num_gpu_per_llm = tensor_parallel_size * pipeline_parallel_size
        self.llms = []

        funcs = {}
        for key, value in kwargs.items():
            if key.startswith("func_of_"):
                assert callable(
                    value
                ), "value of arguments starting with 'func_of_' should be callable"
                funcs[key[len("func_of_") :]] = value

        for key in funcs:
            kwargs.pop(f"func_of_{key}")

        for idx in range(num_gpu // num_gpu_per_llm):
            other_kwargs = {key: value(idx) for key, value in funcs.items()}

            # The following code is from OpenRLHF.
            # The main idea is that when tensor_parallel_size == 1, vLLM uses the gpu_executor
            # and Ray is not involved, so we have to allocate a GPU for the gpu_executor here.
            # When tensor_parallel_size > 1, vLLM sets up the Ray workers itself, and we only
            # need to pass the placement group along via `placement_group_capture_child_tasks`.
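            # Concretely: with tensor_parallel_size == 1 this actor holds the single GPU itself
            # (num_gpus=1 below), while with tensor_parallel_size > 1 the actor requests no GPU
            # and the GPUs are reserved through the placement group for vLLM's own Ray workers.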
            num_gpus = int(tensor_parallel_size == 1)
            scheduling_strategy = None

            if tensor_parallel_size > 1:
                pg = ray.util.placement_group(
                    [{"GPU": 1, "CPU": 1} for _ in range(num_gpu_per_llm)]
                )
                scheduling_strategy = (
                    ray.util.scheduling_strategies.PlacementGroupSchedulingStrategy(
                        placement_group=pg,
                        placement_group_capture_child_tasks=True,
                        placement_group_bundle_index=0,
                    )
                )
            llm = LLM.options(
                num_cpus=1,
                num_gpus=num_gpus,
                scheduling_strategy=scheduling_strategy,
            ).remote(
                *args,
                tensor_parallel_size=tensor_parallel_size,
                pipeline_parallel_size=pipeline_parallel_size,
                **kwargs,
                **other_kwargs,
            )
            self.llms.append(llm)

    def __len__(self):
        return len(self.llms)

    def __getitem__(self, index):
        return self.llms[index]


if __name__ == "__main__":
    llms = LLMs(
        "/root/Qwen2.5-7B-Instruct",
        tensor_parallel_size=2,
        trust_remote_code=True,
        func_of_seed=lambda idx: idx,
    )
    outputs = []
    for i in range(5):
        outputs.append(
            llms[i % len(llms)].generate.remote(
                "Why more is different? Please explain in one sentence.",
                vllm.SamplingParams(
                    max_tokens=128,
                    top_p=0.7,
                    temperature=0.8,
                    stop=["", "<|im_end|>", "<|endoftext|>"],
                ),
            )
        )

    outputs = [ray.get(output) for output in outputs]
    for output in outputs:
        print(output[0].outputs[0].text)
        print("=" * 50)

--------------------------------------------------------------------------------