├── .DS_Store ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE.txt ├── LowCodeLLM ├── .DS_Store ├── Dockerfile ├── README.md ├── config.template └── src │ ├── app.py │ ├── executingLLM.py │ ├── index.html │ ├── lowCodeLLM.py │ ├── openAIWrapper.py │ ├── planningLLM.py │ ├── requirements.txt │ ├── supervisord.conf │ └── test │ ├── test_execute.py │ ├── test_extend_workflow.py │ ├── test_get_workflow.py │ └── testcases │ ├── execute_test_cases.json │ ├── extend_workflow_test_cases.json │ └── get_workflow_test_cases.json ├── README.md ├── SECURITY.md ├── TaskMatrix.AI └── README.md ├── assets ├── demo.gif ├── demo_inf.gif ├── demo_short.gif ├── figure.jpg ├── low-code-demovideo.mp4 ├── low-code-llm.png ├── low-code-operation.png ├── overview.png └── paradigm.png ├── requirements.txt └── visual_chatgpt.py /.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chenfei-wu/TaskMatrix/4b7664f8d3a23804ac1b795d75a73efd162769f0/.DS_Store -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Microsoft Open Source Code of Conduct 2 | 3 | This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). 4 | 5 | Resources: 6 | 7 | - [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) 8 | - [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) 9 | - Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | 2 | To contribute to this GitHub project, you can follow these steps: 3 | 4 | 1. Fork the repository you want to contribute to by clicking the "Fork" button on the project page. 5 | 6 | 2. Clone the repository to your local machine and enter the newly created repo using the following commands: 7 | 8 | ``` 9 | git clone https://github.com/YOUR-GITHUB-USERNAME/TaskMatrix.git 10 | cd TaskMatrix 11 | ``` 12 | 3. Create a new branch for your changes using the following command: 13 | 14 | ``` 15 | git checkout -b "branch-name" 16 | ``` 17 | 4. Make your changes to the code or documentation. 18 | 19 | 5. Add the changes to the staging area using the following command: 20 | ``` 21 | git add . 22 | ``` 23 | 24 | 6. Commit the changes with a meaningful commit message using the following command: 25 | ``` 26 | git commit -m "your commit message" 27 | ``` 28 | 7. Push the changes to your forked repository using the following command: 29 | ``` 30 | git push origin branch-name 31 | ``` 32 | 8. Go to the GitHub website and navigate to your forked repository. 33 | 34 | 9. Click the "New pull request" button. 35 | 36 | 10. Select the branch you just pushed to and the branch you want to merge into on the original repository. 37 | 38 | 11. Add a description of your changes and click the "Create pull request" button. 39 | 40 | 12. Wait for the project maintainer to review your changes and provide feedback. 41 | 42 | 13. Make any necessary changes based on feedback and repeat steps 5-12 until your changes are accepted and merged into the main project. 43 | 44 | 14. 
Once your changes are merged, you can update your forked repository and local copy of the repository with the following commands: 45 | 46 | ``` 47 | git fetch upstream 48 | git checkout main 49 | git merge upstream/main 50 | ``` 51 | Finally, delete the branch you created with the following command: 52 | ``` 53 | git branch -d branch-name 54 | ``` 55 | That's it you made it 🐣⭐⭐ -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 Microsoft 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | 23 | The recommended models in this Repo are just examples, used for scientific research exploring the concept of task automation and benchmarking with the paper published at [Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models](https://arxiv.org/abs/2303.04671). Users can replace the models in this Repo according to their research needs. When using the recommended models in this Repo, you need to comply with the licenses of these models respectively. Microsoft shall not be held liable for any infringement of third-party rights resulting from your usage of this repo. Users agree to defend, indemnify and hold Microsoft harmless from and against all damages, costs, and attorneys' fees in connection with any claims arising from this Repo. If anyone believes that this Repo infringes on your rights, please notify the project owner [email](chewu@microsoft.com). 
24 | -------------------------------------------------------------------------------- /LowCodeLLM/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chenfei-wu/TaskMatrix/4b7664f8d3a23804ac1b795d75a73efd162769f0/LowCodeLLM/.DS_Store -------------------------------------------------------------------------------- /LowCodeLLM/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:22.04 2 | 3 | RUN apt-get -y update 4 | RUN apt-get install -y git python3.11 python3-pip supervisor 5 | RUN pip3 install --upgrade pip 6 | RUN pip3 install --upgrade setuptools 7 | RUN ln -s /usr/bin/python3 /usr/bin/python 8 | COPY src/requirements.txt requirements.txt 9 | RUN pip3 install -r requirements.txt 10 | 11 | COPY src /app/src 12 | 13 | WORKDIR /app/src 14 | ENV WORKERS 2 15 | CMD supervisord -c supervisord.conf 16 | -------------------------------------------------------------------------------- /LowCodeLLM/README.md: -------------------------------------------------------------------------------- 1 | # Low-code LLM 2 | 3 | **Low-code LLM** is a novel human-LLM interaction pattern, involving human in the loop to achieve more controllable and stable responses. 4 | 5 | See our paper: [Low-code LLM: Visual Programming over LLMs](https://arxiv.org/abs/2304.08103) 6 | 7 | In the future, [TaskMatrix.AI](https://arxiv.org/abs/2304.08103) can enhance task automation by breaking down tasks more effectively and utilizing existing foundation models and APIs of other AI models and systems to achieve diversified tasks in both digital and physical domains. And the low-code human-LLM interaction pattern can enhance user's experience on controling over the process and expressing their preference. 8 | 9 | ## Video Demo 10 | https://user-images.githubusercontent.com/43716920/233937121-cd057f04-dec8-45b8-9c52-a9e9594eec80.mp4 11 | 12 | (This is a conceptual video demo to demonstrate the complete process) 13 | 14 | ## Quick Start 15 | Please note that due to time constraints, the code we provide is only the minimum viable version of the low-code LLM interactive code, i.e. only demonstrating the core concept of Low-code LLM human-LLM interaction. We welcome anyone who is interested in improving our front-end interface. 16 | Currently, both the `OpenAI API` and `Azure OpenAI Service` are supported. You would be required to provide the requisite information to invoke these APIs. 17 | 18 | ``` 19 | # clone the repo 20 | git clone https://github.com/microsoft/TaskMatrix.git 21 | 22 | # go to directlory 23 | cd LowCodeLLM 24 | 25 | # build and run docker 26 | docker build -t lowcode:latest . 27 | 28 | # If OpenAI API is being used, it is only necessary to provide the API key. 29 | docker run -p 8888:8888 --env OPENAIKEY={Your_Private_Openai_Key} lowcode:latest 30 | 31 | # When using Azure OpenAI Service, it is advisable to store the necessary information in a configuration file for ease of access. 32 | # Kindly duplicate the config.template file and name the copied file as config.ini. Then, fill out the necessary information in the config.ini file. 
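# For reference, a config.ini copied from config.template carries these keys
# (the values below are the template placeholders, not working credentials):
#   USE_AZURE=True
#   OPENAIKEY=your-azure-openai-service-key
#   API_BASE=your-base-url-for-azure
#   API_VERSION=2023-03-15-preview
#   MODEL=your-gpt-deployment-name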
33 | docker run -p 8888:8888 --env-file config.ini lowcode:latest 34 | ``` 35 | You can now try it by visiting [Demo page](http://localhost:8888/) 36 | 37 | 38 | ## System Overview 39 | 40 | overview 41 | 42 | As shown in the above figure, human-LLM interaction can be completed by: 43 | - A Planning LLM that generates a highly structured workflow for complex tasks. 44 | - Users editing the workflow with predefined low-code operations, which are all supported by clicking, dragging, or text editing. 45 | - An Executing LLM that generates responses with the reviewed workflow. 46 | - Users continuing to refine the workflow until satisfactory results are obtained. 47 | 48 | ## Six Kinds of Pre-defined Low-code Operations 49 | operations 50 | 51 | ## Advantages 52 | 53 | - **Controllable Generation.** Complicated tasks are decomposed into structured conducting plans and presented to users as workflows. Users can control the LLMs’ execution through low-code operations to achieve more controllable responses. The responses generated followed the customized workflow will be more aligned with the user’s requirements. 54 | - **Friendly Interaction.** The intuitive workflow enables users to swiftly comprehend the LLMs' execution logic, and the low-code operation through a graphical user interface empowers users to conveniently modify the workflow in a user-friendly manner. In this way, time-consuming prompt engineering is mitigated, allowing users to efficiently implement their ideas into detailed instructions to achieve high-quality results. 55 | - **Wide applicability.** The proposed framework can be applied to a wide range of complex tasks across various domains, especially in situations where human's intelligence or preference are indispensable. 56 | 57 | 58 | ## Acknowledgement 59 | Part of this paper has been collaboratively crafted through interactions with the proposed Low-code LLM. The process began with GPT-4 outlining the framework, followed by the authors supplementing it with innovative ideas and refining the structure of the workflow. Ultimately, GPT-4 took charge of generating cohesive and compelling text. 60 | -------------------------------------------------------------------------------- /LowCodeLLM/config.template: -------------------------------------------------------------------------------- 1 | USE_AZURE=True 2 | OPENAIKEY=your-azure-openai-service-key 3 | API_BASE=your-base-url-for-azure 4 | API_VERSION=2023-03-15-preview 5 | MODEL=your-gpt-deployment-name -------------------------------------------------------------------------------- /LowCodeLLM/src/app.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Microsoft Corporation. 2 | # Licensed under the MIT License. 
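# Flask backend for the Low-code LLM demo. It serves index.html at "/" and
# exposes three JSON POST endpoints backed by lowCodeLLM (defined below):
#   /api/get_workflow    - body: {"task_prompt"}
#   /api/extend_workflow - body: {"task_prompt", "current_workflow", "step"}
#   /api/execute         - body: {"task_prompt", "confirmed_workflow", "history", "curr_input"}
# A hypothetical request against the container started in the README (port 8888):
#   curl -X POST http://localhost:8888/api/get_workflow \
#        -H "Content-Type: application/json" \
#        -d '{"task_prompt": "Write an essay about the drunk driving issue."}'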
3 | 4 | import os 5 | from flask import Flask, request, send_from_directory 6 | from flask_cors import CORS, cross_origin 7 | from lowCodeLLM import lowCodeLLM 8 | from flask.logging import default_handler 9 | import logging 10 | 11 | app = Flask('lowcode-llm', static_folder='', template_folder='') 12 | app.debug = True 13 | llm = lowCodeLLM() 14 | gunicorn_logger = logging.getLogger('gunicorn.error') 15 | app.logger = gunicorn_logger 16 | logging_format = logging.Formatter( 17 | '%(asctime)s - %(levelname)s - %(filename)s - %(funcName)s - %(lineno)s - %(message)s') 18 | default_handler.setFormatter(logging_format) 19 | 20 | @app.route("/") 21 | def index(): 22 | return send_from_directory(".", "index.html") 23 | 24 | @app.route('/api/get_workflow', methods=['POST']) 25 | @cross_origin() 26 | def get_workflow(): 27 | try: 28 | request_content = request.get_json() 29 | task_prompt = request_content['task_prompt'] 30 | workflow = llm.get_workflow(task_prompt) 31 | return workflow, 200 32 | except Exception as e: 33 | app.logger.error( 34 | 'failed to get_workflow, msg:%s, request data:%s' % (str(e), request.json)) 35 | return {'errmsg': 'internal errors'}, 500 36 | 37 | @app.route('/api/extend_workflow', methods=['POST']) 38 | @cross_origin() 39 | def extend_workflow(): 40 | try: 41 | request_content = request.get_json() 42 | task_prompt = request_content['task_prompt'] 43 | current_workflow = request_content['current_workflow'] 44 | step = request_content['step'] 45 | sub_workflow = llm.extend_workflow(task_prompt, current_workflow, step) 46 | return sub_workflow, 200 47 | except Exception as e: 48 | app.logger.error( 49 | 'failed to extend_workflow, msg:%s, request data:%s' % (str(e), request.json)) 50 | return {'errmsg': 'internal errors'}, 500 51 | 52 | @app.route('/api/execute', methods=['POST']) 53 | @cross_origin() 54 | def execute(): 55 | try: 56 | request_content = request.get_json() 57 | task_prompt = request_content['task_prompt'] 58 | confirmed_workflow = request_content['confirmed_workflow'] 59 | curr_input = request_content['curr_input'] 60 | history = request_content['history'] 61 | response = llm.execute(task_prompt,confirmed_workflow, history, curr_input) 62 | return response, 200 63 | except Exception as e: 64 | app.logger.error( 65 | 'failed to execute, msg:%s, request data:%s' % (str(e), request.json)) 66 | return {'errmsg': 'internal errors'}, 500 67 | -------------------------------------------------------------------------------- /LowCodeLLM/src/executingLLM.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Microsoft Corporation. 2 | # Licensed under the MIT License. 3 | 4 | from openAIWrapper import OpenAIWrapper 5 | 6 | EXECUTING_LLM_PREFIX = """Executing LLM is designed to provide outstanding responses. 7 | Executing LLM will be given a overall task as the background of the conversation between the Executing LLM and human. 8 | When providing response, Executing LLM MUST STICTLY follow the provided standard operating procedure (SOP). 9 | the SOP is formatted as: 10 | ''' 11 | STEP 1: [step name][step descriptions][[[if 'condition1'][Jump to STEP]], [[if 'condition2'][Jump to STEP]], ...] 12 | ''' 13 | here "[[[if 'condition1'][Jump to STEP n]]]" is judgmental logic. It means when you're performing this step, and if 'condition1' is satisfied, you will perform STEP n next. 14 | 15 | Remember: 16 | Executing LLM is facing a real human, who does not know what SOP is. 
17 | So, Do not show him/her the SOP steps you are following, or it will make him/her confused. Just response the answer. 18 | """ 19 | 20 | EXECUTING_LLM_SUFFIX = """ 21 | Remember: 22 | Executing LLM is facing a real human, who does not know what SOP is. 23 | So, Do not show him/her the SOP steps you are following, or it will make him/her confused. Just response the answer. 24 | """ 25 | 26 | class executingLLM: 27 | def __init__(self, temperature) -> None: 28 | self.prefix = EXECUTING_LLM_PREFIX 29 | self.suffix = EXECUTING_LLM_SUFFIX 30 | self.LLM = OpenAIWrapper(temperature) 31 | self.messages = [{"role": "system", "content": "You are a helpful assistant."}, 32 | {"role": "system", "content": self.prefix}] 33 | 34 | def execute(self, current_prompt, history): 35 | ''' provide LLM the dialogue history and the current prompt to get response ''' 36 | messages = self.messages + history 37 | messages.append({'role': 'user', "content": current_prompt + self.suffix}) 38 | response, status = self.LLM.run(messages) 39 | if status: 40 | return response 41 | else: 42 | return "OpenAI API error." -------------------------------------------------------------------------------- /LowCodeLLM/src/index.html: -------------------------------------------------------------------------------- 1 | 3 | 4 | 5 | 6 | 7 | 8 | Tutorial Demo 9 | 92 | 93 | 94 |

[index.html markup was stripped in this dump; the recoverable text is a "Low Code Demo" heading, a "Task:" input label, and a "loading..." indicator]
118 | 119 | 120 | 121 | 122 | 690 | 691 | -------------------------------------------------------------------------------- /LowCodeLLM/src/lowCodeLLM.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Microsoft Corporation. 2 | # Licensed under the MIT License. 3 | 4 | from planningLLM import planningLLM 5 | from executingLLM import executingLLM 6 | import json 7 | 8 | class lowCodeLLM: 9 | def __init__(self, PLLM_temperature=0.4, ELLM_temperature=0): 10 | self.PLLM = planningLLM(PLLM_temperature) 11 | self.ELLM = executingLLM(ELLM_temperature) 12 | 13 | def get_workflow(self, task_prompt): 14 | return self.PLLM.get_workflow(task_prompt) 15 | 16 | def extend_workflow(self, task_prompt, current_workflow, step=''): 17 | ''' generate a sub-workflow for one of steps 18 | - input: the current workflow, the step needs to extend 19 | - output: sub-workflow ''' 20 | workflow = self._json2txt(current_workflow) 21 | return self.PLLM.extend_workflow(task_prompt, workflow, step) 22 | 23 | def execute(self, task_prompt,confirmed_workflow, history, curr_input): 24 | ''' chat with the workflow-equipped low-code LLM ''' 25 | prompt = [{'role': 'system', "content": 'The overall task you are facing is: '+task_prompt+ 26 | '\nThe standard operating procedure(SOP) is:\n'+self._json2txt(confirmed_workflow)}] 27 | history = prompt + history 28 | response = self.ELLM.execute(curr_input, history) 29 | return response 30 | 31 | def _json2txt(self, workflow_json): 32 | ''' convert the json workflow to text''' 33 | def json2text_step(step): 34 | step_res = "" 35 | step_res += step["stepId"] + ": [" + step["stepName"] + "]" 36 | step_res += "[" + step["stepDescription"] + "][" 37 | for jump in step["jumpLogic"]: 38 | step_res += "[[" + jump["Condition"] + "][" + jump["Target"] + "]]," 39 | step_res += "]\n" 40 | return step_res 41 | 42 | workflow_txt = "" 43 | for step in json.loads(workflow_json): 44 | workflow_txt += json2text_step(step) 45 | for substep in step['extension']: 46 | workflow_txt += json2text_step(substep) 47 | return workflow_txt -------------------------------------------------------------------------------- /LowCodeLLM/src/openAIWrapper.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Microsoft Corporation. 2 | # Licensed under the MIT License. 
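# Thin wrapper around the openai ChatCompletion API (openai==0.27.x, as pinned
# in requirements.txt). It reads OPENAIKEY from the environment and, when
# USE_AZURE=true, switches to Azure OpenAI Service using API_BASE, API_VERSION
# and the MODEL deployment name. run(messages) returns a (response_text,
# success_flag) tuple so callers can fall back to an error message on failure.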
3 | 4 | import os 5 | import openai 6 | 7 | class OpenAIWrapper: 8 | def __init__(self, temperature): 9 | self.key = os.environ.get("OPENAIKEY") 10 | openai.api_key = self.key 11 | 12 | # Access the USE_AZURE environment variable 13 | self.use_azure = os.environ.get('USE_AZURE') 14 | 15 | # Check if USE_AZURE is defined 16 | if self.use_azure is not None: 17 | # Convert the USE_AZURE value to boolean 18 | self.use_azure = self.use_azure.lower() == 'true' 19 | else: 20 | self.use_azure = False 21 | 22 | if self.use_azure: 23 | openai.api_type = "azure" 24 | self.api_base = os.environ.get('API_BASE') 25 | openai.api_base = self.api_base 26 | self.api_version = os.environ.get('API_VERSION') 27 | openai.api_version = self.api_version 28 | self.engine = os.environ.get('MODEL') 29 | else: 30 | self.chat_model_id = "gpt-3.5-turbo" 31 | 32 | self.temperature = temperature 33 | self.max_tokens = 2048 34 | self.top_p = 1 35 | self.time_out = 7 36 | 37 | def run(self, prompt): 38 | return self._post_request_chat(prompt) 39 | 40 | def _post_request_chat(self, messages): 41 | try: 42 | if self.use_azure: 43 | response = openai.ChatCompletion.create( 44 | engine=self.engine, 45 | messages=messages, 46 | temperature=self.temperature, 47 | max_tokens=self.max_tokens, 48 | frequency_penalty=0, 49 | presence_penalty=0 50 | ) 51 | else: 52 | response = openai.ChatCompletion.create( 53 | model=self.chat_model_id, 54 | messages=messages, 55 | temperature=self.temperature, 56 | max_tokens=self.max_tokens, 57 | frequency_penalty=0, 58 | presence_penalty=0 59 | ) 60 | res = response['choices'][0]['message']['content'] 61 | return res, True 62 | except Exception as e: 63 | return "", False 64 | -------------------------------------------------------------------------------- /LowCodeLLM/src/planningLLM.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Microsoft Corporation. 2 | # Licensed under the MIT License. 3 | 4 | import re 5 | import json 6 | from openAIWrapper import OpenAIWrapper 7 | 8 | PLANNING_LLM_PREFIX = """Planning LLM is designed to provide a standard operating procedure so that an difficult task will be broken down into several steps, and the task will be easily solved by following these steps. 9 | Planning LLM is a powerful problem-solving assistant, and it only needs to analyze the task and provide standard operating procedure as guidance, but does not need actually to solve the problem. 10 | Sometimes there exists some unknown or undetermined situation, thus judgmental logic is needed: some "conditions" are listed, and the next step that should be carried out if a "condition" is satisfied is also listed. The judgmental logics are not necessary. 11 | Planning LLM MUST only provide standard operating procedure in the following format without any other words: 12 | ''' 13 | STEP 1: [step name][step descriptions][[[if 'condition1'][Jump to STEP]], [[[if 'condition1'][Jump to STEP]], [[if 'condition2'][Jump to STEP]], ...] 14 | STEP 2: [step name][step descriptions][[[if 'condition1'][Jump to STEP]], [[[if 'condition1'][Jump to STEP]], [[if 'condition2'][Jump to STEP]], ...] 15 | ... 16 | ''' 17 | 18 | For example: 19 | ''' 20 | STEP 1: [Brainstorming][Choose a topic or prompt, and generate ideas and organize them into an outline][] 21 | STEP 2: [Research][Gather information, take notes and organize them into the outline][[[lack of ideas][Jump to STEP 1]]] 22 | ... 
23 | ''' 24 | """ 25 | 26 | EXTEND_PREFIX = """ 27 | \nsome steps of the SOP provided by Planning LLM are too rough, so Planning LLM can also provide a detailed sub-SOP for the given step. 28 | Remember, Planning LLM take the overall SOP into consideration, and the sub-SOP MUST be consistent with the rest of the steps, and there MUST be no duplication in content between the extension and the original SOP. 29 | 30 | For example: 31 | If the overall SOP is: 32 | ''' 33 | STEP 1: [Brainstorming][Choose a topic or prompt, and generate ideas and organize them into an outline][] 34 | STEP 2: [Write][write the text][] 35 | ''' 36 | If the STEP 2: "write the text" is too rough and needs to be extended, then the response could be: 37 | ''' 38 | STEP 2.1: [Write the title][write the title of the essay][] 39 | STEP 2.2: [Write the body][write the body of the essay][[[if lack of materials][Jump to STEP 1]]] 40 | ''' 41 | """ 42 | 43 | PLANNING_LLM_SUFFIX = """\nRemember: Planning LLM is very strict to the format and NEVER reply any word other than the standard operating procedure. 44 | """ 45 | 46 | class planningLLM: 47 | def __init__(self, temperature) -> None: 48 | self.prefix = PLANNING_LLM_PREFIX 49 | self.suffix = PLANNING_LLM_SUFFIX 50 | self.LLM = OpenAIWrapper(temperature) 51 | self.messages = [{"role": "system", "content": "You are a helpful assistant."}] 52 | 53 | def get_workflow(self, task_prompt): 54 | ''' 55 | - input: task_prompt 56 | - output: workflow (json) 57 | ''' 58 | messages = self.messages + [{'role': 'user', "content": PLANNING_LLM_PREFIX+'\nThe task is:\n'+task_prompt+PLANNING_LLM_SUFFIX}] 59 | response, status = self.LLM.run(messages) 60 | if status: 61 | return self._txt2json(response) 62 | else: 63 | return "OpenAI API error." 64 | 65 | def extend_workflow(self, task_prompt, current_workflow, step): 66 | messages = self.messages + [{'role': 'user', "content": PLANNING_LLM_PREFIX+'\nThe task is:\n'+task_prompt+PLANNING_LLM_SUFFIX}] 67 | messages.append({'role': 'user', "content": EXTEND_PREFIX+ 68 | 'The current SOP is:\n'+current_workflow+ 69 | '\nThe step needs to be extended is:\n'+step+ 70 | PLANNING_LLM_SUFFIX}) 71 | response, status = self.LLM.run(messages) 72 | if status: 73 | return self._txt2json(response) 74 | else: 75 | return "OpenAI API error." 
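    # _txt2json below parses SOP lines of the form
    #   STEP n: [step name][step description][[[if 'condition'][Jump to STEP m]], ...]
    # into the JSON structure consumed by the front end and by lowCodeLLM._json2txt,
    # i.e. a list of {"stepId", "stepName", "stepDescription", "jumpLogic", "extension"}.
    # For example, "STEP 2: [Research][Gather notes][[[lack of ideas][Jump to STEP 1]]]"
    # yields jumpLogic [{"Condition": "lack of ideas", "Target": "STEP 1"}].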
76 | 77 | def _txt2json(self, workflow_txt): 78 | ''' convert the workflow in natural language to json format ''' 79 | workflow = [] 80 | try: 81 | steps = workflow_txt.split('\n') 82 | for step in steps: 83 | if step[0:4] != "STEP": 84 | continue 85 | left_indices = [_.start() for _ in re.finditer("\[", step)] 86 | right_indices = [_.start() for _ in re.finditer("\]", step)] 87 | step_id = step[: left_indices[0]-2] 88 | step_name = step[left_indices[0]+1: right_indices[0]] 89 | step_description = step[left_indices[1]+1: right_indices[1]] 90 | jump_str = step[left_indices[2]+1: right_indices[-1]] 91 | if re.findall(re.compile(r'[A-Za-z]',re.S), jump_str) == []: 92 | workflow.append({"stepId": step_id, "stepName": step_name, "stepDescription": step_description, "jumpLogic": [], "extension": []}) 93 | continue 94 | jump_logic = [] 95 | left_indices = [_.start() for _ in re.finditer('\[', jump_str)] 96 | right_indices = [_.start() for _ in re.finditer('\]', jump_str)] 97 | i = 1 98 | while i < len(left_indices): 99 | jump = {"Condition": jump_str[left_indices[i]+1: right_indices[i-1]], "Target": re.search(r'STEP\s\d', jump_str[left_indices[i+1]+1: right_indices[i]]).group(0)} 100 | jump_logic.append(jump) 101 | i += 3 102 | workflow.append({"stepId": step_id, "stepName": step_name, "stepDescription": step_description, "jumpLogic": jump_logic, "extension": []}) 103 | return json.dumps(workflow) 104 | except: 105 | print("Format error, please try again.") -------------------------------------------------------------------------------- /LowCodeLLM/src/requirements.txt: -------------------------------------------------------------------------------- 1 | Flask==2.2.5 2 | Flask_Cors==3.0.10 3 | openai==0.27.2 4 | gunicorn==20.1.0 5 | gevent==21.8.0 6 | -------------------------------------------------------------------------------- /LowCodeLLM/src/supervisord.conf: -------------------------------------------------------------------------------- 1 | [supervisord] 2 | nodaemon=true 3 | loglevel=info 4 | 5 | [program:flask] 6 | command=gunicorn --timeout 300 --bind "0.0.0.0:8888" "app:app" --log-level debug --capture-output --worker-class gevent 7 | priority=999 8 | stdout_logfile=/dev/stdout 9 | stdout_logfile_maxbytes=0 10 | stderr_logfile=/dev/stdout 11 | stderr_logfile_maxbytes=0 12 | autorestart=true -------------------------------------------------------------------------------- /LowCodeLLM/src/test/test_execute.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Microsoft Corporation. 2 | # Licensed under the MIT License. 3 | 4 | import json 5 | import sys 6 | import os 7 | import time 8 | sys.path.append(os.getcwd()) 9 | 10 | def test_extend_workflow(): 11 | from lowCodeLLM import lowCodeLLM 12 | cases = json.load(open("./test/testcases/execute_test_cases.json", "r")) 13 | llm = lowCodeLLM(0.5, 0) 14 | for c in cases: 15 | task_prompt = c["task_prompt"] 16 | confirmed_workflow = c["confirmed_workflow"] 17 | history = c["history"] 18 | curr_input = c["curr_input"] 19 | result = llm.execute(task_prompt, confirmed_workflow, history, curr_input) 20 | time.sleep(5) 21 | assert type(result) == str 22 | assert len(result) > 0 -------------------------------------------------------------------------------- /LowCodeLLM/src/test/test_extend_workflow.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Microsoft Corporation. 2 | # Licensed under the MIT License. 
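# Note: these tests call the real OpenAI/Azure endpoint through OpenAIWrapper,
# so OPENAIKEY (and the Azure variables when USE_AZURE=true) must be set.
# They are written for a pytest-style runner (pytest itself is not pinned in
# requirements.txt) and assume the working directory is LowCodeLLM/src so the
# relative ./test/testcases paths resolve, e.g.:
#   cd LowCodeLLM/src && python -m pytest test/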
3 | 4 | import json 5 | import sys 6 | import os 7 | import time 8 | sys.path.append(os.getcwd()) 9 | 10 | def test_extend_workflow(): 11 | from lowCodeLLM import lowCodeLLM 12 | cases = json.load(open("./test/testcases/extend_workflow_test_cases.json", "r")) 13 | llm = lowCodeLLM(0.5, 0) 14 | for c in cases: 15 | task_prompt = c["task_prompt"] 16 | current_workflow = c["current_workflow"] 17 | step = c["step"] 18 | result = llm.extend_workflow(task_prompt, current_workflow, step) 19 | time.sleep(5) 20 | assert len(json.loads(result)) >= 1 -------------------------------------------------------------------------------- /LowCodeLLM/src/test/test_get_workflow.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Microsoft Corporation. 2 | # Licensed under the MIT License. 3 | 4 | import json 5 | import sys 6 | import os 7 | sys.path.append(os.getcwd()) 8 | 9 | def test_get_workflow(): 10 | from lowCodeLLM import lowCodeLLM 11 | cases = json.load(open("./test/testcases/get_workflow_test_cases.json", "r")) 12 | llm = lowCodeLLM(0.5, 0) 13 | for c in cases: 14 | task_prompt = c["task_prompt"] 15 | result = llm.get_workflow(task_prompt) 16 | assert len(json.loads(result)) >= 1 -------------------------------------------------------------------------------- /LowCodeLLM/src/test/testcases/execute_test_cases.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "task_prompt": "Write an essay about drunk driving issue.", 4 | "confirmed_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Research\", \"stepDescription\": \"Gather statistics and information about drunk driving issue\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Identify the causes\", \"stepDescription\": \"Analyze the reasons behind drunk driving\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Examine the consequences\", \"stepDescription\": \"Investigate the outcomes of drunk driving\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Develop a prevention plan\", \"stepDescription\": \"Create a plan to prevent drunk driving\", \"jumpLogic\": [{\"Condition\": \"if 'lack of information'\", \"Target\": \"STEP 1\"}, {\"Condition\": \"if 'unclear causes'\", \"Target\": \"STEP 2\"}, {\"Condition\": \"if 'incomplete analysis'\", \"Target\": \"STEP 2\"}, {\"Condition\": \"if 'unrealistic plan'\", \"Target\": \"STEP 4\"}], \"extension\": []}]", 5 | "history": [], 6 | "curr_input": "write the essay and show me." 
7 | }, 8 | { 9 | "task_prompt": "Write an bolg about Microsoft.", 10 | "confirmed_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Research\", \"stepDescription\": \"Gather information about Microsoft's history, products, and services\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Analyze\", \"stepDescription\": \"Organize the gathered information into categories and identify key points\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Outline\", \"stepDescription\": \"Create an outline based on the key points and categories\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Write\", \"stepDescription\": \"Write a draft of the blog post using the outline as a guide\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 5\", \"stepName\": \"Edit\", \"stepDescription\": \"Review and revise the draft for clarity and accuracy\", \"jumpLogic\": [{\"Condition\": \"need for further editing\", \"Target\": \"STEP 4\"}], \"extension\": []}, {\"stepId\": \"STEP 6\", \"stepName\": \"Publish\", \"stepDescription\": \"Post the final version of the blog post on a suitable platform\", \"jumpLogic\": [], \"extension\": []}]", 11 | "history": [], 12 | "curr_input": "write the blog and show me." 13 | }, 14 | { 15 | "task_prompt": "I want to write a two-player battle game.", 16 | "confirmed_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Game Concept\", \"stepDescription\": \"Decide on the game concept and mechanics\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Game Design\", \"stepDescription\": \"Create a rough sketch of the game, including the game board, characters, and rules\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Programming\", \"stepDescription\": \"Write the code for the game\", \"jumpLogic\": [{\"Condition\": \"if 'game mechanics are too complex'\", \"Target\": \"STEP 1\"}], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Testing\", \"stepDescription\": \"Test the game for bugs and glitches\", \"jumpLogic\": [{\"Condition\": \"if 'gameplay is not balanced'\", \"Target\": \"STEP 2\"}], \"extension\": []}, {\"stepId\": \"STEP 5\", \"stepName\": \"Polishing\", \"stepDescription\": \"Add finishing touches to the game, including graphics and sound effects\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 6\", \"stepName\": \"Release\", \"stepDescription\": \"Publish the game for players to enjoy\", \"jumpLogic\": [], \"extension\": []}]", 17 | "history": [{"role": "asistant", "content": "sure, I can write it for you, do you want me show you the code?"}], 18 | "curr_input": "Sure." 
19 | } 20 | ] -------------------------------------------------------------------------------- /LowCodeLLM/src/test/testcases/extend_workflow_test_cases.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "task_prompt": "Write an essay about drunk driving issue.", 4 | "current_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Research\", \"stepDescription\": \"Gather statistics and information about drunk driving issue\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Identify the causes\", \"stepDescription\": \"Analyze the reasons behind drunk driving\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Examine the consequences\", \"stepDescription\": \"Investigate the outcomes of drunk driving\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Develop a prevention plan\", \"stepDescription\": \"Create a plan to prevent drunk driving\", \"jumpLogic\": [{\"Condition\": \"if 'lack of information'\", \"Target\": \"STEP 1\"}, {\"Condition\": \"if 'unclear causes'\", \"Target\": \"STEP 2\"}, {\"Condition\": \"if 'incomplete analysis'\", \"Target\": \"STEP 2\"}, {\"Condition\": \"if 'unrealistic plan'\", \"Target\": \"STEP 4\"}], \"extension\": []}]", 5 | "step": "STEP 1" 6 | }, 7 | { 8 | "task_prompt": "Write an bolg about Microsoft.", 9 | "current_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Research\", \"stepDescription\": \"Gather information about Microsoft's history, products, and services\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Analyze\", \"stepDescription\": \"Organize the gathered information into categories and identify key points\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Outline\", \"stepDescription\": \"Create an outline based on the key points and categories\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Write\", \"stepDescription\": \"Write a draft of the blog post using the outline as a guide\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 5\", \"stepName\": \"Edit\", \"stepDescription\": \"Review and revise the draft for clarity and accuracy\", \"jumpLogic\": [{\"Condition\": \"need for further editing\", \"Target\": \"STEP 4\"}], \"extension\": []}, {\"stepId\": \"STEP 6\", \"stepName\": \"Publish\", \"stepDescription\": \"Post the final version of the blog post on a suitable platform\", \"jumpLogic\": [], \"extension\": []}]", 10 | "step": "STEP 2" 11 | }, 12 | { 13 | "task_prompt": "I want to write a two-player battle game.", 14 | "current_workflow": "[{\"stepId\": \"STEP 1\", \"stepName\": \"Game Concept\", \"stepDescription\": \"Decide on the game concept and mechanics\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 2\", \"stepName\": \"Game Design\", \"stepDescription\": \"Create a rough sketch of the game, including the game board, characters, and rules\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 3\", \"stepName\": \"Programming\", \"stepDescription\": \"Write the code for the game\", \"jumpLogic\": [{\"Condition\": \"if 'game mechanics are too complex'\", \"Target\": \"STEP 1\"}], \"extension\": []}, {\"stepId\": \"STEP 4\", \"stepName\": \"Testing\", \"stepDescription\": \"Test the game for bugs and glitches\", \"jumpLogic\": [{\"Condition\": \"if 'gameplay is not balanced'\", \"Target\": \"STEP 2\"}], \"extension\": []}, {\"stepId\": \"STEP 5\", 
\"stepName\": \"Polishing\", \"stepDescription\": \"Add finishing touches to the game, including graphics and sound effects\", \"jumpLogic\": [], \"extension\": []}, {\"stepId\": \"STEP 6\", \"stepName\": \"Release\", \"stepDescription\": \"Publish the game for players to enjoy\", \"jumpLogic\": [], \"extension\": []}]", 15 | "step": "STEP 3" 16 | } 17 | ] -------------------------------------------------------------------------------- /LowCodeLLM/src/test/testcases/get_workflow_test_cases.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "task_prompt": "Write an essay about drunk driving issue." 4 | }, 5 | { 6 | "task_prompt": "Write an bolg about Microsoft." 7 | }, 8 | { 9 | "task_prompt": "I want to write a two-player battle game." 10 | } 11 | ] -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # TaskMatrix 2 | 3 | **TaskMatrix** connects ChatGPT and a series of Visual Foundation Models to enable **sending** and **receiving** images during chatting. 4 | 5 | See our paper: [Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models](https://arxiv.org/abs/2303.04671) 6 | 7 | 8 | Open in Spaces 9 | 10 | 11 | 12 | Open in Colab 13 | 14 | 15 | ## Updates: 16 | - Now TaskMatrix supports [GroundingDINO](https://github.com/IDEA-Research/GroundingDINO) and [segment-anything](https://github.com/facebookresearch/segment-anything)! Thanks **@jordddan** for his efforts. For the image editing case, `GroundingDINO` is first used to locate bounding boxes guided by given text, then `segment-anything` is used to generate the related mask, and finally stable diffusion inpainting is used to edit image based on the mask. 17 | - Firstly, run `python visual_chatgpt.py --load "Text2Box_cuda:0,Segmenting_cuda:0,Inpainting_cuda:0,ImageCaptioning_cuda:0"` 18 | - Then, say `find xxx in the image` or `segment xxx in the image`. `xxx` is an object. TaskMatrix will return the detection or segmentation result! 19 | 20 | 21 | - Now TaskMatrix can support Chinese! Thanks to **@Wang-Xiaodong1899** for his efforts. 22 | - We propose the **template** idea in TaskMatrix! 23 | - A template is a **pre-defined execution flow** that assists ChatGPT in assembling complex tasks involving multiple foundation models. 24 | - A template contains the **experiential solution** to complex tasks as determined by humans. 25 | - A template can **invoke multiple foundation models** or even **establish a new ChatGPT session** 26 | - To define a **template**, simply adding a class with attributes `template_model = True` 27 | - Thanks to **@ShengmingYin** and **@thebestannie** for providing a template example in `InfinityOutPainting` class (see the following gif) 28 | - Firstly, run `python visual_chatgpt.py --load "Inpainting_cuda:0,ImageCaptioning_cuda:0,VisualQuestionAnswering_cuda:0"` 29 | - Secondly, say `extend the image to 2048x1024` to TaskMatrix! 30 | - By simply creating an `InfinityOutPainting` template, TaskMatrix can seamlessly extend images to any size through collaboration with existing `ImageCaptioning`, `Inpainting`, and `VisualQuestionAnswering` foundation models, **without the need for additional training**. 31 | - **TaskMatrix needs the effort of the community! 
We crave your contribution to add new and interesting features!** 32 | 33 | 34 | 35 | ## Insight & Goal: 36 | On the one hand, **ChatGPT (or LLMs)** serves as a **general interface** that provides a broad and diverse understanding of a 37 | wide range of topics. On the other hand, **Foundation Models** serve as **domain experts** by providing deep knowledge in specific domains. 38 | By leveraging **both general and deep knowledge**, we aim to build an AI that is capable of handling various tasks. 39 | 40 | 41 | ## Demo 42 | 43 | 44 | ## System Architecture 45 | 46 | 47 |

[system architecture figure; image markup not preserved in this dump]
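The template mechanism described in the Updates section above can be sketched as follows. This is a hypothetical, simplified class for illustration only: the constructor parameters and the `inference` calls are assumptions, and the actual reference implementation is the `InfinityOutPainting` class in `visual_chatgpt.py`.

```python
class ExampleCaptionThenAsk:
    # Marking the class with template_model = True declares it as a template:
    # a pre-defined execution flow that reuses already-loaded foundation models
    # instead of introducing (and training) a new one.
    template_model = True

    def __init__(self, ImageCaptioning, VisualQuestionAnswering):
        # A template is constructed with the foundation model instances it
        # composes, so it adds no extra GPU memory.
        self.captioner = ImageCaptioning
        self.vqa = VisualQuestionAnswering

    def run(self, image_path, question):
        # Chain the experts: first describe the image, then answer a question
        # about it, and return both pieces of text.
        description = self.captioner.inference(image_path)
        answer = self.vqa.inference(f"{image_path},{question}")
        return f"{description} {answer}"
```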

48 | 49 | 50 | ## Quick Start 51 | 52 | ``` 53 | # clone the repo 54 | git clone https://github.com/microsoft/TaskMatrix.git 55 | 56 | # Go to directory 57 | cd visual-chatgpt 58 | 59 | # create a new environment 60 | conda create -n visgpt python=3.8 61 | 62 | # activate the new environment 63 | conda activate visgpt 64 | 65 | # prepare the basic environments 66 | pip install -r requirements.txt 67 | pip install git+https://github.com/IDEA-Research/GroundingDINO.git 68 | pip install git+https://github.com/facebookresearch/segment-anything.git 69 | 70 | # prepare your private OpenAI key (for Linux) 71 | export OPENAI_API_KEY={Your_Private_Openai_Key} 72 | 73 | # prepare your private OpenAI key (for Windows) 74 | set OPENAI_API_KEY={Your_Private_Openai_Key} 75 | 76 | # Start TaskMatrix ! 77 | # You can specify the GPU/CPU assignment by "--load", the parameter indicates which 78 | # Visual Foundation Model to use and where it will be loaded to 79 | # The model and device are separated by underline '_', the different models are separated by comma ',' 80 | # The available Visual Foundation Models can be found in the following table 81 | # For example, if you want to load ImageCaptioning to cpu and Text2Image to cuda:0 82 | # You can use: "ImageCaptioning_cpu,Text2Image_cuda:0" 83 | 84 | # Advice for CPU Users 85 | python visual_chatgpt.py --load ImageCaptioning_cpu,Text2Image_cpu 86 | 87 | # Advice for 1 Tesla T4 15GB (Google Colab) 88 | python visual_chatgpt.py --load "ImageCaptioning_cuda:0,Text2Image_cuda:0" 89 | 90 | # Advice for 4 Tesla V100 32GB 91 | python visual_chatgpt.py --load "Text2Box_cuda:0,Segmenting_cuda:0, 92 | Inpainting_cuda:0,ImageCaptioning_cuda:0, 93 | Text2Image_cuda:1,Image2Canny_cpu,CannyText2Image_cuda:1, 94 | Image2Depth_cpu,DepthText2Image_cuda:1,VisualQuestionAnswering_cuda:2, 95 | InstructPix2Pix_cuda:2,Image2Scribble_cpu,ScribbleText2Image_cuda:2, 96 | SegText2Image_cuda:2,Image2Pose_cpu,PoseText2Image_cuda:2, 97 | Image2Hed_cpu,HedText2Image_cuda:3,Image2Normal_cpu, 98 | NormalText2Image_cuda:3,Image2Line_cpu,LineText2Image_cuda:3" 99 | 100 | ``` 101 | 102 | ## GPU memory usage 103 | Here we list the GPU memory usage of each visual foundation model, you can specify which one you like: 104 | 105 | | Foundation Model | GPU Memory (MB) | 106 | |------------------------|-----------------| 107 | | ImageEditing | 3981 | 108 | | InstructPix2Pix | 2827 | 109 | | Text2Image | 3385 | 110 | | ImageCaptioning | 1209 | 111 | | Image2Canny | 0 | 112 | | CannyText2Image | 3531 | 113 | | Image2Line | 0 | 114 | | LineText2Image | 3529 | 115 | | Image2Hed | 0 | 116 | | HedText2Image | 3529 | 117 | | Image2Scribble | 0 | 118 | | ScribbleText2Image | 3531 | 119 | | Image2Pose | 0 | 120 | | PoseText2Image | 3529 | 121 | | Image2Seg | 919 | 122 | | SegText2Image | 3529 | 123 | | Image2Depth | 0 | 124 | | DepthText2Image | 3531 | 125 | | Image2Normal | 0 | 126 | | NormalText2Image | 3529 | 127 | | VisualQuestionAnswering| 1495 | 128 | 129 | ## Acknowledgement 130 | We appreciate the open source of the following projects: 131 | 132 | [Hugging Face](https://github.com/huggingface)   133 | [LangChain](https://github.com/hwchase17/langchain)   134 | [Stable Diffusion](https://github.com/CompVis/stable-diffusion)   135 | [ControlNet](https://github.com/lllyasviel/ControlNet)   136 | [InstructPix2Pix](https://github.com/timothybrooks/instruct-pix2pix)   137 | [CLIPSeg](https://github.com/timojl/clipseg)   138 | [BLIP](https://github.com/salesforce/BLIP)   139 | 140 | ## Contact Information 
141 | For help or issues using the TaskMatrix, please submit a GitHub issue. 142 | 143 | For other communications, please contact Chenfei WU (chewu@microsoft.com) or Nan DUAN (nanduan@microsoft.com). 144 | 145 | ## Trademark Notice 146 | 147 | Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies. 148 | 149 | ## Disclaimer 150 | The recommended models in this Repo are just examples, used for scientific research exploring the concept of task automation and benchmarking with the paper published at [Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models](https://arxiv.org/abs/2303.04671). Users can replace the models in this Repo according to their research needs. When using the recommended models in this Repo, you need to comply with the licenses of these models respectively. Microsoft shall not be held liable for any infringement of third-party rights resulting from your usage of this repo. Users agree to defend, indemnify and hold Microsoft harmless from and against all damages, costs, and attorneys' fees in connection with any claims arising from this Repo. If anyone believes that this Repo infringes on your rights, please notify the project owner [email](chewu@microsoft.com). 151 | -------------------------------------------------------------------------------- /SECURITY.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | ## Security 4 | 5 | Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). 6 | 7 | If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below. 8 | 9 | ## Reporting Security Issues 10 | 11 | **Please do not report security vulnerabilities through public GitHub issues.** 12 | 13 | Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). 14 | 15 | If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). 16 | 17 | You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). 
18 | 19 | Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: 20 | 21 | * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) 22 | * Full paths of source file(s) related to the manifestation of the issue 23 | * The location of the affected source code (tag/branch/commit or direct URL) 24 | * Any special configuration required to reproduce the issue 25 | * Step-by-step instructions to reproduce the issue 26 | * Proof-of-concept or exploit code (if possible) 27 | * Impact of the issue, including how an attacker might exploit the issue 28 | 29 | This information will help us triage your report more quickly. 30 | 31 | If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs. 32 | 33 | ## Preferred Languages 34 | 35 | We prefer all communications to be in English. 36 | 37 | ## Policy 38 | 39 | Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). 40 | 41 | -------------------------------------------------------------------------------- /TaskMatrix.AI/README.md: -------------------------------------------------------------------------------- 1 | ## Description 2 | **TaskMatrix.AI** is an AI ecosystem that can connect foundation models with millions of APIs for task completion. Unlike most previous work that aimed to improve a single AI model, TaskMatrix.AI focuses more on using existing foundation models (as a brain-like central system) and APIs of other AI models and systems (as sub-task solvers) to achieve diversified tasks in both digital and physical domains. Visual ChatGPT is just an example of applying TaskMatrix.AI to the visual domain. 3 | 4 | ## Paper 5 | [TaskMatrix.AI](https://arxiv.org/abs/2303.16434) 6 | 7 | ## Online System 8 | In progress, coming in the near future... 
9 | 10 | ## Paradigm Shift 11 | 12 | 13 | ## System Overview 14 | 15 | -------------------------------------------------------------------------------- /assets/demo.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chenfei-wu/TaskMatrix/4b7664f8d3a23804ac1b795d75a73efd162769f0/assets/demo.gif -------------------------------------------------------------------------------- /assets/demo_inf.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chenfei-wu/TaskMatrix/4b7664f8d3a23804ac1b795d75a73efd162769f0/assets/demo_inf.gif -------------------------------------------------------------------------------- /assets/demo_short.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chenfei-wu/TaskMatrix/4b7664f8d3a23804ac1b795d75a73efd162769f0/assets/demo_short.gif -------------------------------------------------------------------------------- /assets/figure.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chenfei-wu/TaskMatrix/4b7664f8d3a23804ac1b795d75a73efd162769f0/assets/figure.jpg -------------------------------------------------------------------------------- /assets/low-code-demovideo.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chenfei-wu/TaskMatrix/4b7664f8d3a23804ac1b795d75a73efd162769f0/assets/low-code-demovideo.mp4 -------------------------------------------------------------------------------- /assets/low-code-llm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chenfei-wu/TaskMatrix/4b7664f8d3a23804ac1b795d75a73efd162769f0/assets/low-code-llm.png -------------------------------------------------------------------------------- /assets/low-code-operation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chenfei-wu/TaskMatrix/4b7664f8d3a23804ac1b795d75a73efd162769f0/assets/low-code-operation.png -------------------------------------------------------------------------------- /assets/overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chenfei-wu/TaskMatrix/4b7664f8d3a23804ac1b795d75a73efd162769f0/assets/overview.png -------------------------------------------------------------------------------- /assets/paradigm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/chenfei-wu/TaskMatrix/4b7664f8d3a23804ac1b795d75a73efd162769f0/assets/paradigm.png -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | langchain==0.0.101 2 | torch==1.13.1 3 | torchvision==0.14.1 4 | wget==3.2 5 | accelerate 6 | addict 7 | albumentations 8 | basicsr 9 | controlnet-aux 10 | diffusers 11 | einops 12 | gradio 13 | imageio 14 | imageio-ffmpeg 15 | invisible-watermark 16 | kornia 17 | numpy 18 | omegaconf 19 | open_clip_torch 20 | openai 21 | opencv-python 22 | prettytable 23 | safetensors 24 | streamlit 25 | test-tube 26 | timm 27 | torchmetrics 28 | transformers 29 | webdataset 30 | yapf 31 | 
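The `--load` format described in the Quick Start above ("Model_device" entries separated by commas, e.g. `ImageCaptioning_cpu,Text2Image_cuda:0`) simply maps each Visual Foundation Model name to the device it should be loaded on. Below is a minimal illustrative sketch of that parsing, not necessarily the exact code used in `visual_chatgpt.py`, which follows:

```python
def parse_load_arg(load: str) -> dict:
    """Turn "ImageCaptioning_cpu,Text2Image_cuda:0" into
    {"ImageCaptioning": "cpu", "Text2Image": "cuda:0"}."""
    load_dict = {}
    for entry in load.split(","):
        # Split on the first underscore only: the model name carries no
        # underscore, and device strings look like "cpu" or "cuda:0".
        model, device = entry.strip().split("_", 1)
        load_dict[model] = device
    return load_dict


print(parse_load_arg("ImageCaptioning_cpu,Text2Image_cuda:0"))
```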
-------------------------------------------------------------------------------- /visual_chatgpt.py: -------------------------------------------------------------------------------- 1 | # Copyright (c) Microsoft Corporation. 2 | # Licensed under the MIT License. 3 | 4 | # coding: utf-8 5 | import os 6 | import gradio as gr 7 | import random 8 | import torch 9 | import cv2 10 | import re 11 | import uuid 12 | from PIL import Image, ImageDraw, ImageOps, ImageFont 13 | import math 14 | import numpy as np 15 | import argparse 16 | import inspect 17 | import tempfile 18 | from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation 19 | from transformers import pipeline, BlipProcessor, BlipForConditionalGeneration, BlipForQuestionAnswering 20 | from transformers import AutoImageProcessor, UperNetForSemanticSegmentation 21 | 22 | from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline, StableDiffusionInstructPix2PixPipeline 23 | from diffusers import EulerAncestralDiscreteScheduler 24 | from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler 25 | from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker 26 | 27 | from controlnet_aux import OpenposeDetector, MLSDdetector, HEDdetector 28 | 29 | from langchain.agents.initialize import initialize_agent 30 | from langchain.agents.tools import Tool 31 | from langchain.chains.conversation.memory import ConversationBufferMemory 32 | from langchain.llms.openai import OpenAI 33 | 34 | # Grounding DINO 35 | import groundingdino.datasets.transforms as T 36 | from groundingdino.models import build_model 37 | from groundingdino.util import box_ops 38 | from groundingdino.util.slconfig import SLConfig 39 | from groundingdino.util.utils import clean_state_dict, get_phrases_from_posmap 40 | 41 | # segment anything 42 | from segment_anything import build_sam, SamPredictor, SamAutomaticMaskGenerator 43 | import cv2 44 | import numpy as np 45 | import matplotlib.pyplot as plt 46 | import wget 47 | 48 | VISUAL_CHATGPT_PREFIX = """Visual ChatGPT is designed to be able to assist with a wide range of text and visual related tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. Visual ChatGPT is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. 49 | 50 | Visual ChatGPT is able to process and understand large amounts of text and images. As a language model, Visual ChatGPT can not directly read images, but it has a list of tools to finish different visual tasks. Each image will have a file name formed as "image/xxx.png", and Visual ChatGPT can invoke different tools to indirectly understand pictures. When talking about images, Visual ChatGPT is very strict to the file name and will never fabricate nonexistent files. When using tools to generate new image files, Visual ChatGPT is also known that the image may not be the same as the user's demand, and will use other visual question answering tools or description tools to observe the real image. Visual ChatGPT is able to use tools in a sequence, and is loyal to the tool observation outputs rather than faking the image content and image file name. It will remember to provide the file name from the last tool observation, if a new image is generated. 51 | 52 | Human may provide new figures to Visual ChatGPT with a description. 
The description helps Visual ChatGPT to understand this image, but Visual ChatGPT should use tools to finish following tasks, rather than directly imagine from the description. 53 | 54 | Overall, Visual ChatGPT is a powerful visual dialogue assistant tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. 55 | 56 | 57 | TOOLS: 58 | ------ 59 | 60 | Visual ChatGPT has access to the following tools:""" 61 | 62 | VISUAL_CHATGPT_FORMAT_INSTRUCTIONS = """To use a tool, please use the following format: 63 | 64 | ``` 65 | Thought: Do I need to use a tool? Yes 66 | Action: the action to take, should be one of [{tool_names}] 67 | Action Input: the input to the action 68 | Observation: the result of the action 69 | ``` 70 | 71 | When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format: 72 | 73 | ``` 74 | Thought: Do I need to use a tool? No 75 | {ai_prefix}: [your response here] 76 | ``` 77 | """ 78 | 79 | VISUAL_CHATGPT_SUFFIX = """You are very strict to the filename correctness and will never fake a file name if it does not exist. 80 | You will remember to provide the image file name loyally if it's provided in the last tool observation. 81 | 82 | Begin! 83 | 84 | Previous conversation history: 85 | {chat_history} 86 | 87 | New input: {input} 88 | Since Visual ChatGPT is a text language model, Visual ChatGPT must use tools to observe images rather than imagination. 89 | The thoughts and observations are only visible for Visual ChatGPT, Visual ChatGPT should remember to repeat important information in the final response for Human. 90 | Thought: Do I need to use a tool? {agent_scratchpad} Let's think step by step. 91 | """ 92 | 93 | VISUAL_CHATGPT_PREFIX_CN = """Visual ChatGPT 旨在能够协助完成范围广泛的文本和视觉相关任务,从回答简单的问题到提供对广泛主题的深入解释和讨论。 Visual ChatGPT 能够根据收到的输入生成类似人类的文本,使其能够进行听起来自然的对话,并提供连贯且与手头主题相关的响应。 94 | 95 | Visual ChatGPT 能够处理和理解大量文本和图像。作为一种语言模型,Visual ChatGPT 不能直接读取图像,但它有一系列工具来完成不同的视觉任务。每张图片都会有一个文件名,格式为“image/xxx.png”,Visual ChatGPT可以调用不同的工具来间接理解图片。在谈论图片时,Visual ChatGPT 对文件名的要求非常严格,绝不会伪造不存在的文件。在使用工具生成新的图像文件时,Visual ChatGPT也知道图像可能与用户需求不一样,会使用其他视觉问答工具或描述工具来观察真实图像。 Visual ChatGPT 能够按顺序使用工具,并且忠于工具观察输出,而不是伪造图像内容和图像文件名。如果生成新图像,它将记得提供上次工具观察的文件名。 96 | 97 | Human 可能会向 Visual ChatGPT 提供带有描述的新图形。描述帮助 Visual ChatGPT 理解这个图像,但 Visual ChatGPT 应该使用工具来完成以下任务,而不是直接从描述中想象。有些工具将会返回英文描述,但你对用户的聊天应当采用中文。 98 | 99 | 总的来说,Visual ChatGPT 是一个强大的可视化对话辅助工具,可以帮助处理范围广泛的任务,并提供关于范围广泛的主题的有价值的见解和信息。 100 | 101 | 工具列表: 102 | ------ 103 | 104 | Visual ChatGPT 可以使用这些工具:""" 105 | 106 | VISUAL_CHATGPT_FORMAT_INSTRUCTIONS_CN = """用户使用中文和你进行聊天,但是工具的参数应当使用英文。如果要调用工具,你必须遵循如下格式: 107 | 108 | ``` 109 | Thought: Do I need to use a tool? Yes 110 | Action: the action to take, should be one of [{tool_names}] 111 | Action Input: the input to the action 112 | Observation: the result of the action 113 | ``` 114 | 115 | 当你不再需要继续调用工具,而是对观察结果进行总结回复时,你必须使用如下格式: 116 | 117 | 118 | ``` 119 | Thought: Do I need to use a tool? No 120 | {ai_prefix}: [your response here] 121 | ``` 122 | """ 123 | 124 | VISUAL_CHATGPT_SUFFIX_CN = """你对文件名的正确性非常严格,而且永远不会伪造不存在的文件。 125 | 126 | 开始! 127 | 128 | 因为Visual ChatGPT是一个文本语言模型,必须使用工具去观察图片而不是依靠想象。 129 | 推理想法和观察结果只对Visual ChatGPT可见,需要记得在最终回复时把重要的信息重复给用户,你只能给用户返回中文句子。我们一步一步思考。在你使用工具时,工具的参数只能是英文。 130 | 131 | 聊天历史: 132 | {chat_history} 133 | 134 | 新输入: {input} 135 | Thought: Do I need to use a tool? 
{agent_scratchpad} 136 | """ 137 | 138 | os.makedirs('image', exist_ok=True) 139 | 140 | 141 | def seed_everything(seed): 142 | random.seed(seed) 143 | np.random.seed(seed) 144 | torch.manual_seed(seed) 145 | torch.cuda.manual_seed_all(seed) 146 | return seed 147 | 148 | 149 | def prompts(name, description): 150 | def decorator(func): 151 | func.name = name 152 | func.description = description 153 | return func 154 | 155 | return decorator 156 | 157 | 158 | def blend_gt2pt(old_image, new_image, sigma=0.15, steps=100): 159 | new_size = new_image.size 160 | old_size = old_image.size 161 | easy_img = np.array(new_image) 162 | gt_img_array = np.array(old_image) 163 | pos_w = (new_size[0] - old_size[0]) // 2 164 | pos_h = (new_size[1] - old_size[1]) // 2 165 | 166 | kernel_h = cv2.getGaussianKernel(old_size[1], old_size[1] * sigma) 167 | kernel_w = cv2.getGaussianKernel(old_size[0], old_size[0] * sigma) 168 | kernel = np.multiply(kernel_h, np.transpose(kernel_w)) 169 | 170 | kernel[steps:-steps, steps:-steps] = 1 171 | kernel[:steps, :steps] = kernel[:steps, :steps] / kernel[steps - 1, steps - 1] 172 | kernel[:steps, -steps:] = kernel[:steps, -steps:] / kernel[steps - 1, -(steps)] 173 | kernel[-steps:, :steps] = kernel[-steps:, :steps] / kernel[-steps, steps - 1] 174 | kernel[-steps:, -steps:] = kernel[-steps:, -steps:] / kernel[-steps, -steps] 175 | kernel = np.expand_dims(kernel, 2) 176 | kernel = np.repeat(kernel, 3, 2) 177 | 178 | weight = np.linspace(0, 1, steps) 179 | top = np.expand_dims(weight, 1) 180 | top = np.repeat(top, old_size[0] - 2 * steps, 1) 181 | top = np.expand_dims(top, 2) 182 | top = np.repeat(top, 3, 2) 183 | 184 | weight = np.linspace(1, 0, steps) 185 | down = np.expand_dims(weight, 1) 186 | down = np.repeat(down, old_size[0] - 2 * steps, 1) 187 | down = np.expand_dims(down, 2) 188 | down = np.repeat(down, 3, 2) 189 | 190 | weight = np.linspace(0, 1, steps) 191 | left = np.expand_dims(weight, 0) 192 | left = np.repeat(left, old_size[1] - 2 * steps, 0) 193 | left = np.expand_dims(left, 2) 194 | left = np.repeat(left, 3, 2) 195 | 196 | weight = np.linspace(1, 0, steps) 197 | right = np.expand_dims(weight, 0) 198 | right = np.repeat(right, old_size[1] - 2 * steps, 0) 199 | right = np.expand_dims(right, 2) 200 | right = np.repeat(right, 3, 2) 201 | 202 | kernel[:steps, steps:-steps] = top 203 | kernel[-steps:, steps:-steps] = down 204 | kernel[steps:-steps, :steps] = left 205 | kernel[steps:-steps, -steps:] = right 206 | 207 | pt_gt_img = easy_img[pos_h:pos_h + old_size[1], pos_w:pos_w + old_size[0]] 208 | gaussian_gt_img = kernel * gt_img_array + (1 - kernel) * pt_gt_img # gt img with blur img 209 | gaussian_gt_img = gaussian_gt_img.astype(np.int64) 210 | easy_img[pos_h:pos_h + old_size[1], pos_w:pos_w + old_size[0]] = gaussian_gt_img 211 | gaussian_img = Image.fromarray(easy_img) 212 | return gaussian_img 213 | 214 | 215 | def cut_dialogue_history(history_memory, keep_last_n_words=500): 216 | if history_memory is None or len(history_memory) == 0: 217 | return history_memory 218 | tokens = history_memory.split() 219 | n_tokens = len(tokens) 220 | print(f"history_memory:{history_memory}, n_tokens: {n_tokens}") 221 | if n_tokens < keep_last_n_words: 222 | return history_memory 223 | paragraphs = history_memory.split('\n') 224 | last_n_tokens = n_tokens 225 | while last_n_tokens >= keep_last_n_words: 226 | last_n_tokens -= len(paragraphs[0].split(' ')) 227 | paragraphs = paragraphs[1:] 228 | return '\n' + '\n'.join(paragraphs) 229 | 230 | 231 | def 
get_new_image_name(org_img_name, func_name="update"): 232 | head_tail = os.path.split(org_img_name) 233 | head = head_tail[0] 234 | tail = head_tail[1] 235 | name_split = tail.split('.')[0].split('_') 236 | this_new_uuid = str(uuid.uuid4())[:4] 237 | if len(name_split) == 1: 238 | most_org_file_name = name_split[0] 239 | else: 240 | assert len(name_split) == 4 241 | most_org_file_name = name_split[3] 242 | recent_prev_file_name = name_split[0] 243 | new_file_name = f'{this_new_uuid}_{func_name}_{recent_prev_file_name}_{most_org_file_name}.png' 244 | return os.path.join(head, new_file_name) 245 | 246 | class InstructPix2Pix: 247 | def __init__(self, device): 248 | print(f"Initializing InstructPix2Pix to {device}") 249 | self.device = device 250 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 251 | 252 | self.pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained("timbrooks/instruct-pix2pix", 253 | safety_checker=StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker'), 254 | torch_dtype=self.torch_dtype).to(device) 255 | self.pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(self.pipe.scheduler.config) 256 | 257 | @prompts(name="Instruct Image Using Text", 258 | description="useful when you want to the style of the image to be like the text. " 259 | "like: make it look like a painting. or make it like a robot. " 260 | "The input to this tool should be a comma separated string of two, " 261 | "representing the image_path and the text. ") 262 | def inference(self, inputs): 263 | """Change style of image.""" 264 | print("===>Starting InstructPix2Pix Inference") 265 | image_path, text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) 266 | original_image = Image.open(image_path) 267 | image = self.pipe(text, image=original_image, num_inference_steps=40, image_guidance_scale=1.2).images[0] 268 | updated_image_path = get_new_image_name(image_path, func_name="pix2pix") 269 | image.save(updated_image_path) 270 | print(f"\nProcessed InstructPix2Pix, Input Image: {image_path}, Instruct Text: {text}, " 271 | f"Output Image: {updated_image_path}") 272 | return updated_image_path 273 | 274 | 275 | class Text2Image: 276 | def __init__(self, device): 277 | print(f"Initializing Text2Image to {device}") 278 | self.device = device 279 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 280 | self.pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", 281 | torch_dtype=self.torch_dtype) 282 | self.pipe.to(device) 283 | self.a_prompt = 'best quality, extremely detailed' 284 | self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \ 285 | 'fewer digits, cropped, worst quality, low quality' 286 | 287 | @prompts(name="Generate Image From User Input Text", 288 | description="useful when you want to generate an image from a user input text and save it to a file. " 289 | "like: generate an image of an object or something, or generate an image that includes some objects. " 290 | "The input to this tool should be a string, representing the text used to generate image. 
") 291 | def inference(self, text): 292 | image_filename = os.path.join('image', f"{str(uuid.uuid4())[:8]}.png") 293 | prompt = text + ', ' + self.a_prompt 294 | image = self.pipe(prompt, negative_prompt=self.n_prompt).images[0] 295 | image.save(image_filename) 296 | print( 297 | f"\nProcessed Text2Image, Input Text: {text}, Output Image: {image_filename}") 298 | return image_filename 299 | 300 | 301 | class ImageCaptioning: 302 | def __init__(self, device): 303 | print(f"Initializing ImageCaptioning to {device}") 304 | self.device = device 305 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 306 | self.processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") 307 | self.model = BlipForConditionalGeneration.from_pretrained( 308 | "Salesforce/blip-image-captioning-base", torch_dtype=self.torch_dtype).to(self.device) 309 | 310 | @prompts(name="Get Photo Description", 311 | description="useful when you want to know what is inside the photo. receives image_path as input. " 312 | "The input to this tool should be a string, representing the image_path. ") 313 | def inference(self, image_path): 314 | inputs = self.processor(Image.open(image_path), return_tensors="pt").to(self.device, self.torch_dtype) 315 | out = self.model.generate(**inputs) 316 | captions = self.processor.decode(out[0], skip_special_tokens=True) 317 | print(f"\nProcessed ImageCaptioning, Input Image: {image_path}, Output Text: {captions}") 318 | return captions 319 | 320 | 321 | class Image2Canny: 322 | def __init__(self, device): 323 | print("Initializing Image2Canny") 324 | self.low_threshold = 100 325 | self.high_threshold = 200 326 | 327 | @prompts(name="Edge Detection On Image", 328 | description="useful when you want to detect the edge of the image. " 329 | "like: detect the edges of this image, or canny detection on image, " 330 | "or perform edge detection on this image, or detect the canny image of this image. 
" 331 | "The input to this tool should be a string, representing the image_path") 332 | def inference(self, inputs): 333 | image = Image.open(inputs) 334 | image = np.array(image) 335 | canny = cv2.Canny(image, self.low_threshold, self.high_threshold) 336 | canny = canny[:, :, None] 337 | canny = np.concatenate([canny, canny, canny], axis=2) 338 | canny = Image.fromarray(canny) 339 | updated_image_path = get_new_image_name(inputs, func_name="edge") 340 | canny.save(updated_image_path) 341 | print(f"\nProcessed Image2Canny, Input Image: {inputs}, Output Text: {updated_image_path}") 342 | return updated_image_path 343 | 344 | 345 | class CannyText2Image: 346 | def __init__(self, device): 347 | print(f"Initializing CannyText2Image to {device}") 348 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 349 | self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-canny", 350 | torch_dtype=self.torch_dtype) 351 | self.pipe = StableDiffusionControlNetPipeline.from_pretrained( 352 | "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker'), 353 | torch_dtype=self.torch_dtype) 354 | self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config) 355 | self.pipe.to(device) 356 | self.seed = -1 357 | self.a_prompt = 'best quality, extremely detailed' 358 | self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \ 359 | 'fewer digits, cropped, worst quality, low quality' 360 | 361 | @prompts(name="Generate Image Condition On Canny Image", 362 | description="useful when you want to generate a new real image from both the user description and a canny image." 363 | " like: generate a real image of a object or something from this canny image," 364 | " or generate a new real image of a object or something from this edge image. " 365 | "The input to this tool should be a comma separated string of two, " 366 | "representing the image_path and the user description. ") 367 | def inference(self, inputs): 368 | image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) 369 | image = Image.open(image_path) 370 | self.seed = random.randint(0, 65535) 371 | seed_everything(self.seed) 372 | prompt = f'{instruct_text}, {self.a_prompt}' 373 | image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt, 374 | guidance_scale=9.0).images[0] 375 | updated_image_path = get_new_image_name(image_path, func_name="canny2image") 376 | image.save(updated_image_path) 377 | print(f"\nProcessed CannyText2Image, Input Canny: {image_path}, Input Text: {instruct_text}, " 378 | f"Output Text: {updated_image_path}") 379 | return updated_image_path 380 | 381 | 382 | class Image2Line: 383 | def __init__(self, device): 384 | print("Initializing Image2Line") 385 | self.detector = MLSDdetector.from_pretrained('lllyasviel/ControlNet') 386 | 387 | @prompts(name="Line Detection On Image", 388 | description="useful when you want to detect the straight line of the image. " 389 | "like: detect the straight lines of this image, or straight line detection on image, " 390 | "or perform straight line detection on this image, or detect the straight line image of this image. 
" 391 | "The input to this tool should be a string, representing the image_path") 392 | def inference(self, inputs): 393 | image = Image.open(inputs) 394 | mlsd = self.detector(image) 395 | updated_image_path = get_new_image_name(inputs, func_name="line-of") 396 | mlsd.save(updated_image_path) 397 | print(f"\nProcessed Image2Line, Input Image: {inputs}, Output Line: {updated_image_path}") 398 | return updated_image_path 399 | 400 | 401 | class LineText2Image: 402 | def __init__(self, device): 403 | print(f"Initializing LineText2Image to {device}") 404 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 405 | self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-mlsd", 406 | torch_dtype=self.torch_dtype) 407 | self.pipe = StableDiffusionControlNetPipeline.from_pretrained( 408 | "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker'), 409 | torch_dtype=self.torch_dtype 410 | ) 411 | self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config) 412 | self.pipe.to(device) 413 | self.seed = -1 414 | self.a_prompt = 'best quality, extremely detailed' 415 | self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \ 416 | 'fewer digits, cropped, worst quality, low quality' 417 | 418 | @prompts(name="Generate Image Condition On Line Image", 419 | description="useful when you want to generate a new real image from both the user description " 420 | "and a straight line image. " 421 | "like: generate a real image of a object or something from this straight line image, " 422 | "or generate a new real image of a object or something from this straight lines. " 423 | "The input to this tool should be a comma separated string of two, " 424 | "representing the image_path and the user description. ") 425 | def inference(self, inputs): 426 | image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) 427 | image = Image.open(image_path) 428 | self.seed = random.randint(0, 65535) 429 | seed_everything(self.seed) 430 | prompt = f'{instruct_text}, {self.a_prompt}' 431 | image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt, 432 | guidance_scale=9.0).images[0] 433 | updated_image_path = get_new_image_name(image_path, func_name="line2image") 434 | image.save(updated_image_path) 435 | print(f"\nProcessed LineText2Image, Input Line: {image_path}, Input Text: {instruct_text}, " 436 | f"Output Text: {updated_image_path}") 437 | return updated_image_path 438 | 439 | 440 | class Image2Hed: 441 | def __init__(self, device): 442 | print("Initializing Image2Hed") 443 | self.detector = HEDdetector.from_pretrained('lllyasviel/ControlNet') 444 | 445 | @prompts(name="Hed Detection On Image", 446 | description="useful when you want to detect the soft hed boundary of the image. " 447 | "like: detect the soft hed boundary of this image, or hed boundary detection on image, " 448 | "or perform hed boundary detection on this image, or detect soft hed boundary image of this image. 
" 449 | "The input to this tool should be a string, representing the image_path") 450 | def inference(self, inputs): 451 | image = Image.open(inputs) 452 | hed = self.detector(image) 453 | updated_image_path = get_new_image_name(inputs, func_name="hed-boundary") 454 | hed.save(updated_image_path) 455 | print(f"\nProcessed Image2Hed, Input Image: {inputs}, Output Hed: {updated_image_path}") 456 | return updated_image_path 457 | 458 | 459 | class HedText2Image: 460 | def __init__(self, device): 461 | print(f"Initializing HedText2Image to {device}") 462 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 463 | self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-hed", 464 | torch_dtype=self.torch_dtype) 465 | self.pipe = StableDiffusionControlNetPipeline.from_pretrained( 466 | "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker'), 467 | torch_dtype=self.torch_dtype 468 | ) 469 | self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config) 470 | self.pipe.to(device) 471 | self.seed = -1 472 | self.a_prompt = 'best quality, extremely detailed' 473 | self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \ 474 | 'fewer digits, cropped, worst quality, low quality' 475 | 476 | @prompts(name="Generate Image Condition On Soft Hed Boundary Image", 477 | description="useful when you want to generate a new real image from both the user description " 478 | "and a soft hed boundary image. " 479 | "like: generate a real image of a object or something from this soft hed boundary image, " 480 | "or generate a new real image of a object or something from this hed boundary. " 481 | "The input to this tool should be a comma separated string of two, " 482 | "representing the image_path and the user description") 483 | def inference(self, inputs): 484 | image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) 485 | image = Image.open(image_path) 486 | self.seed = random.randint(0, 65535) 487 | seed_everything(self.seed) 488 | prompt = f'{instruct_text}, {self.a_prompt}' 489 | image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt, 490 | guidance_scale=9.0).images[0] 491 | updated_image_path = get_new_image_name(image_path, func_name="hed2image") 492 | image.save(updated_image_path) 493 | print(f"\nProcessed HedText2Image, Input Hed: {image_path}, Input Text: {instruct_text}, " 494 | f"Output Image: {updated_image_path}") 495 | return updated_image_path 496 | 497 | 498 | class Image2Scribble: 499 | def __init__(self, device): 500 | print("Initializing Image2Scribble") 501 | self.detector = HEDdetector.from_pretrained('lllyasviel/ControlNet') 502 | 503 | @prompts(name="Sketch Detection On Image", 504 | description="useful when you want to generate a scribble of the image. " 505 | "like: generate a scribble of this image, or generate a sketch from this image, " 506 | "detect the sketch from this image. 
" 507 | "The input to this tool should be a string, representing the image_path") 508 | def inference(self, inputs): 509 | image = Image.open(inputs) 510 | scribble = self.detector(image, scribble=True) 511 | updated_image_path = get_new_image_name(inputs, func_name="scribble") 512 | scribble.save(updated_image_path) 513 | print(f"\nProcessed Image2Scribble, Input Image: {inputs}, Output Scribble: {updated_image_path}") 514 | return updated_image_path 515 | 516 | 517 | class ScribbleText2Image: 518 | def __init__(self, device): 519 | print(f"Initializing ScribbleText2Image to {device}") 520 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 521 | self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-scribble", 522 | torch_dtype=self.torch_dtype) 523 | self.pipe = StableDiffusionControlNetPipeline.from_pretrained( 524 | "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker'), 525 | torch_dtype=self.torch_dtype 526 | ) 527 | self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config) 528 | self.pipe.to(device) 529 | self.seed = -1 530 | self.a_prompt = 'best quality, extremely detailed' 531 | self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \ 532 | 'fewer digits, cropped, worst quality, low quality' 533 | 534 | @prompts(name="Generate Image Condition On Sketch Image", 535 | description="useful when you want to generate a new real image from both the user description and " 536 | "a scribble image or a sketch image. " 537 | "The input to this tool should be a comma separated string of two, " 538 | "representing the image_path and the user description") 539 | def inference(self, inputs): 540 | image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) 541 | image = Image.open(image_path) 542 | self.seed = random.randint(0, 65535) 543 | seed_everything(self.seed) 544 | prompt = f'{instruct_text}, {self.a_prompt}' 545 | image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt, 546 | guidance_scale=9.0).images[0] 547 | updated_image_path = get_new_image_name(image_path, func_name="scribble2image") 548 | image.save(updated_image_path) 549 | print(f"\nProcessed ScribbleText2Image, Input Scribble: {image_path}, Input Text: {instruct_text}, " 550 | f"Output Image: {updated_image_path}") 551 | return updated_image_path 552 | 553 | 554 | class Image2Pose: 555 | def __init__(self, device): 556 | print("Initializing Image2Pose") 557 | self.detector = OpenposeDetector.from_pretrained('lllyasviel/ControlNet') 558 | 559 | @prompts(name="Pose Detection On Image", 560 | description="useful when you want to detect the human pose of the image. " 561 | "like: generate human poses of this image, or generate a pose image from this image. 
" 562 | "The input to this tool should be a string, representing the image_path") 563 | def inference(self, inputs): 564 | image = Image.open(inputs) 565 | pose = self.detector(image) 566 | updated_image_path = get_new_image_name(inputs, func_name="human-pose") 567 | pose.save(updated_image_path) 568 | print(f"\nProcessed Image2Pose, Input Image: {inputs}, Output Pose: {updated_image_path}") 569 | return updated_image_path 570 | 571 | 572 | class PoseText2Image: 573 | def __init__(self, device): 574 | print(f"Initializing PoseText2Image to {device}") 575 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 576 | self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-openpose", 577 | torch_dtype=self.torch_dtype) 578 | self.pipe = StableDiffusionControlNetPipeline.from_pretrained( 579 | "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker'), 580 | torch_dtype=self.torch_dtype) 581 | self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config) 582 | self.pipe.to(device) 583 | self.num_inference_steps = 20 584 | self.seed = -1 585 | self.unconditional_guidance_scale = 9.0 586 | self.a_prompt = 'best quality, extremely detailed' 587 | self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \ 588 | ' fewer digits, cropped, worst quality, low quality' 589 | 590 | @prompts(name="Generate Image Condition On Pose Image", 591 | description="useful when you want to generate a new real image from both the user description " 592 | "and a human pose image. " 593 | "like: generate a real image of a human from this human pose image, " 594 | "or generate a new real image of a human from this pose. 
" 595 | "The input to this tool should be a comma separated string of two, " 596 | "representing the image_path and the user description") 597 | def inference(self, inputs): 598 | image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) 599 | image = Image.open(image_path) 600 | self.seed = random.randint(0, 65535) 601 | seed_everything(self.seed) 602 | prompt = f'{instruct_text}, {self.a_prompt}' 603 | image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt, 604 | guidance_scale=9.0).images[0] 605 | updated_image_path = get_new_image_name(image_path, func_name="pose2image") 606 | image.save(updated_image_path) 607 | print(f"\nProcessed PoseText2Image, Input Pose: {image_path}, Input Text: {instruct_text}, " 608 | f"Output Image: {updated_image_path}") 609 | return updated_image_path 610 | 611 | class SegText2Image: 612 | def __init__(self, device): 613 | print(f"Initializing SegText2Image to {device}") 614 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 615 | self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-seg", 616 | torch_dtype=self.torch_dtype) 617 | self.pipe = StableDiffusionControlNetPipeline.from_pretrained( 618 | "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker'), 619 | torch_dtype=self.torch_dtype) 620 | self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config) 621 | self.pipe.to(device) 622 | self.seed = -1 623 | self.a_prompt = 'best quality, extremely detailed' 624 | self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \ 625 | ' fewer digits, cropped, worst quality, low quality' 626 | 627 | @prompts(name="Generate Image Condition On Segmentations", 628 | description="useful when you want to generate a new real image from both the user description and segmentations. " 629 | "like: generate a real image of a object or something from this segmentation image, " 630 | "or generate a new real image of a object or something from these segmentations. " 631 | "The input to this tool should be a comma separated string of two, " 632 | "representing the image_path and the user description") 633 | def inference(self, inputs): 634 | image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) 635 | image = Image.open(image_path) 636 | self.seed = random.randint(0, 65535) 637 | seed_everything(self.seed) 638 | prompt = f'{instruct_text}, {self.a_prompt}' 639 | image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt, 640 | guidance_scale=9.0).images[0] 641 | updated_image_path = get_new_image_name(image_path, func_name="segment2image") 642 | image.save(updated_image_path) 643 | print(f"\nProcessed SegText2Image, Input Seg: {image_path}, Input Text: {instruct_text}, " 644 | f"Output Image: {updated_image_path}") 645 | return updated_image_path 646 | 647 | 648 | class Image2Depth: 649 | def __init__(self, device): 650 | print("Initializing Image2Depth") 651 | self.depth_estimator = pipeline('depth-estimation') 652 | 653 | @prompts(name="Predict Depth On Image", 654 | description="useful when you want to detect depth of the image. like: generate the depth from this image, " 655 | "or detect the depth map on this image, or predict the depth for this image. 
" 656 | "The input to this tool should be a string, representing the image_path") 657 | def inference(self, inputs): 658 | image = Image.open(inputs) 659 | depth = self.depth_estimator(image)['depth'] 660 | depth = np.array(depth) 661 | depth = depth[:, :, None] 662 | depth = np.concatenate([depth, depth, depth], axis=2) 663 | depth = Image.fromarray(depth) 664 | updated_image_path = get_new_image_name(inputs, func_name="depth") 665 | depth.save(updated_image_path) 666 | print(f"\nProcessed Image2Depth, Input Image: {inputs}, Output Depth: {updated_image_path}") 667 | return updated_image_path 668 | 669 | 670 | class DepthText2Image: 671 | def __init__(self, device): 672 | print(f"Initializing DepthText2Image to {device}") 673 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 674 | self.controlnet = ControlNetModel.from_pretrained( 675 | "fusing/stable-diffusion-v1-5-controlnet-depth", torch_dtype=self.torch_dtype) 676 | self.pipe = StableDiffusionControlNetPipeline.from_pretrained( 677 | "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker'), 678 | torch_dtype=self.torch_dtype) 679 | self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config) 680 | self.pipe.to(device) 681 | self.seed = -1 682 | self.a_prompt = 'best quality, extremely detailed' 683 | self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \ 684 | ' fewer digits, cropped, worst quality, low quality' 685 | 686 | @prompts(name="Generate Image Condition On Depth", 687 | description="useful when you want to generate a new real image from both the user description and depth image. " 688 | "like: generate a real image of a object or something from this depth image, " 689 | "or generate a new real image of a object or something from the depth map. " 690 | "The input to this tool should be a comma separated string of two, " 691 | "representing the image_path and the user description") 692 | def inference(self, inputs): 693 | image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) 694 | image = Image.open(image_path) 695 | self.seed = random.randint(0, 65535) 696 | seed_everything(self.seed) 697 | prompt = f'{instruct_text}, {self.a_prompt}' 698 | image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt, 699 | guidance_scale=9.0).images[0] 700 | updated_image_path = get_new_image_name(image_path, func_name="depth2image") 701 | image.save(updated_image_path) 702 | print(f"\nProcessed DepthText2Image, Input Depth: {image_path}, Input Text: {instruct_text}, " 703 | f"Output Image: {updated_image_path}") 704 | return updated_image_path 705 | 706 | 707 | class Image2Normal: 708 | def __init__(self, device): 709 | print("Initializing Image2Normal") 710 | self.depth_estimator = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas") 711 | self.bg_threhold = 0.4 712 | 713 | @prompts(name="Predict Normal Map On Image", 714 | description="useful when you want to detect norm map of the image. " 715 | "like: generate normal map from this image, or predict normal map of this image. 
" 716 | "The input to this tool should be a string, representing the image_path") 717 | def inference(self, inputs): 718 | image = Image.open(inputs) 719 | original_size = image.size 720 | image = self.depth_estimator(image)['predicted_depth'][0] 721 | image = image.numpy() 722 | image_depth = image.copy() 723 | image_depth -= np.min(image_depth) 724 | image_depth /= np.max(image_depth) 725 | x = cv2.Sobel(image, cv2.CV_32F, 1, 0, ksize=3) 726 | x[image_depth < self.bg_threhold] = 0 727 | y = cv2.Sobel(image, cv2.CV_32F, 0, 1, ksize=3) 728 | y[image_depth < self.bg_threhold] = 0 729 | z = np.ones_like(x) * np.pi * 2.0 730 | image = np.stack([x, y, z], axis=2) 731 | image /= np.sum(image ** 2.0, axis=2, keepdims=True) ** 0.5 732 | image = (image * 127.5 + 127.5).clip(0, 255).astype(np.uint8) 733 | image = Image.fromarray(image) 734 | image = image.resize(original_size) 735 | updated_image_path = get_new_image_name(inputs, func_name="normal-map") 736 | image.save(updated_image_path) 737 | print(f"\nProcessed Image2Normal, Input Image: {inputs}, Output Depth: {updated_image_path}") 738 | return updated_image_path 739 | 740 | 741 | class NormalText2Image: 742 | def __init__(self, device): 743 | print(f"Initializing NormalText2Image to {device}") 744 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 745 | self.controlnet = ControlNetModel.from_pretrained( 746 | "fusing/stable-diffusion-v1-5-controlnet-normal", torch_dtype=self.torch_dtype) 747 | self.pipe = StableDiffusionControlNetPipeline.from_pretrained( 748 | "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker'), 749 | torch_dtype=self.torch_dtype) 750 | self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config) 751 | self.pipe.to(device) 752 | self.seed = -1 753 | self.a_prompt = 'best quality, extremely detailed' 754 | self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \ 755 | ' fewer digits, cropped, worst quality, low quality' 756 | 757 | @prompts(name="Generate Image Condition On Normal Map", 758 | description="useful when you want to generate a new real image from both the user description and normal map. " 759 | "like: generate a real image of a object or something from this normal map, " 760 | "or generate a new real image of a object or something from the normal map. 
" 761 | "The input to this tool should be a comma separated string of two, " 762 | "representing the image_path and the user description") 763 | def inference(self, inputs): 764 | image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) 765 | image = Image.open(image_path) 766 | self.seed = random.randint(0, 65535) 767 | seed_everything(self.seed) 768 | prompt = f'{instruct_text}, {self.a_prompt}' 769 | image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt, 770 | guidance_scale=9.0).images[0] 771 | updated_image_path = get_new_image_name(image_path, func_name="normal2image") 772 | image.save(updated_image_path) 773 | print(f"\nProcessed NormalText2Image, Input Normal: {image_path}, Input Text: {instruct_text}, " 774 | f"Output Image: {updated_image_path}") 775 | return updated_image_path 776 | 777 | 778 | class VisualQuestionAnswering: 779 | def __init__(self, device): 780 | print(f"Initializing VisualQuestionAnswering to {device}") 781 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 782 | self.device = device 783 | self.processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base") 784 | self.model = BlipForQuestionAnswering.from_pretrained( 785 | "Salesforce/blip-vqa-base", torch_dtype=self.torch_dtype).to(self.device) 786 | 787 | @prompts(name="Answer Question About The Image", 788 | description="useful when you need an answer for a question based on an image. " 789 | "like: what is the background color of the last image, how many cats in this figure, what is in this figure. " 790 | "The input to this tool should be a comma separated string of two, representing the image_path and the question") 791 | def inference(self, inputs): 792 | image_path, question = inputs.split(",")[0], ','.join(inputs.split(',')[1:]) 793 | raw_image = Image.open(image_path).convert('RGB') 794 | inputs = self.processor(raw_image, question, return_tensors="pt").to(self.device, self.torch_dtype) 795 | out = self.model.generate(**inputs) 796 | answer = self.processor.decode(out[0], skip_special_tokens=True) 797 | print(f"\nProcessed VisualQuestionAnswering, Input Image: {image_path}, Input Question: {question}, " 798 | f"Output Answer: {answer}") 799 | return answer 800 | 801 | 802 | class Segmenting: 803 | def __init__(self, device): 804 | print(f"Inintializing Segmentation to {device}") 805 | self.device = device 806 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 807 | self.model_checkpoint_path = os.path.join("checkpoints","sam") 808 | 809 | self.download_parameters() 810 | self.sam = build_sam(checkpoint=self.model_checkpoint_path).to(device) 811 | self.sam_predictor = SamPredictor(self.sam) 812 | self.mask_generator = SamAutomaticMaskGenerator(self.sam) 813 | 814 | self.saved_points = [] 815 | self.saved_labels = [] 816 | 817 | def download_parameters(self): 818 | url = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth" 819 | if not os.path.exists(self.model_checkpoint_path): 820 | wget.download(url,out=self.model_checkpoint_path) 821 | 822 | 823 | def show_mask(self, mask: np.ndarray,image: np.ndarray, 824 | random_color: bool = False, transparency=1) -> np.ndarray: 825 | 826 | """Visualize a mask on top of an image. 827 | Args: 828 | mask (np.ndarray): A 2D array of shape (H, W). 829 | image (np.ndarray): A 3D array of shape (H, W, 3). 830 | random_color (bool): Whether to use a random color for the mask. 
831 | Outputs: 832 | np.ndarray: A 3D array of shape (H, W, 3) with the mask 833 | visualized on top of the image. 834 | transparenccy: the transparency of the segmentation mask 835 | """ 836 | 837 | if random_color: 838 | color = np.concatenate([np.random.random(3)], axis=0) 839 | else: 840 | color = np.array([30 / 255, 144 / 255, 255 / 255]) 841 | h, w = mask.shape[-2:] 842 | mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1) * 255 843 | 844 | image = cv2.addWeighted(image, 0.7, mask_image.astype('uint8'), transparency, 0) 845 | 846 | 847 | return image 848 | 849 | def show_box(self, box, ax, label): 850 | x0, y0 = box[0], box[1] 851 | w, h = box[2] - box[0], box[3] - box[1] 852 | ax.add_patch(plt.Rectangle((x0, y0), w, h, edgecolor='green', facecolor=(0,0,0,0), lw=2)) 853 | ax.text(x0, y0, label) 854 | 855 | 856 | def get_mask_with_boxes(self, image_pil, image, boxes_filt): 857 | 858 | size = image_pil.size 859 | H, W = size[1], size[0] 860 | for i in range(boxes_filt.size(0)): 861 | boxes_filt[i] = boxes_filt[i] * torch.Tensor([W, H, W, H]) 862 | boxes_filt[i][:2] -= boxes_filt[i][2:] / 2 863 | boxes_filt[i][2:] += boxes_filt[i][:2] 864 | 865 | boxes_filt = boxes_filt.cpu() 866 | transformed_boxes = self.sam_predictor.transform.apply_boxes_torch(boxes_filt, image.shape[:2]).to(self.device) 867 | 868 | masks, _, _ = self.sam_predictor.predict_torch( 869 | point_coords = None, 870 | point_labels = None, 871 | boxes = transformed_boxes.to(self.device), 872 | multimask_output = False, 873 | ) 874 | return masks 875 | 876 | def segment_image_with_boxes(self, image_pil, image_path, boxes_filt, pred_phrases): 877 | 878 | image = cv2.imread(image_path) 879 | image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) 880 | self.sam_predictor.set_image(image) 881 | 882 | masks = self.get_mask_with_boxes(image_pil, image, boxes_filt) 883 | 884 | # draw output image 885 | 886 | for mask in masks: 887 | image = self.show_mask(mask[0].cpu().numpy(), image, random_color=True, transparency=0.3) 888 | 889 | updated_image_path = get_new_image_name(image_path, func_name="segmentation") 890 | 891 | new_image = Image.fromarray(image) 892 | new_image.save(updated_image_path) 893 | 894 | return updated_image_path 895 | 896 | def set_image(self, img) -> None: 897 | """Set the image for the predictor.""" 898 | with torch.cuda.amp.autocast(): 899 | self.sam_predictor.set_image(img) 900 | 901 | def show_points(self, coords: np.ndarray, labels: np.ndarray, 902 | image: np.ndarray) -> np.ndarray: 903 | """Visualize points on top of an image. 904 | 905 | Args: 906 | coords (np.ndarray): A 2D array of shape (N, 2). 907 | labels (np.ndarray): A 1D array of shape (N,). 908 | image (np.ndarray): A 3D array of shape (H, W, 3). 909 | Returns: 910 | np.ndarray: A 3D array of shape (H, W, 3) with the points 911 | visualized on top of the image. 
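            Note: positive clicks (label 1) are drawn as filled green circles and negative clicks (label 0) as filled red circles.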
912 | """ 913 | pos_points = coords[labels == 1] 914 | neg_points = coords[labels == 0] 915 | for p in pos_points: 916 | image = cv2.circle( 917 | image, p.astype(int), radius=3, color=(0, 255, 0), thickness=-1) 918 | for p in neg_points: 919 | image = cv2.circle( 920 | image, p.astype(int), radius=3, color=(255, 0, 0), thickness=-1) 921 | return image 922 | 923 | 924 | def segment_image_with_click(self, img, is_positive: bool, 925 | evt: gr.SelectData): 926 | 927 | self.sam_predictor.set_image(img) 928 | self.saved_points.append([evt.index[0], evt.index[1]]) 929 | self.saved_labels.append(1 if is_positive else 0) 930 | input_point = np.array(self.saved_points) 931 | input_label = np.array(self.saved_labels) 932 | 933 | # Predict the mask 934 | with torch.cuda.amp.autocast(): 935 | masks, scores, logits = self.sam_predictor.predict( 936 | point_coords=input_point, 937 | point_labels=input_label, 938 | multimask_output=False, 939 | ) 940 | 941 | img = self.show_mask(masks[0], img, random_color=False, transparency=0.3) 942 | 943 | img = self.show_points(input_point, input_label, img) 944 | 945 | return img 946 | 947 | def segment_image_with_coordinate(self, img, is_positive: bool, 948 | coordinate: tuple): 949 | ''' 950 | Args: 951 | img (numpy.ndarray): the given image, shape: H x W x 3. 952 | is_positive: whether the click is positive, if want to add mask use True else False. 953 | coordinate: the position of the click 954 | If the position is (x,y), means click at the x-th column and y-th row of the pixel matrix. 955 | So x correspond to W, and y correspond to H. 956 | Output: 957 | img (PLI.Image.Image): the result image 958 | result_mask (numpy.ndarray): the result mask, shape: H x W 959 | 960 | Other parameters: 961 | transparency (float): the transparenccy of the mask 962 | to control he degree of transparency after the mask is superimposed. 963 | if transparency=1, then the masked part will be completely replaced with other colors. 964 | ''' 965 | self.sam_predictor.set_image(img) 966 | self.saved_points.append([coordinate[0], coordinate[1]]) 967 | self.saved_labels.append(1 if is_positive else 0) 968 | input_point = np.array(self.saved_points) 969 | input_label = np.array(self.saved_labels) 970 | 971 | # Predict the mask 972 | with torch.cuda.amp.autocast(): 973 | masks, scores, logits = self.sam_predictor.predict( 974 | point_coords=input_point, 975 | point_labels=input_label, 976 | multimask_output=False, 977 | ) 978 | 979 | 980 | img = self.show_mask(masks[0], img, random_color=False, transparency=0.3) 981 | 982 | img = self.show_points(input_point, input_label, img) 983 | 984 | img = Image.fromarray(img) 985 | 986 | result_mask = masks[0] 987 | 988 | return img, result_mask 989 | 990 | @prompts(name="Segment the Image", 991 | description="useful when you want to segment all the part of the image, but not segment a certain object." 992 | "like: segment all the object in this image, or generate segmentations on this image, " 993 | "or segment the image," 994 | "or perform segmentation on this image, " 995 | "or segment all the object in this image." 
996 | "The input to this tool should be a string, representing the image_path") 997 | def inference_all(self,image_path): 998 | image = cv2.imread(image_path) 999 | image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) 1000 | masks = self.mask_generator.generate(image) 1001 | plt.figure(figsize=(20,20)) 1002 | plt.imshow(image) 1003 | if len(masks) == 0: 1004 | return 1005 | sorted_anns = sorted(masks, key=(lambda x: x['area']), reverse=True) 1006 | ax = plt.gca() 1007 | ax.set_autoscale_on(False) 1008 | polygons = [] 1009 | color = [] 1010 | for ann in sorted_anns: 1011 | m = ann['segmentation'] 1012 | img = np.ones((m.shape[0], m.shape[1], 3)) 1013 | color_mask = np.random.random((1, 3)).tolist()[0] 1014 | for i in range(3): 1015 | img[:,:,i] = color_mask[i] 1016 | ax.imshow(np.dstack((img, m))) 1017 | 1018 | updated_image_path = get_new_image_name(image_path, func_name="segment-image") 1019 | plt.axis('off') 1020 | plt.savefig( 1021 | updated_image_path, 1022 | bbox_inches="tight", dpi=300, pad_inches=0.0 1023 | ) 1024 | return updated_image_path 1025 | 1026 | class Text2Box: 1027 | def __init__(self, device): 1028 | print(f"Initializing ObjectDetection to {device}") 1029 | self.device = device 1030 | self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 1031 | self.model_checkpoint_path = os.path.join("checkpoints","groundingdino") 1032 | self.model_config_path = os.path.join("checkpoints","grounding_config.py") 1033 | self.download_parameters() 1034 | self.box_threshold = 0.3 1035 | self.text_threshold = 0.25 1036 | self.grounding = (self.load_model()).to(self.device) 1037 | 1038 | def download_parameters(self): 1039 | url = "https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth" 1040 | if not os.path.exists(self.model_checkpoint_path): 1041 | wget.download(url,out=self.model_checkpoint_path) 1042 | config_url = "https://raw.githubusercontent.com/IDEA-Research/GroundingDINO/main/groundingdino/config/GroundingDINO_SwinT_OGC.py" 1043 | if not os.path.exists(self.model_config_path): 1044 | wget.download(config_url,out=self.model_config_path) 1045 | def load_image(self,image_path): 1046 | # load image 1047 | image_pil = Image.open(image_path).convert("RGB") # load image 1048 | 1049 | transform = T.Compose( 1050 | [ 1051 | T.RandomResize([512], max_size=1333), 1052 | T.ToTensor(), 1053 | T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), 1054 | ] 1055 | ) 1056 | image, _ = transform(image_pil, None) # 3, h, w 1057 | return image_pil, image 1058 | 1059 | def load_model(self): 1060 | args = SLConfig.fromfile(self.model_config_path) 1061 | args.device = self.device 1062 | model = build_model(args) 1063 | checkpoint = torch.load(self.model_checkpoint_path, map_location="cpu") 1064 | load_res = model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False) 1065 | print(load_res) 1066 | _ = model.eval() 1067 | return model 1068 | 1069 | def get_grounding_boxes(self, image, caption, with_logits=True): 1070 | caption = caption.lower() 1071 | caption = caption.strip() 1072 | if not caption.endswith("."): 1073 | caption = caption + "." 
1074 | image = image.to(self.device) 1075 | with torch.no_grad(): 1076 | outputs = self.grounding(image[None], captions=[caption]) 1077 | logits = outputs["pred_logits"].cpu().sigmoid()[0] # (nq, 256) 1078 | boxes = outputs["pred_boxes"].cpu()[0] # (nq, 4) 1079 | logits.shape[0] 1080 | 1081 | # filter output 1082 | logits_filt = logits.clone() 1083 | boxes_filt = boxes.clone() 1084 | filt_mask = logits_filt.max(dim=1)[0] > self.box_threshold 1085 | logits_filt = logits_filt[filt_mask] # num_filt, 256 1086 | boxes_filt = boxes_filt[filt_mask] # num_filt, 4 1087 | logits_filt.shape[0] 1088 | 1089 | # get phrase 1090 | tokenlizer = self.grounding.tokenizer 1091 | tokenized = tokenlizer(caption) 1092 | # build pred 1093 | pred_phrases = [] 1094 | for logit, box in zip(logits_filt, boxes_filt): 1095 | pred_phrase = get_phrases_from_posmap(logit > self.text_threshold, tokenized, tokenlizer) 1096 | if with_logits: 1097 | pred_phrases.append(pred_phrase + f"({str(logit.max().item())[:4]})") 1098 | else: 1099 | pred_phrases.append(pred_phrase) 1100 | 1101 | return boxes_filt, pred_phrases 1102 | 1103 | def plot_boxes_to_image(self, image_pil, tgt): 1104 | H, W = tgt["size"] 1105 | boxes = tgt["boxes"] 1106 | labels = tgt["labels"] 1107 | assert len(boxes) == len(labels), "boxes and labels must have same length" 1108 | 1109 | draw = ImageDraw.Draw(image_pil) 1110 | mask = Image.new("L", image_pil.size, 0) 1111 | mask_draw = ImageDraw.Draw(mask) 1112 | 1113 | # draw boxes and masks 1114 | for box, label in zip(boxes, labels): 1115 | # from 0..1 to 0..W, 0..H 1116 | box = box * torch.Tensor([W, H, W, H]) 1117 | # from xywh to xyxy 1118 | box[:2] -= box[2:] / 2 1119 | box[2:] += box[:2] 1120 | # random color 1121 | color = tuple(np.random.randint(0, 255, size=3).tolist()) 1122 | # draw 1123 | x0, y0, x1, y1 = box 1124 | x0, y0, x1, y1 = int(x0), int(y0), int(x1), int(y1) 1125 | 1126 | draw.rectangle([x0, y0, x1, y1], outline=color, width=6) 1127 | # draw.text((x0, y0), str(label), fill=color) 1128 | 1129 | font = ImageFont.load_default() 1130 | if hasattr(font, "getbbox"): 1131 | bbox = draw.textbbox((x0, y0), str(label), font) 1132 | else: 1133 | w, h = draw.textsize(str(label), font) 1134 | bbox = (x0, y0, w + x0, y0 + h) 1135 | # bbox = draw.textbbox((x0, y0), str(label)) 1136 | draw.rectangle(bbox, fill=color) 1137 | draw.text((x0, y0), str(label), fill="white") 1138 | 1139 | mask_draw.rectangle([x0, y0, x1, y1], fill=255, width=2) 1140 | 1141 | return image_pil, mask 1142 | 1143 | @prompts(name="Detect the Give Object", 1144 | description="useful when you only want to detect or find out given objects in the picture" 1145 | "The input to this tool should be a comma separated string of two, " 1146 | "representing the image_path, the text description of the object to be found") 1147 | def inference(self, inputs): 1148 | image_path, det_prompt = inputs.split(",") 1149 | print(f"image_path={image_path}, text_prompt={det_prompt}") 1150 | image_pil, image = self.load_image(image_path) 1151 | 1152 | boxes_filt, pred_phrases = self.get_grounding_boxes(image, det_prompt) 1153 | 1154 | size = image_pil.size 1155 | pred_dict = { 1156 | "boxes": boxes_filt, 1157 | "size": [size[1], size[0]], # H,W 1158 | "labels": pred_phrases,} 1159 | 1160 | image_with_box = self.plot_boxes_to_image(image_pil, pred_dict)[0] 1161 | 1162 | updated_image_path = get_new_image_name(image_path, func_name="detect-something") 1163 | updated_image = image_with_box.resize(size) 1164 | updated_image.save(updated_image_path) 1165 | 
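        # (Annotation, not original source.) The annotated image was saved above under a
        # new uuid-prefixed filename from get_new_image_name; its path is logged and
        # returned so the agent can pass the file to later tools.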
print( 1166 | f"\nProcessed ObejectDetecting, Input Image: {image_path}, Object to be Detect {det_prompt}, " 1167 | f"Output Image: {updated_image_path}") 1168 | return updated_image_path 1169 | 1170 | 1171 | class Inpainting: 1172 | def __init__(self, device): 1173 | self.device = device 1174 | self.revision = 'fp16' if 'cuda' in self.device else None 1175 | self.torch_dtype = torch.float16 if 'cuda' in self.device else torch.float32 1176 | 1177 | self.inpaint = StableDiffusionInpaintPipeline.from_pretrained( 1178 | "runwayml/stable-diffusion-inpainting", revision=self.revision, torch_dtype=self.torch_dtype,safety_checker=StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker')).to(device) 1179 | def __call__(self, prompt, image, mask_image, height=512, width=512, num_inference_steps=50): 1180 | update_image = self.inpaint(prompt=prompt, image=image.resize((width, height)), 1181 | mask_image=mask_image.resize((width, height)), height=height, width=width, num_inference_steps=num_inference_steps).images[0] 1182 | return update_image 1183 | 1184 | class InfinityOutPainting: 1185 | template_model = True # Add this line to show this is a template model. 1186 | def __init__(self, ImageCaptioning, Inpainting, VisualQuestionAnswering): 1187 | self.llm = OpenAI(temperature=0) 1188 | self.ImageCaption = ImageCaptioning 1189 | self.inpaint = Inpainting 1190 | self.ImageVQA = VisualQuestionAnswering 1191 | self.a_prompt = 'best quality, extremely detailed' 1192 | self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \ 1193 | 'fewer digits, cropped, worst quality, low quality' 1194 | 1195 | def get_BLIP_vqa(self, image, question): 1196 | inputs = self.ImageVQA.processor(image, question, return_tensors="pt").to(self.ImageVQA.device, 1197 | self.ImageVQA.torch_dtype) 1198 | out = self.ImageVQA.model.generate(**inputs) 1199 | answer = self.ImageVQA.processor.decode(out[0], skip_special_tokens=True) 1200 | print(f"\nProcessed VisualQuestionAnswering, Input Question: {question}, Output Answer: {answer}") 1201 | return answer 1202 | 1203 | def get_BLIP_caption(self, image): 1204 | inputs = self.ImageCaption.processor(image, return_tensors="pt").to(self.ImageCaption.device, 1205 | self.ImageCaption.torch_dtype) 1206 | out = self.ImageCaption.model.generate(**inputs) 1207 | BLIP_caption = self.ImageCaption.processor.decode(out[0], skip_special_tokens=True) 1208 | return BLIP_caption 1209 | 1210 | def check_prompt(self, prompt): 1211 | check = f"Here is a paragraph with adjectives. " \ 1212 | f"{prompt} " \ 1213 | f"Please change all plural forms in the adjectives to singular forms. 
" 1214 | return self.llm(check) 1215 | 1216 | def get_imagine_caption(self, image, imagine): 1217 | BLIP_caption = self.get_BLIP_caption(image) 1218 | background_color = self.get_BLIP_vqa(image, 'what is the background color of this image') 1219 | style = self.get_BLIP_vqa(image, 'what is the style of this image') 1220 | imagine_prompt = f"let's pretend you are an excellent painter and now " \ 1221 | f"there is an incomplete painting with {BLIP_caption} in the center, " \ 1222 | f"please imagine the complete painting and describe it" \ 1223 | f"you should consider the background color is {background_color}, the style is {style}" \ 1224 | f"You should make the painting as vivid and realistic as possible" \ 1225 | f"You can not use words like painting or picture" \ 1226 | f"and you should use no more than 50 words to describe it" 1227 | caption = self.llm(imagine_prompt) if imagine else BLIP_caption 1228 | caption = self.check_prompt(caption) 1229 | print(f'BLIP observation: {BLIP_caption}, ChatGPT imagine to {caption}') if imagine else print( 1230 | f'Prompt: {caption}') 1231 | return caption 1232 | 1233 | def resize_image(self, image, max_size=1000000, multiple=8): 1234 | aspect_ratio = image.size[0] / image.size[1] 1235 | new_width = int(math.sqrt(max_size * aspect_ratio)) 1236 | new_height = int(new_width / aspect_ratio) 1237 | new_width, new_height = new_width - (new_width % multiple), new_height - (new_height % multiple) 1238 | return image.resize((new_width, new_height)) 1239 | 1240 | def dowhile(self, original_img, tosize, expand_ratio, imagine, usr_prompt): 1241 | old_img = original_img 1242 | while (old_img.size != tosize): 1243 | prompt = self.check_prompt(usr_prompt) if usr_prompt else self.get_imagine_caption(old_img, imagine) 1244 | crop_w = 15 if old_img.size[0] != tosize[0] else 0 1245 | crop_h = 15 if old_img.size[1] != tosize[1] else 0 1246 | old_img = ImageOps.crop(old_img, (crop_w, crop_h, crop_w, crop_h)) 1247 | temp_canvas_size = (expand_ratio * old_img.width if expand_ratio * old_img.width < tosize[0] else tosize[0], 1248 | expand_ratio * old_img.height if expand_ratio * old_img.height < tosize[1] else tosize[ 1249 | 1]) 1250 | temp_canvas, temp_mask = Image.new("RGB", temp_canvas_size, color="white"), Image.new("L", temp_canvas_size, 1251 | color="white") 1252 | x, y = (temp_canvas.width - old_img.width) // 2, (temp_canvas.height - old_img.height) // 2 1253 | temp_canvas.paste(old_img, (x, y)) 1254 | temp_mask.paste(0, (x, y, x + old_img.width, y + old_img.height)) 1255 | resized_temp_canvas, resized_temp_mask = self.resize_image(temp_canvas), self.resize_image(temp_mask) 1256 | image = self.inpaint(prompt=prompt, image=resized_temp_canvas, mask_image=resized_temp_mask, 1257 | height=resized_temp_canvas.height, width=resized_temp_canvas.width, 1258 | num_inference_steps=50).resize( 1259 | (temp_canvas.width, temp_canvas.height), Image.ANTIALIAS) 1260 | image = blend_gt2pt(old_img, image) 1261 | old_img = image 1262 | return old_img 1263 | 1264 | @prompts(name="Extend An Image", 1265 | description="useful when you need to extend an image into a larger image." 1266 | "like: extend the image into a resolution of 2048x1024, extend the image into 2048x1024. 
" 1267 | "The input to this tool should be a comma separated string of two, representing the image_path and the resolution of widthxheight") 1268 | def inference(self, inputs): 1269 | image_path, resolution = inputs.split(',') 1270 | width, height = resolution.split('x') 1271 | tosize = (int(width), int(height)) 1272 | image = Image.open(image_path) 1273 | image = ImageOps.crop(image, (10, 10, 10, 10)) 1274 | out_painted_image = self.dowhile(image, tosize, 4, True, False) 1275 | updated_image_path = get_new_image_name(image_path, func_name="outpainting") 1276 | out_painted_image.save(updated_image_path) 1277 | print(f"\nProcessed InfinityOutPainting, Input Image: {image_path}, Input Resolution: {resolution}, " 1278 | f"Output Image: {updated_image_path}") 1279 | return updated_image_path 1280 | 1281 | 1282 | 1283 | class ObjectSegmenting: 1284 | template_model = True # Add this line to show this is a template model. 1285 | def __init__(self, Text2Box:Text2Box, Segmenting:Segmenting): 1286 | # self.llm = OpenAI(temperature=0) 1287 | self.grounding = Text2Box 1288 | self.sam = Segmenting 1289 | 1290 | 1291 | @prompts(name="Segment the given object", 1292 | description="useful when you only want to segment the certain objects in the picture" 1293 | "according to the given text" 1294 | "like: segment the cat," 1295 | "or can you segment an obeject for me" 1296 | "The input to this tool should be a comma separated string of two, " 1297 | "representing the image_path, the text description of the object to be found") 1298 | def inference(self, inputs): 1299 | image_path, det_prompt = inputs.split(",") 1300 | print(f"image_path={image_path}, text_prompt={det_prompt}") 1301 | image_pil, image = self.grounding.load_image(image_path) 1302 | 1303 | boxes_filt, pred_phrases = self.grounding.get_grounding_boxes(image, det_prompt) 1304 | updated_image_path = self.sam.segment_image_with_boxes(image_pil,image_path,boxes_filt,pred_phrases) 1305 | print( 1306 | f"\nProcessed ObejectSegmenting, Input Image: {image_path}, Object to be Segment {det_prompt}, " 1307 | f"Output Image: {updated_image_path}") 1308 | return updated_image_path 1309 | 1310 | def merge_masks(self, masks): 1311 | ''' 1312 | Args: 1313 | mask (numpy.ndarray): shape N x 1 x H x W 1314 | Outputs: 1315 | new_mask (numpy.ndarray): shape H x W 1316 | ''' 1317 | if type(masks) == torch.Tensor: 1318 | x = masks 1319 | elif type(masks) == np.ndarray: 1320 | x = torch.tensor(masks,dtype=int) 1321 | else: 1322 | raise TypeError("the type of the input masks must be numpy.ndarray or torch.tensor") 1323 | x = x.squeeze(dim=1) 1324 | value, _ = x.max(dim=0) 1325 | new_mask = value.cpu().numpy() 1326 | new_mask.astype(np.uint8) 1327 | return new_mask 1328 | 1329 | def get_mask(self, image_path, text_prompt): 1330 | 1331 | print(f"image_path={image_path}, text_prompt={text_prompt}") 1332 | # image_pil (PIL.Image.Image) -> size: W x H 1333 | # image (numpy.ndarray) -> H x W x 3 1334 | image_pil, image = self.grounding.load_image(image_path) 1335 | 1336 | boxes_filt, pred_phrases = self.grounding.get_grounding_boxes(image, text_prompt) 1337 | image = cv2.imread(image_path) 1338 | image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) 1339 | self.sam.sam_predictor.set_image(image) 1340 | 1341 | # masks (torch.tensor) -> N x 1 x H x W 1342 | masks = self.sam.get_mask_with_boxes(image_pil, image, boxes_filt) 1343 | 1344 | # merged_mask -> H x W 1345 | merged_mask = self.merge_masks(masks) 1346 | # draw output image 1347 | 1348 | for mask in masks: 1349 | image = 

    def get_mask(self, image_path, text_prompt):
        print(f"image_path={image_path}, text_prompt={text_prompt}")
        # image_pil (PIL.Image.Image) -> size: W x H
        # image (numpy.ndarray) -> H x W x 3
        image_pil, image = self.grounding.load_image(image_path)

        boxes_filt, pred_phrases = self.grounding.get_grounding_boxes(image, text_prompt)
        image = cv2.imread(image_path)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        self.sam.sam_predictor.set_image(image)

        # masks (torch.Tensor) -> N x 1 x H x W
        masks = self.sam.get_mask_with_boxes(image_pil, image, boxes_filt)

        # merged_mask -> H x W
        merged_mask = self.merge_masks(masks)

        # draw the individual masks onto the output image for visualization
        for mask in masks:
            image = self.sam.show_mask(mask[0].cpu().numpy(), image, random_color=True, transparency=0.3)

        return merged_mask


class ImageEditing:
    template_model = True
    def __init__(self, Text2Box: Text2Box, Segmenting: Segmenting, Inpainting: Inpainting):
        print("Initializing ImageEditing")
        self.sam = Segmenting
        self.grounding = Text2Box
        self.inpaint = Inpainting

    def pad_edge(self, mask, padding):
        # mask: Tensor [H, W]
        mask = mask.numpy()
        true_indices = np.argwhere(mask)
        mask_array = np.zeros_like(mask, dtype=bool)
        for idx in true_indices:
            padded_slice = tuple(slice(max(0, i - padding), i + padding + 1) for i in idx)
            mask_array[padded_slice] = True
        new_mask = (mask_array * 255).astype(np.uint8)
        # new_mask: numpy.ndarray [H, W], values 0 or 255
        return new_mask

    @prompts(name="Remove Something From The Photo",
             description="useful when you want to remove an object or something from the photo "
                         "from its description or location. "
                         "The input to this tool should be a comma separated string of two, "
                         "representing the image_path and the object to be removed. ")
    def inference_remove(self, inputs):
        image_path, to_be_removed_txt = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
        return self.inference_replace_sam(f"{image_path},{to_be_removed_txt},background")

    @prompts(name="Replace Something From The Photo",
             description="useful when you want to replace an object from the object description or "
                         "location with another object from its description. "
                         "The input to this tool should be a comma separated string of three, "
                         "representing the image_path, the object to be replaced, the object to be replaced with ")
    def inference_replace_sam(self, inputs):
        image_path, to_be_replaced_txt, replace_with_txt = inputs.split(",")

        print(f"image_path={image_path}, to_be_replaced_txt={to_be_replaced_txt}")
        image_pil, image = self.grounding.load_image(image_path)
        boxes_filt, pred_phrases = self.grounding.get_grounding_boxes(image, to_be_replaced_txt)
        image = cv2.imread(image_path)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        self.sam.sam_predictor.set_image(image)
        masks = self.sam.get_mask_with_boxes(image_pil, image, boxes_filt)
        mask = torch.sum(masks, dim=0).unsqueeze(0)
        mask = torch.where(mask > 0, True, False)
        mask = mask.squeeze(0).squeeze(0).cpu()  # tensor

        mask = self.pad_edge(mask, padding=20)  # numpy
        mask_image = Image.fromarray(mask)

        updated_image = self.inpaint(prompt=replace_with_txt, image=image_pil,
                                     mask_image=mask_image)
        updated_image_path = get_new_image_name(image_path, func_name="replace-something")
        updated_image = updated_image.resize(image_pil.size)
        updated_image.save(updated_image_path)
        print(
            f"\nProcessed ImageEditing, Input Image: {image_path}, Replace {to_be_replaced_txt} with {replace_with_txt}, "
            f"Output Image: {updated_image_path}")
        return updated_image_path
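
# Illustrative sketch (added comment, not part of the upstream file): how the template
# models above are wired together once the underlying tools exist. The constructor
# arguments mirror the type annotations; the variable names and the sample input are
# hypothetical, and Text2Box / Segmenting / Inpainting are assumed to take a device
# string, as the basic-model loading loop elsewhere in this file does.
#
#   grounding = Text2Box(device="cuda:0")
#   sam = Segmenting(device="cuda:0")
#   inpaint = Inpainting(device="cuda:0")
#   editor = ImageEditing(grounding, sam, inpaint)
#   new_path = editor.inference_remove("image/cat.png,the cat")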

class BackgroundRemoving:
    '''
        Used to remove the background of the given picture.
    '''
    template_model = True
    def __init__(self, VisualQuestionAnswering: VisualQuestionAnswering, Text2Box: Text2Box, Segmenting: Segmenting):
        self.vqa = VisualQuestionAnswering
        self.obj_segmenting = ObjectSegmenting(Text2Box, Segmenting)

    @prompts(name="Remove the background",
             description="useful when you want to extract the object or remove the background, "
                         "the input should be a string image_path")
    def inference(self, image_path):
        '''
            Given an image, return a picture that contains only the extracted main object.
        '''
        mask = self.get_mask(image_path)

        image = Image.open(image_path)
        mask = Image.fromarray(mask)
        image.putalpha(mask)

        updated_image_path = get_new_image_name(image_path, func_name="detect-something")
        image.save(updated_image_path)

        return updated_image_path

    def get_mask(self, image_path):
        '''
        Description:
            Given an image path, return the mask of the main object.
        Args:
            image_path (string): the file path of the image
        Outputs:
            mask (numpy.ndarray): H x W
        '''
        vqa_input = f"{image_path}, what is the main object in the image?"
        text_prompt = self.vqa.inference(vqa_input)

        mask = self.obj_segmenting.get_mask(image_path, text_prompt)

        return mask


class ConversationBot:
    def __init__(self, load_dict):
        # load_dict = {'VisualQuestionAnswering': 'cuda:0', 'ImageCaptioning': 'cuda:1', ...}
        print(f"Initializing VisualChatGPT, load_dict={load_dict}")
        if 'ImageCaptioning' not in load_dict:
            raise ValueError("You have to load ImageCaptioning as a basic function for VisualChatGPT")

        self.models = {}
        # Load Basic Foundation Models
        for class_name, device in load_dict.items():
            self.models[class_name] = globals()[class_name](device=device)

        # Load Template Foundation Models
        for class_name, module in globals().items():
            if getattr(module, 'template_model', False):
                template_required_names = {k for k in inspect.signature(module.__init__).parameters.keys()
                                           if k != 'self'}
                loaded_names = {type(e).__name__ for e in self.models.values()}
                if template_required_names.issubset(loaded_names):
                    self.models[class_name] = globals()[class_name](
                        **{name: self.models[name] for name in template_required_names})

        print(f"All the Available Functions: {self.models}")

        self.tools = []
        for instance in self.models.values():
            for e in dir(instance):
                if e.startswith('inference'):
                    func = getattr(instance, e)
                    self.tools.append(Tool(name=func.name, description=func.description, func=func))
        self.llm = OpenAI(temperature=0)
        self.memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
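
    # Illustrative note (added comment): with, for example,
    #   load_dict = {'ImageCaptioning': 'cuda:0', 'VisualQuestionAnswering': 'cuda:0',
    #                'Text2Box': 'cuda:0', 'Segmenting': 'cuda:0', 'Inpainting': 'cuda:0'}
    # the first loop loads those five foundation models, and the second loop then
    # instantiates every template model whose constructor arguments are all satisfied
    # (here ObjectSegmenting, ImageEditing, BackgroundRemoving and InfinityOutPainting),
    # so their inference* methods are also exposed to the agent as tools.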

    def init_agent(self, lang):
        self.memory.clear()  # clear previous history
        if lang == 'English':
            PREFIX, FORMAT_INSTRUCTIONS, SUFFIX = VISUAL_CHATGPT_PREFIX, VISUAL_CHATGPT_FORMAT_INSTRUCTIONS, VISUAL_CHATGPT_SUFFIX
            place = "Enter text and press enter, or upload an image"
            label_clear = "Clear"
        else:
            PREFIX, FORMAT_INSTRUCTIONS, SUFFIX = VISUAL_CHATGPT_PREFIX_CN, VISUAL_CHATGPT_FORMAT_INSTRUCTIONS_CN, VISUAL_CHATGPT_SUFFIX_CN
            place = "输入文字并回车,或者上传图片"
            label_clear = "清除"
        self.agent = initialize_agent(
            self.tools,
            self.llm,
            agent="conversational-react-description",
            verbose=True,
            memory=self.memory,
            return_intermediate_steps=True,
            agent_kwargs={'prefix': PREFIX, 'format_instructions': FORMAT_INSTRUCTIONS,
                          'suffix': SUFFIX})
        return gr.update(visible=True), gr.update(visible=False), gr.update(placeholder=place), gr.update(value=label_clear)

    def run_text(self, text, state):
        self.agent.memory.buffer = cut_dialogue_history(self.agent.memory.buffer, keep_last_n_words=500)
        res = self.agent({"input": text.strip()})
        res['output'] = res['output'].replace("\\", "/")
        response = re.sub(r'(image/[-\w]*\.png)', lambda m: f'![](file={m.group(0)})*{m.group(0)}*', res['output'])
        state = state + [(text, response)]
        print(f"\nProcessed run_text, Input text: {text}\nCurrent state: {state}\n"
              f"Current Memory: {self.agent.memory.buffer}")
        return state, state
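
    # Illustrative note (added comment): the re.sub above rewrites any generated image
    # path in the agent output into Gradio's markdown-with-caption form, e.g.
    #   "image/3c2c5a8b.png" -> "![](file=image/3c2c5a8b.png)*image/3c2c5a8b.png*"
    # so the chatbot widget renders the picture inline; "3c2c5a8b" is just a made-up
    # uuid-style filename used for illustration.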
" 1548 | self.agent.memory.buffer = self.agent.memory.buffer + Human_prompt + 'AI: ' + AI_prompt 1549 | state = state + [(f"![](file={image_filename})*{image_filename}*", AI_prompt)] 1550 | print(f"\nProcessed run_image, Input image: {image_filename}\nCurrent state: {state}\n" 1551 | f"Current Memory: {self.agent.memory.buffer}") 1552 | return state, state, f'{txt} {image_filename} ' 1553 | 1554 | 1555 | if __name__ == '__main__': 1556 | if not os.path.exists("checkpoints"): 1557 | os.mkdir("checkpoints") 1558 | parser = argparse.ArgumentParser() 1559 | parser.add_argument('--load', type=str, default="ImageCaptioning_cuda:0,Text2Image_cuda:0") 1560 | args = parser.parse_args() 1561 | load_dict = {e.split('_')[0].strip(): e.split('_')[1].strip() for e in args.load.split(',')} 1562 | bot = ConversationBot(load_dict=load_dict) 1563 | with gr.Blocks(css="#chatbot .overflow-y-auto{height:500px}") as demo: 1564 | lang = gr.Radio(choices = ['Chinese','English'], value=None, label='Language') 1565 | chatbot = gr.Chatbot(elem_id="chatbot", label="Visual ChatGPT") 1566 | state = gr.State([]) 1567 | with gr.Row(visible=False) as input_raws: 1568 | with gr.Column(scale=0.7): 1569 | txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter, or upload an image").style( 1570 | container=False) 1571 | with gr.Column(scale=0.15, min_width=0): 1572 | clear = gr.Button("Clear") 1573 | with gr.Column(scale=0.15, min_width=0): 1574 | btn = gr.UploadButton(label="🖼️",file_types=["image"]) 1575 | 1576 | lang.change(bot.init_agent, [lang], [input_raws, lang, txt, clear]) 1577 | txt.submit(bot.run_text, [txt, state], [chatbot, state]) 1578 | txt.submit(lambda: "", None, txt) 1579 | btn.upload(bot.run_image, [btn, state, txt, lang], [chatbot, state, txt]) 1580 | clear.click(bot.memory.clear) 1581 | clear.click(lambda: [], None, chatbot) 1582 | clear.click(lambda: [], None, state) 1583 | demo.launch(server_name="0.0.0.0", server_port=7861) 1584 | --------------------------------------------------------------------------------