├── .gitattributes ├── .github ├── FUNDING.yml └── ISSUE_TEMPLATE │ ├── bug_report.md │ └── feature_request.md ├── .gitignore ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── app_icon.ico ├── bot ├── inputs │ ├── .gitignore │ └── README.md ├── outputs │ ├── .gitignore │ └── README.md └── temp │ ├── .gitignore │ └── README.md ├── g_diffuser_bot.py ├── g_diffuser_cli.py ├── inputs ├── .gitignore ├── README.md ├── endzoom.png └── scripts │ ├── .gitignore │ ├── README.md │ ├── zoom_composite.py │ └── zoom_maker.py ├── modules ├── __init__.py ├── g_diffuser_lib.py ├── g_diffuser_utilities.py ├── sdgrpcserver │ └── generated │ │ ├── __init__.py │ │ ├── completion_pb2.py │ │ ├── completion_pb2_grpc.py │ │ ├── dashboard_pb2.py │ │ ├── dashboard_pb2.pyi │ │ ├── dashboard_pb2_grpc.py │ │ ├── engines_pb2.py │ │ ├── engines_pb2.pyi │ │ ├── engines_pb2_grpc.py │ │ ├── generation_pb2.py │ │ ├── generation_pb2.pyi │ │ ├── generation_pb2_grpc.py │ │ ├── tensors_pb2.py │ │ ├── tensors_pb2.pyi │ │ └── tensors_pb2_grpc.py └── sdgrpcserver_client.py ├── outputs ├── .gitignore └── README.md └── temp ├── .gitignore └── README.md /.gitattributes: -------------------------------------------------------------------------------- 1 | extensions/stable-diffusion-grpcserver/** linguist-vendored -------------------------------------------------------------------------------- /.github/FUNDING.yml: -------------------------------------------------------------------------------- 1 | github: [parlance-zz] 2 | custom: ["ETH@0x8e4BbD53bfF9C0765eE1859C590A5334722F2086"] 3 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the bug** 11 | A clear and concise description of what 
the bug is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 22 | 23 | **Screenshots** 24 | If applicable, add screenshots to help explain your problem. 25 | 26 | **Desktop (please complete the following information):** 27 | - OS: [e.g. iOS] 28 | - Browser [e.g. chrome, safari] 29 | - Version [e.g. 22] 30 | 31 | **Smartphone (please complete the following information):** 32 | - Device: [e.g. iPhone6] 33 | - OS: [e.g. iOS8.1] 34 | - Browser [e.g. stock browser, safari] 35 | - Version [e.g. 22] 36 | 37 | **Additional context** 38 | Add any other context about the problem here. 39 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Suggest an idea for this project 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Is your feature request related to a problem? Please describe.** 11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 12 | 13 | **Describe the solution you'd like** 14 | A clear and concise description of what you want to happen. 15 | 16 | **Describe alternatives you've considered** 17 | A clear and concise description of any alternative solutions or features you've considered. 18 | 19 | **Additional context** 20 | Add any other context or screenshots about the feature request here. 
21 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | *.pyo 3 | __pycache__/ 4 | *.png 5 | *.json 6 | *.jpg 7 | *.jpeg 8 | *.log 9 | *.mp4 10 | !part_of_stable_cabal.png -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Code of Conduct 2 | 3 | ## 1. Purpose 4 | 5 | This code of conduct outlines our expectations for all those who participate in our community, as well as the consequences for unacceptable behavior. 6 | 7 | ## 2. Expected Behavior 8 | 9 | The following behaviors are expected and requested of all community members: 10 | 11 | * Participate in an authentic and active way. In doing so, you contribute to the health and longevity of this community. 12 | * Exercise consideration and respect in your speech and actions. 13 | * Attempt collaboration before conflict. 14 | * Refrain from demeaning, discriminatory, or harassing behavior and speech. 15 | 16 | ## 3. Unacceptable Behavior 17 | 18 | The following behaviors are considered harassment and are unacceptable within our community: 19 | 20 | * Violence, threats of violence or violent language directed against another person. 21 | * Sexist, racist, homophobic, transphobic, or other language with malicious intent. 22 | * Posting or threatening to post other people's personally identifying information ("doxing"). 23 | * Personal insults. 24 | * Advocating for or encouraging any of the above behavior. 25 | 26 | ## 4. Consequences of Unacceptable Behavior 27 | 28 | Unacceptable behavior from any community member, including sponsors and anyone with decision-making authority, will not be tolerated. 
29 | 30 | If a community member engages in unacceptable behavior, the community organizers may take any action they deem appropriate, up to and including a temporary ban or permanent expulsion from the community without warning. 31 | 32 | ## 5. Reporting Guidelines 33 | 34 | If you are subject to or witness unacceptable behavior, or have any other concerns, please notify a community organizer as soon as possible. 35 | 36 | If there is an issue with community organizers, please let me know at parlance@fifth-harmonic.com. 37 | 38 | ## 6. Addressing Grievances 39 | 40 | If you feel you have been falsely or unfairly accused of violating this Code of Conduct, you should notify parlance@fifth-harmonic.com with a concise description of your grievance. For the moment these will be handled on a case-by-case basis. 41 | 42 | 43 | Thank you and have fun diffusing! 44 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | ## How to contribute to G-Diffuser 2 | 3 | #### **Did you find a bug?** 4 | 5 | * **Ensure the bug was not already reported** by searching on GitHub under [Issues](https://github.com/parlance-zz/g-diffuser-bot/issues). 6 | 7 | * If you're unable to find an open issue addressing the problem, [open a new one](https://github.com/parlance-zz/g-diffuser-bot/issues/new). Be sure to include a **title and clear description**, as much relevant information as possible, and a **code sample** or an **executable test case** demonstrating the expected behavior that is not occurring. 8 | 9 | #### **Did you write a patch that fixes a bug or adds a feature?** 10 | 11 | * Open a new GitHub pull request with the patch. 12 | 13 | * Ensure the PR description clearly describes the problem and solution. 
14 | 15 | #### **Do you intend to add a new feature or change an existing one?** 16 | 17 | * Suggest your change in the [Issues](https://github.com/parlance-zz/g-diffuser-bot/issues) and start writing code. 18 | 19 | #### **Do you have questions about the source code?** 20 | 21 | * Ask any question about how the code works by emailing parlance@fifth-harmonic.com 22 | 23 | Thanks! 24 | 25 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2022 Christopher Friesen 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![https://www.stablecabal.org](https://www.g-diffuser.com/stablecabal.png) https://www.stablecabal.org 2 | 3 | ## [g-diffuser-bot](https://www.g-diffuser.com) - Discord bot and interface for Stable Diffusion 4 | - [G-Diffuser / Stable Cabal Discord](https://discord.gg/stFy2UPppg) 5 | 6 | Nov 23-2022 Update: The first release of the all-in-one installer version of G-Diffuser is here. This release no longer requires the installation of WSL or Docker, and has a systray icon to keep track of and launch G-Diffuser components. The download link is available under this project's [releases](https://github.com/parlance-zz/g-diffuser-bot/releases/tag/aio-release). 7 | 8 | Nov 20-2022 Update: The infinite zoom scripts have been updated with some improvements, notably a new compositor script that is hundreds of times faster than before. 9 | 10 | Nov 19-2022 Update: There are some new g-diffuser CLI scripts that can be used to make infinite zoom videos. Check out [/inputs/scripts/](https://github.com/parlance-zz/g-diffuser-bot/tree/dev/inputs/scripts) and have a look at [zoom_maker](https://github.com/parlance-zz/g-diffuser-bot/blob/dev/inputs/scripts/zoom_maker.py) and [zoom_composite](https://github.com/parlance-zz/g-diffuser-bot/blob/dev/inputs/scripts/zoom_composite.py) 11 | 12 | Nov 11-2022 Update: I've created a website to showcase a demo gallery of out-painting images made using g-diffuser bot - https://www.g-diffuser.com/ 13 | 14 | Nov 08-2022 Update: In/out-painting and img2img (aka "riffing") have (finally) been added to the Discord bot. The new Discord bot command 'expand' allows you to change the canvas size of an input image while filling it with transparency, perfect for setting up out-painting. 
15 | 16 | Nov 07-2022 Update: This update adds support for **clip guided models** and new parameters to control them. For now clip guidance has a heavy performance penalty, but this will improve with optimization. This update also adds **negative prompt support** to both the CLI and discord bot, and changes the default loaded models to include SD1.5 and SD1.5 with (small) clip. This update also adds several **new samplers (dpmspp_1, dpmspp_2, dpmspp_3)**. 17 | 18 | ## System Requirements: 19 | - Windows 10+ (1903+), nvidia GPU with at least 8GB VRAM, ~40GB free space for model downloads 20 | - You may need to turn on "developer mode" before beginning the install instructions. Look for "developer settings" in the start menu. 21 | 22 | ## G-Diffuser all-in-one 23 | The first release of the all-in-one installer is here. It notably features much easier "one-click" installation and updating, as well as a systray icon to keep track of g-diffuser programs and the server while it is running. 24 | 25 | ## Installation / Setup 26 | - Download and extract [G-Diffuser AIO Installer (Windows 10+ 64-bit)](https://github.com/parlance-zz/g-diffuser-bot/releases/tag/aio-release) to a folder of your choice. 27 | - Run install_or_update.cmd at least once (once to install, and again later if you wish to update to the latest version) 28 | - Edit the file named "config" and make sure to add your hugging-face access token and save the file. 
29 | - If you don't have a huggingface token yet: 30 | - Register for a HuggingFace account at https://huggingface.co/join 31 | - Follow the instructions to access the repository at https://huggingface.co/CompVis/stable-diffusion-v1-4 (don't worry, this doesn't mean SD1.4 will be downloaded or used, it just grants you the necessary access to download stable diffusion models) 32 | - Create a token at https://huggingface.co/settings/tokens (if required, choose the "read" role) 33 | 34 | ## Usage 35 | - Run run.cmd to start the G-Diffuser system 36 | - You should see a G-Diffuser icon in your systray / notification area. Click on the icon to open and interact with the G-Diffuser system. If the icon is missing, be sure it isn't hidden by clicking the "up" arrow near the notification area. 37 | 38 | ![G-Diffuser Systray](https://www.g-diffuser.com/systray_screenshot.jpg) 39 | 40 | GUI is coming soon(tm) -------------------------------------------------------------------------------- /app_icon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/parlance-zz/g-diffuser-bot/a003394476cb5495d1705daa013b94829a57c121/app_icon.ico -------------------------------------------------------------------------------- /bot/inputs/.gitignore: -------------------------------------------------------------------------------- 1 | # Ignore everything in this directory 2 | * 3 | # Except this file 4 | !.gitignore 5 | !README.md 6 | !scripts/ -------------------------------------------------------------------------------- /bot/inputs/README.md: -------------------------------------------------------------------------------- 1 | - This folder is used for saved input images, argument files, and g-diffuser cli scripts used by the discord bot 2 | - All discord bot user paths are prefixed with the server/guild and the username 3 | -------------------------------------------------------------------------------- 
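The per-user path convention noted in the bot README above can be sketched as follows. This is an illustrative example only — `user_prefixed_path` is a hypothetical helper, not the actual g_diffuser_lib code — showing how a bot input path might be composed from the server/guild name and the username:

```python
import re

# Hypothetical helper (not part of g-diffuser itself): compose a per-user
# bot path prefixed with the server/guild and the username, as described
# in the bot inputs/outputs READMEs.
def user_prefixed_path(root: str, guild: str, username: str, filename: str) -> str:
    def sanitize(s: str) -> str:
        # replace characters that are unsafe in filesystem paths
        return re.sub(r"[^\w\- ]", "_", s)
    return "/".join([root, sanitize(guild), sanitize(username), filename])

print(user_prefixed_path("bot/inputs", "Stable Cabal", "parlance", "my_image.png"))
# bot/inputs/Stable Cabal/parlance/my_image.png
```

Two users with the same filename on different servers therefore never collide, since the guild and username segments keep their files apart.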
/bot/outputs/.gitignore: -------------------------------------------------------------------------------- 1 | # Ignore everything in this directory 2 | * 3 | # Except this file 4 | !.gitignore 5 | !README.md -------------------------------------------------------------------------------- /bot/outputs/README.md: -------------------------------------------------------------------------------- 1 | - This folder is used as the root path for all sample and argument output files used by the discord bot 2 | - All discord bot user paths are prefixed with the server/guild and the username -------------------------------------------------------------------------------- /bot/temp/.gitignore: -------------------------------------------------------------------------------- 1 | # Ignore everything in this directory 2 | * 3 | # Except this file 4 | !.gitignore 5 | !README.md -------------------------------------------------------------------------------- /bot/temp/README.md: -------------------------------------------------------------------------------- 1 | - This folder is used for misc temp files -------------------------------------------------------------------------------- /g_diffuser_bot.py: -------------------------------------------------------------------------------- 1 | """ 2 | MIT License 3 | 4 | Copyright (c) 2022 Christopher Friesen 5 | https://github.com/parlance-zz 6 | 7 | Permission is hereby granted, free of charge, to any person obtaining a copy 8 | of this software and associated documentation files (the "Software"), to deal 9 | in the Software without restriction, including without limitation the rights 10 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 11 | copies of the Software, and to permit persons to whom the Software is 12 | furnished to do so, subject to the following conditions: 13 | 14 | The above copyright notice and this permission notice shall be included in all 15 | copies or substantial portions of the Software. 
16 | 17 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 18 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 19 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 20 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 21 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 22 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 23 | SOFTWARE. 24 | 25 | 26 | g_diffuser_bot.py - discord bot interface for g-diffuser-lib 27 | 28 | """ 29 | 30 | import modules.g_diffuser_lib as gdl 31 | from modules.g_diffuser_lib import SimpleLogger 32 | 33 | if __name__ == "__main__": 34 | gdl.load_config() 35 | import os; os.chdir(gdl.DEFAULT_PATHS.root) 36 | logger = SimpleLogger("g_diffuser_bot.log") 37 | 38 | import datetime 39 | import pathlib 40 | import yaml, json  # json is used by load_state/save_state below 41 | from typing import Optional 42 | import aiohttp 43 | import argparse 44 | from argparse import Namespace 45 | import inspect 46 | 47 | import discord 48 | from discord import app_commands 49 | 50 | # redirect default paths to the designated bot path 51 | gdl.DEFAULT_PATHS.inputs = gdl.DEFAULT_PATHS.bot+"/inputs" 52 | gdl.DEFAULT_PATHS.outputs = gdl.DEFAULT_PATHS.bot+"/outputs" 53 | gdl.DEFAULT_PATHS.temp = gdl.DEFAULT_PATHS.bot+"/temp" 54 | 55 | models = gdl.start_grpc_server() 56 | if models is None: 57 | raise Exception("Error: SDGRPC server is unavailable") 58 | 59 | MODEL_CHOICES = [] 60 | for model in models: 61 | MODEL_CHOICES.append(app_commands.Choice(name=model["id"], value=model["id"])) 62 | 63 | SAMPLER_CHOICES = [] 64 | for sampler in gdl.GRPC_SERVER_SUPPORTED_SAMPLERS_LIST: 65 | SAMPLER_CHOICES.append(app_commands.Choice(name=sampler, value=sampler)) 66 | 67 | class G_DiffuserBot(discord.Client): 68 | def __init__(self): 69 | intents = discord.Intents( 70 | messages=True, 71 | dm_messages=True, 72 | guild_messages=True, 73 | message_content=True, 74 | ) 
75 | super().__init__(intents=intents) 76 | self.tree = app_commands.CommandTree(self) 77 | 78 | self.restart_now = None 79 | self.shutdown_now = None 80 | self.cmd_list = [] 81 | 82 | self.saved_state = argparse.Namespace() 83 | self.saved_state.users_total_elapsed_time = {} 84 | if gdl.DISCORD_BOT_SETTINGS.state_file_path: # load existing data if we have state file path 85 | try: self.load_state() 86 | except Exception as e: 87 | #print("Error loading '{0}' - {1}".format(gdl.DISCORD_BOT_SETTINGS.state_file_path, str(e))) 88 | pass 89 | 90 | return 91 | 92 | async def setup_commands(self): 93 | # this is broken, for some reason fetch_commands() always returns nothing 94 | app_commands = await self.tree.fetch_commands() 95 | for app_command in app_commands: 96 | await app_command.edit(dm_permission=True) 97 | 98 | # explicitly sync all commands with all guilds / servers bot has joined 99 | bot_guilds = client.guilds 100 | print("Synchronizing app commands with servers/guilds: {0}...\n".format(str([str(x.id) for x in bot_guilds]))) 101 | for guild in bot_guilds: 102 | self.tree.copy_global_to(guild=guild) 103 | await self.tree.sync(guild=guild) 104 | 105 | print("Bot app-command tree: {0}\n\n".format(str(vars(self.tree)))) 106 | return 107 | 108 | def load_state(self): 109 | with open(gdl.DISCORD_BOT_SETTINGS.state_file_path, 'r') as dfile: 110 | self.saved_state = argparse.Namespace(**json.load(dfile)) 111 | print("Loaded {0}...".format(gdl.DISCORD_BOT_SETTINGS.state_file_path)) 112 | return 113 | 114 | def save_state(self): 115 | try: 116 | (pathlib.Path(gdl.DISCORD_BOT_SETTINGS.state_file_path).parents[0]).mkdir(exist_ok=True, parents=True) 117 | with open(gdl.DISCORD_BOT_SETTINGS.state_file_path, "w") as dfile: 118 | json.dump(vars(self.saved_state), dfile) # a Namespace itself is not JSON-serializable 119 | print("Saved {0}...".format(gdl.DISCORD_BOT_SETTINGS.state_file_path)) 120 | except Exception as e: 121 | raise Exception("Error saving '{0}' - {1}".format(gdl.DISCORD_BOT_SETTINGS.state_file_path, str(e))) 
return 123 | 124 | 125 | if __name__ == "__main__": 126 | parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) 127 | parser.add_argument( 128 | "--token", 129 | type=str, 130 | default=gdl.DISCORD_BOT_SETTINGS.token, 131 | help="if you want to override the discord bot token in config you can supply an alternate here", 132 | ) 133 | args = parser.parse_args() 134 | gdl.DISCORD_BOT_SETTINGS.token = args.token 135 | # if we don't have a valid discord bot token let's not go any further 136 | if (not gdl.DISCORD_BOT_SETTINGS.token) or (gdl.DISCORD_BOT_SETTINGS.token == "{your discord bot token}"): 137 | print("Fatal error: Cannot start discord bot with token '{0}'".format(gdl.DISCORD_BOT_SETTINGS.token)) 138 | print("Please update DISCORD_BOT_TOKEN in config. Press enter to continue...") 139 | input(); exit(1) 140 | else: 141 | client = G_DiffuserBot() 142 | 143 | 144 | def check_discord_attachment_size(file_path): 145 | return file_path # todo: check if img is too large, if so convert and save to jpg, return that path instead 146 | 147 | def get_discord_echo_args(args, img2img_params=False): 148 | if args.seed: args.seed -= 1 # show the seed that was _used_, not the next seed 149 | if args.auto_seed: args.auto_seed -= 1 150 | 151 | if img2img_params: 152 | if "width" in args: del args.width 153 | if "height" in args: del args.height 154 | 155 | if args.prompt == "": 156 | del args.prompt 157 | 158 | args_string = gdl.print_args(args, verbosity_level=2, return_only=True, width=9999) 159 | return args_string.replace("\n", "\t") 160 | 161 | @client.event 162 | async def on_ready(): 163 | await client.setup_commands() 164 | 165 | # img command for generating with an input image (img2img, in/outpainting) 166 | @client.tree.command( 167 | name="img", 168 | description="use an input image for img2img, inpainting, or outpainting", 169 | # nsfw=(gdl.GRPC_SERVER_SETTINGS.nsfw_behaviour != "block"), 170 | ) 171 | @app_commands.describe( 172 | 
input_image_url="url of input image (you can right-click an image in discord chat to get a link)", 173 | prompt='what do you want to create today?', 174 | num_samples='number of images to generate at once', 175 | model_name='which model to use', 176 | sampler='which sampling algorithm to use', 177 | cfg_scale='classifier-free guidance scale', 178 | seed='seed for the random generator', 179 | steps='number of sampling steps', 180 | negative_prompt='has the effect of an anti-prompt', 181 | guidance_strength='clip guidance (only affects clip models)', 182 | img2img_strength='amount to change the input image (only affects img2img, not in/out-painting)', 183 | expand_top='expand input image top by how much (in %)?', 184 | expand_right='expand input image right by how much (in %)?', 185 | expand_bottom='expand input image bottom by how much (in %)?', 186 | expand_left='expand input image left by how much (in %)?', 187 | expand_all='expand input image in _every_ direction by how much (in %)?', 188 | expand_softness='amount to soften the resulting input image mask (in %)', 189 | expand_space='distance erased from the edge of the original input image', 190 | hires_fix='Use the hires fix system to improve quality of large images', 191 | seamless_tiling='Generate a seamlessly tileable image', 192 | ) 193 | @app_commands.choices( 194 | sampler=SAMPLER_CHOICES, 195 | model_name=MODEL_CHOICES, 196 | ) 197 | async def img( 198 | interaction: discord.Interaction, 199 | input_image_url: str, 200 | prompt: Optional[str]= "", 201 | num_samples: Optional[app_commands.Range[int, 1, gdl.DISCORD_BOT_SETTINGS.max_output_limit]] = gdl.DISCORD_BOT_SETTINGS.default_output_n, 202 | model_name: Optional[app_commands.Choice[str]] = gdl.DEFAULT_SAMPLE_SETTINGS.model_name, 203 | sampler: Optional[app_commands.Choice[str]] = gdl.DEFAULT_SAMPLE_SETTINGS.sampler, 204 | cfg_scale: Optional[app_commands.Range[float, 0.0, 100.0]] = gdl.DEFAULT_SAMPLE_SETTINGS.cfg_scale, 205 | seed: 
Optional[app_commands.Range[int, 1, 2000000000]] = 0, 206 | steps: Optional[app_commands.Range[int, 1, gdl.DISCORD_BOT_SETTINGS.max_steps_limit]] = gdl.DEFAULT_SAMPLE_SETTINGS.steps, 207 | negative_prompt: Optional[str] = gdl.DEFAULT_SAMPLE_SETTINGS.negative_prompt, 208 | guidance_strength: Optional[app_commands.Range[float, 0.0, 1.0]] = gdl.DEFAULT_SAMPLE_SETTINGS.guidance_strength, 209 | img2img_strength: Optional[app_commands.Range[float, 0.0, 2.0]] = gdl.DEFAULT_SAMPLE_SETTINGS.img2img_strength, 210 | expand_top: Optional[app_commands.Range[float, 0.0, 1000.0]] = gdl.DEFAULT_SAMPLE_SETTINGS.expand_top, 211 | expand_bottom: Optional[app_commands.Range[float, 0.0, 1000.0]] = gdl.DEFAULT_SAMPLE_SETTINGS.expand_bottom, 212 | expand_left: Optional[app_commands.Range[float, 0.0, 1000.0]] = gdl.DEFAULT_SAMPLE_SETTINGS.expand_left, 213 | expand_right: Optional[app_commands.Range[float, 0.0, 1000.0]] = gdl.DEFAULT_SAMPLE_SETTINGS.expand_right, 214 | expand_all: Optional[app_commands.Range[float, 0.0, 1000.0]] = 0., 215 | expand_softness: Optional[app_commands.Range[float, 0.0, 100.0]] = gdl.DEFAULT_SAMPLE_SETTINGS.expand_softness, 216 | expand_space: Optional[app_commands.Range[float, 0., 100.0]] = gdl.DEFAULT_SAMPLE_SETTINGS.expand_space, 217 | hires_fix: Optional[bool] = gdl.DEFAULT_SAMPLE_SETTINGS.hires_fix, 218 | seamless_tiling: Optional[bool] = gdl.DEFAULT_SAMPLE_SETTINGS.seamless_tiling, 219 | ): 220 | try: await interaction.response.defer(thinking=True, ephemeral=False) # start by requesting more time to respond 221 | except Exception as e: print("exception in await interaction - " + str(e)) 222 | 223 | # build sample args from app command params 224 | args = locals().copy() 225 | del args["interaction"] 226 | 227 | args["expand_top"] += args["expand_all"] 228 | args["expand_bottom"] += args["expand_all"] 229 | args["expand_left"] += args["expand_all"] 230 | args["expand_right"] += args["expand_all"] 231 | del args["expand_all"] 232 | 233 | if 
type(args["model_name"]) != str: args["model_name"] = args["model_name"].value 234 | if type(args["sampler"]) != str: args["sampler"] = args["sampler"].value 235 | 236 | args = Namespace(**(vars(gdl.get_default_args()) | args)) 237 | init_image = await download_attachment(input_image_url) 238 | args.init_image = init_image 239 | gdl.print_args(args) 240 | 241 | output_args = await gdl.get_samples(args) 242 | 243 | if args.status == 2: # completed successfully 244 | attachment_files = [] 245 | for output in output_args: 246 | sample_filename = check_discord_attachment_size(gdl.DEFAULT_PATHS.outputs + "/" + output.output_file) 247 | attachment_files.append(discord.File(sample_filename)) 248 | 249 | args_str = get_discord_echo_args(args, img2img_params=True) 250 | cmd_str = inspect.currentframe().f_code.co_name 251 | message = "@" + interaction.user.display_name + f": /{cmd_str} {args_str}" 252 | 253 | try: await interaction.followup.send(files=attachment_files, content=message) 254 | except Exception as e: print("exception in await interaction - " + str(e)) 255 | else: 256 | print("error - " + args.error_message); gdl.print_args(args, verbosity_level=0) 257 | try: await interaction.followup.send(content="sorry, something went wrong :(", ephemeral=True) 258 | except Exception as e: print("exception in await interaction - " + str(e)) 259 | 260 | print("") 261 | return 262 | 263 | # txt2img command 264 | @client.tree.command( 265 | name="g", 266 | description="create something", 267 | # nsfw=(gdl.GRPC_SERVER_SETTINGS.nsfw_behaviour != "block"), 268 | ) 269 | @app_commands.describe( 270 | prompt='what do you want to create today?', 271 | num_samples='number of images to generate at once', 272 | model_name='which model to use', 273 | sampler='which sampling algorithm to use', 274 | cfg_scale='classifier-free guidance scale', 275 | seed='seed for the random generator', 276 | steps='number of sampling steps', 277 | negative_prompt='has the effect of an anti-prompt', 278 | 
guidance_strength='clip guidance (only affects clip models)', 279 | width='width (in pixels) of the output image', 280 | height='height (in pixels) of the output image', 281 | hires_fix='Use the hires fix system to improve quality of large images', 282 | seamless_tiling='Generate a seamlessly tileable image', 283 | ) 284 | @app_commands.choices( 285 | sampler=SAMPLER_CHOICES, 286 | model_name=MODEL_CHOICES, 287 | ) 288 | async def g( 289 | interaction: discord.Interaction, 290 | prompt: str, 291 | num_samples: Optional[app_commands.Range[int, 1, gdl.DISCORD_BOT_SETTINGS.max_output_limit]] = gdl.DISCORD_BOT_SETTINGS.default_output_n, 292 | model_name: Optional[app_commands.Choice[str]] = gdl.DEFAULT_SAMPLE_SETTINGS.model_name, 293 | sampler: Optional[app_commands.Choice[str]] = gdl.DEFAULT_SAMPLE_SETTINGS.sampler, 294 | cfg_scale: Optional[app_commands.Range[float, 0.0, 100.0]] = gdl.DEFAULT_SAMPLE_SETTINGS.cfg_scale, 295 | seed: Optional[app_commands.Range[int, 1, 2000000000]] = 0, 296 | steps: Optional[app_commands.Range[int, 1, gdl.DISCORD_BOT_SETTINGS.max_steps_limit]] = gdl.DEFAULT_SAMPLE_SETTINGS.steps, 297 | negative_prompt: Optional[str] = gdl.DEFAULT_SAMPLE_SETTINGS.negative_prompt, 298 | guidance_strength: Optional[app_commands.Range[float, 0.0, 1.0]] = gdl.DEFAULT_SAMPLE_SETTINGS.guidance_strength, 299 | width: Optional[app_commands.Range[int, 512, gdl.DEFAULT_SAMPLE_SETTINGS.max_width]] = gdl.DEFAULT_SAMPLE_SETTINGS.width, 300 | height: Optional[app_commands.Range[int, 512, gdl.DEFAULT_SAMPLE_SETTINGS.max_height]] = gdl.DEFAULT_SAMPLE_SETTINGS.height, 301 | hires_fix: Optional[bool] = gdl.DEFAULT_SAMPLE_SETTINGS.hires_fix, 302 | seamless_tiling: Optional[bool] = gdl.DEFAULT_SAMPLE_SETTINGS.seamless_tiling, 303 | ): 304 | try: await interaction.response.defer(thinking=True, ephemeral=False) # start by requesting more time to respond 305 | except Exception as e: print("exception in await interaction - " + str(e)) 306 | 307 | # build sample args from app 
command params 308 | args = locals().copy() 309 | del args["interaction"] 310 | if type(args["model_name"]) != str: args["model_name"] = args["model_name"].value 311 | if type(args["sampler"]) != str: args["sampler"] = args["sampler"].value 312 | 313 | args = Namespace(**(vars(gdl.get_default_args()) | args)) 314 | gdl.print_args(args) 315 | 316 | output_args = await gdl.get_samples(args) 317 | 318 | if args.status == 2: # completed successfully 319 | attachment_files = [] 320 | for output in output_args: 321 | sample_filename = check_discord_attachment_size(gdl.DEFAULT_PATHS.outputs + "/" + output.output_file) 322 | attachment_files.append(discord.File(sample_filename)) 323 | 324 | args_str = get_discord_echo_args(args) 325 | cmd_str = inspect.currentframe().f_code.co_name 326 | message = "@" + interaction.user.display_name + f": /{cmd_str} {args_str}" 327 | 328 | try: await interaction.followup.send(files=attachment_files, content=message) 329 | except Exception as e: print("exception in await interaction - " + str(e)) 330 | else: 331 | print("error - " + args.error_message); gdl.print_args(args, verbosity_level=0) 332 | try: await interaction.followup.send(content="sorry, something went wrong :(", ephemeral=True) 333 | except Exception as e: print("exception in await interaction - " + str(e)) 334 | 335 | print("") 336 | return 337 | 338 | async def download_attachment(url): 339 | mime_types ={ 340 | "image/png" : ".png", 341 | "image/jpeg" : ".jpg", 342 | "image/gif" : ".gif", 343 | "image/bmp" : ".bmp", 344 | } 345 | try: 346 | async with aiohttp.ClientSession() as session: 347 | async with session.get(url) as response: 348 | attachment_type = response.content_type 349 | if attachment_type not in mime_types: 350 | raise Exception("attachment type '{0}' not found in allowed attachment list '{1}'".format(attachment_type, mime_types)) 351 | 352 | if response.status == 200: 353 | attachment_extension = mime_types[attachment_type] 354 | sanitized_attachment_name = 
gdl.get_default_output_name(url) 355 | download_path = sanitized_attachment_name+attachment_extension 356 | full_download_path = gdl.DEFAULT_PATHS.inputs+"/"+download_path 357 | print("Downloading '" + url + "' to '" + full_download_path + "'...") 358 | with open(full_download_path, "wb") as out_file: 359 | out_file.write(await response.read()) 360 | else: 361 | raise Exception("Error downloading url, status = {0}".format(str(response.status))) 362 | 363 | except Exception as e: 364 | raise Exception("Error downloading url - {0}".format(str(e))) 365 | 366 | return download_path 367 | 368 | 369 | if __name__ == "__main__": 370 | client.run(gdl.DISCORD_BOT_SETTINGS.token, reconnect=True) -------------------------------------------------------------------------------- /g_diffuser_cli.py: -------------------------------------------------------------------------------- 1 | """ 2 | MIT License 3 | 4 | Copyright (c) 2022 Christopher Friesen 5 | https://github.com/parlance-zz 6 | 7 | Permission is hereby granted, free of charge, to any person obtaining a copy 8 | of this software and associated documentation files (the "Software"), to deal 9 | in the Software without restriction, including without limitation the rights 10 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 11 | copies of the Software, and to permit persons to whom the Software is 12 | furnished to do so, subject to the following conditions: 13 | 14 | The above copyright notice and this permission notice shall be included in all 15 | copies or substantial portions of the Software. 16 | 17 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 18 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 19 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE 20 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 21 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 22 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 23 | SOFTWARE. 24 | 25 | 26 | g_diffuser_cli.py - interactive command line interface for g-diffuser 27 | 28 | """ 29 | 30 | import modules.g_diffuser_lib as gdl 31 | from modules.g_diffuser_lib import SimpleLogger 32 | 33 | import argparse 34 | from argparse import Namespace 35 | import code 36 | import glob 37 | import pathlib 38 | import asyncio 39 | import os 40 | 41 | import numpy as np 42 | import cv2 43 | 44 | VERSION_STRING = "g-diffuser-cli v2.0" 45 | INTERACTIVE_MODE_BANNER_STRING = """Interactive mode: 46 | call sample() with keyword arguments and use the up/down arrow-keys to browse command history: 47 | 48 | sample("pillars of creation", num_samples=3, cfg_scale=15) # batch of 3 samples with cfg scale 15 49 | sample("greg rutkowski", init_image="my_image.png", num_samples=0) # setting n <=0 repeats until stopped 50 | sample("something's wrong with the g-diffuser", sampler="k_euler") # uses the k_euler sampler 51 | # any parameters unspecified will use defaults 52 | 53 | show_args(default_args()) # show default arguments and sampling parameters 54 | my_args = default_args(cfg_scale=15) # you can assign a collection of arguments to a variable 55 | my_args.prompt = "art by frank" # and modify them before passing them to sample() 56 | sample(my_args) # sample modifies the arguments object you pass in with results 57 | show_args(my_args) # you can show the result args to verify the results and output path 58 | 59 | show_samplers() # show all available sampler names 60 | show_models() # show available model ids on the grpc server (check models.yaml for more info) 61 | save_args(my_args, "my_fav_args") # you can save your arguments in /inputs/args 62 | args = load_args("my_fav_args") # you can load 
those saved arguments by name 63 | 64 | run_script("zoom_maker", my_zoom_args) # you can save cli scripts(.py) in /inputs/scripts and run them in the cli 65 | run("zoom_composite", my_composite_args) # run is shorthand for run_script, you can also pass an args object to the script 66 | 67 | resample("old_path", "new_path", scale=20) # regenerate all saved outputs in /outputs/old_path into /outputs/new_path 68 | # with optional replacement / substituted arguments 69 | compare("path1", "path2", "path3") # make a comparison grid from all images in the specified output paths 70 | compare("a", "b", mode="rows") # arrange each output path's images into rows instead 71 | compare("a", "b", file="my_compare.jpg") # the comparison image will be saved by default as /outputs/compare.jpg 72 | # use 'file' to specify an alternate filename 73 | 74 | clear() # clear the command window history 75 | help() # display this message 76 | exit() # exit interactive mode 77 | """ 78 | 79 | def main(): 80 | gdl.load_config() 81 | os.chdir(gdl.DEFAULT_PATHS.root) 82 | logger = SimpleLogger("g_diffuser_cli.log") 83 | 84 | gdl.start_grpc_server() 85 | 86 | global LAST_ARGS_PATH 87 | LAST_ARGS_PATH = gdl.DEFAULT_PATHS.inputs+"/args/last_args.yaml" 88 | 89 | global cli_locals 90 | cli_locals = argparse.Namespace() 91 | cli_locals.sample = cli_get_samples 92 | cli_locals.s = cli_get_samples 93 | cli_locals.show_args = cli_show_args 94 | cli_locals.sha = cli_show_args 95 | cli_locals.load_args = cli_load_args 96 | cli_locals.la = cli_load_args 97 | cli_locals.save_args = cli_save_args 98 | cli_locals.sa = cli_save_args 99 | cli_locals.default_args = cli_default_args 100 | cli_locals.resample = cli_resample 101 | cli_locals.compare = cli_save_comparison_grid 102 | cli_locals.run_script = cli_run_script 103 | cli_locals.run = cli_run_script 104 | cli_locals.show_samplers = cli_show_samplers 105 | cli_locals.show_models = cli_show_models 106 | cli_locals.clear = cli_clear 107 | cli_locals.help = 
cli_help 108 | cli_locals.exit = cli_exit 109 | 110 | global INTERACTIVE_MODE_BANNER_STRING 111 | interpreter = code.InteractiveConsole(locals=dict(globals(), **vars(cli_locals))) 112 | interpreter.interact(banner=INTERACTIVE_MODE_BANNER_STRING, exitmsg="") 113 | 114 | return 115 | 116 | def cli_get_samples(prompt=None, **kwargs): 117 | global LAST_ARGS_PATH 118 | 119 | if prompt is None: prompt = "" 120 | if type(prompt) == argparse.Namespace: # prompt can be a prompt string or an args namespace 121 | args = prompt 122 | elif type(prompt) == str: 123 | args = cli_default_args(prompt=prompt) 124 | else: 125 | raise Exception("Invalid prompt type: {0} - '{1}'".format(str(type(prompt)), str(prompt))) 126 | 127 | args.__dict__ |= kwargs # merge / override with kwargs 128 | asyncio.run(gdl.get_samples(args, interactive=True)) 129 | 130 | # try to save the last used args in a file for convenience 131 | try: gdl.save_yaml(vars(gdl.strip_args(args, level=0)), LAST_ARGS_PATH) 132 | except Exception as e: pass 133 | return 134 | 135 | def cli_show_args(args, level=None): 136 | if level != None: verbosity_level = level 137 | else: verbosity_level = 1 138 | gdl.print_args(args, verbosity_level=verbosity_level) 139 | return 140 | 141 | def cli_load_args(name=""): 142 | global LAST_ARGS_PATH 143 | try: 144 | if not name: args_path = LAST_ARGS_PATH 145 | else: args_path = gdl.DEFAULT_PATHS.inputs+"/args/"+name+".yaml" 146 | saved_args = Namespace(**gdl.load_yaml(args_path)) 147 | gdl.print_args(saved_args) 148 | except Exception as e: 149 | print("Error loading args from file - {0}".format(e)); saved_args = None # avoid an UnboundLocalError on the return below when loading fails 150 | return saved_args 151 | 152 | def cli_save_args(args, name): 153 | try: 154 | args_path = gdl.DEFAULT_PATHS.inputs+"/args/"+name+".yaml" 155 | gdl.save_yaml(vars(gdl.strip_args(args, level=0)), args_path) 156 | print("Saved {0}".format(args_path)) 157 | except Exception as e: 158 | print("Error saving args - {0}".format(e)) 159 | return 160 | 161 | def cli_default_args(**kwargs): 162
| return Namespace(**(vars(gdl.get_default_args()) | kwargs)) 163 | 164 | def cli_resample(old_path, new_path, **kwargs): 165 | resample_args = argparse.Namespace(**kwargs) 166 | assert(old_path); assert(new_path) 167 | DEFAULT_PATHS = gdl.DEFAULT_PATHS # local alias; this module defines no DEFAULT_PATHS global of its own 168 | 169 | if not os.path.exists(DEFAULT_PATHS.outputs+"/"+old_path): 170 | print("Error: Output path '" + str(DEFAULT_PATHS.outputs+"/"+old_path) + "' does not exist") 171 | return 172 | 173 | all_resampled_samples = [] 174 | old_arg_files = glob.glob(DEFAULT_PATHS.outputs+"/"+old_path+"/**/*.json", recursive=True) 175 | if len(old_arg_files) > 0: 176 | print("Resampling "+str(len(old_arg_files)) + " output samples...") 177 | for arg_file in old_arg_files: 178 | args_file_dict = gdl.load_json(arg_file); assert(args_file_dict) 179 | if args_file_dict["n"] < 1: continue # skip samples that were endlessly repeated 180 | 181 | output_resample_args = argparse.Namespace(**(args_file_dict | vars(resample_args))) # merge with original args 182 | output_resample_args.n = 1 183 | output_resample_args.output_path = new_path # ensure output goes to specified path, regardless of output_path in args 184 | 185 | try: 186 | samples = asyncio.run(gdl.get_samples(output_resample_args)) # get_samples is a coroutine and must be run on an event loop 187 | all_resampled_samples.extend(samples) 188 | except KeyboardInterrupt: 189 | print("Aborting resample...") 190 | return 191 | except Exception as e: 192 | print("Error in gdl.get_samples '" + str(e) + "'") 193 | else: 194 | print("No outputs found in '" + str(DEFAULT_PATHS.outputs+"/"+old_path) + "' to resample") 195 | 196 | return 197 | 198 | def cli_save_comparison_grid(*paths, **kwargs): 199 | DEFAULT_PATHS = gdl.DEFAULT_PATHS # local alias, as above 200 | args = argparse.Namespace(**kwargs) 201 | if not "mode" in args: args.mode="columns" 202 | else: args.mode = args.mode.lower() 203 | if not "file" in args: grid_filename = "compare.jpg" 204 | else: grid_filename = args.file 205 | if "compare_output_path" in args: 206 | if args.compare_output_path: 207 | grid_filename =
gdl.DEFAULT_PATHS.outputs+"/"+args.compare_output_path+"/"+grid_filename 208 | else: 209 | grid_filename = gdl.DEFAULT_PATHS.outputs+"/"+grid_filename 210 | else: 211 | grid_filename = gdl.DEFAULT_PATHS.outputs+"/"+grid_filename 212 | 213 | num_paths = len(paths) 214 | path_samples = [] 215 | 216 | max_sample_width = 0 # keep track of the largest image in all the folders to make everything fit in the event of non-uniform size 217 | max_sample_height = 0 218 | for path in paths: 219 | assert(type(path) == str) 220 | path_files = glob.glob(gdl.DEFAULT_PATHS.outputs+"/"+path+"/*.png") 221 | # filter with a comprehension; removing items from a list while iterating over it skips elements 222 | path_files = [file for file in path_files if not os.path.basename(file).startswith("grid_")] # exclude grid images from comparison grids 223 | path_files = sorted(path_files) 224 | 225 | samples = [] 226 | for file in path_files: 227 | img = cv2.imread(file) 228 | max_sample_width = np.maximum(max_sample_width, img.shape[0]) 229 | max_sample_height = np.maximum(max_sample_height, img.shape[1]) 230 | samples.append(img) 231 | path_samples.append(samples) 232 | 233 | max_path_samples = 0 234 | for path_sample_list in path_samples: 235 | if len(path_sample_list) > max_path_samples: max_path_samples = len(path_sample_list) 236 | 237 | if args.mode != "rows": layout = (max_path_samples, num_paths) 238 | else: layout = (num_paths, max_path_samples) 239 | np_grid = np.zeros((layout[0] * max_sample_width, layout[1] * max_sample_height, 3), dtype="uint8") 240 | 241 | for x in range(len(path_samples)): 242 | for y in range(len(path_samples[x])): 243 | sample = path_samples[x][y] 244 | paste_x = x * max_sample_width 245 | paste_y = y * max_sample_height 246 | if args.mode != "rows": 247 | paste_x = y * max_sample_width 248 | paste_y = x * max_sample_height 249 | np_grid[paste_x:paste_x+sample.shape[0], paste_y:paste_y+sample.shape[1], :] = sample[:] # paste at the sample's own size so smaller images don't raise a shape mismatch 250 | 251 | (pathlib.Path(grid_filename).parents[0]).mkdir(exist_ok=True, parents=True) 252 | cv2.imwrite(grid_filename, np_grid) 253 | print("Saved "
+ grid_filename) 254 | return 255 | 256 | def cli_run_script(script_name, args=None, **kwargs): 257 | global cli_locals 258 | script_path = gdl.DEFAULT_PATHS.inputs+"/scripts/"+script_name+".py" 259 | args_dict = {"cli_args": args, "kwargs": kwargs} 260 | script_locals = dict(globals(), **(vars(cli_locals) | args_dict)) 261 | try: 262 | with open(script_path, "r") as script_file: 263 | exec(script_file.read(), script_locals) 264 | except KeyboardInterrupt: 265 | print("Okay, cancelling...") 266 | except Exception as e: 267 | raise 268 | 269 | if args: # if an args object was passed, update the object with the results from the script 270 | result_args = script_locals.get("args", None) 271 | if type(result_args) == Namespace: 272 | args.__dict__ = result_args.__dict__ 273 | return 274 | 275 | def cli_show_samplers(): 276 | for sampler in gdl.GRPC_SERVER_SUPPORTED_SAMPLERS_LIST: 277 | print("sampler='"+sampler+"'") 278 | return 279 | 280 | def cli_show_models(): 281 | gdl.show_models() 282 | return 283 | 284 | def cli_clear(): 285 | if os.name == "nt": os.system("cls") 286 | else: os.system("clear") 287 | return 288 | 289 | def cli_help(): 290 | global VERSION_STRING, INTERACTIVE_MODE_BANNER_STRING 291 | print(VERSION_STRING+INTERACTIVE_MODE_BANNER_STRING+"\n") 292 | return 293 | 294 | def cli_exit(): 295 | exit(0) 296 | 297 | 298 | if __name__ == "__main__": 299 | main() -------------------------------------------------------------------------------- /inputs/.gitignore: -------------------------------------------------------------------------------- 1 | # Ignore everything in this directory 2 | * 3 | # Except this file 4 | !.gitignore 5 | !README.md 6 | !scripts/ 7 | !endzoom.png -------------------------------------------------------------------------------- /inputs/README.md: -------------------------------------------------------------------------------- 1 | - This folder is used for saved input images, argument files, and g-diffuser cli scripts 2 | 
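The CLI functions above (`cli_default_args`, `cli_get_samples`) layer keyword overrides onto a defaults `Namespace` with the Python 3.9+ dict-merge operator: `Namespace(**(vars(defaults) | overrides))`. A minimal standalone sketch of that pattern — the names and default values below are illustrative, not g-diffuser's actual defaults:

```python
from argparse import Namespace

def merge_args(defaults: Namespace, **overrides) -> Namespace:
    # dict merge (PEP 584): keys on the right-hand side win, so overrides
    # replace matching defaults and every other default is inherited
    return Namespace(**(vars(defaults) | overrides))

defaults = Namespace(prompt="", cfg_scale=7.5, steps=50)
args = merge_args(defaults, prompt="pillars of creation", cfg_scale=15.0)
print(args.cfg_scale, args.steps)  # overridden value, inherited default
```

Because the merge builds a new dict, the shared defaults object is never mutated by per-call overrides, which is why repeated calls keep starting from the same baseline.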
-------------------------------------------------------------------------------- /inputs/endzoom.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/parlance-zz/g-diffuser-bot/a003394476cb5495d1705daa013b94829a57c121/inputs/endzoom.png -------------------------------------------------------------------------------- /inputs/scripts/.gitignore: -------------------------------------------------------------------------------- 1 | # Ignore everything in this directory 2 | * 3 | # Except this file 4 | !.gitignore 5 | !*.py 6 | !README.md 7 | 3d_test.py 8 | collage_maker.py -------------------------------------------------------------------------------- /inputs/scripts/README.md: -------------------------------------------------------------------------------- 1 | - This folder is used for g-diffuser cli script files. These are python files that can do anything python can, but run in the context of 2 | the g-diffuser interactive cli / python interpreter. You can use cli scripts to automate more complex sequences of tasks or comparisons. 3 | -------------------------------------------------------------------------------- /inputs/scripts/zoom_composite.py: -------------------------------------------------------------------------------- 1 | import os 2 | import cv2 3 | import glob 4 | import numpy as np 5 | 6 | import pygame 7 | from pygame.locals import * 8 | from OpenGL.GL import * 9 | from OpenGL.GLU import * 10 | 11 | # this is the folder relative to the output path where the keyframes are stored 12 | args = cli_default_args() 13 | args.zoom_output_path = "zoom_maker" 14 | 15 | args.expand_softness = 50. # **the expand values here should match the values used to create the frames in zoom_maker** 16 | args.expand_space = 10.
17 | args.expand_top = 50 18 | args.expand_bottom = 50 19 | args.expand_left = 50 20 | args.expand_right = 50 21 | 22 | args.zoom_num_interpolated_frames = 60 # number of interpolated frames per keyframe, controls zoom speed (and the expand ratio) 23 | args.zoom_frame_rate = 60 # fps of the output video 24 | args.zoom_output_file = "zoom.mp4" # name of output file (this will be saved in the folder with the key frames) 25 | args.zoom_preview_output = False # if enabled this will show a preview of the video in a window as it renders 26 | args.zoom_out = False # if enabled this will zoom out instead of zooming in 27 | args.zoom_acceleration_smoothing = 0. # if > 0. this slows the start and stop, good values are 1 to 3 28 | args.zoom_video_size = (1920*2, 1080*2) # video output resolution 29 | args.zoom_write_raw_frames = False # set to True to write raw video frames instead of encoding (this will take a lot of disk space) 30 | 31 | # ***************************************************************** 32 | 33 | # if args or keyword args were passed in the cli run command, override the defaults 34 | if cli_args: args = Namespace(**(vars(args) | vars(cli_args))) 35 | if kwargs: args = Namespace(**(vars(args) | kwargs)) 36 | 37 | # find keyframes and sort them 38 | print("Loading keyframes from {0}...".format(gdl.DEFAULT_PATHS.outputs+"/"+args.zoom_output_path)) 39 | frame_filenames = sorted(glob.glob(gdl.DEFAULT_PATHS.outputs+"/"+args.zoom_output_path+"/*.png"), reverse=True) 40 | #frame_filenames = frame_filenames[0:4] # limit to 4 frames for testing 41 | num_keyframes = len(frame_filenames) 42 | 43 | frame0_cv2_image = cv2.imread(frame_filenames[0]) 44 | source_size = (int(frame0_cv2_image.shape[1]), int(frame0_cv2_image.shape[0])) 45 | video_aspect_ratio = args.zoom_video_size[0]/args.zoom_video_size[1] 46 | source_aspect_ratio = source_size[0]/source_size[1] 47 | aspect_adjustmentX = source_size[0] / 1024.#args.zoom_video_size[0] 48 | aspect_adjustmentY = 
source_size[1] / 1024.#args.zoom_video_size[1] 49 | 50 | # setup opengl for compositing via pygame 51 | pygame.init() 52 | pygame.display.set_mode(args.zoom_video_size, HIDDEN|DOUBLEBUF|OPENGL, vsync=0) 53 | gluOrtho2D(-video_aspect_ratio, video_aspect_ratio, -1., 1.) 54 | glDisable(GL_CULL_FACE); glDisable(GL_DEPTH_TEST) 55 | glEnable(GL_TEXTURE_2D); glEnable(GL_BLEND) 56 | glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) 57 | glPixelStorei(GL_UNPACK_ALIGNMENT, 1) 58 | 59 | # load keyframes and generate blending masks 60 | frame_textures = [] 61 | for f in range(num_keyframes): 62 | print("Generating textures {0}/{1}...".format(f+1, num_keyframes)) 63 | cv2_image = cv2.imread(frame_filenames[f]) 64 | if f > 0: np_image = gdl.expand_image(cv2_image, args.expand_top, args.expand_right, args.expand_bottom, args.expand_left, args.expand_softness, args.expand_space) 65 | else: np_image = gdl.expand_image(cv2_image, args.expand_top, args.expand_right, args.expand_bottom, args.expand_left, 0., 0.) 
66 | 67 | frame_textures.append(glGenTextures(1)) 68 | glBindTexture(GL_TEXTURE_2D, frame_textures[f]) 69 | glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR) 70 | glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR) 71 | glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -0.1) 72 | glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, np_image.shape[1], np_image.shape[0], 0, GL_RGBA, GL_UNSIGNED_BYTE, np_image) 73 | glGenerateMipmap(GL_TEXTURE_2D) 74 | 75 | # create video encoder (if we're not writing raw frames instead) 76 | if args.zoom_write_raw_frames == False: 77 | fourcc = cv2.VideoWriter_fourcc(*'mp4v') 78 | print("Creating video of size {0}x{1}...".format(args.zoom_video_size[0], args.zoom_video_size[1])) 79 | video_output_path = gdl.DEFAULT_PATHS.outputs+"/"+gdl.get_noclobber_checked_path(gdl.DEFAULT_PATHS.outputs, args.zoom_output_path+"/"+args.zoom_output_file) 80 | result = cv2.VideoWriter(video_output_path, fourcc, args.zoom_frame_rate, args.zoom_video_size) 81 | frame_pixels = (GLubyte * (3*args.zoom_video_size[0]*args.zoom_video_size[1]))(0) 82 | 83 | if args.zoom_preview_output: # show video window if preview is enabled 84 | pygame.display.set_mode(args.zoom_video_size, SHOWN|DOUBLEBUF|OPENGL, vsync=0) 85 | 86 | start_offset = 1. # start pulled back from the first keyframe 87 | end_offset = 1.
# end zoomed in on the last keyframe 88 | 89 | # create a schedule of time values for each rendered video frame 90 | if args.zoom_acceleration_smoothing > 0.: 91 | t_schedule = np.tanh(np.linspace(-args.zoom_acceleration_smoothing, args.zoom_acceleration_smoothing, args.zoom_num_interpolated_frames * num_keyframes)) 92 | t_schedule = t_schedule - np.min(t_schedule) 93 | t_schedule = t_schedule / np.max(t_schedule) * (num_keyframes+end_offset) + start_offset 94 | else: 95 | t_schedule = np.linspace(start_offset, num_keyframes+end_offset, args.zoom_num_interpolated_frames * num_keyframes) 96 | 97 | if args.zoom_out: 98 | t_schedule = t_schedule[::-1] # reverse the schedule if zooming out 99 | 100 | try: 101 | for f in range(len(t_schedule)): 102 | if (f % args.zoom_frame_rate) == 0: # print progress every (video) second 103 | print("Rendering {0:.2f}%...".format(f/len(t_schedule)*100.)) 104 | t = t_schedule[f] 105 | 106 | glClear(GL_COLOR_BUFFER_BIT) 107 | start_frame = int(np.clip(t+0.5-25., 0, num_keyframes-1)) 108 | end_frame = int(np.clip(t+0.5+25., 1, num_keyframes)) 109 | for f0 in range(start_frame, end_frame): 110 | z = f0 - t 111 | 112 | glPushMatrix() 113 | #glRotatef(t *2.*np.pi, 0., 0., 1.) 114 | scaleX = ((args.expand_left + args.expand_right)/100. +1.) ** (-z) 115 | scaleY = ((args.expand_top + args.expand_bottom)/100. +1.) ** (-z) 116 | glScalef(scaleX * aspect_adjustmentX, scaleY * aspect_adjustmentY, 1.) 117 | 118 | glBindTexture(GL_TEXTURE_2D, frame_textures[f0]) 119 | glBegin(GL_QUADS) 120 | glTexCoord2f(0., 0.); glVertex2f(-1.,-1.) 121 | glTexCoord2f(1., 0.); glVertex2f( 1.,-1.) 122 | glTexCoord2f(1., 1.); glVertex2f( 1., 1.) 123 | glTexCoord2f(0., 1.); glVertex2f(-1., 1.) 
124 | glEnd() 125 | glPopMatrix() 126 | 127 | glReadPixels(0, 0, args.zoom_video_size[0], args.zoom_video_size[1], GL_RGB, GL_UNSIGNED_BYTE, frame_pixels) 128 | np_frame = np.array(frame_pixels).reshape(args.zoom_video_size[1], args.zoom_video_size[0], 3) 129 | 130 | if args.zoom_write_raw_frames == False: 131 | result.write(np_frame) 132 | else: 133 | frame_output_path = gdl.DEFAULT_PATHS.outputs+"/"+args.zoom_output_path+"/video_frames/"+str(f).zfill(8)+".png" 134 | gdl.save_image(np_frame, frame_output_path) 135 | 136 | pygame.display.flip() 137 | for e in pygame.event.get(): 138 | if e.type == pygame.QUIT: 139 | raise Exception("Operation cancelled by user") 140 | 141 | except Exception as e: 142 | print("Error: {0}".format(str(e))) 143 | raise 144 | finally: 145 | pygame.quit() 146 | if args.zoom_write_raw_frames == False: 147 | result.release() 148 | 149 | if args.zoom_write_raw_frames == False: 150 | print("Saved {0}".format(video_output_path)) -------------------------------------------------------------------------------- /inputs/scripts/zoom_maker.py: -------------------------------------------------------------------------------- 1 | # zoom and E N H A N C E 2 | 3 | import glob 4 | import shutil 5 | import os 6 | import numpy as np 7 | from argparse import Namespace 8 | 9 | args = cli_default_args() 10 | args.prompt_style = "dreamlikeart, surreal, surrealism, Textless, Perfect ling, mucheneuve, muchenies ing, Jeffrigheme, realistacking, ejsin the bartgerm, Perfect focus-stic listan, rhads, villess, colourealighelant lit" 11 | args.zoom_prompt_schedule = [ 12 | #"dreamlikeart, surreal, surrealism, Textless, Perfect ling, mucheneuve, muchenies ing, Jeffrigheme, realistacking, ejsin the bartgerm, Perfect focus-stic listan, rhads, villess, colourealighelant lit", 13 | #"dreamlikeart, abstract, surreal", 14 | "desert landscape with a watch buried in the sand", 15 | "a clocktower in the night sky", 16 | "an exotic bird with a crown on its head", 17 | "a huge 
apple tree with a snake wrapped around it", 18 | "a beautiful underwater city with colorful fish", 19 | "an open window with a beautiful view of the ocean", 20 | "a bed of hungry flowers", 21 | "golden stairs leading to a castle in the clouds", 22 | "a lavish banquet hall with a feast on the table", 23 | "a pile of large boulders", 24 | "a powerful river surrounded by trees", 25 | ] 26 | args.zoom_prompt_reset_interval = 1 # the prompt is switched to the next prompt in the list every n samples 27 | args.zoom_prompt_schedule_order = "linear" # rotate through the prompt list in order 28 | #args.zoom_prompt_schedule_order = "random" # uncomment this line to use prompts in random order 29 | args.zoom_interactive_cherrypicker = False # setting this to True will prompt you to interactively accept or reject each keyframe / sample 30 | # currently broken until I can find a better way to distribute opencv2 with appropriate dependencies 31 | args.num_samples = 1 32 | args.zoom_num_frames = 1000 # number of discrete zoom images to sample 33 | # (you can abort / close the program at any time to use the keyframes you have already generated) 34 | 35 | args.expand_softness = 50. 36 | args.expand_space = 10. # distance to hard erase from source image edge 37 | args.expand_top = 50 # amount to expand in each direction in each step 38 | args.expand_bottom = 50 # these values are in % of the original image size 39 | args.expand_left = 50 # exceeding 50% in any direction is not recommended for recursive zooms / pans 40 | args.expand_right = 50 41 | 42 | args.init_image = "" # starting (or rather, ending image file, relative to inputs path). if blank start with a generated image 43 | args.output_path = "zoom_maker" # output path, relative to outputs 44 | args.output_name = "zoom_maker" 45 | 46 | #args.model_name = "stable-diffusion-v1-5-standard" 47 | #args.steps = 80 48 | #args.cfg_scale = 11. 49 | #args.guidance_strength = 0. 
50 | 51 | # if using sd2.x be sure to use a negative prompt 52 | #args.model_name = "stable-diffusion-v2-1-standard" 53 | #args.steps = 80 54 | #args.cfg_scale = 6.#4.2 55 | #args.guidance_strength = 0. 56 | 57 | args.model_name = "dreamlike-diffusion-1.0" 58 | args.steps = 80 59 | args.cfg_scale = 11. 60 | args.guidance_strength = 0. 61 | 62 | args.negative_prompt = "frame, comic book, collage, cropped, oversaturated, signed, greyscale, monotone, vignette, title, text, logo, watermark" 63 | #args.negative_prompt = "watermark, title, label, collage, cropped, highly saturated colors, monotone, vignette" 64 | #args.negative_prompt = "art by lisa frank, blender, cropped, lowres, poorly drawn face, out of frame, poorly drawn hands, blurry, bad art, text, watermark, disfigured, deformed, title, label, collage, vignette" 65 | #args.negative_prompt = "frame, blender, cropped, lowres, poorly drawn face, poorly drawn hands, blurry, bad art, text, watermark, disfigured, deformed, title, label, collage, vignette" 66 | #args.negative_prompt = "blender, lowres, poorly drawn face, blurry, bad art, text, watermark, disfigured, deformed, title, label, collage, vignette" 67 | 68 | # these dims are only for the starting image (if no user-supplied init_image is used) 69 | args.width = 768 70 | args.height = 512 71 | # for each subsequent generation the image is first expanded, but then contracted to fit inside the max width/height 72 | args.max_width = 768 73 | args.max_height = 512 74 | 75 | args.hires_fix = False 76 | args.sampler = "dpmspp_sde" 77 | #args.sampler="dpmspp_2m" 78 | #args.sampler = "k_euler_ancestral" 79 | #args.sampler = "k_dpm_2_ancestral" 80 | #args.sampler = "dpmspp_2" 81 | #args.sampler = "dpmspp_3" 82 | #args.sampler = "dpmspp_2s_ancestral" 83 | 84 | # ***************************************************************** 85 | 86 | # if args or keyword args were passed in the cli run command, override the defaults 87 | if cli_args: args = Namespace(**(vars(args) | 
vars(cli_args))) 88 | if kwargs: args = Namespace(**(vars(args) | kwargs)) 89 | 90 | frame_filenames = sorted(glob.glob(gdl.DEFAULT_PATHS.outputs+"/"+args.output_path+"/*.png"), reverse=True) 91 | if len(frame_filenames) > 0: 92 | args.init_image = "zoom_maker.png" 93 | output_file = frame_filenames[0] 94 | input_file = gdl.DEFAULT_PATHS.inputs+"/"+args.init_image 95 | print("Resuming zoom from {0}...".format(output_file)) 96 | print("Copying from {0} to {1}...".format(output_file, input_file)) 97 | shutil.copyfile(output_file, input_file) 98 | start_frame_index = len(frame_filenames) 99 | else: 100 | print("No frames found in {0} to resume from".format(gdl.DEFAULT_PATHS.outputs+"/"+args.output_path)) 101 | if not args.init_image: print("No init_image specified, generating a random starting image...") 102 | else: print("Starting from '{0}'...".format(args.init_image)) 103 | start_frame_index = 0 104 | 105 | # create zoom frames 106 | i = 0 107 | while i < args.zoom_num_frames: 108 | print("Starting iteration {0} of {1}...".format(i+1, args.zoom_num_frames)) 109 | 110 | # update the prompt according to the multi-prompt schedule 111 | if ((i % args.zoom_prompt_reset_interval) == 0) or (i == 0): 112 | if args.zoom_prompt_schedule_order == "linear": 113 | prompt_index = (i // args.zoom_prompt_reset_interval) % len(args.zoom_prompt_schedule) 114 | elif args.zoom_prompt_schedule_order == "random": 115 | prompt_index = np.random.randint(0, len(args.zoom_prompt_schedule)) 116 | else: 117 | raise Exception("Unknown prompt schedule order '{0}'".format(args.zoom_prompt_schedule_order)) 118 | args.prompt = args.zoom_prompt_schedule[prompt_index] 119 | if "prompt_style" in args: args.prompt = args.prompt_style + ", " + args.prompt 120 | print("prompt: {0}".format(args.prompt)) 121 | 122 | args.img2img_strength = 2. 
123 | args.output_name = "zoom_maker_f{0}".format(str(i+start_frame_index).zfill(4)) 124 | sample(args) 125 | if args.status != 2: break # cancelled or error 126 | 127 | args.init_image = args.output_sample 128 | 129 | # currently disabled / broken because of opencv2 dependencies 130 | """ 131 | if args.zoom_interactive_cherrypicker: 132 | input_key = 32 133 | while chr(input_key).lower() not in ("y","n"): 134 | cv2.imshow("Accept or reject? (y/n):", args.output_sample) 135 | input_key = cv2.waitKey(0) 136 | cv2.destroyAllWindows() 137 | if input_key == -1: break # window closed 138 | if input_key == -1: break # terminate if window closed 139 | if chr(input_key).lower() != "y": 140 | output_file = gdl.DEFAULT_PATHS.outputs+"/"+args.output_file 141 | print("Removing {0} and retrying...".format(output_file)) 142 | os.remove(output_file) 143 | continue 144 | """ 145 | 146 | i += 1 147 | 148 | print("Done!") -------------------------------------------------------------------------------- /modules/__init__.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | 3 | base_path = os.path.dirname(__file__) 4 | sys.path.append(base_path) -------------------------------------------------------------------------------- /modules/g_diffuser_utilities.py: -------------------------------------------------------------------------------- 1 | #import skimage 2 | #from skimage.exposure import match_histograms 3 | #from skimage import color, transform 4 | 5 | import numpy as np 6 | import scipy 7 | 8 | # common utility functions for g-diffuser-lib input / output processing 9 | 10 | def fft2(data): 11 | if data.ndim > 2: # multiple channels 12 | out_fft = np.zeros((data.shape[0], data.shape[1], data.shape[2]), dtype=np.complex128) 13 | for c in range(data.shape[2]): 14 | c_data = data[:,:,c] 15 | out_fft[:,:,c] = np.fft.fft2(np.fft.fftshift(c_data),norm="ortho") 16 | out_fft[:,:,c] = np.fft.ifftshift(out_fft[:,:,c]) 17 | else: # single 
channel 18 | out_fft = np.zeros((data.shape[0], data.shape[1]), dtype=np.complex128) 19 | out_fft[:,:] = np.fft.fft2(np.fft.fftshift(data),norm="ortho") 20 | out_fft[:,:] = np.fft.ifftshift(out_fft[:,:]) 21 | 22 | return out_fft 23 | 24 | def ifft2(data): 25 | if data.ndim > 2: # multiple channels 26 | out_ifft = np.zeros((data.shape[0], data.shape[1], data.shape[2]), dtype=np.complex128) 27 | for c in range(data.shape[2]): 28 | c_data = data[:,:,c] 29 | out_ifft[:,:,c] = np.fft.ifft2(np.fft.fftshift(c_data),norm="ortho") 30 | out_ifft[:,:,c] = np.fft.ifftshift(out_ifft[:,:,c]) 31 | else: # single channel 32 | out_ifft = np.zeros((data.shape[0], data.shape[1]), dtype=np.complex128) 33 | out_ifft[:,:] = np.fft.ifft2(np.fft.fftshift(data),norm="ortho") 34 | out_ifft[:,:] = np.fft.ifftshift(out_ifft[:,:]) 35 | 36 | return out_ifft 37 | 38 | def get_gradient_kernel(width, height, std=3.14, mode="linear"): 39 | window_scale_x = float(width / min(width, height)) # for non-square aspect ratios we still want a circular kernel 40 | window_scale_y = float(height / min(width, height)) 41 | if mode == "gaussian": 42 | x = (np.arange(width) / width * 2. - 1.) * window_scale_x 43 | kx = np.exp(-x*x * std) 44 | if window_scale_x != window_scale_y: 45 | y = (np.arange(height) / height * 2. - 1.) * window_scale_y 46 | ky = np.exp(-y*y * std) 47 | else: 48 | y = x; ky = kx 49 | return np.outer(kx, ky) 50 | elif mode == "linear": 51 | x = (np.arange(width) / width * 2. - 1.) * window_scale_x 52 | if window_scale_x != window_scale_y: 53 | y = (np.arange(height) / height * 2. - 1.) * window_scale_y 54 | else: y = x 55 | return np.clip(1. - np.sqrt(np.add.outer(x*x, y*y)) * std / 3.14, 0., 1.) 
56 | else: 57 | raise ValueError("Unknown mode in get_gradient_kernel: {0}".format(mode)) 58 | 59 | def convolve(data1, data2): # fast convolution with fft 60 | if data1.ndim != data2.ndim: # promote to rgb if mismatch 61 | if data1.ndim < 3: data1 = np_img_grey_to_rgb(data1) 62 | if data2.ndim < 3: data2 = np_img_grey_to_rgb(data2) 63 | return ifft2(fft2(data1) * fft2(data2)) 64 | 65 | def image_blur(data, std=3.14, mode="linear", use_fft=True): 66 | if use_fft: 67 | width = data.shape[0] # (width, height) here simply mirror data.shape; they only need to stay consistent with get_gradient_kernel 68 | height = data.shape[1] 69 | kernel = get_gradient_kernel(width, height, std, mode=mode) 70 | return np.real(convolve(data, kernel / np.sqrt(np.sum(kernel*kernel)))) 71 | else: 72 | k_width = 64 73 | kernel = get_gradient_kernel(k_width, k_width, std, mode=mode) 74 | return np.real(scipy.ndimage.convolve(data, kernel / np.sqrt(np.sum(kernel*kernel)), mode="nearest")) 75 | 76 | def normalize_image(data): 77 | normalized = data - np.min(data) 78 | normalized_max = np.max(normalized) 79 | assert normalized_max > 0. 80 | return normalized / normalized_max 81 | 82 | def np_img_rgb_to_grey(data): 83 | if data.ndim == 2: return data 84 | return np.sum(data, axis=2)/3.
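A minimal, self-contained sketch of the FFT-convolution identity the helpers above rely on. The `_fft2`/`_ifft2` functions below are standalone stand-ins mirroring the module's `fft2`/`ifft2` for the single-channel case; the image and impulse arrays are illustrative, not part of the module.

```python
# Standalone sketch; assumes nothing beyond numpy.
import numpy as np

def _fft2(data):
    # forward transform with the zero-frequency bin centered, ortho-normalized,
    # mirroring fft2() above for a single channel
    return np.fft.ifftshift(np.fft.fft2(np.fft.fftshift(data), norm="ortho"))

def _ifft2(data):
    return np.fft.ifftshift(np.fft.ifft2(np.fft.fftshift(data), norm="ortho"))

# Convolution theorem: multiplying two spectra convolves the images.
# With norm="ortho" each transform carries a 1/sqrt(H*W) factor, so the
# product picks up one extra 1/sqrt(H*W) relative to a plain circular
# convolution; for a 32x32 image that factor is 1/32.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
impulse = np.zeros((32, 32))
impulse[16, 16] = 1.0  # unit impulse at the image center

out = np.real(_ifft2(_fft2(img) * _fft2(impulse)))
# convolving with a centered impulse reproduces the image, scaled by 1/32
assert np.allclose(out, img / 32.0)
```

Because of the fftshift/ifftshift bookkeeping, a kernel whose peak sits at the array center (as `get_gradient_kernel` produces) acts like an ordinary centered blur kernel when passed through `convolve()`.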
85 | 86 | def np_img_grey_to_rgb(data): 87 | if data.ndim == 3: return data 88 | return np.expand_dims(data, 2) * np.ones((1, 1, 3)) 89 | 90 | """ 91 | def np_img_rgb_to_hsv(data): 92 | return color.rgb2hsv(data) 93 | 94 | def np_img_hsv_to_rgb(data): 95 | return color.hsv2rgb(data) 96 | 97 | def hsv_blend_image(image, match_to, hsv_mask=None): 98 | width = image.shape[0] 99 | height = image.shape[1] 100 | 101 | if type(hsv_mask) != np.ndarray: 102 | hsv_mask = np.ones((width, height, 3)) 103 | image_hsv = np_img_rgb_to_hsv(image) 104 | match_to_hsv = np_img_rgb_to_hsv(match_to) 105 | 106 | return np_img_hsv_to_rgb(image_hsv * (1.-hsv_mask) + hsv_mask * match_to_hsv) 107 | """ -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/__init__.py: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/completion_pb2.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by the protocol buffer compiler. DO NOT EDIT! 
3 | # source: completion.proto 4 | """Generated protocol buffer code.""" 5 | from google.protobuf.internal import builder as _builder 6 | from google.protobuf import descriptor as _descriptor 7 | from google.protobuf import descriptor_pool as _descriptor_pool 8 | from google.protobuf import symbol_database as _symbol_database 9 | # @@protoc_insertion_point(imports) 10 | 11 | _sym_db = _symbol_database.Default() 12 | 13 | 14 | 15 | 16 | DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x10\x63ompletion.proto\x12\x07gooseai\"!\n\x05Token\x12\x0c\n\x04text\x18\x01 \x01(\t\x12\n\n\x02id\x18\x02 \x01(\r\"(\n\x06Tokens\x12\x1e\n\x06tokens\x18\x01 \x03(\x0b\x32\x0e.gooseai.Token\"E\n\x06Prompt\x12\x0e\n\x04text\x18\x01 \x01(\tH\x00\x12!\n\x06tokens\x18\x02 \x01(\x0b\x32\x0f.gooseai.TokensH\x00\x42\x08\n\x06prompt\":\n\tLogitBias\x12\x1f\n\x06tokens\x18\x01 \x01(\x0b\x32\x0f.gooseai.Tokens\x12\x0c\n\x04\x62ias\x18\x02 \x01(\x01\"1\n\x0bLogitBiases\x12\"\n\x06\x62iases\x18\x01 \x03(\x0b\x32\x12.gooseai.LogitBias\"\xbb\x02\n\x0f\x46requencyParams\x12\x1d\n\x10presence_penalty\x18\x01 \x01(\x01H\x00\x88\x01\x01\x12\x1e\n\x11\x66requency_penalty\x18\x02 \x01(\x01H\x01\x88\x01\x01\x12\x1f\n\x12repetition_penalty\x18\x03 \x01(\x01H\x02\x88\x01\x01\x12%\n\x18repetition_penalty_slope\x18\x04 \x01(\x01H\x03\x88\x01\x01\x12%\n\x18repetition_penalty_range\x18\x05 \x01(\rH\x04\x88\x01\x01\x42\x13\n\x11_presence_penaltyB\x14\n\x12_frequency_penaltyB\x15\n\x13_repetition_penaltyB\x1b\n\x19_repetition_penalty_slopeB\x1b\n\x19_repetition_penalty_range\"\x9a\x02\n\x0eSamplingParams\x12&\n\x05order\x18\x01 \x03(\x0e\x32\x17.gooseai.SamplingMethod\x12\x18\n\x0btemperature\x18\x02 \x01(\x01H\x00\x88\x01\x01\x12\x12\n\x05top_p\x18\x03 \x01(\x01H\x01\x88\x01\x01\x12\x12\n\x05top_k\x18\x04 \x01(\rH\x02\x88\x01\x01\x12\x1f\n\x12tail_free_sampling\x18\x05 \x01(\x01H\x03\x88\x01\x01\x12\x16\n\ttypical_p\x18\x06 \x01(\x01H\x04\x88\x01\x01\x12\x12\n\x05top_a\x18\x07 
\x01(\x01H\x05\x88\x01\x01\x42\x0e\n\x0c_temperatureB\x08\n\x06_top_pB\x08\n\x06_top_kB\x15\n\x13_tail_free_samplingB\x0c\n\n_typical_pB\x08\n\x06_top_a\"\xe4\x01\n\x0bModelParams\x12\x35\n\x0fsampling_params\x18\x01 \x01(\x0b\x32\x17.gooseai.SamplingParamsH\x00\x88\x01\x01\x12\x37\n\x10\x66requency_params\x18\x02 \x01(\x0b\x32\x18.gooseai.FrequencyParamsH\x01\x88\x01\x01\x12-\n\nlogit_bias\x18\x03 \x01(\x0b\x32\x14.gooseai.LogitBiasesH\x02\x88\x01\x01\x42\x12\n\x10_sampling_paramsB\x13\n\x11_frequency_paramsB\r\n\x0b_logit_bias\"$\n\x04\x45\x63ho\x12\x12\n\x05index\x18\x01 \x01(\x05H\x00\x88\x01\x01\x42\x08\n\x06_index\"*\n\x0fModuleEmbedding\x12\n\n\x02id\x18\x01 \x01(\t\x12\x0b\n\x03key\x18\x02 \x01(\t\"C\n\x06Tensor\x12\x1d\n\x03typ\x18\x01 \x01(\x0e\x32\x10.gooseai.NumType\x12\x0c\n\x04\x64ims\x18\x02 \x03(\r\x12\x0c\n\x04\x64\x61ta\x18\x03 \x01(\x0c\"~\n\tEmbedding\x12\x1e\n\x03raw\x18\x01 \x01(\x0b\x32\x0f.gooseai.TensorH\x00\x12*\n\x06module\x18\x02 \x01(\x0b\x32\x18.gooseai.ModuleEmbeddingH\x00\x12\x10\n\x03pos\x18\x03 \x01(\rH\x01\x88\x01\x01\x42\x0b\n\tembeddingB\x06\n\x04_pos\"\x98\x02\n\x0c\x45ngineParams\x12\x17\n\nmax_tokens\x18\x01 \x01(\rH\x00\x88\x01\x01\x12\x18\n\x0b\x63ompletions\x18\x02 \x01(\rH\x01\x88\x01\x01\x12\x15\n\x08logprobs\x18\x03 \x01(\rH\x02\x88\x01\x01\x12 \n\x04\x65\x63ho\x18\x04 \x01(\x0b\x32\r.gooseai.EchoH\x03\x88\x01\x01\x12\x14\n\x07\x62\x65st_of\x18\x05 \x01(\rH\x04\x88\x01\x01\x12\x1d\n\x04stop\x18\x06 \x03(\x0b\x32\x0f.gooseai.Prompt\x12\x17\n\nmin_tokens\x18\x07 \x01(\rH\x05\x88\x01\x01\x42\r\n\x0b_max_tokensB\x0e\n\x0c_completionsB\x0b\n\t_logprobsB\x07\n\x05_echoB\n\n\x08_best_ofB\r\n\x0b_min_tokens\"3\n\x0bRequestMeta\x12\x16\n\tstreaming\x18\x01 \x01(\x08H\x00\x88\x01\x01\x42\x0c\n\n_streaming\"\xf8\x02\n\x07Request\x12\x11\n\tengine_id\x18\x01 \x01(\t\x12\x1f\n\x06prompt\x18\x02 \x03(\x0b\x32\x0f.gooseai.Prompt\x12/\n\x0cmodel_params\x18\x03 
\x01(\x0b\x32\x14.gooseai.ModelParamsH\x00\x88\x01\x01\x12\x31\n\rengine_params\x18\x04 \x01(\x0b\x32\x15.gooseai.EngineParamsH\x01\x88\x01\x01\x12\x17\n\nrequest_id\x18\x05 \x01(\tH\x02\x88\x01\x01\x12&\n\nembeddings\x18\x06 \x03(\x0b\x32\x12.gooseai.Embedding\x12\x1c\n\x0forigin_received\x18\x07 \x01(\x04H\x03\x88\x01\x01\x12\'\n\x04meta\x18\x08 \x01(\x0b\x32\x14.gooseai.RequestMetaH\x04\x88\x01\x01\x42\x0f\n\r_model_paramsB\x10\n\x0e_engine_paramsB\r\n\x0b_request_idB\x12\n\x10_origin_receivedB\x07\n\x05_meta\"z\n\x07LogProb\x12\x1d\n\x05token\x18\x01 \x01(\x0b\x32\x0e.gooseai.Token\x12\x14\n\x07logprob\x18\x02 \x01(\x01H\x00\x88\x01\x01\x12\x1b\n\x0elogprob_before\x18\x03 \x01(\x01H\x01\x88\x01\x01\x42\n\n\x08_logprobB\x11\n\x0f_logprob_before\"3\n\rTokenLogProbs\x12\"\n\x08logprobs\x18\x01 \x03(\x0b\x32\x10.gooseai.LogProb\"\x98\x01\n\x08LogProbs\x12&\n\x06tokens\x18\x01 \x01(\x0b\x32\x16.gooseai.TokenLogProbs\x12\x13\n\x0btext_offset\x18\x02 \x03(\r\x12#\n\x03top\x18\x03 \x03(\x0b\x32\x16.gooseai.TokenLogProbs\x12*\n\ntop_before\x18\x04 \x03(\x0b\x32\x16.gooseai.TokenLogProbs\"\xa2\x01\n\nCompletion\x12\x0c\n\x04text\x18\x01 \x01(\t\x12\r\n\x05index\x18\x02 \x01(\r\x12#\n\x08logprobs\x18\x03 \x01(\x0b\x32\x11.gooseai.LogProbs\x12,\n\rfinish_reason\x18\x04 \x01(\x0e\x32\x15.gooseai.FinishReason\x12\x13\n\x0btoken_index\x18\x05 \x01(\r\x12\x0f\n\x07started\x18\x06 \x01(\x04\"n\n\nAnswerMeta\x12\x13\n\x06gpu_id\x18\x01 \x01(\tH\x00\x88\x01\x01\x12\x13\n\x06\x63pu_id\x18\x02 \x01(\tH\x01\x88\x01\x01\x12\x14\n\x07node_id\x18\x03 \x01(\tH\x02\x88\x01\x01\x42\t\n\x07_gpu_idB\t\n\x07_cpu_idB\n\n\x08_node_id\"\xd6\x01\n\x06\x41nswer\x12\x11\n\tanswer_id\x18\x01 \x01(\t\x12\x0f\n\x07\x63reated\x18\x02 \x01(\x04\x12\r\n\x05model\x18\x03 \x01(\t\x12$\n\x07\x63hoices\x18\x04 \x03(\x0b\x32\x13.gooseai.Completion\x12\x17\n\nrequest_id\x18\x05 \x01(\tH\x00\x88\x01\x01\x12\x1a\n\x12inference_received\x18\x06 \x01(\x04\x12&\n\x04meta\x18\x07 
\x01(\x0b\x32\x13.gooseai.AnswerMetaH\x01\x88\x01\x01\x42\r\n\x0b_request_idB\x07\n\x05_meta*9\n\x0c\x46inishReason\x12\x08\n\x04NULL\x10\x00\x12\n\n\x06LENGTH\x10\x01\x12\x08\n\x04STOP\x10\x02\x12\t\n\x05\x45RROR\x10\x03*d\n\x0eSamplingMethod\x12\x08\n\x04NONE\x10\x00\x12\x0f\n\x0bTEMPERATURE\x10\x01\x12\t\n\x05TOP_K\x10\x02\x12\t\n\x05TOP_P\x10\x03\x12\x07\n\x03TFS\x10\x04\x12\t\n\x05TOP_A\x10\x05\x12\r\n\tTYPICAL_P\x10\x06*\'\n\x07NumType\x12\x08\n\x04\x46P16\x10\x00\x12\x08\n\x04\x46P32\x10\x01\x12\x08\n\x04\x42\x46\x31\x36\x10\x02\x32H\n\x11\x43ompletionService\x12\x33\n\nCompletion\x12\x10.gooseai.Request\x1a\x0f.gooseai.Answer\"\x00\x30\x01\x42\x0fZ\r./;completionb\x06proto3') 17 | 18 | _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) 19 | _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'completion_pb2', globals()) 20 | if _descriptor._USE_C_DESCRIPTORS == False: 21 | 22 | DESCRIPTOR._options = None 23 | DESCRIPTOR._serialized_options = b'Z\r./;completion' 24 | _FINISHREASON._serialized_start=2942 25 | _FINISHREASON._serialized_end=2999 26 | _SAMPLINGMETHOD._serialized_start=3001 27 | _SAMPLINGMETHOD._serialized_end=3101 28 | _NUMTYPE._serialized_start=3103 29 | _NUMTYPE._serialized_end=3142 30 | _TOKEN._serialized_start=29 31 | _TOKEN._serialized_end=62 32 | _TOKENS._serialized_start=64 33 | _TOKENS._serialized_end=104 34 | _PROMPT._serialized_start=106 35 | _PROMPT._serialized_end=175 36 | _LOGITBIAS._serialized_start=177 37 | _LOGITBIAS._serialized_end=235 38 | _LOGITBIASES._serialized_start=237 39 | _LOGITBIASES._serialized_end=286 40 | _FREQUENCYPARAMS._serialized_start=289 41 | _FREQUENCYPARAMS._serialized_end=604 42 | _SAMPLINGPARAMS._serialized_start=607 43 | _SAMPLINGPARAMS._serialized_end=889 44 | _MODELPARAMS._serialized_start=892 45 | _MODELPARAMS._serialized_end=1120 46 | _ECHO._serialized_start=1122 47 | _ECHO._serialized_end=1158 48 | _MODULEEMBEDDING._serialized_start=1160 49 | _MODULEEMBEDDING._serialized_end=1202 50 | 
_TENSOR._serialized_start=1204 51 | _TENSOR._serialized_end=1271 52 | _EMBEDDING._serialized_start=1273 53 | _EMBEDDING._serialized_end=1399 54 | _ENGINEPARAMS._serialized_start=1402 55 | _ENGINEPARAMS._serialized_end=1682 56 | _REQUESTMETA._serialized_start=1684 57 | _REQUESTMETA._serialized_end=1735 58 | _REQUEST._serialized_start=1738 59 | _REQUEST._serialized_end=2114 60 | _LOGPROB._serialized_start=2116 61 | _LOGPROB._serialized_end=2238 62 | _TOKENLOGPROBS._serialized_start=2240 63 | _TOKENLOGPROBS._serialized_end=2291 64 | _LOGPROBS._serialized_start=2294 65 | _LOGPROBS._serialized_end=2446 66 | _COMPLETION._serialized_start=2449 67 | _COMPLETION._serialized_end=2611 68 | _ANSWERMETA._serialized_start=2613 69 | _ANSWERMETA._serialized_end=2723 70 | _ANSWER._serialized_start=2726 71 | _ANSWER._serialized_end=2940 72 | _COMPLETIONSERVICE._serialized_start=3144 73 | _COMPLETIONSERVICE._serialized_end=3216 74 | # @@protoc_insertion_point(module_scope) 75 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/completion_pb2_grpc.py: -------------------------------------------------------------------------------- 1 | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! 2 | """Client and server classes corresponding to protobuf-defined services.""" 3 | import grpc 4 | 5 | import completion_pb2 as completion__pb2 6 | 7 | 8 | class CompletionServiceStub(object): 9 | """Missing associated documentation comment in .proto file.""" 10 | 11 | def __init__(self, channel): 12 | """Constructor. 13 | 14 | Args: 15 | channel: A grpc.Channel. 
16 | """ 17 | self.Completion = channel.unary_stream( 18 | '/gooseai.CompletionService/Completion', 19 | request_serializer=completion__pb2.Request.SerializeToString, 20 | response_deserializer=completion__pb2.Answer.FromString, 21 | ) 22 | 23 | 24 | class CompletionServiceServicer(object): 25 | """Missing associated documentation comment in .proto file.""" 26 | 27 | def Completion(self, request, context): 28 | """Missing associated documentation comment in .proto file.""" 29 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 30 | context.set_details('Method not implemented!') 31 | raise NotImplementedError('Method not implemented!') 32 | 33 | 34 | def add_CompletionServiceServicer_to_server(servicer, server): 35 | rpc_method_handlers = { 36 | 'Completion': grpc.unary_stream_rpc_method_handler( 37 | servicer.Completion, 38 | request_deserializer=completion__pb2.Request.FromString, 39 | response_serializer=completion__pb2.Answer.SerializeToString, 40 | ), 41 | } 42 | generic_handler = grpc.method_handlers_generic_handler( 43 | 'gooseai.CompletionService', rpc_method_handlers) 44 | server.add_generic_rpc_handlers((generic_handler,)) 45 | 46 | 47 | # This class is part of an EXPERIMENTAL API. 
48 | class CompletionService(object): 49 | """Missing associated documentation comment in .proto file.""" 50 | 51 | @staticmethod 52 | def Completion(request, 53 | target, 54 | options=(), 55 | channel_credentials=None, 56 | call_credentials=None, 57 | insecure=False, 58 | compression=None, 59 | wait_for_ready=None, 60 | timeout=None, 61 | metadata=None): 62 | return grpc.experimental.unary_stream(request, target, '/gooseai.CompletionService/Completion', 63 | completion__pb2.Request.SerializeToString, 64 | completion__pb2.Answer.FromString, 65 | options, channel_credentials, 66 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 67 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/dashboard_pb2.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by the protocol buffer compiler. DO NOT EDIT! 3 | # source: dashboard.proto 4 | """Generated protocol buffer code.""" 5 | from google.protobuf.internal import builder as _builder 6 | from google.protobuf import descriptor as _descriptor 7 | from google.protobuf import descriptor_pool as _descriptor_pool 8 | from google.protobuf import symbol_database as _symbol_database 9 | # @@protoc_insertion_point(imports) 10 | 11 | _sym_db = _symbol_database.Default() 12 | 13 | 14 | 15 | 16 | DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x0f\x64\x61shboard.proto\x12\x07gooseai\"\xa9\x01\n\x12OrganizationMember\x12+\n\x0corganization\x18\x01 \x01(\x0b\x32\x15.gooseai.Organization\x12 \n\x04user\x18\x02 \x01(\x0b\x32\r.gooseai.UserH\x00\x88\x01\x01\x12\'\n\x04role\x18\x03 \x01(\x0e\x32\x19.gooseai.OrganizationRole\x12\x12\n\nis_default\x18\x04 \x01(\x08\x42\x07\n\x05_user\"h\n\x11OrganizationGrant\x12\x16\n\x0e\x61mount_granted\x18\x01 \x01(\x01\x12\x13\n\x0b\x61mount_used\x18\x02 \x01(\x01\x12\x12\n\nexpires_at\x18\x03 
\x01(\x04\x12\x12\n\ngranted_at\x18\x04 \x01(\x04\"V\n\x17OrganizationPaymentInfo\x12\x0f\n\x07\x62\x61lance\x18\x01 \x01(\x01\x12*\n\x06grants\x18\x02 \x03(\x0b\x32\x1a.gooseai.OrganizationGrant\"I\n\x16OrganizationAutoCharge\x12\x0f\n\x07\x65nabled\x18\x01 \x01(\x08\x12\n\n\x02id\x18\x02 \x01(\t\x12\x12\n\ncreated_at\x18\x03 \x01(\x04\"\xbc\x02\n\x0cOrganization\x12\n\n\x02id\x18\x01 \x01(\t\x12\x0c\n\x04name\x18\x02 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x03 \x01(\t\x12,\n\x07members\x18\x04 \x03(\x0b\x32\x1b.gooseai.OrganizationMember\x12;\n\x0cpayment_info\x18\x05 \x01(\x0b\x32 .gooseai.OrganizationPaymentInfoH\x00\x88\x01\x01\x12\x1f\n\x12stripe_customer_id\x18\x06 \x01(\tH\x01\x88\x01\x01\x12\x39\n\x0b\x61uto_charge\x18\x07 \x01(\x0b\x32\x1f.gooseai.OrganizationAutoChargeH\x02\x88\x01\x01\x42\x0f\n\r_payment_infoB\x15\n\x13_stripe_customer_idB\x0e\n\x0c_auto_charge\"<\n\x06\x41PIKey\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\x11\n\tis_secret\x18\x02 \x01(\x08\x12\x12\n\ncreated_at\x18\x03 \x01(\x04\"\xf7\x01\n\x04User\x12\n\n\x02id\x18\x01 \x01(\t\x12\x14\n\x07\x61uth_id\x18\x02 \x01(\tH\x00\x88\x01\x01\x12\x17\n\x0fprofile_picture\x18\x03 \x01(\t\x12\r\n\x05\x65mail\x18\x04 \x01(\t\x12\x32\n\rorganizations\x18\x05 \x03(\x0b\x32\x1b.gooseai.OrganizationMember\x12!\n\x08\x61pi_keys\x18\x07 \x03(\x0b\x32\x0f.gooseai.APIKey\x12\x12\n\ncreated_at\x18\x08 \x01(\x04\x12\x1b\n\x0e\x65mail_verified\x18\t \x01(\x08H\x01\x88\x01\x01\x42\n\n\x08_auth_idB\x11\n\x0f_email_verified\"9\n\x08\x43ostData\x12\x15\n\ramount_tokens\x18\x01 \x01(\r\x12\x16\n\x0e\x61mount_credits\x18\x02 \x01(\x01\"\xba\x01\n\x0bUsageMetric\x12\x11\n\toperation\x18\x01 \x01(\t\x12\x0e\n\x06\x65ngine\x18\x02 \x01(\t\x12%\n\ninput_cost\x18\x03 \x01(\x0b\x32\x11.gooseai.CostData\x12&\n\x0boutput_cost\x18\x04 \x01(\x0b\x32\x11.gooseai.CostData\x12\x11\n\x04user\x18\x05 \x01(\tH\x00\x88\x01\x01\x12\x1d\n\x15\x61ggregation_timestamp\x18\x06 
\x01(\x04\x42\x07\n\x05_user\":\n\tCostTotal\x12\x15\n\ramount_tokens\x18\x01 \x01(\r\x12\x16\n\x0e\x61mount_credits\x18\x02 \x01(\x01\"e\n\x10TotalMetricsData\x12\'\n\x0binput_total\x18\x01 \x01(\x0b\x32\x12.gooseai.CostTotal\x12(\n\x0coutput_total\x18\x02 \x01(\x0b\x32\x12.gooseai.CostTotal\"Z\n\x07Metrics\x12%\n\x07metrics\x18\x01 \x03(\x0b\x32\x14.gooseai.UsageMetric\x12(\n\x05total\x18\x02 \x01(\x0b\x32\x19.gooseai.TotalMetricsData\"\x0e\n\x0c\x45mptyRequest\"$\n\x16GetOrganizationRequest\x12\n\n\x02id\x18\x01 \x01(\t\"\x99\x01\n\x11GetMetricsRequest\x12\x17\n\x0forganization_id\x18\x01 \x01(\t\x12\x14\n\x07user_id\x18\x02 \x01(\tH\x00\x88\x01\x01\x12\x12\n\nrange_from\x18\x03 \x01(\x04\x12\x10\n\x08range_to\x18\x04 \x01(\x04\x12#\n\x1binclude_per_request_metrics\x18\x05 \x01(\x08\x42\n\n\x08_user_id\"\"\n\rAPIKeyRequest\x12\x11\n\tis_secret\x18\x01 \x01(\x08\"\x1f\n\x11\x41PIKeyFindRequest\x12\n\n\x02id\x18\x01 \x01(\t\";\n UpdateDefaultOrganizationRequest\x12\x17\n\x0forganization_id\x18\x01 \x01(\t\"\"\n\x0e\x43lientSettings\x12\x10\n\x08settings\x18\x01 \x01(\x0c\"\x80\x01\n\x1d\x43reateAutoChargeIntentRequest\x12\x17\n\x0forganization_id\x18\x01 \x01(\t\x12\x17\n\x0fmonthly_maximum\x18\x02 \x01(\x04\x12\x15\n\rminimum_value\x18\x03 \x01(\x04\x12\x16\n\x0e\x61mount_credits\x18\x04 \x01(\x04\">\n\x13\x43reateChargeRequest\x12\x0e\n\x06\x61mount\x18\x01 \x01(\x04\x12\x17\n\x0forganization_id\x18\x02 \x01(\t\"R\n\x11GetChargesRequest\x12\x17\n\x0forganization_id\x18\x01 \x01(\t\x12\x12\n\nrange_from\x18\x02 \x01(\x04\x12\x10\n\x08range_to\x18\x03 \x01(\x04\"z\n\x06\x43harge\x12\n\n\x02id\x18\x01 \x01(\t\x12\x0c\n\x04paid\x18\x02 \x01(\x08\x12\x14\n\x0creceipt_link\x18\x03 \x01(\t\x12\x14\n\x0cpayment_link\x18\x04 \x01(\t\x12\x12\n\ncreated_at\x18\x05 \x01(\x04\x12\x16\n\x0e\x61mount_credits\x18\x06 \x01(\x04\"+\n\x07\x43harges\x12 \n\x07\x63harges\x18\x01 \x03(\x0b\x32\x0f.gooseai.Charge\"/\n\x14GetAutoChargeRequest\x12\x17\n\x0forganization_id\x18\x01 
\x01(\t\"\x90\x01\n\x10\x41utoChargeIntent\x12\n\n\x02id\x18\x01 \x01(\t\x12\x14\n\x0cpayment_link\x18\x02 \x01(\t\x12\x12\n\ncreated_at\x18\x03 \x01(\x04\x12\x17\n\x0fmonthly_maximum\x18\x04 \x01(\x04\x12\x15\n\rminimum_value\x18\x05 \x01(\x04\x12\x16\n\x0e\x61mount_credits\x18\x06 \x01(\x04\"5\n\x15UpdateUserInfoRequest\x12\x12\n\x05\x65mail\x18\x01 \x01(\tH\x00\x88\x01\x01\x42\x08\n\x06_email\"*\n\x18UserPasswordChangeTicket\x12\x0e\n\x06ticket\x18\x01 \x01(\t*9\n\x10OrganizationRole\x12\n\n\x06MEMBER\x10\x00\x12\x0e\n\nACCOUNTANT\x10\x01\x12\t\n\x05OWNER\x10\x02\x32\xf7\x08\n\x10\x44\x61shboardService\x12-\n\x05GetMe\x12\x15.gooseai.EmptyRequest\x1a\r.gooseai.User\x12I\n\x0fGetOrganization\x12\x1f.gooseai.GetOrganizationRequest\x1a\x15.gooseai.Organization\x12:\n\nGetMetrics\x12\x1a.gooseai.GetMetricsRequest\x1a\x10.gooseai.Metrics\x12\x37\n\x0c\x43reateAPIKey\x12\x16.gooseai.APIKeyRequest\x1a\x0f.gooseai.APIKey\x12;\n\x0c\x44\x65leteAPIKey\x12\x1a.gooseai.APIKeyFindRequest\x1a\x0f.gooseai.APIKey\x12U\n\x19UpdateDefaultOrganization\x12).gooseai.UpdateDefaultOrganizationRequest\x1a\r.gooseai.User\x12\x43\n\x11GetClientSettings\x12\x15.gooseai.EmptyRequest\x1a\x17.gooseai.ClientSettings\x12\x45\n\x11SetClientSettings\x12\x17.gooseai.ClientSettings\x1a\x17.gooseai.ClientSettings\x12?\n\x0eUpdateUserInfo\x12\x1e.gooseai.UpdateUserInfoRequest\x1a\r.gooseai.User\x12V\n\x1a\x43reatePasswordChangeTicket\x12\x15.gooseai.EmptyRequest\x1a!.gooseai.UserPasswordChangeTicket\x12\x35\n\rDeleteAccount\x12\x15.gooseai.EmptyRequest\x1a\r.gooseai.User\x12=\n\x0c\x43reateCharge\x12\x1c.gooseai.CreateChargeRequest\x1a\x0f.gooseai.Charge\x12:\n\nGetCharges\x12\x1a.gooseai.GetChargesRequest\x1a\x10.gooseai.Charges\x12[\n\x16\x43reateAutoChargeIntent\x12&.gooseai.CreateAutoChargeIntentRequest\x1a\x19.gooseai.AutoChargeIntent\x12[\n\x16UpdateAutoChargeIntent\x12&.gooseai.CreateAutoChargeIntentRequest\x1a\x19.gooseai.AutoChargeIntent\x12O\n\x13GetAutoChargeIntent\x12\x1d.gooseai.GetAutoC
hargeRequest\x1a\x19.gooseai.AutoChargeIntentB:Z8github.com/stability-ai/api-interfaces/gooseai/dashboardb\x06proto3') 17 | 18 | _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) 19 | _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'dashboard_pb2', globals()) 20 | if _descriptor._USE_C_DESCRIPTORS == False: 21 | 22 | DESCRIPTOR._options = None 23 | DESCRIPTOR._serialized_options = b'Z8github.com/stability-ai/api-interfaces/gooseai/dashboard' 24 | _ORGANIZATIONROLE._serialized_start=2722 25 | _ORGANIZATIONROLE._serialized_end=2779 26 | _ORGANIZATIONMEMBER._serialized_start=29 27 | _ORGANIZATIONMEMBER._serialized_end=198 28 | _ORGANIZATIONGRANT._serialized_start=200 29 | _ORGANIZATIONGRANT._serialized_end=304 30 | _ORGANIZATIONPAYMENTINFO._serialized_start=306 31 | _ORGANIZATIONPAYMENTINFO._serialized_end=392 32 | _ORGANIZATIONAUTOCHARGE._serialized_start=394 33 | _ORGANIZATIONAUTOCHARGE._serialized_end=467 34 | _ORGANIZATION._serialized_start=470 35 | _ORGANIZATION._serialized_end=786 36 | _APIKEY._serialized_start=788 37 | _APIKEY._serialized_end=848 38 | _USER._serialized_start=851 39 | _USER._serialized_end=1098 40 | _COSTDATA._serialized_start=1100 41 | _COSTDATA._serialized_end=1157 42 | _USAGEMETRIC._serialized_start=1160 43 | _USAGEMETRIC._serialized_end=1346 44 | _COSTTOTAL._serialized_start=1348 45 | _COSTTOTAL._serialized_end=1406 46 | _TOTALMETRICSDATA._serialized_start=1408 47 | _TOTALMETRICSDATA._serialized_end=1509 48 | _METRICS._serialized_start=1511 49 | _METRICS._serialized_end=1601 50 | _EMPTYREQUEST._serialized_start=1603 51 | _EMPTYREQUEST._serialized_end=1617 52 | _GETORGANIZATIONREQUEST._serialized_start=1619 53 | _GETORGANIZATIONREQUEST._serialized_end=1655 54 | _GETMETRICSREQUEST._serialized_start=1658 55 | _GETMETRICSREQUEST._serialized_end=1811 56 | _APIKEYREQUEST._serialized_start=1813 57 | _APIKEYREQUEST._serialized_end=1847 58 | _APIKEYFINDREQUEST._serialized_start=1849 59 | _APIKEYFINDREQUEST._serialized_end=1880 60 | 
_UPDATEDEFAULTORGANIZATIONREQUEST._serialized_start=1882 61 | _UPDATEDEFAULTORGANIZATIONREQUEST._serialized_end=1941 62 | _CLIENTSETTINGS._serialized_start=1943 63 | _CLIENTSETTINGS._serialized_end=1977 64 | _CREATEAUTOCHARGEINTENTREQUEST._serialized_start=1980 65 | _CREATEAUTOCHARGEINTENTREQUEST._serialized_end=2108 66 | _CREATECHARGEREQUEST._serialized_start=2110 67 | _CREATECHARGEREQUEST._serialized_end=2172 68 | _GETCHARGESREQUEST._serialized_start=2174 69 | _GETCHARGESREQUEST._serialized_end=2256 70 | _CHARGE._serialized_start=2258 71 | _CHARGE._serialized_end=2380 72 | _CHARGES._serialized_start=2382 73 | _CHARGES._serialized_end=2425 74 | _GETAUTOCHARGEREQUEST._serialized_start=2427 75 | _GETAUTOCHARGEREQUEST._serialized_end=2474 76 | _AUTOCHARGEINTENT._serialized_start=2477 77 | _AUTOCHARGEINTENT._serialized_end=2621 78 | _UPDATEUSERINFOREQUEST._serialized_start=2623 79 | _UPDATEUSERINFOREQUEST._serialized_end=2676 80 | _USERPASSWORDCHANGETICKET._serialized_start=2678 81 | _USERPASSWORDCHANGETICKET._serialized_end=2720 82 | _DASHBOARDSERVICE._serialized_start=2782 83 | _DASHBOARDSERVICE._serialized_end=3925 84 | # @@protoc_insertion_point(module_scope) 85 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/dashboard_pb2.pyi: -------------------------------------------------------------------------------- 1 | """ 2 | @generated by mypy-protobuf. Do not edit manually! 
3 | isort:skip_file 4 | """ 5 | import builtins 6 | import collections.abc 7 | import google.protobuf.descriptor 8 | import google.protobuf.internal.containers 9 | import google.protobuf.internal.enum_type_wrapper 10 | import google.protobuf.message 11 | import sys 12 | import typing 13 | 14 | if sys.version_info >= (3, 10): 15 | import typing as typing_extensions 16 | else: 17 | import typing_extensions 18 | 19 | DESCRIPTOR: google.protobuf.descriptor.FileDescriptor 20 | 21 | class _OrganizationRole: 22 | ValueType = typing.NewType("ValueType", builtins.int) 23 | V: typing_extensions.TypeAlias = ValueType 24 | 25 | class _OrganizationRoleEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_OrganizationRole.ValueType], builtins.type): 26 | DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor 27 | MEMBER: _OrganizationRole.ValueType # 0 28 | ACCOUNTANT: _OrganizationRole.ValueType # 1 29 | OWNER: _OrganizationRole.ValueType # 2 30 | 31 | class OrganizationRole(_OrganizationRole, metaclass=_OrganizationRoleEnumTypeWrapper): ... 32 | 33 | MEMBER: OrganizationRole.ValueType # 0 34 | ACCOUNTANT: OrganizationRole.ValueType # 1 35 | OWNER: OrganizationRole.ValueType # 2 36 | global___OrganizationRole = OrganizationRole 37 | 38 | @typing_extensions.final 39 | class OrganizationMember(google.protobuf.message.Message): 40 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 41 | 42 | ORGANIZATION_FIELD_NUMBER: builtins.int 43 | USER_FIELD_NUMBER: builtins.int 44 | ROLE_FIELD_NUMBER: builtins.int 45 | IS_DEFAULT_FIELD_NUMBER: builtins.int 46 | @property 47 | def organization(self) -> global___Organization: ... 48 | @property 49 | def user(self) -> global___User: ... 
50 | role: global___OrganizationRole.ValueType 51 | is_default: builtins.bool 52 | def __init__( 53 | self, 54 | *, 55 | organization: global___Organization | None = ..., 56 | user: global___User | None = ..., 57 | role: global___OrganizationRole.ValueType = ..., 58 | is_default: builtins.bool = ..., 59 | ) -> None: ... 60 | def HasField(self, field_name: typing_extensions.Literal["_user", b"_user", "organization", b"organization", "user", b"user"]) -> builtins.bool: ... 61 | def ClearField(self, field_name: typing_extensions.Literal["_user", b"_user", "is_default", b"is_default", "organization", b"organization", "role", b"role", "user", b"user"]) -> None: ... 62 | def WhichOneof(self, oneof_group: typing_extensions.Literal["_user", b"_user"]) -> typing_extensions.Literal["user"] | None: ... 63 | 64 | global___OrganizationMember = OrganizationMember 65 | 66 | @typing_extensions.final 67 | class OrganizationGrant(google.protobuf.message.Message): 68 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 69 | 70 | AMOUNT_GRANTED_FIELD_NUMBER: builtins.int 71 | AMOUNT_USED_FIELD_NUMBER: builtins.int 72 | EXPIRES_AT_FIELD_NUMBER: builtins.int 73 | GRANTED_AT_FIELD_NUMBER: builtins.int 74 | amount_granted: builtins.float 75 | amount_used: builtins.float 76 | expires_at: builtins.int 77 | granted_at: builtins.int 78 | def __init__( 79 | self, 80 | *, 81 | amount_granted: builtins.float = ..., 82 | amount_used: builtins.float = ..., 83 | expires_at: builtins.int = ..., 84 | granted_at: builtins.int = ..., 85 | ) -> None: ... 86 | def ClearField(self, field_name: typing_extensions.Literal["amount_granted", b"amount_granted", "amount_used", b"amount_used", "expires_at", b"expires_at", "granted_at", b"granted_at"]) -> None: ... 
87 | 88 | global___OrganizationGrant = OrganizationGrant 89 | 90 | @typing_extensions.final 91 | class OrganizationPaymentInfo(google.protobuf.message.Message): 92 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 93 | 94 | BALANCE_FIELD_NUMBER: builtins.int 95 | GRANTS_FIELD_NUMBER: builtins.int 96 | balance: builtins.float 97 | @property 98 | def grants(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___OrganizationGrant]: ... 99 | def __init__( 100 | self, 101 | *, 102 | balance: builtins.float = ..., 103 | grants: collections.abc.Iterable[global___OrganizationGrant] | None = ..., 104 | ) -> None: ... 105 | def ClearField(self, field_name: typing_extensions.Literal["balance", b"balance", "grants", b"grants"]) -> None: ... 106 | 107 | global___OrganizationPaymentInfo = OrganizationPaymentInfo 108 | 109 | @typing_extensions.final 110 | class OrganizationAutoCharge(google.protobuf.message.Message): 111 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 112 | 113 | ENABLED_FIELD_NUMBER: builtins.int 114 | ID_FIELD_NUMBER: builtins.int 115 | CREATED_AT_FIELD_NUMBER: builtins.int 116 | enabled: builtins.bool 117 | id: builtins.str 118 | created_at: builtins.int 119 | def __init__( 120 | self, 121 | *, 122 | enabled: builtins.bool = ..., 123 | id: builtins.str = ..., 124 | created_at: builtins.int = ..., 125 | ) -> None: ... 126 | def ClearField(self, field_name: typing_extensions.Literal["created_at", b"created_at", "enabled", b"enabled", "id", b"id"]) -> None: ... 
127 | 128 | global___OrganizationAutoCharge = OrganizationAutoCharge 129 | 130 | @typing_extensions.final 131 | class Organization(google.protobuf.message.Message): 132 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 133 | 134 | ID_FIELD_NUMBER: builtins.int 135 | NAME_FIELD_NUMBER: builtins.int 136 | DESCRIPTION_FIELD_NUMBER: builtins.int 137 | MEMBERS_FIELD_NUMBER: builtins.int 138 | PAYMENT_INFO_FIELD_NUMBER: builtins.int 139 | STRIPE_CUSTOMER_ID_FIELD_NUMBER: builtins.int 140 | AUTO_CHARGE_FIELD_NUMBER: builtins.int 141 | id: builtins.str 142 | name: builtins.str 143 | description: builtins.str 144 | @property 145 | def members(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___OrganizationMember]: ... 146 | @property 147 | def payment_info(self) -> global___OrganizationPaymentInfo: ... 148 | stripe_customer_id: builtins.str 149 | @property 150 | def auto_charge(self) -> global___OrganizationAutoCharge: ... 151 | def __init__( 152 | self, 153 | *, 154 | id: builtins.str = ..., 155 | name: builtins.str = ..., 156 | description: builtins.str = ..., 157 | members: collections.abc.Iterable[global___OrganizationMember] | None = ..., 158 | payment_info: global___OrganizationPaymentInfo | None = ..., 159 | stripe_customer_id: builtins.str | None = ..., 160 | auto_charge: global___OrganizationAutoCharge | None = ..., 161 | ) -> None: ... 162 | def HasField(self, field_name: typing_extensions.Literal["_auto_charge", b"_auto_charge", "_payment_info", b"_payment_info", "_stripe_customer_id", b"_stripe_customer_id", "auto_charge", b"auto_charge", "payment_info", b"payment_info", "stripe_customer_id", b"stripe_customer_id"]) -> builtins.bool: ... 
163 | def ClearField(self, field_name: typing_extensions.Literal["_auto_charge", b"_auto_charge", "_payment_info", b"_payment_info", "_stripe_customer_id", b"_stripe_customer_id", "auto_charge", b"auto_charge", "description", b"description", "id", b"id", "members", b"members", "name", b"name", "payment_info", b"payment_info", "stripe_customer_id", b"stripe_customer_id"]) -> None: ... 164 | @typing.overload 165 | def WhichOneof(self, oneof_group: typing_extensions.Literal["_auto_charge", b"_auto_charge"]) -> typing_extensions.Literal["auto_charge"] | None: ... 166 | @typing.overload 167 | def WhichOneof(self, oneof_group: typing_extensions.Literal["_payment_info", b"_payment_info"]) -> typing_extensions.Literal["payment_info"] | None: ... 168 | @typing.overload 169 | def WhichOneof(self, oneof_group: typing_extensions.Literal["_stripe_customer_id", b"_stripe_customer_id"]) -> typing_extensions.Literal["stripe_customer_id"] | None: ... 170 | 171 | global___Organization = Organization 172 | 173 | @typing_extensions.final 174 | class APIKey(google.protobuf.message.Message): 175 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 176 | 177 | KEY_FIELD_NUMBER: builtins.int 178 | IS_SECRET_FIELD_NUMBER: builtins.int 179 | CREATED_AT_FIELD_NUMBER: builtins.int 180 | key: builtins.str 181 | is_secret: builtins.bool 182 | created_at: builtins.int 183 | def __init__( 184 | self, 185 | *, 186 | key: builtins.str = ..., 187 | is_secret: builtins.bool = ..., 188 | created_at: builtins.int = ..., 189 | ) -> None: ... 190 | def ClearField(self, field_name: typing_extensions.Literal["created_at", b"created_at", "is_secret", b"is_secret", "key", b"key"]) -> None: ... 
191 | 192 | global___APIKey = APIKey 193 | 194 | @typing_extensions.final 195 | class User(google.protobuf.message.Message): 196 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 197 | 198 | ID_FIELD_NUMBER: builtins.int 199 | AUTH_ID_FIELD_NUMBER: builtins.int 200 | PROFILE_PICTURE_FIELD_NUMBER: builtins.int 201 | EMAIL_FIELD_NUMBER: builtins.int 202 | ORGANIZATIONS_FIELD_NUMBER: builtins.int 203 | API_KEYS_FIELD_NUMBER: builtins.int 204 | CREATED_AT_FIELD_NUMBER: builtins.int 205 | EMAIL_VERIFIED_FIELD_NUMBER: builtins.int 206 | id: builtins.str 207 | auth_id: builtins.str 208 | profile_picture: builtins.str 209 | email: builtins.str 210 | @property 211 | def organizations(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___OrganizationMember]: ... 212 | @property 213 | def api_keys(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___APIKey]: ... 214 | created_at: builtins.int 215 | email_verified: builtins.bool 216 | def __init__( 217 | self, 218 | *, 219 | id: builtins.str = ..., 220 | auth_id: builtins.str | None = ..., 221 | profile_picture: builtins.str = ..., 222 | email: builtins.str = ..., 223 | organizations: collections.abc.Iterable[global___OrganizationMember] | None = ..., 224 | api_keys: collections.abc.Iterable[global___APIKey] | None = ..., 225 | created_at: builtins.int = ..., 226 | email_verified: builtins.bool | None = ..., 227 | ) -> None: ... 228 | def HasField(self, field_name: typing_extensions.Literal["_auth_id", b"_auth_id", "_email_verified", b"_email_verified", "auth_id", b"auth_id", "email_verified", b"email_verified"]) -> builtins.bool: ... 
229 | def ClearField(self, field_name: typing_extensions.Literal["_auth_id", b"_auth_id", "_email_verified", b"_email_verified", "api_keys", b"api_keys", "auth_id", b"auth_id", "created_at", b"created_at", "email", b"email", "email_verified", b"email_verified", "id", b"id", "organizations", b"organizations", "profile_picture", b"profile_picture"]) -> None: ... 230 | @typing.overload 231 | def WhichOneof(self, oneof_group: typing_extensions.Literal["_auth_id", b"_auth_id"]) -> typing_extensions.Literal["auth_id"] | None: ... 232 | @typing.overload 233 | def WhichOneof(self, oneof_group: typing_extensions.Literal["_email_verified", b"_email_verified"]) -> typing_extensions.Literal["email_verified"] | None: ... 234 | 235 | global___User = User 236 | 237 | @typing_extensions.final 238 | class CostData(google.protobuf.message.Message): 239 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 240 | 241 | AMOUNT_TOKENS_FIELD_NUMBER: builtins.int 242 | AMOUNT_CREDITS_FIELD_NUMBER: builtins.int 243 | amount_tokens: builtins.int 244 | amount_credits: builtins.float 245 | def __init__( 246 | self, 247 | *, 248 | amount_tokens: builtins.int = ..., 249 | amount_credits: builtins.float = ..., 250 | ) -> None: ... 251 | def ClearField(self, field_name: typing_extensions.Literal["amount_credits", b"amount_credits", "amount_tokens", b"amount_tokens"]) -> None: ... 252 | 253 | global___CostData = CostData 254 | 255 | @typing_extensions.final 256 | class UsageMetric(google.protobuf.message.Message): 257 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 258 | 259 | OPERATION_FIELD_NUMBER: builtins.int 260 | ENGINE_FIELD_NUMBER: builtins.int 261 | INPUT_COST_FIELD_NUMBER: builtins.int 262 | OUTPUT_COST_FIELD_NUMBER: builtins.int 263 | USER_FIELD_NUMBER: builtins.int 264 | AGGREGATION_TIMESTAMP_FIELD_NUMBER: builtins.int 265 | operation: builtins.str 266 | engine: builtins.str 267 | @property 268 | def input_cost(self) -> global___CostData: ... 
269 | @property 270 | def output_cost(self) -> global___CostData: ... 271 | user: builtins.str 272 | aggregation_timestamp: builtins.int 273 | def __init__( 274 | self, 275 | *, 276 | operation: builtins.str = ..., 277 | engine: builtins.str = ..., 278 | input_cost: global___CostData | None = ..., 279 | output_cost: global___CostData | None = ..., 280 | user: builtins.str | None = ..., 281 | aggregation_timestamp: builtins.int = ..., 282 | ) -> None: ... 283 | def HasField(self, field_name: typing_extensions.Literal["_user", b"_user", "input_cost", b"input_cost", "output_cost", b"output_cost", "user", b"user"]) -> builtins.bool: ... 284 | def ClearField(self, field_name: typing_extensions.Literal["_user", b"_user", "aggregation_timestamp", b"aggregation_timestamp", "engine", b"engine", "input_cost", b"input_cost", "operation", b"operation", "output_cost", b"output_cost", "user", b"user"]) -> None: ... 285 | def WhichOneof(self, oneof_group: typing_extensions.Literal["_user", b"_user"]) -> typing_extensions.Literal["user"] | None: ... 286 | 287 | global___UsageMetric = UsageMetric 288 | 289 | @typing_extensions.final 290 | class CostTotal(google.protobuf.message.Message): 291 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 292 | 293 | AMOUNT_TOKENS_FIELD_NUMBER: builtins.int 294 | AMOUNT_CREDITS_FIELD_NUMBER: builtins.int 295 | amount_tokens: builtins.int 296 | amount_credits: builtins.float 297 | def __init__( 298 | self, 299 | *, 300 | amount_tokens: builtins.int = ..., 301 | amount_credits: builtins.float = ..., 302 | ) -> None: ... 303 | def ClearField(self, field_name: typing_extensions.Literal["amount_credits", b"amount_credits", "amount_tokens", b"amount_tokens"]) -> None: ... 
304 | 305 | global___CostTotal = CostTotal 306 | 307 | @typing_extensions.final 308 | class TotalMetricsData(google.protobuf.message.Message): 309 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 310 | 311 | INPUT_TOTAL_FIELD_NUMBER: builtins.int 312 | OUTPUT_TOTAL_FIELD_NUMBER: builtins.int 313 | @property 314 | def input_total(self) -> global___CostTotal: ... 315 | @property 316 | def output_total(self) -> global___CostTotal: ... 317 | def __init__( 318 | self, 319 | *, 320 | input_total: global___CostTotal | None = ..., 321 | output_total: global___CostTotal | None = ..., 322 | ) -> None: ... 323 | def HasField(self, field_name: typing_extensions.Literal["input_total", b"input_total", "output_total", b"output_total"]) -> builtins.bool: ... 324 | def ClearField(self, field_name: typing_extensions.Literal["input_total", b"input_total", "output_total", b"output_total"]) -> None: ... 325 | 326 | global___TotalMetricsData = TotalMetricsData 327 | 328 | @typing_extensions.final 329 | class Metrics(google.protobuf.message.Message): 330 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 331 | 332 | METRICS_FIELD_NUMBER: builtins.int 333 | TOTAL_FIELD_NUMBER: builtins.int 334 | @property 335 | def metrics(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___UsageMetric]: ... 336 | @property 337 | def total(self) -> global___TotalMetricsData: ... 338 | def __init__( 339 | self, 340 | *, 341 | metrics: collections.abc.Iterable[global___UsageMetric] | None = ..., 342 | total: global___TotalMetricsData | None = ..., 343 | ) -> None: ... 344 | def HasField(self, field_name: typing_extensions.Literal["total", b"total"]) -> builtins.bool: ... 345 | def ClearField(self, field_name: typing_extensions.Literal["metrics", b"metrics", "total", b"total"]) -> None: ... 
346 | 347 | global___Metrics = Metrics 348 | 349 | @typing_extensions.final 350 | class EmptyRequest(google.protobuf.message.Message): 351 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 352 | 353 | def __init__( 354 | self, 355 | ) -> None: ... 356 | 357 | global___EmptyRequest = EmptyRequest 358 | 359 | @typing_extensions.final 360 | class GetOrganizationRequest(google.protobuf.message.Message): 361 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 362 | 363 | ID_FIELD_NUMBER: builtins.int 364 | id: builtins.str 365 | def __init__( 366 | self, 367 | *, 368 | id: builtins.str = ..., 369 | ) -> None: ... 370 | def ClearField(self, field_name: typing_extensions.Literal["id", b"id"]) -> None: ... 371 | 372 | global___GetOrganizationRequest = GetOrganizationRequest 373 | 374 | @typing_extensions.final 375 | class GetMetricsRequest(google.protobuf.message.Message): 376 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 377 | 378 | ORGANIZATION_ID_FIELD_NUMBER: builtins.int 379 | USER_ID_FIELD_NUMBER: builtins.int 380 | RANGE_FROM_FIELD_NUMBER: builtins.int 381 | RANGE_TO_FIELD_NUMBER: builtins.int 382 | INCLUDE_PER_REQUEST_METRICS_FIELD_NUMBER: builtins.int 383 | organization_id: builtins.str 384 | user_id: builtins.str 385 | range_from: builtins.int 386 | range_to: builtins.int 387 | include_per_request_metrics: builtins.bool 388 | def __init__( 389 | self, 390 | *, 391 | organization_id: builtins.str = ..., 392 | user_id: builtins.str | None = ..., 393 | range_from: builtins.int = ..., 394 | range_to: builtins.int = ..., 395 | include_per_request_metrics: builtins.bool = ..., 396 | ) -> None: ... 397 | def HasField(self, field_name: typing_extensions.Literal["_user_id", b"_user_id", "user_id", b"user_id"]) -> builtins.bool: ... 
398 | def ClearField(self, field_name: typing_extensions.Literal["_user_id", b"_user_id", "include_per_request_metrics", b"include_per_request_metrics", "organization_id", b"organization_id", "range_from", b"range_from", "range_to", b"range_to", "user_id", b"user_id"]) -> None: ... 399 | def WhichOneof(self, oneof_group: typing_extensions.Literal["_user_id", b"_user_id"]) -> typing_extensions.Literal["user_id"] | None: ... 400 | 401 | global___GetMetricsRequest = GetMetricsRequest 402 | 403 | @typing_extensions.final 404 | class APIKeyRequest(google.protobuf.message.Message): 405 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 406 | 407 | IS_SECRET_FIELD_NUMBER: builtins.int 408 | is_secret: builtins.bool 409 | def __init__( 410 | self, 411 | *, 412 | is_secret: builtins.bool = ..., 413 | ) -> None: ... 414 | def ClearField(self, field_name: typing_extensions.Literal["is_secret", b"is_secret"]) -> None: ... 415 | 416 | global___APIKeyRequest = APIKeyRequest 417 | 418 | @typing_extensions.final 419 | class APIKeyFindRequest(google.protobuf.message.Message): 420 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 421 | 422 | ID_FIELD_NUMBER: builtins.int 423 | id: builtins.str 424 | def __init__( 425 | self, 426 | *, 427 | id: builtins.str = ..., 428 | ) -> None: ... 429 | def ClearField(self, field_name: typing_extensions.Literal["id", b"id"]) -> None: ... 430 | 431 | global___APIKeyFindRequest = APIKeyFindRequest 432 | 433 | @typing_extensions.final 434 | class UpdateDefaultOrganizationRequest(google.protobuf.message.Message): 435 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 436 | 437 | ORGANIZATION_ID_FIELD_NUMBER: builtins.int 438 | organization_id: builtins.str 439 | def __init__( 440 | self, 441 | *, 442 | organization_id: builtins.str = ..., 443 | ) -> None: ... 444 | def ClearField(self, field_name: typing_extensions.Literal["organization_id", b"organization_id"]) -> None: ... 
445 | 446 | global___UpdateDefaultOrganizationRequest = UpdateDefaultOrganizationRequest 447 | 448 | @typing_extensions.final 449 | class ClientSettings(google.protobuf.message.Message): 450 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 451 | 452 | SETTINGS_FIELD_NUMBER: builtins.int 453 | settings: builtins.bytes 454 | def __init__( 455 | self, 456 | *, 457 | settings: builtins.bytes = ..., 458 | ) -> None: ... 459 | def ClearField(self, field_name: typing_extensions.Literal["settings", b"settings"]) -> None: ... 460 | 461 | global___ClientSettings = ClientSettings 462 | 463 | @typing_extensions.final 464 | class CreateAutoChargeIntentRequest(google.protobuf.message.Message): 465 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 466 | 467 | ORGANIZATION_ID_FIELD_NUMBER: builtins.int 468 | MONTHLY_MAXIMUM_FIELD_NUMBER: builtins.int 469 | MINIMUM_VALUE_FIELD_NUMBER: builtins.int 470 | AMOUNT_CREDITS_FIELD_NUMBER: builtins.int 471 | organization_id: builtins.str 472 | monthly_maximum: builtins.int 473 | minimum_value: builtins.int 474 | amount_credits: builtins.int 475 | def __init__( 476 | self, 477 | *, 478 | organization_id: builtins.str = ..., 479 | monthly_maximum: builtins.int = ..., 480 | minimum_value: builtins.int = ..., 481 | amount_credits: builtins.int = ..., 482 | ) -> None: ... 483 | def ClearField(self, field_name: typing_extensions.Literal["amount_credits", b"amount_credits", "minimum_value", b"minimum_value", "monthly_maximum", b"monthly_maximum", "organization_id", b"organization_id"]) -> None: ... 
484 | 485 | global___CreateAutoChargeIntentRequest = CreateAutoChargeIntentRequest 486 | 487 | @typing_extensions.final 488 | class CreateChargeRequest(google.protobuf.message.Message): 489 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 490 | 491 | AMOUNT_FIELD_NUMBER: builtins.int 492 | ORGANIZATION_ID_FIELD_NUMBER: builtins.int 493 | amount: builtins.int 494 | organization_id: builtins.str 495 | def __init__( 496 | self, 497 | *, 498 | amount: builtins.int = ..., 499 | organization_id: builtins.str = ..., 500 | ) -> None: ... 501 | def ClearField(self, field_name: typing_extensions.Literal["amount", b"amount", "organization_id", b"organization_id"]) -> None: ... 502 | 503 | global___CreateChargeRequest = CreateChargeRequest 504 | 505 | @typing_extensions.final 506 | class GetChargesRequest(google.protobuf.message.Message): 507 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 508 | 509 | ORGANIZATION_ID_FIELD_NUMBER: builtins.int 510 | RANGE_FROM_FIELD_NUMBER: builtins.int 511 | RANGE_TO_FIELD_NUMBER: builtins.int 512 | organization_id: builtins.str 513 | range_from: builtins.int 514 | range_to: builtins.int 515 | def __init__( 516 | self, 517 | *, 518 | organization_id: builtins.str = ..., 519 | range_from: builtins.int = ..., 520 | range_to: builtins.int = ..., 521 | ) -> None: ... 522 | def ClearField(self, field_name: typing_extensions.Literal["organization_id", b"organization_id", "range_from", b"range_from", "range_to", b"range_to"]) -> None: ... 
523 | 524 | global___GetChargesRequest = GetChargesRequest 525 | 526 | @typing_extensions.final 527 | class Charge(google.protobuf.message.Message): 528 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 529 | 530 | ID_FIELD_NUMBER: builtins.int 531 | PAID_FIELD_NUMBER: builtins.int 532 | RECEIPT_LINK_FIELD_NUMBER: builtins.int 533 | PAYMENT_LINK_FIELD_NUMBER: builtins.int 534 | CREATED_AT_FIELD_NUMBER: builtins.int 535 | AMOUNT_CREDITS_FIELD_NUMBER: builtins.int 536 | id: builtins.str 537 | paid: builtins.bool 538 | receipt_link: builtins.str 539 | payment_link: builtins.str 540 | created_at: builtins.int 541 | amount_credits: builtins.int 542 | def __init__( 543 | self, 544 | *, 545 | id: builtins.str = ..., 546 | paid: builtins.bool = ..., 547 | receipt_link: builtins.str = ..., 548 | payment_link: builtins.str = ..., 549 | created_at: builtins.int = ..., 550 | amount_credits: builtins.int = ..., 551 | ) -> None: ... 552 | def ClearField(self, field_name: typing_extensions.Literal["amount_credits", b"amount_credits", "created_at", b"created_at", "id", b"id", "paid", b"paid", "payment_link", b"payment_link", "receipt_link", b"receipt_link"]) -> None: ... 553 | 554 | global___Charge = Charge 555 | 556 | @typing_extensions.final 557 | class Charges(google.protobuf.message.Message): 558 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 559 | 560 | CHARGES_FIELD_NUMBER: builtins.int 561 | @property 562 | def charges(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Charge]: ... 563 | def __init__( 564 | self, 565 | *, 566 | charges: collections.abc.Iterable[global___Charge] | None = ..., 567 | ) -> None: ... 568 | def ClearField(self, field_name: typing_extensions.Literal["charges", b"charges"]) -> None: ... 
569 | 570 | global___Charges = Charges 571 | 572 | @typing_extensions.final 573 | class GetAutoChargeRequest(google.protobuf.message.Message): 574 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 575 | 576 | ORGANIZATION_ID_FIELD_NUMBER: builtins.int 577 | organization_id: builtins.str 578 | def __init__( 579 | self, 580 | *, 581 | organization_id: builtins.str = ..., 582 | ) -> None: ... 583 | def ClearField(self, field_name: typing_extensions.Literal["organization_id", b"organization_id"]) -> None: ... 584 | 585 | global___GetAutoChargeRequest = GetAutoChargeRequest 586 | 587 | @typing_extensions.final 588 | class AutoChargeIntent(google.protobuf.message.Message): 589 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 590 | 591 | ID_FIELD_NUMBER: builtins.int 592 | PAYMENT_LINK_FIELD_NUMBER: builtins.int 593 | CREATED_AT_FIELD_NUMBER: builtins.int 594 | MONTHLY_MAXIMUM_FIELD_NUMBER: builtins.int 595 | MINIMUM_VALUE_FIELD_NUMBER: builtins.int 596 | AMOUNT_CREDITS_FIELD_NUMBER: builtins.int 597 | id: builtins.str 598 | payment_link: builtins.str 599 | created_at: builtins.int 600 | monthly_maximum: builtins.int 601 | minimum_value: builtins.int 602 | amount_credits: builtins.int 603 | def __init__( 604 | self, 605 | *, 606 | id: builtins.str = ..., 607 | payment_link: builtins.str = ..., 608 | created_at: builtins.int = ..., 609 | monthly_maximum: builtins.int = ..., 610 | minimum_value: builtins.int = ..., 611 | amount_credits: builtins.int = ..., 612 | ) -> None: ... 613 | def ClearField(self, field_name: typing_extensions.Literal["amount_credits", b"amount_credits", "created_at", b"created_at", "id", b"id", "minimum_value", b"minimum_value", "monthly_maximum", b"monthly_maximum", "payment_link", b"payment_link"]) -> None: ... 
614 | 615 | global___AutoChargeIntent = AutoChargeIntent 616 | 617 | @typing_extensions.final 618 | class UpdateUserInfoRequest(google.protobuf.message.Message): 619 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 620 | 621 | EMAIL_FIELD_NUMBER: builtins.int 622 | email: builtins.str 623 | def __init__( 624 | self, 625 | *, 626 | email: builtins.str | None = ..., 627 | ) -> None: ... 628 | def HasField(self, field_name: typing_extensions.Literal["_email", b"_email", "email", b"email"]) -> builtins.bool: ... 629 | def ClearField(self, field_name: typing_extensions.Literal["_email", b"_email", "email", b"email"]) -> None: ... 630 | def WhichOneof(self, oneof_group: typing_extensions.Literal["_email", b"_email"]) -> typing_extensions.Literal["email"] | None: ... 631 | 632 | global___UpdateUserInfoRequest = UpdateUserInfoRequest 633 | 634 | @typing_extensions.final 635 | class UserPasswordChangeTicket(google.protobuf.message.Message): 636 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 637 | 638 | TICKET_FIELD_NUMBER: builtins.int 639 | ticket: builtins.str 640 | def __init__( 641 | self, 642 | *, 643 | ticket: builtins.str = ..., 644 | ) -> None: ... 645 | def ClearField(self, field_name: typing_extensions.Literal["ticket", b"ticket"]) -> None: ... 646 | 647 | global___UserPasswordChangeTicket = UserPasswordChangeTicket 648 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/dashboard_pb2_grpc.py: -------------------------------------------------------------------------------- 1 | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! 2 | """Client and server classes corresponding to protobuf-defined services.""" 3 | import grpc 4 | 5 | import dashboard_pb2 as dashboard__pb2 6 | 7 | 8 | class DashboardServiceStub(object): 9 | """Missing associated documentation comment in .proto file.""" 10 | 11 | def __init__(self, channel): 12 | """Constructor. 
13 | 14 | Args: 15 | channel: A grpc.Channel. 16 | """ 17 | self.GetMe = channel.unary_unary( 18 | '/gooseai.DashboardService/GetMe', 19 | request_serializer=dashboard__pb2.EmptyRequest.SerializeToString, 20 | response_deserializer=dashboard__pb2.User.FromString, 21 | ) 22 | self.GetOrganization = channel.unary_unary( 23 | '/gooseai.DashboardService/GetOrganization', 24 | request_serializer=dashboard__pb2.GetOrganizationRequest.SerializeToString, 25 | response_deserializer=dashboard__pb2.Organization.FromString, 26 | ) 27 | self.GetMetrics = channel.unary_unary( 28 | '/gooseai.DashboardService/GetMetrics', 29 | request_serializer=dashboard__pb2.GetMetricsRequest.SerializeToString, 30 | response_deserializer=dashboard__pb2.Metrics.FromString, 31 | ) 32 | self.CreateAPIKey = channel.unary_unary( 33 | '/gooseai.DashboardService/CreateAPIKey', 34 | request_serializer=dashboard__pb2.APIKeyRequest.SerializeToString, 35 | response_deserializer=dashboard__pb2.APIKey.FromString, 36 | ) 37 | self.DeleteAPIKey = channel.unary_unary( 38 | '/gooseai.DashboardService/DeleteAPIKey', 39 | request_serializer=dashboard__pb2.APIKeyFindRequest.SerializeToString, 40 | response_deserializer=dashboard__pb2.APIKey.FromString, 41 | ) 42 | self.UpdateDefaultOrganization = channel.unary_unary( 43 | '/gooseai.DashboardService/UpdateDefaultOrganization', 44 | request_serializer=dashboard__pb2.UpdateDefaultOrganizationRequest.SerializeToString, 45 | response_deserializer=dashboard__pb2.User.FromString, 46 | ) 47 | self.GetClientSettings = channel.unary_unary( 48 | '/gooseai.DashboardService/GetClientSettings', 49 | request_serializer=dashboard__pb2.EmptyRequest.SerializeToString, 50 | response_deserializer=dashboard__pb2.ClientSettings.FromString, 51 | ) 52 | self.SetClientSettings = channel.unary_unary( 53 | '/gooseai.DashboardService/SetClientSettings', 54 | request_serializer=dashboard__pb2.ClientSettings.SerializeToString, 55 | response_deserializer=dashboard__pb2.ClientSettings.FromString, 
56 | ) 57 | self.UpdateUserInfo = channel.unary_unary( 58 | '/gooseai.DashboardService/UpdateUserInfo', 59 | request_serializer=dashboard__pb2.UpdateUserInfoRequest.SerializeToString, 60 | response_deserializer=dashboard__pb2.User.FromString, 61 | ) 62 | self.CreatePasswordChangeTicket = channel.unary_unary( 63 | '/gooseai.DashboardService/CreatePasswordChangeTicket', 64 | request_serializer=dashboard__pb2.EmptyRequest.SerializeToString, 65 | response_deserializer=dashboard__pb2.UserPasswordChangeTicket.FromString, 66 | ) 67 | self.DeleteAccount = channel.unary_unary( 68 | '/gooseai.DashboardService/DeleteAccount', 69 | request_serializer=dashboard__pb2.EmptyRequest.SerializeToString, 70 | response_deserializer=dashboard__pb2.User.FromString, 71 | ) 72 | self.CreateCharge = channel.unary_unary( 73 | '/gooseai.DashboardService/CreateCharge', 74 | request_serializer=dashboard__pb2.CreateChargeRequest.SerializeToString, 75 | response_deserializer=dashboard__pb2.Charge.FromString, 76 | ) 77 | self.GetCharges = channel.unary_unary( 78 | '/gooseai.DashboardService/GetCharges', 79 | request_serializer=dashboard__pb2.GetChargesRequest.SerializeToString, 80 | response_deserializer=dashboard__pb2.Charges.FromString, 81 | ) 82 | self.CreateAutoChargeIntent = channel.unary_unary( 83 | '/gooseai.DashboardService/CreateAutoChargeIntent', 84 | request_serializer=dashboard__pb2.CreateAutoChargeIntentRequest.SerializeToString, 85 | response_deserializer=dashboard__pb2.AutoChargeIntent.FromString, 86 | ) 87 | self.UpdateAutoChargeIntent = channel.unary_unary( 88 | '/gooseai.DashboardService/UpdateAutoChargeIntent', 89 | request_serializer=dashboard__pb2.CreateAutoChargeIntentRequest.SerializeToString, 90 | response_deserializer=dashboard__pb2.AutoChargeIntent.FromString, 91 | ) 92 | self.GetAutoChargeIntent = channel.unary_unary( 93 | '/gooseai.DashboardService/GetAutoChargeIntent', 94 | request_serializer=dashboard__pb2.GetAutoChargeRequest.SerializeToString, 95 | 
response_deserializer=dashboard__pb2.AutoChargeIntent.FromString, 96 | ) 97 | 98 | 99 | class DashboardServiceServicer(object): 100 | """Missing associated documentation comment in .proto file.""" 101 | 102 | def GetMe(self, request, context): 103 | """Get info 104 | """ 105 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 106 | context.set_details('Method not implemented!') 107 | raise NotImplementedError('Method not implemented!') 108 | 109 | def GetOrganization(self, request, context): 110 | """Missing associated documentation comment in .proto file.""" 111 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 112 | context.set_details('Method not implemented!') 113 | raise NotImplementedError('Method not implemented!') 114 | 115 | def GetMetrics(self, request, context): 116 | """Missing associated documentation comment in .proto file.""" 117 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 118 | context.set_details('Method not implemented!') 119 | raise NotImplementedError('Method not implemented!') 120 | 121 | def CreateAPIKey(self, request, context): 122 | """API key management 123 | """ 124 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 125 | context.set_details('Method not implemented!') 126 | raise NotImplementedError('Method not implemented!') 127 | 128 | def DeleteAPIKey(self, request, context): 129 | """Missing associated documentation comment in .proto file.""" 130 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 131 | context.set_details('Method not implemented!') 132 | raise NotImplementedError('Method not implemented!') 133 | 134 | def UpdateDefaultOrganization(self, request, context): 135 | """User settings 136 | """ 137 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 138 | context.set_details('Method not implemented!') 139 | raise NotImplementedError('Method not implemented!') 140 | 141 | def GetClientSettings(self, request, context): 142 | """Missing associated documentation comment in .proto file.""" 143 | 
context.set_code(grpc.StatusCode.UNIMPLEMENTED) 144 | context.set_details('Method not implemented!') 145 | raise NotImplementedError('Method not implemented!') 146 | 147 | def SetClientSettings(self, request, context): 148 | """Missing associated documentation comment in .proto file.""" 149 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 150 | context.set_details('Method not implemented!') 151 | raise NotImplementedError('Method not implemented!') 152 | 153 | def UpdateUserInfo(self, request, context): 154 | """Missing associated documentation comment in .proto file.""" 155 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 156 | context.set_details('Method not implemented!') 157 | raise NotImplementedError('Method not implemented!') 158 | 159 | def CreatePasswordChangeTicket(self, request, context): 160 | """Missing associated documentation comment in .proto file.""" 161 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 162 | context.set_details('Method not implemented!') 163 | raise NotImplementedError('Method not implemented!') 164 | 165 | def DeleteAccount(self, request, context): 166 | """Missing associated documentation comment in .proto file.""" 167 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 168 | context.set_details('Method not implemented!') 169 | raise NotImplementedError('Method not implemented!') 170 | 171 | def CreateCharge(self, request, context): 172 | """Payment functions 173 | """ 174 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 175 | context.set_details('Method not implemented!') 176 | raise NotImplementedError('Method not implemented!') 177 | 178 | def GetCharges(self, request, context): 179 | """Missing associated documentation comment in .proto file.""" 180 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 181 | context.set_details('Method not implemented!') 182 | raise NotImplementedError('Method not implemented!') 183 | 184 | def CreateAutoChargeIntent(self, request, context): 185 | """Missing associated documentation comment in 
.proto file.""" 186 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 187 | context.set_details('Method not implemented!') 188 | raise NotImplementedError('Method not implemented!') 189 | 190 | def UpdateAutoChargeIntent(self, request, context): 191 | """Missing associated documentation comment in .proto file.""" 192 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 193 | context.set_details('Method not implemented!') 194 | raise NotImplementedError('Method not implemented!') 195 | 196 | def GetAutoChargeIntent(self, request, context): 197 | """Missing associated documentation comment in .proto file.""" 198 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 199 | context.set_details('Method not implemented!') 200 | raise NotImplementedError('Method not implemented!') 201 | 202 | 203 | def add_DashboardServiceServicer_to_server(servicer, server): 204 | rpc_method_handlers = { 205 | 'GetMe': grpc.unary_unary_rpc_method_handler( 206 | servicer.GetMe, 207 | request_deserializer=dashboard__pb2.EmptyRequest.FromString, 208 | response_serializer=dashboard__pb2.User.SerializeToString, 209 | ), 210 | 'GetOrganization': grpc.unary_unary_rpc_method_handler( 211 | servicer.GetOrganization, 212 | request_deserializer=dashboard__pb2.GetOrganizationRequest.FromString, 213 | response_serializer=dashboard__pb2.Organization.SerializeToString, 214 | ), 215 | 'GetMetrics': grpc.unary_unary_rpc_method_handler( 216 | servicer.GetMetrics, 217 | request_deserializer=dashboard__pb2.GetMetricsRequest.FromString, 218 | response_serializer=dashboard__pb2.Metrics.SerializeToString, 219 | ), 220 | 'CreateAPIKey': grpc.unary_unary_rpc_method_handler( 221 | servicer.CreateAPIKey, 222 | request_deserializer=dashboard__pb2.APIKeyRequest.FromString, 223 | response_serializer=dashboard__pb2.APIKey.SerializeToString, 224 | ), 225 | 'DeleteAPIKey': grpc.unary_unary_rpc_method_handler( 226 | servicer.DeleteAPIKey, 227 | request_deserializer=dashboard__pb2.APIKeyFindRequest.FromString, 228 | 
response_serializer=dashboard__pb2.APIKey.SerializeToString, 229 | ), 230 | 'UpdateDefaultOrganization': grpc.unary_unary_rpc_method_handler( 231 | servicer.UpdateDefaultOrganization, 232 | request_deserializer=dashboard__pb2.UpdateDefaultOrganizationRequest.FromString, 233 | response_serializer=dashboard__pb2.User.SerializeToString, 234 | ), 235 | 'GetClientSettings': grpc.unary_unary_rpc_method_handler( 236 | servicer.GetClientSettings, 237 | request_deserializer=dashboard__pb2.EmptyRequest.FromString, 238 | response_serializer=dashboard__pb2.ClientSettings.SerializeToString, 239 | ), 240 | 'SetClientSettings': grpc.unary_unary_rpc_method_handler( 241 | servicer.SetClientSettings, 242 | request_deserializer=dashboard__pb2.ClientSettings.FromString, 243 | response_serializer=dashboard__pb2.ClientSettings.SerializeToString, 244 | ), 245 | 'UpdateUserInfo': grpc.unary_unary_rpc_method_handler( 246 | servicer.UpdateUserInfo, 247 | request_deserializer=dashboard__pb2.UpdateUserInfoRequest.FromString, 248 | response_serializer=dashboard__pb2.User.SerializeToString, 249 | ), 250 | 'CreatePasswordChangeTicket': grpc.unary_unary_rpc_method_handler( 251 | servicer.CreatePasswordChangeTicket, 252 | request_deserializer=dashboard__pb2.EmptyRequest.FromString, 253 | response_serializer=dashboard__pb2.UserPasswordChangeTicket.SerializeToString, 254 | ), 255 | 'DeleteAccount': grpc.unary_unary_rpc_method_handler( 256 | servicer.DeleteAccount, 257 | request_deserializer=dashboard__pb2.EmptyRequest.FromString, 258 | response_serializer=dashboard__pb2.User.SerializeToString, 259 | ), 260 | 'CreateCharge': grpc.unary_unary_rpc_method_handler( 261 | servicer.CreateCharge, 262 | request_deserializer=dashboard__pb2.CreateChargeRequest.FromString, 263 | response_serializer=dashboard__pb2.Charge.SerializeToString, 264 | ), 265 | 'GetCharges': grpc.unary_unary_rpc_method_handler( 266 | servicer.GetCharges, 267 | request_deserializer=dashboard__pb2.GetChargesRequest.FromString, 268 | 
response_serializer=dashboard__pb2.Charges.SerializeToString, 269 | ), 270 | 'CreateAutoChargeIntent': grpc.unary_unary_rpc_method_handler( 271 | servicer.CreateAutoChargeIntent, 272 | request_deserializer=dashboard__pb2.CreateAutoChargeIntentRequest.FromString, 273 | response_serializer=dashboard__pb2.AutoChargeIntent.SerializeToString, 274 | ), 275 | 'UpdateAutoChargeIntent': grpc.unary_unary_rpc_method_handler( 276 | servicer.UpdateAutoChargeIntent, 277 | request_deserializer=dashboard__pb2.CreateAutoChargeIntentRequest.FromString, 278 | response_serializer=dashboard__pb2.AutoChargeIntent.SerializeToString, 279 | ), 280 | 'GetAutoChargeIntent': grpc.unary_unary_rpc_method_handler( 281 | servicer.GetAutoChargeIntent, 282 | request_deserializer=dashboard__pb2.GetAutoChargeRequest.FromString, 283 | response_serializer=dashboard__pb2.AutoChargeIntent.SerializeToString, 284 | ), 285 | } 286 | generic_handler = grpc.method_handlers_generic_handler( 287 | 'gooseai.DashboardService', rpc_method_handlers) 288 | server.add_generic_rpc_handlers((generic_handler,)) 289 | 290 | 291 | # This class is part of an EXPERIMENTAL API. 
292 | class DashboardService(object): 293 | """Missing associated documentation comment in .proto file.""" 294 | 295 | @staticmethod 296 | def GetMe(request, 297 | target, 298 | options=(), 299 | channel_credentials=None, 300 | call_credentials=None, 301 | insecure=False, 302 | compression=None, 303 | wait_for_ready=None, 304 | timeout=None, 305 | metadata=None): 306 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/GetMe', 307 | dashboard__pb2.EmptyRequest.SerializeToString, 308 | dashboard__pb2.User.FromString, 309 | options, channel_credentials, 310 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 311 | 312 | @staticmethod 313 | def GetOrganization(request, 314 | target, 315 | options=(), 316 | channel_credentials=None, 317 | call_credentials=None, 318 | insecure=False, 319 | compression=None, 320 | wait_for_ready=None, 321 | timeout=None, 322 | metadata=None): 323 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/GetOrganization', 324 | dashboard__pb2.GetOrganizationRequest.SerializeToString, 325 | dashboard__pb2.Organization.FromString, 326 | options, channel_credentials, 327 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 328 | 329 | @staticmethod 330 | def GetMetrics(request, 331 | target, 332 | options=(), 333 | channel_credentials=None, 334 | call_credentials=None, 335 | insecure=False, 336 | compression=None, 337 | wait_for_ready=None, 338 | timeout=None, 339 | metadata=None): 340 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/GetMetrics', 341 | dashboard__pb2.GetMetricsRequest.SerializeToString, 342 | dashboard__pb2.Metrics.FromString, 343 | options, channel_credentials, 344 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 345 | 346 | @staticmethod 347 | def CreateAPIKey(request, 348 | target, 349 | options=(), 350 | channel_credentials=None, 351 | 
call_credentials=None, 352 | insecure=False, 353 | compression=None, 354 | wait_for_ready=None, 355 | timeout=None, 356 | metadata=None): 357 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/CreateAPIKey', 358 | dashboard__pb2.APIKeyRequest.SerializeToString, 359 | dashboard__pb2.APIKey.FromString, 360 | options, channel_credentials, 361 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 362 | 363 | @staticmethod 364 | def DeleteAPIKey(request, 365 | target, 366 | options=(), 367 | channel_credentials=None, 368 | call_credentials=None, 369 | insecure=False, 370 | compression=None, 371 | wait_for_ready=None, 372 | timeout=None, 373 | metadata=None): 374 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/DeleteAPIKey', 375 | dashboard__pb2.APIKeyFindRequest.SerializeToString, 376 | dashboard__pb2.APIKey.FromString, 377 | options, channel_credentials, 378 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 379 | 380 | @staticmethod 381 | def UpdateDefaultOrganization(request, 382 | target, 383 | options=(), 384 | channel_credentials=None, 385 | call_credentials=None, 386 | insecure=False, 387 | compression=None, 388 | wait_for_ready=None, 389 | timeout=None, 390 | metadata=None): 391 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/UpdateDefaultOrganization', 392 | dashboard__pb2.UpdateDefaultOrganizationRequest.SerializeToString, 393 | dashboard__pb2.User.FromString, 394 | options, channel_credentials, 395 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 396 | 397 | @staticmethod 398 | def GetClientSettings(request, 399 | target, 400 | options=(), 401 | channel_credentials=None, 402 | call_credentials=None, 403 | insecure=False, 404 | compression=None, 405 | wait_for_ready=None, 406 | timeout=None, 407 | metadata=None): 408 | return grpc.experimental.unary_unary(request, target, 
'/gooseai.DashboardService/GetClientSettings', 409 | dashboard__pb2.EmptyRequest.SerializeToString, 410 | dashboard__pb2.ClientSettings.FromString, 411 | options, channel_credentials, 412 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 413 | 414 | @staticmethod 415 | def SetClientSettings(request, 416 | target, 417 | options=(), 418 | channel_credentials=None, 419 | call_credentials=None, 420 | insecure=False, 421 | compression=None, 422 | wait_for_ready=None, 423 | timeout=None, 424 | metadata=None): 425 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/SetClientSettings', 426 | dashboard__pb2.ClientSettings.SerializeToString, 427 | dashboard__pb2.ClientSettings.FromString, 428 | options, channel_credentials, 429 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 430 | 431 | @staticmethod 432 | def UpdateUserInfo(request, 433 | target, 434 | options=(), 435 | channel_credentials=None, 436 | call_credentials=None, 437 | insecure=False, 438 | compression=None, 439 | wait_for_ready=None, 440 | timeout=None, 441 | metadata=None): 442 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/UpdateUserInfo', 443 | dashboard__pb2.UpdateUserInfoRequest.SerializeToString, 444 | dashboard__pb2.User.FromString, 445 | options, channel_credentials, 446 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 447 | 448 | @staticmethod 449 | def CreatePasswordChangeTicket(request, 450 | target, 451 | options=(), 452 | channel_credentials=None, 453 | call_credentials=None, 454 | insecure=False, 455 | compression=None, 456 | wait_for_ready=None, 457 | timeout=None, 458 | metadata=None): 459 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/CreatePasswordChangeTicket', 460 | dashboard__pb2.EmptyRequest.SerializeToString, 461 | dashboard__pb2.UserPasswordChangeTicket.FromString, 462 | options, channel_credentials, 
463 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 464 | 465 | @staticmethod 466 | def DeleteAccount(request, 467 | target, 468 | options=(), 469 | channel_credentials=None, 470 | call_credentials=None, 471 | insecure=False, 472 | compression=None, 473 | wait_for_ready=None, 474 | timeout=None, 475 | metadata=None): 476 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/DeleteAccount', 477 | dashboard__pb2.EmptyRequest.SerializeToString, 478 | dashboard__pb2.User.FromString, 479 | options, channel_credentials, 480 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 481 | 482 | @staticmethod 483 | def CreateCharge(request, 484 | target, 485 | options=(), 486 | channel_credentials=None, 487 | call_credentials=None, 488 | insecure=False, 489 | compression=None, 490 | wait_for_ready=None, 491 | timeout=None, 492 | metadata=None): 493 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/CreateCharge', 494 | dashboard__pb2.CreateChargeRequest.SerializeToString, 495 | dashboard__pb2.Charge.FromString, 496 | options, channel_credentials, 497 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 498 | 499 | @staticmethod 500 | def GetCharges(request, 501 | target, 502 | options=(), 503 | channel_credentials=None, 504 | call_credentials=None, 505 | insecure=False, 506 | compression=None, 507 | wait_for_ready=None, 508 | timeout=None, 509 | metadata=None): 510 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/GetCharges', 511 | dashboard__pb2.GetChargesRequest.SerializeToString, 512 | dashboard__pb2.Charges.FromString, 513 | options, channel_credentials, 514 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 515 | 516 | @staticmethod 517 | def CreateAutoChargeIntent(request, 518 | target, 519 | options=(), 520 | channel_credentials=None, 521 | call_credentials=None, 522 | 
insecure=False, 523 | compression=None, 524 | wait_for_ready=None, 525 | timeout=None, 526 | metadata=None): 527 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/CreateAutoChargeIntent', 528 | dashboard__pb2.CreateAutoChargeIntentRequest.SerializeToString, 529 | dashboard__pb2.AutoChargeIntent.FromString, 530 | options, channel_credentials, 531 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 532 | 533 | @staticmethod 534 | def UpdateAutoChargeIntent(request, 535 | target, 536 | options=(), 537 | channel_credentials=None, 538 | call_credentials=None, 539 | insecure=False, 540 | compression=None, 541 | wait_for_ready=None, 542 | timeout=None, 543 | metadata=None): 544 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/UpdateAutoChargeIntent', 545 | dashboard__pb2.CreateAutoChargeIntentRequest.SerializeToString, 546 | dashboard__pb2.AutoChargeIntent.FromString, 547 | options, channel_credentials, 548 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 549 | 550 | @staticmethod 551 | def GetAutoChargeIntent(request, 552 | target, 553 | options=(), 554 | channel_credentials=None, 555 | call_credentials=None, 556 | insecure=False, 557 | compression=None, 558 | wait_for_ready=None, 559 | timeout=None, 560 | metadata=None): 561 | return grpc.experimental.unary_unary(request, target, '/gooseai.DashboardService/GetAutoChargeIntent', 562 | dashboard__pb2.GetAutoChargeRequest.SerializeToString, 563 | dashboard__pb2.AutoChargeIntent.FromString, 564 | options, channel_credentials, 565 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 566 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/engines_pb2.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by the protocol buffer compiler. 
DO NOT EDIT! 3 | # source: engines.proto 4 | """Generated protocol buffer code.""" 5 | from google.protobuf.internal import builder as _builder 6 | from google.protobuf import descriptor as _descriptor 7 | from google.protobuf import descriptor_pool as _descriptor_pool 8 | from google.protobuf import symbol_database as _symbol_database 9 | # @@protoc_insertion_point(imports) 10 | 11 | _sym_db = _symbol_database.Default() 12 | 13 | 14 | import generation_pb2 as generation__pb2 15 | 16 | 17 | DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\rengines.proto\x12\x07gooseai\x1a\x10generation.proto\"\xdf\x01\n\rEngineSampler\x12*\n\x07sampler\x18\x01 \x01(\x0e\x32\x19.gooseai.DiffusionSampler\x12\x14\n\x0csupports_eta\x18\n \x01(\x08\x12\x16\n\x0esupports_churn\x18\x0b \x01(\x08\x12\x1d\n\x15supports_sigma_limits\x18\x0c \x01(\x08\x12\x1b\n\x13supports_karras_rho\x18\r \x01(\x08\x12\x38\n\x15supported_noise_types\x18\x14 \x03(\x0e\x32\x19.gooseai.SamplerNoiseType\"\xde\x01\n\nEngineInfo\x12\n\n\x02id\x18\x01 \x01(\t\x12\r\n\x05owner\x18\x02 \x01(\t\x12\r\n\x05ready\x18\x03 \x01(\x08\x12!\n\x04type\x18\x04 \x01(\x0e\x32\x13.gooseai.EngineType\x12+\n\ttokenizer\x18\x05 \x01(\x0e\x32\x18.gooseai.EngineTokenizer\x12\x0c\n\x04name\x18\x06 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x07 \x01(\t\x12\x33\n\x12supported_samplers\x18\xf4\x03 \x03(\x0b\x32\x16.gooseai.EngineSampler\"\x14\n\x12ListEnginesRequest\".\n\x07\x45ngines\x12#\n\x06\x65ngine\x18\x01 \x03(\x0b\x32\x13.gooseai.EngineInfo*Z\n\nEngineType\x12\x08\n\x04TEXT\x10\x00\x12\x0b\n\x07PICTURE\x10\x01\x12\t\n\x05\x41UDIO\x10\x02\x12\t\n\x05VIDEO\x10\x03\x12\x12\n\x0e\x43LASSIFICATION\x10\x04\x12\x0b\n\x07STORAGE\x10\x05*%\n\x0f\x45ngineTokenizer\x12\x08\n\x04GPT2\x10\x00\x12\x08\n\x04PILE\x10\x01\x32P\n\x0e\x45nginesService\x12>\n\x0bListEngines\x12\x1b.gooseai.ListEnginesRequest\x1a\x10.gooseai.Engines\"\x00\x42\x38Z6github.com/stability-ai/api-interfaces/gooseai/enginesb\x06proto3') 18 | 19 | 
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) 20 | _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'engines_pb2', globals()) 21 | if _descriptor._USE_C_DESCRIPTORS == False: 22 | 23 | DESCRIPTOR._options = None 24 | DESCRIPTOR._serialized_options = b'Z6github.com/stability-ai/api-interfaces/gooseai/engines' 25 | _ENGINETYPE._serialized_start=565 26 | _ENGINETYPE._serialized_end=655 27 | _ENGINETOKENIZER._serialized_start=657 28 | _ENGINETOKENIZER._serialized_end=694 29 | _ENGINESAMPLER._serialized_start=45 30 | _ENGINESAMPLER._serialized_end=268 31 | _ENGINEINFO._serialized_start=271 32 | _ENGINEINFO._serialized_end=493 33 | _LISTENGINESREQUEST._serialized_start=495 34 | _LISTENGINESREQUEST._serialized_end=515 35 | _ENGINES._serialized_start=517 36 | _ENGINES._serialized_end=563 37 | _ENGINESSERVICE._serialized_start=696 38 | _ENGINESSERVICE._serialized_end=776 39 | # @@protoc_insertion_point(module_scope) 40 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/engines_pb2.pyi: -------------------------------------------------------------------------------- 1 | """ 2 | @generated by mypy-protobuf. Do not edit manually! 
3 | isort:skip_file 4 | """ 5 | import builtins 6 | import collections.abc 7 | import generation_pb2 8 | import google.protobuf.descriptor 9 | import google.protobuf.internal.containers 10 | import google.protobuf.internal.enum_type_wrapper 11 | import google.protobuf.message 12 | import sys 13 | import typing 14 | 15 | if sys.version_info >= (3, 10): 16 | import typing as typing_extensions 17 | else: 18 | import typing_extensions 19 | 20 | DESCRIPTOR: google.protobuf.descriptor.FileDescriptor 21 | 22 | class _EngineType: 23 | ValueType = typing.NewType("ValueType", builtins.int) 24 | V: typing_extensions.TypeAlias = ValueType 25 | 26 | class _EngineTypeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_EngineType.ValueType], builtins.type): 27 | DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor 28 | TEXT: _EngineType.ValueType # 0 29 | PICTURE: _EngineType.ValueType # 1 30 | AUDIO: _EngineType.ValueType # 2 31 | VIDEO: _EngineType.ValueType # 3 32 | CLASSIFICATION: _EngineType.ValueType # 4 33 | STORAGE: _EngineType.ValueType # 5 34 | 35 | class EngineType(_EngineType, metaclass=_EngineTypeEnumTypeWrapper): 36 | """Possible engine type""" 37 | 38 | TEXT: EngineType.ValueType # 0 39 | PICTURE: EngineType.ValueType # 1 40 | AUDIO: EngineType.ValueType # 2 41 | VIDEO: EngineType.ValueType # 3 42 | CLASSIFICATION: EngineType.ValueType # 4 43 | STORAGE: EngineType.ValueType # 5 44 | global___EngineType = EngineType 45 | 46 | class _EngineTokenizer: 47 | ValueType = typing.NewType("ValueType", builtins.int) 48 | V: typing_extensions.TypeAlias = ValueType 49 | 50 | class _EngineTokenizerEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_EngineTokenizer.ValueType], builtins.type): 51 | DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor 52 | GPT2: _EngineTokenizer.ValueType # 0 53 | PILE: _EngineTokenizer.ValueType # 1 54 | 55 | class EngineTokenizer(_EngineTokenizer, metaclass=_EngineTokenizerEnumTypeWrapper): ... 
56 | 57 | GPT2: EngineTokenizer.ValueType # 0 58 | PILE: EngineTokenizer.ValueType # 1 59 | global___EngineTokenizer = EngineTokenizer 60 | 61 | @typing_extensions.final 62 | class EngineSampler(google.protobuf.message.Message): 63 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 64 | 65 | SAMPLER_FIELD_NUMBER: builtins.int 66 | SUPPORTS_ETA_FIELD_NUMBER: builtins.int 67 | SUPPORTS_CHURN_FIELD_NUMBER: builtins.int 68 | SUPPORTS_SIGMA_LIMITS_FIELD_NUMBER: builtins.int 69 | SUPPORTS_KARRAS_RHO_FIELD_NUMBER: builtins.int 70 | SUPPORTED_NOISE_TYPES_FIELD_NUMBER: builtins.int 71 | sampler: generation_pb2.DiffusionSampler.ValueType 72 | supports_eta: builtins.bool 73 | supports_churn: builtins.bool 74 | supports_sigma_limits: builtins.bool 75 | supports_karras_rho: builtins.bool 76 | @property 77 | def supported_noise_types(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[generation_pb2.SamplerNoiseType.ValueType]: ... 78 | def __init__( 79 | self, 80 | *, 81 | sampler: generation_pb2.DiffusionSampler.ValueType = ..., 82 | supports_eta: builtins.bool = ..., 83 | supports_churn: builtins.bool = ..., 84 | supports_sigma_limits: builtins.bool = ..., 85 | supports_karras_rho: builtins.bool = ..., 86 | supported_noise_types: collections.abc.Iterable[generation_pb2.SamplerNoiseType.ValueType] | None = ..., 87 | ) -> None: ... 88 | def ClearField(self, field_name: typing_extensions.Literal["sampler", b"sampler", "supported_noise_types", b"supported_noise_types", "supports_churn", b"supports_churn", "supports_eta", b"supports_eta", "supports_karras_rho", b"supports_karras_rho", "supports_sigma_limits", b"supports_sigma_limits"]) -> None: ... 
89 | 90 | global___EngineSampler = EngineSampler 91 | 92 | @typing_extensions.final 93 | class EngineInfo(google.protobuf.message.Message): 94 | """Engine info struct""" 95 | 96 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 97 | 98 | ID_FIELD_NUMBER: builtins.int 99 | OWNER_FIELD_NUMBER: builtins.int 100 | READY_FIELD_NUMBER: builtins.int 101 | TYPE_FIELD_NUMBER: builtins.int 102 | TOKENIZER_FIELD_NUMBER: builtins.int 103 | NAME_FIELD_NUMBER: builtins.int 104 | DESCRIPTION_FIELD_NUMBER: builtins.int 105 | SUPPORTED_SAMPLERS_FIELD_NUMBER: builtins.int 106 | id: builtins.str 107 | owner: builtins.str 108 | ready: builtins.bool 109 | type: global___EngineType.ValueType 110 | tokenizer: global___EngineTokenizer.ValueType 111 | name: builtins.str 112 | description: builtins.str 113 | @property 114 | def supported_samplers(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___EngineSampler]: ... 115 | def __init__( 116 | self, 117 | *, 118 | id: builtins.str = ..., 119 | owner: builtins.str = ..., 120 | ready: builtins.bool = ..., 121 | type: global___EngineType.ValueType = ..., 122 | tokenizer: global___EngineTokenizer.ValueType = ..., 123 | name: builtins.str = ..., 124 | description: builtins.str = ..., 125 | supported_samplers: collections.abc.Iterable[global___EngineSampler] | None = ..., 126 | ) -> None: ... 127 | def ClearField(self, field_name: typing_extensions.Literal["description", b"description", "id", b"id", "name", b"name", "owner", b"owner", "ready", b"ready", "supported_samplers", b"supported_samplers", "tokenizer", b"tokenizer", "type", b"type"]) -> None: ... 128 | 129 | global___EngineInfo = EngineInfo 130 | 131 | @typing_extensions.final 132 | class ListEnginesRequest(google.protobuf.message.Message): 133 | """Empty""" 134 | 135 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 136 | 137 | def __init__( 138 | self, 139 | ) -> None: ... 
140 | 141 | global___ListEnginesRequest = ListEnginesRequest 142 | 143 | @typing_extensions.final 144 | class Engines(google.protobuf.message.Message): 145 | """Engine info list""" 146 | 147 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 148 | 149 | ENGINE_FIELD_NUMBER: builtins.int 150 | @property 151 | def engine(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___EngineInfo]: ... 152 | def __init__( 153 | self, 154 | *, 155 | engine: collections.abc.Iterable[global___EngineInfo] | None = ..., 156 | ) -> None: ... 157 | def ClearField(self, field_name: typing_extensions.Literal["engine", b"engine"]) -> None: ... 158 | 159 | global___Engines = Engines 160 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/engines_pb2_grpc.py: -------------------------------------------------------------------------------- 1 | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! 2 | """Client and server classes corresponding to protobuf-defined services.""" 3 | import grpc 4 | 5 | import engines_pb2 as engines__pb2 6 | 7 | 8 | class EnginesServiceStub(object): 9 | """Missing associated documentation comment in .proto file.""" 10 | 11 | def __init__(self, channel): 12 | """Constructor. 13 | 14 | Args: 15 | channel: A grpc.Channel. 
16 | """ 17 | self.ListEngines = channel.unary_unary( 18 | '/gooseai.EnginesService/ListEngines', 19 | request_serializer=engines__pb2.ListEnginesRequest.SerializeToString, 20 | response_deserializer=engines__pb2.Engines.FromString, 21 | ) 22 | 23 | 24 | class EnginesServiceServicer(object): 25 | """Missing associated documentation comment in .proto file.""" 26 | 27 | def ListEngines(self, request, context): 28 | """Missing associated documentation comment in .proto file.""" 29 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 30 | context.set_details('Method not implemented!') 31 | raise NotImplementedError('Method not implemented!') 32 | 33 | 34 | def add_EnginesServiceServicer_to_server(servicer, server): 35 | rpc_method_handlers = { 36 | 'ListEngines': grpc.unary_unary_rpc_method_handler( 37 | servicer.ListEngines, 38 | request_deserializer=engines__pb2.ListEnginesRequest.FromString, 39 | response_serializer=engines__pb2.Engines.SerializeToString, 40 | ), 41 | } 42 | generic_handler = grpc.method_handlers_generic_handler( 43 | 'gooseai.EnginesService', rpc_method_handlers) 44 | server.add_generic_rpc_handlers((generic_handler,)) 45 | 46 | 47 | # This class is part of an EXPERIMENTAL API. 
48 | class EnginesService(object): 49 | """Missing associated documentation comment in .proto file.""" 50 | 51 | @staticmethod 52 | def ListEngines(request, 53 | target, 54 | options=(), 55 | channel_credentials=None, 56 | call_credentials=None, 57 | insecure=False, 58 | compression=None, 59 | wait_for_ready=None, 60 | timeout=None, 61 | metadata=None): 62 | return grpc.experimental.unary_unary(request, target, '/gooseai.EnginesService/ListEngines', 63 | engines__pb2.ListEnginesRequest.SerializeToString, 64 | engines__pb2.Engines.FromString, 65 | options, channel_credentials, 66 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 67 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/generation_pb2.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by the protocol buffer compiler. DO NOT EDIT! 3 | # source: generation.proto 4 | """Generated protocol buffer code.""" 5 | from google.protobuf.internal import builder as _builder 6 | from google.protobuf import descriptor as _descriptor 7 | from google.protobuf import descriptor_pool as _descriptor_pool 8 | from google.protobuf import symbol_database as _symbol_database 9 | # @@protoc_insertion_point(imports) 10 | 11 | _sym_db = _symbol_database.Default() 12 | 13 | 14 | import tensors_pb2 as tensors__pb2 15 | 16 | 17 | DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x10generation.proto\x12\x07gooseai\x1a\rtensors.proto\"/\n\x05Token\x12\x11\n\x04text\x18\x01 \x01(\tH\x00\x88\x01\x01\x12\n\n\x02id\x18\x02 \x01(\rB\x07\n\x05_text\"T\n\x06Tokens\x12\x1e\n\x06tokens\x18\x01 \x03(\x0b\x32\x0e.gooseai.Token\x12\x19\n\x0ctokenizer_id\x18\x02 \x01(\tH\x00\x88\x01\x01\x42\x0f\n\r_tokenizer_id\"X\n\x18ImageAdjustment_Gaussian\x12\r\n\x05sigma\x18\x01 \x01(\x02\x12-\n\tdirection\x18\x02 
\x01(\x0e\x32\x1a.gooseai.GaussianDirection\"\x18\n\x16ImageAdjustment_Invert\"h\n\x16ImageAdjustment_Levels\x12\x11\n\tinput_low\x18\x01 \x01(\x02\x12\x12\n\ninput_high\x18\x02 \x01(\x02\x12\x12\n\noutput_low\x18\x03 \x01(\x02\x12\x13\n\x0boutput_high\x18\x04 \x01(\x02\"\xd2\x01\n\x18ImageAdjustment_Channels\x12&\n\x01r\x18\x01 \x01(\x0e\x32\x16.gooseai.ChannelSourceH\x00\x88\x01\x01\x12&\n\x01g\x18\x02 \x01(\x0e\x32\x16.gooseai.ChannelSourceH\x01\x88\x01\x01\x12&\n\x01\x62\x18\x03 \x01(\x0e\x32\x16.gooseai.ChannelSourceH\x02\x88\x01\x01\x12&\n\x01\x61\x18\x04 \x01(\x0e\x32\x16.gooseai.ChannelSourceH\x03\x88\x01\x01\x42\x04\n\x02_rB\x04\n\x02_gB\x04\n\x02_bB\x04\n\x02_a\"t\n\x17ImageAdjustment_Rescale\x12\x0e\n\x06height\x18\x01 \x01(\x04\x12\r\n\x05width\x18\x02 \x01(\x04\x12\"\n\x04mode\x18\x03 \x01(\x0e\x32\x14.gooseai.RescaleMode\x12\x16\n\x0e\x61lgorithm_hint\x18\x04 \x03(\t\"P\n\x14ImageAdjustment_Crop\x12\x0b\n\x03top\x18\x01 \x01(\x04\x12\x0c\n\x04left\x18\x02 \x01(\x04\x12\r\n\x05width\x18\x03 \x01(\x04\x12\x0e\n\x06height\x18\x04 \x01(\x04\"\xd3\x02\n\x0fImageAdjustment\x12\x31\n\x04\x62lur\x18\x01 \x01(\x0b\x32!.gooseai.ImageAdjustment_GaussianH\x00\x12\x31\n\x06invert\x18\x02 \x01(\x0b\x32\x1f.gooseai.ImageAdjustment_InvertH\x00\x12\x31\n\x06levels\x18\x03 \x01(\x0b\x32\x1f.gooseai.ImageAdjustment_LevelsH\x00\x12\x35\n\x08\x63hannels\x18\x04 \x01(\x0b\x32!.gooseai.ImageAdjustment_ChannelsH\x00\x12\x33\n\x07rescale\x18\x05 \x01(\x0b\x32 .gooseai.ImageAdjustment_RescaleH\x00\x12-\n\x04\x63rop\x18\x06 \x01(\x0b\x32\x1d.gooseai.ImageAdjustment_CropH\x00\x42\x0c\n\nadjustment\"M\n\x04Lora\x12#\n\x06target\x18\x01 \x01(\x0e\x32\x13.gooseai.LoraTarget\x12 \n\x07tensors\x18\x02 \x03(\x0b\x32\x0f.tensors.Tensor\"\xf7\x03\n\x08\x41rtifact\x12\n\n\x02id\x18\x01 \x01(\x04\x12#\n\x04type\x18\x02 \x01(\x0e\x32\x15.gooseai.ArtifactType\x12\x0c\n\x04mime\x18\x03 \x01(\t\x12\x12\n\x05magic\x18\x04 \x01(\tH\x01\x88\x01\x01\x12\x10\n\x06\x62inary\x18\x05 
\x01(\x0cH\x00\x12\x0e\n\x04text\x18\x06 \x01(\tH\x00\x12!\n\x06tokens\x18\x07 \x01(\x0b\x32\x0f.gooseai.TokensH\x00\x12\x33\n\nclassifier\x18\x0b \x01(\x0b\x32\x1d.gooseai.ClassifierParametersH\x00\x12!\n\x06tensor\x18\x0e \x01(\x0b\x32\x0f.tensors.TensorH\x00\x12\x1e\n\x04lora\x18\xfe\x03 \x01(\x0b\x32\r.gooseai.LoraH\x00\x12\r\n\x05index\x18\x08 \x01(\r\x12,\n\rfinish_reason\x18\t \x01(\x0e\x32\x15.gooseai.FinishReason\x12\x0c\n\x04seed\x18\n \x01(\r\x12\x0c\n\x04uuid\x18\x0c \x01(\t\x12\x0c\n\x04size\x18\r \x01(\x04\x12.\n\x0b\x61\x64justments\x18\xf4\x03 \x03(\x0b\x32\x18.gooseai.ImageAdjustment\x12\x32\n\x0fpostAdjustments\x18\xf5\x03 \x03(\x0b\x32\x18.gooseai.ImageAdjustmentB\x06\n\x04\x64\x61taB\x08\n\x06_magic\"N\n\x10PromptParameters\x12\x11\n\x04init\x18\x01 \x01(\x08H\x00\x88\x01\x01\x12\x13\n\x06weight\x18\x02 \x01(\x02H\x01\x88\x01\x01\x42\x07\n\x05_initB\t\n\x07_weight\"\xaf\x01\n\x06Prompt\x12\x32\n\nparameters\x18\x01 \x01(\x0b\x32\x19.gooseai.PromptParametersH\x01\x88\x01\x01\x12\x0e\n\x04text\x18\x02 \x01(\tH\x00\x12!\n\x06tokens\x18\x03 \x01(\x0b\x32\x0f.gooseai.TokensH\x00\x12%\n\x08\x61rtifact\x18\x04 \x01(\x0b\x32\x11.gooseai.ArtifactH\x00\x42\x08\n\x06promptB\r\n\x0b_parameters\"\x85\x01\n\x0fSigmaParameters\x12\x16\n\tsigma_min\x18\x01 \x01(\x02H\x00\x88\x01\x01\x12\x16\n\tsigma_max\x18\x02 \x01(\x02H\x01\x88\x01\x01\x12\x17\n\nkarras_rho\x18\n \x01(\x02H\x02\x88\x01\x01\x42\x0c\n\n_sigma_minB\x0c\n\n_sigma_maxB\r\n\x0b_karras_rho\"n\n\rChurnSettings\x12\r\n\x05\x63hurn\x18\x01 \x01(\x02\x12\x17\n\nchurn_tmin\x18\x02 \x01(\x02H\x00\x88\x01\x01\x12\x17\n\nchurn_tmax\x18\x03 \x01(\x02H\x01\x88\x01\x01\x42\r\n\x0b_churn_tminB\r\n\x0b_churn_tmax\"\x8b\x04\n\x11SamplerParameters\x12\x10\n\x03\x65ta\x18\x01 \x01(\x02H\x00\x88\x01\x01\x12\x1b\n\x0esampling_steps\x18\x02 \x01(\x04H\x01\x88\x01\x01\x12\x1c\n\x0flatent_channels\x18\x03 \x01(\x04H\x02\x88\x01\x01\x12 \n\x13\x64ownsampling_factor\x18\x04 
\x01(\x04H\x03\x88\x01\x01\x12\x16\n\tcfg_scale\x18\x05 \x01(\x02H\x04\x88\x01\x01\x12\x1d\n\x10init_noise_scale\x18\x06 \x01(\x02H\x05\x88\x01\x01\x12\x1d\n\x10step_noise_scale\x18\x07 \x01(\x02H\x06\x88\x01\x01\x12+\n\x05\x63hurn\x18\xf4\x03 \x01(\x0b\x32\x16.gooseai.ChurnSettingsH\x07\x88\x01\x01\x12-\n\x05sigma\x18\xf5\x03 \x01(\x0b\x32\x18.gooseai.SigmaParametersH\x08\x88\x01\x01\x12\x33\n\nnoise_type\x18\xf6\x03 \x01(\x0e\x32\x19.gooseai.SamplerNoiseTypeH\t\x88\x01\x01\x42\x06\n\x04_etaB\x11\n\x0f_sampling_stepsB\x12\n\x10_latent_channelsB\x16\n\x14_downsampling_factorB\x0c\n\n_cfg_scaleB\x13\n\x11_init_noise_scaleB\x13\n\x11_step_noise_scaleB\x08\n\x06_churnB\x08\n\x06_sigmaB\r\n\x0b_noise_type\"\x8b\x01\n\x15\x43onditionerParameters\x12 \n\x13vector_adjust_prior\x18\x01 \x01(\tH\x00\x88\x01\x01\x12(\n\x0b\x63onditioner\x18\x02 \x01(\x0b\x32\x0e.gooseai.ModelH\x01\x88\x01\x01\x42\x16\n\x14_vector_adjust_priorB\x0e\n\x0c_conditioner\"j\n\x12ScheduleParameters\x12\x12\n\x05start\x18\x01 \x01(\x02H\x00\x88\x01\x01\x12\x10\n\x03\x65nd\x18\x02 \x01(\x02H\x01\x88\x01\x01\x12\x12\n\x05value\x18\x03 \x01(\x02H\x02\x88\x01\x01\x42\x08\n\x06_startB\x06\n\x04_endB\x08\n\x06_value\"\xe4\x01\n\rStepParameter\x12\x13\n\x0bscaled_step\x18\x01 \x01(\x02\x12\x30\n\x07sampler\x18\x02 \x01(\x0b\x32\x1a.gooseai.SamplerParametersH\x00\x88\x01\x01\x12\x32\n\x08schedule\x18\x03 \x01(\x0b\x32\x1b.gooseai.ScheduleParametersH\x01\x88\x01\x01\x12\x32\n\x08guidance\x18\x04 \x01(\x0b\x32\x1b.gooseai.GuidanceParametersH\x02\x88\x01\x01\x42\n\n\x08_samplerB\x0b\n\t_scheduleB\x0b\n\t_guidance\"\x97\x01\n\x05Model\x12\x30\n\x0c\x61rchitecture\x18\x01 \x01(\x0e\x32\x1a.gooseai.ModelArchitecture\x12\x11\n\tpublisher\x18\x02 \x01(\t\x12\x0f\n\x07\x64\x61taset\x18\x03 \x01(\t\x12\x0f\n\x07version\x18\x04 \x01(\x02\x12\x18\n\x10semantic_version\x18\x05 \x01(\t\x12\r\n\x05\x61lias\x18\x06 \x01(\t\"\xbc\x01\n\x10\x43utoutParameters\x12*\n\x07\x63utouts\x18\x01 
\x03(\x0b\x32\x19.gooseai.CutoutParameters\x12\x12\n\x05\x63ount\x18\x02 \x01(\rH\x00\x88\x01\x01\x12\x11\n\x04gray\x18\x03 \x01(\x02H\x01\x88\x01\x01\x12\x11\n\x04\x62lur\x18\x04 \x01(\x02H\x02\x88\x01\x01\x12\x17\n\nsize_power\x18\x05 \x01(\x02H\x03\x88\x01\x01\x42\x08\n\x06_countB\x07\n\x05_grayB\x07\n\x05_blurB\r\n\x0b_size_power\"=\n\x1aGuidanceScheduleParameters\x12\x10\n\x08\x64uration\x18\x01 \x01(\x02\x12\r\n\x05value\x18\x02 \x01(\x02\"\x97\x02\n\x1aGuidanceInstanceParameters\x12\x1e\n\x06models\x18\x02 \x03(\x0b\x32\x0e.gooseai.Model\x12\x1e\n\x11guidance_strength\x18\x03 \x01(\x02H\x00\x88\x01\x01\x12\x35\n\x08schedule\x18\x04 \x03(\x0b\x32#.gooseai.GuidanceScheduleParameters\x12/\n\x07\x63utouts\x18\x05 \x01(\x0b\x32\x19.gooseai.CutoutParametersH\x01\x88\x01\x01\x12$\n\x06prompt\x18\x06 \x01(\x0b\x32\x0f.gooseai.PromptH\x02\x88\x01\x01\x42\x14\n\x12_guidance_strengthB\n\n\x08_cutoutsB\t\n\x07_prompt\"~\n\x12GuidanceParameters\x12\x30\n\x0fguidance_preset\x18\x01 \x01(\x0e\x32\x17.gooseai.GuidancePreset\x12\x36\n\tinstances\x18\x02 \x03(\x0b\x32#.gooseai.GuidanceInstanceParameters\"n\n\rTransformType\x12.\n\tdiffusion\x18\x01 \x01(\x0e\x32\x19.gooseai.DiffusionSamplerH\x00\x12%\n\x08upscaler\x18\x02 \x01(\x0e\x32\x11.gooseai.UpscalerH\x00\x42\x06\n\x04type\"Y\n\x11\x45xtendedParameter\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x05\x66loat\x18\x02 \x01(\x02H\x00\x12\r\n\x03int\x18\x03 \x01(\x04H\x00\x12\r\n\x03str\x18\x04 \x01(\tH\x00\x42\x07\n\x05value\"D\n\x12\x45xtendedParameters\x12.\n\nparameters\x18\x01 \x03(\x0b\x32\x1a.gooseai.ExtendedParameter\"P\n\x12HiresFixParameters\x12\x0e\n\x06\x65nable\x18\x01 \x01(\x08\x12\x19\n\x0coos_fraction\x18\x02 \x01(\x02H\x00\x88\x01\x01\x42\x0f\n\r_oos_fraction\"\xde\x04\n\x0fImageParameters\x12\x13\n\x06height\x18\x01 \x01(\x04H\x00\x88\x01\x01\x12\x12\n\x05width\x18\x02 \x01(\x04H\x01\x88\x01\x01\x12\x0c\n\x04seed\x18\x03 \x03(\r\x12\x14\n\x07samples\x18\x04 
\x01(\x04H\x02\x88\x01\x01\x12\x12\n\x05steps\x18\x05 \x01(\x04H\x03\x88\x01\x01\x12.\n\ttransform\x18\x06 \x01(\x0b\x32\x16.gooseai.TransformTypeH\x04\x88\x01\x01\x12*\n\nparameters\x18\x07 \x03(\x0b\x32\x16.gooseai.StepParameter\x12\x36\n\x10masked_area_init\x18\x08 \x01(\x0e\x32\x17.gooseai.MaskedAreaInitH\x05\x88\x01\x01\x12\x31\n\rweight_method\x18\t \x01(\x0e\x32\x15.gooseai.WeightMethodH\x06\x88\x01\x01\x12\x15\n\x08quantize\x18\n \x01(\x08H\x07\x88\x01\x01\x12\x34\n\textension\x18\xf4\x03 \x01(\x0b\x32\x1b.gooseai.ExtendedParametersH\x08\x88\x01\x01\x12\x30\n\x05hires\x18\xfe\x03 \x01(\x0b\x32\x1b.gooseai.HiresFixParametersH\t\x88\x01\x01\x12\x14\n\x06tiling\x18\x88\x04 \x01(\x08H\n\x88\x01\x01\x42\t\n\x07_heightB\x08\n\x06_widthB\n\n\x08_samplesB\x08\n\x06_stepsB\x0c\n\n_transformB\x13\n\x11_masked_area_initB\x10\n\x0e_weight_methodB\x0b\n\t_quantizeB\x0c\n\n_extensionB\x08\n\x06_hiresB\t\n\x07_tiling\"J\n\x11\x43lassifierConcept\x12\x0f\n\x07\x63oncept\x18\x01 \x01(\t\x12\x16\n\tthreshold\x18\x02 \x01(\x02H\x00\x88\x01\x01\x42\x0c\n\n_threshold\"\xf4\x01\n\x12\x43lassifierCategory\x12\x0c\n\x04name\x18\x01 \x01(\t\x12,\n\x08\x63oncepts\x18\x02 \x03(\x0b\x32\x1a.gooseai.ClassifierConcept\x12\x17\n\nadjustment\x18\x03 \x01(\x02H\x00\x88\x01\x01\x12$\n\x06\x61\x63tion\x18\x04 \x01(\x0e\x32\x0f.gooseai.ActionH\x01\x88\x01\x01\x12\x35\n\x0f\x63lassifier_mode\x18\x05 \x01(\x0e\x32\x17.gooseai.ClassifierModeH\x02\x88\x01\x01\x42\r\n\x0b_adjustmentB\t\n\x07_actionB\x12\n\x10_classifier_mode\"\xb8\x01\n\x14\x43lassifierParameters\x12/\n\ncategories\x18\x01 \x03(\x0b\x32\x1b.gooseai.ClassifierCategory\x12,\n\x07\x65xceeds\x18\x02 \x03(\x0b\x32\x1b.gooseai.ClassifierCategory\x12-\n\x0frealized_action\x18\x03 \x01(\x0e\x32\x0f.gooseai.ActionH\x00\x88\x01\x01\x42\x12\n\x10_realized_action\"k\n\x0f\x41ssetParameters\x12$\n\x06\x61\x63tion\x18\x01 \x01(\x0e\x32\x14.gooseai.AssetAction\x12\x12\n\nproject_id\x18\x02 \x01(\t\x12\x1e\n\x03use\x18\x03 
\x01(\x0e\x32\x11.gooseai.AssetUse\"\x94\x01\n\nAnswerMeta\x12\x13\n\x06gpu_id\x18\x01 \x01(\tH\x00\x88\x01\x01\x12\x13\n\x06\x63pu_id\x18\x02 \x01(\tH\x01\x88\x01\x01\x12\x14\n\x07node_id\x18\x03 \x01(\tH\x02\x88\x01\x01\x12\x16\n\tengine_id\x18\x04 \x01(\tH\x03\x88\x01\x01\x42\t\n\x07_gpu_idB\t\n\x07_cpu_idB\n\n\x08_node_idB\x0c\n\n_engine_id\"\xa9\x01\n\x06\x41nswer\x12\x11\n\tanswer_id\x18\x01 \x01(\t\x12\x12\n\nrequest_id\x18\x02 \x01(\t\x12\x10\n\x08received\x18\x03 \x01(\x04\x12\x0f\n\x07\x63reated\x18\x04 \x01(\x04\x12&\n\x04meta\x18\x06 \x01(\x0b\x32\x13.gooseai.AnswerMetaH\x00\x88\x01\x01\x12$\n\tartifacts\x18\x07 \x03(\x0b\x32\x11.gooseai.ArtifactB\x07\n\x05_meta\"\xeb\x02\n\x07Request\x12\x11\n\tengine_id\x18\x01 \x01(\t\x12\x12\n\nrequest_id\x18\x02 \x01(\t\x12-\n\x0erequested_type\x18\x03 \x01(\x0e\x32\x15.gooseai.ArtifactType\x12\x1f\n\x06prompt\x18\x04 \x03(\x0b\x32\x0f.gooseai.Prompt\x12)\n\x05image\x18\x05 \x01(\x0b\x32\x18.gooseai.ImageParametersH\x00\x12\x33\n\nclassifier\x18\x07 \x01(\x0b\x32\x1d.gooseai.ClassifierParametersH\x00\x12)\n\x05\x61sset\x18\x08 \x01(\x0b\x32\x18.gooseai.AssetParametersH\x00\x12\x38\n\x0b\x63onditioner\x18\x06 \x01(\x0b\x32\x1e.gooseai.ConditionerParametersH\x01\x88\x01\x01\x42\x08\n\x06paramsB\x0e\n\x0c_conditionerJ\x04\x08\t\x10\nJ\x04\x08\n\x10\x0b\"w\n\x08OnStatus\x12%\n\x06reason\x18\x01 \x03(\x0e\x32\x15.gooseai.FinishReason\x12\x13\n\x06target\x18\x02 \x01(\tH\x00\x88\x01\x01\x12$\n\x06\x61\x63tion\x18\x03 \x03(\x0e\x32\x14.gooseai.StageActionB\t\n\x07_target\"\\\n\x05Stage\x12\n\n\x02id\x18\x01 \x01(\t\x12!\n\x07request\x18\x02 \x01(\x0b\x32\x10.gooseai.Request\x12$\n\ton_status\x18\x03 \x03(\x0b\x32\x11.gooseai.OnStatus\"A\n\x0c\x43hainRequest\x12\x12\n\nrequest_id\x18\x01 \x01(\t\x12\x1d\n\x05stage\x18\x02 \x03(\x0b\x32\x0e.gooseai.Stage\",\n\x0b\x41syncStatus\x12\x0c\n\x04\x63ode\x18\x01 \x01(\x05\x12\x0f\n\x07message\x18\x02 \x01(\t\"f\n\x0b\x41syncAnswer\x12\x1f\n\x06\x61nswer\x18\x01 
\x03(\x0b\x32\x0f.gooseai.Answer\x12\x10\n\x08\x63omplete\x18\x02 \x01(\x08\x12$\n\x06status\x18\x03 \x01(\x0b\x32\x14.gooseai.AsyncStatus\"7\n\x0b\x41syncHandle\x12\x12\n\nrequest_id\x18\x01 \x01(\t\x12\x14\n\x0c\x61sync_handle\x18\x02 \x01(\t\"\x13\n\x11\x41syncCancelAnswer*E\n\x0c\x46inishReason\x12\x08\n\x04NULL\x10\x00\x12\n\n\x06LENGTH\x10\x01\x12\x08\n\x04STOP\x10\x02\x12\t\n\x05\x45RROR\x10\x03\x12\n\n\x06\x46ILTER\x10\x04*\xf8\x01\n\x0c\x41rtifactType\x12\x11\n\rARTIFACT_NONE\x10\x00\x12\x12\n\x0e\x41RTIFACT_IMAGE\x10\x01\x12\x12\n\x0e\x41RTIFACT_VIDEO\x10\x02\x12\x11\n\rARTIFACT_TEXT\x10\x03\x12\x13\n\x0f\x41RTIFACT_TOKENS\x10\x04\x12\x16\n\x12\x41RTIFACT_EMBEDDING\x10\x05\x12\x1c\n\x18\x41RTIFACT_CLASSIFICATIONS\x10\x06\x12\x11\n\rARTIFACT_MASK\x10\x07\x12\x13\n\x0f\x41RTIFACT_LATENT\x10\x08\x12\x13\n\x0f\x41RTIFACT_TENSOR\x10\t\x12\x12\n\rARTIFACT_LORA\x10\xf4\x03*M\n\x11GaussianDirection\x12\x12\n\x0e\x44IRECTION_NONE\x10\x00\x12\x10\n\x0c\x44IRECTION_UP\x10\x01\x12\x12\n\x0e\x44IRECTION_DOWN\x10\x02*\x83\x01\n\rChannelSource\x12\r\n\tCHANNEL_R\x10\x00\x12\r\n\tCHANNEL_G\x10\x01\x12\r\n\tCHANNEL_B\x10\x02\x12\r\n\tCHANNEL_A\x10\x03\x12\x10\n\x0c\x43HANNEL_ZERO\x10\x04\x12\x0f\n\x0b\x43HANNEL_ONE\x10\x05\x12\x13\n\x0f\x43HANNEL_DISCARD\x10\x06*D\n\x0bRescaleMode\x12\x12\n\x0eRESCALE_STRICT\x10\x00\x12\x10\n\x0cRESCALE_CROP\x10\x02\x12\x0f\n\x0bRESCALE_FIT\x10\x03*2\n\nLoraTarget\x12\r\n\tLORA_UNET\x10\x00\x12\x15\n\x11LORA_TEXT_ENCODER\x10\x01*g\n\x0eMaskedAreaInit\x12\x19\n\x15MASKED_AREA_INIT_ZERO\x10\x00\x12\x1b\n\x17MASKED_AREA_INIT_RANDOM\x10\x01\x12\x1d\n\x19MASKED_AREA_INIT_ORIGINAL\x10\x02*5\n\x0cWeightMethod\x12\x10\n\x0cTEXT_ENCODER\x10\x00\x12\x13\n\x0f\x43ROSS_ATTENTION\x10\x01*\x8f\x04\n\x10\x44iffusionSampler\x12\x10\n\x0cSAMPLER_DDIM\x10\x00\x12\x10\n\x0cSAMPLER_DDPM\x10\x01\x12\x13\n\x0fSAMPLER_K_EULER\x10\x02\x12\x1d\n\x19SAMPLER_K_EULER_ANCESTRAL\x10\x03\x12\x12\n\x0eSAMPLER_K_HEUN\x10\x04\x12\x13\n\x0fSAMPLER_K_DPM_2\x10\x05\x12\x1d\n\
x19SAMPLER_K_DPM_2_ANCESTRAL\x10\x06\x12\x11\n\rSAMPLER_K_LMS\x10\x07\x12 \n\x1cSAMPLER_K_DPMPP_2S_ANCESTRAL\x10\x08\x12\x16\n\x12SAMPLER_K_DPMPP_2M\x10\t\x12\x17\n\x13SAMPLER_K_DPMPP_SDE\x10\n\x12\x1f\n\x1aSAMPLER_DPMSOLVERPP_1ORDER\x10\xf4\x03\x12\x1f\n\x1aSAMPLER_DPMSOLVERPP_2ORDER\x10\xf5\x03\x12\x1f\n\x1aSAMPLER_DPMSOLVERPP_3ORDER\x10\xf6\x03\x12\x15\n\x10SAMPLER_DPM_FAST\x10\xa6\x04\x12\x19\n\x14SAMPLER_DPM_ADAPTIVE\x10\xa7\x04\x12%\n SAMPLER_DPMSOLVERPP_2S_ANCESTRAL\x10\xa8\x04\x12\x1c\n\x17SAMPLER_DPMSOLVERPP_SDE\x10\xa9\x04\x12\x1b\n\x16SAMPLER_DPMSOLVERPP_2M\x10\xaa\x04*H\n\x10SamplerNoiseType\x12\x18\n\x14SAMPLER_NOISE_NORMAL\x10\x00\x12\x1a\n\x16SAMPLER_NOISE_BROWNIAN\x10\x01*F\n\x08Upscaler\x12\x10\n\x0cUPSCALER_RGB\x10\x00\x12\x13\n\x0fUPSCALER_GFPGAN\x10\x01\x12\x13\n\x0fUPSCALER_ESRGAN\x10\x02*\xd8\x01\n\x0eGuidancePreset\x12\x18\n\x14GUIDANCE_PRESET_NONE\x10\x00\x12\x1a\n\x16GUIDANCE_PRESET_SIMPLE\x10\x01\x12\x1d\n\x19GUIDANCE_PRESET_FAST_BLUE\x10\x02\x12\x1e\n\x1aGUIDANCE_PRESET_FAST_GREEN\x10\x03\x12\x18\n\x14GUIDANCE_PRESET_SLOW\x10\x04\x12\x1a\n\x16GUIDANCE_PRESET_SLOWER\x10\x05\x12\x1b\n\x17GUIDANCE_PRESET_SLOWEST\x10\x06*\x91\x01\n\x11ModelArchitecture\x12\x1b\n\x17MODEL_ARCHITECTURE_NONE\x10\x00\x12\x1f\n\x1bMODEL_ARCHITECTURE_CLIP_VIT\x10\x01\x12\"\n\x1eMODEL_ARCHITECTURE_CLIP_RESNET\x10\x02\x12\x1a\n\x16MODEL_ARCHITECTURE_LDM\x10\x03*\xa2\x01\n\x06\x41\x63tion\x12\x16\n\x12\x41\x43TION_PASSTHROUGH\x10\x00\x12\x1f\n\x1b\x41\x43TION_REGENERATE_DUPLICATE\x10\x01\x12\x15\n\x11\x41\x43TION_REGENERATE\x10\x02\x12\x1e\n\x1a\x41\x43TION_OBFUSCATE_DUPLICATE\x10\x03\x12\x14\n\x10\x41\x43TION_OBFUSCATE\x10\x04\x12\x12\n\x0e\x41\x43TION_DISCARD\x10\x05*D\n\x0e\x43lassifierMode\x12\x17\n\x13\x43LSFR_MODE_ZEROSHOT\x10\x00\x12\x19\n\x15\x43LSFR_MODE_MULTICLASS\x10\x01*=\n\x0b\x41ssetAction\x12\r\n\tASSET_PUT\x10\x00\x12\r\n\tASSET_GET\x10\x01\x12\x10\n\x0c\x41SSET_DELETE\x10\x02*\x81\x01\n\x08\x41ssetUse\x12\x17\n\x13\x41SSET_USE_UNDEFINED\x10\x00\x12\x13
\n\x0f\x41SSET_USE_INPUT\x10\x01\x12\x14\n\x10\x41SSET_USE_OUTPUT\x10\x02\x12\x1a\n\x16\x41SSET_USE_INTERMEDIATE\x10\x03\x12\x15\n\x11\x41SSET_USE_PROJECT\x10\x04*W\n\x0bStageAction\x12\x15\n\x11STAGE_ACTION_PASS\x10\x00\x12\x18\n\x14STAGE_ACTION_DISCARD\x10\x01\x12\x17\n\x13STAGE_ACTION_RETURN\x10\x02\x32\xbe\x02\n\x11GenerationService\x12\x31\n\x08Generate\x12\x10.gooseai.Request\x1a\x0f.gooseai.Answer\"\x00\x30\x01\x12;\n\rChainGenerate\x12\x15.gooseai.ChainRequest\x1a\x0f.gooseai.Answer\"\x00\x30\x01\x12\x39\n\rAsyncGenerate\x12\x10.gooseai.Request\x1a\x14.gooseai.AsyncHandle\"\x00\x12;\n\x0b\x41syncResult\x12\x14.gooseai.AsyncHandle\x1a\x14.gooseai.AsyncAnswer\"\x00\x12\x41\n\x0b\x41syncCancel\x12\x14.gooseai.AsyncHandle\x1a\x1a.gooseai.AsyncCancelAnswer\"\x00\x42;Z9github.com/stability-ai/api-interfaces/gooseai/generationb\x06proto3') 18 | 19 | _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) 20 | _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'generation_pb2', globals()) 21 | if _descriptor._USE_C_DESCRIPTORS == False: 22 | 23 | DESCRIPTOR._options = None 24 | DESCRIPTOR._serialized_options = b'Z9github.com/stability-ai/api-interfaces/gooseai/generation' 25 | _FINISHREASON._serialized_start=6854 26 | _FINISHREASON._serialized_end=6923 27 | _ARTIFACTTYPE._serialized_start=6926 28 | _ARTIFACTTYPE._serialized_end=7174 29 | _GAUSSIANDIRECTION._serialized_start=7176 30 | _GAUSSIANDIRECTION._serialized_end=7253 31 | _CHANNELSOURCE._serialized_start=7256 32 | _CHANNELSOURCE._serialized_end=7387 33 | _RESCALEMODE._serialized_start=7389 34 | _RESCALEMODE._serialized_end=7457 35 | _LORATARGET._serialized_start=7459 36 | _LORATARGET._serialized_end=7509 37 | _MASKEDAREAINIT._serialized_start=7511 38 | _MASKEDAREAINIT._serialized_end=7614 39 | _WEIGHTMETHOD._serialized_start=7616 40 | _WEIGHTMETHOD._serialized_end=7669 41 | _DIFFUSIONSAMPLER._serialized_start=7672 42 | _DIFFUSIONSAMPLER._serialized_end=8199 43 | 
_SAMPLERNOISETYPE._serialized_start=8201 44 | _SAMPLERNOISETYPE._serialized_end=8273 45 | _UPSCALER._serialized_start=8275 46 | _UPSCALER._serialized_end=8345 47 | _GUIDANCEPRESET._serialized_start=8348 48 | _GUIDANCEPRESET._serialized_end=8564 49 | _MODELARCHITECTURE._serialized_start=8567 50 | _MODELARCHITECTURE._serialized_end=8712 51 | _ACTION._serialized_start=8715 52 | _ACTION._serialized_end=8877 53 | _CLASSIFIERMODE._serialized_start=8879 54 | _CLASSIFIERMODE._serialized_end=8947 55 | _ASSETACTION._serialized_start=8949 56 | _ASSETACTION._serialized_end=9010 57 | _ASSETUSE._serialized_start=9013 58 | _ASSETUSE._serialized_end=9142 59 | _STAGEACTION._serialized_start=9144 60 | _STAGEACTION._serialized_end=9231 61 | _TOKEN._serialized_start=44 62 | _TOKEN._serialized_end=91 63 | _TOKENS._serialized_start=93 64 | _TOKENS._serialized_end=177 65 | _IMAGEADJUSTMENT_GAUSSIAN._serialized_start=179 66 | _IMAGEADJUSTMENT_GAUSSIAN._serialized_end=267 67 | _IMAGEADJUSTMENT_INVERT._serialized_start=269 68 | _IMAGEADJUSTMENT_INVERT._serialized_end=293 69 | _IMAGEADJUSTMENT_LEVELS._serialized_start=295 70 | _IMAGEADJUSTMENT_LEVELS._serialized_end=399 71 | _IMAGEADJUSTMENT_CHANNELS._serialized_start=402 72 | _IMAGEADJUSTMENT_CHANNELS._serialized_end=612 73 | _IMAGEADJUSTMENT_RESCALE._serialized_start=614 74 | _IMAGEADJUSTMENT_RESCALE._serialized_end=730 75 | _IMAGEADJUSTMENT_CROP._serialized_start=732 76 | _IMAGEADJUSTMENT_CROP._serialized_end=812 77 | _IMAGEADJUSTMENT._serialized_start=815 78 | _IMAGEADJUSTMENT._serialized_end=1154 79 | _LORA._serialized_start=1156 80 | _LORA._serialized_end=1233 81 | _ARTIFACT._serialized_start=1236 82 | _ARTIFACT._serialized_end=1739 83 | _PROMPTPARAMETERS._serialized_start=1741 84 | _PROMPTPARAMETERS._serialized_end=1819 85 | _PROMPT._serialized_start=1822 86 | _PROMPT._serialized_end=1997 87 | _SIGMAPARAMETERS._serialized_start=2000 88 | _SIGMAPARAMETERS._serialized_end=2133 89 | _CHURNSETTINGS._serialized_start=2135 90 | 
_CHURNSETTINGS._serialized_end=2245 91 | _SAMPLERPARAMETERS._serialized_start=2248 92 | _SAMPLERPARAMETERS._serialized_end=2771 93 | _CONDITIONERPARAMETERS._serialized_start=2774 94 | _CONDITIONERPARAMETERS._serialized_end=2913 95 | _SCHEDULEPARAMETERS._serialized_start=2915 96 | _SCHEDULEPARAMETERS._serialized_end=3021 97 | _STEPPARAMETER._serialized_start=3024 98 | _STEPPARAMETER._serialized_end=3252 99 | _MODEL._serialized_start=3255 100 | _MODEL._serialized_end=3406 101 | _CUTOUTPARAMETERS._serialized_start=3409 102 | _CUTOUTPARAMETERS._serialized_end=3597 103 | _GUIDANCESCHEDULEPARAMETERS._serialized_start=3599 104 | _GUIDANCESCHEDULEPARAMETERS._serialized_end=3660 105 | _GUIDANCEINSTANCEPARAMETERS._serialized_start=3663 106 | _GUIDANCEINSTANCEPARAMETERS._serialized_end=3942 107 | _GUIDANCEPARAMETERS._serialized_start=3944 108 | _GUIDANCEPARAMETERS._serialized_end=4070 109 | _TRANSFORMTYPE._serialized_start=4072 110 | _TRANSFORMTYPE._serialized_end=4182 111 | _EXTENDEDPARAMETER._serialized_start=4184 112 | _EXTENDEDPARAMETER._serialized_end=4273 113 | _EXTENDEDPARAMETERS._serialized_start=4275 114 | _EXTENDEDPARAMETERS._serialized_end=4343 115 | _HIRESFIXPARAMETERS._serialized_start=4345 116 | _HIRESFIXPARAMETERS._serialized_end=4425 117 | _IMAGEPARAMETERS._serialized_start=4428 118 | _IMAGEPARAMETERS._serialized_end=5034 119 | _CLASSIFIERCONCEPT._serialized_start=5036 120 | _CLASSIFIERCONCEPT._serialized_end=5110 121 | _CLASSIFIERCATEGORY._serialized_start=5113 122 | _CLASSIFIERCATEGORY._serialized_end=5357 123 | _CLASSIFIERPARAMETERS._serialized_start=5360 124 | _CLASSIFIERPARAMETERS._serialized_end=5544 125 | _ASSETPARAMETERS._serialized_start=5546 126 | _ASSETPARAMETERS._serialized_end=5653 127 | _ANSWERMETA._serialized_start=5656 128 | _ANSWERMETA._serialized_end=5804 129 | _ANSWER._serialized_start=5807 130 | _ANSWER._serialized_end=5976 131 | _REQUEST._serialized_start=5979 132 | _REQUEST._serialized_end=6342 133 | _ONSTATUS._serialized_start=6344 134 | 
_ONSTATUS._serialized_end=6463 135 | _STAGE._serialized_start=6465 136 | _STAGE._serialized_end=6557 137 | _CHAINREQUEST._serialized_start=6559 138 | _CHAINREQUEST._serialized_end=6624 139 | _ASYNCSTATUS._serialized_start=6626 140 | _ASYNCSTATUS._serialized_end=6670 141 | _ASYNCANSWER._serialized_start=6672 142 | _ASYNCANSWER._serialized_end=6774 143 | _ASYNCHANDLE._serialized_start=6776 144 | _ASYNCHANDLE._serialized_end=6831 145 | _ASYNCCANCELANSWER._serialized_start=6833 146 | _ASYNCCANCELANSWER._serialized_end=6852 147 | _GENERATIONSERVICE._serialized_start=9234 148 | _GENERATIONSERVICE._serialized_end=9552 149 | # @@protoc_insertion_point(module_scope) 150 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/generation_pb2_grpc.py: -------------------------------------------------------------------------------- 1 | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! 2 | """Client and server classes corresponding to protobuf-defined services.""" 3 | import grpc 4 | 5 | import generation_pb2 as generation__pb2 6 | 7 | 8 | class GenerationServiceStub(object): 9 | """ 10 | gRPC services 11 | 12 | """ 13 | 14 | def __init__(self, channel): 15 | """Constructor. 16 | 17 | Args: 18 | channel: A grpc.Channel. 
19 | """ 20 | self.Generate = channel.unary_stream( 21 | '/gooseai.GenerationService/Generate', 22 | request_serializer=generation__pb2.Request.SerializeToString, 23 | response_deserializer=generation__pb2.Answer.FromString, 24 | ) 25 | self.ChainGenerate = channel.unary_stream( 26 | '/gooseai.GenerationService/ChainGenerate', 27 | request_serializer=generation__pb2.ChainRequest.SerializeToString, 28 | response_deserializer=generation__pb2.Answer.FromString, 29 | ) 30 | self.AsyncGenerate = channel.unary_unary( 31 | '/gooseai.GenerationService/AsyncGenerate', 32 | request_serializer=generation__pb2.Request.SerializeToString, 33 | response_deserializer=generation__pb2.AsyncHandle.FromString, 34 | ) 35 | self.AsyncResult = channel.unary_unary( 36 | '/gooseai.GenerationService/AsyncResult', 37 | request_serializer=generation__pb2.AsyncHandle.SerializeToString, 38 | response_deserializer=generation__pb2.AsyncAnswer.FromString, 39 | ) 40 | self.AsyncCancel = channel.unary_unary( 41 | '/gooseai.GenerationService/AsyncCancel', 42 | request_serializer=generation__pb2.AsyncHandle.SerializeToString, 43 | response_deserializer=generation__pb2.AsyncCancelAnswer.FromString, 44 | ) 45 | 46 | 47 | class GenerationServiceServicer(object): 48 | """ 49 | gRPC services 50 | 51 | """ 52 | 53 | def Generate(self, request, context): 54 | """Missing associated documentation comment in .proto file.""" 55 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 56 | context.set_details('Method not implemented!') 57 | raise NotImplementedError('Method not implemented!') 58 | 59 | def ChainGenerate(self, request, context): 60 | """Missing associated documentation comment in .proto file.""" 61 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 62 | context.set_details('Method not implemented!') 63 | raise NotImplementedError('Method not implemented!') 64 | 65 | def AsyncGenerate(self, request, context): 66 | """AsyncGenerate starts an asynchronous generation 67 | 68 | The passed Request is the same 
as to Generate. However this method 69 | will return immediately, returning a handle that can be used to get 70 | any results of the generation created so far or cancel it. 71 | """ 72 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 73 | context.set_details('Method not implemented!') 74 | raise NotImplementedError('Method not implemented!') 75 | 76 | def AsyncResult(self, request, context): 77 | """AsyncResult gets any results so far for an asynchronous generation 78 | 79 | You can call this multiple times. Each time you call it, you will get 80 | any results that are ready that have not been returned before. 81 | (Note that this "consumes" the ready results - they will not be returned again). 82 | 83 | Any generated results will eventually (default: after 10 minutes) be discarded 84 | if they are not taken by a call to this method. 85 | """ 86 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 87 | context.set_details('Method not implemented!') 88 | raise NotImplementedError('Method not implemented!') 89 | 90 | def AsyncCancel(self, request, context): 91 | """AsyncCancel cancels a generation that is currently in progress 92 | and discards any results that have not yet been returned by a call to AsyncResult.
93 | """ 94 | context.set_code(grpc.StatusCode.UNIMPLEMENTED) 95 | context.set_details('Method not implemented!') 96 | raise NotImplementedError('Method not implemented!') 97 | 98 | 99 | def add_GenerationServiceServicer_to_server(servicer, server): 100 | rpc_method_handlers = { 101 | 'Generate': grpc.unary_stream_rpc_method_handler( 102 | servicer.Generate, 103 | request_deserializer=generation__pb2.Request.FromString, 104 | response_serializer=generation__pb2.Answer.SerializeToString, 105 | ), 106 | 'ChainGenerate': grpc.unary_stream_rpc_method_handler( 107 | servicer.ChainGenerate, 108 | request_deserializer=generation__pb2.ChainRequest.FromString, 109 | response_serializer=generation__pb2.Answer.SerializeToString, 110 | ), 111 | 'AsyncGenerate': grpc.unary_unary_rpc_method_handler( 112 | servicer.AsyncGenerate, 113 | request_deserializer=generation__pb2.Request.FromString, 114 | response_serializer=generation__pb2.AsyncHandle.SerializeToString, 115 | ), 116 | 'AsyncResult': grpc.unary_unary_rpc_method_handler( 117 | servicer.AsyncResult, 118 | request_deserializer=generation__pb2.AsyncHandle.FromString, 119 | response_serializer=generation__pb2.AsyncAnswer.SerializeToString, 120 | ), 121 | 'AsyncCancel': grpc.unary_unary_rpc_method_handler( 122 | servicer.AsyncCancel, 123 | request_deserializer=generation__pb2.AsyncHandle.FromString, 124 | response_serializer=generation__pb2.AsyncCancelAnswer.SerializeToString, 125 | ), 126 | } 127 | generic_handler = grpc.method_handlers_generic_handler( 128 | 'gooseai.GenerationService', rpc_method_handlers) 129 | server.add_generic_rpc_handlers((generic_handler,)) 130 | 131 | 132 | # This class is part of an EXPERIMENTAL API. 
133 | class GenerationService(object): 134 | """ 135 | gRPC services 136 | 137 | """ 138 | 139 | @staticmethod 140 | def Generate(request, 141 | target, 142 | options=(), 143 | channel_credentials=None, 144 | call_credentials=None, 145 | insecure=False, 146 | compression=None, 147 | wait_for_ready=None, 148 | timeout=None, 149 | metadata=None): 150 | return grpc.experimental.unary_stream(request, target, '/gooseai.GenerationService/Generate', 151 | generation__pb2.Request.SerializeToString, 152 | generation__pb2.Answer.FromString, 153 | options, channel_credentials, 154 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 155 | 156 | @staticmethod 157 | def ChainGenerate(request, 158 | target, 159 | options=(), 160 | channel_credentials=None, 161 | call_credentials=None, 162 | insecure=False, 163 | compression=None, 164 | wait_for_ready=None, 165 | timeout=None, 166 | metadata=None): 167 | return grpc.experimental.unary_stream(request, target, '/gooseai.GenerationService/ChainGenerate', 168 | generation__pb2.ChainRequest.SerializeToString, 169 | generation__pb2.Answer.FromString, 170 | options, channel_credentials, 171 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 172 | 173 | @staticmethod 174 | def AsyncGenerate(request, 175 | target, 176 | options=(), 177 | channel_credentials=None, 178 | call_credentials=None, 179 | insecure=False, 180 | compression=None, 181 | wait_for_ready=None, 182 | timeout=None, 183 | metadata=None): 184 | return grpc.experimental.unary_unary(request, target, '/gooseai.GenerationService/AsyncGenerate', 185 | generation__pb2.Request.SerializeToString, 186 | generation__pb2.AsyncHandle.FromString, 187 | options, channel_credentials, 188 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 189 | 190 | @staticmethod 191 | def AsyncResult(request, 192 | target, 193 | options=(), 194 | channel_credentials=None, 195 | call_credentials=None, 196 | insecure=False, 197 
| compression=None, 198 | wait_for_ready=None, 199 | timeout=None, 200 | metadata=None): 201 | return grpc.experimental.unary_unary(request, target, '/gooseai.GenerationService/AsyncResult', 202 | generation__pb2.AsyncHandle.SerializeToString, 203 | generation__pb2.AsyncAnswer.FromString, 204 | options, channel_credentials, 205 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 206 | 207 | @staticmethod 208 | def AsyncCancel(request, 209 | target, 210 | options=(), 211 | channel_credentials=None, 212 | call_credentials=None, 213 | insecure=False, 214 | compression=None, 215 | wait_for_ready=None, 216 | timeout=None, 217 | metadata=None): 218 | return grpc.experimental.unary_unary(request, target, '/gooseai.GenerationService/AsyncCancel', 219 | generation__pb2.AsyncHandle.SerializeToString, 220 | generation__pb2.AsyncCancelAnswer.FromString, 221 | options, channel_credentials, 222 | insecure, call_credentials, compression, wait_for_ready, timeout, metadata) 223 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/tensors_pb2.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # Generated by the protocol buffer compiler. DO NOT EDIT! 
3 | # source: tensors.proto 4 | """Generated protocol buffer code.""" 5 | from google.protobuf.internal import builder as _builder 6 | from google.protobuf import descriptor as _descriptor 7 | from google.protobuf import descriptor_pool as _descriptor_pool 8 | from google.protobuf import symbol_database as _symbol_database 9 | # @@protoc_insertion_point(imports) 10 | 11 | _sym_db = _symbol_database.Default() 12 | 13 | 14 | 15 | 16 | DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\rtensors.proto\x12\x07tensors\"\x82\x01\n\x06Tensor\x12\x1d\n\x05\x64type\x18\x01 \x01(\x0e\x32\x0e.tensors.Dtype\x12\r\n\x05shape\x18\x02 \x03(\x03\x12\x0c\n\x04\x64\x61ta\x18\x03 \x01(\x0c\x12.\n\tattr_type\x18\x04 \x01(\x0e\x32\x16.tensors.AttributeTypeH\x00\x88\x01\x01\x42\x0c\n\n_attr_type\"\xac\x01\n\tAttribute\x12\x0c\n\x04name\x18\x01 \x01(\t\x12!\n\x06module\x18\x03 \x01(\x0b\x32\x0f.tensors.ModuleH\x00\x12!\n\x06tensor\x18\x04 \x01(\x0b\x32\x0f.tensors.TensorH\x00\x12\x10\n\x06string\x18\x05 \x01(\tH\x00\x12\x0f\n\x05int64\x18\x06 \x01(\x03H\x00\x12\x0f\n\x05\x66loat\x18\x07 \x01(\x02H\x00\x12\x0e\n\x04\x62ool\x18\x08 \x01(\x08H\x00\x42\x07\n\x05value\"M\n\x06Module\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\r\n\x05names\x18\x02 \x03(\t\x12&\n\nattributes\x18\x03 
\x03(\x0b\x32\x12.tensors.Attribute*\x9e\x02\n\x05\x44type\x12\x0e\n\nDT_INVALID\x10\x00\x12\x0e\n\nDT_FLOAT32\x10\x01\x12\x0e\n\nDT_FLOAT64\x10\x02\x12\x0e\n\nDT_FLOAT16\x10\x03\x12\x0f\n\x0b\x44T_BFLOAT16\x10\x04\x12\x10\n\x0c\x44T_COMPLEX32\x10\x05\x12\x10\n\x0c\x44T_COMPLEX64\x10\x06\x12\x11\n\rDT_COMPLEX128\x10\x07\x12\x0c\n\x08\x44T_UINT8\x10\x08\x12\x0b\n\x07\x44T_INT8\x10\t\x12\x0c\n\x08\x44T_INT16\x10\n\x12\x0c\n\x08\x44T_INT32\x10\x0b\x12\x0c\n\x08\x44T_INT64\x10\x0c\x12\x0b\n\x07\x44T_BOOL\x10\r\x12\r\n\tDT_QUINT8\x10\x0e\x12\x0c\n\x08\x44T_QINT8\x10\x0f\x12\r\n\tDT_QINT32\x10\x10\x12\x0f\n\x0b\x44T_QUINT4_2\x10\x11*0\n\rAttributeType\x12\x10\n\x0c\x41T_PARAMETER\x10\x00\x12\r\n\tAT_BUFFER\x10\x01\x42)Z\'github.com/coreweave/tensorizer/tensorsb\x06proto3') 17 | 18 | _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) 19 | _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'tensors_pb2', globals()) 20 | if _descriptor._USE_C_DESCRIPTORS == False: 21 | 22 | DESCRIPTOR._options = None 23 | DESCRIPTOR._serialized_options = b'Z\'github.com/coreweave/tensorizer/tensors' 24 | _DTYPE._serialized_start=414 25 | _DTYPE._serialized_end=700 26 | _ATTRIBUTETYPE._serialized_start=702 27 | _ATTRIBUTETYPE._serialized_end=750 28 | _TENSOR._serialized_start=27 29 | _TENSOR._serialized_end=157 30 | _ATTRIBUTE._serialized_start=160 31 | _ATTRIBUTE._serialized_end=332 32 | _MODULE._serialized_start=334 33 | _MODULE._serialized_end=411 34 | # @@protoc_insertion_point(module_scope) 35 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/tensors_pb2.pyi: -------------------------------------------------------------------------------- 1 | """ 2 | @generated by mypy-protobuf. Do not edit manually! 
3 | isort:skip_file 4 | """ 5 | import builtins 6 | import collections.abc 7 | import google.protobuf.descriptor 8 | import google.protobuf.internal.containers 9 | import google.protobuf.internal.enum_type_wrapper 10 | import google.protobuf.message 11 | import sys 12 | import typing 13 | 14 | if sys.version_info >= (3, 10): 15 | import typing as typing_extensions 16 | else: 17 | import typing_extensions 18 | 19 | DESCRIPTOR: google.protobuf.descriptor.FileDescriptor 20 | 21 | class _Dtype: 22 | ValueType = typing.NewType("ValueType", builtins.int) 23 | V: typing_extensions.TypeAlias = ValueType 24 | 25 | class _DtypeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_Dtype.ValueType], builtins.type): 26 | DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor 27 | DT_INVALID: _Dtype.ValueType # 0 28 | DT_FLOAT32: _Dtype.ValueType # 1 29 | DT_FLOAT64: _Dtype.ValueType # 2 30 | DT_FLOAT16: _Dtype.ValueType # 3 31 | DT_BFLOAT16: _Dtype.ValueType # 4 32 | DT_COMPLEX32: _Dtype.ValueType # 5 33 | DT_COMPLEX64: _Dtype.ValueType # 6 34 | DT_COMPLEX128: _Dtype.ValueType # 7 35 | DT_UINT8: _Dtype.ValueType # 8 36 | DT_INT8: _Dtype.ValueType # 9 37 | DT_INT16: _Dtype.ValueType # 10 38 | DT_INT32: _Dtype.ValueType # 11 39 | DT_INT64: _Dtype.ValueType # 12 40 | DT_BOOL: _Dtype.ValueType # 13 41 | DT_QUINT8: _Dtype.ValueType # 14 42 | DT_QINT8: _Dtype.ValueType # 15 43 | DT_QINT32: _Dtype.ValueType # 16 44 | DT_QUINT4_2: _Dtype.ValueType # 17 45 | 46 | class Dtype(_Dtype, metaclass=_DtypeEnumTypeWrapper): ... 
47 | 48 | DT_INVALID: Dtype.ValueType # 0 49 | DT_FLOAT32: Dtype.ValueType # 1 50 | DT_FLOAT64: Dtype.ValueType # 2 51 | DT_FLOAT16: Dtype.ValueType # 3 52 | DT_BFLOAT16: Dtype.ValueType # 4 53 | DT_COMPLEX32: Dtype.ValueType # 5 54 | DT_COMPLEX64: Dtype.ValueType # 6 55 | DT_COMPLEX128: Dtype.ValueType # 7 56 | DT_UINT8: Dtype.ValueType # 8 57 | DT_INT8: Dtype.ValueType # 9 58 | DT_INT16: Dtype.ValueType # 10 59 | DT_INT32: Dtype.ValueType # 11 60 | DT_INT64: Dtype.ValueType # 12 61 | DT_BOOL: Dtype.ValueType # 13 62 | DT_QUINT8: Dtype.ValueType # 14 63 | DT_QINT8: Dtype.ValueType # 15 64 | DT_QINT32: Dtype.ValueType # 16 65 | DT_QUINT4_2: Dtype.ValueType # 17 66 | global___Dtype = Dtype 67 | 68 | class _AttributeType: 69 | ValueType = typing.NewType("ValueType", builtins.int) 70 | V: typing_extensions.TypeAlias = ValueType 71 | 72 | class _AttributeTypeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[_AttributeType.ValueType], builtins.type): 73 | DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor 74 | AT_PARAMETER: _AttributeType.ValueType # 0 75 | AT_BUFFER: _AttributeType.ValueType # 1 76 | 77 | class AttributeType(_AttributeType, metaclass=_AttributeTypeEnumTypeWrapper): ... 78 | 79 | AT_PARAMETER: AttributeType.ValueType # 0 80 | AT_BUFFER: AttributeType.ValueType # 1 81 | global___AttributeType = AttributeType 82 | 83 | @typing_extensions.final 84 | class Tensor(google.protobuf.message.Message): 85 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 86 | 87 | DTYPE_FIELD_NUMBER: builtins.int 88 | SHAPE_FIELD_NUMBER: builtins.int 89 | DATA_FIELD_NUMBER: builtins.int 90 | ATTR_TYPE_FIELD_NUMBER: builtins.int 91 | dtype: global___Dtype.ValueType 92 | @property 93 | def shape(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.int]: ... 
94 | data: builtins.bytes 95 | attr_type: global___AttributeType.ValueType 96 | def __init__( 97 | self, 98 | *, 99 | dtype: global___Dtype.ValueType = ..., 100 | shape: collections.abc.Iterable[builtins.int] | None = ..., 101 | data: builtins.bytes = ..., 102 | attr_type: global___AttributeType.ValueType | None = ..., 103 | ) -> None: ... 104 | def HasField(self, field_name: typing_extensions.Literal["_attr_type", b"_attr_type", "attr_type", b"attr_type"]) -> builtins.bool: ... 105 | def ClearField(self, field_name: typing_extensions.Literal["_attr_type", b"_attr_type", "attr_type", b"attr_type", "data", b"data", "dtype", b"dtype", "shape", b"shape"]) -> None: ... 106 | def WhichOneof(self, oneof_group: typing_extensions.Literal["_attr_type", b"_attr_type"]) -> typing_extensions.Literal["attr_type"] | None: ... 107 | 108 | global___Tensor = Tensor 109 | 110 | @typing_extensions.final 111 | class Attribute(google.protobuf.message.Message): 112 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 113 | 114 | NAME_FIELD_NUMBER: builtins.int 115 | MODULE_FIELD_NUMBER: builtins.int 116 | TENSOR_FIELD_NUMBER: builtins.int 117 | STRING_FIELD_NUMBER: builtins.int 118 | INT64_FIELD_NUMBER: builtins.int 119 | FLOAT_FIELD_NUMBER: builtins.int 120 | BOOL_FIELD_NUMBER: builtins.int 121 | name: builtins.str 122 | @property 123 | def module(self) -> global___Module: ... 124 | @property 125 | def tensor(self) -> global___Tensor: ... 126 | string: builtins.str 127 | int64: builtins.int 128 | float: builtins.float 129 | bool: builtins.bool 130 | def __init__( 131 | self, 132 | *, 133 | name: builtins.str = ..., 134 | module: global___Module | None = ..., 135 | tensor: global___Tensor | None = ..., 136 | string: builtins.str = ..., 137 | int64: builtins.int = ..., 138 | float: builtins.float = ..., 139 | bool: builtins.bool = ..., 140 | ) -> None: ... 
141 | def HasField(self, field_name: typing_extensions.Literal["bool", b"bool", "float", b"float", "int64", b"int64", "module", b"module", "string", b"string", "tensor", b"tensor", "value", b"value"]) -> builtins.bool: ... 142 | def ClearField(self, field_name: typing_extensions.Literal["bool", b"bool", "float", b"float", "int64", b"int64", "module", b"module", "name", b"name", "string", b"string", "tensor", b"tensor", "value", b"value"]) -> None: ... 143 | def WhichOneof(self, oneof_group: typing_extensions.Literal["value", b"value"]) -> typing_extensions.Literal["module", "tensor", "string", "int64", "float", "bool"] | None: ... 144 | 145 | global___Attribute = Attribute 146 | 147 | @typing_extensions.final 148 | class Module(google.protobuf.message.Message): 149 | DESCRIPTOR: google.protobuf.descriptor.Descriptor 150 | 151 | NAME_FIELD_NUMBER: builtins.int 152 | NAMES_FIELD_NUMBER: builtins.int 153 | ATTRIBUTES_FIELD_NUMBER: builtins.int 154 | name: builtins.str 155 | @property 156 | def names(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.str]: ... 157 | @property 158 | def attributes(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Attribute]: ... 159 | def __init__( 160 | self, 161 | *, 162 | name: builtins.str = ..., 163 | names: collections.abc.Iterable[builtins.str] | None = ..., 164 | attributes: collections.abc.Iterable[global___Attribute] | None = ..., 165 | ) -> None: ... 166 | def ClearField(self, field_name: typing_extensions.Literal["attributes", b"attributes", "name", b"name", "names", b"names"]) -> None: ... 167 | 168 | global___Module = Module 169 | -------------------------------------------------------------------------------- /modules/sdgrpcserver/generated/tensors_pb2_grpc.py: -------------------------------------------------------------------------------- 1 | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! 
2 | """Client and server classes corresponding to protobuf-defined services.""" 3 | import grpc 4 | 5 | -------------------------------------------------------------------------------- /modules/sdgrpcserver_client.py: -------------------------------------------------------------------------------- 1 | #!/bin/which python3 2 | 3 | # Modified version of Stability-AI SDK client.py. Changes: 4 | # - Calls cancel on ctrl-c to allow server to abort 5 | # - Supports setting ETA parameter 6 | # - Supports actually setting CLIP guidance strength 7 | # - Supports negative prompt by setting a prompt with negative weight 8 | # - Supports sending key to machines on local network over HTTP (not HTTPS) 9 | 10 | import sys 11 | import os 12 | import uuid 13 | import random 14 | import io 15 | import logging 16 | import time 17 | import mimetypes 18 | #import signal 19 | from sdgrpcserver.sonora import client as sonora_client 20 | 21 | import grpc 22 | from argparse import ArgumentParser, Namespace 23 | from typing import Dict, Generator, List, Optional, Union, Any, Sequence, Tuple 24 | from google.protobuf.json_format import MessageToJson 25 | #from PIL import Image 26 | 27 | # this is necessary because of how the auto-generated code constructs its imports 28 | thisPath = os.path.dirname(os.path.abspath(__file__)) 29 | genPath = thisPath +"/sdgrpcserver/generated" 30 | 31 | sys.path.append(str(genPath)) 32 | 33 | import generation_pb2 as generation 34 | import generation_pb2_grpc as generation_grpc 35 | import engines_pb2 as engines 36 | import engines_pb2_grpc as engines_grpc 37 | 38 | logger = logging.getLogger(__name__) 39 | logger.setLevel(level=logging.INFO) 40 | 41 | SAMPLERS: Dict[str, int] = { 42 | "ddim": generation.SAMPLER_DDIM, 43 | "plms": generation.SAMPLER_DDPM, 44 | "k_euler": generation.SAMPLER_K_EULER, 45 | "k_euler_ancestral": generation.SAMPLER_K_EULER_ANCESTRAL, 46 | "k_heun": generation.SAMPLER_K_HEUN, 47 | "k_dpm_2": generation.SAMPLER_K_DPM_2, 48 | 
"k_dpm_2_ancestral": generation.SAMPLER_K_DPM_2_ANCESTRAL, 49 | "k_lms": generation.SAMPLER_K_LMS, 50 | "dpm_fast": generation.SAMPLER_DPM_FAST, 51 | "dpm_adaptive": generation.SAMPLER_DPM_ADAPTIVE, 52 | "dpmspp_1": generation.SAMPLER_DPMSOLVERPP_1ORDER, 53 | "dpmspp_2": generation.SAMPLER_DPMSOLVERPP_2ORDER, 54 | "dpmspp_3": generation.SAMPLER_DPMSOLVERPP_3ORDER, 55 | "dpmspp_2s_ancestral": generation.SAMPLER_DPMSOLVERPP_2S_ANCESTRAL, 56 | "dpmspp_sde": generation.SAMPLER_DPMSOLVERPP_SDE, 57 | "dpmspp_2m": generation.SAMPLER_DPMSOLVERPP_2M, 58 | } 59 | 60 | NOISE_TYPES: Dict[str, int] = { 61 | "normal": generation.SAMPLER_NOISE_NORMAL, 62 | "brownian": generation.SAMPLER_NOISE_BROWNIAN, 63 | } 64 | 65 | 66 | def get_sampler_from_str(s: str) -> generation.DiffusionSampler: 67 | """ 68 | Convert a string to a DiffusionSampler enum. 69 | 70 | :param s: The string to convert. 71 | :return: The DiffusionSampler enum. 72 | """ 73 | algorithm_key = s.lower().strip() 74 | algorithm = SAMPLERS.get(algorithm_key, None) 75 | if algorithm is None: 76 | raise ValueError(f"unknown sampler {s}") 77 | 78 | return algorithm 79 | 80 | 81 | def get_noise_type_from_str(s: str) -> generation.SamplerNoiseType: 82 | noise_key = s.lower().strip() 83 | noise_type = NOISE_TYPES.get(noise_key, None) 84 | 85 | if noise_type is None: 86 | raise ValueError(f"unknown noise type {s}") 87 | 88 | return noise_type 89 | 90 | def image_to_prompt(im, init: bool = False, mask: bool = False) -> generation.Prompt: 91 | if init and mask: 92 | raise ValueError("init and mask cannot both be True") 93 | buf = io.BytesIO(im) 94 | #buf = io.BytesIO() 95 | #im.save(buf, format="PNG") 96 | #buf.seek(0) 97 | if mask: 98 | return generation.Prompt( 99 | artifact=generation.Artifact( 100 | type=generation.ARTIFACT_MASK, binary=buf.getvalue() 101 | ) 102 | ) 103 | return generation.Prompt( 104 | artifact=generation.Artifact( 105 | type=generation.ARTIFACT_IMAGE, binary=buf.getvalue() 106 | ), 107 | 
parameters=generation.PromptParameters(init=init), 108 | ) 109 | 110 | 111 | def process_artifacts_from_answers( 112 | prefix: str, 113 | answers: Union[ 114 | Generator[generation.Answer, None, None], Sequence[generation.Answer] 115 | ], 116 | write: bool = True, 117 | verbose: bool = False, 118 | ) -> Generator[Tuple[str, generation.Artifact], None, None]: 119 | """ 120 | Process the Artifacts from the Answers. 121 | 122 | :param prefix: The prefix for the artifact filenames. 123 | :param answers: The Answers to process. 124 | :param write: Whether to write the artifacts to disk. 125 | :param verbose: Whether to print the artifact filenames. 126 | :return: A Generator of tuples of artifact filenames and Artifacts, intended 127 | for passthrough. 128 | """ 129 | idx = 0 130 | for resp in answers: 131 | for artifact in resp.artifacts: 132 | artifact_p = f"{prefix}-{resp.request_id}-{resp.answer_id}-{idx}" 133 | if artifact.type == generation.ARTIFACT_IMAGE: 134 | ext = mimetypes.guess_extension(artifact.mime) 135 | contents = artifact.binary 136 | elif artifact.type == generation.ARTIFACT_CLASSIFICATIONS: 137 | ext = ".pb.json" 138 | contents = MessageToJson(artifact.classifier).encode("utf-8") 139 | elif artifact.type == generation.ARTIFACT_TEXT: 140 | ext = ".pb.json" 141 | contents = MessageToJson(artifact).encode("utf-8") 142 | else: 143 | ext = ".pb" 144 | contents = artifact.SerializeToString() 145 | out_p = f"{artifact_p}{ext}" 146 | if write: 147 | with open(out_p, "wb") as f: 148 | f.write(bytes(contents)) 149 | if verbose: 150 | artifact_t = generation.ArtifactType.Name(artifact.type) 151 | logger.info(f"wrote {artifact_t} to {out_p}") 152 | if artifact.finish_reason == generation.FILTER: 153 | logger.info(f"{artifact_t} flagged as NSFW") 154 | 155 | yield (out_p, artifact) 156 | idx += 1 157 | 158 | 159 | class StabilityInference: 160 | def __init__( 161 | self, 162 | host: str = "grpc.stability.ai:443", 163 | key: str = "", 164 | engine: str =
"stable-diffusion-v1-5", 165 | verbose: bool = False, 166 | wait_for_ready: bool = True, 167 | use_grpc_web: bool = False, 168 | ): 169 | """ 170 | Initialize the client. 171 | 172 | :param host: Host to connect to. 173 | :param key: Key to use for authentication. 174 | :param engine: Engine to use. 175 | :param verbose: Whether to print debug messages. 176 | :param wait_for_ready: Whether to wait for the server to be ready, or 177 | to fail immediately. 178 | """ 179 | self.verbose = verbose 180 | self.engine = engine 181 | 182 | self.grpc_args = {"wait_for_ready": wait_for_ready} 183 | 184 | # disable send / receive limit 185 | grpc_channel_options = [("grpc.max_send_message_length", -1), ("grpc.max_receive_message_length", -1)] 186 | 187 | if verbose: 188 | logger.info(f"Opening channel to {host}") 189 | 190 | if use_grpc_web: 191 | channel = sonora_client.insecure_web_channel(host) 192 | channel._session.headers.update({"authorization": "Bearer {0}".format(key)}) 193 | 194 | else: 195 | call_credentials = [] 196 | 197 | if key: 198 | call_credentials.append(grpc.access_token_call_credentials(f"{key}")) 199 | 200 | if host.endswith("443"): 201 | channel_credentials = grpc.ssl_channel_credentials() 202 | else: 203 | print("Key provided but channel is not HTTPS - assuming a local network") 204 | channel_credentials = grpc.local_channel_credentials() 205 | 206 | channel = grpc.secure_channel( 207 | host, 208 | grpc.composite_channel_credentials(channel_credentials, *call_credentials), 209 | options=grpc_channel_options, 210 | ) 211 | else: 212 | channel = grpc.insecure_channel(host, options=grpc_channel_options) 213 | 214 | if verbose: 215 | logger.info(f"Channel opened to {host}") 216 | self.stub = generation_grpc.GenerationServiceStub(channel) 217 | self.engines_stub = engines_grpc.EnginesServiceStub(channel) 218 | return 219 | 220 | def list_engines(self): 221 | rq = engines.ListEnginesRequest() 222 | _engines = self.engines_stub.ListEngines(rq, 
**self.grpc_args) 223 | engines_list = [] 224 | for eng in _engines.engine: 225 | _engine = { "id": str(eng.id), 226 | "name": str(eng.name), 227 | "description": str(eng.description), 228 | "ready": bool(eng.ready), 229 | } 230 | engines_list.append(_engine) 231 | return engines_list 232 | 233 | def generate( 234 | self, 235 | prompt: Union[str, List[str], generation.Prompt, List[generation.Prompt]], 236 | negative_prompt: Optional[str] = None, 237 | init_image = None, 238 | mask_image = None, 239 | height: int = 512, 240 | width: int = 512, 241 | start_schedule: float = 1.0, 242 | end_schedule: float = 0.01, 243 | cfg_scale: float = 7.0, 244 | eta: float = 0.0, 245 | churn: Optional[float] = None, 246 | churn_tmin: Optional[float] = None, 247 | churn_tmax: Optional[float] = None, 248 | sigma_min: Optional[float] = None, 249 | sigma_max: Optional[float] = None, 250 | karras_rho: Optional[float] = None, 251 | noise_type: Optional[int] = None, 252 | sampler: generation.DiffusionSampler = generation.SAMPLER_K_LMS, 253 | steps: int = 50, 254 | seed: Union[Sequence[int], int] = 0, 255 | samples: int = 1, 256 | safety: bool = True, 257 | classifiers: Optional[generation.ClassifierParameters] = None, 258 | guidance_preset: generation.GuidancePreset = generation.GUIDANCE_PRESET_NONE, 259 | guidance_cuts: int = 0, 260 | guidance_strength: Optional[float] = None, 261 | guidance_prompt: Optional[Union[str, generation.Prompt]] = None, 262 | guidance_models: Optional[List[str]] = None, 263 | hires_fix: bool | None = None, 264 | hires_oos_fraction: float | None = None, 265 | tiling: bool = False, 266 | ) -> Generator[generation.Answer, None, None]: 267 | """ 268 | Generate images from a prompt. 269 | 270 | :param prompt: Prompt to generate images from. 271 | :param init_image: Init image. 272 | :param mask_image: Mask image. 273 | :param height: Height of the generated images. 274 | :param width: Width of the generated images. 275 | :param start_schedule: Start schedule for init image.
276 | :param end_schedule: End schedule for init image. 277 | :param cfg_scale: Classifier-free guidance scale; how strongly the prompt is followed. 278 | :param sampler: Sampler to use. 279 | :param steps: Number of steps to take. 280 | :param seed: Seed for the random number generator; 0 (the default) selects a random seed. 281 | :param samples: Number of samples to generate. 282 | :param safety: DEPRECATED/UNUSED - Cannot be disabled. 283 | :param classifiers: DEPRECATED/UNUSED - Has no effect on image generation. 284 | :param guidance_preset: Guidance preset to use. See generation.GuidancePreset for supported values. 285 | :param guidance_cuts: Number of cuts to use for guidance. 286 | :param guidance_strength: Strength of the guidance. We recommend values in range [0.0,1.0]. A good default is 0.25. 287 | :param guidance_prompt: Prompt to use for guidance, defaults to `prompt` argument (above) if not specified. 288 | :param guidance_models: Models to use for guidance. 289 | :return: Generator of Answer objects. 290 | """ 291 | if (prompt is None) and (init_image is None): 292 | raise ValueError("prompt and/or init_image must be provided") 293 | 294 | if (mask_image is not None) and (init_image is None): 295 | raise ValueError("If mask_image is provided, init_image must also be provided") 296 | 297 | if not seed: 298 | seed = [random.randrange(0, 4294967295)] 299 | elif isinstance(seed, int): 300 | seed = [seed] 301 | else: 302 | seed = list(seed) 303 | 304 | prompts: List[generation.Prompt] = [] 305 | if isinstance(prompt, (str, generation.Prompt)): 306 | prompt = [prompt] 307 | for p in prompt: 308 | if isinstance(p, str): 309 | p = generation.Prompt(text=p) 310 | elif not isinstance(p, generation.Prompt): 311 | raise TypeError("prompt must be a string or generation.Prompt object") 312 | prompts.append(p) 313 | 314 | if negative_prompt: 315 | prompts += [ 316 | generation.Prompt( 317 | text=negative_prompt, 318 | parameters=generation.PromptParameters(weight=-1), 319 | ) 320 | ] 321 | 322 | sampler_parameters =
dict(cfg_scale=cfg_scale) 323 | 324 | if eta: 325 | sampler_parameters["eta"] = eta 326 | if noise_type: 327 | sampler_parameters["noise_type"] = noise_type 328 | 329 | if churn: 330 | churn_parameters = dict(churn=churn) 331 | 332 | if churn_tmin: 333 | churn_parameters["churn_tmin"] = churn_tmin 334 | if churn_tmax: 335 | churn_parameters["churn_tmax"] = churn_tmax 336 | 337 | sampler_parameters["churn"] = generation.ChurnSettings(**churn_parameters) 338 | 339 | sigma_parameters = {} 340 | 341 | if sigma_min: 342 | sigma_parameters["sigma_min"] = sigma_min 343 | if sigma_max: 344 | sigma_parameters["sigma_max"] = sigma_max 345 | if karras_rho: 346 | sigma_parameters["karras_rho"] = karras_rho 347 | 348 | sampler_parameters["sigma"] = generation.SigmaParameters(**sigma_parameters) 349 | 350 | step_parameters = dict( 351 | scaled_step=0, sampler=generation.SamplerParameters(**sampler_parameters) 352 | ) 353 | 354 | # NB: Specifying schedule when there's no init image causes washed out results 355 | if init_image is not None: 356 | step_parameters["schedule"] = generation.ScheduleParameters( 357 | start=start_schedule, 358 | end=end_schedule, 359 | ) 360 | prompts += [image_to_prompt(init_image, init=True)] 361 | 362 | if mask_image is not None: 363 | prompts += [image_to_prompt(mask_image, mask=True)] 364 | 365 | if guidance_prompt: 366 | if isinstance(guidance_prompt, str): 367 | guidance_prompt = generation.Prompt(text=guidance_prompt) 368 | elif not isinstance(guidance_prompt, generation.Prompt): 369 | raise ValueError("guidance_prompt must be a string or Prompt object") 370 | if guidance_strength == 0.0: 371 | guidance_strength = None 372 | 373 | # Build our CLIP parameters 374 | if guidance_preset is not generation.GUIDANCE_PRESET_NONE: 375 | # to do: make it so user can override this 376 | # step_parameters['sampler']=None 377 | 378 | if guidance_models: 379 | guiders = [generation.Model(alias=model) for model in guidance_models] 380 | else: 381 | guiders = 
None 382 | 383 | if guidance_cuts: 384 | cutouts = generation.CutoutParameters(count=guidance_cuts) 385 | else: 386 | cutouts = None 387 | 388 | step_parameters["guidance"] = generation.GuidanceParameters( 389 | guidance_preset=guidance_preset, 390 | instances=[ 391 | generation.GuidanceInstanceParameters( 392 | guidance_strength=guidance_strength, 393 | models=guiders, 394 | cutouts=cutouts, 395 | prompt=guidance_prompt, 396 | ) 397 | ], 398 | ) 399 | 400 | if hires_fix is None and hires_oos_fraction is not None: 401 | hires_fix = True 402 | 403 | hires = None 404 | 405 | if hires_fix is not None: 406 | hires_params: dict[str, bool | float] = dict(enable=hires_fix) 407 | if hires_oos_fraction is not None: 408 | hires_params["oos_fraction"] = hires_oos_fraction 409 | 410 | hires = generation.HiresFixParameters(**hires_params) 411 | 412 | image_parameters = generation.ImageParameters( 413 | transform=generation.TransformType(diffusion=sampler), 414 | height=height, 415 | width=width, 416 | seed=seed, 417 | steps=steps, 418 | samples=samples, 419 | parameters=[generation.StepParameter(**step_parameters)], 420 | hires=hires, 421 | tiling=tiling, 422 | ) 423 | 424 | return self.emit_request(prompt=prompts, image_parameters=image_parameters) 425 | 426 | # The motivation here is to facilitate constructing requests by passing protobuf objects directly. 
427 | def emit_request( 428 | self, 429 | prompt: List[generation.Prompt], 430 | image_parameters: generation.ImageParameters, 431 | engine_id: Optional[str] = None, 432 | request_id: Optional[str] = None, 433 | ): 434 | if not request_id: 435 | request_id = str(uuid.uuid4()) 436 | if not engine_id: 437 | engine_id = self.engine 438 | 439 | rq = generation.Request( 440 | engine_id=engine_id, 441 | request_id=request_id, 442 | prompt=prompt, 443 | image=image_parameters, 444 | ) 445 | 446 | if self.verbose: 447 | logger.info("Sending request.") 448 | 449 | start = time.time() 450 | answers = self.stub.Generate(rq, **self.grpc_args) 451 | 452 | """ 453 | def cancel_request(unused_signum, unused_frame): 454 | print("Cancelling") 455 | answers.cancel() 456 | sys.exit(0) 457 | 458 | signal.signal(signal.SIGINT, cancel_request) 459 | """ 460 | 461 | for answer in answers: 462 | duration = time.time() - start 463 | if self.verbose: 464 | if len(answer.artifacts) > 0: 465 | artifact_ts = [ 466 | generation.ArtifactType.Name(artifact.type) 467 | for artifact in answer.artifacts 468 | ] 469 | logger.info( 470 | f"Got {answer.answer_id} with {artifact_ts} in " 471 | f"{duration:0.2f}s" 472 | ) 473 | else: 474 | logger.info( 475 | f"Got keepalive {answer.answer_id} in " f"{duration:0.2f}s" 476 | ) 477 | 478 | yield answer 479 | start = time.time() -------------------------------------------------------------------------------- /outputs/.gitignore: -------------------------------------------------------------------------------- 1 | # Ignore everything in this directory 2 | * 3 | # Except this file 4 | !.gitignore 5 | !README.md -------------------------------------------------------------------------------- /outputs/README.md: -------------------------------------------------------------------------------- 1 | - This folder is used as the root path for all sample and argument output files -------------------------------------------------------------------------------- /temp/.gitignore:
-------------------------------------------------------------------------------- 1 | # Ignore everything in this directory 2 | * 3 | # Except this file 4 | !.gitignore 5 | !README.md -------------------------------------------------------------------------------- /temp/README.md: -------------------------------------------------------------------------------- 1 | - This folder is used for misc temp files --------------------------------------------------------------------------------
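A side note for readers of `StabilityInference.generate` in `modules/sdgrpcserver_client.py`: the `seed` argument accepts an int, a sequence of ints, or a falsy value, and is normalized to a list before being placed into `ImageParameters`. That branch is easy to miss, so here is a minimal standalone sketch of the same logic (the helper name `normalize_seed` is illustrative only, not part of the client):

```python
import random
from typing import List, Optional, Sequence, Union

def normalize_seed(seed: Optional[Union[Sequence[int], int]]) -> List[int]:
    # Mirrors the seed handling in StabilityInference.generate: a falsy seed
    # (0, None, empty sequence) is replaced by one random 32-bit seed, a bare
    # int becomes a single-element list, and any other sequence is copied
    # into a plain list.
    if not seed:
        return [random.randrange(0, 4294967295)]
    if isinstance(seed, int):
        return [seed]
    return list(seed)
```

Note the consequence: passing `seed=0` requests a random seed rather than a literal seed of zero, which is worth keeping in mind when trying to reproduce a previous generation.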