├── .github └── FUNDING.yml ├── .gitignore ├── LICENSE ├── README.md ├── dark.html ├── demo.png ├── demo_output.png ├── gpt3.py ├── gpt3_client.py └── requirements.txt /.github/FUNDING.yml: -------------------------------------------------------------------------------- 1 | # These are supported funding model platforms 2 | 3 | github: minimaxir # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2] 4 | patreon: minimaxir # Replace with a single Patreon username 5 | open_collective: # Replace with a single Open Collective username 6 | ko_fi: # Replace with a single Ko-fi username 7 | tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel 8 | community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry 9 | liberapay: # Replace with a single Liberapay username 10 | issuehunt: # Replace with a single IssueHunt username 11 | otechie: # Replace with a single Otechie username 12 | custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2'] 13 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | img_output/ 2 | txt_output/ 3 | chromedriver 4 | .vscode 5 | .env 6 | *.pyc 7 | test* 8 | img.png 9 | text.txt 10 | prompt.txt -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 Max Woolf 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | 
furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # gpt-3-client 2 | 3 | [![](demo.png)](https://www.youtube.com/watch?v=kvgWeaRCuls) 4 | 5 | A client for OpenAI's GPT-3 API, intended to be used for more _ad hoc_ testing of prompts without using the web interface. This approach allows more customization of the resulting output, ideal for [livestreaming](https://www.youtube.com/watch?v=kvgWeaRCuls) organic, non-cherry-picked outputs. 6 | 7 | In addition to generating text via the API, the client: 8 | 9 | - Streams text generation as soon as it's generated (via [httpx](https://github.com/encode/httpx)) 10 | - Prints the generated text to console, with a **bolded** prompt and coloring the text by prediction quality (via [rich](https://github.com/willmcgugan/rich)) 11 | - Automatically saves each generated text to a file 12 | - Automatically creates a sharable, social-media-optimized image (using [imgmaker](https://github.com/minimaxir/imgmaker)). 13 | 14 | NB: This project is intended more for personal use, and as a result may not be as actively supported as my other projects intended for productionization. 
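The probability coloring mentioned above works by blending the accent color into the terminal's background color in proportion to each generated token's probability (the exponential of the log-probability returned by the API). A simplified, standalone sketch of the blending logic used in `gpt3_client.py` (the `token_color` helper name here is illustrative):

```python
from math import exp

def token_color(log_prob: float, bg=(31, 36, 40), accent=(0, 64, 0)) -> str:
    """Blend `accent` into `bg` in proportion to the token's probability."""
    prob = exp(log_prob)  # log-probability -> probability in [0, 1]
    r, g, b = (min(int(bg[i] + accent[i] * prob), 255) for i in range(3))
    return f"rgb({r},{g},{b})"

print(token_color(0.0))   # near-certain token: rgb(31,100,40), a strong green tint
print(token_color(-5.0))  # unlikely token: rgb(31,36,40), the plain background
```

Likely tokens therefore glow with the accent color, while surprising ones stay close to the plain terminal background.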
15 | 16 | ## Install 17 | 18 | First, install the Python requirements: 19 | 20 | ```sh 21 | pip3 install httpx rich imgmaker 22 | ``` 23 | 24 | Then download this repository and `cd` into the downloaded folder. 25 | 26 | Optionally, if you want to generate images of the generated text, you'll also need to have the latest version of Chrome installed and download the corresponding chromedriver (see the [imgmaker repo](https://github.com/minimaxir/imgmaker) for more information): 27 | 28 | ```sh 29 | imgmaker chromedriver 30 | ``` 31 | 32 | You will also need to export your OpenAI API key to the `OPENAI_API_SECRET_KEY` environment variable (this is more secure than putting it in a plaintext file). In bash/zsh, you can do this by: 33 | 34 | ```sh 35 | export OPENAI_API_SECRET_KEY= 36 | ``` 37 | 38 | ## Usage 39 | 40 | This GPT-3 client is intended to be interacted with _entirely_ via the command line. (Due to how rich works, it will not work well in a notebook.) 41 | 42 | You can invoke the client via: 43 | 44 | ```sh 45 | python3 gpt3.py 46 | ``` 47 | 48 | The generation output will be stored in a text file in the `txt_output` folder, corresponding to a hash of the prompt and the temperature (e.g. `e286222c__0_7.txt`). 49 | 50 | Passing `--image` will render the output into a social-media sharable image. 51 | 52 | ```sh 53 | python3 gpt3.py --image --max_tokens 256 --include_coloring True 54 | ``` 55 | 56 | ![](demo_output.png) 57 | 58 | By default, it will generate from the prompt "Once upon a time". 
To specify a custom prompt, you can do: 59 | 60 | ```sh 61 | python3 gpt3.py --prompt "I am a pony" 62 | ``` 63 | 64 | For longer prompts, you can put the prompt in a `prompt.txt` file instead: 65 | 66 | ```sh 67 | python3 gpt3.py --prompt "prompt.txt" --max_tokens 256 68 | ``` 69 | 70 | ## CLI Arguments 71 | 72 | - `image`: Whether to render an image of the generated text (requires imgmaker). _[Default: False]_ 73 | - `prompt`: Prompt for GPT-3; either text or a path to a file. _[Default: "Once upon a time"]_ 74 | - `temperature`: Generation creativity; the higher, the crazier. _[Default: 0.7]_ 75 | - `max_tokens`: Maximum number of tokens generated. _[Default: 32]_ 76 | - `stop`: Token to use to stop generation. _[Default: ""]_ 77 | - `bg`: RGB tuple representing the base color for coloration; you should set it to the background color of your terminal. _[Default: (31, 36, 40)]_ 78 | - `accent`: Accent color to blend with the `bg` to indicate prediction strength. _[Default: (0, 64, 0)]_ 79 | - `interactive`: Enters an "interactive" mode where you can provide prompts repeatedly. _[Default: False]_ 80 | - `pngquant`: If using `image`, uses pngquant to massively compress it. Requires pngquant installed on your system. _[Default: False]_ 81 | - `output_txt`: Specify the output text file, overriding the default behavior if not None. _[Default: None]_ 82 | - `output_img`: Specify the output image file, overriding the default behavior if not None. _[Default: None]_ 83 | - `include_prompt`: Include the prompt in the generation output. _[Default: True]_ 84 | - `include_coloring`: Use probability coloring. _[Default: True]_ 85 | - `watermark`: Specify watermark text on the generated image. Supports Markdown formatting. 
_[Default: "Generated using GPT-3 via OpenAI's API"]_ 86 | 87 | ## Maintainer/Creator 88 | 89 | Max Woolf ([@minimaxir](https://minimaxir.com)) 90 | 91 | _Max's open-source projects are supported by his [Patreon](https://www.patreon.com/minimaxir) and [GitHub Sponsors](https://github.com/sponsors/minimaxir). If you found this project helpful, any monetary contributions to the Patreon are appreciated and will be put to good creative use._ 92 | 93 | ## License 94 | 95 | MIT 96 | 97 | ## Disclaimer 98 | 99 | This repo has no affiliation with OpenAI. 100 | -------------------------------------------------------------------------------- /dark.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 10 | 14 | 15 | 16 | 52 | 53 | 54 |
55 |
56 | {{ html | safe }} 57 |
58 |
59 | 60 | 65 | 66 | 67 | 75 | 76 | -------------------------------------------------------------------------------- /demo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/minimaxir/gpt-3-client/50ae7e2b56a9d84bd459862a49b17106ab37dfcb/demo.png -------------------------------------------------------------------------------- /demo_output.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/minimaxir/gpt-3-client/50ae7e2b56a9d84bd459862a49b17106ab37dfcb/demo_output.png -------------------------------------------------------------------------------- /gpt3.py: -------------------------------------------------------------------------------- 1 | from gpt3_client import GPT3Client 2 | import fire 3 | from rich.prompt import Prompt, Confirm 4 | from rich import print 5 | from rich.text import Text 6 | from json import JSONDecodeError 7 | import os 8 | 9 | 10 | def gpt3_app( 11 | image=False, 12 | prompt="Once upon a time", 13 | temperature: float = 0.7, 14 | max_tokens=32, 15 | stop: str = "", 16 | bg: tuple = (31, 36, 40), 17 | accent: tuple = (0, 64, 0), 18 | interactive=False, 19 | pngquant=False, 20 | output_txt=None, 21 | output_img=None, 22 | include_prompt=True, 23 | include_coloring=True, 24 | watermark="Generated using GPT-3 via OpenAI's API", 25 | ): 26 | 27 | if interactive: 28 | prompt = Prompt.ask("[i]Enter a prompt for the GPT-3 API[/i]") 29 | 30 | if os.path.exists(prompt): 31 | with open(prompt, "r", encoding="utf-8") as f: 32 | prompt = f.read() 33 | 34 | divider_color_str = "white" 35 | divider = Text("-" * 10 + "\n", style=divider_color_str) 36 | gpt3 = GPT3Client(image=image) 37 | 38 | try: 39 | gpt3.generate( 40 | prompt=prompt, 41 | temperature=temperature, 42 | max_tokens=max_tokens, 43 | stop=stop, 44 | bg=bg, 45 | accent=accent, 46 | pngquant=pngquant, 47 | output_txt=output_txt, 48 | output_img=output_img, 49 | 
include_prompt=include_prompt, 50 | include_coloring=include_coloring, 51 | watermark=watermark, 52 | ) 53 | 54 | print(divider) 55 | 56 | if interactive: 57 | continue_gen = True 58 | while continue_gen: 59 | continue_gen = Confirm.ask( 60 | "[i]Do you wish to continue generating from the same prompt?[/i]" 61 | ) 62 | 63 | if continue_gen: 64 | 65 | gpt3.generate( 66 | prompt=prompt, 67 | temperature=temperature, 68 | max_tokens=max_tokens, 69 | stop=stop, 70 | bg=bg, 71 | accent=accent, 72 | pngquant=pngquant, 73 | output_txt=output_txt, 74 | output_img=output_img, 75 | include_prompt=include_prompt, 76 | include_coloring=include_coloring, 77 | watermark=watermark, 78 | ) 79 | 80 | print(divider) 81 | 82 | except KeyboardInterrupt: 83 | print("\n\n[red italic]Generation interrupted![/]") 84 | except JSONDecodeError: 85 | print("\n\n[red italic]The JSON from the API was misparsed![/]") 86 | 87 | try: 88 | gpt3.close() 89 | except Exception: 90 | pass 91 | 92 | 93 | if __name__ == "__main__": 94 | fire.Fire(gpt3_app) 95 | -------------------------------------------------------------------------------- /gpt3_client.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import httpx 4 | from imgmaker import imgmaker 5 | import logging 6 | from math import exp 7 | from rich.console import Console 8 | from rich.text import Text 9 | import hashlib 10 | import codecs 11 | import re 12 | from datetime import datetime 13 | 14 | logger = logging.getLogger("GPT3Client") 15 | logger.setLevel(logging.INFO) 16 | 17 | 18 | class GPT3Client: 19 | def __init__(self, image: bool = True): 20 | 21 | assert os.getenv( 22 | "OPENAI_API_SECRET_KEY" 23 | ), "The OPENAI_API_SECRET_KEY Environment variable has not been set." 
24 | 25 | self.headers = { 26 | "Content-Type": "application/json", 27 | "Authorization": f"Bearer {os.getenv('OPENAI_API_SECRET_KEY')}", 28 | } 29 | 30 | self.imgmaker = None 31 | if image: 32 | try: 33 | self.imgmaker = imgmaker() 34 | except Exception: 35 | logger.warning( 36 | "imgmaker failed to load Chrome: you will not be able to generate images." 37 | ) 38 | 39 | def close(self): 40 | if self.imgmaker: 41 | self.imgmaker.close() 42 | 43 | def generate( 44 | self, 45 | prompt: str = "", 46 | temperature: float = 0.7, 47 | max_tokens: int = 32, 48 | stop: str = "", 49 | model: str = "davinci", 50 | bg: tuple = (31, 36, 40), 51 | accent: tuple = (0, 64, 0), 52 | pngquant: bool = False, 53 | output_txt: str = None, 54 | output_img: str = None, 55 | include_prompt: bool = True, 56 | include_coloring: bool = True, 57 | watermark: str = "Generated using GPT-3 via OpenAI's API", 58 | ): 59 | 60 | assert isinstance(stop, str), "stop is not a str." 61 | 62 | data = { 63 | "prompt": prompt, 64 | "max_tokens": max_tokens, 65 | "temperature": temperature, 66 | "stop": stop, 67 | "stream": True, 68 | "logprobs": 1, 69 | } 70 | 71 | console = Console(record=True) 72 | console.clear() 73 | 74 | if include_prompt: 75 | prompt_text = Text(prompt, style="bold", end="") 76 | console.print(prompt_text, end="") 77 | 78 | with httpx.stream( 79 | "POST", 80 | f"https://api.openai.com/v1/engines/{model}/completions", 81 | headers=self.headers, 82 | json=data, 83 | timeout=None, 84 | ) as r: 85 | for chunk in r.iter_text(): 86 | text = chunk[6:] # JSON chunks are prepended with "data: " 87 | if len(text) < 10 and "[DONE]" in text: 88 | break 89 | 90 | temp_token = None 91 | logprobs = json.loads(text)["choices"][0]["logprobs"] 92 | tokens = logprobs["tokens"] 93 | token_logprobs = logprobs["token_logprobs"] 94 | for i in range(len(tokens)): 95 | token = tokens[i] 96 | log_prob = token_logprobs[i] 97 | 98 | if token == stop or token == "<|endoftext|>": 99 | break 100 | 
101 | if token.startswith("bytes:") and not temp_token: 102 | # We need to hold the 2-byte token to the next 1-byte token 103 | # to get the full bytestring to decode 104 | # 105 | # The API-returned tokens are in the form: 106 | # "bytes:\xe2\x80" and "bytes:\x9d" 107 | temp_token = token[6:] 108 | temp_prob = log_prob 109 | else: 110 | if temp_token: 111 | bytestring = temp_token + token[6:] 112 | 113 | # https://stackoverflow.com/a/37059682/9314418 114 | token = codecs.escape_decode(bytestring, "utf-8")[0].decode( 115 | "utf-8" 116 | ) 117 | temp_token = None 118 | log_prob = temp_prob # the true prob is the first one 119 | text = Text( 120 | token, 121 | style=f"on {self.derive_token_bg(log_prob, bg, accent, include_coloring)}", 122 | end="", 123 | ) 124 | console.print(text, end="") 125 | 126 | # Export the generated text as HTML. 127 | raw_html = self.replace_hex_colors( 128 | console.export_html(inline_styles=True, code_format="{code}", clear=False) 129 | ) 130 | 131 | # Render the HTML as an image 132 | prompt_hash = hashlib.sha256(bytes(prompt, "utf-8")).hexdigest()[0:8] 133 | temp_string = str(temperature).replace(".", "_") 134 | timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") 135 | 136 | if output_img: 137 | img_file_name = output_img 138 | else: 139 | if not os.path.exists("img_output"): 140 | os.makedirs("img_output") 141 | img_file_name = f"img_output/{timestamp}__{prompt_hash}__{temp_string}.png" 142 | 143 | if self.imgmaker: 144 | self.imgmaker.generate( 145 | "dark.html", 146 | { 147 | "html": raw_html.replace("\n", "<br>"), 148 | "accent": f"rgb({accent[0]},{accent[1]},{accent[2]})", 149 | "watermark": watermark, 150 | }, 151 | width=450, 152 | height=600, 153 | downsample=False, 154 | output_file=img_file_name, 155 | use_pngquant=pngquant, 156 | ) 157 | 158 | # Save the generated text to a plain-text file 159 | if output_txt: 160 | txt_file_name = output_txt 161 | else: 162 | if not os.path.exists("txt_output"): 163 | os.makedirs("txt_output") 164 | txt_file_name = f"txt_output/{prompt_hash}__{temp_string}.txt" 165 | 166 | with open(txt_file_name, "a", encoding="utf-8") as f: 167 | f.write(console.export_text() + "\n" + "=" * 20 + "\n") 168 | 169 | console.line() 170 | 171 | def derive_token_bg( 172 | self, log_prob: float, bg: tuple, accent: tuple, include_coloring: bool 173 | ): 174 | prob = exp(log_prob) 175 | 176 | if include_coloring: 177 | color = ( 178 | int(max(bg[0] + accent[0] * prob, bg[0])), 179 | int(max(bg[1] + accent[1] * prob, bg[1])), 180 | int(max(bg[2] + accent[2] * prob, bg[2])), 181 | ) 182 | else: 183 | color = bg 184 | 185 | return f"rgb({min(color[0], 255)},{min(color[1], 255)},{min(color[2], 255)})" 186 | 187 | def replace_hex_colors(self, html: str): 188 | """ 189 | Headless Chrome requires inline styles to be 190 | in rgb instead of hex format. 
191 | """ 192 | 193 | hex_colors = set(re.findall(r"(#.{6})\"", html)) 194 | 195 | for hex_color in hex_colors: 196 | # https://stackoverflow.com/questions/29643352/converting-hex-to-rgb-value-in-python/29643643 197 | 198 | h = hex_color.lstrip("#") 199 | rgb = tuple(int(h[i : i + 2], 16) for i in (0, 2, 4)) 200 | 201 | rgb_str = f"rgb({rgb[0]},{rgb[1]},{rgb[2]})" 202 | 203 | html = re.sub(hex_color, rgb_str, html) 204 | 205 | return html 206 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | httpx 2 | rich>=4.2.0 3 | imgmaker>=0.2.0 --------------------------------------------------------------------------------