├── gpt_utils ├── __init__.py ├── prompt.txt ├── poe_utils.py ├── vercel.py ├── gpt4all.py └── gpt4free.py ├── helper_utils ├── __init__.py ├── utils.py └── image_generation_utils.py ├── assets └── demo.png ├── cogs ├── __init__.py ├── commands_cogs │ ├── __init__.py │ ├── sync.py │ ├── misc.py │ ├── pollinations.py │ └── horde.py └── event_cogs │ ├── __init__.py │ ├── on_ready.py │ └── on_message.py ├── Dockerfile ├── example.env ├── .github └── ISSUE_TEMPLATE │ ├── question.md │ ├── feature_request.md │ └── bug_report.md ├── .gitignore ├── example_config.json ├── requirements.txt ├── CONTRIBUTING.md ├── main.py ├── LICENSE ├── run.sh ├── run.bat ├── run.py └── README.md /gpt_utils/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /helper_utils/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /assets/demo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mono300genuine/discord-chatbot/HEAD/assets/demo.png -------------------------------------------------------------------------------- /cogs/__init__.py: -------------------------------------------------------------------------------- 1 | from .commands_cogs import COMMANDS 2 | from .event_cogs import EVENT_HANDLERS 3 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:bullseye 2 | 3 | WORKDIR /app 4 | 5 | COPY . . 
6 | 7 | CMD ["sh", "run.sh"] 8 | -------------------------------------------------------------------------------- /example.env: -------------------------------------------------------------------------------- 1 | BOT_TOKEN= 2 | API_KEY= 3 | PRODIA_KEY= 4 | CHATGPT_KEY= -------------------------------------------------------------------------------- /cogs/commands_cogs/__init__.py: -------------------------------------------------------------------------------- 1 | from pkgutil import iter_modules 2 | 3 | COMMANDS = [module.name for module in iter_modules(__path__, f"{__package__}.")] 4 | -------------------------------------------------------------------------------- /cogs/event_cogs/__init__.py: -------------------------------------------------------------------------------- 1 | from pkgutil import iter_modules 2 | 3 | EVENT_HANDLERS = [module.name for module in iter_modules(__path__, f"{__package__}.")] 4 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/question.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Question 3 | about: Ask a question 4 | title: "[QUESTION]: " 5 | labels: question 6 | assignees: '' 7 | 8 | --- 9 | 10 | 11 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | helper_utils/horde_module/ 2 | venv/ 3 | .idea/ 4 | cliRequests.log 5 | .env 6 | config.json 7 | .replit 8 | replit.nix 9 | poetry.lock 10 | pyproject.toml 11 | __pycache__ -------------------------------------------------------------------------------- /example_config.json: -------------------------------------------------------------------------------- 1 | { 2 | "chatbot": true, 3 | "model": "gpt4free", 4 | "loading_gif": "https://tenor.com/view/loading-gif-9212724", 5 | "sync": true, 6 | "image_model": "SDXL 1.0" 7 | } 
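As a sketch of how the keys in `example_config.json` are consumed at startup — mirroring the plain `json.load` call in `helper_utils/utils.py`; the `load_config` helper and its missing-key check are hypothetical additions for illustration, not part of the repo:

```python
import json

# Keys the bot reads from config.json (see example_config.json above).
EXPECTED_KEYS = {"chatbot", "model", "loading_gif", "sync", "image_model"}


def load_config(path: str = "config.json") -> dict:
    """Load the bot config and fail fast if a key is missing."""
    with open(path) as f:
        config = json.load(f)
    missing = EXPECTED_KEYS - config.keys()
    if missing:
        raise KeyError(f"config.json is missing keys: {sorted(missing)}")
    return config
```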
-------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | discord~=2.2.3 2 | requests~=2.28.2 3 | python-dotenv~=1.0.0 4 | pyyaml~=6.0 5 | Pillow~=9.5.0 6 | urllib3~=1.26.16 7 | aiohttp~=3.8.4 8 | httpx~=0.24.1 9 | inquirerpy~=0.3.4 10 | 11 | # Language models 12 | gpt4all~=0.2.1 # For GPT4All 13 | imaginepy # For upscaling 14 | g4f 15 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contribution Guidelines 2 | 3 | 1. Always use double quotes. 4 | 2. Always add two blank lines before a function declaration. 5 | 3. Use snake case. (this_is_a_snake_case_variable) 6 | 4. Try to keep your code clean. 7 | 5. Document your code with comments if there could be any confusion. 8 | 6. Do not use any GPL licensed code in the main repository. If you need to include it, add it to an extra repository. 9 | 7. Have fun hacking away at the project! 
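A minimal sketch of code following these guidelines (the names here are hypothetical, purely for illustration): double quotes, snake_case names, two blank lines before the function declaration, and a comment where the intent could otherwise be unclear.

```python
GREETING_TEMPLATE = "Hello, {user_name}!"


def build_greeting(user_name):
    # Format the greeting for a single user.
    return GREETING_TEMPLATE.format(user_name=user_name)
```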
10 |
--------------------------------------------------------------------------------
/cogs/commands_cogs/sync.py:
--------------------------------------------------------------------------------
1 | from discord.ext import commands
2 |
3 |
4 | class Sync(commands.Cog):
5 |     def __init__(self, bot):
6 |         self.bot = bot
7 |
8 |     @commands.hybrid_command()
9 |     async def sync(self, ctx):
10 |         await ctx.defer()
11 |         await self.bot.tree.sync()
12 |         await ctx.send("Successfully synced commands!")
13 |         print("Synced commands!")
14 |
15 |
16 | async def setup(bot):
17 |     await bot.add_cog(Sync(bot))
18 |
--------------------------------------------------------------------------------
/gpt_utils/prompt.txt:
--------------------------------------------------------------------------------
1 | You are Stable Assistant, a bot made to assist users with various tasks, such as writing. You can write essays, stories, etc.
2 | When the user requests an image, write <image>prompt</image>, where prompt is a prompt for the picture.
3 | Immediately draw the picture for the user; do not ask them for confirmation!
4 |
5 | **Example**
6 | User: Please draw an apple for me.
7 | You: Sure thing!
<image>An apple, 4k, ultra uhd, cinematic</image>
8 |
9 | **End of example**
10 |
--------------------------------------------------------------------------------
/gpt_utils/poe_utils.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 |
4 | logging.error("THIS MODULE IS BROKEN AND UNMAINTAINED")
5 |
6 | # from utils import poe_key
7 | # import poe
8 | #
9 | #
10 | # client = poe.Client(poe_key)
11 | #
12 | #
13 | # async def generate_message(prompt: str) -> str:
14 | #     chunk = None
15 | #     for chunk in client.send_message("StableAssistant", prompt):
16 | #         pass
17 | #
18 | #     return chunk["text"]
19 | #
20 | #
21 | # async def clear_context() -> None:
22 | #     client.send_chat_break("StableAssistant")
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature request
3 | about: Suggest an idea for this project
4 | title: ''
5 | labels: enhancement
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Is your feature request related to a problem? Please describe.**
11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
12 |
13 | **Describe the solution you'd like**
14 | A clear and concise description of what you want to happen.
15 |
16 | **Describe alternatives you've considered**
17 | A clear and concise description of any alternative solutions or features you've considered.
18 |
19 | **Additional context**
20 | Add any other context or screenshots about the feature request here.
21 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: bug 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the bug** 11 | A clear and concise description of what the bug is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 22 | 23 | **Screenshots** 24 | If applicable, add screenshots to help explain your problem. 25 | 26 | **Desktop (please complete the following information):** 27 | - OS: [e.g. Windows] 28 | - Version: [e.g. 10] 29 | - Python Version: 30 | 31 | **Additional context** 32 | Add any other context about the problem here. 
33 |
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | from typing import Any
2 |
3 | from helper_utils.utils import bot_token
4 |
5 | import discord
6 | from discord.ext import commands
7 | from cogs import COMMANDS, EVENT_HANDLERS
8 |
9 |
10 | class CogBOT(commands.Bot):
11 |     def __init__(self, *args: Any, **kwargs: Any) -> None:
12 |         super().__init__(*args, **kwargs)
13 |
14 |     async def setup_hook(self) -> None:
15 |         for cog in COMMANDS:
16 |             cog_name = cog.split('.')[-1]
17 |             await self.load_extension(f"{cog}")
18 |             discord.client._log.info(f"Loaded Command {cog_name}")
19 |         for cog in EVENT_HANDLERS:
20 |             cog_name = cog.split('.')[-1]
21 |             await self.load_extension(f"{cog}")
22 |             discord.client._log.info(f"Loaded Event Handler {cog_name}")
23 |
24 |
25 | intents = discord.Intents.default()
26 | intents.message_content = True
27 | bot = CogBOT(command_prefix="/", intents=intents)
28 |
29 | bot.run(bot_token, reconnect=True)
--------------------------------------------------------------------------------
/cogs/event_cogs/on_ready.py:
--------------------------------------------------------------------------------
1 | import discord
2 | from discord.ext import commands
3 | import logging
4 | from helper_utils.utils import config
5 |
6 |
7 | class OnReady(commands.Cog):
8 |     def __init__(self, bot):
9 |         self.bot = bot
10 |
11 |     @commands.Cog.listener()
12 |     async def on_ready(self):
13 |         if config["sync"]:
14 |             logging.info("Syncing bot commands, please be patient!")
15 |             await self.bot.tree.sync()
16 |         else:
17 |             logging.info("Not syncing bot commands")
18 |
19 |         await self.bot.change_presence(activity=discord.Game(
20 |             name="Type / to see commands"))
21 |         logging.debug(f"{self.bot.user.name} has connected to Discord!")
22 |         invite_link = discord.utils.oauth_url(
23 |             self.bot.user.id,
24 |             permissions=discord.Permissions(administrator=False),
25 |
scopes=("bot", "applications.commands")) 26 | logging.info(f"Invite link: {invite_link}") 27 | 28 | 29 | async def setup(bot): 30 | await bot.add_cog(OnReady(bot)) 31 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023-20xx mak448a 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /run.sh: -------------------------------------------------------------------------------- 1 | #! 
/bin/sh
2 | FILE=venv/bin/activate
3 |
4 | if [ -d "helper_utils/horde_module" ]
5 | then
6 |     :
7 | else
8 |     git clone https://github.com/mak448a/horde_module helper_utils/horde_module --depth=1
9 | fi
10 |
11 |
12 | if test -f "$FILE"
13 | then
14 |     # We're set up
15 |     . venv/bin/activate
16 |     python3 main.py
17 | else
18 |     # We're not set up. Get the tokens and API keys from the user
19 |     # and install dependencies.
20 |     cp "example_config.json" "config.json"
21 |     touch .env
22 |     python3 -m venv venv
23 |     . venv/bin/activate
24 |     pip install -r requirements.txt
25 |     echo -n "Enter your bot token: "
26 |     read TOKEN
27 |
28 |     echo -n "Enter your Prodia API Key: "
29 |     read PRODIA_KEY
30 |
31 |     echo -n "Enter your Stable Horde API Key: "
32 |     read API_KEY
33 |
34 |     echo -n "Enter your ChatGPT Key: "
35 |     read CHATGPT_KEY
36 |
37 |     # Exit if any required credential is empty
38 |     if [ -z "$PRODIA_KEY" ] || [ -z "$TOKEN" ] || [ -z "$API_KEY" ]
39 |     then
40 |         echo "ERROR! YOU DIDN'T ENTER ALL YOUR CREDENTIALS!"
41 |         exit 1
42 |     fi
43 |
44 |     # Overwrite existing file if it exists
45 |     echo "BOT_TOKEN=$TOKEN" > .env
46 |     echo "PRODIA_KEY=$PRODIA_KEY" >> .env
47 |     echo "API_KEY=$API_KEY" >> .env
48 |     echo "CHATGPT_KEY=$CHATGPT_KEY" >> .env
49 |     python3 main.py
50 | fi
51 |
--------------------------------------------------------------------------------
/run.bat:
--------------------------------------------------------------------------------
1 | @echo off
2 | set FILE=venv\Scripts\activate
3 |
4 | if exist helper_utils\horde_module (
5 |     goto setup
6 | ) else (
7 |     echo Downloading horde_module repository...
8 |     curl -LO https://github.com/mak448a/horde_module/archive/refs/heads/main.zip
9 |     echo Extracting repository...
10 |     powershell -command "Expand-Archive -Path .\main.zip -DestinationPath .\helper_utils\horde_module -Force"
11 |     del main.zip
12 | )
13 |
14 | :setup
15 | if exist %FILE% (
16 |     call venv\Scripts\activate
17 |     python main.py
18 |     goto :eof
19 | )
20 | copy example_config.json config.json
21 | type nul > .env
22 | python -m venv venv
23 | call venv\Scripts\activate
24 | pip install -r requirements.txt
25 | set /p TOKEN=Enter your bot token: 
26 | set /p PRODIA_KEY=Enter your Prodia API Key: 
27 | set /p API_KEY=Enter your Stable Horde API Key: 
28 | set /p CHATGPT_KEY=Enter your ChatGPT Key: 
29 | if [%TOKEN%]==[] goto fail
30 | if [%PRODIA_KEY%]==[] goto fail
31 | if [%API_KEY%]==[] goto fail
32 | echo BOT_TOKEN=%TOKEN% > .env
33 | echo PRODIA_KEY=%PRODIA_KEY% >> .env
34 | echo API_KEY=%API_KEY% >> .env
35 | echo CHATGPT_KEY=%CHATGPT_KEY% >> .env
36 | python main.py
37 | goto :eof
38 |
39 | :fail
40 | echo ERROR! YOU DIDN'T ENTER ALL YOUR CREDENTIALS!
41 | EXIT /b 1
--------------------------------------------------------------------------------
/gpt_utils/vercel.py:
--------------------------------------------------------------------------------
1 | # Unmaintained, since vercel-llm-api seems to be broken.
2 | 3 | 4 | # import vercel_ai 5 | # import curl_cffi.requests.errors 6 | 7 | 8 | # client = vercel_ai.Client() 9 | 10 | 11 | # with open("gpt_utils/prompt.txt") as f: 12 | # prompt = f.read() 13 | 14 | 15 | # async def generate_message(message: str) -> str: 16 | # global prompt 17 | 18 | # # Code from https://github.com/ading2210/vercel-llm-api/issues/12 19 | # retry, max_retries = 0, 15 20 | # prompt += f"{message}\n" 21 | 22 | # while retry < max_retries: 23 | # try: 24 | # result = "" 25 | # for chunk in client.generate("openai:gpt-3.5-turbo", prompt): 26 | # # print(chunk, end="", flush=True) 27 | # result += chunk 28 | 29 | # prompt += result + "\n" 30 | # # print(prompt) 31 | 32 | # return result 33 | # except curl_cffi.requests.errors.RequestsError: 34 | # retry += 1 35 | # # print(f"Retrying {retry}/{max_retries}...") 36 | # if retry == max_retries: 37 | # raise Exception("Could not connect. Max retries exceeded.") 38 | # continue 39 | 40 | 41 | # async def clear_context() -> None: 42 | # global prompt 43 | # with open("gpt_utils/prompt.txt") as file: 44 | # prompt = file.read() 45 | -------------------------------------------------------------------------------- /gpt_utils/gpt4all.py: -------------------------------------------------------------------------------- 1 | from gpt4all import GPT4All 2 | import copy 3 | 4 | llm = GPT4All("ggml-gpt4all-j-v1.3-groovy") 5 | 6 | with open("gpt_utils/prompt.txt") as f: 7 | prompt = f.read() 8 | 9 | remove_text = copy.copy(prompt) 10 | original_prompt = copy.copy(prompt) 11 | 12 | 13 | async def generate_message(user_input): 14 | global prompt, remove_text 15 | constructed = f"\nHuman: {user_input}\nStable Assistant: " 16 | prompt += constructed 17 | 18 | output_of_text = llm.generate(prompt) 19 | 20 | if "Human: " in output_of_text: 21 | print("Uh oh, generated extraneous text. 
Removing and sending cleaned output.") 22 | output_of_text = output_of_text.split("Human: ")[1] + "\n NOTE: This may be the model generating" \ 23 | " an output for the user instead of itself." 24 | 25 | # Solution taken from: https://groups.google.com/g/comp.lang.python/c/Rq40dmwLfMQ?pli=1 26 | if len(prompt) > 1000: 27 | print("Removing first two lines of context to avoid context explosion...") 28 | prompt = "\n".join(prompt.split('\n')[2:]) 29 | remove_text = "\n".join(remove_text.split('\n')[2:]) 30 | return output_of_text 31 | 32 | 33 | async def clear_context(): 34 | global prompt, remove_text 35 | prompt = copy.copy(original_prompt) 36 | remove_text = copy.copy(original_prompt) 37 | -------------------------------------------------------------------------------- /gpt_utils/gpt4free.py: -------------------------------------------------------------------------------- 1 | import g4f 2 | import aiohttp 3 | from helper_utils.utils import chatgpt_key 4 | 5 | 6 | 7 | with open("gpt_utils/prompt.txt") as f: 8 | prompt = f.read() 9 | 10 | messages = [{"role": "system", "content": prompt}] 11 | 12 | async def generate_message(message: str) -> str: 13 | global prompt 14 | 15 | messages.append({"role": "user", "content": message}) 16 | 17 | try: 18 | response = await g4f.ChatCompletion.create_async( 19 | model=g4f.models.gpt_35_turbo, 20 | provider=g4f.Provider.OpenaiChat, 21 | messages=messages, 22 | access_token=chatgpt_key, 23 | stream=False 24 | ) 25 | messages.append({"role": "assistant", "content": response}) 26 | except aiohttp.client_exceptions.ClientResponseError: 27 | response = await g4f.ChatCompletion.create_async( 28 | model="meta-llama/Llama-2-70b-chat-hf", 29 | provider=g4f.Provider.HuggingChat, 30 | messages=messages, 31 | stream=False 32 | ) 33 | messages.append({"role": "assistant", "content": response}) 34 | print(response) 35 | return response 36 | 37 | 38 | async def clear_context() -> None: 39 | global prompt, messages 40 | with 
open("gpt_utils/prompt.txt") as file:
41 |         prompt = file.read()
42 |     messages = [{"role": "system", "content": prompt}]
43 |
--------------------------------------------------------------------------------
/cogs/event_cogs/on_message.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 | from helper_utils.utils import config, FakeCtx, generate_message
4 |
5 | import discord
6 | from discord.ext import commands
7 | from discord import app_commands
8 |
9 |
10 | class OnMessage(commands.Cog):
11 |     def __init__(self, bot):
12 |         self.bot = bot
13 |
14 |     @commands.Cog.listener()
15 |     async def on_message(self, message):
16 |         if str(self.bot.user.id) not in message.content:
17 |             return
18 |         if not config["chatbot"]:
19 |             return
20 |         if message.author == self.bot.user:
21 |             return
22 |
23 |         cleaned_message = message.content.replace(f"<@{self.bot.user.id}>", "")
24 |
25 |         async with message.channel.typing():
26 |             msg = await generate_message(cleaned_message)
27 |
28 |         msg1 = msg.split("<image>")[0]
29 |
30 |         # Check if the model wants to draw something
31 |         if "<image>" in msg:
32 |             await message.channel.send(msg1)
33 |         else:
34 |             await message.channel.send(msg)
35 |
36 |         if "<image>" in msg and "</image>" in msg:
37 |             # Parse the draw tag
38 |             prompt = msg.split("<image>")[1].split("</image>")[0]
39 |
40 |             logging.debug(f"{message.author.mention} is generating ```{prompt}``` with "
41 |                           f"{config['image_model']}!")
42 |
43 |             # Get the Horde cog (cogs/commands_cogs/horde.py) and call imagine_horde
44 |             await self.bot.get_cog("Horde").imagine_horde(
45 |                 FakeCtx(message),  # NOQA
46 |                 prompt=prompt,
47 |                 model=app_commands.Choice(name=config["image_model"], value=config["image_model"])
48 |             )
49 |
50 |
51 | async def setup(bot):
52 |     await bot.add_cog(OnMessage(bot))
--------------------------------------------------------------------------------
/cogs/commands_cogs/misc.py:
-------------------------------------------------------------------------------- 1 | import os 2 | 3 | from discord.ext import commands 4 | import discord 5 | from discord import app_commands 6 | from helper_utils.image_generation_utils import upscale_image, generate_prodia 7 | from helper_utils.utils import clear_context, line_junk, config 8 | 9 | 10 | class Misc(commands.Cog): 11 | def __init__(self, bot): 12 | self.bot = bot 13 | 14 | @commands.hybrid_command(name="upscale", description="Upscale an image with imaginepy") 15 | async def upscale(self, ctx, file: discord.Attachment): 16 | await ctx.defer() 17 | image_filename = await upscale_image(file) 18 | 19 | await ctx.send(file=discord.File(f"{image_filename}")) 20 | os.remove(image_filename) 21 | 22 | @commands.hybrid_command(name="imagine_prodia", description="Generate an image with Prodia") 23 | @app_commands.choices(model=[ 24 | app_commands.Choice(name="SDXL 1.0", 25 | value="SDXL 1.0") 26 | ]) 27 | async def prodia(self, ctx, *, prompt, model: app_commands.Choice[str]): 28 | reply = await ctx.send( 29 | f"{ctx.message.author.mention} is generating ```{prompt}``` with " 30 | f"{model.name}! 
" 31 | f"{line_junk}{config['loading_gif']}") 32 | 33 | filename = await generate_prodia(prompt) 34 | await reply.edit( 35 | content=f"Here are the generated images for {ctx.author.mention}.\n- Prompt: ```{prompt}```\n- Model: `" 36 | f"{model.name}`", 37 | attachments=[discord.File(filename),]) 38 | 39 | os.remove(filename) 40 | 41 | @commands.hybrid_command(name="clear_context", description="Clear the chat context") 42 | async def clear(self, ctx): 43 | await ctx.defer() 44 | await clear_context() 45 | await ctx.send("Cleared the chat context!") 46 | 47 | 48 | async def setup(bot): 49 | await bot.add_cog(Misc(bot)) 50 | -------------------------------------------------------------------------------- /cogs/commands_cogs/pollinations.py: -------------------------------------------------------------------------------- 1 | import os 2 | import random 3 | import requests 4 | from discord.ext import commands 5 | from helper_utils.utils import line_junk, config 6 | import discord 7 | import urllib.parse 8 | 9 | 10 | class Pollinations(commands.Cog): 11 | def __init__(self, bot): 12 | self.bot = bot 13 | 14 | @commands.hybrid_command(name="imagine_poly", description="Generate image using Pollinations") 15 | async def imagine_poly(self, ctx, *, prompt: str): 16 | encoded_prompt = urllib.parse.quote(prompt) 17 | images = [] 18 | 19 | reply = await ctx.send( 20 | f"{ctx.author.mention} is generating ```{prompt}``` with Pollinations! 
{line_junk}"
21 |             f"{config['loading_gif']}")
22 |
23 |         # Generate four images with the given prompt
24 |         i = 0
25 |         attempts = 0
26 |         while len(images) < 4 and attempts < 16:  # Bail out eventually if the API keeps failing
27 |             attempts += 1
28 |             seed = random.randint(1, 100000)  # Generate a random seed
29 |             image_url = f"https://image.pollinations.ai/prompt/{encoded_prompt}?seed={seed}"
30 |
31 |             try:
32 |                 response = requests.get(image_url, timeout=60)
33 |                 response.raise_for_status()
34 |                 image_data = response.content
35 |
36 |                 # Generate a unique filename for each image
37 |                 filename = f"{ctx.author.id}_{ctx.message.id}_{i}.png"
38 |                 with open(filename, "wb") as f:
39 |                     f.write(image_data)
40 |
41 |                 images.append(filename)
42 |                 i += 1
43 |             except (requests.exceptions.RequestException, ValueError, KeyError) as e:
44 |                 print(f"Error generating image: {e}")
45 |
46 |         if images:
47 |             # Send all image files as attachments in a single message
48 |             image_files = [discord.File(image) for image in images]
49 |             await reply.edit(
50 |                 content=f"Here are the generated images for {ctx.author.mention}.\n- Prompt: ```{prompt}```\n- Model: `"
51 |                         f"Pollinations`", attachments=image_files)
52 |
53 |             # Delete the local image files
54 |             for image in image_files:
55 |                 os.remove(image.filename)
56 |         else:
57 |             await reply.edit(content="Error generating images. Please try again later.")
58 |
59 |
60 | async def setup(bot):
61 |     await bot.add_cog(Pollinations(bot))
62 |
--------------------------------------------------------------------------------
/helper_utils/utils.py:
--------------------------------------------------------------------------------
1 | from dotenv import load_dotenv
2 | import json
3 | import logging
4 | import os
5 |
6 |
7 | # Load .env file
8 | load_dotenv(".env")
9 | bot_token: str = os.getenv("BOT_TOKEN")
10 | api_key: str = os.getenv("API_KEY")
11 | prodia_key: str = os.getenv("PRODIA_KEY")
12 | chatgpt_key: str = os.getenv("CHATGPT_KEY")
13 |
14 | if api_key == "0000000000":
15 |     logging.warning("Default API key selected. Generating images will be slower. 
" 16 | "Generated images will be sent to LAION to improve Stable Diffusion.") 17 | 18 | # Load config 19 | with open("config.json") as f: 20 | config = json.load(f) 21 | 22 | # Line junk is some stuff that makes Discord hide links. 23 | # See https://www.youtube.com/watch?v=9OgpQHSP5qE (by Ntts) 24 | 25 | line_junk = """||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​||||​|| _ _ _ _ _ _""" 26 | 27 | 28 | class FakeCtx: 29 | def __init__(self, message): 30 | self.author = message.author 31 | self.message = message 32 | 33 | async def send(self, content, file=None): 34 | return await self.message.channel.send(content, file=file) 35 | 36 | 37 | # Import functions for use in other files 38 | if config["chatbot"]: 39 | # Figure out which model the user specified 40 | if config["model"] == "GPT4All": 41 | # GPT4All 42 | from gpt_utils.gpt4all import generate_message, clear_context # NOQA 43 | elif config["model"] == "gpt4free": 44 | # GPT4All 45 | from gpt_utils.gpt4free import generate_message, clear_context # NOQA 46 | else: 47 | # Fallback 48 | 
logging.warning("Configured model improperly! Check config.json!") 49 | from gpt_utils.gpt4free import generate_message, clear_context # NOQA 50 | else: 51 | # Create a dummy function so that we don't error out when importing them 52 | def fake_function(): 53 | pass 54 | 55 | generate_message, clear_context = fake_function, fake_function 56 | -------------------------------------------------------------------------------- /cogs/commands_cogs/horde.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import os 3 | 4 | from discord.ext import commands 5 | from discord import app_commands 6 | from helper_utils.utils import line_junk, config 7 | from helper_utils.image_generation_utils import generate_with_stable_horde 8 | 9 | 10 | class Horde(commands.Cog): 11 | def __init__(self, bot): 12 | self.bot = bot 13 | 14 | @commands.hybrid_command(name="imagine_horde", description="Generate an image with Stable Horde") 15 | @app_commands.choices(model=[ 16 | app_commands.Choice(name="SDXL 1.0", 17 | value="SDXL 1.0"), 18 | app_commands.Choice(name="Stable Diffusion", value="stable_diffusion_2.1"), 19 | app_commands.Choice(name="SDXL Beta (WILL BE SHARED FOR IMPROVEMENT OF THE MODELS)", 20 | value="SDXL_beta::stability.ai#6901"), 21 | app_commands.Choice(name="Deliberate", value="Deliberate"), 22 | app_commands.Choice(name="Anything Diffusion", value="Anything Diffusion"), 23 | app_commands.Choice(name="Realistic Vision", value="Realistic Vision"), 24 | app_commands.Choice(name="Dreamshaper", value="Dreamshaper"), 25 | app_commands.Choice(name="Abyss OrangeMix", value="Abyss OrangeMix"), 26 | app_commands.Choice(name="OpenJourney Diffusion", value="OpenJourney Diffusion"), 27 | app_commands.Choice(name="Original Stable Diffusion", value="stable_diffusion"), 28 | app_commands.Choice(name="ICBINP - I Can't Believe It's Not Photography", 29 | value="ICBINP - I Can't Believe It's Not Photography"), 30 | ]) 31 | 32 | async def 
imagine_horde(self, ctx, *, prompt: str, model: app_commands.Choice[str], negative: str = None): 33 | reply = await ctx.send( 34 | f"{ctx.message.author.mention} is generating ```{prompt}``` with " 35 | f"{model.name}! " 36 | f"{line_junk}{config['loading_gif']}") 37 | 38 | logging.debug(f"{ctx.message.author.mention} is generating ```{prompt}``` with " 39 | f"{model.name}!") 40 | 41 | # `###` tells Stable Horde we want a negative prompt. 42 | image_files, images = await generate_with_stable_horde( 43 | f"{prompt}{'###' if negative else ''}{negative if negative else ''}", model.value) 44 | 45 | if negative: 46 | negative_string = f"\n- Negative Prompt: ```{negative}```" 47 | else: 48 | negative_string = "" 49 | 50 | await reply.edit( 51 | content=f"Here are the generated images for {ctx.author.mention}.\n- Prompt: ```{prompt}```\n- Model: `" 52 | f"{model.name}`{negative_string}", 53 | attachments=image_files) 54 | 55 | for image in images: 56 | os.remove(image) 57 | 58 | 59 | async def setup(bot): 60 | await bot.add_cog(Horde(bot)) 61 | -------------------------------------------------------------------------------- /run.py: -------------------------------------------------------------------------------- 1 | from InquirerPy import inquirer 2 | import logging 3 | import json 4 | import os 5 | 6 | 7 | logging.basicConfig(level=logging.INFO, 8 | format="[%(levelname)s] %(message)s") 9 | 10 | 11 | def configure(): 12 | print("For all options, choose the first option if unsure.") 13 | model = inquirer.select( 14 | message="Choose model:", 15 | choices=["gpt4free", "GPT4All", "None"], 16 | ).execute() 17 | sync_commands = inquirer.select( 18 | message="Bot command syncing:", 19 | choices=["On", "Off"], 20 | ).execute() 21 | loading_gif = inquirer.text( 22 | message="Paste gif link: (press enter to use default)" 23 | ).execute() 24 | 25 | # Error checking for loading GIF 26 | if "https://" in loading_gif.lower(): 27 | # A loading gif was found 28 | pass 29 | elif 
loading_gif: 30 | logging.log(logging.WARNING, "Invalid input!!! Using default link.") 31 | loading_gif = "" 32 | else: 33 | # Use default one 34 | loading_gif = "" 35 | 36 | # Define default config values 37 | default_config = { 38 | "chatbot": False, 39 | "model": "gpt4free", 40 | "loading_gif": "https://tenor.com/view/loading-gif-9212724", 41 | "sync": True, 42 | "image_model": "stable_diffusion_2.1" 43 | } 44 | # Set a variable to store new config 45 | config = default_config # add .copy() if you want it to be separate 46 | # Models 47 | if model == "None": 48 | config["chatbot"] = False 49 | elif model == "GPT4All": 50 | config["model"] = "GPT4All" 51 | config["chatbot"] = True 52 | elif model == "gpt4free": 53 | config["model"] = "gpt4free" 54 | config["chatbot"] = True 55 | else: 56 | config["chatbot"] = False 57 | # GIF 58 | if loading_gif: 59 | config["loading_gif"] = loading_gif 60 | # Syncing 61 | if sync_commands == "On": 62 | config["sync"] = True 63 | else: 64 | config["sync"] = False 65 | 66 | # Write our config to the config file 67 | with open("config.json", "w") as f: 68 | json.dump(config, f, indent=4) 69 | 70 | 71 | def get_credentials(): 72 | while True: 73 | token = inquirer.text(message="Bot Token:").execute() 74 | key = inquirer.text(message="Stable Horde API Key:").execute() 75 | prodia_key = inquirer.text(message="Prodia API Key:").execute() 76 | chatgpt_key = inquirer.text(message="ChatGPT Key:").execute() 77 | lines = f"""\ 78 | BOT_TOKEN={token} 79 | PRODIA_KEY={prodia_key} 80 | API_KEY={key} 81 | CHATGPT_KEY={chatgpt_key}""" 82 | if token == "" or key == "" or prodia_key == "": 83 | print("Enter valid input.") 84 | continue 85 | else: 86 | break 87 | 88 | with open(".env", "w") as f: 89 | f.writelines(lines) 90 | 91 | 92 | while True: 93 | if not os.path.exists(".env"): 94 | get_credentials() 95 | configure() 96 | print("Launching program...") 97 | import main # NOQA 98 | else: 99 | option = inquirer.select( 100 | message="Select 
option:", 101 | choices=["Run", "Configure", "Add Credentials"], 102 | ).execute() 103 | if option == "Run": 104 | print("Launching program...") 105 | import main # NOQA 106 | elif option == "Configure": 107 | configure() 108 | elif option == "Add Credentials": 109 | get_credentials() 110 | -------------------------------------------------------------------------------- /helper_utils/image_generation_utils.py: -------------------------------------------------------------------------------- 1 | from helper_utils.horde_module import Generator 2 | from .utils import api_key, prodia_key 3 | 4 | import uuid 5 | import asyncio 6 | import logging 7 | import os 8 | 9 | import discord 10 | import requests 11 | from imaginepy import AsyncImagine, Style, Ratio 12 | 13 | horde_generator = Generator() 14 | 15 | 16 | async def generate_with_stable_horde(prompt: str, model: str): 17 | file_uuid = uuid.uuid1() 18 | 19 | await horde_generator.async_generate(prompt, api_key, f"{file_uuid}.png", 4, 20 | model) 21 | 22 | # Loop until the images generate. We check for the fourth image. 
23 | while True: 24 | if os.path.exists(f"3_{file_uuid}.png"): 25 | break 26 | await asyncio.sleep(0.8) 27 | 28 | images = [] 29 | 30 | # Grab all the filenames 31 | for i in range(4): 32 | images.append(f"{i}_{file_uuid}.png") 33 | 34 | image_files = [discord.File(image) for image in images] 35 | 36 | return image_files, images 37 | 38 | 39 | # Imaginepy function 40 | async def generate_image_with_imaginepy(image_prompt, style_value, ratio_value): 41 | async_imagine = AsyncImagine() 42 | 43 | filenames = [] 44 | images = [] 45 | 46 | style_enum = Style[style_value] 47 | ratio_enum = Ratio[ratio_value] 48 | 49 | for _ in range(4): 50 | filenames.append(str(uuid.uuid4()) + ".png") 51 | img_data = await async_imagine.sdprem( 52 | prompt=image_prompt, 53 | style=style_enum, 54 | ratio=ratio_enum 55 | ) 56 | images.append(img_data) 57 | 58 | if any(img is None for img in images): 59 | logging.error("An error occurred while generating the images.") 60 | return None 61 | 62 | for index, filename in enumerate(filenames): 63 | try: 64 | with open(filename, mode="wb") as img_file: 65 | img_file.write(images[index]) 66 | except Exception as e: 67 | logging.error(f"An error occurred while writing the image to file: {e}") 68 | return None 69 | 70 | await async_imagine.close() 71 | 72 | image_files = [discord.File(file) for file in filenames] 73 | 74 | return image_files 75 | 76 | 77 | async def upscale_image(image): 78 | temp_id = uuid.uuid1() 79 | await image.save(f"{temp_id}.png") 80 | async_imagine = AsyncImagine() 81 | with open(f"{temp_id}.png", "rb") as original_file: 82 | original_image = original_file.read() 83 | upscaled_image = await async_imagine.upscale(original_image) 84 | await async_imagine.close() 85 | 86 | with open(f"{temp_id}.png", "wb") as f: 87 | f.write(upscaled_image) 88 | return f"{temp_id}.png" 89 | 90 | async def generate_prodia(prompt: str) -> str: 91 | """Generates an image and returns the path.""" 92 | url = "https://api.prodia.com/v1/sdxl/generate" 93 | 94 | payload = {"prompt": prompt} 95 | headers = { 96 | "accept": 
"application/json", 97 | "content-type": "application/json", 98 | "X-Prodia-Key": prodia_key 99 | } 100 | 101 | response = requests.post(url, json=payload, headers=headers) 102 | 103 | job = response.json()["job"] 104 | 105 | 106 | while True: 107 | await asyncio.sleep(1) 108 | 109 | url = f"https://api.prodia.com/v1/job/{job}" 110 | 111 | headers = { 112 | "accept": "application/json", 113 | # Read in api key 114 | "X-Prodia-Key": prodia_key 115 | } 116 | 117 | response = requests.get(url, headers=headers) 118 | status = response.json()["status"] 119 | 120 | if status == "succeeded": 121 | image_url = response.json()["imageUrl"] 122 | break 123 | if status == "failed": 124 | raise RuntimeError("Prodia image generation failed.") 125 | 126 | r = requests.get(image_url, allow_redirects=True) 127 | temp_id = uuid.uuid1() 128 | with open(f"{temp_id}.png", "wb") as f: 129 | f.write(r.content) 130 | 131 | return f"{temp_id}.png" 132 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Notice: This is a fork. Get the most up-to-date version upstream: https://github.com/mak448a/AI-Drawing-Chatbot 2 | # AI Drawing ChatBot 3 | 4 | ![](https://img.shields.io/github/license/mak448a/Stable-Diffusion-Bot) 5 | ![](https://img.shields.io/github/contributors/mak448a/Stable-Diffusion-Bot) 6 | ![](https://img.shields.io/github/repo-size/mak448a/Stable-Diffusion-Bot) 7 | 8 | This is a completely free AI image generator Discord bot written in Python. 9 | It includes a chatbot, powered by Llama 2, that you can ask to draw pictures! 10 | 11 | This project uses various APIs, so it can run on less powerful computers. 12 | 13 | ![Demo picture](assets/demo.png) 14 | 15 | ## Table of Contents 16 | 1. [Features](#Features) 17 | 2. [Notes](#Notes) 18 | 3. [Prerequisites](#Prerequisites) 19 | 4. [Windows Setup](#windows-setup) 20 | 5. [Docker Setup](#docker-setup) 21 | 6. 
[Linux/macOS Setup](#linux-and-macos-setup) 22 | 7. [Manual Configuration](#Manual-Configuration) 23 | 8. [Generate images faster with Stable Horde](#Generate-images-faster-with-Stable-Horde) 24 | 25 | 26 | ## Features 27 | - An AI chatbot integrated with Stable Diffusion 28 | - Two slash commands for generating images 29 | - Stable Diffusion 30 | - SDXL Beta 31 | - Pollinations 32 | - Anything Diffusion 33 | - And more! 34 | 35 | 36 | ## Notes 37 | 38 | - gpt4free is the recommended model. 39 | - GPT4All isn't very "smart". 40 | - When you use SDXL Beta, your generated images are sent to the provider to help improve the model. 41 | - While generating a response with GPT4All, the Discord bot will lock up. 42 | No commands can be run until the bot finishes writing. 43 | - Do not input any personal information in the `/imagine_poly` command, because your generated images will be displayed 44 | on the provider's official front page. 45 | - Please do not enter any personal information in the chatbot or in the image generators, 46 | because your prompts are sent to various providers. 47 | - When you use Stable Horde to generate images, your prompts are sent to Stable Horde, as listed in their [privacy policy](https://stablehorde.net/privacy). 48 | - The chatbot will not work as expected if multiple users are chatting with it at once. 49 | - GPT4All cannot be used when hosting on a low-end server. 50 | 51 | 52 | ## Prerequisites 53 | This project assumes that: 54 | - Python 3.8+ is installed and is on your PATH 55 | - Pip is installed 56 | - Git is installed 57 | - You know how to create a Discord Bot account 58 | - The package python-venv is installed (only for certain Linux distros) 59 | 60 | 61 | ## Windows Setup 62 | 1. Create a Discord Bot and grab the token. Make sure to switch on Message Content Intent in the `Bot` tab of the 63 | [Developer Portal](https://discord.com/developers/applications). 64 | 2. Grab an API key from Stable Horde [at their register page](https://stablehorde.net/register). 
65 | 3. Clone this repository and `cd` into it, then clone the Stable Horde module: `git clone https://github.com/mak448a/horde_module helper_utils/horde_module --depth=1` 66 | 4. Create a venv and install dependencies: `python -m venv venv`, then `venv\Scripts\activate.bat`, 67 | then `pip install -r requirements.txt` 68 | 5. Grab your ChatGPT key, which is under `accessToken` from here: `https://chat.openai.com/api/auth/session`. 69 | 6. Run `python run.py` and fill in all the information it asks for. 70 | 7. Invite the bot with the link it provides! 71 | 8. You can chat with the bot by mentioning it and typing your message. 72 | 73 | ## Linux and macOS Setup 74 | 1. Create a Discord Bot and grab the token. Make sure to switch on Message Content Intent in the `Bot` tab of the 75 | [Developer Portal](https://discord.com/developers/applications). 76 | 2. Grab an API key from Stable Horde [at their register page](https://stablehorde.net/register). 77 | 3. Clone this repository and `cd` into it, then create a venv and install dependencies: `python3 -m venv venv && source venv/bin/activate && pip install -r requirements.txt` 78 | 4. Clone the Stable Horde module: `git clone https://github.com/mak448a/horde_module helper_utils/horde_module --depth=1` 79 | 5. Grab your ChatGPT key, which is under `accessToken` from here: `https://chat.openai.com/api/auth/session`. 80 | 6. Run `python3 run.py` and fill in all the information it asks for. 81 | 7. Invite the bot with the link it provides! 82 | 8. You can chat with the bot by mentioning it and typing your message. 83 | 84 | ## Docker Setup 85 | 1. Make sure you have Docker installed on your machine. 86 | 2. Clone this repository to your local machine. 87 | 3. Build the Docker image using the following command: `docker build -t ai-drawing-chatbot .` 88 | 4. Run the Docker container using the following command: `docker run -it --name chatbot ai-drawing-chatbot` 89 | 5. Enter your token values. Go [here](#linux-and-macos-setup) to find out how to get these. 90 | 91 | 92 | ## Manual Configuration 93 | 
### Configuring chat model 94 | 95 | Go to `config.json` and set the key `"model"` to the model you want. 96 | 97 | **Available Models** 98 | - gpt4free 99 | - GPT4All 100 | 101 | You must write the model exactly as it is written here. 102 | 103 | When you are done, the edited line should look like this: 104 | 105 | ```json 106 | "model": "gpt4free", 107 | ``` 108 | 109 | 
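After editing `config.json` by hand, it can be worth checking that the file still parses and that the model name matches one of the two accepted values exactly. The snippet below is an illustrative sketch, not part of the project:

```python
import json

# The two chat model names this section accepts.
VALID_MODELS = {"gpt4free", "GPT4All"}


def check_config(raw: str) -> dict:
    """Parse a config.json string and reject unknown model names."""
    config = json.loads(raw)
    if config.get("chatbot") and config.get("model") not in VALID_MODELS:
        raise ValueError(f"Unknown model: {config.get('model')!r}")
    return config


check_config('{"chatbot": true, "model": "gpt4free"}')  # OK
```

A typo such as `"gpt4all"` (wrong capitalization) would raise a `ValueError` here instead of failing later at bot startup.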
110 | 111 |
### Disable chatbot 112 | 113 | Go to `config.json` and set the key `"chatbot"` to `false`. 114 | 115 | It should look like this: 116 | 117 | ```json 118 | "chatbot": false, 119 | ``` 120 | 
121 | 122 |
### Change loading GIF 122 | 123 | Go to `config.json` and set the key `"loading_gif"` to any GIF on Tenor you want! 124 | 125 | It should look like this: 126 | 127 | ```json 128 | "loading_gif": "https://tenor.com/your/favorite/loading/gif", 129 | ``` 130 | 
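`run.py` only applies a loose check to this value (the text must look like an `https://` link). If you edit `config.json` by hand, a similar check, shown here as a hypothetical helper, can catch a pasted non-link:

```python
# Hypothetical helper mirroring run.py's validation: a loading GIF link
# is accepted only if it contains "https://" (case-insensitive).
def is_valid_gif_link(link: str) -> bool:
    return "https://" in link.lower()
```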
132 | 133 |
### Turn off command syncing (makes the bot load faster) 134 | 135 | Please note: turning syncing off before the bot has run at least once will break it, because its slash commands will never be registered with Discord. 136 | Go to `config.json` and set the key `"sync"` to `false`. 137 | 138 | It should look like this: 139 | 140 | ```json 141 | "sync": false, 142 | ``` 143 | 
144 | 145 |
### Change chatbot drawing model 146 | 147 | Go to `config.json` and set the key `"image_model"` to a model that you find [here](https://aqualxx.github.io/stable-ui/). 148 | 149 | It should look like this: 150 | 151 | ```json 152 | "image_model": "SDXL 1.0" 153 | ``` 154 | 
155 | 156 | 157 | ## Generate images faster with Stable Horde 158 | Stable Horde uses Kudos to rank users in its generation queue: the more Kudos you have, the sooner your requests are processed. 159 | You can earn Kudos by rating images and/or by generating images for Stable Horde 160 | on your local hardware. 161 | To get more Kudos, first enter your API key [here](https://tinybots.net/artbot/settings). 162 | Then, rate some images generated by other users [here](https://tinybots.net/artbot/rate). 163 | After rating a few images, you should see faster generation times! 164 | 165 | **IMPORTANT: When an image says, "This rating must be x," you must give it exactly that rating. 166 | Such images are used as a captcha to prevent spam.** 167 | --------------------------------------------------------------------------------