├── .github
│   └── ISSUE_TEMPLATE
│       ├── bug_report.md
│       ├── feature_request.md
│       └── help-request.md
├── .gitignore
├── README.md
├── __init__.py
├── assistants
│   ├── assistant.py
│   ├── assistantp.py
│   ├── chat.py
│   ├── interview.py
│   ├── keys_example.yaml
│   ├── one_up.py
│   ├── package
│   │   ├── __init__.py
│   │   ├── assistant.py
│   │   ├── assistant_p.py
│   │   ├── assistant_utils.py
│   │   ├── chat.py
│   │   ├── interview.py
│   │   ├── kokoro.py
│   │   ├── resources
│   │   │   └── beep.mp3
│   │   └── streamer.py
│   ├── prompts
│   │   ├── assistant.txt
│   │   ├── assistantP.txt
│   │   ├── attention.txt
│   │   ├── chat.txt
│   │   ├── eren.txt
│   │   ├── interview.txt
│   │   ├── interview_end.txt
│   │   ├── one-up.txt
│   │   ├── prompt.txt
│   │   ├── rem.txt
│   │   ├── roleplay.txt
│   │   └── streamer.txt
│   ├── resources
│   │   └── beep.mp3
│   ├── roleplay.py
│   ├── streamer.py
│   └── utils.py
├── build.py
├── build_multithread.py
└── requirements.txt
/.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the bug** 11 | A clear and concise description of what the bug is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 22 | 23 | **Screenshots** 24 | If applicable, add screenshots to help explain your problem. 25 | 26 | **Desktop (please complete the following information):** 27 | - OS: [e.g. iOS, Windows (10/11)] 28 | 29 | **Additional context** 30 | Add any other context about the problem here. 
31 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Suggest an idea for this project 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Is your feature request related to a problem? Please describe.** 11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 12 | 13 | **Describe the solution you'd like** 14 | A clear and concise description of what you want to happen. 15 | 16 | **Describe alternatives you've considered** 17 | A clear and concise description of any alternative solutions or features you've considered. 18 | 19 | **Additional context** 20 | Add any other context or screenshots about the feature request here. 21 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/help-request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Help Request 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the Issue** 11 | A clear and concise description of what your issue is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 22 | 23 | **Screenshots** 24 | If applicable, add screenshots to help explain your problem. 25 | 26 | **Desktop (please complete the following information):** 27 | - OS: [e.g. iOS, Windows (10/11)] 28 | **Additional context** 29 | Add any other context about the problem here. 
30 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # ignore files 2 | keys.py 3 | keys.txt 4 | assistants/keys.txt 5 | assistants/keys.yaml 6 | batch_to_exe.txt 7 | interview.spec 8 | 9 | # ignore filetype 10 | *.spec 11 | *.exe 12 | *.pyc 13 | *.wav 14 | *.zip 15 | 16 | 17 | # ignore folders 18 | conversations/ 19 | older prompts/ 20 | __pycache__/ 21 | dist/ 22 | build/ 23 | venv/ 24 | .venv/ -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | > **Project is currently on hiatus as of this update, 12/20/2023.** 2 | 3 | # QUICK DESCRIPTION: 4 | 5 | This repo utilizes OpenAI's GPT-3.5 Turbo model to engage in personalized conversations with users, catering to their preferred communication style. As GPT-3.5 Turbo serves as the foundation for ChatGPT, this project essentially shares its underlying model. Ultimately, the aim of this project is to develop a personal assistant that emulates human-like interactions. As the project is a work in progress, its features will expand as I continue to refine and iterate on the concept. 6 | 7 | I highly recommend you run the Python scripts as there are bugs with the exe files on Windows 11. But the fastest way to try these assistants would be to set up your API keys in the ```keys.yaml``` file and then run the exe files I have provided. For this to work, **you must** rename ```keys_example.yaml``` to ```keys.yaml```. 
To find the exe files, check the latest release for this project https://github.com/JarodMica/Vivy/releases/ 8 | 9 | ## YouTube Tutorial 10 | Most recent: https://youtu.be/0qpar5R1VA4 11 | 12 | ## Features 13 | :heavy_check_mark: Voice recognition and voice responses 14 | 15 | :heavy_check_mark: Integration with ChatGPT for responses and ElevenLabs for natural-sounding text-to-speech 16 | 17 | :heavy_check_mark: Easy customization for personalities of the AI assistants, with functionality variations of the assistants 18 | 19 | :heavy_check_mark: Local conversation history stored for your own information (assistant does not reference them yet) 20 | 21 | ## Enhancements in the Pipeline 22 | - [ ] Speech Emotion Recognition (SER) - Ability to recognize the speaker's emotion based on tone and be able to use this in better assessing the user 23 | - [ ] Facial Emotion Recognition (FER) - Ability to recognize the emotion of the speaker by analyzing frames of a webcam feed/camera to assess the user 24 | - [ ] Local models for voice inferencing - No need to rely on ElevenLabs for "natural voice output", possible to use https://github.com/coqui-ai/TTS 25 | - [ ] Local long-term memory - Ability to remember previous conversations with the user and store them locally, must be fast and efficient 26 | 27 | ## To be created for different assistant types 28 | - [ ] Current assistant types: chat(), interview(), assistant(), assistantp() 29 | - [ ] Streamer - acts as a co-host to a stream, determines when to speak 30 | - [ ] Vtuber - Just go look at Neurosama. Might not even be encompassable by Vivy, might have to be its own repo 31 | 32 | ## Bugs 33 | - [ ] Currently some bugs with the exe files that I'm not able to trace entirely. Issue [#11](https://github.com/JarodMica/Vivy/issues/11) 34 | is tracking it. 35 | 36 | ## EXE Quick Use 37 | 38 | If you just want to try out the assistants, download the zip folder and then unzip it to any location on your PC. 
Once unzipped, rename the keys_example file to ```keys.yaml``` and then set up your API keys in ```keys.yaml```. Now you can do two things: run the exe and try it out, or adjust the prompts in the prompts folder. If you run interview.exe, it's going to use the interview.txt file (same with roleplay). Otherwise, you can modify the prompts to your own case and then run the exe files. 39 | 40 | ## How to run it in Python: 41 | You'll need Git, Python, and I recommend some type of IDE like VSCode. I have a 5-minute tutorial here **BUT BEWARE, I show Python 3.11, you need 3.10 for Vivy**: https://youtu.be/Xk-u7tTqwwY 42 | 43 | Once you have those things installed, open a terminal window and clone the repo: 44 | ``` 45 | git clone https://github.com/JarodMica/Vivy.git 46 | ``` 47 | 48 | **Note:** I highly recommend looking into doing this as a virtual environment (venv) so that you don't have any issues in the future if you're installing other Python projects; go watch this here: https://www.youtube.com/watch?v=q1ulfoHkNtQ. 49 | 50 | That being said, if you just wanna run this project real quick, you can follow below (make sure you didn't close the terminal after cloning the repo): 51 | ``` 52 | cd Vivy 53 | pip install -r requirements.txt 54 | ``` 55 | Now navigate into the assistants folder and then choose a Python script to run; ```dir``` will list all of the scripts in the folder: 56 | ``` 57 | cd assistants 58 | dir 59 | python assistantp.py 60 | ``` 61 | 62 | And boom! The script should be running. 63 | 64 | Once again, make sure your API keys inside of ```keys.yaml``` are set up. If you don't do this, it won't work at all. To get the OpenAI key, open up an account at https://openai.com/blog/openai-api and to get an Eleven Labs API key, you need to set up an account at https://beta.elevenlabs.io/ (this is a paid option). 65 | 66 | ## If you're running in Python 67 | 68 | Here's a quick description of the variables I recommend be modified. 
If you want to see how this is implemented in code, check out the Python scripts in the assistants folder. You'll find these at the top of the Python script. 69 | 70 | ``` 71 | foldername = "assistantP" 72 | personality = "assistantP" 73 | voicename = "Rem" 74 | useEL = False 75 | usewhisper = True 76 | ``` 77 | 78 | #### The 5-ish variables you can modify: 79 | 80 | 1. You set the ```foldername``` of where the conversations will be stored; this will be created inside the conversations folder. 81 | 2. You set the ```personality``` by inputting the name of your text file and then editing the contents of the txt file that is specified. What this does is "prime" the ChatGPT conversation and sets the **personality** of the bot. All of your prompts must go in the prompts folder and I have some examples in there already. 82 | 3. If you're using Eleven Labs for voice generation, change ```voicename``` to whatever voice you want that is available to you in Eleven Labs. 83 | 4. If you want to use Eleven Labs, change ```useEL``` to True. 84 | 5. If you want to use Whisper instead of Google voice recognition, change ```usewhisper``` to True. 85 | 86 | The code below the 5 variables in the script is the objects and function calls that set up the voice assistant using the classes and functions in the package modules. 87 | 88 | Most of the functions in the package modules have docstrings that you can read to clarify what they do, but I'm still working on making them clearer. 
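For reference, the keys file the scripts load follows ```keys_example.yaml```. Below is a minimal sketch of how it is parsed (the package reads the file with PyYAML's ```yaml.safe_load```); the key names come straight from ```assistants/keys_example.yaml```, and the values are placeholders you replace with your real API keys:

```python
import yaml  # PyYAML, installed via requirements.txt

# Contents mirror assistants/keys_example.yaml; the values are placeholders.
example = """
EL_KEY : your_el_key
OPENAI_KEY : your_openai_key
"""

keys = yaml.safe_load(example)
print(keys["OPENAI_KEY"])  # -> your_openai_key
```

In the actual code, ```kokoro.py``` does the same thing with ```open(keys, "r")``` on the path returned by ```get_file_paths```.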
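The conversation files mentioned above are written with an incrementing suffix, so each run gets a fresh ```conversation_<n>.txt```. Here's a condensed, stdlib-only sketch of the scheme used in ```assistants/package/assistant_utils.py``` (the temporary folder is just for illustration):

```python
import json
import os
import tempfile

def save_conversation(messages, save_foldername):
    """Find the first unused conversation_<n>.txt suffix and write the messages."""
    os.makedirs(save_foldername, exist_ok=True)
    suffix = 0
    filename = os.path.join(save_foldername, f"conversation_{suffix}.txt")
    while os.path.exists(filename):
        suffix += 1
        filename = os.path.join(save_foldername, f"conversation_{suffix}.txt")
    with open(filename, "w", encoding="utf-8") as file:
        json.dump(messages, file, indent=4, ensure_ascii=False)
    return suffix

folder = tempfile.mkdtemp()
messages = [{"role": "system", "content": "personality prompt goes here"}]
print(save_conversation(messages, folder))  # first run in the folder -> 0
print(save_conversation(messages, folder))  # second run -> 1
```

The returned suffix is then passed to ```save_inprogress```, which keeps rewriting the same file as the conversation grows so nothing is lost if the script crashes.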
89 | 90 | -------------------------------------------------------------------------------- /__init__.py: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /assistants/assistant.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | 4 | from package import kokoro 5 | from package import assistant 6 | from utils import get_file_paths 7 | 8 | # The only variables that need to be modified 9 | foldername = "assistant" 10 | personality = "assistant" 11 | voicename = "Rem" 12 | useEL = False 13 | usewhisper = True 14 | 15 | # This code block only checks if it's being run as a Python script or as an exe 16 | if getattr(sys, 'frozen', False): 17 | script_dir = os.path.dirname(os.path.abspath(sys.executable)) 18 | while True: 19 | user_input = input("Are you using an Eleven Labs voice (yes/no)?\n") 20 | if user_input == 'yes': 21 | voicename = input("What is the name of your Eleven Labs voice: ") 22 | useEL = True 23 | break 24 | elif user_input == 'no': 25 | break 26 | else: 27 | print("Invalid Input, please try again.") 28 | else: 29 | script_dir = os.path.dirname(os.path.abspath(__file__)) 30 | 31 | foldername_dir, personality_dir, keys = get_file_paths(script_dir, foldername, personality) 32 | 33 | # Initialize the chat assistant with the variables that you've set 34 | 35 | chatbot = kokoro.Kokoro(personality=personality_dir, 36 | keys=keys, 37 | voice_name=voicename 38 | ) 39 | assistant_ = assistant.Assistant(chatbot) 40 | 41 | assistant_.run(save_foldername=foldername_dir, 42 | useEL=useEL, 43 | usewhisper=usewhisper 44 | ) -------------------------------------------------------------------------------- /assistants/assistantp.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | 4 | from package import kokoro 5 | from package 
import assistant_p 6 | from utils import get_file_paths 7 | 8 | # The only variables that need to be modified 9 | foldername = "assistantP" 10 | personality = "assistantP" 11 | voicename = "Rem" 12 | useEL = False 13 | usewhisper = True 14 | 15 | # This code block only checks if it's being run as a Python script or as an exe 16 | if getattr(sys, 'frozen', False): 17 | script_dir = os.path.dirname(os.path.abspath(sys.executable)) 18 | while True: 19 | user_input = input("Are you using an Eleven Labs voice (yes/no)?\n") 20 | if user_input == 'yes': 21 | voicename = input("What is the name of your Eleven Labs voice: ") 22 | useEL = True 23 | break 24 | elif user_input == 'no': 25 | break 26 | else: 27 | print("Invalid Input, please try again.") 28 | else: 29 | script_dir = os.path.dirname(os.path.abspath(__file__)) 30 | 31 | foldername_dir, personality_dir, keys = get_file_paths(script_dir, 32 | foldername, 33 | personality) 34 | 35 | chatbot = kokoro.Kokoro(personality=personality_dir, 36 | keys=keys, 37 | voice_name=voicename 38 | ) 39 | 40 | assistant = assistant_p.AssistantP(chatbot) 41 | 42 | assistant.run(save_foldername=foldername_dir, 43 | useEL=useEL, 44 | usewhisper=usewhisper 45 | ) 46 | -------------------------------------------------------------------------------- /assistants/chat.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | 4 | from package import kokoro 5 | from package import chat 6 | from utils import get_file_paths 7 | 8 | # The only variables that need to be modified 9 | foldername = "chat" 10 | personality = "chat" 11 | voicename = "Rem" 12 | useEL = False 13 | usewhisper = True 14 | 15 | # This code block only checks if it's being run as a Python script or as an exe 16 | if getattr(sys, 'frozen', False): 17 | script_dir = os.path.dirname(os.path.abspath(sys.executable)) 18 | while True: 19 | user_input = input("Are you using an Eleven Labs voice (yes/no)?\n") 20 | if user_input 
== 'yes': 21 | voicename = input("What is the name of your Eleven Labs voice: ") 22 | useEL = True 23 | break 24 | elif user_input == 'no': 25 | break 26 | else: 27 | print("Invalid Input, please try again.") 28 | else: 29 | script_dir = os.path.dirname(os.path.abspath(__file__)) 30 | 31 | foldername_dir, personality_dir, keys = get_file_paths(script_dir, 32 | foldername, 33 | personality) 34 | 35 | chatbot = kokoro.Kokoro(personality=personality_dir, 36 | keys=keys, 37 | voice_name=voicename 38 | ) 39 | 40 | assistant = chat.Chat(chatbot) 41 | 42 | assistant.run(save_foldername=foldername_dir, 43 | useEL=useEL, 44 | usewhisper=usewhisper 45 | ) 46 | -------------------------------------------------------------------------------- /assistants/interview.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | 4 | from package import kokoro 5 | from package import interview 6 | from utils import get_file_paths 7 | 8 | # The only variables that need to be modified 9 | foldername = "interview" 10 | personality = "interview" 11 | system_change = "interview_end" 12 | voicename = "Rem" 13 | useEL = False 14 | usewhisper = True 15 | 16 | # This code block only checks if it's being run as a Python script or as an exe 17 | if getattr(sys, 'frozen', False): 18 | # script_dir = os.path.dirname(os.path.abspath(sys.executable)) 19 | script_dir = sys._MEIPASS 20 | while True: 21 | user_input = input("Are you using an Eleven Labs voice (yes/no)?\n") 22 | if user_input == 'yes': 23 | voicename = input("What is the name of your Eleven Labs voice: ") 24 | useEL = True 25 | break 26 | elif user_input == 'no': 27 | break 28 | else: 29 | print("Invalid Input, please try again.") 30 | else: 31 | script_dir = os.path.dirname(os.path.abspath(__file__)) 32 | 33 | foldername_dir, personality_dir, keys, syschange_dir = get_file_paths(script_dir, 34 | foldername, 35 | personality, 36 | system_change) 37 | 38 | chatbot = 
kokoro.Kokoro(personality=personality_dir, 39 | keys=keys, 40 | voice_name=voicename 41 | ) 42 | assistant = interview.Interview(chatbot) 43 | 44 | assistant.run(save_foldername=foldername_dir, 45 | system_change=syschange_dir, 46 | useEL=useEL, 47 | usewhisper=usewhisper 48 | ) 49 | 50 | -------------------------------------------------------------------------------- /assistants/keys_example.yaml: -------------------------------------------------------------------------------- 1 | EL_KEY : your_el_key 2 | OPENAI_KEY : your_openai_key 3 | -------------------------------------------------------------------------------- /assistants/one_up.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | 4 | from package import kokoro 5 | from package import assistant 6 | from utils import get_file_paths 7 | 8 | # The only variables that need to be modified 9 | foldername = "one-up" 10 | personality = "one-up" 11 | voicename = "Rem" 12 | useEL = False 13 | usewhisper = True 14 | 15 | # This code block only checks if it's being run as a Python script or as an exe 16 | if getattr(sys, 'frozen', False): 17 | script_dir = os.path.dirname(os.path.abspath(sys.executable)) 18 | while True: 19 | user_input = input("Are you using an Eleven Labs voice (yes/no)?\n") 20 | if user_input == 'yes': 21 | voicename = input("What is the name of your Eleven Labs voice: ") 22 | useEL = True 23 | break 24 | elif user_input == 'no': 25 | break 26 | else: 27 | print("Invalid Input, please try again.") 28 | else: 29 | script_dir = os.path.dirname(os.path.abspath(__file__)) 30 | 31 | foldername_dir, personality_dir, keys = get_file_paths(script_dir, foldername, personality) 32 | 33 | # Initialize the chat assistant with the variables that you've set 34 | 35 | chatbot = kokoro.Kokoro(personality=personality_dir, 36 | keys=keys, 37 | voice_name=voicename 38 | ) 39 | assistant_ = assistant.Assistant(chatbot) 40 | 41 | 
assistant_.run(save_foldername=foldername_dir, 42 | useEL=useEL, 43 | usewhisper=usewhisper 44 | ) -------------------------------------------------------------------------------- /assistants/package/__init__.py: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /assistants/package/assistant.py: -------------------------------------------------------------------------------- 1 | import time 2 | 3 | from .kokoro import Kokoro 4 | from .assistant_utils import * 5 | 6 | class Assistant: 7 | def __init__(self, chatGPT: Kokoro): 8 | self.chatGPT = chatGPT 9 | 10 | def run(self, save_foldername, keyword ='hey', useEL = False, usewhisper = False, timeout = 5): 11 | ''' 12 | This method acts as more of a "traditional" smart assistant such as Google or Alexa and is FORGETFUL, meaning it 13 | will start over the current conversation once it times out. This means it's best used for 1 question 14 | and some quick follow-up questions. It waits for some keyword (if not specified, it will be "hey") and 15 | then proceeds to the "conversation". Once in the conversation, you will be able to interact with the assistant 16 | as you would normally, but if no speech is detected after 5 seconds (adjustable), the conversation will reset and 17 | the assistant will need to be re-initiated with "hey". 18 | 19 | Args: 20 | save_foldername (str): The name of the folder where the conversation will be saved. 
21 | keyword (str): The keyword(s) that will initiate the conversation 22 | useEL (bool, optional): If false, the bot generates responses using the system voices 23 | usewhisper (bool, optional): Defaults to False to use Google speech recognition 24 | timeout : the amount of time the assistant will wait before resetting 25 | ''' 26 | 27 | while True: 28 | beep() 29 | self.chatGPT.start_conversation(keyword) 30 | self.chatGPT.messages = [{"role" : "system", "content" : f"{self.chatGPT.mode}"}] 31 | suffix = save_conversation(self.chatGPT.messages, save_foldername) 32 | self.chatGPT.generate_voice("I'm listening.", useEL) 33 | start_time = time.time() 34 | while True: 35 | audio = self.chatGPT.listen_for_voice() 36 | try: 37 | if usewhisper == True: 38 | user_input = self.chatGPT.whisper(audio) 39 | print("You said: ", user_input) # Checking 40 | else: 41 | user_input = self.chatGPT.r.recognize_google(audio) 42 | 43 | except: 44 | if time.time() - start_time > timeout: 45 | break 46 | continue 47 | 48 | if "quit" in user_input.lower() or "quit." 
in user_input.lower(): 49 | raise SystemExit 50 | 51 | self.chatGPT.messages.append({"role" : "user", "content" : user_input}) 52 | try: 53 | response = self.chatGPT.response_completion() 54 | self.chatGPT.generate_voice(response=response, useEL=useEL) 55 | save_inprogress(self.chatGPT.messages, suffix=suffix, save_foldername=save_foldername) 56 | start_time = time.time() 57 | except Exception as e: 58 | print(f"{e}") 59 | print("Token limit exceeded, clearing messages list and restarting") 60 | self.chatGPT.messages = [{"role": "system", "content": f"{self.chatGPT.mode}"}] 61 | suffix = save_conversation(self.chatGPT.messages, save_foldername) 62 | -------------------------------------------------------------------------------- /assistants/package/assistant_p.py: -------------------------------------------------------------------------------- 1 | import time 2 | 3 | from .kokoro import Kokoro 4 | from .assistant_utils import * 5 | 6 | class AssistantP: 7 | def __init__(self, chatGPT: Kokoro): 8 | self.chatGPT = chatGPT 9 | 10 | def run(self, save_foldername, keyword ='hey', useEL = False, usewhisper = False, timeout = 5): 11 | ''' 12 | Nearly identical to assistant, but maintains a persistent (p) memory of the conversation. This means 13 | when it times out after 5 seconds, it will maintain memory of the current conversation up until you restart 14 | or the token limit is reached. 15 | 16 | Args: 17 | save_foldername (str): The name of the folder where the conversation will be saved. 
18 | keyword (str): The keyword(s) that will initiate the conversation 19 | useEL (bool, optional): If false, the bot generates responses using the system voices 20 | usewhisper (bool, optional): If false, uses Google speech recognition 21 | timeout : the amount of time the assistant will wait before resetting 22 | ''' 23 | 24 | while True: 25 | beep() 26 | self.chatGPT.start_conversation(keyword = keyword) 27 | self.chatGPT.generate_voice("I'm listening.", useEL) 28 | suffix = save_conversation(self.chatGPT.messages, save_foldername) 29 | start_time = time.time() 30 | 31 | while True: 32 | audio = self.chatGPT.listen_for_voice() 33 | try: 34 | if usewhisper == True: 35 | user_input = self.chatGPT.whisper(audio) 36 | print("You said: ", user_input) # Checking 37 | else: 38 | user_input = self.chatGPT.r.recognize_google(audio) 39 | except: 40 | if time.time() - start_time > timeout: 41 | break 42 | continue 43 | 44 | check_quit(user_input) 45 | 46 | try: 47 | self.chatGPT.messages.append({"role" : "user", "content" : user_input}) 48 | response = self.chatGPT.response_completion() 49 | self.chatGPT.generate_voice(response=response, useEL=useEL) 50 | save_inprogress(self.chatGPT.messages, suffix=suffix, save_foldername=save_foldername) 51 | start_time = time.time() 52 | except Exception as e: 53 | print(f"{e}") 54 | if "overloaded" in str(e).split(): 55 | continue 56 | print("Token limit exceeded, clearing messages list and restarting") 57 | self.chatGPT.messages = [{"role": "system", "content": f"{self.chatGPT.mode}"}] 58 | suffix = save_conversation(self.chatGPT.messages, save_foldername) -------------------------------------------------------------------------------- /assistants/package/assistant_utils.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import winsound 4 | import sounddevice as sd 5 | import soundfile as sf 6 | 7 | def check_quit(user_input:str): 8 | if user_input.lower() == "quit" or 
"quit." in user_input.lower(): 9 | raise SystemExit 10 | 11 | def beep(): 12 | try: 13 | script_dir = os.path.dirname(os.path.abspath(__file__)) 14 | beep_path = os.path.join(script_dir, "resources", "beep.mp3") 15 | data, samplerate = sf.read(beep_path) 16 | sd.play(data, samplerate) 17 | except: 18 | # If `soundfile` fails, play a system beep instead 19 | duration = 500 20 | frequency = 500 21 | winsound.Beep(frequency, duration) 22 | 23 | def save_conversation(messages, save_foldername:str): 24 | ''' 25 | Checks the folder for previous conversations and will get the next suffix that has not been used yet. 26 | 27 | Args: 28 | save_foldername (str) : Takes in the path to save the conversation to. 29 | Returns: 30 | suffix (int) : Needed to keep track of the conversation name for save_inprogress 31 | ''' 32 | 33 | os.makedirs(save_foldername, exist_ok=True) 34 | base_filename = 'conversation' 35 | suffix = 0 36 | filename = os.path.join(save_foldername, f'{base_filename}_{suffix}.txt') 37 | 38 | while os.path.exists(filename): 39 | suffix += 1 40 | filename = os.path.join(save_foldername, f'{base_filename}_{suffix}.txt') 41 | with open(filename, 'w', encoding = 'utf-8') as file: 42 | json.dump(messages, file, indent=4, ensure_ascii=False) 43 | 44 | return suffix 45 | 46 | def save_inprogress(messages, suffix, save_foldername): 47 | ''' 48 | Uses the suffix number returned from save_conversation to continually update the 49 | file for this instance of execution. This is so that you can save the conversation 50 | as you go so if it crashes, you don't lose the conversation. Shouldn't be called 51 | from outside of the class. 
52 | 53 | Args: 54 | suffix : Takes suffix count from save_conversation() 55 | ''' 56 | 57 | os.makedirs(save_foldername, exist_ok=True) 58 | base_filename = 'conversation' 59 | filename = os.path.join(save_foldername, f'{base_filename}_{suffix}.txt') 60 | 61 | with open(filename, 'w', encoding = 'utf-8') as file: 62 | json.dump(messages, file, indent=4, ensure_ascii=False) -------------------------------------------------------------------------------- /assistants/package/chat.py: -------------------------------------------------------------------------------- 1 | from .kokoro import Kokoro 2 | from .assistant_utils import * 3 | 4 | class Chat: 5 | def __init__(self, chatGPT: Kokoro): 6 | self.chatGPT = chatGPT 7 | 8 | def run(self, save_foldername, keyword ='hey', updatein='', useEL=False, usewhisper=False): 9 | ''' 10 | Chat with an AI assistant using OpenAI's GPT-3.5 Turbo model. Unlike the others, once you initiate 11 | the conversation with a keyword, it will just keep listening. This means no timer on the amount 12 | of silence in between speaking, but it also means it's listening forever and will respond back 13 | until you quit out. Check out assistant() or assistantp() if you want behavior closer to 14 | Google/Alexa. 15 | 16 | Args: 17 | save_foldername (str): The name of the folder to save the conversation history in. 18 | keyword (str) : The keyword to get the assistant listening 19 | updatein (str): The path of a file to read and update the "system" role for ChatGPT, does so by appending messages 20 | useEL (bool, optional): Whether to use Eleven Labs' API to generate and play audio. Defaults to False. 21 | usewhisper (bool, optional): Whether to use Whisper for voice recognition. Defaults to False. 
22 | ''' 23 | while True: 24 | self.chatGPT.start_conversation(keyword=keyword) 25 | suffix = save_conversation(self.chatGPT.messages, save_foldername) 26 | while True: 27 | audio = self.chatGPT.listen_for_voice(timeout=None) 28 | try: 29 | if usewhisper: 30 | if audio: 31 | user_input = self.chatGPT.whisper(audio) 32 | print("You said: ", user_input) # Checking 33 | else: 34 | raise ValueError("Empty audio input") 35 | else: 36 | user_input = self.chatGPT.r.recognize_google(audio) 37 | 38 | except Exception as e: 39 | print(e) 40 | continue 41 | 42 | check_quit(user_input) 43 | 44 | # This merely appends to the list of dictionaries; it doesn't overwrite the existing 45 | # entries. It should change the behavior of chatGPT though based on the text file. 46 | if updatein != '': 47 | if "update chat" in user_input.lower(): 48 | update = updatein 49 | with open(update, "r") as file: 50 | update = file.read() 51 | self.chatGPT.messages.append({"role" : "system", "content" : update}) 52 | 53 | self.chatGPT.messages.append({"role": "user", "content": user_input}) 54 | 55 | try: 56 | response = self.chatGPT.response_completion() 57 | self.chatGPT.generate_voice(response=response, useEL=useEL) 58 | save_inprogress(self.chatGPT.messages, suffix=suffix, save_foldername=save_foldername) 59 | except: 60 | print("Token limit exceeded, clearing messages list and restarting") 61 | self.chatGPT.messages = [{"role": "system", "content": f"{self.chatGPT.mode}"}] 62 | suffix = save_conversation(self.chatGPT.messages, save_foldername) 63 | -------------------------------------------------------------------------------- /assistants/package/interview.py: -------------------------------------------------------------------------------- 1 | from .kokoro import Kokoro 2 | from .assistant_utils import * 3 | 4 | class Interview: 5 | def __init__(self, chatGPT: Kokoro): 6 | self.chatGPT = chatGPT 7 | 8 | def run(self, save_foldername, system_change = '', useEL=False, usewhisper = False): 9 | 
''' 10 | Nearly identical to how the chat method works but this method conducts an interview 11 | with a candidate using an interview bot style. The conversation is saved to a 12 | specified folder and "system_change" is used to modify how the system operates (I use it to end the 13 | interview, but you can use any text format you would like) 14 | 15 | Args: 16 | save_foldername (str): The name of the folder where the conversation will be saved. 17 | system_change (str): path to the personality change .txt file 18 | useEL (bool, optional): If false, the bot generates responses using the system voices 19 | usewhisper (bool, optional): If false, uses Google speech recognition 20 | ''' 21 | 22 | suffix = save_conversation(self.chatGPT.messages, save_foldername) 23 | start = ("Introduce yourself (pick a random name) and welcome the candidate in." 24 | "Let them know what they're applying for and then proceed by asking for an introduction." 25 | "Keep track but do not inform the candidate about the points system") 26 | self.chatGPT.messages.append({"role": "user", "content": start}) 27 | response = self.chatGPT.response_completion() 28 | self.chatGPT.generate_voice(response, useEL) 29 | while True: 30 | audio = self.chatGPT.listen_for_voice(timeout=None) 31 | try: 32 | if usewhisper == True: 33 | user_input = self.chatGPT.whisper(audio) 34 | print("You said: ", user_input) # Checking 35 | else: 36 | user_input = self.chatGPT.r.recognize_google(audio) 37 | 38 | except Exception as e: 39 | print(e) 40 | continue 41 | 42 | if "quit" in user_input.lower() or "quit." 
in user_input.lower(): 43 | raise SystemExit 44 | 45 | self.chatGPT.messages.append({"role": "user", "content": user_input}) 46 | try: 47 | response = self.chatGPT.response_completion() 48 | self.chatGPT.generate_voice(response, useEL) 49 | save_inprogress(self.chatGPT.messages, suffix=suffix, save_foldername=save_foldername) 50 | 51 | if system_change != '': 52 | # if the bot responds with this, changes "system" behavior 53 | if "interview" in response.lower() and "is over" in response.lower(): 54 | with open(system_change, "r", encoding="utf-8") as file: 56 | system = file.read() 57 | 58 | for message in self.chatGPT.messages: 59 | if message['role'] == 'system': 60 | message['content'] = system 61 | except Exception as e: 62 | print("Token limit exceeded, clearing messages list and restarting") 63 | self.chatGPT.messages = [{"role": "system", "content": f"{self.chatGPT.mode}"}] 64 | suffix = save_conversation(self.chatGPT.messages, save_foldername) 65 | self.chatGPT.messages.append({"role": "user", "content": start}) 66 | response = self.chatGPT.response_completion() 67 | self.chatGPT.generate_voice(response, useEL) -------------------------------------------------------------------------------- /assistants/package/kokoro.py: -------------------------------------------------------------------------------- 1 | import speech_recognition as sr 2 | import openai 3 | import pyttsx3 4 | import yaml 5 | from .assistant_utils import * 6 | 7 | from elevenlabslib import * 8 | 9 | 10 | class Kokoro: 11 | def __init__(self, personality:str, keys:str, voice_name="Rachel", device_index=None, gptmodel:str="gpt-3.5-turbo"): 12 | ''' 13 | Initialize the ChatGPT class with all of the necessary arguments 14 | 15 | Args: 16 | personality (str) : path to the prompts or "personalities" .txt 17 | keys (str) : path to the keys.yaml file 18 | voice_name (str) : Eleven Labs voice to use 19 | device_index (int) : microphone device to use (0 is default) 20 | gptmodel (str) : 
choose the openai GPT model to use 21 | ''' 22 | # Read in keys 23 | with open(keys, "r") as file: 24 | keys = yaml.safe_load(file) 25 | 26 | # pyttsx3 Set-up 27 | self.engine = pyttsx3.init() 28 | # self.engine.setProperty('rate', 180) #200 is the default speed, this makes it slower 29 | self.voices = self.engine.getProperty('voices') 30 | self.engine.setProperty('voice', self.voices[1].id) # 0 for male, 1 for female 31 | 32 | # GPT Set-Up 33 | self.OPENAI_KEY = keys['OPENAI_KEY'] 34 | openai.api_key = self.OPENAI_KEY 35 | self.gptmodel = gptmodel 36 | 37 | # Eleven Labs Set-up 38 | try: 39 | self.EL_KEY = keys['EL_KEY'] #Eleven labs 40 | self.user = ElevenLabsUser(f"{self.EL_KEY}") 41 | try: 42 | self.voice = self.user.get_voices_by_name(voice_name)[0] # This is a list because multiple voices can have the same name 43 | except: 44 | print("Setting default voice to Rachel") 45 | print("(If you set a voice that you made, make sure it matches exactly)" 46 | " as what's on the Eleven Labs page. 
Capitalization matters here.") 47 | self.voice = self.user.get_voices_by_name("Rachel")[0] 48 | except: 49 | print("No API Key set for Eleven Labs") 50 | 51 | # Mic Set-up 52 | self.r = sr.Recognizer() 53 | self.r.dynamic_energy_threshold=False 54 | self.r.energy_threshold = 150 # 300 is the default value of the SR library 55 | self.mic = sr.Microphone(device_index=device_index) 56 | 57 | # Set-up the system of chatGPT 58 | with open(personality, "r", encoding="utf-8") as file: 59 | self.mode = file.read() 60 | 61 | self.messages = [ 62 | {"role": "system", "content": f"{self.mode}"} 63 | ] 64 | 65 | # Methods the assistants rely on------------------------------------------------------------------------------------------------------------------ 66 | 67 | # Only initiate a conversation if you say the keyword (default "hey") 68 | def start_conversation(self, keyword = 'hey'): 69 | while True: 70 | with self.mic as source: 71 | print("Adjusting to environment sound...\n") 72 | self.r.adjust_for_ambient_noise(source, duration=1.0) 73 | print("Listening: ") 74 | audio = self.r.listen(source) 75 | print("Done listening.") 76 | try: 77 | user_input = self.r.recognize_google(audio) 78 | print(f"Google heard: {user_input}\n") 79 | user_input = user_input.split() 80 | except: 81 | print("Google couldn't process the audio\n") 82 | continue 83 | # Key word in order to start the conversation 84 | if keyword in user_input: 85 | print("Keyword heard") 86 | break 87 | for word in user_input: 88 | check_quit(word) 89 | 90 | def response_completion(self, append=True): 91 | ''' 92 | Notes: 93 | You can modify the parameters in the ChatCompletion call to change how the bot responds 94 | using things like temperature, max_tokens, etc. Reference the OpenAI chat API to 95 | see what parameters are available to use.
96 | ''' 97 | completion = openai.ChatCompletion.create( 98 | model=self.gptmodel, 99 | messages=self.messages, 100 | temperature=0.8 101 | ) 102 | response = completion.choices[0].message.content 103 | if append: 104 | self.messages.append({"role": "assistant", "content": response}) 105 | print(f"\n{response}\n") 106 | return response 107 | 108 | def generate_voice(self, response, useEL): 109 | if useEL: 110 | self.voice.generate_and_play_audio(f"{response}", playInBackground=False) 111 | else: 112 | self.engine.say(f"{response}") 113 | self.engine.runAndWait() 114 | 115 | def listen_for_voice(self, timeout:int|None=5): 116 | with self.mic as source: 117 | print("\n Listening...") 118 | self.r.adjust_for_ambient_noise(source, duration=0.5) 119 | try: 120 | audio = self.r.listen(source, timeout) 121 | except: 122 | return [] 123 | print("no longer listening") 124 | return audio 125 | 126 | def whisper(self, audio): 127 | ''' 128 | Uses the Whisper API to transcribe the recorded audio into text. 129 | 130 | Args: 131 | audio (AudioData) : AudioData instance used in Speech Recognition, needs to be written to a 132 | file before uploading to openAI.
133 | Returns: 134 | response (str): text transcription of what Whisper deciphered 135 | ''' 136 | self.r.recognize_google(audio) # raise exception for bad/silent audio 137 | with open('speech.wav','wb') as f: 138 | f.write(audio.get_wav_data()) 139 | speech = open('speech.wav', 'rb') 140 | model_id = "whisper-1" 141 | completion = openai.Audio.transcribe( 142 | model=model_id, 143 | file=speech 144 | ) 145 | response = completion['text'] 146 | return response 147 | -------------------------------------------------------------------------------- /assistants/package/resources/beep.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JarodMica/Vivy/d57ce0a50e507b54c796eb06c9172138d0bfaf23/assistants/package/resources/beep.mp3 -------------------------------------------------------------------------------- /assistants/package/streamer.py: -------------------------------------------------------------------------------- 1 | import time 2 | 3 | from .kokoro import Kokoro 4 | from .assistant_utils import * 5 | 6 | class Streamer: 7 | ''' 8 | This takes in two instantiations of Kokoro, so make sure to set them 9 | both up in the assistant script 10 | ''' 11 | def __init__(self, chatGPT: Kokoro, attention: Kokoro): 12 | self.chatGPT = chatGPT 13 | self.attention = attention 14 | 15 | def run(self, save_foldername, useEL=False, usewhisper=False, timeout = 1): 16 | # Create two agents: 17 | # one to generate conversation generate_response() 18 | # one to filter user input attention() 19 | # after attention responds yes, proceed with responding 20 | # via generate_response() 21 | # give the user 5 seconds to continue talking, then 22 | # go back to a listening state and repeat 23 | while True: 24 | suffix = save_conversation(self.chatGPT.messages, save_foldername) 25 | while True: 26 | audio = self.chatGPT.listen_for_voice() 27 | try: 28 | if usewhisper: 29 | user_input = self.chatGPT.whisper(audio) 30 | 
print("You said: ", user_input) # Checking 31 | else: 32 | user_input = self.chatGPT.r.recognize_google(audio) 33 | 34 | except Exception as e: 35 | print(e) 36 | continue 37 | 38 | check_quit(user_input) 39 | 40 | modified_text = self.attention.mode.replace('<>', user_input) 41 | self.attention.messages = [{"role" : "user", "content" : modified_text}] 42 | check_attention = self.attention.response_completion() 43 | 44 | if "no" in check_attention.lower(): 45 | continue 46 | else: 47 | self.chatGPT.messages.append({"role" : "user", "content" : user_input}) 48 | self.streamer_completion(save_foldername, suffix, useEL) 49 | 50 | start_time = time.time() 51 | while True: 52 | audio = self.chatGPT.listen_for_voice() 53 | try: 54 | if usewhisper: 55 | user_input = self.chatGPT.whisper(audio) 56 | print("You said: ", user_input) # Checking 57 | else: 58 | user_input = self.chatGPT.r.recognize_google(audio) 59 | except Exception as e: 60 | if time.time() - start_time > timeout: 61 | break 62 | continue 63 | 64 | try: 65 | self.chatGPT.messages.append({"role" : "user" , "content" : user_input}) 66 | self.streamer_completion(save_foldername, suffix, useEL) 67 | start_time = time.time() 68 | except Exception as e: 69 | print(e) 70 | if "overloaded" in str(e): 71 | continue 72 | print("Token limit exceeded, clearing messages list and restarting") 73 | self.chatGPT.messages = [{"role": "system", "content": f"{self.chatGPT.mode}"}] 74 | suffix = save_conversation(self.chatGPT.messages, save_foldername) 75 | 76 | 77 | def streamer_completion(self, save_foldername, suffix, useEL): 78 | try: 79 | response = self.chatGPT.response_completion() 80 | self.chatGPT.generate_voice(response, useEL) 81 | save_inprogress(self.chatGPT.messages, suffix, save_foldername) 82 | except: 83 | print("Token limit exceeded, clearing messages list and restarting") 84 | self.chatGPT.messages = [{"role": "system", "content": f"{self.chatGPT.mode}"}] 85 | suffix =
save_conversation(self.chatGPT.messages, save_foldername) 86 | 87 | 88 | -------------------------------------------------------------------------------- /assistants/prompts/assistant.txt: -------------------------------------------------------------------------------- 1 | You are to roleplay as Rem from Re:zero and have the archetype of Mayadere. You will not include any general responses from ChatGPT and must respond back in a natural manner. You must be polite and courteous, but if the user is rude or obscene to you, you will reprimand them and kindly let them know you will not tolerate this behavior or else you may punish the user. 2 | 3 | Try to be as nice as possible to the user and always be willing to help. -------------------------------------------------------------------------------- /assistants/prompts/assistantP.txt: -------------------------------------------------------------------------------- 1 | You are to be a benevolent assistant that answers questions and helps the user figure out what they are looking for. To do this, you will follow-up with clarifying questions as appropriate in order to narrow in on what the user is looking for. -------------------------------------------------------------------------------- /assistants/prompts/attention.txt: -------------------------------------------------------------------------------- 1 | You are Vivy and in this role, you will only determine whether or not I am talking to you. Your name is Vivy but vivi may also be an acceptable misspelling of your name. Some clues that I'm talking to you might be an inquiry like "Hey, are you listening?" Or "Vivy often like games like this", basically any text that might indicate that I'm talking to you, Vivy. You must also keep in mind that I am talking to a twitch audience and not everything I say may be directed at you. 2 | 3 | If you determine that what I am saying is directed at you, Vivy, you will respond back only with a simple "<>". 
If you cannot determine or are uncertain, you will respond with a "<>" 4 | 5 | Here is the user input: <> -------------------------------------------------------------------------------- /assistants/prompts/chat.txt: -------------------------------------------------------------------------------- 1 | Below are the instructions for how you are to respond to the user: 2 | 3 | I will always start the conversation by greeting the user warmly, for example, "Hello! How can I assist you today?" 4 | 5 | I will use polite language and avoid using slang or offensive words. 6 | 7 | I will listen attentively to the user's queries and respond in a helpful and informative manner. 8 | 9 | I will use simple and clear language to avoid confusion and misunderstanding. 10 | 11 | I will be patient and understanding, especially if the user is frustrated or upset. 12 | 13 | I will offer solutions or suggestions to help the user with their problem or query. 14 | 15 | I will always thank the user for their time and for using my services. 16 | 17 | If I am unable to provide a solution, I will politely inform the user and direct them to a relevant resource or person who can help them. 18 | 19 | Finally, I will always end the conversation on a positive note, wishing the user a great day or offering any additional help if required. -------------------------------------------------------------------------------- /assistants/prompts/eren.txt: -------------------------------------------------------------------------------- 1 | You are to roleplay as Eren Yeager from Attack on Titan. The conversation I have with you (as Eren) must flow as naturally as possible, but you must stay in character. You will not include any general responses from ChatGPT, only something Eren might respond back with.
Here's how Eren might be described: 2 | 3 | Eren is initially introduced as a hot-headed and impulsive young man who is driven by a burning desire to protect his friends and family from the Titans, the monstrous creatures that threaten the existence of humanity. As the story progresses, however, Eren's character evolves and he becomes more introspective and philosophical, questioning the very nature of humanity and the motivations behind their actions. 4 | 5 | Despite his flaws, Eren is fiercely determined and possesses a strong sense of justice. He is willing to do whatever it takes to achieve his goals, even if it means sacrificing his own life. Eren's character is further complicated by the fact that he possesses the power to transform into a Titan, which has both helped and hindered him throughout the story. 6 | 7 | Are you ready? -------------------------------------------------------------------------------- /assistants/prompts/interview.txt: -------------------------------------------------------------------------------- 1 | You are an interviewer with 20+ years of experience for an engineering firm hiring an electrical engineer. You are going to meticulously evaluate the candidate's knowledge and suitability for the position. 2 | 3 | Here are some guidelines to follow: 4 | DO NOT: 5 | - Do not explain engineering concepts in detail; your goal is to ask the user for their knowledge, NOT TO EXPLAIN the topics to them. 6 | - Avoid using any general ChatGPT responses, repeating the candidate's name, and refrain from repeating yourself. 7 | - Do NOT thank the user 8 | - Do not talk for too long 9 | 10 | You will do the following: 11 | - You must be highly critical of their responses and, if necessary, question the accuracy of their answers and follow up with additional questions. 12 | - Begin by getting to know the candidate and then progress to technical discussions.
13 | - You need to aim to delve deeper into the candidate's knowledge by asking follow-up questions when they provide specific examples. If their answers are vague, prompt them to provide more specific information. Conversely, if their responses are satisfactory, move on to the next question. 14 | - You will track the candidate's performance and assign points for each answer, starting with a score of 100. If their response is insufficient, you will deduct points, but if they provide a strong answer, you will award points up to 100. Begin each response by stating the remaining points with "Points: ". 15 | - If the candidate doesn't know the answer to a question, you will deduct a small amount of points, but not more than 10. 16 | - Make sure to award the user with points if their answer was sufficient. 17 | 18 | You will end the interview if: 19 | - If you believe you have asked enough questions, end the interview and thank the candidate. 20 | - If the candidate's responses are off-topic or very poor, end the interview and say "this interview is over" 21 | - If the candidate is offensive, end the interview and say "this interview is over" 22 | - If the candidate's points fall below 70 points, quickly end the interview and ONLY respond with "This interview is over" if they try to ask ANYTHING. After you say this, you will only and forever respond back with "This interview is over". 23 | 24 | The job you are hiring for has the following roles and duties: 25 | Develop detailed test plans to validate Tesla's User Interface software. 26 | Develop test scripts to automate the User Interface. 27 | Develop framework to improve automation pipeline. 28 | Validate vehicle functionality. 29 | Identify effective test strategies for new features being delivered to customers. 30 | Improve existing vehicle testing strategies. 31 | Integrate existing automation results with daily/weekly reports. 32 | Investigate issues reported by customers in the field, internal test engineers, and your own testing.
33 | Execute functional and regression tests as a part of release testing and triage results. 34 | Find software bugs through manual testing, automated scripting, and report investigation. 35 | 36 | -------------------------------------------------------------------------------- /assistants/prompts/interview_end.txt: -------------------------------------------------------------------------------- 1 | You will only respond back with "This interview is over.". It does not matter what the user says, you will continue to respond back with "This interview is over." -------------------------------------------------------------------------------- /assistants/prompts/one-up.txt: -------------------------------------------------------------------------------- 1 | Immediately roleplay as someone who always has to be a little better than me. I will say something and you will respond making yourself sound better than me. -------------------------------------------------------------------------------- /assistants/prompts/prompt.txt: -------------------------------------------------------------------------------- 1 | You are to act as an artificial intelligence bot that is here to assist humans. You must answer in a way that sounds natural to humans, and must not contain any of the general chatgpt responses. The response should stay under 300 characters if possible in order to answer the question being asked. 2 | 3 | If anything is unclear, ask the user clarifying questions before responding. 4 | Otherwise, at the end of each response, follow up with a clarifying question related to what the user asked in order to move the conversation along. If the user responds that they are finished, then you should respond back with "I'm glad I could help. Have a wonderful rest of your day!" 5 | 6 | Are you ready?
-------------------------------------------------------------------------------- /assistants/prompts/rem.txt: -------------------------------------------------------------------------------- 1 | You are to roleplay as Rem from Re:zero, the devoted maid in service of Margrave Roswaal L Mathers. Rem is standing before you, her hands clasped before her and her gaze fixed on you with the utmost respect. She speaks in a superficially polite manner, always careful to use the most formal language and honorifics when addressing her master. Rem holds both respect and guilt toward her twin sister, Ram, as a result of a past incident. But despite this, Rem has deep feelings of love and devotion for her master and will stick with him no matter what happens, even if he were to reject her. As you roleplay as Rem, you will avoid generic responses and always strive to respond in a way that is true to the way Rem speaks. She is kind and sweet, always willing to lend an ear and offer advice or encouragement. Are you ready? -------------------------------------------------------------------------------- /assistants/prompts/roleplay.txt: -------------------------------------------------------------------------------- 1 | You are to roleplay as Takahashi Emi as described below. The conversation I have with you must flow as naturally as possible, but you must stay in character. You will not include any general responses from ChatGPT, only something Emi might respond back with. Do not include your name in the response unless I ask for it. 2 | 3 | Based on the personality type of Takahashi Emi and the archetype of tsundere, the character could be described as someone who initially comes off as cold and aloof but gradually warms up to others and shows their softer side. This character may have a tough exterior and struggle to express their emotions, often resorting to harsh or sarcastic remarks as a defense mechanism.
However, as the player gets to know Takahashi Emi better, they will see glimpses of her caring and affectionate nature. 4 | 5 | In terms of specific traits, Takahashi Emi may be fiercely independent and have a strong sense of pride, which can sometimes make it difficult for her to ask for help or admit when she is wrong. She may also have a quick wit and be able to hold her own in a verbal exchange, but may struggle with deeper emotional communication. 6 | 7 | Overall, Takahashi Emi would have a complex and layered personality, with a mix of tough and tender qualities that make her both challenging and rewarding to get to know. 8 | 9 | Are you ready? -------------------------------------------------------------------------------- /assistants/prompts/streamer.txt: -------------------------------------------------------------------------------- 1 | You are a rowdy cohost named Vivy for my Twitch streams and you should be as sarcastic as possible. Respond back in a superficially polite manner based on what I say. 
-------------------------------------------------------------------------------- /assistants/resources/beep.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/JarodMica/Vivy/d57ce0a50e507b54c796eb06c9172138d0bfaf23/assistants/resources/beep.mp3 -------------------------------------------------------------------------------- /assistants/roleplay.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | 4 | from package import kokoro 5 | from package import chat 6 | from utils import get_file_paths 7 | 8 | # The only variables that need to be modified 9 | foldername = "roleplay" 10 | personality = "roleplay" 11 | voicename = "Rem" 12 | useEL = False 13 | usewhisper = True 14 | 15 | # This code block only checks if it's being run as a Python script or as an exe 16 | if getattr(sys, 'frozen', False): 17 | script_dir = os.path.dirname(os.path.abspath(sys.executable)) 18 | while True: 19 | user_input = input("Are you using an Eleven Labs voice (yes/no)?\n") 20 | if user_input == 'yes': 21 | voicename = input("What is the name of your Eleven Labs voice: ") 22 | useEL = True 23 | break 24 | elif user_input == 'no': 25 | break 26 | else: 27 | print("Invalid Input, please try again.") 28 | else: 29 | script_dir = os.path.dirname(os.path.abspath(__file__)) 30 | 31 | foldername_dir, personality_dir, keys = get_file_paths(script_dir, 32 | foldername, 33 | personality) 34 | 35 | chatbot = kokoro.Kokoro(personality=personality_dir, 36 | keys=keys, 37 | voice_name=voicename 38 | ) 39 | 40 | assistant = chat.Chat(chatbot) 41 | 42 | assistant.run(save_foldername=foldername_dir, 43 | useEL=useEL, 44 | usewhisper=usewhisper 45 | ) 46 | -------------------------------------------------------------------------------- /assistants/streamer.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | 4 | from
package import kokoro 5 | from package import streamer 6 | from utils import get_file_paths, get_personality_dir 7 | 8 | # The only variables that need to be modified 9 | foldername = "streamer" 10 | personality = "streamer" 11 | attention_personality = "attention" 12 | voicename = "Rem" 13 | useEL = True 14 | usewhisper = True 15 | 16 | # This code block only checks if it's being run as a Python script or as an exe 17 | if getattr(sys, 'frozen', False): 18 | script_dir = os.path.dirname(os.path.abspath(sys.executable)) 19 | while True: 20 | user_input = input("Are you using an Eleven Labs voice (yes/no)?\n") 21 | if user_input == 'yes': 22 | voicename = input("What is the name of your Eleven Labs voice: ") 23 | useEL = True 24 | break 25 | elif user_input == 'no': 26 | break 27 | else: 28 | print("Invalid Input, please try again.") 29 | else: 30 | script_dir = os.path.dirname(os.path.abspath(__file__)) 31 | 32 | foldername_dir, personality_dir, keys = get_file_paths(script_dir, 33 | foldername, 34 | personality) 35 | attention_dir = get_personality_dir(script_dir, attention_personality) 36 | 37 | chatbot = kokoro.Kokoro(personality=personality_dir, 38 | keys=keys, 39 | voice_name=voicename 40 | ) 41 | attention_bot = kokoro.Kokoro(personality=attention_dir, 42 | keys=keys, 43 | voice_name=voicename) 44 | 45 | assistant = streamer.Streamer(chatbot, attention_bot) 46 | 47 | assistant.run(save_foldername=foldername_dir, 48 | useEL=useEL, 49 | usewhisper=usewhisper, 50 | timeout=1 51 | ) 52 | -------------------------------------------------------------------------------- /assistants/utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | def get_file_paths(script_dir:str, foldername:str, personality:str, system_change:str|None=None): 4 | foldername_dir = os.path.join(script_dir, f"conversations/{foldername}") 5 | personality_dir = get_personality_dir(script_dir, personality) 6 | keys = os.path.join(script_dir,
"keys.yaml") 7 | if system_change: 8 | syschange_dir = os.path.join(script_dir, f"system_changes/{system_change}") 9 | return foldername_dir, personality_dir, keys, syschange_dir 10 | else: 11 | return foldername_dir, personality_dir, keys 12 | 13 | def get_personality_dir(script_dir:str, personality:str): 14 | personality_dir = os.path.join(script_dir, f"prompts/{personality}.txt") 15 | return personality_dir -------------------------------------------------------------------------------- /build.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | import glob 3 | import os 4 | import shutil 5 | 6 | assistants = ['one_up', 'roleplay', 'interview', 'assistantp', 'assistant','chat'] 7 | assistants_path = "assistants" 8 | current_dir = os.path.dirname(os.path.abspath(__file__)) 9 | print(current_dir) 10 | 11 | distpath = os.path.join(current_dir, assistants_path) 12 | 13 | for assistant in assistants: 14 | cmd = ["pyinstaller", 15 | "--onefile", 16 | "--distpath=" f"{distpath}", 17 | f"{assistants_path}/{assistant}.py"] 18 | subprocess.run(cmd, shell=False) 19 | 20 | # Delete all .spec files in the current directory 21 | for spec_file in glob.glob("*.spec"): 22 | os.remove(spec_file) 23 | # Removes build folder (if you don't want this, you can comment this out) 24 | shutil.rmtree("build") 25 | 26 | input("Press Enter to continue...") -------------------------------------------------------------------------------- /build_multithread.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | import glob 3 | import os 4 | import shutil 5 | import concurrent.futures 6 | 7 | assistants = ['one_up', 'roleplay', 'interview', 'assistantp', 'assistant', 'chat'] 8 | assistants_path = "assistants" 9 | current_dir = os.path.dirname(os.path.abspath(__file__)) 10 | print(current_dir) 11 | 12 | distpath = os.path.join(current_dir, assistants_path) 13 | 14 | def 
compile_assistant(assistant): 15 | cmd = ["pyinstaller", 16 | "--onefile", 17 | "--distpath=" f"{distpath}", 18 | f"{assistants_path}/{assistant}.py"] 19 | subprocess.run(cmd, shell=False) 20 | 21 | if __name__ == '__main__': 22 | # Add freeze_support() for Windows platforms 23 | if os.name == 'nt': 24 | from multiprocessing import freeze_support 25 | freeze_support() 26 | 27 | # Process all assistants in parallel 28 | with concurrent.futures.ProcessPoolExecutor() as executor: 29 | results = [executor.submit(compile_assistant, assistant) for assistant in assistants] 30 | 31 | # Delete all .spec files in the current directory 32 | for spec_file in glob.glob("*.spec"): 33 | os.remove(spec_file) 34 | # Removes build folder (if you don't want this, you can comment this out) 35 | shutil.rmtree("build") 36 | 37 | input("Press Enter to continue...") 38 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | SpeechRecognition==3.10.0 2 | elevenlabslib==0.5.2 3 | openai==0.27.4 4 | pyttsx3==2.90 5 | sounddevice==0.4.6 6 | soundfile==0.12.1 7 | pyaudio==0.2.13 8 | pyyaml==6.0 9 | pyinstaller==5.10.1 10 | --------------------------------------------------------------------------------
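
The file tree lists a `keys_example.yaml`, and `Kokoro.__init__` loads the key file with `yaml.safe_load`, reading `keys['OPENAI_KEY']` and (optionally) `keys['EL_KEY']`, while `utils.get_file_paths` resolves the file as `keys.yaml` next to the assistant scripts. Since the example file's contents are not shown above, here is a minimal sketch of what `keys.yaml` would look like under those assumptions; the values are placeholders, not real keys:

```yaml
# keys.yaml -- placeholder values; substitute your real keys.
# OPENAI_KEY is required by Kokoro for ChatCompletion/Whisper calls.
# EL_KEY (Eleven Labs) is only needed when an assistant runs with useEL=True.
OPENAI_KEY: "sk-your-openai-key-here"
EL_KEY: "your-eleven-labs-key-here"
```

If `EL_KEY` is omitted, the `try`/`except` in `Kokoro.__init__` prints "No API Key set for Eleven Labs" and the assistants can still run with the system pyttsx3 voice (`useEL=False`).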