├── LICENSE
├── README.md
├── audio_player.py
├── azure_text_to_speech.py
├── chat_god_app.py
├── obs_websockets.py
├── requirements.txt
├── static
│   └── css
│       └── style.css
├── templates
│   └── index.html
├── voices_manager.py
└── websockets_auth.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 DougDougGithub
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ChatGodApp
2 |
3 | Written by DougDoug, with help from Banana!
4 | You are welcome to adapt/use this code for whatever you'd like. Credit is appreciated but not necessary.
5 |
6 | ## SETUP
7 | 1) This was written in Python 3.9.2. Install page here: https://www.python.org/downloads/release/python-392/
8 |
9 | 2) Run "pip install -r requirements.txt" to install all modules.
10 |
11 | 3) This uses the twitchio module to connect to your Twitch channel.
12 | First you must generate an Access Token for your account. You can do this at https://twitchtokengenerator.com/ (just make sure the Access Token has chat:read and chat:edit enabled).
13 | Once you've generated an Access Token, set it as a Windows environment variable named TWITCH_ACCESS_TOKEN.
14 | Then update the TWITCH_CHANNEL_NAME variable in chat_god_app.py to the name of the Twitch channel you are connecting to.
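
To sanity-check the token before launching the app, you can read back the same environment variable the bot uses (a minimal sketch; remember to restart your terminal after setting the variable so it's visible to Python):

```python
import os

# chat_god_app.py reads the token from this exact variable name
token = os.getenv('TWITCH_ACCESS_TOKEN')
if token is None:
    print("TWITCH_ACCESS_TOKEN is not set!")
else:
    print(f"Found a Twitch Access Token ({len(token)} characters).")
```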
15 |
16 | 4) This uses Microsoft Azure's TTS service for the text-to-speech voices.
17 | First you must make an account and sign up for Microsoft Azure's services.
18 | Then use their site to generate an access key and region for the text-to-speech service.
19 | Then, set these as Windows environment variables named AZURE_TTS_KEY and AZURE_TTS_REGION.
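
To verify the Azure credentials before running the full app, you can make a tiny synthesis call (a minimal sketch using the same SpeechConfig setup as azure_text_to_speech.py):

```python
import os
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription=os.getenv('AZURE_TTS_KEY'), region=os.getenv('AZURE_TTS_REGION'))
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
result = synthesizer.speak_text_async("Azure credentials are working.").get()
print(result.reason)  # Expect ResultReason.SynthesizingAudioCompleted on success
```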
20 |
21 | 5) Optionally, you can use OBS Websockets and an OBS plugin to make images move while talking.
22 | First open up OBS. Make sure you're running version 28.X or later.
23 | Click Tools, then WebSocket Server Settings.
24 | Make sure "Enable WebSocket server" is checked. Make sure Server Port is '4455', and set the Server Password to 'TwitchChat9'.
25 | Next, install the Move plugin for OBS: https://obsproject.com/forum/resources/move.913/
26 | Now you can use the plugin to add a filter to an audio source that will change an image's transform based on the audio waveform.
27 | For example, I have a filter that will move each of the player images whenever text-to-speech audio is playing.
28 | Lastly, in the voices_manager.py code, update the OBS section so that it will turn the corresponding filters on and off when text-to-speech audio is being played.
29 | Note that OBS must be open when you're running this code, otherwise OBS WebSockets won't be able to connect.
30 | If you don't need the images to move while talking, you can just delete the OBS portions of the code.
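
Once OBS is open with the WebSocket server enabled, you can test the connection and a Move filter from Python (a minimal sketch using the OBSWebsocketsManager class from obs_websockets.py; "Line In" and "Audio Move - Player 1" are placeholder source/filter names, so substitute your own):

```python
import time
from obs_websockets import OBSWebsocketsManager

obs_manager = OBSWebsocketsManager()  # Connects using the settings in websockets_auth.py
obs_manager.set_filter_visibility("Line In", "Audio Move - Player 1", True)   # Start moving the image
time.sleep(3)
obs_manager.set_filter_visibility("Line In", "Audio Move - Player 1", False)  # Stop moving it
obs_manager.disconnect()
```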
31 |
32 | ## BASIC APP USAGE
33 |
34 | 1) Run chat_god_app.py, then open http://127.0.0.1:5000 in a browser or add it as a browser source in OBS.
35 |
36 | 2) You can enter a user's name in the "Choose User" field and hit enter to manually assign them as that player.
37 | Alternatively, viewers can join the pool of potential players by typing !player1, !player2, or !player3.
38 | Then, when you hit Pick Random, it will pick one of the viewers randomly from that player pool.
39 |
40 | 3) Once a user is picked, their Twitch messages will be automatically read out loud via Azure TTS.
41 | You can change the voice and the voice style using the drop-down menus on the web app.
42 | If a user starts their message with (angry), (cheerful), (excited), (hopeful), (sad), (shouting), (shout), (terrified), (unfriendly), (whispering), (whisper), or (random), it will automatically use that voice style.
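
For example, here is roughly how the prefix parsing in azure_text_to_speech.py maps a prefixed message to a voice style (a sketch of the logic inside text_to_audio, which lowercases the message before checking for a prefix):

```python
from azure_text_to_speech import AZURE_PREFIXES

message = "(Whisper) don't tell anyone about this".lower()
prefix = message[0:message.find(")") + 1]      # "(whisper)"
style = AZURE_PREFIXES.get(prefix, "default")  # "whispering"; falls back if no known prefix
spoken_text = message.removeprefix(prefix)     # the rest of the message
print(style, "->", spoken_text)
```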
43 |
--------------------------------------------------------------------------------
/audio_player.py:
--------------------------------------------------------------------------------
1 | import pygame
2 | import time
3 | import soundfile as sf
4 | import os
5 | from mutagen.mp3 import MP3
6 |
7 | class AudioManager:
8 |
9 | def __init__(self):
10 | pygame.mixer.init()
11 |
12 | def play_audio(self, file_path, sleep_during_playback=True, delete_file=False, play_using_music=True):
13 | """
14 | Parameters:
15 | file_path (str): path to the audio file
16 | sleep_during_playback (bool): means program will wait for length of audio file before returning
17 | delete_file (bool): means file is deleted after playback (note that this shouldn't be used for multithreaded function calls)
18 | play_using_music (bool): means it will use Pygame Music, if false then uses pygame Sound instead
19 | """
20 | print(f"Playing file with pygame: {file_path}")
21 |         pygame.mixer.init()  # Re-initialize the mixer, in case a previous delete_file call quit it
22 | if play_using_music:
23 | # Pygame Mixer only plays one file at a time, but audio doesn't glitch
24 | pygame.mixer.music.load(file_path)
25 | pygame.mixer.music.play()
26 | else:
27 | # Pygame Sound lets you play multiple sounds simultaneously, but the audio glitches for longer files
28 | pygame_sound = pygame.mixer.Sound(file_path)
29 | pygame_sound.play()
30 |
31 | if sleep_during_playback:
32 | # Calculate length of the file, based on the file format
33 | _, ext = os.path.splitext(file_path) # Get the extension of this file
34 | if ext.lower() == '.wav':
35 | wav_file = sf.SoundFile(file_path)
36 | file_length = wav_file.frames / wav_file.samplerate
37 | wav_file.close()
38 | elif ext.lower() == '.mp3':
39 | mp3_file = MP3(file_path)
40 | file_length = mp3_file.info.length
41 | else:
42 | print("Cannot play audio, unknown file type")
43 | return
44 |
45 | # Sleep until file is done playing
46 | time.sleep(file_length)
47 |
48 | # Delete the file
49 | if delete_file:
50 | # Stop pygame so file can be deleted
51 |             # Note, this can cause issues if this function is being run on multiple threads, since it quits the mixer for the other threads too
52 | pygame.mixer.music.stop()
53 | pygame.mixer.quit()
54 |
55 | try:
56 | os.remove(file_path)
57 |                 print(f"Deleted the audio file: {file_path}")
58 | except PermissionError:
59 | print(f"Couldn't remove {file_path} because it is being used by another process.")
--------------------------------------------------------------------------------
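
A minimal usage sketch for AudioManager (assumes a test.wav file exists next to the script; delete_file is safe here because playback is single-threaded):

```python
from audio_player import AudioManager

audio_manager = AudioManager()
# Blocks for the length of the file, then deletes it
audio_manager.play_audio("test.wav", sleep_during_playback=True, delete_file=True)
```
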
/azure_text_to_speech.py:
--------------------------------------------------------------------------------
1 | import os
2 | import random
3 | import azure.cognitiveservices.speech as speechsdk
4 | from gtts import gTTS
5 | from pydub import AudioSegment
6 | import pygame
7 |
8 | AZURE_VOICES = [
9 | "en-US-DavisNeural",
10 | "en-US-TonyNeural",
11 | "en-US-JasonNeural",
12 | "en-US-GuyNeural",
13 | "en-US-JaneNeural",
14 | "en-US-NancyNeural",
15 | "en-US-JennyNeural",
16 | "en-US-AriaNeural",
17 | ]
18 |
19 | AZURE_VOICE_STYLES = [
20 |     # Currently using 9 of the 11 available voice styles
21 | # Note that certain styles aren't available on all voices
22 | "angry",
23 | "cheerful",
24 | "excited",
25 | "hopeful",
26 | "sad",
27 | "shouting",
28 | "terrified",
29 | "unfriendly",
30 | "whispering"
31 | ]
32 |
33 | AZURE_PREFIXES = {
34 | "(angry)" : "angry",
35 | "(cheerful)" : "cheerful",
36 | "(excited)" : "excited",
37 | "(hopeful)" : "hopeful",
38 | "(sad)" : "sad",
39 | "(shouting)" : "shouting",
40 | "(shout)" : "shouting",
41 | "(terrified)" : "terrified",
42 | "(unfriendly)" : "unfriendly",
43 | "(whispering)" : "whispering",
44 | "(whisper)" : "whispering",
45 | "(random)" : "random"
46 | }
47 |
48 | class AzureTTSManager:
49 | azure_speechconfig = None
50 | azure_synthesizer = None
51 |
52 | def __init__(self):
53 | pygame.init()
54 | # Creates an instance of a speech config with specified subscription key and service region.
55 |         # The subscription key and service region are read from the AZURE_TTS_KEY and AZURE_TTS_REGION environment variables (see the README)
56 | self.azure_speechconfig = speechsdk.SpeechConfig(subscription=os.getenv('AZURE_TTS_KEY'), region=os.getenv('AZURE_TTS_REGION'))
57 | # Set the voice name, refer to https://aka.ms/speech/voices/neural for full list.
58 | self.azure_speechconfig.speech_synthesis_voice_name = "en-US-AriaNeural"
59 |         # Creates a speech synthesizer. Setting audio_config to None means it won't play the synthesized text out loud.
60 | self.azure_synthesizer = speechsdk.SpeechSynthesizer(speech_config=self.azure_speechconfig, audio_config=None)
61 |
62 | # Returns the path to the new .wav file
63 | def text_to_audio(self, text: str, voice_name="random", voice_style="random"):
64 | if voice_name == "random":
65 | voice_name = random.choice(AZURE_VOICES)
66 | if voice_style == "random":
67 | voice_style = random.choice(AZURE_VOICE_STYLES)
68 |
69 | # Change the voice style if the message includes a prefix
70 | text = text.lower()
71 | if text.startswith("(") and ")" in text:
72 | prefix = text[0:(text.find(")")+1)]
73 | if prefix in AZURE_PREFIXES:
74 | voice_style = AZURE_PREFIXES[prefix]
75 | text = text.removeprefix(prefix)
76 | if len(text) == 0:
77 | print("This message was empty")
78 | return
79 | if voice_style == "random":
80 | voice_style = random.choice(AZURE_VOICE_STYLES)
81 |
82 |         ssml_text = f"<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='en-US'><voice name='{voice_name}'><mstts:express-as style='{voice_style}'>{text}</mstts:express-as></voice></speak>"
83 | result = self.azure_synthesizer.speak_ssml_async(ssml_text).get()
84 |
85 | output = os.path.join(os.path.abspath(os.curdir), f"_Msg{str(hash(text))}{str(hash(voice_name))}{str(hash(voice_style))}.wav")
86 | if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
87 | stream = speechsdk.AudioDataStream(result)
88 | stream.save_to_wav_file(output)
89 | else:
90 | # If Azure fails, use gTTS instead. gTTS saves as an mp3 by default, so convert it to a wav file after
91 | print("\n Azure failed, using gTTS instead \n")
92 | output_mp3 = output.replace(".wav", ".mp3")
93 | msgAudio = gTTS(text=text, lang='en', slow=False)
94 | msgAudio.save(output_mp3)
95 | audiosegment = AudioSegment.from_mp3(output_mp3)
96 | audiosegment.export(output, format="wav")
97 |
98 | return output
99 |
100 |
101 | # Tests here
102 | if __name__ == '__main__':
103 | tts_manager = AzureTTSManager()
104 | pygame.mixer.init()
105 |
106 | file_path = tts_manager.text_to_audio("Here's my test audio!!", "en-US-DavisNeural")
107 | pygame.mixer.music.load(file_path)
108 | pygame.mixer.music.play()
109 |
110 | while True:
111 | stuff_to_say = input("\nNext question? \n\n")
112 | if len(stuff_to_say) == 0:
113 | continue
114 | file_path = tts_manager.text_to_audio(stuff_to_say)
115 | pygame.mixer.music.load(file_path)
116 | pygame.mixer.music.play()
117 |
--------------------------------------------------------------------------------
/chat_god_app.py:
--------------------------------------------------------------------------------
1 | from twitchio.ext import commands
2 | from twitchio import Message
3 | from datetime import datetime, timedelta
4 | from flask import Flask, render_template, session, request
5 | from flask_socketio import SocketIO, emit
6 | import asyncio
7 | import threading
8 | import pytz
9 | import random
10 | import os
11 | from voices_manager import TTSManager
12 |
13 | TWITCH_CHANNEL_NAME = 'dougdoug' # Replace this with your channel name
14 |
16 | app = Flask(__name__)
17 | socketio = SocketIO(app, async_mode="threading")
18 | print(socketio.async_mode)
19 |
20 | @app.route("/")
21 | def home():
22 | return render_template('index.html') #redirects to index.html in templates folder
23 |
24 | @socketio.event
25 | def connect(): #when socket connects, send data confirming connection
26 | socketio.emit('message_send', {'message': "Connected successfully!", 'current_user': "Temp User", 'user_number': "1"})
27 |
28 | @socketio.on("tts")
29 | def toggletts(value):
30 | print("TTS: Received the value " + str(value['checked']))
31 | if value['user_number'] == "1":
32 | twitchbot.tts_enabled_1 = value['checked']
33 | elif value['user_number'] == "2":
34 | twitchbot.tts_enabled_2 = value['checked']
35 | elif value['user_number'] == "3":
36 | twitchbot.tts_enabled_3 = value['checked']
37 |
38 | @socketio.on("pickrandom")
39 | def pickrandom(value):
40 | twitchbot.randomUser(value['user_number'])
41 | print("Getting new random user for user " + value['user_number'])
42 |
43 | @socketio.on("choose")
44 | def chooseuser(value):
45 | if value['user_number'] == "1":
46 | twitchbot.current_user_1 = value['chosen_user'].lower()
47 | socketio.emit('message_send',
48 | {'message': f"{twitchbot.current_user_1} was picked!",
49 | 'current_user': f"{twitchbot.current_user_1}",
50 | 'user_number': value['user_number']})
51 | elif value['user_number'] == "2":
52 | twitchbot.current_user_2 = value['chosen_user'].lower()
53 | socketio.emit('message_send',
54 | {'message': f"{twitchbot.current_user_2} was picked!",
55 | 'current_user': f"{twitchbot.current_user_2}",
56 | 'user_number': value['user_number']})
57 | elif value['user_number'] == "3":
58 | twitchbot.current_user_3 = value['chosen_user'].lower()
59 | socketio.emit('message_send',
60 | {'message': f"{twitchbot.current_user_3} was picked!",
61 | 'current_user': f"{twitchbot.current_user_3}",
62 | 'user_number': value['user_number']})
63 |
64 | @socketio.on("voicename")
65 | def choose_voice_name(value):
66 |     if value['voice_name'] is not None:
67 | twitchbot.update_voice_name(value['user_number'], value['voice_name'])
68 | print("Updating voice name to: " + value['voice_name'])
69 |
70 | @socketio.on("voicestyle")
71 | def choose_voice_style(value):
72 |     if value['voice_style'] is not None:
73 | twitchbot.update_voice_style(value['user_number'], value['voice_style'])
74 | print("Updating voice style to: " + value['voice_style'])
75 |
76 |
77 | class Bot(commands.Bot):
78 | current_user_1 = None
79 | current_user_2 = None
80 | current_user_3 = None
81 | tts_enabled_1 = True
82 | tts_enabled_2 = True
83 | tts_enabled_3 = True
84 | keypassphrase_1 = "!player1"
85 | keypassphrase_2 = "!player2"
86 | keypassphrase_3 = "!player3"
87 | user_pool_1 = {} #dict of username and time last chatted
88 | user_pool_2 = {} #dict of username and time last chatted
89 | user_pool_3 = {} #dict of username and time last chatted
90 | seconds_active = 450 # of seconds until a chatter is booted from the list
91 | max_users = 2000 # of users who can be in user pool
92 | tts_manager = None
93 |
94 | def __init__(self):
95 | self.tts_manager = TTSManager()
96 |
97 | #connects to twitch channel
98 | super().__init__(token=os.getenv('TWITCH_ACCESS_TOKEN'), prefix='?', initial_channels=[TWITCH_CHANNEL_NAME])
99 |
100 | async def event_ready(self):
101 | print(f'Logged in as | {self.nick}')
102 | print(f'User id is | {self.user_id}')
103 |
104 | async def event_message(self, message):
105 | await self.process_message(message)
106 |
107 | async def process_message(self, message: Message):
108 | # print("We got a message from this person: " + message.author.name)
109 | # print("Their message was " + message.content)
110 |
111 | # If this is our current_user, read out their message
112 | if message.author.name == self.current_user_1:
113 | socketio.emit('message_send',
114 | {'message': f"{message.content}",
115 | 'current_user': f"{self.current_user_1}",
116 | 'user_number': "1"})
117 | if self.tts_enabled_1:
118 | self.tts_manager.text_to_audio(message.content, "1")
119 | elif message.author.name == self.current_user_2:
120 | socketio.emit('message_send',
121 | {'message': f"{message.content}",
122 | 'current_user': f"{self.current_user_2}",
123 | 'user_number': "2"})
124 | if self.tts_enabled_2:
125 | self.tts_manager.text_to_audio(message.content, "2")
126 | elif message.author.name == self.current_user_3:
127 | socketio.emit('message_send',
128 | {'message': f"{message.content}",
129 | 'current_user': f"{self.current_user_3}",
130 | 'user_number': "3"})
131 | if self.tts_enabled_3:
132 | self.tts_manager.text_to_audio(message.content, "3")
133 |
134 | # Add this chatter to the user_pool
135 | if message.content == self.keypassphrase_1:
136 | if message.author.name.lower() in self.user_pool_1: # Remove this chatter from pool if they're already there
137 | self.user_pool_1.pop(message.author.name.lower())
138 | self.user_pool_1[message.author.name.lower()] = message.timestamp # Add user to end of pool with new msg time
139 | # Now we remove the oldest viewer if they're past the activity threshold, or if we're past the max # of users
140 | activity_threshold = datetime.now(pytz.utc) - timedelta(seconds=self.seconds_active) # calculate the cutoff time
141 | oldest_user = list(self.user_pool_1.keys())[0] # The first user in the dict is the user who chatted longest ago
142 | if self.user_pool_1[oldest_user].replace(tzinfo=pytz.utc) < activity_threshold or len(self.user_pool_1) > self.max_users:
143 | self.user_pool_1.pop(oldest_user) # remove them from the list
144 | if len(self.user_pool_1) == self.max_users:
145 | print(f"{oldest_user} was popped due to hitting max users")
146 | else:
147 | print(f"{oldest_user} was popped due to not talking for {self.seconds_active} seconds")
148 | elif message.content == self.keypassphrase_2:
149 | if message.author.name.lower() in self.user_pool_2: # Remove this chatter from pool if they're already there
150 | self.user_pool_2.pop(message.author.name.lower())
151 | self.user_pool_2[message.author.name.lower()] = message.timestamp # Add user to end of pool with new msg time
152 | # Now we remove the oldest viewer if they're past the activity threshold, or if we're past the max # of users
153 | activity_threshold = datetime.now(pytz.utc) - timedelta(seconds=self.seconds_active) # calculate the cutoff time
154 | oldest_user = list(self.user_pool_2.keys())[0] # The first user in the dict is the user who chatted longest ago
155 | if self.user_pool_2[oldest_user].replace(tzinfo=pytz.utc) < activity_threshold or len(self.user_pool_2) > self.max_users:
156 | self.user_pool_2.pop(oldest_user) # remove them from the list
157 | if len(self.user_pool_2) == self.max_users:
158 | print(f"{oldest_user} was popped due to hitting max users")
159 | else:
160 | print(f"{oldest_user} was popped due to not talking for {self.seconds_active} seconds")
161 | elif message.content == self.keypassphrase_3:
162 | if message.author.name.lower() in self.user_pool_3: # Remove this chatter from pool if they're already there
163 | self.user_pool_3.pop(message.author.name.lower())
164 | self.user_pool_3[message.author.name.lower()] = message.timestamp # Add user to end of pool with new msg time
165 | # Now we remove the oldest viewer if they're past the activity threshold, or if we're past the max # of users
166 | activity_threshold = datetime.now(pytz.utc) - timedelta(seconds=self.seconds_active) # calculate the cutoff time
167 | oldest_user = list(self.user_pool_3.keys())[0] # The first user in the dict is the user who chatted longest ago
168 | if self.user_pool_3[oldest_user].replace(tzinfo=pytz.utc) < activity_threshold or len(self.user_pool_3) > self.max_users:
169 | self.user_pool_3.pop(oldest_user) # remove them from the list
170 | if len(self.user_pool_3) == self.max_users:
171 | print(f"{oldest_user} was popped due to hitting max users")
172 | else:
173 | print(f"{oldest_user} was popped due to not talking for {self.seconds_active} seconds")
174 |
175 |
176 | #picks a random user from the queue
177 | def randomUser(self, user_number):
178 | try:
179 | if user_number == "1":
180 | self.current_user_1 = random.choice(list(self.user_pool_1.keys()))
181 | socketio.emit('message_send',
182 | {'message': f"{self.current_user_1} was picked!",
183 | 'current_user': f"{self.current_user_1}",
184 | 'user_number': user_number})
185 | print("Random User is: " + self.current_user_1)
186 | elif user_number == "2":
187 | self.current_user_2 = random.choice(list(self.user_pool_2.keys()))
188 | socketio.emit('message_send',
189 | {'message': f"{self.current_user_2} was picked!",
190 | 'current_user': f"{self.current_user_2}",
191 | 'user_number': user_number})
192 | print("Random User is: " + self.current_user_2)
193 | elif user_number == "3":
194 | self.current_user_3 = random.choice(list(self.user_pool_3.keys()))
195 | socketio.emit('message_send',
196 | {'message': f"{self.current_user_3} was picked!",
197 | 'current_user': f"{self.current_user_3}",
198 | 'user_number': user_number})
199 | print("Random User is: " + self.current_user_3)
200 | except Exception:
201 |             return  # Most likely the user pool was empty, so random.choice had no one to pick
202 |
203 | def update_voice_name(self, user_number, voice_name):
204 | self.tts_manager.update_voice_name(user_number, voice_name)
205 |
206 | def update_voice_style(self, user_number, voice_style):
207 | self.tts_manager.update_voice_style(user_number, voice_style)
208 |
209 |
210 | def startTwitchBot():
211 | global twitchbot
212 | asyncio.set_event_loop(asyncio.new_event_loop())
213 | twitchbot = Bot()
214 | twitchbot.run()
215 |
216 | if __name__ == '__main__':
217 |
218 | # Creates and runs the twitchio bot on a separate thread
219 | bot_thread = threading.Thread(target=startTwitchBot)
220 | bot_thread.start()
221 |
222 | socketio.run(app)
223 |
--------------------------------------------------------------------------------
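
Note: voices_manager.py appears in the directory tree but isn't included above. Based on how chat_god_app.py calls it (TTSManager(), text_to_audio(text, user_number), update_voice_name(), update_voice_style()), a hypothetical skeleton could look like the sketch below; the per-player dictionaries are assumptions, and the real file also toggles the OBS Move filters described in the README:

```python
# voices_manager.py (hypothetical skeleton, not the original file)
from audio_player import AudioManager
from azure_text_to_speech import AzureTTSManager

class TTSManager:
    def __init__(self):
        self.azure_tts = AzureTTSManager()
        self.audio_manager = AudioManager()
        # One voice name/style per player, keyed by user_number ("1", "2", "3")
        self.voice_names = {"1": "random", "2": "random", "3": "random"}
        self.voice_styles = {"1": "random", "2": "random", "3": "random"}

    def update_voice_name(self, user_number, voice_name):
        self.voice_names[user_number] = voice_name

    def update_voice_style(self, user_number, voice_style):
        self.voice_styles[user_number] = voice_style

    def text_to_audio(self, text, user_number):
        file_path = self.azure_tts.text_to_audio(text, self.voice_names[user_number], self.voice_styles[user_number])
        if file_path is not None:  # text_to_audio returns None for empty messages
            # The OBS talking filters would be enabled here and disabled after playback
            self.audio_manager.play_audio(file_path, sleep_during_playback=True, delete_file=True)
```
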
/obs_websockets.py:
--------------------------------------------------------------------------------
1 | import time
2 | import sys
3 | from obswebsocket import obsws, requests # noqa: E402
4 | from websockets_auth import WEBSOCKET_HOST, WEBSOCKET_PORT, WEBSOCKET_PASSWORD
5 |
6 | ##########################################################
7 | ##########################################################
8 |
9 | class OBSWebsocketsManager:
10 | ws = None
11 |
12 | def __init__(self):
13 | # Connect to websockets
14 | self.ws = obsws(WEBSOCKET_HOST, WEBSOCKET_PORT, WEBSOCKET_PASSWORD)
15 | try:
16 | self.ws.connect()
17 |         except Exception:
18 | print("\nPANIC!!\nCOULD NOT CONNECT TO OBS!\nDouble check that you have OBS open and that your websockets server is enabled in OBS.")
19 | time.sleep(10)
20 | sys.exit()
21 | print("Connected to OBS Websockets!\n")
22 |
23 | def disconnect(self):
24 | self.ws.disconnect()
25 |
26 | # Set the current scene
27 | def set_scene(self, new_scene):
28 | self.ws.call(requests.SetCurrentProgramScene(sceneName=new_scene))
29 |
30 | # Set the visibility of any source's filters
31 | def set_filter_visibility(self, source_name, filter_name, filter_enabled=True):
32 | self.ws.call(requests.SetSourceFilterEnabled(sourceName=source_name, filterName=filter_name, filterEnabled=filter_enabled))
33 |
34 | # Set the visibility of any source
35 | def set_source_visibility(self, scene_name, source_name, source_visible=True):
36 | response = self.ws.call(requests.GetSceneItemId(sceneName=scene_name, sourceName=source_name))
37 | myItemID = response.datain['sceneItemId']
38 | self.ws.call(requests.SetSceneItemEnabled(sceneName=scene_name, sceneItemId=myItemID, sceneItemEnabled=source_visible))
39 |
40 | # Returns the current text of a text source
41 | def get_text(self, source_name):
42 | response = self.ws.call(requests.GetInputSettings(inputName=source_name))
43 | return response.datain["inputSettings"]["text"]
44 |
45 |     # Sets the text of a text source
46 | def set_text(self, source_name, new_text):
47 | self.ws.call(requests.SetInputSettings(inputName=source_name, inputSettings = {'text': new_text}))
48 |
49 | def get_source_transform(self, scene_name, source_name):
50 | response = self.ws.call(requests.GetSceneItemId(sceneName=scene_name, sourceName=source_name))
51 | myItemID = response.datain['sceneItemId']
52 | response = self.ws.call(requests.GetSceneItemTransform(sceneName=scene_name, sceneItemId=myItemID))
53 | transform = {}
54 | transform["positionX"] = response.datain["sceneItemTransform"]["positionX"]
55 | transform["positionY"] = response.datain["sceneItemTransform"]["positionY"]
56 | transform["scaleX"] = response.datain["sceneItemTransform"]["scaleX"]
57 | transform["scaleY"] = response.datain["sceneItemTransform"]["scaleY"]
58 | transform["rotation"] = response.datain["sceneItemTransform"]["rotation"]
59 | transform["sourceWidth"] = response.datain["sceneItemTransform"]["sourceWidth"] # original width of the source
60 |         transform["sourceHeight"] = response.datain["sceneItemTransform"]["sourceHeight"] # original height of the source
61 | transform["width"] = response.datain["sceneItemTransform"]["width"] # current width of the source after scaling, not including cropping. If the source has been flipped horizontally, this number will be negative.
62 | transform["height"] = response.datain["sceneItemTransform"]["height"] # current height of the source after scaling, not including cropping. If the source has been flipped vertically, this number will be negative.
63 | transform["cropLeft"] = response.datain["sceneItemTransform"]["cropLeft"] # the amount cropped off the *original source width*. This is NOT scaled, must multiply by scaleX to get current # of cropped pixels
64 | transform["cropRight"] = response.datain["sceneItemTransform"]["cropRight"] # the amount cropped off the *original source width*. This is NOT scaled, must multiply by scaleX to get current # of cropped pixels
65 | transform["cropTop"] = response.datain["sceneItemTransform"]["cropTop"] # the amount cropped off the *original source height*. This is NOT scaled, must multiply by scaleY to get current # of cropped pixels
66 | transform["cropBottom"] = response.datain["sceneItemTransform"]["cropBottom"] # the amount cropped off the *original source height*. This is NOT scaled, must multiply by scaleY to get current # of cropped pixels
67 | return transform
68 |
69 | # The transform should be a dictionary containing any of the following keys with corresponding values
70 | # positionX, positionY, scaleX, scaleY, rotation, width, height, sourceWidth, sourceHeight, cropTop, cropBottom, cropLeft, cropRight
71 | # e.g. {"scaleX": 2, "scaleY": 2.5}
72 | # Note: there are other transform settings, like alignment, etc, but these feel like the main useful ones.
73 | # Use get_source_transform to see the full list
74 | def set_source_transform(self, scene_name, source_name, new_transform):
75 | response = self.ws.call(requests.GetSceneItemId(sceneName=scene_name, sourceName=source_name))
76 | myItemID = response.datain['sceneItemId']
77 | self.ws.call(requests.SetSceneItemTransform(sceneName=scene_name, sceneItemId=myItemID, sceneItemTransform=new_transform))
78 |
79 | # Note: an input, like a text box, is a type of source. This will get *input-specific settings*, not the broader source settings like transform and scale
80 | # For a text source, this will return settings like its font, color, etc
81 | def get_input_settings(self, input_name):
82 | return self.ws.call(requests.GetInputSettings(inputName=input_name))
83 |
84 | # Get list of all the input types
85 | def get_input_kind_list(self):
86 | return self.ws.call(requests.GetInputKindList())
87 |
88 | # Get list of all items in a certain scene
89 | def get_scene_items(self, scene_name):
90 | return self.ws.call(requests.GetSceneItemList(sceneName=scene_name))
91 |
92 |
93 | if __name__ == '__main__':
94 |
95 | print("Connecting to OBS Websockets")
96 | obswebsockets_manager = OBSWebsocketsManager()
97 |
98 | print("Changing visibility on a source \n\n")
99 | obswebsockets_manager.set_source_visibility('*** Mid Monitor', "Elgato Cam Link", False)
100 | time.sleep(3)
101 | obswebsockets_manager.set_source_visibility('*** Mid Monitor', "Elgato Cam Link", True)
102 | time.sleep(3)
103 |
104 | print("\nEnabling filter on a scene...\n")
105 | time.sleep(3)
106 | obswebsockets_manager.set_filter_visibility("/// TTS Characters", "Move Source - Godrick - Up", True)
107 | time.sleep(3)
108 | obswebsockets_manager.set_filter_visibility("/// TTS Characters", "Move Source - Godrick - Down", True)
109 | time.sleep(5)
110 |
111 | print("Swapping scene!")
112 | obswebsockets_manager.set_scene('*** Camera (Wide)')
113 | time.sleep(3)
114 | print("Swapping back! \n\n")
115 | obswebsockets_manager.set_scene('*** Mid Monitor')
116 |
117 | print("Changing visibility on scroll filter and Audio Move filter \n\n")
118 | obswebsockets_manager.set_filter_visibility("Line In", "Audio Move - Chat God", True)
119 | obswebsockets_manager.set_filter_visibility("Middle Monitor", "DS3 - Scroll", True)
120 | time.sleep(3)
121 | obswebsockets_manager.set_filter_visibility("Line In", "Audio Move - Chat God", False)
122 | obswebsockets_manager.set_filter_visibility("Middle Monitor", "DS3 - Scroll", False)
123 |
124 | print("Getting a text source's current text! \n\n")
125 | current_text = obswebsockets_manager.get_text("??? Challenge Title ???")
126 | print(f"Here's its current text: {current_text}\n\n")
127 |
128 | print("Changing a text source's text! \n\n")
129 | obswebsockets_manager.set_text("??? Challenge Title ???", "Here's my new text!")
130 | time.sleep(3)
131 | obswebsockets_manager.set_text("??? Challenge Title ???", current_text)
132 | time.sleep(1)
133 |
134 | print("Getting a source's transform!")
135 | transform = obswebsockets_manager.get_source_transform('*** Mid Monitor', "Middle Monitor")
136 | print(f"Here's the transform: {transform}\n\n")
137 |
138 | print("Setting a source's transform!")
139 | new_transform = {"scaleX": 2, "scaleY": 2}
140 | obswebsockets_manager.set_source_transform('*** Mid Monitor', "Middle Monitor", new_transform)
141 | time.sleep(3)
142 | print("Setting the transform back. \n\n")
143 | obswebsockets_manager.set_source_transform('*** Mid Monitor', "Middle Monitor", transform)
144 |
145 | response = obswebsockets_manager.get_input_settings("??? Challenge Title ???")
146 | print(f"\nHere are the input settings:{response}\n")
147 | time.sleep(2)
148 |
149 | response = obswebsockets_manager.get_input_kind_list()
150 | print(f"\nHere is the input kind list:{response}\n")
151 | time.sleep(2)
152 |
153 | response = obswebsockets_manager.get_scene_items('*** Mid Monitor')
154 | print(f"\nHere is the scene's item list:{response}\n")
155 | time.sleep(2)
156 |
157 | time.sleep(300)
158 |
159 | #############################################
--------------------------------------------------------------------------------
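
Note: websockets_auth.py also isn't included above, but obs_websockets.py imports three constants from it. Based on that import and the server settings recommended in the README, it presumably looks something like this (the host value is an assumption for a local OBS instance):

```python
# websockets_auth.py (hypothetical sketch, not the original file)
WEBSOCKET_HOST = "localhost"        # Assumption: OBS runs on the same machine
WEBSOCKET_PORT = 4455               # The Server Port set in the README's OBS step
WEBSOCKET_PASSWORD = "TwitchChat9"  # The Server Password set in the README's OBS step
```
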
/requirements.txt:
--------------------------------------------------------------------------------
1 | azure.cognitiveservices.speech
2 | Flask
3 | Flask_SocketIO
4 | gTTS
5 | mutagen
6 | obs_websocket_py
7 | pydub
8 | pygame
9 | pygame_ce
10 | pytz
11 | soundfile
12 | twitchio
--------------------------------------------------------------------------------
/static/css/style.css:
--------------------------------------------------------------------------------
1 | /* Applies to all elements */
2 | * {
3 | font-family: 'Roboto', sans-serif;
4 | background-color: hsl(250, 24%, 19%);
5 | color: #ccc;
6 | /* color: #fff;*/
7 | scrollbar-width: none; /*firefox support */
8 | }
9 |
10 | /* Applies to all form elements */
11 | form{
12 | padding: 5px;
13 | margin-top: 0px;
14 | margin-left: -5px;
15 | margin-right: 10px;
16 | margin-bottom: 5px;
17 | float: left;
18 | }
19 |
20 | .choose-box {
21 | text-align: left;
22 | display: block;
23 |     margin-left: 0;
24 | }
25 |
26 | /* Applies to all user-name-box class elements */
27 | .user-name-box {
28 | /* Three Team */
29 | width: 240px;
30 |     height: 40px;
31 | padding: 0px;
32 |
33 | color: #79f1ff;
34 | text-align: center;
35 | }
36 |
37 | /* Applies to all user-name class elements */
38 | .user-name {
39 | /* Three Team Size */
40 | font-size: 35px;
41 |     color: rgb(255,255,255);
42 | text-shadow: -3px 0 black, 0 3px black, 3px 0 black, 0 -3px black;
43 | }
44 |
45 | /* Applies to all user-message-box class elements */
46 | .user-message-box {
47 | /* Three Teams */
48 | width: 240px;
49 | height: 85px;
50 | padding: 0px;
51 | margin-top: -6px;
52 |
53 | text-align: center;
54 | text-shadow: -2px 0 black, 0 2px black, 2px 0 black, 0 -2px black;
55 | }
56 |
57 | /* Applies to all user-message class elements */
58 | .user-message {
59 | /* Three Teams */
60 | font-size: 15px;
61 | color: #79f1ff;
62 |     padding: 0;
63 | background-color: transparent;
64 | }
--------------------------------------------------------------------------------
/templates/index.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |