├── .gitignore
├── LICENSE
├── README.md
├── configuration.yaml
├── custom_components
│   └── visual_stream_assist
│       ├── __init__.py
│       ├── __pycache__
│       │   ├── __init__.cpython-312.pyc
│       │   ├── config_flow.cpython-312.pyc
│       │   ├── sensor.cpython-312.pyc
│       │   └── switch.cpython-312.pyc
│       ├── config_flow.py
│       ├── core
│       │   ├── README.md
│       │   ├── __init__.py
│       │   ├── __pycache__
│       │   │   ├── __init__.cpython-312.pyc
│       │   │   └── stream.cpython-312.pyc
│       │   └── stream.py
│       ├── gifs
│       │   ├── jarvis_listen.gif
│       │   ├── jarvis_speech.gif
│       │   ├── roomie_listen.gif
│       │   ├── roomie_speech.gif
│       │   ├── sheila_listen.gif
│       │   └── sheila_speech.gif
│       ├── manifest.json
│       ├── sensor.py
│       ├── services.yaml
│       ├── switch.py
│       └── translations
│           └── en.json
├── hacs.json
├── openwakeword models
│   ├── Lisa.tflite
│   ├── Rommie.tflite
│   ├── Sheila2.tflite
│   └── jarvis_v2.tflite
└── www
    └── gifs
        ├── change_to_jarvis_gifs.sh
        ├── change_to_roomie_gifs.sh
        ├── change_to_sheila_gifs.sh
        ├── jarvis_listen.gif
        ├── jarvis_speech.gif
        ├── listen.gif
        ├── roomie_listen.gif
        ├── roomie_speech.gif
        ├── sheila_listen.gif
        ├── sheila_speech.gif
        └── speech.gif
/.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | .idea/ 3 | .homeassistant/ 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 Stoleru Ioan Aurel 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in
all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # VisualStreamAssist 2 | This project adds personalized, visual responses to AlexxIT's [StreamAssist](https://github.com/AlexxIT/StreamAssist) integration, played on an Android tablet using Browser Mod. 3 | 4 | ## Pre-requisites 5 | 6 | - Home Assistant 2023.11.3 or newer 7 | - A voice assistant [configured in HA](https://my.home-assistant.io/redirect/voice_assistants/) with STT and TTS in a language of your choice 8 | - Install the [Browser Mod](https://github.com/thomasloven/hass-browser_mod) integration with HACS. The tablet's Browser Mod media player is used to show GIF files via the `browser_mod.popup` service and to play audio responses. 9 | - Install the [Rtpmic](https://play.google.com/store/apps/details?id=com.rtpmic&hl=en_US) app on the Android tablet, or another application that streams the mic or camera with sound from the tablet. If you use the Rtpmic app, in the default settings check **auto start streaming** and **start at boot**, and set **target address** `255.255.255.255`, **port** `5555` and **audio codec** `G.711a`.
10 | - Optionally install [Fully Kiosk Browser](https://play.google.com/store/apps/details?id=de.ozerov.fully&hl=en_US) on the Android tablet and the Fully Kiosk Browser integration in Home Assistant. 11 | 12 | ## Installation 13 | 14 | [HACS](https://hacs.xyz/) > Integrations > 3 dots (top right corner) > Custom repositories > URL: `https://github.com/relust/VisualStreamAssist`, Category: Integration > Add > wait > Stream Assist > Install 15 | 16 | ### Config Stream Assist 17 | 18 | 1. Add the **Stream Assist** integration 19 | Settings > Integrations > Add Integration > Stream Assist 20 | 2. Configure the **Stream Assist** integration 21 | Settings > Integrations > Stream Assist > Configure 22 | 23 | - If you use the **Rtpmic** app, the **Stream URL** is `rtp://192.168.0.xxx:5555` 24 | - In **Player Entity** put the exact name of your tablet browser's **BROWSER MOD PLAYER** (media_player.xxx_xxx). 25 | - In **Browser ID** put the exact name of your **BROWSER MOD BROWSER** (from the tablet's Browser Mod tab / Browser ID field). 26 | - To fill in the **TTS service for wake word detection** and **TTS language for wake word detection** fields, you can simulate a new automation: add a **Media Player** action, select **Play media**, select a media player, and from **Pick media** select **Text to speech**, then select your language and write a message. Then switch to YAML mode and copy the **tts service** and **tts language**. **IMPORTANT: THEY MUST BE THE SAME AS IN THE SELECTED PIPELINE.** 27 | - **Example:** 28 | - From `media-source://tts/edge_tts?message=how can I help you&language=en-US-MichelleNeural` 29 | - copy `edge_tts` to the **TTS service** field and `en-US-MichelleNeural` to the **TTS language** field 30 | - In **Wake Word detection responses** you can put multiple responses with a comma between them.
31 | - **Example:** `how can I help you, how can I assist you, yes I'm listening` 32 | - Copy [speech.gif and listen.gif](https://github.com/relust/VisualStreamAssist/tree/main/www/gifs) (or, after the integration is installed, the files from the Home Assistant `/config/custom_components/visual_stream_assist/gifs` directory) into the `www/gifs` directory, and in the UI **Speech Gif** and **Listen Gif** fields write the path: 33 | - `/local/gifs/jarvis_speech.gif` 34 | - `/local/gifs/jarvis_listen.gif` 35 | - You can select a Voice Assistant Pipeline for the recognition process: **WAKE => STT => NLP => TTS**. By default the component will use the default pipeline. You can create several **Pipelines** with different settings, and several **Stream Assist** components with different settings. 36 | 37 | 38 | ## Using 39 | 40 | The component has a MIC switch and multiple sensors - WAKE, STT, INTENT, TTS. There may be fewer sensors, depending on the Pipeline settings. 41 | 42 | The sensor attributes contain a lot of useful information about the results of each step of the assistant.
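The **Wake Word detection responses** option above is handled by a simple split-and-pick: the component splits the configured string on commas, trims whitespace, and plays one response at random. A minimal stand-alone sketch of that parsing (the function name `pick_wake_response` is ours, for illustration only):

```python
import random

def pick_wake_response(responses: str) -> str:
    """Split a comma-separated response string and return one entry at random."""
    # Trim the whitespace around each response, as the component does
    # in its WAKE_WORD_END event handler
    messages = [msg.strip() for msg in responses.split(",")]
    return random.choice(messages)

print(pick_wake_response("how can I help you, how can I assist you, yes I'm listening"))
```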
43 | 44 | You can also view the pipelines running history in the Home Assistant interface: 45 | 46 | - Settings > Voice assistants > Pipeline > 3 dots > Debug 47 | -------------------------------------------------------------------------------- /configuration.yaml: -------------------------------------------------------------------------------- 1 | shell_command: 2 | roomie_gifs: bash /config/www/gifs/change_to_roomie_gifs.sh 3 | jarvis_gifs: bash /config/www/gifs/change_to_jarvis_gifs.sh 4 | sheila_gifs: bash /config/www/gifs/change_to_sheila_gifs.sh 5 | -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/__init__.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | from homeassistant.config_entries import ConfigEntry 4 | from homeassistant.const import Platform 5 | from homeassistant.core import HomeAssistant, ServiceResponse, SupportsResponse, ServiceCall 6 | from homeassistant.helpers.device_registry import DeviceEntry 7 | from homeassistant.helpers.typing import ConfigType 8 | 9 | from .core import DOMAIN, get_stream_source, assist_run, stream_run 10 | from .core.stream import Stream 11 | 12 | _LOGGER = logging.getLogger(__name__) 13 | 14 | PLATFORMS = (Platform.SENSOR, Platform.SWITCH) 15 | 16 | 17 | async def async_setup(hass: HomeAssistant, config: ConfigType): 18 | async def run(call: ServiceCall) -> ServiceResponse: 19 | stt_stream = Stream() 20 | 21 | try: 22 | coro = stream_run(hass, call.data, stt_stream=stt_stream) 23 | hass.async_create_task(coro) 24 | 25 | return await assist_run( 26 | hass, call.data, context=call.context, stt_stream=stt_stream 27 | ) 28 | except Exception as e: 29 | _LOGGER.error("visual_stream_assist.run", exc_info=e) 30 | return {"error": {"type": str(type(e)), "message": str(e)}} 31 | finally: 32 | stt_stream.close() 33 | 34 | hass.services.async_register( 35 | DOMAIN, "run", run, 
supports_response=SupportsResponse.OPTIONAL 36 | ) 37 | 38 | return True 39 | 40 | 41 | async def async_setup_entry(hass: HomeAssistant, config_entry: ConfigEntry): 42 | if config_entry.data: 43 | hass.config_entries.async_update_entry( 44 | config_entry, data={}, options=config_entry.data 45 | ) 46 | 47 | if not config_entry.update_listeners: 48 | config_entry.add_update_listener(async_update_options) 49 | 50 | await hass.config_entries.async_forward_entry_setups(config_entry, PLATFORMS) 51 | 52 | return True 53 | 54 | 55 | async def async_unload_entry(hass: HomeAssistant, config_entry: ConfigEntry): 56 | return await hass.config_entries.async_unload_platforms(config_entry, PLATFORMS) 57 | 58 | 59 | async def async_update_options(hass: HomeAssistant, config_entry: ConfigEntry): 60 | await hass.config_entries.async_reload(config_entry.entry_id) 61 | 62 | 63 | async def async_remove_config_entry_device( 64 | hass: HomeAssistant, config_entry: ConfigEntry, device_entry: DeviceEntry 65 | ) -> bool: 66 | return True 67 | 68 | 69 | async def async_remove_entry(hass: HomeAssistant, config_entry: ConfigEntry) -> None: 70 | pass 71 | -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/__pycache__/__init__.cpython-312.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/__pycache__/__init__.cpython-312.pyc -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/__pycache__/config_flow.cpython-312.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/__pycache__/config_flow.cpython-312.pyc 
-------------------------------------------------------------------------------- /custom_components/visual_stream_assist/__pycache__/sensor.cpython-312.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/__pycache__/sensor.cpython-312.pyc -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/__pycache__/switch.cpython-312.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/__pycache__/switch.cpython-312.pyc -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/config_flow.py: -------------------------------------------------------------------------------- 1 | import homeassistant.helpers.config_validation as cv 2 | import voluptuous as vol 3 | from homeassistant.components import assist_pipeline 4 | from homeassistant.components.camera import CameraEntityFeature 5 | from homeassistant.components.media_player import MediaPlayerEntityFeature 6 | from homeassistant.config_entries import ConfigFlow, ConfigEntry, OptionsFlow 7 | from homeassistant.core import callback 8 | from homeassistant.helpers import entity_registry 9 | 10 | from .core import DOMAIN 11 | 12 | 13 | class ConfigFlowHandler(ConfigFlow, domain=DOMAIN): 14 | async def async_step_user(self, user_input=None): 15 | if user_input: 16 | title = user_input.pop("name") 17 | return self.async_create_entry(title=title, data=user_input) 18 | 19 | reg = entity_registry.async_get(self.hass) 20 | cameras = [ 21 | k 22 | for k, v in reg.entities.items() 23 | if v.domain == "camera" 24 | and v.supported_features & CameraEntityFeature.STREAM 25 | 
] 26 | 27 | return self.async_show_form( 28 | step_id="user", 29 | data_schema=vol_schema( 30 | { 31 | vol.Required("name"): str, 32 | vol.Exclusive("stream_source", "url"): str, 33 | vol.Exclusive("camera_entity_id", "url"): vol.In(cameras), 34 | }, 35 | user_input, 36 | ), 37 | ) 38 | 39 | @staticmethod 40 | @callback 41 | def async_get_options_flow(entry: ConfigEntry): 42 | return OptionsFlowHandler(entry) 43 | 44 | 45 | class OptionsFlowHandler(OptionsFlow): 46 | def __init__(self, config_entry: ConfigEntry): 47 | self.config_entry = config_entry 48 | 49 | async def async_step_init(self, user_input: dict = None): 50 | if user_input is not None: 51 | return self.async_create_entry(title="", data=user_input) 52 | 53 | reg = entity_registry.async_get(self.hass) 54 | cameras = [ 55 | k 56 | for k, v in reg.entities.items() 57 | if v.domain == "camera" 58 | and v.supported_features & CameraEntityFeature.STREAM 59 | ] 60 | players = [ 61 | k 62 | for k, v in reg.entities.items() 63 | if v.domain == "media_player" 64 | and v.supported_features & MediaPlayerEntityFeature.PLAY_MEDIA 65 | ] 66 | 67 | pipelines = { 68 | p.id: p.name for p in assist_pipeline.async_get_pipelines(self.hass) 69 | } 70 | 71 | defaults = self.config_entry.options.copy() 72 | 73 | return self.async_show_form( 74 | step_id="init", 75 | data_schema=vol_schema( 76 | { 77 | vol.Exclusive("stream_source", "url"): str, 78 | vol.Exclusive("camera_entity_id", "url"): vol.In(cameras), 79 | #vol.Optional("player_entity_id"): cv.multi_select(players), 80 | vol.Optional("player_entity_id"): str, 81 | vol.Optional("browser_id"): str, 82 | vol.Optional("tts_service"): str, 83 | vol.Optional("tts_language"): str, 84 | vol.Optional("stt_start_media"): str, 85 | vol.Optional("speech_gif"): str, 86 | vol.Optional("listen_gif"): str, 87 | vol.Optional("pipeline_id"): vol.In(pipelines) 88 | }, 89 | defaults, 90 | ), 91 | ) 92 | 93 | 94 | def vol_schema(schema: dict, defaults: dict) -> vol.Schema: 95 | schema = {k: 
v for k, v in schema.items() if not empty(v)} 96 | 97 | if defaults: 98 | for key in schema: 99 | if key.schema in defaults: 100 | key.default = vol.default_factory(defaults[key.schema]) 101 | 102 | return vol.Schema(schema) 103 | 104 | 105 | def empty(v) -> bool: 106 | if isinstance(v, vol.In): 107 | return len(v.container) == 0 108 | if isinstance(v, cv.multi_select): 109 | return len(v.options) == 0 110 | return False -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/core/README.md: -------------------------------------------------------------------------------- 1 | ```yaml 2 | service: visual_stream_assist.run 3 | data: 4 | stream_source: rtsp://... 5 | camera_entity_id: camera.xxx 6 | player_entity_id: media_player.xxx 7 | stt_start_media: media-source://media_source/local/beep.mp3 8 | pipeline_id: abcdefg... 9 | assist: 10 | start_stage: wake_word # wake_word, stt, intent, tts 11 | end_stage: tts 12 | pipeline: 13 | conversation_language: en 14 | conversation_engine: homeassistant 15 | language: en 16 | name: Home Assistant 17 | stt_engine: stt.faster_whisper 18 | stt_language: en 19 | tts_engine: tts.google_en_com 20 | tts_language: en 21 | tts_voice: None 22 | wake_word_entity: wake_word.openwakeword 23 | wake_word_id: None 24 | wake_word_settings: { timeout: 5 } 25 | audio_settings: 26 | noise_suppression_level: None 27 | auto_gain_dbfs: None 28 | volume_multiplier: None 29 | conversation_id: None 30 | device_id: None 31 | intent_input: None 32 | tts_audio_output: None # None, wav, mp3 33 | tts_input: None 34 | stream: 35 | file: ...
36 | options: {} 37 | ``` 38 | -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/core/__init__.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import logging 3 | from typing import Callable 4 | import time 5 | import re 6 | import random 7 | from mutagen.mp3 import MP3 8 | import io 9 | from homeassistant.components import assist_pipeline 10 | from homeassistant.components import media_player 11 | from homeassistant.components import stt 12 | from homeassistant.components.assist_pipeline import ( 13 | AudioSettings, 14 | Pipeline, 15 | PipelineEvent, 16 | PipelineEventCallback, 17 | PipelineEventType, 18 | PipelineInput, 19 | PipelineStage, 20 | PipelineRun, 21 | WakeWordSettings, 22 | ) 23 | from homeassistant.components.camera import Camera 24 | from homeassistant.config_entries import ConfigEntry 25 | from homeassistant.core import HomeAssistant, Context 26 | from homeassistant.helpers.device_registry import DeviceEntryType 27 | from homeassistant.helpers.entity import Entity, DeviceInfo 28 | from homeassistant.helpers.entity_component import EntityComponent 29 | from homeassistant.helpers.network import get_url 30 | from homeassistant.helpers.aiohttp_client import async_get_clientsession 31 | from .stream import Stream 32 | 33 | _LOGGER = logging.getLogger(__name__) 34 | 35 | DOMAIN = "visual_stream_assist" 36 | EVENTS = ["wake", "stt", "intent", "tts"] 37 | 38 | 39 | def init_entity(entity: Entity, key: str, config_entry: ConfigEntry) -> str: 40 | unique_id = config_entry.entry_id[:7] 41 | num = 1 + EVENTS.index(key) if key in EVENTS else 0 42 | 43 | entity._attr_unique_id = f"{unique_id}-{key}" 44 | entity._attr_name = config_entry.title + " " + key.upper().replace("_", " ") 45 | entity._attr_icon = f"mdi:numeric-{num}" 46 | entity._attr_device_info = DeviceInfo( 47 | name=config_entry.title, 48 | identifiers={(DOMAIN, 
unique_id)}, 49 | entry_type=DeviceEntryType.SERVICE, 50 | ) 51 | 52 | return unique_id 53 | 54 | 55 | async def get_stream_source(hass: HomeAssistant, entity: str) -> str | None: 56 | try: 57 | component: EntityComponent = hass.data["camera"] 58 | camera: Camera = next(e for e in component.entities if e.entity_id == entity) 59 | return await camera.stream_source() 60 | except Exception as e: 61 | _LOGGER.error("get_stream_source", exc_info=e) 62 | return None 63 | 64 | async def get_tts_duration(hass: HomeAssistant, tts_url: str) -> float: 65 | try: 66 | # Ensure we have the full URL 67 | if tts_url.startswith('/'): 68 | base_url = get_url(hass) 69 | full_url = f"{base_url}{tts_url}" 70 | else: 71 | full_url = tts_url 72 | 73 | # Use Home Assistant's aiohttp client session 74 | session = async_get_clientsession(hass) 75 | async with session.get(full_url) as response: 76 | if response.status != 200: 77 | _LOGGER.error(f"Failed to fetch TTS audio: HTTP {response.status}") 78 | return 0 79 | 80 | content = await response.read() 81 | 82 | # Use mutagen to get the duration 83 | audio = MP3(io.BytesIO(content)) 84 | duration = audio.info.length 85 | # Log the calculated duration 86 | _LOGGER.info(f"TTS duration calculated successfully: {duration} seconds") 87 | return duration 88 | 89 | except Exception as e: 90 | _LOGGER.error(f"Error getting TTS duration: {e}") 91 | return 0 92 | 93 | async def stream_run(hass: HomeAssistant, data: dict, stt_stream: Stream) -> None: 94 | stream_kwargs = data.get("stream", {}) 95 | 96 | if "file" not in stream_kwargs: 97 | if url := data.get("stream_source"): 98 | stream_kwargs["file"] = url 99 | elif entity := data.get("camera_entity_id"): 100 | stream_kwargs["file"] = await get_stream_source(hass, entity) 101 | else: 102 | return 103 | 104 | stt_stream.open(**stream_kwargs) 105 | 106 | await hass.async_add_executor_job(stt_stream.run) 107 | 108 | 109 | async def assist_run( 110 | hass: HomeAssistant, 111 | data: dict, 112 | context: 
Context = None, 113 | event_callback: PipelineEventCallback = None, 114 | stt_stream: Stream = None, 115 | ) -> dict: 116 | # 1. Process assist_pipeline settings 117 | assist = data.get("assist", {}) 118 | 119 | if pipeline_id := data.get("pipeline_id"): 120 | # get pipeline from pipeline ID 121 | pipeline = assist_pipeline.async_get_pipeline(hass, pipeline_id) 122 | elif pipeline_json := assist.get("pipeline"): 123 | # get pipeline from JSON 124 | pipeline = Pipeline.from_json(pipeline_json) 125 | else: 126 | # get default pipeline 127 | pipeline = assist_pipeline.async_get_pipeline(hass) 128 | 129 | if "start_stage" not in assist: 130 | # auto select start stage 131 | if pipeline.wake_word_entity: 132 | assist["start_stage"] = PipelineStage.WAKE_WORD 133 | elif pipeline.stt_engine: 134 | assist["start_stage"] = PipelineStage.STT 135 | else: 136 | raise Exception("Unknown start_stage") 137 | 138 | if "end_stage" not in assist: 139 | # auto select end stage 140 | if pipeline.tts_engine: 141 | assist["end_stage"] = PipelineStage.TTS 142 | else: 143 | assist["end_stage"] = PipelineStage.INTENT 144 | 145 | player_entity_id = data.get("player_entity_id") 146 | browser_id = data.get("browser_id") 147 | 148 | # 2. 
Setup Pipeline Run 149 | events = {} 150 | 151 | def internal_event_callback(event: PipelineEvent): 152 | _LOGGER.debug(f"event: {event}") 153 | 154 | events[event.type] = ( 155 | {"data": event.data, "timestamp": event.timestamp} 156 | if event.data 157 | else {"timestamp": event.timestamp} 158 | ) 159 | 160 | if event.type == PipelineEventType.WAKE_WORD_END: 161 | if player_entity_id and (messages_str := data.get("stt_start_media")): 162 | tts_service = data.get("tts_service") 163 | tts_language = data.get("tts_language") 164 | messages = [msg.strip() for msg in messages_str.split(",")] 165 | random_message = random.choice(messages) 166 | media_id = f"media-source://tts/{tts_service}?message={random_message}&language={tts_language}" 167 | 168 | play_media(hass, player_entity_id, media_id, "music") 169 | if player_entity_id and (media_id := data.get("speech_gif")): 170 | show_popup(hass, player_entity_id, media_id, "picture", browser_id) 171 | if player_entity_id and (media_id := data.get("listen_gif")): 172 | asyncio.create_task(async_delay_listening(hass, player_entity_id, media_id,"picture", browser_id )) 173 | 174 | elif event.type == PipelineEventType.TTS_END: 175 | if player_entity_id: 176 | tts = event.data["tts_output"] 177 | play_media(hass, player_entity_id, tts["url"], tts["mime_type"]) 178 | if player_entity_id and (media_id := data.get("speech_gif")): 179 | show_popup(hass, player_entity_id, media_id, "picture", browser_id) 180 | if player_entity_id: 181 | tts = event.data["tts_output"] 182 | tts_url = tts["url"] 183 | asyncio.create_task(async_delay_close_popup(hass, player_entity_id, browser_id, tts_url, events)) 184 | 185 | if event_callback: 186 | event_callback(event) 187 | 188 | pipeline_run = PipelineRun( 189 | hass, 190 | context=context, 191 | pipeline=pipeline, 192 | start_stage=assist["start_stage"], # wake_word, stt, intent, tts 193 | end_stage=assist["end_stage"], # wake_word, stt, intent, tts 194 | event_callback=internal_event_callback, 
195 | tts_audio_output=assist.get("tts_audio_output"), # None, wav, mp3 196 | wake_word_settings=new(WakeWordSettings, assist.get("wake_word_settings")), 197 | audio_settings=new(AudioSettings, assist.get("audio_settings")), 198 | ) 199 | 200 | # 3. Setup Pipeline Input 201 | pipeline_input = PipelineInput( 202 | run=pipeline_run, 203 | stt_metadata=stt.SpeechMetadata( 204 | language="", # set in async_pipeline_from_audio_stream 205 | format=stt.AudioFormats.WAV, 206 | codec=stt.AudioCodecs.PCM, 207 | bit_rate=stt.AudioBitRates.BITRATE_16, 208 | sample_rate=stt.AudioSampleRates.SAMPLERATE_16000, 209 | channel=stt.AudioChannels.CHANNEL_MONO, 210 | ), 211 | stt_stream=stt_stream, 212 | intent_input=assist.get("intent_input"), 213 | tts_input=assist.get("tts_input"), 214 | conversation_id=assist.get("conversation_id"), 215 | device_id=assist.get("device_id"), 216 | ) 217 | 218 | try: 219 | # 4. Validate Pipeline 220 | await pipeline_input.validate() 221 | 222 | # 5. Run Stream (optional) 223 | if stt_stream: 224 | stt_stream.start() 225 | 226 | # 6. 
Run Pipeline 227 | await pipeline_input.execute() 228 | 229 | except AttributeError: 230 | pass # 'PipelineRun' object has no attribute 'stt_provider' 231 | finally: 232 | if stt_stream: 233 | stt_stream.stop() 234 | 235 | return events 236 | 237 | 238 | async def async_delay_listening(hass, player_entity_id, media_id, media_type, browser_id): 239 | # Wait a moment before starting the check 240 | await asyncio.sleep(1.5) 241 | 242 | # while True: 243 | # player_state = hass.states.get(player_entity_id).state 244 | # if player_state == "idle": 245 | # break # Exit the loop if the player is no longer in the "playing" state 246 | # await asyncio.sleep(0.1) # Wait 100 ms and check again 247 | 248 | show_popup(hass, player_entity_id, media_id, media_type, browser_id) 249 | 250 | 251 | async def async_delay_close_popup(hass, player_entity_id, browser_id, tts_url, events): 252 | 253 | duration = await get_tts_duration(hass, tts_url) 254 | events[PipelineEventType.TTS_END]["data"]["tts_duration"] = duration 255 | _LOGGER.debug(f"Stored TTS duration: {duration} seconds") 256 | # Sleep for the TTS duration, then close the popup 257 | await asyncio.sleep(duration) 258 | close_popup(hass, player_entity_id, browser_id) 259 | 260 | def play_media(hass: HomeAssistant, entity_id: str, media_id: str, media_type: str): 261 | service_data = { 262 | "entity_id": entity_id, 263 | "media_content_id": media_player.async_process_play_media_url(hass, media_id), 264 | "media_content_type": media_type, 265 | } 266 | 267 | # hass.services.call will block Hass 268 | coro = hass.services.async_call("media_player", "play_media", service_data) 269 | hass.async_create_background_task(coro, "visual_stream_assist_play_media") 270 | 271 | 272 | def show_popup(hass: HomeAssistant, player_entity_id: str, media_id: str, media_type: str, browser_id: str): 273 | service_data = { 274 | "entity_id": player_entity_id, 275 | "browser_id": browser_id, 276 | "style": """
--popup-min-width: 800px; 278 | --popup-border-radius: 28px; 279 | """, 280 | "content": 281 | { 282 | "type": media_type, 283 | "image": media_id 284 | } 285 | 286 | } 287 | 288 | coro = hass.services.async_call("browser_mod", "popup", service_data) 289 | hass.async_create_background_task(coro, "visual_stream_assist_show_popup") 290 | 291 | def close_popup(hass: HomeAssistant, player_entity_id: str, browser_id: str): 292 | service_data = { 293 | "entity_id": player_entity_id, 294 | "browser_id": browser_id, 295 | } 296 | 297 | coro = hass.services.async_call("browser_mod", "close_popup", service_data) 298 | hass.async_create_background_task(coro, "visual_stream_assist_close_popup") 299 | 300 | def run_forever( 301 | hass: HomeAssistant, 302 | data: dict, 303 | context: Context, 304 | event_callback: PipelineEventCallback, 305 | ) -> Callable: 306 | stt_stream = Stream() 307 | 308 | async def run_stream(): 309 | while not stt_stream.closed: 310 | try: 311 | await stream_run(hass, data, stt_stream=stt_stream) 312 | except Exception as e: 313 | _LOGGER.debug(f"run_stream error {type(e)}: {e}") 314 | await asyncio.sleep(30) 315 | 316 | async def run_assist(): 317 | while not stt_stream.closed: 318 | try: 319 | await assist_run( 320 | hass, 321 | data, 322 | context=context, 323 | event_callback=event_callback, 324 | stt_stream=stt_stream, 325 | ) 326 | except Exception as e: 327 | _LOGGER.debug(f"run_assist error {type(e)}: {e}") 328 | 329 | hass.async_create_background_task(run_stream(), "visual_stream_assist_run_stream") 330 | hass.async_create_background_task(run_assist(), "visual_stream_assist_run_assist") 331 | 332 | return stt_stream.close 333 | 334 | 335 | def new(cls, kwargs: dict): 336 | if not kwargs: 337 | return cls() 338 | kwargs = {k: v for k, v in kwargs.items() if hasattr(cls, k)} 339 | return cls(**kwargs) 340 | -------------------------------------------------------------------------------- 
/custom_components/visual_stream_assist/core/__pycache__/__init__.cpython-312.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/core/__pycache__/__init__.cpython-312.pyc -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/core/__pycache__/stream.cpython-312.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/core/__pycache__/stream.cpython-312.pyc -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/core/stream.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import logging 3 | 4 | import av 5 | from av.audio.resampler import AudioResampler 6 | from av.container.input import InputContainer 7 | 8 | _LOGGER = logging.getLogger(__name__) 9 | 10 | 11 | class Stream: 12 | def __init__(self): 13 | self.closed: bool = False 14 | self.container: InputContainer | None = None 15 | self.enabled: bool = False 16 | self.queue: asyncio.Queue[bytes] = asyncio.Queue() 17 | 18 | def open(self, file: str, **kwargs): 19 | _LOGGER.debug(f"stream open") 20 | 21 | if "options" not in kwargs: 22 | kwargs["options"] = { 23 | "fflags": "nobuffer", 24 | "flags": "low_delay", 25 | "timeout": "5000000", 26 | } 27 | 28 | if file.startswith("rtsp"): 29 | kwargs["options"]["rtsp_flags"] = "prefer_tcp" 30 | kwargs["options"]["allowed_media_types"] = "audio" 31 | 32 | kwargs.setdefault("timeout", 5) 33 | 34 | # https://pyav.org/docs/9.0.2/api/_globals.html 35 | self.container = av.open(file, **kwargs) 36 | 37 | def run(self, end=True): 38 | _LOGGER.debug("stream 
start") 39 | 40 | resampler = AudioResampler(format="s16", layout="mono", rate=16000) 41 | 42 | try: 43 | for frame in self.container.decode(audio=0): 44 | if self.closed: 45 | return 46 | if not self.enabled: 47 | continue 48 | for frame_raw in resampler.resample(frame): 49 | chunk = frame_raw.to_ndarray().tobytes() 50 | self.queue.put_nowait(chunk) 51 | except Exception as e: 52 | _LOGGER.debug(f"stream exception {type(e)}: {e}") 53 | finally: 54 | self.container.close() 55 | self.container = None 56 | 57 | if end and self.enabled: 58 | self.queue.put_nowait(b"") 59 | 60 | _LOGGER.debug("stream end") 61 | 62 | def close(self): 63 | _LOGGER.debug(f"stream close") 64 | self.closed = True 65 | 66 | def start(self): 67 | while self.queue.qsize(): 68 | self.queue.get_nowait() 69 | 70 | self.enabled = True 71 | 72 | def stop(self): 73 | self.enabled = False 74 | 75 | def __aiter__(self): 76 | return self 77 | 78 | async def __anext__(self) -> bytes: 79 | return await self.queue.get() 80 | -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/gifs/jarvis_listen.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/gifs/jarvis_listen.gif -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/gifs/jarvis_speech.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/gifs/jarvis_speech.gif -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/gifs/roomie_listen.gif: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/gifs/roomie_listen.gif -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/gifs/roomie_speech.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/gifs/roomie_speech.gif -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/gifs/sheila_listen.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/gifs/sheila_listen.gif -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/gifs/sheila_speech.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/custom_components/visual_stream_assist/gifs/sheila_speech.gif -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/manifest.json: -------------------------------------------------------------------------------- 1 | { 2 | "domain": "visual_stream_assist", 3 | "name": "Visual Stream Assist", 4 | "codeowners": [ 5 | "@AlexxIT", 6 | "@relust" 7 | ], 8 | "config_flow": true, 9 | "dependencies": [ 10 | "stream", 11 | "assist_pipeline" 12 | ], 13 | "documentation": "https://github.com/relust/VisualStreamAssist", 14 | "iot_class": "calculated", 15 | "issue_tracker": 
"https://github.com/relust/VisualStreamAssist/issues", 16 | "requirements": [], 17 | "version": "v2.0.1" 18 | } -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/sensor.py: -------------------------------------------------------------------------------- 1 | from homeassistant.components import assist_pipeline 2 | from homeassistant.components.sensor import SensorEntity 3 | from homeassistant.config_entries import ConfigEntry 4 | from homeassistant.const import STATE_IDLE 5 | from homeassistant.core import HomeAssistant 6 | from homeassistant.helpers.dispatcher import async_dispatcher_connect 7 | from homeassistant.helpers.entity_platform import AddEntitiesCallback 8 | 9 | from .core import EVENTS, init_entity 10 | 11 | 12 | async def async_setup_entry( 13 | hass: HomeAssistant, 14 | config_entry: ConfigEntry, 15 | async_add_entities: AddEntitiesCallback, 16 | ) -> None: 17 | pipeline_id = config_entry.options.get("pipeline_id") 18 | pipeline = assist_pipeline.async_get_pipeline(hass, pipeline_id) 19 | 20 | entities = [] 21 | 22 | for event in EVENTS: 23 | if event == "wake" and not pipeline.wake_word_entity: 24 | continue 25 | if event == "stt" and not pipeline.stt_engine: 26 | break 27 | if event == "tts" and not pipeline.tts_engine: 28 | continue 29 | entities.append(StreamAssistSensor(config_entry, event)) 30 | 31 | async_add_entities(entities) 32 | 33 | 34 | class StreamAssistSensor(SensorEntity): 35 | _attr_native_value = STATE_IDLE 36 | 37 | def __init__(self, config_entry: ConfigEntry, key: str): 38 | init_entity(self, key, config_entry) 39 | 40 | async def async_added_to_hass(self) -> None: 41 | remove = async_dispatcher_connect(self.hass, self.unique_id, self.signal) 42 | self.async_on_remove(remove) 43 | 44 | def signal(self, value: str, extra: dict = None): 45 | self._attr_native_value = value or STATE_IDLE 46 | self._attr_extra_state_attributes = extra 47 | 
self.schedule_update_ha_state() 48 | -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/services.yaml: -------------------------------------------------------------------------------- 1 | # https://developers.home-assistant.io/docs/dev_101_services 2 | # https://www.home-assistant.io/docs/blueprint/selectors/ 3 | run: 4 | name: Run 5 | description: 6 | fields: 7 | stream_source: 8 | name: Stream URL 9 | description: Link to stream (any type supported by FFmpeg) 10 | example: rtsp://rtsp:12345678@192.168.1.123:554/av_stream/ch0 11 | selector: 12 | text: 13 | 14 | camera_entity_id: 15 | name: Camera Entity 16 | description: Entity for STT source 17 | selector: 18 | entity: 19 | domain: camera 20 | supported_features: 21 | - camera.CameraEntityFeature.STREAM 22 | 23 | player_entity_id: 24 | name: Player Entity 25 | description: Entity for playing TTS 26 | selector: 27 | entity: 28 | domain: media_player 29 | supported_features: 30 | - media_player.MediaPlayerEntityFeature.PLAY_MEDIA 31 | 32 | browser_id: 33 | name: Browser ID 34 | description: Browser used to display the GIF image 35 | selector: 36 | text: 37 | 38 | tts_service: 39 | name: TTS Service 40 | description: TTS service for playing wake word detection responses 41 | selector: 42 | text: 43 | 44 | tts_language: 45 | name: TTS Language 46 | description: TTS language for playing wake word detection responses 47 | selector: 48 | text: 49 | 50 | stt_start_media: 51 | name: Wake word detection media 52 | description: media-source://tts/edge_tts?message=how can i assist you&language=en-US-MichelleNeural 53 | selector: 54 | text: 55 | 56 | speech_gif: 57 | name: GIF for speaking 58 | description: Link to the GIF used to simulate speaking 59 | selector: 60 | text: 61 | 62 | listen_gif: 63 | name: GIF for listening 64 | description: Link to the GIF used to simulate listening 65 | selector: 66 | text: 67 | 68 | pipeline_id: 69 | name: Pipeline 70 | description: 
Settings > Voice Assistant 71 | selector: 72 | assist_pipeline: 73 | 74 | assist: 75 | name: Assist 76 | description: Advanced parameters 77 | selector: 78 | object: 79 | 80 | stream: 81 | name: Stream 82 | description: Advanced parameters 83 | selector: 84 | object: -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/switch.py: -------------------------------------------------------------------------------- 1 | import logging 2 | from typing import Callable 3 | 4 | from homeassistant.components.assist_pipeline import PipelineEvent, PipelineEventType 5 | from homeassistant.components.switch import SwitchEntity 6 | from homeassistant.config_entries import ConfigEntry 7 | from homeassistant.core import HomeAssistant 8 | from homeassistant.helpers.dispatcher import async_dispatcher_send 9 | from homeassistant.helpers.entity_platform import AddEntitiesCallback 10 | 11 | from .core import run_forever, init_entity, EVENTS 12 | 13 | _LOGGER = logging.getLogger(__name__) 14 | 15 | 16 | async def async_setup_entry( 17 | hass: HomeAssistant, 18 | config_entry: ConfigEntry, 19 | async_add_entities: AddEntitiesCallback, 20 | ) -> None: 21 | async_add_entities([StreamAssistSwitch(config_entry)]) 22 | 23 | 24 | class StreamAssistSwitch(SwitchEntity): 25 | on_close: Callable = None 26 | 27 | def __init__(self, config_entry: ConfigEntry): 28 | self._attr_is_on = False 29 | self._attr_should_poll = False 30 | 31 | self.options = config_entry.options.copy() 32 | self.uid = init_entity(self, "mic", config_entry) 33 | 34 | def event_callback(self, event: PipelineEvent): 35 | # Event type: wake_word-start, wake_word-end 36 | # Error code: wake-word-timeout, wake-provider-missing, wake-stream-failed 37 | code = ( 38 | event.data["code"] 39 | if event.type == PipelineEventType.ERROR 40 | else event.type.replace("_word", "") 41 | ) 42 | 43 | name, state = code.split("-", 1) 44 | 45 | async_dispatcher_send(self.hass, 
f"{self.uid}-{name}", state, event.data) 46 | 47 | async def async_added_to_hass(self) -> None: 48 | self.options["assist"] = {"device_id": self.device_entry.id} 49 | 50 | async def async_turn_on(self) -> None: 51 | if self._attr_is_on: 52 | return 53 | 54 | self._attr_is_on = True 55 | self._async_write_ha_state() 56 | 57 | for event in EVENTS: 58 | async_dispatcher_send(self.hass, f"{self.uid}-{event}", None) 59 | 60 | self.on_close = run_forever( 61 | self.hass, 62 | self.options, 63 | context=self._context, 64 | event_callback=self.event_callback, 65 | ) 66 | 67 | async def async_turn_off(self) -> None: 68 | if not self._attr_is_on: 69 | return 70 | 71 | self._attr_is_on = False 72 | self._async_write_ha_state() 73 | 74 | self.on_close() 75 | 76 | async def async_will_remove_from_hass(self) -> None: 77 | if self._attr_is_on: 78 | self.on_close() 79 | 80 | -------------------------------------------------------------------------------- /custom_components/visual_stream_assist/translations/en.json: -------------------------------------------------------------------------------- 1 | { 2 | "config": { 3 | "error": { 4 | }, 5 | "step": { 6 | "user": { 7 | "data": { 8 | "name": "Name", 9 | "stream_source": "Stream URL (RTSP/RTMP/HTTP)", 10 | "camera_entity_id": "Camera Entity" 11 | }, 12 | "description": "Select camera `entity_id` or stream `source`" 13 | } 14 | } 15 | }, 16 | "options": { 17 | "step": { 18 | "init": { 19 | "data": { 20 | "stream_source": "Stream URL (RTP/RTSP/RTMP/HTTP Ex: rtp://192.168.0.xxx:5555)", 21 | "camera_entity_id": "Camera Entity", 22 | "player_entity_id": "Player Entity (Ex: media_player.ha_display)", 23 | "browser_id": "Browser mod Browser ID (Ex: ha_display)", 24 | "tts_service": "TTS service for wake word detection (Ex: edge_tts/cloud/google_translate)", 25 | "tts_language": "TTS language for wake word detection (Ex: en-US-MichelleNeural/en-GB&voice=AbbiNeural)", 26 | "stt_start_media": "Wake Word detection responses (Ex: how can I help 
you, yes, I'm listening)", 27 | "speech_gif": "Speech Gif (Ex: local/gifs/jarvis_speech.gif)", 28 | "listen_gif": "Listen Gif (Ex: local/gifs/jarvis_listen.gif)", 29 | "pipeline_id": "Pipeline (Settings > Voice Assistant)" 30 | } 31 | } 32 | } 33 | } 34 | } -------------------------------------------------------------------------------- /hacs.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "Visual Stream Assist", 3 | "render_readme": true, 4 | "homeassistant": "2024.5.0" 5 | } 6 | -------------------------------------------------------------------------------- /openwakeword models/Lisa.tflite: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/openwakeword models/Lisa.tflite -------------------------------------------------------------------------------- /openwakeword models/Rommie.tflite: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/openwakeword models/Rommie.tflite -------------------------------------------------------------------------------- /openwakeword models/Sheila2.tflite: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/openwakeword models/Sheila2.tflite -------------------------------------------------------------------------------- /openwakeword models/jarvis_v2.tflite: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/openwakeword models/jarvis_v2.tflite -------------------------------------------------------------------------------- /www/gifs/change_to_jarvis_gifs.sh: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | cd /config/www/gifs/ 3 | cp /config/www/gifs/jarvis_speech.gif /config/www/gifs/speech.gif 4 | cp /config/www/gifs/jarvis_listen.gif /config/www/gifs/listen.gif 5 | 6 | 7 | 8 | 9 | 10 | -------------------------------------------------------------------------------- /www/gifs/change_to_roomie_gifs.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | cd /config/www/gifs/ 3 | cp /config/www/gifs/roomie_speech.gif /config/www/gifs/speech.gif 4 | cp /config/www/gifs/roomie_listen.gif /config/www/gifs/listen.gif 5 | 6 | 7 | 8 | -------------------------------------------------------------------------------- /www/gifs/change_to_sheila_gifs.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | cd /config/www/gifs/ 3 | cp /config/www/gifs/sheila_speech.gif /config/www/gifs/speech.gif 4 | cp /config/www/gifs/sheila_listen.gif /config/www/gifs/listen.gif 5 | 6 | 7 | 8 | -------------------------------------------------------------------------------- /www/gifs/jarvis_listen.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/www/gifs/jarvis_listen.gif -------------------------------------------------------------------------------- /www/gifs/jarvis_speech.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/www/gifs/jarvis_speech.gif -------------------------------------------------------------------------------- /www/gifs/listen.gif: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/www/gifs/listen.gif -------------------------------------------------------------------------------- /www/gifs/roomie_listen.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/www/gifs/roomie_listen.gif -------------------------------------------------------------------------------- /www/gifs/roomie_speech.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/www/gifs/roomie_speech.gif -------------------------------------------------------------------------------- /www/gifs/sheila_listen.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/www/gifs/sheila_listen.gif -------------------------------------------------------------------------------- /www/gifs/sheila_speech.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/www/gifs/sheila_speech.gif -------------------------------------------------------------------------------- /www/gifs/speech.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/relust/VisualStreamAssist/7a20277259dd7aca05012b7e16dceb34cbcf382d/www/gifs/speech.gif --------------------------------------------------------------------------------
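
Note on the event routing above: `StreamAssistSwitch.event_callback` in `custom_components/visual_stream_assist/switch.py` turns each assist-pipeline event code into a dispatcher signal name plus a state with a single replace/split. A minimal standalone sketch of that parsing (the helper name `parse_event_code` is illustrative, not part of the integration):

```python
# Sketch of the event-code parsing in StreamAssistSwitch.event_callback:
# pipeline event types such as "wake_word-start" (or error codes such as
# "wake-word-timeout") are split into a sensor name and a state, which
# together address the dispatcher signal f"{uid}-{name}" carrying `state`.

def parse_event_code(code: str) -> tuple[str, str]:
    """Return (sensor name, state) for a pipeline event code."""
    # "_word" is stripped so "wake_word-start" becomes "wake-start";
    # splitting on the first "-" only lets error codes keep their suffix.
    name, state = code.replace("_word", "").split("-", 1)
    return name, state


if __name__ == "__main__":
    print(parse_event_code("wake_word-start"))    # -> ('wake', 'start')
    print(parse_event_code("tts-end"))            # -> ('tts', 'end')
    print(parse_event_code("wake-word-timeout"))  # -> ('wake', 'word-timeout')
```

This is why the sensors created in `sensor.py` subscribe to signals keyed by the short names in `EVENTS` ("wake", "stt", "tts", ...) rather than the raw pipeline event types.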