├── README.md ├── cx_freeze_setup.py ├── icon.ico ├── requirements.txt ├── screenshot.png └── transcriber.py /README.md: -------------------------------------------------------------------------------- 1 | # Transcriber 2 | 3 | A real-time speech-to-text transcription app built with Flet and OpenAI Whisper. 4 | 5 | ![Screenshot](screenshot.png) 6 | 7 | https://user-images.githubusercontent.com/69365652/205431746-1ece6d20-85a6-4112-ba5a-ee050b67268c.mp4 8 | 9 | ## Setting Up An Environment 10 | Clone the repository, then on Windows: 11 | ``` 12 | cd transcriber_app 13 | py -3.7 -m venv venv 14 | venv\Scripts\activate 15 | pip install -r requirements.txt 16 | ``` 17 | On Unix: 18 | ``` 19 | git clone https://github.com/davabase/transcriber_app/ 20 | cd transcriber_app 21 | python3.7 -m venv venv 22 | source venv/bin/activate 23 | pip install -r requirements.txt 24 | ``` 25 | 26 | ## Running From Source 27 | ``` 28 | python transcriber.py 29 | ``` 30 | 31 | ## Building From Source 32 | ``` 33 | python cx_freeze_setup.py build 34 | ``` 35 | This puts the output in the `build` directory. 36 | 37 | ## Usage 38 | * Select an input source using the dropdown. 39 | * Click "Start Transcribing". 40 | 41 | Selecting a specific language will greatly improve the transcription results. 42 | 43 | Transcriber can also be used to translate from other languages to English. 44 | 45 | You can change the audio model to trade transcription quality for speed. On slow machines you might prefer the Tiny model. 46 | 47 | You can also make the window transparent and set the text background, which is useful for overlaying on other apps. There are invisible draggable zones just above and below the "Stop Transcribing" button; use these to move the window when it is transparent. 48 | 49 | ## Hidden Features 50 | When you stop a transcription, the lines from the transcription will be saved to `transcription.txt` in the same folder as the app. 51 | 52 | Starting a transcription saves the current settings to `transcriber_settings.yaml`. These settings will be loaded automatically the next time you use the program. 53 | 54 | In `transcriber_settings.yaml` you can additionally set: 55 | * `transcribe_rate` 56 | * `seconds_of_silence_between_lines` 57 | * `max_record_time` 58 | 59 | which have no on-screen controls (see the example settings file at the end of this README). 60 | 61 | `transcribe_rate` defines how often audio data should be transcribed, in seconds. Decreasing this number will make the app feel more real-time but will place more load on your computer. 62 | 63 | `seconds_of_silence_between_lines` defines how much silence (audio with volume lower than the volume threshold) is required to automatically break up the transcription into separate lines. 64 | 65 | `max_record_time` defines the maximum amount of time we should allow a recording to be transcribed. Longer recordings take longer to transcribe and may no longer be real time. If you have audio with no silence in it, consider reducing this to break up the audio into smaller chunks. 66 | 67 | ## Q&A 68 | Why do you use cx_Freeze instead of PyInstaller like Flet recommends? 69 | 70 | * It looks like PyInstaller and PyTorch don't get along. I tried building with PyInstaller and got errors with missing torch packages. I spent a bit of time trying to get it to work before switching to cx_Freeze, which worked almost immediately. 71 | 72 | Read more about Whisper here: https://github.com/openai/whisper 73 | 74 | Read more about Flet here: https://flet.dev/ 75 | 76 | The code in this repository is public domain.
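For reference, a saved `transcriber_settings.yaml` might look something like this. The keys are the ones the app writes when you start transcribing; the values below are just the built-in defaults and are illustrative, not required:
```
speech_model: base
microphone_index: 0
language: Auto
text_size: 24
translate: false
always_on_top: false
dark_mode: false
text_background: false
transparent: false
volume_threshold: 300
transcribe_rate: 0.5
seconds_of_silence_between_lines: 0.5
max_record_time: 30
window_width: 817.0
window_height: 800.0
```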
77 | -------------------------------------------------------------------------------- /cx_freeze_setup.py: -------------------------------------------------------------------------------- 1 | import sys 2 | from cx_Freeze import setup, Executable 3 | 4 | # Dependencies are automatically detected, but it might need fine-tuning. 5 | # flet and torch are listed explicitly so cx_Freeze bundles them. 6 | build_exe_options = {"packages": ["flet", "torch"], "includes": ["sys"]} 7 | 8 | # base="Win32GUI" should be used only for a Windows GUI app. 9 | base = None 10 | if sys.platform == "win32": 11 | base = "Win32GUI" 12 | 13 | setup( 14 | name="transcriber", 15 | version="0.1", 16 | description="Transcriber", 17 | options={"build_exe": build_exe_options}, 18 | executables=[Executable("transcriber.py", base=base, icon="icon.ico")], 19 | ) -------------------------------------------------------------------------------- /icon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/davabase/transcriber_app/0b762705b3717d1072dd6dcd36bc2ef7f703b212/icon.ico -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | cx_Freeze 2 | flet 3 | pyaudio 4 | --extra-index-url https://download.pytorch.org/whl/cu116 5 | torch 6 | git+https://github.com/openai/whisper.git -------------------------------------------------------------------------------- /screenshot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/davabase/transcriber_app/0b762705b3717d1072dd6dcd36bc2ef7f703b212/screenshot.png -------------------------------------------------------------------------------- /transcriber.py: -------------------------------------------------------------------------------- 1 | #! python3.7 2 | 3 | # Needed when cx_Freeze builds this as a GUI app, since there is no console to write to. https://stackoverflow.com/a/3237924 4 | import sys 5 | class dummyStream: 6 | ''' dummyStream behaves like a stream but does nothing. ''' 7 | def __init__(self): pass 8 | def write(self,data): pass 9 | def read(self,data): pass 10 | def flush(self): pass 11 | def close(self): pass 12 | # Redirect all default streams to this dummyStream: 13 | sys.stdout = dummyStream() 14 | sys.stderr = dummyStream() 15 | sys.stdin = dummyStream() 16 | sys.__stdout__ = dummyStream() 17 | sys.__stderr__ = dummyStream() 18 | sys.__stdin__ = dummyStream() 19 | 20 | 21 | import audioop 22 | import flet as ft 23 | import io 24 | import os 25 | import numpy 26 | import pyaudio 27 | import torch 28 | import wave 29 | import whisper 30 | import yaml 31 | 32 | from datetime import datetime, timedelta 33 | from queue import Queue 34 | from threading import Thread 35 | from time import sleep 36 | from whisper.tokenizer import LANGUAGES 37 | 38 | 39 | def main(page: ft.Page): 40 | # 41 | # Settings and constants. 42 | # 43 | 44 | # Load saved settings from the previous session, if the file exists. 45 | settings_file = "transcriber_settings.yaml" 46 | settings = {} 47 | if os.path.exists(settings_file): 48 | with open(settings_file, 'r') as f: 49 | settings = yaml.safe_load(f) 50 | # Happens if the file is empty. 51 | if settings is None: 52 | settings = {} 53 | 54 | transcription_file = "transcription.txt" 55 | max_energy = 5000 56 | sample_rate = 16000 57 | chunk_size = 1024 58 | max_int16 = 2**15 59 | 60 | # Set window settings.
61 | page.title = "Transcriber" 62 | page.window_min_width = 817.0 63 | page.window_width = settings.get('window_width', page.window_min_width) 64 | page.window_min_height = 475.0 65 | page.window_height = settings.get('window_height', 800.0) 66 | 67 | # 68 | # Callbacks. 69 | # 70 | 71 | def always_on_top_callback(_): 72 | page.window_always_on_top = always_on_top_checkbox.value 73 | page.update() 74 | 75 | def text_background_callback(_): 76 | if text_background_checkbox.value: 77 | for list_item in transcription_list.controls: 78 | list_item.bgcolor = ft.colors.BLACK if dark_mode_checkbox.value else ft.colors.WHITE 79 | else: 80 | for list_item in transcription_list.controls: 81 | list_item.bgcolor = None 82 | transcription_list.update() 83 | 84 | def dark_mode_callback(_): 85 | if dark_mode_checkbox.value: 86 | page.theme_mode = ft.ThemeMode.DARK 87 | else: 88 | page.theme_mode = ft.ThemeMode.LIGHT 89 | text_background_callback(_) 90 | page.update() 91 | 92 | def language_callback(_): 93 | translate_checkbox.disabled = language_dropdown.value == 'en' 94 | if language_dropdown.value == 'en': 95 | translate_checkbox.value = False 96 | translate_checkbox.update() 97 | 98 | def text_size_callback(_): 99 | for list_item in transcription_list.controls: 100 | list_item.size = int(text_size_dropdown.value) 101 | transcription_list.update() 102 | 103 | audio_model:whisper.Whisper = None 104 | loaded_audio_model:str = None 105 | currently_transcribing = False 106 | stop_recording = None 107 | record_thread:Thread = None 108 | data_queue = Queue() 109 | def transcribe_callback(_): 110 | nonlocal currently_transcribing, audio_model, stop_recording, loaded_audio_model, record_thread, run_record_thread 111 | if not currently_transcribing: 112 | page.splash = ft.Container( 113 | content=ft.ProgressRing(), 114 | alignment=ft.alignment.center 115 | ) 116 | page.update() 117 | 118 | model = model_dropdown.value 119 | if model != "large" and language_dropdown.value == 'en': 120 | model = model + ".en" 121 | 122 | # Only re-load the audio model if it changed. 123 | if not audio_model or loaded_audio_model != model: 124 | device = 'cpu' 125 | if torch.cuda.is_available(): 126 | device = "cuda" 127 | audio_model = whisper.load_model(model, device) 128 | loaded_audio_model = model 129 | 130 | device_index = int(microphone_dropdown.value) 131 | if not record_thread: 132 | stream = pa.open(format=pyaudio.paInt16, 133 | channels=1, 134 | rate=sample_rate, 135 | input=True, 136 | frames_per_buffer=chunk_size, 137 | input_device_index=device_index) 138 | record_thread = Thread(target=recording_thread, args=[stream]) 139 | run_record_thread = True 140 | record_thread.start() 141 | 142 | transcribe_text.value = "Stop Transcribing" 143 | transcribe_icon.name = "stop_rounded" 144 | transcribe_button.bgcolor = ft.colors.RED_800 145 | 146 | # Disable all the controls. 147 | model_dropdown.disabled = True 148 | microphone_dropdown.disabled = True 149 | language_dropdown.disabled = True 150 | translate_checkbox.disabled = True 151 | settings_controls.visible = False 152 | 153 | # Make transparent. 154 | if transparent_checkbox.value: 155 | page.window_bgcolor = ft.colors.TRANSPARENT 156 | page.bgcolor = ft.colors.TRANSPARENT 157 | page.window_title_bar_hidden = True 158 | page.window_frameless = True 159 | draggable_area1.visible = True 160 | draggable_area2.visible = True 161 | 162 | # Save all settings.
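# Everything needed to restore this configuration on the next launch, plus the loop timings that have no on-screen controls.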
163 | settings = { 164 | 'window_width': page.window_width, 165 | 'window_height': page.window_height, 166 | 'speech_model': model_dropdown.value, 167 | 'microphone_index': microphone_dropdown.value, 168 | 'language': language_dropdown.value, 169 | 'text_size': text_size_dropdown.value, 170 | 'translate': translate_checkbox.value, 171 | 'always_on_top': always_on_top_checkbox.value, 172 | 'dark_mode': dark_mode_checkbox.value, 173 | 'text_background': text_background_checkbox.value, 174 | 'transparent': transparent_checkbox.value, 175 | 'volume_threshold': energy_slider.value, 176 | 'transcribe_rate': transcribe_rate_seconds, 177 | 'max_record_time': max_record_time, 178 | 'seconds_of_silence_between_lines': silence_time, 179 | } 180 | 181 | with open(settings_file, 'w+') as f: 182 | yaml.dump(settings, f) 183 | 184 | currently_transcribing = True 185 | else: 186 | page.splash = ft.Container( 187 | content=ft.ProgressRing(), 188 | alignment=ft.alignment.center 189 | ) 190 | page.update() 191 | 192 | transcribe_text.value = "Start Transcribing" 193 | transcribe_icon.name = "play_arrow_rounded" 194 | transcribe_button.bgcolor = ft.colors.BLUE_800 195 | volume_bar.value = 0.01 196 | 197 | # Stop the record thread. 198 | if record_thread: 199 | run_record_thread = False 200 | record_thread.join() 201 | record_thread = None 202 | 203 | # Drain all the remaining data but save the last sample. 204 | # This is to pump the main loop one more time, otherwise we'll end up editing 205 | # the last line when we start transcribing again, rather than creating a new line. 206 | data = None 207 | while not data_queue.empty(): 208 | data = data_queue.get() 209 | if data: 210 | data_queue.put(data) 211 | 212 | # Enable all the controls. 213 | model_dropdown.disabled = False 214 | microphone_dropdown.disabled = False 215 | language_dropdown.disabled = False 216 | translate_checkbox.disabled = language_dropdown.value == 'en' 217 | settings_controls.visible = True 218 | 219 | # Make opaque. 220 | page.window_bgcolor = None 221 | page.bgcolor = None 222 | page.window_title_bar_hidden = False 223 | page.window_frameless = False 224 | draggable_area1.visible = False 225 | draggable_area2.visible = False 226 | 227 | # Save transcription. 228 | with open(transcription_file, 'w+', encoding='utf-8') as f: 229 | f.writelines('\n'.join([item.value for item in transcription_list.controls])) 230 | 231 | currently_transcribing = False 232 | 233 | page.splash = None 234 | page.update() 235 | 236 | 237 | # 238 | # Build controls. 
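# Each control's initial value comes from the settings dict loaded above, falling back to a default.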
239 | # 240 | 241 | model_dropdown = ft.Dropdown( 242 | options=[ 243 | ft.dropdown.Option('tiny', text="Tiny (Fastest)"), 244 | ft.dropdown.Option('base', text="Base"), 245 | ft.dropdown.Option('small', text="Small"), 246 | ft.dropdown.Option('medium', text="Medium"), 247 | ft.dropdown.Option('large', text="Large (Highest Quality)"), 248 | ], 249 | label="Speech To Text Model", 250 | value=settings.get('speech_model', 'base'), 251 | expand=True, 252 | content_padding=ft.padding.only(top=5, bottom=5, left=10), 253 | text_size=14, 254 | ) 255 | 256 | microphones = {} 257 | pa = pyaudio.PyAudio() 258 | for i in range(pa.get_device_count()): 259 | device_info = pa.get_device_info_by_index(i) 260 | if device_info['maxInputChannels'] > 0 and device_info['hostApi'] == 0: 261 | microphones[device_info['index']] = device_info['name'] 262 | 263 | default_mic = pa.get_default_input_device_info()['index'] 264 | selected_mic = int(settings.get('microphone_index', default_mic)) 265 | if selected_mic not in microphones: 266 | selected_mic = default_mic 267 | 268 | microphone_dropdown = ft.Dropdown( 269 | options=[ft.dropdown.Option(index, text=mic) for index, mic in microphones.items()], 270 | label="Audio Input Device", 271 | value=selected_mic, 272 | expand=True, 273 | content_padding=ft.padding.only(top=5, bottom=5, left=10), 274 | text_size=14, 275 | ) 276 | 277 | language_options = [ft.dropdown.Option("Auto")] 278 | language_options += [ft.dropdown.Option(abbr, text=lang.capitalize()) for abbr, lang in LANGUAGES.items()] 279 | language_dropdown = ft.Dropdown( 280 | options=language_options, 281 | label="Language", 282 | value=settings.get('language', "Auto"), 283 | content_padding=ft.padding.only(top=5, bottom=5, left=10), 284 | text_size=14, 285 | on_change=language_callback 286 | ) 287 | 288 | text_size_dropdown = ft.Dropdown( 289 | options=[ft.dropdown.Option(size) for size in range(8, 66, 2)], 290 | label="Text Size", 291 | value=settings.get('text_size', 24), 292 | on_change=text_size_callback, 293 | content_padding=ft.padding.only(top=5, bottom=5, left=10), 294 | text_size=14, 295 | ) 296 | 297 | translate_checkbox = ft.Checkbox(label="Translate To English", value=settings.get('translate', False), disabled=language_dropdown.value == 'en') 298 | dark_mode_checkbox = ft.Checkbox(label="Dark Mode", value=settings.get('dark_mode', False), on_change=dark_mode_callback) 299 | text_background_checkbox = ft.Checkbox(label="Text Background", value=settings.get('text_background', False), on_change=text_background_callback) 300 | always_on_top_checkbox = ft.Checkbox(label="Always On Top", value=settings.get('always_on_top', False), on_change=always_on_top_callback) 301 | transparent_checkbox = ft.Checkbox(label="Transparent", value=settings.get('transparent', False)) 302 | 303 | energy_slider = ft.Slider(min=0, max=max_energy, value=settings.get('volume_threshold', 300), expand=True, height=20) 304 | volume_bar = ft.ProgressBar(value=0.01, color=ft.colors.RED_800) 305 | 306 | transcription_list = ft.ListView([], spacing=10, padding=20, expand=True, auto_scroll=True) 307 | 308 | transcribe_text = ft.Text("Start Transcribing") 309 | transcribe_icon = ft.Icon("play_arrow_rounded") 310 | 311 | transcribe_button = ft.ElevatedButton( 312 | content=ft.Row( 313 | [ 314 | transcribe_icon, 315 | transcribe_text 316 | ], 317 | expand=True, 318 | alignment=ft.MainAxisAlignment.CENTER, 319 | spacing=5 320 | ), 321 | style=ft.ButtonStyle(shape=ft.RoundedRectangleBorder(radius=5)), 322 | bgcolor=ft.colors.BLUE_800, 
color=ft.colors.WHITE, 323 | on_click=transcribe_callback, 324 | ) 325 | 326 | settings_controls = ft.Column( 327 | [ 328 | ft.Container( 329 | content=ft.Row( 330 | [ 331 | model_dropdown, 332 | ft.Icon("help_outline", tooltip="Choose which model to transcribe speech with.\nModels are downloaded automatically the first time they are used.") 333 | ], 334 | spacing=10, 335 | ), 336 | padding=ft.padding.only(left=10, right=10, top=15), 337 | ), 338 | ft.Container( 339 | content=microphone_dropdown, 340 | padding=ft.padding.only(left=10, right=45, top=5) 341 | ), 342 | ft.Container( 343 | content=ft.Row( 344 | [ 345 | ft.Column( 346 | [ 347 | language_dropdown, 348 | translate_checkbox, 349 | ] 350 | ), 351 | ft.Container( 352 | content=ft.Column( 353 | [ 354 | text_size_dropdown, 355 | ft.Row( 356 | [ 357 | text_background_checkbox, 358 | dark_mode_checkbox, 359 | ] 360 | ), 361 | ], 362 | ), 363 | padding=ft.padding.only(left=10) 364 | ), 365 | ft.Column( 366 | [ 367 | ft.Row( 368 | [ 369 | transparent_checkbox, 370 | ft.Icon("help_outline", tooltip="Make the window transparent while transcribing.") 371 | ] 372 | ), 373 | always_on_top_checkbox, 374 | ] 375 | ), 376 | ], 377 | alignment=ft.MainAxisAlignment.SPACE_BETWEEN, 378 | ), 379 | margin=ft.margin.only(left=10, right=15, top=5), 380 | ), 381 | ft.Container( 382 | content=ft.Row( 383 | [ 384 | energy_slider, 385 | ft.Icon("help_outline", tooltip="Required volume to start decoding speech.\nAdjusts max volume automatically.") 386 | ], 387 | expand=True, 388 | ), 389 | padding=ft.padding.only(left=0, right=15, top=0), 390 | ), 391 | ], 392 | visible=True 393 | ) 394 | 395 | draggable_area1 = ft.Row( 396 | [ 397 | ft.WindowDragArea(ft.Container(height=30), expand=True), 398 | ], 399 | visible=False 400 | ) 401 | draggable_area2 = ft.Row( 402 | [ 403 | ft.WindowDragArea(ft.Container(height=30), expand=True), 404 | ], 405 | visible=False 406 | ) 407 | 408 | page.add( 409 | settings_controls, 410 | draggable_area1, 411 | ft.Container( 412 | content=transcribe_button, 413 | padding=ft.padding.only(left=10, right=45, top=5) 414 | ), 415 | ft.Container( 416 | content=volume_bar, 417 | padding=ft.padding.only(left=10, right=45, top=0) 418 | ), 419 | draggable_area2, 420 | ft.Container( 421 | content=transcription_list, 422 | padding=ft.padding.only(left=15, right=45, top=5), 423 | expand=True, 424 | ), 425 | ) 426 | 427 | # Set settings that may have been loaded. 428 | dark_mode_callback(None) 429 | always_on_top_callback(None) 430 | text_background_callback(None) 431 | 432 | # 433 | # Control loops. 434 | # 435 | 436 | run_record_thread = True 437 | def recording_thread(stream:pyaudio.Stream): 438 | nonlocal max_energy 439 | while run_record_thread: 440 | # We record as fast as possible so that we can update the volume bar at a fast rate. 
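# Each stream.read() returns chunk_size (1024) frames of 16-bit mono audio at 16 kHz, i.e. roughly 64 ms of sound per iteration.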
441 | data = stream.read(chunk_size) 442 | energy = audioop.rms(data, pa.get_sample_size(pyaudio.paInt16)) 443 | if energy > max_energy: 444 | max_energy = energy 445 | energy_slider.max = max_energy 446 | energy_slider.update() 447 | volume_bar.value = min(energy / max_energy, 1.0) 448 | if energy < energy_slider.value: 449 | volume_bar.color = ft.colors.RED_800 450 | else: 451 | volume_bar.color = ft.colors.BLUE_800 452 | data_queue.put(data) 453 | volume_bar.update() 454 | 455 | next_transcribe_time = None 456 | transcribe_rate_seconds = float(settings.get('transcribe_rate', 0.5)) 457 | transcribe_rate = timedelta(seconds=transcribe_rate_seconds) 458 | max_record_time = settings.get('max_record_time', 30) 459 | silence_time = settings.get('seconds_of_silence_between_lines', 0.5) 460 | last_sample = bytes() 461 | samples_with_silence = 0 462 | while True: 463 | # Main loop. Wait for data from the recording thread and transcribe it at the specified rate. 464 | if currently_transcribing and audio_model and not data_queue.empty(): 465 | now = datetime.utcnow() 466 | # Set next_transcribe_time for the first time. 467 | if not next_transcribe_time: 468 | next_transcribe_time = now + transcribe_rate 469 | 470 | # Only run transcription occasionally. This reduces stress on the GPU and makes transcriptions 471 | # more accurate because they have more audio context, but makes the transcription less real time. 472 | if now > next_transcribe_time: 473 | next_transcribe_time = now + transcribe_rate 474 | 475 | phrase_complete = False 476 | while not data_queue.empty(): 477 | data = data_queue.get() 478 | energy = audioop.rms(data, pa.get_sample_size(pyaudio.paInt16)) 479 | if energy < energy_slider.value: 480 | samples_with_silence += 1 481 | else: 482 | samples_with_silence = 0 483 | 484 | # If we have encountered enough silence, restart the buffer and add a new line. 485 | if samples_with_silence > sample_rate / chunk_size * silence_time: 486 | phrase_complete = True 487 | last_sample = bytes() 488 | last_sample += data 489 | 490 | # Write out raw frames as a wave file. 491 | wav_file = io.BytesIO() 492 | wav_writer:wave.Wave_write = wave.open(wav_file, "wb") 493 | wav_writer.setframerate(sample_rate) 494 | wav_writer.setsampwidth(pa.get_sample_size(pyaudio.paInt16)) 495 | wav_writer.setnchannels(1) 496 | wav_writer.writeframes(last_sample) 497 | wav_writer.close() 498 | 499 | # Read the audio data, now with wave headers. 500 | wav_file.seek(0) 501 | wav_reader:wave.Wave_read = wave.open(wav_file) 502 | samples = wav_reader.getnframes() 503 | audio = wav_reader.readframes(samples) 504 | wav_reader.close() 505 | 506 | # Convert the wave data straight to a numpy array for the model.
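# Whisper expects mono float32 samples in the range [-1.0, 1.0] at 16 kHz, so the int16 samples are divided by max_int16 (2**15).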
507 | # https://stackoverflow.com/a/62298670 508 | audio_as_np_int16 = numpy.frombuffer(audio, dtype=numpy.int16) 509 | audio_as_np_float32 = audio_as_np_int16.astype(numpy.float32) 510 | audio_normalised = audio_as_np_float32 / max_int16 511 | 512 | language = None 513 | if language_dropdown.value != 'Auto': 514 | language = language_dropdown.value 515 | 516 | task = 'transcribe' 517 | if language != 'en' and translate_checkbox.value: 518 | task = 'translate' 519 | 520 | result = audio_model.transcribe(audio_normalised, language=language, task=task) 521 | text = result['text'].strip() 522 | 523 | color = None 524 | if text_background_checkbox.value: 525 | color = ft.colors.BLACK if dark_mode_checkbox.value else ft.colors.WHITE 526 | 527 | if not phrase_complete and transcription_list.controls: 528 | transcription_list.controls[-1].value = text 529 | elif not transcription_list.controls or (transcription_list.controls and transcription_list.controls[-1].value): 530 | # Always add a new item if there are no items in the list. 531 | # Only add another item to the list if the previous item is not an empty string. 532 | # Since hearing silence triggers phrase_complete, there's a good chance that most appends are going to be empty text. 533 | transcription_list.controls.append(ft.Text(text, selectable=True, size=int(text_size_dropdown.value), bgcolor=color)) 534 | transcription_list.update() 535 | 536 | # If we've reached our max recording time, break up the buffer and add an empty line after the last line we edited. 537 | audio_length_in_seconds = samples / float(sample_rate) 538 | if audio_length_in_seconds > max_record_time: 539 | last_sample = bytes() 540 | transcription_list.controls.append(ft.Text('', selectable=True, size=int(text_size_dropdown.value), bgcolor=color)) 541 | 542 | sleep(0.1) 543 | 544 | 545 | if __name__ == "__main__": 546 | ft.app(target=main) 547 | --------------------------------------------------------------------------------