├── requirements.txt
├── README.md
└── main.py

/requirements.txt:
--------------------------------------------------------------------------------
google-generativeai
python-dotenv
opencv-python
numpy

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# VisionCutter: AI-Powered Video Editor

VisionCutter is a Python script that combines Google's Gemini AI with local computer-vision analysis to automatically edit a collection of video clips into a finished video. It analyzes your clips for content, color, and motion, then uses different creative "personalities" to generate unique video edits.

This tool is great for creating dynamic music videos, artistic montages, or simply for discovering interesting narrative connections within your footage.

## Features

- **Hybrid Analysis**: Combines Google Gemini's visual description capabilities with local motion analysis (optical flow, via OpenCV) for a deep understanding of each clip.
- **Creative Editing Styles**: Comes with 6 pre-configured AI editor personalities:
  - `PULSATING_ENERGY`: Creates a high-energy, rhythmic edit.
  - `CHROMATIC_DREAM`: Focuses on color and mood to create a dream-like flow.
  - `NARRATIVE_CHAOS`: Juxtaposes clips for a chaotic, surreal effect.
  - `STORYTELLER`: Aims to build a coherent narrative from the clips.
  - `ACTION_STORYTELLER`: Edits like a Hollywood action trailer.
  - `POETIC_STORYTELLER`: Creates a visual poem by connecting clips metaphorically.
- **Intelligent Caching**: Automatically caches video analysis (`.json`) and pre-processed video segments. Subsequent runs are significantly faster, saving time and API costs.
- **Configurable**: Easily configure clip duration, editing styles to generate, and processing modes at the top of the script.
- **Interactive**: Prompts the user for the desired length of each cut (in frames) at runtime.

## Requirements

- Python 3.9+
- [FFmpeg](https://ffmpeg.org/download.html): Must be installed on your system and accessible from your terminal's PATH (verify with `ffmpeg -version`).

## Installation & Setup

1. **Clone the Repository** (or ensure you have all the project files).

2. **Add Your Video Clips**:
   - Place all your source video files (e.g., `.mp4`, `.mov`) inside the `clips` folder.

3. **Set Up Your API Key**:
   - Create a file named `.env` in the root directory of the project.
   - Inside the `.env` file, add your Google AI API key like this:
     ```
     GOOGLE_API_KEY="AIzaSy..."
     ```

4. **Install Dependencies**:
   - Open your terminal in the project directory and run:
     ```bash
     pip install -r requirements.txt
     ```

## How to Use

1. **Run the Script**:
   - Execute the script from your terminal:
     ```bash
     python main.py
     ```

2. **Enter Clip Duration**:
   - The script will prompt you to enter the desired duration for each cut in the final video. Enter a number of **frames** (e.g., `14`). The script will calculate the duration in seconds based on the `OUTPUT_FPS` setting.
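
   For example, at the default `OUTPUT_FPS` of 30, entering `14` yields roughly half-second cuts. The conversion is the same one `main.py` performs internally:

   ```python
   # Frames-to-seconds conversion, as done in main.py (values are examples)
   OUTPUT_FPS = 30
   target_clip_duration_frames = 14  # the number you type at the prompt
   target_clip_duration_s = target_clip_duration_frames / OUTPUT_FPS  # ~0.47 s
   ```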

3. **Wait for the Magic**:
   - **First Run**: The script will take some time to analyze each video and create cached versions. You will see detailed analysis data printed in the terminal.
   - **Subsequent Runs**: If the cache already exists for your chosen frame duration, the script will skip the analysis and processing steps and move directly to the final video creation, which is much faster.

4. **Find Your Videos**:
   - The final videos will be saved in the root directory, with names corresponding to their editing style and frame length (e.g., `final_cut_STORYTELLER_14frames.mp4`).

## Customization

You can easily customize the script's behavior by editing the parameters at the top of `main.py` (see the example after this list):

- `CLIP_MODE`: Choose between `'speed_up'` (compresses the whole clip to the target duration) or `'trim'` (cuts a segment from the middle of the clip).
- `STYLES_TO_GENERATE`: A Python list of which AI personalities you want to use. You can remove styles you don't want or reorder them.
- `OUTPUT_FPS`: The frame rate of the output videos. Defaults to 30.
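
For instance, a configuration that trims from the middle of each clip and renders only two styles at 24 fps would look like this (the names are the actual parameters in `main.py`; the values are just one possibility):

```python
CLIP_MODE = 'trim'  # cut a segment from the middle instead of speeding up
STYLES_TO_GENERATE = ['STORYTELLER', 'CHROMATIC_DREAM']
OUTPUT_FPS = 24
```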

---
*This project was collaboratively developed with an AI assistant.*

--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
import os
import subprocess
import time
import json
import ast
from dotenv import load_dotenv
import google.generativeai as genai
import cv2
import numpy as np

# --- 1. EDITING PARAMETERS (Tweak these!) ---
CLIP_MODE = 'speed_up'  # 'speed_up' to fit the whole clip, or 'trim' to cut a segment
STYLES_TO_GENERATE = [
    'PULSATING_ENERGY', 'CHROMATIC_DREAM', 'NARRATIVE_CHAOS', 'STORYTELLER',
    'ACTION_STORYTELLER', 'POETIC_STORYTELLER'
]  # A list of styles to generate
OUTPUT_FPS = 30  # The frame rate of the final output video

# --- 2. ADVANCED CONFIGURATION ---
CLIPS_DIR = "clips"
WORKSPACE_DIR = "workspace"
CACHE_DIR = "cache"  # Directory to store analysis results
VISION_MODEL = "gemini-1.5-flash"
CREATIVE_MODEL = "gemini-1.5-pro"

# --- 3. SCRIPT LOGIC (No need to edit below) ---
def setup_environment():
    """Loads environment variables and configures the AI client."""
    print("1. Initializing environment...")
    load_dotenv()
    api_key = os.getenv("GOOGLE_API_KEY")
    if not api_key:
        print("ERROR: 'GOOGLE_API_KEY' not found in .env file.")
        exit()
    try:
        genai.configure(api_key=api_key)
    except Exception as e:
        print(f"ERROR: Could not configure Gemini client: {e}")
        exit()

def get_video_duration(video_path):
    """Gets the duration of a video file using ffprobe."""
    command = [
        'ffprobe',
        '-v', 'error',
        '-show_entries', 'format=duration',
        '-of', 'default=noprint_wrappers=1:nokey=1',
        video_path
    ]
    try:
        result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True, text=True)
        return float(result.stdout)
    except (subprocess.CalledProcessError, FileNotFoundError):
        print(f" WARNING: ffprobe failed for {video_path}. Could not get duration.")
        return None
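
# For reference, the equivalent shell invocation is (path is illustrative):
#     ffprobe -v error -show_entries format=duration \
#             -of default=noprint_wrappers=1:nokey=1 clips/clip01.mp4
# ffprobe prints the duration in seconds on stdout, e.g. "12.480000".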

def calculate_motion_score(video_path):
    """Calculates a motion score for a video using optical flow."""
    try:
        cap = cv2.VideoCapture(video_path)
        ret, prev_frame = cap.read()
        if not ret:
            cap.release()
            return 0.0

        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        total_magnitude = 0
        frame_count = 0

        # Analyze a few frames to get an average
        for _ in range(5):
            ret, frame = cap.read()
            if not ret:
                break

            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            total_magnitude += np.mean(magnitude)
            prev_gray = gray
            frame_count += 1

        cap.release()
        if frame_count == 0:
            return 0.0

        # Normalize the score to a more intuitive range
        motion_score = (total_magnitude / frame_count) * 10
        return float(round(motion_score, 2))

    except Exception as e:
        print(f" WARNING: Could not calculate motion score for {video_path}. Error: {e}")
        return 0.0

def upload_video_to_gemini(video_path):
    """Uploads a video file to the Gemini API and waits for it to be ready."""
    print(f" - Uploading '{os.path.basename(video_path)}' to Gemini...")
    video_file = genai.upload_file(path=video_path, display_name=os.path.basename(video_path))
    print(f" - Gemini processing video... (ID: {video_file.name})")
    while video_file.state.name == "PROCESSING":
        time.sleep(5)
        video_file = genai.get_file(video_file.name)
    if video_file.state.name == "FAILED":
        raise ValueError(f"Video processing failed: {video_file.name}")
    print(" - Gemini analysis ready.")
    return video_file

def analyze_video_with_ai(video_path):
    """Sends a video to Gemini for detailed visual analysis."""
    video_file = None
    try:
        video_file = upload_video_to_gemini(video_path)
        model = genai.GenerativeModel(VISION_MODEL)
        prompt = """
        You are a master film analyst. Your task is to analyze the provided video clip and return a structured JSON object.

        INSTRUCTIONS:
        1. **Analyze**: Carefully observe the clip's content, colors, and mood.
        2. **Language**: All text values in the JSON must be in English.
        3. **Format**: Respond ONLY with a valid JSON object. Do NOT include markdown fences (```json), conversational text, or any other characters outside the JSON structure.

        JSON STRUCTURE:
        {
            "action_description": "A concise, evocative description of the main action or subject.",
            "dominant_colors": ["A list of 3-5 dominant or symbolic colors as English words (e.g., 'deep purple', 'neon pink', 'golden yellow')."],
            "camera_movement": "Describe the camera work (e.g., 'static shot', 'slow pan right', 'fast zoom in', 'handheld shaky cam', 'smooth tracking shot').",
            "overall_mood": "A few words describing the atmosphere or feeling (e.g., 'serene and peaceful', 'energetic and chaotic', 'mysterious and futuristic')."
        }
        """
        response = model.generate_content([prompt, video_file])
        genai.delete_file(video_file.name)
        video_file = None  # Already cleaned up; prevents a double delete below

        cleaned_response = response.text.strip().replace("```json", "").replace("```", "")
        return json.loads(cleaned_response)

    except Exception as e:
        print(f" ERROR during video analysis for {video_path}: {e}")
        if video_file:
            genai.delete_file(video_file.name)
        return None
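
# Illustrative shape of a successful analysis (all values hypothetical):
#     {
#         "action_description": "A cyclist weaves through dense city traffic.",
#         "dominant_colors": ["steel grey", "neon pink", "deep blue"],
#         "camera_movement": "handheld shaky cam",
#         "overall_mood": "energetic and chaotic"
#     }
# main() augments this dict with a local 'motion_score' and the clip's 'filename'.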

def get_creative_editing_styles():
    """Returns a dictionary of different creative editing prompts."""
    return {
        'PULSATING_ENERGY': """
        You are an editor for high-energy music videos. Your mission is to CREATE energy through editing.
        - Your primary tool is the cut. Create a fast-paced, almost jarring rhythm.
        - **Heavily prioritize clips with a high `motion_score` (e.g., > 1.0).** Also, use clips with 'fast' or 'shaky' `camera_movement`.
        - Use frequent, hypnotic repetitions of the most striking visuals, especially those with high motion.
        - Juxtapose different clips aggressively to build intensity. The sequence must feel like a visual drum machine.
        """,
        'CHROMATIC_DREAM': """
        You are an editor for an art-house film. Your style is painterly and deliberate.
        You build visual narratives through color and mood.
        - Create "chapters" of clips that share a similar `dominant_colors` palette.
        - Transition smoothly between color palettes.
        - You can create powerful moments by contrasting a sequence of low `motion_score` clips (like 'static shots') with a sudden high `motion_score` clip.
        - Repetition should be used sparingly, to emphasize a key mood.
        """,
        'NARRATIVE_CHAOS': """
        You are an avant-garde editor inspired by surrealism. You find meaning in chaos.
        Your goal is to create a visually jarring and thought-provoking sequence.
        - **Create chaos by aggressively mismatching `motion_score`**. Follow a static clip (score < 0.1) with a highly dynamic one (score > 1.5).
        - Use the `camera_movement` description to create further contrast, e.g., a 'slow pan' followed by a 'whip pan'.
        - Deliberately mismatch moods and colors for maximum contrast.
        - Tell a broken, abstract story. Let the viewer find their own meaning.
        """,
        'STORYTELLER': """
        You are a narrative film editor. Your goal is to tell a coherent and compelling story using the available clips.
        - **Your primary focus is the `action_description`.** Find logical or poetic connections between the descriptions to create a narrative flow.
        - **Build a simple story arc**: Try to establish a scene (beginning), develop an idea or action (middle), and provide a resolution or final image (end).
        - **Use `camera_movement` to guide the viewer's eye**: A 'pan right' can lead into another clip, creating a sense of continuous space. A 'zoom in' can create focus.
        - **Consider the `overall_mood`**: Build an emotional journey. Start calm, build to a climax, and end on a thoughtful note.
        - **Repetition is for narrative effect**: Only repeat a clip if it serves as a memory, a flashback, or a recurring motif that enhances the story. Avoid random repetition.
        """,
        'ACTION_STORYTELLER': """
        You are a Hollywood action movie trailer editor. Your job is to create a high-impact, thrilling narrative sequence.
        - **Find your "money shots"**: Identify clips with the highest `motion_score` and dramatic `action_description` (e.g., explosions, fast movement, dramatic reveals). These are your climax.
        - **Build the structure**: Start with establishing shots (low motion, wide views), build suspense with clips of rising action, hit the climax with your best shots, and conclude with a final, impactful image.
        - **Pacing is key**: String quick cuts together to build excitement, then let a dramatic shot breathe for a moment.
        - **The story should be simple and powerful**: Good guy, bad guy, big explosion. Find that story in the clips.
        """,
        'POETIC_STORYTELLER': """
        You are a visual poet. You create meaning not from literal events, but from the juxtaposition of images, colors, and moods.
        - **Focus on `overall_mood` and `dominant_colors`**: Create an emotional journey. For example, transition from a "serene" blue sequence to a "chaotic" red one.
        - **Find metaphors**: Connect clips by theme. A clip of a flower opening could be followed by a sunrise. A clip of a falling object could be followed by a sad face.
        - **The `action_description` is a line in your poem**: Read all the descriptions and arrange them to create a verse.
        - **Rhythm is emotional**: Use `motion_score` to control the emotional pacing. A series of static shots can create contemplation before a high-motion shot provides an emotional release.
        """
    }
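
# To add your own editor personality, append another key/prompt pair to the
# dictionary above and list its name in STYLES_TO_GENERATE. A hypothetical
# example:
#
#     'SLOW_CINEMA': """
#     You are a contemplative editor. Favor clips with a low `motion_score`
#     and 'static shot' `camera_movement`, and let each image linger.
#     """,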

def get_artistic_sequence_from_ai(clips_data, style):
    """Sends clip analysis data to the creative AI to get the final editing sequence."""
    print(f"\n4. Requesting artistic sequence from AI (Style: {style})...")

    styles = get_creative_editing_styles()
    if style not in styles:
        print(f" ERROR: Editing style '{style}' not found. Defaulting to 'PULSATING_ENERGY'.")
        style = 'PULSATING_ENERGY'

    model = genai.GenerativeModel(CREATIVE_MODEL)
    system_prompt = styles[style]

    # Make the sequence length dynamic to ensure all clips can be included
    num_clips = len(clips_data)
    min_len = num_clips
    max_len = num_clips + 10  # Allow for some creative repetition

    user_prompt = f"""
    {system_prompt}
    Your task is to create a compelling video sequence by ordering the clips provided below.

    RULES:
    1. **Mandatory Inclusion**: You MUST use every single clip from 'AVAILABLE CLIPS DATA' at least once in your final sequence.
    2. **Output Format**: Your response must be ONLY a Python list of filenames. Do not add any other text.
    3. **Repetition**: After including every clip once, you are encouraged to repeat clips to create rhythm, as per your style.
    4. **Sequence Length**: The final list must contain between {min_len} and {max_len} clip filenames.
    5. **Analyze the Data**: Use the detailed JSON data for each clip, especially the `motion_score`, to inform your creative choices.

    AVAILABLE CLIPS DATA:
    ```json
    {json.dumps(clips_data, indent=2)}
    ```

    Return only the Python list of filenames. Example: ['clip01.mp4', 'clip02.mp4', 'clip03.mp4', 'clip01.mp4']
    """
    try:
        response = model.generate_content(user_prompt, generation_config={"temperature": 0.9})
        raw_response = response.text.strip()
        if "```" in raw_response:
            raw_response = raw_response.split("```")[1].replace("python", "").strip()

        print(f" - AI Raw Response (cleaned): {raw_response}")
        ordered_list = ast.literal_eval(raw_response)
        if isinstance(ordered_list, list):
            return ordered_list
        return []
    except Exception as e:
        print(f" ERROR during creative sequencing: {e}")
        return []
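
# ast.literal_eval parses the model's reply into a Python list while refusing
# anything that is not a plain literal, so a malformed or malicious reply
# cannot execute code:
#     ast.literal_eval("['clip01.mp4', 'clip02.mp4']")  # -> ['clip01.mp4', 'clip02.mp4']
#     ast.literal_eval("__import__('os')")              # raises ValueError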

def generate_final_video(ordered_sequence, target_frames, output_filename):
    """Generates the final video by concatenating pre-cached segments."""
    print(f"\n5. Generating final video for '{output_filename}'...")
    if not ordered_sequence:
        print(" - ERROR: The editing sequence is empty. Aborting video generation.")
        return

    # Create the concat list pointing to the cached, pre-processed segments
    concat_list_path = os.path.join(WORKSPACE_DIR, "concat_list.txt")
    with open(concat_list_path, 'w') as f:
        for clip_filename in ordered_sequence:
            # IMPORTANT: All segments are expected to be in the cache by now
            cached_segment_path = os.path.join(CACHE_DIR, f"{clip_filename}_{target_frames}frames.mp4")
            if os.path.exists(cached_segment_path):
                f.write(f"file '{os.path.abspath(cached_segment_path)}'\n")
            else:
                print(f" - WARNING: Cached segment for {clip_filename} not found! Skipping.")

    # The cached segments are already encoded at OUTPUT_FPS, so a plain stream
    # copy suffices (an output '-r' would be ignored with '-c copy' anyway).
    command = [
        'ffmpeg', '-y', '-f', 'concat', '-safe', '0',
        '-i', concat_list_path,
        '-c', 'copy',  # Fast copy, no re-encoding needed
        output_filename
    ]

    try:
        print(f" - Concatenating {len(ordered_sequence)} cached segments...")
        subprocess.run(command, check=True, capture_output=True, text=True)
        print(f"\n✨ SUCCESS! Final video generated: {output_filename}")

    except subprocess.CalledProcessError as e:
        print(" - ERROR during FFmpeg processing.")
        print(f" - Command: {' '.join(e.cmd)}")
        print(f" - Stderr: {e.stderr}")
    except FileNotFoundError:
        print(" - ERROR: 'ffmpeg' not found. Ensure it is installed and in your system's PATH.")
        exit()
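
# workspace/concat_list.txt follows ffmpeg's concat-demuxer syntax, one 'file'
# directive per segment (paths illustrative):
#     file '/abs/path/cache/clip01.mp4_14frames.mp4'
#     file '/abs/path/cache/clip02.mp4_14frames.mp4'
# Stream copy works because every cached segment shares the same codec,
# resolution, and frame rate.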

def main():
    """Main function to orchestrate the video creation process."""
    setup_environment()

    # --- Interactive Input for Frame Duration ---
    while True:
        try:
            frames_str = input(f"Enter the target duration for each clip in FRAMES (output will be {OUTPUT_FPS}fps): ")
            target_clip_duration_frames = int(frames_str)
            if target_clip_duration_frames > 0:
                break
            else:
                print(" Please enter a positive number of frames.")
        except ValueError:
            print(" Invalid input. Please enter an integer.")

    target_clip_duration_s = target_clip_duration_frames / OUTPUT_FPS

    # --- Directory Setup ---
    for dir_path in [WORKSPACE_DIR, CACHE_DIR, CLIPS_DIR]:
        if not os.path.exists(dir_path):
            os.makedirs(dir_path)

    try:
        all_video_files = [f for f in os.listdir(CLIPS_DIR) if f.lower().endswith(('.mp4', '.mov', '.avi'))]
        if not all_video_files:
            print(f"ERROR: No video files found in the '{CLIPS_DIR}' directory.")
            exit()
    except FileNotFoundError:
        print(f"ERROR: The '{CLIPS_DIR}' directory does not exist.")
        exit()

    # --- STAGE 1: Analysis and Pre-processing Cache ---
    print(f"\n2. Analyzing and Caching {len(all_video_files)} video clips...")
    clips_data = []
    for video_file in all_video_files:
        video_path = os.path.join(CLIPS_DIR, video_file)
        analysis_cache_path = os.path.join(CACHE_DIR, f"{video_file}.json")
        segment_cache_path = os.path.join(CACHE_DIR, f"{video_file}_{target_clip_duration_frames}frames.mp4")
        print(f" - Checking '{video_file}'...")

        # Get the analysis from the cache, or generate it
        if os.path.exists(analysis_cache_path):
            print(" - Analysis found in cache.")
            with open(analysis_cache_path, 'r') as f:
                analysis = json.load(f)
        else:
            print(" - No analysis cache found. Performing full analysis...")
            print(" - Calculating local motion score...")
            motion_score = calculate_motion_score(video_path)
            print(f" - Motion Score: {motion_score}")
            analysis = analyze_video_with_ai(video_path)
            if analysis:
                analysis['motion_score'] = motion_score
                with open(analysis_cache_path, 'w') as f:
                    json.dump(analysis, f, indent=4)
                print(" - Analysis saved to cache.")

        if not analysis:
            print(" - ERROR: Could not analyze video. Skipping.")
            continue

        analysis['filename'] = video_file
        clips_data.append(analysis)

        # Create the pre-processed segment if it doesn't exist in the cache
        if not os.path.exists(segment_cache_path):
            print(f" - Pre-processed segment not found. Creating '{os.path.basename(segment_cache_path)}'...")
            source_duration = get_video_duration(video_path)
            if source_duration:
                filter_chain = ""
                if CLIP_MODE == 'speed_up':
                    # Compress the whole clip into the target duration, e.g. a
                    # 6.0 s source with a 0.5 s target plays at 12x (setpts=PTS/12)
                    speed_factor = source_duration / target_clip_duration_s if target_clip_duration_s > 0 else 1
                    filter_chain = f"setpts=PTS/{speed_factor:.4f},scale=1920:1080:force_original_aspect_ratio=decrease,pad=w=1920:h=1080:x=-1:y=-1"
                else:  # 'trim' mode: cut the target duration out of the middle
                    start_time = max(0, (source_duration / 2) - (target_clip_duration_s / 2))
                    filter_chain = f"trim=start={start_time:.4f}:duration={target_clip_duration_s:.4f},setpts=PTS-STARTPTS,scale=1920:1080:force_original_aspect_ratio=decrease,pad=w=1920:h=1080:x=-1:y=-1"

                command = [
                    'ffmpeg', '-y', '-i', video_path,
                    '-vf', filter_chain,
                    '-r', str(OUTPUT_FPS),  # Encode at the output frame rate so the frame count matches
                    '-c:v', 'libx264', '-preset', 'fast', '-crf', '23',
                    '-an',
                    segment_cache_path
                ]
                try:
                    subprocess.run(command, check=True, capture_output=True, text=True)
                    print(" - Segment cached successfully.")
                except subprocess.CalledProcessError as e:
                    print(f" - ERROR caching segment: {e.stderr}")
        else:
            print(" - Pre-processed segment found in cache.")

    if not clips_data:
        print("\nERROR: No clips could be analyzed. Halting script.")
        exit()

    print("\n3. All clips are analyzed and cached.")

    # --- STAGE 2: Creative Editing ---
    for style in STYLES_TO_GENERATE:
        print("---------------------------------------------------------")
        ordered_sequence = get_artistic_sequence_from_ai(clips_data, style)

        if not ordered_sequence:
            print(f"\nWARNING: Could not generate sequence for style '{style}'. Skipping.")
            continue

        # --- VALIDATION STEP ---
        validated_sequence = [f for f in ordered_sequence if f in all_video_files]
        if len(validated_sequence) != len(ordered_sequence):
            print(" - WARNING: AI returned some non-existent filenames, which were removed.")

        if not validated_sequence:
            print(f"\nWARNING: No valid clips remained after validation for style '{style}'. Skipping.")
            continue

        print(f"\n - Artistic sequence received and validated for style '{style}' ({len(validated_sequence)} cuts).")

        output_filename = f"final_cut_{style}_{target_clip_duration_frames}frames.mp4"
        generate_final_video(validated_sequence, target_clip_duration_frames, output_filename)

    print("---------------------------------------------------------")
    print("\n✅ All video generation tasks are complete.")

if __name__ == "__main__":
    main()

--------------------------------------------------------------------------------