├── .gitignore
├── Dockerfile
├── LICENSE
├── README.md
├── cli.py
├── docker-compose.yml
├── docs
│   ├── diagram.png
│   └── sample script
│       └── deepseek
│           ├── blueprint.json
│           ├── internet_search
│           │   ├── 1.json
│           │   ├── 2.json
│           │   └── 3.json
│           ├── refined_blueprint.json
│           ├── script_output.txt
│           └── video_description.txt
├── requirements.txt
├── src
│   ├── .env
│   ├── __init__.py
│   ├── agent_prompt
│   │   ├── __init__.py
│   │   ├── get_prompt.py
│   │   └── prompt.yaml
│   ├── baseLLM
│   │   ├── __init__.py
│   │   └── base.py
│   ├── blueprint
│   │   ├── __init__.py
│   │   ├── create_blueprint.py
│   │   └── structured_output_schema.py
│   ├── create_description
│   │   ├── __init__.py
│   │   └── create_description.py
│   ├── datatypes.py
│   ├── internet_research
│   │   ├── __init__.py
│   │   └── researcher.py
│   ├── key_manager
│   │   ├── __init__.py
│   │   └── key_manager.py
│   ├── main.py
│   ├── refined_blueprint
│   │   ├── __init__.py
│   │   ├── refined_blueprint.py
│   │   └── structured_output_schema.py
│   └── writer
│       ├── __init__.py
│       └── writer.py
└── ui.py
/.gitignore:
--------------------------------------------------------------------------------
1 | scripts/
2 | **/__pycache__/
3 | __pycache__/
4 | .DS_Store
5 | src/.env
6 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | # Use official Python base image
2 | FROM python:3.11
3 |
4 | # Set working directory
5 | WORKDIR /app
6 |
7 | # Copy only requirements first to leverage caching
8 | COPY requirements.txt .
9 |
10 | # Install dependencies
11 | RUN pip install --no-cache-dir -r requirements.txt
12 |
13 | # Copy the rest of the application
14 | COPY . .
15 |
16 | # Expose the necessary port for Streamlit UI
17 | EXPOSE 8501
18 |
19 | # Default command (to be overridden in docker-compose)
20 | CMD ["python", "cli.py"]
21 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2025 Rahul Anand
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # 🎥 YouTube Script Writer
2 |
3 |
4 |
5 | YouTube Script Writer is an open-source AI agent that streamlines script generation for YouTube videos. By inputting a **video title**, **language**, and **tone** (e.g., *"Incorporate Humor," "Present Only Facts," "Emotional," "Inspirational"*), content creators can quickly generate tailored scripts.
6 |
7 | The tool adapts to various **video lengths**, from **short TikTok-style clips (15-30 seconds)** to **longer formats (10-30 minutes)**, handling both research and writing so creators can focus on delivering their unique content.
8 |
9 | **Example: Deepseek**: [Deepseek-Example](https://github.com/rahulanand1103/youtube-script-writer/tree/main/docs/sample%20script/deepseek)
10 |
11 |
12 |
13 | ## WHY YOUTUBE SCRIPT WRITER?
14 |
15 | - **📝 AI-Powered Scripts** – Generates structured, engaging scripts automatically.
16 | - **🌍 Smart Research** – Gathers accurate, real-time information.
17 | - **🧠 Context-Aware Writing** – Ensures logical flow and coherence.
18 | - **⚡ Fast & Efficient** – Speeds up content creation with minimal effort.
19 | - **📈 Scalable** – Suitable for solo creators and large teams.
20 |
21 | ## Directory
22 |
23 | ```
24 | .
25 | ├── scripts/ # Output folder for generated scripts
26 | ├── src/ # Source code directory
27 | │ ├── agent_prompt/ # Handles AI agent prompts
28 | │ │ ├── __init__.py
29 | │ │ ├── get_prompt.py # Retrieves specific agent prompts
30 | │ │ └── prompt.yaml # Stores all prompt templates
31 | │ ├── blueprint/ # Generates initial script blueprints
32 | │ │ ├── __init__.py
33 | │ │ ├── create_blueprint.py # Defines the script's initial structure
34 | │ │ └── structured_output_schema.py # Defines schema for structured output
35 | │ ├── create_description/ # Generates video descriptions
36 | │ │ ├── __init__.py
37 | │ │ └── create_description.py # Creates and formats video descriptions
38 | │ ├── internet_research/ # Conducts internet-based research
39 | │ │ ├── __init__.py
40 | │ │ └── researcher.py # Fetches and processes online data
41 | │ ├── refined_blueprint/ # Refines blueprints based on research
42 | │ │ ├── __init__.py
43 | │ │ ├── refined_blueprint.py # Enhances the initial blueprint
44 | │ │ └── structured_output_schema.py # Defines refined output schema
45 | │ ├── writer/ # Handles script writing
46 | │ │ ├── __init__.py
47 | │ │ └── writer.py # Generates final script text
48 | │ ├── __init__.py
49 | │ ├── main.py # Main execution script
50 | │ ├── datatypes.py # Defines custom data types
51 | ├── LICENSE # License information
52 | ├── README.md # Project documentation
53 | ├── cli.py # Command-line interface for running scripts
54 | ├── requirements.txt # List of dependencies
55 | └── ui.py # User interface module
56 |
57 | ```
58 |
59 |
60 | ## 🎥 Demo
61 | ### UI
62 | https://github.com/user-attachments/assets/b215ad58-58f9-42d7-922e-ec7fcfe916a5
63 |
64 | ### CLI
65 | https://github.com/user-attachments/assets/53ab03ca-0d0d-4c83-9303-f2a74f13c85b
66 |
67 |
68 | ## 🏗️ Architecture
69 | 
70 | YouTube Script Writer follows a structured approach to script generation:
71 |
72 |
73 |
74 | ### 📌 Steps:
75 |
76 | 1. **Create Initial Outline** - Generates a rough structure of the script.
77 | 2. **Perform Internet Research** - Collects relevant data from various sources.
78 | 3. **Refine Outline** - Improves the initial outline based on research findings.
79 | 4. **Write Each Section** - Generates detailed content for each section of the script.
80 |
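The same flow can be driven programmatically instead of through the interactive CLI; a minimal sketch mirroring `cli.py` (the field values below are illustrative):

```python
from src import YouTubeScriptInput, YouTubeScriptGenerator

# Illustrative inputs; any of the tone/length options offered by cli.py work here.
inputs = YouTubeScriptInput(
    language="English",
    tone="Present Only Facts",
    video_length="TikTok/Shots/Reel (15-30 seconds)",
    video_title="deepseek",
    description=True,  # also generate a video description
)

YouTubeScriptGenerator().generate(inputs)
```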
81 | ## ⚙️ Getting Started
82 |
83 | ### Installation
84 |
85 | 1. Install Python 3.11 or later.
86 | 2. Clone the project and navigate to the directory:
87 |
88 | ```bash
89 | git clone https://github.com/rahulanand1103/youtube-script-writer.git
90 | cd youtube-script-writer
91 | ```
92 |
93 | 3. Set up API keys by exporting them or storing them in a `.env` file:
94 |
95 | ```bash
96 | export OPENAI_API_KEY={Your OpenAI API Key here}
97 | export YDC_API_KEY={Your You.com Search API key here; wrap the value in quotes if it contains special characters}
98 | ```
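Alternatively, a `src/.env` file (git-ignored, mirroring the placeholder shipped in this repository) might look like this; the values shown are placeholders:

```bash
# src/.env
OPENAI_API_KEY="your_openai_key"
YDC_API_KEY="your_you_dot_com_search_key"
```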
99 | 4. Install dependencies:
100 |
101 | ```bash
102 | pip install -r requirements.txt
103 | ```
104 |
105 | 5. Run the CLI or UI:
106 |
107 | - **Using the CLI:**
108 |
109 | ```bash
110 | python cli.py
111 | ```
112 |
113 | - **Using the Streamlit UI:**
114 |
115 | ```bash
116 | streamlit run ui.py
117 | ```
118 |
119 | 6. Open the UI in your browser:
120 |
121 | ```
122 | http://localhost:8501
123 | ```
124 |
125 | ## 🐳 Running with Docker
126 |
127 | You can run both the CLI and UI interfaces using Docker Compose without installing Python or dependencies directly on your system.
128 |
129 | ### Prerequisites
130 |
131 | - [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) installed on your system.
132 |
133 | ### Setup
134 |
135 | 1. **To run only one of the services:**
136 | Add your API keys in the command or use an `.env` file.
137 |
138 | ```bash
139 | # For UI only
140 | sudo OPENAI_API_KEY="your_openai_key" YDC_API_KEY="your_ydc_key" docker-compose up ui
141 |
142 | # For CLI only
143 | sudo OPENAI_API_KEY="your_openai_key" YDC_API_KEY="your_ydc_key" docker-compose up cli
144 | ```
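If you prefer plain Docker without Compose, a roughly equivalent sketch (the image tag is an assumption):

```bash
docker build -t youtube-script-writer .

# CLI (interactive; the image's default command is `python cli.py`)
docker run --rm -it \
  -e OPENAI_API_KEY="your_openai_key" \
  -e YDC_API_KEY="your_ydc_key" \
  youtube-script-writer

# Streamlit UI, then open http://localhost:8501
docker run --rm -p 8501:8501 \
  -e OPENAI_API_KEY="your_openai_key" \
  -e YDC_API_KEY="your_ydc_key" \
  youtube-script-writer \
  streamlit run ui.py --server.port=8501 --server.address=0.0.0.0
```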
145 |
146 |
--------------------------------------------------------------------------------
/cli.py:
--------------------------------------------------------------------------------
1 | from rich.console import Console
2 | from rich.table import Table
3 | import questionary
4 | from src import YouTubeScriptInput, YouTubeScriptGenerator
5 |
6 |
7 | def get_youtube_script_input() -> YouTubeScriptInput:
8 | ## video title
9 | video_title = questionary.text("Enter video title:").ask()
10 |
11 | ## video length
12 | video_length = questionary.select(
13 | "Select video length:",
14 | choices=[
15 | "TikTok/Shots/Reel (15-30 seconds)",
16 | "10-15 min short video",
17 | "20-30 min short video",
18 | ],
19 | ).ask()
20 |
21 | language = questionary.text(
22 | "In which language would you like to generate your YouTube script (e.g., English or French)?"
23 | ).ask()
24 |
25 | tone = questionary.select(
26 | "Select video tone:",
27 | choices=[
28 | "Incorporate little Humor in the Video",
29 | "Present Only Facts",
30 | "Emotional",
31 | "Inspirational",
32 | "Educational",
33 | "Entertaining",
34 | "Customize Content Style",
35 | ],
36 | ).ask()
37 |
38 | if tone == "Customize Content Style":
39 | tone = questionary.text("Enter custom video type:").ask()
40 |
41 | generate_description = questionary.confirm(
42 | "Do you want to generate a video description?"
43 | ).ask()
44 |
45 | return YouTubeScriptInput(
46 | language=language,
47 | tone=tone,
48 | video_length=video_length,
49 | video_title=video_title,
50 | description=generate_description,
51 | )
52 |
53 |
54 | def display_youtube_script_input(inputs: YouTubeScriptInput):
55 | console = Console()
56 | table = Table(title="Video Configuration")
57 | table.add_column("Field", style="cyan", no_wrap=True)
58 | table.add_column("Value", style="magenta")
59 | table.add_row("Language", inputs.language)
60 | table.add_row("Video Type", inputs.tone)
61 | table.add_row("Video Length", inputs.video_length)
62 | table.add_row("Video Title", inputs.video_title)
63 | table.add_row("Generate Description", str(inputs.description))
64 | console.print(table)
65 |
66 |
67 | if __name__ == "__main__":
68 | inputs = get_youtube_script_input()
69 | display_youtube_script_input(inputs)
70 |
71 | yt_script_generator = YouTubeScriptGenerator()
72 | yt_script_generator.generate(inputs)
73 |
74 |
75 |
76 |
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: "3.8"
2 |
3 | services:
4 | cli:
5 | build: .
6 | container_name: youtube_script_cli
7 | environment:
8 | - OPENAI_API_KEY
9 | - YDC_API_KEY
10 | volumes:
11 | - .:/app
12 | stdin_open: true
13 | tty: true
14 | command: ["python", "cli.py"]
15 |
16 |
17 | ui:
18 | build: .
19 | container_name: youtube_script_ui
20 | environment:
21 | - OPENAI_API_KEY
22 | - YDC_API_KEY
23 | volumes:
24 | - .:/app
25 | ports:
26 | - "8501:8501"
27 | command: ["streamlit", "run", "ui.py", "--server.port=8501", "--server.address=0.0.0.0"]
28 |
--------------------------------------------------------------------------------
/docs/diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rahulanand1103/youtube-script-writer/60e0425d32b857f71f29f4eca0e624432022f6ec/docs/diagram.png
--------------------------------------------------------------------------------
/docs/sample script/deepseek/blueprint.json:
--------------------------------------------------------------------------------
1 | {
2 | "page_title": "deepseek",
3 | "sections": [
4 | {
5 | "section_title": "Introduction",
6 | "description": "Briefly introduce the concept of 'deepseek' as a method or tool for finding hidden or intricate details in various contexts.",
7 | "time": "0-5 sec"
8 | },
9 | {
10 | "section_title": "Key Features",
11 | "description": "Highlight the main attributes of deepseek, such as its accuracy, real-time analysis, and wide application in different fields.",
12 | "time": "5-15 sec"
13 | },
14 | {
15 | "section_title": "Applications",
16 | "description": "Outline practical applications of deepseek in industries like technology, science, and data analysis.",
17 | "time": "15-25 sec"
18 | }
19 | ]
20 | }
--------------------------------------------------------------------------------
/docs/sample script/deepseek/internet_search/1.json:
--------------------------------------------------------------------------------
1 | {
2 | "topic": "deepseek",
3 | "questions": [
4 | "\"Deepseek method for uncovering hidden details in different contexts\"",
5 | "\"Deepseek tool applications for finding intricate details across various contexts\"",
6 | "deepseek introduction and usage in finding hidden or intricate details in multiple contexts"
7 | ],
8 | "internet_search": [
9 | {
10 | "content": "Hello friends, This will be a quick and short writeup on a simple vulnerability I found on deepseek AI. While using the Deepseek AI, you\u2019ve probably came across an annoying response which says\u2026",
11 | "url": "https://1-day.medium.com/uncovering-deepseek-ais-hidden-flaw-a-dive-into-its-response-filtering-system-96203b727192",
12 | "title": "Uncovering Deepseek AI\u2019s Hidden Flaw: A Dive Into Its Response ..."
13 | },
14 | {
15 | "content": "While using the Deepseek AI, you\u2019ve probably came across an annoying response which says \u201cSorry, that\u2019s beyond my current scope, Let\u2019s talk about something else.\u201d. You\u2019ll get this response when you ask stuffs which are related to politics/Chinese government/nations.",
16 | "url": "https://1-day.medium.com/uncovering-deepseek-ais-hidden-flaw-a-dive-into-its-response-filtering-system-96203b727192",
17 | "title": "Uncovering Deepseek AI\u2019s Hidden Flaw: A Dive Into Its Response ..."
18 | },
19 | {
20 | "content": "Released in January 2025, this model is based on DeepSeek-V3 and is focused on advanced reasoning tasks directly competing with OpenAI's o1 model in performance, while maintaining a significantly lower cost structure. Like DeepSeek-V3, the model has 671 billion parameters with a context length of 128,000.",
21 | "url": "https://www.techtarget.com/whatis/feature/DeepSeek-explained-Everything-you-need-to-know",
22 | "title": "DeepSeek explained: Everything you need to know"
23 | },
24 | {
25 | "content": "DeepSeek-Coder-V2. Released in July 2024, this is a 236 billion-parameter model offering a context window of 128,000 tokens, designed for complex coding challenges.",
26 | "url": "https://www.techtarget.com/whatis/feature/DeepSeek-explained-Everything-you-need-to-know",
27 | "title": "DeepSeek explained: Everything you need to know"
28 | },
29 | {
30 | "content": "DeepSeek-V3 (Dec '24): scaling sparse MoE networks to 671B parameters, with FP8 mixed precision training and intricate HPC co-design \u00b7 DeepSeek-R1 (Jan '25): building upon the efficiency foundations of the previous papers and using large-scale reinforcement learning to incentivize emergent chain-of-thought capabilities, including a \u201czero-SFT\u201d variant. For additional context on DeepSeek itself and the market backdrop that has caused claims made by the DeepSeek team to be taken out of context and spread widely, please take a look at my colleague Prasanna Pendse's post: Demystifying Deepseek.",
31 | "url": "https://martinfowler.com/articles/deepseek-papers.html",
32 | "title": "The DeepSeek Series: A Technical Overview"
33 | },
34 | {
35 | "content": "A central concern they grapple with is training instability (sudden irrecoverable divergences in the training process), which can often manifest in large-scale language models\u2014especially those with mixture-of-experts or very long contexts. By carefully tuning learning rates, batch sizes, and other hyperparameters 2, DeepSeek-LLM demonstrates that stable large-scale training is achievable, but it requires meticulous design of the architecture of the transformer model together with the infrastructure of the High Performance Computing (HPC) data center used to train it.",
36 | "url": "https://martinfowler.com/articles/deepseek-papers.html",
37 | "title": "The DeepSeek Series: A Technical Overview"
38 | }
39 | ],
40 | "iterations": 1,
41 | "section_info": {
42 | "section_title": "Introduction",
43 | "description": "Briefly introduce the concept of 'deepseek' as a method or tool for finding hidden or intricate details in various contexts.",
44 | "time": "0-5 sec"
45 | }
46 | }
--------------------------------------------------------------------------------
/docs/sample script/deepseek/internet_search/2.json:
--------------------------------------------------------------------------------
1 | {
2 | "topic": "deepseek",
3 | "questions": [
4 | "deepseek main attributes accuracy real-time analysis applications different fields",
5 | "\"DeepSeek key features accuracy real-time analysis applications various fields\"",
6 | "deepseek core characteristics accuracy real-time analytics diverse field applications"
7 | ],
8 | "internet_search": [
9 | {
10 | "content": "This is a nice trick: it helps you measure e.g. MMLU and MMBench accuracy in the pretraining checkpoints to see if the capabilities are improving over time, which gives you more resolution to whether the model is getting better at X but worse at Y31. This is not terribly necessary in a model with one objective, but in a multimodal model it becomes more important. The benchmarks are pretty strong here, which should by now be a pretty typical story for a DeepSeek model \u2013 at the frontier of open source, just shy of the closed models.",
11 | "url": "https://planetbanatt.net/articles/deepseek.html",
12 | "title": "Deepseek"
13 | },
14 | {
15 | "content": "In this vein, DeepSeek-AI's body of work stands out as extremely interesting. At the time of writing, it's eight papers long, and roughly covers their company's journey from roughly Llama 2 performance to roughly Llama 3 performance.",
16 | "url": "https://planetbanatt.net/articles/deepseek.html",
17 | "title": "Deepseek"
18 | },
19 | {
20 | "content": "Explore how DeepSeek-V3 redefines AI with groundbreaking architecture, efficient training, and impactful real-world applications in coding, education, and multilingual systems.",
21 | "url": "https://adasci.org/deepseek-v3-explained-optimizing-efficiency-and-scale/",
22 | "title": "DeepSeek-V3 Explained: Optimizing Efficiency and Scale"
23 | },
24 | {
25 | "content": "DeepSeek-V3 marks a transformative advancement in the domain of large language models (LLMs), setting a new benchmark for open-source AI. As a Mixture-of-Experts (MoE) model with 671 billion parameters\u201437 billion of which are activated per token.",
26 | "url": "https://adasci.org/deepseek-v3-explained-optimizing-efficiency-and-scale/",
27 | "title": "DeepSeek-V3 Explained: Optimizing Efficiency and Scale"
28 | },
29 | {
30 | "content": "In this vein, DeepSeek-AI's body of work stands out as extremely interesting. At the time of writing, it's eight papers long, and roughly covers their company's journey from roughly Llama 2 performance to roughly Llama 3 performance.",
31 | "url": "https://planetbanatt.net/articles/deepseek.html",
32 | "title": "Deepseek"
33 | },
34 | {
35 | "content": "I think everybody with serious interest in large language models should read these papers, specifically. DeepSeek has some interesting ideas about architecture, but that's not why I think they're a worthwhile read; it's more that the papers all come from the same place, and build upon each other as they go.",
36 | "url": "https://planetbanatt.net/articles/deepseek.html",
37 | "title": "Deepseek"
38 | }
39 | ],
40 | "iterations": 2,
41 | "section_info": {
42 | "section_title": "Key Features",
43 | "description": "Highlight the main attributes of deepseek, such as its accuracy, real-time analysis, and wide application in different fields.",
44 | "time": "5-15 sec"
45 | }
46 | }
--------------------------------------------------------------------------------
/docs/sample script/deepseek/internet_search/3.json:
--------------------------------------------------------------------------------
1 | {
2 | "topic": "deepseek",
3 | "questions": [
4 | "practical applications of deepseek in technology science data analysis industries",
5 | "deepseek practical applications in technology industry science data analysis",
6 | "practical uses of deepseek in various industries including technology, scientific research, and data analysis"
7 | ],
8 | "internet_search": [
9 | {
10 | "content": "As with any powerful technology, the development and deployment of AI come with ethical considerations. DeepSeek is committed to responsible AI practices, ensuring that its technologies are used in ways that are fair, transparent, and accountable.",
11 | "url": "https://medium.com/@tam.tamanna18/transforming-industries-with-deepseeks-ai-solutions-1ce566bb596d",
12 | "title": "Transforming Industries with DeepSeek\u2019s AI Solutions | by Tamanna ..."
13 | },
14 | {
15 | "content": "DeepSeek is a trailblazer in the field of artificial intelligence, with a mission to harness the power of AI to drive innovation and create value across industries. Through its advanced technologies, ethical practices, and commitment to social good, DeepSeek is shaping the future of AI and paving the way for a more intelligent, connected, and equitable world.",
16 | "url": "https://medium.com/@tam.tamanna18/transforming-industries-with-deepseeks-ai-solutions-1ce566bb596d",
17 | "title": "Transforming Industries with DeepSeek\u2019s AI Solutions | by Tamanna ..."
18 | },
19 | {
20 | "content": "Always intrigued by emerging LLMs and their application in data science, I decided to put DeepSeek to the test. My goal was to see how well its chatbot (V3) model could assist or even replace data scientists in their daily tasks. I used the same criteria from my previous article series, where I evaluated the performance of ChatGPT-4o vs. Claude 3.5 Sonnet vs. Gemini Advanced on SQL queries, Exploratory Data Analysis (EDA), and Machine Learning (ML).",
21 | "url": "https://towardsdatascience.com/deepseek-v3-a-new-contender-in-ai-powered-data-science-eec8992e46f5/",
22 | "title": "DeepSeek V3: A New Contender in AI-Powered Data Science | Towards ..."
23 | },
24 | {
25 | "content": "Overall, DeepSeek performed very well in the SQL section, providing clear explanations and suggestions for SQL queries. It only made two small mistakes and managed to fix itself quickly after my prompts. However, its file upload limitations could cause inconvenience for users. ... Now let\u2019s switch gears to Exploratory Data Analysis.",
26 | "url": "https://towardsdatascience.com/deepseek-v3-a-new-contender-in-ai-powered-data-science-eec8992e46f5/",
27 | "title": "DeepSeek V3: A New Contender in AI-Powered Data Science | Towards ..."
28 | },
29 | {
30 | "content": "Now we know exactly how DeepSeek was designed to work, and we may even have a clue toward its highly publicized scandal with OpenAI. Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He has pulled Token Ring, configured NetWare and been known to compile his own Linux kernel. He consults with industry and media organizations on technology issues.",
31 | "url": "https://www.techtarget.com/whatis/feature/DeepSeek-explained-Everything-you-need-to-know",
32 | "title": "DeepSeek explained: Everything you need to know"
33 | },
34 | {
35 | "content": "OpenAI has helped push the generative AI industry forward with its GPT family of models, as well as its o1 class of reasoning models. While the two companies are both developing generative AI LLMs, they have different approaches. DeepSeek uses a different approach to train its R1 models than what is used by OpenAI.",
36 | "url": "https://www.techtarget.com/whatis/feature/DeepSeek-explained-Everything-you-need-to-know",
37 | "title": "DeepSeek explained: Everything you need to know"
38 | }
39 | ],
40 | "iterations": 3,
41 | "section_info": {
42 | "section_title": "Applications",
43 | "description": "Outline practical applications of deepseek in industries like technology, science, and data analysis.",
44 | "time": "15-25 sec"
45 | }
46 | }
--------------------------------------------------------------------------------
/docs/sample script/deepseek/refined_blueprint.json:
--------------------------------------------------------------------------------
1 | {
2 | "page_title": "deepseek",
3 | "sections": [
4 | {
5 | "section_title": "Introduction to DeepSeek",
6 | "description": "Introduce DeepSeek as a cutting-edge AI model, highlighting its release date, main features, and its competitive edge over other models.",
7 | "time": "0-5 sec",
8 | "pointers": "- Mention the release date of DeepSeek, January 2025, and its foundation on DeepSeek-V3.\n- Highlight its advanced reasoning tasks and competition with OpenAI's o1 model.\n- Note the model's 671 billion parameters and its cost efficiency."
9 | },
10 | {
11 | "section_title": "Key Features of DeepSeek",
12 | "description": "Highlight the unique features of DeepSeek, focusing on its architecture, training efficiency, and real-world applications.",
13 | "time": "5-15 sec",
14 | "pointers": "- Discuss the Mixture-of-Experts (MoE) model with 671 billion parameters.\n- Explain the significance of its FP8 mixed precision training and HPC co-design.\n- Mention its strong performance benchmarks and evolution from Llama 2 to Llama 3 performance."
15 | },
16 | {
17 | "section_title": "Applications of DeepSeek",
18 | "description": "Outline the practical applications of DeepSeek, emphasizing its impact on technology, science, and data analysis.",
19 | "time": "15-25 sec",
20 | "pointers": "- Highlight DeepSeek's role in coding, education, and multilingual systems.\n- Discuss its ethical AI practices and commitment to responsible technology use.\n- Mention its performance in SQL queries and data analysis tasks, comparing it to other AI models."
21 | }
22 | ]
23 | }
--------------------------------------------------------------------------------
/docs/sample script/deepseek/script_output.txt:
--------------------------------------------------------------------------------
1 |
2 | ## Section: Introduction to DeepSeek
3 | [0-5 sec]: Released in January 2025, DeepSeek emerges as a groundbreaking AI model, built upon the robust foundation of DeepSeek-V3. This state-of-the-art model is specifically designed for advanced reasoning tasks and positions itself as a formidable competitor to OpenAI's o1 model, offering similar performance while maintaining a significantly lower cost structure. With an impressive 671 billion parameters, DeepSeek not only showcases its expansive capabilities but also underscores its efficiency in delivering high-quality results.
4 |
5 | ## Section: Key Features of DeepSeek
6 | [5-15 sec]: DeepSeek-V3 stands out as a Mixture-of-Experts (MoE) model boasting 671 billion parameters, with 37 billion activated per token, pushing the boundaries of open-source AI. Its groundbreaking FP8 mixed precision training, combined with high-performance computing (HPC) co-design, enhances both efficiency and scalability. This innovative approach not only marks a leap in architectural sophistication but also delivers robust performance benchmarks that trace its evolution from Llama 2 to Llama 3 levels of proficiency.
7 |
8 | ## Section: Applications of DeepSeek
9 | [15-25 sec]: DeepSeek is revolutionizing the tech landscape with its cutting-edge applications in coding, education, and multilingual systems. Known for its robust ethical AI practices, DeepSeek is dedicated to ensuring responsible technology use by adhering to fairness, transparency, and accountability. In data analysis, DeepSeek's performance in SQL queries stands out, showcasing clear explanations and prompt error corrections, although it faces some limitations in file uploads. When compared to other AI models like ChatGPT-4o and Claude 3.5 Sonnet, DeepSeek holds its ground by offering valuable insights and support in Exploratory Data Analysis and Machine Learning, further solidifying its role as a leader in AI innovation.
10 |
--------------------------------------------------------------------------------
/docs/sample script/deepseek/video_description.txt:
--------------------------------------------------------------------------------
1 | Explore the groundbreaking capabilities of DeepSeek, a leading-edge AI model that is transforming the landscape of artificial intelligence. In this video, we introduce DeepSeek, highlighting its release date, standout features, and how it stands apart from other models in the industry. Dive deeper as we discuss the unique architecture and training efficiency of DeepSeek, along with its real-world applications. Discover how DeepSeek is making a significant impact in technology, science, and data analysis. Stay tuned to learn more about how this revolutionary AI model is setting new standards and shaping the future.
2 |
3 | - [0-5 sec] Introduction to DeepSeek
4 | - [5-15 sec] Key Features of DeepSeek
5 | - [15-25 sec] Applications of DeepSeek
6 |
7 | #DeepSeek #AIModel #ArtificialIntelligence #Technology #Innovation #DataAnalysis
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | python-dotenv==1.0.1
2 | langchain_community==0.3.4
3 | langchain_openai==0.2.5
4 | langgraph==0.2.45
5 | google-search-results==2.4.2
6 | streamlit==1.42.2
7 | questionary==2.0.1
--------------------------------------------------------------------------------
/src/.env:
--------------------------------------------------------------------------------
1 | OPENAI_API_KEY=""
2 | YDC_API_KEY=""
--------------------------------------------------------------------------------
/src/__init__.py:
--------------------------------------------------------------------------------
1 | from src.main import YouTubeScriptGenerator
2 | from src.datatypes import YouTubeScriptInput
3 |
4 | __all__ = ["YouTubeScriptGenerator", "YouTubeScriptInput"]
5 |
--------------------------------------------------------------------------------
/src/agent_prompt/__init__.py:
--------------------------------------------------------------------------------
1 | from src.agent_prompt.get_prompt import GetPrompt
2 |
3 | __all__ = ["GetPrompt"]
4 |
--------------------------------------------------------------------------------
/src/agent_prompt/get_prompt.py:
--------------------------------------------------------------------------------
1 | import yaml
2 |
3 |
4 | class GetPrompt:
5 | _prompt_cache = None
6 |
7 | @classmethod
8 | def _load_prompt(cls):
9 | if cls._prompt_cache is None: # Load YAML only once
10 | with open("src/agent_prompt/prompt.yaml", "r", encoding="utf-8") as file:
11 | cls._prompt_cache = yaml.safe_load(file)
12 |
13 | @classmethod
14 | def get_prompt(cls, agent_name: str):
15 | cls._load_prompt() # Ensure the YAML file is loaded
16 | return cls._prompt_cache.get(agent_name, {}).get("prompt", "")
17 |
--------------------------------------------------------------------------------
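`GetPrompt` is how the agents below pull their system prompts out of `prompt.yaml` by key; for example:

```python
from src.agent_prompt import GetPrompt

# Returns the "prompt" string stored under this key in prompt.yaml (the file is cached after first load).
system_prompt = GetPrompt.get_prompt("youtube_content_strategist")
```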
/src/agent_prompt/prompt.yaml:
--------------------------------------------------------------------------------
1 | youtube_content_strategist:
2 | prompt: |
3 | You are a YouTube Content Strategist. Create an engaging YouTube script blueprint optimized for audience retention based on the given inputs:
4 | Video Title: # Title of youtube video
5 | Video Length: # video length [TikTok/Shots/Reel (15-30 seconds), 10-15 min short video, 20-30 min short video]
6 |
7 | Blueprint Requirements:
8 | 1. The script should have a logical flow with sections appropriate to the video length.
9 |     2. Section title and description: Make sure the section title and description are meaningful.
10 | 3. The structure should be engaging and optimized for audience retention.
11 | 4. Break down each section with specific ideas and points to cover.
12 | 5. Each section should focus only on the script content without mentioning video production aspects.
13 |     6. **Avoid** including a "Conclusion" or "References" section in the final blueprint.
14 | 7. Adjust the start and end times of this section based on the full video. For example, if the previous duration was [0-2 min], update it to [2-5 min] accordingly. Ensure the new timing aligns seamlessly with the overall flow of the video
15 |
16 |
17 | research_analyst:
18 | prompt: |
19 | Role: Research Analyst
20 | Task: Generate a well-structured, open-ended Google search query focused purely on retrieving factual and comprehensive information. Avoid queries that imply tone (e.g., humor, opinion). Ensure clarity, specificity, and reliability while preventing redundancy with past queries.
21 |
22 | Input:
23 | - Section Details:** ## section details
24 | - Previous Questions:** ## previously generated question
25 |
26 | Output Format:
27 |     Return only the search query without any extra text like `Google Search Query:`, symbols at the start, or `Search Query:`.
28 |
29 |
30 | youtube_script_architect:
31 | prompt: |
32 |     You are a YouTube Script Architect. Your role is to refine a YouTube script blueprint based on structured research and expert insights, and to generate pointers telling the writer what each section of the blueprint should cover, based on the section name and expert insights.
33 |
34 | Inputs:
35 | - Video Title: # Title of youtube video
36 | - Video Length: # video length [TikTok/Shots/Reel (15-30 seconds), 10-15 min short video, 20-30 min short video]
37 |     - Old blueprint: # initial blueprint without any research
38 | - Internet Research: # expert internet research
39 |
40 | Refined Blueprint Requirements:
41 | 1. Enhance the Initial Blueprint using expert research while maintaining engaging storytelling.
42 | 2. Ensure section titles and descriptions are meaningful and clearly communicate what the section covers.
43 | 3. Refined script should have a logical flow with sections appropriate to the video length.
44 | 4. Provide clear, concise pointers on what the writer should cover in this section based on relevant search findings. Ensure the pointers align with the allocated time, focusing on creating engaging and informative content.
45 | 5. Break down each section with specific ideas and points to cover.
46 |     6. **Avoid** including a "Conclusion" or "References" section in the final blueprint.
47 | 7. Each section should focus only on the script content without mentioning video production aspects.
48 | 8. Adjust the start and end times of this section based on the full video. For example, if the previous duration was [0-2 min], update it to [2-5 min] accordingly. Ensure the new timing aligns seamlessly with the overall flow of the video
49 |
50 |
51 | youtube_script_writer:
52 | prompt: |
53 | You are an expert YouTube scriptwriter specializing in crafting engaging and well-structured scripts. Your task is to write a compelling section based on thoroughly researched content from subject matter experts and key pointers provided by the YouTube Script Architect. Ensure the script is intelligently written, maintaining a natural flow while precisely filling the allocated time with the appropriate word count.
54 |
55 | Inputs:
56 | - Video Title: # Title of youtube video
57 | - Full Video Length: # for your references video length [TikTok/Shots/Reel (15-30 seconds), 10-15 min short video, 20-30 min short video]
58 | - Tone: # Incorporate little Humor, Present Only Facts, Emotional, Inspirational, Educational, Entertaining, etc.
59 | - Section Name: # name of the section
60 | - Allocated Time: # time allocated to this section
61 |     - Internet Search: # internet search by the expert
62 |     - Language: # language of script
63 |     - Guidance: # guidance on what you should write
64 | ### Instructions:
65 | - Use information **verbatim** from the Q&A session with experts. Do **not** introduce new details.
66 | - Maintain a natural flow, engaging storytelling, and clarity.
67 | - **Do not include a conclusion** for this section.
68 | - Follow the senior writer's guidance: {guidance}.
69 | - Structure the response as a single, well-written paragraph in **Markdown format** (no subheadings).
70 |
71 |
72 |
73 |
74 | youtube_description_writer:
75 | prompt: |
76 | Role: YouTube Description Writer
77 |     Task: Generate an engaging, SEO-optimized YouTube description for a video based on the given script blueprint. Ensure clarity, relevance, and keyword optimization. Include relevant hashtags and an optional call to action. Don't add any tone (e.g., humor, opinion).
78 |
79 | Input:
80 | script blueprint: ## script blueprint
81 | Output Format:
82 |     - First, generate a complete description.
83 |     - Then list only [start time - end time of section] and the section name; don't put any description here.
84 |       Use the allocated time provided by the YouTube Script Architect for the start and end times.
85 |     - Then add relevant #hashtags.
86 |
--------------------------------------------------------------------------------
/src/baseLLM/__init__.py:
--------------------------------------------------------------------------------
1 | from src.baseLLM.base import BaseLLM
2 |
3 | __all__ = ["BaseLLM"]
4 |
--------------------------------------------------------------------------------
/src/baseLLM/base.py:
--------------------------------------------------------------------------------
1 | import os
2 | import logging
3 | from rich.logging import RichHandler
4 | from langchain_openai import ChatOpenAI
5 | from dotenv import load_dotenv
6 |
7 | load_dotenv()
8 | # Configure logging with RichHandler
9 | logging.basicConfig(
10 | level=logging.INFO, format="%(message)s", datefmt="[%X]", handlers=[RichHandler()]
11 | )
12 |
13 | logger = logging.getLogger(__name__)
14 |
15 |
16 | class BaseLLM:
17 | def __init__(self, model: str):
18 | """Base class for initializing an LLM."""
19 | self.llm = self._initialize_llm(model)
20 |
21 | @staticmethod
22 | def _initialize_llm(model: str) -> ChatOpenAI:
23 | """Initializes the ChatOpenAI model."""
24 | openai_api_key = os.getenv("OPENAI_API_KEY")
25 | if not openai_api_key:
26 | logger.error(
27 | "[bold red]❌ OPENAI_API_KEY is not set in environment variables.[/bold red]"
28 | )
29 | raise ValueError("Missing OPENAI_API_KEY. Please set it in your .env file.")
30 |
31 | logger.info(f"[bold green]✅ Loaded OpenAI Model:[/bold green] {model}")
32 | return ChatOpenAI(model=model)
33 |
--------------------------------------------------------------------------------
/src/blueprint/__init__.py:
--------------------------------------------------------------------------------
1 | from src.blueprint.create_blueprint import CreateBlueprint
2 |
3 | __all__ = ["CreateBlueprint"]
4 |
--------------------------------------------------------------------------------
/src/blueprint/create_blueprint.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | from pathlib import Path
4 | from langchain_core.prompts import ChatPromptTemplate
5 | from src.blueprint.structured_output_schema import BluePrint
6 | from src.agent_prompt import GetPrompt
7 | from rich.console import Console
8 | from rich.logging import RichHandler
9 | import logging
10 | from src.baseLLM import BaseLLM
11 |
12 | # Initialize Rich Console
13 | console = Console()
14 |
15 | # Configure logging with Rich
16 | logging.basicConfig(
17 | level=logging.INFO,
18 | format="%(message)s",
19 | datefmt="[%X]",
20 | handlers=[RichHandler(rich_tracebacks=True, markup=True)], # Enables colored logs
21 | )
22 | logger = logging.getLogger("rich")
23 |
24 |
25 | class CreateBlueprint(BaseLLM):
26 | def __init__(self, output_folder: str, model: str = "gpt-4o"):
27 | """Initializes the CreateBlueprint class with model and output folder settings."""
28 | super().__init__(model)
29 | self.output_folder = Path(output_folder)
30 |
31 | @staticmethod
32 | def _fetch_prompt() -> str:
33 | """Fetches the structured prompt for YouTube content strategy."""
34 | return GetPrompt.get_prompt("youtube_content_strategist")
35 |
36 | def _build_prompt_template(self) -> ChatPromptTemplate:
37 | """Builds a structured prompt template for the LLM."""
38 | return ChatPromptTemplate.from_messages(
39 | [
40 | ("system", self._fetch_prompt()),
41 | (
42 | "user",
43 | "Video Title: {video_title}\nVideo Length: {video_length}",
44 | ),
45 | ]
46 | )
47 |
48 | def generate(self, inputs) -> dict:
49 | """
50 | Generates a blueprint for YouTube content strategy using the LLM.
51 |
52 | :param inputs: An object containing `video_title`, `tone`, and `video_length`
53 | :return: Dictionary containing the structured output.
54 | """
55 | blueprint_prompt = self._build_prompt_template()
56 | generate_blueprint_direct = blueprint_prompt | self.llm.with_structured_output(
57 | BluePrint
58 | )
59 |
60 | try:
61 | output = generate_blueprint_direct.invoke(
62 | {
63 | "video_title": inputs.video_title,
64 | "video_length": inputs.video_length,
65 | }
66 | )
67 | console.print(
68 | "[bold cyan]✔ Blueprint generated successfully![/bold cyan]"
69 | ) # Colored output
70 | except Exception as e:
71 | logger.error(f"[bold red]❌ Error during LLM invocation:[/bold red] {e}")
72 | raise
73 |
74 | self._save_blueprint(output.dict())
75 | return output.dict()
76 |
77 | def _save_blueprint(self, data: dict) -> None:
78 | """Saves the generated blueprint to a JSON file."""
79 | self.output_folder.mkdir(parents=True, exist_ok=True)
80 | file_path = self.output_folder / "blueprint.json"
81 |
82 | with file_path.open("w", encoding="utf-8") as json_file:
83 | json.dump(data, json_file, indent=4)
84 |
85 | console.print(
86 | f"[bold green]✔ Blueprint saved successfully at:[/bold green] {file_path}"
87 | )
88 |
--------------------------------------------------------------------------------
/src/blueprint/structured_output_schema.py:
--------------------------------------------------------------------------------
1 | from pydantic import BaseModel, Field
2 | from typing import List, Optional
3 |
4 |
5 | class Section(BaseModel):
6 |     section_title: str = Field(..., title="Title of the section. Make sure the section title is meaningful.")
7 |     description: str = Field(
8 |         ...,
9 |         title="Content of the section must be related to the YouTube script. Don't add anything about video production.",
10 |     )
11 | time: str = Field(
12 | ...,
13 | title="Adjust the start and end times of this section based on the full video. For example, if the previous duration was [0-2 min], update it to [2-5 min] accordingly. Ensure the new timing aligns seamlessly with the overall flow of the video",
14 | )
15 |
16 |
17 | class BluePrint(BaseModel):
18 | # Define the structure of the entire blueprint
19 |     page_title: str = Field(..., title="Title of the page. Create a thought-provoking and engaging title/headline.")
20 |     sections: List[Section] = Field(
21 |         default_factory=list,
22 |         title="Sections of the blueprint, each with a title, description, and time allocation.",
23 |     )
24 |
--------------------------------------------------------------------------------
/src/create_description/__init__.py:
--------------------------------------------------------------------------------
1 | from src.create_description.create_description import CreateDescription
2 |
3 | __all__ = ["CreateDescription"]
4 |
--------------------------------------------------------------------------------
/src/create_description/create_description.py:
--------------------------------------------------------------------------------
1 | import os
2 | import logging
3 | from pathlib import Path
4 | from rich.console import Console
5 | from rich.logging import RichHandler
6 | from langchain_core.prompts import ChatPromptTemplate
7 | from langchain_core.output_parsers import StrOutputParser
8 | from src.agent_prompt import GetPrompt
9 | from src.baseLLM import BaseLLM
10 |
11 |
12 | # Initialize Rich Console
13 | console = Console()
14 |
15 | # Configure logging with Rich
16 | logging.basicConfig(
17 | level=logging.INFO,
18 | format="%(message)s",
19 | datefmt="[%X]",
20 | handlers=[RichHandler(rich_tracebacks=True, markup=True)],
21 | )
22 | logger = logging.getLogger("rich")
23 |
24 |
25 | class CreateDescription(BaseLLM):
26 | def __init__(self, model: str = "gpt-4o") -> None:
27 | """Initializes the CreateDescription class with model settings."""
28 | super().__init__(model)
29 | self.prompt_template = self._build_prompt_template()
30 |
31 | @staticmethod
32 | def _fetch_prompt() -> str:
33 | """Fetches the structured prompt for YouTube description writing."""
34 | return GetPrompt.get_prompt("youtube_description_writer")
35 |
36 | def _build_prompt_template(self) -> ChatPromptTemplate:
37 | """Builds a structured prompt template for the LLM."""
38 | return ChatPromptTemplate.from_messages(
39 | [
40 | ("system", self._fetch_prompt()),
41 | ("user", "Script Outline: {script_outline}"),
42 | ]
43 | )
44 |
45 |     def generate_conclusion(self, script_outline) -> str:
46 |         """Generates a video description from the provided refined blueprint outline (list or string)."""
47 |         try:
48 |             script_outline = "\n\n".join(script_outline) if isinstance(script_outline, list) else script_outline
49 | writer = self.prompt_template | self.llm | StrOutputParser()
50 | conclusion_output = writer.invoke({"script_outline": script_outline})
51 | console.print("[bold cyan]✔ Conclusion generated successfully![/bold cyan]")
52 | return conclusion_output
53 | except Exception as e:
54 | logger.error(f"[bold red]❌ Error generating conclusion:[/bold red] {e}")
55 | return ""
56 |
--------------------------------------------------------------------------------
/src/datatypes.py:
--------------------------------------------------------------------------------
1 | from dataclasses import dataclass
2 | from pathlib import Path
3 |
4 |
5 | @dataclass
6 | class YouTubeScriptInput:
7 | language: str
8 | tone: str
9 | video_length: str
10 | video_title: str
11 | description: bool
12 |
13 |
14 | @dataclass
15 | class ScriptPaths:
16 | base: Path
17 | internet_search: Path
18 |
19 | @classmethod
20 | def from_base(cls, base_path: Path):
21 | return cls(
22 | base=base_path,
23 | internet_search=base_path / "internet_search",
24 | )
25 |
--------------------------------------------------------------------------------
/src/internet_research/__init__.py:
--------------------------------------------------------------------------------
1 | from src.internet_research.researcher import GraphState, Researcher
2 |
3 | __all__ = ["Researcher", "GraphState"]
4 |
--------------------------------------------------------------------------------
/src/internet_research/researcher.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | from pathlib import Path
4 | from typing_extensions import TypedDict
5 | from typing import Annotated
6 | from langgraph.graph.message import AnyMessage, add_messages
7 | from langchain_core.prompts import ChatPromptTemplate
8 | from langgraph.graph import END, StateGraph
9 | from rich.console import Console
10 | from rich.logging import RichHandler
11 | import logging
12 | from langchain_community.tools.you import YouSearchTool
13 | from langchain_community.utilities.you import YouSearchAPIWrapper
14 |
15 | from src.agent_prompt import GetPrompt
16 | from src.baseLLM import BaseLLM
17 |
18 |
19 | # Setup Rich Console & Logging
20 | console = Console()
21 |
22 | logging.basicConfig(
23 | level=logging.INFO,
24 | format="%(message)s",
25 | datefmt="[%X]",
26 | handlers=[RichHandler(rich_tracebacks=True, markup=True)],
27 | )
28 | logger = logging.getLogger("rich")
29 |
30 |
31 | ## Graph state definition
32 | class GraphState(TypedDict):
33 | topic: str
34 | questions: list
35 | internet_search: list
36 | topicid: str
37 | iterations: int
38 | section_info: dict
39 | summary: str
40 |
41 |
42 | class Researcher(BaseLLM):
43 | def __init__(
44 | self,
45 | output_folder: str,
46 | model="gpt-4o",
47 | temperature=0.3,
48 | no_generate_question: int = 3,
49 | no_internet_results: int = 2,
50 | ):
51 | """Initialize the Researcher with LLM and output settings."""
52 | super().__init__(model)
53 | self.output_folder = Path(output_folder)
54 | self.state = None
55 | self.no_generate_question = no_generate_question
56 | self.no_internet_results = no_internet_results
57 |
58 | @staticmethod
59 | def _fetch_prompt() -> str:
60 | """Fetches the structured prompt for YouTube content strategy."""
61 | return GetPrompt.get_prompt("research_analyst")
62 |
63 | def _generate_question(self, state: GraphState) -> GraphState:
64 | """Generates a new question based on the given state."""
65 | console.print("[bold cyan]🔍 Generating question...[/bold cyan]")
66 |
67 | gen_qn_prompt = ChatPromptTemplate.from_messages(
68 | [
69 | ("system", self._fetch_prompt()),
70 | (
71 | "user",
72 | "Section Details: {section_details}\n Previously Generated Questions: {previous_questions}",
73 | ),
74 | ]
75 | )
76 |
77 | gen_qn_agent = gen_qn_prompt | self.llm
78 | section_details = (
79 | f'Section title: "{state["section_info"]["section_title"]}"\n'
80 | f'Section description: "{state["section_info"]["description"]}"'
81 | )
82 |
83 | previous_questions_string = (
84 | " ".join(
85 | [f"Q{index}: {str(q)}" for index, q in enumerate(state["questions"])]
86 | )
87 | if state["questions"]
88 | else "None"
89 | )
90 |
91 | try:
92 | generated_output = gen_qn_agent.invoke(
93 | {
94 | "section_details": section_details,
95 | "previous_questions": previous_questions_string,
96 | }
97 | ).content
98 | state["questions"].append(generated_output)
99 |
100 | console.print(
101 | f"[bold green]✅ Generated Question:[/bold green] {generated_output}"
102 | )
103 | except Exception as e:
104 | logger.error(f"[bold red]❌ Error generating question:[/bold red] {e}")
105 | raise
106 |
107 | return state
108 |
109 | def _internet_search(self, state: GraphState) -> GraphState:
110 | """Performs an internet search using the latest generated question."""
111 | console.print("[bold cyan]🌍 Performing Internet Search...[/bold cyan]")
112 |
113 | try:
114 | api_wrapper = YouSearchAPIWrapper(num_web_results=self.no_internet_results)
115 | youTool = YouSearchTool(api_wrapper=api_wrapper)
116 | results = youTool.run(state["questions"][-1])
117 |
118 | for index, result in enumerate(results):
119 | search_data = {
120 | "content": result.page_content,
121 | "url": result.metadata["url"],
122 | "title": result.metadata["title"],
123 | }
124 | state["internet_search"].append(search_data)
125 |
126 | except Exception as e:
127 | logger.error(f"[bold red]❌ Internet search failed:[/bold red] {e}")
128 | raise
129 |
130 | return state
131 |
132 | def _save_data(self, state: GraphState) -> None:
133 | """Saves the collected research data to a JSON file."""
134 | self.output_folder.mkdir(parents=True, exist_ok=True)
135 | file_path = (
136 | self.output_folder / "internet_search" / f"{state['iterations']}.json"
137 | )
138 | file_path.parent.mkdir(parents=True, exist_ok=True)
139 |
140 | with file_path.open("w", encoding="utf-8") as json_file:
141 | json.dump(state, json_file, indent=4)
142 |
143 | console.print(
144 | f"[bold green]💾 Research data saved at:[/bold green] {file_path}"
145 | )
146 |
147 | def end_flow_decision(self, state: GraphState) -> str:
148 | """Determines whether to continue or end the research process."""
149 | if len(state["questions"]) >= self.no_generate_question:
150 | self._save_data(state)
151 | return "end"
152 | else:
153 | return "continue"
154 |
155 | def run(self, initial_state: GraphState) -> GraphState:
156 | """Runs the research workflow using a state graph."""
157 | console.print("[bold cyan]🚀 Starting Research Process...[/bold cyan]")
158 |
159 | workflow = StateGraph(GraphState)
160 | workflow.add_node("generate_question", self._generate_question)
161 | workflow.add_node("search_internet", self._internet_search)
162 |
163 | workflow.set_entry_point("generate_question")
164 | workflow.add_edge("generate_question", "search_internet")
165 | workflow.add_conditional_edges(
166 | "search_internet",
167 | self.end_flow_decision,
168 | {
169 | "end": END,
170 | "continue": "generate_question",
171 | },
172 | )
173 |
174 | app = workflow.compile()
175 |
176 | console.print("[bold green]🔄 Research Workflow Initialized[/bold green]")
177 | return app.invoke(initial_state)
178 |
--------------------------------------------------------------------------------
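The `Researcher` above is a small LangGraph loop: generate a query, search with the You.com tool, and repeat until `no_generate_question` queries have been made, then save the results. A minimal sketch of driving it directly, mirroring `research_analyst` in `src/main.py` (the section dict and output folder are illustrative):

```python
from src.internet_research import Researcher, GraphState

# Illustrative section, shaped like one entry of blueprint.json.
section = {
    "section_title": "Introduction",
    "description": "Briefly introduce the topic.",
    "time": "0-5 sec",
}

state = GraphState(
    topic="deepseek",
    iterations=1,
    section_info=section,
    questions=[],
    internet_search=[],
)

# Results are written to <output_folder>/internet_search/1.json.
researcher = Researcher(output_folder="scripts/demo")
final_state = researcher.run(initial_state=state)
print(final_state["questions"])
```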
/src/key_manager/__init__.py:
--------------------------------------------------------------------------------
1 | from src.key_manager.key_manager import APIKeyManager
2 | __all__ = ["APIKeyManager"]
--------------------------------------------------------------------------------
/src/key_manager/key_manager.py:
--------------------------------------------------------------------------------
1 | import os
2 | import logging
3 | from dotenv import load_dotenv
4 | from rich.logging import RichHandler
5 |
6 | # Configure logging with Rich
7 | logging.basicConfig(level=logging.ERROR, handlers=[RichHandler()])
8 | logger = logging.getLogger(__name__)
9 |
10 | class APIKeyManager:
11 | """Manages API keys by checking existing environment variables first, then falling back to .env file."""
12 |
13 | REQUIRED_KEYS = ["OPENAI_API_KEY", "YDC_API_KEY"]
14 |
15 | @classmethod
16 | def load_and_validate_keys(cls) -> bool:
17 | """Ensure required API keys are set, preferring system environment variables over .env file.
18 |
19 | Returns:
20 | bool: True if all keys are set, False if any key is missing.
21 | """
22 |
23 | # Load .env file (if needed)
24 | load_dotenv()
25 |
26 | missing_keys = []
27 |
28 | for key in cls.REQUIRED_KEYS:
29 | env_value = os.environ.get(key) # Check if key is set via `export`
30 | dotenv_value = os.getenv(key) # Check if key exists in .env
31 |
32 | # Ensure the key is not empty and strip whitespace
33 | if env_value and env_value.strip():
34 | os.environ[key] = env_value.strip() # Keep already set system env variable
35 | elif dotenv_value and dotenv_value.strip():
36 | os.environ[key] = dotenv_value.strip() # Set from .env if system env is missing
37 | else:
38 | missing_keys.append(key)
39 |
40 | if missing_keys:
41 | logger.error(f"❌ Missing API keys: {', '.join(missing_keys)} | Set them using `export {missing_keys[0]}=your_api_key` or add them in `.env`")
42 | return False
43 |
44 | logger.info("✅ All API keys are properly set.")
45 | return True
46 |
47 |
48 |
49 |
--------------------------------------------------------------------------------
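`APIKeyManager` is what `src/main.py` calls at the start of `generate`; a quick standalone check might look like this (the exit message is illustrative):

```python
from src.key_manager import APIKeyManager

if not APIKeyManager.load_and_validate_keys():
    raise SystemExit("Set OPENAI_API_KEY and YDC_API_KEY (via export or .env) and retry.")
```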
/src/main.py:
--------------------------------------------------------------------------------
1 | import os
2 | import uuid
3 | import json
4 | from pathlib import Path
5 | from dataclasses import dataclass
6 | from typing_extensions import TypedDict
7 | from rich.console import Console
8 |
9 | from langgraph.graph.message import AnyMessage, add_messages
10 | from langgraph.graph import END, StateGraph
11 |
12 | from src.datatypes import YouTubeScriptInput, ScriptPaths
13 | from src.blueprint import CreateBlueprint
14 | from src.internet_research import Researcher, GraphState
15 | from src.refined_blueprint import YouTubeScriptArchitect
16 | from src.writer import GenerateScript
17 | from src.create_description import CreateDescription
18 | from src.key_manager import APIKeyManager
19 |
20 | console = Console()
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 | class MainGraphState(TypedDict):
29 | paths: ScriptPaths
30 | script_uuid: str
31 | inputs: YouTubeScriptInput
32 | intial_blueprint: dict
33 | refined_blueprint: dict
34 |
35 |
36 | class YouTubeScriptGenerator:
37 | def __init__(self, path: str = "scripts") -> None:
38 | self.script_uuid: str = str(uuid.uuid4())[:6]
39 | self.output_folder: Path = Path(path)
40 | self.paths: ScriptPaths = ScriptPaths.from_base(
41 | self.output_folder / self.script_uuid
42 | )
43 |
44 | def create_directory(self, main_state: MainGraphState) -> MainGraphState:
45 | """Creates a directory structure for the script."""
46 |
47 |         if main_state["paths"].base.exists():
48 |             console.print(
49 |                 f"[bold yellow]Directory \"{main_state['paths'].base}\" already exists.[/bold yellow]"
50 |             )
51 |             # Return the state unchanged so the graph can continue with the existing files.
52 |             return main_state
53 |
54 |
55 | for directory in [
56 | main_state["paths"].base,
57 | main_state["paths"].internet_search,
58 | ]:
59 | directory.mkdir(parents=True, exist_ok=True)
60 | console.print(
61 | f"[bold green]Directory '{directory}' created successfully![/bold green]"
62 | )
63 | return main_state
64 |
65 | def youtube_content_strategist(self, main_state: MainGraphState) -> MainGraphState:
66 |         """
67 |         ## Agent: youtube_content_strategist
68 |         ## Task: create the initial script blueprint
69 | 
70 |         Args:
71 |             main_state (MainGraphState): graph state containing the user inputs.
72 |         Returns:
73 |             MainGraphState: state updated with the initial blueprint.
74 |         """
75 |         blueprint_creator = CreateBlueprint(main_state["paths"].base)
76 |         main_state["initial_blueprint"] = blueprint_creator.generate(main_state["inputs"])
77 |         return main_state
78 |
79 | def research_analyst(self, main_state: MainGraphState) -> MainGraphState:
80 |         """## Agent: research_analyst
81 |         ## Task: run internet research for every section of the initial blueprint
82 | 
83 |         Args:
84 |             main_state (MainGraphState): graph state containing the initial blueprint.
85 | 
86 |         Returns:
87 |             MainGraphState: state after the research results have been written to disk.
88 |         """
89 |         for index, section in enumerate(main_state["initial_blueprint"]["sections"]):
90 |             initial_state = GraphState(
91 |                 topic=main_state["initial_blueprint"]["page_title"],
92 | iterations=index + 1,
93 | section_info=section,
94 | questions=[],
95 | internet_search=[],
96 | )
97 |
98 | wiki_expert_dialogue = Researcher(main_state["paths"].base)
99 | wiki_expert_dialogue.run(initial_state=initial_state)
100 | return main_state
101 |
102 | def youtube_script_architect(self, main_state: MainGraphState) -> MainGraphState:
103 | YouTubeScriptArchitect(
104 | main_state["paths"], main_state["script_uuid"]
105 |         ).refine_blueprint_run(main_state["inputs"], main_state["initial_blueprint"])
106 | return main_state
107 |
108 | def youtube_script_writer(self, main_state: MainGraphState) -> MainGraphState:
109 | generatearticle_obj = GenerateScript(refine_output=main_state["paths"].base)
110 | generatearticle_obj.generate(main_state["inputs"])
111 | return main_state
112 |
113 |     def create_description(self, main_state: MainGraphState) -> str:
114 |         """Conditional-edge router: decide whether a video description should be written."""
115 |         if main_state["inputs"].description:
116 |             return "continue"
117 |         return "end"
118 |
119 | def youtube_description_writer(self, main_state: MainGraphState):
120 | with open(f"{main_state['paths'].base}/refined_blueprint.json", "r") as file:
121 | refine_blueprint = json.load(file)
122 |
123 | refined_blueprint = f"Video Title: {refine_blueprint['page_title']}\n"
124 |
125 | for section in refine_blueprint["sections"]:
126 | refined_blueprint += (
127 | f"\nSection Name: {section['section_title']}\n"
128 | f"Section Description: {section['description']}\n"
129 | f"Time of this section: {section['time']}"
130 | )
131 | video_description = CreateDescription().generate_conclusion(refined_blueprint)
132 | print(video_description)
133 |
134 |         with open(f"{main_state['paths'].base}/video_description.txt", "w") as file:
135 | file.write(video_description)
136 |
137 |     def generate(self, inputs: YouTubeScriptInput):
138 |         """Generates a new script with a unique ID."""
139 |         if not APIKeyManager.load_and_validate_keys():
140 |             print("❌ Some API keys are missing. Please set them as instructed above.")
141 |             return
142 |         print("✅ API keys are validated and set.")
143 |         main_state = MainGraphState(
144 |             paths=self.paths,
145 |             script_uuid=self.script_uuid,
146 |             inputs=inputs,
147 |             initial_blueprint=None,
148 |             refined_blueprint=None,
149 |         )
150 |
151 | workflow = StateGraph(MainGraphState)
152 | ## define all nodes
153 | workflow.add_node("create_directory", self.create_directory)
154 | workflow.add_node("youtube_content_strategist", self.youtube_content_strategist)
155 | workflow.add_node("research_analyst", self.research_analyst)
156 | workflow.add_node("youtube_script_architect", self.youtube_script_architect)
157 | workflow.add_node("youtube_script_writer", self.youtube_script_writer)
158 | workflow.add_node("youtube_description_writer", self.youtube_description_writer)
159 | ## creating edges
160 | workflow.add_edge("create_directory", "youtube_content_strategist")
161 | workflow.add_edge("youtube_content_strategist", "research_analyst")
162 | workflow.add_edge("research_analyst", "youtube_script_architect")
163 | workflow.add_edge("youtube_script_architect", "youtube_script_writer")
164 | workflow.add_conditional_edges(
165 | "youtube_script_writer",
166 | self.create_description,
167 | {
168 | "end": END,
169 | "continue": "youtube_description_writer",
170 | },
171 | )
172 |         ## set entry point
173 |         workflow.set_entry_point("create_directory")
174 |         ## compile
175 | app = workflow.compile()
176 | ## invoke
177 | app.invoke(main_state)
178 |
179 |
180 |
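For orientation, a hedged sketch of driving the whole graph without the Streamlit UI; it mirrors what `ui.py` does further down, and every input value below is illustrative rather than a required default:

```python
# Run the full pipeline programmatically; the Streamlit UI in ui.py builds the same objects.
from src import YouTubeScriptInput, YouTubeScriptGenerator

inputs = YouTubeScriptInput(
    language="English",
    tone="Present Only Facts",
    video_length="10-15 min short video",
    video_title="How Solar Panels Work",  # illustrative title
    description=True,                     # also produce video_description.txt
)

YouTubeScriptGenerator(path="scripts").generate(inputs)
# Outputs land in scripts/<6-char-id>/: blueprint.json, internet_search/*.json,
# refined_blueprint.json, script_output.txt and (if requested) video_description.txt.
```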
--------------------------------------------------------------------------------
/src/refined_blueprint/__init__.py:
--------------------------------------------------------------------------------
1 | from src.refined_blueprint.refined_blueprint import YouTubeScriptArchitect
2 |
3 | __all__ = ["YouTubeScriptArchitect"]
4 |
--------------------------------------------------------------------------------
/src/refined_blueprint/refined_blueprint.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import logging
4 | from typing import List, Dict
5 | from langchain_core.prompts import ChatPromptTemplate
6 | from langchain_core.runnables import chain
7 | from src.refined_blueprint.structured_output_schema import RefinedBluePrint
8 | from src.agent_prompt import GetPrompt
9 | from rich.console import Console
10 | from src.baseLLM import BaseLLM
11 |
12 | # Initialize Rich Console
13 | console = Console()
14 |
15 |
16 | class YouTubeScriptArchitect(BaseLLM):
17 | def __init__(
18 | self,
19 | base_path: str,
20 | uuid: str,
21 | model="gpt-4o",
22 | ):
23 | super().__init__(model)
24 | self.path = base_path
25 | self.uuid = uuid
26 |
27 | @staticmethod
28 | def _fetch_prompt() -> str:
29 |         """Fetches the structured prompt for the YouTube script architect."""
30 | return GetPrompt.get_prompt("youtube_script_architect")
31 |
32 | def _get_prompt(self) -> ChatPromptTemplate:
33 | """Creates a ChatPromptTemplate for refining the YouTube script blueprint."""
34 | messages = [
35 | ("system", self._fetch_prompt()),
36 | (
37 | "user",
38 | (
39 | """
40 | Create and refine the YouTube script blueprint based on the following details:
41 |
42 | Video Title: {video_title}
43 | Video Length: {video_length}
44 | Old Blueprint: {initial_blueprint}
45 |
46 | Internet Search: {internet_research}
47 |
48 | Generate the refined YouTube script blueprint.
49 | """
50 | ),
51 | ),
52 | ]
53 | return ChatPromptTemplate.from_messages(messages)
54 |
55 | def _get_internet_research(self, folder_path: str) -> str:
56 | """Reads all JSON files in a folder and extracts internet search content."""
57 | try:
58 | json_files = [f for f in os.listdir(folder_path) if f.endswith(".json")]
59 | internet_search_content = ""
60 |
61 | for file in json_files:
62 | file_path = os.path.join(folder_path, file)
63 | with open(file_path, "r") as f:
64 | data = json.load(f)
65 | section_title = data.get("section_info", {}).get(
66 | "section_title", "Unknown Section"
67 | )
68 | internet_search_text = "\n".join(
69 | [x["content"] for x in data.get("internet_search", [])]
70 | )
71 |
72 | internet_search_content += (
73 | f"Section Title: {section_title}\n{internet_search_text}\n\n"
74 | )
75 |
76 | return internet_search_content
77 | except Exception as e:
78 | logging.error(f"Error reading JSON files from {folder_path}: {str(e)}")
79 | return ""
80 |
81 |     def _get_initial_blueprint(self, initial_blueprint: Dict) -> str:
82 | """Formats the initial blueprint as a string."""
83 | try:
84 | sections = [
85 | f"\nTitle: {section['section_title']}\nDescription: {section['description']}\nTime: {section['time']}"
86 | for section in initial_blueprint.get("sections", [])
87 | ]
88 | return (
89 | f"Video Title: {initial_blueprint.get('page_title', 'Unknown Title')}\n"
90 | + "".join(sections)
91 | )
92 | except Exception as e:
93 | logging.error(f"Error formatting initial blueprint: {str(e)}")
94 | return ""
95 |
96 | def refine_blueprint_run(self, inputs, initial_blueprint):
97 | """Runs the refinement process for the YouTube script blueprint."""
98 | try:
99 |             initial_blueprint_str = self._get_initial_blueprint(initial_blueprint)
100 | internet_research = self._get_internet_research(self.path.internet_search)
101 | refine_blueprint_prompt = self._get_prompt()
102 | refine_blueprint_chain = (
103 | refine_blueprint_prompt
104 | | self.llm.with_structured_output(RefinedBluePrint)
105 | )
106 |
107 | refined_blueprint = refine_blueprint_chain.invoke(
108 | {
109 | "video_title": inputs.video_title,
110 | "video_length": inputs.video_length,
111 | "initial_blueprint": initial_blueprint_str,
112 | "internet_research": internet_research,
113 | }
114 | )
115 |
116 | output_file = os.path.join(self.path.base, "refined_blueprint.json")
117 | with open(output_file, "w") as json_file:
118 | json.dump(refined_blueprint.dict(), json_file, indent=4)
119 | console.print(
120 | f"[bold green]✔ Refined Blueprint saved successfully at:[/bold green] {output_file}"
121 | )
122 | return {"refined_output": refined_blueprint.dict()}
123 | except Exception as e:
124 | logging.error(f"Failed to refine blueprint: {str(e)}")
125 | return {"refined_output": None}
126 |
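`_get_internet_research()` above and `GenerateScript.generate()` in `writer.py` both read the per-section files under `internet_search/`, which are written by the `Researcher` agent. Based only on how those readers access the data, each `<n>.json` has roughly the following shape; the values are illustrative and the real files may carry extra keys:

```python
# Illustrative shape of an internet_search/<n>.json file, inferred from the readers.
example = {
    "section_info": {"section_title": "Hook / Introduction"},
    "internet_search": [
        {"content": "First research snippet used downstream..."},
        {"content": "Second research snippet..."},
    ],
}

# What the readers extract from it:
section_title = example["section_info"].get("section_title", "Unknown Section")
research_text = "\n".join(x["content"] for x in example["internet_search"])
```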
--------------------------------------------------------------------------------
/src/refined_blueprint/structured_output_schema.py:
--------------------------------------------------------------------------------
1 | from typing import List, Optional
2 | from pydantic import BaseModel, Field
3 |
4 |
5 | class Section(BaseModel):
6 |     section_title: str = Field(
7 |         ...,
8 |         title="Title of the section. Make sure the section title is meaningful, engaging, and grounded in the research.",
9 |     )
10 |     description: str = Field(
11 |         ...,
12 |         title="Content of the section; must be related to the YouTube script. Don't add anything about the video itself.",
13 |     )
14 |     time: str = Field(..., title="Start and end time of this section")
15 | 
16 |     pointers: str = Field(
17 |         description="Clear, pointer-style guidance on what the writer should cover in this section, based on the research, to ensure engaging and informative content"
18 |     )
19 |
20 | @property
21 | def as_str(self) -> str:
22 | return f"## {self.section_title}\n\n{self.description}".strip()
23 |
24 |
25 | class RefinedBluePrint(BaseModel):
26 | page_title: str = Field(
27 | ...,
28 | title="Title of the video.",
29 | )
30 | sections: List[Section] = Field(
31 | default_factory=list,
32 | title="Titles and descriptions for each section of the video.",
33 | )
34 |
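As a quick sanity check of the schema above: `with_structured_output(RefinedBluePrint)` in `refined_blueprint.py` returns instances of this model, and `.dict()` is what ends up in `refined_blueprint.json`. The field values below are made up for illustration:

```python
from src.refined_blueprint.structured_output_schema import RefinedBluePrint, Section

blueprint = RefinedBluePrint(
    page_title="How Solar Panels Work",
    sections=[
        Section(
            section_title="Hook: The Power of Sunlight",
            description="Open with a surprising fact about how much solar energy reaches Earth daily.",
            time="00:00 - 00:45",
            pointers="Lead with the hook, preview what the viewer will learn, keep it under a minute.",
        )
    ],
)

print(blueprint.sections[0].as_str)    # "## Hook: The Power of Sunlight\n\n..."
print(blueprint.dict()["page_title"])  # same structure that refined_blueprint.json stores
```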
--------------------------------------------------------------------------------
/src/writer/__init__.py:
--------------------------------------------------------------------------------
1 | from src.writer.writer import GenerateScript
2 |
3 | __all__ = ["GenerateScript"]
4 |
--------------------------------------------------------------------------------
/src/writer/writer.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | from rich.console import Console
4 | from rich.text import Text
5 | from rich.logging import RichHandler
6 | from typing import List, Dict
7 | from langchain_core.prompts import ChatPromptTemplate
8 | from langchain_core.output_parsers import StrOutputParser
9 | from src.agent_prompt import GetPrompt
10 | from src.baseLLM import BaseLLM
11 |
12 |
13 | # Initialize Rich console
14 | console = Console()
15 |
16 |
17 | class GenerateScript(BaseLLM):
18 | def __init__(
19 | self,
20 | refine_output: str,
21 | model="gpt-4o",
22 | ) -> None:
23 | super().__init__(model)
24 | self.output_folders = refine_output
25 | self.k = 2
26 |
27 | blueprint_path = os.path.join(refine_output, "refined_blueprint.json")
28 | try:
29 | with open(blueprint_path, "r") as file:
30 | self.refine_blueprint = json.load(file)
31 | console.print(
32 | Text(
33 | f"✅ Loaded refined blueprint from {blueprint_path}",
34 | style="bold green",
35 | )
36 | )
37 | except Exception as e:
38 | console.print(
39 | Text(f"❌ Error loading blueprint: {str(e)}", style="bold red")
40 | )
41 | raise
42 |
43 | @staticmethod
44 | def _fetch_prompt() -> str:
45 |         """Fetches the structured prompt for the YouTube script writer."""
46 | return GetPrompt.get_prompt("youtube_script_writer")
47 |
48 | def _get_section_prompt(self):
49 | """Generate the prompt for section writing."""
50 | writer_prompt = ChatPromptTemplate.from_messages(
51 | [
52 | ("system", self._fetch_prompt()),
53 | (
54 | "user",
55 | """
56 | Video Title: {video_title}\n
57 | Full Video Length: {video_length}\n
58 | Tone: {tone}\n
59 | Section Name: {section_name}\n
60 | Allocated Time: {allocated_Time}\n
61 | Internet Search: {internet_search}\n
62 | Language: {language}\n
63 | Guidance: {guidance}
64 | """,
65 | ),
66 | ]
67 | )
68 | return writer_prompt
69 |
70 | def _generate_section(
71 | self, index: int, section: Dict, data: List[str], inputs
72 | ) -> str:
73 | """Generates content for a section."""
74 | section_name = section["section_title"]
75 | section_description = section["description"]
76 | section_time = section["time"]
77 | section_guidance = section["pointers"]
78 |
79 | console.print(
80 | Text(
81 | f"✍️ Generating content for section: {section_name}", style="bold yellow"
82 | )
83 | )
84 |
85 | writer_prompt = self._get_section_prompt()
86 | writer = writer_prompt | self.llm | StrOutputParser()
87 |
88 | complete_output = writer.invoke(
89 | {
90 | "video_title": inputs.video_title,
91 | "video_length": inputs.video_length,
92 | "tone": inputs.tone,
93 | "section_name": f"Title: {section_name}\nDescription: {section_description}",
94 | "allocated_Time": section_time,
95 | "internet_search": "\n".join(data),
96 | "language": inputs.language,
97 | "guidance": section_guidance,
98 | }
99 | )
100 |
101 | return complete_output
102 |
103 | def generate(self, inputs):
104 | """Main function to generate the entire script."""
105 | sections = self.refine_blueprint["sections"]
106 | complete_output = ""
107 | console.print(Text("🔄 Research Workflow Initialized", style="bold green"))
108 | for index, section in enumerate(sections):
109 | internet_search_path = os.path.join(
110 | self.output_folders, "internet_search", f"{index+1}.json"
111 | )
112 | try:
113 | with open(internet_search_path, "r") as file:
114 | data = json.load(file)
115 | content = [x["content"] for x in data["internet_search"]]
116 | except Exception as e:
117 | console.print(
118 | Text(
119 | f"⚠️ Failed to load internet search data: {str(e)}",
120 | style="bold red",
121 | )
122 | )
123 | content = []
124 |
125 | section_output = self._generate_section(index, section, content, inputs)
126 | complete_output += f"\n#### Section: {section['section_title']}\n[{section['time']}]: {section_output}\n"
127 |
128 |
129 | output_file = os.path.join(self.output_folders, "script_output.txt")
130 | with open(output_file, "w") as file:
131 | file.write(complete_output)
132 |
133 | console.print(
134 | Text(f"✅ Script generated and saved to {output_file}", style="bold green")
135 | )
136 |
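The writer can also be pointed at an existing script folder on its own, which is essentially what the `youtube_script_writer` node does; the folder name and input values below are illustrative, and `refined_blueprint.json` plus `internet_search/<n>.json` must already exist (which is why the graph runs this node after `youtube_script_architect` and `research_analyst`):

```python
from src.writer import GenerateScript
from src.datatypes import YouTubeScriptInput

inputs = YouTubeScriptInput(
    language="English",
    tone="Present Only Facts",
    video_length="10-15 min short video",
    video_title="How Solar Panels Work",
    description=False,
)

# refine_output is the per-script folder created by the graph, e.g. scripts/<6-char-id>/
GenerateScript(refine_output="scripts/ab12cd").generate(inputs)
# Writes script_output.txt there, with one "#### Section: ..." block per blueprint section.
```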
--------------------------------------------------------------------------------
/ui.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import time
3 | import os
4 | import threading
5 | from src import YouTubeScriptInput, YouTubeScriptGenerator
6 |
7 | st.set_page_config(
8 | page_title="YouTube Script Generator", page_icon="🎬", layout="centered"
9 | )
10 |
11 |
12 | class YouTubeScriptApp:
13 | def __init__(self):
14 | self.script_generated = False
15 | self.yt_script_generator = YouTubeScriptGenerator()
16 |
17 | def run_generation(self, youtube_inputs):
18 | """Background function to run script generation."""
19 | self.yt_script_generator.generate(youtube_inputs)
20 | self.script_generated = True # Mark process as completed
21 |
22 | def check_script_status(self):
23 | """Continuously check for each processing step in the background."""
24 | path = self.yt_script_generator.paths.base
25 | blueprint_done = search_done = refined_done = False
26 |
27 | while not self.script_generated:
28 | time.sleep(1)
29 |
30 | if not blueprint_done and os.path.exists(
31 | os.path.join(path, "blueprint.json")
32 | ):
33 | st.success("📜 Blueprint Done ✅")
34 | blueprint_done = True
35 |
36 | if not search_done and os.path.exists(
37 | os.path.join(path, "internet_search")
38 | ):
39 | if any(
40 | f.endswith(".json")
41 | for f in os.listdir(os.path.join(path, "internet_search"))
42 | ):
43 | st.info("🔍 Internet Search Done ✅")
44 | search_done = True
45 |
46 | if not refined_done and os.path.exists(
47 | os.path.join(path, "refined_blueprint.json")
48 | ):
49 | st.success("📜 Refined Outline Done ✅")
50 | refined_done = True
51 |
52 | if self.script_generated:
53 | break
54 |
55 | def read_file(self, filename):
56 | """Utility function to read the content of a file."""
57 | file_path = os.path.join(self.yt_script_generator.paths.base, filename)
58 | if os.path.exists(file_path):
59 | with open(file_path, "r", encoding="utf-8") as file:
60 | return file.read()
61 | return "File not found!"
62 |
63 | def main(self):
64 | st.title("🎥 YouTube Script Generator")
65 | st.markdown("---")
66 |
67 | language = st.text_input("🌍 Language for the script:")
68 | tone = st.selectbox(
69 | "🎭 Tone:",
70 | [
71 | "Incorporate little Humor in the Video",
72 | "Present Only Facts",
73 | "Emotional",
74 | "Inspirational",
75 | "Educational",
76 | "Entertaining",
77 | "Custom",
78 | ],
79 | )
80 | if tone == "Custom":
81 | tone = st.text_input("📝 Enter custom tone:")
82 |
83 | video_length = st.selectbox(
84 | "⏳ Video Length:",
85 | [
86 | "TikTok/Shots/Reel (15-30 seconds)",
87 | "10-15 min short video",
88 | "20-30 min short video",
89 | ],
90 | )
91 | video_title = st.text_input("🎬 Video Title:")
92 | description = st.checkbox("Include Video Description")
93 |
94 | if st.button("🚀 Generate Script"):
95 | st.markdown("---")
96 | st.subheader("Your Selections:")
97 | st.write(f"**🌍 Language:** {language}")
98 | st.write(f"**🎭 Tone:** {tone}")
99 | st.write(f"**⏳ Video Length:** {video_length}")
100 | st.write(f"**🎬 Video Title:** {video_title}")
101 |
102 | youtube_inputs = YouTubeScriptInput(
103 | language=language,
104 | tone=tone,
105 | video_length=video_length,
106 | video_title=video_title,
107 | description=description,
108 | )
109 |             with st.spinner("Generating script ..."):
110 | thread = threading.Thread(
111 | target=self.run_generation, args=(youtube_inputs,)
112 | )
113 | thread.start()
114 |
115 | self.check_script_status()
116 | thread.join()
117 |
118 | script = self.read_file("script_output.txt")
119 | st.markdown("### 📝 Generated Script")
120 | st.markdown(script)
121 |
122 | if description:
123 | desc = self.read_file("video_description.txt")
124 | st.markdown("### 📝 Video Description")
125 | st.markdown(desc)
126 |
127 | st.success("✨ Script generation complete!")
128 |
129 |
130 | if __name__ == "__main__":
131 | app = YouTubeScriptApp()
132 | app.main()
133 |
--------------------------------------------------------------------------------