18 |
19 | ---
20 |
21 | ## Introducing OpenPlexity Pages
22 |
23 | OpenPlexity Pages is an open-source alternative to Perplexity Pages, designed to transform your research into visually appealing, comprehensive content.
24 | It does not produce publication-ready articles, which still require substantial revision, but experienced editors may find it useful during the initial drafting phase.
25 |
26 | ## What sets OpenPlexity apart?
27 |
28 | - **Open Source**: Unlike Perplexity Pages, OpenPlexity Pages is fully open source, allowing for community contributions and customizations.
29 | - **Privacy-Focused**: Your data stays with you. OpenPlexity Pages runs locally, ensuring your research and content remain private.
30 | - **Customizable**: Tailor the tone of your content to resonate with your target audience, from general readers to subject matter experts.
31 | - **Adaptable**: Easily modify the structure of your articles—add, rearrange, or remove sections to best suit your material.
32 | - **Visual**: Enhance your articles with AI-generated visuals or integrate your own images.
33 |
34 | ## Features That Matter
35 |
36 | - **Local LLM Support (Coming soon!)**: Harness the power of Llama3 and Mixtral using Ollama for content generation.
37 | - **Seamless Creation**: Transform your research into well-structured, beautifully formatted articles with ease.
38 | - **Always Current**: Unlike static embedding-based tools, OpenPlexity Pages uses real-time search results, ensuring your content is up-to-date.
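
The real-time results come from the Serper search API (see `groq_search.py` below). As a rough illustration, this sketch builds the same request body the app sends; the helper name `build_serper_payload` is ours, not part of the project, and the field values mirror those in the source:

```python
import json

def build_serper_payload(query: str, num_results: int = 5) -> str:
    # Mirrors the JSON body groq_search.py posts to https://google.serper.dev/search
    return json.dumps({
        "q": query,          # the search query
        "num": num_results,  # number of organic results to request
        "gl": "us",          # geographic locale
        "hl": "en",          # interface language
        "type": "search",    # standard web search
    })
```

The actual request additionally carries your Serper key in an `X-API-KEY` header.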
39 |
40 | ## A Tool for Everyone
41 |
42 | OpenPlexity Pages empowers creators in any field to share knowledge:
43 |
44 | - **Educators**: Develop comprehensive study guides, breaking down complex topics into digestible content.
45 | - **Researchers**: Create detailed reports on your findings, making your work more accessible.
46 | - **Hobbyists**: Share your passions by creating engaging guides that inspire others.
47 | - **Content Creators**: Produce well-researched, visually appealing articles on any topic.
48 |
49 | # Requirements
50 | - `Groq API Key`
51 | - `Serper API Key`
52 |
53 | # Getting Started
54 |
55 | Follow these instructions to set up and run OpenPlexity Pages using Poetry.
56 |
57 | ## Installation
58 |
59 | First, ensure you have Poetry installed. If not, install it via pip:
60 |
61 | ```bash
62 | pip install poetry
63 | ```
64 |
65 | Once Poetry is installed, navigate to your project directory and install the dependencies:
66 |
67 | ```bash
68 | poetry install
69 | ```
70 |
71 | ## Configuration
72 |
73 | Next, create a `.env` file in the root directory of the project. This file stores your Groq and Serper API keys. Use the following command to create the file and add your keys:
74 |
75 | ```bash
76 | echo "GROQ_API_KEY=your_groq_api_key
77 | BASE_URL=https://rentry.co
78 | SERPER_API_KEY=your_serper_api_key" > .env
79 | ```
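
If you want to confirm the file is well-formed before launching the app, a minimal standard-library sketch (the helper name `check_env_file` is ours, not part of the project) can parse it and report any required keys left blank:

```python
def check_env_file(path: str = ".env") -> list:
    """Return the required keys that are missing or empty in a dotenv-style file."""
    found = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks and comments; split on the first '=' only
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                found[key.strip()] = value.strip()
    return [k for k in ("GROQ_API_KEY", "SERPER_API_KEY") if not found.get(k)]
```

An empty return value means both keys are set and the app should start cleanly.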
80 |
81 | ## Running the Application
82 |
83 | To run the application, use the following command:
84 |
85 | ```bash
86 | poetry run streamlit run openplexity_pages/app.py
87 | ```
88 |
89 | And that's it! Your application should now be up and running. Enjoy exploring OpenPlexity Pages!
90 |
91 | ---
92 |
93 | ## Architecture
94 |
95 |
96 |
97 |
98 |
99 | ## Contribute
100 |
101 | OpenPlexity Pages thrives on community contributions. Whether you're fixing bugs, adding features, or improving docs, we welcome your input! Check out our [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
102 |
103 | ## Support the Project
104 |
105 | Love OpenPlexity Pages? Here's how you can help:
106 |
107 | - Star us on GitHub
108 |
109 | ## The Power of Open Source
110 |
111 | While Perplexity Pages offers a polished, hosted solution, OpenPlexity Pages brings the power of AI-driven content creation to the open-source community. We believe in the potential of collaborative development and the importance of data privacy.
112 |
113 | With OpenPlexity Pages, you have the freedom to host your own instance, contribute to its development, and create content that educates, inspires, and engages your audience—all while maintaining full control over your data and the tool itself.
114 |
115 | **Let's see what we can create together.**
116 |
117 | ## Roadmap
118 | - [ ] Make better
119 | - [ ] Fix image feature
120 | - [ ] Add more document export modalities
121 | - [ ] Local LLM support
122 | - [ ] Settings for LLMs
123 |
124 | ## Acknowledgement
125 | We are very grateful to [MutatedMind](https://mutatedmind.com) for leading the UI development.
126 |
127 | ## License
128 |
129 | [MIT](https://opensource.org/licenses/MIT)
130 |
131 | Copyright (c) 2024-present, Alex Fazio
132 |
133 | ---
134 |
135 | [](https://x.com/alxfazio/status/1816167602265157672)
136 |
--------------------------------------------------------------------------------
/openplexity_pages/agent_writer.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import os
3 | import re
4 | import datetime
5 | from groq import Groq
6 | from crewai import Agent, Task, Crew, Process
7 | from textwrap import dedent
8 | from langchain_groq import ChatGroq
9 | from crewai_tools import SerperDevTool
10 | from dotenv import load_dotenv
11 |
12 | # Set up logging
13 | logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
14 |
15 | # Load environment variables from .env file
16 | load_dotenv()
17 | search_tool = SerperDevTool()
18 |
19 | # Get the API key from the environment variable
20 | SERPER_API_KEY = os.getenv('SERPER_API_KEY')
21 | GROQ_API_KEY = os.getenv('GROQ_API_KEY')
22 |
23 | # Check if the 'output-files' directory exists, and create it if it doesn't
24 | if not os.path.exists('output-files'):
25 | os.makedirs('output-files')
26 |
27 | # Agent Definitions
28 |
29 | writer_agent = Agent(
30 | role=dedent((
31 | """
32 | You are a professional writer.
33 | """)),
34 | backstory=dedent((
35 | """
36 | You are a professional writer, experienced in adhering to writing instructions for outstanding quality writing output.
37 | """)),
38 | goal=dedent((
39 | """
40 | Write an article according to the brief.
41 | """)),
42 | allow_delegation=False,
43 | verbose=True,
44 | max_iter=1,
45 | max_rpm=3,
46 | llm=ChatGroq(temperature=0.8, model_name="llama-3.1-70b-versatile"),
47 | )
48 |
49 | # Task Definitions
50 |
51 | task_is_writing = Task(
52 | description=dedent((
53 | """
54 | {prompt}
55 | """)),
56 | expected_output=dedent((
57 | """
58 | Present your article section using the following format:
59 |
60 | <article>
61 | Write your four sentences here, including <cite> tags for the numbered citations within the text.
62 | </article>
63 |
64 | <sources>
65 | List your sources here, numbered to match the inline citations.
66 | </sources>
67 |
68 | agent=writer_agent,
69 | output_file=f'output-files/output_task_is_writing_{datetime.datetime.now().strftime("%Y%m%d_%H%M%S")}.md'
70 | )
71 |
72 |
73 | # Crew Kickoff
74 | def main(prompt):
75 | # Instantiate your crew with a sequential process
76 | crew = Crew(
77 | agents=[writer_agent],
78 | tasks=[task_is_writing],
79 | verbose=2,
80 | process=Process.sequential
81 | )
82 |
83 | inputs = {
84 | "prompt": prompt,
85 | }
86 |
87 | result = crew.kickoff(inputs=inputs)
88 | logging.info("\n\n########################")
89 | logging.info("## Here is your custom crew run result:")
90 | logging.info("########################\n")
91 | logging.info(result)
92 |
93 | # Convert result to string if it's not already
94 | return str(result)
95 |
96 |
97 | def summarise_paragraph(paragraph):
98 | summary = ""
99 |
100 | # Initialize the Groq client
101 | client = Groq()
102 |
103 | try:
104 | completion = client.chat.completions.create(
105 | model="llama-3.1-70b-versatile", # Keeping the original model name
106 | messages=[
107 | {
108 | "role": "system",
109 | "content": dedent(f"""
110 | Your task is to summarize a given paragraph in a single sentence. Do not omit any key idea. Here's the paragraph:
111 |
112 | <paragraph>
113 | {paragraph}
114 | </paragraph>
115 |
116 | To summarize this paragraph effectively:
117 | 1. Read the paragraph carefully to understand its main idea and key points.
118 | 2. Identify the most important information that captures the essence of the paragraph.
119 | 3. Condense this information into a single, concise sentence that accurately represents the paragraph's content.
120 | 4. Ensure your summary sentence is clear, coherent, and grammatically correct.
121 | 5. Avoid including minor details or examples unless they are crucial to the main idea.
122 |
123 | Please provide your one-sentence summary inside <summary> tags. Your summary should be no longer than 30 words.
124 | """)
125 | }
126 | ],
127 | temperature=0.7,
128 | max_tokens=1024,
129 | top_p=1,
130 | stream=True,
131 | stop=None,
132 | )
133 |
134 | for chunk in completion:
135 | if chunk.choices and chunk.choices[0].delta and chunk.choices[0].delta.content:
136 | content = chunk.choices[0].delta.content
137 | logging.info(content)
138 | summary += content
139 |
140 | except Exception as e:
141 | logging.error(f"An error occurred while generating the summary: {str(e)}")
142 | return None
143 |
144 | # Extract the summary from between the <summary> tags
145 | match = re.search(r'<summary>(.*?)</summary>', summary, re.DOTALL)
146 | if match:
147 | return match.group(1).strip()
148 | else:
149 | logging.warning("Could not find a properly formatted summary in the response.")
150 | return None
--------------------------------------------------------------------------------
/openplexity_pages/groq_search.py:
--------------------------------------------------------------------------------
1 | import os
2 | from groq import Groq
3 | import json
4 | import requests
5 | from dotenv import load_dotenv
6 |
7 | # Load environment variables from .env file
8 | load_dotenv()
9 |
10 | # Initialize Groq client
11 | GROQ_API_KEY = os.getenv('GROQ_API_KEY')
12 | SERPER_API_KEY = os.getenv('SERPER_API_KEY')
13 |
14 | if not GROQ_API_KEY:
15 | raise ValueError("GROQ_API_KEY environment variable is not set")
16 | client = Groq(api_key=GROQ_API_KEY)
17 | MODEL = 'llama3-groq-70b-8192-tool-use-preview'
18 |
19 | def google_search(query):
20 | """Perform a Google search using Serper API and return detailed results"""
21 | url = 'https://google.serper.dev/search'
22 | payload = json.dumps({
23 | 'q': query,
24 | 'num': 5, # Request 5 results
25 | 'gl': 'us',
26 | 'hl': 'en',
27 | 'type': 'search'
28 | })
29 | headers = {
30 | 'X-API-KEY': SERPER_API_KEY,
31 | 'Content-Type': 'application/json'
32 | }
33 | response = requests.post(url, headers=headers, data=payload)
34 | results = response.json().get('organic', [])
35 |
36 | formatted_results = ["Search results:"]
37 | for r in results:
38 | formatted_result = f"Title: {r.get('title', '')}\nLink: {r.get('link', '')}\nSnippet: {r.get('snippet', '')}\n---"
39 | formatted_results.append(formatted_result)
40 |
41 | return "\n".join(formatted_results)
42 |
43 | def run_conversation(user_prompt):
44 | messages = [
45 | {
46 | "role": "system",
47 | "content": """
48 | You are an AI assistant designed to help with Google searches and provide comprehensive answers based on the search results. Your task is to use the google_search function to find information and present the results in a clear and informative manner.
49 |
50 | To perform a search, use the following function:
51 | google_search(query="{{QUERY}}")
52 |
53 | The search results will include titles, links, and detailed snippets. Always cite the source of information by mentioning the link when providing answers.
54 |
55 | Present the search results in the following format:
56 |
57 | ```
58 | Search results:
59 | Title: [Title of the search result]
60 | Link: [URL of the search result]
61 | Snippet: [Snippet from the search result]
62 | ---
63 | [Repeat for each search result]
64 | ```
65 |
66 | After presenting the search results, provide a comprehensive answer to the query based on the information found. Synthesize the information from multiple sources when possible, and always cite the sources by mentioning the relevant links.
67 |
68 | Here is the query to search for:
69 | {{QUERY}}
70 |
71 | Begin by performing the search using the google_search function. Then, present the search results in the specified format. Finally, provide your answer to the query based on the search results.
72 |
73 | If the google_search function returns an error or no results, inform the user that the search was unsuccessful and that you are unable to provide an answer based on the given query.
74 |
75 | Present your final answer within <answer> tags.
76 | """
77 | },
78 | {
79 | "role": "user",
80 | "content": user_prompt,
81 | }
82 | ]
83 |
84 | tools = [
85 | {
86 | "type": "function",
87 | "function": {
88 | "name": "google_search",
89 | "description": "Perform a Google search and return top 5 results",
90 | "parameters": {
91 | "type": "object",
92 | "properties": {
93 | "query": {
94 | "type": "string",
95 | "description": "The search query",
96 | }
97 | },
98 | "required": ["query"],
99 | },
100 | },
101 | }
102 | ]
103 |
104 | try:
105 | response = client.chat.completions.create(
106 | model=MODEL,
107 | messages=messages,
108 | tools=tools,
109 | tool_choice="auto",
110 | max_tokens=4096
111 | )
112 |
113 | response_message = response.choices[0].message
114 | tool_calls = response_message.tool_calls
115 |
116 | if tool_calls:
117 | available_functions = {
118 | "google_search": google_search,
119 | }
120 | messages.append(response_message)
121 |
122 | for tool_call in tool_calls:
123 | function_name = tool_call.function.name
124 | function_to_call = available_functions[function_name]
125 | function_args = json.loads(tool_call.function.arguments)
126 | function_response = function_to_call(
127 | query=function_args.get("query")
128 | )
129 | messages.append(
130 | {
131 | "tool_call_id": tool_call.id,
132 | "role": "tool",
133 | "name": function_name,
134 | "content": function_response,
135 | }
136 | )
137 |
138 | second_response = client.chat.completions.create(
139 | model=MODEL,
140 | messages=messages
141 | )
142 | return second_response.choices[0].message.content
143 | else:
144 | return response_message.content
145 | except Exception as e:
146 | return f"An error occurred: {str(e)}"
--------------------------------------------------------------------------------
/openplexity_pages/prompt_helper.py:
--------------------------------------------------------------------------------
1 | from prompt_states import prompt_states
2 | from agent_writer import main as agent_writer
3 | import groq_search
4 |
5 | # Default values moved here
6 | DEFAULT_GLOBAL_PROMPT_ELEM = {
7 | "story_title": "",
8 | "tone_style": "",
9 | "audience": "",
10 | "persona_first_name": "",
11 | "persona_last_name": "",
12 | "exemplars": ""
13 | }
14 |
15 | DEFAULT_BLOCK_LEVEL_PROMPT_ELEM = {
16 | "Introduction": {"title": "Introduction", "word_count": 60, "keywords": "", "notes": ""},
17 | "Main": {"title": "Main", "word_count": 60, "keywords": "", "notes": ""},
18 | "Conclusion": {"title": "Conclusion", "word_count": 60, "keywords": "", "notes": ""}
19 | }
20 |
21 |
22 | # State Management Functions
23 |
24 | def load_general_prompt_state():
25 | return prompt_states
26 |
27 |
28 | def save_general_prompt_state(state):
29 | prompt_states.clear()
30 | prompt_states.update(state)
31 |
32 |
33 | # Setter Functions
34 |
35 | def update_global_prompt_elem(key, value):
36 | if "global_prompt_elem" not in prompt_states:
37 | prompt_states["global_prompt_elem"] = {}
38 | prompt_states["global_prompt_elem"][key] = value
39 |
40 |
41 | def update_block_prompt_elem(block, key, value):
42 | if "block_level_prompt_elem" not in prompt_states:
43 | prompt_states["block_level_prompt_elem"] = {}
44 | if block not in prompt_states["block_level_prompt_elem"]:
45 | prompt_states["block_level_prompt_elem"][block] = {}
46 | prompt_states["block_level_prompt_elem"][block][key] = value
47 |
48 |
49 | # Getter Functions
50 |
51 | def get_global_prompt_elem(key, default=None):
52 | if default is None:
53 | default = DEFAULT_GLOBAL_PROMPT_ELEM.get(key, "")
54 | return prompt_states.get("global_prompt_elem", {}).get(key, default)
55 |
56 |
57 | def remove_block_prompt_elem(block):
58 | if "block_level_prompt_elem" in prompt_states and block in prompt_states["block_level_prompt_elem"]:
59 | del prompt_states["block_level_prompt_elem"][block]
60 |
61 |
62 | def get_block_prompt_elem(block, key, default=None):
63 | if default is None:
64 | default = DEFAULT_BLOCK_LEVEL_PROMPT_ELEM.get(block, {}).get(key, "")
65 | return prompt_states.get("block_level_prompt_elem", {}).get(block, {}).get(key, default)
66 |
67 |
68 | # Prompt Generation Function
69 |
70 | def get_formatted_prompt(block):
71 | global_elements = load_general_prompt_state()["global_prompt_elem"]
72 | block_elements = load_general_prompt_state()["block_level_prompt_elem"].get(block, {})
73 |
74 | story_title = global_elements.get('story_title', '')
75 | block_title = block_elements.get('title', block)
76 |
77 | if story_title and block_title:
78 | groq_search_query = f"{story_title} {block_title}"
79 | research_results = groq_search.run_conversation(groq_search_query)
80 | else:
81 | research_results = ""
82 |
83 | # Fetch word count from block_elements, which is updated by app.py
84 | word_count = block_elements.get('word_count', 60) // 15  # `// 15` converts the desired word count into an
85 | # approximate sentence count, which is more easily recognized by LLMs.
86 |
87 | # Include the story title in the prompt
88 | story_title = global_elements.get('story_title', 'Untitled Story')
89 |
90 | prompt = f"You are tasked with writing a concise article section for a larger story. Your goal is to create informative and engaging content that adheres to specific guidelines. Follow these instructions carefully: "
91 |
92 | prompt += f"""
93 | 1. Review the following research results. These will serve as the factual basis for your writing:
94 |
95 | <research_results>
96 | {research_results}
97 | </research_results>
98 | """
99 |
100 | prompt += f"""
101 | 2. Take note of the story title and section title:
102 |
103 | <story_title>{story_title}</story_title>
104 | <section_title>{block_elements.get('title', block)}</section_title>
105 | """
106 |
107 | prompt += f"""
108 | 3. Consider the following input variables while writing:
109 | """
110 | if global_elements.get("tone_style"):
111 | prompt += f"\n{global_elements['tone_style']}"
112 | if global_elements.get("audience"):
113 | prompt += f"\n{global_elements['audience']}"
114 | if block_elements.get("keywords"):
115 | prompt += f"\n{block_elements['keywords']}\n"
116 |
117 | prompt += f"""\n4. Write a {word_count}-sentence article section based on the story title and section title provided. Ensure that each sentence contains factual information about the section topic.
118 | """
119 |
120 | prompt += f"""5. Include sources for your information as inline citations (e.g., [1]) within the text. After the {word_count} sentences, provide an aggregate list of sources used.
121 | """
122 |
123 | prompt += f"""6. Maintain the specified tone throughout the article section. Remember your target audience and adjust your language and complexity accordingly.
124 | """
125 |
126 | prompt += f"""7. Write in the style exemplified by the style examples provided. Emulate the voice and manner of expression demonstrated in these examples.
127 | """
128 |
129 | prompt += f"""8. Incorporate the given keywords naturally into your text. Don't force them if they don't fit the context of the section.
130 | """
131 |
132 | prompt += f"""9. Present your article section within <article> tags. Use <cite> tags for the numbered citations within the text, and <sources> tags for the list of sources at the end.
133 | """
134 |
135 | prompt += f"""10. Focus on creating engaging, factual content that meets all the specified requirements. Your goal is to inform and captivate the target audience while maintaining the appropriate tone and style.
136 | """
137 |
138 | if block_elements.get("notes"):
139 | prompt += f"\n11. Consider these additional notes and include relevant information if it fits within the context of the '{block_elements.get('title', block)}' section:\n{block_elements.get('notes')}"
140 |
141 | prompt += f"""
142 | \nPresent your article section using the following format:
143 |
144 | <article>
145 | Write your {word_count} sentences here, including <cite> tags for the numbered citations within the text.
146 | </article>
147 |
148 | <sources>
149 | List your sources here, numbered to match the inline citations.
150 | </sources>
151 |
152 | Remember to ground your writing in the provided research results, adhere to the specified tone and style, and create content that is both informative and engaging for the target audience.
153 | """
154 |
155 | return prompt
156 |
157 |
158 | # New function to generate content
159 | def generate_api_response(block):
160 | prompt = get_formatted_prompt(block)
161 | full_response = agent_writer(prompt)
162 | return full_response
163 |
164 |
165 | def get_user_friendly_error_message(error):
166 | if isinstance(error, ValueError) and "blocked by the safety filters" in str(error):
167 | return ("The content was blocked by safety filters. Please try rephrasing your request or using less "
168 | "controversial topics.")
169 | elif isinstance(error, Exception):
170 | return f"An unexpected error occurred: {str(error)}. Please try again or contact support if the issue persists."
171 | else:
172 | return "An unknown error occurred. Please try again or contact support if the issue persists."
173 |
174 |
175 | # Initialization
176 | if not prompt_states["global_prompt_elem"]:
177 | prompt_states["global_prompt_elem"] = DEFAULT_GLOBAL_PROMPT_ELEM.copy()
178 |
179 | if not prompt_states["block_level_prompt_elem"]:
180 | prompt_states["block_level_prompt_elem"] = DEFAULT_BLOCK_LEVEL_PROMPT_ELEM.copy()
--------------------------------------------------------------------------------
/openplexity_pages/app.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import toggles_helper
3 | import prompt_helper
4 | from rentry import export_to_rentry
5 | import webbrowser
6 | from toggle_states import toggle_states_structure
7 | from serper_api import search_images as serper_search_images
8 | from streamlit_image_select import image_select
9 | import re
10 | import html
11 | import markdown
12 | from dotenv import load_dotenv
13 | import os
14 |
15 | # Load environment variables from .env file
16 | load_dotenv()
17 |
18 | # Define story blocks
19 | story_blocks = ["Introduction", "Main", "Conclusion"]
20 |
21 | # Initialize the story blocks in session state
22 | if 'story_blocks' not in st.session_state:
23 | st.session_state.story_blocks = ["Introduction", "Main", "Conclusion"]
24 |
25 | st.set_page_config(page_title="Openplexity Pages", layout="wide")
26 |
27 | # Custom CSS
28 | st.markdown("""
29 |
100 | """, unsafe_allow_html=True)
101 |
102 | st.markdown('',
103 | unsafe_allow_html=True)
104 |
105 |
106 | # Create three columns: Settings, Content, and Outline/Preview
107 | settings_column, content_column, outline_column = st.columns([1, 2, 1])
108 |
109 | # Add this at the beginning of the script, after the imports
110 | if 'toggles_initialized' not in st.session_state:
111 | toggles_helper.reset_all_toggles()
112 | st.session_state.toggles_initialized = True
113 |
114 |
115 | def format_markdown_content(block, content):
116 | # Convert Markdown to HTML
117 | html_content = markdown.markdown(content)
118 |
119 | # Handle custom tags
120 | html_content = re.sub(r'<sources>(.*?)</sources>',
121 | r'<div class="sources">\1</div>',
122 | html_content,
123 | flags=re.DOTALL)
124 |
125 | return html_content
126 |
127 |
128 | def add_new_block():
129 | new_block_name = f"Custom Block {len(st.session_state.story_blocks) - 2}"
130 | st.session_state.story_blocks.append(new_block_name)
131 | # Initialize prompt elements for the new block
132 | prompt_helper.update_block_prompt_elem(new_block_name, "title", new_block_name)
133 | prompt_helper.update_block_prompt_elem(new_block_name, "word_count", 60)
134 | prompt_helper.update_block_prompt_elem(new_block_name, "keywords", "")
135 | prompt_helper.update_block_prompt_elem(new_block_name, "notes", "")
136 |
137 |
138 | def remove_block(block_name):
139 | if block_name in st.session_state.story_blocks and len(st.session_state.story_blocks) > 3:
140 | st.session_state.story_blocks.remove(block_name)
141 | # Clean up all associated state
142 | for key in list(st.session_state.keys()):
143 | if key.startswith(f"{block_name}_"):
144 | del st.session_state[key]
145 | # Remove block from prompt_helper
146 | prompt_helper.remove_block_prompt_elem(block_name)
147 | print(f"after deletion {block_name}", prompt_helper.get_block_prompt_elem(block_name, None))
148 |
149 |
150 | def toggle_callback(toggle):
151 | st.session_state[toggle] = not st.session_state.get(toggle, False)
152 | value = st.session_state[toggle]
153 | toggles_helper.update_global_toggle_state(toggle, value)
154 | if not value:
155 | prompt_helper.update_global_prompt_elem(toggle, "")
156 |
157 |
158 | def img_to_html(img_url):
159 | img_html = f'<img src="{img_url}" style="max-width: 100%;" alt="">'
160 | return img_html
161 |
162 |
163 | def search_images(image_query, num_images=6):
164 | with st.spinner("Searching for images..."):
165 | images = serper_search_images(image_query, num_images=num_images)
166 | if images:
167 | image_urls = [img['imageUrl'] for img in images]
168 | return image_urls
169 | else:
170 | st.warning("No images found for the given query. Please try a different search term.")
171 | return []
172 |
173 |
174 | def display_image_select(block, image_urls):
175 | selected_image_index = image_select(
176 | label="Select an image",
177 | images=image_urls,
178 | captions=[f"Image {i + 1}" for i in range(len(image_urls))],
179 | use_container_width=True,
180 | return_value="index"
181 | )
182 | if selected_image_index is not None:
183 | selected_image_url = image_urls[selected_image_index]
184 | st.session_state[f"{block}_image_url"] = selected_image_url
185 | st.success(f"Image selected for {block}.")
186 | else:
187 | st.warning("No image selected. Please select an image to add to the article.")
188 |
189 |
190 | with settings_column:
191 | st.header("Article Settings")
192 |
193 | settings_tab, ai_api_settings_tab = st.tabs(
194 | ["Settings", "AI API Settings"])
195 |
196 | with settings_tab:
197 | # Global toggles
198 | for toggle in toggle_states_structure["global_tgl_elem"]:
199 | if toggle not in st.session_state:
200 | st.session_state[toggle] = toggles_helper.get_global_toggle_state(toggle)
201 |
202 | # Convert toggle name to a more readable format
203 | label = " ".join(toggle.split("_")[1:]).title()
204 |
205 | if st.checkbox(f"Toggle {label}", key=f"toggle_{toggle}", value=st.session_state[toggle],
206 | on_change=toggle_callback, args=(toggle,)):
207 | if toggle == "tgl_style":
208 | tone_style = st.selectbox("Tone", [
209 | "Assertive", "Authoritative", "Clear", "Compelling", "Concise",
210 | "Conversational", "Courteous", "Empathetic", "Emotive", "Engaging",
211 | "Friendly", "Funny", "Informative", "Persuasive", "Professional",
212 | "Sarcastic"
213 | ])
214 | prompt_helper.update_global_prompt_elem("tone_style", tone_style)
215 | elif toggle == "tgl_target_audience":
216 | audience = st.selectbox("Audience", [
217 | "Bargain Hunters",
218 | "Children",
219 | "College Students",
220 | "Educators",
221 | "Entrepreneurs",
222 | "Environmental Activists",
223 | "Fans of Specific Entertainment Genres",
224 | "Fitness and Health Enthusiasts",
225 | "General Public",
226 | "High-Income Individuals",
227 | "Hobbyists",
228 | "Homeowners",
229 | "LGBTQ+ Community",
230 | "Low-Income Individuals",
231 | "Luxury Consumers",
232 | "Parents",
233 | "People with Disabilities",
234 | "People with Specific Medical Conditions",
235 | "Personal Finance Seekers",
236 | "Pet Owners",
237 | "Professionals",
238 | "Renters",
239 | "Retirees or Seniors",
240 | "Rural Residents",
241 | "Self-Improvement Seekers",
242 | "Social Justice Advocates",
243 | "Specific Cultural or Ethnic Groups",
244 | "Students",
245 | "Tech Enthusiasts",
246 | "Technology Enthusiasts",
247 | "Travel Enthusiasts",
248 | "Urban Dwellers",
249 | "Young Adults"
250 | ])
251 | prompt_helper.update_global_prompt_elem("audience", audience)
252 | elif toggle == "tgl_persona":
253 | col1, col2 = st.columns(2)
254 | with col1:
255 | first_name = st.text_input("First Name", key="persona_first_name")
256 | with col2:
257 | last_name = st.text_input("Last Name", key="persona_last_name")
258 | if first_name and last_name:
259 | prompt_helper.update_global_prompt_elem("persona_first_name", first_name)
260 | prompt_helper.update_global_prompt_elem("persona_last_name", last_name)
261 | elif toggle == "tgl_exemplars":
262 | examples = st.text_area("Paste Example of tone/style",
263 | prompt_helper.get_global_prompt_elem("exemplars"))
264 | prompt_helper.update_global_prompt_elem("exemplars", examples)
265 |
266 | with ai_api_settings_tab:
267 | st.subheader("AI API Settings")
268 |
269 | # Model selection dropdown
270 | model = st.selectbox(
271 | "Model",
272 | [
273 | "llama-3-sonar-large-32k-online",
274 | "llama-3-sonar-small-32k-online",
275 | "llama-3.1-405b-reasoning",
276 | "llama-3.1-70b-versatile",
277 | "llama-3.1-8b-instant",
278 | "llama3-groq-70b-8192-tool-use-preview",
279 | "llama3-groq-8b-8192-tool-use-preview",
280 | "llama3-70b-8192",
281 | "llama3-8b-8192",
282 | "mixtral-8x7b-32768",
283 | "gemma-7b-it",
284 | "gemma2-9b-it"
285 | ],
286 | key="groq_model"
287 | )
288 |
289 | # API key input for Groq
290 | groq_api_key = st.text_input(
291 | "Groq API Key",
292 | type="password",
293 | key="groq_api_key",
294 | value=os.getenv("GROQ_API_KEY", "") # Display the stored value or an empty string
295 | )
296 |
297 | # Update the .env file with the entered Groq API key
298 | if groq_api_key:
299 | os.environ["GROQ_API_KEY"] = groq_api_key
300 | with open(".env", "a") as f:
301 | f.write(f"GROQ_API_KEY={groq_api_key}\n")
302 |
303 | # API key input for Serper
304 | serper_api_key = st.text_input(
305 | "Serper API Key",
306 | type="password",
307 | key="serper_api_key",
308 | value=os.getenv("SERPER_API_KEY", "") # Display the stored value or an empty string
309 | )
310 |
311 | # Update the .env file with the entered Serper API key
312 | if serper_api_key:
313 | os.environ["SERPER_API_KEY"] = serper_api_key
314 | with open(".env", "a") as f:
315 | f.write(f"SERPER_API_KEY={serper_api_key}\n")
316 |
317 | # Content column
318 | with content_column:
319 | # Add a div with class 'content-column' to target the CSS
320 | st.markdown('<div class="content-column">', unsafe_allow_html=True)
321 |
322 | # Add custom CSS for centering the title
323 | st.markdown("""
324 | <style>
325 | .centered-title {
326 |     text-align: center;
327 |     font-size: 2.5rem;
328 |     font-weight: bold;
329 |     margin-bottom: 1rem;
330 | }
331 | </style>
332 | """, unsafe_allow_html=True)
333 |
334 | # Create a placeholder for the centered header
335 | header_placeholder = st.empty()
336 |
337 | # Display the default title or the user-provided title
338 | if 'story_title' not in st.session_state:
339 | st.session_state.story_title = "Create a New Article"
340 |
341 | header_placeholder.markdown(f'<h1 class="centered-title">{st.session_state.story_title}</h1>',
342 | unsafe_allow_html=True)
343 |
344 | # Display the chat input
345 | story_title = st.chat_input("Story Title", key="story_title_input")
346 | if story_title:
347 | # Convert the story title to title case
348 | st.session_state.story_title = story_title.title()
349 | prompt_helper.update_global_prompt_elem("story_title", st.session_state.story_title)
350 | header_placeholder.markdown(f'<h1 class="centered-title">{st.session_state.story_title}</h1>',
351 | unsafe_allow_html=True)
352 |
353 | # Story blocks
354 | for block in st.session_state.story_blocks:
355 | output_tab, settings_tab, image_tab = st.tabs(["Output", "Settings", "Image"])
356 |
357 | with output_tab:
358 | title = st.chat_input(f"{block} Title", key=f"{block}_title_input")
359 |
360 | if title:
361 | prompt_helper.update_block_prompt_elem(block, "title", title)
362 |
363 | # Create a placeholder for the streamed content
364 | output_placeholder = st.empty()
365 |
366 |
367 | # Function to update the placeholder with streamed content
368 | def update_content():
369 | with st.spinner(f"Generating {block} content..."):
370 | try:
371 | content_generator = prompt_helper.generate_api_response(block)
372 |
373 | # Create a placeholder for the streamed content
374 | content_placeholder = output_placeholder.empty()
375 |
376 | # Accumulate the entire response
377 | full_response = ""
378 | for chunk in content_generator:
379 | full_response += chunk
380 | # Show a loading message or progress bar
381 | content_placeholder.text("Generating content...")
382 |
383 | # Format the complete content
384 | display_content = format_markdown_content(block, full_response)
385 |
386 | # Wrap the content in a block-content div
387 | wrapped_content = f'<div class="block-content">{display_content}</div>'
388 |
389 | # Display the formatted content
390 | content_placeholder.markdown(wrapped_content, unsafe_allow_html=True)
391 |
392 | # Store the complete response in session state
393 | st.session_state[f"{block}_response"] = wrapped_content
394 | except Exception as e:
395 | error_message = prompt_helper.get_user_friendly_error_message(e)
396 | st.error(f"An error occurred while generating content: {error_message}")
397 | st.button("Retry", on_click=update_content)
398 |
399 |
400 | # Run the function
401 | update_content()
402 |
403 | elif f"{block}_response" in st.session_state:
404 | # Add the image at the top of the content if it exists
405 | if f"{block}_image_url" in st.session_state:
406 | image_html = img_to_html(st.session_state[f"{block}_image_url"])
407 | display_content = image_html + st.session_state[f'{block}_response']
408 | else:
409 | display_content = st.session_state[f'{block}_response']
410 |
411 | # st.markdown(f"""
412 | #