├── LICENSE
├── README.md
├── app.py
├── requirements.txt
└── uploads
└── .gitignore
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 Parth Shah
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # QuickDigest AI: Elevate Your Data Interaction Experience
2 |
3 | Discover a new horizon of data interaction with QuickDigest AI, your intelligent companion in navigating through diverse data formats. QuickDigest AI is meticulously crafted to simplify and enrich your engagement with data, ensuring a seamless flow of insights right at your fingertips.
4 |
5 | QuickDigest AI also won the Streamlit App of the Month award; check it out here: https://twitter.com/streamlit/status/1730698585929601085
6 |
7 | ## Features
8 |
9 | ### Effortless Data Extraction and Interaction:
10 | QuickDigest AI stands as a beacon of innovation, allowing users to upload and interact with a variety of file formats including PDFs, Word documents, text files, and even audio/video files. The platform's cutting-edge technology ensures a smooth extraction of data, paving the way for meaningful conversations with the information gleaned from these files.
11 |
12 | ### Engage with Datasets:
13 | Dive into datasets like never before. QuickDigest AI invites you to upload your dataset and engage in a dialogue with it. Our advanced AI algorithms facilitate a conversational interaction with your dataset, making the extraction of insights an intuitive and enriching experience.
14 |
15 | ### Real-Time Web Search:
16 | One of the limitations of large language models is their limited knowledge. QuickDigest AI's real-time web search feature ensures you're always ahead with the latest information. Be it market trends, news updates, or the newest research findings, QuickDigest AI brings the world to you in real-time.
17 |
18 | ### Ignite Your Creative Spark:
19 | For product creators, QuickDigest AI unveils a realm of possibilities. Tired of plain product images? The Product Advertising Image Creator is your tool to craft captivating advertising images that resonate with your audience. Additionally, the Image Generator feature is your canvas to bring your creative visions to life, creating visually appealing images that speak volumes.
20 |
21 | ---
22 | ### Screenshots:
23 |
24 | #### Chat with your files
25 | 
26 | 
27 |
28 | #### Chat with your dataset
29 | 
30 |
31 | #### Product Theming
32 | 
33 | 
34 |
35 | ---
36 |
37 | ### Getting Started
38 |
39 | To get started with this project, clone this repository and install the requirements.
40 |
41 | 1. Clone the repository:
42 | ```
43 | git clone https://github.com/codingis4noobs2/QuickDigest.git
44 | ```
45 |
46 | 2. Change to the project directory:
47 | ```
48 | cd QuickDigest
49 | ```
50 |
51 | 3. Install the required packages:
52 | ```
53 | pip install -r requirements.txt
54 | ```
55 |
56 | 4. Set your own API keys (the app reads them via `st.secrets`):
57 | ```
58 | assembly_api_key = "YOUR_ASSEMBLY_API_KEY"
59 | clipdrop_api_key = "YOUR_CLIPDROP_API_KEY"
60 | ```
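
Since the app looks these values up with `st.secrets`, for local runs they belong in Streamlit's secrets file. A minimal sketch, assuming local development (the file path is Streamlit's convention; the values are placeholders):

```
# .streamlit/secrets.toml — create this file next to app.py
assembly_api_key = "YOUR_ASSEMBLY_API_KEY"
clipdrop_api_key = "YOUR_CLIPDROP_API_KEY"
```

The OpenAI and Replicate keys, by contrast, are entered in the app's sidebar at runtime.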
61 |
62 | 5. Run the app:
63 | ```
64 | streamlit run app.py
65 | ```
66 |
67 | 6. The app will now be accessible at http://localhost:8501.
68 |
--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
1 | # Importing necessary library
2 | import streamlit as st
3 |
4 |
5 | # Setting up the page configuration
6 | st.set_page_config(
7 | page_title="QuickDigest AI",
8 | page_icon=":brain:",
9 | layout="wide",
10 | initial_sidebar_state="expanded"
11 | )
12 |
13 | # Defining the function to display the home page
14 | def home():
15 | import streamlit as st
16 | from streamlit_extras.badges import badge
17 | from streamlit_extras.colored_header import colored_header
18 | from streamlit_extras.let_it_rain import rain
19 |
20 | # Displaying a rain animation with specified parameters
21 | rain(
22 | emoji="🎈",
23 | font_size=54,
24 | falling_speed=5,
25 | animation_length="1",
26 | )
27 |
28 | # Displaying a colored header with specified parameters
29 | colored_header(
30 | label="QuickDigest AI🧠, Your Intelligent Data Companion",
31 | description="~ Powered by OpenAI, Llamaindex, AssemblyAI, Langchain, Replicate, Clipdrop",
32 | color_name="violet-70",
33 | )
34 |
35 | # Displaying information and warnings in the sidebar
36 | st.sidebar.info(
37 | "Visit [OpenAI Pricing](https://openai.com/pricing#language-models) for an overview of the costs incurred depending on the model chosen."
38 | )
39 | st.sidebar.info(
40 | "For key and data privacy concerns: we do not store your key; it is discarded when your session ends. OpenAI also does not use data submitted via its API to train or improve its models unless you explicitly opt in to share it. For more info, please visit the [OpenAI FAQs](https://help.openai.com/en/articles/7039943-data-usage-for-consumer-services-faq)."
41 | )
42 | st.sidebar.warning(
43 | "LLMs may produce inaccurate information about people, places, or facts. Don't entirely trust them."
44 | )
45 |
46 | # Displaying markdown text on the page
47 | st.markdown(
48 | "Discover a new horizon of data interaction with QuickDigest AI, your intelligent companion in navigating through diverse data formats. QuickDigest AI is meticulously crafted to simplify and enrich your engagement with data, ensuring a seamless flow of insights right at your fingertips.",
49 | unsafe_allow_html=True
50 | )
51 | st.markdown(
52 | "**Effortless Data Extraction and Interaction:** QuickDigest AI stands as a beacon of innovation, allowing users to upload and interact with a variety of file formats including PDFs, Word documents, text files, and even audio/video files. The platform's cutting-edge technology ensures a smooth extraction of data, paving the way for meaningful conversations with the information gleaned from these files."
53 | )
54 | st.markdown(
55 | "**Engage with your Datasets:** Dive into datasets like never before. QuickDigest AI invites you to upload your dataset and engage in a dialogue with it. Our advanced AI algorithms facilitate a conversational interaction with your dataset, making the extraction of insights an intuitive and enriching experience."
56 | )
57 | st.markdown(
58 | "**Real-Time Web Search:** One of the limitations of large language models is their limited knowledge. QuickDigest AI's real-time web search feature ensures you're always ahead with the latest information. Be it market trends, news updates, or the newest research findings, QuickDigest AI brings the world to you in real-time."
59 | )
60 | st.markdown(
61 | "**Ignite Your Creative Spark:** For product creators, QuickDigest AI unveils a realm of possibilities. Tired of plain product images? The Product Advertising Image Creator is your tool to craft captivating advertising images that resonate with your audience. Additionally, the Image Generator feature is your canvas to bring your creative visions to life, creating visually appealing images that speak volumes."
62 | )
63 |
64 | st.markdown("---")
65 |
66 | # Displaying a support section with badges and link button
67 | st.markdown("Support Us", unsafe_allow_html=True)
68 | col1, col2, col3, col4 = st.columns(4)
69 | with col1:
70 | st.write("Star this repository on Github")
71 | badge(type="github", name="codingis4noobs2/QuickDigest")
72 | with col2:
73 | st.write("Follow me on Twitter")
74 | badge(type="twitter", name="4gameparth")
75 | with col3:
76 | st.write("Buy me a coffee")
77 | badge(type="buymeacoffee", name="codingis4noobs2")
78 | with col4:
79 | st.link_button("Upvote on Replit", "https://replit.com/@ParthShah38/QuickDigestAI?v=1")
80 |
81 |
82 | # Function to display chat with files page
83 | def chat_with_files():
84 | import os
85 | import streamlit as st
86 | from streamlit_extras.badges import badge
87 | from streamlit_extras.colored_header import colored_header
88 | from llama_index import (
89 | OpenAIEmbedding,
90 | ServiceContext,
91 | set_global_service_context,
92 | )
93 | from llama_index.llms import OpenAI
94 | from llama_index.chat_engine.types import StreamingAgentChatResponse
95 | from llama_index import SimpleDirectoryReader, VectorStoreIndex
96 | import assemblyai as aai
97 | from PyPDF2 import PdfReader
98 | from docx import Document
99 |
100 | # Cache the result to avoid recomputation
101 | @st.cache_resource(show_spinner="Indexing documents...Please have patience")
102 | def build_index(files):
103 | documents = SimpleDirectoryReader(input_files=files).load_data()
104 | index = VectorStoreIndex.from_documents(documents)
105 | return index
106 |
107 | # Handle streaming responses
108 | def handle_stream(root, stream: StreamingAgentChatResponse):
109 | text = ""
110 | root.markdown("Thinking...")
111 | for token in stream.response_gen:
112 | text += token
113 | root.markdown(text)
114 | return text
115 |
116 | # Define constants and settings
117 | CACHE_DIR = "./uploads"
118 | aai.settings.api_key = st.secrets['assembly_api_key']
119 |
120 | # Render chat messages
121 | def render_message(message):
122 | with st.chat_message(message["role"]):
123 | st.write(message["text"])
124 |
125 | # Transcribe audio and video files
126 | def transcribe_audio_video(file_path):
127 | transcriber = aai.Transcriber()
128 | transcript = transcriber.transcribe(file_path)
129 | transcript_path = file_path + ".txt"
130 | with open(transcript_path, "w") as f:
131 | f.write(transcript.text)
132 | return transcript_path
133 |
134 | # Upload files and cache them
135 | def upload_files(types=["pdf", "txt", "mp3", "mp4", 'mpeg', 'doc', 'docx'], **kwargs):
136 | files = st.file_uploader(
137 | label="Upload files", type=types, **kwargs
138 | )
139 | if not files:
140 | st.info("Please add documents. Note: scanned documents are not supported yet!")
141 | st.stop()
142 | return cache_files(files, types=types)
143 |
144 | # Cache uploaded files
145 | def cache_files(files, types=["pdf", "txt", "mp3", "mp4", 'mpeg', 'doc', 'docx']) -> list[str]:
146 | filepaths = []
147 | for file in files:
148 | # Determine the file extension from the mime type
149 | ext = file.type.split("/")[-1]
150 | if ext == "plain": # Handle text/plain mime type
151 | ext = "txt"
152 | elif ext in ["vnd.openxmlformats-officedocument.wordprocessingml.document", "vnd.ms-word"]:
153 | ext = "docx" # or "doc" depending on your needs
154 | if ext not in types:
155 | continue
156 | filepath = f"{CACHE_DIR}/{file.name}"
157 | with open(filepath, "wb") as f:
158 | f.write(file.getvalue())
159 | if ext in ["mp3", "mp4"]:
160 | filepath = transcribe_audio_video(filepath)
161 | filepaths.append(filepath)
162 | # st.sidebar.write("Uploaded files", filepaths) # Debug statement
163 | with st.sidebar:
164 | with st.expander("Uploaded Files"):
165 | filepaths_pretty = "\n".join(f"- {filepath}" for filepath in filepaths)
166 | st.markdown(f"{filepaths_pretty}")
167 | return filepaths
168 |
169 | def transcribe_and_save(file_path):
170 | transcriber = aai.Transcriber()
171 | transcript = transcriber.transcribe(file_path)
172 | transcript_path = file_path + ".txt"
173 | with open(transcript_path, "w") as f:
174 | f.write(transcript.text)
175 | return transcript_path
176 |
177 | # Save extracted text to a txt file
178 | def save_extracted_text_to_txt(text, filename):
179 | txt_filename = os.path.splitext(filename)[0] + ".txt"
180 | txt_filepath = os.path.join('uploads', txt_filename)
181 | with open(txt_filepath, 'w', encoding='utf-8') as txt_file:
182 | txt_file.write(text)
183 | return txt_filepath
184 |
185 | # Get OpenAI API key from session state
186 | def get_key():
187 | return st.session_state["openai_api_key"]
188 |
189 | # Read text from Word document
190 | def read_word_file(file_path):
191 | doc = Document(file_path)
192 | full_text = []
193 | for para in doc.paragraphs:
194 | full_text.append(para.text)
195 | return '\n'.join(full_text)
196 |
197 | # Process uploaded documents
198 | def process_documents(documents):
199 | processed_docs = []
200 | for doc in documents:
201 | if doc.endswith('.pdf'):
202 | processed_docs.append(process_pdf(doc))
203 | elif doc.endswith(('.doc', '.docx')):
204 | text = read_word_file(doc)
205 | txt_filepath = save_extracted_text_to_txt(text, os.path.basename(doc))
206 | processed_docs.append(txt_filepath)
207 | elif doc.endswith(('.mp3', '.mp4', '.mpeg')):
208 | processed_docs.append(transcribe_and_save(doc))
209 | else:
210 | processed_docs.append(doc)
211 | return processed_docs
212 |
213 | # Process PDF files
214 | def process_pdf(pdf_path):
215 | reader = PdfReader(pdf_path)
216 | all_text = ""
217 | for page in reader.pages:
218 | extracted_text = page.extract_text()
219 | if extracted_text:
220 | processed_text = ' '.join(extracted_text.split('\n'))
221 | all_text += processed_text + "\n\n"
222 | txt_filepath = save_extracted_text_to_txt(all_text, os.path.basename(pdf_path))
223 | os.remove(pdf_path) # Delete the original PDF file
224 | return txt_filepath
225 |
226 | # Main logic for handling OpenAI API key and document processing
227 | if "openai_api_key" not in st.session_state:
228 | openai_api_key = st.sidebar.text_input("OpenAI API Key", type="password")
229 | if not openai_api_key:
230 | st.sidebar.warning("Please add your OpenAI API key to continue!")
231 | st.warning("Please add your OpenAI API key to continue!")
232 | st.sidebar.info("To obtain your OpenAI API key, please visit [OpenAI](https://platform.openai.com/account/api-keys). They provide a $5 credit to allow you to experiment with their models. If you're unsure about how to get the API key, you can follow this [Tutorial](https://www.maisieai.com/help/how-to-get-an-openai-api-key-for-chatgpt). While obtaining the API key doesn't require a compulsory payment, once your allotted credit is exhausted, a payment will be necessary to continue using their services.")
233 | st.stop()
234 | st.session_state["openai_api_key"] = openai_api_key
235 |
236 | st.sidebar.text_input("Enter YouTube video ID (coming soon)", disabled=True)
237 | st.sidebar.text_input("Enter Spotify podcast link (coming soon)", disabled=True)
238 |
239 | openai_api_key = get_key()
240 |
241 | if openai_api_key:
242 | st.toast('OpenAI API Key Added ✅')
243 | # Define service-context
244 | with st.sidebar:
245 | with st.expander("Advanced Settings"):
246 | st.session_state['temperature'] = st.number_input("Enter Temperature", help="Higher values make the output more creative", min_value=0.0, max_value=1.0, value=0.1)
247 | llm = OpenAI(temperature=st.session_state['temperature'], model='gpt-3.5-turbo', api_key=openai_api_key)
248 | embed_model = OpenAIEmbedding(api_key=openai_api_key)
249 | service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
250 | set_global_service_context(service_context)
251 |
252 | # Upload PDFs, DOCs, TXTs, MP3s, and MP4s
253 | documents = upload_files(types=["pdf", "txt", "mp3", "mp4", 'mpeg', 'doc', 'docx'], accept_multiple_files=True)
254 |
255 | # Process the uploaded documents
256 | processed_documents = process_documents(documents)
257 |
258 | if not processed_documents:
259 | st.warning("No documents uploaded!")
260 | st.stop()
261 |
262 | index = build_index(processed_documents)
263 | query_engine = index.as_chat_engine(chat_mode="condense_question", streaming=True)
264 |
265 | messages = st.session_state.get("messages", [])
266 |
267 | if not messages:
268 | messages.append({"role": "assistant", "text": "Hi!"})
269 |
270 | for message in messages:
271 | render_message(message)
272 |
273 | if user_query := st.chat_input():
274 | message = {"role": "user", "text": user_query}
275 | messages.append(message)
276 | render_message(message)
277 |
278 | with st.chat_message("assistant"):
279 | stream = query_engine.stream_chat(user_query)
280 | text = handle_stream(st.empty(), stream)
281 | message = {"role": "assistant", "text": text}
282 | messages.append(message)
283 | st.session_state.messages = messages
284 |
285 |
286 | # Function to use LLMs with web search
287 | def use_llms_with_web():
288 | from langchain.agents import ConversationalChatAgent, AgentExecutor
289 | from langchain.callbacks import StreamlitCallbackHandler
290 | from langchain.chat_models import ChatOpenAI
291 | from langchain.memory import ConversationBufferMemory
292 | from langchain.memory.chat_message_histories import StreamlitChatMessageHistory
293 | from langchain.tools import DuckDuckGoSearchRun
294 | import streamlit as st
295 |
296 |
297 | st.title("Use web search with LLMs")
298 | # Taking OpenAI API key input from the user
299 | openai_api_key = st.sidebar.text_input("OpenAI API Key", type="password")
300 | # Initializing message history and memory
301 | msgs = StreamlitChatMessageHistory()
302 | memory = ConversationBufferMemory(
303 | chat_memory=msgs, return_messages=True, memory_key="chat_history", output_key="output"
304 | )
305 | # Resetting chat history logic
306 | if len(msgs.messages) == 0 or st.sidebar.button("Reset chat history"):
307 | msgs.clear()
308 | msgs.add_ai_message("How can I help you?")
309 | st.session_state.steps = {}
310 |
311 | # Defining avatars for chat messages
312 | avatars = {"human": "user", "ai": "assistant"}
313 | for idx, msg in enumerate(msgs.messages):
314 | with st.chat_message(avatars[msg.type]):
315 | # Render intermediate steps if any were saved
316 | for step in st.session_state.steps.get(str(idx), []):
317 | if step[0].tool == "_Exception":
318 | continue
319 | with st.status(f"**{step[0].tool}**: {step[0].tool_input}", state="complete"):
320 | st.write(step[0].log)
321 | st.write(step[1])
322 | st.write(msg.content)
323 |
324 | # Taking new input from the user
325 | if prompt := st.chat_input(placeholder="Who won the 2022 Cricket World Cup?"):
326 | st.chat_message("user").write(prompt)
327 | # Checking if OpenAI API key is provided
328 | if not openai_api_key:
329 | st.info("Please add your OpenAI API key to continue.")
330 | st.stop()
331 | # Initializing LLM and tools for web search
332 | llm = ChatOpenAI(model_name="gpt-3.5-turbo", openai_api_key=openai_api_key, streaming=True)
333 | tools = [DuckDuckGoSearchRun(name="Search")]
334 | chat_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools)
335 |
336 | executor = AgentExecutor.from_agent_and_tools(
337 | agent=chat_agent,
338 | tools=tools,
339 | memory=memory,
340 | return_intermediate_steps=True,
341 | handle_parsing_errors=True,
342 | )
343 |
344 | with st.chat_message("assistant"):
345 | st_cb = StreamlitCallbackHandler(st.container(), expand_new_thoughts=False)
346 | response = executor(prompt, callbacks=[st_cb])
347 | st.write(response["output"])
348 | st.session_state.steps[str(len(msgs.messages) - 1)] = response["intermediate_steps"]
349 |
350 |
351 | # Function to display chat with dataset page
352 | def chat_with_dataset():
353 | from langchain.agents import AgentType
354 | from langchain.agents import create_pandas_dataframe_agent
355 | from langchain.callbacks import StreamlitCallbackHandler
356 | from langchain.chat_models import ChatOpenAI
357 | import streamlit as st
358 | import pandas as pd
359 | import os
360 |
361 |
362 | file_formats = {
363 | "csv": pd.read_csv,
364 | "xls": pd.read_excel,
365 | "xlsx": pd.read_excel,
366 | "xlsm": pd.read_excel,
367 | "xlsb": pd.read_excel,
368 | }
369 |
370 | def clear_submit():
371 | """
372 | Clear the Submit Button State
373 | Returns:
374 | """
375 | st.session_state["submit"] = False
376 |
377 | @st.cache_data()
378 | def load_data(uploaded_file):
379 | """
380 | Load data from the uploaded file based on its extension.
381 | """
382 | try:
383 | ext = os.path.splitext(uploaded_file.name)[1][1:].lower()
384 | except AttributeError:  # uploaded_file may be a plain file path string
385 | ext = uploaded_file.split(".")[-1]
386 | if ext in file_formats:
387 | return file_formats[ext](uploaded_file)
388 | else:
389 | st.error(f"Unsupported file format: {ext}")
390 | return None
391 |
392 | st.title("Chat with your dataset")
393 | st.info("Asking one question at a time will result in better output")
394 |
395 | uploaded_file = st.file_uploader(
396 | "Upload a Data file",
397 | type=list(file_formats.keys()),
398 | help="Supported formats: csv, xls, xlsx, xlsm, xlsb",
399 | on_change=clear_submit,
400 | )
401 |
402 | df = None # Initialize df to None outside the if block
403 |
404 | if uploaded_file:
405 | df = load_data(uploaded_file) # df will be assigned a value if uploaded_file is truthy
406 |
407 | if df is None: # Check if df is still None before proceeding
408 | st.warning("No data file uploaded or there was an error in loading the data.")
409 | return # Exit the function early if df is None
410 |
411 | openai_api_key = st.sidebar.text_input("OpenAI API Key", type="password")
412 |
413 | st.sidebar.info("If you face a KeyError: 'content' error, press the 'Clear conversation history' button")
414 | if "messages" not in st.session_state or st.sidebar.button("Clear conversation history"):
415 | st.session_state["messages"] = [{"role": "assistant", "content": "How can I help you?"}]
416 |
417 | # Display previous chat messages
418 | for msg in st.session_state.messages:
419 | st.chat_message(msg["role"]).write(msg["content"])
420 |
421 | if prompt := st.chat_input(placeholder="What is this data about?"):
422 | st.session_state.messages.append({"role": "user", "content": prompt})
423 | st.chat_message("user").write(prompt)
424 | # Check if OpenAI API key is provided
425 | if not openai_api_key:
426 | st.info("Please add your OpenAI API key to continue.")
427 | st.stop()
428 |
429 | llm = ChatOpenAI(
430 | temperature=0, model="gpt-3.5-turbo-0613", openai_api_key=openai_api_key, streaming=True
431 | )
432 |
433 | pandas_df_agent = create_pandas_dataframe_agent(
434 | llm,
435 | df,
436 | verbose=True,
437 | agent_type=AgentType.OPENAI_FUNCTIONS,
438 | handle_parsing_errors=True,
439 | )
440 |
441 | with st.chat_message("assistant"):
442 | st_cb = StreamlitCallbackHandler(st.container(), expand_new_thoughts=False)
443 | response = pandas_df_agent.run(st.session_state.messages, callbacks=[st_cb])
444 | st.session_state.messages.append({"role": "assistant", "content": response})
445 | st.write(response)
446 |
447 | # Function to display transform products page
448 | def transform_products():
449 | import streamlit as st
450 | import requests
451 | import os
452 | import replicate
453 | import io
454 | from PIL import Image
455 |
456 |
457 | st.session_state['replicate_api_token'] = st.sidebar.text_input("Replicate API Token", type='password')
458 | os.environ['REPLICATE_API_TOKEN'] = st.session_state['replicate_api_token']
459 |
460 | if not st.session_state['replicate_api_token']:
461 | st.sidebar.warning('Please enter your Replicate API Token to continue!')
462 | st.sidebar.info("You can get your Replicate API Token from here: [Replicate](https://replicate.com/account/api-tokens)")
463 | st.stop()
464 |
465 | if st.session_state['replicate_api_token']:
466 | st.info("This model works best with product images having transparent or plain backgrounds")
467 | # Prompt user to upload an image file
468 | img = st.file_uploader("Upload your product image", type=['png', 'jpg', 'jpeg'])
469 |
470 | if img is not None:
471 | has_plain_background = st.toggle("Does your product image have a plain or transparent background? If not, let us do the hard work for you!")
472 | prompt = st.text_input("Enter Prompt", help="Enter something you imagine...")
473 | negative_prompt = st.text_input("Enter Negative Prompt", help="Write what you don't want in the generated images")
474 | submit = st.button("Submit")
475 |
476 | if submit:
477 | if has_plain_background:
478 | # If image already has a plain background, prepare it for Replicate
479 | image = Image.open(img)
480 | bytes_obj = io.BytesIO()
481 | image.save(bytes_obj, format='PNG')
482 | bytes_obj.seek(0)
483 | else:
484 | # If image does not have a plain background, send it to ClipDrop to remove background
485 | image_file_object = img.read()
486 | r = requests.post('https://clipdrop-api.co/remove-background/v1',
487 | files={
488 | 'image_file': ('uploaded_image.jpg', image_file_object, 'image/jpeg')
489 | },
490 | headers={'x-api-key': st.secrets['clipdrop_api_key']}
491 | )
492 |
493 | if r.ok:
494 | # If background removal is successful, prepare image for Replicate
495 | image = Image.open(io.BytesIO(r.content))
496 | bytes_obj = io.BytesIO()
497 | image.save(bytes_obj, format='PNG')
498 | bytes_obj.seek(0)
499 | else:
500 | st.error('Failed to remove background. Try again.')
501 | r.raise_for_status()
502 | st.stop()
503 |
504 | # Send image to Replicate for transformation
505 | output = replicate.run(
506 | "logerzhu/ad-inpaint:b1c17d148455c1fda435ababe9ab1e03bc0d917cc3cf4251916f22c45c83c7df",
507 | input={"image_path": bytes_obj, "prompt": prompt, "image_num": 4}
508 | )
509 | col1, col2 = st.columns(2)
510 | with col1:
511 | st.image(output[1])
512 | st.image(output[2])
513 | with col2:
514 | st.image(output[3])
515 | st.image(output[4])
516 |
517 | # Function to generate images based on user input
518 | def generate_images():
519 | import streamlit as st
520 | import replicate
521 | import os
522 |
523 |
524 | st.session_state['replicate_api_token'] = st.sidebar.text_input("Replicate API Token", type='password')
525 | os.environ['REPLICATE_API_TOKEN'] = st.session_state['replicate_api_token']
526 |
527 | if not st.session_state['replicate_api_token']:
528 | st.sidebar.warning('Please enter your Replicate API Token to continue!')
529 | st.sidebar.info("You can get your Replicate API Token from here: [Replicate](https://replicate.com/account/api-tokens)")
530 | st.stop()
531 |
532 | if st.session_state['replicate_api_token']:
533 | prompt = st.text_input(
534 | "Enter prompt",
535 | help="Write something you can imagine..."
536 | )
537 | negative_prompt = st.text_input(
538 | "Enter Negative prompt",
539 | help="Write what you don't want to see in the generated images"
540 | )
541 | submit = st.button("Submit")
542 |
543 | if submit:
544 | output = replicate.run(
545 | "stability-ai/sdxl:8beff3369e81422112d93b89ca01426147de542cd4684c244b673b105188fe5f",
546 | input={
547 | "prompt": prompt,
548 | "negative_prompt": negative_prompt,
549 | "num_outputs": 4
550 | },
551 | )
552 | col1, col2 = st.columns(2)
553 | with col1:
554 | st.image(output[0])
555 | st.image(output[2])
556 | with col2:
557 | st.image(output[1])
558 | st.image(output[3])
559 |
560 | # Dictionary to store all functions as pages
561 | page_names_to_funcs = {
562 | "Home 🏠": home,
563 | "Chat with files 📁": chat_with_files,
564 | "Chat with dataset 📖": chat_with_dataset,
565 | "Use web search with LLMs 🌐": use_llms_with_web,
566 | "Generate Images 🖌️": generate_images,
567 | "Transform your products 🎨": transform_products,
568 | }
569 |
570 | # Display the page selected in the sidebar
571 | demo_name = st.sidebar.selectbox("Choose a page to navigate to", page_names_to_funcs.keys())
572 | page_names_to_funcs[demo_name]()
573 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | assemblyai==0.18.0
2 | langchain==0.0.300
3 | llama_index==0.8.31
4 | PyPDF2==3.0.1
5 | python-docx==0.8.11
6 | replicate==0.13.0
7 | streamlit==1.27.0
8 | streamlit_extras==0.3.2
9 | deeplake==3.6.25
10 | tabulate
11 | duckduckgo-search
12 | openai==0.28
13 |
--------------------------------------------------------------------------------
/uploads/.gitignore:
--------------------------------------------------------------------------------
1 | *.pdf
2 | *.txt
3 |
--------------------------------------------------------------------------------