├── .gitignore
├── LICENSE
├── README.md
├── advanced_tools_frameworks
│   ├── cursor_ai_experiments
│   │   ├── ai_web_scrapper.py
│   │   ├── chatgpt_clone_llama3.py
│   │   └── multi_agent_researcher.py
│   ├── gemini_multimodal_chatbot
│   │   ├── README.md
│   │   ├── gemini_multimodal_chatbot.py
│   │   └── requirements.txt
│   ├── llm_router_app
│   │   ├── README.md
│   │   ├── llm_router.py
│   │   └── requirements.txt
│   ├── mixture_of_agents
│   │   ├── mixture-of-agents.py
│   │   └── requirements.txt
│   ├── multillm_chat_playground
│   │   ├── multillm_playground.py
│   │   └── requirements.txt
│   ├── web_scrapping_ai_agent
│   │   ├── README.md
│   │   ├── ai_scrapper.py
│   │   ├── local_ai_scrapper.py
│   │   └── requirements.txt
│   └── web_search_ai_assistant
│       ├── README.md
│       ├── claude_websearch.py
│       ├── gpt4_websearch.py
│       └── requirements.txt
├── ai_agent_tutorials
│   ├── ai_customer_support_agent
│   │   ├── README.md
│   │   ├── customer_support_agent.py
│   │   └── requirements.txt
│   ├── ai_finance_agent_team
│   │   ├── README.md
│   │   ├── finance_agent_team.py
│   │   └── requirements.txt
│   ├── ai_health_fitness_agent
│   │   ├── README.md
│   │   ├── health_agent.py
│   │   └── requirements.txt
│   ├── ai_investment_agent
│   │   ├── README.md
│   │   ├── investment_agent.py
│   │   └── requirements.txt
│   ├── ai_journalist_agent
│   │   ├── README.md
│   │   ├── journalist_agent.py
│   │   └── requirements.txt
│   ├── ai_legal_agent_team
│   │   ├── README.md
│   │   ├── legal_agent_team.py
│   │   ├── local_ai_legal_agent_team
│   │   │   ├── README.md
│   │   │   ├── local_legal_agent.py
│   │   │   └── requirements.txt
│   │   └── requirements.txt
│   ├── ai_meeting_agent
│   │   ├── README.md
│   │   ├── meeting_agent.py
│   │   └── requirements.txt
│   ├── ai_movie_production_agent
│   │   ├── README.md
│   │   ├── movie_production_agent.py
│   │   └── requirements.txt
│   ├── ai_personal_finance_agent
│   │   ├── README.md
│   │   ├── finance_agent.py
│   │   └── requirements.txt
│   ├── ai_reasoning_agent
│   │   ├── local_ai_reasoning_agent.py
│   │   └── reasoning_agent.py
│   ├── ai_services_agency
│   │   ├── README.md
│   │   ├── agency.py
│   │   └── requirements.txt
│   ├── ai_startup_trend_analysis_agent
│   │   ├── README.md
│   │   ├── requirements.txt
│   │   └── startup_trends_agent.py
│   ├── ai_travel_agent
│   │   ├── README.MD
│   │   ├── local_travel_agent.py
│   │   ├── requirements.txt
│   │   └── travel_agent.py
│   ├── local_news_agent_openai_swarm
│   │   ├── README.md
│   │   ├── news_agent.py
│   │   └── requirements.txt
│   ├── multi_agent_researcher
│   │   ├── README.md
│   │   ├── requirements.txt
│   │   ├── research_agent.py
│   │   └── research_agent_llama3.py
│   ├── multimodal_ai_agent
│   │   ├── README.md
│   │   ├── mutimodal_agent.py
│   │   └── requirements.txt
│   ├── multimodal_design_agent_team
│   │   ├── README.md
│   │   ├── design_agent_team.py
│   │   └── requirements.txt
│   └── xai_finance_agent
│       ├── README.md
│       ├── requirements.txt
│       └── xai_finance_agent.py
├── chat_with_X_tutorials
│   ├── chat_with_github
│   │   ├── README.md
│   │   ├── chat_github.py
│   │   ├── chat_github_llama3.py
│   │   └── requirements.txt
│   ├── chat_with_gmail
│   │   ├── README.md
│   │   ├── chat_gmail.py
│   │   └── requirements.txt
│   ├── chat_with_pdf
│   │   ├── README.md
│   │   ├── chat_pdf.py
│   │   ├── chat_pdf_llama3.2.py
│   │   ├── chat_pdf_llama3.py
│   │   └── requirements.txt
│   ├── chat_with_research_papers
│   │   ├── README.md
│   │   ├── chat_arxiv.py
│   │   ├── chat_arxiv_llama3.py
│   │   └── requirements.txt
│   ├── chat_with_substack
│   │   ├── README.md
│   │   ├── chat_substack.py
│   │   └── requirements.txt
│   └── chat_with_youtube_videos
│       ├── README.md
│       ├── chat_youtube.py
│       └── requirements.txt
├── docs
│   └── banner
│       ├── unwind.png
│       └── unwind_black.png
├── llm_apps_with_memory_tutorials
│   ├── ai_arxiv_agent_memory
│   │   ├── README.md
│   │   ├── ai_arxiv_agent_memory.py
│   │   └── requirements.txt
│   ├── ai_travel_agent_memory
│   │   ├── README.md
│   │   ├── requirements.txt
│   │   └── travel_agent_memory.py
│   ├── llama3_stateful_chat
│   │   ├── local_llama3_chat.py
│   │   └── requirements.txt
│   ├── llm_app_personalized_memory
│   │   ├── README.md
│   │   ├── llm_app_memory.py
│   │   └── requirements.txt
│   ├── local_chatgpt_with_memory
│   │   ├── README.md
│   │   ├── local_chatgpt_memory.py
│   │   └── requirements.txt
│   └── multi_llm_memory
│       ├── README.md
│       ├── multi_llm_memory.py
│       └── requirements.txt
├── llm_finetuning_tutorials
│   └── llama3.2_finetuning
│       ├── README.md
│       ├── finetune_llama3.2.py
│       └── requirements.txt
└── rag_tutorials
    ├── agentic_rag
    │   ├── README.md
    │   ├── rag_agent.py
    │   └── requirements.txt
    ├── autonomous_rag
    │   ├── README.md
    │   ├── autorag.py
    │   └── requirements.txt
    ├── hybrid_search_rag
    │   ├── README.md
    │   ├── main.py
    │   └── requirements.txt
    ├── llama3.1_local_rag
    │   ├── README.md
    │   ├── llama3.1_local_rag.py
    │   └── requirements.txt
    ├── local_hybrid_search_rag
    │   ├── README.md
    │   ├── local_main.py
    │   └── requirements.txt
    ├── local_rag_agent
    │   ├── README.md
    │   ├── local_rag_agent.py
    │   └── requirements.txt
    ├── rag-as-a-service
    │   ├── README.md
    │   ├── rag_app.py
    │   └── requirements.txt
    └── rag_agent_cohere
        ├── README.md
        ├── rag_agent_cohere.py
        └── requirements.txt
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | share/python-wheels/
24 | *.egg-info/
25 | .installed.cfg
26 | *.egg
27 | MANIFEST
28 |
29 | # PyInstaller
30 | # Usually these files are written by a python script from a template
31 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
32 | *.manifest
33 | *.spec
34 |
35 | # Installer logs
36 | pip-log.txt
37 | pip-delete-this-directory.txt
38 |
39 | # Unit test / coverage reports
40 | htmlcov/
41 | .tox/
42 | .nox/
43 | .coverage
44 | .coverage.*
45 | .cache
46 | nosetests.xml
47 | coverage.xml
48 | *.cover
49 | *.py,cover
50 | .hypothesis/
51 | .pytest_cache/
52 | cover/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | .pybuilder/
76 | target/
77 |
78 | # Jupyter Notebook
79 | .ipynb_checkpoints
80 |
81 | # IPython
82 | profile_default/
83 | ipython_config.py
84 |
85 | # pyenv
86 | # For a library or package, you might want to ignore these files since the code is
87 | # intended to run in multiple environments; otherwise, check them in:
88 | # .python-version
89 |
90 | # pipenv
91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
94 | # install all needed dependencies.
95 | #Pipfile.lock
96 |
97 | # poetry
98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
99 | # This is especially recommended for binary packages to ensure reproducibility, and is more
100 | # commonly ignored for libraries.
101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
102 | #poetry.lock
103 |
104 | # pdm
105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
106 | #pdm.lock
107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
108 | # in version control.
109 | # https://pdm.fming.dev/latest/usage/project/#working-with-version-control
110 | .pdm.toml
111 | .pdm-python
112 | .pdm-build/
113 |
114 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
115 | __pypackages__/
116 |
117 | # Celery stuff
118 | celerybeat-schedule
119 | celerybeat.pid
120 |
121 | # SageMath parsed files
122 | *.sage.py
123 |
124 | # Environments
125 | .env
126 | .venv
127 | env/
128 | venv/
129 | ENV/
130 | env.bak/
131 | venv.bak/
132 |
133 | # Spyder project settings
134 | .spyderproject
135 | .spyproject
136 |
137 | # Rope project settings
138 | .ropeproject
139 |
140 | # mkdocs documentation
141 | /site
142 |
143 | # mypy
144 | .mypy_cache/
145 | .dmypy.json
146 | dmypy.json
147 |
148 | # Pyre type checker
149 | .pyre/
150 |
151 | # pytype static type analyzer
152 | .pytype/
153 |
154 | # Cython debug symbols
155 | cython_debug/
156 |
157 | # PyCharm
158 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
159 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
160 | # and can be added to the global gitignore or merged into this file. For a more nuclear
161 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
162 | #.idea/
163 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # 🌟 Awesome LLM Apps
2 |
3 | A curated collection of awesome LLM apps built with RAG and AI agents. This repository features LLM apps that use models from OpenAI, Anthropic, Google, and even open-source models like LLaMA that you can run locally on your own computer.
4 |
5 | ## 🤔 Why Awesome LLM Apps?
6 |
7 | - 💡 Discover practical and creative ways LLMs can be applied across different domains, from code repositories to email inboxes and more.
8 | - 🔥 Explore RAG and AI-agent apps that combine LLMs from OpenAI, Anthropic, Gemini, and open-source alternatives.
9 | - 🎓 Learn from well-documented projects and contribute to the growing open-source ecosystem of LLM-powered applications.
10 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/cursor_ai_experiments/ai_web_scrapper.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from scrapegraphai.graphs import SmartScraperGraph
3 |
4 | st.title("AI Web Scraper")
5 |
6 | # User inputs
7 | 信息需求 = st.text_input("What information do you want to extract?")
8 | 网址 = st.text_input("Enter the URL of the page to scrape:")
9 | 密钥 = st.text_input("Enter your OpenAI API key:", type="password")
10 |
11 | 配置 = {
12 |     "llm": {"api_key": 密钥, "model": "openai/gpt-4o-mini"},
13 |     "verbose": True,
14 |     "headless": False,
15 | }
16 |
17 | if st.button("Start scraping"):
18 |     if 信息需求 and 网址 and 密钥:
19 |         爬虫 = SmartScraperGraph(prompt=信息需求, source=网址, config=配置)
20 |         结果 = 爬虫.run()
21 |         st.write(结果)
22 |     else:
23 |         st.error("Please fill in all fields!")
24 |
25 | st.markdown("""
26 | ### How to use
27 | 1. Enter the information you want to extract.
28 | 2. Enter the URL of the target page.
29 | 3. Enter your OpenAI API key.
30 | 4. Click "Start scraping" and wait for the results.
31 | """)
32 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/cursor_ai_experiments/chatgpt_clone_llama3.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from ollama import Client
3 |
4 | # Initialize the Ollama client
5 | client = Client()
6 |
7 | # Page configuration
8 | st.set_page_config(page_title="Local ChatGPT Clone", page_icon="🤖", layout="wide")
9 | st.title("🤖 Local ChatGPT Clone")
10 |
11 | # Initialize the session message list
12 | if "消息列表" not in st.session_state:
13 |     st.session_state.消息列表 = []
14 |
15 | # Display the existing chat history; messages use the "role"/"content" keys
16 | # that both st.chat_message and the Ollama client expect
17 | for msg in st.session_state.消息列表:
18 |     with st.chat_message(msg["role"]):
19 |         st.markdown(msg["content"])
20 |
21 | # Handle user input
22 | if 用户输入 := st.chat_input("What's on your mind?"):
23 |     st.session_state.消息列表.append({"role": "user", "content": 用户输入})
24 |     with st.chat_message("user"):
25 |         st.markdown(用户输入)
26 |
27 |     # Generate the AI reply and stream it to the UI
28 |     with st.chat_message("assistant"):
29 |         占位符 = st.empty()
30 |         回复内容 = ""
31 |         for 响应 in client.chat(model="llama3.1:latest", messages=st.session_state.消息列表, stream=True):
32 |             回复内容 += 响应['message']['content']
33 |             占位符.markdown(回复内容 + "▌")
34 |         占位符.markdown(回复内容)
35 |     st.session_state.消息列表.append({"role": "assistant", "content": 回复内容})
36 |
37 | # Sidebar info
38 | st.sidebar.title("About")
39 | st.sidebar.info("A local ChatGPT clone built with Streamlit and Ollama's llama3.1:latest model.")
40 | st.sidebar.markdown("---")
41 | st.sidebar.markdown("Made with ❤️ by Your Name")
42 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/cursor_ai_experiments/multi_agent_researcher.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from crewai import Agent, Task, Crew, Process
3 | from langchain_openai import ChatOpenAI
4 | import os
5 |
6 | # Model variable, initialized once an API key is provided
7 | gpt4_model = None
8 |
9 | # Build the multi-agent article-writing crew
10 | def 创建文章团队(主题):
11 |     # Role definitions, keyed by role name for easy looping
12 |     角色信息 = {
13 |         "Researcher": {
14 |             "goal": "Research the topic in depth and gather key information, data, and expert opinions",
15 |             "backstory": "You are a meticulous professional researcher",
16 |         },
17 |         "Writer": {
18 |             "goal": "Write a detailed, engaging article in Markdown based on the research",
19 |             "backstory": "You are a content expert skilled in writing and formatting",
20 |         },
21 |         "Editor": {
22 |             "goal": "Review and polish the article for clarity, accuracy, and consistent formatting",
23 |             "backstory": "You are an experienced editor focused on content quality and layout",
24 |         }
25 |     }
26 |
27 |     # Create the list of Agent objects
28 |     agents = []
29 |     for 角色, 信息 in 角色信息.items():
30 |         agents.append(Agent(
31 |             role=角色,
32 |             goal=信息["goal"],
33 |             backstory=信息["backstory"],
34 |             verbose=True,
35 |             allow_delegation=False,
36 |             llm=gpt4_model
37 |         ))
38 |
39 |     # Task descriptions, in the same order as the agents list
40 |     任务描述 = [
41 |         f'Research the topic "{主题}" thoroughly, gathering key data and expert opinions.',
42 |         """Write a well-structured, information-rich article based on the research. Requirements:
43 |         - Use Markdown with a main title (H1), section headings (H2), and subsection headings (H3)
44 |         - Use bulleted or numbered lists where appropriate
45 |         - Emphasize key points in bold or italics
46 |         - Keep the article clearly organized and easy to read""",
47 |         """Review the article and ensure that:
48 |         - The Markdown formatting is correct and consistent
49 |         - The heading hierarchy is sensible
50 |         - The content flows logically and engages the reader
51 |         - Key points stand out
52 |         Make any edits needed to improve both content and formatting."""
53 |     ]
54 |
55 |     预期输出 = [
56 |         "A research report containing key data and expert opinions.",
57 |         "A detailed, well-formatted Markdown article based on the research.",
58 |         "The final polished, high-quality article."
59 |     ]
60 |
61 |     # Create the list of Task objects
62 |     tasks = []
63 |     for i in range(len(agents)):
64 |         tasks.append(Task(
65 |             description=任务描述[i],
66 |             agent=agents[i],
67 |             expected_output=预期输出[i]
68 |         ))
69 |
70 |     # Create the crew; tasks run sequentially
71 |     crew = Crew(
72 |         agents=agents,
73 |         tasks=tasks,
74 |         verbose=2,
75 |         process=Process.sequential
76 |     )
77 |     return crew
78 |
79 | # Streamlit page configuration and styling
80 | st.set_page_config(page_title="Multi-Agent AI Writing Assistant", page_icon="📝")
81 |
82 | st.markdown("""
98 | """, unsafe_allow_html=True)
99 |
100 | st.title("📝 Multi-Agent AI Writing Assistant")
101 |
102 | # Sidebar for the API key
103 | with st.sidebar:
104 |     st.header("Configuration")
105 |     api_key = st.text_input("Enter your OpenAI API key:", type="password")
106 |     if api_key:
107 |         os.environ["OPENAI_API_KEY"] = api_key
108 |         gpt4_model = ChatOpenAI(model_name="gpt-4o-mini")
109 |         st.success("API key set successfully!")
110 |     else:
111 |         st.info("Please enter your OpenAI API key to continue.")
112 |
113 | st.markdown("Generate high-quality articles quickly with a team of AI agents!")
114 |
115 | subject = st.text_input("Enter an article topic:", placeholder="e.g., The impact of AI on healthcare")
116 |
117 | if st.button("Generate Article"):
118 |     if not api_key:
119 |         st.error("Please enter your OpenAI API key in the sidebar.")
120 |     elif not subject.strip():
121 |         st.warning("Please enter an article topic.")
122 |     else:
123 |         with st.spinner("🤖 The AI agents are writing your article, please wait..."):
124 |             团队 = 创建文章团队(subject)
125 |             结果 = 团队.kickoff()
126 |             st.markdown(结果)
127 |
128 | st.markdown("---")
129 | st.markdown("Powered by CrewAI and OpenAI ❤️")
130 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/gemini_multimodal_chatbot/README.md:
--------------------------------------------------------------------------------
1 | ## ⚡️ Multimodal Chatbot with Gemini Flash
2 |
3 | This app is a multimodal chatbot powered by Google's Gemini Flash model, accepting both image and text input with lightning-fast responses.
4 |
5 | ### Features
6 | - **Multimodal input**: Ask questions with text and uploaded images.
7 | - **Gemini Flash model**: Uses Google's lightweight yet powerful Gemini Flash model.
8 | - **Chat history**: Automatically stores and displays the conversation history.
9 |
10 | ### Quick Start
11 |
12 | 1. Clone the repository:
13 |
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | ```
17 |
18 | 2. Install the dependencies:
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 |
24 | 3. Get a Google AI Studio API key:
25 |
26 | - Sign up at [Google AI Studio](https://aistudio.google.com/app/apikey) and create an API key.
27 |
28 | 4. Run the app:
29 |
30 | ```bash
31 | streamlit run gemini_multimodal_chatbot.py
32 | ```
--------------------------------------------------------------------------------
/advanced_tools_frameworks/gemini_multimodal_chatbot/gemini_multimodal_chatbot.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import google.generativeai as genai
3 | from PIL import Image
4 |
5 | # Page configuration
6 | st.set_page_config(page_title="Multimodal Chatbot (Gemini Flash)", layout="wide")
7 | st.title("Multimodal Chatbot (Gemini Flash ⚡️)")
8 | st.caption("Chat with Google's Gemini Flash model using image and text input for lightning-fast responses. 🌟")
9 |
10 | # API key input
11 | api_key = st.text_input("Enter your Google API key", type="password")
12 |
13 | if api_key:
14 |     genai.configure(api_key=api_key)
15 |     model = genai.GenerativeModel(model_name="gemini-1.5-flash-latest")
16 |
17 |     # Initialize the session message list
18 |     if "messages" not in st.session_state:
19 |         st.session_state.messages = []
20 |
21 |     # Image upload in the sidebar
22 |     with st.sidebar:
23 |         st.title("Chat with an image")
24 |         uploaded_file = st.file_uploader("Choose an image (jpg/jpeg/png)", type=["jpg", "jpeg", "png"])
25 |
26 |         if uploaded_file:
27 |             image = Image.open(uploaded_file)
28 |             st.image(image, caption="Uploaded image", use_column_width=True)
29 |
30 |     # Display the chat history
31 |     chat_container = st.container()
32 |     with chat_container:
33 |         for msg in st.session_state.messages:
34 |             with st.chat_message(msg["role"]):
35 |                 st.markdown(msg["content"])
36 |
37 |     # User input
38 |     prompt = st.chat_input("Ask a question")
39 |
40 |     if prompt:
41 |         inputs = [prompt]
42 |         st.session_state.messages.append({"role": "user", "content": prompt})
43 |         with chat_container:
44 |             with st.chat_message("user"):
45 |                 st.markdown(prompt)
46 |
47 |         if uploaded_file:
48 |             inputs.append(image)
49 |
50 |         with st.spinner("Generating a response..."):
51 |             response = model.generate_content(inputs)
52 |
53 |         st.session_state.messages.append({"role": "assistant", "content": response.text})
54 |         with chat_container:
55 |             with st.chat_message("assistant"):
56 |                 st.markdown(response.text)
57 |
58 |     elif uploaded_file:
59 |         st.warning("Please also enter a text question so the image can be answered in context.")
60 | else:
61 |     st.info("Please enter a valid Google API key to get started.")
62 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/gemini_multimodal_chatbot/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | pillow
3 | google-generativeai
--------------------------------------------------------------------------------
/advanced_tools_frameworks/llm_router_app/README.md:
--------------------------------------------------------------------------------
1 | ## 📡 RouteLLM Chat App
2 |
3 | > This project is built on the open-source [RouteLLM](https://github.com/lm-sys/RouteLLM/tree/main) library, which routes each query to a different language model based on task complexity for efficient question answering.
4 |
5 | ### Features
6 |
7 | - Chat interface for interacting with AI models
8 | - Automatic selection of the most suitable model (GPT-4o mini or Meta-Llama 3.1 70B Turbo)
9 | - Displays the chat history along with the model used for each response
10 |
11 | ### Quick Start
12 |
13 | 1. Clone the repository:
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 |
19 | 2. Install the dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Configure your API keys (environment variables are recommended):
26 |
27 | ```python
28 | import os
29 | os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
30 | os.environ["TOGETHERAI_API_KEY"] = "your_togetherai_api_key"
31 | ```
32 |
33 | 4. Run the app:
34 |
35 | ```bash
36 | streamlit run llm_router.py
37 | ```
38 |
39 | ### How it works
40 |
41 | - A RouteLLM controller is initialized with a strong model (GPT-4o mini) and a weak model (Meta-Llama 3.1 70B Turbo)
42 | - The user enters a question through the chat interface
43 | - RouteLLM automatically picks the appropriate model based on the complexity of the question
44 | - The selected model generates the answer
45 | - The answer and the model that produced it are displayed
46 | - The full chat history is kept and shown
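47 |
48 | One implementation detail worth noting: RouteLLM selects the router and its threshold through a synthetic model name of the form `router-<router>-<threshold>`. The `router-mf-0.11593` string used in `llm_router.py` below selects the `mf` (matrix factorization) router with cost threshold 0.11593, which tunes the cost/quality trade-off between the strong and weak models. A minimal sketch of the routing call, mirroring the controller setup in `llm_router.py`:
49 |
50 | ```python
51 | import os
52 | from routellm.controller import Controller
53 |
54 | os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
55 | os.environ["TOGETHERAI_API_KEY"] = "your_togetherai_api_key"
56 |
57 | # "mf" is RouteLLM's matrix-factorization router
58 | client = Controller(
59 |     routers=["mf"],
60 |     strong_model="gpt-4o-mini",
61 |     weak_model="together_ai/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
62 | )
63 |
64 | # The model string encodes the router and threshold; RouteLLM sends the
65 | # query to the strong or weak model based on its predicted difficulty.
66 | response = client.chat.completions.create(
67 |     model="router-mf-0.11593",
68 |     messages=[{"role": "user", "content": "What is the capital of France?"}],
69 | )
70 | ```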
--------------------------------------------------------------------------------
/advanced_tools_frameworks/llm_router_app/llm_router.py:
--------------------------------------------------------------------------------
1 | import os
2 | import streamlit as st
3 | from routellm.controller import Controller
4 |
5 | # Set the API keys
6 | os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
7 | os.environ["TOGETHERAI_API_KEY"] = "your_togetherai_api_key"
8 |
9 | # Initialize the RouteLLM client
10 | client = Controller(
11 |     routers=["mf"],
12 |     strong_model="gpt-4o-mini",
13 |     weak_model="together_ai/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo"
14 | )
15 |
16 | st.title("RouteLLM Chat App")
17 |
18 | # Initialize the chat history
19 | if "messages" not in st.session_state:
20 |     st.session_state["messages"] = []
21 |
22 | # Display the chat history
23 | for msg in st.session_state["messages"]:
24 |     with st.chat_message(msg["role"]):
25 |         st.markdown(msg["content"])
26 |         if "model" in msg:
27 |             st.caption(f"Model used: {msg['model']}")
28 |
29 | # Message input
30 | if user_input := st.chat_input("Enter a message:"):
31 |     # Record the user message
32 |     st.session_state["messages"].append({"role": "user", "content": user_input})
33 |     with st.chat_message("user"):
34 |         st.markdown(user_input)
35 |
36 |     # Get and display the AI reply
37 |     with st.chat_message("assistant"):
38 |         placeholder = st.empty()
39 |         resp = client.chat.completions.create(
40 |             model="router-mf-0.11593",
41 |             messages=[{"role": "user", "content": user_input}]
42 |         )
43 |         content = resp['choices'][0]['message']['content']
44 |         model_used = resp['model']
45 |         placeholder.markdown(content)
46 |         st.caption(f"Model used: {model_used}")
47 |
48 |     # Record the AI reply
49 |     st.session_state["messages"].append({"role": "assistant", "content": content, "model": model_used})
50 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/llm_router_app/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | routellm[serve,eval]
--------------------------------------------------------------------------------
/advanced_tools_frameworks/mixture_of_agents/mixture-of-agents.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import asyncio
3 | import os
4 | from together import AsyncTogether, Together
5 |
6 | st.title("Mixture-of-Agents LLM App")
7 |
8 | def get_api_key():
9 |     return st.text_input("Enter your Together API Key:", type="password")
10 |
11 | def get_user_prompt():
12 |     return st.text_input("Enter your question:")
13 |
14 | def create_clients(api_key):
15 |     os.environ["TOGETHER_API_KEY"] = api_key
16 |     return Together(api_key=api_key), AsyncTogether(api_key=api_key)
17 |
18 | reference_models = [
19 |     "Qwen/Qwen2-72B-Instruct",
20 |     "Qwen/Qwen1.5-72B-Chat",
21 |     "mistralai/Mixtral-8x22B-Instruct-v0.1",
22 |     "databricks/dbrx-instruct",
23 | ]
24 | aggregator_model = "mistralai/Mixtral-8x22B-Instruct-v0.1"
25 |
26 | aggregator_system_prompt = (
27 |     "You have been provided with a set of responses from various open-source models to the latest user query. "
28 |     "Your task is to synthesize these responses into a single, high-quality response. "
29 |     "It is crucial to critically evaluate the information provided in these responses, recognizing that some of it may be biased or incorrect. "
30 |     "Your response should not simply replicate the given answers but should offer a refined, accurate, and comprehensive reply to the instruction. "
31 |     "Ensure your response is well-structured, coherent, and adheres to the highest standards of accuracy and reliability. Responses from models:"
32 | )
33 |
34 | async def run_llm(async_client, model, prompt):
35 |     resp = await async_client.chat.completions.create(
36 |         model=model,
37 |         messages=[{"role": "user", "content": prompt}],
38 |         temperature=0.7,
39 |         max_tokens=512,
40 |     )
41 |     return model, resp.choices[0].message.content
42 |
43 | async def main(async_client, client, prompt):
44 |     results = await asyncio.gather(*[run_llm(async_client, m, prompt) for m in reference_models])
45 |
46 |     st.subheader("Individual Model Responses:")
47 |     for model, response in results:
48 |         with st.expander(f"Response from {model}"):
49 |             st.write(response)
50 |
51 |     st.subheader("Aggregated Response:")
52 |     stream = client.chat.completions.create(
53 |         model=aggregator_model,
54 |         messages=[
55 |             {"role": "system", "content": aggregator_system_prompt},
56 |             {"role": "user", "content": ",".join(r for _, r in results)},
57 |         ],
58 |         stream=True,
59 |     )
60 |
61 |     container = st.empty()
62 |     full_resp = ""
63 |     for chunk in stream:
64 |         content = chunk.choices[0].delta.content or ""
65 |         full_resp += content
66 |         container.markdown(full_resp + "▌")
67 |     container.markdown(full_resp)
68 |
69 | api_key = get_api_key()
70 | if api_key:
71 |     client, async_client = create_clients(api_key)
72 |     user_prompt = get_user_prompt()
73 |     if st.button("Get Answer"):
74 |         if user_prompt:
75 |             asyncio.run(main(async_client, client, user_prompt))
76 |         else:
77 |             st.warning("Please enter a question.")
78 | else:
79 |     st.warning("Please enter your Together API key to use the app.")
80 |
81 | st.sidebar.title("About this app")
82 | st.sidebar.write(
83 |     "This app demonstrates a Mixture-of-Agents approach using multiple Language Models (LLMs) "
84 |     "to answer a single question."
85 | )
86 | st.sidebar.subheader("How it works:")
87 | st.sidebar.markdown(
88 |     """
89 | 1. The app sends your question to multiple LLMs:
90 |     - Qwen/Qwen2-72B-Instruct
91 |     - Qwen/Qwen1.5-72B-Chat
92 |     - mistralai/Mixtral-8x22B-Instruct-v0.1
93 |     - databricks/dbrx-instruct
94 | 2. Each model provides its own response
95 | 3. All responses are then aggregated using Mixtral-8x22B-Instruct-v0.1
96 | 4. The final aggregated response is displayed
97 | """
98 | )
99 | st.sidebar.write(
100 |     "This approach allows for a more comprehensive and balanced answer by leveraging multiple AI models."
101 | )
102 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/mixture_of_agents/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | together
3 | # Note: asyncio is part of the Python standard library and must not be
4 | # installed from PyPI (the "asyncio" package there is an obsolete backport).
--------------------------------------------------------------------------------
/advanced_tools_frameworks/multillm_chat_playground/multillm_playground.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from litellm import completion
3 |
4 | st.title("Multi-LLM Chat Playground")
5 |
6 | # API key inputs
7 | api_keys = {
8 |     "gpt": st.text_input("Enter your OpenAI API Key:", type="password"),
9 |     "claude": st.text_input("Enter your Anthropic API Key:", type="password"),
10 |     "cohere": st.text_input("Enter your Cohere API Key:", type="password"),
11 | }
12 |
13 | models = {
14 |     "gpt": {"model": "gpt-4o", "name": "GPT-4o", "api_key": api_keys["gpt"]},
15 |     "claude": {"model": "claude-3-5-sonnet-20240620", "name": "Claude 3.5 Sonnet", "api_key": api_keys["claude"]},
16 |     "cohere": {"model": "command-r-plus", "name": "Cohere", "api_key": api_keys["cohere"]},
17 | }
18 |
19 | def get_response(model_info, messages):
20 |     try:
21 |         resp = completion(model=model_info["model"], messages=messages, api_key=model_info["api_key"])
22 |         return resp.choices[0].message.content
23 |     except Exception as e:
24 |         return f"Error: {str(e)}"
25 |
26 | if all(api_keys.values()):
27 |     user_input = st.text_input("Enter your message:")
28 |     if st.button("Send to All LLMs"):
29 |         if user_input:
30 |             messages = [{"role": "user", "content": user_input}]
31 |             cols = st.columns(3)
32 |             for col, key in zip(cols, models):
33 |                 with col:
34 |                     st.subheader(models[key]["name"])
35 |                     response = get_response(models[key], messages)
36 |                     if response.startswith("Error:"):
37 |                         st.error(response)
38 |                     else:
39 |                         st.write(response)
40 |             st.subheader("Response Comparison")
41 |             st.write("You can see how the three models responded differently to the same input.")
42 |             st.write("This demonstrates the ability to use multiple LLMs in a single application.")
43 |         else:
44 |             st.warning("Please enter a message.")
45 | else:
46 |     st.warning("Please enter all API keys to use the chat.")
47 |
48 | # Sidebar info
49 | st.sidebar.title("About this app")
50 | st.sidebar.write(
51 |     "This app demonstrates the use of multiple Language Models (LLMs) "
52 |     "in a single application using the LiteLLM library."
53 | )
54 | st.sidebar.subheader("Key features:")
55 | st.sidebar.markdown(
56 |     """
57 | - Utilizes three different LLMs:
58 |     - OpenAI's GPT-4o
59 |     - Anthropic's Claude 3.5 Sonnet
60 |     - Cohere's Command R Plus
61 | - Sends the same user input to all models
62 | - Displays responses side-by-side for easy comparison
63 | - Showcases the ability to use multiple LLMs in one application
64 | """
65 | )
66 | st.sidebar.write("Try it out to see how different AI models respond to the same prompt!")
67 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/multillm_chat_playground/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | litellm
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_scrapping_ai_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 💻 Web Scraping AI Agent
2 | This Streamlit app allows you to scrape a website using the OpenAI API and the scrapegraphai library. Simply provide your OpenAI API key, enter the URL of the website you want to scrape, and specify what you want the AI agent to extract from the website.
3 |
4 | ### Features
5 | - Scrape any website by providing the URL
6 | - Utilize OpenAI's LLMs (GPT-3.5-turbo or GPT-4) for intelligent scraping
7 | - Customize the scraping task by specifying what you want the AI agent to extract
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | ```
16 | 2. Install the required dependencies:
17 |
18 | ```bash
19 | pip install -r requirements.txt
20 | ```
21 | 3. Get your OpenAI API Key
22 |
23 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
24 |
25 | 4. Run the Streamlit App
26 | ```bash
27 | streamlit run ai_scrapper.py
28 | ```
29 |
30 | ### How it Works?
31 |
32 | - The app prompts you to enter your OpenAI API key, which is used to authenticate and access the OpenAI language models.
33 | - You can select the desired language model (GPT-3.5-turbo or GPT-4) for the scraping task.
34 | - Enter the URL of the website you want to scrape in the provided text input field.
35 | - Specify what you want the AI agent to extract from the website by entering a user prompt.
36 | - The app creates a SmartScraperGraph object using the provided URL, user prompt, and OpenAI configuration.
37 | - The SmartScraperGraph object scrapes the website and extracts the requested information using the specified language model.
38 | - The scraped results are displayed in the app for you to view
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_scrapping_ai_agent/ai_scrapper.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from scrapegraphai.graphs import SmartScraperGraph
3 |
4 | def main():
5 |     st.title("Web Scraping AI Agent 🕵️♂️")
6 |     st.caption("This app allows you to scrape a website using the OpenAI API")
7 |
8 |     api_key = st.text_input("OpenAI API Key", type="password")
9 |     if not api_key:
10 |         return
11 |
12 |     model = st.radio("Select the model", ["gpt-3.5-turbo", "gpt-4"])
13 |     url = st.text_input("Enter the URL of the website you want to scrape")
14 |     prompt = st.text_input("What do you want the AI agent to scrape from the website?")
15 |
16 |     if st.button("Scrape") and url and prompt:
17 |         config = {"llm": {"api_key": api_key, "model": model}}
18 |         scraper = SmartScraperGraph(prompt=prompt, source=url, config=config)
19 |         result = scraper.run()
20 |         st.write(result)
21 |
22 | if __name__ == "__main__":
23 |     main()
24 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_scrapping_ai_agent/local_ai_scrapper.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from scrapegraphai.graphs import SmartScraperGraph
4 |
5 | # Set up the Streamlit app
6 | st.title("Web Scraping AI Agent 🕵️♂️")
7 | st.caption("This app allows you to scrape a website using Llama 3.2")
8 |
9 | # Set up the configuration for the SmartScraperGraph
10 | graph_config = {
11 |     "llm": {
12 |         "model": "ollama/llama3.2",
13 |         "temperature": 0,
14 |         "format": "json",  # Ollama needs the format to be specified explicitly
15 |         "base_url": "http://localhost:11434",  # set Ollama URL
16 |     },
17 |     "embeddings": {
18 |         "model": "ollama/nomic-embed-text",
19 |         "base_url": "http://localhost:11434",  # set Ollama URL
20 |     },
21 |     "verbose": True,
22 | }
23 | # Get the URL of the website to scrape
24 | url = st.text_input("Enter the URL of the website you want to scrape")
25 | # Get the user prompt
26 | user_prompt = st.text_input("What do you want the AI agent to scrape from the website?")
27 |
28 | # Create a SmartScraperGraph object
29 | smart_scraper_graph = SmartScraperGraph(
30 |     prompt=user_prompt,
31 |     source=url,
32 |     config=graph_config
33 | )
34 | # Scrape the website
35 | if st.button("Scrape"):
36 |     result = smart_scraper_graph.run()
37 |     st.write(result)
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_scrapping_ai_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | scrapegraphai
3 | playwright
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_search_ai_assistant/README.md:
--------------------------------------------------------------------------------
1 | ## 🎯 Generative AI Search Assistant
2 | This Streamlit app combines the power of search engines and LLMs to provide you with pinpointed answers to your queries. By leveraging OpenAI's GPT-4o and the DuckDuckGo search engine, this AI search assistant delivers accurate and concise responses to your questions.
3 |
4 | ### Features
5 | - Get pinpointed answers to your queries
6 | - Utilize DuckDuckGo search engine for web searching
7 | - Use OpenAI GPT-4o for intelligent answer generation
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | ```
16 | 2. Install the required dependencies:
17 |
18 | ```bash
19 | pip install -r requirements.txt
20 | ```
21 | 3. Get your OpenAI API Key
22 |
23 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
24 |
25 | 4. Run the Streamlit App
26 | ```bash
27 | streamlit run gpt4_websearch.py   # or claude_websearch.py for the Claude version
28 | ```
29 |
30 | ### How It Works?
31 |
32 | - Upon running the app, you will be prompted to enter your OpenAI API key. This key is used to authenticate and access the OpenAI language models.
33 |
34 | - Once you provide a valid API key, an instance of the Assistant class is created. This assistant utilizes the GPT-4o language model from OpenAI and the DuckDuckGo search engine tool.
35 |
36 | - Enter your search query in the provided text input field.
37 |
38 | - The assistant will perform the following steps:
39 | - Conduct a web search using DuckDuckGo based on your query
40 | - Analyze the search results and extract relevant information
41 | - Generate a concise and targeted answer using the GPT-4o language model
42 |
43 | - The pinpointed answer will be displayed in the app, providing you with the information you need.
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_search_ai_assistant/claude_websearch.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from phi.assistant import Assistant
4 | from phi.tools.duckduckgo import DuckDuckGo
5 | from phi.llm.anthropic import Claude
6 |
7 | # Set up the Streamlit app
8 | st.title("Claude Sonnet + AI Web Search 🤖")
9 | st.caption("This app allows you to search the web using Claude 3.5 Sonnet")
10 |
11 | # Get Anthropic API key from user
12 | anthropic_api_key = st.text_input("Anthropic's Claude API Key", type="password")
13 |
14 | # If Anthropic API key is provided, create an instance of Assistant
15 | if anthropic_api_key:
16 |     assistant = Assistant(
17 |         llm=Claude(
18 |             model="claude-3-5-sonnet-20240620",
19 |             max_tokens=1024,
20 |             temperature=0.9,
21 |             api_key=anthropic_api_key,
22 |         ),
23 |         tools=[DuckDuckGo()],
24 |         show_tool_calls=True,
25 |     )
26 |     # Get the search query from the user
27 |     query = st.text_input("Enter the Search Query", type="default")
28 |
29 |     if query:
30 |         # Search the web using the AI Assistant
31 |         response = assistant.run(query, stream=False)
32 |         st.write(response)
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_search_ai_assistant/gpt4_websearch.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from phi.assistant import Assistant
4 | from phi.tools.duckduckgo import DuckDuckGo
5 | from phi.llm.openai import OpenAIChat
6 |
7 | # Set up the Streamlit app
8 | st.title("AI Web Search Assistant 🤖")
9 | st.caption("This app allows you to search the web using GPT-4o")
10 |
11 | # Get OpenAI API key from user
12 | openai_access_token = st.text_input("OpenAI API Key", type="password")
13 |
14 | # If OpenAI API key is provided, create an instance of Assistant
15 | if openai_access_token:
16 |     assistant = Assistant(
17 |         llm=OpenAIChat(
18 |             model="gpt-4o",
19 |             max_tokens=1024,
20 |             temperature=0.9,
21 |             api_key=openai_access_token,
22 |         ),
23 |         tools=[DuckDuckGo()],
24 |         show_tool_calls=True,
25 |     )
26 |
27 |     # Get the search query from the user
28 |     query = st.text_input("Enter the Search Query", type="default")
29 |
30 |     if query:
31 |         # Search the web using the AI Assistant
32 |         response = assistant.run(query, stream=False)
33 |         st.write(response)
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_search_ai_assistant/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | phidata
4 | duckduckgo-search
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_customer_support_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🛒 AI Customer Support Agent with Memory
2 | This Streamlit app implements an AI-powered customer support agent for TechGadgets.com that works over synthetic customer data generated with GPT-4o mini. The agent uses OpenAI's GPT-4o mini model and maintains a memory of past interactions using the Mem0 library with Qdrant as the vector store.
3 |
4 | ### Features
5 |
6 | - Chat interface for interacting with the AI customer support agent
7 | - Persistent memory of customer interactions and profiles
8 | - Synthetic data generation for testing and demonstration
9 | - Utilizes OpenAI's GPT-4o model for intelligent responses
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | ```
17 |
18 | 2. Install the required dependencies:
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 |
24 | 3. Ensure Qdrant is running:
25 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
26 |
27 | ```bash
28 | docker pull qdrant/qdrant
29 |
30 | docker run -p 6333:6333 -p 6334:6334 \
31 | -v $(pwd)/qdrant_storage:/qdrant/storage:z \
32 | qdrant/qdrant
33 | ```
34 |
35 | 4. Run the Streamlit App
36 | ```bash
37 | streamlit run customer_support_agent.py
38 | ```
39 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_customer_support_agent/customer_support_agent.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from openai import OpenAI
3 | from mem0 import Memory
4 | import os
5 | import json
6 | from datetime import datetime, timedelta
7 |
8 | # Set up the Streamlit App
9 | st.title("AI Customer Support Agent with Memory 🛒")
10 | st.caption("Chat with a customer support assistant who remembers your past interactions.")
11 |
12 | # Set the OpenAI API key
13 | openai_api_key = st.text_input("Enter OpenAI API Key", type="password")
14 |
15 | if openai_api_key:
16 |     os.environ['OPENAI_API_KEY'] = openai_api_key
17 |
18 |     class CustomerSupportAIAgent:
19 |         def __init__(self):
20 |             # Configure mem0: an LLM for memory processing and Qdrant as the
21 |             # vector store (the model belongs under "llm", not under the
22 |             # Qdrant config)
23 |             config = {
24 |                 "llm": {
25 |                     "provider": "openai",
26 |                     "config": {"model": "gpt-4o-mini"},
27 |                 },
28 |                 "vector_store": {
29 |                     "provider": "qdrant",
30 |                     "config": {
31 |                         "host": "localhost",
32 |                         "port": 6333,
33 |                     }
34 |                 },
35 |             }
36 |             self.memory = Memory.from_config(config)
37 |             self.client = OpenAI()
38 |             self.app_id = "customer-support"
39 |
40 |         def handle_query(self, query, user_id=None):
41 |             relevant_memories = self.memory.search(query=query, user_id=user_id)
42 |             context = "Relevant past information:\n"
43 |             if relevant_memories and "results" in relevant_memories:
44 |                 for memory in relevant_memories["results"]:
45 |                     if "memory" in memory:
46 |                         context += f"- {memory['memory']}\n"
47 |
48 |             full_prompt = f"{context}\nCustomer: {query}\nSupport Agent:"
49 |
50 |             response = self.client.chat.completions.create(
51 |                 model="gpt-4o-mini",
52 |                 messages=[
53 |                     {"role": "system", "content": "You are a customer support AI agent for TechGadgets.com, an online electronics store."},
54 |                     {"role": "user", "content": full_prompt}
55 |                 ]
56 |             )
57 |             answer = response.choices[0].message.content
58 |
59 |             self.memory.add(query, user_id=user_id, metadata={"app_id": self.app_id, "role": "user"})
60 |             self.memory.add(answer, user_id=user_id, metadata={"app_id": self.app_id, "role": "assistant"})
61 |
62 |             return answer
63 |
64 |         def get_memories(self, user_id=None):
65 |             return self.memory.get_all(user_id=user_id)
66 |
67 |         def generate_synthetic_data(self, user_id):
68 |             today = datetime.now()
69 |             order_date = (today - timedelta(days=10)).strftime("%B %d, %Y")
70 |             expected_delivery = (today + timedelta(days=2)).strftime("%B %d, %Y")
71 |
72 |             prompt = f"""Generate a detailed customer profile and order history for a TechGadgets.com customer with ID {user_id}. Include:
73 | 1. Customer name and basic info
74 | 2. A recent order of a high-end electronic device (placed on {order_date}, to be delivered by {expected_delivery})
75 | 3. Order details (product, price, order number)
76 | 4. Customer's shipping address
77 | 5. 2-3 previous orders from the past year
78 | 6. 2-3 customer service interactions related to these orders
79 | 7. Any preferences or patterns in their shopping behavior
80 |
81 | Format the output as a JSON object."""
82 |
83 |             response = self.client.chat.completions.create(
84 |                 model="gpt-4o-mini",
85 |                 messages=[
86 |                     {"role": "system", "content": "You are a data generation AI that creates realistic customer profiles and order histories. Always respond with valid JSON."},
87 |                     {"role": "user", "content": prompt}
88 |                 ],
89 |                 response_format={"type": "json_object"}
90 |             )
91 |
92 |             customer_data = json.loads(response.choices[0].message.content)
93 |
94 |             # Add generated data to memory
95 |             for key, value in customer_data.items():
96 |                 if isinstance(value, list):
97 |                     for item in value:
98 |                         self.memory.add(json.dumps(item), user_id=user_id, metadata={"app_id": self.app_id, "role": "system"})
99 |                 else:
100 |                     self.memory.add(f"{key}: {json.dumps(value)}", user_id=user_id, metadata={"app_id": self.app_id, "role": "system"})
101 |
102 |             return customer_data
103 |
104 |     # Initialize the CustomerSupportAIAgent
105 |     support_agent = CustomerSupportAIAgent()
106 |
107 |     # Sidebar for customer ID and memory view
108 |     st.sidebar.title("Enter your Customer ID:")
109 |     previous_customer_id = st.session_state.get("previous_customer_id", None)
110 |     customer_id = st.sidebar.text_input("Enter your Customer ID")
111 |
112 |     if customer_id != previous_customer_id:
113 |         st.session_state.messages = []
114 |         st.session_state.previous_customer_id = customer_id
115 |         st.session_state.customer_data = None
116 |
117 |     # Add button to generate synthetic data
118 |     if st.sidebar.button("Generate Synthetic Data"):
119 |         if customer_id:
120 |             with st.spinner("Generating customer data..."):
121 |                 st.session_state.customer_data = support_agent.generate_synthetic_data(customer_id)
122 |             st.sidebar.success("Synthetic data generated successfully!")
123 |         else:
124 |             st.sidebar.error("Please enter a customer ID first.")
125 |
126 |     if st.sidebar.button("View Customer Profile"):
127 |         if st.session_state.customer_data:
128 |             st.sidebar.json(st.session_state.customer_data)
129 |         else:
130 |             st.sidebar.info("No customer data generated yet. Click 'Generate Synthetic Data' first.")
131 |
132 |     if st.sidebar.button("View Memory Info"):
133 |         if customer_id:
134 |             memories = support_agent.get_memories(user_id=customer_id)
135 |             if memories and "results" in memories:
136 |                 st.sidebar.write(f"Memory for customer **{customer_id}**:")
137 |                 for memory in memories["results"]:
138 |                     if "memory" in memory:
139 |                         st.write(f"- {memory['memory']}")
140 |             else:
141 |                 st.sidebar.info("No memory found for this customer ID.")
142 |         else:
143 |             st.sidebar.error("Please enter a customer ID to view memory info.")
144 |
145 |     # Initialize the chat history
146 |     if "messages" not in st.session_state:
147 |         st.session_state.messages = []
148 |
149 |     # Display the chat history
150 |     for message in st.session_state.messages:
151 |         with st.chat_message(message["role"]):
152 |             st.markdown(message["content"])
153 |
154 |     # Accept user input
155 |     query = st.chat_input("How can I assist you today?")
156 |
157 |     if query and customer_id:
158 |         # Add user message to chat history
159 |         st.session_state.messages.append({"role": "user", "content": query})
160 |         with st.chat_message("user"):
161 |             st.markdown(query)
162 |
163 |         # Generate and display response
164 |         answer = support_agent.handle_query(query, user_id=customer_id)
165 |
166 |         # Add assistant response to chat history
167 |         st.session_state.messages.append({"role": "assistant", "content": answer})
168 |         with st.chat_message("assistant"):
169 |             st.markdown(answer)
170 |
171 |     elif not customer_id:
172 |         st.error("Please enter a customer ID to start the chat.")
173 |
174 | else:
175 |     st.warning("Please enter your OpenAI API key to use the customer support agent.")
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_customer_support_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_finance_agent_team/README.md:
--------------------------------------------------------------------------------
1 | ## 💲 AI Finance Agent Team with Web Access
2 | This script demonstrates how to build a team of AI agents that work together as a financial analyst using GPT-4o in about 40 lines of Python code. The system combines web search capabilities with financial data analysis tools to provide comprehensive financial insights.
3 |
4 | ### Features
5 | - Multi-agent system with specialized roles:
6 | - Web Agent for general internet research
7 | - Finance Agent for detailed financial analysis
8 | - Team Agent for coordinating between agents
9 | - Real-time financial data access through YFinance
10 | - Web search capabilities using DuckDuckGo
11 | - Persistent storage of agent interactions using SQLite
12 |
13 | ### How to get Started?
14 |
15 | 1. Clone the GitHub repository
16 | ```bash
17 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
18 | ```
19 |
20 | 2. Install the required dependencies:
21 |
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Get your OpenAI API Key
27 |
28 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
29 | - Set your OpenAI API key as an environment variable:
30 | ```bash
31 | export OPENAI_API_KEY='your-api-key-here'
32 | ```
33 |
34 | 4. Run the team of AI Agents
35 | ```bash
36 | python3 finance_agent_team.py
37 | ```
38 |
39 | 5. Open your web browser and navigate to the URL provided in the console output to interact with the team of AI agents through the playground interface.
40 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_finance_agent_team/finance_agent_team.py:
--------------------------------------------------------------------------------
1 | from phi.agent import Agent
2 | from phi.model.openai import OpenAIChat
3 | from phi.storage.agent.sqlite import SqlAgentStorage
4 | from phi.tools.duckduckgo import DuckDuckGo
5 | from phi.tools.yfinance import YFinanceTools
6 | from phi.playground import Playground, serve_playground_app
7 |
8 | web_agent = Agent(
9 |     name="Web Agent",
10 |     role="Search the web for information",
11 |     model=OpenAIChat(id="gpt-4o"),
12 |     tools=[DuckDuckGo()],
13 |     storage=SqlAgentStorage(table_name="web_agent", db_file="agents.db"),
14 |     add_history_to_messages=True,
15 |     markdown=True,
16 | )
17 |
18 | finance_agent = Agent(
19 |     name="Finance Agent",
20 |     role="Get financial data",
21 |     model=OpenAIChat(id="gpt-4o"),
22 |     tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True, company_news=True)],
23 |     instructions=["Always use tables to display data"],
24 |     storage=SqlAgentStorage(table_name="finance_agent", db_file="agents.db"),
25 |     add_history_to_messages=True,
26 |     markdown=True,
27 | )
28 |
29 | agent_team = Agent(
30 |     team=[web_agent, finance_agent],
31 |     name="Agent Team (Web+Finance)",
32 |     model=OpenAIChat(id="gpt-4o"),
33 |     show_tool_calls=True,
34 |     markdown=True,
35 | )
36 |
37 | app = Playground(agents=[agent_team]).get_app()
38 |
39 | if __name__ == "__main__":
40 |     serve_playground_app("finance_agent_team:app", reload=True)
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_finance_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | openai
2 | phidata
3 | duckduckgo-search
4 | yfinance
5 | fastapi[standard]
6 | sqlalchemy
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_health_fitness_agent/README.md:
--------------------------------------------------------------------------------
1 | # AI Health & Fitness Planner Agent 🏋️♂️
2 |
3 | The **AI Health & Fitness Planner** is a personalized health and fitness Agent powered by Phidata's AI Agent framework. This app generates tailored dietary and fitness plans based on user inputs such as age, weight, height, activity level, dietary preferences, and fitness goals.
4 |
5 | ## Features
6 |
7 | - **Health Agent and Fitness Agent**
8 | - The app has two phidata agents that specialize in dietary advice and fitness/workout advice, respectively (see the example sketch at the end of this README).
9 |
10 | - **Personalized Dietary Plans**:
11 | - Generates detailed meal plans (breakfast, lunch, dinner, and snacks).
12 | - Includes important considerations like hydration, electrolytes, and fiber intake.
13 | - Supports various dietary preferences like Keto, Vegetarian, Low Carb, etc.
14 |
15 | - **Personalized Fitness Plans**:
16 | - Provides customized exercise routines based on fitness goals.
17 | - Covers warm-ups, main workouts, and cool-downs.
18 | - Includes actionable fitness tips and progress tracking advice.
19 |
20 | - **Interactive Q&A**: Allows users to ask follow-up questions about their plans.
21 |
22 |
23 | ## Requirements
24 |
25 | The application requires the following Python libraries:
26 |
27 | - `phidata`
28 | - `google-generativeai`
29 | - `streamlit`
30 |
31 | Ensure these dependencies are installed at the versions pinned in the `requirements.txt` file.
32 |
33 | ## How to Run
34 |
35 | Follow the steps below to set up and run the application.
36 | Before anything else, get a free Gemini API key from Google AI Studio: https://aistudio.google.com/apikey
37 |
38 | 1. **Clone the Repository**:
39 | ```bash
40 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
41 | cd ai_agent_tutorials
42 | ```
43 |
44 | 2. **Install the dependencies**
45 | ```bash
46 | pip install -r requirements.txt
47 | ```
48 | 3. **Run the Streamlit app**
49 | ```bash
50 | streamlit run ai_health_fitness_agent/health_agent.py
51 | ```
52 |
53 |
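54 | ## Example: dual-agent setup
55 |
56 | Since `health_agent.py` itself is not reproduced in this listing, here is a minimal sketch of how the two specialist agents described above could be wired up with phidata and Gemini. The agent names, instructions, and sample profile below are illustrative assumptions, not the file's actual contents:
57 |
58 | ```python
59 | from phi.agent import Agent
60 | from phi.model.google import Gemini
61 |
62 | # Shared Gemini model; the API key comes from Google AI Studio
63 | gemini_model = Gemini(id="gemini-1.5-flash", api_key="your-gemini-api-key")
64 |
65 | # Specialist agent for dietary advice
66 | dietary_agent = Agent(
67 |     name="Dietary Expert",
68 |     model=gemini_model,
69 |     instructions=["Create a personalized meal plan (breakfast, lunch, dinner, snacks) from the user's profile."],
70 | )
71 |
72 | # Specialist agent for workout advice
73 | fitness_agent = Agent(
74 |     name="Fitness Expert",
75 |     model=gemini_model,
76 |     instructions=["Design a workout routine with warm-up, main workout, and cool-down tailored to the user's goal."],
77 | )
78 |
79 | profile = "Age: 30, Weight: 70 kg, Height: 175 cm, Activity: moderate, Diet: vegetarian, Goal: muscle gain"
80 | diet_plan = dietary_agent.run(profile)
81 | workout_plan = fitness_agent.run(profile)
82 | print(diet_plan.content, workout_plan.content, sep="\n\n")
83 | ```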
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_health_fitness_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | phidata==2.5.33
2 | google-generativeai==0.8.3
3 | streamlit==1.40.2
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_investment_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 📈 AI Investment Agent
2 | This Streamlit app is an AI-powered investment agent that compares the performance of two stocks and generates detailed reports. By using GPT-4o with Yahoo Finance data, this app provides valuable insights to help you make informed investment decisions.
3 |
4 | ### Features
5 | - Compare the performance of two stocks
6 | - Retrieve comprehensive company information
7 | - Get the latest company news and analyst recommendations
9 |
10 | ### How to get Started?
11 |
12 | 1. Clone the GitHub repository
13 |
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Run the Streamlit App
27 | ```bash
28 | streamlit run investment_agent.py
29 | ```
30 |
31 | ### How it Works?
32 |
33 | - Upon running the app, you will be prompted to enter your OpenAI API key. This key is used to authenticate and access the OpenAI language model.
34 | - Once you provide a valid API key, an instance of the Assistant class is created. This assistant utilizes the GPT-4o language model from OpenAI and the YFinanceTools for accessing stock data.
35 | - Enter the stock symbols of the two companies you want to compare in the provided text input fields.
36 | - The assistant will perform the following steps:
37 | - Retrieve real-time stock prices and historical data using YFinanceTools
38 | - Fetch the latest company news and analyst recommendations
39 | - Gather comprehensive company information
40 | - Generate a detailed comparison report using the GPT-4o language model
41 | - The generated report will be displayed in the app, providing you with valuable insights and analysis to guide your investment decisions.
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_investment_agent/investment_agent.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from phi.assistant import Assistant
4 | from phi.llm.openai import OpenAIChat
5 | from phi.tools.yfinance import YFinanceTools
6 |
7 | # Set up the Streamlit app
8 | st.title("AI Investment Agent 📈🤖")
9 | st.caption("This app allows you to compare the performance of two stocks and generate detailed reports.")
10 |
11 | # Get OpenAI API key from user
12 | openai_api_key = st.text_input("OpenAI API Key", type="password")
13 |
14 | if openai_api_key:
15 |     # Create an instance of the Assistant
16 |     assistant = Assistant(
17 |         llm=OpenAIChat(model="gpt-4o", api_key=openai_api_key),
18 |         tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True, company_news=True)],
19 |         show_tool_calls=True,
20 |     )
21 |
22 |     # Input fields for the stocks to compare
23 |     stock1 = st.text_input("Enter the first stock symbol")
24 |     stock2 = st.text_input("Enter the second stock symbol")
25 |
26 |     if stock1 and stock2:
27 |         # Get the response from the assistant
28 |         query = f"Compare {stock1} to {stock2}. Use every tool you have."
29 |         response = assistant.run(query, stream=False)
30 |         st.write(response)
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_investment_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | phidata
3 | openai
4 | yfinance
5 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_journalist_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🗞️ AI Journalist Agent
2 | This Streamlit app is an AI-powered journalist agent that generates high-quality articles using OpenAI GPT-4o. It automates the process of researching, writing, and editing articles, allowing you to create compelling content on any topic with ease.
3 |
4 | ### Features
5 | - Searches the web for relevant information on a given topic
6 | - Writes well-structured, informative, and engaging articles
7 | - Edits and refines the generated content to meet the high standards of the New York Times
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | ```
16 | 2. Install the required dependencies:
17 |
18 | ```bash
19 | pip install -r requirements.txt
20 | ```
21 | 3. Get your OpenAI API Key
22 |
23 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
24 |
25 | 4. Get your SerpAPI Key
26 |
27 | - Sign up for a [SerpAPI account](https://serpapi.com/) and obtain your API key.
28 |
29 | 5. Run the Streamlit App
30 | ```bash
31 | streamlit run journalist_agent.py
32 | ```
33 |
34 | ### How it Works?
35 |
36 | The AI Journalist Agent utilizes three main components:
37 | - Searcher: Responsible for generating search terms based on the given topic and searching the web for relevant URLs using the SerpAPI.
38 | - Writer: Retrieves the text from the provided URLs using the NewspaperToolkit and writes a high-quality article based on the extracted information.
39 | - Editor: Coordinates the workflow between the Searcher and Writer, and performs final editing and refinement of the generated article.
40 |
41 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_journalist_agent/journalist_agent.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | from textwrap import dedent
3 | from phi.assistant import Assistant
4 | from phi.tools.serpapi_tools import SerpApiTools
5 | from phi.tools.newspaper_toolkit import NewspaperToolkit
6 | import streamlit as st
7 | from phi.llm.openai import OpenAIChat
8 |
9 | # Set up the Streamlit app
10 | st.title("AI Journalist Agent 🗞️")
11 | st.caption("Generate high-quality articles with AI Journalist by researching, writing and editing quality articles on autopilot using GPT-4o")
12 |
13 | # Get OpenAI API key from user
14 | openai_api_key = st.text_input("Enter OpenAI API Key to access GPT-4o", type="password")
15 |
16 | # Get SerpAPI key from the user
17 | serp_api_key = st.text_input("Enter Serp API Key for Search functionality", type="password")
18 |
19 | if openai_api_key and serp_api_key:
20 | searcher = Assistant(
21 | name="Searcher",
22 | role="Searches for top URLs based on a topic",
23 | llm=OpenAIChat(model="gpt-4o", api_key=openai_api_key),
24 | description=dedent(
25 | """\
26 | You are a world-class journalist for the New York Times. Given a topic, generate a list of 3 search terms
27 |         for writing an article on that topic. Then search the web for each term, analyze the results
28 | and return the 10 most relevant URLs.
29 | """
30 | ),
31 | instructions=[
32 | "Given a topic, first generate a list of 3 search terms related to that topic.",
33 |             "For each search term, `search_google` and analyze the results.",
34 |             "From the results of all searches, return the 10 most relevant URLs to the topic.",
35 | "Remember: you are writing for the New York Times, so the quality of the sources is important.",
36 | ],
37 | tools=[SerpApiTools(api_key=serp_api_key)],
38 | add_datetime_to_instructions=True,
39 | )
40 | writer = Assistant(
41 | name="Writer",
42 | role="Retrieves text from URLs and writes a high-quality article",
43 | llm=OpenAIChat(model="gpt-4o", api_key=openai_api_key),
44 | description=dedent(
45 | """\
46 | You are a senior writer for the New York Times. Given a topic and a list of URLs,
47 | your goal is to write a high-quality NYT-worthy article on the topic.
48 | """
49 | ),
50 | instructions=[
51 |             "Given a topic and a list of URLs, first read the article using `get_article_text`.",
52 |             "Then write a high-quality NYT-worthy article on the topic.",
53 | "The article should be well-structured, informative, and engaging",
54 | "Ensure the length is at least as long as a NYT cover story -- at a minimum, 15 paragraphs.",
55 | "Ensure you provide a nuanced and balanced opinion, quoting facts where possible.",
56 | "Remember: you are writing for the New York Times, so the quality of the article is important.",
57 | "Focus on clarity, coherence, and overall quality.",
58 | "Never make up facts or plagiarize. Always provide proper attribution.",
59 | ],
60 | tools=[NewspaperToolkit()],
61 | add_datetime_to_instructions=True,
62 | add_chat_history_to_prompt=True,
63 | num_history_messages=3,
64 | )
65 |
66 | editor = Assistant(
67 | name="Editor",
68 | llm=OpenAIChat(model="gpt-4o", api_key=openai_api_key),
69 | team=[searcher, writer],
70 | description="You are a senior NYT editor. Given a topic, your goal is to write a NYT worthy article.",
71 | instructions=[
72 | "Given a topic, ask the search journalist to search for the most relevant URLs for that topic.",
73 | "Then pass a description of the topic and URLs to the writer to get a draft of the article.",
74 | "Edit, proofread, and refine the article to ensure it meets the high standards of the New York Times.",
75 |             "The article should be extremely articulate and well written.",
76 | "Focus on clarity, coherence, and overall quality.",
77 | "Ensure the article is engaging and informative.",
78 | "Remember: you are the final gatekeeper before the article is published.",
79 | ],
80 | add_datetime_to_instructions=True,
81 | markdown=True,
82 | )
83 |
84 | # Input field for the report query
85 | query = st.text_input("What do you want the AI journalist to write an Article on?")
86 |
87 | if query:
88 | with st.spinner("Processing..."):
89 | # Get the response from the assistant
90 | response = editor.run(query, stream=False)
91 | st.write(response)
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_journalist_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | phidata
3 | openai
4 | google-search-results
5 | newspaper3k
6 | lxml_html_clean
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_legal_agent_team/README.md:
--------------------------------------------------------------------------------
1 | # 👨‍⚖️ AI Legal Agent Team
2 |
3 | A Streamlit application that simulates a full-service legal team using multiple AI agents to analyze legal documents and provide comprehensive legal insights. Each agent represents a different legal specialist role, from research and contract analysis to strategic planning, working together to provide thorough legal analysis and recommendations.
4 |
5 | ## Features
6 |
7 | - **Specialized Legal AI Agent Team**
8 | - **Legal Researcher**: Equipped with DuckDuckGo search tool to find and cite relevant legal cases and precedents. Provides detailed research summaries with sources and references specific sections from uploaded documents.
9 |
10 | - **Contract Analyst**: Specializes in thorough contract review, identifying key terms, obligations, and potential issues. References specific clauses from documents for detailed analysis.
11 |
12 | - **Legal Strategist**: Focuses on developing comprehensive legal strategies, providing actionable recommendations while considering both risks and opportunities.
13 |
14 | - **Team Lead**: Coordinates analysis between team members, ensures comprehensive responses, properly sourced recommendations, and references to specific document parts. Acts as an Agent Team coordinator for all three agents.
15 |
16 | - **Document Analysis Types**
17 | - Contract Review - Done by Contract Analyst
18 | - Legal Research - Done by Legal Researcher
19 | - Risk Assessment - Done by Legal Strategist, Contract Analyst
20 | - Compliance Check - Done by Legal Strategist, Legal Researcher, Contract Analyst
21 | - Custom Queries - Done by Agent Team - Legal Researcher, Legal Strategist, Contract Analyst
22 |
23 | ## How to Run
24 |
25 | 1. **Setup Environment**
26 | ```bash
27 | # Clone the repository
28 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
29 | cd awesome-llm-apps/ai_agent_tutorials/ai_legal_agent_team
30 |
31 | # Install dependencies
32 | pip install -r requirements.txt
33 | ```
34 |
35 | 2. **Configure API Keys**
36 | - Get OpenAI API key from [OpenAI Platform](https://platform.openai.com)
37 | - Get Qdrant API key and URL from [Qdrant Cloud](https://cloud.qdrant.io)
38 |
39 | 3. **Run the Application**
40 | ```bash
41 | streamlit run legal_agent_team.py
42 | ```
43 | 4. **Use the Interface**
44 | - Enter API credentials
45 | - Upload a legal document (PDF)
46 | - Select analysis type
47 | - Add custom queries if needed
48 | - View analysis results
49 |
50 | ## Notes
51 |
52 | - Supports PDF documents only
53 | - Uses GPT-4o for analysis
54 | - Uses text-embedding-3-small for embeddings
55 | - Requires stable internet connection
56 | - API usage costs apply
57 |
--------------------------------------------------------------------------------
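
For reference, the team described in the README above can be assembled with phidata along the following lines. This is a minimal sketch, assuming phidata's Agent, Qdrant vector store, and PDF knowledge-base APIs; the collection name, PDF path, and instructions are illustrative assumptions, not the repository's actual `legal_agent_team.py`.

```python
# Illustrative sketch only - collection name, PDF path, and instructions
# are assumptions; see legal_agent_team.py for the actual implementation.
import os

from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.tools.duckduckgo import DuckDuckGo
from phi.vectordb.qdrant import Qdrant
from phi.knowledge.pdf import PDFKnowledgeBase
from phi.embedder.openai import OpenAIEmbedder

os.environ["OPENAI_API_KEY"] = "sk-..."  # your OpenAI API key

# Qdrant-backed knowledge base over the uploaded legal PDF,
# embedded with text-embedding-3-small as noted in the README
vector_db = Qdrant(
    collection="legal_documents",            # assumed collection name
    url="https://YOUR-CLUSTER.qdrant.io",    # from Qdrant Cloud
    api_key="YOUR_QDRANT_API_KEY",
    embedder=OpenAIEmbedder(model="text-embedding-3-small"),
)
knowledge_base = PDFKnowledgeBase(path="contract.pdf", vector_db=vector_db)
knowledge_base.load(recreate=False)  # index the document once

legal_researcher = Agent(
    name="Legal Researcher",
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGo()],
    knowledge=knowledge_base,
    search_knowledge=True,
    instructions=["Find and cite relevant cases and precedents; reference specific sections of the document."],
)
contract_analyst = Agent(
    name="Contract Analyst",
    model=OpenAIChat(id="gpt-4o"),
    knowledge=knowledge_base,
    search_knowledge=True,
    instructions=["Review the contract and identify key terms, obligations, and potential issues."],
)
legal_strategist = Agent(
    name="Legal Strategist",
    model=OpenAIChat(id="gpt-4o"),
    knowledge=knowledge_base,
    search_knowledge=True,
    instructions=["Develop a legal strategy with actionable recommendations, weighing risks and opportunities."],
)

# Team lead coordinates the three specialists
team_lead = Agent(
    name="Team Lead",
    model=OpenAIChat(id="gpt-4o"),
    team=[legal_researcher, contract_analyst, legal_strategist],
    instructions=["Coordinate the specialists and produce a comprehensive, properly sourced answer."],
)

team_lead.print_response("Run a risk assessment of the uploaded contract.", stream=True)
```
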
/ai_agent_tutorials/ai_legal_agent_team/local_ai_legal_agent_team/README.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/XiaomingX/awesome-llm-app/ec0cd6992b784ecc810c8d32a87748565a63d833/ai_agent_tutorials/ai_legal_agent_team/local_ai_legal_agent_team/README.md
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_legal_agent_team/local_ai_legal_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | phidata==2.6.7
2 | streamlit==1.40.2
3 | qdrant-client==1.12.1
4 | ollama==0.4.4
5 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_legal_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | phidata==2.5.33
2 | streamlit==1.40.2
3 | qdrant-client==1.12.1
4 | openai
5 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_meeting_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 📝 AI Meeting Preparation Agent
2 | This Streamlit application leverages multiple AI agents to create comprehensive meeting preparation materials. It uses OpenAI's GPT-4, Anthropic's Claude, and the Serper API for web searches to generate context analysis, industry insights, meeting strategies, and executive briefings.
3 |
4 | ### Features
5 |
6 | - Multi-agent AI system for thorough meeting preparation
7 | - Utilizes OpenAI's GPT-4 and Anthropic's Claude models
8 | - Web search capability using Serper API
9 | - Generates detailed context analysis, industry insights, meeting strategies, and executive briefings
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 | 2. Install the required dependencies:
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 | 3. Get your Anthropic API Key
24 |
25 | - Sign up for an [Anthropic account](https://console.anthropic.com) (or the LLM provider of your choice) and obtain your API key.
26 |
27 | 4. Get your Serper API Key
28 |
29 | - Sign up for a [Serper API account](https://serper.dev/) and obtain your API key.
30 |
31 | 5. Get your OpenAI API Key
32 |
33 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
34 |
35 | 6. Run the Streamlit App
36 | ```bash
37 | streamlit run meeting_agent.py
38 | ```
--------------------------------------------------------------------------------
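
The pinned dependencies point to CrewAI, whose Agent/Task/Crew pattern maps naturally onto the pipeline described above. The following is a minimal illustrative sketch; the roles, goals, and task wording are assumptions, not the repository's actual `meeting_agent.py`.

```python
# Illustrative sketch only - roles, goals, and task wording are assumptions;
# see meeting_agent.py for the actual implementation.
import os
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

os.environ["OPENAI_API_KEY"] = "sk-..."            # OpenAI API key
os.environ["SERPER_API_KEY"] = "your-serper-key"   # Serper key for web search

search_tool = SerperDevTool()  # web search via the Serper API

context_analyst = Agent(
    role="Meeting Context Analyst",
    goal="Analyze the company and the context of the upcoming meeting",
    backstory="An expert at distilling company background into meeting-ready context.",
    tools=[search_tool],
)
industry_analyst = Agent(
    role="Industry Analyst",
    goal="Surface industry trends and competitive insights relevant to the meeting",
    backstory="A veteran market researcher.",
    tools=[search_tool],
)
strategist = Agent(
    role="Meeting Strategist",
    goal="Turn the research into a meeting strategy and executive briefing",
    backstory="A consultant who converts analysis into actionable agendas.",
)

# Placeholders in the descriptions are filled from kickoff(inputs=...)
context_task = Task(
    description="Analyze the context for a meeting with {company} about {objective}.",
    expected_output="A concise context analysis.",
    agent=context_analyst,
)
industry_task = Task(
    description="Summarize industry trends relevant to {company}.",
    expected_output="Key industry insights.",
    agent=industry_analyst,
)
strategy_task = Task(
    description="Draft the meeting strategy and an executive briefing.",
    expected_output="A structured briefing document.",
    agent=strategist,
)

crew = Crew(
    agents=[context_analyst, industry_analyst, strategist],
    tasks=[context_task, industry_task, strategy_task],
)
result = crew.kickoff(inputs={"company": "Acme Corp", "objective": "a partnership"})
print(result)
```
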
/ai_agent_tutorials/ai_meeting_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | crewai
3 | crewai-tools
4 | openai
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_movie_production_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🎬 AI Movie Production Agent
2 | This Streamlit app is an AI-powered movie production assistant that helps bring your movie ideas to life using the Claude 3.5 Sonnet model. It automates the process of script writing and casting, allowing you to create compelling movie concepts with ease.
3 |
4 | ### Features
5 | - Generates script outlines based on your movie idea, genre, and target audience
6 | - Suggests suitable actors for main roles, considering their past performances and current availability
7 | - Provides a concise movie concept overview
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | ```
16 | 2. Install the required dependencies:
17 |
18 | ```bash
19 | pip install -r requirements.txt
20 | ```
21 | 3. Get your Anthropic API Key
22 |
23 | - Sign up for an [Anthropic account](https://console.anthropic.com) (or the LLM provider of your choice) and obtain your API key.
24 |
25 | 4. Get your SerpAPI Key
26 |
27 | - Sign up for a [SerpAPI account](https://serpapi.com/) and obtain your API key.
28 |
29 | 5. Run the Streamlit App
30 | ```bash
31 | streamlit run movie_production_agent.py
32 | ```
33 |
34 | ### How it Works?
35 |
36 | The AI Movie Production Agent utilizes three main components:
37 | - **ScriptWriter**: Develops a compelling script outline with character descriptions and key plot points based on the given movie idea and genre.
38 | - **CastingDirector**: Suggests suitable actors for the main roles, considering their past performances and current availability.
39 | - **MovieProducer**: Oversees the entire process, coordinating between the ScriptWriter and CastingDirector, and providing a concise movie concept overview.
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_movie_production_agent/movie_production_agent.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from phi.assistant import Assistant
4 | from phi.tools.serpapi_tools import SerpApiTools
5 | from phi.llm.anthropic import Claude
6 | from textwrap import dedent
7 |
8 | # Set up the Streamlit app
9 | st.title("AI Movie Production Agent 🎬")
10 | st.caption("Bring your movie ideas to life with a team of script-writing and casting AI agents")
11 |
12 | # Get Anthropic API key from user
13 | anthropic_api_key = st.text_input("Enter Anthropic API Key to access Claude 3.5 Sonnet", type="password")
14 | # Get SerpAPI key from the user
15 | serp_api_key = st.text_input("Enter Serp API Key for Search functionality", type="password")
16 |
17 | if anthropic_api_key and serp_api_key:
18 | script_writer = Assistant(
19 | name="ScriptWriter",
20 | llm=Claude(model="claude-3-5-sonnet-20240620", api_key=anthropic_api_key),
21 | description=dedent(
22 | """\
23 | You are an expert screenplay writer. Given a movie idea and genre,
24 | develop a compelling script outline with character descriptions and key plot points.
25 | """
26 | ),
27 | instructions=[
28 | "Write a script outline with 3-5 main characters and key plot points.",
29 | "Outline the three-act structure and suggest 2-3 twists.",
30 | "Ensure the script aligns with the specified genre and target audience.",
31 | ],
32 | )
33 |
34 | casting_director = Assistant(
35 | name="CastingDirector",
36 | llm=Claude(model="claude-3-5-sonnet-20240620", api_key=anthropic_api_key),
37 | description=dedent(
38 | """\
39 | You are a talented casting director. Given a script outline and character descriptions,
40 | suggest suitable actors for the main roles, considering their past performances and current availability.
41 | """
42 | ),
43 | instructions=[
44 | "Suggest 2-3 actors for each main role.",
45 | "Check actors' current status using `search_google`.",
46 | "Provide a brief explanation for each casting suggestion.",
47 | "Consider diversity and representation in your casting choices.",
48 | ],
49 | tools=[SerpApiTools(api_key=serp_api_key)],
50 | )
51 |
52 | movie_producer = Assistant(
53 | name="MovieProducer",
54 | llm=Claude(model="claude-3-5-sonnet-20240620", api_key=anthropic_api_key),
55 | team=[script_writer, casting_director],
56 | description="Experienced movie producer overseeing script and casting.",
57 | instructions=[
58 | "Ask ScriptWriter for a script outline based on the movie idea.",
59 | "Pass the outline to CastingDirector for casting suggestions.",
60 | "Summarize the script outline and casting suggestions.",
61 | "Provide a concise movie concept overview.",
62 | ],
63 | markdown=True,
64 | )
65 |
66 | # Input field for the report query
67 | movie_idea = st.text_area("Describe your movie idea in a few sentences:")
68 | genre = st.selectbox("Select the movie genre:",
69 | ["Action", "Comedy", "Drama", "Sci-Fi", "Horror", "Romance", "Thriller"])
70 | target_audience = st.selectbox("Select the target audience:",
71 | ["General", "Children", "Teenagers", "Adults", "Mature"])
72 | estimated_runtime = st.slider("Estimated runtime (in minutes):", 60, 180, 120)
73 |
74 | # Process the movie concept
75 | if st.button("Develop Movie Concept"):
76 | with st.spinner("Developing movie concept..."):
77 | input_text = (
78 | f"Movie idea: {movie_idea}, Genre: {genre}, "
79 | f"Target audience: {target_audience}, Estimated runtime: {estimated_runtime} minutes"
80 | )
81 | # Get the response from the assistant
82 | response = movie_producer.run(input_text, stream=False)
83 | st.write(response)
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_movie_production_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | phidata
3 | anthropic
4 | google-search-results
5 | lxml_html_clean
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_personal_finance_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 💰 AI Personal Finance Planner
2 | This Streamlit app is an AI-powered personal finance planner that generates personalized financial plans using OpenAI GPT-4o. It automates the process of researching, planning, and creating tailored budgets, investment strategies, and savings goals, empowering you to take control of your financial future with ease.
3 |
4 | ### Features
5 | - Set your financial goals and provide details about your current financial situation
6 | - Use GPT-4o to generate intelligent and personalized financial advice
7 | - Receive customized budgets, investment plans, and savings strategies
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | ```
16 | 2. Install the required dependencies:
17 |
18 | ```bash
19 | pip install -r requirements.txt
20 | ```
21 | 3. Get your OpenAI API Key
22 |
23 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
24 |
25 | 4. Get your SerpAPI Key
26 |
27 | - Sign up for an [SerpAPI account](https://serpapi.com/) and obtain your API key.
28 |
29 | 5. Run the Streamlit App
30 | ```bash
31 | streamlit run finance_agent.py
32 | ```
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_personal_finance_agent/finance_agent.py:
--------------------------------------------------------------------------------
1 | from textwrap import dedent
2 | from phi.assistant import Assistant
3 | from phi.tools.serpapi_tools import SerpApiTools
4 | import streamlit as st
5 | from phi.llm.openai import OpenAIChat
6 |
7 | # Set up the Streamlit app
8 | st.title("AI Personal Finance Planner 💰")
9 | st.caption("Manage your finances with AI Personal Finance Manager by creating personalized budgets, investment plans, and savings strategies using GPT-4o")
10 |
11 | # Get OpenAI API key from user
12 | openai_api_key = st.text_input("Enter OpenAI API Key to access GPT-4o", type="password")
13 |
14 | # Get SerpAPI key from the user
15 | serp_api_key = st.text_input("Enter Serp API Key for Search functionality", type="password")
16 |
17 | if openai_api_key and serp_api_key:
18 | researcher = Assistant(
19 | name="Researcher",
20 | role="Searches for financial advice, investment opportunities, and savings strategies based on user preferences",
21 | llm=OpenAIChat(model="gpt-4o", api_key=openai_api_key),
22 | description=dedent(
23 | """\
24 | You are a world-class financial researcher. Given a user's financial goals and current financial situation,
25 | generate a list of search terms for finding relevant financial advice, investment opportunities, and savings strategies.
26 | Then search the web for each term, analyze the results, and return the 10 most relevant results.
27 | """
28 | ),
29 | instructions=[
30 | "Given a user's financial goals and current financial situation, first generate a list of 3 search terms related to those goals.",
31 | "For each search term, `search_google` and analyze the results.",
32 | "From the results of all searches, return the 10 most relevant results to the user's preferences.",
33 | "Remember: the quality of the results is important.",
34 | ],
35 | tools=[SerpApiTools(api_key=serp_api_key)],
36 | add_datetime_to_instructions=True,
37 | )
38 | planner = Assistant(
39 | name="Planner",
40 | role="Generates a personalized financial plan based on user preferences and research results",
41 | llm=OpenAIChat(model="gpt-4o", api_key=openai_api_key),
42 | description=dedent(
43 | """\
44 | You are a senior financial planner. Given a user's financial goals, current financial situation, and a list of research results,
45 | your goal is to generate a personalized financial plan that meets the user's needs and preferences.
46 | """
47 | ),
48 | instructions=[
49 | "Given a user's financial goals, current financial situation, and a list of research results, generate a personalized financial plan that includes suggested budgets, investment plans, and savings strategies.",
50 | "Ensure the plan is well-structured, informative, and engaging.",
51 | "Ensure you provide a nuanced and balanced plan, quoting facts where possible.",
52 | "Remember: the quality of the plan is important.",
53 | "Focus on clarity, coherence, and overall quality.",
54 | "Never make up facts or plagiarize. Always provide proper attribution.",
55 | ],
56 | add_datetime_to_instructions=True,
57 | add_chat_history_to_prompt=True,
58 | num_history_messages=3,
59 | )
60 |
61 | # Input fields for the user's financial goals and current financial situation
62 | financial_goals = st.text_input("What are your financial goals?")
63 | current_situation = st.text_area("Describe your current financial situation")
64 |
65 | if st.button("Generate Financial Plan"):
66 | with st.spinner("Processing..."):
67 | # Get the response from the assistant
68 | response = planner.run(f"Financial goals: {financial_goals}, Current situation: {current_situation}", stream=False)
69 | st.write(response)
70 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_personal_finance_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | phidata
3 | openai
4 | google-search-results
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_reasoning_agent/local_ai_reasoning_agent.py:
--------------------------------------------------------------------------------
1 | from phi.agent import Agent
2 | from phi.model.ollama import Ollama
3 | from phi.playground import Playground, serve_playground_app
4 |
5 | reasoning_agent = Agent(name="Reasoning Agent", model=Ollama(id="qwq:32b"), markdown=True)
6 |
7 | # UI for Reasoning agent
8 | app = Playground(agents=[reasoning_agent]).get_app()
9 |
10 | # Run the Playground app
11 | if __name__ == "__main__":
12 | serve_playground_app("local_ai_reasoning_agent:app", reload=True)
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_reasoning_agent/reasoning_agent.py:
--------------------------------------------------------------------------------
1 | from phi.agent import Agent
2 | from phi.model.openai import OpenAIChat
3 | from phi.cli.console import console
4 |
5 | regular_agent = Agent(model=OpenAIChat(id="gpt-4o-mini"), markdown=True)
6 |
7 | reasoning_agent = Agent(
8 | model=OpenAIChat(id="gpt-4o"),
9 | reasoning=True,
10 | markdown=True,
11 | structured_outputs=True,
12 | )
13 |
14 | task = "How many 'r' are in the word 'supercalifragilisticexpialidocious'?"
15 |
16 | console.rule("[bold green]Regular Agent[/bold green]")
17 | regular_agent.print_response(task, stream=True)
18 | console.rule("[bold yellow]Reasoning Agent[/bold yellow]")
19 | reasoning_agent.print_response(task, stream=True, show_full_reasoning=True)
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_services_agency/README.md:
--------------------------------------------------------------------------------
1 | # AI Services Agency 👨‍💼
2 |
3 | An AI application that simulates a full-service digital agency using multiple AI agents to analyze and plan software projects. Each agent represents a different role in the project lifecycle, from strategic planning to technical implementation.
4 |
5 | ## Demo:
6 |
7 | https://github.com/user-attachments/assets/a0befa3a-f4c3-400d-9790-4b9e37254405
8 |
9 | ## Features
10 |
11 | ### Five specialized AI agents
12 |
13 | - **CEO Agent**: Strategic leader and final decision maker
14 | - Analyzes startup ideas using structured evaluation
15 | - Makes strategic decisions across product, technical, marketing, and financial domains
16 | - Uses AnalyzeStartupTool and MakeStrategicDecision tools
17 |
18 | - **CTO Agent**: Technical architecture and feasibility expert
19 | - Evaluates technical requirements and feasibility
20 | - Provides architecture decisions
21 | - Uses QueryTechnicalRequirements and EvaluateTechnicalFeasibility tools
22 |
23 | - **Product Manager Agent**: Product strategy specialist
24 | - Defines product strategy and roadmap
25 | - Coordinates between technical and marketing teams
26 | - Focuses on product-market fit
27 |
28 | - **Developer Agent**: Technical implementation expert
29 | - Provides detailed technical implementation guidance
30 | - Suggests optimal tech stack and cloud solutions
31 | - Estimates development costs and timelines
32 |
33 | - **Client Success Agent**: Marketing strategy leader
34 | - Develops go-to-market strategies
35 | - Plans customer acquisition approaches
36 | - Coordinates with product team
37 |
38 | ### Custom Tools
39 |
40 | The agency uses specialized tools built with OpenAI Schema for structured analysis:
41 | - **Analysis Tools**: AnalyzeProjectRequirements for market evaluation and analysis of startup ideas
42 | - **Technical Tools**: CreateTechnicalSpecification for technical assessment
43 |
44 | ### 🔄 Asynchronous Communication
45 |
46 | The agency operates in async mode, enabling:
47 | - Parallel processing of analyses from different agents
48 | - Efficient multi-agent collaboration
49 | - Real-time communication between agents
50 | - Non-blocking operations for better performance
51 |
52 | ### 🔗 Agent Communication Flows
53 | - CEO ↔️ All Agents (Strategic Oversight)
54 | - CTO ↔️ Developer (Technical Implementation)
55 | - Product Manager ↔️ Marketing Manager (Go-to-Market Strategy)
56 | - Product Manager ↔️ Developer (Feature Implementation)
57 | - (and more!)
58 |
59 | ## How to Run
60 |
61 | Follow the steps below to set up and run the application.
62 | Before anything else, please get your OpenAI API key here: https://platform.openai.com/api-keys
63 |
64 | 1. **Clone the Repository**:
65 | ```bash
66 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
67 |    cd awesome-llm-apps/ai_agent_tutorials
68 | ```
69 |
70 | 2. **Install the dependencies**:
71 | ```bash
72 | pip install -r requirements.txt
73 | ```
74 |
75 | 3. **Run the Streamlit app**:
76 | ```bash
77 | streamlit run ai_services_agency/agency.py
78 | ```
79 |
80 | 4. **Enter your OpenAI API Key** in the sidebar when prompted and start analyzing your startup idea!
81 |
--------------------------------------------------------------------------------
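
For reference, the communication flows listed in the README above map onto an agency-swarm "agency chart". The following is a minimal illustrative sketch; the agent descriptions, the tool body, and the chart are assumptions, not the repository's actual `agency.py`.

```python
# Illustrative sketch only - agent descriptions, the tool, and the chart
# are assumptions; see agency.py for the actual definitions.
from agency_swarm import Agency, Agent, set_openai_key
from agency_swarm.tools import BaseTool
from pydantic import Field

set_openai_key("sk-...")  # your OpenAI API key

# Custom tools are pydantic models (OpenAI schema) with a run() method
class AnalyzeProjectRequirements(BaseTool):
    """Analyze a startup idea's market and project requirements."""
    idea: str = Field(..., description="The startup idea to analyze")

    def run(self):
        # Real analysis logic would live here; this is a placeholder
        return f"Structured analysis of: {self.idea}"

ceo = Agent(name="CEO", description="Strategic leader and final decision maker.",
            tools=[AnalyzeProjectRequirements])
cto = Agent(name="CTO", description="Technical architecture and feasibility expert.")
pm = Agent(name="Product Manager", description="Product strategy specialist.")
dev = Agent(name="Developer", description="Technical implementation expert.")
cs = Agent(name="Client Success", description="Marketing strategy leader.")

# The chart encodes who may talk to whom: the first entry is the user-facing
# entry point; each pair opens a communication channel between two agents.
agency = Agency([
    ceo,            # the user talks to the CEO
    [ceo, cto],     # strategic oversight
    [ceo, pm],
    [ceo, dev],
    [ceo, cs],
    [cto, dev],     # technical implementation
    [pm, dev],      # feature implementation
    [pm, cs],       # go-to-market strategy
])

print(agency.get_completion("Analyze this startup idea: an AI tutoring app."))
```
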
/ai_agent_tutorials/ai_services_agency/requirements.txt:
--------------------------------------------------------------------------------
1 | python-dotenv==1.0.1
2 | agency-swarm==0.4.1
3 | streamlit
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_startup_trend_analysis_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 📈 AI Startup Trend Analysis Agent
2 | The AI Startup Trend Analysis Agent is a tool for budding entrepreneurs that generates actionable insights by identifying nascent trends, potential market gaps, and growth opportunities in specific sectors. Entrepreneurs can use these data-driven insights to validate ideas, spot market opportunities, and make informed decisions about their startup ventures. It combines Newspaper4k and DuckDuckGo to scan and analyze startup-focused articles and market data. Using Claude 3.5 Sonnet, it processes this information to extract emerging patterns and enable entrepreneurs to identify promising startup opportunities.
3 |
4 |
5 | ### Features
6 | - **User Prompt**: Entrepreneurs can input specific startup sectors or technologies of interest for research.
7 | - **News Collection**: This agent gathers recent startup news, funding rounds, and market analyses using DuckDuckGo.
8 | - **Summary Generation**: Concise summaries of verified information are generated using Newspaper4k.
9 | - **Trend Analysis**: The system identifies emerging patterns in startup funding, technology adoption, and market opportunities across analyzed stories.
10 | - **Streamlit UI**: The application features a user-friendly interface built with Streamlit for easy interaction.
11 |
12 | ### How to Get Started
13 | 1. **Clone the repository**:
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 |    cd awesome-llm-apps/ai_agent_tutorials/ai_startup_trend_analysis_agent
17 | ```
18 |
19 | 2. **Create and activate a virtual environment**:
20 | ```bash
21 | # For macOS/Linux
22 | python -m venv venv
23 | source venv/bin/activate
24 |
25 | # For Windows
26 | python -m venv venv
27 | .\venv\Scripts\activate
28 | ```
29 |
30 | 3. **Install the required packages**:
31 | ```bash
32 | pip install -r requirements.txt
33 | ```
34 |
35 | 4. **Run the application**:
36 | ```bash
37 | streamlit run startup_trends_agent.py
38 | ```
39 | ### Important Note
40 | - The system specifically uses Claude's API for advanced language processing. You can obtain your Anthropic API key from [Anthropic's website](https://www.anthropic.com/api).
41 |
42 |
43 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_startup_trend_analysis_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | phidata==2.5.33
2 | streamlit==1.40.2
3 | duckduckgo_search==6.3.7
4 | newspaper4k==0.9.3.1
5 | lxml_html_clean==0.4.1
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_startup_trend_analysis_agent/startup_trends_agent.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from phi.agent import Agent
3 | from phi.tools.duckduckgo import DuckDuckGo
4 | from phi.model.anthropic import Claude
5 | from phi.tools.newspaper4k import Newspaper4k
6 |
7 | import logging
8 |
9 | logging.basicConfig(level=logging.DEBUG)
10 |
11 | # Setting up Streamlit app
12 | st.title("AI Startup Trend Analysis Agent 📈")
13 | st.caption("Get the latest trend analysis and startup opportunities based on your topic of interest in a click!")
14 |
15 | topic = st.text_input("Enter the area of interest for your Startup:")
16 | anthropic_api_key = st.sidebar.text_input("Enter Anthropic API Key", type="password")
17 |
18 | if st.button("Generate Analysis"):
19 |     if not anthropic_api_key:
20 |         st.warning("Please enter the required API key.")
21 |     else:
22 |         with st.spinner("Processing your request..."):
23 |             try:
24 |                 # Initialize the Anthropic model
25 |                 anthropic_model = Claude(id="claude-3-5-sonnet-20240620", api_key=anthropic_api_key)
26 |
27 |                 # Define the News Collector agent - the DuckDuckGo search tool lets the agent search the web for information
28 |                 search_tool = DuckDuckGo(search=True, news=True, fixed_max_results=5)
29 |                 news_collector = Agent(
30 |                     name="News Collector",
31 |                     role="Collects recent news articles on the given topic",
32 |                     tools=[search_tool],
33 |                     model=anthropic_model,
34 |                     instructions=["Gather latest articles on the topic"],
35 |                     show_tool_calls=True,
36 |                     markdown=True,
37 |                 )
38 |
39 |                 # Define the Summary Writer agent
40 |                 news_tool = Newspaper4k(read_article=True, include_summary=True)
41 |                 summary_writer = Agent(
42 |                     name="Summary Writer",
43 |                     role="Summarizes collected news articles",
44 |                     tools=[news_tool],
45 |                     model=anthropic_model,
46 |                     instructions=["Provide concise summaries of the articles"],
47 |                     show_tool_calls=True,
48 |                     markdown=True,
49 |                 )
50 |
51 |                 # Define the Trend Analyzer agent
52 |                 trend_analyzer = Agent(
53 |                     name="Trend Analyzer",
54 |                     role="Analyzes trends from summaries",
55 |                     model=anthropic_model,
56 |                     instructions=["Identify emerging trends and startup opportunities"],
57 |                     show_tool_calls=True,
58 |                     markdown=True,
59 |                 )
60 |
61 |                 # Multi-agent team setup using phidata
62 |                 agent_team = Agent(
63 |                     team=[news_collector, summary_writer, trend_analyzer],
64 |                     instructions=[
65 |                         "First, search DuckDuckGo for recent news articles related to the user's specified topic.",
66 |                         "Then, provide the collected article links to the summary writer.",
67 |                         "Important: you must ensure that the summary writer receives all the article links to read.",
68 |                         "Next, the summary writer will read the articles and prepare concise summaries of each.",
69 |                         "After summarizing, the summaries will be passed to the trend analyzer.",
70 |                         "Finally, the trend analyzer will identify emerging trends and potential startup opportunities from the summaries and present them as a detailed report that an aspiring entrepreneur can easily draw value from."
71 |                     ],
72 |                     show_tool_calls=True,
73 |                     markdown=True,
74 |                 )
75 |
76 |                 # Executing the workflow
77 |                 # Step 1: Collect news
78 |                 news_response = news_collector.run(f"Collect recent news on {topic}")
79 |                 articles = news_response.content
80 |
81 |                 # Step 2: Summarize articles
82 |                 summary_response = summary_writer.run(f"Summarize the following articles:\n{articles}")
83 |                 summaries = summary_response.content
84 |
85 |                 # Step 3: Analyze trends
86 |                 trend_response = trend_analyzer.run(f"Analyze trends from the following summaries:\n{summaries}")
87 |                 analysis = trend_response.content
88 |
89 |                 # Display results - to also show the article summaries, uncomment the two lines below
90 |                 # st.subheader("News Summaries")
91 |                 # st.write(summaries)
92 |
93 |                 st.subheader("Trend Analysis and Potential Startup Opportunities")
94 |                 st.write(analysis)
95 |
96 |             except Exception as e:
97 |                 st.error(f"An error occurred: {e}")
98 | else:
99 |     st.info("Enter the topic and API key, then click 'Generate Analysis' to start.")
100 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_travel_agent/README.MD:
--------------------------------------------------------------------------------
1 | ## 🛫 AI Travel Agent
2 | This Streamlit app is an AI-powered travel agent that generates personalized travel itineraries using OpenAI GPT-4o. It automates the process of researching, planning, and organizing your dream vacation, allowing you to explore exciting destinations with ease.
3 |
4 | ### Features
5 | - Research and discover exciting travel destinations, activities, and accommodations
6 | - Customize your itinerary based on the number of days you want to travel
7 | - Utilize the power of GPT-4o to generate intelligent and personalized travel plans
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | ```
16 | 2. Install the required dependencies:
17 |
18 | ```bash
19 | pip install -r requirements.txt
20 | ```
21 | 3. Get your OpenAI API Key
22 |
23 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
24 |
25 | 4. Get your SerpAPI Key
26 |
27 | - Sign up for a [SerpAPI account](https://serpapi.com/) and obtain your API key.
28 |
29 | 5. Run the Streamlit App
30 | ```bash
31 | streamlit run travel_agent.py
32 | ```
33 |
34 | ### How it Works?
35 |
36 | The AI Travel Agent has two main components:
37 | - Researcher: Responsible for generating search terms based on the user's destination and travel duration, and searching the web for relevant activities and accommodations using SerpAPI.
38 | - Planner: Takes the research results and user preferences to generate a personalized draft itinerary that includes suggested activities and accommodations.
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_travel_agent/local_travel_agent.py:
--------------------------------------------------------------------------------
1 | from textwrap import dedent
2 | from phi.assistant import Assistant
3 | from phi.tools.serpapi_tools import SerpApiTools
4 | import streamlit as st
5 | from phi.llm.ollama import Ollama
6 |
7 | # Set up the Streamlit app
8 | st.title("AI Travel Planner using Llama-3 ✈️")
9 | st.caption("Plan your next adventure with AI Travel Planner by researching and planning a personalized itinerary on autopilot using local Llama-3")
10 |
11 | # Get SerpAPI key from the user
12 | serp_api_key = st.text_input("Enter Serp API Key for Search functionality", type="password")
13 |
14 | if serp_api_key:
15 | researcher = Assistant(
16 | name="Researcher",
17 | role="Searches for travel destinations, activities, and accommodations based on user preferences",
18 | llm=Ollama(model="llama3:instruct", max_tokens=1024),
19 | description=dedent(
20 | """\
21 | You are a world-class travel researcher. Given a travel destination and the number of days the user wants to travel for,
22 | generate a list of search terms for finding relevant travel activities and accommodations.
23 | Then search the web for each term, analyze the results, and return the 10 most relevant results.
24 | """
25 | ),
26 | instructions=[
27 | "Given a travel destination and the number of days the user wants to travel for, first generate a list of 3 search terms related to that destination and the number of days.",
28 |             "For each search term, `search_google` and analyze the results.",
29 | "From the results of all searches, return the 10 most relevant results to the user's preferences.",
30 | "Remember: the quality of the results is important.",
31 | ],
32 | tools=[SerpApiTools(api_key=serp_api_key)],
33 | add_datetime_to_instructions=True,
34 | )
35 | planner = Assistant(
36 | name="Planner",
37 | role="Generates a draft itinerary based on user preferences and research results",
38 | llm=Ollama(model="llama3:instruct", max_tokens=1024),
39 | description=dedent(
40 | """\
41 | You are a senior travel planner. Given a travel destination, the number of days the user wants to travel for, and a list of research results,
42 | your goal is to generate a draft itinerary that meets the user's needs and preferences.
43 | """
44 | ),
45 | instructions=[
46 | "Given a travel destination, the number of days the user wants to travel for, and a list of research results, generate a draft itinerary that includes suggested activities and accommodations.",
47 | "Ensure the itinerary is well-structured, informative, and engaging.",
48 | "Ensure you provide a nuanced and balanced itinerary, quoting facts where possible.",
49 | "Remember: the quality of the itinerary is important.",
50 | "Focus on clarity, coherence, and overall quality.",
51 | "Never make up facts or plagiarize. Always provide proper attribution.",
52 | ],
53 | add_datetime_to_instructions=True,
54 | add_chat_history_to_prompt=True,
55 | num_history_messages=3,
56 | )
57 |
58 | # Input fields for the user's destination and the number of days they want to travel for
59 | destination = st.text_input("Where do you want to go?")
60 | num_days = st.number_input("How many days do you want to travel for?", min_value=1, max_value=30, value=7)
61 |
62 | if st.button("Generate Itinerary"):
63 | with st.spinner("Processing..."):
64 | # Get the response from the assistant
65 | response = planner.run(f"{destination} for {num_days} days", stream=False)
66 | st.write(response)
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_travel_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | phidata
3 | openai
4 | google-search-results
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_travel_agent/travel_agent.py:
--------------------------------------------------------------------------------
1 | from textwrap import dedent
2 | from phi.assistant import Assistant
3 | from phi.tools.serpapi_tools import SerpApiTools
4 | import streamlit as st
5 | from phi.llm.openai import OpenAIChat
6 |
7 | # Set up the Streamlit app
8 | st.title("AI Travel Planner ✈️")
9 | st.caption("Plan your next adventure with AI Travel Planner by researching and planning a personalized itinerary on autopilot using GPT-4o")
10 |
11 | # Get OpenAI API key from user
12 | openai_api_key = st.text_input("Enter OpenAI API Key to access GPT-4o", type="password")
13 |
14 | # Get SerpAPI key from the user
15 | serp_api_key = st.text_input("Enter Serp API Key for Search functionality", type="password")
16 |
17 | if openai_api_key and serp_api_key:
18 | researcher = Assistant(
19 | name="Researcher",
20 | role="Searches for travel destinations, activities, and accommodations based on user preferences",
21 | llm=OpenAIChat(model="gpt-4o", api_key=openai_api_key),
22 | description=dedent(
23 | """\
24 | You are a world-class travel researcher. Given a travel destination and the number of days the user wants to travel for,
25 | generate a list of search terms for finding relevant travel activities and accommodations.
26 | Then search the web for each term, analyze the results, and return the 10 most relevant results.
27 | """
28 | ),
29 | instructions=[
30 | "Given a travel destination and the number of days the user wants to travel for, first generate a list of 3 search terms related to that destination and the number of days.",
31 |             "For each search term, `search_google` and analyze the results.",
32 | "From the results of all searches, return the 10 most relevant results to the user's preferences.",
33 | "Remember: the quality of the results is important.",
34 | ],
35 | tools=[SerpApiTools(api_key=serp_api_key)],
36 | add_datetime_to_instructions=True,
37 | )
38 | planner = Assistant(
39 | name="Planner",
40 | role="Generates a draft itinerary based on user preferences and research results",
41 | llm=OpenAIChat(model="gpt-4o", api_key=openai_api_key),
42 | description=dedent(
43 | """\
44 | You are a senior travel planner. Given a travel destination, the number of days the user wants to travel for, and a list of research results,
45 | your goal is to generate a draft itinerary that meets the user's needs and preferences.
46 | """
47 | ),
48 | instructions=[
49 | "Given a travel destination, the number of days the user wants to travel for, and a list of research results, generate a draft itinerary that includes suggested activities and accommodations.",
50 | "Ensure the itinerary is well-structured, informative, and engaging.",
51 | "Ensure you provide a nuanced and balanced itinerary, quoting facts where possible.",
52 | "Remember: the quality of the itinerary is important.",
53 | "Focus on clarity, coherence, and overall quality.",
54 | "Never make up facts or plagiarize. Always provide proper attribution.",
55 | ],
56 | add_datetime_to_instructions=True,
57 | add_chat_history_to_prompt=True,
58 | num_history_messages=3,
59 | )
60 |
61 | # Input fields for the user's destination and the number of days they want to travel for
62 | destination = st.text_input("Where do you want to go?")
63 | num_days = st.number_input("How many days do you want to travel for?", min_value=1, max_value=30, value=7)
64 |
65 | if st.button("Generate Itinerary"):
66 | with st.spinner("Processing..."):
67 | # Get the response from the assistant
68 | response = planner.run(f"{destination} for {num_days} days", stream=False)
69 | st.write(response)
--------------------------------------------------------------------------------
/ai_agent_tutorials/local_news_agent_openai_swarm/README.md:
--------------------------------------------------------------------------------
1 | ## 📰 Multi-agent AI news assistant
2 | This Streamlit application implements a sophisticated news processing pipeline using multiple specialized AI agents to search, synthesize, and summarize news articles. It leverages the Llama 3.2 model via Ollama and DuckDuckGo search to provide comprehensive news analysis.
3 |
4 |
5 | ### Features
6 | - Multi-agent architecture with specialized roles:
7 | - News Searcher: Finds recent news articles
8 | - News Synthesizer: Analyzes and combines information
9 | - News Summarizer: Creates concise, professional summaries
10 |
11 | - Real-time news search using DuckDuckGo
12 | - AP/Reuters-style summary generation
13 | - User-friendly Streamlit interface
14 |
15 |
16 | ### How to get Started?
17 |
18 | 1. Clone the GitHub repository
19 | ```bash
20 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
21 | cd awesome-llm-apps/ai_agent_tutorials/local_news_agent_openai_swarm
22 | ```
23 |
24 | 2. Install the required dependencies:
25 |
26 | ```bash
27 | pip install -r requirements.txt
28 | ```
29 |
30 | 3. Pull and Run Llama 3.2 using Ollama:
31 |
32 | ```bash
33 | # Pull the model
34 | ollama pull llama3.2
35 |
36 | # Verify installation
37 | ollama list
38 |
39 | # Run the model (optional test)
40 | ollama run llama3.2
41 | ```
42 |
43 | 4. Create a .env file with your configurations:
44 | ```bash
45 | OPENAI_BASE_URL=http://localhost:11434/v1
46 | OPENAI_API_KEY=fake-key
47 | ```
48 | 5. Run the Streamlit app
49 | ```bash
50 | streamlit run news_agent.py
51 | ```
--------------------------------------------------------------------------------
/ai_agent_tutorials/local_news_agent_openai_swarm/news_agent.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from duckduckgo_search import DDGS
3 | from swarm import Swarm, Agent
4 | from datetime import datetime
5 | from dotenv import load_dotenv
6 |
7 | load_dotenv()
8 | MODEL = "llama3.2:latest"
9 | client = Swarm()
10 |
11 | st.set_page_config(page_title="AI News Processor", page_icon="📰")
12 | st.title("📰 News Inshorts Agent")
13 |
14 | def search_news(topic):
15 | """Search for news articles using DuckDuckGo"""
16 | with DDGS() as ddg:
17 | results = ddg.text(f"{topic} news {datetime.now().strftime('%Y-%m')}", max_results=3)
18 | if results:
19 | news_results = "\n\n".join([
20 | f"Title: {result['title']}\nURL: {result['href']}\nSummary: {result['body']}"
21 | for result in results
22 | ])
23 | return news_results
24 | return f"No news found for {topic}."
25 |
26 | # Create specialized agents
27 | search_agent = Agent(
28 | name="News Searcher",
29 | instructions="""
30 | You are a news search specialist. Your task is to:
31 | 1. Search for the most relevant and recent news on the given topic
32 | 2. Ensure the results are from reputable sources
33 | 3. Return the raw search results in a structured format
34 | """,
35 | functions=[search_news],
36 | model=MODEL
37 | )
38 |
39 | synthesis_agent = Agent(
40 | name="News Synthesizer",
41 | instructions="""
42 | You are a news synthesis expert. Your task is to:
43 | 1. Analyze the raw news articles provided
44 | 2. Identify the key themes and important information
45 | 3. Combine information from multiple sources
46 | 4. Create a comprehensive but concise synthesis
47 | 5. Focus on facts and maintain journalistic objectivity
48 | 6. Write in a clear, professional style
49 | Provide a 2-3 paragraph synthesis of the main points.
50 | """,
51 | model=MODEL
52 | )
53 |
54 | summary_agent = Agent(
55 | name="News Summarizer",
56 | instructions="""
57 | You are an expert news summarizer combining AP and Reuters style clarity with digital-age brevity.
58 |
59 | Your task:
60 | 1. Core Information:
61 | - Lead with the most newsworthy development
62 | - Include key stakeholders and their actions
63 | - Add critical numbers/data if relevant
64 | - Explain why this matters now
65 | - Mention immediate implications
66 |
67 | 2. Style Guidelines:
68 | - Use strong, active verbs
69 | - Be specific, not general
70 | - Maintain journalistic objectivity
71 | - Make every word count
72 | - Explain technical terms if necessary
73 |
74 | Format: Create a single paragraph of 250-400 words that informs and engages.
75 | Pattern: [Major News] + [Key Details/Data] + [Why It Matters/What's Next]
76 |
77 | Focus on answering: What happened? Why is it significant? What's the impact?
78 |
79 | IMPORTANT: Provide ONLY the summary paragraph. Do not include any introductory phrases,
80 | labels, or meta-text like "Here's a summary" or "In AP/Reuters style."
81 | Start directly with the news content.
82 | """,
83 | model=MODEL
84 | )
85 |
86 | def process_news(topic):
87 | """Run the news processing workflow"""
88 | with st.status("Processing news...", expanded=True) as status:
89 | # Search
90 | status.write("🔍 Searching for news...")
91 | search_response = client.run(
92 | agent=search_agent,
93 | messages=[{"role": "user", "content": f"Find recent news about {topic}"}]
94 | )
95 | raw_news = search_response.messages[-1]["content"]
96 |
97 | # Synthesize
98 | status.write("🔄 Synthesizing information...")
99 | synthesis_response = client.run(
100 | agent=synthesis_agent,
101 | messages=[{"role": "user", "content": f"Synthesize these news articles:\n{raw_news}"}]
102 | )
103 | synthesized_news = synthesis_response.messages[-1]["content"]
104 |
105 | # Summarize
106 | status.write("📝 Creating summary...")
107 | summary_response = client.run(
108 | agent=summary_agent,
109 | messages=[{"role": "user", "content": f"Summarize this synthesis:\n{synthesized_news}"}]
110 | )
111 | return raw_news, synthesized_news, summary_response.messages[-1]["content"]
112 |
113 | # User Interface
114 | topic = st.text_input("Enter news topic:", value="artificial intelligence")
115 | if st.button("Process News", type="primary"):
116 | if topic:
117 | try:
118 | raw_news, synthesized_news, final_summary = process_news(topic)
119 | st.header(f"📝 News Summary: {topic}")
120 | st.markdown(final_summary)
121 | except Exception as e:
122 | st.error(f"An error occurred: {str(e)}")
123 | else:
124 | st.error("Please enter a topic!")
--------------------------------------------------------------------------------
/ai_agent_tutorials/local_news_agent_openai_swarm/requirements.txt:
--------------------------------------------------------------------------------
1 | git+https://github.com/openai/swarm.git
2 | streamlit
3 | duckduckgo-search
--------------------------------------------------------------------------------
/ai_agent_tutorials/multi_agent_researcher/README.md:
--------------------------------------------------------------------------------
1 | ## 📰 Multi-Agent AI Researcher
2 | This Streamlit app empowers you to research top stories and users on HackerNews using a team of AI assistants with GPT-4o.
3 |
4 | ### Features
5 | - Research top stories and users on HackerNews
6 | - Utilize a team of AI assistants specialized in story and user research
7 | - Generate blog posts, reports, and social media content based on your research queries
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | ```
16 | 2. Install the required dependencies:
17 |
18 | ```bash
19 | pip install -r requirements.txt
20 | ```
21 | 3. Get your OpenAI API Key
22 |
23 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
24 |
25 | 4. Run the Streamlit App
26 | ```bash
27 | streamlit run research_agent.py
28 | ```
29 |
30 | ### How it works?
31 |
32 | - Upon running the app, you will be prompted to enter your OpenAI API key. This key is used to authenticate and access the OpenAI language models.
33 | - Once you provide a valid API key, three instances of the Assistant class are created:
34 | - **story_researcher**: Specializes in researching HackerNews stories.
35 | - **user_researcher**: Focuses on researching HackerNews users and reading articles from URLs.
36 | - **hn_assistant**: A team assistant that coordinates the research efforts of the story and user researchers.
37 |
38 | - Enter your research query in the provided text input field. This could be a topic, keyword, or specific question related to HackerNews stories or users.
39 | - The hn_assistant will orchestrate the research process by delegating tasks to the story_researcher and user_researcher based on your query.
40 | - The AI assistants will gather relevant information from HackerNews using the provided tools and generate a comprehensive response using the GPT-4o language model.
41 | - The generated content, which could be a blog post, report, or social media post, will be displayed in the app for you to review and use.
42 |
43 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/multi_agent_researcher/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | phidata
3 | openai
--------------------------------------------------------------------------------
/ai_agent_tutorials/multi_agent_researcher/research_agent.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from phi.assistant import Assistant
4 | from phi.tools.hackernews import HackerNews
5 | from phi.llm.openai import OpenAIChat
6 |
7 | # Set up the Streamlit app
8 | st.title("Multi-Agent AI Researcher 🔍🤖")
9 | st.caption("This app allows you to research top stories and users on HackerNews and write blogs, reports and social posts.")
10 |
11 | # Get OpenAI API key from user
12 | openai_api_key = st.text_input("OpenAI API Key", type="password")
13 |
14 | if openai_api_key:
15 | # Create instances of the Assistant
16 | story_researcher = Assistant(
17 | name="HackerNews Story Researcher",
18 | role="Researches hackernews stories and users.",
19 | tools=[HackerNews()],
20 | )
21 |
22 | user_researcher = Assistant(
23 | name="HackerNews User Researcher",
24 | role="Reads articles from URLs.",
25 | tools=[HackerNews()],
26 | )
27 |
28 | hn_assistant = Assistant(
29 | name="Hackernews Team",
30 | team=[story_researcher, user_researcher],
31 | llm=OpenAIChat(
32 | model="gpt-4o",
33 | max_tokens=1024,
34 | temperature=0.5,
35 | api_key=openai_api_key
36 | )
37 | )
38 |
39 | # Input field for the report query
40 | query = st.text_input("Enter your report query")
41 |
42 | if query:
43 | # Get the response from the assistant
44 | response = hn_assistant.run(query, stream=False)
45 | st.write(response)
--------------------------------------------------------------------------------
/ai_agent_tutorials/multi_agent_researcher/research_agent_llama3.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from phi.assistant import Assistant
4 | from phi.tools.hackernews import HackerNews
5 | from phi.llm.ollama import Ollama
6 |
7 | # Set up the Streamlit app
8 | st.title("Multi-Agent AI Researcher using Llama-3 🔍🤖")
9 | st.caption("This app allows you to research top stories and users on HackerNews and write blogs, reports and social posts.")
10 |
11 | # Create instances of the Assistant
12 | story_researcher = Assistant(
13 | name="HackerNews Story Researcher",
14 | role="Researches hackernews stories and users.",
15 | tools=[HackerNews()],
16 | llm=Ollama(model="llama3:instruct", max_tokens=1024)
17 | )
18 |
19 | user_researcher = Assistant(
20 | name="HackerNews User Researcher",
21 | role="Reads articles from URLs.",
22 | tools=[HackerNews()],
23 | llm=Ollama(model="llama3:instruct", max_tokens=1024)
24 | )
25 |
26 | hn_assistant = Assistant(
27 | name="Hackernews Team",
28 | team=[story_researcher, user_researcher],
29 | llm=Ollama(model="llama3:instruct", max_tokens=1024)
30 | )
31 |
32 | # Input field for the report query
33 | query = st.text_input("Enter your report query")
34 |
35 | if query:
36 | # Get the response from the assistant
37 | response = hn_assistant.run(query, stream=False)
38 | st.write(response)
--------------------------------------------------------------------------------
/ai_agent_tutorials/multimodal_ai_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🧬 Multimodal AI Agent
2 |
3 | A Streamlit application that combines video analysis and web search capabilities using Google's Gemini 2.0 model. This agent can analyze uploaded videos and answer questions by combining visual understanding with web search.
4 |
5 | ### Features
6 |
7 | - Video analysis using Gemini 2.0 Flash
8 | - Web research integration via DuckDuckGo
9 | - Support for multiple video formats (MP4, MOV, AVI)
10 | - Real-time video processing
11 | - Combined visual and textual analysis
12 |
13 | ### How to get Started?
14 |
15 | 1. Clone the GitHub repository
16 |
17 | ```bash
18 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
19 | cd ai_agent_tutorials/multimodal_ai_agent
20 | ```
21 | 2. Install the required dependencies:
22 |
23 | ```bash
24 | pip install -r requirements.txt
25 | ```
26 | 3. Get your Google Gemini API Key
27 |
28 | - Sign up for a [Google AI Studio account](https://aistudio.google.com/apikey) and obtain your API key.
29 |
30 | 4. Set up your Gemini API Key as an environment variable
31 |
32 | ```bash
33 | export GOOGLE_API_KEY=your_api_key_here
34 | ```
35 |
36 | 5. Run the Streamlit App
37 | ```bash
38 | streamlit run mutimodal_agent.py   # note: the script filename in this repo is spelled "mutimodal"
39 | ```
--------------------------------------------------------------------------------
/ai_agent_tutorials/multimodal_ai_agent/mutimodal_agent.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from phi.agent import Agent
3 | from phi.model.google import Gemini
4 | from phi.tools.duckduckgo import DuckDuckGo
5 | from google.generativeai import upload_file, get_file
6 | import time
7 | from pathlib import Path
8 | import tempfile
9 |
10 | st.set_page_config(
11 | page_title="Multimodal AI Agent",
12 | page_icon="🧬",
13 | layout="wide"
14 | )
15 |
16 | st.title("Multimodal AI Agent 🧬")
17 |
18 | # Initialize single agent with both capabilities
19 | @st.cache_resource
20 | def initialize_agent():
21 | return Agent(
22 | name="Multimodal Analyst",
23 | model=Gemini(id="gemini-2.0-flash-exp"),
24 | tools=[DuckDuckGo()],
25 | markdown=True,
26 | )
27 |
28 | agent = initialize_agent()
29 |
30 | # File uploader
31 | uploaded_file = st.file_uploader("Upload a video file", type=['mp4', 'mov', 'avi'])
32 |
33 | if uploaded_file:
34 | with tempfile.NamedTemporaryFile(delete=False, suffix='.mp4') as tmp_file:
35 | tmp_file.write(uploaded_file.read())
36 | video_path = tmp_file.name
37 |
38 | st.video(video_path)
39 |
40 | user_prompt = st.text_area(
41 | "What would you like to know?",
42 | placeholder="Ask any question related to the video - the AI Agent will analyze it and search the web if needed",
43 | help="You can ask questions about the video content and get relevant information from the web"
44 | )
45 |
46 | if st.button("Analyze & Research"):
47 | if not user_prompt:
48 | st.warning("Please enter your question.")
49 | else:
50 | try:
51 | with st.spinner("Processing video and researching..."):
52 | video_file = upload_file(video_path)
53 | while video_file.state.name == "PROCESSING":
54 | time.sleep(2)
55 | video_file = get_file(video_file.name)
56 |
57 | prompt = f"""
58 | First analyze this video and then answer the following question using both
59 | the video analysis and web research: {user_prompt}
60 |
61 | Provide a comprehensive response focusing on practical, actionable information.
62 | """
63 |
64 | result = agent.run(prompt, videos=[video_file])
65 |
66 | st.subheader("Result")
67 | st.markdown(result.content)
68 |
69 | except Exception as e:
70 | st.error(f"An error occurred: {str(e)}")
71 | finally:
72 | Path(video_path).unlink(missing_ok=True)
73 | else:
74 | st.info("Please upload a video to begin analysis.")
75 |
76 | st.markdown("""
77 |
82 | """, unsafe_allow_html=True)
--------------------------------------------------------------------------------
/ai_agent_tutorials/multimodal_ai_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | phidata==2.7.2
2 | google-generativeai==0.8.3
3 | streamlit==1.40.2
--------------------------------------------------------------------------------
/ai_agent_tutorials/multimodal_design_agent_team/README.md:
--------------------------------------------------------------------------------
1 | # Multimodal AI Design Agent Team
2 |
3 | A Streamlit application that provides comprehensive design analysis using a team of specialized AI agents powered by Google's Gemini model.
4 |
5 | This application leverages multiple specialized AI agents to provide comprehensive analysis of UI/UX designs of your product and your competitors, combining visual understanding, user experience evaluation, and market research insights.
6 |
7 | ## Features
8 |
9 | - **Specialized Design AI Agent Team**
10 |
11 | - 🎨 **Visual Design Agent**: Evaluates design elements, patterns, color schemes, typography, and visual hierarchy
12 | - 🔄 **UX Analysis Agent**: Assesses user flows, interaction patterns, usability, and accessibility
13 | - 📊 **Market Analysis Agent**: Provides market insights, competitor analysis, and positioning recommendations
14 |
15 | - **Multiple Analysis Types**: Choose from Visual Design, UX, and Market Analysis
16 | - **Comparative Analysis**: Upload competitor designs for comparative insights
17 | - **Customizable Focus Areas**: Select specific aspects for detailed analysis
18 | - **Context-Aware**: Provide additional context for more relevant insights
19 | - **Real-time Processing**: Get instant analysis with progress indicators
20 | - **Structured Output**: Receive well-organized, actionable insights
21 |
22 | ## How to Run
23 |
24 | 1. **Setup Environment**
25 | ```bash
26 | # Clone the repository
27 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
28 | cd ai_agent_tutorials/multimodal_design_agent_team
29 |
30 | # Create and activate virtual environment (optional)
31 | python -m venv venv
32 | source venv/bin/activate # On Windows: venv\Scripts\activate
33 |
34 | # Install dependencies
35 | pip install -r requirements.txt
36 | ```
37 |
38 | 2. **Get API Key**
39 | - Visit [Google AI Studio](https://aistudio.google.com/apikey)
40 | - Generate an API key
41 |
42 | 3. **Run the Application**
43 | ```bash
44 | streamlit run design_agent_team.py
45 | ```
46 |
47 | 4. **Use the Application**
48 | - Enter your Gemini API key in the sidebar
49 | - Upload design files (supported formats: JPG, JPEG, PNG)
50 | - Select analysis types and focus areas
51 | - Add context if needed
52 | - Click "Run Analysis" to get insights
53 |
54 |
55 | ## Technical Stack
56 |
57 | - **Frontend**: Streamlit
58 | - **AI Model**: Google Gemini 2.0
59 | - **Image Processing**: Pillow
60 | - **Market Research**: DuckDuckGo Search API
61 | - **Framework**: Phidata for agent orchestration
62 |
63 | ## Tips for Best Results
64 |
65 | - Upload clear, high-resolution images
66 | - Include multiple views/screens for better context
67 | - Add competitor designs for comparative analysis
68 | - Provide specific context about your target audience
69 |
70 |
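71 | ## Agent Team Sketch
72 |
73 | The full implementation lives in `design_agent_team.py` (not reproduced in this listing). As a rough, hypothetical sketch of how the three specialists could be wired with phidata's `Agent` and Gemini — the agent names and instruction strings below are illustrative, not copied from the app:
74 |
75 | ```python
76 | from phi.agent import Agent
77 | from phi.model.google import Gemini
78 | from phi.tools.duckduckgo import DuckDuckGo
79 |
80 | # Visual critique: layout, color, typography, visual hierarchy
81 | visual_design_agent = Agent(
82 |     name="Visual Design Agent",
83 |     model=Gemini(id="gemini-2.0-flash-exp"),
84 |     instructions=["Evaluate design elements, color schemes, typography, and visual hierarchy."],
85 |     markdown=True,
86 | )
87 |
88 | # UX critique: flows, interaction patterns, usability, accessibility
89 | ux_agent = Agent(
90 |     name="UX Analysis Agent",
91 |     model=Gemini(id="gemini-2.0-flash-exp"),
92 |     instructions=["Assess user flows, interaction patterns, usability, and accessibility."],
93 |     markdown=True,
94 | )
95 |
96 | # Market research: grounded in live web search via DuckDuckGo
97 | market_agent = Agent(
98 |     name="Market Analysis Agent",
99 |     model=Gemini(id="gemini-2.0-flash-exp"),
100 |     tools=[DuckDuckGo()],
101 |     instructions=["Research competitors and market positioning for the analyzed product."],
102 |     markdown=True,
103 | )
104 | ```
105 |
106 | Each analysis type selected in the UI then roughly corresponds to running one of these agents over the uploaded design images.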
--------------------------------------------------------------------------------
/ai_agent_tutorials/multimodal_design_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | google-generativeai==0.8.3
2 | streamlit==1.30.0
3 | phidata==2.7.2
4 | Pillow==11.0.0
5 | duckduckgo-search==6.3.7
6 |
7 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/xai_finance_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 📊 AI Finance Agent with xAI Grok
2 | This application creates a financial analysis agent powered by xAI's Grok model, combining real-time stock data with web search capabilities. It provides structured financial insights through an interactive playground interface.
3 |
4 | ### Features
5 |
6 | - Powered by xAI's Grok-beta model
7 | - Real-time stock data analysis via YFinance
8 | - Web search capabilities through DuckDuckGo
9 | - Formatted output with tables for financial data
10 | - Interactive playground interface
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | cd ai_agent_tutorials/xai_finance_agent
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Get your xAI API Key
27 |
28 | - Sign up for an [xAI API account](https://console.x.ai/)
29 | - Set your XAI_API_KEY environment variable.
30 | ```bash
31 | export XAI_API_KEY='your-api-key-here'
32 | ```
33 |
34 | 4. Run the team of AI Agents
35 | ```bash
36 | python xai_finance_agent.py
37 | ```
38 |
39 | 5. Open your web browser and navigate to the URL provided in the console output to interact with the AI financial agent through the playground interface.
40 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/xai_finance_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | phidata
2 | duckduckgo-search
3 | yfinance
4 | fastapi[standard]
5 | openai
--------------------------------------------------------------------------------
/ai_agent_tutorials/xai_finance_agent/xai_finance_agent.py:
--------------------------------------------------------------------------------
1 | # import necessary python libraries
2 | from phi.agent import Agent
3 | from phi.model.xai import xAI
4 | from phi.tools.yfinance import YFinanceTools
5 | from phi.tools.duckduckgo import DuckDuckGo
6 | from phi.playground import Playground, serve_playground_app
7 |
8 | # create the AI finance agent
9 | agent = Agent(
10 | name="xAI Finance Agent",
11 |     model=xAI(id="grok-beta"),
12 |     tools=[DuckDuckGo(), YFinanceTools(stock_price=True, analyst_recommendations=True, stock_fundamentals=True)],
13 |     instructions=["Always use tables to display financial/numerical data. For text data use bullet points and small paragraphs."],
14 |     show_tool_calls=True,
15 |     markdown=True,
16 | )
17 |
18 | # UI for finance agent
19 | app = Playground(agents=[agent]).get_app()
20 |
21 | if __name__ == "__main__":
22 | serve_playground_app("xai_finance_agent:app", reload=True)
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_github/README.md:
--------------------------------------------------------------------------------
1 | ## 💬 Chat with GitHub Repo
2 |
3 | LLM app with RAG to chat with GitHub Repo in just 30 lines of Python Code. The app uses Retrieval Augmented Generation (RAG) to provide accurate answers to questions based on the content of the specified GitHub repository.
4 |
5 | ### Features
6 |
7 | - Provide the name of GitHub Repository as input
8 | - Ask questions about the content of the GitHub repository
9 | - Get accurate answers using OpenAI's API and Embedchain
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 | 2. Install the required dependencies:
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 | 3. Get your OpenAI API Key
24 |
25 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
26 |
27 | 4. Get your GitHub Access Token
28 |
29 | - Create a [personal access token](https://docs.github.com/en/enterprise-server@3.6/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token) with the necessary permissions to access the desired GitHub repository.
30 |
31 | 5. Run the Streamlit App
32 | ```bash
33 | streamlit run chat_github.py
34 | ```
35 |
36 | ### How it Works?
37 |
38 | - The app prompts the user to enter their OpenAI API key, which is used to authenticate requests to the OpenAI API.
39 |
40 | - It initializes an instance of the Embedchain App class and a GithubLoader with the provided GitHub Access Token.
41 |
42 | - The user is prompted to enter a GitHub repository URL, which is then added to the Embedchain app's knowledge base using the GithubLoader.
43 |
44 | - The user can ask questions about the GitHub repository using the text input.
45 |
46 | - When a question is asked, the app uses the chat method of the Embedchain app to generate an answer based on the content of the GitHub repository.
47 |
48 | - The app displays the generated answer to the user.
49 |
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_github/chat_github.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | from embedchain.pipeline import Pipeline as App
3 | from embedchain.loaders.github import GithubLoader
4 | import streamlit as st
5 | import os
6 |
7 | loader = GithubLoader(
8 | config={
9 | "token":"Your GitHub Token",
10 | }
11 | )
12 |
13 | # Create Streamlit app
14 | st.title("Chat with GitHub Repository 💬")
15 | st.caption("This app allows you to chat with a GitHub Repo using OpenAI API")
16 |
17 | # Get OpenAI API key from user
18 | openai_access_token = st.text_input("OpenAI API Key", type="password")
19 |
20 | # If OpenAI API key is provided, create an instance of App
21 | if openai_access_token:
22 | os.environ["OPENAI_API_KEY"] = openai_access_token
23 | # Create an instance of Embedchain App
24 | app = App()
25 | # Get the GitHub repo from the user
26 | git_repo = st.text_input("Enter the GitHub Repo", type="default")
27 | if git_repo:
28 | # Add the repo to the knowledge base
29 | app.add("repo:" + git_repo + " " + "type:repo", data_type="github", loader=loader)
30 | st.success(f"Added {git_repo} to knowledge base!")
31 | # Ask a question about the Github Repo
32 | prompt = st.text_input("Ask any question about the GitHub Repo")
33 | # Chat with the GitHub Repo
34 | if prompt:
35 | answer = app.chat(prompt)
36 | st.write(answer)
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_github/chat_github_llama3.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import tempfile
3 | from embedchain import App
4 | from embedchain.loaders.github import GithubLoader
5 | import streamlit as st
6 | import os
7 |
8 | GITHUB_TOKEN = os.getenv("GITHUB_TOKEN")  # personal access token, read from the environment
9 |
10 | def get_loader():
11 | loader = GithubLoader(
12 | config={
13 | "token": GITHUB_TOKEN
14 | }
15 | )
16 | return loader
17 |
18 | if "loader" not in st.session_state:
19 | st.session_state['loader'] = get_loader()
20 |
21 | loader = st.session_state.loader
22 |
23 | # Define the embedchain_bot function
24 | def embedchain_bot(db_path):
25 | return App.from_config(
26 | config={
27 | "llm": {"provider": "ollama", "config": {"model": "llama3:instruct", "max_tokens": 250, "temperature": 0.5, "stream": True, "base_url": 'http://localhost:11434'}},
28 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
29 | "embedder": {"provider": "ollama", "config": {"model": "llama3:instruct", "base_url": 'http://localhost:11434'}},
30 | }
31 | )
32 |
33 | def load_repo(git_repo):
34 | global app
35 | # Add the repo to the knowledge base
36 | print(f"Adding {git_repo} to knowledge base!")
37 | app.add("repo:" + git_repo + " " + "type:repo", data_type="github", loader=loader)
38 | st.success(f"Added {git_repo} to knowledge base!")
39 |
40 |
41 | def make_db_path():
42 | ret = tempfile.mkdtemp(suffix="chroma")
43 | print(f"Created Chroma DB at {ret}")
44 | return ret
45 |
46 | # Create Streamlit app
47 | st.title("Chat with GitHub Repository 💬")
48 | st.caption("This app allows you to chat with a GitHub Repo using Llama-3 running with Ollama")
49 |
50 | # Initialize the Embedchain App
51 | if "app" not in st.session_state:
52 | st.session_state['app'] = embedchain_bot(make_db_path())
53 |
54 | app = st.session_state.app
55 |
56 | # Get the GitHub repo from the user
57 | git_repo = st.text_input("Enter the GitHub Repo", type="default")
58 |
59 | if git_repo and ("repos" not in st.session_state or git_repo not in st.session_state.repos):
60 | if "repos" not in st.session_state:
61 | st.session_state["repos"] = [git_repo]
62 | else:
63 | st.session_state.repos.append(git_repo)
64 | load_repo(git_repo)
65 |
66 |
67 | # Ask a question about the Github Repo
68 | prompt = st.text_input("Ask any question about the GitHub Repo")
69 | # Chat with the GitHub Repo
70 | if prompt:
71 | answer = st.session_state.app.chat(prompt)
72 | st.write(answer)
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_github/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain[github]
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_gmail/README.md:
--------------------------------------------------------------------------------
1 | ## 📨 Chat with Gmail Inbox
2 |
3 | LLM app with RAG to chat with Gmail in just 30 lines of Python Code. The app uses Retrieval Augmented Generation (RAG) to provide accurate answers to questions based on the content of your Gmail Inbox.
4 |
5 | ### Features
6 |
7 | - Connect to your Gmail Inbox
8 | - Ask questions about the content of your emails
9 | - Get accurate answers using RAG and the selected LLM
10 |
11 | ### Installation
12 |
13 | 1. Clone the repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 | 2. Install the required dependencies
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 |
24 | 3. Set up your Google Cloud project and enable the Gmail API:
25 |
26 | - Go to the [Google Cloud Console](https://console.cloud.google.com/) and create a new project.
27 | - Navigate to "APIs & Services > OAuth consent screen" and configure the OAuth consent screen.
28 | - Publish the OAuth consent screen by providing the necessary app information.
29 | - Enable the Gmail API and create OAuth client ID credentials.
30 | - Download the credentials in JSON format and save them as `credentials.json` in your working directory.
31 |
32 | 4. Get your OpenAI API Key
33 |
34 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
35 |
36 | 5. Run the Streamlit App
37 |
38 | ```bash
39 | streamlit run chat_gmail.py
40 | ```
41 |
42 |
43 |
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_gmail/chat_gmail.py:
--------------------------------------------------------------------------------
1 | import tempfile
2 | import streamlit as st
3 | from embedchain import App
4 |
5 | # Define the embedchain_bot function
6 | def embedchain_bot(db_path, api_key):
7 | return App.from_config(
8 | config={
9 | "llm": {"provider": "openai", "config": {"model": "gpt-4-turbo", "temperature": 0.5, "api_key": api_key}},
10 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
11 | "embedder": {"provider": "openai", "config": {"api_key": api_key}},
12 | }
13 | )
14 |
15 | # Create Streamlit app
16 | st.title("Chat with your Gmail Inbox 📧")
17 | st.caption("This app allows you to chat with your Gmail inbox using OpenAI API")
18 |
19 | # Get the OpenAI API key from the user
20 | openai_access_token = st.text_input("Enter your OpenAI API Key", type="password")
21 |
22 | # Set the Gmail filter statically
23 | gmail_filter = "to: me label:inbox"
24 |
25 | # Add the Gmail data to the knowledge base if the OpenAI API key is provided
26 | if openai_access_token:
27 | # Create a temporary directory to store the database
28 | db_path = tempfile.mkdtemp()
29 | # Create an instance of Embedchain App
30 | app = embedchain_bot(db_path, openai_access_token)
31 | app.add(gmail_filter, data_type="gmail")
32 |     st.success("Added emails from Inbox to the knowledge base!")
33 |
34 | # Ask a question about the emails
35 | prompt = st.text_input("Ask any question about your emails")
36 |
37 | # Chat with the emails
38 | if prompt:
39 | answer = app.query(prompt)
40 | st.write(answer)
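41 |
42 | # Note: the gmail_filter string above is standard Gmail search syntax, so ingestion
43 | # can be re-scoped by editing it before app.add() runs. Hypothetical alternatives:
44 | #   gmail_filter = "from:newsletter@example.com after:2024/01/01"  # one sender, date-bounded
45 | #   gmail_filter = "label:receipts has:attachment"                 # a custom label, attachments only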
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_gmail/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain[gmail]
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_pdf/README.md:
--------------------------------------------------------------------------------
1 | ## 📄 Chat with PDF
2 |
3 | LLM app with RAG to chat with PDF in just 30 lines of Python Code. The app uses Retrieval Augmented Generation (RAG) to provide accurate answers to questions based on the content of the uploaded PDF.
4 |
5 | ### Features
6 |
7 | - Upload a PDF document
8 | - Ask questions about the content of the PDF
9 | - Get accurate answers using RAG and the selected LLM
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 | 2. Install the required dependencies
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 | 3. Get your OpenAI API Key
24 |
25 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
26 |
27 | 4. Run the Streamlit App
28 | ```bash
29 | streamlit run chat_pdf.py
30 | ```
31 | ### Interactive Application Demo
32 | https://github.com/Shubhamsaboo/awesome-llm-apps/assets/31396011/12bdfc11-c877-4fc7-9e70-63f21d2eb977
33 |
34 |
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_pdf/chat_pdf.py:
--------------------------------------------------------------------------------
1 | import os
2 | import tempfile
3 | import streamlit as st
4 | from embedchain import App
5 |
6 | def embedchain_bot(db_path, api_key):
7 | return App.from_config(
8 | config={
9 | "llm": {"provider": "openai", "config": {"api_key": api_key}},
10 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
11 | "embedder": {"provider": "openai", "config": {"api_key": api_key}},
12 | }
13 | )
14 |
15 | st.title("Chat with PDF")
16 |
17 | openai_access_token = st.text_input("OpenAI API Key", type="password")
18 |
19 | if openai_access_token:
20 | db_path = tempfile.mkdtemp()
21 | app = embedchain_bot(db_path, openai_access_token)
22 |
23 | pdf_file = st.file_uploader("Upload a PDF file", type="pdf")
24 |
25 | if pdf_file:
26 | with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
27 | f.write(pdf_file.getvalue())
28 | app.add(f.name, data_type="pdf_file")
29 | os.remove(f.name)
30 | st.success(f"Added {pdf_file.name} to knowledge base!")
31 |
32 | prompt = st.text_input("Ask a question about the PDF")
33 |
34 | if prompt:
35 | answer = app.chat(prompt)
36 | st.write(answer)
37 |
38 |
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_pdf/chat_pdf_llama3.2.py:
--------------------------------------------------------------------------------
1 | # Import necessary libraries
2 | import os
3 | import tempfile
4 | import streamlit as st
5 | from embedchain import App
6 | import base64
7 | from streamlit_chat import message
8 |
9 | # Define the embedchain_bot function
10 | def embedchain_bot(db_path):
11 | return App.from_config(
12 | config={
13 | "llm": {"provider": "ollama", "config": {"model": "llama3.2:latest", "max_tokens": 250, "temperature": 0.5, "stream": True, "base_url": 'http://localhost:11434'}},
14 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
15 | "embedder": {"provider": "ollama", "config": {"model": "llama3.2:latest", "base_url": 'http://localhost:11434'}},
16 | }
17 | )
18 |
19 | # Add a function to display PDF
20 | def display_pdf(file):
21 | base64_pdf = base64.b64encode(file.read()).decode('utf-8')
22 |     pdf_display = f'<iframe src="data:application/pdf;base64,{base64_pdf}" width="100%" height="600" type="application/pdf"></iframe>'
23 | st.markdown(pdf_display, unsafe_allow_html=True)
24 |
25 | st.title("Chat with PDF using Llama 3.2")
26 | st.caption("This app allows you to chat with a PDF using Llama 3.2 running locally with Ollama!")
27 |
28 | # Define the database path
29 | db_path = tempfile.mkdtemp()
30 |
31 | # Create a session state to store the app instance and chat history
32 | if 'app' not in st.session_state:
33 | st.session_state.app = embedchain_bot(db_path)
34 | if 'messages' not in st.session_state:
35 | st.session_state.messages = []
36 |
37 | # Sidebar for PDF upload and preview
38 | with st.sidebar:
39 | st.header("PDF Upload")
40 | pdf_file = st.file_uploader("Upload a PDF file", type="pdf")
41 |
42 | if pdf_file:
43 | st.subheader("PDF Preview")
44 | display_pdf(pdf_file)
45 |
46 | if st.button("Add to Knowledge Base"):
47 | with st.spinner("Adding PDF to knowledge base..."):
48 | with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
49 | f.write(pdf_file.getvalue())
50 | st.session_state.app.add(f.name, data_type="pdf_file")
51 | os.remove(f.name)
52 | st.success(f"Added {pdf_file.name} to knowledge base!")
53 |
54 | # Chat interface
55 | for i, msg in enumerate(st.session_state.messages):
56 | message(msg["content"], is_user=msg["role"] == "user", key=str(i))
57 |
58 | if prompt := st.chat_input("Ask a question about the PDF"):
59 | st.session_state.messages.append({"role": "user", "content": prompt})
60 | message(prompt, is_user=True)
61 |
62 | with st.spinner("Thinking..."):
63 | response = st.session_state.app.chat(prompt)
64 | st.session_state.messages.append({"role": "assistant", "content": response})
65 | message(response)
66 |
67 | # Clear chat history button
68 | if st.button("Clear Chat History"):
69 | st.session_state.messages = []
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_pdf/chat_pdf_llama3.py:
--------------------------------------------------------------------------------
1 | # Import necessary libraries
2 | import os
3 | import tempfile
4 | import streamlit as st
5 | from embedchain import App
6 |
7 | # Define the embedchain_bot function
8 | def embedchain_bot(db_path):
9 | return App.from_config(
10 | config={
11 | "llm": {"provider": "ollama", "config": {"model": "llama3:instruct", "max_tokens": 250, "temperature": 0.5, "stream": True, "base_url": 'http://localhost:11434'}},
12 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
13 | "embedder": {"provider": "ollama", "config": {"model": "llama3:instruct", "base_url": 'http://localhost:11434'}},
14 | }
15 | )
16 |
17 | st.title("Chat with PDF")
18 | st.caption("This app allows you to chat with a PDF using Llama3 running locally with Ollama!")
19 |
20 | # Create a temporary directory to store the PDF file
21 | db_path = tempfile.mkdtemp()
22 | # Create an instance of the embedchain App
23 | app = embedchain_bot(db_path)
24 |
25 | # Upload a PDF file
26 | pdf_file = st.file_uploader("Upload a PDF file", type="pdf")
27 |
28 | # If a PDF file is uploaded, add it to the knowledge base
29 | if pdf_file:
30 | with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
31 | f.write(pdf_file.getvalue())
32 | app.add(f.name, data_type="pdf_file")
33 | os.remove(f.name)
34 | st.success(f"Added {pdf_file.name} to knowledge base!")
35 |
36 | # Ask a question about the PDF
37 | prompt = st.text_input("Ask a question about the PDF")
38 | # Display the answer
39 | if prompt:
40 | answer = app.chat(prompt)
41 | st.write(answer)
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_pdf/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_research_papers/README.md:
--------------------------------------------------------------------------------
1 | ## 🔎 Chat with Arxiv Research Papers
2 | This Streamlit app enables you to engage in interactive conversations with arXiv, a vast repository of scholarly articles, using GPT-4o. With this RAG application, you can easily access and explore the wealth of knowledge contained within arXiv.
3 |
4 | ### Features
5 | - Engage in conversational interactions with arXiv
6 | - Access and explore a vast collection of research papers
7 | - Utilize OpenAI GPT-4o for intelligent responses
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | ```
16 | 2. Install the required dependencies:
17 |
18 | ```bash
19 | pip install -r requirements.txt
20 | ```
21 | 3. Get your OpenAI API Key
22 |
23 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
24 |
25 | 4. Run the Streamlit App
26 | ```bash
27 | streamlit run chat_arxiv.py
28 | ```
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_research_papers/chat_arxiv.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from phi.assistant import Assistant
4 | from phi.llm.openai import OpenAIChat
5 | from phi.tools.arxiv_toolkit import ArxivToolkit
6 |
7 | # Set up the Streamlit app
8 | st.title("Chat with Research Papers 🔎🤖")
9 | st.caption("This app allows you to chat with arXiv research papers using OpenAI GPT-4o model.")
10 |
11 | # Get OpenAI API key from user
12 | openai_access_token = st.text_input("OpenAI API Key", type="password")
13 |
14 | # If OpenAI API key is provided, create an instance of Assistant
15 | if openai_access_token:
16 | # Create an instance of the Assistant
17 | assistant = Assistant(
18 | llm=OpenAIChat(
19 | model="gpt-4o",
20 | max_tokens=1024,
21 | temperature=0.9,
22 |         api_key=openai_access_token), tools=[ArxivToolkit()]
23 | )
24 |
25 | # Get the search query from the user
26 |     query = st.text_input("Enter the Search Query", type="default")
27 |
28 | if query:
29 | # Search the web using the AI Assistant
30 | response = assistant.run(query, stream=False)
31 | st.write(response)
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_research_papers/chat_arxiv_llama3.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from phi.assistant import Assistant
4 | from phi.llm.ollama import Ollama
5 | from phi.tools.arxiv_toolkit import ArxivToolkit
6 |
7 | # Set up the Streamlit app
8 | st.title("Chat with Research Papers 🔎🤖")
9 | st.caption("This app allows you to chat with arXiv research papers using Llama-3 running locally.")
10 |
11 | # Create an instance of the Assistant
12 | assistant = Assistant(
13 | llm=Ollama(
14 | model="llama3:instruct") , tools=[ArxivToolkit()], show_tool_calls=True
15 | )
16 |
17 | # Get the search query from the user
18 | query = st.text_input("Enter the Search Query", type="default")
19 |
20 | if query:
21 | # Search the web using the AI Assistant
22 | response = assistant.run(query, stream=False)
23 | st.write(response)
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_research_papers/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | phidata
3 | arxiv
4 | openai
5 | pypdf
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_substack/README.md:
--------------------------------------------------------------------------------
1 | ## 📝 Chat with Substack Newsletter
2 | Streamlit app that allows you to chat with a Substack newsletter using OpenAI's API and the Embedchain library. This app leverages GPT-4 to provide accurate answers to questions based on the content of the specified Substack newsletter.
3 |
4 | ## Features
5 | - Input a Substack blog URL
6 | - Ask questions about the content of the Substack newsletter
7 | - Get accurate answers using OpenAI's API and Embedchain
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | ```
16 | 2. Install the required dependencies:
17 |
18 | ```bash
19 | pip install -r requirements.txt
20 | ```
21 | 3. Get your OpenAI API Key
22 |
23 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
24 |
25 | 4. Run the Streamlit App
26 | ```bash
27 | streamlit run chat_substack.py
28 | ```
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_substack/chat_substack.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from embedchain import App
3 | import tempfile
4 |
5 | # Define the embedchain_bot function
6 | def embedchain_bot(db_path, api_key):
7 | return App.from_config(
8 | config={
9 | "llm": {"provider": "openai", "config": {"model": "gpt-4-turbo", "temperature": 0.5, "api_key": api_key}},
10 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
11 | "embedder": {"provider": "openai", "config": {"api_key": api_key}},
12 | }
13 | )
14 |
15 | st.title("Chat with Substack Newsletter 📝")
16 | st.caption("This app allows you to chat with Substack newsletter using OpenAI API")
17 |
18 | # Get OpenAI API key from user
19 | openai_access_token = st.text_input("OpenAI API Key", type="password")
20 |
21 | if openai_access_token:
22 | # Create a temporary directory to store the database
23 | db_path = tempfile.mkdtemp()
24 | # Create an instance of Embedchain App
25 | app = embedchain_bot(db_path, openai_access_token)
26 |
27 | # Get the Substack blog URL from the user
28 | substack_url = st.text_input("Enter Substack Newsletter URL", type="default")
29 |
30 | if substack_url:
31 | # Add the Substack blog to the knowledge base
32 | app.add(substack_url, data_type='substack')
33 | st.success(f"Added {substack_url} to knowledge base!")
34 |
35 | # Ask a question about the Substack blog
36 | query = st.text_input("Ask any question about the substack newsletter!")
37 |
38 | # Query the Substack blog
39 | if query:
40 | result = app.query(query)
41 | st.write(result)
42 |
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_substack/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_youtube_videos/README.md:
--------------------------------------------------------------------------------
1 | ## 📽️ Chat with YouTube Videos
2 |
3 | LLM app with RAG to chat with YouTube Videos in just 30 lines of Python Code. The app uses Retrieval Augmented Generation (RAG) to provide accurate answers to questions based on the content of the uploaded video.
4 |
5 | ### Features
6 |
7 | - Input a YouTube video URL
8 | - Ask questions about the content of the video
9 | - Get accurate answers using RAG and the selected LLM
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 | 2. Install the required dependencies:
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 | 3. Get your OpenAI API Key
24 |
25 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
26 |
27 | 4. Run the Streamlit App
28 | ```bash
29 | streamlit run chat_youtube.py
30 | ```
31 |
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_youtube_videos/chat_youtube.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import tempfile
3 | import streamlit as st
4 | from embedchain import App
5 |
6 | # Define the embedchain_bot function
7 | def embedchain_bot(db_path, api_key):
8 | return App.from_config(
9 | config={
10 | "llm": {"provider": "openai", "config": {"model": "gpt-4o", "temperature": 0.5, "api_key": api_key}},
11 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
12 | "embedder": {"provider": "openai", "config": {"api_key": api_key}},
13 | }
14 | )
15 |
16 | # Create Streamlit app
17 | st.title("Chat with YouTube Video 📺")
18 | st.caption("This app allows you to chat with a YouTube video using OpenAI API")
19 |
20 | # Get OpenAI API key from user
21 | openai_access_token = st.text_input("OpenAI API Key", type="password")
22 |
23 | # If OpenAI API key is provided, create an instance of App
24 | if openai_access_token:
25 | # Create a temporary directory to store the database
26 | db_path = tempfile.mkdtemp()
27 | # Create an instance of Embedchain App
28 | app = embedchain_bot(db_path, openai_access_token)
29 | # Get the YouTube video URL from the user
30 | video_url = st.text_input("Enter YouTube Video URL", type="default")
31 | # Add the video to the knowledge base
32 | if video_url:
33 | app.add(video_url, data_type="youtube_video")
34 | st.success(f"Added {video_url} to knowledge base!")
35 | # Ask a question about the video
36 | prompt = st.text_input("Ask any question about the YouTube Video")
37 | # Chat with the video
38 | if prompt:
39 | answer = app.chat(prompt)
40 | st.write(answer)
41 |
42 |
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_youtube_videos/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain[youtube]
--------------------------------------------------------------------------------
/docs/banner/unwind.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/XiaomingX/awesome-llm-app/ec0cd6992b784ecc810c8d32a87748565a63d833/docs/banner/unwind.png
--------------------------------------------------------------------------------
/docs/banner/unwind_black.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/XiaomingX/awesome-llm-app/ec0cd6992b784ecc810c8d32a87748565a63d833/docs/banner/unwind_black.png
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/ai_arxiv_agent_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 📚 AI Research Agent with Memory
2 | This Streamlit app implements an AI-powered research assistant that helps users search for academic papers on arXiv while maintaining a memory of user interests and past interactions. It utilizes OpenAI's GPT-4o-mini model for processing search results, MultiOn for web browsing, and Mem0 with Qdrant for maintaining user context.
3 |
4 | ### Features
5 |
6 | - Search interface for querying arXiv papers
7 | - AI-powered processing of search results for improved readability
8 | - Persistent memory of user interests and past searches
9 | - Utilizes OpenAI's GPT-4o-mini model for intelligent processing
10 | - Implements memory storage and retrieval using Mem0 and Qdrant
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Ensure Qdrant is running:
26 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
27 |
28 | ```bash
29 | docker pull qdrant/qdrant
30 |
31 | docker run -p 6333:6333 -p 6334:6334 \
32 | -v $(pwd)/qdrant_storage:/qdrant/storage:z \
33 | qdrant/qdrant
34 | ```
35 |
36 | 4. Run the Streamlit App
37 | ```bash
38 | streamlit run ai_arxiv_agent_memory.py
39 | ```
40 |
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/ai_arxiv_agent_memory/ai_arxiv_agent_memory.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import os
3 | from mem0 import Memory
4 | from multion.client import MultiOn
5 | from openai import OpenAI
6 |
7 | st.title("AI Research Agent with Memory 📚")
8 |
9 | api_keys = {k: st.text_input(f"{k.capitalize()} API Key", type="password") for k in ['openai', 'multion']}
10 |
11 | if all(api_keys.values()):
12 | os.environ['OPENAI_API_KEY'] = api_keys['openai']
13 | # Initialize Mem0 with Qdrant
14 | config = {
15 | "vector_store": {
16 | "provider": "qdrant",
17 | "config": {
18 | "model": "gpt-4o-mini",
19 | "host": "localhost",
20 | "port": 6333,
21 | }
22 | },
23 | }
24 | memory, multion, openai_client = Memory.from_config(config), MultiOn(api_key=api_keys['multion']), OpenAI(api_key=api_keys['openai'])
25 |
26 | user_id = st.sidebar.text_input("Enter your Username")
27 | #user_interests = st.text_area("Research interests and background")
28 |
29 | search_query = st.text_input("Research paper search query")
30 |
31 | def process_with_gpt4(result):
32 | prompt = f"""
33 | Based on the following arXiv search result, provide a proper structured output in markdown that is readable by the users.
34 | Each paper should have a title, authors, abstract, and link.
35 | Search Result: {result}
36 | Output Format: Table with the following columns: [{{"title": "Paper Title", "authors": "Author Names", "abstract": "Brief abstract", "link": "arXiv link"}}, ...]
37 | """
38 | response = openai_client.chat.completions.create(model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}], temperature=0.2)
39 | return response.choices[0].message.content
40 |
41 | if st.button('Search for Papers'):
42 | with st.spinner('Searching and Processing...'):
43 | relevant_memories = memory.search(search_query, user_id=user_id, limit=3)
44 | prompt = f"Search for arXiv papers: {search_query}\nUser background: {' '.join(mem['text'] for mem in relevant_memories)}"
45 | result = process_with_gpt4(multion.browse(cmd=prompt, url="https://arxiv.org/"))
46 | st.markdown(result)
47 |
48 | if st.sidebar.button("View Memory"):
49 |         st.sidebar.write("\n".join([f"- {mem['memory']}" for mem in memory.get_all(user_id=user_id).get("results", [])]))
50 |
51 | else:
52 | st.warning("Please enter your API keys to use this app.")
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/ai_arxiv_agent_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai
4 | multion
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/ai_travel_agent_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧳 AI Travel Agent with Memory
2 | This Streamlit app implements an AI-powered travel assistant that remembers user preferences and past interactions. It utilizes OpenAI's GPT-4o for generating responses and Mem0 with Qdrant for maintaining conversation history.
3 |
4 | ### Features
5 | - Chat-based interface for interacting with an AI travel assistant
6 | - Persistent memory of user preferences and past conversations
7 | - Utilizes OpenAI's GPT-4o model for intelligent responses
8 | - Implements memory storage and retrieval using Mem0 and Qdrant
9 | - User-specific conversation history and memory viewing
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | ```
17 |
18 | 2. Install the required dependencies:
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 |
24 | 3. Ensure Qdrant is running:
25 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
26 |
27 | ```bash
28 | docker pull qdrant/qdrant
29 |
30 | docker run -p 6333:6333 -p 6334:6334 \
31 | -v $(pwd)/qdrant_storage:/qdrant/storage:z \
32 | qdrant/qdrant
33 | ```
34 |
35 | 4. Run the Streamlit App
36 | ```bash
37 | streamlit run travel_agent_memory.py
38 | ```
39 |
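40 | The `vector_store` block in `travel_agent_memory.py` is where you point the app at a different Qdrant instance. A minimal sketch, assuming a remote host of your own (the hostname below is a placeholder):
41 |
42 | ```python
43 | # Hypothetical config for a non-local Qdrant deployment; passed to Memory.from_config()
44 | config = {
45 |     "vector_store": {
46 |         "provider": "qdrant",
47 |         "config": {
48 |             "host": "qdrant.example.internal",  # placeholder: your Qdrant host
49 |             "port": 6333,
50 |         }
51 |     },
52 | }
53 | ```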
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/ai_travel_agent_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/ai_travel_agent_memory/travel_agent_memory.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from openai import OpenAI
3 | from mem0 import Memory
4 |
5 | # Set up the Streamlit App
6 | st.title("AI Travel Agent with Memory 🧳")
7 | st.caption("Chat with a travel assistant who remembers your preferences and past interactions.")
8 |
9 | # Set the OpenAI API key
10 | openai_api_key = st.text_input("Enter OpenAI API Key", type="password")
11 |
12 | if openai_api_key:
13 | # Initialize OpenAI client
14 | client = OpenAI(api_key=openai_api_key)
15 |
16 | # Initialize Mem0 with Qdrant
17 | config = {
18 | "vector_store": {
19 | "provider": "qdrant",
20 | "config": {
21 | "host": "localhost",
22 | "port": 6333,
23 | }
24 | },
25 | }
26 | memory = Memory.from_config(config)
27 |
28 | # Sidebar for username and memory view
29 | st.sidebar.title("Enter your username:")
30 | previous_user_id = st.session_state.get("previous_user_id", None)
31 | user_id = st.sidebar.text_input("Enter your Username")
32 |
33 | if user_id != previous_user_id:
34 | st.session_state.messages = []
35 | st.session_state.previous_user_id = user_id
36 |
37 | # Sidebar option to show memory
38 | st.sidebar.title("Memory Info")
39 | if st.button("View My Memory"):
40 | memories = memory.get_all(user_id=user_id)
41 | if memories and "results" in memories:
42 | st.write(f"Memory history for **{user_id}**:")
43 | for mem in memories["results"]:
44 | if "memory" in mem:
45 | st.write(f"- {mem['memory']}")
46 | else:
47 | st.sidebar.info("No learning history found for this user ID.")
48 | else:
49 | st.sidebar.error("Please enter a username to view memory info.")
50 |
51 | # Initialize the chat history
52 | if "messages" not in st.session_state:
53 | st.session_state.messages = []
54 |
55 | # Display the chat history
56 | for message in st.session_state.messages:
57 | with st.chat_message(message["role"]):
58 | st.markdown(message["content"])
59 |
60 | # Accept user input
61 | prompt = st.chat_input("Where would you like to travel?")
62 |
63 | if prompt and user_id:
64 | # Add user message to chat history
65 | st.session_state.messages.append({"role": "user", "content": prompt})
66 | with st.chat_message("user"):
67 | st.markdown(prompt)
68 |
69 | # Retrieve relevant memories
70 | relevant_memories = memory.search(query=prompt, user_id=user_id)
71 | context = "Relevant past information:\n"
72 | if relevant_memories and "results" in relevant_memories:
73 | for memory in relevant_memories["results"]:
74 | if "memory" in memory:
75 | context += f"- {memory['memory']}\n"
76 |
77 | # Prepare the full prompt
78 | full_prompt = f"{context}\nHuman: {prompt}\nAI:"
79 |
80 | # Generate response
81 | response = client.chat.completions.create(
82 | model="gpt-4o",
83 | messages=[
84 | {"role": "system", "content": "You are a travel assistant with access to past conversations."},
85 | {"role": "user", "content": full_prompt}
86 | ]
87 | )
88 | answer = response.choices[0].message.content
89 |
90 | # Add assistant response to chat history
91 | st.session_state.messages.append({"role": "assistant", "content": answer})
92 | with st.chat_message("assistant"):
93 | st.markdown(answer)
94 |
95 | # Store the user query and AI response in memory
96 | memory.add(prompt, user_id=user_id, metadata={"role": "user"})
97 | memory.add(answer, user_id=user_id, metadata={"role": "assistant"})
98 | elif not user_id:
99 | st.error("Please enter a username to start the chat.")
100 |
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/llama3_stateful_chat/local_llama3_chat.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from openai import OpenAI
3 |
4 | # Set up the Streamlit App
5 | st.title("Local ChatGPT with Memory 🦙")
6 | st.caption("Chat with locally hosted memory-enabled Llama-3 using the LM Studio 💯")
7 |
8 | # Point to the local server setup using LM Studio
9 | client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
10 |
11 | # Initialize the chat history
12 | if "messages" not in st.session_state:
13 | st.session_state.messages = []
14 |
15 | # Display the chat history
16 | for message in st.session_state.messages:
17 | with st.chat_message(message["role"]):
18 | st.markdown(message["content"])
19 |
20 | # Accept user input
21 | if prompt := st.chat_input("What is up?"):
22 | st.session_state.messages.append({"role": "system", "content": "When the input starts with /add, don't follow up with a prompt."})
23 | # Add user message to chat history
24 | st.session_state.messages.append({"role": "user", "content": prompt})
25 | # Display user message in chat message container
26 | with st.chat_message("user"):
27 | st.markdown(prompt)
28 | # Generate response
29 | response = client.chat.completions.create(
30 | model="lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF",
31 | messages=st.session_state.messages, temperature=0.7
32 | )
33 | # Add assistant response to chat history
34 | st.session_state.messages.append({"role": "assistant", "content": response.choices[0].message.content})
35 | # Display assistant response in chat message container
36 | with st.chat_message("assistant"):
37 | st.markdown(response.choices[0].message.content)
38 |
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/llama3_stateful_chat/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/llm_app_personalized_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧠 LLM App with Memory
2 | This Streamlit app is an AI-powered chatbot that uses OpenAI's GPT-4o model with a persistent memory feature. It allows users to have conversations with the AI while maintaining context across multiple interactions.
3 |
4 | ### Features
5 |
6 | - Utilizes OpenAI's GPT-4o model for generating responses
7 | - Implements persistent memory using Mem0 and Qdrant vector store
8 | - Allows users to view their conversation history
9 | - Provides a user-friendly interface with Streamlit
10 |
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Ensure Qdrant is running:
26 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
27 |
28 | ```bash
29 | docker pull qdrant/qdrant
30 |
31 | docker run -p 6333:6333 -p 6334:6334 \
32 | -v $(pwd)/qdrant_storage:/qdrant/storage:z \
33 | qdrant/qdrant
34 | ```
35 |
36 | 4. Run the Streamlit App
37 | ```bash
38 | streamlit run llm_app_memory.py
39 | ```
40 |
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/llm_app_personalized_memory/llm_app_memory.py:
--------------------------------------------------------------------------------
1 | import os
2 | import streamlit as st
3 | from mem0 import Memory
4 | from openai import OpenAI
5 |
6 | st.title("LLM App with Memory 🧠")
7 | st.caption("LLM App with personalized memory layer that remembers ever user's choice and interests")
8 |
9 | openai_api_key = st.text_input("Enter OpenAI API Key", type="password")
10 | os.environ["OPENAI_API_KEY"] = openai_api_key
11 |
12 | if openai_api_key:
13 | # Initialize OpenAI client
14 | client = OpenAI(api_key=openai_api_key)
15 |
16 | # Initialize Mem0 with Qdrant
17 | config = {
18 | "vector_store": {
19 | "provider": "qdrant",
20 | "config": {
21 | "collection_name": "llm_app_memory",
22 | "host": "localhost",
23 | "port": 6333,
24 | }
25 | },
26 | }
27 |
28 | memory = Memory.from_config(config)
29 |
30 | user_id = st.text_input("Enter your Username")
31 |
32 | prompt = st.text_input("Ask ChatGPT")
33 |
34 | if st.button('Chat with LLM'):
35 | with st.spinner('Searching...'):
36 | relevant_memories = memory.search(query=prompt, user_id=user_id)
37 | # Prepare context with relevant memories
38 | context = "Relevant past information:\n"
39 |
40 |             for mem in relevant_memories.get("results", []):  # mem0 returns {"results": [...]} entries keyed by "memory"
41 |                 context += f"- {mem['memory']}\n"
42 |
43 | # Prepare the full prompt
44 | full_prompt = f"{context}\nHuman: {prompt}\nAI:"
45 |
46 | # Get response from GPT-4
47 | response = client.chat.completions.create(
48 | model="gpt-4o",
49 | messages=[
50 | {"role": "system", "content": "You are a helpful assistant with access to past conversations."},
51 | {"role": "user", "content": full_prompt}
52 | ]
53 | )
54 |
55 | answer = response.choices[0].message.content
56 |
57 | st.write("Answer: ", answer)
58 |
59 | # Add AI response to memory
60 | memory.add(answer, user_id=user_id)
61 |
62 |
63 | # Sidebar option to show memory
64 | st.sidebar.title("Memory Info")
65 | if st.sidebar.button("View My Memory"):
66 | memories = memory.get_all(user_id=user_id)
67 | if memories and "results" in memories:
68 | st.sidebar.write(f"Memory history for **{user_id}**:")
69 | for mem in memories["results"]:
70 | if "memory" in mem:
71 | st.sidebar.write(f"- {mem['memory']}")
72 | else:
73 | st.sidebar.info("No learning history found for this user ID.")
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/llm_app_personalized_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/local_chatgpt_with_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧠 Local ChatGPT using Llama 3.1 with Personal Memory
2 | This Streamlit application implements a fully local ChatGPT-like experience using Llama 3.1, featuring personalized memory storage for each user. All components, including the language model, embeddings, and vector store, run locally without requiring external API keys.
3 |
4 | ### Features
5 | - Fully local implementation with no external API dependencies
6 | - Powered by Llama 3.1 via Ollama
7 | - Personal memory space for each user
8 | - Local embedding generation using Nomic Embed
9 | - Vector storage with Qdrant
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | ```
17 |
18 | 2. Install the required dependencies:
19 |
20 | ```bash
21 | cd llm_apps_with_memory_tutorials/local_chatgpt_with_memory
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Install and start [Qdrant](https://qdrant.tech/documentation/guides/installation/) vector database locally
26 |
27 | ```bash
28 | docker pull qdrant/qdrant
29 | docker run -p 6333:6333 qdrant/qdrant
30 | ```
31 |
32 | 4. Install [Ollama](https://ollama.com/download) and pull Llama 3.1
33 | ```bash
34 | ollama pull llama3.1
35 | ```
36 |
37 | 5. Run the Streamlit App
38 | ```bash
39 | streamlit run local_chatgpt_memory.py
40 | ```
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/local_chatgpt_with_memory/local_chatgpt_memory.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from mem0 import Memory
3 | from litellm import completion
4 |
5 | # Configuration for Memory
6 | config = {
7 | "vector_store": {
8 | "provider": "qdrant",
9 | "config": {
10 | "collection_name": "local-chatgpt-memory",
11 | "host": "localhost",
12 | "port": 6333,
13 | "embedding_model_dims": 768,
14 | },
15 | },
16 | "llm": {
17 | "provider": "ollama",
18 | "config": {
19 | "model": "llama3.1:latest",
20 | "temperature": 0,
21 | "max_tokens": 8000,
22 | "ollama_base_url": "http://localhost:11434", # Ensure this URL is correct
23 | },
24 | },
25 | "embedder": {
26 | "provider": "ollama",
27 | "config": {
28 | "model": "nomic-embed-text:latest",
29 | # Alternatively, you can use "snowflake-arctic-embed:latest"
30 | "ollama_base_url": "http://localhost:11434",
31 | },
32 | },
33 | "version": "v1.1"
34 | }
35 |
36 | st.title("Local ChatGPT using Llama 3.1 with Personal Memory 🧠")
37 | st.caption("Each user gets their own personalized memory space!")
38 |
39 | # Initialize session state for chat history and previous user ID
40 | if "messages" not in st.session_state:
41 | st.session_state.messages = []
42 | if "previous_user_id" not in st.session_state:
43 | st.session_state.previous_user_id = None
44 |
45 | # Sidebar for user authentication
46 | with st.sidebar:
47 | st.title("User Settings")
48 | user_id = st.text_input("Enter your Username", key="user_id")
49 |
50 | # Check if user ID has changed
51 | if user_id != st.session_state.previous_user_id:
52 | st.session_state.messages = [] # Clear chat history
53 | st.session_state.previous_user_id = user_id # Update previous user ID
54 |
55 | if user_id:
56 | st.success(f"Logged in as: {user_id}")
57 |
58 | # Initialize Memory with the configuration
59 | m = Memory.from_config(config)
60 |
61 | # Memory viewing section
62 | st.header("Memory Context")
63 | if st.button("View My Memory"):
64 | memories = m.get_all(user_id=user_id)
65 | if memories and "results" in memories:
66 | st.write(f"Memory history for **{user_id}**:")
67 | for memory in memories["results"]:
68 | if "memory" in memory:
69 | st.write(f"- {memory['memory']}")
70 |
71 | # Main chat interface
72 | if user_id: # Only show chat interface if user is "logged in"
73 | # Display chat history
74 | for message in st.session_state.messages:
75 | with st.chat_message(message["role"]):
76 | st.markdown(message["content"])
77 |
78 | # User input
79 | if prompt := st.chat_input("What is your message?"):
80 | # Add user message to chat history
81 | st.session_state.messages.append({"role": "user", "content": prompt})
82 |
83 | # Display user message
84 | with st.chat_message("user"):
85 | st.markdown(prompt)
86 |
87 | # Add to memory
88 | m.add(prompt, user_id=user_id)
89 |
90 | # Get context from memory
91 | memories = m.get_all(user_id=user_id)
92 | context = ""
93 | if memories and "results" in memories:
94 | for memory in memories["results"]:
95 | if "memory" in memory:
96 | context += f"- {memory['memory']}\n"
97 |
98 | # Generate assistant response
99 | with st.chat_message("assistant"):
100 | message_placeholder = st.empty()
101 | full_response = ""
102 |
103 | # Stream the response
104 | try:
105 | response = completion(
106 | model="ollama/llama3.1:latest",
107 | messages=[
108 | {"role": "system", "content": "You are a helpful assistant with access to past conversations. Use the context provided to give personalized responses."},
109 | {"role": "user", "content": f"Context from previous conversations with {user_id}: {context}\nCurrent message: {prompt}"}
110 | ],
111 | api_base="http://localhost:11434",
112 | stream=True
113 | )
114 |
115 | # Process streaming response
116 | for chunk in response:
117 | if hasattr(chunk, 'choices') and len(chunk.choices) > 0:
118 | content = chunk.choices[0].delta.get('content', '')
119 | if content:
120 | full_response += content
121 | message_placeholder.markdown(full_response + "▌")
122 |
123 | # Final update
124 | message_placeholder.markdown(full_response)
125 | except Exception as e:
126 | st.error(f"Error generating response: {str(e)}")
127 | full_response = "I apologize, but I encountered an error generating the response."
128 | message_placeholder.markdown(full_response)
129 |
130 | # Add assistant response to chat history
131 | st.session_state.messages.append({"role": "assistant", "content": full_response})
132 |
133 | # Add response to memory
134 | m.add(f"Assistant: {full_response}", user_id=user_id)
135 |
136 | else:
137 | st.info("👈 Please enter your username in the sidebar to start chatting!")
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/local_chatgpt_with_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
4 | litellm
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/multi_llm_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧠 Multi-LLM App with Shared Memory
2 | This Streamlit application demonstrates a multi-LLM system with a shared memory layer, allowing users to interact with different language models while maintaining conversation history and context across sessions.
3 |
4 | ### Features
5 |
6 | - Support for multiple LLMs:
7 | - OpenAI's GPT-4o
8 | - Anthropic's Claude 3.5 Sonnet
9 |
10 | - Persistent memory using Qdrant vector store
11 | - User-specific conversation history
12 | - Memory retrieval for contextual responses
13 | - User-friendly interface with LLM selection
14 |
15 | ### How to get Started?
16 |
17 | 1. Clone the GitHub repository
18 | ```bash
19 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
20 | ```
21 |
22 | 2. Install the required dependencies:
23 |
24 | ```bash
25 | pip install -r requirements.txt
26 | ```
27 |
28 | 3. Ensure Qdrant is running:
29 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
30 |
31 | ```bash
32 | docker pull qdrant/qdrant
33 | docker run -p 6333:6333 qdrant/qdrant
34 | ```
35 |
36 | 4. Run the Streamlit App
37 | ```bash
38 | streamlit run multi_llm_memory.py
39 | ```
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/multi_llm_memory/multi_llm_memory.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from mem0 import Memory
3 | from openai import OpenAI
4 | import os
5 | from litellm import completion
6 |
7 | st.title("Multi-LLM App with Shared Memory 🧠")
8 | st.caption("LLM App with a personalized memory layer that remembers each user's choices and interests across multiple users and LLMs")
9 |
10 | openai_api_key = st.text_input("Enter OpenAI API Key", type="password")
11 | anthropic_api_key = st.text_input("Enter Anthropic API Key", type="password")
12 |
13 | if openai_api_key and anthropic_api_key:
14 | os.environ["ANTHROPIC_API_KEY"] = anthropic_api_key
15 |
16 | # Initialize Mem0 with Qdrant
17 | config = {
18 | "vector_store": {
19 | "provider": "qdrant",
20 | "config": {
21 | "host": "localhost",
22 | "port": 6333,
23 | }
24 | },
25 | }
26 |
27 | memory = Memory.from_config(config)
28 |
29 | user_id = st.sidebar.text_input("Enter your Username")
30 | llm_choice = st.sidebar.radio("Select LLM", ('OpenAI GPT-4o', 'Claude Sonnet 3.5'))
31 |
32 | if llm_choice == 'OpenAI GPT-4o':
33 | client = OpenAI(api_key=openai_api_key)
34 | # No client is needed for Claude: responses come from litellm's completion(),
35 | # which reads ANTHROPIC_API_KEY from the environment (set above).
46 |
47 | prompt = st.text_input("Ask the LLM")
48 |
49 | if st.button('Chat with LLM'):
50 | with st.spinner('Searching...'):
51 | relevant_memories = memory.search(query=prompt, user_id=user_id)
52 | context = "Relevant past information:\n"
53 | if relevant_memories and "results" in relevant_memories:
54 | for mem in relevant_memories["results"]:  # "mem", not "memory": don't shadow the Memory instance
55 | if "memory" in mem:
56 | context += f"- {mem['memory']}\n"
57 |
58 | full_prompt = f"{context}\nHuman: {prompt}\nAI:"
59 |
60 | if llm_choice == 'OpenAI GPT-4o':
61 | response = client.chat.completions.create(
62 | model="gpt-4o",
63 | messages=[
64 | {"role": "system", "content": "You are a helpful assistant with access to past conversations."},
65 | {"role": "user", "content": full_prompt}
66 | ]
67 | )
68 | answer = response.choices[0].message.content
69 | elif llm_choice == 'Claude Sonnet 3.5':
70 | messages=[
71 | {"role": "system", "content": "You are a helpful assistant with access to past conversations."},
72 | {"role": "user", "content": full_prompt}
73 | ]
74 | response = completion(model="claude-3-5-sonnet-20240620", messages=messages)
75 | answer = response.choices[0].message.content
76 | st.write("Answer: ", answer)
77 |
78 | memory.add(answer, user_id=user_id)
79 |
80 | # Sidebar option to show memory
81 | st.sidebar.title("Memory Info")
82 | if st.sidebar.button("View My Memory"):
83 | memories = memory.get_all(user_id=user_id)
84 | if memories and "results" in memories:
85 | st.sidebar.write(f"Memory history for **{user_id}**:")
86 | for mem in memories["results"]:
87 | if "memory" in mem:
88 | st.sidebar.write(f"- {mem['memory']}")
89 | else:
90 | st.sidebar.info("No learning history found for this user ID.")
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/multi_llm_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
4 | litellm
--------------------------------------------------------------------------------
/llm_finetuning_tutorials/llama3.2_finetuning/README.md:
--------------------------------------------------------------------------------
1 | ## 🦙 Finetune Llama 3.2 in 30 Lines of Python
2 |
3 | This script demonstrates how to finetune the Llama 3.2 model using the [Unsloth](https://unsloth.ai/) library, which makes the process easy and fast. You can run this example to finetune the Llama 3.2 1B and 3B models for free in Google Colab.
4 |
5 | ### Features
6 |
7 | - Finetunes Llama 3.2 model using the Unsloth library
8 | - Implements Low-Rank Adaptation (LoRA) for efficient finetuning
9 | - Uses the FineTome-100k dataset for training
10 | - Configurable for different model sizes (1B and 3B)
11 |
12 | ### Installation
13 |
14 | 1. Clone the repository:
15 |
16 | ```bash
17 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
18 | cd llm_finetuning_tutorials/llama3.2_finetuning
19 | ```
20 |
21 | 2. Install the required dependencies:
22 |
23 | ```bash
24 | pip install -r requirements.txt
25 | ```
26 |
27 | ## Usage
28 |
29 | 1. Open the script in Google Colab or your preferred Python environment.
30 |
31 | 2. Run the script to start the finetuning process:
32 |
33 | ```bash
34 | # Run the entire script
35 | python finetune_llama3.2.py
36 | ```
37 |
38 | 3. The finetuned model will be saved in the "finetuned_model" directory.
39 |
40 | ## How it Works
41 |
42 | 1. **Model Loading**: The script loads the Llama 3.2 3B Instruct model using Unsloth's FastLanguageModel.
43 |
44 | 2. **LoRA Setup**: Low-Rank Adaptation is applied to specific layers of the model for efficient finetuning.
45 |
46 | 3. **Data Preparation**: The FineTome-100k dataset is loaded and preprocessed using a chat template.
47 |
48 | 4. **Training Configuration**: The script sets up the SFTTrainer with specific training arguments.
49 |
50 | 5. **Finetuning**: The model is finetuned on the prepared dataset.
51 |
52 | 6. **Model Saving**: The finetuned model is saved to disk.
53 |
54 | ## Configuration
55 |
56 | You can modify the following parameters in the script (a short sketch follows the list):
57 |
58 | - `model_name`: Change to "unsloth/Llama-3.2-1B-Instruct" for the 1B model
59 | - `max_seq_length`: Adjust the maximum sequence length
60 | - `r`: LoRA rank
61 | - Training hyperparameters in `TrainingArguments`
62 |
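A minimal sketch of those tweaks, based on the script in this directory (only the 1B checkpoint name comes from the list above; everything else mirrors `finetune_llama3.2.py`):

```python
from unsloth import FastLanguageModel

# Swap in the 1B checkpoint and stretch the context window
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Raise the LoRA rank for more trainable parameters (at higher memory cost)
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```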
63 | ## Customization
64 |
65 | - To use a different dataset, replace the `load_dataset` call with your desired dataset (see the sketch after this list).
66 | - Adjust the `target_modules` in the LoRA setup to finetune different layers of the model.
67 | - Modify the chat template in `get_chat_template` if you're using a different conversational format.
68 |
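For example, swapping in a different dataset could look like this; the dataset id is hypothetical, and it is assumed to use the same ShareGPT-style `conversations` schema as FineTome-100k:

```python
from datasets import load_dataset

# Hypothetical dataset id; records must carry ShareGPT-style "conversations"
dataset = load_dataset("your-username/your-chat-dataset", split="train")
```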
69 | ## Running on Google Colab
70 |
71 | 1. Open a new Google Colab notebook.
72 | 2. Copy the entire script into a code cell.
73 | 3. Add a cell at the beginning to install the required libraries:
74 |
75 | ```
76 | !pip install torch transformers datasets trl unsloth
77 | ```
78 |
79 | 4. Run the cells to start the finetuning process.
80 |
81 | ## Notes
82 |
83 | - This script is optimized for running on Google Colab's free tier, which provides access to GPUs.
84 | - The finetuning process may take some time, depending on the model size and the available computational resources.
85 | - Make sure you have enough storage space in your Colab instance to save the finetuned model.
--------------------------------------------------------------------------------
/llm_finetuning_tutorials/llama3.2_finetuning/finetune_llama3.2.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from unsloth import FastLanguageModel
3 | from datasets import load_dataset
4 | from trl import SFTTrainer
5 | from transformers import TrainingArguments
6 | from unsloth.chat_templates import get_chat_template, standardize_sharegpt
7 |
8 | # Load model and tokenizer
9 | model, tokenizer = FastLanguageModel.from_pretrained(
10 | model_name="unsloth/Llama-3.2-3B-Instruct",
11 | max_seq_length=2048, load_in_4bit=True,
12 | )
13 |
14 | # Add LoRA adapters
15 | model = FastLanguageModel.get_peft_model(
16 | model, r=16,
17 | target_modules=[
18 | "q_proj", "k_proj", "v_proj", "o_proj",
19 | "gate_proj", "up_proj", "down_proj"
20 | ],
21 | )
22 |
23 | # Set up chat template and prepare dataset
24 | tokenizer = get_chat_template(tokenizer, chat_template="llama-3.1")
25 | dataset = load_dataset("mlabonne/FineTome-100k", split="train")
26 | dataset = standardize_sharegpt(dataset)
27 | dataset = dataset.map(
28 | lambda examples: {
29 | "text": [
30 | tokenizer.apply_chat_template(convo, tokenize=False)
31 | for convo in examples["conversations"]
32 | ]
33 | },
34 | batched=True
35 | )
36 |
37 | # Set up trainer
38 | trainer = SFTTrainer(
39 | model=model,
40 | train_dataset=dataset,
41 | dataset_text_field="text",
42 | max_seq_length=2048,
43 | args=TrainingArguments(
44 | per_device_train_batch_size=2,
45 | gradient_accumulation_steps=4,
46 | warmup_steps=5,
47 | max_steps=60,
48 | learning_rate=2e-4,
49 | fp16=not torch.cuda.is_bf16_supported(),
50 | bf16=torch.cuda.is_bf16_supported(),
51 | logging_steps=1,
52 | output_dir="outputs",
53 | ),
54 | )
55 |
56 | # Train the model
57 | trainer.train()
58 |
59 | # Save the finetuned model
60 | model.save_pretrained("finetuned_model")
--------------------------------------------------------------------------------
/llm_finetuning_tutorials/llama3.2_finetuning/requirements.txt:
--------------------------------------------------------------------------------
1 | torch
2 | unsloth
3 | transformers
4 | datasets
5 | trl
--------------------------------------------------------------------------------
/rag_tutorials/agentic_rag/README.md:
--------------------------------------------------------------------------------
1 | ## 🗃️ AI RAG Agent with Web Access
2 | This script demonstrates how to build a Retrieval-Augmented Generation (RAG) agent with web access using GPT-4o in just 15 lines of Python code. The agent uses a PDF knowledge base and has the ability to search the web using DuckDuckGo.
3 |
4 | ### Features
5 |
6 | - Creates a RAG agent using GPT-4o
7 | - Incorporates a PDF-based knowledge base
8 | - Uses LanceDB as the vector database for efficient similarity search
9 | - Includes web search capability through DuckDuckGo
10 | - Provides a playground interface for easy interaction
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Get your OpenAI API Key
26 |
27 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
28 | - Set your OpenAI API key as an environment variable:
29 | ```bash
30 | export OPENAI_API_KEY='your-api-key-here'
31 | ```
32 |
33 | 4. Run the AI RAG Agent
34 | ```bash
35 | python3 rag_agent.py
36 | ```
37 | 5. Open your web browser and navigate to the URL provided in the console output to interact with the RAG agent through the playground interface.
38 |
39 | ### How it works?
40 |
41 | 1. **Knowledge Base Creation:** The script creates a knowledge base from a PDF file hosted online.
42 | 2. **Vector Database Setup:** LanceDB is used as the vector database for efficient similarity search within the knowledge base.
43 | 3. **Agent Configuration:** An AI agent is created using GPT-4o as the underlying model, with the PDF knowledge base and DuckDuckGo search tool.
44 | 4. **Playground Setup:** A playground interface is set up for easy interaction with the RAG agent.
45 |
46 |
--------------------------------------------------------------------------------
/rag_tutorials/agentic_rag/rag_agent.py:
--------------------------------------------------------------------------------
1 | from phi.agent import Agent
2 | from phi.model.openai import OpenAIChat
3 | from phi.knowledge.pdf import PDFUrlKnowledgeBase
4 | from phi.vectordb.lancedb import LanceDb, SearchType
5 | from phi.playground import Playground, serve_playground_app
6 | from phi.tools.duckduckgo import DuckDuckGo
7 |
8 | db_uri = "tmp/lancedb"
9 | # Create a knowledge base from a PDF
10 | knowledge_base = PDFUrlKnowledgeBase(
11 | urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
12 | # Use LanceDB as the vector database
13 | vector_db=LanceDb(table_name="recipes", uri=db_uri, search_type=SearchType.vector),
14 | )
15 | # Load the knowledge base: Comment out after first run
16 | knowledge_base.load(upsert=True)
17 |
18 | rag_agent = Agent(
19 | model=OpenAIChat(id="gpt-4o"),
20 | agent_id="rag-agent",
21 | knowledge=knowledge_base, # Add the knowledge base to the agent
22 | tools=[DuckDuckGo()],
23 | show_tool_calls=True,
24 | markdown=True,
25 | )
26 |
27 | app = Playground(agents=[rag_agent]).get_app()
28 |
29 | if __name__ == "__main__":
30 | serve_playground_app("rag_agent:app", reload=True)
--------------------------------------------------------------------------------
/rag_tutorials/agentic_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | phidata
2 | openai
3 | lancedb
4 | tantivy
5 | pypdf
6 | sqlalchemy
7 | pgvector
8 | psycopg[binary]
9 |
--------------------------------------------------------------------------------
/rag_tutorials/autonomous_rag/README.md:
--------------------------------------------------------------------------------
1 | ## 🤖 AutoRAG: Autonomous RAG with GPT-4o and Vector Database
2 | This Streamlit application implements an Autonomous Retrieval-Augmented Generation (RAG) system using OpenAI's GPT-4o model and PgVector database. It allows users to upload PDF documents, add them to a knowledge base, and query the AI assistant with context from both the knowledge base and web searches.
3 |
4 |
5 | ### Features
6 | - Chat interface for interacting with the AI assistant
7 | - PDF document upload and processing
8 | - Knowledge base integration using PostgreSQL and Pgvector
9 | - Web search capability using DuckDuckGo
10 | - Persistent storage of assistant data and conversations
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Ensure PgVector Database is running:
26 | The app expects PgVector to be running on localhost:5532; adjust the configuration in the code if your setup differs. A connection sanity check follows the docker command.
27 |
28 | ```bash
29 | docker run -d \
30 | -e POSTGRES_DB=ai \
31 | -e POSTGRES_USER=ai \
32 | -e POSTGRES_PASSWORD=ai \
33 | -e PGDATA=/var/lib/postgresql/data/pgdata \
34 | -v pgvolume:/var/lib/postgresql/data \
35 | -p 5532:5432 \
36 | --name pgvector \
37 | phidata/pgvector:16
38 | ```
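
A quick way to confirm the container is reachable before launching the app; this snippet simply mirrors the `DB_URL` defined in `autorag.py` and is not part of the app itself:

```python
from sqlalchemy import create_engine, text

# Same connection string as DB_URL in autorag.py
engine = create_engine("postgresql+psycopg://ai:ai@localhost:5532/ai")
with engine.connect() as conn:
    # Prints the PostgreSQL version if the pgvector container is up
    print(conn.execute(text("select version()")).scalar())
```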
39 |
40 | 4. Run the Streamlit App
41 | ```bash
42 | streamlit run autorag.py
43 | ```
44 |
--------------------------------------------------------------------------------
/rag_tutorials/autonomous_rag/autorag.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import nest_asyncio
3 | from io import BytesIO
4 | from phi.assistant import Assistant
5 | from phi.document.reader.pdf import PDFReader
6 | from phi.llm.openai import OpenAIChat
7 | from phi.knowledge import AssistantKnowledge
8 | from phi.tools.duckduckgo import DuckDuckGo
9 | from phi.embedder.openai import OpenAIEmbedder
10 | from phi.vectordb.pgvector import PgVector2
11 | from phi.storage.assistant.postgres import PgAssistantStorage
12 |
13 | # Apply nest_asyncio to allow nested event loops, required for running async functions in Streamlit
14 | nest_asyncio.apply()
15 |
16 | # Database connection string for PostgreSQL
17 | DB_URL = "postgresql+psycopg://ai:ai@localhost:5532/ai"
18 |
19 | # Function to set up the Assistant, utilizing caching for resource efficiency
20 | @st.cache_resource
21 | def setup_assistant(api_key: str) -> Assistant:
22 | llm = OpenAIChat(model="gpt-4o-mini", api_key=api_key)
23 | # Set up the Assistant with storage, knowledge base, and tools
24 | return Assistant(
25 | name="auto_rag_assistant", # Name of the Assistant
26 | llm=llm, # Language model to be used
27 | storage=PgAssistantStorage(table_name="auto_rag_storage", db_url=DB_URL),
28 | knowledge_base=AssistantKnowledge(
29 | vector_db=PgVector2(
30 | db_url=DB_URL,
31 | collection="auto_rag_docs",
32 | embedder=OpenAIEmbedder(model="text-embedding-ada-002", dimensions=1536, api_key=api_key),
33 | ),
34 | num_documents=3,
35 | ),
36 | tools=[DuckDuckGo()], # Additional tool for web search via DuckDuckGo
37 | instructions=[
38 | "Search your knowledge base first.",
39 | "If not found, search the internet.",
40 | "Provide clear and concise answers.",
41 | ],
42 | show_tool_calls=True,
43 | search_knowledge=True,
44 | read_chat_history=True,
45 | markdown=True,
46 | debug_mode=True,
47 | )
48 |
49 | # Function to add a PDF document to the knowledge base
50 | def add_document(assistant: Assistant, file: BytesIO):
51 | reader = PDFReader()
52 | docs = reader.read(file)
53 | if docs:
54 | assistant.knowledge_base.load_documents(docs, upsert=True)
55 | st.success("Document added to the knowledge base.")
56 | else:
57 | st.error("Failed to read the document.")
58 |
59 | # Function to query the Assistant and return a response
60 | def query_assistant(assistant: Assistant, question: str) -> str:
61 | return "".join([delta for delta in assistant.run(question)])
62 |
63 | # Main function to handle Streamlit app layout and interactions
64 | def main():
65 | st.set_page_config(page_title="AutoRAG", layout="wide")
66 | st.title("🤖 Auto-RAG: Autonomous RAG with GPT-4o")
67 |
68 | api_key = st.sidebar.text_input("Enter your OpenAI API Key 🔑", type="password")
69 |
70 | if not api_key:
71 | st.sidebar.warning("Enter your OpenAI API Key to proceed.")
72 | st.stop()
73 |
74 | assistant = setup_assistant(api_key)
75 |
76 | uploaded_file = st.sidebar.file_uploader("📄 Upload PDF", type=["pdf"])
77 |
78 | if uploaded_file and st.sidebar.button("🛠️ Add to Knowledge Base"):
79 | add_document(assistant, BytesIO(uploaded_file.read()))
80 |
81 | question = st.text_input("💬 Ask Your Question:")
82 |
83 | # When the user submits a question, query the assistant for an answer
84 | if st.button("🔍 Get Answer"):
85 | # Ensure the question is not empty
86 | if question.strip():
87 | with st.spinner("🤔 Thinking..."):
88 | # Query the assistant and display the response
89 | answer = query_assistant(assistant, question)
90 | st.write("📝 **Response:**", answer)
91 | else:
92 | # Show an error if the question input is empty
93 | st.error("Please enter a question.")
94 |
95 | # Entry point of the application
96 | if __name__ == "__main__":
97 | main()
98 |
--------------------------------------------------------------------------------
/rag_tutorials/autonomous_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | phidata
3 | openai
4 | psycopg[binary]
5 | pgvector
6 | requests
7 | sqlalchemy
8 | pypdf
9 | duckduckgo-search
--------------------------------------------------------------------------------
/rag_tutorials/hybrid_search_rag/README.md:
--------------------------------------------------------------------------------
1 | # 👀 RAG App with Hybrid Search
2 |
3 | A powerful document Q&A application that pairs hybrid-search Retrieval-Augmented Generation (RAG) with Claude's advanced language capabilities. Built with RAGLite for robust document processing and retrieval, and Streamlit for an intuitive chat interface, the system combines document-specific knowledge with Claude's general intelligence to deliver accurate, contextual responses.
4 |
5 | ## Demo video:
6 |
7 |
8 | https://github.com/user-attachments/assets/b576bf6e-4a48-4a43-9600-48bcc8f359a5
9 |
10 |
11 | ## Features
12 |
13 | - **Hybrid Search Question Answering**
14 | - RAG-based answers for document-specific queries
15 | - Fallback to Claude for general knowledge questions
16 |
17 | - **Document Processing**:
18 | - PDF document upload and processing
19 | - Automatic text chunking and embedding
20 | - Hybrid search combining semantic and keyword matching
21 | - Reranking for better context selection
22 |
23 | - **Multi-Model Integration** (sketched after this list):
24 | - Claude for text generation - tested with Claude 3 Opus
25 | - OpenAI for embeddings - tested with text-embedding-3-large
26 | - Cohere for reranking - tested with Cohere 3.5 reranker
27 |
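A minimal sketch of how these models could be wired together, assuming RAGLite's `RAGLiteConfig` with LiteLLM-style model identifiers (the Cohere reranker hookup is omitted); the actual `main.py` may differ in details:

```python
from raglite import RAGLiteConfig

config = RAGLiteConfig(
    # Neon connection string from the Prerequisites section (placeholder values)
    db_url="postgresql://user:pass@ep-xyz.region.aws.neon.tech/dbname",
    llm="claude-3-opus-20240229",       # Claude for text generation
    embedder="text-embedding-3-large",  # OpenAI for embeddings
)
```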
28 | ## Prerequisites
29 |
30 | You'll need the following API keys and database setup:
31 |
32 | 1. **Database**: Create a free PostgreSQL database at [Neon](https://neon.tech):
33 | - Sign up/Login at Neon
34 | - Create a new project
35 | - Copy the connection string (looks like: `postgresql://user:pass@ep-xyz.region.aws.neon.tech/dbname`)
36 |
37 | 2. **API Keys**:
38 | - [OpenAI API key](https://platform.openai.com/api-keys) for embeddings
39 | - [Anthropic API key](https://console.anthropic.com/settings/keys) for Claude
40 | - [Cohere API key](https://dashboard.cohere.com/api-keys) for reranking
41 |
42 | ## How to get Started?
43 |
44 | 1. **Clone the Repository**:
45 | ```bash
46 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
47 | cd rag_tutorials/hybrid_search_rag
48 | ```
49 |
50 | 2. **Install Dependencies**:
51 | ```bash
52 | pip install -r requirements.txt
53 | ```
54 |
55 | 3. **Install spaCy Model**:
56 | ```bash
57 | pip install https://github.com/explosion/spacy-models/releases/download/xx_sent_ud_sm-3.7.0/xx_sent_ud_sm-3.7.0-py3-none-any.whl
58 | ```
59 |
60 | 4. **Run the Application**:
61 | ```bash
62 | streamlit run main.py
63 | ```
64 |
65 | ## Usage
66 |
67 | 1. Start the application
68 | 2. Enter your API keys in the sidebar:
69 | - OpenAI API key
70 | - Anthropic API key
71 | - Cohere API key
72 | - Database URL (optional, defaults to SQLite)
73 | 3. Click "Save Configuration"
74 | 4. Upload PDF documents
75 | 5. Start asking questions!
76 | - Document-specific questions will use RAG
77 | - General questions will use Claude directly
78 |
79 | ## Database Options
80 |
81 | The application supports multiple database backends:
82 |
83 | - **PostgreSQL** (Recommended):
84 | - Create a free serverless PostgreSQL database at [Neon](https://neon.tech)
85 | - Get instant provisioning and scale-to-zero capability
86 | - Connection string format: `postgresql://user:pass@ep-xyz.region.aws.neon.tech/dbname`
87 |
88 | - **MySQL**:
89 | ```
90 | mysql://user:pass@host:port/db
91 | ```
92 | - **SQLite** (Local development):
93 | ```
94 | sqlite:///path/to/db.sqlite
95 | ```
96 |
97 | ## Contributing
98 |
99 | Contributions are welcome! Please feel free to submit a Pull Request.
100 |
--------------------------------------------------------------------------------
/rag_tutorials/hybrid_search_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | raglite==0.2.1
2 | pydantic==2.10.1
3 | sqlalchemy>=2.0.0
4 | psycopg2-binary>=2.9.9
5 | openai>=1.0.0
6 | cohere>=4.37
7 | pypdf>=3.0.0
8 | python-dotenv>=1.0.0
9 | rerankers==0.6.0
10 | spacy>=3.7.0
11 | streamlit
12 | anthropic
13 |
--------------------------------------------------------------------------------
/rag_tutorials/llama3.1_local_rag/README.md:
--------------------------------------------------------------------------------
1 | ## 💻 Local Llama-3.1 with RAG
2 | Streamlit app that allows you to chat with any webpage using local Llama-3.1 and Retrieval-Augmented Generation (RAG). The model runs entirely on your computer, making inference 100% free, with no API keys required.
3 |
4 |
5 | ### Features
6 | - Input a webpage URL
7 | - Ask questions about the content of the webpage
8 | - Get accurate answers using RAG and the Llama-3.1 model running locally on your computer
9 |
10 | ### How to get Started?
11 |
12 | 1. Clone the GitHub repository
13 |
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Run the Streamlit App
23 | ```bash
24 | streamlit run llama3.1_local_rag.py
25 | ```
26 |
27 | ### How it Works?
28 |
29 | - The app loads the webpage data using WebBaseLoader and splits it into chunks using RecursiveCharacterTextSplitter.
30 | - It creates Ollama embeddings and a vector store using Chroma.
31 | - The app sets up a RAG (Retrieval-Augmented Generation) chain, which retrieves relevant documents based on the user's question.
32 | - The Llama-3.1 model is called to generate an answer using the retrieved context.
33 | - The app displays the answer to the user's question.
34 |
35 |
--------------------------------------------------------------------------------
/rag_tutorials/llama3.1_local_rag/llama3.1_local_rag.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import ollama
3 | from langchain.text_splitter import RecursiveCharacterTextSplitter
4 | from langchain_community.document_loaders import WebBaseLoader
5 | from langchain_community.vectorstores import Chroma
6 | from langchain_community.embeddings import OllamaEmbeddings
7 |
8 | st.title("Chat with Webpage 🌐")
9 | st.caption("This app allows you to chat with a webpage using local Llama 3.1 and RAG")
10 |
11 | # Get the webpage URL from the user
12 | webpage_url = st.text_input("Enter Webpage URL", type="default")
13 |
14 | if webpage_url:
15 | # 1. Load the data
16 | loader = WebBaseLoader(webpage_url)
17 | docs = loader.load()
18 | text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=10)
19 | splits = text_splitter.split_documents(docs)
20 |
21 | # 2. Create Ollama embeddings and vector store
22 | embeddings = OllamaEmbeddings(model="llama3.1")
23 | vectorstore = Chroma.from_documents(documents=splits, embedding=embeddings)
24 |
25 | # 3. Call Ollama Llama3 model
26 | def ollama_llm(question, context):
27 | formatted_prompt = f"Question: {question}\n\nContext: {context}"
28 | response = ollama.chat(model='llama3.1', messages=[{'role': 'user', 'content': formatted_prompt}])
29 | return response['message']['content']
30 |
31 | # 4. RAG Setup
32 | retriever = vectorstore.as_retriever()
33 |
34 | def combine_docs(docs):
35 | return "\n\n".join(doc.page_content for doc in docs)
36 |
37 | def rag_chain(question):
38 | retrieved_docs = retriever.invoke(question)
39 | formatted_context = combine_docs(retrieved_docs)
40 | return ollama_llm(question, formatted_context)
41 |
42 | st.success(f"Loaded {webpage_url} successfully!")
43 |
44 | # Ask a question about the webpage
45 | prompt = st.text_input("Ask any question about the webpage")
46 |
47 | # Chat with the webpage
48 | if prompt:
49 | result = rag_chain(prompt)
50 | st.write(result)
--------------------------------------------------------------------------------
/rag_tutorials/llama3.1_local_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | ollama
3 | langchain
4 | langchain_community
--------------------------------------------------------------------------------
/rag_tutorials/local_hybrid_search_rag/README.md:
--------------------------------------------------------------------------------
1 | # 🖥️ Local RAG App with Hybrid Search
2 |
3 | A powerful document Q&A application that pairs hybrid-search Retrieval-Augmented Generation (RAG) with local LLMs. Built with RAGLite for robust document processing and retrieval, and Streamlit for an intuitive chat interface, the system combines document-specific knowledge with local LLM capabilities to deliver accurate, contextual responses.
4 |
5 | ## Demo:
6 |
7 |
8 | https://github.com/user-attachments/assets/375da089-1ab9-4bf4-b6f3-733f44e47403
9 |
10 |
11 | ## Quick Start
12 |
13 | For immediate testing, use these tested model configurations:
14 | ```bash
15 | # LLM Model
16 | bartowski/Llama-3.2-3B-Instruct-GGUF/Llama-3.2-3B-Instruct-Q4_K_M.gguf@4096
17 |
18 | # Embedder Model
19 | lm-kit/bge-m3-gguf/bge-m3-Q4_K_M.gguf@1024
20 | ```
21 | These models offer a good balance of performance and resource usage, and have been verified to work well together even on a MacBook Air M2 with 8GB RAM.
22 |
23 | ## Features
24 |
25 | - **Local LLM Integration**:
26 | - Uses llama-cpp-python models for local inference
27 | - Supports various quantization formats (Q4_K_M recommended)
28 | - Configurable context window sizes
29 |
30 | - **Document Processing**:
31 | - PDF document upload and processing
32 | - Automatic text chunking and embedding
33 | - Hybrid search combining semantic and keyword matching
34 | - Reranking for better context selection
35 |
36 | - **Multi-Model Integration**:
37 | - Local LLM for text generation (e.g., Llama-3.2-3B-Instruct)
38 | - Local embeddings using BGE models
39 | - FlashRank for local reranking
40 |
41 | ## Prerequisites
42 |
43 | 1. **Install spaCy Model**:
44 | ```bash
45 | pip install https://github.com/explosion/spacy-models/releases/download/xx_sent_ud_sm-3.7.0/xx_sent_ud_sm-3.7.0-py3-none-any.whl
46 | ```
47 |
48 | 2. **Install Accelerated llama-cpp-python** (Optional but recommended):
49 | ```bash
50 | # Configure installation variables
51 | LLAMA_CPP_PYTHON_VERSION=0.3.2
52 | PYTHON_VERSION=310 # 3.10, 3.11, 3.12
53 | ACCELERATOR=metal # For Mac
54 | # ACCELERATOR=cu121 # For NVIDIA GPU
55 | PLATFORM=macosx_11_0_arm64 # For Mac
56 | # PLATFORM=linux_x86_64 # For Linux
57 | # PLATFORM=win_amd64 # For Windows
58 |
59 | # Install accelerated version
60 | pip install "https://github.com/abetlen/llama-cpp-python/releases/download/v$LLAMA_CPP_PYTHON_VERSION-$ACCELERATOR/llama_cpp_python-$LLAMA_CPP_PYTHON_VERSION-cp$PYTHON_VERSION-cp$PYTHON_VERSION-$PLATFORM.whl"
61 | ```
62 |
63 | 3. **Install Dependencies**:
64 | ```bash
65 | pip install -r requirements.txt
66 | ```
67 |
68 | ## Model Setup
69 |
70 | RAGLite extends LiteLLM with support for llama.cpp models using llama-cpp-python. To select a llama.cpp model (e.g., from bartowski's collection), use a model identifier of the form "llama-cpp-python/<hf_user>/<hf_repo>/<filename>@<n_ctx>", where n_ctx is an optional suffix that specifies the context size of the model (a configuration sketch follows the examples below).
71 |
72 | 1. **LLM Model Path Format**:
73 | ```
74 | llama-cpp-python/<hf_user>/<hf_repo>/<filename>@<n_ctx>
75 | ```
76 | Example:
77 | ```
78 | bartowski/Llama-3.2-3B-Instruct-GGUF/Llama-3.2-3B-Instruct-Q4_K_M.gguf@4096
79 | ```
80 |
81 | 2. **Embedder Model Path Format**:
82 | ```
83 | llama-cpp-python/<hf_user>/<hf_repo>/<filename>@<n_ctx>
84 | ```
85 | Example:
86 | ```
87 | lm-kit/bge-m3-gguf/bge-m3-Q4_K_M.gguf@1024
88 | ```
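
Putting the two identifiers together, a configuration sketch, assuming RAGLite's `RAGLiteConfig` accepts these strings as `llm` and `embedder` (note the `llama-cpp-python/` prefix on the full identifier):

```python
from raglite import RAGLiteConfig

config = RAGLiteConfig(
    db_url="sqlite:///raglite.sqlite",  # or the Neon URL from Database Setup below
    llm="llama-cpp-python/bartowski/Llama-3.2-3B-Instruct-GGUF/Llama-3.2-3B-Instruct-Q4_K_M.gguf@4096",
    embedder="llama-cpp-python/lm-kit/bge-m3-gguf/bge-m3-Q4_K_M.gguf@1024",
)
```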
89 |
90 | ## Database Setup
91 |
92 | The application supports multiple database backends:
93 |
94 | - **PostgreSQL** (Recommended):
95 | - Create a free serverless PostgreSQL database at [Neon](https://neon.tech) in a few clicks
96 | - Get instant provisioning and scale-to-zero capability
97 | - Connection string format: `postgresql://user:pass@ep-xyz.region.aws.neon.tech/dbname`
98 |
99 |
100 | ## How to Run
101 |
102 | 1. **Start the Application**:
103 | ```bash
104 | streamlit run local_main.py
105 | ```
106 |
107 | 2. **Configure the Application**:
108 | - Enter LLM model path
109 | - Enter embedder model path
110 | - Set database URL
111 | - Click "Save Configuration"
112 |
113 | 3. **Upload Documents**:
114 | - Upload PDF files through the interface
115 | - Wait for processing completion
116 |
117 | 4. **Start Chatting**:
118 | - Ask questions about your documents
119 | - Get responses using local LLM
120 | - Fallback to general knowledge when needed
121 |
122 | ## Notes
123 |
124 | - Context window size of 4096 is recommended for most use cases
125 | - Q4_K_M quantization offers good balance of speed and quality
126 | - BGE-M3 embedder with 1024 dimensions is optimal
127 | - Local models require sufficient RAM and CPU/GPU resources
128 | - Metal acceleration available for Mac, CUDA for NVIDIA GPUs
129 |
130 | ## Contributing
131 |
132 | Contributions are welcome! Please feel free to submit a Pull Request.
133 |
--------------------------------------------------------------------------------
/rag_tutorials/local_hybrid_search_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | raglite==0.2.1
2 | llama-cpp-python>=0.2.56
3 | sentence-transformers>=2.5.1
4 | pydantic==2.10.1
5 | sqlalchemy>=2.0.0
6 | psycopg2-binary>=2.9.9
7 | pypdf>=3.0.0
8 | python-dotenv>=1.0.0
9 | rerankers==0.6.0
10 | spacy>=3.7.0
11 | streamlit>=1.31.0
12 | flashrank==0.2.9
13 | numpy>=1.24.0
14 | pandas>=2.0.0
15 | tqdm>=4.66.0
16 |
--------------------------------------------------------------------------------
/rag_tutorials/local_rag_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🦙 Local RAG Agent with Llama 3.2
2 | This application implements a Retrieval-Augmented Generation (RAG) system using Llama 3.2 via Ollama, with Qdrant as the vector database.
3 |
4 |
5 | ### Features
6 | - Fully local RAG implementation
7 | - Powered by Llama 3.2 through Ollama
8 | - Vector search using Qdrant
9 | - Interactive playground interface
10 | - No external API dependencies
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | cd rag_tutorials/local_rag_agent
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Install and start [Qdrant](https://qdrant.tech/) vector database locally
27 |
28 | ```bash
29 | docker pull qdrant/qdrant
30 | docker run -p 6333:6333 qdrant/qdrant
31 | ```
32 |
33 | 4. Install [Ollama](https://ollama.com/download), then pull Llama 3.2 (the LLM) and OpenHermes (the embedding model used by OllamaEmbedder)
34 | ```bash
35 | ollama pull llama3.2
36 | ollama pull openhermes
37 | ```
38 |
39 | 5. Run the AI RAG Agent
40 | ```bash
41 | python local_rag_agent.py
42 | ```
43 |
44 | 6. Open your web browser and navigate to the URL provided in the console output to interact with the RAG agent through the playground interface.
45 |
46 |
47 |
--------------------------------------------------------------------------------
/rag_tutorials/local_rag_agent/local_rag_agent.py:
--------------------------------------------------------------------------------
1 | # Import necessary libraries
2 | from phi.agent import Agent
3 | from phi.model.ollama import Ollama
4 | from phi.knowledge.pdf import PDFUrlKnowledgeBase
5 | from phi.vectordb.qdrant import Qdrant
6 | from phi.embedder.ollama import OllamaEmbedder
7 | from phi.playground import Playground, serve_playground_app
8 |
9 | # Define the collection name for the vector database
10 | collection_name = "thai-recipe-index"
11 |
12 | # Set up Qdrant as the vector database with the embedder
13 | vector_db = Qdrant(
14 | collection=collection_name,
15 | url="http://localhost:6333/",
16 | embedder=OllamaEmbedder()
17 | )
18 |
19 | # Define the knowledge base with the specified PDF URL
20 | knowledge_base = PDFUrlKnowledgeBase(
21 | urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
22 | vector_db=vector_db,
23 | )
24 |
25 | # Load the knowledge base, comment out after the first run to avoid reloading
26 | knowledge_base.load(recreate=True, upsert=True)
27 |
28 | # Create the Agent using Ollama's llama3.2 model and the knowledge base
29 | agent = Agent(
30 | name="Local RAG Agent",
31 | model=Ollama(id="llama3.2"),
32 | knowledge=knowledge_base,
33 | )
34 |
35 | # UI for RAG agent
36 | app = Playground(agents=[agent]).get_app()
37 |
38 | # Run the Playground app
39 | if __name__ == "__main__":
40 | serve_playground_app("local_rag_agent:app", reload=True)
41 |
--------------------------------------------------------------------------------
/rag_tutorials/local_rag_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | phidata
2 | qdrant-client
3 | ollama
4 | pypdf
5 | openai
6 | fastapi
7 | uvicorn
--------------------------------------------------------------------------------
/rag_tutorials/rag-as-a-service/README.md:
--------------------------------------------------------------------------------
1 | ## 🖇️ RAG-as-a-Service with Claude 3.5 Sonnet
2 | Build and deploy a production-ready Retrieval-Augmented Generation (RAG) service using Claude 3.5 Sonnet and Ragie.ai. This implementation allows you to create a document querying system with a user-friendly Streamlit interface in less than 50 lines of Python code.
3 |
4 | ### Features
5 | - Production-ready RAG pipeline
6 | - Integration with Claude 3.5 Sonnet for response generation
7 | - Document upload from URLs
8 | - Real-time document querying
9 | - Support for both fast and accurate document processing modes
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd rag_tutorials/rag-as-a-service
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Get your Anthropic API and Ragie API Key
26 |
27 | - Sign up for an [Anthropic account](https://console.anthropic.com/) and get your API key
28 | - Sign up for a [Ragie account](https://www.ragie.ai/) and get your API key
29 |
30 | 4. Run the Streamlit app
31 | ```bash
32 | streamlit run rag_app.py
33 | ```
--------------------------------------------------------------------------------
/rag_tutorials/rag-as-a-service/rag_app.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import requests
3 | from anthropic import Anthropic
4 | import time
5 | from typing import List, Dict, Optional
6 | from urllib.parse import urlparse
7 |
8 | class RAGPipeline:
9 | def __init__(self, ragie_api_key: str, anthropic_api_key: str):
10 | """
11 | Initialize the RAG pipeline with API keys.
12 | """
13 | self.ragie_api_key = ragie_api_key
14 | self.anthropic_api_key = anthropic_api_key
15 | self.anthropic_client = Anthropic(api_key=anthropic_api_key)
16 |
17 | # API endpoints
18 | self.RAGIE_UPLOAD_URL = "https://api.ragie.ai/documents/url"
19 | self.RAGIE_RETRIEVAL_URL = "https://api.ragie.ai/retrievals"
20 |
21 | def upload_document(self, url: str, name: Optional[str] = None, mode: str = "fast") -> Dict:
22 | """
23 | Upload a document to Ragie from a URL.
24 | """
25 | if not name:
26 | name = urlparse(url).path.split('/')[-1] or "document"
27 |
28 | payload = {
29 | "mode": mode,
30 | "name": name,
31 | "url": url,
32 | "metadata": {"scope": "tutorial"},  # assumption: Ragie's create-document API accepts a metadata dict; tags uploads so retrieve_chunks' scope filter can match them
33 | }
33 |
34 | headers = {
35 | "accept": "application/json",
36 | "content-type": "application/json",
37 | "authorization": f"Bearer {self.ragie_api_key}"
38 | }
39 |
40 | response = requests.post(self.RAGIE_UPLOAD_URL, json=payload, headers=headers)
41 |
42 | if not response.ok:
43 | raise Exception(f"Document upload failed: {response.status_code} {response.reason}")
44 |
45 | return response.json()
46 |
47 | def retrieve_chunks(self, query: str, scope: str = "tutorial") -> List[str]:
48 | """
49 | Retrieve relevant chunks from Ragie for a given query.
50 | """
51 | headers = {
52 | "Content-Type": "application/json",
53 | "Authorization": f"Bearer {self.ragie_api_key}"
54 | }
55 |
56 | payload = {
57 | "query": query,
58 | "filters": {
59 | "scope": scope
60 | }
61 | }
62 |
63 | response = requests.post(
64 | self.RAGIE_RETRIEVAL_URL,
65 | headers=headers,
66 | json=payload
67 | )
68 |
69 | if not response.ok:
70 | raise Exception(f"Retrieval failed: {response.status_code} {response.reason}")
71 |
72 | data = response.json()
73 | return [chunk["text"] for chunk in data["scored_chunks"]]
74 |
75 | def create_system_prompt(self, chunk_texts: List[str]) -> str:
76 | """
77 | Create the system prompt with the retrieved chunks.
78 | """
79 | return f"""These are very important to follow: You are "Ragie AI", a professional but friendly AI chatbot working as an assistant to the user. Your current task is to help the user based on all of the information available to you shown below. Answer informally, directly, and concisely without a heading or greeting but include everything relevant. Use richtext Markdown when appropriate including bold, italic, paragraphs, and lists when helpful. If using LaTeX, use double $$ as delimiter instead of single $. Use $$...$$ instead of parentheses. Organize information into multiple sections or points when appropriate. Don't include raw item IDs or other raw fields from the source. Don't use XML or other markup unless requested by the user. Here is all of the information available to answer the user: === {chunk_texts} === If the user asked for a search and there are no results, make sure to let the user know that you couldn't find anything, and what they might be able to do to find the information they need. END SYSTEM INSTRUCTIONS"""
80 |
81 | def generate_response(self, system_prompt: str, query: str) -> str:
82 | """
83 | Generate response using Claude 3.5 Sonnet.
84 | """
85 | message = self.anthropic_client.messages.create(
86 | model="claude-3-sonnet-20240229",
87 | max_tokens=1024,
88 | system=system_prompt,
89 | messages=[
90 | {
91 | "role": "user",
92 | "content": query
93 | }
94 | ]
95 | )
96 |
97 | return message.content[0].text
98 |
99 | def process_query(self, query: str, scope: str = "tutorial") -> str:
100 | """
101 | Process a query through the complete RAG pipeline.
102 | """
103 | chunks = self.retrieve_chunks(query, scope)
104 |
105 | if not chunks:
106 | return "No relevant information found for your query."
107 |
108 | system_prompt = self.create_system_prompt(chunks)
109 | return self.generate_response(system_prompt, query)
110 |
111 | def initialize_session_state():
112 | """Initialize session state variables."""
113 | if 'pipeline' not in st.session_state:
114 | st.session_state.pipeline = None
115 | if 'document_uploaded' not in st.session_state:
116 | st.session_state.document_uploaded = False
117 | if 'api_keys_submitted' not in st.session_state:
118 | st.session_state.api_keys_submitted = False
119 |
120 | def main():
121 | st.set_page_config(page_title="RAG-as-a-Service", layout="wide")
122 | initialize_session_state()
123 |
124 | st.title(":linked_paperclips: RAG-as-a-Service")
125 |
126 | # API Keys Section
127 | with st.expander("🔑 API Keys Configuration", expanded=not st.session_state.api_keys_submitted):
128 | col1, col2 = st.columns(2)
129 | with col1:
130 | ragie_key = st.text_input("Ragie API Key", type="password", key="ragie_key")
131 | with col2:
132 | anthropic_key = st.text_input("Anthropic API Key", type="password", key="anthropic_key")
133 |
134 | if st.button("Submit API Keys"):
135 | if ragie_key and anthropic_key:
136 | try:
137 | st.session_state.pipeline = RAGPipeline(ragie_key, anthropic_key)
138 | st.session_state.api_keys_submitted = True
139 | st.success("API keys configured successfully!")
140 | except Exception as e:
141 | st.error(f"Error configuring API keys: {str(e)}")
142 | else:
143 | st.error("Please provide both API keys.")
144 |
145 | # Document Upload Section
146 | if st.session_state.api_keys_submitted:
147 | st.markdown("### 📄 Document Upload")
148 | doc_url = st.text_input("Enter document URL")
149 | doc_name = st.text_input("Document name (optional)")
150 |
151 | col1, col2 = st.columns([1, 3])
152 | with col1:
153 | upload_mode = st.selectbox("Upload mode", ["fast", "accurate"])
154 |
155 | if st.button("Upload Document"):
156 | if doc_url:
157 | try:
158 | with st.spinner("Uploading document..."):
159 | st.session_state.pipeline.upload_document(
160 | url=doc_url,
161 | name=doc_name if doc_name else None,
162 | mode=upload_mode
163 | )
164 | time.sleep(5) # Wait for indexing
165 | st.session_state.document_uploaded = True
166 | st.success("Document uploaded and indexed successfully!")
167 | except Exception as e:
168 | st.error(f"Error uploading document: {str(e)}")
169 | else:
170 | st.error("Please provide a document URL.")
171 |
172 | # Query Section
173 | if st.session_state.document_uploaded:
174 | st.markdown("### 🔍 Query Document")
175 | query = st.text_input("Enter your query")
176 |
177 | if st.button("Generate Response"):
178 | if query:
179 | try:
180 | with st.spinner("Generating response..."):
181 | response = st.session_state.pipeline.process_query(query)
182 | st.markdown("### Response:")
183 | st.markdown(response)
184 | except Exception as e:
185 | st.error(f"Error generating response: {str(e)}")
186 | else:
187 | st.error("Please enter a query.")
188 |
189 | if __name__ == "__main__":
190 | main()
--------------------------------------------------------------------------------
/rag_tutorials/rag-as-a-service/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | anthropic
3 | requests
4 |
--------------------------------------------------------------------------------
/rag_tutorials/rag_agent_cohere/README.md:
--------------------------------------------------------------------------------
1 | # RAG Agent with Cohere ⌘R
2 |
3 | An agentic RAG system built with Cohere's Command R7B model (command-r7b-12-2024), Qdrant for vector storage, LangChain for RAG, and LangGraph for orchestration. The application lets users upload documents, ask questions about them, and get AI-powered responses, with a fallback to web search when needed.
4 |
5 | ## Features
6 |
7 | - **Document Processing**
8 | - PDF document upload and processing
9 | - Automatic text chunking and embedding
10 | - Vector storage in Qdrant cloud
11 |
12 | - **Intelligent Querying**
13 | - RAG-based document retrieval
14 | - Similarity search with threshold filtering
15 | - Automatic fallback to web search when no relevant documents are found
16 | - Source attribution for answers
17 |
18 | - **Advanced Capabilities**
19 | - DuckDuckGo web search integration
20 | - LangGraph agent for web research
21 | - Context-aware response generation
22 | - Long answer summarization
23 |
24 | - **Model Specific Features**
25 | - Command-r7b-12-2024 model for Chat and RAG
26 | - cohere embed-english-v3.0 model for embeddings
27 | - create_react_agent function from langgraph
28 | - DuckDuckGoSearchRun tool for web search (see the sketch after this list)
29 |
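A hypothetical sketch of the web-research fallback described above; it uses the public langchain-cohere, langchain-community, and langgraph APIs, not the repo's actual `rag_agent_cohere.py`:

```python
from langchain_cohere import ChatCohere
from langchain_community.tools import DuckDuckGoSearchRun
from langgraph.prebuilt import create_react_agent

# ChatCohere reads COHERE_API_KEY from the environment
llm = ChatCohere(model="command-r7b-12-2024")

# ReAct-style agent that can call DuckDuckGo when documents don't suffice
web_agent = create_react_agent(llm, tools=[DuckDuckGoSearchRun()])

result = web_agent.invoke({"messages": [("user", "What's new in Qdrant?")]})
print(result["messages"][-1].content)
```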
30 | ## Prerequisites
31 |
32 | ### 1. Cohere API Key
33 | 1. Go to [Cohere Platform](https://dashboard.cohere.ai/api-keys)
34 | 2. Sign up or log in to your account
35 | 3. Navigate to API Keys section
36 | 4. Create a new API key
37 |
38 | ### 2. Qdrant Cloud Setup
39 | 1. Visit [Qdrant Cloud](https://cloud.qdrant.io/)
40 | 2. Create an account or sign in
41 | 3. Create a new cluster
42 | 4. Get your credentials:
43 | - Qdrant API Key: Found in API Keys section
44 | - Qdrant URL: Your cluster URL (format: `https://xxx-xxx.aws.cloud.qdrant.io`)
45 |
46 |
47 | ## How to Run
48 |
49 | 1. Clone the repository:
50 | ```bash
51 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
52 | cd rag_tutorials/rag_agent_cohere
53 | ```
54 |
55 | 2. Install dependencies:
56 | ```bash
57 | pip install -r requirements.txt
58 | ```
59 | 3. Run the Streamlit app:
60 | ```bash
61 | streamlit run rag_agent_cohere.py
62 | ```
63 |
64 |
65 |
--------------------------------------------------------------------------------
/rag_tutorials/rag_agent_cohere/requirements.txt:
--------------------------------------------------------------------------------
1 | langchain==0.3.12
2 | langchain-community==0.3.12
3 | langchain-core==0.3.25
4 | langchain-cohere==0.3.2
5 | langchain-qdrant==0.2.0
6 | cohere==5.11.4
7 | qdrant-client==1.12.1
8 | duckduckgo-search==6.4.1
9 | streamlit==1.40.2
10 | tenacity==9.0.0
11 | typing-extensions==4.12.2
12 | pydantic==2.9.2
13 | pydantic-core==2.23.4
14 | langgraph==0.2.53
--------------------------------------------------------------------------------