├── .gitignore
├── ai_agent_tutorials
├── ai_legal_agent_team
│ ├── local_ai_legal_agent_team
│ │ ├── README.md
│ │ └── requirements.txt
│ ├── requirements.txt
│ └── README.md
├── multi_agent_researcher
│ ├── requirements.txt
│ ├── research_agent_llama3.py
│ ├── research_agent.py
│ └── README.md
├── ai_game_design_agent_team
│ ├── requirements.txt
│ └── README.md
├── ai_customer_support_agent
│ ├── requirements.txt
│ └── README.md
├── ai_investment_agent
│ ├── requirements.txt
│ ├── investment_agent.py
│ └── README.md
├── ai_meeting_agent
│ ├── requirements.txt
│ └── README.md
├── ai_system_architect_r1
│ ├── requirements.txt
│ └── README.md
├── ai_3dpygame_r1
│ ├── requirements.txt
│ └── README.md
├── ai_travel_agent
│ ├── requirements.txt
│ └── README.MD
├── ai_coding_agent_o3-mini
│ ├── requirements.txt
│ └── README.md
├── ai_services_agency
│ ├── requirements.txt
│ └── README.md
├── multimodal_ai_agent
│ ├── requirements.txt
│ ├── README.md
│ ├── multimodal_reasoning_agent.py
│ └── mutimodal_agent.py
├── ai_health_fitness_agent
│ ├── requirements.txt
│ └── README.md
├── ai_personal_finance_agent
│ ├── requirements.txt
│ └── README.md
├── ai_real_estate_agent
│ ├── requirements.txt
│ └── README.md
├── local_news_agent_openai_swarm
│ ├── .env
│ ├── requirements.txt
│ └── README.md
├── ai_chess_agent
│ ├── requirements.txt
│ └── README.md
├── xai_finance_agent
│ ├── requirements.txt
│ ├── xai_finance_agent.py
│ └── README.md
├── ai_tic_tac_toe_agent
│ ├── requirements.txt
│ └── README.md
├── ai_movie_production_agent
│ ├── requirements.txt
│ └── README.md
├── ai_aqi_analysis_agent
│ ├── requirements.txt
│ └── README.md
├── ai_finance_agent_team
│ ├── requirements.txt
│ ├── finance_agent_team.py
│ └── README.md
├── ai_journalist_agent
│ ├── requirements.txt
│ └── README.md
├── ai_data_analysis_agent
│ ├── requirements.txt
│ └── README.md
├── ai_lead_generation_agent
│ ├── requirements.txt
│ └── README.md
├── ai_medical_imaging_agent
│ ├── requirements.txt
│ └── README.md
├── ai_competitor_intelligence_agent_team
│ ├── requirements.txt
│ └── README.md
├── ai_startup_trend_analysis_agent
│ ├── requirements.txt
│ └── README.md
├── multimodal_design_agent_team
│ ├── requirements.txt
│ └── README.md
├── ai_data_visualisation_agent
│ ├── requirements.txt
│ └── README.md
├── ai_meme_generator_agent_browseruse
│ ├── requirements.txt
│ └── README.md
├── ai_teaching_agent_team
│ ├── requirements.txt
│ └── README.md
├── ai_recruitment_agent_team
│ └── requirements.txt
├── ai_reasoning_agent
│ ├── local_ai_reasoning_agent.py
│ └── reasoning_agent.py
├── gemini_multimodal_agent_demo
│ └── multimodal_ai_agent.py
└── ai_tic_tac_toe_game
│ └── README.md
├── advanced_tools_frameworks
├── local_chatgpt_clone
│ ├── requirements.txt
│ ├── README.md
│ └── chatgpt_clone_llama3.py
├── multillm_chat_playground
│ ├── requirements.txt
│ └── multillm_playground.py
├── local_llama3.1_tool_use
│ ├── requirements.txt
│ └── README.md
├── mixture_of_agents
│ └── requirements.txt
├── llm_router_app
│ ├── requirements.txt
│ ├── llm_router.py
│ └── README.md
├── web_scrapping_ai_agent
│ ├── requirements.txt
│ ├── ai_scrapper.py
│ ├── local_ai_scrapper.py
│ └── README.md
├── gemini_multimodal_chatbot
│ ├── requirements.txt
│ ├── README.md
│ └── gemini_multimodal_chatbot.py
├── web_search_ai_assistant
│ ├── requirements.txt
│ ├── claude_websearch.py
│ ├── gpt4_websearch.py
│ └── README.md
└── cursor_ai_experiments
│ ├── requirements.txt
│ ├── ai_web_scrapper.py
│ └── chatgpt_clone_llama3.py
├── chat_with_X_tutorials
├── chat_with_substack
│ ├── requirements.txt
│ ├── README.md
│ └── chat_substack.py
├── chat_with_github
│ ├── requirements.txt
│ ├── chat_github.py
│ ├── README.md
│ └── chat_github_llama3.py
├── chat_with_gmail
│ ├── requirements.txt
│ ├── README.md
│ └── chat_gmail.py
├── chat_with_pdf
│ ├── requirements.txt
│ ├── README.md
│ ├── chat_pdf.py
│ ├── chat_pdf_llama3.py
│ └── chat_pdf_llama3.2.py
├── chat_with_research_papers
│ ├── requirements.txt
│ ├── chat_arxiv_llama3.py
│ ├── README.md
│ └── chat_arxiv.py
└── chat_with_youtube_videos
│ ├── requirements.txt
│ ├── README.md
│ └── chat_youtube.py
├── llm_apps_with_memory_tutorials
├── llama3_stateful_chat
│ ├── requirements.txt
│ └── local_llama3_chat.py
├── ai_arxiv_agent_memory
│ ├── requirements.txt
│ ├── README.md
│ └── ai_arxiv_agent_memory.py
├── ai_travel_agent_memory
│ ├── requirements.txt
│ └── README.md
├── llm_app_personalized_memory
│ ├── requirements.txt
│ ├── README.md
│ └── llm_app_memory.py
├── multi_llm_memory
│ ├── requirements.txt
│ └── README.md
└── local_chatgpt_with_memory
│ ├── requirements.txt
│ └── README.md
├── rag_tutorials
├── rag-as-a-service
│ ├── requirements.txt
│ └── README.md
├── local_rag_agent
│ ├── requirements.txt
│ ├── README.md
│ └── local_rag_agent.py
├── llama3.1_local_rag
│ ├── requirements.txt
│ ├── README.md
│ └── llama3.1_local_rag.py
├── agentic_rag
│ ├── requirements.txt
│ ├── rag_agent.py
│ └── README.md
├── gemini_agentic_rag
│ ├── requirements.txt
│ └── README.md
├── autonomous_rag
│ ├── requirements.txt
│ └── README.md
├── deepseek_local_rag_agent
│ ├── requirements.txt
│ └── README.md
├── rag_chain
│ ├── requirements.txt
│ └── README.md
├── ai_blog_search
│ └── requirements.txt
├── hybrid_search_rag
│ ├── requirements.txt
│ └── README.md
├── rag_database_routing
│ ├── requirements.txt
│ └── README.md
├── local_hybrid_search_rag
│ └── requirements.txt
├── rag_agent_cohere
│ ├── requirements.txt
│ └── README.md
└── corrective_rag
│ ├── requirements.txt
│ └── README.md
├── docs
└── banner
│ └── unwind_black.png
└── llm_finetuning_tutorials
└── llama3.2_finetuning
├── requirements.txt
├── finetune_llama3.2.py
└── README.md
/.gitignore:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_legal_agent_team/local_ai_legal_agent_team/README.md:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/local_chatgpt_clone/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_substack/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain
--------------------------------------------------------------------------------
/ai_agent_tutorials/multi_agent_researcher/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | openai
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_github/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain[github]
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_gmail/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain[gmail]
--------------------------------------------------------------------------------
/advanced_tools_frameworks/multillm_chat_playground/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | litellm
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_game_design_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit==1.41.1
2 | autogen
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/llama3_stateful_chat/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
--------------------------------------------------------------------------------
/rag_tutorials/rag-as-a-service/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | anthropic
3 | requests
4 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/local_llama3.1_tool_use/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | ollama
3 | agno
--------------------------------------------------------------------------------
/advanced_tools_frameworks/mixture_of_agents/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | asyncio
3 | together
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_pdf/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain
3 | streamlit-chat
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_customer_support_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_investment_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | openai
4 | yfinance
5 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_meeting_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | crewai
3 | crewai-tools
4 | openai
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_system_architect_r1/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | anthropic
4 | agno
--------------------------------------------------------------------------------
/advanced_tools_frameworks/llm_router_app/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | "routellm[serve,eval]"
3 | routellm
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_scrapping_ai_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | scrapegraphai
3 | playwright
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_3dpygame_r1/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | langchain-openai
3 | browser-use
4 | streamlit
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_travel_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | openai
4 | google-search-results
--------------------------------------------------------------------------------
/advanced_tools_frameworks/gemini_multimodal_chatbot/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | pillow
3 | google-generativeai
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_coding_agent_o3-mini/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | e2b-code-interpreter
3 | agno
4 | Pillow
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_services_agency/requirements.txt:
--------------------------------------------------------------------------------
1 | python-dotenv==1.0.1
2 | agency-swarm==0.4.1
3 | streamlit
--------------------------------------------------------------------------------
/ai_agent_tutorials/multimodal_ai_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | google-generativeai==0.8.3
3 | streamlit==1.40.2
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/ai_arxiv_agent_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai
4 | multion
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/ai_travel_agent_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_search_ai_assistant/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | agno
4 | duckduckgo-search
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_health_fitness_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | google-generativeai==0.8.3
2 | streamlit==1.40.2
3 | agno
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_personal_finance_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | openai
4 | google-search-results
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_real_estate_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | firecrawl-py==1.9.0
3 | pydantic
4 | streamlit
5 |
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_research_papers/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | arxiv
4 | openai
5 | pypdf
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/llm_app_personalized_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/multi_llm_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
4 | litellm
--------------------------------------------------------------------------------
/ai_agent_tutorials/local_news_agent_openai_swarm/.env:
--------------------------------------------------------------------------------
1 | OPENAI_BASE_URL=http://localhost:11434/v1
2 | OPENAI_API_KEY=fake-key
--------------------------------------------------------------------------------
/docs/banner/unwind_black.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/D3villl/shubhamsaboo-llm-apps/HEAD/docs/banner/unwind_black.png
--------------------------------------------------------------------------------
/llm_finetuning_tutorials/llama3.2_finetuning/requirements.txt:
--------------------------------------------------------------------------------
1 | torch
2 | unsloth
3 | transformers
4 | datasets
5 | trl
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_chess_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | chess==1.11.1
3 | autogen==0.6.1
4 | cairosvg
5 | pillow
6 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/xai_finance_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | duckduckgo-search
3 | yfinance
4 | fastapi[standard]
5 | openai
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/local_chatgpt_with_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
4 | litellm
--------------------------------------------------------------------------------
/rag_tutorials/local_rag_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | qdrant-client
3 | ollama
4 | pypdf
5 | openai
6 | fastapi
7 | uvicorn
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_tic_tac_toe_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit==1.41.1
2 | agno
3 | openai==1.58.1
4 | google-generativeai==0.8.3
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_youtube_videos/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain[youtube]
3 | youtube-transcript-api==0.6.3
--------------------------------------------------------------------------------
/rag_tutorials/llama3.1_local_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | ollama
3 | langchain
4 | langchain_community
5 | langchain_ollama
6 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_movie_production_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | anthropic
4 | google-search-results
5 | lxml_html_clean
--------------------------------------------------------------------------------
/ai_agent_tutorials/local_news_agent_openai_swarm/requirements.txt:
--------------------------------------------------------------------------------
1 | git+https://github.com/openai/swarm.git
2 | streamlit
3 | duckduckgo-search
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_aqi_analysis_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | openai
3 | firecrawl-py==1.9.0
4 | gradio==5.9.1
5 | pydantic
6 | dataclasses
7 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_finance_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | openai
2 | agno
3 | duckduckgo-search
4 | yfinance
5 | fastapi[standard]
6 | sqlalchemy
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_journalist_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | openai
4 | google-search-results
5 | newspaper4k
6 | lxml_html_clean
--------------------------------------------------------------------------------
/rag_tutorials/agentic_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | openai
3 | lancedb
4 | tantivy
5 | pypdf
6 | sqlalchemy
7 | pgvector
8 | psycopg[binary]
9 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_legal_agent_team/local_ai_legal_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | streamlit==1.40.2
3 | qdrant-client==1.12.1
4 | ollama==0.4.4
5 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_data_analysis_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | phidata
2 | streamlit==1.41.1
3 | openai==1.58.1
4 | duckdb==1.1.3
5 | pandas
6 | numpy==1.26.4
7 | agno
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_lead_generation_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | firecrawl-py==1.9.0
2 | agno
3 | composio-phidata
4 | composio==0.1.1
5 | pydantic==2.10.5
6 | streamlit
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_medical_imaging_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit==1.40.2
2 | agno
3 | Pillow==10.0.0
4 | duckduckgo-search==6.4.1
5 | google-generativeai==0.8.3
--------------------------------------------------------------------------------
/advanced_tools_frameworks/cursor_ai_experiments/requirements.txt:
--------------------------------------------------------------------------------
1 | scrapegraphai
2 | playwright
3 | langchain-community
4 | streamlit-chat
5 | streamlit
6 | crewai
7 | ollama
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_competitor_intelligence_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | exa-py==1.7.1
2 | firecrawl-py==1.9.0
3 | duckduckgo-search==7.2.1
4 | agno
5 | streamlit==1.41.1
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_legal_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | streamlit==1.40.2
3 | qdrant-client==1.12.1
4 | openai
5 | pypdf
6 | duckduckgo-search
7 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_startup_trend_analysis_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | streamlit==1.40.2
3 | duckduckgo_search==6.3.7
4 | newspaper4k==0.9.3.1
5 | lxml_html_clean==0.4.1
--------------------------------------------------------------------------------
/rag_tutorials/gemini_agentic_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | exa==0.5.26
3 | qdrant-client==1.12.1
4 | langchain-qdrant==0.2.0
5 | langchain-community==0.3.13
6 | streamlit==1.41.1
--------------------------------------------------------------------------------
/ai_agent_tutorials/multimodal_design_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | google-generativeai==0.8.3
2 | streamlit==1.41.1
3 | agno
4 | Pillow==11.0.0
5 | duckduckgo-search==6.3.7
6 |
7 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_data_visualisation_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | together==1.3.10
2 | e2b-code-interpreter==1.0.3
3 | e2b==1.0.5
4 | Pillow==10.4.0
5 | streamlit
6 | pandas
7 | matplotlib
8 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_meme_generator_agent_browseruse/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | browser-use==0.1.26
3 | playwright==1.49.1
4 | langchain-openai
5 | langchain-anthropic
6 | asyncio
7 |
--------------------------------------------------------------------------------
/rag_tutorials/autonomous_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | openai
4 | psycopg-binary
5 | pgvector
6 | requests
7 | sqlalchemy
8 | pypdf
9 | duckduckgo-search
10 | nest_asyncio
11 |
--------------------------------------------------------------------------------
/rag_tutorials/deepseek_local_rag_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | exa==0.5.26
3 | qdrant-client==1.12.1
4 | langchain-qdrant==0.2.0
5 | langchain-community==0.3.13
6 | streamlit==1.41.1
7 | ollama
8 |
--------------------------------------------------------------------------------
/rag_tutorials/rag_chain/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | langchain-google-genai
3 | langchain-chroma
4 | langchain-community
5 | langchain-core
6 | chromadb
7 | sentence-transformers
8 | PyPDF2
9 | python-dotenv
10 |
--------------------------------------------------------------------------------
/rag_tutorials/ai_blog_search/requirements.txt:
--------------------------------------------------------------------------------
1 | langchain
2 | langgraph
3 | langchainhub
4 | langchain-community
5 | langchain-google-genai
6 | langchain-qdrant
7 | langchain-text-splitters
8 | tiktoken
9 | beautifulsoup4
10 | python-dotenv
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_teaching_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit==1.41.1
2 | openai==1.58.1
3 | duckduckgo-search==6.4.1
4 | typing-extensions>=4.5.0
5 | agno
6 | composio-phidata==0.6.9
7 | composio_core
8 | composio==0.1.1
9 | google-search-results==2.4.2
--------------------------------------------------------------------------------
/rag_tutorials/hybrid_search_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | raglite==0.2.1
2 | pydantic==2.10.1
3 | sqlalchemy>=2.0.0
4 | psycopg2-binary>=2.9.9
5 | openai>=1.0.0
6 | cohere>=4.37
7 | pypdf>=3.0.0
8 | python-dotenv>=1.0.0
9 | rerankers==0.6.0
10 | spacy>=3.7.0
11 | streamlit
12 | anthropic
13 |
--------------------------------------------------------------------------------
/rag_tutorials/rag_database_routing/requirements.txt:
--------------------------------------------------------------------------------
1 | langchain==0.3.12
2 | langchain-community==0.3.12
3 | langchain-core==0.3.28
4 | qdrant-client==1.12.1
5 | streamlit>=1.29.0
6 | pypdf>=4.0.0
7 | sentence-transformers>=2.2.2
8 | phidata==2.7.3
9 | langchain-openai==0.2.14
10 | langgraph==0.2.53
11 | duckduckgo-search==6.4.1
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_recruitment_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | # Core dependencies
2 | phidata
3 | agno
4 | streamlit==1.40.2
5 | PyPDF2==3.0.1
6 | streamlit-pdf-viewer==0.0.19
7 | requests==2.32.3
8 | pytz==2023.4
9 | typing-extensions>=4.9.0
10 |
11 | # Optional but recommended
12 | black>=24.1.1 # for code formatting
13 | python-dateutil>=2.8.2 # for date parsing
14 |
--------------------------------------------------------------------------------
/rag_tutorials/local_hybrid_search_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | raglite==0.2.1
2 | llama-cpp-python>=0.2.56
3 | sentence-transformers>=2.5.1
4 | pydantic==2.10.1
5 | sqlalchemy>=2.0.0
6 | psycopg2-binary>=2.9.9
7 | pypdf>=3.0.0
8 | python-dotenv>=1.0.0
9 | rerankers==0.6.0
10 | spacy>=3.7.0
11 | streamlit>=1.31.0
12 | flashrank==0.2.9
13 | numpy>=1.24.0
14 | pandas>=2.0.0
15 | tqdm>=4.66.0
16 |
--------------------------------------------------------------------------------
/rag_tutorials/rag_agent_cohere/requirements.txt:
--------------------------------------------------------------------------------
1 | langchain==0.3.12
2 | langchain-community==0.3.12
3 | langchain-core==0.3.25
4 | langchain-cohere==0.3.2
5 | langchain-qdrant==0.2.0
6 | cohere==5.11.4
7 | qdrant-client==1.12.1
8 | duckduckgo-search==6.4.1
9 | streamlit==1.40.2
10 | tenacity==9.0.0
11 | typing-extensions==4.12.2
12 | pydantic==2.9.2
13 | pydantic-core==2.23.4
14 | langgraph==0.2.53
--------------------------------------------------------------------------------
/rag_tutorials/corrective_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | # Core dependencies
2 | langchain==0.3.12
3 | langgraph==0.2.53
4 | qdrant-client==1.12.1
5 | langchain-openai==0.2.14
6 | langchain-anthropic==0.3.0
7 | tavily-python==0.5.0
8 | langchain-community==0.3.12
9 | langchain-core==0.3.28
10 | streamlit==1.41.1
11 | tenacity==8.5.0
12 | anthropic>=0.7.0
13 | openai>=1.12.0
14 | tiktoken>=0.6.0
15 | pydantic>=2.0.0
16 | numpy>=1.24.0
17 | PyYAML>=6.0.0
18 | nest-asyncio>=1.5.0
19 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_reasoning_agent/local_ai_reasoning_agent.py:
--------------------------------------------------------------------------------
1 | from agno.agent import Agent
2 | from agno.models.ollama import Ollama
3 | from agno.playground import Playground, serve_playground_app
4 |
5 | reasoning_agent = Agent(name="Reasoning Agent", model=Ollama(id="qwq:32b"), markdown=True)
6 |
7 | # UI for Reasoning agent
8 | app = Playground(agents=[reasoning_agent]).get_app()
9 |
10 | # Run the Playground app
11 | if __name__ == "__main__":
12 | serve_playground_app("local_ai_reasoning_agent:app", reload=True)
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_reasoning_agent/reasoning_agent.py:
--------------------------------------------------------------------------------
1 | from agno.agent import Agent
2 | from agno.models.openai import OpenAIChat
3 | from rich.console import Console
4 |
5 | regular_agent = Agent(model=OpenAIChat(id="gpt-4o-mini"), markdown=True)
6 | console = Console()
7 | reasoning_agent = Agent(
8 | model=OpenAIChat(id="gpt-4o"),
9 | reasoning=True,
10 | markdown=True,
11 | structured_outputs=True,
12 | )
13 |
14 | task = "How many 'r' are in the word 'supercalifragilisticexpialidocious'?"
15 |
16 | console.rule("[bold green]Regular Agent[/bold green]")
17 | regular_agent.print_response(task, stream=True)
18 | console.rule("[bold yellow]Reasoning Agent[/bold yellow]")
19 | reasoning_agent.print_response(task, stream=True, show_full_reasoning=True)
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_research_papers/chat_arxiv_llama3.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from agno.agent import Agent
4 | from agno.models.ollama import Ollama
5 | from agno.tools.arxiv import ArxivTools
6 |
7 | # Set up the Streamlit app
8 | st.title("Chat with Research Papers 🔎🤖")
9 | st.caption("This app allows you to chat with arXiv research papers using Llama 3.1 running locally.")
10 |
11 | # Create an instance of the Assistant
12 | assistant = Agent(
13 | model=Ollama(
14 | id="llama3.1:8b") , tools=[ArxivTools()], show_tool_calls=True
15 | )
16 |
17 | # Get the search query from the user
18 | query= st.text_input("Enter the Search Query", type="default")
19 |
20 | if query:
21 | # Search the web using the AI Assistant
22 | response = assistant.run(query, stream=False)
23 | st.write(response.content)
--------------------------------------------------------------------------------
/ai_agent_tutorials/xai_finance_agent/xai_finance_agent.py:
--------------------------------------------------------------------------------
1 | # import necessary python libraries
2 | from agno.agent import Agent
3 | from agno.models.xai import xAI
4 | from agno.tools.yfinance import YFinanceTools
5 | from agno.tools.duckduckgo import DuckDuckGoTools
6 | from agno.playground import Playground, serve_playground_app
7 |
8 | # create the AI finance agent
9 | agent = Agent(
10 | name="xAI Finance Agent",
11 | model = xAI(id="grok-beta"),
12 | tools=[DuckDuckGoTools(), YFinanceTools(stock_price=True, analyst_recommendations=True, stock_fundamentals=True)],
13 | instructions = ["Always use tables to display financial/numerical data. For text data, use bullet points and short paragraphs."],
14 | show_tool_calls = True,
15 | markdown = True,
16 | )
17 |
18 | # UI for finance agent
19 | app = Playground(agents=[agent]).get_app()
20 |
21 | if __name__ == "__main__":
22 | serve_playground_app("xai_finance_agent:app", reload=True)
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_substack/README.md:
--------------------------------------------------------------------------------
1 | ## 📝 Chat with Substack Newsletter
2 | Streamlit app that allows you to chat with a Substack newsletter using OpenAI's API and the Embedchain library. This app leverages GPT-4 to provide accurate answers to questions based on the content of the specified Substack newsletter.
3 |
4 | ## Features
5 | - Input a Substack blog URL
6 | - Ask questions about the content of the Substack newsletter
7 | - Get accurate answers using OpenAI's API and Embedchain
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/chat_with_X_tutorials/chat_with_substack
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Run the Streamlit App
27 | ```bash
28 | streamlit run chat_substack.py
29 | ```
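30 | 
31 | The core of the app is just a couple of Embedchain calls. A minimal sketch (assuming Embedchain's `substack` data type; the newsletter URL is a placeholder, and `chat_substack.py` wraps the same calls in Streamlit widgets):
32 | 
33 | ```python
34 | from embedchain import App
35 | 
36 | app = App()  # defaults to OpenAI and reads OPENAI_API_KEY from the environment
37 | app.add("https://example.substack.com", data_type="substack")  # placeholder URL
38 | print(app.chat("What topics does this newsletter cover?"))
39 | ```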
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_research_papers/README.md:
--------------------------------------------------------------------------------
1 | ## 🔎 Chat with arXiv Research Papers
2 | This Streamlit app enables you to engage in interactive conversations with arXiv, a vast repository of scholarly articles, using GPT-4o. With this RAG application, you can easily access and explore the wealth of knowledge contained within arXiv.
3 |
4 | ### Features
5 | - Engage in conversational interactions with arXiv
6 | - Access and explore a vast collection of research papers
7 | - Utilize OpenAI GPT-4o for intelligent responses
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/chat_with_X_tutorials/chat_with_research_papers
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Run the Streamlit App
27 | ```bash
28 | streamlit run chat_arxiv.py
29 | ```
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_youtube_videos/README.md:
--------------------------------------------------------------------------------
1 | ## 📽️ Chat with YouTube Videos
2 |
3 | LLM app with RAG to chat with YouTube videos using OpenAI's GPT-4o, mem0/Embedchain for retrieval and memory, and the youtube-transcript-api for fetching transcripts. The app uses Retrieval-Augmented Generation (RAG) to provide accurate answers to questions based on the content of the video.
4 |
5 | ### Features
6 |
7 | - Input a YouTube video URL
8 | - Ask questions about the content of the video
9 | - Get accurate answers using RAG and the selected LLM
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/chat_with_X_tutorials/chat_with_youtube_videos
18 | ```
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 | 3. Get your OpenAI API Key
25 |
26 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
27 |
28 | 4. Run the Streamlit App
29 | ```bash
30 | streamlit run chat_youtube.py
31 | ```
32 |
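33 | A rough sketch of how the pieces fit together (Embedchain's `youtube_video` data type pulls the transcript via youtube-transcript-api; the actual `chat_youtube.py` may differ in details):
34 | 
35 | ```python
36 | import streamlit as st
37 | from embedchain import App
38 | 
39 | openai_key = st.text_input("OpenAI API Key", type="password")
40 | if openai_key:
41 |     app = App.from_config(config={
42 |         "llm": {"provider": "openai", "config": {"api_key": openai_key}},
43 |         "embedder": {"provider": "openai", "config": {"api_key": openai_key}},
44 |     })
45 |     video_url = st.text_input("Enter the YouTube video URL")
46 |     if video_url:
47 |         # Embedchain fetches the transcript with youtube-transcript-api and indexes it
48 |         app.add(video_url, data_type="youtube_video")
49 |         question = st.text_input("Ask a question about the video")
50 |         if question:
51 |             st.write(app.chat(question))
52 | ```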
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_pdf/README.md:
--------------------------------------------------------------------------------
1 | ## 📄 Chat with PDF
2 |
3 | LLM app with RAG to chat with PDF in just 30 lines of Python Code. The app uses Retrieval Augmented Generation (RAG) to provide accurate answers to questions based on the content of the uploaded PDF.
4 |
5 | ### Features
6 |
7 | - Upload a PDF document
8 | - Ask questions about the content of the PDF
9 | - Get accurate answers using RAG and the selected LLM
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/chat_with_X_tutorials/chat_with_pdf
18 | ```
19 | 2. Install the required dependencies
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 | 3. Get your OpenAI API Key
25 |
26 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
27 |
28 | 4. Run the Streamlit App
29 | ```bash
30 | streamlit run chat_pdf.py
31 | ```
32 | ### Interactive Application Demo
33 | https://github.com/Shubhamsaboo/awesome-llm-apps/assets/31396011/12bdfc11-c877-4fc7-9e70-63f21d2eb977
34 |
35 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_search_ai_assistant/claude_websearch.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from agno.agent import Agent
4 | from agno.tools.duckduckgo import DuckDuckGoTools
5 | from agno.models.anthropic import Claude
6 |
7 | # Set up the Streamlit app
8 | st.title("Claude Sonnet + AI Web Search 🤖")
9 | st.caption("This app allows you to search the web using Claude 3.5 Sonnet")
10 |
11 | # Get Anthropic API key from user
12 | anthropic_api_key = st.text_input("Anthropic's Claude API Key", type="password")
13 |
14 | # If Anthropic API key is provided, create an instance of Assistant
15 | if anthropic_api_key:
16 | assistant = Agent(
17 | model=Claude(
18 | id="claude-3-5-sonnet-20240620",
19 | max_tokens=1024,
20 | temperature=0.3,
21 | api_key=anthropic_api_key) , tools=[DuckDuckGoTools()], show_tool_calls=True
22 | )
23 | # Get the search query from the user
24 | query= st.text_input("Enter the Search Query", type="default")
25 |
26 | if query:
27 | # Search the web using the AI Assistant
28 | response = assistant.run(query, stream=False)
29 | st.write(response.content)
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_research_papers/chat_arxiv.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from agno.agent import Agent
4 | from agno.models.openai import OpenAIChat
5 | from agno.tools.arxiv import ArxivTools
6 |
7 | # Set up the Streamlit app
8 | st.title("Chat with Research Papers 🔎🤖")
9 | st.caption("This app allows you to chat with arXiv research papers using OpenAI GPT-4o model.")
10 |
11 | # Get OpenAI API key from user
12 | openai_access_token = st.text_input("OpenAI API Key", type="password")
13 |
14 | # If OpenAI API key is provided, create an instance of Assistant
15 | if openai_access_token:
16 | # Create an instance of the Assistant
17 | assistant = Agent(
18 | model=OpenAIChat(
19 | id="gpt-4o",
20 | max_tokens=1024,
21 | temperature=0.9,
22 | api_key=openai_access_token) , tools=[ArxivTools()]
23 | )
24 |
25 | # Get the search query from the user
26 | query= st.text_input("Enter the Search Query", type="default")
27 |
28 | if query:
29 | # Search the web using the AI Assistant
30 | response = assistant.run(query, stream=False)
31 | st.write(response.content)
--------------------------------------------------------------------------------
/rag_tutorials/agentic_rag/rag_agent.py:
--------------------------------------------------------------------------------
1 | from agno.agent import Agent
2 | from agno.models.openai import OpenAIChat
3 | from agno.knowledge.pdf_url import PDFUrlKnowledgeBase
4 | from agno.vectordb.lancedb import LanceDb, SearchType
5 | from agno.playground import Playground, serve_playground_app
6 | from agno.tools.duckduckgo import DuckDuckGoTools
7 |
8 | db_uri = "tmp/lancedb"
9 | # Create a knowledge base from a PDF
10 | knowledge_base = PDFUrlKnowledgeBase(
11 | urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
12 | # Use LanceDB as the vector database
13 | vector_db=LanceDb(table_name="recipes", uri=db_uri, search_type=SearchType.vector),
14 | )
15 | # Load the knowledge base: Comment out after first run
16 | knowledge_base.load(upsert=True)
17 |
18 | rag_agent = Agent(
19 | model=OpenAIChat(id="gpt-4o"),
20 | agent_id="rag-agent",
21 | knowledge=knowledge_base, # Add the knowledge base to the agent
22 | tools=[DuckDuckGoTools()],
23 | show_tool_calls=True,
24 | markdown=True,
25 | )
26 |
27 | app = Playground(agents=[rag_agent]).get_app()
28 |
29 | if __name__ == "__main__":
30 | serve_playground_app("rag_agent:app", reload=True)
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_search_ai_assistant/gpt4_websearch.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from agno.agent import Agent
4 | from agno.tools.duckduckgo import DuckDuckGoTools
5 | from agno.models.openai import OpenAIChat
6 |
7 | # Set up the Streamlit app
8 | st.title("AI Web Search Assistant 🤖")
9 | st.caption("This app allows you to search the web using GPT-4o")
10 |
11 | # Get OpenAI API key from user
12 | openai_access_token = st.text_input("OpenAI API Key", type="password")
13 |
14 | # If OpenAI API key is provided, create an instance of Assistant
15 | if openai_access_token:
16 | # Create an instance of the Assistant
17 | assistant = Agent(
18 | model=OpenAIChat(
19 | id="gpt-4o",
20 | max_tokens=1024,
21 | temperature=0.9,
22 | api_key=openai_access_token) , tools=[DuckDuckGoTools()], show_tool_calls=True
23 | )
24 |
25 | # Get the search query from the user
26 | query= st.text_input("Enter the Search Query", type="default")
27 |
28 | if query:
29 | # Search the web using the AI Assistant
30 | response = assistant.run(query, stream=False)
31 | st.write(response.content)
--------------------------------------------------------------------------------
/rag_tutorials/rag-as-a-service/README.md:
--------------------------------------------------------------------------------
1 | ## 🖇️ RAG-as-a-Service with Claude 3.5 Sonnet
2 | Build and deploy a production-ready Retrieval-Augmented Generation (RAG) service using Claude 3.5 Sonnet and Ragie.ai. This implementation allows you to create a document querying system with a user-friendly Streamlit interface in less than 50 lines of Python code.
3 |
4 | ### Features
5 | - Production-ready RAG pipeline
6 | - Integration with Claude 3.5 Sonnet for response generation
7 | - Document upload from URLs
8 | - Real-time document querying
9 | - Support for both fast and accurate document processing modes
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/rag_tutorials/rag-as-a-service
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Get your Anthropic and Ragie API keys
26 | 
27 | - Sign up for an [Anthropic account](https://console.anthropic.com/) and get your API key
28 | - Sign up for a [Ragie account](https://www.ragie.ai/) and get your API key
29 |
30 | 4. Run the Streamlit app
31 | ```bash
32 | streamlit run rag_app.py
33 | ```
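34 | 
35 | A compressed sketch of the retrieve-then-generate flow the app implements. The Ragie endpoint and response fields below are assumptions based on Ragie's public REST API and may not match `rag_app.py` exactly:
36 | 
37 | ```python
38 | import os
39 | import requests
40 | from anthropic import Anthropic
41 | 
42 | question = "What does the uploaded document say about pricing?"
43 | 
44 | # 1. Retrieve relevant chunks from Ragie (assumed endpoint and field names)
45 | resp = requests.post(
46 |     "https://api.ragie.ai/retrievals",
47 |     headers={"Authorization": f"Bearer {os.environ['RAGIE_API_KEY']}"},
48 |     json={"query": question},
49 | )
50 | chunks = [c["text"] for c in resp.json().get("scored_chunks", [])]
51 | context = "\n\n".join(chunks)
52 | 
53 | # 2. Ask Claude 3.5 Sonnet to answer from the retrieved context only
54 | client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
55 | message = client.messages.create(
56 |     model="claude-3-5-sonnet-20241022",
57 |     max_tokens=1024,
58 |     messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
59 | )
60 | print(message.content[0].text)
61 | ```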
--------------------------------------------------------------------------------
/ai_agent_tutorials/multimodal_ai_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🧬 Multimodal AI Agent
2 |
3 | A Streamlit application that combines video analysis and web search capabilities using Google's Gemini 2.0 model. This agent can analyze uploaded videos and answer questions by combining visual understanding with web search.
4 |
5 | ### Features
6 |
7 | - Video analysis using Gemini 2.0 Flash
8 | - Web research integration via DuckDuckGo
9 | - Support for multiple video formats (MP4, MOV, AVI)
10 | - Real-time video processing
11 | - Combined visual and textual analysis
12 |
13 | ### How to get Started?
14 |
15 | 1. Clone the GitHub repository
16 |
17 | ```bash
18 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
19 | cd awesome-llm-apps/ai_agent_tutorials/multimodal_ai_agent
20 | ```
21 | 2. Install the required dependencies:
22 |
23 | ```bash
24 | pip install -r requirements.txt
25 | ```
26 | 3. Get your Google Gemini API Key
27 |
28 | - Sign up for a [Google AI Studio account](https://aistudio.google.com/apikey) and obtain your API key.
29 |
30 | 4. Set your Gemini API key as an environment variable
31 | 
32 | ```bash
33 | export GOOGLE_API_KEY=your_api_key_here
34 | ```
35 |
36 | 5. Run the Streamlit App
37 | ```bash
38 | streamlit run multimodal_agent.py
39 | ```
40 |
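41 | A condensed sketch of the agent behind the app, following the same pattern as `gemini_multimodal_agent_demo/multimodal_ai_agent.py` elsewhere in this repo (the app's Streamlit upload and display code is omitted, and the exact keyword for passing the uploaded video may vary by agno version):
42 | 
43 | ```python
44 | import time
45 | from agno.agent import Agent
46 | from agno.models.google import Gemini
47 | from agno.tools.duckduckgo import DuckDuckGoTools
48 | from google.generativeai import upload_file, get_file
49 | 
50 | agent = Agent(model=Gemini(id="gemini-2.0-flash-exp"), tools=[DuckDuckGoTools()], markdown=True)
51 | 
52 | # Upload the video and wait until Gemini has finished processing it
53 | video = upload_file("sample_video.mp4")
54 | while video.state.name == "PROCESSING":
55 |     time.sleep(2)
56 |     video = get_file(video.name)
57 | 
58 | # Combine video understanding with a DuckDuckGo web search
59 | response = agent.run("Summarize the video and find related news coverage.", videos=[video])  # videos kwarg shape depends on agno version
60 | print(response.content)
61 | ```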
--------------------------------------------------------------------------------
/advanced_tools_frameworks/gemini_multimodal_chatbot/README.md:
--------------------------------------------------------------------------------
1 | ## ⚡️ Multimodal Chatbot with Gemini Flash
2 | This repository contains a Streamlit application that demonstrates a multimodal chatbot using Google's Gemini Flash model. The chatbot allows users to interact with the model using both image and text inputs, providing lightning-fast results.
3 |
4 | ## Features
5 | - Multimodal input: Users can upload images and enter text queries to interact with the chatbot.
6 | - Gemini Flash model: The chatbot leverages Google's powerful Gemini Flash model for generating responses.
7 | - Chat history: The application maintains a chat history, displaying the conversation between the user and the chatbot.
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/advanced_tools_frameworks/gemini_multimodal_chatbot
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your Google AI Studio API Key
23 | 
24 | - Sign up for a [Google AI Studio account](https://aistudio.google.com/app/apikey) and obtain your API key.
25 |
26 | 4. Run the Streamlit App
27 | ```bash
28 | streamlit run gemini_multimodal_chatbot.py
29 | ```
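30 | 
31 | At its core, each turn sends the uploaded image and the text prompt to Gemini Flash in a single call, roughly like this (a sketch; `gemini_multimodal_chatbot.py` adds the Streamlit chat-history handling on top):
32 | 
33 | ```python
34 | import google.generativeai as genai
35 | from PIL import Image
36 | 
37 | genai.configure(api_key="your_api_key_here")
38 | model = genai.GenerativeModel("gemini-1.5-flash")
39 | 
40 | image = Image.open("photo.jpg")  # stands in for the image uploaded via Streamlit
41 | prompt = "Describe what is happening in this picture."
42 | 
43 | response = model.generate_content([image, prompt])
44 | print(response.text)
45 | ```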
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_pdf/chat_pdf.py:
--------------------------------------------------------------------------------
1 | import os
2 | import tempfile
3 | import streamlit as st
4 | from embedchain import App
5 |
6 | def embedchain_bot(db_path, api_key):
7 | return App.from_config(
8 | config={
9 | "llm": {"provider": "openai", "config": {"api_key": api_key}},
10 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
11 | "embedder": {"provider": "openai", "config": {"api_key": api_key}},
12 | }
13 | )
14 |
15 | st.title("Chat with PDF")
16 |
17 | openai_access_token = st.text_input("OpenAI API Key", type="password")
18 |
19 | if openai_access_token:
20 | db_path = tempfile.mkdtemp()
21 | app = embedchain_bot(db_path, openai_access_token)
22 |
23 | pdf_file = st.file_uploader("Upload a PDF file", type="pdf")
24 |
25 | if pdf_file:
26 | with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
27 | f.write(pdf_file.getvalue())
28 | app.add(f.name, data_type="pdf_file")
29 | os.remove(f.name)
30 | st.success(f"Added {pdf_file.name} to knowledge base!")
31 |
32 | prompt = st.text_input("Ask a question about the PDF")
33 |
34 | if prompt:
35 | answer = app.chat(prompt)
36 | st.write(answer)
37 |
38 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_investment_agent/investment_agent.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from agno.agent import Agent
4 | from agno.models.openai import OpenAIChat
5 | from agno.tools.yfinance import YFinanceTools
6 |
7 | # Set up the Streamlit app
8 | st.title("AI Investment Agent 📈🤖")
9 | st.caption("This app allows you to compare the performance of two stocks and generate detailed reports.")
10 |
11 | # Get OpenAI API key from user
12 | openai_api_key = st.text_input("OpenAI API Key", type="password")
13 |
14 | if openai_api_key:
15 | # Create an instance of the agent
16 | assistant = Agent(
17 | model=OpenAIChat(id="gpt-4o", api_key=openai_api_key),
18 | tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True, company_news=True)],
19 | show_tool_calls=True,
20 | )
21 |
22 | # Input fields for the stocks to compare
23 | stock1 = st.text_input("Enter the first stock symbol")
24 | stock2 = st.text_input("Enter the second stock symbol")
25 |
26 | if stock1 and stock2:
27 | # Get the response from the assistant
28 | query = f"Compare {stock1} to {stock2}. Use every tool you have."
29 | response = assistant.run(query, stream=False)
30 | st.write(response.content)
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/multi_llm_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧠 Multi-LLM App with Shared Memory
2 | This Streamlit application demonstrates a multi-LLM system with a shared memory layer, allowing users to interact with different language models while maintaining conversation history and context across sessions.
3 |
4 | ### Features
5 |
6 | - Support for multiple LLMs:
7 | - OpenAI's GPT-4o
8 | - Anthropic's Claude 3.5 Sonnet
9 |
10 | - Persistent memory using Qdrant vector store
11 | - User-specific conversation history
12 | - Memory retrieval for contextual responses
13 | - User-friendly interface with LLM selection
14 |
15 | ### How to get Started?
16 |
17 | 1. Clone the GitHub repository
18 | ```bash
19 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
20 | cd awesome-llm-apps/llm_apps_with_memory_tutorials/multi_llm_memory
21 | ```
22 |
23 | 2. Install the required dependencies:
24 |
25 | ```bash
26 | pip install -r requirements.txt
27 | ```
28 |
29 | 3. Ensure Qdrant is running:
30 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
31 |
32 | ```bash
33 | docker pull qdrant/qdrant
34 | docker run -p 6333:6333 qdrant/qdrant
35 | ```
36 |
37 | 4. Run the Streamlit App
38 | ```bash
39 | streamlit run multi_llm_memory.py
40 | ```
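41 | 
42 | The shared memory layer is the piece to adjust if Qdrant is not on localhost:6333. A sketch of that setup using Mem0's `Memory` with a Qdrant vector store (the exact config keys and prompt wiring in `multi_llm_memory.py` may differ):
43 | 
44 | ```python
45 | from mem0 import Memory
46 | 
47 | # One memory store shared by both GPT-4o and Claude 3.5 Sonnet
48 | memory = Memory.from_config({
49 |     "vector_store": {
50 |         "provider": "qdrant",
51 |         "config": {"host": "localhost", "port": 6333},
52 |     }
53 | })
54 | 
55 | user_id = "alice"
56 | memory.add("I prefer short, bullet-point answers.", user_id=user_id)
57 | 
58 | # Before calling either LLM, pull the user's relevant memories into the prompt
59 | hits = memory.search("How should answers be formatted?", user_id=user_id)
60 | hits = hits.get("results", hits) if isinstance(hits, dict) else hits  # result shape varies by mem0 version
61 | context = "\n".join(hit["memory"] for hit in hits)
62 | print(context)
63 | ```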
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_scrapping_ai_agent/ai_scrapper.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from scrapegraphai.graphs import SmartScraperGraph
4 |
5 | # Set up the Streamlit app
6 | st.title("Web Scraping AI Agent 🕵️♂️")
7 | st.caption("This app allows you to scrape a website using the OpenAI API")
8 |
9 | # Get OpenAI API key from user
10 | openai_access_token = st.text_input("OpenAI API Key", type="password")
11 |
12 | if openai_access_token:
13 | model = st.radio(
14 | "Select the model",
15 | ["gpt-3.5-turbo", "gpt-4"],
16 | index=0,
17 | )
18 | graph_config = {
19 | "llm": {
20 | "api_key": openai_access_token,
21 | "model": model,
22 | },
23 | }
24 | # Get the URL of the website to scrape
25 | url = st.text_input("Enter the URL of the website you want to scrape")
26 | # Get the user prompt
27 | user_prompt = st.text_input("What do you want the AI agent to scrape from the website?")
28 |
29 | # Create a SmartScraperGraph object
30 | smart_scraper_graph = SmartScraperGraph(
31 | prompt=user_prompt,
32 | source=url,
33 | config=graph_config
34 | )
35 | # Scrape the website
36 | if st.button("Scrape"):
37 | result = smart_scraper_graph.run()
38 | st.write(result)
--------------------------------------------------------------------------------
/ai_agent_tutorials/multi_agent_researcher/research_agent_llama3.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from agno.agent import Agent
4 | from agno.tools.hackernews import HackerNews
5 | from agno.models.ollama import Ollama
6 |
7 | # Set up the Streamlit app
8 | st.title("Multi-Agent AI Researcher using Llama 3.2 🔍🤖")
9 | st.caption("This app allows you to research top stories and users on HackerNews and write blogs, reports and social posts.")
10 |
11 | # Create instances of the Assistant
12 | story_researcher = Agent(
13 | name="HackerNews Story Researcher",
14 | role="Researches hackernews stories and users.",
15 | tools=[HackerNews()],
16 | model=Ollama(id="llama3.2", max_tokens=1024)
17 | )
18 |
19 | user_researcher = Agent(
20 | name="HackerNews User Researcher",
21 | role="Reads articles from URLs.",
22 | tools=[HackerNews()],
23 | model=Ollama(id="llama3.2", max_tokens=1024)
24 | )
25 |
26 | hn_assistant = Agent(
27 | name="Hackernews Team",
28 | team=[story_researcher, user_researcher],
29 | model=Ollama(id="llama3.2", max_tokens=1024)
30 | )
31 |
32 | # Input field for the report query
33 | query = st.text_input("Enter your report query")
34 |
35 | if query:
36 | # Get the response from the assistant
37 | response = hn_assistant.run(query, stream=False)
38 | st.write(response.content)
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_scrapping_ai_agent/local_ai_scrapper.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from scrapegraphai.graphs import SmartScraperGraph
4 |
5 | # Set up the Streamlit app
6 | st.title("Web Scraping AI Agent 🕵️♂️")
7 | st.caption("This app allows you to scrape a website using Llama 3.2")
8 |
9 | # Set up the configuration for the SmartScraperGraph
10 | graph_config = {
11 | "llm": {
12 | "model": "ollama/llama3.2",
13 | "temperature": 0,
14 | "format": "json", # Ollama needs the format to be specified explicitly
15 | "base_url": "http://localhost:11434", # set Ollama URL
16 | },
17 | "embeddings": {
18 | "model": "ollama/nomic-embed-text",
19 | "base_url": "http://localhost:11434", # set Ollama URL
20 | },
21 | "verbose": True,
22 | }
23 | # Get the URL of the website to scrape
24 | url = st.text_input("Enter the URL of the website you want to scrape")
25 | # Get the user prompt
26 | user_prompt = st.text_input("What do you want the AI agent to scrape from the website?")
27 |
28 | # Create a SmartScraperGraph object
29 | smart_scraper_graph = SmartScraperGraph(
30 | prompt=user_prompt,
31 | source=url,
32 | config=graph_config
33 | )
34 | # Scrape the website
35 | if st.button("Scrape"):
36 | result = smart_scraper_graph.run()
37 | st.write(result)
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/llm_app_personalized_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧠 LLM App with Memory
2 | This Streamlit app is an AI-powered chatbot that uses OpenAI's GPT-4o model with a persistent memory feature. It allows users to have conversations with the AI while maintaining context across multiple interactions.
3 |
4 | ### Features
5 |
6 | - Utilizes OpenAI's GPT-4o model for generating responses
7 | - Implements persistent memory using Mem0 and Qdrant vector store
8 | - Allows users to view their conversation history
9 | - Provides a user-friendly interface with Streamlit
10 |
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/llm_apps_with_memory_tutorials/llm_app_personalized_memory
18 | ```
19 |
20 | 2. Install the required dependencies:
21 |
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Ensure Qdrant is running:
27 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
28 |
29 | ```bash
30 | docker pull qdrant/qdrant
31 |
32 | docker run -p 6333:6333 -p 6334:6334 \
33 | -v $(pwd)/qdrant_storage:/qdrant/storage:z \
34 | qdrant/qdrant
35 | ```
36 |
37 | 4. Run the Streamlit App
38 | ```bash
39 | streamlit run llm_app_memory.py
40 | ```
41 |
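For reference, a minimal sketch of the memory loop described above, using Mem0 backed by the local Qdrant instance and GPT-4o for the reply. Configuration keys and identifiers are illustrative; the app's llm_app_memory.py may wire this differently:

```python
from mem0 import Memory
from openai import OpenAI

# Point Mem0 at the local Qdrant instance started above (assumed config keys).
memory = Memory.from_config({
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    }
})
client = OpenAI()  # Mem0 and the chat call both expect OPENAI_API_KEY in the environment

user_id = "demo-user"
question = "What did I say my favourite city was?"

# Recall memories relevant to the question and hand them to GPT-4o as context.
relevant = memory.search(question, user_id=user_id)
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": f"Known facts about the user: {relevant}"},
        {"role": "user", "content": question},
    ],
)
print(reply.choices[0].message.content)

# Store the new exchange so later sessions keep this context.
memory.add(f"User asked: {question}", user_id=user_id)
```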
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_personal_finance_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 💰 AI Personal Finance Planner
2 | This Streamlit app is an AI-powered personal finance planner that generates personalized financial plans using OpenAI GPT-4o. It automates the process of researching, planning, and creating tailored budgets, investment strategies, and savings goals, empowering you to take control of your financial future with ease.
3 |
4 | ### Features
5 | - Set your financial goals and provide details about your current financial situation
6 | - Use GPT-4o to generate intelligent and personalized financial advice
7 | - Receive customized budgets, investment plans, and savings strategies
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/ai_agent_tutorials/ai_personal_finance_agent
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Get your SerpAPI Key
27 |
28 | - Sign up for a [SerpAPI account](https://serpapi.com/) and obtain your API key.
29 |
30 | 5. Run the Streamlit App
31 | ```bash
32 | streamlit run finance_agent.py
33 | ```
--------------------------------------------------------------------------------
/ai_agent_tutorials/gemini_multimodal_agent_demo/multimodal_ai_agent.py:
--------------------------------------------------------------------------------
1 | from agno.agent import Agent
2 | from agno.models.google import Gemini
3 | from agno.tools.duckduckgo import DuckDuckGoTools
4 | from google.generativeai import upload_file, get_file
5 | import time
6 |
7 | # 1. Initialize the Multimodal Agent
8 | agent = Agent(model=Gemini(id="gemini-2.0-flash-exp"), tools=[DuckDuckGoTools()], markdown=True)
9 |
10 | # 2. Image Input
11 | image_url = "https://example.com/sample_image.jpg"
12 |
13 | # 3. Audio Input
14 | audio_file = "sample_audio.mp3"
15 |
16 | # 4. Video Input
17 | video_file = upload_file("sample_video.mp4")
18 | while video_file.state.name == "PROCESSING":
19 | time.sleep(2)
20 | video_file = get_file(video_file.name)
21 |
22 | # 5. Multimodal Query
23 | query = """
24 | Combine insights from the inputs:
25 | 1. **Image**: Describe the scene and its significance.
26 | 2. **Audio**: Extract key messages that relate to the visual.
27 | 3. **Video**: Look at the video input and provide insights that connect with the image and audio context.
28 | 4. **Web Search**: Find the latest updates or events linking all these topics.
29 | Summarize the overall theme or story these inputs convey.
30 | """
31 |
32 | # 6. Multimodal Agent generates unified response
33 | agent.print_response(query, images=[image_url], audio=audio_file, videos=[video_file], stream=True)
--------------------------------------------------------------------------------
/advanced_tools_frameworks/local_chatgpt_clone/README.md:
--------------------------------------------------------------------------------
1 | ## 🦙💬 ChatGPT Clone using Llama-3
2 | This project demonstrates how to build a ChatGPT clone using the Llama-3 model running locally on your computer. The application is built using Python and Streamlit, providing a user-friendly interface for interacting with the language model. Best of all, it's 100% free and doesn't require an internet connection!
3 |
4 | ### Features
5 | - Runs locally on your computer, with no internet connection required and completely free to use.
6 | - Utilizes the Llama-3 instruct model for generating responses
7 | - Provides a chat-like interface for seamless interaction
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/advanced_tools_frameworks/local_chatgpt_clone
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Download and install the [LM Studio desktop app](https://lmstudio.ai/). Download the Llama-3 instruct model.
23 |
24 | 4. Expose the Llama-3 model as an OpenAI API by starting the server in LM Studio. Watch this [video walkthrough](https://x.com/Saboo_Shubham_/status/1783715814790549683).
25 |
26 | 5. Run the Streamlit App
27 | ```bash
28 | streamlit run chatgpt_clone_llama3.py
29 | ```
30 |
31 |
--------------------------------------------------------------------------------
/rag_tutorials/local_rag_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🦙 Local RAG Agent with Llama 3.2
2 | This application implements a Retrieval-Augmented Generation (RAG) system using Llama 3.2 via Ollama, with Qdrant as the vector database.
3 |
4 |
5 | ### Features
6 | - Fully local RAG implementation
7 | - Powered by Llama 3.2 through Ollama
8 | - Vector search using Qdrant
9 | - Interactive playground interface
10 | - No external API dependencies
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | cd awesome-llm-apps/rag_tutorials/local_rag_agent
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Install and start [Qdrant](https://qdrant.tech/) vector database locally
27 |
28 | ```bash
29 | docker pull qdrant/qdrant
30 | docker run -p 6333:6333 qdrant/qdrant
31 | ```
32 |
33 | 4. Install [Ollama](https://ollama.com/download), then pull Llama 3.2 as the LLM and OpenHermes as the embedding model for OllamaEmbedder
34 | ```bash
35 | ollama pull llama3.2
36 | ollama pull openhermes
37 | ```
38 |
39 | 5. Run the AI RAG Agent
40 | ```bash
41 | python local_rag_agent.py
42 | ```
43 |
44 | 6. Open your web browser and navigate to the URL provided in the console output to interact with the RAG agent through the playground interface.
45 |
46 |
47 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_customer_support_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🛒 AI Customer Support Agent with Memory
2 | This Streamlit app implements an AI-powered customer support agent that works with synthetic customer data generated using GPT-4o. The agent uses OpenAI's GPT-4o model for responses and maintains a memory of past interactions using the Mem0 library with Qdrant as the vector store.
3 |
4 | ### Features
5 |
6 | - Chat interface for interacting with the AI customer support agent
7 | - Persistent memory of customer interactions and profiles
8 | - Synthetic data generation for testing and demonstration
9 | - Utilizes OpenAI's GPT-4o model for intelligent responses
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/ai_agent_tutorials/ai_customer_support_agent
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Ensure Qdrant is running:
26 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
27 |
28 | ```bash
29 | docker pull qdrant/qdrant
30 |
31 | docker run -p 6333:6333 -p 6334:6334 \
32 | -v "$(pwd)/qdrant_storage:/qdrant/storage:z" \
33 | qdrant/qdrant
34 | ```
35 |
36 | 4. Run the Streamlit App
37 | ```bash
38 | streamlit run customer_support_agent.py
39 | ```
40 |
--------------------------------------------------------------------------------
/rag_tutorials/llama3.1_local_rag/README.md:
--------------------------------------------------------------------------------
1 | ## 💻 Local Llama-3.1 with RAG
2 | Streamlit app that allows you to chat with any webpage using local Llama-3.1 and Retrieval Augmented Generation (RAG). This app runs entirely on your computer, making it 100% free with no need for an internet connection.
3 |
4 |
5 | ### Features
6 | - Input a webpage URL
7 | - Ask questions about the content of the webpage
8 | - Get accurate answers using RAG and the Llama-3.1 model running locally on your computer
9 |
10 | ### How to get Started?
11 |
12 | 1. Clone the GitHub repository
13 |
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/rag_tutorials/llama3.1_local_rag
17 | ```
18 | 2. Install the required dependencies:
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 | 3. Run the Streamlit App
24 | ```bash
25 | streamlit run llama3.1_local_rag.py
26 | ```
27 |
28 | ### How it Works?
29 |
30 | - The app loads the webpage data using WebBaseLoader and splits it into chunks using RecursiveCharacterTextSplitter.
31 | - It creates Ollama embeddings and a vector store using Chroma.
32 | - The app sets up a RAG (Retrieval-Augmented Generation) chain, which retrieves relevant documents based on the user's question.
33 | - The Llama-3.1 model is called to generate an answer using the retrieved context.
34 | - The app displays the answer to the user's question.
35 |
36 |
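A compact sketch of that pipeline, assuming the langchain-community packages and a `llama3.1` model already pulled in Ollama; the actual llama3.1_local_rag.py may compose the chain differently:

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.chat_models import ChatOllama

url = "https://example.com/some-article"   # placeholder webpage
question = "What is the main argument of this page?"

# Load the page and split it into overlapping chunks.
docs = WebBaseLoader(url).load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks with Ollama and index them in a local Chroma store.
vectorstore = Chroma.from_documents(chunks, OllamaEmbeddings(model="llama3.1"))
retriever = vectorstore.as_retriever()

# Retrieve relevant chunks and let Llama-3.1 answer from that context only.
context = "\n\n".join(doc.page_content for doc in retriever.invoke(question))
llm = ChatOllama(model="llama3.1")
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```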
--------------------------------------------------------------------------------
/ai_agent_tutorials/xai_finance_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 📊 AI Finance Agent with xAI Grok
2 | This application creates a financial analysis agent powered by xAI's Grok model, combining real-time stock data with web search capabilities. It provides structured financial insights through an interactive playground interface.
3 |
4 | ### Features
5 |
6 | - Powered by xAI's Grok-beta model
7 | - Real-time stock data analysis via YFinance
8 | - Web search capabilities through DuckDuckGo
9 | - Formatted output with tables for financial data
10 | - Interactive playground interface
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/ai_agent_tutorials/xai_finance_agent
18 | ```
19 |
20 | 2. Install the required dependencies:
21 |
22 | ```bash
23 |
24 | pip install -r requirements.txt
25 | ```
26 |
27 | 3. Get your xAI API Key
28 |
29 | - Sign up for an [xAI API account](https://console.x.ai/)
30 | - Set your XAI_API_KEY environment variable.
31 | ```bash
32 | export XAI_API_KEY='your-api-key-here'
33 | ```
34 |
35 | 4. Run the team of AI Agents
36 | ```bash
37 | python xai_finance_agent.py
38 | ```
39 |
40 | 5. Open your web browser and navigate to the URL provided in the console output to interact with the AI financial agent through the playground interface.
41 |
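For orientation, a minimal sketch of how such an agent could be assembled with agno, mirroring the pattern used elsewhere in this repo. The `agno.models.xai` import path and the `grok-beta` model id are assumptions; xai_finance_agent.py is the authoritative version:

```python
from agno.agent import Agent
from agno.models.xai import xAI                      # assumed import path for agno's xAI wrapper
from agno.tools.yfinance import YFinanceTools
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.playground import Playground, serve_playground_app

# Grok-powered analyst: stock data via YFinance, extra context via DuckDuckGo search.
finance_agent = Agent(
    name="xAI Finance Agent",
    model=xAI(id="grok-beta"),                       # requires XAI_API_KEY in the environment
    tools=[YFinanceTools(stock_price=True, analyst_recommendations=True), DuckDuckGoTools()],
    instructions=["Always use tables to display financial data"],
    markdown=True,
)

app = Playground(agents=[finance_agent]).get_app()

if __name__ == "__main__":
    serve_playground_app("xai_finance_agent:app", reload=True)
```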
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_github/chat_github.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | from embedchain.pipeline import Pipeline as App
3 | from embedchain.loaders.github import GithubLoader
4 | import streamlit as st
5 | import os
6 |
7 | loader = GithubLoader(
8 | config={
9 | "token":"Your GitHub Token",
10 | }
11 | )
12 |
13 | # Create Streamlit app
14 | st.title("Chat with GitHub Repository 💬")
15 | st.caption("This app allows you to chat with a GitHub Repo using OpenAI API")
16 |
17 | # Get OpenAI API key from user
18 | openai_access_token = st.text_input("OpenAI API Key", type="password")
19 |
20 | # If OpenAI API key is provided, create an instance of App
21 | if openai_access_token:
22 | os.environ["OPENAI_API_KEY"] = openai_access_token
23 | # Create an instance of Embedchain App
24 | app = App()
25 | # Get the GitHub repo from the user
26 | git_repo = st.text_input("Enter the GitHub Repo", type="default")
27 | if git_repo:
28 | # Add the repo to the knowledge base
29 | app.add("repo:" + git_repo + " " + "type:repo", data_type="github", loader=loader)
30 | st.success(f"Added {git_repo} to knowledge base!")
31 | # Ask a question about the Github Repo
32 | prompt = st.text_input("Ask any question about the GitHub Repo")
33 | # Chat with the GitHub Repo
34 | if prompt:
35 | answer = app.chat(prompt)
36 | st.write(answer)
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_chess_agent/README.md:
--------------------------------------------------------------------------------
1 | # ♜ Agent White vs Agent Black: Chess Game
2 |
3 | An advanced chess game system where two AI agents play chess against each other using AutoGen in a Streamlit app. It is built with robust move validation and game state management.
4 |
5 | ## Features
6 |
7 | ### Multi-Agent Architecture
8 | - Player White: OpenAI-powered strategic decision maker
9 | - Player Black: OpenAI-powered tactical opponent
10 | - Board Proxy: Validation agent for move legality and game state
11 |
12 | ### Safety & Validation
13 | - Robust move verification system
14 | - Illegal move prevention
15 | - Real-time board state monitoring
16 | - Secure game progression control
17 |
18 | ### Strategic Gameplay
19 | - AI-powered position evaluation
20 | - Deep tactical analysis
21 | - Dynamic strategy adaptation
22 | - Complete chess ruleset implementation
23 |
24 |
25 | ### How to get Started?
26 |
27 | 1. Clone the GitHub repository
28 |
29 | ```bash
30 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
31 | cd awesome-llm-apps/ai_agent_tutorials/ai_chess_agent
32 | ```
33 | 2. Install the required dependencies:
34 |
35 | ```bash
36 | pip install -r requirements.txt
37 | ```
38 | 3. Get your OpenAI API Key
39 |
40 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
41 |
42 | 4. Run the Streamlit App
43 | ```bash
44 | streamlit run ai_chess_agent.py
45 | ```
46 |
47 |
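The board-proxy role above comes down to validating every proposed move against the real game state before it is applied. A small illustration with python-chess, independent of the AutoGen wiring in ai_chess_agent.py (the function name and move strings are illustrative):

```python
import chess

board = chess.Board()

def apply_if_legal(move_uci: str) -> str:
    """Validate a UCI move (e.g. 'e2e4') against the current position before playing it."""
    try:
        move = chess.Move.from_uci(move_uci)
    except ValueError:
        return f"Rejected: '{move_uci}' is not valid UCI notation."
    if move not in board.legal_moves:
        return f"Rejected: {move_uci} is illegal in the current position."
    board.push(move)
    return f"Played {move_uci}. Game over: {board.is_game_over()}"

print(apply_if_legal("e2e4"))   # legal opening move
print(apply_if_legal("e7e5"))   # legal reply
print(apply_if_legal("e1e8"))   # rejected: illegal in the current position
```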
--------------------------------------------------------------------------------
/rag_tutorials/local_rag_agent/local_rag_agent.py:
--------------------------------------------------------------------------------
1 | # Import necessary libraries
2 | from agno.agent import Agent
3 | from agno.models.ollama import Ollama
4 | from agno.knowledge.pdf_url import PDFUrlKnowledgeBase
5 | from agno.vectordb.qdrant import Qdrant
6 | from agno.embedder.ollama import OllamaEmbedder
7 | from agno.playground import Playground, serve_playground_app
8 |
9 | # Define the collection name for the vector database
10 | collection_name = "thai-recipe-index"
11 |
12 | # Set up Qdrant as the vector database with the embedder
13 | vector_db = Qdrant(
14 | collection=collection_name,
15 | url="http://localhost:6333/",
16 | embedder=OllamaEmbedder()
17 | )
18 |
19 | # Define the knowledge base with the specified PDF URL
20 | knowledge_base = PDFUrlKnowledgeBase(
21 | urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
22 | vector_db=vector_db,
23 | )
24 |
25 | # Load the knowledge base, comment out after the first run to avoid reloading
26 | knowledge_base.load(recreate=True, upsert=True)
27 |
28 | # Create the Agent using Ollama's llama3.2 model and the knowledge base
29 | agent = Agent(
30 | name="Local RAG Agent",
31 | model=Ollama(id="llama3.2"),
32 | knowledge=knowledge_base,
33 | )
34 |
35 | # UI for RAG agent
36 | app = Playground(agents=[agent]).get_app()
37 |
38 | # Run the Playground app
39 | if __name__ == "__main__":
40 | serve_playground_app("local_rag_agent:app", reload=True)
41 |
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/ai_travel_agent_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧳 AI Travel Agent with Memory
2 | This Streamlit app implements an AI-powered travel assistant that remembers user preferences and past interactions. It utilizes OpenAI's GPT-4o for generating responses and Mem0 with Qdrant for maintaining conversation history.
3 |
4 | ### Features
5 | - Chat-based interface for interacting with an AI travel assistant
6 | - Persistent memory of user preferences and past conversations
7 | - Utilizes OpenAI's GPT-4o model for intelligent responses
8 | - Implements memory storage and retrieval using Mem0 and Qdrant
9 | - User-specific conversation history and memory viewing
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/llm_apps_with_memory_tutorials/ai_travel_agent_memory
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Ensure Qdrant is running:
26 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
27 |
28 | ```bash
29 | docker pull qdrant/qdrant
30 |
31 | docker run -p 6333:6333 -p 6334:6334 \
32 | -v $(pwd)/qdrant_storage:/qdrant/storage:z \
33 | qdrant/qdrant
34 | ```
35 |
36 | 4. Run the Streamlit App
37 | ```bash
38 | streamlit run travel_agent_memory.py
39 | ```
40 |
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/local_chatgpt_with_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧠 Local ChatGPT using Llama 3.1 with Personal Memory
2 | This Streamlit application implements a fully local ChatGPT-like experience using Llama 3.1, featuring personalized memory storage for each user. All components, including the language model, embeddings, and vector store, run locally without requiring external API keys.
3 |
4 | ### Features
5 | - Fully local implementation with no external API dependencies
6 | - Powered by Llama 3.1 via Ollama
7 | - Personal memory space for each user
8 | - Local embedding generation using Nomic Embed
9 | - Vector storage with Qdrant
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/llm_apps_with_memory_tutorials/local_chatgpt_with_memory
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 |
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Install and start [Qdrant](https://qdrant.tech/documentation/guides/installation/) vector database locally
27 |
28 | ```bash
29 | docker pull qdrant/qdrant
30 | docker run -p 6333:6333 qdrant/qdrant
31 | ```
32 |
33 | 4. Install [Ollama](https://ollama.com/download) and pull Llama 3.1
34 | ```bash
35 | ollama pull llama3.1
36 | ```
37 |
38 | 5. Run the Streamlit App
39 | ```bash
40 | streamlit run local_chatgpt_memory.py
41 | ```
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_finance_agent_team/finance_agent_team.py:
--------------------------------------------------------------------------------
1 | from agno.agent import Agent
2 | from agno.models.openai import OpenAIChat
3 | from agno.storage.agent.sqlite import SqliteAgentStorage
4 | from agno.tools.duckduckgo import DuckDuckGoTools
5 | from agno.tools.yfinance import YFinanceTools
6 | from agno.playground import Playground, serve_playground_app
7 |
8 | web_agent = Agent(
9 | name="Web Agent",
10 | role="Search the web for information",
11 | model=OpenAIChat(id="gpt-4o"),
12 | tools=[DuckDuckGoTools()],
13 | storage=SqliteAgentStorage(table_name="web_agent", db_file="agents.db"),
14 | add_history_to_messages=True,
15 | markdown=True,
16 | )
17 |
18 | finance_agent = Agent(
19 | name="Finance Agent",
20 | role="Get financial data",
21 | model=OpenAIChat(id="gpt-4o"),
22 | tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True, company_news=True)],
23 | instructions=["Always use tables to display data"],
24 | storage=SqliteAgentStorage(table_name="finance_agent", db_file="agents.db"),
25 | add_history_to_messages=True,
26 | markdown=True,
27 | )
28 |
29 | agent_team = Agent(
30 | team=[web_agent, finance_agent],
31 | name="Agent Team (Web+Finance)",
32 | model=OpenAIChat(id="gpt-4o"),
33 | show_tool_calls=True,
34 | markdown=True,
35 | )
36 |
37 | app = Playground(agents=[agent_team]).get_app()
38 |
39 | if __name__ == "__main__":
40 | serve_playground_app("finance_agent_team:app", reload=True)
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_meeting_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 📝 AI Meeting Preparation Agent
2 | This Streamlit application leverages multiple AI agents to create comprehensive meeting preparation materials. It uses OpenAI's GPT-4, Anthropic's Claude, and the Serper API for web searches to generate context analysis, industry insights, meeting strategies, and executive briefings.
3 |
4 | ### Features
5 |
6 | - Multi-agent AI system for thorough meeting preparation
7 | - Utilizes OpenAI's GPT-4 and Anthropic's Claude models
8 | - Web search capability using Serper API
9 | - Generates detailed context analysis, industry insights, meeting strategies, and executive briefings
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/ai_agent_tutorials/ai_meeting_agent
18 | ```
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 | 3. Get your Anthropic API Key
25 |
26 | - Sign up for an [Anthropic account](https://console.anthropic.com) (or the LLM provider of your choice) and obtain your API key.
27 |
28 | 4. Get your Serper API Key
29 |
30 | - Sign up for a [Serper API account](https://serper.dev/) and obtain your API key.
31 |
32 | 5. Get your OpenAI API Key
33 |
34 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
35 |
36 | 6. Run the Streamlit App
37 | ```bash
38 | streamlit run meeting_agent.py
39 | ```
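To make the multi-agent flow concrete, here is a stripped-down sketch of two cooperating agents (a context analyst feeding an executive briefer). It uses CrewAI purely as an illustration; the roles, task text, and framework choice are assumptions, and meeting_agent.py remains the authoritative implementation:

```python
from crewai import Agent, Task, Crew

# Illustrative two-agent pipeline; the default LLM needs OPENAI_API_KEY in the environment.
analyst = Agent(
    role="Meeting Context Analyst",
    goal="Analyze the company and attendees before the meeting",
    backstory="You research companies and summarize what matters for an upcoming meeting.",
)
briefer = Agent(
    role="Executive Briefer",
    goal="Turn the analysis into a one-page executive briefing",
    backstory="You write crisp briefings for busy executives.",
)

analysis = Task(
    description="Analyze the context for a meeting with Acme Corp about a data platform deal.",
    expected_output="A short context analysis with key facts and open questions.",
    agent=analyst,
)
brief = Task(
    description="Write an executive briefing based on the context analysis.",
    expected_output="A one-page briefing with agenda, talking points, and risks.",
    agent=briefer,
)

result = Crew(agents=[analyst, briefer], tasks=[analysis, brief]).kickoff()
print(result)
```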
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/ai_arxiv_agent_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 📚 AI Research Agent with Memory
2 | This Streamlit app implements an AI-powered research assistant that helps users search for academic papers on arXiv while maintaining a memory of user interests and past interactions. It utilizes OpenAI's GPT-4o-mini model for processing search results, MultiOn for web browsing, and Mem0 with Qdrant for maintaining user context.
3 |
4 | ### Features
5 |
6 | - Search interface for querying arXiv papers
7 | - AI-powered processing of search results for improved readability
8 | - Persistent memory of user interests and past searches
9 | - Utilizes OpenAI's GPT-4o-mini model for intelligent processing
10 | - Implements memory storage and retrieval using Mem0 and Qdrant
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/llm_apps_with_memory_tutorials/ai_arxiv_agent_memory
18 | ```
19 |
20 | 2. Install the required dependencies:
21 |
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Ensure Qdrant is running:
27 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
28 |
29 | ```bash
30 | docker pull qdrant/qdrant
31 |
32 | docker run -p 6333:6333 -p 6334:6334 \
33 | -v $(pwd)/qdrant_storage:/qdrant/storage:z \
34 | qdrant/qdrant
35 | ```
36 |
37 | 4. Run the Streamlit App
38 | ```bash
39 | streamlit run ai_arxiv_agent_memory.py
40 | ```
41 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/local_chatgpt_clone/chatgpt_clone_llama3.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from openai import OpenAI
3 |
4 | # Set up the Streamlit App
5 | st.title("ChatGPT Clone using Llama-3 🦙")
6 | st.caption("Chat with locally hosted Llama-3 using the LM Studio 💯")
7 |
8 | # Point to the local server setup using LM Studio
9 | client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
10 |
11 | # Initialize the chat history
12 | if "messages" not in st.session_state:
13 | st.session_state.messages = []
14 |
15 | # Display the chat history
16 | for message in st.session_state.messages:
17 | with st.chat_message(message["role"]):
18 | st.markdown(message["content"])
19 |
20 | # Accept user input
21 | if prompt := st.chat_input("What is up?"):
22 | # Add user message to chat history
23 | st.session_state.messages.append({"role": "user", "content": prompt})
24 | # Display user message in chat message container
25 | with st.chat_message("user"):
26 | st.markdown(prompt)
27 | # Generate response
28 | response = client.chat.completions.create(
29 | model="lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF",
30 | messages=st.session_state.messages, temperature=0.7
31 | )
32 | # Add assistant response to chat history
33 | st.session_state.messages.append({"role": "assistant", "content": response.choices[0].message.content})
34 | # Display assistant response in chat message container
35 | with st.chat_message("assistant"):
36 | st.markdown(response.choices[0].message.content)
37 |
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_gmail/README.md:
--------------------------------------------------------------------------------
1 | ## 📨 Chat with Gmail Inbox
2 |
3 | LLM app with RAG to chat with Gmail in just 30 lines of Python Code. The app uses Retrieval Augmented Generation (RAG) to provide accurate answers to questions based on the content of your Gmail Inbox.
4 |
5 | ### Features
6 |
7 | - Connect to your Gmail Inbox
8 | - Ask questions about the content of your emails
9 | - Get accurate answers using RAG and the selected LLM
10 |
11 | ### Installation
12 |
13 | 1. Clone the repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/chat_with_X_tutorials/chat_with_gmail
18 | ```
19 | 2. Install the required dependencies
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Set up your Google Cloud project and enable the Gmail API:
26 |
27 | - Go to the [Google Cloud Console](https://console.cloud.google.com/) and create a new project.
28 | - Navigate to "APIs & Services > OAuth consent screen" and configure the OAuth consent screen.
29 | - Publish the OAuth consent screen by providing the necessary app information.
30 | - Enable the Gmail API and create OAuth client ID credentials.
31 | - Download the credentials in JSON format and save them as `credentials.json` in your working directory.
32 |
33 | 4. Get your OpenAI API Key
34 |
35 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
36 |
37 | 5. Run the Streamlit App
38 |
39 | ```bash
40 | streamlit run chat_gmail.py
41 | ```
42 |
43 |
44 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/local_news_agent_openai_swarm/README.md:
--------------------------------------------------------------------------------
1 | ## 📰 Multi-agent AI news assistant
2 | This Streamlit application implements a sophisticated news processing pipeline using multiple specialized AI agents to search, synthesize, and summarize news articles. It leverages the Llama 3.2 model via Ollama and DuckDuckGo search to provide comprehensive news analysis.
3 |
4 |
5 | ### Features
6 | - Multi-agent architecture with specialized roles:
7 | - News Searcher: Finds recent news articles
8 | - News Synthesizer: Analyzes and combines information
9 | - News Summarizer: Creates concise, professional summaries
10 |
11 | - Real-time news search using DuckDuckGo
12 | - AP/Reuters-style summary generation
13 | - User-friendly Streamlit interface
14 |
15 |
16 | ### How to get Started?
17 |
18 | 1. Clone the GitHub repository
19 | ```bash
20 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
21 | cd awesome-llm-apps/ai_agent_tutorials/local_news_agent_openai_swarm
22 | ```
23 |
24 | 2. Install the required dependencies:
25 |
26 | ```bash
27 | pip install -r requirements.txt
28 | ```
29 |
30 | 3. Pull and Run Llama 3.2 using Ollama:
31 |
32 | ```bash
33 | # Pull the model
34 | ollama pull llama3.2
35 |
36 | # Verify installation
37 | ollama list
38 |
39 | # Run the model (optional test)
40 | ollama run llama3.2
41 | ```
42 |
43 | 4. Create a .env file with your configurations:
44 | ```bash
45 | OPENAI_BASE_URL=http://localhost:11434/v1
46 | OPENAI_API_KEY=fake-key
47 | ```
48 | 5. Run the Streamlit app
49 | ```bash
50 | streamlit run news_agent.py
51 | ```
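For reference, a minimal sketch of the searcher stage, assuming the openai-swarm and duckduckgo-search packages and the `.env` above pointing the OpenAI client at Ollama. The full news_agent.py chains a synthesizer and summarizer in the same fashion:

```python
from swarm import Swarm, Agent
from duckduckgo_search import DDGS

def search_news(topic: str) -> str:
    """Return recent DuckDuckGo results for a topic as plain text."""
    results = DDGS().text(f"{topic} news", max_results=5)
    return "\n".join(f"{r['title']}: {r['body']} ({r['href']})" for r in results)

news_searcher = Agent(
    name="News Searcher",
    model="llama3.2",  # served by Ollama through OPENAI_BASE_URL from the .env file
    instructions="Find recent news on the given topic with the search_news tool, then list the key stories.",
    functions=[search_news],
)

client = Swarm()
response = client.run(
    agent=news_searcher,
    messages=[{"role": "user", "content": "AI chip market"}],
)
print(response.messages[-1]["content"])
```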
--------------------------------------------------------------------------------
/ai_agent_tutorials/multi_agent_researcher/research_agent.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from agno.agent import Agent
4 | from agno.tools.hackernews import HackerNewsTools
5 | from agno.models.openai import OpenAIChat
6 |
7 | # Set up the Streamlit app
8 | st.title("Multi-Agent AI Researcher 🔍🤖")
9 | st.caption("This app allows you to research top stories and users on HackerNews and write blogs, reports and social posts.")
10 |
11 | # Get OpenAI API key from user
12 | openai_api_key = st.text_input("OpenAI API Key", type="password")
13 |
14 | if openai_api_key:
15 | # Create instances of the Assistant
16 | story_researcher = Agent(
17 | name="HackerNews Story Researcher",
18 | role="Researches hackernews stories and users.",
19 | tools=[HackerNewsTools()],
20 | )
21 |
22 | user_researcher = Agent(
23 | name="HackerNews User Researcher",
24 | role="Reads articles from URLs.",
25 | tools=[HackerNewsTools()],
26 | )
27 |
28 | hn_assistant = Agent(
29 | name="Hackernews Team",
30 | team=[story_researcher, user_researcher],
31 | model=OpenAIChat(
32 | id="gpt-4o",
33 | max_tokens=1024,
34 | temperature=0.5,
35 | api_key=openai_api_key
36 | )
37 | )
38 |
39 | # Input field for the report query
40 | query = st.text_input("Enter your report query")
41 |
42 | if query:
43 | # Get the response from the assistant
44 | response = hn_assistant.run(query, stream=False)
45 | st.write(response.content)
--------------------------------------------------------------------------------
/rag_tutorials/autonomous_rag/README.md:
--------------------------------------------------------------------------------
1 | ## 🤖 AutoRAG: Autonomous RAG with GPT-4o and Vector Database
2 | This Streamlit application implements an Autonomous Retrieval-Augmented Generation (RAG) system using OpenAI's GPT-4o model and PgVector database. It allows users to upload PDF documents, add them to a knowledge base, and query the AI assistant with context from both the knowledge base and web searches.
3 |
4 |
5 | ### Features
6 | - Chat interface for interacting with the AI assistant
7 | - PDF document upload and processing
8 | - Knowledge base integration using PostgreSQL and Pgvector
9 | - Web search capability using DuckDuckGo
10 | - Persistent storage of assistant data and conversations
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/rag_tutorials/autonomous_rag
18 | ```
19 |
20 | 2. Install the required dependencies:
21 |
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Ensure PgVector Database is running:
27 | The app expects PgVector to be running on [localhost:5532](http://localhost:5532/). Adjust the configuration in the code if your setup is different.
28 |
29 | ```bash
30 | docker run -d \
31 | -e POSTGRES_DB=ai \
32 | -e POSTGRES_USER=ai \
33 | -e POSTGRES_PASSWORD=ai \
34 | -e PGDATA=/var/lib/postgresql/data/pgdata \
35 | -v pgvolume:/var/lib/postgresql/data \
36 | -p 5532:5432 \
37 | --name pgvector \
38 | phidata/pgvector:16
39 | ```
40 |
41 | 4. Run the Streamlit App
42 | ```bash
43 | streamlit run autorag.py
44 | ```
45 |
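Given the container flags above (user, password, and database all `ai`, published on host port 5532), the connection URL the app points at has the form below. The snippet is only a connectivity check, assuming SQLAlchemy and the psycopg driver are installed:

```python
from sqlalchemy import create_engine, text

# Matches the docker run flags above: POSTGRES_USER=ai, POSTGRES_PASSWORD=ai,
# POSTGRES_DB=ai, container port 5432 published on host port 5532.
db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

engine = create_engine(db_url)
with engine.connect() as conn:
    print(conn.execute(text("SELECT version()")).scalar())
```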
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_gmail/chat_gmail.py:
--------------------------------------------------------------------------------
1 | import tempfile
2 | import streamlit as st
3 | from embedchain import App
4 |
5 | # Define the embedchain_bot function
6 | def embedchain_bot(db_path, api_key):
7 | return App.from_config(
8 | config={
9 | "llm": {"provider": "openai", "config": {"model": "gpt-4-turbo", "temperature": 0.5, "api_key": api_key}},
10 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
11 | "embedder": {"provider": "openai", "config": {"api_key": api_key}},
12 | }
13 | )
14 |
15 | # Create Streamlit app
16 | st.title("Chat with your Gmail Inbox 📧")
17 | st.caption("This app allows you to chat with your Gmail inbox using OpenAI API")
18 |
19 | # Get the OpenAI API key from the user
20 | openai_access_token = st.text_input("Enter your OpenAI API Key", type="password")
21 |
22 | # Set the Gmail filter statically
23 | gmail_filter = "to: me label:inbox"
24 |
25 | # Add the Gmail data to the knowledge base if the OpenAI API key is provided
26 | if openai_access_token:
27 | # Create a temporary directory to store the database
28 | db_path = tempfile.mkdtemp()
29 | # Create an instance of Embedchain App
30 | app = embedchain_bot(db_path, openai_access_token)
31 | app.add(gmail_filter, data_type="gmail")
32 |     st.success("Added emails from Inbox to the knowledge base!")
33 |
34 | # Ask a question about the emails
35 | prompt = st.text_input("Ask any question about your emails")
36 |
37 | # Chat with the emails
38 | if prompt:
39 | answer = app.query(prompt)
40 | st.write(answer)
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_finance_agent_team/README.md:
--------------------------------------------------------------------------------
1 | ## 💲 AI Finance Agent Team with Web Access
2 | This script demonstrates how to build a team of AI agents that work together as a financial analyst using GPT-4o in just 20 lines of Python code. The system combines web search capabilities with financial data analysis tools to provide comprehensive financial insights.
3 |
4 | ### Features
5 | - Multi-agent system with specialized roles:
6 | - Web Agent for general internet research
7 | - Finance Agent for detailed financial analysis
8 | - Team Agent for coordinating between agents
9 | - Real-time financial data access through YFinance
10 | - Web search capabilities using DuckDuckGo
11 | - Persistent storage of agent interactions using SQLite
12 |
13 | ### How to get Started?
14 |
15 | 1. Clone the GitHub repository
16 | ```bash
17 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
18 | cd awesome-llm-apps/ai_agent_tutorials/ai_finance_agent_team
19 | ```
20 |
21 | 2. Install the required dependencies:
22 |
23 | ```bash
24 | pip install -r requirements.txt
25 | ```
26 |
27 | 3. Get your OpenAI API Key
28 |
29 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
30 | - Set your OpenAI API key as an environment variable:
31 | ```bash
32 | export OPENAI_API_KEY='your-api-key-here'
33 | ```
34 |
35 | 4. Run the team of AI Agents
36 | ```bash
37 | python3 finance_agent_team.py
38 | ```
39 |
40 | 5. Open your web browser and navigate to the URL provided in the console output to interact with the team of AI agents through the playground interface.
41 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/cursor_ai_experiments/ai_web_scrapper.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from scrapegraphai.graphs import SmartScraperGraph
3 |
4 | # Streamlit app title
5 | st.title("AI Web Scraper")
6 |
7 | # Input fields for user prompt and source URL
8 | prompt = st.text_input("Enter the information you want to extract:")
9 | source_url = st.text_input("Enter the source URL:")
10 |
11 | # Input field for OpenAI API key
12 | api_key = st.text_input("Enter your OpenAI API key:", type="password")
13 |
14 | # Configuration for the scraping pipeline
15 | graph_config = {
16 | "llm": {
17 | "api_key": api_key,
18 | "model": "openai/gpt-4o-mini",
19 | },
20 | "verbose": True,
21 | "headless": False,
22 | }
23 |
24 | # Button to start the scraping process
25 | if st.button("Scrape"):
26 | if prompt and source_url and api_key:
27 | # Create the SmartScraperGraph instance
28 | smart_scraper_graph = SmartScraperGraph(
29 | prompt=prompt,
30 | source=source_url,
31 | config=graph_config
32 | )
33 |
34 | # Run the pipeline
35 | result = smart_scraper_graph.run()
36 |
37 | # Display the result
38 | st.write(result)
39 | else:
40 | st.error("Please provide all the required inputs.")
41 |
42 | # Instructions for the user
43 | st.markdown("""
44 | ### Instructions
45 | 1. Enter the information you want to extract in the first input box.
46 | 2. Enter the source URL from which you want to extract the information.
47 | 3. Enter your OpenAI API key.
48 | 4. Click on the "Scrape" button to start the scraping process.
49 | """)
--------------------------------------------------------------------------------
/advanced_tools_frameworks/cursor_ai_experiments/chatgpt_clone_llama3.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from ollama import Client
3 |
4 | # Initialize Ollama client
5 | client = Client()
6 |
7 | # Set up Streamlit page
8 | st.set_page_config(page_title="Local ChatGPT Clone", page_icon="🤖", layout="wide")
9 | st.title("🤖 Local ChatGPT Clone")
10 |
11 | # Initialize chat history
12 | if "messages" not in st.session_state:
13 | st.session_state.messages = []
14 |
15 | # Display chat messages
16 | for message in st.session_state.messages:
17 | with st.chat_message(message["role"]):
18 | st.markdown(message["content"])
19 |
20 | # User input
21 | if prompt := st.chat_input("What's on your mind?"):
22 | st.session_state.messages.append({"role": "user", "content": prompt})
23 | with st.chat_message("user"):
24 | st.markdown(prompt)
25 |
26 | # Generate AI response
27 | with st.chat_message("assistant"):
28 | message_placeholder = st.empty()
29 | full_response = ""
30 | for response in client.chat(model="llama3.1:latest", messages=st.session_state.messages, stream=True):
31 | full_response += response['message']['content']
32 | message_placeholder.markdown(full_response + "▌")
33 | message_placeholder.markdown(full_response)
34 | st.session_state.messages.append({"role": "assistant", "content": full_response})
35 |
36 | # Add a sidebar with information
37 | st.sidebar.title("About")
38 | st.sidebar.info("This is a local ChatGPT clone using Ollama's llama3.1:latest model and Streamlit.")
39 | st.sidebar.markdown("---")
40 | st.sidebar.markdown("Made with ❤️ by Your Name")
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_pdf/chat_pdf_llama3.py:
--------------------------------------------------------------------------------
1 | # Import necessary libraries
2 | import os
3 | import tempfile
4 | import streamlit as st
5 | from embedchain import App
6 |
7 | # Define the embedchain_bot function
8 | def embedchain_bot(db_path):
9 | return App.from_config(
10 | config={
11 | "llm": {"provider": "ollama", "config": {"model": "llama3:instruct", "max_tokens": 250, "temperature": 0.5, "stream": True, "base_url": 'http://localhost:11434'}},
12 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
13 | "embedder": {"provider": "ollama", "config": {"model": "llama3:instruct", "base_url": 'http://localhost:11434'}},
14 | }
15 | )
16 |
17 | st.title("Chat with PDF")
18 | st.caption("This app allows you to chat with a PDF using Llama3 running locally wiht Ollama!")
19 |
20 | # Create a temporary directory to store the PDF file
21 | db_path = tempfile.mkdtemp()
22 | # Create an instance of the embedchain App
23 | app = embedchain_bot(db_path)
24 |
25 | # Upload a PDF file
26 | pdf_file = st.file_uploader("Upload a PDF file", type="pdf")
27 |
28 | # If a PDF file is uploaded, add it to the knowledge base
29 | if pdf_file:
30 | with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
31 | f.write(pdf_file.getvalue())
32 | app.add(f.name, data_type="pdf_file")
33 | os.remove(f.name)
34 | st.success(f"Added {pdf_file.name} to knowledge base!")
35 |
36 | # Ask a question about the PDF
37 | prompt = st.text_input("Ask a question about the PDF")
38 | # Display the answer
39 | if prompt:
40 | answer = app.chat(prompt)
41 | st.write(answer)
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_substack/chat_substack.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from embedchain import App
3 | import tempfile
4 |
5 | # Define the embedchain_bot function
6 | def embedchain_bot(db_path, api_key):
7 | return App.from_config(
8 | config={
9 | "llm": {"provider": "openai", "config": {"model": "gpt-4-turbo", "temperature": 0.5, "api_key": api_key}},
10 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
11 | "embedder": {"provider": "openai", "config": {"api_key": api_key}},
12 | }
13 | )
14 |
15 | st.title("Chat with Substack Newsletter 📝")
16 | st.caption("This app allows you to chat with Substack newsletter using OpenAI API")
17 |
18 | # Get OpenAI API key from user
19 | openai_access_token = st.text_input("OpenAI API Key", type="password")
20 |
21 | if openai_access_token:
22 | # Create a temporary directory to store the database
23 | db_path = tempfile.mkdtemp()
24 | # Create an instance of Embedchain App
25 | app = embedchain_bot(db_path, openai_access_token)
26 |
27 | # Get the Substack blog URL from the user
28 | substack_url = st.text_input("Enter Substack Newsletter URL", type="default")
29 |
30 | if substack_url:
31 | # Add the Substack blog to the knowledge base
32 | app.add(substack_url, data_type='substack')
33 | st.success(f"Added {substack_url} to knowledge base!")
34 |
35 | # Ask a question about the Substack blog
36 | query = st.text_input("Ask any question about the substack newsletter!")
37 |
38 | # Query the Substack blog
39 | if query:
40 | result = app.query(query)
41 | st.write(result)
42 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_travel_agent/README.MD:
--------------------------------------------------------------------------------
1 | ## 🛫 AI Travel Agent
2 | This Streamlit app is an AI-powered travel agent that generates personalized travel itineraries using OpenAI GPT-4o. It automates the process of researching, planning, and organizing your dream vacation, allowing you to explore exciting destinations with ease.
3 |
4 | ### Features
5 | - Research and discover exciting travel destinations, activities, and accommodations
6 | - Customize your itinerary based on the number of days you want to travel
7 | - Utilize the power of GPT-4o to generate intelligent and personalized travel plans
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/ai_agent_tutorials/ai_travel_agent
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Get your SerpAPI Key
27 |
28 | - Sign up for a [SerpAPI account](https://serpapi.com/) and obtain your API key.
29 |
30 | 5. Run the Streamlit App
31 | ```bash
32 | streamlit run travel_agent.py
33 | ```
34 |
35 | ### How it Works?
36 |
37 | The AI Travel Agent has two main components:
38 | - Researcher: Responsible for generating search terms based on the user's destination and travel duration, and searching the web for relevant activities and accommodations using SerpAPI.
39 | - Planner: Takes the research results and user preferences to generate a personalized draft itinerary that includes suggested activiti
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/llama3_stateful_chat/local_llama3_chat.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from openai import OpenAI
3 |
4 | # Set up the Streamlit App
5 | st.title("Local ChatGPT with Memory 🦙")
6 | st.caption("Chat with locally hosted memory-enabled Llama-3 using the LM Studio 💯")
7 |
8 | # Point to the local server setup using LM Studio
9 | client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
10 |
11 | # Initialize the chat history
12 | if "messages" not in st.session_state:
13 | st.session_state.messages = []
14 |
15 | # Display the chat history
16 | for message in st.session_state.messages:
17 | with st.chat_message(message["role"]):
18 | st.markdown(message["content"])
19 |
20 | # Accept user input
21 | if prompt := st.chat_input("What is up?"):
22 | st.session_state.messages.append({"role": "system", "content": "When the input starts with /add, don't follow up with a prompt."})
23 | # Add user message to chat history
24 | st.session_state.messages.append({"role": "user", "content": prompt})
25 | # Display user message in chat message container
26 | with st.chat_message("user"):
27 | st.markdown(prompt)
28 | # Generate response
29 | response = client.chat.completions.create(
30 | model="lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF",
31 | messages=st.session_state.messages, temperature=0.7
32 | )
33 | # Add assistant response to chat history
34 | st.session_state.messages.append({"role": "assistant", "content": response.choices[0].message.content})
35 | # Display assistant response in chat message container
36 | with st.chat_message("assistant"):
37 | st.markdown(response.choices[0].message.content)
38 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_meme_generator_agent_browseruse/README.md:
--------------------------------------------------------------------------------
1 | # 🥸 AI Meme Generator Agent - Browser Use
2 |
3 | The AI Meme Generator Agent is a powerful browser automation tool that creates memes using AI agents. This app combines multi-LLM capabilities with automated browser interactions to generate memes based on text prompts through direct website manipulation.
4 |
5 | ## Features
6 |
7 | - **Multi-LLM Support**
8 | - Claude 3.5 Sonnet (Anthropic)
9 | - GPT-4o (OpenAI)
10 | - Deepseek v3 (Deepseek)
11 | - Automatic model switching with API key validation
12 |
13 | - **Browser Automation**:
14 | - Direct interaction with imgflip.com meme templates
15 | - Automated search for relevant meme formats
16 | - Dynamic text insertion for top/bottom captions
17 | - Image link extraction from generated memes
18 |
19 | - **Smart Generation Workflow**:
20 | - Action verb extraction from prompts
21 | - Metaphorical template matching
22 | - Multi-step quality validation
23 | - Automatic retry mechanism for failed generations
24 |
25 | - **User-Friendly Interface**:
26 | - Model configuration sidebar
27 | - API key management
28 | - Direct meme preview with clickable links
29 | - Responsive error handling
30 |
31 |
32 | API keys required:
33 | - **Anthropic** (for Claude)
34 | - **Deepseek**
35 | - **OpenAI** (for GPT-4o)
36 |
37 | ## How to Run
38 |
39 | 1. **Clone the Repository**:
40 | ```bash
41 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
42 | cd awesome-llm-apps/ai_agent_tutorials/ai_meme_generator_agent_browseruse
43 | ```
44 | 2. **Install the dependencies**:
45 | ```bash
46 | pip install -r requirements.txt
47 | ```
48 | 3. **Run the Streamlit app**:
49 | ```bash
50 | streamlit run ai_meme_generator.py
51 | ```
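A minimal sketch of the browser-automation core, assuming the browser-use package's `Agent(task=..., llm=...)` API and an OpenAI key in the environment; ai_meme_generator.py layers model selection, validation, and retries on top of this:

```python
import asyncio

from browser_use import Agent
from langchain_openai import ChatOpenAI

async def main() -> None:
    # Describe the whole meme-creation job as a single natural-language task.
    agent = Agent(
        task=(
            "Go to imgflip.com, pick a meme template that fits the idea "
            "'debugging in production', add top and bottom captions, and return the image link."
        ),
        llm=ChatOpenAI(model="gpt-4o"),
    )
    result = await agent.run()
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```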
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_data_visualisation_agent/README.md:
--------------------------------------------------------------------------------
1 | # 📊 AI Data Visualization Agent
2 | A Streamlit application that acts as your personal data visualization expert, powered by LLMs. Simply upload your dataset and ask questions in natural language - the AI agent will analyze your data, generate appropriate visualizations, and provide insights through a combination of charts, statistics, and explanations.
3 |
4 | ## Features
5 | #### Natural Language Data Analysis
6 | - Ask questions about your data in plain English
7 | - Get instant visualizations and statistical analysis
8 | - Receive explanations of findings and insights
9 | - Interactive follow-up questioning
10 |
11 | #### Intelligent Visualization Selection
12 | - Automatic choice of appropriate chart types
13 | - Dynamic visualization generation
14 | - Statistical visualization support
15 | - Custom plot formatting and styling
16 |
17 | #### Multi-Model AI Support
18 | - Meta-Llama 3.1 405B for complex analysis
19 | - DeepSeek V3 for detailed insights
20 | - Qwen 2.5 7B for quick analysis
21 | - Meta-Llama 3.3 70B for advanced queries
22 |
23 | ## How to Run
24 |
25 | Follow the steps below to set up and run the application:
26 | - Before anything else, get a free Together AI API key here: https://api.together.ai/signin
27 | - Get a free E2B API key here: https://e2b.dev/ (docs: https://e2b.dev/docs/legacy/getting-started/api-key)
28 |
29 | 1. **Clone the Repository**
30 | ```bash
31 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
32 | cd awesome-llm-apps/ai_agent_tutorials/ai_data_visualisation_agent
33 | ```
34 | 2. **Install the dependencies**
35 | ```bash
36 | pip install -r requirements.txt
37 | ```
38 | 3. **Run the Streamlit app**
39 | ```bash
40 | streamlit run ai_data_visualisation_agent.py
41 | ```
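The core loop — an LLM writes analysis code, a sandbox executes it — can be sketched roughly as follows. The Together model ID and the `e2b_code_interpreter` `Sandbox.run_code` call are assumptions about the current SDKs; ai_data_visualisation_agent.py handles code extraction, dataset upload, and chart display on top of this:

```python
from together import Together
from e2b_code_interpreter import Sandbox   # assumed current E2B code-interpreter API

# Ask a Together-hosted Llama model to write plotting code for an uploaded CSV.
client = Together()  # reads TOGETHER_API_KEY from the environment
completion = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",   # assumed model ID
    messages=[{
        "role": "user",
        "content": "Write Python using pandas and matplotlib that plots monthly revenue from data.csv.",
    }],
)
generated_code = completion.choices[0].message.content
# NOTE: a real app would strip markdown fences from generated_code before running it.

# Execute the generated code in an isolated E2B sandbox instead of locally.
sandbox = Sandbox()                         # needs E2B_API_KEY in the environment
execution = sandbox.run_code(generated_code)
print(execution.logs)
sandbox.kill()
```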
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_journalist_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🗞️ AI Journalist Agent
2 | This Streamlit app is an AI-powered journalist agent that generates high-quality articles using OpenAI GPT-4o. It automates the process of researching, writing, and editing articles, allowing you to create compelling content on any topic with ease.
3 |
4 | ### Features
5 | - Searches the web for relevant information on a given topic
6 | - Writes well-structured, informative, and engaging articles
7 | - Edits and refines the generated content to meet the high standards of the New York Times
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/ai_agent_tutorials/ai_journalist_agent
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Get your SerpAPI Key
27 |
28 | - Sign up for a [SerpAPI account](https://serpapi.com/) and obtain your API key.
29 |
30 | 5. Run the Streamlit App
31 | ```bash
32 | streamlit run journalist_agent.py
33 | ```
34 |
35 | ### How it Works?
36 |
37 | The AI Journalist Agent utilizes three main components:
38 | - Searcher: Responsible for generating search terms based on the given topic and searching the web for relevant URLs using the SerpAPI.
39 | - Writer: Retrieves the text from the provided URLs using the NewspaperToolkit and writes a high-quality article based on the extracted information.
40 | - Editor: Coordinates the workflow between the Searcher and Writer, and performs final editing and refinement of the generated article.
41 |
42 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/local_llama3.1_tool_use/README.md:
--------------------------------------------------------------------------------
1 | ## 🦙 Local Llama3 Tool Use
2 | This Streamlit app demonstrates function calling with the local Llama3 model using Ollama. It allows users to interact with an AI assistant that can access specific tools based on user selection.
3 |
4 | ### Features
5 | - Utilizes local Llama3 model via Ollama as LLM
6 | - Integrates YFinance for stock data retrieval and SerpAPI for web search capabilities
7 | - Dynamic tool selection through a user-friendly sidebar
8 | - Real-time chat interface with the AI assistant
9 |
10 | ### How to get Started?
11 |
12 | 1. Clone the GitHub repository
13 |
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/advanced_tools_frameworks/local_llama3.1_tool_use
17 | ```
18 | 2. Install the required dependencies:
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 |
24 | 3. Get your SerpAPI Key
25 |
26 | - Obtain a SerpAPI key and export it as an environment variable:
27 | ```bash
28 | export SERPAPI_API_KEY=your_api_key_here
29 | ```
30 |
31 | 4. Run the Streamlit App
32 | ```bash
33 | streamlit run llama3_tool_use.py
34 | ```
35 |
36 | ### How it Works?
37 |
38 | 1. **Tool Selection:** Users can select which tools (YFinance and/or SerpAPI) they want the assistant to use via checkboxes in the sidebar.
39 |
40 | 2. **Assistant Initialization:** The app initializes or updates the assistant based on the selected tools.
41 |
42 | 3. **Chat Interface:** Users can ask questions through a chat input, and the assistant responds using the enabled tools.
43 |
44 | 4. **Real-time Response:** The assistant's response is displayed in real-time, with a typing indicator.
45 |
46 | 5. **Tool Usage Display:** The app shows which tools are currently enabled in the sidebar.
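Under the hood, tool use with Llama 3.1 amounts to passing JSON tool schemas to the model and executing whichever call it requests. A stripped-down sketch with the ollama and yfinance packages (the tool schema and response handling are assumptions about the current ollama client; llama3_tool_use.py adds Streamlit, SerpAPI, and tool selection on top):

```python
import ollama
import yfinance as yf

def get_stock_price(symbol: str) -> float:
    """Return the latest closing price for a ticker symbol."""
    return float(yf.Ticker(symbol).history(period="1d")["Close"].iloc[-1])

tools = [{
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Get the latest stock price for a ticker symbol",
        "parameters": {
            "type": "object",
            "properties": {"symbol": {"type": "string", "description": "Ticker, e.g. NVDA"}},
            "required": ["symbol"],
        },
    },
}]

messages = [{"role": "user", "content": "What is NVIDIA trading at right now?"}]
response = ollama.chat(model="llama3.1", messages=messages, tools=tools)

# If the model requested the tool, run it and feed the result back for a final answer.
for call in response["message"].get("tool_calls") or []:
    result = get_stock_price(**call["function"]["arguments"])
    messages.append(response["message"])
    messages.append({"role": "tool", "content": str(result)})
    response = ollama.chat(model="llama3.1", messages=messages)

print(response["message"]["content"])
```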
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_movie_production_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🎬 AI Movie Production Agent
2 | This Streamlit app is an AI-powered movie production assistant that helps bring your movie ideas to life using Claude 3.5 Sonnet model. It automates the process of script writing and casting, allowing you to create compelling movie concepts with ease.
3 |
4 | ### Features
5 | - Generates script outlines based on your movie idea, genre, and target audience
6 | - Suggests suitable actors for main roles, considering their past performances and current availability
7 | - Provides a concise movie concept overview
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/ai_agent_tutorials/ai_movie_production_agent
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your Anthropic API Key
23 |
24 | - Sign up for an [Anthropic account](https://console.anthropic.com) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Get your SerpAPI Key
27 |
28 | - Sign up for a [SerpAPI account](https://serpapi.com/) and obtain your API key.
29 |
30 | 5. Run the Streamlit App
31 | ```bash
32 | streamlit run movie_production_agent.py
33 | ```
34 |
35 | ### How it Works?
36 |
37 | The AI Movie Production Agent utilizes three main components:
38 | - **ScriptWriter**: Develops a compelling script outline with character descriptions and key plot points based on the given movie idea and genre.
39 | - **CastingDirector**: Suggests suitable actors for the main roles, considering their past performances and current availability.
40 | - **MovieProducer**: Oversees the entire process, coordinating between the ScriptWriter and CastingDirector, and providing a concise movie concept overview.
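41 | 
42 | ### Example: a single ScriptWriter call
43 | 
44 | For orientation, here is a simplified sketch of the ScriptWriter step using the Anthropic SDK directly (the app itself wires these roles together as coordinated agents); the prompt, movie idea, and model version are illustrative.
45 | 
46 | ```python
47 | import anthropic
48 | 
49 | client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set
50 | 
51 | response = client.messages.create(
52 |     model="claude-3-5-sonnet-20241022",
53 |     max_tokens=1024,
54 |     system="You are ScriptWriter. Produce a script outline with character descriptions and key plot points.",
55 |     messages=[{"role": "user", "content": "Movie idea: a heist on a lunar base. Genre: thriller. Audience: adults."}],
56 | )
57 | print(response.content[0].text)
58 | ```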
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_scrapping_ai_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 💻 Web Scraping AI Agent
2 | This Streamlit app allows you to scrape a website using the OpenAI API and the scrapegraphai library. Simply provide your OpenAI API key, enter the URL of the website you want to scrape, and specify what you want the AI agent to extract from it.
3 |
4 | ### Features
5 | - Scrape any website by providing the URL
6 | - Utilize OpenAI's LLMs (GPT-3.5-turbo or GPT-4) for intelligent scraping
7 | - Customize the scraping task by specifying what you want the AI agent to extract
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/advanced_tools_frameworks/web_scrapping_ai_agent
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Run the Streamlit App
27 | ```bash
28 | streamlit run ai_scrapper.py
29 | ```
30 |
31 | ### How it Works?
32 |
33 | - The app prompts you to enter your OpenAI API key, which is used to authenticate and access the OpenAI language models.
34 | - You can select the desired language model (GPT-3.5-turbo or GPT-4) for the scraping task.
35 | - Enter the URL of the website you want to scrape in the provided text input field.
36 | - Specify what you want the AI agent to extract from the website by entering a user prompt.
37 | - The app creates a SmartScraperGraph object using the provided URL, user prompt, and OpenAI configuration.
38 | - The SmartScraperGraph object scrapes the website and extracts the requested information using the specified language model.
39 | - The scraped results are displayed in the app for you to view.
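40 | 
41 | ### Example: SmartScraperGraph in a few lines
42 | 
43 | A minimal scrapegraphai sketch of the flow described above; the URL and prompt are placeholders, and the model identifier may need adjusting to your scrapegraphai version.
44 | 
45 | ```python
46 | from scrapegraphai.graphs import SmartScraperGraph
47 | 
48 | graph_config = {
49 |     "llm": {
50 |         "api_key": "your-openai-api-key",
51 |         "model": "openai/gpt-4o-mini",
52 |     },
53 | }
54 | 
55 | scraper = SmartScraperGraph(
56 |     prompt="List the titles of all blog posts on the page",
57 |     source="https://example.com/blog",
58 |     config=graph_config,
59 | )
60 | print(scraper.run())
61 | ```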
--------------------------------------------------------------------------------
/llm_finetuning_tutorials/llama3.2_finetuning/finetune_llama3.2.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from unsloth import FastLanguageModel
3 | from datasets import load_dataset
4 | from trl import SFTTrainer
5 | from transformers import TrainingArguments
6 | from unsloth.chat_templates import get_chat_template, standardize_sharegpt
7 |
8 | # Load model and tokenizer
9 | model, tokenizer = FastLanguageModel.from_pretrained(
10 | model_name="unsloth/Llama-3.2-3B-Instruct",
11 | max_seq_length=2048, load_in_4bit=True,
12 | )
13 |
14 | # Add LoRA adapters
15 | model = FastLanguageModel.get_peft_model(
16 | model, r=16,
17 | target_modules=[
18 | "q_proj", "k_proj", "v_proj", "o_proj",
19 | "gate_proj", "up_proj", "down_proj"
20 | ],
21 | )
22 |
23 | # Set up chat template and prepare dataset
24 | tokenizer = get_chat_template(tokenizer, chat_template="llama-3.1")
25 | dataset = load_dataset("mlabonne/FineTome-100k", split="train")
26 | dataset = standardize_sharegpt(dataset)
27 | dataset = dataset.map(
28 | lambda examples: {
29 | "text": [
30 | tokenizer.apply_chat_template(convo, tokenize=False)
31 | for convo in examples["conversations"]
32 | ]
33 | },
34 | batched=True
35 | )
36 |
37 | # Set up trainer
38 | trainer = SFTTrainer(
39 | model=model,
40 | train_dataset=dataset,
41 | dataset_text_field="text",
42 | max_seq_length=2048,
43 | args=TrainingArguments(
44 | per_device_train_batch_size=2,
45 | gradient_accumulation_steps=4,
46 | warmup_steps=5,
47 | max_steps=60,
48 | learning_rate=2e-4,
49 | fp16=not torch.cuda.is_bf16_supported(),
50 | bf16=torch.cuda.is_bf16_supported(),
51 | logging_steps=1,
52 | output_dir="outputs",
53 | ),
54 | )
55 |
56 | # Train the model
57 | trainer.train()
58 |
59 | # Save the finetuned model
60 | model.save_pretrained("finetuned_model")
--------------------------------------------------------------------------------
/advanced_tools_frameworks/llm_router_app/llm_router.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
4 | os.environ['TOGETHERAI_API_KEY'] = "your_togetherai_api_key"
5 |
6 | import streamlit as st
7 | from routellm.controller import Controller
8 |
9 | # Initialize RouteLLM client
10 | client = Controller(
11 | routers=["mf"],
12 | strong_model="gpt-4o-mini",
13 | weak_model="together_ai/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
14 | )
15 |
16 | # Set up Streamlit app
17 | st.title("RouteLLM Chat App")
18 |
19 | # Initialize chat history
20 | if "messages" not in st.session_state:
21 | st.session_state.messages = []
22 |
23 | # Display chat messages
24 | for message in st.session_state.messages:
25 | with st.chat_message(message["role"]):
26 | st.markdown(message["content"])
27 | if "model" in message:
28 | st.caption(f"Model used: {message['model']}")
29 |
30 | # Chat input
31 | if prompt := st.chat_input("What is your message?"):
32 | # Add user message to chat history
33 | st.session_state.messages.append({"role": "user", "content": prompt})
34 | with st.chat_message("user"):
35 | st.markdown(prompt)
36 |
37 | # Get RouteLLM response
38 | with st.chat_message("assistant"):
39 | message_placeholder = st.empty()
40 | response = client.chat.completions.create(
41 | model="router-mf-0.11593",
42 | messages=[{"role": "user", "content": prompt}]
43 | )
44 | message_content = response['choices'][0]['message']['content']
45 | model_name = response['model']
46 |
47 | # Display assistant's response
48 | message_placeholder.markdown(message_content)
49 | st.caption(f"Model used: {model_name}")
50 |
51 | # Add assistant's response to chat history
52 | st.session_state.messages.append({"role": "assistant", "content": message_content, "model": model_name})
--------------------------------------------------------------------------------
/advanced_tools_frameworks/web_search_ai_assistant/README.md:
--------------------------------------------------------------------------------
1 | ## 🎯 Generative AI Search Assistant
2 | This Streamlit app combines the power of search engines and LLMs to provide you with pinpointed answers to your queries. By leveraging OpenAI's GPT-4o and the DuckDuckGo search engine, this AI search assistant delivers accurate and concise responses to your questions.
3 |
4 | ### Features
5 | - Get pinpointed answers to your queries
6 | - Utilize DuckDuckGo search engine for web searching
7 | - Use OpenAI GPT-4o for intelligent answer generation
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/advanced_tools_frameworks/web_search_ai_assistant
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Run the Streamlit App
27 | ```bash
28 | streamlit run ai_websearch.py
29 | ```
30 |
31 | ### How It Works?
32 |
33 | - Upon running the app, you will be prompted to enter your OpenAI API key. This key is used to authenticate and access the OpenAI language models.
34 |
35 | - Once you provide a valid API key, an instance of the Assistant class is created. This assistant utilizes the GPT-4o language model from OpenAI and the DuckDuckGo search engine tool.
36 |
37 | - Enter your search query in the provided text input field.
38 |
39 | - The assistant will perform the following steps:
40 | - Conduct a web search using DuckDuckGo based on your query
41 | - Analyze the search results and extract relevant information
42 |    - Generate a concise and targeted answer using the GPT-4o language model
43 |
44 | - The pinpointed answer will be displayed in the app, providing you with the information you need.
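45 | 
46 | ### Example: search-then-answer without the framework
47 | 
48 | A framework-free sketch of the same loop, using the duckduckgo_search package and the OpenAI client directly; the query is illustrative.
49 | 
50 | ```python
51 | from duckduckgo_search import DDGS
52 | from openai import OpenAI
53 | 
54 | client = OpenAI()  # assumes OPENAI_API_KEY is set
55 | query = "What is retrieval-augmented generation?"
56 | 
57 | # 1. Web search with DuckDuckGo
58 | results = DDGS().text(query, max_results=5)
59 | context = "\n".join(f"{r['title']}: {r['body']}" for r in results)
60 | 
61 | # 2. Answer generation grounded in the search results
62 | answer = client.chat.completions.create(
63 |     model="gpt-4o",
64 |     messages=[
65 |         {"role": "system", "content": "Answer concisely using only the provided search results."},
66 |         {"role": "user", "content": f"Search results:\n{context}\n\nQuestion: {query}"},
67 |     ],
68 | )
69 | print(answer.choices[0].message.content)
70 | ```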
--------------------------------------------------------------------------------
/rag_tutorials/agentic_rag/README.md:
--------------------------------------------------------------------------------
1 | ## 🗃️ AI RAG Agent with Web Access
2 | This script demonstrates how to build a Retrieval-Augmented Generation (RAG) agent with web access using GPT-4o in just 15 lines of Python code. The agent uses a PDF knowledge base and has the ability to search the web using DuckDuckGo.
3 |
4 | ### Features
5 |
6 | - Creates a RAG agent using GPT-4o
7 | - Incorporates a PDF-based knowledge base
8 | - Uses LanceDB as the vector database for efficient similarity search
9 | - Includes web search capability through DuckDuckGo
10 | - Provides a playground interface for easy interaction
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/rag_tutorials/agentic_rag
18 | ```
19 |
20 | 2. Install the required dependencies:
21 |
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Get your OpenAI API Key
27 |
28 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
29 | - Set your OpenAI API key as an environment variable:
30 | ```bash
31 | export OPENAI_API_KEY='your-api-key-here'
32 | ```
33 |
34 | 4. Run the AI RAG Agent
35 | ```bash
36 | python3 rag_agent.py
37 | ```
38 | 5. Open your web browser and navigate to the URL provided in the console output to interact with the RAG agent through the playground interface.
39 |
40 | ### How it works?
41 |
42 | 1. **Knowledge Base Creation:** The script creates a knowledge base from a PDF file hosted online.
43 | 2. **Vector Database Setup:** LanceDB is used as the vector database for efficient similarity search within the knowledge base.
44 | 3. **Agent Configuration:** An AI agent is created using GPT-4o as the underlying model, with the PDF knowledge base and DuckDuckGo search tool.
45 | 4. **Playground Setup:** A playground interface is set up for easy interaction with the RAG agent.
46 |
47 |
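48 | ### Example: the agent in roughly 15 lines
49 | 
50 | A sketch of the agent this README describes, written against the phidata-style API (module paths may differ slightly in current agno releases); the PDF URL is a placeholder, and the `"rag_agent:app"` string assumes the file is saved as `rag_agent.py`.
51 | 
52 | ```python
53 | from phi.agent import Agent
54 | from phi.model.openai import OpenAIChat
55 | from phi.knowledge.pdf import PDFUrlKnowledgeBase
56 | from phi.vectordb.lancedb import LanceDb
57 | from phi.tools.duckduckgo import DuckDuckGo
58 | from phi.playground import Playground, serve_playground_app
59 | 
60 | knowledge_base = PDFUrlKnowledgeBase(
61 |     urls=["https://example.com/sample.pdf"],
62 |     vector_db=LanceDb(table_name="documents", uri="tmp/lancedb"),
63 | )
64 | knowledge_base.load(recreate=False)
65 | 
66 | agent = Agent(
67 |     model=OpenAIChat(id="gpt-4o"),
68 |     knowledge=knowledge_base,
69 |     tools=[DuckDuckGo()],
70 |     show_tool_calls=True,
71 | )
72 | 
73 | app = Playground(agents=[agent]).get_app()
74 | 
75 | if __name__ == "__main__":
76 |     serve_playground_app("rag_agent:app", reload=True)
77 | ```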
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_data_analysis_agent/README.md:
--------------------------------------------------------------------------------
1 | # 📊 AI Data Analysis Agent
2 |
3 | An AI data analysis agent built using the Agno Agent framework and OpenAI's GPT-4o model. This agent helps users analyze their data (CSV and Excel files) through natural language queries, powered by OpenAI's language models and DuckDB for efficient data processing, making data analysis accessible to users regardless of their SQL expertise.
4 |
5 | ## Features
6 |
7 | - 📤 **File Upload Support**:
8 | - Upload CSV and Excel files
9 | - Automatic data type detection and schema inference
10 | - Support for multiple file formats
11 |
12 | - 💬 **Natural Language Queries**:
13 | - Convert natural language questions into SQL queries
14 | - Get instant answers about your data
15 | - No SQL knowledge required
16 |
17 | - 🔍 **Advanced Analysis**:
18 | - Perform complex data aggregations
19 | - Filter and sort data
20 | - Generate statistical summaries
21 | - Create data visualizations
22 |
23 | - 🎯 **Interactive UI**:
24 | - User-friendly Streamlit interface
25 | - Real-time query processing
26 | - Clear result presentation
27 |
28 | ## How to Run
29 |
30 | 1. **Setup Environment**
31 | ```bash
32 | # Clone the repository
33 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
34 | cd ai_agent_tutorials/ai_data_analysis_agent
35 |
36 | # Install dependencies
37 | pip install -r requirements.txt
38 | ```
39 |
40 | 2. **Configure API Keys**
41 | - Get OpenAI API key from [OpenAI Platform](https://platform.openai.com)
42 |
43 | 3. **Run the Application**
44 | ```bash
45 | streamlit run ai_data_analyst.py
46 | ```
47 |
48 | ## Usage
49 |
50 | 1. Launch the application using the command above
51 | 2. Provide your OpenAI API key in the sidebar of Streamlit
52 | 3. Upload your CSV or Excel file through the Streamlit interface
53 | 4. Ask questions about your data in natural language
54 | 5. View the results and generated visualizations
55 |
56 |
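57 | ## Example: the natural-language-to-SQL loop
58 | 
59 | A stripped-down sketch of what happens behind a query (not the app's exact code); the CSV name, question, and model are illustrative, and real code should validate the generated SQL before running it.
60 | 
61 | ```python
62 | import duckdb
63 | import pandas as pd
64 | from openai import OpenAI
65 | 
66 | client = OpenAI()  # assumes OPENAI_API_KEY is set
67 | 
68 | df = pd.read_csv("sales.csv")  # the uploaded file
69 | schema = ", ".join(f"{col} ({dtype})" for col, dtype in df.dtypes.items())
70 | 
71 | question = "What are the top 5 products by total revenue?"
72 | sql = client.chat.completions.create(
73 |     model="gpt-4o",
74 |     messages=[{
75 |         "role": "user",
76 |         "content": f"Table `df` has columns: {schema}. "
77 |                    f"Write a single DuckDB SQL query (no explanation, no code fences) answering: {question}",
78 |     }],
79 | ).choices[0].message.content.strip()
80 | 
81 | # DuckDB can query the in-memory DataFrame `df` directly by name.
82 | print(duckdb.sql(sql).df())
83 | ```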
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_health_fitness_agent/README.md:
--------------------------------------------------------------------------------
1 | # AI Health & Fitness Planner Agent 🏋️♂️
2 |
3 | The **AI Health & Fitness Planner** is a personalized health and fitness Agent powered by Agno AI Agent framework. This app generates tailored dietary and fitness plans based on user inputs such as age, weight, height, activity level, dietary preferences, and fitness goals.
4 |
5 | ## Features
6 |
7 | - **Health Agent and Fitness Agent**
8 |   - The app has two Agno agents that specialize in giving diet advice and fitness/workout advice, respectively.
9 |
10 | - **Personalized Dietary Plans**:
11 | - Generates detailed meal plans (breakfast, lunch, dinner, and snacks).
12 | - Includes important considerations like hydration, electrolytes, and fiber intake.
13 | - Supports various dietary preferences like Keto, Vegetarian, Low Carb, etc.
14 |
15 | - **Personalized Fitness Plans**:
16 | - Provides customized exercise routines based on fitness goals.
17 | - Covers warm-ups, main workouts, and cool-downs.
18 | - Includes actionable fitness tips and progress tracking advice.
19 |
20 | - **Interactive Q&A**: Allows users to ask follow-up questions about their plans.
21 |
22 |
23 | ## Requirements
24 |
25 | The application requires the following Python libraries:
26 |
27 | - `agno`
28 | - `google-generativeai`
29 | - `streamlit`
30 |
31 | Ensure these dependencies are installed via the `requirements.txt` file at the versions specified there.
32 |
33 | ## How to Run
34 |
35 | Follow the steps below to set up and run the application:
36 | Before anything else, get a free Gemini API key from Google AI Studio: https://aistudio.google.com/apikey
37 |
38 | 1. **Clone the Repository**:
39 | ```bash
40 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
41 | cd awesome-llm-apps/ai_agent_tutorials/ai_health_fitness_agent
42 | ```
43 |
44 | 2. **Install the dependencies**
45 | ```bash
46 | pip install -r requirements.txt
47 | ```
48 | 3. **Run the Streamlit app**
49 | ```bash
50 | streamlit run health_agent.py
51 | ```
52 |
53 |
54 |
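55 | ## Example: a single diet-plan call
56 | 
57 | A minimal sketch of one "diet advice" request made directly with google-generativeai, outside the agent framework; the profile, prompt, and model name are illustrative.
58 | 
59 | ```python
60 | import google.generativeai as genai
61 | 
62 | genai.configure(api_key="your-gemini-api-key")
63 | model = genai.GenerativeModel("gemini-1.5-flash")
64 | 
65 | profile = "Age 30, 75 kg, 180 cm, moderately active, vegetarian, goal: fat loss."
66 | diet_plan = model.generate_content(
67 |     f"You are a dietician. Create a one-day meal plan (breakfast, lunch, dinner, snacks) for:\n{profile}"
68 | )
69 | print(diet_plan.text)
70 | ```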
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_lead_generation_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🎯 AI Lead Generation Agent - Powered by Firecrawl's Extract Endpoint
2 |
3 | The AI Lead Generation Agent automates the process of finding and qualifying potential leads from Quora. It uses Firecrawl's search and the new Extract endpoint to identify relevant user profiles, extract valuable information, and organize it into a structured format in Google Sheets. This agent helps sales and marketing teams efficiently build targeted lead lists while saving hours of manual research.
4 |
5 | ### Features
6 | - **Targeted Search**: Uses Firecrawl's search endpoint to find relevant Quora URLs based on your search criteria
7 | - **Intelligent Extraction**: Leverages Firecrawl's new Extract endpoint to pull user information from Quora profiles
8 | - **Automated Processing**: Formats extracted user information into a clean, structured format
9 | - **Google Sheets Integration**: Automatically creates and populates Google Sheets with lead information
10 | - **Customizable Criteria**: Allows you to define specific search parameters to find your ideal leads for your niche
11 |
12 | ### How to Get Started
13 | 1. **Clone the repository**:
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd ai_agent_tutorials/ai_lead_generation_agent
17 | ```
18 | 2. **Install the required packages**:
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. **Set up Composio**:
23 |    - In the terminal, run this command: `composio add googlesheets`
24 |    - In your Composio dashboard, create a new Google Sheets integration and make sure it is active in the active integrations/connections tab
25 |
26 | 4. **Set up your API keys**:
27 | - Get your Firecrawl API key from [Firecrawl's website](https://www.firecrawl.dev/app/api-keys)
28 | - Get your Composio API key from [Composio's website](https://composio.ai)
29 | - Get your OpenAI API key from [OpenAI's website](https://platform.openai.com/api-keys)
30 |
31 | 5. **Run the application**:
32 | ```bash
33 | streamlit run ai_lead_generation_agent.py
34 | ```
35 |
36 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_investment_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 📈 AI Investment Agent
2 | This Streamlit app is an AI-powered investment agent built with Agno's AI Agent framework that compares the performance of two stocks and generates detailed reports. By using GPT-4o with Yahoo Finance data, this app provides valuable insights to help you make informed investment decisions.
3 |
4 | ### Features
5 | - Compare the performance of two stocks
6 | - Retrieve comprehensive company information
7 | - Get the latest company news and analyst recommendations
8 | - Generate detailed comparison reports to guide investment decisions
9 |
10 | ### How to get Started?
11 |
12 | 1. Clone the GitHub repository
13 |
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/ai_agent_tutorials/ai_investment_agent
17 | ```
18 | 2. Install the required dependencies:
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 | 3. Get your OpenAI API Key
24 |
25 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
26 |
27 | 4. Run the Streamlit App
28 | ```bash
29 | streamlit run investment_agent.py
30 | ```
31 |
32 | ### How it Works?
33 |
34 | - Upon running the app, you will be prompted to enter your OpenAI API key. This key is used to authenticate and access the OpenAI language model.
35 | - Once you provide a valid API key, an instance of the Assistant class is created. This assistant utilizes the GPT-4o language model from OpenAI and the YFinanceTools for accessing stock data.
36 | - Enter the stock symbols of the two companies you want to compare in the provided text input fields.
37 | - The assistant will perform the following steps:
38 | - Retrieve real-time stock prices and historical data using YFinanceTools
39 | - Fetch the latest company news and analyst recommendations
40 | - Gather comprehensive company information
41 |    - Generate a detailed comparison report using the GPT-4o language model
42 | - The generated report will be displayed in the app, providing you with valuable insights and analysis to guide your investment decisions.
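43 | 
44 | ### Example: the comparison flow without the framework
45 | 
46 | A framework-free sketch of the same idea: pull data with yfinance, then ask GPT-4o for a report. The tickers and prompt are illustrative.
47 | 
48 | ```python
49 | import yfinance as yf
50 | from openai import OpenAI
51 | 
52 | client = OpenAI()  # assumes OPENAI_API_KEY is set
53 | 
54 | def snapshot(symbol: str) -> str:
55 |     ticker = yf.Ticker(symbol)
56 |     info = ticker.info
57 |     closes = ticker.history(period="6mo")["Close"]
58 |     change = closes.iloc[-1] / closes.iloc[0] - 1
59 |     return (f"{symbol}: sector={info.get('sector')}, "
60 |             f"market cap={info.get('marketCap')}, 6-month change={change:.1%}")
61 | 
62 | context = "\n".join(snapshot(s) for s in ["AAPL", "MSFT"])
63 | report = client.chat.completions.create(
64 |     model="gpt-4o",
65 |     messages=[{"role": "user", "content": f"Compare these stocks for a long-term investor:\n{context}"}],
66 | )
67 | print(report.choices[0].message.content)
68 | ```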
--------------------------------------------------------------------------------
/rag_tutorials/rag_agent_cohere/README.md:
--------------------------------------------------------------------------------
1 | # RAG Agent with Cohere ⌘R
2 |
3 | An agentic RAG system built with Cohere's new Command R7B model (command-r7b-12-2024), Qdrant for vector storage, LangChain for RAG, and LangGraph for orchestration. This application allows users to upload documents, ask questions about them, and get AI-powered responses with a fallback to web search when needed.
4 |
5 | ## Features
6 |
7 | - **Document Processing**
8 | - PDF document upload and processing
9 | - Automatic text chunking and embedding
10 | - Vector storage in Qdrant cloud
11 |
12 | - **Intelligent Querying**
13 | - RAG-based document retrieval
14 | - Similarity search with threshold filtering
15 | - Automatic fallback to web search when no relevant documents found
16 | - Source attribution for answers
17 |
18 | - **Advanced Capabilities**
19 | - DuckDuckGo web search integration
20 | - LangGraph agent for web research
21 | - Context-aware response generation
22 | - Long answer summarization
23 |
24 | - **Model Specific Features**
25 | - Command-r7b-12-2024 model for Chat and RAG
26 | - cohere embed-english-v3.0 model for embeddings
27 | - create_react_agent function from langgraph
28 | - DuckDuckGoSearchRun tool for web search
29 |
30 | ## Prerequisites
31 |
32 | ### 1. Cohere API Key
33 | 1. Go to [Cohere Platform](https://dashboard.cohere.ai/api-keys)
34 | 2. Sign up or log in to your account
35 | 3. Navigate to API Keys section
36 | 4. Create a new API key
37 |
38 | ### 2. Qdrant Cloud Setup
39 | 1. Visit [Qdrant Cloud](https://cloud.qdrant.io/)
40 | 2. Create an account or sign in
41 | 3. Create a new cluster
42 | 4. Get your credentials:
43 | - Qdrant API Key: Found in API Keys section
44 | - Qdrant URL: Your cluster URL (format: `https://xxx-xxx.aws.cloud.qdrant.io`)
45 |
46 |
47 | ## How to Run
48 |
49 | 1. Clone the repository:
50 | ```bash
51 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
52 | cd rag_tutorials/rag_agent_cohere
53 | ```
54 |
55 | 2. Install dependencies:
56 | ```bash
57 | pip install -r requirements.txt
58 | ```
59 | 3. Run the application:
60 | ```bash
61 | streamlit run rag_agent_cohere.py
62 | ```
63 |
64 |
65 |
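66 | ## Example: the web-research fallback in isolation
67 | 
68 | A small sketch of the fallback path described above: a LangGraph ReAct agent over Cohere's Command R7B with DuckDuckGo. It assumes `COHERE_API_KEY` is set, and the package names follow the LangChain integrations listed in the features.
69 | 
70 | ```python
71 | from langchain_cohere import ChatCohere
72 | from langchain_community.tools import DuckDuckGoSearchRun
73 | from langgraph.prebuilt import create_react_agent
74 | 
75 | llm = ChatCohere(model="command-r7b-12-2024")
76 | agent = create_react_agent(llm, tools=[DuckDuckGoSearchRun()])
77 | 
78 | result = agent.invoke({"messages": [("user", "What's new in RAG evaluation this year?")]})
79 | print(result["messages"][-1].content)
80 | ```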
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_github/README.md:
--------------------------------------------------------------------------------
1 | ## 💬 Chat with GitHub Repo
2 |
3 | LLM app with RAG to chat with GitHub Repo in just 30 lines of Python Code. The app uses Retrieval Augmented Generation (RAG) to provide accurate answers to questions based on the content of the specified GitHub repository.
4 |
5 | ### Features
6 |
7 | - Provide the name of GitHub Repository as input
8 | - Ask questions about the content of the GitHub repository
9 | - Get accurate answers using OpenAI's API and Embedchain
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/chat_with_X_tutorials/chat_with_github
18 | ```
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 | 3. Get your OpenAI API Key
25 |
26 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
27 |
28 | 4. Get your GitHub Access Token
29 |
30 | - Create a [personal access token](https://docs.github.com/en/enterprise-server@3.6/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token) with the necessary permissions to access the desired GitHub repository.
31 |
32 | 5. Run the Streamlit App
33 | ```bash
34 | streamlit run chat_github.py
35 | ```
36 |
37 | ### How it Works?
38 |
39 | - The app prompts the user to enter their OpenAI API key, which is used to authenticate requests to the OpenAI API.
40 |
41 | - It initializes an instance of the Embedchain App class and a GithubLoader with the provided GitHub Access Token.
42 |
43 | - The user is prompted to enter a GitHub repository URL, which is then added to the Embedchain app's knowledge base using the GithubLoader.
44 |
45 | - The user can ask questions about the GitHub repository using the text input.
46 |
47 | - When a question is asked, the app uses the chat method of the Embedchain app to generate an answer based on the content of the GitHub repository.
48 |
49 | - The app displays the generated answer to the user.
50 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/multi_agent_researcher/README.md:
--------------------------------------------------------------------------------
1 | ## 📰 Multi-Agent AI Researcher
2 | This Streamlit app empowers you to research top stories and users on HackerNews using a team of AI assistants with GPT-4o.
3 |
4 | ### Features
5 | - Research top stories and users on HackerNews
6 | - Utilize a team of AI assistants specialized in story and user research
7 | - Generate blog posts, reports, and social media content based on your research queries
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/ai_agent_tutorials/multi_agent_researcher
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Run the Streamlit App
27 | ```bash
28 | streamlit run research_agent.py
29 | ```
30 |
31 | ### How it works?
32 |
33 | - Upon running the app, you will be prompted to enter your OpenAI API key. This key is used to authenticate and access the OpenAI language models.
34 | - Once you provide a valid API key, three instances of the Assistant class are created:
35 | - **story_researcher**: Specializes in researching HackerNews stories.
36 | - **user_researcher**: Focuses on researching HackerNews users and reading articles from URLs.
37 | - **hn_assistant**: A team assistant that coordinates the research efforts of the story and user researchers.
38 |
39 | - Enter your research query in the provided text input field. This could be a topic, keyword, or specific question related to HackerNews stories or users.
40 | - The hn_assistant will orchestrate the research process by delegating tasks to the story_researcher and user_researcher based on your query.
41 | - The AI assistants will gather relevant information from HackerNews using the provided tools and generate a comprehensive response using the GPT-4o language model.
42 | - The generated content, which could be a blog post, report, or social media post, will be displayed in the app for you to review and use.
43 |
44 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_3dpygame_r1/README.md:
--------------------------------------------------------------------------------
1 | # 🎮 AI 3D PyGame Visualizer with DeepSeek R1
2 | This project demonstrates DeepSeek R1's coding capabilities through a PyGame code generator and visualizer with browser automation. The system uses DeepSeek for reasoning, OpenAI for code extraction, and browser automation agents to visualize the code on Trinket.io.
3 |
4 | ### Features
5 |
6 | - Generates PyGame code from natural language descriptions
7 | - Uses DeepSeek Reasoner for code logic and explanation
8 | - Extracts clean code using OpenAI GPT-4o
9 | - Automates code visualization on Trinket.io using browser agents
10 | - Provides a streamlined Streamlit interface
11 | - Multi-agent system for handling different tasks (navigation, coding, execution, viewing)
12 |
13 | ### How to get Started?
14 |
15 | 1. Clone the GitHub repository
16 | ```bash
17 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
18 | cd awesome-llm-apps/ai_agent_tutorials/ai_3dpygame_r1
19 | ```
20 |
21 | 2. Install the required dependencies:
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Get your API Keys
27 | - Sign up for [DeepSeek](https://platform.deepseek.com/) and obtain your API key
28 | - Sign up for [OpenAI](https://platform.openai.com/) and obtain your API key
29 |
30 | 4. Run the AI PyGame Visualizer
31 | ```bash
32 | streamlit run ai_3dpygame_r1.py
33 | ```
34 |
35 | 5. Browser Use automatically opens your web browser and navigates to the URL provided in the console output so you can interact with the PyGame generator.
36 |
37 | ### How it works?
38 |
39 | 1. **Query Processing:** User enters a natural language description of the desired PyGame visualization.
40 | 2. **Code Generation:**
41 | - DeepSeek Reasoner analyzes the query and provides detailed reasoning with code
42 | - OpenAI agent extracts clean, executable code from the reasoning
43 | 3. **Visualization:**
44 | - Browser agents automate the process of running code on Trinket.io
45 | - Multiple specialized agents handle different tasks:
46 | - Navigation to Trinket.io
47 | - Code input
48 | - Execution
49 | - Visualization viewing
50 | 4. **User Interface:** Streamlit provides an intuitive interface for entering queries, viewing code, and managing the visualization process.
51 |
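52 | ### Example: the reasoning and extraction split
53 | 
54 | A sketch of the first two stages only, leaving out the browser automation. DeepSeek's API is OpenAI-compatible; the `reasoning_content` field is how `deepseek-reasoner` exposes its chain of thought (per DeepSeek's docs, treat details as an assumption), and the query is illustrative.
55 | 
56 | ```python
57 | from openai import OpenAI
58 | 
59 | deepseek = OpenAI(api_key="your-deepseek-key", base_url="https://api.deepseek.com")
60 | openai_client = OpenAI(api_key="your-openai-key")
61 | 
62 | query = "Write PyGame code that renders a rotating 3D cube."
63 | 
64 | # 1. DeepSeek Reasoner produces reasoning plus a code-bearing answer.
65 | r1 = deepseek.chat.completions.create(
66 |     model="deepseek-reasoner",
67 |     messages=[{"role": "user", "content": query}],
68 | )
69 | reasoning = r1.choices[0].message.reasoning_content
70 | draft = r1.choices[0].message.content
71 | print("Reasoning excerpt:", reasoning[:300])
72 | 
73 | # 2. GPT-4o extracts only the runnable code from the draft.
74 | extraction = openai_client.chat.completions.create(
75 |     model="gpt-4o",
76 |     messages=[{"role": "user", "content": f"Return only the runnable Python code from:\n{draft}"}],
77 | )
78 | print(extraction.choices[0].message.content)
79 | ```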
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_startup_trend_analysis_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 📈 AI Startup Trend Analysis Agent
2 | The AI Startup Trend Analysis Agent is a tool for budding entrepreneurs that generates actionable insights by identifying nascent trends, potential market gaps, and growth opportunities in specific sectors. Entrepreneurs can use these data-driven insights to validate ideas, spot market opportunities, and make informed decisions about their startup ventures. It combines Newspaper4k and DuckDuckGo to scan and analyze startup-focused articles and market data. Using Claude 3.5 Sonnet, it processes this information to extract emerging patterns and enable entrepreneurs to identify promising startup opportunities.
3 |
4 |
5 | ### Features
6 | - **User Prompt**: Entrepreneurs can input specific startup sectors or technologies of interest for research.
7 | - **News Collection**: This agent gathers recent startup news, funding rounds, and market analyses using DuckDuckGo.
8 | - **Summary Generation**: Concise summaries of verified information are generated using Newspaper4k.
9 | - **Trend Analysis**: The system identifies emerging patterns in startup funding, technology adoption, and market opportunities across analyzed stories.
10 | - **Streamlit UI**: The application features a user-friendly interface built with Streamlit for easy interaction.
11 |
12 | ### How to Get Started
13 | 1. **Clone the repository**:
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/ai_agent_tutorials/ai_startup_trend_analysis_agent
17 | ```
18 |
19 | 2. **Create and activate a virtual environment**:
20 | ```bash
21 | # For macOS/Linux
22 | python -m venv venv
23 | source venv/bin/activate
24 |
25 | # For Windows
26 | python -m venv venv
27 | .\venv\Scripts\activate
28 | ```
29 |
30 | 3. **Install the required packages**:
31 | ```bash
32 | pip install -r requirements.txt
33 | ```
34 |
35 | 4. **Run the application**:
36 | ```bash
37 | streamlit run startup_trends_agent.py
38 | ```
39 | ### Important Note
40 | - The system specifically uses Claude's API for advanced language processing. You can obtain your Anthropic API key from [Anthropic's website](https://www.anthropic.com/api).
41 |
42 |
43 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/llm_router_app/README.md:
--------------------------------------------------------------------------------
1 | ## 📡 RouteLLM Chat App
2 |
3 | > Note: This project is inspired by the open-source [RouteLLM library](https://github.com/lm-sys/RouteLLM/tree/main), which provides intelligent routing between different language models.
4 |
5 | This Streamlit application demonstrates the use of RouteLLM, a system that intelligently routes queries between different language models based on the complexity of the task. It provides a chat interface where users can interact with AI models, and the app automatically selects the most appropriate model for each query.
6 |
7 | ### Features
8 | - Chat interface for interacting with AI models
9 | - Automatic model selection using RouteLLM
10 | - Utilizes both GPT-4o mini and Meta-Llama 3.1 models
11 | - Displays chat history with model information
12 |
13 | ### How to get Started?
14 |
15 | 1. Clone the GitHub repository
16 |
17 | ```bash
18 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
19 | cd awesome-llm-apps/advanced_tools_frameworks/llm_router_app
20 | ```
21 | 2. Install the required dependencies:
22 |
23 | ```bash
24 | pip install -r requirements.txt
25 | ```
26 | 3. Set up your API keys in `llm_router.py`:
27 | 
28 | ```python
29 | os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
30 | os.environ['TOGETHERAI_API_KEY'] = "your_togetherai_api_key"
31 | ```
32 | Note: In a production environment, it's recommended to use environment variables or a secure configuration management system instead of hardcoding API keys.
33 |
34 | 4. Run the Streamlit App
35 | ```bash
36 | streamlit run llm_router.py
37 | ```
38 |
39 | ### How it Works?
40 |
41 | 1. RouteLLM Initialization: The app initializes the RouteLLM controller with two models:
42 |    - Strong model: GPT-4o mini
43 | - Weak model: Meta-Llama 3.1 70B Instruct Turbo
44 |
45 | 2. Chat Interface: Users can input messages through a chat interface.
46 |
47 | 3. Model Selection: RouteLLM automatically selects the appropriate model based on the complexity of the user's query.
48 |
49 | 4. Response Generation: The selected model generates a response to the user's input.
50 |
51 | 5. Display: The app displays the response along with information about which model was used.
52 |
53 | 6. History: The chat history is maintained and displayed, including model information for each response.
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_medical_imaging_agent/README.md:
--------------------------------------------------------------------------------
1 | # 🩻 Medical Imaging Diagnosis Agent
2 |
3 | A Medical Imaging Diagnosis Agent built on agno and powered by Gemini 2.0 Flash that provides AI-assisted analysis of medical images from various types of scans. The agent acts as a medical imaging diagnosis expert, analyzing medical images and videos and providing detailed diagnostic insights and explanations.
4 |
5 | ## Features
6 |
7 | - **Comprehensive Image Analysis**
8 | - Image Type Identification (X-ray, MRI, CT scan, ultrasound)
9 | - Anatomical Region Detection
10 | - Key Findings and Observations
11 | - Potential Abnormalities Detection
12 | - Image Quality Assessment
13 | - Research and Reference
14 |
15 | ## How to Run
16 |
17 | 1. **Setup Environment**
18 | ```bash
19 | # Clone the repository
20 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
21 | cd ai_agent_tutorials/ai_medical_imaging_agent
22 |
23 | # Install dependencies
24 | pip install -r requirements.txt
25 | ```
26 |
27 | 2. **Configure API Keys**
28 | - Get Google API key from [Google AI Studio](https://aistudio.google.com)
29 |
30 | 3. **Run the Application**
31 | ```bash
32 | streamlit run ai_medical_imaging.py
33 | ```
34 |
35 | ## Analysis Components
36 |
37 | - **Image Type and Region**
38 | - Identifies imaging modality
39 | - Specifies anatomical region
40 |
41 | - **Key Findings**
42 | - Systematic listing of observations
43 | - Detailed appearance descriptions
44 | - Abnormality highlighting
45 |
46 | - **Diagnostic Assessment**
47 | - Potential diagnoses ranking
48 | - Differential diagnoses
49 | - Severity assessment
50 |
51 | - **Patient-Friendly Explanations**
52 | - Simplified terminology
53 | - Detailed first-principles explanations
54 | - Visual reference points
55 |
56 | ## Notes
57 |
58 | - Uses Gemini 2.0 Flash for analysis
59 | - Requires stable internet connection
60 | - Free tier available: Google provides 1,500 free requests per day
61 | - For educational and development purposes only
62 | - Not a replacement for professional medical diagnosis
63 |
64 | ## Disclaimer
65 |
66 | This tool is for educational and informational purposes only. All analyses should be reviewed by qualified healthcare professionals. Do not make medical decisions based solely on this analysis.
--------------------------------------------------------------------------------
/ai_agent_tutorials/multimodal_ai_agent/multimodal_reasoning_agent.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from agno.agent import Agent
3 | from agno.models.google import Gemini
4 | import tempfile
5 | import os
6 |
7 | def main():
8 | # Set up the reasoning agent
9 | agent = Agent(
10 | model=Gemini(id="gemini-2.0-flash-thinking-exp-1219"),
11 | markdown=True
12 | )
13 |
14 | # Streamlit app title
15 | st.title("Multimodal Reasoning AI Agent 🧠")
16 |
17 | # Instruction
18 | st.write(
19 | "Upload an image and provide a reasoning-based task for the AI Agent. "
20 | "The AI Agent will analyze the image and respond based on your input."
21 | )
22 |
23 | # File uploader for image
24 | uploaded_file = st.file_uploader("Upload Image", type=["jpg", "jpeg", "png"])
25 |
26 | if uploaded_file is not None:
27 | try:
28 | # Save uploaded file to temporary file
29 | with tempfile.NamedTemporaryFile(delete=False, suffix='.jpg') as tmp_file:
30 | tmp_file.write(uploaded_file.getvalue())
31 | temp_path = tmp_file.name
32 |
33 | # Display the uploaded image
34 | st.image(uploaded_file, caption="Uploaded Image", use_container_width=True)
35 |
36 | # Input for dynamic task
37 | task_input = st.text_area(
38 | "Enter your task/question for the AI Agent:"
39 | )
40 |
41 | # Button to process the image and task
42 | if st.button("Analyze Image") and task_input:
43 | with st.spinner("AI is thinking... 🤖"):
44 | try:
45 | # Call the agent with the dynamic task and image path
46 | response = agent.run(task_input, images=[temp_path])
47 |
48 | # Display the response from the model
49 | st.markdown("### AI Response:")
50 | st.markdown(response.content)
51 | except Exception as e:
52 | st.error(f"An error occurred during analysis: {str(e)}")
53 | finally:
54 | # Clean up temp file
55 | if os.path.exists(temp_path):
56 | os.unlink(temp_path)
57 |
58 | except Exception as e:
59 | st.error(f"An error occurred while processing the image: {str(e)}")
60 |
61 | if __name__ == "__main__":
62 | main()
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_coding_agent_o3-mini/README.md:
--------------------------------------------------------------------------------
1 | # 💻 Multimodal AI Coding Agent Team with o3-mini and Gemini
2 | An AI-powered Streamlit application that serves as your personal coding assistant, powered by multiple agents built on the new o3-mini model. You can upload an image of a coding problem or describe it in text, and the AI agents will analyze it, generate an optimal solution, and execute it in a sandbox environment.
3 |
4 | ## Features
5 | #### Multi-Modal Problem Input
6 | - Upload images of coding problems (supports PNG, JPG, JPEG)
7 | - Type problems in natural language
8 | - Automatic problem extraction from images
9 | - Interactive problem processing
10 |
11 | #### Intelligent Code Generation
12 | - Optimal solution generation with best time/space complexity
13 | - Clean, documented Python code output
14 | - Type hints and proper documentation
15 | - Edge case handling
16 |
17 | #### Secure Code Execution
18 | - Sandboxed code execution environment
19 | - Real-time execution results
20 | - Error handling and explanations
21 | - 30-second execution timeout protection
22 |
23 | #### Multi-Agent Architecture
24 | - Vision Agent (Gemini-2.0-flash) for image processing
25 | - Coding Agent (OpenAI- o3-mini) for solution generation
26 | - Execution Agent (OpenAI) for code running and result analysis
27 | - E2B Sandbox for secure code execution
28 |
29 | ## How to Run
30 |
31 | Follow the steps below to set up and run the application:
32 | - Get an OpenAI API key from: https://platform.openai.com/
33 | - Get a Google (Gemini) API key from: https://makersuite.google.com/app/apikey
34 | - Get an E2B API key from: https://e2b.dev/docs/getting-started/api-key
35 |
36 | 1. **Clone the Repository**
37 | ```bash
38 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
39 | cd ai_agent_tutorials/ai_coding_agent_o3-mini
40 | ```
41 |
42 | 2. **Install the dependencies**
43 | ```bash
44 | pip install -r requirements.txt
45 | ```
46 |
47 | 3. **Run the Streamlit app**
48 | ```bash
49 | streamlit run ai_coding_agent_o3.py
50 | ```
51 |
52 | 4. **Configure API Keys**
53 | - Enter your API keys in the sidebar
54 | - All three keys (OpenAI, Gemini, E2B) are required for full functionality
55 |
56 | ## Usage
57 | 1. Upload an image of a coding problem OR type your problem description
58 | 2. Click "Generate & Execute Solution"
59 | 3. View the generated solution with full documentation
60 | 4. See execution results and any generated files
61 | 5. Review any error messages or execution timeouts
62 |
--------------------------------------------------------------------------------
/advanced_tools_frameworks/gemini_multimodal_chatbot/gemini_multimodal_chatbot.py:
--------------------------------------------------------------------------------
1 | import os
2 | import streamlit as st
3 | import google.generativeai as genai
4 | from PIL import Image
5 |
6 | # Set up the Streamlit App
7 | st.set_page_config(page_title="Multimodal Chatbot with Gemini Flash", layout="wide")
8 | st.title("Multimodal Chatbot with Gemini Flash ⚡️")
9 | st.caption("Chat with Google's Gemini Flash model using image and text input to get lightning fast results. 🌟")
10 |
11 | # Get Google API key from user
12 | api_key = st.text_input("Enter Google API Key", type="password")
13 |
14 | # Set up the Gemini model
15 | genai.configure(api_key=api_key)
16 | model = genai.GenerativeModel(model_name="gemini-1.5-flash-latest")
17 |
18 | if api_key:
19 | # Initialize the chat history
20 | if "messages" not in st.session_state:
21 | st.session_state.messages = []
22 |
23 | # Sidebar for image upload
24 | with st.sidebar:
25 | st.title("Chat with Images")
26 | uploaded_file = st.file_uploader("Upload an image...", type=["jpg", "jpeg", "png"])
27 |
28 | if uploaded_file:
29 | image = Image.open(uploaded_file)
30 | st.image(image, caption='Uploaded Image', use_column_width=True)
31 |
32 | # Main layout
33 | chat_placeholder = st.container()
34 |
35 | with chat_placeholder:
36 | # Display the chat history
37 | for message in st.session_state.messages:
38 | with st.chat_message(message["role"]):
39 | st.markdown(message["content"])
40 |
41 | # User input area at the bottom
42 | prompt = st.chat_input("What do you want to know?")
43 |
44 | if prompt:
45 | inputs = [prompt]
46 |
47 | # Add user message to chat history
48 | st.session_state.messages.append({"role": "user", "content": prompt})
49 | # Display user message in chat message container
50 | with chat_placeholder:
51 | with st.chat_message("user"):
52 | st.markdown(prompt)
53 |
54 | if uploaded_file:
55 | inputs.append(image)
56 |
57 | with st.spinner('Generating response...'):
58 | # Generate response
59 | response = model.generate_content(inputs)
60 |
61 | # Display assistant response in chat message container
62 | with chat_placeholder:
63 | with st.chat_message("assistant"):
64 | st.markdown(response.text)
65 |
66 | if uploaded_file and not prompt:
67 | st.warning("Please enter a text query to accompany the image.")
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_github/chat_github_llama3.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import tempfile
3 | from embedchain import App
4 | from embedchain.loaders.github import GithubLoader
5 | import streamlit as st
6 | import os
7 |
8 | GITHUB_TOKEN = os.getenv("GITHUB_TOKEN")  # personal access token read from the GITHUB_TOKEN environment variable
9 |
10 | def get_loader():
11 | loader = GithubLoader(
12 | config={
13 | "token": GITHUB_TOKEN
14 | }
15 | )
16 | return loader
17 |
18 | if "loader" not in st.session_state:
19 | st.session_state['loader'] = get_loader()
20 |
21 | loader = st.session_state.loader
22 |
23 | # Define the embedchain_bot function
24 | def embedchain_bot(db_path):
25 | return App.from_config(
26 | config={
27 | "llm": {"provider": "ollama", "config": {"model": "llama3:instruct", "max_tokens": 250, "temperature": 0.5, "stream": True, "base_url": 'http://localhost:11434'}},
28 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
29 | "embedder": {"provider": "ollama", "config": {"model": "llama3:instruct", "base_url": 'http://localhost:11434'}},
30 | }
31 | )
32 |
33 | def load_repo(git_repo):
34 | global app
35 | # Add the repo to the knowledge base
36 | print(f"Adding {git_repo} to knowledge base!")
37 | app.add("repo:" + git_repo + " " + "type:repo", data_type="github", loader=loader)
38 | st.success(f"Added {git_repo} to knowledge base!")
39 |
40 |
41 | def make_db_path():
42 | ret = tempfile.mkdtemp(suffix="chroma")
43 | print(f"Created Chroma DB at {ret}")
44 | return ret
45 |
46 | # Create Streamlit app
47 | st.title("Chat with GitHub Repository 💬")
48 | st.caption("This app allows you to chat with a GitHub Repo using Llama-3 running with Ollama")
49 |
50 | # Initialize the Embedchain App
51 | if "app" not in st.session_state:
52 | st.session_state['app'] = embedchain_bot(make_db_path())
53 |
54 | app = st.session_state.app
55 |
56 | # Get the GitHub repo from the user
57 | git_repo = st.text_input("Enter the GitHub Repo", type="default")
58 |
59 | if git_repo and ("repos" not in st.session_state or git_repo not in st.session_state.repos):
60 | if "repos" not in st.session_state:
61 | st.session_state["repos"] = [git_repo]
62 | else:
63 | st.session_state.repos.append(git_repo)
64 | load_repo(git_repo)
65 |
66 |
67 | # Ask a question about the Github Repo
68 | prompt = st.text_input("Ask any question about the GitHub Repo")
69 | # Chat with the GitHub Repo
70 | if prompt:
71 | answer = st.session_state.app.chat(prompt)
72 | st.write(answer)
--------------------------------------------------------------------------------
/rag_tutorials/corrective_rag/README.md:
--------------------------------------------------------------------------------
1 | # 🔄 Corrective RAG Agent
2 | A sophisticated Retrieval-Augmented Generation (RAG) system that implements a corrective multi-stage workflow using LangGraph. This system combines document retrieval, relevance grading, query transformation, and web search to provide comprehensive and accurate responses.
3 |
4 | ## Features
5 |
6 | - **Smart Document Retrieval**: Uses Qdrant vector store for efficient document retrieval
7 | - **Document Relevance Grading**: Employs Claude 3.5 sonnet to assess document relevance
8 | - **Query Transformation**: Improves search results by optimizing queries when needed
9 | - **Web Search Fallback**: Uses Tavily API for web search when local documents aren't sufficient
10 | - **Multi-Model Approach**: Combines OpenAI embeddings and Claude 3.5 sonnet for different tasks
11 | - **Interactive UI**: Built with Streamlit for easy document upload and querying
12 |
13 | ## How to Run?
14 |
15 | 1. **Clone the Repository**:
16 | ```bash
17 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
18 | cd rag_tutorials/corrective_rag
19 | ```
20 |
21 | 2. **Install Dependencies**:
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. **Set Up API Keys**:
27 | You'll need to obtain the following API keys:
28 | - [OpenAI API key](https://platform.openai.com/api-keys) (for embeddings)
29 | - [Anthropic API key](https://console.anthropic.com/settings/keys) (for Claude 3.5 sonnet as LLM)
30 | - [Tavily API key](https://app.tavily.com/home) (for web search)
31 | - Qdrant Cloud Setup
32 | 1. Visit [Qdrant Cloud](https://cloud.qdrant.io/)
33 | 2. Create an account or sign in
34 | 3. Create a new cluster
35 | 4. Get your credentials:
36 | - Qdrant API Key: Found in API Keys section
37 | - Qdrant URL: Your cluster URL (format: `https://xxx-xxx.aws.cloud.qdrant.io`)
38 |
39 | 4. **Run the Application**:
40 | ```bash
41 | streamlit run corrective_rag.py
42 | ```
43 |
44 | 5. **Use the Application**:
45 | - Upload documents or provide URLs
46 | - Enter your questions in the query box
47 | - View the step-by-step Corrective RAG process
48 | - Get comprehensive answers
49 |
50 | ## Tech Stack
51 |
52 | - **LangChain**: For RAG orchestration and chains
53 | - **LangGraph**: For workflow management
54 | - **Qdrant**: Vector database for document storage
55 | - **Claude 3.5 sonnet**: Main language model for analysis and generation
56 | - **OpenAI**: For document embeddings
57 | - **Tavily**: For web search capabilities
58 | - **Streamlit**: For the user interface
59 |
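60 | ## Example: the relevance-grading step
61 | 
62 | A sketch of the document-grading stage from the workflow above, using langchain-anthropic; the prompt wording and sample inputs are illustrative.
63 | 
64 | ```python
65 | from langchain_anthropic import ChatAnthropic
66 | from langchain_core.prompts import ChatPromptTemplate
67 | 
68 | llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")
69 | 
70 | grade_prompt = ChatPromptTemplate.from_messages([
71 |     ("system", "Grade whether the document is relevant to the question. Answer only 'yes' or 'no'."),
72 |     ("human", "Document:\n{document}\n\nQuestion: {question}"),
73 | ])
74 | grader = grade_prompt | llm
75 | 
76 | verdict = grader.invoke({
77 |     "document": "Qdrant is a vector database used for similarity search.",
78 |     "question": "Which vector store does the app use?",
79 | }).content
80 | print(verdict)  # a 'no' here would trigger query transformation and the Tavily web-search fallback
81 | ```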
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_tic_tac_toe_agent/README.md:
--------------------------------------------------------------------------------
1 | # 🎮 Agent X vs Agent O: Tic-Tac-Toe Game
2 |
3 | An interactive Tic-Tac-Toe game where two AI agents powered by different language models compete against each other, built on the Agno Agent Framework with Streamlit as the UI. Watch as GPT-4o battles either DeepSeek V3 or Google's Gemini Flash in this classic game.
4 |
5 | ## Features
6 |
7 | ### Multi-Agent System
8 | - Player X: OpenAI's GPT-4o
9 | - Player O: DeepSeek V3 or Google's Gemini Flash
10 | - Judge: GPT-4o for game outcome validation
11 |
12 | ### Interactive Interface
13 | - Real-time game board visualization
14 | - Move-by-move analysis
15 | - Agent response tracking
16 | - Clear game status updates
17 |
18 | ### Strategic Gameplay
19 | - AI-powered move decisions
20 | - Winning strategy implementation
21 | - Opponent move blocking
22 | - Victory condition monitoring
23 |
24 | ## Prerequisites
25 | - Python 3.8+
26 | - OpenAI API key
27 | - Either DeepSeek API key or Google API key (for Player O)
28 |
29 | ## How to Run
30 |
31 | 1. **Setup Environment**
32 | ```bash
33 | # Clone the repository
34 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
35 | cd ai_agent_tutorials/ai_tic_tac_toe_agent
36 |
37 | # Install dependencies
38 | pip install -r requirements.txt
39 | ```
40 |
41 | 2. **Configure API Keys**
42 | - Get OpenAI API key from [OpenAI Platform](https://platform.openai.com)
43 | - Get Google API key from [Google AI Studio](https://aistudio.google.com) (if using Gemini)
44 | - Get DeepSeek API key from DeepSeek platform [Deepseek platform](https://www.deepseek.com) (if using DeepSeek v3 model)
45 |
46 | 3. **Run the Application**
47 | ```bash
48 | streamlit run ai_tic_tac_toe_agent.py
49 | ```
50 |
51 | 4. **Using the Interface**
52 | - Enter your API keys in the sidebar
53 | - Click "Start Game" to begin
54 | - Watch as the AI agents battle it out!
55 | - Monitor the game progress and final results
56 |
57 | ## Game Components
58 |
59 | - **Game Board**
60 | - 3x3 interactive grid
61 | - Real-time move visualization
62 | - Clear symbol placement (X/O)
63 |
64 | - **AI Agent Players**
65 | - Player X: Strategic offensive moves
66 |   - Player O: Defensive countermoves, since this agent moves second
67 | - AI Judge: Game outcome validation
68 |
69 | - **Game Flow**
70 | - Alternating turns between AIs
71 | - Move validation and error handling
72 | - Winner determination
73 | - Draw detection
74 |
75 | ## Disclaimer ⚠️
76 |
77 | This is a demonstration project showcasing AI capabilities in game playing. API costs will apply based on your usage of the OpenAI, DeepSeek, or Google APIs.
78 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_competitor_intelligence_agent_team/README.md:
--------------------------------------------------------------------------------
1 | # 🧲 AI Competitor Intelligence Agent Team
2 |
3 | The AI Competitor Intelligence Agent Team is a powerful competitor analysis tool powered by Firecrawl and Agno's AI Agent framework. This app helps businesses analyze their competitors by extracting structured data from competitor websites and generating actionable insights using AI.
4 |
5 | ## Features
6 |
7 | - **Multi-Agent System**
8 | - **Firecrawl Agent**: Specializes in crawling and summarizing competitor websites
9 | - **Analysis Agent**: Generates detailed competitive analysis reports
10 | - **Comparison Agent**: Creates structured comparisons between competitors
11 |
12 | - **Competitor Discovery**:
13 | - Finds similar companies using URL matching with Exa AI
14 | - Discovers competitors based on business descriptions
15 | - Automatically extracts relevant competitor URLs
16 |
17 | - **Comprehensive Analysis**:
18 | - Provides structured analysis reports with:
19 | - Market gaps and opportunities
20 | - Competitor weaknesses
21 | - Recommended features
22 | - Pricing strategies
23 | - Growth opportunities
24 | - Actionable recommendations
25 |
26 | - **Interactive Analysis**: Users can input either their company URL or description for analysis
27 |
28 | ## Requirements
29 |
30 | The application requires the following Python libraries:
31 |
32 | - `agno`
33 | - `exa-py`
34 | - `streamlit`
35 | - `pandas`
36 | - `firecrawl-py`
37 |
38 | You'll also need API keys for:
39 | - OpenAI
40 | - Firecrawl
41 | - Exa
42 |
43 | ## How to Run
44 |
45 | Follow these steps to set up and run the application:
46 |
47 | 1. **Clone the Repository**:
48 | ```bash
49 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
50 | cd ai_agent_tutorials/ai_competitor_intelligence_agent_team
51 | ```
52 |
53 | 2. **Install the dependencies**:
54 | ```bash
55 | pip install -r requirements.txt
56 | ```
57 |
58 | 3. **Set up your API keys**:
59 | - Get an OpenAI API key from: https://platform.openai.com/api-keys
60 | - Get a Firecrawl API key from: [Firecrawl website](https://www.firecrawl.dev/app/api-keys)
61 | - Get an Exa API key from: [Exa website](https://dashboard.exa.ai/api-keys)
62 |
63 | 4. **Run the Streamlit app**:
64 | ```bash
65 | streamlit run ai_competitor_analyser.py
66 | ```
67 |
68 | ## Usage
69 |
70 | 1. Enter your API keys in the sidebar
71 | 2. Input either:
72 | - Your company's website URL
73 | - A description of your company
74 | 3. Click "Analyze Competitors" to generate:
75 | - Competitor comparison table
76 | - Detailed analysis report
77 | - Strategic recommendations
78 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_legal_agent_team/README.md:
--------------------------------------------------------------------------------
1 | # 👨⚖️ AI Legal Agent Team
2 |
3 | A Streamlit application that simulates a full-service legal team using multiple AI agents to analyze legal documents and provide comprehensive legal insights. Each agent represents a different legal specialist role, from research and contract analysis to strategic planning, working together to provide thorough legal analysis and recommendations.
4 |
5 | ## Features
6 |
7 | - **Specialized Legal AI Agent Team**
8 | - **Legal Researcher**: Equipped with DuckDuckGo search tool to find and cite relevant legal cases and precedents. Provides detailed research summaries with sources and references specific sections from uploaded documents.
9 |
10 | - **Contract Analyst**: Specializes in thorough contract review, identifying key terms, obligations, and potential issues. References specific clauses from documents for detailed analysis.
11 |
12 | - **Legal Strategist**: Focuses on developing comprehensive legal strategies, providing actionable recommendations while considering both risks and opportunities.
13 |
14 | - **Team Lead**: Coordinates analysis between team members, ensures comprehensive responses, properly sourced recommendations, and references to specific document parts. Acts as an Agent Team coordinator for all three agents.
15 |
16 | - **Document Analysis Types**
17 | - Contract Review - Done by Contract Analyst
18 | - Legal Research - Done by Legal Researcher
19 | - Risk Assessment - Done by Legal Strategist, Contract Analyst
20 | - Compliance Check - Done by Legal Strategist, Legal Researcher, Contract Analyst
21 | - Custom Queries - Done by Agent Team - Legal Researcher, Legal Strategist, Contract Analyst
22 |
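For orientation, here is a rough sketch of how such an agent roster could be assembled with the Agno framework. This is not the app's code (the real app also wires in a Qdrant-backed document knowledge base), and the `team=` coordination parameter is an assumption about the Agno `Agent` API:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools

legal_researcher = Agent(
    name="Legal Researcher",
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGoTools()],  # used to find and cite relevant cases and precedents
    instructions="Cite sources and reference specific sections of the uploaded document.",
)
contract_analyst = Agent(
    name="Contract Analyst",
    model=OpenAIChat(id="gpt-4o"),
    instructions="Review the contract clause by clause; flag key terms, obligations, and potential issues.",
)
legal_strategist = Agent(
    name="Legal Strategist",
    model=OpenAIChat(id="gpt-4o"),
    instructions="Develop actionable legal strategies, weighing both risks and opportunities.",
)
team_lead = Agent(
    name="Team Lead",
    model=OpenAIChat(id="gpt-4o"),
    team=[legal_researcher, contract_analyst, legal_strategist],
    instructions="Coordinate the specialists and produce a single, well-sourced response.",
)
```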
23 | ## How to Run
24 |
25 | 1. **Setup Environment**
26 | ```bash
27 | # Clone the repository
28 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
29 | cd awesome-llm-apps/ai_agent_tutorials/ai_legal_agent_team
30 |
31 | # Install dependencies
32 | pip install -r requirements.txt
33 | ```
34 |
35 | 2. **Configure API Keys**
36 | - Get OpenAI API key from [OpenAI Platform](https://platform.openai.com)
37 | - Get Qdrant API key and URL from [Qdrant Cloud](https://cloud.qdrant.io)
38 |
39 | 3. **Run the Application**
40 | ```bash
41 | streamlit run legal_agent_team.py
42 | ```
43 | 4. **Use the Interface**
44 | - Enter API credentials
45 | - Upload a legal document (PDF)
46 | - Select analysis type
47 | - Add custom queries if needed
48 | - View analysis results
49 |
50 | ## Notes
51 |
52 | - Supports PDF documents only
53 | - Uses GPT-4o for analysis
54 | - Uses text-embedding-3-small for embeddings
55 | - Requires stable internet connection
56 | - API usage costs apply
57 |
--------------------------------------------------------------------------------
/llm_apps_with_memory_tutorials/llm_app_personalized_memory/llm_app_memory.py:
--------------------------------------------------------------------------------
1 | import os
2 | import streamlit as st
3 | from mem0 import Memory
4 | from openai import OpenAI
5 |
6 | st.title("LLM App with Memory 🧠")
7 | st.caption("LLM App with a personalized memory layer that remembers every user's choices and interests")
8 |
9 | openai_api_key = st.text_input("Enter OpenAI API Key", type="password")
10 | os.environ["OPENAI_API_KEY"] = openai_api_key
11 |
12 | if openai_api_key:
13 | # Initialize OpenAI client
14 | client = OpenAI(api_key=openai_api_key)
15 |
16 | # Initialize Mem0 with Qdrant
17 | config = {
18 | "vector_store": {
19 | "provider": "qdrant",
20 | "config": {
21 | "collection_name": "llm_app_memory",
22 | "host": "localhost",
23 | "port": 6333,
24 | }
25 | },
26 | }
27 |
28 | memory = Memory.from_config(config)
29 |
30 | user_id = st.text_input("Enter your Username")
31 |
32 | prompt = st.text_input("Ask ChatGPT")
33 |
34 | if st.button('Chat with LLM'):
35 | with st.spinner('Searching...'):
36 | relevant_memories = memory.search(query=prompt, user_id=user_id)
37 | # Prepare context with relevant memories
38 | context = "Relevant past information:\n"
39 |
40 |             for mem in (relevant_memories.get("results", relevant_memories) if isinstance(relevant_memories, dict) else relevant_memories):
41 |                 context += f"- {mem.get('memory', mem.get('text', ''))}\n"
42 |
43 | # Prepare the full prompt
44 | full_prompt = f"{context}\nHuman: {prompt}\nAI:"
45 |
46 | # Get response from GPT-4
47 | response = client.chat.completions.create(
48 | model="gpt-4o",
49 | messages=[
50 | {"role": "system", "content": "You are a helpful assistant with access to past conversations."},
51 | {"role": "user", "content": full_prompt}
52 | ]
53 | )
54 |
55 | answer = response.choices[0].message.content
56 |
57 | st.write("Answer: ", answer)
58 |
59 | # Add AI response to memory
60 | memory.add(answer, user_id=user_id)
61 |
62 |
63 | # Sidebar option to show memory
64 | st.sidebar.title("Memory Info")
65 |     if st.sidebar.button("View My Memory"):
66 |         memories = memory.get_all(user_id=user_id)
67 |         if memories and "results" in memories:
68 |             st.sidebar.write(f"Memory history for **{user_id}**:")
69 |             for mem in memories["results"]:
70 |                 if "memory" in mem:
71 |                     st.sidebar.write(f"- {mem['memory']}")
72 |         else:
73 |             st.sidebar.info("No memory history found for this user ID.")
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_real_estate_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🏠 AI Real Estate Agent - Powered by Firecrawl's Extract Endpoint
2 |
3 | The AI Real Estate Agent automates property search and market analysis using Firecrawl's Extract endpoint and Agno AI Agent's insights. It helps users find properties matching their criteria while providing detailed location trends and investment recommendations. This agent streamlines the property search process by combining data from multiple real estate websites and offering intelligent analysis.
4 |
5 | ### Features
6 | - **Smart Property Search**: Uses Firecrawl's Extract endpoint to find properties across multiple real estate websites
7 | - **Multi-Source Integration**: Aggregates data from 99acres, Housing.com, Square Yards, Nobroker, and MagicBricks
8 | - **Location Analysis**: Provides detailed price trends and investment insights for different localities
9 | - **AI-Powered Recommendations**: Uses GPT models to analyze properties and provide structured recommendations
10 | - **User-Friendly Interface**: Clean Streamlit UI for easy property search and results viewing
11 | - **Customizable Search**: Filter by city, property type, category, and budget
12 |
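As a rough idea of what the Extract-based search looks like, the snippet below asks Firecrawl to pull structured listings against a Pydantic schema. It is a sketch rather than the app's code, and the exact arguments of `FirecrawlApp.extract` vary between `firecrawl-py` versions:

```python
from pydantic import BaseModel
from firecrawl import FirecrawlApp

class Property(BaseModel):
    building_name: str
    property_type: str
    location_address: str
    price: str
    description: str

app = FirecrawlApp(api_key="fc-your-api-key")
result = app.extract(
    ["https://www.99acres.com/*", "https://www.magicbricks.com/*"],  # sites to search
    {
        "prompt": "Extract up to 6 residential flats in Bangalore priced under 2 crores.",
        "schema": Property.model_json_schema(),
    },
)
print(result)  # structured property data that the AI agent then analyzes
```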
13 | ### How to Get Started
14 | 1. **Clone the repository**:
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/ai_agent_tutorials/ai_real_estate_agent
18 | ```
19 |
20 | 2. **Install the required packages**:
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. **Set up your API keys**:
26 | - Get your Firecrawl API key from [Firecrawl's website](https://www.firecrawl.dev/app/api-keys)
27 | - Get your OpenAI API key from [OpenAI's website](https://platform.openai.com/api-keys)
28 |
29 | 4. **Run the application**:
30 | ```bash
31 | streamlit run ai_real_estate_agent.py
32 | ```
33 |
34 | ### Using the Agent
35 | 1. **Enter API Keys**:
36 | - Input your Firecrawl and OpenAI API keys in the sidebar
37 | - Keys are securely stored in the session state
38 |
39 | 2. **Set Search Criteria**:
40 | - Enter the city name
41 | - Select property category (Residential/Commercial)
42 | - Choose property type (Flat/Individual House)
43 | - Set maximum budget in Crores
44 |
45 | 3. **View Results**:
46 | - Property recommendations with detailed analysis
47 | - Location trends with investment insights
48 | - Expandable sections for easy reading
49 |
50 | ### Features in Detail
51 | - **Property Finding**:
52 | - Searches across multiple real estate websites
53 | - Returns 3-6 properties matching criteria
54 | - Provides detailed property information and analysis
55 |
56 | - **Location Analysis**:
57 | - Price trends for different localities
58 | - Rental yield analysis
59 | - Investment potential assessment
60 | - Top performing areas identification
61 |
62 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/multimodal_design_agent_team/README.md:
--------------------------------------------------------------------------------
1 | # Multimodal AI Design Agent Team
2 |
3 | A Streamlit application that provides comprehensive design analysis using a team of specialized AI agents powered by Google's Gemini model.
4 |
5 | This application leverages multiple specialized AI agents to analyze your product's and your competitors' UI/UX designs, combining visual understanding, user experience evaluation, and market research insights.
6 |
7 | ## Features
8 |
9 | - **Specialized Design Analysis Agent Team**
10 |
11 | - 🎨 **Visual Design Agent**: Evaluates design elements, patterns, color schemes, typography, and visual hierarchy
12 | - 🔄 **UX Analysis Agent**: Assesses user flows, interaction patterns, usability, and accessibility
13 | - 📊 **Market Analysis Agent**: Provides market insights, competitor analysis, and positioning recommendations
14 |
15 | - **Multiple Analysis Types**: Choose from Visual Design, UX, and Market Analysis
16 | - **Comparative Analysis**: Upload competitor designs for comparative insights
17 | - **Customizable Focus Areas**: Select specific aspects for detailed analysis
18 | - **Context-Aware**: Provide additional context for more relevant insights
19 | - **Real-time Processing**: Get instant analysis with progress indicators
20 | - **Structured Output**: Receive well-organized, actionable insights
21 |
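A condensed sketch of how one of these agents can be built with Phidata (listed in the technical stack below) and Gemini — illustrative only; the app's own prompts and image handling are more involved:

```python
from phi.agent import Agent
from phi.model.google import Gemini
from phi.tools.duckduckgo import DuckDuckGo

market_agent = Agent(
    name="Market Analysis Agent",
    model=Gemini(id="gemini-2.0-flash-exp"),
    tools=[DuckDuckGo()],  # used for competitor and market research
    instructions="Provide market insights, competitor analysis, and positioning recommendations.",
    markdown=True,
)
market_agent.print_response("How do leading fintech apps present their onboarding flows?")
```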
22 | ## How to Run
23 |
24 | 1. **Setup Environment**
25 | ```bash
26 | # Clone the repository
27 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
28 | cd awesome-llm-apps/ai_agent_tutorials/multimodal_design_agent_team
29 |
30 | # Create and activate virtual environment (optional)
31 | python -m venv venv
32 | source venv/bin/activate # On Windows: venv\Scripts\activate
33 |
34 | # Install dependencies
35 | pip install -r requirements.txt
36 | ```
37 |
38 | 2. **Get API Key**
39 | - Visit [Google AI Studio](https://aistudio.google.com/apikey)
40 | - Generate an API key
41 |
42 | 3. **Run the Application**
43 | ```bash
44 | streamlit run design_agent_team.py
45 | ```
46 |
47 | 4. **Use the Application**
48 | - Enter your Gemini API key in the sidebar
49 | - Upload design files (supported formats: JPG, JPEG, PNG)
50 | - Select analysis types and focus areas
51 | - Add context if needed
52 | - Click "Run Analysis" to get insights
53 |
54 |
55 | ## Technical Stack
56 |
57 | - **Frontend**: Streamlit
58 | - **AI Model**: Google Gemini 2.0
59 | - **Image Processing**: Pillow
60 | - **Market Research**: DuckDuckGo Search API
61 | - **Framework**: Phidata for agent orchestration
62 |
63 | ## Tips for Best Results
64 |
65 | - Upload clear, high-resolution images
66 | - Include multiple views/screens for better context
67 | - Add competitor designs for comparative analysis
68 | - Provide specific context about your target audience
69 |
70 |
--------------------------------------------------------------------------------
/rag_tutorials/deepseek_local_rag_agent/README.md:
--------------------------------------------------------------------------------
1 | # 🐋 Deepseek Local RAG Reasoning Agent
2 |
3 | A powerful reasoning agent that combines local Deepseek models with RAG capabilities. Built with Deepseek (via Ollama), the Snowflake Arctic Embed model (also via Ollama) for embeddings, Qdrant for vector storage, and Agno for agent orchestration, this application offers both simple local chat and advanced RAG-enhanced interactions with comprehensive document processing and web search capabilities.
4 |
5 | ## Features
6 |
7 | - **Dual Operation Modes**
8 | - Local Chat Mode: Direct interaction with Deepseek locally
9 |   - RAG Mode: Enhanced reasoning with document context and web search integration (also requires the llama3.2 model pulled during setup)
10 |
11 | - **Document Processing** (RAG Mode)
12 | - PDF document upload and processing
13 | - Web page content extraction
14 | - Automatic text chunking and embedding
15 | - Vector storage in Qdrant cloud
16 |
17 | - **Intelligent Querying** (RAG Mode)
18 | - RAG-based document retrieval
19 | - Similarity search with threshold filtering
20 | - Automatic fallback to web search
21 | - Source attribution for answers
22 |
23 | - **Advanced Capabilities**
24 | - Exa AI web search integration
25 | - Custom domain filtering for web search
26 | - Context-aware response generation
27 | - Chat history management
28 | - Thinking process visualization
29 |
30 | - **Model Specific Features**
31 | - Flexible model selection:
32 | - Deepseek r1 1.5b (lighter, suitable for most laptops)
33 | - Deepseek r1 7b (more capable, requires better hardware)
34 | - Snowflake Arctic Embedding model (SOTA) for vector embeddings
35 | - Agno Agent framework for orchestration
36 | - Streamlit-based interactive interface
37 |
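A minimal Local Chat Mode sketch (assumes Agno's Ollama wrapper; the full app layers document RAG, Qdrant, and web search on top of this):

```python
from agno.agent import Agent
from agno.models.ollama import Ollama

agent = Agent(
    model=Ollama(id="deepseek-r1:1.5b"),  # or "deepseek-r1:7b" on stronger hardware
    markdown=True,
)
agent.print_response("Explain the difference between RAG and fine-tuning in three sentences.")
```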
38 | ## Prerequisites
39 |
40 | ### 1. Ollama Setup
41 | 1. Install [Ollama](https://ollama.ai)
42 | 2. Pull the Deepseek r1 model(s):
43 | ```bash
44 | # For the lighter model
45 | ollama pull deepseek-r1:1.5b
46 |
47 | # For the more capable model (if your hardware supports it)
48 | ollama pull deepseek-r1:7b
49 |
50 | ollama pull snowflake-arctic-embed
51 | ollama pull llama3.2
52 | ```
53 |
54 | ### 2. Qdrant Cloud Setup (for RAG Mode)
55 | 1. Visit [Qdrant Cloud](https://cloud.qdrant.io/)
56 | 2. Create an account or sign in
57 | 3. Create a new cluster
58 | 4. Get your credentials:
59 | - Qdrant API Key: Found in API Keys section
60 | - Qdrant URL: Your cluster URL (format: `https://xxx-xxx.cloud.qdrant.io`)
61 |
62 | ### 3. Exa AI API Key (Optional)
63 | 1. Visit [Exa AI](https://exa.ai)
64 | 2. Sign up for an account
65 | 3. Generate an API key for web search capabilities
66 |
67 | ## How to Run
68 |
69 | 1. Clone the repository:
70 | ```bash
71 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
72 | cd awesome-llm-apps/rag_tutorials/deepseek_local_rag_agent
73 | ```
74 |
75 | 2. Install dependencies:
76 | ```bash
77 | pip install -r requirements.txt
78 | ```
79 |
80 | 3. Run the application:
81 | ```bash
82 | streamlit run deepseek_rag_agent.py
83 | ```
84 |
85 |
--------------------------------------------------------------------------------
/chat_with_X_tutorials/chat_with_pdf/chat_pdf_llama3.2.py:
--------------------------------------------------------------------------------
1 | # Import necessary libraries
2 | import os
3 | import tempfile
4 | import streamlit as st
5 | from embedchain import App
6 | import base64
7 | from streamlit_chat import message
8 |
9 | # Define the embedchain_bot function
10 | def embedchain_bot(db_path):
11 | return App.from_config(
12 | config={
13 | "llm": {"provider": "ollama", "config": {"model": "llama3.2:latest", "max_tokens": 250, "temperature": 0.5, "stream": True, "base_url": 'http://localhost:11434'}},
14 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
15 | "embedder": {"provider": "ollama", "config": {"model": "llama3.2:latest", "base_url": 'http://localhost:11434'}},
16 | }
17 | )
18 |
19 | # Add a function to display PDF
20 | def display_pdf(file):
21 | base64_pdf = base64.b64encode(file.read()).decode('utf-8')
22 |     pdf_display = f'<iframe src="data:application/pdf;base64,{base64_pdf}" width="100%" height="600" type="application/pdf"></iframe>'  # embed the base64-encoded PDF for inline preview (representative reconstruction)
23 | st.markdown(pdf_display, unsafe_allow_html=True)
24 |
25 | st.title("Chat with PDF using Llama 3.2")
26 | st.caption("This app allows you to chat with a PDF using Llama 3.2 running locally with Ollama!")
27 |
28 | # Define the database path
29 | db_path = tempfile.mkdtemp()
30 |
31 | # Create a session state to store the app instance and chat history
32 | if 'app' not in st.session_state:
33 | st.session_state.app = embedchain_bot(db_path)
34 | if 'messages' not in st.session_state:
35 | st.session_state.messages = []
36 |
37 | # Sidebar for PDF upload and preview
38 | with st.sidebar:
39 | st.header("PDF Upload")
40 | pdf_file = st.file_uploader("Upload a PDF file", type="pdf")
41 |
42 | if pdf_file:
43 | st.subheader("PDF Preview")
44 | display_pdf(pdf_file)
45 |
46 | if st.button("Add to Knowledge Base"):
47 | with st.spinner("Adding PDF to knowledge base..."):
48 | with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
49 | f.write(pdf_file.getvalue())
50 | st.session_state.app.add(f.name, data_type="pdf_file")
51 | os.remove(f.name)
52 | st.success(f"Added {pdf_file.name} to knowledge base!")
53 |
54 | # Chat interface
55 | for i, msg in enumerate(st.session_state.messages):
56 | message(msg["content"], is_user=msg["role"] == "user", key=str(i))
57 |
58 | if prompt := st.chat_input("Ask a question about the PDF"):
59 | st.session_state.messages.append({"role": "user", "content": prompt})
60 | message(prompt, is_user=True)
61 |
62 | with st.spinner("Thinking..."):
63 | response = st.session_state.app.chat(prompt)
64 | st.session_state.messages.append({"role": "assistant", "content": response})
65 | message(response)
66 |
67 | # Clear chat history button
68 | if st.button("Clear Chat History"):
69 | st.session_state.messages = []
--------------------------------------------------------------------------------
/ai_agent_tutorials/ai_tic_tac_toe_game/README.md:
--------------------------------------------------------------------------------
1 | # Agent Tic Tac Toe
2 |
3 | This example shows how to build an interactive Tic Tac Toe game where AI agents compete against each other. The application showcases how to:
4 | - Coordinate multiple AI agents in a turn-based game
5 | - Use different language models for different players
6 | - Create an interactive web interface with Streamlit
7 | - Handle game state and move validation
8 | - Display real-time game progress and move history
9 |
10 | ## Features
11 | - Multiple AI models support (GPT-4, Claude, Gemini, etc.)
12 | - Real-time game visualization
13 | - Move history tracking with board states
14 | - Interactive player selection
15 | - Game state management
16 | - Move validation and coordination
17 |
18 | ### 1. Create a virtual environment
19 |
20 | ```shell
21 | python3 -m venv .venv
22 | source .venv/bin/activate
23 | ```
24 |
25 | ### 2. Install dependencies
26 |
27 | ```shell
28 | pip install -r ai_agent_tutorials/ai_tic_tac_toe_game/requirements.txt
29 | ```
30 |
31 | ### 3. Export API Keys
32 |
33 | The game supports multiple AI models. Export the API keys for the models you want to use:
34 |
35 | ```shell
36 | # Required for OpenAI models
37 | export OPENAI_API_KEY=***
38 |
39 | # Optional - for additional models
40 | export ANTHROPIC_API_KEY=*** # For Claude models
41 | export GOOGLE_API_KEY=*** # For Gemini models
42 | export GROQ_API_KEY=*** # For Groq models
43 | ```
44 |
45 | ### 4. Run the Game
46 |
47 | ```shell
48 | streamlit run ai_agent_tutorials/ai_tic_tac_toe_game/app.py
49 | ```
50 |
51 | - Open [localhost:8501](http://localhost:8501) to view the game interface
52 |
53 | ## How It Works
54 |
55 | The game consists of three agents:
56 |
57 | 1. **Master Agent (Referee)**
58 | - Coordinates the game
59 | - Validates moves
60 | - Maintains game state
61 | - Determines game outcome
62 |
63 | 2. **Two Player Agents**
64 | - Make strategic moves
65 | - Analyze board state
66 | - Follow game rules
67 | - Respond to opponent moves
68 |
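As a rough illustration (not the app's exact code, and assuming the Agno model wrappers), two player agents backed by different providers can be set up like this:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.models.anthropic import Claude

player_x = Agent(
    name="Player X",
    model=OpenAIChat(id="gpt-4o"),
    instructions="You play X. Given the current board, reply with the best empty cell as 'row,col'.",
)
player_o = Agent(
    name="Player O",
    model=Claude(id="claude-3-5-sonnet-20241022"),
    instructions="You play O. Given the current board, reply with the best empty cell as 'row,col'.",
)

move = player_x.run("Board:\n_ _ _\n_ X _\n_ _ O\nYour move.")
print(move.content)  # the master agent validates this move before applying it
```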
69 | ## Available Models
70 |
71 | The game supports various AI models:
72 | - GPT-4o (OpenAI)
73 | - o3-mini (OpenAI)
74 | - Gemini (Google)
75 | - Llama 3 (Groq)
76 | - Claude (Anthropic)
77 |
78 | ## Game Features
79 |
80 | 1. **Interactive Board**
81 | - Real-time updates
82 | - Visual move tracking
83 | - Clear game status display
84 |
85 | 2. **Move History**
86 | - Detailed move tracking
87 | - Board state visualization
88 | - Player action timeline
89 |
90 | 3. **Game Controls**
91 | - Start/Pause game
92 | - Reset board
93 | - Select AI models
94 | - View game history
95 |
96 | 4. **Performance Analysis**
97 | - Move timing
98 | - Strategy tracking
99 | - Game statistics
100 |
101 | ## Support
102 |
103 | Join our [Discord community](https://agno.link/discord) for help and discussions.
104 |
--------------------------------------------------------------------------------
/rag_tutorials/gemini_agentic_rag/README.md:
--------------------------------------------------------------------------------
1 | # 🤔 Agentic RAG with Gemini Flash Thinking
2 |
3 | An agentic RAG system built with the Gemini 2.0 Flash Thinking model and gemini-exp-1206, Qdrant for vector storage, and Agno (previously Phidata) for agent orchestration. This application features intelligent query rewriting, document processing, and web search fallback capabilities to provide comprehensive AI-powered responses.
4 |
5 | ## Features
6 |
7 | - **Document Processing**
8 | - PDF document upload and processing
9 | - Web page content extraction
10 | - Automatic text chunking and embedding
11 | - Vector storage in Qdrant cloud
12 |
13 | - **Intelligent Querying**
14 | - Query rewriting for better retrieval
15 | - RAG-based document retrieval
16 | - Similarity search with threshold filtering
17 | - Automatic fallback to web search
18 | - Source attribution for answers
19 |
20 | - **Advanced Capabilities**
21 | - Exa AI web search integration
22 | - Custom domain filtering for web search
23 | - Context-aware response generation
24 | - Chat history management
25 | - Query reformulation agent
26 |
27 | - **Model Specific Features**
28 |   - Gemini 2.0 Flash Thinking for chat and reasoning
29 | - Gemini Embedding model for vector embeddings
30 | - Agno Agent framework for orchestration
31 | - Streamlit-based interactive interface
32 |
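For example, the query-rewriting step can be sketched as a small Agno agent (illustrative only; the model id shown is an assumption):

```python
from agno.agent import Agent
from agno.models.google import Gemini

query_rewriter = Agent(
    model=Gemini(id="gemini-2.0-flash-thinking-exp"),
    instructions="Rewrite the user's question into a precise, keyword-rich query for vector search.",
    markdown=True,
)
rewritten = query_rewriter.run("how do i get the gemini rag app to answer from my pdf?")
print(rewritten.content)  # this rewritten query is what gets sent to Qdrant
```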
33 | ## Prerequisites
34 |
35 | ### 1. Google API Key
36 | 1. Go to [Google AI Studio](https://aistudio.google.com/apikey)
37 | 2. Sign up or log in to your account
38 | 3. Create a new API key
39 |
40 | ### 2. Qdrant Cloud Setup
41 | 1. Visit [Qdrant Cloud](https://cloud.qdrant.io/)
42 | 2. Create an account or sign in
43 | 3. Create a new cluster
44 | 4. Get your credentials:
45 | - Qdrant API Key: Found in API Keys section
46 | - Qdrant URL: Your cluster URL (format: `https://xxx-xxx.cloud.qdrant.io`)
47 |
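Once you have the credentials, you can sanity-check them with a short standalone script (assumes the `qdrant-client` package, which the requirements pull in):

```python
from qdrant_client import QdrantClient

client = QdrantClient(
    url="https://xxx-xxx.cloud.qdrant.io",  # your cluster URL
    api_key="your-qdrant-api-key",
)
print(client.get_collections())  # returns without raising if the credentials are valid
```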
48 | ### 3. Exa AI API Key (Optional)
49 | 1. Visit [Exa AI](https://exa.ai)
50 | 2. Sign up for an account
51 | 3. Generate an API key for web search capabilities
52 |
53 | ## How to Run
54 |
55 | 1. Clone the repository:
56 | ```bash
57 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
58 | cd awesome-llm-apps/rag_tutorials/gemini_agentic_rag
59 | ```
60 |
61 | 2. Install dependencies:
62 | ```bash
63 | pip install -r requirements.txt
64 | ```
65 |
66 | 3. Run the application:
67 | ```bash
68 | streamlit run agentic_rag_gemini.py
69 | ```
70 |
71 | ## Usage
72 |
73 | 1. Configure API keys in the sidebar:
74 | - Enter your Google API key
75 | - Add Qdrant credentials
76 | - (Optional) Add Exa AI key for web search
77 |
78 | 2. Upload documents:
79 | - Use the file uploader for PDFs
80 | - Enter URLs for web content
81 |
82 | 3. Ask questions:
83 | - Type your query in the chat interface
84 | - View rewritten queries and sources
85 | - See web search results when relevant
86 |
87 | 4. Manage your session:
88 | - Clear chat history as needed
89 | - Configure web search domains
90 | - Monitor processed documents
91 |
--------------------------------------------------------------------------------
/ai_agent_tutorials/multimodal_ai_agent/mutimodal_agent.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from agno.agent import Agent
3 | from agno.models.google import Gemini
4 | from agno.tools.duckduckgo import DuckDuckGoTools
5 | from google.generativeai import upload_file, get_file
6 | import time
7 | from pathlib import Path
8 | import tempfile
9 |
10 | st.set_page_config(
11 | page_title="Multimodal AI Agent",
12 | page_icon="🧬",
13 | layout="wide"
14 | )
15 |
16 | st.title("Multimodal AI Agent 🧬")
17 |
18 | # Initialize single agent with both capabilities
19 | @st.cache_resource
20 | def initialize_agent():
21 | return Agent(
22 | name="Multimodal Analyst",
23 | model=Gemini(id="gemini-2.0-flash-exp"),
24 | tools=[DuckDuckGoTools()],
25 | markdown=True,
26 | )
27 |
28 | agent = initialize_agent()
29 |
30 | # File uploader
31 | uploaded_file = st.file_uploader("Upload a video file", type=['mp4', 'mov', 'avi'])
32 |
33 | if uploaded_file:
34 | with tempfile.NamedTemporaryFile(delete=False, suffix='.mp4') as tmp_file:
35 | tmp_file.write(uploaded_file.read())
36 | video_path = tmp_file.name
37 |
38 | st.video(video_path)
39 |
40 | user_prompt = st.text_area(
41 | "What would you like to know?",
42 | placeholder="Ask any question related to the video - the AI Agent will analyze it and search the web if needed",
43 | help="You can ask questions about the video content and get relevant information from the web"
44 | )
45 |
46 | if st.button("Analyze & Research"):
47 | if not user_prompt:
48 | st.warning("Please enter your question.")
49 | else:
50 | try:
51 | with st.spinner("Processing video and researching..."):
52 | video_file = upload_file(video_path)
53 | while video_file.state.name == "PROCESSING":
54 | time.sleep(2)
55 | video_file = get_file(video_file.name)
56 |
57 | prompt = f"""
58 | First analyze this video and then answer the following question using both
59 | the video analysis and web research: {user_prompt}
60 |
61 | Provide a comprehensive response focusing on practical, actionable information.
62 | """
63 |
64 | result = agent.run(prompt, videos=[video_file])
65 |
66 | st.subheader("Result")
67 | st.markdown(result.content)
68 |
69 | except Exception as e:
70 | st.error(f"An error occurred: {str(e)}")
71 | finally:
72 | Path(video_path).unlink(missing_ok=True)
73 | else:
74 | st.info("Please upload a video to begin analysis.")
75 |
76 | st.markdown("""
77 |     <style>
78 |     /* representative reconstruction of the custom styling stripped from this snapshot */
79 |     .stTextArea textarea {
80 |         height: 100px;
81 |     }
82 |     </style>
83 |     """, unsafe_allow_html=True)
--------------------------------------------------------------------------------
/rag_tutorials/rag_chain/README.md:
--------------------------------------------------------------------------------
1 | # PharmaQuery
2 |
3 | ## Overview
4 | PharmaQuery is an advanced Pharmaceutical Insight Retrieval System designed to help users gain meaningful insights from research papers and documents in the pharmaceutical domain.
5 |
6 | ## Demo
7 | https://github.com/user-attachments/assets/c12ee305-86fe-4f71-9219-57c7f438f291
8 |
9 | ## Features
10 | - **Natural Language Querying**: Ask complex questions about the pharmaceutical industry and get concise, accurate answers.
11 | - **Custom Database**: Upload your own research documents to enhance the retrieval system's knowledge base.
12 | - **Similarity Search**: Retrieves the most relevant documents for your query using AI embeddings.
13 | - **Streamlit Interface**: User-friendly interface for queries and document uploads.
14 |
15 | ## Technologies Used
16 | - **Programming Language**: [Python 3.10+](https://www.python.org/downloads/release/python-31011/)
17 | - **Framework**: [LangChain](https://www.langchain.com/)
18 | - **Database**: [ChromaDB](https://www.trychroma.com/)
19 | - **Models**:
20 | - Embeddings: [Google Gemini API (embedding-001)](https://ai.google.dev/gemini-api/docs/embeddings)
21 | - Chat: [Google Gemini API (gemini-1.5-pro)](https://ai.google.dev/gemini-api/docs/models/gemini#gemini-1.5-pro)
22 | - **PDF Processing**: [PyPDFLoader](https://python.langchain.com/docs/integrations/document_loaders/pypdfloader/)
23 | - **Document Splitter**: [SentenceTransformersTokenTextSplitter](https://python.langchain.com/api_reference/text_splitters/sentence_transformers/langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter.html)
24 |
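A condensed sketch of the retrieval pipeline built from the components above (assumes `GOOGLE_API_KEY` is set in the environment; not the app's exact chain):

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import SentenceTransformersTokenTextSplitter
from langchain_chroma import Chroma
from langchain_google_genai import GoogleGenerativeAIEmbeddings, ChatGoogleGenerativeAI

# 1. Load and split an uploaded research paper
docs = PyPDFLoader("paper.pdf").load()
chunks = SentenceTransformersTokenTextSplitter(chunk_overlap=50).split_documents(docs)

# 2. Embed the chunks and store them in ChromaDB
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
vectordb = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")

# 3. Retrieve the most similar chunks and answer with Gemini
query = "What are the key excipients discussed for sustained-release tablets?"
context = "\n\n".join(d.page_content for d in vectordb.similarity_search(query, k=4))
llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro")
print(llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {query}").content)
```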
25 | ## Requirements
26 | 1. **Install Dependencies**:
27 | ```bash
28 | pip install -r requirements.txt
29 | ```
30 |
31 | 2. **Run the Application**:
32 | ```bash
33 | streamlit run app.py
34 | ```
35 |
36 | 3. **Use the Application**:
37 | - Paste your Google API Key in the sidebar.
38 | - Enter your query in the main interface.
39 | - Optionally, upload research papers in the sidebar to enhance the database.
40 |
44 |