├── .gitignore
├── LICENSE
├── README.md
├── advanced_ai_agents
│ ├── autonomous_game_playing_agent_apps
│ │ ├── ai_3dpygame_r1
│ │ │ ├── README.md
│ │ │ ├── ai_3dpygame_r1.py
│ │ │ └── requirements.txt
│ │ ├── ai_chess_agent
│ │ │ ├── README.md
│ │ │ ├── ai_chess_agent.py
│ │ │ └── requirements.txt
│ │ └── ai_tic_tac_toe_agent
│ │ │ ├── README.md
│ │ │ ├── agents.py
│ │ │ ├── app.py
│ │ │ ├── requirements.txt
│ │ │ └── utils.py
│ ├── multi_agent_apps
│ │ ├── agent_teams
│ │ │ ├── ai_competitor_intelligence_agent_team
│ │ │ │ ├── README.md
│ │ │ │ ├── competitor_agent_team.py
│ │ │ │ └── requirements.txt
│ │ │ ├── ai_finance_agent_team
│ │ │ │ ├── README.md
│ │ │ │ ├── finance_agent_team.py
│ │ │ │ └── requirements.txt
│ │ │ ├── ai_game_design_agent_team
│ │ │ │ ├── README.md
│ │ │ │ ├── game_design_agent_team.py
│ │ │ │ └── requirements.txt
│ │ │ ├── ai_legal_agent_team
│ │ │ │ ├── README.md
│ │ │ │ ├── legal_agent_team.py
│ │ │ │ ├── local_ai_legal_agent_team
│ │ │ │ │ ├── README.md
│ │ │ │ │ ├── local_legal_agent.py
│ │ │ │ │ └── requirements.txt
│ │ │ │ └── requirements.txt
│ │ │ ├── ai_recruitment_agent_team
│ │ │ │ ├── README.md
│ │ │ │ ├── ai_recruitment_agent_team.py
│ │ │ │ └── requirements.txt
│ │ │ ├── ai_services_agency
│ │ │ │ ├── README.md
│ │ │ │ ├── agency.py
│ │ │ │ └── requirements.txt
│ │ │ ├── ai_teaching_agent_team
│ │ │ │ ├── README.md
│ │ │ │ ├── requirements.txt
│ │ │ │ └── teaching_agent_team.py
│ │ │ ├── multimodal_coding_agent_team
│ │ │ │ ├── README.md
│ │ │ │ ├── ai_coding_agent_o3.py
│ │ │ │ └── requirements.txt
│ │ │ └── multimodal_design_agent_team
│ │ │ │ ├── README.md
│ │ │ │ ├── design_agent_team.py
│ │ │ │ └── requirements.txt
│ │ ├── ai_aqi_analysis_agent
│ │ │ ├── README.md
│ │ │ ├── ai_aqi_analysis_agent_gradio.py
│ │ │ ├── ai_aqi_analysis_agent_streamlit.py
│ │ │ └── requirements.txt
│ │ ├── ai_financial_coach_agent
│ │ │ ├── .env
│ │ │ ├── README.md
│ │ │ ├── ai_financial_coach_agent.py
│ │ │ └── requirements.txt
│ │ ├── ai_mental_wellbeing_agent
│ │ │ ├── README.md
│ │ │ ├── ai_mental_wellbeing_agent.py
│ │ │ └── requirements.txt
│ │ └── multi_agent_researcher
│ │ │ ├── README.md
│ │ │ ├── requirements.txt
│ │ │ ├── research_agent.py
│ │ │ └── research_agent_llama3.py
│ └── single_agent_apps
│ │ ├── ai_customer_support_agent
│ │ │ ├── README.md
│ │ │ ├── customer_support_agent.py
│ │ │ └── requirements.txt
│ │ ├── ai_deep_research_agent
│ │ │ ├── README.md
│ │ │ ├── deep_research_openai.py
│ │ │ └── requirements.txt
│ │ ├── ai_health_fitness_agent
│ │ │ ├── README.md
│ │ │ ├── health_agent.py
│ │ │ └── requirements.txt
│ │ ├── ai_investment_agent
│ │ │ ├── README.md
│ │ │ ├── investment_agent.py
│ │ │ └── requirements.txt
│ │ ├── ai_journalist_agent
│ │ │ ├── README.md
│ │ │ ├── journalist_agent.py
│ │ │ └── requirements.txt
│ │ ├── ai_lead_generation_agent
│ │ │ ├── README.md
│ │ │ ├── ai_lead_generation_agent.py
│ │ │ └── requirements.txt
│ │ ├── ai_meeting_agent
│ │ │ ├── README.md
│ │ │ ├── meeting_agent.py
│ │ │ └── requirements.txt
│ │ ├── ai_movie_production_agent
│ │ │ ├── README.md
│ │ │ ├── movie_production_agent.py
│ │ │ └── requirements.txt
│ │ ├── ai_personal_finance_agent
│ │ │ ├── README.md
│ │ │ ├── finance_agent.py
│ │ │ └── requirements.txt
│ │ ├── ai_real_estate_agent
│ │ │ ├── README.md
│ │ │ ├── ai_real_estate_agent.py
│ │ │ └── requirements.txt
│ │ └── ai_system_architect_r1
│ │ │ ├── README.md
│ │ │ ├── ai_system_architect_r1.py
│ │ │ └── requirements.txt
├── advanced_llm_apps
│ ├── chat_with_X_tutorials
│ │ ├── chat_with_github
│ │ │ ├── README.md
│ │ │ ├── chat_github.py
│ │ │ ├── chat_github_llama3.py
│ │ │ └── requirements.txt
│ │ ├── chat_with_gmail
│ │ │ ├── README.md
│ │ │ ├── chat_gmail.py
│ │ │ └── requirements.txt
│ │ ├── chat_with_pdf
│ │ │ ├── README.md
│ │ │ ├── chat_pdf.py
│ │ │ ├── chat_pdf_llama3.2.py
│ │ │ ├── chat_pdf_llama3.py
│ │ │ └── requirements.txt
│ │ ├── chat_with_research_papers
│ │ │ ├── README.md
│ │ │ ├── chat_arxiv.py
│ │ │ ├── chat_arxiv_llama3.py
│ │ │ └── requirements.txt
│ │ ├── chat_with_substack
│ │ │ ├── README.md
│ │ │ ├── chat_substack.py
│ │ │ └── requirements.txt
│ │ └── chat_with_youtube_videos
│ │ │ ├── README.md
│ │ │ ├── chat_youtube.py
│ │ │ └── requirements.txt
│ ├── cursor_ai_experiments
│ │ ├── ai_web_scrapper.py
│ │ ├── chatgpt_clone_llama3.py
│ │ ├── llm_router_app
│ │ │ ├── README.md
│ │ │ ├── llm_router.py
│ │ │ └── requirements.txt
│ │ ├── local_chatgpt_clone
│ │ │ ├── README.md
│ │ │ ├── chatgpt_clone_llama3.py
│ │ │ └── requirements.txt
│ │ ├── multi_agent_researcher.py
│ │ └── requirements.txt
│ ├── llm_apps_with_memory_tutorials
│ │ ├── ai_arxiv_agent_memory
│ │ │ ├── README.md
│ │ │ ├── ai_arxiv_agent_memory.py
│ │ │ └── requirements.txt
│ │ ├── ai_travel_agent_memory
│ │ │ ├── README.md
│ │ │ ├── requirements.txt
│ │ │ └── travel_agent_memory.py
│ │ ├── llama3_stateful_chat
│ │ │ ├── local_llama3_chat.py
│ │ │ └── requirements.txt
│ │ ├── llm_app_personalized_memory
│ │ │ ├── README.md
│ │ │ ├── llm_app_memory.py
│ │ │ └── requirements.txt
│ │ ├── local_chatgpt_with_memory
│ │ │ ├── README.md
│ │ │ ├── local_chatgpt_memory.py
│ │ │ └── requirements.txt
│ │ └── multi_llm_memory
│ │ │ ├── README.md
│ │ │ ├── multi_llm_memory.py
│ │ │ └── requirements.txt
│ └── llm_finetuning_tutorials
│ │ └── llama3.2_finetuning
│ │ │ ├── README.md
│ │ │ ├── finetune_llama3.2.py
│ │ │ └── requirements.txt
├── docs
│ └── banner
│ │ └── unwind_black.png
├── mcp_ai_agents
│ ├── browser_mcp_agent
│ │ ├── README.md
│ │ ├── main.py
│ │ ├── mcp_agent.config.yaml
│ │ ├── mcp_agent.secrets.yaml.example
│ │ └── requirements.txt
│ └── github_mcp_agent
│ │ ├── README.md
│ │ ├── github_agent.py
│ │ └── requirements.txt
├── rag_tutorials
│ ├── agentic_rag
│ │ ├── README.md
│ │ ├── rag_agent.py
│ │ └── requirements.txt
│ ├── ai_blog_search
│ │ ├── README.md
│ │ ├── app.py
│ │ └── requirements.txt
│ ├── autonomous_rag
│ │ ├── README.md
│ │ ├── autorag.py
│ │ └── requirements.txt
│ ├── corrective_rag
│ │ ├── README.md
│ │ ├── corrective_rag.py
│ │ └── requirements.txt
│ ├── deepseek_local_rag_agent
│ │ ├── README.md
│ │ ├── deepseek_rag_agent.py
│ │ └── requirements.txt
│ ├── gemini_agentic_rag
│ │ ├── README.md
│ │ ├── agentic_rag_gemini.py
│ │ └── requirements.txt
│ ├── hybrid_search_rag
│ │ ├── README.md
│ │ ├── main.py
│ │ └── requirements.txt
│ ├── llama3.1_local_rag
│ │ ├── README.md
│ │ ├── llama3.1_local_rag.py
│ │ └── requirements.txt
│ ├── local_hybrid_search_rag
│ │ ├── README.md
│ │ ├── local_main.py
│ │ └── requirements.txt
│ ├── local_rag_agent
│ │ ├── README.md
│ │ ├── local_rag_agent.py
│ │ └── requirements.txt
│ ├── rag-as-a-service
│ │ ├── README.md
│ │ ├── rag_app.py
│ │ └── requirements.txt
│ ├── rag_agent_cohere
│ │ ├── README.md
│ │ ├── rag_agent_cohere.py
│ │ └── requirements.txt
│ ├── rag_chain
│ │ ├── README.md
│ │ ├── app.py
│ │ └── requirements.txt
│ └── rag_database_routing
│ │ ├── README.md
│ │ ├── rag_database_routing.py
│ │ └── requirements.txt
├── starter_ai_agents
│ ├── ai_blog_to_podcast_agent
│ │ ├── README.md
│ │ ├── blog_to_podcast_agent.py
│ │ └── requirements.txt
│ ├── ai_breakup_recovery_agent
│ │ ├── .DS_Store
│ │ ├── README.md
│ │ ├── ai_breakup_recovery_agent.py
│ │ └── requirements.txt
│ ├── ai_data_analysis_agent
│ │ ├── README.md
│ │ ├── ai_data_analyst.py
│ │ └── requirements.txt
│ ├── ai_data_visualisation_agent
│ │ ├── README.md
│ │ ├── ai_data_visualisation_agent.py
│ │ └── requirements.txt
│ ├── ai_medical_imaging_agent
│ │ ├── README.md
│ │ ├── ai_medical_imaging.py
│ │ └── requirements.txt
│ ├── ai_meme_generator_agent_browseruse
│ │ ├── README.md
│ │ ├── ai_meme_generator_agent.py
│ │ └── requirements.txt
│ ├── ai_music_generator_agent
│ │ ├── README.md
│ │ ├── music_generator_agent.py
│ │ └── requirements.txt
│ ├── ai_reasoning_agent
│ │ ├── README.md
│ │ ├── local_ai_reasoning_agent.py
│ │ ├── reasoning_agent.py
│ │ └── requirements.txt
│ ├── ai_startup_trend_analysis_agent
│ │ ├── README.md
│ │ ├── requirements.txt
│ │ └── startup_trends_agent.py
│ ├── ai_travel_agent
│ │ ├── README.MD
│ │ ├── local_travel_agent.py
│ │ ├── requirements.txt
│ │ └── travel_agent.py
│ ├── gemini_multimodal_agent_demo
│ │ └── multimodal_ai_agent.py
│ ├── local_news_agent_openai_swarm
│ │ ├── .env
│ │ ├── README.md
│ │ ├── news_agent.py
│ │ └── requirements.txt
│ ├── mixture_of_agents
│ │ ├── mixture-of-agents.py
│ │ └── requirements.txt
│ ├── multimodal_ai_agent
│ │ ├── README.md
│ │ ├── multimodal_reasoning_agent.py
│ │ ├── mutimodal_agent.py
│ │ └── requirements.txt
│ ├── opeani_research_agent
│ │ ├── README.md
│ │ ├── requirements.txt
│ │ └── research_agent.py
│ ├── web_scrapping_ai_agent
│ │ ├── README.md
│ │ ├── ai_scrapper.py
│ │ ├── local_ai_scrapper.py
│ │ └── requirements.txt
│ └── xai_finance_agent
│ │ ├── README.md
│ │ ├── requirements.txt
│ │ └── xai_finance_agent.py
└── voice_ai_agents
│ ├── ai_audio_tour_agent
│ │ ├── README.md
│ │ ├── agent.py
│ │ ├── ai_audio_tour_agent.py
│ │ ├── manager.py
│ │ ├── printer.py
│ │ └── requirements.txt
│ ├── customer_support_voice_agent
│ │ ├── README.md
│ │ ├── customer_support_voice_agent.py
│ │ └── requirements.txt
│ └── voice_rag_openaisdk
│ │ ├── README.md
│ │ ├── rag_voice.py
│ │ └── requirements.txt
/.gitignore:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/advanced_ai_agents/autonomous_game_playing_agent_apps/ai_3dpygame_r1/README.md:
--------------------------------------------------------------------------------
1 | # 🎮 AI 3D PyGame Visualizer with DeepSeek R1
2 | This project demonstrates DeepSeek R1's coding capabilities through a PyGame code generator and visualizer built on browser automation. The system uses DeepSeek for reasoning, OpenAI for code extraction, and browser automation agents to run and visualize the code on Trinket.io.
3 |
4 | ### Features
5 |
6 | - Generates PyGame code from natural language descriptions
7 | - Uses DeepSeek Reasoner for code logic and explanation
8 | - Extracts clean code using OpenAI GPT-4o
9 | - Automates code visualization on Trinket.io using browser agents
10 | - Provides a streamlined Streamlit interface
11 | - Multi-agent system for handling different tasks (navigation, coding, execution, viewing)
12 |
13 | ### How to get Started?
14 |
15 | 1. Clone the GitHub repository
16 | ```bash
17 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
18 | cd awesome-llm-apps/advanced_ai_agents/autonomous_game_playing_agent_apps/ai_3dpygame_r1
19 | ```
20 |
21 | 2. Install the required dependencies:
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Get your API Keys
27 | - Sign up for [DeepSeek](https://platform.deepseek.com/) and obtain your API key
28 | - Sign up for [OpenAI](https://platform.openai.com/) and obtain your API key
29 |
30 | 4. Run the AI PyGame Visualizer
31 | ```bash
32 | streamlit run ai_3dpygame_r1.py
33 | ```
34 |
35 | 5. Open the URL shown in the console output in your web browser to interact with the PyGame generator; the browser-use agents will automatically open a separate browser window to run the generated code on Trinket.io.
36 |
37 | ### How it works?
38 |
39 | 1. **Query Processing:** User enters a natural language description of the desired PyGame visualization.
40 | 2. **Code Generation:**
41 | - DeepSeek Reasoner analyzes the query and provides detailed reasoning with code
42 | - OpenAI agent extracts clean, executable code from the reasoning
43 | 3. **Visualization:**
44 | - Browser agents automate the process of running code on Trinket.io
45 | - Multiple specialized agents handle different tasks:
46 | - Navigation to Trinket.io
47 | - Code input
48 | - Execution
49 | - Visualization viewing
50 | 4. **User Interface:** Streamlit provides an intuitive interface for entering queries, viewing code, and managing the visualization process.
51 |
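52 | ### Sketch: from reasoning to extracted code
53 |
54 | The snippet below is a minimal sketch of the two-model handoff described above, not the app's actual code. It assumes DeepSeek's OpenAI-compatible endpoint and placeholder API keys; the browser-use agents and the Streamlit UI are omitted.
55 |
56 | ```python
57 | from openai import OpenAI
58 |
59 | query = "a rotating 3D cube rendered with PyGame"  # hypothetical user prompt
60 |
61 | # Step 1: DeepSeek Reasoner produces reasoning plus draft code
62 | deepseek = OpenAI(api_key="<DEEPSEEK_API_KEY>", base_url="https://api.deepseek.com")
63 | reasoning = deepseek.chat.completions.create(
64 |     model="deepseek-reasoner",
65 |     messages=[{"role": "user", "content": f"Write PyGame code for: {query}"}],
66 | ).choices[0].message.content
67 |
68 | # Step 2: GPT-4o extracts only the runnable code from the reasoning
69 | openai_client = OpenAI(api_key="<OPENAI_API_KEY>")
70 | code = openai_client.chat.completions.create(
71 |     model="gpt-4o",
72 |     messages=[
73 |         {"role": "system", "content": "Return only the runnable Python code, no prose."},
74 |         {"role": "user", "content": reasoning},
75 |     ],
76 | ).choices[0].message.content
77 | print(code)  # this is what the browser agents paste into Trinket.io
78 | ```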
--------------------------------------------------------------------------------
/advanced_ai_agents/autonomous_game_playing_agent_apps/ai_3dpygame_r1/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | langchain-openai
3 | browser-use
4 | streamlit
--------------------------------------------------------------------------------
/advanced_ai_agents/autonomous_game_playing_agent_apps/ai_chess_agent/README.md:
--------------------------------------------------------------------------------
1 | # ♜ Agent White vs Agent Black: Chess Game
2 |
3 | An advanced chess game system in which two AI agents play against each other using AutoGen in a Streamlit app, built with robust move validation and game state management.
4 |
5 | ## Features
6 |
7 | ### Multi-Agent Architecture
8 | - Player White: OpenAI-powered strategic decision maker
9 | - Player Black: OpenAI-powered tactical opponent
10 | - Board Proxy: Validation agent for move legality and game state
11 |
12 | ### Safety & Validation
13 | - Robust move verification system
14 | - Illegal move prevention
15 | - Real-time board state monitoring
16 | - Secure game progression control
17 |
18 | ### Strategic Gameplay
19 | - AI-powered position evaluation
20 | - Deep tactical analysis
21 | - Dynamic strategy adaptation
22 | - Complete chess ruleset implementation
23 |
24 |
25 | ### How to get Started?
26 |
27 | 1. Clone the GitHub repository
28 |
29 | ```bash
30 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
31 | cd awesome-llm-apps/advanced_ai_agents/autonomous_game_playing_agent_apps/ai_chess_agent
32 | ```
33 | 2. Install the required dependencies:
34 |
35 | ```bash
36 | pip install -r requirements.txt
37 | ```
38 | 3. Get your OpenAI API Key
39 |
40 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
41 |
42 | 4. Run the Streamlit App
43 | ```bash
44 | streamlit run ai_chess_agent.py
45 | ```
46 |
47 |
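48 | ### Sketch: move validation with python-chess
49 |
50 | A minimal sketch of the validation layer the Board Proxy provides, using the `chess` package from `requirements.txt`. The function name is illustrative and the AutoGen agent wiring is omitted.
51 |
52 | ```python
53 | import chess
54 |
55 | board = chess.Board()
56 |
57 | def make_move(move_uci: str) -> str:
58 |     """Apply an agent-proposed UCI move only if it is legal."""
59 |     move = chess.Move.from_uci(move_uci)
60 |     if move not in board.legal_moves:
61 |         # Reject the move and show the agent its legal options
62 |         return f"Illegal move {move_uci}. Legal: {[m.uci() for m in board.legal_moves]}"
63 |     board.push(move)
64 |     return f"Played {move_uci}.\n{board}"
65 |
66 | print(make_move("e2e4"))  # Agent White opens
67 | print(make_move("e2e4"))  # rejected: it is Black's turn and e2 is now empty
68 | ```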
--------------------------------------------------------------------------------
/advanced_ai_agents/autonomous_game_playing_agent_apps/ai_chess_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | chess==1.11.1
3 | autogen==0.6.1
4 | cairosvg
5 | pillow
6 |
--------------------------------------------------------------------------------
/advanced_ai_agents/autonomous_game_playing_agent_apps/ai_tic_tac_toe_agent/README.md:
--------------------------------------------------------------------------------
1 | # 🎮 Agent X vs Agent O: Tic-Tac-Toe Game
2 |
3 | An interactive Tic-Tac-Toe game in which two AI agents powered by different language models compete against each other, built on the Agno Agent framework with Streamlit as the UI.
4 |
5 | This example shows how to build an interactive Tic Tac Toe game where AI agents compete against each other. The application showcases how to:
6 | - Coordinate multiple AI agents in a turn-based game
7 | - Use different language models for different players
8 | - Create an interactive web interface with Streamlit
9 | - Handle game state and move validation
10 | - Display real-time game progress and move history
11 |
12 | ## Features
13 | - Support for multiple AI models (GPT-4o, Claude, Gemini, etc.)
14 | - Real-time game visualization
15 | - Move history tracking with board states
16 | - Interactive player selection
17 | - Game state management
18 | - Move validation and coordination
19 |
20 | ## How to Run?
21 |
22 | ### 1. Setup Environment
23 | ```bash
24 | # Clone the repository
25 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
26 | cd awesome-llm-apps/advanced_ai_agents/autonomous_game_playing_agent_apps/ai_tic_tac_toe_agent
27 |
28 | # Install dependencies
29 | pip install -r requirements.txt
30 | ```
31 |
37 |
38 | ### 2. Export API Keys
39 |
40 | The game supports multiple AI models. Export the API keys for the models you want to use:
41 |
42 | ```shell
43 | # Required for OpenAI models
44 | export OPENAI_API_KEY=***
45 |
46 | # Optional - for additional models
47 | export ANTHROPIC_API_KEY=*** # For Claude models
48 | export GOOGLE_API_KEY=*** # For Gemini models
49 | export GROQ_API_KEY=*** # For Groq models
50 | ```
51 |
52 | ### 3. Run the Game
53 |
54 | ```shell
55 | streamlit run app.py
56 | ```
57 |
58 | - Open [localhost:8501](http://localhost:8501) to view the game interface
59 |
60 | ## How It Works
61 |
62 | The game consists of three agents:
63 |
64 | 1. **Master Agent (Referee)**
65 | - Coordinates the game
66 | - Validates moves
67 | - Maintains game state
68 | - Determines game outcome
69 |
70 | 2. **Two Player Agents**
71 | - Make strategic moves
72 | - Analyze board state
73 | - Follow game rules
74 | - Respond to opponent moves
75 |
76 | ## Available Models
77 |
78 | The game supports various AI models:
79 | - GPT-4o (OpenAI)
80 | - o3-mini (OpenAI)
81 | - Gemini (Google)
82 | - Llama 3 (Groq)
83 | - Claude (Anthropic)
84 |
85 | ## Game Features
86 |
87 | 1. **Interactive Board**
88 | - Real-time updates
89 | - Visual move tracking
90 | - Clear game status display
91 |
92 | 2. **Move History**
93 | - Detailed move tracking
94 | - Board state visualization
95 | - Player action timeline
96 |
97 | 3. **Game Controls**
98 | - Start/Pause game
99 | - Reset board
100 | - Select AI models
101 | - View game history
102 |
103 | 4. **Performance Analysis**
104 | - Move timing
105 | - Strategy tracking
106 | - Game statistics
107 |
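108 | ## Coordination Sketch
109 |
110 | A minimal sketch (not the app's actual `agents.py`) of how two Agno agents can alternate turns while the driver validates each move:
111 |
112 | ```python
113 | from agno.agent import Agent
114 | from agno.models.openai import OpenAIChat
115 |
116 | def make_player(symbol: str) -> Agent:
117 |     return Agent(
118 |         name=f"Player {symbol}",
119 |         model=OpenAIChat(id="gpt-4o"),
120 |         instructions=[f"You play {symbol} in Tic-Tac-Toe.",
121 |                       "Reply with your move as 'row,col' (0-indexed), nothing else."],
122 |     )
123 |
124 | players = {"X": make_player("X"), "O": make_player("O")}
125 | board = [[" "] * 3 for _ in range(3)]
126 | turn = "X"
127 | for _ in range(9):
128 |     reply = players[turn].run(f"Board: {board}. Your move.").content.strip()
129 |     row, col = (int(v) for v in reply.split(","))
130 |     if board[row][col] != " ":
131 |         continue  # illegal move: the real referee agent re-prompts instead
132 |     board[row][col] = turn
133 |     turn = "O" if turn == "X" else "X"
134 | ```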
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_competitor_intelligence_agent_team/README.md:
--------------------------------------------------------------------------------
1 | # 🧲 AI Competitor Intelligence Agent Team
2 |
3 | The AI Competitor Intelligence Agent Team is a powerful competitor analysis tool powered by Firecrawl and Agno's AI Agent framework. This app helps businesses analyze their competitors by extracting structured data from competitor websites and generating actionable insights using AI.
4 |
5 | ## Features
6 |
7 | - **Multi-Agent System**
8 | - **Firecrawl Agent**: Specializes in crawling and summarizing competitor websites
9 | - **Analysis Agent**: Generates detailed competitive analysis reports
10 | - **Comparison Agent**: Creates structured comparisons between competitors
11 |
12 | - **Competitor Discovery**:
13 | - Finds similar companies using URL matching with Exa AI
14 | - Discovers competitors based on business descriptions
15 | - Automatically extracts relevant competitor URLs
16 |
17 | - **Comprehensive Analysis**:
18 | - Provides structured analysis reports with:
19 | - Market gaps and opportunities
20 | - Competitor weaknesses
21 | - Recommended features
22 | - Pricing strategies
23 | - Growth opportunities
24 | - Actionable recommendations
25 |
26 | - **Interactive Analysis**: Users can input either their company URL or description for analysis
27 |
28 | ## Requirements
29 |
30 | The application requires the following Python libraries:
31 |
32 | - `agno`
33 | - `exa-py`
34 | - `streamlit`
35 | - `pandas`
36 | - `firecrawl-py`
37 |
38 | You'll also need API keys for:
39 | - OpenAI
40 | - Firecrawl
41 | - Exa
42 |
43 | ## How to Run
44 |
45 | Follow these steps to set up and run the application:
46 |
47 | 1. **Clone the Repository**:
48 | ```bash
49 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
50 | cd awesome-llm-apps/advanced_ai_agents/multi_agent_apps/agent_teams/ai_competitor_intelligence_agent_team
51 | ```
52 |
53 | 2. **Install the dependencies**:
54 | ```bash
55 | pip install -r requirements.txt
56 | ```
57 |
58 | 3. **Set up your API keys**:
59 | - Get an OpenAI API key from: https://platform.openai.com/api-keys
60 | - Get a Firecrawl API key from: [Firecrawl website](https://www.firecrawl.dev/app/api-keys)
61 | - Get an Exa API key from: [Exa website](https://dashboard.exa.ai/api-keys)
62 |
63 | 4. **Run the Streamlit app**:
64 | ```bash
65 | streamlit run competitor_agent_team.py
66 | ```
67 |
68 | ## Usage
69 |
70 | 1. Enter your API keys in the sidebar
71 | 2. Input either:
72 | - Your company's website URL
73 | - A description of your company
74 | 3. Click "Analyze Competitors" to generate:
75 | - Competitor comparison table
76 | - Detailed analysis report
77 | - Strategic recommendations
78 |
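79 | ## Discovery Step (Sketch)
80 |
81 | A minimal sketch of the competitor-discovery flow described above, assuming the `exa-py` and `firecrawl-py` clients from `requirements.txt` (return shapes vary slightly between versions):
82 |
83 | ```python
84 | from exa_py import Exa
85 | from firecrawl import FirecrawlApp
86 |
87 | exa = Exa(api_key="<EXA_API_KEY>")
88 | firecrawl = FirecrawlApp(api_key="<FIRECRAWL_API_KEY>")
89 |
90 | # 1) Find companies similar to the user's URL
91 | similar = exa.find_similar("https://example.com", num_results=3)
92 | competitor_urls = [result.url for result in similar.results]
93 |
94 | # 2) Crawl each competitor site into text for the analysis agent
95 | pages = [firecrawl.scrape_url(url) for url in competitor_urls]
96 | ```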
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_competitor_intelligence_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | exa-py==1.7.1
2 | firecrawl-py==1.9.0
3 | duckduckgo-search==7.2.1
4 | agno
5 | streamlit==1.41.1
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_finance_agent_team/README.md:
--------------------------------------------------------------------------------
1 | ## 💲 AI Finance Agent Team with Web Access
2 | This script demonstrates how to build a team of AI agents that work together as a financial analyst using GPT-4o in about 40 lines of Python code. The system combines web search capabilities with financial data analysis tools to provide comprehensive financial insights.
3 |
4 | ### Features
5 | - Multi-agent system with specialized roles:
6 | - Web Agent for general internet research
7 | - Finance Agent for detailed financial analysis
8 | - Team Agent for coordinating between agents
9 | - Real-time financial data access through YFinance
10 | - Web search capabilities using DuckDuckGo
11 | - Persistent storage of agent interactions using SQLite
12 |
13 | ### How to get Started?
14 |
15 | 1. Clone the GitHub repository
16 | ```bash
17 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
18 | cd awesome-llm-apps/advanced_ai_agents/multi_agent_apps/agent_teams/ai_finance_agent_team
19 | ```
20 |
21 | 2. Install the required dependencies:
22 |
23 | ```bash
24 | pip install -r requirements.txt
25 | ```
26 |
27 | 3. Get your OpenAI API Key
28 |
29 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
30 | - Set your OpenAI API key as an environment variable:
31 | ```bash
32 | export OPENAI_API_KEY='your-api-key-here'
33 | ```
34 |
35 | 4. Run the team of AI Agents
36 | ```bash
37 | python3 finance_agent_team.py
38 | ```
39 |
40 | 5. Open your web browser and navigate to the URL provided in the console output to interact with the team of AI agents through the playground interface.
41 |
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_finance_agent_team/finance_agent_team.py:
--------------------------------------------------------------------------------
1 | from agno.agent import Agent
2 | from agno.models.openai import OpenAIChat
3 | from agno.storage.agent.sqlite import SqliteAgentStorage
4 | from agno.tools.duckduckgo import DuckDuckGoTools
5 | from agno.tools.yfinance import YFinanceTools
6 | from agno.playground import Playground, serve_playground_app
7 |
8 | web_agent = Agent(
9 | name="Web Agent",
10 | role="Search the web for information",
11 | model=OpenAIChat(id="gpt-4o"),
12 | tools=[DuckDuckGoTools()],
13 | storage=SqliteAgentStorage(table_name="web_agent", db_file="agents.db"),
14 | add_history_to_messages=True,
15 | markdown=True,
16 | )
17 |
18 | finance_agent = Agent(
19 | name="Finance Agent",
20 | role="Get financial data",
21 | model=OpenAIChat(id="gpt-4o"),
22 | tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True, company_news=True)],
23 | instructions=["Always use tables to display data"],
24 | storage=SqliteAgentStorage(table_name="finance_agent", db_file="agents.db"),
25 | add_history_to_messages=True,
26 | markdown=True,
27 | )
28 |
29 | agent_team = Agent(
30 | team=[web_agent, finance_agent],
31 | name="Agent Team (Web+Finance)",
32 | model=OpenAIChat(id="gpt-4o"),
33 | show_tool_calls=True,
34 | markdown=True,
35 | )
36 |
37 | app = Playground(agents=[agent_team]).get_app()
38 |
39 | if __name__ == "__main__":
40 | serve_playground_app("finance_agent_team:app", reload=True)
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_finance_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | openai
2 | agno
3 | duckduckgo-search
4 | yfinance
5 | fastapi[standard]
6 | sqlalchemy
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_game_design_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit==1.41.1
2 | autogen
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_legal_agent_team/README.md:
--------------------------------------------------------------------------------
1 | # 👨‍⚖️ AI Legal Agent Team
2 |
3 | A Streamlit application that simulates a full-service legal team using multiple AI agents to analyze legal documents and provide comprehensive legal insights. Each agent represents a different legal specialist role, from research and contract analysis to strategic planning, working together to provide thorough legal analysis and recommendations.
4 |
5 | ## Features
6 |
7 | - **Specialized Legal AI Agent Team**
8 | - **Legal Researcher**: Equipped with DuckDuckGo search tool to find and cite relevant legal cases and precedents. Provides detailed research summaries with sources and references specific sections from uploaded documents.
9 |
10 | - **Contract Analyst**: Specializes in thorough contract review, identifying key terms, obligations, and potential issues. References specific clauses from documents for detailed analysis.
11 |
12 | - **Legal Strategist**: Focuses on developing comprehensive legal strategies, providing actionable recommendations while considering both risks and opportunities.
13 |
14 | - **Team Lead**: Coordinates analysis between team members, ensures comprehensive responses, properly sourced recommendations, and references to specific document parts. Acts as an Agent Team coordinator for all three agents.
15 |
16 | - **Document Analysis Types**
17 | - Contract Review - Done by Contract Analyst
18 | - Legal Research - Done by Legal Researcher
19 | - Risk Assessment - Done by Legal Strategist, Contract Analyst
20 | - Compliance Check - Done by Legal Strategist, Legal Researcher, Contract Analyst
21 | - Custom Queries - Done by Agent Team - Legal Researcher, Legal Strategist, Contract Analyst
22 |
23 | ## How to Run
24 |
25 | 1. **Setup Environment**
26 | ```bash
27 | # Clone the repository
28 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
29 | cd awesome-llm-apps/advanced_ai_agents/multi_agent_apps/agent_teams/ai_legal_agent_team
30 |
31 | # Install dependencies
32 | pip install -r requirements.txt
33 | ```
34 |
35 | 2. **Configure API Keys**
36 | - Get OpenAI API key from [OpenAI Platform](https://platform.openai.com)
37 | - Get Qdrant API key and URL from [Qdrant Cloud](https://cloud.qdrant.io)
38 |
39 | 3. **Run the Application**
40 | ```bash
41 | streamlit run legal_agent_team.py
42 | ```
43 | 4. **Use the Interface**
44 | - Enter API credentials
45 | - Upload a legal document (PDF)
46 | - Select analysis type
47 | - Add custom queries if needed
48 | - View analysis results
49 |
50 | ## Notes
51 |
52 | - Supports PDF documents only
53 | - Uses GPT-4o for analysis
54 | - Uses text-embedding-3-small for embeddings
55 | - Requires stable internet connection
56 | - API usage costs apply
57 |
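58 | ## Architecture Sketch
59 |
60 | A simplified sketch (not the app's actual `legal_agent_team.py`) of how the four agents can be wired together with Agno; the Qdrant-backed PDF knowledge base is omitted for brevity:
61 |
62 | ```python
63 | from agno.agent import Agent
64 | from agno.models.openai import OpenAIChat
65 | from agno.tools.duckduckgo import DuckDuckGoTools
66 |
67 | legal_researcher = Agent(name="Legal Researcher", model=OpenAIChat(id="gpt-4o"),
68 |                          tools=[DuckDuckGoTools()],
69 |                          instructions=["Find and cite relevant cases and precedents."])
70 | contract_analyst = Agent(name="Contract Analyst", model=OpenAIChat(id="gpt-4o"),
71 |                          instructions=["Review contracts and flag key terms and risks."])
72 | legal_strategist = Agent(name="Legal Strategist", model=OpenAIChat(id="gpt-4o"),
73 |                          instructions=["Develop strategies and actionable recommendations."])
74 |
75 | team_lead = Agent(name="Team Lead", model=OpenAIChat(id="gpt-4o"),
76 |                   team=[legal_researcher, contract_analyst, legal_strategist])
77 | print(team_lead.run("Review the uploaded NDA for unusual clauses.").content)
78 | ```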
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_legal_agent_team/local_ai_legal_agent_team/README.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Shubhamsaboo/awesome-llm-apps/78da770fdec4c63026a1a32305a1f4f0ad6eaf45/advanced_ai_agents/multi_agent_apps/agent_teams/ai_legal_agent_team/local_ai_legal_agent_team/README.md
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_legal_agent_team/local_ai_legal_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | streamlit==1.40.2
3 | qdrant-client==1.12.1
4 | ollama==0.4.4
5 |
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_legal_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | streamlit==1.40.2
3 | qdrant-client==1.12.1
4 | openai
5 | pypdf
6 | duckduckgo-search
7 |
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_recruitment_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | # Core dependencies
2 | phidata
3 | agno
4 | streamlit==1.40.2
5 | PyPDF2==3.0.1
6 | streamlit-pdf-viewer==0.0.19
7 | requests==2.32.3
8 | pytz==2023.4
9 | typing-extensions>=4.9.0
10 |
11 | # Optional but recommended
12 | black>=24.1.1 # for code formatting
13 | python-dateutil>=2.8.2 # for date parsing
14 |
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_services_agency/requirements.txt:
--------------------------------------------------------------------------------
1 | python-dotenv==1.0.1
2 | agency-swarm==0.4.1
3 | streamlit
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/ai_teaching_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit==1.41.1
2 | openai==1.58.1
3 | duckduckgo-search==6.4.1
4 | typing-extensions>=4.5.0
5 | agno
6 | composio-phidata==0.6.9
7 | composio_core
8 | composio==0.1.1
9 | google-search-results==2.4.2
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/multimodal_coding_agent_team/README.md:
--------------------------------------------------------------------------------
1 | # 💻 Multimodal AI Coding Agent Team with o3-mini and Gemini
2 | An AI-powered Streamlit application that serves as your personal coding assistant, driven by multiple agents built on the new o3-mini model. You can upload an image of a coding problem or describe it in text, and the AI agents will analyze it, generate an optimal solution, and execute it in a sandbox environment.
3 |
4 | ## Features
5 | #### Multi-Modal Problem Input
6 | - Upload images of coding problems (supports PNG, JPG, JPEG)
7 | - Type problems in natural language
8 | - Automatic problem extraction from images
9 | - Interactive problem processing
10 |
11 | #### Intelligent Code Generation
12 | - Optimal solution generation with best time/space complexity
13 | - Clean, documented Python code output
14 | - Type hints and proper documentation
15 | - Edge case handling
16 |
17 | #### Secure Code Execution
18 | - Sandboxed code execution environment
19 | - Real-time execution results
20 | - Error handling and explanations
21 | - 30-second execution timeout protection
22 |
23 | #### Multi-Agent Architecture
24 | - Vision Agent (Gemini-2.0-flash) for image processing
25 | - Coding Agent (OpenAI o3-mini) for solution generation
26 | - Execution Agent (OpenAI) for code running and result analysis
27 | - E2B Sandbox for secure code execution
28 |
29 | ## How to Run
30 |
31 | Follow the steps below to set up and run the application:
32 | - Get an OpenAI API key from: https://platform.openai.com/
33 | - Get a Google (Gemini) API key from: https://makersuite.google.com/app/apikey
34 | - Get an E2B API key from: https://e2b.dev/docs/getting-started/api-key
35 |
36 | 1. **Clone the Repository**
37 | ```bash
38 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
39 | cd awesome-llm-apps/advanced_ai_agents/multi_agent_apps/agent_teams/multimodal_coding_agent_team
40 | ```
41 |
42 | 2. **Install the dependencies**
43 | ```bash
44 | pip install -r requirements.txt
45 | ```
46 |
47 | 3. **Run the Streamlit app**
48 | ```bash
49 | streamlit run ai_coding_agent_o3.py
50 | ```
51 |
52 | 4. **Configure API Keys**
53 | - Enter your API keys in the sidebar
54 | - All three keys (OpenAI, Gemini, E2B) are required for full functionality
55 |
56 | ## Usage
57 | 1. Upload an image of a coding problem OR type your problem description
58 | 2. Click "Generate & Execute Solution"
59 | 3. View the generated solution with full documentation
60 | 4. See execution results and any generated files
61 | 5. Review any error messages or execution timeouts
62 |
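63 | ## Sandbox Execution Sketch
64 |
65 | A minimal sketch of the sandboxed execution step, assuming the v1-style `e2b-code-interpreter` API (`Sandbox.run_code`); the vision and coding agents are omitted:
66 |
67 | ```python
68 | from e2b_code_interpreter import Sandbox
69 |
70 | generated_code = "print(sum(range(10)))"  # stands in for the o3-mini output
71 |
72 | # Requires E2B_API_KEY in the environment
73 | with Sandbox() as sandbox:
74 |     execution = sandbox.run_code(generated_code)
75 |
76 | print(execution.logs.stdout)  # captured stdout lines
77 | print(execution.error)        # None on success, traceback details otherwise
78 | ```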
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/multimodal_coding_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | e2b-code-interpreter
3 | agno
4 | Pillow
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/multimodal_design_agent_team/README.md:
--------------------------------------------------------------------------------
1 | # Multimodal AI Design Agent Team
2 |
3 | A Streamlit application that provides comprehensive design analysis using a team of specialized AI agents powered by Google's Gemini model.
4 |
5 | This application leverages multiple specialized AI agents to provide comprehensive analysis of UI/UX designs of your product and your competitors, combining visual understanding, user experience evaluation, and market research insights.
6 |
7 | ## Features
8 |
9 | - **Specialized Design AI Agent Team**
10 |
11 | - 🎨 **Visual Design Agent**: Evaluates design elements, patterns, color schemes, typography, and visual hierarchy
12 | - 🔄 **UX Analysis Agent**: Assesses user flows, interaction patterns, usability, and accessibility
13 | - 📊 **Market Analysis Agent**: Provides market insights, competitor analysis, and positioning recommendations
14 |
15 | - **Multiple Analysis Types**: Choose from Visual Design, UX, and Market Analysis
16 | - **Comparative Analysis**: Upload competitor designs for comparative insights
17 | - **Customizable Focus Areas**: Select specific aspects for detailed analysis
18 | - **Context-Aware**: Provide additional context for more relevant insights
19 | - **Real-time Processing**: Get instant analysis with progress indicators
20 | - **Structured Output**: Receive well-organized, actionable insights
21 |
22 | ## How to Run
23 |
24 | 1. **Setup Environment**
25 | ```bash
26 | # Clone the repository
27 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
28 | cd awesome-llm-apps/advanced_ai_agents/multi_agent_apps/agent_teams/multimodal_design_agent_team
29 |
30 | # Create and activate virtual environment (optional)
31 | python -m venv venv
32 | source venv/bin/activate # On Windows: venv\Scripts\activate
33 |
34 | # Install dependencies
35 | pip install -r requirements.txt
36 | ```
37 |
38 | 2. **Get API Key**
39 | - Visit [Google AI Studio](https://aistudio.google.com/apikey)
40 | - Generate an API key
41 |
42 | 3. **Run the Application**
43 | ```bash
44 | streamlit run design_agent_team.py
45 | ```
46 |
47 | 4. **Use the Application**
48 | - Enter your Gemini API key in the sidebar
49 | - Upload design files (supported formats: JPG, JPEG, PNG)
50 | - Select analysis types and focus areas
51 | - Add context if needed
52 | - Click "Run Analysis" to get insights
53 |
54 |
55 | ## Technical Stack
56 |
57 | - **Frontend**: Streamlit
58 | - **AI Model**: Google Gemini 2.0
59 | - **Image Processing**: Pillow
60 | - **Market Research**: DuckDuckGo Search API
61 | - **Framework**: Agno (formerly Phidata) for agent orchestration
62 |
63 | ## Tips for Best Results
64 |
65 | - Upload clear, high-resolution images
66 | - Include multiple views/screens for better context
67 | - Add competitor designs for comparative analysis
68 | - Provide specific context about your target audience
69 |
70 |
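71 | ## Sketch: A Gemini Vision Agent
72 |
73 | A minimal sketch of one of the three agents, assuming Agno's Gemini wrapper and image-input support (`agno.models.google.Gemini`, `agno.media.Image`); the full app runs three such agents and merges their reports:
74 |
75 | ```python
76 | from agno.agent import Agent
77 | from agno.media import Image
78 | from agno.models.google import Gemini
79 |
80 | visual_design_agent = Agent(
81 |     name="Visual Design Agent",
82 |     model=Gemini(id="gemini-2.0-flash", api_key="<GEMINI_API_KEY>"),
83 |     instructions=["Evaluate layout, color, typography, and visual hierarchy."],
84 | )
85 |
86 | response = visual_design_agent.run(
87 |     "Critique this landing page design.",
88 |     images=[Image(filepath="design.png")],
89 | )
90 | print(response.content)
91 | ```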
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/agent_teams/multimodal_design_agent_team/requirements.txt:
--------------------------------------------------------------------------------
1 | google-generativeai==0.8.3
2 | streamlit==1.41.1
3 | agno
4 | Pillow==11.0.0
5 | duckduckgo-search==6.3.7
6 |
7 |
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/ai_aqi_analysis_agent/README.md:
--------------------------------------------------------------------------------
1 | # 🌍 AQI Analysis Agent
2 |
3 | The AQI Analysis Agent is a powerful air quality monitoring and health recommendation tool powered by Firecrawl and Agno's AI Agent framework. This app helps users make informed decisions about outdoor activities by analyzing real-time air quality data and providing personalized health recommendations.
4 |
5 | ## Features
6 |
7 | - **Multi-Agent System**
8 | - **AQI Analyzer**: Fetches and processes real-time air quality data
9 | - **Health Recommendation Agent**: Generates personalized health advice
10 |
11 | - **Air Quality Metrics**:
12 | - Overall Air Quality Index (AQI)
13 | - Particulate Matter (PM2.5 and PM10)
14 | - Carbon Monoxide (CO) levels
15 | - Temperature
16 | - Humidity
17 | - Wind Speed
18 |
19 | - **Comprehensive Analysis**:
20 | - Real-time data visualization
21 | - Health impact assessment
22 | - Activity safety recommendations
23 | - Best time suggestions for outdoor activities
24 | - Weather condition correlations
25 |
26 | - **Interactive Features**:
27 | - Location-based analysis
28 | - Medical condition considerations
29 | - Activity-specific recommendations
30 | - Downloadable reports
31 | - Example queries for quick testing
32 |
33 | ## How to Run
34 |
35 | Follow these steps to set up and run the application:
36 |
37 | 1. **Clone the Repository**:
38 | ```bash
39 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
40 | cd awesome-llm-apps/advanced_ai_agents/multi_agent_apps/ai_aqi_analysis_agent
41 | ```
42 |
43 | 2. **Install the dependencies**:
44 | ```bash
45 | pip install -r requirements.txt
46 | ```
47 |
48 | 3. **Set up your API keys**:
49 | - Get an OpenAI API key from: https://platform.openai.com/api-keys
50 | - Get a Firecrawl API key from: [Firecrawl website](https://www.firecrawl.dev/app/api-keys)
51 |
52 | 4. **Run the Gradio app**:
53 | ```bash
54 | python ai_aqi_analysis_agent_gradio.py
55 | ```
56 |
57 | 5. **Access the Web Interface**:
58 | - The terminal will display two URLs:
59 | - Local URL: `http://127.0.0.1:7860` (for local access)
60 | - Public URL: `https://xxx-xxx-xxx.gradio.live` (for temporary public access)
61 | - Click on either URL to open the web interface in your browser
62 |
63 | ## Usage
64 |
65 | 1. Enter your API keys in the API Configuration section
66 | 2. Input location details:
67 | - City name
68 | - State (optional for Union Territories/US cities)
69 | - Country
70 | 3. Provide personal information:
71 | - Medical conditions (optional)
72 | - Planned outdoor activity
73 | 4. Click "Analyze & Get Recommendations" to receive:
74 | - Current air quality data
75 | - Health impact analysis
76 | - Activity safety recommendations
77 | 5. Try the example queries for quick testing
78 |
79 | ## Note
80 |
81 | The air quality data is fetched using Firecrawl's web scraping capabilities. Due to caching and rate limiting, the data might not always match real-time values on the website. For the most accurate real-time data, consider checking the source website directly.
82 |
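83 | ## Data-Fetch Sketch
84 |
85 | A minimal sketch of the fetch-then-analyze flow described above (Firecrawl for scraping, an Agno agent for recommendations); the URL and agent names are illustrative, not the app's actual code:
86 |
87 | ```python
88 | from firecrawl import FirecrawlApp
89 | from agno.agent import Agent
90 | from agno.models.openai import OpenAIChat
91 |
92 | firecrawl = FirecrawlApp(api_key="<FIRECRAWL_API_KEY>")
93 | page = firecrawl.scrape_url("https://www.aqi.in/dashboard/india/maharashtra/mumbai")
94 |
95 | health_agent = Agent(
96 |     name="Health Recommendation Agent",
97 |     model=OpenAIChat(id="gpt-4o"),
98 |     instructions=["Give activity-safety advice based on the provided AQI data."],
99 | )
100 | print(health_agent.run(f"AQI page:\n{page}\n\nIs a morning run safe today?").content)
101 | ```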
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/ai_aqi_analysis_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | openai
3 | firecrawl-py==1.9.0
4 | gradio==5.9.1
5 | pydantic
6 | dataclasses
7 |
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/ai_financial_coach_agent/.env:
--------------------------------------------------------------------------------
1 | GOOGLE_API_KEY=your_gemini_api_key_here
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/ai_financial_coach_agent/README.md:
--------------------------------------------------------------------------------
1 | # AI Financial Coach Agent with Google ADK 💰
2 |
3 | The **AI Financial Coach** is a personalized financial advisor powered by Google's ADK (Agent Development Kit) framework. This app provides comprehensive financial analysis and recommendations based on user inputs including income, expenses, debts, and financial goals.
4 |
5 | ## Features
6 |
7 | - **Multi-Agent Financial Analysis System**
8 | - Budget Analysis Agent: Analyzes spending patterns and recommends optimizations
9 | - Savings Strategy Agent: Creates personalized savings plans and emergency fund strategies
10 | - Debt Reduction Agent: Develops optimized debt payoff strategies using avalanche and snowball methods
11 |
12 | - **Expense Analysis**:
13 | - Supports both CSV upload and manual expense entry
14 | - CSV transaction analysis with date, category, and amount tracking
15 | - Visual breakdown of spending by category
16 | - Automated expense categorization and pattern detection
17 |
18 | - **Savings Recommendations**:
19 | - Emergency fund sizing and building strategies
20 | - Custom savings allocations across different goals
21 | - Practical automation techniques for consistent saving
22 | - Progress tracking and milestone recommendations
23 |
24 | - **Debt Management**:
25 | - Multiple debt handling with interest rate optimization
26 | - Comparison between avalanche and snowball methods
27 | - Visual debt payoff timeline and interest savings analysis
28 | - Actionable debt reduction recommendations
29 |
30 | - **Interactive Visualizations**:
31 | - Pie charts for expense breakdown
32 | - Bar charts for income vs. expenses
33 | - Debt comparison graphs
34 | - Progress tracking metrics
35 |
36 |
37 | ## How to Run
38 |
39 | Follow the steps below to set up and run the application:
40 |
41 | 1. **Clone the Repository**:
42 |    ```bash
43 |    git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
44 |    cd awesome-llm-apps/advanced_ai_agents/multi_agent_apps/ai_financial_coach_agent
45 |    ```
46 |
47 | 2. **Get API Key**:
48 |    - Get a free Gemini API Key from Google AI Studio: https://aistudio.google.com/apikey
49 |    - Create a `.env` file in the project root and add your API key:
50 |    ```
51 |    GOOGLE_API_KEY=your_api_key_here
52 |    ```
53 |
54 | 3. **Install Dependencies**:
55 | ```bash
56 | pip install -r requirements.txt
57 | ```
58 |
59 | 4. **Run the Streamlit App**:
60 | ```bash
61 | streamlit run ai_financial_coach_agent.py
62 | ```
63 |
64 | ## CSV File Format
65 |
66 | The application accepts CSV files with the following required columns:
67 | - `Date`: Transaction date in YYYY-MM-DD format
68 | - `Category`: Expense category
69 | - `Amount`: Transaction amount (supports currency symbols and comma formatting)
70 |
71 | Example:
72 | ```csv
73 | Date,Category,Amount
74 | 2024-01-01,Housing,1200.00
75 | 2024-01-02,Food,150.50
76 | 2024-01-03,Transportation,45.00
77 | ```
78 |
79 | A template CSV file can be downloaded directly from the application's sidebar.
80 |
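81 | A short sketch of how such a CSV can be normalized with pandas before analysis (illustrative; the app's own parsing may differ):
82 |
83 | ```python
84 | import pandas as pd
85 |
86 | df = pd.read_csv("expenses.csv")
87 | # Strip currency symbols and thousands separators, then convert to float
88 | df["Amount"] = (df["Amount"].astype(str)
89 |                 .str.replace(r"[^0-9.\-]", "", regex=True)
90 |                 .astype(float))
91 | df["Date"] = pd.to_datetime(df["Date"], format="%Y-%m-%d")
92 | print(df.groupby("Category")["Amount"].sum())
93 | ```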
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/ai_financial_coach_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | google-adk==0.1.0
2 | streamlit
3 | pandas==2.1.1
4 | matplotlib==3.8.0
5 | numpy==1.26.0
6 | python-dotenv==1.0.0
7 | plotly==5.18.0
9 |
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/ai_mental_wellbeing_agent/README.md:
--------------------------------------------------------------------------------
1 | # AI Mental Wellbeing Agent Team 🧠
2 |
3 | The AI Mental Wellbeing Agent Team is a supportive mental health assessment and guidance system powered by [AG2](https://github.com/ag2ai/ag2?tab=readme-ov-file) (formerly AutoGen). The app provides personalized mental health support by coordinating specialized AI agents, each focusing on a different aspect of care, based on user inputs such as emotional state, stress levels, sleep patterns, and current symptoms. It is built on AG2's swarm feature, run through the `initiate_swarm_chat()` method.
4 |
5 | ## Features
6 |
7 | - **Specialized Mental Wellbeing Support Team**
8 | - 🧠 **Assessment Agent**: Analyzes emotional state and psychological needs with clinical precision and empathy
9 | - 🎯 **Action Agent**: Creates immediate action plans and connects users with appropriate resources
10 | - 🔄 **Follow-up Agent**: Designs long-term support strategies and prevention plans
11 |
12 | - **Comprehensive Mental Wellbeing Support**:
13 | - Detailed psychological assessment
14 | - Immediate coping strategies
15 | - Resource recommendations
16 | - Long-term support planning
17 | - Crisis prevention strategies
18 | - Progress monitoring systems
19 |
20 | - **Customizable Input Parameters**:
21 | - Current emotional state
22 | - Sleep patterns
23 | - Stress levels
24 | - Support system information
25 | - Recent life changes
26 | - Current symptoms
27 |
28 | - **Interactive Results**:
29 | - Real-time assessment summaries
30 | - Detailed recommendations in expandable sections
31 | - Clear action steps and resources
32 | - Long-term support strategies
33 |
34 | ## How to Run
35 |
36 | Follow these steps to set up and run the application:
37 |
38 | 1. **Clone the Repository**:
39 | ```bash
40 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
41 | cd awesome-llm-apps/advanced_ai_agents/multi_agent_apps/ai_mental_wellbeing_agent
42 | ```
43 |
44 | 2. **Install Dependencies**:
45 | ```bash
46 | pip install -r requirements.txt
47 | ```
48 |
49 | 3. **Create Environment File**:
50 | Create a `.env` file in the project directory:
51 | ```bash
52 | echo "AUTOGEN_USE_DOCKER=0" > .env
53 | ```
54 | This disables Docker requirement for code execution in AutoGen.
55 |
56 | 4. **Set Up OpenAI API Key**:
57 | - Obtain an OpenAI API key from [OpenAI's platform](https://platform.openai.com)
58 | - You'll input this key in the app's sidebar when running
59 |
60 | 5. **Run the Streamlit App**:
61 | ```bash
62 | streamlit run ai_mental_wellbeing_agent.py
63 | ```
64 |
65 |
66 | ## ⚠️ Important Notice
67 |
68 | This application is a supportive tool and does not replace professional mental health care. If you're experiencing thoughts of self-harm or severe crisis:
69 |
70 | - Call National Crisis Hotline: 988
71 | - Call Emergency Services: 911
72 | - Seek immediate professional help
73 |
74 |
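75 | ## Swarm Coordination Sketch
76 |
77 | A minimal sketch of the `initiate_swarm_chat()` coordination mentioned above, assuming AG2's swarm interface; agent names, prompts, and config are illustrative, not the app's actual code:
78 |
79 | ```python
80 | from autogen import AfterWorkOption, ConversableAgent, initiate_swarm_chat
81 |
82 | llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "<OPENAI_API_KEY>"}]}
83 |
84 | assessment_agent = ConversableAgent(
85 |     name="assessment_agent", llm_config=llm_config,
86 |     system_message="Assess the user's emotional state with clinical empathy.")
87 | action_agent = ConversableAgent(
88 |     name="action_agent", llm_config=llm_config,
89 |     system_message="Propose an immediate, concrete action plan and resources.")
90 | followup_agent = ConversableAgent(
91 |     name="followup_agent", llm_config=llm_config,
92 |     system_message="Design a long-term support and prevention strategy.")
93 |
94 | chat_result, context, last_agent = initiate_swarm_chat(
95 |     initial_agent=assessment_agent,
96 |     agents=[assessment_agent, action_agent, followup_agent],
97 |     messages="I've been sleeping four hours a night and feel overwhelmed.",
98 |     after_work=AfterWorkOption.TERMINATE,
99 | )
100 | ```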
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/ai_mental_wellbeing_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | autogen-agentchat
2 | autogen-ext
3 | pyautogen
4 | streamlit
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/multi_agent_researcher/README.md:
--------------------------------------------------------------------------------
1 | ## 📰 Multi-Agent AI Researcher
2 | This Streamlit app empowers you to research top stories and users on HackerNews using a team of AI assistants with GPT-4o.
3 |
4 | ### Features
5 | - Research top stories and users on HackerNews
6 | - Utilize a team of AI assistants specialized in story and user research
7 | - Generate blog posts, reports, and social media content based on your research queries
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/advanced_ai_agents/multi_agent_apps/multi_agent_researcher
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Run the Streamlit App
27 | ```bash
28 | streamlit run research_agent.py
29 | ```
30 |
31 | ### How it works?
32 |
33 | - Upon running the app, you will be prompted to enter your OpenAI API key. This key is used to authenticate and access the OpenAI language models.
34 | - Once you provide a valid API key, three instances of the Agno `Agent` class are created:
35 | - **story_researcher**: Specializes in researching HackerNews stories.
36 | - **user_researcher**: Focuses on researching HackerNews users and reading articles from URLs.
37 | - **hn_assistant**: A team assistant that coordinates the research efforts of the story and user researchers.
38 |
39 | - Enter your research query in the provided text input field. This could be a topic, keyword, or specific question related to HackerNews stories or users.
40 | - The hn_assistant will orchestrate the research process by delegating tasks to the story_researcher and user_researcher based on your query.
41 | - The AI assistants will gather relevant information from HackerNews using the provided tools and generate a comprehensive response using the GPT-4o language model.
42 | - The generated content, which could be a blog post, report, or social media post, will be displayed in the app for you to review and use.
43 |
44 |
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/multi_agent_researcher/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | openai
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/multi_agent_researcher/research_agent.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from agno.agent import Agent
4 | from agno.tools.hackernews import HackerNewsTools
5 | from agno.models.openai import OpenAIChat
6 |
7 | # Set up the Streamlit app
8 | st.title("Multi-Agent AI Researcher 🔍🤖")
9 | st.caption("This app allows you to research top stories and users on HackerNews and write blogs, reports and social posts.")
10 |
11 | # Get OpenAI API key from user
12 | openai_api_key = st.text_input("OpenAI API Key", type="password")
13 |
14 | if openai_api_key:
15 |     # Create the research agents, passing the user's API key to each model
16 |     story_researcher = Agent(
17 |         name="HackerNews Story Researcher",
18 |         role="Researches hackernews stories.",
19 |         tools=[HackerNewsTools()], model=OpenAIChat(id="gpt-4o", api_key=openai_api_key),
20 |     )
21 |
22 |     user_researcher = Agent(
23 |         name="HackerNews User Researcher",
24 |         role="Researches hackernews users and reads articles from URLs.",
25 |         tools=[HackerNewsTools()], model=OpenAIChat(id="gpt-4o", api_key=openai_api_key),
26 |     )
27 |
28 | hn_assistant = Agent(
29 | name="Hackernews Team",
30 | team=[story_researcher, user_researcher],
31 | model=OpenAIChat(
32 | id="gpt-4o",
33 | max_tokens=1024,
34 | temperature=0.5,
35 | api_key=openai_api_key
36 | )
37 | )
38 |
39 | # Input field for the report query
40 | query = st.text_input("Enter your report query")
41 |
42 | if query:
43 | # Get the response from the assistant
44 | response = hn_assistant.run(query, stream=False)
45 | st.write(response.content)
--------------------------------------------------------------------------------
/advanced_ai_agents/multi_agent_apps/multi_agent_researcher/research_agent_llama3.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from agno.agent import Agent
4 | from agno.tools.hackernews import HackerNewsTools
5 | from agno.models.ollama import Ollama
6 |
7 | # Set up the Streamlit app
8 | st.title("Multi-Agent AI Researcher using Llama-3 🔍🤖")
9 | st.caption("This app allows you to research top stories and users on HackerNews and write blogs, reports and social posts.")
10 |
11 | # Create instances of the Assistant
12 | story_researcher = Agent(
13 | name="HackerNews Story Researcher",
14 | role="Researches hackernews stories and users.",
15 |     tools=[HackerNewsTools()],
16 | model=Ollama(id="llama3.2", max_tokens=1024)
17 | )
18 |
19 | user_researcher = Agent(
20 | name="HackerNews User Researcher",
21 | role="Reads articles from URLs.",
22 |     tools=[HackerNewsTools()],
23 | model=Ollama(id="llama3.2", max_tokens=1024)
24 | )
25 |
26 | hn_assistant = Agent(
27 | name="Hackernews Team",
28 | team=[story_researcher, user_researcher],
29 | model=Ollama(id="llama3.2", max_tokens=1024)
30 | )
31 |
32 | # Input field for the report query
33 | query = st.text_input("Enter your report query")
34 |
35 | if query:
36 | # Get the response from the assistant
37 | response = hn_assistant.run(query, stream=False)
38 | st.write(response.content)
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_customer_support_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🛒 AI Customer Support Agent with Memory
2 | This Streamlit app implements an AI-powered customer support agent that works over synthetic customer data generated with GPT-4o. The agent uses OpenAI's GPT-4o model and maintains a memory of past interactions using the Mem0 library with Qdrant as the vector store.
3 |
4 | ### Features
5 |
6 | - Chat interface for interacting with the AI customer support agent
7 | - Persistent memory of customer interactions and profiles
8 | - Synthetic data generation for testing and demonstration
9 | - Utilizes OpenAI's GPT-4o model for intelligent responses
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/advanced_ai_agents/single_agent_apps/ai_customer_support_agent
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Ensure Qdrant is running:
26 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
27 |
28 | ```bash
29 | docker pull qdrant/qdrant
30 |
31 | docker run -p 6333:6333 -p 6334:6334 \
32 | -v "$(pwd)/qdrant_storage:/qdrant/storage:z" \
33 | qdrant/qdrant
34 | ```
35 |
36 | 4. Run the Streamlit App
37 | ```bash
38 | streamlit run customer_support_agent.py
39 | ```
40 |
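41 | ### Memory Layer Sketch
42 |
43 | A minimal sketch of the Mem0 + Qdrant memory layer described above (config keys follow `mem0ai` 0.1.x and may differ from the app's actual setup):
44 |
45 | ```python
46 | from mem0 import Memory
47 |
48 | config = {
49 |     "vector_store": {
50 |         "provider": "qdrant",
51 |         "config": {"host": "localhost", "port": 6333},
52 |     },
53 | }
54 | memory = Memory.from_config(config)
55 |
56 | # Store a fact from a past conversation, keyed by customer
57 | memory.add("Customer prefers email follow-ups about order #1234.",
58 |            user_id="customer_123")
59 |
60 | # Later, retrieve relevant memories to ground the agent's reply
61 | hits = memory.search("How should I follow up with this customer?",
62 |                      user_id="customer_123")
63 | ```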
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_customer_support_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_deep_research_agent/README.md:
--------------------------------------------------------------------------------
1 | # Deep Research Agent with OpenAI Agents SDK and Firecrawl
2 |
3 | A powerful research assistant that leverages OpenAI's Agents SDK and Firecrawl's deep research capabilities to perform comprehensive web research on any topic or question.
4 |
5 | ## Features
6 |
7 | - **Deep Web Research**: Automatically searches the web, extracts content, and synthesizes findings
8 | - **Enhanced Analysis**: Uses OpenAI's Agents SDK to elaborate on research findings with additional context and insights
9 | - **Interactive UI**: Clean Streamlit interface for easy interaction
10 | - **Downloadable Reports**: Export research findings as markdown files
11 |
12 | ## How It Works
13 |
14 | 1. **Input Phase**: User provides a research topic and API credentials
15 | 2. **Research Phase**: The tool uses Firecrawl to search the web and extract relevant information
16 | 3. **Analysis Phase**: An initial research report is generated based on the findings
17 | 4. **Enhancement Phase**: A second agent elaborates on the initial report, adding depth and context
18 | 5. **Output Phase**: The enhanced report is presented to the user and available for download
19 |
20 | ## Requirements
21 |
22 | - Python 3.8+
23 | - OpenAI API key
24 | - Firecrawl API key
25 | - Required Python packages (see `requirements.txt`)
26 |
27 | ## Installation
28 |
29 | 1. Clone this repository:
30 | ```bash
31 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
32 | cd awesome-llm-apps/advanced_ai_agents/single_agent_apps/ai_deep_research_agent
33 | ```
34 |
35 | 2. Install the required packages:
36 | ```bash
37 | pip install -r requirements.txt
38 | ```
39 |
40 | ## Usage
41 |
42 | 1. Run the Streamlit app:
43 | ```bash
44 | streamlit run deep_research_openai.py
45 | ```
46 |
47 | 2. Enter your API keys in the sidebar:
48 | - OpenAI API key
49 | - Firecrawl API key
50 |
51 | 3. Enter your research topic in the main input field
52 |
53 | 4. Click "Start Research" and wait for the process to complete
54 |
55 | 5. View and download your enhanced research report
56 |
57 | ## Example Research Topics
58 |
59 | - "Latest developments in quantum computing"
60 | - "Impact of climate change on marine ecosystems"
61 | - "Advancements in renewable energy storage"
62 | - "Ethical considerations in artificial intelligence"
63 | - "Emerging trends in remote work technologies"
64 |
65 | ## Technical Details
66 |
67 | The application uses two specialized agents:
68 |
69 | 1. **Research Agent**: Utilizes Firecrawl's deep research endpoint to gather comprehensive information from multiple web sources.
70 |
71 | 2. **Elaboration Agent**: Enhances the initial research by adding detailed explanations, examples, case studies, and practical implications.
72 |
73 | The Firecrawl deep research tool performs multiple iterations of web searches, content extraction, and analysis to provide thorough coverage of the topic.
74 |
75 |
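76 | ## Agent Wiring Sketch
77 | 
78 | A condensed sketch of how the two-phase flow could be wired with the Agents SDK. The `deep_research` method name on `FirecrawlApp` is an assumption based on Firecrawl's deep research endpoint, and the instructions are illustrative; the app's source is the reference.
79 | 
80 | ```python
81 | # Hedged sketch of the research -> elaboration handoff; not the app's exact code.
82 | from agents import Agent, Runner, function_tool
83 | from firecrawl import FirecrawlApp
84 | 
85 | @function_tool
86 | def deep_research(query: str) -> str:
87 |     """Run Firecrawl's deep research on a query and return the raw findings."""
88 |     firecrawl = FirecrawlApp(api_key="fc-YOUR-KEY")  # placeholder key
89 |     result = firecrawl.deep_research(query=query)  # method name assumed
90 |     return str(result)
91 | 
92 | research_agent = Agent(
93 |     name="Research Agent",
94 |     instructions="Search deeply and write an initial research report.",
95 |     tools=[deep_research],
96 | )
97 | elaboration_agent = Agent(
98 |     name="Elaboration Agent",
99 |     instructions="Expand the report with examples, case studies, and implications.",
100 | )
101 | 
102 | initial = Runner.run_sync(research_agent, "Latest developments in quantum computing")
103 | enhanced = Runner.run_sync(elaboration_agent, initial.final_output)
104 | print(enhanced.final_output)
105 | ```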
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_deep_research_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | openai-agents
2 | firecrawl
3 | streamlit
4 | firecrawl-py
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_health_fitness_agent/README.md:
--------------------------------------------------------------------------------
1 | # AI Health & Fitness Planner Agent 🏋️‍♂️
2 |
3 | The **AI Health & Fitness Planner** is a personalized health and fitness agent powered by the Agno AI Agent framework. This app generates tailored dietary and fitness plans based on user inputs such as age, weight, height, activity level, dietary preferences, and fitness goals.
4 |
5 | ## Features
6 |
7 | - **Health Agent and Fitness Agent**
8 | - The app has two Agno agents that specialize in dietary advice and fitness/workout advice respectively.
9 |
10 | - **Personalized Dietary Plans**:
11 | - Generates detailed meal plans (breakfast, lunch, dinner, and snacks).
12 | - Includes important considerations like hydration, electrolytes, and fiber intake.
13 | - Supports various dietary preferences like Keto, Vegetarian, Low Carb, etc.
14 |
15 | - **Personalized Fitness Plans**:
16 | - Provides customized exercise routines based on fitness goals.
17 | - Covers warm-ups, main workouts, and cool-downs.
18 | - Includes actionable fitness tips and progress tracking advice.
19 |
20 | - **Interactive Q&A**: Allows users to ask follow-up questions about their plans.
21 |
22 |
23 | ## Requirements
24 |
25 | The application requires the following Python libraries:
26 |
27 | - `agno`
28 | - `google-generativeai`
29 | - `streamlit`
30 |
31 | Install these dependencies at their pinned versions via the `requirements.txt` file.
32 |
33 | ## How to Run
34 |
35 | Follow the steps below to set up and run the application:
36 | Before anything else, please get a free Gemini API key from Google AI Studio: https://aistudio.google.com/apikey
37 |
38 | 1. **Clone the Repository**:
39 | ```bash
40 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
41 | cd awesome-llm-apps/advanced_ai_agents/single_agent_apps/ai_health_fitness_agent
42 | ```
43 |
44 | 2. **Install the dependencies**
45 | ```bash
46 | pip install -r requirements.txt
47 | ```
48 | 3. **Run the Streamlit app**
49 | ```bash
50 | streamlit run health_agent.py
51 | ```
52 |
53 |
54 |
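55 | ## Agent Construction Sketch
56 | 
57 | For orientation, the two specialist agents might be constructed roughly as below with Agno's Gemini model class. This is a hedged sketch rather than the app's exact code: the model id, instructions, and prompt are illustrative.
58 | 
59 | ```python
60 | # Hedged sketch of the two specialist agents; ids and prompts are illustrative.
61 | from agno.agent import Agent
62 | from agno.models.google import Gemini
63 | 
64 | model = Gemini(id="gemini-1.5-flash", api_key="YOUR_GEMINI_API_KEY")  # id illustrative
65 | 
66 | dietary_agent = Agent(
67 |     name="Dietary Expert",
68 |     model=model,
69 |     instructions=["Create a personalized meal plan from the user's profile and preferences."],
70 | )
71 | fitness_agent = Agent(
72 |     name="Fitness Expert",
73 |     model=model,
74 |     instructions=["Design warm-ups, workouts, and cool-downs matched to the user's goals."],
75 | )
76 | 
77 | diet_plan = dietary_agent.run("35-year-old, 80 kg, vegetarian, goal: weight loss")
78 | print(diet_plan.content)
79 | ```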
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_health_fitness_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | google-generativeai==0.8.3
2 | streamlit==1.40.2
3 | agno
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_investment_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 📈 AI Investment Agent
2 | This Streamlit app is an AI-powered investment agent built with Agno's AI Agent framework that compares the performance of two stocks and generates detailed reports. By using GPT-4o with Yahoo Finance data, this app provides valuable insights to help you make informed investment decisions.
3 |
4 | ### Features
5 | - Compare the performance of two stocks
6 | - Retrieve comprehensive company information
7 | - Get the latest company news and analyst recommendations
8 | - Generate detailed comparison reports to guide investment decisions
9 |
10 | ### How to get Started?
11 |
12 | 1. Clone the GitHub repository
13 |
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/advanced_ai_agents/single_agent_apps/ai_investment_agent
17 | ```
18 | 2. Install the required dependencies:
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 | 3. Get your OpenAI API Key
24 |
25 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
26 |
27 | 4. Run the Streamlit App
28 | ```bash
29 | streamlit run investment_agent.py
30 | ```
31 |
32 | ### How it Works?
33 |
34 | - Upon running the app, you will be prompted to enter your OpenAI API key. This key is used to authenticate and access the OpenAI language model.
35 | - Once you provide a valid API key, an instance of the Assistant class is created. This assistant utilizes the GPT-4o language model from OpenAI and the YFinanceTools for accessing stock data.
36 | - Enter the stock symbols of the two companies you want to compare in the provided text input fields.
37 | - The assistant will perform the following steps:
38 | - Retrieve real-time stock prices and historical data using YFinanceTools
39 | - Fetch the latest company news and analyst recommendations
40 | - Gather comprehensive company information
41 | - Generate a detailed comparison report using the GPT-4o language model
42 | - The generated report will be displayed in the app, providing you with valuable insights and analysis to guide your investment decisions.
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_investment_agent/investment_agent.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from agno.agent import Agent
3 | from agno.models.openai import OpenAIChat
4 | from agno.tools.yfinance import YFinanceTools
5 |
6 | st.title("AI Investment Agent 📈🤖")
7 | st.caption("This app allows you to compare the performance of two stocks and generate detailed reports.")
8 |
9 | openai_api_key = st.text_input("OpenAI API Key", type="password")
10 |
11 | if openai_api_key:
12 | assistant = Agent(
13 | model=OpenAIChat(id="gpt-4o", api_key=openai_api_key),
14 | tools=[
15 | YFinanceTools(stock_price=True, analyst_recommendations=True, stock_fundamentals=True)
16 | ],
17 | show_tool_calls=True,
18 | description="You are an investment analyst that researches stock prices, analyst recommendations, and stock fundamentals.",
19 | instructions=[
20 | "Format your response using markdown and use tables to display data where possible."
21 | ],
22 | )
23 |
24 | col1, col2 = st.columns(2)
25 | with col1:
26 | stock1 = st.text_input("Enter first stock symbol (e.g. AAPL)")
27 | with col2:
28 | stock2 = st.text_input("Enter second stock symbol (e.g. MSFT)")
29 |
30 | if stock1 and stock2:
31 | with st.spinner(f"Analyzing {stock1} and {stock2}..."):
32 | query = f"Compare both the stocks - {stock1} and {stock2} and make a detailed report for an investment trying to invest and compare these stocks"
33 | response = assistant.run(query, stream=False)
34 | st.markdown(response.content)
35 |
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_investment_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | openai
4 | yfinance
5 |
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_journalist_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🗞️ AI Journalist Agent
2 | This Streamlit app is an AI-powered journalist agent that generates high-quality articles using OpenAI GPT-4o. It automates the process of researching, writing, and editing articles, allowing you to create compelling content on any topic with ease.
3 |
4 | ### Features
5 | - Searches the web for relevant information on a given topic
6 | - Writes well-structured, informative, and engaging articles
7 | - Edits and refines the generated content to meet the high standards of the New York Times
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/advanced_ai_agents/single_agent_apps/ai_journalist_agent
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Get your SerpAPI Key
27 |
28 | - Sign up for a [SerpAPI account](https://serpapi.com/) and obtain your API key.
29 |
30 | 5. Run the Streamlit App
31 | ```bash
32 | streamlit run journalist_agent.py
33 | ```
34 |
35 | ### How it Works?
36 |
37 | The AI Journalist Agent utilizes three main components:
38 | - Searcher: Responsible for generating search terms based on the given topic and searching the web for relevant URLs using the SerpAPI.
39 | - Writer: Retrieves the text from the provided URLs using the Newspaper4k toolkit and writes a high-quality article based on the extracted information.
40 | - Editor: Coordinates the workflow between the Searcher and Writer, and performs final editing and refinement of the generated article.
41 |
42 |
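43 | ### Pipeline Sketch
44 | 
45 | A rough sketch of the Searcher/Writer/Editor pipeline in Agno. The tool module paths and instruction text here are assumptions for illustration; the app's own source is the reference.
46 | 
47 | ```python
48 | # Hedged sketch of the three-role pipeline; tool paths and prompts assumed.
49 | from agno.agent import Agent
50 | from agno.models.openai import OpenAIChat
51 | from agno.tools.newspaper4k import Newspaper4kTools
52 | from agno.tools.serpapi import SerpApiTools
53 | 
54 | searcher = Agent(
55 |     model=OpenAIChat(id="gpt-4o"),
56 |     tools=[SerpApiTools(api_key="SERPAPI_KEY")],
57 |     instructions=["Generate search terms for the topic and return the most relevant URLs."],
58 | )
59 | writer = Agent(
60 |     model=OpenAIChat(id="gpt-4o"),
61 |     tools=[Newspaper4kTools()],
62 |     instructions=["Read the given URLs and draft a well-structured, engaging article."],
63 | )
64 | editor = Agent(
65 |     model=OpenAIChat(id="gpt-4o"),
66 |     instructions=["Polish the draft to publication quality."],
67 | )
68 | 
69 | urls = searcher.run("advances in solid-state batteries").content
70 | draft = writer.run(f"Write an article from these sources: {urls}").content
71 | print(editor.run(f"Edit this draft: {draft}").content)
72 | ```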
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_journalist_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | openai
4 | google-search-results
5 | newspaper4k
6 | lxml_html_clean
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_lead_generation_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🎯 AI Lead Generation Agent - Powered by Firecrawl's Extract Endpoint
2 |
3 | The AI Lead Generation Agent automates the process of finding and qualifying potential leads from Quora. It uses Firecrawl's search and the new Extract endpoint to identify relevant user profiles, extract valuable information, and organize it into a structured format in Google Sheets. This agent helps sales and marketing teams efficiently build targeted lead lists while saving hours of manual research.
4 |
5 | ### Features
6 | - **Targeted Search**: Uses Firecrawl's search endpoint to find relevant Quora URLs based on your search criteria
7 | - **Intelligent Extraction**: Leverages Firecrawl's new Extract endpoint to pull user information from Quora profiles
8 | - **Automated Processing**: Formats extracted user information into a clean, structured format
9 | - **Google Sheets Integration**: Automatically creates and populates Google Sheets with lead information
10 | - **Customizable Criteria**: Allows you to define specific search parameters to find your ideal leads for your niche
11 |
12 | ### How to Get Started
13 | 1. **Clone the repository**:
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/advanced_ai_agents/single_agent_apps/ai_lead_generation_agent
17 | ```
18 | 2. **Install the required packages**:
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. **Set up Composio's Google Sheets integration**:
23 | - In the terminal, run: `composio add googlesheets`
24 | - In your Composio dashboard, create a new Google Sheets integration and make sure it is active in the active integrations/connections tab
25 |
26 | 4. **Set up your API keys**:
27 | - Get your Firecrawl API key from [Firecrawl's website](https://www.firecrawl.dev/app/api-keys)
28 | - Get your Composio API key from [Composio's website](https://composio.ai)
29 | - Get your OpenAI API key from [OpenAI's website](https://platform.openai.com/api-keys)
30 |
31 | 5. **Run the application**:
32 | ```bash
33 | streamlit run ai_lead_generation_agent.py
34 | ```
35 |
36 |
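37 | ### Google Sheets Wiring Sketch
38 | 
39 | The Google Sheets step goes through Composio's toolset. A hedged sketch of that wiring follows; the `GOOGLESHEETS_SHEET_FROM_JSON` action name is an assumption based on Composio's Google Sheets actions, so verify it in your Composio dashboard.
40 | 
41 | ```python
42 | # Hedged sketch: giving an Agno agent a Composio Google Sheets action.
43 | from agno.agent import Agent
44 | from agno.models.openai import OpenAIChat
45 | from composio_phidata import Action, ComposioToolSet
46 | 
47 | toolset = ComposioToolSet(api_key="COMPOSIO_API_KEY")
48 | sheets_tools = toolset.get_tools(actions=[Action.GOOGLESHEETS_SHEET_FROM_JSON])  # action name assumed
49 | 
50 | agent = Agent(
51 |     model=OpenAIChat(id="gpt-4o"),
52 |     tools=sheets_tools,
53 |     instructions=["Create a Google Sheet from the extracted lead data."],
54 | )
55 | agent.run("Create a sheet from this lead JSON: [...]")  # '[...]' stands in for real data
56 | ```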
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_lead_generation_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | firecrawl-py==1.9.0
2 | agno
3 | composio-phidata
4 | composio==0.1.1
5 | pydantic==2.10.5
6 | streamlit
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_meeting_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 📝 AI Meeting Preparation Agent
2 | This Streamlit application leverages multiple AI agents to create comprehensive meeting preparation materials. It uses OpenAI's GPT-4, Anthropic's Claude, and the Serper API for web searches to generate context analysis, industry insights, meeting strategies, and executive briefings.
3 |
4 | ### Features
5 |
6 | - Multi-agent AI system for thorough meeting preparation
7 | - Utilizes OpenAI's GPT-4 and Anthropic's Claude models
8 | - Web search capability using Serper API
9 | - Generates detailed context analysis, industry insights, meeting strategies, and executive briefings
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/advanced_ai_agents/single_agent_apps/ai_meeting_agent
18 | ```
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 | 3. Get your Anthropic API Key
25 |
26 | - Sign up for an [Anthropic account](https://console.anthropic.com) (or the LLM provider of your choice) and obtain your API key.
27 |
28 | 4. Get your Serper API Key
29 |
30 | - Sign up for a [Serper API account](https://serper.dev/) and obtain your API key.
31 |
32 | 5. Get your OpenAI API Key
33 |
34 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
35 |
36 | 6. Run the Streamlit App
37 | ```bash
38 | streamlit run meeting_agent.py
39 | ```
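40 | 
41 | ### Crew Sketch
42 | 
43 | Under the hood this is a CrewAI pipeline (see `requirements.txt`). The sketch below shows one such crew; the role, goal, and task text are illustrative rather than the app's exact definitions.
44 | 
45 | ```python
46 | # Hedged sketch of a CrewAI meeting-prep crew; roles and tasks illustrative.
47 | from crewai import Agent, Crew, Task
48 | from crewai_tools import SerperDevTool
49 | 
50 | search = SerperDevTool()  # reads SERPER_API_KEY from the environment
51 | 
52 | context_analyst = Agent(
53 |     role="Meeting Context Analyst",
54 |     goal="Research the company and attendees before the meeting",
55 |     backstory="An analyst who preps executives for high-stakes meetings.",
56 |     tools=[search],
57 | )
58 | 
59 | context_task = Task(
60 |     description="Analyze the context for a partnership meeting with Acme Corp.",
61 |     expected_output="A concise context analysis with key facts and open questions.",
62 |     agent=context_analyst,
63 | )
64 | 
65 | crew = Crew(agents=[context_analyst], tasks=[context_task])
66 | print(crew.kickoff())
67 | ```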
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_meeting_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | crewai
3 | crewai-tools
4 | openai
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_movie_production_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🎬 AI Movie Production Agent
2 | This Streamlit app is an AI-powered movie production assistant that helps bring your movie ideas to life using the Claude 3.5 Sonnet model. It automates the process of script writing and casting, allowing you to create compelling movie concepts with ease.
3 |
4 | ### Features
5 | - Generates script outlines based on your movie idea, genre, and target audience
6 | - Suggests suitable actors for main roles, considering their past performances and current availability
7 | - Provides a concise movie concept overview
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/advanced_ai_agents/single_agent_apps/ai_movie_production_agent
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your Anthropic API Key
23 |
24 | - Sign up for an [Anthropic account](https://console.anthropic.com) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Get your SerpAPI Key
27 |
28 | - Sign up for a [SerpAPI account](https://serpapi.com/) and obtain your API key.
29 |
30 | 5. Run the Streamlit App
31 | ```bash
32 | streamlit run movie_production_agent.py
33 | ```
34 |
35 | ### How it Works?
36 |
37 | The AI Movie Production Agent utilizes three main components:
38 | - **ScriptWriter**: Develops a compelling script outline with character descriptions and key plot points based on the given movie idea and genre.
39 | - **CastingDirector**: Suggests suitable actors for the main roles, considering their past performances and current availability.
40 | - **MovieProducer**: Oversees the entire process, coordinating between the ScriptWriter and CastingDirector, and providing a concise movie concept overview.
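41 | 
42 | ### Agent Sketch
43 | 
44 | A hedged sketch of how the three roles could be expressed as Agno agents on Claude 3.5 Sonnet. The model id, tool path, and prompts are illustrative assumptions, not the app's exact code.
45 | 
46 | ```python
47 | # Hedged sketch of the ScriptWriter/CastingDirector/MovieProducer trio.
48 | from agno.agent import Agent
49 | from agno.models.anthropic import Claude
50 | from agno.tools.serpapi import SerpApiTools
51 | 
52 | claude = Claude(id="claude-3-5-sonnet-20241022", api_key="ANTHROPIC_API_KEY")  # id illustrative
53 | 
54 | script_writer = Agent(
55 |     name="ScriptWriter",
56 |     model=claude,
57 |     instructions=["Outline a script with character descriptions and key plot points."],
58 | )
59 | casting_director = Agent(
60 |     name="CastingDirector",
61 |     model=claude,
62 |     tools=[SerpApiTools(api_key="SERPAPI_KEY")],  # web search for availability
63 |     instructions=["Suggest actors for the main roles, checking recent work and availability."],
64 | )
65 | producer = Agent(
66 |     name="MovieProducer",
67 |     model=claude,
68 |     instructions=["Combine the outline and casting ideas into a concise movie concept."],
69 | )
70 | 
71 | outline = script_writer.run("A sci-fi heist on a terraformed Mars, PG-13").content
72 | cast = casting_director.run(f"Cast the main roles: {outline}").content
73 | print(producer.run(f"Outline: {outline}\nCast ideas: {cast}").content)
74 | ```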
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_movie_production_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | anthropic
4 | google-search-results
5 | lxml_html_clean
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_personal_finance_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 💰 AI Personal Finance Planner
2 | This Streamlit app is an AI-powered personal finance planner that generates personalized financial plans using OpenAI GPT-4o. It automates the process of researching, planning, and creating tailored budgets, investment strategies, and savings goals, empowering you to take control of your financial future with ease.
3 |
4 | ### Features
5 | - Set your financial goals and provide details about your current financial situation
6 | - Use GPT-4o to generate intelligent and personalized financial advice
7 | - Receive customized budgets, investment plans, and savings strategies
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/advanced_ai_agents/single_agent_apps/ai_personal_finance_agent
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Get your SerpAPI Key
27 |
28 | - Sign up for a [SerpAPI account](https://serpapi.com/) and obtain your API key.
29 |
30 | 5. Run the Streamlit App
31 | ```bash
32 | streamlit run finance_agent.py
33 | ```
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_personal_finance_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | openai
4 | google-search-results
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_real_estate_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🏠 AI Real Estate Agent - Powered by Firecrawl's Extract Endpoint
2 |
3 | The AI Real Estate Agent automates property search and market analysis using Firecrawl's Extract endpoint and an Agno AI agent for analysis. It helps users find properties matching their criteria while providing detailed location trends and investment recommendations. This agent streamlines the property search process by combining data from multiple real estate websites and offering intelligent analysis.
4 |
5 | ### Features
6 | - **Smart Property Search**: Uses Firecrawl's Extract endpoint to find properties across multiple real estate websites
7 | - **Multi-Source Integration**: Aggregates data from 99acres, Housing.com, Square Yards, Nobroker, and MagicBricks
8 | - **Location Analysis**: Provides detailed price trends and investment insights for different localities
9 | - **AI-Powered Recommendations**: Uses GPT models to analyze properties and provide structured recommendations
10 | - **User-Friendly Interface**: Clean Streamlit UI for easy property search and results viewing
11 | - **Customizable Search**: Filter by city, property type, category, and budget
12 |
13 | ### How to Get Started
14 | 1. **Clone the repository**:
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/advanced_ai_agents/single_agent_apps/ai_real_estate_agent
18 | ```
19 |
20 | 2. **Install the required packages**:
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. **Set up your API keys**:
26 | - Get your Firecrawl API key from [Firecrawl's website](https://www.firecrawl.dev/app/api-keys)
27 | - Get your OpenAI API key from [OpenAI's website](https://platform.openai.com/api-keys)
28 |
29 | 4. **Run the application**:
30 | ```bash
31 | streamlit run ai_real_estate_agent.py
32 | ```
33 |
34 | ### Using the Agent
35 | 1. **Enter API Keys**:
36 | - Input your Firecrawl and OpenAI API keys in the sidebar
37 | - Keys are securely stored in the session state
38 |
39 | 2. **Set Search Criteria**:
40 | - Enter the city name
41 | - Select property category (Residential/Commercial)
42 | - Choose property type (Flat/Individual House)
43 | - Set maximum budget in Crores
44 |
45 | 3. **View Results**:
46 | - Property recommendations with detailed analysis
47 | - Location trends with investment insights
48 | - Expandable sections for easy reading
49 |
50 | ### Features in Detail
51 | - **Property Finding**:
52 | - Searches across multiple real estate websites
53 | - Returns 3-6 properties matching criteria
54 | - Provides detailed property information and analysis
55 |
56 | - **Location Analysis**:
57 | - Price trends for different localities
58 | - Rental yield analysis
59 | - Investment potential assessment
60 | - Top performing areas identification
61 |
62 |
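63 | ### Extraction Sketch
64 | 
65 | The core of the search is a single Extract call with a structured schema. A hedged sketch follows; the `extract` signature and the wildcard URL pattern are assumptions, so check your `firecrawl-py` version against Firecrawl's Extract docs.
66 | 
67 | ```python
68 | # Hedged sketch: structured property extraction with Firecrawl + pydantic.
69 | from firecrawl import FirecrawlApp
70 | from pydantic import BaseModel
71 | 
72 | class Property(BaseModel):
73 |     building_name: str
74 |     property_type: str
75 |     location: str
76 |     price: str
77 | 
78 | class PropertiesResponse(BaseModel):
79 |     properties: list[Property]
80 | 
81 | app = FirecrawlApp(api_key="fc-YOUR-KEY")  # placeholder key
82 | result = app.extract(
83 |     urls=["https://www.99acres.com/property-in-bangalore-ffid/*"],  # pattern assumed
84 |     params={
85 |         "prompt": "Extract up to 6 residential flats within the given budget.",
86 |         "schema": PropertiesResponse.model_json_schema(),
87 |     },
88 | )
89 | print(result)
90 | ```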
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_real_estate_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | firecrawl-py==1.9.0
3 | pydantic
4 | streamlit
5 | openai
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_system_architect_r1/README.md:
--------------------------------------------------------------------------------
1 | # 🤖 AI System Architect Advisor with R1
2 |
3 | An Agno agentic system that provides expert software architecture analysis and recommendations using a dual-model approach that combines DeepSeek R1's reasoning with Claude. It delivers detailed technical analysis, implementation roadmaps, and architectural decisions for complex software systems.
4 |
5 | ## Features
6 |
7 | - **Dual AI Model Architecture**
8 | - **DeepSeek Reasoner**: Provides initial technical analysis and structured reasoning about architecture patterns, tools, and implementation strategies
9 | - **Claude-3.5**: Generates detailed explanations, implementation roadmaps, and technical specifications based on DeepSeek's analysis
10 |
11 | - **Comprehensive Analysis Components**
12 | - Architecture Pattern Selection
13 | - Infrastructure Resource Planning
14 | - Security Measures and Compliance
15 | - Database Architecture
16 | - Performance Requirements
17 | - Cost Estimation
18 | - Risk Assessment
19 |
20 | - **Analysis Types**
21 | - Real-time Event Processing Systems
22 | - Healthcare Data Platforms
23 | - Financial Trading Platforms
24 | - Multi-tenant SaaS Solutions
25 | - Digital Content Delivery Networks
26 | - Supply Chain Management Systems
27 |
28 | ## How to Run
29 |
30 | 1. **Setup Environment**
31 | ```bash
32 | # Clone the repository
33 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
34 | cd awesome-llm-apps/advanced_ai_agents/single_agent_apps/ai_system_architect_r1
35 |
36 | # Install dependencies
37 | pip install -r requirements.txt
38 | ```
39 |
40 | 2. **Configure API Keys**
41 | - Get a DeepSeek API key from the DeepSeek platform
42 | - Get an Anthropic API key from the [Anthropic Platform](https://www.anthropic.com)
43 |
44 | 3. **Run the Application**
45 | ```bash
46 | streamlit run ai_system_architect_r1.py
47 | ```
48 |
49 | 4. **Use the Interface**
50 | - Enter API credentials in sidebar
51 | - Structure your prompt with:
52 | - Project Context
53 | - Requirements
54 | - Constraints
55 | - Scale
56 | - Security/Compliance needs
57 | - View detailed analysis results
58 |
59 | ## Example Test Prompts
60 |
61 | ### 1. Financial Trading Platform
62 | "We need to build a high-frequency trading platform that processes market data streams, executes trades with sub-millisecond latency, maintains audit trails, and handles complex risk calculations. The system needs to be globally distributed, handle 100,000 transactions per second, and have robust disaster recovery capabilities."
63 | ### 2. Multi-tenant SaaS Platform
64 | "Design a multi-tenant SaaS platform for enterprise resource planning that needs to support customization per tenant, handle different data residency requirements, support offline capabilities, and maintain performance isolation between tenants. The system should scale to 10,000 concurrent users and support custom integrations."
65 |
66 | ## Notes
67 |
68 | - Requires both DeepSeek and Anthropic API keys
69 | - Provides real-time analysis with detailed explanations
70 | - Supports chat-based interaction
71 | - Includes clear reasoning for all architectural decisions
72 | - API usage costs apply
73 |
74 |
75 |
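76 | ## Dual-Model Handoff Sketch
77 | 
78 | For reference, the R1-to-Claude handoff can be sketched as below. DeepSeek's OpenAI-compatible endpoint and the `reasoning_content` field are documented by DeepSeek; the Agno/Claude wiring and model id here are illustrative.
79 | 
80 | ```python
81 | # Hedged sketch of the R1-reasoning -> Claude-elaboration handoff.
82 | from agno.agent import Agent
83 | from agno.models.anthropic import Claude
84 | from openai import OpenAI
85 | 
86 | # Phase 1: structured reasoning from DeepSeek R1 (OpenAI-compatible API).
87 | deepseek = OpenAI(api_key="DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")
88 | r1 = deepseek.chat.completions.create(
89 |     model="deepseek-reasoner",
90 |     messages=[{"role": "user", "content": "Architect a multi-tenant ERP SaaS for 10,000 concurrent users."}],
91 | )
92 | reasoning = r1.choices[0].message.reasoning_content  # R1's reasoning trace
93 | 
94 | # Phase 2: Claude turns the reasoning into a detailed implementation roadmap.
95 | claude_agent = Agent(
96 |     model=Claude(id="claude-3-5-sonnet-20241022", api_key="ANTHROPIC_API_KEY"),  # id illustrative
97 |     instructions=["Turn the given architectural reasoning into a detailed implementation roadmap."],
98 | )
99 | print(claude_agent.run(f"Reasoning to elaborate:\n{reasoning}").content)
100 | ```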
--------------------------------------------------------------------------------
/advanced_ai_agents/single_agent_apps/ai_system_architect_r1/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | anthropic
4 | agno
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_github/README.md:
--------------------------------------------------------------------------------
1 | ## 💬 Chat with GitHub Repo
2 |
3 | LLM app with RAG to chat with GitHub Repo in just 30 lines of Python Code. The app uses Retrieval Augmented Generation (RAG) to provide accurate answers to questions based on the content of the specified GitHub repository.
4 |
5 | ### Features
6 |
7 | - Provide the name of GitHub Repository as input
8 | - Ask questions about the content of the GitHub repository
9 | - Get accurate answers using OpenAI's API and Embedchain
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/advanced_llm_apps/chat_with_X_tutorials/chat_with_github
18 | ```
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 | 3. Get your OpenAI API Key
25 |
26 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
27 |
28 | 4. Get your GitHub Access Token
29 |
30 | - Create a [personal access token](https://docs.github.com/en/enterprise-server@3.6/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token) with the necessary permissions to access the desired GitHub repository.
31 |
32 | 5. Run the Streamlit App
33 | ```bash
34 | streamlit run chat_github.py
35 | ```
36 |
37 | ### How it Works?
38 |
39 | - The app prompts the user to enter their OpenAI API key, which is used to authenticate requests to the OpenAI API.
40 |
41 | - It initializes an instance of the Embedchain App class and a GithubLoader with the provided GitHub Access Token.
42 |
43 | - The user is prompted to enter a GitHub repository URL, which is then added to the Embedchain app's knowledge base using the GithubLoader.
44 |
45 | - The user can ask questions about the GitHub repository using the text input.
46 |
47 | - When a question is asked, the app uses the chat method of the Embedchain app to generate an answer based on the content of the GitHub repository.
48 |
49 | - The app displays the generated answer to the user.
50 |
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_github/chat_github.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | from embedchain.pipeline import Pipeline as App
3 | from embedchain.loaders.github import GithubLoader
4 | import streamlit as st
5 | import os
6 |
7 | loader = GithubLoader(
8 | config={
9 | "token":"Your GitHub Token",
10 | }
11 | )
12 |
13 | # Create Streamlit app
14 | st.title("Chat with GitHub Repository 💬")
15 | st.caption("This app allows you to chat with a GitHub Repo using OpenAI API")
16 |
17 | # Get OpenAI API key from user
18 | openai_access_token = st.text_input("OpenAI API Key", type="password")
19 |
20 | # If OpenAI API key is provided, create an instance of App
21 | if openai_access_token:
22 | os.environ["OPENAI_API_KEY"] = openai_access_token
23 | # Create an instance of Embedchain App
24 | app = App()
25 | # Get the GitHub repo from the user
26 | git_repo = st.text_input("Enter the GitHub Repo", type="default")
27 | if git_repo:
28 | # Add the repo to the knowledge base
29 | app.add("repo:" + git_repo + " " + "type:repo", data_type="github", loader=loader)
30 | st.success(f"Added {git_repo} to knowledge base!")
31 | # Ask a question about the Github Repo
32 | prompt = st.text_input("Ask any question about the GitHub Repo")
33 | # Chat with the GitHub Repo
34 | if prompt:
35 | answer = app.chat(prompt)
36 | st.write(answer)
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_github/chat_github_llama3.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import tempfile
3 | from embedchain import App
4 | from embedchain.loaders.github import GithubLoader
5 | import streamlit as st
6 | import os
7 |
8 | GITHUB_TOKEN = os.getenv("GITHUB_TOKEN")  # read your GitHub token from the GITHUB_TOKEN environment variable
9 |
10 | def get_loader():
11 | loader = GithubLoader(
12 | config={
13 | "token": GITHUB_TOKEN
14 | }
15 | )
16 | return loader
17 |
18 | if "loader" not in st.session_state:
19 | st.session_state['loader'] = get_loader()
20 |
21 | loader = st.session_state.loader
22 |
23 | # Define the embedchain_bot function
24 | def embedchain_bot(db_path):
25 | return App.from_config(
26 | config={
27 | "llm": {"provider": "ollama", "config": {"model": "llama3:instruct", "max_tokens": 250, "temperature": 0.5, "stream": True, "base_url": 'http://localhost:11434'}},
28 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
29 | "embedder": {"provider": "ollama", "config": {"model": "llama3:instruct", "base_url": 'http://localhost:11434'}},
30 | }
31 | )
32 |
33 | def load_repo(git_repo):
34 | global app
35 | # Add the repo to the knowledge base
36 | print(f"Adding {git_repo} to knowledge base!")
37 | app.add("repo:" + git_repo + " " + "type:repo", data_type="github", loader=loader)
38 | st.success(f"Added {git_repo} to knowledge base!")
39 |
40 |
41 | def make_db_path():
42 | ret = tempfile.mkdtemp(suffix="chroma")
43 | print(f"Created Chroma DB at {ret}")
44 | return ret
45 |
46 | # Create Streamlit app
47 | st.title("Chat with GitHub Repository 💬")
48 | st.caption("This app allows you to chat with a GitHub Repo using Llama-3 running with Ollama")
49 |
50 | # Initialize the Embedchain App
51 | if "app" not in st.session_state:
52 | st.session_state['app'] = embedchain_bot(make_db_path())
53 |
54 | app = st.session_state.app
55 |
56 | # Get the GitHub repo from the user
57 | git_repo = st.text_input("Enter the GitHub Repo", type="default")
58 |
59 | if git_repo and ("repos" not in st.session_state or git_repo not in st.session_state.repos):
60 | if "repos" not in st.session_state:
61 | st.session_state["repos"] = [git_repo]
62 | else:
63 | st.session_state.repos.append(git_repo)
64 | load_repo(git_repo)
65 |
66 |
67 | # Ask a question about the Github Repo
68 | prompt = st.text_input("Ask any question about the GitHub Repo")
69 | # Chat with the GitHub Repo
70 | if prompt:
71 | answer = st.session_state.app.chat(prompt)
72 | st.write(answer)
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_github/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain[github]
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_gmail/README.md:
--------------------------------------------------------------------------------
1 | ## 📨 Chat with Gmail Inbox
2 |
3 | LLM app with RAG to chat with Gmail in just 30 lines of Python Code. The app uses Retrieval Augmented Generation (RAG) to provide accurate answers to questions based on the content of your Gmail Inbox.
4 |
5 | ### Features
6 |
7 | - Connect to your Gmail Inbox
8 | - Ask questions about the content of your emails
9 | - Get accurate answers using RAG and the selected LLM
10 |
11 | ### Installation
12 |
13 | 1. Clone the repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/advanced_llm_apps/chat_with_X_tutorials/chat_with_gmail
18 | ```
19 | 2. Install the required dependencies
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Set up your Google Cloud project and enable the Gmail API:
26 |
27 | - Go to the [Google Cloud Console](https://console.cloud.google.com/) and create a new project.
28 | - Navigate to "APIs & Services > OAuth consent screen" and configure the OAuth consent screen.
29 | - Publish the OAuth consent screen by providing the necessary app information.
30 | - Enable the Gmail API and create OAuth client ID credentials.
31 | - Download the credentials in JSON format and save them as `credentials.json` in your working directory.
32 |
33 | 4. Get your OpenAI API Key
34 |
35 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
36 |
37 | 5. Run the Streamlit App
38 |
39 | ```bash
40 | streamlit run chat_gmail.py
41 | ```
42 |
43 |
44 |
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_gmail/chat_gmail.py:
--------------------------------------------------------------------------------
1 | import tempfile
2 | import streamlit as st
3 | from embedchain import App
4 |
5 | # Define the embedchain_bot function
6 | def embedchain_bot(db_path, api_key):
7 | return App.from_config(
8 | config={
9 | "llm": {"provider": "openai", "config": {"model": "gpt-4-turbo", "temperature": 0.5, "api_key": api_key}},
10 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
11 | "embedder": {"provider": "openai", "config": {"api_key": api_key}},
12 | }
13 | )
14 |
15 | # Create Streamlit app
16 | st.title("Chat with your Gmail Inbox 📧")
17 | st.caption("This app allows you to chat with your Gmail inbox using OpenAI API")
18 |
19 | # Get the OpenAI API key from the user
20 | openai_access_token = st.text_input("Enter your OpenAI API Key", type="password")
21 |
22 | # Set the Gmail filter statically
23 | gmail_filter = "to: me label:inbox"
24 |
25 | # Add the Gmail data to the knowledge base if the OpenAI API key is provided
26 | if openai_access_token:
27 | # Create a temporary directory to store the database
28 | db_path = tempfile.mkdtemp()
29 | # Create an instance of Embedchain App
30 | app = embedchain_bot(db_path, openai_access_token)
31 | app.add(gmail_filter, data_type="gmail")
32 | st.success("Added emails from Inbox to the knowledge base!")
33 |
34 | # Ask a question about the emails
35 | prompt = st.text_input("Ask any question about your emails")
36 |
37 | # Chat with the emails
38 | if prompt:
39 | answer = app.query(prompt)
40 | st.write(answer)
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_gmail/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain[gmail]
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_pdf/README.md:
--------------------------------------------------------------------------------
1 | ## 📄 Chat with PDF
2 |
3 | LLM app with RAG to chat with PDF in just 30 lines of Python Code. The app uses Retrieval Augmented Generation (RAG) to provide accurate answers to questions based on the content of the uploaded PDF.
4 |
5 | ### Features
6 |
7 | - Upload a PDF document
8 | - Ask questions about the content of the PDF
9 | - Get accurate answers using RAG and the selected LLM
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/advanced_llm_apps/chat_with_X_tutorials/chat_with_pdf
18 | ```
19 | 2. Install the required dependencies
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 | 3. Get your OpenAI API Key
25 |
26 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
27 |
28 | 4. Run the Streamlit App
29 | ```bash
30 | streamlit run chat_pdf.py
31 | ```
32 | ### Interactive Application Demo
33 | https://github.com/Shubhamsaboo/awesome-llm-apps/assets/31396011/12bdfc11-c877-4fc7-9e70-63f21d2eb977
34 |
35 |
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_pdf/chat_pdf.py:
--------------------------------------------------------------------------------
1 | import os
2 | import tempfile
3 | import streamlit as st
4 | from embedchain import App
5 |
6 | def embedchain_bot(db_path, api_key):
7 | return App.from_config(
8 | config={
9 | "llm": {"provider": "openai", "config": {"api_key": api_key}},
10 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
11 | "embedder": {"provider": "openai", "config": {"api_key": api_key}},
12 | }
13 | )
14 |
15 | st.title("Chat with PDF")
16 |
17 | openai_access_token = st.text_input("OpenAI API Key", type="password")
18 |
19 | if openai_access_token:
20 | db_path = tempfile.mkdtemp()
21 | app = embedchain_bot(db_path, openai_access_token)
22 |
23 | pdf_file = st.file_uploader("Upload a PDF file", type="pdf")
24 |
25 | if pdf_file:
26 | with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
27 | f.write(pdf_file.getvalue())
28 | app.add(f.name, data_type="pdf_file")
29 | os.remove(f.name)
30 | st.success(f"Added {pdf_file.name} to knowledge base!")
31 |
32 | prompt = st.text_input("Ask a question about the PDF")
33 |
34 | if prompt:
35 | answer = app.chat(prompt)
36 | st.write(answer)
37 |
38 |
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_pdf/chat_pdf_llama3.2.py:
--------------------------------------------------------------------------------
1 | # Import necessary libraries
2 | import os
3 | import tempfile
4 | import streamlit as st
5 | from embedchain import App
6 | import base64
7 | from streamlit_chat import message
8 |
9 | # Define the embedchain_bot function
10 | def embedchain_bot(db_path):
11 | return App.from_config(
12 | config={
13 | "llm": {"provider": "ollama", "config": {"model": "llama3.2:latest", "max_tokens": 250, "temperature": 0.5, "stream": True, "base_url": 'http://localhost:11434'}},
14 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
15 | "embedder": {"provider": "ollama", "config": {"model": "llama3.2:latest", "base_url": 'http://localhost:11434'}},
16 | }
17 | )
18 |
19 | # Add a function to display PDF
20 | def display_pdf(file):
21 | base64_pdf = base64.b64encode(file.read()).decode('utf-8')
22 | pdf_display = f'<iframe src="data:application/pdf;base64,{base64_pdf}" width="100%" height="600" type="application/pdf"></iframe>'
23 | st.markdown(pdf_display, unsafe_allow_html=True)
24 |
25 | st.title("Chat with PDF using Llama 3.2")
26 | st.caption("This app allows you to chat with a PDF using Llama 3.2 running locally with Ollama!")
27 |
28 | # Define the database path
29 | db_path = tempfile.mkdtemp()
30 |
31 | # Create a session state to store the app instance and chat history
32 | if 'app' not in st.session_state:
33 | st.session_state.app = embedchain_bot(db_path)
34 | if 'messages' not in st.session_state:
35 | st.session_state.messages = []
36 |
37 | # Sidebar for PDF upload and preview
38 | with st.sidebar:
39 | st.header("PDF Upload")
40 | pdf_file = st.file_uploader("Upload a PDF file", type="pdf")
41 |
42 | if pdf_file:
43 | st.subheader("PDF Preview")
44 | display_pdf(pdf_file)
45 |
46 | if st.button("Add to Knowledge Base"):
47 | with st.spinner("Adding PDF to knowledge base..."):
48 | with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
49 | f.write(pdf_file.getvalue())
50 | st.session_state.app.add(f.name, data_type="pdf_file")
51 | os.remove(f.name)
52 | st.success(f"Added {pdf_file.name} to knowledge base!")
53 |
54 | # Chat interface
55 | for i, msg in enumerate(st.session_state.messages):
56 | message(msg["content"], is_user=msg["role"] == "user", key=str(i))
57 |
58 | if prompt := st.chat_input("Ask a question about the PDF"):
59 | st.session_state.messages.append({"role": "user", "content": prompt})
60 | message(prompt, is_user=True)
61 |
62 | with st.spinner("Thinking..."):
63 | response = st.session_state.app.chat(prompt)
64 | st.session_state.messages.append({"role": "assistant", "content": response})
65 | message(response)
66 |
67 | # Clear chat history button
68 | if st.button("Clear Chat History"):
69 | st.session_state.messages = []
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_pdf/chat_pdf_llama3.py:
--------------------------------------------------------------------------------
1 | # Import necessary libraries
2 | import os
3 | import tempfile
4 | import streamlit as st
5 | from embedchain import App
6 |
7 | # Define the embedchain_bot function
8 | def embedchain_bot(db_path):
9 | return App.from_config(
10 | config={
11 | "llm": {"provider": "ollama", "config": {"model": "llama3:instruct", "max_tokens": 250, "temperature": 0.5, "stream": True, "base_url": 'http://localhost:11434'}},
12 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
13 | "embedder": {"provider": "ollama", "config": {"model": "llama3:instruct", "base_url": 'http://localhost:11434'}},
14 | }
15 | )
16 |
17 | st.title("Chat with PDF")
18 | st.caption("This app allows you to chat with a PDF using Llama3 running locally wiht Ollama!")
19 |
20 | # Create a temporary directory to store the PDF file
21 | db_path = tempfile.mkdtemp()
22 | # Create an instance of the embedchain App
23 | app = embedchain_bot(db_path)
24 |
25 | # Upload a PDF file
26 | pdf_file = st.file_uploader("Upload a PDF file", type="pdf")
27 |
28 | # If a PDF file is uploaded, add it to the knowledge base
29 | if pdf_file:
30 | with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
31 | f.write(pdf_file.getvalue())
32 | app.add(f.name, data_type="pdf_file")
33 | os.remove(f.name)
34 | st.success(f"Added {pdf_file.name} to knowledge base!")
35 |
36 | # Ask a question about the PDF
37 | prompt = st.text_input("Ask a question about the PDF")
38 | # Display the answer
39 | if prompt:
40 | answer = app.chat(prompt)
41 | st.write(answer)
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_pdf/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain
3 | streamlit-chat
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_research_papers/README.md:
--------------------------------------------------------------------------------
1 | ## 🔎 Chat with Arxiv Research Papers
2 | This Streamlit app enables you to engage in interactive conversations with arXiv, a vast repository of scholarly articles, using GPT-4o. With this RAG application, you can easily access and explore the wealth of knowledge contained within arXiv.
3 |
4 | ### Features
5 | - Engage in conversational interactions with arXiv
6 | - Access and explore a vast collection of research papers
7 | - Utilize OpenAI GPT-4o for intelligent responses
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/advanced_llm_apps/chat_with_X_tutorials/chat_with_research_papers
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Run the Streamlit App
27 | ```bash
28 | streamlit run chat_arxiv.py
29 | ```
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_research_papers/chat_arxiv.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from agno.agent import Agent
4 | from agno.models.openai import OpenAIChat
5 | from agno.tools.arxiv import ArxivTools
6 |
7 | # Set up the Streamlit app
8 | st.title("Chat with Research Papers 🔎🤖")
9 | st.caption("This app allows you to chat with arXiv research papers using OpenAI GPT-4o model.")
10 |
11 | # Get OpenAI API key from user
12 | openai_access_token = st.text_input("OpenAI API Key", type="password")
13 |
14 | # If OpenAI API key is provided, create an instance of Assistant
15 | if openai_access_token:
16 | # Create an instance of the Assistant
17 | assistant = Agent(
18 | model=OpenAIChat(
19 | id="gpt-4o",
20 | max_tokens=1024,
21 | temperature=0.9,
22 | api_key=openai_access_token), tools=[ArxivTools()]
23 | )
24 |
25 | # Get the search query from the user
26 | query = st.text_input("Enter the Search Query", type="default")
27 |
28 | if query:
29 | # Search the web using the AI Assistant
30 | response = assistant.run(query, stream=False)
31 | st.write(response.content)
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_research_papers/chat_arxiv_llama3.py:
--------------------------------------------------------------------------------
1 | # Import the required libraries
2 | import streamlit as st
3 | from agno.agent import Agent
4 | from agno.models.ollama import Ollama
5 | from agno.tools.arxiv import ArxivTools
6 |
7 | # Set up the Streamlit app
8 | st.title("Chat with Research Papers 🔎🤖")
9 | st.caption("This app allows you to chat with arXiv research papers using Llama-3 running locally.")
10 |
11 | # Create an instance of the Assistant
12 | assistant = Agent(
13 | model=Ollama(
14 | id="llama3.1:8b") , tools=[ArxivTools()], show_tool_calls=True
15 | )
16 |
17 | # Get the search query from the user
18 | query = st.text_input("Enter the Search Query", type="default")
19 |
20 | if query:
21 | # Search the web using the AI Assistant
22 | response = assistant.run(query, stream=False)
23 | st.write(response.content)
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_research_papers/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | arxiv
4 | openai
5 | pypdf
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_substack/README.md:
--------------------------------------------------------------------------------
1 | ## 📝 Chat with Substack Newsletter
2 | Streamlit app that allows you to chat with a Substack newsletter using OpenAI's API and the Embedchain library. This app leverages GPT-4 to provide accurate answers to questions based on the content of the specified Substack newsletter.
3 |
4 | ## Features
5 | - Input a Substack blog URL
6 | - Ask questions about the content of the Substack newsletter
7 | - Get accurate answers using OpenAI's API and Embedchain
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/advanced_llm_apps/chat_with_X_tutorials/chat_with_substack
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Get your OpenAI API Key
23 |
24 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
25 |
26 | 4. Run the Streamlit App
27 | ```bash
28 | streamlit run chat_substack.py
29 | ```
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_substack/chat_substack.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from embedchain import App
3 | import tempfile
4 |
5 | # Define the embedchain_bot function
6 | def embedchain_bot(db_path, api_key):
7 | return App.from_config(
8 | config={
9 | "llm": {"provider": "openai", "config": {"model": "gpt-4-turbo", "temperature": 0.5, "api_key": api_key}},
10 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
11 | "embedder": {"provider": "openai", "config": {"api_key": api_key}},
12 | }
13 | )
14 |
15 | st.title("Chat with Substack Newsletter 📝")
16 | st.caption("This app allows you to chat with Substack newsletter using OpenAI API")
17 |
18 | # Get OpenAI API key from user
19 | openai_access_token = st.text_input("OpenAI API Key", type="password")
20 |
21 | if openai_access_token:
22 | # Create a temporary directory to store the database
23 | db_path = tempfile.mkdtemp()
24 | # Create an instance of Embedchain App
25 | app = embedchain_bot(db_path, openai_access_token)
26 |
27 | # Get the Substack blog URL from the user
28 | substack_url = st.text_input("Enter Substack Newsletter URL", type="default")
29 |
30 | if substack_url:
31 | # Add the Substack blog to the knowledge base
32 | app.add(substack_url, data_type='substack')
33 | st.success(f"Added {substack_url} to knowledge base!")
34 |
35 | # Ask a question about the Substack blog
36 | query = st.text_input("Ask any question about the substack newsletter!")
37 |
38 | # Query the Substack blog
39 | if query:
40 | result = app.query(query)
41 | st.write(result)
42 |
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_substack/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_youtube_videos/README.md:
--------------------------------------------------------------------------------
1 | ## 📽️ Chat with YouTube Videos
2 |
3 | LLM app with RAG to chat with YouTube videos using OpenAI's GPT-4o, mem0/embedchain as the memory layer, and the youtube-transcript-api. The app uses Retrieval Augmented Generation (RAG) to provide accurate answers to questions based on the content of the provided video.
4 |
5 | ### Features
6 |
7 | - Input a YouTube video URL
8 | - Ask questions about the content of the video
9 | - Get accurate answers using RAG and the selected LLM
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 |
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/advanced_llm_apps/chat_with_X_tutorials/chat_with_youtube_videos
18 | ```
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 | 3. Get your OpenAI API Key
25 |
26 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
27 |
28 | 4. Run the Streamlit App
29 | ```bash
30 | streamlit run chat_youtube.py
31 | ```
32 |
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_youtube_videos/chat_youtube.py:
--------------------------------------------------------------------------------
1 | import tempfile
2 | import streamlit as st
3 | from embedchain import App
4 | from youtube_transcript_api import YouTubeTranscriptApi
5 | from typing import Tuple
6 |
7 | def embedchain_bot(db_path: str, api_key: str) -> App:
8 | return App.from_config(
9 | config={
10 | "llm": {"provider": "openai", "config": {"model": "gpt-4", "temperature": 0.5, "api_key": api_key}},
11 | "vectordb": {"provider": "chroma", "config": {"dir": db_path}},
12 | "embedder": {"provider": "openai", "config": {"api_key": api_key}},
13 | }
14 | )
15 |
16 | def extract_video_id(video_url: str) -> str:
17 | if "youtube.com/watch?v=" in video_url:
18 | return video_url.split("v=")[-1].split("&")[0]
19 | elif "youtube.com/shorts/" in video_url:
20 | return video_url.split("/shorts/")[-1].split("?")[0]
21 | else:
22 | raise ValueError("Invalid YouTube URL")
23 |
24 | def fetch_video_data(video_url: str) -> Tuple[str, str]:
25 | try:
26 | video_id = extract_video_id(video_url)
27 | transcript = YouTubeTranscriptApi.get_transcript(video_id)
28 | transcript_text = " ".join([entry["text"] for entry in transcript])
29 | return "Unknown", transcript_text # Title is set to "Unknown" since we're not fetching it
30 | except Exception as e:
31 | st.error(f"Error fetching transcript: {e}")
32 | return "Unknown", "No transcript available for this video."
33 |
34 | # Create Streamlit app
35 | st.title("Chat with YouTube Video 📺")
36 | st.caption("This app allows you to chat with a YouTube video using OpenAI API")
37 |
38 | # Get OpenAI API key from user
39 | openai_access_token = st.text_input("OpenAI API Key", type="password")
40 |
41 | # If OpenAI API key is provided, create an instance of App
42 | if openai_access_token:
43 | # Create a temporary directory to store the database
44 | db_path = tempfile.mkdtemp()
45 | # Create an instance of Embedchain App
46 | app = embedchain_bot(db_path, openai_access_token)
47 | # Get the YouTube video URL from the user
48 | video_url = st.text_input("Enter YouTube Video URL", type="default")
49 | # Add the video to the knowledge base
50 | if video_url:
51 | try:
52 | title, transcript = fetch_video_data(video_url)
53 | if transcript != "No transcript available for this video.":
54 | app.add(transcript, data_type="text", metadata={"title": title, "url": video_url})
55 | st.success(f"Added video '{title}' to knowledge base!")
56 | else:
57 | st.warning(f"No transcript available for video '{title}'. Cannot add to knowledge base.")
58 | except Exception as e:
59 | st.error(f"Error adding video: {e}")
60 | # Ask a question about the video
61 | prompt = st.text_input("Ask any question about the YouTube Video")
62 | # Chat with the video
63 | if prompt:
64 | try:
65 | answer = app.chat(prompt)
66 | st.write(answer)
67 | except Exception as e:
68 | st.error(f"Error chatting with the video: {e}")
--------------------------------------------------------------------------------
/advanced_llm_apps/chat_with_X_tutorials/chat_with_youtube_videos/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | embedchain[youtube]
3 | youtube-transcript-api==0.6.3
--------------------------------------------------------------------------------
/advanced_llm_apps/cursor_ai_experiments/ai_web_scrapper.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from scrapegraphai.graphs import SmartScraperGraph
3 |
4 | # Streamlit app title
5 | st.title("AI Web Scraper")
6 |
7 | # Input fields for user prompt and source URL
8 | prompt = st.text_input("Enter the information you want to extract:")
9 | source_url = st.text_input("Enter the source URL:")
10 |
11 | # Input field for OpenAI API key
12 | api_key = st.text_input("Enter your OpenAI API key:", type="password")
13 |
14 | # Configuration for the scraping pipeline
15 | graph_config = {
16 | "llm": {
17 | "api_key": api_key,
18 | "model": "openai/gpt-4o-mini",
19 | },
20 | "verbose": True,
21 | "headless": False,
22 | }
23 |
24 | # Button to start the scraping process
25 | if st.button("Scrape"):
26 | if prompt and source_url and api_key:
27 | # Create the SmartScraperGraph instance
28 | smart_scraper_graph = SmartScraperGraph(
29 | prompt=prompt,
30 | source=source_url,
31 | config=graph_config
32 | )
33 |
34 | # Run the pipeline
35 | result = smart_scraper_graph.run()
36 |
37 | # Display the result
38 | st.write(result)
39 | else:
40 | st.error("Please provide all the required inputs.")
41 |
42 | # Instructions for the user
43 | st.markdown("""
44 | ### Instructions
45 | 1. Enter the information you want to extract in the first input box.
46 | 2. Enter the source URL from which you want to extract the information.
47 | 3. Enter your OpenAI API key.
48 | 4. Click on the "Scrape" button to start the scraping process.
49 | """)
--------------------------------------------------------------------------------
/advanced_llm_apps/cursor_ai_experiments/chatgpt_clone_llama3.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from ollama import Client
3 |
4 | # Initialize Ollama client
5 | client = Client()
6 |
7 | # Set up Streamlit page
8 | st.set_page_config(page_title="Local ChatGPT Clone", page_icon="🤖", layout="wide")
9 | st.title("🤖 Local ChatGPT Clone")
10 |
11 | # Initialize chat history
12 | if "messages" not in st.session_state:
13 | st.session_state.messages = []
14 |
15 | # Display chat messages
16 | for message in st.session_state.messages:
17 | with st.chat_message(message["role"]):
18 | st.markdown(message["content"])
19 |
20 | # User input
21 | if prompt := st.chat_input("What's on your mind?"):
22 | st.session_state.messages.append({"role": "user", "content": prompt})
23 | with st.chat_message("user"):
24 | st.markdown(prompt)
25 |
26 | # Generate AI response
27 | with st.chat_message("assistant"):
28 | message_placeholder = st.empty()
29 | full_response = ""
30 | for response in client.chat(model="llama3.1:latest", messages=st.session_state.messages, stream=True):
31 | full_response += response['message']['content']
32 | message_placeholder.markdown(full_response + "▌")
33 | message_placeholder.markdown(full_response)
34 | st.session_state.messages.append({"role": "assistant", "content": full_response})
35 |
36 | # Add a sidebar with information
37 | st.sidebar.title("About")
38 | st.sidebar.info("This is a local ChatGPT clone using Ollama's llama3.1:latest model and Streamlit.")
39 | st.sidebar.markdown("---")
40 | st.sidebar.markdown("Made with ❤️ by Your Name")
--------------------------------------------------------------------------------
/advanced_llm_apps/cursor_ai_experiments/llm_router_app/README.md:
--------------------------------------------------------------------------------
1 | ## 📡 RouteLLM Chat App
2 |
3 | > Note: This project is inspired by the open-source [RouteLLM library](https://github.com/lm-sys/RouteLLM/tree/main), which provides intelligent routing between different language models.
4 |
5 | This Streamlit application demonstrates the use of RouteLLM, a system that intelligently routes queries between different language models based on the complexity of the task. It provides a chat interface where users can interact with AI models, and the app automatically selects the most appropriate model for each query.
6 |
7 | ### Features
8 | - Chat interface for interacting with AI models
9 | - Automatic model selection using RouteLLM
10 | - Utilizes both GPT-4o mini and Meta-Llama 3.1 70B models
11 | - Displays chat history with model information
12 |
13 | ### How to get Started?
14 |
15 | 1. Clone the GitHub repository
16 |
17 | ```bash
18 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
19 | cd awesome-llm-apps/advanced_llm_apps/cursor_ai_experiments/llm_router_app
20 | ```
21 | 2. Install the required dependencies:
22 |
23 | ```bash
24 | pip install -r requirements.txt
25 | ```
26 | 3. Set up your API keys by editing the placeholders at the top of `llm_router.py`:
27 |
28 | ```python
29 | os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
30 | os.environ['TOGETHERAI_API_KEY'] = "your_togetherai_api_key"
31 | ```
32 | Note: In a production environment, it's recommended to use environment variables or a secure configuration management system instead of hardcoding API keys.
33 |
34 | 4. Run the Streamlit App
35 | ```bash
36 | streamlit run llm_router.py
37 | ```
38 |
39 | ### How it Works?
40 |
41 | 1. RouteLLM Initialization: The app initializes the RouteLLM controller with two models:
42 | - Strong model: GPT-4o mini
43 | - Weak model: Meta-Llama 3.1 70B Instruct Turbo
44 |
45 | 2. Chat Interface: Users can input messages through a chat interface.
46 |
47 | 3. Model Selection: RouteLLM automatically selects the appropriate model based on the complexity of the user's query.
48 |
49 | 4. Response Generation: The selected model generates a response to the user's input.
50 |
51 | 5. Display: The app displays the response along with information about which model was used.
52 |
53 | 6. History: The chat history is maintained and displayed, including model information for each response.
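54 |
55 | ### Example: the router model string
56 |
57 | The `model` value passed to `client.chat.completions.create` encodes the router name and the routing threshold. A minimal sketch, under the same setup as `llm_router.py` (the threshold 0.11593 is the one used in the app):
58 |
59 | ```python
60 | # Sketch: composing the RouteLLM model string (threshold from llm_router.py).
61 | from routellm.controller import Controller
62 |
63 | client = Controller(
64 |     routers=["mf"],  # matrix-factorization router
65 |     strong_model="gpt-4o-mini",
66 |     weak_model="together_ai/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
67 | )
68 |
69 | threshold = 0.11593  # queries scored above this go to the strong model
70 | response = client.chat.completions.create(
71 |     model=f"router-mf-{threshold}",  # -> "router-mf-0.11593"
72 |     messages=[{"role": "user", "content": "Explain vector databases briefly."}],
73 | )
74 | print(response['model'])  # reports which underlying model actually answered
75 | ```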
--------------------------------------------------------------------------------
/advanced_llm_apps/cursor_ai_experiments/llm_router_app/llm_router.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
4 | os.environ['TOGETHERAI_API_KEY'] = "your_togetherai_api_key"
5 |
6 | import streamlit as st
7 | from routellm.controller import Controller
8 |
9 | # Initialize RouteLLM client
10 | client = Controller(
11 | routers=["mf"],
12 | strong_model="gpt-4o-mini",
13 | weak_model="together_ai/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
14 | )
15 |
16 | # Set up Streamlit app
17 | st.title("RouteLLM Chat App")
18 |
19 | # Initialize chat history
20 | if "messages" not in st.session_state:
21 | st.session_state.messages = []
22 |
23 | # Display chat messages
24 | for message in st.session_state.messages:
25 | with st.chat_message(message["role"]):
26 | st.markdown(message["content"])
27 | if "model" in message:
28 | st.caption(f"Model used: {message['model']}")
29 |
30 | # Chat input
31 | if prompt := st.chat_input("What is your message?"):
32 | # Add user message to chat history
33 | st.session_state.messages.append({"role": "user", "content": prompt})
34 | with st.chat_message("user"):
35 | st.markdown(prompt)
36 |
37 | # Get RouteLLM response
38 | with st.chat_message("assistant"):
39 | message_placeholder = st.empty()
40 | response = client.chat.completions.create(
41 | model="router-mf-0.11593",
42 | messages=[{"role": "user", "content": prompt}]
43 | )
44 | message_content = response['choices'][0]['message']['content']
45 | model_name = response['model']
46 |
47 | # Display assistant's response
48 | message_placeholder.markdown(message_content)
49 | st.caption(f"Model used: {model_name}")
50 |
51 | # Add assistant's response to chat history
52 | st.session_state.messages.append({"role": "assistant", "content": message_content, "model": model_name})
--------------------------------------------------------------------------------
/advanced_llm_apps/cursor_ai_experiments/llm_router_app/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | "routellm[serve,eval]"
3 | routellm
--------------------------------------------------------------------------------
/advanced_llm_apps/cursor_ai_experiments/local_chatgpt_clone/README.md:
--------------------------------------------------------------------------------
1 | ## 🦙💬 ChatGPT Clone using Llama-3
2 | This project demonstrates how to build a ChatGPT clone using the Llama-3 model running locally on your computer. The application is built using Python and Streamlit, providing a user-friendly interface for interacting with the language model. Best of all, it's 100% free and doesn't require an internet connection!
3 |
4 | ### Features
5 | - Runs locally on your computer with no internet connection required, completely free to use
6 | - Utilizes the Llama-3 instruct model for generating responses
7 | - Provides a chat-like interface for seamless interaction
8 |
9 | ### How to get Started?
10 |
11 | 1. Clone the GitHub repository
12 |
13 | ```bash
14 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
15 | cd awesome-llm-apps/advanced_llm_apps/cursor_ai_experiments/local_chatgpt_clone
16 | ```
17 | 2. Install the required dependencies:
18 |
19 | ```bash
20 | pip install -r requirements.txt
21 | ```
22 | 3. Download and install the [LM Studio desktop app](https://lmstudio.ai/), then download the Llama-3 instruct model from within it.
23 |
24 | 4. Expose the Llama-3 model as an OpenAI API by starting the server in LM Studio. Watch this [video walkthrough](https://x.com/Saboo_Shubham_/status/1783715814790549683).
25 |
26 | 5. Run the Streamlit App
27 | ```bash
28 | streamlit run chatgpt_clone_llama3.py
29 | ```
30 |
31 |
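32 | ### Verifying the local server (optional)
33 |
34 | Before launching the app, you can check that LM Studio's OpenAI-compatible server is up. A minimal sketch, assuming the server runs on the default port 1234:
35 |
36 | ```python
37 | # Sketch: ping LM Studio's local OpenAI-compatible endpoint.
38 | from openai import OpenAI
39 |
40 | client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
41 | for model in client.models.list().data:
42 |     print(model.id)  # should include the loaded Llama-3 model
43 | ```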
--------------------------------------------------------------------------------
/advanced_llm_apps/cursor_ai_experiments/local_chatgpt_clone/chatgpt_clone_llama3.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from openai import OpenAI
3 |
4 | # Set up the Streamlit App
5 | st.title("ChatGPT Clone using Llama-3 🦙")
6 | st.caption("Chat with locally hosted Llama-3 using the LM Studio 💯")
7 |
8 | # Point to the local server setup using LM Studio
9 | client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
10 |
11 | # Initialize the chat history
12 | if "messages" not in st.session_state:
13 | st.session_state.messages = []
14 |
15 | # Display the chat history
16 | for message in st.session_state.messages:
17 | with st.chat_message(message["role"]):
18 | st.markdown(message["content"])
19 |
20 | # Accept user input
21 | if prompt := st.chat_input("What is up?"):
22 | # Add user message to chat history
23 | st.session_state.messages.append({"role": "user", "content": prompt})
24 | # Display user message in chat message container
25 | with st.chat_message("user"):
26 | st.markdown(prompt)
27 | # Generate response
28 | response = client.chat.completions.create(
29 | model="lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF",
30 | messages=st.session_state.messages, temperature=0.7
31 | )
32 | # Add assistant response to chat history
33 | st.session_state.messages.append({"role": "assistant", "content": response.choices[0].message.content})
34 | # Display assistant response in chat message container
35 | with st.chat_message("assistant"):
36 | st.markdown(response.choices[0].message.content)
37 |
--------------------------------------------------------------------------------
/advanced_llm_apps/cursor_ai_experiments/local_chatgpt_clone/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
--------------------------------------------------------------------------------
/advanced_llm_apps/cursor_ai_experiments/requirements.txt:
--------------------------------------------------------------------------------
1 | scrapegraphai
2 | playwright
3 | langchain-community
4 | streamlit-chat
5 | streamlit
6 | crewai
7 | ollama
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/ai_arxiv_agent_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 📚 AI Research Agent with Memory
2 | This Streamlit app implements an AI-powered research assistant that helps users search for academic papers on arXiv while maintaining a memory of user interests and past interactions. It utilizes OpenAI's GPT-4o-mini model for processing search results, MultiOn for web browsing, and Mem0 with Qdrant for maintaining user context.
3 |
4 | ### Features
5 |
6 | - Search interface for querying arXiv papers
7 | - AI-powered processing of search results for improved readability
8 | - Persistent memory of user interests and past searches
9 | - Utilizes OpenAI's GPT-4o-mini model for intelligent processing
10 | - Implements memory storage and retrieval using Mem0 and Qdrant
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/advanced_llm_apps/llm_apps_with_memory_tutorials/ai_arxiv_agent_memory
18 | ```
19 |
20 | 2. Install the required dependencies:
21 |
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Ensure Qdrant is running:
27 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
28 |
29 | ```bash
30 | docker pull qdrant/qdrant
31 |
32 | docker run -p 6333:6333 -p 6334:6334 \
33 | -v $(pwd)/qdrant_storage:/qdrant/storage:z \
34 | qdrant/qdrant
35 | ```
36 |
37 | 4. Run the Streamlit App
38 | ```bash
39 | streamlit run ai_arxiv_agent_memory.py
40 | ```
41 |
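42 | ### Checking the Qdrant connection (optional)
43 |
44 | A minimal sketch to confirm Qdrant is reachable before starting the app (qdrant-client is pulled in as a dependency of mem0ai):
45 |
46 | ```python
47 | # Sketch: verify the local Qdrant instance responds.
48 | from qdrant_client import QdrantClient
49 |
50 | client = QdrantClient(host="localhost", port=6333)
51 | print(client.get_collections())  # empty list on a fresh instance
52 | ```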
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/ai_arxiv_agent_memory/ai_arxiv_agent_memory.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | import os
3 | from mem0 import Memory
4 | from multion.client import MultiOn
5 | from openai import OpenAI
6 |
7 | st.title("AI Research Agent with Memory 📚")
8 |
9 | api_keys = {k: st.text_input(f"{k.capitalize()} API Key", type="password") for k in ['openai', 'multion']}
10 |
11 | if all(api_keys.values()):
12 | os.environ['OPENAI_API_KEY'] = api_keys['openai']
13 | # Initialize Mem0 with Qdrant
14 | config = {
15 | "vector_store": {
16 | "provider": "qdrant",
17 | "config": {
18 |                 "collection_name": "arxiv_agent_memory",  # note: "model" is not a valid Qdrant option; name is illustrative
19 | "host": "localhost",
20 | "port": 6333,
21 | }
22 | },
23 | }
24 | memory, multion, openai_client = Memory.from_config(config), MultiOn(api_key=api_keys['multion']), OpenAI(api_key=api_keys['openai'])
25 |
26 | user_id = st.sidebar.text_input("Enter your Username")
27 | #user_interests = st.text_area("Research interests and background")
28 |
29 | search_query = st.text_input("Research paper search query")
30 |
31 | def process_with_gpt4(result):
32 | """Processes an arXiv search result to produce a structured markdown output.
33 |
34 | This function takes a search result from arXiv and generates a markdown-formatted
35 | table containing details about each paper. The table includes columns for the
36 | paper's title, authors, a brief abstract, and a link to the paper on arXiv.
37 |
38 | Args:
39 | result (str): The raw search result from arXiv, typically in a text format.
40 |
41 | Returns:
42 | str: A markdown-formatted string containing a table with paper details."""
43 | prompt = f"""
44 | Based on the following arXiv search result, provide a proper structured output in markdown that is readable by the users.
45 | Each paper should have a title, authors, abstract, and link.
46 | Search Result: {result}
47 | Output Format: Table with the following columns: [{{"title": "Paper Title", "authors": "Author Names", "abstract": "Brief abstract", "link": "arXiv link"}}, ...]
48 | """
49 | response = openai_client.chat.completions.create(model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}], temperature=0.2)
50 | return response.choices[0].message.content
51 |
52 | if st.button('Search for Papers'):
53 | with st.spinner('Searching and Processing...'):
54 | relevant_memories = memory.search(search_query, user_id=user_id, limit=3)
55 | prompt = f"Search for arXiv papers: {search_query}\nUser background: {' '.join(mem['text'] for mem in relevant_memories)}"
56 | result = process_with_gpt4(multion.browse(cmd=prompt, url="https://arxiv.org/"))
57 | st.markdown(result)
58 |
59 | if st.sidebar.button("View Memory"):
60 | st.sidebar.write("\n".join([f"- {mem['text']}" for mem in memory.get_all(user_id=user_id)]))
61 |
62 | else:
63 | st.warning("Please enter your API keys to use this app.")
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/ai_arxiv_agent_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai
4 | multion
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/ai_travel_agent_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧳 AI Travel Agent with Memory
2 | This Streamlit app implements an AI-powered travel assistant that remembers user preferences and past interactions. It utilizes OpenAI's GPT-4o for generating responses and Mem0 with Qdrant for maintaining conversation history.
3 |
4 | ### Features
5 | - Chat-based interface for interacting with an AI travel assistant
6 | - Persistent memory of user preferences and past conversations
7 | - Utilizes OpenAI's GPT-4o model for intelligent responses
8 | - Implements memory storage and retrieval using Mem0 and Qdrant
9 | - User-specific conversation history and memory viewing
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/advanced_llm_apps/llm_apps_with_memory_tutorials/ai_travel_agent_memory
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Ensure Qdrant is running:
26 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
27 |
28 | ```bash
29 | docker pull qdrant/qdrant
30 |
31 | docker run -p 6333:6333 -p 6334:6334 \
32 | -v $(pwd)/qdrant_storage:/qdrant/storage:z \
33 | qdrant/qdrant
34 | ```
35 |
36 | 4. Run the Streamlit App
37 | ```bash
38 | streamlit run travel_agent_memory.py
39 | ```
40 |
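41 | ### How the memory layer works (sketch)
42 |
43 | The app stores each exchange in Mem0 and retrieves relevant memories on later turns. A minimal sketch of that add/search cycle; the user id is illustrative:
44 |
45 | ```python
46 | # Sketch: Mem0 store-and-retrieve cycle (user id illustrative).
47 | from mem0 import Memory
48 |
49 | config = {
50 |     "vector_store": {
51 |         "provider": "qdrant",
52 |         "config": {"host": "localhost", "port": 6333},
53 |     },
54 | }
55 | memory = Memory.from_config(config)
56 |
57 | memory.add("Prefers window seats and vegetarian meals", user_id="traveler_1")
58 | print(memory.search("seating preferences", user_id="traveler_1"))
59 | ```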
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/ai_travel_agent_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/llama3_stateful_chat/local_llama3_chat.py:
--------------------------------------------------------------------------------
1 | import streamlit as st
2 | from openai import OpenAI
3 |
4 | # Set up the Streamlit App
5 | st.title("Local ChatGPT with Memory 🦙")
6 | st.caption("Chat with locally hosted memory-enabled Llama-3 using the LM Studio 💯")
7 |
8 | # Point to the local server setup using LM Studio
9 | client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
10 |
11 | # Initialize the chat history
12 | if "messages" not in st.session_state:
13 | st.session_state.messages = []
14 |
15 | # Display the chat history
16 | for message in st.session_state.messages:
17 | with st.chat_message(message["role"]):
18 | st.markdown(message["content"])
19 |
20 | # Accept user input
21 | if prompt := st.chat_input("What is up?"):
22 | st.session_state.messages.append({"role": "system", "content": "When the input starts with /add, don't follow up with a prompt."})
23 | # Add user message to chat history
24 | st.session_state.messages.append({"role": "user", "content": prompt})
25 | # Display user message in chat message container
26 | with st.chat_message("user"):
27 | st.markdown(prompt)
28 | # Generate response
29 | response = client.chat.completions.create(
30 | model="lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF",
31 | messages=st.session_state.messages, temperature=0.7
32 | )
33 | # Add assistant response to chat history
34 | st.session_state.messages.append({"role": "assistant", "content": response.choices[0].message.content})
35 | # Display assistant response in chat message container
36 | with st.chat_message("assistant"):
37 | st.markdown(response.choices[0].message.content)
38 |
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/llama3_stateful_chat/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/llm_app_personalized_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧠 LLM App with Memory
2 | This Streamlit app is an AI-powered chatbot that uses OpenAI's GPT-4o model with a persistent memory feature. It allows users to have conversations with the AI while maintaining context across multiple interactions.
3 |
4 | ### Features
5 |
6 | - Utilizes OpenAI's GPT-4o model for generating responses
7 | - Implements persistent memory using Mem0 and Qdrant vector store
8 | - Allows users to view their conversation history
9 | - Provides a user-friendly interface with Streamlit
10 |
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/advanced_llm_apps/llm_apps_with_memory_tutorials/llm_app_personalized_memory
18 | ```
19 |
20 | 2. Install the required dependencies:
21 |
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Ensure Qdrant is running:
27 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
28 |
29 | ```bash
30 | docker pull qdrant/qdrant
31 |
32 | docker run -p 6333:6333 -p 6334:6334 \
33 | -v $(pwd)/qdrant_storage:/qdrant/storage:z \
34 | qdrant/qdrant
35 | ```
36 |
37 | 4. Run the Streamlit App
38 | ```bash
39 | streamlit run llm_app_memory.py
40 | ```
41 |
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/llm_app_personalized_memory/llm_app_memory.py:
--------------------------------------------------------------------------------
1 | import os
2 | import streamlit as st
3 | from mem0 import Memory
4 | from openai import OpenAI
5 |
6 | st.title("LLM App with Memory 🧠")
7 | st.caption("LLM App with a personalized memory layer that remembers every user's choices and interests")
8 |
9 | openai_api_key = st.text_input("Enter OpenAI API Key", type="password")
10 | os.environ["OPENAI_API_KEY"] = openai_api_key
11 |
12 | if openai_api_key:
13 | # Initialize OpenAI client
14 | client = OpenAI(api_key=openai_api_key)
15 |
16 | # Initialize Mem0 with Qdrant
17 | config = {
18 | "vector_store": {
19 | "provider": "qdrant",
20 | "config": {
21 | "collection_name": "llm_app_memory",
22 | "host": "localhost",
23 | "port": 6333,
24 | }
25 | },
26 | }
27 |
28 | memory = Memory.from_config(config)
29 |
30 | user_id = st.text_input("Enter your Username")
31 |
32 | prompt = st.text_input("Ask ChatGPT")
33 |
34 | if st.button('Chat with LLM'):
35 | with st.spinner('Searching...'):
36 | relevant_memories = memory.search(query=prompt, user_id=user_id)
37 | # Prepare context with relevant memories
38 | context = "Relevant past information:\n"
39 |
40 | for mem in relevant_memories:
41 | context += f"- {mem['text']}\n"
42 |
43 | # Prepare the full prompt
44 | full_prompt = f"{context}\nHuman: {prompt}\nAI:"
45 |
46 | # Get response from GPT-4
47 | response = client.chat.completions.create(
48 | model="gpt-4o",
49 | messages=[
50 | {"role": "system", "content": "You are a helpful assistant with access to past conversations."},
51 | {"role": "user", "content": full_prompt}
52 | ]
53 | )
54 |
55 | answer = response.choices[0].message.content
56 |
57 | st.write("Answer: ", answer)
58 |
59 | # Add AI response to memory
60 | memory.add(answer, user_id=user_id)
61 |
62 |
63 | # Sidebar option to show memory
64 | st.sidebar.title("Memory Info")
65 | if st.sidebar.button("View My Memory"):
66 |     memories = memory.get_all(user_id=user_id)
67 |     if memories and "results" in memories:
68 |         st.sidebar.write(f"Memory history for **{user_id}**:")
69 |         for mem in memories["results"]:
70 |             if "memory" in mem:
71 |                 st.sidebar.write(f"- {mem['memory']}")
72 | else:
73 | st.sidebar.info("No learning history found for this user ID.")
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/llm_app_personalized_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/local_chatgpt_with_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧠 Local ChatGPT using Llama 3.1 with Personal Memory
2 | This Streamlit application implements a fully local ChatGPT-like experience using Llama 3.1, featuring personalized memory storage for each user. All components, including the language model, embeddings, and vector store, run locally without requiring external API keys.
3 |
4 | ### Features
5 | - Fully local implementation with no external API dependencies
6 | - Powered by Llama 3.1 via Ollama
7 | - Personal memory space for each user
8 | - Local embedding generation using Nomic Embed
9 | - Vector storage with Qdrant
10 |
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/advanced_llm_apps/llm_apps_with_memory_tutorials/local_chatgpt_with_memory
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | # (you are already in local_chatgpt_with_memory from step 1)
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Install and start [Qdrant](https://qdrant.tech/documentation/guides/installation/) vector database locally
27 |
28 | ```bash
29 | docker pull qdrant/qdrant
30 | docker run -p 6333:6333 qdrant/qdrant
31 | ```
32 |
33 | 4. Install [Ollama](https://ollama.com/download) and pull Llama 3.1
34 | ```bash
35 | ollama pull llama3.1
36 | ```
37 |
38 | 5. Run the Streamlit App
39 | ```bash
40 | streamlit run local_chatgpt_memory.py
41 | ```
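42 | ### Fully local Mem0 configuration (sketch)
43 |
44 | Mem0 can be pointed at Ollama for both the LLM and the embedder, so no external API key is needed. A configuration sketch; the embedder model id is an assumption (you may also need `ollama pull nomic-embed-text`):
45 |
46 | ```python
47 | # Sketch: a fully local Mem0 config (model ids assumed from the steps above).
48 | from mem0 import Memory
49 |
50 | config = {
51 |     "llm": {"provider": "ollama", "config": {"model": "llama3.1"}},
52 |     "embedder": {"provider": "ollama", "config": {"model": "nomic-embed-text"}},
53 |     "vector_store": {
54 |         "provider": "qdrant",
55 |         "config": {"host": "localhost", "port": 6333},
56 |     },
57 | }
58 | memory = Memory.from_config(config)
59 | ```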
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/local_chatgpt_with_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
4 | litellm
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/multi_llm_memory/README.md:
--------------------------------------------------------------------------------
1 | ## 🧠 Multi-LLM App with Shared Memory
2 | This Streamlit application demonstrates a multi-LLM system with a shared memory layer, allowing users to interact with different language models while maintaining conversation history and context across sessions.
3 |
4 | ### Features
5 |
6 | - Support for multiple LLMs:
7 | - OpenAI's GPT-4o
8 | - Anthropic's Claude 3.5 Sonnet
9 |
10 | - Persistent memory using Qdrant vector store
11 | - User-specific conversation history
12 | - Memory retrieval for contextual responses
13 | - User-friendly interface with LLM selection
14 |
15 | ### How to get Started?
16 |
17 | 1. Clone the GitHub repository
18 | ```bash
19 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
20 | cd awesome-llm-apps/advanced_llm_apps/llm_apps_with_memory_tutorials/multi_llm_memory
21 | ```
22 |
23 | 2. Install the required dependencies:
24 |
25 | ```bash
26 | pip install -r requirements.txt
27 | ```
28 |
29 | 3. Ensure Qdrant is running:
30 | The app expects Qdrant to be running on localhost:6333. Adjust the configuration in the code if your setup is different.
31 |
32 | ```bash
33 | docker pull qdrant/qdrant
34 | docker run -p 6333:6333 qdrant/qdrant
35 | ```
36 |
37 | 4. Run the Streamlit App
38 | ```bash
39 | streamlit run multi_llm_memory.py
40 | ```
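41 | ### How the LLM switch works (sketch)
42 |
43 | Both models are reached through one call path: litellm (in the requirements) normalizes the OpenAI and Anthropic APIs behind a single `completion` function. A minimal sketch; the model ids are illustrative:
44 |
45 | ```python
46 | # Sketch: one call path for either provider via litellm (model ids illustrative).
47 | from litellm import completion
48 |
49 | def ask(provider: str, prompt: str) -> str:
50 |     model = "gpt-4o" if provider == "openai" else "claude-3-5-sonnet-20240620"
51 |     response = completion(model=model, messages=[{"role": "user", "content": prompt}])
52 |     return response.choices[0].message.content
53 |
54 | print(ask("openai", "Summarize our last conversation."))
55 | ```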
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_apps_with_memory_tutorials/multi_llm_memory/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | openai
3 | mem0ai==0.1.29
4 | litellm
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_finetuning_tutorials/llama3.2_finetuning/README.md:
--------------------------------------------------------------------------------
1 | ## 🦙 Finetune Llama 3.2 in 30 Lines of Python
2 |
3 | This script demonstrates how to finetune the Llama 3.2 model using the [Unsloth](https://unsloth.ai/) library, which makes the process easy and fast. You can run this example to finetune the Llama 3.2 1B and 3B models for free in Google Colab.
4 |
5 | ### Features
6 |
7 | - Finetunes Llama 3.2 model using the Unsloth library
8 | - Implements Low-Rank Adaptation (LoRA) for efficient finetuning
9 | - Uses the FineTome-100k dataset for training
10 | - Configurable for different model sizes (1B and 3B)
11 |
12 | ### Installation
13 |
14 | 1. Clone the repository:
15 |
16 | ```bash
17 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
18 | cd awesome-llm-apps/advanced_llm_apps/llm_finetuning_tutorials/llama3.2_finetuning
19 | ```
20 |
21 | 2. Install the required dependencies:
22 |
23 | ```bash
24 | pip install -r requirements.txt
25 | ```
26 |
27 | ## Usage
28 |
29 | 1. Open the script in Google Colab or your preferred Python environment.
30 |
31 | 2. Run the script to start the finetuning process:
32 |
33 | ```bash
34 | # Run the entire script
35 | python finetune_llama3.2.py
36 | ```
37 |
38 | 3. The finetuned model will be saved in the "finetuned_model" directory.
39 |
40 | ## How it Works
41 |
42 | 1. **Model Loading**: The script loads the Llama 3.2 3B Instruct model using Unsloth's FastLanguageModel.
43 |
44 | 2. **LoRA Setup**: Low-Rank Adaptation is applied to specific layers of the model for efficient finetuning.
45 |
46 | 3. **Data Preparation**: The FineTome-100k dataset is loaded and preprocessed using a chat template.
47 |
48 | 4. **Training Configuration**: The script sets up the SFTTrainer with specific training arguments.
49 |
50 | 5. **Finetuning**: The model is finetuned on the prepared dataset.
51 |
52 | 6. **Model Saving**: The finetuned model is saved to disk.
53 |
54 | ## Configuration
55 |
56 | You can modify the following parameters in the script:
57 |
58 | - `model_name`: Change to "unsloth/Llama-3.2-1B-Instruct" for the 1B model
59 | - `max_seq_length`: Adjust the maximum sequence length
60 | - `r`: LoRA rank
61 | - Training hyperparameters in `TrainingArguments`
62 |
63 | ## Customization
64 |
65 | - To use a different dataset, replace the `load_dataset` function call with your desired dataset.
66 | - Adjust the `target_modules` in the LoRA setup to finetune different layers of the model.
67 | - Modify the chat template in `get_chat_template` if you're using a different conversational format.
68 |
69 | ## Running on Google Colab
70 |
71 | 1. Open a new Google Colab notebook.
72 | 2. Copy the entire script into a code cell.
73 | 3. Add a cell at the beginning to install the required libraries:
74 |
75 | ```
76 | !pip install torch transformers datasets trl unsloth
77 | ```
78 |
79 | 4. Run the cells to start the finetuning process.
80 |
81 | ## Notes
82 |
83 | - This script is optimized for running on Google Colab's free tier, which provides access to GPUs.
84 | - The finetuning process may take some time, depending on the model size and the available computational resources.
85 | - Make sure you have enough storage space in your Colab instance to save the finetuned model.
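86 |
87 | ## Trying the Finetuned Model
88 |
89 | After training, you can reload the saved adapter for a quick smoke test. A sketch using Unsloth's inference mode; the prompt and generation settings are illustrative:
90 |
91 | ```python
92 | # Sketch: reload the finetuned model and generate (settings illustrative).
93 | from unsloth import FastLanguageModel
94 |
95 | model, tokenizer = FastLanguageModel.from_pretrained(
96 |     model_name="finetuned_model", max_seq_length=2048, load_in_4bit=True,
97 | )
98 | FastLanguageModel.for_inference(model)  # enable fast generation mode
99 |
100 | inputs = tokenizer.apply_chat_template(
101 |     [{"role": "user", "content": "Give me one tip for writing clean Python."}],
102 |     tokenize=True, add_generation_prompt=True, return_tensors="pt",
103 | ).to(model.device)
104 | outputs = model.generate(input_ids=inputs, max_new_tokens=64)
105 | print(tokenizer.decode(outputs[0], skip_special_tokens=True))
106 | ```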
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_finetuning_tutorials/llama3.2_finetuning/finetune_llama3.2.py:
--------------------------------------------------------------------------------
1 | import torch
2 | from unsloth import FastLanguageModel
3 | from datasets import load_dataset
4 | from trl import SFTTrainer
5 | from transformers import TrainingArguments
6 | from unsloth.chat_templates import get_chat_template, standardize_sharegpt
7 |
8 | # Load model and tokenizer
9 | model, tokenizer = FastLanguageModel.from_pretrained(
10 | model_name="unsloth/Llama-3.2-3B-Instruct",
11 | max_seq_length=2048, load_in_4bit=True,
12 | )
13 |
14 | # Add LoRA adapters
15 | model = FastLanguageModel.get_peft_model(
16 | model, r=16,
17 | target_modules=[
18 | "q_proj", "k_proj", "v_proj", "o_proj",
19 | "gate_proj", "up_proj", "down_proj"
20 | ],
21 | )
22 |
23 | # Set up chat template and prepare dataset
24 | tokenizer = get_chat_template(tokenizer, chat_template="llama-3.1")
25 | dataset = load_dataset("mlabonne/FineTome-100k", split="train")
26 | dataset = standardize_sharegpt(dataset)
27 | dataset = dataset.map(
28 | lambda examples: {
29 | "text": [
30 | tokenizer.apply_chat_template(convo, tokenize=False)
31 | for convo in examples["conversations"]
32 | ]
33 | },
34 | batched=True
35 | )
36 |
37 | # Set up trainer
38 | trainer = SFTTrainer(
39 | model=model,
40 | train_dataset=dataset,
41 | dataset_text_field="text",
42 | max_seq_length=2048,
43 | args=TrainingArguments(
44 | per_device_train_batch_size=2,
45 | gradient_accumulation_steps=4,
46 | warmup_steps=5,
47 | max_steps=60,
48 | learning_rate=2e-4,
49 | fp16=not torch.cuda.is_bf16_supported(),
50 | bf16=torch.cuda.is_bf16_supported(),
51 | logging_steps=1,
52 | output_dir="outputs",
53 | ),
54 | )
55 |
56 | # Train the model
57 | trainer.train()
58 |
59 | # Save the finetuned model
60 | model.save_pretrained("finetuned_model")
--------------------------------------------------------------------------------
/advanced_llm_apps/llm_finetuning_tutorials/llama3.2_finetuning/requirements.txt:
--------------------------------------------------------------------------------
1 | torch
2 | unsloth
3 | transformers
4 | datasets
5 | trl
--------------------------------------------------------------------------------
/docs/banner/unwind_black.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Shubhamsaboo/awesome-llm-apps/78da770fdec4c63026a1a32305a1f4f0ad6eaf45/docs/banner/unwind_black.png
--------------------------------------------------------------------------------
/mcp_ai_agents/browser_mcp_agent/README.md:
--------------------------------------------------------------------------------
1 | # 🌐 MCP Browser Agent
2 |
3 | 
4 |
5 | A Streamlit application that allows you to browse and interact with websites using natural language commands through the Model Context Protocol (MCP) and [MCP-Agent](https://github.com/lastmile-ai/mcp-agent) with Puppeteer integration.
6 |
7 | ## Features
8 |
9 | - **Natural Language Interface**: Control a browser with simple English commands
10 | - **Full Browser Navigation**: Visit websites and navigate through pages
11 | - **Interactive Elements**: Click buttons, fill forms, and scroll through content
12 | - **Visual Feedback**: Take screenshots of webpage elements
13 | - **Information Extraction**: Extract and summarize content from webpages
14 | - **Multi-step Tasks**: Complete complex browsing sequences through conversation
15 |
16 | ## Setup
17 |
18 | ### Requirements
19 |
20 | - Python 3.8+
21 | - Node.js and npm (for Puppeteer)
22 | - This is a critical requirement! The app uses Puppeteer to control a headless browser
23 | - Download and install from [nodejs.org](https://nodejs.org/)
24 | - OpenAI or Anthropic API Key
25 |
26 | ### Installation
27 |
28 | 1. Clone this repository:
29 | ```bash
30 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
31 | cd awesome-llm-apps/mcp_ai_agents/browser_mcp_agent
32 | ```
33 |
34 | 2. Install the required Python packages:
35 | ```bash
36 | pip install -r requirements.txt
37 | ```
38 |
39 | 3. Verify Node.js and npm are installed:
40 | ```bash
41 | node --version
42 | npm --version
43 | ```
44 | Both commands should return version numbers. If they don't, please install Node.js.
45 |
46 | 4. Set up your API keys:
47 | - Set OpenAI API Key as an environment variable:
48 | ```bash
49 | export OPENAI_API_KEY=your-openai-api-key
50 | ```
51 |
52 |
53 | ### Running the App
54 |
55 | 1. Start the Streamlit app:
56 | ```bash
57 | streamlit run main.py
58 | ```
59 |
60 | 2. In the app interface:
61 | - Enter your browsing command
62 | - Click "Run Command"
63 | - View the results and screenshots
64 |
65 | ### Example Commands
66 |
67 | #### Basic Navigation
68 | - "Go to www.lastmileai.dev"
69 | - "Go back to the previous page"
70 |
71 | #### Interaction
72 | - "Click on the login button"
73 | - "Scroll down to see more content"
74 |
75 | #### Content Extraction
76 | - "Summarize the main content of this page"
77 | - "Extract the navigation menu items"
78 | - "Take a screenshot of the hero section"
79 |
80 | #### Multi-step Tasks
81 | - "Go to the blog, find the most recent article, and summarize its key points"
82 |
83 | ## Architecture
84 |
85 | The application uses:
86 | - Streamlit for the user interface
87 | - MCP (Model Context Protocol) to connect the LLM with tools
88 | - Puppeteer for browser automation
89 | - [MCP-Agent](https://github.com/lastmile-ai/mcp-agent/) for the Agentic Framework
90 | - OpenAI's models to interpret commands and generate responses
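91 |
92 | ### Agent wiring (sketch)
93 |
94 | A condensed sketch of how these pieces connect, following the MCP-Agent README pattern; the names are illustrative, and the `puppeteer` server comes from `mcp_agent.config.yaml`:
95 |
96 | ```python
97 | # Sketch of the MCP-Agent wiring (condensed; names illustrative).
98 | import asyncio
99 | from mcp_agent.app import MCPApp
100 | from mcp_agent.agents.agent import Agent
101 | from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
102 |
103 | app = MCPApp(name="browser_agent")
104 |
105 | async def run(command: str) -> str:
106 |     async with app.run():
107 |         browser = Agent(
108 |             name="browser",
109 |             instruction="Browse the web to complete the user's request.",
110 |             server_names=["puppeteer"],  # defined in mcp_agent.config.yaml
111 |         )
112 |         async with browser:
113 |             llm = await browser.attach_llm(OpenAIAugmentedLLM)
114 |             return await llm.generate_str(command)
115 |
116 | print(asyncio.run(run("Go to www.lastmileai.dev and summarize the page.")))
117 | ```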
91 |
--------------------------------------------------------------------------------
/mcp_ai_agents/browser_mcp_agent/mcp_agent.config.yaml:
--------------------------------------------------------------------------------
1 | execution_engine: asyncio
2 | logger:
3 | transports: [console, file]
4 | level: debug
5 | progress_display: true
6 | path_settings:
7 | path_pattern: "logs/mcp-agent-{unique_id}.jsonl"
8 | unique_id: "timestamp" # Options: "timestamp" or "session_id"
9 | timestamp_format: "%Y%m%d_%H%M%S"
10 |
11 | mcp:
12 | servers:
13 | puppeteer:
14 | command: "npx"
15 | args: ["-y", "@modelcontextprotocol/server-puppeteer"]
16 |
17 |
18 | openai:
19 | # Secrets (API keys, etc.) are stored in an mcp_agent.secrets.yaml file which can be gitignored
20 | default_model: "gpt-4.1-mini-2025-04-14"
21 |
--------------------------------------------------------------------------------
/mcp_ai_agents/browser_mcp_agent/mcp_agent.secrets.yaml.example:
--------------------------------------------------------------------------------
1 | openai:
2 | api_key: YOUR_OPENAI_API_KEY
--------------------------------------------------------------------------------
/mcp_ai_agents/browser_mcp_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit>=1.28.0
2 | mcp-agent>=0.0.14
3 | openai>=1.0.0
--------------------------------------------------------------------------------
/mcp_ai_agents/github_mcp_agent/README.md:
--------------------------------------------------------------------------------
1 | # 🐙 MCP GitHub Agent
2 |
3 | A Streamlit application that allows you to explore and analyze GitHub repositories using natural language queries through the Model Context Protocol (MCP).
4 |
5 | ## Features
6 |
7 | - **Natural Language Interface**: Ask questions about repositories in plain English
8 | - **Comprehensive Analysis**: Explore issues, pull requests, repository activity, and code statistics
9 | - **Interactive UI**: User-friendly interface with example queries and custom input
10 | - **MCP Integration**: Leverages the Model Context Protocol to interact with GitHub's API
11 | - **Real-time Results**: Get immediate insights on repository activity and health
12 |
13 | ## Setup
14 |
15 | ### Requirements
16 |
17 | - Python 3.8+
18 | - Node.js and npm (for MCP GitHub server)
19 | - This is a critical requirement! The app uses `npx` to run the MCP GitHub server
20 | - Download and install from [nodejs.org](https://nodejs.org/)
21 | - GitHub Personal Access Token with appropriate permissions
22 | - OpenAI API Key
23 |
24 | ### Installation
25 |
26 | 1. Clone this repository:
27 | ```bash
28 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
29 | cd awesome-llm-apps/mcp_ai_agents/github_mcp_agent
30 | ```
31 |
32 | 2. Install the required Python packages:
33 | ```bash
34 | pip install -r requirements.txt
35 | ```
36 |
37 | 3. Verify Node.js and npm are installed:
38 | ```bash
39 | node --version
40 | npm --version
41 | npx --version
42 | ```
43 | All of these commands should return version numbers. If they don't, please install Node.js.
44 |
45 | 4. Set up your API keys:
46 | - Set OpenAI API Key as an environment variable:
47 | ```bash
48 | export OPENAI_API_KEY=your-openai-api-key
49 | ```
50 | - GitHub token will be entered directly in the app interface
51 |
52 | 5. Create a GitHub Personal Access Token:
53 | - Visit https://github.com/settings/tokens
54 | - Create a new token with `repo` and `user` scopes
55 | - Save the token somewhere secure
56 |
57 | ### Running the App
58 |
59 | 1. Start the Streamlit app:
60 | ```bash
61 | streamlit run app.py
62 | ```
63 |
64 | 2. In the app interface:
65 | - Enter your GitHub token in the sidebar
66 | - Specify a repository to analyze
67 | - Select a query type or write your own
68 | - Click "Run Query"
69 |
70 | ### Example Queries
71 |
72 | #### Issues
73 | - "Show me issues by label"
74 | - "What issues are being actively discussed?"
75 | - "Find issues labeled as bugs"
76 |
77 | #### Pull Requests
78 | - "What PRs need review?"
79 | - "Show me recent merged PRs"
80 | - "Find PRs with conflicts"
81 |
82 | #### Repository
83 | - "Show repository health metrics"
84 | - "Show repository activity patterns"
85 | - "Analyze code quality trends"
86 |
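87 | ### Under the Hood: Listing the Server's Tools
88 |
89 | The app talks to the GitHub MCP server over stdio. A standalone sketch that just lists the tools the server exposes, using the `mcp` Python SDK from the requirements (reading the token from a `GITHUB_TOKEN` environment variable is an assumption of this sketch):
90 |
91 | ```python
92 | # Sketch: enumerate the GitHub MCP server's tools via the mcp SDK.
93 | import asyncio, os
94 | from mcp import ClientSession, StdioServerParameters
95 | from mcp.client.stdio import stdio_client
96 |
97 | params = StdioServerParameters(
98 |     command="npx",
99 |     args=["-y", "@modelcontextprotocol/server-github"],
100 |     env={"GITHUB_PERSONAL_ACCESS_TOKEN": os.environ["GITHUB_TOKEN"]},
101 | )
102 |
103 | async def main():
104 |     async with stdio_client(params) as (read, write):
105 |         async with ClientSession(read, write) as session:
106 |             await session.initialize()
107 |             tools = await session.list_tools()
108 |             for tool in tools.tools:
109 |                 print(tool.name)
110 |
111 | asyncio.run(main())
112 | ```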
--------------------------------------------------------------------------------
/mcp_ai_agents/github_mcp_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit>=1.28.0
2 | agno>=1.1.0
3 | mcp>=0.1.0
4 | openai>=1.0.0
5 | asyncio>=3.4.3
--------------------------------------------------------------------------------
/rag_tutorials/agentic_rag/README.md:
--------------------------------------------------------------------------------
1 | ## 🗃️ AI RAG Agent with Web Access
2 | This script demonstrates how to build a Retrieval-Augmented Generation (RAG) agent with web access using GPT-4o in just 15 lines of Python code. The agent uses a PDF knowledge base and has the ability to search the web using DuckDuckGo.
3 |
4 | ### Features
5 |
6 | - Creates a RAG agent using GPT-4o
7 | - Incorporates a PDF-based knowledge base
8 | - Uses LanceDB as the vector database for efficient similarity search
9 | - Includes web search capability through DuckDuckGo
10 | - Provides a playground interface for easy interaction
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/rag_tutorials/agentic_rag
18 | ```
19 |
20 | 2. Install the required dependencies:
21 |
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Get your OpenAI API Key
27 |
28 | - Sign up for an [OpenAI account](https://platform.openai.com/) (or the LLM provider of your choice) and obtain your API key.
29 | - Set your OpenAI API key as an environment variable:
30 | ```bash
31 | export OPENAI_API_KEY='your-api-key-here'
32 | ```
33 |
34 | 4. Run the AI RAG Agent
35 | ```bash
36 | python3 rag_agent.py
37 | ```
38 | 5. Open your web browser and navigate to the URL provided in the console output to interact with the RAG agent through the playground interface.
39 |
40 | ### How it works?
41 |
42 | 1. **Knowledge Base Creation:** The script creates a knowledge base from a PDF file hosted online.
43 | 2. **Vector Database Setup:** LanceDB is used as the vector database for efficient similarity search within the knowledge base.
44 | 3. **Agent Configuration:** An AI agent is created using GPT-4o as the underlying model, with the PDF knowledge base and DuckDuckGo search tool.
45 | 4. **Playground Setup:** A playground interface is set up for easy interaction with the RAG agent.
46 |
47 |
--------------------------------------------------------------------------------
/rag_tutorials/agentic_rag/rag_agent.py:
--------------------------------------------------------------------------------
1 | from agno.agent import Agent
2 | from agno.models.openai import OpenAIChat
3 | from agno.knowledge.pdf_url import PDFUrlKnowledgeBase
4 | from agno.vectordb.lancedb import LanceDb, SearchType
5 | from agno.playground import Playground, serve_playground_app
6 | from agno.tools.duckduckgo import DuckDuckGoTools
7 |
8 | db_uri = "tmp/lancedb"
9 | # Create a knowledge base from a PDF
10 | knowledge_base = PDFUrlKnowledgeBase(
11 | urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
12 | # Use LanceDB as the vector database
13 | vector_db=LanceDb(table_name="recipes", uri=db_uri, search_type=SearchType.vector),
14 | )
15 | # Load the knowledge base: Comment out after first run
16 | knowledge_base.load(upsert=True)
17 |
18 | rag_agent = Agent(
19 | model=OpenAIChat(id="gpt-4o"),
20 | agent_id="rag-agent",
21 | knowledge=knowledge_base, # Add the knowledge base to the agent
22 | tools=[DuckDuckGoTools()],
23 | show_tool_calls=True,
24 | markdown=True,
25 | )
26 |
27 | app = Playground(agents=[rag_agent]).get_app()
28 |
29 | if __name__ == "__main__":
30 | serve_playground_app("rag_agent:app", reload=True)
--------------------------------------------------------------------------------
/rag_tutorials/agentic_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | openai
3 | lancedb
4 | tantivy
5 | pypdf
6 | sqlalchemy
7 | pgvector
8 | psycopg[binary]
9 |
--------------------------------------------------------------------------------
/rag_tutorials/ai_blog_search/requirements.txt:
--------------------------------------------------------------------------------
1 | langchain
2 | langgraph
3 | langchainhub
4 | langchain-community
5 | langchain-google-genai
6 | langchain-qdrant
7 | langchain-text-splitters
8 | tiktoken
9 | beautifulsoup4
10 | python-dotenv
--------------------------------------------------------------------------------
/rag_tutorials/autonomous_rag/README.md:
--------------------------------------------------------------------------------
1 | ## 🤖 AutoRAG: Autonomous RAG with GPT-4o and Vector Database
2 | This Streamlit application implements an Autonomous Retrieval-Augmented Generation (RAG) system using OpenAI's GPT-4o model and PgVector database. It allows users to upload PDF documents, add them to a knowledge base, and query the AI assistant with context from both the knowledge base and web searches.
3 |
4 |
5 | ### Features
6 | - Chat interface for interacting with the AI assistant
7 | - PDF document upload and processing
8 | - Knowledge base integration using PostgreSQL and Pgvector
9 | - Web search capability using DuckDuckGo
10 | - Persistent storage of assistant data and conversations
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | cd awesome-llm-apps/rag_tutorials/autonomous_rag
18 | ```
19 |
20 | 2. Install the required dependencies:
21 |
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Ensure PgVector Database is running:
27 | The app expects PgVector to be running on localhost:5532. Adjust the configuration in the code if your setup is different.
28 |
29 | ```bash
30 | docker run -d \
31 | -e POSTGRES_DB=ai \
32 | -e POSTGRES_USER=ai \
33 | -e POSTGRES_PASSWORD=ai \
34 | -e PGDATA=/var/lib/postgresql/data/pgdata \
35 | -v pgvolume:/var/lib/postgresql/data \
36 | -p 5532:5432 \
37 | --name pgvector \
38 | phidata/pgvector:16
39 | ```
40 |
41 | 4. Run the Streamlit App
42 | ```bash
43 | streamlit run autorag.py
44 | ```
45 |
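46 | ### Checking the database connection (optional)
47 |
48 | A minimal sketch to confirm the PgVector container is reachable before launching the app; the DSN mirrors the `docker run` flags above:
49 |
50 | ```python
51 | # Sketch: verify the PgVector container responds (DSN mirrors the flags above).
52 | from sqlalchemy import create_engine, text
53 |
54 | engine = create_engine("postgresql+psycopg://ai:ai@localhost:5532/ai")
55 | with engine.connect() as conn:
56 |     print(conn.execute(text("SELECT version()")).scalar())
57 | ```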
--------------------------------------------------------------------------------
/rag_tutorials/autonomous_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | agno
3 | openai
4 | psycopg[binary]
5 | pgvector
6 | requests
7 | sqlalchemy
8 | pypdf
9 | duckduckgo-search
10 | nest_asyncio
11 |
--------------------------------------------------------------------------------
/rag_tutorials/corrective_rag/README.md:
--------------------------------------------------------------------------------
1 | # 🔄 Corrective RAG Agent
2 | A sophisticated Retrieval-Augmented Generation (RAG) system that implements a corrective multi-stage workflow using LangGraph. This system combines document retrieval, relevance grading, query transformation, and web search to provide comprehensive and accurate responses.
3 |
4 | ## Features
5 |
6 | - **Smart Document Retrieval**: Uses Qdrant vector store for efficient document retrieval
7 | - **Document Relevance Grading**: Employs Claude 3.5 sonnet to assess document relevance
8 | - **Query Transformation**: Improves search results by optimizing queries when needed
9 | - **Web Search Fallback**: Uses Tavily API for web search when local documents aren't sufficient
10 | - **Multi-Model Approach**: Combines OpenAI embeddings and Claude 3.5 sonnet for different tasks
11 | - **Interactive UI**: Built with Streamlit for easy document upload and querying
12 |
13 | ## How to Run?
14 |
15 | 1. **Clone the Repository**:
16 | ```bash
17 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
18 | cd awesome-llm-apps/rag_tutorials/corrective_rag
19 | ```
20 |
21 | 2. **Install Dependencies**:
22 | ```bash
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. **Set Up API Keys**:
27 | You'll need to obtain the following API keys:
28 | - [OpenAI API key](https://platform.openai.com/api-keys) (for embeddings)
29 | - [Anthropic API key](https://console.anthropic.com/settings/keys) (for Claude 3.5 sonnet as LLM)
30 | - [Tavily API key](https://app.tavily.com/home) (for web search)
31 | - Qdrant Cloud Setup
32 | 1. Visit [Qdrant Cloud](https://cloud.qdrant.io/)
33 | 2. Create an account or sign in
34 | 3. Create a new cluster
35 | 4. Get your credentials:
36 | - Qdrant API Key: Found in API Keys section
37 | - Qdrant URL: Your cluster URL (format: `https://xxx-xxx.aws.cloud.qdrant.io`)
38 |
39 | 4. **Run the Application**:
40 | ```bash
41 | streamlit run corrective_rag.py
42 | ```
43 |
44 | 5. **Use the Application**:
45 | - Upload documents or provide URLs
46 | - Enter your questions in the query box
47 | - View the step-by-step Corrective RAG process
48 | - Get comprehensive answers
49 |
50 | ## Tech Stack
51 |
52 | - **LangChain**: For RAG orchestration and chains
53 | - **LangGraph**: For workflow management
54 | - **Qdrant**: Vector database for document storage
55 | - **Claude 3.5 sonnet**: Main language model for analysis and generation
56 | - **OpenAI**: For document embeddings
57 | - **Tavily**: For web search capabilities
58 | - **Streamlit**: For the user interface
59 |
--------------------------------------------------------------------------------
/rag_tutorials/corrective_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | # Core dependencies
2 | langchain==0.3.12
3 | langgraph==0.2.53
4 | qdrant-client==1.12.1
5 | langchain-openai==0.2.14
6 | langchain-anthropic==0.3.0
7 | tavily-python==0.5.0
8 | langchain-community==0.3.12
9 | langchain-core==0.3.28
10 | streamlit==1.41.1
11 | tenacity==8.5.0
12 | anthropic>=0.7.0
13 | openai>=1.12.0
14 | tiktoken>=0.6.0
15 | pydantic>=2.0.0
16 | numpy>=1.24.0
17 | PyYAML>=6.0.0
18 | nest-asyncio>=1.5.0
19 |
--------------------------------------------------------------------------------
/rag_tutorials/deepseek_local_rag_agent/README.md:
--------------------------------------------------------------------------------
1 | # 🐋 Deepseek Local RAG Reasoning Agent
2 |
3 | A powerful reasoning agent that combines local Deepseek models with RAG capabilities. Built using Deepseek (via Ollama), Snowflake Arctic Embed for embeddings, Qdrant for vector storage, and Agno for agent orchestration, this application offers both simple local chat and advanced RAG-enhanced interactions with comprehensive document processing and web search capabilities.
4 |
5 | ## Features
6 |
7 | - **Dual Operation Modes**
8 | - Local Chat Mode: Direct interaction with Deepseek locally
9 |   - RAG Mode: Enhanced reasoning with document context and web search integration (uses llama3.2)
10 |
11 | - **Document Processing** (RAG Mode)
12 | - PDF document upload and processing
13 | - Web page content extraction
14 | - Automatic text chunking and embedding
15 | - Vector storage in Qdrant cloud
16 |
17 | - **Intelligent Querying** (RAG Mode)
18 | - RAG-based document retrieval
19 | - Similarity search with threshold filtering
20 | - Automatic fallback to web search
21 | - Source attribution for answers
22 |
23 | - **Advanced Capabilities**
24 | - Exa AI web search integration
25 | - Custom domain filtering for web search
26 | - Context-aware response generation
27 | - Chat history management
28 | - Thinking process visualization
29 |
30 | - **Model Specific Features**
31 | - Flexible model selection:
32 | - Deepseek r1 1.5b (lighter, suitable for most laptops)
33 | - Deepseek r1 7b (more capable, requires better hardware)
34 | - Snowflake Arctic Embedding model (SOTA) for vector embeddings
35 | - Agno Agent framework for orchestration
36 | - Streamlit-based interactive interface
37 |
38 | ## Prerequisites
39 |
40 | ### 1. Ollama Setup
41 | 1. Install [Ollama](https://ollama.ai)
42 | 2. Pull the Deepseek r1 model(s):
43 | ```bash
44 | # For the lighter model
45 | ollama pull deepseek-r1:1.5b
46 |
47 | # For the more capable model (if your hardware supports it)
48 | ollama pull deepseek-r1:7b
49 |
50 | ollama pull snowflake-arctic-embed
51 | ollama pull llama3.2
52 | ```
53 |
54 | ### 2. Qdrant Cloud Setup (for RAG Mode)
55 | 1. Visit [Qdrant Cloud](https://cloud.qdrant.io/)
56 | 2. Create an account or sign in
57 | 3. Create a new cluster
58 | 4. Get your credentials:
59 | - Qdrant API Key: Found in API Keys section
60 | - Qdrant URL: Your cluster URL (format: `https://xxx-xxx.cloud.qdrant.io`)
61 |
62 | ### 3. Exa AI API Key (Optional)
63 | 1. Visit [Exa AI](https://exa.ai)
64 | 2. Sign up for an account
65 | 3. Generate an API key for web search capabilities
66 |
67 | ## How to Run
68 |
69 | 1. Clone the repository:
70 | ```bash
71 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
72 | cd awesome-llm-apps/rag_tutorials/deepseek_local_rag_agent
73 | ```
74 |
75 | 2. Install dependencies:
76 | ```bash
77 | pip install -r requirements.txt
78 | ```
79 |
80 | 3. Run the application:
81 | ```bash
82 | streamlit run deepseek_rag_agent.py
83 | ```
84 |
85 |
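86 | ## Inspecting the Thinking Process (sketch)
87 |
88 | Deepseek-r1 wraps its chain of thought in `<think>` tags, which is what the app's thinking-process visualization separates from the final answer. A minimal sketch of that split using the `ollama` Python client; the tag format is an assumption about the model's output:
89 |
90 | ```python
91 | # Sketch: separate deepseek-r1's reasoning from its final answer.
92 | # Assumes the model wraps its chain of thought in <think>...</think> tags.
93 | import re
94 |
95 | import ollama
96 |
97 | response = ollama.chat(
98 |     model="deepseek-r1:1.5b",
99 |     messages=[{"role": "user", "content": "Why is the sky blue? Be brief."}],
100 | )
101 | content = response["message"]["content"]
102 |
103 | match = re.search(r"<think>(.*?)</think>", content, re.DOTALL)
104 | thinking = match.group(1).strip() if match else ""
105 | answer = re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip()
106 |
107 | print("THINKING:", thinking)
108 | print("ANSWER:", answer)
109 | ```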
--------------------------------------------------------------------------------
/rag_tutorials/deepseek_local_rag_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | exa==0.5.26
3 | qdrant-client==1.12.1
4 | langchain-qdrant==0.2.0
5 | langchain-community==0.3.13
6 | streamlit==1.41.1
7 | ollama
8 |
--------------------------------------------------------------------------------
/rag_tutorials/gemini_agentic_rag/README.md:
--------------------------------------------------------------------------------
1 | # 🤔 Agentic RAG with Gemini Flash Thinking
2 |
3 | An agentic RAG system built with the new Gemini 2.0 Flash Thinking model and gemini-exp-1206, Qdrant for vector storage, and Agno (previously Phidata) for agent orchestration. This application features intelligent query rewriting, document processing, and web search fallback capabilities to provide comprehensive AI-powered responses.
4 |
5 | ## Features
6 |
7 | - **Document Processing**
8 | - PDF document upload and processing
9 | - Web page content extraction
10 | - Automatic text chunking and embedding
11 | - Vector storage in Qdrant cloud
12 |
13 | - **Intelligent Querying**
14 | - Query rewriting for better retrieval (see the sketch after the feature list)
15 | - RAG-based document retrieval
16 | - Similarity search with threshold filtering
17 | - Automatic fallback to web search
18 | - Source attribution for answers
19 |
20 | - **Advanced Capabilities**
21 | - Exa AI web search integration
22 | - Custom domain filtering for web search
23 | - Context-aware response generation
24 | - Chat history management
25 | - Query reformulation agent
26 |
27 | - **Model Specific Features**
28 | - Gemini 2.0 Flash Thinking for chat and reasoning
29 | - Gemini Embedding model for vector embeddings
30 | - Agno Agent framework for orchestration
31 | - Streamlit-based interactive interface
32 |
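33 | The query-rewriting step can be sketched as a small, single-purpose Agno agent. This is an illustration under assumed settings: the model id and instruction text below are placeholders, not the app's exact configuration.
34 | 
35 | ```python
36 | from agno.agent import Agent
37 | from agno.models.google import Gemini
38 | 
39 | # A dedicated agent whose only job is to reformulate the user's question
40 | rewriter = Agent(
41 |     model=Gemini(id="gemini-2.0-flash-thinking-exp"),  # model id is an assumption
42 |     instructions="Rewrite the question to maximize retrieval recall. Return only the rewritten query.",
43 | )
44 | 
45 | response = rewriter.run("why qdrant not faiss for prod?")
46 | print(response.content)  # the reformulated, retrieval-friendly query
47 | ```
48 | 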
33 | ## Prerequisites
34 |
35 | ### 1. Google API Key
36 | 1. Go to [Google AI Studio](https://aistudio.google.com/apikey)
37 | 2. Sign up or log in to your account
38 | 3. Create a new API key
39 |
40 | ### 2. Qdrant Cloud Setup
41 | 1. Visit [Qdrant Cloud](https://cloud.qdrant.io/)
42 | 2. Create an account or sign in
43 | 3. Create a new cluster
44 | 4. Get your credentials:
45 | - Qdrant API Key: Found in API Keys section
46 | - Qdrant URL: Your cluster URL (format: `https://xxx-xxx.cloud.qdrant.io`)
47 |
48 | ### 3. Exa AI API Key (Optional)
49 | 1. Visit [Exa AI](https://exa.ai)
50 | 2. Sign up for an account
51 | 3. Generate an API key for web search capabilities
52 |
53 | ## How to Run
54 |
55 | 1. Clone the repository:
56 | ```bash
57 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
58 | cd rag_tutorials/gemini_agentic_rag
59 | ```
60 |
61 | 2. Install dependencies:
62 | ```bash
63 | pip install -r requirements.txt
64 | ```
65 |
66 | 3. Run the application:
67 | ```bash
68 | streamlit run agentic_rag_gemini.py
69 | ```
70 |
71 | ## Usage
72 |
73 | 1. Configure API keys in the sidebar:
74 | - Enter your Google API key
75 | - Add Qdrant credentials
76 | - (Optional) Add Exa AI key for web search
77 |
78 | 2. Upload documents:
79 | - Use the file uploader for PDFs
80 | - Enter URLs for web content
81 |
82 | 3. Ask questions:
83 | - Type your query in the chat interface
84 | - View rewritten queries and sources
85 | - See web search results when relevant
86 |
87 | 4. Manage your session:
88 | - Clear chat history as needed
89 | - Configure web search domains
90 | - Monitor processed documents
91 |
--------------------------------------------------------------------------------
/rag_tutorials/gemini_agentic_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | exa==0.5.26
3 | qdrant-client==1.12.1
4 | langchain-qdrant==0.2.0
5 | langchain-community==0.3.13
6 | streamlit==1.41.1
--------------------------------------------------------------------------------
/rag_tutorials/hybrid_search_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | raglite==0.2.1
2 | pydantic==2.10.1
3 | sqlalchemy>=2.0.0
4 | psycopg2-binary>=2.9.9
5 | openai>=1.0.0
6 | cohere>=4.37
7 | pypdf>=3.0.0
8 | python-dotenv>=1.0.0
9 | rerankers==0.6.0
10 | spacy>=3.7.0
11 | streamlit
12 | anthropic
13 |
--------------------------------------------------------------------------------
/rag_tutorials/llama3.1_local_rag/README.md:
--------------------------------------------------------------------------------
1 | ## 💻 Local Llama-3.1 with RAG
2 | Streamlit app that lets you chat with any webpage using a local Llama-3.1 model and Retrieval Augmented Generation (RAG). The model runs entirely on your computer, so the app is 100% free and requires no API keys or paid services.
3 |
4 |
5 | ### Features
6 | - Input a webpage URL
7 | - Ask questions about the content of the webpage
8 | - Get accurate answers using RAG and the Llama-3.1 model running locally on your computer
9 |
10 | ### How to get Started?
11 |
12 | 1. Clone the GitHub repository
13 |
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/rag_tutorials/llama3.1_local_rag
17 | ```
18 | 2. Install the required dependencies:
19 |
20 | ```bash
21 | pip install -r requirements.txt
22 | ```
23 | 3. Run the Streamlit App
24 | ```bash
25 | streamlit run llama3.1_local_rag.py
26 | ```
27 |
28 | ### How it Works?
29 |
30 | - The app loads the webpage data using WebBaseLoader and splits it into chunks using RecursiveCharacterTextSplitter.
31 | - It creates Ollama embeddings and a vector store using Chroma.
32 | - The app sets up a RAG (Retrieval-Augmented Generation) chain, which retrieves relevant documents based on the user's question.
33 | - The Llama-3.1 model is called to generate an answer using the retrieved context.
34 | - The app displays the answer to the user's question.
35 |
36 |
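37 | Condensed, the pipeline looks roughly like the sketch below. It illustrates the flow rather than reproducing the app's exact code, and it assumes `chromadb` is installed alongside the listed requirements; the chunk sizes and embedding-model choice are assumptions.
38 | 
39 | ```python
40 | from langchain_community.document_loaders import WebBaseLoader
41 | from langchain_community.vectorstores import Chroma
42 | from langchain_ollama import ChatOllama, OllamaEmbeddings
43 | from langchain_text_splitters import RecursiveCharacterTextSplitter
44 | 
45 | url, question = "https://example.com", "What is this page about?"
46 | 
47 | # 1. Load the page and split it into overlapping chunks
48 | docs = WebBaseLoader(url).load()
49 | chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
50 | 
51 | # 2. Embed the chunks locally and index them in Chroma
52 | store = Chroma.from_documents(chunks, OllamaEmbeddings(model="llama3.1"))
53 | 
54 | # 3. Retrieve the chunks most relevant to the question
55 | context = "\n\n".join(d.page_content for d in store.as_retriever().invoke(question))
56 | 
57 | # 4. Ask the local model to answer from the retrieved context
58 | answer = ChatOllama(model="llama3.1").invoke(f"Context:\n{context}\n\nQuestion: {question}")
59 | print(answer.content)
60 | ```
61 | 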
--------------------------------------------------------------------------------
/rag_tutorials/llama3.1_local_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | ollama
3 | langchain
4 | langchain_community
5 | langchain_ollama
6 |
--------------------------------------------------------------------------------
/rag_tutorials/local_hybrid_search_rag/requirements.txt:
--------------------------------------------------------------------------------
1 | raglite==0.2.1
2 | llama-cpp-python>=0.2.56
3 | sentence-transformers>=2.5.1
4 | pydantic==2.10.1
5 | sqlalchemy>=2.0.0
6 | psycopg2-binary>=2.9.9
7 | pypdf>=3.0.0
8 | python-dotenv>=1.0.0
9 | rerankers==0.6.0
10 | spacy>=3.7.0
11 | streamlit>=1.31.0
12 | flashrank==0.2.9
13 | numpy>=1.24.0
14 | pandas>=2.0.0
15 | tqdm>=4.66.0
16 |
--------------------------------------------------------------------------------
/rag_tutorials/local_rag_agent/README.md:
--------------------------------------------------------------------------------
1 | ## 🦙 Local RAG Agent with Llama 3.2
2 | This application implements a Retrieval-Augmented Generation (RAG) system using Llama 3.2 via Ollama, with Qdrant as the vector database.
3 |
4 |
5 | ### Features
6 | - Fully local RAG implementation
7 | - Powered by Llama 3.2 through Ollama
8 | - Vector search using Qdrant
9 | - Interactive playground interface
10 | - No external API dependencies
11 |
12 | ### How to get Started?
13 |
14 | 1. Clone the GitHub repository
15 | ```bash
16 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | cd awesome-llm-apps/rag_tutorials/local_rag_agent
23 | pip install -r requirements.txt
24 | ```
25 |
26 | 3. Install and start the [Qdrant](https://qdrant.tech/) vector database locally
27 |
28 | ```bash
29 | docker pull qdrant/qdrant
30 | docker run -p 6333:6333 qdrant/qdrant
31 | ```
32 |
33 | 4. Install [Ollama](https://ollama.com/download) and pull Llama 3.2 as the LLM and OpenHermes as the embedding model for the OllamaEmbedder
34 | ```bash
35 | ollama pull llama3.2
36 | ollama pull openhermes
37 | ```
38 |
39 | 5. Run the AI RAG Agent
40 | ```bash
41 | python local_rag_agent.py
42 | ```
43 |
44 | 6. Open your web browser and navigate to the URL shown in the console output to interact with the RAG agent through the playground interface.
45 |
46 |
47 |
--------------------------------------------------------------------------------
/rag_tutorials/local_rag_agent/local_rag_agent.py:
--------------------------------------------------------------------------------
1 | # Import necessary libraries
2 | from agno.agent import Agent
3 | from agno.models.ollama import Ollama
4 | from agno.knowledge.pdf_url import PDFUrlKnowledgeBase
5 | from agno.vectordb.qdrant import Qdrant
6 | from agno.embedder.ollama import OllamaEmbedder
7 | from agno.playground import Playground, serve_playground_app
8 |
9 | # Define the collection name for the vector database
10 | collection_name = "thai-recipe-index"
11 |
12 | # Set up Qdrant as the vector database with the embedder
13 | vector_db = Qdrant(
14 | collection=collection_name,
15 | url="http://localhost:6333/",
16 | embedder=OllamaEmbedder()
17 | )
18 |
19 | # Define the knowledge base with the specified PDF URL
20 | knowledge_base = PDFUrlKnowledgeBase(
21 | urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
22 | vector_db=vector_db,
23 | )
24 |
25 | # Load the knowledge base; comment this out after the first run to avoid re-ingesting the PDF
26 | knowledge_base.load(recreate=True, upsert=True)
27 |
28 | # Create the Agent using Ollama's llama3.2 model and the knowledge base
29 | agent = Agent(
30 | name="Local RAG Agent",
31 | model=Ollama(id="llama3.2"),
32 | knowledge=knowledge_base,
33 | )
34 |
35 | # UI for RAG agent
36 | app = Playground(agents=[agent]).get_app()
37 |
38 | # Run the Playground app
39 | if __name__ == "__main__":
40 | serve_playground_app("local_rag_agent:app", reload=True)
41 |
--------------------------------------------------------------------------------
/rag_tutorials/local_rag_agent/requirements.txt:
--------------------------------------------------------------------------------
1 | agno
2 | qdrant-client
3 | ollama
4 | pypdf
5 | openai
6 | fastapi
7 | uvicorn
--------------------------------------------------------------------------------
/rag_tutorials/rag-as-a-service/README.md:
--------------------------------------------------------------------------------
1 | ## 🖇️ RAG-as-a-Service with Claude 3.5 Sonnet
2 | Build and deploy a production-ready Retrieval-Augmented Generation (RAG) service using Claude 3.5 Sonnet and Ragie.ai. This implementation allows you to create a document querying system with a user-friendly Streamlit interface in less than 50 lines of Python code.
3 |
4 | ### Features
5 | - Production-ready RAG pipeline
6 | - Integration with Claude 3.5 Sonnet for response generation (sketched after this list)
7 | - Document upload from URLs
8 | - Real-time document querying
9 | - Support for both fast and accurate document processing modes
10 |
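11 | To make the division of labor concrete, here is a hedged sketch of the generation step only: Ragie handles retrieval, and the retrieved chunks are passed to Claude 3.5 Sonnet. The retrieval call is elided, and the model id and prompt shape are assumptions rather than the app's exact code.
12 | 
13 | ```python
14 | import anthropic
15 | 
16 | client = anthropic.Anthropic(api_key="YOUR_ANTHROPIC_API_KEY")
17 | 
18 | def generate_answer(question: str, chunks: list[str]) -> str:
19 |     # Ground the model in the chunks Ragie retrieved for this question
20 |     prompt = (
21 |         "Answer the question using only the context below.\n\n"
22 |         + "\n---\n".join(chunks)
23 |         + f"\n\nQuestion: {question}"
24 |     )
25 |     message = client.messages.create(
26 |         model="claude-3-5-sonnet-20241022",  # model id is an assumption
27 |         max_tokens=512,
28 |         messages=[{"role": "user", "content": prompt}],
29 |     )
30 |     return message.content[0].text
31 | ```
32 | 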
11 | ### How to get Started?
12 |
13 | 1. Clone the GitHub repository
14 | ```bash
15 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
16 | cd awesome-llm-apps/rag_tutorials/rag-as-a-service
17 | ```
18 |
19 | 2. Install the required dependencies:
20 |
21 | ```bash
22 | pip install -r requirements.txt
23 | ```
24 |
25 | 3. Get your Anthropic and Ragie API keys
26 | 
27 | - Sign up for an [Anthropic account](https://console.anthropic.com/) and get your API key
28 | - Sign up for a [Ragie account](https://www.ragie.ai/) and get your API key
29 |
30 | 4. Run the Streamlit app
31 | ```bash
32 | streamlit run rag_app.py
33 | ```
--------------------------------------------------------------------------------
/rag_tutorials/rag-as-a-service/requirements.txt:
--------------------------------------------------------------------------------
1 | streamlit
2 | anthropic
3 | requests
4 |
--------------------------------------------------------------------------------
/rag_tutorials/rag_agent_cohere/README.md:
--------------------------------------------------------------------------------
1 | # RAG Agent with Cohere ⌘R
2 |
3 | An agentic RAG system built with Cohere's new Command-r7b-12-2024 model, Qdrant for vector storage, LangChain for RAG, and LangGraph for orchestration. Users can upload documents, ask questions about them, and get AI-powered responses, with fallback to web search when needed.
4 |
5 | ## Features
6 |
7 | - **Document Processing**
8 | - PDF document upload and processing
9 | - Automatic text chunking and embedding
10 | - Vector storage in Qdrant cloud
11 |
12 | - **Intelligent Querying**
13 | - RAG-based document retrieval
14 | - Similarity search with threshold filtering
15 | - Automatic fallback to web search when no relevant documents are found
16 | - Source attribution for answers
17 |
18 | - **Advanced Capabilities**
19 | - DuckDuckGo web search integration
20 | - LangGraph agent for web research
21 | - Context-aware response generation
22 | - Long answer summarization
23 |
24 | - **Model Specific Features**
25 | - Command-r7b-12-2024 model for chat and RAG
26 | - Cohere embed-english-v3.0 model for embeddings
27 | - `create_react_agent` function from LangGraph
28 | - `DuckDuckGoSearchRun` tool for web search (both combined in the sketch after this list)
29 |
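30 | How these pieces fit together can be sketched in a few lines. This is a minimal illustration under the pinned versions, not the app's exact code; the question and key placeholder are assumptions.
31 | 
32 | ```python
33 | from langchain_cohere import ChatCohere
34 | from langchain_community.tools import DuckDuckGoSearchRun
35 | from langgraph.prebuilt import create_react_agent
36 | 
37 | llm = ChatCohere(model="command-r7b-12-2024", cohere_api_key="YOUR_COHERE_API_KEY")
38 | search = DuckDuckGoSearchRun()  # free web search, no API key needed
39 | 
40 | # ReAct-style agent that may call the search tool while reasoning
41 | agent = create_react_agent(llm, tools=[search])
42 | 
43 | result = agent.invoke({"messages": [("user", "What is Command R7B best suited for?")]})
44 | print(result["messages"][-1].content)
45 | ```
46 | 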
30 | ## Prerequisites
31 |
32 | ### 1. Cohere API Key
33 | 1. Go to [Cohere Platform](https://dashboard.cohere.ai/api-keys)
34 | 2. Sign up or log in to your account
35 | 3. Navigate to API Keys section
36 | 4. Create a new API key
37 |
38 | ### 2. Qdrant Cloud Setup
39 | 1. Visit [Qdrant Cloud](https://cloud.qdrant.io/)
40 | 2. Create an account or sign in
41 | 3. Create a new cluster
42 | 4. Get your credentials:
43 | - Qdrant API Key: Found in API Keys section
44 | - Qdrant URL: Your cluster URL (format: `https://xxx-xxx.aws.cloud.qdrant.io`)
45 |
46 |
47 | ## How to Run
48 |
49 | 1. Clone the repository:
50 | ```bash
51 | git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
52 | cd rag_tutorials/rag_agent_cohere
53 | ```
54 |
55 | 2. Install dependencies:
56 | ```bash
57 | pip install -r requirements.txt
58 | ```
59 | 
60 | 3. Run the application:
61 | ```bash
62 | streamlit run rag_agent_cohere.py
63 | ```
64 | 
--------------------------------------------------------------------------------
/rag_tutorials/rag_agent_cohere/requirements.txt:
--------------------------------------------------------------------------------
1 | langchain==0.3.12
2 | langchain-community==0.3.12
3 | langchain-core==0.3.25
4 | langchain-cohere==0.3.2
5 | langchain-qdrant==0.2.0
6 | cohere==5.11.4
7 | qdrant-client==1.12.1
8 | duckduckgo-search==6.4.1
9 | streamlit==1.40.2
10 | tenacity==9.0.0
11 | typing-extensions==4.12.2
12 | pydantic==2.9.2
13 | pydantic-core==2.23.4
14 | langgraph==0.2.53
--------------------------------------------------------------------------------
/rag_tutorials/rag_chain/README.md:
--------------------------------------------------------------------------------
1 | # PharmaQuery
2 |
3 | ## Overview
4 | PharmaQuery is an advanced Pharmaceutical Insight Retrieval System designed to help users gain meaningful insights from research papers and documents in the pharmaceutical domain.
5 |
6 | ## Demo
7 | https://github.com/user-attachments/assets/c12ee305-86fe-4f71-9219-57c7f438f291
8 |
9 | ## Features
10 | - **Natural Language Querying**: Ask complex questions about the pharmaceutical industry and get concise, accurate answers.
11 | - **Custom Database**: Upload your own research documents to enhance the retrieval system's knowledge base.
12 | - **Similarity Search**: Retrieves the most relevant documents for your query using AI embeddings.
13 | - **Streamlit Interface**: User-friendly interface for queries and document uploads.
14 |
15 | ## Technologies Used
16 | - **Programming Language**: [Python 3.10+](https://www.python.org/downloads/release/python-31011/)
17 | - **Framework**: [LangChain](https://www.langchain.com/)
18 | - **Database**: [ChromaDB](https://www.trychroma.com/)
19 | - **Models**:
20 | - Embeddings: [Google Gemini API (embedding-001)](https://ai.google.dev/gemini-api/docs/embeddings)
21 | - Chat: [Google Gemini API (gemini-1.5-pro)](https://ai.google.dev/gemini-api/docs/models/gemini#gemini-1.5-pro)
22 | - **PDF Processing**: [PyPDFLoader](https://python.langchain.com/docs/integrations/document_loaders/pypdfloader/)
23 | - **Document Splitter**: [SentenceTransformersTokenTextSplitter](https://python.langchain.com/api_reference/text_splitters/sentence_transformers/langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter.html)
24 |
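25 | A condensed sketch of how these pieces combine for ingestion and retrieval is shown below. It assumes the `langchain-google-genai` package is available; the file path, chunk size, and persistence directory are illustrative, not the app's exact settings.
26 | 
27 | ```python
28 | from langchain_community.document_loaders import PyPDFLoader
29 | from langchain_community.vectorstores import Chroma
30 | from langchain_google_genai import GoogleGenerativeAIEmbeddings
31 | from langchain_text_splitters import SentenceTransformersTokenTextSplitter
32 | 
33 | # Load a research paper and split it into token-bounded chunks
34 | docs = PyPDFLoader("paper.pdf").load()
35 | chunks = SentenceTransformersTokenTextSplitter(tokens_per_chunk=256).split_documents(docs)
36 | 
37 | # Embed with Gemini and persist the vectors in ChromaDB
38 | embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001", google_api_key="YOUR_KEY")
39 | store = Chroma.from_documents(chunks, embeddings, persist_directory="chroma_db")
40 | 
41 | # Similarity search returns the chunks most relevant to a query
42 | hits = store.similarity_search("mechanism of action of metformin", k=4)
43 | print(hits[0].page_content)
44 | ```
45 | 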
25 | ## Requirements
26 | 1. **Install Dependencies**:
27 | ```bash
28 | pip install -r requirements.txt
29 | ```
30 |
31 | 2. **Run the Application**:
32 | ```bash
33 | streamlit run app.py
34 | ```
35 |
36 | 3. **Use the Application**:
37 | - Paste your Google API Key in the sidebar.
38 | - Enter your query in the main interface.
39 | - Optionally, upload research papers in the sidebar to enhance the database.
40 |