├── .env
├── LICENSE
├── README.md
├── app.py
├── requirements.txt
└── templates
└── index.html
/.env:
--------------------------------------------------------------------------------
1 | # LangChain API key used for LangSmith tracing (loaded by app.py)
2 | LANGCHAIN_API_KEY=""
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 Rajveer Singh
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | # Chatbot with LLama3
3 |
4 | This is a simple chatbot application built with Meta's LLama3 model, which runs locally through Ollama. The chatbot is served with Flask and can be accessed via a web interface.
5 |
6 |
7 |
8 | ## Features
9 |
10 | - Uses Meta's LLama3 model, served locally through Ollama and orchestrated with LangChain, for natural language processing.
11 | - Utilizes dotenv for managing environment variables.
12 | - Implements a ChatPromptTemplate for defining user and system messages.
13 | - Supports querying the chatbot with user input.
14 | - Web-based interface for easy interaction.
15 | - Uses Bootstrap for styling.
16 |
17 | ## Prerequisites
18 |
19 | - Install [Ollama](https://ollama.com/download) on your system.
20 | - After installing Ollama, download the llama3 model by running the command below:
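21 |
22 | ```bash
23 | ollama pull llama3
24 | ```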
21 |
22 | ## Getting Started
23 |
24 | ### Installation
25 |
26 | 1. Clone the repository:
27 |
28 | ```bash
29 | git clone https://github.com/rajveersinghcse/Llama3-Chatbot.git
30 | ```
31 |
32 | 2. Navigate to the project directory:
33 |
34 | ```bash
35 | cd Llama3-Chatbot
36 | ```
37 |
38 | 3. Install the required dependencies:
39 |
40 | ```bash
41 | pip install -r requirements.txt
42 | ```
43 |
44 | 4. Paste your LangChain API key into the `.env` file.
45 |
46 | 5. Run this command:
47 |
48 | ```bash
49 | flask --app app.py run
50 | ```
51 |
52 | 6. Open your browser and go to `http://localhost:5000` to access the chatbot.
53 |
54 | ## Usage
55 |
56 | - Enter your query in the input field and click "Submit."
57 | - The chatbot will process your query and respond.
58 |
59 | ## Customization
60 |
61 | You can customize the chatbot's behavior by modifying the `initialize_chatbot()` function in `app.py`. For example, you can change the prompts or adjust the LLama3 model settings.
62 |
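63 | A minimal sketch of a tweaked `initialize_chatbot()` might look like this (the system prompt text and the `temperature` value are purely illustrative):
64 |
65 | ```python
66 | from langchain_community.llms import Ollama
67 | from langchain_core.output_parsers import StrOutputParser
68 | from langchain_core.prompts import ChatPromptTemplate
69 |
70 | def initialize_chatbot():
71 |     # Swap in a different system persona for the assistant
72 |     prompt = ChatPromptTemplate.from_messages(
73 |         [
74 |             ("system", "You are a concise assistant. Answer in at most two sentences."),
75 |             ("user", "Question: {question}")
76 |         ]
77 |     )
78 |     # A lower temperature makes the llama3 responses more deterministic
79 |     llm = Ollama(model="llama3", temperature=0.2)
80 |     return prompt | llm | StrOutputParser()
81 | ```
82 |
83 | Restart the Flask server after editing `app.py` for the changes to take effect.
84 |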
63 | ## License
64 |
65 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
66 |
67 | ## Acknowledgments
68 |
69 | - Meta for providing the LLama3 model.
70 | - Bootstrap for the frontend styling.
71 |
--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
1 | import os
2 | from dotenv import load_dotenv
3 | from langchain_community.llms import Ollama
4 | from langchain_core.prompts import ChatPromptTemplate
5 | from langchain_core.output_parsers import StrOutputParser
6 | from flask import Flask, request, render_template
7 |
8 | # Load environment variables from .env file
9 | load_dotenv()
10 |
11 | # Set environment variables for langsmith tracking
12 | os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY", "")
13 | os.environ["LANGCHAIN_TRACING_V2"] = "true"
14 |
15 | # Create Flask app
16 | app = Flask(__name__)
17 |
18 | # Define chatbot initialization
19 | def initialize_chatbot():
20 |     # Create the chatbot prompt with a system instruction and a user question slot
21 |     prompt = ChatPromptTemplate.from_messages(
22 |         [
23 |             ("system", "Provide responses to the user's queries"),
24 |             ("user", "Question: {question}")
25 |         ]
26 |     )
27 |
28 |     # Initialize the local Ollama LLM and the output parser
29 |     llm = Ollama(model="llama3")
30 |     output_parser = StrOutputParser()
31 |
32 |     # Create chain: prompt -> llm -> parser
33 |     chain = prompt | llm | output_parser
34 |     return chain
35 |
36 | # Initialize chatbot
37 | chain = initialize_chatbot()
38 |
39 | # Define route for home page
40 | @app.route('/', methods=['GET', 'POST'])
41 | def home():
42 |     if request.method == 'POST':
43 |         input_text = request.form['input_text']
44 |         if input_text:
45 |             output = chain.invoke({'question': input_text})
46 |             return render_template('index.html', input_text=input_text, output=output)
47 |     return render_template('index.html')
48 |
49 | if __name__ == '__main__':
50 |     app.run(debug=True)
51 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | langchain-openai
2 | langchain-core
3 | python-dotenv
4 | streamlit
5 | langchain-community
6 | flask
7 |
--------------------------------------------------------------------------------
/templates/index.html:
--------------------------------------------------------------------------------
1 | <!DOCTYPE html>
2 | <html lang="en">
3 | <head>
4 |     <meta charset="UTF-8">
5 |     <meta name="viewport" content="width=device-width, initial-scale=1">
6 |     <!-- Bootstrap stylesheet (standard jsDelivr CDN build) for styling -->
7 |     <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css" rel="stylesheet">
8 |     <title>Chatbot with LLama3</title>
9 | </head>
10 | <body class="bg-light">
11 |     <div class="container py-5">
12 |         <h1 class="mb-4">Chatbot with LLama3</h1>
13 |         <!-- Query form: posts "input_text" to the Flask route defined in app.py -->
14 |         <form method="POST" action="/">
15 |             <div class="mb-3">
16 |                 <input type="text" class="form-control" name="input_text" placeholder="Enter your query" value="{{ input_text|default('') }}">
17 |             </div>
18 |             <button type="submit" class="btn btn-primary">Submit</button>
19 |         </form>
20 |         <!-- Model response rendered by render_template() in app.py -->
21 |         {% if output %}
22 |         <div class="card mt-4">
23 |             <div class="card-body">{{ output }}</div>
24 |         </div>
25 |         {% endif %}
26 |     </div>
27 | </body>
28 | </html>