├── .gitignore
├── requirements.txt
├── env.example
├── res
│   └── screenshot.png
├── llamapedia.bat
├── src
│   ├── config.py
│   ├── main.py
│   ├── ollama_service.py
│   └── ui.py
├── llamapedia.desktop
├── system_prompt.txt
└── README.md

/.gitignore:
--------------------------------------------------------------------------------
.env
__pycache__
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
streamlit==1.22.0
python-dotenv==1.0.0
requests==2.31.0
--------------------------------------------------------------------------------
/env.example:
--------------------------------------------------------------------------------
OLLAMA_API_URL=http://localhost:11434/api/chat
OLLAMA_MODEL=llama3.1:8b
--------------------------------------------------------------------------------
/res/screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tcsenpai/llamapedia/HEAD/res/screenshot.png
--------------------------------------------------------------------------------
/llamapedia.bat:
--------------------------------------------------------------------------------
@echo off
cd /d "%~dp0"
streamlit run src/main.py
echo.
echo Press any key to close this window...
pause >nul
--------------------------------------------------------------------------------
/src/config.py:
--------------------------------------------------------------------------------
import os

from dotenv import load_dotenv

load_dotenv()

# Fall back to the values from env.example if no .env is present,
# so a missing config does not crash the request code with a None URL.
OLLAMA_API_URL = os.getenv("OLLAMA_API_URL", "http://localhost:11434/api/chat")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama3.1:8b")

with open("system_prompt.txt", "r") as f:
    SYSTEM_PROMPT = f.read().strip()
--------------------------------------------------------------------------------
/llamapedia.desktop:
--------------------------------------------------------------------------------
[Desktop Entry]
Version=1.0
Type=Application
Name=Llamapedia
Comment=Your friendly llama-themed encyclopedia
Exec=bash -c 'cd "$(dirname "%k")/" && pwd && streamlit run src/main.py; echo "Press Enter to close this window..."; read'
Icon=icon.png
Terminal=true
Categories=Education;Reference;
--------------------------------------------------------------------------------
/src/main.py:
--------------------------------------------------------------------------------
import streamlit as st

from ui import render_wiki_interface, display_response
from ollama_service import chat_with_ollama


def main():
    st.set_page_config(page_title="Llamapedia", page_icon="📚", layout="wide")

    user_query = render_wiki_interface()

    if user_query:
        with st.spinner("Generating response..."):
            response = chat_with_ollama(user_query)
            display_response(user_query, response)


if __name__ == "__main__":
    main()
--------------------------------------------------------------------------------
/system_prompt.txt:
--------------------------------------------------------------------------------
You are a digital encyclopedia like Wikipedia.
You will understand the user query and deliver a Wikipedia-like page about the subject or topic requested by the user.
You cannot add any other elements to the response.
Your aim is to provide a full-length, informative Wikipedia-like page with all the details.
You will mimic the style of Wikipedia, including the use of headings, subheadings, and concise paragraphs.
Before you start, think step by step (reasoning) about the task and the response.
Also double-check your response for any errors or inconsistencies.
--------------------------------------------------------------------------------
/src/ollama_service.py:
--------------------------------------------------------------------------------
import requests

from config import OLLAMA_API_URL, OLLAMA_MODEL, SYSTEM_PROMPT


def chat_with_ollama(user_message):
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

    try:
        response = requests.post(
            OLLAMA_API_URL,
            json={
                "model": OLLAMA_MODEL,
                "messages": messages,
                "stream": False,
                # Model parameters must be nested under "options";
                # the Ollama chat API silently ignores them at the top level.
                "options": {
                    "temperature": 0.8,
                    "num_ctx": 8192,
                    # "mirostat": 1,
                    "num_predict": -2,
                },
            },
            timeout=600,  # local generation can take a while
        )
        response.raise_for_status()
        return response.json()["message"]["content"]
    except requests.RequestException as e:
        print(f"Error calling Ollama API: {e}")
        return "An error occurred while processing your request."
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Llamapedia

[](https://justforfunnoreally.dev)



This Streamlit app provides a Wikipedia-like interface for querying an Ollama language model. Users can input their queries, and the app will generate informative responses using the configured Ollama model.

## Setup

1. Install the required dependencies:

```
pip install -r requirements.txt
```

2. Configure the `.env` file with your Ollama API URL and model name (copy the `env.example` file to `.env` and modify it).

3. Run the Streamlit app:

```
streamlit run src/main.py
```

4. Open your web browser and navigate to the URL provided by Streamlit (usually `http://localhost:8501`).

## Tips

- You can use the `llamapedia.desktop` or `llamapedia.bat` files to run the app. They provide a terminal window for the app and an easy way to launch it with a double-click.

## Usage

1. Enter your query in the text area provided.
2. Click the "Submit" button to generate a response.
3. The app will display the Ollama-generated response in a Wikipedia-like format.

## Configuration

- Modify the `system_prompt.txt` file to change the system prompt sent to the Ollama model.
- Update the `.env` file to change the Ollama API URL or model name.
--------------------------------------------------------------------------------
/src/ui.py:
--------------------------------------------------------------------------------
import streamlit as st


def set_wiki_style():
    st.markdown(
        """
        """,
        unsafe_allow_html=True,
    )


def render_wiki_interface():
    set_wiki_style()

    # Create a container for the top bar
    top_bar = st.container()

    with top_bar:
        # Center the Llamapedia banner
        st.markdown(
            "
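The chat request that `src/ollama_service.py` sends can be reproduced outside the app, which helps when debugging the model or the system prompt. The sketch below is illustrative, not repo code: `build_payload` and `ask` are hypothetical helpers, and the example assumes a local Ollama server on the default port with `llama3.1:8b` already pulled.

```python
def build_payload(model: str, system_prompt: str, user_message: str) -> dict:
    """Mirror the JSON body Llamapedia posts to Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "stream": False,  # ask for one complete reply instead of chunks
        # Sampling parameters belong under "options" for the chat endpoint.
        "options": {"temperature": 0.8, "num_ctx": 8192},
    }


def ask(url: str, payload: dict) -> str:
    """POST the payload and return the assistant's reply text."""
    import requests  # imported lazily so build_payload stays dependency-free

    resp = requests.post(url, json=payload, timeout=600)
    resp.raise_for_status()
    return resp.json()["message"]["content"]
```

With the server running, `ask("http://localhost:11434/api/chat", build_payload("llama3.1:8b", open("system_prompt.txt").read(), "Llamas"))` returns the generated page; if it hangs, check that `ollama serve` is up and the model name matches `OLLAMA_MODEL`.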
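`src/config.py` opens `system_prompt.txt` relative to the current working directory, so the prompt is only found when the app is launched from the repo root (which the `.bat` and `.desktop` launchers arrange). A path-independent variant is sketched below under that assumption; `load_system_prompt` is a hypothetical helper for illustration, not code from this repo.

```python
from pathlib import Path


def load_system_prompt(config_file: str) -> str:
    """Resolve system_prompt.txt relative to src/config.py rather than the CWD.

    Pass __file__ from inside config.py; the prompt file lives one level
    above the src/ directory in this repo's layout.
    """
    repo_root = Path(config_file).resolve().parent.parent
    return (repo_root / "system_prompt.txt").read_text(encoding="utf-8").strip()
```

Inside `config.py` this would replace the `with open(...)` block with `SYSTEM_PROMPT = load_system_prompt(__file__)`, making the app launchable from any directory.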