├── .gitignore
├── .github
│   ├── ISSUE_TEMPLATE
│   │   ├── config.yml
│   │   ├── bug_report.yml
│   │   └── feature_request.yml
│   └── pull_request_template.md
├── __pycache__
│   ├── app.cpython-311.pyc
│   ├── emotion_advanced.cpython-310.pyc
│   ├── emotion_advanced.cpython-311.pyc
│   ├── emotion_advanced.cpython-312.pyc
│   └── emotion_advanced.cpython-313.pyc
├── .vscode
│   └── settings.json
├── .streamlit
│   └── config.toml
├── Public
│   └── Images
│       └── WhatsApp Image 2024-11-18 at 11.40.34_076eab8e.jpg
├── requirements.txt
├── LICENSE
├── CODE_OF_CONDUCT.md
├── README.md
├── emotion_advanced.py
└── app.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

.env
venv/

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/config.yml:
--------------------------------------------------------------------------------

blank_issues_enabled: true

--------------------------------------------------------------------------------
/__pycache__/app.cpython-311.pyc:
--------------------------------------------------------------------------------

https://raw.githubusercontent.com/satvik091/WisdomWeaver/HEAD/__pycache__/app.cpython-311.pyc

--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------

{
    "python-envs.defaultEnvManager": "ms-python.python:system",
    "python-envs.pythonProjects": []
}

--------------------------------------------------------------------------------
/__pycache__/emotion_advanced.cpython-310.pyc:
--------------------------------------------------------------------------------

https://raw.githubusercontent.com/satvik091/WisdomWeaver/HEAD/__pycache__/emotion_advanced.cpython-310.pyc

--------------------------------------------------------------------------------
/__pycache__/emotion_advanced.cpython-311.pyc:
--------------------------------------------------------------------------------

https://raw.githubusercontent.com/satvik091/WisdomWeaver/HEAD/__pycache__/emotion_advanced.cpython-311.pyc

--------------------------------------------------------------------------------
/__pycache__/emotion_advanced.cpython-312.pyc:
--------------------------------------------------------------------------------

https://raw.githubusercontent.com/satvik091/WisdomWeaver/HEAD/__pycache__/emotion_advanced.cpython-312.pyc

--------------------------------------------------------------------------------
/__pycache__/emotion_advanced.cpython-313.pyc:
--------------------------------------------------------------------------------

https://raw.githubusercontent.com/satvik091/WisdomWeaver/HEAD/__pycache__/emotion_advanced.cpython-313.pyc

--------------------------------------------------------------------------------
/.streamlit/config.toml:
--------------------------------------------------------------------------------

[theme]
primaryColor="#F63366"
backgroundColor="#0E1117"
secondaryBackgroundColor="#262730"
textColor="#FAFAFA"
font="sans serif"

--------------------------------------------------------------------------------
/Public/Images/WhatsApp Image 2024-11-18 at 11.40.34_076eab8e.jpg:
--------------------------------------------------------------------------------

https://raw.githubusercontent.com/satvik091/WisdomWeaver/HEAD/Public/Images/WhatsApp Image 2024-11-18 at 11.40.34_076eab8e.jpg

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.yml:
--------------------------------------------------------------------------------

name: Bug report
description: Create a report to help us improve
labels: bug
body:
  - type: textarea
    attributes:
      label: Describe the bug
      description: Describe the bug.
      placeholder: >
        A clear and concise description of what the bug is.
    validations:
      required: true
  - type: textarea
    attributes:
      label: To Reproduce
      placeholder: >
        Steps to reproduce the behavior:
  - type: textarea
    attributes:
      label: Expected behavior
      placeholder: >
        A clear and concise description of what you expected to happen.
  - type: textarea
    attributes:
      label: Additional context
      placeholder: >
        Add any other context about the problem here.

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

streamlit>=1.47.1
streamlit-webrtc>=0.63.3
deepface>=0.0.93
opencv-python>=4.5.5.64
numpy>=1.26.0,<2.0.0
pandas>=2.3.1
pillow>=11.3.0
mtcnn>=1.0.0
retina-face>=0.0.17

tensorflow==2.12.0
keras==2.12.0
tf-keras>=2.17.0  # Compatibility shim

google-generativeai>=0.8.5
google-ai-generativelanguage>=0.6.15
google-api-python-client>=2.177.0
google-api-core>=2.25.1
google-auth>=2.40.3
protobuf>=5.29.5

python-dotenv>=1.1.1
requests>=2.32.4
plotly>=5.18.0

av>=14.4.0
aiortc>=1.13.0
pyee>=13.0.0

flask>=3.1.1
flask-cors>=6.0.1
gunicorn>=23.0.0

absl-py>=2.3.1
h5py>=3.14.0
google-auth-httplib2>=0.2.0
pydeck>=0.9.1
pyOpenSSL>=25.1.0
typing_extensions>=4.14.1
markdown>=3.8.2
six>=1.17.0
certifi>=2025.7.14
idna>=3.10
urllib3>=2.5.0
chardet>=5.2.0
tqdm>=4.67.1
watchdog>=6.0.0

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------

MIT License

Copyright (c) 2025 satvik091

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.yml:
--------------------------------------------------------------------------------

name: Feature request
description: Suggest an idea for this project
labels: enhancement
body:
  - type: textarea
    attributes:
      label: Is your feature request related to a problem or challenge?
      description: Please describe what you are trying to do.
      placeholder: >
        A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
        (This section helps developers understand the context and *why* for this feature, in addition to the *what*)
  - type: textarea
    attributes:
      label: Describe the solution you'd like
      placeholder: >
        A clear and concise description of what you want to happen.
  - type: textarea
    attributes:
      label: Describe alternatives you've considered
      placeholder: >
        A clear and concise description of any alternative solutions or features you've considered.
  - type: textarea
    attributes:
      label: Additional context
      placeholder: >
        Add any other context or screenshots about the feature request here.

--------------------------------------------------------------------------------
/.github/pull_request_template.md:
--------------------------------------------------------------------------------

## Which issue does this PR close?

- Closes #.

## Rationale for this change

## What changes are included in this PR?

## Are these changes tested?

## Are there any user-facing changes?

--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------

**🌟 WisdomWeaver Code of Conduct - GSSoC'25**

**Fostering Respect, Inclusion & Collaboration in Open Source**

---

**📜 Our Pledge**

WisdomWeaver is committed to creating a welcoming, harassment-free environment for all contributors, regardless of gender, experience, background, or identity. We value kindness, constructive feedback, and open collaboration to build a thriving open-source community.

---

**✨ Community Values**

We grow stronger when we:
- ✅ Empower – Encourage learners and acknowledge all contributions.
- ✅ Respect – Engage with patience, empathy, and professionalism.
- ✅ Include – Celebrate diverse perspectives and backgrounds.
- ✅ Collaborate – Share knowledge openly and credit others’ work.
- ✅ Innovate – Foster creativity while maintaining ethical standards.

---

**🚫 Unacceptable Behavior**

The following will not be tolerated:
- 🔴 Harassment – Personal attacks, trolling, or unwelcome DMs.
- 🔴 Discrimination – Bias based on identity, experience, or background.
- 🔴 Disrespect – Dismissive comments, gatekeeping, or elitism.
- 🔴 Spam/Plagiarism – Off-topic promotions or uncredited content.
- 🔴 Toxicity – Aggressive, inflammatory, or unconstructive behavior.

---

**🧭 Where This Applies**

This Code of Conduct applies to all WisdomWeaver spaces, including:
- GitHub (https://github.com/satvik091/WisdomWeaver)
- Community platform (Discord)
- Project-related social media interactions
- Events, workshops, and virtual meetups

---

**🛡️ Reporting Violations**

If you witness or experience behavior that violates this Code of Conduct, please contact a Project Admin or the Mentor. We are committed to addressing all reports with discretion and care.

---

**⚖️ Enforcement**

Violations may result in:
- ⚠️ Warning – For minor, first-time offenses.
- ⏸️ Temporary Ban – For repeated or moderate violations.
- 🚫 Permanent Removal – For severe or malicious behavior.

---

**🌱 Our Vision**

WisdomWeaver is a space for growth, mentorship, and ethical open-source contribution. Let’s build a legacy of collaboration over competition!

---

**📜 Attribution**

This Code of Conduct is adapted from the [Contributor Covenant (v3.0) (CC BY 4.0)](https://www.contributor-covenant.org/version/3/0/code_of_conduct/) with WisdomWeaver-specific modifications for GSSoC'25. We honor India's knowledge-sharing legacy while fostering ethical open-source collaboration.

📜 Inspired by: Contributor Covenant & GSSoC'25 Guidelines

---

**Let’s code with kindness! 💙🚀**

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

# 🕉️ Bhagavad Gita Wisdom Weaver

A real-time, AI-powered chatbot that provides **mental health support and spiritual guidance** using teachings from the **Bhagavad Gita**. Ask life questions and receive structured answers powered by the **Google Gemini API**, displayed in a clean and friendly **Streamlit interface**.

---
## ❓ Why Use WisdomWeaver?

In today’s fast-paced world, we often face stress, confusion, and emotional challenges. **WisdomWeaver** bridges ancient spiritual wisdom with modern AI to help you:

- 🧘‍♀️ Reflect deeply on life problems with timeless Gita teachings.
- 💡 Get practical and philosophical advice tailored to your questions.
- 🌿 Improve mental well-being with spiritually grounded responses.
- 🔄 Understand the Gita verse-by-verse with contextual insights.

Whether you're spiritually inclined, curious about the Gita, or just looking for calm guidance — this tool is made for **you**.

---

## 📽️ Demo

https://bkins-wisdomweaver.streamlit.app/

image

> 🙏 Ask any question like: *"Mujhe anxiety ho rahi hai, kya karun?"*
> 📜 Get a reply from the Gita like:
> **Chapter 2, Verse 47**
> *Karmanye vadhikaraste ma phaleshu kadachana...*
> _"Do your duty without attachment to outcomes."_
> 💡 With Explanation + Real-life Application!

---

## 🧠 Features

- 🧘‍♂️ **Ask Anything**: Get spiritual & practical guidance based on the Bhagavad Gita.
- 🔍 **Chapter/Verse Browser**: View any shloka translation chapter-wise.
- 🧾 **Structured Response**: AI responds with:
  - Chapter & Verse
  - Sanskrit Shloka
  - Translation
  - Explanation
  - Modern Life Application
- 💬 **Chat History**: See your past questions in the sidebar.
- 🌐 **Streamlit UI**: Responsive, clean, and user-friendly.
- ⚡ **Powered by Gemini AI**: Uses Google’s Gemini 2.0 Flash model.

---

## 🛠️ Tech Stack

| Feature       | Tech Used           |
|---------------|---------------------|
| UI/Frontend   | Streamlit           |
| AI Backend    | Google Gemini API   |
| Language      | Python              |
| Data Handling | Pandas              |
| Image         | PIL (Pillow)        |
| Async Support | asyncio             |
| Data Source   | Bhagavad Gita CSV   |

---

## ⚙️ Setup Instructions

### 📦 Prerequisites

- Python 3.9 or higher
- pip (Python package installer)
- Google Gemini API Key ([Get one here](https://aistudio.google.com/app/apikey))

### 🔑 Generating Your Google Gemini API Key

To use the Google Gemini API, follow these steps to generate your API key:

1. Go to the [Google AI Studio](https://makersuite.google.com/app) website.
2. Sign in with your Google account.
3. Click on **"Create API Key in new project"** or select an existing project to generate a new key.
4. Copy the generated API key.

📌 **Note:** You’ll need this key for authentication in the next step.

### 🚀 Installation

1. **Clone the repository**
   ```bash
   git clone https://github.com/satvik091/WisdomWeaver.git
   cd WisdomWeaver
   ```

2. **Create a virtual environment (recommended)**
   ```bash
   # On Windows
   python -m venv venv
   venv\Scripts\activate

   # On macOS/Linux
   python3 -m venv venv
   source venv/bin/activate
   ```

3. **Install required Python packages**
   ```bash
   pip install -r requirements.txt
   ```

### 🔑 API Key Configuration

To securely use your Google Gemini API key in the **WisdomWeaver** project:

#### 1. Create a `.env` file
In the root directory of your project (where `app.py` and `requirements.txt` are located), create a new file named `.env`. Alternatively, rename the provided `.env.example` to `.env`.

#### 2. Add your API key to `.env`
Open the `.env` file and add the following line (replace `your_api_key_here` with the actual key you generated earlier):

```env
GOOGLE_API_KEY=your_api_key_here
```

#### 🔔 Important Notes

- 🔒 **Never share your API key publicly.**
- ✅ **Make sure your `.env` file is excluded from version control** (e.g., Git).
- 📁 **The `.gitignore` file should already contain an entry for `.env`.** Double-check if you're unsure.

---

### ▶️ Run the Application

1. **Make sure your virtual environment is activated**
   ```bash
   # On Windows
   venv\Scripts\activate

   # On macOS/Linux
   source venv/bin/activate
   ```

2. **Run the Streamlit app**
   ```bash
   streamlit run app.py
   ```

### 🌐 Open in Browser

Once the app starts, **WisdomWeaver** will automatically open in your default web browser at:

[http://localhost:8501](http://localhost:8501)

If it doesn’t open automatically, simply copy and paste the URL into your browser.
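Before launching the full app, you can sanity-check that your key is discoverable. The sketch below is a stdlib-only stand-in for what `python-dotenv`'s `load_dotenv` does when the app starts; the `load_env` helper and its simple `KEY=value` parsing are illustrative, not part of the repo:

```python
import os
import pathlib

def load_env(path=".env"):
    """Minimal stand-in for python-dotenv's load_dotenv: parse KEY=value
    lines, skip blanks and comments, and export the result to os.environ.
    (Illustrative helper - the app itself uses python-dotenv.)"""
    loaded = {}
    env_file = pathlib.Path(path)
    if env_file.exists():
        for line in env_file.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded

# Quick sanity check before starting the app
if "GOOGLE_API_KEY" not in load_env():
    print("GOOGLE_API_KEY missing - check your .env file.")
```

If the key is reported missing, re-check step 2 of the API Key Configuration above.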
### 🔧 Troubleshooting

**Issue: Module not found errors**
- Make sure your virtual environment is activated
- Run `pip install -r requirements.txt` again

**Issue: API key not working**
- Verify your API key in the `.env` file
- Make sure the `.env` file is in the root directory
- Check that your Google AI API key is valid

**Issue: Streamlit not starting**
- Make sure you're in the correct directory
- Try running `streamlit --version` to verify installation

---
## 📂 Folder Structure

```plaintext
WisdomWeaver/
├── app.py                     # Streamlit app file
├── emotion_advanced.py        # Webcam emotion detection
├── bhagavad_gita_verses.csv   # Bhagavad Gita verse data
├── requirements.txt           # Python dependencies
├── README.md                  # You're here!
├── .env.example               # Sample environment config
└── .streamlit/                # Streamlit config folder
```

---
## 💻 Sample Question

**Q:** *Zindagi ka purpose kya hai?*

**Output:**

- 📖 **Chapter 3, Verse 30**
- 🕉️ *Mayi sarvani karmani sannyasyadhyatmacetasa...*

**Translation:**
*Dedicate all actions to me with full awareness of the Self.*

**Explanation:**
Lord Krishna advises detachment and devotion in duty.

**Application:**
Focus on sincere efforts, not selfish rewards.

---
## 🤝 Contributing

We welcome contributions as part of **GirlScript Summer of Code 2025 (GSSoC'25)** and beyond!

### 📌 Steps to Contribute

1. **Fork** this repo 🍴
2. **Create a branch**
   ```bash
   git checkout -b feat/amazing-feature
   ```
3. **Make your changes** ✨
4. **Commit your changes**
   ```bash
   git commit -m 'Add: Amazing Feature'
   ```
5. **Push to your branch**
   ```bash
   git push origin feat/amazing-feature
   ```
6. **Open a Pull Request and link the related issue**
   ```
   Closes #6
   ```

---
## 🌸 GirlScript Summer of Code 2025

This project is proudly part of **GSSoC '25**!
Thanks to the amazing open-source community, contributors, and mentors for your valuable support.

---
## 📄 License

This project is licensed under the **MIT License**.
See the [LICENSE](LICENSE) file for full details.

---

## 🙏 Acknowledgements

- 📜 **Bhagavad Gita** – Eternal source of wisdom
- 🧠 **Google Gemini API** – AI backend for responses
- 🌐 **Streamlit Team** – For the interactive app framework
- 👥 **GSSoC 2025 Community** – For mentorship and collaboration

---

## 📬 Contact

Have ideas, feedback, or just want to say hi?

- 🛠️ Open an issue in the repository
- 📧 Contact our mentor:

**Mentor**: Harmanpreet
**GitHub**: [Harman-2](https://github.com/Harman-2)

---

Thank you for visiting!
🙏

--------------------------------------------------------------------------------
/emotion_advanced.py:
--------------------------------------------------------------------------------

from deepface import DeepFace
import cv2
import numpy as np
import threading
import time
from collections import deque
import queue
import logging

class AdvancedEmotionDetector:

    def __init__(self, fallback_emotion="neutral", verbose=False):
        # verbose controls console/log output; fallback_emotion is the
        # label returned when analysis fails or no emotion qualifies
        self.verbose = verbose
        self.fallback_emotion = fallback_emotion

        # Load face detection models
        self.face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

        # Threading components
        self.emotion_queue = queue.Queue(maxsize=2)
        self.frame_queue = queue.Queue(maxsize=5)
        self.result_queue = queue.Queue(maxsize=2)

        # State variables
        self.current_emotion = "Initializing..."
        self.emotion_confidence = {}
        self.is_processing = False
        self.stop_threads = False

        # Performance settings
        self.skip_frames = 2
        self.frame_count = 0

        # Emotion smoothing (reduces flickering)
        self.emotion_history = deque(maxlen=5)

        logging.basicConfig(level=logging.INFO if verbose else logging.WARNING)

        # Start processing thread
        self.processing_thread = threading.Thread(target=self._emotion_processing_loop, daemon=True)
        self.processing_thread.start()

        if self.verbose:
            print("Advanced Emotion Detector initialized!")

    def _emotion_processing_loop(self):
        """Background thread for emotion processing"""
        while not self.stop_threads:
            try:
                if not self.frame_queue.empty():
                    face_roi = self.frame_queue.get(timeout=0.1)

                    # Process emotion
                    emotion, confidence = self._analyze_emotion_internal(face_roi)

                    if emotion:
                        # Add to history for smoothing
                        self.emotion_history.append(emotion)

                        # Get most common emotion from recent history
                        smoothed_emotion = max(set(self.emotion_history),
                                               key=self.emotion_history.count)

                        # Update results
                        if not self.result_queue.full():
                            self.result_queue.put((smoothed_emotion, confidence))

                time.sleep(0.01)  # Small delay to prevent CPU overload
            except queue.Empty:
                continue
            except Exception as e:
                if self.verbose:
                    logging.warning(f"Processing error: {e}")

    def _analyze_emotion_internal(self, face_roi):
        """Internal emotion analysis method with improved neutral handling"""
        try:
            # Ensure minimum size for better accuracy
            if face_roi.shape[0] < 100 or face_roi.shape[1] < 100:
                face_roi = cv2.resize(face_roi, (224, 224))

            # Enhance image quality
            face_roi = cv2.convertScaleAbs(face_roi, alpha=1.2, beta=10)

            analysis = DeepFace.analyze(
                face_roi,
                actions=['emotion'],
                enforce_detection=False,
                silent=True,
                detector_backend='opencv'  # Faster backend
            )

            if isinstance(analysis, list):
                emotion_data = analysis[0]['emotion']
                dominant_emotion = analysis[0]['dominant_emotion']
            else:
                emotion_data = analysis['emotion']
                dominant_emotion = analysis['dominant_emotion']

            # Smart neutral handling - show 2nd best if neutral confidence < 95%
            if dominant_emotion == 'neutral':
                neutral_confidence = emotion_data['neutral']

                if neutral_confidence < 95.0:
                    # Find alternative emotions sorted by confidence
                    sorted_emotions = sorted(emotion_data.items(), key=lambda x: x[1], reverse=True)

                    # Look for the best non-neutral emotion, using
                    # per-emotion minimum confidence thresholds
                    for emotion_name, confidence_score in sorted_emotions[1:]:  # Skip neutral (first one)
                        if emotion_name == 'angry':
                            threshold = 12.0
                        elif emotion_name == 'sad':
                            threshold = 2.5
                        else:
                            threshold = 5.0

                        if confidence_score > threshold:
                            dominant_emotion = emotion_name
                            if self.verbose:
                                logging.info(f"Neutral confidence {neutral_confidence:.1f}% < 95%, "
                                             f"using {emotion_name} ({confidence_score:.1f}%)")
                            break
                        elif self.verbose:
                            logging.info(f"Skipping {emotion_name} ({confidence_score:.1f}%) - "
                                         f"below {threshold}% threshold")

            # Special handling for happy - only display if confidence > 70%
            if dominant_emotion == 'happy':
                happy_confidence = emotion_data['happy']

                if happy_confidence <= 70.0:
                    # Find alternative emotions sorted by confidence
                    sorted_emotions = sorted(emotion_data.items(), key=lambda x: x[1], reverse=True)

                    # Look for the best non-happy emotion
                    for emotion_name, confidence_score in sorted_emotions:
                        if emotion_name != 'happy' and confidence_score > 5.0:
                            dominant_emotion = emotion_name
                            if self.verbose:
                                logging.info(f"Happy confidence {happy_confidence:.1f}% <= 70%, "
                                             f"using {emotion_name} ({confidence_score:.1f}%)")
                            break
                    else:
                        # If no alternative found, fall back to custom fallback
                        if self.verbose:
                            logging.info(f"No suitable emotion found, falling back to {self.fallback_emotion}")
                        dominant_emotion = self.fallback_emotion

            return dominant_emotion, emotion_data

        except Exception as e:
            if self.verbose:
                logging.warning(f"Error in emotion analysis: {e}, falling back to {self.fallback_emotion}")
            return self.fallback_emotion, None

    def detect_faces_optimized(self, frame):
        """Optimized face detection with single best face selection"""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Apply histogram equalization for better detection
        gray = cv2.equalizeHist(gray)

        faces = self.face_cascade.detectMultiScale(
            gray,
            scaleFactor=1.08,
            minNeighbors=8,
            minSize=(80, 80),
            maxSize=(400, 400),
            flags=cv2.CASCADE_SCALE_IMAGE
        )

        if len(faces) > 0:
            scored_faces = []
            frame_center_x, frame_center_y = frame.shape[1] // 2, frame.shape[0] // 2

            for (x, y, w, h) in faces:
                area = w * h
                face_center_x, face_center_y = x + w // 2, y + h // 2
                distance_from_center = ((face_center_x - frame_center_x) ** 2 +
                                        (face_center_y - frame_center_y) ** 2) ** 0.5
                aspect_ratio = min(w, h) / max(w, h)
                score = area * aspect_ratio * (1 / (1 + distance_from_center * 0.01))
                scored_faces.append((score, (x, y, w, h)))

            best_face = max(scored_faces, key=lambda x: x[0])[1]
            return [best_face]

        return faces

    def update_emotion_async(self, face_roi):
        """Add face region to processing queue"""
        if not self.frame_queue.full():
            self.frame_queue.put(face_roi.copy())

    def get_current_emotion(self):
        """Get the latest emotion result"""
        try:
            while not self.result_queue.empty():
                emotion, confidence = self.result_queue.get_nowait()
                self.current_emotion = emotion
                self.emotion_confidence = confidence
        except queue.Empty:
            pass

        return self.current_emotion, self.emotion_confidence

    def draw_advanced_results(self, frame, faces):
        """Draw enhanced visualization with emotion details"""
        emotion, confidence = self.get_current_emotion()

        for i, (x, y, w, h) in enumerate(faces):
            # Draw face rectangle with rounded corners effect
            cv2.rectangle(frame, (x-2, y-2), (x+w+2, y+h+2), (0, 255, 0), 3)
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 1)

            # Draw emotion text with background
            if emotion and emotion != "Initializing...":
                # Background rectangle for text
                text_size = cv2.getTextSize(f"Emotion: {emotion}",
                                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, 2)[0]
                cv2.rectangle(frame, (x, y-35), (x + text_size[0] + 10, y-5), (0, 0, 0), -1)

                # Emotion text
                cv2.putText(
                    frame,
                    f"Emotion: {emotion}",
                    (x+5, y-15),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.8,
                    (0, 255, 0),
                    2
                )

                # Confidence bar and details
                if confidence:
                    # Get current emotion confidence
                    current_conf = confidence.get(emotion, 0)

                    # Draw confidence bar
                    bar_width = int((w * current_conf) / 100)
                    cv2.rectangle(frame, (x, y+h+5), (x+bar_width, y+h+15), (0, 255, 0), -1)
                    cv2.rectangle(frame, (x, y+h+5), (x+w, y+h+15), (255, 255, 255), 1)

                    # Confidence percentage
                    cv2.putText(
                        frame,
                        f"{current_conf:.1f}%",
                        (x+w-60, y+h+25),
                        cv2.FONT_HERSHEY_SIMPLEX,
                        0.5,
                        (255, 255, 255),
                        1
                    )

                    # Show top 3 emotions with scores
                    sorted_emotions = sorted(confidence.items(), key=lambda x: x[1], reverse=True)[:3]
                    y_offset = y + h + 40

                    for idx, (emo, score) in enumerate(sorted_emotions):
                        if score > 1:  # Only show emotions with >1% confidence
                            color = (0, 255, 0) if emo == emotion else (255, 255, 255)
                            cv2.putText(
                                frame,
                                f"{idx+1}. {emo}: {score:.1f}%",
                                (x, y_offset),
                                cv2.FONT_HERSHEY_SIMPLEX,
                                0.4,
                                color,
                                1
                            )
                            y_offset += 15

    def cleanup(self):
        """Clean up resources"""
        self.stop_threads = True
        if self.processing_thread.is_alive():
            self.processing_thread.join(timeout=1)

def main_advanced():
    detector = AdvancedEmotionDetector(verbose=True)  # Enable console output

    # Initialize video capture with optimal settings
    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        print("Error: Could not open webcam.")
        return

    # Optimize camera settings
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    cap.set(cv2.CAP_PROP_FPS, 30)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)  # Reduce buffer for lower latency

    print("Advanced emotion detection started!")
    print("Controls:")
    print("- Press 'q' to quit")
    print("- Press 'r' to reset emotion history")
    print("- Press 'd' to toggle debug mode")

    fps_counter = deque(maxlen=30)
    last_time = time.time()
    debug_mode = False

    try:
        while True:
            ret, frame = cap.read()
            if not ret:
                print("Warning: Failed to grab frame.")
                break

            # Mirror effect
            frame = cv2.flip(frame, 1)

            # Detect faces
            faces = detector.detect_faces_optimized(frame)

            # Process emotion asynchronously
            if len(faces) > 0 and detector.frame_count % detector.skip_frames == 0:
                # Use the single best face (already filtered by detect_faces_optimized)
                x, y, w, h = faces[0]  # Only one face now

                # Extract face with padding
                padding = 30
                y1 = max(0, y - padding)
                y2 = min(frame.shape[0], y + h + padding)
                x1 = max(0, x - padding)
                x2 = min(frame.shape[1], x + w + padding)

                face_roi = frame[y1:y2, x1:x2]

                if face_roi.size > 0:
                    detector.update_emotion_async(face_roi)

            # Draw results
            if len(faces) > 0:
                detector.draw_advanced_results(frame, faces)
            else:
                cv2.putText(
                    frame,
                    "No face detected - Move closer to camera",
                    (50, 50),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.7,
                    (0, 0, 255),
                    2
                )

            # Calculate and display FPS (guard against a zero time delta)
            current_time = time.time()
            fps_counter.append(1.0 / max(current_time - last_time, 1e-6))
            last_time = current_time
            avg_fps = sum(fps_counter) / len(fps_counter)

            cv2.putText(
                frame,
                f"FPS: {avg_fps:.1f}",
                (frame.shape[1] - 120, 30),
                cv2.FONT_HERSHEY_SIMPLEX,
                0.6,
                (255, 255, 255),
                2
            )

            # Display queue status
            queue_status = f"Queue: {detector.frame_queue.qsize()}"
            cv2.putText(
                frame,
                queue_status,
                (10, frame.shape[0] - 20),
                cv2.FONT_HERSHEY_SIMPLEX,
                0.5,
                (255, 255, 255),
                1
            )

            cv2.imshow("Advanced Emotion Detection", frame)

            # Handle keyboard input
            key = cv2.waitKey(1) & 0xFF
            if key == ord('q'):
                break
            elif key == ord('r'):
                detector.emotion_history.clear()
                if detector.verbose:
                    print("Emotion history reset!")
            elif key == ord('d'):
                debug_mode = not debug_mode
                if detector.verbose:
                    print(f"Debug mode: {'ON' if debug_mode else 'OFF'}")

            detector.frame_count += 1

    except KeyboardInterrupt:
        if detector.verbose:
            print("\nShutting down...")

    finally:
        detector.cleanup()
        cap.release()
        cv2.destroyAllWindows()
        if detector.verbose:
            print("Advanced emotion detection stopped.")

if __name__ == "__main__":
    main_advanced()
Direct call to internal method 450 | print(f"Current emotion: {emotion}") -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | import streamlit as st 2 | 3 | st.set_page_config( 4 | page_title="Wisdom Weaver", 5 | layout="wide", 6 | initial_sidebar_state="expanded" 7 | ) 8 | import requests 9 | import pandas as pd 10 | import streamlit as st 11 | import os 12 | import google.generativeai as genai 13 | from typing import Dict, List 14 | import json 15 | import asyncio 16 | from PIL import Image 17 | import time 18 | from datetime import datetime 19 | import re 20 | from dotenv import load_dotenv 21 | import cv2 22 | import numpy as np 23 | from streamlit_webrtc import webrtc_streamer, VideoTransformerBase, RTCConfiguration 24 | from collections import Counter, deque 25 | import plotly.graph_objects as go 26 | import plotly.express as px 27 | 28 | # Import the advanced emotion detector 29 | from emotion_advanced import AdvancedEmotionDetector 30 | 31 | load_dotenv() 32 | 33 | # Constants 34 | GEMINI_API_KEY = os.getenv("GEMINI_API_KEY", "YOUR_API_KEY") 35 | GITA_CSV_PATH = "bhagavad_gita_verses.csv" 36 | IMAGE_PATH = "Public/Images/WhatsApp Image 2024-11-18 at 11.40.34_076eab8e.jpg" 37 | 38 | 39 | def create_confidence_visualization(confidence_data: Dict[str, float], title: str = "Model Confidence") -> go.Figure: 40 | """ 41 | Create a confidence visualization using Plotly. 
42 | 43 | Args: 44 | confidence_data: Dictionary with emotion names as keys and confidence scores as values 45 | title: Title for the chart 46 | 47 | Returns: 48 | Plotly figure object, or None if confidence_data is empty 49 | """ 50 | if not confidence_data: 51 | return None 52 | 53 | # Sort emotions by confidence score 54 | sorted_emotions = sorted(confidence_data.items(), key=lambda x: x[1], reverse=True) 55 | emotions, scores = zip(*sorted_emotions) 56 | 57 | # Create color mapping based on confidence levels 58 | colors = [] 59 | for score in scores: 60 | if score >= 80: 61 | colors.append('#2E8B57') # Sea Green - High confidence 62 | elif score >= 60: 63 | colors.append('#FFD700') # Gold - Medium confidence 64 | elif score >= 40: 65 | colors.append('#FF8C00') # Dark Orange - Low confidence 66 | else: 67 | colors.append('#DC143C') # Crimson - Very low confidence 68 | 69 | # Create horizontal bar chart 70 | fig = go.Figure(data=[ 71 | go.Bar( 72 | x=scores, 73 | y=emotions, 74 | orientation='h', 75 | marker_color=colors, 76 | text=[f'{score:.1f}%' for score in scores], 77 | textposition='auto', 78 | hovertemplate='%{y}<br>
Confidence: %{x:.1f}%' 79 | ) 80 | ]) 81 | 82 | # Update layout for better appearance 83 | fig.update_layout( 84 | title={ 85 | 'text': title, 86 | 'x': 0.5, 87 | 'xanchor': 'center', 88 | 'font': {'size': 16, 'color': '#333333'} 89 | }, 90 | xaxis_title="Confidence Score (%)", 91 | yaxis_title="Emotions", 92 | xaxis=dict(range=[0, 100]), 93 | height=300, 94 | margin=dict(l=20, r=20, t=40, b=20), 95 | plot_bgcolor='rgba(0,0,0,0)', 96 | paper_bgcolor='rgba(0,0,0,0)', 97 | showlegend=False 98 | ) 99 | 100 | return fig 101 | 102 | 103 | def create_confidence_interval_display(confidence_data: Dict[str, float], primary_emotion: str) -> str: 104 | """ 105 | Create a text-based confidence interval display. 106 | 107 | Args: 108 | confidence_data: Dictionary with emotion names and confidence scores 109 | primary_emotion: The primary detected emotion 110 | 111 | Returns: 112 | Formatted string showing confidence interval 113 | """ 114 | if not confidence_data or primary_emotion not in confidence_data: 115 | return "Confidence: Not available" 116 | 117 | primary_confidence = confidence_data[primary_emotion] 118 | 119 | # Calculate confidence interval based on the score 120 | if primary_confidence >= 90: 121 | interval = f"95% CI: {primary_confidence-5:.1f}%–{min(100, primary_confidence+5):.1f}%" 122 | confidence_level = "Very High" 123 | elif primary_confidence >= 75: 124 | interval = f"90% CI: {primary_confidence-8:.1f}%–{min(100, primary_confidence+8):.1f}%" 125 | confidence_level = "High" 126 | elif primary_confidence >= 60: 127 | interval = f"85% CI: {primary_confidence-10:.1f}%–{min(100, primary_confidence+10):.1f}%" 128 | confidence_level = "Medium" 129 | elif primary_confidence >= 40: 130 | interval = f"80% CI: {primary_confidence-12:.1f}%–{min(100, primary_confidence+12):.1f}%" 131 | confidence_level = "Low" 132 | else: 133 | interval = f"75% CI: {primary_confidence-15:.1f}%–{min(100, primary_confidence+15):.1f}%" 134 | confidence_level = "Very Low" 135 | 136 | 
return f"**{primary_emotion.title()}** ({primary_confidence:.1f}%) - {confidence_level} Confidence\n{interval}" 137 | 138 | 139 | def render_confidence_dashboard(emotion_detector): 140 | """ 141 | Render a comprehensive confidence dashboard for emotion detection. 142 | 143 | Args: 144 | emotion_detector: The AdvancedEmotionDetector instance 145 | """ 146 | if not emotion_detector: 147 | return 148 | 149 | # Get current emotion and confidence data 150 | current_emotion, confidence_data = emotion_detector.get_current_emotion() 151 | 152 | if not confidence_data: 153 | st.info("🔍 Waiting for emotion detection data...") 154 | return 155 | 156 | st.markdown("### 📊 Emotion Detection Confidence Dashboard") 157 | 158 | # Create two columns for the dashboard 159 | col1, col2 = st.columns([2, 1]) 160 | 161 | with col1: 162 | # Display confidence visualization 163 | if confidence_data: 164 | fig = create_confidence_visualization(confidence_data, "Emotion Confidence Scores") 165 | if fig: 166 | st.plotly_chart(fig, use_container_width=True, config={'displayModeBar': False}) 167 | 168 | with col2: 169 | # Display confidence interval and summary 170 | if current_emotion and confidence_data: 171 | confidence_text = create_confidence_interval_display(confidence_data, current_emotion) 172 | st.markdown(confidence_text) 173 | 174 | # Add confidence level indicator 175 | primary_confidence = confidence_data.get(current_emotion, 0) 176 | if primary_confidence >= 80: 177 | st.success("✅ High Confidence Detection") 178 | elif primary_confidence >= 60: 179 | st.warning("⚠️ Medium Confidence Detection") 180 | else: 181 | st.error("❌ Low Confidence Detection") 182 | 183 | # Show top 3 emotions 184 | sorted_emotions = sorted(confidence_data.items(), key=lambda x: x[1], reverse=True)[:3] 185 | st.markdown("**Top 3 Emotions:**") 186 | for i, (emotion, score) in enumerate(sorted_emotions, 1): 187 | st.markdown(f"{i}. 
{emotion.title()}: {score:.1f}%") 188 | 189 | # Add helpful tooltip 190 | with st.expander("ℹ️ Understanding Confidence Scores"): 191 | st.markdown(""" 192 | **What do these confidence scores mean?** 193 | 194 | - **90-100%**: Very High Confidence - The model is very certain about this emotion 195 | - **75-89%**: High Confidence - The model is confident but there's some uncertainty 196 | - **60-74%**: Medium Confidence - Moderate certainty, consider the context 197 | - **40-59%**: Low Confidence - Significant uncertainty, results should be interpreted carefully 198 | - **Below 40%**: Very Low Confidence - High uncertainty, consider manual assessment 199 | 200 | **Tips for better accuracy:** 201 | - Ensure good lighting conditions 202 | - Face the camera directly 203 | - Maintain a neutral expression initially 204 | - Avoid rapid movements 205 | """) 206 | 207 | 208 | def initialize_session_state(): 209 | """Initialize Streamlit session state variables with better defaults.""" 210 | default_states = { 211 | 'messages': [], 212 | 'bot': None, 213 | 'selected_theme': 'Life Guidance', 214 | 'question_history': [], 215 | 'favorite_verses': [], 216 | 'current_mood': 'Seeking Wisdom', 217 | 'emotional_state': 'Neutral', 218 | 'language_preference': 'English', 219 | 'webcam_enabled': False, 220 | 'emotion_detector': None, 221 | 'emotion_log': deque(maxlen=300), 222 | 'last_detected_emotion': None 223 | } 224 | 225 | for key, default_value in default_states.items(): 226 | if key not in st.session_state: 227 | st.session_state[key] = default_value 228 | 229 | # Initialize bot if not already done 230 | if st.session_state.bot is None: 231 | if not GEMINI_API_KEY: 232 | st.error("Please set the GEMINI_API_KEY in your configuration.") 233 | st.stop() 234 | st.session_state.bot = GitaGeminiBot(GEMINI_API_KEY) 235 | 236 | # Initialize emotion detector once 237 | if st.session_state.emotion_detector is None: 238 | st.session_state.emotion_detector = AdvancedEmotionDetector() 239 | 
240 | 241 | def dominant_emotion(window_sec: int = 5) -> str: 242 | """ 243 | Return the emotion that occurred most often in the last window_sec seconds 244 | of the webcam feed. Falls back to the sidebar selection when nothing is found. 245 | """ 246 | ctx = st.session_state.get("webrtc_ctx") 247 | if ctx and ctx.state.playing and ctx.video_processor: 248 | cutoff = time.time() - window_sec 249 | recent = [e for ts, e in ctx.video_processor.emotion_history if ts >= cutoff] 250 | if recent: 251 | dom = Counter(recent).most_common(1)[0][0] 252 | # Don't modify session state - just return the detected emotion 253 | return dom 254 | return st.session_state.emotional_state 255 | 256 | 257 | class EmotionTransformer(VideoTransformerBase): 258 | """WebRTC video transformer for emotion detection.""" 259 | 260 | def __init__(self): 261 | # One detector per peer-connection 262 | self.detector = AdvancedEmotionDetector() 263 | # Ring buffer of (timestamp, emotion) tuples - 5 s at 30 fps ≈ 150 264 | self.emotion_history: deque = deque(maxlen=150) 265 | 266 | def recv(self, frame): 267 | """Process each frame for emotion detection.""" 268 | try: 269 | # Convert frame to numpy array 270 | img = frame.to_ndarray(format="bgr24") 271 | 272 | # Flip frame horizontally for mirror effect 273 | img = cv2.flip(img, 1) 274 | 275 | # Only proceed if detector is available 276 | if self.detector is not None: 277 | # Detect faces 278 | faces = self.detector.detect_faces_optimized(img) 279 | 280 | if len(faces) > 0: 281 | # Process only the best face 282 | x, y, w, h = faces[0] 283 | padding = 30 284 | y1 = max(0, y - padding) 285 | y2 = min(img.shape[0], y + h + padding) 286 | x1 = max(0, x - padding) 287 | x2 = min(img.shape[1], x + w + padding) 288 | face_roi = img[y1:y2, x1:x2] 289 | 290 | if face_roi.size > 0: 291 | self.detector.update_emotion_async(face_roi) 292 | # Save latest emotion for the GUI thread 293 | if hasattr(self.detector, "current_emotion") and self.detector.current_emotion:
self.emotion_history.append( 295 | (time.time(), self.detector.current_emotion) 296 | ) 297 | 298 | # Draw results on frame 299 | self.detector.draw_advanced_results(img, faces) 300 | 301 | # Try different VideoFrame import approaches 302 | try: 303 | from streamlit_webrtc.models import VideoFrame 304 | return VideoFrame.from_ndarray(img, format="bgr24") 305 | except ImportError: 306 | try: 307 | import av 308 | # Create VideoFrame using av library 309 | av_frame = av.VideoFrame.from_ndarray(img, format="bgr24") 310 | return av_frame 311 | except ImportError: 312 | # Fallback: return the original frame 313 | return frame 314 | 315 | except Exception as e: 316 | print(f"Error in emotion detection: {e}") 317 | # Return original frame on error 318 | return frame 319 | 320 | 321 | class GitaGeminiBot: 322 | def __init__(self, api_key: str): 323 | """Initialize the Gita bot with Gemini API and enhanced features.""" 324 | genai.configure(api_key=api_key) 325 | self.model = genai.GenerativeModel('gemini-2.0-flash') 326 | self.verses_db = self.load_gita_database() 327 | self.themes = { 328 | 'Life Guidance': 'guidance for life decisions and personal growth', 329 | 'Dharma & Ethics': 'understanding of duty, righteousness, and moral conduct', 330 | 'Spiritual Growth': 'spiritual development and self-realization', 331 | 'Relationships': 'wisdom for interpersonal relationships and social harmony', 332 | 'Work & Career': 'guidance for professional life and service', 333 | 'Inner Peace': 'achieving mental tranquility and emotional balance', 334 | 'Devotion & Love': 'understanding devotion, love, and surrender' 335 | } 336 | 337 | @st.cache_data 338 | def load_gita_database(_self) -> Dict: 339 | """Load the Bhagavad Gita dataset with caching for better performance.""" 340 | try: 341 | verses_df = pd.read_csv(GITA_CSV_PATH) 342 | except FileNotFoundError: 343 | st.error(f"Gita database file '{GITA_CSV_PATH}' not found. 
Please ensure the file is in the correct location.") 344 | st.stop() 345 | except Exception as e: 346 | st.error(f"Error loading Gita database: {str(e)}") 347 | st.stop() 348 | 349 | verses_db = {} 350 | for _, row in verses_df.iterrows(): 351 | chapter = f"chapter_{row['chapter_number']}" 352 | if chapter not in verses_db: 353 | verses_db[chapter] = { 354 | "title": row['chapter_title'], 355 | "verses": {}, 356 | "summary": _self._get_chapter_summary(row['chapter_number']) 357 | } 358 | verse_num = str(row['chapter_verse']) 359 | verses_db[chapter]["verses"][verse_num] = { 360 | "translation": row['translation'] 361 | } 362 | return verses_db 363 | 364 | def format_response(self, raw_text: str) -> Dict: 365 | """Enhanced response formatting with better error handling.""" 366 | try: 367 | # Try JSON parsing first 368 | if raw_text.strip().startswith('{') and raw_text.strip().endswith('}'): 369 | try: 370 | return json.loads(raw_text) 371 | except json.JSONDecodeError: 372 | pass 373 | 374 | # Enhanced text parsing 375 | response = { 376 | "verse_reference": "", 377 | "sanskrit": "", 378 | "translation": "", 379 | "explanation": "", 380 | "application": "", 381 | "keywords": [] 382 | } 383 | 384 | lines = [line.strip() for line in raw_text.split('\n') if line.strip()] 385 | current_section = None 386 | 387 | for line in lines: 388 | line_lower = line.lower() 389 | 390 | # Better pattern matching 391 | if re.search(r'chapter\s+\d+.*verse\s+\d+', line_lower): 392 | response["verse_reference"] = line 393 | elif line_lower.startswith(('sanskrit:', 'verse:')): 394 | response["sanskrit"] = re.sub(r'^(sanskrit:|verse:)\s*', '', line, flags=re.IGNORECASE) 395 | elif line_lower.startswith('translation:'): 396 | response["translation"] = re.sub(r'^translation:\s*', '', line, flags=re.IGNORECASE) 397 | elif line_lower.startswith(('explanation:', 'meaning:')): 398 | current_section = "explanation" 399 | response["explanation"] = re.sub(r'^(explanation:|meaning:)\s*', '', line, 
flags=re.IGNORECASE) 400 | elif line_lower.startswith(('application:', 'practical:')): 401 | current_section = "application" 402 | response["application"] = re.sub(r'^(application:|practical:)\s*', '', line, flags=re.IGNORECASE) 403 | elif current_section and line: 404 | response[current_section] += " " + line 405 | 406 | # Extract keywords for better searchability 407 | text_content = f"{response['translation']} {response['explanation']} {response['application']}" 408 | response["keywords"] = self._extract_keywords(text_content) 409 | 410 | return response 411 | 412 | except Exception as e: 413 | st.error(f"Error formatting response: {str(e)}") 414 | return { 415 | "verse_reference": "Error in parsing", 416 | "sanskrit": "", 417 | "translation": raw_text[:500] + "..." if len(raw_text) > 500 else raw_text, 418 | "explanation": "Please try rephrasing your question.", 419 | "application": "", 420 | "keywords": [] 421 | } 422 | 423 | def _get_chapter_summary(self, chapter_num: int) -> str: 424 | """Get a brief summary for each chapter.""" 425 | summaries = { 426 | 1: "Arjuna's moral dilemma and the beginning of Krishna's counsel", 427 | 2: "The fundamental teachings on the soul, duty, and the path of knowledge", 428 | 3: "The path of selfless action and karma yoga", 429 | 4: "Divine knowledge, incarnation, and the evolution of dharma", 430 | 5: "The harmony between action and renunciation", 431 | 6: "The practice of meditation and self-control", 432 | 7: "Knowledge of the Absolute and devotion to the Divine", 433 | 8: "The imperishable Brahman and the path at the time of death", 434 | 9: "Royal knowledge and the most confidential wisdom", 435 | 10: "Divine manifestations and infinite glories", 436 | 11: "The cosmic vision of the universal form", 437 | 12: "The path of devotion and love", 438 | 13: "The field of activity and the knower of the field", 439 | 14: "The three modes of material nature", 440 | 15: "The supreme person and the cosmic tree", 441 | 16: "Divine 
and demonic natures in human beings", 442 | 17: "The three divisions of faith and their characteristics", 443 | 18: "The perfection of renunciation and complete surrender" 444 | } 445 | return summaries.get(chapter_num, "Eternal wisdom and guidance") 446 | 447 | def _extract_keywords(self, text: str) -> List[str]: 448 | """Extract relevant keywords from the response text.""" 449 | common_gita_keywords = [ 450 | 'dharma', 'karma', 'moksha', 'yoga', 'devotion', 'meditation', 'duty', 451 | 'righteousness', 'soul', 'divine', 'surrender', 'detachment', 'wisdom', 452 | 'knowledge', 'action', 'service', 'love', 'peace', 'truth' 453 | ] 454 | 455 | text_lower = text.lower() 456 | found_keywords = [keyword for keyword in common_gita_keywords if keyword in text_lower] 457 | return found_keywords[:5] # Return top 5 relevant keywords 458 | 459 | def _calculate_response_confidence(self, response: Dict, question: str, theme: str = None, mood: str = None, emotional_state: str = None) -> float: 460 | """ 461 | Calculate confidence score for the AI response based on various factors. 
462 | 463 | Args: 464 | response: The formatted response dictionary 465 | question: The user's question 466 | theme: Selected theme 467 | mood: User's mood 468 | emotional_state: Detected emotional state 469 | 470 | Returns: 471 | Confidence score between 0 and 100 472 | """ 473 | confidence_score = 0.0 474 | 475 | # Base confidence from response completeness (40% weight) 476 | completeness_score = 0.0 477 | required_fields = ['verse_reference', 'translation', 'explanation', 'application'] 478 | present_fields = sum(1 for field in required_fields if response.get(field) and len(str(response[field]).strip()) > 10) 479 | completeness_score = (present_fields / len(required_fields)) * 40 480 | 481 | # Content quality score (30% weight) 482 | quality_score = 0.0 483 | if response.get('translation'): 484 | quality_score += 10 485 | if response.get('explanation') and len(response['explanation']) > 50: 486 | quality_score += 10 487 | if response.get('application') and len(response['application']) > 50: 488 | quality_score += 10 489 | 490 | # Context relevance score (20% weight) 491 | relevance_score = 0.0 492 | if theme and theme in self.themes: 493 | relevance_score += 5 494 | if mood: 495 | relevance_score += 5 496 | if emotional_state: 497 | relevance_score += 5 498 | if response.get('keywords'): 499 | relevance_score += 5 500 | 501 | # Response length and detail score (10% weight) 502 | detail_score = 0.0 503 | total_length = sum(len(str(response.get(field, ''))) for field in ['translation', 'explanation', 'application']) 504 | if total_length > 200: 505 | detail_score = 10 506 | elif total_length > 100: 507 | detail_score = 7 508 | elif total_length > 50: 509 | detail_score = 5 510 | 511 | confidence_score = completeness_score + quality_score + relevance_score + detail_score 512 | 513 | # Ensure score is between 0 and 100 514 | return min(100.0, max(0.0, confidence_score)) 515 | 516 | async def get_response(self, question: str, theme: str = None, mood: str = None, 
emotional_state: str = None) -> Dict: 517 | """Enhanced response generation with theme, mood, and emotional state context.""" 518 | try: 519 | # Build context-aware prompt 520 | theme_context = "" 521 | if theme and theme in self.themes: 522 | theme_context = f"Focus on {self.themes[theme]}. " 523 | 524 | mood_context = "" 525 | if mood: 526 | mood_context = f"The user is currently {mood.lower()}. " 527 | 528 | emotional_context = "" 529 | if emotional_state: 530 | emotional_context = f"The user's emotional state is {emotional_state.lower()}. Please provide guidance that acknowledges and addresses this emotional state. " 531 | 532 | prompt = f""" 533 | {theme_context}{mood_context}{emotional_context}Based on the Bhagavad Gita's teachings, provide guidance for this question: 534 | {question} 535 | 536 | Please format your response exactly like this: 537 | Chapter X, Verse Y 538 | Sanskrit: [Sanskrit verse if available] 539 | Translation: [Clear English translation] 540 | Explanation: [Detailed explanation of the verse's meaning and context, considering the user's emotional state] 541 | Application: [Practical guidance for applying this wisdom in modern life, tailored to the user's current emotional state] 542 | 543 | Make the response comprehensive but accessible to modern readers, with special attention to providing comfort and guidance appropriate for someone who is {emotional_state.lower() if emotional_state else 'seeking wisdom'}. 
544 | """ 545 | 546 | # Add retry logic for API calls 547 | max_retries = 3 548 | for attempt in range(max_retries): 549 | try: 550 | response = self.model.generate_content(prompt) 551 | if response.text: 552 | break 553 | except Exception as e: 554 | if attempt == max_retries - 1: 555 | raise e 556 | time.sleep(1) # Brief pause before retry 557 | 558 | if not response.text: 559 | raise ValueError("Empty response received from the model") 560 | 561 | formatted_response = self.format_response(response.text) 562 | 563 | # Calculate confidence score based on response quality and completeness 564 | confidence_score = self._calculate_response_confidence(formatted_response, question, theme, mood, emotional_state) 565 | formatted_response["confidence_score"] = confidence_score 566 | 567 | # Add metadata 568 | formatted_response["timestamp"] = datetime.now().isoformat() 569 | formatted_response["theme"] = theme 570 | formatted_response["mood"] = mood 571 | formatted_response["emotional_state"] = emotional_state 572 | 573 | return formatted_response 574 | 575 | except Exception as e: 576 | st.error(f"Error getting response: {str(e)}") 577 | return { 578 | "verse_reference": "Service Temporarily Unavailable", 579 | "sanskrit": "", 580 | "translation": "We're experiencing technical difficulties. 
Please try again in a moment.", 581 | "explanation": "The wisdom of the Gita teaches us patience in times of difficulty.", 582 | "application": "Take this moment to practice patience and try your question again.", 583 | "keywords": ["patience", "perseverance"], 584 | "timestamp": datetime.now().isoformat(), 585 | "theme": theme, 586 | "mood": mood, 587 | "emotional_state": emotional_state 588 | } 589 | 590 | 591 | def render_additional_options(): 592 | """Render additional options below the image, including webcam with emotion detection.""" 593 | 594 | # --- NEW: keep UI in-sync with webcam --- 595 | if st.session_state.get("webcam_enabled"): 596 | detected = dominant_emotion() 597 | if detected and detected != st.session_state.get("last_detected_emotion"): 598 | st.session_state.emotional_state = detected 599 | st.session_state.last_detected_emotion = detected 600 | # ---------------------------------------- 601 | 602 | st.markdown("### 🎯 Personalize Your Spiritual Journey") 603 | 604 | # Create columns for better layout 605 | col1, col2, col3, col4 = st.columns(4) 606 | 607 | with col1: 608 | st.selectbox( 609 | "🎭 Current Mood", 610 | ["Seeking Wisdom", "Feeling Confused", "Need Motivation", "Seeking Peace", 611 | "Facing Challenges", "Grateful", "Contemplative"], 612 | key="current_mood", 613 | help="Your current state of mind helps tailor the guidance" 614 | ) 615 | 616 | with col2: 617 | st.selectbox( 618 | "📚 Focus Theme", 619 | list(st.session_state.bot.themes.keys()), 620 | key="selected_theme", 621 | help="Choose the area where you seek guidance" 622 | ) 623 | 624 | with col3: 625 | st.selectbox( 626 | "🌐 Response Style", 627 | ["Detailed", "Concise", "Contemplative", "Practical"], 628 | key="response_style", 629 | help="How would you like the wisdom to be presented?" 
630 | ) 631 | 632 | with col4: 633 | st.selectbox( 634 | "💭 Emotional State", 635 | ["Neutral", "Happy", "Sad", "Angry", "Fear", "Surprise", "Disgust"], 636 | key="emotional_state", 637 | help="Your current emotional state for personalized guidance" 638 | ) 639 | 640 | 641 | # Webcam Section 642 | st.markdown("### 📹 Spiritual Presence & Emotion Detection") 643 | webcam_col1, webcam_col2 = st.columns([1, 3]) 644 | with webcam_col1: 645 | webcam_enabled = st.checkbox( 646 | "Enable Webcam with Emotion Detection", 647 | key="webcam_enabled", 648 | help="Enable webcam and emotion detection for mindful presence" 649 | ) 650 | 651 | 652 | if webcam_enabled: 653 | # WebRTC Configuration for better connectivity 654 | rtc_configuration = RTCConfiguration({ 655 | "iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}] 656 | }) 657 | 658 | st.info("🎥 Webcam with emotion detection is now active. You can continue chatting while the camera runs!") 659 | 660 | # Create a container for the webcam feed and confidence dashboard 661 | webcam_container = st.container() 662 | with webcam_container: 663 | # Use columns to control the width - making it smaller 664 | cam_col1, cam_col2, cam_col3 = st.columns([1, 2, 1]) 665 | with cam_col2: 666 | # Start WebRTC streamer with emotion detection and higher resolution 667 | ctx = webrtc_streamer( 668 | key="gita_webcam", 669 | video_transformer_factory=EmotionTransformer, 670 | rtc_configuration=rtc_configuration, 671 | media_stream_constraints={ 672 | "video": { 673 | "width": {"ideal": 1280, "min": 640, "max": 1920}, 674 | "height": {"ideal": 720, "min": 480, "max": 1080}, 675 | "frameRate": {"ideal": 30, "min": 15, "max": 60} 676 | }, 677 | "audio": False 678 | }, 679 | async_processing=True 680 | ) 681 | # Expose ctx so the main thread can read the emotion history 682 | if ctx: 683 | st.session_state["webrtc_ctx"] = ctx 684 | 685 | # Add confidence dashboard below the webcam 686 | if st.session_state.emotion_detector: 687 | 
render_confidence_dashboard(st.session_state.emotion_detector) 688 | 689 | # Quick action buttons - All 5 on one line, centered 690 | st.markdown("### ⚡ Quick Actions") 691 | 692 | # Adjust column widths to center and evenly space 5 buttons with text + emoji 693 | # The middle columns are now slightly larger to accommodate text, and side columns balance. 694 | # The sum of these ratios is 11.5. 695 | col_empty_left_qa, col_qa1, col_qa2, col_qa3, col_qa4, col_qa5, col_empty_right_qa = st.columns([0.75, 2, 2, 2, 2, 2, 0.75]) 696 | 697 | 698 | with col_qa1: 699 | if st.button("🎲 Random Verse", help="Get a random verse for inspiration"): 700 | return "random_verse" 701 | 702 | with col_qa2: 703 | if st.button("💭 Daily Reflection", help="Get guidance for daily contemplation"): 704 | return "daily_reflection" 705 | 706 | with col_qa3: 707 | if st.button("🔍 Verse Search", help="Search for specific verses"): 708 | return "verse_search" 709 | 710 | with col_qa4: 711 | if st.button("📖 Chapter Summary", help="Get a summary of any chapter"): 712 | return "chapter_summary" 713 | 714 | 715 | with col_qa5: 716 | if st.button("⭐ View Favorites", help="View your saved favorite verses"): 717 | return "view_favorites" 718 | 719 | # Reset Chat button - on its own centered line directly below Quick Actions 720 | # Use columns to center a single button effectively.
721 | col_empty_left_reset, col_reset, col_empty_right_reset = st.columns([3, 2, 3]) 722 | with col_reset: 723 | if st.button("🔄 Reset Chat", help="Clear all chat history and start fresh"): 724 | return "reset_chat" 725 | 726 | 727 | return None 728 | 729 | 730 | def handle_quick_actions(action_type): 731 | """Handle quick action button clicks.""" 732 | if action_type == "random_verse": 733 | # Get random verse 734 | import random 735 | chapters = list(st.session_state.bot.verses_db.keys()) 736 | random_chapter = random.choice(chapters) 737 | verses = list(st.session_state.bot.verses_db[random_chapter]["verses"].keys()) 738 | random_verse = random.choice(verses) 739 | 740 | chapter_num = random_chapter.split('_')[1] 741 | question = f"Please share the wisdom from Chapter {chapter_num}, Verse {random_verse} and its practical application." 742 | return question 743 | 744 | elif action_type == "daily_reflection": 745 | today = datetime.now().strftime("%A") 746 | question = f"What guidance does the Bhagavad Gita offer for {today}? Please provide a verse for daily reflection and contemplation." 
747 | return question 748 | 749 | elif action_type == "verse_search": 750 | st.session_state.show_search = True 751 | return None 752 | 753 | elif action_type == "chapter_summary": 754 | st.session_state.show_chapter_summary = True 755 | return None 756 | 757 | elif action_type == "view_favorites": # Handle the new action type 758 | st.info("Displaying your favorite verses...") 759 | return None 760 | 761 | elif action_type == "reset_chat": # Handle the reset_chat action type 762 | for key in ['messages', 'question_history']: 763 | if key in st.session_state: 764 | st.session_state[key] = [] 765 | st.rerun() # Rerun to clear chat 766 | return None # No question to generate after reset 767 | 768 | return None 769 | 770 | 771 | # 📚 Blog List Section 772 | st.header("📚 Blog List") 773 | 774 | blogs = [] 775 | 776 | if not blogs: 777 | st.info("No blogs available yet.") 778 | else: 779 | for blog in blogs: 780 | st.subheader(blog['title']) 781 | st.write(blog['content']) 782 | 783 | 784 | def render_enhanced_sidebar(): 785 | """Enhanced sidebar with better organization - showing ALL verses.""" 786 | st.sidebar.title("📖 Browse Sacred Texts") 787 | 788 | # Chapter browser with enhanced info 789 | chapters = list(st.session_state.bot.verses_db.keys()) 790 | selected_chapter = st.sidebar.selectbox( 791 | "Select Chapter", 792 | chapters, 793 | format_func=lambda x: f"Ch.
{x.split('_')[1]}: {st.session_state.bot.verses_db[x]['title']}" 794 | ) 795 | 796 | if selected_chapter: 797 | chapter_data = st.session_state.bot.verses_db[selected_chapter] 798 | st.sidebar.markdown(f"### {chapter_data['title']}") 799 | st.sidebar.markdown(f"*{chapter_data.get('summary', '')}*") 800 | 801 | # Show verse count 802 | verse_count = len(chapter_data['verses']) 803 | st.sidebar.info(f"📊 {verse_count} verses in this chapter") 804 | 805 | # Show ALL verses instead of just top 5 806 | verses = chapter_data['verses'] 807 | st.sidebar.markdown("#### All Verses:") 808 | 809 | # Create a scrollable container for all verses 810 | for verse_num, verse_data in verses.items(): 811 | with st.sidebar.expander(f"Verse {verse_num}"): 812 | # Show full translation for shorter verses, truncate longer ones 813 | translation = verse_data['translation'] 814 | if len(translation) > 200: 815 | st.markdown(translation[:200] + "...") 816 | else: 817 | st.markdown(translation) 818 | 819 | # Add a button to use this verse for questioning 820 | if st.button(f"Ask about this verse", key=f"ask_verse_{selected_chapter}_{verse_num}"): 821 | chapter_num = selected_chapter.split('_')[1] 822 | question = f"Please explain Chapter {chapter_num}, Verse {verse_num} and its practical application in modern life." 
823 | st.session_state.auto_question = question
824 |
825 | # Enhanced question history with confidence indicators
826 | st.sidebar.markdown("---")
827 | st.sidebar.title("💭 Your Spiritual Journey")
828 |
829 | user_questions = [msg["content"] for msg in st.session_state.messages if msg["role"] == "user"]
830 | ai_responses = [msg for msg in st.session_state.messages if msg["role"] == "assistant"]
831 |
832 | if user_questions:
833 | st.sidebar.markdown(f"**Questions Asked:** {len(user_questions)}")
834 |
835 | # Show up to the last 5 questions, numbered correctly even when fewer than 5 exist
836 | recent_questions = user_questions[-5:]; start_index = len(user_questions) - len(recent_questions)
837 | for i, q in enumerate(recent_questions, 1):
838 | with st.sidebar.expander(f"Question {start_index + i}"):
839 | st.markdown(f"*{q}*")
840 | # Show confidence for the matching AI response, if one exists
841 | response_index = start_index + i - 1
842 | if response_index < len(ai_responses):
843 | response = ai_responses[response_index]
844 | if response.get('confidence_score') is not None:
845 | confidence_score = response['confidence_score']
846 | # Small confidence badge (rendered inside the expander)
847 | if confidence_score >= 80:
848 | st.success(f"🤖 {confidence_score:.0f}%")
849 | elif confidence_score >= 60:
850 | st.warning(f"🤖 {confidence_score:.0f}%")
851 | else:
852 | st.error(f"🤖 {confidence_score:.0f}%")
853 | else:
854 | st.sidebar.info("🌱 Begin your journey by asking a question")
855 |
856 | # Confidence summary
857 | if ai_responses:
858 | avg_confidence = sum(msg.get('confidence_score', 0) for msg in ai_responses) / len(ai_responses)
859 | st.sidebar.markdown("---")
860 | st.sidebar.markdown("**📊 Response Quality Summary:**")
861 |
862 | if avg_confidence >= 80:
863 | st.sidebar.success(f"Average Confidence: {avg_confidence:.1f}%")
864 | elif avg_confidence >= 60:
865 | st.sidebar.warning(f"Average Confidence: {avg_confidence:.1f}%")
866 | else:
867 | st.sidebar.error(f"Average Confidence: {avg_confidence:.1f}%")
868 |
869 | # Favorites section (placeholder for future enhancement)
870 | 
st.sidebar.markdown("---") 871 | st.sidebar.title("⭐ Favorite Verses") 872 | if st.session_state.favorite_verses: 873 | for fav in st.session_state.favorite_verses: 874 | st.sidebar.markdown(f"• {fav}") 875 | else: 876 | st.sidebar.info("No favorites saved yet") 877 | 878 | 879 | 880 | 881 | def show_preloader(): 882 | st.markdown(""" 883 |
884 | <div style="text-align: center; padding: 2rem;">🕉️</div>
885 | <div style="text-align: center;">Contemplating your journey...</div>
886 |
887 | """, unsafe_allow_html=True) 888 | 889 | def login_signup_page(): 890 | st.markdown(""" 891 | 905 | """, unsafe_allow_html=True) 906 | st.markdown(""" 907 |
908 | <div style="text-align: center;">
909 | <div style="font-size: 3rem;">🕉️</div>
910 | <h1>Wisdom Weaver</h1>
911 | <p>Welcome! Please login or sign up to begin your spiritual journey.</p>
912 | </div>
913 |
914 | """, unsafe_allow_html=True)
915 |
916 | tab1, tab2 = st.tabs(["Login", "Sign Up"])
917 | if "users" not in st.session_state:
918 | st.session_state.users = {"demo": "demo123"} # demo user
919 |
920 | with tab1:
921 | username = st.text_input("Username", key="login_user")
922 | password = st.text_input("Password", type="password", key="login_pass")
923 | if st.button("Login", key="login_btn"):
924 | show_preloader()
925 | time.sleep(1)
926 | if username in st.session_state.users and st.session_state.users[username] == password:
927 | st.session_state.logged_in = True
928 | st.session_state.current_user = username
929 | st.success("Login successful!")
930 | st.rerun()
931 | else:
932 | st.error("Invalid username or password.")
933 |
934 | with tab2:
935 | new_user = st.text_input("Choose a Username", key="signup_user")
936 | new_pass = st.text_input("Choose a Password", type="password", key="signup_pass")
937 | if st.button("Sign Up", key="signup_btn"):
938 | show_preloader()
939 | time.sleep(1)
940 | if new_user in st.session_state.users:
941 | st.error("Username already exists.")
942 | elif len(new_user) < 3 or len(new_pass) < 3:
943 | st.error("Username and password must be at least 3 characters.")
944 | else:
945 | st.session_state.users[new_user] = new_pass
946 | st.success("Sign up successful! Please login.")
947 | st.rerun()
948 |
949 |
950 |
951 |
952 |
953 | def create_downloadable_content(chat_history: List[Dict]) -> str:
954 | """Formats the chat history into a readable string for download."""
955 | content = f"--- Wisdom Weaver Chat History - {datetime.now().strftime('%Y-%m-%d %H:%M')} ---\n\n"
956 | for message in chat_history:
957 | role = message["role"]
958 |
959 | if role == "user":
960 | content += f"User: {message['content']}\n\n"
961 | else:
962 | # Handle the structured AI response
963 | content += "Wisdom Weaver: "
964 | if message.get("verse_reference"):
965 | content += f"📖 {message['verse_reference']}\n"
966 | if message.get("sanskrit"):
967 | content += f"Sanskrit: {message['sanskrit']}\n"
968 | if message.get("translation"):
969 | content += f"Translation: {message['translation']}\n"
970 | if message.get("explanation"):
971 | content += f"Explanation: {message['explanation']}\n"
972 | if message.get("application"):
973 | content += f"Modern Application: {message['application']}\n"
974 | content += "\n"
975 |
976 | content += "--- End of Chat History ---"
977 | return content
978 |
979 |
980 | def main_app():
981 | """Enhanced main Streamlit application."""
982 | st.set_page_config(
983 | page_title="Bhagavad Gita Wisdom Weaver",
984 | page_icon="🕉️",
985 | layout="wide", # This is key for full-width layout
986 | initial_sidebar_state="expanded"
987 | )
988 | st.markdown("""
989 |
1003 | """, unsafe_allow_html=True)
1004 |
1005 | # Initialize session state first, before any other operations
1006 | initialize_session_state()
1007 |
1008 | # Load and display image with reduced width
1009 | if os.path.exists(IMAGE_PATH):
1010 | try:
1011 | image = Image.open(IMAGE_PATH)
1012 | max_width = 1800
1013 | aspect_ratio = image.height / image.width
1014 | resized_image = image.resize((max_width, int(max_width * aspect_ratio)))
1015 |
1016 | # Center the image by using columns with better proportions
1017 | col_img1, col_img2, col_img3 = 
st.columns([2, 1, 2])
1018 | with col_img2:
1019 | st.image(resized_image, use_container_width=True, caption="Bhagavad Gita - Eternal Wisdom")
1020 | except Exception as e:
1021 | st.error(f"Error loading image: {str(e)}")
1022 | else:
1023 | st.warning("Image file not found. Please ensure the image is in the correct location.")
1024 |
1025 | quick_action = render_additional_options()
1026 | # Quick actions (including reset) are handled once, below the page titles
1042 |
1043 | col1, col2 = st.columns([2, 1])
1044 | with col1:
1045 | st.title("🕉️ Bhagavad Gita Wisdom")
1046 | st.markdown("""
1047 | Ask questions about life, dharma, and spirituality to receive guidance from the timeless wisdom of the Bhagavad Gita. 
1048 | *Personalize your experience using the options above.*
1049 | """)
1050 |
1051 |
1052 | # Handle reset action directly here to avoid re-generating a question
1053 | if quick_action == "reset_chat":
1054 | handle_quick_actions("reset_chat") # This will clear messages and rerun
1055 | else:
1056 | auto_question = handle_quick_actions(quick_action)
1057 | if auto_question:
1058 | st.session_state.messages.append({"role": "user", "content": auto_question})
1059 | with st.spinner("Contemplating your question..."):
1060 | response = asyncio.run(st.session_state.bot.get_response(
1061 | auto_question,
1062 | st.session_state.selected_theme,
1063 | st.session_state.current_mood,
1064 | dominant_emotion()
1065 | ))
1066 | st.session_state.messages.append({
1067 | "role": "assistant",
1068 | **response
1069 | })
1070 | st.rerun()
1071 |
1072 | st.markdown("
<h1 style="text-align: center;">🕉️ Bhagavad Gita Wisdom</h1>
", unsafe_allow_html=True) 1073 | 1074 | st.markdown(""" 1075 |
<div style="text-align: center;">
1076 | Ask questions about life, dharma, and spirituality to receive guidance from the timeless wisdom of the Bhagavad Gita. Personalize your experience using the options above.
1077 | </div>
1078 | """, unsafe_allow_html=True)
1079 |
1080 | left_empty_for_center, main_chat_col, sidebar_col = st.columns([1, 3, 1])
1081 |
1082 | with main_chat_col:
1083 | if "submission_in_progress" not in st.session_state:
1084 | st.session_state.submission_in_progress = False
1085 |
1086 | if not st.session_state.submission_in_progress:
1087 | question = st.chat_input("Ask your question here...")
1088 | if question:
1089 | st.session_state.messages.append({"role": "user", "content": question})
1090 | st.session_state.submission_in_progress = True
1091 | st.rerun()
1092 | else:
1093 | with st.spinner("🧘 Contemplating your question..."):
1094 | last_user_msg = st.session_state.messages[-1]["content"]
1095 | response = asyncio.run(st.session_state.bot.get_response(
1096 | last_user_msg,
1097 | st.session_state.selected_theme,
1098 | st.session_state.current_mood,
1099 | dominant_emotion()
1100 | ))
1101 | st.session_state.messages.append({
1102 | "role": "assistant",
1103 | **response
1104 | })
1105 | st.session_state.submission_in_progress = False
1106 | st.rerun()
1107 |
1108 | # Enhanced message display (now below the input)
1109 | for i, message in enumerate(st.session_state.messages):
1110 |
1111 | with st.chat_message(message["role"]):
1112 | if message["role"] == "user":
1113 | st.markdown(message["content"])
1114 | else:
1115 | if message.get("verse_reference"):
1116 | st.markdown(f"**📖 {message['verse_reference']}**")
1117 | if message.get('sanskrit'):
1118 | st.markdown(f"*Sanskrit:* {message['sanskrit']}")
1119 | if message.get('translation'):
1120 | st.markdown(f"**Translation:** {message['translation']}")
1121 | if message.get('explanation'):
1122 | st.markdown("### 🧠 Understanding")
1123 | st.markdown(message["explanation"])
1124 | if message.get('application'):
1125 | st.markdown("### 🌟 Modern Application")
1126 | st.markdown(message["application"])
1127 | if message.get('keywords'):
1128 | st.markdown("**Key Concepts:** " + " • 
".join([f"`{kw}`" for kw in message['keywords']]))
1129 |
1145 |
1146 |
1147 | # Show context values that were passed to LLM
1148 | context_parts = []
1149 | if message.get('theme'):
1150 | context_parts.append(f"🎯 {message['theme']}")
1151 | if message.get('mood'):
1152 | context_parts.append(f"🎭 {message['mood']}")
1153 | if message.get('emotional_state'):
1154 | context_parts.append(f"💭 {message['emotional_state']}")
1155 |
1156 | if context_parts:
1157 | st.markdown("**Response Context:** " + " • ".join(context_parts))
1158 |
1159 | # Display confidence score for AI responses
1160 | if message.get('confidence_score') is not None:
1161 | confidence_score = message['confidence_score']
1162 |
1163 | # Create confidence indicator (fresh columns; avoid shadowing the outer layout columns)
1164 | _, conf_col, _ = st.columns([1, 3, 1])
1165 | with conf_col:
1166 | st.markdown("**🤖 AI Response Confidence:**")
1167 |
1168 | # Confidence bar
1170 | st.progress(confidence_score / 100, text=f"{confidence_score:.1f}%")
1171 |
1172 | # Confidence level indicator
1173 | if confidence_score >= 80:
1174 | st.success(f"✅ High Confidence ({confidence_score:.1f}%)")
1175 | elif confidence_score >= 60:
1176 | st.warning(f"⚠️ Medium Confidence ({confidence_score:.1f}%)")
1177 | else:
1178 | st.error(f"❌ Low Confidence ({confidence_score:.1f}%)")
1179 |
1180 | # Confidence explanation tooltip
1181 | with st.expander("ℹ️ What does this 
confidence score mean?"): 1182 | st.markdown(f""" 1183 | **Response Quality Assessment:** 1184 | 1185 | This confidence score ({confidence_score:.1f}%) indicates how well the AI was able to: 1186 | - **Provide complete information** (verse reference, translation, explanation, application) 1187 | - **Match your context** (theme: {message.get('theme', 'N/A')}, mood: {message.get('mood', 'N/A')}) 1188 | - **Address your emotional state** ({message.get('emotional_state', 'N/A')}) 1189 | - **Offer detailed and relevant guidance** 1190 | 1191 | **Confidence Levels:** 1192 | - **80-100%**: Excellent response quality with comprehensive guidance 1193 | - **60-79%**: Good response with room for improvement 1194 | - **Below 60%**: Basic response, consider rephrasing your question 1195 | """) 1196 | 1197 | # NEW: Add Favorite button for assistant responses 1198 | unique_key = f"favorite_btn_{i}_{message.get('verse_reference', '').replace(' ', '_')}" 1199 | if st.button("⭐ Add to Favorites", key=unique_key): 1200 | verse_info = "" 1201 | if message.get("verse_reference"): 1202 | verse_info += message["verse_reference"] 1203 | if message.get("translation"): 1204 | if verse_info: 1205 | verse_info += " - " 1206 | verse_info += message["translation"] 1207 | 1208 | if verse_info and verse_info not in st.session_state.favorite_verses: 1209 | st.session_state.favorite_verses.append(verse_info) 1210 | st.success("Verse added to favorites!") 1211 | elif verse_info: 1212 | st.warning("This verse is already in your favorites!") 1213 | 1214 | # Add the download button after the chat messages 1215 | if st.session_state.messages: 1216 | chat_content = create_downloadable_content(st.session_state.messages) 1217 | st.download_button( 1218 | label="📥 Download Chat History", 1219 | data=chat_content, 1220 | file_name=f"WisdomWeaver_Chat_History_{datetime.now().strftime('%Y-%m-%d')}.txt", 1221 | mime="text/plain", 1222 | help="Download your entire chat conversation as a text file" 1223 | ) 1224 
| 1225 | with sidebar_col: 1226 | render_enhanced_sidebar() 1227 | 1228 | 1229 | st.markdown("---") 1230 | with st.expander("💫 About Wisdom Weaver", expanded=True): 1231 | st.markdown(""" 1232 |
1233 | <h2 style="text-align: center;">🕉️ Wisdom Weaver 🕉️</h2>
1234 |
1235 | <p style="text-align: center;"><em>"Let the light of ancient wisdom guide your modern journey."</em></p>
1236 |
1237 |
1238 | ### 🌱 Our Mission 1239 | 1240 | To bridge the timeless teachings of the Bhagavad Gita with the challenges of today, nurturing clarity, strength, and inner peace for every seeker. 1241 | 1242 | ### ✨ Features 1243 | - 🧘 Spiritual Guidance: Personalized answers powered by Google's Gemini AI. 1244 | - 📖 Verse Explorer: Browse, search, and reflect on verses from all 18 chapters. 1245 | - 🎯 Quick Actions: Random verse, daily reflection, and more. 1246 | - ⭐ Favorites: Save and revisit your most inspiring verses. 1247 | - 🫶 Community: Connect, share, and grow with fellow seekers. 1248 | ### 📚 Why the Bhagavad Gita? 1249 | 1250 | The Gita is a universal scripture, a dialogue of the soul, offering wisdom for self-discovery, resilience, and harmony. Its teachings transcend boundaries, inviting all to walk the path of awareness. 1251 | 1252 | ### 👥 Meet the Team 1253 | - Satvik & Contributors: Spiritual technologists and lifelong learners. 1254 | - Advisors: Gita scholars and meditation mentors. 1255 | ### 🤝 Connect & Community 1256 | - 📧 Email: support@wisdomweaver.app 1257 | - 📸 Instagram: @wisdomweaver.ai 1258 | - 💬 Discord: Join our Community 1259 | - 💡 Feedback: We welcome your ideas and stories! 1260 |
1261 |
1262 | <p style="text-align: center;"><em>"You are not alone on this journey. May the wisdom of the Gita illuminate your path."</em></p>
1263 | <p style="text-align: center;">🙏</p>
1264 |
1265 |
1266 | """, unsafe_allow_html=True)
1267 |
1268 | def main():
1269 | if not st.session_state.get("logged_in", False):
1270 | login_signup_page()
1271 | else:
1272 | main_app()
1273 |
1274 | if __name__ == "__main__":
1275 | main()
1276 |
--------------------------------------------------------------------------------