├── screenshot.png
├── web
│   ├── answers.png
│   ├── OpenAI_Logo.png
│   ├── index.html
│   ├── script.js
│   └── styles.css
├── .gitignore
├── README.md
└── inter_ass.py
/screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pixelpump/Ai-Interview-Assistant-Python/HEAD/screenshot.png
--------------------------------------------------------------------------------
/web/answers.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pixelpump/Ai-Interview-Assistant-Python/HEAD/web/answers.png
--------------------------------------------------------------------------------
/web/OpenAI_Logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pixelpump/Ai-Interview-Assistant-Python/HEAD/web/OpenAI_Logo.png
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Python cache files
__pycache__/
*.py[cod]

# Virtual environment
venv/
env/
.env

# IDE specific files
.vscode/
.idea/

# Operating system files
.DS_Store
Thumbs.db

# Application specific
config.json

# Logs
*.log

# Build files
build/
dist/

# Dependency directories
node_modules/

# Other common files to ignore
*.bak
*.swp
*.swo
--------------------------------------------------------------------------------
/web/index.html:
--------------------------------------------------------------------------------
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Automatic Answers</title>
    <link rel="stylesheet" href="styles.css">
    <!-- eel.js is served by the Eel backend at runtime -->
    <script type="text/javascript" src="/eel.js"></script>
</head>
<body>
    <div class="container">
        <div class="logo-container">
            <span class="close-logo">&times;</span>
            <img class="logo" src="answers.png" alt="Automatic Answers logo">
        </div>
        <div class="button-container">
            <input type="password" id="apiKeyInput" placeholder="Enter your OpenAI API key">
            <button id="saveApiKey">Save API Key</button>
            <button id="deleteApiKey">Change/Remove API Key</button>
            <button id="listenButton" disabled>Start Listening</button>
            <button id="clearButton">Clear Text</button>
        </div>
        <div class="content">
            <div class="questions">
                <h2>Questions</h2>
                <div id="questionsArea"></div>
            </div>
            <div class="answers">
                <h2>Answers <span id="ttsToggle" class="tts-toggle" title="Toggle text-to-speech">🔊</span></h2>
                <div id="answersArea"></div>
            </div>
        </div>
    </div>

    <div id="modal" class="modal">
        <div class="modal-content">
            <span class="close">&times;</span>
            <img id="modal-logo" src="" alt="Logo">
            <p id="modal-message"></p>
        </div>
    </div>

    <script src="script.js"></script>
</body>
</html>
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# AI Interview Assistant

![AI Interview Assistant](screenshot.png)

I suppose we are all relegated to fighting fire with fire now. This AI Interview Assistant is a standalone Python application that uses speech recognition and AI-powered responses to help during the interview process. When active, it listens to system audio for spoken questions, processes them with OpenAI's API, and provides spoken and written responses.
The idea is that you could have this running while on an online interview (Zoom, etc.).

## Features

- Speech recognition for capturing user questions
- AI-powered responses using OpenAI's GPT-3.5-turbo model
- Text-to-speech functionality for spoken answers
- User-friendly web interface
- API key management for OpenAI integration
- Per-answer audio muting
- Responsive design for various screen sizes

## Prerequisites

Before you begin, ensure you have met the following requirements:

- Python 3.7+
- An OpenAI API key

## Installation

1. Clone the repository:
```
git clone https://github.com/yourusername/ai-interview-assistant.git
cd ai-interview-assistant
```

2. Install the required Python packages:
```
pip3 install eel SpeechRecognition openai
```
Note: `sr.Microphone` relies on PyAudio, so you may also need to `pip3 install pyaudio`.

3. Set up your OpenAI API key:
   - Run the application
   - Use the UI to enter your API key
   - The key will be saved for future use (see the sketch below)

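The saved key lands in a local `config.json` (already listed in `.gitignore`). Assuming the storage format used by `inter_ass.py`, the file looks roughly like this, with a placeholder key:

```
{"api_key": "sk-your-key-here"}
```
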
## Usage

1. Run the main Python script:
```
python inter_ass.py
```

2. The application will open in your default web browser.

3. If you haven't already, enter your OpenAI API key when prompted.

4. Click the "Start Listening" button to begin.

5. If you ask your question clearly, the application will process your speech and display the question.

6. The AI will generate a response, which will be displayed in text and spoken aloud (the payload format is sketched after this list).

7. Use the mute icon next to each answer to toggle the audio on/off for that specific response.

8. Click the "Stop Listening" button when you're done.

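Each answer reaches the web page as a small JSON payload assembled by `get_ai_response()` in `inter_ass.py`; roughly, with illustrative values:

```
{"text": "The answer text...", "audio": "<base64-encoded MP3, or null when TTS is off>"}
```
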
## Configuration

- You can toggle text-to-speech on/off using the speaker icon next to the "Answers" heading.
- The "Clear Text" button will remove all questions and answers from the display.
- To change or remove your API key, use the "Change/Remove API Key" button.

## Troubleshooting

- If you encounter issues with speech recognition, ensure your microphone is properly connected and has the necessary permissions (the snippet after this list can help isolate microphone problems).
- If responses are slow, check your internet connection, as the application requires online access for AI processing.

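As a minimal standalone microphone check, the sketch below uses the same SpeechRecognition calls as `inter_ass.py` (`Recognizer`, `Microphone`, and the Google recognizer); run it in the project's environment:

```
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    r.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Say something...")
    audio = r.listen(source, timeout=5)
print("Heard:", r.recognize_google(audio))
```
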
## Contributing

Contributions to the AI Interview Assistant are welcome. Please follow these steps:

1. Fork the repository.
2. Create a new branch: `git checkout -b <branch_name>`.
3. Make your changes and commit them: `git commit -m '<commit_message>'`
4. Push to the original branch: `git push origin <project_name>/<location>`
5. Create the pull request.

Alternatively, see the GitHub documentation on [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).

## License

This project uses the following license: [Common Development and Distribution License 1.0](https://opensource.org/license/cddl-1-0).

## Contact

If you want to contact me, you can reach me at .

## Acknowledgements

- [OpenAI](https://openai.com/) for providing the GPT and text-to-speech APIs
- [Eel](https://github.com/ChrisKnott/Eel) for the Python/JavaScript integration
- [SpeechRecognition](https://pypi.org/project/SpeechRecognition/) for the speech recognition functionality

--------------------------------------------------------------------------------
/web/script.js:
--------------------------------------------------------------------------------
let listenButton = document.getElementById('listenButton');
let clearButton = document.getElementById('clearButton');
let questionsArea = document.getElementById('questionsArea');
let answersArea = document.getElementById('answersArea');
let apiKeyInput = document.getElementById('apiKeyInput');
let saveApiKeyButton = document.getElementById('saveApiKey');
let deleteApiKeyButton = document.getElementById('deleteApiKey');
let modal = document.getElementById('modal');
let modalMessage = document.getElementById('modal-message');
let closeModal = document.getElementsByClassName('close')[0];
let modalLogo = document.getElementById('modal-logo');

// Logo banner and its close button
let logoContainer = document.querySelector('.logo-container');
let closeLogo = document.querySelector('.close-logo');

// Hide the logo banner when its close button is clicked
closeLogo.addEventListener('click', () => {
    logoContainer.classList.add('logo-hidden');
});

// Check for existing API key on load.
// Note the double call: eel.fn() returns a proxy, and calling that proxy
// returns a promise resolving to the Python function's return value.
window.addEventListener('load', async () => {
    const hasApiKey = await eel.has_api_key()();
    updateApiKeyUI(hasApiKey);
});

listenButton.addEventListener('click', async () => {
    let isListening = await eel.toggle_listening()();
    listenButton.textContent = isListening ? 'Stop Listening' : 'Start Listening';
    listenButton.classList.toggle('listening', isListening);

    if (isListening) {
        let spinner = document.createElement('div');
        spinner.className = 'spinner';
        listenButton.appendChild(spinner);
    }
});

clearButton.addEventListener('click', () => {
    questionsArea.innerHTML = '';
    answersArea.innerHTML = '';
});

saveApiKeyButton.addEventListener('click', async () => {
    const apiKey = apiKeyInput.value.trim();
    if (apiKey) {
        const result = await eel.save_api_key(apiKey)();
        if (result) {
            showModal('OpenAI API key saved successfully!', 'OpenAI_Logo.png');
            apiKeyInput.value = '';
            updateApiKeyUI(true);
        } else {
            showModal('Failed to save API key. Please try again.');
        }
    } else {
        showModal('Please enter a valid API key.');
    }
});

deleteApiKeyButton.addEventListener('click', async () => {
    const result = await eel.delete_api_key()();
    if (result) {
        showModal('API key removed successfully!');
        updateApiKeyUI(false);
    } else {
        showModal('Failed to delete API key. Please try again.');
    }
});

function updateApiKeyUI(hasApiKey) {
    apiKeyInput.style.display = hasApiKey ? 'none' : 'inline-block';
    saveApiKeyButton.style.display = hasApiKey ? 'none' : 'inline-block';
    deleteApiKeyButton.style.display = hasApiKey ? 'inline-block' : 'none';
    listenButton.disabled = !hasApiKey;
}

function showModal(message, logoSrc = null) {
    modalMessage.textContent = message;
    modalMessage.className = 'text-center'; // Ensure text is always centered
    if (logoSrc) {
        modalLogo.src = logoSrc;
        modalLogo.style.display = 'block';
    } else {
        modalLogo.style.display = 'none';
    }
    modal.style.display = 'block';
}

closeModal.onclick = function() {
    modal.style.display = 'none';
}

// Close the modal when clicking anywhere outside of it
window.onclick = function(event) {
    if (event.target == modal) {
        modal.style.display = 'none';
    }
}

// Called from Python via eel to append a new question and/or answer
eel.expose(update_ui);
function update_ui(question, answer) {
    if (question) {
        let p = document.createElement('p');
        p.textContent = question;
        questionsArea.appendChild(p);
        questionsArea.scrollTop = questionsArea.scrollHeight;
    }
    if (answer) {
        let answerContainer = document.createElement('div');
        answerContainer.className = 'answer-container';

        let p = document.createElement('p');
        try {
            const parsedAnswer = JSON.parse(answer);
            p.textContent = parsedAnswer.text;

            if (parsedAnswer.audio) {
                let audio = new Audio(`data:audio/mp3;base64,${parsedAnswer.audio}`);
                audio.onplay = function() {
                    eel.audio_playback_started()();
                };
                audio.onended = function() {
                    eel.audio_playback_ended()();
                };

                let muteIcon = document.createElement('span');
                muteIcon.className = 'mute-icon';
                muteIcon.innerHTML = '🔊';
                muteIcon.title = 'Mute/Unmute';
                muteIcon.onclick = function() {
                    audio.muted = !audio.muted;
                    muteIcon.innerHTML = audio.muted ? '🔇' : '🔊';
                };

                answerContainer.appendChild(muteIcon);

                audio.play().catch(e => console.error("Error playing audio:", e));
            }
        } catch (e) {
            console.error("Error parsing answer:", e);
            p.textContent = answer;
        }

        answerContainer.appendChild(p);
        answersArea.appendChild(answerContainer);
        answersArea.scrollTop = answersArea.scrollHeight;
    }
}

let ttsToggle = document.getElementById('ttsToggle');
let ttsEnabled = true; // Text-to-speech is on by default

// Update the UI to reflect the default state
ttsToggle.classList.toggle('disabled', !ttsEnabled);

ttsToggle.addEventListener('click', async () => {
    ttsEnabled = await eel.toggle_tts()();
    ttsToggle.classList.toggle('disabled', !ttsEnabled);
});
--------------------------------------------------------------------------------
/inter_ass.py:
--------------------------------------------------------------------------------
import eel
import speech_recognition as sr
from openai import OpenAI
import threading
import json
import os
import base64
import time
import re

eel.init('web')

class AudioAssistant:
    def __init__(self):
        self.setup_audio()
        self.is_listening = False
        self.client = None
        self.api_key = None
        self.tts_enabled = True  # Text-to-speech is on by default
        self.is_speaking = False
        self.audio_playing = False
        self.load_api_key()

    def setup_audio(self):
        self.recognizer = sr.Recognizer()
        self.mic = sr.Microphone()
        with self.mic as source:
            self.recognizer.adjust_for_ambient_noise(source)

    def load_api_key(self):
        if os.path.exists('config.json'):
            with open('config.json', 'r') as f:
                config = json.load(f)
            api_key = config.get('api_key')
            if api_key:  # avoid building a client from an empty config
                self.set_api_key(api_key)

    def set_api_key(self, api_key):
        self.api_key = api_key
        self.client = OpenAI(api_key=self.api_key)
        with open('config.json', 'w') as f:
            json.dump({'api_key': api_key}, f)

    def delete_api_key(self):
        self.api_key = None
        self.client = None
        if os.path.exists('config.json'):
            os.remove('config.json')

    def has_api_key(self):
        return self.api_key is not None

    def toggle_listening(self):
        if not self.client:
            return False
        self.is_listening = not self.is_listening
        if self.is_listening:
            threading.Thread(target=self.listen_and_process, daemon=True).start()
        return self.is_listening

    def listen_and_process(self):
        cooldown_time = 2  # Cooldown period in seconds
        last_speak_time = 0

        while self.is_listening:
            current_time = time.time()
            if not self.is_speaking and not self.audio_playing and (current_time - last_speak_time) > cooldown_time:
                try:
                    with self.mic as source:
                        audio = self.recognizer.listen(source, timeout=5, phrase_time_limit=5)
                    text = self.recognizer.recognize_google(audio)
                    if self.is_question(text):
                        capitalized_text = text[0].upper() + text[1:]
                        if not capitalized_text.endswith('?'):
                            capitalized_text += '?'
                        eel.update_ui(f"Q: {capitalized_text}", "")
                        self.is_speaking = True
                        response = self.get_ai_response(capitalized_text)
                        eel.update_ui("", f"{response}")
                        self.is_speaking = False
                        last_speak_time = time.time()  # Update the last speak time
                except sr.WaitTimeoutError:
                    pass
                except sr.UnknownValueError:
                    pass
                except Exception as e:
                    eel.update_ui(f"An error occurred: {str(e)}", "")
            else:
                time.sleep(0.1)  # Short sleep to prevent busy waiting

    def is_question(self, text):
        # Convert to lowercase for easier matching
        text = text.lower().strip()

        # List of question words and phrases
        question_starters = [
            "what", "why", "how", "when", "where", "who", "which",
            "can", "could", "would", "should", "is", "are", "do", "does",
            "am", "was", "were", "have", "has", "had", "will", "shall"
        ]

        # Check if the text starts with a question word
        if any(text.startswith(starter) for starter in question_starters):
            return True

        # Check for question mark at the end
        if text.endswith('?'):
            return True

        # Check for inverted word order (e.g., "Are you...?", "Can we...?")
        if re.match(r'^(are|can|could|do|does|have|has|will|shall|should|would|am|is)\s', text):
            return True

        # Check for specific phrases that indicate a question
        question_phrases = [
            "tell me about", "i'd like to know", "can you explain",
            "i was wondering", "do you know", "what about", "how about"
        ]
        if any(phrase in text for phrase in question_phrases):
            return True

        # If none of the above conditions are met, it's probably not a question
        return False

    def get_ai_response(self, question):
        try:
            response = self.client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": question}
                ]
            )
            text_response = response.choices[0].message.content.strip()

            if self.tts_enabled:
                speech_response = self.client.audio.speech.create(
                    model="tts-1",
                    voice="alloy",
                    input=text_response
                )

                audio_base64 = base64.b64encode(speech_response.content).decode('utf-8')
                return json.dumps({"text": text_response, "audio": audio_base64})

            return json.dumps({"text": text_response, "audio": None})
        except Exception as e:
            print(f"Error in get_ai_response: {str(e)}")
            return json.dumps({"text": f"Error getting AI response: {str(e)}", "audio": None})

assistant = AudioAssistant()

@eel.expose
def toggle_listening():
    return assistant.toggle_listening()

@eel.expose
def save_api_key(api_key):
    try:
        assistant.set_api_key(api_key)
        return True
    except Exception as e:
        print(f"Error saving API key: {str(e)}")
        return False

@eel.expose
def delete_api_key():
    try:
        assistant.delete_api_key()
        return True
    except Exception as e:
        print(f"Error deleting API key: {str(e)}")
        return False

@eel.expose
def has_api_key():
    return assistant.has_api_key()

@eel.expose
def toggle_tts():
    assistant.tts_enabled = not assistant.tts_enabled
    return assistant.tts_enabled

@eel.expose
def speaking_ended():
    assistant.is_speaking = False

@eel.expose
def audio_playback_started():
    assistant.audio_playing = True

@eel.expose
def audio_playback_ended():
    assistant.audio_playing = False
    assistant.is_speaking = False

eel.start('index.html', size=(960, 840))
--------------------------------------------------------------------------------
/web/styles.css:
--------------------------------------------------------------------------------
:root {
    --primary-color: #7a92a2;
    --secondary-color: #91b29f;
    --background-color: #f5f7fa;
    --text-color: #34495e;
    --card-background: #ffffff;
    --shadow-color: rgba(0, 0, 0, 0.1);
}

body {
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
    margin: 0;
    padding: 0;
    background-color: var(--background-color);
    color: var(--text-color);
    line-height: 1.6;
}

.container {
    max-width: 1000px;
    margin: 0 auto;
    padding: 12px 20px;
    display: flex;
    flex-direction: column;
    transition: all 0.5s ease-in-out;
}

h1 {
    text-align: center;
    color: var(--primary-color);
    font-size: 2.5em;
    margin-bottom: 30px;
}

button {
    display: inline-block;
    margin: 0 10px;
    padding: 12px 24px;
    font-size: 16px;
    font-weight: bold;
    cursor: pointer;
    background-color: var(--primary-color);
    color: white;
    border: none;
    border-radius: 30px;
    transition: all 0.3s ease;
    box-shadow: 0 4px 6px var(--shadow-color);
}

button:hover {
    background-color: #2980b9;
    transform: translateY(-2px);
    box-shadow: 0 6px 8px var(--shadow-color);
}

#clearButton {
    background-color: var(--secondary-color);
}

#clearButton:hover {
    background-color: #27ae60;
}

.content {
    display: flex;
    justify-content: space-between;
    margin-top: 30px;
    flex-grow: 1;
    transition: all 0.5s ease-in-out;
}

.questions, .answers {
    display: flex;
    flex-direction: column;
    flex-grow: 1;
    width: 45%;
    background-color: var(--card-background);
    border-radius: 10px;
    margin: 1vw;
    padding: 20px;
    box-shadow: 0 10px 20px var(--shadow-color);
    transition: all 0.3s ease;
}

.questions:hover, .answers:hover {
    transform: translateY(-5px);
    box-shadow: 0 15px 30px var(--shadow-color);
}

h2 {
    margin-top: 0;
    color: var(--primary-color);
    font-size: 1.5em;
    border-bottom: 2px solid var(--primary-color);
    padding-bottom: 10px;
    margin-bottom: 20px;
}

#questionsArea, #answersArea {
    flex-grow: 1;
    height: 400px;
    overflow-y: auto;
    padding: 15px;
    background-color: var(--background-color);
    border-radius: 5px;
    transition: all 0.5s ease-in-out;
}

#questionsArea p, #answersArea p {
    background-color: var(--card-background);
    padding: 10px 15px;
    border-radius: 5px;
    margin-bottom: 10px;
    box-shadow: 0 2px 4px var(--shadow-color);
    transition: all 0.2s ease;
    animation: fadeInUp 0.5s ease-out;
}

#questionsArea p:hover, #answersArea p:hover {
    transform: translateX(5px);
    box-shadow: 2px 2px 8px var(--shadow-color);
}

/* Scrollbar Styles */
::-webkit-scrollbar {
    width: 8px;
}

::-webkit-scrollbar-track {
    background: var(--background-color);
}

::-webkit-scrollbar-thumb {
    background: var(--primary-color);
    border-radius: 4px;
}

::-webkit-scrollbar-thumb:hover {
    background: #2980b9;
}

/* Responsive Design */
@media (max-width: 768px) {
    .content {
        flex-direction: column;
    }

    .questions, .answers {
        width: 100%;
        margin-bottom: 20px;
    }
}

/* Spinner and listen-button states */
.spinner {
    display: none;
    width: 20px;
    height: 20px;
    border: 3px solid transparent;
    border-top: 3px solid white;
    border-radius: 50%;
    animation: spin 1s linear infinite;
    position: absolute;
    right: 10px;
    top: 50%;
    transform: translateY(-50%);
}

@keyframes spin {
    0% { transform: translateY(-50%) rotate(0deg); }
    100% { transform: translateY(-50%) rotate(360deg); }
}

#listenButton {
    position: relative;
    padding-right: 40px; /* Make space for the spinner */
}

#listenButton.listening {
    background-color: #e74c3c; /* Red color for stop state */
}

#listenButton.listening:hover {
    background-color: #c0392b; /* Darker red for hover state */
}

#listenButton .spinner {
    display: none;
}

#listenButton.listening .spinner {
    display: block;
}

/* Logo image */
.logo {
    position: absolute;
    top: 0;
    left: 50%;
    transform: translateX(-50%);
    max-width: 100%;
    height: auto;
    width: 500px; /* Default logo size */
    transition: opacity 0.5s ease-in-out;
}

/* Responsive adjustments for the logo */
@media (max-width: 768px) {
    .logo-container {
        height: 180px; /* Medium screens */
    }
    .logo {
        width: 450px; /* Smaller size for medium screens */
    }
}

@media (max-width: 480px) {
    .logo-container {
        height: 140px; /* Small screens */
    }
    .logo {
        width: 350px; /* Even smaller size for small screens */
    }
}

/* Logo container */
.logo-container {
    position: relative;
    width: 100%;
    height: 120px; /* Adjust to match the logo's height */
    margin: 0 auto -28px;
    transition: all 0.5s ease-in-out;
    overflow: hidden;
}

.logo-hidden {
    height: 0;
    margin: 0;
    opacity: 0;
}

.logo-hidden .logo {
    opacity: 0;
}

/* Close button for the logo banner */
.close-logo {
    position: absolute;
    top: 10px;
    right: 10px;
    font-size: 24px;
    color: var(--primary-color);
    cursor: pointer;
    transition: color 0.3s ease;
    z-index: 10;
}

.close-logo:hover {
    color: #e74c3c;
}

/* Centered text helper */
.text-center {
    text-align: center;
}

/* API key input */
#apiKeyInput {
    width: 300px;
    padding: 10px;
    margin-right: 10px;
    border: 1px solid var(--primary-color);
    border-radius: 5px;
    font-size: 16px;
}

/* Button container, sized to accommodate the wide API key input */
.button-container {
    display: flex;
    flex-wrap: wrap;
    justify-content: center;
    align-items: center;
    gap: 10px;
    padding-top: 28px;
}

/* Modal */
.modal {
    display: none;
    position: fixed;
    z-index: 1000;
    left: 0;
    top: 0;
    width: 100%;
    height: 100%;
    background-color: rgba(0,0,0,0.4);
    animation: fadeIn 0.3s;
}

.modal-content {
    background-color: var(--card-background);
    margin: 15% auto;
    padding: 20px;
    border-radius: 10px;
    width: 50%;
    max-width: 500px;
    box-shadow: 0 5px 15px rgba(0,0,0,0.3);
    animation: slideIn 0.3s;
    position: relative;
}

.close {
    color: #aaa;
    float: right;
    font-size: 28px;
    font-weight: bold;
    cursor: pointer;
    transition: color 0.3s;
}

.close:hover,
.close:focus {
    color: var(--primary-color);
    text-decoration: none;
}

@keyframes fadeIn {
    from {opacity: 0;}
    to {opacity: 1;}
}

@keyframes slideIn {
    from {transform: translateY(-50px); opacity: 0;}
    to {transform: translateY(0); opacity: 1;}
}

/* Entrance animation for question and answer entries */
@keyframes fadeInUp {
    from {transform: translateY(10px); opacity: 0;}
    to {transform: translateY(0); opacity: 1;}
}

#modal-logo {
    display: block;
    margin: 0 auto 20px;
    max-width: 100px;
    height: auto;
}

/* TTS toggle */
.tts-toggle {
    cursor: pointer;
    font-size: 0.8em;
    margin-left: 10px;
    transition: opacity 0.3s ease;
}

.tts-toggle.disabled {
    opacity: 0.5;
}

/* Answer container and mute icon */
.answer-container {
    position: relative;
    margin-bottom: 10px;
    background-color: var(--card-background);
    border-radius: 5px;
    box-shadow: 0 2px 4px var(--shadow-color);
    transition: all 0.2s ease;
    animation: fadeInUp 0.5s ease-out;
}

.mute-icon {
    position: absolute;
    top: 5px;
    right: 5px;
    cursor: pointer;
    font-size: 16px;
    opacity: 0.7;
    transition: opacity 0.3s ease;
    z-index: 10;
}

.mute-icon:hover {
    opacity: 1;
}

/* Extra right padding makes room for the mute icon */
#answersArea p {
    padding: 10px 30px 10px 15px;
    margin: 0;
    background-color: var(--card-background);
    border-radius: 5px;
    box-shadow: 0 2px 4px var(--shadow-color);
    transition: all 0.2s ease;
    animation: fadeInUp 0.5s ease-out;
}

/* Apply the hover effect to the container instead of the paragraph */
.answer-container:hover {
    transform: translateX(5px);
    box-shadow: 2px 2px 8px var(--shadow-color);
}

#answersArea p:hover {
    transform: none;
    box-shadow: none;
}

--------------------------------------------------------------------------------