├── Banner.png
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Capture.png
├── Captures
│   ├── About_UI.png
│   ├── Audio_Effects.png
│   ├── Audio_Effects2.png
│   ├── Audio_Effects3.png
│   ├── Edit_Options.png
│   ├── Effects_Options.png
│   ├── File_Options.png
│   ├── General_UI.png
│   ├── Quick_Effects.png
│   ├── Shortcuts_UI.png
│   ├── View_Options.png
│   ├── View_Options2.png.png
│   └── View_Options3.png.png
├── LICENSE
├── METROMUSE_PHILOSOPHY.md
├── README.md
├── SECURITY.md
├── requirements.txt
├── resources
│   └── Download FFmpeg.txt
└── src
    ├── audio_effects.py
    ├── error_handler.py
    ├── icon.ico
    ├── icon.png
    ├── metro_muse.py
    ├── performance_monitor.py
    ├── project_manager.py
    ├── styles.qss
    ├── track_manager.py
    ├── track_renderer.py
    └── ui_manager.py
/Banner.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Banner.png
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
1 | # Changelog
2 |
3 | All notable changes to this project will be documented in this file.
4 |
5 | ## [0.12.0] - 2025-06-09
6 |
7 | ### Added
8 |
9 | - Full project save/load support with `.mmp` format
10 | - Recent project manager
11 | - Auto-save with modification tracking
12 | - Project templates and presets
13 | - Real-time performance monitor (CPU & RAM)
14 | - Quality vs. performance modes
15 | - System recommendations based on performance
16 | - Detailed error logging and automatic recovery
17 | - User-friendly error dialogs
18 | - Asynchronous audio loading
19 | - Automatic downsampling for waveform rendering
20 | - Performance-based waveform detail levels
21 | - Background resource management
22 | - Improved keyboard shortcuts and shortcut system
23 |
24 | ### Changed
25 |
26 | - Optimized multitrack mixing engine for playback
27 | - Memory-efficient and optimized waveform rendering
28 | - Editing system with better error recovery
29 | - Polished UI with project-aware window title
30 | - Improved file format handling and metadata display
31 |
32 | ### Fixed
33 |
34 | - Better performance with large audio files
35 | - Improved stability in real-time effect preview
36 | - Fixes in editing and playback control synchronization
37 | - Reduced memory usage during intensive tasks
38 | - Better handling of ffmpeg-related errors
39 |
40 | ## [0.10.0] - 2025-04-28
41 |
42 | ### Added
43 |
44 | - Multitrack support with synchronized playback and per-track controls (solo, mute, volume)
45 | - Enhanced waveform display with adaptive time grid and scrubbing
46 | - Real-time amplitude visualization
47 | - Non-destructive editing (cut/copy/paste/trim per track)
48 | - Undo/redo stack with snapshot-based restore
49 | - Modern dark theme user interface with accessible large buttons
50 | - Drag and drop audio file import
51 | - Collapsible track headers and track color customization
52 | - Effects: Gain adjustment, fade-in, fade-out with instant preview
53 | - Export to WAV, FLAC, MP3, and AAC formats
54 | - Recent files management in sidebar
55 | - Efficient playback mixing with sounddevice
56 | - Keyboard shortcuts for common editing and navigation operations
57 |
58 | ### Changed
59 |
60 | - Redesigned interface (QSS flat design, large controls, SVG icon support)
61 | - Improved error handling for missing dependencies and file IO
62 | - File browser sidebar and menu bar enhancements for track management
63 |
64 | ### Fixed
65 |
66 | - Playback synchronization for multitrack
67 | - Selection synchronization and highlighting bugs
68 | - Audio format compatibility issues
69 |
70 | ## [0.9.0] - 2025-04-22
71 |
72 | ### Added
73 |
74 | - Initial release with basic waveform display
75 | - Loading and playing single track audio files
76 | - Basic editing: cut/copy/paste/trim
77 | - Basic playback and export
78 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Contributor Covenant Code of Conduct
2 |
3 | ## Our Pledge
4 |
5 | We as members, contributors, and leaders pledge to make participation in our
6 | community a harassment-free experience for everyone, regardless of age, body
7 | size, visible or invisible disability, ethnicity, sex characteristics, gender
8 | identity and expression, level of experience, education, socio-economic status,
9 | nationality, personal appearance, race, religion, or sexual identity
10 | and orientation.
11 |
12 | We pledge to act and interact in ways that contribute to an open, welcoming,
13 | diverse, inclusive, and healthy community.
14 |
15 | ## Our Standards
16 |
17 | Examples of behavior that contributes to a positive environment for our
18 | community include:
19 |
20 | * Demonstrating empathy and kindness toward other people
21 | * Being respectful of differing opinions, viewpoints, and experiences
22 | * Giving and gracefully accepting constructive feedback
23 | * Accepting responsibility and apologizing to those affected by our mistakes,
24 | and learning from the experience
25 | * Focusing on what is best not just for us as individuals, but for the
26 | overall community
27 |
28 | Examples of unacceptable behavior include:
29 |
30 | * The use of sexualized language or imagery, and sexual attention or
31 | advances of any kind
32 | * Trolling, insulting or derogatory comments, and personal or political attacks
33 | * Public or private harassment
34 | * Publishing others' private information, such as a physical or email
35 | address, without their explicit permission
36 | * Other conduct which could reasonably be considered inappropriate in a
37 | professional setting
38 |
39 | ## Enforcement Responsibilities
40 |
41 | Community leaders are responsible for clarifying and enforcing our standards of
42 | acceptable behavior and will take appropriate and fair corrective action in
43 | response to any behavior that they deem inappropriate, threatening, offensive,
44 | or harmful.
45 |
46 | Community leaders have the right and responsibility to remove, edit, or reject
47 | comments, commits, code, wiki edits, issues, and other contributions that are
48 | not aligned to this Code of Conduct, and will communicate reasons for moderation
49 | decisions when appropriate.
50 |
51 | ## Scope
52 |
53 | This Code of Conduct applies within all community spaces, and also applies when
54 | an individual is officially representing the community in public spaces.
55 | Examples of representing our community include using an official e-mail address,
56 | posting via an official social media account, or acting as an appointed
57 | representative at an online or offline event.
58 |
59 | ## Enforcement
60 |
61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be
62 | reported to the community leaders responsible for enforcement at
63 | .
64 | All complaints will be reviewed and investigated promptly and fairly.
65 |
66 | All community leaders are obligated to respect the privacy and security of the
67 | reporter of any incident.
68 |
69 | ## Enforcement Guidelines
70 |
71 | Community leaders will follow these Community Impact Guidelines in determining
72 | the consequences for any action they deem in violation of this Code of Conduct:
73 |
74 | ### 1. Correction
75 |
76 | **Community Impact**: Use of inappropriate language or other behavior deemed
77 | unprofessional or unwelcome in the community.
78 |
79 | **Consequence**: A private, written warning from community leaders, providing
80 | clarity around the nature of the violation and an explanation of why the
81 | behavior was inappropriate. A public apology may be requested.
82 |
83 | ### 2. Warning
84 |
85 | **Community Impact**: A violation through a single incident or series
86 | of actions.
87 |
88 | **Consequence**: A warning with consequences for continued behavior. No
89 | interaction with the people involved, including unsolicited interaction with
90 | those enforcing the Code of Conduct, for a specified period of time. This
91 | includes avoiding interactions in community spaces as well as external channels
92 | like social media. Violating these terms may lead to a temporary or
93 | permanent ban.
94 |
95 | ### 3. Temporary Ban
96 |
97 | **Community Impact**: A serious violation of community standards, including
98 | sustained inappropriate behavior.
99 |
100 | **Consequence**: A temporary ban from any sort of interaction or public
101 | communication with the community for a specified period of time. No public or
102 | private interaction with the people involved, including unsolicited interaction
103 | with those enforcing the Code of Conduct, is allowed during this period.
104 | Violating these terms may lead to a permanent ban.
105 |
106 | ### 4. Permanent Ban
107 |
108 | **Community Impact**: Demonstrating a pattern of violation of community
109 | standards, including sustained inappropriate behavior, harassment of an
110 | individual, or aggression toward or disparagement of classes of individuals.
111 |
112 | **Consequence**: A permanent ban from any sort of public interaction within
113 | the community.
114 |
115 | ## Attribution
116 |
117 | This Code of Conduct is adapted from the [Contributor Covenant][homepage],
118 | version 2.0, available at
119 | https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
120 |
121 | Community Impact Guidelines were inspired by [Mozilla's code of conduct
122 | enforcement ladder](https://github.com/mozilla/diversity).
123 |
124 | [homepage]: https://www.contributor-covenant.org
125 |
126 | For answers to common questions about this code of conduct, see the FAQ at
127 | https://www.contributor-covenant.org/faq. Translations are available at
128 | https://www.contributor-covenant.org/translations.
129 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing to MetroMuse
2 |
3 | Thank you for your interest in contributing to MetroMuse! Here’s how you can get started.
4 |
5 | ## How to Contribute
6 |
7 | 1. **Reporting Issues**: If you encounter a bug, please open an issue in our GitHub repository. Be sure to provide as many details as possible to help us reproduce and resolve the issue.
8 |
9 | 2. **Feature Requests**: If you have a new idea or feature you want to suggest, please open an issue to discuss it with the maintainers before you begin working on it.
10 |
11 | 3. **Code Changes**: If you have a solution to a problem or improvement, please fork the repository and create a pull request with your changes.
12 |
13 | 4. **Clean Code and Testing**: Please ensure your code is clean, well-documented, and properly tested before submitting a pull request.
14 |
15 | ## Pull Request Process
16 |
17 | 1. Fork the repository.
18 | 2. Create a new branch for your changes.
19 | 3. Make your changes and ensure everything works as expected.
20 | 4. Submit a pull request with a clear description of the changes you made.
21 | 5. Maintainers will review your pull request and provide feedback if needed.
22 |
23 | Thank you for contributing to MetroMuse!
24 |
--------------------------------------------------------------------------------
/Capture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Capture.png
--------------------------------------------------------------------------------
/Captures/About_UI.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/About_UI.png
--------------------------------------------------------------------------------
/Captures/Audio_Effects.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/Audio_Effects.png
--------------------------------------------------------------------------------
/Captures/Audio_Effects2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/Audio_Effects2.png
--------------------------------------------------------------------------------
/Captures/Audio_Effects3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/Audio_Effects3.png
--------------------------------------------------------------------------------
/Captures/Edit_Options.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/Edit_Options.png
--------------------------------------------------------------------------------
/Captures/Effects_Options.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/Effects_Options.png
--------------------------------------------------------------------------------
/Captures/File_Options.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/File_Options.png
--------------------------------------------------------------------------------
/Captures/General_UI.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/General_UI.png
--------------------------------------------------------------------------------
/Captures/Quick_Effects.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/Quick_Effects.png
--------------------------------------------------------------------------------
/Captures/Shortcuts_UI.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/Shortcuts_UI.png
--------------------------------------------------------------------------------
/Captures/View_Options.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/View_Options.png
--------------------------------------------------------------------------------
/Captures/View_Options2.png.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/View_Options2.png.png
--------------------------------------------------------------------------------
/Captures/View_Options3.png.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/Captures/View_Options3.png.png
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2025 Iván Eduardo Chavez Ayub
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/METROMUSE_PHILOSOPHY.md:
--------------------------------------------------------------------------------
1 | ## MetroMuse Manifesto
2 |
3 | ### Vision
4 |
5 | MetroMuse was built to empower creators, musicians, and developers with an audio editing tool that blends **the visual rhythm of a metro system** with the **precision and power of a professional editor**. Our goal is to make multitrack audio editing intuitive, responsive, and creatively fluid—like a train gliding through sound.
6 |
7 | ### Mission
8 |
9 | - **Simplify multitrack editing** with a modern, clean, and user-first interface
10 | - **Deliver stable, high-performance** playback and rendering through smart resource management
11 | - **Ensure reliability**, with robust error handling, automatic recovery, and friendly diagnostics
12 | - **Support creative flow**, through project templates, auto-saving, and preset reuse
13 |
14 | ### Core Values
15 |
16 | 1. **Performance & Efficiency** – A finely tuned mixing engine and waveform rendering system keep your workflow smooth and uninterrupted
17 | 2. **Intuitive Design** – Clean UI, context-aware project titles, and powerful keyboard shortcuts enable effortless navigation
18 | 3. **Resilience** – With detailed logging, auto-recovery, and FFmpeg error handling, we minimize friction during production
19 | 4. **Transparency & Openness** – MetroMuse is open-source (MIT), welcoming contributions, ideas, and community-driven growth
20 |
21 | ### Development Principles
22 |
23 | - **True Multitrack Support** – Named, colored tracks with solo/mute toggles, real-time mixing, and asynchronous loading
24 | - **Smart Waveform Display** – Scalable zoom, adaptive time grids, auto-downsampling, and detail scaling based on system performance
25 | - **Robust Project System** – Save/load projects in `.mmp` format, access recent history, auto-save changes, and reuse custom templates
26 | - **Cross-platform & Format-ready** – Supports WAV, MP3, AAC, FLAC via FFmpeg, with Windows binaries included
27 | - **Performance-aware Features** – Live CPU/RAM monitoring, system-based quality modes, and dynamic resource handling
28 |
29 | ### Community & Future
30 |
31 | - **Open Collaboration** – Contributions are welcome: features, plugins, UX improvements, or engine tweaks
32 | - **Shared Roadmap** – Future goals include: spectral analysis, VST support, automation lanes, MIDI tools, audio recording, in-app guides, and theme customization
33 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # 
2 |
3 |
4 |
5 | ### Version 0.12.0 — *INCOMPLETE BETA*
6 |
7 | > ⚠️ **Disclaimer:** *MetroMuse is currently in beta. Some features may be incomplete, unstable, or under development.*
8 |
9 | ---
10 |
11 | ## 🎵 What is MetroMuse?
12 |
13 | **MetroMuse** is a **modern, cross-platform audio editor** featuring:
14 |
15 | * Multitrack capabilities
16 | * Enhanced waveform visualization
17 | * An intuitive, sleek interface built for creators
18 |
19 | ---
20 |
21 | ## ✨ Features Overview
22 |
23 | ### 🎚️ Multitrack Support
24 |
25 | * Solo, mute, and volume per track
26 | * Color coding & track naming
27 | * Synchronized playback
28 | * **NEW:** Asynchronous audio loading
29 | * **NEW:** Optimized waveform rendering
30 |
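Asynchronous loading means decoding happens off the UI thread, so playback and drawing stay responsive while a file is read. A minimal sketch of the pattern using a thread pool and a completion callback (hypothetical names; MetroMuse's actual loader is not shown here and may work differently):

```python
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=2)

def load_audio_async(path, decode, on_done):
    """Run `decode(path)` in a worker thread, then hand the samples
    to `on_done`. Sketch only; not MetroMuse's actual API."""
    def task():
        samples = decode(path)
        on_done(samples)  # a GUI app would marshal this back to the UI thread
        return samples
    return _executor.submit(task)

# Demo with a dummy decoder:
results = []
future = load_audio_async("song.wav", lambda p: [0.0, 0.1, -0.1], results.append)
future.result()  # blocking wait, for this demo only
```

In a Qt application the callback would emit a signal instead of appending to a list, so the result lands back on the UI thread.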
31 | ### 📊 Waveform Visualization
32 |
33 | * Zoomable, interactive display
34 | * Adaptive time grids & real-time amplitude
35 | * **NEW:** Automatic downsampling
36 | * **NEW:** Performance-based detail levels
37 |
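Automatic downsampling collapses millions of samples into one (min, max) pair per screen pixel before drawing, so zoomed-out views stay fast. A sketch of the idea with NumPy (the renderer in `track_renderer.py` may use a different scheme):

```python
import numpy as np

def downsample_minmax(samples: np.ndarray, width_px: int):
    """Collapse `samples` into per-pixel (min, max) envelopes for drawing."""
    bucket = max(1, len(samples) // width_px)
    usable = (len(samples) // bucket) * bucket  # drop the ragged tail
    chunks = samples[:usable].reshape(-1, bucket)
    return chunks.min(axis=1), chunks.max(axis=1)

# One second of 44.1 kHz audio collapsed to roughly 800 columns:
mins, maxs = downsample_minmax(np.sin(np.linspace(0, 100, 44100)), 800)
```

Drawing a vertical line from each min to each max reproduces the familiar waveform envelope at a fraction of the cost of plotting every sample.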
38 | ### ✂️ Editing Tools
39 |
40 | * Cut, copy, paste with precision
41 | * Non-destructive edits & track-specific editing
42 | * **NEW:** Enhanced keyboard shortcuts
43 | * **NEW:** Improved error recovery
44 |
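Non-destructive editing pairs naturally with the snapshot-based undo/redo stack the changelog describes. A minimal sketch of that pattern (hypothetical class; MetroMuse's internal stack may store richer state):

```python
import copy

class UndoStack:
    """Snapshot-based undo/redo: each edit pushes a deep copy of the
    prior state, and undoing moves states between two stacks."""
    def __init__(self):
        self._undo, self._redo = [], []

    def push(self, state):
        self._undo.append(copy.deepcopy(state))
        self._redo.clear()  # a new edit invalidates the redo history

    def undo(self, current):
        if not self._undo:
            return current
        self._redo.append(copy.deepcopy(current))
        return self._undo.pop()

    def redo(self, current):
        if not self._redo:
            return current
        self._undo.append(copy.deepcopy(current))
        return self._redo.pop()

stack = UndoStack()
stack.push({"clip": [1, 2, 3]})   # snapshot taken before the edit
current = {"clip": [1, 2]}        # state after deleting a sample
previous = stack.undo(current)    # restores {"clip": [1, 2, 3]}
```

Clearing the redo history on every new edit matches the convention of most editors.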
45 | ### 💾 Project System
46 |
47 | * **NEW:** `.mmp` project save/load
48 | * **NEW:** Recent projects manager
49 | * **NEW:** Auto-save & change tracking
50 | * **NEW:** Project templates/presets
51 |
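The internal `.mmp` layout is not shown in this repository, but project containers of this kind are commonly JSON documents; purely as an illustration, one could look like:

```python
import json
import time
from pathlib import Path

# Hypothetical JSON-backed project container; the real .mmp format
# handled by src/project_manager.py may differ entirely.
def save_project(path, tracks):
    data = {"app": "MetroMuse", "version": "0.12.0",
            "saved_at": time.time(), "tracks": tracks}
    Path(path).write_text(json.dumps(data, indent=2))

def load_project(path):
    return json.loads(Path(path).read_text())

save_project("demo.mmp", [{"name": "Vocals", "file": "vocals.wav", "muted": False}])
project = load_project("demo.mmp")
```

A text-based container keeps projects diffable and easy to recover, which fits the auto-save and change-tracking features listed above.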
52 | ### 🔧 Performance Monitoring
53 |
54 | * **NEW:** Real-time CPU/RAM usage
55 | * **NEW:** Quality/Performance modes
56 | * **NEW:** System optimization engine
57 |
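`psutil` (listed as an optional enhancement in `requirements.txt`) is the usual way to poll these numbers from Python; a sketch of a single reading (what `performance_monitor.py` actually tracks is not shown in this dump):

```python
import psutil

def sample_system():
    """Return one CPU/RAM reading as percentages."""
    return {
        "cpu": psutil.cpu_percent(interval=0.1),  # averaged over 100 ms
        "ram": psutil.virtual_memory().percent,
    }

reading = sample_system()
```

A monitor would call this on a timer and feed the readings into the quality/performance mode decision.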
58 | ### 🛡️ Error Handling
59 |
60 | * **NEW:** Detailed logging system
61 | * **NEW:** User-friendly error dialogs
62 | * **NEW:** Auto recovery & warning prompts
63 |
64 | ### 🎛️ Audio Effects
65 |
66 | * Volume, fade in/out, preview in real-time
67 | * Per-track effect control
68 |
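A linear fade, for instance, is just an amplitude ramp multiplied into the samples; a sketch of the idea (the actual curves in `audio_effects.py` may be shaped differently):

```python
import numpy as np

def fade_in(samples: np.ndarray, sr: int, seconds: float) -> np.ndarray:
    """Apply a linear fade-in over the first `seconds` of audio."""
    out = samples.astype(float).copy()
    n = min(len(out), int(sr * seconds))
    out[:n] *= np.linspace(0.0, 1.0, n)  # ramp from silence to full level
    return out

faded = fade_in(np.ones(44100), sr=44100, seconds=0.5)
```

A fade-out is the same ramp reversed and applied to the tail; real-time preview just runs the ramp over the currently selected region.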
69 | ### ▶️ Playback
70 |
71 | * Scrubbing & synced playback
72 | * **NEW:** Optimized multitrack engine
73 |
74 | ### 🎨 UI/UX
75 |
76 | * Dark theme, high-contrast icons (48×48 px)
77 | * **NEW:** Context-aware window title
78 | * **NEW:** Streamlined shortcuts
79 |
80 | ### 💾 File Formats
81 |
82 | * Supports WAV, MP3, AAC, FLAC
83 | * Drag-and-drop audio + metadata support
84 | * **NEW:** Better format handling
85 |
86 | ### ⚙️ Technical Highlights
87 |
88 | * Sample-accurate editing
89 | * Real-time waveform rendering
90 | * **NEW:** Memory-efficient processing
91 | * **NEW:** Background tasking
92 | * **NEW:** Smart resource management
93 |
94 | ---
95 |
96 | ## 🛠️ Development Status (v0.12.0)
97 |
98 | | Component | Status | Notes |
99 | | ------------------- | ------------- | --------------------------------------- |
100 | | Waveform Display | 🟢 Enhanced | Scrubbing, markers, optimized rendering |
101 | | Multitrack System | 🟢 Enhanced | Full controls, async loading |
102 | | Editing Tools | 🟢 Enhanced | Undo/redo, improved interaction |
103 | | Project Management | 🟢 New | `.mmp` format, autosave, templates |
104 | | Error Handling | 🟢 New | Logging, dialogs, recovery |
105 | | Performance Monitor | 🟢 New | Realtime CPU/memory usage |
106 | | Exporting | 🟡 Functional | Supports WAV, MP3, AAC, FLAC |
107 | | Playback | 🟡 Enhanced | Real-time, multitrack improvements |
108 | | UI/UX | 🟢 Enhanced | Shortcuts, responsiveness, polish |
109 |
110 | ---
111 |
112 | ## 📸 Interface Preview
113 |
114 | ### 🔹 New Icon
115 |
116 | 
117 |
118 | ### 🔹 General UI
119 |
120 | 
121 |
122 | ### 🔹 Effects Options
123 |
124 | 
125 |
126 | ### 🔹 Quick Effects Menu
127 |
128 | 
129 |
130 | ### 🔹 Audio Effects Studio
131 |
132 | 
133 | 
134 | 
135 |
136 | ### 🔹 File & Edit Menus
137 |
138 | 
139 | 
140 |
141 | ### 🔹 View Menu
142 |
143 | 
144 | 
145 | 
146 |
147 | ### 🔹 Shortcuts & About
148 |
149 | 
150 | 
151 |
152 | ---
153 |
154 | ## 📦 Dependencies
155 |
156 | ### Core Libraries
157 |
158 | * `PyQt5` (>=5.15.0)
159 | * `numpy` (>=1.21.0)
160 | * `matplotlib` (>=3.5.0)
161 | * `pydub` (>=0.25.0)
162 | * `librosa` (>=0.9.0)
163 | * `sounddevice` (>=0.4.0)
164 | * `scipy` (>=1.7.0)
165 |
166 | ### Optional Enhancements
167 |
168 | * `psutil` (>=5.8.0) — system monitoring
169 | * `PyQt5-stubs` — for development with type hinting
170 |
171 | ### External Tools
172 |
173 | * **ffmpeg** — for MP3, AAC, FLAC support
174 |
175 | * Windows: binaries included in `resources/`
176 | * Linux/macOS: install via package manager or [ffmpeg.org](https://ffmpeg.org)
177 |
178 | ---
179 |
180 | ## 🚀 Installation
181 |
182 | 1. **Clone the repository:**
183 |
184 | ```bash
185 | git clone https://github.com/Ivan-Ayub97/MetroMuse-PyAudioEditor.git
186 | cd MetroMuse-PyAudioEditor
187 | ```
188 |
189 | 2. **Install required Python packages:**
190 |
191 | ```bash
192 | pip install -r requirements.txt
193 | ```
194 |
195 | 3. **Install ffmpeg (Windows):**
196 |
197 | ```bash
198 | winget install ffmpeg
199 | ```
200 |
201 | Then, copy `ffmpeg.exe`, `ffprobe.exe`, and `ffplay.exe` into the `resources/` folder.
202 |
203 | ---
204 |
205 | ## 🎮 Usage
206 |
207 | ### Launch the App
208 |
209 | ```bash
210 | python src/metro_muse.py
211 | ```
212 |
213 | ### 🗂️ Project Shortcuts
214 |
215 | | Action | Shortcut |
216 | | ------------ | ------------ |
217 | | New Project | Ctrl+N |
218 | | Open Project | Ctrl+Shift+O |
219 | | Save Project | Ctrl+S |
220 | | Save As | Ctrl+Shift+S |
221 |
222 | ### 🎧 Audio Tasks
223 |
224 | | Action | Shortcut / Action |
225 | | ------------ | --------------------------------------- |
226 | | Import Audio | Ctrl+O / Drag-and-drop / "Import Audio" |
227 | | Export Audio | Ctrl+E |
228 | | Add Track | "+ Add Track" |
229 | | Delete Track | Click "✕" in header |
230 |
231 | ### ⏯ Playback Controls
232 |
233 | | Action | Shortcut |
234 | | ------------ | --------------------- |
235 | | Play/Pause | Spacebar |
236 | | Stop | Esc |
237 | | Rewind | Home |
238 | | Fast Forward | End |
239 | | Scrub | Click + Drag Waveform |
240 |
241 | ### ✂️ Edit Commands
242 |
243 | | Action | Shortcut |
244 | | ------ | -------- |
245 | | Cut | Ctrl+X |
246 | | Copy | Ctrl+C |
247 | | Paste | Ctrl+V |
248 | | Undo | Ctrl+Z |
249 | | Redo | Ctrl+Y |
250 |
251 | ### 🧭 Navigation
252 |
253 | | Action | Shortcut |
254 | | --------- | ------------------- |
255 | | Zoom In | Ctrl++ / Wheel Up |
256 | | Zoom Out | Ctrl+- / Wheel Down |
257 | | Pan Left | ← Arrow |
258 | | Pan Right | → Arrow |
259 |
260 | ---
261 |
262 | ## 🔥 Recent Enhancements (v0.12.0)
263 |
264 | * ✅ `.mmp` project format with full save/load
265 | * ✅ Auto-save with tracking
266 | * ✅ Detailed error logging
267 | * ✅ Real-time performance monitor
268 | * ✅ Async audio file handling
269 | * ✅ Memory-optimized waveform renderer
270 | * ✅ Shortcut improvements
271 |
272 | ---
273 |
274 | ## 🚧 Upcoming Features
275 |
276 | * Spectrum analyzer
277 | * VST plugin support
278 | * Track automation
279 | * MIDI input
280 | * Recording interface
281 | * Effect chain manager
282 | * Export profiles/settings
283 | * In-app guides/tutorials
284 | * Full theme customization
285 |
286 | ---
287 |
288 | ## ⚠️ Known Issues
289 |
290 | * Exporting fails if `ffmpeg` isn’t properly set up
291 | * Echo/reverb effect modules still in progress
292 | * No VST support yet
293 | * Performance dips with large files (>500MB)
294 | * Preview lag possible on low-spec hardware
295 |
296 | ---
297 |
298 | ## 💻 System Requirements
299 |
300 | * **Python**: 3.7+
301 | * **FFmpeg**: Installed or placed in `resources/`
302 | * See [Dependencies](#-dependencies) section above
303 |
304 | ---
305 |
306 | ## 🗂️ Project Structure
307 |
308 |
309 |
310 | ```
311 | MetroMuse/
312 | ├── Captures/                  # Screenshots of the interface
313 | │   └── ...
314 | │
315 | ├── src/                       # Main source code
316 | │   ├── metro_muse.py          # Main entry point
317 | │   ├── audio_effects.py       # Audio processing effects
318 | │   ├── error_handler.py       # Error handling utilities
319 | │   ├── performance_monitor.py # Performance tracking
320 | │   ├── project_manager.py     # Project loading/saving logic
321 | │   ├── track_manager.py       # Handles audio tracks
322 | │   ├── track_renderer.py      # Track waveform rendering
323 | │   ├── ui_manager.py          # GUI management
324 | │   ├── styles.qss             # Qt Style Sheet
325 | │   ├── icon.png               # App icon (PNG)
326 | │   └── icon.ico               # App icon (ICO)
327 | │
328 | ├── resources/                 # Bundled third-party binaries
329 | │   ├── ffmpeg.exe
330 | │   ├── ffplay.exe
331 | │   └── ffprobe.exe
332 | │
333 | ├── requirements.txt           # Python dependencies
334 | ├── README.md                  # Project overview
335 | ├── CHANGELOG.md               # Version history
336 | ├── LICENSE                    # License information
337 | ├── CODE_OF_CONDUCT.md         # Contributor behavior guidelines
338 | ├── CONTRIBUTING.md            # Guidelines for contributing
339 | └── SECURITY.md                # Security policies and contact
340 |
341 | ```
342 |
343 | ---
344 |
345 | ## 🤝 Contributions
346 |
347 | We welcome your help to improve MetroMuse!
348 |
349 | 1. Fork the repo
350 | 2. Create a new branch for your feature or fix
351 | 3. Submit a **pull request** with a clear description
352 |
353 | 💬 Bug reports, ideas, or questions?
354 | 📧 Contact: [negroayub97@gmail.com](mailto:negroayub97@gmail.com)
355 |
356 | ---
357 |
358 | ## 👤 Author
359 |
360 | **Iván Eduardo Chavez Ayub**
361 | 🔗 [GitHub](https://github.com/Ivan-Ayub97)
362 | 📧 [negroayub97@gmail.com](mailto:negroayub97@gmail.com)
363 | 🛠️ Python, PyQt5, pydub, librosa
364 |
365 | ---
366 |
367 | ## 🌟 Why MetroMuse?
368 |
369 | Because sometimes you just need a **simple, powerful editor that works**.
370 | **MetroMuse** is built with **focus, clarity, and creativity in mind** — open-source, evolving, and creator-driven.
371 |
372 |
--------------------------------------------------------------------------------
/SECURITY.md:
--------------------------------------------------------------------------------
1 | # Security Policy
2 |
3 | ## Supported Versions
4 |
5 | We release regular updates to ensure the security of the project. Below is a list of supported versions:
6 |
7 | | Version | Supported |
8 | | ------- | ------------------ |
9 | | 0.12.x | :white_check_mark: |
10 | | 0.10.x | :white_check_mark: |
11 |
12 | **Note**: If you're using an unsupported version, we recommend updating to the latest stable version.
13 |
14 | ## Reporting a Vulnerability
15 |
16 | If you discover a security vulnerability within this project, please follow these steps:
17 |
18 | 1. **Do not open an issue** in the public repository. We ask that you please **responsibly disclose** any security vulnerability by reporting it to us privately.
19 |
20 | 2. **Email us at [negroayub97@gmail.com](mailto:negroayub97@gmail.com)** to report the issue. Provide as much information as possible, including steps to reproduce, potential impacts, and any proof-of-concept code if available.
21 |
22 | 3. We will review the report and work with you to resolve the issue as soon as possible.
23 |
24 | ## Security Updates
25 |
26 | We strive to provide timely security updates to this project. Once a vulnerability has been resolved, we will issue a new release, and we will publish a **Changelog** that highlights the security updates.
27 |
28 | ## Acknowledgments
29 |
30 | We thank the community for their contributions to improving the security of the project. We are committed to making MetroMuse a secure platform, and we appreciate the support of everyone who helps us maintain its security.
31 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | PyQt5>=5.15.0
2 | numpy>=1.21.0
3 | matplotlib>=3.5.0
4 | pydub>=0.25.0
5 | librosa>=0.9.0
6 | sounddevice>=0.4.0
7 | scipy>=1.7.0
8 | PyQt5-stubs>=5.15.0
9 | psutil>=5.8.0
10 | # ffmpeg (required for mp3, aac, flac support in pydub - install from https://ffmpeg.org/download.html)
11 | # For Windows users: ffmpeg binaries are included in resources/ folder
12 |
--------------------------------------------------------------------------------
/resources/Download FFmpeg.txt:
--------------------------------------------------------------------------------
1 | Download the ZIP file containing the FFmpeg binaries (ffmpeg.exe, ffplay.exe, and ffprobe.exe),
2 | extract it, and place the .exe files in this directory.
3 | You can get FFmpeg 7.1.1 from the official builds page:
4 | 🔗 https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-essentials.zip
5 | or from my Google Drive:
6 | 🔗 https://drive.google.com/file/d/1hduBRKnJnaXdaCvGGt2bQUE2w9YGOK4l/view?usp=sharing
7 |
--------------------------------------------------------------------------------
/src/audio_effects.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import scipy.signal as signal
3 | from scipy.interpolate import interp1d
4 | from typing import Tuple, Optional, Dict, Any
5 | from PyQt5.QtCore import QObject, pyqtSignal, QThread, QTimer
6 | from PyQt5.QtWidgets import (
7 | QDialog, QVBoxLayout, QHBoxLayout, QLabel, QSlider, QPushButton,
8 | QGroupBox, QGridLayout, QComboBox, QSpinBox, QDoubleSpinBox,
9 | QCheckBox, QProgressBar, QTabWidget, QWidget, QFrame, QDialogButtonBox
10 | )
11 | from PyQt5.QtGui import QFont, QPalette, QColor
12 | from PyQt5.QtCore import Qt
13 | from error_handler import get_error_handler
14 |
15 |
16 | class AudioEffectProcessor:
17 | """
18 |     Audio effects processor: reverb, echo, chorus, parametric EQ, and compression.
19 | """
20 |
21 | @staticmethod
22 | def apply_reverb(samples: np.ndarray, sr: int, room_size: float = 0.5,
23 | damping: float = 0.5, wet_level: float = 0.3) -> np.ndarray:
24 | """
25 | Apply reverb effect using Schroeder reverb algorithm
26 |
27 | Args:
28 | samples: Audio samples
29 | sr: Sample rate
30 | room_size: Room size (0.0 - 1.0)
31 | damping: High frequency damping (0.0 - 1.0)
32 | wet_level: Wet/dry mix (0.0 - 1.0)
33 | """
34 | try:
35 | # Ensure samples are 2D
36 | if samples.ndim == 1:
37 | samples = samples[np.newaxis, :]
38 |
39 | channels, length = samples.shape
40 | output = np.zeros_like(samples)
41 |
42 | # Comb filter delays (in samples)
43 | comb_delays = [int(sr * delay) for delay in [0.0297, 0.0371, 0.0411, 0.0437]]
44 | comb_gains = [0.742, 0.733, 0.715, 0.697]
45 |
46 | # Allpass filter delays
47 | allpass_delays = [int(sr * delay) for delay in [0.005, 0.0168, 0.0298]]
48 | allpass_gains = [0.7, 0.7, 0.7]
49 |
50 | # Scale delays by room size
51 | comb_delays = [int(delay * (0.5 + room_size * 0.5)) for delay in comb_delays]
52 | allpass_delays = [int(delay * (0.5 + room_size * 0.5)) for delay in allpass_delays]
53 |
54 | for ch in range(channels):
55 | # Initialize delay lines
56 | comb_buffers = [np.zeros(delay) for delay in comb_delays]
57 | allpass_buffers = [np.zeros(delay) for delay in allpass_delays]
58 | comb_indices = [0] * len(comb_delays)
59 | allpass_indices = [0] * len(allpass_delays)
60 |
61 | reverb_signal = np.zeros(length)
62 |
63 | for i in range(length):
64 | # Sum comb filter outputs
65 | comb_sum = 0
66 | for j, (buffer, delay, gain) in enumerate(zip(comb_buffers, comb_delays, comb_gains)):
67 | if delay > 0:
68 | delayed_sample = buffer[comb_indices[j]]
69 | feedback = delayed_sample * gain * (1 - damping)
70 | buffer[comb_indices[j]] = samples[ch, i] + feedback
71 | comb_sum += delayed_sample
72 | comb_indices[j] = (comb_indices[j] + 1) % delay
73 |
74 | # Apply allpass filters
75 | allpass_out = comb_sum
76 | for j, (buffer, delay, gain) in enumerate(zip(allpass_buffers, allpass_delays, allpass_gains)):
77 | if delay > 0:
78 | delayed_sample = buffer[allpass_indices[j]]
79 | buffer[allpass_indices[j]] = allpass_out + delayed_sample * gain
80 | allpass_out = delayed_sample - allpass_out * gain
81 | allpass_indices[j] = (allpass_indices[j] + 1) % delay
82 |
83 | reverb_signal[i] = allpass_out
84 |
85 | # Mix wet and dry signals
86 | output[ch] = samples[ch] * (1 - wet_level) + reverb_signal * wet_level
87 |
88 |             return output if output.shape[0] > 1 else output[0]  # unwrap mono back to 1-D
89 |
90 | except Exception as e:
91 | get_error_handler().log_error(f"Error applying reverb: {str(e)}")
92 | return samples
93 |
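The per-sample comb-filter loop above can also be expressed in vectorized form. Below is a minimal sketch (the helper `comb_feedback` is illustrative, not part of MetroMuse) of the feedback-comb recurrence y[n] = x[n] + g·y[n−D] that each Schroeder comb stage implements, with the damping term omitted for brevity, using `scipy.signal.lfilter`:

```python
import numpy as np
from scipy import signal

def comb_feedback(x, delay, gain):
    """Feedback comb y[n] = x[n] + gain * y[n - delay] (one Schroeder stage,
    damping omitted), vectorized as an IIR difference equation."""
    a = np.zeros(delay + 1)
    a[0] = 1.0
    a[delay] = -gain
    return signal.lfilter([1.0], a, x)

x = np.zeros(8)
x[0] = 1.0                      # unit impulse
y = comb_feedback(x, delay=3, gain=0.5)
# impulse response: 1.0 at n=0, 0.5 at n=3, 0.25 at n=6
```

Because `lfilter` runs the recurrence in compiled code, this form is far faster than the Python-level loop; the loop above trades that speed for an explicit view of the delay-line state.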
94 | @staticmethod
95 | def apply_echo(samples: np.ndarray, sr: int, delay_ms: float = 300,
96 | feedback: float = 0.3, wet_level: float = 0.5) -> np.ndarray:
97 | """
98 | Apply echo effect
99 |
100 | Args:
101 | samples: Audio samples
102 | sr: Sample rate
103 | delay_ms: Echo delay in milliseconds
104 | feedback: Feedback amount (0.0 - 0.9)
105 | wet_level: Wet/dry mix (0.0 - 1.0)
106 | """
107 | try:
108 | if samples.ndim == 1:
109 | samples = samples[np.newaxis, :]
110 |
111 | channels, length = samples.shape
112 | delay_samples = int(sr * delay_ms / 1000)
113 |
114 | if delay_samples <= 0 or delay_samples >= length:
115 |                 return samples if samples.shape[0] > 1 else samples[0]  # unwrap mono back to 1-D
116 |
117 | output = np.zeros_like(samples)
118 |
119 | for ch in range(channels):
120 | delay_buffer = np.zeros(delay_samples)
121 | delay_index = 0
122 |
123 | for i in range(length):
124 | # Get delayed sample
125 | delayed_sample = delay_buffer[delay_index]
126 |
127 | # Calculate output with feedback
128 | echo_sample = samples[ch, i] + delayed_sample * feedback
129 |
130 | # Update delay buffer
131 | delay_buffer[delay_index] = echo_sample
132 | delay_index = (delay_index + 1) % delay_samples
133 |
134 | # Mix wet and dry
135 | output[ch, i] = samples[ch, i] * (1 - wet_level) + echo_sample * wet_level
136 |
137 |             return output if output.shape[0] > 1 else output[0]  # unwrap mono back to 1-D
138 |
139 | except Exception as e:
140 | get_error_handler().log_error(f"Error applying echo: {str(e)}")
141 | return samples
142 |
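The echo loop above realizes the recurrence echo[n] = x[n] + feedback·echo[n−D], mixed with the dry signal. A minimal mono sketch of the same recurrence (the helper `simple_echo` is illustrative, not part of MetroMuse):

```python
import numpy as np

def simple_echo(x, delay, feedback, wet):
    """Single-tap feedback echo on a 1-D mono buffer."""
    y = np.copy(x)
    for n in range(delay, len(x)):
        y[n] += feedback * y[n - delay]  # recirculate through the delay line
    return x * (1 - wet) + y * wet

impulse = np.zeros(7)
impulse[0] = 1.0
echoed = simple_echo(impulse, delay=2, feedback=0.5, wet=1.0)
# repeats at n = 2, 4, 6, each scaled by the feedback factor
```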
143 | @staticmethod
144 | def apply_chorus(samples: np.ndarray, sr: int, rate: float = 1.5,
145 | depth: float = 0.02, voices: int = 3) -> np.ndarray:
146 | """
147 | Apply chorus effect
148 |
149 | Args:
150 | samples: Audio samples
151 | sr: Sample rate
152 | rate: LFO rate in Hz
153 | depth: Modulation depth in seconds
154 | voices: Number of chorus voices
155 | """
156 | try:
157 | if samples.ndim == 1:
158 | samples = samples[np.newaxis, :]
159 |
160 | channels, length = samples.shape
161 | output = np.copy(samples)
162 |
163 | max_delay = int(sr * depth * 2)
164 |
165 | for ch in range(channels):
166 | for voice in range(voices):
167 | # Phase offset for each voice
168 | phase_offset = (2 * np.pi * voice) / voices
169 |
170 | # Create delay line
171 | delay_buffer = np.zeros(max_delay)
172 |
173 | for i in range(length):
174 | # Calculate LFO value
175 | lfo_phase = 2 * np.pi * rate * i / sr + phase_offset
176 | lfo_value = np.sin(lfo_phase)
177 |
178 | # Calculate delay in samples
179 | delay_time = depth * sr * (1 + lfo_value) / 2
180 | delay_samples = int(delay_time)
181 |
182 |                     if delay_samples < max_delay and i >= delay_samples:
183 |                         # Linear interpolation for fractional delay, reading
184 |                         # relative to the write position in the ring buffer
185 |                         frac = delay_time - delay_samples
186 |                         delayed_sample = (delay_buffer[(i - delay_samples) % max_delay] * (1 - frac) +
187 |                                           delay_buffer[(i - delay_samples - 1) % max_delay] * frac)
188 |
189 |                         output[ch, i] += delayed_sample * 0.3 / voices
190 |
191 |                     # Write the current input into the ring buffer every sample
192 |                     delay_buffer[i % max_delay] = samples[ch, i]
193 |
194 |             return output if output.shape[0] > 1 else output[0]  # unwrap mono back to 1-D
195 |
196 | except Exception as e:
197 | get_error_handler().log_error(f"Error applying chorus: {str(e)}")
198 | return samples
199 |
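The delay-time modulation in the chorus above follows a simple sinusoidal LFO law, sweeping each voice's delay between 0 and `depth * sr` samples. A numeric sketch using the function's default parameters (values here are illustrative):

```python
import numpy as np

sr, rate, depth = 44100, 1.5, 0.02        # defaults from apply_chorus
n = np.arange(sr)                          # one second of sample indices
lfo = np.sin(2 * np.pi * rate * n / sr)    # voice 0 (no phase offset)
delay_samples = depth * sr * (1 + lfo) / 2 # sweeps within [0, depth*sr] = [0, 882]
```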
200 | @staticmethod
201 | def apply_parametric_eq(samples: np.ndarray, sr: int,
202 | low_gain: float = 0, mid_gain: float = 0, high_gain: float = 0,
203 | low_freq: float = 200, mid_freq: float = 1000, high_freq: float = 5000,
204 | q_factor: float = 1.0) -> np.ndarray:
205 | """
206 | Apply 3-band parametric equalizer
207 |
208 | Args:
209 | samples: Audio samples
210 | sr: Sample rate
211 | low_gain, mid_gain, high_gain: Gain in dB for each band
212 | low_freq, mid_freq, high_freq: Center frequencies for each band
213 | q_factor: Q factor for mid band
214 | """
215 | try:
216 | if samples.ndim == 1:
217 | samples = samples[np.newaxis, :]
218 |
219 | channels, length = samples.shape
220 | output = np.copy(samples)
221 |
222 | # Design filters for each band
223 | nyquist = sr / 2
224 |
225 | for ch in range(channels):
226 | signal_data = samples[ch]
227 |
228 | # Low shelf filter
229 | if abs(low_gain) > 0.1:
230 | low_sos = signal.iirfilter(2, low_freq/nyquist, btype='lowpass',
231 | ftype='butter', output='sos')
232 | low_filtered = signal.sosfilt(low_sos, signal_data)
233 | gain_linear = 10 ** (low_gain / 20)
234 |                     signal_data = signal_data + low_filtered * (gain_linear - 1)  # boost/cut only the low band
235 |
236 | # Mid peaking filter
237 | if abs(mid_gain) > 0.1:
238 | # Create peaking filter
239 | w0 = 2 * np.pi * mid_freq / sr
240 | alpha = np.sin(w0) / (2 * q_factor)
241 | A = 10 ** (mid_gain / 40)
242 |
243 | # Peaking EQ coefficients
244 | b0 = 1 + alpha * A
245 | b1 = -2 * np.cos(w0)
246 | b2 = 1 - alpha * A
247 | a0 = 1 + alpha / A
248 | a1 = -2 * np.cos(w0)
249 | a2 = 1 - alpha / A
250 |
251 | # Normalize
252 | b = [b0/a0, b1/a0, b2/a0]
253 | a = [1, a1/a0, a2/a0]
254 |
255 | signal_data = signal.lfilter(b, a, signal_data)
256 |
257 | # High shelf filter
258 | if abs(high_gain) > 0.1:
259 | high_sos = signal.iirfilter(2, high_freq/nyquist, btype='highpass',
260 | ftype='butter', output='sos')
261 | high_filtered = signal.sosfilt(high_sos, signal_data)
262 | gain_linear = 10 ** (high_gain / 20)
263 |                     signal_data = signal_data + high_filtered * (gain_linear - 1)  # boost/cut only the high band
264 |
265 | output[ch] = signal_data
266 |
267 |             return output if output.shape[0] > 1 else output[0]  # unwrap mono back to 1-D
268 |
269 | except Exception as e:
270 | get_error_handler().log_error(f"Error applying EQ: {str(e)}")
271 | return samples
272 |
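The mid-band coefficients above follow the standard peaking-EQ biquad form, in which the magnitude response at the centre frequency equals the requested gain exactly. A standalone check with illustrative values:

```python
import numpy as np
from scipy import signal

sr, f0, gain_db, q = 44100, 1000.0, 6.0, 1.0
w0 = 2 * np.pi * f0 / sr
alpha = np.sin(w0) / (2 * q)
A = 10 ** (gain_db / 40)

# Peaking-EQ biquad (same form as the mid band above)
b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
b, a = b / a[0], a / a[0]

# Evaluate the magnitude response at the centre frequency (rad/sample)
_, h = signal.freqz(b, a, worN=[w0])
gain_at_f0 = 20 * np.log10(abs(h[0]))   # should equal gain_db
```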
273 | @staticmethod
274 | def apply_compressor(samples: np.ndarray, sr: int, threshold: float = -12,
275 | ratio: float = 4, attack_ms: float = 5, release_ms: float = 50) -> np.ndarray:
276 | """
277 | Apply dynamic range compressor
278 |
279 | Args:
280 | samples: Audio samples
281 | sr: Sample rate
282 | threshold: Threshold in dB
283 | ratio: Compression ratio
284 | attack_ms: Attack time in milliseconds
285 | release_ms: Release time in milliseconds
286 | """
287 | try:
288 | if samples.ndim == 1:
289 | samples = samples[np.newaxis, :]
290 |
291 | channels, length = samples.shape
292 | output = np.zeros_like(samples)
293 |
294 | # Convert times to coefficients
295 | attack_coef = np.exp(-1 / (sr * attack_ms / 1000))
296 | release_coef = np.exp(-1 / (sr * release_ms / 1000))
297 | threshold_linear = 10 ** (threshold / 20)
298 |
299 | for ch in range(channels):
300 | envelope = 0
301 |
302 | for i in range(length):
303 | # Calculate envelope
304 | input_level = abs(samples[ch, i])
305 | if input_level > envelope:
306 | envelope = input_level + (envelope - input_level) * attack_coef
307 | else:
308 | envelope = input_level + (envelope - input_level) * release_coef
309 |
310 | # Calculate gain reduction
311 | if envelope > threshold_linear:
312 | gain_reduction = threshold_linear + (envelope - threshold_linear) / ratio
313 | gain_reduction = gain_reduction / envelope if envelope > 0 else 1
314 | else:
315 | gain_reduction = 1
316 |
317 | # Apply compression
318 | output[ch, i] = samples[ch, i] * gain_reduction
319 |
320 |             return output if output.shape[0] > 1 else output[0]  # unwrap mono back to 1-D
321 |
322 | except Exception as e:
323 | get_error_handler().log_error(f"Error applying compressor: {str(e)}")
324 | return samples
325 |
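Above threshold, the compressor maps the envelope along a static curve in the linear domain: the output level is `threshold + (envelope - threshold) / ratio`. A worked example with the function's default settings (values illustrative):

```python
threshold_db, ratio = -12.0, 4.0
t = 10 ** (threshold_db / 20)      # threshold as linear amplitude, ~0.251
env = 0.5                          # detected envelope, above threshold
out = t + (env - t) / ratio        # compressed output level, ~0.313
gain = out / env                   # gain applied to the sample, ~0.627
```

Below threshold the gain is simply 1, so quiet material passes unchanged while loud material is pulled toward the threshold.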
326 |
327 | class ModernEffectsDialog(QDialog):
328 | """
329 | Modern, user-friendly effects dialog with real-time preview
330 | """
331 |
332 | def __init__(self, parent=None, samples=None, sr=None):
333 | super().__init__(parent)
334 | self.samples = samples
335 | self.sr = sr
336 | self.preview_samples = None
337 | self.processor = AudioEffectProcessor()
338 |
339 | self.setWindowTitle("Audio Effects - MetroMuse")
340 | self.setModal(True)
341 | self.resize(600, 500)
342 |
343 | # Apply modern dark theme
344 | self.setStyleSheet(self._get_modern_stylesheet())
345 |
346 | self.setup_ui()
347 | self.connect_signals()
348 |
349 | def _get_modern_stylesheet(self):
350 | return """
351 | QDialog {
352 | background-color: #1e1e1e;
353 | color: #ffffff;
354 | font-family: 'Segoe UI', Arial, sans-serif;
355 | }
356 |
357 | QTabWidget::pane {
358 | border: 1px solid #3d3d3d;
359 | border-radius: 8px;
360 | background-color: #2d2d2d;
361 | }
362 |
363 | QTabBar::tab {
364 | background-color: #3d3d3d;
365 | color: #ffffff;
366 | padding: 10px 20px;
367 | margin-right: 2px;
368 | border-top-left-radius: 8px;
369 | border-top-right-radius: 8px;
370 | min-width: 100px;
371 | }
372 |
373 | QTabBar::tab:selected {
374 | background-color: #0078d4;
375 | color: #ffffff;
376 | }
377 |
378 | QTabBar::tab:hover:!selected {
379 | background-color: #4d4d4d;
380 | }
381 |
382 | QGroupBox {
383 | font-weight: bold;
384 | border: 2px solid #3d3d3d;
385 | border-radius: 8px;
386 | margin-top: 1ex;
387 | padding-top: 15px;
388 | background-color: #2d2d2d;
389 | }
390 |
391 | QGroupBox::title {
392 | subcontrol-origin: margin;
393 | left: 10px;
394 | padding: 0 10px 0 10px;
395 | color: #0078d4;
396 | }
397 |
398 | QSlider::groove:horizontal {
399 | border: 1px solid #3d3d3d;
400 | height: 8px;
401 | background: #1e1e1e;
402 | border-radius: 4px;
403 | }
404 |
405 | QSlider::handle:horizontal {
406 | background: qlineargradient(x1:0, y1:0, x2:1, y2:1, stop:0 #0078d4, stop:1 #106ebe);
407 | border: 1px solid #0078d4;
408 | width: 18px;
409 | height: 18px;
410 | margin: -5px 0;
411 | border-radius: 9px;
412 | }
413 |
414 | QSlider::handle:horizontal:hover {
415 | background: qlineargradient(x1:0, y1:0, x2:1, y2:1, stop:0 #1084d8, stop:1 #1378c8);
416 | }
417 |
418 | QSlider::sub-page:horizontal {
419 | background: qlineargradient(x1:0, y1:0, x2:1, y2:1, stop:0 #0078d4, stop:1 #106ebe);
420 | border-radius: 4px;
421 | }
422 |
423 | QPushButton {
424 | background-color: #0078d4;
425 | color: white;
426 | border: none;
427 | padding: 12px 24px;
428 | border-radius: 6px;
429 | font-weight: bold;
430 | min-width: 100px;
431 | }
432 |
433 | QPushButton:hover {
434 | background-color: #106ebe;
435 | }
436 |
437 | QPushButton:pressed {
438 | background-color: #005a9e;
439 | }
440 |
441 | QPushButton:disabled {
442 | background-color: #3d3d3d;
443 | color: #8d8d8d;
444 | }
445 |
446 | QLabel {
447 | color: #ffffff;
448 | font-size: 11px;
449 | }
450 |
451 | QCheckBox {
452 | color: #ffffff;
453 | spacing: 8px;
454 | }
455 |
456 | QCheckBox::indicator {
457 | width: 18px;
458 | height: 18px;
459 | border-radius: 3px;
460 | border: 2px solid #3d3d3d;
461 | background-color: #1e1e1e;
462 | }
463 |
464 | QCheckBox::indicator:checked {
465 | background-color: #0078d4;
466 | border-color: #0078d4;
467 | image: url(data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMTAiIGhlaWdodD0iMTAiIHZpZXdCb3g9IjAgMCAxMCAxMCIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KPHBhdGggZD0iTTEuNSA1TDQgNy41TDguNSAyLjUiIHN0cm9rZT0id2hpdGUiIHN0cm9rZS13aWR0aD0iMiIgc3Ryb2tlLWxpbmVjYXA9InJvdW5kIiBzdHJva2UtbGluZWpvaW49InJvdW5kIi8+Cjwvc3ZnPgo=);
468 | }
469 |
470 | QProgressBar {
471 | border: 2px solid #3d3d3d;
472 | border-radius: 5px;
473 | text-align: center;
474 | background-color: #1e1e1e;
475 | }
476 |
477 | QProgressBar::chunk {
478 | background-color: qlineargradient(x1:0, y1:0, x2:1, y2:1, stop:0 #0078d4, stop:1 #106ebe);
479 | border-radius: 3px;
480 | }
481 | """
482 |
483 | def setup_ui(self):
484 | layout = QVBoxLayout(self)
485 | layout.setSpacing(15)
486 | layout.setContentsMargins(20, 20, 20, 20)
487 |
488 | # Title
489 | title = QLabel("Audio Effects Studio")
490 | title.setFont(QFont("Segoe UI", 16, QFont.Bold))
491 | title.setAlignment(Qt.AlignCenter)
492 | title.setStyleSheet("color: #0078d4; margin-bottom: 10px;")
493 | layout.addWidget(title)
494 |
495 | # Tab widget for different effect categories
496 | self.tab_widget = QTabWidget()
497 |
498 | # Time-based effects tab
499 | time_tab = self.create_time_effects_tab()
500 | self.tab_widget.addTab(time_tab, "🎵 Time Effects")
501 |
502 | # Frequency-based effects tab
503 | freq_tab = self.create_frequency_effects_tab()
504 | self.tab_widget.addTab(freq_tab, "🎛️ EQ & Filters")
505 |
506 | # Dynamics effects tab
507 | dynamics_tab = self.create_dynamics_effects_tab()
508 | self.tab_widget.addTab(dynamics_tab, "📊 Dynamics")
509 |
510 | layout.addWidget(self.tab_widget)
511 |
512 | # Preview section
513 | preview_group = QGroupBox("Real-time Preview")
514 | preview_layout = QHBoxLayout(preview_group)
515 |
516 | self.preview_button = QPushButton("🎧 Preview")
517 | self.preview_button.setEnabled(self.samples is not None)
518 |
519 | self.stop_preview_button = QPushButton("⏹ Stop")
520 | self.stop_preview_button.setEnabled(False)
521 |
522 | self.progress_bar = QProgressBar()
523 | self.progress_bar.setVisible(False)
524 |
525 | preview_layout.addWidget(self.preview_button)
526 | preview_layout.addWidget(self.stop_preview_button)
527 | preview_layout.addWidget(self.progress_bar)
528 | preview_layout.addStretch()
529 |
530 | layout.addWidget(preview_group)
531 |
532 | # Button box
533 | button_box = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel | QDialogButtonBox.Reset)
534 | button_box.accepted.connect(self.accept)
535 | button_box.rejected.connect(self.reject)
536 | button_box.button(QDialogButtonBox.Reset).clicked.connect(self.reset_all)
537 |
538 | layout.addWidget(button_box)
539 |
540 | def create_time_effects_tab(self):
541 | widget = QWidget()
542 | layout = QVBoxLayout(widget)
543 |
544 | # Reverb section
545 | reverb_group = QGroupBox("🏛️ Reverb")
546 | reverb_layout = QGridLayout(reverb_group)
547 |
548 | self.reverb_enabled = QCheckBox("Enable Reverb")
549 | reverb_layout.addWidget(self.reverb_enabled, 0, 0, 1, 2)
550 |
551 | # Room size
552 | reverb_layout.addWidget(QLabel("Room Size:"), 1, 0)
553 | self.room_size_slider = QSlider(Qt.Horizontal)
554 | self.room_size_slider.setRange(0, 100)
555 | self.room_size_slider.setValue(50)
556 | self.room_size_label = QLabel("50%")
557 | reverb_layout.addWidget(self.room_size_slider, 1, 1)
558 | reverb_layout.addWidget(self.room_size_label, 1, 2)
559 |
560 | # Damping
561 | reverb_layout.addWidget(QLabel("Damping:"), 2, 0)
562 | self.damping_slider = QSlider(Qt.Horizontal)
563 | self.damping_slider.setRange(0, 100)
564 | self.damping_slider.setValue(50)
565 | self.damping_label = QLabel("50%")
566 | reverb_layout.addWidget(self.damping_slider, 2, 1)
567 | reverb_layout.addWidget(self.damping_label, 2, 2)
568 |
569 | # Wet level
570 | reverb_layout.addWidget(QLabel("Wet Level:"), 3, 0)
571 | self.reverb_wet_slider = QSlider(Qt.Horizontal)
572 | self.reverb_wet_slider.setRange(0, 100)
573 | self.reverb_wet_slider.setValue(30)
574 | self.reverb_wet_label = QLabel("30%")
575 | reverb_layout.addWidget(self.reverb_wet_slider, 3, 1)
576 | reverb_layout.addWidget(self.reverb_wet_label, 3, 2)
577 |
578 | layout.addWidget(reverb_group)
579 |
580 | # Echo section
581 | echo_group = QGroupBox("🔊 Echo")
582 | echo_layout = QGridLayout(echo_group)
583 |
584 | self.echo_enabled = QCheckBox("Enable Echo")
585 | echo_layout.addWidget(self.echo_enabled, 0, 0, 1, 2)
586 |
587 | # Delay time
588 | echo_layout.addWidget(QLabel("Delay (ms):"), 1, 0)
589 | self.echo_delay_slider = QSlider(Qt.Horizontal)
590 | self.echo_delay_slider.setRange(50, 1000)
591 | self.echo_delay_slider.setValue(300)
592 | self.echo_delay_label = QLabel("300ms")
593 | echo_layout.addWidget(self.echo_delay_slider, 1, 1)
594 | echo_layout.addWidget(self.echo_delay_label, 1, 2)
595 |
596 | # Feedback
597 | echo_layout.addWidget(QLabel("Feedback:"), 2, 0)
598 | self.echo_feedback_slider = QSlider(Qt.Horizontal)
599 | self.echo_feedback_slider.setRange(0, 90)
600 | self.echo_feedback_slider.setValue(30)
601 | self.echo_feedback_label = QLabel("30%")
602 | echo_layout.addWidget(self.echo_feedback_slider, 2, 1)
603 | echo_layout.addWidget(self.echo_feedback_label, 2, 2)
604 |
605 | # Wet level
606 | echo_layout.addWidget(QLabel("Wet Level:"), 3, 0)
607 | self.echo_wet_slider = QSlider(Qt.Horizontal)
608 | self.echo_wet_slider.setRange(0, 100)
609 | self.echo_wet_slider.setValue(50)
610 | self.echo_wet_label = QLabel("50%")
611 | echo_layout.addWidget(self.echo_wet_slider, 3, 1)
612 | echo_layout.addWidget(self.echo_wet_label, 3, 2)
613 |
614 | layout.addWidget(echo_group)
615 |
616 | # Chorus section
617 | chorus_group = QGroupBox("🌊 Chorus")
618 | chorus_layout = QGridLayout(chorus_group)
619 |
620 | self.chorus_enabled = QCheckBox("Enable Chorus")
621 | chorus_layout.addWidget(self.chorus_enabled, 0, 0, 1, 2)
622 |
623 | # Rate
624 | chorus_layout.addWidget(QLabel("Rate (Hz):"), 1, 0)
625 | self.chorus_rate_slider = QSlider(Qt.Horizontal)
626 | self.chorus_rate_slider.setRange(1, 50)
627 | self.chorus_rate_slider.setValue(15)
628 | self.chorus_rate_label = QLabel("1.5Hz")
629 | chorus_layout.addWidget(self.chorus_rate_slider, 1, 1)
630 | chorus_layout.addWidget(self.chorus_rate_label, 1, 2)
631 |
632 | # Depth
633 | chorus_layout.addWidget(QLabel("Depth:"), 2, 0)
634 | self.chorus_depth_slider = QSlider(Qt.Horizontal)
635 | self.chorus_depth_slider.setRange(1, 50)
636 | self.chorus_depth_slider.setValue(20)
637 | self.chorus_depth_label = QLabel("0.020s")
638 | chorus_layout.addWidget(self.chorus_depth_slider, 2, 1)
639 | chorus_layout.addWidget(self.chorus_depth_label, 2, 2)
640 |
641 | # Voices
642 | chorus_layout.addWidget(QLabel("Voices:"), 3, 0)
643 | self.chorus_voices_slider = QSlider(Qt.Horizontal)
644 | self.chorus_voices_slider.setRange(2, 8)
645 | self.chorus_voices_slider.setValue(3)
646 | self.chorus_voices_label = QLabel("3")
647 | chorus_layout.addWidget(self.chorus_voices_slider, 3, 1)
648 | chorus_layout.addWidget(self.chorus_voices_label, 3, 2)
649 |
650 | layout.addWidget(chorus_group)
651 | layout.addStretch()
652 |
653 | return widget
654 |
655 | def create_frequency_effects_tab(self):
656 | widget = QWidget()
657 | layout = QVBoxLayout(widget)
658 |
659 | # Parametric EQ section
660 | eq_group = QGroupBox("🎛️ 3-Band Parametric EQ")
661 | eq_layout = QGridLayout(eq_group)
662 |
663 | self.eq_enabled = QCheckBox("Enable EQ")
664 | eq_layout.addWidget(self.eq_enabled, 0, 0, 1, 4)
665 |
666 | # Low band
667 | eq_layout.addWidget(QLabel("Low Band:"), 1, 0)
668 | eq_layout.addWidget(QLabel("Freq (Hz):"), 2, 0)
669 | self.low_freq_slider = QSlider(Qt.Horizontal)
670 | self.low_freq_slider.setRange(20, 500)
671 | self.low_freq_slider.setValue(200)
672 | self.low_freq_label = QLabel("200Hz")
673 | eq_layout.addWidget(self.low_freq_slider, 2, 1)
674 | eq_layout.addWidget(self.low_freq_label, 2, 2)
675 |
676 | eq_layout.addWidget(QLabel("Gain (dB):"), 3, 0)
677 | self.low_gain_slider = QSlider(Qt.Horizontal)
678 | self.low_gain_slider.setRange(-200, 200)
679 | self.low_gain_slider.setValue(0)
680 | self.low_gain_label = QLabel("0dB")
681 | eq_layout.addWidget(self.low_gain_slider, 3, 1)
682 | eq_layout.addWidget(self.low_gain_label, 3, 2)
683 |
684 | # Mid band
685 | eq_layout.addWidget(QLabel("Mid Band:"), 4, 0)
686 | eq_layout.addWidget(QLabel("Freq (Hz):"), 5, 0)
687 | self.mid_freq_slider = QSlider(Qt.Horizontal)
688 | self.mid_freq_slider.setRange(200, 5000)
689 | self.mid_freq_slider.setValue(1000)
690 | self.mid_freq_label = QLabel("1000Hz")
691 | eq_layout.addWidget(self.mid_freq_slider, 5, 1)
692 | eq_layout.addWidget(self.mid_freq_label, 5, 2)
693 |
694 | eq_layout.addWidget(QLabel("Gain (dB):"), 6, 0)
695 | self.mid_gain_slider = QSlider(Qt.Horizontal)
696 | self.mid_gain_slider.setRange(-200, 200)
697 | self.mid_gain_slider.setValue(0)
698 | self.mid_gain_label = QLabel("0dB")
699 | eq_layout.addWidget(self.mid_gain_slider, 6, 1)
700 | eq_layout.addWidget(self.mid_gain_label, 6, 2)
701 |
702 | # High band
703 | eq_layout.addWidget(QLabel("High Band:"), 7, 0)
704 | eq_layout.addWidget(QLabel("Freq (Hz):"), 8, 0)
705 | self.high_freq_slider = QSlider(Qt.Horizontal)
706 | self.high_freq_slider.setRange(1000, 20000)
707 | self.high_freq_slider.setValue(5000)
708 | self.high_freq_label = QLabel("5000Hz")
709 | eq_layout.addWidget(self.high_freq_slider, 8, 1)
710 | eq_layout.addWidget(self.high_freq_label, 8, 2)
711 |
712 | eq_layout.addWidget(QLabel("Gain (dB):"), 9, 0)
713 | self.high_gain_slider = QSlider(Qt.Horizontal)
714 | self.high_gain_slider.setRange(-200, 200)
715 | self.high_gain_slider.setValue(0)
716 | self.high_gain_label = QLabel("0dB")
717 | eq_layout.addWidget(self.high_gain_slider, 9, 1)
718 | eq_layout.addWidget(self.high_gain_label, 9, 2)
719 |
720 | layout.addWidget(eq_group)
721 | layout.addStretch()
722 |
723 | return widget
724 |
725 | def create_dynamics_effects_tab(self):
726 | widget = QWidget()
727 | layout = QVBoxLayout(widget)
728 |
729 | # Compressor section
730 | comp_group = QGroupBox("📊 Compressor")
731 | comp_layout = QGridLayout(comp_group)
732 |
733 | self.comp_enabled = QCheckBox("Enable Compressor")
734 | comp_layout.addWidget(self.comp_enabled, 0, 0, 1, 2)
735 |
736 | # Threshold
737 | comp_layout.addWidget(QLabel("Threshold (dB):"), 1, 0)
738 | self.comp_threshold_slider = QSlider(Qt.Horizontal)
739 | self.comp_threshold_slider.setRange(-400, 0)
740 | self.comp_threshold_slider.setValue(-120)
741 | self.comp_threshold_label = QLabel("-12dB")
742 | comp_layout.addWidget(self.comp_threshold_slider, 1, 1)
743 | comp_layout.addWidget(self.comp_threshold_label, 1, 2)
744 |
745 | # Ratio
746 | comp_layout.addWidget(QLabel("Ratio:"), 2, 0)
747 | self.comp_ratio_slider = QSlider(Qt.Horizontal)
748 | self.comp_ratio_slider.setRange(10, 200)
749 | self.comp_ratio_slider.setValue(40)
750 | self.comp_ratio_label = QLabel("4:1")
751 | comp_layout.addWidget(self.comp_ratio_slider, 2, 1)
752 | comp_layout.addWidget(self.comp_ratio_label, 2, 2)
753 |
754 | # Attack
755 | comp_layout.addWidget(QLabel("Attack (ms):"), 3, 0)
756 | self.comp_attack_slider = QSlider(Qt.Horizontal)
757 | self.comp_attack_slider.setRange(1, 100)
758 | self.comp_attack_slider.setValue(5)
759 | self.comp_attack_label = QLabel("5ms")
760 | comp_layout.addWidget(self.comp_attack_slider, 3, 1)
761 | comp_layout.addWidget(self.comp_attack_label, 3, 2)
762 |
763 | # Release
764 | comp_layout.addWidget(QLabel("Release (ms):"), 4, 0)
765 | self.comp_release_slider = QSlider(Qt.Horizontal)
766 | self.comp_release_slider.setRange(10, 500)
767 | self.comp_release_slider.setValue(50)
768 | self.comp_release_label = QLabel("50ms")
769 | comp_layout.addWidget(self.comp_release_slider, 4, 1)
770 | comp_layout.addWidget(self.comp_release_label, 4, 2)
771 |
772 | layout.addWidget(comp_group)
773 | layout.addStretch()
774 |
775 | return widget
776 |
777 | def connect_signals(self):
778 | """Connect all slider signals to update labels"""
779 | # Time effects
780 | self.room_size_slider.valueChanged.connect(
781 | lambda v: self.room_size_label.setText(f"{v}%"))
782 | self.damping_slider.valueChanged.connect(
783 | lambda v: self.damping_label.setText(f"{v}%"))
784 | self.reverb_wet_slider.valueChanged.connect(
785 | lambda v: self.reverb_wet_label.setText(f"{v}%"))
786 |
787 | self.echo_delay_slider.valueChanged.connect(
788 | lambda v: self.echo_delay_label.setText(f"{v}ms"))
789 | self.echo_feedback_slider.valueChanged.connect(
790 | lambda v: self.echo_feedback_label.setText(f"{v}%"))
791 | self.echo_wet_slider.valueChanged.connect(
792 | lambda v: self.echo_wet_label.setText(f"{v}%"))
793 |
794 | self.chorus_rate_slider.valueChanged.connect(
795 | lambda v: self.chorus_rate_label.setText(f"{v/10:.1f}Hz"))
796 | self.chorus_depth_slider.valueChanged.connect(
797 | lambda v: self.chorus_depth_label.setText(f"{v/1000:.3f}s"))
798 | self.chorus_voices_slider.valueChanged.connect(
799 | lambda v: self.chorus_voices_label.setText(str(v)))
800 |
801 | # EQ
802 | self.low_freq_slider.valueChanged.connect(
803 | lambda v: self.low_freq_label.setText(f"{v}Hz"))
804 | self.low_gain_slider.valueChanged.connect(
805 | lambda v: self.low_gain_label.setText(f"{v/10:.1f}dB"))
806 | self.mid_freq_slider.valueChanged.connect(
807 | lambda v: self.mid_freq_label.setText(f"{v}Hz"))
808 | self.mid_gain_slider.valueChanged.connect(
809 | lambda v: self.mid_gain_label.setText(f"{v/10:.1f}dB"))
810 | self.high_freq_slider.valueChanged.connect(
811 | lambda v: self.high_freq_label.setText(f"{v}Hz"))
812 | self.high_gain_slider.valueChanged.connect(
813 | lambda v: self.high_gain_label.setText(f"{v/10:.1f}dB"))
814 |
815 | # Compressor
816 | self.comp_threshold_slider.valueChanged.connect(
817 | lambda v: self.comp_threshold_label.setText(f"{v/10:.1f}dB"))
818 | self.comp_ratio_slider.valueChanged.connect(
819 | lambda v: self.comp_ratio_label.setText(f"{v/10:.1f}:1"))
820 | self.comp_attack_slider.valueChanged.connect(
821 | lambda v: self.comp_attack_label.setText(f"{v}ms"))
822 | self.comp_release_slider.valueChanged.connect(
823 | lambda v: self.comp_release_label.setText(f"{v}ms"))
824 |
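QSlider only stores integers, so the handlers above keep fixed-point values in the sliders and divide on the way out. For example:

```python
# The chorus rate slider holds tenths of a hertz
slider_value = 15
rate_hz = slider_value / 10       # shown as "1.5Hz" in the label

# The EQ gain sliders hold tenths of a decibel
gain_value = -65
gain_db = gain_value / 10         # shown as "-6.5dB" in the label
```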
825 | def get_effect_parameters(self):
826 | """Get all effect parameters as a dictionary"""
827 | return {
828 | 'reverb': {
829 | 'enabled': self.reverb_enabled.isChecked(),
830 | 'room_size': self.room_size_slider.value() / 100.0,
831 | 'damping': self.damping_slider.value() / 100.0,
832 | 'wet_level': self.reverb_wet_slider.value() / 100.0
833 | },
834 | 'echo': {
835 | 'enabled': self.echo_enabled.isChecked(),
836 | 'delay_ms': self.echo_delay_slider.value(),
837 | 'feedback': self.echo_feedback_slider.value() / 100.0,
838 | 'wet_level': self.echo_wet_slider.value() / 100.0
839 | },
840 | 'chorus': {
841 | 'enabled': self.chorus_enabled.isChecked(),
842 | 'rate': self.chorus_rate_slider.value() / 10.0,
843 | 'depth': self.chorus_depth_slider.value() / 1000.0,
844 | 'voices': self.chorus_voices_slider.value()
845 | },
846 | 'eq': {
847 | 'enabled': self.eq_enabled.isChecked(),
848 | 'low_freq': self.low_freq_slider.value(),
849 | 'low_gain': self.low_gain_slider.value() / 10.0,
850 | 'mid_freq': self.mid_freq_slider.value(),
851 | 'mid_gain': self.mid_gain_slider.value() / 10.0,
852 | 'high_freq': self.high_freq_slider.value(),
853 | 'high_gain': self.high_gain_slider.value() / 10.0
854 | },
855 | 'compressor': {
856 | 'enabled': self.comp_enabled.isChecked(),
857 | 'threshold': self.comp_threshold_slider.value() / 10.0,
858 | 'ratio': self.comp_ratio_slider.value() / 10.0,
859 | 'attack_ms': self.comp_attack_slider.value(),
860 | 'release_ms': self.comp_release_slider.value()
861 | }
862 | }
863 |
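A caller can filter the dictionary returned by `get_effect_parameters` down to the enabled effects before processing. A minimal sketch, where `PARAMS` is a hand-written stand-in for the dialog's return value (not produced by the class itself):

```python
PARAMS = {
    'reverb': {'enabled': False, 'room_size': 0.5, 'damping': 0.5, 'wet_level': 0.3},
    'echo': {'enabled': True, 'delay_ms': 300, 'feedback': 0.3, 'wet_level': 0.5},
}

def enabled_effects(params):
    """Names of the effects the user switched on, in dict order."""
    return [name for name, cfg in params.items() if cfg.get('enabled')]
```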
864 | def reset_all(self):
865 | """Reset all parameters to default values"""
866 | # Reset all checkboxes
867 | for checkbox in [self.reverb_enabled, self.echo_enabled, self.chorus_enabled,
868 | self.eq_enabled, self.comp_enabled]:
869 | checkbox.setChecked(False)
870 |
871 | # Reset all sliders to default values
872 | defaults = {
873 | self.room_size_slider: 50,
874 | self.damping_slider: 50,
875 | self.reverb_wet_slider: 30,
876 | self.echo_delay_slider: 300,
877 | self.echo_feedback_slider: 30,
878 | self.echo_wet_slider: 50,
879 | self.chorus_rate_slider: 15,
880 | self.chorus_depth_slider: 20,
881 | self.chorus_voices_slider: 3,
882 | self.low_freq_slider: 200,
883 | self.low_gain_slider: 0,
884 | self.mid_freq_slider: 1000,
885 | self.mid_gain_slider: 0,
886 | self.high_freq_slider: 5000,
887 | self.high_gain_slider: 0,
888 | self.comp_threshold_slider: -120,
889 | self.comp_ratio_slider: 40,
890 | self.comp_attack_slider: 5,
891 | self.comp_release_slider: 50
892 | }
893 |
894 | for slider, value in defaults.items():
895 | slider.setValue(value)
896 |
897 |
--------------------------------------------------------------------------------
/src/error_handler.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import sys
3 | import traceback
4 | from pathlib import Path
5 | from typing import Optional, Callable
6 | from PyQt5.QtCore import QObject, pyqtSignal
7 | from PyQt5.QtWidgets import QMessageBox, QDialog, QVBoxLayout, QTextEdit, QPushButton, QHBoxLayout
8 |
9 |
10 | class ErrorHandler(QObject):
11 | """
12 | Centralized error handling and logging system for MetroMuse.
13 | Provides user-friendly error messages and detailed logging.
14 | """
15 |
16 | errorOccurred = pyqtSignal(str, str) # Error type, error message
17 |
18 | def __init__(self, parent=None):
19 | super().__init__(parent)
20 | self.setup_logging()
21 | self.error_callback = None
22 |
23 | def setup_logging(self):
24 | """Setup logging configuration"""
25 | log_dir = Path(__file__).resolve().parent / "logs"
26 | log_dir.mkdir(exist_ok=True)
27 |
28 | log_file = log_dir / "metromuse.log"
29 |
30 | # Configure logging
31 | logging.basicConfig(
32 | level=logging.INFO,
33 | format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
34 | handlers=[
35 | logging.FileHandler(log_file, encoding='utf-8'),
36 | logging.StreamHandler(sys.stdout)
37 | ]
38 | )
39 |
40 | self.logger = logging.getLogger('MetroMuse')
41 |
42 | def set_error_callback(self, callback: Callable):
43 | """Set a callback function to be called when errors occur"""
44 | self.error_callback = callback
45 |
46 | def handle_exception(self, exc_type, exc_value, exc_traceback, user_message=None):
47 | """
48 | Handle exceptions with logging and user notification.
49 |
50 | Args:
51 | exc_type: Exception type
52 | exc_value: Exception value
53 | exc_traceback: Exception traceback
54 | user_message: Optional user-friendly message
55 | """
56 | # Log the full exception
57 | error_msg = ''.join(traceback.format_exception(exc_type, exc_value, exc_traceback))
58 | self.logger.error(f"Unhandled exception: {error_msg}")
59 |
60 | # Create user-friendly message
61 | if user_message is None:
62 | user_message = self._create_user_friendly_message(exc_type, exc_value)
63 |
64 | # Show error dialog
65 | self.show_error_dialog("Application Error", user_message, error_msg)
66 |
67 | # Emit signal
68 | self.errorOccurred.emit(exc_type.__name__, str(exc_value))
69 |
70 | # Call error callback if set
71 | if self.error_callback:
72 | self.error_callback(exc_type, exc_value, exc_traceback)
73 |
74 | def _create_user_friendly_message(self, exc_type, exc_value):
75 | """Create user-friendly error messages based on exception type"""
76 | error_messages = {
77 | FileNotFoundError: "A required file could not be found. Please check that all audio files and resources are in the correct location.",
78 | PermissionError: "Permission denied. Please check that you have the necessary permissions to access the file or folder.",
79 | MemoryError: "The system is running low on memory. Try closing other applications or working with smaller audio files.",
80 | ImportError: "A required component could not be loaded. Please check that all dependencies are properly installed.",
81 | ValueError: "Invalid data encountered. Please check your input and try again.",
82 | RuntimeError: "A runtime error occurred. This may be due to audio processing or system resource issues."
83 | }
84 |
85 | for exc_class, message in error_messages.items():
86 | if isinstance(exc_value, exc_class):
87 | return f"{message}\n\nTechnical details: {str(exc_value)}"
88 |
89 | return f"An unexpected error occurred: {str(exc_value)}"
90 |
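The isinstance-based lookup in `_create_user_friendly_message` can be sketched standalone (no PyQt5 required). The helper name `friendly_message` and the abbreviated message table are illustrative, not part of the module:

```python
def friendly_message(exc: BaseException) -> str:
    # Ordered mapping: the first matching exception class wins, which is
    # why insertion order matters (dicts preserve it in Python 3.7+).
    messages = {
        FileNotFoundError: "A required file could not be found.",
        PermissionError: "Permission denied.",
        ValueError: "Invalid data encountered.",
    }
    for exc_class, message in messages.items():
        if isinstance(exc, exc_class):
            return f"{message}\n\nTechnical details: {exc}"
    return f"An unexpected error occurred: {exc}"
```

Because the check uses `isinstance`, subclasses also match: a `FileNotFoundError` would match an `OSError` entry if one were listed first, so more specific classes should come before more general ones.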
91 | def show_error_dialog(self, title, message, details=None):
92 | """Show error dialog with optional details"""
93 | dialog = ErrorDialog(title, message, details)
94 | dialog.exec_()
95 |
96 | def log_info(self, message):
97 | """Log informational message"""
98 | self.logger.info(message)
99 |
100 | def log_warning(self, message):
101 | """Log warning message"""
102 | self.logger.warning(message)
103 |
104 | def log_error(self, message):
105 | """Log error message"""
106 | self.logger.error(message)
107 |
108 | def handle_audio_error(self, operation, error):
109 | """Handle audio-specific errors"""
110 | error_msg = f"Audio {operation} failed: {str(error)}"
111 | self.log_error(error_msg)
112 |
113 | user_messages = {
114 | "load": "Failed to load audio file. Please check that the file is a valid audio format and not corrupted.",
115 | "save": "Failed to save audio file. Please check disk space and file permissions.",
116 | "play": "Failed to play audio. Please check your audio device settings.",
117 | "record": "Failed to record audio. Please check your microphone settings.",
118 | "process": "Failed to process audio. The operation may be too complex for the current system resources."
119 | }
120 |
121 | user_msg = user_messages.get(operation, f"Audio {operation} failed.")
122 | user_msg += f"\n\nError details: {str(error)}"
123 |
124 | QMessageBox.critical(None, f"Audio {operation.title()} Error", user_msg)
125 | self.errorOccurred.emit("AudioError", error_msg)
126 |
127 | def handle_file_error(self, operation, filepath, error):
128 | """Handle file operation errors"""
129 | error_msg = f"File {operation} failed for '{filepath}': {str(error)}"
130 | self.log_error(error_msg)
131 |
132 | user_messages = {
133 | "open": f"Could not open file '{filepath}'. Please check that the file exists and you have permission to access it.",
134 | "save": f"Could not save file '{filepath}'. Please check disk space and write permissions.",
135 | "delete": f"Could not delete file '{filepath}'. Please check file permissions.",
136 | "move": f"Could not move file '{filepath}'. Please check source and destination permissions."
137 | }
138 |
139 | user_msg = user_messages.get(operation, f"File {operation} failed for '{filepath}'.")
140 | user_msg += f"\n\nError details: {str(error)}"
141 |
142 | QMessageBox.critical(None, f"File {operation.title()} Error", user_msg)
143 | self.errorOccurred.emit("FileError", error_msg)
144 |
145 |
146 | class ErrorDialog(QDialog):
147 | """Custom error dialog with expandable details"""
148 |
149 | def __init__(self, title, message, details=None, parent=None):
150 | super().__init__(parent)
151 | self.setWindowTitle(title)
152 | self.setMinimumWidth(400)
153 | self.setModal(True)
154 |
155 | layout = QVBoxLayout(self)
156 |
157 | # Main message
158 | from PyQt5.QtWidgets import QLabel
159 | message_label = QLabel(message)
160 | message_label.setWordWrap(True)
161 | layout.addWidget(message_label)
162 |
163 | # Details section (initially hidden)
164 | if details:
165 | self.details_widget = QTextEdit()
166 | self.details_widget.setPlainText(details)
167 | self.details_widget.setReadOnly(True)
168 | self.details_widget.setMaximumHeight(200)
169 | self.details_widget.hide()
170 | layout.addWidget(self.details_widget)
171 |
172 | # Show/Hide details button
173 | self.details_button = QPushButton("Show Details")
174 | self.details_button.clicked.connect(self.toggle_details)
175 |
176 | # Button layout
177 | button_layout = QHBoxLayout()
178 |
179 | if details:
180 | button_layout.addWidget(self.details_button)
181 |
182 | button_layout.addStretch()
183 |
184 | ok_button = QPushButton("OK")
185 | ok_button.clicked.connect(self.accept)
186 | ok_button.setDefault(True)
187 | button_layout.addWidget(ok_button)
188 |
189 | layout.addLayout(button_layout)
190 |
191 | def toggle_details(self):
192 | """Toggle visibility of error details"""
193 | if self.details_widget.isVisible():
194 | self.details_widget.hide()
195 | self.details_button.setText("Show Details")
196 | self.resize(self.width(), self.minimumHeight())
197 | else:
198 | self.details_widget.show()
199 | self.details_button.setText("Hide Details")
200 | self.resize(self.width(), self.height() + 200)
201 |
202 |
203 | # Global error handler instance
204 | _error_handler = None
205 |
206 | def get_error_handler():
207 | """Get the global error handler instance"""
208 | global _error_handler
209 | if _error_handler is None:
210 | _error_handler = ErrorHandler()
211 | return _error_handler
212 |
213 | def setup_exception_handler():
214 | """Setup global exception handler"""
215 | error_handler = get_error_handler()
216 |
217 | def handle_exception(exc_type, exc_value, exc_traceback):
218 | if issubclass(exc_type, KeyboardInterrupt):
219 | sys.__excepthook__(exc_type, exc_value, exc_traceback)
220 | return
221 |
222 | error_handler.handle_exception(exc_type, exc_value, exc_traceback)
223 |
224 | sys.excepthook = handle_exception
225 |
226 |
--------------------------------------------------------------------------------
/src/icon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/src/icon.ico
--------------------------------------------------------------------------------
/src/icon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Ivan-Ayub97/MetroMuse-PyAudioEditor/e0be29af87fd7cc7565652ee78103f1dc34859a5/src/icon.png
--------------------------------------------------------------------------------
/src/performance_monitor.py:
--------------------------------------------------------------------------------
1 | import time
2 | import psutil
3 | import threading
4 | from typing import Dict, List, Optional, Callable
5 | from dataclasses import dataclass, field
6 | from PyQt5.QtCore import QObject, pyqtSignal, QTimer
7 | from error_handler import get_error_handler
8 |
9 | @dataclass
10 | class PerformanceMetrics:
11 | """Data class to store performance metrics"""
12 | timestamp: float = field(default_factory=time.time)
13 | cpu_percent: float = 0.0
14 | memory_percent: float = 0.0
15 | memory_used_mb: float = 0.0
16 | memory_available_mb: float = 0.0
17 | audio_buffer_size: int = 1024
18 | audio_latency_ms: float = 0.0
19 | active_tracks: int = 0
20 | is_playing: bool = False
21 | waveform_render_time_ms: float = 0.0
22 |
23 | class PerformanceMonitor(QObject):
24 | """
25 | Monitors system performance and provides optimization recommendations
26 | for MetroMuse audio editing operations.
27 | """
28 |
29 | # Signals
30 | metricsUpdated = pyqtSignal(object) # PerformanceMetrics object
31 | warningIssued = pyqtSignal(str, str) # Warning type, warning message
32 | recommendationIssued = pyqtSignal(str, str) # Recommendation type, recommendation message
33 |
34 | def __init__(self, parent=None, update_interval_ms=1000):
35 | super().__init__(parent)
36 | self.error_handler = get_error_handler()
37 | self.update_interval = update_interval_ms
38 | self.monitoring_enabled = False
39 |
40 | # Performance history (keep last 60 measurements)
41 | self.metrics_history: List[PerformanceMetrics] = []
42 | self.max_history_size = 60
43 |
44 | # Thresholds for warnings
45 | self.cpu_warning_threshold = 80.0 # %
46 | self.memory_warning_threshold = 85.0 # %
47 | self.memory_critical_threshold = 95.0 # %
48 |
49 | # Optimization settings
50 | self.optimization_callbacks: Dict[str, Callable] = {}
51 |
52 | # Timer for periodic monitoring
53 | self.monitor_timer = QTimer()
54 | self.monitor_timer.timeout.connect(self._collect_metrics)
55 |
56 | # Performance optimization flags
57 | self.performance_mode = "balanced" # "performance", "balanced", "quality"
58 |
59 | def start_monitoring(self):
60 | """Start performance monitoring"""
61 | if not self.monitoring_enabled:
62 | self.monitoring_enabled = True
63 | self.monitor_timer.start(self.update_interval)
64 | self.error_handler.log_info("Performance monitoring started")
65 |
66 | def stop_monitoring(self):
67 | """Stop performance monitoring"""
68 | if self.monitoring_enabled:
69 | self.monitoring_enabled = False
70 | self.monitor_timer.stop()
71 | self.error_handler.log_info("Performance monitoring stopped")
72 |
73 | def _collect_metrics(self):
74 | """Collect current performance metrics"""
75 | try:
76 | # Get system metrics
 76 |             # Note: with interval=None the first cpu_percent() call returns 0.0
 77 |             # and only primes the counter; subsequent calls return real deltas.
77 | cpu_percent = psutil.cpu_percent(interval=None)
78 | memory = psutil.virtual_memory()
79 |
80 | # Create metrics object
81 | metrics = PerformanceMetrics(
82 | cpu_percent=cpu_percent,
83 | memory_percent=memory.percent,
84 | memory_used_mb=memory.used / (1024 * 1024),
85 | memory_available_mb=memory.available / (1024 * 1024)
86 | )
87 |
88 | # Add to history
89 | self.metrics_history.append(metrics)
90 | if len(self.metrics_history) > self.max_history_size:
91 | self.metrics_history.pop(0)
92 |
93 | # Check for warnings
94 | self._check_performance_warnings(metrics)
95 |
96 | # Emit updated metrics
97 | self.metricsUpdated.emit(metrics)
98 |
99 | except Exception as e:
100 | self.error_handler.log_error(f"Error collecting performance metrics: {str(e)}")
101 |
102 | def _check_performance_warnings(self, metrics: PerformanceMetrics):
103 | """Check metrics for performance issues and emit warnings"""
104 | # CPU warnings
105 | if metrics.cpu_percent > self.cpu_warning_threshold:
106 | if metrics.cpu_percent > 95:
107 | self.warningIssued.emit(
108 | "Critical CPU Usage",
109 | f"CPU usage is at {metrics.cpu_percent:.1f}%. Consider reducing audio quality or track count."
110 | )
111 | self._recommend_cpu_optimization()
112 | else:
113 | self.warningIssued.emit(
114 | "High CPU Usage",
115 | f"CPU usage is high ({metrics.cpu_percent:.1f}%). Performance may be affected."
116 | )
117 |
118 | # Memory warnings
119 | if metrics.memory_percent > self.memory_critical_threshold:
120 | self.warningIssued.emit(
121 | "Critical Memory Usage",
122 | f"Memory usage is at {metrics.memory_percent:.1f}%. Application may become unstable."
123 | )
124 | self._recommend_memory_optimization()
125 | elif metrics.memory_percent > self.memory_warning_threshold:
126 | self.warningIssued.emit(
127 | "High Memory Usage",
128 | f"Memory usage is high ({metrics.memory_percent:.1f}%). Consider closing other applications."
129 | )
130 |
131 | def _recommend_cpu_optimization(self):
132 | """Provide CPU optimization recommendations"""
133 | recommendations = [
134 | "Consider switching to Performance Mode for reduced CPU usage",
135 | "Reduce the number of active audio tracks",
136 | "Lower audio quality settings if possible",
137 | "Close other applications to free up CPU resources"
138 | ]
139 |
140 | for rec in recommendations:
141 | self.recommendationIssued.emit("CPU Optimization", rec)
142 |
143 | def _recommend_memory_optimization(self):
144 | """Provide memory optimization recommendations"""
145 | recommendations = [
146 | "Close unused tracks to free memory",
147 | "Work with shorter audio files when possible",
148 | "Restart the application to clear memory leaks",
149 | "Close other applications to free up system memory"
150 | ]
151 |
152 | for rec in recommendations:
153 | self.recommendationIssued.emit("Memory Optimization", rec)
154 |
155 | def set_performance_mode(self, mode: str):
156 | """
157 | Set performance mode: 'performance', 'balanced', or 'quality'
158 |
159 | Args:
160 | mode: The performance mode to set
161 | """
162 | if mode not in ["performance", "balanced", "quality"]:
163 | raise ValueError("Mode must be 'performance', 'balanced', or 'quality'")
164 |
165 | self.performance_mode = mode
166 | self.error_handler.log_info(f"Performance mode set to: {mode}")
167 |
168 | # Apply mode-specific optimizations
169 | if mode == "performance":
170 | self._apply_performance_optimizations()
171 | elif mode == "balanced":
172 | self._apply_balanced_optimizations()
173 | elif mode == "quality":
174 | self._apply_quality_optimizations()
175 |
176 | def _apply_performance_optimizations(self):
177 | """Apply optimizations for maximum performance"""
178 | optimizations = {
179 | "waveform_detail_level": "low",
180 | "audio_buffer_size": 2048,
181 | "real_time_effects": False,
182 | "waveform_antialiasing": False,
183 | "background_processing": True
184 | }
185 |
186 | for key, value in optimizations.items():
187 | if key in self.optimization_callbacks:
188 | self.optimization_callbacks[key](value)
189 |
190 | def _apply_balanced_optimizations(self):
191 | """Apply balanced optimizations"""
192 | optimizations = {
193 | "waveform_detail_level": "medium",
194 | "audio_buffer_size": 1024,
195 | "real_time_effects": True,
196 | "waveform_antialiasing": True,
197 | "background_processing": True
198 | }
199 |
200 | for key, value in optimizations.items():
201 | if key in self.optimization_callbacks:
202 | self.optimization_callbacks[key](value)
203 |
204 | def _apply_quality_optimizations(self):
205 | """Apply optimizations for maximum quality"""
206 | optimizations = {
207 | "waveform_detail_level": "high",
208 | "audio_buffer_size": 512,
209 | "real_time_effects": True,
210 | "waveform_antialiasing": True,
211 | "background_processing": False
212 | }
213 |
214 | for key, value in optimizations.items():
215 | if key in self.optimization_callbacks:
216 | self.optimization_callbacks[key](value)
217 |
218 | def register_optimization_callback(self, setting_name: str, callback: Callable):
219 | """
220 | Register a callback function for performance optimizations
221 |
222 | Args:
223 | setting_name: Name of the setting to optimize
224 | callback: Function to call when optimization is applied
225 | """
226 | self.optimization_callbacks[setting_name] = callback
227 |
228 | def get_average_metrics(self, duration_seconds: int = 30) -> Optional[PerformanceMetrics]:
229 | """
230 | Get average performance metrics over the specified duration
231 |
232 | Args:
233 | duration_seconds: Duration to calculate average over
234 |
235 | Returns:
236 | Average metrics or None if insufficient data
237 | """
238 | if not self.metrics_history:
239 | return None
240 |
241 | # Filter metrics within the specified duration
242 | current_time = time.time()
243 | cutoff_time = current_time - duration_seconds
244 |
245 | recent_metrics = [
246 | m for m in self.metrics_history
247 | if m.timestamp >= cutoff_time
248 | ]
249 |
250 | if not recent_metrics:
251 | return None
252 |
253 | # Calculate averages
254 | avg_metrics = PerformanceMetrics(
255 | timestamp=current_time,
256 | cpu_percent=sum(m.cpu_percent for m in recent_metrics) / len(recent_metrics),
257 | memory_percent=sum(m.memory_percent for m in recent_metrics) / len(recent_metrics),
258 | memory_used_mb=sum(m.memory_used_mb for m in recent_metrics) / len(recent_metrics),
259 | memory_available_mb=sum(m.memory_available_mb for m in recent_metrics) / len(recent_metrics),
260 | audio_buffer_size=recent_metrics[-1].audio_buffer_size, # Use latest
261 | audio_latency_ms=sum(m.audio_latency_ms for m in recent_metrics) / len(recent_metrics),
262 | active_tracks=recent_metrics[-1].active_tracks, # Use latest
263 | is_playing=recent_metrics[-1].is_playing, # Use latest
264 | waveform_render_time_ms=sum(m.waveform_render_time_ms for m in recent_metrics) / len(recent_metrics)
265 | )
266 |
267 | return avg_metrics
268 |
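The windowed-average logic in `get_average_metrics` reduces to: filter samples newer than a cutoff, then average. A minimal standalone sketch using `(timestamp, cpu_percent)` tuples in place of full `PerformanceMetrics` objects (the helper name `average_cpu` and the `now` parameter are illustrative; `now` is injected to make the cutoff testable):

```python
import time

def average_cpu(history, duration_seconds=30, now=None):
    """history: list of (timestamp, cpu_percent) tuples, oldest first."""
    now = time.time() if now is None else now
    cutoff = now - duration_seconds
    # Keep only samples inside the window, then average them.
    recent = [cpu for ts, cpu in history if ts >= cutoff]
    return sum(recent) / len(recent) if recent else None
```

Returning `None` for an empty window mirrors the module's "insufficient data" behavior rather than raising a `ZeroDivisionError`.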
269 | def update_audio_metrics(self, buffer_size: int, latency_ms: float, active_tracks: int, is_playing: bool):
270 | """
271 | Update audio-specific performance metrics
272 |
273 | Args:
274 | buffer_size: Current audio buffer size
275 | latency_ms: Current audio latency in milliseconds
276 | active_tracks: Number of active audio tracks
277 | is_playing: Whether audio is currently playing
278 | """
279 | if self.metrics_history:
280 | latest = self.metrics_history[-1]
281 | latest.audio_buffer_size = buffer_size
282 | latest.audio_latency_ms = latency_ms
283 | latest.active_tracks = active_tracks
284 | latest.is_playing = is_playing
285 |
286 | def update_waveform_render_time(self, render_time_ms: float):
287 | """
288 | Update waveform rendering performance metrics
289 |
290 | Args:
291 | render_time_ms: Time taken to render waveform in milliseconds
292 | """
293 | if self.metrics_history:
294 | latest = self.metrics_history[-1]
295 | latest.waveform_render_time_ms = render_time_ms
296 |
297 |     def get_performance_report(self) -> Dict[str, object]:
298 | """
299 | Generate a comprehensive performance report
300 |
301 | Returns:
302 | Dictionary containing performance analysis
303 | """
304 | if not self.metrics_history:
305 | return {"status": "No data available"}
306 |
307 | latest = self.metrics_history[-1]
308 | avg_30s = self.get_average_metrics(30)
309 |
310 | report = {
311 | "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
312 | "performance_mode": self.performance_mode,
313 | "current_metrics": {
314 | "cpu_percent": latest.cpu_percent,
315 | "memory_percent": latest.memory_percent,
316 | "memory_used_mb": latest.memory_used_mb,
317 | "active_tracks": latest.active_tracks,
318 | "is_playing": latest.is_playing
319 | },
320 |             "average_30s": {
321 |                 "cpu_percent": avg_30s.cpu_percent,
322 |                 "memory_percent": avg_30s.memory_percent,
323 |                 "waveform_render_time_ms": avg_30s.waveform_render_time_ms
324 |             } if avg_30s else None,
325 | "recommendations": self._generate_recommendations(latest, avg_30s)
326 | }
327 |
328 | return report
329 |
330 | def _generate_recommendations(self, current: PerformanceMetrics, average: Optional[PerformanceMetrics]) -> List[str]:
331 | """
332 | Generate performance recommendations based on current metrics
333 |
334 | Args:
335 | current: Current performance metrics
336 | average: Average performance metrics
337 |
338 | Returns:
339 | List of recommendation strings
340 | """
341 | recommendations = []
342 |
343 | # CPU recommendations
344 | if current.cpu_percent > 80:
345 | recommendations.append("Consider reducing CPU usage by switching to Performance Mode")
346 | if current.active_tracks > 5:
347 | recommendations.append("Consider reducing the number of active tracks")
348 |
349 | # Memory recommendations
350 | if current.memory_percent > 80:
351 | recommendations.append("Memory usage is high - consider closing other applications")
352 | recommendations.append("Work with shorter audio files to reduce memory usage")
353 |
354 | # Audio performance recommendations
355 | if current.is_playing and current.audio_latency_ms > 50:
356 | recommendations.append("Audio latency is high - consider increasing buffer size")
357 |
358 | # Waveform rendering recommendations
359 | if average and average.waveform_render_time_ms > 100:
360 | recommendations.append("Waveform rendering is slow - consider reducing detail level")
361 |
362 | if not recommendations:
363 | recommendations.append("Performance is optimal - no recommendations at this time")
364 |
365 | return recommendations
366 |
367 |
368 | class AudioBufferOptimizer:
369 | """
370 | Utility class for optimizing audio buffer sizes based on system performance
371 | """
372 |
373 | @staticmethod
374 | def calculate_optimal_buffer_size(sample_rate: int, target_latency_ms: float = 20.0) -> int:
375 | """
376 | Calculate optimal buffer size for given sample rate and target latency
377 |
378 | Args:
379 | sample_rate: Audio sample rate in Hz
380 | target_latency_ms: Target latency in milliseconds
381 |
382 | Returns:
383 | Optimal buffer size in samples
384 | """
385 | # Convert target latency to samples
386 | target_samples = int((target_latency_ms / 1000.0) * sample_rate)
387 |
388 |         # Round up to the next power of 2 for optimal performance
389 | power_of_2 = 1
390 | while power_of_2 < target_samples:
391 | power_of_2 *= 2
392 |
393 | # Ensure minimum buffer size for stability
394 | min_buffer = 256
395 | max_buffer = 4096
396 |
397 | return max(min_buffer, min(max_buffer, power_of_2))
398 |
399 | @staticmethod
400 | def adapt_buffer_size_for_performance(current_buffer: int, cpu_percent: float, memory_percent: float) -> int:
401 | """
402 | Adapt buffer size based on current system performance
403 |
404 | Args:
405 | current_buffer: Current buffer size
406 | cpu_percent: Current CPU usage percentage
407 | memory_percent: Current memory usage percentage
408 |
409 | Returns:
410 | Recommended buffer size
411 | """
412 | # Increase buffer size if system is under stress
413 | if cpu_percent > 80 or memory_percent > 85:
414 | return min(4096, current_buffer * 2)
415 |
416 | # Decrease buffer size if system has plenty of resources
417 | elif cpu_percent < 50 and memory_percent < 70:
418 | return max(256, current_buffer // 2)
419 |
420 | # Keep current buffer size
421 | return current_buffer
422 |
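Since `AudioBufferOptimizer` lives in a module that imports `psutil` and PyQt5, its arithmetic is re-sketched here standalone. The function name `optimal_buffer_size` is illustrative; the math mirrors `calculate_optimal_buffer_size`: convert target latency to samples, round up to the next power of two, clamp to [256, 4096]:

```python
def optimal_buffer_size(sample_rate: int, target_latency_ms: float = 20.0) -> int:
    # Samples needed to cover the target latency.
    target = int((target_latency_ms / 1000.0) * sample_rate)
    # Round up to the next power of two (audio APIs prefer power-of-2 buffers).
    size = 1
    while size < target:
        size *= 2
    # Clamp for stability: too small risks underruns, too large adds latency.
    return max(256, min(4096, size))
```

For example, 44.1 kHz at 20 ms needs 882 samples, which rounds up to a 1024-sample buffer; the same rate at 5 ms (220 samples) hits the 256-sample floor.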
423 |
424 | # Global performance monitor instance
425 | _performance_monitor = None
426 |
427 | def get_performance_monitor():
428 | """Get the global performance monitor instance"""
429 | global _performance_monitor
430 | if _performance_monitor is None:
431 | _performance_monitor = PerformanceMonitor()
432 | return _performance_monitor
433 |
434 |
--------------------------------------------------------------------------------
/src/project_manager.py:
--------------------------------------------------------------------------------
1 | import json
2 | import os
3 | import uuid
4 | from pathlib import Path
5 | from typing import Dict, List, Optional, Any
6 | import numpy as np
7 | from PyQt5.QtCore import QObject, pyqtSignal
8 | from PyQt5.QtWidgets import QMessageBox, QFileDialog
9 |
10 |
11 | class ProjectManager(QObject):
12 | """
13 | Manages project save/load operations for MetroMuse.
14 | Handles serialization of tracks, settings, and project state.
15 | """
16 |
17 | projectSaved = pyqtSignal(str) # Emits project file path
18 | projectLoaded = pyqtSignal(str) # Emits project file path
19 |
20 | def __init__(self, parent=None):
21 | super().__init__(parent)
22 | self.current_project_path = None
23 | self.project_modified = False
24 |
25 | def create_new_project(self):
26 | """Create a new empty project"""
27 | self.current_project_path = None
28 | self.project_modified = False
29 | return {
30 | "version": "1.0",
31 | "name": "Untitled Project",
32 | "tracks": [],
33 | "settings": {
34 | "sample_rate": 44100,
35 | "bit_depth": 16,
36 | "global_volume": 1.0,
37 | "playback_position": 0.0
38 | },
39 | "metadata": {
40 | "created_date": None,
41 | "last_modified": None,
42 | "total_duration": 0.0
43 | }
44 | }
45 |
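As `save_project`/`load_project` below show, a `.mmp` file is plain UTF-8 JSON, so a project survives a dump/load round trip unchanged. A sketch using a temporary directory (the `demo.mmp` filename is hypothetical):

```python
import json
import os
import tempfile

project = {
    "version": "1.0",
    "name": "Untitled Project",
    "tracks": [],
    "settings": {"sample_rate": 44100, "bit_depth": 16,
                 "global_volume": 1.0, "playback_position": 0.0},
}

# Write and re-read the project exactly as save_project/load_project do.
path = os.path.join(tempfile.mkdtemp(), "demo.mmp")
with open(path, "w", encoding="utf-8") as f:
    json.dump(project, f, indent=2, ensure_ascii=False)
with open(path, "r", encoding="utf-8") as f:
    loaded = json.load(f)
```

Because only file paths (not audio samples) are serialized, these files stay small; the trade-off is that loading fails gracefully only if the referenced audio files still exist.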
46 | def serialize_track(self, track) -> Dict[str, Any]:
47 | """Serialize a track object to a dictionary"""
48 | track_data = {
49 | "track_id": track.track_id,
50 | "name": track.name,
51 | "color": track.color.name() if track.color else None,
52 | "muted": track.muted,
53 | "soloed": track.soloed,
54 | "volume": track.volume,
55 | "filepath": track.filepath,
56 | "position": {
57 | "x": 0, # Track position in timeline
58 | "y": 0 # Track vertical position
59 | },
60 | "effects": [], # Placeholder for future effects
61 | "automation": {} # Placeholder for automation data
62 | }
63 |
64 | # Only store file path, not actual audio data (for file size efficiency)
65 | # Audio data will be reloaded from file path when project is opened
66 |
67 | return track_data
68 |
69 | def save_project(self, tracks, settings=None, filepath=None):
70 | """Save current project to file"""
71 | try:
72 | # If no filepath provided, show save dialog
73 | if filepath is None:
74 | filepath, _ = QFileDialog.getSaveFileName(
75 | None,
76 | "Save Project",
77 | "",
78 | "MetroMuse Project Files (*.mmp);;All Files (*)"
79 | )
80 | if not filepath:
81 | return False
82 |
83 | # Ensure .mmp extension
84 | if not filepath.endswith('.mmp'):
85 | filepath += '.mmp'
86 |
87 | # Create project data structure
88 | project_data = {
89 | "version": "1.0",
90 | "name": os.path.splitext(os.path.basename(filepath))[0],
91 | "tracks": [self.serialize_track(track) for track in tracks],
92 | "settings": settings or {
93 | "sample_rate": 44100,
94 | "bit_depth": 16,
95 | "global_volume": 1.0,
96 | "playback_position": 0.0
97 | },
 98 |             "metadata": {
 99 |                 "created_date": None,   # Placeholder; not currently populated
100 |                 "last_modified": None,  # Placeholder; not currently populated
101 |                 "total_duration": max((t.waveform_canvas.max_time for t in tracks if t.waveform_canvas), default=0.0)
102 |             }
103 | }
104 |
105 | # Save to file
106 | with open(filepath, 'w', encoding='utf-8') as f:
107 | json.dump(project_data, f, indent=2, ensure_ascii=False)
108 |
109 | self.current_project_path = filepath
110 | self.project_modified = False
111 | self.projectSaved.emit(filepath)
112 |
113 | return True
114 |
115 | except Exception as e:
116 | QMessageBox.critical(None, "Save Error", f"Could not save project:\n{str(e)}")
117 | return False
118 |
119 | def load_project(self, filepath=None):
120 | """Load project from file"""
121 | try:
122 | # If no filepath provided, show open dialog
123 | if filepath is None:
124 | filepath, _ = QFileDialog.getOpenFileName(
125 | None,
126 | "Open Project",
127 | "",
128 | "MetroMuse Project Files (*.mmp);;All Files (*)"
129 | )
130 | if not filepath:
131 | return None
132 |
133 | # Check if file exists
134 | if not os.path.exists(filepath):
135 | QMessageBox.warning(None, "File Not Found", f"Project file not found: {filepath}")
136 | return None
137 |
138 | # Load project data
139 | with open(filepath, 'r', encoding='utf-8') as f:
140 | project_data = json.load(f)
141 |
142 | # Validate project data
143 | if not self._validate_project_data(project_data):
144 | QMessageBox.warning(None, "Invalid Project", "The project file appears to be corrupted or invalid.")
145 | return None
146 |
147 | self.current_project_path = filepath
148 | self.project_modified = False
149 | self.projectLoaded.emit(filepath)
150 |
151 | return project_data
152 |
153 | except json.JSONDecodeError:
154 | QMessageBox.critical(None, "Load Error", "The project file is corrupted or not a valid MetroMuse project.")
155 | return None
156 | except Exception as e:
157 | QMessageBox.critical(None, "Load Error", f"Could not load project:\n{str(e)}")
158 | return None
159 |
160 | def _validate_project_data(self, data):
161 | """Validate project data structure"""
162 | required_keys = ['version', 'tracks', 'settings']
163 | return all(key in data for key in required_keys)
164 |
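The structural check in `_validate_project_data` is just a required-keys test. A standalone sketch (the free-function name `validate_project` is illustrative):

```python
def validate_project(data: dict) -> bool:
    # Minimal structural check: required top-level keys must be present.
    # Note this validates presence only, not the types of the values.
    return all(key in data for key in ("version", "tracks", "settings"))
```

A stricter validator could also check that `tracks` is a list and `version` is a known value, but presence-only checking keeps older project files loadable.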
165 | def save_project_as(self, tracks, settings=None):
166 | """Save project with new filename"""
167 | return self.save_project(tracks, settings, filepath=None)
168 |
169 | def is_project_modified(self):
170 | """Check if project has unsaved changes"""
171 | return self.project_modified
172 |
173 | def mark_project_modified(self):
174 | """Mark project as modified"""
175 | self.project_modified = True
176 |
177 | def get_current_project_name(self):
178 | """Get current project name"""
179 | if self.current_project_path:
180 | return os.path.splitext(os.path.basename(self.current_project_path))[0]
181 | return "Untitled Project"
182 |
183 | def get_recent_projects(self, max_count=10):
184 | """Get list of recent project files"""
185 | recent_projects_file = Path(__file__).resolve().parent / "recent_projects.json"
186 | try:
187 | if recent_projects_file.exists():
188 | with open(recent_projects_file, 'r') as f:
189 | recent = json.load(f)
190 | # Filter out non-existent files
191 | return [p for p in recent if os.path.exists(p)][:max_count]
192 | except Exception:
193 | pass
194 | return []
195 |
196 | def add_to_recent_projects(self, filepath):
197 | """Add project to recent projects list"""
198 | recent_projects_file = Path(__file__).resolve().parent / "recent_projects.json"
199 | try:
200 | recent = self.get_recent_projects()
201 | if filepath in recent:
202 | recent.remove(filepath)
203 | recent.insert(0, filepath)
204 | recent = recent[:10] # Keep only 10 most recent
205 |
206 | with open(recent_projects_file, 'w') as f:
207 | json.dump(recent, f)
208 | except Exception:
209 | pass
210 |
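The recent-projects bookkeeping above is a classic most-recently-used (MRU) list update: remove any earlier occurrence, push to the front, trim to the cap. A pure-function sketch (the name `touch_recent` is illustrative; the module mutates the list in place instead):

```python
def touch_recent(recent: list, filepath: str, limit: int = 10) -> list:
    # Drop any earlier occurrence so the path is not duplicated,
    # move it to the front, and keep only the `limit` newest entries.
    recent = [p for p in recent if p != filepath]
    recent.insert(0, filepath)
    return recent[:limit]
```

Returning a new list (rather than mutating) makes the helper safe to call with the result of `get_recent_projects`, which already filters out paths that no longer exist.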
211 |
--------------------------------------------------------------------------------
/src/styles.qss:
--------------------------------------------------------------------------------
1 | /* MetroMuse - Modern Dark Theme Stylesheet */
2 |
  3 | /* Color Palette
  4 |    Qt style sheets do not support custom properties or variables, so the
  5 |    values below are documentation only and are inlined throughout this file.
  6 |    Main colors:
  7 |      dark-bg:          #232629
  8 |      darker-bg:        #1a1d23
  9 |      light-fg:         #eef1f4
 10 |      accent:           #ffea47
 11 |      secondary-accent: #ff6b6b
 12 |      success:          #5ad95a
 13 |      warning:          #ffc14d
 14 |    Secondary colors:
 15 |      panel-bg:         #21242b
 16 |      border:           #31343a
 17 |      button-bg:        #2a2e36
 18 |      button-hover:     #383c45
 19 |      selected:         #ffea47
 20 | */
21 |
22 | /* Main Application Window */
23 | QMainWindow {
24 | background-color: #232629;
25 | color: #eef1f4;
26 | }
27 |
28 | QWidget {
29 | background-color: #232629;
30 | color: #eef1f4;
31 | font-size: 10pt;
32 | }
33 |
34 | /* Buttons and Controls */
35 | QPushButton, QToolButton {
36 | background-color: #2a2e36;
37 | color: #eef1f4;
38 | border: none;
39 | border-radius: 6px;
40 | padding: 8px 12px;
41 | min-width: 48px;
42 | min-height: 48px;
43 | font-weight: bold;
44 | }
45 |
46 | QPushButton:hover, QToolButton:hover {
47 | background-color: #383c45;
48 | }
49 |
50 | QPushButton:pressed, QToolButton:pressed {
51 | background-color: #ffea47;
52 | color: #000000;
53 | }
54 |
55 | QPushButton:checked, QToolButton:checked {
56 | background-color: #ffea47;
57 | color: #000000;
58 | }
59 |
60 | QPushButton:disabled, QToolButton:disabled {
61 | background-color: #1e2128;
62 | color: #6c7079;
63 | }
64 |
65 | /* Menu Bar and Menus */
66 | QMenuBar {
67 | background-color: #1a1d23;
68 | color: #eef1f4;
69 | border-bottom: 1px solid #31343a;
70 | min-height: 32px;
71 | padding: 2px;
72 | }
73 |
74 | QMenuBar::item {
75 | background-color: transparent;
76 | padding: 6px 12px;
77 | border-radius: 4px;
78 | }
79 |
80 | QMenuBar::item:selected {
81 | background-color: #383c45;
82 | }
83 |
84 | QMenu {
85 | background-color: #21242b;
86 | color: #eef1f4;
87 | border: 1px solid #31343a;
88 | border-radius: 6px;
89 | padding: 4px;
90 | }
91 |
92 | QMenu::item {
93 | padding: 6px 24px 6px 12px;
94 | border-radius: 4px;
95 | }
96 |
97 | QMenu::item:selected {
98 | background-color: #383c45;
99 | }
100 |
101 | QMenu::separator {
102 | height: 1px;
103 | background-color: #31343a;
104 | margin: 4px 8px;
105 | }
106 |
107 | /* Sliders and Progress Bars */
108 | QSlider::groove:horizontal {
109 | background-color: #1e2128;
110 | height: 8px;
111 | border-radius: 4px;
112 | }
113 |
114 | QSlider::handle:horizontal {
115 | background-color: #ffea47;
116 | width: 18px;
117 | height: 18px;
118 | margin: -5px 0;
119 | border-radius: 9px;
120 | }
121 |
122 | QSlider::handle:horizontal:hover {
123 | background-color: #ffea47;
124 | }
125 |
126 | QProgressBar {
127 | background-color: #1e2128;
128 | border-radius: 4px;
129 | text-align: center;
130 | color: #eef1f4;
131 | min-height: 10px;
132 | }
133 |
134 | QProgressBar::chunk {
135 | background-color: #ffea47;
136 | border-radius: 4px;
137 | }
138 |
139 | /* Frames, Panels, and Groupboxes */
140 | QFrame, QGroupBox {
141 | background-color: #21242b;
142 | border-radius: 8px;
143 | border: 1px solid #31343a;
144 | }
145 |
146 | QGroupBox {
147 | margin-top: 24px;
148 | padding-top: 24px;
149 | font-weight: bold;
150 | }
151 |
152 | QGroupBox::title {
153 | subcontrol-origin: margin;
154 | subcontrol-position: top center;
155 | padding: 5px 10px;
156 | background-color: #21242b;
157 | border: 1px solid #31343a;
158 | border-radius: 4px;
159 | }
160 |
161 | /* Text Inputs and Line Edits */
162 | QLineEdit, QTextEdit {
163 | background-color: #1a1d23;
164 | color: #eef1f4;
165 | border: 1px solid #31343a;
166 | border-radius: 6px;
167 | padding: 8px;
168 | selection-background-color: #3a6d99;
169 | }
170 |
171 | /* List Widgets and Tree Views */
172 | QListWidget, QTreeView {
173 | background-color: #1a1d23;
174 | color: #eef1f4;
175 | border: 1px solid #31343a;
176 | border-radius: 6px;
177 | alternate-background-color: #21242b;
178 | }
179 |
180 | QListWidget::item, QTreeView::item {
181 | padding: 4px;
182 | border-radius: 4px;
183 | }
184 |
185 | QListWidget::item:selected, QTreeView::item:selected {
186 | background-color: #3a6d99;
187 | color: #ffffff;
188 | }
189 |
190 | QListWidget::item:hover, QTreeView::item:hover {
191 | background-color: #2a3542;
192 | }
193 |
194 | /* Scroll Bars */
195 | QScrollBar {
196 | background-color: #21242b;
197 | width: 16px;
198 | height: 16px;
199 | margin: 0px;
200 | }
201 |
202 | QScrollBar::handle {
203 | background-color: #3a3f4b;
204 | border-radius: 5px;
205 | min-height: 30px;
206 | min-width: 30px;
207 | }
208 |
209 | QScrollBar::handle:hover {
210 | background-color: #474d5d;
211 | }
212 |
213 | QScrollBar::add-line, QScrollBar::sub-line {
214 | height: 0px;
215 | width: 0px;
216 | }
217 |
218 | QScrollBar::add-page, QScrollBar::sub-page {
219 | background: none;
220 | }
221 |
222 | /* Tab Widgets */
223 | QTabWidget::pane {
224 | border: 1px solid #31343a;
225 | border-radius: 6px;
226 | background-color: #21242b;
227 | }
228 |
229 | QTabBar::tab {
230 | background-color: #1e2128;
231 | color: #eef1f4;
232 | border: 1px solid #31343a;
233 | border-bottom: none;
234 | border-top-left-radius: 6px;
235 | border-top-right-radius: 6px;
236 | padding: 8px 16px;
237 | min-width: 80px;
238 | margin-right: 2px;
239 | }
240 |
241 | QTabBar::tab:selected {
242 | background-color: #21242b;
243 | border-bottom: 2px solid #ffea47;
244 | }
245 |
246 | QTabBar::tab:hover:!selected {
247 | background-color: #2a2e36;
248 | }
249 |
250 | /* Status Bar */
251 | QStatusBar {
252 | background-color: #1a1d23;
253 | color: #eef1f4;
254 | border-top: 1px solid #31343a;
255 | min-height: 28px;
256 | }
257 |
258 | QStatusBar::item {
259 | border: none;
260 | }
261 |
262 | /* Tool Tips */
263 | QToolTip {
264 | background-color: #21242b;
265 | color: #eef1f4;
266 | border: 1px solid #ffea47;
267 | border-radius: 4px;
268 | padding: 6px;
269 | }
270 |
271 | /* Specialized Widgets for MetroMuse */
272 | /* Waveform Canvas Background */
273 | WaveformCanvas {
274 | background-color: #1a1d23;
275 | border-radius: 8px;
276 | border: 1px solid #31343a;
277 | }
278 |
279 | /* Track Headers */
280 | #TrackHeader {
281 | background-color: #21242b;
282 | border: 1px solid #31343a;
283 | border-radius: 6px;
284 | min-height: 48px;
285 | }
286 |
287 | /* Volume Sliders */
288 | #VolumeSlider::groove:horizontal {
289 | height: 4px;
290 | background-color: #1e2128;
291 | }
292 |
293 | #VolumeSlider::handle:horizontal {
294 | background-color: #ffea47;
295 | width: 16px;
296 | height: 16px;
297 | margin: -6px 0;
298 | border-radius: 8px;
299 | }
300 |
301 | /* Playback Controls Group */
302 | #PlaybackControls {
303 | background-color: #1a1d23;
304 | border-top: 1px solid #31343a;
305 | padding: 8px;
306 | }
307 |
308 | #PlaybackControls QToolButton {
309 | margin: 4px;
310 | }
311 |
312 | /* Time Display */
313 | #TimeDisplay {
314 | font-family: monospace;
315 | font-size: 16px;
316 | background-color: #1a1d23;
317 | padding: 4px 8px;
318 | border-radius: 4px;
319 | min-width: 120px;
320 | }
321 |
322 | /* MultiTrack Container */
323 | #MultiTrackContainer {
324 | background-color: #1a1d23;
325 | }
326 |
327 |
--------------------------------------------------------------------------------
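Because Qt Style Sheets have no variable mechanism, every color in the stylesheet above is written out literally. If real theming were ever wanted, one common workaround is to keep the palette in Python and substitute placeholders into the QSS text before applying it. The sketch below is illustrative only: the palette dict and template are hypothetical, and MetroMuse currently ships a static stylesheet.

```python
from string import Template

# Hypothetical palette; not part of MetroMuse.
PALETTE = {
    "dark_bg": "#232629",
    "light_fg": "#eef1f4",
    "accent": "#ffea47",
}

# $name placeholders stand in for the repeated literal colors.
QSS_TEMPLATE = Template(
    "QMainWindow { background-color: $dark_bg; color: $light_fg; }\n"
    "QPushButton:pressed { background-color: $accent; color: #000000; }\n"
)

def render_qss(palette: dict) -> str:
    """Expand $placeholders into a concrete stylesheet string."""
    return QSS_TEMPLATE.substitute(palette)

qss = render_qss(PALETTE)
assert "#232629" in qss and "$" not in qss
```

In a PyQt application the rendered string would then be applied with `app.setStyleSheet(qss)`, exactly as a static `.qss` file's contents would be.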
/src/track_manager.py:
--------------------------------------------------------------------------------
1 | import os
2 | import uuid
3 | import threading
4 | import time
5 | from pathlib import Path
6 | from typing import Dict, List, Optional, Tuple, Union
7 |
8 | import numpy as np
9 | import sounddevice as sd
10 | from scipy import signal
11 | from PyQt5.QtCore import Qt, QObject, pyqtSignal, pyqtSlot, QTimer, QThread
12 | from PyQt5.QtGui import QColor, QIcon
13 | from PyQt5.QtWidgets import (
14 | QFrame, QHBoxLayout, QLabel, QSlider, QToolButton,
15 | QVBoxLayout, QWidget, QLineEdit, QColorDialog, QScrollArea,
16 | QSizePolicy, QPushButton, QMenu, QProgressBar, QMessageBox
17 | )
18 |
19 | # Import the existing WaveformCanvas class
20 | from track_renderer import EnhancedWaveformCanvas
21 | from pydub import AudioSegment
22 | from error_handler import get_error_handler
23 |
24 | # Constants for UI
25 | DARK_BG = '#232629'
26 | DARK_FG = '#eef1f4'
27 | ACCENT_COLOR = '#47cbff'
28 | TRACK_COLORS = [
29 | '#47cbff', # Sky blue
30 | '#ff6b6b', # Coral red
31 | '#5ad95a', # Green
32 | '#ffc14d', # Orange
33 | '#af8cff', # Purple
34 | '#ff9cee', # Pink
35 | '#4deeea', # Cyan
36 | '#ffec59', # Yellow
37 | '#ffa64d', # Amber
38 | '#9cff9c', # Light green
39 | ]
40 |
41 |
42 | class AudioTrack(QObject):
43 | """
44 | Represents a single audio track with its own waveform, controls, and audio data.
45 | """
46 | # Signals for track events
47 | nameChanged = pyqtSignal(str)
48 | colorChanged = pyqtSignal(QColor)
49 | muteChanged = pyqtSignal(bool)
50 | soloChanged = pyqtSignal(bool)
51 | volumeChanged = pyqtSignal(float)
52 | trackDeleted = pyqtSignal(object) # Emits self reference
53 |
54 | def __init__(self, parent=None, track_id=None, name="New Track", color=None):
55 | super().__init__(parent)
56 | self.track_id = track_id or str(uuid.uuid4())
57 | self._name = name
58 | self._color = QColor(color or TRACK_COLORS[0])
59 | self._muted = False
60 | self._soloed = False
61 | self._volume = 1.0 # 0.0 to 1.0
62 |
63 | # Audio data
64 | self.samples = None
65 | self.sr = None
66 | self.audio_segment = None
67 | self.filepath = None
68 |
69 | # UI components
70 | self.waveform_canvas = None
71 | self.track_widget = None
72 | self.header_widget = None
73 |
74 | # Initialize UI components
75 | self._init_ui_components()
76 |
77 | def _init_ui_components(self):
78 | """Initialize the track's UI components including waveform and header"""
79 |
80 |         # Create waveform canvas
81 | self.waveform_canvas = EnhancedWaveformCanvas()
82 | self.waveform_canvas.setMinimumHeight(120)
83 |
84 | # Create header widget with controls
85 | self.header_widget = self._create_header_widget()
86 |
87 | # Create main track widget (container for header and waveform)
88 | self.track_widget = QWidget()
89 | layout = QVBoxLayout(self.track_widget)
90 | layout.setContentsMargins(0, 0, 0, 0)
91 | layout.setSpacing(0)
92 | layout.addWidget(self.header_widget)
93 | layout.addWidget(self.waveform_canvas)
94 |
95 | # Style the track widget
96 | self.track_widget.setStyleSheet(f"""
97 | QWidget {{ background-color: {DARK_BG}; }}
98 | QLabel {{ color: {DARK_FG}; }}
99 | """)
100 |
101 | def _create_header_widget(self):
102 | """Create the track header with name, color, mute, solo, and volume controls"""
103 | header = QFrame()
104 | header.setFrameShape(QFrame.StyledPanel)
105 | header.setMaximumHeight(48)
106 | header.setMinimumHeight(48)
107 | header.setStyleSheet(f"""
108 | QFrame {{
109 | background-color: #21242b;
110 | border: 1px solid #31343a;
111 | border-radius: 6px;
112 | margin: 2px;
113 | }}
114 | QToolButton {{
115 | background-color: #2a2e36;
116 | color: {DARK_FG};
117 | border: none;
118 | border-radius: 4px;
119 | padding: 4px;
120 | min-width: 48px;
121 | min-height: 48px;
122 | }}
123 | QToolButton:hover {{
124 | background-color: #383c45;
125 | }}
126 | QToolButton:checked {{
127 | background-color: {self._color.name()};
128 | color: #000000;
129 | }}
130 | """)
131 |
132 | layout = QHBoxLayout(header)
133 | layout.setContentsMargins(4, 4, 4, 4)
134 | layout.setSpacing(8)
135 |
136 | # Color selector button
137 | self.color_btn = QToolButton()
138 | self.color_btn.setFixedSize(32, 32)
139 | self.color_btn.setStyleSheet(f"background-color: {self._color.name()}; border-radius: 16px;")
140 | self.color_btn.setToolTip("Change track color")
141 | self.color_btn.clicked.connect(self._show_color_dialog)
142 |
143 | # Track name edit
144 | self.name_edit = QLineEdit(self._name)
145 | self.name_edit.setStyleSheet(f"color: {DARK_FG}; background-color: #2a2e36; border: 1px solid #373a42; border-radius: 4px; padding: 4px;")
146 | self.name_edit.editingFinished.connect(self._update_name)
147 | self.name_edit.setMinimumWidth(150)
148 |
149 | # Mute button
150 | self.mute_btn = QToolButton()
151 | self.mute_btn.setText("M")
152 | self.mute_btn.setCheckable(True)
153 | self.mute_btn.setToolTip("Mute track")
154 | self.mute_btn.clicked.connect(self._toggle_mute)
155 |
156 | # Solo button
157 | self.solo_btn = QToolButton()
158 | self.solo_btn.setText("S")
159 | self.solo_btn.setCheckable(True)
160 | self.solo_btn.setToolTip("Solo track")
161 | self.solo_btn.clicked.connect(self._toggle_solo)
162 |
163 | # Volume slider
164 | self.volume_slider = QSlider(Qt.Horizontal)
165 | self.volume_slider.setRange(0, 100)
166 | self.volume_slider.setValue(int(self._volume * 100))
167 | self.volume_slider.setToolTip("Adjust track volume")
168 | self.volume_slider.valueChanged.connect(self._update_volume)
169 |
170 | # Delete track button
171 | self.delete_btn = QToolButton()
172 | self.delete_btn.setText("✕")
173 | self.delete_btn.setToolTip("Delete track")
174 | self.delete_btn.clicked.connect(self._delete_track)
175 |
176 | # Add widgets to layout
177 | layout.addWidget(self.color_btn)
178 | layout.addWidget(self.name_edit, 1)
179 | layout.addWidget(self.mute_btn)
180 | layout.addWidget(self.solo_btn)
181 | layout.addWidget(self.volume_slider, 2)
182 | layout.addWidget(self.delete_btn)
183 |
184 | return header
185 |
186 | def set_audio_data(self, samples, sr, audio_segment=None, filepath=None):
187 | """Set the audio data for this track and update the waveform display"""
188 | try:
189 | self.samples = samples
190 | self.sr = sr
191 | self.audio_segment = audio_segment
192 | self.filepath = filepath
193 |
194 | if self.waveform_canvas and samples is not None and sr is not None:
195 |                 # For large files, plot a downsampled waveform to avoid UI blocking
196 | if samples.size > 1000000: # If more than 1M samples
197 | self._update_waveform_async(samples, sr)
198 | else:
199 | self.waveform_canvas.plot_waveform(samples, sr)
200 | self._update_header_info()
201 |
202 | except Exception as e:
203 | get_error_handler().log_error(f"Error setting audio data for track {self.name}: {str(e)}")
204 | raise
205 |
206 | def _update_waveform_async(self, samples, sr):
207 |         """Plot a downsampled waveform for large files (runs synchronously despite the name)"""
208 | try:
209 | # Downsample for display if file is very large
210 | if samples.size > 5000000: # If more than 5M samples
211 | # Downsample to every 10th sample for display
212 | display_samples = samples[::10] if samples.ndim == 1 else samples[:, ::10]
213 | display_sr = sr // 10
214 | self.waveform_canvas.plot_waveform(display_samples, display_sr)
215 | else:
216 | self.waveform_canvas.plot_waveform(samples, sr)
217 | except Exception as e:
218 | get_error_handler().log_error(f"Error updating waveform for track {self.name}: {str(e)}")
219 |
220 | def _update_header_info(self):
221 | """Update header with audio information"""
222 | if hasattr(self, 'filepath') and self.filepath and self.name_edit:
223 | if self._name == "New Track": # Only auto-update if using default name
224 | basename = os.path.basename(self.filepath)
225 | self.name_edit.setText(basename)
226 | self._name = basename
227 | self.nameChanged.emit(self._name)
228 |
229 | # Properties and event handlers
230 | @property
231 | def name(self):
232 | return self._name
233 |
234 | @name.setter
235 | def name(self, value):
236 | if value != self._name:
237 | self._name = value
238 | if hasattr(self, 'name_edit') and self.name_edit:
239 | self.name_edit.setText(value)
240 | self.nameChanged.emit(value)
241 |
242 | def _update_name(self):
243 | new_name = self.name_edit.text()
244 | if new_name != self._name:
245 | self._name = new_name
246 | self.nameChanged.emit(new_name)
247 |
248 | @property
249 | def color(self):
250 | return self._color
251 |
252 | @color.setter
253 | def color(self, value):
254 | if isinstance(value, str):
255 | value = QColor(value)
256 |
257 | if value != self._color:
258 | self._color = value
259 | if hasattr(self, 'color_btn') and self.color_btn:
260 | self.color_btn.setStyleSheet(f"background-color: {value.name()}; border-radius: 16px;")
261 |
262 | # Update mute/solo button colors
263 | if hasattr(self, 'mute_btn') and self.mute_btn:
264 | self.mute_btn.setStyleSheet(f"QToolButton:checked {{ background-color: {value.name()}; }}")
265 | if hasattr(self, 'solo_btn') and self.solo_btn:
266 | self.solo_btn.setStyleSheet(f"QToolButton:checked {{ background-color: {value.name()}; }}")
267 |
268 | self.colorChanged.emit(value)
269 |
270 | def _show_color_dialog(self):
271 | color = QColorDialog.getColor(self._color, None, "Select Track Color")
272 | if color.isValid():
273 | self.color = color
274 |
275 | @property
276 | def muted(self):
277 | return self._muted
278 |
279 | @muted.setter
280 | def muted(self, value):
281 | if value != self._muted:
282 | self._muted = value
283 | if hasattr(self, 'mute_btn') and self.mute_btn:
284 | self.mute_btn.setChecked(value)
285 | self.muteChanged.emit(value)
286 |
287 | def _toggle_mute(self):
288 | self.muted = self.mute_btn.isChecked()
289 |
290 | @property
291 | def soloed(self):
292 | return self._soloed
293 |
294 | @soloed.setter
295 | def soloed(self, value):
296 | if value != self._soloed:
297 | self._soloed = value
298 | if hasattr(self, 'solo_btn') and self.solo_btn:
299 | self.solo_btn.setChecked(value)
300 | self.soloChanged.emit(value)
301 |
302 | def _toggle_solo(self):
303 | self.soloed = self.solo_btn.isChecked()
304 |
305 | @property
306 | def volume(self):
307 | return self._volume
308 |
309 | @volume.setter
310 | def volume(self, value):
311 | # Ensure volume is between 0 and 1
312 | value = max(0.0, min(1.0, value))
313 | if value != self._volume:
314 | self._volume = value
315 | if hasattr(self, 'volume_slider') and self.volume_slider:
316 | self.volume_slider.setValue(int(value * 100))
317 | self.volumeChanged.emit(value)
318 |
319 | def _update_volume(self):
320 | self.volume = self.volume_slider.value() / 100.0
321 |
322 | def _delete_track(self):
323 | """Handle track deletion request"""
324 | self.trackDeleted.emit(self)
325 |
326 | def is_playable(self):
327 | """Check if this track has playable audio data"""
328 | return (self.samples is not None and self.sr is not None) or self.audio_segment is not None
329 |
330 | def get_mixed_samples(self, start_time=0, duration=None):
331 | """
332 | Get audio samples for playback, applying volume and mute settings.
333 | Returns None if track is muted or has no audio data.
334 | """
335 | if self.muted or not self.is_playable():
336 | return None, None
337 |
338 | if self.audio_segment is not None:
339 | # Convert start_time from seconds to milliseconds
340 | start_ms = int(start_time * 1000)
341 | if duration is not None:
342 | duration_ms = int(duration * 1000)
343 | segment = self.audio_segment[start_ms:start_ms + duration_ms]
344 | else:
345 | segment = self.audio_segment[start_ms:]
346 |
347 | # Apply volume
348 | if self._volume != 1.0:
349 | gain_db = 20 * np.log10(self._volume) if self._volume > 0 else -96.0
350 | segment = segment.apply_gain(gain_db)
351 |
352 | # Convert to numpy array for mixing
353 | samples = np.array(segment.get_array_of_samples()).astype(np.float32)
354 | if segment.channels > 1:
355 | samples = samples.reshape((-1, segment.channels)).T
356 | else:
357 | samples = samples[None, :]
358 | samples = samples / (2 ** (8 * segment.sample_width - 1))
359 |
360 | return samples, segment.frame_rate
361 |
362 | elif self.samples is not None and self.sr is not None:
363 | # Convert start_time to sample index
364 | start_idx = int(start_time * self.sr)
365 | if duration is not None:
366 | end_idx = min(start_idx + int(duration * self.sr), self.samples.shape[-1])
367 | else:
368 | end_idx = self.samples.shape[-1]
369 |
370 | # Extract the samples and apply volume
371 | if self.samples.ndim > 1:
372 | samples = self.samples[:, start_idx:end_idx].copy()
373 | else:
374 | samples = self.samples[start_idx:end_idx].copy()
375 |
376 | samples = samples * self._volume
377 | return samples, self.sr
378 |
379 | return None, None
380 |
381 | def get_selection(self):
382 | """Get the current selection range from the waveform canvas"""
383 | if self.waveform_canvas and self.waveform_canvas.selection:
384 | return self.waveform_canvas.selection
385 | return None
386 |
387 | def set_selection(self, start, end):
388 | """Set selection range on waveform canvas"""
389 | if self.waveform_canvas:
390 | self.waveform_canvas.set_selection(start, end)
391 |
392 |
393 | class MultiTrackContainer(QWidget):
394 | """
395 | Container widget that manages multiple audio tracks and provides global playback controls.
396 | Synchronizes all track timelines and handles mixed playback.
397 | """
398 | # Signals for multitrack events
399 | playbackStarted = pyqtSignal()
400 | playbackPaused = pyqtSignal()
401 | playbackStopped = pyqtSignal()
402 | playbackPositionChanged = pyqtSignal(float) # Current position in seconds
403 | trackAdded = pyqtSignal(object) # AudioTrack reference
404 | trackRemoved = pyqtSignal(object) # AudioTrack reference
405 | selectionChanged = pyqtSignal(float, float) # start, end in seconds
406 |
407 | def __init__(self, parent=None):
408 | super().__init__(parent)
409 | self.tracks = [] # List of AudioTrack objects
410 | self.track_widgets = [] # List of track UI containers
411 |
412 | # Playback state
413 | self.is_playing = False
414 | self.is_paused = False
415 | self.playback_position = 0.0 # in seconds
416 | self.playback_stream = None
417 | self.playback_thread = None
418 | self.global_volume = 1.0
419 |
420 | # Timeline and selection
421 | self.time_ruler = None
422 | self.playhead = None
423 | self.global_selection = None # (start, end) in seconds or None
424 |
425 | # Initialize UI
426 | self._init_ui()
427 |
428 | def _init_ui(self):
429 | """Initialize the multitrack container UI"""
430 | main_layout = QVBoxLayout(self)
431 | main_layout.setContentsMargins(0, 0, 0, 0)
432 | main_layout.setSpacing(0)
433 |
434 | # Time ruler (shows seconds, minutes markers)
435 | self.time_ruler = self._create_time_ruler()
436 | main_layout.addWidget(self.time_ruler)
437 |
438 | # Tracks scroll area
439 | self.tracks_scroll = QScrollArea()
440 | self.tracks_scroll.setWidgetResizable(True)
441 | self.tracks_scroll.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOn)
442 | self.tracks_scroll.setVerticalScrollBarPolicy(Qt.ScrollBarAsNeeded)
443 | self.tracks_scroll.setStyleSheet(f"""
444 | QScrollArea {{ background: {DARK_BG}; border: none; }}
445 | QScrollBar {{ background: #21242b; width: 16px; height: 16px; }}
446 | QScrollBar::handle {{ background: #3a3f4b; border-radius: 5px; }}
447 | QScrollBar::handle:hover {{ background: #474d5d; }}
448 | """)
449 |
450 | # Container for all tracks
451 | self.tracks_container = QWidget()
452 | self.tracks_layout = QVBoxLayout(self.tracks_container)
453 | self.tracks_layout.setContentsMargins(0, 0, 0, 0)
454 | self.tracks_layout.setSpacing(4)
455 | self.tracks_layout.addStretch(1) # Push tracks to top
456 |
457 | self.tracks_scroll.setWidget(self.tracks_container)
458 | main_layout.addWidget(self.tracks_scroll, 1) # Tracks get all available space
459 |
460 | # Track controls (add track button, etc.)
461 | self.controls_widget = self._create_track_controls()
462 | main_layout.addWidget(self.controls_widget)
463 |
464 | # Playback controls
465 | self.playback_widget = self._create_playback_controls()
466 | main_layout.addWidget(self.playback_widget)
467 |
468 | def _create_time_ruler(self):
469 | """Create a time ruler widget that shows time markers"""
470 | ruler = QFrame()
471 | ruler.setMinimumHeight(30)
472 | ruler.setMaximumHeight(30)
473 | ruler.setFrameShape(QFrame.StyledPanel)
474 | ruler.setStyleSheet(f"""
475 | QFrame {{
476 | background-color: #1a1d23;
477 | border-bottom: 1px solid #31343a;
478 | }}
479 | """)
480 |
481 | # The actual time markers will be drawn using paintEvent when implemented
482 | # For now, we'll use a placeholder
483 | layout = QHBoxLayout(ruler)
484 | layout.setContentsMargins(8, 2, 8, 2)
485 | label = QLabel("Timeline - 00:00:00")
486 | label.setStyleSheet(f"color: {DARK_FG}; font-family: monospace;")
487 | layout.addWidget(label)
488 |
489 | return ruler
490 |
491 | def _create_track_controls(self):
492 | """Create controls for track management (add track, etc.)"""
493 | controls = QFrame()
494 | controls.setMaximumHeight(48)
495 | controls.setFrameShape(QFrame.StyledPanel)
496 | controls.setStyleSheet(f"""
497 | QFrame {{
498 | background-color: #1e2128;
499 | border-top: 1px solid #31343a;
500 | }}
501 | QPushButton {{
502 | background-color: #2a2e36;
503 | color: {DARK_FG};
504 | border: none;
505 | border-radius: 6px;
506 | padding: 8px 16px;
507 | font-weight: bold;
508 | min-height: 36px;
509 | }}
510 | QPushButton:hover {{
511 | background-color: #383c45;
512 | }}
513 | """)
514 |
515 | layout = QHBoxLayout(controls)
516 | layout.setContentsMargins(8, 4, 8, 4)
517 |
518 | # Add Track button
519 | self.add_track_btn = QPushButton("+ Add Track")
520 | self.add_track_btn.setToolTip("Add a new empty track")
521 | self.add_track_btn.clicked.connect(self.add_empty_track)
522 |
523 | # Import Audio button
524 | self.import_audio_btn = QPushButton("Import Audio")
525 | self.import_audio_btn.setToolTip("Import audio to a new track")
526 | self.import_audio_btn.clicked.connect(self.import_audio_file)
527 |
528 | layout.addWidget(self.add_track_btn)
529 | layout.addWidget(self.import_audio_btn)
530 | layout.addStretch(1)
531 |
532 | return controls
533 |
534 | def _create_playback_controls(self):
535 | """Create transport controls for playback"""
536 | controls = QFrame()
537 | controls.setMaximumHeight(60)
538 | controls.setFrameShape(QFrame.StyledPanel)
539 | controls.setStyleSheet(f"""
540 | QFrame {{
541 | background-color: #21242b;
542 | border-top: 1px solid #31343a;
543 | }}
544 | QToolButton {{
545 | background-color: #2a2e36;
546 | color: {DARK_FG};
547 | border: none;
548 | border-radius: 6px;
549 | padding: 8px;
550 | min-width: 48px;
551 | min-height: 48px;
552 | }}
553 | QToolButton:hover {{
554 | background-color: #383c45;
555 | }}
556 | QToolButton:checked {{
557 | background-color: {ACCENT_COLOR};
558 | color: #000000;
559 | }}
560 | """)
561 |
562 | layout = QHBoxLayout(controls)
563 | layout.setContentsMargins(8, 4, 8, 4)
564 | layout.setSpacing(12)
565 |
566 | # Create playback control buttons
567 | self.rewind_btn = QToolButton()
568 | self.rewind_btn.setText("⏪")
569 | self.rewind_btn.setToolTip("Rewind (Home)")
570 | self.rewind_btn.clicked.connect(self.rewind)
571 |
572 | self.play_btn = QToolButton()
573 | self.play_btn.setText("▶")
574 | self.play_btn.setToolTip("Play (Space)")
575 | self.play_btn.clicked.connect(self.play)
576 |
577 | self.pause_btn = QToolButton()
578 | self.pause_btn.setText("⏸")
579 | self.pause_btn.setToolTip("Pause (Space)")
580 | self.pause_btn.clicked.connect(self.pause)
581 |
582 | self.stop_btn = QToolButton()
583 | self.stop_btn.setText("⏹")
584 | self.stop_btn.setToolTip("Stop (Esc)")
585 | self.stop_btn.clicked.connect(self.stop)
586 |
587 | # Current time display
588 | self.time_display = QLabel("00:00.000")
589 | self.time_display.setStyleSheet(f"""
590 | color: {DARK_FG};
591 | font-family: monospace;
592 | font-size: 16px;
593 | background-color: #1a1d23;
594 | padding: 4px 8px;
595 | border-radius: 4px;
596 | """)
597 |
598 | # Add widgets to layout
599 | layout.addWidget(self.rewind_btn)
600 | layout.addWidget(self.play_btn)
601 | layout.addWidget(self.pause_btn)
602 | layout.addWidget(self.stop_btn)
603 | layout.addWidget(self.time_display, 1)
604 |
605 | return controls
606 |
607 | # Track management methods
608 | def add_empty_track(self):
609 | """Add a new empty track to the container"""
610 | # Create a new track with a color from the TRACK_COLORS list
611 | track_color = TRACK_COLORS[len(self.tracks) % len(TRACK_COLORS)]
612 | track = AudioTrack(self, name=f"Track {len(self.tracks) + 1}", color=track_color)
613 | return self._add_track(track)
614 |
615 | def _add_track(self, track):
616 | """Internal method to add a track to the container"""
617 | # Connect track signals
618 | track.trackDeleted.connect(self.remove_track)
619 |
620 | # Add track to list and UI
621 | self.tracks.append(track)
622 |
623 | # Add track widget to the layout (before the stretch)
624 | self.tracks_layout.insertWidget(self.tracks_layout.count() - 1, track.track_widget)
625 | self.track_widgets.append(track.track_widget)
626 |
627 | # Emit signal
628 | self.trackAdded.emit(track)
629 | return track
630 |
631 | def remove_track(self, track):
632 | """Remove a track from the container"""
633 | if track in self.tracks:
634 | # Remove track widget from layout
635 | if track.track_widget in self.track_widgets:
636 | self.tracks_layout.removeWidget(track.track_widget)
637 | self.track_widgets.remove(track.track_widget)
638 | track.track_widget.deleteLater()
639 |
640 | # Remove track from list
641 | self.tracks.remove(track)
642 |
643 | # Emit signal
644 | self.trackRemoved.emit(track)
645 |
646 | # Clean up track resources
647 | track.deleteLater()
648 |
649 | def clear_tracks(self):
650 | """Remove all tracks"""
651 | # Make a copy of the list to avoid modification during iteration
652 | tracks_copy = self.tracks.copy()
653 | for track in tracks_copy:
654 | self.remove_track(track)
655 |
656 | def get_track_by_id(self, track_id):
657 | """Get a track by its ID"""
658 | for track in self.tracks:
659 | if track.track_id == track_id:
660 | return track
661 | return None
662 |
663 | def import_audio_file(self):
664 | """Open a file dialog to import audio file to a new track"""
665 | from PyQt5.QtWidgets import QFileDialog
666 | filepath, _ = QFileDialog.getOpenFileName(
667 | self, "Import Audio File", "",
668 | "Audio Files (*.wav *.flac *.mp3 *.aac);;All Files (*)"
669 | )
670 | if filepath:
671 | self.load_audio_to_new_track(filepath)
672 |
673 | def load_audio_to_new_track(self, filepath):
674 | """Load an audio file into a new track"""
675 | try:
676 | # Create a new track with the file basename as the track name
677 | basename = os.path.basename(filepath)
678 | track_color = TRACK_COLORS[len(self.tracks) % len(TRACK_COLORS)]
679 | track = AudioTrack(self, name=basename, color=track_color)
680 |
681 | # Try to load the audio file
682 | self._load_audio_file(track, filepath)
683 |
684 | # Add track to container
685 | self._add_track(track)
686 | return track
687 |
688 | except Exception as e:
689 | from PyQt5.QtWidgets import QMessageBox
690 | QMessageBox.critical(self, "Import Error", f"Could not import audio file:\n{str(e)}")
691 | return None
692 |
693 | def _load_audio_file(self, track, filepath):
694 | """Load an audio file into a track using the appropriate loader"""
695 | ext = os.path.splitext(filepath)[1][1:].lower()
696 |
697 | # Try pydub first for supported formats
698 | if ext in ['mp3', 'flac', 'wav', 'aac']:
699 | try:
700 | if ext == 'aac':
701 | audio = AudioSegment.from_file(filepath, 'aac')
702 | else:
703 | audio = AudioSegment.from_file(filepath)
704 |
705 | # Convert to numpy array
706 | samples = np.array(audio.get_array_of_samples()).astype(np.float32)
707 | if audio.channels > 1:
708 | samples = samples.reshape((-1, audio.channels)).T
709 | samples = samples / (2 ** (8 * audio.sample_width - 1))
710 |
711 | # Set track data
712 | track.set_audio_data(samples, audio.frame_rate, audio, filepath)
713 | return
714 |
715 | except Exception as e:
716 | raise RuntimeError(f"Failed to decode {ext.upper()} audio file; ensure ffmpeg is present.\nError: {str(e)}")
717 |
718 | # Fallback to librosa for other formats
719 | import librosa
720 | samples, sr = librosa.load(filepath, sr=None, mono=False)
721 | if samples.ndim == 1:
722 | samples = np.expand_dims(samples, axis=0)
723 |
724 | # Set track data
725 | track.set_audio_data(samples, sr, None, filepath)
726 |
727 | # --- Playback Methods ---
728 | def play(self, start_position=None):
729 | """Start playback from the given position or current position"""
730 | if not self.tracks:
731 | return
732 |
733 | if start_position is not None:
734 | self.playback_position = start_position
735 |
736 | self.is_playing = True
737 | self.is_paused = False
738 | self._start_playback()
739 | self.playbackStarted.emit()
740 |
741 | def pause(self):
742 | """Pause playback"""
743 | self.is_playing = False
744 | self.is_paused = True
745 | self._stop_playback()
746 | self.playbackPaused.emit()
747 |
748 | def stop(self):
749 | """Stop playback and reset position"""
750 | self.is_playing = False
751 | self.is_paused = False
752 | self._stop_playback()
753 | self.playback_position = 0.0
754 | self.playbackStopped.emit()
755 |
756 | def rewind(self):
757 | """Rewind to beginning"""
758 | was_playing = self.is_playing
759 | self.stop()
760 | if was_playing:
761 | self.play(start_position=0.0)
762 |
763 | def _start_playback(self):
764 | """Start audio playback using sounddevice"""
765 | self._stop_playback() # Stop any existing playback
766 |
767 | # Start a new thread for playback to avoid UI blocking
768 | self.playback_thread = threading.Thread(target=self._playback_thread_func, daemon=True)
769 | self.playback_thread.start()
770 |
771 | def _stop_playback(self):
772 | """Stop current audio playback"""
773 | if self.playback_stream is not None:
774 | try:
775 | self.playback_stream.stop()
776 | self.playback_stream.close()
777 | except Exception:
778 | pass
779 | self.playback_stream = None
780 |
781 | def _playback_thread_func(self):
782 | """Thread function for audio playback"""
783 | if not self.tracks:
784 | return
785 |
786 | try:
787 | # Get information about tracks to play
788 | active_tracks = [t for t in self.tracks if not t.muted and t.is_playable()]
789 | if not active_tracks:
790 | return
791 |
792 | # Find if any track is soloed
793 | has_solo = any(t.soloed for t in active_tracks)
794 | if has_solo:
795 | active_tracks = [t for t in active_tracks if t.soloed]
796 |
797 | # Determine sample rate to use (use highest among tracks)
798 | sr = max(t.sr for t in active_tracks if t.sr is not None)
799 |
800 | # Setup sounddevice callback for streaming audio
801 | def audio_callback(outdata, frames, time, status):
802 | if not self.is_playing:
803 |                     # Fill with zeros; calling stream.stop() from inside its
804 |                     # own callback can deadlock, so raise CallbackStop instead
805 |                     outdata.fill(0)
806 |                     raise sd.CallbackStop()
807 |
808 | # Calculate what portion of each track to play
809 | duration = frames / sr
810 |
811 | # Mix all active tracks
812 | mixed_samples = np.zeros((2, frames)) # Stereo output
813 |
814 | for track in active_tracks:
815 | track_samples, track_sr = track.get_mixed_samples(
816 | start_time=self.playback_position,
817 | duration=duration
818 | )
819 |
820 | if track_samples is not None:
821 | # Resample if needed
822 | if track_sr != sr:
823 | # Simple resampling - for proper implementation use a resampling library
824 | ratio = sr / track_sr
825 | new_len = int(track_samples.shape[1] * ratio)
826 | from scipy import signal
827 | track_samples = signal.resample(track_samples, new_len, axis=1)
828 |
829 | # Ensure correct length
830 | if track_samples.shape[1] < frames:
831 | # Pad with zeros
832 | pad_width = frames - track_samples.shape[1]
833 | track_samples = np.pad(track_samples, ((0, 0), (0, pad_width)))
834 | elif track_samples.shape[1] > frames:
835 | # Trim excess
836 | track_samples = track_samples[:, :frames]
837 |
838 | # Mix into output with track volume
839 | channels = track_samples.shape[0]
840 | if channels == 1:
841 | # Mono to stereo
842 | mixed_samples[0] += track_samples[0]
843 | mixed_samples[1] += track_samples[0]
844 | else:
845 | # Use first two channels (or duplicate if only one)
846 | mixed_samples[0] += track_samples[0]
847 | mixed_samples[1] += track_samples[min(1, channels-1)]
848 |
849 | # Apply global volume
850 | mixed_samples *= self.global_volume
851 |
852 | # Prevent clipping
853 | if np.max(np.abs(mixed_samples)) > 1.0:
854 | mixed_samples /= np.max(np.abs(mixed_samples))
855 |
856 | # Update output buffer (convert to float32 non-interleaved format)
857 | outdata[:] = mixed_samples.T.astype(np.float32)
858 |
859 | # Update playback position and emit signal
860 | self.playback_position += duration
861 | self.playbackPositionChanged.emit(self.playback_position)
862 |
863 |             # Note: widget repaints must stay on the GUI thread, so playheads
864 |             # are refreshed by connecting playbackPositionChanged (delivered
865 |             # as a queued cross-thread signal) to the tracks' update_playhead
866 |             # slots, rather than by touching the canvases from this callback.
867 |
868 | # Start the sounddevice stream
869 | self.playback_stream = sd.OutputStream(
870 | samplerate=sr,
871 | channels=2,
872 | callback=audio_callback,
873 | blocksize=1024,
874 | dtype='float32'
875 | )
876 |
877 | self.playback_stream.start()
878 |
879 | except Exception as e:
880 | print(f"Playback error: {e}")
881 | self.is_playing = False
882 | self.playbackStopped.emit()
883 |
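The mix-and-limit step inside `audio_callback` can be sketched as a standalone function. `mix_to_stereo` below is illustrative only, and assumes every buffer already shares one length and sample rate:

```python
import numpy as np

def mix_to_stereo(track_buffers, global_volume=1.0):
    """Sum per-track (channels, frames) buffers into stereo, then peak-limit."""
    frames = track_buffers[0].shape[1]
    mixed = np.zeros((2, frames))
    for buf in track_buffers:
        if buf.shape[0] == 1:
            mixed += buf[0]                             # mono: feed both channels
        else:
            mixed[0] += buf[0]                          # left
            mixed[1] += buf[min(1, buf.shape[0] - 1)]   # right (or left again)
    mixed *= global_volume
    peak = np.max(np.abs(mixed))
    if peak > 1.0:                                      # reduce gain only on overflow
        mixed /= peak
    return mixed.astype(np.float32)

# Two mono tracks at 0.8 sum to 1.6, so the limiter scales the peak back to 1.0
out = mix_to_stereo([np.full((1, 4), 0.8), np.full((1, 4), 0.8)])
```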
884 | def set_global_selection(self, start, end):
885 | """Set global selection across all tracks"""
886 | self.global_selection = (start, end)
887 |
888 | # Apply to all tracks
889 | for track in self.tracks:
890 | if track.waveform_canvas:
891 | track.waveform_canvas.set_selection(start, end)
892 |
893 | # Emit signal
894 | self.selectionChanged.emit(start, end)
895 |
896 | def get_max_duration(self):
897 | """Get the maximum duration across all tracks"""
898 | if not self.tracks:
899 | return 0.0
900 |
901 | return max(
902 | (t.waveform_canvas.max_time if t.waveform_canvas else 0.0)
903 | for t in self.tracks
904 | )
905 |
--------------------------------------------------------------------------------
/src/track_renderer.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib
3 | import matplotlib.pyplot as plt
4 | from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
5 |
6 | from PyQt5.QtCore import Qt, pyqtSignal, QObject, QEvent
7 | from PyQt5.QtGui import QColor, QPalette
8 | from PyQt5.QtWidgets import QSizePolicy
9 |
10 | # Constants for visualization
11 | DARK_BG = '#232629'
12 | DARK_FG = '#eef1f4'
13 | ACCENT_COLOR = '#47cbff'
14 | PLAYHEAD_COLOR = '#ff9cee'
15 | GRID_COLOR = '#31343a'
16 | SELECTION_COLOR = '#ffc14d'
17 |
18 |
19 | class EnhancedWaveformCanvas(FigureCanvas):
20 | """
21 | Enhanced waveform display with scrubbing support, time grid, and advanced visualization.
22 | Extends the original WaveformCanvas with more interactive features.
23 | """
24 | # Signals for interaction
25 | positionClicked = pyqtSignal(float) # Time position clicked in seconds
26 | positionDragged = pyqtSignal(float) # Time position dragged to in seconds
27 | selectionChanged = pyqtSignal(float, float) # Selection changed (start, end) in seconds
28 | zoomChanged = pyqtSignal(float, float) # Visible range changed (start, end) in seconds
29 |
30 | def __init__(self, parent=None):
31 | self.fig, self.ax = plt.subplots(facecolor=DARK_BG)
32 | super().__init__(self.fig)
33 | self.setParent(parent)
34 |
35 | # Configure the plot style
36 | self._style_axes()
37 |
38 | # Zoom & view state
39 | self._xmin = 0 # View start time (seconds)
40 | self._xmax = 1 # View end time (seconds)
41 | self.max_time = 1 # Total audio duration (seconds)
42 |
43 | # Selection state
44 | self.selection = None # (start_time, end_time) in seconds or None
45 | self._is_selecting = False
46 | self._select_start = None
47 |
48 | # Playhead state
49 | self.playhead_position = 0 # Current playback position in seconds
50 | self.playhead_line = None # The line object representing the playhead
51 |
52 | # Scrubbing state
53 | self._is_scrubbing = False
54 | self._last_scrub_pos = None
55 |
56 | # Grid and time markers
57 | self.grid_lines = []
58 | self.time_markers = []
59 | self.grid_visible = True
60 |
61 | # Live cursor and info display
62 | self.live_cursor_line = None
63 | self.live_cursor_text = None
64 | self.amplitude_label = None
65 | self.samples = None # Cached audio data
66 | self.sr = None # Sample rate
67 |
68 | # Connect mouse and key events
69 | self.mpl_connect("button_press_event", self.on_mouse_press)
70 | self.mpl_connect("motion_notify_event", self.on_mouse_move)
71 | self.mpl_connect("button_release_event", self.on_mouse_release)
72 | self.mpl_connect("scroll_event", self.on_scroll)
73 | self.mpl_connect("key_press_event", self.on_key_press)
74 |
75 | # Store color theme
76 | self.waveform_color = ACCENT_COLOR
77 | self.playhead_color = PLAYHEAD_COLOR
78 | self.grid_color = GRID_COLOR
79 | self.selection_color = SELECTION_COLOR
80 |
81 | # Set size policy for better layout behavior
82 | self.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding)
83 | self.setMinimumHeight(120)
84 |
85 | def _style_axes(self):
86 | """Apply visual styling to the axes for a modern dark theme"""
87 | self.ax.set_facecolor(DARK_BG)
88 | self.ax.tick_params(colors=DARK_FG, labelsize=9)
89 |
90 | # Style the spines (borders)
91 | for spine in self.ax.spines.values():
92 | spine.set_color(DARK_FG)
93 | spine.set_linewidth(0.5)
94 |
95 | # Set the figure background
96 | self.fig.patch.set_facecolor(DARK_BG)
97 |
98 | # Configure padding and layout
99 | self.fig.tight_layout(pad=0.5)
100 |
101 | def plot_waveform(self, samples, sr, color=None):
102 | """
103 | Plot audio waveform with enhanced visualization.
104 | Includes grid lines and keeps the current selection if any.
105 |
106 | Args:
107 | samples: Audio samples (numpy array)
108 | sr: Sample rate (Hz)
109 | color: Optional custom color for this waveform
110 | """
111 | self.samples = samples
112 | self.sr = sr
113 |
114 | # Store waveform color if provided
115 | if color:
116 | self.waveform_color = color
117 |
118 | # Clear the plot but keep axis settings
119 | self.ax.clear()
120 | self._style_axes()
121 |
122 | # Calculate time array and max time
123 | if samples.ndim > 1:
124 | samples_mono = samples.mean(axis=0)
125 | else:
126 | samples_mono = samples
127 |
128 | t = np.linspace(0, len(samples_mono) / sr, num=len(samples_mono))
129 | self.max_time = t[-1] if len(t) > 0 else 1
130 |
131 | # Set reasonable default view if current view is invalid
132 | if self._xmax <= self._xmin or self._xmax > self.max_time:
133 | self._xmin, self._xmax = 0, min(30, self.max_time) # Show first 30 seconds by default
134 |
135 | # Plot the waveform with the specified color
136 | self.ax.plot(t, samples_mono, color=self.waveform_color, linewidth=0.8)
137 |
138 | # Set axis limits and labels
139 | self.ax.set_xlim([self._xmin, self._xmax])
140 | self.ax.set_ylim([-1.05, 1.05])
141 | self.ax.set_xlabel('Time (s)', color=DARK_FG, fontsize=9)
142 | self.ax.set_ylabel('Amplitude', color=DARK_FG, fontsize=9)
143 |
144 | # Draw time grid
145 | self._draw_time_grid()
146 |
147 | # Draw selection if present
148 | self._draw_selection()
149 |
150 | # Draw playhead
151 | self._draw_playhead()
152 |
153 | # Remove any existing cursor displays
154 | if self.live_cursor_line is not None:
155 | self.live_cursor_line.remove()
156 | self.live_cursor_line = None
157 | if self.live_cursor_text is not None:
158 | self.live_cursor_text.remove()
159 | self.live_cursor_text = None
160 |
161 | self.draw()
162 |
163 | def _draw_time_grid(self):
164 | """Draw adaptive time grid with time markers based on zoom level"""
165 | # Clear existing grid lines and markers
166 | for line in self.grid_lines:
167 | try:
168 | line.remove()
169 | except (ValueError, AttributeError):
170 | # Line might already be removed or not in the axes
171 | pass
172 | self.grid_lines = []
173 |
174 | for text in self.time_markers:
175 | try:
176 | text.remove()
177 | except (ValueError, AttributeError):
178 | # Text might already be removed or not in the axes
179 | pass
180 | self.time_markers = []
181 |
182 | if not self.grid_visible:
183 | return
184 |
185 | # Determine appropriate grid spacing based on zoom level
186 | view_span = self._xmax - self._xmin
187 |
188 | if view_span <= 5: # <= 5 seconds visible: show 0.5 second grid
189 | grid_step = 0.5
190 | major_step = 1.0
191 | elif view_span <= 20: # <= 20 seconds visible: show 1 second grid
192 | grid_step = 1.0
193 | major_step = 5.0
194 | elif view_span <= 60: # <= 1 minute visible: show 5 second grid
195 | grid_step = 5.0
196 | major_step = 15.0
197 | elif view_span <= 300: # <= 5 minutes visible: show 15 second grid
198 | grid_step = 15.0
199 | major_step = 60.0
200 | else: # > 5 minutes: show 1 minute grid
201 | grid_step = 60.0
202 | major_step = 300.0
203 |
204 | # Get rounded start and end times for grid
205 | start_time = int(self._xmin / grid_step) * grid_step
206 | end_time = int(self._xmax / grid_step + 1) * grid_step
207 |
208 | # Draw vertical grid lines
209 | current_time = start_time
210 | while current_time <= end_time:
211 | # Skip if outside view
212 | if current_time < 0:
213 | current_time += grid_step
214 | continue
215 |
216 | # Determine if this is a major grid line
217 | is_major = (current_time % major_step) < 0.001 or abs(current_time % major_step - major_step) < 0.001
218 |
219 | # Draw the grid line with appropriate style
220 | line = self.ax.axvline(
221 | current_time,
222 | color=self.grid_color,
223 | linestyle='-' if is_major else ':',
224 | linewidth=0.8 if is_major else 0.5,
225 | alpha=0.6 if is_major else 0.3,
226 | zorder=-1
227 | )
228 | self.grid_lines.append(line)
229 |
230 | # Add time marker text (only for major grid lines)
231 | if is_major and current_time >= 0:
232 | # Format time based on value
233 | if current_time >= 3600: # >= 1 hour
234 | time_str = f"{int(current_time/3600):02d}:{int((current_time%3600)/60):02d}:{int(current_time%60):02d}"
235 | elif current_time >= 60: # >= 1 minute
236 | time_str = f"{int(current_time/60):d}:{int(current_time%60):02d}"
237 | else: # < 1 minute
238 | time_str = f"{current_time:.1f}s" if grid_step < 1 else f"{int(current_time):d}s"
239 |
240 | text = self.ax.text(
241 | current_time,
242 | 0.98, # Position at top of y-axis
243 | time_str,
244 | fontsize=8,
245 | color=DARK_FG,
246 | alpha=0.8,
247 | va='top',
248 | ha='center',
249 | bbox=dict(
250 | boxstyle="round,pad=0.2",
251 | facecolor=DARK_BG,
252 | alpha=0.7,
253 | edgecolor=self.grid_color,
254 | linewidth=0.5
255 | )
256 | )
257 | self.time_markers.append(text)
258 |
259 | current_time += grid_step
260 |
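The zoom-adaptive spacing chosen at the top of `_draw_time_grid` can be factored into a pure helper. `grid_step_for_span` is a sketch of that same threshold table, not part of the class:

```python
def grid_step_for_span(view_span):
    """Return (minor, major) grid spacing in seconds for a visible span."""
    if view_span <= 5:          # <= 5 s visible: 0.5 s grid
        return 0.5, 1.0
    if view_span <= 20:         # <= 20 s visible: 1 s grid
        return 1.0, 5.0
    if view_span <= 60:         # <= 1 min visible: 5 s grid
        return 5.0, 15.0
    if view_span <= 300:        # <= 5 min visible: 15 s grid
        return 15.0, 60.0
    return 60.0, 300.0          # > 5 min: 1 min grid
```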
261 | def _draw_selection(self):
262 | """Draw the audio selection region"""
263 | # Remove previous selection patch if it exists
264 | if hasattr(self, "_sel_patch") and self._sel_patch:
265 | for patch in self._sel_patch:
266 | try:
267 | patch.remove()
268 | except (ValueError, AttributeError):
269 | # Patch might already be removed or not in the axes
270 | pass
271 | self._sel_patch = []
272 |
273 | # Draw new selection if present
274 | if self.selection:
275 | from matplotlib.patches import Rectangle
276 | start, end = self.selection
277 |
278 | # Create the selection rectangle
279 | rect = Rectangle(
280 | (start, -1.05),
281 | end - start,
282 | 2.1, # Full height of plot
283 | facecolor=self.selection_color,
284 | alpha=0.2,
285 | zorder=-1
286 | )
287 | self.ax.add_patch(rect)
288 | self._sel_patch.append(rect)
289 |
290 | # Add start and end markers
291 | for x in [start, end]:
292 | line = self.ax.axvline(
293 | x,
294 | color=self.selection_color,
295 | linestyle='-',
296 | linewidth=1.5,
297 | alpha=0.8,
298 | zorder=0
299 | )
300 | self._sel_patch.append(line)
301 |
302 | # Add selection time info at top
303 | duration = end - start
304 | if duration > 0:
305 | # Format selection time based on duration
306 | if duration >= 60:
307 | time_str = f"{int(duration/60):d}m {int(duration%60):02d}s"
308 | else:
309 | time_str = f"{duration:.2f}s"
310 |
311 | text = self.ax.text(
312 | (start + end) / 2, # Center of selection
313 | 1.0, # Top of plot
314 | f"Selection: {time_str}",
315 | fontsize=9,
316 | color=DARK_FG,
317 | weight='bold',
318 | va='top',
319 | ha='center',
320 | bbox=dict(
321 | boxstyle="round,pad=0.3",
322 | facecolor=self.selection_color,
323 | alpha=0.4,
324 |                         edgecolor="none"
325 | )
326 | )
327 | self._sel_patch.append(text)
328 |
329 | def _draw_playhead(self):
330 | """Draw the playhead indicator line"""
331 | # Remove previous playhead
332 | if self.playhead_line is not None:
333 | try:
334 | self.playhead_line.remove()
335 | except (ValueError, AttributeError):
336 | # Playhead line might already be removed or not in the axes
337 | pass
338 |
339 | # Draw new playhead
340 | self.playhead_line = self.ax.axvline(
341 | self.playhead_position,
342 | color=self.playhead_color,
343 | linestyle='-',
344 | linewidth=2,
345 | alpha=0.9,
346 | zorder=10
347 | )
348 |
349 | def update_playhead(self, position):
350 | """Update the playhead position and redraw"""
351 | # Store new position
352 | self.playhead_position = position
353 |
354 | # Check if we need to scroll the view to follow the playhead
355 | view_width = self._xmax - self._xmin
356 | needs_scroll = (position < self._xmin or position > self._xmax)
357 |
358 | # Auto-scroll if playhead moves outside visible area
359 | if needs_scroll:
360 | # Center the view on the playhead
361 | half_width = view_width / 2
362 | self._xmin = max(0, position - half_width)
363 | self._xmax = min(self.max_time, position + half_width)
364 | self.ax.set_xlim([self._xmin, self._xmax])
365 |
366 | # Emit signal that view changed
367 | self.zoomChanged.emit(self._xmin, self._xmax)
368 |
369 | # Redraw the playhead
370 | self._draw_playhead()
371 | self.draw()
372 |
373 | def set_selection(self, start, end):
374 | """Set selection region in seconds"""
375 | if start > end:
376 | start, end = end, start
377 |
378 | # Bound selection to valid range
379 | start = max(0, min(start, self.max_time))
380 | end = max(0, min(end, self.max_time))
381 |
382 | self.selection = (start, end)
383 | self._draw_selection()
384 | self.draw()
385 |
386 | # Emit selection changed signal
387 | self.selectionChanged.emit(start, end)
388 |
389 | def clear_selection(self):
390 | """Clear the selection"""
391 | self.selection = None
392 | self._draw_selection()
393 | self.draw()
394 |
395 | def zoom(self, factor=0.5, center=None):
396 | """
397 | Zoom in/out centered on a specific point or the current view center.
398 |
399 | Args:
400 | factor: Zoom factor (0.5 = zoom in to half the width, 2.0 = zoom out to twice the width)
401 | center: Time point to center on (in seconds), or None to use current view center
402 | """
403 | # Get current view
404 | view_width = self._xmax - self._xmin
405 |
406 | # Determine center point for zoom
407 | if center is None:
408 | center = (self._xmin + self._xmax) / 2
409 |
410 | # Calculate new view width
411 | new_width = view_width * factor
412 |
413 | # Prevent zooming in too far (minimum 0.1 seconds) or out too far (maximum = track length)
414 | new_width = max(0.1, min(new_width, self.max_time))
415 |
416 | # Calculate new view bounds centered on the specified position
417 | half_width = new_width / 2
418 | new_xmin = max(0, center - half_width)
419 | new_xmax = min(self.max_time, center + half_width)
420 |
421 | # Adjust if we hit the boundaries
422 | if new_xmin <= 0:
423 | new_xmax = min(self.max_time, new_width)
424 | if new_xmax >= self.max_time:
425 | new_xmin = max(0, self.max_time - new_width)
426 |
427 | # Update the view
428 | self._xmin, self._xmax = new_xmin, new_xmax
429 | self.ax.set_xlim([self._xmin, self._xmax])
430 |
431 | # Update grid based on new zoom level
432 | self._draw_time_grid()
433 | self._draw_selection()
434 | self._draw_playhead()
435 | self.draw()
436 |
437 | # Emit signal that view changed
438 | self.zoomChanged.emit(self._xmin, self._xmax)
439 |
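The bounds arithmetic in `zoom` is easy to check in isolation. `zoom_bounds` below is a hypothetical standalone version of the same clamping logic:

```python
def zoom_bounds(xmin, xmax, max_time, factor, center=None):
    """Return the new (xmin, xmax) after zooming by factor around center."""
    width = xmax - xmin
    if center is None:
        center = (xmin + xmax) / 2
    # Clamp between 0.1 s (max zoom in) and the full track length
    new_width = max(0.1, min(width * factor, max_time))
    half = new_width / 2
    new_xmin = max(0.0, center - half)
    new_xmax = min(max_time, center + half)
    # Re-anchor if the window ran past either end of the track
    if new_xmin <= 0:
        new_xmax = min(max_time, new_width)
    if new_xmax >= max_time:
        new_xmin = max(0.0, max_time - new_width)
    return new_xmin, new_xmax
```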
440 | def pan(self, offset):
441 | """
442 | Pan view by offset seconds.
443 |
444 | Args:
445 | offset: Pan amount in seconds (positive = right, negative = left)
446 | """
447 | # Get current view width
448 | view_width = self._xmax - self._xmin
449 |
450 | # Calculate new bounds
451 | new_xmin = max(0, self._xmin + offset)
452 | new_xmax = min(self.max_time, self._xmax + offset)
453 |
454 | # Adjust if we hit boundaries, maintaining view width
455 | if new_xmin <= 0:
456 | new_xmin = 0
457 | new_xmax = min(self.max_time, view_width)
458 | elif new_xmax >= self.max_time:
459 | new_xmax = self.max_time
460 | new_xmin = max(0, self.max_time - view_width)
461 |
462 | # Update the view
463 | self._xmin, self._xmax = new_xmin, new_xmax
464 | self.ax.set_xlim([self._xmin, self._xmax])
465 |
466 | # Update grid and redraw
467 | self._draw_time_grid()
468 | self.draw()
469 |
470 | # Emit signal that view changed
471 | self.zoomChanged.emit(self._xmin, self._xmax)
472 |
473 | # Event handlers
474 | def on_mouse_press(self, event):
475 | """Handle mouse press events for selection and scrubbing"""
476 | if event.inaxes != self.ax:
477 | return
478 |
479 | # Get time position from x-coordinate
480 | time_pos = event.xdata
481 |
482 | # Determine action based on mouse button
483 | if event.button == 1: # Left button
484 | if event.key == 'shift':
485 | # Shift+Click: Position playhead
486 | self.playhead_position = time_pos
487 | self._draw_playhead()
488 | self.draw()
489 | self.positionClicked.emit(time_pos)
490 | return
491 |
492 | # Start selection
493 | self._is_selecting = True
494 | self._select_start = time_pos
495 | self.set_selection(time_pos, time_pos) # Start with empty selection
496 |
497 | elif event.button == 2: # Middle button
498 | # Start scrubbing
499 | self._is_scrubbing = True
500 | self._last_scrub_pos = time_pos
501 | self.playhead_position = time_pos
502 | self._draw_playhead()
503 | self.draw()
504 | self.positionClicked.emit(time_pos)
505 |
506 | def on_mouse_move(self, event):
507 | """Handle mouse move events for selection, scrubbing, and cursor tracking"""
508 | # Handle selection dragging
509 | if self._is_selecting and event.inaxes == self.ax and self._select_start is not None:
510 | if event.xdata is not None:
511 | self.set_selection(self._select_start, event.xdata)
512 |
513 | # Handle scrubbing
514 | elif self._is_scrubbing and event.inaxes == self.ax:
515 | if event.xdata is not None:
516 | self.playhead_position = max(0, min(event.xdata, self.max_time))
517 | self._draw_playhead()
518 | self.draw()
519 | self.positionDragged.emit(self.playhead_position)
520 | self._last_scrub_pos = event.xdata
521 |
522 | # Update cursor info display
523 | else:
524 | self._update_cursor_info(event)
525 |
526 | def on_mouse_release(self, event):
527 | """Handle mouse release events for selection and scrubbing"""
528 | # Handle selection completion
529 | if self._is_selecting and event.inaxes == self.ax and self._select_start is not None:
530 | if event.xdata is not None:
531 | self.set_selection(self._select_start, event.xdata)
532 | self._is_selecting = False
533 |
534 | # Handle scrubbing end
535 | elif self._is_scrubbing:
536 | if event.xdata is not None and event.inaxes == self.ax:
537 | self.playhead_position = max(0, min(event.xdata, self.max_time))
538 | self._draw_playhead()
539 | self.draw()
540 | self.positionDragged.emit(self.playhead_position)
541 | self._is_scrubbing = False
542 | self._last_scrub_pos = None
543 |
544 | def on_scroll(self, event):
545 | """Handle mouse scroll events for zooming"""
546 | # Only handle scroll in the plot area
547 | if event.inaxes != self.ax:
548 | return
549 |
550 | # Determine zoom direction and factor
551 | if event.button == 'up': # Scroll up = zoom in
552 | factor = 0.5
553 | else: # Scroll down = zoom out
554 | factor = 2.0
555 |
556 | # Zoom centered on the scroll position
557 | self.zoom(factor, center=event.xdata)
558 |
559 | def on_key_press(self, event):
560 | """Handle keyboard events for navigation and selection"""
561 |         if event.key in ('left', 'shift+left'):  # Left arrow = pan left
562 |             shift_pressed = event.key.startswith('shift')
563 |             self.pan(-5.0 if shift_pressed else -1.0)
564 |
565 |         elif event.key in ('right', 'shift+right'):  # Right arrow = pan right
566 |             shift_pressed = event.key.startswith('shift')
567 |             self.pan(5.0 if shift_pressed else 1.0)
568 |
569 | elif event.key == 'up': # Up arrow = zoom in
570 | self.zoom(0.5)
571 |
572 | elif event.key == 'down': # Down arrow = zoom out
573 | self.zoom(2.0)
574 |
575 | elif event.key == 'home': # Home = go to start
576 | self.pan(-self.max_time) # Pan all the way left
577 |
578 | elif event.key == 'end': # End = go to end
579 | self.pan(self.max_time) # Pan all the way right
580 |
581 | elif event.key == 'escape': # Escape = clear selection
582 | self.clear_selection()
583 |
584 | def _update_cursor_info(self, event):
585 | """Update cursor information display with time and amplitude"""
586 | # Remove previous cursor markers
587 | if self.live_cursor_line is not None:
588 | try:
589 | self.live_cursor_line.remove()
590 | except (ValueError, AttributeError):
591 | pass
592 | self.live_cursor_line = None
593 |
594 | if self.live_cursor_text is not None:
595 | try:
596 | self.live_cursor_text.remove()
597 | except (ValueError, AttributeError):
598 | pass
599 | self.live_cursor_text = None
600 |
601 | # Only proceed if cursor is in the plot area
602 | if event.inaxes != self.ax or event.xdata is None:
603 | self.draw()
604 | return
605 |
606 | # Get time position
607 | t_cursor = event.xdata
608 |
609 | # Create vertical cursor line
610 | self.live_cursor_line = self.ax.axvline(
611 | t_cursor,
612 | color="#ffe658",
613 | alpha=0.45,
614 | linewidth=1.2,
615 | zorder=5
616 | )
617 |
618 | # Display time and amplitude if we have audio data
619 | if self.samples is not None and self.sr is not None and 0 <= t_cursor < self.max_time:
620 | # Find the sample closest to cursor position
621 | sample_idx = int(t_cursor * self.sr)
622 | if sample_idx < len(self.samples[0] if self.samples.ndim > 1 else self.samples):
623 | # Extract amplitude value
624 | if self.samples.ndim > 1:
625 | val = self.samples[0, sample_idx]
626 | else:
627 | val = self.samples[sample_idx]
628 |
629 | # Format time and amplitude text
630 | if t_cursor >= 60:
631 | time_str = f"{int(t_cursor/60):d}:{int(t_cursor%60):02d}.{int((t_cursor*1000)%1000):03d}"
632 | else:
633 | time_str = f"{t_cursor:.3f}s"
634 |
635 | txt = f"{time_str}, amp {val:+.3f}"
636 | else:
637 | txt = f"{t_cursor:.3f}s"
638 | else:
639 | txt = f"{t_cursor:.3f}s"
640 |
641 | # Create text label with time/amplitude info
642 | self.live_cursor_text = self.ax.text(
643 | t_cursor,
644 | 1, # Top of y-axis
645 | txt,
646 | va="bottom",
647 | ha="left",
648 | fontsize=9,
649 | color="#ffee88",
650 | bbox=dict(
651 | facecolor="#292a24",
652 | edgecolor="none",
653 | alpha=0.8,
654 | pad=1
655 | )
656 | )
657 |
658 | # Redraw to show the cursor info
659 | self.draw()
660 |
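The clock formatting used for grid markers and the cursor readout follows one pattern; `format_clock` is an illustrative consolidation (the class itself formats inline, and the cursor variant adds milliseconds):

```python
def format_clock(seconds):
    """Format seconds as hh:mm:ss, m:ss, or a plain seconds string."""
    if seconds >= 3600:   # >= 1 hour
        return f"{int(seconds // 3600):02d}:{int(seconds % 3600 // 60):02d}:{int(seconds % 60):02d}"
    if seconds >= 60:     # >= 1 minute
        return f"{int(seconds // 60):d}:{int(seconds % 60):02d}"
    return f"{seconds:.1f}s"
```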
661 | def toggle_grid(self):
662 | """Toggle grid visibility"""
663 | self.grid_visible = not self.grid_visible
664 | if self.grid_visible:
665 | self._draw_time_grid()
666 | else:
667 | # Clear existing grid lines and markers
668 | for line in self.grid_lines:
669 | try:
670 | line.remove()
671 | except (ValueError, AttributeError):
672 | pass
673 | self.grid_lines = []
674 |
675 | for text in self.time_markers:
676 | try:
677 | text.remove()
678 | except (ValueError, AttributeError):
679 | pass
680 | self.time_markers = []
681 |
682 | self.draw()
683 |
684 | def set_color_theme(self, waveform_color=None, playhead_color=None, grid_color=None, selection_color=None):
685 | """Update the color theme for the waveform display"""
686 | if waveform_color:
687 | self.waveform_color = waveform_color
688 | if playhead_color:
689 | self.playhead_color = playhead_color
690 | if grid_color:
691 | self.grid_color = grid_color
692 | if selection_color:
693 | self.selection_color = selection_color
694 |
695 | # Redraw with new colors (if we have data)
696 | if self.samples is not None and self.sr is not None:
697 | self.plot_waveform(self.samples, self.sr)
698 |
--------------------------------------------------------------------------------
/src/ui_manager.py:
--------------------------------------------------------------------------------
1 | from PyQt5.QtCore import QObject, pyqtSignal, QTimer, QPropertyAnimation, QEasingCurve, QRect, Qt
2 | from PyQt5.QtWidgets import (
3 | QWidget, QVBoxLayout, QHBoxLayout, QFrame, QLabel, QPushButton, QSlider,
4 | QProgressBar, QSplitter, QScrollArea, QToolBar, QAction, QMenu, QSystemTrayIcon,
5 | QMessageBox, QGraphicsDropShadowEffect, QSizePolicy, QSpacerItem, QGroupBox,
6 | QGridLayout, QComboBox, QSpinBox, QCheckBox, QTabWidget, QStackedWidget,
7 | QTreeWidget, QTreeWidgetItem, QListWidget, QListWidgetItem, QTextEdit
8 | )
9 | from PyQt5.QtGui import (
10 | QFont, QFontMetrics, QPixmap, QIcon, QColor, QPalette, QLinearGradient,
11 | QPainter, QBrush, QPen, QMovie, QFontDatabase
12 | )
13 | from error_handler import get_error_handler
14 | import os
15 | from pathlib import Path
16 |
17 |
18 | class ModernUIManager(QObject):
19 | """
20 | Manages modern UI elements, themes, and user experience improvements
21 | """
22 |
23 | themeChanged = pyqtSignal(str) # Theme name
24 | animationFinished = pyqtSignal()
25 |
26 | def __init__(self, parent=None):
27 | super().__init__(parent)
28 | self.error_handler = get_error_handler()
29 | self.current_theme = "dark"
30 | self.animations = []
31 | self.ui_scale = 1.0
32 |
33 | # Load custom fonts
34 | self.load_custom_fonts()
35 |
36 | # Theme configurations
37 | self.themes = {
38 | "dark": self._get_dark_theme(),
39 | "light": self._get_light_theme(),
40 | "midnight": self._get_midnight_theme(),
41 | "ocean": self._get_ocean_theme()
42 | }
43 |
44 | def load_custom_fonts(self):
45 | """Load custom fonts for better typography"""
46 | try:
47 |             # Prefer common system UI fonts, in order of preference
48 |             font_families = ["Segoe UI", "SF Pro Display", "Roboto", "Inter", "Arial"]
49 |             self.primary_font = None
50 |
51 |             for font_family in font_families:
52 |                 # Membership in families() is a portable availability check
53 |                 if font_family in QFontDatabase().families():
54 | self.primary_font = font_family
55 | break
56 |
57 | if not self.primary_font:
58 | self.primary_font = "Arial" # Fallback
59 |
60 | self.error_handler.log_info(f"Using font family: {self.primary_font}")
61 |
62 | except Exception as e:
63 | self.error_handler.log_error(f"Error loading fonts: {str(e)}")
64 | self.primary_font = "Arial"
65 |
66 | def _get_dark_theme(self):
67 | """Enhanced dark theme configuration"""
68 | return {
69 | "name": "Dark",
70 | "colors": {
71 | "primary_bg": "#1e1e1e",
72 | "secondary_bg": "#2d2d2d",
73 | "tertiary_bg": "#3d3d3d",
74 | "accent": "#0078d4",
75 | "accent_hover": "#106ebe",
76 | "accent_pressed": "#005a9e",
77 | "text_primary": "#ffffff",
78 | "text_secondary": "#cccccc",
79 | "text_disabled": "#888888",
80 | "border": "#3d3d3d",
81 | "border_focus": "#0078d4",
82 | "success": "#00d862",
83 | "warning": "#ffb900",
84 | "error": "#e74856",
85 | "selection": "#0078d4",
86 | "hover": "#4d4d4d"
87 | },
88 | "fonts": {
89 | "primary": self.primary_font,
90 | "size_small": 9,
91 | "size_normal": 11,
92 | "size_large": 13,
93 | "size_title": 16,
94 | "size_header": 20
95 | },
96 | "borders": {
97 | "radius_small": 4,
98 | "radius_normal": 6,
99 | "radius_large": 8,
100 | "width_thin": 1,
101 | "width_normal": 2
102 | },
103 | "spacing": {
104 | "tiny": 4,
105 | "small": 8,
106 | "normal": 12,
107 | "large": 16,
108 | "xlarge": 24
109 | },
110 | "shadows": {
111 | "small": "rgba(0, 0, 0, 0.2)",
112 | "normal": "rgba(0, 0, 0, 0.3)",
113 | "large": "rgba(0, 0, 0, 0.4)"
114 | }
115 | }
116 |
117 | def _get_light_theme(self):
118 | """Modern light theme configuration"""
119 | return {
120 | "name": "Light",
121 | "colors": {
122 | "primary_bg": "#ffffff",
123 | "secondary_bg": "#f5f5f5",
124 | "tertiary_bg": "#e8e8e8",
125 | "accent": "#0078d4",
126 | "accent_hover": "#106ebe",
127 | "accent_pressed": "#005a9e",
128 | "text_primary": "#000000",
129 | "text_secondary": "#424242",
130 | "text_disabled": "#888888",
131 | "border": "#d0d0d0",
132 | "border_focus": "#0078d4",
133 | "success": "#00d862",
134 | "warning": "#ffb900",
135 | "error": "#e74856",
136 | "selection": "#0078d4",
137 | "hover": "#f0f0f0"
138 | },
139 | "fonts": {
140 | "primary": self.primary_font,
141 | "size_small": 9,
142 | "size_normal": 11,
143 | "size_large": 13,
144 | "size_title": 16,
145 | "size_header": 20
146 | },
147 | "borders": {
148 | "radius_small": 4,
149 | "radius_normal": 6,
150 | "radius_large": 8,
151 | "width_thin": 1,
152 | "width_normal": 2
153 | },
154 | "spacing": {
155 | "tiny": 4,
156 | "small": 8,
157 | "normal": 12,
158 | "large": 16,
159 | "xlarge": 24
160 | },
161 | "shadows": {
162 | "small": "rgba(0, 0, 0, 0.1)",
163 | "normal": "rgba(0, 0, 0, 0.15)",
164 | "large": "rgba(0, 0, 0, 0.2)"
165 | }
166 | }
167 |
168 | def _get_midnight_theme(self):
169 | """Ultra-dark midnight theme"""
170 | return {
171 | "name": "Midnight",
172 | "colors": {
173 | "primary_bg": "#0d1117",
174 | "secondary_bg": "#161b22",
175 | "tertiary_bg": "#21262d",
176 | "accent": "#58a6ff",
177 | "accent_hover": "#388bfd",
178 | "accent_pressed": "#1f6feb",
179 | "text_primary": "#f0f6fc",
180 | "text_secondary": "#7d8590",
181 | "text_disabled": "#484f58",
182 | "border": "#30363d",
183 | "border_focus": "#58a6ff",
184 | "success": "#3fb950",
185 | "warning": "#d29922",
186 | "error": "#f85149",
187 | "selection": "#58a6ff",
188 | "hover": "#262c36"
189 | },
190 | "fonts": {
191 | "primary": self.primary_font,
192 | "size_small": 9,
193 | "size_normal": 11,
194 | "size_large": 13,
195 | "size_title": 16,
196 | "size_header": 20
197 | },
198 | "borders": {
199 | "radius_small": 4,
200 | "radius_normal": 6,
201 | "radius_large": 8,
202 | "width_thin": 1,
203 | "width_normal": 2
204 | },
205 | "spacing": {
206 | "tiny": 4,
207 | "small": 8,
208 | "normal": 12,
209 | "large": 16,
210 | "xlarge": 24
211 | },
212 | "shadows": {
213 | "small": "rgba(0, 0, 0, 0.3)",
214 | "normal": "rgba(0, 0, 0, 0.4)",
215 | "large": "rgba(0, 0, 0, 0.5)"
216 | }
217 | }
218 |
219 | def _get_ocean_theme(self):
220 | """Ocean-inspired blue theme"""
221 | return {
222 | "name": "Ocean",
223 | "colors": {
224 | "primary_bg": "#0f1419",
225 | "secondary_bg": "#1a2332",
226 | "tertiary_bg": "#253340",
227 | "accent": "#39bae6",
228 | "accent_hover": "#59c7ea",
229 | "accent_pressed": "#2aa3d1",
230 | "text_primary": "#e6f1ff",
231 | "text_secondary": "#95a7c7",
232 | "text_disabled": "#5c7199",
233 | "border": "#34495e",
234 | "border_focus": "#39bae6",
235 | "success": "#2ecc71",
236 | "warning": "#f39c12",
237 | "error": "#e74c3c",
238 | "selection": "#39bae6",
239 | "hover": "#2c3e50"
240 | },
241 | "fonts": {
242 | "primary": self.primary_font,
243 | "size_small": 9,
244 | "size_normal": 11,
245 | "size_large": 13,
246 | "size_title": 16,
247 | "size_header": 20
248 | },
249 | "borders": {
250 | "radius_small": 4,
251 | "radius_normal": 6,
252 | "radius_large": 8,
253 | "width_thin": 1,
254 | "width_normal": 2
255 | },
256 | "spacing": {
257 | "tiny": 4,
258 | "small": 8,
259 | "normal": 12,
260 | "large": 16,
261 | "xlarge": 24
262 | },
263 | "shadows": {
264 | "small": "rgba(0, 0, 0, 0.2)",
265 | "normal": "rgba(0, 0, 0, 0.3)",
266 | "large": "rgba(0, 0, 0, 0.4)"
267 | }
268 | }
269 |
270 | def apply_theme(self, theme_name, widget):
271 | """Apply theme to a widget"""
272 | if theme_name not in self.themes:
273 | theme_name = "dark"
274 |
275 | theme = self.themes[theme_name]
276 | self.current_theme = theme_name
277 |
278 | # Generate comprehensive stylesheet
279 | stylesheet = self._generate_stylesheet(theme)
280 | widget.setStyleSheet(stylesheet)
281 |
282 | # Apply font
283 | font = QFont(theme["fonts"]["primary"], theme["fonts"]["size_normal"])
284 | widget.setFont(font)
285 |
286 | self.themeChanged.emit(theme_name)
287 | self.error_handler.log_info(f"Applied theme: {theme_name}")
288 |
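`apply_theme` guards against unknown theme names by falling back to `"dark"` before the dictionary lookup. A Qt-free sketch of that rule in isolation (the key names are assumed from the `_get_*_theme` methods in this module, not verified against the constructor):

```python
# Assumed theme keys, inferred from the _get_*_theme methods above.
AVAILABLE_THEMES = ("dark", "light", "midnight", "ocean")

def resolve_theme(name):
    # Unknown names silently fall back to the default dark theme,
    # mirroring the check at the top of apply_theme.
    return name if name in AVAILABLE_THEMES else "dark"
```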
289 | def _generate_stylesheet(self, theme):
290 | """Generate comprehensive QSS stylesheet from theme"""
291 | colors = theme["colors"]
292 | fonts = theme["fonts"]
293 | borders = theme["borders"]
294 | spacing = theme["spacing"]
295 |
296 | return f"""
297 | /* Global Styles */
298 | QWidget {{
299 | background-color: {colors['primary_bg']};
300 | color: {colors['text_primary']};
301 | font-family: '{fonts['primary']}';
302 | font-size: {fonts['size_normal']}px;
303 | selection-background-color: {colors['selection']};
304 | }}
305 |
306 | /* Main Window */
307 | QMainWindow {{
308 | background-color: {colors['primary_bg']};
309 | border: none;
310 | }}
311 |
312 | /* Buttons */
313 | QPushButton {{
314 | background-color: {colors['accent']};
315 | color: {colors['text_primary']};
316 | border: none;
317 | border-radius: {borders['radius_normal']}px;
318 | padding: {spacing['normal']}px {spacing['large']}px;
319 | font-weight: 600;
320 | min-height: 24px;
321 | min-width: 80px;
322 | }}
323 |
324 | QPushButton:hover {{
325 | background-color: {colors['accent_hover']};
326 | }}
327 |
328 | QPushButton:pressed {{
329 | background-color: {colors['accent_pressed']};
330 | }}
331 |
332 | QPushButton:disabled {{
333 | background-color: {colors['tertiary_bg']};
334 | color: {colors['text_disabled']};
335 | }}
336 |
337 | QPushButton:checked {{
338 | background-color: {colors['accent_pressed']};
339 | border: 2px solid {colors['border_focus']};
340 | }}
341 |
342 | /* Tool Buttons */
343 | QToolButton {{
344 | background-color: {colors['secondary_bg']};
345 | color: {colors['text_primary']};
346 | border: 1px solid {colors['border']};
347 | border-radius: {borders['radius_normal']}px;
348 | padding: {spacing['small']}px;
349 | min-width: 48px;
350 | min-height: 48px;
351 | font-weight: 500;
352 | }}
353 |
354 | QToolButton:hover {{
355 | background-color: {colors['hover']};
356 | border-color: {colors['border_focus']};
357 | }}
358 |
359 | QToolButton:pressed {{
360 | background-color: {colors['accent']};
361 | color: {colors['text_primary']};
362 | }}
363 |
364 | QToolButton:checked {{
365 | background-color: {colors['accent']};
366 | color: {colors['text_primary']};
367 | border-color: {colors['border_focus']};
368 | }}
369 |
370 | /* Labels */
371 | QLabel {{
372 | color: {colors['text_primary']};
373 | background: transparent;
374 | }}
375 |
376 | QLabel[class="title"] {{
377 | font-size: {fonts['size_title']}px;
378 | font-weight: 700;
379 | color: {colors['accent']};
380 | }}
381 |
382 | QLabel[class="header"] {{
383 | font-size: {fonts['size_header']}px;
384 | font-weight: 600;
385 | color: {colors['text_primary']};
386 | }}
387 |
388 | QLabel[class="subtitle"] {{
389 | font-size: {fonts['size_large']}px;
390 | color: {colors['text_secondary']};
391 | }}
392 |
393 | /* Frames and Containers */
394 | QFrame {{
395 | background-color: {colors['secondary_bg']};
396 | border: 1px solid {colors['border']};
397 | border-radius: {borders['radius_normal']}px;
398 | }}
399 |
400 | QFrame[class="panel"] {{
401 | background-color: {colors['secondary_bg']};
402 | border: 1px solid {colors['border']};
403 | border-radius: {borders['radius_large']}px;
404 | padding: {spacing['normal']}px;
405 | }}
406 |
407 | QFrame[class="card"] {{
408 | background-color: {colors['secondary_bg']};
409 | border: 1px solid {colors['border']};
410 | border-radius: {borders['radius_large']}px;
411 | padding: {spacing['large']}px;
412 | }}
413 |
414 | /* Group Boxes */
415 | QGroupBox {{
416 | font-weight: 600;
417 | border: 2px solid {colors['border']};
418 | border-radius: {borders['radius_large']}px;
419 | margin-top: {spacing['normal']}px;
420 | padding-top: {spacing['large']}px;
421 | background-color: {colors['secondary_bg']};
422 | }}
423 |
424 | QGroupBox::title {{
425 | subcontrol-origin: margin;
426 | left: {spacing['normal']}px;
427 | padding: 0 {spacing['small']}px;
428 | color: {colors['accent']};
429 | font-weight: 600;
430 | background-color: {colors['secondary_bg']};
431 | }}
432 |
433 | /* Sliders */
434 | QSlider::groove:horizontal {{
435 | border: 1px solid {colors['border']};
436 | height: 8px;
437 | background: {colors['tertiary_bg']};
438 | border-radius: 4px;
439 | }}
440 |
441 | QSlider::handle:horizontal {{
442 | background: qlineargradient(x1:0, y1:0, x2:1, y2:1,
443 | stop:0 {colors['accent']}, stop:1 {colors['accent_hover']});
444 | border: 1px solid {colors['border_focus']};
445 | width: 20px;
446 | height: 20px;
447 | margin: -6px 0;
448 | border-radius: 10px;
449 | }}
450 |
451 | QSlider::handle:horizontal:hover {{
452 | background: qlineargradient(x1:0, y1:0, x2:1, y2:1,
453 | stop:0 {colors['accent_hover']}, stop:1 {colors['accent']});
454 | }}
455 |
456 | QSlider::sub-page:horizontal {{
457 | background: qlineargradient(x1:0, y1:0, x2:1, y2:1,
458 | stop:0 {colors['accent']}, stop:1 {colors['accent_hover']});
459 | border-radius: 4px;
460 | }}
461 |
462 | /* Progress Bars */
463 | QProgressBar {{
464 | border: 2px solid {colors['border']};
465 | border-radius: {borders['radius_normal']}px;
466 | text-align: center;
467 | background-color: {colors['tertiary_bg']};
468 | color: {colors['text_primary']};
469 | font-weight: 600;
470 | }}
471 |
472 | QProgressBar::chunk {{
473 | background-color: qlineargradient(x1:0, y1:0, x2:1, y2:1,
474 | stop:0 {colors['accent']}, stop:1 {colors['accent_hover']});
475 | border-radius: {borders['radius_small']}px;
476 | }}
477 |
478 | /* Text Inputs */
479 | QLineEdit, QTextEdit, QPlainTextEdit {{
480 | background-color: {colors['primary_bg']};
481 | color: {colors['text_primary']};
482 | border: 2px solid {colors['border']};
483 | border-radius: {borders['radius_normal']}px;
484 | padding: {spacing['small']}px;
485 | font-size: {fonts['size_normal']}px;
486 | }}
487 |
488 | QLineEdit:focus, QTextEdit:focus, QPlainTextEdit:focus {{
489 | border-color: {colors['border_focus']};
490 | }}
491 |
492 | QLineEdit:disabled, QTextEdit:disabled, QPlainTextEdit:disabled {{
493 | background-color: {colors['tertiary_bg']};
494 | color: {colors['text_disabled']};
495 | }}
496 |
497 | /* Lists and Trees */
498 | QListWidget, QTreeWidget {{
499 | background-color: {colors['primary_bg']};
500 | color: {colors['text_primary']};
501 | border: 1px solid {colors['border']};
502 | border-radius: {borders['radius_normal']}px;
503 | alternate-background-color: {colors['secondary_bg']};
504 | outline: none;
505 | }}
506 |
507 | QListWidget::item, QTreeWidget::item {{
508 | padding: {spacing['small']}px;
509 | border-radius: {borders['radius_small']}px;
510 | margin: 1px;
511 | }}
512 |
513 | QListWidget::item:selected, QTreeWidget::item:selected {{
514 | background-color: {colors['selection']};
515 | color: {colors['text_primary']};
516 | }}
517 |
518 | QListWidget::item:hover, QTreeWidget::item:hover {{
519 | background-color: {colors['hover']};
520 | }}
521 |
522 | /* Tabs */
523 | QTabWidget::pane {{
524 | border: 1px solid {colors['border']};
525 | border-radius: {borders['radius_large']}px;
526 | background-color: {colors['secondary_bg']};
527 | top: -1px;
528 | }}
529 |
530 | QTabBar::tab {{
531 | background-color: {colors['tertiary_bg']};
532 | color: {colors['text_secondary']};
533 | padding: {spacing['normal']}px {spacing['large']}px;
534 | margin-right: 2px;
535 | border-top-left-radius: {borders['radius_normal']}px;
536 | border-top-right-radius: {borders['radius_normal']}px;
537 | min-width: 100px;
538 | font-weight: 500;
539 | }}
540 |
541 | QTabBar::tab:selected {{
542 | background-color: {colors['secondary_bg']};
543 | color: {colors['text_primary']};
544 | border-bottom: 3px solid {colors['accent']};
545 | font-weight: 600;
546 | }}
547 |
548 | QTabBar::tab:hover:!selected {{
549 | background-color: {colors['hover']};
550 | color: {colors['text_primary']};
551 | }}
552 |
553 | /* Checkboxes */
554 | QCheckBox {{
555 | color: {colors['text_primary']};
556 | spacing: {spacing['small']}px;
557 | font-weight: 500;
558 | }}
559 |
560 | QCheckBox::indicator {{
561 | width: 20px;
562 | height: 20px;
563 | border-radius: {borders['radius_small']}px;
564 | border: 2px solid {colors['border']};
565 | background-color: {colors['primary_bg']};
566 | }}
567 |
568 | QCheckBox::indicator:hover {{
569 | border-color: {colors['border_focus']};
570 | }}
571 |
572 | QCheckBox::indicator:checked {{
573 | background-color: {colors['accent']};
574 | border-color: {colors['accent']};
575 | }}
576 |
577 | /* Combo Boxes */
578 | QComboBox {{
579 | background-color: {colors['secondary_bg']};
580 | color: {colors['text_primary']};
581 | border: 1px solid {colors['border']};
582 | border-radius: {borders['radius_normal']}px;
583 | padding: {spacing['small']}px {spacing['normal']}px;
584 | min-width: 100px;
585 | }}
586 |
587 | QComboBox:hover {{
588 | border-color: {colors['border_focus']};
589 | }}
590 |
591 | QComboBox::drop-down {{
592 | border: none;
593 | width: 20px;
594 | }}
595 |
596 | QComboBox::down-arrow {{
597 | width: 12px;
598 | height: 12px;
599 | }}
600 |
601 | QComboBox QAbstractItemView {{
602 | background-color: {colors['secondary_bg']};
603 | color: {colors['text_primary']};
604 | border: 1px solid {colors['border']};
605 | border-radius: {borders['radius_normal']}px;
606 | selection-background-color: {colors['selection']};
607 | }}
608 |
609 | /* Scroll Bars */
610 | QScrollBar {{
611 | background-color: {colors['secondary_bg']};
612 | width: 16px;
613 | height: 16px;
614 | border-radius: 8px;
615 | }}
616 |
617 | QScrollBar::handle {{
618 | background-color: {colors['tertiary_bg']};
619 | border-radius: 6px;
620 | min-height: 30px;
621 | min-width: 30px;
622 | }}
623 |
624 | QScrollBar::handle:hover {{
625 | background-color: {colors['hover']};
626 | }}
627 |
628 | QScrollBar::add-line, QScrollBar::sub-line {{
629 | height: 0px;
630 | width: 0px;
631 | }}
632 |
633 | QScrollBar::add-page, QScrollBar::sub-page {{
634 | background: none;
635 | }}
636 |
637 | /* Menu Bar */
638 | QMenuBar {{
639 | background-color: {colors['primary_bg']};
640 | color: {colors['text_primary']};
641 | border-bottom: 1px solid {colors['border']};
642 | padding: {spacing['small']}px;
643 | }}
644 |
645 | QMenuBar::item {{
646 | background-color: transparent;
647 | padding: {spacing['small']}px {spacing['normal']}px;
648 | border-radius: {borders['radius_small']}px;
649 | }}
650 |
651 | QMenuBar::item:selected {{
652 | background-color: {colors['hover']};
653 | }}
654 |
655 | /* Menus */
656 | QMenu {{
657 | background-color: {colors['secondary_bg']};
658 | color: {colors['text_primary']};
659 | border: 1px solid {colors['border']};
660 | border-radius: {borders['radius_normal']}px;
661 | padding: {spacing['small']}px;
662 | }}
663 |
664 | QMenu::item {{
665 | padding: {spacing['small']}px {spacing['large']}px;
666 | border-radius: {borders['radius_small']}px;
667 | }}
668 |
669 | QMenu::item:selected {{
670 | background-color: {colors['selection']};
671 | }}
672 |
673 | QMenu::separator {{
674 | height: 1px;
675 | background-color: {colors['border']};
676 | margin: {spacing['small']}px;
677 | }}
678 |
679 | /* Status Bar */
680 | QStatusBar {{
681 | background-color: {colors['primary_bg']};
682 | color: {colors['text_primary']};
683 | border-top: 1px solid {colors['border']};
684 | padding: {spacing['small']}px;
685 | }}
686 |
687 | /* Tool Tips */
688 | QToolTip {{
689 | background-color: {colors['secondary_bg']};
690 | color: {colors['text_primary']};
691 | border: 1px solid {colors['border_focus']};
692 | border-radius: {borders['radius_normal']}px;
693 | padding: {spacing['small']}px;
694 | font-size: {fonts['size_small']}px;
695 | }}
696 |
697 | /* Splitters */
698 | QSplitter::handle {{
699 | background-color: {colors['border']};
700 | }}
701 |
702 | QSplitter::handle:hover {{
703 | background-color: {colors['accent']};
704 | }}
705 | """
706 |
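The stylesheet above is built as one Python f-string, so literal QSS braces must be doubled (`{{` / `}}`) while single-brace expressions interpolate theme values. A minimal standalone sketch of that substitution pattern:

```python
# Doubled braces emit literal "{" / "}"; single braces interpolate values.
colors = {"accent": "#0078d4", "text_primary": "#ffffff"}

qss = f"""
QPushButton {{
    background-color: {colors['accent']};
    color: {colors['text_primary']};
}}
"""
```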
707 | def create_modern_button(self, text, icon=None, button_type="primary", size="normal"):
708 | """Create a modern styled button"""
709 | button = QPushButton(text)
710 |
711 | if icon:
712 | button.setIcon(icon)
713 | button.setIconSize(self._get_icon_size(size))
714 |
715 | # Set object name for specific styling
716 | button.setObjectName(f"button_{button_type}_{size}")
717 |
718 | # Apply size-specific properties
719 | if size == "small":
720 | button.setMinimumSize(60, 28)
721 | elif size == "large":
722 | button.setMinimumSize(120, 48)
723 | else: # normal
724 | button.setMinimumSize(80, 36)
725 |
726 | return button
727 |
728 | def create_modern_card(self, title=None, content_widget=None):
729 | """Create a modern card container"""
730 | card = QFrame()
731 |         card.setProperty("class", "card")  # matches the QFrame[class="card"] selector
732 |
733 | layout = QVBoxLayout(card)
734 |
735 | if title:
736 | title_label = QLabel(title)
737 |             title_label.setProperty("class", "title")  # matches QLabel[class="title"]
738 | layout.addWidget(title_label)
739 |
740 | if content_widget:
741 | layout.addWidget(content_widget)
742 |
743 | return card
744 |
745 | def create_modern_panel(self, title=None):
746 | """Create a modern panel container"""
747 | panel = QFrame()
748 |         panel.setProperty("class", "panel")  # matches QFrame[class="panel"]
749 |
750 | layout = QVBoxLayout(panel)
751 |
752 | if title:
753 | title_label = QLabel(title)
754 |             title_label.setProperty("class", "subtitle")  # matches QLabel[class="subtitle"]
755 | layout.addWidget(title_label)
756 |
757 | return panel, layout
758 |
759 | def add_drop_shadow(self, widget, blur_radius=15, offset=(0, 2)):
760 | """Add drop shadow effect to widget"""
761 | try:
762 | shadow = QGraphicsDropShadowEffect()
763 | shadow.setBlurRadius(blur_radius)
764 | shadow.setOffset(offset[0], offset[1])
765 | shadow.setColor(QColor(0, 0, 0, 50))
766 | widget.setGraphicsEffect(shadow)
767 | except Exception as e:
768 | self.error_handler.log_error(f"Error adding shadow: {str(e)}")
769 |
770 | def animate_widget(self, widget, property_name, start_value, end_value, duration=300):
771 | """Animate widget property"""
772 | try:
773 | animation = QPropertyAnimation(widget, property_name.encode())
774 | animation.setDuration(duration)
775 | animation.setStartValue(start_value)
776 | animation.setEndValue(end_value)
777 | animation.setEasingCurve(QEasingCurve.OutCubic)
778 |
779 | # Clean up animation when finished
780 |             animation.finished.connect(lambda: self.animations.remove(animation) if animation in self.animations else None)
781 | animation.finished.connect(self.animationFinished.emit)
782 |
783 | self.animations.append(animation)
784 | animation.start()
785 |
786 | return animation
787 |
788 | except Exception as e:
789 | self.error_handler.log_error(f"Error creating animation: {str(e)}")
790 | return None
791 |
792 | def fade_in_widget(self, widget, duration=300):
793 | """Fade in widget with animation"""
794 | widget.setWindowOpacity(0)
795 | widget.show()
796 | return self.animate_widget(widget, "windowOpacity", 0.0, 1.0, duration)
797 |
798 | def fade_out_widget(self, widget, duration=300):
799 | """Fade out widget with animation"""
800 | animation = self.animate_widget(widget, "windowOpacity", 1.0, 0.0, duration)
801 | if animation:
802 | animation.finished.connect(widget.hide)
803 | return animation
804 |
805 | def slide_widget(self, widget, direction="down", duration=300):
806 | """Slide widget in from direction"""
807 | geometry = widget.geometry()
808 |
809 | if direction == "down":
810 | start_pos = QRect(geometry.x(), geometry.y() - geometry.height(),
811 | geometry.width(), geometry.height())
812 | elif direction == "up":
813 | start_pos = QRect(geometry.x(), geometry.y() + geometry.height(),
814 | geometry.width(), geometry.height())
815 | elif direction == "left":
816 | start_pos = QRect(geometry.x() + geometry.width(), geometry.y(),
817 | geometry.width(), geometry.height())
818 | else: # right
819 | start_pos = QRect(geometry.x() - geometry.width(), geometry.y(),
820 | geometry.width(), geometry.height())
821 |
822 | widget.setGeometry(start_pos)
823 | widget.show()
824 |
825 | return self.animate_widget(widget, "geometry", start_pos, geometry, duration)
826 |
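`slide_widget` derives its off-screen start rectangle by offsetting the final geometry one full width or height in the direction opposite the travel. The offset rule, expressed with plain `(x, y, w, h)` tuples so it can be checked without Qt (names here are illustrative, not part of the module):

```python
def slide_start_rect(x, y, w, h, direction):
    # The widget starts one full extent away, opposite the travel direction,
    # then animates back to its final geometry.
    offsets = {
        "down": (0, -h),   # enters from above, travels down
        "up": (0, h),      # enters from below, travels up
        "left": (w, 0),    # enters from the right, travels left
        "right": (-w, 0),  # enters from the left, travels right
    }
    dx, dy = offsets.get(direction, offsets["right"])
    return (x + dx, y + dy, w, h)
```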
827 |     def _get_icon_size(self, size):
828 |         """Get icon size for a button size; returns QSize (setIconSize rejects plain tuples)"""
829 |         sizes = {
830 |             "small": QSize(16, 16),
831 |             "normal": QSize(20, 20),
832 |             "large": QSize(24, 24)
833 |         }
834 |         return sizes.get(size, QSize(20, 20))
835 |
836 | def set_ui_scale(self, scale_factor):
837 | """Set UI scale factor for high DPI displays"""
838 | self.ui_scale = scale_factor
839 | # Update theme font sizes
840 | for theme in self.themes.values():
841 | fonts = theme["fonts"]
842 | fonts["size_small"] = int(9 * scale_factor)
843 | fonts["size_normal"] = int(11 * scale_factor)
844 | fonts["size_large"] = int(13 * scale_factor)
845 | fonts["size_title"] = int(16 * scale_factor)
846 | fonts["size_header"] = int(20 * scale_factor)
847 |
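`set_ui_scale` recomputes every theme's font sizes from the fixed base values, truncating with `int()`. A plain-dict sketch of that scaling rule (names here are illustrative stand-ins, not part of the module):

```python
# Base point sizes mirrored from the theme font definitions above.
BASE_FONT_SIZES = {
    "size_small": 9,
    "size_normal": 11,
    "size_large": 13,
    "size_title": 16,
    "size_header": 20,
}

def scale_fonts(scale_factor):
    # int() truncates, matching set_ui_scale's int(base * scale_factor).
    return {name: int(base * scale_factor)
            for name, base in BASE_FONT_SIZES.items()}
```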
848 | def get_current_theme(self):
849 | """Get current theme configuration"""
850 | return self.themes.get(self.current_theme, self.themes["dark"])
851 |
852 | def get_available_themes(self):
853 | """Get list of available theme names"""
854 | return list(self.themes.keys())
855 |
856 |
857 | # Global UI manager instance
858 | _ui_manager = None
859 |
860 | def get_ui_manager():
861 | """Get the global UI manager instance"""
862 | global _ui_manager
863 | if _ui_manager is None:
864 | _ui_manager = ModernUIManager()
865 | return _ui_manager
866 |
867 |
--------------------------------------------------------------------------------