├── requirements.txt
├── icon.ico
├── icon.png
├── icon_128x128.png
├── images
│   └── dipbench.png
├── LICENSE
├── README.md
└── dipbench.py
/requirements.txt:
--------------------------------------------------------------------------------
1 | pygame
2 | numpy
3 | scipy
4 | pyaudio
5 |
--------------------------------------------------------------------------------
/icon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Frieve-A/dipbench/HEAD/icon.ico
--------------------------------------------------------------------------------
/icon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Frieve-A/dipbench/HEAD/icon.png
--------------------------------------------------------------------------------
/icon_128x128.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Frieve-A/dipbench/HEAD/icon_128x128.png
--------------------------------------------------------------------------------
/images/dipbench.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Frieve-A/dipbench/HEAD/images/dipbench.png
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | BSD 3-Clause License
2 |
3 | Copyright (c) 2022, Frieve-A
4 | All rights reserved.
5 |
6 | Redistribution and use in source and binary forms, with or without
7 | modification, are permitted provided that the following conditions are met:
8 |
9 | 1. Redistributions of source code must retain the above copyright notice, this
10 | list of conditions and the following disclaimer.
11 |
12 | 2. Redistributions in binary form must reproduce the above copyright notice,
13 | this list of conditions and the following disclaimer in the documentation
14 | and/or other materials provided with the distribution.
15 |
16 | 3. Neither the name of the copyright holder nor the names of its
17 | contributors may be used to endorse or promote products derived from
18 | this software without specific prior written permission.
19 |
20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # DiP-Bench
2 | A tool for analyzing the sound quality of digital pianos.
3 |
4 | (The English description is followed by a Japanese one.)
5 |
6 | 
7 |
8 | A tool for analyzing the sound quality of digital pianos.
9 |
10 |
11 |
12 |
13 | ## How to download and install on Windows
14 |
15 | Note:
16 | This app requires a PC with a display area of at least 1920x1080 at 100% display scaling.
17 |
18 | The latest version for 64-bit Windows can be downloaded from the following page.
19 |
20 | https://github.com/Frieve-A/dipbench/releases
21 |
22 | Unzip the downloaded zip file and run dipbench.exe to launch the app. No installation is required.
23 |
24 |
25 |
26 | ## How to execute on other platforms
27 |
28 | This application is written in Python.
29 | Follow the steps below to run the DiP-Bench Python code.
30 |
31 | 1. git clone https://github.com/Frieve-A/dipbench.git
32 | 2. pip install -r requirements.txt
33 | 3. python dipbench.py
34 |
35 |
36 |
37 | ## How to use
38 |
39 | Connect the digital piano to be measured to the PC and launch the DiP-Bench app.
40 |
41 | Press the A, I, and O keys on your keyboard to select the audio input, MIDI input, and MIDI output devices used for the measurements.
42 |
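If you are unsure which devices will appear, a quick standalone check such as the following (a helper sketch that is not part of DiP-Bench; it only uses the pyaudio and pygame packages already installed via requirements.txt) lists the stereo audio inputs and the MIDI ports visible on your system:

```python
# list_devices.py - helper sketch: show the audio inputs and MIDI ports the PC can see.
import pyaudio
import pygame.midi

audio = pyaudio.PyAudio()
print('Audio inputs:')
for i in range(audio.get_device_count()):
    info = audio.get_device_info_by_index(i)
    if info.get('maxInputChannels', 0) >= 2:      # DiP-Bench records in stereo
        print(f'  [{i}] {info["name"]}')
audio.terminate()

pygame.midi.init()
print('MIDI devices:')
for i in range(pygame.midi.get_count()):
    interf, name, is_input, is_output, opened = pygame.midi.get_device_info(i)
    print(f'  [{i}] {name.decode()} ({"in" if is_input else "out"})')
pygame.midi.quit()
```

Running it once before launching DiP-Bench confirms that both the piano's MIDI ports and the audio interface are visible.
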
43 | Press the P key on the keyboard to start the sound quality measurement in the pitch direction.
44 |
45 | After the measurement, you can click the piano keyboard on the screen or use the left and right arrow keys to check the measured waveform and sound.
46 |
47 | Press the V key on the keyboard to start the velocity layer measurement.
48 |
49 | After the measurement, you can click the bar on the right side of the screen or use the up and down arrow keys to check the measured waveform and sound.
50 |
51 | Press the R key on the keyboard to start real-time waveform visualization.
52 |
53 | Press the ESC key on the keyboard to exit real-time mode and return to the first screen.
54 |
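For reference, every measurement follows the same basic pattern: DiP-Bench sends a MIDI note-on, records the piano's audio output, and looks for the first samples that rise above the noise floor. The sketch below illustrates that idea for a single note. It is a simplified illustration rather than the code in dipbench.py, and the device indices, note number, and thresholds are placeholders you would need to adapt.

```python
# Simplified illustration of one measurement step (not the dipbench.py code).
import numpy as np
import pyaudio
import pygame.midi

MIDI_OUT_ID = 0      # placeholder: MIDI output port connected to the piano
AUDIO_IN_ID = 0      # placeholder: stereo input recording the piano's audio out
RATE, FRAMES = 48000, 2048

pygame.midi.init()
midi_out = pygame.midi.Output(MIDI_OUT_ID)
audio = pyaudio.PyAudio()
stream = audio.open(input=True, input_device_index=AUDIO_IN_ID,
                    format=pyaudio.paInt16, channels=2, rate=RATE,
                    frames_per_buffer=FRAMES)

stream.read(FRAMES)                      # drop one buffer, then trigger the note
midi_out.note_on(60, 100)                # middle C, velocity 100
chunks = [stream.read(FRAMES) for _ in range(RATE // FRAMES)]  # about one second
midi_out.note_off(60, 100)

samples = np.frombuffer(b''.join(chunks), dtype=np.int16).reshape(-1, 2)
level = np.abs(samples[:, 0].astype(np.float32))   # left-channel envelope
threshold = max(level.max() * 0.05, 200.0)         # placeholder onset threshold
onset = np.flatnonzero(level > threshold)
if onset.size:
    print(f'onset after {onset[0] / RATE * 1000:.1f} ms, peak {level.max():.0f}')
else:
    print('no onset detected - check the cabling and the input level')

stream.close()
audio.terminate()
midi_out.close()
pygame.midi.quit()
```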
55 |
56 |
57 | ## Keyboard shortcuts
58 |
59 | Q : Exit the app
60 |
61 | S : Save the measured waveforms as wav files in the app folder.
62 |
63 | F11 : Toggle full screen
64 |
65 |
66 |
67 | ---
68 |
69 |
70 |
71 | A tool for analyzing the sound quality of digital pianos.
72 |
73 |
74 |
75 |
76 | ## Download and installation (Windows)
77 |
78 | Note: This app requires a PC with a display area of at least 1920x1080 at 100% display scaling.
79 |
80 | Download the latest version for 64-bit Windows from the following page.
81 |
82 | https://github.com/Frieve-A/dipbench/releases
83 |
84 | Unzip the downloaded zip file and run dipbench.exe. No installation is required.
85 |
86 |
87 |
88 | ## Running on other platforms
89 |
90 | This application is written in Python.
91 | Run the DiP-Bench Python code with the following steps.
92 |
93 | 1. git clone https://github.com/Frieve-A/dipbench.git
94 | 2. pip install -r requirements.txt
95 | 3. python dipbench.py
96 |
97 |
98 |
99 | ## Usage
100 |
101 | Connect the digital piano to be measured to the PC and launch the DiP-Bench app.
102 |
103 | Press the A, I, and O keys on your keyboard to switch the MIDI and audio devices used for the measurements.
104 |
105 | Press the P key on the keyboard to start the sound quality measurement in the pitch direction.
106 |
107 | After the measurement, click the piano keyboard on the screen or press the left and right arrow keys to check the measured waveform and sound.
108 |
109 | Press the V key on the keyboard to start the velocity layer measurement.
110 |
111 | After the measurement, click the bar on the right side of the screen or press the up and down arrow keys to check the measured waveform and sound.
112 |
113 | Press the R key on the keyboard to switch to the real-time waveform visualization mode.
114 |
115 | Press the ESC key on the keyboard to leave real-time mode and return to the first screen.
116 |
117 |
118 |
119 | ## Keyboard shortcuts
120 |
121 | Q : Exit the app
122 |
123 | S : Save the measured waveforms as wav files in the app folder
124 |
125 | F11 : Toggle full screen
126 |
--------------------------------------------------------------------------------
/dipbench.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import math
4 | import random
5 | import colorsys
6 | import wave
7 | import pygame
8 | import pygame.midi
9 | from pygame.locals import *
10 | import pyaudio
11 | import numpy as np
12 | from scipy.stats import spearmanr
13 |
14 |
15 | app_title = 'DiP-Bench (Digital Piano Benchmark)'
16 | app_version = '0.05'
17 |
18 | # settings
19 | sampling_freq = 48000
20 | pitch_direction_threshohld = 0.95
21 | velocity_direction_threshohld = 0.9987
22 | latency_threshold = 0.05 # ratio from std (0.01 means 1% of std)
23 | latency_threshold2 = 2.0 # ratio from noise floor (16.0 means 16x = 24dB)
24 |
25 | sample_length = 2048
26 |
27 | correl_size = 2048
28 | shift_range = 1024
29 |
30 | realtime_analysis_length = 8192
31 | spectrum_size = 8192
32 | min_level = -48.0 #dB
33 |
34 |
35 | # size
36 | screen_size = None
37 | floating_window_size = (1280, 720)
38 | window_size = None
39 | full_screen = False
40 | line_width = None
41 | base_size = None
42 | base_margin = None
43 | vector_scope_size = None
44 |
45 | # keyboard size
46 | keyboard_margin_x = None
47 | key_width = None
48 | keyboard_top = None
49 | energy_bottom = None
50 | white_key_width = None
51 | white_key_height = None
52 | black_key_width = None
53 | black_key_height = None
54 |
55 | # UI velocity size
56 | velocity_width = None
57 | velocity_left = None
58 | velocity_size = None
59 | velocity_bottom = None
60 |
61 |
62 | class Keyboard:
63 | keys = None
64 |
65 | def __init__(self):
66 | global keyboard_margin_x, key_width
67 | class Key:
68 | pass
69 | self.keys = []
70 | for i in range(128):
71 | oct = i // 12 - 1
72 | note = i % 12
73 | black_key = note in [1, 3, 6, 8, 10]
74 | x = round(keyboard_margin_x + key_width * (oct * 7 + [0.5, 0.925, 1.5, 2.075, 2.5, 3.5, 3.85, 4.5, 5, 5.5, 6.15, 6.5][note] - 5))
75 | key= Key()
76 | key.note_no = i
77 | key.black_key = black_key
78 | key.x = x
79 | key.normalized_x = round(keyboard_margin_x + 0.5 * key_width + (key_width * 7) / 12 * (i - 21))
80 | self.keys.append(key)
81 |
82 |
83 | def SetUI():
84 | global screen_size, floating_window_size, window_size, full_screen, line_width, base_size, base_margin, vector_scope_size
85 | global keyboard_margin_x, key_width, keyboard_top, energy_bottom, white_key_width, white_key_height, black_key_width, black_key_height
86 | global velocity_width, velocity_left, velocity_size, velocity_bottom
87 |
88 | if full_screen:
89 | window_size = screen_size
90 | else:
91 | window_size = floating_window_size
92 | base_size = window_size[0] // 240 # 8 pixel in 1920 x 1080
93 | if base_size > window_size[1] // 135:
94 | base_size = window_size[1] // 135
95 | line_width = base_size // 9 + 1
96 | base_margin = base_size * 3
97 | vector_scope_size = window_size[1] / 16 # sd = height/32
98 |
99 | # keyboard size
100 | keyboard_margin_x = base_size * 3
101 | key_width = (window_size[0] - keyboard_margin_x * 2) / 52
102 | keyboard_top = base_size * 100
103 | energy_bottom = keyboard_top - base_size * 7
104 | white_key_width = round(key_width * 33 / 36) #22.5 / 23.5
105 | white_key_height = round(key_width * 150 / 23.5)
106 | black_key_width = round(key_width * 23 / 36) #15 / 23.5
107 | black_key_height = round(key_width * 100 / 23.5)
108 |
109 | # velocity size
110 | velocity_width = base_size * 24
111 | velocity_left = window_size[0] - velocity_width - base_margin
112 | velocity_size = base_size // 2
113 | velocity_bottom = keyboard_top - base_size * 19
114 |
115 | pygame.display.set_mode(window_size, pygame.FULLSCREEN if full_screen else pygame.RESIZABLE) # workaround
116 | return pygame.display.set_mode(window_size, pygame.FULLSCREEN if full_screen else pygame.RESIZABLE), Keyboard()
117 |
118 |
119 |
120 | class DipBench:
121 | audio_inputs = []
122 | midi_inputs = []
123 | midi_outputs = []
124 | audio = None
125 |
126 | midi_in = None
127 | midi_out = None
128 | stream = None
129 |
130 | mode = 0
131 | monitor_mode = 0
132 | last_error = None
133 | terminated = False
134 |
135 | # measurement results in pitch direction
136 | tone = -1
137 | last_hue = 0.0
138 | pitch_waveforms = None
139 | pitch_spectrums = None
140 | duplication_in_pitch = None
141 | pitch_correl = None
142 | pitch_color = [(255,255,255)] * 88
143 | pitch_variation = None
144 | pitch_checked = None
145 | pitch_latency = None
146 | pitch_latency_average = None
147 | pitch_latency_std = None
148 | pitch_volume = None
149 | pitch_volume_average = None
150 | pitch_volume_std = None
151 |
152 | # measurement results in velocity direction
153 | velocity = -1
154 | velocity_waveforms = None
155 | velocity_spectrums = None
156 | duplication_in_velocity = None
157 | velocity_correl = None
158 | velocity_color = [(64,64,64)] * 127
159 | velocity_layer = None
160 | velocity_checked = None
161 | velocity_latency = None
162 | velocity_latency_average = None
163 | velocity_latency_std = None
164 | velocity_volume = None
165 | max_correl = 0.0
166 |
167 | # realtime measurement results
168 | realtime_waveform = None
169 | realtime_spectrum = None
170 | realtime_note_on = [False] * 88
171 | realtime_key_on = [False] * 88
172 | realtime_velocity = [0] * 88
173 | realtime_damper_on = False
174 |
175 | def __init__(self):
176 | self.audio = pyaudio.PyAudio()
177 |
178 | # prepare midi
179 | pygame.midi.init()
180 | for i in range(pygame.midi.get_count()):
181 | midi_device_info = pygame.midi.get_device_info(i)
182 | if not midi_device_info[4]: # not opened.
183 | device_name = midi_device_info[1].decode()
184 | if midi_device_info[2]: #input
185 | self.midi_inputs.append(device_name)
186 | else:
187 | self.midi_outputs.append(device_name)
188 | pygame.midi.quit()
189 |
190 | # prepare audio
191 | info = self.audio.get_host_api_info_by_index(0)
192 | device_count = info.get('deviceCount')
193 | for i in range(device_count):
194 | device = self.audio.get_device_info_by_host_api_device_index(0, i)
195 | if device.get('maxInputChannels') >= 2:
196 | self.audio_inputs.append(device.get('name'))
197 |
198 | def __del__(self):
199 | self.terminate()
200 | self.process()
201 | self.audio.terminate()
202 |
203 | def terminate(self):
204 | self.terminated = True
205 | self.last_error = None
206 |
207 | def shift_audio_inputs(self):
208 | self.audio_inputs.append(self.audio_inputs.pop(0))
209 |
210 | def shift_midi_inputs(self):
211 | self.midi_inputs.append(self.midi_inputs.pop(0))
212 |
213 | def shift_midi_outputs(self):
214 | self.midi_outputs.append(self.midi_outputs.pop(0))
215 |
216 | def __open_audio(self):
217 | info = self.audio.get_host_api_info_by_index(0)
218 | device_count = info.get('deviceCount')
219 | index = -1
220 | for i in range(device_count):
221 | device = self.audio.get_device_info_by_host_api_device_index(0, i)
222 | if self.audio_inputs[0] == device.get('name') and device.get('maxInputChannels') >= 2:
223 | index = i
224 | if index >= 0:
225 | self.stream = self.audio.open(input=True, input_device_index = index, format=pyaudio.paInt16, channels=2, rate=sampling_freq, frames_per_buffer=sample_length)
226 | else:
227 | self.last_error = f'Can\'t open audio input "{self.audio_inputs[0]}".'
228 |
229 | def __close_audio(self):
230 | self.stream.close()
231 | self.stream = None
232 |
233 | def __open_midi_in(self):
234 | self.__close_midi_in()
235 | pygame.midi.init()
236 | if len(self.midi_inputs) == 0:
237 |             self.last_error = 'No midi input available.'
238 |             return
239 | for i in range(pygame.midi.get_count()):
240 | midi_device_info = pygame.midi.get_device_info(i)
241 | if not midi_device_info[4]: # not opened.
242 | device_name = midi_device_info[1].decode()
243 | if midi_device_info[2]: # input
244 | if self.midi_inputs[0] == device_name:
245 | self.midi_in = pygame.midi.Input(i)
246 | return
247 | self.last_error = f'Can\'t open midi input "{self.midi_inputs[0]}".'
248 | pygame.midi.quit()
249 |
250 | def __open_midi_out(self):
251 | self.__close_midi_out()
252 | pygame.midi.init()
253 | if len(self.midi_outputs) == 0:
254 |             self.last_error = 'No midi output available.'
255 |             return
256 | for i in range(pygame.midi.get_count()):
257 | midi_device_info = pygame.midi.get_device_info(i)
258 | if not midi_device_info[4]: # not opened.
259 | device_name = midi_device_info[1].decode()
260 | if not midi_device_info[2]: # output
261 | if self.midi_outputs[0] == device_name:
262 | self.midi_out = pygame.midi.Output(i)
263 | return
264 | self.last_error = f'Can\'t open midi output "{self.midi_outputs[0]}".'
265 | pygame.midi.quit()
266 |
267 | def __close_midi_in(self):
268 | if self.midi_in is not None:
269 | self.midi_in.close()
270 | self.midi_in = None
271 | pygame.midi.quit()
272 |
273 | def __close_midi_out(self):
274 | if self.midi_out is not None:
275 | self.midi_out.close()
276 | self.midi_out = None
277 | pygame.midi.quit()
278 |
279 | def set_tone(self, tone, play=True):
280 | self.tone = np.clip(tone, -1, 87)
281 | if self.pitch_waveforms is not None and self.pitch_waveforms[self.tone] is not None and tone >= 0 and play:
282 | self.monitor_mode = 1
283 | sound = pygame.sndarray.make_sound(self.pitch_waveforms[self.tone])
284 | sound.play()
285 |
286 | def shift_tone_next(self):
287 | self.set_tone((self.tone + 1) % 88)
288 |
289 | def shift_tone_previous(self):
290 | if self.tone >= 0:
291 | self.set_tone((self.tone + 87) % 88)
292 | else:
293 | self.set_tone(87)
294 |
295 | def set_velocity(self, velocity, play=True):
296 | self.velocity = np.clip(velocity, -1, 126)
297 | if self.velocity_waveforms is not None and self.velocity_waveforms[self.velocity] is not None and velocity >= 0 and play:
298 | self.monitor_mode = 2
299 | sound = pygame.sndarray.make_sound(self.velocity_waveforms[self.velocity])
300 | sound.play()
301 |
302 | def shift_velocity_next(self):
303 | self.set_velocity((self.velocity + 1) % 127)
304 |
305 | def shift_velocity_previous(self):
306 | if self.velocity >= 0:
307 | self.set_velocity((self.velocity + 126) % 127)
308 | else:
309 | self.set_velocity(126)
310 |
311 | def get_note_on(self, note):
312 | if note == self.tone:
313 | return True
314 | if self.realtime_note_on:
315 | return self.realtime_note_on[note]
316 | return False
317 |
318 | def measure_pitch_variation(self):
319 | self.terminated = False
320 | self.last_error = None
321 | self.mode = self.monitor_mode = 1
322 | self.pitch_waveforms = [None] * 88
323 | self.pitch_spectrums = [None] * 88
324 | self.duplication_in_pitch = [None] * 88
325 | self.pitch_correl = [0.0] * 87
326 | self.pitch_color = [(255,255,255)] * 88
327 | self.tone = 0
328 | self.pitch_variation = 0
329 | self.pitch_checked = 0
330 | self.pitch_latency = [None] * 88
331 | self.pitch_latency_average = None
332 | self.pitch_latency_std = None
333 | self.pitch_volume = [None] * 88
334 | self.pitch_volume_average = None
335 | self.pitch_volume_std = None
336 |
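    # Spectrum helper: mix the stereo buffer to mono, apply a Hann window
    # (zero-padding recordings shorter than spectrum_size), and return the FFT
    # magnitude on a log scale of 6 dB per factor of two.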
337 | def __get_spectrum(self, waveform):
338 | waveform = np.sum(waveform,axis=1)
339 | if len(waveform) < spectrum_size:
340 | waveform = waveform * np.hanning(len(waveform))
341 | waveform = np.pad(waveform, ((0, spectrum_size - len(waveform))))
342 | else:
343 | waveform = waveform[:spectrum_size]
344 | waveform = waveform * np.hanning(spectrum_size)
345 | spectrum = np.log(np.abs(np.fft.fft(waveform / 32768.0)) / spectrum_size) / np.log(2) * 6.0 # in dB
346 | return spectrum
347 |
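    # Duplication check: when nextnote is True, the previous key's recording is
    # resampled up by one semitone, so a piano that reuses one sample for
    # neighbouring keys produces a near-identical waveform. The best normalized
    # cross-correlation found over a +/- shift_range sample search is returned;
    # values near 1.0 mean the two notes appear to share the same source sample.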
348 | def __check_duplication(self, pos, waveform1, waveform2, nextnote=False):
349 | if nextnote:
350 | ratio = np.exp(np.log(2.0) / 12.0)
351 | original_pos = np.linspace(0, len(waveform1) , len(waveform1))
352 | interp_pos = original_pos * ratio
353 | waveform1 = np.stack([np.interp(interp_pos, original_pos, waveform1[:, 0]), np.interp(interp_pos, original_pos, waveform1[:, 1])], axis = 1)
354 | offset = -int(sample_length * (ratio - 1.0))
355 | else:
356 | waveform1 = waveform1.astype(np.float32)
357 | offset = 0
358 | max_correl = 0.0
359 | shift_pos = -1
360 | for shift in range(shift_range * 2):
361 | left = sample_length + (shift - shift_range) + offset
362 | w1 = waveform1[left:left + correl_size].flatten()
363 | w2 = waveform2[sample_length:sample_length + correl_size].flatten()
364 | correl = np.dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))
365 | if correl > max_correl:
366 | max_correl = correl
367 | shift_pos = shift
368 | print(pos, shift_pos - shift_range, max_correl)
369 |
370 | return max_correl
371 |
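    # Latency check: the first sample_length samples are captured before the
    # note-on and give the noise floor; the attack is then scanned (up to a
    # quarter second) for the first sample exceeding both latency_threshold of
    # the attack peak and latency_threshold2 of the noise floor. The offset of
    # that sample after the pre-note buffer is returned in seconds, or None if
    # no onset is found.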
372 | def __check_latency(self, waveform):
373 | attack = np.abs(waveform[sample_length:sampling_freq // 4, 0])
374 | std = np.max(attack)
375 | noise_std = np.max(np.abs(waveform[:sample_length]))
376 |
377 | if std * latency_threshold > noise_std * latency_threshold2:
378 | threshold = std * latency_threshold
379 | else:
380 | threshold = noise_std * latency_threshold2
381 | indices = np.where(attack > threshold)
382 | if indices[0].size > 0:
383 | return indices[0][0] / sampling_freq
384 | else:
385 | return None
386 |
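    # Volume check: RMS level of the recording from the detected onset onward
    # (10 ms after the note-on when no latency was measured), on the same
    # 6 dB-per-factor-of-two scale relative to full scale.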
387 | def __check_volume(self, waveform, latency):
388 | volume = np.log(np.sqrt(np.mean(np.square(waveform[sample_length + int((latency if latency is not None else 0.01) * sampling_freq):] / 32768.0)))) / np.log(2.0) * 6.0
389 | return volume
390 |
391 | def __pitch_variation_measurement(self):
392 | if self.stream is not None:
393 | buf = self.stream.read(sample_length)
394 | if self.pitch_waveforms[self.tone] is None:
395 | self.pitch_waveforms[self.tone] = np.frombuffer(buf, dtype=np.int16).reshape(sample_length, 2)
396 | if self.midi_out:
397 | self.midi_out.note_on(self.tone + 21, 100)
398 | else:
399 | previous_len = len(self.pitch_waveforms[self.tone])
400 | self.pitch_waveforms[self.tone] = np.concatenate([self.pitch_waveforms[self.tone], np.frombuffer(buf, dtype=np.int16).reshape(sample_length, 2)])
401 | self.pitch_spectrums[self.tone] = self.__get_spectrum(self.pitch_waveforms[self.tone])
402 | if previous_len <= sampling_freq // 2 and len(self.pitch_waveforms[self.tone]) > sampling_freq // 2:
403 | # note off
404 | if self.midi_out:
405 | self.midi_out.note_off(self.tone + 21, 100)
406 |
407 | # duplication
408 | duplication = False
409 | if self.tone > 0:
410 | max_correl = self.__check_duplication(self.tone + 21, self.pitch_waveforms[self.tone - 1], self.pitch_waveforms[self.tone], True)
411 | self.pitch_correl[self.tone - 1] = max_correl
412 | duplication = max_correl > pitch_direction_threshohld
413 | self.duplication_in_pitch[self.tone] = duplication
414 | if not duplication:
415 | self.last_hue = (self.last_hue + 0.25 + random.random() * 0.5) % 1.0
416 | self.pitch_variation = self.pitch_variation + 1
417 | self.pitch_color[self.tone] = colorsys.hsv_to_rgb(self.last_hue, 0.8, 255.0)
418 | self.pitch_checked = self.pitch_checked + 1
419 |
420 | # latency
421 | self.pitch_latency[self.tone] = self.__check_latency(self.pitch_waveforms[self.tone])
422 | self.pitch_latency_average = np.nanmean(np.array(self.pitch_latency, dtype=float))
423 | if len([latency for latency in self.pitch_latency if latency is not None]) >= 1:
424 | self.pitch_latency_std = np.nanstd(np.array(self.pitch_latency, dtype=float))
425 |
426 | # volume
427 | if self.tone < 88 - 18: # ignore upper 18 tones
428 | self.pitch_volume[self.tone] = self.__check_volume(self.pitch_waveforms[self.tone], self.pitch_latency[self.tone])
429 | self.pitch_volume_average = np.nanmean(np.array(self.pitch_volume, dtype=float))
430 | if len([volume for volume in self.pitch_volume if volume is not None]) >= 1:
431 | self.pitch_volume_std = np.nanstd(np.array(self.pitch_volume, dtype=float))
432 |
433 | elif len(self.pitch_waveforms[self.tone]) >= sampling_freq:
434 | self.__close_audio()
435 | self.__close_midi_out()
436 |
437 | if self.tone < 87 and not self.terminated:
438 | self.tone = self.tone + 1
439 | else:
440 | self.mode = 0
441 | self.tone = -1
442 |
443 | if self.mode == 1 and self.stream is None and self.last_error == None:
444 | # start recording
445 | self.__open_audio()
446 | self.__open_midi_out()
447 |
448 | def measure_velocity_layer(self):
449 | self.terminated = False
450 | self.last_error = None
451 | self.mode = self.monitor_mode = 2
452 | self.velocity_waveforms = [None] * 127
453 | self.velocity_spectrums = [None] * 127
454 | self.duplication_in_velocity = [None] * 127
455 | self.velocity_correl = [0.0] * 126
456 | self.velocity_color = [(64,64,64)] * 127
457 | self.velocity = 0
458 | self.velocity_layer = 0
459 | self.velocity_checked = 0
460 | self.max_correl = 0.0
461 | self.velocity_latency = [None] * 127
462 | self.velocity_latency_average = None
463 | self.velocity_latency_std = None
464 | self.velocity_volume = [None] * 127
465 | if self.tone < 0:
466 | self.tone = 39
467 |
468 | def __velocity_layer_measurement(self):
469 | if self.stream is not None:
470 | buf = self.stream.read(sample_length)
471 | if self.velocity_waveforms[self.velocity] is None:
472 | self.velocity_waveforms[self.velocity] = np.frombuffer(buf, dtype=np.int16).reshape(sample_length, 2)
473 | if self.midi_out:
474 | self.midi_out.note_on(self.tone + 21, self.velocity + 1)
475 | else:
476 | previous_len = len(self.velocity_waveforms[self.velocity])
477 | self.velocity_waveforms[self.velocity] = np.concatenate([self.velocity_waveforms[self.velocity], np.frombuffer(buf, dtype=np.int16).reshape(sample_length, 2)])
478 | self.velocity_spectrums[self.velocity] = self.__get_spectrum(self.velocity_waveforms[self.velocity])
479 | if previous_len <= sampling_freq // 2 and len(self.velocity_waveforms[self.velocity]) > sampling_freq // 2:
480 | # note off
481 | if self.midi_out:
482 | self.midi_out.note_off(self.tone + 21, self.velocity + 1)
483 |
484 | if np.std(self.velocity_waveforms[self.velocity][sample_length:]) > np.std(self.velocity_waveforms[self.velocity][:sample_length]) * 64.0: # SN > 36dB
485 | # latency
486 | self.velocity_latency[self.velocity] = self.__check_latency(self.velocity_waveforms[self.velocity])
487 | self.velocity_latency_average = np.nanmean(np.array(self.velocity_latency, dtype=float))
488 | if len([latency for latency in self.velocity_latency if latency is not None]) >= 1:
489 | self.velocity_latency_std = np.nanstd(np.array(self.velocity_latency, dtype=float))
490 |
491 | # volume
492 | if np.std(self.velocity_waveforms[self.velocity][sample_length:]) > np.std(self.velocity_waveforms[self.velocity][:sample_length]) * 4.0: # SN > 12dB
493 | self.velocity_volume[self.velocity] = self.__check_volume(self.velocity_waveforms[self.velocity], self.velocity_latency[self.velocity])
494 |
495 | # duplication
496 | duplication = False
497 | if self.velocity > 0:
498 | max_correl = self.__check_duplication(self.velocity, self.velocity_waveforms[self.velocity - 1], self.velocity_waveforms[self.velocity])
499 | # max_correl = np.dot(self.velocity_spectrums[self.velocity - 1], self.velocity_spectrums[self.velocity]) / ((np.linalg.norm(self.velocity_spectrums[self.velocity - 1]) * np.linalg.norm(self.velocity_spectrums[self.velocity])))
500 | self.velocity_correl[self.velocity - 1] = max_correl
501 | duplication = max_correl > velocity_direction_threshohld
502 | if max_correl > self.max_correl:
503 | self.max_correl = max_correl
504 | self.duplication_in_velocity[self.velocity] = duplication and self.max_correl > velocity_direction_threshohld
505 | if self.max_correl > velocity_direction_threshohld or self.velocity_latency[self.velocity] is not None:
506 | if not duplication or self.velocity_layer == 0:
507 | self.last_hue = (self.last_hue + 0.25 + random.random() * 0.5) % 1.0
508 | self.velocity_layer = self.velocity_layer + 1
509 | self.velocity_color[self.velocity] = colorsys.hsv_to_rgb(self.last_hue, 0.8, 224.0)
510 | self.velocity_checked = self.velocity_checked + 1
511 |
512 | elif len(self.velocity_waveforms[self.velocity]) >= sampling_freq:
513 | self.__close_audio()
514 | self.__close_midi_out()
515 |
516 | if self.velocity < 126 and not self.terminated:
517 | self.velocity = self.velocity + 1
518 | else:
519 | self.mode = 0
520 | self.velocity = -1
521 |
522 | if self.mode == 2 and self.stream is None and self.last_error == None:
523 | # start recording
524 | self.__open_audio()
525 | self.__open_midi_out()
526 |
527 | def realtime_analysis(self):
528 | self.terminated = False
529 | self.last_error = None
530 | self.mode = self.monitor_mode = 3
531 | self.realtime_waveform = None
532 | self.realtime_spectrum = None
533 | self.tone = -1
534 | self.velocity = -1
535 | self.realtime_note_on = [False] * 88
536 | self.realtime_key_on = [False] * 88
537 | self.realtime_velocity = [0] * 88
538 | self.realtime_damper_on = False
539 | self.__open_audio()
540 | self.__open_midi_in()
541 |
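    # Real-time mode: poll incoming MIDI and keep the on-screen keyboard in sync
    # (note-on is status 0x90 with velocity > 0; note-off is 0x80, or 0x90 with
    # velocity 0; CC 64 is the damper pedal and keeps released notes sounding),
    # then append the newest audio block to a rolling buffer of
    # realtime_analysis_length samples for the waveform and spectrum display.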
542 | def __realtime_analysis(self):
543 | if self.terminated:
544 | self.mode = 0
545 | self.__close_audio()
546 | self.__close_midi_in()
547 | self.realtime_waveform = None
548 | self.realtime_spectrum = None
549 | return
550 |
551 | if self.midi_in and self.midi_in.poll():
552 | midi_events = self.midi_in.read(256)
553 | for midi_event in midi_events:
554 | key = midi_event[0][1] - 21
555 | if key >=0 and key < 88:
556 | if midi_event[0][0] & 0xf0 == 0x90 and midi_event[0][2] > 0: # note on
557 | self.realtime_note_on[key] = self.realtime_key_on[key] = True
558 | self.realtime_velocity[key] = midi_event[0][2]
559 | if midi_event[0][0] & 0xf0 == 0x80 or (midi_event[0][0] & 0xf0 == 0x90 and midi_event[0][2] == 0): # note off
560 | self.realtime_key_on[key] = False
561 | self.realtime_note_on[key] = self.realtime_note_on[key] and self.realtime_damper_on
562 | if midi_event[0][0] & 0xf0 == 0xb0: # control
563 | if midi_event[0][1] & 0xff == 0x40: # damper
564 | self.realtime_damper_on = midi_event[0][2] > 0
565 | if not self.realtime_damper_on:
566 | self.realtime_note_on = self.realtime_key_on.copy()
567 |
568 | if self.stream and self.last_error == None:
569 | buf = self.stream.read(sample_length)
570 | if self.realtime_waveform is None:
571 | self.realtime_waveform = np.frombuffer(buf, dtype=np.int16).reshape(sample_length, 2)
572 | else:
573 | self.realtime_waveform = np.concatenate([self.realtime_waveform[-(realtime_analysis_length - sample_length):], np.frombuffer(buf, dtype=np.int16).reshape(sample_length, 2)])
574 | self.realtime_spectrum = self.__get_spectrum(self.realtime_waveform)
575 |
576 | def process(self):
577 | if self.mode == 1:
578 | # Measure pitch variation
579 | self.__pitch_variation_measurement()
580 | elif self.mode == 2:
581 | # Measure velocity layers
582 | self.__velocity_layer_measurement()
583 | elif self.mode == 3:
584 | # Realtime analysis
585 | self.__realtime_analysis()
586 |
587 | def __save_waveforms(self, waveforms, fn_prefix, index_offset):
588 | for i, waveform in enumerate(waveforms):
589 | if waveform is not None:
590 | wav = wave.Wave_write(f'{fn_prefix}{i + index_offset:0=3}.wav')
591 | wav.setnchannels(2)
592 | wav.setsampwidth(2)
593 | wav.setframerate(48000)
594 | wav.writeframes(waveform.tobytes())
595 | wav.close()
596 |
597 | def save_waveforms(self):
598 | if self.pitch_waveforms is not None:
599 | self.__save_waveforms(self.pitch_waveforms, 'key_', 0)
600 | if self.velocity_waveforms is not None:
601 | self.__save_waveforms(self.velocity_waveforms, 'velocity_', 1)
602 |
603 | def resource_path(relative_path):
604 | if hasattr(sys, '_MEIPASS'):
605 | return os.path.join(sys._MEIPASS, relative_path)
606 | return os.path.join(os.path.abspath("."), relative_path)
607 |
608 | def main():
609 | global app_title, app_version
610 | global screen_size, floating_window_size, window_size, full_screen, line_width, base_size, base_margin, vector_scope_size
611 | global keyboard_margin_x, key_width, keyboard_top, energy_bottom, white_key_width, white_key_height, black_key_width, black_key_height
612 | global velocity_width, velocity_left, velocity_size, velocity_bottom
613 |
614 | # initialization
615 | pygame.mixer.pre_init(frequency=sampling_freq, size=-16, channels=2)
616 | pygame.init()
617 | pygame.display.set_icon(pygame.image.load(resource_path('icon_128x128.png')))
618 | pygame.display.set_caption(app_title)
619 | display_info = pygame.display.Info()
620 | screen_size = (display_info.current_w, display_info.current_h)
621 |
622 | screen, keyboard = SetUI()
623 | refresh_font = True
624 |
625 | dipbench = DipBench()
626 |
627 | # main_loop
628 | terminated = False
629 |
630 | clock = pygame.time.Clock()
631 |
632 | while (not terminated):
633 | if refresh_font:
634 | font = pygame.font.Font(None, base_size * 4)
635 | app_title_text = font.render(f'{app_title} version {app_version} / Frieve 2022-2024', True, (255,255,255))
636 | help_text = font.render('[A] Change Audio-In, [I] Change MIDI-In, [O] Change MIDI-Out, [P] Measure pitch variation, [V] Measure velocity layer, [R] Real-time mode, [F11] Full screen, [Q] Quit', True, (128,192,255))
637 | help_text2 = font.render('[ESC] Abort', True, (128,192,255))
638 | refresh_font = False
639 |
640 | screen.fill((0,0,0))
641 |
642 | # fps_text = font.render(f"FPS: {clock.get_fps():.2f}", True, (255, 255, 255))
643 | # screen.blit(fps_text, (window_size[0] - base_margin - base_size * 16, base_margin))
644 |
645 | # handle events
646 | for event in pygame.event.get():
647 | if event.type == QUIT:
648 | terminated = True
649 |
650 | elif event.type == KEYDOWN:
651 | if event.key == K_q:
652 | terminated = True
653 | elif event.key == K_ESCAPE:
654 | dipbench.terminate()
655 | elif event.key == K_F11:
656 | full_screen = not full_screen
657 | screen, keyboard = SetUI()
658 | refresh_font = True
659 | elif dipbench.mode == 0:
660 | # mode change
661 | if event.key == K_p:
662 | dipbench.measure_pitch_variation()
663 | elif event.key == K_v:
664 | dipbench.measure_velocity_layer()
665 | elif event.key == K_r:
666 | dipbench.realtime_analysis()
667 |
668 | # setting
669 | elif event.key == K_a:
670 | dipbench.shift_audio_inputs()
671 | elif event.key == K_i:
672 | dipbench.shift_midi_inputs()
673 | elif event.key == K_o:
674 | dipbench.shift_midi_outputs()
675 |
676 | # preview
677 | elif event.key == K_RIGHT:
678 | dipbench.shift_tone_next()
679 | elif event.key == K_LEFT:
680 | dipbench.shift_tone_previous()
681 | elif event.key == K_UP:
682 | dipbench.shift_velocity_next()
683 | elif event.key == K_DOWN:
684 | dipbench.shift_velocity_previous()
685 |
686 | # i/o
687 | elif event.key == K_s:
688 | dipbench.save_waveforms()
689 |
690 | elif event.type == pygame.VIDEORESIZE and not full_screen:
691 | floating_window_size = (event.w, event.h)
692 | screen, keyboard = SetUI()
693 | refresh_font = True
694 |
695 | elif event.type == MOUSEBUTTONDOWN and event.button == 1:
696 | if dipbench.mode == 0:
697 | mouse_click_tone = -1
698 | mouse_click_velocity = -1
699 | for key in [key for key in keyboard.keys[21:109] if key.black_key]:
700 | if event.pos[0] >= key.x - black_key_width / 2 and event.pos[0] < key.x + black_key_width / 2 and event.pos[1] >= keyboard_top and event.pos[1] < keyboard_top + black_key_height:
701 | if key.note_no >= 21 and key.note_no < 109:
702 | mouse_click_tone = key.note_no - 21
703 | if mouse_click_tone == -1:
704 | for key in [key for key in keyboard.keys[21:109] if not key.black_key]:
705 | if event.pos[0] >= key.x - white_key_width / 2 and event.pos[0] < key.x + white_key_width / 2 and event.pos[1] >= keyboard_top and event.pos[1] < keyboard_top + white_key_height:
706 | if key.note_no >= 21 and key.note_no < 109:
707 | mouse_click_tone = key.note_no - 21
708 | if event.pos[0] >= window_size[0] - base_margin - velocity_width and event.pos[0] < window_size[0] - base_margin and event.pos[1] >= velocity_bottom - 127 * velocity_size and event.pos[1] < velocity_bottom:
709 | mouse_click_velocity = (velocity_bottom - event.pos[1]) // velocity_size
710 | dipbench.set_tone(mouse_click_tone)
711 | dipbench.set_velocity(mouse_click_velocity)
712 |
713 | dipbench.process()
714 |
715 | # display info
716 | screen.blit(app_title_text, [base_margin, base_margin])
717 | if len(dipbench.audio_inputs) > 0 and len(dipbench.midi_inputs) > 0 and len(dipbench.midi_outputs) > 0:
718 | device_info_text = font.render(f'Audio input : {dipbench.audio_inputs[0]}, MIDI input : {dipbench.midi_inputs[0]}, MIDI output : {dipbench.midi_outputs[0]}', True, (255,255,255))
719 | else:
720 | device_info_text = font.render('Audio or MIDI device not available', True, (255,0,0))
721 | screen.blit(device_info_text, [base_margin, base_margin + base_size * 5])
722 | screen.blit(help_text if dipbench.mode == 0 else help_text2, [base_margin, base_margin + base_size * 10])
723 | if dipbench.last_error is not None:
724 | error_text = font.render(dipbench.last_error, True, (255,0,0))
725 | screen.blit(error_text, [base_margin, base_margin + base_size * 18])
726 |
727 | # prepare waveform
728 | monitor_wave = None
729 | waveform_std = 1.0
730 | display_latency = None
731 | display_volume = None
732 | if dipbench.monitor_mode == 1 and dipbench.pitch_waveforms is not None and dipbench.tone >= 0 and dipbench.pitch_waveforms[dipbench.tone] is not None:
733 | if len(dipbench.pitch_waveforms[dipbench.tone]) >= sample_length * 2:
734 | monitor_wave = dipbench.pitch_waveforms[dipbench.tone]
735 | waveform_std = np.std(monitor_wave[int(sample_length * 1.5):sample_length * 2])
736 | if dipbench.pitch_latency is not None and len(dipbench.pitch_latency) > dipbench.tone:
737 | display_latency = dipbench.pitch_latency[dipbench.tone]
738 | if dipbench.pitch_volume is not None and len(dipbench.pitch_volume) > dipbench.tone:
739 | display_volume = dipbench.pitch_volume[dipbench.tone]
740 | elif dipbench.monitor_mode == 2 and dipbench.velocity_waveforms is not None and dipbench.velocity >= 0 and dipbench.velocity_waveforms[dipbench.velocity] is not None:
741 | if len(dipbench.velocity_waveforms[dipbench.velocity]) >= sample_length * 2:
742 | monitor_wave = dipbench.velocity_waveforms[dipbench.velocity]
743 | waveform_std = np.std(monitor_wave[int(sample_length * 1.5):sample_length * 2])
744 | if dipbench.velocity_latency is not None and len(dipbench.velocity_latency) > dipbench.velocity:
745 | display_latency = dipbench.velocity_latency[dipbench.velocity]
746 | if dipbench.velocity_volume is not None and len(dipbench.velocity_volume) > dipbench.velocity:
747 | display_volume = dipbench.velocity_volume[dipbench.velocity]
748 | elif dipbench.monitor_mode == 3 and dipbench.realtime_waveform is not None:
749 | monitor_wave = dipbench.realtime_waveform
750 | waveform_std = np.std(monitor_wave)
751 | if waveform_std < 32768 * 2.0 ** (min_level / 6.0):
752 | waveform_std = 32768 * 2.0 ** (min_level / 6.0)
753 |
754 | # draw white keybed
755 | for key in [key for key in keyboard.keys[21:109] if not key.black_key]:
756 | screen.fill((255, 255, 255) if not dipbench.get_note_on(key.note_no - 21) else (255, 0, 0), Rect(key.x - white_key_width // 2, keyboard_top, white_key_width, white_key_height))
757 | # draw black keybed
758 | for key in [key for key in keyboard.keys[21:109] if key.black_key]:
759 | screen.fill((0, 0, 0), Rect(key.x - black_key_width // 2, keyboard_top, black_key_width, black_key_height))
760 | if dipbench.get_note_on(key.note_no - 21):
761 | screen.fill((255, 0, 0), Rect(key.x - black_key_width / 2 + line_width, keyboard_top, black_key_width - line_width * 2, black_key_height - line_width))
762 |
763 | # draw pitch summary text
764 | if dipbench.pitch_variation is not None and dipbench.pitch_checked > 0:
765 | volume_text = ''
766 | if dipbench.pitch_volume_std is not None:
767 | volume_text = f' Volume : standard deviation {dipbench.pitch_volume_std:.2f}dB.'
768 | latency_text = ''
769 | if dipbench.pitch_latency_std is not None:
770 | latency_text = f' Latency : average {dipbench.pitch_latency_average * 1000:.1f}ms, standard deviation {dipbench.pitch_latency_std * 1000:.1f}ms.'
771 |
772 | pitch_info_text = font.render(f'{dipbench.pitch_variation} / {dipbench.pitch_checked} waveforms for keys ({dipbench.pitch_variation * 100 / dipbench.pitch_checked:.2f}%). One waveform for {dipbench.pitch_checked / dipbench.pitch_variation:.2f} keys on average.{volume_text}{latency_text}', True, (255,128,192))
773 | screen.blit(pitch_info_text, [base_margin, energy_bottom - base_size * 7])
774 |
775 | # draw pitch variation
776 | if dipbench.pitch_correl is not None:
777 | for key in keyboard.keys[21:108]:
778 | x1 = key.normalized_x
779 | x2 = x1 + (key_width * 7) / 12
780 | height = base_size * 3
781 | point = []
782 | resolution = 6
783 | for i in range(resolution + 1):
784 | point.append((x1 + (x2 - x1) * i / resolution, energy_bottom - math.sqrt(math.sin(i / resolution * 3.1415926)) * height))
785 | pygame.draw.lines(screen, np.array([255.0,255.0,255.0]) * dipbench.pitch_correl[key.note_no - 21]**2, False, point, line_width)
786 |
787 | # draw individual volume
788 | pygame.draw.line(screen, (96,96,96), (base_margin, energy_bottom - base_size * 4), (window_size[0] - base_margin, energy_bottom - base_size * 4), line_width)
789 | pygame.draw.line(screen, (96,96,96), (base_margin, energy_bottom + base_size * 4), (window_size[0] - base_margin, energy_bottom + base_size * 4), line_width)
790 | if dipbench.pitch_volume is not None and dipbench.pitch_volume_average is not None and len(dipbench.pitch_volume) > 1:
791 | for key in keyboard.keys[21:109]:
792 | if dipbench.pitch_volume[key.note_no - 21] is not None:
793 | pygame.draw.line(screen, (0, 224, 0), (key.normalized_x, energy_bottom), (key.normalized_x, energy_bottom - (dipbench.pitch_volume[key.note_no - 21] - dipbench.pitch_volume_average) / 6.0 * base_size * 4), line_width * 2)
794 |
795 | # draw individual latency
796 | pygame.draw.line(screen, (128,128,128), (base_margin, window_size[1] - base_margin), (window_size[0] - base_margin, window_size[1] - base_margin), line_width)
797 | pygame.draw.line(screen, (96,96,96), (base_margin, window_size[1] - base_margin - 0.01 * base_size * 500), (window_size[0] - base_margin, window_size[1] - base_margin - 0.01 * base_size * 500), line_width)
798 | for key in keyboard.keys[21:109]:
799 | if dipbench.pitch_latency is not None and len(dipbench.pitch_latency) > key.note_no - 21 and dipbench.pitch_latency[key.note_no - 21] is not None:
800 | latency = dipbench.pitch_latency[key.note_no - 21]
801 | y = window_size[1] - base_margin - latency * base_size * 500
802 | pygame.draw.line(screen, (255,192,128), (key.normalized_x - base_size, y), (key.normalized_x + base_size, y), line_width * 2)
803 |
804 | # draw key center dot
805 | for key in keyboard.keys[21:109]:
806 | pygame.draw.circle(screen, dipbench.pitch_color[key.note_no - 21], (key.normalized_x + 1, energy_bottom), (base_size * 3) / 7, 0)
807 |
808 | # draw velocity summary
809 | if dipbench.velocity_layer is not None and dipbench.velocity_checked > 0:
810 |             volume_text = latency_text = ''
811 | if dipbench.velocity_volume is not None:
812 | volume = [vol for vol in dipbench.velocity_volume if vol is not None]
813 | if len(volume) > 0:
814 | velocity = [i + 1 for i, vol in enumerate(dipbench.velocity_volume) if vol is not None]
815 | spearman_corr, _ = spearmanr(velocity, volume)
816 | volume_text = f' Volume : spearman corr {spearman_corr:.6f}.'
817 |
818 | if dipbench.velocity_latency_std is not None:
819 | latency_text = f' Latency : standard deviation {dipbench.velocity_latency_std * 1000:.1f}ms.'
820 |
821 | velocity_info_text = font.render(f'{dipbench.velocity_layer} / {dipbench.velocity_checked} waveforms for velocities ({dipbench.velocity_layer * 100 / dipbench.velocity_checked:.2f}%). One waveform for {dipbench.velocity_checked / dipbench.velocity_layer:.2f} velocity on average.{volume_text}{latency_text}', True, (255,128,192))
822 | screen.blit(velocity_info_text, [base_margin, energy_bottom - base_size * 11])
823 |
824 | # draw velocity variation
825 | for i in range(len(dipbench.velocity_color)):
826 | rect = Rect(velocity_left, velocity_bottom - (i + 1) * velocity_size, velocity_width, velocity_size)
827 | pygame.draw.rect(screen, dipbench.velocity_color[i], rect)
828 |
829 | # draw velocity latency
830 | pointlist = []
831 | pygame.draw.line(screen, (96,96,96), (velocity_left + 0.01 * base_size * 500, velocity_bottom - 127 * velocity_size), (velocity_left + 0.01 * base_size * 500, velocity_bottom), line_width)
832 | if dipbench.velocity_latency is not None:
833 | for i in range(127):
834 | if dipbench.velocity_latency[i] is not None:
835 | x = int(velocity_left + dipbench.velocity_latency[i] * base_size * 500)
836 | y = velocity_bottom - i * velocity_size - velocity_size // 2
837 | pointlist.append((x,y))
838 | if len(pointlist) > 1:
839 | pygame.draw.lines(screen, (0,0,0), False, pointlist, base_size)
840 | pygame.draw.lines(screen, (255,192,128), False, pointlist, base_size // 2)
841 |
842 | # draw velocity volume
843 | pointlist = []
844 | if dipbench.velocity_volume is not None:
845 | velocity_min_volume = np.nanmin(np.array(dipbench.velocity_volume, dtype=float))
846 | velocity_max_volume = np.nanmax(np.array(dipbench.velocity_volume, dtype=float))
847 |             if not np.isnan(velocity_min_volume) and not np.isnan(velocity_max_volume) and velocity_max_volume > velocity_min_volume:
848 | for i in range(127):
849 | if dipbench.velocity_volume[i] is not None:
850 | x = int(velocity_left + (dipbench.velocity_volume[i] - velocity_min_volume) / (velocity_max_volume - velocity_min_volume) * (velocity_width - 1))
851 | y = velocity_bottom - i * velocity_size - velocity_size // 2
852 | pointlist.append((x,y))
853 | if len(pointlist) > 1:
854 | pygame.draw.lines(screen, (0,0,0), False, pointlist, base_size)
855 | pygame.draw.lines(screen, (0,224,0), False, pointlist, base_size // 2)
856 |
857 | # draw velocity cursor
858 | velocity = []
859 | if dipbench.velocity >= 0:
860 | velocity.append(dipbench.velocity)
861 | if dipbench.realtime_velocity is not None:
862 | velocity.extend([v - 1 for i, v in enumerate(dipbench.realtime_velocity) if v > 0 and dipbench.realtime_note_on[i]])
863 | velocity.sort()
864 | last_velocity_y = window_size[1]
865 | for v in velocity:
866 | velocity_y = velocity_bottom - v * velocity_size - velocity_size // 2
867 | pointlist = [[velocity_left - base_size * 3, velocity_y - base_size],
868 | [velocity_left - base_size * 3, velocity_y + base_size],
869 | [velocity_left - base_size * 1, velocity_y]]
870 | pygame.draw.polygon(screen, (255,255,255), pointlist)
871 | if velocity_y < last_velocity_y:
872 | velocity_text = font.render(f'{v + 1}', True, (255,255,255))
873 | screen.blit(velocity_text, [velocity_left - base_size * 4 - velocity_text.get_width(), velocity_y - velocity_text.get_height() // 2])
874 | last_velocity_y = velocity_y - base_margin
875 |
876 | # draw waveform
877 | if monitor_wave is not None:
878 | # display left ch wave form
879 | waveform = monitor_wave[:,0]
880 |
881 | if len(waveform) >= sample_length * 2:
882 | # display waveform
883 | waveform = waveform[sample_length:sample_length * 2]
884 | waveform_pointlist = np.column_stack((np.linspace(base_margin, window_size[0] // 2 - base_margin, sample_length), waveform / waveform_std * vector_scope_size + keyboard_top // 2))
885 | pygame.draw.lines(screen, (96, 96, 96), False, waveform_pointlist)
886 |
887 | # display vector scope
888 | monitor_wave = (-monitor_wave + np.stack([monitor_wave[:,1], -monitor_wave[:,0]], axis=1)) * 0.707
889 | pointlist = (monitor_wave[:sampling_freq // 2] - np.mean(monitor_wave)) / waveform_std * 0.8 * vector_scope_size + (window_size[0] // 4, keyboard_top // 2)
890 | pygame.draw.lines(screen, (0,224,0), False, pointlist)
891 |
892 | # display latency
893 | if display_latency is not None:
894 | latency_x = sampling_freq * display_latency / sample_length * (window_size[0] // 2 - base_margin * 2) + base_margin
895 | pygame.draw.line(screen, (255,192,128), (latency_x, keyboard_top // 2 - base_size * 31), (latency_x, keyboard_top // 2 + base_size * 31), line_width * 2)
896 | latency_text = font.render(f'{display_latency * 1000:.1f}ms latency', True, (255,192,128))
897 | screen.blit(latency_text, [latency_x + base_size, keyboard_top // 2 - base_size * 31])
898 |
899 | # display volume
900 | if display_volume is not None:
901 | volume_text = font.render(f'{display_volume:.1f}dB', True, (0,224,0))
902 | screen.blit(volume_text, [window_size[0] // 2 - base_margin - volume_text.get_width(), keyboard_top // 2 - base_size * 31])
903 |
904 | # prepare spectrum
905 | monitor_spectrum = None
906 | if dipbench.monitor_mode == 1 and dipbench.pitch_spectrums is not None and dipbench.tone >= 0 and dipbench.pitch_spectrums[dipbench.tone] is not None:
907 | monitor_spectrum = dipbench.pitch_spectrums[dipbench.tone]
908 | elif dipbench.monitor_mode == 2 and dipbench.velocity_spectrums is not None and dipbench.velocity >= 0 and dipbench.velocity_spectrums[dipbench.velocity] is not None:
909 | monitor_spectrum = dipbench.velocity_spectrums[dipbench.velocity]
910 | elif dipbench.monitor_mode == 3 and dipbench.realtime_spectrum is not None:
911 | monitor_spectrum = dipbench.realtime_spectrum
912 | if monitor_spectrum is not None and len(monitor_spectrum) > 0:
913 | spectrum_max = monitor_spectrum.max()
914 | if spectrum_max < min_level:
915 | spectrum_max = min_level
916 | monitor_spectrum = np.clip(monitor_spectrum - spectrum_max + 96, 0, 96)
917 | x = np.linspace(window_size[0] // 2 + base_margin, velocity_left - base_margin * 3, spectrum_size // 4)
918 | y = velocity_bottom - monitor_spectrum[:spectrum_size // 4] / 96 * (velocity_bottom - (keyboard_top // 2 - base_size * 31))
919 | pointlist = np.stack([x, y], axis=1)
920 | pygame.draw.lines(screen, (0,224,0), False, pointlist)
921 |
922 | # draw
923 | pygame.display.flip()
924 |
925 | # wait
926 | clock.tick(60)
927 |
928 | pygame.quit()
929 |
930 |
931 | if __name__ == "__main__":
932 | main()
933 |
--------------------------------------------------------------------------------