├── LICENSE
├── README.md
└── src
    ├── HarmonicSummationSF.py
    ├── MelodyExtractionFromSingleWav.py
    ├── SIMM.py
    ├── SourceFilterModelSF.py
    ├── __init__.py
    ├── combineSaliences.py
    ├── contourExtraction.py
    ├── contour_classification
    │   ├── ShuffleLabelsOut.py
    │   ├── __init__.py
    │   ├── clf_utils.py
    │   ├── contour_utils.py
    │   ├── experiment_utils.py
    │   ├── generate_melody.py
    │   ├── melody_trackids.json
    │   ├── melody_trackids_orch.json
    │   ├── mv_gaussian.py
    │   ├── orch_groups.json
    │   ├── run_contour_training_melody_extraction.py
    │   ├── run_experiments.py
    │   ├── run_glass_ceiling_experiment.py
    │   └── v_i_splits.json
    ├── imageMatlab.py
    ├── melodyExtractionFromSalienceFunction.py
    ├── parsing.py
    ├── peaks.py
    ├── tracking.py
    └── utils.py
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 |
4 | Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 |
8 | Preamble
9 |
10 | The GNU General Public License is a free, copyleft license for
11 | software and other kinds of works.
12 |
13 | The licenses for most software and other practical works are designed
14 | to take away your freedom to share and change the works. By contrast,
15 | the GNU General Public License is intended to guarantee your freedom to
16 | share and change all versions of a program--to make sure it remains free
17 | software for all its users. We, the Free Software Foundation, use the
18 | GNU General Public License for most of our software; it applies also to
19 | any other work released this way by its authors. You can apply it to
20 | your programs, too.
21 |
22 | When we speak of free software, we are referring to freedom, not
23 | price. Our General Public Licenses are designed to make sure that you
24 | have the freedom to distribute copies of free software (and charge for
25 | them if you wish), that you receive source code or can get it if you
26 | want it, that you can change the software or use pieces of it in new
27 | free programs, and that you know you can do these things.
28 |
29 | To protect your rights, we need to prevent others from denying you
30 | these rights or asking you to surrender the rights. Therefore, you have
31 | certain responsibilities if you distribute copies of the software, or if
32 | you modify it: responsibilities to respect the freedom of others.
33 |
34 | For example, if you distribute copies of such a program, whether
35 | gratis or for a fee, you must pass on to the recipients the same
36 | freedoms that you received. You must make sure that they, too, receive
37 | or can get the source code. And you must show them these terms so they
38 | know their rights.
39 |
40 | Developers that use the GNU GPL protect your rights with two steps:
41 | (1) assert copyright on the software, and (2) offer you this License
42 | giving you legal permission to copy, distribute and/or modify it.
43 |
44 | For the developers' and authors' protection, the GPL clearly explains
45 | that there is no warranty for this free software. For both users' and
46 | authors' sake, the GPL requires that modified versions be marked as
47 | changed, so that their problems will not be attributed erroneously to
48 | authors of previous versions.
49 |
50 | Some devices are designed to deny users access to install or run
51 | modified versions of the software inside them, although the manufacturer
52 | can do so. This is fundamentally incompatible with the aim of
53 | protecting users' freedom to change the software. The systematic
54 | pattern of such abuse occurs in the area of products for individuals to
55 | use, which is precisely where it is most unacceptable. Therefore, we
56 | have designed this version of the GPL to prohibit the practice for those
57 | products. If such problems arise substantially in other domains, we
58 | stand ready to extend this provision to those domains in future versions
59 | of the GPL, as needed to protect the freedom of users.
60 |
61 | Finally, every program is threatened constantly by software patents.
62 | States should not allow patents to restrict development and use of
63 | software on general-purpose computers, but in those that do, we wish to
64 | avoid the special danger that patents applied to a free program could
65 | make it effectively proprietary. To prevent this, the GPL assures that
66 | patents cannot be used to render the program non-free.
67 |
68 | The precise terms and conditions for copying, distribution and
69 | modification follow.
70 |
71 | TERMS AND CONDITIONS
72 |
73 | 0. Definitions.
74 |
75 | "This License" refers to version 3 of the GNU General Public License.
76 |
77 | "Copyright" also means copyright-like laws that apply to other kinds of
78 | works, such as semiconductor masks.
79 |
80 | "The Program" refers to any copyrightable work licensed under this
81 | License. Each licensee is addressed as "you". "Licensees" and
82 | "recipients" may be individuals or organizations.
83 |
84 | To "modify" a work means to copy from or adapt all or part of the work
85 | in a fashion requiring copyright permission, other than the making of an
86 | exact copy. The resulting work is called a "modified version" of the
87 | earlier work or a work "based on" the earlier work.
88 |
89 | A "covered work" means either the unmodified Program or a work based
90 | on the Program.
91 |
92 | To "propagate" a work means to do anything with it that, without
93 | permission, would make you directly or secondarily liable for
94 | infringement under applicable copyright law, except executing it on a
95 | computer or modifying a private copy. Propagation includes copying,
96 | distribution (with or without modification), making available to the
97 | public, and in some countries other activities as well.
98 |
99 | To "convey" a work means any kind of propagation that enables other
100 | parties to make or receive copies. Mere interaction with a user through
101 | a computer network, with no transfer of a copy, is not conveying.
102 |
103 | An interactive user interface displays "Appropriate Legal Notices"
104 | to the extent that it includes a convenient and prominently visible
105 | feature that (1) displays an appropriate copyright notice, and (2)
106 | tells the user that there is no warranty for the work (except to the
107 | extent that warranties are provided), that licensees may convey the
108 | work under this License, and how to view a copy of this License. If
109 | the interface presents a list of user commands or options, such as a
110 | menu, a prominent item in the list meets this criterion.
111 |
112 | 1. Source Code.
113 |
114 | The "source code" for a work means the preferred form of the work
115 | for making modifications to it. "Object code" means any non-source
116 | form of a work.
117 |
118 | A "Standard Interface" means an interface that either is an official
119 | standard defined by a recognized standards body, or, in the case of
120 | interfaces specified for a particular programming language, one that
121 | is widely used among developers working in that language.
122 |
123 | The "System Libraries" of an executable work include anything, other
124 | than the work as a whole, that (a) is included in the normal form of
125 | packaging a Major Component, but which is not part of that Major
126 | Component, and (b) serves only to enable use of the work with that
127 | Major Component, or to implement a Standard Interface for which an
128 | implementation is available to the public in source code form. A
129 | "Major Component", in this context, means a major essential component
130 | (kernel, window system, and so on) of the specific operating system
131 | (if any) on which the executable work runs, or a compiler used to
132 | produce the work, or an object code interpreter used to run it.
133 |
134 | The "Corresponding Source" for a work in object code form means all
135 | the source code needed to generate, install, and (for an executable
136 | work) run the object code and to modify the work, including scripts to
137 | control those activities. However, it does not include the work's
138 | System Libraries, or general-purpose tools or generally available free
139 | programs which are used unmodified in performing those activities but
140 | which are not part of the work. For example, Corresponding Source
141 | includes interface definition files associated with source files for
142 | the work, and the source code for shared libraries and dynamically
143 | linked subprograms that the work is specifically designed to require,
144 | such as by intimate data communication or control flow between those
145 | subprograms and other parts of the work.
146 |
147 | The Corresponding Source need not include anything that users
148 | can regenerate automatically from other parts of the Corresponding
149 | Source.
150 |
151 | The Corresponding Source for a work in source code form is that
152 | same work.
153 |
154 | 2. Basic Permissions.
155 |
156 | All rights granted under this License are granted for the term of
157 | copyright on the Program, and are irrevocable provided the stated
158 | conditions are met. This License explicitly affirms your unlimited
159 | permission to run the unmodified Program. The output from running a
160 | covered work is covered by this License only if the output, given its
161 | content, constitutes a covered work. This License acknowledges your
162 | rights of fair use or other equivalent, as provided by copyright law.
163 |
164 | You may make, run and propagate covered works that you do not
165 | convey, without conditions so long as your license otherwise remains
166 | in force. You may convey covered works to others for the sole purpose
167 | of having them make modifications exclusively for you, or provide you
168 | with facilities for running those works, provided that you comply with
169 | the terms of this License in conveying all material for which you do
170 | not control copyright. Those thus making or running the covered works
171 | for you must do so exclusively on your behalf, under your direction
172 | and control, on terms that prohibit them from making any copies of
173 | your copyrighted material outside their relationship with you.
174 |
175 | Conveying under any other circumstances is permitted solely under
176 | the conditions stated below. Sublicensing is not allowed; section 10
177 | makes it unnecessary.
178 |
179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
180 |
181 | No covered work shall be deemed part of an effective technological
182 | measure under any applicable law fulfilling obligations under article
183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or
184 | similar laws prohibiting or restricting circumvention of such
185 | measures.
186 |
187 | When you convey a covered work, you waive any legal power to forbid
188 | circumvention of technological measures to the extent such circumvention
189 | is effected by exercising rights under this License with respect to
190 | the covered work, and you disclaim any intention to limit operation or
191 | modification of the work as a means of enforcing, against the work's
192 | users, your or third parties' legal rights to forbid circumvention of
193 | technological measures.
194 |
195 | 4. Conveying Verbatim Copies.
196 |
197 | You may convey verbatim copies of the Program's source code as you
198 | receive it, in any medium, provided that you conspicuously and
199 | appropriately publish on each copy an appropriate copyright notice;
200 | keep intact all notices stating that this License and any
201 | non-permissive terms added in accord with section 7 apply to the code;
202 | keep intact all notices of the absence of any warranty; and give all
203 | recipients a copy of this License along with the Program.
204 |
205 | You may charge any price or no price for each copy that you convey,
206 | and you may offer support or warranty protection for a fee.
207 |
208 | 5. Conveying Modified Source Versions.
209 |
210 | You may convey a work based on the Program, or the modifications to
211 | produce it from the Program, in the form of source code under the
212 | terms of section 4, provided that you also meet all of these conditions:
213 |
214 | a) The work must carry prominent notices stating that you modified
215 | it, and giving a relevant date.
216 |
217 | b) The work must carry prominent notices stating that it is
218 | released under this License and any conditions added under section
219 | 7. This requirement modifies the requirement in section 4 to
220 | "keep intact all notices".
221 |
222 | c) You must license the entire work, as a whole, under this
223 | License to anyone who comes into possession of a copy. This
224 | License will therefore apply, along with any applicable section 7
225 | additional terms, to the whole of the work, and all its parts,
226 | regardless of how they are packaged. This License gives no
227 | permission to license the work in any other way, but it does not
228 | invalidate such permission if you have separately received it.
229 |
230 | d) If the work has interactive user interfaces, each must display
231 | Appropriate Legal Notices; however, if the Program has interactive
232 | interfaces that do not display Appropriate Legal Notices, your
233 | work need not make them do so.
234 |
235 | A compilation of a covered work with other separate and independent
236 | works, which are not by their nature extensions of the covered work,
237 | and which are not combined with it such as to form a larger program,
238 | in or on a volume of a storage or distribution medium, is called an
239 | "aggregate" if the compilation and its resulting copyright are not
240 | used to limit the access or legal rights of the compilation's users
241 | beyond what the individual works permit. Inclusion of a covered work
242 | in an aggregate does not cause this License to apply to the other
243 | parts of the aggregate.
244 |
245 | 6. Conveying Non-Source Forms.
246 |
247 | You may convey a covered work in object code form under the terms
248 | of sections 4 and 5, provided that you also convey the
249 | machine-readable Corresponding Source under the terms of this License,
250 | in one of these ways:
251 |
252 | a) Convey the object code in, or embodied in, a physical product
253 | (including a physical distribution medium), accompanied by the
254 | Corresponding Source fixed on a durable physical medium
255 | customarily used for software interchange.
256 |
257 | b) Convey the object code in, or embodied in, a physical product
258 | (including a physical distribution medium), accompanied by a
259 | written offer, valid for at least three years and valid for as
260 | long as you offer spare parts or customer support for that product
261 | model, to give anyone who possesses the object code either (1) a
262 | copy of the Corresponding Source for all the software in the
263 | product that is covered by this License, on a durable physical
264 | medium customarily used for software interchange, for a price no
265 | more than your reasonable cost of physically performing this
266 | conveying of source, or (2) access to copy the
267 | Corresponding Source from a network server at no charge.
268 |
269 | c) Convey individual copies of the object code with a copy of the
270 | written offer to provide the Corresponding Source. This
271 | alternative is allowed only occasionally and noncommercially, and
272 | only if you received the object code with such an offer, in accord
273 | with subsection 6b.
274 |
275 | d) Convey the object code by offering access from a designated
276 | place (gratis or for a charge), and offer equivalent access to the
277 | Corresponding Source in the same way through the same place at no
278 | further charge. You need not require recipients to copy the
279 | Corresponding Source along with the object code. If the place to
280 | copy the object code is a network server, the Corresponding Source
281 | may be on a different server (operated by you or a third party)
282 | that supports equivalent copying facilities, provided you maintain
283 | clear directions next to the object code saying where to find the
284 | Corresponding Source. Regardless of what server hosts the
285 | Corresponding Source, you remain obligated to ensure that it is
286 | available for as long as needed to satisfy these requirements.
287 |
288 | e) Convey the object code using peer-to-peer transmission, provided
289 | you inform other peers where the object code and Corresponding
290 | Source of the work are being offered to the general public at no
291 | charge under subsection 6d.
292 |
293 | A separable portion of the object code, whose source code is excluded
294 | from the Corresponding Source as a System Library, need not be
295 | included in conveying the object code work.
296 |
297 | A "User Product" is either (1) a "consumer product", which means any
298 | tangible personal property which is normally used for personal, family,
299 | or household purposes, or (2) anything designed or sold for incorporation
300 | into a dwelling. In determining whether a product is a consumer product,
301 | doubtful cases shall be resolved in favor of coverage. For a particular
302 | product received by a particular user, "normally used" refers to a
303 | typical or common use of that class of product, regardless of the status
304 | of the particular user or of the way in which the particular user
305 | actually uses, or expects or is expected to use, the product. A product
306 | is a consumer product regardless of whether the product has substantial
307 | commercial, industrial or non-consumer uses, unless such uses represent
308 | the only significant mode of use of the product.
309 |
310 | "Installation Information" for a User Product means any methods,
311 | procedures, authorization keys, or other information required to install
312 | and execute modified versions of a covered work in that User Product from
313 | a modified version of its Corresponding Source. The information must
314 | suffice to ensure that the continued functioning of the modified object
315 | code is in no case prevented or interfered with solely because
316 | modification has been made.
317 |
318 | If you convey an object code work under this section in, or with, or
319 | specifically for use in, a User Product, and the conveying occurs as
320 | part of a transaction in which the right of possession and use of the
321 | User Product is transferred to the recipient in perpetuity or for a
322 | fixed term (regardless of how the transaction is characterized), the
323 | Corresponding Source conveyed under this section must be accompanied
324 | by the Installation Information. But this requirement does not apply
325 | if neither you nor any third party retains the ability to install
326 | modified object code on the User Product (for example, the work has
327 | been installed in ROM).
328 |
329 | The requirement to provide Installation Information does not include a
330 | requirement to continue to provide support service, warranty, or updates
331 | for a work that has been modified or installed by the recipient, or for
332 | the User Product in which it has been modified or installed. Access to a
333 | network may be denied when the modification itself materially and
334 | adversely affects the operation of the network or violates the rules and
335 | protocols for communication across the network.
336 |
337 | Corresponding Source conveyed, and Installation Information provided,
338 | in accord with this section must be in a format that is publicly
339 | documented (and with an implementation available to the public in
340 | source code form), and must require no special password or key for
341 | unpacking, reading or copying.
342 |
343 | 7. Additional Terms.
344 |
345 | "Additional permissions" are terms that supplement the terms of this
346 | License by making exceptions from one or more of its conditions.
347 | Additional permissions that are applicable to the entire Program shall
348 | be treated as though they were included in this License, to the extent
349 | that they are valid under applicable law. If additional permissions
350 | apply only to part of the Program, that part may be used separately
351 | under those permissions, but the entire Program remains governed by
352 | this License without regard to the additional permissions.
353 |
354 | When you convey a copy of a covered work, you may at your option
355 | remove any additional permissions from that copy, or from any part of
356 | it. (Additional permissions may be written to require their own
357 | removal in certain cases when you modify the work.) You may place
358 | additional permissions on material, added by you to a covered work,
359 | for which you have or can give appropriate copyright permission.
360 |
361 | Notwithstanding any other provision of this License, for material you
362 | add to a covered work, you may (if authorized by the copyright holders of
363 | that material) supplement the terms of this License with terms:
364 |
365 | a) Disclaiming warranty or limiting liability differently from the
366 | terms of sections 15 and 16 of this License; or
367 |
368 | b) Requiring preservation of specified reasonable legal notices or
369 | author attributions in that material or in the Appropriate Legal
370 | Notices displayed by works containing it; or
371 |
372 | c) Prohibiting misrepresentation of the origin of that material, or
373 | requiring that modified versions of such material be marked in
374 | reasonable ways as different from the original version; or
375 |
376 | d) Limiting the use for publicity purposes of names of licensors or
377 | authors of the material; or
378 |
379 | e) Declining to grant rights under trademark law for use of some
380 | trade names, trademarks, or service marks; or
381 |
382 | f) Requiring indemnification of licensors and authors of that
383 | material by anyone who conveys the material (or modified versions of
384 | it) with contractual assumptions of liability to the recipient, for
385 | any liability that these contractual assumptions directly impose on
386 | those licensors and authors.
387 |
388 | All other non-permissive additional terms are considered "further
389 | restrictions" within the meaning of section 10. If the Program as you
390 | received it, or any part of it, contains a notice stating that it is
391 | governed by this License along with a term that is a further
392 | restriction, you may remove that term. If a license document contains
393 | a further restriction but permits relicensing or conveying under this
394 | License, you may add to a covered work material governed by the terms
395 | of that license document, provided that the further restriction does
396 | not survive such relicensing or conveying.
397 |
398 | If you add terms to a covered work in accord with this section, you
399 | must place, in the relevant source files, a statement of the
400 | additional terms that apply to those files, or a notice indicating
401 | where to find the applicable terms.
402 |
403 | Additional terms, permissive or non-permissive, may be stated in the
404 | form of a separately written license, or stated as exceptions;
405 | the above requirements apply either way.
406 |
407 | 8. Termination.
408 |
409 | You may not propagate or modify a covered work except as expressly
410 | provided under this License. Any attempt otherwise to propagate or
411 | modify it is void, and will automatically terminate your rights under
412 | this License (including any patent licenses granted under the third
413 | paragraph of section 11).
414 |
415 | However, if you cease all violation of this License, then your
416 | license from a particular copyright holder is reinstated (a)
417 | provisionally, unless and until the copyright holder explicitly and
418 | finally terminates your license, and (b) permanently, if the copyright
419 | holder fails to notify you of the violation by some reasonable means
420 | prior to 60 days after the cessation.
421 |
422 | Moreover, your license from a particular copyright holder is
423 | reinstated permanently if the copyright holder notifies you of the
424 | violation by some reasonable means, this is the first time you have
425 | received notice of violation of this License (for any work) from that
426 | copyright holder, and you cure the violation prior to 30 days after
427 | your receipt of the notice.
428 |
429 | Termination of your rights under this section does not terminate the
430 | licenses of parties who have received copies or rights from you under
431 | this License. If your rights have been terminated and not permanently
432 | reinstated, you do not qualify to receive new licenses for the same
433 | material under section 10.
434 |
435 | 9. Acceptance Not Required for Having Copies.
436 |
437 | You are not required to accept this License in order to receive or
438 | run a copy of the Program. Ancillary propagation of a covered work
439 | occurring solely as a consequence of using peer-to-peer transmission
440 | to receive a copy likewise does not require acceptance. However,
441 | nothing other than this License grants you permission to propagate or
442 | modify any covered work. These actions infringe copyright if you do
443 | not accept this License. Therefore, by modifying or propagating a
444 | covered work, you indicate your acceptance of this License to do so.
445 |
446 | 10. Automatic Licensing of Downstream Recipients.
447 |
448 | Each time you convey a covered work, the recipient automatically
449 | receives a license from the original licensors, to run, modify and
450 | propagate that work, subject to this License. You are not responsible
451 | for enforcing compliance by third parties with this License.
452 |
453 | An "entity transaction" is a transaction transferring control of an
454 | organization, or substantially all assets of one, or subdividing an
455 | organization, or merging organizations. If propagation of a covered
456 | work results from an entity transaction, each party to that
457 | transaction who receives a copy of the work also receives whatever
458 | licenses to the work the party's predecessor in interest had or could
459 | give under the previous paragraph, plus a right to possession of the
460 | Corresponding Source of the work from the predecessor in interest, if
461 | the predecessor has it or can get it with reasonable efforts.
462 |
463 | You may not impose any further restrictions on the exercise of the
464 | rights granted or affirmed under this License. For example, you may
465 | not impose a license fee, royalty, or other charge for exercise of
466 | rights granted under this License, and you may not initiate litigation
467 | (including a cross-claim or counterclaim in a lawsuit) alleging that
468 | any patent claim is infringed by making, using, selling, offering for
469 | sale, or importing the Program or any portion of it.
470 |
471 | 11. Patents.
472 |
473 | A "contributor" is a copyright holder who authorizes use under this
474 | License of the Program or a work on which the Program is based. The
475 | work thus licensed is called the contributor's "contributor version".
476 |
477 | A contributor's "essential patent claims" are all patent claims
478 | owned or controlled by the contributor, whether already acquired or
479 | hereafter acquired, that would be infringed by some manner, permitted
480 | by this License, of making, using, or selling its contributor version,
481 | but do not include claims that would be infringed only as a
482 | consequence of further modification of the contributor version. For
483 | purposes of this definition, "control" includes the right to grant
484 | patent sublicenses in a manner consistent with the requirements of
485 | this License.
486 |
487 | Each contributor grants you a non-exclusive, worldwide, royalty-free
488 | patent license under the contributor's essential patent claims, to
489 | make, use, sell, offer for sale, import and otherwise run, modify and
490 | propagate the contents of its contributor version.
491 |
492 | In the following three paragraphs, a "patent license" is any express
493 | agreement or commitment, however denominated, not to enforce a patent
494 | (such as an express permission to practice a patent or covenant not to
495 | sue for patent infringement). To "grant" such a patent license to a
496 | party means to make such an agreement or commitment not to enforce a
497 | patent against the party.
498 |
499 | If you convey a covered work, knowingly relying on a patent license,
500 | and the Corresponding Source of the work is not available for anyone
501 | to copy, free of charge and under the terms of this License, through a
502 | publicly available network server or other readily accessible means,
503 | then you must either (1) cause the Corresponding Source to be so
504 | available, or (2) arrange to deprive yourself of the benefit of the
505 | patent license for this particular work, or (3) arrange, in a manner
506 | consistent with the requirements of this License, to extend the patent
507 | license to downstream recipients. "Knowingly relying" means you have
508 | actual knowledge that, but for the patent license, your conveying the
509 | covered work in a country, or your recipient's use of the covered work
510 | in a country, would infringe one or more identifiable patents in that
511 | country that you have reason to believe are valid.
512 |
513 | If, pursuant to or in connection with a single transaction or
514 | arrangement, you convey, or propagate by procuring conveyance of, a
515 | covered work, and grant a patent license to some of the parties
516 | receiving the covered work authorizing them to use, propagate, modify
517 | or convey a specific copy of the covered work, then the patent license
518 | you grant is automatically extended to all recipients of the covered
519 | work and works based on it.
520 |
521 | A patent license is "discriminatory" if it does not include within
522 | the scope of its coverage, prohibits the exercise of, or is
523 | conditioned on the non-exercise of one or more of the rights that are
524 | specifically granted under this License. You may not convey a covered
525 | work if you are a party to an arrangement with a third party that is
526 | in the business of distributing software, under which you make payment
527 | to the third party based on the extent of your activity of conveying
528 | the work, and under which the third party grants, to any of the
529 | parties who would receive the covered work from you, a discriminatory
530 | patent license (a) in connection with copies of the covered work
531 | conveyed by you (or copies made from those copies), or (b) primarily
532 | for and in connection with specific products or compilations that
533 | contain the covered work, unless you entered into that arrangement,
534 | or that patent license was granted, prior to 28 March 2007.
535 |
536 | Nothing in this License shall be construed as excluding or limiting
537 | any implied license or other defenses to infringement that may
538 | otherwise be available to you under applicable patent law.
539 |
540 | 12. No Surrender of Others' Freedom.
541 |
542 | If conditions are imposed on you (whether by court order, agreement or
543 | otherwise) that contradict the conditions of this License, they do not
544 | excuse you from the conditions of this License. If you cannot convey a
545 | covered work so as to satisfy simultaneously your obligations under this
546 | License and any other pertinent obligations, then as a consequence you may
547 | not convey it at all. For example, if you agree to terms that obligate you
548 | to collect a royalty for further conveying from those to whom you convey
549 | the Program, the only way you could satisfy both those terms and this
550 | License would be to refrain entirely from conveying the Program.
551 |
552 | 13. Use with the GNU Affero General Public License.
553 |
554 | Notwithstanding any other provision of this License, you have
555 | permission to link or combine any covered work with a work licensed
556 | under version 3 of the GNU Affero General Public License into a single
557 | combined work, and to convey the resulting work. The terms of this
558 | License will continue to apply to the part which is the covered work,
559 | but the special requirements of the GNU Affero General Public License,
560 | section 13, concerning interaction through a network will apply to the
561 | combination as such.
562 |
563 | 14. Revised Versions of this License.
564 |
565 | The Free Software Foundation may publish revised and/or new versions of
566 | the GNU General Public License from time to time. Such new versions will
567 | be similar in spirit to the present version, but may differ in detail to
568 | address new problems or concerns.
569 |
570 | Each version is given a distinguishing version number. If the
571 | Program specifies that a certain numbered version of the GNU General
572 | Public License "or any later version" applies to it, you have the
573 | option of following the terms and conditions either of that numbered
574 | version or of any later version published by the Free Software
575 | Foundation. If the Program does not specify a version number of the
576 | GNU General Public License, you may choose any version ever published
577 | by the Free Software Foundation.
578 |
579 | If the Program specifies that a proxy can decide which future
580 | versions of the GNU General Public License can be used, that proxy's
581 | public statement of acceptance of a version permanently authorizes you
582 | to choose that version for the Program.
583 |
584 | Later license versions may give you additional or different
585 | permissions. However, no additional obligations are imposed on any
586 | author or copyright holder as a result of your choosing to follow a
587 | later version.
588 |
589 | 15. Disclaimer of Warranty.
590 |
591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599 |
600 | 16. Limitation of Liability.
601 |
602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610 | SUCH DAMAGES.
611 |
612 | 17. Interpretation of Sections 15 and 16.
613 |
614 | If the disclaimer of warranty and limitation of liability provided
615 | above cannot be given local legal effect according to their terms,
616 | reviewing courts shall apply local law that most closely approximates
617 | an absolute waiver of all civil liability in connection with the
618 | Program, unless a warranty or assumption of liability accompanies a
619 | copy of the Program in return for a fee.
620 |
621 | END OF TERMS AND CONDITIONS
622 |
623 | How to Apply These Terms to Your New Programs
624 |
625 | If you develop a new program, and you want it to be of the greatest
626 | possible use to the public, the best way to achieve this is to make it
627 | free software which everyone can redistribute and change under these terms.
628 |
629 | To do so, attach the following notices to the program. It is safest
630 | to attach them to the start of each source file to most effectively
631 | state the exclusion of warranty; and each file should have at least
632 | the "copyright" line and a pointer to where the full notice is found.
633 |
634 | {one line to give the program's name and a brief idea of what it does.}
635 | Copyright (C) {year} {name of author}
636 |
637 | This program is free software: you can redistribute it and/or modify
638 | it under the terms of the GNU General Public License as published by
639 | the Free Software Foundation, either version 3 of the License, or
640 | (at your option) any later version.
641 |
642 | This program is distributed in the hope that it will be useful,
643 | but WITHOUT ANY WARRANTY; without even the implied warranty of
644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645 | GNU General Public License for more details.
646 |
647 | You should have received a copy of the GNU General Public License
648 | along with this program. If not, see <http://www.gnu.org/licenses/>.
649 |
650 | Also add information on how to contact you by electronic and paper mail.
651 |
652 | If the program does terminal interaction, make it output a short
653 | notice like this when it starts in an interactive mode:
654 |
655 | {project} Copyright (C) {year} {fullname}
656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657 | This is free software, and you are welcome to redistribute it
658 | under certain conditions; type `show c' for details.
659 |
660 | The hypothetical commands `show w' and `show c' should show the appropriate
661 | parts of the General Public License. Of course, your program's commands
662 | might be different; for a GUI interface, you would use an "about box".
663 |
664 | You should also get your employer (if you work as a programmer) or school,
665 | if any, to sign a "copyright disclaimer" for the program, if necessary.
666 | For more information on this, and how to apply and follow the GNU GPL, see
667 | <http://www.gnu.org/licenses/>.
668 |
669 | The GNU General Public License does not permit incorporating your program
670 | into proprietary programs. If your program is a subroutine library, you
671 | may consider it more useful to permit linking proprietary applications with
672 | the library. If this is what you want to do, use the GNU Lesser General
673 | Public License instead of this License. But first, please read
674 | <http://www.gnu.org/philosophy/why-not-lgpl.html>.
675 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # SourceFilterContoursMelody
2 | Melody extraction based on source-filter modelling
3 |
4 |
5 | This repository contains the code of the algorithm evaluated in MIREX 2015 and 2016 (BG).
6 | It also contains the code necessary to run the experiments in the following articles (ISMIR2016 and SMC2016):
7 |
8 | J. J. Bosch, R. M. Bittner, J. Salamon, and E. Gómez, "A Comparison of
9 | Melody Extraction Methods Based on Source-Filter Modelling", in Proc.
10 | 17th International Society for Music Information Retrieval Conference
11 | (ISMIR 2016), New York City, USA, Aug. 2016.
12 |
13 |
14 | J. Bosch, E. Gómez, "Melody extraction based on a source-filter model using pitch contour selection",
15 | in Proc. 13th Sound and Music Computing Conference (SMC 2016), Hamburg, Germany, 2016, pp. 67-74.
16 |
17 | Author:
18 | Juan J. Bosch
19 | Music Technology Group, Universitat Pompeu Fabra, Barcelona
20 | Contact: juan.bosch@upf.edu
21 |
22 | This repository also contains code by R. M. Bittner (in the contour_classification folder) and J.-L. Durrieu (source-filter model), which has been adapted to the needs of the conducted experiments.
23 |
24 | The code is written in Python (version 2.7) and has the following dependencies:
25 |
26 | Essentia 2.0.1 or newer, with python bindings (http://essentia.upf.edu/)
27 | NumPy 1.8.2 (any relatively recent version should work)
28 |
29 | For contour classification, the following packages are also used:
30 |
31 | pandas
32 | scipy
33 | seaborn
34 | sklearn
35 |
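One possible way to install the Python package dependencies into a Python 2.7 environment (package names as published on PyPI; Essentia 2.0.1+ with its Python bindings is not covered by this command and is typically built separately, see http://essentia.upf.edu/):

    pip install "numpy>=1.8.2" pandas scipy seaborn scikit-learn
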
36 | To run the algorithm evaluated in MIREX 2016 (BG1 and BG2 submissions), call it from the folder containing the source code:
37 |
38 | python MelodyExtractionFromSingleWav.py /inputaudiofolder/audio1.wav /estimations/audio1.txt --extractionMethod='BG2' --hopsize=0.01 --nb-iterations=30
39 |
40 | To run the algorithm based on energy weighting, call it from the folder containing the source code:
41 |
42 | python MelodyExtractionFromSingleWav.py /inputaudiofolder/audio1.wav /estimations/audio1.txt --extractionMethod='EWM' --hopsize=0.01 --nb-iterations=30
43 |
44 | To run the algorithm with the contour creation parameters from ISMIR2016, use --extractionMethod='CBM':
45 |
46 | python MelodyExtractionFromSingleWav.py /inputaudiofolder/audio1.wav /estimations/audio1.txt --extractionMethod='CBM' --hopsize=0.005805 --nb-iterations=30
47 |
48 | Best results are generally obtained with a hop size of 128 samples at a sampling rate of 44100 Hz (hopsize=0.0029025), but they take longer to compute:
49 |
50 | python MelodyExtractionFromSingleWav.py /inputaudiofolder/audio1.wav /estimations/audio1.txt --extractionMethod='CBM' --hopsize=0.0029025 --nb-iterations=30
51 |
52 | In all of the above commands, the first argument is the path to the input WAV file and the second argument is the output file with the estimated melody.
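The estimation file is plain text with one row per frame, written by numpy.savetxt in MelodyExtractionFromSingleWav.py: the first column is the time in seconds and the second is the estimated melody pitch in Hz. A minimal loading sketch (the file name is illustrative):

    import numpy as np
    est = np.loadtxt('/estimations/audio1.txt')
    times, pitch = est[:, 0], est[:, 1]  # seconds, Hz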
53 |
54 | It is also possible to run the extraction using only Harmonic Summation instead of the source-filter model.
55 | This option is similar to the MELODIA plugin, but uses the open-source implementation in Essentia.
56 | It also lets you save contours like those used in the Bittner 2015 ISMIR article (for later use within a pitch contour classification method).
57 | To do so, use the option:
58 |
59 | --extractionMethod='SAL'
60 |
61 | To run contour classification experiments, you should first compute and save the contours, and adapt the paths accordingly.
62 | *Make sure to use the same hopsize for contour creation and contour classification.*
63 |
64 | python run_contour_training_melody_extraction.py
65 |
66 | python run_glass_ceiling_experiment.py
--------------------------------------------------------------------------------
/src/HarmonicSummationSF.py:
--------------------------------------------------------------------------------
1 | __author__ = 'jjb'
2 |
3 | from essentia.standard import *
4 | from essentia import *
5 |
6 |
7 | def calculateSF(filename, hopsizeFrames):
8 | """ Computes the salience function based on harmonic summation
9 | Parameters
10 | ----------
11 | filename: Name of the file
12 | hopsizeFrames: hop size in samples
13 |
14 | Returns
15 | -------
16 | times: list of times of each of the frames of the salience function
17 | salience: Harmonic summation salience function
18 | """
19 | from numpy import arange
20 | hopSize = int(hopsizeFrames)
21 | frameSize = 2048
22 | sampleRate = 44100
23 |
24 | # Setting the algorithms
25 | run_windowing = Windowing(type='hann', zeroPadding=3 * frameSize)
26 | run_spectrum = Spectrum(size=frameSize * 4)
27 | run_spectral_peaks = SpectralPeaks(minFrequency=1,
28 | maxFrequency=20000,
29 | maxPeaks=100,
30 | sampleRate=sampleRate,
31 | magnitudeThreshold=0,
32 | orderBy="magnitude")
33 | run_pitch_salience_function = PitchSalienceFunction()
34 |
35 | pool = Pool()
36 |
37 | # Now we are ready to start processing.
38 | # 1. Load audio and pass it through the equal-loudness filter
39 | audio = MonoLoader(filename=filename)()
40 | audio = EqualLoudness()(audio)
41 |
42 | # 2. Cut audio into frames and compute for each frame:
43 | # spectrum -> spectral peaks -> pitch salience function
44 | # With startFromZero = False, the first frame is centered at time = 0, instead of at half the frame size
45 | for frame in FrameGenerator(audio, frameSize=frameSize, hopSize=hopSize, startFromZero=False):
46 | frame = run_windowing(frame)
47 | spectrum = run_spectrum(frame)
48 | peak_frequencies, peak_magnitudes = run_spectral_peaks(spectrum)
49 | salience = run_pitch_salience_function(peak_frequencies, peak_magnitudes)
50 | pool.add('allframes_salience', salience)
51 |
52 | salience = pool['allframes_salience']
53 | times = arange(len(pool['allframes_salience'])) * float(hopSize) / sampleRate
54 |
55 | return times, salience
56 |
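A minimal usage sketch for this module (file name is illustrative; the hop is given in samples, so 128 samples is roughly 2.9 ms at the fixed 44100 Hz sample rate):

    from HarmonicSummationSF import calculateSF

    times, salience = calculateSF('audio1.wav', 128)
    # one timestamp per frame; salience is a (frames x bins) matrix
    print(salience.shape)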
--------------------------------------------------------------------------------
/src/MelodyExtractionFromSingleWav.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | __author__ = "Juan Jose Bosch"
3 | __email__ = "juan.bosch@upf.edu"
4 |
5 | import sys
6 |
7 | from numpy import savetxt, max, column_stack, tile
8 | import utils
9 | import SourceFilterModelSF
10 | import combineSaliences
11 | import melodyExtractionFromSalienceFunction
12 | from HarmonicSummationSF import calculateSF
13 | from os.path import join, dirname, basename
14 | import parsing
15 |
16 | def process(args):
17 | # options
18 | mu = 1
19 | G = 0
20 | doConvolution = True
21 | wavfile = args[0]
22 |
23 | (pargs,options) = parsing.parseOptions(args)
24 |
25 | # -------------------------
26 |
27 | if options.extractionMethod == "BG1":
28 | # Options MIREX 2016: BG1
29 | options.pitchContinuity = 27.56
30 | options.peakDistributionThreshold = 1.3
31 | options.peakFrameThreshold = 0.7
32 | options.timeContinuity = 100
33 | options.minDuration = 100
34 | options.voicingTolerance = 1
35 | options.useVibrato = False
36 | options.decodingMethod = "PCS"
37 | options.combmode = 13
38 |
39 | if options.extractionMethod == "BG2":
40 | # Options MIREX 2016: BG2
41 | options.pitchContinuity = 27.56
42 | options.peakDistributionThreshold = 0.9
43 | options.peakFrameThreshold = 0.9
44 | options.timeContinuity = 100
45 | options.minDuration = 100
46 | options.voicingTolerance = 0
47 | options.useVibrato = False
48 | options.decodingMethod = "PCS"
49 | options.combmode = 13
50 |
51 | if options.extractionMethod == "EWM":
52 | # Options for the energy weighting method (EWM), used in SMC2016
53 | options.combmode = 14
54 | options.decodingMethod = "PCS"
55 |
56 | if options.extractionMethod == "SAL":
57 | # SAL: contour creation based on Harmonic Summation (HS) and pitch contour selection (PCS), as in MELODIA,
58 | # but computed with Essentia instead of the modified MELODIA VAMP plugin; can also be used to generate contours as in Bittner 2015 (ISMIR)
59 | options.combmode = 0
60 | options.pitchContinuity = 27.56
61 | options.peakDistributionThreshold = 0.9
62 | options.peakFrameThreshold = 0.9
63 | options.timeContinuity = 100
64 | options.minDuration = 100
65 | options.decodingMethod = "PCS"
66 | options.useVibrato = True
67 |
68 | if options.extractionMethod == "CBM":
69 | options.pitchContinuity = 27.56
70 | options.peakDistributionThreshold = 0.9
71 | options.peakFrameThreshold = 0.9
72 | options.timeContinuity = 50
73 | options.minDuration = 100
74 | options.voicingTolerance = 0.2
75 | options.useVibrato = False
76 | options.decodingMethod = "PCS"
77 | options.combmode = 13
78 |
79 | combmode = options.combmode
80 |
81 | # Compute salience functions --------------
82 |
83 | # Compute HF0 (SIMM with source-filter model)
84 | if options.combmode > 0:
85 | timesHF0, HF0, options = SourceFilterModelSF.main(pargs, options)
86 | # In order to have the same structure as the Harmonic Summation Salience Function
87 | HF0 = HF0[1:, :]
88 |
89 | if combmode != 4 and combmode != 5 and combmode != 14:
90 | # Computing Harmonic Summation salience function
91 | hopSizeinSamplesHSSF = int(min(options.hopsizeInSamples, 0.01 * options.Fs))
92 | timesHSSF, HSSF = calculateSF(wavfile, hopSizeinSamplesHSSF)
93 | else:
94 | print "Harmonic Summation Salience function not used"
95 |
96 | # Combination mode used in MIREX, ISMIR2016, SMC2016
97 | if combmode == 0:
98 | combSal = HSSF.T
99 | times = timesHSSF
100 |
101 | if combmode == 13:
102 | times, combSal = combineSaliences.combine3MIREX(timesHF0, HF0, timesHSSF, HSSF, G, mu, doConvolution)
103 |
104 | # Salience function by Durrieu, multiplying every frame by the estimated energy of the melody, used in SMC2016
105 | if combmode == 14:
106 | fileEnergy = options.vit_pitch_output_file+'.egy'
107 | #fileEnergy = join(dirname(options.sal_output_file),'ME-Viterbi/'+basename(options.sal_output_file)[:-4]+'pitch.egy')
108 | timesEnergy, energy = utils.loadMEFile(fileEnergy)
109 | times, combSal = combineSaliences.combine14(timesHF0, HF0, timesEnergy, tile(energy, (HF0.shape[0], 1)).T, G, mu, doConvolution)
110 |
111 | combSal = combSal / max(combSal)
112 |
113 | print("Extracting melody from salience function")
114 | times, pitch = melodyExtractionFromSalienceFunction.MEFromSF(times, combSal, options)
115 |
116 | # Save output file
117 | if options.decodingMethod != "PCC":
118 | savetxt(options.pitch_output_file, column_stack((times.T, pitch.T)), fmt='%-7.5f', delimiter="\t")
119 | print("Output file written")
120 |
121 |
122 | def main(args):
123 | process(args)
124 |
125 |
126 | if __name__ == '__main__':
127 | import time
128 |
129 | start_time = time.time()
130 |
131 | main(sys.argv[1:])
132 | print("Processing time: --- %s seconds ---" % (time.time() - start_time))
133 |
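A hedged sketch of invoking the same entry point from Python instead of the shell, assuming parsing.parseOptions accepts the flags shown in the README (paths are placeholders):

    import MelodyExtractionFromSingleWav as mex

    mex.main(['/inputaudiofolder/audio1.wav', '/estimations/audio1.txt',
              '--extractionMethod=BG2', '--hopsize=0.01', '--nb-iterations=30'])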
--------------------------------------------------------------------------------
/src/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/juanjobosch/SourceFilterContoursMelody/6f88e709c470f1423dc429198cb3c261a772c66c/src/__init__.py
--------------------------------------------------------------------------------
/src/combineSaliences.py:
--------------------------------------------------------------------------------
1 |
2 | import numpy as np
3 |
4 |
5 | def combine2(timesHF0, HF0init, timesHSSF, HSSF, G=0, mu=1, doConvolution=True, HF0norm='max'):
6 | """ Combines HF0 (based on SIMM) and HS (Harmonic summation)
7 | Parameters
8 | ----------
9 | timesHF0: timestamps of each frame in HF0 matrix
10 | HF0init: Nbins*Nframes
11 | timesHSSF: timestamps of each frame in HSSF matrix
12 | HSSF: Nframes2*Nbins2
13 | Saliences are assumed to have same hopsize and number of bins per semitone
14 | G: weighting between HS alone (G=1) and the element-wise product HS*HF0 (G=0); set to 0 to have no effect
15 | mu: HF0 = HF0 ** (1. / mu) (Set to 1 to have no effect)
16 | doConvolution: True or False, to perform the convolution with a Gaussian
17 |
18 | Returns
19 | -------
20 | times: Timestamps of the frames of the combined salience function
21 | sal: Combined Salience function
22 | """
23 |
24 | # Globally normalise HS
25 |
26 | plotting = False
27 | HSSF = HSSF.T
28 | HSSF = HSSF / np.max(HSSF)
29 | if plotting:
30 | try:
31 | import pylab as plt
32 | import matplotlib.gridspec as gridspec
33 |
34 | f, axarr = plt.subplots(2, 2)
35 | f.subplots_adjust(wspace=0.00001, hspace=0.00001)
36 | f.set_size_inches(7, 7)
37 |
38 | axarr[0, 0].set_xlim(2800, 3600)
39 | axarr[0, 0].set_xlim(2800, 3600)
40 | axarr[0, 1].set_xlim(2800, 3600)
41 | axarr[0, 1].set_xlim(2800, 3600)
42 | axarr[1, 0].set_xlim(2800, 3600)
43 | axarr[1, 0].set_xlim(2800, 3600)
44 | axarr[1, 1].set_xlim(2800, 3600)
45 | axarr[1, 1].set_xlim(2800, 3600)
46 |
47 | # plt.setp(axarr,1000
48 | # locs, labels = plt.xticks(1000*256./44100)
49 | # print labels
50 | # labels = labels
51 |
52 | # normalise by the max
53 |
54 | axarr[0, 0].imshow(np.log10(HF0init / (HF0init.max()) + 1e-15), origin='lower')
55 | axarr[0, 0].set_title('(a) (log)HF0 init')
56 |
57 | axarr[0, 1].imshow(HSSF, origin='lower')
58 | axarr[0, 1].set_title('(b) HS')
59 | except:
60 | print "Error in plotting"
61 |
62 | # Frame-wise normalisation dividing by the max on each frame
63 | if HF0norm == 'max':
64 | HF0init = (HF0init / (np.outer(np.ones(HF0init.shape[0]), np.max(HF0init, 0)) + 1e-15))
65 |
66 | # Frame-wise normalisation dividing by the sum on each frame
67 | if HF0norm == 'sum':
68 | HF0init = (HF0init / (np.outer(np.ones(HF0init.shape[0]), np.sum(HF0init, 0)) + 1e-15))
69 |
70 |
71 | # Gaussian filtering
72 | if doConvolution:
73 | sigma = 2
74 | Gausssize = 5
75 | x = np.linspace(-Gausssize / 2., Gausssize / 2., Gausssize)
76 | gaussFilter = np.exp(-x ** 2 / (2 * sigma ** 2))
77 | gaussFilter = gaussFilter / np.sum(gaussFilter) # normalize
78 | HF0 = np.zeros_like(HF0init)
79 | for i in range(HF0init.shape[1]):
80 | HF0[:, i] = np.convolve(HF0init[:, i], gaussFilter, mode='same')
81 | else:
82 | HF0 = HF0init
83 |
84 | # Global normalisation
85 | HF0 = HF0 / np.max(HF0)
86 |
87 | # Scaling
88 | # mu=1 (no scaling) in MIREX (2015,2016), SMC2016 and ISMIR2016
89 | HF0 = HF0 ** (1. / mu)
90 |
91 | hopSize = np.mean(np.diff(timesHF0))
92 |
93 | # Combining salience functions
94 | N1Fr = np.argmin(np.abs(timesHF0 - timesHSSF[0]))
95 |
96 | Nf0Mel = HSSF.shape[0]
97 | NfrMel = HSSF.shape[1]
98 |
99 | Nf0Dur = HF0.shape[0]
100 | NfrDur = HF0.shape[1]
101 |
102 | NF0 = max(Nf0Dur, Nf0Mel)
103 | NFr = max(NfrMel, NfrDur) + N1Fr
104 | times = timesHF0[0] + np.arange(NFr) * hopSize
105 |
106 | # Setting shape of the combination
107 | salcomb = np.zeros([NF0, NFr])
108 | salcomb[np.ix_(np.arange(Nf0Dur), (np.arange(NfrDur)))] = (1 - G) * HF0
109 |
110 | # hadamard product
111 | # G is = 0 in MIREX (2015,2016), SMC2016 and ISMIR2016
112 | salcomb[np.ix_(np.arange(Nf0Mel), np.arange(N1Fr, NfrMel + N1Fr))] = HSSF * (
113 | G + salcomb[np.ix_(np.arange(Nf0Mel), np.arange(N1Fr, NfrMel + N1Fr))])
114 |
115 | if plotting:
116 | try:
117 | axarr[1, 0].imshow(HF0, origin='lower')
118 | axarr[1, 0].set_title('(c) HF0-GF-Fn')
119 | axarr[1, 1].imshow(salcomb, origin='lower')
120 | axarr[1, 1].set_title('(d) Combination')
121 |
122 | plt.setp([a.get_xticklabels() for a in axarr[0, :]], visible=False)
123 | axarr[1, 1].set_xlabel('Frame number')
124 | axarr[1, 0].set_xlabel('Frame number')
125 | # axarr[0, 1].set_xlabel('Frame number')
126 | # axarr[0, 0].set_xlabel('Frame number')
127 | plt.setp([a.get_yticklabels() for a in axarr[:, 1]], visible=False)
128 | axarr[0, 0].set_ylabel('bins')
129 | axarr[1, 0].set_ylabel('bins')
130 | # axarr[0, 1].set_ylabel('bins')
131 | # axarr[1, 1].set_ylabel('bins')
132 | axarr[0, 0].tick_params(labelsize=10)
133 | axarr[0, 1].tick_params(labelsize=10)
134 | axarr[1, 0].tick_params(labelsize=10)
135 | axarr[1, 1].tick_params(labelsize=10)
136 | plt.tight_layout()
137 | # plt.imshow(HF0init,origin='lower')
138 | plt.savefig('saliences.pdf', bbox_inches='tight')
139 | plt.show()
140 | except:
141 | print("Error in plotting")
142 |
143 | return times, salcomb / salcomb.max()
144 |
145 |
146 | def simpleResize(timesHF0, HF0init, timesHSSF, HSSF):
147 | """ Simple resizing of HF0 (based on SIMM) and HS (Harmonic summation) if necessary
148 | Could also be performed with scipy.interpolate.
149 | Parameters
150 | ----------
151 | timesHF0: timestamps of each frame in HF0 matrix
152 | HF0init: Nbins*Nframes
153 | timesHSSF: timestamps of each frame in HSSF matrix
154 | HSSF: Nframes2*Nbins2
155 |
156 | Returns
157 | -------
158 | timesHF0: timestamps of each frame in HF0 matrix
159 | HF0init: resized HF0
160 | timesHSSF: timestamps of each frame in HSSF matrix
161 | HSSF: resized HS """
162 |
163 | ratio = 1.0 * HSSF.shape[1] / HF0init.shape[0]
164 | n = round(ratio)
165 | if n > 1 and abs(n - ratio) < 0.01:
166 | HF0init = np.repeat(HF0init, n, axis=0)
167 | else:
168 | ratio = 1.0 * HF0init.shape[0] / HSSF.shape[1]
169 | n = round(ratio)
170 | if n > 1 and abs(n - ratio) < 0.01:
171 | HSSF = np.repeat(HSSF, n, axis=0)
172 | ratio = 1.0 * HSSF.shape[0] / HF0init.shape[1]
173 | n = round(ratio)
174 | if n > 1 and abs(n - ratio) < 0.01:
175 | HF0init = np.repeat(HF0init, n, axis=1)
176 | hop = np.diff(timesHF0)[0] / 2.
177 | timesHF0 = np.arange(timesHF0[0], timesHF0[-1] + (n - 1) * hop, hop)
178 | else:
179 | ratio = 1.0 * HF0init.shape[1] / HSSF.shape[0]
180 | n = round(ratio)
181 | if n > 1 and abs(n - ratio) < 0.01:
182 | HSSF = np.repeat(HSSF, n, axis=1)
183 | hop = np.diff(timesHSSF)[0] / 2.
184 | timesHSSF = np.arange(timesHSSF[0], timesHSSF[-1] + (n - 1) * hop, hop)
185 | return timesHF0, HF0init, timesHSSF, HSSF
186 |
187 | def combine14(timesHF0, HF0init, timesHSSF, HSSF, G, mu, doConvolution):
188 |
189 | timesHF0, HF0init, timesHSSF, HSSF = simpleResize(timesHF0, HF0init, timesHSSF, HSSF)
190 |
191 | # if (HSSF.T.shape != HF0init.shape):
192 | # HF0init = interpolateSaliences(HSSF.T,HF0init,timesHSSF,timesHF0)
193 |
194 | times, sal = combine2(timesHF0, HF0init, timesHSSF, HSSF, G, mu, doConvolution,HF0norm='sum')
195 | return times, sal
196 |
197 | def combine3MIREX(timesHF0, HF0init, timesHSSF, HSSF, G, mu, doConvolution):
198 | """ Combines HF0 and HS, used in MIREX (2015,2016), SMC2016 and ISMIR2016
199 | Parameters
200 | ----------
201 | timesHF0: timestamps of each frame in HF0 matrix
202 | HF0init: Nbins*Nframes
203 | timesHSSF: timestamps of each frame in HSSF matrix
204 | HSSF: Nframes2*Nbins2
205 | Ideally HF0 and HSSF should have the same number of bins;
206 | simple resizing of the matrices is applied first if necessary
207 | G, mu: combination parameters, see combine2; G = 0 in the MIREX (2015,2016), SMC2016 and ISMIR2016 configurations
208 | doConvolution: True or False, to perform the convolution with a Gaussian
209 |
210 | Returns
211 | -------
212 | times: Timestamps of the frames of the combined salience function
213 | sal: Combined Salience function
214 | """
215 |
216 | #
217 | tHF0, HF0in, tHSSF, HSSFin = simpleResize(timesHF0, HF0init, timesHSSF, HSSF)
218 |
219 | # Combine both matrices
220 | times, sal = combine2(tHF0, HF0in, tHSSF, HSSFin, G, mu, doConvolution)
221 | return times, sal
222 |
223 |
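224 | if __name__ == '__main__':
225 |     # Illustrative sketch (not part of the original pipeline): how simpleResize
226 |     # matches an HF0 matrix (Nbins x Nframes) against an HS matrix
227 |     # (Nframes x Nbins) whose bin resolution is an integer multiple.
228 |     # The shapes below are made up for the example.
229 |     hop = 0.01
230 |     timesHF0 = np.arange(100) * hop
231 |     timesHSSF = np.arange(100) * hop
232 |     HF0 = np.random.rand(300, 100)    # 300 bins, 100 frames
233 |     HSSF = np.random.rand(100, 600)   # 100 frames, 600 bins
234 |     tHF0, HF0r, tHSSF, HSSFr = simpleResize(timesHF0, HF0, timesHSSF, HSSF)
235 |     print HF0r.shape                  # (600, 100): HF0 bins repeated to match HSSF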
--------------------------------------------------------------------------------
/src/contourExtraction.py:
--------------------------------------------------------------------------------
1 | import contour_classification.contour_utils as cu
2 |
3 |
4 | def compute_contour_data(contours_bins, contours_saliences, contours_start_times, stepNotes, minF0, hopsize,
5 | normalize=True, extra_features=None):
6 | from pandas import DataFrame, concat
7 | from numpy import mean, std, array, Inf, zeros
8 |     """ Create contour pandas DataFrame using contour information previously extracted with Essentia.
9 | Initializes DataFrame to have all future columns.
10 | Parameters
11 | ----------
12 | contours_bins: set of bins of the extracted contours
13 | contours_saliences: set of saliences of the extracted contours
14 | contours_start_times: set of starting times of the extracted contours
15 | stepNotes: number of bins per semitone
16 | minF0: minimum F0 in the salience functions
17 | hopsize: Hop size
18 | normalize: [True, False] to normalise the features, as performed in Bittner2015
19 | extra_features: Ncontours * N_features
20 | set of extra features apart from the ones used by Bittner2015 (pitch, duration, vibrato, salience)
21 |
22 | Returns
23 | -------
24 | contour_data : DataFrame
25 | Pandas data frame with all contour data, to be used for contour classification
26 | """
27 |
28 | contours_bins = array(contours_bins)
29 | contours_saliences = array(contours_saliences)
30 | contours_start_times = array(contours_start_times)
31 | contour_data = DataFrame
32 | headers = []
33 |
34 | # Set of headers, containing the first 12 features [0:11] and the first time for each of the contours
35 | headers[0:12] = ['onset', 'offset', 'duration', 'pitch mean', 'pitch std',
36 | 'salience mean', 'salience std', 'salience tot',
37 | 'vibrato', 'vib rate', 'vib extent', 'vib coverage', 'first_time']
38 |
39 | # Number of contours
40 | Ncont = len(contours_bins)
41 |
42 | # Find length of longest contour
43 | maxLen = 0
44 | for i in range(Ncont):
45 | maxLen = max(maxLen, len(contours_bins[i]))
46 |
47 | # Header "first_time" can be used to find where the contour features end,
48 | # and when the contour info starts (time, bin, salience)
49 |
50 | # Just giving the extra headers some name
51 | headers[13:] = (array(range(maxLen * 3))).tolist()
52 |
53 | contour_data.num_end_cols = 4
54 |
55 | # Initialising dataset, following the format from the hacked VAMP MELODIA plugin from J. Salamon
56 | contour_data = DataFrame(Inf * zeros([Ncont, len(headers)]), columns=headers)
57 |
58 | for i in range(Ncont):
59 | #print i
60 | # Giving values for each row of the dataframe
61 | L = len(contours_saliences[i])
62 |         # NOTE: the reference is hard-coded to 55 Hz here; minF0 could be used instead
63 | pitches = 55 * 2 ** ((array(contours_bins[i]) / (12. * stepNotes)))
64 | contour_data.set_value(i, 'onset', contours_start_times[i])
65 | contour_data.set_value(i, 'offset', array(contours_start_times[i]) + len(pitches) * hopsize)
66 | contour_data.set_value(i, 'duration', len(pitches) * hopsize)
67 | contour_data.set_value(i, 'pitch mean', mean(pitches))
68 | contour_data.set_value(i, 'pitch std', std(pitches))
69 | contour_data.set_value(i, 'salience mean', mean(array(contours_saliences[i])))
70 | contour_data.set_value(i, 'salience std', std(array(contours_saliences[i])))
71 | contour_data.set_value(i, 'salience tot', sum(array(contours_saliences[i])))
72 |
73 | # In this case, we do not compute vibrato features, so we set them to 0.
74 | # This could be updated in order to use also vibrato features from contours extracted with Essentia
75 | contour_data.set_value(i, 'vibrato', 0)
76 | contour_data.set_value(i, 'vib rate', 0)
77 | contour_data.set_value(i, 'vib extent', 0)
78 | contour_data.set_value(i, 'vib coverage', 0)
79 |
80 | # After setting the features, we now give each contour the frame by frame information, e.g for frame0 (fr0), frame 1 (fr1)...
81 | # time_fr0, pitch_fr0, salience_fr0, time_fr1, pitch_fr1, salience_fr1, time_fr2, pitch_fr2, salience_fr2, ...
82 |
83 | contour_data.iloc[i, 12:12 + L * 3:3] = contours_start_times[i] + hopsize * array(range(L))
84 | contour_data.iloc[i, 13:13 + L * 3:3] = pitches
85 | contour_data.iloc[i, 14:14 + L * 3:3] = array(contours_saliences[i])
86 |
87 | # If extra features are used, they are set before the first_time
88 | if extra_features is not None:
89 | dfFeatures = concat([contour_data.ix[:, 0:12], extra_features], axis=1)
90 | contour_data = concat([dfFeatures, contour_data.ix[:, 12:]], axis=1)
91 |
92 | # All classification labels are initialised (will be updated while performing contour classification)
93 | contour_data['overlap'] = -1
94 | contour_data['labels'] = -1
95 | contour_data['melodiness'] = ""
96 | contour_data['mel prob'] = -1
97 |
98 | # Normalising features
99 | if normalize:
100 | contour_data = cu.normalize_features(contour_data)
101 |
102 | print "Contour dataframe created"
103 |
104 | return contour_data
105 |
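106 | if __name__ == '__main__':
107 |     # Illustrative sketch (not part of the original module): the bin-to-frequency
108 |     # mapping used in compute_contour_data, with a hypothetical resolution of
109 |     # stepNotes = 5 bins per semitone and the 55 Hz reference used above.
110 |     from numpy import array
111 |     stepNotes = 5
112 |     bins = array([0, 30, 60])                        # unison, tritone, octave above 55 Hz
113 |     pitches = 55 * 2 ** (bins / (12. * stepNotes))   # same formula as in the loop above
114 |     print pitches                                    # [ 55.  77.78  110. ] (approximately)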
--------------------------------------------------------------------------------
/src/contour_classification/ShuffleLabelsOut.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | '''Generate train/test splits by random shuffling of labels'''
3 | """Taken from
4 | https://github.com/bmcfee/ml_scraps/blob/master/ShuffleLabelsOut.py"""
5 |
6 | import numpy as np
7 | from sklearn.cross_validation import ShuffleSplit
8 |
9 |
10 | class ShuffleLabelsOut(ShuffleSplit):
11 |     '''Shuffle-Labels-Out cross-validation iterator
12 |
13 | Parameters
14 | ----------
15 | y : array, [n_samples]
16 | Labels of samples
17 |
18 | n_iter : int (default 5)
19 | Number of shuffles to generate
20 |
21 | test_size : float (default 0.2), int, or None
22 |
23 | train_size : float, int, or None (default is None)
24 |
25 | random_state : int or RandomState
26 | '''
27 |
28 | def __init__(self, y, n_iter=5, test_size=0.2, train_size=None,
29 | random_state=None):
30 |
31 | classes, y_indices = np.unique(y, return_inverse=True)
32 |
33 | super(ShuffleLabelsOut, self).__init__(
34 | len(classes), n_iter=n_iter, test_size=test_size, train_size=train_size,
35 | random_state=random_state)
36 |
37 | self.classes = classes
38 | self.y_indices = y_indices
39 |
40 | def __repr__(self):
41 | return ('%s(labels=%s, n_iter=%d, test_size=%s, '
42 | 'random_state=%s)' % (
43 | self.__class__.__name__,
44 | self.y_indices,
45 | self.n_iter,
46 | str(self.test_size),
47 | self.random_state,
48 | ))
49 |
50 | def __len__(self):
51 | return self.n_iter
52 |
53 | def _iter_indices(self):
54 |
55 | for y_train, y_test in super(ShuffleLabelsOut, self)._iter_indices():
56 | # these are the indices of classes in the partition
57 | # invert them into data indices
58 |
59 | train = np.flatnonzero(np.in1d(self.y_indices, y_train))
60 | test = np.flatnonzero(np.in1d(self.y_indices, y_test))
61 |
62 | yield train, test
63 |
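64 | if __name__ == '__main__':
65 |     # Illustrative sketch: six samples belonging to three groups. Each split
66 |     # keeps whole groups together, so no group appears in both train and test.
67 |     # The group names are made up for the example.
68 |     y = np.array(['grieg', 'grieg', 'ravel', 'ravel', 'holst', 'holst'])
69 |     splitter = ShuffleLabelsOut(y, n_iter=2, random_state=0)
70 |     for train, test in splitter:
71 |         print train, test    # group-disjoint index arrays into y
72 |         assert len(np.intersect1d(y[train], y[test])) == 0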
--------------------------------------------------------------------------------
/src/contour_classification/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/juanjobosch/SourceFilterContoursMelody/6f88e709c470f1423dc429198cb3c261a772c66c/src/contour_classification/__init__.py
--------------------------------------------------------------------------------
/src/contour_classification/clf_utils.py:
--------------------------------------------------------------------------------
1 | """ Utilities for classifier experiments """
2 | from sklearn.ensemble import RandomForestClassifier as RFC
3 | from sklearn import cross_validation
4 | from sklearn import metrics
5 | import numpy as np
6 | import matplotlib.pyplot as plt
7 |
8 |
9 | def cross_val_sweep(x_train, y_train, max_search=100,
10 | step=5, plot=True):
11 | """ Choose best parameter by performing cross fold validation
12 |
13 | Parameters
14 | ----------
15 | x_train : np.array [n_samples, n_features]
16 | Training features.
17 | y_train : np.array [n_samples]
18 | Training labels
19 | max_search : int
20 | Maximum depth value to sweep
21 | step : int
22 | Step size in parameter sweep
23 | plot : bool
24 | If true, plot error bars and cv accuracy
25 |
26 | Returns
27 | -------
28 | best_depth : int
29 | Optimal max_depth parameter
30 |     max_cv_accuracy : float
31 |         Best cross-validation accuracy; a (depth, accuracy, std_dev) plot_data tuple is also returned.
32 |     """
33 | scores = []
34 | for max_depth in np.arange(5, max_search, step):
35 | print "training with max_depth=%s" % max_depth
36 | clf = RFC(n_estimators=100, max_depth=max_depth, n_jobs=-1,
37 | class_weight='auto', max_features=None)
38 | all_scores = cross_validation.cross_val_score(clf, x_train, y_train,
39 | cv=5)
40 | scores.append([max_depth, np.mean(all_scores), np.std(all_scores)])
41 |
42 | depth = [score[0] for score in scores]
43 | accuracy = [score[1] for score in scores]
44 | std_dev = [score[2] for score in scores]
45 |
46 | if plot:
47 | plt.errorbar(depth, accuracy, std_dev, linestyle='-', marker='o')
48 | plt.title('Mean cross validation accuracy')
49 | plt.xlabel('max depth')
50 | plt.ylabel('mean accuracy')
51 | plt.show()
52 |
53 | best_depth = depth[np.argmax(accuracy)]
54 | max_cv_accuracy = np.max(accuracy)
55 | plot_data = (depth, accuracy, std_dev)
56 |
57 | return best_depth, max_cv_accuracy, plot_data
58 |
59 |
60 | def train_clf(x_train, y_train, best_depth):
61 | """ Train classifier.
62 |
63 | Parameters
64 | ----------
65 | x_train : np.array [n_samples, n_features]
66 | Training features.
67 | y_train : np.array [n_samples]
68 | Training labels
69 | best_depth : int
70 | Optimal max_depth parameter
71 |
72 | Returns
73 | -------
74 | clf : classifier
75 | Trained scikit-learn classifier
76 | """
77 | clf = RFC(n_estimators=100, max_depth=best_depth, n_jobs=-1,
78 | class_weight='auto', max_features=None)
79 | clf = clf.fit(x_train, y_train)
80 | return clf
81 |
82 |
83 | def clf_predictions(x_train, x_valid, x_test, clf):
84 | """ Compute probability predictions for all training and test examples.
85 |
86 | Parameters
87 | ----------
88 |     x_train : np.array [n_samples, n_features]
89 |         Training features.
90 |     x_valid : np.array [n_samples, n_features]
91 |         Validation features.
92 |     x_test : np.array [n_samples, n_features]
93 |         Testing features.
94 |     clf : classifier
95 |         Trained scikit-learn classifier
96 | 
97 |     Returns
98 |     -------
99 |     p_train, p_valid, p_test : np.array [n_samples]
100 |         predicted probabilities for the training, validation and testing sets
101 |     """
102 | p_train = clf.predict_proba(x_train)[:, 1]
103 | p_valid = clf.predict_proba(x_valid)[:, 1]
104 | p_test = clf.predict_proba(x_test)[:, 1]
105 | return p_train, p_valid, p_test
106 |
107 |
108 | def clf_metrics(p_train, p_test, y_train, y_test):
109 | """ Compute metrics on classifier predictions
110 |
111 | Parameters
112 | ----------
113 | p_train : np.array [n_samples]
114 | predicted probabilities for training set
115 | p_test : np.array [n_samples]
116 | predicted probabilities for testing set
117 | y_train : np.array [n_samples]
118 | Training labels.
119 | y_test : np.array [n_samples]
120 | Testing labels.
121 |
122 | Returns
123 | -------
124 | clf_scores : dict
125 | classifier scores for training set
126 | """
127 | y_pred_train = 1*(p_train >= 0.5)
128 | y_pred_test = 1*(p_test >= 0.5)
129 |
130 | train_scores = {}
131 | test_scores = {}
132 |
133 | train_scores['accuracy'] = metrics.accuracy_score(y_train, y_pred_train)
134 | test_scores['accuracy'] = metrics.accuracy_score(y_test, y_pred_test)
135 |
136 | train_scores['mcc'] = metrics.matthews_corrcoef(y_train, y_pred_train)
137 | test_scores['mcc'] = metrics.matthews_corrcoef(y_test, y_pred_test)
138 |
139 | (p, r, f, s) = metrics.precision_recall_fscore_support(y_train,
140 | y_pred_train)
141 | train_scores['precision'] = p
142 | train_scores['recall'] = r
143 | train_scores['f1'] = f
144 | train_scores['support'] = s
145 |
146 | (p, r, f, s) = metrics.precision_recall_fscore_support(y_test,
147 | y_pred_test)
148 | test_scores['precision'] = p
149 | test_scores['recall'] = r
150 | test_scores['f1'] = f
151 | test_scores['support'] = s
152 |
153 | train_scores['confusion matrix'] = \
154 | metrics.confusion_matrix(y_train, y_pred_train, labels=[0, 1])
155 | test_scores['confusion matrix'] = \
156 | metrics.confusion_matrix(y_test, y_pred_test, labels=[0, 1])
157 |
158 | train_scores['auc score'] = \
159 | metrics.roc_auc_score(y_train, p_train + 1, average='weighted')
160 | test_scores['auc score'] = \
161 | metrics.roc_auc_score(y_test, p_test + 1, average='weighted')
162 |
163 | clf_scores = {'train': train_scores, 'test': test_scores}
164 |
165 | return clf_scores
166 |
167 |
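168 | if __name__ == '__main__':
169 |     # Illustrative sketch (assumes the old scikit-learn API used above, where
170 |     # RandomForestClassifier still accepts class_weight='auto'): train on
171 |     # synthetic data and score held-out predictions. Data and depth are made up.
172 |     rng = np.random.RandomState(0)
173 |     x = rng.rand(300, 6)
174 |     y = (x[:, 0] > 0.5).astype(int)
175 |     x_train, y_train = x[:200], y[:200]
176 |     x_valid, x_test = x[200:250], x[250:]
177 |     y_test = y[250:]
178 |     clf = train_clf(x_train, y_train, best_depth=5)
179 |     p_train, p_valid, p_test = clf_predictions(x_train, x_valid, x_test, clf)
180 |     scores = clf_metrics(p_train, p_test, y_train, y_test)
181 |     print scores['test']['accuracy'], scores['test']['auc score']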
--------------------------------------------------------------------------------
/src/contour_classification/contour_utils.py:
--------------------------------------------------------------------------------
1 | """ Utility functions for processing contours """
2 |
3 | import pandas as pd
4 | import numpy as np
5 | import mir_eval
6 | try:
7 | import matplotlib.pyplot as plt
8 | import seaborn as sns
9 | sns.set()
10 | except:
11 | print "matplotlib or seaborn not available"
12 |
13 | def loadpickle(picklefile):
14 |     from pickle import load
15 |     try:
16 |         with open(picklefile, 'rb') as handle:
17 |             return load(handle)
18 |     except:
19 |         print "Pickle file not found: " + picklefile
20 |         return None
21 |
22 | def load_contour_data(fpath, normalize=True):
23 | """ Load contour data from vamp output csv file.
24 | Initializes DataFrame to have all future columns.
25 |
26 | Parameters
27 | ----------
28 | fpath : str
29 | Path to vamp output csv file.
30 |
31 | Returns
32 | -------
33 | contour_data : DataFrame
34 | Pandas data frame with all contour data.
35 | """
36 | try:
37 | contour_data = pd.read_csv(fpath, header=None, index_col=None,
38 | delimiter=',').astype(float)
39 | del contour_data[0] # all zeros
40 | del contour_data[1] # just an unnecessary index
41 | headers = contour_data.columns.values.astype('str')
42 | headers[0:12] = ['onset', 'offset', 'duration', 'pitch mean', 'pitch std',
43 | 'salience mean', 'salience std', 'salience tot',
44 | 'vibrato', 'vib rate', 'vib extent', 'vib coverage']
45 | contour_data.columns = headers
46 | except:
47 | contour_data = loadpickle(fpath)
48 | # trying to load with pickle
49 |
50 | # Check if there is any column with all nans... it should not be considered
51 | df = contour_data.isnull().all()
52 |     if len(np.where(df)[0]) > 0:
53 | contour_data = contour_data.drop(contour_data.columns[np.where(df)[0][0]], axis=1)
54 |
55 | # To ensure the contour has a duration > 0
56 | contour_data['duration'] = np.fmax(contour_data['duration'].values,0.001)
57 |
58 | contour_data.num_end_cols = 0
59 | contour_data['overlap'] = -1 # overlaps are unset
60 | contour_data['labels'] = -1 # all labels are unset
61 | contour_data['melodiness'] = ""
62 | contour_data['mel prob'] = -1
63 | contour_data.num_end_cols = 4
64 |
65 | if normalize:
66 | contour_data = normalize_features(contour_data)
67 |
68 | return contour_data
69 |
70 |
71 | def normalize_features(contour_data):
72 | """ Normalizes (trackwise) features in contour_data.
73 | Adds labels column with all labels unset.
74 |
75 | Parameters
76 | ----------
77 | contour_data : DataFrame
78 | Pandas data frame with all contour data.
79 | normalize : Bool
80 | If true, performs trackwise normalization over salience.
81 |
82 | Returns
83 | -------
84 | contour_data : DataFrame
85 | Pandas data frame with normalized contour feature data.
86 | """
87 |
88 | _, _, contour_sal = contours_from_contour_data(contour_data)
89 |
90 | # maximum salience value across all contours
91 | sal_max = contour_sal.max().max()
92 |
93 | # normalize salience features by max salience
94 | contour_data['salience mean'] = contour_data['salience mean']/sal_max
95 | contour_data['salience std'] = contour_data['salience std']/sal_max
96 |
97 |     # normalize salience total by max salience and duration
98 | contour_data['salience tot'] = \
99 | contour_data['salience tot']/(sal_max*contour_data['duration'])
100 |
101 | # compute min and max duration
102 | dur_min = contour_data['duration'].min()
103 | dur_max = contour_data['duration'].max()
104 |
105 | # normalize duration to be between 0 and 1
106 | contour_data['duration'] = \
107 | (contour_data['duration'] - dur_min)/(dur_max - dur_min)
108 |
109 | # give standardized duration back to total salience
110 | contour_data['salience tot'] = \
111 | contour_data['salience tot']*contour_data['duration']
112 |
113 | return contour_data
114 |
115 |
116 | def contours_from_contour_data(contour_data, n_start=12, n_end=4):
117 | """ Get raw contour information from contour data
118 |
119 | Parameters
120 | ----------
121 | contour_data : DataFrame
122 | Pandas data frame with all contour data.
123 |
124 | Returns
125 | -------
126 | contour_times : DataFrame
127 | Pandas data frame with all raw contour times.
128 | contour_freqs : DataFrame
129 | Pandas data frame with all raw contour frequencies (Hz).
130 | contour_sal : DataFrame
131 | Pandas data frame with all raw contour salience values.
132 | """
133 |
134 | if 'first_time' in contour_data.columns:
135 | n_start = contour_data.columns.get_loc('first_time')
136 |
137 | # Check if there is any column with all nans... it should not be considered
138 | # df = contour_data.isnull().all()
139 | # if np.where(df)[0]:
140 | # n_end = contour_data.shape[1]-np.where(df)[0][0]
141 | #
142 |
143 |
144 | contour_times = contour_data.iloc[:, n_start:-n_end:3]
145 | contour_freqs = contour_data.iloc[:, n_start+1:-n_end:3]
146 | contour_sal = contour_data.iloc[:, n_start+2:-n_end:3]
147 |
148 | return contour_times, contour_freqs, contour_sal
149 |
150 |
151 | def load_annotation(fpath):
152 | """ Load an annotation file into a pandas Series.
153 | Add column with frequency values also converted to cents.
154 |
155 | Parameters
156 | ----------
157 | fpath : str
158 | Path to annotation file.
159 |
160 | Returns
161 | -------
162 | annot_data : DataFrame
163 | Pandas data frame with all annotation data.
164 | """
165 | # try:
166 | # annot_data = pd.read_csv(fpath, parse_dates=True,
167 | # index_col=False, header=None)
168 | # except:
169 | # annot_data = pd.read_csv(fpath, parse_dates=True,
170 | # index_col=False, header=None,sep='\t')
171 |
172 | # For Orchset
173 | separator = '\t'
174 |
175 | # For MedleyDB
176 | #separator = ','
177 |
178 | annot_data = pd.read_table(fpath, parse_dates=True,
179 | index_col=False,header=None,sep=separator)
180 |
181 | annot_data.columns = ['time', 'f0']
182 |
183 | # Add column with annotation values in cents
184 | annot_data['cents'] = 1200.0*np.log2(annot_data['f0']/55.0)
185 |
186 | return annot_data
187 |
188 |
189 | def plot_contours(contour_data, annot_data, contour_data2=None):
190 | """ Plot contours against annotation.
191 |
192 | Parameters
193 | ----------
194 | contour_data : DataFrame
195 | Pandas data frame with all contour data.
196 | annot_data : DataFrame
197 | Pandas data frame with all annotation data.
198 | """
199 | if contour_data2 is not None:
200 | c_times2, c_freqs2, _ = contours_from_contour_data(contour_data2)
201 | for (times, freqs) in zip(c_times2.iterrows(), c_freqs2.iterrows()):
202 | times = times[1].values
203 | freqs = freqs[1].values
204 | times = times[~np.isnan(times)]
205 | freqs = freqs[~np.isnan(freqs)]
206 | plt.plot(times, freqs, '.c')
207 |
208 | c_times, c_freqs, _ = contours_from_contour_data(contour_data)
209 | plt.figure()
210 | for (times, freqs) in zip(c_times.iterrows(), c_freqs.iterrows()):
211 | times = times[1].values
212 | freqs = freqs[1].values
213 | times = times[~np.isnan(times)]
214 | freqs = freqs[~np.isnan(freqs)]
215 | plt.plot(times, freqs, '.r')
216 |
217 | plt.plot(annot_data['time'], annot_data['f0'], '.k')
218 | plt.show()
219 |
220 |
221 | def compute_overlap(contour_data, annot_data):
222 | """ Compute percentage of overlap of each contour with annotation.
223 |
224 | Parameters
225 | ----------
226 | contour_data : DataFrame
227 | Pandas data frame with all contour data.
228 | annot_data : DataFrame
229 | Pandas data frame with all annotation data.
230 |
231 | Returns
232 | -------
233 | feature_data : DataFrame
234 | Pandas data frame with feature_data and labels.
235 | """
236 | c_times, c_freqs, _ = contours_from_contour_data(contour_data)
237 |
238 | for (times, freqs) in zip(c_times.iterrows(), c_freqs.iterrows()):
239 | row_idx = times[0]
240 | times = times[1].values
241 | freqs = freqs[1].values
242 |
243 | # remove trailing NaNs
244 | times = times[~np.isnan(times)]
245 | freqs = freqs[~np.isnan(freqs)]
246 |
247 | # get segment of ground truth matching this contour
248 | gt_segment = annot_data[annot_data['time'] >= times[0]]
249 | gt_segment = gt_segment[gt_segment['time'] <= times[-1]]
250 |
251 | if len(gt_segment['time']) == 0:
252 | # To avoid error in mir_eval
253 | res = mir_eval.melody.evaluate(np.zeros(1),np.zeros(1), times, freqs)
254 | else:
255 | # compute metrics
256 | res = mir_eval.melody.evaluate(gt_segment['time'].values,
257 | gt_segment['f0'].values, times, freqs)
258 |
259 | contour_data.ix[row_idx, 'overlap'] = res['Overall Accuracy']
260 |
261 | return contour_data
262 |
263 |
264 | def label_contours(contour_data, olap_thresh):
265 |     """ Label contours based on their overlap with the annotation.
266 | Contours with at least olap_thresh overlap with annotation
267 | are labeled as positive examples. Otherwise negative.
268 |
269 | Parameters
270 | ----------
271 | contour_data : DataFrame
272 |         Pandas data frame with all contour data, with the 'overlap'
273 |         column already filled in by compute_overlap.
274 |     olap_thresh : float
275 |         Overlap threshold for positive examples
276 |         (contours above the threshold are labeled 1, the rest 0).
277 |
278 | Returns
279 | -------
280 | contour_data : DataFrame
281 | Pandas data frame with contour_data and labels.
282 | """
283 | contour_data['labels'] = 1*(contour_data['overlap'] > olap_thresh)
284 | return contour_data
285 |
286 |
287 | def contour_glass_ceiling(contour_fpath, annot_fpath):
288 |     """ Compute the glass-ceiling raw pitch accuracy: a frame counts as a hit
289 |     if any extracted contour pitch lies within 50 cents of the annotation.
290 | 
291 |     Parameters
292 |     ----------
293 |     contour_fpath : str
294 |         Path to the contour file (pickle, or csv from the MELODIA VAMP plugin).
295 |     annot_fpath : str
296 |         Path to the (tab-separated) melody annotation file.
297 | 
298 |     Returns
299 |     -------
300 |     rpa : float
301 |         Raw pitch accuracy achievable from the extracted contours. """
302 | # indices
303 | onset = 2
304 | offset = 3
305 | duration = 4
306 | pitch_mean = 5
307 | pitch_std = 6
308 | salience_mean = 7
309 | salience_std = 8
310 | salience_tot = 9
311 | vibrato = 10
312 | vibrato_rate = 11
313 | vibrato_extent = 12
314 | vibrato_coverage = 13
315 | first_time = 14
316 |
317 | hopsizeInSamples = 256.0
318 |
319 | def time_to_index(t):
320 | return int(np.round(t * 44100 / hopsizeInSamples))
321 |
322 | ###########################################################################
323 | def contours_to_activation(contours, n_times):
324 |
325 | if isinstance(contours,pd.DataFrame):
326 | # Edit for contours from melodia, should be 14, from contours from ISMIR2016, should be 12 (Orchset eval. with melodia contours)
327 | featName, startFeat,EndFeat = getFeatureInfo(contours)
328 | first_time = EndFeat+1
329 |
330 | c_last = contours.values[-1]
331 | nanind = np.where(np.isnan(c_last))[0]
332 | if len(nanind) > 0:
333 | nanind = nanind[0]
334 | c_last = c_last[:nanind]
335 | activation = [[] for x in range(time_to_index(n_times) + 1)]
336 |
337 | for c_num in contours.values:
338 | nanind = np.where(np.isnan(c_num))[0]
339 | if len(nanind) > 0:
340 | nanind = nanind[0]
341 | c_num = c_num[first_time:nanind]
342 | else:
343 | c_num = c_num[first_time:]
344 | ind = 0
345 | while ind < len(c_num):
346 | time_ind = time_to_index(c_num[ind])
347 | activation[time_ind].append(c_num[ind+1])
348 | ind += 3
349 |
350 | return activation
351 |
352 | ###########################################################################
353 | def pitch_accuracy(ref, activation):
354 | hits = 0
355 | misses = 0
356 | for rval in ref.values:
357 | ind = time_to_index(rval[0])
358 | if rval[1] > 0:
359 | match = False
360 | for v in activation[ind]:
361 | if np.abs(1200*np.log2(v/rval[1])) < 50:
362 | match = True
363 | if match:
364 | hits += 1
365 | else:
366 | misses += 1
367 | return hits / float(hits + misses)
368 | ###########################################################################
369 |
370 |     # The annotation is read as a tab-separated file (Orchset format)
371 |     ref = pd.read_csv(annot_fpath, header=None, sep='\t',
372 |                       index_col=False)
373 | 
374 | try:
375 | contours = loadpickle(contour_fpath)
376 |
377 | contours.drop('mel prob',inplace=True,axis=1)
378 | contours.drop('overlap',inplace=True,axis=1)
379 | contours.drop('labels',inplace=True,axis=1)
380 | contours.drop('melodiness',inplace=True,axis=1)
381 | except:
382 |         # In case the contours are csv (created with the hacked MELODIA VAMP plugin from J. Salamon)
383 | try:
384 | contoursr = pd.read_csv(contour_fpath,header=None, index_col=False)
385 | # First two columns are irrelevant
386 | contours = contoursr.iloc[:,2:]
387 | except:
388 | print "No contours could be loaded"
389 |
390 |
391 | n_times = len(ref)
392 | activation = contours_to_activation(contours, n_times)
393 | rpa = pitch_accuracy(ref, activation)
394 |
395 | return rpa
396 |
397 |
398 |
399 | def join_contours(contours_list):
400 |     """ Merge features for multiple tracks into a single DataFrame
401 |
402 | Parameters
403 | ----------
404 | contours_list : list of DataFrames
405 | List of Pandas data frames with labeled features.
406 |
407 | Returns
408 | -------
409 | all_contours : DataFrame
410 | Merged feature data.
411 | """
412 | all_contours = pd.concat(contours_list, ignore_index=False)
413 | return all_contours
414 |
415 | def getFeatureInfo(contourDF):
416 | if 'first_time' in contourDF.columns:
417 | idxEndFeatures = contourDF.columns.get_loc('first_time')-1
418 | else:
419 |         idxEndFeatures = 11  # From the original implementation: the 12th feature (index 11) is the last
420 | if 'duration' in contourDF.columns:
421 | idxStartFeatures = contourDF.columns.get_loc('duration')
422 | else:
423 | idxStartFeatures=0
424 | feats = contourDF.columns[idxStartFeatures:idxEndFeatures+1]
425 | return feats,idxStartFeatures,idxEndFeatures
426 |
427 |
428 | def pd_to_sklearn(contour_data,idxfirstfeature=0,idxEndFeatures=11):
429 | """ Convert pandas data frame to sklearn style features and labels
430 |
431 | Parameters
432 | ----------
433 | contour_data : DataFrame or dict of DataFrames
434 | DataFrame containing labeled features.
435 |
436 | Returns
437 | -------
438 | features : np.ndarray
439 |         features (n_samples x n_features)
440 | labels : np.1darray
441 | Labels (n_samples,)
442 | """
443 | offset = 0
444 | # Reduce before join for speed and memory saving
445 | if isinstance(contour_data, dict):
446 | red_list = []
447 | lab_list = []
448 | for key in contour_data.keys():
449 | # Edit ISMIR offset
450 | #if isinstance(contour_data[key],pd.DataFrame):
451 | #print "Is dataframe"
452 | # offset = - 2
453 | red_list.append(contour_data[key].iloc[:, idxfirstfeature: idxEndFeatures+1])
454 | lab_list.append(contour_data[key]['labels'])
455 |
456 | joined_data = join_contours(red_list)
457 | joined_labels = join_contours(lab_list)
458 |
459 | else:
460 | #if isinstance(contour_data,pd.DataFrame):
461 | # offset = - 2
462 | joined_data = contour_data.iloc[:, idxfirstfeature:idxEndFeatures+1]
463 | joined_labels = contour_data['labels']
464 |
465 | features = np.array(joined_data)
466 | labels = np.array(joined_labels)
467 |
468 | return features, labels
469 |
470 |
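471 | if __name__ == '__main__':
472 |     # Illustrative sketch: convert a tiny hand-made contour frame to
473 |     # sklearn-style arrays with pd_to_sklearn. The column names follow the
474 |     # feature headers used above; the values themselves are made up.
475 |     cols = ['duration', 'pitch mean', 'salience mean', 'labels']
476 |     toy = pd.DataFrame([[0.5, 220.0, 0.8, 1],
477 |                         [0.1, 440.0, 0.2, 0]], columns=cols)
478 |     feats, labels = pd_to_sklearn(toy, idxfirstfeature=0, idxEndFeatures=2)
479 |     print feats.shape    # (2, 3): one row per contour, one column per feature
480 |     print labels         # [1 0]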
--------------------------------------------------------------------------------
/src/contour_classification/experiment_utils.py:
--------------------------------------------------------------------------------
1 | """ Helper functions for experiments """
2 |
3 | from ShuffleLabelsOut import ShuffleLabelsOut
4 | import contour_utils as cc
5 | import json
6 | from sklearn import metrics
7 | import numpy as np
8 | import os
9 | import sys
10 | import matplotlib.pyplot as plt
11 | import seaborn as sns
12 | sns.set()
13 |
14 |
15 | def create_splits(test_size=0.15):
16 |     """ Split the dataset into artist/group-disjoint train/test splits (MedleyDB or Orchset, depending on the index loaded below).
17 |
18 | Returns
19 | -------
20 | mdb_files : list
21 | List of sorted medleydb files.
22 | splitter : iterator
23 | iterator of train/test indices.
24 | """
25 |
26 | #index = json.load(open('medley_artist_index.json'))
27 | # EDIT: For Orchset
28 | index = json.load(open('orch_groups.json'))
29 |
30 | mdb_files = []
31 | keys = []
32 |
33 | for trackid, artist in sorted(index.items()):
34 | mdb_files.append(trackid)
35 | keys.append(artist)
36 |
37 | keys = np.asarray(keys)
38 | mdb_files = np.asarray(mdb_files)
39 | splitter = ShuffleLabelsOut(keys, random_state=1, test_size=test_size)
40 |
41 | return mdb_files, splitter
42 |
43 |
44 | def get_data_files(track, meltype=1):
45 | """ Load all necessary data for a given track and melody type.
46 |
47 | Parameters
48 | ----------
49 | track : str
50 | Track identifier.
51 | meltype : int
52 | Melody annotation type. One of [1, 2, 3]
53 |
54 | Returns
55 | -------
56 | cdat : DataFrame
57 | Pandas DataFrame of contour data.
58 | adat : DataFrame
59 | Pandas DataFrame of annotation data.
60 | """
61 | contour_suffix = \
62 | "MIX_vamp_melodia-contours_melodia-contours_contoursall.csv"
63 | contours_path = "melodia_contours"
64 |
65 | # For ORCHSET with MELODIA --------------------------
66 |
67 | annot_path = os.path.join('/Users/jjb/Google Drive/data/segments/excerpts/GT')
68 |
69 | contour_suffix = \
70 | "_vamp_melodia-contours_melodia-contours_contoursall.csv"
71 | contours_path = "/Users/jjb/Google Drive/PhD/conferences/ISMIR2016/SIMM-PC/Orchset/contours_melodia"
72 | annot_suffix = "mel"
73 | contour_fname = "%s%s" % (track, contour_suffix)
74 | contour_fpath = os.path.join(contours_path, contour_fname)
75 | annot_fname = "%s.%s" % (track, annot_suffix)
76 | annot_fpath = os.path.join(annot_path, annot_fname)
77 |
78 |
79 |     # For ORCHSET with SIMM --------------------------
80 |
81 | contour_suffix = "pitch.ctr"
82 | contours_path = "/Users/jjb/Google Drive/PhD/conferences/ISMIR2016/SIMM-PC/Orchset/C4-Contours/Conv_mu-1_G-0_LHSF-0_pC-27.56_pDTh-0.9_pFTh-0.9_tC-50_mD-100"
83 |
84 | contours_path = "/Users/jjb/Google Drive/PhD/Tests/Orchset/ScContours/"
85 |
86 | annot_suffix = "mel"
87 |
88 | annot_path = os.path.join('/Users/jjb/Google Drive/data/segments/excerpts/GT')
89 | contour_fname = "%s.%s" % (track, contour_suffix)
90 | contour_fpath = os.path.join(contours_path, contour_fname)
91 | annot_fname = "%s.%s" % (track, annot_suffix)
92 | annot_fpath = os.path.join(annot_path, annot_fname)
93 |
94 | # For MEDLEY with SIMM -------------------------
95 | contour_suffix = "MIX.pitch.ctr"
96 | contours_path = "/Users/jjb/Google Drive/PhD/conferences/ISMIR2016/SIMM-PC/MedleyDB/C4-Contours/Conv_mu-1_G-0_LHSF-0_pC-27.56_pDTh-0.9_pFTh-0.9_tC-50_mD-100"
97 |
98 | annot_suffix = "MELODY%s.csv" % str(meltype)
99 | mel_dir = "MELODY%s" % str(meltype)
100 | annot_path = os.path.join(os.environ['MEDLEYDB_PATH'], 'Annotations',
101 | 'Melody_Annotations', mel_dir)
102 |
103 | contour_fname = "%s_%s" % (track, contour_suffix)
104 | contour_fpath = os.path.join(contours_path, contour_fname)
105 | annot_fname = "%s_%s" % (track, annot_suffix)
106 | annot_fpath = os.path.join(annot_path, annot_fname)
107 |
108 |     # For ORCHSET with SIMM --------------------------
109 |
110 | contour_suffix = "pitch.ctr"
111 | contours_path = "/Users/jjb/Google Drive/PhD/conferences/ISMIR2016/SIMM-PC/Orchset/C4-Contours/Conv_mu-1_G-0_LHSF-0_pC-27.56_pDTh-0.9_pFTh-0.9_tC-50_mD-100"
112 |
113 | #contours_path = "/Users/jjb/Google Drive/PhD/Tests/Orchset/ScContours/"
114 |
115 | annot_suffix = "mel"
116 |
117 | annot_path = os.path.join('/Users/jjb/Google Drive/data/segments/excerpts/GT')
118 | contour_fname = "%s.%s" % (track, contour_suffix)
119 | contour_fpath = os.path.join(contours_path, contour_fname)
120 | annot_fname = "%s.%s" % (track, annot_suffix)
121 | annot_fpath = os.path.join(annot_path, annot_fname)
122 |
123 |     # NOTE: the path blocks above overwrite each other; only the last (Orchset / SIMM) configuration is used
124 |
125 | cdat = cc.load_contour_data(contour_fpath, normalize=True)
126 | adat = cc.load_annotation(annot_fpath)
127 |
128 | return cdat, adat
129 |
130 |
131 | def compute_all_overlaps(track_list, meltype):
132 | """ Compute each contour's overlap with annotation.
133 |
134 | Parameters
135 | ----------
136 | track_list : list
137 | List of all trackids
138 | meltype : int
139 | One of [1,2,3]
140 |
141 | Returns
142 | -------
143 | dset_contour_dict : dict of DataFrames
144 | Dict of dataframes keyed by trackid
145 | dset_annot_dict : dict of dataframes
146 | dict of annotation dataframes keyed by trackid
147 | """
148 |
149 | dset_contour_dict = {}
150 | dset_annot_dict = {}
151 |
152 | msg = "Generating features..."
153 | num_spaces = len(track_list) - len(msg)
154 | print msg + ' '*num_spaces + '|'
155 |
156 | for track in track_list:
157 | cdat, adat = get_data_files(track, meltype=meltype)
158 | dset_annot_dict[track] = adat.copy()
159 | dset_contour_dict[track] = cc.compute_overlap(cdat, adat)
160 | sys.stdout.write('.')
161 |
162 | return dset_contour_dict, dset_annot_dict
163 |
164 |
165 | def olap_stats(train_contour_dict):
166 | """ Compute overlap statistics.
167 |
168 | Parameters
169 | ----------
170 | train_contour_dict : dict of DataFrames
171 | Dict of train contour data frames
172 |
173 | Returns
174 | -------
175 | partial_olap_stats : DataFrames
176 | Description of overlap data.
177 | zero_olap_stats : DataFrames
178 | Description of non-overlap data.
179 | """
180 | # reduce for speed and memory
181 | red_list = []
182 | for cdat in train_contour_dict.values():
183 | red_list.append(cdat['overlap'])
184 |
185 | overlap_dat = cc.join_contours(red_list)
186 | non_zero_olap = overlap_dat[overlap_dat > 0]
187 | zero_olap = overlap_dat[overlap_dat == 0]
188 | partial_olap_stats = non_zero_olap.describe()
189 | zero_olap_stats = zero_olap.describe()
190 |
191 | return partial_olap_stats, zero_olap_stats
192 |
193 |
194 | def label_all_contours(train_contour_dict, valid_contour_dict,
195 | test_contour_dict, olap_thresh):
196 | """ Add labels to contours based on overlap_thresh.
197 |
198 | Parameters
199 | ----------
200 | train_contour_dict : dict of DataFrames
201 | dict of train contour data frames
202 | valid_contour_dict : dict of DataFrames
203 | dict of validation contour data frames
204 | test_contour_dict : dict of DataFrames
205 | dict of test contour data frames
206 | olap_thresh : float
207 | Value in [0, 1). Min overlap to be labeled as melody.
208 |
209 | Returns
210 | -------
211 | train_contour_dict : dict of DataFrames
212 | dict of train contour data frames
213 | test_contour_dict : dict of DataFrames
214 | dict of test contour data frames
215 | """
216 | for key in train_contour_dict.keys():
217 | train_contour_dict[key] = cc.label_contours(train_contour_dict[key],
218 | olap_thresh=olap_thresh)
219 |
220 | for key in valid_contour_dict.keys():
221 | valid_contour_dict[key] = cc.label_contours(valid_contour_dict[key],
222 | olap_thresh=olap_thresh)
223 |
224 | for key in test_contour_dict.keys():
225 | test_contour_dict[key] = cc.label_contours(test_contour_dict[key],
226 | olap_thresh=olap_thresh)
227 | return train_contour_dict, valid_contour_dict, test_contour_dict
228 |
229 |
230 | def contour_probs(clf, contour_data,idxStartFeatures=0,idxEndFeatures=11):
231 | """ Compute classifier probabilities for contours.
232 |
233 | Parameters
234 | ----------
235 | clf : scikit-learn classifier
236 | Binary classifier.
237 | contour_data : DataFrame
238 | DataFrame with contour information.
239 |
240 | Returns
241 | -------
242 | contour_data : DataFrame
243 | DataFrame with contour information and predicted probabilities.
244 | """
245 | contour_data['mel prob'] = -1
246 | features, _ = cc.pd_to_sklearn(contour_data,idxStartFeatures,idxEndFeatures)
247 | probs = clf.predict_proba(features)
248 | mel_probs = [p[1] for p in probs]
249 | contour_data['mel prob'] = mel_probs
250 | return contour_data
251 |
252 |
253 | def get_best_threshold(y_ref, y_pred_score, plot=False):
254 | """ Get threshold on scores that maximizes f1 score.
255 |
256 | Parameters
257 | ----------
258 | y_ref : array
259 | Reference labels (binary).
260 | y_pred_score : array
261 | Predicted scores.
262 | plot : bool
263 | If true, plot ROC curve
264 |
265 | Returns
266 | -------
267 | best_threshold : float
268 | threshold on score that maximized f1 score
269 | max_fscore : float
270 | f1 score achieved at best_threshold
271 | """
272 | pos_weight = 1.0 - float(len(y_ref[y_ref == 1]))/float(len(y_ref))
273 | neg_weight = 1.0 - float(len(y_ref[y_ref == 0]))/float(len(y_ref))
274 | sample_weight = np.zeros(y_ref.shape)
275 | sample_weight[y_ref == 1] = pos_weight
276 | sample_weight[y_ref == 0] = neg_weight
277 |
278 | print "max prediction value = %s" % np.max(y_pred_score)
279 | print "min prediction value = %s" % np.min(y_pred_score)
280 |
281 | precision, recall, thresholds = \
282 | metrics.precision_recall_curve(y_ref, y_pred_score, pos_label=1,
283 | sample_weight=sample_weight)
284 | beta = 1.0
285 | btasq = beta**2.0
286 | fbeta_scores = (1.0 + btasq)*(precision*recall)/((btasq*precision)+recall)
287 |
288 | max_fscore = fbeta_scores[np.nanargmax(fbeta_scores)]
289 | best_threshold = thresholds[np.nanargmax(fbeta_scores)]
290 |
291 | if plot:
292 | plt.figure(1)
293 | plt.subplot(1, 2, 1)
294 | plt.plot(recall, precision, '.b', label='PR curve')
295 | plt.xlim([0.0, 1.0])
296 | plt.ylim([0.0, 1.0])
297 | plt.xlabel('Recall')
298 | plt.ylabel('Precision')
299 | plt.title('Precision-Recall Curve')
300 | plt.legend(loc="lower right", frameon=True)
301 | plt.subplot(1, 2, 2)
302 | plt.plot(thresholds, fbeta_scores[:-1], '.r', label='f1-score')
303 | plt.xlabel('Probability Threshold')
304 | plt.ylabel('F1 score')
305 | plt.show()
306 |
307 | plot_data = (recall, precision, thresholds, fbeta_scores[:-1])
308 |
309 | return best_threshold, max_fscore, plot_data
310 |
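311 | if __name__ == '__main__':
312 |     # Illustrative sketch: pick the probability threshold that maximises the
313 |     # class-weighted F1 score on synthetic scores. Labels and scores are made up.
314 |     y_ref = np.array([0, 0, 0, 0, 1, 1, 1, 1])
315 |     y_score = np.array([0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8])
316 |     thresh, fscore, _ = get_best_threshold(y_ref, y_score, plot=False)
317 |     print "best threshold = %s, f1 = %s" % (thresh, fscore)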
--------------------------------------------------------------------------------
/src/contour_classification/generate_melody.py:
--------------------------------------------------------------------------------
1 | """ Module for generating melody output based on classifier scores """
2 | import pandas as pd
3 | import contour_utils as cc
4 | import numpy as np
5 | import mir_eval
6 |
7 |
8 | def melody_from_clf(contour_data, prob_thresh=0.5, penalty=0, method='viterbi'):
9 | """ Compute output melody using classifier output.
10 |
11 | Parameters
12 | ----------
13 | contour_data : DataFrame or dict of DataFrames
14 | DataFrame containing labeled features.
15 | prob_thresh : float
16 | Threshold that determines positive class
17 |     penalty : scalar, Viterbi off-diagonal penalty; method : str, 'viterbi' (default) or 'max' decoding
18 | Returns
19 | -------
20 | mel_output : Series
21 | Pandas Series with time stamp as index and f0 as values
22 | """
23 |
24 | contour_threshed = contour_data[contour_data['mel prob'] >= prob_thresh]
25 |
26 | if len(contour_threshed) == 0:
27 | print "Warning: no contours above threshold."
28 | contour_times, _, _ = \
29 | cc.contours_from_contour_data(contour_data, n_end=4)
30 |
31 | hopsizeInSamples = 256.0
32 | step_size = hopsizeInSamples/44100.0 # contour time stamp step size
33 | mel_time_idx = np.arange(0, np.max(contour_times.values.ravel()) + 1,
34 | step_size)
35 | mel_output = pd.Series(np.zeros(mel_time_idx.shape),
36 | index=mel_time_idx)
37 | return mel_output
38 |
39 | # get separate DataFrames of contour time, frequency, and probability
40 | contour_times, contour_freqs, _ = \
41 | cc.contours_from_contour_data(contour_threshed, n_end=4)
42 |
43 | # make frequencies below probability threshold negative
44 | #contour_freqs[contour_data['mel prob'] < prob_thresh] *= -1.0
45 |
46 | probs = contour_threshed['mel prob']
47 | contour_probs = pd.concat([probs]*contour_times.shape[1], axis=1,
48 | ignore_index=True)
49 |
50 | contour_num = pd.DataFrame(np.array(contour_threshed.index))
51 | contour_nums = pd.concat([contour_num]*contour_times.shape[1], axis=1,
52 | ignore_index=True)
53 |
54 | avg_freq = contour_freqs.mean(axis=1)
55 |
56 | # create DataFrame with all unwrapped [time, frequency, probability] values.
57 | mel_dat = pd.DataFrame(columns=['time', 'f0', 'probability', 'c_num'])
58 | mel_dat['time'] = contour_times.values.ravel()
59 | mel_dat['f0'] = contour_freqs.values.ravel()
60 | mel_dat['probability'] = contour_probs.values.ravel()
61 | mel_dat['c_num'] = contour_nums.values.ravel()
62 |
63 | # remove rows with NaNs
64 | mel_dat.dropna(inplace=True)
65 |
66 | # sort by probability then by time
67 |     # duplicate times will have maximum probability value at the end
68 | mel_dat.sort(columns='probability', inplace=True)
69 | mel_dat.sort(columns='time', inplace=True)
70 |
71 | hopsizeInSamples = 256.0
72 | # compute evenly spaced time grid for output
73 | step_size = hopsizeInSamples/44100.0 # contour time stamp step size
74 | mel_time_idx = np.arange(0, np.max(mel_dat['time'].values) + 1, step_size)
75 |
76 | # find index in evenly spaced grid of estimated time values
77 | old_times = mel_dat['time'].values
78 | reidx = np.searchsorted(mel_time_idx, old_times)
79 | shift_idx = (np.abs(old_times - mel_time_idx[reidx - 1]) < \
80 | np.abs(old_times - mel_time_idx[reidx]))
81 | reidx[shift_idx] = reidx[shift_idx] - 1
82 |
83 | # find duplicate time values
84 | mel_dat['reidx'] = reidx
85 |
86 | if method == 'max':
87 | print "using max decoding"
88 | mel_dat.drop_duplicates(subset='reidx', take_last=True, inplace=True)
89 |
90 | mel_output = pd.Series(np.zeros(mel_time_idx.shape), index=mel_time_idx)
91 | mel_output.iloc[mel_dat['reidx']] = mel_dat['f0'].values
92 |
93 | else:
94 | print "using viterbi decoding"
95 | duplicates = mel_dat.duplicated(subset='reidx') | \
96 | mel_dat.duplicated(subset='reidx', take_last=True)
97 |
98 | not_duplicates = mel_dat[~duplicates]
99 |
100 | # initialize output melody
101 | mel_output = pd.Series(np.zeros(mel_time_idx.shape), index=mel_time_idx)
102 |
103 | # fill non-duplicate values
104 | mel_output.iloc[not_duplicates['reidx']] = not_duplicates['f0'].values
105 |
106 | dups = mel_dat[duplicates]
107 | dups['groupnum'] = (dups.loc[:, 'reidx'].diff() > 1).cumsum().copy()
108 | groups = dups.groupby('groupnum')
109 |
110 | for _, group in groups:
111 | states = np.unique(group['c_num'])
112 | center_freqs = avg_freq.loc[states]
113 | times = np.unique(group['reidx'])
114 |
115 | posterior = group[['probability', 'c_num', 'reidx']].pivot_table(
116 | 'probability', index='reidx',
117 | columns='c_num',
118 | fill_value=0.0).as_matrix()
119 |
120 | f0_vals = group[['f0', 'c_num', 'reidx']].pivot_table(
121 | 'f0', index='reidx',
122 | columns='c_num',
123 | fill_value=0.0).as_matrix()
124 |
125 | #posterior[np.where(f0_vals < prob_thresh)] = 0 #1e-10
126 |
127 | # build transition matrix from log distance between center frequency
128 | transition_matrix = np.log2(center_freqs.values)[np.newaxis, :] - \
129 | np.log2(center_freqs.values)[:, np.newaxis]
130 | transition_matrix = 1 - normalize(np.abs(transition_matrix), axis=1)
131 | transition_matrix = normalize(transition_matrix, axis=1)
132 |
133 | path = viterbi(posterior, transition_matrix=transition_matrix,
134 | prior=None, penalty=penalty)
135 |
136 | mel_output.iloc[times] = f0_vals[np.arange(len(path)), path]
137 |
138 | return mel_output
139 |
140 |
141 | def score_melodies(mel_output_dict, test_annot_dict):
142 | """ Score melody output against ground truth.
143 |
144 | Parameters
145 | ----------
146 | mel_output_dict : dict of Series
147 | Dictionary of melody output series keyed by trackid
148 | test_annot_dict : dict of DataFrames
149 | Dictionary of DataFrames containing annotations.
150 |
151 | Returns
152 | -------
153 | melody_scores : dict
154 | melody evaluation metrics for each track
155 | """
156 | melody_scores = {}
157 | print "Scoring..."
158 | for key in mel_output_dict.keys():
159 | print key
160 | if mel_output_dict[key] is None:
161 | print "skipping..."
162 | continue
163 | ref = test_annot_dict[key]
164 | est = mel_output_dict[key]
165 | if isinstance(est,pd.DataFrame) or isinstance(est,pd.Series):
166 | melody_scores[key] = mir_eval.melody.evaluate(ref['time'].values,
167 | ref['f0'].values,
168 | est.index.values,
169 | est.values)
170 | else:
171 | times, pitches = est
172 | melody_scores[key] = mir_eval.melody.evaluate(ref['time'].values,
173 | ref['f0'].values,
174 | times,
175 | pitches[:,0])
176 |
177 | return melody_scores
178 |
179 |
180 | def viterbi(posterior, transition_matrix=None, prior=None, penalty=0,
181 | scaled=True):
182 | """Find the optimal Viterbi path through a posteriorgram.
183 | Ported closely from Tae Min Cho's MATLAB implementation.
184 | Parameters
185 | ----------
186 | posterior: np.ndarray, shape=(num_obs, num_states)
187 | Matrix of observations (events, time steps, etc) by the number of
188 | states (classes, categories, etc), e.g.
189 | posterior[t, i] = Pr(y(t) | Q(t) = i)
190 | transition_matrix: np.ndarray, shape=(num_states, num_states)
191 | Transition matrix for the viterbi algorithm. For clarity, each row
192 | corresponds to the probability of transitioning to the next state, e.g.
193 | transition_matrix[i, j] = Pr(Q(t + 1) = j | Q(t) = i)
194 | prior: np.ndarray, default=None (uniform)
195 | Probability distribution over the states, e.g.
196 | prior[i] = Pr(Q(0) = i)
197 | penalty: scalar, default=0
198 | Scalar penalty to down-weight off-diagonal states.
199 | scaled : bool, default=True
200 | Scale transition probabilities between steps in the algorithm.
201 | Note: Hard-coded to True in TMC's implementation; it's probably a bad
202 | idea to change this.
203 | Returns
204 | -------
205 | path: np.ndarray, shape=(num_obs,)
206 | Optimal state indices through the posterior.
207 | """
208 |
209 | # Infer dimensions.
210 | num_obs, num_states = posterior.shape
211 |
212 | # Define the scaling function
213 | scaler = normalize if scaled else lambda x: x
214 | # Normalize the posterior.
215 | posterior = normalize(posterior, axis=1)
216 |
217 | if transition_matrix is None:
218 | transition_matrix = np.ones([num_states]*2)
219 |
220 | transition_matrix = normalize(transition_matrix, axis=1)
221 |
222 | # Apply the off-axis penalty.
223 | offset = np.ones([num_states]*2, dtype=float)
224 | offset -= np.eye(num_states, dtype=np.float)
225 | penalty = offset * np.exp(penalty) + np.eye(num_states, dtype=np.float)
226 | transition_matrix = penalty * transition_matrix
227 |
228 | # Create a uniform prior if one isn't provided.
229 | prior = np.ones(num_states) / float(num_states) if prior is None else prior
230 |
231 | # Algorithm initialization
232 | delta = np.zeros_like(posterior)
233 | psi = np.zeros_like(posterior)
234 | path = np.zeros(num_obs, dtype=int)
235 |
236 | idx = 0
237 | delta[idx, :] = scaler(prior * posterior[idx, :])
238 |
239 | for idx in range(1, num_obs):
240 | res = delta[idx - 1, :].reshape(1, num_states) * transition_matrix
241 | delta[idx, :] = scaler(np.max(res, axis=1) * posterior[idx, :])
242 | psi[idx, :] = np.argmax(res, axis=1)
243 |
244 | path[-1] = np.argmax(delta[-1, :])
245 | for idx in range(num_obs - 2, -1, -1):
246 | path[idx] = psi[idx + 1, path[idx + 1]]
247 | return path
248 |
249 |
250 | def normalize(x, axis=None):
251 | """Normalize the values of an ndarray to sum to 1 along the given axis.
252 | Parameters
253 | ----------
254 | x : np.ndarray
255 | Input multidimensional array to normalize.
256 | axis : int, default=None
257 | Axis to normalize along, otherwise performed over the full array.
258 | Returns
259 | -------
260 | z : np.ndarray, shape=x.shape
261 | Normalized array.
262 | """
263 | if not axis is None:
264 | shape = list(x.shape)
265 | shape[axis] = 1
266 | scalar = x.astype(float).sum(axis=axis).reshape(shape)
267 | scalar[scalar == 0] = 1.0
268 | else:
269 | scalar = x.sum()
270 | scalar = 1 if scalar == 0 else scalar
271 | return x / scalar
272 |
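273 | if __name__ == '__main__':
274 |     # Illustrative sketch: decode a tiny 3-frame / 2-state posteriorgram with
275 |     # the viterbi() helper above. With a uniform transition matrix and no
276 |     # penalty this reduces to picking the most probable state per frame.
277 |     post = np.array([[0.9, 0.1],
278 |                      [0.4, 0.6],
279 |                      [0.2, 0.8]])
280 |     print viterbi(post)              # [0 1 1]
281 |     print normalize(post, axis=1)    # rows already sum to 1, so unchanged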
--------------------------------------------------------------------------------
/src/contour_classification/melody_trackids.json:
--------------------------------------------------------------------------------
1 | {
2 | "tracks": [
3 | "CelestialShore_DieForUs",
4 | "HezekiahJones_BorrowedHeart",
5 | "BrandonWebster_YesSirICanFly",
6 | "MusicDelta_Vivaldi",
7 | "Schumann_Mignon",
8 | "TheScarletBrand_LesFleursDuMal",
9 | "MusicDelta_SpeedMetal",
10 | "MusicDelta_ChineseDrama",
11 | "MusicDelta_ModalJazz",
12 | "EthanHein_GirlOnABridge",
13 | "StrandOfOaks_Spacestation",
14 | "LizNelson_Rainfall",
15 | "MusicDelta_Shadows",
16 | "BrandonWebster_DontHearAThing",
17 | "MusicDelta_Beethoven",
18 | "Debussy_LenfantProdigue",
19 | "PurlingHiss_Lolita",
20 | "MusicDelta_Grunge",
21 | "KarimDouaidy_Yatora",
22 | "KarimDouaidy_Hopscotch",
23 | "MusicDelta_FreeJazz",
24 | "SecretMountains_HighHorse",
25 | "ClaraBerryAndWooldog_WaltzForMyVictims",
26 | "AmarLal_SpringDay1",
27 | "AmarLal_Rest",
28 | "ClaraBerryAndWooldog_AirTraffic",
29 | "ClaraBerryAndWooldog_Stella",
30 | "ClaraBerryAndWooldog_TheBadGuys",
31 | "MusicDelta_Beatles",
32 | "AClassicEducation_NightOwl",
33 | "LizNelson_Coldwar",
34 | "FacesOnFilm_WaitingForGa",
35 | "PortStWillow_StayEven",
36 | "ClaraBerryAndWooldog_Boys",
37 | "InvisibleFamiliars_DisturbingWildlife",
38 | "AlexanderRoss_VelvetCurtain",
39 | "AimeeNorwich_Child",
40 | "AlexanderRoss_GoodbyeBolero",
41 | "Auctioneer_OurFutureFaces",
42 | "FamilyBand_Again",
43 | "MusicDelta_Country1",
44 | "MusicDelta_Country2",
45 | "MusicDelta_Gospel",
46 | "Mozart_DiesBildnis",
47 | "MusicDelta_Pachelbel",
48 | "MusicDelta_InTheHalloftheMountainKing",
49 | "Wolf_DieBekherte",
50 | "Mozart_BesterJungling",
51 | "MusicDelta_GriegTrolltog",
52 | "MatthewEntwistle_FairerHopes",
53 | "JoelHelander_Definition",
54 | "MatthewEntwistle_TheFlaxenField",
55 | "MatthewEntwistle_TheArch",
56 | "MatthewEntwistle_ImpressionsOfSaturn",
57 | "Schubert_Erstarrung",
58 | "MatthewEntwistle_Lontano",
59 | "Handel_TornamiAVagheggiar",
60 | "MichaelKropf_AllGoodThings",
61 | "JoelHelander_IntheAtticBedroom",
62 | "JoelHelander_ExcessiveResistancetoChange",
63 | "BigTroubles_Phantom",
64 | "MusicDelta_Reggae",
65 | "TheDistricts_Vermont",
66 | "Meaxic_TakeAStep",
67 | "MusicDelta_Zeppelin",
68 | "Creepoid_OldTree",
69 | "AvaLuna_Waterduct",
70 | "TheSoSoGlos_Emergency",
71 | "MusicDelta_80sRock",
72 | "MusicDelta_Punk",
73 | "MusicDelta_Rock",
74 | "HopAlong_SisterCities",
75 | "MusicDelta_Rockabilly",
76 | "MusicDelta_Hendrix",
77 | "Meaxic_YouListen",
78 | "MusicDelta_ChineseHenan",
79 | "Phoenix_ScotchMorris",
80 | "Phoenix_BrokenPledgeChicagoReel",
81 | "MusicDelta_ChineseYaoZu",
82 | "MusicDelta_ChineseJiangNan",
83 | "Phoenix_ColliersDaughter",
84 | "EthanHein_1930sSynthAndUprightBass",
85 | "ChrisJacoby_PigsFoot",
86 | "LizNelson_ImComingHome",
87 | "Phoenix_ElzicsFarewell",
88 | "Phoenix_SeanCaughlinsTheScartaglen",
89 | "Phoenix_LarkOnTheStrandDrummondCastle",
90 | "ChrisJacoby_BoothShotLincoln",
91 | "MusicDelta_ChineseChaoZhou",
92 | "AimeeNorwich_Flying",
93 | "MusicDelta_ChineseXinJing",
94 | "MusicDelta_SwingJazz",
95 | "CroqueMadame_Pilot",
96 | "MusicDelta_BebopJazz",
97 | "MusicDelta_LatinJazz",
98 | "CroqueMadame_Oil",
99 | "MatthewEntwistle_DontYouEver",
100 | "MusicDelta_FunkJazz",
101 | "MusicDelta_FusionJazz",
102 | "MusicDelta_CoolJazz",
103 | "StevenClark_Bounty",
104 | "MusicDelta_Disco",
105 | "Snowmine_Curfews",
106 | "NightPanther_Fire",
107 | "SweetLights_YouLetMeDown",
108 | "DreamersOfTheGhetto_HeavyLove",
109 | "HeladoNegro_MitadDelMundo",
110 | "MusicDelta_Britpop"
111 | ]
112 | }
--------------------------------------------------------------------------------
/src/contour_classification/melody_trackids_orch.json:
--------------------------------------------------------------------------------
1 | {
2 | "tracks": [
3 | "Beethoven-S3-I-ex1",
4 | "Beethoven-S3-I-ex2",
5 | "Beethoven-S3-I-ex3",
6 | "Beethoven-S3-I-ex5",
7 | "Beethoven-S3-I-ex6",
8 | "Beethoven-S5-I-ex1",
9 | "Beethoven-S5-II-ex1",
10 | "Beethoven-S5-II-ex2",
11 | "Beethoven-S5-II-ex3",
12 | "Beethoven-S7-II-ex2",
13 | "Beethoven-S9-II-ex1",
14 | "Beethoven-S9-II-ex2",
15 | "Beethoven-S9-II-ex3",
16 | "Brahms-HungarianDance-n5-ex1",
17 | "Brahms-S3-III-ex1",
18 | "Brahms-S3-III-ex2",
19 | "Brahms-S3-III-ex3",
20 | "Dvorak-S9-IV-ex1",
21 | "Dvorak-S9-IV-ex3",
22 | "Dvorak-S9-IV-ex4",
23 | "Dvorak-S9-IV-ex5",
24 | "Grieg-PeerGynt-HallMountainKing-ex1",
25 | "Grieg-PeerGynt-MorningMood-ex1",
26 | "Grieg-PeerGynt-MorningMood-ex2",
27 | "Haydn-S94-Andante-ex2",
28 | "Haydn-S94-Menuet-ex1",
29 | "Haydn-S94-Menuet-ex2",
30 | "Holst-ThePlanets-Jupiter-ex1",
31 | "Holst-ThePlanets-Jupiter-ex2",
32 | "Holst-ThePlanets-Jupiter-ex3",
33 | "Holst-ThePlanets-Jupiter-ex4",
34 | "Musorgski-Ravel-PicturesExhibition-ex10",
35 | "Musorgski-Ravel-PicturesExhibition-ex11",
36 | "Musorgski-Ravel-PicturesExhibition-ex4",
37 | "Musorgski-Ravel-PicturesExhibition-ex5",
38 | "Musorgski-Ravel-PicturesExhibition-ex6",
39 | "Musorgski-Ravel-PicturesExhibition-ex7",
40 | "Musorgski-Ravel-PicturesExhibition-ex8",
41 | "Musorgski-Ravel-PicturesExhibition-Promenade1-ex1",
42 | "Musorgski-Ravel-PicturesExhibition-Promenade1-ex2",
43 | "Profofiev-Romeo&Juliet-DanceKnights-ex1",
44 | "Profofiev-Romeo&Juliet-DanceKnights-ex2",
45 | "Ravel-Bolero-ex1",
46 | "Ravel-Bolero-ex2",
47 | "Ravel-Bolero-ex3",
48 | "Rimski-Korsakov-Scheherazade-Kalender-ex1",
49 | "Rimski-Korsakov-Scheherazade-Kalender-ex2",
50 | "Rimski-Korsakov-Scheherazade-Kalender-ex3",
51 | "Rimski-Korsakov-Scheherazade-Sea-SinbadShip-ex1",
52 | "Rimski-Korsakov-Scheherazade-Sea-SinbadShip-ex2",
53 | "Rimski-Korsakov-Scheherazade-Sea-SinbadShip-ex5",
54 | "Rimski-Korsakov-Scheherazade-YoungPrincePrincess-ex1",
55 | "Rimski-Korsakov-Scheherazade-YoungPrincePrincess-ex2",
56 | "Rimski-Korsakov-Scheherazade-YoungPrincePrincess-ex3",
57 | "Rimski-Korsakov-Scheherazade-YoungPrincePrincess-ex4",
58 | "Schubert-S8-II-ex2",
59 | "Smetana-MaVlast-Vltava-ex1",
60 | "Smetana-MaVlast-Vltava-ex4",
61 | "Strauss-BlueDanube-ex1",
62 | "Strauss-BlueDanube-ex2",
63 | "Strauss-BlueDanube-ex3",
64 | "Tchaikovsky-SwanLake-Scene-ex1",
65 | "Tchaikovsky-SwanLake-Scene-ex2",
66 | "Wagner-Tannhauser-Act2-ex2"
67 | ]
68 | }
--------------------------------------------------------------------------------
/src/contour_classification/mv_gaussian.py:
--------------------------------------------------------------------------------
1 | """ Functions for doing scoring based on multivariate Gaussian as in Melodia
2 | """
3 | import numpy as np
4 | from scipy.stats import boxcox
5 | from scipy.stats import multivariate_normal
6 | from sklearn import metrics
7 |
8 |
9 | def transform_features(x_train, x_test):
10 | """ Transform features using a boxcox transform. Remove vibrato features.
11 |     Computes the optimal value of lambda on the training set and applies this
12 | lambda to the testing set.
13 |
14 | Parameters
15 | ----------
16 | x_train : np.array [n_samples, n_features]
17 | Untransformed training features.
18 | x_test : np.array [n_samples, n_features]
19 | Untransformed testing features.
20 |
21 | Returns
22 | -------
23 | x_train_boxcox : np.array [n_samples, n_features_trans]
24 | Transformed training features.
25 | x_test_boxcox : np.array [n_samples, n_features_trans]
26 | Transformed testing features.
27 | """
28 | x_train = x_train[:, 0:6]
29 | x_test = x_test[:, 0:6]
30 |
31 | _, n_feats = x_train.shape
32 |
33 | x_train_boxcox = np.zeros(x_train.shape)
34 | lmbda_opt = np.zeros((n_feats,))
35 |
36 | eps = 1.0 # shift features away from zero
37 | for i in range(n_feats):
38 | x_train_boxcox[:, i], lmbda_opt[i] = boxcox(x_train[:, i] + eps)
39 |
40 | x_test_boxcox = np.zeros(x_test.shape)
41 | for i in range(n_feats):
42 | x_test_boxcox[:, i] = boxcox(x_test[:, i] + eps, lmbda=lmbda_opt[i])
43 |
44 | return x_train_boxcox, x_test_boxcox
45 |
46 |
47 | def fit_gaussians(x_train_boxcox, y_train):
48 | """ Fit class-dependent multivariate gaussians on the training set.
49 |
50 | Parameters
51 | ----------
52 | x_train_boxcox : np.array [n_samples, n_features_trans]
53 | Transformed training features.
54 | y_train : np.array [n_samples]
55 | Training labels.
56 |
57 | Returns
58 | -------
59 | rv_pos : multivariate normal
60 | multivariate normal for melody class
61 | rv_neg : multivariate normal
62 | multivariate normal for non-melody class
63 | """
64 | pos_idx = np.where(y_train == 1)[0]
65 | mu_pos = np.mean(x_train_boxcox[pos_idx, :], axis=0)
66 | cov_pos = np.cov(x_train_boxcox[pos_idx, :], rowvar=0)
67 |
68 | neg_idx = np.where(y_train == 0)[0]
69 | mu_neg = np.mean(x_train_boxcox[neg_idx, :], axis=0)
70 | cov_neg = np.cov(x_train_boxcox[neg_idx, :], rowvar=0)
71 | rv_pos = multivariate_normal(mean=mu_pos, cov=cov_pos, allow_singular=True)
72 | rv_neg = multivariate_normal(mean=mu_neg, cov=cov_neg, allow_singular=True)
73 | return rv_pos, rv_neg
74 |
75 |
76 | def melodiness(sample, rv_pos, rv_neg):
77 | """ Compute melodiness score for an example given trained distributions.
78 |
79 | Parameters
80 | ----------
81 | sample : np.array [n_feats]
82 | Instance of transformed data.
83 | rv_pos : multivariate normal
84 | multivariate normal for melody class
85 | rv_neg : multivariate normal
86 | multivariate normal for non-melody class
87 |
88 | Returns
89 | -------
90 | melodiness: float
91 |             Likelihood ratio in [0, inf); values >= 1 are assigned to the melody class.
92 | """
93 | return rv_pos.pdf(sample)/rv_neg.pdf(sample)
94 |
95 |
96 | def compute_all_melodiness(x_train_boxcox, x_test_boxcox, rv_pos, rv_neg):
97 | """ Compute melodiness for all training and test examples.
98 |
99 | Parameters
100 | ----------
101 | x_train_boxcox : np.array [n_samples, n_features_trans]
102 | Transformed training features.
103 | x_test_boxcox : np.array [n_samples, n_features_trans]
104 | Transformed testing features.
105 | rv_pos : multivariate normal
106 | multivariate normal for melody class
107 | rv_neg : multivariate normal
108 | multivariate normal for non-melody class
109 |
110 | Returns
111 | -------
112 | m_train : np.array [n_samples]
113 | melodiness scores for training set
114 | m_test : np.array [n_samples]
115 | melodiness scores for testing set
116 | """
117 | n_train = x_train_boxcox.shape[0]
118 | n_test = x_test_boxcox.shape[0]
119 |
120 | m_train = np.zeros((n_train, ))
121 | m_test = np.zeros((n_test, ))
122 |
123 | for i, sample in enumerate(x_train_boxcox):
124 | m_train[i] = melodiness(sample, rv_pos, rv_neg)
125 |
126 | for i, sample in enumerate(x_test_boxcox):
127 | m_test[i] = melodiness(sample, rv_pos, rv_neg)
128 |
129 | return m_train, m_test
130 |
131 |
132 | def melodiness_metrics(m_train, m_test, y_train, y_test):
133 | """ Compute metrics on melodiness score
134 |
135 | Parameters
136 | ----------
137 | m_train : np.array [n_samples]
138 | melodiness scores for training set
139 | m_test : np.array [n_samples]
140 | melodiness scores for testing set
141 | y_train : np.array [n_samples]
142 | Training labels.
143 | y_test : np.array [n_samples]
144 | Testing labels.
145 |
146 | Returns
147 | -------
148 | melodiness_scores : dict
149 |         melodiness metrics for the training and test sets
150 | """
151 | m_bin_train = 1*(m_train >= 1)
152 | m_bin_test = 1*(m_test >= 1)
153 |
154 | train_scores = {}
155 | test_scores = {}
156 |
157 | train_scores['accuracy'] = metrics.accuracy_score(y_train, m_bin_train)
158 | test_scores['accuracy'] = metrics.accuracy_score(y_test, m_bin_test)
159 |
160 | train_scores['mcc'] = metrics.matthews_corrcoef(y_train, m_bin_train)
161 | test_scores['mcc'] = metrics.matthews_corrcoef(y_test, m_bin_test)
162 |
163 | (p, r, f, s) = metrics.precision_recall_fscore_support(y_train,
164 | m_bin_train)
165 | train_scores['precision'] = p
166 | train_scores['recall'] = r
167 | train_scores['f1'] = f
168 | train_scores['support'] = s
169 |
170 | (p, r, f, s) = metrics.precision_recall_fscore_support(y_test,
171 | m_bin_test)
172 | test_scores['precision'] = p
173 | test_scores['recall'] = r
174 | test_scores['f1'] = f
175 | test_scores['support'] = s
176 |
177 | train_scores['confusion matrix'] = \
178 | metrics.confusion_matrix(y_train, m_bin_train, labels=[0, 1])
179 | test_scores['confusion matrix'] = \
180 | metrics.confusion_matrix(y_test, m_bin_test, labels=[0, 1])
181 |
182 | train_scores['auc score'] = \
183 | metrics.roc_auc_score(y_train, m_train + 1, average='weighted')
184 | test_scores['auc score'] = \
185 | metrics.roc_auc_score(y_test, m_test + 1, average='weighted')
186 |
187 | melodiness_scores = {'train': train_scores, 'test': test_scores}
188 |
189 | return melodiness_scores
190 |
191 |
--------------------------------------------------------------------------------
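
A minimal sketch of how the functions in mv_gaussian.py fit together, using synthetic features and labels (illustrative only; in the experiments the features come from the contour extraction stage):

import numpy as np
import mv_gaussian as mv

rng = np.random.RandomState(0)
x_train = np.abs(rng.randn(200, 8))   # at least 6 feature columns; only the first 6 are kept
x_test = np.abs(rng.randn(50, 8))
y_train = rng.randint(0, 2, 200)      # 1 = melody contour, 0 = non-melody contour
y_test = rng.randint(0, 2, 50)

# Box-Cox transform fitted on the training set and applied to the test set
x_train_bc, x_test_bc = mv.transform_features(x_train, x_test)

# Class-conditional Gaussians and likelihood-ratio "melodiness" scores
rv_pos, rv_neg = mv.fit_gaussians(x_train_bc, y_train)
m_train, m_test = mv.compute_all_melodiness(x_train_bc, x_test_bc, rv_pos, rv_neg)

scores = mv.melodiness_metrics(m_train, m_test, y_train, y_test)
print(scores['test']['accuracy'])
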
/src/contour_classification/orch_groups.json:
--------------------------------------------------------------------------------
1 | {
2 | "Beethoven-S3-I-ex1": "1",
3 | "Beethoven-S3-I-ex2": "1",
4 | "Beethoven-S3-I-ex3": "1",
5 | "Beethoven-S3-I-ex5": "1",
6 | "Beethoven-S3-I-ex6": "1",
7 | "Beethoven-S5-I-ex1": "2",
8 | "Beethoven-S5-II-ex1": "3",
9 | "Beethoven-S5-II-ex2": "3",
10 | "Beethoven-S5-II-ex3": "3",
11 | "Beethoven-S7-II-ex2": "4",
12 | "Beethoven-S9-II-ex1": "5",
13 | "Beethoven-S9-II-ex2": "5",
14 | "Beethoven-S9-II-ex3": "5",
15 | "Brahms-HungarianDance-n5-ex1": "6",
16 | "Brahms-S3-III-ex1": "7",
17 | "Brahms-S3-III-ex2": "7",
18 | "Brahms-S3-III-ex3": "7",
19 | "Dvorak-S9-IV-ex1": "8",
20 | "Dvorak-S9-IV-ex3": "8",
21 | "Dvorak-S9-IV-ex4": "8",
22 | "Dvorak-S9-IV-ex5": "8",
23 | "Grieg-PeerGynt-HallMountainKing-ex1": "9",
24 | "Grieg-PeerGynt-MorningMood-ex1": "10",
25 | "Grieg-PeerGynt-MorningMood-ex2": "10",
26 | "Haydn-S94-Andante-ex2": "11",
27 | "Haydn-S94-Menuet-ex1": "12",
28 | "Haydn-S94-Menuet-ex2": "12",
29 | "Holst-ThePlanets-Jupiter-ex1": "13",
30 | "Holst-ThePlanets-Jupiter-ex2": "13",
31 | "Holst-ThePlanets-Jupiter-ex3": "13",
32 | "Holst-ThePlanets-Jupiter-ex4": "13",
33 | "Musorgski-Ravel-PicturesExhibition-ex10": "30",
34 | "Musorgski-Ravel-PicturesExhibition-ex11": "31",
35 | "Musorgski-Ravel-PicturesExhibition-ex4": "25",
36 | "Musorgski-Ravel-PicturesExhibition-ex5": "26",
37 | "Musorgski-Ravel-PicturesExhibition-ex6": "27",
38 | "Musorgski-Ravel-PicturesExhibition-ex7": "28",
39 | "Musorgski-Ravel-PicturesExhibition-ex8": "29",
40 | "Musorgski-Ravel-PicturesExhibition-Promenade1-ex1": "14",
41 | "Musorgski-Ravel-PicturesExhibition-Promenade1-ex2": "14",
42 | "Profofiev-Romeo&Juliet-DanceKnights-ex1": "15",
43 | "Profofiev-Romeo&Juliet-DanceKnights-ex2": "15",
44 | "Ravel-Bolero-ex1": "16",
45 | "Ravel-Bolero-ex2": "16",
46 | "Ravel-Bolero-ex3": "16",
47 | "Rimski-Korsakov-Scheherazade-Kalender-ex1": "17",
48 | "Rimski-Korsakov-Scheherazade-Kalender-ex2": "17",
49 | "Rimski-Korsakov-Scheherazade-Kalender-ex3": "17",
50 | "Rimski-Korsakov-Scheherazade-Sea-SinbadShip-ex1": "18",
51 | "Rimski-Korsakov-Scheherazade-Sea-SinbadShip-ex2": "18",
52 | "Rimski-Korsakov-Scheherazade-Sea-SinbadShip-ex5": "18",
53 | "Rimski-Korsakov-Scheherazade-YoungPrincePrincess-ex1": "19",
54 | "Rimski-Korsakov-Scheherazade-YoungPrincePrincess-ex2": "19",
55 | "Rimski-Korsakov-Scheherazade-YoungPrincePrincess-ex3": "19",
56 | "Rimski-Korsakov-Scheherazade-YoungPrincePrincess-ex4": "19",
57 | "Schubert-S8-II-ex2": "20",
58 | "Smetana-MaVlast-Vltava-ex1": "21",
59 | "Smetana-MaVlast-Vltava-ex4": "21",
60 | "Strauss-BlueDanube-ex1": "22",
61 | "Strauss-BlueDanube-ex2": "22",
62 | "Strauss-BlueDanube-ex3": "22",
63 | "Tchaikovsky-SwanLake-Scene-ex1": "23",
64 | "Tchaikovsky-SwanLake-Scene-ex2": "23",
65 | "Wagner-Tannhauser-Act2-ex2": "24"
66 | }
--------------------------------------------------------------------------------
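
Each excerpt above is mapped to a numeric group id; excerpts that share an id belong together (typically the same work or movement). A small sketch of how this mapping could be used to keep excerpts from the same group on the same side of a train/test split (the experiment scripts below use their own split logic, so this is illustrative only):

import json
from collections import defaultdict

with open('orch_groups.json', 'r') as fhandle:
    track_groups = json.load(fhandle)

# Invert the mapping: group id -> list of excerpts in that group
groups = defaultdict(list)
for track_id, group_id in track_groups.items():
    groups[group_id].append(track_id)

# Fill the test set with whole groups until roughly 25% of the excerpts are covered
test_tracks = []
for group_id in sorted(groups, key=int):
    if len(test_tracks) >= 0.25 * len(track_groups):
        break
    test_tracks.extend(groups[group_id])
train_tracks = sorted(set(track_groups) - set(test_tracks))
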
/src/contour_classification/run_contour_training_melody_extraction.py:
--------------------------------------------------------------------------------
1 | import contour_utils as cc
2 | import experiment_utils as eu
3 | import mv_gaussian as mv
4 | import clf_utils as cu
5 | import generate_melody as gm
6 | from sklearn.ensemble import RandomForestClassifier as RFC
7 | from sklearn.cross_validation import KFold
8 | from sklearn import cross_validation
9 | from sklearn import metrics
10 | import sklearn
11 | import pandas as pd
12 | import numpy as np
13 | import random
14 | import glob
15 | import os
16 | import json
17 | import matplotlib.pyplot as plt
18 | import seaborn as sns
19 | sns.set()
20 | from scipy.stats import boxcox
21 |
22 | from contour_utils import getFeatureInfo
23 |
24 |
25 |
26 | # 2
27 |
28 | plt.ion()
29 |
30 |
31 | mel_type=2
32 |
33 | reload(eu)
34 |
35 | scores = []
36 | scores_nm = []
37 |
38 | # EDIT: For MedleyDB
39 | #with open('melody_trackids.json', 'r') as fhandle:
40 | # track_list = json.load(fhandle)
41 |
42 | # For Orchset
43 | with open('melody_trackids_orch.json', 'r') as fhandle:
44 | track_list = json.load(fhandle)
45 |
46 |
47 | track_list = track_list['tracks']
48 |
49 | # mdb_files, splitter = eu.create_splits(test_size=0.15)
50 |
51 | dset_contour_dict, dset_annot_dict = \
52 | eu.compute_all_overlaps(track_list, meltype=mel_type)
53 |
54 | mdb_files, splitter = eu.create_splits(test_size=0.25)
55 |
56 | for i in range(4):
57 | for train, test in splitter:
58 | random.shuffle(train)
59 | n_train = len(train) - (len(test)/2)
60 | train_tracks = mdb_files[train[:n_train]]
61 | valid_tracks = mdb_files[train[n_train:]]
62 | test_tracks = mdb_files[test]
63 |
64 | train_contour_dict = {k: dset_contour_dict[k] for k in train_tracks}
65 | valid_contour_dict = {k: dset_contour_dict[k] for k in valid_tracks}
66 | test_contour_dict = {k: dset_contour_dict[k] for k in test_tracks}
67 |
68 | train_annot_dict = {k: dset_annot_dict[k] for k in train_tracks}
69 | valid_annot_dict = {k: dset_annot_dict[k] for k in valid_tracks}
70 | test_annot_dict = {k: dset_annot_dict[k] for k in test_tracks}
71 |
72 | reload(eu)
73 | olap_stats, zero_olap_stats = eu.olap_stats(train_contour_dict)
74 | OLAP_THRESH = 0.5
75 | train_contour_dict, valid_contour_dict, test_contour_dict = \
76 | eu.label_all_contours(train_contour_dict, valid_contour_dict, \
77 | test_contour_dict, olap_thresh=OLAP_THRESH)
78 | len(train_contour_dict)
79 |
80 | reload(cc)
81 |
82 | anyContourDataFrame = dset_contour_dict[dset_contour_dict.keys()[0]]
83 |
84 |
85 | feats, idxStartFeatures, idxEndFeatures = getFeatureInfo(anyContourDataFrame)
86 |
87 | X_train, Y_train = cc.pd_to_sklearn(train_contour_dict,idxStartFeatures,idxEndFeatures)
88 | X_valid, Y_valid = cc.pd_to_sklearn(valid_contour_dict,idxStartFeatures,idxEndFeatures)
89 | X_test, Y_test = cc.pd_to_sklearn(test_contour_dict,idxStartFeatures,idxEndFeatures)
90 | np.max(X_train,0)
91 |
92 |
93 | # x,y = cc.pd_to_sklearn(train_contour_dict['AClassicEducation_NightOwl'])
94 | # train_contour_dict['AClassicEducation_NightOwl']
95 | # contour_data = train_contour_dict['AClassicEducation_NightOwl']
96 | # x[68]
97 | # train_contour_dict['AClassicEducation_NightOwl'].loc[68,:]
98 | #
99 | # X_train_boxcox, X_test_boxcox = mv.transform_features(X_train, X_test)
100 | # rv_pos, rv_neg = mv.fit_gaussians(X_train_boxcox, Y_train)
101 | #
102 | # M_train, M_test = mv.compute_all_melodiness(X_train_boxcox, X_test_boxcox, rv_pos, rv_neg)
103 | #
104 | # reload(mv)
105 | # reload(eu)
106 | # melodiness_scores = mv.melodiness_metrics(M_train, M_test, Y_train, Y_test)
107 | # best_thresh, max_fscore,vals = eu.get_best_threshold(Y_test, M_test)
108 | # print "best threshold = %s" % best_thresh
109 | # print "maximum achieved f score = %s" % max_fscore
110 | # print melodiness_scores
111 |
112 | reload(cu)
113 | best_depth, max_cv_accuracy, plot_dat = cu.cross_val_sweep(X_train, Y_train,plot = False)
114 | print best_depth
115 | print max_cv_accuracy
116 |
117 | df = pd.DataFrame(np.array(plot_dat).transpose(), columns=['max depth', 'accuracy', 'std'])
118 |
119 |
120 | clf = cu.train_clf(X_train, Y_train, best_depth)
121 |
122 | reload(cu)
123 | P_train, P_valid, P_test = cu.clf_predictions(X_train, X_valid, X_test, clf)
124 | clf_scores = cu.clf_metrics(P_train, P_test, Y_train, Y_test)
125 | print clf_scores['test']
126 |
127 |
128 | reload(eu)
129 | best_thresh, max_fscore, plot_data = eu.get_best_threshold(Y_valid, P_valid)
130 | print "besth threshold = %s" % best_thresh
131 | print "maximum achieved f score = %s" % max_fscore
132 |
133 |
134 | for key in test_contour_dict.keys():
135 | test_contour_dict[key] = eu.contour_probs(clf, test_contour_dict[key],idxStartFeatures,idxEndFeatures)
136 |
137 |
138 | reload(gm)
139 | mel_output_dict = {}
140 | for i, key in enumerate(test_contour_dict.keys()):
141 | print key
142 | mel_output_dict[key] = gm.melody_from_clf(test_contour_dict[key], prob_thresh=best_thresh)
143 |
144 |
145 |
146 |
147 |
148 | reload(gm)
149 |
150 | mel_scores = gm.score_melodies(mel_output_dict, test_annot_dict)
151 |
152 |
153 | overall_scores = \
154 | pd.DataFrame(columns=['VR', 'VFA', 'RPA', 'RCA', 'OA'],
155 | index=mel_scores.keys())
156 | overall_scores['VR'] = \
157 | [mel_scores[key]['Voicing Recall'] for key in mel_scores.keys()]
158 | overall_scores['VFA'] = \
159 | [mel_scores[key]['Voicing False Alarm'] for key in mel_scores.keys()]
160 | overall_scores['RPA'] = \
161 | [mel_scores[key]['Raw Pitch Accuracy'] for key in mel_scores.keys()]
162 | overall_scores['RCA'] = \
163 | [mel_scores[key]['Raw Chroma Accuracy'] for key in mel_scores.keys()]
164 | overall_scores['OA'] = \
165 | [mel_scores[key]['Overall Accuracy'] for key in mel_scores.keys()]
166 |
167 | scores.append(overall_scores)
168 |
169 | print "Overall Scores"
170 |         print overall_scores.describe()
171 |
172 |
173 |
174 | # Tests with multilines
175 |
176 | #
177 | # from sys import path
178 | # currpath = os.getcwd()
179 | # from sys import path
180 | # path.append('../melody-SFContour')
181 | # path.append('../')
182 | # os.chdir("../melody-SFContour")
183 | # import optparse
184 | # parser = optparse.OptionParser("")
185 | # (options, args) = parser.parse_args([])
186 | # options.Pchangevx = 1
187 | # options.wNoteTrans = 1
188 | # options.wContourTrans = 1
189 | # options.wInstrTrans = 1
190 | # options.scale = 1
191 | # options.scaleSurr = 1
192 | # options.scalePan = 0
193 | # options.hopsizeInSamples = 256
194 | # options.hopsizeInSamples = 441
195 | # import generate_melody_ml as gm2
196 | # reload(gm2)
197 | # mel_output_dict_nm = {}
198 | # for i, key in enumerate(test_contour_dict.keys()):
199 | # print key
200 | # mel_output_dict_nm[key] = gm2.melody_from_clf(test_contour_dict[key], prob_thresh=best_thresh,options=options)
201 | # os.chdir(currpath)
202 | # print os.getcwd()
203 | # os.chdir("../contour_classification")
204 | #
205 | # import generate_melody as gm
206 | # reload(gm)
207 | #
208 | # # key="Beethoven-S3-I-ex2"
209 | # # df = mel_output_dict[key]
210 | # #
211 | # # df_pos = df[df > 0]
212 | # # df_zero = df[df == 0]
213 | # # df_neg = df[df < 0]
214 | # # plt.plot(df_pos.index, df_pos.values, ',g')
215 | # # plt.plot(df_zero.index, df_zero.values, ',y')
216 | # # plt.plot(df_neg.index, -1.0*df_neg.values, ',r')
217 | # # plt.show()
218 | # #
219 | # # df.index
220 | #
221 | # #df2 = mel_output_dict_nm[key]
222 | # #times, pitches = df2
223 | # #pitches[:,0]
224 | # #df_zero = df[df == 0]
225 | # #df_neg = df[df < 0]
226 | # #plt.plot(df_pos, df_pos, ',g')
227 | # #plt.plot(df_zero, df_zero, ',y')
228 | # #plt.plot(df_neg, -1.0*df_neg, ',r')
229 | # #plt.show()
230 | #
231 | # mel_scores_nm = gm.score_melodies(mel_output_dict_nm, test_annot_dict)
232 | #
233 | # overall_scores = \
234 | # pd.DataFrame(columns=['VR', 'VFA', 'RPA', 'RCA', 'OA'],
235 | # index=mel_scores_nm.keys())
236 | # overall_scores['VR'] = \
237 | # [mel_scores_nm[key]['Voicing Recall'] for key in mel_scores_nm.keys()]
238 | # overall_scores['VFA'] = \
239 | # [mel_scores_nm[key]['Voicing False Alarm'] for key in mel_scores_nm.keys()]
240 | # overall_scores['RPA'] = \
241 | # [mel_scores_nm[key]['Raw Pitch Accuracy'] for key in mel_scores_nm.keys()]
242 | # overall_scores['RCA'] = \
243 | # [mel_scores_nm[key]['Raw Chroma Accuracy'] for key in mel_scores_nm.keys()]
244 | # overall_scores['OA'] = \
245 | # [mel_scores_nm[key]['Overall Accuracy'] for key in mel_scores_nm.keys()]
246 | #
247 | # print "Overall Scores NM"
248 | # overall_scores.describe()
249 | # scores_nm.append(overall_scores)
250 |
251 |
252 | print "End"
253 |
254 |
255 | allscores = scores[0]
256 | for i in range(1,len(scores),1):
257 | allscores = allscores.append(scores[i])
258 | print i
259 | print (len(allscores))
260 |
261 |
262 | allscores.to_csv('allscoresNoTonal.csv')
263 | from pickle import dump
264 | picklefile = 'allscores'
265 | with open(picklefile, 'wb') as handle:
266 | dump(allscores, handle)
267 | print allscores.describe()
268 |
269 | np.argsort(clf.feature_importances_)
270 | np.sum(clf.feature_importances_)
271 | [feats[k] for k in np.argsort(clf.feature_importances_)]
272 |
273 |
274 | #
275 | # allscores_nm = scores_nm[0]
276 | # for i in range(1,len(scores_nm),1):
277 | # allscores_nm = allscores_nm.append(scores_nm[i])
278 | # print i
279 | # print (len(allscores_nm))
280 | #
281 | # allscores_nm.describe()
282 | #
283 | # from pickle import dump
284 | # picklefile = 'allscores_nm'
285 | # with open(picklefile, 'wb') as handle:
286 | # dump(allscores_nm, handle)
287 | #
288 | #
289 | #
290 | #
291 | # picklefile = 'allscores'
292 | #
293 | # from pickle import load
294 | # with open(picklefile, 'rb') as handle:
295 | # b = load(handle)
296 |
--------------------------------------------------------------------------------
/src/contour_classification/run_experiments.py:
--------------------------------------------------------------------------------
1 | """ Functions to run full experiment """
2 | import contour_utils as cc
3 | import experiment_utils as eu
4 | import mv_gaussian as mv
5 | import clf_utils as cu
6 | import generate_melody as gm
7 |
8 | import pandas as pd
9 | import numpy as np
10 | import random
11 | import json
12 | import os
13 | from contour_utils import getFeatureInfo
14 |
15 |
16 | from sklearn.externals import joblib
17 |
18 | def run_glassceiling_experiment(meltype):
19 |
20 | def get_fpaths(trackid, meltype):
21 | contour_suffix = \
22 | "MIX_vamp_melodia-contours_melodia-contours_contoursall.csv"
23 | contours_path = "melodia_contours"
24 |
25 | contour_suffix = "MIX.pitch.ctr"
26 | contours_path = "/Users/jjb/Documents/PhD/data/MedleyDB/Conv_mu-1_G-0_LHSF-0_pC-27.56_pDTh-1.2_pFTh-0.9_tC-75_mD-100_vxTol-1_Pchvx-1_wNoteTrans-1_wContourTrans-1_wInstrTrans-5_scale-1_-_scaleSurr-1"
27 |
28 | annot_suffix = "MELODY%s.csv" % str(meltype)
29 | mel_dir = "MELODY%s" % str(meltype)
30 | annot_path = os.path.join(os.environ['MEDLEYDB_PATH'], 'Annotations',
31 | 'Melody_Annotations', mel_dir)
32 |
33 | contour_fname = "%s_%s" % (track, contour_suffix)
34 | contour_fpath = os.path.join(contours_path, contour_fname)
35 | annot_fname = "%s_%s" % (track, annot_suffix)
36 | annot_fpath = os.path.join(annot_path, annot_fname)
37 |
38 |
39 | # For MEDLEY with SIMM -------------------------
40 | contour_suffix = "MIX.pitch.ctr"
41 | contours_path = "/Users/jjb/Google Drive/PhD/conferences/ISMIR2016/SIMM-PC/MedleyDB/C4-Contours/Conv_mu-1_G-0_LHSF-0_pC-27.56_pDTh-0.9_pFTh-0.9_tC-50_mD-100"
42 |
43 | annot_suffix = "MELODY%s.csv" % str(meltype)
44 | mel_dir = "MELODY%s" % str(meltype)
45 | annot_path = os.path.join(os.environ['MEDLEYDB_PATH'], 'Annotations',
46 | 'Melody_Annotations', mel_dir)
47 |
48 | contour_fname = "%s_%s" % (track, contour_suffix)
49 | contour_fpath = os.path.join(contours_path, contour_fname)
50 | annot_fname = "%s_%s" % (track, annot_suffix)
51 | annot_fpath = os.path.join(annot_path, annot_fname)
52 |
53 |
54 |     # For ORCHSET with SIMM --------------------------
55 |
56 | contour_suffix = "pitch.ctr"
57 | contours_path = "/Users/jjb/Google Drive/PhD/conferences/ISMIR2016/SIMM-PC/Orchset/C4-Contours/Conv_mu-1_G-0_LHSF-0_pC-27.56_pDTh-1.3_pFTh-0.9_tC-50_mD-100"
58 | annot_suffix = "mel"
59 |
60 | annot_path = os.path.join('/Users/jjb/Google Drive/data/segments/excerpts/GT')
61 | contour_fname = "%s.%s" % (track, contour_suffix)
62 | contour_fpath = os.path.join(contours_path, contour_fname)
63 | annot_fname = "%s.%s" % (track, annot_suffix)
64 | annot_fpath = os.path.join(annot_path, annot_fname)
65 |
66 | # For ORCHSET with MELODIA (BIT)--------------------------
67 |
68 | annot_path = os.path.join('/Users/jjb/Google Drive/data/segments/excerpts/GT')
69 |
70 | contour_suffix = \
71 | "_vamp_melodia-contours_melodia-contours_contoursall.csv"
72 | contours_path = "/Users/jjb/Google Drive/PhD/conferences/ISMIR2016/SIMM-PC/Orchset/BIT"
73 | annot_suffix = "mel"
74 | contour_fname = "%s%s" % (track, contour_suffix)
75 | contour_fpath = os.path.join(contours_path, contour_fname)
76 | annot_fname = "%s.%s" % (track, annot_suffix)
77 | annot_fpath = os.path.join(annot_path, annot_fname)
78 |
79 |     # For ORCHSET with SIMM --------------------------
80 |
81 | contour_suffix = "pitch.ctr"
82 | contours_path = "/Users/jjb/Google Drive/PhD/conferences/ISMIR2016/SIMM-PC/Orchset/C4-Contours/Conv_mu-1_G-0_LHSF-0_pC-27.56_pDTh-0.9_pFTh-0.9_tC-50_mD-100"
83 | #contours_path = "/Users/jjb/Google Drive/PhD/Tests/Orchset/ScContours/"
84 |
85 | annot_suffix = "mel"
86 |
87 | annot_path = os.path.join('/Users/jjb/Google Drive/data/segments/excerpts/GT')
88 | contour_fname = "%s.%s" % (track, contour_suffix)
89 | contour_fpath = os.path.join(contours_path, contour_fname)
90 | annot_fname = "%s.%s" % (track, annot_suffix)
91 | annot_fpath = os.path.join(annot_path, annot_fname)
92 |
93 | # ----------------------------
94 |
95 | return contour_fpath, annot_fpath
96 |
97 | # Compute Overlap with Annotation MEDLEY
98 | # with open('melody_trackids.json', 'r') as fhandle:
99 | # track_list = json.load(fhandle)
100 |
101 |
102 | # EDIT Compute Overlap with Annotation Orchset
103 | with open('melody_trackids_orch.json', 'r') as fhandle:
104 | track_list = json.load(fhandle)
105 |
106 |
107 | track_list = track_list['tracks']
108 |
109 | overlap_results = {}
110 |
111 | for track in track_list:
112 | print track
113 | cfpath, afpath = get_fpaths(track, meltype=meltype)
114 | print cfpath
115 | print afpath
116 | overlap_results[track] = \
117 | cc.contour_glass_ceiling(cfpath, afpath)
118 |
119 | return overlap_results
120 |
121 |
122 |
123 | def run_experiments(mel_type, outdir, olaps='all', decode='viterbi'):
124 |
125 | if not os.path.exists(outdir):
126 | os.mkdir(outdir)
127 |
128 | # Compute Overlap with Annotation
129 | # For MEDLEYDB
130 | #with open('melody_trackids.json', 'r') as fhandle:
131 | # track_list = json.load(fhandle)
132 |
133 | # For Orchset
134 | with open('melody_trackids_orch.json', 'r') as fhandle:
135 | track_list = json.load(fhandle)
136 |
137 | track_list = track_list['tracks']
138 |
139 | dset_contour_dict, dset_annot_dict = \
140 | eu.compute_all_overlaps(track_list, meltype=mel_type)
141 |
142 | mdb_files, splitter = eu.create_splits(test_size=0.25)
143 |
144 | split_num = 1
145 |
146 | for train, test in splitter:
147 |
148 | print "="*80
149 | print "Processing split number %s" % split_num
150 | print "="*80
151 |
152 | outdir2 = os.path.join(outdir, 'splitnum_%s' % split_num)
153 | if not os.path.exists(outdir2):
154 | os.mkdir(outdir2)
155 | outdir2 = os.path.join(outdir2)
156 |
157 | split_num = split_num + 1
158 |
159 | random.shuffle(train)
160 | n_train = len(train) - (len(test)/2)
161 | train_tracks = mdb_files[train[:n_train]]
162 | valid_tracks = mdb_files[train[n_train:]]
163 | test_tracks = mdb_files[test]
164 |
165 | train_contour_dict = {k: dset_contour_dict[k] for k in train_tracks}
166 | valid_contour_dict = {k: dset_contour_dict[k] for k in valid_tracks}
167 | test_contour_dict = {k: dset_contour_dict[k] for k in test_tracks}
168 |
169 | #train_annot_dict = {k: dset_annot_dict[k] for k in train_tracks}
170 | valid_annot_dict = {k: dset_annot_dict[k] for k in valid_tracks}
171 | test_annot_dict = {k: dset_annot_dict[k] for k in test_tracks}
172 |
173 | anyContourDataFrame = dset_contour_dict[dset_contour_dict.keys()[0]]
174 | feats, idxStartFeatures, idxEndFeatures = getFeatureInfo(anyContourDataFrame)
175 |
176 | olap_stats, _ = eu.olap_stats(train_contour_dict)
177 |
178 | fpath = os.path.join(outdir2, 'olap_stats.csv')
179 | olap_stats.to_csv(fpath)
180 |
181 | if olaps == 'all':
182 | olap_list = np.arange(0, 1, 0.1)
183 | else:
184 | if mel_type == 1:
185 | olap_list = [0.5]
186 | else:
187 | olap_list = [0.4]
188 |
189 | for olap_thresh in olap_list:
190 | try:
191 | print '='*40
192 | print "overlap threshold = %s" % olap_thresh
193 | print '='*40
194 |
195 | outdir3 = os.path.join(outdir2, 'olap_%s' % olap_thresh)
196 | if not os.path.exists(outdir3):
197 | os.mkdir(outdir3)
198 | outdir3 = os.path.join(outdir3)
199 |
200 | print "computing labels"
201 | x_train, y_train, x_valid, y_valid, \
202 | x_test, y_test, test_contour_dict = \
203 | compute_labels(train_contour_dict, valid_contour_dict, \
204 | test_contour_dict, olap_thresh)
205 |
206 | print "training and scoring classifier"
207 | clf, best_thresh = classifier(x_train, y_train, x_valid, y_valid,
208 | x_test, y_test, outdir3)
209 |
210 | #print "computing melody output"
211 | #melody_output(clf, best_thresh, decode,
212 | # valid_contour_dict, valid_annot_dict,
213 | # test_contour_dict, test_annot_dict, outdir3, idxStartFeatures, idxEndFeatures)
214 |
215 | # EDIT
216 | #print "scoring with multivariate gaussian"
217 | #multivariate_gaussian(x_train, y_train, x_test, y_test, outdir3)
218 |             except Exception as exc:
219 |                 print "Error in run_experiments: %s" % exc
220 |
221 |
222 | def compute_labels(train_contour_dict, valid_contour_dict, \
223 | test_contour_dict, olap_thresh):
224 | """
225 | """
226 | # Compute Labels using Overlap Threshold
227 | train_contour_dict, valid_contour_dict, test_contour_dict = \
228 | eu.label_all_contours(train_contour_dict, valid_contour_dict, \
229 | test_contour_dict, olap_thresh=olap_thresh)
230 |
231 | x_train, y_train = cc.pd_to_sklearn(train_contour_dict)
232 | x_valid, y_valid = cc.pd_to_sklearn(valid_contour_dict)
233 | x_test, y_test = cc.pd_to_sklearn(test_contour_dict)
234 |
235 | return x_train, y_train, x_valid, y_valid, x_test, y_test, test_contour_dict
236 |
237 |
238 |
239 | def multivariate_gaussian(x_train, y_train, x_test, y_test, outdir):
240 | # Score with Multivariate Gaussian
241 |
242 | # Transform data using boxcox transform, and fit multivariate gaussians.
243 | x_train_boxcox, x_test_boxcox = mv.transform_features(x_train, x_test)
244 | rv_pos, rv_neg = mv.fit_gaussians(x_train_boxcox, y_train)
245 |
246 | # Compute melodiness scores on train and test set
247 | m_train, m_test = mv.compute_all_melodiness(x_train_boxcox, x_test_boxcox,
248 | rv_pos, rv_neg)
249 |
250 | # Compute various metrics based on melodiness scores.
251 | melodiness_scores = mv.melodiness_metrics(m_train, m_test, y_train, y_test)
252 | best_thresh, max_fscore, thresh_plot_data = \
253 | eu.get_best_threshold(y_test, m_test) # THIS SHOULD PROBABLY BE VALIDATION NUMBERS...
254 |
255 | # thresh_plot_data = pd.DataFrame(np.array(thresh_plot_data).transpose(),
256 | # columns=['recall', 'precision',
257 | # 'thresh', 'f1'])
258 | # fpath = os.path.join(outdir, 'thresh_plot_data.csv')
259 | # thresh_plot_data.to_csv(fpath)
260 |
261 | melodiness_scores = pd.DataFrame.from_dict(melodiness_scores)
262 | fpath = os.path.join(outdir, 'melodiness_scores.csv')
263 | melodiness_scores.to_csv(fpath)
264 |
265 | print "Melodiness best thresh = %s" % best_thresh
266 | print "Melodiness max f1 score = %s" % max_fscore
267 | print "overall melodiness scores:"
268 | print melodiness_scores
269 |
270 |
271 | def classifier(x_train, y_train, x_valid, y_valid, x_test, y_test, outdir):
272 | """ Train Classifier
273 | """
274 |
275 | # Cross Validation
276 | best_depth, _, cv_plot_data = cu.cross_val_sweep(x_train, y_train)
277 | print "Classifier best depth = %s" % best_depth
278 |
279 | cv_plot_data = pd.DataFrame(np.array(cv_plot_data).transpose(),
280 | columns=['max depth', 'accuracy', 'std'])
281 | fpath = os.path.join(outdir, 'cv_plot_data.csv')
282 | cv_plot_data.to_csv(fpath)
283 |
284 | # Training
285 | clf = cu.train_clf(x_train, y_train, best_depth)
286 |
287 | # Predict and Score
288 | p_train, p_valid, p_test = cu.clf_predictions(x_train, x_valid, x_test, clf)
289 | clf_scores = cu.clf_metrics(p_train, p_test, y_train, y_test)
290 | print "Classifier scores:"
291 | print clf_scores
292 |
293 | # Get threshold that maximizes F1 score
294 | best_thresh, max_fscore, thresh_plot_data = \
295 | eu.get_best_threshold(y_valid, p_valid)
296 |
297 | # thresh_plot_data = pd.DataFrame(np.array(thresh_plot_data).transpose(),
298 | # columns=['recall', 'precision',
299 | # 'thresh', 'f1'])
300 | # fpath = os.path.join(outdir, 'thresh_plot_data.csv')
301 | # thresh_plot_data.to_csv(fpath)
302 |
303 | clf_scores = pd.DataFrame.from_dict(clf_scores)
304 | fpath = os.path.join(outdir, 'classifier_scores.csv')
305 | clf_scores.to_csv(fpath)
306 |
307 | clf_outdir = os.path.join(outdir, 'classifier')
308 | if not os.path.exists(clf_outdir):
309 | os.mkdir(clf_outdir)
310 | clf_fpath = os.path.join(clf_outdir, 'rf_clf.pkl')
311 | joblib.dump(clf, clf_fpath)
312 |
313 | print "Classifier best threshold = %s" % best_thresh
314 | print "Classifier maximum f1 score = %s" % max_fscore
315 |
316 | return clf, best_thresh
317 |
318 |
319 | def melody_output(clf, best_thresh, decode,
320 | valid_contour_dict, valid_annot_dict,
321 | test_contour_dict, test_annot_dict, outdir,idxStartFeatures=0,idxEndFeatures=11):
322 | """ Generate Melody Output
323 | """
324 |
325 | # Add predicted melody probabilites to validation set contour data
326 | for key in valid_contour_dict.keys():
327 | valid_contour_dict[key] = eu.contour_probs(clf, valid_contour_dict[key],idxStartFeatures,idxEndFeatures)
328 |
329 | # Add predicted melody probabilites to test set contour data
330 | for key in test_contour_dict.keys():
331 | test_contour_dict[key] = eu.contour_probs(clf, test_contour_dict[key],idxStartFeatures,idxEndFeatures)
332 |
333 | meldir = os.path.join(outdir, 'melody_output')
334 | if not os.path.exists(meldir):
335 | os.mkdir(meldir)
336 | meldir = os.path.join(meldir)
337 |
338 | # Generate melody output using predictions
339 | print "Generating Validation Melodies"
340 | mel_valid_dict = {}
341 | for key in valid_contour_dict.keys():
342 | print key
343 | mel_valid_dict[key] = gm.melody_from_clf(valid_contour_dict[key],
344 | prob_thresh=best_thresh,
345 | method=decode)
346 | fpath = os.path.join(meldir, "%s_pred.csv" % key)
347 | mel_valid_dict[key].to_csv(fpath, header=False, index=True)
348 |
349 | # Score Melody Output
350 | mel_scores = gm.score_melodies(mel_valid_dict, valid_annot_dict)
351 |
352 | overall_scores = \
353 | pd.DataFrame(columns=['VR', 'VFA', 'RPA', 'RCA', 'OA'],
354 | index=mel_scores.keys())
355 | overall_scores['VR'] = \
356 | [mel_scores[key]['Voicing Recall'] for key in mel_scores.keys()]
357 | overall_scores['VFA'] = \
358 | [mel_scores[key]['Voicing False Alarm'] for key in mel_scores.keys()]
359 | overall_scores['RPA'] = \
360 | [mel_scores[key]['Raw Pitch Accuracy'] for key in mel_scores.keys()]
361 | overall_scores['RCA'] = \
362 | [mel_scores[key]['Raw Chroma Accuracy'] for key in mel_scores.keys()]
363 | overall_scores['OA'] = \
364 | [mel_scores[key]['Overall Accuracy'] for key in mel_scores.keys()]
365 |
366 | scores_fpath = os.path.join(outdir, "validate_mel_scores.csv")
367 | overall_scores.to_csv(scores_fpath)
368 |
369 | score_summary = os.path.join(outdir, "validate_mel_score_summary.csv")
370 | overall_scores.describe().to_csv(score_summary)
371 |
372 | # Generate melody output using predictions
373 | print "Generating Test Melodies"
374 | mel_test_dict = {}
375 | for key in test_contour_dict.keys():
376 | print key
377 | mel_test_dict[key] = gm.melody_from_clf(test_contour_dict[key],
378 | prob_thresh=best_thresh,
379 | method=decode)
380 | fpath = os.path.join(meldir, "%s_pred.csv" % key)
381 | mel_test_dict[key].to_csv(fpath, header=False, index=True)
382 |
383 | # Score Melody Output
384 | mel_scores = gm.score_melodies(mel_test_dict, test_annot_dict)
385 |
386 | overall_scores = \
387 | pd.DataFrame(columns=['VR', 'VFA', 'RPA', 'RCA', 'OA'],
388 | index=mel_scores.keys())
389 | overall_scores['VR'] = \
390 | [mel_scores[key]['Voicing Recall'] for key in mel_scores.keys()]
391 | overall_scores['VFA'] = \
392 | [mel_scores[key]['Voicing False Alarm'] for key in mel_scores.keys()]
393 | overall_scores['RPA'] = \
394 | [mel_scores[key]['Raw Pitch Accuracy'] for key in mel_scores.keys()]
395 | overall_scores['RCA'] = \
396 | [mel_scores[key]['Raw Chroma Accuracy'] for key in mel_scores.keys()]
397 | overall_scores['OA'] = \
398 | [mel_scores[key]['Overall Accuracy'] for key in mel_scores.keys()]
399 |
400 | scores_fpath = os.path.join(outdir, "all_mel_scores.csv")
401 | overall_scores.to_csv(scores_fpath)
402 |
403 | score_summary = os.path.join(outdir, "mel_score_summary.csv")
404 | overall_scores.describe().to_csv(score_summary)
405 |
--------------------------------------------------------------------------------
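
A short driver sketch for run_experiments.py, in the spirit of run_glass_ceiling_experiment.py below; the melody type and output directory are example values only:

import run_experiments as re

mel_type = 2                  # melody annotation type, as used in the scripts above
outdir = 'experiment_output'  # created by run_experiments() if it does not exist
# olaps != 'all' uses a single overlap threshold instead of sweeping 0.0-0.9
re.run_experiments(mel_type, outdir, olaps='single', decode='viterbi')
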
/src/contour_classification/run_glass_ceiling_experiment.py:
--------------------------------------------------------------------------------
1 | # Executes glass ceiling experiments
2 |
3 | import run_experiments as re
4 | import pandas as pd
5 | import numpy as np
6 | meltype = 1
7 | results = re.run_glassceiling_experiment(meltype)
8 | df = pd.DataFrame(results.values(), index=results.keys())
9 | print df.describe()
--------------------------------------------------------------------------------
/src/contour_classification/v_i_splits.json:
--------------------------------------------------------------------------------
1 | {
2 | "CelestialShore_DieForUs" : "v",
3 | "HezekiahJones_BorrowedHeart" : "v" ,
4 | "BrandonWebster_YesSirICanFly" : "v",
5 | "MusicDelta_Vivaldi" : "i",
6 | "Schumann_Mignon" : "v",
7 | "TheScarletBrand_LesFleursDuMal" : "v",
8 | "MusicDelta_SpeedMetal" : "i",
9 | "MusicDelta_ChineseDrama" : "i",
10 | "MusicDelta_ModalJazz" : "i",
11 | "EthanHein_GirlOnABridge" : "i",
12 | "StrandOfOaks_Spacestation" : "v",
13 | "LizNelson_Rainfall" : "v",
14 | "MusicDelta_Shadows" : "i",
15 | "BrandonWebster_DontHearAThing" : "v",
16 | "MusicDelta_Beethoven" : "i",
17 | "Debussy_LenfantProdigue" : "v",
18 | "PurlingHiss_Lolita" : "v",
19 | "MusicDelta_Grunge" : "v",
20 | "KarimDouaidy_Yatora" : "i",
21 | "KarimDouaidy_Hopscotch" : "i",
22 | "MusicDelta_FreeJazz" : "i",
23 | "SecretMountains_HighHorse" : "v",
24 | "ClaraBerryAndWooldog_WaltzForMyVictims" : "v",
25 | "AmarLal_SpringDay1" : "i",
26 | "AmarLal_Rest" : "i",
27 | "ClaraBerryAndWooldog_AirTraffic" : "v",
28 | "ClaraBerryAndWooldog_Stella" : "v",
29 | "ClaraBerryAndWooldog_TheBadGuys" : "v",
30 | "MusicDelta_Beatles" : "v",
31 | "AClassicEducation_NightOwl" : "v",
32 | "LizNelson_Coldwar" : "v",
33 | "FacesOnFilm_WaitingForGa" : "v",
34 | "PortStWillow_StayEven" : "v",
35 | "ClaraBerryAndWooldog_Boys" : "v",
36 | "InvisibleFamiliars_DisturbingWildlife" : "v",
37 | "AlexanderRoss_VelvetCurtain" : "v",
38 | "AimeeNorwich_Child" : "v",
39 | "AlexanderRoss_GoodbyeBolero" : "v",
40 | "Auctioneer_OurFutureFaces" : "v",
41 | "FamilyBand_Again" : "v",
42 | "MusicDelta_Country1" : "v",
43 | "MusicDelta_Country2" : "v",
44 | "MusicDelta_Gospel" : "v",
45 | "Mozart_DiesBildnis" : "v",
46 | "MusicDelta_Pachelbel" : "i",
47 | "MusicDelta_InTheHalloftheMountainKing" : "i",
48 | "Wolf_DieBekherte" : "v",
49 | "Mozart_BesterJungling" : "v",
50 | "MusicDelta_GriegTrolltog" : "i",
51 | "MatthewEntwistle_FairerHopes" : "i",
52 | "JoelHelander_Definition" : "i",
53 | "MatthewEntwistle_TheFlaxenField" : "i",
54 | "MatthewEntwistle_TheArch" : "i",
55 | "MatthewEntwistle_ImpressionsOfSaturn" : "i",
56 | "Schubert_Erstarrung" : "v",
57 | "MatthewEntwistle_Lontano" : "v",
58 | "Handel_TornamiAVagheggiar" : "v",
59 | "MichaelKropf_AllGoodThings" : "i",
60 | "JoelHelander_IntheAtticBedroom" : "i",
61 | "JoelHelander_ExcessiveResistancetoChange" : "i",
62 | "BigTroubles_Phantom" : "v",
63 | "MusicDelta_Reggae" : "v",
64 | "TheDistricts_Vermont" : "v",
65 | "Meaxic_TakeAStep" : "v",
66 | "MusicDelta_Zeppelin" : "i",
67 | "Creepoid_OldTree" : "v",
68 | "AvaLuna_Waterduct" : "v",
69 | "TheSoSoGlos_Emergency" : "v",
70 | "MusicDelta_80sRock" : "v",
71 | "MusicDelta_Punk" : "v",
72 | "MusicDelta_Rock" : "v",
73 | "HopAlong_SisterCities" : "v",
74 | "MusicDelta_Rockabilly" : "v",
75 | "MusicDelta_Hendrix" : "v",
76 | "Meaxic_YouListen" : "v",
77 | "MusicDelta_ChineseHenan" : "i",
78 | "Phoenix_ScotchMorris" : "i",
79 | "Phoenix_BrokenPledgeChicagoReel" : "i",
80 | "MusicDelta_ChineseYaoZu" : "i",
81 | "MusicDelta_ChineseJiangNan" : "i",
82 | "Phoenix_ColliersDaughter" : "i",
83 | "EthanHein_1930sSynthAndUprightBass" : "i",
84 | "ChrisJacoby_PigsFoot" : "i",
85 | "LizNelson_ImComingHome" : "v",
86 | "Phoenix_ElzicsFarewell" : "i",
87 | "Phoenix_SeanCaughlinsTheScartaglen" : "i",
88 | "Phoenix_LarkOnTheStrandDrummondCastle" : "i",
89 | "ChrisJacoby_BoothShotLincoln" : "i",
90 | "MusicDelta_ChineseChaoZhou" : "i",
91 | "AimeeNorwich_Flying" : "i",
92 | "MusicDelta_ChineseXinJing" : "i",
93 | "MusicDelta_SwingJazz" : "i",
94 | "CroqueMadame_Pilot" : "i",
95 | "MusicDelta_BebopJazz" : "i",
96 | "MusicDelta_LatinJazz" : "i",
97 | "CroqueMadame_Oil" : "i",
98 | "MatthewEntwistle_DontYouEver" : "v",
99 | "MusicDelta_FunkJazz" : "i",
100 | "MusicDelta_FusionJazz" : "i",
101 | "MusicDelta_CoolJazz" : "i",
102 | "StevenClark_Bounty" : "v",
103 | "MusicDelta_Disco" : "v",
104 | "Snowmine_Curfews" : "v",
105 | "NightPanther_Fire" : "v",
106 | "SweetLights_YouLetMeDown" : "v",
107 | "DreamersOfTheGhetto_HeavyLove" : "v",
108 | "HeladoNegro_MitadDelMundo" : "v",
109 | "MusicDelta_Britpop" : "v"
110 | }
--------------------------------------------------------------------------------
/src/imageMatlab.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | #
3 | # a script to define some matlab compatible image functions
4 |
5 | # copyright (C) 2010 Jean-Louis Durrieu
6 | #
7 | # This program is free software: you can redistribute it and/or modify
8 | # it under the terms of the GNU General Public License as published by
9 | # the Free Software Foundation, either version 3 of the License, or
10 | # (at your option) any later version.
11 | #
12 | # This program is distributed in the hope that it will be useful,
13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 | # GNU General Public License for more details.
16 | #
17 | # You should have received a copy of the GNU General Public License
18 | # along with this program.  If not, see <http://www.gnu.org/licenses/>.
19 |
20 | import matplotlib.pyplot as plt
21 |
22 | # The following instructions define some characteristics for the figures
23 | # In order to be able to use latex formulas in legends and text in
24 | # figures:
25 | ## plt.rc('text', usetex=True)
26 | # Turn on interactive mode to display the figures:
27 | plt.ion()
28 | # Characteristics of the figures:
29 | fontsize = 20;
30 | linewidth=4
31 | markersize = 16
32 | # Setting the above characteristics as defaults:
33 | plt.rc('legend',fontsize=fontsize)
34 | plt.rc('lines',markersize=markersize)
35 | plt.rc('lines',lw=linewidth)
36 |
37 | def imageM(*args,**kwargs):
38 | """
39 | imageM(*args, **kwargs)
40 |
41 | This function essentially is a wrapper for the
42 | matplotlib.pyplot function imshow, such that the actual result
43 | looks like the default that can be obtained with the MATLAB
44 | function image.
45 |
46 | The arguments are the same as the arguments for function imshow.
47 | """
48 | # The appearance of the image: nearest means that the image
49 | # is not smoothed:
50 | kwargs['interpolation'] = 'nearest'
51 |     # the keyword 'aspect' adapts the aspect ratio to the size of
52 |     # the window, rather than the opposite (which is the default
53 |     # behaviour):
54 | kwargs['aspect'] = 'auto'
55 | kwargs['origin'] = 0
56 | plt.imshow(*args,**kwargs)
57 |
--------------------------------------------------------------------------------
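
A quick usage sketch for imageM (the matrix below is random and purely illustrative):

import numpy as np
import matplotlib.pyplot as plt
from imageMatlab import imageM

salience = np.random.rand(60, 200)   # e.g. pitch bins x time frames
imageM(salience)                     # MATLAB-style display: no smoothing, auto aspect
plt.colorbar()
plt.show()
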
/src/melodyExtractionFromSalienceFunction.py:
--------------------------------------------------------------------------------
1 | __author__ = 'juanjobosch'
2 |
3 | import sys, os
4 | from essentia import *
5 | from essentia.standard import *
6 | import contourExtraction as ce
7 | import numpy as np
8 |
9 |
10 | def MEFromFileNumInFolder(salsfolder, outfolder, fileNum, options):
11 | """ Auxiliar function, to extract melody from a folder with precomputed and saved saliences (*.Msal)
12 | Parameters
13 | ----------
14 | salsfolder: Folder containing saved saliences
15 | outfolder: melody extraction output folder
16 | fileNum: number of the file [1:numfiles]
17 | options: set of options for melody extraction
18 |
19 | No return
20 | """
21 | from os.path import join, basename
22 | import glob
23 |
24 | if not os.path.exists(outfolder):
25 | os.makedirs(outfolder)
26 |
27 | fn = glob.glob(salsfolder + '*.Msal*')[fileNum - 1]
28 | bn = basename(fn)
29 | outputfile = join(outfolder, bn[0:bn.find('.Msal')] + '.pitch')
30 | MEFromSFFile(fn, outputfile, options)
31 |
32 |
33 | def loadSFFile(fn):
34 | """ Auxiliar function to load a previouslly saved salience function (*.Msal)
35 | Parameters
36 | ----------
37 | fn: filename
38 |
39 | Returns
40 | ----------
41 | times: set of times for the frames of the salience function
42 | SF: Pitch salience function
43 | """
44 | from os.path import splitext
45 | from numpy import loadtxt
46 | from scipy.io import loadmat
47 |
48 | if splitext(fn)[-1] == '.mat':
49 | loaded = loadmat(fn)
50 | mat = loaded.get('timesAndSF')
51 | else:
52 | try:
53 | mat = loadtxt(fn)
54 | except:
55 | mat = loadtxt(fn, delimiter=',')
56 | # load as text file
57 |
58 | times = mat[:, 0]
59 | SF = mat.T
60 |
61 | return times, SF
62 |
63 |
64 | def MEFromSFFile(fn, outputfile, options):
65 | """ Computes Melody extractino from a Salience function File
66 | Parameters
67 | ----------
68 | fn: salience function filename
69 | outputfile: output filename
70 | options: set of options for melody extraction
71 |
72 | No returns
73 |
74 | """
75 | from numpy import column_stack, savetxt
76 |
77 | times, SF = loadSFFile(fn)
78 | times, pitch = MEFromSF(times, SF, options)
79 | savetxt(outputfile, column_stack((times.T, pitch.T)), fmt='%-7.5f', delimiter=",")
80 |
81 |
82 | def MEFromSF(times, SF, options):
83 | """ Computes Melody extractino from a Salience function
84 | Parameters
85 | ----------
86 | times: set of times for each frame of the salience function
87 | SF: Pitch salience function
88 | options: set of options for melody extraction
89 | E.g.
90 | options.saveContours = True : to save contours as a dataframe for contour classification
91 |         options.decodingMethod = "PCS" : to run melody extraction based on Pitch Contour Selection (MIREX2015, MIREX2016, SMC2016, ISMIR2016 (C2))
92 |
93 | Returns:
94 | ----------
95 | times: set of times for each frame of the estimated melody
96 | pitch: set of pitches of the estimated melody
97 | """
98 |
99 | Fs = options.Fs
100 | hopsize = options.hopsizeInSamples
101 | stepNotes = options.stepNotes
102 | Nbins = SF.shape[0]
103 |
104 | try:
105 | voiceVibrato = options.voiceVibrato
106 | except:
107 | # Default: use of vibrato = False
108 | voiceVibrato = False
109 |
110 | voicingTolerance = options.voicingTolerance
111 |
112 | # Initialise methods:
113 |
114 | # Initialise Pitch contour selection: from contours, extracting melody using salamon2012 as implemented in Essentia
115 |
116 | run_pitch_contours_melody = PitchContoursMelody(guessUnvoiced=True,
117 | binResolution=int(stepNotes),
118 | hopSize=int(hopsize), voicingTolerance=voicingTolerance,
119 | voiceVibrato=voiceVibrato,
120 | referenceFrequency=options.minF0,
121 | minFrequency=options.minF0)
122 |
123 | # Computes peaks from salience function
124 |
125 | run_pitch_salience_function_peaks = PitchSalienceFunctionPeaks(binResolution=int(stepNotes),
126 | referenceFrequency=options.minF0,
127 | minFrequency=options.minF0)
128 |
129 | # Extracts contours from salience function peaks
130 |
131 | run_pitch_contours = PitchContours(hopSize=int(hopsize), binResolution=int(stepNotes),
132 | peakDistributionThreshold=options.peakDistributionThreshold,
133 | peakFrameThreshold=options.peakFrameThreshold,
134 | minDuration=options.minDuration,
135 | timeContinuity=options.timeContinuity,
136 | pitchContinuity=options.pitchContinuity)
137 |
138 | pool = Pool()
139 |
140 | # For all frames, compute salience peaks, and save their salience and bin
141 | for index in range(SF.shape[1]):
142 |         # The vector should be of size 600 if we have 10 bins/semitone (600 bins x 10 cents = 6000 cents, i.e. 5 octaves)
143 | SALsalience_peaks_bins, SALsalience_peaks_saliences = run_pitch_salience_function_peaks(
144 | np.array(np.append((np.array(SF[0:600, index])), np.zeros(max(0, 600 - Nbins))), 'float32'))
145 | if (len(SALsalience_peaks_bins) == 0) or (len(SALsalience_peaks_saliences) == 0):
146 | SALsalience_peaks_bins = np.array([1], 'int')
147 | SALsalience_peaks_saliences = np.array([0.00000000000000000001], 'float32')
148 | pool.add('allframes_SALsalience_peaks_saliences', SALsalience_peaks_saliences)
149 | pool.add('allframes_SALsalience_peaks_bins', SALsalience_peaks_bins)
150 |
151 |     # Create contours using previously computed peaks
152 | #print pool['allframes_SALsalience_peaks_bins']
153 | #print pool['allframes_SALsalience_peaks_saliences']
154 |
155 | contours_bins_SAL, contours_saliences_SAL, contours_start_times_SAL, durationSAL = run_pitch_contours(
156 | [arr.tolist() for arr in pool['allframes_SALsalience_peaks_bins']],
157 | [arr.tolist() for arr in pool['allframes_SALsalience_peaks_saliences']])
158 |
159 | contours_bins_SAL = [arr.tolist() for arr in contours_bins_SAL]
160 | contours_saliences_SAL = [arr.tolist() for arr in contours_saliences_SAL]
161 | contours_start_times_SAL = [arr.tolist() for arr in contours_start_times_SAL]
162 |
163 | # length = len(sorted(pool['allframes_SALsalience_peaks_bins'], key=len, reverse=True)[0])
164 | # salpBins = array([xi+[None]*(length-len(xi)) for xi in pool['allframes_SALsalience_peaks_bins']], dtype=single)
165 |
166 | # contours_bins_SAL, contours_saliences_SAL, contours_start_times_SAL, durationSAL = run_pitch_contours(
167 | # [np.array(arr, dtype='int') for arr in pool['allframes_SALsalience_peaks_bins']],
168 | # pool['allframes_SALsalience_peaks_saliences'])
169 |
170 | # contours_bins_SAL, contours_saliences_SAL, contours_start_times_SAL, durationSAL = run_pitch_contours(
171 | # np.array(pool['allframes_SALsalience_peaks_bins'],'float32'),
172 | # np.array((pool['allframes_SALsalience_peaks_saliences']), 'float32'))
173 |
174 | NContours = len(contours_bins_SAL)
175 | print 'NContours %d' % NContours
176 | pitch = np.zeros(len(times))
177 |
178 | options.saveContours = False
179 |
180 | if (NContours > 0):
181 |
182 | if options.decodingMethod == "PCS":
183 | # Extract melody from contours using Pitch Contour Selection
184 | allpitch, confidence = run_pitch_contours_melody(contours_bins_SAL,
185 | contours_saliences_SAL,
186 | contours_start_times_SAL,
187 | durationSAL)
188 |
189 | # We convert the allpitch (always positive) to a sequence of positive
190 | # and negative pitches, depending on the confidence, which is a measure
191 | # of the voicing. We add 0 to avoid negative zeros (-0.0)
192 | pitch = allpitch * (-1 + 2 * (confidence > 0)) + 0
193 | L = min(len(pitch), len(times))
194 | pitch = pitch[0:L]
195 | times = times[0:L]
196 | else:
197 | print "No decoding using Pitch Contour Selection"
198 |
199 |         # If contours need to be saved for pitch contour classification, we compute the contour data
200 | if options.saveContours:
201 | extraFeatures = None
202 | try:
203 | contour_data = ce.compute_contour_data(contours_bins_SAL, contours_saliences_SAL,
204 | contours_start_times_SAL, stepNotes, options.minF0,
205 | options.hopsize, extra_features=extraFeatures)
206 | picklefile = options.pitch_output_file + '.ctr'
207 | from pickle import dump
208 | with open(picklefile, 'wb') as handle:
209 | dump(contour_data, handle)
210 |         except Exception as exc:
211 |             print "Error computing contour data: %s" % exc
212 | return times, pitch
213 |
--------------------------------------------------------------------------------
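
A minimal driver sketch for the functions above, assuming a previously saved salience file exists; 'example.wav', 'example.Msal' and 'example.pitch' are placeholder names, and the options object is built with parseOptions from src/parsing.py (shown next):

import parsing
import melodyExtractionFromSalienceFunction as mesf

# '-i example.wav' only seeds the default output naming in parseOptions;
# the salience file passed below is what is actually read.
args, options = parsing.parseOptions(['-i', 'example.wav'])
options.decodingMethod = 'PCS'   # decode the contours with Pitch Contour Selection
mesf.MEFromSFFile('example.Msal', 'example.pitch', options)
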
/src/parsing.py:
--------------------------------------------------------------------------------
1 | # Most original code by J.L. Durrieu, modified by Juan J. Bosch in February, 2015
2 |
3 | # This program is free software: you can redistribute it and/or modify
4 | # it under the terms of the GNU General Public License as published by
5 | # the Free Software Foundation, either version 3 of the License, or
6 | # (at your option) any later version.
7 | #
8 | # This program is distributed in the hope that it will be useful,
9 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
10 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
11 | # GNU General Public License for more details.
12 | #
13 | # You should have received a copy of the GNU General Public License
14 | # along with this program.  If not, see <http://www.gnu.org/licenses/>.
15 |
16 |
17 | import optparse
18 |
19 | def parseOptions(argsin,wavfilerequired = False):
20 |
21 | usage = "usage: %prog [options] inputAudioFile"
22 | usage = "usage: %prog [options]"
23 | parser = optparse.OptionParser(usage)
24 | # Name of the output files:
25 | parser.add_option("-i", "--input-file",
26 | dest="input_file", type="string",
27 | help="Path of the input file.\n",
28 | default=None)
29 | parser.add_option("-o", "--pitch-output",
30 | dest="pitch_output_file", type="string",
31 | help="name of the output file for an external algorithm.\n"
32 | "If None appends _pitches to the wav",
33 | default=None)
34 | parser.add_option("-s", "--pitch-salience-output-file",
35 | dest="sal_output_file", type="string",
36 | help="name of the output file for the Salience File.\n"
37 | "If None the salience file is not saved.",
38 | default=None)
39 |
40 | parser.add_option("-v", "--vit-pitch-output-file",
41 | dest="vit_pitch_output_file", type="string",
42 | help="name of the output file for the estimated pitches with Viterbi.\n"
43 | "If None it does not execute the Viterbi extraction",
44 | default=None)
45 |
46 | parser.add_option("-p", "--pitch-output-file",
47 | dest="pitch_output_file", type="string",
48 | help="name of the output file for an external algorithm.\n"
49 | "If None appends _pitches to the wav",
50 | default=None)
51 | # Some more optional options:
52 | parser.add_option("-d", "--with-display", dest="displayEvolution",
53 | action="store_true",help="display the figures",
54 | default=False)
55 | parser.add_option("-q", "--quiet", dest="verbose",
56 | action="store_false",
57 | help="use to quiet all output verbose",
58 | default=False)
59 | parser.add_option("--nb-iterations", dest="nbiter",
60 | help="number of iterations", type="int",
61 | default=20)
62 |
63 | parser.add_option("--expandHF0Val", dest="expandHF0Val",
64 | help="value for expanding the distribution of the values of HF0", type="float",
65 | default=1)
66 |
67 | parser.add_option("--window-size", dest="windowSize", type="float",
68 | default=0.04644,help="size of analysis windows, in s.")
69 | parser.add_option("--Fourier-size", dest="fourierSize", type="int",
70 | default=None,
71 | help="size of Fourier transforms, "\
72 | "in samples.")
73 | # parser.add_option("--hopsize", dest="hopsize", type="float",
74 | # default=0.0058,
75 | # help="size of the hop between analysis windows, in s.")
76 | parser.add_option("--hopsize", dest="hopsize", type="float",
77 | default=0.01,
78 | help="size of the hop between analysis windows, in s.")
79 | parser.add_option("--nb-accElements", dest="R", type="float",
80 | default=40.0,
81 | help="number of elements for the accompaniment.")
82 | parser.add_option("--numAtomFilters", dest="P_numAtomFilters",
83 | type="int", default=30,
84 | help="Number of atomic filters - in WGAMMA.")
85 | parser.add_option("--numFilters", dest="K_numFilters", type="int",
86 | default=10,
87 | help="Number of filters for decomposition - in WPHI")
88 | parser.add_option("--min-F0-Freq", dest="minF0", type="float",
89 | default=55.0,
90 | help="Minimum of fundamental frequency F0.")
91 | parser.add_option("--max-F0-Freq", dest="maxF0", type="float",
92 | default=1760.0,
93 | help="Maximum of fundamental frequency F0.")
94 | parser.add_option("--samplingRate", dest="Fs", type="float",
95 | default=44100,
96 | help="Sampling rate")
97 | parser.add_option("--step-F0s", dest="stepNotes", type="int",
98 | default=10,
99 | help="Number of F0s in dictionary for each semitone.")
100 | # PitchContoursMelody
101 | parser.add_option("--voicingTolerance", dest="voicingTolerance", type="float",
102 | default=0.2,
103 | help="Allowed deviation below the average contour mean salience of all contours (fraction of the standard deviation)")
104 |
105 | #PitchContours
106 | parser.add_option("--peakDistributionThreshold", dest="peakDistributionThreshold", type="float",
107 | default=0.9,
108 | help="Allowed deviation below the peak salience mean over all frames (fraction of the standard deviation)")
109 |
110 | parser.add_option("--peakFrameThreshold", dest="peakFrameThreshold", type="float",
111 | default=0.9,
112 | help="Per-frame salience threshold factor (fraction of the highest peak salience in a frame)")
113 |
114 | parser.add_option("--minDuration", dest="minDuration", type="float",
115 | default=100,
116 | help="the minimum allowed contour duration [ms]")
117 |
118 | parser.add_option("--timeContinuity", dest="timeContinuity", type="float",
119 | default=100,
120 | help="Time continuity cue (the maximum allowed gap duration for a pitch contour) [ms]")
121 | parser.add_option("--voiceVibrato",dest = "voiceVibrato",default =False, help="detect voice vibrato for melody estimation")
122 |
123 | parser.add_option("--pitchContinuity", dest="pitchContinuity", type="float",
124 | default=27.5625,
125 | help="pitch continuity cue (maximum allowed pitch change durig 1 ms time period) [cents]")
126 |
127 | parser.add_option("--extractionMethod", dest="extractionMethod", type="string",
128 | help="name of the method to be executed, if None, default is BG2, with PCS (Pitch Contour Selection)",
129 | default="BG2")
130 |
131 | (options, args) = parser.parse_args(argsin)
132 | # if the argument is not given with -i
133 |
134 | if len(args)>0:
135 | options.input_file = args[0]
136 |
137 | if len(args) > 1:
138 | options.pitch_output_file = args[1]
139 |
140 | options.hopsizeInSamples = int(round(options.hopsize*options.Fs))
141 |
142 | if ((len(args) < 1) & wavfilerequired):
143 | parser.error("incorrect number of arguments, use option -h for help.")
144 |
145 | if options.pitch_output_file is None:
146 | options.pitch_output_file = options.input_file+'_pitches.txt'
147 |
148 | return args, options
149 |
150 |
151 | import optparse
152 |
153 | def parseOptionsSS(argsin,wavfilerequired = True):
154 |
155 | usage = "usage: %prog [options] inputAudioFile"
156 | parser = optparse.OptionParser(usage)
157 | # Name of the output files:
158 | parser.add_option("-m", "--melody-output-file",
159 | dest="solo_output_file", type="string",
160 | help="name of the audio output file for the estimated\n"\
161 | "solo (vocal) part",
162 | default="estimated_solo.wav")
163 | parser.add_option("-a", "--accomp-output-file",
164 | dest="acc_output_file", type="string",
165 | help="name of the audio output file for the estimated\n"\
166 | "music part",
167 | default="estimated_music.wav")
168 | parser.add_option("-c", "--melodyPC-output-file",
169 | dest="pc_pitch_output_file", type="string",
170 | help="name of the output file for the estimated pitches with pitch contours\n",
171 | default="pc.pitch")
172 | parser.add_option("-s", "--pitch-salience-output-file",
173 | dest="sal_output_file", type="string",
174 | help="name of the output file for the Salience File.\n"
175 | "If None the salience file is not saved.",
176 | default=None)
177 | parser.add_option("-v", "--vit-pitch-output-file",
178 | dest="vit_pitch_output_file", type="string",
179 | help="name of the output file for the estimated pitches with Viterbi.\n"
180 | "If None it does not execute the Viterbi extraction",
181 | default=None)
182 |
183 | #parser.add_option("-p", "--pitch-output-file",
184 | # dest="pitch_output_file", type="string",
185 | # help="name of the output file for an external algorithm.\n"
186 | # "If None appends _pitches to the wav",
187 | # default=None)
188 | # Some more optional options:
189 | parser.add_option("-d", "--with-display", dest="displayEvolution",
190 | action="store_true",help="display the figures",
191 | default=False)
192 | parser.add_option("-q", "--quiet", dest="verbose",
193 | action="store_false",
194 | help="use to quiet all output verbose",
195 | default=False)
196 | parser.add_option("--nb-iterations", dest="nbiter",
197 | help="number of iterations", type="int",
198 | default=30)
199 |
200 | parser.add_option("--expandHF0Val", dest="expandHF0Val",
201 | help="value for expanding the distribution of the values of HF0", type="float",
202 | default=1)
203 | parser.add_option("--voiceVibrato",dest = "voiceVibrato",default =False, help="detect voice vibrato for melody estimation")
204 | parser.add_option("--window-size", dest="windowSize", type="float",
205 | default=0.04644,help="size of analysis windows, in s.")
206 | parser.add_option("--Fourier-size", dest="fourierSize", type="int",
207 | default=None,
208 | help="size of Fourier transforms, "\
209 | "in samples.")
210 | parser.add_option("--hopsize", dest="hopsize", type="float",
211 | default=0.0058,
212 | help="size of the hop between analysis windows, in s.")
213 | parser.add_option("--nb-accElements", dest="R", type="float",
214 | default=40.0,
215 | help="number of elements for the accompaniment.")
216 | parser.add_option("--numAtomFilters", dest="P_numAtomFilters",
217 | type="int", default=30,
218 | help="Number of atomic filters - in WGAMMA.")
219 | parser.add_option("--numFilters", dest="K_numFilters", type="int",
220 | default=10,
221 | help="Number of filters for decomposition - in WPHI")
222 | parser.add_option("--min-F0-Freq", dest="minF0", type="float",
223 | default=100.0,
224 | help="Minimum of fundamental frequency F0.")
225 | parser.add_option("--max-F0-Freq", dest="maxF0", type="float",
226 | default=800.0,
227 | help="Maximum of fundamental frequency F0.")
228 | parser.add_option("--samplingRate", dest="Fs", type="float",
229 | default=44100,
230 | help="Sampling rate")
231 | parser.add_option("--step-F0s", dest="stepNotes", type="int",
232 | default=10,
233 | help="Number of F0s in dictionary for each semitone.")
234 | # PitchContoursMelody
235 | parser.add_option("--voicingTolerance", dest="voicingTolerance", type="float",
236 | default=0.2,
237 | help="Allowed deviation below the average contour mean salience of all contours (fraction of the standard deviation)")
238 |
239 | #PitchContours
240 | parser.add_option("--peakDistributionThreshold", dest="peakDistributionThreshold", type="float",
241 | default=0.9,
242 | help="Allowed deviation below the peak salience mean over all frames (fraction of the standard deviation)")
243 |
244 | parser.add_option("--peakFrameThreshold", dest="peakFrameThreshold", type="float",
245 | default=0.9,
246 | help="Per-frame salience threshold factor (fraction of the highest peak salience in a frame)")
247 |
248 | parser.add_option("--minDuration", dest="minDuration", type="float",
249 | default=100,
250 | help="the minimum allowed contour duration [ms]")
251 |
252 | parser.add_option("--timeContinuity", dest="timeContinuity", type="float",
253 | default=100,
254 | help="Time continuity cue (the maximum allowed gap duration for a pitch contour) [ms]")
255 |
256 | parser.add_option("--pitchContinuity", dest="pitchContinuity", type="float",
257 | default=27.5625,
258 | help="pitch continuity cue (maximum allowed pitch change durig 1 ms time period) [cents]")
259 | (options, args) = parser.parse_args(argsin)
260 |
261 | options.hopsizeInSamples = int(round(options.hopsize*options.Fs))
262 | if (len(args) != 1) and wavfilerequired:
263 | parser.error("incorrect number of arguments, use option -h for help.")
264 | if len(args) > 0:
265 | options.input_file = args[0]
266 |
267 | # the -p/--pitch-output-file option is commented out above, so fall back gracefully
268 | if getattr(options, 'pitch_output_file', None) is None and len(args) > 0:
269 | options.pitch_output_file = options.input_file + '_pitches.txt'
270 |
271 | return args, options
--------------------------------------------------------------------------------
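A minimal sketch of calling parseOptionsSS directly, assuming src/ is on the Python path, that parsing.py's own module-level imports are satisfied, and that mix.wav is a placeholder input file (all of these are assumptions; this snippet is not part of the repository):

    import parsing  # assumes src/ is on sys.path

    # optparse-style call: pass the argument list without the program name
    args, options = parsing.parseOptionsSS(['mix.wav', '--nb-iterations', '25',
                                            '--vit-pitch-output-file', 'mix_vit.pitch'])
    print(options.input_file)            # 'mix.wav'
    print(options.hopsizeInSamples)      # int(round(0.0058 * 44100)) == 256
    print(options.vit_pitch_output_file) # 'mix_vit.pitch'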
/src/peaks.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | def _datacheck_peakdetect(x_axis, y_axis):
3 | if x_axis is None:
4 | x_axis = range(len(y_axis))
5 |
6 | if len(y_axis) != len(x_axis):
7 | raise ValueError('Input vectors y_axis and x_axis must have same length')
8 |
9 | #needs to be a numpy array
10 | y_axis = np.array(y_axis)
11 | x_axis = np.array(x_axis)
12 | return x_axis, y_axis
13 |
14 |
15 | def peakdetect(y_axis, x_axis=None, lookahead=300, delta=0):
16 | """
17 | Converted from/based on a MATLAB script at:
18 | http://billauer.co.il/peakdet.html
19 |
20 | Function for detecting local maxima and minima in a signal.
21 | Discovers peaks by searching for values which are surrounded by lower
22 | or larger values, for maxima and minima respectively.
23 |
24 | keyword arguments:
25 | y_axis -- A list containing the signal over which to find peaks
26 | x_axis -- (optional) An x-axis whose values correspond to the y_axis list
27 | and is used in the return to specify the position of the peaks. If
28 | omitted, an index into the y_axis is used. (default: None)
29 | lookahead -- (optional) distance to look ahead from a peak candidate to
30 | determine if it is the actual peak (default: 300)
31 | '(sample / period) / f' where '4 >= f >= 1.25' might be a good value
32 | delta -- (optional) this specifies a minimum difference between a peak and
33 | the following points, before a peak may be considered a peak. Useful
34 | to hinder the function from picking up false peaks towards the end of
35 | the signal. To work well, delta should be set to delta >= RMSnoise * 5.
36 | (default: 0)
37 | Leaving delta at its default causes roughly a 20% decrease in speed;
38 | used correctly it can double the speed of the function.
39 |
40 | return -- two lists [max_peaks, min_peaks] containing the positive and
41 | negative peaks respectively. Each cell of the lists contains a tuple
42 | of: (position, peak_value)
43 | to get the average peak value, do: np.mean(max_peaks, 0)[1] on the
44 | results; to unpack one of the lists into x, y coordinates, do:
45 | x, y = zip(*max_peaks)
46 | """
47 | max_peaks = []
48 | min_peaks = []
49 | dump = [] # Used to pop the first hit which almost always is false
50 |
51 | # check input data
52 | x_axis, y_axis = _datacheck_peakdetect(x_axis, y_axis)
53 | # store data length for later use
54 | length = len(y_axis)
55 |
56 |
57 | #perform some checks
58 | if lookahead < 1:
59 | raise ValueError("Lookahead must be '1' or above in value")
60 | #NOTE: commented this to use the function with log(histogram)
61 | #if not (np.isscalar(delta) and delta >= 0):
62 | if not (np.isscalar(delta)):
63 | raise ValueError("delta must be a scalar")
64 |
65 | #maxima and minima candidates are temporarily stored in
66 | #mx and mn respectively
67 | mn, mx = np.Inf, -np.Inf
68 |
69 | #Only detect peak if there is 'lookahead' amount of points after it
70 | for index, (x, y) in enumerate(zip(x_axis[:-lookahead],
71 | y_axis[:-lookahead])):
72 | if y > mx:
73 | mx = y
74 | mxpos = x
75 | if y < mn:
76 | mn = y
77 | mnpos = x
78 |
79 | ####look for max####
80 | if y < mx-delta and mx != np.Inf:
81 | #Maxima peak candidate found
82 | #look ahead in signal to ensure that this is a peak and not jitter
83 | if y_axis[index:index+lookahead].max() < mx:
84 | max_peaks.append([mxpos, mx])
85 | dump.append(True)
86 | #set algorithm to only find minima now
87 | mx = np.Inf
88 | mn = np.Inf
89 | if index+lookahead >= length:
90 | #end is within lookahead no more peaks can be found
91 | break
92 | continue
93 | #else: # this branch slows things down
94 | # mx = ahead
95 | # mxpos = x_axis[np.where(y_axis[index:index+lookahead]==mx)]
96 |
97 | ####look for min####
98 | if y > mn+delta and mn != -np.Inf:
99 | #Minima peak candidate found
100 | #look ahead in signal to ensure that this is a peak and not jitter
101 | if y_axis[index:index+lookahead].min() > mn:
102 | min_peaks.append([mnpos, mn])
103 | dump.append(False)
104 | #set algorithm to only find maxima now
105 | mn = -np.Inf
106 | mx = -np.Inf
107 | if index+lookahead >= length:
108 | #end is within lookahead no more peaks can be found
109 | break
110 | #else: # this branch slows things down
111 | # mn = ahead
112 | # mnpos = x_axis[np.where(y_axis[index:index+lookahead]==mn)]
113 |
114 | #Remove the false hit on the first value of the y_axis
115 | try:
116 | if dump[0]:
117 | max_peaks.pop(0)
118 | else:
119 | min_peaks.pop(0)
120 | del dump
121 | except IndexError:
122 | #no peaks were found, should the function return empty lists?
123 | pass
124 |
125 | return [max_peaks, min_peaks]
126 |
127 |
128 | def peaks(x, y, lookahead=20, delta=0.00003):
129 | """
130 | A wrapper around peakdetect to pack the return values in a nicer format
131 | """
132 | _max, _min = peakdetect(y, x, lookahead, delta)
133 | x_peaks = [p[0] for p in _max]
134 | y_peaks = [p[1] for p in _max]
135 | x_valleys = [p[0] for p in _min]
136 | y_valleys = [p[1] for p in _min]
137 |
138 | _peaks = [x_peaks, y_peaks]
139 | _valleys = [x_valleys, y_valleys]
140 | return {"peaks": _peaks, "valleys": _valleys}
--------------------------------------------------------------------------------
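A quick sanity check of peaks() on a synthetic sine (illustrative only, not part of the repository; the lookahead and delta values are arbitrary):

    import numpy as np
    from peaks import peaks  # assumes src/ is on sys.path

    x = np.linspace(0, 10, 1000)
    y = np.sin(2 * np.pi * 0.5 * x)              # 5 full periods -> about 5 maxima
    result = peaks(x, y, lookahead=20, delta=0.1)
    x_peaks, y_peaks = result["peaks"]           # positions and values of the maxima
    x_valleys, y_valleys = result["valleys"]
    print(len(x_peaks), len(x_valleys))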
/src/tracking.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | # copyright (C) 2010 Jean-Louis Durrieu
4 | #
5 | # This program is free software: you can redistribute it and/or modify
6 | # it under the terms of the GNU General Public License as published by
7 | # the Free Software Foundation, either version 3 of the License, or
8 | # (at your option) any later version.
9 | #
10 | # This program is distributed in the hope that it will be useful,
11 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 | # GNU General Public License for more details.
14 | #
15 | # You should have received a copy of the GNU General Public License
16 | # along with this program. If not, see <http://www.gnu.org/licenses/>.
17 |
18 | from numpy import arange, zeros, array, argmax, vstack, amax, ones, outer
19 |
20 | def viterbiTracking(logDensity, logPriorDensities, logTransitionMatrix,
21 | verbose=False):
22 | """
23 | Naive implementation of the Viterbi algorithm:
24 | this is a bit slow, consider using viterbiTrackingArray instead.
25 |
26 | bestStatePath = viterbiTracking(logDensity, logPriorDensities,
27 | logTransitionMatrix, verbose=False)
28 |
29 | viterbiTracking returns the best path through matrix logDensity,
30 | assuming that logDensity contains the likelihood of the observation
31 | sequence, conditionally upon the hidden states. A hidden Markov
32 | model (HMM) is assumed, with prior probabilities for the states
33 | given by logPriorDensities, and transition probabilities given
34 | by the matrix logTransitionMatrix. More precisely:
35 | Inputs:
36 | logDensity is a S x N ndarray, where S is the number of hidden
37 | states and N is the number of frames of the
38 | observed signal. The element at row s and
39 | column n contains the conditional likelihood
40 | of the signal at frame n, conditionally upon
41 | state s.
42 | logPriorDensities is a ndarray of size S, containing the prior
43 | probabilities of the hidden states of the HMM.
44 | logTransitionMatrix is a S x S ndarray containing the transition
45 | probabilities: at row s and column t, it
46 | contains the probability of having state t
47 | after state s.
48 | verbose defines whether to display evolution information or not.
49 | Default is False.
50 |
51 | Outputs:
52 | bestStatePath is the sequence of best states, assuming the HMM
53 | with the given parameters.
54 | """
55 | numberOfStates, numberOfFrames = logDensity.shape
56 |
57 | cumulativeProbability = zeros([numberOfStates, numberOfFrames])
58 | antecedents = zeros([numberOfStates, numberOfFrames])
59 |
60 | for state in arange(numberOfStates):
61 | antecedents[state, 0] = -1
62 | cumulativeProbability[state, 0] = logPriorDensities[state] \
63 | + logDensity[state, 0]
64 |
65 | for n in arange(1, numberOfFrames):
66 | if verbose:
67 | print "frame number ", n, "over ", numberOfFrames
68 | for state in arange(numberOfStates):
69 | if verbose:
70 | print " state number ",state, " over ", numberOfStates
71 | cumulativeProbability[state, n] \
72 | = cumulativeProbability[0, n - 1] \
73 | + logTransitionMatrix[0, state]
74 | antecedents[state, n] = 0
75 | for state_ in arange(1, numberOfStates):
76 | if verbose:
77 | print " state number ",
78 | print state_, " over ", numberOfStates
79 | tempCumProba = cumulativeProbability[state_, n - 1] \
80 | + logTransitionMatrix[state_, state]
81 | if (tempCumProba > cumulativeProbability[state, n]):
82 | cumulativeProbability[state, n] = tempCumProba
83 | antecedents[state, n] = state_
84 | cumulativeProbability[state, n] \
85 | = cumulativeProbability[state, n] \
86 | + logDensity[state, n]
87 |
88 | # backtracking:
89 | bestStatePath = zeros(numberOfFrames, dtype=int)
90 | bestStatePath[-1] = argmax(cumulativeProbability[:, numberOfFrames - 1])
91 | for n in arange(numberOfFrames - 2, -1, -1):
92 | bestStatePath[n] = antecedents[int(bestStatePath[n + 1]), n + 1]
93 |
94 | return bestStatePath
95 |
96 | def viterbiTrackingArray(logDensity, logPriorDensities, logTransitionMatrix,
97 | verbose=False):
98 | """
99 | bestStatePath = viterbiTrackingArray(logDensity, logPriorDensities,
100 | logTransitionMatrix, verbose=False)
101 |
102 | viterbiTrackingArray returns the best path through matrix logDensity,
103 | assuming that logDensity contains the likelihood of the observation
104 | sequence, conditionally upon the hidden states. A hidden Markov
105 | model (HMM) is assumed, with prior probabilities for the states
106 | given by logPriorDensities, and transition probabilities given
107 | by the matrix logTransitionMatrix. More precisely:
108 | Inputs:
109 | logDensity is a S x N ndarray, where S is the number of hidden
110 | states and N is the number of frames of the
111 | observed signal. The element at row s and
112 | column n contains the conditional likelihood
113 | of the signal at frame n, conditionally upon
114 | state s. The given values should be given as the
115 | logarithm of the probabilities.
116 | logPriorDensities is a ndarray of size S, containing the prior
117 | probabilities of the hidden states of the HMM;
118 | the logarithm of these values is expected.
119 | logTransitionMatrix is a S x S ndarray containing the transition
120 | probabilities: at row s and column t, it
121 | contains the probability of having state t
122 | after state s, logarithm expected.
123 | verbose defines whether to display evolution information or not.
124 | Default is False.
125 |
126 | Outputs:
127 | bestStatePath is the sequence of best states, assuming the HMM
128 | with the given parameters.
129 | """
130 | numberOfStates, numberOfFrames = logDensity.shape
131 |
132 | # logPriorDensities = vstack(logPriorDensities)
133 | onesStates = ones(numberOfStates)
134 |
135 | cumulativeProbability = zeros([numberOfStates, numberOfFrames])
136 | antecedents = zeros([numberOfStates, numberOfFrames], dtype=int)
137 |
138 | antecedents[:, 0] = -1
139 | cumulativeProbability[:, 0] = logPriorDensities[:] \
140 | + logDensity[:, 0]
141 |
142 | for n in arange(1, numberOfFrames):
143 | if verbose:
144 | print "frame number ", n, "over ", numberOfFrames
145 | # Find the state that minimizes the transition and the cumulative
146 | # probability. This operation can be done for all the target
147 | # states using numpy operations on ndarrays:
148 | antecedents[:, n] \
149 | = argmax(outer(onesStates,
150 | cumulativeProbability[:, n - 1]) \
151 | + logTransitionMatrix.T, axis=1)
152 | cumulativeProbability[:, n] \
153 | = cumulativeProbability[antecedents[:, n], n - 1] \
154 | + logTransitionMatrix[antecedents[:, n],
155 | arange(numberOfStates)] \
156 | + logDensity[:, n]
157 |
158 | # backtracking:
159 | bestStatePath = zeros(numberOfFrames)
160 | bestStatePath[-1]= int(argmax(cumulativeProbability[:, numberOfFrames- 1]))
161 | for n in arange(numberOfFrames - 2, -1, -1):
162 | bestStatePath[n] = antecedents[int(bestStatePath[n + 1]), n + 1]
163 |
164 | return bestStatePath
165 |
--------------------------------------------------------------------------------
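A tiny end-to-end check of viterbiTrackingArray (illustrative, not from the repository): a two-state HMM with sticky transitions, with all quantities passed in the log domain as the docstring requires. The module's print statements use Python 2 syntax, so run this under Python 2 or modernize those prints first.

    import numpy as np
    from tracking import viterbiTrackingArray  # assumes src/ is on sys.path

    # 2 states, 4 frames: state 0 explains the first two frames, state 1 the last two
    logDensity = np.log(np.array([[0.9, 0.8, 0.2, 0.1],
                                  [0.1, 0.2, 0.8, 0.9]]))
    logPrior = np.log(np.array([0.5, 0.5]))
    logTrans = np.log(np.array([[0.9, 0.1],    # transitions favour staying in the same state
                                [0.1, 0.9]]))
    path = viterbiTrackingArray(logDensity, logPrior, logTrans)
    print(path)  # expected state sequence: 0, 0, 1, 1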
/src/utils.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | def loadMEFile(fileName):
4 | """Load a melody-extraction output file: column 0 is time, the remaining column(s) are frequencies."""
5 | try:
6 | a = np.loadtxt(fileName)
7 | except ValueError:
8 | # whitespace-separated parsing failed; retry assuming comma-separated values
9 | a = np.loadtxt(fileName, delimiter=',')
10 | if a.shape[1] > 2:
11 | est_freq = a[:, 1:]
12 | else:
13 | est_freq = a[:, 1]
14 | est_time = a[:, 0]
15 | return est_time, est_freq
--------------------------------------------------------------------------------
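A round-trip illustration of loadMEFile (not part of the repository; the file name is a placeholder): the expected layout is one row per frame, with time in seconds in column 0 and one or more frequency columns in Hz.

    import numpy as np
    from utils import loadMEFile  # assumes src/ is on sys.path

    # write a small two-column pitch file: time [s], f0 [Hz]
    demo = np.column_stack([np.arange(5) * 0.0058, np.full(5, 220.0)])
    np.savetxt('demo_pitches.txt', demo)

    est_time, est_freq = loadMEFile('demo_pitches.txt')
    print(est_time.shape, est_freq.shape)  # (5,) and (5,)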