├── .gitignore
├── .gitmodules
├── LICENSE-GPLv3.txt
├── README.md
├── __init__.py
├── brignet.py
├── loadskeleton.py
├── ob_utils
│   ├── binvox_rw.py
│   ├── geometry.py
│   ├── objects.py
│   └── sampling.py
├── postgen_utils
│   ├── __init__.py
│   ├── bone_mapping.py
│   └── bone_utils.py
├── preferences.py
├── rignetconnect.py
├── setup_utils
│   ├── cuda_utils.py
│   └── venv_utils.py
└── ui
    └── menus.py
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
2 | _additional_modules
3 |
--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
1 | [submodule "RigNet"]
2 | path = RigNet
3 | url = https://github.com/pKrime/RigNet.git
4 |
--------------------------------------------------------------------------------
/LICENSE-GPLv3.txt:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 |
4 | Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 |
8 | Preamble
9 |
10 | The GNU General Public License is a free, copyleft license for
11 | software and other kinds of works.
12 |
13 | The licenses for most software and other practical works are designed
14 | to take away your freedom to share and change the works. By contrast,
15 | the GNU General Public License is intended to guarantee your freedom to
16 | share and change all versions of a program--to make sure it remains free
17 | software for all its users. We, the Free Software Foundation, use the
18 | GNU General Public License for most of our software; it applies also to
19 | any other work released this way by its authors. You can apply it to
20 | your programs, too.
21 |
22 | When we speak of free software, we are referring to freedom, not
23 | price. Our General Public Licenses are designed to make sure that you
24 | have the freedom to distribute copies of free software (and charge for
25 | them if you wish), that you receive source code or can get it if you
26 | want it, that you can change the software or use pieces of it in new
27 | free programs, and that you know you can do these things.
28 |
29 | To protect your rights, we need to prevent others from denying you
30 | these rights or asking you to surrender the rights. Therefore, you have
31 | certain responsibilities if you distribute copies of the software, or if
32 | you modify it: responsibilities to respect the freedom of others.
33 |
34 | For example, if you distribute copies of such a program, whether
35 | gratis or for a fee, you must pass on to the recipients the same
36 | freedoms that you received. You must make sure that they, too, receive
37 | or can get the source code. And you must show them these terms so they
38 | know their rights.
39 |
40 | Developers that use the GNU GPL protect your rights with two steps:
41 | (1) assert copyright on the software, and (2) offer you this License
42 | giving you legal permission to copy, distribute and/or modify it.
43 |
44 | For the developers' and authors' protection, the GPL clearly explains
45 | that there is no warranty for this free software. For both users' and
46 | authors' sake, the GPL requires that modified versions be marked as
47 | changed, so that their problems will not be attributed erroneously to
48 | authors of previous versions.
49 |
50 | Some devices are designed to deny users access to install or run
51 | modified versions of the software inside them, although the manufacturer
52 | can do so. This is fundamentally incompatible with the aim of
53 | protecting users' freedom to change the software. The systematic
54 | pattern of such abuse occurs in the area of products for individuals to
55 | use, which is precisely where it is most unacceptable. Therefore, we
56 | have designed this version of the GPL to prohibit the practice for those
57 | products. If such problems arise substantially in other domains, we
58 | stand ready to extend this provision to those domains in future versions
59 | of the GPL, as needed to protect the freedom of users.
60 |
61 | Finally, every program is threatened constantly by software patents.
62 | States should not allow patents to restrict development and use of
63 | software on general-purpose computers, but in those that do, we wish to
64 | avoid the special danger that patents applied to a free program could
65 | make it effectively proprietary. To prevent this, the GPL assures that
66 | patents cannot be used to render the program non-free.
67 |
68 | The precise terms and conditions for copying, distribution and
69 | modification follow.
70 |
71 | TERMS AND CONDITIONS
72 |
73 | 0. Definitions.
74 |
75 | "This License" refers to version 3 of the GNU General Public License.
76 |
77 | "Copyright" also means copyright-like laws that apply to other kinds of
78 | works, such as semiconductor masks.
79 |
80 | "The Program" refers to any copyrightable work licensed under this
81 | License. Each licensee is addressed as "you". "Licensees" and
82 | "recipients" may be individuals or organizations.
83 |
84 | To "modify" a work means to copy from or adapt all or part of the work
85 | in a fashion requiring copyright permission, other than the making of an
86 | exact copy. The resulting work is called a "modified version" of the
87 | earlier work or a work "based on" the earlier work.
88 |
89 | A "covered work" means either the unmodified Program or a work based
90 | on the Program.
91 |
92 | To "propagate" a work means to do anything with it that, without
93 | permission, would make you directly or secondarily liable for
94 | infringement under applicable copyright law, except executing it on a
95 | computer or modifying a private copy. Propagation includes copying,
96 | distribution (with or without modification), making available to the
97 | public, and in some countries other activities as well.
98 |
99 | To "convey" a work means any kind of propagation that enables other
100 | parties to make or receive copies. Mere interaction with a user through
101 | a computer network, with no transfer of a copy, is not conveying.
102 |
103 | An interactive user interface displays "Appropriate Legal Notices"
104 | to the extent that it includes a convenient and prominently visible
105 | feature that (1) displays an appropriate copyright notice, and (2)
106 | tells the user that there is no warranty for the work (except to the
107 | extent that warranties are provided), that licensees may convey the
108 | work under this License, and how to view a copy of this License. If
109 | the interface presents a list of user commands or options, such as a
110 | menu, a prominent item in the list meets this criterion.
111 |
112 | 1. Source Code.
113 |
114 | The "source code" for a work means the preferred form of the work
115 | for making modifications to it. "Object code" means any non-source
116 | form of a work.
117 |
118 | A "Standard Interface" means an interface that either is an official
119 | standard defined by a recognized standards body, or, in the case of
120 | interfaces specified for a particular programming language, one that
121 | is widely used among developers working in that language.
122 |
123 | The "System Libraries" of an executable work include anything, other
124 | than the work as a whole, that (a) is included in the normal form of
125 | packaging a Major Component, but which is not part of that Major
126 | Component, and (b) serves only to enable use of the work with that
127 | Major Component, or to implement a Standard Interface for which an
128 | implementation is available to the public in source code form. A
129 | "Major Component", in this context, means a major essential component
130 | (kernel, window system, and so on) of the specific operating system
131 | (if any) on which the executable work runs, or a compiler used to
132 | produce the work, or an object code interpreter used to run it.
133 |
134 | The "Corresponding Source" for a work in object code form means all
135 | the source code needed to generate, install, and (for an executable
136 | work) run the object code and to modify the work, including scripts to
137 | control those activities. However, it does not include the work's
138 | System Libraries, or general-purpose tools or generally available free
139 | programs which are used unmodified in performing those activities but
140 | which are not part of the work. For example, Corresponding Source
141 | includes interface definition files associated with source files for
142 | the work, and the source code for shared libraries and dynamically
143 | linked subprograms that the work is specifically designed to require,
144 | such as by intimate data communication or control flow between those
145 | subprograms and other parts of the work.
146 |
147 | The Corresponding Source need not include anything that users
148 | can regenerate automatically from other parts of the Corresponding
149 | Source.
150 |
151 | The Corresponding Source for a work in source code form is that
152 | same work.
153 |
154 | 2. Basic Permissions.
155 |
156 | All rights granted under this License are granted for the term of
157 | copyright on the Program, and are irrevocable provided the stated
158 | conditions are met. This License explicitly affirms your unlimited
159 | permission to run the unmodified Program. The output from running a
160 | covered work is covered by this License only if the output, given its
161 | content, constitutes a covered work. This License acknowledges your
162 | rights of fair use or other equivalent, as provided by copyright law.
163 |
164 | You may make, run and propagate covered works that you do not
165 | convey, without conditions so long as your license otherwise remains
166 | in force. You may convey covered works to others for the sole purpose
167 | of having them make modifications exclusively for you, or provide you
168 | with facilities for running those works, provided that you comply with
169 | the terms of this License in conveying all material for which you do
170 | not control copyright. Those thus making or running the covered works
171 | for you must do so exclusively on your behalf, under your direction
172 | and control, on terms that prohibit them from making any copies of
173 | your copyrighted material outside their relationship with you.
174 |
175 | Conveying under any other circumstances is permitted solely under
176 | the conditions stated below. Sublicensing is not allowed; section 10
177 | makes it unnecessary.
178 |
179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
180 |
181 | No covered work shall be deemed part of an effective technological
182 | measure under any applicable law fulfilling obligations under article
183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or
184 | similar laws prohibiting or restricting circumvention of such
185 | measures.
186 |
187 | When you convey a covered work, you waive any legal power to forbid
188 | circumvention of technological measures to the extent such circumvention
189 | is effected by exercising rights under this License with respect to
190 | the covered work, and you disclaim any intention to limit operation or
191 | modification of the work as a means of enforcing, against the work's
192 | users, your or third parties' legal rights to forbid circumvention of
193 | technological measures.
194 |
195 | 4. Conveying Verbatim Copies.
196 |
197 | You may convey verbatim copies of the Program's source code as you
198 | receive it, in any medium, provided that you conspicuously and
199 | appropriately publish on each copy an appropriate copyright notice;
200 | keep intact all notices stating that this License and any
201 | non-permissive terms added in accord with section 7 apply to the code;
202 | keep intact all notices of the absence of any warranty; and give all
203 | recipients a copy of this License along with the Program.
204 |
205 | You may charge any price or no price for each copy that you convey,
206 | and you may offer support or warranty protection for a fee.
207 |
208 | 5. Conveying Modified Source Versions.
209 |
210 | You may convey a work based on the Program, or the modifications to
211 | produce it from the Program, in the form of source code under the
212 | terms of section 4, provided that you also meet all of these conditions:
213 |
214 | a) The work must carry prominent notices stating that you modified
215 | it, and giving a relevant date.
216 |
217 | b) The work must carry prominent notices stating that it is
218 | released under this License and any conditions added under section
219 | 7. This requirement modifies the requirement in section 4 to
220 | "keep intact all notices".
221 |
222 | c) You must license the entire work, as a whole, under this
223 | License to anyone who comes into possession of a copy. This
224 | License will therefore apply, along with any applicable section 7
225 | additional terms, to the whole of the work, and all its parts,
226 | regardless of how they are packaged. This License gives no
227 | permission to license the work in any other way, but it does not
228 | invalidate such permission if you have separately received it.
229 |
230 | d) If the work has interactive user interfaces, each must display
231 | Appropriate Legal Notices; however, if the Program has interactive
232 | interfaces that do not display Appropriate Legal Notices, your
233 | work need not make them do so.
234 |
235 | A compilation of a covered work with other separate and independent
236 | works, which are not by their nature extensions of the covered work,
237 | and which are not combined with it such as to form a larger program,
238 | in or on a volume of a storage or distribution medium, is called an
239 | "aggregate" if the compilation and its resulting copyright are not
240 | used to limit the access or legal rights of the compilation's users
241 | beyond what the individual works permit. Inclusion of a covered work
242 | in an aggregate does not cause this License to apply to the other
243 | parts of the aggregate.
244 |
245 | 6. Conveying Non-Source Forms.
246 |
247 | You may convey a covered work in object code form under the terms
248 | of sections 4 and 5, provided that you also convey the
249 | machine-readable Corresponding Source under the terms of this License,
250 | in one of these ways:
251 |
252 | a) Convey the object code in, or embodied in, a physical product
253 | (including a physical distribution medium), accompanied by the
254 | Corresponding Source fixed on a durable physical medium
255 | customarily used for software interchange.
256 |
257 | b) Convey the object code in, or embodied in, a physical product
258 | (including a physical distribution medium), accompanied by a
259 | written offer, valid for at least three years and valid for as
260 | long as you offer spare parts or customer support for that product
261 | model, to give anyone who possesses the object code either (1) a
262 | copy of the Corresponding Source for all the software in the
263 | product that is covered by this License, on a durable physical
264 | medium customarily used for software interchange, for a price no
265 | more than your reasonable cost of physically performing this
266 | conveying of source, or (2) access to copy the
267 | Corresponding Source from a network server at no charge.
268 |
269 | c) Convey individual copies of the object code with a copy of the
270 | written offer to provide the Corresponding Source. This
271 | alternative is allowed only occasionally and noncommercially, and
272 | only if you received the object code with such an offer, in accord
273 | with subsection 6b.
274 |
275 | d) Convey the object code by offering access from a designated
276 | place (gratis or for a charge), and offer equivalent access to the
277 | Corresponding Source in the same way through the same place at no
278 | further charge. You need not require recipients to copy the
279 | Corresponding Source along with the object code. If the place to
280 | copy the object code is a network server, the Corresponding Source
281 | may be on a different server (operated by you or a third party)
282 | that supports equivalent copying facilities, provided you maintain
283 | clear directions next to the object code saying where to find the
284 | Corresponding Source. Regardless of what server hosts the
285 | Corresponding Source, you remain obligated to ensure that it is
286 | available for as long as needed to satisfy these requirements.
287 |
288 | e) Convey the object code using peer-to-peer transmission, provided
289 | you inform other peers where the object code and Corresponding
290 | Source of the work are being offered to the general public at no
291 | charge under subsection 6d.
292 |
293 | A separable portion of the object code, whose source code is excluded
294 | from the Corresponding Source as a System Library, need not be
295 | included in conveying the object code work.
296 |
297 | A "User Product" is either (1) a "consumer product", which means any
298 | tangible personal property which is normally used for personal, family,
299 | or household purposes, or (2) anything designed or sold for incorporation
300 | into a dwelling. In determining whether a product is a consumer product,
301 | doubtful cases shall be resolved in favor of coverage. For a particular
302 | product received by a particular user, "normally used" refers to a
303 | typical or common use of that class of product, regardless of the status
304 | of the particular user or of the way in which the particular user
305 | actually uses, or expects or is expected to use, the product. A product
306 | is a consumer product regardless of whether the product has substantial
307 | commercial, industrial or non-consumer uses, unless such uses represent
308 | the only significant mode of use of the product.
309 |
310 | "Installation Information" for a User Product means any methods,
311 | procedures, authorization keys, or other information required to install
312 | and execute modified versions of a covered work in that User Product from
313 | a modified version of its Corresponding Source. The information must
314 | suffice to ensure that the continued functioning of the modified object
315 | code is in no case prevented or interfered with solely because
316 | modification has been made.
317 |
318 | If you convey an object code work under this section in, or with, or
319 | specifically for use in, a User Product, and the conveying occurs as
320 | part of a transaction in which the right of possession and use of the
321 | User Product is transferred to the recipient in perpetuity or for a
322 | fixed term (regardless of how the transaction is characterized), the
323 | Corresponding Source conveyed under this section must be accompanied
324 | by the Installation Information. But this requirement does not apply
325 | if neither you nor any third party retains the ability to install
326 | modified object code on the User Product (for example, the work has
327 | been installed in ROM).
328 |
329 | The requirement to provide Installation Information does not include a
330 | requirement to continue to provide support service, warranty, or updates
331 | for a work that has been modified or installed by the recipient, or for
332 | the User Product in which it has been modified or installed. Access to a
333 | network may be denied when the modification itself materially and
334 | adversely affects the operation of the network or violates the rules and
335 | protocols for communication across the network.
336 |
337 | Corresponding Source conveyed, and Installation Information provided,
338 | in accord with this section must be in a format that is publicly
339 | documented (and with an implementation available to the public in
340 | source code form), and must require no special password or key for
341 | unpacking, reading or copying.
342 |
343 | 7. Additional Terms.
344 |
345 | "Additional permissions" are terms that supplement the terms of this
346 | License by making exceptions from one or more of its conditions.
347 | Additional permissions that are applicable to the entire Program shall
348 | be treated as though they were included in this License, to the extent
349 | that they are valid under applicable law. If additional permissions
350 | apply only to part of the Program, that part may be used separately
351 | under those permissions, but the entire Program remains governed by
352 | this License without regard to the additional permissions.
353 |
354 | When you convey a copy of a covered work, you may at your option
355 | remove any additional permissions from that copy, or from any part of
356 | it. (Additional permissions may be written to require their own
357 | removal in certain cases when you modify the work.) You may place
358 | additional permissions on material, added by you to a covered work,
359 | for which you have or can give appropriate copyright permission.
360 |
361 | Notwithstanding any other provision of this License, for material you
362 | add to a covered work, you may (if authorized by the copyright holders of
363 | that material) supplement the terms of this License with terms:
364 |
365 | a) Disclaiming warranty or limiting liability differently from the
366 | terms of sections 15 and 16 of this License; or
367 |
368 | b) Requiring preservation of specified reasonable legal notices or
369 | author attributions in that material or in the Appropriate Legal
370 | Notices displayed by works containing it; or
371 |
372 | c) Prohibiting misrepresentation of the origin of that material, or
373 | requiring that modified versions of such material be marked in
374 | reasonable ways as different from the original version; or
375 |
376 | d) Limiting the use for publicity purposes of names of licensors or
377 | authors of the material; or
378 |
379 | e) Declining to grant rights under trademark law for use of some
380 | trade names, trademarks, or service marks; or
381 |
382 | f) Requiring indemnification of licensors and authors of that
383 | material by anyone who conveys the material (or modified versions of
384 | it) with contractual assumptions of liability to the recipient, for
385 | any liability that these contractual assumptions directly impose on
386 | those licensors and authors.
387 |
388 | All other non-permissive additional terms are considered "further
389 | restrictions" within the meaning of section 10. If the Program as you
390 | received it, or any part of it, contains a notice stating that it is
391 | governed by this License along with a term that is a further
392 | restriction, you may remove that term. If a license document contains
393 | a further restriction but permits relicensing or conveying under this
394 | License, you may add to a covered work material governed by the terms
395 | of that license document, provided that the further restriction does
396 | not survive such relicensing or conveying.
397 |
398 | If you add terms to a covered work in accord with this section, you
399 | must place, in the relevant source files, a statement of the
400 | additional terms that apply to those files, or a notice indicating
401 | where to find the applicable terms.
402 |
403 | Additional terms, permissive or non-permissive, may be stated in the
404 | form of a separately written license, or stated as exceptions;
405 | the above requirements apply either way.
406 |
407 | 8. Termination.
408 |
409 | You may not propagate or modify a covered work except as expressly
410 | provided under this License. Any attempt otherwise to propagate or
411 | modify it is void, and will automatically terminate your rights under
412 | this License (including any patent licenses granted under the third
413 | paragraph of section 11).
414 |
415 | However, if you cease all violation of this License, then your
416 | license from a particular copyright holder is reinstated (a)
417 | provisionally, unless and until the copyright holder explicitly and
418 | finally terminates your license, and (b) permanently, if the copyright
419 | holder fails to notify you of the violation by some reasonable means
420 | prior to 60 days after the cessation.
421 |
422 | Moreover, your license from a particular copyright holder is
423 | reinstated permanently if the copyright holder notifies you of the
424 | violation by some reasonable means, this is the first time you have
425 | received notice of violation of this License (for any work) from that
426 | copyright holder, and you cure the violation prior to 30 days after
427 | your receipt of the notice.
428 |
429 | Termination of your rights under this section does not terminate the
430 | licenses of parties who have received copies or rights from you under
431 | this License. If your rights have been terminated and not permanently
432 | reinstated, you do not qualify to receive new licenses for the same
433 | material under section 10.
434 |
435 | 9. Acceptance Not Required for Having Copies.
436 |
437 | You are not required to accept this License in order to receive or
438 | run a copy of the Program. Ancillary propagation of a covered work
439 | occurring solely as a consequence of using peer-to-peer transmission
440 | to receive a copy likewise does not require acceptance. However,
441 | nothing other than this License grants you permission to propagate or
442 | modify any covered work. These actions infringe copyright if you do
443 | not accept this License. Therefore, by modifying or propagating a
444 | covered work, you indicate your acceptance of this License to do so.
445 |
446 | 10. Automatic Licensing of Downstream Recipients.
447 |
448 | Each time you convey a covered work, the recipient automatically
449 | receives a license from the original licensors, to run, modify and
450 | propagate that work, subject to this License. You are not responsible
451 | for enforcing compliance by third parties with this License.
452 |
453 | An "entity transaction" is a transaction transferring control of an
454 | organization, or substantially all assets of one, or subdividing an
455 | organization, or merging organizations. If propagation of a covered
456 | work results from an entity transaction, each party to that
457 | transaction who receives a copy of the work also receives whatever
458 | licenses to the work the party's predecessor in interest had or could
459 | give under the previous paragraph, plus a right to possession of the
460 | Corresponding Source of the work from the predecessor in interest, if
461 | the predecessor has it or can get it with reasonable efforts.
462 |
463 | You may not impose any further restrictions on the exercise of the
464 | rights granted or affirmed under this License. For example, you may
465 | not impose a license fee, royalty, or other charge for exercise of
466 | rights granted under this License, and you may not initiate litigation
467 | (including a cross-claim or counterclaim in a lawsuit) alleging that
468 | any patent claim is infringed by making, using, selling, offering for
469 | sale, or importing the Program or any portion of it.
470 |
471 | 11. Patents.
472 |
473 | A "contributor" is a copyright holder who authorizes use under this
474 | License of the Program or a work on which the Program is based. The
475 | work thus licensed is called the contributor's "contributor version".
476 |
477 | A contributor's "essential patent claims" are all patent claims
478 | owned or controlled by the contributor, whether already acquired or
479 | hereafter acquired, that would be infringed by some manner, permitted
480 | by this License, of making, using, or selling its contributor version,
481 | but do not include claims that would be infringed only as a
482 | consequence of further modification of the contributor version. For
483 | purposes of this definition, "control" includes the right to grant
484 | patent sublicenses in a manner consistent with the requirements of
485 | this License.
486 |
487 | Each contributor grants you a non-exclusive, worldwide, royalty-free
488 | patent license under the contributor's essential patent claims, to
489 | make, use, sell, offer for sale, import and otherwise run, modify and
490 | propagate the contents of its contributor version.
491 |
492 | In the following three paragraphs, a "patent license" is any express
493 | agreement or commitment, however denominated, not to enforce a patent
494 | (such as an express permission to practice a patent or covenant not to
495 | sue for patent infringement). To "grant" such a patent license to a
496 | party means to make such an agreement or commitment not to enforce a
497 | patent against the party.
498 |
499 | If you convey a covered work, knowingly relying on a patent license,
500 | and the Corresponding Source of the work is not available for anyone
501 | to copy, free of charge and under the terms of this License, through a
502 | publicly available network server or other readily accessible means,
503 | then you must either (1) cause the Corresponding Source to be so
504 | available, or (2) arrange to deprive yourself of the benefit of the
505 | patent license for this particular work, or (3) arrange, in a manner
506 | consistent with the requirements of this License, to extend the patent
507 | license to downstream recipients. "Knowingly relying" means you have
508 | actual knowledge that, but for the patent license, your conveying the
509 | covered work in a country, or your recipient's use of the covered work
510 | in a country, would infringe one or more identifiable patents in that
511 | country that you have reason to believe are valid.
512 |
513 | If, pursuant to or in connection with a single transaction or
514 | arrangement, you convey, or propagate by procuring conveyance of, a
515 | covered work, and grant a patent license to some of the parties
516 | receiving the covered work authorizing them to use, propagate, modify
517 | or convey a specific copy of the covered work, then the patent license
518 | you grant is automatically extended to all recipients of the covered
519 | work and works based on it.
520 |
521 | A patent license is "discriminatory" if it does not include within
522 | the scope of its coverage, prohibits the exercise of, or is
523 | conditioned on the non-exercise of one or more of the rights that are
524 | specifically granted under this License. You may not convey a covered
525 | work if you are a party to an arrangement with a third party that is
526 | in the business of distributing software, under which you make payment
527 | to the third party based on the extent of your activity of conveying
528 | the work, and under which the third party grants, to any of the
529 | parties who would receive the covered work from you, a discriminatory
530 | patent license (a) in connection with copies of the covered work
531 | conveyed by you (or copies made from those copies), or (b) primarily
532 | for and in connection with specific products or compilations that
533 | contain the covered work, unless you entered into that arrangement,
534 | or that patent license was granted, prior to 28 March 2007.
535 |
536 | Nothing in this License shall be construed as excluding or limiting
537 | any implied license or other defenses to infringement that may
538 | otherwise be available to you under applicable patent law.
539 |
540 | 12. No Surrender of Others' Freedom.
541 |
542 | If conditions are imposed on you (whether by court order, agreement or
543 | otherwise) that contradict the conditions of this License, they do not
544 | excuse you from the conditions of this License. If you cannot convey a
545 | covered work so as to satisfy simultaneously your obligations under this
546 | License and any other pertinent obligations, then as a consequence you may
547 | not convey it at all. For example, if you agree to terms that obligate you
548 | to collect a royalty for further conveying from those to whom you convey
549 | the Program, the only way you could satisfy both those terms and this
550 | License would be to refrain entirely from conveying the Program.
551 |
552 | 13. Use with the GNU Affero General Public License.
553 |
554 | Notwithstanding any other provision of this License, you have
555 | permission to link or combine any covered work with a work licensed
556 | under version 3 of the GNU Affero General Public License into a single
557 | combined work, and to convey the resulting work. The terms of this
558 | License will continue to apply to the part which is the covered work,
559 | but the special requirements of the GNU Affero General Public License,
560 | section 13, concerning interaction through a network will apply to the
561 | combination as such.
562 |
563 | 14. Revised Versions of this License.
564 |
565 | The Free Software Foundation may publish revised and/or new versions of
566 | the GNU General Public License from time to time. Such new versions will
567 | be similar in spirit to the present version, but may differ in detail to
568 | address new problems or concerns.
569 |
570 | Each version is given a distinguishing version number. If the
571 | Program specifies that a certain numbered version of the GNU General
572 | Public License "or any later version" applies to it, you have the
573 | option of following the terms and conditions either of that numbered
574 | version or of any later version published by the Free Software
575 | Foundation. If the Program does not specify a version number of the
576 | GNU General Public License, you may choose any version ever published
577 | by the Free Software Foundation.
578 |
579 | If the Program specifies that a proxy can decide which future
580 | versions of the GNU General Public License can be used, that proxy's
581 | public statement of acceptance of a version permanently authorizes you
582 | to choose that version for the Program.
583 |
584 | Later license versions may give you additional or different
585 | permissions. However, no additional obligations are imposed on any
586 | author or copyright holder as a result of your choosing to follow a
587 | later version.
588 |
589 | 15. Disclaimer of Warranty.
590 |
591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599 |
600 | 16. Limitation of Liability.
601 |
602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610 | SUCH DAMAGES.
611 |
612 | 17. Interpretation of Sections 15 and 16.
613 |
614 | If the disclaimer of warranty and limitation of liability provided
615 | above cannot be given local legal effect according to their terms,
616 | reviewing courts shall apply local law that most closely approximates
617 | an absolute waiver of all civil liability in connection with the
618 | Program, unless a warranty or assumption of liability accompanies a
619 | copy of the Program in return for a fee.
620 |
621 | END OF TERMS AND CONDITIONS
622 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | bRigNet
2 | ---------
3 | Neural Rigging for [blender](https://www.blender.org/ "Blender Home Page") using [RigNet](https://zhan-xu.github.io/rig-net/ "RigNet Home Page")
4 |
5 | **THIS ADD-ON IS DEAD AND WILL HARDLY WORK. LET'S DISCUSS BRINGING IT BACK IN THE ISSUES AND DISCUSSIONS PAGES**
6 | -------------------------------------------------------------------------------------------------------------------
7 |
8 |
9 | Blender is the open source 3D application from the Blender Foundation. RigNet is a machine-learning method that
10 | predicts skeletons and skinning weights for articulated characters. It is dual-licensed: GPL3 for open source projects, commercial otherwise.
11 | It was presented in the following papers:
12 |
13 | ```
14 | @InProceedings{AnimSkelVolNet,
15 | title={Predicting Animation Skeletons for 3D Articulated Models via Volumetric Nets},
16 | author={Zhan Xu and Yang Zhou and Evangelos Kalogerakis and Karan Singh},
17 | booktitle={2019 International Conference on 3D Vision (3DV)},
18 | year={2019}
19 | }
20 | ```
21 |
22 | ```
23 | @article{RigNet,
24 | title={RigNet: Neural Rigging for Articulated Characters},
25 | author={Zhan Xu and Yang Zhou and Evangelos Kalogerakis and Chris Landreth and Karan Singh},
26 | journal={ACM Trans. on Graphics},
27 | year={2020},
28 | volume={39}
29 | }
30 | ```
31 |
32 |
33 | ## Setup
34 |
35 | bRigNet requires SciPy, PyTorch and torch-geometric, along with torch-scatter and torch-sparse.
36 |
37 | ## Installation
38 |
39 | Download the Neural Rigging add-on as a .zip file and install it from the Blender add-ons window,
40 | or copy the code to the Blender scripts path (including the RigNet submodule)
41 |
42 | ### Install dependencies via "Install" button
43 |
44 | At present, the CUDA toolkit from NVIDIA is required; it can be found at the
45 | [manufacturer website](https://developer.nvidia.com)
46 |
47 | A dependency installer is available in the preferences.
48 |
49 | * Install CUDA. At present, prebuilt packages support versions 10.1, 10.2 and 11.1
50 | * In the addon preferences, make sure that the CUDA version is detected correctly.
51 | * Hit the "Install" button. It can take time!
52 |
53 | Alternatively, environment managers like conda or virtualenv can be used to ease the install.
54 |
55 | ### Install dependencies using *conda*
56 |
57 | Anaconda is a data science platform from Anaconda Inc., it can be downloaded from the
58 | [company website](https://www.anaconda.com/).
59 |
60 | A lightweight version called [Miniconda](https://docs.conda.io/en/latest/miniconda.html) is available.
61 | Both versions include the package manager 'conda' used in the following steps.
62 |
63 | - Open a Miniconda or Anaconda prompt
64 | - Create a Conda Environment and activate it
65 |
66 | ```
67 | conda create -n brignet_deps python=3.7
68 | conda activate brignet_deps
69 | ```
70 |
71 | - Install PyTorch. If CUDA is installed, the CUDA version can be queried in a command prompt. For example
72 |
73 | ```
74 | nvcc --version
75 | ```
76 | ```
77 | nvcc: NVIDIA (R) Cuda compiler driver
78 | Copyright (c) 2005-2019 NVIDIA Corporation
79 | Built on Wed_Oct_23_19:32:27_Pacific_Daylight_Time_2019
80 | Cuda compilation tools, release 10.2, V10.2.89
81 | ```
82 |
83 | In this case PyTorch can be installed in the command prompt via
84 |
85 | ```
86 | conda install pytorch==1.8.1 cudatoolkit=10.2 -c pytorch
87 | ```
88 |
89 | More complete information on the PyTorch command line can be found at the [PyTorch website](https://pytorch.org/).
90 | The install command on non-CUDA devices is
91 |
92 | ```
93 | conda install pytorch==1.8.1 cpuonly -c pytorch
94 | ```
95 |
96 | - Install torch utilities. The syntax follows the pattern
97 |
98 | ```
99 | pip install [package-name] -f https://pytorch-geometric.com/whl/torch-[version]+cu[cuda-version].html
100 | ```
101 |
102 | ```
103 | pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html
104 | pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html
105 | pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html
106 | pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html
107 | pip install torch-geometric
108 | ```
109 |
110 | Alternatively, pip can try to build the libraries from source. Even if part of torch-sparse fails to compile
111 | without a proper build environment, the relevant modules are usually built:
112 |
113 | ```
114 | pip install torch-scatter
115 | pip install torch-sparse
116 | pip install torch-geometric
117 | ```
118 |
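Whichever route is taken, the install can be verified by importing the modules with the
environment's interpreter. A minimal sanity check follows (a sketch; versions will vary):

```
import torch
import torch_geometric

print(torch.__version__)           # e.g. 1.8.1
print(torch.cuda.is_available())   # True when the CUDA build matches the installed driver
print(torch_geometric.__version__)
```
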
119 | The directory of each environment can be obtained via
120 |
121 | ```
122 | conda info --envs
123 | ```
124 |
125 | The environment directory can be set in the "Additional Modules" setting of the bRigNet preferences
126 |
127 | ### Install dependencies using virtualenv
128 |
129 | virtualenv can be used to create a Python environment with the required packages.
130 | First, Python 3.7 must be installed on the system. It can be found at https://www.python.org/downloads/
131 |
132 | Make sure that **Add Python 3.7 to PATH** is checked in the setup options.
133 |
134 | Usually, Python comes with its package manager (pip) installed. Please refer
135 | to the [pip documentation](https://pypi.org/project/pip/) if pip is not present on your system.
136 |
137 | The next step is to install virtualenv. Open a command prompt, navigate to the folder where the Python
138 | packages will be kept, and execute:
139 |
140 | ```
141 | pip install virtualenv
142 | ```
143 |
144 | then create the virtual environment and activate it
145 |
146 | ```
147 | virtualenv brignet_deps
148 | cd brignet_deps
149 | Scripts\activate
150 | ```
151 |
152 | Now we can install the torch library. At present, version 1.8.1 is provided, and
153 | torch-geometric provides prebuilt packages for CUDA 10.1, 10.2 and 11.1.
154 |
155 | CUDA 10.2 is used in this example:
156 |
157 | ```
158 | pip install torch==1.8.1+cu102 -f https://download.pytorch.org/whl/torch_stable.html
159 | pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html
160 | pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html
161 | pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html
162 | pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html
163 | pip install torch-geometric
164 | ```
165 |
166 | The virtual environment directory can be set as the "Additional modules path" in the bRigNet preferences.
167 |
168 | ## Usage
169 | Enable *bRigNet* in the Blender add-ons; the preferences will show up.
170 | Set the Modules path property to the RigNet environment created in the previous step.
171 |
172 | RigNet requires a trained model. The RigNet authors have made theirs available at [this address](https://umass-my.sharepoint.com/:u:/g/personal/zhanxu_umass_edu/EYKLCvYTWFJArehlo3-H2SgBABnY08B4k5Q14K7H1Hh0VA).
173 | The checkpoint folder can be copied to the RigNet subfolder.
174 | A different location can be set in the add-on preferences.
175 |
176 | #### Rig Generation
177 |
178 | The **bRigNet** tab will show up in the Viewport tools. Select a character mesh as target.
179 | Please make sure it doesn't exceed 5K triangles. You can use the *Decimate* modifier
180 | to reduce the polycount on a copy of the mesh, and select a *Collection* of high-res models
181 | to which the final weights will be transferred. A scripted equivalent is sketched below.
182 |
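The same steps can be driven from Blender's Python console. The following is a minimal
sketch based on the operators defined in `brignet.py`; it assumes the add-on is enabled
and its dependencies are installed:

```
import bpy

wm = bpy.context.window_manager

# gather the selected high-res meshes into a collection
bpy.ops.collection.brignet_collection()

# build the decimated proxy mesh from that collection
bpy.ops.object.brignet_remesh()

wm.brignet_predict_weights = True  # also predict skinning weights
wm.brignet_mirror_names = True     # rename symmetrical bones (left/right)

# run the modal prediction operator
bpy.ops.object.brignet_predict('INVOKE_DEFAULT')
```
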
183 | #### Load generated rigs
184 |
185 | Rigs generated using RigNet from the command line can be loaded via the **Load Skeleton** panel.
186 | Please select the *.obj and *.txt files and press the **Load Rignet character** button
187 |
188 | ## Training
189 |
190 | The Blender add-on doesn't cover training yet. If you want to train your own model, please follow the instructions
191 | from the [RigNet project](https://github.com/zhan-xu/RigNet#training).
192 |
193 | ## Disclaimer
194 |
195 | This Blender implementation of RigNet and the author of this add-on are NOT associated with the University of
196 | Massachusetts Amherst.
197 |
198 | This add-on has received a research grant from the Blender Foundation.
199 |
200 | ## License
201 |
202 | This add-on is released under the GNU General Public License version 3 (GPLv3).
203 | The RigNet subfolder is licensed under the GNU General Public License version 3 (GPLv3), or under a commercial license.
204 |
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
1 | # ====================== BEGIN GPL LICENSE BLOCK ======================
2 | #
3 | # This program is free software; you can redistribute it and/or
4 | # modify it under the terms of the GNU General Public License
5 | # as published by the Free Software Foundation, version 3.
6 | #
7 | # This program is distributed in the hope that it will be useful,
8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10 | # GNU General Public License for more details.
11 | #
12 | # You should have received a copy of the GNU General Public License
13 | # along with this program; if not, write to the Free Software Foundation,
14 | # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
15 | #
16 | # ======================= END GPL LICENSE BLOCK ========================
17 |
18 |
19 | bl_info = {
20 | "name": "Neural Rigging (RigNet)",
21 | "version": (0, 1, 0),
22 | "author": "Paolo Acampora",
23 | "blender": (2, 90, 0),
24 | "description": "Armature and Weights prediction using RigNet",
25 | "location": "3D Viewport",
26 | "doc_url": "https://github.com/pKrime/brignet",
27 | "category": "Rigging",
28 | }
29 |
30 | import bpy
31 |
32 | from . import brignet, preferences, loadskeleton
33 | from . import postgen_utils
34 | from .ui import menus
35 |
36 | from importlib import reload  # reload submodules when the add-on is re-registered
37 | try:
38 | reload(brignet)
39 | reload(preferences)
40 | reload(loadskeleton)
41 | reload(postgen_utils)
42 | except NameError:
43 | pass
44 |
45 | from .brignet import BrignetPanel, BrigNetPredict, BrignetRemesh, BrignetCollection
46 | from .preferences import BrignetPrefs, BrignetEnvironment
47 | from .loadskeleton import LoadRignetSkeleton, LoadSkeletonPanel
48 | from .postgen_utils import NamiFy, ExtractMetarig, SpineFix, MergeBones
49 |
50 |
51 | # REGISTER #
52 |
53 | def register():
54 | brignet.register_properties()
55 | bpy.utils.register_class(BrignetEnvironment)
56 | bpy.utils.register_class(BrignetPrefs)
57 | bpy.utils.register_class(BrignetCollection)
58 | bpy.utils.register_class(BrignetRemesh)
59 | bpy.utils.register_class(BrigNetPredict)
60 |
61 | bpy.utils.register_class(NamiFy)
62 | bpy.utils.register_class(ExtractMetarig)
63 | bpy.utils.register_class(SpineFix)
64 | bpy.utils.register_class(MergeBones)
65 |
66 | bpy.utils.register_class(BrignetPanel)
67 | bpy.utils.register_class(LoadRignetSkeleton)
68 | bpy.utils.register_class(LoadSkeletonPanel)
69 |
70 | BrignetPrefs.check_cuda()
71 | if not BrignetPrefs.add_module_paths():
72 |         print("Modules path not found, please set it in the bRigNet preferences")
73 | BrignetPrefs.check_modules()
74 |
75 | bpy.types.VIEW3D_MT_pose_context_menu.append(menus.pose_context_options)
76 |
77 |
78 | def unregister():
79 | try:
80 | from . import rignetconnect
81 | rignetconnect.clear()
82 | except ModuleNotFoundError:
83 | # if we have failed to load rignetconnect, we have no device to clear
84 | pass
85 |
86 | bpy.types.VIEW3D_MT_pose_context_menu.remove(menus.pose_context_options)
87 | BrignetPrefs.reset_module_paths()
88 |
89 | bpy.utils.unregister_class(BrignetPanel)
90 | bpy.utils.unregister_class(BrignetPrefs)
91 | bpy.utils.unregister_class(BrignetEnvironment)
92 | bpy.utils.unregister_class(BrignetCollection)
93 | bpy.utils.unregister_class(BrignetRemesh)
94 | bpy.utils.unregister_class(BrigNetPredict)
95 | bpy.utils.unregister_class(NamiFy)
96 | bpy.utils.unregister_class(ExtractMetarig)
97 | bpy.utils.unregister_class(SpineFix)
98 | bpy.utils.unregister_class(MergeBones)
99 | bpy.utils.unregister_class(LoadSkeletonPanel)
100 | bpy.utils.unregister_class(LoadRignetSkeleton)
101 | brignet.unregister_properties()
102 |
--------------------------------------------------------------------------------
/brignet.py:
--------------------------------------------------------------------------------
1 | from enum import Enum
2 |
3 | import bpy
4 | from bpy.props import BoolProperty, FloatProperty, PointerProperty, StringProperty
5 |
6 | from .ob_utils import objects
7 | from .postgen_utils.bone_utils import NameFix
8 |
9 | try:
10 | from . import rignetconnect
11 | except ModuleNotFoundError:
12 | pass
13 |
14 |
15 | class BrignetRemesh(bpy.types.Operator):
16 | """Create remeshed model from highres objects"""
17 | bl_idname = "object.brignet_remesh"
18 | bl_label = "Create Remesh model from Collection"
19 |
20 | @classmethod
21 | def poll(cls, context):
22 | wm = context.window_manager
23 | if not wm.brignet_highrescollection:
24 | return False
25 |
26 | return True
27 |
28 | def execute(self, context):
29 | wm = context.window_manager
30 | if wm.brignet_targetmesh:
31 | # remove previous mesh
32 | bpy.data.objects.remove(wm.brignet_targetmesh, do_unlink=True)
33 | new_ob = objects.mesh_from_collection(wm.brignet_highrescollection, name='brignet_remesh')
34 |
35 | remesh = new_ob.modifiers.new(name='remesh', type='REMESH')
36 | remesh.voxel_size = 0.01
37 |
38 | decimate = new_ob.modifiers.new(name='decimate', type='DECIMATE')
39 | decimate.use_collapse_triangulate = True
40 |
41 |         context.evaluated_depsgraph_get()  # evaluate the modifier stack so that face_count is up to date
42 |         decimate.ratio = 1800 / decimate.face_count  # aim for roughly 1800 faces
43 |
44 | new_ob.hide_render = True
45 | wm.brignet_targetmesh = new_ob
46 |
47 | collection_name = wm.brignet_highrescollection.name
48 | view_layer = bpy.context.view_layer.layer_collection.children.get(collection_name)
49 | view_layer.hide_viewport = True
50 |
51 | for ob in bpy.data.collections[collection_name].all_objects:
52 | ob.hide_set(True)
53 |
54 | return {'FINISHED'}
55 |
56 |
57 | class BrignetCollection(bpy.types.Operator):
58 | """Create collection from selected objects"""
59 | bl_idname = 'collection.brignet_collection'
60 | bl_label = 'Create collection from selected objects'
61 |
62 | @classmethod
63 | def poll(cls, context):
64 | if not context.selected_objects:
65 | return False
66 | if not next((ob for ob in context.selected_objects if ob.type == 'MESH'), None):
67 | return False
68 | return True
69 |
70 | def execute(self, context):
71 | collection = bpy.data.collections.new("BrignetGeometry")
72 | for ob in context.selected_objects:
73 | if ob.type != 'MESH':
74 | continue
75 | collection.objects.link(ob)
76 |
77 | bpy.context.scene.collection.children.link(collection)
78 | context.window_manager.brignet_highrescollection = collection
79 |
80 | return {'FINISHED'}
81 |
82 |
83 | class PredictSteps(Enum):
84 | NotStarted = 0
85 | Loading_Networks = 1
86 | Creating_Data = 2
87 | Predicting_Joints = 3
88 | Predicting_Hierarchy = 4
89 | Predicting_Weights = 5
90 | Creating_Armature = 6
91 | Post_Generation = 7
92 | Finished = 8
93 |
94 | @staticmethod
95 | def last():
96 | return PredictSteps.Finished
97 |
98 | @property
99 | def icon(self):
100 | if self.value == self.Loading_Networks.value:
101 | return 'NETWORK_DRIVE'
102 | if self.value == self.Creating_Data.value:
103 | return 'OUTLINER_DATA_POINTCLOUD'
104 | if self.value == self.Predicting_Joints.value:
105 | return 'BONE_DATA'
106 | if self.value == self.Predicting_Hierarchy.value:
107 | return 'ARMATURE_DATA'
108 | if self.value == self.Predicting_Weights.value:
109 | return 'OUTLINER_OB_ARMATURE'
110 | if self.value == self.Creating_Armature.value:
111 | return 'SCENE_DATA'
112 | if self.value == self.Finished.value:
113 | return 'CHECKMARK'
114 | return 'NONE'
115 |
116 | @property
117 | def nice_name(self):
118 | nice_name = self.name.replace('_', ' ')
119 |
120 | if self.value == self.Creating_Data.value:
121 | nice_name += " (takes a while...)"
122 |
123 | return nice_name
124 |
125 |
126 | class BrigNetPredict(bpy.types.Operator):
127 | """Predict joint position of chosen mesh using a trained model"""
128 | bl_idname = "object.brignet_predict"
129 | bl_label = "Predict Rig"
130 |
131 | bandwidth: FloatProperty()
132 | threshold: FloatProperty()
133 |     current_step = PredictSteps.NotStarted
134 |
135 | _timer = None
136 | _networks = None
137 | _mesh_storage = None
138 | _pred_data = None
139 | _pred_skeleton = None
140 | _pred_rig = None
141 | _armature = None
142 |
143 | @classmethod
144 | def poll(cls, context):
145 | modules_found = bpy.context.preferences.addons[__package__].preferences.modules_found
146 | if not modules_found:
147 | # TODO: we should rather gray out the whole panel and display a warning
148 | return False
149 |
150 | wm = context.window_manager
151 | if not wm.brignet_targetmesh:
152 | return False
153 |
154 | return wm.brignet_targetmesh.type == 'MESH'
155 |
156 | def clean_up(self, context):
157 | wm = context.window_manager
158 | wm.event_timer_remove(self._timer)
159 | wm.brignet_current_progress = 0.0
160 | rignetconnect.clear()
161 |
162 | def modal(self, context, event):
163 | """Go through the prediction steps and show feedback"""
164 | context.area.tag_redraw()
165 | if event.type == 'ESC':
166 | self.clean_up(context)
167 | return {'CANCELLED'}
168 |
169 | wm = context.window_manager
170 | wm.brignet_targetmesh.hide_set(False) # hidden target mesh might cause crashes
171 | if self.current_step == PredictSteps.Loading_Networks:
172 | self._networks = rignetconnect.Networks(load_skinning=wm.brignet_predict_weights)
173 | elif self.current_step == PredictSteps.Creating_Data:
174 | self._pred_data, self._mesh_storage = rignetconnect.init_data(wm.brignet_targetmesh, wm.brignet_samples)
175 | elif self.current_step == PredictSteps.Predicting_Joints:
176 | self._pred_data = rignetconnect.predict_joint(self._pred_data, self._networks.joint_net, self._mesh_storage,
177 | self.bandwidth, self.threshold)
178 | elif self.current_step == PredictSteps.Predicting_Hierarchy:
179 | self._pred_skeleton = rignetconnect.predict_hierarchy(self._pred_data, self._networks, self._mesh_storage)
180 | elif self.current_step == PredictSteps.Predicting_Weights:
181 | if wm.brignet_predict_weights:
182 | self._pred_rig = rignetconnect.predict_weights(self._pred_data, self._pred_skeleton,
183 | self._networks.skin_net, self._mesh_storage)
184 | else:
185 | self._pred_rig = self._pred_skeleton
186 | mesh_data = self._mesh_storage.mesh_data
187 | self._pred_rig.normalize(mesh_data.scale_normalize, -mesh_data.translation_normalize)
188 | elif self.current_step == PredictSteps.Creating_Armature:
189 | self._armature = rignetconnect.create_armature(wm.brignet_targetmesh, self._pred_rig)
190 | elif self.current_step == PredictSteps.Post_Generation and self._armature:
191 | if wm.brignet_mirror_names:
192 | renamer = NameFix(self._armature)
193 | renamer.name_left_right()
194 | elif self.current_step == PredictSteps.Finished:
195 | self.clean_up(context)
196 |
197 | if wm.brignet_highrescollection:
198 | wm.brignet_highrescollection.hide_viewport = False
199 | objects.copy_weights(wm.brignet_highrescollection.objects, wm.brignet_targetmesh)
200 |
201 | for ob in wm.brignet_highrescollection.all_objects:
202 | ob.hide_set(False)
203 | wm.brignet_targetmesh.hide_set(True)
204 |
205 | return {'FINISHED'}
206 |
207 | # Advance current state
208 | try:
209 | self.current_step = PredictSteps(self.current_step.value + 1)
210 | wm.brignet_current_progress = self.current_step.value
211 | except ValueError:
212 | self.clean_up(context)
213 | return {'FINISHED'}
214 |
215 | return {'INTERFACE'}
216 |
217 | def invoke(self, context, event):
218 | # we import the rignet module here to prevent import errors before the dependencies are installed
219 | global rignetconnect
220 | from . import rignetconnect
221 |
222 | wm = context.window_manager
223 |
224 |         self.bandwidth = (1 - wm.brignet_density) / 10  # higher density -> narrower clustering bandwidth for joint prediction
225 |         self.threshold = wm.brignet_threshold / 1000  # scale the UI value down for the joint prediction
226 | self.current_step = PredictSteps(0)
227 |
228 | # timer event makes sure that the modal script is executed even without user interaction
229 | self._timer = wm.event_timer_add(0.1, window=context.window)
230 | wm.modal_handler_add(self)
231 |
232 | return {'RUNNING_MODAL'}
233 |
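# A minimal sketch of the timer-driven modal pattern used by the operator
# above (names below are placeholders, not part of this add-on): a timer added
# with `event_timer_add` keeps firing TIMER events, so `modal` can advance one
# unit of work per tick without blocking the interface.
#
#     class SimpleStepper(bpy.types.Operator):
#         bl_idname = 'object.simple_stepper'
#         bl_label = 'Simple Stepper'
#
#         def modal(self, context, event):
#             if event.type == 'ESC':
#                 context.window_manager.event_timer_remove(self._timer)
#                 return {'CANCELLED'}
#             # ... perform one unit of work per TIMER tick ...
#             return {'RUNNING_MODAL'}
#
#         def invoke(self, context, event):
#             wm = context.window_manager
#             self._timer = wm.event_timer_add(0.1, window=context.window)
#             wm.modal_handler_add(self)
#             return {'RUNNING_MODAL'}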
234 |
235 | class BrignetPanel(bpy.types.Panel):
236 | """Creates a Panel in the scene context of the properties editor"""
237 | bl_label = "Neural Rigging"
238 | bl_idname = "RIGNET_PT_layout"
239 | bl_space_type = 'VIEW_3D'
240 | bl_region_type = 'UI'
241 | bl_category = 'RigNet'
242 |
243 | def draw(self, context):
244 | layout = self.layout
245 |
246 | wm = context.window_manager
247 |
248 | row = layout.row()
249 | row.label(text="Character Collection:")
250 |
251 | split = layout.split(factor=0.8, align=False)
252 | col = split.column()
253 | col.prop(wm, 'brignet_highrescollection', text='')
254 | col = split.column()
255 | col.operator(BrignetCollection.bl_idname, text='', icon='RESTRICT_SELECT_OFF')
256 |
257 | row = layout.row()
258 | row.label(text="Simple Mesh:")
259 |
260 | split = layout.split(factor=0.8, align=False)
261 | col = split.column()
262 | col.prop(wm, 'brignet_targetmesh', text='')
263 | col = split.column()
264 | col.operator(BrignetRemesh.bl_idname, text='', icon='RESTRICT_SELECT_OFF')
265 |
266 | if wm.brignet_targetmesh:
267 | remesh_mod = next((mod for mod in wm.brignet_targetmesh.modifiers if mod.type == 'REMESH'), None)
268 | decimate_mod = next((mod for mod in wm.brignet_targetmesh.modifiers if mod.type == 'DECIMATE'), None)
269 | if remesh_mod:
270 | row = layout.row()
271 | row.prop(remesh_mod, 'voxel_size', slider=True)
272 | if decimate_mod:
273 | row = layout.row()
274 | row.prop(decimate_mod, 'ratio', slider=True)
275 | row = layout.row()
276 | row.label(text='Face count: {0}'.format(decimate_mod.face_count))
277 |
278 | max_face_count = 5000
279 | if decimate_mod.face_count > max_face_count:
280 | row = layout.row()
281 | row.label(text=f'Face count too high (exceeds {max_face_count})', icon='ERROR')
282 | min_face_count = 1000
283 | if decimate_mod.face_count < min_face_count:
284 | row = layout.row()
285 | row.label(text=f'Face count too low (less than {min_face_count})', icon='ERROR')
286 |
287 | if wm.brignet_current_progress > 0.1:
288 | layout.separator()
289 | row = layout.row()
290 | current_step = PredictSteps(int(wm.brignet_current_progress))
291 | row.label(text=current_step.nice_name, icon=current_step.icon)
292 | row = layout.row()
293 | row.prop(wm, 'brignet_current_progress', slider=True)
294 | else:
295 | row = layout.row()
296 | row.operator('object.brignet_predict')
297 |
298 | row = layout.row()
299 | row.prop(wm, 'brignet_density', text='Density')
300 |
301 | row = layout.row()
302 |         row.prop(wm, 'brignet_threshold', text='Threshold')
303 |
304 | row = layout.row()
305 | row.prop(wm, 'brignet_samples', text='Samples')
306 |
307 | row = layout.row()
308 | row.prop(wm, 'brignet_predict_weights')
309 |
310 | row = layout.row()
311 | row.prop(wm, 'brignet_mirror_names')
312 |
313 |
314 | def register_properties():
315 | bpy.types.WindowManager.brignet_targetmesh = PointerProperty(type=bpy.types.Object,
316 | name="bRigNet Target Object",
317 | description="Mesh to use for skin prediction. Keep below 5000 triangles",
318 | poll=lambda self, obj: obj.type == 'MESH' and obj.data is not self)
319 |
320 | bpy.types.WindowManager.brignet_highrescollection = PointerProperty(type=bpy.types.Collection,
321 | name="bRigNet HighRes Objects",
322 | description="Meshes to use for final skinning")
323 |
324 | bpy.types.WindowManager.brignet_density = FloatProperty(name="density", default=0.571, min=0.1, max=1.0,
325 | description="Bone Density")
326 |
327 | bpy.types.WindowManager.brignet_threshold = FloatProperty(name="threshold", default=0.75e-2,
328 | description='Minimum skin weight',
329 | min=0.01e-2,
330 | max=1.0)
331 |
332 | bpy.types.WindowManager.brignet_samples = FloatProperty(name="samples", default=2000,
333 | description='Poisson Disks Samples',
334 | min=100,
335 | max=5000)
336 |
337 | bpy.types.WindowManager.brignet_obj_path = StringProperty(name='Mesh obj',
338 | description='Path to Mesh file',
339 | subtype='FILE_PATH')
340 |
341 | bpy.types.WindowManager.brignet_skel_path = StringProperty(name='Skeleton txt',
342 | description='Path to Skeleton File',
343 | subtype='FILE_PATH')
344 |
345 | bpy.types.WindowManager.brignet_current_progress = FloatProperty(name="Progress", default=0.0,
346 | description='Progress of ongoing rig',
347 | min=0.0, max=PredictSteps.last().value,
348 | options={'HIDDEN', 'SKIP_SAVE'}
349 | )
350 |
351 | bpy.types.WindowManager.brignet_predict_weights = BoolProperty(name='Predict Weights', default=True,
352 | description='Predict Bone weights')
353 |
354 | bpy.types.WindowManager.brignet_mirror_names = BoolProperty(name='Mirror Bone Names', default=True,
355 | description='Apply .L/.R names to symmetric bones')
356 |
357 |
358 | def unregister_properties():
359 | del bpy.types.WindowManager.brignet_targetmesh
360 | del bpy.types.WindowManager.brignet_highrescollection
361 | del bpy.types.WindowManager.brignet_density
362 | del bpy.types.WindowManager.brignet_threshold
363 | del bpy.types.WindowManager.brignet_samples
364 | del bpy.types.WindowManager.brignet_obj_path
365 | del bpy.types.WindowManager.brignet_skel_path
366 | del bpy.types.WindowManager.brignet_current_progress
367 | del bpy.types.WindowManager.brignet_predict_weights
368 | del bpy.types.WindowManager.brignet_mirror_names
369 |
--------------------------------------------------------------------------------
/loadskeleton.py:
--------------------------------------------------------------------------------
1 | import os
2 | import bpy
3 |
4 | from .ob_utils.objects import ArmatureGenerator
5 |
6 |
7 | class LoadSkeletonPanel(bpy.types.Panel):
8 | """Access LoadSkeleton operator"""
9 | bl_label = "Load Skeleton"
10 | bl_idname = "RIGNET_PT_skeleton"
11 | bl_space_type = 'VIEW_3D'
12 | bl_region_type = 'UI'
13 | bl_category = 'RigNet'
14 |
15 | def draw(self, context):
16 | wm = context.window_manager
17 | layout = self.layout
18 |
19 | row = layout.row()
20 | row.prop(wm, 'brignet_obj_path')
21 | row = layout.row()
22 | row.prop(wm, 'brignet_skel_path')
23 | row = layout.row()
24 | row.operator("object.brignet_load")
25 |
26 |
27 | class LoadRignetSkeleton(bpy.types.Operator):
28 | """Load characters generated using RigNet from the command line"""
29 | bl_idname = "object.brignet_load"
30 | bl_label = "Load rignet character"
31 |
32 | @classmethod
33 | def poll(cls, context):
34 | return True # TODO: object mode
35 |
36 | def execute(self, context):
37 | wm = context.window_manager
38 | if not os.path.isfile(wm.brignet_skel_path):
39 | return {'CANCELLED'}
40 |
41 | from utils.rig_parser import Info
42 | skel_info = Info(filename=wm.brignet_skel_path)
43 |
44 | if os.path.isfile(wm.brignet_obj_path):
45 | bpy.ops.import_scene.obj(filepath=wm.brignet_obj_path, use_edges=True, use_smooth_groups=True,
46 | use_groups_as_vgroups=False, use_image_search=True, split_mode='OFF',
47 | axis_forward='-Z', axis_up='Y')
48 |
49 | mesh_obj = context.selected_objects[0]
50 | else:
51 | mesh_obj = None
52 |
53 | ArmatureGenerator(skel_info, mesh_obj).generate()
54 | return {'FINISHED'}
55 |
--------------------------------------------------------------------------------
/ob_utils/binvox_rw.py:
--------------------------------------------------------------------------------
1 | # Copyright (C) 2012 Daniel Maturana
2 | # This file is part of binvox-rw-py.
3 | #
4 | # binvox-rw-py is free software: you can redistribute it and/or modify
5 | # it under the terms of the GNU General Public License as published by
6 | # the Free Software Foundation, either version 3 of the License, or
7 | # (at your option) any later version.
8 | #
9 | # binvox-rw-py is distributed in the hope that it will be useful,
10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 | # GNU General Public License for more details.
13 | #
14 | # You should have received a copy of the GNU General Public License
15 | # along with binvox-rw-py. If not, see <http://www.gnu.org/licenses/>.
16 | #
17 |
18 | import os
19 | import sys
20 | import tempfile
21 | import subprocess
22 | import numpy as np
23 | import struct
24 |
25 | from .geometry import Voxels
26 | from .geometry import obj_simple_export
27 |
28 |
29 | def binvox_voxels(binvox_exe, mesh_v, mesh_f):
30 | # voxel
31 | fo_normalized = tempfile.NamedTemporaryFile(suffix='_normalized.obj')
32 | fo_normalized.close()
33 |
34 | obj_simple_export(fo_normalized.name, mesh_v, mesh_f)
35 |
36 | if sys.platform.startswith("win"):
37 | binvox_exe += ".exe"
38 |
39 | if not os.path.isfile(binvox_exe):
40 | os.unlink(fo_normalized.name)
41 |         raise FileNotFoundError("binvox executable not found in {0}, please check RigNet path in the addon preferences".format(binvox_exe))
42 |
43 | subprocess.call([binvox_exe, "-d", "88", fo_normalized.name])
44 | with open(os.path.splitext(fo_normalized.name)[0] + '.binvox', 'rb') as fvox:
45 | vox = read_as_3d_array(fvox)
46 |
47 | os.unlink(fo_normalized.name)
48 | return vox
49 |
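# Usage sketch for binvox_voxels (the executable path is illustrative and must
# point to a local binvox build; mesh_v/mesh_f are vertex and face arrays):
#
#     vox = binvox_voxels('/path/to/binvox', mesh_v, mesh_f)
#     print(vox.dims, vox.translate, vox.scale)  # 88x88x88 grid metadata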
50 |
51 | def read_header(fp):
52 | """ Read binvox header. Mostly meant for internal use.
53 | """
54 | line = fp.readline().strip()
55 | if not line.startswith(b'#binvox'):
56 | raise IOError('Not a binvox file')
57 | dims = list(map(int, fp.readline().strip().split(b' ')[1:]))
58 | translate = list(map(float, fp.readline().strip().split(b' ')[1:]))
59 | scale = list(map(float, fp.readline().strip().split(b' ')[1:]))[0]
60 | line = fp.readline()
61 |
62 | return dims, translate, scale
63 |
64 |
65 | def read_as_3d_array(fp, fix_coords=True):
66 | """ Read binary binvox format as array.
67 |
68 | Returns the model with accompanying metadata.
69 |
70 | Voxels are stored in a three-dimensional numpy array, which is simple and
71 | direct, but may use a lot of memory for large models. (Storage requirements
72 | are 8*(d^3) bytes, where d is the dimensions of the binvox model. Numpy
73 | boolean arrays use a byte per element).
74 |
75 | Doesn't do any checks on input except for the '#binvox' line.
76 | """
77 | dims, translate, scale = read_header(fp)
78 | raw_data = np.frombuffer(fp.read(), dtype=np.uint8)
79 | # if just using reshape() on the raw data:
80 | # indexing the array as array[i,j,k], the indices map into the
81 | # coords as:
82 | # i -> x
83 | # j -> z
84 | # k -> y
85 | # if fix_coords is true, then data is rearranged so that
86 | # mapping is
87 | # i -> x
88 | # j -> y
89 | # k -> z
90 | values, counts = raw_data[::2], raw_data[1::2]
91 |     data = np.repeat(values, counts).astype(bool)  # np.bool is removed in modern NumPy
92 | data = data.reshape(dims)
93 | if fix_coords:
94 |         # transpose from xzy to xyz  # TODO: verify this is the right transform
95 | data = np.transpose(data, (0, 2, 1))
96 | axis_order = 'xyz'
97 | else:
98 | axis_order = 'xzy'
99 | return Voxels(data, dims, translate, scale, axis_order)
100 |
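# Example (a minimal sketch; 'model.binvox' is a placeholder path):
#
#     with open('model.binvox', 'rb') as fp:
#         model = read_as_3d_array(fp)
#     occupied = np.argwhere(model.data)  # N x 3 indices of filled voxels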
101 | def read_as_coord_array(fp, fix_coords=True):
102 | """ Read binary binvox format as coordinates.
103 |
104 | Returns binvox model with voxels in a "coordinate" representation, i.e. an
105 | 3 x N array where N is the number of nonzero voxels. Each column
106 | corresponds to a nonzero voxel and the 3 rows are the (x, z, y) coordinates
107 | of the voxel. (The odd ordering is due to the way binvox format lays out
108 | data). Note that coordinates refer to the binvox voxels, without any
109 | scaling or translation.
110 |
111 | Use this to save memory if your model is very sparse (mostly empty).
112 |
113 | Doesn't do any checks on input except for the '#binvox' line.
114 | """
115 | dims, translate, scale = read_header(fp)
116 | raw_data = np.frombuffer(fp.read(), dtype=np.uint8)
117 |
118 | values, counts = raw_data[::2], raw_data[1::2]
119 |
120 | sz = np.prod(dims)
121 | index, end_index = 0, 0
122 | end_indices = np.cumsum(counts)
123 | indices = np.concatenate(([0], end_indices[:-1])).astype(end_indices.dtype)
124 |
125 |     values = values.astype(bool)
126 | indices = indices[values]
127 | end_indices = end_indices[values]
128 |
129 | nz_voxels = []
130 | for index, end_index in zip(indices, end_indices):
131 | nz_voxels.extend(range(index, end_index))
132 | nz_voxels = np.array(nz_voxels)
133 | # TODO are these dims correct?
134 | # according to docs,
135 | # index = x * wxh + z * width + y; // wxh = width * height = d * d
136 |
137 |     x = nz_voxels // (dims[0]*dims[1])
138 |     zwpy = nz_voxels % (dims[0]*dims[1])  # z*w + y
139 |     z = zwpy // dims[0]
140 |     y = zwpy % dims[0]
141 | if fix_coords:
142 | data = np.vstack((x, y, z))
143 | axis_order = 'xyz'
144 | else:
145 | data = np.vstack((x, z, y))
146 | axis_order = 'xzy'
147 |
148 | #return Voxels(data, dims, translate, scale, axis_order)
149 | return Voxels(np.ascontiguousarray(data), dims, translate, scale, axis_order)
150 |
151 | def dense_to_sparse(voxel_data, dtype=int):
152 | """ From dense representation to sparse (coordinate) representation.
153 | No coordinate reordering.
154 | """
155 | if voxel_data.ndim!=3:
156 | raise ValueError('voxel_data is wrong shape; should be 3D array.')
157 | return np.asarray(np.nonzero(voxel_data), dtype)
158 |
159 | def sparse_to_dense(voxel_data, dims, dtype=bool):
160 | if voxel_data.ndim!=2 or voxel_data.shape[0]!=3:
161 | raise ValueError('voxel_data is wrong shape; should be 3xN array.')
162 | if np.isscalar(dims):
163 | dims = [dims]*3
164 | dims = np.atleast_2d(dims).T
165 | # truncate to integers
166 |     xyz = voxel_data.astype(int)
167 | # discard voxels that fall outside dims
168 | valid_ix = ~np.any((xyz < 0) | (xyz >= dims), 0)
169 | xyz = xyz[:,valid_ix]
170 | out = np.zeros(dims.flatten(), dtype=dtype)
171 | out[tuple(xyz)] = True
172 | return out
173 |
174 | #def get_linear_index(x, y, z, dims):
175 | #""" Assuming xzy order. (y increasing fastest.
176 | #TODO ensure this is right when dims are not all same
177 | #"""
178 | #return x*(dims[1]*dims[2]) + z*dims[1] + y
179 |
180 | def bwrite(fp,s):
181 | fp.write(s.encode())
182 |
183 | def write_pair(fp,state, ctr):
184 | fp.write(struct.pack('B',state))
185 | fp.write(struct.pack('B',ctr))
186 |
187 | def write(voxel_model, fp):
188 | """ Write binary binvox format.
189 |
190 | Note that when saving a model in sparse (coordinate) format, it is first
191 | converted to dense format.
192 |
193 | Doesn't check if the model is 'sane'.
194 |
195 | """
196 | if voxel_model.data.ndim==2:
197 | # TODO avoid conversion to dense
198 | dense_voxel_data = sparse_to_dense(voxel_model.data, voxel_model.dims)
199 | else:
200 | dense_voxel_data = voxel_model.data
201 |
202 | bwrite(fp, '#binvox 1\n')
203 | bwrite(fp, 'dim ' + ' '.join(map(str, voxel_model.dims)) + '\n')
204 | bwrite(fp, 'translate ' + ' '.join(map(str, voxel_model.translate)) + '\n')
205 | bwrite(fp, 'scale ' + str(voxel_model.scale) + '\n')
206 | bwrite(fp, 'data\n')
207 | if not voxel_model.axis_order in ('xzy', 'xyz'):
208 | raise ValueError('Unsupported voxel model axis order')
209 |
210 | if voxel_model.axis_order=='xzy':
211 | voxels_flat = dense_voxel_data.flatten()
212 | elif voxel_model.axis_order=='xyz':
213 | voxels_flat = np.transpose(dense_voxel_data, (0, 2, 1)).flatten()
214 |
215 | # keep a sort of state machine for writing run length encoding
216 | state = voxels_flat[0]
217 | ctr = 0
218 | for c in voxels_flat:
219 | if c==state:
220 | ctr += 1
221 | # if ctr hits max, dump
222 | if ctr==255:
223 | write_pair(fp, state, ctr)
224 | ctr = 0
225 | else:
226 | # if switch state, dump
227 | write_pair(fp, state, ctr)
228 | state = c
229 | ctr = 1
230 | # flush out remainders
231 | if ctr > 0:
232 | write_pair(fp, state, ctr)
233 |
234 |
235 | if __name__ == '__main__':
236 | import doctest
237 | doctest.testmod()
238 |
--------------------------------------------------------------------------------
/ob_utils/geometry.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from scipy.signal import convolve
3 | from random import random
4 |
5 | import bpy
6 | import bmesh
7 | from mathutils import Matrix
8 | from mathutils import Vector
9 | from mathutils.bvhtree import BVHTree
10 |
11 |
12 | def normalize_obj(mesh_v):
13 | dims = [max(mesh_v[:, 0]) - min(mesh_v[:, 0]),
14 | max(mesh_v[:, 1]) - min(mesh_v[:, 1]),
15 | max(mesh_v[:, 2]) - min(mesh_v[:, 2])]
16 | scale = 1.0 / max(dims)
17 | pivot = np.array([(min(mesh_v[:, 0]) + max(mesh_v[:, 0])) / 2, min(mesh_v[:, 1]),
18 | (min(mesh_v[:, 2]) + max(mesh_v[:, 2])) / 2])
19 | mesh_v[:, 0] -= pivot[0]
20 | mesh_v[:, 1] -= pivot[1]
21 | mesh_v[:, 2] -= pivot[2]
22 | mesh_v *= scale
23 | return mesh_v, pivot, scale
24 |
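# normalize_obj centers x/z, grounds the mesh at y=0 and scales its longest
# dimension to 1. A sketch of the inverse mapping, using the returned values:
#
#     mesh_v_original = mesh_v_normalized / scale + pivot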
25 |
26 | def obj_simple_export(filepath, vertices, polygons):
27 | with open(filepath, 'w') as f:
28 | f.write("# OBJ file\n")
29 | for v in vertices:
30 | f.write("v {0:.4f} {1:.4f} {2:.4f}\n".format(*v))
31 |
32 | for p in polygons:
33 | f.write("f " + " ".join(str(i + 1) for i in p) + "\n")
34 |
35 |
36 | def get_geo_edges(surface_geodesic, remesh_obj_v):
37 | edge_index = []
38 | surface_geodesic += 1.0 * np.eye(len(surface_geodesic)) # remove self-loop edge here
39 | for i in range(len(remesh_obj_v)):
40 | geodesic_ball_samples = np.argwhere(surface_geodesic[i, :] <= 0.06).squeeze(1)
41 | if len(geodesic_ball_samples) > 10:
42 | geodesic_ball_samples = np.random.choice(geodesic_ball_samples, 10, replace=False)
43 | edge_index.append(np.concatenate((np.repeat(i, len(geodesic_ball_samples))[:, np.newaxis],
44 | geodesic_ball_samples[:, np.newaxis]), axis=1))
45 | edge_index = np.concatenate(edge_index, axis=0)
46 | return edge_index
47 |
48 |
49 | def get_tpl_edges(remesh_obj_v, remesh_obj_f):
50 | edge_index = []
51 | for v in range(len(remesh_obj_v)):
52 | face_ids = np.argwhere(remesh_obj_f == v)[:, 0]
53 | neighbor_ids = []
54 | for face_id in face_ids:
55 | for v_id in range(3):
56 | if remesh_obj_f[face_id, v_id] != v:
57 | neighbor_ids.append(remesh_obj_f[face_id, v_id])
58 | neighbor_ids = list(set(neighbor_ids))
59 | neighbor_ids = [np.array([v, n])[np.newaxis, :] for n in neighbor_ids]
60 | neighbor_ids = np.concatenate(neighbor_ids, axis=0)
61 | edge_index.append(neighbor_ids)
62 | edge_index = np.concatenate(edge_index, axis=0)
63 | return edge_index
64 |
65 |
66 | class Voxels:
67 | """ Holds a binvox model.
68 | data is either a three-dimensional numpy boolean array (dense representation)
69 | or a two-dimensional numpy float array (coordinate representation).
70 |
71 | dims, translate and scale are the model metadata.
72 |
73 | dims are the voxel dimensions, e.g. [32, 32, 32] for a 32x32x32 model.
74 |
75 | scale and translate relate the voxels to the original model coordinates.
76 |
77 | To translate voxel coordinates i, j, k to original coordinates x, y, z:
78 |
79 | x_n = (i+.5)/dims[0]
80 | y_n = (j+.5)/dims[1]
81 | z_n = (k+.5)/dims[2]
82 | x = scale*x_n + translate[0]
83 | y = scale*y_n + translate[1]
84 | z = scale*z_n + translate[2]
85 |
86 | """
87 |
88 | def __init__(self, data, dims, translate, scale, axis_order):
89 | self.data = data
90 | self.dims = dims
91 | self.translate = translate
92 | self.scale = scale
93 | assert (axis_order in ('xzy', 'xyz'))
94 | self.axis_order = axis_order
95 |
96 | def clone(self):
97 | data = self.data.copy()
98 | dims = self.dims[:]
99 | translate = self.translate[:]
100 | return Voxels(data, dims, translate, self.scale, self.axis_order)
101 |
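# Worked example of the voxel-to-model mapping documented in the Voxels
# docstring, for the voxel at index (i, j, k) of any Voxels instance `vox`:
#
#     x = vox.scale * ((i + 0.5) / vox.dims[0]) + vox.translate[0]
#     y = vox.scale * ((j + 0.5) / vox.dims[1]) + vox.translate[1]
#     z = vox.scale * ((k + 0.5) / vox.dims[2]) + vox.translate[2]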
102 |
103 | class NormalizedMeshData:
104 | def __init__(self, mesh_obj):
105 | # triangulate first
106 | bm = bmesh.new()
107 | bm.from_object(mesh_obj, bpy.context.evaluated_depsgraph_get())
108 |
109 | # apply modifiers
110 | mesh_obj.data.clear_geometry()
111 | for mod in reversed(mesh_obj.modifiers):
112 | mesh_obj.modifiers.remove(mod)
113 |
114 | bm.to_mesh(mesh_obj.data)
115 | bpy.context.evaluated_depsgraph_get()
116 | bmesh.ops.triangulate(bm, faces=bm.faces[:], quad_method='BEAUTY', ngon_method='BEAUTY')
117 |
118 | # rotate -90 deg on X axis
119 | mat = Matrix(((1.0, 0.0, 0.0, 0.0),
120 | (0.0, 0.0, 1.0, 0.0),
121 | (0.0, -1.0, 0.0, 0.0),
122 | (0.0, 0.0, 0.0, 1.0)))
123 |
124 | bmesh.ops.transform(bm, matrix=mat, verts=bm.verts[:])
125 | bm.verts.ensure_lookup_table()
126 | bm.faces.ensure_lookup_table()
127 |
128 | mesh_v = np.asarray([list(v.co) for v in bm.verts])
129 | self.mesh_f = np.asarray([[v.index for v in f.verts] for f in bm.faces])
130 |
131 | self.mesh_vn = np.asarray([list(v.normal) for v in bm.verts])
132 | self.tri_areas = [t.calc_area() for t in bm.faces]
133 |
134 | bm.free()
135 |
136 | self.mesh_v, self.translation_normalize, self.scale_normalize = normalize_obj(mesh_v)
137 | self._bvh_tree = BVHTree.FromPolygons(self.mesh_v.tolist(), self.mesh_f.tolist(), all_triangles=True)
138 |
139 | @property
140 | def bound_min(self):
141 | return -0.5, 0.0, min(self.mesh_v[:, 2])
142 |
143 | @property
144 | def bound_max(self):
145 | return 0.5, 1.0, max(self.mesh_v[:, 2])
146 |
147 | @property
148 | def bvh_tree(self):
149 | return self._bvh_tree
150 |
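    # is_inside_volume relies on the even/odd ray-crossing rule: a ray cast
    # from a point inside a closed mesh crosses the surface an odd number of
    # times. Since grazing hits can be missed, the parity is re-sampled along
    # extra random directions and accepted once two consecutive samples agree.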
151 | def is_inside_volume(self, vector, samples=2):
152 | direction = Vector((random(), random(), random())).normalized()
153 | hits = self._count_hits(vector, direction)
154 |
155 | if hits == 0:
156 | return False
157 | if hits == 1:
158 | return True
159 |
160 | hits_modulo = hits % 2
161 | for i in range(samples):
162 | direction = Vector((random(), random(), random())).normalized()
163 | check_modulo = self._count_hits(vector, direction) % 2
164 | if hits_modulo == check_modulo:
165 | return hits_modulo == 1
166 |
167 | hits_modulo = check_modulo
168 |
169 | return hits_modulo == 1
170 |
171 | def _count_hits(self, start, direction):
172 | hits = 0
173 | offset = direction * 0.0001
174 | bvh_tree = self.bvh_tree
175 |
176 | location = bvh_tree.ray_cast(start, direction)[0]
177 |
178 | while location is not None:
179 | hits += 1
180 | location = bvh_tree.ray_cast(location + offset, direction)[0]
181 |
182 | return hits
183 |
184 | def _on_surface(self, point, radius):
185 | return self.bvh_tree.find_nearest(point, radius)
186 |
187 | def _remove_isolated_voxels(self, voxels):
188 | # convolution against kernel of 'Trues' with a 'False' center
189 | kernel = np.ones((3, 3, 3), 'int')
190 | kernel[1, 1, 1] = 0
191 |
192 | # expand to allow convolution on boundaries
193 | res_x, res_y, res_z = voxels.shape
194 | blank = np.zeros((res_x, 1, res_z), 'bool')
195 | expanded = np.hstack((blank, voxels, blank))
196 | blank = np.zeros((1, res_y + 2, res_z), 'bool')
197 | expanded = np.vstack((blank, expanded, blank))
198 | blank = np.zeros((res_x + 2, res_y + 2, 1), 'bool')
199 | expanded = np.dstack((blank, expanded, blank))
200 |
201 | it = np.nditer(voxels, flags=['multi_index'])
202 | for vox in it:
203 | if not vox:
204 | continue
205 | x, y, z = it.multi_index
206 |
207 | vox_slice = expanded[x:x+3, y:y+3, z:z+3].astype('int')
208 | if not convolve(vox_slice, kernel, 'valid').all():
209 | voxels[x, y, z] = False
210 | # TODO: if voxels[x, y, z] is True, we might jump the surroundings
211 |
212 | def voxels(self, resolution=88, remove_isolated=True):
213 | voxels = np.zeros([resolution, resolution, resolution], dtype=bool)
214 | res_x, res_y, res_z = voxels.shape # redundant, but might get useful if we change uniform res in the future
215 |
216 | vox_size = 1.0 / resolution
217 | vox_radius = vox_size / 2.0
218 |
219 | bound_min = self.bound_min
220 | min_x, min_y, min_z = bound_min
221 |
222 | z_co = min_z + vox_radius
223 | for z in range(res_z):
224 | y_co = min_y + vox_radius
225 | for y in range(res_y):
226 | x_co = min_x + vox_radius
227 | for x in range(res_x):
228 | voxels[x, y, z] = self.is_inside_volume(Vector((x_co, y_co, z_co)))
229 |
230 | x_co += vox_size
231 | y_co += vox_size
232 | z_co += vox_size
233 |
234 | if remove_isolated:
235 | self._remove_isolated_voxels(voxels)
236 |
237 | return Voxels(voxels, voxels.shape, bound_min, 1.0, 'xyz')
238 |
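# Usage sketch (assumes `mesh_obj` is a MESH object in the current scene):
#
#     mesh_data = NormalizedMeshData(mesh_obj)
#     vox = mesh_data.voxels(resolution=88)  # Voxels grid in normalized space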
--------------------------------------------------------------------------------
/ob_utils/objects.py:
--------------------------------------------------------------------------------
1 | import bpy
2 | import bmesh
3 |
4 |
5 | def mesh_from_collection(collection, name=None):
6 | name = name if name else collection.name + '_join'
7 | objects = collection.objects[:]
8 |
9 | bm = bmesh.new()
10 | first_object = objects.pop()
11 | bm.from_object(first_object, bpy.context.evaluated_depsgraph_get())
12 | bm.transform(first_object.matrix_world)
13 |
14 | for ob in objects:
15 | bm.verts.ensure_lookup_table()
16 | last_idx = len(bm.verts)
17 |
18 | other_bm = bmesh.new()
19 | other_bm.from_object(ob, bpy.context.evaluated_depsgraph_get())
20 | other_bm.transform(ob.matrix_world)
21 | other_bm.verts.ensure_lookup_table()
22 | other_bm.edges.ensure_lookup_table()
23 | other_bm.faces.ensure_lookup_table()
24 |
25 | for vert in other_bm.verts:
26 | bm.verts.new(vert.co)
27 | bm.verts.ensure_lookup_table()
28 |
29 | for edge in other_bm.edges:
30 | bm.edges.new([bm.verts[vert.index + last_idx] for vert in edge.verts])
31 |
32 | for face in other_bm.faces:
33 | bm.faces.new([bm.verts[vert.index + last_idx] for vert in face.verts])
34 |
35 | new_mesh = bpy.data.meshes.new(name)
36 | bm.to_mesh(new_mesh)
37 | bm.free()
38 | new_ob = bpy.data.objects.new(name, new_mesh)
39 |
40 | bpy.context.scene.collection.objects.link(new_ob)
41 | return new_ob
42 |
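# Usage sketch ('Character' is a placeholder collection name):
#
#     merged_ob = mesh_from_collection(bpy.data.collections['Character'])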
43 |
44 | def get_armature_modifier(ob):
45 | return next((mod for mod in ob.modifiers if mod.type == 'ARMATURE'), None)
46 |
47 |
48 | def copy_weights(ob_list, ob_source, apply_modifier=True):
49 | src_mod = get_armature_modifier(ob_source)
50 | src_mod.show_viewport = False
51 | src_mod.show_render = False
52 | ob_source.hide_set(True)
53 |
54 | for ob in ob_list:
55 | remove_modifiers(ob)
56 |
57 | transf = ob.modifiers.new('weight_transf', 'DATA_TRANSFER')
58 | if not transf:
59 | continue
60 |
61 | transf.object = ob_source
62 | transf.use_vert_data = True
63 | transf.data_types_verts = {'VGROUP_WEIGHTS'}
64 | transf.vert_mapping = 'POLY_NEAREST'
65 |
66 | arm = ob.modifiers.new('Armature', 'ARMATURE')
67 | arm.object = src_mod.object
68 | arm.show_in_editmode = True
69 | arm.show_on_cage = True
70 |
71 | bpy.context.view_layer.objects.active = ob
72 | bpy.ops.object.datalayout_transfer(modifier=transf.name)
73 |
74 | if apply_modifier:
75 | bpy.ops.object.modifier_apply(modifier=transf.name)
76 |
77 |
78 | def remove_modifiers(ob, type_list=('DATA_TRANSFER', 'ARMATURE')):
79 | for mod in reversed(ob.modifiers):
80 | if mod.type in type_list:
81 | ob.modifiers.remove(mod)
82 |
83 |
84 | class ArmatureGenerator(object):
85 | def __init__(self, info, mesh=None):
86 | self._info = info
87 | self._mesh = mesh
88 |
89 | def generate(self, matrix=None):
90 | basename = self._mesh.name if self._mesh else ""
91 | arm_data = bpy.data.armatures.new(basename + "_armature")
92 | arm_obj = bpy.data.objects.new('brignet_rig', arm_data)
93 |
94 | bpy.context.collection.objects.link(arm_obj)
95 | bpy.context.view_layer.objects.active = arm_obj
96 | bpy.ops.object.mode_set(mode='EDIT')
97 |
98 | this_level = [self._info.root]
99 | hier_level = 1
100 | while this_level:
101 | next_level = []
102 | for p_node in this_level:
103 | pos = p_node.pos
104 | parent = p_node.parent.name if p_node.parent is not None else None
105 |
106 | e_bone = arm_data.edit_bones.new(p_node.name)
107 | if self._mesh and e_bone.name not in self._mesh.vertex_groups:
108 | self._mesh.vertex_groups.new(name=e_bone.name)
109 |
110 | e_bone.head.x, e_bone.head.z, e_bone.head.y = pos[0], pos[2], pos[1]
111 |
112 | if parent:
113 | e_bone.parent = arm_data.edit_bones[parent]
114 | if e_bone.parent.tail == e_bone.head:
115 | e_bone.use_connect = True
116 |
117 | if len(p_node.children) == 1:
118 | pos = p_node.children[0].pos
119 | e_bone.tail.x, e_bone.tail.z, e_bone.tail.y = pos[0], pos[2], pos[1]
120 | elif len(p_node.children) > 1:
121 | x_offset = [abs(c_node.pos[0] - pos[0]) for c_node in p_node.children]
122 |
123 | idx = x_offset.index(min(x_offset))
124 | pos = p_node.children[idx].pos
125 | e_bone.tail.x, e_bone.tail.z, e_bone.tail.y = pos[0], pos[2], pos[1]
126 |
127 | elif e_bone.parent:
128 | offset = e_bone.head - e_bone.parent.head
129 | e_bone.tail = e_bone.head + offset / 2
130 | else:
131 | e_bone.tail.x, e_bone.tail.z, e_bone.tail.y = pos[0], pos[2], pos[1]
132 | e_bone.tail.y += .1
133 |
134 | for c_node in p_node.children:
135 | next_level.append(c_node)
136 |
137 | this_level = next_level
138 | hier_level += 1
139 |
140 | if matrix:
141 | arm_data.transform(matrix)
142 |
143 | bpy.ops.object.mode_set(mode='POSE')
144 |
145 | if self._mesh:
146 | for v_skin in self._info.joint_skin:
147 | v_idx = int(v_skin.pop(0))
148 |
149 | for i in range(0, len(v_skin), 2):
150 | self._mesh.vertex_groups[v_skin[i]].add([v_idx], float(v_skin[i + 1]), 'REPLACE')
151 |
152 | arm_obj.matrix_world = self._mesh.matrix_world
153 | mod = self._mesh.modifiers.new('rignet', 'ARMATURE')
154 | mod.object = arm_obj
155 |
156 | return arm_obj
157 |
--------------------------------------------------------------------------------
/ob_utils/sampling.py:
--------------------------------------------------------------------------------
1 | import copy
2 | import logging
3 | from math import sqrt
4 | import numpy as np
5 | from scipy.sparse import lil_matrix
6 | from scipy.sparse.csgraph import dijkstra
7 | import time
8 |
9 | from mathutils.kdtree import KDTree
10 |
11 |
12 | class QueueEntry:
13 | def __init__(self, idx, weight):
14 | self.idx = idx
15 | self.weight = weight
16 |
17 | def __repr__(self):
18 | return f'{self.idx, self.weight}'
19 |
20 |
21 | class MeshSampler:
22 | def __init__(self, triangles, vertices, v_normals, triangle_areas):
23 | self._has_vertex_normals = False
24 | self._has_vertex_colors = False
25 |
26 | self.triangles = triangles
27 | self.vertices = vertices
28 | self.v_normals = v_normals
29 | self.areas = triangle_areas
30 |
31 | self._surface_geodesic = None
32 |
33 | @property
34 | def has_vertex_normals(self):
35 |         return self.v_normals is not None and len(self.v_normals) > 0
36 |
37 | @property
38 | def has_vertex_colors(self):
39 | return self._has_vertex_colors
40 |
41 | @property
42 | def surface_area(self):
43 | return sum(self.areas)
44 |
45 | def compute_triangle_normals(self, normalized=True):
46 | # TODO
47 | raise NotImplementedError
48 |
49 | def sample_points_uniformlyImpl(self, number_of_points):
50 | triangle_areas = copy.deepcopy(self.areas)
51 | surface_area = self.surface_area
52 |
53 | triangle_areas[0] /= surface_area
54 | for i, area in enumerate(triangle_areas[1:]):
55 | triangle_areas[i + 1] = area / surface_area + triangle_areas[i]
56 |
57 | point_idx = 0
58 | points = []
59 | normals = []
60 | for i, triangle in enumerate(self.triangles):
61 | n = round(triangle_areas[i] * number_of_points)
62 | while point_idx < n:
63 | r1 = np.random.uniform()
64 | r2 = np.random.uniform()
65 |
66 | a = 1 - sqrt(r1)
67 | b = sqrt(r1) * (1 - r2)
68 | c = sqrt(r1) * r2
69 |
70 | points.append(a * self.vertices[triangle[0]] +
71 | b * self.vertices[triangle[1]] +
72 | c * self.vertices[triangle[2]])
73 |
74 | normals.append(a * self.v_normals[triangle[0]] +
75 | b * self.v_normals[triangle[1]] +
76 | c * self.v_normals[triangle[2]])
77 |
78 | point_idx += 1
79 |
80 | return points, normals
81 |
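    # The (r1, r2) mapping above is the standard square-root parameterization
    # for uniform triangle sampling: with a = 1 - sqrt(r1), b = sqrt(r1)*(1 - r2)
    # and c = sqrt(r1)*r2, the point a*A + b*B + c*C is uniformly distributed
    # over triangle ABC.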
82 | def sample_points_poissondisk(self, number_of_points, init_factor=5, approximate=False):
83 | logger = logging.getLogger("SamplePointsPoissonDisk")
84 | if number_of_points < 1:
85 | logger.error("zero or negative number of points")
86 | return
87 |
88 |         if not self.triangles.any():
89 | logger.error("input mesh has no triangles")
90 | return
91 |
92 |         if init_factor < 1:
93 |             logger.error("please provide either a point cloud or an init_factor greater than 0")
94 |             return
95 | all_points, normals = self.sample_points_uniformlyImpl(init_factor * number_of_points)
96 |
97 | # Set-up sample elimination
98 | alpha = 8 # constant defined in paper
99 | beta = 0.5 # constant defined in paper
100 | gamma = 1.5 # constant defined in paper
101 |
102 | pcl_size = len(all_points)
103 | ratio = number_of_points / pcl_size
104 | r_max = 2 * sqrt((self.surface_area / number_of_points) / (2 * sqrt(3.0)))
105 | r_min = r_max * beta * (1 - pow(ratio, gamma))
106 |
107 | deleted = [False] * pcl_size
108 |
109 | kd = KDTree(len(all_points))
110 | for i, v in enumerate(all_points):
111 | kd.insert(v, i)
112 |
113 | kd.balance()
114 |
115 | def weight_fcn(d):
116 | if d < r_min:
117 | d = r_min
118 |
119 | return pow(1 - d / r_max, alpha)
120 |
121 | def weight_fcn_squared(d2):
122 | d = sqrt(d2)
123 | return weight_fcn(d)
124 |
125 | def compute_point_weight(pidx0):
126 | nbs = kd.find_range(all_points[pidx0], r_max)
127 | weight = 0
128 |
129 | for neighbour, nb_idx, nb_dist in nbs:
130 |             # only count weights if it is not the same point and not deleted
131 | if nb_idx == pidx0:
132 | continue
133 | if deleted[nb_idx]:
134 | continue
135 |
136 | weight += weight_fcn(nb_dist)
137 |
138 | return weight
139 |
140 | # init weights and priority queue
141 | queue = []
142 |
143 | for idx in range(pcl_size):
144 | weight = compute_point_weight(idx)
145 | queue.append(QueueEntry(idx, weight))
146 |
147 | priority = copy.copy(queue)
148 | current_number_of_points = pcl_size
149 |
150 | if approximate:
151 | first_slice = number_of_points + number_of_points * int(init_factor/2)
152 | step = init_factor * 2
153 | while current_number_of_points > first_slice:
154 | priority.sort(key=lambda q: q.weight)
155 | for p in priority[-step:]:
156 | deleted[p.idx] = True
157 | for p in priority[-step:]:
158 | nbs = kd.find_range(all_points[p.idx], r_max)
159 | for nb, nb_idx, nb_dist in nbs:
160 | queue[nb_idx].weight = compute_point_weight(nb_idx)
161 |
162 | priority = priority[:-step]
163 | current_number_of_points -= step
164 |
165 | while current_number_of_points > number_of_points:
166 | priority.sort(key=lambda q: q.weight)
167 |
168 | last = priority.pop()
169 | weight, pidx = last.weight, last.idx
170 | deleted[pidx] = True
171 | current_number_of_points -= 1
172 |
173 | # update weights
174 | nbs = kd.find_range(all_points[pidx], r_max)
175 |
176 | for nb, nb_idx, nb_dist in nbs:
177 | queue[nb_idx].weight = compute_point_weight(nb_idx)
178 |
179 | for i, point in enumerate(all_points):
180 | if deleted[i]:
181 | continue
182 |
183 | yield point, normals[i]
184 |
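    # sample_points_poissondisk is a generator; a sketch of collecting its
    # output (construction of `sampler` omitted):
    #
    #     points, normals = zip(*sampler.sample_points_poissondisk(2000))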
185 | def calc_geodesic(self, samples=2000):
186 | if self._surface_geodesic is not None:
187 | return self._surface_geodesic
188 |
189 | # RigNet uses 4000 samples, not sure this script can handle that.
190 |
191 | sampled = [(pt, normal) for pt, normal in self.sample_points_poissondisk(samples)]
192 | sample_points, sample_normals = list(zip(*sampled))
193 |
194 | pts = np.asarray(sample_points)
195 | pts_normal = np.asarray(sample_normals)
196 |
197 | time1 = time.time()
198 |
199 | verts_dist = np.sqrt(np.sum((pts[np.newaxis, ...] - pts[:, np.newaxis, :]) ** 2, axis=2))
200 | verts_nn = np.argsort(verts_dist, axis=1)
201 |
202 | N = len(pts)
203 | conn_matrix = lil_matrix((N, N), dtype=np.float32)
204 | for p in range(N):
205 | nn_p = verts_nn[p, 1:6]
206 | norm_nn_p = np.linalg.norm(pts_normal[nn_p], axis=1)
207 | norm_p = np.linalg.norm(pts_normal[p])
208 | cos_similar = np.dot(pts_normal[nn_p], pts_normal[p]) / (norm_nn_p * norm_p + 1e-10)
209 | nn_p = nn_p[cos_similar > -0.5]
210 | conn_matrix[p, nn_p] = verts_dist[p, nn_p]
211 | dist = dijkstra(conn_matrix, directed=False, indices=range(N),
212 | return_predecessors=False, unweighted=False)
213 |
214 | # replace inf distance with euclidean distance + 8
215 |         # 6.12 is the maximal geodesic distance without considering inf; 8 is added to be safe.
216 | inf_pos = np.argwhere(np.isinf(dist))
217 | if len(inf_pos) > 0:
218 | euc_distance = np.sqrt(np.sum((pts[np.newaxis, ...] - pts[:, np.newaxis, :]) ** 2, axis=2))
219 | dist[inf_pos[:, 0], inf_pos[:, 1]] = 8.0 + euc_distance[inf_pos[:, 0], inf_pos[:, 1]]
220 |
221 | verts = self.vertices
222 | vert_pts_distance = np.sqrt(np.sum((verts[np.newaxis, ...] - pts[:, np.newaxis, :]) ** 2, axis=2))
223 | vert_pts_nn = np.argmin(vert_pts_distance, axis=0)
224 | surface_geodesic = dist[vert_pts_nn, :][:, vert_pts_nn]
225 | time2 = time.time()
226 | logger = logging.getLogger("Geodesic")
227 | logger.debug('surface geodesic calculation: {} seconds'.format((time2 - time1)))
228 |
229 | self._surface_geodesic = surface_geodesic
230 | return surface_geodesic
231 |
--------------------------------------------------------------------------------
/postgen_utils/__init__.py:
--------------------------------------------------------------------------------
1 | import bpy
2 | from bpy.props import BoolProperty
3 | from bpy.props import FloatProperty
4 |
5 | from math import floor
6 | from mathutils import Vector
7 |
8 | from . import bone_utils
9 | from . import bone_mapping
10 |
11 |
12 | class LimbChain:
13 | def __init__(self, chain_root, object, direction_change_stop=False):
14 | self.object = object
15 | self.length = chain_root.length
16 | self.bones = [chain_root]
17 | self.direction_change_stop = direction_change_stop
18 |
19 | self.get_children()
20 |
21 | @property
22 | def root(self):
23 | return self.bones[0]
24 |
25 | @property
26 | def end(self):
27 | return self.bones[-1]
28 |
29 | @property
30 | def mid(self):
31 | mid_idx = floor(len(self.bones) / 2)
32 | # TODO: compare length to mid bone with (self.length - self.end.length) / 2
33 |
34 | mid_bone = self.bones[mid_idx]
35 | return mid_bone
36 |
37 | def get_children(self):
38 | try:
39 | child = next(c for c in self.root.children if self.object.data.bones[c.name].use_connect)
40 | except (IndexError, StopIteration):
41 | return
42 |
43 | self.bones.append(child)
44 | self.length += child.length
45 | while child:
46 | try:
47 | child = next(c for c in child.children if self.object.data.bones[c.name].use_connect)
48 | except (IndexError, StopIteration):
49 | break
50 | else:
51 | self.bones.append(child)
52 | self.length += child.length
53 |
54 | if self.direction_change_stop and child.parent.vector.normalized().dot(child.vector.normalized()) < 0.6:
55 | break
56 |
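# Usage sketch (assumes `pbone` is the first pose bone of a connected limb on
# the armature object `arm_ob`):
#
#     chain = LimbChain(pbone, arm_ob)
#     chain.root, chain.mid, chain.end  # first, middle and last bone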
57 |
58 | class MergeBones(bpy.types.Operator):
59 | """Merge two deformation bones and their vertex weights"""
60 |
61 | bl_idname = "object.brignet_merge_bones"
62 | bl_label = "Merge Deform Bones"
63 | bl_description = "Merge bones and their assigned vertex weights"
64 | bl_options = {'REGISTER', 'UNDO'}
65 |
66 | mirror: BoolProperty(name="Mirror", description="merge bones from the other side too", default=True)
67 | remove_merged: BoolProperty(name="Remove Merged", description="Remove merged groups", default=True)
68 | merge_tails: BoolProperty(name="Merge Tails",
69 | description="Move the resulting bone tail where the merged bone tail was", default=True)
70 |
71 | _armature = None
72 |
73 | @classmethod
74 | def poll(cls, context):
75 | if context.mode != 'POSE':
76 | return False
77 |
78 | # In case of multiple bones, we should choose which tail we should keep (perhaps most distant?)
79 | # for now we limit the merge to just two bones
80 | return len(context.selected_pose_bones) < 3
81 |
82 | def merge_bones(self, ebone, target_bone):
83 | """Merge selected bones and their vertex weights"""
84 | if self.merge_tails:
85 | target_bone.tail = ebone.tail
86 |
87 | for child in ebone.children:
88 | if child.head != target_bone.tail:
89 | child.use_connect = False
90 | child.parent = target_bone
91 |
92 | self._armature.data.edit_bones.remove(ebone)
93 |
94 | def execute(self, context):
95 | self._armature = context.active_object
96 |
97 | bone_names = [b.name for b in context.selected_pose_bones if b != context.active_pose_bone]
98 | target_name = context.active_pose_bone.name
99 |
100 | other_target_name = ''
101 |
102 | if self.mirror:
103 | side, other_side = side_from_bone_name(target_name)
104 |
105 | if side and other_side:
106 | other_target_name = other_side_name(target_name, side, other_side)
107 |
108 | for ob in bone_utils.iterate_rigged_obs(self._armature):
109 | for name in bone_names:
110 | bone_utils.merge_vertex_groups(ob, target_name, name, remove_merged=self.remove_merged)
111 |
112 | if other_target_name:
113 | other_name = other_side_name(name, side, other_side)
114 | bone_utils.merge_vertex_groups(ob, other_target_name, other_name, remove_merged=self.remove_merged)
115 |
116 | bpy.ops.object.mode_set(mode='EDIT')
117 | target_bone = self._armature.data.edit_bones[target_name]
118 | other_target_bone = None
119 | if other_target_name:
120 | try:
121 | other_target_bone = self._armature.data.edit_bones[other_target_name]
122 | except KeyError:
123 | pass
124 |
125 | for name in bone_names:
126 | ebone = self._armature.data.edit_bones[name]
127 | self.merge_bones(ebone, target_bone)
128 |
129 | if other_target_bone:
130 | try:
131 | ebone = self._armature.data.edit_bones[name[:-2] + other_side]
132 | except KeyError:
133 | pass
134 | else:
135 | self.merge_bones(ebone, other_target_bone)
136 |
137 | bpy.ops.object.mode_set(mode='POSE')
138 |
139 | return {'FINISHED'}
140 |
141 |
142 | class SpineFix(bpy.types.Operator):
143 |     """Extend collapsed spine joints"""
144 |
145 | bl_idname = "object.brignet_spinefix"
146 | bl_label = "Fix Spine"
147 | bl_description = "Extend collapsed spine joints"
148 | bl_options = {'REGISTER', 'UNDO'}
149 |
150 | factor: FloatProperty(name='Factor', min=0.0, max=1.0, default=1.0)
151 | fwd_roll: FloatProperty(name='Roll', min=0.0, max=1.0, default=1.0)
152 | _central_tolerance = 0.01 # max dislocation for joint to be considered central
153 |
154 | @classmethod
155 | def poll(cls, context):
156 |         if not context.object or context.object.type != 'ARMATURE':
157 | return False
158 | if 'root' not in context.object.data.bones:
159 | return False
160 |
161 | return True
162 |
163 | def get_central_child(self, ebone):
164 | try:
165 | child = next(c for c in ebone.children if abs(c.tail.x) < self._central_tolerance)
166 | except StopIteration:
167 | return
168 |
169 | return child
170 |
171 | def execute(self, context):
172 | armature = context.active_object.data
173 | bpy.ops.object.mode_set(mode='EDIT')
174 |
175 | root_bone = armature.edit_bones['root']
176 | hip_bones = [c for c in root_bone.children if abs(c.tail.x) > self._central_tolerance]
177 |
178 | if not hip_bones:
179 | return {'FINISHED'}
180 |
181 | diff = root_bone.head.z - hip_bones[0].tail.z
182 | new_z = hip_bones[0].tail.z - diff/2
183 |
184 | new_head = Vector((0.0, root_bone.head.y, new_z))
185 | root_bone.head = self.factor * new_head + (1 - self.factor) * root_bone.head
186 |
187 | fwd = Vector((0.0, -1.0, 0.0))
188 | root_bone.roll = self.fwd_roll * bone_utils.ebone_roll_to_vector(root_bone, fwd) + (1 - self.fwd_roll) * root_bone.roll
189 |
190 | for bone in hip_bones:
191 | bone.use_connect = False
192 | bone.head.z = root_bone.head.z
193 |
194 | child = self.get_central_child(root_bone)
195 | while child:
196 | child.use_connect = True
197 | if child.length < 0.02:
198 | new_head = Vector((0.0, child.head.y, child.head.z - child.parent.length/2))
199 | child.head = self.factor * new_head + (1 - self.factor) * child.head
200 | child.tail.x = self.factor * 0.0 + (1 - self.factor) * child.tail.x
201 |
202 | child.roll = self.fwd_roll * bone_utils.ebone_roll_to_vector(child, fwd) + (1 - self.fwd_roll) * child.roll
203 | child = self.get_central_child(child)
204 |
205 | bpy.ops.object.mode_set(mode='POSE')
206 | return {'FINISHED'}
207 |
208 |
209 | def side_from_bone_name(bone_name):
210 | if bone_name.endswith(('.R', '.L')):
211 | side = bone_name[-2:]
212 | elif bone_name[:-1].endswith(tuple(f'.{sd}.00' for sd in ('L', 'R'))):
213 |         side = bone_name[-6:-4]
214 | else:
215 | return "", ""
216 |
217 | return side, '.L' if side == '.R' else '.R'
218 |
219 |
220 | def other_side_name(bone_name, side, other_side):
221 | bone_name = bone_name.replace(f'{side}.00', f'{other_side}.00')
222 | if bone_name.endswith(side):
223 | bone_name = bone_name[:-2] + other_side
224 |
225 | return bone_name
226 |
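# Doctest-style examples for the two helpers above:
#
#     >>> side_from_bone_name('DEF-thigh.L')
#     ('.L', '.R')
#     >>> other_side_name('DEF-thigh.L', '.L', '.R')
#     'DEF-thigh.R'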
227 |
228 | class NamiFy(bpy.types.Operator):
229 | """Rename deformation bones as generated via rigify. Rigify should be enabled"""
230 |
231 | bl_idname = "object.brignet_namify"
232 | bl_label = "Namify"
233 | bl_description = "Rename deformation bones as generated via rigify"
234 | bl_options = {'REGISTER', 'UNDO'}
235 |
236 | rename_mirrored: BoolProperty(name='Rename mirrored bones', default=True,
237 | description='Rename mirrored bones if found')
238 |
239 | @classmethod
240 | def poll(cls, context):
241 | return context.active_object.type == 'ARMATURE'
242 |
243 | def rename_def_bones(self, armature):
244 | for bone in armature.pose.bones:
245 | if bone.name.startswith('DEF-'):
246 | continue
247 | try:
248 | rigify_type = bone.rigify_type
249 | except AttributeError:
250 |                 self.report({'ERROR'}, "Rigify attribute not found, please make sure Rigify is enabled")
251 | return
252 |
253 | if not rigify_type:
254 | continue
255 |
256 | side, other_side = side_from_bone_name(bone.name)
257 | rename_mirrored = self.rename_mirrored and side in ('.L', '.R')
258 |
259 | chain = LimbChain(bone, armature)
260 | rigify_parameters = bone.rigify_parameters
261 |
262 | if rigify_type == 'limbs.super_limb':
263 | if rigify_parameters.limb_type == 'arm':
264 | root_name, mid_name, end_name = 'DEF-upper_arm', 'DEF-forearm', 'DEF-hand'
265 | parent_name = 'DEF-shoulder'
266 | elif rigify_parameters.limb_type == 'leg':
267 | chain = LimbChain(bone, armature, direction_change_stop=True)
268 | root_name, mid_name, end_name = 'DEF-thigh', 'DEF-shin', 'DEF-foot'
269 | parent_name = 'DEF-pelvis'
270 |
271 | if rename_mirrored:
272 | other_parent_name = other_side_name(bone.parent.name, side, other_side)
273 | other_parent = armature.pose.bones[other_parent_name]
274 | other_parent.name = parent_name + other_side
275 | bone.parent.name = parent_name + side
276 |
277 | basename = root_name
278 | for cbone in chain.bones:
279 | if cbone == chain.mid:
280 | basename = mid_name
281 | elif cbone == chain.end:
282 | basename = end_name
283 | if cbone.name.startswith(basename):
284 | # already named
285 | continue
286 |
287 | if rename_mirrored:
288 | other_name = other_side_name(cbone.name, side, other_side)
289 | try:
290 | other_bone = armature.pose.bones[other_name]
291 | except KeyError:
292 | pass
293 | else:
294 | other_bone.name = basename + other_side
295 |
296 | cbone.name = basename + side
297 |
298 | try:
299 | cbone = cbone.children[0]
300 | except IndexError:
301 | pass
302 | else:
303 | basename = 'DEF-toe'
304 | if rename_mirrored:
305 | other_name = other_side_name(cbone.name, side, other_side)
306 | try:
307 | other_bone = armature.pose.bones[other_name]
308 | except KeyError:
309 | pass
310 | else:
311 | other_bone.name = basename + other_side
312 |
313 | cbone.name = basename + side
314 |
315 | elif rigify_type in ('spines.basic_spine', 'spines.super_spine'):
316 | basename = 'DEF-spine'
317 | for cbone in chain.bones:
318 | cbone.name = basename
319 |
320 | if rename_mirrored:
321 | other_name = bone.name[:-2] + other_side
322 | try:
323 | other_bone = armature.pose.bones[other_name]
324 | except KeyError:
325 | pass
326 | else:
327 | other_bone.name = basename + other_side
328 |
329 | bone.name = basename + side
330 |
331 | def execute(self, context):
332 | armature = context.active_object
333 | self.rename_def_bones(armature)
334 |
335 | # trigger update as vgroup indices have changed
336 | for obj in bone_utils.iterate_rigged_obs(armature):
337 | obj.update_tag(refresh={'DATA'})
338 |
339 | context.view_layer.update()
340 | return {'FINISHED'}
341 |
342 |
343 | class ExtractMetarig(bpy.types.Operator):
344 | """Create Metarig from current object"""
345 | bl_idname = "object.brignet_extract_metarig"
346 | bl_label = "Extract Metarig"
347 | bl_description = "Create Metarig from current object"
348 | bl_options = {'REGISTER', 'UNDO'}
349 |
350 |     remove_missing: BoolProperty(name='Remove Unmatched Bones',
351 |                                  default=True,
352 |                                  description='Remove metarig bones that have no match in the source armature')
353 |
354 | assign_metarig: BoolProperty(name='Assign metarig',
355 | default=True,
356 | description='Rigify will generate to the active object')
357 |
358 | roll_knee_to_foot: BoolProperty(name='Roll knees to foot',
359 | default=True,
360 | description='Align knee roll with foot direction')
361 |
362 | @classmethod
363 | def poll(cls, context):
364 | if not context.object:
365 | return False
366 | if context.mode != 'POSE':
367 | return False
368 | if context.object.type != 'ARMATURE':
369 | return False
370 |
371 | return True
372 |
373 | def adjust_toes(self, armature):
374 | """Align toe joint with foot"""
375 | for side in '.R', '.L':
376 | foot_bone = armature.edit_bones[f'foot{side}']
377 | vector = foot_bone.vector.normalized()
378 |
379 | toe_bone = armature.edit_bones[f'toe{side}']
380 | dist = (toe_bone.tail - foot_bone.head).length
381 |
382 | vector *= dist
383 | new_loc = vector + foot_bone.head
384 | new_loc.z = toe_bone.tail.z
385 |
386 | toe_bone.tail = new_loc
387 |
388 | def adjust_knees(self, armature):
389 | """Straighten knee joints"""
390 | for side in '.R', '.L':
391 | thigh_bone = armature.edit_bones[f'thigh{side}']
392 | foot_bone = armature.edit_bones[f'foot{side}']
393 |
394 | leg_direction = (foot_bone.tail - thigh_bone.head).normalized()
395 | leg_direction *= thigh_bone.length
396 | thigh_bone.tail = thigh_bone.head + leg_direction
397 |
398 | if self.roll_knee_to_foot:
399 | foot_direction = -foot_bone.vector.normalized()
400 | thigh_bone.roll = bone_utils.ebone_roll_to_vector(thigh_bone, foot_direction)
401 | shin_bone = armature.edit_bones[f'shin{side}']
402 | shin_bone.roll = bone_utils.ebone_roll_to_vector(shin_bone, foot_direction)
403 |
404 | up_axis = Vector((0.0, 0.0, 1.0))
405 | foot_bone.roll = bone_utils.ebone_roll_to_vector(foot_bone, -up_axis)
406 | if foot_bone.children:
407 | foot_bone.children[0].roll = bone_utils.ebone_roll_to_vector(foot_bone, up_axis)
408 |
409 | def adjust_elbows(self, armature):
410 |         """Align arm bone rolls with the elbow direction"""
411 | for side in '.R', '.L':
412 | arm_bone = armature.edit_bones[f'upper_arm{side}']
413 | forearm_bone = armature.edit_bones[f'forearm{side}']
414 | hand_bone = armature.edit_bones[f'hand{side}']
415 |
416 |             arm_mid = (arm_bone.head + forearm_bone.tail) / 2  # midpoint of the whole arm
417 | bow_dir = arm_bone.tail - arm_mid
418 | bow_dir = bow_dir.cross(Vector((-1.0, 0.0, 0.0)))
419 | bow_dir.normalize()
420 |
421 | arm_bone.roll = bone_utils.ebone_roll_to_vector(arm_bone, bow_dir)
422 | forearm_bone.roll = bone_utils.ebone_roll_to_vector(forearm_bone, bow_dir)
423 | hand_bone.roll = bone_utils.ebone_roll_to_vector(hand_bone, bow_dir)
424 |
425 | def execute(self, context):
426 | src_object = context.object
427 | src_armature = context.object.data
428 |
429 | try:
430 | metarig = next(ob for ob in bpy.data.objects if ob.type == 'ARMATURE' and ob.data.rigify_target_rig == src_object)
431 | met_armature = metarig.data
432 | create_metarig = False
433 | except StopIteration:
434 | create_metarig = True
435 | met_armature = bpy.data.armatures.new('metarig')
436 | metarig = bpy.data.objects.new("metarig", met_armature)
437 |
438 | context.collection.objects.link(metarig)
439 |
440 | bpy.ops.object.mode_set(mode='OBJECT')
441 | bpy.ops.object.select_all(action='DESELECT')
442 |
443 | metarig.select_set(True)
444 | bpy.context.view_layer.objects.active = metarig
445 | bpy.ops.object.mode_set(mode='EDIT')
446 |
447 | if create_metarig:
448 | from rigify.metarigs.Basic import basic_human
449 | basic_human.create(metarig)
450 |
451 | met_skeleton = bone_mapping.RigifyMeta()
452 |
453 | def match_meta_bone(met_bone_group, src_bone_group, bone_attr):
454 | met_bone = met_armature.edit_bones[getattr(met_bone_group, bone_attr)]
455 | src_bone = src_armature.bones.get(getattr(src_bone_group, bone_attr), None)
456 |
457 | if not src_bone:
458 | print(getattr(src_bone_group, bone_attr, None), "not found in", src_armature)
459 | if self.remove_missing:
460 | met_armature.edit_bones.remove(met_bone)
461 | return
462 |
463 | met_bone.head = src_bone.head_local
464 | met_bone.tail = src_bone.tail_local
465 |
466 | src_skeleton = bone_mapping.RigifySkeleton()
467 | for bone_attr in ['hips', 'spine', 'spine1', 'spine2', 'neck', 'head']:
468 | match_meta_bone(met_skeleton.spine, src_skeleton.spine, bone_attr)
469 |
470 |         if self.remove_missing and 'DEF-spine.005' not in src_armature.bones:
471 | # TODO: should rather check all DEF-bones at once
472 | met_armature.edit_bones.remove(met_armature.edit_bones['spine.005'])
473 |
474 | for bone_attr in ['shoulder', 'arm', 'forearm', 'hand']:
475 | match_meta_bone(met_skeleton.right_arm, src_skeleton.right_arm, bone_attr)
476 | match_meta_bone(met_skeleton.left_arm, src_skeleton.left_arm, bone_attr)
477 |
478 | for bone_attr in ['upleg', 'leg', 'foot', 'toe']:
479 | match_meta_bone(met_skeleton.right_leg, src_skeleton.right_leg, bone_attr)
480 | match_meta_bone(met_skeleton.left_leg, src_skeleton.left_leg, bone_attr)
481 |
482 | self.adjust_toes(met_armature)
483 | self.adjust_knees(met_armature)
484 | self.adjust_elbows(met_armature)
485 |
486 | # find foot vertices
487 |         foot_verts = []
488 | foot_ob = None
489 | # pick object with most foot verts
490 | for ob in bone_utils.iterate_rigged_obs(src_object):
491 | if src_skeleton.left_leg.foot not in ob.vertex_groups:
492 | continue
493 | grouped_verts = [gv for gv, _ in bone_utils.get_group_verts_weight(ob, src_skeleton.left_leg.foot, threshold=0.8)]
494 | if len(grouped_verts) > len(foot_verts):
495 | foot_verts = grouped_verts
496 | foot_ob = ob
497 |
498 | if foot_verts:
499 | # find rear verts (heel)
500 |             mat = foot_ob.matrix_world
501 |
502 | rearest_y = max([(mat @ foot_ob.data.vertices[v].co)[1] for v in foot_verts])
503 | leftmost_x = max([(mat @ foot_ob.data.vertices[v].co)[0] for v in foot_verts]) # FIXME: we should counter rotate verts for more accuracy
504 | rightmost_x = min([(mat @ foot_ob.data.vertices[v].co)[0] for v in foot_verts])
505 |
506 | inv = src_object.matrix_world.inverted()
507 | for side in "L", "R":
508 | heel_bone = met_armature.edit_bones['heel.02.' + side]
509 |
510 | heel_bone.head.y = rearest_y
511 | heel_bone.tail.y = rearest_y
512 |
513 | if heel_bone.head.x > 0:
514 | heel_head = leftmost_x
515 | heel_tail = rightmost_x
516 | else:
517 | heel_head = rightmost_x * -1
518 | heel_tail = leftmost_x * -1
519 | heel_bone.head.x = heel_head
520 | heel_bone.tail.x = heel_tail
521 |
522 | heel_bone.head = inv @ heel_bone.head
523 | heel_bone.tail = inv @ heel_bone.tail
524 |
525 | for side in "L", "R":
526 | spine_bone = met_armature.edit_bones['spine']
527 | pelvis_bone = met_armature.edit_bones['pelvis.' + side]
528 | thigh_bone = met_armature.edit_bones['thigh.' + side]
529 | pelvis_bone.head = spine_bone.head
530 | pelvis_bone.tail.x = thigh_bone.tail.x
531 | pelvis_bone.tail.y = spine_bone.tail.y
532 | pelvis_bone.tail.z = spine_bone.tail.z
533 |
534 | spine_bone = met_armature.edit_bones['spine.003']
535 | breast_bone = met_armature.edit_bones['breast.' + side]
536 | breast_bone.head.x = pelvis_bone.tail.x
537 | breast_bone.head.y = spine_bone.head.y
538 | breast_bone.head.z = spine_bone.head.z
539 |
540 | breast_bone.tail.x = breast_bone.head.x
541 | breast_bone.tail.z = breast_bone.head.z
542 |
543 | bpy.ops.object.mode_set(mode='POSE')
544 | if self.assign_metarig:
545 | met_armature.rigify_target_rig = src_object
546 |
547 | metarig.parent = src_object.parent
548 | metarig.matrix_local = src_object.matrix_local
549 |
550 | return {'FINISHED'}
551 |
--------------------------------------------------------------------------------
/postgen_utils/bone_mapping.py:
--------------------------------------------------------------------------------
1 |
2 | class HumanLimb:
3 | def __str__(self):
4 | return self.__class__.__name__ + ' ' + ', '.join(["{0}: {1}".format(k, v) for k, v in self.__dict__.items()])
5 |
6 | def __getitem__(self, item):
7 | return getattr(self, item, None)
8 |
9 | def items(self):
10 | return self.__dict__.items()
11 |
12 |
13 | class HumanSpine(HumanLimb):
14 | def __init__(self, head='', neck='', spine2='', spine1='', spine='', hips=''):
15 | self.head = head
16 | self.neck = neck
17 | self.spine2 = spine2
18 | self.spine1 = spine1
19 | self.spine = spine
20 | self.hips = hips
21 |
22 |
23 | class HumanArm(HumanLimb):
24 | def __init__(self, shoulder='', arm='', forearm='', hand=''):
25 | self.shoulder = shoulder
26 | self.arm = arm
27 | self.arm_twist = None
28 | self.forearm = forearm
29 | self.forearm_twist = None
30 | self.hand = hand
31 |
32 |
33 | class HumanLeg(HumanLimb):
34 | def __init__(self, upleg='', leg='', foot='', toe=''):
35 | self.upleg = upleg
36 | self.upleg_twist = None
37 | self.leg = leg
38 | self.leg_twist = None
39 | self.foot = foot
40 | self.toe = toe
41 |
42 |
43 | class HumanFingers(HumanLimb):
44 |     def __init__(self, thumb=None, index=None, middle=None, ring=None, pinky=None):
45 |         self.thumb = thumb or [''] * 3  # per-instance lists avoid shared mutable default arguments
46 |         self.index = index or [''] * 3
47 |         self.middle = middle or [''] * 3
48 |         self.ring = ring or [''] * 3
49 |         self.pinky = pinky or [''] * 3
50 |
51 |
52 | class HumanSkeleton:
53 | spine = None
54 |
55 | left_arm = None
56 | right_arm = None
57 | left_leg = None
58 | right_leg = None
59 |
60 | left_fingers = None
61 | right_fingers = None
62 |
63 | def conversion_map(self, target_skeleton):
64 | """Return a dictionary that maps skeleton bone names to target bone names
65 | >>> rigify = RigifySkeleton()
66 |         >>> rigify.conversion_map(UnrealSkeleton())
67 |         {'DEF-spine.006': 'head', 'DEF-spine.004': 'neck_01', 'DEF-spine.003'...
68 | """
69 | bone_map = dict()
70 |
71 | def bone_mapping(attr, limb, bone_name):
72 | target_limbs = getattr(target_skeleton, attr, None)
73 | if not target_limbs:
74 | return
75 |
76 | trg_name = target_limbs[limb]
77 |
78 | if trg_name:
79 | bone_map[bone_name] = trg_name
80 |
81 | for limb_name, bone_name in self.spine.items():
82 | bone_mapping('spine', limb_name, bone_name)
83 |
84 | for limb_name, bone_name in self.left_arm.items():
85 | bone_mapping('left_arm', limb_name, bone_name)
86 |
87 | for limb_name, bone_name in self.right_arm.items():
88 | bone_mapping('right_arm', limb_name, bone_name)
89 |
90 | for limb_name, bone_name in self.left_leg.items():
91 | bone_mapping('left_leg', limb_name, bone_name)
92 |
93 | for limb_name, bone_name in self.right_leg.items():
94 | bone_mapping('right_leg', limb_name, bone_name)
95 |
96 | def fingers_mapping(src_fingers, trg_fingers):
97 | for finger, bone_names in src_fingers.items():
98 | trg_bone_names = trg_fingers[finger]
99 |
100 | assert len(bone_names) == len(trg_bone_names)
101 | for bone, trg_bone in zip(bone_names, trg_bone_names):
102 | bone_map[bone] = trg_bone
103 |
104 | trg_fingers = target_skeleton.left_fingers
105 | fingers_mapping(self.left_fingers, trg_fingers)
106 |
107 | trg_fingers = target_skeleton.right_fingers
108 | fingers_mapping(self.right_fingers, trg_fingers)
109 |
110 | return bone_map
111 |
112 |
113 | class RigifySkeleton(HumanSkeleton):
114 | def __init__(self):
115 | self.spine = HumanSpine(
116 | head='DEF-spine.006',
117 | neck='DEF-spine.004',
118 | spine2='DEF-spine.003',
119 | spine1='DEF-spine.002',
120 | spine='DEF-spine.001',
121 | hips='DEF-spine'
122 | )
123 |
124 | for side, side_letter in zip(('left', 'right'), ('L', 'R')):
125 | arm = HumanArm(shoulder="DEF-shoulder.{0}".format(side_letter),
126 | arm="DEF-upper_arm.{0}".format(side_letter),
127 | forearm="DEF-forearm.{0}".format(side_letter),
128 | hand="DEF-hand.{0}".format(side_letter))
129 |
130 | arm.arm_twist = arm.arm + ".001"
131 | arm.forearm_twist = arm.forearm + ".001"
132 | setattr(self, side + "_arm", arm)
133 |
134 | fingers = HumanFingers(
135 | thumb=["DEF-thumb.{1:02d}.{0}".format(side_letter, i) for i in range(1, 4)],
136 | index=["DEF-f_index.{1:02d}.{0}".format(side_letter, i) for i in range(1, 4)],
137 | middle=["DEF-f_middle.{1:02d}.{0}".format(side_letter, i) for i in range(1, 4)],
138 | ring=["DEF-f_ring.{1:02d}.{0}".format(side_letter, i) for i in range(1, 4)],
139 | pinky=["DEF-f_pinky.{1:02d}.{0}".format(side_letter, i) for i in range(1, 4)],
140 | )
141 | setattr(self, side + "_fingers", fingers)
142 |
143 | leg = HumanLeg(upleg="DEF-thigh.{0}".format(side_letter),
144 | leg="DEF-shin.{0}".format(side_letter),
145 | foot="DEF-foot.{0}".format(side_letter),
146 | toe="DEF-toe.{0}".format(side_letter))
147 |
148 | leg.upleg_twist = leg.upleg + ".001"
149 | leg.leg_twist = leg.leg + ".001"
150 | setattr(self, side + "_leg", leg)
151 |
152 |
153 | class RigifyMeta(HumanSkeleton):
154 | def __init__(self):
155 | self.spine = HumanSpine(
156 | head='spine.006',
157 | neck='spine.004',
158 | spine2='spine.003',
159 | spine1='spine.002',
160 | spine='spine.001',
161 | hips='spine'
162 | )
163 |
164 | side = 'L'
165 | self.left_arm = HumanArm(shoulder="shoulder.{0}".format(side),
166 | arm="upper_arm.{0}".format(side),
167 | forearm="forearm.{0}".format(side),
168 | hand="hand.{0}".format(side))
169 |
170 | self.left_fingers = HumanFingers(
171 | thumb=["thumb.{1:02d}.{0}".format(side, i) for i in range(1, 4)],
172 | index=["f_index.{1:02d}.{0}".format(side, i) for i in range(1, 4)],
173 | middle=["f_middle.{1:02d}.{0}".format(side, i) for i in range(1, 4)],
174 | ring=["f_ring.{1:02d}.{0}".format(side, i) for i in range(1, 4)],
175 | pinky=["f_pinky.{1:02d}.{0}".format(side, i) for i in range(1, 4)],
176 | )
177 |
178 | self.left_leg = HumanLeg(upleg="thigh.{0}".format(side),
179 | leg="shin.{0}".format(side),
180 | foot="foot.{0}".format(side),
181 | toe="toe.{0}".format(side))
182 |
183 | side = 'R'
184 | self.right_arm = HumanArm(shoulder="shoulder.{0}".format(side),
185 | arm="upper_arm.{0}".format(side),
186 | forearm="forearm.{0}".format(side),
187 | hand="hand.{0}".format(side))
188 |
189 | self.right_fingers = HumanFingers(
190 | thumb=["thumb.{1:02d}.{0}".format(side, i) for i in range(1, 4)],
191 | index=["f_index.{1:02d}.{0}".format(side, i) for i in range(1, 4)],
192 | middle=["f_middle.{1:02d}.{0}".format(side, i) for i in range(1, 4)],
193 | ring=["f_ring.{1:02d}.{0}".format(side, i) for i in range(1, 4)],
194 | pinky=["f_pinky.{1:02d}.{0}".format(side, i) for i in range(1, 4)],
195 | )
196 |
197 | self.right_leg = HumanLeg(upleg="thigh.{0}".format(side),
198 | leg="shin.{0}".format(side),
199 | foot="foot.{0}".format(side),
200 | toe="toe.{0}".format(side))
201 |
202 |
203 | class UnrealSkeleton(HumanSkeleton):
204 | def __init__(self):
205 | self.spine = HumanSpine(
206 | head='head',
207 | neck='neck_01',
208 | spine2='spine_03',
209 | spine1='spine_02',
210 | spine='spine_01',
211 | hips='pelvis'
212 | )
213 |
214 | for side, side_letter in zip(('left', 'right'), ('_l', '_r')):
215 | arm = HumanArm(shoulder="clavicle" + side_letter,
216 | arm="upperarm" + side_letter,
217 | forearm="lowerarm" + side_letter,
218 | hand="hand" + side_letter)
219 |
220 | arm.arm_twist = "upperarm_twist_01" + side_letter
221 | arm.forearm_twist = "lowerarm_twist_01" + side_letter
222 | setattr(self, side + "_arm", arm)
223 |
224 | fingers = HumanFingers(
225 | thumb=["thumb_{0:02d}{1}".format(i, side_letter) for i in range(1, 4)],
226 | index=["index_{0:02d}{1}".format(i, side_letter) for i in range(1, 4)],
227 | middle=["middle_{0:02d}{1}".format(i, side_letter) for i in range(1, 4)],
228 | ring=["ring_{0:02d}{1}".format(i, side_letter) for i in range(1, 4)],
229 | pinky=["pinky_{0:02d}{1}".format(i, side_letter) for i in range(1, 4)],
230 | )
231 | setattr(self, side + "_fingers", fingers)
232 |
233 | leg = HumanLeg(upleg="thigh{0}".format(side_letter),
234 | leg="calf{0}".format(side_letter),
235 | foot="foot{0}".format(side_letter),
236 | toe="ball{0}".format(side_letter))
237 |
238 | leg.upleg_twist = "thigh_twist_01" + side_letter
239 | leg.leg_twist = "calf_twist_01" + side_letter
240 | setattr(self, side + "_leg", leg)
241 |
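242 | # Hedged usage sketch (illustrative, not part of the original module): build a
243 | # rename dictionary from Rigify deform-bone names to UE4-style names.
244 | # bone_map = RigifySkeleton().conversion_map(UnrealSkeleton())
245 | # for src_name, trg_name in bone_map.items():
246 | #     print(src_name, '->', trg_name)  # e.g. DEF-hand.L -> hand_l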
--------------------------------------------------------------------------------
/postgen_utils/bone_utils.py:
--------------------------------------------------------------------------------
1 | import bpy
2 | from mathutils import Matrix
3 | from mathutils import Vector
4 | from mathutils import Quaternion
5 | from math import pi
6 |
7 |
8 | def copy_bone_constraints(bone_a, bone_b):
9 | """Copy all bone constraints from bone_A to bone_b and sets their writable attributes.
10 | Doesn't delete constraints that already exist
11 | """
12 | for constr_a in bone_a.constraints:
13 | constr_b = bone_b.constraints.new(constr_a.type)
14 |
15 | for c_attr in dir(constr_b):
16 | if c_attr.startswith("_"):
17 | continue
18 | try:
19 | setattr(constr_b, c_attr, getattr(constr_a, c_attr))
20 | except AttributeError:
21 | continue
22 |
23 |
24 | def copy_bone(ob, bone_name, assign_name='', constraints=False, deform_bone='SAME'):
25 | """ Makes a copy of the given bone in the given armature object.
26 | Returns the resulting bone's name.
27 |
28 |     NOTE: taken from the rigify module utils.py; added the constraints option and stripped the rna-properties part
29 | """
30 | if bone_name not in ob.data.edit_bones:
31 | raise Exception("copy_bone(): bone '%s' not found, cannot copy it" % bone_name)
32 |
33 | if assign_name == '':
34 | assign_name = bone_name
35 | # Copy the edit bone
36 | edit_bone_1 = ob.data.edit_bones[bone_name]
37 | edit_bone_2 = ob.data.edit_bones.new(assign_name)
38 |
39 | bone_name_1 = bone_name
40 | bone_name_2 = edit_bone_2.name
41 |
42 | edit_bone_2.parent = edit_bone_1.parent
43 | edit_bone_2.use_connect = edit_bone_1.use_connect
44 |
45 | # Copy edit bone attributes
46 | edit_bone_2.layers = list(edit_bone_1.layers)
47 |
48 | edit_bone_2.head = Vector(edit_bone_1.head)
49 | edit_bone_2.tail = Vector(edit_bone_1.tail)
50 | edit_bone_2.roll = edit_bone_1.roll
51 |
52 | edit_bone_2.use_inherit_rotation = edit_bone_1.use_inherit_rotation
53 | edit_bone_2.use_inherit_scale = edit_bone_1.use_inherit_scale
54 | edit_bone_2.use_local_location = edit_bone_1.use_local_location
55 |
56 | if deform_bone == 'SAME':
57 | edit_bone_2.use_deform = edit_bone_1.use_deform
58 | else:
59 | edit_bone_2.use_deform = deform_bone
60 | edit_bone_2.bbone_segments = edit_bone_1.bbone_segments
61 | edit_bone_2.bbone_custom_handle_start = edit_bone_1.bbone_custom_handle_start
62 | edit_bone_2.bbone_custom_handle_end = edit_bone_1.bbone_custom_handle_end
63 |
64 | # ITD- bones go to MCH layer
65 | edit_bone_2.layers[30] = True
66 | edit_bone_2.layers[31] = False
67 | for i in range(30):
68 | edit_bone_2.layers[i] = False
69 |
70 | ob.update_from_editmode()
71 |
72 | # Get the pose bones
73 | pose_bone_1 = ob.pose.bones[bone_name_1]
74 | pose_bone_2 = ob.pose.bones[bone_name_2]
75 |
76 | # Copy pose bone attributes
77 | pose_bone_2.rotation_mode = pose_bone_1.rotation_mode
78 | pose_bone_2.rotation_axis_angle = tuple(pose_bone_1.rotation_axis_angle)
79 | pose_bone_2.rotation_euler = tuple(pose_bone_1.rotation_euler)
80 | pose_bone_2.rotation_quaternion = tuple(pose_bone_1.rotation_quaternion)
81 |
82 | pose_bone_2.lock_location = tuple(pose_bone_1.lock_location)
83 | pose_bone_2.lock_scale = tuple(pose_bone_1.lock_scale)
84 | pose_bone_2.lock_rotation = tuple(pose_bone_1.lock_rotation)
85 | pose_bone_2.lock_rotation_w = pose_bone_1.lock_rotation_w
86 | pose_bone_2.lock_rotations_4d = pose_bone_1.lock_rotations_4d
87 |
88 | if constraints:
89 | copy_bone_constraints(pose_bone_1, pose_bone_2)
90 |
91 | return bone_name_2
92 |
93 |
94 | def remove_bone_constraints(pbone):
95 | for constr in reversed(pbone.constraints):
96 | pbone.constraints.remove(constr)
97 |
98 |
99 | def remove_all_bone_constraints(ob):
100 | for pbone in ob.pose.bones:
101 | remove_bone_constraints(pbone)
102 |
103 |
104 | def get_armature_bone(ob, bone_name):
105 | """Return the Armature Bone with given bone_name, None if not found"""
106 | return ob.data.bones.get(bone_name, None)
107 |
108 |
109 | def get_edit_bone(ob, bone_name):
110 | """Return the Edit Bone with given bone name, None if not found"""
111 | return ob.data.edit_bones.get(bone_name, None)
112 |
113 |
114 | def is_def_bone(ob, bone_name):
115 | """Return True if the bone with given name is a deforming bone,
116 | False if it isn't, None if the bone is not found"""
117 | bone = get_armature_bone(ob, bone_name)
118 |
119 | if not bone:
120 | return
121 |
122 | return bone.use_deform
123 |
124 |
125 | def find_def_parent(ob, org_bone):
126 | """Return the first DEF- bone that is suitable as parent bone of given ORG- bone"""
127 | org_par = org_bone.parent
128 | if not org_par:
129 | return
130 |
131 |     if org_par.name.startswith("MCH-"):  # MCH bones risk being named after the bone we started from
132 | return find_def_parent(ob, org_par)
133 |
134 | par_def_name = "DEF-{0}".format(org_par.name[4:])
135 | try:
136 | par_def = ob.pose.bones[par_def_name]
137 | return par_def
138 | except KeyError:
139 | return find_def_parent(ob, org_par)
140 |
141 |
142 | def get_deform_root_name(ob):
143 |     """Get the name of the first deform bone with no deform parent
144 |
145 |     :param ob: the armature object
146 |     :return: name of the root deform bone
147 |     """
148 | # TODO
149 | return 'DEF-spine'
150 |
151 |
152 | def get_deform_hips_name(ob, bone_name=None):
153 | """Starting from the root, get the first bone with more than one child
154 |
155 | :param ob: the armature object
156 | :param bone_name:
157 | :return: name of deform hips bone
158 | """
159 | if not bone_name:
160 | bone_name = get_deform_root_name(ob)
161 |
162 | bone = ob.data.edit_bones[bone_name]
163 |
164 | if len(bone.children) > 1:
165 | return bone_name
166 |
167 | return get_deform_hips_name(ob, bone.children[0].name)
168 |
169 |
170 | def set_inherit_scale(ob, inherit_mode='FIX_SHEAR'):
171 | for bone in ob.data.edit_bones:
172 | if not bone.use_deform:
173 | continue
174 |
175 | bone.inherit_scale = inherit_mode
176 |
177 |
178 | def copy_chain(ob, first, last_excluded=None, flip_bones=False):
179 | """Copy a chain of bones, return name of last copied bone"""
180 | bone = first
181 | bone_name = ''
182 |
183 | prev_itd_bone = None
184 | while bone != last_excluded:
185 | bone_name = bone.name
186 | itd_name = bone_name.replace("DEF-", "ITD-")
187 | try:
188 | itd_bone = ob.data.edit_bones[itd_name]
189 | except KeyError:
190 | itd_name = copy_bone(ob, bone_name, assign_name=itd_name, constraints=True, deform_bone=False)
191 | itd_bone = ob.data.edit_bones[itd_name]
192 |
193 | itd_bone.use_deform = False
194 | itd_bone.parent = prev_itd_bone
195 | prev_itd_bone = itd_bone
196 |
197 | cp_name = copy_bone(ob, bone_name, assign_name=bone_name.replace("DEF-", "CP-"), constraints=False,
198 | deform_bone=False)
199 | cp_bone = ob.data.edit_bones[cp_name]
200 | cp_bone.use_deform = False
201 |
202 | cp_bone.parent = None
203 |
204 | if flip_bones:
205 | flip_bone(cp_bone)
206 |
207 | pbone = ob.pose.bones[bone_name]
208 | remove_bone_constraints(pbone)
209 | cp_loc = pbone.constraints.new('COPY_LOCATION')
210 | cp_rot = pbone.constraints.new('COPY_ROTATION')
211 | cp_scale = pbone.constraints.new('COPY_SCALE')
212 | for constr in (cp_loc, cp_rot, cp_scale):
213 | constr.target = ob
214 | constr.subtarget = cp_name
215 |
216 | cp_bone.parent = itd_bone
217 |
218 | # ITD- bones go to MCH layer
219 | for new_bone in (itd_bone, cp_bone):
220 | new_bone.layers[30] = True
221 | new_bone.layers[31] = False
222 | for i in range(30):
223 | new_bone.layers[i] = False
224 |
225 | if not bone.children:
226 | break
227 | bone = bone.children[0]
228 |
229 | return bone_name
230 |
231 |
232 | def flip_bone(bone):
233 | bone.head, bone.tail = bone.tail.copy(), bone.head.copy()
234 |
235 |
236 | def find_tail_root(ob, tail_start_name='DEF-tail.001'):
237 | try:
238 | tail_bone = get_edit_bone(ob, tail_start_name)
239 | except KeyError:
240 | return
241 |
242 | if not tail_bone:
243 | return
244 |
245 | while tail_bone.parent and is_def_bone(ob, tail_bone.parent.name):
246 | tail_bone = tail_bone.parent
247 |
248 | return tail_bone.name
249 |
250 |
251 | def fix_tail_direction(ob):
252 |     """Make the hips the actual root and parent the tail to it (Rigify tails are the other way around)"""
253 | def_root_name = get_deform_root_name(ob)
254 | def_hips_name = get_deform_hips_name(ob, def_root_name)
255 |
256 | if def_root_name == def_hips_name:
257 |
258 | def_root_name = find_tail_root(ob)
259 | if not def_root_name:
260 |             print("cannot figure out root/hips, tail not fixed")
261 | return def_hips_name
262 |
263 | def_root_edit = get_edit_bone(ob, def_root_name)
264 | def_hips_edit = get_edit_bone(ob, def_hips_name)
265 |
266 | tail_next_name = copy_chain(ob, def_root_edit, def_hips_edit, flip_bones=True)
267 | def_tail_next = get_edit_bone(ob, tail_next_name)
268 | def_tail_previous = def_hips_edit
269 |
270 | def_hips_edit.parent = None
271 | def_root_edit.parent = def_hips_edit
272 |
273 | while def_tail_next:
274 | if def_tail_next == def_hips_edit:
275 | break
276 | previous_parent = def_tail_next.parent
277 |
278 | def_tail_next.parent = None
279 | # flip bone
280 | flip_bone(def_tail_next)
281 | def_tail_next.parent = def_tail_previous
282 | if def_tail_previous is not def_hips_edit:
283 | def_tail_next.use_connect = True
284 |
285 | def_tail_previous = def_tail_next
286 | def_tail_next = previous_parent
287 |
288 | return def_hips_name
289 |
290 |
291 | def copytransform_to_copylocrot(ob):
292 | for pbone in ob.pose.bones:
293 | if not ob.data.bones[pbone.name].use_deform:
294 | continue
295 |
296 | to_remove = []
297 | for constr in pbone.constraints:
298 |             if constr.type == 'COPY_TRANSFORMS':
299 | to_remove.append(constr)
300 | for cp_constr in (pbone.constraints.new('COPY_ROTATION'), pbone.constraints.new('COPY_LOCATION')):
301 |                     cp_constr.target = constr.target
302 | cp_constr.subtarget = constr.subtarget
303 | elif constr.type == 'STRETCH_TO':
304 | constr.mute = True
305 |
306 | for constr in to_remove:
307 | pbone.constraints.remove(constr)
308 |
309 |
310 | def limit_spine_scale(ob):
311 | for pbone in ob.pose.bones:
312 | if not ob.data.bones[pbone.name].use_deform:
313 | continue
314 |
315 | if not pbone.name.startswith('DEF-spine'):
316 | continue
317 |
318 | constr = pbone.constraints.new('LIMIT_SCALE')
319 |
320 | constr.min_x = 1
321 | constr.min_y = 1
322 | constr.min_z = 1
323 |
324 | constr.max_x = 1
325 | constr.max_y = 1
326 | constr.max_z = 1
327 |
328 | constr.use_min_x = True
329 | constr.use_min_y = True
330 | constr.use_min_z = True
331 |
332 | constr.use_max_x = True
333 | constr.use_max_y = True
334 | constr.use_max_z = True
335 |
336 | constr.owner_space = 'LOCAL'
337 |
338 |
339 | def gamefriendly_hierarchy(ob, fix_tail=True, limit_scale=False):
340 |     """Changes Rigify (0.5) rigs to a single root deformation hierarchy.
341 |     Creates ITD- (InTermeDiate) bones in the process"""
342 | assert (ob.mode == 'EDIT')
343 |
344 | bone_names = list((b.name for b in ob.data.bones if is_def_bone(ob, b.name)))
345 | new_bone_names = [] # collect newly added bone names so that they can be edited later in Object Mode
346 |
347 | def_root_name = get_deform_root_name(ob)
348 |
349 |     # we want deforming bones (i.e. the ones on layer 29) to have deforming bone parents
350 | for bone_name in bone_names:
351 | if bone_name == def_root_name:
352 | continue
353 |
354 | if not ob.pose.bones[bone_name].parent:
355 | # root bones are fine
356 | continue
357 | if is_def_bone(ob, ob.pose.bones[bone_name].parent.name):
358 | continue
359 |
360 | # Intermediate Bone
361 | itd_name = bone_name.replace("DEF-", "ITD-")
362 | itd_name = itd_name.replace("MCH-", "ITD-")
363 | if not itd_name.startswith("ITD-"):
364 | itd_name = "ITD-" + itd_name
365 | try:
366 | ob.data.edit_bones[itd_name]
367 | except KeyError:
368 | itd_name = copy_bone(ob, bone_name, assign_name=itd_name, constraints=True,
369 | deform_bone=False)
370 | new_bone_names.append(itd_name)
371 |
372 | # DEF- bone will now follow the ITD- bone
373 | pbone = ob.pose.bones[bone_name]
374 | remove_bone_constraints(pbone)
375 | for cp_constr in (pbone.constraints.new('COPY_LOCATION'),
376 | pbone.constraints.new('COPY_ROTATION'),
377 | pbone.constraints.new('COPY_SCALE')):
378 | cp_constr.target = ob
379 | cp_constr.subtarget = itd_name
380 |
381 | # Look for a DEF- bone that would be a good parent. Unlike DEF- bones, ORG- bones retain the
382 | # hierarchy from the metarig, so we are going to reproduce the ORG- hierarchy
383 | org_name = "ORG-{0}".format(bone_name[4:])
384 |
385 | try:
386 | org_bone = ob.pose.bones[org_name]
387 | except KeyError:
388 |             print("WARNING: 'ORG-' bone not found ({0})".format(org_name))
389 | continue
390 | else:
391 | def_par = find_def_parent(ob, org_bone)
392 | if not def_par:
393 | print("WARNING: Parent not found for {0}".format(bone_name))
394 | # as a last resort, look for a DEF- bone with the same name but a lower number
395 | # (i.e. try to parent DEF-tongue.002 to DEF-tongue.001)
396 | if bone_name[-4] == "." and bone_name[-3:].isdigit():
397 | bname, number = bone_name.rsplit(".")
398 | number = int(number)
399 | if number > 1:
400 | def_par_name = "{0}.{1:03d}".format(bname, number - 1)
401 | print("Trying to use {0}".format(def_par_name))
402 | try:
403 | def_par = ob.pose.bones[def_par_name]
404 | except KeyError:
405 | print("No suitable DEF- parent for {0}".format(bone_name))
406 | continue
407 | else:
408 | continue
409 | else:
410 | continue
411 |
412 | ebone = get_edit_bone(ob, bone_name)
413 | ebone_par = get_edit_bone(ob, def_par.name)
414 | ebone.parent = ebone_par
415 |
416 | if fix_tail:
417 | new_root_name = fix_tail_direction(ob)
418 | if new_root_name:
419 | def_root_name = new_root_name
420 |
421 | if limit_scale:
422 | limit_spine_scale(ob)
423 |
424 | try:
425 | ob.data.edit_bones[def_root_name].parent = ob.data.edit_bones['root']
426 | except KeyError:
427 | print("WARNING: DEF hierarchy root was not parented to root bone")
428 |
429 |
430 | def iterate_rigged_obs(armature_object):
431 | for ob in bpy.data.objects:
432 | if ob.type != 'MESH':
433 | continue
434 | if not ob.modifiers:
435 | continue
436 | for modifier in [mod for mod in ob.modifiers if mod.type == 'ARMATURE']:
437 | if modifier.object == armature_object:
438 | yield ob
439 | break
440 |
441 |
442 | def get_group_verts_weight(obj, vertex_group, threshold=0.1):
443 | """Iterate vertex index and weight assigned to given vertex_group"""
444 | try:
445 | group_idx = obj.vertex_groups[vertex_group].index
446 | except KeyError:
447 | return
448 |
449 | for i, v in enumerate(obj.data.vertices):
450 | try:
451 | g = next(g for g in v.groups if g.group == group_idx)
452 | except StopIteration:
453 | continue
454 |
455 | if g.weight < threshold:
456 | continue
457 |
458 | yield i, g.weight
459 |
460 |
461 | def merge_vertex_groups(obj, v_grp_a, v_grp_b, remove_merged=True):
462 | try:
463 | group_a = obj.vertex_groups[v_grp_a]
464 | except KeyError:
465 | return
466 |
467 | for idx, weight in get_group_verts_weight(obj, v_grp_b):
468 | group_a.add([idx], weight, 'REPLACE')
469 |
470 | if remove_merged:
471 | try:
472 | to_remove = obj.vertex_groups[v_grp_b]
473 | except KeyError:
474 | pass
475 | else:
476 | obj.vertex_groups.remove(to_remove)
477 |
478 |
479 | def vec_roll_to_mat3_normalized(nor, roll):
480 |     THETA_SAFE = 1.0e-5  # theta values above this are always safe to use
481 |     THETA_CRITICAL = 1.0e-9  # values above this are safe under certain conditions
482 |
483 |     assert abs(nor.magnitude - 1.0) < 0.01
484 |
485 | x = nor.x
486 | y = nor.y
487 | z = nor.z
488 |
489 | theta = 1.0 + y
490 | theta_alt = x * x + z * z
491 |
492 | # When theta is close to zero (nor is aligned close to negative Y Axis),
493 | # we have to check we do have non-null X/Z components as well.
494 | # Also, due to float precision errors, nor can be (0.0, -0.99999994, 0.0) which results
495 | # in theta being close to zero. This will cause problems when theta is used as divisor.
496 |
497 | bMatrix = Matrix().to_3x3()
498 |
499 |     if theta > THETA_SAFE or ((x or z) and theta > THETA_CRITICAL):
500 | # nor is *not* aligned to negative Y-axis (0,-1,0).
501 | # We got these values for free... so be happy with it... ;)
502 |
503 | bMatrix[0][1] = -x
504 | bMatrix[1][0] = x
505 | bMatrix[1][1] = y
506 | bMatrix[1][2] = z
507 | bMatrix[2][1] = -z
508 |
509 | if theta > THETA_SAFE:
510 |             # nor differs significantly from negative Y axis (0,-1,0): apply the general case.
511 | bMatrix[0][0] = 1 - x * x / theta
512 | bMatrix[2][2] = 1 - z * z / theta
513 | bMatrix[2][0] = bMatrix[0][2] = -x * z / theta
514 | else:
515 |             # nor is close to negative Y axis (0,-1,0): apply the special case.
516 | bMatrix[0][0] = (x + z) * (x - z) / -theta_alt
517 | bMatrix[2][2] = -bMatrix[0][0]
518 | bMatrix[2][0] = bMatrix[0][2] = 2.0 * x * z / theta_alt
519 | else:
520 |         # nor is very close to negative Y axis (0,-1,0): use simple symmetry by Z axis.
521 | bMatrix.identity()
522 | bMatrix[0][0] = bMatrix[1][1] = -1.0
523 |
524 |     # Make Roll matrix
525 | quat = Quaternion(nor, roll)
526 | rMatrix = quat.to_matrix()
527 |
528 |     # Combine and output result
529 | return rMatrix @ bMatrix
530 |
531 |
532 | def ebone_roll_to_vector(bone, align_axis, axis_only=False):
533 | roll = 0.0
534 |
535 | assert abs(align_axis.magnitude - 1.0) < 1.0e-5
536 | nor = bone.tail - bone.head
537 | nor.normalize()
538 |
539 | d = nor.dot(align_axis)
540 | if d == 1.0:
541 | return roll
542 |
543 | mat = vec_roll_to_mat3_normalized(nor, 0.0)
544 |
545 |     # project the new_up_axis along the normal
546 | vec = align_axis.project(nor)
547 | align_axis_proj = align_axis - vec
548 |
549 | if axis_only:
550 | if align_axis_proj.angle(mat[2]) > pi / 2:
551 | align_axis_proj.negate()
552 |
553 | roll = align_axis_proj.angle(mat[2])
554 |
555 | vec = mat[2].cross(align_axis_proj)
556 |
557 | if vec.dot(nor) < 0.0:
558 | return -roll
559 |
560 | return roll
561 |
562 |
563 | class NameFix:
564 | def __init__(self, armature):
565 | self.threshold = 0.03
566 | self.armature = armature
567 |
568 | self.right_bones = []
569 | self.left_bones = []
570 | self.mid_bones = []
571 |
572 | self.collect_left_right()
573 |
574 | def collect_left_right(self):
575 | self.right_bones.clear()
576 | self.left_bones.clear()
577 |
578 | for bone in self.armature.data.bones:
579 | if bone.name.endswith('.R'):
580 | self.right_bones.append(bone.name)
581 | continue
582 | if bone.name.endswith('.L'):
583 | self.left_bones.append(bone.name)
584 | continue
585 | if abs(bone.head_local.x) < self.threshold:
586 | if abs(bone.tail_local.x) < self.threshold:
587 | self.mid_bones.append(bone.name)
588 | continue
589 | elif bone.tail_local.x < 0:
590 | # right bone with head at the middle
591 | self.right_bones.append(bone.name)
592 | continue
593 | else:
594 |                 # left bone with head at the middle
595 | self.left_bones.append(bone.name)
596 | continue
597 |
598 | if bone.head_local.x < 0:
599 | self.right_bones.append(bone.name)
600 | else:
601 | self.left_bones.append(bone.name)
602 |
603 | def names_to_bones(self, names):
604 | self.armature.update_from_editmode()
605 | for name in names:
606 | yield self.armature.data.bones[name]
607 |
608 | def name_left_right(self):
609 | new_names = dict()
610 |
611 | for rbone in self.names_to_bones(self.right_bones):
612 | if rbone.name.endswith('.R'):
613 | continue
614 |
615 | new_names[rbone.name] = rbone.name + ".R"
616 |
617 | left_loc = rbone.head_local.copy()
618 | left_loc.x = -left_loc.x
619 |
620 | for lbone in self.names_to_bones(self.left_bones):
621 | match = True
622 | for i in range(3):
623 | if abs(lbone.head_local[i] - left_loc[i]) > self.threshold:
624 | match = False
625 | break
626 | if match:
627 | new_names[lbone.name] = rbone.name + ".L"
628 |
629 | for k, v in new_names.items():
630 | self.armature.data.bones[k].name = v
631 |
632 | self.left_bones = [new_names.get(name, name) for name in self.left_bones]
633 | self.right_bones = [new_names.get(name, name) for name in self.right_bones]
634 |
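635 | # Hedged usage sketch (illustrative, not part of the original module): running
636 | # the game-friendly conversion on an active Rigify rig from the Python console.
637 | # gamefriendly_hierarchy() asserts EDIT mode, hence the mode switches.
638 | # rig = bpy.context.active_object
639 | # bpy.ops.object.mode_set(mode='EDIT')
640 | # gamefriendly_hierarchy(rig, fix_tail=True, limit_scale=True)
641 | # bpy.ops.object.mode_set(mode='OBJECT')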
--------------------------------------------------------------------------------
/preferences.py:
--------------------------------------------------------------------------------
1 | import os
2 | from pathlib import Path
3 | import sys
4 |
5 | import bpy
6 |
7 | from .setup_utils import cuda_utils
8 | from .setup_utils import venv_utils
9 | from importlib.util import find_spec
10 |
11 |
12 | class BrignetEnvironment(bpy.types.Operator):
13 | """Create virtual environment with required modules"""
14 | bl_idname = "wm.brignet_environment"
15 |     bl_label = "Create Virtual Environment"
16 |
17 | @classmethod
18 | def poll(cls, context):
19 | env_path = bpy.context.preferences.addons[__package__].preferences.modules_path
20 | if not env_path:
21 | return False
22 |
23 | return len(BrignetPrefs.missing_modules) > 0
24 |
25 | def execute(self, context):
26 | env_path = bpy.context.preferences.addons[__package__].preferences.modules_path
27 | venv_utils.setup_environment(env_path)
28 | BrignetPrefs.add_module_paths()
29 | return {'FINISHED'}
30 |
31 |
32 | class BrignetPrefs(bpy.types.AddonPreferences):
33 | bl_idname = __package__
34 |
35 | _cuda_info = None
36 | _added_paths = []
37 | missing_modules = []
38 |
39 | @staticmethod
40 | def check_cuda():
41 | BrignetPrefs._cuda_info = cuda_utils.CudaDetect()
42 |
43 | @staticmethod
44 | def add_module_paths():
45 | BrignetPrefs.reset_module_paths()
46 | env_path = bpy.context.preferences.addons[__package__].preferences.modules_path
47 |
48 | if not os.path.isdir(env_path):
49 | return False
50 |
51 | if sys.platform.startswith("linux"):
52 | lib_path = os.path.join(env_path, 'lib')
53 | sitepackages = os.path.join(lib_path, 'python3.7', 'site-packages')
54 | else:
55 | lib_path = os.path.join(env_path, 'Lib')
56 | sitepackages = os.path.join(lib_path, 'site-packages')
57 |
58 | if not os.path.isdir(sitepackages):
59 |             # not a python path, but the user might still be typing
60 | return False
61 |
62 | platformpath = os.path.join(sitepackages, sys.platform)
63 | platformlibs = os.path.join(platformpath, 'lib')
64 |
65 | mod_paths = [lib_path, sitepackages, platformpath, platformlibs]
66 | if sys.platform.startswith("win"):
67 | mod_paths.append(os.path.join(env_path, 'DLLs'))
68 | mod_paths.append(os.path.join(sitepackages, 'Pythonwin'))
69 |
70 | for mod_path in mod_paths:
71 | if not os.path.isdir(mod_path):
72 | print(f'{mod_path} not a directory, skipping')
73 | continue
74 | if mod_path not in sys.path:
75 | print(f'adding {mod_path}')
76 | sys.path.append(mod_path)
77 | BrignetPrefs._added_paths.append(mod_path)
78 |
79 | BrignetPrefs.check_modules()
80 | return True
81 |
82 | @staticmethod
83 | def reset_module_paths():
84 | # FIXME: even if we do this, additional modules are still available
85 | for mod_path in BrignetPrefs._added_paths:
86 | print(f"removing module path: {mod_path}")
87 | sys.path.remove(mod_path)
88 | BrignetPrefs._added_paths.clear()
89 |
90 | def update_modules(self, context):
91 | self.add_module_paths()
92 |
93 | modules_path: bpy.props.StringProperty(
94 | name='RigNet environment path',
95 | description='Path to additional modules (torch, torch_geometric...)',
96 | subtype='DIR_PATH',
97 | update=update_modules,
98 |         default=os.path.join(os.path.dirname(__file__), '_additional_modules')
99 | )
100 |
101 | model_path: bpy.props.StringProperty(
102 | name='Model path',
103 | description='Path to RigNet code',
104 | subtype='DIR_PATH',
105 |         default=os.path.join(os.path.dirname(__file__), 'RigNet', 'checkpoints')
106 | )
107 |
108 | modules_found: bpy.props.BoolProperty(
109 | name='Required Modules',
110 | description="Whether required modules have been found or not"
111 | )
112 |
113 | @staticmethod
114 | def check_modules():
115 | BrignetPrefs.missing_modules.clear()
116 | for mod_name in ('torch', 'torch_geometric', 'torch_cluster', 'torch_sparse', 'torch_scatter', 'scipy'):
117 | if not find_spec(mod_name):
118 | BrignetPrefs.missing_modules.append(mod_name)
119 |
120 | preferences = bpy.context.preferences.addons[__package__].preferences
121 | preferences.modules_found = len(BrignetPrefs.missing_modules) == 0
122 |
123 | def draw(self, context):
124 | layout = self.layout
125 | column = layout.column()
126 |
127 | info = BrignetPrefs._cuda_info
128 | if info:
129 | py_ver = sys.version_info
130 | row = column.row()
131 | row.label(text=f"Python Version: {py_ver.major}.{py_ver.minor}.{py_ver.micro}")
132 | if info.result == cuda_utils.CudaResult.SUCCESS:
133 | row = column.row()
134 | row.label(text=f"Cuda Version: {info.major}.{info.minor}.{info.micro}")
135 | elif info.result == cuda_utils.CudaResult.NOT_FOUND:
136 | row = column.row()
137 | row.label(text="CUDA Toolkit not found", icon='ERROR')
138 |
139 | if info.has_cuda_hardware:
140 | row = column.row()
141 | split = row.split(factor=0.1, align=False)
142 | split.column()
143 | col = split.column()
144 | col.label(text="CUDA hardware is present. Please make sure that CUDA Toolkit is installed")
145 |
146 | op = col.operator(
147 | 'wm.url_open',
148 | text='nVidia Downloads',
149 | icon='URL'
150 | )
151 | op.url = 'https://developer.nvidia.com/downloads'
152 |
153 | if self.missing_modules:
154 | row = column.row()
155 |             row.label(text=f"Modules not found: {', '.join(self.missing_modules)}", icon='ERROR')
156 |
157 | box = column.box()
158 | col = box.column()
159 |
160 | row = col.row()
161 | split = row.split(factor=0.8, align=False)
162 | sp_col = split.column()
163 | sp_col.prop(self, 'modules_path', text='Modules Path')
164 |
165 | if self.missing_modules:
166 | sp_col = split.column()
167 | sp_col.operator(BrignetEnvironment.bl_idname, text='Install')
168 |
169 | row = col.row()
170 | split = row.split(factor=0.8, align=False)
171 | sp_col = split.column()
172 | sp_col.prop(self, 'model_path', text='Model Path')
173 | if not os.path.isdir(self.model_path) or 'bonenet' not in os.listdir(self.model_path):
174 | sp_col = split.column()
175 | op = sp_col.operator(
176 | 'wm.url_open',
177 | text='Download'
178 | )
179 | op.url = "https://umass-my.sharepoint.com/:u:/g/personal/zhanxu_umass_edu/EYKLCvYTWFJArehlo3-H2SgBABnY08B4k5Q14K7H1Hh0VA"
180 |
181 | row = col.row()
182 |
183 | if self.model_path:
184 |             row.label(text="Please unpack the content of 'checkpoints' to")
185 | row = col.row()
186 | row.label(text=f" {self.model_path}")
187 | else:
188 |             row.label(text="Please unpack the content of 'checkpoints' to the 'Model Path' folder")
189 |
190 | row = layout.row()
191 | row.label(text="End of bRigNet Preferences")
192 |
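193 | # Hedged usage sketch (illustrative, not part of the original module): set the
194 | # paths from a script instead of the preferences UI. The add-on key 'brignet'
195 | # and the '/path/to/...' values are assumptions, not shipped defaults.
196 | # prefs = bpy.context.preferences.addons['brignet'].preferences
197 | # prefs.modules_path = '/path/to/venv'  # triggers add_module_paths() via its update callback
198 | # prefs.model_path = '/path/to/RigNet/checkpoints'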
--------------------------------------------------------------------------------
/rignetconnect.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | import numpy as np
4 | import itertools as it
5 |
6 | import torch
7 | from torch_geometric.data import Data
8 | from torch_geometric.utils import add_self_loops
9 |
10 | from .RigNet.utils.rig_parser import Info
11 | from .RigNet.utils.tree_utils import TreeNode
12 | from .RigNet.utils.cluster_utils import meanshift_cluster, nms_meanshift
13 | from .RigNet.utils.mst_utils import increase_cost_for_outside_bone, primMST_symmetry, loadSkel_recur, inside_check, flip
14 | from .RigNet.utils.mst_utils import sample_on_bone
15 |
16 | from .RigNet.models.GCN import JOINTNET_MASKNET_MEANSHIFT as JOINTNET
17 | from .RigNet.models.ROOT_GCN import ROOTNET
18 | from .RigNet.models.PairCls_GCN import PairCls as BONENET
19 | from .RigNet.models.SKINNING import SKINNET
20 |
21 | import bpy
22 | from mathutils import Matrix
23 |
24 | from .ob_utils import sampling as mesh_sampling
25 | from .ob_utils.geometry import get_tpl_edges
26 | from .ob_utils.geometry import get_geo_edges
27 | from .ob_utils.geometry import NormalizedMeshData
28 |
29 | from .ob_utils.objects import ArmatureGenerator
30 |
31 |
32 | DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
33 |
34 |
35 | class MeshStorage:
36 | """Store Mesh Data and samples"""
37 | _instance = None # stores singleton instance
38 |
39 | _mesh_data = None
40 | _mesh_sampler = None
41 | _surf_geodesic = None
42 | _voxels = None
43 |
44 | def __init__(self, samples=2000):
45 | self._samples = samples
46 |
47 | def set_mesh_data(self, mesh_obj):
48 | self._mesh_data = NormalizedMeshData(mesh_obj)
49 |
50 | @property
51 | def mesh_data(self):
52 | assert self._mesh_data is not None
53 | return self._mesh_data
54 |
55 | @property
56 | def surface_geodesic(self):
57 | if self._surf_geodesic is None:
58 | assert self._mesh_data is not None
59 | self._surf_geodesic = self.mesh_sampler.calc_geodesic(samples=self._samples)
60 | return self._surf_geodesic
61 |
62 | @property
63 | def voxels(self):
64 | if self._voxels is None:
65 | assert self._mesh_data is not None
66 | self._voxels = self._mesh_data.voxels()
67 |
68 | return self._voxels
69 |
70 | @property
71 | def mesh_sampler(self):
72 | if self._mesh_sampler is None:
73 | assert self._mesh_data is not None
74 | self._mesh_sampler = mesh_sampling.MeshSampler(self._mesh_data.mesh_f, self._mesh_data.mesh_v,
75 | self._mesh_data.mesh_vn, self._mesh_data.tri_areas)
76 | return self._mesh_sampler
77 |
78 |
79 | def getInitId(data, model):
80 |     """
81 |     Predict the root joint ID via rootnet
82 |     :param data: wrapped input data
83 |     :param model: network that predicts the root joint
84 |     :return: index of the predicted root joint
85 |     """
86 | with torch.no_grad():
87 | root_prob, _ = model(data, shuffle=False)
88 | root_prob = torch.sigmoid(root_prob).data.cpu().numpy()
89 | root_id = np.argmax(root_prob)
90 | return root_id
91 |
92 |
93 | def create_single_data(mesh_storage: MeshStorage):
94 |     """
95 |     Create input data for the network. The data is wrapped in the Data structure of the pytorch-geometric library
96 |     """
97 |
98 | mesh_data = mesh_storage.mesh_data
99 |
100 | # vertices
101 | v = np.concatenate((mesh_data.mesh_v, mesh_data.mesh_vn), axis=1)
102 | v = torch.from_numpy(v).float()
103 | # topology edges
104 | print(" gathering topological edges.")
105 | tpl_e = get_tpl_edges(mesh_data.mesh_v, mesh_data.mesh_f).T
106 | tpl_e = torch.from_numpy(tpl_e).long()
107 | tpl_e, _ = add_self_loops(tpl_e, num_nodes=v.size(0))
108 | # surface geodesic distance matrix
109 | print(" calculating surface geodesic matrix.")
110 |
111 | surface_geodesic = mesh_storage.surface_geodesic
112 | # geodesic edges
113 | print(" gathering geodesic edges.")
114 | geo_e = get_geo_edges(surface_geodesic, mesh_data.mesh_v).T
115 | geo_e = torch.from_numpy(geo_e).long()
116 | geo_e, _ = add_self_loops(geo_e, num_nodes=v.size(0))
117 | # batch
118 | batch = torch.zeros(len(v), dtype=torch.long)
119 |
120 | geo_data = Data(x=v[:, 3:6], pos=v[:, 0:3], tpl_edge_index=tpl_e, geo_edge_index=geo_e, batch=batch)
121 | return geo_data
122 |
123 |
124 | def add_joints_data(input_data, vox, joint_pred_net, threshold, bandwidth=None, mesh_filename=None):
125 | """
126 | Predict joints
127 | :param input_data: wrapped input data
128 | :param vox: voxelized mesh
129 | :param joint_pred_net: network for predicting joints
130 | :param threshold: density threshold to filter out shifted points
131 | :param bandwidth: bandwidth for meanshift clustering
132 | :param mesh_filename: mesh filename for visualization
133 | :return: wrapped data with predicted joints, pair-wise bone representation added.
134 | """
135 | data_displacement, _, attn_pred, bandwidth_pred = joint_pred_net(input_data)
136 | y_pred = data_displacement + input_data.pos
137 | y_pred_np = y_pred.data.cpu().numpy()
138 | attn_pred_np = attn_pred.data.cpu().numpy()
139 | y_pred_np, index_inside = inside_check(y_pred_np, vox)
140 | attn_pred_np = attn_pred_np[index_inside, :]
141 | y_pred_np = y_pred_np[attn_pred_np.squeeze() > 1e-3]
142 | attn_pred_np = attn_pred_np[attn_pred_np.squeeze() > 1e-3]
143 |
144 | # symmetrize points by reflecting
145 | y_pred_np_reflect = y_pred_np * np.array([[-1, 1, 1]])
146 | y_pred_np = np.concatenate((y_pred_np, y_pred_np_reflect), axis=0)
147 | attn_pred_np = np.tile(attn_pred_np, (2, 1))
148 |
149 | if not bandwidth:
150 | bandwidth = bandwidth_pred.item()
151 | y_pred_np = meanshift_cluster(y_pred_np, bandwidth, attn_pred_np, max_iter=40)
152 |
153 | Y_dist = np.sum(((y_pred_np[np.newaxis, ...] - y_pred_np[:, np.newaxis, :]) ** 2), axis=2)
154 | density = np.maximum(bandwidth ** 2 - Y_dist, np.zeros(Y_dist.shape))
155 | density = np.sum(density, axis=0)
156 | density_sum = np.sum(density)
157 | y_pred_np = y_pred_np[density / density_sum > threshold]
158 | density = density[density / density_sum > threshold]
159 |
160 | pred_joints = nms_meanshift(y_pred_np, density, bandwidth)
161 | pred_joints, _ = flip(pred_joints)
162 |
163 | # prepare and add new data members
164 | pairs = list(it.combinations(range(pred_joints.shape[0]), 2))
165 | pair_attr = []
166 | for pr in pairs:
167 | dist = np.linalg.norm(pred_joints[pr[0]] - pred_joints[pr[1]])
168 | bone_samples = sample_on_bone(pred_joints[pr[0]], pred_joints[pr[1]])
169 | bone_samples_inside, _ = inside_check(bone_samples, vox)
170 | outside_proportion = len(bone_samples_inside) / (len(bone_samples) + 1e-10)
171 | attr = np.array([dist, outside_proportion, 1])
172 | pair_attr.append(attr)
173 | pairs = np.array(pairs)
174 | pair_attr = np.array(pair_attr)
175 | pairs = torch.from_numpy(pairs).float()
176 | pair_attr = torch.from_numpy(pair_attr).float()
177 | pred_joints = torch.from_numpy(pred_joints).float()
178 | joints_batch = torch.zeros(len(pred_joints), dtype=torch.long)
179 | pairs_batch = torch.zeros(len(pairs), dtype=torch.long)
180 |
181 | input_data.joints = pred_joints
182 | input_data.pairs = pairs
183 | input_data.pair_attr = pair_attr
184 | input_data.joints_batch = joints_batch
185 | input_data.pairs_batch = pairs_batch
186 | return input_data
187 |
188 |
189 | def predict_skeleton(input_data, vox, root_pred_net, bone_pred_net):
190 | """
191 | Predict skeleton structure based on joints
192 | :param input_data: wrapped data
193 | :param vox: voxelized mesh
194 | :param root_pred_net: network to predict root
195 | :param bone_pred_net: network to predict pairwise connectivity cost
196 | :return: predicted skeleton structure
197 | """
198 | root_id = getInitId(input_data, root_pred_net)
199 | pred_joints = input_data.joints.data.cpu().numpy()
200 |
201 | with torch.no_grad():
202 | connect_prob, _ = bone_pred_net(input_data, permute_joints=False)
203 | connect_prob = torch.sigmoid(connect_prob)
204 | pair_idx = input_data.pairs.long().data.cpu().numpy()
205 | prob_matrix = np.zeros((len(input_data.joints), len(input_data.joints)))
206 | prob_matrix[pair_idx[:, 0], pair_idx[:, 1]] = connect_prob.data.cpu().numpy().squeeze()
207 | prob_matrix = prob_matrix + prob_matrix.transpose()
208 | cost_matrix = -np.log(prob_matrix + 1e-10)
209 | cost_matrix = increase_cost_for_outside_bone(cost_matrix, pred_joints, vox)
210 |
211 | pred_skel = Info()
212 | parent, key, _ = primMST_symmetry(cost_matrix, root_id, pred_joints)
213 | for i in range(len(parent)):
214 | if parent[i] == -1:
215 | pred_skel.root = TreeNode('root', tuple(pred_joints[i]))
216 | break
217 | loadSkel_recur(pred_skel.root, i, None, pred_joints, parent)
218 | pred_skel.joint_pos = pred_skel.get_joint_dict()
219 |
220 | return pred_skel
221 |
222 |
223 | def pts2line(pts, lines):
224 |     '''
225 |     Calculate points-to-bone distance. Point-to-line-segment distance refers to
226 |     https://stackoverflow.com/questions/849211/shortest-distance-between-a-point-and-a-line-segment
227 |     :param pts: N*3
228 |     :param lines: N*6, where [N,0:3] is the starting position and [N, 3:6] is the ending position
229 |     :return: origins are the nearest projected positions of the points on the lines.
230 |              ends are the points themselves.
231 |              dist is the distance in between, which is the distance from points to lines.
232 |              Origins and ends will be used to generate rays.
233 |     '''
234 | l2 = np.sum((lines[:, 3:6] - lines[:, 0:3]) ** 2, axis=1)
235 | origins = np.zeros((len(pts) * len(lines), 3))
236 | ends = np.zeros((len(pts) * len(lines), 3))
237 | dist = np.zeros((len(pts) * len(lines)))
238 | for l in range(len(lines)):
239 | if np.abs(l2[l]) < 1e-8: # for zero-length edges
240 | origins[l * len(pts):(l + 1) * len(pts)] = lines[l][0:3]
241 | else: # for other edges
242 | t = np.sum((pts - lines[l][0:3][np.newaxis, :]) * (lines[l][3:6] - lines[l][0:3])[np.newaxis, :], axis=1) / \
243 | l2[l]
244 | t = np.clip(t, 0, 1)
245 | t_pos = lines[l][0:3][np.newaxis, :] + t[:, np.newaxis] * (lines[l][3:6] - lines[l][0:3])[np.newaxis, :]
246 | origins[l * len(pts):(l + 1) * len(pts)] = t_pos
247 | ends[l * len(pts):(l + 1) * len(pts)] = pts
248 | dist[l * len(pts):(l + 1) * len(pts)] = np.linalg.norm(
249 | origins[l * len(pts):(l + 1) * len(pts)] - ends[l * len(pts):(l + 1) * len(pts)], axis=1)
250 | return origins, ends, dist
251 |
252 |
253 | def calc_pts2bone_visible_mat(bvhtree, origins, ends):
254 |     '''
255 |     Check whether the surface point is visible to the internal bone.
256 |     Visible is defined as no occlusion on the path between.
257 |     :param bvhtree: BVH tree of the mesh, used to cast the rays
258 |     :param origins: origins of the rays
259 |     :param ends: ends of the rays; together with origins they decide the direction of each ray
260 |     :return: binary visibility matrix (n*m), where 1 indicates that the n-th surface point
261 |              is visible to the m-th ray
262 |     '''
263 | ray_dirs = ends - origins
264 |
265 | min_hit_distance = []
266 | for ray_dir, origin in zip(ray_dirs, origins):
267 | # FIXME: perhaps we should sample more distances
268 | location, normal, index, distance = bvhtree.ray_cast(origin, ray_dir + 1e-15)
269 | if location:
270 | min_hit_distance.append(np.linalg.norm(np.array(location) - origin))
271 | else:
272 | min_hit_distance.append(np.linalg.norm(ray_dir))
273 |
274 | min_hit_distance = np.array(min_hit_distance)
275 | distance = np.linalg.norm(ray_dirs, axis=1)
276 | vis_mat = (np.abs(min_hit_distance - distance) < 1e-4)
277 | return vis_mat
278 |
279 |
280 | def calc_geodesic_matrix(bones, mesh_v, surface_geodesic, bvh_tree, use_sampling=False):
281 | """
282 | calculate volumetric geodesic distance from vertices to each bones
283 | :param bones: B*6 numpy array where each row stores the starting and ending joint position of a bone
284 | :param mesh_v: V*3 mesh vertices
285 | :param surface_geodesic: geodesic distance matrix of all vertices
286 |     :return: an approximate volumetric geodesic distance matrix V*B, where (v,b) is the distance from vertex v to bone b
287 | """
288 |
289 | if use_sampling:
290 | # TODO: perhaps not required with blender's bvh tree
291 | # will have to decimate the mesh otherwise
292 | # also, this should rather be done outside the function
293 | subsamples = mesh_v
294 | else:
295 | subsamples = mesh_v
296 |
297 | origins, ends, pts_bone_dist = pts2line(subsamples, bones)
298 | pts_bone_visibility = calc_pts2bone_visible_mat(bvh_tree, origins, ends)
299 | pts_bone_visibility = pts_bone_visibility.reshape(len(bones), len(subsamples)).transpose()
300 | pts_bone_dist = pts_bone_dist.reshape(len(bones), len(subsamples)).transpose()
301 | # remove visible points which are too far
302 | for b in range(pts_bone_visibility.shape[1]):
303 | visible_pts = np.argwhere(pts_bone_visibility[:, b] == 1).squeeze(1)
304 | if len(visible_pts) == 0:
305 | continue
306 | threshold_b = np.percentile(pts_bone_dist[visible_pts, b], 15)
307 | pts_bone_visibility[pts_bone_dist[:, b] > 1.3 * threshold_b, b] = False
308 |
309 | visible_matrix = np.zeros(pts_bone_visibility.shape)
310 | visible_matrix[np.where(pts_bone_visibility == 1)] = pts_bone_dist[np.where(pts_bone_visibility == 1)]
311 | for c in range(visible_matrix.shape[1]):
312 | unvisible_pts = np.argwhere(pts_bone_visibility[:, c] == 0).squeeze(1)
313 | visible_pts = np.argwhere(pts_bone_visibility[:, c] == 1).squeeze(1)
314 | if len(visible_pts) == 0:
315 | visible_matrix[:, c] = pts_bone_dist[:, c]
316 | continue
317 | for r in unvisible_pts:
318 | dist1 = np.min(surface_geodesic[r, visible_pts])
319 | nn_visible = visible_pts[np.argmin(surface_geodesic[r, visible_pts])]
320 | if np.isinf(dist1):
321 | visible_matrix[r, c] = 8.0 + pts_bone_dist[r, c]
322 | else:
323 | visible_matrix[r, c] = dist1 + visible_matrix[nn_visible, c]
324 | if use_sampling:
325 | nn_dist = np.sum((mesh_v[:, np.newaxis, :] - subsamples[np.newaxis, ...]) ** 2, axis=2)
326 | nn_ind = np.argmin(nn_dist, axis=1)
327 | visible_matrix = visible_matrix[nn_ind, :]
328 | return visible_matrix
329 |
330 |
331 | def add_duplicate_joints(skel):
332 | this_level = [skel.root]
333 | while this_level:
334 | next_level = []
335 | for p_node in this_level:
336 | if len(p_node.children) > 1:
337 | new_children = []
338 | for dup_id in range(len(p_node.children)):
339 | p_node_new = TreeNode(p_node.name + '_dup_{:d}'.format(dup_id), p_node.pos)
340 | p_node_new.overlap=True
341 | p_node_new.parent = p_node
342 | p_node_new.children = [p_node.children[dup_id]]
343 | # for user interaction, we move overlapping joints a bit to its children
344 | p_node_new.pos = np.array(p_node_new.pos) + 0.03 * np.linalg.norm(np.array(p_node.children[dup_id].pos) - np.array(p_node_new.pos))
345 | p_node_new.pos = (p_node_new.pos[0], p_node_new.pos[1], p_node_new.pos[2])
346 | p_node.children[dup_id].parent = p_node_new
347 | new_children.append(p_node_new)
348 | p_node.children = new_children
349 | p_node.overlap = False
350 | next_level += p_node.children
351 | this_level = next_level
352 | return skel
353 |
354 |
355 | def mapping_bone_index(bones_old, bones_new):
356 | bone_map = {}
357 | for i in range(len(bones_old)):
358 | bone_old = bones_old[i][np.newaxis, :]
359 | dist = np.linalg.norm(bones_new - bone_old, axis=1)
360 | ni = np.argmin(dist)
361 | bone_map[i] = ni
362 | return bone_map
363 |
364 |
365 | def get_bones(skel):
366 | """
367 |     Extract bones from the skeleton structure
368 |     :param skel: input skeleton
369 |     :return: bones is a B*6 array where each row consists of the starting and ending points of a bone
370 |              bone_name is a list of B elements, where each element consists of the starting and ending joint names
371 |              leaf_bones indicates whether a bone is a virtual "leaf" bone.
372 |              We add virtual "leaf" bones to the leaf joints since they always have skinning weights as well
373 | """
374 | bones = []
375 | bone_name = []
376 | leaf_bones = []
377 | this_level = [skel.root]
378 | while this_level:
379 | next_level = []
380 | for p_node in this_level:
381 | p_pos = np.array(p_node.pos)
382 | next_level += p_node.children
383 | for c_node in p_node.children:
384 | c_pos = np.array(c_node.pos)
385 | bones.append(np.concatenate((p_pos, c_pos))[np.newaxis, :])
386 | bone_name.append([p_node.name, c_node.name])
387 | leaf_bones.append(False)
388 | if len(c_node.children) == 0:
389 | bones.append(np.concatenate((c_pos, c_pos))[np.newaxis, :])
390 | bone_name.append([c_node.name, c_node.name+'_leaf'])
391 | leaf_bones.append(True)
392 | this_level = next_level
393 | bones = np.concatenate(bones, axis=0)
394 | return bones, bone_name, leaf_bones
395 |
396 |
397 |
398 | def assemble_skel_skin(skel, attachment):
399 | bones_old, bone_names_old, _ = get_bones(skel)
400 | skel_new = add_duplicate_joints(skel)
401 | bones_new, bone_names_new, _ = get_bones(skel_new)
402 | bone_map = mapping_bone_index(bones_old, bones_new)
403 | skel_new.joint_pos = skel_new.get_joint_dict()
404 | skel_new.joint_skin = []
405 |
406 | for v in range(len(attachment)):
407 | vi_skin = [str(v)]
408 | skw = attachment[v]
409 | skw = skw / (np.sum(skw) + 1e-10)
410 | for i in range(len(skw)):
411 | if i == len(bones_old):
412 | break
413 | if skw[i] > 1e-5:
414 | bind_joint_name = bone_names_new[bone_map[i]][0]
415 | bind_weight = skw[i]
416 | vi_skin.append(bind_joint_name)
417 | vi_skin.append(str(bind_weight))
418 | skel_new.joint_skin.append(vi_skin)
419 | return skel_new
420 |
421 |
422 | def post_filter(skin_weights, topology_edge, num_ring=1):
423 | skin_weights_new = np.zeros_like(skin_weights)
424 | for v in range(len(skin_weights)):
425 | adj_verts_multi_ring = []
426 | current_seeds = [v]
427 | for r in range(num_ring):
428 | adj_verts = []
429 | for seed in current_seeds:
430 | adj_edges = topology_edge[:, np.argwhere(topology_edge == seed)[:, 1]]
431 | adj_verts_seed = list(set(adj_edges.flatten().tolist()))
432 | adj_verts_seed.remove(seed)
433 | adj_verts += adj_verts_seed
434 | adj_verts_multi_ring += adj_verts
435 | current_seeds = adj_verts
436 | adj_verts_multi_ring = list(set(adj_verts_multi_ring))
437 | if v in adj_verts_multi_ring:
438 | adj_verts_multi_ring.remove(v)
439 | skin_weights_neighbor = [skin_weights[int(i), :][np.newaxis, :] for i in adj_verts_multi_ring]
440 | skin_weights_neighbor = np.concatenate(skin_weights_neighbor, axis=0)
441 | #max_bone_id = np.argmax(skin_weights[v, :])
442 | #if np.sum(skin_weights_neighbor[:, max_bone_id]) < 0.17 * len(skin_weights_neighbor):
443 | # skin_weights_new[v, :] = np.mean(skin_weights_neighbor, axis=0)
444 | #else:
445 | # skin_weights_new[v, :] = skin_weights[v, :]
446 | skin_weights_new[v, :] = np.mean(skin_weights_neighbor, axis=0)
447 |
448 | #skin_weights_new[skin_weights_new.sum(axis=1) == 0, :] = skin_weights[skin_weights_new.sum(axis=1) == 0, :]
449 | return skin_weights_new
450 |
451 |
452 | def predict_skinning(input_data, pred_skel, skin_pred_net, surface_geodesic, bvh_tree):
453 | """
454 | predict skinning
455 | :param input_data: wrapped input data
456 | :param pred_skel: predicted skeleton
457 | :param skin_pred_net: network to predict skinning weights
458 | :param surface_geodesic: geodesic distance matrix of all vertices
459 |     :param bvh_tree: BVH tree of the mesh, used for visibility ray casting
460 | :return: predicted rig with skinning weights information
461 | """
462 |     global DEVICE
463 | num_nearest_bone = 5
464 | bones, bone_names, bone_isleaf = get_bones(pred_skel)
465 | mesh_v = input_data.pos.data.cpu().numpy()
466 | print(" calculating volumetric geodesic distance from vertices to bone. This step takes some time...")
467 |
468 | geo_dist = calc_geodesic_matrix(bones, mesh_v, surface_geodesic, bvh_tree)
469 | input_samples = [] # joint_pos (x, y, z), (bone_id, 1/D)*5
470 | loss_mask = []
471 | skin_nn = []
472 | for v_id in range(len(mesh_v)):
473 | geo_dist_v = geo_dist[v_id]
474 | bone_id_near_to_far = np.argsort(geo_dist_v)
475 | this_sample = []
476 | this_nn = []
477 | this_mask = []
478 | for i in range(num_nearest_bone):
479 | if i >= len(bones):
480 | this_sample += bones[bone_id_near_to_far[0]].tolist()
481 | this_sample.append(1.0 / (geo_dist_v[bone_id_near_to_far[0]] + 1e-10))
482 | this_sample.append(bone_isleaf[bone_id_near_to_far[0]])
483 | this_nn.append(0)
484 | this_mask.append(0)
485 | else:
486 | skel_bone_id = bone_id_near_to_far[i]
487 | this_sample += bones[skel_bone_id].tolist()
488 | this_sample.append(1.0 / (geo_dist_v[skel_bone_id] + 1e-10))
489 | this_sample.append(bone_isleaf[skel_bone_id])
490 | this_nn.append(skel_bone_id)
491 | this_mask.append(1)
492 | input_samples.append(np.array(this_sample)[np.newaxis, :])
493 | skin_nn.append(np.array(this_nn)[np.newaxis, :])
494 | loss_mask.append(np.array(this_mask)[np.newaxis, :])
495 |
496 | skin_input = np.concatenate(input_samples, axis=0)
497 | loss_mask = np.concatenate(loss_mask, axis=0)
498 | skin_nn = np.concatenate(skin_nn, axis=0)
499 | skin_input = torch.from_numpy(skin_input).float()
500 | input_data.skin_input = skin_input
501 | input_data.to(DEVICE)
502 |
503 | skin_pred = skin_pred_net(input_data)
504 | skin_pred = torch.softmax(skin_pred, dim=1)
505 | skin_pred = skin_pred.data.cpu().numpy()
506 | skin_pred = skin_pred * loss_mask
507 |
508 | skin_nn = skin_nn[:, 0:num_nearest_bone]
509 | skin_pred_full = np.zeros((len(skin_pred), len(bone_names)))
510 | for v in range(len(skin_pred)):
511 | for nn_id in range(len(skin_nn[v, :])):
512 | skin_pred_full[v, skin_nn[v, nn_id]] = skin_pred[v, nn_id]
513 | print(" filtering skinning prediction")
514 | tpl_e = input_data.tpl_edge_index.data.cpu().numpy()
515 | skin_pred_full = post_filter(skin_pred_full, tpl_e, num_ring=1)
516 | skin_pred_full[skin_pred_full < np.max(skin_pred_full, axis=1, keepdims=True) * 0.35] = 0.0
517 | skin_pred_full = skin_pred_full / (skin_pred_full.sum(axis=1, keepdims=True) + 1e-10)
518 | skel_res = assemble_skel_skin(pred_skel, skin_pred_full)
519 | return skel_res
520 |
521 |
522 | class Networks:
523 | def __init__(self, model_dir="", load_networks=True, load_skinning=True):
524 | self.joint_net = None
525 | self.root_net = None
526 | self.bone_net = None
527 | self.skin_net = None
528 |
529 | self._load_skinning = load_skinning
530 | self.model_dir = model_dir if model_dir else bpy.context.preferences.addons[__package__].preferences.model_path
531 |
532 | if load_networks:
533 | self.load_networks()
534 |
535 | def load_networks(self):
536 | print("loading all networks...")
537 | joint_net = JOINTNET()
538 | joint_net.to(DEVICE)
539 | joint_net.eval()
540 | joint_net_checkpoint = torch.load(os.path.join(self.model_dir, 'gcn_meanshift/model_best.pth.tar'))
541 | joint_net.load_state_dict(joint_net_checkpoint['state_dict'])
542 | self.joint_net = joint_net
543 | print(" joint prediction network loaded.")
544 |
545 | root_net = ROOTNET()
546 | root_net.to(DEVICE)
547 | root_net.eval()
548 | root_net_checkpoint = torch.load(os.path.join(self.model_dir, 'rootnet/model_best.pth.tar'))
549 | root_net.load_state_dict(root_net_checkpoint['state_dict'])
550 | self.root_net = root_net
551 | print(" root prediction network loaded.")
552 |
553 | bone_net = BONENET()
554 | bone_net.to(DEVICE)
555 | bone_net.eval()
556 |         bone_net_checkpoint = torch.load(os.path.join(self.model_dir, 'bonenet/model_best.pth.tar'), map_location=DEVICE)
557 | bone_net.load_state_dict(bone_net_checkpoint['state_dict'])
558 | self.bone_net = bone_net
559 | print(" connection prediction network loaded.")
560 |
561 | if self._load_skinning:
562 | skin_net = SKINNET(nearest_bone=5, use_Dg=True, use_Lf=True)
563 |             skin_net_checkpoint = torch.load(os.path.join(self.model_dir, 'skinnet/model_best.pth.tar'), map_location=DEVICE)
564 | skin_net.load_state_dict(skin_net_checkpoint['state_dict'])
565 | skin_net.to(DEVICE)
566 | skin_net.eval()
567 | self.skin_net = skin_net
568 | print(" skinning prediction network loaded.")
569 |
570 |
571 | def init_data(mesh_obj, samples=2000):
572 | mesh_storage = MeshStorage(samples)
573 | mesh_storage.set_mesh_data(mesh_obj)
574 |
575 | predict_data = create_single_data(mesh_storage)
576 | predict_data.to(DEVICE)
577 |
578 | return predict_data, mesh_storage
579 |
580 |
581 | def predict_joint(predict_data, joint_network, mesh_storage: MeshStorage, bandwidth, threshold):
582 | print("predicting joints")
583 | predict_data = add_joints_data(predict_data, mesh_storage.voxels, joint_network, threshold, bandwidth=bandwidth)
584 | predict_data.to(DEVICE)
585 | return predict_data
586 |
587 |
588 | def predict_hierarchy(predict_data, networks: Networks, mesh_storage: MeshStorage):
589 | print("predicting connectivity")
590 | predicted_skeleton = predict_skeleton(predict_data, mesh_storage.voxels, networks.root_net, networks.bone_net)
591 | return predicted_skeleton
592 |
593 |
594 | def predict_weights(predict_data, predicted_skeleton, skin_network, mesh_storage: MeshStorage):
595 | print("predicting skinning")
596 | mesh_data = mesh_storage.mesh_data
597 | bvh_tree = mesh_data.bvh_tree
598 | predicted_rig = predict_skinning(predict_data, predicted_skeleton, skin_network, mesh_storage.surface_geodesic, bvh_tree)
599 |
600 | # here we reverse the normalization to the original scale and position
601 | predicted_rig.normalize(mesh_data.scale_normalize, -mesh_data.translation_normalize)
602 | return predicted_rig
603 |
604 |
605 | def create_armature(mesh_obj, predicted_rig):
606 | mesh_obj.vertex_groups.clear()
607 |
608 | for obj in bpy.data.objects:
609 | obj.select_set(False)
610 |
611 | mat = Matrix(((1.0, 0.0, 0.0, 0.0),
612 | (0.0, 0.0, -1.0, 0.0),
613 | (0.0, 1.0, 0.0, 0.0),
614 | (0.0, 0.0, 0.0, 1.0)))
615 | new_arm = ArmatureGenerator(predicted_rig, mesh_obj).generate(matrix=mat)
616 | torch.cuda.empty_cache()
617 |
618 | return new_arm
619 |
620 |
621 | def clear():
622 | torch.cuda.empty_cache()
623 |
--------------------------------------------------------------------------------
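Taken together, the functions above form the prediction pipeline: sample the mesh, predict joints, connect them into a hierarchy, predict skinning weights, then build the armature. A minimal sketch of how a caller might chain them (the checkpoint path is a placeholder, and the bandwidth/threshold values are illustrative; in the add-on they come from the operator's settings):

    networks = Networks(model_dir="/path/to/checkpoints")  # placeholder path
    predict_data, mesh_storage = init_data(bpy.context.object, samples=2000)

    # mean-shift clustering of the sampled points yields joint candidates
    predict_data = predict_joint(predict_data, networks.joint_net, mesh_storage,
                                 bandwidth=0.045, threshold=1e-5)

    # the root and bone networks connect the joints into a tree
    skeleton = predict_hierarchy(predict_data, networks, mesh_storage)

    # the skinning network assigns per-vertex weights; the rig is then denormalized
    rig = predict_weights(predict_data, skeleton, networks.skin_net, mesh_storage)

    armature = create_armature(bpy.context.object, rig)
    clear()  # release cached CUDA memory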
/setup_utils/cuda_utils.py:
--------------------------------------------------------------------------------
1 | import bpy
2 |
3 | import os
4 | import sys
5 |
6 | from enum import Enum
7 | import subprocess
8 |
9 |
10 | class CudaResult(Enum):
11 | SUCCESS = 1
12 | NOT_FOUND = 2
13 |
14 |
15 | class CudaDetect:
16 |     """Detects the CUDA version installed on the system"""
17 | def __init__(self):
18 | self.result = None
19 | self.major = 0
20 | self.minor = 0
21 | self.micro = 0
22 | self.has_cuda_hardware = False
23 |
24 | self.has_cuda_device()
25 | self.detect_cuda_ver()
26 |
27 | @staticmethod
28 | def get_cuda_path():
29 | try:
30 | return os.environ['CPATH']
31 | except KeyError:
32 | pass
33 |
34 |         where_cmd = "where" if sys.platform.startswith('win') else "whereis"
35 |         result = subprocess.check_output([where_cmd, 'nvcc']).decode('UTF-8')
36 | 
37 |         if sys.platform.startswith('win'):
38 |             nvcc_path = result.splitlines()[0].strip()  # 'where' prints one path per line
39 |         else:
40 |             nvcc_path = result.split()[1]  # 'whereis' prints "nvcc: /path/one /path/two"
41 |
42 | nvcc_dir, _ = os.path.split(nvcc_path)
43 | cuda_dir = os.path.dirname(nvcc_dir)
44 |
45 | return cuda_dir
46 |
47 | def has_cuda_device(self):
48 | """Checks for cuda hardware in cycles preferences"""
49 | prefs = bpy.context.preferences
50 | cprefs = prefs.addons['cycles'].preferences
51 |
52 | if bpy.app.version[0] > 2:
53 |             # Blender 3.x exposes the device list directly; 2.9x wraps it in get_devices()
54 | cprefs.refresh_devices()
55 |
56 | def get_dev():
57 | for dev in cprefs.devices:
58 | yield dev
59 | else:
60 | def get_dev():
61 | for dev in cprefs.get_devices(bpy.context):
62 | for dev_entry in dev:
63 | yield dev_entry
64 |
65 | for device in get_dev():
66 | if device.type == 'CUDA':
67 | self.has_cuda_hardware = True
68 | return
69 |
70 | def detect_cuda_ver(self):
71 | """Try execute the cuda compiler with the --version flag"""
72 | try:
73 | nvcc_out = subprocess.check_output(["nvcc", "--version"])
74 | except FileNotFoundError:
75 | self.result = CudaResult.NOT_FOUND
76 | return
77 |
78 |         # the last token looks like "V11.3.109"; keep only digits and dots
79 |         nvcc_out = nvcc_out.decode('UTF-8')
80 |         ver = nvcc_out.rsplit(" V", 1)[-1]
81 |         ver_ends = next((i for i, c in enumerate(ver) if not (c.isdigit() or c == '.')), len(ver))
82 |         ver = ver[:ver_ends]
83 |
84 | self.major, self.minor, self.micro = ver.split(".", 2)
85 | self.result = CudaResult.SUCCESS
86 |
--------------------------------------------------------------------------------
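`CudaDetect` runs both checks from its constructor, so callers only need to inspect the resulting attributes. A minimal sketch of how the detection might be queried (the branching here is illustrative, not the add-on's actual flow):

    from setup_utils.cuda_utils import CudaDetect, CudaResult

    detect = CudaDetect()
    if detect.result is CudaResult.SUCCESS:
        # e.g. 11.3.109 -> "113", the tag used by the torch/pyg wheel indices
        cuda_tag = f"{detect.major}{detect.minor}"
        print(f"nvcc found: CUDA {detect.major}.{detect.minor}.{detect.micro}")
    elif detect.has_cuda_hardware:
        print("CUDA hardware detected, but the CUDA toolkit (nvcc) is not on PATH")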
/setup_utils/venv_utils.py:
--------------------------------------------------------------------------------
1 | import os
2 | from pathlib import Path
3 | import requests
4 | import sys
5 | import subprocess
6 | import shutil
7 | import tarfile
8 | import tempfile
9 | import venv
10 |
11 | from .cuda_utils import CudaDetect
12 |
13 |
14 | class VenvAutoSetup:
15 | def __init__(self, environment_path):
16 | self.env_path = environment_path
17 | self._on_win = sys.platform.startswith("win")
18 | self.py_exe = ""
19 |
20 | def create_venv(self, with_pip=True):
21 | if os.path.isdir(self.env_path):
22 | if len(os.listdir(self.env_path)) > 0:
23 |                 msg = "Can't create Virtual Env in existing, non-empty directory"
24 | # TODO: Custom Exception
25 | raise Exception(msg)
26 |
27 | venv.create(self.env_path, with_pip=with_pip)
28 |
29 | self.py_exe = self._get_py_exe()
30 |
31 |     def _get_py_exe(self):
32 |         bin_dir = "Scripts" if self._on_win else "bin"  # venv layout differs per platform
33 |         v_py = os.path.normpath(os.path.join(self.env_path, bin_dir, "python"))
34 | 
35 |         if self._on_win:
36 |             v_py += ".exe"
37 | 
38 |         return v_py
39 |
40 |     def venv_activate_line(self):
41 |         bin_dir = "Scripts" if self._on_win else "bin"
42 |         v_activate = os.path.normpath(os.path.join(self.env_path, bin_dir, "activate"))
43 | 
44 |         if self._on_win:
45 |             v_activate = 'call "' + v_activate
46 |             v_activate += '.bat"'
47 |         else:
48 |             v_activate = "source '{0}'".format(v_activate)
49 | 
50 |         return v_activate
51 |
52 | def pip_install_lines(self):
53 | lines = [
54 | f'"{self.py_exe}" -m ensurepip',
55 |             f'"{self.py_exe}" -m pip install wheel'
56 | ]
57 |
58 | return lines
59 |
60 | def pip_install_script(self):
61 | ba_file = tempfile.NamedTemporaryFile(mode='w+b',
62 | prefix="vpip_install_",
63 | suffix='.bat' if self._on_win else None,
64 | delete=False)
65 |
66 | with ba_file as f:
67 | if self._on_win:
68 | f.write(b"@echo off\n")
69 | else:
70 | f.write(b"#!/bin/bash\n")
71 | f.write(bytes(self.venv_activate_line(), 'utf-8'))
72 | f.write(b"\n")
73 |
74 | for line in self.pip_install_lines():
75 | f.write(bytes(line, 'utf-8'))
76 | f.write(b"\n")
77 |
78 | return ba_file.name
79 |
80 | def torch_install_script(self, torch_version="1.11.0", cuda_version="113", torch_url=""):
81 | if not torch_url:
82 | torch_url = "https://download.pytorch.org/whl"
83 |
84 | ba_file = tempfile.NamedTemporaryFile(mode='w+b',
85 | prefix="torch_install_",
86 | suffix='.bat' if self._on_win else None,
87 | delete=False)
88 |
89 | torch_line = f'"{self.py_exe}" -m pip install torch=={torch_version}+cu{cuda_version} --extra-index-url {torch_url}/cu{cuda_version}'
90 |
91 | with ba_file as f:
92 | if self._on_win:
93 | f.write(b"@echo off\n")
94 | else:
95 | f.write(b"#!/bin/bash\n")
96 | f.write(bytes(self.venv_activate_line(), 'utf-8'))
97 | f.write(b"\n")
98 |
99 | f.write(bytes(torch_line, 'utf-8'))
100 | f.write(b"\n")
101 |
102 | return ba_file.name
103 |
104 | def pkg_install_script(self, package_name, env_vars=dict(), additional_parameter=""):
105 | ba_file = tempfile.NamedTemporaryFile(mode='w+b',
106 | prefix=f"{package_name}_install_",
107 | suffix='.bat' if self._on_win else None,
108 | delete=False)
109 |
110 | pkg_line = f'"{self.py_exe}" -m pip install {package_name}'
111 | if additional_parameter:
112 | pkg_line += f" {additional_parameter}"
113 |
114 | with ba_file as f:
115 | if self._on_win:
116 | f.write(b"@echo off\n")
117 | for k, v in env_vars.items():
118 | f.write(bytes(f'set "{k}={v}"\n', 'utf-8'))
119 | else:
120 | f.write(b"#!/bin/bash\n")
121 | for k, v in env_vars.items():
122 | f.write(bytes(f'{k}="{v}"\n', 'utf-8'))
123 |
124 | f.write(bytes(self.venv_activate_line(), 'utf-8'))
125 | f.write(b"\n")
126 |
127 | f.write(bytes(pkg_line, 'utf-8'))
128 | f.write(b"\n")
129 |
130 | return ba_file.name
131 |
132 | def pkg_download_script(self, download_dir, packages=('torch-sparse', 'torch-cluster')):
133 | ba_file = tempfile.NamedTemporaryFile(mode='w+b',
134 | prefix="sparse_install_",
135 | suffix='.bat' if self._on_win else None,
136 | delete=False)
137 |
138 |         pkg_lines = [f'python -m pip download --no-deps {pkg_name} -d "{download_dir}"' for pkg_name in packages]
139 |
140 | with ba_file as f:
141 | if self._on_win:
142 | f.write(b"@echo off\n")
143 | else:
144 | f.write(b"#!/bin/bash\n")
145 | f.write(bytes(self.venv_activate_line(), 'utf-8'))
146 | f.write(b"\n")
147 |
148 | for line in pkg_lines:
149 | f.write(bytes(line, 'utf-8'))
150 | f.write(b"\n")
151 |
152 | return ba_file.name
153 |
154 |
155 | def fix_source_absolute_paths(package_name, download_dir):
156 | pkg_archive = ""
157 | for fn in os.listdir(download_dir):
158 | if fn.startswith(f'{package_name}-') and fn.endswith('.tar.gz'):
159 | pkg_archive = fn
160 | break
161 |
162 | if not pkg_archive:
163 | raise FileNotFoundError(f'{package_name} archive not found')
164 |
165 | pkg_namever = pkg_archive
166 | pkg_namever = os.path.splitext(pkg_namever)[0]
167 | pkg_namever = os.path.splitext(pkg_namever)[0]
168 |
169 | pkg_dir = os.path.join(download_dir, pkg_namever)
170 | ar_file = tarfile.open(os.path.join(download_dir, pkg_archive))
171 | ar_file.extractall(pkg_dir)
172 | ar_file.close()
173 |
174 | src_info = os.path.join(pkg_dir, pkg_namever, f'{package_name}.egg-info', 'SOURCES.txt')
175 | src_old = os.path.join(pkg_dir, pkg_namever, f'{package_name}.egg-info', 'SOURCES_orig.txt')
176 |
177 | src_info = os.path.normpath(src_info)
178 | src_old = os.path.normpath(src_old)
179 |
180 | shutil.move(src_info, src_old)
181 |
182 | with open(src_old) as old, open(src_info, 'w') as new:
183 | lines = old.readlines()
184 | new.writelines([line for line in lines if not line.startswith('/')])
185 |
186 | dist_dir = os.path.join(download_dir, 'fix')
187 | Path(dist_dir).mkdir(0o755, exist_ok=True)
188 | fix_tar = os.path.join(dist_dir, f'{pkg_namever}.tar')
189 | with tarfile.open(fix_tar, "w") as tar:
190 | tar.add(os.path.join(pkg_dir, pkg_namever), arcname=pkg_namever)
191 |
192 | shutil.rmtree(pkg_dir)
193 | return os.path.normpath(fix_tar)
194 |
195 |
196 | def download_python_headers(download_dir):
197 | v_info = sys.version_info
198 | v_str = f"{v_info.major}.{v_info.minor}.{v_info.micro}"
199 | py_name = f"Python-{v_str}"
200 | f_name = f"{py_name}.tgz"
201 |
202 | py_dir = os.path.join(download_dir, "_python")
203 | Path(py_dir).mkdir(0o755, exist_ok=True)
204 | f_path = os.path.join(py_dir, f_name)
205 |
206 |     if os.path.isfile(f_path):
207 |         print(f"Using cached {f_path}")
208 |     else:
209 |         src_url = f"https://www.python.org/ftp/python/{v_str}/{f_name}"
210 |         r = requests.get(src_url, allow_redirects=True)
211 |         with open(f_path, 'wb') as f:
212 |             f.write(r.content)
213 |
214 | include_dir = f'{py_name}/Include'
215 | ar_file = tarfile.open(f_path)
216 | for file_name in ar_file.getnames():
217 | if file_name.startswith(include_dir):
218 |             ar_file.extract(file_name, py_dir)  # py_dir already contains download_dir
219 |
220 | ar_file.close()
221 | headers_dir = os.path.join(py_dir, include_dir)
222 | return os.path.normpath(headers_dir)
223 |
224 |
225 | def install_headers(env_path, download_dir):
226 | extracted_headers_dir = download_python_headers(download_dir)
227 | py_include_dir = os.path.join(env_path, 'Include')
228 |
229 | for item in os.listdir(extracted_headers_dir):
230 | src_path = os.path.join(extracted_headers_dir, item)
231 | dst_path = os.path.join(py_include_dir, item)
232 |
233 | if os.path.isdir(src_path):
234 | shutil.copytree(src_path, dst_path, dirs_exist_ok=True)
235 | continue
236 |
237 | shutil.copy(src_path, dst_path)
238 |
239 |
240 | def setup_environment(environment_path, with_pip=True, torch_version="1.11.0", cuda_version='113'):
241 | ve_setup = VenvAutoSetup(environment_path)
242 | ve_setup.create_venv(with_pip=with_pip)
243 |
244 | if with_pip:
245 | # TODO: install wheels
246 | pass
247 | else:
248 | # install pip via script
249 | print("installing pip")
250 | pip_install_script = ve_setup.pip_install_script()
251 | subprocess.check_call(pip_install_script)
252 |
253 | # install pytorch
254 | print("installing torch")
255 | torch_install_script = ve_setup.torch_install_script(torch_version=torch_version, cuda_version=cuda_version)
256 | subprocess.check_call(torch_install_script)
257 |
258 | # install torch-geometric
259 |     if cuda_version in ('101', '102', '111', '113'):
260 |         # prebuilt wheels are provided for these versions; the wheel index
261 |         # moved from pytorch-geometric.com to data.pyg.org with cu113
262 |         if cuda_version == '113':
263 |             base_url = "https://data.pyg.org/whl"
264 |         else:
265 |             base_url = "https://pytorch-geometric.com/whl"
266 |         find_link = f"-f {base_url}/torch-{torch_version}+cu{cuda_version}.html"
267 | 
268 |         for pkg in ("torch-scatter", "torch-sparse", "torch-cluster", "torch-geometric"):
269 |             print(f"Installing {pkg}")
270 |             pkg_inst_script = ve_setup.pkg_install_script(pkg, additional_parameter=find_link)
271 |             subprocess.check_call(pkg_inst_script)
272 | else:
273 |         # no prebuilt wheels for this CUDA version: build them from source
274 |         platform = sys.platform
275 |         if platform.startswith('linux'):
276 |             cuda_lib_path = os.path.join(CudaDetect.get_cuda_path(), "lib64")
277 |             os.environ['LD_LIBRARY_PATH'] = f"{cuda_lib_path}:{os.environ.get('LD_LIBRARY_PATH', '')}"
278 |         elif platform == 'darwin':
279 |             cuda_lib_path = os.path.join(CudaDetect.get_cuda_path(), "lib")
280 |             os.environ['DYLD_LIBRARY_PATH'] = f"{cuda_lib_path}:{os.environ.get('DYLD_LIBRARY_PATH', '')}"
281 | else:
282 | raise NotImplementedError(f"Auto-Build not supported on {platform}")
283 |
284 | # install torch-scatter
285 | scatter_inst_script = ve_setup.pkg_install_script('torch-scatter')
286 | subprocess.check_call(scatter_inst_script)
287 |
288 | # install torch-sparse
289 |         cuda_include_path = os.path.join(CudaDetect.get_cuda_path(), "include")
290 | if platform.startswith('win'):
291 | # TODO: needs python3.lib
292 | # TODO: check if cl.exe is available
293 |             # patch the absolute paths in SOURCES.txt to fix a build error
294 |             download_dir = os.path.join(ve_setup.env_path, '_download')
295 |             Path(download_dir).mkdir(0o755, exist_ok=True)
296 | 
297 |             # the source builds need the python headers
298 |             install_headers(environment_path, download_dir)
299 | 
300 |             pkg_dw_script = ve_setup.pkg_download_script(download_dir)
301 |             subprocess.check_call(pkg_dw_script)
302 |
303 | fixed_sparse = fix_source_absolute_paths('torch_sparse', download_dir)
304 | pkg_install_script = ve_setup.pkg_install_script(fixed_sparse, env_vars=dict(CPATH=cuda_include_path))
305 | else:
306 | pkg_install_script = ve_setup.pkg_install_script('torch-sparse')
307 |
308 | subprocess.check_call(pkg_install_script)
309 |
310 | # install torch-cluster
311 | if platform.startswith('win'):
312 |             # patch the absolute paths in SOURCES.txt to fix a build error
313 | fixed_cluster = fix_source_absolute_paths('torch_cluster', download_dir)
314 | pkg_install_script = ve_setup.pkg_install_script(fixed_cluster, env_vars=dict(CPATH=cuda_include_path))
315 | else:
316 | pkg_install_script = ve_setup.pkg_install_script('torch_cluster')
317 | subprocess.check_call(pkg_install_script)
318 |
319 | # install geometric
320 | pkg_install_script = ve_setup.pkg_install_script('torch_geometric')
321 | subprocess.check_call(pkg_install_script)
322 |
--------------------------------------------------------------------------------
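`setup_environment` is the module's entry point: it creates the venv, installs pip if needed, installs torch, then the torch-geometric stack (from wheels when available, from source otherwise). A minimal invocation sketch (the path is a placeholder; the add-on derives it from its preferences):

    from setup_utils.venv_utils import setup_environment

    # creates the venv and runs the generated batch/shell scripts that
    # install torch 1.11.0+cu113 and the torch-geometric packages
    setup_environment("/path/to/brignet_env", with_pip=True,
                      torch_version="1.11.0", cuda_version="113")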
/ui/menus.py:
--------------------------------------------------------------------------------
1 | from ..postgen_utils import NamiFy
2 | from ..postgen_utils import ExtractMetarig
3 | from ..postgen_utils import SpineFix
4 | from ..postgen_utils import MergeBones
5 |
6 |
7 | def menu_header(layout):
8 | row = layout.row()
9 | row.separator()
10 |
11 | row = layout.row()
12 | row.label(text="Neural Rig Utils")
13 |
14 |
15 | def pose_context_options(self, context):
16 | layout = self.layout
17 | menu_header(layout)
18 |
19 | row = layout.row()
20 | row.operator(ExtractMetarig.bl_idname)
21 |
22 | row = layout.row()
23 | row.operator(NamiFy.bl_idname)
24 |
25 | row = layout.row()
26 | row.operator(SpineFix.bl_idname)
27 |
28 | row = layout.row()
29 | row.operator(MergeBones.bl_idname)
30 |
--------------------------------------------------------------------------------
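`pose_context_options` is a draw callback, so wiring it up means appending it to a menu type. A minimal registration sketch, assuming the add-on hooks Blender's pose-mode context menu (the actual hook lives in the add-on's registration code):

    import bpy
    from .ui import menus

    def register():
        bpy.types.VIEW3D_MT_pose_context_menu.append(menus.pose_context_options)

    def unregister():
        bpy.types.VIEW3D_MT_pose_context_menu.remove(menus.pose_context_options)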