├── LICENSE
├── README.md
├── lbpcascade_frontalface_improved.xml
├── models
│   └── model-b66.pkl
├── network
│   ├── __pycache__
│   │   └── network.cpython-38.pyc
│   └── network.py
├── requirements.txt
├── results
│   └── result.gif
├── test_network.py
├── test_network_mtcnn.py
└── utils
    ├── __init__.py
    ├── __pycache__
    │   ├── __init__.cpython-38.pyc
    │   ├── camera_normalize.cpython-38.pyc
    │   ├── coordinate_transform.cpython-38.pyc
    │   ├── tools.cpython-38.pyc
    │   └── utils.cpython-38.pyc
    ├── camera_normalize.py
    ├── coordinate_transform.py
    ├── detect.py
    ├── tools.py
    └── utils.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2022 Shaw
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Lightweight Head Pose Estimation
2 | This is the official implementation of [**Accurate Head Pose Estimation Using Image Rectification and Lightweight Convolutional Neural Network**](https://ieeexplore.ieee.org/abstract/document/9693249?casa_token=IEBanEdMVjIAAAAA:XJZ3g0tn6gD8FOH-0DuB3j8i9kLZn6McNf1BkJTq6yfPfi5X9jZxo5WmfJX3D-267dIWef5M)
3 |
4 | ## Abstract
5 | Head pose estimation is an important step for many human-computer interaction applications such as face detection, facial recognition, and facial expression classification. Accurate head pose estimation benefits applications that require a face image as input. Most head pose estimation methods suffer from perspective distortion because users do not always align their face perfectly with the camera. This paper presents a new approach that uses image rectification to reduce the negative effect of perspective distortion and a lightweight convolutional neural network to obtain highly accurate head pose estimation. The proposed method calculates the angle between the camera optical axis and the projection vector of the face center. The face image is rectified using this estimated angle through perspective transformation. A lightweight network of only 0.88 MB is designed to take the rectified face image as input and perform head pose estimation. The output of the network, the head pose estimate for the rectified face image, is transformed back to the camera coordinate system as the final head pose estimate. Experiments on public benchmark datasets show that the proposed image rectification and the newly designed lightweight network remarkably improve the accuracy of head pose estimation. Compared with state-of-the-art methods, our approach achieves both higher accuracy and faster processing speed.
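The first step described above — measuring how far the face center sits off the camera optical axis — can be sketched with plain pinhole-camera geometry. This is an illustrative sketch using hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`), not the repository's own code (which presumably lives in `utils/camera_normalize.py` and `utils/coordinate_transform.py`):

```python
import math

# Illustrative sketch (not the repository's code): the angle between the
# camera optical axis (0, 0, 1) and the projection ray through the
# face-center pixel, for hypothetical pinhole intrinsics fx, fy, cx, cy.
def projection_angle(face_center, fx, fy, cx, cy):
    u, v = face_center
    x, y, z = (u - cx) / fx, (v - cy) / fy, 1.0
    norm = math.sqrt(x * x + y * y + z * z)
    return math.acos(z / norm)  # radians

# A face at the principal point lies on the optical axis: the angle is 0.
print(projection_angle((320, 240), fx=500, fy=500, cx=320, cy=240))  # 0.0
```

The farther the detected face center is from the principal point, the larger this angle — and the stronger the perspective distortion the rectification step has to undo.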
6 |
7 | ## Result
8 | 
9 |
10 | ## Platform
11 | + GTX-1080Ti
12 | + Ubuntu
13 |
14 | ## Dependencies
15 |
16 | + Anaconda
17 | + OpenCV
18 | + PyTorch
19 | + NumPy
20 |
21 | ## How to run the code
22 | ```
23 | python test_network.py [--input INPUT_VIDEO_PATH] [--output OUTPUT_VIDEO_PATH]
24 | ```
25 | To use your webcam, set `--input "0"`.
26 |
27 | To use MTCNN for face detection instead, install [MTCNN](https://github.com/ipazc/mtcnn) and [TensorFlow](https://www.tensorflow.org/install), then run the following command.
28 | ```
29 | python test_network_mtcnn.py [--input INPUT_VIDEO_PATH] [--output OUTPUT_VIDEO_PATH]
30 | ```
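Under the same pinhole model, the rectification that these scripts apply before the network can be sketched as a homography `H = K R K⁻¹`, where `R` rotates the ray through the face center onto the optical axis. This is a minimal NumPy sketch with hypothetical intrinsics, not the repository's exact implementation:

```python
import numpy as np

# Illustrative sketch (hypothetical intrinsics, not the repo's code):
# build a homography H = K @ R @ inv(K), where R (Rodrigues' formula)
# rotates the ray through the face-center pixel onto the optical axis.
def rectifying_homography(face_center, K):
    u, v = face_center
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    a = ray / np.linalg.norm(ray)    # unit ray toward the face center
    b = np.array([0.0, 0.0, 1.0])    # optical axis
    axis = np.cross(a, b)
    s, c = np.linalg.norm(axis), float(a @ b)
    if s < 1e-12:                    # face already on the optical axis
        R = np.eye(3)
    else:
        k = axis / s
        Kx = np.array([[0.0, -k[2], k[1]],
                       [k[2], 0.0, -k[0]],
                       [-k[1], k[0], 0.0]])
        R = np.eye(3) + s * Kx + (1.0 - c) * (Kx @ Kx)
    return K @ R @ np.linalg.inv(K)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
H = rectifying_homography((420.0, 300.0), K)
p = H @ np.array([420.0, 300.0, 1.0])
print(p[:2] / p[2])  # the face center lands on the principal point (320, 240)
```

In practice the rectified crop would be produced with something like `cv2.warpPerspective(frame, H, size)`, and the pose predicted on it rotated back by `Rᵀ` — which is roughly what the paper's final transformation back to the camera coordinate system amounts to.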
31 |
32 |
33 | ## Datasets
34 |
35 | The model provided in this repo is trained on 300W-LP. For more datasets used in our paper, please refer to the following links.
36 |
37 | + [300W-LP, AFLW2000](http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3DDFA/main.htm)
38 | + [BIWI](https://data.vision.ee.ethz.ch/cvl/gfanelli/head_pose/head_forest.html)
39 |
40 |
--------------------------------------------------------------------------------
/lbpcascade_frontalface_improved.xml:
--------------------------------------------------------------------------------
1 |
2 |
66 |
67 |
68 |
69 | BOOST
70 | LBP
71 | 45
72 | 45
73 |
74 | GAB
75 | 9.9500000476837158e-001
76 | 5.0000000000000000e-001
77 | 9.4999999999999996e-001
78 | 1
79 | 100
80 |
81 | 256
82 | 1
83 | 19
84 |
85 |
86 | <_>
87 | 6
88 | -4.1617846488952637e+000
89 |
90 | <_>
91 |
92 | 0 -1 26 -1 -1 -17409 -1 -1 -1 -1 -1
93 |
94 | -9.9726462364196777e-001 -3.8938775658607483e-001
95 | <_>
96 |
97 | 0 -1 18 -1 -1 -21569 -20545 -1 -1 -20545 -1
98 |
99 | -9.8648911714553833e-001 -2.5386649370193481e-001
100 | <_>
101 |
102 | 0 -1 30 -21569 -16449 1006578219 -20801 -16449 -1 -21585 -1
103 |
104 | -9.6436238288879395e-001 -1.4039695262908936e-001
105 | <_>
106 |
107 | 0 -1 54 -1 -1 -16402 -4370 -1 -1 -1053010 -4456466
108 |
109 | -8.4081345796585083e-001 3.8321062922477722e-001
110 | <_>
111 |
112 | 0 -1 29 -184747280 -705314819 1326353 1364574079 -131073 -5
113 | 2147481147 -1
114 |
115 | -8.1084597110748291e-001 4.3495711684226990e-001
116 | <_>
117 |
118 | 0 -1 89 -142618625 -4097 -37269 -20933 872350430 -268476417
119 | 1207894255 2139032115
120 |
121 | -7.3140043020248413e-001 4.3799084424972534e-001
122 |
123 | <_>
124 | 6
125 | -4.0652265548706055e+000
126 |
127 | <_>
128 |
129 | 0 -1 19 -1 -1 -17409 -1 -1 -1 -1 -1
130 |
131 | -9.9727255105972290e-001 -7.2050148248672485e-001
132 | <_>
133 |
134 | 0 -1 38 -1 1073741823 -1 -1 -1 -1 -1 -1
135 |
136 | -9.8717331886291504e-001 -5.3031939268112183e-001
137 | <_>
138 |
139 | 0 -1 28 -16385 -1 -21569 -20545 -1 -1 -21569 -1
140 |
141 | -9.3442338705062866e-001 6.5213099122047424e-002
142 | <_>
143 |
144 | 0 -1 112 -2097153 -1 -1 -1 -1 -8193 -1 -35467
145 |
146 | -7.9567342996597290e-001 4.2883640527725220e-001
147 | <_>
148 |
149 | 0 -1 48 -134239573 -16465 58663467 -1079022929 -1073758273
150 | -81937 -8412501 -404766817
151 |
152 | -7.1264797449111938e-001 4.1050794720649719e-001
153 | <_>
154 |
155 | 0 -1 66 -17047555 -1099008003 2147479551 -1090584581 -69633
156 | -1342177281 -1090650121 -1472692240
157 |
158 | -7.6119172573089600e-001 4.2042696475982666e-001
159 |
160 | <_>
161 | 7
162 | -4.6904473304748535e+000
163 |
164 | <_>
165 |
166 | 0 -1 12 -1 -1 -17409 -1 -1 -1 -1 -1
167 |
168 | -9.9725550413131714e-001 -8.3142280578613281e-001
169 | <_>
170 |
171 | 0 -1 31 -1 -168429569 -1 -1 -1 -1 -1 -1
172 |
173 | -9.8183268308639526e-001 -3.6373397707939148e-001
174 | <_>
175 |
176 | 0 -1 38 -1 1073741759 -1 -1 -1 -1 -1 -1
177 |
178 | -9.1890293359756470e-001 7.8322596848011017e-002
179 | <_>
180 |
181 | 0 -1 27 -17409 -2097153 -134372726 -21873 -65 -536870913
182 | -161109 -4215889
183 |
184 | -8.0752444267272949e-001 1.9565649330615997e-001
185 | <_>
186 |
187 | 0 -1 46 -469779457 -286371842 -33619971 -212993 -1 -41943049
188 | -134217731 -1346863620
189 |
190 | -6.9232726097106934e-001 3.8141927123069763e-001
191 | <_>
192 |
193 | 0 -1 125 -1896950780 -1964839052 -9 707723004 -34078727
194 | -1074266122 -536872969 -262145
195 |
196 | -8.1760478019714355e-001 3.4172961115837097e-001
197 | <_>
198 |
199 | 0 -1 80 -402657501 654311423 -419533278 -452984853
200 | 1979676215 -1208090625 -167772569 -524289
201 |
202 | -6.3433408737182617e-001 4.3154156208038330e-001
203 |
204 | <_>
205 | 8
206 | -4.2590322494506836e+000
207 |
208 | <_>
209 |
210 | 0 -1 42 -1 -655361 -1 -1 -1 -1 -1 -1
211 |
212 | -9.9715477228164673e-001 -8.6178696155548096e-001
213 | <_>
214 |
215 | 0 -1 40 -1 -705300491 -1 -1 -1 -1 -1 -1
216 |
217 | -9.8356908559799194e-001 -5.7423096895217896e-001
218 | <_>
219 |
220 | 0 -1 43 -65 872413111 -2049 -1 -1 -1 -1 -1
221 |
222 | -9.2525935173034668e-001 -1.3835857808589935e-001
223 | <_>
224 |
225 | 0 -1 111 -1 -5242881 -1 -524289 -4194305 -1 -1 -43148
226 |
227 | -7.8076487779617310e-001 1.8362471461296082e-001
228 | <_>
229 |
230 | 0 -1 25 -145227841 868203194 -1627394049 935050171
231 | 2147483647 1006600191 -268439637 1002437615
232 |
233 | -7.2554033994674683e-001 3.3393219113349915e-001
234 | <_>
235 |
236 | 0 -1 116 -214961408 50592514 -2128 1072162674 -1077940293
237 | -1084489966 -134219854 -1074790401
238 |
239 | -6.1547595262527466e-001 3.9214438199996948e-001
240 | <_>
241 |
242 | 0 -1 3 -294987948 -1124421633 -73729 -268435841 -33654928
243 | 2122317823 -268599297 -33554945
244 |
245 | -6.4863425493240356e-001 3.8784855604171753e-001
246 | <_>
247 |
248 | 0 -1 22 -525585 -26738821 -17895690 1123482236 1996455758
249 | -8519849 -252182980 -461898753
250 |
251 | -5.5464369058609009e-001 4.4275921583175659e-001
252 |
253 | <_>
254 | 8
255 | -4.0009465217590332e+000
256 |
257 | <_>
258 |
259 | 0 -1 82 -1 -1 -1 -1 -33685505 -1 -1 -1
260 |
261 | -9.9707120656967163e-001 -8.9196771383285522e-001
262 | <_>
263 |
264 | 0 -1 84 -1 -1 -1 -1 2147446783 -1 -1 -1
265 |
266 | -9.8670446872711182e-001 -7.5064390897750854e-001
267 | <_>
268 |
269 | 0 -1 79 -1 -1 -262145 -1 -252379137 -1 -1 -1
270 |
271 | -8.9446705579757690e-001 7.0268943905830383e-002
272 | <_>
273 |
274 | 0 -1 61 -1 -8201 -1 -2097153 -16777217 -513 -16777217
275 | -1162149889
276 |
277 | -7.2166109085083008e-001 2.9786801338195801e-001
278 | <_>
279 |
280 | 0 -1 30 -21569 -1069121 1006578211 -134238545 -16450
281 | -268599297 -21617 -14680097
282 |
283 | -6.2449234724044800e-001 3.8551881909370422e-001
284 | <_>
285 |
286 | 0 -1 75 -268701913 -1999962377 1995165474 -453316822
287 | 1744684853 -2063597697 -134226057 -50336769
288 |
289 | -5.5207914113998413e-001 4.2211884260177612e-001
290 | <_>
291 |
292 | 0 -1 21 -352321825 -526489 -420020626 -486605074 1155483470
293 | -110104705 -587840772 -25428801
294 |
295 | -5.3324747085571289e-001 4.4535955786705017e-001
296 | <_>
297 |
298 | 0 -1 103 70270772 2012790229 -16810020 -245764 -1208090635
299 | -753667 -1073741828 -1363662420
300 |
301 | -6.4402890205383301e-001 3.8995954394340515e-001
302 |
303 | <_>
304 | 8
305 | -4.6897511482238770e+000
306 |
307 | <_>
308 |
309 | 0 -1 97 -1 -1 -1 -1 -524289 -524289 -1 -1
310 |
311 | -9.9684870243072510e-001 -8.8232177495956421e-001
312 | <_>
313 |
314 | 0 -1 84 -1 -1 -1 -1 2147438591 -1 -1 -1
315 |
316 | -9.8677414655685425e-001 -7.8965580463409424e-001
317 | <_>
318 |
319 | 0 -1 113 -1 -1 -1 -1 -1048577 -262149 -1048577 -35339
320 |
321 | -9.2621946334838867e-001 -2.9984828829765320e-001
322 | <_>
323 |
324 | 0 -1 33 -2249 867434291 -32769 -33562753 -1 -1073758209
325 | -4165 -1
326 |
327 | -7.2429555654525757e-001 2.2348840534687042e-001
328 | <_>
329 |
330 | 0 -1 98 1659068671 -142606337 587132538 -67108993 577718271
331 | -294921 -134479873 -129
332 |
333 | -5.5495566129684448e-001 3.5419258475303650e-001
334 | <_>
335 |
336 | 0 -1 100 -268441813 788267007 -286265494 -486576145 -8920251
337 | 2138505075 -151652570 -2050
338 |
339 | -5.3362584114074707e-001 3.9479774236679077e-001
340 | <_>
341 |
342 | 0 -1 51 -1368387212 -537102978 -98305 -163843 1065109500
343 | -16777217 -67321939 -1141359619
344 |
345 | -5.6162708997726440e-001 3.8008108735084534e-001
346 | <_>
347 |
348 | 0 -1 127 -268435550 1781120906 -251658720 -143130698
349 | -1048605 -1887436825 1979700688 -1008730125
350 |
351 | -5.1167154312133789e-001 4.0678605437278748e-001
352 |
353 | <_>
354 | 10
355 | -4.2179841995239258e+000
356 |
357 | <_>
358 |
359 | 0 -1 97 -1 -1 -1 -1 -524289 -524289 -1 -1
360 |
361 | -9.9685418605804443e-001 -8.8037383556365967e-001
362 | <_>
363 |
364 | 0 -1 90 -1 -1 -1 -1 -8912897 -524297 -8912897 -1
365 |
366 | -9.7972750663757324e-001 -5.7626229524612427e-001
367 | <_>
368 |
369 | 0 -1 96 -1 -1 -1 -1 -1 -65 -1 -2249
370 |
371 | -9.0239793062210083e-001 -1.7454113066196442e-001
372 | <_>
373 |
374 | 0 -1 71 -1 -4097 -1 -513 -16777217 -268468483 -16797697
375 | -1430589697
376 |
377 | -7.4346423149108887e-001 9.4165161252021790e-002
378 | <_>
379 |
380 | 0 -1 37 1364588304 -581845274 -536936460 -3 -308936705
381 | -1074331649 -4196865 -134225953
382 |
383 | -6.8877440690994263e-001 2.7647304534912109e-001
384 | <_>
385 |
386 | 0 -1 117 -37765187 -540675 -3 -327753 -1082458115 -65537
387 | 1071611901 536827253
388 |
389 | -5.7555085420608521e-001 3.4339720010757446e-001
390 | <_>
391 |
392 | 0 -1 85 -269490650 -1561395522 -1343312090 -857083986
393 | -1073750223 -369098755 -50856110 -2065
394 |
395 | -5.4036927223205566e-001 4.0065473318099976e-001
396 | <_>
397 |
398 | 0 -1 4 -425668880 -34427164 1879048177 -269570140 790740912
399 | -196740 2138535839 -536918145
400 |
401 | -4.8439365625381470e-001 4.4630467891693115e-001
402 | <_>
403 |
404 | 0 -1 92 74726960 -1246482434 -1 -246017 -1078607916
405 | -1073947163 -1644231687 -1359211496
406 |
407 | -5.6686979532241821e-001 3.6671569943428040e-001
408 | <_>
409 |
410 | 0 -1 11 -135274809 -1158173459 -353176850 540195262
411 | 2139086600 2071977814 -546898600 -96272673
412 |
413 | -5.1499199867248535e-001 4.0788397192955017e-001
414 |
415 | <_>
416 | 9
417 | -4.0345416069030762e+000
418 |
419 | <_>
420 |
421 | 0 -1 78 -1 -1 -1 -1 -8912897 -1 -8912897 -1
422 |
423 | -9.9573624134063721e-001 -8.5452395677566528e-001
424 | <_>
425 |
426 | 0 -1 93 -1 -1 -1 -1 -148635649 -524297 -8912897 -1
427 |
428 | -9.7307401895523071e-001 -5.2884924411773682e-001
429 | <_>
430 |
431 | 0 -1 77 -1 -8209 -1 -257 -772734977 -1 -201850881 -1
432 |
433 | -8.6225658655166626e-001 4.3712578713893890e-002
434 | <_>
435 |
436 | 0 -1 68 -570427393 -16649 -69633 -131073 -536944677 -1 -8737
437 | -1435828225
438 |
439 | -6.8078064918518066e-001 2.5120577216148376e-001
440 | <_>
441 |
442 | 0 -1 50 -1179697 -34082849 -3278356 -37429266 -1048578
443 | -555753474 -1015551096 -37489685
444 |
445 | -6.1699724197387695e-001 3.0963841080665588e-001
446 | <_>
447 |
448 | 0 -1 129 -1931606992 -17548804 -16842753 -1075021827
449 | 1073667572 -81921 -1611073620 -1415047752
450 |
451 | -6.0499197244644165e-001 3.0735063552856445e-001
452 | <_>
453 |
454 | 0 -1 136 -269754813 1761591286 -1073811523 2130378623 -17580
455 | -1082294665 -159514800 -1026883840
456 |
457 | -5.6772041320800781e-001 3.5023149847984314e-001
458 | <_>
459 |
460 | 0 -1 65 2016561683 1528827871 -10258447 960184191 125476830
461 | -8511618 -1078239365 187648611
462 |
463 | -5.5894804000854492e-001 3.4856522083282471e-001
464 | <_>
465 |
466 | 0 -1 13 -207423502 -333902 2013200231 -202348848 1042454451
467 | -16393 1073117139 2004162321
468 |
469 | -5.7197356224060059e-001 3.2818377017974854e-001
470 |
471 | <_>
472 | 9
473 | -3.4892759323120117e+000
474 |
475 | <_>
476 |
477 | 0 -1 78 -1 -1 -1 -1 -8912897 -1 -8912897 -1
478 |
479 | -9.8917990922927856e-001 -7.3812037706375122e-001
480 | <_>
481 |
482 | 0 -1 93 -1 -1 -1 -1 -148635649 -524297 -8912897 -1
483 |
484 | -9.3414896726608276e-001 -2.6945295929908752e-001
485 | <_>
486 |
487 | 0 -1 83 -1 -524289 -1 -1048577 1879011071 -32769 -524289
488 | -3178753
489 |
490 | -7.6891708374023438e-001 5.2568886429071426e-002
491 | <_>
492 |
493 | 0 -1 9 -352329729 -17891329 -16810117 -486871042 -688128841
494 | -1358954675 -16777218 -219217968
495 |
496 | -6.2337344884872437e-001 2.5143685936927795e-001
497 | <_>
498 |
499 | 0 -1 130 -2157 -1548812374 -1343233440 -418381854 -953155613
500 | -836960513 -713571200 -709888014
501 |
502 | -4.7277018427848816e-001 3.9616456627845764e-001
503 | <_>
504 |
505 | 0 -1 121 -1094717701 -67240065 -65857 -32899 -5783756
506 | -136446081 -134285352 -2003298884
507 |
508 | -5.1766264438629150e-001 3.5814732313156128e-001
509 | <_>
510 |
511 | 0 -1 23 -218830160 -119671186 5505075 1241491391 -1594469
512 | -2097185 2004828075 -67649541
513 |
514 | -6.5394639968872070e-001 3.0377501249313354e-001
515 | <_>
516 |
517 | 0 -1 115 -551814749 2099511088 -1090732551 -2045546512
518 | -1086341441 1059848178 800042912 252705994
519 |
520 | -5.2584588527679443e-001 3.3847147226333618e-001
521 | <_>
522 |
523 | 0 -1 99 -272651477 578776766 -285233490 -889225217
524 | 2147448656 377454463 2012701952 -68157761
525 |
526 | -6.1836904287338257e-001 2.8922611474990845e-001
527 |
528 | <_>
529 | 9
530 | -3.0220029354095459e+000
531 |
532 | <_>
533 |
534 | 0 -1 36 -1 -570425345 -1 -570425345 -1 -50331649 -6291457 -1
535 |
536 | -9.7703826427459717e-001 -6.2527233362197876e-001
537 | <_>
538 |
539 | 0 -1 124 -1430602241 -33619969 -1 -3 -1074003969 -1073758209
540 | -1073741825 -1073768705
541 |
542 | -8.9538317918777466e-001 -3.1887885928153992e-001
543 | <_>
544 |
545 | 0 -1 88 -1 -268439625 -65601 -268439569 -393809 -270532609
546 | -42076889 -288361721
547 |
548 | -6.8733429908752441e-001 1.2978810071945190e-001
549 | <_>
550 |
551 | 0 -1 132 -755049252 2042563807 1795096575 465121071
552 | -1090585188 -20609 -1459691784 539672495
553 |
554 | -5.7038843631744385e-001 3.0220884084701538e-001
555 | <_>
556 |
557 | 0 -1 20 -94377762 -25702678 1694167798 -231224662 1079955016
558 | -346144140 2029995743 -536918961
559 |
560 | -5.3204691410064697e-001 3.4054222702980042e-001
561 | <_>
562 |
563 | 0 -1 47 2143026943 -285278225 -3 -612438281 -16403 -131074
564 | -1 -1430749256
565 |
566 | -4.6176829934120178e-001 4.1114711761474609e-001
567 | <_>
568 |
569 | 0 -1 74 203424336 -25378820 -35667973 1073360894 -1912815660
570 | -573444 -356583491 -1365235056
571 |
572 | -4.9911966919898987e-001 3.5335537791252136e-001
573 | <_>
574 |
575 | 0 -1 6 -1056773 -1508430 -558153 -102747408 2133997491
576 | -269043865 2004842231 -8947721
577 |
578 | -4.0219521522521973e-001 4.3947893381118774e-001
579 | <_>
580 |
581 | 0 -1 70 -880809694 -1070282769 -1363162108 -838881281
582 | -680395161 -2064124929 -34244753 1173880701
583 |
584 | -5.3891533613204956e-001 3.2062566280364990e-001
585 |
586 | <_>
587 | 8
588 | -2.5489892959594727e+000
589 |
590 | <_>
591 |
592 | 0 -1 39 -1 -572522497 -8519681 -570425345 -4195329 -50333249
593 | -1 -1
594 |
595 | -9.4647216796875000e-001 -3.3662387728691101e-001
596 | <_>
597 |
598 | 0 -1 124 -1430735362 -33619971 -8201 -3 -1677983745
599 | -1073762817 -1074003969 -1142979329
600 |
601 | -8.0300611257553101e-001 -3.8466516882181168e-002
602 | <_>
603 |
604 | 0 -1 91 -67113217 -524289 -671482265 -786461 1677132031
605 | -268473345 -68005889 -70291765
606 |
607 | -5.8367580175399780e-001 2.6507318019866943e-001
608 | <_>
609 |
610 | 0 -1 17 -277872641 -553910292 -268435458 -16843010
611 | 1542420439 -1342178311 -143132940 -2834
612 |
613 | -4.6897178888320923e-001 3.7864661216735840e-001
614 | <_>
615 |
616 | 0 -1 137 -1312789 -290527285 -286326862 -5505280 -1712335966
617 | -2045979188 1165423617 -709363723
618 |
619 | -4.6382644772529602e-001 3.6114525794982910e-001
620 | <_>
621 |
622 | 0 -1 106 1355856590 -109445156 -96665606 2066939898
623 | 1356084692 1549031917 -30146561 -16581701
624 |
625 | -6.3095021247863770e-001 2.9294869303703308e-001
626 | <_>
627 |
628 | 0 -1 104 -335555328 118529 1860167712 -810680357 -33558656
629 | -1368391795 -402663552 -1343225921
630 |
631 | -5.9658926725387573e-001 2.7228885889053345e-001
632 | <_>
633 |
634 | 0 -1 76 217581168 -538349634 1062631419 1039868926
635 | -1090707460 -2228359 -1078042693 -1147128518
636 |
637 | -4.5812287926673889e-001 3.7063929438591003e-001
638 |
639 | <_>
640 | 9
641 | -2.5802578926086426e+000
642 |
643 | <_>
644 |
645 | 0 -1 35 -513 -706873891 -270541825 1564475391 -120602625
646 | -118490145 -3162113 -1025
647 |
648 | -8.9068460464477539e-001 -1.6470588743686676e-001
649 | <_>
650 |
651 | 0 -1 41 -1025 872144563 -2105361 -1078076417 -1048577
652 | -1145061461 -87557413 -1375993973
653 |
654 | -7.1808964014053345e-001 2.2022204473614693e-002
655 | <_>
656 |
657 | 0 -1 95 -42467849 967946223 -811601986 1030598351
658 | -1212430676 270856533 -1392539508 147705039
659 |
660 | -4.9424821138381958e-001 3.0048963427543640e-001
661 | <_>
662 |
663 | 0 -1 10 -218116370 -637284625 -87373174 -521998782
664 | -805355450 -615023745 -814267322 -12069282
665 |
666 | -5.5306458473205566e-001 2.9137542843818665e-001
667 | <_>
668 |
669 | 0 -1 105 -275849241 -527897 -11052049 -69756067 -15794193
670 | -1141376839 -564771 -287095455
671 |
672 | -4.6759819984436035e-001 3.6638516187667847e-001
673 | <_>
674 |
675 | 0 -1 24 -1900898096 -18985228 -44056577 -24675 -1074880639
676 | -283998 796335613 -1079041957
677 |
678 | -4.2737138271331787e-001 3.9243003726005554e-001
679 | <_>
680 |
681 | 0 -1 139 -555790844 410735094 -32106513 406822863 -897632192
682 | -912830145 -117771560 -1204027649
683 |
684 | -4.1896930336952209e-001 3.6744937300682068e-001
685 | <_>
686 |
687 | 0 -1 0 -1884822366 -1406613148 1135342180 -1979127580
688 | -68174862 246469804 1001386992 -708885872
689 |
690 | -5.7093089818954468e-001 2.9880744218826294e-001
691 | <_>
692 |
693 | 0 -1 45 -469053950 1439068142 2117758841 2004671078
694 | 207931006 1265321675 970353931 1541343047
695 |
696 | -6.0491901636123657e-001 2.4652053415775299e-001
697 |
698 | <_>
699 | 9
700 | -2.2425732612609863e+000
701 |
702 | <_>
703 |
704 | 0 -1 58 1481987157 282547485 -14952129 421131223 -391065352
705 | -24212488 -100094241 -1157907473
706 |
707 | -8.2822084426879883e-001 -2.1619293093681335e-001
708 | <_>
709 |
710 | 0 -1 126 -134217889 -543174305 -75497474 -16851650 -6685738
711 | -75834693 -2097200 -262146
712 |
713 | -5.4628932476043701e-001 2.7662658691406250e-001
714 | <_>
715 |
716 | 0 -1 133 -220728227 -604288517 -661662214 413104863
717 | -627323700 -251915415 -626200872 -1157958657
718 |
719 | -4.1643124818801880e-001 4.1700571775436401e-001
720 | <_>
721 |
722 | 0 -1 2 -186664033 -44236961 -1630262774 -65163606 -103237330
723 | -3083265 -1003729 2053105955
724 |
725 | -5.4847818613052368e-001 2.9710745811462402e-001
726 | <_>
727 |
728 | 0 -1 62 -256115886 -237611873 -620250696 387061799
729 | 1437882671 274878849 -8684449 1494294023
730 |
731 | -4.6202757954597473e-001 3.3915829658508301e-001
732 | <_>
733 |
734 | 0 -1 1 -309400577 -275864640 -1056864869 1737132756
735 | -272385089 1609671419 1740601343 1261376789
736 |
737 | -4.6158722043037415e-001 3.3939516544342041e-001
738 | <_>
739 |
740 | 0 -1 102 818197248 -196324552 286970589 -573270699
741 | -1174099579 -662077381 -1165157895 -1626859296
742 |
743 | -4.6193107962608337e-001 3.2456985116004944e-001
744 | <_>
745 |
746 | 0 -1 69 -1042550357 14675409 1367955200 -841482753
747 | 1642443255 8774277 1941304147 1099949563
748 |
749 | -4.9091196060180664e-001 3.3870378136634827e-001
750 | <_>
751 |
752 | 0 -1 72 -639654997 1375720439 -2129542805 1614801090
753 | -626787937 -5779294 1488699183 -525406458
754 |
755 | -4.9073097109794617e-001 3.0637946724891663e-001
756 |
757 | <_>
758 | 9
759 | -1.2258235216140747e+000
760 |
761 | <_>
762 |
763 | 0 -1 118 302046707 -16744240 1360106207 -543735387
764 | 1025700851 -1079408512 1796961263 -6334981
765 |
766 | -6.1358314752578735e-001 2.3539231717586517e-001
767 | <_>
768 |
769 | 0 -1 5 -144765953 -116448726 -653851877 1934829856 722021887
770 | 856564834 1933919231 -540838029
771 |
772 | -5.1209545135498047e-001 3.2506987452507019e-001
773 | <_>
774 |
775 | 0 -1 140 -170132825 -1438923874 1879300370 -1689337194
776 | -695606496 285911565 -1044188928 -154210028
777 |
778 | -5.1769560575485229e-001 3.2290914654731750e-001
779 | <_>
780 |
781 | 0 -1 131 -140776261 -355516414 822178224 -1039743806
782 | -1012208926 134887424 1438876097 -908591660
783 |
784 | -5.0321841239929199e-001 3.0263835191726685e-001
785 | <_>
786 |
787 | 0 -1 64 -2137211696 -1634281249 1464325973 498569935
788 | -1580152080 -2001687927 721783561 265096035
789 |
790 | -4.6532225608825684e-001 3.4638473391532898e-001
791 | <_>
792 |
793 | 0 -1 101 -255073589 -211824417 -972195129 -1063415417
794 | 1937994261 1363165220 -754733105 1967602541
795 |
796 | -4.9611270427703857e-001 3.3260712027549744e-001
797 | <_>
798 |
799 | 0 -1 81 -548146862 -655567194 -2062466596 1164562721
800 | 416408236 -1591631712 -83637777 975344427
801 |
802 | -4.9862930178642273e-001 3.2003280520439148e-001
803 | <_>
804 |
805 | 0 -1 55 -731904652 2147179896 2147442687 2112830847 -65604
806 | -131073 -42139667 -1074907393
807 |
808 | -3.6636069416999817e-001 4.5651626586914063e-001
809 | <_>
810 |
811 | 0 -1 67 1885036886 571985932 -1784930633 724431327
812 | 1940422257 -1085746880 964888398 731867951
813 |
814 | -5.2619713544845581e-001 3.2635414600372314e-001
815 |
816 | <_>
817 | 9
818 | -1.3604533672332764e+000
819 |
820 | <_>
821 |
822 | 0 -1 8 -287609985 -965585953 -2146397793 -492129894
823 | -729029645 -544619901 -645693256 -6565484
824 |
825 | -4.5212322473526001e-001 3.8910505175590515e-001
826 | <_>
827 |
828 | 0 -1 122 -102903523 -145031013 536899675 688195859
829 | -645291520 -1165359094 -905565928 171608223
830 |
831 | -4.9594074487686157e-001 3.4109055995941162e-001
832 | <_>
833 |
834 | 0 -1 134 -790640459 487931983 1778450522 1036604041
835 | -904752984 -954040118 -2134707506 304866043
836 |
837 | -4.1148442029953003e-001 3.9666590094566345e-001
838 | <_>
839 |
840 | 0 -1 141 -303829117 1726939070 922189815 -827983123
841 | 1567883042 1324809852 292710260 -942678754
842 |
843 | -3.5154473781585693e-001 4.8011952638626099e-001
844 | <_>
845 |
846 | 0 -1 59 -161295376 -159215460 -1858041315 2140644499
847 | -2009065472 -133804007 -2003265301 1263206851
848 |
849 | -4.2808216810226440e-001 3.9841541647911072e-001
850 | <_>
851 |
852 | 0 -1 34 -264248081 -667846464 1342624856 1381160835
853 | -2104716852 1342865409 -266612310 -165954877
854 |
855 | -4.3293288350105286e-001 4.0339657664299011e-001
856 | <_>
857 |
858 | 0 -1 32 -1600388464 -40369901 285344639 1394344275
859 | -255680312 -100532214 -1031663944 -7471079
860 |
861 | -4.1385015845298767e-001 4.5087572932243347e-001
862 | <_>
863 |
864 | 0 -1 15 1368521651 280207469 35779199 -105983261 1208124819
865 | -565870452 -1144024288 -591535344
866 |
867 | -4.2956474423408508e-001 4.2176279425621033e-001
868 | <_>
869 |
870 | 0 -1 109 1623607527 -661513115 -1073217263 -2142994420
871 | -1339883309 -89816956 436308899 1426178059
872 |
873 | -4.7764992713928223e-001 3.7551075220108032e-001
874 |
875 | <_>
876 | 9
877 | -4.2518746852874756e-001
878 |
879 | <_>
880 |
881 | 0 -1 135 -116728032 -1154420809 -1350582273 746061691
882 | -1073758277 2138570623 2113797566 -138674182
883 |
884 | -1.7125381529331207e-001 6.5421247482299805e-001
885 | <_>
886 |
887 | 0 -1 63 -453112432 -1795354691 -1342242964 494112553
888 | 209458404 -2114697500 1316830362 259213855
889 |
890 | -3.9870172739028931e-001 4.5807033777236938e-001
891 | <_>
892 |
893 | 0 -1 52 -268172036 294715533 268575185 486785157 -1065303920
894 | -360185856 -2147476808 134777113
895 |
896 | -5.3581339120864868e-001 3.5815808176994324e-001
897 | <_>
898 |
899 | 0 -1 86 -301996882 -345718921 1877946252 -940720129
900 | -58737369 -721944585 -92954835 -530449
901 |
902 | -3.9938014745712280e-001 4.9603295326232910e-001
903 | <_>
904 |
905 | 0 -1 14 -853281886 -756895766 2130706352 -9519120
906 | -1921059862 394133373 2138453959 -538200841
907 |
908 | -4.0230083465576172e-001 4.9537116289138794e-001
909 | <_>
910 |
911 | 0 -1 128 -2133448688 -641138493 1078022185 294060066
912 | -327122776 -2130640896 -2147466247 -1910634326
913 |
914 | -5.8290809392929077e-001 3.4102553129196167e-001
915 | <_>
916 |
917 | 0 -1 53 587265978 -2071658479 1108361221 -578448765
918 | -1811905899 -2008965119 33900729 762301595
919 |
920 | -4.5518967509269714e-001 4.7242793440818787e-001
921 | <_>
922 |
923 | 0 -1 138 -1022189373 -2139094976 16658 -1069445120
924 | -1073555454 -1073577856 1096068 -978351488
925 |
926 | -4.7530207037925720e-001 4.3885371088981628e-001
927 | <_>
928 |
929 | 0 -1 7 -395352441 -1073541103 -1056964605 1053186 269111298
930 | -2012184576 1611208714 -360415095
931 |
932 | -5.0448113679885864e-001 4.1588482260704041e-001
933 |
934 | <_>
935 | 7
936 | 2.7163455262780190e-002
937 |
938 | <_>
939 |
940 | 0 -1 49 783189748 -137429026 -257 709557994 2130460236
941 | -196611 -9580 585428708
942 |
943 | -2.0454545319080353e-001 7.9608374834060669e-001
944 | <_>
945 |
946 | 0 -1 108 1284360448 1057423155 1592696573 -852672655
947 | 1547382714 -1642594369 125705358 797134398
948 |
949 | -3.6474677920341492e-001 6.0925579071044922e-001
950 | <_>
951 |
952 | 0 -1 94 1347680270 -527720448 1091567712 1073745933
953 | -1073180671 0 285745154 -511192438
954 |
955 | -4.6406838297843933e-001 5.5626088380813599e-001
956 | <_>
957 |
958 | 0 -1 73 1705780944 -145486260 -115909 -281793505 -418072663
959 | -1681064068 1877454127 -1912330993
960 |
961 | -4.7043186426162720e-001 5.8430361747741699e-001
962 | <_>
963 |
964 | 0 -1 110 -2118142016 339509033 -285260567 1417764573
965 | 68144392 -468879483 -2033291636 231451911
966 |
967 | -4.8700931668281555e-001 5.4639810323715210e-001
968 | <_>
969 |
970 | 0 -1 119 -1888051818 489996135 -65539 849536890 2146716845
971 | -1107542088 -1275615746 -1119617586
972 |
973 | -4.3356490135192871e-001 6.5175366401672363e-001
974 | <_>
975 |
976 | 0 -1 44 -1879021438 336830528 1073766659 1477541961 8560696
977 | -1207369568 8462472 1493893448
978 |
979 | -5.4343086481094360e-001 5.2777874469757080e-001
980 |
981 | <_>
982 | 7
983 | 4.9174150824546814e-001
984 |
985 | <_>
986 |
987 | 0 -1 57 644098 15758324 1995964260 -463011882 893285175
988 | 83156983 2004317989 16021237
989 |
990 | -1.7073170840740204e-001 9.0782123804092407e-001
991 | <_>
992 |
993 | 0 -1 123 268632845 -2147450864 -2143240192 -2147401728
994 | 8523937 -1878523840 16777416 616824984
995 |
996 | -4.8744434118270874e-001 7.3311311006546021e-001
997 | <_>
998 |
999 | 0 -1 120 -2110735872 803880886 989739810 1673281312 91564930
1000 | -277454958 997709514 -581366443
1001 |
1002 | -4.0291741490364075e-001 8.2450771331787109e-001
1003 | <_>
1004 |
1005 | 0 -1 87 941753434 -1067128905 788512753 -1074450460
1006 | 779101657 -1346552460 938805167 -2050424642
1007 |
1008 | -3.6246949434280396e-001 8.7103593349456787e-001
1009 | <_>
1010 |
1011 | 0 -1 60 208 1645217920 130 538263552 33595552 -1475870592
1012 | 16783361 1375993867
1013 |
1014 | -6.1472141742706299e-001 5.9707164764404297e-001
1015 | <_>
1016 |
1017 | 0 -1 114 1860423179 1034692624 -285213187 -986681712
1018 | 1576755092 -1408205463 -127714 -1246035687
1019 |
1020 | -4.5621752738952637e-001 8.9482426643371582e-001
1021 | <_>
1022 |
1023 | 0 -1 107 33555004 -1861746688 1073807361 -754909184
1024 | 645922856 8388608 134250648 419635458
1025 |
1026 | -5.2466005086898804e-001 7.1834069490432739e-001
1027 |
1028 | <_>
1029 | 2
1030 | 1.9084988832473755e+000
1031 |
1032 | <_>
1033 |
1034 | 0 -1 16 536064 131072 -20971516 524288 576 1048577 0 40960
1035 |
1036 | -8.0000001192092896e-001 9.8018401861190796e-001
1037 | <_>
1038 |
1039 | 0 -1 56 67108864 0 4096 1074003968 8192 536870912 4 262144
1040 |
1041 | -9.6610915660858154e-001 9.2831486463546753e-001
1042 |
1043 | <_>
1044 |
1045 | 0 0 1 1
1046 | <_>
1047 |
1048 | 0 0 3 2
1049 | <_>
1050 |
1051 | 0 1 13 6
1052 | <_>
1053 |
1054 | 0 2 3 14
1055 | <_>
1056 |
1057 | 0 2 4 2
1058 | <_>
1059 |
1060 | 0 6 2 3
1061 | <_>
1062 |
1063 | 0 6 3 2
1064 | <_>
1065 |
1066 | 0 16 1 3
1067 | <_>
1068 |
1069 | 0 20 3 3
1070 | <_>
1071 |
1072 | 0 22 2 3
1073 | <_>
1074 |
1075 | 0 28 4 4
1076 | <_>
1077 |
1078 | 0 35 2 3
1079 | <_>
1080 |
1081 | 1 0 14 7
1082 | <_>
1083 |
1084 | 1 5 3 2
1085 | <_>
1086 |
1087 | 1 6 2 1
1088 | <_>
1089 |
1090 | 1 14 10 9
1091 | <_>
1092 |
1093 | 1 21 4 4
1094 | <_>
1095 |
1096 | 1 23 4 2
1097 | <_>
1098 |
1099 | 2 0 13 7
1100 | <_>
1101 |
1102 | 2 0 14 7
1103 | <_>
1104 |
1105 | 2 33 5 4
1106 | <_>
1107 |
1108 | 2 36 4 3
1109 | <_>
1110 |
1111 | 2 39 3 2
1112 | <_>
1113 |
1114 | 3 1 13 11
1115 | <_>
1116 |
1117 | 3 2 3 2
1118 | <_>
1119 |
1120 | 4 0 7 8
1121 | <_>
1122 |
1123 | 4 0 13 7
1124 | <_>
1125 |
1126 | 5 0 12 6
1127 | <_>
1128 |
1129 | 5 0 13 7
1130 | <_>
1131 |
1132 | 5 1 10 13
1133 | <_>
1134 |
1135 | 5 1 12 7
1136 | <_>
1137 |
1138 | 5 2 7 13
1139 | <_>
1140 |
1141 | 5 4 2 1
1142 | <_>
1143 |
1144 | 5 8 7 4
1145 | <_>
1146 |
1147 | 5 39 3 2
1148 | <_>
1149 |
1150 | 6 3 5 2
1151 | <_>
1152 |
1153 | 6 3 6 2
1154 | <_>
1155 |
1156 | 6 5 4 12
1157 | <_>
1158 |
1159 | 6 9 6 3
1160 | <_>
1161 |
1162 | 7 3 5 2
1163 | <_>
1164 |
1165 | 7 3 6 13
1166 | <_>
1167 |
1168 | 7 5 6 4
1169 | <_>
1170 |
1171 | 7 7 6 10
1172 | <_>
1173 |
1174 | 7 8 6 4
1175 | <_>
1176 |
1177 | 7 32 5 4
1178 | <_>
1179 |
1180 | 7 33 5 4
1181 | <_>
1182 |
1183 | 8 0 1 1
1184 | <_>
1185 |
1186 | 8 0 2 1
1187 | <_>
1188 |
1189 | 8 2 10 7
1190 | <_>
1191 |
1192 | 9 0 6 2
1193 | <_>
1194 |
1195 | 9 2 9 3
1196 | <_>
1197 |
1198 | 9 4 1 1
1199 | <_>
1200 |
1201 | 9 6 2 1
1202 | <_>
1203 |
1204 | 9 28 6 4
1205 | <_>
1206 |
1207 | 10 0 9 3
1208 | <_>
1209 |
1210 | 10 3 1 1
1211 | <_>
1212 |
1213 | 10 10 11 11
1214 | <_>
1215 |
1216 | 10 15 4 3
1217 | <_>
1218 |
1219 | 11 4 2 1
1220 | <_>
1221 |
1222 | 11 27 4 3
1223 | <_>
1224 |
1225 | 11 36 8 2
1226 | <_>
1227 |
1228 | 12 0 2 2
1229 | <_>
1230 |
1231 | 12 23 4 3
1232 | <_>
1233 |
1234 | 12 25 4 3
1235 | <_>
1236 |
1237 | 12 29 5 3
1238 | <_>
1239 |
1240 | 12 33 3 4
1241 | <_>
1242 |
1243 | 13 0 2 2
1244 | <_>
1245 |
1246 | 13 36 8 3
1247 | <_>
1248 |
1249 | 14 0 2 2
1250 | <_>
1251 |
1252 | 15 15 2 2
1253 | <_>
1254 |
1255 | 16 13 3 4
1256 | <_>
1257 |
1258 | 17 0 1 3
1259 | <_>
1260 |
1261 | 17 1 3 3
1262 | <_>
1263 |
1264 | 17 31 5 3
1265 | <_>
1266 |
1267 | 17 35 3 1
1268 | <_>
1269 |
1270 | 18 13 2 3
1271 | <_>
1272 |
1273 | 18 39 2 1
1274 | <_>
1275 |
1276 | 19 0 7 15
1277 | <_>
1278 |
1279 | 19 2 7 2
1280 | <_>
1281 |
1282 | 19 3 7 13
1283 | <_>
1284 |
1285 | 19 14 2 2
1286 | <_>
1287 |
1288 | 19 24 7 4
1289 | <_>
1290 |
1291 | 20 1 6 13
1292 | <_>
1293 |
1294 | 20 8 7 3
1295 | <_>
1296 |
1297 | 20 9 7 3
1298 | <_>
1299 |
1300 | 20 13 1 1
1301 | <_>
1302 |
1303 | 20 14 2 3
1304 | <_>
1305 |
1306 | 20 30 3 2
1307 | <_>
1308 |
1309 | 21 0 3 4
1310 | <_>
1311 |
1312 | 21 0 6 8
1313 | <_>
1314 |
1315 | 21 3 6 2
1316 | <_>
1317 |
1318 | 21 6 6 4
1319 | <_>
1320 |
1321 | 21 37 2 1
1322 | <_>
1323 |
1324 | 22 3 6 2
1325 | <_>
1326 |
1327 | 22 13 1 2
1328 | <_>
1329 |
1330 | 22 22 4 3
1331 | <_>
1332 |
1333 | 23 0 2 3
1334 | <_>
1335 |
1336 | 23 3 6 2
1337 | <_>
1338 |
1339 | 23 9 5 4
1340 | <_>
1341 |
1342 | 23 11 1 1
1343 | <_>
1344 |
1345 | 23 15 1 1
1346 | <_>
1347 |
1348 | 23 16 3 2
1349 | <_>
1350 |
1351 | 23 35 2 1
1352 | <_>
1353 |
1354 | 23 36 1 1
1355 | <_>
1356 |
1357 | 23 39 6 2
1358 | <_>
1359 |
1360 | 24 0 2 3
1361 | <_>
1362 |
1363 | 24 8 6 11
1364 | <_>
1365 |
1366 | 24 28 2 2
1367 | <_>
1368 |
1369 | 24 33 4 4
1370 | <_>
1371 |
1372 | 25 16 4 3
1373 | <_>
1374 |
1375 | 25 31 5 3
1376 | <_>
1377 |
1378 | 26 0 1 2
1379 | <_>
1380 |
1381 | 26 0 2 2
1382 | <_>
1383 |
1384 | 26 0 3 2
1385 | <_>
1386 |
1387 | 26 24 4 4
1388 | <_>
1389 |
1390 | 27 30 4 5
1391 | <_>
1392 |
1393 | 27 36 5 3
1394 | <_>
1395 |
1396 | 28 0 2 2
1397 | <_>
1398 |
1399 | 28 4 2 1
1400 | <_>
1401 |
1402 | 28 21 2 5
1403 | <_>
1404 |
1405 | 29 8 2 1
1406 | <_>
1407 |
1408 | 33 0 2 1
1409 | <_>
1410 |
1411 | 33 0 4 2
1412 | <_>
1413 |
1414 | 33 0 4 6
1415 | <_>
1416 |
1417 | 33 3 1 1
1418 | <_>
1419 |
1420 | 33 6 4 12
1421 | <_>
1422 |
1423 | 33 21 4 2
1424 | <_>
1425 |
1426 | 33 36 4 3
1427 | <_>
1428 |
1429 | 35 1 2 2
1430 | <_>
1431 |
1432 | 36 5 1 1
1433 | <_>
1434 |
1435 | 36 29 3 4
1436 | <_>
1437 |
1438 | 36 39 2 2
1439 | <_>
1440 |
1441 | 37 5 2 2
1442 | <_>
1443 |
1444 | 38 6 2 1
1445 | <_>
1446 |
1447 | 38 6 2 2
1448 | <_>
1449 |
1450 | 39 1 2 12
1451 | <_>
1452 |
1453 | 39 24 1 2
1454 | <_>
1455 |
1456 | 39 36 2 2
1457 | <_>
1458 |
1459 | 40 39 1 2
1460 | <_>
1461 |
1462 | 42 4 1 1
1463 | <_>
1464 |
1465 | 42 20 1 2
1466 | <_>
1467 |
1468 | 42 29 1 2
1469 |
1470 |
--------------------------------------------------------------------------------
/models/model-b66.pkl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Shaw-git/Lightweight-Head-Pose-Estimation/aed1802a34cccdd7e35beb788517cc6d01319d5c/models/model-b66.pkl
--------------------------------------------------------------------------------
/network/__pycache__/network.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Shaw-git/Lightweight-Head-Pose-Estimation/aed1802a34cccdd7e35beb788517cc6d01319d5c/network/__pycache__/network.cpython-38.pyc
--------------------------------------------------------------------------------
/network/network.py:
--------------------------------------------------------------------------------
1 | import torch.nn as nn
2 | import torch.nn.functional as F
3 | import torch
4 | import math
5 |
6 | def conv_bn(inp, oup, stride, conv_layer=nn.Conv2d, norm_layer=nn.BatchNorm2d, nlin_layer=nn.ReLU):
7 |
8 | return nn.Sequential(
9 | conv_layer(inp, oup, 3, stride, 1, bias=False),
10 | norm_layer(oup),
11 | nlin_layer(inplace=True)
12 | )
13 |
14 | def adjust_feature_order(a, groups=2):  # channel-shuffle permutation: indices grouped by (index % groups)
15 | idx = []
16 | for i in range(groups):
17 | for k in range(a):
18 | if k % groups == i:
19 | idx.append(k)
20 | return idx
21 |
22 | class h_sigmoid(nn.Module):
23 | def __init__(self, inplace=True):
24 | super(h_sigmoid, self).__init__()
25 | self.relu = nn.ReLU6(inplace=inplace)
26 |
27 | def forward(self, x):
28 | return self.relu(x + 3) / 6
29 |
30 | class h_swish(nn.Module):
31 | def __init__(self, inplace=True):
32 | super(h_swish, self).__init__()
33 | self.sigmoid = h_sigmoid(inplace=inplace)
34 | def forward(self, x):
35 | return x * self.sigmoid(x)
36 |
37 | class Bottleneck(nn.Module):
38 | def __init__(self, in_channels, out_channels, kernel, stride, exp_rate=2 ,activation=nn.ReLU, residual=False, groups=1, sort = False):
39 | super(Bottleneck, self).__init__()
40 | assert stride in [1, 2]
41 | assert kernel in [3, 5]
42 | exp_channels = int(in_channels * exp_rate)
43 | self.residual=residual
44 | self.sort = sort
45 | self.new_order = adjust_feature_order(exp_channels)
46 | padding = (kernel - 1) // 2
47 | conv_layer = nn.Conv2d
48 | norm_layer = nn.BatchNorm2d
49 | nlin_layer = activation
50 | # expand-linear
51 | self.exp_conv = nn.Sequential(conv_layer(in_channels, exp_channels, 1, 1, 0, bias=False, groups=groups),
52 | norm_layer(exp_channels),
53 | nlin_layer(inplace=True))
54 | # dw-linear
55 | self.dw_conv=nn.Sequential(conv_layer(exp_channels, exp_channels, kernel, stride, padding, groups=exp_channels, bias=False),
56 | norm_layer(exp_channels),
57 | nlin_layer(inplace=True))
58 | # pw-linear
59 | self.pw_conv =nn.Sequential(conv_layer(exp_channels, out_channels, 1, 1, 0, bias=False, groups=1),
60 | norm_layer(out_channels),
61 | nlin_layer(inplace=True))
62 |
63 | def forward(self, x):
64 | y = self.exp_conv(x)
65 | y = self.dw_conv(y)
66 | if self.sort:
67 | y = y[:, self.new_order]
68 | y = self.pw_conv(y)
69 | if self.residual:
70 | y = x + y
71 | return y
72 |
73 | class Network(nn.Module):
74 | def __init__(self, num_bins=66, M=99, cuda = False , bin_train= False, base=16, width_mult=1):
75 | super(Network, self).__init__()
76 | self.bin_train=bin_train
77 | self.M=M
78 | self.num_bins = num_bins
79 | input_shape =(1 , 3 , 224 , 224)
80 | multiplier = [1, 2, 4, 6, 6, 8, 8, 8]
81 | kernel = [3, 3, 3, 5, 5, 5, 3, 0]
82 | stride = [2, 2, 1, 2, 1, 2, 1, 0]
83 |
84 | bandwidth = [ base * m for m in multiplier]
85 | for i in range(3, len(bandwidth)):
86 | bandwidth[i] = int(bandwidth[i] * width_mult)
87 |
88 | self.features=[]
89 | self.features.append(conv_bn(3, bandwidth[0], 2))
90 | for i in range(len(bandwidth)-1):
91 | groups = 4
92 | sort = False
93 | activation = nn.ReLU if i < 3 else h_swish
94 | self.features.append(Bottleneck(bandwidth[i], bandwidth[i+1],kernel=kernel[i],exp_rate=2,stride=stride[i], groups=groups, sort=sort, activation=activation))
95 | self.features.append(nn.AdaptiveAvgPool2d((1, 1)))
96 | self.features.append(nn.ReLU())
97 | self.features = nn.Sequential(*self.features)
98 |
99 | self.feature_size = self.forward_feature(torch.zeros(*input_shape)).view(-1).size(0)
100 | self.fc_yaw = nn.Linear(self.feature_size, num_bins)
101 | self.fc_pitch = nn.Linear(self.feature_size, num_bins)
102 | self.fc_roll = nn.Linear(self.feature_size, num_bins)
103 | self.softmax = nn.Softmax(dim=1)
104 |
105 | self.idx_tensor = torch.FloatTensor([idx for idx in range(num_bins)])
106 |
107 | if cuda:
108 | self.idx_tensor = self.idx_tensor.cuda()
109 | self.bins = self.gen_bins()
110 |
111 | self._initialize_weights()
112 |
113 | def gen_bins(self):
114 |         return self.idx_tensor * 2 * self.M / self.num_bins + self.M / self.num_bins - self.M  # bin centers in degrees, evenly spaced over (-M, M)
115 |
116 | def _initialize_weights(self):
117 | for m in self.modules():
118 | if isinstance(m, nn.Conv2d):
119 | n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
120 | m.weight.data.normal_(0, math.sqrt(2. / n))
121 | if m.bias is not None:
122 | m.bias.data.zero_()
123 | elif isinstance(m, nn.BatchNorm2d):
124 | m.weight.data.fill_(1)
125 | m.bias.data.zero_()
126 | elif isinstance(m, nn.Linear):
127 | n = m.weight.size(1)
128 | m.weight.data.normal_(0, 0.01)
129 | m.bias.data.zero_()
130 |
131 | def forward_feature(self, x):
132 | x = self.features(x)
133 | return x.view(x.size()[0], -1)
134 |
135 | def forward(self, x):
136 | x = self.forward_feature(x)
137 | pre_roll = self.fc_roll(x)
138 | pre_yaw = self.fc_yaw(x)
139 | pre_pitch = self.fc_pitch(x)
140 | if self.bin_train:
141 | return pre_roll, pre_yaw, pre_pitch
142 | else:
143 | roll = torch.sum(self.softmax(pre_roll) * self.bins, 1)
144 | yaw = torch.sum(self.softmax(pre_yaw) * self.bins, 1)
145 | pitch = torch.sum(self.softmax(pre_pitch) * self.bins, 1)
146 | return roll, yaw, pitch
147 |
148 | if __name__ == '__main__':
149 | width_mult =1
150 | net = Network(cuda=False, bin_train=False, num_bins=66)
151 | for name, p in net.named_parameters():
152 | print(name,p.shape)
153 | print(sum([p.numel() for p in net.parameters()]))
154 |
155 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | torch>=1.5.0
2 | opencv-python>=4.2.1
3 | numpy
4 |
5 |
--------------------------------------------------------------------------------
/results/result.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Shaw-git/Lightweight-Head-Pose-Estimation/aed1802a34cccdd7e35beb788517cc6d01319d5c/results/result.gif
--------------------------------------------------------------------------------
/test_network.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 | import torch
4 | from network.network import Network
5 | from torchvision import transforms
6 | from PIL import Image
7 | from utils import load_snapshot
8 | from utils.camera_normalize import drawAxis
9 | import time
10 | import argparse
11 |
12 | def parse_option():
13 | parser = argparse.ArgumentParser('Please set input path and output path', add_help=False)
14 |     parser.add_argument('--input', type=str, default="0", help="camera index (0-9) or path of the input video stream")
15 |     parser.add_argument('--output', type=str, default="", help="where to save the result video")
16 | args = parser.parse_args()
17 |
18 | return args
19 |
20 |
21 | def scale_bbox(bbox, scale):
22 | w = max(bbox[2], bbox[3]) * scale
23 | x= max(bbox[0] + bbox[2]/2 - w/2,0)
24 | y= max(bbox[1] + bbox[3]/2 - w/2,0)
25 | return np.asarray([x,y,w,w],np.int64)
26 |
27 | def main():
28 | args = parse_option()
29 | if len(args.input)==1:
30 | if ord('0')<=ord(args.input) and ord(args.input)<=ord('9'):
31 | cap = cv2.VideoCapture(int(args.input))
32 | else:
33 | print("invalid input path")
34 | exit()
35 | else:
36 | cap = cv2.VideoCapture(args.input)
37 |
38 |
39 | outstream = None
40 | if args.output != "":
41 | frame_size = [int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
42 | int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))]
43 | outstream = cv2.VideoWriter(args.output,
44 | cv2.VideoWriter_fourcc(*'MJPG'),
45 | 25, frame_size)
46 |
47 | face_cascade = cv2.CascadeClassifier('lbpcascade_frontalface_improved.xml')
48 | pose_estimator = Network(bin_train=False)
49 | load_snapshot(pose_estimator,"./models/model-b66.pkl")
50 | pose_estimator = pose_estimator.eval()
51 |
52 | transform_test = transforms.Compose([transforms.CenterCrop(224),
53 | transforms.ToTensor(),
54 | transforms.Normalize(mean=[0.485, 0.456, 0.406],
55 | std=[0.229, 0.224, 0.225])])
56 | count = 0
57 | last_faces = None
58 | while True:
59 |
60 | ret, frame = cap.read()
61 | if not ret:
62 | break
63 | if count % 5 == 0:
64 | gray_img = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
65 | faces = face_cascade.detectMultiScale(gray_img, 1.2)
66 | if len(faces)==0 and (last_faces is not None):
67 | faces=last_faces
68 | last_faces = faces
69 |
70 | face_images = []
71 | face_tensors = []
72 | for i, bbox in enumerate(faces):
73 | x,y, w,h = scale_bbox(bbox,1.5)
74 | frame = cv2.rectangle(frame,(x,y), (x+w, y+h),color=(0,0,255),thickness=2)
75 | face_img = frame[y:y+h,x:x+w]
76 | face_images.append(face_img)
77 | pil_img = Image.fromarray(cv2.cvtColor(cv2.resize(face_img,(224,224)), cv2.COLOR_BGR2RGB))
78 | face_tensors.append(transform_test(pil_img)[None])
79 |
80 | if len(face_tensors)>0:
81 | with torch.no_grad():
82 | start = time.time()
83 | face_tensors = torch.cat(face_tensors,dim=0)
84 | roll, yaw, pitch = pose_estimator(face_tensors)
85 | print("inference time: %.3f ms/face"%((time.time()-start)/len(roll)*1000))
86 | for img, r,y,p in zip(face_images, roll,yaw,pitch):
87 | headpose = [r,y,p]
88 | drawAxis(img, headpose,size=50)
89 |
90 | cv2.imshow("Result", frame)
91 | if outstream is not None:
92 | outstream.write(frame)
93 |
94 | key = cv2.waitKey(1)
95 | if key==27 or key == ord("q"):
96 | break
97 | count+=1
98 |     if outstream is not None:
99 |         outstream.release()
100 |     cap.release()
101 |     cv2.destroyAllWindows()
102 |
103 |
104 | if __name__ == '__main__':
105 |     main()
--------------------------------------------------------------------------------
/test_network_mtcnn.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 | import torch
4 | from network.network import Network
5 | from torchvision import transforms
6 | from PIL import Image
7 | from utils import load_snapshot
8 | from utils.camera_normalize import drawAxis
9 | import time
10 | import argparse
11 | import mtcnn
12 |
13 | def parse_option():
14 | parser = argparse.ArgumentParser('Please set input path and output path', add_help=False)
15 |     parser.add_argument('--input', type=str, default="0", help="camera index (0-9) or path of the input video stream")
16 |     parser.add_argument('--output', type=str, default="", help="where to save the result video")
17 | args = parser.parse_args()
18 |
19 | return args
20 |
21 |
22 | def scale_bbox(bbox, scale):
23 | w = max(bbox[2], bbox[3]) * scale
24 | x= max(bbox[0] + bbox[2]/2 - w/2,0)
25 | y= max(bbox[1] + bbox[3]/2 - w/2,0)
26 | return np.asarray([x,y,w,w],np.int64)
27 |
28 | def main():
29 | args = parse_option()
30 | if len(args.input)==1:
31 | if ord('0')<=ord(args.input) and ord(args.input)<=ord('9'):
32 | cap = cv2.VideoCapture(int(args.input))
33 | else:
34 | print("invalid input path")
35 | exit()
36 | else:
37 | cap = cv2.VideoCapture(args.input)
38 |
39 |
40 | outstream = None
41 | if args.output != "":
42 | frame_size = [int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
43 | int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))]
44 | outstream = cv2.VideoWriter(args.output,
45 | cv2.VideoWriter_fourcc(*'MJPG'),
46 | 25, frame_size)
47 |
48 |     detector = mtcnn.MTCNN()
49 | # face_cascade = cv2.CascadeClassifier('lbpcascade_frontalface_improved.xml')
50 | pose_estimator = Network(bin_train=False)
51 | load_snapshot(pose_estimator,"./models/model-b66.pkl")
52 | pose_estimator = pose_estimator.eval()
53 |
54 | transform_test = transforms.Compose([transforms.CenterCrop(224),
55 | transforms.ToTensor(),
56 | transforms.Normalize(mean=[0.485, 0.456, 0.406],
57 | std=[0.229, 0.224, 0.225])])
58 | count = 0
59 | last_faces = None
60 | while True:
61 |
62 | ret, frame = cap.read()
63 | if not ret:
64 | break
65 | if count % 5 == 0:
66 | img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
67 | faces = detector.detect_faces(img)
68 | if len(faces)==0 and (last_faces is not None):
69 | faces=last_faces
70 | last_faces = faces
71 |
72 | face_images = []
73 | face_tensors = []
74 | for i, items in enumerate(faces):
75 | if items["confidence"]<0.7:
76 | continue
77 | bbox = items["box"]
78 |
79 | x,y, w,h = scale_bbox(bbox,1.2)
80 | frame = cv2.rectangle(frame,(x,y), (x+w, y+h),color=(0,0,255),thickness=2)
81 | face_img = frame[y:y+h,x:x+w]
82 | face_images.append(face_img)
83 | pil_img = Image.fromarray(cv2.cvtColor(cv2.resize(face_img,(224,224)), cv2.COLOR_BGR2RGB))
84 | face_tensors.append(transform_test(pil_img)[None])
85 |
86 | if len(face_tensors)>0:
87 | with torch.no_grad():
88 | start = time.time()
89 | face_tensors = torch.cat(face_tensors,dim=0)
90 | roll, yaw, pitch = pose_estimator(face_tensors)
91 | print("inference time: %.3f ms/face"%((time.time()-start)/len(roll)*1000))
92 | for img, r,y,p in zip(face_images, roll,yaw,pitch):
93 | headpose = [r,y,p]
94 | drawAxis(img, headpose,size=50)
95 |
96 | cv2.imshow("Result", frame)
97 | if outstream is not None:
98 | outstream.write(frame)
99 |
100 | key = cv2.waitKey(1)
101 | if key==27 or key == ord("q"):
102 | break
103 | count+=1
104 |     if outstream is not None:
105 |         outstream.release()
106 |     cap.release()
107 |     cv2.destroyAllWindows()
108 |
109 |
110 | if __name__ == '__main__':
111 |     main()
--------------------------------------------------------------------------------
/utils/__init__.py:
--------------------------------------------------------------------------------
1 | from .utils import *
2 | from .camera_normalize import *
3 | from .coordinate_transform import *
4 |
--------------------------------------------------------------------------------
/utils/__pycache__/__init__.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Shaw-git/Lightweight-Head-Pose-Estimation/aed1802a34cccdd7e35beb788517cc6d01319d5c/utils/__pycache__/__init__.cpython-38.pyc
--------------------------------------------------------------------------------
/utils/__pycache__/camera_normalize.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Shaw-git/Lightweight-Head-Pose-Estimation/aed1802a34cccdd7e35beb788517cc6d01319d5c/utils/__pycache__/camera_normalize.cpython-38.pyc
--------------------------------------------------------------------------------
/utils/__pycache__/coordinate_transform.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Shaw-git/Lightweight-Head-Pose-Estimation/aed1802a34cccdd7e35beb788517cc6d01319d5c/utils/__pycache__/coordinate_transform.cpython-38.pyc
--------------------------------------------------------------------------------
/utils/__pycache__/tools.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Shaw-git/Lightweight-Head-Pose-Estimation/aed1802a34cccdd7e35beb788517cc6d01319d5c/utils/__pycache__/tools.cpython-38.pyc
--------------------------------------------------------------------------------
/utils/__pycache__/utils.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Shaw-git/Lightweight-Head-Pose-Estimation/aed1802a34cccdd7e35beb788517cc6d01319d5c/utils/__pycache__/utils.cpython-38.pyc
--------------------------------------------------------------------------------
/utils/camera_normalize.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 | import math
4 |
5 | def drawAcross(color, x, y, size=20,pigment=(255, 255, 255)):
6 | cv2.line(color, (int(x - size), int(y)), (int(x + size), int(y)), pigment, 2)
7 | cv2.line(color, (int(x), int(y - size)), (int(x), int(y + size)), pigment, 2)
8 |
9 | def normalize_headpose(headpose,intrinsic,center):
10 | Rt = FromPixelToRotationMatrix(center, intrinsic)
11 | HeadToCr = EulerToMatrix(-headpose[0], -headpose[1], -headpose[2]) # inverse(RzRyRx)= R(-x)R(-y)R(-z)
12 | CrToCn = Rt
13 | HeadToCn = np.matmul(CrToCn, HeadToCr)
14 | headpose = -1 * MatrixToEuler(HeadToCn)
15 | return headpose
16 |
17 | def anti_normalize_headpose(headpose,intrinsic,center):
18 | Rt = FromPixelToRotationMatrix(center, intrinsic)
19 | HeadToCr = EulerToMatrix(-headpose[0], -headpose[1], -headpose[2]) # inverse(RzRyRx)= R(-x)R(-y)R(-z)
20 | # CrToCn = Rt
21 | CnToCr = np.mat(Rt).I
22 | HeadToCn = np.matmul(CnToCr, HeadToCr)
23 | headpose = -1 * MatrixToEuler(HeadToCn)
24 | return headpose
25 |
26 |
27 | def normalize_landmarks(landmarks,intrinsic,center):
28 | Rt = FromPixelToRotationMatrix(center, intrinsic)
29 | new_marks = []
30 | for p in landmarks:
31 | p = From_src_To_dst(p, Rt, intrinsic)
32 | new_marks.append(p)
33 | return new_marks
34 |
35 |
36 | def normalize_glabel(glabel,intrinsic, center):
37 |
38 | Rt = FromPixelToRotationMatrix(center, intrinsic)
39 | CoToCn = Rt
40 | normalized_glabel = np.array(np.matmul(CoToCn, glabel))
41 | normalized_glabel = np.reshape(normalized_glabel,-1)
42 | return normalized_glabel
43 |
44 | def anti_normalize_glabel(glabel, intrinsic, center):
45 |
46 | Rt = FromPixelToRotationMatrix(center, intrinsic)
47 | #CoToCn = Rt
48 | CnToCo = np.mat(Rt).I
49 | normalized_glabel = np.array(np.matmul(CnToCo, glabel))
50 | normalized_glabel = np.reshape(normalized_glabel, -1)
51 | return normalized_glabel
52 |
53 | def warpFace(image, headpose=[],landmark=[],intrinsic=[],center=[]):
54 |
55 | if len(center)==0:
56 | center=np.mean(landmark,axis=0)
57 |
58 | Rt = FromPixelToRotationMatrix(center, intrinsic)
59 | src, dst = generate_src_dst(Rt, intrinsic)
60 | Mat = cv2.getPerspectiveTransform(src, dst)
61 | image = cv2.warpPerspective(image, Mat, dsize=(image.shape[1], image.shape[0]))
62 | new_mark = []
63 |
64 | if len(landmark)!=0:
65 | new_mark=[]
66 | for p in landmark:
67 | p = From_src_To_dst(p, Rt, intrinsic)
68 | new_mark.append(p)
69 |
70 | if len(headpose)!=0:
71 | HeadToCr=EulerToMatrix(-headpose[0],-headpose[1],-headpose[2]) # inverse(RzRyRx)= R(-x)R(-y)R(-z)
72 | CrToCn=Rt
73 | HeadToCn=np.matmul(CrToCn,HeadToCr)
74 | headpose=-1*MatrixToEuler(HeadToCn)
75 |
76 | return image,np.asarray(headpose),np.asarray(new_mark)
77 |
78 | def generate_src_dst(RM,intrinsic):
79 |
80 | src = [[100, 100], [100, 200], [200, 100], [200, 200]]
81 |     CT=[[1,0,0],[0,-1,0],[0,0,-1]] # Coordinate Transform
82 | dst = []
83 | Cr_inverse=np.mat(intrinsic).I
84 | Cn=intrinsic
85 | for d in src:
86 | d=[d[0],d[1],1]
87 | d=np.array(np.matmul(Cr_inverse,d)).reshape(3)
88 | d = np.matmul(CT, d)
89 | d= np.matmul(RM,d)
90 | d = np.matmul(CT, d)
91 | d=np.array(np.matmul(Cn,d)).reshape(3)
92 | d=d/d[2]
93 | dst.append(d[0:2])
94 | return np.array(src,dtype=np.float32),np.array(dst,dtype=np.float32)
95 |
96 | def From_src_To_dst(p,Rt,intrinsic):
97 |
98 |     CT=[[1,0,0],[0,-1,0],[0,0,-1]] # Coordinate Transform
99 | Cr_inverse=np.mat(intrinsic).I
100 | Cn=intrinsic
101 | d=p
102 | d=[d[0],d[1],1]
103 | d=np.array(np.matmul(Cr_inverse,d)).reshape(3)
104 | d = np.matmul(CT, d)
105 | d= np.matmul(Rt,d)
106 | d = np.matmul(CT, d)
107 | d=np.array(np.matmul(Cn,d)).reshape(3)
108 | d=d/d[2]
109 | return [d[0],d[1]]
110 |
111 | def FromPixelToRotationMatrix(center, intrinsic):
112 |
113 | def RotationToMatrix(axis, angle):
114 | axis = axis / np.sqrt(np.sum(np.power(axis, 2), axis=0))
115 | a = axis[0]
116 | b = axis[1]
117 |         c = axis[2]
118 | angle = -angle
119 |
120 | M = [
121 | [a ** 2 + (1 - a ** 2) * np.cos(angle), a * b * (1 - np.cos(angle)) + c * np.sin(angle),
122 | a * c * (1 - np.cos(angle)) - b * np.sin(angle)],
123 | [a * b * (1 - np.cos(angle)) - c * np.sin(angle), b ** 2 + (1 - b ** 2) * np.cos(angle),
124 | b * c * (1 - np.cos(angle)) + a * np.sin(angle)],
125 | [a * c * (1 - np.cos(angle)) + b * np.sin(angle), b * c * (1 - np.cos(angle)) - a * np.sin(angle),
126 | c ** 2 + (1 - c ** 2) * np.cos(angle)]
127 | ]
128 |
129 | return np.array(M)
130 |
131 | px = center[0] - intrinsic[0, 2]
132 | py = center[1] - intrinsic[1, 2]
133 | horizon = px / intrinsic[0, 0]
134 | vertical = py / intrinsic[1, 1]
135 | Vector = np.array([horizon, -vertical, -1])
136 | Vector = Vector / np.sqrt(np.sum(np.power(Vector, 2), axis=0))
137 | zAxis = [0, 0, -1]
138 | rotate_axis = np.cross(Vector, zAxis)
139 | rotate_axis = rotate_axis / np.sqrt(np.sum(np.power(rotate_axis, 2), axis=0))
140 |     rotate_angle = np.sum(Vector * zAxis)
141 |     rotate_angle = np.arccos(rotate_angle)
142 |
143 |     return RotationToMatrix(rotate_axis, rotate_angle)
144 |
145 | def drawAxis(img, headpose, landmarks = None, size=100):
146 | roll, yaw, pitch = headpose[0], headpose[1], headpose[2]
147 |     if landmarks is not None:
148 | tdx=np.mean(landmarks[42:48],axis=0)[0]
149 | tdy=np.mean(landmarks[42:48],axis=0)[1]
150 | else:
151 | tdx = img.shape[1]/2
152 | tdy = img.shape[0]/2
153 |
154 | matrix = EulerToMatrix(-roll, -yaw, -pitch)
155 |
156 | Xaxis = np.array([matrix[0, 0], matrix[1, 0], matrix[2, 0]]) * size
157 | Yaxis = np.array([matrix[0, 1], matrix[1, 1], matrix[2, 1]]) * size
158 | Zaxis = np.array([matrix[0, 2], matrix[1, 2], matrix[2, 2]]) * size
159 |
160 | cv2.line(img, (int(tdx), int(tdy)), (int(Xaxis[0]+tdx), int(-Xaxis[1]+tdy)), (0, 0, 255), 3)
161 | cv2.line(img, (int(tdx), int(tdy)), (int(-Yaxis[0]+tdx), int(Yaxis[1]+tdy)), (0, 255, 0), 3)
162 | cv2.line(img, (int(tdx), int(tdy)), (int(Zaxis[0]+tdx), int(-Zaxis[1]+tdy)), (255, 0, 0), 2)
163 |
164 | return img
165 |
166 | def draw_axis_from_mat(img,matrix, landmark=None , center=None, size=80):
167 |
168 |
169 |     if landmark is not None:
170 | tdx = np.mean(landmark[42:48], axis=0)[0]
171 | tdy = np.mean(landmark[42:48], axis=0)[1]
172 |
173 |     if center is not None:
174 | tdx = center[0]
175 | tdy = center[1]
176 |
177 |     if landmark is None and center is None:
178 | tdx = int(img.shape[1] / 2)
179 | tdy = int(img.shape[0] / 2)
180 |
181 | Xaxis = np.array([matrix[0, 0], matrix[1, 0], matrix[2, 0]]) * size
182 | Yaxis = np.array([matrix[0, 1], matrix[1, 1], matrix[2, 1]]) * size
183 | Zaxis = np.array([matrix[0, 2], matrix[1, 2], matrix[2, 2]]) * size
184 |
185 | # A matrix to transform 3d point from OpenGL camera coordinate to Opencv camera coordinate
186 | m=[[1, 0, 0],
187 | [0, -1, 0],
188 | [0, 0, -1]]
189 |
190 | Xaxis = np.matmul(m, Xaxis)
191 | Yaxis = np.matmul(m, Yaxis)
192 | Zaxis = np.matmul(m, Zaxis)
193 |
194 | cv2.line(img, (int(tdx), int(tdy)), (int(Xaxis[0] + tdx), int(Xaxis[1] + tdy)), (0, 0, 255), 3)
195 | cv2.line(img, (int(tdx), int(tdy)), (int(Yaxis[0] + tdx), int(Yaxis[1] + tdy)), (0, 255, 0), 3)
196 | cv2.line(img, (int(tdx), int(tdy)), (int(Zaxis[0] + tdx), int(Zaxis[1] + tdy)), (255, 0, 0), 2)
197 |
198 | def drawAxis_test(img, roll, yaw, pitch, landmarks , size=100):
199 |
200 | tdx=np.mean(landmarks[42:48],axis=0)[0]
201 | tdy=np.mean(landmarks[42:48],axis=0)[1]
202 |
203 | matrix = np.linalg.inv(EulerToMatrix_XYZ(roll, yaw, pitch))
204 |
205 | Xaxis = np.array([matrix[0, 0], matrix[1, 0], matrix[2, 0]]) * size
206 | Yaxis = np.array([matrix[0, 1], matrix[1, 1], matrix[2, 1]]) * size
207 | Zaxis = np.array([matrix[0, 2], matrix[1, 2], matrix[2, 2]]) * size
208 |
209 |
210 | cv2.line(img, (int(tdx), int(tdy)), (int(Xaxis[0]+tdx), int(-Xaxis[1]+tdy)), (0, 0, 255), 3)
211 | cv2.line(img, (int(tdx), int(tdy)), (int(-Yaxis[0]+tdx), int(Yaxis[1]+tdy)), (0, 255, 0), 3)
212 | cv2.line(img, (int(tdx), int(tdy)), (int(Zaxis[0]+tdx), int(-Zaxis[1]+tdy)), (255, 0, 0), 2)
213 |
214 | return img
215 |
216 | def printWords(frame,roll,yaw,pitch,pos="left"):
217 | if pos=="left":
218 | frame = cv2.putText(frame, "roll: %f" % (roll), (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
219 | frame = cv2.putText(frame, "yaw: %f" % (yaw), (10, 70), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)
220 | frame = cv2.putText(frame, "pitch:%f" % (pitch), (10, 90), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
221 | elif pos=="right":
222 | dx=frame.shape[1]-200
223 | dy=50
224 | frame = cv2.putText(frame, "roll: %f" % (roll), (dx, dy), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
225 | dy+=30
226 | frame = cv2.putText(frame, "yaw: %f" % (yaw), (dx, dy), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)
227 | dy += 30
228 | frame = cv2.putText(frame, "pitch:%f" % (pitch), (dx, dy), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
229 | return frame
230 |
231 | def EulerToMatrix(roll, yaw, pitch):
232 | # roll - z axis
233 | # yaw - y axis
234 | # pitch - x axis
235 | roll = roll / 180 * np.pi
236 | yaw = yaw / 180 * np.pi
237 | pitch = pitch / 180 * np.pi
238 |
239 | Rz = [[math.cos(roll), -math.sin(roll), 0],
240 | [math.sin(roll), math.cos(roll), 0],
241 | [0, 0, 1]]
242 |
243 | Ry = [[math.cos(yaw), 0, math.sin(yaw)],
244 | [0, 1, 0],
245 | [-math.sin(yaw), 0, math.cos(yaw)]]
246 |
247 | Rx = [[1, 0, 0],
248 | [0, math.cos(pitch), -math.sin(pitch)],
249 | [0, math.sin(pitch), math.cos(pitch)]]
250 |
251 | matrix = np.matmul(Rx, Ry)
252 | matrix = np.matmul(matrix, Rz)
253 |
254 | return matrix
255 |
256 | def EulerToMatrix_XYZ(roll, yaw, pitch):
257 | # roll - z axis
258 | # yaw - y axis
259 | # pitch - x axis
260 | roll = roll / 180 * np.pi
261 | yaw = yaw / 180 * np.pi
262 | pitch = pitch / 180 * np.pi
263 |
264 | Rz = [[math.cos(roll), -math.sin(roll), 0],
265 | [math.sin(roll), math.cos(roll), 0],
266 | [0, 0, 1]]
267 |
268 | Ry = [[math.cos(yaw), 0, math.sin(yaw)],
269 | [0, 1, 0],
270 | [-math.sin(yaw), 0, math.cos(yaw)]]
271 |
272 | Rx = [[1, 0, 0],
273 | [0, math.cos(pitch), -math.sin(pitch)],
274 | [0, math.sin(pitch), math.cos(pitch)]]
275 |
276 | matrix = np.matmul(Rz, Ry)
277 | matrix = np.matmul(matrix, Rx)
278 |
279 | return matrix
280 |
281 | def MatrixToEuler(M):
282 |
283 | yaw=math.asin(M[0,2])
284 | pitch = math.atan2(-M[1, 2], M[2, 2])
285 | roll =math.atan2(-M[0,1],M[0,0])
286 | return np.array([roll ,yaw ,pitch])*180/np.pi
287 |
288 | def MatrixToEuler_XYZ(M):
289 | M=np.mat(M).I
290 | return MatrixToEuler(M)*-1
291 |
292 |
293 | def HeadToCam_Matrix(roll, yaw, pitch):
294 |
295 | # roll - z axis
296 | # yaw - y axis
297 | # pitch - x axis
298 | #
299 | roll = -roll / 180 * np.pi
300 | yaw = -yaw / 180 * np.pi
301 | pitch = -pitch / 180 * np.pi
302 |
303 | Rz = [[math.cos(roll), -math.sin(roll), 0],
304 | [math.sin(roll), math.cos(roll), 0],
305 | [0, 0, 1]]
306 |
307 | Ry = [[math.cos(yaw), 0, math.sin(yaw)],
308 | [0, 1, 0],
309 | [-math.sin(yaw), 0, math.cos(yaw)]]
310 |
311 | Rx = [[1, 0, 0],
312 | [0, math.cos(pitch), -math.sin(pitch)],
313 | [0, math.sin(pitch), math.cos(pitch)]]
314 |
315 | matrix = np.matmul(Rx, Ry)
316 | matrix = np.matmul(matrix, Rz)
317 |
318 |
319 | return matrix
320 |
321 | def CamToHead_Matrix(roll, yaw, pitch):
322 |
323 | roll = roll / 180 * np.pi
324 | yaw = yaw / 180 * np.pi
325 | pitch = pitch / 180 * np.pi
326 |
327 | Rz = [[math.cos(roll), -math.sin(roll), 0],
328 | [math.sin(roll), math.cos(roll), 0],
329 | [0, 0, 1]]
330 |
331 | Ry = [[math.cos(yaw), 0, math.sin(yaw)],
332 | [0, 1, 0],
333 | [-math.sin(yaw), 0, math.cos(yaw)]]
334 |
335 | Rx = [[1, 0, 0],
336 | [0, math.cos(pitch), -math.sin(pitch)],
337 | [0, math.sin(pitch), math.cos(pitch)]]
338 |
339 | matrix = np.matmul(Rz, Ry)
340 | matrix = np.matmul(matrix, Rx)
341 |
342 | return matrix
--------------------------------------------------------------------------------
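The two matrix conventions above (`R = Rz·Ry·Rx` in `EulerToMatrix_XYZ`, `R = Rx·Ry·Rz` in the decomposition used by `MatrixToEuler`) can be sanity-checked with a round trip. A minimal standalone sketch, reimplementing the same math under the same degree conventions (the function names here are local to the sketch, not the module's):

```python
import math
import numpy as np

def euler_to_matrix_xyz(roll, yaw, pitch):
    # mirrors EulerToMatrix_XYZ: R = Rz(roll) @ Ry(yaw) @ Rx(pitch), angles in degrees
    r, y, p = np.radians([roll, yaw, pitch])
    Rz = np.array([[math.cos(r), -math.sin(r), 0],
                   [math.sin(r),  math.cos(r), 0],
                   [0, 0, 1]])
    Ry = np.array([[math.cos(y), 0, math.sin(y)],
                   [0, 1, 0],
                   [-math.sin(y), 0, math.cos(y)]])
    Rx = np.array([[1, 0, 0],
                   [0, math.cos(p), -math.sin(p)],
                   [0, math.sin(p),  math.cos(p)]])
    return Rz @ Ry @ Rx

def matrix_to_euler_rxryrz(M):
    # mirrors MatrixToEuler: assumes M decomposes as Rx @ Ry @ Rz
    yaw = math.asin(M[0, 2])
    pitch = math.atan2(-M[1, 2], M[2, 2])
    roll = math.atan2(-M[0, 1], M[0, 0])
    return np.degrees([roll, yaw, pitch])

# MatrixToEuler_XYZ inverts the matrix and negates the result:
# inv(Rz(r)Ry(y)Rx(p)) = Rx(-p)Ry(-y)Rz(-r), which the Rx-Ry-Rz
# decomposition reads off directly.
angles = (10.0, 20.0, 30.0)  # roll, yaw, pitch in degrees
M = euler_to_matrix_xyz(*angles)
recovered = -1 * matrix_to_euler_rxryrz(np.linalg.inv(M))
print(np.allclose(recovered, angles))  # True (away from gimbal lock)
```

The round trip holds for any pose with |yaw| < 90°; at yaw = ±90° the decomposition degenerates, which is what the `matrix_to_euler` special cases in `tools.py` handle.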
/utils/coordinate_transform.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import torch
3 | from . import tools
4 | from math import sin, cos, atan2
5 | def euler_to_geo_coordinate(euler_angle):
6 |     # rotation order: intrinsic x1-y2-z3, equivalently extrinsic z1-y2-x3 (roll, yaw, pitch)
7 | R = tools.EulerToMatrix(-euler_angle[0], -euler_angle[1], -euler_angle[2])
8 | z_axis = R[ :, 2]
9 | x_axis = R[ :, 0]
10 |
11 | latitude = np.arcsin(z_axis[1])
12 | longitude = atan2(z_axis[0],z_axis[2])
13 |
14 | norm_x=np.cross([0,1,0] ,[z_axis[0], 0 ,z_axis[2]])
15 | norm_x = norm_x / np.linalg.norm(norm_x, ord= 2)
16 | norm_y = np.cross(z_axis, norm_x)
17 | norm_y = norm_y / np.linalg.norm(norm_y, ord=2)
18 |
19 | x= np.dot(norm_x, x_axis)
20 | y = np.dot(norm_y, x_axis)
21 | roll = np.arctan2(y , x)
22 |
23 | coordinate = np.asarray([roll, longitude, latitude])/np.pi*180
24 | return coordinate
25 |
26 | def geo_coordinate_to_euler(coordinate):
27 | coordinate = np.asarray(coordinate)
28 | coordinate = coordinate/180*np.pi
29 | x = cos(coordinate[2]) *sin(coordinate[1])
30 | y = sin(coordinate[2])
31 | z = cos(coordinate[2]) *cos(coordinate[1])
32 | z_axis=[x,y, z]
33 | z_axis = z_axis / np.linalg.norm(z_axis, ord=2)
34 |
35 | norm_x = np.cross([0, 1, 0], [z_axis[0], 0, z_axis[2]])
36 | norm_x = norm_x / np.linalg.norm(norm_x, ord=2)
37 |
38 | norm_y = np.cross(z_axis, norm_x)
39 | norm_y = norm_y / np.linalg.norm(norm_y, ord=2)
40 |
41 | x_axis = cos(coordinate[0])*norm_x + sin(coordinate[0])*norm_y
42 | x_axis = x_axis/np.linalg.norm(x_axis ,2)
43 | y_axis = np.cross(z_axis, x_axis)
44 | R = np.asarray([x_axis, y_axis, z_axis]).transpose()
45 |
46 | return -1 * tools.MatrixToEuler(R)
47 |
48 | def train_geo_euler(pred, label):
49 | pred = np.asarray([ p.tolist() for p in pred]).transpose()
50 | label = np.asarray([ p.tolist() for p in label]).transpose()
51 | for i in range(len(pred)):
52 | pred[i] = geo_coordinate_to_euler(pred[i])
53 | label[i] = geo_coordinate_to_euler(label[i])
54 | pred = pred.transpose()
55 | label =label.transpose()
56 | return torch.FloatTensor(pred), torch.FloatTensor(label)
57 |
--------------------------------------------------------------------------------
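The geodesic parameterization above maps the head's z-axis to a (longitude, latitude) point on the unit sphere, plus a residual in-plane roll. The direction-to-angles part of the mapping is easy to verify in isolation; a small sketch of just that step (mirroring the formulas in `euler_to_geo_coordinate` / `geo_coordinate_to_euler`, with illustrative angle values):

```python
import numpy as np
from math import sin, cos, atan2, asin

# forward: (longitude, latitude) -> unit z-axis direction
lon, lat = np.radians(30.0), np.radians(20.0)
z_axis = np.array([cos(lat) * sin(lon), sin(lat), cos(lat) * cos(lon)])

# inverse: recover the angles from the direction (cf. euler_to_geo_coordinate)
lat2 = asin(z_axis[1])          # y component gives latitude
lon2 = atan2(z_axis[0], z_axis[2])  # x/z ratio gives longitude
print(np.degrees([lon2, lat2]))  # ≈ [30, 20]
```

The remaining roll component is then measured against the local tangent frame (`norm_x`, `norm_y`) built from the same z-axis, as in the module.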
/utils/detect.py:
--------------------------------------------------------------------------------
1 | def detect(image, model, bbox):
2 |     raise NotImplementedError  # stub: function body missing in the repository
--------------------------------------------------------------------------------
/utils/tools.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import math
3 | import cv2
4 |
5 | def EulerToMatrix(roll, yaw, pitch):
6 | # roll - z axis
7 | # yaw - y axis
8 | # pitch - x axis
9 | roll = roll / 180 * np.pi
10 | yaw = yaw / 180 * np.pi
11 | pitch = pitch / 180 * np.pi
12 |
13 | Rz = [[math.cos(roll), -math.sin(roll), 0],
14 | [math.sin(roll), math.cos(roll), 0],
15 | [0, 0, 1]]
16 |
17 | Ry = [[math.cos(yaw), 0, math.sin(yaw)],
18 | [0, 1, 0],
19 | [-math.sin(yaw), 0, math.cos(yaw)]]
20 |
21 | Rx = [[1, 0, 0],
22 | [0, math.cos(pitch), -math.sin(pitch)],
23 | [0, math.sin(pitch), math.cos(pitch)]]
24 |
25 | matrix = np.matmul(Rx, Ry)
26 | matrix = np.matmul(matrix, Rz)
27 |
28 | return matrix
29 |
30 | def cropFace(image,landmark,scale=1.2):
31 |
32 | center=np.mean(landmark,axis=0)
33 | size=(np.max(landmark,axis=0)[0]-np.min(landmark,axis=0)[0])*scale
34 |
35 | x0=center[0]-size/2
36 | x1=center[0]+size/2
37 | y0=center[1]-size/2
38 | y1=center[1]+size/2
39 |
40 |
41 | face=image[int(y0):int(y1),int(x0):int(x1)]
42 |
43 | return face
44 |
45 | def drawAxis(img, roll, yaw, pitch, landmarks=[] , size=100):
46 |
47 | if len(landmarks)>1:
48 | tdx=np.mean(landmarks[42:48],axis=0)[0]
49 | tdy=np.mean(landmarks[42:48],axis=0)[1]
50 | else:
51 | tdx=img.shape[1]/2
52 | tdy=img.shape[0]/2
53 |
54 |
55 | matrix = EulerToMatrix(-roll, -yaw, -pitch)
56 |
57 | Xaxis = np.array([matrix[0, 0], matrix[1, 0], matrix[2, 0]]) * size
58 | Yaxis = np.array([matrix[0, 1], matrix[1, 1], matrix[2, 1]]) * size
59 | Zaxis = np.array([matrix[0, 2], matrix[1, 2], matrix[2, 2]]) * size
60 |
61 |
62 | cv2.line(img, (int(tdx), int(tdy)), (int(Xaxis[0]+tdx), int(-Xaxis[1]+tdy)), (0, 0, 255), 3)
63 | cv2.line(img, (int(tdx), int(tdy)), (int(-Yaxis[0]+tdx), int(Yaxis[1]+tdy)), (0, 255, 0), 3)
64 | cv2.line(img, (int(tdx), int(tdy)), (int(Zaxis[0]+tdx), int(-Zaxis[1]+tdy)), (255, 0, 0), 2)
65 |
66 | return img
67 |
68 | def pose_rotate(headpose,angle):
69 |     # positive angle: anticlockwise image rotation
70 | HeadToCr = EulerToMatrix(-headpose[0], -headpose[1], -headpose[2]) # inverse(RzRyRx)= R(-x)R(-y)R(-z)
71 | CrToCn = EulerToMatrix(angle,0,0)
72 | HeadToCn = np.matmul(CrToCn, HeadToCr)
73 | headpose = -1 * MatrixToEuler(HeadToCn)
74 | return headpose
75 |
76 | def MatrixToEuler(M):
77 |
78 | yaw=math.asin(M[0,2])
79 | pitch = math.atan2(-M[1, 2], M[2, 2])
80 | roll =math.atan2(-M[0,1],M[0,0])
81 | return np.array([roll ,yaw ,pitch])*180/np.pi
82 |
83 | def matrix_to_euler(M):
84 |
85 | if not (M[0, 2 ] == 1 or M[0, 2] == -1):
86 | yaw = math.asin(M[0, 2])
87 | yaw2=np.pi - yaw
88 | yaw2 = yaw2 - 2*np.pi if yaw<0 else yaw2
89 |
90 | pitch = math.atan2(-M[1, 2], M[2, 2])
91 | roll = math.atan2(-M[0, 1], M[0, 0])
92 |
93 | cos_yaw=math.cos(yaw2)
94 | pitch2 = math.atan2(-M[1, 2]/cos_yaw, M[2, 2]/cos_yaw)
95 | roll2 = math.atan2(-M[0, 1]/cos_yaw, M[0, 0]/cos_yaw)
96 |
97 | return np.array([roll ,yaw ,pitch])*180/np.pi , np.array([roll2 ,yaw2 ,pitch2 ])*180/np.pi
98 | else:
99 |         if M[0, 2] == 1:
100 |             yaw = np.pi / 2
101 |             roll_plus_pitch = math.atan2(M[1,0], M[1,1])  # gimbal lock: only roll+pitch is determined
102 |             roll = roll_plus_pitch
103 |             pitch = 0
104 |         else:
105 |             yaw = -np.pi / 2
106 |             roll_minus_pitch = math.atan2(M[1,0], M[1,1])  # gimbal lock: only roll-pitch is determined
107 |             roll = roll_minus_pitch
108 |             pitch = 0
109 |         return np.array([roll ,yaw ,pitch])*180/np.pi, np.array([roll ,yaw ,pitch])*180/np.pi
110 |
111 |
112 | if __name__ == '__main__':
113 | import copy
114 | image = np.ones((224,224,3))
115 | image[10:20]=0
116 | headpose = [10, 80,10]
117 | image_1 = drawAxis(copy.copy(image), headpose[0], headpose[1], headpose[2])
118 | for i in range(1000):
119 | new_pose = pose_rotate(headpose,i)
120 | image_2 = drawAxis(copy.copy(image), new_pose[0], new_pose[1], new_pose[2])
121 | print(headpose, new_pose)
122 | cv2.imshow("image", image_1)
123 | cv2.imshow("image_2", image_2)
124 | cv2.waitKey(0)
125 |
126 | # R = EulerToMatrix(headpose[0], headpose[1], headpose[2])
127 | # print(pose_rotate(headpose, 10))
128 | # print(matrix_to_euler(R))
129 |
--------------------------------------------------------------------------------
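`cropFace` takes a square crop centered on the landmark centroid, sized by the horizontal landmark extent times `scale`. A standalone sketch of the same arithmetic (illustrative landmark values, and note that, like the original, it does not clip the box to the image bounds):

```python
import numpy as np

image = np.zeros((480, 640, 3), dtype=np.uint8)
landmark = np.array([[300, 200], [340, 200], [320, 240]], dtype=float)

center = landmark.mean(axis=0)                     # centroid of the landmarks
size = (landmark[:, 0].max() - landmark[:, 0].min()) * 1.2  # x-extent * scale
x0, y0 = center - size / 2                         # top-left corner
x1, y1 = center + size / 2                         # bottom-right corner
face = image[int(y0):int(y1), int(x0):int(x1)]     # square crop
print(face.shape)  # (48, 48, 3)
```

A negative `x0`/`y0` would silently wrap under NumPy slicing, so callers should ensure the landmarks sit far enough from the image border.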
/utils/utils.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import os
3 | import numpy as np
4 | import torch.nn as nn
5 | import torch.nn.functional as F
6 | # from matplotlib import pyplot as plt
7 | class softCrossEntropy(nn.Module):
8 | def __init__(self,weight=[0.1, 0.2, 0.5, 0.2, 0.1], size_average=True):
9 | super(softCrossEntropy, self).__init__()
10 | self.logsoftmax = nn.LogSoftmax(dim=1)
11 | self.weight = torch.FloatTensor(weight)
12 | self.size_average=size_average
13 |         self.d = int((len(self.weight) - 1) / 2)
14 | return
15 | def soft_cross_entropy(self, inputs, target):
16 | target_vector = torch.zeros(inputs.shape)
17 | for i, idx in enumerate(target):
18 | target_vector[i, idx - self.d: idx + self.d + 1] = self.weight
19 | if inputs.is_cuda:
20 | target_vector=target_vector.cuda()
21 | if self.size_average:
22 | return torch.mean(torch.sum(-target_vector * self.logsoftmax(inputs), dim=1))
23 | else:
24 |             return torch.sum(torch.sum(-target_vector * self.logsoftmax(inputs), dim=1))
25 |
26 | def forward(self, inputs, target):
27 | return self.soft_cross_entropy(inputs,target)
28 |
29 | def orient_loss(weight,orient):
30 | binary_orient=[]
31 | for ori in orient:
32 | if ori>60:
33 | binary_orient.append(1)
34 | else:
35 | binary_orient.append(0)
36 | binary_orient=torch.LongTensor(binary_orient)
37 | weight = weight.squeeze()
38 | if weight.is_cuda:
39 | binary_orient=binary_orient.cuda()
40 | return F.cross_entropy(weight,binary_orient)
41 |
42 |
43 | def get_ignored_params(model):
44 | # Generator function that yields ignored params.
45 | b = [model.conv1, model.bn1, model.fc_finetune]
46 | for i in range(len(b)):
47 | for module_name, module in b[i].named_modules():
48 | if 'bn' in module_name:
49 | module.eval()
50 | for name, param in module.named_parameters():
51 | yield param
52 |
53 |
54 | def get_non_ignored_params(model):
55 | # Generator function that yields params that will be optimized.
56 | b = [model.layer1, model.layer2, model.layer3, model.layer4]
57 | for i in range(len(b)):
58 | for module_name, module in b[i].named_modules():
59 | if 'bn' in module_name:
60 | module.eval()
61 | for name, param in module.named_parameters():
62 | yield param
63 |
64 | def get_fc_params(model):
65 | # Generator function that yields fc layer params.
66 | b = [model.fc_yaw, model.fc_pitch, model.fc_roll]
67 | for i in range(len(b)):
68 | for module_name, module in b[i].named_modules():
69 | for name, param in module.named_parameters():
70 | yield param
71 |
72 | def count_model_params(model):
73 | return "Model Size: %.4f M " %(sum(p.numel() for p in model.parameters())/1e6)
74 |
75 | def params_distribution(params, milestone):
76 | length = len(params)
77 | params =np.abs(params)
78 | for i in range(len(milestone)+1):
79 | if i==0:
80 |             print("0 < value < %f : %.4f " %(milestone[i], sum(params < milestone[i])/length*100))
81 |             continue
82 |         if i == len(milestone):
83 |             print("value > %f : %.4f " %(milestone[i-1], sum(params > milestone[i-1])/length*100))
84 | continue
85 | print("%f < value < %f : %.4f " % (milestone[i-1], milestone[i], sum((params > milestone[i-1])&(params < milestone[i]))/length*100 ))
86 |
87 |
88 | def load_filtered_state_dict(model, snapshot):
89 | # By user apaszke from discuss.pytorch.org
90 | model_dict = model.state_dict()
91 | snapshot = {k: v for k, v in snapshot.items() if k in model_dict}
92 | model_dict.update(snapshot)
93 | model.load_state_dict(model_dict)
94 |
95 |
96 | class calculate_errors():
97 | def __init__(self):
98 | self.roll = torch.empty(0)
99 | self.yaw = torch.empty(0)
100 | self.pitch = torch.empty(0)
101 |
102 | def clear(self):
103 | self.roll = torch.empty(0)
104 | self.yaw = torch.empty(0)
105 | self.pitch = torch.empty(0)
106 |
107 | def append(self, roll, yaw, pitch):
108 | self.roll = torch.cat([self.roll, roll.detach().cpu()], dim=0)
109 | self.yaw = torch.cat([self.yaw, yaw.detach().cpu()], dim=0)
110 | self.pitch = torch.cat([self.pitch, pitch.detach().cpu()], dim=0)
111 |
112 | def out(self):
113 | average_roll = torch.mean(torch.abs(torch.FloatTensor(self.roll))).item()
114 | average_yaw = torch.mean(torch.abs(torch.FloatTensor(self.yaw))).item()
115 | average_pitch = torch.mean(torch.abs(torch.FloatTensor(self.pitch))).item()
116 | return [average_roll, average_yaw, average_pitch]
117 |
118 |
119 | def save_model(model, path, name):
120 | print('Taking snapshot...')
121 | name = path + "/" + name + ".pkl"
122 | torch.save(model.state_dict(), name)
123 |
124 | def mk_train_dir(logger_path):
125 | if not os.path.exists(logger_path):
126 | os.mkdir(logger_path)
127 | for i in range(10000):
128 | new_path = os.path.join(logger_path, "%d" % i)
129 | if os.path.exists(new_path):
130 | continue
131 | else:
132 | os.mkdir(new_path)
133 | os.mkdir(os.path.join(new_path,"test_results"))
134 | os.mkdir(os.path.join(new_path, "models"))
135 | return new_path
136 |
137 | def save_pred_label(file, pred,labels):
138 | for i in range(len(labels[0])):
139 | file.write("%.3f %.3f %.3f || %.3f %.3f %.3f \n" % (pred[0][i], pred[1][i], pred[2][i],labels[0][i], labels[1][i],labels[2][i]))
140 |
141 | def add_test_msg(path, name ,errors, epoch, show=True):
142 | file=open(os.path.join(path,name+".txt"),'a')
143 | msg = name + " Epoch: %d Roll: %f Yaw: %f Pitch: %f MAE: %f SUM: %f \n" % (epoch,
144 | errors[0], errors[1], errors[2], np.mean(errors), np.sum(errors))
145 | file.write(msg)
146 | file.close()
147 | if show:
148 | print(msg)
149 |
150 | def tensor_board(logger_path):
151 | try:
152 | from tensorboardX import SummaryWriter
153 | is_tensorboard_available = True
154 | writer = SummaryWriter(log_dir=logger_path)
155 |         print("--tensorboard available : log path - %s" % (logger_path))
156 | return is_tensorboard_available, writer
157 | except Exception:
158 | is_tensorboard_available = False
159 | return is_tensorboard_available, None
160 |
161 | def on_server():
162 | if os.path.exists("/home/ubuntu/ssd/datasets"):
163 | return False
164 | return True
165 |
166 | def get_data_path(DSFD=False):
167 | if os.path.exists("/home/ubuntu/ssd/datasets"):
168 | datafile="/home/ubuntu/ssd/datasets"
169 | elif os.path.exists("/home/zhangdong/Shaw/datasets"):
170 |         datafile="/home/zhangdong/Shaw/datasets"  # note: NameError below if neither dataset root exists
171 | path_300W_LP = os.path.join(datafile,"300W_LP")
172 | path_AFLW2000 = os.path.join(datafile,"AFLW2000_DSFD" if DSFD else "AFLW2000")
173 | path_BIWI = datafile
174 | return {"300W_LP":path_300W_LP, "AFLW2000": path_AFLW2000, "BIWI": path_BIWI}
175 |
176 | def write_tensorboard(writer, dataset_name, errors, iter, lr=-1):
177 | writer.add_scalar(dataset_name + " Roll", errors[0], iter)
178 | writer.add_scalar(dataset_name + " Yaw", errors[1], iter)
179 | writer.add_scalar(dataset_name + " Pitch", errors[2], iter)
180 | writer.add_scalar(dataset_name + " Sum", np.sum(errors), iter)
181 | writer.add_scalar(dataset_name + " Average",np.mean(errors), iter)
182 | if lr>0:
183 | writer.add_scalar(dataset_name +" Learn rate", lr, iter)
184 |
185 | def adjust_learning_rate(optimizer, epoch, schedule, gamma):
186 | """Sets the learning rate to the initial LR decayed by schedule"""
187 | if epoch in schedule:
188 | for param_group in optimizer.param_groups:
189 | param_group['lr'] *= gamma
190 | return optimizer.state_dict()['param_groups'][0]['lr']
191 |
192 | def is_minor_error(min_errors,errors):
193 | if errors[0] < min_errors[0] or errors[1] < min_errors[1] or errors[2] < min_errors[
194 | 2] or np.sum(errors) < np.sum(min_errors):
195 | min_errors[0] = min(errors[0], min_errors[0])
196 | min_errors[1] = min(errors[1], min_errors[1])
197 | min_errors[2] = min(errors[2], min_errors[2])
198 | return True
199 | return False
200 |
201 | def load_snapshot(model, snapshot):
202 | saved_state_dict = torch.load(snapshot,map_location=torch.device('cpu'))
203 | model.load_state_dict(saved_state_dict)
204 |
--------------------------------------------------------------------------------
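`softCrossEntropy` replaces the usual one-hot target with a small weighted window over neighboring bins, then takes the cross-entropy against log-softmax logits. A NumPy-only sketch of that target smoothing and loss (the 66-bin width matches the `model-b66` head; the random logits are purely illustrative):

```python
import numpy as np

weight = np.array([0.1, 0.2, 0.5, 0.2, 0.1])
d = (len(weight) - 1) // 2          # half window: 2 for a 5-tap weight
num_bins, idx = 66, 10              # 66 pose bins, target class index 10

# spread the target mass over bins [idx-d, idx+d]
target = np.zeros(num_bins)
target[idx - d: idx + d + 1] = weight

# loss = -sum(target * log_softmax(logits)), as in soft_cross_entropy
logits = np.random.randn(num_bins)
log_softmax = logits - logits.max() - np.log(np.exp(logits - logits.max()).sum())
loss = -(target * log_softmax).sum()
print(loss > 0)  # True: log-softmax is strictly negative over 66 bins
```

Note the window slice assumes `d <= idx <= num_bins - 1 - d`; target indices near the bin boundaries would need clamping.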