├── .gitignore
├── README.md
├── aPyOpenGL
│   ├── __init__.py
│   ├── agl
│   │   ├── __init__.py
│   │   ├── app.py
│   │   ├── appmanager.py
│   │   ├── bvh.py
│   │   ├── camera.py
│   │   ├── const.py
│   │   ├── core
│   │   │   ├── __init__.py
│   │   │   ├── mesh.py
│   │   │   ├── primitive.py
│   │   │   └── shader.py
│   │   ├── data
│   │   │   ├── bvh
│   │   │   │   ├── ybot_capoeira.bvh
│   │   │   │   └── ybot_walk.bvh
│   │   │   ├── fbx
│   │   │   │   ├── etc
│   │   │   │   │   ├── arrow.fbx
│   │   │   │   │   └── axis.fbx
│   │   │   │   ├── model
│   │   │   │   │   ├── lafan.fbx
│   │   │   │   │   └── ybot.fbx
│   │   │   │   └── motion
│   │   │   │       ├── ybot_capoeira.fbx
│   │   │   │       └── ybot_walking.fbx
│   │   │   ├── fonts
│   │   │   │   └── consola.ttf
│   │   │   ├── obj
│   │   │   │   ├── lafan.mtl
│   │   │   │   ├── lafan.obj
│   │   │   │   ├── teapot.mtl
│   │   │   │   └── teapot.obj
│   │   │   ├── textures
│   │   │   │   ├── background.hdr
│   │   │   │   ├── brickwall.jpg
│   │   │   │   ├── brickwall_disp.jpg
│   │   │   │   ├── brickwall_normal.jpg
│   │   │   │   ├── grid.png
│   │   │   │   ├── ground_texture.jpg
│   │   │   │   ├── pbr_albedo.png
│   │   │   │   ├── pbr_metallic.png
│   │   │   │   ├── pbr_normal.png
│   │   │   │   ├── pbr_roughness.png
│   │   │   │   ├── skybox
│   │   │   │   │   ├── back.jpg
│   │   │   │   │   ├── bottom.jpg
│   │   │   │   │   ├── front.jpg
│   │   │   │   │   ├── left.jpg
│   │   │   │   │   ├── right.jpg
│   │   │   │   │   └── top.jpg
│   │   │   │   └── wood.jpg
│   │   │   └── txt
│   │   │       └── heightmap.txt
│   │   ├── fbx.py
│   │   ├── fbxparser
│   │   │   ├── __init__.py
│   │   │   ├── animation.py
│   │   │   ├── keyframe.py
│   │   │   ├── material.py
│   │   │   ├── mesh.py
│   │   │   ├── parser.py
│   │   │   ├── skeleton.py
│   │   │   ├── skin.py
│   │   │   └── texture.py
│   │   ├── heightmap.py
│   │   ├── light.py
│   │   ├── material.py
│   │   ├── mesh.py
│   │   ├── model.py
│   │   ├── motion
│   │   │   ├── __init__.py
│   │   │   ├── joint.py
│   │   │   ├── motion.py
│   │   │   ├── pose.py
│   │   │   └── skeleton.py
│   │   ├── obj.py
│   │   ├── render.py
│   │   ├── shader
│   │   │   ├── cubemap.fs
│   │   │   ├── cubemap.vs
│   │   │   ├── equirect.fs
│   │   │   ├── equirect.vs
│   │   │   ├── frag.fs
│   │   │   ├── lbs.vs
│   │   │   ├── shadow.fs
│   │   │   ├── shadow.vs
│   │   │   ├── text.fs
│   │   │   ├── text.vs
│   │   │   └── vert.vs
│   │   ├── text.py
│   │   ├── texture.py
│   │   └── ui.py
│   ├── kin
│   │   ├── __init__.py
│   │   ├── kindisp.py
│   │   └── kinpose.py
│   ├── learning
│   │   ├── __init__.py
│   │   ├── embedding.py
│   │   ├── mlp.py
│   │   ├── rbf.py
│   │   ├── transformer.py
│   │   └── vae.py
│   ├── ops
│   │   ├── __init__.py
│   │   ├── mathops.py
│   │   └── motionops.py
│   ├── transforms
│   │   ├── __init__.py
│   │   ├── numpy
│   │   │   ├── __init__.py
│   │   │   ├── aaxis.py
│   │   │   ├── euler.py
│   │   │   ├── ortho6d.py
│   │   │   ├── quat.py
│   │   │   ├── rotmat.py
│   │   │   └── xform.py
│   │   └── torch
│   │       ├── __init__.py
│   │       ├── aaxis.py
│   │       ├── euler.py
│   │       ├── ortho6d.py
│   │       ├── quat.py
│   │       ├── rotmat.py
│   │       └── xform.py
│   └── utils
│       ├── __init__.py
│       └── util.py
├── examples
│   ├── 01_app.py
│   ├── 02_animapp.py
│   ├── 03_obj.py
│   ├── 04_bvh.py
│   ├── 05_bvh_with_fbx.py
│   ├── 06_fbx_model.py
│   ├── 07_fbx_motion.py
│   ├── 07_kinpose.py
│   ├── 08_kindisp.py
│   ├── 09_sensors.py
│   ├── 10_bvh_export.py
│   └── 99_heightmap.py
├── install.sh
├── requirements.txt
└── teaser.gif
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
2 | .vscode
3 | capture
4 | validate
5 |
6 | *.pkl
7 | *.fbm
8 |
9 | data/
10 | ops/
11 | # examples/
12 |
13 | *.ini
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Python Framework for Motion Data
 2 | ```aPyOpenGL``` is a Python version of the [aOpenGL](https://github.com/ltepenguin/aOpenGL) framework for motion data processing and visualization.
 3 | The framework uses a right-handed coordinate system with the y-axis as the up vector.
4 |
5 |
6 | # Installation
7 | ### Linux
 8 | For Linux users, we provide ```install.sh```, a shell script that creates a conda environment. You can change the environment name by modifying ```ENV_NAME``` in the script, which is set to ```env-apyopengl``` by default.
9 | ```
10 | bash install.sh
11 | ```
12 |
13 | ### Windows
 14 | For the visualization modules, install the required packages first.
15 | ```
16 | pip install -r requirements.txt
17 | imageio_download_bin freeimage
18 | ```
 19 | Also, follow this guide to install the FBX SDK on your machine: [Link for Windows](https://www.ralphminderhoud.com/blog/build-fbx-python-sdk-for-windows/)
20 |
21 | # How to use
 22 | ```aPyOpenGL``` has four main modules, ```agl```, ```kin```, ```transforms```, and ```ops```, plus an auxiliary module ```utils```. Example scripts are in [examples](examples/), and you can run any of them with:
23 | ```
24 | python examples/{script_to_run}.py
25 | ```
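
The top-level modules can then be imported as regular Python packages, assuming the repository root is importable (see the next section for setting ```PYTHONPATH```). A minimal sketch, with comments based on the repository layout:
```
from aPyOpenGL import agl         # apps, rendering, and motion data (BVH/FBX)
from aPyOpenGL import kin         # kinematics helpers (kinpose, kindisp)
from aPyOpenGL import transforms  # rotation/transform ops for numpy and pytorch
from aPyOpenGL import ops         # math and motion operations (mathops, motionops)
```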
26 |
27 | ## Set Up the Environment Variable
 28 | If you add the path of this framework to the ```PYTHONPATH``` environment variable, you can use it from anywhere on your machine.
29 |
30 | ### Linux
31 | Add this line to ```~/.bashrc```:
32 | ```
33 | export PYTHONPATH=$PYTHONPATH:{path/to/aPyOpenGL}
34 | ```
35 | and then execute this:
36 | ```
37 | source ~/.bashrc
38 | ```
39 |
40 | ### Windows
41 | Add the path of the cloned repository to the environment variable PYTHONPATH. If you don't know how, please refer to [this](https://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-so-it-finds-my-modules-packages).
42 |
43 | ### Commands
44 | * F1: Render the scene in GL_FILL mode.
45 | * F2: Render the scene in GL_LINE mode.
 46 | * F5: Capture the screen as an image and save it in ```captures/yyyy-mm-dd/images```.
 47 | * F6: Start capturing the screen as a video when pressed once, and save it in ```captures/yyyy-mm-dd/videos``` when pressed again.
48 | * Alt + Left Mouse: Tumble tool for the camera.
49 | * Alt + Middle Mouse: Track tool for the camera.
50 | * Alt + Mouse Scroll: Dolly tool for the camera.
51 | * Mouse Scroll: Zoom tool for the camera.
 52 | * A: Toggle the visualization of the axis.
 53 | * G: Toggle the visualization of the grid.
 54 | * F: Toggle the visualization of the render FPS text.
 55 | * Left / Right arrow: Move 1 second backward / forward in time.
56 |
 57 | Additionally, you can add your own custom commands;
 58 | see the scripts in [examples](examples/) for reference.
59 |
60 | ## Motion
 61 | We provide a BVH parser for motion data and an FBX parser for both motion and mesh data. Motion data in this framework is structured as a hierarchy of Joint, Skeleton, Pose, and Motion, and you can see the structure [here](aPyOpenGL/agl/motion).
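
For example, a minimal sketch of loading one of the bundled BVH files into a ```Motion``` object (file path and default arguments taken from this repository):
```
from aPyOpenGL import agl

# Parse a BVH file shipped with the framework.
# `scale` rescales offsets and root positions (e.g. centimeters to meters).
bvh = agl.BVH("aPyOpenGL/agl/data/bvh/ybot_walk.bvh", target_fps=30, scale=0.01)

motion = bvh.motion()  # agl.Motion: a sequence of agl.Pose objects sharing one agl.Skeleton
model  = bvh.model()   # agl.Model built from the parsed skeleton (no mesh attached)
```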
62 |
63 |
64 |
66 |
67 | ## Transforms
 68 | ```transforms``` provides operations for rotations and rigid transformations in both numpy and pytorch.
 69 | Modules whose names start with ```n_``` operate on numpy ndarrays, and those starting with ```t_``` operate on pytorch tensors.
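
For instance, a small sketch mirroring how the BVH parser uses the numpy Euler module (the pytorch counterparts follow the same naming convention):
```
import numpy as np
from aPyOpenGL.transforms import n_euler

# per-joint Euler angles in degrees, with rotation order "zyx"
angles = np.zeros((24, 3), dtype=np.float32)
quats  = n_euler.to_quat(angles, "zyx", radians=False)  # quaternion representation
```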
70 |
75 |
76 | ## Utils
 77 | ```utils``` provides utility functions, such as multiprocessing helpers.
78 |
79 | # More to Come
 80 | We are planning to support motion manipulation functions, like the ```kin``` namespace in [aOpenGL](https://github.com/ltepenguin/aOpenGL). This will be updated soon!
81 |
82 |
112 |
113 | # Publications
114 | Published papers developed on top of this framework are as follows:
115 |
116 | * [SALAD: Skeleton-aware Latent Diffusion for Text-driven Motion Generation and Editing](https://kwanyun.github.io/AnyMoLe_page/) [Hong et al. CVPR 2025]
117 | * [AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models](https://kwanyun.github.io/AnyMoLe_page/) [Yun et al. CVPR 2025]
118 | * [ASMR: Adaptive Skeleton-Mesh Rigging and Skinning via 2D Generative Prior](https://seokhyeonhong.github.io/projects/asmr/) [Hong et al. Eurographics 2025]
119 | * Geometry-Aware Retargeting for Two-Skinned Characters Interaction [Jang et al. SIGGRAPH Asia 2024]
 120 | * [Long-term Motion In-Betweening via Keyframe Prediction](https://github.com/seokhyeonhong/long-mib) [Hong et al. SCA 2024]
121 |
122 | # Acknowledgements
123 | The overall structure of the rendering modules is inspired by
124 | [aOpenGL](https://github.com/ltepenguin/aOpenGL)
 125 | and [LearnOpenGL](https://learnopengl.com/).
126 |
127 | Data processing, operation functions, and utility functions are inspired by
128 | [fairmotion](https://github.com/facebookresearch/fairmotion),
129 | [pytorch3d](https://github.com/facebookresearch/pytorch3d),
130 | [PFNN](https://github.com/sreyafrancis/PFNN),
 131 | and [LaFAN1](https://github.com/ubisoft/ubisoft-laforge-animation-dataset) repositories.
--------------------------------------------------------------------------------
/aPyOpenGL/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/__init__.py
--------------------------------------------------------------------------------
/aPyOpenGL/agl/__init__.py:
--------------------------------------------------------------------------------
1 | from .app import App
2 | from .appmanager import AppManager
3 | from .bvh import BVH
4 | from .camera import Camera
5 | from .const import *
6 | from .fbx import FBX
7 | from .heightmap import Heightmap
8 | from .light import Light
9 | from .material import Material
10 | from .model import Model
11 | from .motion import Joint, Skeleton, Pose, Motion
12 | from .render import Render
13 | from .texture import TextureType
14 |
15 | # from .kin.kinpose import KinPose
16 | # from .kin.kindisp import KinDisp
--------------------------------------------------------------------------------
/aPyOpenGL/agl/appmanager.py:
--------------------------------------------------------------------------------
1 | import glfw
2 | from OpenGL.GL import *
3 |
4 | from .app import App
5 | from .render import Render, RenderMode
6 | from .const import SHADOW_MAP_SIZE
7 |
8 | class AppManager:
9 | _app = None
10 |
11 | @staticmethod
12 | def start(app: App):
13 | AppManager._app = app
14 | AppManager._render_loop()
15 |
16 | @staticmethod
17 | def _render_loop():
18 | if AppManager._app is None:
19 | raise Exception("AppManager.app is empty")
20 |
21 | app = AppManager._app
22 |
23 | # start
24 | app.start()
25 | app.initialize_ui()
26 |
27 | # main loop
28 | glfw.set_time(0)
29 | while not glfw.window_should_close(app.window):
30 | # update window size
31 | width, height = glfw.get_window_size(app.window)
32 | glViewport(0, 0, width, height)
33 |
34 | # process inputs for ui
35 | app.process_inputs()
36 |
37 | # sky color
38 | sky_color = Render.sky_color()
39 | glClearColor(sky_color.x, sky_color.y, sky_color.z, sky_color.a)
40 | glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
41 |
42 | # update
43 | app.update()
44 |
45 | # update camera & light
46 | Render.update_render_view(app, width, height)
47 |
48 | # render shadow
49 | Render.set_render_mode(RenderMode.eSHADOW)
50 | glViewport(0, 0, SHADOW_MAP_SIZE, SHADOW_MAP_SIZE)
51 | glClear(GL_DEPTH_BUFFER_BIT)
52 | app.render()
53 |
54 | # render scene
55 | Render.set_render_mode(RenderMode.eDRAW)
56 | glViewport(0, 0, width, height)
57 | glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
58 | app.render()
59 |
60 | # render text
61 | Render.set_render_mode(RenderMode.eTEXT)
62 | glViewport(0, 0, width, height)
63 | app.render_text()
64 |
65 | # render xray
66 | Render.set_render_mode(RenderMode.eDRAW)
67 | glClear(GL_DEPTH_BUFFER_BIT)
68 | app.render_xray()
69 |
70 | # render ui
71 | app._render_ui()
72 |
73 | # late update
74 | app.late_update()
75 |
76 | # event
77 | glfw.poll_events()
78 | glfw.swap_buffers(app.window)
79 |
80 | app.terminate_ui()
81 | glfw.destroy_window(app.window)
82 | glfw.terminate()
83 | app.terminate()
84 | Render.vao_cache = {}
--------------------------------------------------------------------------------
/aPyOpenGL/agl/bvh.py:
--------------------------------------------------------------------------------
1 | import os
2 | import re
3 | import numpy as np
4 | import multiprocessing as mp
5 |
6 | from .motion import Joint, Skeleton, Pose, Motion
7 | from .model import Model
8 |
9 | from aPyOpenGL.transforms import n_euler
10 |
11 | channelmap = {
12 | 'Xrotation': 'x',
13 | 'Yrotation': 'y',
14 | 'Zrotation': 'z'
15 | }
16 |
17 | channelmap_inv = {
18 | 'x': 'Xrotation',
19 | 'y': 'Yrotation',
20 | 'z': 'Zrotation',
21 | }
22 |
23 | ordermap = {
24 | 'x': 0,
25 | 'y': 1,
26 | 'z': 2,
27 | }
28 |
29 | class BVH:
30 | """
31 | !!!
32 | Disclaimer:
33 | This implementation is only for character poses with 3D root positions and 3D joint rotations.
34 | Therefore, joint positions and scales within the BVH file are not considered.
35 | !!!
36 | """
37 | def __init__(self, filename: str, target_fps=30, scale=0.01):
38 | self.filename = filename
39 | self.target_fps = target_fps
40 | self.scale = scale
41 |
42 | self._cumsum_channels = 0
43 | self._valid_channel_idx = []
44 |
45 | self.poses = []
46 | self._load()
47 |
48 | def _load(self):
49 | if not self.filename.endswith(".bvh"):
50 | print(f"{self.filename} is not a bvh file.")
51 | return
52 |
53 | i = 0
54 | active = -1
55 | end_site = False
56 |
57 | skeleton = Skeleton(joints=[])
58 |
59 | with open(self.filename, "r") as f:
60 | for line in f:
61 | if "HIERARCHY" in line: continue
62 | if "MOTION" in line: continue
63 | if "{" in line: continue
64 |
65 | rmatch = re.match(r"ROOT (\w+)", line)
66 | if rmatch:
67 | skeleton.add_joint(rmatch.group(1), parent_idx=None)
68 | active = skeleton.num_joints - 1
69 | continue
70 |
71 | if "}" in line:
72 | if end_site:
73 | end_site = False
74 | else:
75 | active = skeleton.parent_idx[active]
76 | continue
77 |
78 | offmatch = re.match(r"\s*OFFSET\s+([\-\d\.e]+)\s+([\-\d\.e]+)\s+([\-\d\.e]+)", line)
79 | if offmatch:
80 | if not end_site:
81 | skeleton.joints[active].local_pos = np.array(list(map(float, offmatch.groups())), dtype=np.float32) * self.scale
82 | skeleton.recompute_pre_xform()
83 | continue
84 |
85 | chanmatch = re.match(r"\s*CHANNELS\s+(\d+)", line)
86 | if chanmatch:
87 | channels = int(chanmatch.group(1))
88 | channelis = 0 if channels == 3 else 3
89 | channelie = 3 if channels == 3 else 6
90 | parts = line.split()[2 + channelis:2 + channelie]
91 | if any([p not in channelmap for p in parts]):
92 | continue
93 | order = "".join([channelmap[p] for p in parts])
94 |
95 | if active == 0:
96 | assert channels == 6, f"Root joint must have 6 channels, but got {channels}"
97 | self._valid_channel_idx += [i for i in range(channels)]
98 | else:
99 | self._valid_channel_idx += [i + self._cumsum_channels for i in range(channelis, channelie)]
100 | self._cumsum_channels += channels
101 | continue
102 |
103 | jmatch = re.match(r"\s*JOINT\s+(.+)", line)
104 | if jmatch:
105 | skeleton.add_joint(jmatch.group(1), parent_idx=active)
106 | active = skeleton.num_joints - 1
107 | continue
108 |
109 | if "End Site" in line:
110 | end_site = True
111 | continue
112 |
 113 |                 fmatch = re.match(r"\s*Frames:\s+(\d+)", line)
114 | if fmatch:
115 | fnum = int(fmatch.group(1))
116 | # positions = np.zeros((fnum, skeleton.num_joints, 3), dtype=np.float32)
117 | # rotations = np.zeros((fnum, skeleton.num_joints, 3), dtype=np.float32)
118 | continue
119 |
 120 |                 fmatch = re.match(r"\s*Frame Time:\s+([\d\.]+)", line)
121 | if fmatch:
122 | frametime = float(fmatch.group(1))
123 | fps = round(1. / frametime)
124 | # if fps % self.target_fps != 0:
125 | # raise Exception(f"Invalid target fps for {self.filename}: {self.target_fps} (fps: {fps})")
126 |
127 | # sampling_step = fps // self.target_fps
128 | continue
129 |
130 | dmatch = line.strip().split(' ')
131 | if dmatch:
132 | data_block = np.array(list(map(float, dmatch)), dtype=np.float32)
133 | data_block = data_block[self._valid_channel_idx]
134 |
135 | root_pos = data_block[0:3] * self.scale
136 | joint_rots = data_block[3:].reshape(skeleton.num_joints, 3)
137 | local_quats = n_euler.to_quat(joint_rots, order, radians=False)
138 | self.poses.append(Pose(skeleton, local_quats, root_pos))
139 | i += 1
140 |
141 | # self.poses = self.poses[1::sampling_step]
142 |
143 | def motion(self):
144 | name = os.path.splitext(os.path.basename(self.filename))[0]
145 | res = Motion(self.poses, fps=self.target_fps, name=name)
146 | return res
147 |
148 | def model(self):
149 | return Model(meshes=None, skeleton=self.poses[0].skeleton)
--------------------------------------------------------------------------------
/aPyOpenGL/agl/camera.py:
--------------------------------------------------------------------------------
1 | import glm
2 |
3 | from .const import CAM_DOLLY_SENSITIVITY, CAM_TRACK_SENSITIVITY, CAM_TUMBLE_SENSITIVITY, CAM_ZOOM_SENSITIVITY
4 |
5 | class Camera:
6 | def __init__(
7 | self,
8 | position = glm.vec3(0, 3, 5),
9 | orientation = glm.mat3(1.0),
10 | focus_position = glm.vec3(0, 0, 0),
11 | fov_y = glm.radians(45),
12 | is_perspective = True,
13 | ortho_zoom = 100.0,
14 | z_near = 0.1,
15 | z_far = 1000.0,
16 | zoom_factor = 1.0,
17 | ):
18 | self.__position = position
19 | self.__orientation = orientation
20 | self.__focus_position = focus_position
21 | self.__up = glm.vec3(0, 1, 0)
22 | self.__fov_y = fov_y
23 | self.__is_perspective = is_perspective
24 | self.__ortho_zoom = ortho_zoom
25 | self.__z_near = z_near
26 | self.__z_far = z_far
27 | self.__zoom_factor = zoom_factor
28 |
29 | self.update()
30 |
31 | @property
32 | def position(self):
33 | return self.__position
34 |
35 | @property
36 | def focus_position(self):
37 | return self.__focus_position
38 |
39 | def update(self):
40 | z = glm.normalize(self.__focus_position - self.__position)
41 | x = glm.normalize(glm.cross(self.__up, z))
42 | self.__up = glm.cross(z, x)
43 | self.__orientation = glm.mat3(x, self.__up, z)
44 |
45 | def get_view_matrix(self):
46 | return glm.lookAt(self.__position, self.__focus_position, self.__up)
47 |
48 | def get_projection_matrix(self, width, height):
49 | aspect = width / (height + 1e-8)
50 | if self.__is_perspective:
51 | return glm.perspective(self.__zoom_factor * self.__fov_y, aspect, self.__z_near, self.__z_far)
52 | else:
53 | scale = self.__ortho_zoom * 0.00001
54 | return glm.ortho(-width * scale, width * scale, -height * scale, height * scale, self.__z_near, self.__z_far)
55 |
56 | def dolly(self, yoffset):
57 | yoffset *= CAM_DOLLY_SENSITIVITY
58 |
59 | disp = self.__orientation[2] * yoffset
60 | self.__position += disp
61 | self.__focus_position += disp
62 |
63 | self.update()
64 |
65 | def zoom(self, yoffset):
66 | yoffset *= CAM_ZOOM_SENSITIVITY
67 |
68 | if self.__is_perspective:
69 | self.__zoom_factor -= yoffset
70 | self.__zoom_factor = max(0.1, min(self.__zoom_factor, 2))
71 | else:
72 | self.__ortho_zoom -= yoffset * 100
73 | self.__ortho_zoom = max(0.1, min(self.__ortho_zoom, 1000))
74 |
75 | self.update()
76 |
77 | def tumble(self, dx, dy):
78 | dx *= CAM_TUMBLE_SENSITIVITY
79 | dy *= CAM_TUMBLE_SENSITIVITY
80 |
81 | disp = glm.vec4(self.__position - self.__focus_position, 1)
82 | alpha = 2.0
83 | Rx = glm.rotate(glm.mat4(1.0), alpha * dy, glm.vec3(glm.transpose(self.get_view_matrix())[0]))
84 | Ry = glm.rotate(glm.mat4(1.0), -alpha * dx, glm.vec3(0, 1, 0))
85 | R = Ry * Rx
86 | self.__position = self.__focus_position + glm.vec3(R * disp)
87 | self.__up = glm.mat3(R) * self.__up
88 |
89 | self.update()
90 |
91 | def track(self, dx, dy):
92 | dx *= CAM_TRACK_SENSITIVITY
93 | dy *= CAM_TRACK_SENSITIVITY
94 | VT = glm.transpose(self.get_view_matrix())
95 | delta = glm.vec3(-dx * VT[0] - dy * VT[1])
96 | self.__position += delta
97 | self.__focus_position += delta
98 | self.update()
99 |
100 | """ Camera manipulation functions """
101 | def set_position(self, position):
102 | self.__position = glm.vec3(position)
103 | self.update()
104 |
105 | def set_focus_position(self, focus_position):
106 | self.__focus_position = glm.vec3(focus_position)
107 | self.update()
108 |
109 | def set_up(self, up):
110 | self.__up = glm.vec3(up)
111 | self.update()
112 |
113 | def switch_projection(self):
114 | self.__is_perspective = not self.__is_perspective
115 | self.update()
116 |
117 | def set_ortho_zoom(self, ortho_zoom):
118 | self.__ortho_zoom = ortho_zoom
119 | self.update()
--------------------------------------------------------------------------------
/aPyOpenGL/agl/const.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | """ Path constants """
4 | AGL_PATH = os.path.dirname(__file__)
5 |
6 | """ Shadow constants """
7 | SHADOW_MAP_SIZE = 8 * 1024
8 | BACKGROUND_MAP_SIZE = 1024
9 |
10 | """ Camera constants """
11 | CAM_TRACK_SENSITIVITY = 0.002
12 | CAM_TUMBLE_SENSITIVITY = 0.002
13 | CAM_ZOOM_SENSITIVITY = 0.05
14 | CAM_DOLLY_SENSITIVITY = 0.2
15 |
16 | """ Light constants """
17 | MAX_LIGHT_NUM = 4
18 |
19 | """ Text constants """
20 | TEXT_RESOLUTION = 256
21 | FONT_DIR_PATH = os.path.join(AGL_PATH, "data/fonts/")
22 | CONSOLAS_FONT_PATH = os.path.join(FONT_DIR_PATH, "consola.ttf")
23 |
24 | """ Length conversion """
25 | INCH_TO_METER = 0.0254
26 |
27 | """ Material constants """
28 | MAX_MATERIAL_NUM = 5
29 | MAX_MATERIAL_TEXTURES = 25
30 |
31 | """ Texture constants """
32 | BACKGROUND_TEXTURE_FILE = "background.hdr"
33 | TEXTURE_DIR_PATH = os.path.join(AGL_PATH, "data/textures/")
34 |
35 | """ Skeleton constants """
36 | MAX_JOINT_NUM = 150
37 |
38 | """ Model constants """
39 | AXIS_MODEL_PATH = os.path.join(AGL_PATH, "data/fbx/etc/axis.fbx")
40 |
41 | """ Other constants """
42 | MAX_INSTANCE_NUM = 100
--------------------------------------------------------------------------------
/aPyOpenGL/agl/core/__init__.py:
--------------------------------------------------------------------------------
1 | from .mesh import *
2 | from .primitive import *
3 | from .shader import *
--------------------------------------------------------------------------------
/aPyOpenGL/agl/core/shader.py:
--------------------------------------------------------------------------------
1 | import os
2 | import numpy as np
3 | import glm
4 | from OpenGL.GL import *
5 |
6 | from ..const import AGL_PATH
7 |
8 | class Shader:
9 | def __init__(self, vertex_path, fragment_path, geometry_path=None):
10 | self.vertex_shader = _load_shader(vertex_path, GL_VERTEX_SHADER)
11 | self.fragment_shader = _load_shader(fragment_path, GL_FRAGMENT_SHADER)
12 | self.geometry_shader = _load_shader(geometry_path, GL_GEOMETRY_SHADER) if geometry_path is not None else None
13 | self.build()
14 |
15 | self.name = os.path.splitext(os.path.basename(vertex_path))[0]
16 | self.is_view_updated = False
17 | self.is_texture_updated = False
18 |
19 | def build(self):
20 | self.program = glCreateProgram()
21 | glAttachShader(self.program, self.vertex_shader)
22 | glAttachShader(self.program, self.fragment_shader)
23 | if self.geometry_shader is not None:
24 | glAttachShader(self.program, self.geometry_shader)
25 |
26 | glLinkProgram(self.program)
27 | _check_program_link_error(self.program)
28 |
29 | glDeleteShader(self.vertex_shader)
30 | glDeleteShader(self.fragment_shader)
31 | if self.geometry_shader is not None:
32 | glDeleteShader(self.geometry_shader)
33 |
34 | def use(self):
35 | glUseProgram(self.program)
36 |
37 | """ Set uniform variables in shader """
38 | def set_int(self, name, value): glUniform1i(glGetUniformLocation(self.program, name), value)
39 | def set_float(self, name, value): glUniform1f(glGetUniformLocation(self.program, name), value)
40 | def set_bool(self, name, value): glUniform1i(glGetUniformLocation(self.program, name), value)
41 | def set_vec2(self, name, value): glUniform2fv(glGetUniformLocation(self.program, name), 1, glm.value_ptr(value))
42 | def set_vec3(self, name, value): glUniform3fv(glGetUniformLocation(self.program, name), 1, glm.value_ptr(value))
43 | def set_vec4(self, name, value): glUniform4fv(glGetUniformLocation(self.program, name), 1, glm.value_ptr(value))
44 | def set_ivec3(self, name, value): glUniform3iv(glGetUniformLocation(self.program, name), 1, glm.value_ptr(value))
45 | def set_ivec4(self, name, value): glUniform4iv(glGetUniformLocation(self.program, name), 1, glm.value_ptr(value))
46 | def set_mat3(self, name, value): glUniformMatrix3fv(glGetUniformLocation(self.program, name), 1, GL_FALSE, glm.value_ptr(value))
47 | def set_mat4(self, name, value): glUniformMatrix4fv(glGetUniformLocation(self.program, name), 1, GL_FALSE, glm.value_ptr(value))
48 |
 49 |     # TODO: Implement these functions
50 | # def set_struct(self, name, value, struct_name): glUniform1fv(glGetUniformLocation(self.program, name), f"{struct_name}.Type", value)
51 | # def set_multiple_float(self, name, value): glUniform1fv(glGetUniformLocation(self.program, name), len(value), value)
52 | # def set_multiple_vec3(self, name, value): glUniform3fv(glGetUniformLocation(self.program, name), len(value), glm.value_ptr(value))
53 | # def set_multiple_vec4(self, name, value): glUniform4fv(glGetUniformLocation(self.program, name), len(value), glm.value_ptr(value))
54 | # def set_multiple_ivec3(self, name, value): glUniform3iv(glGetUniformLocation(self.program, name), len(value), glm.value_ptr(value))
55 | # def set_multiple_ivec4(self, name, value): glUniform4iv(glGetUniformLocation(self.program, name), len(value), glm.value_ptr(value))
56 |
57 | def set_multiple_int(self, name, values): glUniform1iv(glGetUniformLocation(self.program, name), len(values), values)
58 | def set_multiple_float(self, name, values): glUniform1fv(glGetUniformLocation(self.program, name), len(values), values)
59 | def set_multiple_mat4(self, name, values): glUniformMatrix4fv(glGetUniformLocation(self.program, name), len(values), GL_FALSE, self._glm_values_to_ptr(values))
60 |
 61 |     def _glm_values_to_ptr(self, values):
 62 |         # transpose because glm is column-major while numpy is row-major;
 63 |         # PyOpenGL accepts numpy float32 arrays directly, so no ctypes pointer is needed
 64 |         if len(values) == 0:
 65 |             return np.zeros(0, dtype=np.float32)
 66 |         return np.concatenate([np.asarray(value, dtype=np.float32).transpose().flatten() for value in values])
67 |
68 | def _check_shader_compile_error(handle):
69 | success = glGetShaderiv(handle, GL_COMPILE_STATUS)
70 | if not success:
71 | info_log = glGetShaderInfoLog(handle)
72 | raise Exception("Shader compilation error: " + info_log.decode("utf-8"))
73 |
74 | def _check_program_link_error(handle):
75 | success = glGetProgramiv(handle, GL_LINK_STATUS)
76 | if not success:
77 | info_log = glGetProgramInfoLog(handle)
78 | raise Exception("Shader program linking error: " + info_log.decode("utf-8"))
79 |
80 | def _load_shader(filename, shader_type):
81 | shader_code = _load_code(filename)
82 | shader = glCreateShader(shader_type)
83 |
84 | glShaderSource(shader, shader_code)
85 | glCompileShader(shader)
86 | _check_shader_compile_error(shader)
87 |
88 | return shader
89 |
90 | def _load_code(filename):
91 | shader_dir_path = os.path.join(AGL_PATH, "shader")
92 | shader_path = os.path.join(shader_dir_path, filename)
93 | with open(shader_path, "r") as f:
94 | code = f.read()
95 | return code
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/fbx/etc/arrow.fbx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/fbx/etc/arrow.fbx
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/fbx/etc/axis.fbx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/fbx/etc/axis.fbx
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/fbx/model/ybot.fbx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/fbx/model/ybot.fbx
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/fbx/motion/ybot_capoeira.fbx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/fbx/motion/ybot_capoeira.fbx
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/fbx/motion/ybot_walking.fbx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/fbx/motion/ybot_walking.fbx
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/fonts/consola.ttf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/fonts/consola.ttf
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/obj/lafan.mtl:
--------------------------------------------------------------------------------
1 | newmtl character:MeshSG
2 | illum 4
3 | Kd 1.00 0.45 0.00
4 | Ka 0.00 0.00 0.00
5 | Tf 1.00 1.00 1.00
6 | Ni 1.00
7 | Ks 0.10 0.10 0.10
8 | Ns 0.00
9 | newmtl character:MeshSG2
10 | illum 4
11 | Kd 1.00 0.45 0.00
12 | Ka 0.00 0.00 0.00
13 | Tf 1.00 1.00 1.00
14 | Ni 1.00
15 | Ks 0.10 0.10 0.10
16 | Ns 0.00
17 | newmtl character:MeshSG4
18 | illum 4
19 | Kd 0.70 0.70 0.70
20 | Ka 0.00 0.00 0.00
21 | Tf 1.00 1.00 1.00
22 | Ni 1.00
23 | Ks 0.10 0.10 0.10
24 | Ns 13.00
25 | newmtl character:MeshSG6
26 | illum 4
27 | Kd 0.30 0.30 0.30
28 | Ka 0.00 0.00 0.00
29 | Tf 1.00 1.00 1.00
30 | Ni 1.00
31 | Ks 0.10 0.10 0.10
32 | Ns 13.00
33 | newmtl character:MeshSG8
34 | illum 4
35 | Kd 1.00 1.00 0.00
36 | Ka 0.00 0.00 0.00
37 | Tf 1.00 1.00 1.00
38 | Ni 1.00
39 | Ks 0.10 0.10 0.10
40 | Ns 0.00
41 |
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/obj/teapot.mtl:
--------------------------------------------------------------------------------
1 | newmtl initialShadingGroup
2 | illum 4
3 | Kd 0.50 0.50 0.50
4 | Ka 0.00 0.00 0.00
5 | Tf 1.00 1.00 1.00
6 | Ni 1.00
7 |
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/background.hdr:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/background.hdr
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/brickwall.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/brickwall.jpg
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/brickwall_disp.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/brickwall_disp.jpg
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/brickwall_normal.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/brickwall_normal.jpg
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/grid.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/grid.png
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/ground_texture.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/ground_texture.jpg
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/pbr_albedo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/pbr_albedo.png
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/pbr_metallic.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/pbr_metallic.png
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/pbr_normal.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/pbr_normal.png
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/pbr_roughness.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/pbr_roughness.png
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/skybox/back.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/skybox/back.jpg
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/skybox/bottom.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/skybox/bottom.jpg
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/skybox/front.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/skybox/front.jpg
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/skybox/left.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/skybox/left.jpg
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/skybox/right.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/skybox/right.jpg
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/skybox/top.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/skybox/top.jpg
--------------------------------------------------------------------------------
/aPyOpenGL/agl/data/textures/wood.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/agl/data/textures/wood.jpg
--------------------------------------------------------------------------------
/aPyOpenGL/agl/fbxparser/__init__.py:
--------------------------------------------------------------------------------
1 | try:
2 | import fbx
3 |
4 | from .animation import *
5 | from .material import *
6 | from .mesh import *
7 | from .parser import *
8 | from .skeleton import *
9 | from .skin import *
10 | from .texture import *
11 | from .keyframe import *
12 | except ImportError:
13 | pass
--------------------------------------------------------------------------------
/aPyOpenGL/agl/fbxparser/animation.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 |
3 | import fbx
4 | import glm
5 |
6 | from .keyframe import KeyInterpType, Keyframe, NodeKeyframes, SceneKeyframes
7 |
8 | FbxAnimLayer = fbx.FbxAnimLayer
9 | FbxCriteria = fbx.FbxCriteria
10 | FbxEuler = fbx.FbxEuler
11 | FbxNode = fbx.FbxNode
12 |
13 | def get_scene_animation(anim_stack, node, scale):
14 | criteria = FbxCriteria.ObjectType(FbxAnimLayer.ClassId)
15 | num_anim_layers = anim_stack.GetMemberCount(criteria)
16 |
17 | if num_anim_layers > 1:
18 | print(f"Warning: {num_anim_layers} animation layers found, only the first one will be used")
19 |
20 | scene_keyframes = SceneKeyframes(anim_stack.GetName())
21 | anim_layer = anim_stack.GetMember(criteria, 0)
22 | get_animations(scene_keyframes, anim_layer, node, scale)
23 |
24 | time_mode = anim_stack.GetScene().GetGlobalSettings().GetTimeMode()
25 | scene_keyframes.start_frame = anim_stack.LocalStart.Get().GetFrameCount(time_mode)
26 | scene_keyframes.end_frame = anim_stack.LocalStop.Get().GetFrameCount(time_mode)
27 | scene_keyframes.fps = fbx.FbxTime.GetFrameRate(time_mode)
28 |
29 | return scene_keyframes
30 |
31 | def get_animations(scene_kfs: SceneKeyframes, anim_layer, node, scale):
32 | animation = get_keyframe_animation(node, anim_layer, scale)
33 | scene_kfs.node_keyframes.append(animation)
34 | for i in range(node.GetChildCount()):
35 | get_animations(scene_kfs, anim_layer, node.GetChild(i), scale)
36 |
37 | def get_keyframe_animation(node, anim_layer, scale) -> NodeKeyframes:
38 | node_kfs = NodeKeyframes(node.GetName())
39 |
40 | # rotation order
41 | order = node.GetRotationOrder(FbxNode.eSourcePivot)
42 | if order == FbxEuler.eOrderXYZ:
43 | node_kfs.euler_order = glm.ivec3(0, 1, 2)
44 | elif order == FbxEuler.eOrderXZY:
45 | node_kfs.euler_order = glm.ivec3(0, 2, 1)
46 | elif order == FbxEuler.eOrderYXZ:
47 | node_kfs.euler_order = glm.ivec3(1, 0, 2)
48 | elif order == FbxEuler.eOrderYZX:
49 | node_kfs.euler_order = glm.ivec3(1, 2, 0)
50 | elif order == FbxEuler.eOrderZXY:
51 | node_kfs.euler_order = glm.ivec3(2, 0, 1)
52 | elif order == FbxEuler.eOrderZYX:
53 | node_kfs.euler_order = glm.ivec3(2, 1, 0)
54 | else:
55 | print(f"Warning: unsupported rotation order {order}")
56 |
57 | # get keyframes
58 | for i, channel in enumerate(["X", "Y", "Z"]):
59 | anim_curve = node.LclTranslation.GetCurve(anim_layer, channel)
60 | if anim_curve:
61 | node_kfs.pos[i] = get_keyframes(anim_curve, scale)
62 |
63 | anim_curve = node.LclRotation.GetCurve(anim_layer, channel)
64 | if anim_curve:
65 | node_kfs.euler[i] = get_keyframes(anim_curve, 1.0)
66 |
67 | anim_curve = node.LclScaling.GetCurve(anim_layer, channel)
68 | if anim_curve:
69 | node_kfs.scale[i] = get_keyframes(anim_curve, 1.0)
70 |
71 | return node_kfs
72 |
73 | def get_keyframes(anim_curve, scale) -> list[Keyframe]:
74 | keys = []
75 | for i in range(anim_curve.KeyGetCount()):
76 | value = scale * anim_curve.KeyGetValue(i)
77 | frame = anim_curve.KeyGetTime(i).GetFrameCount()
78 | type = get_interpolation_type(anim_curve.KeyGetInterpolation(i))
79 |
80 | key = Keyframe(value, frame, type)
81 | keys.append(key)
82 |
83 | return keys
84 |
85 | def get_interpolation_type(flag):
86 | FbxAnimCurveDef = fbx.FbxAnimCurveDef
87 | if (flag & FbxAnimCurveDef.eInterpolationConstant) == FbxAnimCurveDef.eInterpolationConstant:
88 | return KeyInterpType.eCONSTANT
89 | if (flag & FbxAnimCurveDef.eInterpolationLinear) == FbxAnimCurveDef.eInterpolationLinear:
90 | return KeyInterpType.eLINEAR
91 | if (flag & FbxAnimCurveDef.eInterpolationCubic) == FbxAnimCurveDef.eInterpolationCubic:
92 | return KeyInterpType.eCUBIC
93 | return KeyInterpType.eUNKNOWN
--------------------------------------------------------------------------------
/aPyOpenGL/agl/fbxparser/material.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 |
3 | import fbx
4 | import FbxCommon
5 | import glm
6 |
7 | from .parser import MaterialInfo
8 |
9 | def get_materials(geometry) -> list[MaterialInfo]:
10 | materials = []
11 |
12 | if geometry:
13 | node = geometry.GetNode()
14 | if node:
15 | material_count = node.GetMaterialCount()
16 |
17 | if material_count > 0:
18 | color = fbx.FbxColor()
19 | for count in range(material_count):
20 | info = MaterialInfo()
21 | info.material_id = count
22 |
23 | material = node.GetMaterial(count)
24 | info.name = material.GetName()
25 |
26 | implementation = fbx.GetImplementation(material, "ImplementationHLSL")
27 | implementation_type = "HLSL"
28 | if not implementation:
29 | implementation = fbx.GetImplementation(material, "ImplementationCGFX")
30 | implementation_type = "CGFX"
31 |
32 | if not implementation:
33 | if material.GetClassId().Is(fbx.FbxSurfaceMaterial.ClassId):
34 | info.type = "default"
35 |
36 | # ambient
37 | ambient_prop = material.FindProperty(fbx.FbxSurfaceMaterial.sAmbient)
38 | if ambient_prop.IsValid():
39 | val = FbxCommon.FbxPropertyDouble3(ambient_prop).Get()
40 | color.Set(val[0], val[1], val[2])
41 | info.ambient = glm.vec3(color.mRed, color.mGreen, color.mBlue)
42 |
43 | # diffuse
44 | diffuse_prop = material.FindProperty(fbx.FbxSurfaceMaterial.sDiffuse)
45 | if diffuse_prop.IsValid():
46 | val = FbxCommon.FbxPropertyDouble3(diffuse_prop).Get()
47 | color.Set(val[0], val[1], val[2])
48 | info.diffuse = glm.vec3(color.mRed, color.mGreen, color.mBlue)
49 |
50 | # specular
51 | specular_prop = material.FindProperty(fbx.FbxSurfaceMaterial.sSpecular)
52 | if specular_prop.IsValid():
53 | val = FbxCommon.FbxPropertyDouble3(specular_prop).Get()
54 | color.Set(val[0], val[1], val[2])
55 | info.specular = glm.vec3(color.mRed, color.mGreen, color.mBlue)
56 |
57 | # emissive
58 | emissive_prop = material.FindProperty(fbx.FbxSurfaceMaterial.sEmissive)
59 | if emissive_prop.IsValid():
60 | val = FbxCommon.FbxPropertyDouble3(emissive_prop).Get()
61 | color.Set(val[0], val[1], val[2])
62 | info.emissive = glm.vec3(color.mRed, color.mGreen, color.mBlue)
63 |
64 | # opacity
65 | opacity_prop = material.FindProperty(fbx.FbxSurfaceMaterial.sTransparencyFactor)
66 | if opacity_prop.IsValid():
67 | val = FbxCommon.FbxPropertyDouble1(opacity_prop).Get()
68 | info.opacity = 1.0 - val
69 |
70 | # shininess
71 | shininess_prop = material.FindProperty(fbx.FbxSurfaceMaterial.sShininess)
72 | if shininess_prop.IsValid():
73 | val = FbxCommon.FbxPropertyDouble1(shininess_prop).Get()
74 | info.shininess = val
75 |
76 | # reflectivity
77 | reflectivity_prop = material.FindProperty(fbx.FbxSurfaceMaterial.sReflectionFactor)
78 | if reflectivity_prop.IsValid():
79 | val = FbxCommon.FbxPropertyDouble1(reflectivity_prop).Get()
80 | info.reflectivity = val
81 |
82 | materials.append(info)
83 |
84 | elif material.GetClassId().Is(fbx.FbxSurfacePhong.ClassId):
85 | info.type = "phong"
86 |
87 | # ambient
88 | val = material.Ambient.Get()
89 | color.Set(val[0], val[1], val[2])
90 | info.ambient = glm.vec3(color.mRed, color.mGreen, color.mBlue)
91 |
92 | # diffuse
93 | val = material.Diffuse.Get()
94 | color.Set(val[0], val[1], val[2])
95 | info.diffuse = glm.vec3(color.mRed, color.mGreen, color.mBlue)
96 |
97 | # specular
98 | val = material.Specular.Get()
99 | color.Set(val[0], val[1], val[2])
100 | info.specular = glm.vec3(color.mRed, color.mGreen, color.mBlue)
101 |
102 | # emissive
103 | val = material.Emissive.Get()
104 | color.Set(val[0], val[1], val[2])
105 | info.emissive = glm.vec3(color.mRed, color.mGreen, color.mBlue)
106 |
107 | # opacity
108 | val = material.TransparencyFactor.Get()
109 | info.opacity = 1.0 - val
110 |
111 | # shininess
112 | val = material.Shininess.Get()
113 | info.shininess = val
114 |
115 | # reflectivity
116 | val = material.ReflectionFactor.Get()
117 | info.reflectivity = val
118 |
119 | materials.append(info)
120 |
121 | elif material.GetClassId().Is(fbx.FbxSurfaceLambert.ClassId):
122 | info.type = "lambert"
123 |
124 | # ambient
125 | val = material.Ambient.Get()
126 | color.Set(val[0], val[1], val[2])
127 | info.ambient = glm.vec3(color.mRed, color.mGreen, color.mBlue)
128 |
129 | # diffuse
130 | val = material.Diffuse.Get()
131 | color.Set(val[0], val[1], val[2])
132 | info.diffuse = glm.vec3(color.mRed, color.mGreen, color.mBlue)
133 |
134 | # emissive
135 | val = material.Emissive.Get()
136 | color.Set(val[0], val[1], val[2])
137 | info.emissive = glm.vec3(color.mRed, color.mGreen, color.mBlue)
138 |
139 | # opacity
140 | val = material.TransparencyFactor.Get()
141 | info.opacity = 1.0 - val
142 |
143 | materials.append(info)
144 |
145 | else:
146 | print("Unknown material type: ", material.GetClassId().GetName())
147 |
148 | return materials
149 |
150 | def get_polygon_material_connection(mesh):
151 | material_connection = []
152 |
153 | polygon_count = mesh.GetPolygonCount()
154 | is_all_same = True
155 | for l in range(mesh.GetElementMaterialCount()):
156 | material_element = mesh.GetElementMaterial(l)
157 | if material_element.GetMappingMode() == fbx.FbxLayerElement.eByPolygon:
158 | is_all_same = False
159 | break
160 |
161 | if is_all_same:
162 | if mesh.GetElementMaterialCount() == 0:
163 | material_connection = [-1] * polygon_count
164 | else:
165 | material_element = mesh.GetElementMaterial(0)
166 | if material_element.GetMappingMode() == fbx.FbxLayerElement.eAllSame:
167 | material_id = material_element.GetIndexArray().GetAt(0)
168 | material_connection = [material_id] * polygon_count
169 | else:
170 | material_connection = [0] * polygon_count
171 | for i in range(polygon_count):
172 | material_num = mesh.GetElementMaterialCount()
173 | if material_num >= 1:
174 | material_element = mesh.GetElementMaterial(0)
175 | material_connection[i] = material_element.GetIndexArray().GetAt(i)
176 |
177 | return material_connection
--------------------------------------------------------------------------------
/aPyOpenGL/agl/fbxparser/skeleton.py:
--------------------------------------------------------------------------------
1 | import fbx
2 | import glm
3 | import numpy as np
4 |
5 | from .parser import JointData
6 |
7 | FbxEuler = fbx.FbxEuler
8 | FbxNode = fbx.FbxNode
9 |
10 | def to_quat(x, order):
11 | rx = glm.angleAxis(np.deg2rad(x[0], dtype=np.float32), glm.vec3(1, 0, 0))
12 | ry = glm.angleAxis(np.deg2rad(x[1], dtype=np.float32), glm.vec3(0, 1, 0))
13 | rz = glm.angleAxis(np.deg2rad(x[2], dtype=np.float32), glm.vec3(0, 0, 1))
14 |
15 | if order == FbxEuler.eOrderXYZ:
16 | return rz * ry * rx
17 | elif order == FbxEuler.eOrderXZY:
18 | return ry * rz * rx
19 | elif order == FbxEuler.eOrderYXZ:
20 | return rz * rx * ry
21 | elif order == FbxEuler.eOrderYZX:
22 | return rx * rz * ry
23 | elif order == FbxEuler.eOrderZXY:
24 | return ry * rx * rz
25 | elif order == FbxEuler.eOrderZYX:
26 | return rx * ry * rz
27 | else:
28 | raise ValueError(f"Unknown Euler order: {order}")
29 |
30 | def to_vec3(x):
31 | return glm.vec3(x[0], x[1], x[2])
32 |
33 | def parse_nodes_by_type(node, joints, parent_idx, type, scale):
34 | if node.GetTypeName() == "Null":
35 | for i in range(node.GetChildCount()):
36 | parse_nodes_by_type(node.GetChild(i), joints, parent_idx, type, scale)
37 |
38 | is_type = False
39 | for i in range(node.GetNodeAttributeCount()):
40 | attr = node.GetNodeAttributeByIndex(i)
41 | if attr.GetAttributeType() == type:
42 | is_type = True
43 | break
44 |
45 | if is_type:
46 | name = node.GetName()
47 | order = node.GetRotationOrder(FbxNode.eDestinationPivot)
48 | local_T = to_vec3(node.LclTranslation.Get())
49 | local_R = to_quat(node.LclRotation.Get(), order)
50 | local_S = to_vec3(node.LclScaling.Get())
51 |
52 | dest_pre_R = node.GetPreRotation(FbxNode.eDestinationPivot)
53 | pre_quat = to_quat(dest_pre_R, order)
54 |
55 | joint = JointData()
56 | joint.name = name
57 | joint.local_S = local_S
58 | joint.local_T = scale * local_T
59 | joint.local_R = local_R
60 | joint.pre_quat = pre_quat
61 | joint.parent_idx = parent_idx
62 | joints.append(joint)
63 | parent_idx = len(joints) - 1
64 | else:
65 | return
66 |
67 | for i in range(node.GetChildCount()):
68 | parse_nodes_by_type(node.GetChild(i), joints, parent_idx, type, scale)
--------------------------------------------------------------------------------
/aPyOpenGL/agl/fbxparser/skin.py:
--------------------------------------------------------------------------------
1 | import fbx
2 | import glm
3 |
4 | from .parser import SkinningData
5 |
6 | def get_skinning(skinning_data: SkinningData, geometry, control_idx_to_vertex_idx, vertex_num, scale):
7 | skin_count = geometry.GetDeformerCount(fbx.FbxDeformer.eSkin)
8 |
9 | if skin_count > 1:
10 | print("Warning: More than one skin deformer found")
11 | skin_count = 1
12 |
13 | if skin_count == 0:
14 | return False
15 |
16 | skinning_data.joint_indices1 = [glm.ivec4(-1) for _ in range(vertex_num)]
17 | skinning_data.joint_weights1 = [glm.vec4(0.0) for _ in range(vertex_num)]
18 | skinning_data.joint_indices2 = [glm.ivec4(-1) for _ in range(vertex_num)]
19 | skinning_data.joint_weights2 = [glm.vec4(0.0) for _ in range(vertex_num)]
20 |
21 | vertex_skinning_count = [0 for _ in range(vertex_num)]
22 | vertex_skinning_weight_sum = [0.0 for _ in range(vertex_num)]
23 |
24 | for i in range(skin_count):
25 | cluster_count = geometry.GetDeformer(i, fbx.FbxDeformer.eSkin).GetClusterCount()
26 |
27 | for j in range(cluster_count):
28 | cluster = geometry.GetDeformer(i, fbx.FbxDeformer.eSkin).GetCluster(j)
29 | if cluster.GetLinkMode() != fbx.FbxCluster.eNormalize:
30 | print("Warning: Skinning mode unknown")
31 | cluster.SetLinkMode(fbx.FbxCluster.eNormalize)
32 |
33 | if cluster.GetLink() is not None:
34 | joint_name = cluster.GetLink().GetName()
35 | if joint_name not in skinning_data.name_to_idx:
36 | skinning_data.joint_names.append(joint_name)
37 | joint_idx = len(skinning_data.joint_names) - 1
38 | skinning_data.name_to_idx[joint_name] = joint_idx
39 | else:
40 | joint_idx = skinning_data.name_to_idx[joint_name]
41 | else:
42 | print("Warning: Link error")
43 | continue
44 |
45 | idx_count = cluster.GetControlPointIndicesCount()
46 | indices = cluster.GetControlPointIndices()
47 | weights = cluster.GetControlPointWeights()
48 |
49 | for k in range(idx_count):
50 | control_point_idx = indices[k]
51 | vertex_weight = float(weights[k])
52 | vertex_indices = control_idx_to_vertex_idx.get(control_point_idx, None)
53 | if vertex_indices is None:
54 | print(f"Warning: Vertex not found at control point {control_point_idx}")
55 | continue
56 | for vertex_idx in vertex_indices:
57 | if vertex_skinning_count[vertex_idx] == 0:
58 | skinning_data.joint_indices1[vertex_idx].x = joint_idx
59 | skinning_data.joint_weights1[vertex_idx].x = vertex_weight
60 | elif vertex_skinning_count[vertex_idx] == 1:
61 | skinning_data.joint_indices1[vertex_idx].y = joint_idx
62 | skinning_data.joint_weights1[vertex_idx].y = vertex_weight
63 | elif vertex_skinning_count[vertex_idx] == 2:
64 | skinning_data.joint_indices1[vertex_idx].z = joint_idx
65 | skinning_data.joint_weights1[vertex_idx].z = vertex_weight
66 | elif vertex_skinning_count[vertex_idx] == 3:
67 | skinning_data.joint_indices1[vertex_idx].w = joint_idx
68 | skinning_data.joint_weights1[vertex_idx].w = vertex_weight
69 | elif vertex_skinning_count[vertex_idx] == 4:
70 | skinning_data.joint_indices2[vertex_idx].x = joint_idx
71 | skinning_data.joint_weights2[vertex_idx].x = vertex_weight
72 | elif vertex_skinning_count[vertex_idx] == 5:
73 | skinning_data.joint_indices2[vertex_idx].y = joint_idx
74 | skinning_data.joint_weights2[vertex_idx].y = vertex_weight
75 | elif vertex_skinning_count[vertex_idx] == 6:
76 | skinning_data.joint_indices2[vertex_idx].z = joint_idx
77 | skinning_data.joint_weights2[vertex_idx].z = vertex_weight
78 | elif vertex_skinning_count[vertex_idx] == 7:
79 | skinning_data.joint_indices2[vertex_idx].w = joint_idx
80 | skinning_data.joint_weights2[vertex_idx].w = vertex_weight
81 | else:
82 | print("Warning: Too many skinning weights")
83 | continue
84 |
85 | vertex_skinning_weight_sum[vertex_idx] += vertex_weight
86 | vertex_skinning_count[vertex_idx] += 1
87 |
88 | # global initial transform of the geometry node that contains the link node
89 | matrix = fbx.FbxAMatrix()
90 |
91 | matrix = cluster.GetTransformMatrix(matrix)
92 | Q = matrix.GetQ()
93 | T = matrix.GetT()
94 | S = matrix.GetS()
95 |
96 | q = glm.quat(Q[3], Q[0], Q[1], Q[2])
97 | t = glm.vec3(T[0], T[1], T[2]) * scale
98 | s = glm.vec3(S[0], S[1], S[2])
99 |
100 | global_xform = glm.translate(glm.mat4(1.0), t) * glm.mat4_cast(q) * glm.scale(glm.mat4(1.0), s)
101 |
102 | # joint transformations at binding pose
103 | matrix = cluster.GetTransformLinkMatrix(matrix)
104 | Q = matrix.GetQ()
105 | T = matrix.GetT()
106 | S = matrix.GetS()
107 |
108 | q = glm.quat(Q[3], Q[0], Q[1], Q[2])
109 | t = glm.vec3(T[0], T[1], T[2]) * scale
110 | s = glm.vec3(S[0], S[1], S[2])
111 |
112 | xform = glm.translate(glm.mat4(1.0), t) * glm.mat4_cast(q) * glm.scale(glm.mat4(1.0), s)
113 |
114 | skinning_data.offset_xform.append(glm.inverse(xform) * global_xform)
115 |
116 | if cluster.GetAssociateModel() is not None:
117 | print("Warning: Associate model is not None")
118 |
119 | for i in range(vertex_num):
120 | skinning_data.joint_weights1[i] /= vertex_skinning_weight_sum[i]
121 | skinning_data.joint_weights2[i] /= vertex_skinning_weight_sum[i]
122 |
123 | return True
--------------------------------------------------------------------------------
/aPyOpenGL/agl/fbxparser/texture.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 |
3 | import fbx
4 | from .parser import TextureInfo
5 |
6 | FbxCriteria = fbx.FbxCriteria
7 |
8 | def get_textures(geometry) -> list[TextureInfo]:
9 | textures = []
10 | if geometry.GetNode() is None:
11 | return textures
12 |
13 | criteria = FbxCriteria.ObjectType(fbx.FbxSurfaceMaterial.ClassId)
14 | num_materials = geometry.GetNode().GetSrcObjectCount(criteria)
15 | for material_idx in range(num_materials):
16 | material = geometry.GetNode().GetSrcObject(criteria, material_idx)
17 |
18 | if material is None:
19 | continue
20 |
21 | # go through all the possible textures
22 | for texture_idx in range(fbx.FbxLayerElement.sTypeTextureCount()):
23 | property = material.FindProperty(fbx.FbxLayerElement.sTextureChannelNames(texture_idx))
24 | textures.extend(find_and_display_texture_info_by_property(property, material_idx))
25 |
26 | return textures
27 |
28 | def find_and_display_texture_info_by_property(property, material_idx):
29 | if not property.IsValid():
30 | return []
31 |
32 | textures = []
33 |
34 | criteria = FbxCriteria.ObjectType(fbx.FbxTexture.ClassId)
35 | texture_count = property.GetSrcObjectCount()
36 | for j in range(texture_count):
37 | layered_texture = property.GetSrcObject(criteria, j)
38 | if isinstance(layered_texture, fbx.FbxLayeredTexture):
39 | for k in range(layered_texture.GetSrcObjectCount()):
40 | texture = layered_texture.GetSrcObject(criteria, k)
41 | blend_mode = layered_texture.GetTextureBlendMode(k)
42 |
43 | texture_info = get_file_texture(texture, blend_mode)
44 | texture_info.property = str(property.GetName())
45 | texture_info.connected_material = material_idx
46 | textures.append(texture_info)
47 | else:
48 | texture = property.GetSrcObject(criteria, j)
49 |
50 | texture_info = get_file_texture(texture, -1)
51 | texture_info.property = str(property.GetName())
52 | texture_info.connected_material = material_idx
53 | textures.append(texture_info)
54 |
55 | return textures
56 |
57 | def get_file_texture(texture, blend_mode):
58 | info = TextureInfo()
59 | info.name = texture.GetName()
60 |
61 | # get texture type
62 | if isinstance(texture, fbx.FbxFileTexture):
63 | info.filename = texture.GetFileName()
64 | elif isinstance(texture, fbx.FbxProceduralTexture):
65 | return info
66 | else:
67 | raise Exception("Unknown texture type")
68 |
69 | # get texture properties
70 | info.scale_u = texture.GetScaleU()
71 | info.scale_v = texture.GetScaleV()
72 | info.translation_u = texture.GetTranslationU()
73 | info.translation_v = texture.GetTranslationV()
74 | info.swap_uv = texture.GetSwapUV()
75 | info.rotation_u = texture.GetRotationU()
76 | info.rotation_v = texture.GetRotationV()
77 | info.rotation_w = texture.GetRotationW()
78 |
79 | # get texture alpha properties
80 | alpha_sources = [ "None", "RGB Intensity", "Black" ]
81 | info.alpha_source = alpha_sources[texture.GetAlphaSource()]
82 | info.crop_left = texture.GetCroppingLeft()
83 | info.crop_top = texture.GetCroppingTop()
84 | info.crop_right = texture.GetCroppingRight()
85 | info.crop_bottom = texture.GetCroppingBottom()
86 |
87 | # get texture mapping types
88 | mapping_types = [ "Null", "Planar", "Spherical", "Cylindrical", "Box", "Face", "UV", "Environment" ]
89 | info.mapping_type = mapping_types[texture.GetMappingType()]
90 |
91 | if texture.GetMappingType() == fbx.FbxTexture.ePlanar:
92 | planar_mapping_normals = [ "X", "Y", "Z" ]
93 | info.planar_mapping_normal = planar_mapping_normals[texture.GetPlanarMappingNormal()]
94 |
95 | # get blend modes
96 | blend_modes = [ "Translucent", "Additive", "Modulate", "Modulate2", "Over", "Normal", "Dissolve", "Darken", "ColorBurn", "LinearBurn",
97 | "DarkerColor", "Lighten", "Screen", "ColorDodge", "LinearDodge", "LighterColor", "SoftLight", "HardLight", "VividLight",
98 | "LinearLight", "PinLight", "HardMix", "Difference", "Exclusion", "Substract", "Divide", "Hue", "Saturation", "Color",
99 | "Luminosity", "Overlay" ]
100 |
101 | if blend_mode >= 0:
102 | info.blend_mode = blend_modes[blend_mode]
103 |
104 | info.alpha = texture.GetDefaultAlpha()
105 |
106 | # get texture uses
107 | if isinstance(texture, fbx.FbxFileTexture):
108 | material_uses = [ "Model Material", "Default Material" ]
109 | info.material_use = material_uses[texture.GetMaterialUse()]
110 |
111 | texture_uses = [ "Standard", "Shadow Map", "Light Map", "Spherical Reflexion Map", "Sphere Reflexion Map", "Bump Normal Map" ]
112 | info.texture_use = texture_uses[texture.GetTextureUse()]
113 |
114 | return info
--------------------------------------------------------------------------------
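A hypothetical usage sketch, not part of the repository: listing the file textures attached to an FBX mesh's materials with the fbxparser/texture.py helpers above. `scene_geometry` stands in for an fbx.FbxMesh obtained from a loaded scene elsewhere.

    from aPyOpenGL.agl.fbxparser.texture import get_textures

    for tex_info in get_textures(scene_geometry):
        # `filename` is only filled in for file textures, not procedural ones
        print(tex_info.property, tex_info.name, getattr(tex_info, "filename", None))
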
/aPyOpenGL/agl/light.py:
--------------------------------------------------------------------------------
1 | import glm
2 |
3 | class Light:
4 | def __init__(
5 | self,
6 | focus_position = glm.vec3(0),
7 | color = glm.vec3(1.0),
8 | intensity = 1.0,
9 | L = 5.0,
10 | z_near = 0.1,
11 | z_far = 100.0
12 | ):
13 | self.__focus_position = focus_position
14 | self.__color = color
15 | self.__intensity = intensity
16 | self.__L = L
17 | self.__z_near = z_near
18 | self.__z_far = z_far
19 |
20 | @property
21 | def vector(self):
22 | raise NotImplementedError
23 |
24 | @property
25 | def position(self):
26 |         raise NotImplementedError
27 |
28 | @property
29 | def focus_position(self):
30 | return self.__focus_position
31 |
32 | @property
33 | def color(self):
34 | return self.__color
35 |
36 | @property
37 | def intensity(self):
38 | return self.__intensity
39 |
40 | def get_view_matrix(self):
41 | return glm.lookAt(self.position, self.__focus_position, glm.vec3(0, 1, 0))
42 |
43 | def get_projection_matrix(self):
44 | return glm.ortho(-self.__L, self.__L, -self.__L, self.__L, self.__z_near, self.__z_far)
45 |
46 | def get_view_projection_matrix(self):
47 | return self.get_projection_matrix() * self.get_view_matrix()
48 |
49 | class DirectionalLight(Light):
50 | def __init__(
51 | self,
52 | direction = glm.vec3(-1, -2, -1),
53 | focus_position = glm.vec3(0),
54 | color = glm.vec3(1.0),
55 | intensity = 1.0,
56 | L = 20.0,
57 | z_near = 0.1,
58 | z_far = 50.0
59 | ):
60 | self.__direction = direction
61 | super().__init__(focus_position, color, intensity, L, z_near, z_far)
62 |
63 | @property
64 | def position(self):
65 | return self.focus_position - glm.normalize(self.__direction) * 10
66 |
67 | @property
68 | def attenuation(self):
69 |         # not actually used for directional lights; kept for interface consistency with point lights
70 | return glm.vec3(1.0, 0.0, 0.0)
71 |
72 | @property
73 | def vector(self):
74 | return glm.vec4(self.__direction, 0)
75 |
76 |
77 | class PointLight(Light):
78 | def __init__(
79 | self,
80 | position = glm.vec3(5),
81 | focus_position = glm.vec3(0),
82 | color = glm.vec3(1.0),
83 | intensity = 1.0,
84 | attenuation = glm.vec3(0.1, 0.01, 0.0),
85 | L = 5.0,
86 | z_near = 1.0,
87 | z_far = 40.0
88 | ):
89 | self.__position = position
90 | self.__attenuation = attenuation
91 | super().__init__(focus_position, color, intensity, L, z_near, z_far)
92 |
93 | @property
94 | def position(self):
95 | return self.__position
96 |
97 | @property
98 | def attenuation(self):
99 | return self.__attenuation
100 |
101 | @property
102 | def vector(self):
103 | return glm.vec4(self.position, 1)
--------------------------------------------------------------------------------
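A minimal sketch, not part of the repository, of how the light classes above are typically constructed and queried; the import path assumes the module layout shown in this dump.

    import glm
    from aPyOpenGL.agl.light import DirectionalLight, PointLight

    sun = DirectionalLight(direction=glm.vec3(-1, -2, -1), intensity=1.5)
    lamp = PointLight(position=glm.vec3(2, 3, 2), attenuation=glm.vec3(0.1, 0.01, 0.0))

    # light-space matrix used by the shadow shaders (uLightSpaceMatrix)
    light_space = sun.get_view_projection_matrix()

    print(sun.vector)   # vec4 with w = 0 for directional lights
    print(lamp.vector)  # vec4 with w = 1 for point lights
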
/aPyOpenGL/agl/material.py:
--------------------------------------------------------------------------------
1 | import glm
2 | import copy
3 |
4 | from .texture import Texture, TextureType
5 |
6 | class Material:
7 | def __init__(self,
8 | albedo = glm.vec3(0.5),
9 | diffuse = glm.vec3(1.0),
10 | specular = glm.vec3(0.1),
11 | shininess = 10.0
12 | ):
13 | self.albedo = glm.vec3(albedo)
14 | self.alpha = 1.0
15 |
16 | # phong model
17 | self.diffuse = glm.vec3(diffuse)
18 | self.specular = glm.vec3(specular)
19 | self.shininess = shininess
20 |
21 | # pbr model
22 | self.metallic = 0.0
23 | self.roughness = 1.0
24 | self.ao = 1.0
25 |
26 | # textures
27 | self.albedo_map = Texture()
28 | self.normal_map = Texture()
29 | self.disp_map = Texture()
30 | self.metallic_map = Texture()
31 | self.roughness_map = Texture()
32 | self.ao_map = Texture()
33 |
34 | self.type_dict = {
35 | "albedo" : TextureType.eALBEDO,
36 | "diffuse" : TextureType.eDIFFUSE,
37 | "normal" : TextureType.eNORMAL,
38 | "disp" : TextureType.eDISPLACEMENT,
39 | "metallic" : TextureType.eMETALIC,
40 | "roughness" : TextureType.eROUGHNESS,
41 | "ao" : TextureType.eAO
42 | }
43 |
44 | @staticmethod
45 | def from_mtl_dict(d):
46 | res = Material()
47 |
48 | value = d.get("ambient", None)
49 | if value is not None:
50 | res.albedo = glm.vec3(value)
51 |
52 | value = d.get("diffuse", None)
53 | if value is not None:
54 | res.diffuse = glm.vec3(value)
55 |
56 | value = d.get("specular", None)
57 | if value is not None:
58 | res.specular = glm.vec3(value)
59 |
60 | value = d.get("shininess", None)
61 | if value is not None:
62 | res.shininess = float(value)
63 |
64 | def __deepcopy__(self, memo):
65 | res = Material()
66 |
67 | res.albedo = copy.deepcopy(self.albedo)
68 | res.alpha = self.alpha
69 |
70 | res.diffuse = copy.deepcopy(self.diffuse)
71 | res.specular = copy.deepcopy(self.specular)
72 | res.shininess = self.shininess
73 |
74 | res.metallic = self.metallic
75 | res.roughness = self.roughness
76 | res.ao = self.ao
77 |
78 | res.albedo_map = copy.deepcopy(self.albedo_map)
79 | res.normal_map = copy.deepcopy(self.normal_map)
80 | res.disp_map = copy.deepcopy(self.disp_map)
81 | res.metallic_map = copy.deepcopy(self.metallic_map)
82 | res.roughness_map = copy.deepcopy(self.roughness_map)
83 | res.ao_map = copy.deepcopy(self.ao_map)
84 |
85 | memo[id(self)] = res
86 | return res
87 |
88 | def set_texture(self, texture, texture_type):
89 | if not isinstance(texture_type, TextureType):
90 | texture_type = self.type_dict.get(texture_type, TextureType.eDIFFUSE)
91 |
92 | # TextureType: Unknown
93 | if texture_type == TextureType.eUNKNOWN:
94 | raise Exception("Texture type not supported")
95 |
96 | # TextureType
97 | if texture_type == TextureType.eALBEDO or texture_type == TextureType.eDIFFUSE:
98 | self.albedo_map = texture
99 | elif texture_type == TextureType.eNORMAL:
100 | self.normal_map = texture
101 | elif texture_type == TextureType.eDISPLACEMENT:
102 | self.disp_map = texture
103 | elif texture_type == TextureType.eMETALIC:
104 | self.metallic_map = texture
105 | elif texture_type == TextureType.eROUGHNESS:
106 | self.roughness_map = texture
107 | elif texture_type == TextureType.eAO:
108 | self.ao_map = texture
109 |
110 | def set_albedo(self, albedo):
111 | self.albedo = glm.vec3(albedo)
112 |
113 | def set_diffuse(self, diffuse):
114 | self.diffuse = glm.vec3(diffuse)
115 |
116 | def set_specular(self, specular):
117 | self.specular = glm.vec3(specular)
118 |
119 | def set_shininess(self, shininess):
120 | self.shininess = shininess
121 |
122 | def set_alpha(self, alpha):
123 | self.alpha = alpha
--------------------------------------------------------------------------------
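A minimal sketch, not part of the repository, showing how a Material is configured; all names come from the class above.

    import glm
    from aPyOpenGL.agl.material import Material

    mat = Material(albedo=glm.vec3(0.8, 0.2, 0.2), specular=glm.vec3(0.2), shininess=32.0)
    mat.set_alpha(0.75)

    # PBR parameters are plain attributes
    mat.metallic = 0.1
    mat.roughness = 0.6
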
/aPyOpenGL/agl/mesh.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 | from OpenGL.GL import *
3 | import glm
4 | import copy
5 |
6 | from .motion import Skeleton, Pose
7 | from .core import MeshGL
8 |
9 | class Mesh:
10 | def __init__(
11 | self,
12 | mesh_gl: MeshGL,
13 | materials = None,
14 | skeleton: Skeleton = None,
15 | joint_map: dict[str, str] = None,
16 | ):
17 | self.mesh_gl = mesh_gl
18 | self.materials = materials
19 |
20 | # skinning
21 | self.skeleton = skeleton
22 | self.joint_map = joint_map
23 | self.use_skinning = (skeleton is not None)
24 | self.buffer = [glm.mat4(1.0)] * len(self.mesh_gl.joint_names)
25 |
26 | if self.skeleton is None and self.joint_map is not None:
27 | raise ValueError("Joint map requires a skeleton")
28 |
29 | def __deepcopy__(self, memo):
30 |         res = Mesh(self.mesh_gl, copy.deepcopy(self.materials), self.skeleton, self.joint_map)
31 | res.buffer = copy.deepcopy(self.buffer)
32 | memo[id(self)] = res
33 | return res
34 |
35 | def set_materials(self, materials):
36 | self.materials = materials
37 |
38 | def update_mesh(self, pose: Pose):
39 | if self.skeleton is None:
40 | return
41 |
42 | self.buffer = [glm.mat4(1.0) for _ in range(len(self.mesh_gl.joint_names))]
43 | if self.joint_map is None:
44 | self._update_without_joint_map(pose)
45 | else:
46 | self._update_with_joint_map(pose)
47 |
48 |
49 | def _update_with_joint_map(self, pose: Pose):
50 | global_xforms = pose.global_xforms
51 | buffer_updated = [False for _ in range(len(self.mesh_gl.joint_names))]
52 | for i in range(len(pose.skeleton.joints)):
53 | # get joint names
54 | src_jname = pose.skeleton.joints[i].name
55 | tgt_jname = self.joint_map.get(src_jname, None)
56 | if tgt_jname is None:
57 | continue
58 | tgt_idx = self.mesh_gl.name_to_idx.get(tgt_jname, None)
59 | if tgt_idx is None:
60 | continue
61 |
62 | # map global xform
63 | global_xform = glm.mat4(*global_xforms[i].T.ravel())
64 | bind_xform_inv = self.mesh_gl.bind_xform_inv[tgt_idx]
65 | self.buffer[tgt_idx] = global_xform * bind_xform_inv
66 | buffer_updated[tgt_idx] = True
67 |
68 | for i, updated in enumerate(buffer_updated):
69 | if not updated:
70 | jname = self.mesh_gl.joint_names[i]
71 | parent_idx = self.skeleton.parent_idx[self.skeleton.idx_by_name[jname]]
72 | pjoint_name = self.skeleton.joints[parent_idx].name
73 | self.buffer[i] = self.buffer[self.mesh_gl.name_to_idx[pjoint_name]]
74 |
75 | def _update_without_joint_map(self, pose: Pose):
76 | global_xforms = pose.global_xforms
77 | for i in range(len(self.mesh_gl.joint_names)):
78 | jidx = self.skeleton.idx_by_name[self.mesh_gl.joint_names[i]]
79 | global_xform = glm.mat4(*global_xforms[jidx].T.ravel())
80 | bind_xform_inv = self.mesh_gl.bind_xform_inv[i]
81 | self.buffer[i] = global_xform * bind_xform_inv
--------------------------------------------------------------------------------
/aPyOpenGL/agl/model.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 | from OpenGL.GL import *
3 |
4 | from .core import MeshGL
5 | from .material import Material
6 | from .motion import Skeleton, Pose
7 | from .mesh import Mesh
8 |
9 | class Model:
10 | def __init__(
11 | self,
12 | meshes: list[tuple[MeshGL, Material]] = None,
13 | skeleton: Skeleton = None,
14 | joint_map: dict[str, str] = None,
15 | ):
16 | if meshes is None and skeleton is None:
17 |             raise ValueError("meshes and skeleton cannot both be None")
18 | if skeleton is None and joint_map is not None:
19 | raise ValueError("Joint map requires a skeleton")
20 |
21 | self.skeleton = skeleton
22 | self.meshes = [Mesh(meshes[i][0], meshes[i][1], skeleton=skeleton, joint_map=joint_map) for i in range(len(meshes))] if meshes is not None else []
23 |
24 | def set_identity_joint_map(self):
25 | if self.skeleton is None:
26 | raise ValueError("Joint map requires a skeleton")
27 | joint_map = {joint.name: joint.name for joint in self.skeleton.joints}
28 | self.set_joint_map(joint_map)
29 |
30 | def set_joint_map(self, joint_map: dict[str, str]):
31 | for mesh in self.meshes:
32 | mesh.joint_map = joint_map
33 |
34 | def set_pose(self, pose: Pose):
35 | for mesh in self.meshes:
36 | mesh.update_mesh(pose)
--------------------------------------------------------------------------------
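A hypothetical sketch, not part of the repository, of driving a skinned Model with a Pose; `model` and `pose` are assumed to come from loaders elsewhere in the package (e.g. the FBX or BVH importers).

    def drive_model(model, pose):
        # map each joint of the model's skeleton to the mesh joint of the same name
        model.set_identity_joint_map()
        # recomputes each Mesh's per-joint skinning matrices (Mesh.buffer)
        model.set_pose(pose)
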
/aPyOpenGL/agl/motion/__init__.py:
--------------------------------------------------------------------------------
1 | from .joint import Joint
2 | from .skeleton import Skeleton
3 | from .pose import Pose
4 | from .motion import Motion
--------------------------------------------------------------------------------
/aPyOpenGL/agl/motion/joint.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 |
3 | import numpy as np
4 | from aPyOpenGL.transforms import n_quat, n_rotmat
5 |
6 | class Joint:
7 | """
8 | Joint of a skeleton.
9 |
10 | Attributes:
11 | name (str): Name of the joint
12 | pre_quat (np.ndarray): Pre-rotation of the joint in quaternion
13 | local_pos (np.ndarray): Local position of the joint relative to its parent
14 | """
15 | def __init__(
16 | self,
17 | name: str,
18 | pre_quat: np.ndarray = None,
19 | local_pos: np.ndarray = None
20 | ):
21 | self.name = str(name)
22 | self.__pre_quat = np.array([1, 0, 0, 0], dtype=np.float32) if pre_quat is None else np.array(pre_quat, dtype=np.float32)
23 | self.__local_pos = np.array([0, 0, 0], dtype=np.float32) if local_pos is None else np.array(local_pos, dtype=np.float32)
24 |
25 | if self.__pre_quat.shape != (4,):
26 | raise ValueError(f"Pre-rotation quaternion must be a 4-dimensional vector, but got {self.__pre_quat.shape}.")
27 | if self.__local_pos.shape != (3,):
28 | raise ValueError(f"Local position must be a 3-dimensional vector, but got {self.__local_pos.shape}.")
29 |
30 | self._recompute_pre_xform()
31 |
32 | @property
33 | def pre_quat(self):
34 | return self.__pre_quat.copy()
35 |
36 | @property
37 | def local_pos(self):
38 | return self.__local_pos.copy()
39 |
40 | @property
41 | def pre_xform(self):
42 | return self.__pre_xform.copy()
43 |
44 | @pre_quat.setter
45 | def pre_quat(self, value):
46 | self.__pre_quat = np.array(value, dtype=np.float32)
47 | if self.__pre_quat.shape != (4,):
48 | raise ValueError(f"Pre-rotation quaternion must be a 4-dimensional vector, but got {self.__pre_quat.shape}.")
49 | self._recompute_pre_xform()
50 |
51 | @local_pos.setter
52 | def local_pos(self, value):
53 | self.__local_pos = np.array(value, dtype=np.float32)
54 | if self.__local_pos.shape != (3,):
55 | raise ValueError(f"Local position must be a 3-dimensional vector, but got {self.__local_pos.shape}.")
56 | self._recompute_pre_xform()
57 |
58 | def _recompute_pre_xform(self):
59 | pre_rotmat = n_quat.to_rotmat(self.__pre_quat)
60 | self.__pre_xform = n_rotmat.to_xform(pre_rotmat, translation=self.__local_pos)
--------------------------------------------------------------------------------
/aPyOpenGL/agl/motion/pose.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 | from typing import Union
3 |
4 | import numpy as np
5 | from copy import deepcopy
6 |
7 | from .skeleton import Skeleton
8 | from aPyOpenGL import transforms as trf
9 |
10 |
11 | def _global_xforms_to_skeleton_xforms(global_xforms, parent_idx):
12 | noj = global_xforms.shape[0]
13 |
14 | skeleton_xforms = np.stack([np.identity(4, dtype=np.float32) for _ in range(noj - 1)], axis=0)
15 | for i in range(1, noj):
16 | parent_pos = global_xforms[parent_idx[i], :3, 3]
17 |
18 | target_dir = global_xforms[i, :3, 3] - parent_pos
19 | target_dir = target_dir / (np.linalg.norm(target_dir, axis=-1, keepdims=True) + 1e-8)
20 |
21 | quat = trf.n_quat.between_vecs(np.array([0, 1, 0], dtype=np.float32), target_dir)
22 |
23 | skeleton_xforms[i-1, :3, :3] = trf.n_quat.to_rotmat(quat)
24 | skeleton_xforms[i-1, :3, 3] = (parent_pos + global_xforms[i, :3, 3]) / 2
25 |
26 | return skeleton_xforms
27 |
28 |
29 | class Pose:
30 | """
31 | Represents a pose of a skeleton.
32 |     It contains the local rotation of each joint, stored as a quaternion, and the root position.
33 |
34 | global_xforms[i] = global_xforms[parent_idx[i]] @ pre_xform[i] @ local_rots[i]
35 |
36 | Attributes:
37 | skeleton (Skeleton) : The skeleton that this pose belongs to.
38 | local_quats (numpy.ndarray): Local rotations of each joint in quaternion.
39 |         root_pos (numpy.ndarray): Root position in world space.
40 | """
41 | def __init__(
42 | self,
43 | skeleton: Skeleton,
44 | local_quats: Union[np.ndarray, list[np.ndarray]] = None,
45 | root_pos: np.ndarray = None,
46 | ):
47 | # set attributes
48 | self.__skeleton = skeleton
49 | self.__local_quats = np.stack([trf.n_quat.identity()] * skeleton.num_joints, axis=0) if local_quats is None else np.array(local_quats, dtype=np.float32)
50 | self.__root_pos = np.zeros(3, dtype=np.float32) if root_pos is None else np.array(root_pos, dtype=np.float32)
51 |
52 | # check shapes
53 | if self.__skeleton.num_joints == 0:
54 | raise ValueError("Cannot create a pose for an empty skeleton.")
55 |
56 | # global transformations
57 | self.__global_updated = False
58 | self.__global_xforms, self.__skeleton_xforms = None, None
59 |
60 |
61 | @property
62 | def skeleton(self):
63 | return deepcopy(self.__skeleton)
64 |
65 |
66 | @property
67 | def local_quats(self):
68 | return self.__local_quats.copy()
69 |
70 |
71 | @property
72 | def root_pos(self):
73 | return self.__root_pos.copy()
74 |
75 |
76 | @property
77 | def global_xforms(self):
78 | if not self.__global_updated:
79 | self.update_global_xform()
80 | return self.__global_xforms.copy()
81 |
82 |
83 | @property
84 | def skeleton_xforms(self):
85 | if not self.__global_updated:
86 | self.update_global_xform()
87 | return self.__skeleton_xforms.copy()
88 |
89 |
90 | @local_quats.setter
91 | def local_quats(self, value):
92 | self.__local_quats = np.array(value)
93 | self.__global_updated = False
94 |
95 |
96 | @root_pos.setter
97 | def root_pos(self, value):
98 | self.__root_pos = np.array(value)
99 | self.__global_updated = False
100 |
101 |
102 | def update_global_xform(self):
103 | if self.__global_updated:
104 | return
105 |
106 | # update global xform
107 | gq, gp = trf.n_quat.fk(self.__local_quats, self.__root_pos, self.__skeleton)
108 | gr = trf.n_quat.to_rotmat(gq)
109 | gx = np.stack([np.identity(4, dtype=np.float32) for _ in range(self.__skeleton.num_joints)], axis=0)
110 | gx[:, :3, :3] = gr
111 | gx[:, :3, 3] = gp
112 |
113 | self.__global_xforms = gx
114 |
115 | # update skeleton xform
116 | self.__skeleton_xforms = _global_xforms_to_skeleton_xforms(self.__global_xforms, self.__skeleton.parent_idx)
117 |
118 | self.__global_updated = True
119 |
120 |
121 | def set_global_xform(self, global_xforms, skeleton_xforms):
122 | self.__global_xforms = np.array(global_xforms, dtype=np.float32)
123 | self.__skeleton_xforms = np.array(skeleton_xforms, dtype=np.float32)
124 | self.__global_updated = True
125 |
126 |
127 | def remove_joint_by_name(self, joint_name):
128 | joint_indices = self.__skeleton.remove_joint_by_name(joint_name)
129 | self.__local_quats = np.delete(self.__local_quats, joint_indices, axis=0)
130 |
131 |
132 | def mirror(self, pair_indices, sym_axis=None):
133 | # swap joint indices
134 | local_quats = self.local_quats[pair_indices]
135 | root_pos = self.root_pos
136 |
137 | # mirror by symmetry axis
138 | if sym_axis is None:
139 | sym_axis = self.__skeleton.find_symmetry_axis(pair_indices)
140 | else:
141 | assert sym_axis in ["x", "y", "z"], f"Invalid axis {sym_axis} for symmetry axis, must be one of ['x', 'y', 'z']"
142 |
143 | idx = {"x": 0, "y": 1, "z": 2}[sym_axis]
144 | local_quats[:, (0, idx+1)] *= -1
145 | root_pos[idx] *= -1
146 |
147 | return Pose(deepcopy(self.__skeleton), local_quats, root_pos)
148 |
149 |
150 | @classmethod
151 | def from_numpy(cls, skeleton, local_quats, root_pos):
152 | return cls(skeleton, local_quats, root_pos)
153 |
154 | @classmethod
155 | def from_torch(cls, skeleton, local_quats, root_pos):
156 | return cls(skeleton, local_quats.cpu().numpy(), root_pos.cpu().numpy())
157 |
158 | # """ IK functions """
159 | # def two_bone_ik(self, base_idx, effector_idx, target_p, eps=1e-8, facing="forward"):
160 | # mid_idx = self.__skeleton.parent_idx[effector_idx]
161 | # if self.__skeleton.parent_idx[mid_idx] != base_idx:
162 | # raise ValueError(f"{base_idx} and {effector_idx} are not in a two bone IK hierarchy")
163 |
164 | # a = self.global_p[base_idx]
165 | # b = self.global_p[mid_idx]
166 | # c = self.global_p[effector_idx]
167 |
168 | # global_a_R = self.global_R[base_idx]
169 | # global_b_R = self.global_R[mid_idx]
170 |
171 | # lab = np.linalg.norm(b - a)
172 | # lcb = np.linalg.norm(b - c)
173 | # lat = np.clip(np.linalg.norm(target_p - a), eps, lab + lcb - eps)
174 |
175 | # ac_ab_0 = np.arccos(np.clip(np.dot(mathops.normalize(c - a), mathops.normalize(b - a)), -1, 1))
176 | # ba_bc_0 = np.arccos(np.clip(np.dot(mathops.normalize(a - b), mathops.normalize(c - b)), -1, 1))
177 | # ac_at_0 = np.arccos(np.clip(np.dot(mathops.normalize(c - a), mathops.normalize(target_p - a)), -1, 1))
178 |
179 | # ac_ab_1 = np.arccos(np.clip((lcb*lcb - lab*lab - lat*lat) / (-2*lab*lat), -1, 1))
180 | # ba_bc_1 = np.arccos(np.clip((lat*lat - lab*lab - lcb*lcb) / (-2*lab*lcb), -1, 1))
181 |
182 | # axis_0 = mathops.normalize(np.cross(c - a, self.forward if facing == "forward" else -self.forward))
183 | # axis_1 = mathops.normalize(np.cross(c - a, target_p - a))
184 |
185 | # r0 = rotation.A_to_R(ac_ab_1 - ac_ab_0, rotation.R_inv(global_a_R) @ axis_0)
186 | # r1 = rotation.A_to_R(ba_bc_1 - ba_bc_0, rotation.R_inv(global_b_R) @ axis_0)
187 | # r2 = rotation.A_to_R(ac_at_0, rotation.R_inv(global_a_R) @ axis_1)
188 |
189 | # self.local_R[base_idx] = self.local_R[base_idx] @ r0 @ r2
190 | # self.local_R[mid_idx] = self.local_R[mid_idx] @ r1
191 |
192 | # self.update()
--------------------------------------------------------------------------------
/aPyOpenGL/agl/motion/skeleton.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 |
3 | import numpy as np
4 |
5 | from .joint import Joint
6 |
7 | class Skeleton:
8 | """
9 | Hierarchical structure of joints
10 |
11 | Attributes:
12 | joints (list[Joint]): List of joints
13 | v_up (np.ndarray): Up vector of the skeleton
14 | v_forward (np.ndarray): Forward vector of the skeleton
15 |         parent_idx (list[int]): List of parent indices
16 |         children_idx (list[list[int]]): List of child index lists
17 |         idx_by_name (dict[str, int]): Dictionary mapping joint names to indices
18 | """
19 | def __init__(
20 | self,
21 | joints: list[Joint] = None,
22 | ):
23 | self.__joints: list[Joint] = [] if joints is None else joints
24 | self.__parent_idx: list[int] = []
25 | self.__children_idx: list[list[int]] = []
26 | self.__idx_by_name: dict = {}
27 |
28 | if len(self.__joints) > 0:
29 | self.recompute_pre_xform()
30 |
31 | @property
32 | def num_joints(self):
33 | return len(self.__joints)
34 |
35 | @property
36 | def pre_xforms(self):
37 | return self.__pre_xforms.copy()
38 |
39 | @property
40 | def joints(self):
41 | return self.__joints.copy()
42 |
43 | @property
44 | def parent_idx(self):
45 | return self.__parent_idx.copy()
46 |
47 | @property
48 | def children_idx(self):
49 | return self.__children_idx.copy()
50 |
51 | @property
52 | def idx_by_name(self):
53 | return self.__idx_by_name.copy()
54 |
55 | def recompute_pre_xform(self):
56 | self.__pre_xforms = np.stack([joint.pre_xform for joint in self.__joints])
57 |
58 | def add_joint(self, joint_name, pre_quat=None, local_pos=None, parent_idx=None):
59 | # add parent and children indices
60 | if parent_idx is None or parent_idx == -1:
61 | if len(self.__joints) > 0:
62 | raise ValueError(f"Root joint {self.__joints[0].name} already exists. Cannot add {joint_name}.")
63 | self.__parent_idx.append(-1)
64 | else:
65 | self.__parent_idx.append(parent_idx)
66 | self.__children_idx[parent_idx].append(len(self.__joints))
67 |
68 | # add joint
69 | joint = Joint(joint_name, pre_quat, local_pos)
70 | self.__children_idx.append(list())
71 | self.__idx_by_name[joint_name] = len(self.__joints)
72 | self.__joints.append(joint)
73 |
74 | # recompute pre-transform
75 | self.recompute_pre_xform()
76 |
77 | def remove_joint_by_name(self, joint_name):
78 | joint_idx = self.__idx_by_name.get(joint_name, None)
79 | if joint_idx is None:
80 | raise ValueError(f"Joint {joint_name} does not exist.")
81 | return self.remove_joint_by_idx(joint_idx)
82 |
83 | def remove_joint_by_idx(self, joint_idx):
84 | remove_indices = [joint_idx]
85 |
86 | def dfs(jidx):
87 | for cidx in self.__children_idx[jidx]:
88 | remove_indices.append(cidx)
89 | dfs(cidx)
90 |
91 | dfs(joint_idx)
92 | remove_indices.sort(reverse=True)
93 |
94 | for ridx in remove_indices:
95 | self.__children_idx[self.__parent_idx[ridx]].remove(ridx)
96 | self.__joints.pop(ridx)
97 | self.__parent_idx.pop(ridx)
98 | self.__children_idx.pop(ridx)
99 |
100 | for i in range(len(self.__joints)):
101 | self.__parent_idx[i] = self.__parent_idx[i] - sum([1 for ridx in remove_indices if ridx < self.__parent_idx[i]])
102 | self.__children_idx[i] = [cidx - sum([1 for ridx in remove_indices if ridx < cidx]) for cidx in self.__children_idx[i]]
103 |
104 | for i, joint in enumerate(self.__joints):
105 | self.__idx_by_name[joint.name] = i
106 |
107 | self.recompute_pre_xform()
108 |
109 | return remove_indices
110 |
111 | def find_symmetry_axis(self, pair_indices):
112 |         assert len(self.__joints) == len(pair_indices), f"number of pair indices {len(pair_indices)} must match the number of joints {len(self.__joints)}"
113 |
114 | offsets = self.pre_xforms[:, :3, -1].copy()
115 | offsets = offsets - offsets[pair_indices]
116 |
117 | x = np.max(np.abs(offsets[:, 0]))
118 | y = np.max(np.abs(offsets[:, 1]))
119 | z = np.max(np.abs(offsets[:, 2]))
120 |
121 | if x > y and x > z:
122 | axis = "x"
123 | elif y > x and y > z:
124 | axis = "y"
125 | elif z > x and z > y:
126 | axis = "z"
127 | else:
128 | raise Exception("Symmetry axis not found")
129 |
130 | return axis
--------------------------------------------------------------------------------
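A minimal sketch, not part of the repository, showing how the Joint, Skeleton, and Pose classes above fit together; it assumes the imports resolve as laid out in motion/__init__.py.

    import numpy as np
    from aPyOpenGL.agl.motion import Skeleton, Pose

    # two-joint chain: a root and one child offset 0.5 along +y
    skel = Skeleton()
    skel.add_joint("root")
    skel.add_joint("child", local_pos=np.array([0.0, 0.5, 0.0]), parent_idx=0)

    # identity local rotations, root at the origin
    pose = Pose(skel)
    print(pose.global_xforms.shape)       # (2, 4, 4)
    print(pose.global_xforms[1, :3, 3])   # child position, roughly [0, 0.5, 0]
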
/aPyOpenGL/agl/obj.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 |
3 | import os
4 | from OpenGL.GL import *
5 | import glm
6 |
7 | from .core import VAO, VertexGL, bind_mesh
8 | from .material import Material
9 |
10 | def parse_obj(path, scale, verbose=False):
11 | if not path.endswith(".obj"):
12 | raise Exception(f"File must be .obj format, but got {path}")
13 |
14 | positions, uvs, normals, faces = [], [], [], []
15 | mtl_files = []
16 | current_mtl = "default"
17 |
18 | with open(path, "r") as f:
19 | lines = f.readlines()
20 |
21 | for line_idx, line in enumerate(lines):
22 | tokens = line.strip().split()
23 | if not tokens:
24 | continue
25 |
26 | prefix = tokens[0]
27 |
28 | # material
29 | if prefix == "mtllib":
30 | file_path = os.path.join(os.path.dirname(path), tokens[1])
31 | mtl_files.append(file_path)
32 |
33 | # group
34 | elif prefix == "g":
35 | continue
36 |
37 | # vertex
38 | elif prefix == "v":
39 | vertex = glm.vec3(list(map(float, tokens[1:4]))) * scale
40 | positions.append(vertex)
41 |
42 | # uv
43 | elif prefix == "vt":
44 | tex_coord = glm.vec2(list(map(float, tokens[1:])))
45 | uvs.append(tex_coord)
46 |
47 | # normal
48 | elif prefix == "vn":
49 | normal = glm.vec3(list(map(float, tokens[1:])))
50 | normals.append(normal)
51 |
52 | # face
53 | elif prefix == "f":
54 | vertices = [token.split("/") for token in tokens[1:]]
55 |
56 | if len(vertices) < 3:
57 | raise Exception(f"Faces with less than 3 vertices are not supported: {line}")
58 |
59 |             # simple triangles are used directly; faces with more vertices are fan-triangulated below
60 | elif len(vertices) == 3:
61 | # Simple triangle, directly use it
62 | for vtn in vertices:
63 | position_index = int(vtn[0]) - 1
64 | uv_index = int(vtn[1]) - 1 if len(vtn) > 1 and vtn[1] != "" else -1
65 | normal_index = int(vtn[2]) - 1 if len(vtn) > 2 and vtn[2] != "" else -1
66 | faces.append((position_index, uv_index, normal_index, current_mtl))
67 | else:
68 | # general fan triangulation, taking the first vertex as the anchor
69 | for i in range(1, len(vertices) - 1):
70 | triangle = [vertices[0], vertices[i], vertices[i + 1]]
71 | for vtn in triangle:
72 | position_index = int(vtn[0]) - 1
73 | uv_index = int(vtn[1]) - 1 if len(vtn) > 1 and vtn[1] != "" else -1
74 | normal_index = int(vtn[2]) - 1 if len(vtn) > 2 and vtn[2] != "" else -1
75 | faces.append((position_index, uv_index, normal_index, current_mtl))
76 | # material
77 | elif prefix == "usemtl":
78 | current_mtl = tokens[1]
79 |
80 | # unknown
81 | else:
82 | if verbose:
83 | print("Line {}: Unknown line {}".format(line_idx, line.replace("\n", "")))
84 | continue
85 |
86 | return positions, uvs, normals, faces, mtl_files
87 |
88 | def parse_mtl(path):
89 | if path is None:
90 | return {}
91 |
92 | if not path.endswith(".mtl"):
93 | raise Exception(f"File must be .mtl format, but got {path}")
94 |
95 | materials = {}
96 | material_name = None
97 |
98 | with open(path, "r") as f:
99 | lines = f.readlines()
100 |
101 | for line_idx, line in enumerate(lines):
102 | tokens = line.strip().split()
103 | if not tokens:
104 | continue
105 |
106 | prefix = tokens[0]
107 |
108 | # newmtl
109 | if prefix == "newmtl":
110 | material_name = tokens[1]
111 | materials[material_name] = Material()
112 |
113 | # ambient - not used in our renderer
114 | elif prefix == "Ka":
115 | # ambient = glm.vec3(list(map(float, tokens[1:])))
116 | pass
117 |
118 | # diffuse - albedo in our renderer
119 | # This is because "diffuse" in our renderer is the color of diffuse light
120 | elif prefix == "Kd":
121 | materials[material_name].albedo = glm.vec3(list(map(float, tokens[1:])))
122 |
123 | # specular
124 | elif prefix == "Ks":
125 | materials[material_name].specular = glm.vec3(list(map(float, tokens[1:])))
126 |
127 | # shininess
128 | elif prefix == "Ns":
129 | materials[material_name].shininess = float(tokens[1])
130 |
131 | # # texture - not supported yet
132 | # elif prefix == "map_Kd":
133 | # materials[material_name]["texture"] = tokens[1]
134 |
135 | else:
136 | print("Line {}: Unknown line {}".format(line_idx, line.replace("\n", "")))
137 | continue
138 |
139 | return materials
140 |
141 | def make_vertex(face, positions, uvs, normals, name_to_mtl_idx):
142 | p_idx, uv_idx, n_idx, mtl_name = face
143 |
144 | vertex = VertexGL()
145 | vertex.position = positions[p_idx]
146 | vertex.uv = uvs[uv_idx] if uv_idx != -1 else glm.vec2(0.0)
147 | vertex.normal = normals[n_idx] if n_idx != -1 else glm.vec3(0.0)
148 | vertex.material_id = name_to_mtl_idx[mtl_name]
149 |
150 | return vertex
151 |
152 | class Obj(VAO):
153 | def __init__(self, path, scale=1.0):
154 | v_array, v_index, materials = Obj.generate_vertices(path, scale)
155 | vao = bind_mesh(v_array, v_index)
156 | super().__init__(vao.id, vao.vbos, vao.ebo, vao.indices)
157 |
158 | self.materials = materials
159 |
160 |
161 | @staticmethod
162 | def generate_vertices(path, scale=1.0):
163 | positions, uvs, normals, faces, mtl_files = parse_obj(path, scale)
164 |
165 | # parse materials
166 | mtl_dict = { "default": Material() }
167 | for file in mtl_files:
168 | if file is not None:
169 | mtl_dict.update(parse_mtl(file))
170 | materials = [mtl_dict[name] for name in mtl_dict.keys()]
171 | name_to_mtl_idx = {name: idx for idx, name in enumerate(mtl_dict.keys())}
172 |
173 | # generate vertices
174 | vertex_array = [VertexGL(
175 | position = positions[p_idx],
176 | normal = normals[n_idx] if n_idx != -1 else glm.vec3(0),
177 | uv = uvs[uv_idx] if uv_idx != -1 else glm.vec2(0),
178 | material_id = name_to_mtl_idx[mtl_name]
179 | ) for p_idx, uv_idx, n_idx, mtl_name in faces]
180 |
181 | vertex_index = list(range(len(vertex_array)))
182 |
183 | return vertex_array, vertex_index, materials
--------------------------------------------------------------------------------
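A minimal sketch, not part of the repository, of inspecting an .obj file with the pure-Python parsers above without creating any GL buffers (the Obj class itself needs an active GL context because it calls bind_mesh). The path is a placeholder.

    from aPyOpenGL.agl.obj import parse_obj, parse_mtl

    positions, uvs, normals, faces, mtl_files = parse_obj("path/to/model.obj", scale=1.0)
    print(len(positions), "positions,", len(faces) // 3, "triangles")

    materials = {}
    for mtl_path in mtl_files:
        materials.update(parse_mtl(mtl_path))
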
/aPyOpenGL/agl/shader/cubemap.fs:
--------------------------------------------------------------------------------
1 | #version 430
2 |
3 | // --------------------------------------------
4 | // input vertex data
5 | // --------------------------------------------
6 | in vec3 fTexCoord;
7 |
8 | // --------------------------------------------
9 | // output fragment color
10 | // --------------------------------------------
11 | out vec4 FragColor;
12 |
13 | // --------------------------------------------
14 | // uniform
15 | // --------------------------------------------
16 | uniform samplerCube uSkybox;
17 |
18 | // --------------------------------------------
19 | // main function
20 | // --------------------------------------------
21 | void main()
22 | {
23 | FragColor = texture(uSkybox, fTexCoord);
24 | }
--------------------------------------------------------------------------------
/aPyOpenGL/agl/shader/cubemap.vs:
--------------------------------------------------------------------------------
1 | #version 430
2 |
3 | // --------------------------------------------
4 | // input vertex data
5 | // --------------------------------------------
6 | layout(location=0) in vec3 vPosition;
7 |
8 | // --------------------------------------------
9 | // output vertex data
10 | // --------------------------------------------
11 | out vec3 fTexCoord;
12 |
13 | // --------------------------------------------
14 | // uniform data
15 | // --------------------------------------------
16 | uniform mat4 PV;
17 |
18 | void main()
19 | {
20 | fTexCoord = vPosition;
21 | vec4 pos = PV * vec4(vPosition, 1.0f);
22 | gl_Position = pos.xyww;
23 | }
--------------------------------------------------------------------------------
/aPyOpenGL/agl/shader/equirect.fs:
--------------------------------------------------------------------------------
1 | #version 430
2 |
3 | // --------------------------------------------
4 | // input vertex data
5 | // --------------------------------------------
6 | in vec3 fPosition;
7 |
8 | // --------------------------------------------
9 | // output fragment color
10 | // --------------------------------------------
11 | out vec4 FragColor;
12 |
13 | // --------------------------------------------
14 | // uniform
15 | // --------------------------------------------
16 | uniform sampler2D uEquirectangularMap;
17 |
18 | // --------------------------------------------
19 | // constants
20 | // --------------------------------------------
21 | const vec2 invAtan = vec2(0.1591f, 0.3183f);
22 |
23 | vec2 SampleSphericalMap(vec3 v)
24 | {
25 | vec2 uv = vec2(atan(v.z, v.x), asin(v.y));
26 | uv *= invAtan;
27 | uv += 0.5f;
28 |
29 | return uv;
30 | }
31 |
32 | void main()
33 | {
34 | vec2 uv = SampleSphericalMap(normalize(fPosition));
35 | vec3 color = texture(uEquirectangularMap, uv).rgb;
36 | // color = pow(color, vec3(2.2f));
37 |
38 | FragColor = vec4(color, 1.0f);
39 | }
--------------------------------------------------------------------------------
/aPyOpenGL/agl/shader/equirect.vs:
--------------------------------------------------------------------------------
1 | #version 430
2 |
3 | // --------------------------------------------
4 | // input vertex data
5 | // --------------------------------------------
6 | layout(location=0) in vec3 vPosition;
7 |
8 | // --------------------------------------------
9 | // output vertex data
10 | // --------------------------------------------
11 | out vec3 fPosition;
12 |
13 | // --------------------------------------------
14 | // uniform data
15 | // --------------------------------------------
16 | uniform mat4 uProjection;
17 | uniform mat4 uView;
18 |
19 | void main()
20 | {
21 | fPosition = vPosition;
22 | gl_Position = uProjection * uView * vec4(vPosition, 1.0f);
23 | }
--------------------------------------------------------------------------------
/aPyOpenGL/agl/shader/lbs.vs:
--------------------------------------------------------------------------------
1 | #version 430
2 | #define MAX_JOINT_NUM 150
3 | uniform mat4 uLbsJoints[MAX_JOINT_NUM];
4 |
5 | // --------------------------------------------
6 | // input vertex data
7 | // --------------------------------------------
8 | layout(location=0) in vec3 vPosition;
9 | layout(location=1) in vec3 vNormal;
10 | layout(location=2) in vec2 vTexCoord;
11 | layout(location=3) in vec3 vTangent;
12 | layout(location=4) in vec3 vBitangent;
13 | layout(location=5) in int vMaterialID;
14 | layout(location=6) in ivec4 vLbsJointIDs1;
15 | layout(location=7) in vec4 vLbsWeights1;
16 | layout(location=8) in ivec4 vLbsJointIDs2;
17 | layout(location=9) in vec4 vLbsWeights2;
18 |
19 | // --------------------------------------------
20 | // output vertex data
21 | // --------------------------------------------
22 | out vec3 fPosition;
23 | out vec2 fTexCoord;
24 | out vec3 fTangent;
25 | out vec3 fBitangent;
26 | out vec3 fNormal;
27 | flat out int fMaterialID;
28 | out vec4 fPosLightSpace;
29 |
30 | // --------------------------------------------
31 | // uniform data
32 | // --------------------------------------------
33 | uniform mat4 uPV;
34 | uniform mat4 uLightSpaceMatrix;
35 |
36 | mat4 GetJointMatrix(ivec4 ids, vec4 weights)
37 | {
38 | mat4 m = mat4(0.0f);
39 | for (int i = 0; i < 4 && 0 <= ids[i] && ids[i] < MAX_JOINT_NUM; ++i)
40 | {
41 | m += uLbsJoints[ids[i]] * weights[i];
42 | }
43 | return m;
44 | }
45 |
46 | void main()
47 | {
48 | // LBS
49 | mat4 lbsModel = GetJointMatrix(vLbsJointIDs1, vLbsWeights1) + GetJointMatrix(vLbsJointIDs2, vLbsWeights2);
50 |
51 | fPosition = vec3(lbsModel * vec4(vPosition, 1.0f));
52 | fTangent = normalize(mat3(lbsModel) * vTangent);
53 | fBitangent = normalize(mat3(lbsModel) * vBitangent);
54 | fNormal = normalize(transpose(inverse(mat3(lbsModel))) * vNormal);
55 | fTexCoord = vTexCoord;
56 | fPosLightSpace = uLightSpaceMatrix * vec4(fPosition, 1.0f);
57 | fMaterialID = vMaterialID;
58 |
59 | gl_Position = uPV * vec4(fPosition, 1.0f);
60 | }
--------------------------------------------------------------------------------
/aPyOpenGL/agl/shader/shadow.fs:
--------------------------------------------------------------------------------
1 | #version 430
2 |
3 | void main()
4 | {
5 | // gl_FragDepth = gl_FragCoord.z;
6 | }
--------------------------------------------------------------------------------
/aPyOpenGL/agl/shader/shadow.vs:
--------------------------------------------------------------------------------
1 | #version 430
2 | #define MAX_JOINT_NUM 100
3 | uniform mat4 uLbsJoints[MAX_JOINT_NUM];
4 |
5 | // --------------------------------------------
6 | // input vertex data
7 | // --------------------------------------------
8 | layout(location=0) in vec3 vPosition;
9 | layout(location=4) in ivec4 vLbsJointIDs1;
10 | layout(location=5) in vec4 vLbsWeights1;
11 | layout(location=6) in ivec4 vLbsJointIDs2;
12 | layout(location=7) in vec4 vLbsWeights2;
13 |
14 | // --------------------------------------------
15 | // uniform data
16 | // --------------------------------------------
17 | uniform mat4 uLightSpaceMatrix;
18 | uniform mat4 uModel;
19 | uniform bool uIsSkinned;
20 |
21 | #define MAX_INSTANCE_NUM 100
22 | uniform int uInstanceNum;
23 | uniform mat4 uInstanceModel[MAX_INSTANCE_NUM];
24 |
25 | mat4 GetJointMatrix(ivec4 ids, vec4 weights)
26 | {
27 | mat4 m = mat4(0.0f);
28 | for (int i = 0; i < 4; ++i)
29 | {
30 | if (0 <= ids[i] && ids[i] < MAX_JOINT_NUM)
31 | {
32 | m += uLbsJoints[ids[i]] * weights[i];
33 | }
34 | else
35 | {
36 | break;
37 | }
38 | }
39 | return m;
40 | }
41 |
42 | void main()
43 | {
44 | if (uIsSkinned)
45 | {
46 | // LBS
47 | mat4 lbsModel = GetJointMatrix(vLbsJointIDs1, vLbsWeights1) + GetJointMatrix(vLbsJointIDs2, vLbsWeights2);
48 | gl_Position = uLightSpaceMatrix * lbsModel * vec4(vPosition, 1.0f);
49 | }
50 | else if (uInstanceNum == 1)
51 | {
52 | gl_Position = uLightSpaceMatrix * uModel * vec4(vPosition, 1.0f);
53 | }
54 | else
55 | {
56 | gl_Position = uLightSpaceMatrix * uInstanceModel[gl_InstanceID] * vec4(vPosition, 1.0f);
57 | }
58 | }
--------------------------------------------------------------------------------
/aPyOpenGL/agl/shader/text.fs:
--------------------------------------------------------------------------------
1 | #version 430
2 |
3 | // --------------------------------------------
4 | // input vertex data
5 | // --------------------------------------------
6 | in vec2 fTexCoord;
7 |
8 | // --------------------------------------------
9 | // output fragment color
10 | // --------------------------------------------
11 | out vec4 FragColor;
12 |
13 | // --------------------------------------------
14 | // uniform
15 | // --------------------------------------------
16 | uniform sampler2D uFontTexture;
17 | uniform vec3 uTextColor;
18 |
19 | // --------------------------------------------
20 | // main function
21 | // --------------------------------------------
22 | void main()
23 | {
24 | vec4 sampled = vec4(1.0f, 1.0f, 1.0f, texture(uFontTexture, fTexCoord).r);
25 | FragColor = vec4(uTextColor, 1.0f) * sampled;
26 | }
--------------------------------------------------------------------------------
/aPyOpenGL/agl/shader/text.vs:
--------------------------------------------------------------------------------
1 | #version 430
2 |
3 | // --------------------------------------------
4 | // input vertex data
5 | // --------------------------------------------
6 | layout(location=0) in vec4 vPosition; // vec2 position, vec2 texcoord
7 |
8 | // --------------------------------------------
9 | // output vertex data
10 | // --------------------------------------------
11 | out vec2 fTexCoord;
12 |
13 | // --------------------------------------------
14 | // uniform data
15 | // --------------------------------------------
16 | uniform mat4 uPVM;
17 |
18 | void main()
19 | {
20 | gl_Position = uPVM * vec4(vPosition.xy, 0.0f, 1.0f);
21 | fTexCoord = vPosition.zw;
22 | }
--------------------------------------------------------------------------------
/aPyOpenGL/agl/shader/vert.vs:
--------------------------------------------------------------------------------
1 | #version 430
2 |
3 | // --------------------------------------------
4 | // input vertex data
5 | // --------------------------------------------
6 | layout(location=0) in vec3 vPosition;
7 | layout(location=1) in vec3 vNormal;
8 | layout(location=2) in vec2 vTexCoord;
9 | layout(location=3) in vec3 vTangent;
10 | layout(location=4) in vec3 vBitangent;
11 | layout(location=5) in int vMaterialID;
12 | layout(location=6) in ivec4 vLbsJointIDs1;
13 | layout(location=7) in vec4 vLbsWeights1;
14 | layout(location=8) in ivec4 vLbsJointIDs2;
15 | layout(location=9) in vec4 vLbsWeights2;
16 |
17 | // --------------------------------------------
18 | // output vertex data
19 | // --------------------------------------------
20 | out vec3 fPosition;
21 | out vec2 fTexCoord;
22 | out vec3 fTangent;
23 | out vec3 fBitangent;
24 | out vec3 fNormal;
25 | flat out int fMaterialID;
26 | out vec4 fPosLightSpace;
27 |
28 | // --------------------------------------------
29 | // uniform data
30 | // --------------------------------------------
31 | uniform mat4 uPV;
32 | uniform mat4 uModel;
33 | uniform mat4 uLightSpaceMatrix;
34 |
35 | #define MAX_INSTANCE_NUM 100
36 | uniform int uInstanceNum;
37 | uniform mat4 uInstanceModel[MAX_INSTANCE_NUM];
38 |
39 | void main()
40 | {
41 | mat4 M = mat4(1.0f);
42 | if (uInstanceNum == 1)
43 | {
44 | M = uModel;
45 | }
46 | else
47 | {
48 | M = uInstanceModel[gl_InstanceID];
49 | }
50 | fPosition = vec3(M * vec4(vPosition, 1.0f));
51 | fTangent = normalize(mat3(M) * vTangent);
52 | fBitangent = normalize(mat3(M) * vBitangent);
53 | fNormal = normalize(mat3(M) * vNormal);
54 | fTexCoord = vTexCoord;
55 | fPosLightSpace = uLightSpaceMatrix * vec4(fPosition, 1.0f);
56 | fMaterialID = vMaterialID;
57 | gl_Position = uPV * vec4(fPosition, 1.0f);
58 | }
--------------------------------------------------------------------------------
/aPyOpenGL/agl/text.py:
--------------------------------------------------------------------------------
1 | import os, ctypes
2 | import glm
3 |
4 | import freetype as ft
5 | from OpenGL.GL import *
6 |
7 | from .const import TEXT_RESOLUTION, FONT_DIR_PATH
8 |
9 | class Character:
10 | def __init__(self, texture_id, size, bearing, advance):
11 | self.texture_id = texture_id
12 | self.size = size
13 | self.bearing = bearing
14 | self.advance = advance
15 |
16 | class FontTexture:
17 | def __init__(self, font_filename="consola.ttf"):
18 | self.character_map = {}
19 |
20 | # initialize and load the FreeType library
21 | face = ft.Face(self.get_font_path(font_filename))
22 | face.set_pixel_sizes(0, TEXT_RESOLUTION)
23 |
24 | # disable byte-alignment restriction
25 | glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
26 |
27 | # load 128 ASCII characters
28 | for c in range(128):
29 | char = chr(c)
30 | face.load_char(char, ft.FT_LOAD_RENDER)
31 |
32 | texture = glGenTextures(1)
33 | glBindTexture(GL_TEXTURE_2D, texture)
34 | glTexImage2D(
35 | GL_TEXTURE_2D,
36 | 0,
37 | GL_RED,
38 | face.glyph.bitmap.width,
39 | face.glyph.bitmap.rows,
40 | 0,
41 | GL_RED,
42 | GL_UNSIGNED_BYTE,
43 | face.glyph.bitmap.buffer
44 | )
45 |
46 | # set texture options
47 | glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
48 | glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
49 | glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
50 | glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
51 |
52 | # store character for later use
53 | character = Character(
54 | texture,
55 | glm.ivec2(face.glyph.bitmap.width, face.glyph.bitmap.rows),
56 | glm.ivec2(face.glyph.bitmap_left, face.glyph.bitmap_top),
57 | face.glyph.advance.x
58 | )
59 | self.character_map[char] = character
60 |
61 | glBindTexture(GL_TEXTURE_2D, 0)
62 |
63 | # VAO and VBO for text quads
64 | self.vao = glGenVertexArrays(1)
65 | self.vbo = glGenBuffers(1)
66 | glBindVertexArray(self.vao)
67 | glBindBuffer(GL_ARRAY_BUFFER, self.vbo)
68 |
69 | glBufferData(GL_ARRAY_BUFFER, glm.sizeof(glm.vec4) * 6, None, GL_DYNAMIC_DRAW)
70 | glEnableVertexAttribArray(0)
71 | glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, ctypes.c_void_p(0))
72 | glBindBuffer(GL_ARRAY_BUFFER, 0)
73 | glBindVertexArray(0)
74 |
75 | def get_font_path(self, font_filename):
76 | font_path = os.path.join(FONT_DIR_PATH, font_filename)
77 | return font_path
78 |
79 | def character(self, c):
80 | return self.character_map[c]
81 |
--------------------------------------------------------------------------------
/aPyOpenGL/agl/ui.py:
--------------------------------------------------------------------------------
1 | import imgui
2 | from imgui.integrations.glfw import GlfwRenderer
3 | import glfw
4 | from OpenGL.GL import *
5 |
6 | from .const import CONSOLAS_FONT_PATH
7 |
8 | class UI:
9 | def __init__(self):
10 | self.window = None
11 | self.menu_to_items = {} # {menu_name: list[(item_name, func, key=None)]}
12 | self.key_to_func = {} # {key: list[func]}
13 |
14 | def initialize(self, window):
15 | self.window = window
16 |
17 | # imgui setup
18 | imgui.create_context()
19 | self.impl = GlfwRenderer(window, attach_callbacks=False)
20 |
21 | # IO - prevent creating a default window
22 | self.io = imgui.get_io()
23 |
24 | width, height = glfw.get_window_size(window)
25 | self.resize_font(width, height)
26 | self.font = self.io.fonts.add_font_from_file_ttf(CONSOLAS_FONT_PATH, 16)
27 | self.impl.refresh_font_texture()
28 |
29 | def resize_font(self, width, height):
30 | self.io.font_global_scale = min(width, height) / 750
31 | self.impl.refresh_font_texture()
32 |
33 | def process_inputs(self):
34 | self.impl.process_inputs()
35 | imgui.new_frame()
36 | imgui.push_font(self.font)
37 |
38 | if imgui.begin_main_menu_bar():
39 | # default menu
40 | if imgui.begin_menu("Menu", True):
41 | exit_clicked, exit_activated = imgui.menu_item("Exit", "ESC", False, True)
42 | if exit_clicked:
43 | glfw.set_window_should_close(self.window, True)
44 |
45 | imgui.end_menu()
46 |
47 | # custom menus
48 | for menu_name, items in self.menu_to_items.items():
49 | imgui.separator()
50 | if imgui.begin_menu(menu_name, True):
51 | max_len = 0
52 | for item_name, _, _ in items:
53 | max_len = max(max_len, imgui.calc_text_size(item_name)[0] + imgui.calc_text_size(item_name)[1] * 2 + imgui.get_style().item_spacing.x)
54 |
55 | for item_name, func, key in items:
56 | clicked, activated = imgui.menu_item(item_name, "")
57 | if key is not None:
58 | text_key = glfw.get_key_name(key, 0)
59 | if text_key is not None:
60 | text_key = text_key.upper()
61 | elif key is glfw.KEY_SPACE:
62 | text_key = "Space"
63 | elif key is glfw.KEY_LEFT:
64 | text_key = "Left"
65 | elif key is glfw.KEY_RIGHT:
66 | text_key = "Right"
67 |
68 | imgui.same_line(max_len)
69 | imgui.text_disabled(text_key)
70 |
71 | if clicked:
72 | func()
73 |
74 | imgui.end_menu()
75 | imgui.end_main_menu_bar()
76 |
77 | # add slider to the glfw window
78 | # imgui.begin("Slider")
79 | # imgui.slider_float("float", 0, 1.0, 2.0, "%.3f")
80 | # imgui.end()
81 |
82 |
83 | def render(self, show_ui=True):
84 | imgui.pop_font()
85 | imgui.render()
86 | if show_ui:
87 | self.impl.render(imgui.get_draw_data())
88 | imgui.end_frame()
89 |
90 | def terminate(self):
91 | self.impl.shutdown()
92 |
93 | def add_menu(self, menu_name):
94 | self.menu_to_items[menu_name] = []
95 |
96 | def add_menu_item(self, menu_name, item_name, func, key=None):
97 | if self.menu_to_items.get(menu_name, None) is None:
98 | self.add_menu(menu_name)
99 |
100 | self.menu_to_items[menu_name].append([item_name, func, key])
101 | if key is not None:
102 | if self.key_to_func.get(key, None) is None:
103 | self.key_to_func[key] = []
104 | self.key_to_func[key].append(func)
105 |
106 | # def add_render_toggle(self, menu_name, item_name, render_option, key=None, activated=False):
107 | # if self.menu_to_items.get(menu_name, None) is None:
108 | # self.add_menu(menu_name)
109 |
110 | # item = [render_option, key, activated]
111 | # self.menu_to_items[menu_name][item_name] = item
112 |
113 | # if key is not None:
114 | # self.hotkey_to_render_options.setdefault(key, [])
115 | # self.hotkey_to_render_options[key].append(render_option)
116 |
117 | def key_callback(self, window, key, scancode, action, mods):
118 | if not action == glfw.PRESS:
119 | return
120 |
121 | funcs = self.key_to_func.get(key, None)
122 | if funcs is not None:
123 | for func in funcs:
124 | func()
--------------------------------------------------------------------------------
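A hypothetical sketch, not part of the repository, of registering a menu entry with a hotkey on the UI helper above; `ui` is assumed to be the UI instance owned by the surrounding app, and the item fires either when clicked or when Space is pressed (routed through key_callback).

    import glfw

    ui.add_menu_item("Playback", "Play / Pause", lambda: print("toggled"), key=glfw.KEY_SPACE)
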
/aPyOpenGL/kin/__init__.py:
--------------------------------------------------------------------------------
1 | from .kinpose import KinPose
2 | from .kindisp import KinDisp
--------------------------------------------------------------------------------
/aPyOpenGL/kin/kindisp.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | from .kinpose import KinPose
4 | from aPyOpenGL.transforms import n_quat, n_xform, n_rotmat
5 |
6 | class KinDisp:
7 | def __init__(self, source: KinPose, target: KinPose):
8 | self.source = source
9 | self.target = target
10 |
11 | # delta
12 | self.d_basis_xform = self.target.basis_xform @ np.linalg.inv(self.source.basis_xform)
13 | self.d_local_quats = n_quat.mul(self.target.local_quats, n_quat.inv(self.source.local_quats))
14 | self.d_local_root_pos = self.target.local_root_pos - self.source.local_root_pos
15 |
16 | def apply(self, kpose: KinPose):
17 | # apply delta to the input KinPose
18 | kpose.basis_xform = self.d_basis_xform @ kpose.basis_xform
19 | kpose.local_quats = n_quat.mul(self.d_local_quats, kpose.local_quats)
20 | kpose.local_root_pos = self.d_local_root_pos + kpose.local_root_pos
--------------------------------------------------------------------------------
/aPyOpenGL/kin/kinpose.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | from ..agl import Pose
4 | from ..transforms import n_quat, n_xform
5 |
6 | class KinPose:
7 | """
8 | Represents a pose of a skeleton with a basis transformation.
9 | It doesn't modify the original pose data.
10 |
11 |     NOTE: We assume that the pre-rotation is defined so that the forward direction of the root joint is the world z-axis when the local root rotation is identity.
12 |
13 | global_xforms
14 | - root: basis_xform @ pre_xforms[0] @ local_xforms[0]
15 | - others: global_xforms[parent] @ pre_xforms[i] @ local_xforms[i]
16 |
17 | basis_xform (4, 4)
18 | - 4x4 transformation matrix in global space
19 | - initialized by the root joint
20 |
21 | local_xforms (J, 4, 4)
22 | - root: relative to the basis
23 | - others: relative to the parent
24 | """
25 | def __init__(self, pose: Pose):
26 | # original pose data - NO CHANGE
27 | self.pose = pose
28 | self.skeleton = pose.skeleton
29 | self.root_pre_quat = self.skeleton.joints[0].pre_quat
30 |
31 | self._recompute_local_root()
32 |
33 | def _recompute_local_root(self):
34 | # local transformations
35 | # - root: relative to the basis
36 | # - others: relative to the parent
37 | self.local_root_pos = self.pose.root_pos.copy()
38 | self.local_quats = self.pose.local_quats.copy()
39 |
40 | # basis transformation (4, 4)
41 | self.basis_xform = self.get_projected_root_xform()
42 |
43 | # root transformation relative to the basis
44 | root_xform = n_quat.to_xform(self.pose.local_quats[0], self.pose.root_pos)
45 | self.local_quats[0] = n_quat.mul(n_quat.inv(self.root_pre_quat), n_quat.from_rotmat(self.basis_xform[:3, :3].T @ root_xform[:3, :3])) # root_pre_rot.inv() * basis_rot.inv() * root_rot
46 | self.local_root_pos = self.basis_xform[:3, :3].T @ (self.pose.root_pos - self.basis_xform[:3, 3])
47 |
48 | def get_projected_root_xform(self):
49 | # basis: world forward -> root forward
50 | root_fwd = n_quat.mul_vec(self.pose.local_quats[0], np.array([0, 0, 1], dtype=np.float32))
51 | root_fwd = root_fwd * np.array([1, 0, 1], dtype=np.float32)
52 | root_fwd = root_fwd / (np.linalg.norm(root_fwd) + 1e-8)
53 |
54 | world_fwd = np.array([0, 0, 1], dtype=np.float32)
55 |
56 | basis_quat = n_quat.between_vecs(world_fwd, root_fwd)
57 |
58 | # basis
59 | basis_rotmat = n_quat.to_rotmat(basis_quat)
60 | basis_pos = self.pose.root_pos * np.array([1, 0, 1], dtype=np.float32)
61 | basis_xform = n_xform.from_rotmat(basis_rotmat, basis_pos)
62 |
63 | return basis_xform
64 |
65 | def set_basis_xform(self, xform):
66 | self.basis_xform = np.array(xform, dtype=np.float32)
67 | if self.basis_xform.shape != (4, 4):
68 | raise ValueError(f"basis_xform must be 4x4, not {self.basis_xform.shape}")
69 |
70 | def transform_basis(self, delta):
71 | self.basis_xform = delta @ self.basis_xform
72 |
73 | def set_pose(self, pose: Pose):
74 | self.pose = pose
75 | self._recompute_local_root()
76 |
77 | def to_pose(self) -> Pose:
78 | local_quats = self.local_quats.copy()
79 |
80 | # recompute local "root" transformation
81 | # - rotation: global_rot = basis_rot * pre_rot * local_rot_to_basis = pre_rot * local_rot_for_pose
82 | # Therefore, local_rot_for_pose = pre_rot.inv() * basis_rot * pre_rot * local_rot_to_basis
83 | # - position: global_pos = basis_rot * local_pos + basis_pos
84 | q0 = n_quat.mul(n_quat.inv(self.root_pre_quat), n_quat.from_rotmat(self.basis_xform[:3, :3])) # pre_rot.inv() * basis_rot
85 | q1 = n_quat.mul(self.root_pre_quat, local_quats[0]) # pre_rot * local_rot_to_basis
86 | local_quats[0] = n_quat.mul(q0, q1)
87 | root_pos = self.basis_xform @ np.concatenate([self.local_root_pos, [1]])
88 | return Pose(self.skeleton, local_quats, root_pos[:3])
--------------------------------------------------------------------------------
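A hypothetical sketch, not part of the repository, of the intended KinPose/KinDisp workflow: measure the displacement between two poses and apply it to a third. `pose_a`, `pose_b`, and `pose_c` are assumed to be agl Pose instances obtained elsewhere (e.g. from a Motion).

    from aPyOpenGL.kin import KinPose, KinDisp

    kp_a, kp_b, kp_c = KinPose(pose_a), KinPose(pose_b), KinPose(pose_c)

    disp = KinDisp(kp_a, kp_b)   # delta taking pose_a to pose_b
    disp.apply(kp_c)             # apply the same delta to kp_c in place
    new_pose = kp_c.to_pose()    # convert back to a regular Pose
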
/aPyOpenGL/learning/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/learning/__init__.py
--------------------------------------------------------------------------------
/aPyOpenGL/learning/embedding.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 |
4 | class SinusoidalPositionalEmbedding(nn.Module):
5 | def __init__(self, dim, max_len):
6 | super(SinusoidalPositionalEmbedding, self).__init__()
7 | self.dim = dim
8 | self.max_len = max_len
9 |
10 | if dim % 2 != 0:
11 | raise ValueError(f"SinusoidalPositionalEmbedding: dim must be even, but got {dim}")
12 |
13 | pos = torch.arange(0, max_len, step=1, dtype=torch.float32).unsqueeze(1)
14 | div_term = 1.0 / torch.pow(10000, torch.arange(0, dim, step=2, dtype=torch.float32) / dim)
15 |
16 | embedding = torch.empty((max_len, dim))
17 | embedding[:, 0::2] = torch.sin(pos * div_term)
18 | embedding[:, 1::2] = torch.cos(pos * div_term)
19 | self.embedding = nn.Parameter(embedding, requires_grad=False)
20 |
21 | def forward(self, position):
22 | position = torch.clamp(position, 0, self.max_len - 1).long()
23 | return self.embedding[position] # (B, T, dim)
24 |
25 | class RelativeSinusoidalPositionalEmbedding(nn.Module):
26 | def __init__(self, dim, max_len):
27 | super(RelativeSinusoidalPositionalEmbedding, self).__init__()
28 | self.dim = dim
29 | self.max_len = max_len
30 |
31 | if dim % 2 != 0:
32 | raise ValueError(f"RelativeSinusoidalPositionalEmbedding: dim must be even, but got {dim}")
33 |
34 | pos = torch.arange(-max_len+1, max_len, step=1, dtype=torch.float32).unsqueeze(1)
35 | div_term = 1.0 / torch.pow(10000, torch.arange(0, dim, step=2, dtype=torch.float32) / dim)
36 |
37 | embedding = torch.empty((2*max_len-1, dim))
38 | embedding[:, 0::2] = torch.sin(pos * div_term)
39 | embedding[:, 1::2] = torch.cos(pos * div_term)
40 | self.embedding = nn.Parameter(embedding, requires_grad=False)
41 |
42 | def forward(self, position):
43 | # -T+1, ..., T-1 -> 0, ..., 2T-2
44 | position = torch.clamp(position + self.max_len - 1, 0, 2*self.max_len - 2).long()
45 | return self.embedding[position] # (B, T, dim)
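A short usage sketch of both embeddings; the shapes follow the comments in `forward` above, and the sizes here are arbitrary:

```python
import torch
from aPyOpenGL.learning.embedding import (
    SinusoidalPositionalEmbedding,
    RelativeSinusoidalPositionalEmbedding,
)

emb = SinusoidalPositionalEmbedding(dim=64, max_len=128)
positions = torch.arange(16).unsqueeze(0)          # (B=1, T=16) absolute positions
pe = emb(positions)                                # (1, 16, 64)

rel_emb = RelativeSinusoidalPositionalEmbedding(dim=64, max_len=128)
offsets = torch.arange(-8, 8).unsqueeze(0)         # relative offsets in [-max_len+1, max_len-1]
rel_pe = rel_emb(offsets)                          # (1, 16, 64)
```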
--------------------------------------------------------------------------------
/aPyOpenGL/learning/mlp.py:
--------------------------------------------------------------------------------
1 | import math
2 | import torch
3 | import torch.nn as nn
4 | import torch.nn.functional as F
5 | from torch.nn.parameter import Parameter
6 |
7 | ACTIVATION = {
8 | "relu": nn.ReLU,
9 | "leaky_relu": nn.LeakyReLU,
10 | "elu": nn.ELU,
11 | "prelu": nn.PReLU,
12 | "sigmoid": nn.Sigmoid,
13 | "tanh": nn.Tanh
14 | }
15 |
16 | class MLP(nn.Module):
17 | def __init__(self, input_dim, output_dim, hidden_dims=[256, 256, 256], activation="relu", activation_at_last=False):
18 | super(MLP, self).__init__()
19 |
20 | layers = []
21 | for h_dim in hidden_dims:
22 | layers.append(nn.Linear(input_dim, h_dim))
23 | layers.append(ACTIVATION[activation]())
24 | input_dim = h_dim
25 | layers.append(nn.Linear(input_dim, output_dim))
26 | if activation_at_last:
27 | layers.append(ACTIVATION[activation]())
28 | self.layers = nn.Sequential(*layers)
29 |
30 | def forward(self, x):
31 | return self.layers(x)
32 |
33 | # Phase-conditioned MLP: each layer keeps 4 sets of weights and blends them with Catmull-Rom (cubic) interpolation indexed by a phase in [0, 1).
34 | class PhaseMLP(nn.Module):
35 | def __init__(self, input_dim, hidden_dims, output_dim, activation="relu", activation_at_last=False):
36 | super(PhaseMLP, self).__init__()
37 |
38 | self.params_w = nn.ParameterList()
39 | self.params_b = nn.ParameterList()
40 | for h_dim in hidden_dims:
41 | self.params_w.append(Parameter(torch.Tensor(4, input_dim, h_dim)))
42 | self.params_b.append(Parameter(torch.Tensor(4, 1, h_dim)))
43 | self.register_parameter("w_{}".format(len(self.params_w) - 1), self.params_w[-1])
44 | self.register_parameter("b_{}".format(len(self.params_b) - 1), self.params_b[-1])
45 | input_dim = h_dim
46 | self.params_w.append(Parameter(torch.Tensor(4, input_dim, output_dim)))
47 | self.params_b.append(Parameter(torch.Tensor(4, 1, output_dim)))
48 | self.register_parameter("w_{}".format(len(self.params_w) - 1), self.params_w[-1])
49 | self.register_parameter("b_{}".format(len(self.params_b) - 1), self.params_b[-1])
50 |
51 | for i in range(len(self.params_w)):
52 | nn.init.kaiming_uniform_(self.params_w[i])
53 | nn.init.zeros_(self.params_b[i])
54 |
55 | self.activation = [ACTIVATION[activation]() for _ in range(len(self.params_w) - 1)]
56 | self.activation_at_last = activation_at_last
57 | if activation_at_last:
58 | self.activation.append(ACTIVATION[activation]())
59 |
60 | def forward(self, x, phase):
61 | x = x.unsqueeze(1)
62 | w, idx_0, idx_1, idx_2, idx_3 = self.phase_idx(phase)
63 | for i in range(0, len(self.params_w)):
64 | param_w, param_b = self.params_w[i], self.params_b[i]
65 | weight = self.cubic(param_w[idx_0], param_w[idx_1], param_w[idx_2], param_w[idx_3], w)
66 | bias = self.cubic(param_b[idx_0], param_b[idx_1], param_b[idx_2], param_b[idx_3], w)
67 | x = torch.bmm(x, weight) + bias
68 | if i < len(self.activation):
69 | x = self.activation[i](x)
70 | return x.squeeze(1)
71 |
72 | def cubic(self, a0, a1, a2, a3, w):
73 | return\
74 | a1\
75 | +w*(0.5*a2 - 0.5*a0)\
76 | +w*w*(a0 - 2.5*a1 + 2*a2 - 0.5*a3)\
77 | +w*w*w*(1.5*a1 - 1.5*a2 + 0.5*a3 - 0.5*a0)
78 |
79 | def phase_idx(self, phase):
80 | w = 4 * phase
81 | idx_1 = torch.remainder(w.floor().long(), 4)
82 | idx_0 = torch.remainder(idx_1-1, 4)
83 | idx_2 = torch.remainder(idx_1+1, 4)
84 | idx_3 = torch.remainder(idx_1+2, 4)
85 | w = torch.fmod(w, 1)
86 | return w.view(-1, 1, 1), idx_0.view(-1), idx_1.view(-1), idx_2.view(-1), idx_3.view(-1)
87 |
88 | class MultiLinear(nn.Module):
89 | def __init__(self, num_layers, in_features, out_features, bias=True):
90 | super(MultiLinear, self).__init__()
91 |
92 | self.weight = nn.Parameter(torch.Tensor(num_layers, in_features, out_features))
93 | self.bias = nn.Parameter(torch.Tensor(num_layers, out_features)) if bias else None
94 | nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
95 | if self.bias is not None:
96 | nn.init.zeros_(self.bias)
97 |
98 | def forward(self, x):
99 | x = torch.einsum("...i,nio->...no", x, self.weight)
100 | return x + self.bias if self.bias is not None else x
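A quick usage sketch of the two MLP variants on a random batch (the hidden sizes are arbitrary):

```python
import torch
from aPyOpenGL.learning.mlp import MLP, PhaseMLP

mlp = MLP(input_dim=32, output_dim=8)
y = mlp(torch.randn(16, 32))                       # (16, 8)

phase_mlp = PhaseMLP(input_dim=32, hidden_dims=[128, 128], output_dim=8)
x = torch.randn(16, 32)
phase = torch.rand(16)                             # one phase value in [0, 1) per sample
y = phase_mlp(x, phase)                            # (16, 8)
```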
--------------------------------------------------------------------------------
/aPyOpenGL/learning/rbf.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 |
4 | def multiquadric(x):
5 | return torch.sqrt(x**2 + 1)
6 |
7 | def inverse(x):
8 | return 1.0 / torch.sqrt(x**2 + 1)
9 |
10 | def gaussian(x):
11 | return torch.exp(-x**2)
12 |
13 | def linear(x):
14 | return x
15 |
16 | def quadric(x):
17 | return x**2
18 |
19 | def cubic(x):
20 | return x**3
21 |
22 | def quartic(x):
23 | return x**4
24 |
25 | def quintic(x):
26 | return x**5
27 |
28 | def thin_plate(x):
29 | return x**2 * torch.log(x + 1e-8)
30 |
31 | def logistic(x):
32 | return 1.0 / (1.0 + torch.exp(-torch.clamp(x, -5, 5)))
33 |
34 | def smoothstep(x):
35 | return ((torch.clamp(1.0 - x, 0.0, 1.0))**2.0) * (3 - 2*(torch.clamp(1.0 - x, 0.0, 1.0)))
36 |
37 | KERNELS = {
38 | "multiquadric": multiquadric,
39 | "inverse": inverse,
40 | "gaussian": gaussian,
41 | "linear": linear,
42 | "quadric": quadric,
43 | "cubic": cubic,
44 | "quartic": quartic,
45 | "quintic": quintic,
46 | "thin_plate": thin_plate,
47 | "logistic": logistic,
48 | "smoothstep": smoothstep,
49 | }
50 |
51 |
52 | class Solve(nn.Module):
53 | def __init__(self, l=1e-5):
54 | super(Solve, self).__init__()
55 | self.l = l
56 |
57 | def fit(self, X, Y):
58 | """
59 | Args:
60 | X: (..., N, N)
61 | Y: (..., N, K)
62 | """
63 | LU, piv = torch.linalg.lu_factor(X.transpose(-1, -2) + torch.eye(X.shape[-2], device=X.device) * self.l)
64 | self.M = nn.Parameter(torch.lu_solve(Y, LU, piv).transpose(-1, -2))
65 |
66 | def forward(self, Xp):
67 | """
68 | Args:
69 | Xp: (..., M, N)
70 | Returns:
71 | (..., M, K)
72 | """
73 | return torch.matmul(self.M, Xp.transpose(-1, -2)).transpose(-1, -2)
74 |
75 | class RBF(nn.Module):
76 | def __init__(self, L=None, eps=None, function="multiquadric", smooth=1e-8):
77 | super(RBF, self).__init__()
78 | self.solver = Solve(l=-smooth) if L is None else L
79 | self.kernel = KERNELS.get(function)
80 | if self.kernel is None:
81 | raise ValueError(f"Invalid kernel function: {function}")
82 | self.eps = eps
83 |
84 | def fit(self, X, Y):
85 | """
86 | Args:
87 | X: (..., N, D)
88 | Y: (..., N, K)
89 | """
90 | self.X = X
91 | dist = torch.cdist(self.X, self.X) # (B, N, N) or (N, N)
92 | self.eps = torch.ones(len(dist), device=X.device) / dist.mean() if self.eps is None else self.eps
93 | self.solver.fit(self.kernel(self.eps * dist), Y)
94 |
95 | def forward(self, Xp):
96 | """
97 | Args:
98 | Xp: (..., M, D)
99 | Returns:
100 | (..., M, K)
101 | """
102 | D = torch.cdist(Xp, self.X)
103 | return self.solver.forward(self.kernel(self.eps * D))
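A usage sketch: interpolate K-dimensional values given at N sample sites in D dimensions (sizes and data here are arbitrary):

```python
import torch
from aPyOpenGL.learning.rbf import RBF

X = torch.randn(64, 3)        # (N, D) sample locations
Y = torch.randn(64, 5)        # (N, K) values at those locations

rbf = RBF(function="gaussian")
rbf.fit(X, Y)

Xp = torch.randn(10, 3)       # (M, D) query locations
Yp = rbf(Xp)                  # (M, K) interpolated values
```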
--------------------------------------------------------------------------------
/aPyOpenGL/learning/transformer.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 | import torch.nn.functional as F
4 |
5 | def skew(QE_t):
6 | """ Used for relative positional encoding. Implementation from Music Transformer [Huang et al. 2018] """
7 | B, H, T, _ = QE_t.shape # (B, H, T, 2T-1)
8 |
9 | QE_t = F.pad(QE_t, (0, 1)).view(B, H, 2*T*T)
10 | QE_t = F.pad(QE_t, (0, T-1)).view(B, H, T+1, 2*T-1)
11 | return QE_t[:, :, :T, -T:]
12 |
13 | class MultiHeadAttention(nn.Module):
14 | def __init__(self, d_model, d_head, n_head, dropout=0.1, pre_layernorm=False):
15 | super(MultiHeadAttention, self).__init__()
16 | self.d_model = d_model
17 | self.d_head = d_head
18 | self.n_head = n_head
19 | self.pre_layernorm = pre_layernorm
20 |
21 | self.W_q = nn.Linear(d_model, n_head * d_head, bias=False)
22 | self.W_k = nn.Linear(d_model, n_head * d_head, bias=False)
23 | self.W_v = nn.Linear(d_model, n_head * d_head, bias=False)
24 | self.W_out = nn.Linear(n_head * d_head, d_model)
25 |
26 | self.atten_scale = 1 / (d_head ** 0.5)
27 | self.dropout = nn.Dropout(dropout)
28 | self.layer_norm = nn.LayerNorm(d_model)
29 |
30 |
31 | def forward(self, x, context, mask=None, lookup_table=None):
32 | B, T1, D = x.shape
33 | _, T2, _ = context.shape
34 |
35 | if self.pre_layernorm:
36 | x = self.layer_norm(x)
37 |
38 | # linear projection
39 | q = self.W_q(x) # (B, T1, n_head*d_head)
40 | k = self.W_k(context) # (B, T2, n_head*d_head)
41 | v = self.W_v(context) # (B, T2, n_head*d_head)
42 |
43 | # split heads
44 | q = q.view(B, T1, self.n_head, self.d_head).transpose(1, 2) # (B, n_head, T1, d_head)
45 | k = k.view(B, T2, self.n_head, self.d_head).transpose(1, 2) # (B, n_head, T2, d_head)
46 | v = v.view(B, T2, self.n_head, self.d_head).transpose(1, 2) # (B, n_head, T2, d_head)
47 |
48 | # attention score
49 | atten_score = torch.matmul(q, k.transpose(-2, -1)) # (B, n_head, T1, T2)
50 |
51 | # relative positional encoding
52 | if lookup_table is not None:
53 | atten_score += skew(torch.matmul(q, lookup_table.transpose(-2, -1)))
54 |
55 | # attention scale
56 | atten_score *= self.atten_scale # (B, n_head, T1, T2)
57 |
58 | # mask
59 | if mask is not None:
60 | atten_score.masked_fill_(mask, -1e9)
61 |
62 | # attention
63 | attention = F.softmax(atten_score, dim=-1) # (B, n_head, T1, T2)
64 | attention = torch.matmul(attention, v).transpose(1, 2).contiguous().view(B, -1, self.n_head * self.d_head) # (B, T1, n_head*d_head)
65 |
66 | # output
67 | output = self.W_out(attention) # (B, T1, d_model)
68 | output = self.dropout(output)
69 |
70 | if self.pre_layernorm:
71 | return x + output
72 | else:
73 | return self.layer_norm(x + output)
74 |
75 | class LocalMultiHeadAttention(nn.Module):
76 | def __init__(self, d_model, d_head, n_head, receptive_size, dropout=0.1, pre_layernorm=False):
77 | super(LocalMultiHeadAttention, self).__init__()
78 | self.d_model = d_model
79 | self.d_head = d_head
80 | self.n_head = n_head
81 | self.receptive_size = receptive_size
82 | if receptive_size % 2 == 0:
83 | raise ValueError("receptive size must be odd")
84 | self.pre_layernorm = pre_layernorm
85 |
86 | self.W_q = nn.Linear(d_model, n_head * d_head, bias=False)
87 | self.W_k = nn.Linear(d_model, n_head * d_head, bias=False)
88 | self.W_v = nn.Linear(d_model, n_head * d_head, bias=False)
89 | self.W_out = nn.Linear(n_head * d_head, d_model)
90 |
91 | self.atten_scale = 1 / (d_head ** 0.5)
92 | self.dropout = nn.Dropout(dropout)
93 | self.layer_norm = nn.LayerNorm(d_model)
94 |
95 | def local_attention_mask(self, Q_Kt):
96 | B, H, T1, T2 = Q_Kt.shape
97 |
98 | mask = torch.ones((B, H, T1, T2+1), dtype=torch.bool, device=Q_Kt.device)
99 | mask = F.pad(mask, (self.receptive_size, 0), value=False).reshape(B, H, T1*(T2+self.receptive_size+1))
100 | mask = mask[..., :-T1].reshape(B, H, T1, T2+self.receptive_size)
101 | mask = mask[:, :, :, self.receptive_size//2:T2+self.receptive_size//2]
102 |
103 | return mask
104 |
105 | def forward(self, x, context, lookup_table=None):
106 | B, T1, D = x.shape
107 | _, T2, _ = context.shape
108 |
109 | if self.pre_layernorm:
110 | x = self.layer_norm(x)
111 |
112 | # linear projection
113 | q = self.W_q(x) # (B, T1, n_head*d_head)
114 | k = self.W_k(context) # (B, T2, n_head*d_head)
115 | v = self.W_v(context) # (B, T2, n_head*d_head)
116 |
117 | # split heads
118 | q = q.view(B, T1, self.n_head, self.d_head).transpose(1, 2) # (B, n_head, T1, d_head)
119 | k = k.view(B, T2, self.n_head, self.d_head).transpose(1, 2) # (B, n_head, T2, d_head)
120 | v = v.view(B, T2, self.n_head, self.d_head).transpose(1, 2) # (B, n_head, T2, d_head)
121 |
122 | # attention score
123 | atten_score = torch.matmul(q, k.transpose(-2, -1)) # (B, n_head, T1, T2)
124 |
125 | # relative positional encoding
126 | if lookup_table is not None:
127 | atten_score += skew(torch.matmul(q, lookup_table.transpose(-2, -1)))
128 |
129 | # attention scale
130 | atten_score *= self.atten_scale # (B, n_head, T1, T2)
131 |
132 | # local attention
133 | atten_mask = self.local_attention_mask(atten_score)
134 | atten_score.masked_fill_(atten_mask, -1e9)
135 |
136 | # attention
137 | attention = F.softmax(atten_score, dim=-1) # (B, n_head, T1, T2)
138 | attention = torch.matmul(attention, v).transpose(1, 2).contiguous().view(B, -1, self.n_head * self.d_head) # (B, T1, n_head*d_head)
139 |
140 | # output
141 | output = self.W_out(attention) # (B, T1, d_model)
142 | output = self.dropout(output)
143 |
144 | if self.pre_layernorm:
145 | return x + output
146 | else:
147 | return self.layer_norm(x + output)
148 |
149 | class PoswiseFeedForwardNet(nn.Module):
150 | def __init__(self, d_model, d_ff, dropout=0.1, pre_layernorm=False):
151 | super(PoswiseFeedForwardNet, self).__init__()
152 | self.pre_layernorm = pre_layernorm
153 |
154 | self.layers = nn.Sequential(
155 | nn.Linear(d_model, d_ff),
156 | nn.ReLU(),
157 | nn.Dropout(dropout),
158 | nn.Linear(d_ff, d_model),
159 | nn.Dropout(dropout)
160 | )
161 | self.layer_norm = nn.LayerNorm(d_model)
162 |
163 | def forward(self, x):
164 | if self.pre_layernorm:
165 | return x + self.layers(self.layer_norm(x))
166 | else:
167 | return self.layer_norm(x + self.layers(x))
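A sketch wiring this file together with the relative positional embedding from `embedding.py`: self-attention over a sequence, with a `(2T-1, d_head)` lookup table of relative-offset embeddings routed through `skew` (all sizes below are arbitrary):

```python
import torch
from aPyOpenGL.learning.embedding import RelativeSinusoidalPositionalEmbedding
from aPyOpenGL.learning.transformer import MultiHeadAttention, PoswiseFeedForwardNet

B, T, d_model, d_head, n_head = 2, 16, 128, 32, 4
x = torch.randn(B, T, d_model)

attn = MultiHeadAttention(d_model, d_head, n_head)
ffn = PoswiseFeedForwardNet(d_model, d_ff=256)

# one d_head-dim embedding per relative offset -T+1 .. T-1 -> (2T-1, d_head)
rel_emb = RelativeSinusoidalPositionalEmbedding(dim=d_head, max_len=T)
lookup_table = rel_emb(torch.arange(-T + 1, T))

out = ffn(attn(x, x, lookup_table=lookup_table))   # (B, T, d_model)
```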
--------------------------------------------------------------------------------
/aPyOpenGL/learning/vae.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn as nn
3 |
4 | """ Encoder for VAE """
5 | class VariationalEncoder(nn.Module):
6 | def __init__(self, input_dim, latent_dim=32, hidden_dims=[256, 256, 256]):
7 | super(VariationalEncoder, self).__init__()
8 |
9 | layers = []
10 | for h_dim in hidden_dims:
11 | layers.append(nn.Sequential(nn.Linear(input_dim, h_dim), nn.PReLU()))
12 | input_dim = h_dim
13 | self.layers = nn.Sequential(*layers)
14 |
15 | self.mean = nn.Linear(hidden_dims[-1], latent_dim)
16 | self.log_var = nn.Linear(hidden_dims[-1], latent_dim)
17 |
18 | def forward(self, x):
19 | x = self.layers(x)
20 | mean, log_var = self.mean(x), self.log_var(x)
21 | return mean, log_var
22 |
23 | """ Decoder for VAE that concatenates latent vector to input for every layer """
24 | class LatentConcatDecoder(nn.Module):
25 | def __init__(self, input_dim, latent_dim, output_dim, hidden_dims=[256, 256, 256]):
26 | super(LatentConcatDecoder, self).__init__()
27 |
28 | self.layers = nn.ModuleList()
29 | for h_dim in hidden_dims:
30 | self.layers.append(nn.Sequential(nn.Linear(input_dim + latent_dim, h_dim), nn.PReLU()))
31 | input_dim = h_dim
32 | self.layers.append(nn.Linear(input_dim + latent_dim, output_dim))
33 |
34 | def forward(self, x, z):
35 | for layer in self.layers:
36 | x = layer(torch.cat([x, z], dim=-1))
37 | return x
38 |
39 |
--------------------------------------------------------------------------------
/aPyOpenGL/ops/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/ops/__init__.py
--------------------------------------------------------------------------------
/aPyOpenGL/ops/mathops.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 | import numpy as np
4 |
5 | ####################################################################################
6 |
7 | def signed_angle_torch(v1, v2, vn, dim=-1, eps=1e-8):
8 | v1_unit = F.normalize(v1, dim=dim, eps=eps)
9 | v2_unit = F.normalize(v2, dim=dim, eps=eps)
10 |
11 | dot = torch.sum(v1_unit * v2_unit, dim=dim)
12 | dot = torch.clamp(dot, -1 + eps, 1 - eps)
13 | angle = torch.acos(dot)
14 |
15 | cross = torch.cross(v1_unit, v2_unit, dim=dim)
16 | cross = torch.sum(cross * vn, dim=dim)
17 | angle = torch.where(cross < 0, -angle, angle)
18 |
19 | return angle
20 |
21 | def signed_angle_numpy(v1, v2, vn, dim=-1):
22 | v1_unit = v1 / (np.linalg.norm(v1, axis=dim, keepdims=True) + 1e-8)
23 | v2_unit = v2 / (np.linalg.norm(v2, axis=dim, keepdims=True) + 1e-8)
24 |
25 | dot = np.sum(v1_unit * v2_unit, axis=dim)
26 | dot = np.clip(dot, -1, 1)
27 | angle = np.arccos(dot)
28 |
29 | cross = np.cross(v1_unit, v2_unit, axis=dim)
30 | cross = np.sum(cross * vn, axis=dim)
31 | angle = np.where(cross < 0, -angle, angle)
32 |
33 | return angle
34 |
35 | def signed_angle(v1, v2, vn=[0, 1, 0], dim=-1):
36 | """
37 | Signed angle from v1 to v2 around vn
38 | Args:
39 | v1: vector to rotate from (..., 3)
40 | v2: vector to rotate to (..., 3)
41 | vn: normal vector to rotate around (..., 3)
42 | dim: dimension to normalize along
43 | Returns:
44 | Signed angle from v1 to v2 around vn (...,)
45 | """
46 | if isinstance(v1, torch.Tensor):
47 | return signed_angle_torch(v1, v2, torch.tensor(vn, dtype=torch.float32, device=v1.device), dim)
48 | elif isinstance(v1, np.ndarray):
49 | return signed_angle_numpy(v1, v2, np.array(vn, dtype=np.float32), dim)
50 | else:
51 | raise TypeError(f"Type must be torch.Tensor or numpy.ndarray, but got {type(v1)}")
52 |
53 | ####################################################################################
54 |
55 | def lerp(x, y, t):
56 | """
57 | Args:
58 | x: start value (..., D)
59 | y: end value (..., D)
60 | t: interpolation value (...,) or float
61 | Returns:
62 | interpolated value (..., D)
63 | """
64 | return x + t * (y - x)
65 |
66 | ####################################################################################
67 |
68 | def clamp(x, min_val, max_val):
69 | if isinstance(x, torch.Tensor):
70 | return torch.clamp(x, min_val, max_val)
71 | elif isinstance(x, np.ndarray):
72 | return np.clip(x, min_val, max_val)
73 | else:
74 | return max(min(x, max_val), min_val)
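For example, the signed yaw from the world forward axis to a heading vector, measured about +y (the default `vn`):

```python
import numpy as np
from aPyOpenGL.ops.mathops import signed_angle

v1 = np.array([0.0, 0.0, 1.0], dtype=np.float32)   # world forward (+z)
v2 = np.array([1.0, 0.0, 0.0], dtype=np.float32)   # heading along +x
yaw = signed_angle(v1, v2)                         # +pi/2: +z rotates to +x about +y
```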
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/__init__.py:
--------------------------------------------------------------------------------
1 | from .numpy import aaxis as n_aaxis
2 | from .numpy import euler as n_euler
3 | from .numpy import quat as n_quat
4 | from .numpy import rotmat as n_rotmat
5 | from .numpy import ortho6d as n_ortho6d
6 | from .numpy import xform as n_xform
7 |
8 | from .torch import aaxis as t_aaxis
9 | from .torch import euler as t_euler
10 | from .torch import quat as t_quat
11 | from .torch import rotmat as t_rotmat
12 | from .torch import ortho6d as t_ortho6d
13 | from .torch import xform as t_xform
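The `n_*` aliases wrap the numpy implementations and the `t_*` aliases the torch ones, with matching function names, e.g.:

```python
import numpy as np
import torch
from aPyOpenGL.transforms import n_quat, t_quat

aa = [[0.0, np.pi / 2, 0.0]]                                      # 90 degrees about +y
q_np = n_quat.from_aaxis(np.array(aa, dtype=np.float32))          # numpy, (1, 4)
q_pt = t_quat.from_aaxis(torch.tensor(aa, dtype=torch.float32))   # torch, (1, 4)
```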
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/numpy/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/transforms/numpy/__init__.py
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/numpy/aaxis.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | from . import rotmat, quat, ortho6d, xform
4 |
5 | def _split_axis_angle(aaxis):
6 | # aaxis: (..., 3) angle axis
7 | angle = np.linalg.norm(aaxis, axis=-1)
8 | axis = aaxis / (angle[..., None] + 1e-8)
9 | return angle, axis
10 |
11 | """
12 | Angle-axis to other representations
13 | """
14 | def to_quat(aaxis):
15 | angle, axis = _split_axis_angle(aaxis)
16 |
17 | cos = np.cos(angle / 2)[..., None]
18 | sin = np.sin(angle / 2)[..., None]
19 | axis_sin = axis * sin
20 |
21 | return np.concatenate([cos, axis_sin], axis=-1) # (..., 4)
22 |
23 | def to_rotmat(aaxis):
24 | # split angle and axis
25 | angle, axis = _split_axis_angle(aaxis)
26 | a0, a1, a2 = axis[..., 0], axis[..., 1], axis[..., 2]
27 | zero = np.zeros_like(a0)
28 | batch_dims = angle.shape
29 |
30 | # skew symmetric matrix
31 | S = np.stack([zero, -a2, a1, a2, zero, -a0, -a1, a0, zero], axis=-1)
32 | S = S.reshape(batch_dims + (3, 3)) # (..., 3, 3)
33 |
34 | # rotation matrix
35 | I = np.eye(3, dtype=np.float32) # (3, 3)
36 | I = np.tile(I, reps=(batch_dims + (1, 1))) # (..., 3, 3)
37 | sin = np.sin(angle)[..., None, None] # (..., 1, 1)
38 | cos = np.cos(angle)[..., None, None] # (..., 1, 1)
39 |
40 | return I + S * sin + np.matmul(S, S) * (1 - cos) # (..., 3, 3)
41 |
42 | def to_ortho6d(aaxis):
43 | return rotmat.to_ortho6d(to_rotmat(aaxis))
44 |
45 | def to_xform(aaxis, translation=None):
46 | return rotmat.to_xform(to_rotmat(aaxis), translation=translation)
47 |
48 | """
49 | Other representations to angle-axis
50 | """
51 | def from_quat(q):
52 | return quat.to_aaxis(q)
53 |
54 | def from_rotmat(r):
55 | return rotmat.to_aaxis(r)
56 |
57 | def from_ortho6d(r):
58 | return ortho6d.to_aaxis(r)
59 |
60 | def from_xform(x):
61 | return xform.to_aaxis(x)
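A short round trip through the conversions above (batched input; quaternions are ordered `(w, x, y, z)`):

```python
import numpy as np
from aPyOpenGL.transforms import n_aaxis

aa = np.array([[0.0, np.pi / 2, 0.0]], dtype=np.float32)   # (1, 3): axis * angle, 90 deg about +y
R = n_aaxis.to_rotmat(aa)                                  # (1, 3, 3), Rodrigues formula
q = n_aaxis.to_quat(aa)                                    # (1, 4)
aa_back = n_aaxis.from_quat(q)                             # (1, 3), recovers aa
```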
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/numpy/euler.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | from . import quat
4 |
5 | def to_rotmat(angles, order, radians=True):
6 | if not radians:
7 | angles = np.deg2rad(angles)
8 |
9 | def _euler_axis_to_rotmat(angle, axis):
10 | one = np.ones_like(angle, dtype=np.float32)
11 | zero = np.zeros_like(angle, dtype=np.float32)
12 | cos = np.cos(angle, dtype=np.float32)
13 | sin = np.sin(angle, dtype=np.float32)
14 |
15 | if axis == "x":
16 | rotmat_flat = (one, zero, zero, zero, cos, -sin, zero, sin, cos)
17 | elif axis == "y":
18 | rotmat_flat = (cos, zero, sin, zero, one, zero, -sin, zero, cos)
19 | elif axis == "z":
20 | rotmat_flat = (cos, -sin, zero, sin, cos, zero, zero, zero, one)
21 | else:
22 | raise ValueError(f"Invalid axis: {axis}")
23 | return np.stack(rotmat_flat, axis=-1).reshape(angle.shape + (3, 3))
24 |
25 | Rs = [_euler_axis_to_rotmat(angles[..., i], order[i]) for i in range(3)]
26 | return np.matmul(np.matmul(Rs[0], Rs[1]), Rs[2])
27 |
28 | def to_quat(angles, order, radians=True):
29 | if not radians:
30 | angles = np.deg2rad(angles)
31 |
32 | def _euler_axis_to_quat(angle, axis):
33 | zero = np.zeros_like(angle, dtype=np.float32)
34 | cos = np.cos(angle / 2, dtype=np.float32)
35 | sin = np.sin(angle / 2, dtype=np.float32)
36 |
37 | if axis == "x":
38 | quat_flat = (cos, sin, zero, zero)
39 | elif axis == "y":
40 | quat_flat = (cos, zero, sin, zero)
41 | elif axis == "z":
42 | quat_flat = (cos, zero, zero, sin)
43 | else:
44 | raise ValueError(f"Invalid axis: {axis}")
45 | return np.stack(quat_flat, axis=-1).reshape(angle.shape + (4,))
46 |
47 | qs = [_euler_axis_to_quat(angles[..., i], order[i]) for i in range(3)]
48 | return quat.mul(quat.mul(qs[0], qs[1]), qs[2])
49 |
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/numpy/ortho6d.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | from . import rotmat, quat, aaxis, xform
4 |
5 | """
6 | Operations
7 | """
8 | def fk(local_ortho6d, root_pos, skeleton):
9 | global_xforms = xform.fk(to_xform(local_ortho6d), root_pos, skeleton)
10 | global_ortho6ds, global_pos = xform.to_ortho6d(global_xforms), xform.to_translation(global_xforms)
11 | return global_ortho6ds, global_pos
12 |
13 | """
14 | Orthonormal 6D rotation to other representation
15 | """
16 | def to_quat(ortho6d):
17 | return rotmat.to_quat(to_rotmat(ortho6d))
18 |
19 | def to_rotmat(ortho6d):
20 | x_, y_ = ortho6d[..., :3], ortho6d[..., 3:]
21 |
22 | # normalize x
23 | x = x_ / (np.linalg.norm(x_, axis=-1, keepdims=True) + 1e-8)
24 |
25 | # normalize y
26 | y = y_ - np.sum(x * y_, axis=-1, keepdims=True) * x
27 | y = y / (np.linalg.norm(y, axis=-1, keepdims=True) + 1e-8)
28 |
29 | # normalize z
30 | z = np.cross(x, y, axis=-1)
31 |
32 | return np.stack([x, y, z], axis=-2) # (..., 3, 3)
33 |
34 | def to_aaxis(ortho6d):
35 | return rotmat.to_aaxis(to_rotmat(ortho6d))
36 |
40 | def to_xform(ortho6d, translation=None):
41 | return rotmat.to_xform(to_rotmat(ortho6d), translation=translation)
42 |
43 | """
44 | Other representation to 6D rotation
45 | """
46 | def from_aaxis(a):
47 | return aaxis.to_ortho6d(a)
48 |
49 | def from_quat(q):
50 | return quat.to_ortho6d(q)
51 |
52 | def from_rotmat(r):
53 | return rotmat.to_ortho6d(r)
54 |
55 | def from_xform(x):
56 | return xform.to_ortho6d(x)
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/numpy/quat.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | from . import rotmat, aaxis, euler, ortho6d, xform
4 |
5 | """
6 | Quaternion operations
7 | """
8 | def mul(q0, q1):
9 | r0, i0, j0, k0 = np.split(q0, 4, axis=-1)
10 | r1, i1, j1, k1 = np.split(q1, 4, axis=-1)
11 |
12 | res = np.concatenate([
13 | r0*r1 - i0*i1 - j0*j1 - k0*k1,
14 | r0*i1 + i0*r1 + j0*k1 - k0*j1,
15 | r0*j1 - i0*k1 + j0*r1 + k0*i1,
16 | r0*k1 + i0*j1 - j0*i1 + k0*r1
17 | ], axis=-1)
18 |
19 | return res
20 |
21 | def mul_vec(q, v):
22 | t = 2.0 * np.cross(q[..., 1:], v, axis=-1)
23 | res = v + q[..., 0:1] * t + np.cross(q[..., 1:], t, axis=-1)
24 | return res
25 |
26 | def inv(q):
27 | return np.concatenate([q[..., 0:1], -q[..., 1:]], axis=-1)
28 |
29 | def identity():
30 | return np.array([1.0, 0.0, 0.0, 0.0], dtype=np.float32)
31 |
32 | def interpolate(q_from, q_to, t):
33 | """
34 | Args:
35 | q_from: (..., 4)
36 | q_to: (..., 4)
37 | t: (..., t) or (t,), or just a float
38 | Returns:
39 | interpolated quaternion (..., 4, t)
40 | """
41 |
42 | # ensure t is a numpy array
43 | if isinstance(t, float):
44 | t = np.array([t], dtype=np.float32)
45 | t = np.zeros_like(q_from[..., 0:1]) + t # (..., t)
46 |
47 | # ensure unit quaternions
48 | q_from_ = q_from / (np.linalg.norm(q_from, axis=-1, keepdims=True) + 1e-8) # (..., 4)
49 | q_to_ = q_to / (np.linalg.norm(q_to, axis=-1, keepdims=True) + 1e-8) # (..., 4)
50 |
51 | # ensure positive dot product
52 | dot = np.sum(q_from_ * q_to_, axis=-1) # (...,)
53 | neg = dot < 0.0
54 | dot[neg] = -dot[neg]
55 | q_to_[neg] = -q_to_[neg]
56 |
57 | # omega = arccos(dot)
58 | linear = dot > 0.9999
59 | omegas = np.arccos(dot[~linear]) # (...,)
60 | omegas = omegas[..., None] # (..., 1)
61 | sin_omegas = np.sin(omegas) # (..., 1)
62 |
63 | # interpolation amounts
64 | t0 = np.empty_like(t)
65 | t0[linear] = 1.0 - t[linear]
66 | t0[~linear] = np.sin((1.0 - t[~linear]) * omegas) / sin_omegas # (..., t)
67 |
68 | t1 = np.empty_like(t)
69 | t1[linear] = t[linear]
70 | t1[~linear] = np.sin(t[~linear] * omegas) / sin_omegas # (..., t)
71 |
72 | # interpolate
73 | q_interp = t0[..., None, :] * q_from_[..., :, None] + t1[..., None, :] * q_to_[..., :, None] # (..., 4, t)
74 |
75 | return q_interp
76 |
77 | def between_vecs(v_from, v_to):
78 | v_from_ = v_from / (np.linalg.norm(v_from, axis=-1, keepdims=True) + 1e-8) # (..., 3)
79 | v_to_ = v_to / (np.linalg.norm(v_to, axis=-1, keepdims=True) + 1e-8) # (..., 3)
80 |
81 | dot = np.sum(v_from_ * v_to_, axis=-1) # (...,)
82 | cross = np.cross(v_from_, v_to_)
83 | cross = cross / (np.linalg.norm(cross, axis=-1, keepdims=True) + 1e-8) # (..., 3)
84 |
85 | real = np.sqrt((1.0 + dot) * 0.5) # (...,)
86 | imag = np.sqrt((1.0 - dot) * 0.5)[..., None] * cross
87 |
88 | return np.concatenate([real[..., None], imag], axis=-1)
89 |
90 | def fk(local_quats, root_pos, skeleton):
91 | """
92 | Attributes:
93 | local_quats: (..., J, 4)
94 | root_pos: (..., 3), global root position
95 | skeleton: aPyOpenGL.agl.Skeleton
96 | """
97 | pre_xforms = np.tile(skeleton.pre_xforms, local_quats.shape[:-2] + (1, 1, 1)) # (..., J, 4, 4)
98 | pre_quats = xform.to_quat(pre_xforms) # (..., J, 4)
99 | pre_pos = xform.to_translation(pre_xforms) # (..., J, 3)
100 | pre_pos[..., 0, :] = root_pos
101 |
102 | global_quats = [mul(pre_quats[..., 0, :], local_quats[..., 0, :])]
103 | global_pos = [pre_pos[..., 0, :]]
104 |
105 | for i in range(1, skeleton.num_joints):
106 | parent_idx = skeleton.parent_idx[i]
107 | global_quats.append(mul(mul(global_quats[parent_idx], pre_quats[..., i, :]), local_quats[..., i, :]))
108 | global_pos.append(mul_vec(global_quats[parent_idx], pre_pos[..., i, :]) + global_pos[parent_idx])
109 |
110 | global_quats = np.stack(global_quats, axis=-2) # (..., J, 4)
111 | global_pos = np.stack(global_pos, axis=-2) # (..., J, 3)
112 |
113 | return global_quats, global_pos
114 |
115 | """
116 | Quaternion to other representations
117 | """
118 | def to_aaxis(quat):
119 | axis, angle = np.empty_like(quat[..., 1:]), np.empty_like(quat[..., 0])
120 |
121 | # small angles
122 | length = np.sqrt(np.sum(quat[..., 1:] * quat[..., 1:], axis=-1)) # (...,)
123 | small_angles = length < 1e-8
124 |
125 | # avoid division by zero
126 | angle[small_angles] = 0.0
127 | axis[small_angles] = np.array([1.0, 0.0, 0.0], dtype=np.float32)
128 |
129 | # normal case
130 | angle[~small_angles] = 2.0 * np.arctan2(length[~small_angles], quat[..., 0][~small_angles]) # (...,)
131 | axis[~small_angles] = quat[..., 1:][~small_angles] / length[~small_angles][..., None] # (..., 3)
132 |
133 | # make sure angle is in [-pi, pi)
134 | large_angles = angle >= np.pi
135 | angle[large_angles] = angle[large_angles] - 2 * np.pi
136 |
137 | return axis * angle[..., None] # (..., 3)
138 |
139 | def to_rotmat(quat):
140 | two_s = 2.0 / np.sum(quat * quat, axis=-1) # (...,)
141 | r, i, j, k = quat[..., 0], quat[..., 1], quat[..., 2], quat[..., 3]
142 |
143 | rotmat = np.stack([
144 | 1.0 - two_s * (j*j + k*k),
145 | two_s * (i*j - k*r),
146 | two_s * (i*k + j*r),
147 | two_s * (i*j + k*r),
148 | 1.0 - two_s * (i*i + k*k),
149 | two_s * (j*k - i*r),
150 | two_s * (i*k - j*r),
151 | two_s * (j*k + i*r),
152 | 1.0 - two_s * (i*i + j*j)
153 | ], axis=-1)
154 | return rotmat.reshape(quat.shape[:-1] + (3, 3)) # (..., 3, 3)
155 |
156 | def to_ortho6d(quat):
157 | return rotmat.to_ortho6d(to_rotmat(quat))
158 |
159 | def to_xform(quat, translation=None):
160 | return rotmat.to_xform(to_rotmat(quat), translation=translation)
161 |
162 | def to_euler(quat, order, radians=True):
163 | return rotmat.to_euler(to_rotmat(quat), order, radians=radians)
164 |
165 | """
166 | Other representations to quaternion
167 | """
168 | def from_aaxis(a):
169 | return aaxis.to_quat(a)
170 |
171 | def from_euler(angles, order, radians=True):
172 | return euler.to_quat(angles, order, radians=radians)
173 |
174 | def from_rotmat(r):
175 | return rotmat.to_quat(r)
176 |
177 | def from_ortho6d(r6d):
178 | return ortho6d.to_quat(r6d)
179 |
180 | def from_xform(x):
181 | return xform.to_quat(x)
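For instance, rotating a vector with a quaternion and recovering that rotation from the two directions:

```python
import numpy as np
from aPyOpenGL.transforms import n_quat

q = n_quat.from_aaxis(np.array([[0.0, np.pi / 2, 0.0]], dtype=np.float32))  # (1, 4): 90 deg about +y
v = np.array([[0.0, 0.0, 1.0]], dtype=np.float32)                           # (1, 3)
v_rot = n_quat.mul_vec(q, v)                                                # ~ [[1, 0, 0]]
q_between = n_quat.between_vecs(v, v_rot)                                   # same rotation as q
```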
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/numpy/rotmat.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | from . import quat, aaxis, ortho6d, xform, euler
4 |
5 | """
6 | Operations
7 | """
8 | def interpolate(r_from, r_to, t):
9 | q_from = to_quat(r_from)
10 | q_to = to_quat(r_to)
11 | q = quat.interpolate(q_from, q_to, t)
12 | return quat.to_rotmat(q)
13 |
14 | def fk(local_rotmats, root_pos, skeleton):
15 | """
16 | Attributes:
17 | local_rotmats: (..., J, 3, 3)
18 | root_pos: (..., 3), global root position
19 | skeleton: aPyOpenGL.agl.Skeleton
20 | """
21 | pre_xforms = np.tile(skeleton.pre_xforms, local_rotmats.shape[:-3] + (1, 1, 1)) # (..., J, 4, 4)
22 | pre_rotmats = xform.to_rotmat(pre_xforms) # (..., J, 3, 3)
23 | pre_pos = xform.to_translation(pre_xforms) # (..., J, 3)
24 | pre_pos[..., 0, :] = root_pos
25 |
26 | global_rotmats = [np.matmul(pre_rotmats[..., 0, :, :], local_rotmats[..., 0, :, :])]
27 | global_pos = [pre_pos[..., 0, :]]
28 |
29 | for i in range(1, skeleton.num_joints):
30 | parent_idx = skeleton.parent_idx[i]
31 | global_rotmats.append(np.matmul(np.matmul(global_rotmats[parent_idx], pre_rotmats[..., i, :, :]), local_rotmats[..., i, :, :]))
32 | global_pos.append(np.einsum("...ij,...j->...i", global_rotmats[parent_idx], pre_pos[..., i, :]) + global_pos[parent_idx])
33 |
34 | global_rotmats = np.stack(global_rotmats, axis=-3) # (..., J, 3, 3)
35 | global_pos = np.stack(global_pos, axis=-2) # (..., J, 3)
36 |
37 | return global_rotmats, global_pos
38 |
39 | def inv(r):
40 |     return np.swapaxes(r, -2, -1)
41 |
42 | """
43 | Rotation matrix to other representation
44 | """
45 | def to_aaxis(rotmat):
46 | return quat.to_aaxis(to_quat(rotmat))
47 |
48 | def to_quat(rotmat):
49 | batch_dim = rotmat.shape[:-2]
50 | rotmat_ = rotmat.reshape(batch_dim + (9,))
51 | rotmat00, rotmat01, rotmat02, rotmat10, rotmat11, rotmat12, rotmat20, rotmat21, rotmat22 = rotmat_[..., 0], rotmat_[..., 1], rotmat_[..., 2], rotmat_[..., 3], rotmat_[..., 4], rotmat_[..., 5], rotmat_[..., 6], rotmat_[..., 7], rotmat_[..., 8]
52 |
53 | def _to_positive_sqrt(x):
54 | ret = np.zeros_like(x)
55 | positive = x > 0
56 | ret[positive] = np.sqrt(x[positive])
57 | return ret
58 |
59 | quat_square = np.stack([
60 | (1.0 + rotmat00 + rotmat11 + rotmat22), # 4*r*r
61 | (1.0 + rotmat00 - rotmat11 - rotmat22), # 4*i*i
62 | (1.0 - rotmat00 + rotmat11 - rotmat22), # 4*j*j
63 | (1.0 - rotmat00 - rotmat11 + rotmat22), # 4*k*k
64 | ], axis=-1) # (..., 4)
65 | quat_abs = _to_positive_sqrt(quat_square) # 2*|r|, 2*|i|, 2*|j|, 2*|k|
66 | r, i, j, k = quat_abs[..., 0], quat_abs[..., 1], quat_abs[..., 2], quat_abs[..., 3]
67 |
68 | quat_candidates = np.stack([
69 | np.stack([r*r, rotmat21-rotmat12, rotmat02-rotmat20, rotmat10-rotmat01], axis=-1),
70 | np.stack([rotmat21-rotmat12, i*i, rotmat01+rotmat10, rotmat02+rotmat20], axis=-1),
71 | np.stack([rotmat02-rotmat20, rotmat01+rotmat10, j*j, rotmat12+rotmat21], axis=-1),
72 | np.stack([rotmat10-rotmat01, rotmat02+rotmat20, rotmat12+rotmat21, k*k], axis=-1),
73 | ], axis=-2) # (..., 4, 4)
74 | quat_candidates = quat_candidates / (2 * quat_abs[..., None] + 1e-8)
75 |
76 | quat_idx = np.argmax(quat_square, axis=-1)
77 | quat = np.take_along_axis(quat_candidates, quat_idx[..., None, None].repeat(4, axis=-1), axis=-2).squeeze(-2)
78 | quat = quat / np.linalg.norm(quat, axis=-1, keepdims=True)
79 |
80 | return quat.reshape(batch_dim + (4,))
81 |
82 | def to_ortho6d(rotmat):
83 | return np.concatenate([rotmat[..., 0, :], rotmat[..., 1, :]], axis=-1)
84 |
85 | def to_xform(rotmat, translation=None):
86 | batch_dims = rotmat.shape[:-2]
87 |
88 | # transformation matrix
89 | I = np.eye(4, dtype=np.float32) # (4, 4)
90 | I = np.tile(I, reps=batch_dims + (1, 1)) # (..., 4, 4)
91 |
92 | # fill rotation matrix
93 | I[..., :3, :3] = rotmat # (..., 4, 4)
94 |
95 | # fill translation
96 | if translation is not None:
97 | I[..., :3, 3] = translation
98 |
99 | return I
100 |
101 | def to_euler(rotmat, order, radians=True):
102 | """
103 | Assumes extrinsic rotation and Tait-Bryan angles.
104 | Alpha, beta, gamma are the angles of rotation about the x, y, z axes respectively.
105 | TODO: handle gimbal lock (singularities)
106 | """
107 | if len(order) != 3:
108 | raise ValueError(f"Order must be a 3-element list, but got {len(order)} elements")
109 |
110 | order = order.lower()
111 | if set(order) != set("xyz"):
112 | raise ValueError(f"Invalid order: {order}")
113 |
114 | axis2idx = {"x": 0, "y": 1, "z": 2}
115 | idx0, idx1, idx2 = (axis2idx[axis] for axis in order)
116 |
117 | # compute beta
118 | sign = -1.0 if (idx0 - idx2) % 3 == 2 else 1.0
119 | beta = np.arcsin(sign * rotmat[..., idx0, idx2])
120 |
121 | # compute alpha
122 | sign = -1.0 if (idx0 - idx2) % 3 == 1 else 1.0
123 | alpha = np.arctan2(sign * rotmat[..., idx1, idx2], rotmat[..., idx2, idx2])
124 |
125 | # compute gamma -> same sign as alpha
126 | gamma = np.arctan2(sign * rotmat[..., idx0, idx1], rotmat[..., idx0, idx0])
127 |
128 | if not radians:
129 | alpha, beta, gamma = np.rad2deg(alpha), np.rad2deg(beta), np.rad2deg(gamma)
130 |
131 | return np.stack([alpha, beta, gamma], axis=-1)
132 |
133 | """
134 | Other representation to rotation matrix
135 | """
136 | def from_aaxis(a):
137 | return aaxis.to_rotmat(a)
138 |
139 | def from_quat(q):
140 | return quat.to_rotmat(q)
141 |
142 | def from_ortho6d(r):
143 | return ortho6d.to_rotmat(r)
144 |
145 | def from_xform(x):
146 | return xform.to_rotmat(x)
147 |
148 | def from_euler(angles, order, radians=True):
149 | return euler.to_rotmat(angles, order, radians=radians)
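For example, Euler angles (extrinsic, Tait-Bryan, in radians) round-trip through a rotation matrix away from gimbal lock:

```python
import numpy as np
from aPyOpenGL.transforms import n_rotmat

angles = np.array([[0.1, 0.4, -0.2]], dtype=np.float32)    # (1, 3), "xyz" order
R = n_rotmat.from_euler(angles, order="xyz")               # (1, 3, 3)
angles_back = n_rotmat.to_euler(R, order="xyz")            # ~ angles
```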
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/numpy/xform.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | from . import rotmat, quat, aaxis, ortho6d
4 |
5 | """
6 | Operations
7 | """
8 | def interpolate(x0, x1, t):
9 | r0, p0 = x0[..., :3, :3], x0[..., :3, 3]
10 | r1, p1 = x1[..., :3, :3], x1[..., :3, 3]
11 |
12 | r = rotmat.interpolate(r0, r1, t)
13 | p = p0 + (p1 - p0) * t
14 |
15 | return rotmat.to_xform(r, translation=p)
16 |
17 | def fk(local_xforms, root_pos, skeleton):
18 | """
19 | Attributes:
20 | local_xforms: (..., J, 4, 4)
21 | root_pos: (..., 3), global root position
22 | skeleton: aPyOpenGL.agl.Skeleton
23 | """
24 | pre_xforms = np.tile(skeleton.pre_xforms, local_xforms.shape[:-3] + (1, 1, 1)) # (..., J, 4, 4)
25 | pre_xforms[..., 0, :3, 3] = root_pos
26 |
27 | global_xforms = [pre_xforms[..., 0, :, :] @ local_xforms[..., 0, :, :]]
28 | for i in range(1, skeleton.num_joints):
29 | global_xforms.append(global_xforms[skeleton.parent_idx[i]] @ pre_xforms[..., i, :, :] @ local_xforms[..., i, :, :])
30 |
31 | global_xforms = np.stack(global_xforms, axis=-3) # (..., J, 4, 4)
32 | return global_xforms
33 |
34 | """
35 | Transformation matrix to other representation
36 | """
37 | def to_rotmat(xform):
38 | return xform[..., :3, :3].copy()
39 |
40 | def to_quat(xform):
41 | return rotmat.to_quat(to_rotmat(xform))
42 |
43 | def to_aaxis(xform):
44 | return quat.to_aaxis(to_quat(xform))
45 |
46 | def to_ortho6d(xform):
47 | return rotmat.to_ortho6d(to_rotmat(xform))
48 |
49 | def to_translation(xform):
50 | return np.ascontiguousarray(xform[..., :3, 3])
51 |
52 | """
53 | Other representation to transformation matrix
54 | """
55 | def from_rotmat(r, translation=None):
56 | return rotmat.to_xform(r, translation=translation)
57 |
58 | def from_quat(q, translation=None):
59 | return quat.to_xform(q, translation=translation)
60 |
61 | def from_aaxis(a, translation=None):
62 | return aaxis.to_xform(a, translation=translation)
63 |
64 | def from_ortho6d(r, translation=None):
65 |     return ortho6d.to_xform(r, translation=translation)
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/torch/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/transforms/torch/__init__.py
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/torch/aaxis.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 | from . import rotmat, quat, ortho6d, xform
4 |
5 | def _split_axis_angle(aaxis):
6 | angle = torch.norm(aaxis, dim=-1)
7 | axis = aaxis / (angle[..., None] + 1e-8)
8 | return angle, axis
9 |
10 | """
11 | Angle-axis to other representations
12 | """
13 | def to_quat(aaxis):
14 | angle, axis = _split_axis_angle(aaxis)
15 |
16 | cos = torch.cos(angle / 2)[..., None]
17 | sin = torch.sin(angle / 2)[..., None]
18 | axis_sin = axis * sin
19 |
20 | return torch.cat([cos, axis_sin], dim=-1)
21 |
22 | def to_rotmat(aaxis):
23 | # split angle and axis
24 | angle, axis = _split_axis_angle(aaxis)
25 | a0, a1, a2 = axis[..., 0], axis[..., 1], axis[..., 2]
26 | zero = torch.zeros_like(a0)
27 | batch_dims = angle.shape
28 |
29 | # skew symmetric matrix
30 | S = torch.stack([zero, -a2, a1, a2, zero, -a0, -a1, a0, zero], dim=-1)
31 | S = S.reshape(batch_dims + (3, 3))
32 |
33 | # rotation matrix
34 | I = torch.eye(3, dtype=torch.float32, device=aaxis.device)
35 | I = I.repeat(batch_dims + (1, 1))
36 | sin = torch.sin(angle)[..., None, None]
37 | cos = torch.cos(angle)[..., None, None]
38 |
39 | return I + S * sin + torch.matmul(S, S) * (1 - cos)
40 |
41 | def to_ortho6d(aaxis):
42 | return rotmat.to_ortho6d(to_rotmat(aaxis))
43 |
44 | def to_xform(aaxis, translation=None):
45 | return rotmat.to_xform(to_rotmat(aaxis), translation=translation)
46 |
47 | """
48 | Other representations to angle-axis
49 | """
50 | def from_quat(q):
51 | return quat.to_aaxis(q)
52 |
53 | def from_rotmat(r):
54 | return rotmat.to_aaxis(r)
55 |
56 | def from_ortho6d(r):
57 | return ortho6d.to_aaxis(r)
58 |
59 | def from_xform(x):
60 | return xform.to_aaxis(x)
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/torch/euler.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 | from . import quat
4 |
5 | def to_rotmat(angles, order, radians=True):
6 | if not radians:
7 | angles = torch.deg2rad(angles)
8 |
9 | def _euler_axis_to_rotmat(angle, axis):
10 | one = torch.ones_like(angle)
11 | zero = torch.zeros_like(angle)
12 | cos = torch.cos(angle)
13 | sin = torch.sin(angle)
14 |
15 | if axis == "x":
16 | R_flat = (one, zero, zero, zero, cos, -sin, zero, sin, cos)
17 | elif axis == "y":
18 | R_flat = (cos, zero, sin, zero, one, zero, -sin, zero, cos)
19 | elif axis == "z":
20 | R_flat = (cos, -sin, zero, sin, cos, zero, zero, zero, one)
21 | else:
22 | raise ValueError(f"Invalid axis: {axis}")
23 |
24 | return torch.stack(R_flat, dim=-1).reshape(angle.shape + (3, 3))
25 |
26 | Rs = [_euler_axis_to_rotmat(angles[..., i], order[i]) for i in range(3)]
27 | return torch.matmul(torch.matmul(Rs[0], Rs[1]), Rs[2])
28 |
29 | def to_quat(angles, order, radians=True):
30 | if not radians:
31 | angles = torch.deg2rad(angles)
32 |
33 | def _euler_axis_to_Q(angle, axis):
34 | zero = torch.zeros_like(angle)
35 | cos = torch.cos(angle / 2)
36 | sin = torch.sin(angle / 2)
37 |
38 | if axis == "x":
39 | Q_flat = (cos, sin, zero, zero)
40 | elif axis == "y":
41 | Q_flat = (cos, zero, sin, zero)
42 | elif axis == "z":
43 | Q_flat = (cos, zero, zero, sin)
44 | else:
45 | raise ValueError(f"Invalid axis: {axis}")
46 | return torch.stack(Q_flat, dim=-1).reshape(angle.shape + (4,))
47 |
48 | qs = [_euler_axis_to_Q(angles[..., i], order[i]) for i in range(3)]
49 | return quat.mul(quat.mul(qs[0], qs[1]), qs[2])
50 |
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/torch/ortho6d.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 |
4 | from . import rotmat, quat, aaxis, xform
5 |
6 | """
7 | Operations
8 | """
9 | def fk(local_ortho6d, root_pos, skeleton):
10 | global_xforms = xform.fk(to_xform(local_ortho6d), root_pos, skeleton)
11 | global_ortho6ds, global_pos = xform.to_ortho6d(global_xforms), xform.to_translation(global_xforms)
12 | return global_ortho6ds, global_pos
13 |
14 | def mul(r0, r1):
15 | r0_ = to_rotmat(r0)
16 | r1_ = to_rotmat(r1)
17 |     res = torch.matmul(r0_, r1_)
18 | return rotmat.to_ortho6d(res)
19 |
20 | def inv(r):
21 | r_ = to_rotmat(r)
22 | res = torch.inverse(r_)
23 | return rotmat.to_ortho6d(res)
24 |
25 | """
26 | Orthonormal 6D rotation to other representation
27 | """
28 | def to_rotmat(ortho6d):
29 | x_, y_ = ortho6d[..., :3], ortho6d[..., 3:]
30 | x = F.normalize(x_, dim=-1, eps=1e-8)
31 | y = F.normalize(y_ - (x * y_).sum(dim=-1, keepdim=True) * x, dim=-1, eps=1e-8)
32 | z = torch.cross(x, y, dim=-1)
33 | return torch.stack([x, y, z], dim=-2) # (..., 3, 3)
34 |
35 | def to_aaxis(ortho6d):
36 | return rotmat.to_aaxis(to_rotmat(ortho6d))
37 |
38 | def to_quat(ortho6d):
39 | return rotmat.to_quat(to_rotmat(ortho6d))
40 |
41 | def to_xform(ortho6d, translation=None):
42 | return rotmat.to_xform(to_rotmat(ortho6d), translation=translation)
43 |
44 | """
45 | Other representation to 6D rotation
46 | """
47 | def from_aaxis(a):
48 | return aaxis.to_ortho6d(a)
49 |
50 | def from_quat(q):
51 | return quat.to_ortho6d(q)
52 |
53 | def from_rotmat(r):
54 | return rotmat.to_ortho6d(r)
55 |
56 | def from_xform(x):
57 | return xform.to_ortho6d(x)
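A small check that `to_rotmat` re-orthonormalizes a perturbed 6D rotation via Gram-Schmidt, so the result is still a valid rotation matrix:

```python
import torch
from aPyOpenGL.transforms import t_ortho6d

r6d = torch.tensor([[1.0, 0.0, 0.0, 0.0, 1.0, 0.0]])        # identity rotation, (1, 6)
r6d_noisy = r6d + 0.05 * torch.randn_like(r6d)

R = t_ortho6d.to_rotmat(r6d_noisy)                          # (1, 3, 3), orthonormal
err = torch.matmul(R, R.transpose(-1, -2)) - torch.eye(3)   # ~ 0
```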
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/torch/quat.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 | from . import rotmat, aaxis, euler, ortho6d, xform
4 |
5 | def _cross(a, b):
6 | ax, ay, az = torch.split(a, 1, dim=-1)
7 | bx, by, bz = torch.split(b, 1, dim=-1)
8 | return torch.cat([ay*bz - az*by, az*bx - ax*bz, ax*by - ay*bx], dim=-1)
9 |
10 | """
11 | Quaternion operations
12 | """
13 | def mul(q0, q1):
14 | r0, i0, j0, k0 = torch.split(q0, 1, dim=-1)
15 | r1, i1, j1, k1 = torch.split(q1, 1, dim=-1)
16 |
17 | res = torch.cat([
18 | r0*r1 - i0*i1 - j0*j1 - k0*k1,
19 | r0*i1 + i0*r1 + j0*k1 - k0*j1,
20 | r0*j1 - i0*k1 + j0*r1 + k0*i1,
21 | r0*k1 + i0*j1 - j0*i1 + k0*r1
22 | ], dim=-1)
23 |
24 | return res
25 |
26 | def mul_vec(q, v):
27 | t = 2.0 * _cross(q[..., 1:], v)
28 | res = v + q[..., 0:1] * t + _cross(q[..., 1:], t)
29 | return res
30 |
31 | def inv(q):
32 | return torch.cat([q[..., 0:1], -q[..., 1:]], dim=-1)
33 |
34 | def identity(device="cpu"):
35 | return torch.tensor([1.0, 0.0, 0.0, 0.0], dtype=torch.float32, device=device)
36 |
37 | def interpolate(q_from, q_to, t):
38 | """
39 | Args:
40 | q_from: (..., 4)
41 | q_to: (..., 4)
42 | t: (..., t) or (t,), or just a float
43 | Returns:
44 | interpolated quaternion (..., 4, t)
45 | """
46 | device = q_from.device
47 |
48 | # ensure t is a torch tensor
49 | if isinstance(t, float):
50 | t = torch.tensor([t], dtype=torch.float32, device=device)
51 | t = torch.zeros_like(q_from[..., 0:1]) + t # (..., t)
52 |
53 | # ensure unit quaternions
54 | q_from_ = F.normalize(q_from, dim=-1, eps=1e-8) # (..., 4)
55 | q_to_ = F.normalize(q_to, dim=-1, eps=1e-8) # (..., 4)
56 |
57 | # ensure positive dot product
58 | dot = torch.sum(q_from_ * q_to_, dim=-1) # (...,)
59 | neg = dot < 0.0
60 | dot[neg] = -dot[neg]
61 | q_to_[neg] = -q_to_[neg]
62 |
63 | # omega = arccos(dot)
64 | linear = dot > 0.9999
65 | omegas = torch.acos(dot[~linear]) # (...,)
66 | omegas = omegas[..., None] # (..., 1)
67 | sin_omegas = torch.sin(omegas) # (..., 1)
68 |
69 | # interpolation amounts
70 | t0 = torch.empty_like(t)
71 | t0[linear] = 1.0 - t[linear]
72 | t0[~linear] = torch.sin((1.0 - t[~linear]) * omegas) / sin_omegas # (..., t)
73 |
74 | t1 = torch.empty_like(t)
75 | t1[linear] = t[linear]
76 | t1[~linear] = torch.sin(t[~linear] * omegas) / sin_omegas # (..., t)
77 |
78 | # interpolate
79 | q_interp = t0[..., None, :] * q_from_[..., :, None] + t1[..., None, :] * q_to_[..., :, None] # (..., 4, t)
80 | q_interp = F.normalize(q_interp, dim=-2, eps=1e-8) # (..., 4, t)
81 |
82 | return q_interp
83 |
84 | def between_vecs(v_from, v_to):
85 | v_from_ = F.normalize(v_from, dim=-1, eps=1e-8) # (..., 3)
86 | v_to_ = F.normalize(v_to, dim=-1, eps=1e-8) # (..., 3)
87 |
88 | dot = torch.sum(v_from_ * v_to_, dim=-1) # (...,)
89 | cross = _cross(v_from_, v_to_)
90 | cross = F.normalize(cross, dim=-1, eps=1e-8) # (..., 3)
91 |
92 | real = torch.sqrt(0.5 * (1.0 + dot))
93 | imag = torch.sqrt(0.5 * (1.0 - dot))[..., None] * cross
94 |
95 | return torch.cat([real[..., None], imag], dim=-1)
96 |
97 | def fk(local_quats, root_pos, skeleton):
98 | """
99 | Attributes:
100 | local_quats: (..., J, 4)
101 | root_pos: (..., 3), global root position
102 | skeleton: aPyOpenGL.agl.Skeleton
103 | """
104 | pre_xforms = torch.from_numpy(skeleton.pre_xforms).to(local_quats.device)
105 | pre_xforms = torch.tile(pre_xforms, local_quats.shape[:-2] + (1, 1, 1)) # (..., J, 4, 4)
106 | pre_quats = xform.to_quat(pre_xforms)
107 | pre_pos = xform.to_translation(pre_xforms)
108 | pre_pos[..., 0, :] = root_pos
109 |
110 | global_quats = [mul(pre_quats[..., 0, :], local_quats[..., 0, :])]
111 | global_pos = [pre_pos[..., 0, :]]
112 |
113 | for i in range(1, skeleton.num_joints):
114 | parent_idx = skeleton.parent_idx[i]
115 | global_quats.append(mul(mul(global_quats[parent_idx], pre_quats[..., i, :]), local_quats[..., i, :]))
116 | global_pos.append(mul_vec(global_quats[parent_idx], pre_pos[..., i, :]) + global_pos[parent_idx])
117 |
118 | global_quats = torch.stack(global_quats, dim=-2) # (..., J, 4)
119 | global_pos = torch.stack(global_pos, dim=-2) # (..., J, 3)
120 |
121 | return global_quats, global_pos
122 |
123 | """ Quaternion to other representations """
124 | def to_aaxis(quat):
125 | axis, angle = torch.empty_like(quat[..., 1:]), torch.empty_like(quat[..., 0])
126 |
127 | # small angles
128 | length = torch.sqrt(torch.sum(quat[..., 1:] * quat[..., 1:], dim=-1)) # (...,)
129 | small_angles = length < 1e-8
130 |
131 | # avoid division by zero
132 | angle[small_angles] = 0.0
133 | axis[small_angles] = torch.tensor([1.0, 0.0, 0.0], dtype=torch.float32, device=quat.device) # (..., 3)
134 |
135 | # normal case
136 | angle[~small_angles] = 2.0 * torch.atan2(length[~small_angles], quat[..., 0][~small_angles]) # (...,)
137 | axis[~small_angles] = quat[..., 1:][~small_angles] / length[~small_angles][..., None] # (..., 3)
138 |
139 | # make sure angle is in [-pi, pi)
140 | large_angles = angle >= torch.pi
141 | angle[large_angles] = angle[large_angles] - 2 * torch.pi
142 |
143 | return axis * angle[..., None]
144 |
145 | def to_rotmat(quat):
146 | two_s = 2.0 / torch.sum(quat * quat, dim=-1) # (...,)
147 | r, i, j, k = quat[..., 0], quat[..., 1], quat[..., 2], quat[..., 3]
148 |
149 | rotmat = torch.stack([
150 | 1.0 - two_s * (j*j + k*k),
151 | two_s * (i*j - k*r),
152 | two_s * (i*k + j*r),
153 | two_s * (i*j + k*r),
154 | 1.0 - two_s * (i*i + k*k),
155 | two_s * (j*k - i*r),
156 | two_s * (i*k - j*r),
157 | two_s * (j*k + i*r),
158 | 1.0 - two_s * (i*i + j*j)
159 | ], dim=-1)
160 | return rotmat.reshape(quat.shape[:-1] + (3, 3)) # (..., 3, 3)
161 |
162 | def to_ortho6d(quat):
163 | return rotmat.to_ortho6d(to_rotmat(quat))
164 |
165 | def to_xform(quat, translation=None):
166 |     return rotmat.to_xform(to_rotmat(quat), translation=translation)
167 |
168 | def to_euler(quat, order, radians=True):
169 | return rotmat.to_euler(to_rotmat(quat), order, radians=radians)
170 |
171 | """
172 | Other representations to quaternion
173 | """
174 | def from_aaxis(a):
175 | return aaxis.to_quat(a)
176 |
177 | def from_euler(angles, order, radians=True):
178 | return euler.to_quat(angles, order, radians=radians)
179 |
180 | def from_rotmat(r):
181 | return rotmat.to_quat(r)
182 |
183 | def from_ortho6d(r6d):
184 | return ortho6d.to_quat(r6d)
185 |
186 | def from_xform(x):
187 | return xform.to_quat(x)
--------------------------------------------------------------------------------
/aPyOpenGL/transforms/torch/rotmat.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import torch.nn.functional as F
3 |
4 | from . import quat, aaxis, ortho6d, xform, euler
5 |
6 | """
7 | Operations
8 | """
9 | def interpolate(r_from, r_to, t):
10 | q_from = to_quat(r_from)
11 | q_to = to_quat(r_to)
12 | q = quat.interpolate(q_from, q_to, t)
13 | return quat.to_rotmat(q)
14 |
15 | def fk(local_rotmats, root_pos, skeleton):
16 | """
17 | Attributes:
18 | local_rotmats: (..., J, 3, 3)
19 | root_pos: (..., 3), global root position
20 | skeleton: aPyOpenGL.agl.Skeleton
21 | """
22 | pre_xforms = torch.from_numpy(skeleton.pre_xforms).to(local_rotmats.device) # (J, 4, 4)
23 | pre_xforms = torch.tile(pre_xforms, local_rotmats.shape[:-3] + (1, 1, 1))
24 | pre_rotmats = xform.to_rotmat(pre_xforms) # (..., J, 3, 3)
25 | pre_pos = xform.to_translation(pre_xforms) # (..., J, 3)
26 | pre_pos[..., 0, :] = root_pos
27 |
28 | global_rotmats = [torch.matmul(pre_rotmats[..., 0, :, :], local_rotmats[..., 0, :, :])]
29 | global_pos = [pre_pos[..., 0, :]]
30 |
31 | for i in range(1, skeleton.num_joints):
32 | parent_idx = skeleton.parent_idx[i]
33 | global_rotmats.append(torch.matmul(torch.matmul(global_rotmats[parent_idx], pre_rotmats[..., i, :, :]), local_rotmats[..., i, :, :]))
34 | global_pos.append(torch.einsum("...ij,...j->...i", global_rotmats[parent_idx], pre_pos[..., i, :]) + global_pos[parent_idx])
35 |
36 | global_rotmats = torch.stack(global_rotmats, dim=-3) # (..., J, 3, 3)
37 | global_pos = torch.stack(global_pos, dim=-2) # (..., J, 3)
38 |
39 | return global_rotmats, global_pos
40 | """
41 | Rotations to other representations
42 | """
43 | def to_aaxis(rotmat):
44 | return quat.to_aaxis(to_quat(rotmat))
45 |
46 | def to_quat(rotmat):
47 | batch_dim = rotmat.shape[:-2]
48 | R00, R01, R02, R10, R11, R12, R20, R21, R22 = torch.unbind(rotmat.reshape(batch_dim + (9,)), dim=-1)
49 |
50 | def _to_positive_sqrt(x):
51 | ret = torch.zeros_like(x)
52 | positive = x > 0
53 | ret[positive] = torch.sqrt(x[positive])
54 | return ret
55 |
56 | Q_square = torch.stack([
57 | (1.0 + R00 + R11 + R22), # 4*r*r
58 | (1.0 + R00 - R11 - R22), # 4*i*i
59 | (1.0 - R00 + R11 - R22), # 4*j*j
60 | (1.0 - R00 - R11 + R22), # 4*k*k
61 | ], dim=-1) # (..., 4)
62 | Q_abs = _to_positive_sqrt(Q_square) # 2*|r|, 2*|i|, 2*|j|, 2*|k|
63 | r, i, j, k = torch.unbind(Q_abs, dim=-1)
64 |
65 | Q_candidates = torch.stack([
66 | torch.stack([r*r, R21-R12, R02-R20, R10-R01], dim=-1),
67 | torch.stack([R21-R12, i*i, R01+R10, R02+R20], dim=-1),
68 | torch.stack([R02-R20, R01+R10, j*j, R12+R21], dim=-1),
69 | torch.stack([R10-R01, R02+R20, R12+R21, k*k], dim=-1),
70 | ], dim=-2) # (..., 4, 4)
71 | Q_candidates = Q_candidates / (2 * Q_abs[..., None] + 1e-8)
72 |
73 | Q_idx = torch.argmax(Q_square, dim=-1)
74 | Q = torch.gather(Q_candidates, dim=-2, index=Q_idx[..., None, None].expand(batch_dim + (1, 4))).squeeze(-2)
75 | Q = F.normalize(Q, dim=-1)
76 |
77 | return Q.reshape(batch_dim + (4,))
78 |
79 | def to_ortho6d(rotmat):
80 | return torch.cat([rotmat[..., 0, :], rotmat[..., 1, :]], dim=-1)
81 |
82 | def to_xform(rotmat, translation=None):
83 | batch_dims = rotmat.shape[:-2]
84 |
85 | # transformation matrix
86 | I = torch.eye(4, dtype=torch.float32, device=rotmat.device)
87 | I = I.repeat(batch_dims + (1, 1))
88 |
89 | # fill rotation matrix
90 | I[..., :3, :3] = rotmat
91 |
92 | # fill translation
93 | if translation is not None:
94 | I[..., :3, 3] = translation
95 |
96 | return I
97 |
98 | def to_euler(rotmat, order, radians=True):
99 | """
100 | Assumes extrinsic rotation and Tait-Bryan angles.
101 | Alpha, beta, gamma are the angles of rotation about the x, y, z axes respectively.
102 | TODO: handle gimbal lock (singularities)
103 | """
104 | if len(order) != 3:
105 | raise ValueError(f"Order must be a 3-element list, but got {len(order)} elements")
106 |
107 | order = order.lower()
108 | if set(order) != set("xyz"):
109 | raise ValueError(f"Invalid order: {order}")
110 |
111 | axis2idx = {"x": 0, "y": 1, "z": 2}
112 | idx0, idx1, idx2 = (axis2idx[axis] for axis in order)
113 |
114 | # compute beta
115 | sign = -1.0 if (idx0 - idx2) % 3 == 2 else 1.0
116 | beta = torch.asin(sign * rotmat[..., idx0, idx2])
117 |
118 | # compute alpha
119 | sign = -1.0 if (idx0 - idx2) % 3 == 1 else 1.0
120 | alpha = torch.atan2(sign * rotmat[..., idx1, idx2], rotmat[..., idx2, idx2])
121 |
122 | # compute gamma -> same sign as alpha
123 | gamma = torch.atan2(sign * rotmat[..., idx0, idx1], rotmat[..., idx0, idx0])
124 |
125 | if not radians:
126 | alpha, beta, gamma = torch.rad2deg(alpha), torch.rad2deg(beta), torch.rad2deg(gamma)
127 |
128 | return torch.stack([alpha, beta, gamma], dim=-1)
129 |
130 |
131 | """
132 | Other representation to rotation matrix
133 | """
134 | def from_aaxis(a):
135 | return aaxis.to_rotmat(a)
136 |
137 | def from_quat(q):
138 | return quat.to_rotmat(q)
139 |
140 | def from_ortho6d(r):
141 | return ortho6d.to_rotmat(r)
142 |
143 | def from_xform(x):
144 | return xform.to_rotmat(x)
145 |
146 | def from_euler(angles, order, radians=True):
147 | return euler.to_rotmat(angles, order, radians=radians)
--------------------------------------------------------------------------------
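The rotation-matrix helpers above are plain tensor ops, so a quick round trip is enough to sanity-check them. Below is a minimal sketch, assuming the torch submodule is importable as `aPyOpenGL.transforms.torch` (the exact re-exports in `transforms/__init__.py` are not shown here, so the import path is an assumption):

```python
import torch
from aPyOpenGL.transforms.torch import rotmat  # assumed import path

# build random proper rotation matrices: orthonormalize via QR, then force det(+1)
A = torch.randn(8, 3, 3)
Q, _ = torch.linalg.qr(A)
Q = Q * torch.det(Q)[..., None, None]

q = rotmat.to_quat(Q)                          # (8, 4) unit quaternions
R_back = rotmat.from_quat(q)                   # back to (8, 3, 3)
print(torch.allclose(Q, R_back, atol=1e-5))    # expected: True

e = rotmat.to_euler(Q, "xyz", radians=True)            # extrinsic Tait-Bryan angles, (8, 3)
x = rotmat.to_xform(Q, translation=torch.zeros(8, 3))  # homogeneous transforms, (8, 4, 4)
```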
/aPyOpenGL/transforms/torch/xform.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 | from . import rotmat, quat, aaxis, ortho6d
4 |
5 | """
6 | Operations
7 | """
8 | def interpolate(x_from, x_to, t):
9 | r_from, p_from = x_from[..., :3, :3], x_from[..., :3, 3]
10 | r_to, p_to = x_to[..., :3, :3], x_to[..., :3, 3]
11 |
12 | r = rotmat.interpolate(r_from, r_to, t)
13 | p = p_from + (p_to - p_from) * t
14 |
15 | return rotmat.to_xform(r, translation=p)
16 |
17 | def fk(local_xforms, root_pos, skeleton):
18 | """
19 | Attributes:
20 | local_xforms: (..., J, 4, 4)
21 | root_pos: (..., 3), global root position
22 | skeleton: aPyOpenGL.agl.Skeleton
23 | """
24 | pre_xforms = torch.from_numpy(skeleton.pre_xforms).to(local_xforms.device) # (J, 4, 4)
25 | pre_xforms = torch.tile(pre_xforms, local_xforms.shape[:-3] + (1, 1, 1)) # (..., J, 4, 4)
26 | pre_xforms[..., 0, :3, 3] = root_pos
27 |
28 | global_xforms = [pre_xforms[..., 0, :, :] @ local_xforms[..., 0, :, :]]
29 | for i in range(1, skeleton.num_joints):
30 | global_xforms.append(global_xforms[skeleton.parent_idx[i]] @ pre_xforms[..., i, :, :] @ local_xforms[..., i, :, :])
31 |
32 | global_xforms = torch.stack(global_xforms, dim=-3) # (..., J, 4, 4)
33 | return global_xforms
34 |
35 | """
36 | Transformation matrix to other representation
37 | """
38 | def to_rotmat(xform):
39 | return xform[..., :3, :3].clone()
40 |
41 | def to_quat(xform):
42 | return rotmat.to_quat(to_rotmat(xform))
43 |
44 | def to_aaxis(xform):
45 | return quat.to_aaxis(to_quat(xform))
46 |
47 | def to_ortho6d(xform):
48 | return rotmat.to_ortho6d(to_rotmat(xform))
49 |
50 | def to_translation(xform):
51 | return xform[..., :3, 3].clone()
52 |
53 | """
54 | Other representation to transformation matrix
55 | """
56 | def from_rotmat(r, translation=None):
57 | return rotmat.to_xform(r, translation=translation)
58 |
59 | def from_quat(q, translation=None):
60 | return quat.to_xform(q, translation=translation)
61 |
62 | def from_aaxis(a, translation=None):
63 | return aaxis.to_xform(a, translation=translation)
64 |
65 | def from_ortho6d(r, translation=None):
66 |     return rotmat.to_xform(ortho6d.to_rotmat(r), translation=translation)
--------------------------------------------------------------------------------
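`fk` above only reads three attributes from its `skeleton` argument (`pre_xforms`, `parent_idx`, `num_joints`), so it can be exercised with a tiny stand-in instead of a full `aPyOpenGL.agl.Skeleton`. The sketch below is illustrative only: `ToySkeleton` is a hypothetical stand-in, and the import path is an assumption:

```python
import numpy as np
import torch
from aPyOpenGL.transforms.torch import xform  # assumed import path

class ToySkeleton:
    """Hypothetical stand-in exposing only the attributes fk() reads."""
    def __init__(self):
        self.num_joints = 2
        self.parent_idx = [-1, 0]                    # joint 1 is a child of joint 0
        self.pre_xforms = np.tile(np.eye(4, dtype=np.float32), (2, 1, 1))
        self.pre_xforms[1, :3, 3] = [0.0, 1.0, 0.0]  # 1m rest offset along +y

skeleton = ToySkeleton()
local_xforms = torch.eye(4).repeat(2, 1, 1)          # identity local transforms, (J, 4, 4)
root_pos = torch.zeros(3)

global_xforms = xform.fk(local_xforms, root_pos, skeleton)  # (J, 4, 4)
print(xform.to_translation(global_xforms))           # joint 1 ends up at (0, 1, 0)
```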
/aPyOpenGL/utils/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/aPyOpenGL/utils/__init__.py
--------------------------------------------------------------------------------
/aPyOpenGL/utils/util.py:
--------------------------------------------------------------------------------
1 | import random
2 |
3 | import torch
4 | import numpy as np
5 | from tqdm import tqdm
6 |
7 | from functools import partial
8 | import multiprocessing as mp
9 |
10 | def seed(x=777):
11 | torch.manual_seed(x)
12 | torch.cuda.manual_seed(x)
13 | torch.cuda.manual_seed_all(x)
14 | torch.backends.cudnn.deterministic = True
15 | torch.backends.cudnn.benchmark = False
16 | np.random.seed(x)
17 | random.seed(x)
18 |
19 | def run_parallel_sync(func, iterable, num_cpus=mp.cpu_count(), desc=None, **kwargs):
20 | if desc is not None:
21 | print(f"{desc} in sync [CPU: {num_cpus}]")
22 |
23 | func_with_kwargs = partial(func, **kwargs)
24 | with mp.Pool(num_cpus) as pool:
25 |         res = pool.map(func_with_kwargs, iterable)
26 |
27 | return res
28 |
29 | def run_parallel_async(func, iterable, num_cpus=mp.cpu_count(), desc=None, **kwargs):
30 | if desc is not None:
31 | print(f"{desc} in async [CPU: {num_cpus}]")
32 |
33 | func_with_kwargs = partial(func, **kwargs)
34 | with mp.Pool(num_cpus) as pool:
35 |         res = list(tqdm(pool.imap(func_with_kwargs, iterable), total=len(iterable)))
36 |
37 | return res
--------------------------------------------------------------------------------
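Both helpers hand the wrapped function to `multiprocessing`, so the callable must be defined at module level (picklable) and the call site should sit behind an `if __name__ == "__main__"` guard. A minimal usage sketch, assuming the module is importable as `aPyOpenGL.utils.util` (the re-exports in `utils/__init__.py` are not shown here):

```python
from aPyOpenGL.utils.util import seed, run_parallel_sync  # assumed import path

def square(x, offset=0):
    # must live at module level so multiprocessing can pickle it
    return x * x + offset

if __name__ == "__main__":
    seed(777)  # make torch / numpy / random deterministic
    results = run_parallel_sync(square, range(8), num_cpus=4,
                                desc="Squaring numbers", offset=1)
    print(results)  # [1, 2, 5, 10, 17, 26, 37, 50]
```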
/examples/01_app.py:
--------------------------------------------------------------------------------
1 | import glm
2 | from aPyOpenGL import agl
3 |
4 | class MyApp(agl.App):
5 | def start(self):
6 | super().start()
7 | self.plane = agl.Render.plane().texture("brickwall.jpg", agl.TextureType.eALBEDO).texture("brickwall_normal.jpg", agl.TextureType.eNORMAL).texture("brickwall_disp.jpg", agl.TextureType.eDISPLACEMENT).position([0, 2, 0])
8 | self.cube = agl.Render.cube().texture("brickwall.jpg", agl.TextureType.eALBEDO).texture("brickwall_normal.jpg", agl.TextureType.eNORMAL).position([1, 0.35, 2]).scale(0.7)
9 | self.sphere = agl.Render.sphere().albedo([0.2, 1, 0.2]).position([-2, 0, 2]).scale(1.8)
10 | self.sphere2 = agl.Render.sphere().texture("pbr_albedo.png").texture("pbr_normal.png", agl.TextureType.eNORMAL).texture("pbr_metallic.png", agl.TextureType.eMETALIC).texture("pbr_roughness.png", agl.TextureType.eROUGHNESS).position([2, 1, 2])
11 | self.cone = agl.Render.cone().albedo([0.2, 0.2, 1]).position([0, 0.5, 2])
12 | self.pyramid = agl.Render.pyramid().albedo([1, 1, 0]).position([0, 0, -1])
13 | self.text = agl.Render.text("Hello, aPyOpenGL!")
14 | self.text_on_screen = agl.Render.text_on_screen("Hello, Screen!\nText after linebreak").position([0.01, 0.5, 0]).scale(0.5)
15 | self.cubemap = agl.Render.cubemap("skybox")
16 |
17 | self.ball = agl.Render.sphere().scale(0.5)
18 | self.instance_ball = agl.Render.sphere().albedo([0, 1, 1]).scale(0.5).instance_num(100)
19 |
20 | def render(self):
21 | super().render()
22 | self.plane.draw()
23 | self.cube.draw()
24 | self.sphere.draw()
25 | self.sphere2.draw()
26 | self.cone.draw()
27 | self.pyramid.draw()
28 | self.cubemap.draw()
29 |
30 |         # draw 1000 balls, one draw call per ball
31 | # for i in range(1000):
32 | # self.ball.position([i*0.1, 0, i*0.1]).draw()
33 |
34 |         # draw 1000 balls, one instanced draw call per chunk of 100
35 | for i in range(10):
36 | positions = []
37 | for j in range(100):
38 | positions.append(glm.vec3((i*10+j) * 0.1, 0, (i*10+j) * 0.1))
39 | self.instance_ball.position(positions).draw()
40 |
41 | def render_text(self):
42 | super().render_text()
43 | self.text.draw()
44 | self.text_on_screen.draw()
45 |
46 | if __name__ == "__main__":
47 | agl.AppManager.start(MyApp())
--------------------------------------------------------------------------------
/examples/02_animapp.py:
--------------------------------------------------------------------------------
1 | import glm
2 |
3 | from aPyOpenGL import agl
4 |
5 | class MyApp(agl.App):
6 | def __init__(self):
7 | super().__init__()
8 |
9 | def start(self):
10 | super().start()
11 | self.sphere = agl.Render.sphere().albedo([0.2, 1, 0.2])
12 |
13 | def update(self):
14 | super().update()
15 | angle = self.frame * 0.01 * glm.pi()
16 | self.sphere.position([glm.cos(angle), 0.5, glm.sin(angle)])
17 |
18 | def render(self):
19 | super().render()
20 | self.sphere.draw()
21 |
22 | def render_text(self):
23 | super().render_text()
24 | agl.Render.text_on_screen(self.frame).draw()
25 |
26 | if __name__ == "__main__":
27 | agl.AppManager.start(MyApp())
--------------------------------------------------------------------------------
/examples/03_obj.py:
--------------------------------------------------------------------------------
1 | import os
2 | import glm
3 |
4 | from aPyOpenGL import agl
5 |
6 | class MyApp(agl.App):
7 | def __init__(self, filename):
8 | super().__init__()
9 | self.obj = agl.Render.obj(filename)
10 |
11 | def render(self):
12 | super().render()
13 | self.obj.draw()
14 |
15 | if __name__ == "__main__":
16 | filename = os.path.join(agl.AGL_PATH, "data/obj/teapot.obj")
17 | agl.AppManager.start(MyApp(filename))
--------------------------------------------------------------------------------
/examples/04_bvh.py:
--------------------------------------------------------------------------------
1 | import os
2 | import glfw
3 | import glm
4 |
5 | from aPyOpenGL import agl
6 |
7 | class MotionApp(agl.App):
8 | def __init__(self, bvh_filename):
9 | super().__init__()
10 |
11 | # motion data
12 | bvh = agl.BVH(bvh_filename)
13 | self.motion = bvh.motion()
14 | self.model = bvh.model()
15 | self.total_frames = self.motion.num_frames
16 | self.fps = self.motion.fps
17 |
18 | # camera options
19 | self.focus_on_root = False
20 | self.follow_root = False
21 | self.init_cam_pos = self.camera.position
22 |
23 | def update(self):
24 | super().update()
25 |
26 | # set camera focus on the root
27 | curr_frame = self.frame % self.total_frames
28 | if self.focus_on_root:
29 | self.camera.set_focus_position(self.motion.poses[curr_frame].root_p)
30 | self.camera.set_up(glm.vec3(0, 1, 0))
31 | elif self.follow_root:
32 | self.camera.set_position(self.motion.poses[curr_frame].root_p + glm.vec3(0, 1.5, 5))
33 | self.camera.set_focus_position(self.motion.poses[curr_frame].root_p)
34 | self.camera.set_up(glm.vec3(0, 1, 0))
35 | self.camera.update()
36 |
37 | # set pose
38 | self.model.set_pose(self.motion.poses[curr_frame])
39 |
40 | def render_xray(self):
41 | super().render_xray()
42 | agl.Render.skeleton(self.motion.poses[self.frame % self.total_frames]).draw()
43 |
44 | def render_text(self):
45 | super().render_text()
46 | agl.Render.text_on_screen(f"Frame: {self.frame % self.total_frames} / {self.total_frames}").draw()
47 |
48 | def key_callback(self, window, key, scancode, action, mods):
49 | super().key_callback(window, key, scancode, action, mods)
50 |
51 | # set camera focus on the root
52 | if key == glfw.KEY_F3 and action == glfw.PRESS:
53 | self.focus_on_root = not self.focus_on_root
54 | elif key == glfw.KEY_F4 and action == glfw.PRESS:
55 | self.follow_root = not self.follow_root
56 |
57 | if __name__ == "__main__":
58 | bvh_filename = os.path.join(agl.AGL_PATH, "data/bvh/ybot_capoeira.bvh")
59 | agl.AppManager.start(MotionApp(bvh_filename))
--------------------------------------------------------------------------------
/examples/05_bvh_with_fbx.py:
--------------------------------------------------------------------------------
1 | import os
2 | import glfw
3 | import glm
4 |
5 | from aPyOpenGL import agl
6 |
7 | BVH2FBX = {
8 | "Hips": "mixamorig:Hips",
9 | "Spine": "mixamorig:Spine",
10 | "Spine1": "mixamorig:Spine1",
11 | "Spine2": "mixamorig:Spine2",
12 | "Neck": "mixamorig:Neck",
13 | "Head": "mixamorig:Head",
14 | "LeftShoulder": "mixamorig:LeftShoulder",
15 | "LeftArm": "mixamorig:LeftArm",
16 | "LeftForeArm": "mixamorig:LeftForeArm",
17 | "LeftHand": "mixamorig:LeftHand",
18 | "LeftHandThumb1": "mixamorig:LeftHandThumb1",
19 | "LeftHandThumb2": "mixamorig:LeftHandThumb2",
20 | "LeftHandThumb3": "mixamorig:LeftHandThumb3",
21 | "LeftHandIndex1": "mixamorig:LeftHandIndex1",
22 | "LeftHandIndex2": "mixamorig:LeftHandIndex2",
23 | "LeftHandIndex3": "mixamorig:LeftHandIndex3",
24 | "LeftHandMiddle1": "mixamorig:LeftHandMiddle1",
25 | "LeftHandMiddle2": "mixamorig:LeftHandMiddle2",
26 | "LeftHandMiddle3": "mixamorig:LeftHandMiddle3",
27 | "LeftHandRing1": "mixamorig:LeftHandRing1",
28 | "LeftHandRing2": "mixamorig:LeftHandRing2",
29 | "LeftHandRing3": "mixamorig:LeftHandRing3",
30 | "LeftHandPinky1": "mixamorig:LeftHandPinky1",
31 | "LeftHandPinky2": "mixamorig:LeftHandPinky2",
32 | "LeftHandPinky3": "mixamorig:LeftHandPinky3",
33 | "RightShoulder": "mixamorig:RightShoulder",
34 | "RightArm": "mixamorig:RightArm",
35 | "RightForeArm": "mixamorig:RightForeArm",
36 | "RightHand": "mixamorig:RightHand",
37 | "RightHandThumb1": "mixamorig:RightHandThumb1",
38 | "RightHandThumb2": "mixamorig:RightHandThumb2",
39 | "RightHandThumb3": "mixamorig:RightHandThumb3",
40 | "RightHandIndex1": "mixamorig:RightHandIndex1",
41 | "RightHandIndex2": "mixamorig:RightHandIndex2",
42 | "RightHandIndex3": "mixamorig:RightHandIndex3",
43 | "RightHandMiddle1": "mixamorig:RightHandMiddle1",
44 | "RightHandMiddle2": "mixamorig:RightHandMiddle2",
45 | "RightHandMiddle3": "mixamorig:RightHandMiddle3",
46 | "RightHandRing1": "mixamorig:RightHandRing1",
47 | "RightHandRing2": "mixamorig:RightHandRing2",
48 | "RightHandRing3": "mixamorig:RightHandRing3",
49 | "RightHandPinky1": "mixamorig:RightHandPinky1",
50 | "RightHandPinky2": "mixamorig:RightHandPinky2",
51 | "RightHandPinky3": "mixamorig:RightHandPinky3",
52 | "LeftUpLeg": "mixamorig:LeftUpLeg",
53 | "LeftLeg": "mixamorig:LeftLeg",
54 | "LeftFoot": "mixamorig:LeftFoot",
55 | "LeftToeBase": "mixamorig:LeftToeBase",
56 | "RightUpLeg": "mixamorig:RightUpLeg",
57 | "RightLeg": "mixamorig:RightLeg",
58 | "RightFoot": "mixamorig:RightFoot",
59 | "RightToeBase": "mixamorig:RightToeBase",
60 | }
61 |
62 | class MotionApp(agl.App):
63 | def __init__(self, bvh_filename, fbx_filename):
64 | super().__init__()
65 |
66 | # motion data
67 | bvh = agl.BVH(bvh_filename)
68 | fbx = agl.FBX(fbx_filename)
69 | self.motion = bvh.motion()
70 | self.model = fbx.model()
71 | self.model.set_joint_map(BVH2FBX)
72 |
73 | self.total_frames = self.motion.num_frames
74 |
75 | def update(self):
76 | super().update()
77 | self.model.set_pose(self.motion.poses[self.frame % self.total_frames])
78 |
79 | def render(self):
80 | super().render()
81 | agl.Render.model(self.model).draw()
82 |
83 | def render_xray(self):
84 | super().render_xray()
85 | agl.Render.skeleton(self.motion.poses[self.frame % self.total_frames]).draw()
86 |
87 | def render_text(self):
88 | super().render_text()
89 | agl.Render.text_on_screen(f"Frame: {self.frame % self.total_frames} / {self.total_frames}").draw()
90 |
91 | if __name__ == "__main__":
92 | bvh_filename = os.path.join(agl.AGL_PATH, "data/bvh/ybot_capoeira.bvh")
93 | fbx_filename = os.path.join(agl.AGL_PATH, "data/fbx/model/ybot.fbx")
94 | agl.AppManager.start(MotionApp(bvh_filename, fbx_filename))
--------------------------------------------------------------------------------
/examples/06_fbx_model.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | from aPyOpenGL import agl
4 |
5 | class MyApp(agl.App):
6 | def __init__(self, filename):
7 | super().__init__()
8 | self.model = agl.FBX(filename).model()
9 |
10 | def render(self):
11 | super().render()
12 | agl.Render.model(self.model).draw()
13 |
14 | if __name__ == "__main__":
15 | filename = os.path.join(agl.AGL_PATH, "data/fbx/model/ybot.fbx")
16 | agl.AppManager.start(MyApp(filename))
--------------------------------------------------------------------------------
/examples/07_fbx_motion.py:
--------------------------------------------------------------------------------
1 | import os
2 | import glfw
3 | import glm
4 |
5 | from OpenGL.GL import *
6 | from aPyOpenGL import agl, transforms as trf
7 |
8 | class MotionApp(agl.App):
9 | def __init__(self, motion_filename, model_filename):
10 | super().__init__()
11 |
12 | # motion data
13 | self.motion = agl.FBX(motion_filename).motions()[0]
14 | self.model = agl.FBX(model_filename).model()
15 | self.total_frames = self.motion.num_frames
16 | self.fps = self.motion.fps
17 |
18 | # camera options
19 | self.focus_on_root = False
20 | self.follow_root = False
21 | self.init_cam_pos = self.camera.position
22 |
23 | # show xray
24 | self.show_xray = True
25 |
26 | def start(self):
27 | super().start()
28 |
29 | # render options
30 | self.render_model = agl.Render.model(self.model)
31 | self.render_sphere = agl.Render.sphere(0.02).albedo([1, 0, 0]).color_mode(True).instance_num(self.motion.skeleton.num_joints)
32 |
33 | # UI options
34 | self.ui.add_menu("MotionApp")
35 | self.ui.add_menu_item("MotionApp", "Model", self.render_model.switch_visible, key=glfw.KEY_M)
36 | self.ui.add_menu_item("MotionApp", "X-ray", lambda: setattr(self, "show_xray", not self.show_xray), key=glfw.KEY_X)
37 |
38 | def update(self):
39 | super().update()
40 |
41 | self.frame = self.frame % self.total_frames
42 | self.model.set_pose(self.motion.poses[self.frame])
43 |
44 | # set camera focus on the root
45 | if self.focus_on_root:
46 | self.camera.set_focus_position(self.motion.poses[self.frame].root_pos)
47 | self.camera.set_up(glm.vec3(0, 1, 0))
48 | elif self.follow_root:
49 | self.camera.set_position(self.motion.poses[self.frame].root_pos + glm.vec3(0, 1.5, 5))
50 | self.camera.set_focus_position(self.motion.poses[self.frame].root_pos)
51 | self.camera.set_up(glm.vec3(0, 1, 0))
52 | self.camera.update()
53 |
54 | def render(self):
55 | super().render()
56 | self.render_model.update_model(self.model).draw()
57 |
58 | def render_xray(self):
59 | super().render_xray()
60 | if self.show_xray:
61 | agl.Render.skeleton(self.motion.poses[self.frame]).draw()
62 |
63 | def key_callback(self, window, key, scancode, action, mods):
64 | super().key_callback(window, key, scancode, action, mods)
65 |
66 | # set camera focus on the root
67 | if key == glfw.KEY_F3 and action == glfw.PRESS:
68 | self.focus_on_root = not self.focus_on_root
69 | elif key == glfw.KEY_F4 and action == glfw.PRESS:
70 | self.follow_root = not self.follow_root
71 |
72 | if __name__ == "__main__":
73 | motion_filename = os.path.join(agl.AGL_PATH, "data/fbx/motion/ybot_walking.fbx")
74 | model_filename = os.path.join(agl.AGL_PATH, "data/fbx/model/ybot.fbx")
75 | agl.AppManager.start(MotionApp(motion_filename, model_filename))
--------------------------------------------------------------------------------
/examples/07_kinpose.py:
--------------------------------------------------------------------------------
1 | # import os
2 | # import glm
3 | # import glfw
4 | # import numpy as np
5 |
6 | # from aPyOpenGL import agl, kin
7 |
8 | # class MotionApp(agl.App):
9 | # def __init__(self, motion_filename, model_filename):
10 | # super().__init__()
11 |
12 | # # motion data
13 | # self.motion = agl.FBX(motion_filename).motions()[0]
14 | # self.model = agl.FBX(model_filename).model()
15 | # self.total_frames = self.motion.num_frames
16 | # self.fps = self.motion.fps
17 |
18 | # def start(self):
19 | # super().start()
20 | # self.render_model = agl.Render.model(self.model)
21 | # self.kinpose = kin.KinPose(self.motion.poses[0])
22 |
23 | # def update(self):
24 | # super().update()
25 |
26 | # curr_frame = self.frame % self.total_frames
27 | # self.kinpose.set_pose(self.motion.poses[curr_frame])
28 |
29 | # # update model to render
30 | # self.model.set_pose(self.kinpose.to_pose())
31 |
32 | # def render(self):
33 | # super().render()
34 | # self.render_model.update_model(self.model).draw()
35 |
36 | # if __name__ == "__main__":
37 | # motion_filename = os.path.join(agl.AGL_PATH, "data/fbx/motion/ybot_walking.fbx")
38 | # model_filename = os.path.join(agl.AGL_PATH, "data/fbx/model/ybot.fbx")
39 | # agl.AppManager.start(MotionApp(motion_filename, model_filename))
--------------------------------------------------------------------------------
/examples/08_kindisp.py:
--------------------------------------------------------------------------------
1 | # import os
2 | # import glm
3 | # import glfw
4 | # import numpy as np
5 |
6 | # from aPyOpenGL import agl, kin
7 |
8 | # class MotionApp(agl.AnimApp):
9 | # def __init__(self, motion_filename, model_filename):
10 | # super().__init__()
11 |
12 | # # motion data
13 | # self.motion = agl.FBX(motion_filename).motions()[0]
14 | # self.model = agl.FBX(model_filename).model()
15 | # self.total_frames = self.motion.num_frames
16 | # self.fps = self.motion.fps
17 |
18 | # def start(self):
19 | # super().start()
20 |
21 | # # render options
22 | # self.render_skeleton = agl.Render.skeleton(self.model)
23 | # self.render_model = agl.Render.model(self.model)
24 |
25 | # # UI options
26 | # self.ui.add_menu("MotionApp")
27 | # self.ui.add_menu_item("MotionApp", "X-Ray", self.render_skeleton.switch_visible, key=glfw.KEY_X)
28 | # self.ui.add_menu_item("MotionApp", "Model", self.render_model.switch_visible, key=glfw.KEY_M)
29 |
30 | # def update(self):
31 | # super().update()
32 |
33 | # # kindisp
34 | # kpose = kin.KinPose(self.motion.poses[0])
35 | # kdisp = kin.KinDisp(kpose, kin.KinPose(self.motion.poses[self.curr_frame]))
36 | # kdisp.apply(kpose)
37 |
38 | # # update model
39 | # self.model.set_pose(kpose.to_pose())
40 |
41 | # def render(self):
42 | # super().render()
43 | # self.render_model.update_model(self.model).draw()
44 |
45 | # def render_xray(self):
46 | # super().render_xray()
47 | # self.render_skeleton.update_skeleton(self.model).draw()
48 |
49 | # if __name__ == "__main__":
50 | # motion_filename = os.path.join(agl.AGL_PATH, "data/fbx/motion/ybot_walking.fbx")
51 | # model_filename = os.path.join(agl.AGL_PATH, "data/fbx/model/ybot.fbx")
52 | # agl.AppManager.start(MotionApp(motion_filename, model_filename))
--------------------------------------------------------------------------------
/examples/09_sensors.py:
--------------------------------------------------------------------------------
1 | # import os
2 | # import glm
3 | # import glfw
4 | # import numpy as np
5 |
6 | # from aPyOpenGL import agl, kin
7 |
8 | # class MotionApp(agl.AnimApp):
9 | # def __init__(self, motion_filename, model_filename):
10 | # super().__init__()
11 |
12 | # # motion data
13 | # self.motion = agl.FBX(motion_filename).motions()[0]
14 | # self.model = agl.FBX(model_filename).model()
15 | # self.total_frames = self.motion.num_frames
16 | # self.fps = self.motion.fps
17 |
18 | # def start(self):
19 | # super().start()
20 |
21 | # # render options
22 | # self.render_skeleton = agl.Render.skeleton(self.model)
23 | # self.render_model = agl.Render.model(self.model)
24 | # self.render_sphere = agl.Render.sphere(0.05).albedo([1, 0, 0])
25 |
26 | # # kin pose
27 | # self.kinpose = kin.KinPose(self.motion.poses[0])
28 |
29 | # # sensor points
30 | # sensor_range = np.linspace(-1, 1, 10)
31 | # x, z = np.meshgrid(sensor_range, sensor_range)
32 | # self.sensor_points = np.stack([x.flatten(), np.zeros_like(x.flatten()), z.flatten(), np.ones_like(x.flatten())], axis=-1)
33 |
34 | # def update(self):
35 | # super().update()
36 |
37 | # # update kinpose basis to the origin
38 | # self.kinpose.set_pose(self.motion.poses[self.curr_frame])
39 |
40 | # # update model to render
41 | # self.model.set_pose(self.kinpose.to_pose())
42 |
43 | # def render(self):
44 | # super().render()
45 | # self.render_model.update_model(self.model).draw()
46 |
47 | # # draw sensor points
48 | # points = (self.kinpose.basis_xform @ self.sensor_points[..., None])[..., 0]
49 | # for point in points:
50 | # self.render_sphere.position(point[:3]).draw()
51 |
52 | # def render_xray(self):
53 | # super().render_xray()
54 | # self.render_skeleton.update_skeleton(self.model).draw()
55 |
56 | # if __name__ == "__main__":
57 | # motion_filename = os.path.join(agl.AGL_PATH, "data/fbx/motion/ybot_walking.fbx")
58 | # model_filename = os.path.join(agl.AGL_PATH, "data/fbx/model/ybot.fbx")
59 | # agl.AppManager.start(MotionApp(motion_filename, model_filename))
--------------------------------------------------------------------------------
/examples/10_bvh_export.py:
--------------------------------------------------------------------------------
1 | import os
2 | import glfw
3 | import glm
4 | import numpy as np
5 |
6 | from aPyOpenGL import agl
7 |
8 | BVH2FBX = {
9 | "Hips": "mixamorig:Hips",
10 | "Spine": "mixamorig:Spine",
11 | "Spine1": "mixamorig:Spine1",
12 | "Spine2": "mixamorig:Spine2",
13 | "Neck": "mixamorig:Neck",
14 | "Head": "mixamorig:Head",
15 | "LeftShoulder": "mixamorig:LeftShoulder",
16 | "LeftArm": "mixamorig:LeftArm",
17 | "LeftForeArm": "mixamorig:LeftForeArm",
18 | "LeftHand": "mixamorig:LeftHand",
19 | "LeftHandThumb1": "mixamorig:LeftHandThumb1",
20 | "LeftHandThumb2": "mixamorig:LeftHandThumb2",
21 | "LeftHandThumb3": "mixamorig:LeftHandThumb3",
22 | "LeftHandIndex1": "mixamorig:LeftHandIndex1",
23 | "LeftHandIndex2": "mixamorig:LeftHandIndex2",
24 | "LeftHandIndex3": "mixamorig:LeftHandIndex3",
25 | "LeftHandMiddle1": "mixamorig:LeftHandMiddle1",
26 | "LeftHandMiddle2": "mixamorig:LeftHandMiddle2",
27 | "LeftHandMiddle3": "mixamorig:LeftHandMiddle3",
28 | "LeftHandRing1": "mixamorig:LeftHandRing1",
29 | "LeftHandRing2": "mixamorig:LeftHandRing2",
30 | "LeftHandRing3": "mixamorig:LeftHandRing3",
31 | "LeftHandPinky1": "mixamorig:LeftHandPinky1",
32 | "LeftHandPinky2": "mixamorig:LeftHandPinky2",
33 | "LeftHandPinky3": "mixamorig:LeftHandPinky3",
34 | "RightShoulder": "mixamorig:RightShoulder",
35 | "RightArm": "mixamorig:RightArm",
36 | "RightForeArm": "mixamorig:RightForeArm",
37 | "RightHand": "mixamorig:RightHand",
38 | "RightHandThumb1": "mixamorig:RightHandThumb1",
39 | "RightHandThumb2": "mixamorig:RightHandThumb2",
40 | "RightHandThumb3": "mixamorig:RightHandThumb3",
41 | "RightHandIndex1": "mixamorig:RightHandIndex1",
42 | "RightHandIndex2": "mixamorig:RightHandIndex2",
43 | "RightHandIndex3": "mixamorig:RightHandIndex3",
44 | "RightHandMiddle1": "mixamorig:RightHandMiddle1",
45 | "RightHandMiddle2": "mixamorig:RightHandMiddle2",
46 | "RightHandMiddle3": "mixamorig:RightHandMiddle3",
47 | "RightHandRing1": "mixamorig:RightHandRing1",
48 | "RightHandRing2": "mixamorig:RightHandRing2",
49 | "RightHandRing3": "mixamorig:RightHandRing3",
50 | "RightHandPinky1": "mixamorig:RightHandPinky1",
51 | "RightHandPinky2": "mixamorig:RightHandPinky2",
52 | "RightHandPinky3": "mixamorig:RightHandPinky3",
53 | "LeftUpLeg": "mixamorig:LeftUpLeg",
54 | "LeftLeg": "mixamorig:LeftLeg",
55 | "LeftFoot": "mixamorig:LeftFoot",
56 | "LeftToeBase": "mixamorig:LeftToeBase",
57 | "RightUpLeg": "mixamorig:RightUpLeg",
58 | "RightLeg": "mixamorig:RightLeg",
59 | "RightFoot": "mixamorig:RightFoot",
60 | "RightToeBase": "mixamorig:RightToeBase",
61 | }
62 |
63 | class MotionApp(agl.App):
64 | def __init__(self, bvh_filename1, bvh_filename2, fbx_filename):
65 | super().__init__()
66 |
67 | # motion data
68 | bvh1 = agl.BVH(bvh_filename1)
69 | bvh2 = agl.BVH(bvh_filename2)
70 | fbx = agl.FBX(fbx_filename)
71 |
72 | self.motion1 = bvh1.motion()
73 | self.motion2 = bvh2.motion()
74 | self.model1 = fbx.model()
75 | self.model2 = fbx.model()
76 | self.model1.set_joint_map(BVH2FBX)
77 | self.model2.set_joint_map(BVH2FBX)
78 |
79 | self.total_frames = self.motion1.num_frames
80 |
81 | def update(self):
82 | super().update()
83 |
84 |         # pose1 stays in place; pose2 is shifted by 1.0 along the x-axis
85 | pose1 = self.motion1.poses[self.frame % self.total_frames]
86 | self.render_pose1 = agl.motion.Pose(pose1.skeleton, pose1.local_quats, pose1.root_pos)
87 | self.model1.set_pose(self.render_pose1)
88 |
89 | pose2 = self.motion2.poses[self.frame % self.total_frames]
90 | self.render_pose2 = agl.motion.Pose(pose2.skeleton, pose2.local_quats, pose2.root_pos + np.array([1.0, 0, 0]))
91 | self.model2.set_pose(self.render_pose2)
92 |
93 | def render(self):
94 | super().render()
95 | agl.Render.model(self.model1).draw()
96 | agl.Render.model(self.model2).draw() # * model to draw
97 |
98 | def render_xray(self):
99 | super().render_xray()
100 | agl.Render.skeleton(self.render_pose1).draw()
101 | agl.Render.skeleton(self.render_pose2).draw() # * pose to draw
102 |
103 | # TODO: Render two characters
104 |
105 |
106 | def render_text(self):
107 | super().render_text()
108 | agl.Render.text_on_screen(f"Frame: {self.frame % self.total_frames} / {self.total_frames}").draw()
109 |
110 | if __name__ == "__main__":
111 | bvh_filename1 = os.path.join(agl.AGL_PATH, "data/bvh/ybot_capoeira.bvh")
112 | bvh_filename2 = os.path.join(agl.AGL_PATH, "data/bvh/ybot_capoeira_export.bvh")
113 | fbx_filename = os.path.join(agl.AGL_PATH, "data/fbx/model/ybot.fbx")
114 |
115 | # Import bvh
116 | bvh = agl.BVH(bvh_filename1)
117 | fbx = agl.FBX(fbx_filename)
118 | motion = bvh.motion()
119 |
120 | # Export bvh
121 | motion.export_as_bvh(bvh_filename2)
122 |
123 | agl.AppManager.start(MotionApp(bvh_filename1, bvh_filename2, fbx_filename))
--------------------------------------------------------------------------------
/examples/99_heightmap.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | from aPyOpenGL import agl
4 |
5 | class MyApp(agl.App):
6 | def __init__(self, filepath):
7 | super().__init__()
8 | hmap = agl.Heightmap.load_from_file(filepath)
9 | self.heightmap = agl.Render.heightmap(hmap).albedo(0.2).floor(True)
10 |
11 | def start(self):
12 | super().start()
13 | self.grid.visible(False)
14 |
15 | def render(self):
16 | super().render()
17 | self.heightmap.draw()
18 |
19 | if __name__ == "__main__":
20 | filepath = os.path.join(agl.AGL_PATH, "data/txt/heightmap.txt")
21 | agl.AppManager.start(MyApp(filepath))
--------------------------------------------------------------------------------
/install.sh:
--------------------------------------------------------------------------------
1 | # install a conda environment with aPyOpenGL framework
2 |
3 | # initialize environment variables
4 | ENV_NAME=""
5 |
6 | while [[ $# -gt 0 ]]; do
7 | key="$1"
8 | case $key in
9 | --name)
10 | ENV_NAME="$2"
11 | echo "ENV_NAME: $ENV_NAME"
12 | shift # remove --name
13 | shift # remove its value
14 | ;;
15 | *)
16 | echo "Unknown option: $key"
17 | exit 1
18 | ;;
19 | esac
20 | done
21 |
22 | # check if --name is provided
23 | if [[ -z "$ENV_NAME" ]]; then
24 | echo "The --name argument is mandatory!"
25 |     echo "Usage: install.sh --name <env_name>"
26 | exit 1
27 | fi
28 |
29 | # check if environment with the same name exists
30 | conda info --envs | grep -E "^$ENV_NAME\s+" > /dev/null
31 | if [ $? -eq 0 ]; then
32 | echo "Environment with the name $ENV_NAME already exists!"
33 |
34 | # ask if the user wants to remove it
35 | read -p "Do you want to remove it? [y/n] " -n 1 -r
36 | echo
37 | if [[ $REPLY =~ ^[Yy]$ ]]; then
38 | conda remove -y -n "${ENV_NAME}" --all
39 | else
40 | echo "Aborting..."
41 | exit 1
42 | fi
43 | fi
44 |
45 | # ----- main ----- #
46 | ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
47 | pushd "${ROOT_DIR}"
48 | # initial setup
49 | apt-get install libxml2
50 | apt-get install libxml2-dev
51 |
52 | # # create and activate conda env
53 | conda create -y -n "${ENV_NAME}" python=3.8.10
54 | CONDA_DIR="$(conda info --base)"
55 | source "${CONDA_DIR}/etc/profile.d/conda.sh"
56 | conda activate "${ENV_NAME}"
57 | if [ $? -ne 0 ]; then
58 | echo "*** Failed to activate env"
59 | exit 1
60 | fi
61 |
62 | # make fbxsdk directory
63 | mkdir -p python-fbxsdk
64 | pip install gdown
65 |
66 | pushd python-fbxsdk
67 | # download fbx sdk
68 | SIP_TARGZ=sip-4.19.3.tar.gz
69 | FBXSDK_TARGZ=fbx202001_fbxsdk_linux.tar.gz
70 | BINDINGS_TARGZ=fbx202001_fbxpythonbindings_linux.tar.gz
71 |
72 | gdown --id 1qPvq_23_7jmxMM1gWEQPSCgSyvB6Q7CE -O $SIP_TARGZ
73 | gdown --id 1Kn8vH2QMkfaCUM6j5gNDUVwawNiMvq4c -O $FBXSDK_TARGZ
74 | gdown --id 1r4IcLf6nj10GjEgcDrIvrZ8_0I9XU-hG -O $BINDINGS_TARGZ
75 |
76 | # extract tar.gz files
77 | mkdir -p sdk
78 | mkdir -p bindings
79 |
80 | tar -xzf $SIP_TARGZ
81 | tar -xzf $FBXSDK_TARGZ -C sdk
82 | tar -xzf $BINDINGS_TARGZ -C bindings
83 |
84 | # install sdk
85 | pushd sdk
86 | chmod ugo+x fbx202001_fbxsdk_linux
87 | yes "yes" | ./fbx202001_fbxsdk_linux $(pwd)
88 | export FBXSDK_ROOT=$(pwd)
89 | export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$FBXSDK_ROOT/lib/gcc/x64/release
90 | popd
91 |
92 | # install sip
93 | pushd sip-4.19.3
94 | python configure.py
95 | sed -i "s/libs.extend(self.optional_list(\"LIBS\"))/libs = self.optional_list(\"LIBS\").extend(libs)/" "sipconfig.py"
96 | python configure.py
97 | make -j8
98 | make install
99 | export SIP_ROOT=$(pwd)
100 | popd
101 |
102 | # install bindings
103 | pushd bindings
104 | chmod ugo+x fbx202001_fbxpythonbindings_linux
105 | yes "yes" | ./fbx202001_fbxpythonbindings_linux $(pwd)
106 |
107 | python PythonBindings.py Python3_x64
108 | pushd build/Python38_x64
109 | pattern="-lz -lxml2"
110 | sed -i "s/\(LIBS = .*\) $pattern/\1/g" "Makefile"
111 | sed -i "s/\(LIBS = .*\)/\1 $pattern/" "Makefile"
112 | make clean
113 | make -j8
114 | make install
115 | popd
116 | popd
117 |
118 | # move files
119 | ENV_PATH=$(conda info --envs | grep -E "^$ENV_NAME\s+" | awk '{print $NF}')
120 | rm $ENV_PATH/lib/python3.8/site-packages/fbx.so
121 | cp -r ./bindings/build/Python38_x64/fbx.so $ENV_PATH/lib/python3.8/site-packages
122 | cp -r ./bindings/build/Distrib/site-packages/fbx/FbxCommon.py $ENV_PATH/lib/python3.8/site-packages
123 | popd
124 |
125 | # install additional dependencies
126 | pip install -r requirements.txt
127 | imageio_download_bin freeimage
128 | # pip install torch==1.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
129 | # pip install torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 --index-url https://download.pytorch.org/whl/cu121
130 |
131 | # remove fbxsdk directory
132 | rm -rf python-fbxsdk
133 |
134 | # develop package
135 | conda develop $(pwd)
136 | popd
--------------------------------------------------------------------------------
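Once the script completes, a quick sanity check is to import the FBX Python bindings it copied into the environment; the snippet below is only a check, not part of the installer:

```python
# sanity check: the FBX SDK bindings installed by install.sh should be importable
import fbx
import FbxCommon

manager, scene = FbxCommon.InitializeSdkObjects()  # helper shipped with the bindings
print("FBX SDK bindings OK:", manager is not None and scene is not None)
```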
/requirements.txt:
--------------------------------------------------------------------------------
1 | PyOpenGL==3.1.7
2 | PyOpenGL_accelerate==3.1.7
3 | PyGLM==2.5.7
4 | glfw==2.5.3
5 | tqdm==4.62.3
6 | freetype_py==2.3.0
7 | imageio==2.16.2
8 | imgui==2.0.0
9 | opencv-python
--------------------------------------------------------------------------------
/teaser.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seokhyeonhong/aPyOpenGL/388b7e94b86140e4ddf86901693aac1264e75a42/teaser.gif
--------------------------------------------------------------------------------