├── LICENSE ├── README.md ├── bluemix ├── credsfile_watson.SAMPLE.json ├── get_speech_transcription.py └── watson.py ├── bvh ├── BVHplay │ ├── bvhplay.py │ ├── camera.py │ ├── geo.py │ ├── menu.py │ ├── skeleton.py │ └── transport.py ├── bvh_parser.py ├── compute_kinect_features.py └── config.ini ├── conda └── vamp_conda_osx64.txt ├── images └── vamp_screen_shot.png ├── inputs ├── body_parts_1.json ├── body_parts_2.json └── sync_files.lst ├── praat └── prosody.praat └── sync_files_dir └── sample.bvh /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2015 ETS 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | 23 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # VAMP 2 | 3 | Visualization and Analysis for Multimodal Presentation (VAMP) 4 | 5 | VAMP is a set of scripts designed to extract high-level, expressive body language features from synchronized video, audio and skeletal data streams recorded from multimodal presentations. Release of VAMP is incremental at this point, with the current focus on facilitating the extraction of frame-level, body language features from skeletal data files. VAMP is tested to run successfully on OS X 64-bit architecture, but would be expanded to support other platforms in the future. 6 | 7 | Currently, we are working on the integration of the various components of VAMP. Until then, only BVH features can be generated. 8 | 9 | ![VAMP_screen_shot](/images/vamp_screen_shot.png) 10 | 11 | Reference 12 | --------- 13 | Full details of the VAMP framework is described in our [MLA'15 paper](http://benleong.net/downloads/icmi-mla-2015-leong.pdf): 14 | 15 | Utilizing Depth Sensors for Analyzing Multimodal Presentations: Hardware, Software and Toolkits, 16 | Chee Wee Leong, Lei Chen, Gary Feng, Chong Min Lee and Matthew MulHolland, 17 | in Proceedings of the 4th Multimodal Learning Analytics Workshop and Grand Challenges (MLA), Seattle, 2015 18 | 19 | A talk about VAMP was given at [PyGotham 2016](https://2016.pygotham.org/talks/370/visualization-and-analysi/) 20 | 21 | 22 | Biovision Hierarchical (BVH) Format 23 | --------- 24 | The Biovision Hierarchy (BVH) character animation file format was developed by Biovision. BVH provides skeleton hierarchy information as well as motion data. The BVH format is an ASCII file that is used to import rotational joint data. It is currently one of the most popular motion data formats, and has been widely adopted by the animation community. 25 | 26 | VAMP requires the Python package [BVHPlay](https://sites.google.com/a/cgspeed.com/cgspeed/bvhplay), which is a free, open-source BVH animation player for reading a BVH file and its playback. BVHPlay plays any BVH file that conforms to the basic BVH file format. It has a resizable window, four degrees-of-freedom controlling the angle of view, and provides the standard transport control buttons such as play, stop, step, forward, step back, step forward, go to start and go to end. 27 | 28 | ![BVHPlay screen dump](https://sites.google.com/a/cgspeed.com/cgspeed/bvhplay/screenshot1.jpg) 29 | 30 | Installation 31 | --------- 32 | VAMP also requires the [cgkit](http://cgkit.sourceforge.net/) Python package and [Pandas](http://pandas.pydata.org/) data analysis libraries. Setting up the environment is easiest through [Conda](http://conda.pydata.org/docs/) package management system. Due to limitation of BVHPlay, only Python 2.7 is supported in VAMP for the extraction of frame-level features. 33 | 34 | If you do not have conda install, follow the instructions [here](http://docs.continuum.io/anaconda/install) to install it. 35 | 36 | After doing a `git clone` of VAMP, type the following to install the vamp conda environment, 37 | 38 | ``` 39 | conda create --name vamp --file conda/vamp_conda_osx64.txt 40 | conda install -n vamp -c https://conda.anaconda.org/cleong cgkit 41 | ``` 42 | 43 | Configuration 44 | --------- 45 | VAMP can be configured to extract expressive features from arbitrary body parts, as long as these body parts are captured in BVH. The configuration is found in `inputs/body_parts_X.json`. For example, the `body_parts` field indicates which parts of the body in BVH would be targeted for expressive features extraction: 46 | 47 | ```javascript 48 | "body_parts": ["Hips","Spine","LeftShoulder","RightShoulder", 49 | "LeftArm","RightArm","LeftForeArm","RightForeArm", 50 | "LeftHand","RightHand"] 51 | ``` 52 | 53 | For specifying the weights of each body part in contribution to the overall kinetic energy, use: 54 | 55 | ```javascript 56 | "weights": {"Hips": 14.81, "Spine" : 12.65, "LeftShoulder" : 0.76875, 57 | "RightShoulder" : 0.76875, "LeftArm" : 1.5375, 58 | "RightArm" : 1.5375, "LeftForeArm" : 0.86, 59 | "RightForeArm" : 0.86, "LeftHand" : 0.2875, 60 | "RightHand" : 0.2875 }, 61 | ``` 62 | 63 | For symmetry indexes, the `anchor` serves as the axis of symmetry between two corresponding `parts` contained in a list: 64 | 65 | ```javascript 66 | "symmetry": {"anchor": "Hips", "parts": ["LeftHand", "RightHand"]}, 67 | ``` 68 | 69 | Note that, for spatial displacements and their first-order (temporal) and second-order (power) derivations, the displacement is measured from the `anchor` to each member of the `parts` list: 70 | 71 | ```javascript 72 | "spatial": [{"anchor": "Spine", "parts": ["LeftHand", "RightHand"]}, 73 | {"anchor": "Spine", "parts": ["LeftArm", "RightArm"]}, 74 | {"anchor": "LeftHand", "parts": ["RightHand"]}] 75 | ``` 76 | 77 | You need to also specify a list of prefixes of the BVH files you wish to process in `inputs/sync_files.lst`, and copy the actual BVH files to the `sync_files_dir`. In this case, the prefix is `sample` for the actual BVH file `sample.bvh` 78 | 79 | 80 | Running VAMP 81 | --------- 82 | Extraction of the body language features are performed in two steps. First, a recursive parsing of the BVH file is done to extract frame-level motion traces. Second, feature implementations are used to transform the motion traces into higher-level body language features. 83 | 84 | To initiate the `vamp` conda environment, type: 85 | 86 | ``` 87 | source activate vamp 88 | ``` 89 | 90 | Next, go into the bvh directory and run the following to perform recursive parsing of the BVH file: 91 | 92 | ``` 93 | ./bvh_parser.py 94 | ``` 95 | 96 | Finally, execute the following to store all features into the `output` directory: 97 | 98 | ``` 99 | ./compute_kinect_features.py 100 | ``` 101 | 102 | After running the two steps, you should be able to see the following in the `outputs` directory in the VAMP root directory: 103 | 104 | ``` 105 | sample.framelevelbvh.csv 106 | sample.framelevelfeatures.csv 107 | ``` 108 | 109 | The `.framelevelfeatures.csv` file contains frame-level body language expressive features that can be used further for building models for multimodal presentation assessments. 110 | 111 | 112 | 113 | -------------------------------------------------------------------------------- /bluemix/credsfile_watson.SAMPLE.json: -------------------------------------------------------------------------------- 1 | { 2 | "credentials": { 3 | "url": "https://stream.watsonplatform.net/speech-to-text/api", 4 | "password": "XXXXXXXXXXXX", 5 | "username": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" 6 | } 7 | } -------------------------------------------------------------------------------- /bluemix/get_speech_transcription.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import requests 4 | import json 5 | import sys 6 | import os 7 | 8 | from os.path import join, abspath 9 | 10 | WORK_DIR = os.path.dirname(os.path.realpath(__file__)) 11 | ARGS = sys.argv 12 | WAV_FILE = abspath(ARGS[1]) 13 | TRANSCRIPTION_OUTPUT_FILE = abspath(ARGS[2]) 14 | 15 | API_ENDPOINT = 'https://stream.watsonplatform.net/speech-to-text/api/v1/recognize' 16 | 17 | API_DEFAULT_PARAMS = { 18 | 'continuous': True, 19 | 'timestamps': True, 20 | 'word_confidence': True, 21 | 'profanity_filter': False, 22 | } 23 | 24 | API_DEFAULT_HEADERS = { 25 | 'content-type': 'audio/wav' 26 | } 27 | 28 | 29 | WATSON_CREDS_FILENAME = "credsfile_watson.json" 30 | 31 | 32 | def get_watson_creds(fname='credsfile_watson.json'): 33 | with open(join(WORK_DIR, fname), 'r') as f: 34 | data=json.load(f) 35 | return data['credentials'] 36 | 37 | 38 | def speech_to_text_api_call(audio_filename, username, password): 39 | with open(audio_filename, 'rb') as a_file: 40 | http_response=requests.post(API_ENDPOINT, 41 | auth=(username, password), 42 | data=a_file, 43 | params=API_DEFAULT_PARAMS, 44 | headers=API_DEFAULT_HEADERS, 45 | stream=False) 46 | return http_response 47 | 48 | 49 | def process_transcript_call(audio_filename, transcript_path, creds): 50 | resp=speech_to_text_api_call( 51 | audio_filename, 52 | username=creds['username'], 53 | password=creds['password']) 54 | with open(transcript_path, 'w') as t: 55 | t.write(resp.text) 56 | 57 | return json.loads(resp.text) 58 | 59 | 60 | def extract_transcription(data): 61 | transcription='' 62 | speaker_id='001' 63 | speaker_name='vamp' 64 | for result in data['results']: 65 | if result.get('alternatives'): 66 | # just pick best alternative 67 | alt=result.get('alternatives')[0] 68 | timestamps=alt['timestamps'] 69 | if timestamps: # for some reason, timestamps can be empty in some cases 70 | for idx, tobject in enumerate(alt['timestamps']): 71 | txt, tstart, tend=tobject 72 | transcription += '\t'.join((speaker_id, 73 | speaker_name, 74 | str(tstart), 75 | str(tend), 76 | txt)) + '\n' 77 | 78 | return transcription 79 | 80 | 81 | def main(): 82 | creds=get_watson_creds() 83 | transcription_output=process_transcript_call( 84 | WAV_FILE, 85 | TRANSCRIPTION_OUTPUT_FILE, 86 | creds) 87 | print extract_transcription(transcription_output).rstrip() 88 | 89 | 90 | if __name__ == '__main__': 91 | main() 92 | -------------------------------------------------------------------------------- /bluemix/watson.py: -------------------------------------------------------------------------------- 1 | import requests 2 | API_ENDPOINT = 'https://stream.watsonplatform.net/speech-to-text/api/v1/recognize' 3 | API_DEFAULT_PARAMS = { 4 | 'continuous': True, 5 | 'timestamps': True, 6 | 'word_confidence': True, 7 | 'profanity_filter': False, 8 | 'word_alternatives_threshold': 0.4 9 | } 10 | 11 | API_DEFAULT_HEADERS = { 12 | 'content-type': 'audio/wav' 13 | } 14 | 15 | 16 | 17 | def speech_to_text_api_call(audio_filename, username, password): 18 | with open(audio_filename, 'rb') as a_file: 19 | http_response = requests.post(API_ENDPOINT, 20 | auth=(username, password), 21 | data=a_file, 22 | params=API_DEFAULT_PARAMS, 23 | headers=API_DEFAULT_HEADERS, 24 | stream=False) 25 | return http_response 26 | -------------------------------------------------------------------------------- /bvh/BVHplay/bvhplay.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from Tkinter import Tk, IntVar, mainloop, Toplevel, BOTTOM, LEFT, W, \ 4 | Button, Label 5 | from tkFileDialog import askopenfilename 6 | from transport import Transport, Playbar, Viewport 7 | from camera import Camera 8 | from menu import Menubar 9 | from geo import worldvert, screenvert, worldedge, screenedge, grid_setup 10 | from skeleton import skeleton, process_bvhfile 11 | import profile 12 | import sys 13 | 14 | CANVAS_MINSIZE = 500 15 | 16 | 17 | ########################################### 18 | # KEYBOARD CALLBACKS 19 | ########################################### 20 | # Unfortunately I need to define functions here for the move keys 21 | # because the Camera object (in camera.py) can't call the redraw routine 22 | # (in test4.py) 23 | 24 | def MoveL(event): 25 | global mycamera 26 | global myviewport 27 | global mymenu 28 | global redraw_grid 29 | global redraw_axes 30 | redraw_grid = 1 31 | redraw_axes = 1 32 | mycamera.MoveL() 33 | if mymenu.readout: 34 | myviewport.draw_readout(mycamera) 35 | redraw() 36 | 37 | def MoveR(event): 38 | global mycamera 39 | global myviewport 40 | global mymenu 41 | global redraw_grid 42 | global redraw_axes 43 | redraw_grid = 1 44 | redraw_axes = 1 45 | mycamera.MoveR() 46 | if mymenu.readout: 47 | myviewport.draw_readout(mycamera) 48 | redraw() 49 | 50 | def MoveUp(event): 51 | global mycamera 52 | global myviewport 53 | global mymenu 54 | global redraw_grid 55 | global redraw_axes 56 | redraw_grid = 1 57 | redraw_axes = 1 58 | mycamera.MoveUp() 59 | if mymenu.readout: 60 | myviewport.draw_readout(mycamera) 61 | redraw() 62 | 63 | def MoveDown(event): 64 | global mycamera 65 | global myviewport 66 | global mymenu 67 | global redraw_grid 68 | global redraw_axes 69 | redraw_grid = 1 70 | redraw_axes = 1 71 | mycamera.MoveDown() 72 | if mymenu.readout: 73 | myviewport.draw_readout(mycamera) 74 | redraw() 75 | 76 | def MoveFd(event): 77 | global mycamera 78 | global myviewport 79 | global mymenu 80 | global redraw_grid 81 | global redraw_axes 82 | redraw_grid = 1 83 | redraw_axes = 1 84 | mycamera.MoveFd() 85 | if mymenu.readout: 86 | myviewport.draw_readout(mycamera) 87 | redraw() 88 | 89 | def MoveBack(event): 90 | global mycamera 91 | global myviewport 92 | global mymenu 93 | global redraw_grid 94 | global redraw_axes 95 | redraw_grid = 1 96 | redraw_axes = 1 97 | mycamera.MoveBack() 98 | if mymenu.readout: 99 | myviewport.draw_readout(mycamera) 100 | redraw() 101 | 102 | def RotL(event): 103 | global mycamera 104 | global myviewport 105 | global mymenu 106 | global redraw_grid 107 | global redraw_axes 108 | redraw_grid = 1 109 | redraw_axes = 1 110 | mycamera.RotL() 111 | if mymenu.readout: 112 | myviewport.draw_readout(mycamera) 113 | redraw() 114 | 115 | def RotR(event): 116 | global mycamera 117 | global myviewport 118 | global mymenu 119 | global redraw_grid 120 | global redraw_axes 121 | redraw_grid = 1 122 | redraw_axes = 1 123 | mycamera.RotR() 124 | if mymenu.readout: 125 | myviewport.draw_readout(mycamera) 126 | redraw() 127 | 128 | 129 | ############################################# 130 | # RESIZE CALLBACK 131 | ############################################# 132 | # This callback is triggered by the event for the Viewport 133 | # frame, which contains our canvas. 134 | def canvas_frame_change(event): 135 | global myviewport 136 | global mycamera 137 | global redraw_grid 138 | global redraw_axes 139 | DEBUG = 0 140 | if DEBUG: 141 | print "Got configure for event ", event 142 | print " widget = %s, width = %d, height = %d" % \ 143 | (event.widget, event.width, event.height) 144 | 145 | # The first time Tk calls this callback will be during initial window 146 | # setup. The Viewport() constructor initializes xframesize (and 147 | # yframesize) to -1, so if xframesize is -1, this callback knows that 148 | # we don't actually need to do anything to the canvas, rather we just 149 | # record the actual frame width and height. 150 | if myviewport.framewidth == -1: 151 | myviewport.framewidth = event.width 152 | myviewport.frameheight = event.height 153 | elif ( (myviewport.framewidth != event.width) or \ 154 | (myviewport.frameheight != event.height)): 155 | if DEBUG: 156 | print "canvas_frame_change: time to resize the canvas!" 157 | myviewport.framewidth = event.width 158 | myviewport.frameheight = event.height 159 | 160 | # First we set the new canvas size based on the new frame size, 161 | # then we scale the camera aspect ratio to match the canvas aspect 162 | # ratio. The initial (constructor-time) values of the camera's cfx,cfy 163 | # act as the base value against which we scale up. 164 | myviewport.canvaswidth = max(myviewport.framewidth-2, CANVAS_MINSIZE) 165 | myviewport.canvasheight = max(myviewport.frameheight-2, CANVAS_MINSIZE) 166 | 167 | mycamera.cfx = mycamera.basecfx * myviewport.canvaswidth / CANVAS_MINSIZE 168 | mycamera.cfy = mycamera.basecfy * myviewport.canvasheight / CANVAS_MINSIZE 169 | 170 | # Now resize the actual canvas 171 | myviewport.canvas.config(width=myviewport.canvaswidth, \ 172 | height=myviewport.canvasheight) 173 | 174 | if DEBUG: 175 | print " Resized canvas and camera aspect ratio." 176 | print " New canvas w,h: (%d, %d)" % (myviewport.canvaswidth, \ 177 | myviewport.canvasheight) 178 | print " New camera cfx,cfy: (%d, %d)" % (mycamera.cfx, mycamera.cfy) 179 | redraw_grid = 1 180 | redraw_axes = 1 181 | redraw() 182 | 183 | 184 | 185 | 186 | ############################################### 187 | # TRANSPORT BUTTON CALLBACKS 188 | ############################################### 189 | # We define these here because these callbacks need access to global 190 | # variables, which they can't get if these functions are in transport.py 191 | # 192 | # And since the class constructor doesn't set up the callbacks, we have to 193 | # do so ourselves after creating the Transport -- see the main code where 194 | # we call Transport() 195 | 196 | def onBegin(): 197 | global slidert 198 | global myskeleton 199 | global skelscreenedges 200 | # print "begin button clicked" 201 | if not myskeleton: return # Get out if myskeleton doesn't yet exist 202 | slidert.set(1) 203 | myskeleton.populate_skelscreenedges(skelscreenedges, slidert.get(), \ 204 | DEBUG=0) 205 | redraw() 206 | 207 | 208 | def onEnd(): 209 | # print "end button clicked" 210 | global slidert 211 | global myskeleton 212 | global skelscreenedges 213 | if not myskeleton: return # Get out if myskeleton doesn't yet exist 214 | slidert.set(myskeleton.frames) 215 | myskeleton.populate_skelscreenedges(skelscreenedges, slidert.get(), \ 216 | DEBUG=0) 217 | redraw() 218 | 219 | 220 | ###### 221 | # PLAY 222 | # When the user clicks play, we don't immediately step forward 223 | # a frame. Instead we just set mytransport.playing and set up 224 | # PlayScheduler for a callback. 225 | def onPlay(): 226 | global mytransport 227 | global myskeleton 228 | # print "play button clicked" 229 | if not myskeleton: return # Get out if myskeleton doesn't yet exist 230 | # if mytransport.playing is already 1, no need to do anything 231 | if not mytransport.playing: 232 | mytransport.playing = 1 233 | msec = int(myskeleton.dt * 1000) 234 | if(msec < 8): msec = 8 # Preserve speed sanity, max 125fps 235 | mytransport.after(msec, PlayScheduler) 236 | 237 | ########## 238 | # PlayScheduler: called indirectly via Tk's after() function 239 | # Every time we call this function, we should advance one slidert 240 | # position unless we're at the end. 241 | 242 | def PlayScheduler(): 243 | # DEBUG = 0 244 | global mytransport 245 | global slidert 246 | global myskeleton 247 | global skelscreenedges 248 | 249 | t = slidert.get() 250 | # if DEBUG: 251 | # print "PlayScheduler starting, t=",t 252 | if (t == myskeleton.frames): 253 | mytransport.playing = 0 # Stop when we run out of frames 254 | return() # Just to be clear that we exit here 255 | 256 | else: # This is now similar to onStepfd() 257 | msec = int(myskeleton.dt * 1000) 258 | if(msec < 8): msec = 8 # Preserve speed sanity, max 125fps 259 | 260 | # Testing shows that frame rates above 30fps can be a problem -- 261 | # playback fails because this callback routine is being called faster 262 | # than the display update code can run. So if the frame rate is 30fps 263 | # (30fps is 33msec per frame, thus the >30 test), THEN we can 264 | # reschedule this routine PRIOR to doing the math and display work. 265 | # But if the frame rate is faster than 30fps, we'll call mytransport.after 266 | # later. 267 | if mytransport.playing and msec >30: 268 | # DO NOT use "PlayScheduler()" -- you must write it as "PlayScheduler" 269 | mytransport.after(msec, PlayScheduler) 270 | 271 | slidert.set(t+1) 272 | # if DEBUG: 273 | # print " PlayScheduler set t to %d, calling populate_sse" % \ 274 | # (slidert.get()) 275 | myskeleton.populate_skelscreenedges(skelscreenedges, slidert.get(), \ 276 | DEBUG=0) 277 | redraw() 278 | 279 | # if DEBUG: 280 | # print "PlayScheduler: mytransport.playing =", mytransport.playing 281 | 282 | # Rescheduling for high-frame-rate animations 283 | if mytransport.playing and msec <=30: 284 | mytransport.after(msec, PlayScheduler) 285 | 286 | 287 | def onStop(): 288 | global mytransport 289 | # print "stop button clicked" 290 | mytransport.playing = 0 291 | 292 | def onStepback(): 293 | global slidert 294 | global myskeleton 295 | global skelscreenedges 296 | if not myskeleton: return # Get out if myskeleton doesn't yet exist 297 | x = slidert.get() 298 | if x>1: 299 | slidert.set(x-1) 300 | # print "stepback button clicked, slidert=", slidert.get() 301 | myskeleton.populate_skelscreenedges(skelscreenedges, slidert.get(), \ 302 | DEBUG=0) 303 | redraw() 304 | 305 | def onStepfd(): 306 | global slidert 307 | global myskeleton 308 | global skelscreenedges 309 | if not myskeleton: return # Get out if myskeleton doesn't yet exist 310 | x = slidert.get() 311 | if (x < myskeleton.frames - 1): 312 | slidert.set(x+1) 313 | # print "stepfd button clicked, slidert=",slidert.get() 314 | myskeleton.populate_skelscreenedges(skelscreenedges, slidert.get(), \ 315 | DEBUG=0) 316 | redraw() 317 | 318 | 319 | # The slider's "command" option is unlike most other widget callbacks 320 | # in that it passes us an argument containing the slider's current position. 321 | def onSlider(value): 322 | global slidert 323 | if slidert.get() == 0: 324 | slidert.set(1) # 0 not allowed 325 | # print "slider was slid, position = ", value 326 | 327 | if myskeleton: # myskeleton = 0 on initial program run 328 | myskeleton.populate_skelscreenedges(skelscreenedges, slidert.get(), \ 329 | DEBUG=0) 330 | redraw() 331 | 332 | # End Transport button callbacks 333 | ##################################### 334 | 335 | 336 | 337 | 338 | ################################################# 339 | # MENU CALLBACKS 340 | ################################################# 341 | 342 | def open_file(): # read_file file_read load_file 343 | global root # Root window 344 | global myskeleton 345 | global skelscreenedges 346 | global gridedges 347 | global slidert 348 | global myviewport 349 | global mycamera 350 | global mymenu 351 | global mytransport 352 | global redraw_grid 353 | global redraw_axes 354 | global file_prefix 355 | 356 | # No, you aren't allowed to try to load a new BVH in the middle of playing 357 | # back the current BVH... nice try. 358 | mytransport.playing = 0 359 | 360 | mycanvas = myviewport.canvas 361 | if file_prefix == 'NONE': 362 | filename = askopenfilename(title = 'Open BVH file', parent=root, \ 363 | filetypes =[ ('BVH files', '*.bvh'), ('All files', '*')] ) 364 | else: 365 | filename = askopenfilename(title = 'Open BVH file', parent=root, \ 366 | initialdir = file_prefix, \ 367 | filetypes =[ ('BVH files', '*.bvh'), ('All files', '*')] ) 368 | 369 | print "filename = ",filename # Remove this line later 370 | index = filename.rfind('/') # Unix 371 | index2 = filename.rfind('\\') # Windows 372 | if index != -1: 373 | file_prefix = filename[0:index+1] 374 | print "File prefix is ", file_prefix 375 | elif index2 != -1: 376 | file_prefix = filename[0:index2+1] 377 | print "File prefix is ", file_prefix 378 | 379 | # askopenfilename also allows: initialdir = '' 380 | # "filename" will have length 0 if user cancels the open. 381 | if len(filename) > 0: 382 | try: 383 | myskeleton2 = process_bvhfile(filename,DEBUG=1) 384 | ## myskeleton2 = profile.run('process_bvhfile(FILE,DEBUG=0)') 385 | except IOError: 386 | string = "IO Error while attempting to read file." 387 | print string 388 | error_popup(string) 389 | return 390 | except SyntaxError: 391 | string = "Parse error in BVH file." 392 | print string 393 | error_popup(string) 394 | return 395 | except: 396 | string = "Unknown error while attempting to read file." 397 | print string 398 | error_popup(string) 399 | return 400 | 401 | # If we make it here then we've successfully read and parsed the BVH file, 402 | # and created myskeleton2 403 | 404 | # Undraw all grid edges 405 | for edge in gridedges: 406 | edge.undraw(mycanvas) 407 | # Undraw all skeleton edges 408 | for edge in skelscreenedges: 409 | edge.undraw(mycanvas) 410 | # Don't need to undraw axes - leave as-is 411 | 412 | # Reset slider to 1 413 | slidert.set(1) 414 | 415 | # Say farewell to previous skeleton 416 | myskeleton = myskeleton2 417 | 418 | # Set camera position to something that will hopefully put the skeleton 419 | # in view. For Z we want something large so that we're set back from 420 | # the skeleton. 421 | mycamera.t[0] = int((myskeleton.minx + myskeleton.maxx)/2) 422 | mycamera.t[1] = int((myskeleton.miny + myskeleton.maxy)/2) 423 | if (mycamera.t[1] < 10): mycamera.t[1] = 10 424 | mycamera.t[2] = myskeleton.maxz + 100 425 | mycamera.yrot = 0 426 | mycamera.Recompute() 427 | if mymenu.readout: 428 | myviewport.draw_readout(mycamera) 429 | 430 | # Create skelscreenedges[] with sufficient space to handle one 431 | # screenedge per skeleton edge. 432 | skelscreenedges = myskeleton.make_skelscreenedges(DEBUG=0, arrow='none', 433 | circle=1) 434 | # Give us some screen edges to display. 435 | myskeleton.populate_skelscreenedges(skelscreenedges, slidert.get(), \ 436 | DEBUG=0) 437 | # print "skelscreenedges after population is:" 438 | # print skelscreenedges 439 | 440 | myplaybar.resetscale(myskeleton.frames) 441 | gridedges = grid_setup(myskeleton.minx, myskeleton.minz, \ 442 | myskeleton.maxx, myskeleton.maxz, DEBUG=0) 443 | redraw_grid = 1 444 | redraw_axes = 1 445 | redraw() 446 | 447 | ################################################ 448 | 449 | 450 | 451 | ##### 452 | # Alternate version of open_file called by a ctrl-o, which passes 453 | # an event in as an arg that we need to strip out. 454 | def open_file2(event): 455 | open_file() 456 | 457 | 458 | 459 | def error_popup(text): 460 | win = Toplevel() 461 | Button(win, text='OK', command=win.destroy).pack(side=BOTTOM) 462 | Label(win, text=text, anchor=W, justify=LEFT).pack(side=LEFT) 463 | # Refuse to let the user do anything until window is dismissed 464 | win.focus_set() 465 | win.grab_set() 466 | win.wait_window() 467 | 468 | 469 | 470 | 471 | def toggle_grid(): 472 | global mymenu 473 | global gridedges 474 | global myviewport 475 | global redraw_grid 476 | mycanvas = myviewport.canvas 477 | if mymenu.grid: 478 | mymenu.grid = 0 479 | mymenu.settingsmenu.entryconfig(0, label='Grid on') 480 | for edge in gridedges: 481 | # redraw() won't remove lines already drawn even if we set edge.drawme. 482 | # So we have to undraw the grid lines here, then set edge.drawme to 0 483 | # to block future draw attempts. 484 | edge.undraw(mycanvas) 485 | edge.drawme = 0 486 | else: 487 | mymenu.grid = 1 488 | mymenu.settingsmenu.entryconfig(0, label='Grid off') 489 | redraw_grid = 1 # redraw() won't draw the grid without this 490 | for edge in gridedges: 491 | edge.drawme = 1 492 | redraw() 493 | 494 | 495 | 496 | def toggle_axes(): 497 | global mymenu 498 | global axisedges 499 | global myviewport 500 | global redraw_axes 501 | mycanvas = myviewport.canvas 502 | if mymenu.axes: 503 | mymenu.axes = 0 504 | mymenu.settingsmenu.entryconfig(1, label='Axes on') 505 | for edge in axisedges: 506 | edge.undraw(mycanvas) 507 | edge.drawme = 0 508 | else: 509 | mymenu.axes = 1 510 | mymenu.settingsmenu.entryconfig(1, label='Axes off') 511 | redraw_axes = 1 512 | for edge in axisedges: 513 | edge.drawme = 1 514 | redraw() 515 | 516 | 517 | 518 | def toggle_readout(): # Camera xyz display 519 | global mymenu 520 | global myviewport 521 | global mycamera 522 | if mymenu.readout: 523 | mymenu.readout = 0 524 | myviewport.undraw_readout() 525 | mymenu.settingsmenu.entryconfig(2, label='Camera readout on') 526 | else: 527 | mymenu.readout = 1 528 | myviewport.draw_readout(mycamera) 529 | mymenu.settingsmenu.entryconfig(2, label='Camera readout off') 530 | 531 | 532 | 533 | 534 | ##################################### 535 | # REDRAW 536 | # Recomputes camspace and canvas-space coordinates of all edges and 537 | # redraws them. 538 | # 539 | def redraw(): 540 | global axisedges 541 | global skelscreenedges 542 | global gridedges 543 | global slidert 544 | global mycamera 545 | global myviewport 546 | global mymenu 547 | global redraw_grid # flag 548 | global redraw_axes # flag 549 | 550 | mycanvas = myviewport.canvas 551 | canvaswidth = myviewport.canvaswidth 552 | canvasheight = myviewport.canvasheight 553 | 554 | if mymenu.grid and redraw_grid: 555 | for edge in gridedges: 556 | edge.worldtocam(mycamera) 557 | edge.camtoscreen(mycamera, canvaswidth, canvasheight) 558 | edge.draw(mycanvas) 559 | redraw_grid = 0 560 | 561 | if mymenu.axes and redraw_axes: 562 | for edge in axisedges: 563 | edge.worldtocam(mycamera) # Update cam coords 564 | edge.camtoscreen(mycamera, canvaswidth, canvasheight) 565 | edge.draw(mycanvas) 566 | redraw_axes = 0 567 | 568 | for screenedge in skelscreenedges: 569 | ## print "Calling worldtocam for screenedge ", screenedge.descr 570 | screenedge.worldtocam(mycamera) 571 | screenedge.camtoscreen(mycamera, canvaswidth, canvasheight) 572 | ## print "PRINTOUT OF SCREENEDGE ", screenedge.descr 573 | ## print screenedge 574 | ## print 575 | screenedge.draw(mycanvas) 576 | 577 | 578 | 579 | ############################################ 580 | # MAIN PROGRAM STARTS HERE 581 | ############################################ 582 | 583 | root = Tk() 584 | root.title('BVHPlay') 585 | 586 | mymenu = Menubar(root) 587 | 588 | mymenu.filemenu.entryconfig(0, command = open_file) 589 | root.bind('', open_file2) 590 | mymenu.settingsmenu.entryconfig(0, command = toggle_grid) 591 | mymenu.settingsmenu.entryconfig(1, command = toggle_axes) 592 | mymenu.settingsmenu.entryconfig(2, command = toggle_readout) 593 | 594 | 595 | mytransport = Transport() # Create and pack a frame of transport buttons 596 | mytransport.btn_begin.config(command = onBegin) 597 | mytransport.btn_end.config(command = onEnd) 598 | mytransport.btn_stop.config(command = onStop) 599 | mytransport.btn_play.config(command = onPlay) 600 | mytransport.btn_stepback.config(command = onStepback) 601 | mytransport.btn_stepfd.config(command = onStepfd) 602 | 603 | myplaybar = Playbar() 604 | slidert = IntVar() # Special magic integer that allows me to tie it 605 | # to a Tk widget "variable" option, in this case 606 | # for our slider 607 | myplaybar.scale.config(variable = slidert) 608 | slidert.set(1) # DO NOT use "slidert = __" 609 | myplaybar.scale.config(command = onSlider) 610 | 611 | myviewport = Viewport(root, canvassize=CANVAS_MINSIZE) 612 | 613 | mycamera = Camera(x=0, y=15, z=35, cfx=20, parallel=0, \ 614 | ppdist=30, DEBUG=0) # cfy not needed or allowed 615 | ## mycamera = Camera(x=10, y=15, z=10, parallel=1) 616 | ## print "MYCAMERA: ", mycamera 617 | 618 | root.bind('', MoveL) 619 | root.bind('', MoveR) 620 | root.bind('', MoveUp) 621 | root.bind('', MoveDown) 622 | root.bind('', MoveFd) 623 | root.bind('', MoveBack) 624 | root.bind('', RotL) 625 | root.bind('', RotR) 626 | 627 | 628 | # Call Config() when CANVAS FRAME is resized. We don't want to 629 | # do this as root.bind() because then we get callback calls for every 630 | # widget and frame on the screen whenever we resize the window. We 631 | # just want a callback for the frame that holds the canvas. 632 | myviewport.bind('', canvas_frame_change) 633 | 634 | # Must init myskeleton to 0. At least one routine uses this as a test 635 | # for whether a skeleton exists or not. 636 | myskeleton = 0 637 | skelscreenedges = [] 638 | axisedges = [] 639 | 640 | # X axis 641 | sv1 = screenvert(0.,0.,0.) 642 | sv2 = screenvert(10.,0.,0.) 643 | se1 = screenedge(sv1, sv2, color='red', description='red x axis') 644 | axisedges.append(se1) 645 | 646 | 647 | # Y axis 648 | sv1 = screenvert(0.,0.,0.) 649 | sv2 = screenvert(0.,10.,0.) 650 | se1 = screenedge(sv1, sv2, color='green', description='green y axis') 651 | axisedges.append(se1) 652 | 653 | 654 | # Z axis 655 | sv1 = screenvert(0.,0.,0.) 656 | sv2 = screenvert(0.,0.,10.) 657 | se1 = screenedge(sv1, sv2, color='blue', description='blue z axis') 658 | axisedges.append(se1) 659 | redraw_axes = 1 660 | 661 | 662 | # Default grid prior to a skeleton load. 663 | gridedges = grid_setup(-50, -50, 50, 50, DEBUG=0) 664 | redraw_grid = 1 # Global flag - set to 1 when you move the camera please 665 | 666 | file_prefix = 'NONE' 667 | 668 | # redraw() only redraws edges, not the camera readout, because the 669 | # camera readout only changes when the camera moves, not when the 670 | # skeleton animates. If I were to have redraw() call draw_readout, 671 | # we'd erase and redraw the readout every timestep, which is a waste. 672 | if mymenu.readout: 673 | myviewport.draw_readout(mycamera) 674 | 675 | redraw() 676 | mainloop() 677 | -------------------------------------------------------------------------------- /bvh/BVHplay/camera.py: -------------------------------------------------------------------------------- 1 | 2 | from math import cos, sin, degrees, radians, pi 3 | from numpy import array, dot 4 | from copy import deepcopy 5 | 6 | ZEROMAT = array([ [0.,0.,0.,0.],[0.,0.,0.,0.],[0.,0.,0.,0.],[0.,0.,0.,0.] ]) 7 | IDENTITY = array([ [1.,0.,0.,0.],[0.,1.,0.,0.],[0.,0.,1.,0.],[0.,0.,0.,1.] ]) 8 | 9 | 10 | #################### 11 | # This camera can be either perspective or parallel-projection. 12 | # Default is perspective. 13 | # 14 | # PARALLEL PROJECTION (PP) CAMERA: 15 | # The camera can see objects that have an (xyz), within the 16 | # camera's coordinate system, of (-cfx to cfx, -cfy to cfy, ANY) where 17 | # the z value is irrelevant. 18 | # 19 | # 20 | # For the PP camera, "move forward" function doesn't actually change 21 | # the camera position, rather it reduces cfx and cfy thus giving a 22 | # "zoom in" effect. Similarly the "move back" function increases cfx 23 | # and cfy, giving a "zoom out" effect. 24 | # 25 | # 26 | # PERSPECTIVE CAMERA: 27 | # A perspective camera requires a Center of Projection, which is the 28 | # camera location itself, and a projection plane. In camera space 29 | # (which is LH space not RH), the projection plane is perp to the 30 | # z axis and the camera is looking directly down the z axis at the plane. 31 | # 32 | # "ppdist" is the distance from the camera to the projection plane. 33 | # 34 | # cfx and cfy represents the size of camera-space rectangle on the projection 35 | # plane that maps onto the Tk canvas. This is the 36 | # same meaning as for the PP camera. For the perspective camera, cf[xy] 37 | # are fixed at camera-creation time. To make a large object fit into 38 | # the display, instead of increasing cf[xy] (as we would with the PP 39 | # camera), you have to move the camera backwards, which makes more 40 | # vertices map into the view rectangle on the projection plane. 41 | 42 | # WATCH OUT: The aspect ratio specified by cf[xy] needs to match the 43 | # aspect ratio of the Tk canvas that displays your graphics. 44 | # However there's nothing intrinsic to Tk that makes this happen. 45 | # What this means is that when the user resizes the application window, 46 | # that event needs to trigger both a camera aspect ratio change 47 | # (a change to cf[xy]) as well as a canvas widget resize. 48 | 49 | # The math for the perspective camera is very close to the math for 50 | # the PP camera. The difference is that for the perspective camera, 51 | # we first have to correct a vertex's x and y to represent the process 52 | # of projecting the vertex onto the screen. Once we've done that, 53 | # we can follow the PP algorithm that simply ignores z entirely. 54 | # The difference in math doesn't show up here -- see the vertex class 55 | # in geo.py for where the difference takes place. 56 | 57 | 58 | class Camera: # Defaults based pretty much on trial and error 59 | def __init__(self, x=0, y=0, z=-10, DEBUG=0, parallel=0, \ 60 | ppdist=30, cfx=30): 61 | self.parallel = parallel # Set to 1 for PP camera 62 | 63 | # Initial camera view window must be a square (because I say so), 64 | # so we set self.cfy = cfx initially. 65 | self.cfx = cfx # Camera projection window width 66 | self.cfy = cfx # Camera projection window height 67 | # Permanently store the initial values for future use 68 | self.basecfx = self.cfx 69 | self.basecfy = self.cfy 70 | self.ppdist = ppdist # Projection plane distance from camera. 71 | # Used only for perspective camera. 72 | self.debug = DEBUG 73 | #cgkit# self.t = vec4(x,y,z,1) 74 | self.t = array([x,y,z,1.]) 75 | 76 | # The camera uses a left-handed coordinate system, however [xyz]rot 77 | # are in world coordinates, which is right-handed. 78 | # The default, zero-rotation position of the camera is with the 79 | # camera/eye pointed along the NEGATIVE Z world axis. 80 | self.xrot = 0 # Not presently used 81 | self.yrot = radians(0) 82 | self.zrot = 0 # Not presently used 83 | self.trans_inc = 5 # Camera move increment when move function called 84 | self.rot_inc = radians(5) # Camera rotation increment when 85 | # rot function called 86 | #cgkit# self.transmat = mat4() 87 | #cgkit# self.rotmat = mat4() 88 | self.transmat = deepcopy(ZEROMAT) 89 | self.rotmat = deepcopy(ZEROMAT) 90 | 91 | # To convert from worldspace to camera space, the final step 92 | # is to invert the z coordinate of a vertex, since this is how 93 | # you convert from RH worldspace to LH camspace. 94 | #cgkit# self.invertz = mat4(1.0) # Identity 95 | self.invertz = deepcopy(IDENTITY) 96 | self.invertz[2,2] = -1 97 | self.Recompute() 98 | 99 | 100 | def __repr__(self): # allows print to use Camera semi-normally 101 | str1 = "xyzw yrot cfx cfy ppdist: %s %s %s %s %s\n" % (self.t, 102 | degrees(self.yrot), self.cfx, self.cfy, self.ppdist) 103 | str2 = " transmat = " + self.transmat.__repr__() 104 | str3 = "\n rotmat = " + self.rotmat.__repr__() 105 | str4 = "\n invertz = " + self.invertz.__repr__() 106 | str5 = "\n worldtocam = " + self.worldtocam.__repr__() 107 | return (str1 + str2 + str3 + str4 + str5) 108 | 109 | def Recompute(self): 110 | #cgkit# self.transmat = self.transmat.translation(self.transvec) 111 | #cgkit# self.rotmat = self.rotmat.rotation(-self.yrot, YAXIS) 112 | #cgkit# self.worldtocam = self.invertz * self.rotmat * self.transmat 113 | 114 | self.transmat = deepcopy(IDENTITY) 115 | self.transmat[0,3] = -self.t[0] 116 | self.transmat[1,3] = -self.t[1] 117 | self.transmat[2,3] = -self.t[2] 118 | 119 | self.rotmat = deepcopy(IDENTITY) 120 | theta = -self.yrot 121 | self.rotmat[0,0] = cos(theta) 122 | self.rotmat[0,2] = sin(theta) 123 | self.rotmat[2,0] = -sin(theta) 124 | self.rotmat[2,2] = cos(theta) 125 | self.worldtocam = dot(dot(self.invertz, self.rotmat),self.transmat) 126 | 127 | def RecomputeRot(self): 128 | #cgkit# self.rotmat = self.rotmat.rotation(-self.yrot, YAXIS) 129 | #cgkit# self.worldtocam = self.invertz * self.rotmat * self.transmat 130 | self.rotmat = deepcopy(IDENTITY) 131 | theta = -self.yrot 132 | self.rotmat[0,0] = cos(theta) 133 | self.rotmat[0,2] = sin(theta) 134 | self.rotmat[2,0] = -sin(theta) 135 | self.rotmat[2,2] = cos(theta) 136 | self.worldtocam = dot(dot(self.invertz, self.rotmat),self.transmat) 137 | 138 | 139 | def RecomputeTrans(self): 140 | #cgkit# self.transmat = self.transmat.translation(self.transvec) 141 | #cgkit# self.worldtocam = self.invertz * self.rotmat * self.transmat 142 | self.transmat = deepcopy(IDENTITY) 143 | self.transmat[0,3] = -self.t[0] 144 | self.transmat[1,3] = -self.t[1] 145 | self.transmat[2,3] = -self.t[2] 146 | self.worldtocam = dot(dot(self.invertz, self.rotmat),self.transmat) 147 | 148 | def MoveR(self, event=None): 149 | #cgkit# self.t.x += (self.trans_inc * cos(self.yrot)) 150 | #cgkit# self.t.z -= (self.trans_inc * sin(self.yrot)) 151 | self.t[0] += (self.trans_inc * cos(self.yrot)) 152 | self.t[2] -= (self.trans_inc * sin(self.yrot)) 153 | self.RecomputeTrans() 154 | if self.debug: 155 | print "Camera move right" 156 | print self 157 | 158 | def MoveL(self, event=None): 159 | #cgkit# self.t.x -= (self.trans_inc * cos(self.yrot)) 160 | #cgkit# self.t.z += (self.trans_inc * sin(self.yrot)) 161 | self.t[0] -= (self.trans_inc * cos(self.yrot)) 162 | self.t[2] += (self.trans_inc * sin(self.yrot)) 163 | self.RecomputeTrans() 164 | if self.debug: 165 | print "Camera move left" 166 | print self 167 | 168 | def MoveUp(self, event=None): 169 | # This camera isn't allowed to tilt up/down or L/R, so "move up" 170 | # is always a true upward move in the world coordinate system. 171 | #cgkit# self.t.y += self.trans_inc 172 | self.t[1] += self.trans_inc 173 | self.RecomputeTrans() 174 | if self.debug: 175 | print "Camera move up" 176 | print self 177 | 178 | def MoveDown(self, event=None): 179 | #cgkit# self.t.y -= self.trans_inc 180 | self.t[1] -= self.trans_inc 181 | self.RecomputeTrans() 182 | if self.debug: 183 | print "Camera move down" 184 | print self 185 | 186 | def MoveFd(self, event=None): 187 | if self.parallel: 188 | self.cfx -= 5 189 | self.cfy -= 5 190 | if((self.cfx <=0) or (self.cfy <=0)): 191 | self.cfx += 5 # Back out the change 192 | self.cfy += 5 193 | # No Recompute needed since camera isn't actually moving even 194 | # though the user perceives it as such. 195 | else: # Perspective camera actually does move physically fd + back 196 | #cgkit# self.t.x -= (self.trans_inc * sin(self.yrot)) 197 | #cgkit# self.t.z -= (self.trans_inc * cos(self.yrot)) 198 | self.t[0] -= (self.trans_inc * sin(self.yrot)) 199 | self.t[2] -= (self.trans_inc * cos(self.yrot)) 200 | self.RecomputeTrans() 201 | if self.debug: 202 | print "Camera move forward" 203 | print self 204 | 205 | def MoveBack(self, event=None): 206 | if self.parallel: 207 | self.cfx += 5 208 | self.cfy += 5 209 | # No Recompute needed 210 | else: # Perspective camera 211 | #cgkit# self.t.x += (self.trans_inc * sin(self.yrot)) 212 | #cgkit# self.t.z += (self.trans_inc * cos(self.yrot)) 213 | self.t[0] += (self.trans_inc * sin(self.yrot)) 214 | self.t[2] += (self.trans_inc * cos(self.yrot)) 215 | self.RecomputeTrans() 216 | if self.debug: 217 | print "Camera move back" 218 | print self 219 | 220 | def RotR(self, event=None): 221 | self.yrot -= self.rot_inc 222 | if (self.yrot < 0): 223 | self.yrot += 2*pi 224 | self.RecomputeRot() 225 | if self.debug: 226 | print "Camera rotate right" 227 | print self 228 | 229 | def RotL(self, event=None): 230 | self.yrot += self.rot_inc 231 | if (self.yrot >= 2*pi): 232 | self.yrot -= 2*pi 233 | self.RecomputeRot() 234 | if self.debug: 235 | print "Camera rotate left" 236 | print self 237 | 238 | -------------------------------------------------------------------------------- /bvh/BVHplay/geo.py: -------------------------------------------------------------------------------- 1 | # 2 | 3 | from numpy import array, dot 4 | 5 | 6 | ######################################################### 7 | # WORLDVERT class 8 | ######################################################### 9 | 10 | class worldvert: 11 | def __init__(self, x=0, y=0, z=0, description='', DEBUG=0): 12 | self.tr = array([x,y,z,1]) # tr = "translate position" 13 | self.descr = description 14 | self.DEBUG = DEBUG 15 | 16 | def __repr__(self): 17 | mystr = "worldvert " + self.descr + "\n tr: " + self.tr.__repr__() 18 | return mystr 19 | 20 | 21 | ################################################## 22 | # WORLDEDGE class 23 | ################################################## 24 | 25 | class worldedge: 26 | def __init__(self, wv1, wv2, description='', DEBUG=0): 27 | self.wv1 = wv1 28 | self.wv2 = wv2 29 | self.descr = description 30 | self.DEBUG = DEBUG 31 | 32 | def __repr__(self): 33 | mystr = "Worldedge " + self.descr +" wv1:\n" + self.wv1.__repr__() \ 34 | + "\nworldedge " + self.descr + " wv2:\n" + \ 35 | self.wv2.__repr__() + "\n" 36 | return mystr 37 | 38 | 39 | ########################################################## 40 | # SCREEENVERT class 41 | ########################################################## 42 | # 9/1/08: Way too ugly to have screenvert contain or point to 43 | # a worldvert, so I'm changing this to use a translate array just 44 | # like worldvert. If you want screenvert to have the values of 45 | # a worldvert, you need to copy those values in by hand or pass 46 | # them in at construction time, just like you would with a worldvert. 47 | 48 | class screenvert: 49 | def __init__(self, x=0., y=0., z=0., description='', DEBUG=0): 50 | self.tr = array([x,y,z,1]) # tr = "translate position" 51 | self.camtr = array([0.,0.,0.,0.]) # Position in camera space 52 | self.screenx = 0 53 | self.screeny = 0 54 | self.descr = description 55 | self.DEBUG = DEBUG 56 | 57 | def __repr__(self): 58 | mystr = "screenvert " + self.descr + "\n tr: " + self.tr.__repr__() \ 59 | + "\ncamspace: " + self.camtr.__repr__() + \ 60 | "\n screenspace: " + str(self.screenx) + ", " + \ 61 | str(self.screeny) 62 | return mystr 63 | 64 | def worldtocam(self, camera): 65 | # camera.worldtocam is a premultiplied set of conversion transforms 66 | # (trans then rot then invertz) maintained by the Camera object 67 | self.camtr = dot(camera.worldtocam, self.tr) 68 | if self.DEBUG: 69 | print "Converted vertex %s to camspace:" % (self.descr) 70 | print self 71 | 72 | 73 | 74 | 75 | ################################################## 76 | # SCREENEDGE class 77 | ################################################## 78 | 79 | class screenedge: 80 | def __init__(self, sv1, sv2, width=2, color='black', arrow='none', \ 81 | description='', circle=0, DEBUG=0): 82 | self.sv1 = sv1 # screenvert not worldvert 83 | self.sv2 = sv2 84 | self.width = width 85 | self.id = 0 # Tracks canvas ID for line 86 | self.cid = 0 # canvas ID of circle at joint end, if in use 87 | self.color = color 88 | self.arrow = arrow 89 | self.descr = description 90 | self.circle = circle # Set to 1 to draw circle at end of edge 91 | self.drawme = 1 # Set to 0 to not attempt to draw on screen 92 | self.DEBUG = DEBUG 93 | 94 | def __repr__(self): 95 | mystr = "Screenedge " + self.descr +" sv1:\n" + self.sv1.__repr__() \ 96 | + "\nscreenedge " + self.descr + " sv2:\n" + \ 97 | self.sv2.__repr__() + "\n" 98 | return mystr 99 | 100 | def worldtocam(self,camera): 101 | self.sv1.worldtocam(camera) 102 | self.sv2.worldtocam(camera) 103 | 104 | 105 | # 9/6/08: was in screenvert class, but needs to be in screenedge class 106 | def camtoscreen(self, camera, canvw, canvh): # canvw = canvaswidth, \ 107 | # canvh = canvasheight 108 | 109 | # cf[xy] defines a camera rectangle, with origin at the center, that 110 | # spans from (x,y)=(-cfx, -cfy) to (cfx, cfy) 111 | # 112 | # canv[wh] defines a Tk canvas rectange, with origin in upper left 113 | # and "down" meaning "y positive", that spans from 114 | # (0,0) to (canvh,canvw) 115 | # 116 | # PARALLEL PROJECTION (PP) camera: 117 | # We can ignore the camera Z value in this case. 118 | # So this function just has to map camera (x,y) onto Tk's canvas 119 | # (x,y), taking the sign reversal of the y axis into account. 120 | # 121 | # PERSPECTIVE PROJECTION camera: 122 | # First we have to correct vertex x and y using a fudge factor 123 | # that incorporates the vertex's z distance away from the camera, 124 | # and the projection plane distance. 125 | # After that the math is the same as the PP case. 126 | 127 | cfx = camera.cfx # "Camera frame size", width 128 | cfy = camera.cfy # "Camera frame size", height 129 | ppdist = camera.ppdist 130 | x1 = self.sv1.camtr[0] 131 | y1 = self.sv1.camtr[1] 132 | z1 = self.sv1.camtr[2] 133 | 134 | x2 = self.sv2.camtr[0] 135 | y2 = self.sv2.camtr[1] 136 | z2 = self.sv2.camtr[2] 137 | 138 | if camera.parallel: 139 | self.sv1.screenx = (canvw/2)*(1 + (x1/cfx)) 140 | self.sv1.screeny = (canvh/2)*(1 - (y1/cfy)) 141 | self.sv2.screenx = (canvw/2)*(1 + (x2/cfx)) 142 | self.sv2.screeny = (canvh/2)*(1 - (y2/cfy)) 143 | 144 | else: # perspective camera 145 | if z1 > 0.1 and z2 > 0.1: 146 | self.drawme = 1 147 | xproj1 = x1 * ppdist / z1 148 | yproj1 = y1 * ppdist / z1 149 | self.sv1.screenx = (canvw/2)*(1 + (xproj1/cfx)) 150 | self.sv1.screeny = (canvh/2)*(1 - (yproj1/cfy)) 151 | 152 | xproj2 = x2 * ppdist / z2 153 | yproj2 = y2 * ppdist / z2 154 | self.sv2.screenx = (canvw/2)*(1 + (xproj2/cfx)) 155 | self.sv2.screeny = (canvh/2)*(1 - (yproj2/cfy)) 156 | 157 | elif z1 <= 0.1 and z2 <= 0.1: 158 | self.drawme = 0 # Both verts are behind the camera -- stop now 159 | 160 | elif z1 > 0.1 and z2 <= 0.1: 161 | # First vert is in front of camera, second vert is not 162 | # print "se.camtoscreen case 3 starting for (%s)" % self.descr 163 | # print " verts are (%s,%s,%s) (%s,%s,%s)" % \ 164 | # (x1,y1,z1,x2,y2,z2) 165 | self.drawme = 1 166 | xproj1 = x1 * ppdist / z1 167 | yproj1 = y1 * ppdist / z1 168 | self.sv1.screenx = (canvw/2)*(1 + (xproj1/cfx)) 169 | self.sv1.screeny = (canvh/2)*(1 - (yproj1/cfy)) 170 | # print " sv1 maps to (%s,%s)" % (self.sv1.screenx, \ 171 | # self.sv1.screeny) 172 | 173 | t = (0.1-z1)/(z2-z1) 174 | x3 = t*(x2-x1) + x1 175 | y3 = t*(y2-y1) + y1 176 | z3 = 0.1 177 | # print " Computed alternate point (%s,%s,%s)" % (x3,y3,z3) 178 | xproj3 = x3 * ppdist / z3 179 | yproj3 = y3 * ppdist / z3 180 | self.sv2.screenx = (canvw/2)*(1 + (xproj3/cfx)) 181 | self.sv2.screeny = (canvh/2)*(1 - (yproj3/cfy)) 182 | # print " Alternate point maps to (%s,%s)" % (self.sv2.screenx, \ 183 | # self.sv2.screeny) 184 | 185 | else: 186 | # First vert is behind the camera, second vert is not 187 | # print "se.camtoscreen case 4 starting for (%s)", self.descr 188 | self.drawme = 1 189 | xproj2 = x2 * ppdist / z2 190 | yproj2 = y2 * ppdist / z2 191 | self.sv2.screenx = (canvw/2)*(1 + (xproj2/cfx)) 192 | self.sv2.screeny = (canvh/2)*(1 - (yproj2/cfy)) 193 | 194 | t = (0.1-z2)/(z1-z2) 195 | x3 = t*(x1-x2) + x2 196 | y3 = t*(y1-y2) + y2 197 | z3 = 0.1 198 | xproj3 = x3 * ppdist / z3 199 | yproj3 = y3 * ppdist / z3 200 | self.sv1.screenx = (canvw/2)*(1 + (xproj3/cfx)) 201 | self.sv1.screeny = (canvh/2)*(1 - (yproj3/cfy)) 202 | 203 | # If the vertex has z=0, it's "in the camera's blind spot" - it's like 204 | # having something right above your head or next to one of your ears. 205 | # The visual result is that you can't see it. But you can still see 206 | # the edge that connects to the vertex, if the other end of the edge 207 | # is within the viewport. 208 | # 209 | # If the vertex has z<0, it will project onto the projection plane 210 | # incorrectly, somewhat as if the object is behind you and you're 211 | # holding a spoon in front of you and seeing the object reflected. 212 | # The x and y projection points end up flipping from the z>0 case. 213 | # This isn't what we want -- it's very disorienting and doesn't 214 | # correspond to what an actual viewer would see. 215 | # The approach used here is to compute a replacement point (x3,y3,z3) 216 | # which is in front of the camera. Here the math sets z3=0.1 and 217 | # computes x3 and y3 using a parameterized representation of 218 | # the line segment sv1--sv2 219 | # 220 | # This approach to vertices with z<=0 also means that for the perspective 221 | # camera, we don't see objects that are behind us. For the parallel 222 | # camera this presently isn't true - the parallel camera renders points 223 | # no matter what (camera z) is. 224 | 225 | 226 | 227 | # def camtoscreen(self,camera, canvw, canvh): 228 | # self.sv1.camtoscreen(camera, canvw, canvh) 229 | # self.sv2.camtoscreen(camera, canvw, canvh) 230 | 231 | def draw(self, canvas): 232 | self.undraw(canvas) 233 | if self.drawme: 234 | x1 = self.sv1.screenx 235 | y1 = self.sv1.screeny 236 | x2 = self.sv2.screenx 237 | y2 = self.sv2.screeny 238 | if self.DEBUG: 239 | print "About to call create_line with (%d, %d, %d, %d)" \ 240 | % (x1,y1,x2,y2) 241 | self.id = canvas.create_line(x1,y1,x2,y2, fill=self.color, \ 242 | width=self.width, arrow=self.arrow) 243 | if self.circle: 244 | self.cid = canvas.create_oval(x2-3,y2-3,x2+3,y2+3, \ 245 | fill=self.color) 246 | 247 | def undraw(self, canvas): 248 | if self.id: 249 | canvas.delete(self.id) 250 | self.id = 0 251 | if self.cid: 252 | canvas.delete(self.cid) 253 | self.cid = 0 254 | 255 | 256 | 257 | 258 | ############################# 259 | # BEGIN NON-CLASS FUNCTIONS 260 | ############################# 261 | 262 | #################################### 263 | # DELETE_SCREEN_LINES 264 | # Use this function to delete all displayed skeleton edges (bones) 265 | # from display on a canvas 266 | # 267 | def undraw_screen_lines(screenedgelist, canvas): 268 | count = len(screenedgelist) 269 | for x in range(count): 270 | screenedgelist[x].undraw(canvas) 271 | 272 | 273 | ################################## 274 | # GRID_SETUP 275 | # Creates and returns a populated array of screenedge 276 | # Don't call this until you've set up your skeleton and can 277 | # extract minx, miny, maxx, maxy from it. 278 | # 279 | def grid_setup(minx, minz, maxx, maxz, DEBUG=0): 280 | 281 | if DEBUG: 282 | print "grid_setup: minx=%s, minz=%s, maxx=%s, maxz=%s" % \ 283 | (minx, minz, maxx, maxz) 284 | 285 | # The input values define a rectangle. Round them to nearest 10. 286 | minx2 = 10*int(minx/10) - 10 287 | maxx2 = 10*int(maxx/10) + 10 288 | minz2 = 10*int(minz/10) - 10 289 | maxz2 = 10*int(maxz/10) + 10 290 | 291 | gridedges = [] 292 | # Range() won't give us the topmost value of the range, so we have to 293 | # use maxz2+1 as the top of the range. 294 | for z in range(minz2, maxz2+1, 10): 295 | sv1 = screenvert(minx2, 0., z) 296 | sv2 = screenvert(maxx2, 0., z) 297 | se = screenedge(sv1, sv2, width=1, color='grey', DEBUG=0) 298 | if DEBUG: 299 | print "grid_setup: adding screenedge from (%d,%d) to (%d,%d)" \ 300 | % (minx2, z, maxx2, z) 301 | gridedges.append(se) 302 | 303 | for x in range(minx2, maxx2+1, 10): 304 | sv1 = screenvert(x, 0., minz2) 305 | sv2 = screenvert(x, 0., maxz2) 306 | se = screenedge(sv1, sv2, width=1, color='grey', DEBUG=0) 307 | if DEBUG: 308 | print "grid_setup: adding screenedge from (%d,%d) to (%d,%d)" \ 309 | % (x, minz2, x, maxz2) 310 | gridedges.append(se) 311 | 312 | return gridedges 313 | -------------------------------------------------------------------------------- /bvh/BVHplay/menu.py: -------------------------------------------------------------------------------- 1 | # 2 | # from Tkinter import * 3 | 4 | from Tkinter import Menu, Toplevel, Button, Label, BOTTOM, LEFT, W 5 | 6 | ######################################## 7 | 8 | class Menubar: 9 | def __init__(self, parentwin): 10 | # This code based on rat book pp. 500-501 11 | topmenu = Menu(parentwin) # Create the Menu object 12 | parentwin.config(menu=topmenu) # Tell the window to use the Menu 13 | self.parentwin = parentwin 14 | 15 | filemenu = Menu(topmenu, tearoff=0) 16 | self.filemenu = filemenu 17 | retval = filemenu.add_command(label='Open... (ctrl-o)') 18 | retval = filemenu.add_command(label='Quit', command = parentwin.quit) 19 | topmenu.add_cascade(label='File', menu=filemenu) # Don't forget this 20 | 21 | settingsmenu = Menu(topmenu, tearoff=0) 22 | self.settingsmenu = settingsmenu 23 | retval = settingsmenu.add_command(label='Grid off') 24 | retval = settingsmenu.add_command(label='Axes off') 25 | retval = settingsmenu.add_command(label='Camera readout off') 26 | topmenu.add_cascade(label='Settings', menu=settingsmenu) 27 | self.grid = 1 # On by default 28 | self.axes = 1 29 | self.readout = 1 30 | 31 | helpmenu = Menu(topmenu, tearoff=0) 32 | self.helpmenu = helpmenu 33 | helpmenu.add_command(label='About BVHPlay', command=self.about) 34 | helpmenu.add_command(label='Command list', command=self.commandlist) 35 | topmenu.add_cascade(label='Help', menu=helpmenu) 36 | # We don't pack() menus. 37 | 38 | def commandlist(self): 39 | win = Toplevel() # Creates an independent window 40 | t1 = " Camera control:" 41 | t2 = " a -- move camera left (strafe left)" 42 | t3 = " d -- move camera right (strafe right)" 43 | t4 = " s -- move camera back" 44 | t5 = " w -- move camera forward" 45 | t6 = " q -- rotate camera left" 46 | t7 = " e -- rotate camera right" 47 | t8 = " r -- move camera up" 48 | t9 = " f -- move camera down" 49 | t9a = "Hold down a key to trigger autorepeat." 50 | 51 | t10 = " Slider:" 52 | t11 = " Drag the slider left and right to scrub through keyframes." 53 | t12 = " The first frame is always frame 1 (there is no frame 0)." 54 | t12a = " You can also use the camera controls while you drag the slider." 55 | 56 | t13 = " Transport buttons:" 57 | t14 = " From left to right, the meanings of the buttons are as follows: " 58 | t15 = " -- Go to first frame of BVH sequence" 59 | t16 = " -- Step back 1 frame" 60 | t17 = " -- Stop" 61 | t18 = " -- Play" 62 | t19 = " -- Step forward 1 frame" 63 | t20 = " -- Go to last frame" 64 | 65 | t21 = " Window resize:" 66 | t22 = " Yes! You can resize the application window and the animation" 67 | t23 = " display area will resize appropriately. You can even resize" 68 | t24 = " during BVH playback." 69 | 70 | text1 = "\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n\n%s\n\n" \ 71 | % (t1,t2,t3,t4,t5,t6,t7,t8,t9,t9a) 72 | text2 = "%s\n%s\n%s\n%s\n\n" % (t10,t11,t12,t12a) 73 | text3 = "%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n" \ 74 | % (t13,t14,t15,t16,t17,t18,t19,t20) 75 | text4 = "\n%s\n%s\n%s\n%s\n" % (t21,t22,t23,t24) 76 | text = text1 + text2 + text3 + text4 77 | 78 | Button(win, text='Close', command=win.destroy).pack(side=BOTTOM) 79 | Label(win, text=text, anchor=W, justify=LEFT).pack(side=LEFT) 80 | 81 | # See snake book p. 440. Takes over the universe 82 | # win.focus_set() 83 | # win.grab_set() 84 | # win.wait_window() 85 | 86 | 87 | 88 | def about(self): 89 | win = Toplevel() 90 | t1 = " Welcome to BVHplay, a free BVH player. v1.00" 91 | 92 | t2 = " Home site for this program: www.cgspeed.com" 93 | t3 = " The source code is also available." 94 | 95 | t4 = " Copyright (c) 2008 Bruce Hahne\n" 96 | t5 = " Author's email address: hahne@io.com\n\n" 97 | 98 | t6 = " This program and its source code are usable by others" 99 | t7 = " under the terms of version 3 of the Gnu General Public" 100 | t8 = " license (dated June 29, 2007), which is available at " 101 | t9 = " www.gnu.org/licenses/gpl.html" 102 | 103 | t10 = " BVHplay uses portions of the Python Computer Graphics Kit (cgkit)" 104 | t11 = " and the numpy mathematics library. See the associated README" 105 | t12 = " distributed with this program for information and licensing" 106 | t13 = " related to these libraries." 107 | 108 | t14 = " This program comes with no warranty.\n" 109 | 110 | 111 | text1 = "\n%s\n\n%s\n%s\n\n" % (t1, t2, t3) 112 | text2 = t4 + t5 113 | text3 = "%s\n%s\n%s\n%s\n\n" % (t6, t7, t8, t9) 114 | text4 = "%s\n%s\n%s\n%s\n\n" % (t10, t11, t12, t13) 115 | text5 = t14 116 | 117 | text = text1 + text2 + text3 + text4 + text5 118 | 119 | Button(win, text='Close', command=win.destroy).pack(side=BOTTOM) 120 | Label(win, text=text, anchor=W, justify=LEFT).pack(side=LEFT) 121 | -------------------------------------------------------------------------------- /bvh/BVHplay/skeleton.py: -------------------------------------------------------------------------------- 1 | #!/opt/python/2.7/bin/python 2 | 3 | # skeleton.py: various BVH and skeleton-related classes. 4 | # Adapted from earlier work "bvhtest.py" 5 | 6 | # AVOIDING OFF-BY-ONE ERRORS: 7 | # Let N be the total number of keyframes in the BVH file. Then: 8 | # - bvh.keyframes[] is an array that runs from 0 to N-1 9 | # - skeleton.keyframes[] is another reference to bvh.keyframes and similarly 10 | # runs from 0 to N-1 11 | # - skeleton.edges{t} is a dict where t can run from 1 to N 12 | # - joint.trtr{t} is a dict where t can run from 1 to N 13 | # - joint.worldpos{t} is a dict where t can run from 1 to N 14 | # 15 | # So if you're talking about raw BVH keyframe rows from the file, 16 | # you use an array and the values run from 0 to N-1. This is an artifact 17 | # of using .append to create bvh.keyframes. 18 | # 19 | # By contrast, if you're talking about a non-keyframe data structure 20 | # derived from the BVH keyframes, such as matrices or edges, it's a 21 | # dictionary and the values run from 1 to N. 22 | 23 | import sys 24 | #sys.path.append('/home/speech/Research/MMI/bvhProcessing/cgkit/cgkit-2.0.0') 25 | 26 | from math import radians, cos, sin 27 | from cgkit.bvh import BVHReader 28 | from geo import worldvert, screenvert, worldedge, screenedge 29 | from numpy import array, dot 30 | from copy import deepcopy 31 | 32 | #cgkit# XAXIS = vec3(1,0,0) 33 | #cgkit# YAXIS = vec3(0,1,0) 34 | #cgkit# ZAXIS = vec3(0,0,1) 35 | #cgkit# ORIGIN = vec4(0,0,0,1) 36 | 37 | 38 | ZEROMAT = array([ [0.,0.,0.,0.],[0.,0.,0.,0.],[0.,0.,0.,0.],[0.,0.,0.,0.] ]) 39 | IDENTITY = array([ [1.,0.,0.,0.],[0.,1.,0.,0.],[0.,0.,1.,0.],[0.,0.,0.,1.] ]) 40 | 41 | 42 | ####################################### 43 | # JOINT class (formerly BONE) 44 | # A BVH "joint" is a single vertex with potentially MULTIPLE 45 | # edges. It's not accurate to call these "bones" because if 46 | # you rotate the joint, you rotate ALL attached bones. 47 | 48 | class joint: 49 | def __init__(self, name): 50 | self.name = name 51 | self.children = [] 52 | self.channels = [] # Set later. Ordered list of channels: each 53 | # list entry is one of [XYZ]position, [XYZ]rotation 54 | self.hasparent = 0 # flag 55 | self.parent = 0 # joint.addchild() sets this 56 | #cgkit# self.strans = vec3(0,0,0) # static translation vector (x, y, z) 57 | self.strans = array([0.,0.,0.]) # I think I could just use \ 58 | # regular Python arrays 59 | 60 | # Transformation matrices: 61 | self.stransmat = array([ [0.,0.,0.,0.],[0.,0.,0.,0.], \ 62 | [0.,0.,0.,0.],[0.,0.,0.,0.] ]) 63 | 64 | self.trtr = {} 65 | ## self.trtr = [] # self.trtr[time] A premultiplied series of 66 | # translation and rotation matrices 67 | self.worldpos = {} 68 | ## self.worldpos = [] # Time-based worldspace xyz position of the 69 | # joint's endpoint. A list of vec4's 70 | 71 | def info(self): 72 | print "Joint name:", self.name 73 | print " %s is connected to " % self.name, 74 | if(len(self.children) == 0): 75 | print "nothing" 76 | else: 77 | for child in self.children: 78 | print "%s " % child.name, 79 | print 80 | for child in self.children: 81 | child.info() 82 | 83 | 84 | def __repr__(self): # Recursively build up text info 85 | str2 = self.name + " at strans=" + str(self.strans) + " is connected to " 86 | # Not sure how well self.strans will work now that self.strans is 87 | # a numpy "array", no longer a cgkit vec3. 88 | if(len(self.children) == 0): 89 | str2 = str2 + "nothing\n" 90 | else: 91 | for child in self.children: 92 | str2 = str2 + child.name + " " 93 | str2 = str2 + "\n" 94 | str3 = "" 95 | for child in self.children: 96 | str3 = str3 + child.__repr__() 97 | str1 = str2 + str3 98 | return (str1) 99 | 100 | def addchild(self, childjoint): 101 | self.children.append(childjoint) 102 | childjoint.hasparent = 1 103 | childjoint.parent = self 104 | 105 | # Called by skeleton.create_edges() 106 | def create_edges_recurse(self, edgelist, t, DEBUG=0): 107 | if DEBUG: 108 | print "create_edge_recurse starting for joint ", self.name 109 | if self.hasparent: 110 | temp1 = self.parent.worldpos[t] # Faster than triple lookup below? 111 | temp2 = self.worldpos[t] 112 | v1 = worldvert(temp1[0], temp1[1], temp1[2], DEBUG=DEBUG, \ 113 | description=self.parent.name) 114 | v2 = worldvert(temp2[0], temp2[1], temp2[2], DEBUG=DEBUG, \ 115 | description=self.name) 116 | 117 | descr = self.parent.name + " to " + self.name 118 | myedge = worldedge(v1,v2, description=descr) 119 | edgelist.append(myedge) 120 | 121 | for child in self.children: 122 | if DEBUG: 123 | print " Recursing for child ", child.name 124 | child.create_edges_recurse(edgelist,t, DEBUG) 125 | 126 | # End class joint 127 | 128 | 129 | ############################### 130 | # SKELETON class 131 | # 132 | # This class is actually for a skeleton plus some time-related info 133 | # frames: number of frames in the animation 134 | # dt: delta-t in seconds per frame (default: 30fps i.e. 1/30) 135 | class skeleton: 136 | def __init__(self, hips, keyframes, frames=0, dt=.033333333): 137 | self.hips = hips 138 | # 9/1/08: we now transfer the large bvh.keyframes data structure to 139 | # the skeleton because we need to keep this dataset around. 140 | self.keyframes = keyframes 141 | self.frames = frames # Number of frames (caller must set correctly) 142 | self.dt = dt 143 | ## self.edges = [] # List of list of edges. self.edges[time][edge#] 144 | self.edges = {} # As of 9/1/08 this now runs from 1...N not 0...N-1 145 | 146 | # Precompute hips min and max values in all 3 dimensions. 147 | # First determine how far into a keyframe we need to look to find the 148 | # XYZ hip positions 149 | offset = 0 150 | for channel in self.hips.channels: 151 | if(channel == "Xposition"): xoffset = offset 152 | if(channel == "Yposition"): yoffset = offset 153 | if(channel == "Zposition"): zoffset = offset 154 | offset += 1 155 | self.minx = 999999999999 156 | self.miny = 999999999999 157 | self.minz = 999999999999 158 | self.maxx = -999999999999 159 | self.maxy = -999999999999 160 | self.maxz = -999999999999 161 | # We can't just look at the keyframe values, we also have to correct 162 | # by the static hips OFFSET value, since sometimes this can be quite 163 | # large. I feel it's bad BVH file form to have a non-zero HIPS offset 164 | # position, but there are definitely files that do this. 165 | xcorrect = self.hips.strans[0] 166 | ycorrect = self.hips.strans[1] 167 | zcorrect = self.hips.strans[2] 168 | 169 | # self.strans = array([0.,0.,0.]) # I think I could just use \ 170 | for keyframe in self.keyframes: 171 | x = keyframe[xoffset] + xcorrect 172 | y = keyframe[yoffset] + ycorrect 173 | z = keyframe[zoffset] + zcorrect 174 | if x < self.minx: self.minx = x 175 | if x > self.maxx: self.maxx = x 176 | if y < self.miny: self.miny = y 177 | if y > self.maxy: self.maxy = y 178 | if z < self.minz: self.minz = z 179 | if z > self.maxz: self.maxz = z 180 | 181 | 182 | def __repr__(self): 183 | str1 = "frames = " + str(self.frames) + ", dt = " + str(self.dt) + "\n" 184 | str1 = str1 + self.hips.__repr__() 185 | return str1 186 | 187 | 188 | 189 | #################### 190 | # MAKE_SKELSCREENEDGES: creates and returns an array of screenedge 191 | # that has exactly as many elements as the joint count of skeleton. 192 | # 193 | def make_skelscreenedges(self, arrow='none', circle=0, DEBUG=0): 194 | if DEBUG: 195 | print "make_sse starting" 196 | 197 | # I need the number of joints in self. We don't store this value 198 | # in the class, but we can get it by looking at the length of any 199 | # edge array self.edges[t] for any t. So we make sure that self.edges[1] 200 | # is set up, then we take its length. 201 | if not (self.edges.has_key(1)): 202 | self.create_edges_onet(1) 203 | 204 | jointcount = len(self.edges[1]) 205 | if DEBUG: 206 | print "make_skelscreenedges: jointcount=",jointcount 207 | skelscreenedges = [] 208 | 209 | for x in range(jointcount): 210 | sv1 = screenvert(0.,0.,0.,description='make_sse_sv1') 211 | sv2 = screenvert(0.,0.,0.,description='make_sse_sv2') 212 | se1 = screenedge(sv1, sv2, arrow=arrow, circle=circle, DEBUG=DEBUG, \ 213 | description='created_by_make_skelscreenedges' ) 214 | skelscreenedges.append(se1) 215 | return skelscreenedges 216 | 217 | 218 | 219 | 220 | 221 | ######################################### 222 | # CREATE_EDGES_ONET class function 223 | 224 | def create_edges_onet(self,t,DEBUG=0): 225 | # print "create_edges_onet starting for t=",t 226 | if DEBUG: 227 | print "create_edges_onet starting for t=",t 228 | 229 | # Before we can compute edge positions, we need to have called 230 | # process_bvhkeyframe for time t, which computes trtr and worldpos 231 | # for the joint hierarchy at time t. Since we no longer precompute 232 | # this information when we read the BVH file, here's where we do it. 233 | # This is on-demand computation of trtr and worldpos. 234 | if not (self.hips.worldpos.has_key(t)): 235 | if DEBUG: 236 | print "create_edges_onet: about to call process_bvhkeyframe for t=",t 237 | process_bvhkeyframe(self.keyframes[t-1], self.hips, t, DEBUG=DEBUG) 238 | 239 | if not (self.edges.has_key(t)): 240 | if DEBUG: 241 | print "create_edges_onet: creating edges for t=",t 242 | edgelist = [] 243 | self.hips.create_edges_recurse(edgelist, t, DEBUG=DEBUG) 244 | self.edges[t] = edgelist # dictionary entry 245 | 246 | if DEBUG: 247 | print "create_edges edge list at timestep %d:" %(t) 248 | print edgelist 249 | 250 | 251 | ################################# 252 | # POPULATE_SKELSCREENEDGES 253 | # Given a time t and a precreated array of screenedge, copies values 254 | # from skeleton.edges[] into the screenedge array. 255 | # 256 | # Use this routine whenever slidert (time position on slider) changes. 257 | # This routine is how you get your edge data somewhere that redraw() 258 | # will make use of it. 259 | 260 | def populate_skelscreenedges(self, sse, t, DEBUG=0): 261 | if DEBUG: 262 | print "populate_sse starting for t=",t 263 | # First we have to make sure that self.edges exists for slidert=t 264 | if not (self.edges.has_key(t)): 265 | if DEBUG: 266 | print "populate_skelscreenedges: about to call create_edges_onet(%d)" \ 267 | % (t) 268 | self.create_edges_onet(t, DEBUG=DEBUG) 269 | counter = 0 270 | for wldedge in self.edges[t]: 271 | # Yes, we copy in the xyz values manually. This keeps us sane. 272 | sse[counter].sv1.tr[0] = wldedge.wv1.tr[0] 273 | sse[counter].sv1.tr[1] = wldedge.wv1.tr[1] 274 | sse[counter].sv1.tr[2] = wldedge.wv1.tr[2] 275 | sse[counter].sv2.tr[0] = wldedge.wv2.tr[0] 276 | sse[counter].sv2.tr[1] = wldedge.wv2.tr[1] 277 | sse[counter].sv2.tr[2] = wldedge.wv2.tr[2] 278 | # Also copy in the name 279 | sse[counter].descr = wldedge.descr 280 | counter +=1 281 | if DEBUG: 282 | print "populate_skelscreenedges: copied %d edges from skeleton to sse" \ 283 | % (counter) 284 | 285 | 286 | 287 | ####################################### 288 | # READBVH class 289 | # 290 | # Per the BVHReader documentation, we need to subclass BVHReader 291 | # and set up functions onHierarchy, onMotion, and onFrame to parse 292 | # the BVH file. 293 | # 294 | class readbvh(BVHReader): 295 | def onHierarchy(self, root): 296 | print "readbvh: onHierarchy invoked" 297 | self.root = root # Save root for later use 298 | self.keyframes = [] # Used later in onFrame 299 | 300 | def onMotion(self, frames, dt): 301 | print "readbvh: onMotion invoked. frames = %s, dt = %s" % (frames,dt) 302 | self.frames = frames 303 | self.dt = dt 304 | 305 | def onFrame(self, values): 306 | print "readbvh: onFrame invoked, values =", values 307 | self.keyframes.append(values) # Hopefully this gives us a list of lists 308 | 309 | 310 | ####################################### 311 | # NON-CLASS FUNCTIONS START HERE 312 | ####################################### 313 | 314 | ####################################### 315 | # PROCESS_BVHNODE function 316 | # 317 | # Recursively process a BVHReader node object and return the root joint 318 | # of a bone hierarchy. This routine creates a new joint hierarchy. 319 | # It isn't a Skeleton yet since we haven't read any keyframes or 320 | # created a Skeleton class yet. 321 | # 322 | # Steps: 323 | # 1. Create a new joint 324 | # 2. Copy the info from Node to the new joint 325 | # 3. For each Node child, recursively call myself 326 | # 4. Return the new joint as retval 327 | # 328 | # We have to pass in the parent name because this routine 329 | # needs to be able to name the leaves "parentnameEnd" instead 330 | # of "End Site" 331 | 332 | def process_bvhnode(node, parentname='hips'): 333 | name = node.name 334 | if (name == "End Site") or (name == "end site"): 335 | name = parentname + "End" 336 | # print "process_bvhnode: name is ", name 337 | b1 = joint(name) 338 | b1.channels = node.channels 339 | #cgkit# b1.strans.x = node.offset[0] 340 | #cgkit# b1.strans.y = node.offset[1] 341 | #cgkit# b1.strans.z = node.offset[2] 342 | b1.strans[0] = node.offset[0] 343 | b1.strans[1] = node.offset[1] 344 | b1.strans[2] = node.offset[2] 345 | 346 | # Compute static translation matrix from vec3 b1.strans 347 | #cgkit# b1.stransmat = b1.stransmat.translation(b1.strans) 348 | ## b1.stransmat = deepcopy(IDENTITY) 349 | b1.stransmat = array([ [1.,0.,0.,0.],[0.,1.,0.,0.],[0.,0.,1.,0.],[0.,0.,0.,1.] ]) 350 | 351 | b1.stransmat[0,3] = b1.strans[0] 352 | b1.stransmat[1,3] = b1.strans[1] 353 | b1.stransmat[2,3] = b1.strans[2] 354 | 355 | for child in node.children: 356 | b2 = process_bvhnode(child, name) # Creates a child joint "b2" 357 | b1.addchild(b2) 358 | return b1 359 | 360 | 361 | ############################### 362 | # PROCESS_BVHKEYFRAME 363 | # Recursively extract (occasionally) translation and (mostly) rotation 364 | # values from a sequence of floats and assign to joints. 365 | # 366 | # Takes a keyframe (a list of floats) and returns a new keyframe that 367 | # contains the not-yet-processed (not-yet-eaten) floats of the original 368 | # sequence of floats. Also assigns the eaten floats to the appropriate 369 | # class variables of the appropriate Joint object. 370 | # 371 | # This function could technically be a class function within the Joint 372 | # class, but to maintain similarity with process_bvhnode I won't do that. 373 | # 374 | # 9/1/08: rewritten to process only one keyframe 375 | 376 | def process_bvhkeyframe(keyframe, joint, t, DEBUG=0): 377 | 378 | counter = 0 379 | dotrans = 0 380 | dorot = 0 381 | 382 | # We have to build up drotmat one rotation value at a time so that 383 | # we get the matrix multiplication order correct. 384 | drotmat = array([ [1.,0.,0.,0.],[0.,1.,0.,0.],[0.,0.,1.,0.],[0.,0.,0.,1.] ]) 385 | 386 | if DEBUG: 387 | print " process_bvhkeyframe: doing joint %s, t=%d" % (joint.name, t) 388 | print " keyframe has %d elements in it." % (len(keyframe)) 389 | 390 | # Suck in as many values off the front of "keyframe" as we need 391 | # to populate this joint's channels. The meanings of the keyvals 392 | # aren't given in the keyframe itself; their meaning is specified 393 | # by the channel names. 394 | for channel in joint.channels: 395 | keyval = keyframe[counter] 396 | if(channel == "Xposition"): 397 | dotrans = 1 398 | xpos = keyval 399 | elif(channel == "Yposition"): 400 | dotrans = 1 401 | ypos = keyval 402 | elif(channel == "Zposition"): 403 | dotrans = 1 404 | zpos = keyval 405 | elif(channel == "Xrotation"): 406 | dorot = 1 407 | xrot = keyval 408 | #cgkit# drotmat2 = drotmat2.rotation(radians(xrot), XAXIS) 409 | #cgkit# drotmat = drotmat * drotmat2 # Build up the full rotation matrix 410 | theta = radians(xrot) 411 | mycos = cos(theta) 412 | mysin = sin(theta) 413 | ## drotmat2 = deepcopy(IDENTITY) 414 | drotmat2 = array([ [1.,0.,0.,0.],[0.,1.,0.,0.],[0.,0.,1.,0.], \ 415 | [0.,0.,0.,1.] ]) 416 | drotmat2[1,1] = mycos 417 | drotmat2[1,2] = -mysin 418 | drotmat2[2,1] = mysin 419 | drotmat2[2,2] = mycos 420 | drotmat = dot(drotmat, drotmat2) 421 | 422 | elif(channel == "Yrotation"): 423 | dorot = 1 424 | yrot = keyval 425 | #cgkit# drotmat2 = drotmat2.rotation(radians(yrot), YAXIS) 426 | #cgkit# drotmat = drotmat * drotmat2 427 | theta = radians(yrot) 428 | mycos = cos(theta) 429 | mysin = sin(theta) 430 | ## drotmat2 = deepcopy(IDENTITY) 431 | drotmat2 = array([ [1.,0.,0.,0.],[0.,1.,0.,0.],[0.,0.,1.,0.], \ 432 | [0.,0.,0.,1.] ]) 433 | drotmat2[0,0] = mycos 434 | drotmat2[0,2] = mysin 435 | drotmat2[2,0] = -mysin 436 | drotmat2[2,2] = mycos 437 | drotmat = dot(drotmat, drotmat2) 438 | 439 | elif(channel == "Zrotation"): 440 | dorot = 1 441 | zrot = keyval 442 | theta = radians(zrot) 443 | mycos = cos(theta) 444 | mysin = sin(theta) 445 | ## drotmat2 = deepcopy(IDENTITY) 446 | drotmat2 = array([ [1.,0.,0.,0.],[0.,1.,0.,0.],[0.,0.,1.,0.], \ 447 | [0.,0.,0.,1.] ]) 448 | drotmat2[0,0] = mycos 449 | drotmat2[0,1] = -mysin 450 | drotmat2[1,0] = mysin 451 | drotmat2[1,1] = mycos 452 | drotmat = dot(drotmat, drotmat2) 453 | else: 454 | print "Fatal error in process_bvhkeyframe: illegal channel name ", \ 455 | channel 456 | return(0) 457 | ## sys.exit() 458 | counter += 1 459 | # End "for channel..." 460 | 461 | if dotrans: # If we are the hips... 462 | # Build a translation matrix for this keyframe 463 | dtransmat = array([ [1.,0.,0.,0.],[0.,1.,0.,0.],[0.,0.,1.,0.], \ 464 | [0.,0.,0.,1.] ]) 465 | dtransmat[0,3] = xpos 466 | dtransmat[1,3] = ypos 467 | dtransmat[2,3] = zpos 468 | 469 | if DEBUG: 470 | print " Joint %s: xpos ypos zpos is %s %s %s" % (joint.name, \ 471 | xpos, ypos, zpos) 472 | # End of IF dotrans 473 | 474 | if DEBUG: 475 | print " Joint %s: xrot yrot zrot is %s %s %s" % \ 476 | (joint.name, xrot, yrot, zrot) 477 | 478 | # At this point we should have computed: 479 | # stransmat (computed previously in process_bvhnode subroutine) 480 | # dtransmat (only if we're the hips) 481 | # drotmat 482 | # We now have enough to compute joint.trtr and also to convert 483 | # the position of this joint (vertex) to worldspace. 484 | # 485 | # For the non-hips case, we assume that our parent joint has already 486 | # had its trtr matrix appended to the end of self.trtr[] 487 | # and that the appropriate matrix from the parent is the LAST item 488 | # in the parent's trtr[] matrix list. 489 | # 490 | # Worldpos of the current joint is localtoworld = TRTR...T*[0,0,0,1] 491 | # which equals parent_trtr * T*[0,0,0,1] 492 | # In other words, the rotation value of a joint has no impact on 493 | # that joint's position in space, so drotmat doesn't get used to 494 | # compute worldpos in this routine. 495 | # 496 | # However we don't pass localtoworld down to our child -- what 497 | # our child needs is trtr = TRTRTR...TR 498 | # 499 | # The code below attempts to optimize the computations so that we 500 | # compute localtoworld first, then trtr. 501 | 502 | if joint.hasparent: # Not hips 503 | ## parent_trtr = joint.parent.trtr[-1] # Last entry from parent 504 | parent_trtr = joint.parent.trtr[t] # Dictionary-based rewrite 505 | 506 | # 8/31/2008: dtransmat now excluded from non-hips computation since 507 | # it's just identity anyway. 508 | ## localtoworld = dot(parent_trtr,dot(joint.stransmat,dtransmat)) 509 | localtoworld = dot(parent_trtr,joint.stransmat) 510 | 511 | else: # Hips 512 | #cgkit# localtoworld = joint.stransmat * dtransmat 513 | localtoworld = dot(joint.stransmat,dtransmat) 514 | 515 | #cgkit# trtr = localtoworld * drotmat 516 | trtr = dot(localtoworld,drotmat) 517 | 518 | ## joint.trtr.append(trtr) # Whew 519 | joint.trtr[t] = trtr # New dictionary-based approach 520 | 521 | 522 | #cgkit# worldpos = localtoworld * ORIGIN # worldpos should be a vec4 523 | # 524 | # numpy conversion: eliminate the matrix multiplication entirely, 525 | # since all we're doing is extracting the last column of worldpos. 526 | worldpos = array([ localtoworld[0,3],localtoworld[1,3], \ 527 | localtoworld[2,3], localtoworld[3,3] ]) 528 | ## joint.worldpos.append(worldpos) 529 | joint.worldpos[t] = worldpos # Dictionary-based approach 530 | 531 | if DEBUG: 532 | print " Joint %s: here are some matrices" % (joint.name) 533 | print " stransmat:" 534 | print joint.stransmat 535 | if not (joint.hasparent): # if hips 536 | print " dtransmat:" 537 | print dtransmat 538 | print " drotmat:" 539 | print drotmat 540 | print " localtoworld:" 541 | print localtoworld 542 | print " trtr:" 543 | print trtr 544 | print " worldpos:", worldpos 545 | print 546 | 547 | newkeyframe = keyframe[counter:] # Slices from counter+1 to end 548 | for child in joint.children: 549 | # Here's the recursion call. Each time we call process_bvhkeyframe, 550 | # the returned value "newkeyframe" should shrink due to the slicing 551 | # process 552 | newkeyframe = process_bvhkeyframe(newkeyframe, child, t, DEBUG=DEBUG) 553 | if(newkeyframe == 0): # If retval = 0 554 | print "Passing up fatal error in process_bvhkeyframe" 555 | return(0) 556 | return newkeyframe 557 | 558 | 559 | 560 | ############################### 561 | # PROCESS_BVHFILE function 562 | 563 | def process_bvhfile(filename, DEBUG=0): 564 | 565 | # 9/11/08: the caller of this routine should cover possible exceptions. 566 | # Here are two possible errors: 567 | # IOError: [Errno 2] No such file or directory: 'fizzball' 568 | # raise SyntaxError, "Syntax error in line %d: 'HIERARCHY' expected, \ 569 | # got '%s' instead"%(self.linenr, tok) 570 | 571 | # Here's some information about the two mybvh calls: 572 | # 573 | # mybvh.read() returns a readbvh instance: 574 | # retval from readbvh() is 575 | # So this isn't useful for error-checking. 576 | # 577 | # mybvh.read() returns None on success and throws an exception on failure. 578 | 579 | print "Reading BVH file...", 580 | #print "hello1"; 581 | mybvh = readbvh(filename) # Doesn't actually read the file, just creates 582 | # a readbvh object and sets up the file for 583 | # reading in the next line. 584 | 585 | retval = mybvh.read() # Reads and parses the file. 586 | #print "hello2"; 587 | hips = process_bvhnode(mybvh.root) # Create joint hierarchy 588 | print "done" 589 | 590 | print "Building skeleton...", 591 | myskeleton = skeleton(hips, keyframes = mybvh.keyframes, \ 592 | frames=mybvh.frames, dt=mybvh.dt) 593 | print "done" 594 | if DEBUG: 595 | print "skeleton is: ", myskeleton 596 | return(myskeleton) 597 | -------------------------------------------------------------------------------- /bvh/BVHplay/transport.py: -------------------------------------------------------------------------------- 1 | 2 | from Tkinter import Frame, BOTTOM, LEFT, S, NW, X, BitmapImage, Button, \ 3 | Scale, YES, BOTH, Canvas 4 | from math import log10, pow, floor, degrees 5 | 6 | ##################### 7 | # TRANSPORT BUTTONS 8 | 9 | DATA_BITMAP_BEGIN = """ 10 | #define Xbitmap_width 20 11 | #define Xbitmap_height 21 12 | static unsigned char Xbitmap_bits[] = { 13 | 0x00, 0x00, 0xf0, 0x0c, 0x60, 0xf0, 0x0c, 0x70, 0xf0, 14 | 0x0c, 0x78, 0xf0, 0x0c, 0x7c, 0xf0, 0x0c, 0x7e, 0xf0, 15 | 0x0c, 0x7f, 0xf0, 0x8c, 0x7f, 0xf0, 0xcc, 0x7f, 0xf0, 16 | 0xec, 0x7f, 0xf0, 0xfc, 0x7f, 0xf0, 0xec, 0x7f, 0xf0, 17 | 0xcc, 0x7f, 0xf0, 0x8c, 0x7f, 0xf0, 0x0c, 0x7f, 0xf0, 18 | 0x0c, 0x7e, 0xf0, 0x0c, 0x7c, 0xf0, 0x0c, 0x78, 0xf0, 19 | 0x0c, 0x70, 0xf0, 0x0c, 0x60, 0xf0, 0x00, 0x00, 0xf0, 20 | }; 21 | """ 22 | 23 | DATA_BITMAP_END = """ 24 | #define Xbitmap_width 20 25 | #define Xbitmap_height 21 26 | static unsigned char Xbitmap_bits[] = { 27 | 0x00, 0x00, 0xf0, 0x60, 0x00, 0xf3, 0xe0, 0x00, 0xf3, 28 | 0xe0, 0x01, 0xf3, 0xe0, 0x03, 0xf3, 0xe0, 0x07, 0xf3, 29 | 0xe0, 0x0f, 0xf3, 0xe0, 0x1f, 0xf3, 0xe0, 0x3f, 0xf3, 30 | 0xe0, 0x7f, 0xf3, 0xe0, 0xff, 0xf3, 0xe0, 0x7f, 0xf3, 31 | 0xe0, 0x3f, 0xf3, 0xe0, 0x1f, 0xf3, 0xe0, 0x0f, 0xf3, 32 | 0xe0, 0x07, 0xf3, 0xe0, 0x03, 0xf3, 0xe0, 0x01, 0xf3, 33 | 0xe0, 0x00, 0xf3, 0x60, 0x00, 0xf3, 0x00, 0x00, 0xf0, 34 | }; 35 | """ 36 | 37 | DATA_BITMAP_STOP = """ 38 | #define Xbitmap_width 20 39 | #define Xbitmap_height 21 40 | static unsigned char Xbitmap_bits[] = { 41 | 0x00, 0x00, 0xf0, 0x00, 0x00, 0xf0, 0x00, 0x00, 0xf0, 42 | 0x00, 0x00, 0xf0, 0xf8, 0xff, 0xf1, 0xf8, 0xff, 0xf1, 43 | 0xf8, 0xff, 0xf1, 0xf8, 0xff, 0xf1, 0xf8, 0xff, 0xf1, 44 | 0xf8, 0xff, 0xf1, 0xf8, 0xff, 0xf1, 0xf8, 0xff, 0xf1, 45 | 0xf8, 0xff, 0xf1, 0xf8, 0xff, 0xf1, 0xf8, 0xff, 0xf1, 46 | 0xf8, 0xff, 0xf1, 0xf8, 0xff, 0xf1, 0x00, 0x00, 0xf0, 47 | 0x00, 0x00, 0xf0, 0x00, 0x00, 0xf0, 0x00, 0x00, 0xf0, 48 | }; 49 | """ 50 | 51 | DATA_BITMAP_PLAY = """ 52 | #define Xbitmap_width 20 53 | #define Xbitmap_height 21 54 | static unsigned char Xbitmap_bits[] = { 55 | 0x30, 0x00, 0xf0, 0x70, 0x00, 0xf0, 0xf0, 0x00, 0xf0, 56 | 0xf0, 0x01, 0xf0, 0xf0, 0x03, 0xf0, 0xf0, 0x07, 0xf0, 57 | 0xf0, 0x0f, 0xf0, 0xf0, 0x1f, 0xf0, 0xf0, 0x3f, 0xf0, 58 | 0xf0, 0x7f, 0xf0, 0xf0, 0xff, 0xf0, 0xf0, 0x7f, 0xf0, 59 | 0xf0, 0x3f, 0xf0, 0xf0, 0x1f, 0xf0, 0xf0, 0x0f, 0xf0, 60 | 0xf0, 0x07, 0xf0, 0xf0, 0x03, 0xf0, 0xf0, 0x01, 0xf0, 61 | 0xf0, 0x00, 0xf0, 0x70, 0x00, 0xf0, 0x30, 0x00, 0xf0, 62 | }; 63 | """ 64 | 65 | DATA_BITMAP_STEPBACK = """ 66 | #define Xbitmap_width 20 67 | #define Xbitmap_height 21 68 | static unsigned char Xbitmap_bits[] = { 69 | 0x00, 0x00, 0xf0, 0x00, 0x98, 0xf1, 0x00, 0x9c, 0xf1, 70 | 0x00, 0x9e, 0xf1, 0x00, 0x9f, 0xf1, 0x80, 0x9f, 0xf1, 71 | 0xc0, 0x9f, 0xf1, 0xe0, 0x9f, 0xf1, 0xf0, 0x9f, 0xf1, 72 | 0xf8, 0x9f, 0xf1, 0xfc, 0x9f, 0xf1, 0xf8, 0x9f, 0xf1, 73 | 0xf0, 0x9f, 0xf1, 0xe0, 0x9f, 0xf1, 0xc0, 0x9f, 0xf1, 74 | 0x80, 0x9f, 0xf1, 0x00, 0x9f, 0xf1, 0x00, 0x9e, 0xf1, 75 | 0x00, 0x9c, 0xf1, 0x00, 0x98, 0xf1, 0x00, 0x00, 0xf0, 76 | }; 77 | """ 78 | 79 | DATA_BITMAP_STEPFD = """ 80 | #define Xbitmap_width 20 81 | #define Xbitmap_height 21 82 | static unsigned char Xbitmap_bits[] = { 83 | 0x00, 0x00, 0xf0, 0x98, 0x01, 0xf0, 0x98, 0x03, 0xf0, 84 | 0x98, 0x07, 0xf0, 0x98, 0x0f, 0xf0, 0x98, 0x1f, 0xf0, 85 | 0x98, 0x3f, 0xf0, 0x98, 0x7f, 0xf0, 0x98, 0xff, 0xf0, 86 | 0x98, 0xff, 0xf1, 0x98, 0xff, 0xf3, 0x98, 0xff, 0xf1, 87 | 0x98, 0xff, 0xf0, 0x98, 0x7f, 0xf0, 0x98, 0x3f, 0xf0, 88 | 0x98, 0x1f, 0xf0, 0x98, 0x0f, 0xf0, 0x98, 0x07, 0xf0, 89 | 0x98, 0x03, 0xf0, 0x98, 0x01, 0xf0, 0x00, 0x00, 0xf0, 90 | }; 91 | """ 92 | 93 | ######### 94 | # TRANSPORT class 95 | # 96 | # Contains the transport button set (play, stop, etc.) 97 | # Does not contain the slider. 98 | # 99 | # Don't create one of these until AFTER you do "root = Tk()" 100 | # otherwise you might get the "too early to create bitmap" error. 101 | class Transport(Frame): 102 | def __init__(self, parent=None): 103 | Frame.__init__(self, parent) 104 | self.pack(side=BOTTOM, anchor=S) 105 | # These must go AFTER root = Tk() 106 | self.bitmap_begin = BitmapImage(data=DATA_BITMAP_BEGIN) 107 | self.bitmap_end = BitmapImage(data=DATA_BITMAP_END) 108 | self.bitmap_stop = BitmapImage(data=DATA_BITMAP_STOP) 109 | self.bitmap_play = BitmapImage(data=DATA_BITMAP_PLAY) 110 | self.bitmap_stepback = BitmapImage(data=DATA_BITMAP_STEPBACK) 111 | self.bitmap_stepfd = BitmapImage(data=DATA_BITMAP_STEPFD) 112 | 113 | # Depcrecated: we no longer set up the COMMANDs here. 114 | # self.btn_begin = Button(self, image=self.bitmap_begin, 115 | # command = self.onBegin) 116 | 117 | self.btn_begin = Button(self, image=self.bitmap_begin) 118 | self.btn_end = Button(self, image=self.bitmap_end) 119 | self.btn_stop = Button(self, image=self.bitmap_stop) 120 | self.btn_play = Button(self, image=self.bitmap_play) 121 | self.btn_stepback = Button(self, image=self.bitmap_stepback) 122 | self.btn_stepfd = Button(self, image=self.bitmap_stepfd) 123 | 124 | self.btn_begin.pack(side=LEFT) 125 | self.btn_stepback.pack(side=LEFT) 126 | self.btn_stop.pack(side=LEFT) 127 | self.btn_play.pack(side=LEFT) 128 | self.btn_stepfd.pack(side=LEFT) 129 | self.btn_end.pack(side=LEFT) 130 | 131 | self.playing = 0 # Set to 1 when user presses play, 0 on stop 132 | 133 | ################## 134 | # VIEWPORT class 135 | # This holds a Tk canvas object in a frame 136 | # The "size" argument to the constructor is the initial heigh and 137 | # width of the canvas widget, not of the frame (which is very slightly 138 | # larger). 139 | 140 | class Viewport(Frame): 141 | def __init__(self, parent=None, canvassize=500): 142 | Frame.__init__(self, parent) 143 | 144 | # Incredibly, it's a pain to extract the canvas width,height directly 145 | # from the canvas widget itself, so we store these here and push them 146 | # down to the actual canvas widget whenever they change. 147 | self.canvaswidth = canvassize 148 | self.canvasheight = canvassize 149 | 150 | # The flag for whether we display the camera or not is in menu.py 151 | # as part the of menubar class. 152 | self.readoutid = 0 153 | 154 | self.framewidth = -1 # Will be set elsewhere 155 | self.frameheight = -1 156 | self.pack(side=BOTTOM, anchor=S, expand=YES, fill=BOTH) 157 | self.canvas = Canvas(self, width=canvassize, \ 158 | height=canvassize, bg='white') 159 | self.canvas.pack() 160 | 161 | 162 | ########################### 163 | # DRAW_READOUT class function: 164 | # - Build the text strings for the camera readout display 165 | # - If a readout is already on the screen, remove it 166 | # - Draw the new readout 167 | # This routine does NOT check the menu.py variable that determines 168 | # whether or not we should actually draw the readout. It's the 169 | # caller's responsibility to do that. 170 | # 171 | def draw_readout(self, camera): 172 | text1 = 'Camera:' 173 | text2 = "x=%d y=%d z=%d" % (round(camera.t[0]), \ 174 | round(camera.t[1]), round(camera.t[2])) 175 | text3 = "Rotation = %d" % (round(degrees(camera.yrot))) 176 | text4 = "%s\n%s\n%s" % (text1, text2, text3) 177 | 178 | if self.readoutid: 179 | self.canvas.delete(self.readoutid) 180 | self.readoutid = self.canvas.create_text(5,5, text=text4, \ 181 | anchor=NW) 182 | 183 | 184 | def undraw_readout(self): 185 | if self.readoutid: 186 | self.canvas.delete(self.readoutid) 187 | self.readoutid = 0 188 | 189 | 190 | ################## 191 | 192 | class Playbar(Frame): 193 | def __init__(self, parent=None): 194 | Frame.__init__(self, parent) 195 | self.pack(side=BOTTOM, anchor=S, fill=X) 196 | self.scale = Scale(self, from_=0, to=1, sliderlength = 10, 197 | showvalue=YES, tickinterval=10, 198 | orient='horizontal') 199 | self.scale.pack(fill=X) 200 | # We don't set the initial value of the scale here because 201 | # that's done by tying it to the global variable "slidert" 202 | 203 | # Resetscale: takes a frame count and resets scale parms appropriately 204 | def resetscale(self, frames): 205 | temp = floor(log10(frames)) 206 | tickinterval = int(pow(10,temp)) 207 | if(tickinterval<1): tickinterval = 1 208 | self.scale.config(to=frames, tickinterval=tickinterval) 209 | 210 | -------------------------------------------------------------------------------- /bvh/bvh_parser.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # License: MIT 3 | """ 4 | Recursively parse a time-series motion trace file specified in Biovision 5 | Hierarchical (BVH) format and extracts the time-series, Cartesian coordinated 6 | data for a set of specified body parts. 7 | 8 | :author: Ben Leong (cleong@ets.org) 9 | 10 | """ 11 | 12 | 13 | import ConfigParser 14 | import os 15 | import sys 16 | import json 17 | import csv 18 | from os.path import join 19 | sys.path.append('BVHplay/') 20 | 21 | from skeleton import skeleton, process_bvhfile 22 | import pandas as pd 23 | 24 | 25 | def readConfig(configFilename): 26 | """ 27 | Reads from a config file the required inputs to begin recursive parsing. 28 | 29 | Args: 30 | configFilename (txt): contains 31 | (i) the path to the list of sync files i.e. the 32 | prefix of a set of video, wave and BVH files that are 33 | synchronized e.g. Lab001 for the set Lab001.avi, Lab001.wav, 34 | Lab001.bvh 35 | (ii) the path to the directory where these synchronized files 36 | reside 37 | (iii) the set of body parts whose time-series, Cartesian 38 | coordinates needs to be extracted 39 | (iv) the output directory where the extracted time-series, 40 | Cartesian coordinate are stored 41 | 42 | 43 | Returns: 44 | a sequence of strings : parsed inputs from config file 45 | """ 46 | parser = ConfigParser.SafeConfigParser() 47 | parser.read(configFilename) 48 | 49 | sync_files_list = parser.get("inputs", "sync_files_list") 50 | sync_files_dir = parser.get("inputs", "sync_files_dir") 51 | body_parts_specification = parser.get("inputs", 52 | "body_parts_specification") 53 | output_dir = parser.get("outputs", "output_dir") 54 | 55 | return sync_files_list, sync_files_dir,\ 56 | body_parts_specification, output_dir 57 | 58 | 59 | def read_body_parts(body_part_json): 60 | """ 61 | Reads from a JSON formatted file the set of body parts whose 62 | Catesian coordinates are to be extracted. 63 | 64 | Args: 65 | body_part_json (json): a JSON formatted file specifying body parts 66 | 67 | Returns: 68 | list: list of body parts 69 | """ 70 | with open(body_part_json) as data_file: 71 | body_part_list = json.load(data_file)['body_parts'] 72 | return body_part_list 73 | 74 | 75 | def findTargetNode(root, nodeName, l): 76 | """ 77 | Recursive parsing of the BVH skeletal tree using breath-first 78 | search to locate the node that has the name of the targeted body part. 79 | 80 | Args: 81 | root (object): root node of the BVH skeletal tree 82 | nodeName (string): name of the targeted body part 83 | l (list): empty list 84 | 85 | Returns: 86 | list: list containing the node representing the targeted body part 87 | """ 88 | if root.name == nodeName: 89 | l.append(root) 90 | else: 91 | for child in root.children: 92 | findTargetNode(child, nodeName, l) 93 | return l 94 | 95 | 96 | def getFrameLevelDisplacements(nodeFound, start, finish): 97 | """ 98 | Iterates through the entire time-series data for a given 99 | body part to extract the X,Y,Z coordinate data. 100 | 101 | Args: 102 | nodeFound (object): joint object for the targeted body part 103 | start (int): starting frame number 104 | finish (int): ending frame number 105 | 106 | Returns: 107 | list: list of lists of X,Y,Z coordiantes. The number of lists must 108 | equal to number of frames in the BVH 109 | """ 110 | displacements = [] 111 | for i in range(start, finish): 112 | X = nodeFound.trtr[i][0][3] 113 | Y = nodeFound.trtr[i][1][3] 114 | Z = nodeFound.trtr[i][2][3] 115 | displacements.append([X, Y, Z]) 116 | 117 | return displacements 118 | 119 | 120 | def getBVHFeatures(body_part, filename_path): 121 | """ 122 | Extracts X,Y,Z coordinate data per frame for a given body part 123 | from a time-series BVH file. BVH file must be recursively parsed 124 | prior to the extraction to locate the relevant node to begin processing. 125 | 126 | Args: 127 | body_part (string): body part whose Cartesian coordinate is 128 | to be extracted filename_path (string): path of BVH file 129 | 130 | Returns: 131 | list: list of lists of X,Y,Z coordiantes. The number of lists 132 | must equal to number of frames in the BVH 133 | """ 134 | myskeleton = process_bvhfile(filename_path, DEBUG=0) 135 | frameTime = myskeleton.dt 136 | 137 | for t in range(1, myskeleton.frames + 1): 138 | myskeleton.create_edges_onet(t) 139 | 140 | nodeFound = findTargetNode(myskeleton.hips, body_part, [])[0] 141 | XYZ_displacement = getFrameLevelDisplacements( 142 | nodeFound, 143 | 1, 144 | myskeleton.frames + 145 | 1) 146 | return XYZ_displacement 147 | 148 | 149 | def main(): 150 | """ 151 | Iterates through each BVH file, and for each file, extracts 152 | the time-series, 3D coordinate data for a set of specified 153 | body parts on a per frame basis. It then stores the extracted 154 | data into an output directory specified by the user for further 155 | processing. 156 | """ 157 | sync_files_list, sync_files_dir, body_parts_specification,\ 158 | output_dir = readConfig('config.ini') 159 | bvh_file_lst = [filename.strip() for filename in open(sync_files_list)] 160 | body_parts = read_body_parts(body_parts_specification) 161 | header = [bp for bp in body_parts] 162 | 163 | for bvh_file in bvh_file_lst: 164 | if not os.path.exists(join(output_dir, bvh_file, 'bvh')): 165 | os.makedirs(join(output_dir, bvh_file, 'bvh')) 166 | list_bparts = [] 167 | for body_part in body_parts: 168 | list_bparts.append( 169 | getBVHFeatures( 170 | body_part, 171 | join( 172 | sync_files_dir, 173 | bvh_file + 174 | '.bvh'))) 175 | this_bvh_df = pd.DataFrame(list_bparts).transpose() 176 | this_bvh_df.index += 1 177 | this_bvh_df.columns = header 178 | 179 | this_bvh_df.to_csv( 180 | join( 181 | output_dir, 182 | bvh_file, 183 | 'bvh', 184 | bvh_file + 185 | '.framelevelbvh.csv'), 186 | quoting=csv.QUOTE_MINIMAL) 187 | 188 | 189 | if __name__ == "__main__": 190 | main() 191 | -------------------------------------------------------------------------------- /bvh/compute_kinect_features.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # License: MIT 3 | """ 4 | Extracts high-level, expressive body language features from 5 | a time-series motion trace. Extraction is on a per frame basis. 6 | 7 | :author: Ben Leong (cleong@ets.org) 8 | 9 | """ 10 | 11 | 12 | import ConfigParser 13 | import os 14 | import sys 15 | import json 16 | import csv 17 | import re 18 | from os.path import join 19 | sys.path.append('BVHplay/') 20 | 21 | import warnings 22 | warnings.simplefilter(action="ignore", category=FutureWarning) 23 | 24 | from skeleton import skeleton, process_bvhfile 25 | import pandas as pd 26 | import numpy as np 27 | 28 | 29 | def readConfig(configFilename): 30 | """ 31 | Reads from a config file the required inputs to begin recursive parsing. 32 | 33 | Args: 34 | configFilename (txt): contains 35 | (i) the path to the list of sync files i.e. the 36 | prefix of a set of video, wave and BVH files that are 37 | synchronized e.g. Lab001 for the set Lab001.avi, Lab001.wav, 38 | Lab001.bvh 39 | (ii) the path to the directory where these synchronized files 40 | reside 41 | (iii) the set of body parts whose time-series, Cartesian 42 | coordinates are extracted 43 | (iv) the frame rate of the BVH 44 | (v) the output directory where the extracted time-series, 45 | high-level features are stored 46 | 47 | Returns: 48 | a sequence of strings : parsed inputs from config file 49 | """ 50 | parser = ConfigParser.SafeConfigParser() 51 | parser.read(configFilename) 52 | 53 | sync_files_list = parser.get("inputs", "sync_files_list") 54 | sync_files_dir = parser.get("inputs", "sync_files_dir") 55 | body_parts_specification = parser.get("inputs", "body_parts_specification") 56 | frame_rate = float(parser.get("inputs", "frame_rate")) 57 | output_dir = parser.get("outputs", "output_dir") 58 | 59 | return sync_files_list, sync_files_dir, \ 60 | body_parts_specification, frame_rate, output_dir 61 | 62 | 63 | def read_body_part_weights(body_part_json): 64 | """ 65 | Reads from a JSON formatted file the set of body parts and their 66 | weights. 67 | 68 | Args: 69 | body_part_json (json): a JSON formatted specifying body parts 70 | and weights 71 | 72 | Returns: 73 | list: list of weights indexed by body parts 74 | """ 75 | with open(body_part_json) as data_file: 76 | weights = json.load(data_file)['weights'] 77 | return weights 78 | 79 | 80 | def read_symmetry_params(body_part_json): 81 | """ 82 | Reads from a JSON formatted file the body symmetry parameters. 83 | 84 | Args: 85 | body_part_json (json): a JSON formatted file specifying the 86 | body part forming the axis of the symmetry (anchor) and 87 | the symmetrical body parts around the axis (parts). 88 | 89 | Returns: 90 | a string and list of strings: anchor, list of parts 91 | """ 92 | with open(body_part_json) as data_file: 93 | symmetry_params = json.load(data_file)['symmetry'] 94 | anchor = symmetry_params['anchor'] 95 | parts = symmetry_params['parts'] 96 | return anchor, parts 97 | 98 | 99 | def read_spatial_params(body_part_json): 100 | """ 101 | Reads from a JSON formatted file the body spatial parameters. 102 | 103 | Args: 104 | body_part_json (json): a JSON formatted file specifying a 105 | list of dicts, where each dict consists of a body anchor and 106 | parts, and displacement is the sum of displacements from 107 | each anchor to each part 108 | 109 | Returns: 110 | a list of dicts: each dict consists of an anchor and a list of 111 | parts, where the displacement from the anchor to each part would 112 | be compueted and summed 113 | """ 114 | with open(body_part_json) as data_file: 115 | spatial_params_list = json.load(data_file)['spatial'] 116 | return spatial_params_list 117 | 118 | 119 | def compute_displacement(df1, df2): 120 | """ 121 | Computes the Catesian displacement between corresponding data points 122 | (corresponding frames) between two data frames. 123 | 124 | Args: 125 | df1 (DataFrame): first data frame 126 | df2 (DataFrame): second data frame 127 | 128 | Returns: 129 | DataFrame: resulting data frame comprising frame-level displacements 130 | """ 131 | return (abs(df2 - df1) ** 2).sum(axis=1).apply(np.sqrt) 132 | 133 | 134 | def compute_self_displacement(df): 135 | """ 136 | Computes the Catesian displacement between subsequent data points 137 | (subsequent frames) within a single data frame. 138 | 139 | Args: 140 | df (DataFrame): data frame whose self-displacement is to be computed 141 | 142 | Returns: 143 | DataFrame: resulting data frame comprising frame-level displacements 144 | """ 145 | temp_df = df.shift() 146 | temp_df.iloc[0] = temp_df.iloc[1] 147 | return compute_displacement(df, temp_df) 148 | 149 | 150 | def compute_derivative(df, frame_rate): 151 | """ 152 | Computes the derivative between subsequent data points (subsequent frames) 153 | within a single data frame. 154 | 155 | Args: 156 | df (DataFrame): data frame whose derivative is to be computed 157 | frame_rate (float): frame rate of the data frame 158 | 159 | Returns: 160 | DataFrame: resulting data frame comprising frame-level derivative 161 | """ 162 | return df.diff().fillna(float(0)) / (float(1) / frame_rate) 163 | 164 | 165 | def format_conversion(df, body_part): 166 | """ 167 | Converts a data frame with general column names and additional quoting 168 | into a data frame with specific column names without quoting. 169 | 170 | Args: 171 | df (DataFrame): data frame prior to conversion 172 | body_part (string): the body part whoe data frame is converted 173 | 174 | Returns: 175 | DataFrame: data frame after conversion 176 | """ 177 | stripped_df = df[body_part].apply(lambda x: re.sub('[\[\]]', '', x)) 178 | split_df = pd.DataFrame( 179 | stripped_df.to_frame().ix[ 180 | :, 0].str.split(', ').tolist(), columns=[ 181 | 'X', 'Y', 'Z']) 182 | converted_df = split_df.convert_objects(convert_numeric=True) 183 | converted_df.index += 1 184 | return converted_df 185 | 186 | 187 | def compute_kinetic_energy(df_dict, body_part_weights_norm, frame_rate): 188 | """ 189 | Computes the kinetic energy of the body on a per frame basis, 190 | using contribution from all body parts, where the weight of each 191 | body part represents the amount of contribution. 192 | 193 | Args: 194 | df_dict (dict): dict of data frames, indexed by body parts 195 | body_part_weights_norm (dict): dict of weights, indexed by body parts 196 | frame_rate (float): frame rate across all data frames 197 | 198 | Returns: 199 | DataFrame: resulting data frame comprising frame-level KE 200 | """ 201 | KE = 0.5 202 | list_KE = [] 203 | for bp in df_dict: 204 | this_bp_velocity = compute_derivative( 205 | compute_self_displacement( 206 | df_dict[bp]), 207 | frame_rate) 208 | list_KE.append(body_part_weights_norm[bp] * (this_bp_velocity ** 2)) 209 | return 0.5 * sum(list_KE) 210 | 211 | 212 | def compute_symmetry_indexes(df_dict, sym_anchor, sym_parts_lst): 213 | """ 214 | Computes the symmetric index along X,Y and Z axis on a per frame basis. 215 | 216 | Args: 217 | df_dict (dict): dicts of data frames, indexed by body parts 218 | sym_anchor (string): the body part being the axis of the 219 | symmetry 220 | sym_parts_lst (list): list of body parts being symmetrical 221 | around the axis 222 | 223 | Returns: 224 | DataFrame: resulting data frame comprising frame-level symmetry index 225 | values along X,Y and Z axes 226 | """ 227 | anchor_df = df_dict[sym_anchor] 228 | part_1_df = df_dict[sym_parts_lst[0]] 229 | part_2_df = df_dict[sym_parts_lst[1]] 230 | numerator = abs(abs(anchor_df - part_1_df) - abs(anchor_df - part_2_df)) 231 | denominator = abs(part_1_df - part_2_df) 232 | return numerator / denominator 233 | 234 | 235 | def compute_posture(df_dict): 236 | """ 237 | Computes posture, or bounding volume, of the body on a per frame basis. 238 | 239 | Args: 240 | df_dict (dict): dict of data frames, indexed by body parts 241 | 242 | Returns: 243 | DataFrame: resulting data frame comprising frame-level bounding volume 244 | values 245 | """ 246 | list_X = [] 247 | list_Y = [] 248 | list_Z = [] 249 | for bpart in df_dict: 250 | list_X.append(df_dict[bpart]['X'].values) 251 | list_Y.append(df_dict[bpart]['Y'].values) 252 | list_Z.append(df_dict[bpart]['Z'].values) 253 | df_X = pd.DataFrame(list_X).transpose() 254 | df_Y = pd.DataFrame(list_Y).transpose() 255 | df_Z = pd.DataFrame(list_Z).transpose() 256 | df_X.index += 1 257 | df_Y.index += 1 258 | df_Z.index += 1 259 | return abs(df_X.max(axis=1) - df_X.min(axis=1)) * abs(df_Y.max(axis=1) - 260 | df_Y.min(axis=1)) * abs(df_Z.max(axis=1) - df_Z.min(axis=1)) 261 | 262 | 263 | def main(): 264 | """ 265 | Extracts high-level, expressive body language features from a time-series 266 | motion trace. Extraction is on a per frame basis. For each BVH file, this 267 | script generates frame-level feature values for (i) kinetic energy 268 | (ii) symmetry indexes (iii) posture (iv) spatial dispacement. Please refer 269 | to the README.md on how to specify parameters for generating these feature 270 | values. 271 | 272 | Returns: 273 | TYPE: Description 274 | """ 275 | sync_files_list, sync_files_dir, body_parts_specification,\ 276 | frame_rate, output_dir = readConfig('config.ini') 277 | 278 | body_part_weights = read_body_part_weights(body_parts_specification) 279 | 280 | # normalize body part weights 281 | body_part_weights_norm = { 282 | p: float( 283 | body_part_weights[p]) / 284 | sum( 285 | body_part_weights.values()) for p in body_part_weights} 286 | 287 | sym_anchor, sym_parts_lst = read_symmetry_params(body_parts_specification) 288 | 289 | spatial_params_list = read_spatial_params(body_parts_specification) 290 | 291 | bvh_file_lst = [filename.strip() for filename in open(sync_files_list)] 292 | 293 | for bvh_file in bvh_file_lst: 294 | 295 | df_dict = dict() 296 | res_df = [] 297 | res_df_headers = [ 298 | 'kinetic_energy', 299 | 'symmetry_X', 300 | 'symmetry_Y', 301 | 'symmetry_Z', 302 | 'posture'] 303 | 304 | df = pd.read_csv( 305 | join( 306 | output_dir, 307 | bvh_file, 308 | 'bvh', 309 | bvh_file + 310 | '.framelevelbvh.csv')) 311 | for bp in body_part_weights: 312 | df_dict[bp] = format_conversion(df, bp) 313 | 314 | ke_df = compute_kinetic_energy( 315 | df_dict, 316 | body_part_weights_norm, 317 | frame_rate) 318 | symmetry_df = compute_symmetry_indexes( 319 | df_dict, 320 | sym_anchor, 321 | sym_parts_lst) 322 | posture_df = compute_posture(df_dict) 323 | 324 | res_df.append(ke_df) 325 | res_df.append(symmetry_df) 326 | res_df.append(posture_df) 327 | 328 | for param in spatial_params_list: 329 | disp_df_list = [] 330 | anchor = param['anchor'] 331 | feature_name_list = [anchor] 332 | for part in param['parts']: 333 | feature_name_list.append(part) 334 | disp_df_list.append( 335 | compute_displacement( 336 | df_dict[anchor], 337 | df_dict[part])) 338 | 339 | spatial_df = sum(disp_df_list) 340 | res_df.append(spatial_df) 341 | res_df_headers.append('_'.join(feature_name_list) + '_disp') 342 | 343 | spatial_first_derv_df = compute_derivative(spatial_df, frame_rate) 344 | res_df.append(spatial_first_derv_df) 345 | res_df_headers.append( 346 | '_'.join(feature_name_list) + 347 | '_disp_first_derv') 348 | 349 | spatial_second_derv_df = compute_derivative( 350 | spatial_first_derv_df, 351 | frame_rate) 352 | res_df.append(spatial_second_derv_df) 353 | res_df_headers.append( 354 | '_'.join(feature_name_list) + 355 | '_disp_second_derv') 356 | 357 | output_df = pd.concat(res_df, axis=1) 358 | output_df.columns = res_df_headers 359 | output_df.to_csv( 360 | join( 361 | output_dir, 362 | bvh_file, 363 | 'bvh', 364 | bvh_file + 365 | '.framelevelfeatures.csv'), 366 | quoting=csv.QUOTE_MINIMAL) 367 | 368 | 369 | if __name__ == '__main__': 370 | main() 371 | -------------------------------------------------------------------------------- /bvh/config.ini: -------------------------------------------------------------------------------- 1 | [inputs] 2 | sync_files_list = ../inputs/sync_files.lst 3 | sync_files_dir = ../sync_files_dir 4 | body_parts_specification = ../inputs/body_parts_2.json 5 | frame_rate = 30 6 | 7 | [outputs] 8 | output_dir = ../outputs -------------------------------------------------------------------------------- /conda/vamp_conda_osx64.txt: -------------------------------------------------------------------------------- 1 | # This file may be used to create an environment using: 2 | # $ conda create --name --file 3 | # platform: osx-64 4 | anaconda-client=1.1.0=py27_0 5 | binstar=0.12=2 6 | clyent=0.4.0=py27_0 7 | freetype=2.5.5=0 8 | jpeg=8d=1 9 | libpng=1.6.17=0 10 | libtiff=4.0.6=0 11 | numpy=1.10.1=py27_0 12 | openssl=1.0.2d=0 13 | pandas=0.17.0=np110py27_0 14 | pillow=3.0.0=py27_1 15 | pip=7.1.2=py27_0 16 | python=2.7.10=2 17 | python-dateutil=2.4.2=py27_0 18 | pytz=2015.6=py27_0 19 | pyyaml=3.11=py27_1 20 | readline=6.2=2 21 | requests=2.8.1=py27_0 22 | setuptools=18.4=py27_0 23 | six=1.10.0=py27_0 24 | sqlite=3.8.4.1=1 25 | tk=8.5.18=0 26 | wheel=0.26.0=py27_1 27 | yaml=0.1.6=0 28 | zlib=1.2.8=0 29 | -------------------------------------------------------------------------------- /images/vamp_screen_shot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/EducationalTestingService/VAMP/4dd18e12db8dc5d7211fed8f120e5a7059bebd87/images/vamp_screen_shot.png -------------------------------------------------------------------------------- /inputs/body_parts_1.json: -------------------------------------------------------------------------------- 1 | { 2 | "body_parts": ["Hips","Spine","LeftShoulder","RightShoulder", 3 | "LeftArm","RightArm","LeftForeArm","RightForeArm", 4 | "LeftHand","RightHand"], 5 | 6 | "weights": {"Hips": 14.81, "Spine" : 12.65, "LeftShoulder" : 0.76875, 7 | "RightShoulder" : 0.76875, "LeftArm" : 1.5375, 8 | "RightArm" : 1.5375, "LeftForeArm" : 0.86, 9 | "RightForeArm" : 0.86, "LeftHand" : 0.2875, 10 | "RightHand" : 0.2875 }, 11 | 12 | "symmetry": {"anchor": "Hips", "parts": ["LeftHand", "RightHand"]}, 13 | 14 | "spatial": [{"anchor": "Spine", "parts": ["LeftHand", "RightHand"]}, 15 | {"anchor": "Spine", "parts": ["LeftArm", "RightArm"]}, 16 | {"anchor": "LeftHand", "parts": ["RightHand"]}] 17 | 18 | } 19 | -------------------------------------------------------------------------------- /inputs/body_parts_2.json: -------------------------------------------------------------------------------- 1 | { 2 | "body_parts": ["Hips","Chest","LeftCollar","RightCollar", 3 | "LeftShoulder","RightShoulder","LeftElbow","RightElbow", 4 | "LeftWrist","RightWrist"], 5 | 6 | "weights": {"Hips": 14.81, "Chest" : 12.65, "LeftCollar" : 0.76875, 7 | "RightCollar" : 0.76875, "LeftShoulder" : 1.5375, 8 | "RightShoulder" : 1.5375, "LeftElbow" : 0.86, 9 | "RightElbow" : 0.86, "LeftWrist" : 0.2875, 10 | "RightWrist" : 0.2875 }, 11 | 12 | "symmetry": {"anchor": "Hips", "parts": ["LeftWrist", "RightWrist"]}, 13 | 14 | "spatial": [{"anchor": "Chest", "parts": ["LeftWrist", "RightWrist"]}, 15 | {"anchor": "Chest", "parts": ["LeftShoulder", "RightShoulder"]}, 16 | {"anchor": "LeftWrist", "parts": ["RightWrist"]}] 17 | 18 | } 19 | -------------------------------------------------------------------------------- /inputs/sync_files.lst: -------------------------------------------------------------------------------- 1 | sample 2 | -------------------------------------------------------------------------------- /praat/prosody.praat: -------------------------------------------------------------------------------- 1 | # prosody.praat 2 | # 3 | # 11/2013 4 | # 5 | # use the new Praat script style to extract both pitch & intensity 6 | form input a wav 7 | word Path 8 | word Wav_id 9 | endform 10 | 11 | # wav load 12 | sound = do ("Read from file...", path$ + wav_id$ + ".wav") 13 | 14 | # pitch 15 | selectObject (sound) 16 | do ("To Pitch...", 0, 75, 600) 17 | do ("Down to PitchTier") 18 | do ("Save as short text file...", path$ + wav_id$ + ".PitchTier") 19 | 20 | # intensity 21 | selectObject (sound) 22 | do ("To Intensity...", 100, 0, "yes") 23 | do ("Down to IntensityTier") 24 | do ("Save as short text file...", path$ + wav_id$ + ".IntensityTier") --------------------------------------------------------------------------------