├── Example
    ├── Big Camera.avi
    ├── Small Camera.avi
    └── Tracking Test.blend
├── README.md
├── Resolve Camera Tracks.py
└── Screenshot.png

/Example/Big Camera.avi:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Uberi/MotionTracking/f71c771270501989b4cfc46b52cc7d4c251feb7e/Example/Big Camera.avi
--------------------------------------------------------------------------------
/Example/Small Camera.avi:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Uberi/MotionTracking/f71c771270501989b4cfc46b52cc7d4c251feb7e/Example/Small Camera.avi
--------------------------------------------------------------------------------
/Example/Tracking Test.blend:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Uberi/MotionTracking/f71c771270501989b4cfc46b52cc7d4c251feb7e/Example/Tracking Test.blend
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
Resolve Camera Tracks
=====================

Addon for [Blender](http://www.blender.org/) implementing 3D point reconstruction using multiple camera angles.

# [Workflow Example](http://anthony-zhang.me/blog/motion-tracking/)

![Screenshot](Screenshot.png)

Rationale
---------

Blender's standard 2D and 3D tracking is quite effective at what it was designed to do. However, there are limits on the quality of 3D tracking done from only one camera angle. To get around this, professional tracking setups use multiple cameras recording the same scene from multiple angles.

This addon adds the ability to resolve tracking points made from multiple cameras together into a single scene. Essentially, it takes 2D motion tracks from two or more camera recordings at different angles and combines them into 3D motion tracks. With the 3D tracked data, we can control rigs, reconstruct scenes, and composite greenscreen footage with much better accuracy.

Installation
------------

The process is the same as for any other Blender addon:

1. Open the User Preferences window (Ctrl + Alt + U) and switch to the "Add-ons" tab.
2. Press "Install from File..." at the bottom of the window.
3. In the resulting file browser pane, select `Resolve Camera Tracks.py`.
4. The addon entry should appear in the user preferences window.
5. Check the checkbox at the top right of the entry, beside the running man icon.

Alternatively, run the script from within a Text Editor window, or install it from a script as sketched below.
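
If you prefer a scripted install, the addon can also be installed and enabled from Blender's Python console. This is a minimal sketch, assuming the Blender 2.7x operator names (`wm.addon_install` / `wm.addon_enable`) and that the addon's module name matches the file name without the `.py` extension; the file path is a placeholder for wherever you saved the script.

```python
import bpy

# Copy the addon file into the user addons directory, then enable it.
# The path below is a placeholder; point it at your copy of the file.
bpy.ops.wm.addon_install(filepath="/path/to/Resolve Camera Tracks.py")
bpy.ops.wm.addon_enable(module="Resolve Camera Tracks")
bpy.ops.wm.save_userpref()  # keep the addon enabled across restarts
```
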
Workflow
--------

The following is a recommended workflow for working with the addon:

1. In the real world, mark a point on the ground as the real reference point. This is the center of the stage, so to speak.
2. Pick a point in Blender as the virtual reference point. The origin is generally a good choice.
3. Set up two or more cameras facing the reference point. For two cameras, the best setup is having the cameras facing 90 degrees to each other.
4. In Blender, add the same number of cameras, and set their properties such as sensor size and focal length to the same values as the real-world cameras. These values can often be found by searching for the camera model online, or in the camera user manual.
5. Measure and record the location and orientation of each real camera relative to the reference point.
6. Move the Blender cameras so that they are at the same location and orientation relative to the virtual reference point as their associated real cameras.
7. Start recording with all cameras. The recordings don't have to start at exactly the same time, but they do need to be synchronized later.
8. Begin the performance near the reference point, ensuring that points of interest are visible to the cameras as much as possible.
9. Stop recording with all cameras.
10. Synchronize and trim the video footage from all cameras with a video editor such as Avidemux, or do this within Blender.
11. Ensure the video footage from all cameras matches up in real time.
12. In Blender, track the points of interest using the motion tracker, ensuring the camera settings such as focal length are set to the same values as the real cameras, and that tracks for any given feature have the same name across all videos (a quick way to check the naming is sketched after this list).
13. For each video, select all the tracks and use `Movie Clip Editor > Reconstruction > Link Empty to Track`, making sure they are associated with the correct Blender camera.
14. Select all the generated empties, and invoke `View3D > Object > Resolve Camera Tracks`.
15. You should now have a set of Empty objects that track in 3D the locations of the real-world markers. These can now be used for animation; for example, as hooks for a rig.
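
Step 12 depends on the track names matching exactly across all clips. The sketch below is one way to check this from Blender's Python console before resolving; it only reads `bpy.data.movieclips` and prints the track names per clip, so it does not modify the scene.

```python
import bpy

# Collect the track names in each loaded movie clip so mismatched or
# misspelled names are easy to spot before resolving.
names_by_clip = {clip.name: sorted(track.name for track in clip.tracking.tracks)
                 for clip in bpy.data.movieclips}
for clip_name, names in names_by_clip.items():
    print(clip_name, names)

# A track that appears in fewer than two clips cannot be resolved to 3D.
all_names = set().union(*names_by_clip.values())
for name in sorted(all_names):
    count = sum(1 for names in names_by_clip.values() if name in names)
    if count < 2:
        print("Warning: track {!r} only appears in {} clip(s)".format(name, count))
```
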
Usage
-----

The operator is accessible via `View3D > Object > Resolve Camera Tracks`, or `Search > Resolve Camera Tracks`.

### Before

* The videos are shot from two or more non-moving cameras facing the target at different angles.
* The videos from all camera angles are synchronized and begin at the same point in real time.
* There is an equal number of Blender cameras, positioned and oriented relative to the virtual reference point just like their real counterparts.
* The Blender cameras are calibrated with respect to focal length and sensor size.
* Your videos have been tracked - there are virtual markers in Blender tracking each real-world point of interest on your target.
* The tracks for any given point of interest are named the same across all the videos.
* Each track has an associated Empty object linked to its path on its corresponding camera (this is done using `Movie Clip Editor > Reconstruction > Link Empty to Track` while the desired camera is set to the default scene camera).

### During

1. Select all the Empty objects generated by all the previous invocations of `Movie Clip Editor > Reconstruction > Link Empty to Track`.
2. Invoke the Resolve Camera Tracks operator from the menu or searchbar, as described above (or from a script, as sketched at the end of this README).

### After

There should be new Empty objects with their locations animated such that, at every frame, each one **tracks its physical marker in 3D**.

The resulting Empty objects will only be keyframed when at least 2 of the Blender markers they are tracking are enabled - that is, when their 3D position can be unambiguously determined.

Possible errors include:

* `Non-empty object "SOME_OBJECT" selected`: SOME_OBJECT was not an Empty object.
    * Select only Empty objects.
* `At least two objects associated with tracks named "SOME_TRACK" required, only one selected`: there was only one object associated with tracks named SOME_TRACK.
    * Select two or more Empty objects associated with tracks named SOME_TRACK.
* `Follow Track constraint for "SOME_OBJECT" not found`: SOME_OBJECT was missing the Follow Track constraint that makes it follow a motion track.
    * Ensure SOME_OBJECT has a Follow Track constraint.
    * Ensure SOME_OBJECT is an Empty object created by `Movie Clip Editor > Reconstruction > Link Empty to Track`.
* `Clip for constraint "SOME_CONSTRAINT" of "SOME_OBJECT" not found`: SOME_OBJECT's Follow Track constraint did not have its clip property set.
    * Check the Follow Track constraint settings for SOME_OBJECT to ensure the clip property is set.
    * Ensure SOME_OBJECT is an Empty object created by `Movie Clip Editor > Reconstruction > Link Empty to Track`.
* `Track for constraint "SOME_CONSTRAINT" of "SOME_OBJECT" not found`: SOME_OBJECT's Follow Track constraint did not have an associated track.
    * Check the Follow Track constraint settings for each object to ensure the Track property is set.
    * Ensure the objects are empties created from motion tracks.
* `At least 2 cameras need to be available`: the selected empties must represent views from two or more cameras, but they represented only zero or one.
    * Select empties representing views from more than one camera.
    * Select more than one empty.
* `Lines are too close to parallel`: the rays cast from two or more cameras through their associated Empty objects are too close to parallel.
    * Shoot the footage with the cameras spaced at a wider angle apart.
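
For scripted or batch use, the operator registered by the addon can also be invoked directly from Python. This is a minimal sketch; the operator ID comes from the addon's `bl_idname` (`animation.resolve_camera_tracks`), and the object names are placeholders for your own empties.

```python
import bpy

# Select the empties created by "Link Empty to Track" (the names below are
# placeholders), then run the resolve operator on the selection.
bpy.ops.object.select_all(action="DESELECT")
for name in ("Track.001", "Track.002"):  # hypothetical empty names
    bpy.data.objects[name].select = True
bpy.ops.animation.resolve_camera_tracks()
```
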
--------------------------------------------------------------------------------
/Resolve Camera Tracks.py:
--------------------------------------------------------------------------------
from mathutils import Vector
import bpy
import itertools
from collections import defaultdict

bl_info = {
    "name": "Resolve Camera Tracks",
    "author": "Anthony Zhang",
    "category": "Animation",
    "version": (1, 1),
    "blender": (2, 75, 0),
    "location": "View3D > Object > Resolve Camera Tracks or Search > Resolve Camera Tracks",
    "description": "3D point reconstruction from multiple camera angles",
}

class ResolveCameraTracks(bpy.types.Operator):
    bl_idname = "animation.resolve_camera_tracks"
    bl_label = "Resolve Camera Tracks"
    bl_options = {"REGISTER", "UNDO"}  # enable undo for the operator

    def execute(self, context):
        targets = []
        for obj in context.selected_objects:
            if obj.type == "EMPTY":
                targets.append(obj)
            else:
                self.report({"ERROR_INVALID_INPUT"}, "Non-empty object \"{}\" selected".format(obj.name))
                return {"CANCELLED"}

        # group the empties by the name of the track they follow
        targets_by_track_name = defaultdict(list)
        for target in targets:
            track, _ = self.get_target_track(target)
            targets_by_track_name[track.name].append(target)
        for name, point_targets in targets_by_track_name.items():
            if len(point_targets) < 2:
                self.report({"ERROR_INVALID_INPUT"}, "At least two objects associated with tracks named \"{}\" required, only one selected".format(name))
                return {"CANCELLED"}

        # add the resolved empties
        resolved_empties = []
        for point_targets in targets_by_track_name.values():
            try:
                resolved_empties.append(self.add_resolved_empty(point_targets))
            except Exception:
                import traceback
                self.report({"ERROR_INVALID_INPUT"}, traceback.format_exc())
                return {"CANCELLED"}

        # select the resolved empties
        bpy.ops.object.select_all(action="DESELECT")
        for empty in resolved_empties:
            empty.select = True

        return {"FINISHED"}

    def get_target_track(self, target):
        """
        Returns a motion tracking track associated with object `target` and
        the camera associated with object `target`.
        """
        # find the Follow Track constraint and obtain the associated track
        for constraint in target.constraints:
            if constraint.type == "FOLLOW_TRACK":
                track_constraint = constraint
                if not track_constraint.clip:
                    raise Exception("Clip for constraint \"{}\" of \"{}\" not found".format(track_constraint.name, target.name))
                break
        else:
            raise Exception("Follow Track constraint for \"{}\" not found".format(target.name))

        # get the track for the track constraint on the target
        for tracking_object in track_constraint.clip.tracking.objects:
            for track in tracking_object.tracks:
                if track.name == track_constraint.track:
                    return track, track_constraint.camera
        raise Exception("Track for constraint \"{}\" of \"{}\" not found".format(track_constraint.name, target.name))

    def get_target_locations(self, target):
        """
        Returns a list of positions in world space for object `target` for
        frames that it is animated for (and None for frames that are not), the
        camera associated with object `target`, the start frame, and the end
        frame.
        """
        track, camera = self.get_target_track(target)

        # obtain track information

        # set of frame indices for enabled markers
        marker_frames = {marker.frame for marker in track.markers if not marker.mute}
        start_frame, end_frame = min(marker_frames), max(marker_frames)

        # save the frame so we can restore it later
        original_frame = bpy.context.scene.frame_current

        # store object world locations for each frame
        locations = []
        for i in range(start_frame, end_frame + 1):
            if i in marker_frames:
                bpy.context.scene.frame_set(i)
                locations.append(target.matrix_world.to_translation())
            else:
                locations.append(None)

        # move back to the original frame
        bpy.context.scene.frame_set(original_frame)

        return locations, camera, start_frame, end_frame
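
    # add_resolved_empty() below does the actual reconstruction: for every frame,
    # it looks at each pair of selected empties that have a tracked position on
    # that frame, casts a ray from each empty's camera through the empty, and
    # keyframes the new empty at the midpoint of the closest points between the
    # two rays (see closest_point() near the end of this file). The pair whose
    # rays pass closest together is used for that frame.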
    def add_resolved_empty(self, targets):
        """
        Adds an empty animated to be at the point closest to every target in
        `targets`, where each target in `targets` is animated by a Follow
        Track constraint.

        Returns the newly created empty.
        """

        # obtain target information
        target_points, target_cams, target_starts, target_ends = [], [], [], []
        for target in targets:
            points, cam, start, end = self.get_target_locations(target)
            target_points.append(points + [None])  # the last element must be None
            target_cams.append(cam)
            target_starts.append(start)
            target_ends.append(end)

        # two cameras is the minimum number of cameras
        if len(set(target_cams)) < 2:
            raise Exception("At least 2 cameras need to be available")

        # add the empty object
        bpy.ops.object.add(type="EMPTY")
        resolved = bpy.context.active_object

        # save the frame so we can restore it later
        original_frame = bpy.context.scene.frame_current

        # set up keyframes for each location
        min_distance, min_distance_frame = float("inf"), None
        max_distance, max_distance_frame = 0, None
        for frame in range(min(target_starts), max(target_ends) + 1):  # + 1 so the final tracked frame is included
            # clamp indices to the last value, None, if outside of range
            indices = []
            for start, end in zip(target_starts, target_ends):
                index = frame - start
                indices.append(-1 if index < 0 or index > end - start else index)

            bpy.context.scene.frame_set(frame)  # move to the current frame

            # go through each possible combination of targets and find the one
            # that gives the best result
            best_location, best_distance = None, float("inf")
            for first, second in itertools.combinations(range(len(targets)), 2):
                cam1, cam2 = target_cams[first].location, target_cams[second].location
                location1, location2 = target_points[first][indices[first]], target_points[second][indices[second]]
                if location1 is not None and location2 is not None:
                    location, distance = closest_point(cam1, cam2, location1, location2)
                    if distance < best_distance:  # better result than current best
                        best_distance = distance
                        best_location = location

            # add keyframe if possible
            if best_location is not None:
                resolved.location = best_location
                resolved.keyframe_insert(data_path="location")

                if best_distance <= min_distance:
                    min_distance = best_distance
                    min_distance_frame = frame
                if best_distance >= max_distance:
                    max_distance = best_distance
                    max_distance_frame = frame

        # move back to the original frame
        bpy.context.scene.frame_set(original_frame)

        # make the resolved track object more identifiable
        track, _ = self.get_target_track(targets[0])
        resolved.name = "{}_tracked".format(track.name)
        resolved.empty_draw_type = "SPHERE"
        resolved.empty_draw_size = 0.1

        self.report({"INFO"}, "{}: min error {} (frame {}), max error {} (frame {})".format(resolved.name, min_distance / 2, min_distance_frame, max_distance / 2, max_distance_frame))
        return resolved
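
# NOTE: this addon targets the pre-2.80 mathutils API (see bl_info above), where
# multiplying two Vectors with "*" yields their dot product; the products inside
# closest_point() below rely on that behaviour.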
def closest_point(cam1, cam2, point1, point2):
    """
    Produces the point closest to the lines formed from `cam1` to `point1` and
    from `cam2` to `point2`, and the total distance between this point and the
    lines.

    Reference: http://www.gbuffer.net/archives/361
    """
    dir1 = point1 - cam1
    dir2 = point2 - cam2
    dir3 = cam2 - cam1
    a = dir1 * dir1
    b = -dir1 * dir2
    c = dir2 * dir2
    d = dir3 * dir1
    e = -dir3 * dir2
    if abs((c * a) - (b ** 2)) < 0.0001:  # lines are nearly parallel
        raise Exception("Lines are too close to parallel")
    extent1 = ((d * c) - (e * b)) / ((c * a) - (b ** 2))
    extent2 = (e - (b * extent1)) / c
    point1 = cam1 + (extent1 * dir1)
    point2 = cam2 + (extent2 * dir2)
    return (point1 + point2) / 2, (point1 - point2).magnitude

def add_object_button(self, context):
    self.layout.operator(ResolveCameraTracks.bl_idname,
                         text="Resolve Camera Tracks", icon="PLUGIN")

def register():
    bpy.utils.register_class(ResolveCameraTracks)
    bpy.types.VIEW3D_MT_object.append(add_object_button)

def unregister():
    bpy.utils.unregister_class(ResolveCameraTracks)
    bpy.types.VIEW3D_MT_object.remove(add_object_button)

if __name__ == "__main__":
    register()
--------------------------------------------------------------------------------
/Screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Uberi/MotionTracking/f71c771270501989b4cfc46b52cc7d4c251feb7e/Screenshot.png
--------------------------------------------------------------------------------