├── CONTRIBUTING.md
├── requirements.txt
├── MANIFEST.in
├── src
│   ├── Banner.PNG
│   ├── sample.gif
│   └── usage
│       ├── 1.png
│       ├── 2.png
│       ├── 3.png
│       ├── 4.png
│       └── 5.png
├── 2d_motion_capture.unitypackage
├── mocap2d
│   ├── __init__.py
│   ├── mocap2d.py
│   ├── editor.py
│   └── detect.py
├── HISTORY.md
├── LICENSE
├── setup.py
├── .gitignore
└── README.md /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | torch 2 | torchvision 3 | opencv-python 4 | numpy 5 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include README.md 2 | include LICENSE 3 | include HISTORY.md 4 | -------------------------------------------------------------------------------- /src/Banner.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/k2sebeom/unity-2dmocap/HEAD/src/Banner.PNG -------------------------------------------------------------------------------- /src/sample.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/k2sebeom/unity-2dmocap/HEAD/src/sample.gif -------------------------------------------------------------------------------- /src/usage/1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/k2sebeom/unity-2dmocap/HEAD/src/usage/1.png -------------------------------------------------------------------------------- /src/usage/2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/k2sebeom/unity-2dmocap/HEAD/src/usage/2.png
-------------------------------------------------------------------------------- /src/usage/3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/k2sebeom/unity-2dmocap/HEAD/src/usage/3.png -------------------------------------------------------------------------------- /src/usage/4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/k2sebeom/unity-2dmocap/HEAD/src/usage/4.png -------------------------------------------------------------------------------- /src/usage/5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/k2sebeom/unity-2dmocap/HEAD/src/usage/5.png -------------------------------------------------------------------------------- /2d_motion_capture.unitypackage: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/k2sebeom/unity-2dmocap/HEAD/2d_motion_capture.unitypackage -------------------------------------------------------------------------------- /mocap2d/__init__.py: -------------------------------------------------------------------------------- 1 | from mocap2d.mocap2d import BoneDetector 2 | 3 | __all__ = ["BoneDetector"] 4 | __version__ = '0.1.2' 5 | -------------------------------------------------------------------------------- /HISTORY.md: -------------------------------------------------------------------------------- 1 | ## v0.1.0 2 | * First release of 2D motion capture unity package 3 | 4 | ## v0.1.1 5 | * Add positional argument of detect / command 6 | * Add feature to save a video analysis result to json file 7 | * Add feature to send a json result directly to unity 8 | 9 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | 
Copyright (c) 2020 k2sebeom 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | import os 2 | from setuptools import setup, find_packages 3 | import mocap2d 4 | 5 | 6 | def read(fname): 7 | with open(os.path.join(os.path.dirname(__file__), fname)) as f: 8 | return f.read() 9 | 10 | 11 | setup( 12 | name='unity-2dmocap', 13 | version=mocap2d.__version__, 14 | description="Python application for Unity Asset of 2D motion capture", 15 | long_description=read('README.md'), 16 | long_description_content_type='text/markdown', 17 | keywords=["python", "motion capture", "unity"], 18 | author='SeBeom Lee', 19 | author_email='slee5@oberlin.edu', 20 | url='https://github.com/k2sebeom/unity-2dmocap', 21 | license="MIT", 22 | entry_points={ 23 | 'console_scripts': [ 24 | "2dmocap-unity=mocap2d.detect:main", 25 | "2dmocap-edit=mocap2d.editor:main" 26 | ] 27 | }, 28 | install_requires=read('requirements.txt').splitlines(), 29 | packages=find_packages(include=['mocap2d']), 30 | classifiers=[ 31 | 'Programming Language :: Python', 32 | 'Programming Language :: Python :: 3.6', 33 | 'Programming Language :: Python :: 3.7', 34 | 'License :: OSI Approved :: MIT License', 35 | ], 36 | project_urls={ 37 | 'Maintainer': 'https://github.com/k2sebeom', 38 | 'Source': 'https://github.com/k2sebeom/unity-2dmocap', 39 | 'Tracker': 'https://github.com/k2sebeom/unity-2dmocap/issues' 40 | }, 41 | ) -------------------------------------------------------------------------------- /mocap2d/mocap2d.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import cv2 3 | import numpy as np 4 | 5 | from torchvision.models.detection import keypointrcnn_resnet50_fpn as keypoint 6 | 7 | 8 | class BoneDetector: 9 | __BONES = [ 10 | (17, 0), (17, 5), (5, 7), (7, 9), (17, 6), (6, 8), (8, 10), (17, 18), 11 | (18, 11), (11, 13), (13, 15), (18, 12), (12, 14), (14, 16) 12 
| ] 13 | 14 | def __init__(self, gpu=False): 15 | self._model = keypoint(pretrained=True) 16 | self._device = None 17 | if gpu: 18 | print("Detecting gpu device...") 19 | if not torch.cuda.is_available(): 20 | raise RuntimeError("No GPU instance detected!!") 21 | self._device = torch.device("cuda:0") 22 | self._model.to(self._device) 23 | self._model.eval() 24 | print("Pose detector initialized.") 25 | 26 | def _image_to_tensor(self, image): 27 | img = np.transpose(image / 255, (2, 0, 1)) 28 | img = torch.from_numpy(img).float() 29 | if self._device: 30 | img = img.to(self._device) 31 | return img 32 | 33 | def extract_bone(self, image, show=False): 34 | im_tensor = self._image_to_tensor(image) 35 | result = self._model([im_tensor])[0] 36 | points, scores = self._find_human(result) 37 | if points is None: 38 | return None 39 | if show: 40 | self._draw_human(image, points) 41 | skeleton = {} 42 | for idx, keys in enumerate(self.__BONES): 43 | k1, k2 = keys 44 | skeleton[f"bone{idx + 1}"] = self._get_bone_vector( 45 | points[k1], points[k2]) 46 | pt1 = points[17] 47 | pt2 = points[18] 48 | ref = np.sqrt((pt1[0] - pt2[0])**2 + (pt1[1] - pt2[1])**2) 49 | return skeleton, points[17], ref 50 | 51 | def _draw_human(self, image, points): 52 | for pt1, pt2 in self.__BONES: 53 | image = cv2.line(image, tuple(points[pt1][:2]), 54 | tuple(points[pt2][:2]), (255, 0, 0), 3) 55 | cv2.imshow("Result", image) 56 | cv2.waitKey(1) 57 | 58 | @staticmethod 59 | def _get_bone_vector(key1, key2): 60 | return int(key2[0] - key1[0]), int(key1[1] - key2[1]) 61 | 62 | def _find_human(self, raw_result): 63 | scores = raw_result["scores"].data.cpu().numpy() 64 | if len(scores) == 0: 65 | return None, None 66 | human_idx = np.argmax(scores) 67 | points = raw_result["keypoints"][human_idx].data.cpu().numpy() 68 | scores = raw_result["keypoints_scores"][human_idx].data.cpu().numpy() 69 | mid_top = self._mid_point(points[5], points[6]) 70 | mid_bot = self._mid_point(points[11], points[12]) 71 | points = 
np.append(points, [mid_top, mid_bot], axis=0).astype(np.int32) 72 | scores = np.append( 73 | scores, [scores[5] + scores[6], scores[11] + scores[12]] 74 | ) 75 | return points, scores 76 | 77 | @staticmethod 78 | def _mid_point(point1, point2): 79 | return (point1[0] + point2[0]) // 2, (point1[1] + point2[1]) // 2, 1 80 | -------------------------------------------------------------------------------- /mocap2d/editor.py: -------------------------------------------------------------------------------- 1 | import tkinter as tk 2 | import cv2 3 | import argparse 4 | from glob import glob 5 | import sys 6 | from os import path 7 | 8 | 9 | global rotate, curr, frame_shape 10 | 11 | 12 | def main(): 13 | parser = argparse.ArgumentParser() 14 | parser.add_argument('file', help="Path to the video file", type=str) 15 | args = parser.parse_args() 16 | 17 | if not glob(args.file): 18 | print(f"Video file at {args.file} not found.") 19 | exit(1) 20 | 21 | video = cv2.VideoCapture(args.file) 22 | fps = video.get(cv2.CAP_PROP_FPS) 23 | frames = [] 24 | while True: 25 | ret, frame = video.read() 26 | if not ret: 27 | break 28 | frames.append(frame) 29 | video.release() 30 | 31 | global rotate, curr, frame_shape 32 | rotate = 0 33 | curr = 0 34 | frame_shape = frames[0].shape[:2][::-1] 35 | 36 | def get_image(): 37 | image = frames[curr] 38 | for _ in range(rotate % 4): 39 | image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE) 40 | image = cv2.resize(image, frame_shape) 41 | return image 42 | 43 | def show_image(): 44 | cv2.imshow(title, get_image()) 45 | cv2.waitKey(1) 46 | 47 | window = tk.Tk() 48 | title = "Simple Video Editor" 49 | window.title(title) 50 | window.geometry("500x150") 51 | 52 | def change_point(val): 53 | global curr 54 | curr = int(val) 55 | show_image() 56 | 57 | tk.Label(window, text="Starting point").place(x=5, y=5) 58 | trim_start = tk.Scale(window, from_=0, to=len(frames) - 1, 59 | orient=tk.HORIZONTAL, command=change_point, 60 | showvalue=False) 61 | 
trim_start.place(x=90, y=5, width=400) 62 | 63 | tk.Label(window, text="Ending point").place(x=5, y=35) 64 | trim_end = tk.Scale(window, from_=0, to=len(frames) - 1, 65 | orient=tk.HORIZONTAL, command=change_point, 66 | showvalue=False) 67 | trim_end.place(x=90, y=35, width=400) 68 | trim_end.set(len(frames) - 1) 69 | 70 | def rotate_clock(): 71 | global rotate 72 | rotate += 1 73 | show_image() 74 | 75 | def rotate_counter(): 76 | global rotate 77 | rotate -= 1 78 | show_image() 79 | 80 | btn1 = tk.Button(window, text="CounterClock", command=rotate_counter) 81 | btn1.place(x=5, y=65) 82 | btn2 = tk.Button(window, text="ClockWise", command=rotate_clock) 83 | btn2.place(x=95, y=65) 84 | 85 | tk.Label(window, text="width =").place(x=185, y=65) 86 | w = tk.Entry(window) 87 | w.place(x=235, y=65, width=40) 88 | tk.Label(window, text="height =").place(x=285, y=65) 89 | h = tk.Entry(window) 90 | h.place(x=345, y=65, width=40) 91 | 92 | def resize(): 93 | global frame_shape 94 | if w.get() and h.get() and int(w.get()) > 0 and int(h.get()) > 0: 95 | frame_shape = (int(w.get()), int(h.get())) 96 | show_image() 97 | 98 | tk.Button(window, text="Resize", command=resize).place( 99 | x=395, y=60, width=70) 100 | 101 | def save_video(): 102 | global curr 103 | if int(trim_start.get()) > int(trim_end.get()): 104 | print("Invalid Trim points") 105 | return 106 | fourcc = cv2.VideoWriter_fourcc(*'mp4v') 107 | filename = path.basename(args.file).split('.')[0] + '.mp4' 108 | new_path = path.join(path.dirname(args.file), 'new-' + filename) 109 | vid_shape = get_image().shape[:2][::-1] 110 | out = cv2.VideoWriter(new_path, fourcc, fps, vid_shape) 111 | for idx in range(int(trim_start.get()), int(trim_end.get()) + 1): 112 | curr = idx 113 | show_image() 114 | out.write(get_image()) 115 | out.release() 116 | print(f"Generated Video at {new_path}") 117 | 118 | tk.Button(window, text="Save", command=save_video).place( 119 | x=5, y=100, width=490, height=40) 120 | 121 | show_image() 122 | 
123 | window.mainloop() 124 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider 2 | # Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839 3 | 4 | # Byte-compiled / optimized / DLL files 5 | __pycache__/ 6 | *.py[cod] 7 | *$py.class 8 | 9 | # C extensions 10 | *.so 11 | 12 | # Distribution / packaging 13 | .Python 14 | build/ 15 | develop-eggs/ 16 | dist/ 17 | downloads/ 18 | eggs/ 19 | .eggs/ 20 | lib/ 21 | lib64/ 22 | parts/ 23 | sdist/ 24 | var/ 25 | wheels/ 26 | pip-wheel-metadata/ 27 | share/python-wheels/ 28 | *.egg-info/ 29 | .installed.cfg 30 | *.egg 31 | MANIFEST 32 | 33 | # PyInstaller 34 | # Usually these files are written by a python script from a template 35 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 36 | *.manifest 37 | *.spec 38 | 39 | # Installer logs 40 | pip-log.txt 41 | pip-delete-this-directory.txt 42 | 43 | # Unit test / coverage reports 44 | htmlcov/ 45 | .tox/ 46 | .nox/ 47 | .coverage 48 | .coverage.* 49 | .cache 50 | nosetests.xml 51 | coverage.xml 52 | *.cover 53 | *.py,cover 54 | .hypothesis/ 55 | .pytest_cache/ 56 | 57 | # Translations 58 | *.mo 59 | *.pot 60 | 61 | # Django stuff: 62 | *.log 63 | local_settings.py 64 | db.sqlite3 65 | db.sqlite3-journal 66 | 67 | # Flask stuff: 68 | instance/ 69 | .webassets-cache 70 | 71 | # Scrapy stuff: 72 | .scrapy 73 | 74 | # Sphinx documentation 75 | docs/_build/ 76 | 77 | # PyBuilder 78 | target/ 79 | 80 | # Jupyter Notebook 81 | .ipynb_checkpoints 82 | 83 | # IPython 84 | profile_default/ 85 | ipython_config.py 86 | 87 | # pyenv 88 | .python-version 89 | 90 | # pipenv 91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 
92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 94 | # install all needed dependencies. 95 | #Pipfile.lock 96 | 97 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow 98 | __pypackages__/ 99 | 100 | # Celery stuff 101 | celerybeat-schedule 102 | celerybeat.pid 103 | 104 | # SageMath parsed files 105 | *.sage.py 106 | 107 | # Environments 108 | .env 109 | .venv 110 | env/ 111 | venv/ 112 | ENV/ 113 | env.bak/ 114 | venv.bak/ 115 | 116 | # Spyder project settings 117 | .spyderproject 118 | .spyproject 119 | 120 | # Rope project settings 121 | .ropeproject 122 | 123 | # mkdocs documentation 124 | /site 125 | 126 | # mypy 127 | .mypy_cache/ 128 | .dmypy.json 129 | dmypy.json 130 | 131 | # Pyre type checker 132 | .pyre/ 133 | 134 | .idea 135 | 136 | # User-specific stuff 137 | .idea/**/workspace.xml 138 | .idea/**/tasks.xml 139 | .idea/**/usage.statistics.xml 140 | .idea/**/dictionaries 141 | .idea/**/shelf 142 | 143 | # Generated files 144 | .idea/**/contentModel.xml 145 | 146 | # Sensitive or high-churn files 147 | .idea/**/dataSources/ 148 | .idea/**/dataSources.ids 149 | .idea/**/dataSources.local.xml 150 | .idea/**/sqlDataSources.xml 151 | .idea/**/dynamic.xml 152 | .idea/**/uiDesigner.xml 153 | .idea/**/dbnavigator.xml 154 | 155 | # Gradle 156 | .idea/**/gradle.xml 157 | .idea/**/libraries 158 | 159 | # Gradle and Maven with auto-import 160 | # When using Gradle or Maven with auto-import, you should exclude module files, 161 | # since they will be recreated, and may cause churn. Uncomment if using 162 | # auto-import. 
163 | # .idea/artifacts 164 | # .idea/compiler.xml 165 | # .idea/jarRepositories.xml 166 | # .idea/modules.xml 167 | # .idea/*.iml 168 | # .idea/modules 169 | # *.iml 170 | # *.ipr 171 | 172 | # CMake 173 | cmake-build-*/ 174 | 175 | # Mongo Explorer plugin 176 | .idea/**/mongoSettings.xml 177 | 178 | # File-based project format 179 | *.iws 180 | 181 | # IntelliJ 182 | out/ 183 | 184 | # mpeltonen/sbt-idea plugin 185 | .idea_modules/ 186 | 187 | # JIRA plugin 188 | atlassian-ide-plugin.xml 189 | 190 | # Cursive Clojure plugin 191 | .idea/replstate.xml 192 | 193 | # Crashlytics plugin (for Android Studio and IntelliJ) 194 | com_crashlytics_export_strings.xml 195 | crashlytics.properties 196 | crashlytics-build.properties 197 | fabric.properties 198 | 199 | # Editor-based Rest Client 200 | .idea/httpRequests 201 | 202 | # Android studio 3.1+ serialized cache file 203 | .idea/caches/build_file_checksums.ser 204 | 205 | -------------------------------------------------------------------------------- /mocap2d/detect.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | from glob import glob 3 | import json 4 | import warnings 5 | import socket 6 | from os import path 7 | 8 | import cv2 9 | 10 | from mocap2d import BoneDetector 11 | 12 | 13 | def detect(args): 14 | if args.out and not path.isdir(args.out): 15 | print(f"Output path {args.out} is not a directory") 16 | exit(2) 17 | 18 | if not glob(args.file[0]): 19 | print(f"Video file at {args.file[0]} not found.") 20 | exit(1) 21 | 22 | print("Initializing pose detector...") 23 | detector = BoneDetector(gpu=args.gpu) 24 | print() 25 | print(f"Reading video file at {args.file[0]}") 26 | video = cv2.VideoCapture(args.file[0]) 27 | fps = video.get(cv2.CAP_PROP_FPS) 28 | frame_count = int(video.get(cv2.CAP_PROP_FRAME_COUNT)) 29 | interval = args.interval 30 | fpi = int(interval * fps + 1) 31 | print(f"{args.file[0]}: FPS = {fps}, FRAME COUNT = {frame_count}") 32 | 
print('--' * 50) 33 | print("Start processing the motion capture") 34 | bone_list = [] 35 | print(f"[>{'.' * 50}] 0%\r", end='') 36 | 37 | pos0 = None 38 | for idx in range(frame_count): 39 | ret, frame = video.read() 40 | 41 | if idx % fpi == 0: 42 | detection = detector.extract_bone(frame, args.show) 43 | if detection is not None: 44 | bones, pos, ref = detection 45 | pos0 = pos if pos0 is None else pos0 46 | bones["pos"] = (round((pos[0] - pos0[0]) / ref, 2), 47 | round((pos0[1] - pos[1]) / ref, 2)) 48 | bones["timestamp"] = round(idx / fps, 3) 49 | bone_list.append(json.dumps(bones)) 50 | curr = 50 * idx // frame_count 51 | print(f"[{'=' * curr}>{'.' * (50 - curr)}] " 52 | f"{round(100 * idx / frame_count, 2)}%\r", end='') 53 | print(f"[{'=' * 50}>] 100% ") 54 | cv2.destroyAllWindows() 55 | video.release() 56 | 57 | if args.out: 58 | with open(path.join(args.out, "result.json"), 'w') as out_file: 59 | out_file.writelines([bone + '\n' for bone in bone_list]) 60 | 61 | print('--' * 50) 62 | print("Motion Capture complete") 63 | return bone_list 64 | 65 | 66 | def send(args): 67 | if '.json' not in args.file[0]: 68 | print("Please provide the json file with the pose data") 69 | exit(4) 70 | 71 | print(f"Retrieving data from {args.file[0]}") 72 | bone_list = [] 73 | with open(args.file[0], 'r') as json_file: 74 | line = json_file.readline() 75 | while line: 76 | bone_list.append(line) 77 | line = json_file.readline() 78 | print("Pose data is ready\n") 79 | return bone_list 80 | 81 | 82 | def main(): 83 | warnings.filterwarnings("ignore", category=UserWarning) 84 | parser = argparse.ArgumentParser( 85 | description="Unity extension for 2D motion capture asset" 86 | ) 87 | parser.add_argument('command', metavar='{ detect, send }', 88 | help="detect: analyze video file and send / " 89 | "send: send json result without analyzing", 90 | type=str) 91 | 92 | parser.add_argument('-f', '--file', dest="file", required=True, nargs=1, 93 | help="Path to the video file", type=str) 94 | parser.add_argument('-g', 
'--gpu', dest="gpu", action="store_true", 95 | help="Run on gpu instance") 96 | parser.add_argument('-i', '--interval', dest='interval', type=float, 97 | help="Interval between key frames in seconds", 98 | default=0.2) 99 | parser.add_argument('-s', '--show', dest='show', action="store_true", 100 | help="Show the result of analysis during the process.") 101 | parser.add_argument('-o', '--output', dest='out', type=str, 102 | help="Directory to save the result in a json format") 103 | args = parser.parse_args() 104 | 105 | print('==' * 50) 106 | print(''' 107 | ##### ##### ## ## ##### 108 | ####### ## ### #### #### #### 109 | ## ## ## ## ## ## ## ## 110 | ## ## ## ## ### ## ### ## 111 | ## ## ## ## ## # # ## ###### ###### 112 | ## ### ### ## ## # # #### ## ## ## ## 113 | ####### ##### ## ## ### ##### #### ## ##### 114 | ## 115 | ## by k2sebeom''') 116 | print("==" * 50 + '\n') 117 | 118 | bone_list = [] 119 | if args.command == "detect": 120 | bone_list = detect(args) 121 | elif args.command == "send": 122 | bone_list = send(args) 123 | else: 124 | print("Please enter a valid command of either 'send' or 'detect'") 125 | exit(3) 126 | 127 | print('Receive data at Unity by hitting "Connect to Detector" button') 128 | 129 | host = "127.0.0.1" 130 | port = 65432 131 | 132 | with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: 133 | s.bind((host, port)) 134 | s.listen() 135 | conn, addr = s.accept() 136 | with conn: 137 | app_name = conn.recv(1024).decode() 138 | print(f'Connected to Unity project [{app_name}]') 139 | for bone in bone_list: 140 | conn.sendall(bone.encode()) 141 | conn.recv(1024) 142 | print("Transfer Successful") 143 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 

2 | 3 |

4 | 5 | 2D Motion Capture for Unity 6 | ------- 7 | 8 | [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT) 9 | [![PyPI version](https://badge.fury.io/py/unity-2dmocap.svg)](https://badge.fury.io/py/unity-2dmocap) 10 | 11 | **2D Motion Capture for Unity** is a Python application designed to complement the Unity asset, 2D Motion Capture. 12 | The asset enables easy construction of 2D sprite animations in Unity through a completely device-less motion capture technique, 13 | and this application analyzes a video file to capture the posture of the human in the video. The extraction of the 14 | posture from the human image is implemented using [OpenCV](https://opencv.org/) and [torchvision](https://pytorch.org/docs/stable/torchvision/index.html). 15 | With this application, you can easily build a sophisticated animation clip for 16 | your Unity project without any special equipment, and once the animation is 17 | generated, you can freely modify the clips the same way you edit generic Unity animations. 18 | 19 | 

20 | 21 |

22 | 23 | ### Features 24 | * Commandline tool for video analysis and a direct connection to a Unity project 25 | * Simple video editor for preparing the source video 26 | * Extracts human posture using torchvision's Keypoint R-CNN deep learning model 27 | * Exports posture information to a json file 28 | * Generates Unity animation clips in the editor environment 29 | * Easy save & load system for sprite rigging in Unity 30 | 31 | ### Installation 32 | ```{commandline} 33 | $ pip install unity-2dmocap 34 | ``` 35 | To update the application to the latest version, you can run: 36 | ```{commandline} 37 | $ pip install --upgrade unity-2dmocap 38 | ``` 39 | To check if the application is installed properly, you can try the following: 40 | ```{commandline} 41 | $ python 42 | ``` 43 | ```{python} 44 | >>> import mocap2d 45 | >>> mocap2d.__version__ 46 | ``` 47 | If it prints a version number without an error, the application is installed properly. 48 | 49 | ### Unity Asset 50 | 51 | To use this package, you need the 2D Motion Capture asset, the counterpart of 52 | this package. You can download it below. 53 | 54 | [2d_motion_capture.unitypackage](./2d_motion_capture.unitypackage) 55 | 56 | ### Usage 57 | 58 | Once you have installed the package, you can use the features through the command line. 59 | 60 | ```{commandline} 61 | $ 2dmocap-unity [-h] { detect, send } -f FILE [-g] [-i INTERVAL] [-s] [-o OUT] 62 | ``` 63 | The commands you can use are "detect" and "send". "detect" will analyze the video file 64 | and connect to the Unity project. "send" will read a pre-generated json file and send 65 | it to the Unity project without re-running the analysis. 
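Each keyframe that "detect" produces (and that "send" replays) is stored as one JSON object per line. A minimal sketch of reading such a file back; the field names follow the detector's output, but the values here are hypothetical:

```python
import json

# Two hypothetical lines of a result.json saved with -o; the values
# are made up for illustration, only the field names follow detect.py.
lines = [
    '{"bone1": [12, -3], "pos": [0.0, 0.0], "timestamp": 0.0}',
    '{"bone1": [10, -5], "pos": [0.25, -0.1], "timestamp": 0.2}',
]

# One JSON object per line, so each line parses independently.
keyframes = [json.loads(line) for line in lines]
print(len(keyframes), keyframes[1]["timestamp"])
```

Because each line is self-contained, the file can be streamed line by line without loading the whole analysis into memory.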
66 | 67 | If you are using "detect", the arguments are: 68 | * -h, --help: show the help message 69 | * -f FILE, --file FILE: a path to the video file that you want to analyze 70 | * -g, --gpu: with this argument, the program will run on the GPU for a faster analysis 71 | * -i INTERVAL, --interval INTERVAL: a target interval between animation keyframes in seconds 72 | * -s, --show: with this argument, the program will show the resulting skeleton during the analysis 73 | * -o OUT, --output OUT: a directory in which the result.json file is saved for later use 74 | 75 | If you are using "send", the arguments are: 76 | * -f FILE, --file FILE: a path to the json file you want to send 77 | 78 | ### QuickStart 79 | 80 | Suppose you want to analyze a video file named "ballet.mp4" quickly on the GPU 81 | of your machine, with intervals of 0.2 seconds 82 | between keyframes. Then, the following command will start the program. 83 | 84 | ```{commandline} 85 | $ 2dmocap-unity detect --file=ballet.mp4 --gpu --interval=0.2 86 | ``` 87 | 88 | Once you type the command, the program will initiate the analysis as follows. 89 | 90 | 
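Note that --interval does not resample the video; the detector simply skips frames. The spacing follows the formula in detect.py, shown here for a hypothetical 30 fps source:

```python
fps = 30.0      # hypothetical frame rate of the source video
interval = 0.2  # seconds between keyframes, i.e. --interval=0.2

# detect.py analyzes every fpi-th frame and skips the rest.
fpi = int(interval * fps + 1)
keyframe_indices = [idx for idx in range(60) if idx % fpi == 0]
print(fpi, keyframe_indices)
```

So at 30 fps a 0.2-second interval means roughly every 7th frame is analyzed, which is why larger intervals make the analysis noticeably faster.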

91 | 92 |

93 | 94 | You can wait until the analysis is over, and if you used the argument -s or --show, 95 | an external window will display the progress of the analysis by drawing a skeleton 96 | on the original image. After the analysis is complete, the program will prompt you to connect to 97 | the Unity project. 98 | 99 | 

100 | 101 |

102 | 103 | To connect to your Unity project, open your project in the Unity editor. If you have 104 | successfully imported the "2D Motion Capture" asset, the menu will appear under 105 | Window/2D Motion Capture. 106 | 107 | 

108 | 109 |

110 | 111 | If you click on the menu, the following editor window will show up. 112 | 113 |

114 | 115 |

116 | 117 | As prompted by the program, if you click on the "Connect to Detector" button, Unity will 118 | connect to your Python session and receive the result of the analysis. Then, 119 | the window will let you preview the generated animation. If you click on the "Save Animation Clip" button, 120 | the generated animation clip will be saved to the directory you specify. 121 | 122 | 
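Under the hood, this connection is a plain TCP exchange on localhost: Unity connects, sends its project name, then receives one pose frame per message and acknowledges each. A minimal self-contained sketch of that handshake (detect.py binds 127.0.0.1:65432; an ephemeral port and a stand-in client are used here so the sketch runs anywhere):

```python
import json
import socket
import threading

# One hypothetical pose frame, serialized the way detect() stores them.
bone_list = [json.dumps({"bone1": [12, -3], "timestamp": 0.0})]

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # detect.py uses the fixed port 65432
server.listen()
port = server.getsockname()[1]

def serve():
    # Python side: wait for Unity, read the project name,
    # then stream each frame and wait for an acknowledgement.
    conn, _ = server.accept()
    with conn:
        app_name = conn.recv(1024).decode()
        for bone in bone_list:
            conn.sendall(bone.encode())
            conn.recv(1024)  # ack before sending the next frame

t = threading.Thread(target=serve)
t.start()

# Stand-in for the Unity side of the exchange.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", port))
    client.sendall(b"MyProject")  # hypothetical project name
    frame = json.loads(client.recv(1024).decode())
    client.sendall(b"ok")  # acknowledge receipt

t.join()
server.close()
print(frame["timestamp"])
```

In the real asset the roles are identical, with the Unity editor acting as the client on the fixed port 65432.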

123 | 124 |

125 | 126 | Once the animation is generated, you can edit the clip just the same way you 127 | edit generic Unity animation clips. 128 | 129 | ### Community and Feedback 130 | 131 | 2D Motion Capture for Unity is an open-source project, and we encourage and welcome contributions. 132 | We look forward to the active participation of game developers, animation artists, and fans of machine learning, so 133 | if you wish to contribute, please feel free to send pull requests or report issues. 134 | 135 | For any further feedback or bug reports, please use GitHub's issue template or leave a review 136 | on the Asset Store page. 137 | 138 | Your participation matters a great deal to us. 139 | --------------------------------------------------------------------------------