├── tore
│ ├── utils
│ │ ├── __init__.py
│ │ ├── __pycache__
│ │ │ ├── comm.cpython-36.pyc
│ │ │ ├── comm.cpython-37.pyc
│ │ │ ├── comm.cpython-38.pyc
│ │ │ ├── comm.cpython-39.pyc
│ │ │ ├── logger.cpython-36.pyc
│ │ │ ├── logger.cpython-37.pyc
│ │ │ ├── logger.cpython-38.pyc
│ │ │ ├── logger.cpython-39.pyc
│ │ │ ├── __init__.cpython-36.pyc
│ │ │ ├── __init__.cpython-37.pyc
│ │ │ ├── __init__.cpython-38.pyc
│ │ │ ├── __init__.cpython-39.pyc
│ │ │ ├── image_ops.cpython-36.pyc
│ │ │ ├── image_ops.cpython-37.pyc
│ │ │ ├── image_ops.cpython-38.pyc
│ │ │ ├── image_ops.cpython-39.pyc
│ │ │ ├── renderer.cpython-36.pyc
│ │ │ ├── renderer.cpython-37.pyc
│ │ │ ├── renderer.cpython-38.pyc
│ │ │ ├── renderer.cpython-39.pyc
│ │ │ ├── tsv_file.cpython-36.pyc
│ │ │ ├── tsv_file.cpython-37.pyc
│ │ │ ├── tsv_file.cpython-38.pyc
│ │ │ ├── tsv_file.cpython-39.pyc
│ │ │ ├── tsv_file_ops.cpython-36.pyc
│ │ │ ├── tsv_file_ops.cpython-37.pyc
│ │ │ ├── tsv_file_ops.cpython-38.pyc
│ │ │ ├── tsv_file_ops.cpython-39.pyc
│ │ │ ├── metric_logger.cpython-36.pyc
│ │ │ ├── metric_logger.cpython-37.pyc
│ │ │ ├── metric_logger.cpython-38.pyc
│ │ │ ├── metric_logger.cpython-39.pyc
│ │ │ ├── metric_pampjpe.cpython-36.pyc
│ │ │ ├── metric_pampjpe.cpython-37.pyc
│ │ │ ├── metric_pampjpe.cpython-38.pyc
│ │ │ ├── metric_pampjpe.cpython-39.pyc
│ │ │ ├── miscellaneous.cpython-36.pyc
│ │ │ ├── miscellaneous.cpython-37.pyc
│ │ │ ├── miscellaneous.cpython-38.pyc
│ │ │ ├── miscellaneous.cpython-39.pyc
│ │ │ ├── renderer_metro.cpython-38.pyc
│ │ │ ├── geometric_layers.cpython-36.pyc
│ │ │ ├── geometric_layers.cpython-37.pyc
│ │ │ ├── geometric_layers.cpython-38.pyc
│ │ │ └── geometric_layers.cpython-39.pyc
│ │ ├── metric_logger.py
│ │ ├── dataset_utils.py
│ │ ├── geometric_layers.py
│ │ ├── metric_pampjpe.py
│ │ ├── logger.py
│ │ └── tsv_file_ops.py
│ ├── modeling
│ │ ├── __init__.py
│ │ ├── .DS_Store
│ │ ├── __pycache__
│ │ │ ├── _mano.cpython-37.pyc
│ │ │ ├── _smpl.cpython-36.pyc
│ │ │ ├── _smpl.cpython-37.pyc
│ │ │ ├── _smpl.cpython-38.pyc
│ │ │ ├── _smpl.cpython-39.pyc
│ │ │ ├── __init__.cpython-36.pyc
│ │ │ ├── __init__.cpython-37.pyc
│ │ │ ├── __init__.cpython-38.pyc
│ │ │ └── __init__.cpython-39.pyc
│ │ ├── bert
│ │ │ ├── __pycache__
│ │ │ │ ├── __init__.cpython-36.pyc
│ │ │ │ ├── __init__.cpython-37.pyc
│ │ │ │ ├── __init__.cpython-38.pyc
│ │ │ │ ├── __init__.cpython-39.pyc
│ │ │ │ ├── file_utils.cpython-36.pyc
│ │ │ │ ├── file_utils.cpython-37.pyc
│ │ │ │ ├── file_utils.cpython-38.pyc
│ │ │ │ ├── file_utils.cpython-39.pyc
│ │ │ │ ├── transformer.cpython-36.pyc
│ │ │ │ ├── transformer.cpython-37.pyc
│ │ │ │ ├── transformer.cpython-38.pyc
│ │ │ │ ├── transformer.cpython-39.pyc
│ │ │ │ ├── modeling_bert.cpython-36.pyc
│ │ │ │ ├── modeling_bert.cpython-37.pyc
│ │ │ │ ├── modeling_bert.cpython-38.pyc
│ │ │ │ ├── modeling_bert.cpython-39.pyc
│ │ │ │ ├── modeling_metro.cpython-36.pyc
│ │ │ │ ├── modeling_metro.cpython-37.pyc
│ │ │ │ ├── modeling_metro.cpython-38.pyc
│ │ │ │ ├── modeling_metro.cpython-39.pyc
│ │ │ │ ├── modeling_utils.cpython-36.pyc
│ │ │ │ ├── modeling_utils.cpython-37.pyc
│ │ │ │ ├── modeling_utils.cpython-38.pyc
│ │ │ │ ├── modeling_utils.cpython-39.pyc
│ │ │ │ ├── position_encoding.cpython-36.pyc
│ │ │ │ ├── position_encoding.cpython-37.pyc
│ │ │ │ ├── position_encoding.cpython-38.pyc
│ │ │ │ └── position_encoding.cpython-39.pyc
│ │ │ ├── bert-base-uncased
│ │ │ │ └── config.json
│ │ │ ├── __init__.py
│ │ │ └── position_encoding.py
│ │ └── hrnet
│ │   ├── __pycache__
│ │   │ ├── hrnet_cls_net.cpython-37.pyc
│ │   │ ├── hrnet_cls_net_featmaps.cpython-36.pyc
│ │   │ ├── hrnet_cls_net_featmaps.cpython-37.pyc
│ │   │ ├── hrnet_cls_net_featmaps.cpython-38.pyc
│ │   │ └── hrnet_cls_net_featmaps.cpython-39.pyc
│ │   └── config
│ │     ├── __pycache__
│ │     │ ├── models.cpython-36.pyc
│ │     │ ├── models.cpython-37.pyc
│ │     │ ├── models.cpython-38.pyc
│ │     │ ├── models.cpython-39.pyc
│ │     │ ├── __init__.cpython-36.pyc
│ │     │ ├── __init__.cpython-37.pyc
│ │     │ ├── __init__.cpython-38.pyc
│ │     │ ├── __init__.cpython-39.pyc
│ │     │ ├── default.cpython-36.pyc
│ │     │ ├── default.cpython-37.pyc
│ │     │ ├── default.cpython-38.pyc
│ │     │ └── default.cpython-39.pyc
│ │     ├── __init__.py
│ │     ├── models.py
│ │     └── default.py
│ ├── modeling_fm
│ │ ├── __init__.py
│ │ ├── __pycache__
│ │ │ ├── _mano.cpython-37.pyc
│ │ │ ├── _smpl.cpython-36.pyc
│ │ │ ├── _smpl.cpython-37.pyc
│ │ │ ├── _smpl.cpython-38.pyc
│ │ │ ├── _smpl.cpython-39.pyc
│ │ │ ├── __init__.cpython-36.pyc
│ │ │ ├── __init__.cpython-37.pyc
│ │ │ ├── __init__.cpython-38.pyc
│ │ │ └── __init__.cpython-39.pyc
│ │ └── bert
│ │   ├── __pycache__
│ │   │ ├── __init__.cpython-36.pyc
│ │   │ ├── __init__.cpython-37.pyc
│ │   │ ├── __init__.cpython-38.pyc
│ │   │ ├── __init__.cpython-39.pyc
│ │   │ ├── file_utils.cpython-36.pyc
│ │   │ ├── file_utils.cpython-37.pyc
│ │   │ ├── file_utils.cpython-38.pyc
│ │   │ ├── file_utils.cpython-39.pyc
│ │   │ ├── transformer.cpython-36.pyc
│ │   │ ├── transformer.cpython-37.pyc
│ │   │ ├── transformer.cpython-38.pyc
│ │   │ ├── transformer.cpython-39.pyc
│ │   │ ├── modeling_bert.cpython-36.pyc
│ │   │ ├── modeling_bert.cpython-37.pyc
│ │   │ ├── modeling_bert.cpython-38.pyc
│ │   │ ├── modeling_bert.cpython-39.pyc
│ │   │ ├── modeling_metro.cpython-36.pyc
│ │   │ ├── modeling_metro.cpython-37.pyc
│ │   │ ├── modeling_metro.cpython-38.pyc
│ │   │ ├── modeling_metro.cpython-39.pyc
│ │   │ ├── modeling_utils.cpython-36.pyc
│ │   │ ├── modeling_utils.cpython-37.pyc
│ │   │ ├── modeling_utils.cpython-38.pyc
│ │   │ ├── modeling_utils.cpython-39.pyc
│ │   │ ├── position_encoding.cpython-36.pyc
│ │   │ ├── position_encoding.cpython-37.pyc
│ │   │ ├── position_encoding.cpython-38.pyc
│ │   │ └── position_encoding.cpython-39.pyc
│ │   ├── bert-base-uncased
│ │   │ └── config.json
│ │   ├── __init__.py
│ │   └── position_encoding.py
│ ├── modeling_m
│ │ ├── __init__.py
│ │ ├── .DS_Store
│ │ ├── __pycache__
│ │ │ ├── _smpl.cpython-37.pyc
│ │ │ ├── _smpl.cpython-38.pyc
│ │ │ ├── __init__.cpython-37.pyc
│ │ │ └── __init__.cpython-38.pyc
│ │ └── bert
│ │   ├── __pycache__
│ │   │ ├── __init__.cpython-37.pyc
│ │   │ ├── __init__.cpython-38.pyc
│ │   │ ├── file_utils.cpython-37.pyc
│ │   │ ├── file_utils.cpython-38.pyc
│ │   │ ├── transformer.cpython-37.pyc
│ │   │ ├── transformer.cpython-38.pyc
│ │   │ ├── modeling_bert.cpython-37.pyc
│ │   │ ├── modeling_bert.cpython-38.pyc
│ │   │ ├── modeling_metro.cpython-37.pyc
│ │   │ ├── modeling_metro.cpython-38.pyc
│ │   │ ├── modeling_utils.cpython-37.pyc
│ │   │ ├── modeling_utils.cpython-38.pyc
│ │   │ ├── position_encoding.cpython-37.pyc
│ │   │ └── position_encoding.cpython-38.pyc
│ │   ├── bert-base-uncased
│ │   │ └── config.json
│ │   ├── __init__.py
│ │   └── position_encoding.py
│ ├── datasets
│ │ ├── __init__.py
│ │ └── __pycache__
│ │   ├── build.cpython-36.pyc
│ │   ├── build.cpython-37.pyc
│ │   ├── build.cpython-38.pyc
│ │   ├── build.cpython-39.pyc
│ │   ├── __init__.cpython-36.pyc
│ │   ├── __init__.cpython-37.pyc
│ │   ├── __init__.cpython-38.pyc
│ │   ├── __init__.cpython-39.pyc
│ │   ├── hand_mesh_tsv.cpython-36.pyc
│ │   ├── hand_mesh_tsv.cpython-37.pyc
│ │   ├── hand_mesh_tsv.cpython-38.pyc
│ │   ├── hand_mesh_tsv.cpython-39.pyc
│ │   ├── human_mesh_tsv.cpython-36.pyc
│ │   ├── human_mesh_tsv.cpython-37.pyc
│ │   ├── human_mesh_tsv.cpython-38.pyc
│ │   └── human_mesh_tsv.cpython-39.pyc
│ ├── __init__.py
│ ├── .DS_Store
│ └── __pycache__
│   ├── __init__.cpython-36.pyc
│   ├── __init__.cpython-37.pyc
│   ├── __init__.cpython-38.pyc
│   └── __init__.cpython-39.pyc
├── manopth
│ ├── mano
│ │ ├── __init__.py
│ │ └── webuser
│ │   ├── __init__.py
│ │   ├── posemapper.py
│ │   ├── lbs.py
│ │   ├── serialization.py
│ │   └── verts.py
│ ├── manopth
│ │ ├── __init__.py
│ │ ├── rotproj.py
│ │ ├── tensutils.py
│ │ ├── argutils.py
│ │ ├── demo.py
│ │ ├── rot6d.py
│ │ └── rodrigues_layer.py
│ ├── assets
│ │ ├── mano_layer.png
│ │ └── random_hand.png
│ ├── .gitignore
│ ├── environment.yml
│ ├── test
│ │ └── test_demo.py
│ ├── examples
│ │ ├── manopth_mindemo.py
│ │ └── manopth_demo.py
│ └── setup.py
├── transformers
│ ├── MANIFEST.in
│ ├── pytorch_transformers
│ │ ├── tests
│ │ │ ├── __init__.py
│ │ │ ├── fixtures
│ │ │ │ ├── input.txt
│ │ │ │ ├── test_sentencepiece.model
│ │ │ │ └── sample_text.txt
│ │ │ ├── conftest.py
│ │ │ ├── tokenization_utils_test.py
│ │ │ ├── modeling_gpt2_test.py
│ │ │ ├── modeling_openai_test.py
│ │ │ ├── tokenization_xlm_test.py
│ │ │ ├── tokenization_openai_test.py
│ │ │ ├── tokenization_gpt2_test.py
│ │ │ └── tokenization_transfo_xl_test.py
│ │ ├── convert_tf_checkpoint_to_pytorch.py
│ │ ├── __init__.py
│ │ ├── convert_xlm_checkpoint_to_pytorch.py
│ │ ├── convert_gpt2_checkpoint_to_pytorch.py
│ │ └── convert_openai_checkpoint_to_pytorch.py
│ ├── examples
│ │ ├── requirements.txt
│ │ └── tests_samples
│ │   ├── .gitignore
│ │   └── MRPC
│ │     ├── dev.tsv
│ │     └── train.tsv
│ ├── docs
│ │ ├── source
│ │ │ ├── _static
│ │ │ │ ├── css
│ │ │ │ │ ├── Calibre-Light.ttf
│ │ │ │ │ ├── Calibre-Medium.otf
│ │ │ │ │ ├── Calibre-Thin.otf
│ │ │ │ │ ├── Calibre-Regular.otf
│ │ │ │ │ └── code-snippets.css
│ │ │ │ └── js
│ │ │ │   └── custom.js
│ │ │ ├── imgs
│ │ │ │ ├── warmup_cosine_schedule.png
│ │ │ │ ├── warmup_linear_schedule.png
│ │ │ │ ├── warmup_constant_schedule.png
│ │ │ │ ├── warmup_cosine_hard_restarts_schedule.png
│ │ │ │ └── warmup_cosine_warm_restarts_schedule.png
│ │ │ ├── model_doc
│ │ │ │ ├── transformerxl.rst
│ │ │ │ ├── gpt2.rst
│ │ │ │ ├── gpt.rst
│ │ │ │ ├── xlm.rst
│ │ │ │ ├── xlnet.rst
│ │ │ │ └── bert.rst
│ │ │ ├── bertology.rst
│ │ │ ├── notebooks.rst
│ │ │ ├── installation.rst
│ │ │ ├── index.rst
│ │ │ └── converting_tensorflow_models.rst
│ │ ├── Makefile
│ │ ├── requirements.txt
│ │ └── README.md
│ ├── docker
│ │ └── Dockerfile
│ ├── .coveragerc
│ ├── requirements.txt
│ ├── .github
│ │ └── stale.yml
│ ├── hubconf.py
│ ├── .circleci
│ │ └── config.yml
│ ├── .gitignore
│ └── setup.py
├── eval
│ └── .DS_Store
├── docs
│ ├── example_keli.gif
│ ├── example_zliu.gif
│ ├── example_lijuanw.gif
│ ├── metro-overview.png
│ ├── CODE_OF_CONDUCT.md
│ ├── CONTRIBUTE.md
│ ├── INSTALL.md
│ └── DEMO.md
├── dist
│ └── metro-0.1.0-py3.7.egg
├── samples
│ └── human-body
│   ├── 3dpw_test1.jpg
│   ├── 3dpw_test2.jpg
│   ├── 3dpw_test3.jpg
│   ├── 3dpw_test1_tore_pred.jpg
│   ├── 3dpw_test2_tore_pred.jpg
│   └── 3dpw_test3_tore_pred.jpg
├── requirements.txt
├── inference_tore_fm.sh
├── eval_tore_fm.sh
├── eval_tore_m.sh
├── azure-pipelines.yml
├── train_tore_m.sh
├── train_tore_fm.sh
├── LICENSE
├── setup.py
└── SECURITY.md

--------------------------------------------------------------------------------
/tore/utils/__init__.py:
--------------------------------------------------------------------------------
1 |

--------------------------------------------------------------------------------
/manopth/mano/__init__.py:
--------------------------------------------------------------------------------
1 |

--------------------------------------------------------------------------------
/tore/modeling/__init__.py:
--------------------------------------------------------------------------------
1 |

--------------------------------------------------------------------------------
/tore/modeling_fm/__init__.py:
--------------------------------------------------------------------------------
1 |

--------------------------------------------------------------------------------
/tore/modeling_m/__init__.py:
--------------------------------------------------------------------------------
1 |

--------------------------------------------------------------------------------
/manopth/mano/webuser/__init__.py:
--------------------------------------------------------------------------------
1 |

--------------------------------------------------------------------------------
/tore/datasets/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |

--------------------------------------------------------------------------------
/tore/__init__.py:
--------------------------------------------------------------------------------
1 | __version__ = "0.1.0"
2 |

--------------------------------------------------------------------------------
/manopth/manopth/__init__.py:
--------------------------------------------------------------------------------
1 | name = 'manopth'
2 |

--------------------------------------------------------------------------------
/transformers/MANIFEST.in:
--------------------------------------------------------------------------------
1 | include LICENSE
2 |

--------------------------------------------------------------------------------
/transformers/pytorch_transformers/tests/__init__.py:
--------------------------------------------------------------------------------
1 |

--------------------------------------------------------------------------------
/transformers/examples/requirements.txt:
--------------------------------------------------------------------------------
1 | tensorboardX
2 | scikit-learn

--------------------------------------------------------------------------------
/transformers/examples/tests_samples/.gitignore:
--------------------------------------------------------------------------------
1 | *.*
2 | cache*
3 | temp*
4 | !*.tsv
5 | !*.json
6 | !.gitignore

--------------------------------------------------------------------------------
/transformers/pytorch_transformers/tests/fixtures/input.txt:
--------------------------------------------------------------------------------
1 | Who was Jim Henson ? ||| Jim Henson was a puppeteer
2 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | yacs
2 | cython
3 | opencv-python
4 | tqdm
5 | nltk
6 | numpy
7 | scipy==1.4.1
8 | chumpy
9 | boto3
10 | requests
11 | efficientnet-pytorch
https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_bert.cpython-36.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/modeling_bert.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_bert.cpython-37.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/modeling_bert.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_bert.cpython-38.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/modeling_bert.cpython-39.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_bert.cpython-39.pyc -------------------------------------------------------------------------------- /tore/modeling_m/bert/__pycache__/modeling_bert.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_m/bert/__pycache__/modeling_bert.cpython-37.pyc -------------------------------------------------------------------------------- /tore/modeling_m/bert/__pycache__/modeling_bert.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_m/bert/__pycache__/modeling_bert.cpython-38.pyc -------------------------------------------------------------------------------- /tore/modeling_m/bert/__pycache__/modeling_metro.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_m/bert/__pycache__/modeling_metro.cpython-37.pyc -------------------------------------------------------------------------------- /tore/modeling_m/bert/__pycache__/modeling_metro.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_m/bert/__pycache__/modeling_metro.cpython-38.pyc -------------------------------------------------------------------------------- /tore/modeling_m/bert/__pycache__/modeling_utils.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_m/bert/__pycache__/modeling_utils.cpython-37.pyc -------------------------------------------------------------------------------- /tore/modeling_m/bert/__pycache__/modeling_utils.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_m/bert/__pycache__/modeling_utils.cpython-38.pyc -------------------------------------------------------------------------------- /tore/modeling/bert/__pycache__/position_encoding.cpython-36.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling/bert/__pycache__/position_encoding.cpython-36.pyc -------------------------------------------------------------------------------- /tore/modeling/bert/__pycache__/position_encoding.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling/bert/__pycache__/position_encoding.cpython-37.pyc -------------------------------------------------------------------------------- /tore/modeling/bert/__pycache__/position_encoding.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling/bert/__pycache__/position_encoding.cpython-38.pyc -------------------------------------------------------------------------------- /tore/modeling/bert/__pycache__/position_encoding.cpython-39.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling/bert/__pycache__/position_encoding.cpython-39.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/modeling_metro.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_metro.cpython-36.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/modeling_metro.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_metro.cpython-37.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/modeling_metro.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_metro.cpython-38.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/modeling_metro.cpython-39.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_metro.cpython-39.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/modeling_utils.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_utils.cpython-36.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/modeling_utils.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_utils.cpython-37.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/modeling_utils.cpython-38.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_utils.cpython-38.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/modeling_utils.cpython-39.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/modeling_utils.cpython-39.pyc -------------------------------------------------------------------------------- /tore/modeling_m/bert/__pycache__/position_encoding.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_m/bert/__pycache__/position_encoding.cpython-37.pyc -------------------------------------------------------------------------------- /tore/modeling_m/bert/__pycache__/position_encoding.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_m/bert/__pycache__/position_encoding.cpython-38.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/position_encoding.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/position_encoding.cpython-36.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/position_encoding.cpython-37.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/position_encoding.cpython-37.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/position_encoding.cpython-38.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/position_encoding.cpython-38.pyc -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__pycache__/position_encoding.cpython-39.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling_fm/bert/__pycache__/position_encoding.cpython-39.pyc -------------------------------------------------------------------------------- /inference_tore_fm.sh: -------------------------------------------------------------------------------- 1 | python ./tore/tools/tore_inference_fm.py \ 2 | --resume_checkpoint ./checkpoints/h64_gtr_itp0.8_44.4_3dpw.bin \ 3 | --image_file_or_path ./samples/human-body -------------------------------------------------------------------------------- /tore/modeling/hrnet/__pycache__/hrnet_cls_net_featmaps.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/tore/modeling/hrnet/__pycache__/hrnet_cls_net_featmaps.cpython-36.pyc -------------------------------------------------------------------------------- /tore/modeling/hrnet/__pycache__/hrnet_cls_net_featmaps.cpython-37.pyc: -------------------------------------------------------------------------------- 
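A note on the naming above, inferred from eval_tore_fm.sh and train_tore_fm.sh further below (an editorial reading, not something the repository states): the checkpoint name appears to encode the backbone (h64 → --arch hrnet-w64), the token keep ratio (itp0.8 → --keep_ratio 0.8), and a reported error in millimeters on the named benchmark (44.4 on 3DPW). A typical run would first build the package with `python setup.py build develop`, as the eval and train scripts below do, and then invoke `bash inference_tore_fm.sh` on the bundled ./samples/human-body images.
--------------------------------------------------------------------------------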
/transformers/docs/source/imgs/warmup_cosine_hard_restarts_schedule.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/transformers/docs/source/imgs/warmup_cosine_hard_restarts_schedule.png
--------------------------------------------------------------------------------
/transformers/docs/source/imgs/warmup_cosine_warm_restarts_schedule.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/transformers/docs/source/imgs/warmup_cosine_warm_restarts_schedule.png
--------------------------------------------------------------------------------
/manopth/.gitignore:
--------------------------------------------------------------------------------
1 | *.sw*
2 | *.bak
3 | *_bak.py
4 |
5 | .cache/
6 | __pycache__/
7 | build/
8 | dist/
9 | manopth_hassony2.egg-info/
10 |
11 | mano/models
12 | assets/mano_layer.svg
--------------------------------------------------------------------------------
/transformers/pytorch_transformers/tests/fixtures/test_sentencepiece.model:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Frank-ZY-Dou/TORE/HEAD/transformers/pytorch_transformers/tests/fixtures/test_sentencepiece.model
--------------------------------------------------------------------------------
/transformers/docker/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM pytorch/pytorch:latest
2 |
3 | RUN git clone https://github.com/NVIDIA/apex.git && cd apex && python setup.py install --cuda_ext --cpp_ext
4 |
5 | RUN pip install pytorch_transformers
6 |
7 | WORKDIR /workspace
--------------------------------------------------------------------------------
/manopth/environment.yml:
--------------------------------------------------------------------------------
1 | name: manopth
2 |
3 | dependencies:
4 | - opencv
5 | - python=3.7
6 | - matplotlib
7 | - numpy
8 | - pytorch
9 | - tqdm
10 | - git
11 | - pip:
12 | - git+https://github.com/hassony2/chumpy.git
--------------------------------------------------------------------------------
/transformers/.coveragerc:
--------------------------------------------------------------------------------
1 | [run]
2 | source=pytorch_transformers
3 | omit =
4 | # skip conversion scripts from testing for now
5 | */convert_*
6 | */__main__.py
7 | [report]
8 | exclude_lines =
9 | pragma: no cover
10 | raise
11 | except
12 | register_parameter
-------------------------------------------------------------------------------- /transformers/requirements.txt: -------------------------------------------------------------------------------- 1 | # PyTorch 2 | torch>=0.4.1 3 | # progress bars in model download and training scripts 4 | tqdm 5 | # Accessing files from S3 directly. 6 | boto3 7 | # Used for downloading models over HTTP 8 | requests 9 | # For OpenAI GPT 10 | regex 11 | # For XLNet 12 | sentencepiece -------------------------------------------------------------------------------- /transformers/docs/source/_static/css/code-snippets.css: -------------------------------------------------------------------------------- 1 | 2 | .highlight .c1, .highlight .sd{ 3 | color: #999 4 | } 5 | 6 | .highlight .nn, .highlight .k, .highlight .s1, .highlight .nb, .highlight .bp, .highlight .kc { 7 | color: #FB8D68; 8 | } 9 | 10 | .highlight .kn, .highlight .nv, .highlight .s2, .highlight .ow { 11 | color: #6670FF; 12 | } -------------------------------------------------------------------------------- /manopth/test/test_demo.py: -------------------------------------------------------------------------------- 1 | import torch 2 | 3 | from manopth.demo import generate_random_hand 4 | 5 | 6 | def test_generate_random_hand(): 7 | batch_size = 3 8 | hand_info = generate_random_hand(batch_size=batch_size, ncomps=6) 9 | verts = hand_info['verts'] 10 | joints = hand_info['joints'] 11 | assert verts.shape == (batch_size, 778, 3) 12 | assert joints.shape == (batch_size, 21, 3) 13 | -------------------------------------------------------------------------------- /tore/modeling/hrnet/config/__init__.py: -------------------------------------------------------------------------------- 1 | # ------------------------------------------------------------------------------ 2 | # Copyright (c) Microsoft 3 | # Licensed under the MIT License. 
4 | # Written by Bin Xiao (Bin.Xiao@microsoft.com) 5 | # ------------------------------------------------------------------------------ 6 | 7 | from .default import _C as config 8 | from .default import update_config 9 | from .models import MODEL_EXTRAS 10 | -------------------------------------------------------------------------------- /tore/modeling/bert/bert-base-uncased/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "architectures": [ 3 | "BertForMaskedLM" 4 | ], 5 | "attention_probs_dropout_prob": 0.1, 6 | "hidden_act": "gelu", 7 | "hidden_dropout_prob": 0.1, 8 | "hidden_size": 768, 9 | "initializer_range": 0.02, 10 | "intermediate_size": 3072, 11 | "max_position_embeddings": 512, 12 | "num_attention_heads": 12, 13 | "num_hidden_layers": 12, 14 | "type_vocab_size": 2, 15 | "vocab_size": 30522 16 | } 17 | -------------------------------------------------------------------------------- /tore/modeling_fm/bert/bert-base-uncased/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "architectures": [ 3 | "BertForMaskedLM" 4 | ], 5 | "attention_probs_dropout_prob": 0.1, 6 | "hidden_act": "gelu", 7 | "hidden_dropout_prob": 0.1, 8 | "hidden_size": 768, 9 | "initializer_range": 0.02, 10 | "intermediate_size": 3072, 11 | "max_position_embeddings": 512, 12 | "num_attention_heads": 12, 13 | "num_hidden_layers": 12, 14 | "type_vocab_size": 2, 15 | "vocab_size": 30522 16 | } 17 | -------------------------------------------------------------------------------- /tore/modeling_m/bert/bert-base-uncased/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "architectures": [ 3 | "BertForMaskedLM" 4 | ], 5 | "attention_probs_dropout_prob": 0.1, 6 | "hidden_act": "gelu", 7 | "hidden_dropout_prob": 0.1, 8 | "hidden_size": 768, 9 | "initializer_range": 0.02, 10 | "intermediate_size": 3072, 11 | "max_position_embeddings": 512, 12 | "num_attention_heads": 12, 13 | "num_hidden_layers": 12, 14 | "type_vocab_size": 2, 15 | "vocab_size": 30522 16 | } 17 | -------------------------------------------------------------------------------- /docs/CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Microsoft Open Source Code of Conduct 2 | 3 | This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). 
4 | 5 | Resources: 6 | 7 | - [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) 8 | - [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) 9 | - Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns 10 | -------------------------------------------------------------------------------- /eval_tore_fm.sh: -------------------------------------------------------------------------------- 1 | python setup.py build develop 2 | CUDA_VISIBLE_DEVICES=0,1,2,3 \ 3 | python -m torch.distributed.launch --nproc_per_node=4 --master_port 47779 \ 4 | tore/tools/run_tore_fastmetro_bodymesh.py \ 5 | --run_eval_only \ 6 | --val_yaml /code/posemae/MeshGraphormer_data/datasets/human3.6m/valid.protocol2.yaml \ 7 | --num_workers 4 \ 8 | --per_gpu_eval_batch_size 16 \ 9 | --arch hrnet-w64 \ 10 | --output_dir eval/h64_gtr_itp08_h36m/ \ 11 | --keep_ratio 0.8 \ 12 | --resume_checkpoint checkpoints/h64_gtr_itp0.8_36.4.bin 13 | -------------------------------------------------------------------------------- /tore/modeling_m/bert/__init__.py: -------------------------------------------------------------------------------- 1 | __version__ = "1.0.0" 2 | 3 | from .modeling_bert import (BertConfig, BertModel, 4 | load_tf_weights_in_bert, BERT_PRETRAINED_MODEL_ARCHIVE_MAP, 5 | BERT_PRETRAINED_CONFIG_ARCHIVE_MAP) 6 | 7 | from .modeling_metro import (METRO, METRO_Encoder, METRO_Hand_Network, METRO_Body_Network) 8 | 9 | from .modeling_utils import (WEIGHTS_NAME, CONFIG_NAME, TF_WEIGHTS_NAME, 10 | PretrainedConfig, PreTrainedModel, prune_layer, Conv1D) 11 | 12 | from .file_utils import (PYTORCH_PRETRAINED_BERT_CACHE, cached_path) 13 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/tests/conftest.py: -------------------------------------------------------------------------------- 1 | # content of conftest.py 2 | 3 | import pytest 4 | 5 | 6 | def pytest_addoption(parser): 7 | parser.addoption( 8 | "--runslow", action="store_true", default=False, help="run slow tests" 9 | ) 10 | 11 | 12 | def pytest_collection_modifyitems(config, items): 13 | if config.getoption("--runslow"): 14 | # --runslow given in cli: do not skip slow tests 15 | return 16 | skip_slow = pytest.mark.skip(reason="need --runslow option to run") 17 | for item in items: 18 | if "slow" in item.keywords: 19 | item.add_marker(skip_slow) 20 | -------------------------------------------------------------------------------- /transformers/docs/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | SOURCEDIR = source 8 | BUILDDIR = _build 9 | 10 | # Put it first so that "make" without argument is like "make help". 11 | help: 12 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 13 | 14 | .PHONY: help Makefile 15 | 16 | # Catch-all target: route all unknown targets to Sphinx using the new 17 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 
18 | %: Makefile 19 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) -------------------------------------------------------------------------------- /eval_tore_m.sh: -------------------------------------------------------------------------------- 1 | python setup.py build develop 2 | python -m torch.distributed.launch --nproc_per_node=8 --master_port 47779 \ 3 | tore/tools/run_tore_metro_bodymesh.py \ 4 | --run_eval_only \ 5 | --val_yaml /code/posemae/MeshGraphormer_data/datasets/human3.6m/valid.protocol2.yaml \ 6 | --num_workers 4 \ 7 | --per_gpu_eval_batch_size 32 \ 8 | --arch hrnet-w64 \ 9 | --output_dir eval/metro_h64_gtr_h36m/ \ 10 | --resume_checkpoint checkpoints/metro_h64_gtr_37.1.bin \ 11 | --num_hidden_layers 4 \ 12 | --num_attention_heads 4 \ 13 | --input_feat_dim 2051,512,128 \ 14 | --hidden_feat_dim 1024,256,128 15 | 16 | -------------------------------------------------------------------------------- /transformers/docs/requirements.txt: -------------------------------------------------------------------------------- 1 | alabaster==0.7.12 2 | Babel==2.7.0 3 | certifi==2019.6.16 4 | chardet==3.0.4 5 | commonmark==0.9.0 6 | docutils==0.14 7 | future==0.17.1 8 | idna==2.8 9 | imagesize==1.1.0 10 | Jinja2==2.10.1 11 | MarkupSafe==1.1.1 12 | packaging==19.0 13 | Pygments==2.4.2 14 | pyparsing==2.4.0 15 | pytz==2019.1 16 | recommonmark==0.5.0 17 | requests==2.22.0 18 | six==1.12.0 19 | snowballstemmer==1.9.0 20 | Sphinx==2.1.2 21 | sphinx-rtd-theme==0.4.3 22 | sphinxcontrib-applehelp==1.0.1 23 | sphinxcontrib-devhelp==1.0.1 24 | sphinxcontrib-htmlhelp==1.0.2 25 | sphinxcontrib-jsmath==1.0.1 26 | sphinxcontrib-qthelp==1.0.2 27 | sphinxcontrib-serializinghtml==1.1.3 28 | urllib3==1.25.3 29 | -------------------------------------------------------------------------------- /tore/modeling/bert/__init__.py: -------------------------------------------------------------------------------- 1 | __version__ = "1.0.0" 2 | 3 | from .modeling_bert import (BertConfig, BertModel, 4 | load_tf_weights_in_bert, BERT_PRETRAINED_MODEL_ARCHIVE_MAP, 5 | BERT_PRETRAINED_CONFIG_ARCHIVE_MAP) 6 | 7 | # from .modeling_metro import (METRO, METRO_Encoder, METRO_Hand_Network, METRO_Body_Network) 8 | from .modeling_metro import (FastMETRO_Body_Network, ) 9 | 10 | 11 | from .modeling_utils import (WEIGHTS_NAME, CONFIG_NAME, TF_WEIGHTS_NAME, 12 | PretrainedConfig, PreTrainedModel, prune_layer, Conv1D) 13 | 14 | from .file_utils import (PYTORCH_PRETRAINED_BERT_CACHE, cached_path) 15 | -------------------------------------------------------------------------------- /tore/modeling_fm/bert/__init__.py: -------------------------------------------------------------------------------- 1 | __version__ = "1.0.0" 2 | 3 | from .modeling_bert import (BertConfig, BertModel, 4 | load_tf_weights_in_bert, BERT_PRETRAINED_MODEL_ARCHIVE_MAP, 5 | BERT_PRETRAINED_CONFIG_ARCHIVE_MAP) 6 | 7 | # from .modeling_metro import (METRO, METRO_Encoder, METRO_Hand_Network, METRO_Body_Network) 8 | from .modeling_metro import (FastMETRO_Body_Network, ) 9 | 10 | 11 | from .modeling_utils import (WEIGHTS_NAME, CONFIG_NAME, TF_WEIGHTS_NAME, 12 | PretrainedConfig, PreTrainedModel, prune_layer, Conv1D) 13 | 14 | from .file_utils import (PYTORCH_PRETRAINED_BERT_CACHE, cached_path) 15 | -------------------------------------------------------------------------------- /transformers/docs/source/model_doc/transformerxl.rst: -------------------------------------------------------------------------------- 1 | Transformer XL 2 | 
---------------------------------------------------- 3 | 4 | 5 | ``TransfoXLConfig`` 6 | ~~~~~~~~~~~~~~~~~~~~~ 7 | 8 | .. autoclass:: pytorch_transformers.TransfoXLConfig 9 | :members: 10 | 11 | 12 | ``TransfoXLTokenizer`` 13 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ 14 | 15 | .. autoclass:: pytorch_transformers.TransfoXLTokenizer 16 | :members: 17 | 18 | 19 | ``TransfoXLModel`` 20 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ 21 | 22 | .. autoclass:: pytorch_transformers.TransfoXLModel 23 | :members: 24 | 25 | 26 | ``TransfoXLLMHeadModel`` 27 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 28 | 29 | .. autoclass:: pytorch_transformers.TransfoXLLMHeadModel 30 | :members: 31 | -------------------------------------------------------------------------------- /azure-pipelines.yml: -------------------------------------------------------------------------------- 1 | # Python package 2 | # Create and test a Python package on multiple Python versions. 3 | # Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more: 4 | # https://docs.microsoft.com/azure/devops/pipelines/languages/python 5 | 6 | trigger: 7 | - main 8 | 9 | pool: 10 | vmImage: ubuntu-latest 11 | strategy: 12 | matrix: 13 | Python36: 14 | python.version: '3.6' 15 | 16 | steps: 17 | - task: UsePythonVersion@0 18 | inputs: 19 | versionSpec: '$(python.version)' 20 | displayName: 'Use Python $(python.version)' 21 | 22 | - script: | 23 | python -m pip install --upgrade pip 24 | pip install -r requirements.txt 25 | displayName: 'Install dependencies' 26 | 27 | -------------------------------------------------------------------------------- /train_tore_m.sh: -------------------------------------------------------------------------------- 1 | python setup.py build develop 2 | python -m torch.distributed.launch --nproc_per_node=8 \ 3 | tore/tools/run_tore_m_bodymesh.py \ 4 | --train_yaml /code/posemae/MeshGraphormer_data/datasets/Tax-H36m-coco40k-Muco-UP-Mpii/train.yaml \ 5 | --val_yaml /code/posemae/MeshGraphormer_data/datasets/human3.6m/valid.protocol2.yaml \ 6 | --arch resnet50 \ 7 | --num_workers 4 \ 8 | --per_gpu_train_batch_size 32 \ 9 | --per_gpu_eval_batch_size 32 \ 10 | --num_hidden_layers 4 \ 11 | --num_attention_heads 4 \ 12 | --lr 1e-4 \ 13 | --num_train_epochs 200 \ 14 | --input_feat_dim 2051,512,128 \ 15 | --hidden_feat_dim 1024,256,128 \ 16 | --output_dir output/metro_r50_gtr_test/ -------------------------------------------------------------------------------- /transformers/.github/stale.yml: -------------------------------------------------------------------------------- 1 | # Number of days of inactivity before an issue becomes stale 2 | daysUntilStale: 60 3 | # Number of days of inactivity before a stale issue is closed 4 | daysUntilClose: 7 5 | # Issues with these labels will never be considered stale 6 | exemptLabels: 7 | - pinned 8 | - security 9 | # Label to use when marking an issue as stale 10 | staleLabel: wontfix 11 | # Comment to post when marking an issue as stale. Set to `false` to disable 12 | markComment: > 13 | This issue has been automatically marked as stale because it has not had 14 | recent activity. It will be closed if no further activity occurs. Thank you 15 | for your contributions. 16 | # Comment to post when closing a stale issue. 
Set to `false` to disable 17 | closeComment: false -------------------------------------------------------------------------------- /transformers/docs/source/model_doc/gpt2.rst: -------------------------------------------------------------------------------- 1 | OpenAI GPT2 2 | ---------------------------------------------------- 3 | 4 | ``GPT2Config`` 5 | ~~~~~~~~~~~~~~~~~~~~~ 6 | 7 | .. autoclass:: pytorch_transformers.GPT2Config 8 | :members: 9 | 10 | 11 | ``GPT2Tokenizer`` 12 | ~~~~~~~~~~~~~~~~~~~~~ 13 | 14 | .. autoclass:: pytorch_transformers.GPT2Tokenizer 15 | :members: 16 | 17 | 18 | ``GPT2Model`` 19 | ~~~~~~~~~~~~~~~~~~~~~ 20 | 21 | .. autoclass:: pytorch_transformers.GPT2Model 22 | :members: 23 | 24 | 25 | ``GPT2LMHeadModel`` 26 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 27 | 28 | .. autoclass:: pytorch_transformers.GPT2LMHeadModel 29 | :members: 30 | 31 | 32 | ``GPT2DoubleHeadsModel`` 33 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 34 | 35 | .. autoclass:: pytorch_transformers.GPT2DoubleHeadsModel 36 | :members: 37 | -------------------------------------------------------------------------------- /train_tore_fm.sh: -------------------------------------------------------------------------------- 1 | python setup.py build develop 2 | CUDA_VISIBLE_DEVICES=0,1,2,3 \ 3 | python -m torch.distributed.launch --nproc_per_node=4 --master_port=44444 \ 4 | tore/tools/run_tore_fm_bodymesh.py \ 5 | --train_yaml /code/posemae/MeshGraphormer_data/datasets/Tax-H36m-coco40k-Muco-UP-Mpii/train.yaml \ 6 | --val_yaml /code/posemae/MeshGraphormer_data/datasets/human3.6m/valid.protocol2.yaml \ 7 | --num_workers 4 \ 8 | --per_gpu_train_batch_size 16 \ 9 | --per_gpu_eval_batch_size 16 \ 10 | --lr 1e-4 \ 11 | --arch efficientnet-b0 \ 12 | --num_train_epochs 60 \ 13 | --output_dir output/output_eb0_gtr_itp0.8_test/ \ 14 | --keep_ratio 0.8 \ 15 | --model_name 'FastMETRO_L' \ 16 | --itp_loss_weight 1e-3 \ 17 | --edge_and_normal_vector_loss "false" 18 | -------------------------------------------------------------------------------- /manopth/examples/manopth_mindemo.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from manopth.manolayer import ManoLayer 3 | from manopth import demo 4 | 5 | batch_size = 10 6 | # Select number of principal components for pose space 7 | ncomps = 6 8 | 9 | # Initialize MANO layer 10 | mano_layer = ManoLayer( 11 | mano_root='mano/models', use_pca=True, ncomps=ncomps, flat_hand_mean=False) 12 | 13 | # Generate random shape parameters 14 | random_shape = torch.rand(batch_size, 10) 15 | # Generate random pose parameters, including 3 values for global axis-angle rotation 16 | random_pose = torch.rand(batch_size, ncomps + 3) 17 | 18 | # Forward pass through MANO layer 19 | hand_verts, hand_joints = mano_layer(random_pose, random_shape) 20 | demo.display_hand({ 21 | 'verts': hand_verts, 22 | 'joints': hand_joints 23 | }, 24 | mano_faces=mano_layer.th_faces) 25 | -------------------------------------------------------------------------------- /manopth/manopth/rotproj.py: -------------------------------------------------------------------------------- 1 | import torch 2 | 3 | 4 | def batch_rotprojs(batches_rotmats): 5 | proj_rotmats = [] 6 | for batch_idx, batch_rotmats in enumerate(batches_rotmats): 7 | proj_batch_rotmats = [] 8 | for rot_idx, rotmat in enumerate(batch_rotmats): 9 | # GPU implementation of svd is VERY slow 10 | # ~ 2 10^-3 per hit vs 5 10^-5 on cpu 11 | U, S, V = rotmat.cpu().svd() 12 | rotmat = torch.matmul(U, V.transpose(0, 1)) 13 | 
orth_det = rotmat.det() 14 | # Remove reflection 15 | if orth_det < 0: 16 | rotmat[:, 2] = -1 * rotmat[:, 2] 17 | 18 | rotmat = rotmat.cuda() 19 | proj_batch_rotmats.append(rotmat) 20 | proj_rotmats.append(torch.stack(proj_batch_rotmats)) 21 | return torch.stack(proj_rotmats) 22 | -------------------------------------------------------------------------------- /transformers/hubconf.py: -------------------------------------------------------------------------------- 1 | dependencies = ['torch', 'tqdm', 'boto3', 'requests', 'regex'] 2 | 3 | from hubconfs.bert_hubconf import ( 4 | bertTokenizer, 5 | bertModel, 6 | bertForNextSentencePrediction, 7 | bertForPreTraining, 8 | bertForMaskedLM, 9 | bertForSequenceClassification, 10 | bertForMultipleChoice, 11 | bertForQuestionAnswering, 12 | bertForTokenClassification 13 | ) 14 | from hubconfs.gpt_hubconf import ( 15 | openAIGPTTokenizer, 16 | openAIGPTModel, 17 | openAIGPTLMHeadModel, 18 | openAIGPTDoubleHeadsModel 19 | ) 20 | from hubconfs.gpt2_hubconf import ( 21 | gpt2Tokenizer, 22 | gpt2Model, 23 | gpt2LMHeadModel, 24 | gpt2DoubleHeadsModel 25 | ) 26 | from hubconfs.transformer_xl_hubconf import ( 27 | transformerXLTokenizer, 28 | transformerXLModel, 29 | transformerXLLMHeadModel 30 | ) 31 | -------------------------------------------------------------------------------- /transformers/docs/source/model_doc/gpt.rst: -------------------------------------------------------------------------------- 1 | OpenAI GPT 2 | ---------------------------------------------------- 3 | 4 | ``OpenAIGPTConfig`` 5 | ~~~~~~~~~~~~~~~~~~~~~ 6 | 7 | .. autoclass:: pytorch_transformers.OpenAIGPTConfig 8 | :members: 9 | 10 | 11 | ``OpenAIGPTTokenizer`` 12 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ 13 | 14 | .. autoclass:: pytorch_transformers.OpenAIGPTTokenizer 15 | :members: 16 | 17 | 18 | ``OpenAIGPTModel`` 19 | ~~~~~~~~~~~~~~~~~~~~~~~~~ 20 | 21 | .. autoclass:: pytorch_transformers.OpenAIGPTModel 22 | :members: 23 | 24 | 25 | ``OpenAIGPTLMHeadModel`` 26 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 27 | 28 | .. autoclass:: pytorch_transformers.OpenAIGPTLMHeadModel 29 | :members: 30 | 31 | 32 | ``OpenAIGPTDoubleHeadsModel`` 33 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 34 | 35 | .. autoclass:: pytorch_transformers.OpenAIGPTDoubleHeadsModel 36 | :members: 37 | -------------------------------------------------------------------------------- /docs/CONTRIBUTE.md: -------------------------------------------------------------------------------- 1 | # Contributing 2 | 3 | This project welcomes contributions and suggestions. Most contributions require you to agree to a 4 | Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us 5 | the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. 6 | 7 | When you submit a pull request, a CLA bot will automatically determine whether you need to provide 8 | a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions 9 | provided by the bot. You will only need to do this once across all repos using our CLA. 10 | 11 | This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). 12 | For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or 13 | contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. 
-------------------------------------------------------------------------------- /transformers/docs/source/model_doc/xlm.rst: -------------------------------------------------------------------------------- 1 | XLM 2 | ---------------------------------------------------- 3 | 4 | ``XLMConfig`` 5 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 6 | 7 | .. autoclass:: pytorch_transformers.XLMConfig 8 | :members: 9 | 10 | ``XLMTokenizer`` 11 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 12 | 13 | .. autoclass:: pytorch_transformers.XLMTokenizer 14 | :members: 15 | 16 | ``XLMModel`` 17 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 18 | 19 | .. autoclass:: pytorch_transformers.XLMModel 20 | :members: 21 | 22 | 23 | ``XLMWithLMHeadModel`` 24 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 25 | 26 | .. autoclass:: pytorch_transformers.XLMWithLMHeadModel 27 | :members: 28 | 29 | 30 | ``XLMForSequenceClassification`` 31 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 32 | 33 | .. autoclass:: pytorch_transformers.XLMForSequenceClassification 34 | :members: 35 | 36 | 37 | ``XLMForQuestionAnswering`` 38 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 39 | 40 | .. autoclass:: pytorch_transformers.XLMForQuestionAnswering 41 | :members: 42 | -------------------------------------------------------------------------------- /transformers/docs/source/model_doc/xlnet.rst: -------------------------------------------------------------------------------- 1 | XLNet 2 | ---------------------------------------------------- 3 | 4 | ``XLNetConfig`` 5 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 6 | 7 | .. autoclass:: pytorch_transformers.XLNetConfig 8 | :members: 9 | 10 | 11 | ``XLNetTokenizer`` 12 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 13 | 14 | .. autoclass:: pytorch_transformers.XLNetTokenizer 15 | :members: 16 | 17 | 18 | ``XLNetModel`` 19 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 20 | 21 | .. autoclass:: pytorch_transformers.XLNetModel 22 | :members: 23 | 24 | 25 | ``XLNetLMHeadModel`` 26 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 27 | 28 | .. autoclass:: pytorch_transformers.XLNetLMHeadModel 29 | :members: 30 | 31 | 32 | ``XLNetForSequenceClassification`` 33 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 34 | 35 | .. autoclass:: pytorch_transformers.XLNetForSequenceClassification 36 | :members: 37 | 38 | 39 | ``XLNetForQuestionAnswering`` 40 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 41 | 42 | .. autoclass:: pytorch_transformers.XLNetForQuestionAnswering 43 | :members: 44 | -------------------------------------------------------------------------------- /tore/utils/metric_logger.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) Microsoft Corporation. 3 | Licensed under the MIT license. 4 | 5 | Basic logger. 
It computes and stores the average and current value.
6 | """
7 |
8 | class AverageMeter(object):
9 |
10 | def __init__(self):
11 | self.reset()
12 |
13 | def reset(self):
14 | self.val = 0
15 | self.avg = 0
16 | self.sum = 0
17 | self.count = 0
18 |
19 | def update(self, val, n=1):
20 | self.val = val
21 | self.sum += val * n
22 | self.count += n
23 | self.avg = self.sum / self.count
24 |
25 |
26 |
27 | class EvalMetricsLogger(object):
28 |
29 | def __init__(self):
30 | self.reset()
31 |
32 | def reset(self):
33 | # define an upper-bound performance (worst case)
34 | # values correspond to 100 mm, stored in meters
35 | self.PAmPJPE = 100.0/1000.0
36 | self.mPJPE = 100.0/1000.0
37 | self.mPVE = 100.0/1000.0
38 |
39 | self.epoch = 0
40 |
41 | def update(self, mPVE, mPJPE, PAmPJPE, epoch):
42 | self.PAmPJPE = PAmPJPE
43 | self.mPJPE = mPJPE
44 | self.mPVE = mPVE
45 | self.epoch = epoch
46 |
--------------------------------------------------------------------------------
/docs/INSTALL.md:
--------------------------------------------------------------------------------
1 | ## Installation
2 |
3 | Our codebase is developed on Ubuntu 16.04 with NVIDIA GPU cards.
4 |
5 | ### Requirements
6 | - Python 3.7
7 | - PyTorch 1.4
8 | - torchvision 0.5.0
9 | - CUDA 10.1
10 |
11 | ### Setup with Conda
12 |
13 | We suggest creating a new conda environment and installing all the relevant dependencies.
14 |
15 | ```bash
16 | # Create a new environment
17 | conda create --name metro python=3.7
18 | conda activate metro
19 |
20 | # Install PyTorch
21 | conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
22 |
23 | export INSTALL_DIR=$PWD
24 |
25 | # Install apex
26 | cd $INSTALL_DIR
27 | git clone https://github.com/NVIDIA/apex.git
28 | cd apex
29 | python setup.py install --cuda_ext --cpp_ext
30 |
31 | # Install OpenDR
32 | pip install opendr matplotlib
33 |
34 | # Install METRO
35 | cd $INSTALL_DIR
36 | git clone --recursive https://github.com/microsoft/MeshTransformer.git
37 | cd MeshTransformer
38 | python setup.py build develop
39 |
40 | # Install requirements
41 | pip install -r requirements.txt
42 |
43 | # Install manopth
44 | cd $INSTALL_DIR
45 | cd MeshTransformer
46 | pip install ./manopth/.
47 |
48 | unset INSTALL_DIR
49 | ```
50 |
51 |
52 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) Microsoft Corporation.
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE 22 | -------------------------------------------------------------------------------- /transformers/.circleci/config.yml: -------------------------------------------------------------------------------- 1 | version: 2 2 | jobs: 3 | build_py3: 4 | working_directory: ~/pytorch-transformers 5 | docker: 6 | - image: circleci/python:3.5 7 | resource_class: large 8 | parallelism: 4 9 | steps: 10 | - checkout 11 | - run: sudo pip install --progress-bar off . 12 | - run: sudo pip install pytest codecov pytest-cov 13 | - run: sudo pip install tensorboardX scikit-learn 14 | - run: python -m pytest -sv ./pytorch_transformers/tests/ --cov 15 | - run: python -m pytest -sv ./examples/ 16 | - run: codecov 17 | build_py2: 18 | working_directory: ~/pytorch-transformers 19 | resource_class: large 20 | parallelism: 4 21 | docker: 22 | - image: circleci/python:2.7 23 | steps: 24 | - checkout 25 | - run: sudo pip install --progress-bar off . 26 | - run: sudo pip install pytest codecov pytest-cov 27 | - run: python -m pytest -sv ./pytorch_transformers/tests/ --cov 28 | - run: codecov 29 | workflows: 30 | version: 2 31 | build_and_test: 32 | jobs: 33 | - build_py3 34 | - build_py2 -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | from __future__ import print_function 4 | import os 5 | import sys 6 | import re 7 | import os.path as op 8 | from setuptools import find_packages, setup 9 | 10 | # change directory to this module path 11 | try: 12 | this_file = __file__ 13 | except NameError: 14 | this_file = sys.argv[0] 15 | this_file = os.path.abspath(this_file) 16 | if op.dirname(this_file): 17 | os.chdir(op.dirname(this_file)) 18 | script_dir = os.getcwd() 19 | 20 | def readme(fname): 21 | """Read text out of a file in the same directory as setup.py. 22 | """ 23 | return open(op.join(script_dir, fname)).read() 24 | 25 | 26 | def find_version(fname): 27 | version_file = readme(fname) 28 | version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", 29 | version_file, re.M) 30 | if version_match: 31 | return version_match.group(1) 32 | raise RuntimeError("Unable to find version string.") 33 | 34 | 35 | setup( 36 | name="tore", 37 | version=find_version("tore/__init__.py"), 38 | description="TORE", 39 | long_description=readme('README.md'), 40 | packages=find_packages(), 41 | classifiers=[ 42 | 'Intended Audience :: Developers', 43 | "Programming Language :: Python", 44 | 'Topic :: Software Development', 45 | ] 46 | ) -------------------------------------------------------------------------------- /transformers/examples/tests_samples/MRPC/dev.tsv: -------------------------------------------------------------------------------- 1 | Quality #1 ID #2 ID #1 String #2 String 2 | 1 1355540 1355592 He said the foodservice pie business doesn 't fit the company 's long-term growth strategy . " The foodservice pie business does not fit our long-term growth strategy . 3 | 0 2029631 2029565 Magnarelli said Racicot hated the Iraqi regime and looked forward to using his long years of training in the war . 
His wife said he was " 100 percent behind George Bush " and looked forward to using his years of training in the war . 4 | 0 487993 487952 The dollar was at 116.92 yen against the yen , flat on the session , and at 1.2891 against the Swiss franc , also flat . The dollar was at 116.78 yen JPY = , virtually flat on the session , and at 1.2871 against the Swiss franc CHF = , down 0.1 percent . 5 | 1 1989515 1989458 The AFL-CIO is waiting until October to decide if it will endorse a candidate . The AFL-CIO announced Wednesday that it will decide in October whether to endorse a candidate before the primaries . 6 | 0 1783137 1782659 No dates have been set for the civil or the criminal trial . No dates have been set for the criminal or civil cases , but Shanley has pleaded not guilty . 7 | 1 3039165 3039036 Wal-Mart said it would check all of its million-plus domestic workers to ensure they were legally employed . It has also said it would review all of its domestic employees more than 1 million to ensure they have legal status . 8 | -------------------------------------------------------------------------------- /transformers/examples/tests_samples/MRPC/train.tsv: -------------------------------------------------------------------------------- 1 | Quality #1 ID #2 ID #1 String #2 String 2 | 1 1355540 1355592 He said the foodservice pie business doesn 't fit the company 's long-term growth strategy . " The foodservice pie business does not fit our long-term growth strategy . 3 | 0 2029631 2029565 Magnarelli said Racicot hated the Iraqi regime and looked forward to using his long years of training in the war . His wife said he was " 100 percent behind George Bush " and looked forward to using his years of training in the war . 4 | 0 487993 487952 The dollar was at 116.92 yen against the yen , flat on the session , and at 1.2891 against the Swiss franc , also flat . The dollar was at 116.78 yen JPY = , virtually flat on the session , and at 1.2871 against the Swiss franc CHF = , down 0.1 percent . 5 | 1 1989515 1989458 The AFL-CIO is waiting until October to decide if it will endorse a candidate . The AFL-CIO announced Wednesday that it will decide in October whether to endorse a candidate before the primaries . 6 | 0 1783137 1782659 No dates have been set for the civil or the criminal trial . No dates have been set for the criminal or civil cases , but Shanley has pleaded not guilty . 7 | 1 3039165 3039036 Wal-Mart said it would check all of its million-plus domestic workers to ensure they were legally employed . It has also said it would review all of its domestic employees more than 1 million to ensure they have legal status . 8 | -------------------------------------------------------------------------------- /transformers/docs/source/bertology.rst: -------------------------------------------------------------------------------- 1 | BERTology 2 | --------- 3 | 4 | There is a growing field of study concerned with investigating the inner working of large-scale transformers like BERT (that some call "BERTology"). Some good examples of this field are: 5 | 6 | 7 | * BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick: https://arxiv.org/abs/1905.05950 8 | * Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650 9 | * What Does BERT Look At? An Analysis of BERT's Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. 
Manning: https://arxiv.org/abs/1906.04341
10 |
11 | In order to help this new field develop, we have included a few additional features in the BERT/GPT/GPT-2 models to help people access the inner representations, mainly adapted from the great work of Paul Michel (https://arxiv.org/abs/1905.10650):
12 |
13 |
14 | * accessing all the hidden-states of BERT/GPT/GPT-2,
15 | * accessing all the attention weights for each head of BERT/GPT/GPT-2,
16 | * retrieving head output values and gradients to be able to compute head importance scores and prune heads as explained in https://arxiv.org/abs/1905.10650.
17 |
18 | To help you understand and use these features, we have added a specific example script: `bertology.py `_ which extracts information from and prunes a model pre-trained on GLUE.
19 |
--------------------------------------------------------------------------------
/manopth/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import find_packages, setup
2 | import warnings
3 |
4 | DEPENDENCY_PACKAGE_NAMES = ["matplotlib", "torch", "tqdm", "numpy", "cv2",
5 | "chumpy"]
6 |
7 |
8 | def check_dependencies():
9 | missing_dependencies = []
10 | for package_name in DEPENDENCY_PACKAGE_NAMES:
11 | try:
12 | __import__(package_name)
13 | except ImportError:
14 | missing_dependencies.append(package_name)
15 |
16 | if missing_dependencies:
17 | warnings.warn(
18 | 'Missing dependencies: {}. We recommend you follow '
19 | 'the installation instructions at '
20 | 'https://github.com/hassony2/manopth#installation'.format(
21 | missing_dependencies))
22 |
23 |
24 | with open("README.md", "r") as fh:
25 | long_description = fh.read()
26 |
27 | check_dependencies()
28 |
29 | setup(
30 | name="manopth",
31 | version="0.0.1",
32 | author="Yana Hasson",
33 | author_email="yana.hasson.inria@gmail.com",
34 | packages=find_packages(exclude=('tests',)),
35 | python_requires=">=3.5.0",
36 | description="PyTorch mano layer",
37 | long_description=long_description,
38 | long_description_content_type="text/markdown",
39 | url="https://github.com/hassony2/manopth",
40 | classifiers=[
41 | "Programming Language :: Python :: 3",
42 | "License :: OSI Approved :: GNU GENERAL PUBLIC LICENSE",
43 | "Operating System :: OS Independent",
44 | ],
45 | )
46 |
--------------------------------------------------------------------------------
/manopth/manopth/tensutils.py:
--------------------------------------------------------------------------------
1 | import torch
2 |
3 | from manopth import rodrigues_layer
4 |
5 |
6 | def th_posemap_axisang(pose_vectors):
7 | rot_nb = int(pose_vectors.shape[1] / 3)
8 | pose_vec_reshaped = pose_vectors.contiguous().view(-1, 3)
9 | rot_mats = rodrigues_layer.batch_rodrigues(pose_vec_reshaped)
10 | rot_mats = rot_mats.view(pose_vectors.shape[0], rot_nb * 9)
11 | pose_maps = subtract_flat_id(rot_mats)
12 | return pose_maps, rot_mats
13 |
14 |
15 | def th_with_zeros(tensor):
16 | batch_size = tensor.shape[0]
17 | padding = tensor.new([0.0, 0.0, 0.0, 1.0])
18 | padding.requires_grad = False
19 |
20 | concat_list = [tensor, padding.view(1, 1, 4).repeat(batch_size, 1, 1)]
21 | cat_res = torch.cat(concat_list, 1)
22 | return cat_res
23 |
24 |
25 | def th_pack(tensor):
26 | batch_size = tensor.shape[0]
27 | padding = tensor.new_zeros((batch_size, 4, 3))
28 | padding.requires_grad = False
29 | pack_list = [padding, tensor]
30 | pack_res = torch.cat(pack_list, 2)
31 | return pack_res
32 |
33 |
34 | def subtract_flat_id(rot_mats):
35 | # Subtracts
identity as a flattened tensor 36 | rot_nb = int(rot_mats.shape[1] / 9) 37 | id_flat = torch.eye( 38 | 3, dtype=rot_mats.dtype, device=rot_mats.device).view(1, 9).repeat( 39 | rot_mats.shape[0], rot_nb) 40 | # id_flat.requires_grad = False 41 | results = rot_mats - id_flat 42 | return results 43 | 44 | 45 | def make_list(tensor): 46 | # type: (List[int]) -> List[int] 47 | return tensor 48 | -------------------------------------------------------------------------------- /transformers/docs/source/notebooks.rst: -------------------------------------------------------------------------------- 1 | Notebooks 2 | ================================================ 3 | 4 | We include `three Jupyter Notebooks `_ that can be used to check that the predictions of the PyTorch model are identical to the predictions of the original TensorFlow model. 5 | 6 | 7 | * 8 | The first notebook (\ `Comparing-TF-and-PT-models.ipynb `_\ ) extracts the hidden states of a full sequence on each layer of the TensorFlow and the PyTorch models and computes the standard deviation between them. In the given example, we get a standard deviation of 1.5e-7 to 9e-7 on the various hidden states of the models. 9 | 10 | * 11 | The second notebook (\ `Comparing-TF-and-PT-models-SQuAD.ipynb `_\ ) compares the loss computed by the TensorFlow and the PyTorch models for identical initialization of the fine-tuning layer of the ``BertForQuestionAnswering`` and computes the standard deviation between them. In the given example, we get a standard deviation of 2.5e-7 between the models. 12 | 13 | * 14 | The third notebook (\ `Comparing-TF-and-PT-models-MLM-NSP.ipynb `_\ ) compares the predictions computed by the TensorFlow and the PyTorch models for masked language modeling using the pre-trained masked language model. 15 | 16 | Please follow the instructions given in the notebooks to run and modify them. 17 | -------------------------------------------------------------------------------- /manopth/mano/webuser/posemapper.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Copyright 2017 Javier Romero, Dimitrios Tzionas, Michael J Black and the Max Planck Gesellschaft. All rights reserved. 3 | This software is provided for research purposes only. 4 | By using this software you agree to the terms of the MANO/SMPL+H Model license here http://mano.is.tue.mpg.de/license 5 | 6 | More information about MANO/SMPL+H is available at http://mano.is.tue.mpg.de. 7 | For comments or questions, please email us at: mano@tue.mpg.de 8 | 9 | 10 | About this file: 11 | ================ 12 | This file defines a wrapper for the loading functions of the MANO model. 13 | 14 | Modules included: 15 | - load_model: 16 | loads the MANO model from a given file location (i.e. a .pkl file location), 17 | or a dictionary object.
18 | 19 | ''' 20 | 21 | 22 | import chumpy as ch 23 | import numpy as np 24 | import cv2 25 | 26 | 27 | class Rodrigues(ch.Ch): 28 | dterms = 'rt' 29 | 30 | def compute_r(self): 31 | return cv2.Rodrigues(self.rt.r)[0] 32 | 33 | def compute_dr_wrt(self, wrt): 34 | if wrt is self.rt: 35 | return cv2.Rodrigues(self.rt.r)[1].T 36 | 37 | 38 | def lrotmin(p): 39 | if isinstance(p, np.ndarray): 40 | p = p.ravel()[3:] 41 | return np.concatenate( 42 | [(cv2.Rodrigues(np.array(pp))[0] - np.eye(3)).ravel() 43 | for pp in p.reshape((-1, 3))]).ravel() 44 | if p.ndim != 2 or p.shape[1] != 3: 45 | p = p.reshape((-1, 3)) 46 | p = p[1:] 47 | return ch.concatenate([(Rodrigues(pp) - ch.eye(3)).ravel() 48 | for pp in p]).ravel() 49 | 50 | 51 | def posemap(s): 52 | if s == 'lrotmin': 53 | return lrotmin 54 | else: 55 | raise Exception('Unknown posemapping: %s' % (str(s), )) 56 | -------------------------------------------------------------------------------- /transformers/docs/source/model_doc/bert.rst: -------------------------------------------------------------------------------- 1 | BERT 2 | ---------------------------------------------------- 3 | 4 | ``BertConfig`` 5 | ~~~~~~~~~~~~~~~~~~~~~ 6 | 7 | .. autoclass:: pytorch_transformers.BertConfig 8 | :members: 9 | 10 | 11 | ``BertTokenizer`` 12 | ~~~~~~~~~~~~~~~~~~~~~ 13 | 14 | .. autoclass:: pytorch_transformers.BertTokenizer 15 | :members: 16 | 17 | 18 | ``AdamW`` 19 | ~~~~~~~~~~~~~~~~ 20 | 21 | .. autoclass:: pytorch_transformers.AdamW 22 | :members: 23 | 24 | ``BertModel`` 25 | ~~~~~~~~~~~~~~~~~~~~ 26 | 27 | .. autoclass:: pytorch_transformers.BertModel 28 | :members: 29 | 30 | 31 | ``BertForPreTraining`` 32 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 33 | 34 | .. autoclass:: pytorch_transformers.BertForPreTraining 35 | :members: 36 | 37 | 38 | ``BertForMaskedLM`` 39 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ 40 | 41 | .. autoclass:: pytorch_transformers.BertForMaskedLM 42 | :members: 43 | 44 | 45 | ``BertForNextSentencePrediction`` 46 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 47 | 48 | .. autoclass:: pytorch_transformers.BertForNextSentencePrediction 49 | :members: 50 | 51 | 52 | ``BertForSequenceClassification`` 53 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 54 | 55 | .. autoclass:: pytorch_transformers.BertForSequenceClassification 56 | :members: 57 | 58 | 59 | ``BertForMultipleChoice`` 60 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 61 | 62 | .. autoclass:: pytorch_transformers.BertForMultipleChoice 63 | :members: 64 | 65 | 66 | ``BertForTokenClassification`` 67 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 68 | 69 | .. autoclass:: pytorch_transformers.BertForTokenClassification 70 | :members: 71 | 72 | 73 | ``BertForQuestionAnswering`` 74 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 75 | 76 | .. autoclass:: pytorch_transformers.BertForQuestionAnswering 77 | :members: 78 | 79 | -------------------------------------------------------------------------------- /transformers/docs/README.md: -------------------------------------------------------------------------------- 1 | # Generating the documentation 2 | 3 | To generate the documentation, you first have to build it. Several packages are necessary to build the doc, 4 | you can install them using: 5 | 6 | ```bash 7 | pip install -r requirements.txt 8 | ``` 9 | 10 | ## Packages installed 11 | 12 | Here's an overview of all the packages installed. If you ran the previous command installing all packages from 13 | `requirements.txt`, you do not need to run the following commands. 
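If you prefer a single command, the three packages described in the sections below can also be installed in one go (this is equivalent to running the individual commands that follow):

```bash
pip install sphinx sphinx_rtd_theme recommonmark
```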
14 | 15 | Building it requires the package `sphinx` that you can 16 | install using: 17 | 18 | ```bash 19 | pip install -U sphinx 20 | ``` 21 | 22 | You will also need the custom [theme](https://github.com/readthedocs/sphinx_rtd_theme) from 23 | [Read The Docs](https://readthedocs.org/). You can install it using the following command: 24 | 25 | ```bash 26 | pip install sphinx_rtd_theme 27 | ``` 28 | 29 | The third necessary package is the `recommonmark` package to accept Markdown as well as reStructuredText: 30 | 31 | ```bash 32 | pip install recommonmark 33 | ``` 34 | 35 | ## Building the documentation 36 | 37 | Once you have set up `sphinx`, you can build the documentation by running the following command in the `/docs` folder: 38 | 39 | ```bash 40 | make html 41 | ``` 42 | 43 | --- 44 | **NOTE** 45 | 46 | If you are adding/removing elements from the toc-tree or from any structural item, it is recommended to clean the build 47 | directory before rebuilding. Run the following command to clean and build: 48 | 49 | ```bash 50 | make clean && make html 51 | ``` 52 | 53 | --- 54 | 55 | It should build the static app that will be available under `/docs/_build/html`. 56 | 57 | ## Adding a new element to the tree (toc-tree) 58 | 59 | Accepted files are reStructuredText (.rst) and Markdown (.md). Create a file with its extension and put it 60 | in the source directory. You can then link it to the toc-tree by putting the filename without the extension. 61 | -------------------------------------------------------------------------------- /transformers/docs/source/installation.rst: -------------------------------------------------------------------------------- 1 | Installation 2 | ================================================ 3 | 4 | This repo was tested on Python 2.7 and 3.5+ (examples are tested only on Python 3.5+) and PyTorch 0.4.1/1.0.0. 5 | 6 | With pip 7 | ^^^^^^^^ 8 | 9 | PyTorch-Transformers can be installed with pip as follows: 10 | 11 | .. code-block:: bash 12 | 13 | pip install pytorch-transformers 14 | 15 | From source 16 | ^^^^^^^^^^^ 17 | 18 | Clone the repository and install locally: 19 | 20 | .. code-block:: bash 21 | 22 | git clone https://github.com/huggingface/pytorch-transformers.git 23 | cd pytorch-transformers 24 | pip install [--editable] . 25 | 26 | 27 | Tests 28 | ^^^^^ 29 | 30 | An extensive test suite is included for the library and the example scripts. Library tests can be found in the `tests folder `_ and examples tests in the `examples folder `_. 31 | 32 | These tests can be run using `pytest` (install pytest if needed with `pip install pytest`). 33 | 34 | You can run the tests from the root of the cloned repository with the commands: 35 | 36 | .. code-block:: bash 37 | 38 | python -m pytest -sv ./pytorch_transformers/tests/ 39 | python -m pytest -sv ./examples/ 40 | 41 | 42 | OpenAI GPT original tokenization workflow 43 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 44 | 45 | If you want to reproduce the original tokenization process of the ``OpenAI GPT`` paper, you will need to install ``ftfy`` (limit to version 4.4.3 if you are using Python 2) and ``SpaCy``: 46 | 47 | .. code-block:: bash 48 | 49 | pip install spacy ftfy==4.4.3 50 | python -m spacy download en 51 | 52 | If you don't install ``ftfy`` and ``SpaCy``\ , the ``OpenAI GPT`` tokenizer will default to tokenizing using BERT's ``BasicTokenizer`` followed by Byte-Pair Encoding (which should be fine for most usage, don't worry).
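As a quick sanity check of the installation, a minimal sketch along these lines can be run (assuming internet access, since the ``openai-gpt`` shortcut downloads the pre-trained vocabulary files on first use):

.. code-block:: python

    from pytorch_transformers import OpenAIGPTTokenizer

    # With ftfy and SpaCy installed this reproduces the original OpenAI GPT
    # tokenization; without them it uses the BasicTokenizer + BPE fallback.
    tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
    print(tokenizer.tokenize("Hello, how are you?"))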
53 | -------------------------------------------------------------------------------- /manopth/manopth/argutils.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import os 3 | import pickle 4 | import subprocess 5 | import sys 6 | 7 | 8 | def print_args(args): 9 | opts = vars(args) 10 | print('======= Options ========') 11 | for k, v in sorted(opts.items()): 12 | print('{}: {}'.format(k, v)) 13 | print('========================') 14 | 15 | 16 | def save_args(args, save_folder, opt_prefix='opt', verbose=True): 17 | opts = vars(args) 18 | # Create checkpoint folder 19 | if not os.path.exists(save_folder): 20 | os.makedirs(save_folder, exist_ok=True) 21 | 22 | # Save options 23 | opt_filename = '{}.txt'.format(opt_prefix) 24 | opt_path = os.path.join(save_folder, opt_filename) 25 | with open(opt_path, 'a') as opt_file: 26 | opt_file.write('====== Options ======\n') 27 | for k, v in sorted(opts.items()): 28 | opt_file.write( 29 | '{option}: {value}\n'.format(option=str(k), value=str(v))) 30 | opt_file.write('=====================\n') 31 | opt_file.write('launched {} at {}\n'.format( 32 | str(sys.argv[0]), str(datetime.datetime.now()))) 33 | 34 | # Add git info 35 | label = subprocess.check_output(["git", "describe", 36 | "--always"]).strip() 37 | if subprocess.call( 38 | ["git", "branch"], 39 | stderr=subprocess.STDOUT, 40 | stdout=open(os.devnull, 'w')) == 0: 41 | opt_file.write('=== Git info ====\n') 42 | opt_file.write('{}\n'.format(label)) 43 | commit = subprocess.check_output(['git', 'rev-parse', 'HEAD']) 44 | opt_file.write('commit : {}\n'.format(commit.strip())) 45 | 46 | opt_picklename = '{}.pkl'.format(opt_prefix) 47 | opt_picklepath = os.path.join(save_folder, opt_picklename) 48 | with open(opt_picklepath, 'wb') as opt_file: 49 | pickle.dump(opts, opt_file) 50 | if verbose: 51 | print('Saved options to {}'.format(opt_path)) 52 | -------------------------------------------------------------------------------- /tore/modeling/hrnet/config/models.py: -------------------------------------------------------------------------------- 1 | # ------------------------------------------------------------------------------ 2 | # Copyright (c) Microsoft 3 | # Licensed under the MIT License. 
4 | # Created by Bin Xiao (Bin.Xiao@microsoft.com) 5 | # Modified by Ke Sun (sunk@mail.ustc.edu.cn) 6 | # ------------------------------------------------------------------------------ 7 | 8 | from __future__ import absolute_import 9 | from __future__ import division 10 | from __future__ import print_function 11 | 12 | from yacs.config import CfgNode as CN 13 | 14 | # high_resolution_net related params for classification 15 | POSE_HIGH_RESOLUTION_NET = CN() 16 | POSE_HIGH_RESOLUTION_NET.PRETRAINED_LAYERS = ['*'] 17 | POSE_HIGH_RESOLUTION_NET.STEM_INPLANES = 64 18 | POSE_HIGH_RESOLUTION_NET.FINAL_CONV_KERNEL = 1 19 | POSE_HIGH_RESOLUTION_NET.WITH_HEAD = True 20 | 21 | POSE_HIGH_RESOLUTION_NET.STAGE2 = CN() 22 | POSE_HIGH_RESOLUTION_NET.STAGE2.NUM_MODULES = 1 23 | POSE_HIGH_RESOLUTION_NET.STAGE2.NUM_BRANCHES = 2 24 | POSE_HIGH_RESOLUTION_NET.STAGE2.NUM_BLOCKS = [4, 4] 25 | POSE_HIGH_RESOLUTION_NET.STAGE2.NUM_CHANNELS = [32, 64] 26 | POSE_HIGH_RESOLUTION_NET.STAGE2.BLOCK = 'BASIC' 27 | POSE_HIGH_RESOLUTION_NET.STAGE2.FUSE_METHOD = 'SUM' 28 | 29 | POSE_HIGH_RESOLUTION_NET.STAGE3 = CN() 30 | POSE_HIGH_RESOLUTION_NET.STAGE3.NUM_MODULES = 1 31 | POSE_HIGH_RESOLUTION_NET.STAGE3.NUM_BRANCHES = 3 32 | POSE_HIGH_RESOLUTION_NET.STAGE3.NUM_BLOCKS = [4, 4, 4] 33 | POSE_HIGH_RESOLUTION_NET.STAGE3.NUM_CHANNELS = [32, 64, 128] 34 | POSE_HIGH_RESOLUTION_NET.STAGE3.BLOCK = 'BASIC' 35 | POSE_HIGH_RESOLUTION_NET.STAGE3.FUSE_METHOD = 'SUM' 36 | 37 | POSE_HIGH_RESOLUTION_NET.STAGE4 = CN() 38 | POSE_HIGH_RESOLUTION_NET.STAGE4.NUM_MODULES = 1 39 | POSE_HIGH_RESOLUTION_NET.STAGE4.NUM_BRANCHES = 4 40 | POSE_HIGH_RESOLUTION_NET.STAGE4.NUM_BLOCKS = [4, 4, 4, 4] 41 | POSE_HIGH_RESOLUTION_NET.STAGE4.NUM_CHANNELS = [32, 64, 128, 256] 42 | POSE_HIGH_RESOLUTION_NET.STAGE4.BLOCK = 'BASIC' 43 | POSE_HIGH_RESOLUTION_NET.STAGE4.FUSE_METHOD = 'SUM' 44 | 45 | MODEL_EXTRAS = { 46 | 'cls_hrnet': POSE_HIGH_RESOLUTION_NET, 47 | } 48 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/tests/tokenization_utils_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 HuggingFace Inc. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License.
15 | from __future__ import absolute_import 16 | from __future__ import division 17 | from __future__ import print_function 18 | 19 | import unittest 20 | import six 21 | 22 | from pytorch_transformers import PreTrainedTokenizer 23 | from pytorch_transformers.tokenization_gpt2 import GPT2Tokenizer 24 | 25 | class TokenizerUtilsTest(unittest.TestCase): 26 | def check_tokenizer_from_pretrained(self, tokenizer_class): 27 | s3_models = list(tokenizer_class.max_model_input_sizes.keys()) 28 | for model_name in s3_models[:1]: 29 | tokenizer = tokenizer_class.from_pretrained(model_name) 30 | self.assertIsNotNone(tokenizer) 31 | self.assertIsInstance(tokenizer, tokenizer_class) 32 | self.assertIsInstance(tokenizer, PreTrainedTokenizer) 33 | 34 | for special_tok in tokenizer.all_special_tokens: 35 | if six.PY2: 36 | self.assertIsInstance(special_tok, unicode) 37 | else: 38 | self.assertIsInstance(special_tok, str) 39 | special_tok_id = tokenizer.convert_tokens_to_ids(special_tok) 40 | self.assertIsInstance(special_tok_id, int) 41 | 42 | def test_pretrained_tokenizers(self): 43 | self.check_tokenizer_from_pretrained(GPT2Tokenizer) 44 | 45 | if __name__ == "__main__": 46 | unittest.main() 47 | -------------------------------------------------------------------------------- /tore/utils/dataset_utils.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) Microsoft Corporation. 3 | Licensed under the MIT license. 4 | 5 | """ 6 | 7 | 8 | import os 9 | import os.path as op 10 | import numpy as np 11 | import base64 12 | import cv2 13 | import yaml 14 | from collections import OrderedDict 15 | 16 | 17 | def img_from_base64(imagestring): 18 | try: 19 | jpgbytestring = base64.b64decode(imagestring) 20 | nparr = np.frombuffer(jpgbytestring, np.uint8) 21 | r = cv2.imdecode(nparr, cv2.IMREAD_COLOR) 22 | return r 23 | except: 24 | return None 25 | 26 | 27 | def load_labelmap(labelmap_file): 28 | label_dict = None 29 | if labelmap_file is not None and op.isfile(labelmap_file): 30 | label_dict = OrderedDict() 31 | with open(labelmap_file, 'r') as fp: 32 | for line in fp: 33 | label = line.strip().split('\t')[0] 34 | if label in label_dict: 35 | raise ValueError("Duplicate label " + label + " in labelmap.") 36 | else: 37 | label_dict[label] = len(label_dict) 38 | return label_dict 39 | 40 | 41 | def load_shuffle_file(shuf_file): 42 | shuf_list = None 43 | if shuf_file is not None: 44 | with open(shuf_file, 'r') as fp: 45 | shuf_list = [] 46 | for i in fp: 47 | shuf_list.append(int(i.strip())) 48 | return shuf_list 49 | 50 | 51 | def load_box_shuffle_file(shuf_file): 52 | if shuf_file is not None: 53 | with open(shuf_file, 'r') as fp: 54 | img_shuf_list = [] 55 | box_shuf_list = [] 56 | for i in fp: 57 | idx = [int(_) for _ in i.strip().split('\t')] 58 | img_shuf_list.append(idx[0]) 59 | box_shuf_list.append(idx[1]) 60 | return [img_shuf_list, box_shuf_list] 61 | return None 62 | 63 | 64 | def load_from_yaml_file(file_name): 65 | with open(file_name, 'r') as fp: 66 | return yaml.load(fp, Loader=yaml.CLoader) 67 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/tests/modeling_gpt2_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 
6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | from __future__ import absolute_import 16 | from __future__ import division 17 | from __future__ import print_function 18 | 19 | import unittest 20 | import pytest 21 | 22 | 23 | from pytorch_transformers import (GPT2Config, GPT2Model, 24 | GPT2LMHeadModel, GPT2DoubleHeadsModel) 25 | 26 | from .modeling_common_test import CommonTestCases, ConfigTester 27 | 28 | class GPT2ModelTest(unittest.TestCase): 29 | 30 | def test_config(self): 31 | config_tester = ConfigTester(self, config_class=GPT2Config, n_embd=37) 32 | config_tester.run_common_tests() 33 | 34 | def test_model(self): 35 | model_tester = CommonTestCases.GPTModelTester(self, config_class=GPT2Config, base_model_class=GPT2Model, 36 | lm_head_model_class=GPT2LMHeadModel, 37 | double_head_model_class=GPT2DoubleHeadsModel) 38 | model_tester.run_common_tests(test_presents=True) 39 | 40 | @pytest.mark.slow 41 | def test_pretrained(self): 42 | model_tester = CommonTestCases.GPTModelTester(self, config_class=GPT2Config, base_model_class=GPT2Model, 43 | lm_head_model_class=GPT2LMHeadModel, 44 | double_head_model_class=GPT2DoubleHeadsModel) 45 | model_tester.run_slow_tests() 46 | 47 | if __name__ == "__main__": 48 | unittest.main() 49 | -------------------------------------------------------------------------------- /transformers/.gitignore: -------------------------------------------------------------------------------- 1 | # Initially taken from Github's Python gitignore file 2 | 3 | # Byte-compiled / optimized / DLL files 4 | __pycache__/ 5 | *.py[cod] 6 | *$py.class 7 | 8 | # C extensions 9 | *.so 10 | 11 | # Distribution / packaging 12 | .Python 13 | build/ 14 | develop-eggs/ 15 | dist/ 16 | downloads/ 17 | eggs/ 18 | .eggs/ 19 | lib/ 20 | lib64/ 21 | parts/ 22 | sdist/ 23 | var/ 24 | wheels/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | MANIFEST 29 | 30 | # PyInstaller 31 | # Usually these files are written by a python script from a template 32 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
33 | *.manifest 34 | *.spec 35 | 36 | # Installer logs 37 | pip-log.txt 38 | pip-delete-this-directory.txt 39 | 40 | # Unit test / coverage reports 41 | htmlcov/ 42 | .tox/ 43 | .nox/ 44 | .coverage 45 | .coverage.* 46 | .cache 47 | nosetests.xml 48 | coverage.xml 49 | *.cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | 53 | # Translations 54 | *.mo 55 | *.pot 56 | 57 | # Django stuff: 58 | *.log 59 | local_settings.py 60 | db.sqlite3 61 | 62 | # Flask stuff: 63 | instance/ 64 | .webassets-cache 65 | 66 | # Scrapy stuff: 67 | .scrapy 68 | 69 | # Sphinx documentation 70 | docs/_build/ 71 | 72 | # PyBuilder 73 | target/ 74 | 75 | # Jupyter Notebook 76 | .ipynb_checkpoints 77 | 78 | # IPython 79 | profile_default/ 80 | ipython_config.py 81 | 82 | # pyenv 83 | .python-version 84 | 85 | # celery beat schedule file 86 | celerybeat-schedule 87 | 88 | # SageMath parsed files 89 | *.sage.py 90 | 91 | # Environments 92 | .env 93 | .venv 94 | env/ 95 | venv/ 96 | ENV/ 97 | env.bak/ 98 | venv.bak/ 99 | 100 | # Spyder project settings 101 | .spyderproject 102 | .spyproject 103 | 104 | # Rope project settings 105 | .ropeproject 106 | 107 | # mkdocs documentation 108 | /site 109 | 110 | # mypy 111 | .mypy_cache/ 112 | .dmypy.json 113 | dmypy.json 114 | 115 | # Pyre type checker 116 | .pyre/ 117 | 118 | # vscode 119 | .vscode 120 | 121 | # TF code 122 | tensorflow_code 123 | 124 | # Models 125 | models 126 | proc_data 127 | 128 | # examples 129 | runs 130 | examples/runs -------------------------------------------------------------------------------- /docs/DEMO.md: -------------------------------------------------------------------------------- 1 | # Quick Demo 2 | We provide demo code for end-to-end inference here. 3 | 4 | Our inference code will iterate over all images in a given folder and generate the results. 5 | 6 | ## Human Body Reconstruction 7 | 8 | This demo runs 3D human mesh reconstruction from a single image. 9 | 10 | Our code requires input images that are already **cropped with the person centered** in the image. The input images should have the size of `224x224`. To run the demo, please place your test images under `./samples/human-body`, and then run the following script. 11 | 12 | 13 | ```bash 14 | python ./metro/tools/end2end_inference_bodymesh.py \ 15 | --resume_checkpoint ./models/metro_release/metro_3dpw_state_dict.bin \ 16 | --image_file_or_path ./samples/human-body 17 | ``` 18 | After running, it will generate the results in the folder `./samples/human-body`. 19 | 20 | 21 | 22 | ## Hand Reconstruction 23 | 24 | This demo runs 3D hand reconstruction from a single image. 25 | 26 | You may want to provide images that are already **cropped with the right hand centered** in the image. The input images should have the size of `224x224`. Please place the images under `./samples/hand`, and run the following script. 27 | 28 | ```bash 29 | python ./metro/tools/end2end_inference_handmesh.py \ 30 | --resume_checkpoint ./models/metro_release/metro_hand_state_dict.bin \ 31 | --image_file_or_path ./samples/hand 32 | ``` 33 | After running, it will output the results in the folder `./samples/hand`. 34 | 35 | 36 | 37 | ## Limitations 38 | 39 | - **This demo doesn't perform human/hand detection**. Our model requires a centered target in the image. 40 | - As **METRO is a data-driven approach**, it may not perform well if the test samples are very different from the training data. 41 | - **METRO is a mesh-specific approach**.
For example, our hand model is trained only on right-hand data, and therefore it doesn't perform well on left-hand images. Developing a unified model for different 3D objects is an interesting direction for future work. 42 | 43 | 44 | 45 | -------------------------------------------------------------------------------- /tore/utils/geometric_layers.py: -------------------------------------------------------------------------------- 1 | """ 2 | Useful geometric operations, e.g. Orthographic projection and a differentiable Rodrigues formula 3 | 4 | Parts of the code are taken from https://github.com/MandyMo/pytorch_HMR 5 | """ 6 | import torch 7 | 8 | def rodrigues(theta): 9 | """Convert axis-angle representation to rotation matrix. 10 | Args: 11 | theta: size = [B, 3] 12 | Returns: 13 | Rotation matrix corresponding to the axis-angle input -- size = [B, 3, 3] 14 | """ 15 | l1norm = torch.norm(theta + 1e-8, p = 2, dim = 1) 16 | angle = torch.unsqueeze(l1norm, -1) 17 | normalized = torch.div(theta, angle) 18 | angle = angle * 0.5 19 | v_cos = torch.cos(angle) 20 | v_sin = torch.sin(angle) 21 | quat = torch.cat([v_cos, v_sin * normalized], dim = 1) 22 | return quat2mat(quat) 23 | 24 | def quat2mat(quat): 25 | """Convert quaternion coefficients to rotation matrix. 26 | Args: 27 | quat: size = [B, 4] 4 <===>(w, x, y, z) 28 | Returns: 29 | Rotation matrix corresponding to the quaternion -- size = [B, 3, 3] 30 | """ 31 | norm_quat = quat 32 | norm_quat = norm_quat/norm_quat.norm(p=2, dim=1, keepdim=True) 33 | w, x, y, z = norm_quat[:,0], norm_quat[:,1], norm_quat[:,2], norm_quat[:,3] 34 | 35 | B = quat.size(0) 36 | 37 | w2, x2, y2, z2 = w.pow(2), x.pow(2), y.pow(2), z.pow(2) 38 | wx, wy, wz = w*x, w*y, w*z 39 | xy, xz, yz = x*y, x*z, y*z 40 | 41 | rotMat = torch.stack([w2 + x2 - y2 - z2, 2*xy - 2*wz, 2*wy + 2*xz, 42 | 2*wz + 2*xy, w2 - x2 + y2 - z2, 2*yz - 2*wx, 43 | 2*xz - 2*wy, 2*wx + 2*yz, w2 - x2 - y2 + z2], dim=1).view(B, 3, 3) 44 | return rotMat 45 | 46 | def orthographic_projection(X, camera): 47 | """Perform orthographic projection of 3D points X using the camera parameters 48 | Args: 49 | X: size = [B, N, 3] 50 | camera: size = [B, 3] 51 | Returns: 52 | Projected 2D points -- size = [B, N, 2] 53 | """ 54 | camera = camera.view(-1, 1, 3) 55 | X_trans = X[:, :, :2] + camera[:, :, 1:] 56 | shape = X_trans.shape 57 | X_2d = (camera[:, :, 0] * X_trans.view(shape[0], -1)).view(shape) 58 | return X_2d 59 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/tests/modeling_openai_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License.
15 | from __future__ import absolute_import 16 | from __future__ import division 17 | from __future__ import print_function 18 | 19 | import unittest 20 | import pytest 21 | 22 | 23 | from pytorch_transformers import (OpenAIGPTConfig, OpenAIGPTModel, 24 | OpenAIGPTLMHeadModel, OpenAIGPTDoubleHeadsModel) 25 | 26 | from .modeling_common_test import CommonTestCases, ConfigTester 27 | 28 | class OpenAIModelTest(unittest.TestCase): 29 | 30 | def test_config(self): 31 | config_tester = ConfigTester(self, config_class=OpenAIGPTConfig, n_embd=37) 32 | config_tester.run_common_tests() 33 | 34 | def test_model(self): 35 | model_tester = CommonTestCases.GPTModelTester(self, config_class=OpenAIGPTConfig, base_model_class=OpenAIGPTModel, 36 | lm_head_model_class=OpenAIGPTLMHeadModel, 37 | double_head_model_class=OpenAIGPTDoubleHeadsModel) 38 | model_tester.run_common_tests(test_presents=False) 39 | 40 | @pytest.mark.slow 41 | def test_pretrained(self): 42 | model_tester = CommonTestCases.GPTModelTester(self, config_class=OpenAIGPTConfig, base_model_class=OpenAIGPTModel, 43 | lm_head_model_class=OpenAIGPTLMHeadModel, 44 | double_head_model_class=OpenAIGPTDoubleHeadsModel) 45 | model_tester.run_slow_tests() 46 | 47 | if __name__ == "__main__": 48 | unittest.main() 49 | -------------------------------------------------------------------------------- /transformers/docs/source/_static/js/custom.js: -------------------------------------------------------------------------------- 1 | function addIcon() { 2 | const huggingFaceLogo = "http://lysand.re/huggingface_logo.svg"; 3 | const image = document.createElement("img"); 4 | image.setAttribute("src", huggingFaceLogo); 5 | 6 | const div = document.createElement("div"); 7 | div.appendChild(image); 8 | div.style.textAlign = 'center'; 9 | div.style.paddingTop = '30px'; 10 | div.style.backgroundColor = '#6670FF'; 11 | 12 | const scrollDiv = document.getElementsByClassName("wy-side-scroll")[0]; 13 | scrollDiv.prepend(div); 14 | } 15 | 16 | function addCustomFooter() { 17 | const customFooter = document.createElement("div"); 18 | const questionOrIssue = document.createElement("div"); 19 | questionOrIssue.innerHTML = "Stuck? 
Read our Blog posts or Create an issue"; 20 | customFooter.appendChild(questionOrIssue); 21 | customFooter.classList.add("footer"); 22 | 23 | const social = document.createElement("div"); 24 | social.classList.add("footer__Social"); 25 | 26 | const imageDetails = [ 27 | { link: "https://huggingface.co", imageLink: "http://lysand.re/icons/website.svg" }, 28 | { link: "https://twitter.com/huggingface", imageLink: "http://lysand.re/icons/twitter.svg" }, 29 | { link: "https://github.com/huggingface", imageLink: "http://lysand.re/icons/github.svg" }, 30 | { link: "https://www.linkedin.com/company/huggingface/", imageLink: "http://lysand.re/icons/linkedin.svg" } 31 | ]; 32 | 33 | imageDetails.forEach(imageLinks => { 34 | const link = document.createElement("a"); 35 | const image = document.createElement("img"); 36 | image.src = imageLinks.imageLink; 37 | link.href = imageLinks.link; 38 | image.style.width = "30px"; 39 | image.classList.add("footer__CustomImage"); 40 | link.appendChild(image); 41 | social.appendChild(link); 42 | }); 43 | 44 | customFooter.appendChild(social); 45 | document.getElementsByTagName("footer")[0].appendChild(customFooter); 46 | } 47 | 48 | function onLoad() { 49 | addIcon(); 50 | addCustomFooter(); 51 | } 52 | 53 | window.addEventListener("load", onLoad); 54 | 55 | -------------------------------------------------------------------------------- /manopth/manopth/demo.py: -------------------------------------------------------------------------------- 1 | from matplotlib import pyplot as plt 2 | from mpl_toolkits.mplot3d import Axes3D 3 | from mpl_toolkits.mplot3d.art3d import Poly3DCollection 4 | import numpy as np 5 | import torch 6 | 7 | from manopth.manolayer import ManoLayer 8 | 9 | 10 | def generate_random_hand(batch_size=1, ncomps=6, mano_root='mano/models'): 11 | nfull_comps = ncomps + 3 # Add global orientation dims to PCA 12 | random_pcapose = torch.rand(batch_size, nfull_comps) 13 | mano_layer = ManoLayer(mano_root=mano_root) 14 | verts, joints = mano_layer(random_pcapose) 15 | return {'verts': verts, 'joints': joints, 'faces': mano_layer.th_faces} 16 | 17 | 18 | def display_hand(hand_info, mano_faces=None, ax=None, alpha=0.2, batch_idx=0, show=True): 19 | """ 20 | Displays hand batch_idx in batch of hand_info, hand_info as returned by 21 | generate_random_hand 22 | """ 23 | if ax is None: 24 | fig = plt.figure() 25 | ax = fig.add_subplot(111, projection='3d') 26 | verts, joints = hand_info['verts'][batch_idx], hand_info['joints'][ 27 | batch_idx] 28 | if mano_faces is None: 29 | ax.scatter(verts[:, 0], verts[:, 1], verts[:, 2], alpha=0.1) 30 | else: 31 | mesh = Poly3DCollection(verts[mano_faces], alpha=alpha) 32 | face_color = (141 / 255, 184 / 255, 226 / 255) 33 | edge_color = (50 / 255, 50 / 255, 50 / 255) 34 | mesh.set_edgecolor(edge_color) 35 | mesh.set_facecolor(face_color) 36 | ax.add_collection3d(mesh) 37 | ax.scatter(joints[:, 0], joints[:, 1], joints[:, 2], color='r') 38 | cam_equal_aspect_3d(ax, verts.numpy()) 39 | if show: 40 | plt.show() 41 | 42 | 43 | def cam_equal_aspect_3d(ax, verts, flip_x=False): 44 | """ 45 | Centers view on cuboid containing hand and flips y and z axis 46 | and fixes azimuth 47 | """ 48 | extents = np.stack([verts.min(0), verts.max(0)], axis=1) 49 | sz = extents[:, 1] - extents[:, 0] 50 | centers = np.mean(extents, axis=1) 51 | maxsize = max(abs(sz)) 52 | r = maxsize / 2 53 | if flip_x: 54 | ax.set_xlim(centers[0] + r, centers[0] - r) 55 | else: 56 | ax.set_xlim(centers[0] - r, centers[0] + r) 57 | # Invert y and z axis 58 | 
ax.set_ylim(centers[1] + r, centers[1] - r) 59 | ax.set_zlim(centers[2] + r, centers[2] - r) 60 | -------------------------------------------------------------------------------- /manopth/manopth/rot6d.py: -------------------------------------------------------------------------------- 1 | import torch 2 | 3 | 4 | def compute_rotation_matrix_from_ortho6d(poses): 5 | """ 6 | Code from 7 | https://github.com/papagina/RotationContinuity 8 | On the Continuity of Rotation Representations in Neural Networks 9 | Zhou et al. CVPR19 10 | https://zhouyisjtu.github.io/project_rotation/rotation.html 11 | """ 12 | x_raw = poses[:, 0:3] # batch*3 13 | y_raw = poses[:, 3:6] # batch*3 14 | 15 | x = normalize_vector(x_raw) # batch*3 16 | z = cross_product(x, y_raw) # batch*3 17 | z = normalize_vector(z) # batch*3 18 | y = cross_product(z, x) # batch*3 19 | 20 | x = x.view(-1, 3, 1) 21 | y = y.view(-1, 3, 1) 22 | z = z.view(-1, 3, 1) 23 | matrix = torch.cat((x, y, z), 2) # batch*3*3 24 | return matrix 25 | 26 | def robust_compute_rotation_matrix_from_ortho6d(poses): 27 | """ 28 | Instead of making 2nd vector orthogonal to first 29 | create a base that takes into account the two predicted 30 | directions equally 31 | """ 32 | x_raw = poses[:, 0:3] # batch*3 33 | y_raw = poses[:, 3:6] # batch*3 34 | 35 | x = normalize_vector(x_raw) # batch*3 36 | y = normalize_vector(y_raw) # batch*3 37 | middle = normalize_vector(x + y) 38 | orthmid = normalize_vector(x - y) 39 | x = normalize_vector(middle + orthmid) 40 | y = normalize_vector(middle - orthmid) 41 | # Their scalar product should be small ! 42 | # assert torch.einsum("ij,ij->i", [x, y]).abs().max() < 0.00001 43 | z = normalize_vector(cross_product(x, y)) 44 | 45 | x = x.view(-1, 3, 1) 46 | y = y.view(-1, 3, 1) 47 | z = z.view(-1, 3, 1) 48 | matrix = torch.cat((x, y, z), 2) # batch*3*3 49 | # Check for reflection in matrix ! If found, flip last vector TODO 50 | assert (torch.stack([torch.det(mat) for mat in matrix ])< 0).sum() == 0 51 | return matrix 52 | 53 | 54 | def normalize_vector(v): 55 | batch = v.shape[0] 56 | v_mag = torch.sqrt(v.pow(2).sum(1)) # batch 57 | v_mag = torch.max(v_mag, v.new([1e-8])) 58 | v_mag = v_mag.view(batch, 1).expand(batch, v.shape[1]) 59 | v = v/v_mag 60 | return v 61 | 62 | 63 | def cross_product(u, v): 64 | batch = u.shape[0] 65 | i = u[:, 1] * v[:, 2] - u[:, 2] * v[:, 1] 66 | j = u[:, 2] * v[:, 0] - u[:, 0] * v[:, 2] 67 | k = u[:, 0] * v[:, 1] - u[:, 1] * v[:, 0] 68 | 69 | out = torch.cat((i.view(batch, 1), j.view(batch, 1), k.view(batch, 1)), 1) 70 | 71 | return out 72 | -------------------------------------------------------------------------------- /transformers/docs/source/index.rst: -------------------------------------------------------------------------------- 1 | Pytorch-Transformers 2 | ================================================================================================================================================ 3 | 4 | PyTorch-Transformers is a library of state-of-the-art pre-trained models for Natural Language Processing (NLP). 5 | 6 | The library currently contains PyTorch implementations, pre-trained model weights, usage scripts and conversion utilities for the following models: 7 | 8 | 1. `BERT `_ (from Google) released with the paper `BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding `_ by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 9 | 2. 
`GPT `_ (from OpenAI) released with the paper `Improving Language Understanding by Generative Pre-Training `_ by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 10 | 3. `GPT-2 `_ (from OpenAI) released with the paper `Language Models are Unsupervised Multitask Learners `_ by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 11 | 4. `Transformer-XL `_ (from Google/CMU) released with the paper `Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context `_ by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 12 | 5. `XLNet `_ (from Google/CMU) released with the paper `​XLNet: Generalized Autoregressive Pretraining for Language Understanding `_ by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 13 | 6. `XLM `_ (from Facebook) released together with the paper `Cross-lingual Language Model Pretraining `_ by Guillaume Lample and Alexis Conneau. 14 | 15 | .. toctree:: 16 | :maxdepth: 2 17 | :caption: Notes 18 | 19 | installation 20 | quickstart 21 | pretrained_models 22 | examples 23 | notebooks 24 | converting_tensorflow_models 25 | migration 26 | bertology 27 | torchscript 28 | 29 | 30 | .. toctree:: 31 | :maxdepth: 2 32 | :caption: Package Reference 33 | 34 | model_doc/overview 35 | model_doc/bert 36 | model_doc/gpt 37 | model_doc/transformerxl 38 | model_doc/gpt2 39 | model_doc/xlm 40 | model_doc/xlnet 41 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/tests/tokenization_xlm_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | from __future__ import absolute_import, division, print_function, unicode_literals 16 | 17 | import os 18 | import unittest 19 | import json 20 | 21 | from pytorch_transformers.tokenization_xlm import XLMTokenizer, VOCAB_FILES_NAMES 22 | 23 | from .tokenization_tests_commons import create_and_check_tokenizer_commons, TemporaryDirectory 24 | 25 | class XLMTokenizationTest(unittest.TestCase): 26 | 27 | def test_full_tokenizer(self): 28 | """ Adapted from Sennrich et al. 
2015 and https://github.com/rsennrich/subword-nmt """ 29 | vocab = ["l", "o", "w", "e", "r", "s", "t", "i", "d", "n", 30 | "w", "r", "t", 31 | "lo", "low", "er", 32 | "low", "lowest", "newer", "wider", "<unk>"] 33 | vocab_tokens = dict(zip(vocab, range(len(vocab)))) 34 | merges = ["l o 123", "lo w 1456", "e r 1789", ""] 35 | 36 | with TemporaryDirectory() as tmpdirname: 37 | vocab_file = os.path.join(tmpdirname, VOCAB_FILES_NAMES['vocab_file']) 38 | merges_file = os.path.join(tmpdirname, VOCAB_FILES_NAMES['merges_file']) 39 | with open(vocab_file, "w") as fp: 40 | fp.write(json.dumps(vocab_tokens)) 41 | with open(merges_file, "w") as fp: 42 | fp.write("\n".join(merges)) 43 | 44 | input_text = u"lower newer" 45 | output_text = u"lower newer" 46 | 47 | create_and_check_tokenizer_commons(self, input_text, output_text, XLMTokenizer, tmpdirname) 48 | 49 | tokenizer = XLMTokenizer(vocab_file, merges_file) 50 | 51 | text = "lower" 52 | bpe_tokens = ["low", "er"] 53 | tokens = tokenizer.tokenize(text) 54 | self.assertListEqual(tokens, bpe_tokens) 55 | 56 | input_tokens = tokens + ["<unk>"] 57 | input_bpe_tokens = [14, 15, 20] 58 | self.assertListEqual( 59 | tokenizer.convert_tokens_to_ids(input_tokens), input_bpe_tokens) 60 | 61 | 62 | if __name__ == '__main__': 63 | unittest.main() 64 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/tests/tokenization_openai_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | from __future__ import absolute_import, division, print_function, unicode_literals 16 | 17 | import os 18 | import unittest 19 | import json 20 | 21 | from pytorch_transformers.tokenization_openai import OpenAIGPTTokenizer, VOCAB_FILES_NAMES 22 | 23 | from .tokenization_tests_commons import create_and_check_tokenizer_commons, TemporaryDirectory 24 | 25 | 26 | class OpenAIGPTTokenizationTest(unittest.TestCase): 27 | 28 | def test_full_tokenizer(self): 29 | """ Adapted from Sennrich et al.
2015 and https://github.com/rsennrich/subword-nmt """ 30 | vocab = ["l", "o", "w", "e", "r", "s", "t", "i", "d", "n", 31 | "w", "r", "t", 32 | "lo", "low", "er", 33 | "low", "lowest", "newer", "wider", "<unk>"] 34 | vocab_tokens = dict(zip(vocab, range(len(vocab)))) 35 | merges = ["#version: 0.2", "l o", "lo w", "e r", ""] 36 | 37 | with TemporaryDirectory() as tmpdirname: 38 | vocab_file = os.path.join(tmpdirname, VOCAB_FILES_NAMES['vocab_file']) 39 | merges_file = os.path.join(tmpdirname, VOCAB_FILES_NAMES['merges_file']) 40 | with open(vocab_file, "w") as fp: 41 | fp.write(json.dumps(vocab_tokens)) 42 | with open(merges_file, "w") as fp: 43 | fp.write("\n".join(merges)) 44 | 45 | input_text = u"lower newer" 46 | output_text = u"lower newer" 47 | 48 | create_and_check_tokenizer_commons(self, input_text, output_text, OpenAIGPTTokenizer, tmpdirname) 49 | 50 | tokenizer = OpenAIGPTTokenizer(vocab_file, merges_file) 51 | 52 | text = "lower" 53 | bpe_tokens = ["low", "er"] 54 | tokens = tokenizer.tokenize(text) 55 | self.assertListEqual(tokens, bpe_tokens) 56 | 57 | input_tokens = tokens + ["<unk>"] 58 | input_bpe_tokens = [14, 15, 20] 59 | self.assertListEqual( 60 | tokenizer.convert_tokens_to_ids(input_tokens), input_bpe_tokens) 61 | 62 | 63 | if __name__ == '__main__': 64 | unittest.main() 65 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/tests/tokenization_gpt2_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | from __future__ import absolute_import, division, print_function, unicode_literals 16 | 17 | import os 18 | import unittest 19 | import json 20 | 21 | from pytorch_transformers.tokenization_gpt2 import GPT2Tokenizer, VOCAB_FILES_NAMES 22 | 23 | from .tokenization_tests_commons import create_and_check_tokenizer_commons, TemporaryDirectory 24 | 25 | class GPT2TokenizationTest(unittest.TestCase): 26 | 27 | def test_full_tokenizer(self): 28 | """ Adapted from Sennrich et al.
2015 and https://github.com/rsennrich/subword-nmt """ 29 | vocab = ["l", "o", "w", "e", "r", "s", "t", "i", "d", "n", 30 | "lo", "low", "er", 31 | "low", "lowest", "newer", "wider", "<unk>"] 32 | vocab_tokens = dict(zip(vocab, range(len(vocab)))) 33 | merges = ["#version: 0.2", "l o", "lo w", "e r", ""] 34 | special_tokens_map = {"unk_token": "<unk>"} 35 | 36 | with TemporaryDirectory() as tmpdirname: 37 | vocab_file = os.path.join(tmpdirname, VOCAB_FILES_NAMES['vocab_file']) 38 | merges_file = os.path.join(tmpdirname, VOCAB_FILES_NAMES['merges_file']) 39 | with open(vocab_file, "w") as fp: 40 | fp.write(json.dumps(vocab_tokens)) 41 | with open(merges_file, "w") as fp: 42 | fp.write("\n".join(merges)) 43 | 44 | input_text = u"lower newer" 45 | output_text = u"lower<unk>newer" 46 | 47 | create_and_check_tokenizer_commons(self, input_text, output_text, GPT2Tokenizer, tmpdirname, **special_tokens_map) 48 | 49 | tokenizer = GPT2Tokenizer(vocab_file, merges_file, **special_tokens_map) 50 | text = "lower" 51 | bpe_tokens = ["low", "er"] 52 | tokens = tokenizer.tokenize(text) 53 | self.assertListEqual(tokens, bpe_tokens) 54 | 55 | input_tokens = tokens + [tokenizer.unk_token] 56 | input_bpe_tokens = [13, 12, 17] 57 | self.assertListEqual( 58 | tokenizer.convert_tokens_to_ids(input_tokens), input_bpe_tokens) 59 | 60 | 61 | if __name__ == '__main__': 62 | unittest.main() 63 | -------------------------------------------------------------------------------- /SECURITY.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | ## Security 4 | 5 | Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). 6 | 7 | If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below. 8 | 9 | ## Reporting Security Issues 10 | 11 | **Please do not report security vulnerabilities through public GitHub issues.** 12 | 13 | Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). 14 | 15 | If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). 16 | 17 | You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). 18 | 19 | Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: 20 | 21 | * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
22 | * Full paths of source file(s) related to the manifestation of the issue 23 | * The location of the affected source code (tag/branch/commit or direct URL) 24 | * Any special configuration required to reproduce the issue 25 | * Step-by-step instructions to reproduce the issue 26 | * Proof-of-concept or exploit code (if possible) 27 | * Impact of the issue, including how an attacker might exploit the issue 28 | 29 | This information will help us triage your report more quickly. 30 | 31 | If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs. 32 | 33 | ## Preferred Languages 34 | 35 | We prefer all communications to be in English. 36 | 37 | ## Policy 38 | 39 | Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). 40 | 41 | 42 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/tests/tokenization_transfo_xl_test.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The Google AI Language Team Authors. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | from __future__ import absolute_import, division, print_function, unicode_literals 16 | 17 | import os 18 | import unittest 19 | from io import open 20 | 21 | from pytorch_transformers.tokenization_transfo_xl import TransfoXLTokenizer, VOCAB_FILES_NAMES 22 | 23 | from .tokenization_tests_commons import create_and_check_tokenizer_commons, TemporaryDirectory 24 | 25 | class TransfoXLTokenizationTest(unittest.TestCase): 26 | 27 | def test_full_tokenizer(self): 28 | vocab_tokens = [ 29 | "<unk>", "[CLS]", "[SEP]", "want", "unwanted", "wa", "un", 30 | "running", ",", "low", "l", 31 | ] 32 | with TemporaryDirectory() as tmpdirname: 33 | vocab_file = os.path.join(tmpdirname, VOCAB_FILES_NAMES['vocab_file']) 34 | with open(vocab_file, "w", encoding='utf-8') as vocab_writer: 35 | vocab_writer.write("".join([x + "\n" for x in vocab_tokens])) 36 | 37 | input_text = u"<unk> UNwanted , running" 38 | output_text = u"<unk> unwanted, running" 39 | 40 | create_and_check_tokenizer_commons(self, input_text, output_text, TransfoXLTokenizer, tmpdirname, lower_case=True) 41 | 42 | tokenizer = TransfoXLTokenizer(vocab_file=vocab_file, lower_case=True) 43 | 44 | tokens = tokenizer.tokenize(u"<unk> UNwanted , running") 45 | self.assertListEqual(tokens, ["<unk>", "unwanted", ",", "running"]) 46 | 47 | self.assertListEqual( 48 | tokenizer.convert_tokens_to_ids(tokens), [0, 4, 8, 7]) 49 | 50 | def test_full_tokenizer_lower(self): 51 | tokenizer = TransfoXLTokenizer(lower_case=True) 52 | 53 | self.assertListEqual( 54 | tokenizer.tokenize(u" \tHeLLo ! how \n Are yoU ? 
"), 55 | ["hello", "!", "how", "are", "you", "?"]) 56 | 57 | def test_full_tokenizer_no_lower(self): 58 | tokenizer = TransfoXLTokenizer(lower_case=False) 59 | 60 | self.assertListEqual( 61 | tokenizer.tokenize(u" \tHeLLo ! how \n Are yoU ? "), 62 | ["HeLLo", "!", "how", "Are", "yoU", "?"]) 63 | 64 | 65 | if __name__ == '__main__': 66 | unittest.main() 67 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/convert_tf_checkpoint_to_pytorch.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The HuggingFace Inc. team. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | """Convert BERT checkpoint.""" 16 | 17 | from __future__ import absolute_import 18 | from __future__ import division 19 | from __future__ import print_function 20 | 21 | import argparse 22 | import torch 23 | 24 | from pytorch_transformers.modeling_bert import BertConfig, BertForPreTraining, load_tf_weights_in_bert 25 | 26 | import logging 27 | logging.basicConfig(level=logging.INFO) 28 | 29 | def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path): 30 | # Initialise PyTorch model 31 | config = BertConfig.from_json_file(bert_config_file) 32 | print("Building PyTorch model from configuration: {}".format(str(config))) 33 | model = BertForPreTraining(config) 34 | 35 | # Load weights from tf checkpoint 36 | load_tf_weights_in_bert(model, config, tf_checkpoint_path) 37 | 38 | # Save pytorch-model 39 | print("Save PyTorch model to {}".format(pytorch_dump_path)) 40 | torch.save(model.state_dict(), pytorch_dump_path) 41 | 42 | 43 | if __name__ == "__main__": 44 | parser = argparse.ArgumentParser() 45 | ## Required parameters 46 | parser.add_argument("--tf_checkpoint_path", 47 | default = None, 48 | type = str, 49 | required = True, 50 | help = "Path to the TensorFlow checkpoint.") 51 | parser.add_argument("--bert_config_file", 52 | default = None, 53 | type = str, 54 | required = True, 55 | help = "The config json file corresponding to the pre-trained BERT model. \n" 56 | "This specifies the model architecture.") 57 | parser.add_argument("--pytorch_dump_path", 58 | default = None, 59 | type = str, 60 | required = True, 61 | help = "Path to the output PyTorch model.") 62 | args = parser.parse_args() 63 | convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, 64 | args.bert_config_file, 65 | args.pytorch_dump_path) 66 | -------------------------------------------------------------------------------- /tore/modeling/bert/position_encoding.py: -------------------------------------------------------------------------------- 1 | # ---------------------------------------------------------------------------------------------- 2 | # FastMETRO Official Code 3 | # Copyright (c) POSTECH Algorithmic Machine Intelligence Lab. (P-AMI Lab.) All Rights Reserved 4 | # Licensed under the MIT license.
5 | # ---------------------------------------------------------------------------------------------- 6 | # Modified from DETR (https://github.com/facebookresearch/detr) 7 | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved [see https://github.com/facebookresearch/detr/blob/main/LICENSE for details] 8 | # ---------------------------------------------------------------------------------------------- 9 | 10 | """ 11 | Various positional encodings for the transformer. 12 | """ 13 | import math 14 | import torch 15 | from torch import nn 16 | 17 | class PositionEmbeddingSine(nn.Module): 18 | """ 19 | This is a more standard version of the position embedding, very similar to the one 20 | used by the Attention is all you need paper, generalized to work on images. 21 | """ 22 | def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): 23 | super().__init__() 24 | self.num_pos_feats = num_pos_feats 25 | self.temperature = temperature 26 | self.normalize = normalize 27 | if scale is not None and normalize is False: 28 | raise ValueError("normalize should be True if scale is passed") 29 | if scale is None: 30 | scale = 2 * math.pi 31 | self.scale = scale 32 | 33 | def forward(self, bs, h, w, device): 34 | ones = torch.ones((bs, h, w), dtype=torch.bool, device=device) 35 | y_embed = ones.cumsum(1, dtype=torch.float32) 36 | x_embed = ones.cumsum(2, dtype=torch.float32) 37 | if self.normalize: 38 | eps = 1e-6 39 | y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale 40 | x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale 41 | 42 | dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=device) 43 | dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) 44 | 45 | pos_x = x_embed[:, :, :, None] / dim_t 46 | pos_y = y_embed[:, :, :, None] / dim_t 47 | pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3) 48 | pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3) 49 | pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) 50 | return pos 51 | 52 | 53 | def build_position_encoding(pos_type, hidden_dim): 54 | N_steps = hidden_dim // 2 55 | if pos_type == 'sine': 56 | position_embedding = PositionEmbeddingSine(N_steps, normalize=True) 57 | else: 58 | raise ValueError(f"not supported {pos_type}") 59 | 60 | return position_embedding -------------------------------------------------------------------------------- /tore/modeling_fm/bert/position_encoding.py: -------------------------------------------------------------------------------- 1 | # ---------------------------------------------------------------------------------------------- 2 | # FastMETRO Official Code 3 | # Copyright (c) POSTECH Algorithmic Machine Intelligence Lab. (P-AMI Lab.) All Rights Reserved 4 | # Licensed under the MIT license. 5 | # ---------------------------------------------------------------------------------------------- 6 | # Modified from DETR (https://github.com/facebookresearch/detr) 7 | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved [see https://github.com/facebookresearch/detr/blob/main/LICENSE for details] 8 | # ---------------------------------------------------------------------------------------------- 9 | 10 | """ 11 | Various positional encodings for the transformer. 
12 | """ 13 | import math 14 | import torch 15 | from torch import nn 16 | 17 | class PositionEmbeddingSine(nn.Module): 18 | """ 19 | This is a more standard version of the position embedding, very similar to the one 20 | used by the Attention is all you need paper, generalized to work on images. 21 | """ 22 | def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): 23 | super().__init__() 24 | self.num_pos_feats = num_pos_feats 25 | self.temperature = temperature 26 | self.normalize = normalize 27 | if scale is not None and normalize is False: 28 | raise ValueError("normalize should be True if scale is passed") 29 | if scale is None: 30 | scale = 2 * math.pi 31 | self.scale = scale 32 | 33 | def forward(self, bs, h, w, device): 34 | ones = torch.ones((bs, h, w), dtype=torch.bool, device=device) 35 | y_embed = ones.cumsum(1, dtype=torch.float32) 36 | x_embed = ones.cumsum(2, dtype=torch.float32) 37 | if self.normalize: 38 | eps = 1e-6 39 | y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale 40 | x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale 41 | 42 | dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=device) 43 | dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) 44 | 45 | pos_x = x_embed[:, :, :, None] / dim_t 46 | pos_y = y_embed[:, :, :, None] / dim_t 47 | pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3) 48 | pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3) 49 | pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) 50 | return pos 51 | 52 | 53 | def build_position_encoding(pos_type, hidden_dim): 54 | N_steps = hidden_dim // 2 55 | if pos_type == 'sine': 56 | position_embedding = PositionEmbeddingSine(N_steps, normalize=True) 57 | else: 58 | raise ValueError(f"not supported {pos_type}") 59 | 60 | return position_embedding -------------------------------------------------------------------------------- /tore/modeling_m/bert/position_encoding.py: -------------------------------------------------------------------------------- 1 | # ---------------------------------------------------------------------------------------------- 2 | # FastMETRO Official Code 3 | # Copyright (c) POSTECH Algorithmic Machine Intelligence Lab. (P-AMI Lab.) All Rights Reserved 4 | # Licensed under the MIT license. 5 | # ---------------------------------------------------------------------------------------------- 6 | # Modified from DETR (https://github.com/facebookresearch/detr) 7 | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved [see https://github.com/facebookresearch/detr/blob/main/LICENSE for details] 8 | # ---------------------------------------------------------------------------------------------- 9 | 10 | """ 11 | Various positional encodings for the transformer. 12 | """ 13 | import math 14 | import torch 15 | from torch import nn 16 | 17 | class PositionEmbeddingSine(nn.Module): 18 | """ 19 | This is a more standard version of the position embedding, very similar to the one 20 | used by the Attention is all you need paper, generalized to work on images. 
21 | """ 22 | def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): 23 | super().__init__() 24 | self.num_pos_feats = num_pos_feats 25 | self.temperature = temperature 26 | self.normalize = normalize 27 | if scale is not None and normalize is False: 28 | raise ValueError("normalize should be True if scale is passed") 29 | if scale is None: 30 | scale = 2 * math.pi 31 | self.scale = scale 32 | 33 | def forward(self, bs, h, w, device): 34 | ones = torch.ones((bs, h, w), dtype=torch.bool, device=device) 35 | y_embed = ones.cumsum(1, dtype=torch.float32) 36 | x_embed = ones.cumsum(2, dtype=torch.float32) 37 | if self.normalize: 38 | eps = 1e-6 39 | y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale 40 | x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale 41 | 42 | dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=device) 43 | dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) 44 | 45 | pos_x = x_embed[:, :, :, None] / dim_t 46 | pos_y = y_embed[:, :, :, None] / dim_t 47 | pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3) 48 | pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3) 49 | pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) 50 | return pos 51 | 52 | 53 | def build_position_encoding(pos_type, hidden_dim): 54 | N_steps = hidden_dim // 2 55 | if pos_type == 'sine': 56 | position_embedding = PositionEmbeddingSine(N_steps, normalize=True) 57 | else: 58 | raise ValueError(f"not supported {pos_type}") 59 | 60 | return position_embedding -------------------------------------------------------------------------------- /transformers/pytorch_transformers/__init__.py: -------------------------------------------------------------------------------- 1 | __version__ = "1.0.0" 2 | from .tokenization_bert import BertTokenizer, BasicTokenizer, WordpieceTokenizer 3 | from .tokenization_openai import OpenAIGPTTokenizer 4 | from .tokenization_transfo_xl import (TransfoXLTokenizer, TransfoXLCorpus) 5 | from .tokenization_gpt2 import GPT2Tokenizer 6 | from .tokenization_xlnet import XLNetTokenizer, SPIECE_UNDERLINE 7 | from .tokenization_xlm import XLMTokenizer 8 | from .tokenization_utils import (PreTrainedTokenizer, clean_up_tokenization) 9 | 10 | from .modeling_bert import (BertConfig, BertModel, BertForPreTraining, 11 | BertForMaskedLM, BertForNextSentencePrediction, 12 | BertForSequenceClassification, BertForMultipleChoice, 13 | BertForTokenClassification, BertForQuestionAnswering, 14 | load_tf_weights_in_bert, BERT_PRETRAINED_MODEL_ARCHIVE_MAP, 15 | BERT_PRETRAINED_CONFIG_ARCHIVE_MAP) 16 | from .modeling_openai import (OpenAIGPTConfig, OpenAIGPTModel, 17 | OpenAIGPTLMHeadModel, OpenAIGPTDoubleHeadsModel, 18 | load_tf_weights_in_openai_gpt, OPENAI_GPT_PRETRAINED_CONFIG_ARCHIVE_MAP, 19 | OPENAI_GPT_PRETRAINED_MODEL_ARCHIVE_MAP) 20 | from .modeling_transfo_xl import (TransfoXLConfig, TransfoXLModel, TransfoXLLMHeadModel, 21 | load_tf_weights_in_transfo_xl, TRANSFO_XL_PRETRAINED_CONFIG_ARCHIVE_MAP, 22 | TRANSFO_XL_PRETRAINED_MODEL_ARCHIVE_MAP) 23 | from .modeling_gpt2 import (GPT2Config, GPT2Model, 24 | GPT2LMHeadModel, GPT2DoubleHeadsModel, 25 | load_tf_weights_in_gpt2, GPT2_PRETRAINED_CONFIG_ARCHIVE_MAP, 26 | GPT2_PRETRAINED_MODEL_ARCHIVE_MAP) 27 | from .modeling_xlnet import (XLNetConfig, 28 | XLNetPreTrainedModel, XLNetModel, XLNetLMHeadModel, 29 | XLNetForSequenceClassification, XLNetForQuestionAnswering, 30 | 
load_tf_weights_in_xlnet, XLNET_PRETRAINED_CONFIG_ARCHIVE_MAP, 31 | XLNET_PRETRAINED_MODEL_ARCHIVE_MAP) 32 | from .modeling_xlm import (XLMConfig, XLMModel, 33 | XLMWithLMHeadModel, XLMForSequenceClassification, 34 | XLMForQuestionAnswering, XLM_PRETRAINED_CONFIG_ARCHIVE_MAP, 35 | XLM_PRETRAINED_MODEL_ARCHIVE_MAP) 36 | from .modeling_utils import (WEIGHTS_NAME, CONFIG_NAME, TF_WEIGHTS_NAME, 37 | PretrainedConfig, PreTrainedModel, prune_layer, Conv1D) 38 | 39 | from .optimization import (AdamW, ConstantLRSchedule, WarmupConstantSchedule, WarmupCosineSchedule, 40 | WarmupCosineWithHardRestartsSchedule, WarmupLinearSchedule) 41 | 42 | from .file_utils import (PYTORCH_PRETRAINED_BERT_CACHE, cached_path) 43 | -------------------------------------------------------------------------------- /manopth/mano/webuser/lbs.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Copyright 2017 Javier Romero, Dimitrios Tzionas, Michael J Black and the Max Planck Gesellschaft. All rights reserved. 3 | This software is provided for research purposes only. 4 | By using this software you agree to the terms of the MANO/SMPL+H Model license here http://mano.is.tue.mpg.de/license 5 | 6 | More information about MANO/SMPL+H is available at http://mano.is.tue.mpg.de. 7 | For comments or questions, please email us at: mano@tue.mpg.de 8 | 9 | 10 | About this file: 11 | ================ 12 | This file defines a wrapper for the loading functions of the MANO model. 13 | 14 | Modules included: 15 | - load_model: 16 | loads the MANO model from a given file location (i.e. a .pkl file location), 17 | or a dictionary object. 18 | 19 | ''' 20 | 21 | 22 | from mano.webuser.posemapper import posemap 23 | import chumpy 24 | import numpy as np 25 | 26 | 27 | def global_rigid_transformation(pose, J, kintree_table, xp): 28 | results = {} 29 | pose = pose.reshape((-1, 3)) 30 | id_to_col = {kintree_table[1, i]: i for i in range(kintree_table.shape[1])} 31 | parent = { 32 | i: id_to_col[kintree_table[0, i]] 33 | for i in range(1, kintree_table.shape[1]) 34 | } 35 | 36 | if xp == chumpy: 37 | from mano.webuser.posemapper import Rodrigues 38 | rodrigues = lambda x: Rodrigues(x) 39 | else: 40 | import cv2 41 | rodrigues = lambda x: cv2.Rodrigues(x)[0] 42 | 43 | with_zeros = lambda x: xp.vstack((x, xp.array([[0.0, 0.0, 0.0, 1.0]]))) 44 | results[0] = with_zeros( 45 | xp.hstack((rodrigues(pose[0, :]), J[0, :].reshape((3, 1))))) 46 | 47 | for i in range(1, kintree_table.shape[1]): 48 | results[i] = results[parent[i]].dot( 49 | with_zeros( 50 | xp.hstack((rodrigues(pose[i, :]), ((J[i, :] - J[parent[i], :] 51 | ).reshape((3, 1))))))) 52 | 53 | pack = lambda x: xp.hstack([np.zeros((4, 3)), x.reshape((4, 1))]) 54 | 55 | results = [results[i] for i in sorted(results.keys())] 56 | results_global = results 57 | 58 | if True: 59 | results2 = [ 60 | results[i] - (pack(results[i].dot(xp.concatenate(((J[i, :]), 0))))) 61 | for i in range(len(results)) 62 | ] 63 | results = results2 64 | result = xp.dstack(results) 65 | return result, results_global 66 | 67 | 68 | def verts_core(pose, v, J, weights, kintree_table, want_Jtr=False, xp=chumpy): 69 | A, A_global = global_rigid_transformation(pose, J, kintree_table, xp) 70 | T = A.dot(weights.T) 71 | 72 | rest_shape_h = xp.vstack((v.T, np.ones((1, v.shape[0])))) 73 | 74 | v = (T[:, 0, :] * rest_shape_h[0, :].reshape( 75 | (1, -1)) + T[:, 1, :] * rest_shape_h[1, :].reshape( 76 | (1, -1)) + T[:, 2, :] * rest_shape_h[2, :].reshape( 77 | (1, -1)) + T[:, 3, :] * 
rest_shape_h[3, :].reshape((1, -1))).T 78 | 79 | v = v[:, :3] 80 | 81 | if not want_Jtr: 82 | return v 83 | Jtr = xp.vstack([g[:3, 3] for g in A_global]) 84 | return (v, Jtr) 85 | -------------------------------------------------------------------------------- /transformers/setup.py: -------------------------------------------------------------------------------- 1 | """ 2 | Simple check list from AllenNLP repo: https://github.com/allenai/allennlp/blob/master/setup.py 3 | 4 | To create the package for pypi. 5 | 6 | 1. Change the version in __init__.py and setup.py. 7 | 8 | 2. Commit these changes with the message: "Release: VERSION" 9 | 10 | 3. Add a tag in git to mark the release: "git tag VERSION -m'Adds tag VERSION for pypi' " 11 | Push the tag to git: git push --tags origin master 12 | 13 | 4. Build both the sources and the wheel. Do not change anything in setup.py between 14 | creating the wheel and the source distribution (obviously). 15 | 16 | For the wheel, run: "python setup.py bdist_wheel" in the top level allennlp directory. 17 | (this will build a wheel for the python version you use to build it - make sure you use python 3.x). 18 | 19 | For the sources, run: "python setup.py sdist" 20 | You should now have a /dist directory with both .whl and .tar.gz source versions of allennlp. 21 | 22 | 5. Check that everything looks correct by uploading the package to the pypi test server: 23 | 24 | twine upload dist/* -r pypitest 25 | (pypi suggest using twine as other methods upload files via plaintext.) 26 | 27 | Check that you can install it in a virtualenv by running: 28 | pip install -i https://testpypi.python.org/pypi pytorch-transformers 29 | 30 | 6. Upload the final version to actual pypi: 31 | twine upload dist/* -r pypi 32 | 33 | 7. Copy the release notes from RELEASE.md to the tag in github once everything is looking hunky-dory. 
34 | 35 | """ 36 | from io import open 37 | from setuptools import find_packages, setup 38 | 39 | setup( 40 | name="pytorch_transformers", 41 | version="1.0.0", 42 | author="Thomas Wolf, Lysandre Debut, Victor Sanh, Tim Rault, Google AI Language Team Authors, Open AI team Authors", 43 | author_email="thomas@huggingface.co", 44 | description="Repository of pre-trained NLP Transformer models: BERT, GPT & GPT-2, Transformer-XL, XLNet and XLM", 45 | long_description=open("README.md", "r", encoding='utf-8').read(), 46 | long_description_content_type="text/markdown", 47 | keywords='NLP deep learning transformer pytorch BERT GPT GPT-2 google openai CMU', 48 | license='Apache', 49 | url="https://github.com/huggingface/pytorch-transformers", 50 | packages=find_packages(exclude=["*.tests", "*.tests.*", 51 | "tests.*", "tests"]), 52 | install_requires=['torch>=0.4.1', 53 | 'numpy', 54 | 'boto3', 55 | 'requests', 56 | 'tqdm', 57 | 'regex', 58 | 'sentencepiece'], 59 | entry_points={ 60 | 'console_scripts': [ 61 | "pytorch_transformers=pytorch_transformers.__main__:main", 62 | ] 63 | }, 64 | # python_requires='>=3.5.0', 65 | tests_require=['pytest'], 66 | classifiers=[ 67 | 'Intended Audience :: Science/Research', 68 | 'License :: OSI Approved :: Apache Software License', 69 | 'Programming Language :: Python :: 3', 70 | 'Topic :: Scientific/Engineering :: Artificial Intelligence', 71 | ], 72 | ) 73 | -------------------------------------------------------------------------------- /manopth/mano/webuser/serialization.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Copyright 2017 Javier Romero, Dimitrios Tzionas, Michael J Black and the Max Planck Gesellschaft. All rights reserved. 3 | This software is provided for research purposes only. 4 | By using this software you agree to the terms of the MANO/SMPL+H Model license here http://mano.is.tue.mpg.de/license 5 | 6 | More information about MANO/SMPL+H is available at http://mano.is.tue.mpg.de. 7 | For comments or questions, please email us at: mano@tue.mpg.de 8 | 9 | 10 | About this file: 11 | ================ 12 | This file defines a wrapper for the loading functions of the MANO model. 13 | 14 | Modules included: 15 | - load_model: 16 | loads the MANO model from a given file location (i.e. a .pkl file location), 17 | or a dictionary object. 
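A minimal usage sketch (illustrative; the .pkl path is a placeholder for a
downloaded MANO model file, and the chumpy `.r` value access is assumed from
the chumpy-based model that load_model returns):

    model = load_model('mano/models/MANO_RIGHT.pkl')
    model.pose[:] = 0.                 # flat hand pose
    verts = model.r                    # posed vertices as a numpy array
    joints = model.J_transformed.r     # joint locations after posing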
18 | 19 | ''' 20 | 21 | 22 | __all__ = ['load_model', 'save_model'] 23 | 24 | import numpy as np 25 | import pickle 26 | import chumpy as ch 27 | from chumpy.ch import MatVecMult 28 | from mano.webuser.posemapper import posemap 29 | from mano.webuser.verts import verts_core 30 | 31 | def ready_arguments(fname_or_dict): 32 | 33 | if not isinstance(fname_or_dict, dict): 34 | dd = pickle.load(open(fname_or_dict, 'rb'), encoding='latin1') 35 | else: 36 | dd = fname_or_dict 37 | 38 | backwards_compatibility_replacements(dd) 39 | 40 | want_shapemodel = 'shapedirs' in dd 41 | nposeparms = dd['kintree_table'].shape[1] * 3 42 | 43 | if 'trans' not in dd: 44 | dd['trans'] = np.zeros(3) 45 | if 'pose' not in dd: 46 | dd['pose'] = np.zeros(nposeparms) 47 | if 'shapedirs' in dd and 'betas' not in dd: 48 | dd['betas'] = np.zeros(dd['shapedirs'].shape[-1]) 49 | 50 | for s in [ 51 | 'v_template', 'weights', 'posedirs', 'pose', 'trans', 'shapedirs', 52 | 'betas', 'J' 53 | ]: 54 | if (s in dd) and not hasattr(dd[s], 'dterms'): 55 | dd[s] = ch.array(dd[s]) 56 | 57 | if want_shapemodel: 58 | dd['v_shaped'] = dd['shapedirs'].dot(dd['betas']) + dd['v_template'] 59 | v_shaped = dd['v_shaped'] 60 | J_tmpx = MatVecMult(dd['J_regressor'], v_shaped[:, 0]) 61 | J_tmpy = MatVecMult(dd['J_regressor'], v_shaped[:, 1]) 62 | J_tmpz = MatVecMult(dd['J_regressor'], v_shaped[:, 2]) 63 | dd['J'] = ch.vstack((J_tmpx, J_tmpy, J_tmpz)).T 64 | dd['v_posed'] = v_shaped + dd['posedirs'].dot( 65 | posemap(dd['bs_type'])(dd['pose'])) 66 | else: 67 | dd['v_posed'] = dd['v_template'] + dd['posedirs'].dot( 68 | posemap(dd['bs_type'])(dd['pose'])) 69 | 70 | return dd 71 | 72 | 73 | def load_model(fname_or_dict): 74 | dd = ready_arguments(fname_or_dict) 75 | 76 | args = { 77 | 'pose': dd['pose'], 78 | 'v': dd['v_posed'], 79 | 'J': dd['J'], 80 | 'weights': dd['weights'], 81 | 'kintree_table': dd['kintree_table'], 82 | 'xp': ch, 83 | 'want_Jtr': True, 84 | 'bs_style': dd['bs_style'] 85 | } 86 | 87 | result, Jtr = verts_core(**args) 88 | result = result + dd['trans'].reshape((1, 3)) 89 | result.J_transformed = Jtr + dd['trans'].reshape((1, 3)) 90 | 91 | for k, v in dd.items(): 92 | setattr(result, k, v) 93 | 94 | return result 95 | -------------------------------------------------------------------------------- /manopth/manopth/rodrigues_layer.py: -------------------------------------------------------------------------------- 1 | """ 2 | This part reuses code from https://github.com/MandyMo/pytorch_HMR/blob/master/src/util.py 3 | which is part of a PyTorch port of SMPL. 4 | Thanks to Zhang Xiong (MandyMo) for making this great code available on github ! 5 | """ 6 | 7 | import argparse 8 | from torch.autograd import gradcheck 9 | import torch 10 | from torch.autograd import Variable 11 | 12 | from manopth import argutils 13 | 14 | 15 | def quat2mat(quat): 16 | """Convert quaternion coefficients to rotation matrix. 
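Note: the quaternion is normalized internally (see below), so inputs need not
be unit norm.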
17 |     Args:
18 |         quat: size = [batch_size, 4]  4 <===> (w, x, y, z)
19 |     Returns:
20 |         Rotation matrix corresponding to the quaternion -- size = [batch_size, 3, 3]
21 |     """
22 |     norm_quat = quat
23 |     norm_quat = norm_quat / norm_quat.norm(p=2, dim=1, keepdim=True)
24 |     w, x, y, z = (norm_quat[:, 0], norm_quat[:, 1],
25 |                   norm_quat[:, 2],
26 |                   norm_quat[:, 3])
27 | 
28 |     batch_size = quat.size(0)
29 | 
30 |     w2, x2, y2, z2 = w.pow(2), x.pow(2), y.pow(2), z.pow(2)
31 |     wx, wy, wz = w * x, w * y, w * z
32 |     xy, xz, yz = x * y, x * z, y * z
33 | 
34 |     rotMat = torch.stack([
35 |         w2 + x2 - y2 - z2, 2 * xy - 2 * wz, 2 * wy + 2 * xz, 2 * wz + 2 * xy,
36 |         w2 - x2 + y2 - z2, 2 * yz - 2 * wx, 2 * xz - 2 * wy, 2 * wx + 2 * yz,
37 |         w2 - x2 - y2 + z2
38 |     ],
39 |                          dim=1).view(batch_size, 3, 3)
40 |     return rotMat
41 | 
42 | 
43 | def batch_rodrigues(axisang):
44 |     # axisang N x 3
45 |     axisang_norm = torch.norm(axisang + 1e-8, p=2, dim=1)
46 |     angle = torch.unsqueeze(axisang_norm, -1)
47 |     axisang_normalized = torch.div(axisang, angle)
48 |     angle = angle * 0.5
49 |     v_cos = torch.cos(angle)
50 |     v_sin = torch.sin(angle)
51 |     quat = torch.cat([v_cos, v_sin * axisang_normalized], dim=1)
52 |     rot_mat = quat2mat(quat)
53 |     rot_mat = rot_mat.view(rot_mat.shape[0], 9)
54 |     return rot_mat
55 | 
56 | 
57 | def th_get_axis_angle(vector):
58 |     angle = torch.norm(vector, 2, 1)
59 |     axes = vector / angle.unsqueeze(1)
60 |     return axes, angle
61 | 
62 | 
63 | if __name__ == '__main__':
64 |     parser = argparse.ArgumentParser()
65 |     parser.add_argument('--batch_size', default=1, type=int)
66 |     parser.add_argument('--cuda', action='store_true')
67 |     args = parser.parse_args()
68 | 
69 |     argutils.print_args(args)
70 | 
71 |     n_components = 6
72 |     rot = 3
73 |     inputs = torch.rand(args.batch_size, rot)
74 |     inputs_var = Variable(inputs.double(), requires_grad=True)
75 |     if args.cuda:
76 |         inputs = inputs.cuda()
77 |     # outputs = batch_rodrigues(inputs)
78 |     test_function = gradcheck(batch_rodrigues, (inputs_var, ))
79 |     print('batch test passed !')
80 | 
81 |     # NOTE: the script went on to gradcheck th_cv2_rod_sub_id and th_cv2_rod,
82 |     # but neither name is defined in this module, so those calls raised
83 |     # NameError at runtime; they are kept below, disabled, for reference.
84 |     # inputs = torch.rand(rot)
85 |     # inputs_var = Variable(inputs.double(), requires_grad=True)
86 |     # test_function = gradcheck(th_cv2_rod_sub_id.apply, (inputs_var, ))
87 |     # print('th_cv2_rod test passed')
88 |     # test_th = gradcheck(th_cv2_rod.apply, (inputs_var, ))
89 |     # print('th_cv2_rod_id test passed !')
90 | 
-------------------------------------------------------------------------------- /transformers/pytorch_transformers/convert_xlm_checkpoint_to_pytorch.py: --------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2018 The HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | #     http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
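# Hypothetical invocation of the script below (the checkpoint filename is a
# placeholder for an XLM PyTorch dump):
#
#   python convert_xlm_checkpoint_to_pytorch.py \
#       --xlm_checkpoint_path ./mlm_en_2048.pth \
#       --pytorch_dump_folder_path ./xlm-pytorch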
15 | """Convert OpenAI GPT checkpoint.""" 16 | 17 | from __future__ import absolute_import, division, print_function 18 | 19 | import argparse 20 | import json 21 | from io import open 22 | 23 | import torch 24 | import numpy 25 | 26 | from pytorch_transformers.modeling_utils import CONFIG_NAME, WEIGHTS_NAME 27 | from pytorch_transformers.tokenization_xlm import VOCAB_FILES_NAMES 28 | 29 | import logging 30 | logging.basicConfig(level=logging.INFO) 31 | 32 | def convert_xlm_checkpoint_to_pytorch(xlm_checkpoint_path, pytorch_dump_folder_path): 33 | # Load checkpoint 34 | chkpt = torch.load(xlm_checkpoint_path, map_location='cpu') 35 | 36 | model = chkpt['model'] 37 | 38 | config = chkpt['params'] 39 | config = dict((n, v) for n, v in config.items() if not isinstance(v, (torch.FloatTensor, numpy.ndarray))) 40 | 41 | vocab = chkpt['dico_word2id'] 42 | vocab = dict((s + '' if s.find('@@') == -1 and i > 13 else s.replace('@@', ''), i) for s, i in vocab.items()) 43 | 44 | # Save pytorch-model 45 | pytorch_weights_dump_path = pytorch_dump_folder_path + '/' + WEIGHTS_NAME 46 | pytorch_config_dump_path = pytorch_dump_folder_path + '/' + CONFIG_NAME 47 | pytorch_vocab_dump_path = pytorch_dump_folder_path + '/' + VOCAB_FILES_NAMES['vocab_file'] 48 | 49 | print("Save PyTorch model to {}".format(pytorch_weights_dump_path)) 50 | torch.save(model, pytorch_weights_dump_path) 51 | 52 | print("Save configuration file to {}".format(pytorch_config_dump_path)) 53 | with open(pytorch_config_dump_path, "w", encoding="utf-8") as f: 54 | f.write(json.dumps(config, indent=2) + "\n") 55 | 56 | print("Save vocab file to {}".format(pytorch_config_dump_path)) 57 | with open(pytorch_vocab_dump_path, "w", encoding="utf-8") as f: 58 | f.write(json.dumps(vocab, indent=2) + "\n") 59 | 60 | 61 | if __name__ == "__main__": 62 | parser = argparse.ArgumentParser() 63 | ## Required parameters 64 | parser.add_argument("--xlm_checkpoint_path", 65 | default = None, 66 | type = str, 67 | required = True, 68 | help = "Path the official PyTorch dump.") 69 | parser.add_argument("--pytorch_dump_folder_path", 70 | default = None, 71 | type = str, 72 | required = True, 73 | help = "Path to the output PyTorch model.") 74 | args = parser.parse_args() 75 | convert_xlm_checkpoint_to_pytorch(args.xlm_checkpoint_path, args.pytorch_dump_folder_path) 76 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/convert_gpt2_checkpoint_to_pytorch.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The HuggingFace Inc. team. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | """Convert OpenAI GPT checkpoint.""" 16 | 17 | from __future__ import absolute_import, division, print_function 18 | 19 | import argparse 20 | from io import open 21 | 22 | import torch 23 | 24 | from pytorch_transformers.modeling_gpt2 import (CONFIG_NAME, WEIGHTS_NAME, 25 | GPT2Config, 26 | GPT2Model, 27 | load_tf_weights_in_gpt2) 28 | 29 | import logging 30 | logging.basicConfig(level=logging.INFO) 31 | 32 | 33 | def convert_gpt2_checkpoint_to_pytorch(gpt2_checkpoint_path, gpt2_config_file, pytorch_dump_folder_path): 34 | # Construct model 35 | if gpt2_config_file == "": 36 | config = GPT2Config() 37 | else: 38 | config = GPT2Config(gpt2_config_file) 39 | model = GPT2Model(config) 40 | 41 | # Load weights from numpy 42 | load_tf_weights_in_gpt2(model, config, gpt2_checkpoint_path) 43 | 44 | # Save pytorch-model 45 | pytorch_weights_dump_path = pytorch_dump_folder_path + '/' + WEIGHTS_NAME 46 | pytorch_config_dump_path = pytorch_dump_folder_path + '/' + CONFIG_NAME 47 | print("Save PyTorch model to {}".format(pytorch_weights_dump_path)) 48 | torch.save(model.state_dict(), pytorch_weights_dump_path) 49 | print("Save configuration file to {}".format(pytorch_config_dump_path)) 50 | with open(pytorch_config_dump_path, "w", encoding="utf-8") as f: 51 | f.write(config.to_json_string()) 52 | 53 | 54 | if __name__ == "__main__": 55 | parser = argparse.ArgumentParser() 56 | ## Required parameters 57 | parser.add_argument("--gpt2_checkpoint_path", 58 | default = None, 59 | type = str, 60 | required = True, 61 | help = "Path the TensorFlow checkpoint path.") 62 | parser.add_argument("--pytorch_dump_folder_path", 63 | default = None, 64 | type = str, 65 | required = True, 66 | help = "Path to the output PyTorch model.") 67 | parser.add_argument("--gpt2_config_file", 68 | default = "", 69 | type = str, 70 | help = "An optional config json file corresponding to the pre-trained OpenAI model. \n" 71 | "This specifies the model architecture.") 72 | args = parser.parse_args() 73 | convert_gpt2_checkpoint_to_pytorch(args.gpt2_checkpoint_path, 74 | args.gpt2_config_file, 75 | args.pytorch_dump_folder_path) 76 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/convert_openai_checkpoint_to_pytorch.py: -------------------------------------------------------------------------------- 1 | # coding=utf-8 2 | # Copyright 2018 The HuggingFace Inc. team. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | """Convert OpenAI GPT checkpoint.""" 16 | 17 | from __future__ import absolute_import, division, print_function 18 | 19 | import argparse 20 | from io import open 21 | 22 | import torch 23 | 24 | from pytorch_transformers.modeling_openai import (CONFIG_NAME, WEIGHTS_NAME, 25 | OpenAIGPTConfig, 26 | OpenAIGPTModel, 27 | load_tf_weights_in_openai_gpt) 28 | 29 | import logging 30 | logging.basicConfig(level=logging.INFO) 31 | 32 | 33 | def convert_openai_checkpoint_to_pytorch(openai_checkpoint_folder_path, openai_config_file, pytorch_dump_folder_path): 34 | # Construct model 35 | if openai_config_file == "": 36 | config = OpenAIGPTConfig() 37 | else: 38 | config = OpenAIGPTConfig(openai_config_file) 39 | model = OpenAIGPTModel(config) 40 | 41 | # Load weights from numpy 42 | load_tf_weights_in_openai_gpt(model, config, openai_checkpoint_folder_path) 43 | 44 | # Save pytorch-model 45 | pytorch_weights_dump_path = pytorch_dump_folder_path + '/' + WEIGHTS_NAME 46 | pytorch_config_dump_path = pytorch_dump_folder_path + '/' + CONFIG_NAME 47 | print("Save PyTorch model to {}".format(pytorch_weights_dump_path)) 48 | torch.save(model.state_dict(), pytorch_weights_dump_path) 49 | print("Save configuration file to {}".format(pytorch_config_dump_path)) 50 | with open(pytorch_config_dump_path, "w", encoding="utf-8") as f: 51 | f.write(config.to_json_string()) 52 | 53 | 54 | if __name__ == "__main__": 55 | parser = argparse.ArgumentParser() 56 | ## Required parameters 57 | parser.add_argument("--openai_checkpoint_folder_path", 58 | default = None, 59 | type = str, 60 | required = True, 61 | help = "Path the TensorFlow checkpoint path.") 62 | parser.add_argument("--pytorch_dump_folder_path", 63 | default = None, 64 | type = str, 65 | required = True, 66 | help = "Path to the output PyTorch model.") 67 | parser.add_argument("--openai_config_file", 68 | default = "", 69 | type = str, 70 | help = "An optional config json file corresponding to the pre-trained OpenAI model. \n" 71 | "This specifies the model architecture.") 72 | args = parser.parse_args() 73 | convert_openai_checkpoint_to_pytorch(args.openai_checkpoint_folder_path, 74 | args.openai_config_file, 75 | args.pytorch_dump_folder_path) 76 | -------------------------------------------------------------------------------- /tore/utils/metric_pampjpe.py: -------------------------------------------------------------------------------- 1 | """ 2 | Functions for compuing Procrustes alignment and reconstruction error 3 | 4 | Parts of the code are adapted from https://github.com/akanazawa/hmr 5 | 6 | """ 7 | from __future__ import absolute_import 8 | from __future__ import division 9 | from __future__ import print_function 10 | import numpy as np 11 | 12 | def compute_similarity_transform(S1, S2): 13 | """Computes a similarity transform (sR, t) that takes 14 | a set of 3D points S1 (3 x N) closest to a set of 3D points S2, 15 | where R is an 3x3 rotation matrix, t 3x1 translation, s scale. 16 | i.e. solves the orthogonal Procrutes problem. 17 | """ 18 | transposed = False 19 | if S1.shape[0] != 3 and S1.shape[0] != 2: 20 | S1 = S1.T 21 | S2 = S2.T 22 | transposed = True 23 | assert(S2.shape[1] == S1.shape[1]) 24 | 25 | # 1. Remove mean. 26 | mu1 = S1.mean(axis=1, keepdims=True) 27 | mu2 = S2.mean(axis=1, keepdims=True) 28 | X1 = S1 - mu1 29 | X2 = S2 - mu2 30 | 31 | # 2. Compute variance of X1 used for scale. 32 | var1 = np.sum(X1**2) 33 | 34 | # 3. The outer product of X1 and X2. 35 | K = X1.dot(X2.T) 36 | 37 | # 4. 
Solution that Maximizes trace(R'K) is R=U*V', where U, V are 38 | # singular vectors of K. 39 | U, s, Vh = np.linalg.svd(K) 40 | V = Vh.T 41 | # Construct Z that fixes the orientation of R to get det(R)=1. 42 | Z = np.eye(U.shape[0]) 43 | Z[-1, -1] *= np.sign(np.linalg.det(U.dot(V.T))) 44 | # Construct R. 45 | R = V.dot(Z.dot(U.T)) 46 | 47 | # 5. Recover scale. 48 | scale = np.trace(R.dot(K)) / var1 49 | 50 | # 6. Recover translation. 51 | t = mu2 - scale*(R.dot(mu1)) 52 | 53 | # 7. Error: 54 | S1_hat = scale*R.dot(S1) + t 55 | 56 | if transposed: 57 | S1_hat = S1_hat.T 58 | 59 | return S1_hat 60 | 61 | def compute_similarity_transform_batch(S1, S2): 62 | """Batched version of compute_similarity_transform.""" 63 | S1_hat = np.zeros_like(S1) 64 | for i in range(S1.shape[0]): 65 | S1_hat[i] = compute_similarity_transform(S1[i], S2[i]) 66 | return S1_hat 67 | 68 | def reconstruction_error(S1, S2, reduction='mean'): 69 | """Do Procrustes alignment and compute reconstruction error.""" 70 | S1_hat = compute_similarity_transform_batch(S1, S2) 71 | re = np.sqrt( ((S1_hat - S2)** 2).sum(axis=-1)).mean(axis=-1) 72 | if reduction == 'mean': 73 | re = re.mean() 74 | elif reduction == 'sum': 75 | re = re.sum() 76 | return re 77 | 78 | 79 | def reconstruction_error_v2(S1, S2, J24_TO_J14, reduction='mean'): 80 | """Do Procrustes alignment and compute reconstruction error.""" 81 | S1_hat = compute_similarity_transform_batch(S1, S2) 82 | S1_hat = S1_hat[:,J24_TO_J14,:] 83 | S2 = S2[:,J24_TO_J14,:] 84 | re = np.sqrt( ((S1_hat - S2)** 2).sum(axis=-1)).mean(axis=-1) 85 | if reduction == 'mean': 86 | re = re.mean() 87 | elif reduction == 'sum': 88 | re = re.sum() 89 | return re 90 | 91 | def get_alignMesh(S1, S2, reduction='mean'): 92 | """Do Procrustes alignment and compute reconstruction error.""" 93 | S1_hat = compute_similarity_transform_batch(S1, S2) 94 | re = np.sqrt( ((S1_hat - S2)** 2).sum(axis=-1)).mean(axis=-1) 95 | if reduction == 'mean': 96 | re = re.mean() 97 | elif reduction == 'sum': 98 | re = re.sum() 99 | return re, S1_hat, S2 100 | -------------------------------------------------------------------------------- /manopth/examples/manopth_demo.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | 3 | from matplotlib import pyplot as plt 4 | from mpl_toolkits.mplot3d import Axes3D 5 | import torch 6 | from tqdm import tqdm 7 | 8 | from manopth import argutils 9 | from manopth.manolayer import ManoLayer 10 | from manopth.demo import display_hand 11 | 12 | if __name__ == '__main__': 13 | parser = argparse.ArgumentParser() 14 | parser.add_argument('--batch_size', default=1, type=int) 15 | parser.add_argument('--cuda', action='store_true') 16 | parser.add_argument( 17 | '--no_display', 18 | action='store_true', 19 | help="Disable display output of ManoLayer given random inputs") 20 | parser.add_argument('--side', default='left', choices=['left', 'right']) 21 | parser.add_argument('--random_shape', action='store_true', help="Random hand shape") 22 | parser.add_argument('--rand_mag', type=float, default=1, help="Controls pose variability") 23 | parser.add_argument( 24 | '--flat_hand_mean', 25 | action='store_true', 26 | help="Use flat hand as mean instead of average hand pose") 27 | parser.add_argument( 28 | '--iters', 29 | type=int, 30 | default=1, 31 | help= 32 | "Use for quick profiling of forward and backward pass accross ManoLayer" 33 | ) 34 | parser.add_argument('--mano_root', default='mano/models') 35 | 
parser.add_argument('--root_rot_mode', default='axisang', choices=['rot6d', 'axisang']) 36 | parser.add_argument('--no_pca', action='store_true', help="Give axis-angle or rotation matrix as inputs instead of PCA coefficients") 37 | parser.add_argument('--joint_rot_mode', default='axisang', choices=['rotmat', 'axisang'], help="Joint rotation inputs") 38 | parser.add_argument( 39 | '--mano_ncomps', default=6, type=int, help="Number of PCA components") 40 | args = parser.parse_args() 41 | 42 | argutils.print_args(args) 43 | 44 | layer = ManoLayer( 45 | flat_hand_mean=args.flat_hand_mean, 46 | side=args.side, 47 | mano_root=args.mano_root, 48 | ncomps=args.mano_ncomps, 49 | use_pca=not args.no_pca, 50 | root_rot_mode=args.root_rot_mode, 51 | joint_rot_mode=args.joint_rot_mode) 52 | if args.root_rot_mode == 'axisang': 53 | rot = 3 54 | else: 55 | rot = 6 56 | print(rot) 57 | if args.no_pca: 58 | args.mano_ncomps = 45 59 | 60 | # Generate random pose coefficients 61 | pose_params = args.rand_mag * torch.rand(args.batch_size, args.mano_ncomps + rot) 62 | pose_params.requires_grad = True 63 | if args.random_shape: 64 | shape = torch.rand(args.batch_size, 10) 65 | else: 66 | shape = torch.zeros(1) # Hack to act like None for PyTorch JIT 67 | if args.cuda: 68 | pose_params = pose_params.cuda() 69 | shape = shape.cuda() 70 | layer.cuda() 71 | 72 | # Loop for forward/backward quick profiling 73 | for idx in tqdm(range(args.iters)): 74 | # Forward pass 75 | verts, Jtr = layer(pose_params, th_betas=shape) 76 | 77 | # Backward pass 78 | loss = torch.norm(verts) 79 | loss.backward() 80 | 81 | if not args.no_display: 82 | verts, Jtr = layer(pose_params, th_betas=shape) 83 | joints = Jtr.cpu().detach() 84 | verts = verts.cpu().detach() 85 | 86 | # Draw obtained vertices and joints 87 | display_hand({ 88 | 'verts': verts, 89 | 'joints': joints 90 | }, 91 | mano_faces=layer.th_faces) 92 | -------------------------------------------------------------------------------- /tore/modeling/hrnet/config/default.py: -------------------------------------------------------------------------------- 1 | 2 | # ------------------------------------------------------------------------------ 3 | # Copyright (c) Microsoft 4 | # Licensed under the MIT License. 
5 | # Written by Bin Xiao (Bin.Xiao@microsoft.com) 6 | # Modified by Ke Sun (sunk@mail.ustc.edu.cn) 7 | # ------------------------------------------------------------------------------ 8 | 9 | from __future__ import absolute_import 10 | from __future__ import division 11 | from __future__ import print_function 12 | 13 | import os 14 | 15 | from yacs.config import CfgNode as CN 16 | 17 | 18 | _C = CN() 19 | 20 | _C.OUTPUT_DIR = '' 21 | _C.LOG_DIR = '' 22 | _C.DATA_DIR = '' 23 | _C.GPUS = (0,) 24 | _C.WORKERS = 4 25 | _C.PRINT_FREQ = 20 26 | _C.AUTO_RESUME = False 27 | _C.PIN_MEMORY = True 28 | _C.RANK = 0 29 | 30 | # Cudnn related params 31 | _C.CUDNN = CN() 32 | _C.CUDNN.BENCHMARK = True 33 | _C.CUDNN.DETERMINISTIC = False 34 | _C.CUDNN.ENABLED = True 35 | 36 | # common params for NETWORK 37 | _C.MODEL = CN() 38 | _C.MODEL.NAME = 'cls_hrnet' 39 | _C.MODEL.INIT_WEIGHTS = True 40 | _C.MODEL.PRETRAINED = '' 41 | _C.MODEL.NUM_JOINTS = 17 42 | _C.MODEL.NUM_CLASSES = 1000 43 | _C.MODEL.TAG_PER_JOINT = True 44 | _C.MODEL.TARGET_TYPE = 'gaussian' 45 | _C.MODEL.IMAGE_SIZE = [256, 256] # width * height, ex: 192 * 256 46 | _C.MODEL.HEATMAP_SIZE = [64, 64] # width * height, ex: 24 * 32 47 | _C.MODEL.SIGMA = 2 48 | _C.MODEL.EXTRA = CN(new_allowed=True) 49 | 50 | _C.LOSS = CN() 51 | _C.LOSS.USE_OHKM = False 52 | _C.LOSS.TOPK = 8 53 | _C.LOSS.USE_TARGET_WEIGHT = True 54 | _C.LOSS.USE_DIFFERENT_JOINTS_WEIGHT = False 55 | 56 | # DATASET related params 57 | _C.DATASET = CN() 58 | _C.DATASET.ROOT = '' 59 | _C.DATASET.DATASET = 'mpii' 60 | _C.DATASET.TRAIN_SET = 'train' 61 | _C.DATASET.TEST_SET = 'valid' 62 | _C.DATASET.DATA_FORMAT = 'jpg' 63 | _C.DATASET.HYBRID_JOINTS_TYPE = '' 64 | _C.DATASET.SELECT_DATA = False 65 | 66 | # training data augmentation 67 | _C.DATASET.FLIP = True 68 | _C.DATASET.SCALE_FACTOR = 0.25 69 | _C.DATASET.ROT_FACTOR = 30 70 | _C.DATASET.PROB_HALF_BODY = 0.0 71 | _C.DATASET.NUM_JOINTS_HALF_BODY = 8 72 | _C.DATASET.COLOR_RGB = False 73 | 74 | # train 75 | _C.TRAIN = CN() 76 | 77 | _C.TRAIN.LR_FACTOR = 0.1 78 | _C.TRAIN.LR_STEP = [90, 110] 79 | _C.TRAIN.LR = 0.001 80 | 81 | _C.TRAIN.OPTIMIZER = 'adam' 82 | _C.TRAIN.MOMENTUM = 0.9 83 | _C.TRAIN.WD = 0.0001 84 | _C.TRAIN.NESTEROV = False 85 | _C.TRAIN.GAMMA1 = 0.99 86 | _C.TRAIN.GAMMA2 = 0.0 87 | 88 | _C.TRAIN.BEGIN_EPOCH = 0 89 | _C.TRAIN.END_EPOCH = 140 90 | 91 | _C.TRAIN.RESUME = False 92 | _C.TRAIN.CHECKPOINT = '' 93 | 94 | _C.TRAIN.BATCH_SIZE_PER_GPU = 32 95 | _C.TRAIN.SHUFFLE = True 96 | 97 | # testing 98 | _C.TEST = CN() 99 | 100 | # size of images for each device 101 | _C.TEST.BATCH_SIZE_PER_GPU = 32 102 | # Test Model Epoch 103 | _C.TEST.FLIP_TEST = False 104 | _C.TEST.POST_PROCESS = False 105 | _C.TEST.SHIFT_HEATMAP = False 106 | 107 | _C.TEST.USE_GT_BBOX = False 108 | 109 | # nms 110 | _C.TEST.IMAGE_THRE = 0.1 111 | _C.TEST.NMS_THRE = 0.6 112 | _C.TEST.SOFT_NMS = False 113 | _C.TEST.OKS_THRE = 0.5 114 | _C.TEST.IN_VIS_THRE = 0.0 115 | _C.TEST.COCO_BBOX_FILE = '' 116 | _C.TEST.BBOX_THRE = 1.0 117 | _C.TEST.MODEL_FILE = '' 118 | 119 | # debug 120 | _C.DEBUG = CN() 121 | _C.DEBUG.DEBUG = False 122 | _C.DEBUG.SAVE_BATCH_IMAGES_GT = False 123 | _C.DEBUG.SAVE_BATCH_IMAGES_PRED = False 124 | _C.DEBUG.SAVE_HEATMAPS_GT = False 125 | _C.DEBUG.SAVE_HEATMAPS_PRED = False 126 | 127 | 128 | def update_config(cfg, config_file): 129 | cfg.defrost() 130 | cfg.merge_from_file(config_file) 131 | cfg.freeze() 132 | 133 | 134 | if __name__ == '__main__': 135 | import sys 136 | with open(sys.argv[1], 'w') as f: 137 | print(_C, file=f) 138 | 139 | 
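# Usage sketch for this config (illustrative only; the YAML filename is a
# placeholder for an experiment file):
#
#   cfg = _C.clone()
#   update_config(cfg, 'experiments/cls_hrnet_w64.yaml')
#   print(cfg.MODEL.NAME, cfg.TRAIN.LR)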
-------------------------------------------------------------------------------- /tore/utils/logger.py: --------------------------------------------------------------------------------
1 | # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
2 | import logging
3 | import os
4 | import sys
5 | from logging import StreamHandler, Handler, getLevelName
6 | 
7 | 
8 | # this class is a copy of logging.FileHandler except we call self.close()
9 | # at the end of each emit. While closing and reopening the file after each
10 | # write is not efficient, it allows us to see partial logs when writing to
11 | # fused Azure blobs, which is very convenient
12 | class FileHandler(StreamHandler):
13 |     """
14 |     A handler class which writes formatted logging records to disk files.
15 |     """
16 |     def __init__(self, filename, mode='a', encoding=None, delay=False):
17 |         """
18 |         Open the specified file and use it as the stream for logging.
19 |         """
20 |         # Issue #27493: add support for Path objects to be passed in
21 |         filename = os.fspath(filename)
22 |         #keep the absolute path, otherwise derived classes which use this
23 |         #may come a cropper when the current directory changes
24 |         self.baseFilename = os.path.abspath(filename)
25 |         self.mode = mode
26 |         self.encoding = encoding
27 |         self.delay = delay
28 |         if delay:
29 |             #We don't open the stream, but we still need to call the
30 |             #Handler constructor to set level, formatter, lock etc.
31 |             Handler.__init__(self)
32 |             self.stream = None
33 |         else:
34 |             StreamHandler.__init__(self, self._open())
35 | 
36 |     def close(self):
37 |         """
38 |         Closes the stream.
39 |         """
40 |         self.acquire()
41 |         try:
42 |             try:
43 |                 if self.stream:
44 |                     try:
45 |                         self.flush()
46 |                     finally:
47 |                         stream = self.stream
48 |                         self.stream = None
49 |                         if hasattr(stream, "close"):
50 |                             stream.close()
51 |             finally:
52 |                 # Issue #19523: call unconditionally to
53 |                 # prevent a handler leak when delay is set
54 |                 StreamHandler.close(self)
55 |         finally:
56 |             self.release()
57 | 
58 |     def _open(self):
59 |         """
60 |         Open the current base file with the (original) mode and encoding.
61 |         Return the resulting stream.
62 |         """
63 |         return open(self.baseFilename, self.mode, encoding=self.encoding)
64 | 
65 |     def emit(self, record):
66 |         """
67 |         Emit a record.
68 | 
69 |         If the stream was not opened because 'delay' was specified in the
70 |         constructor, open it before calling the superclass's emit.
71 | """ 72 | if self.stream is None: 73 | self.stream = self._open() 74 | StreamHandler.emit(self, record) 75 | self.close() 76 | 77 | def __repr__(self): 78 | level = getLevelName(self.level) 79 | return '<%s %s (%s)>' % (self.__class__.__name__, self.baseFilename, level) 80 | 81 | 82 | def setup_logger(name, save_dir, distributed_rank, filename="log.txt"): 83 | logger = logging.getLogger(name) 84 | logger.setLevel(logging.DEBUG) 85 | # don't log results for the non-master process 86 | if distributed_rank > 0: 87 | return logger 88 | ch = logging.StreamHandler(stream=sys.stdout) 89 | ch.setLevel(logging.DEBUG) 90 | formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s") 91 | ch.setFormatter(formatter) 92 | logger.addHandler(ch) 93 | 94 | if save_dir: 95 | fh = FileHandler(os.path.join(save_dir, filename)) 96 | fh.setLevel(logging.DEBUG) 97 | fh.setFormatter(formatter) 98 | logger.addHandler(fh) 99 | 100 | return logger 101 | -------------------------------------------------------------------------------- /manopth/mano/webuser/verts.py: -------------------------------------------------------------------------------- 1 | ''' 2 | Copyright 2017 Javier Romero, Dimitrios Tzionas, Michael J Black and the Max Planck Gesellschaft. All rights reserved. 3 | This software is provided for research purposes only. 4 | By using this software you agree to the terms of the MANO/SMPL+H Model license here http://mano.is.tue.mpg.de/license 5 | 6 | More information about MANO/SMPL+H is available at http://mano.is.tue.mpg.de. 7 | For comments or questions, please email us at: mano@tue.mpg.de 8 | 9 | 10 | About this file: 11 | ================ 12 | This file defines a wrapper for the loading functions of the MANO model. 13 | 14 | Modules included: 15 | - load_model: 16 | loads the MANO model from a given file location (i.e. a .pkl file location), 17 | or a dictionary object. 
18 | 19 | ''' 20 | 21 | 22 | import chumpy 23 | import mano.webuser.lbs as lbs 24 | from mano.webuser.posemapper import posemap 25 | import scipy.sparse as sp 26 | from chumpy.ch import MatVecMult 27 | 28 | 29 | def ischumpy(x): 30 | return hasattr(x, 'dterms') 31 | 32 | 33 | def verts_decorated(trans, 34 | pose, 35 | v_template, 36 | J_regressor, 37 | weights, 38 | kintree_table, 39 | bs_style, 40 | f, 41 | bs_type=None, 42 | posedirs=None, 43 | betas=None, 44 | shapedirs=None, 45 | want_Jtr=False): 46 | 47 | for which in [ 48 | trans, pose, v_template, weights, posedirs, betas, shapedirs 49 | ]: 50 | if which is not None: 51 | assert ischumpy(which) 52 | 53 | v = v_template 54 | 55 | if shapedirs is not None: 56 | if betas is None: 57 | betas = chumpy.zeros(shapedirs.shape[-1]) 58 | v_shaped = v + shapedirs.dot(betas) 59 | else: 60 | v_shaped = v 61 | 62 | if posedirs is not None: 63 | v_posed = v_shaped + posedirs.dot(posemap(bs_type)(pose)) 64 | else: 65 | v_posed = v_shaped 66 | 67 | v = v_posed 68 | 69 | if sp.issparse(J_regressor): 70 | J_tmpx = MatVecMult(J_regressor, v_shaped[:, 0]) 71 | J_tmpy = MatVecMult(J_regressor, v_shaped[:, 1]) 72 | J_tmpz = MatVecMult(J_regressor, v_shaped[:, 2]) 73 | J = chumpy.vstack((J_tmpx, J_tmpy, J_tmpz)).T 74 | else: 75 | assert (ischumpy(J)) 76 | 77 | assert (bs_style == 'lbs') 78 | result, Jtr = lbs.verts_core( 79 | pose, v, J, weights, kintree_table, want_Jtr=True, xp=chumpy) 80 | 81 | tr = trans.reshape((1, 3)) 82 | result = result + tr 83 | Jtr = Jtr + tr 84 | 85 | result.trans = trans 86 | result.f = f 87 | result.pose = pose 88 | result.v_template = v_template 89 | result.J = J 90 | result.J_regressor = J_regressor 91 | result.weights = weights 92 | result.kintree_table = kintree_table 93 | result.bs_style = bs_style 94 | result.bs_type = bs_type 95 | if posedirs is not None: 96 | result.posedirs = posedirs 97 | result.v_posed = v_posed 98 | if shapedirs is not None: 99 | result.shapedirs = shapedirs 100 | result.betas = betas 101 | result.v_shaped = v_shaped 102 | if want_Jtr: 103 | result.J_transformed = Jtr 104 | return result 105 | 106 | 107 | def verts_core(pose, 108 | v, 109 | J, 110 | weights, 111 | kintree_table, 112 | bs_style, 113 | want_Jtr=False, 114 | xp=chumpy): 115 | 116 | if xp == chumpy: 117 | assert (hasattr(pose, 'dterms')) 118 | assert (hasattr(v, 'dterms')) 119 | assert (hasattr(J, 'dterms')) 120 | assert (hasattr(weights, 'dterms')) 121 | 122 | assert (bs_style == 'lbs') 123 | result = lbs.verts_core(pose, v, J, weights, kintree_table, want_Jtr, xp) 124 | return result 125 | -------------------------------------------------------------------------------- /tore/utils/tsv_file_ops.py: -------------------------------------------------------------------------------- 1 | """ 2 | Copyright (c) Microsoft Corporation. 3 | Licensed under the MIT license. 
4 | 
5 | Basic operations for TSV files
6 | """
7 | 
8 | import errno  # needed by find_file_path_in_yaml below
9 | import os
10 | import os.path as op
11 | import json
12 | import numpy as np
13 | import base64
14 | import cv2
15 | from tqdm import tqdm
16 | import yaml
17 | from tore.utils.miscellaneous import mkdir
18 | from tore.utils.tsv_file import TSVFile
19 | 
20 | 
21 | def img_from_base64(imagestring):
22 |     try:
23 |         jpgbytestring = base64.b64decode(imagestring)
24 |         nparr = np.frombuffer(jpgbytestring, np.uint8)
25 |         r = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
26 |         return r
27 |     except ValueError:
28 |         return None
29 | 
30 | def load_linelist_file(linelist_file):
31 |     if linelist_file is not None:
32 |         line_list = []
33 |         with open(linelist_file, 'r') as fp:
34 |             for i in fp:
35 |                 line_list.append(int(i.strip()))
36 |         return line_list
37 | 
38 | def tsv_writer(values, tsv_file, sep='\t'):
39 |     mkdir(op.dirname(tsv_file))
40 |     lineidx_file = op.splitext(tsv_file)[0] + '.lineidx'
41 |     idx = 0
42 |     tsv_file_tmp = tsv_file + '.tmp'
43 |     lineidx_file_tmp = lineidx_file + '.tmp'
44 |     with open(tsv_file_tmp, 'w') as fp, open(lineidx_file_tmp, 'w') as fpidx:
45 |         assert values is not None
46 |         for value in values:
47 |             assert value is not None
48 |             value = [v if type(v)!=bytes else v.decode('utf-8') for v in value]
49 |             v = '{0}\n'.format(sep.join(map(str, value)))
50 |             fp.write(v)
51 |             fpidx.write(str(idx) + '\n')
52 |             idx = idx + len(v)
53 |     os.rename(tsv_file_tmp, tsv_file)
54 |     os.rename(lineidx_file_tmp, lineidx_file)
55 | 
56 | def tsv_reader(tsv_file, sep='\t'):
57 |     with open(tsv_file, 'r') as fp:
58 |         for i, line in enumerate(fp):
59 |             yield [x.strip() for x in line.split(sep)]
60 | 
61 | def config_save_file(tsv_file, save_file=None, append_str='.new.tsv'):
62 |     if save_file is not None:
63 |         return save_file
64 |     return op.splitext(tsv_file)[0] + append_str
65 | 
66 | def get_line_list(linelist_file=None, num_rows=None):
67 |     if linelist_file is not None:
68 |         return load_linelist_file(linelist_file)
69 | 
70 |     if num_rows is not None:
71 |         return [i for i in range(num_rows)]
72 | 
73 | def generate_hw_file(img_file, save_file=None):
74 |     rows = tsv_reader(img_file)
75 |     def gen_rows():
76 |         for i, row in tqdm(enumerate(rows)):
77 |             row1 = [row[0]]
78 |             img = img_from_base64(row[-1])
79 |             height = img.shape[0]
80 |             width = img.shape[1]
81 |             row1.append(json.dumps([{"height":height, "width": width}]))
82 |             yield row1
83 | 
84 |     save_file = config_save_file(img_file, save_file, '.hw.tsv')
85 |     tsv_writer(gen_rows(), save_file)
86 | 
87 | def generate_linelist_file(label_file, save_file=None, ignore_attrs=()):
88 |     # generate a list of images that have labels
89 |     # images with only ignore labels are not selected.
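    # Clarifying note on the filter below: a row is skipped only when
    # ignore_attrs is non-empty and every label on that row carries at least
    # one truthy ignored attribute (e.g. ignore_attrs=('iscrowd',), an
    # illustrative attribute name).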
90 | line_list = [] 91 | rows = tsv_reader(label_file) 92 | for i, row in tqdm(enumerate(rows)): 93 | labels = json.loads(row[1]) 94 | if labels: 95 | if ignore_attrs and all([any([lab[attr] for attr in ignore_attrs if attr in lab]) \ 96 | for lab in labels]): 97 | continue 98 | line_list.append([i]) 99 | 100 | save_file = config_save_file(label_file, save_file, '.linelist.tsv') 101 | tsv_writer(line_list, save_file) 102 | 103 | def load_from_yaml_file(yaml_file): 104 | with open(yaml_file, 'r') as fp: 105 | return yaml.load(fp, Loader=yaml.CLoader) 106 | 107 | def find_file_path_in_yaml(fname, root): 108 | if fname is not None: 109 | if op.isfile(fname): 110 | return fname 111 | elif op.isfile(op.join(root, fname)): 112 | return op.join(root, fname) 113 | else: 114 | raise FileNotFoundError( 115 | errno.ENOENT, os.strerror(errno.ENOENT), op.join(root, fname) 116 | ) 117 | -------------------------------------------------------------------------------- /transformers/pytorch_transformers/tests/fixtures/sample_text.txt: -------------------------------------------------------------------------------- 1 | This text is included to make sure Unicode is handled properly: 力加勝北区ᴵᴺᵀᵃছজটডণত 2 | Text should be one-sentence-per-line, with empty lines between documents. 3 | This sample text is public domain and was randomly selected from Project Guttenberg. 4 | 5 | The rain had only ceased with the gray streaks of morning at Blazing Star, and the settlement awoke to a moral sense of cleanliness, and the finding of forgotten knives, tin cups, and smaller camp utensils, where the heavy showers had washed away the debris and dust heaps before the cabin doors. 6 | Indeed, it was recorded in Blazing Star that a fortunate early riser had once picked up on the highway a solid chunk of gold quartz which the rain had freed from its incumbering soil, and washed into immediate and glittering popularity. 7 | Possibly this may have been the reason why early risers in that locality, during the rainy season, adopted a thoughtful habit of body, and seldom lifted their eyes to the rifted or india-ink washed skies above them. 8 | "Cass" Beard had risen early that morning, but not with a view to discovery. 9 | A leak in his cabin roof,--quite consistent with his careless, improvident habits,--had roused him at 4 A. M., with a flooded "bunk" and wet blankets. 10 | The chips from his wood pile refused to kindle a fire to dry his bed-clothes, and he had recourse to a more provident neighbor's to supply the deficiency. 11 | This was nearly opposite. 12 | Mr. Cassius crossed the highway, and stopped suddenly. 13 | Something glittered in the nearest red pool before him. 14 | Gold, surely! 15 | But, wonderful to relate, not an irregular, shapeless fragment of crude ore, fresh from Nature's crucible, but a bit of jeweler's handicraft in the form of a plain gold ring. 16 | Looking at it more attentively, he saw that it bore the inscription, "May to Cass." 17 | Like most of his fellow gold-seekers, Cass was superstitious. 18 | 19 | The fountain of classic wisdom, Hypatia herself. 20 | As the ancient sage--the name is unimportant to a monk--pumped water nightly that he might study by day, so I, the guardian of cloaks and parasols, at the sacred doors of her lecture-room, imbibe celestial knowledge. 21 | From my youth I felt in me a soul above the matter-entangled herd. 22 | She revealed to me the glorious fact, that I am a spark of Divinity itself. 23 | A fallen star, I am, sir!' 
continued he, pensively, stroking his lean stomach--'a fallen star!--fallen, if the dignity of philosophy will allow of the simile, among the hogs of the lower world--indeed, even into the hog-bucket itself. Well, after all, I will show you the way to the Archbishop's. 24 | There is a philosophic pleasure in opening one's treasures to the modest young. 25 | Perhaps you will assist me by carrying this basket of fruit?' And the little man jumped up, put his basket on Philammon's head, and trotted off up a neighbouring street. 26 | Philammon followed, half contemptuous, half wondering at what this philosophy might be, which could feed the self-conceit of anything so abject as his ragged little apish guide; 27 | but the novel roar and whirl of the street, the perpetual stream of busy faces, the line of curricles, palanquins, laden asses, camels, elephants, which met and passed him, and squeezed him up steps and into doorways, as they threaded their way through the great Moon-gate into the ample street beyond, drove everything from his mind but wondering curiosity, and a vague, helpless dread of that great living wilderness, more terrible than any dead wilderness of sand which he had left behind. 28 | Already he longed for the repose, the silence of the Laura--for faces which knew him and smiled upon him; but it was too late to turn back now. 29 | His guide held on for more than a mile up the great main street, crossed in the centre of the city, at right angles, by one equally magnificent, at each end of which, miles away, appeared, dim and distant over the heads of the living stream of passengers, the yellow sand-hills of the desert; 30 | while at the end of the vista in front of them gleamed the blue harbour, through a network of countless masts. 31 | At last they reached the quay at the opposite end of the street; 32 | and there burst on Philammon's astonished eyes a vast semicircle of blue sea, ringed with palaces and towers. 33 | He stopped involuntarily; and his little guide stopped also, and looked askance at the young monk, to watch the effect which that grand panorama should produce on him. 34 | -------------------------------------------------------------------------------- /transformers/docs/source/converting_tensorflow_models.rst: -------------------------------------------------------------------------------- 1 | Converting Tensorflow Checkpoints 2 | ================================================ 3 | 4 | A command-line interface is provided to convert a TensorFlow checkpoint in a PyTorch dump of the ``BertForPreTraining`` class (for BERT) or NumPy checkpoint in a PyTorch dump of the ``OpenAIGPTModel`` class (for OpenAI GPT). 5 | 6 | BERT 7 | ^^^^ 8 | 9 | You can convert any TensorFlow checkpoint for BERT (in particular `the pre-trained models released by Google `_\ ) in a PyTorch save file by using the `convert_tf_checkpoint_to_pytorch.py `_ script. 10 | 11 | This CLI takes as input a TensorFlow checkpoint (three files starting with ``bert_model.ckpt``\ ) and the associated configuration file (\ ``bert_config.json``\ ), and creates a PyTorch model for this configuration, loads the weights from the TensorFlow checkpoint in the PyTorch model and saves the resulting model in a standard PyTorch save file that can be imported using ``torch.load()`` (see examples in `run_bert_extract_features.py `_\ , `run_bert_classifier.py `_ and `run_bert_squad.py `_\ ). 12 | 13 | You only need to run this conversion script **once** to get a PyTorch model. 
You can then disregard the TensorFlow checkpoint (the three files starting with ``bert_model.ckpt``\ ) but be sure to keep the configuration file (\ ``bert_config.json``\ ) and the vocabulary file (\ ``vocab.txt``\ ) as these are needed for the PyTorch model too.
14 | 
15 | To run this specific conversion script you will need to have TensorFlow and PyTorch installed (\ ``pip install tensorflow``\ ). The rest of the repository only requires PyTorch.
16 | 
17 | Here is an example of the conversion process for a pre-trained ``BERT-Base Uncased`` model:
18 | 
19 | .. code-block:: shell
20 | 
21 |    export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12
22 | 
23 |    pytorch_transformers bert \
24 |      $BERT_BASE_DIR/bert_model.ckpt \
25 |      $BERT_BASE_DIR/bert_config.json \
26 |      $BERT_BASE_DIR/pytorch_model.bin
27 | 
28 | You can download Google's pre-trained models for the conversion `here `__.
29 | 
30 | OpenAI GPT
31 | ^^^^^^^^^^
32 | 
33 | Here is an example of the conversion process for a pre-trained OpenAI GPT model, assuming that your NumPy checkpoint is saved in the same format as the OpenAI pretrained model (see `here `__\ ).
34 | 
35 | .. code-block:: shell
36 | 
37 |    export OPENAI_GPT_CHECKPOINT_FOLDER_PATH=/path/to/openai/pretrained/numpy/weights
38 | 
39 |    pytorch_transformers gpt \
40 |      $OPENAI_GPT_CHECKPOINT_FOLDER_PATH \
41 |      $PYTORCH_DUMP_OUTPUT \
42 |      [OPENAI_GPT_CONFIG]
43 | 
44 | Transformer-XL
45 | ^^^^^^^^^^^^^^
46 | 
47 | Here is an example of the conversion process for a pre-trained Transformer-XL model (see `here `__\ ).
48 | 
49 | .. code-block:: shell
50 | 
51 |    export TRANSFO_XL_CHECKPOINT_FOLDER_PATH=/path/to/transfo/xl/checkpoint
52 | 
53 |    pytorch_transformers transfo_xl \
54 |      $TRANSFO_XL_CHECKPOINT_FOLDER_PATH \
55 |      $PYTORCH_DUMP_OUTPUT \
56 |      [TRANSFO_XL_CONFIG]
57 | 
58 | GPT-2
59 | ^^^^^
60 | 
61 | Here is an example of the conversion process for a pre-trained OpenAI GPT-2 model.
62 | 
63 | .. code-block:: shell
64 | 
65 |    export GPT2_DIR=/path/to/gpt2/checkpoint
66 | 
67 |    pytorch_transformers gpt2 \
68 |      $GPT2_DIR/model.ckpt \
69 |      $PYTORCH_DUMP_OUTPUT \
70 |      [GPT2_CONFIG]
71 | 
72 | XLNet
73 | ^^^^^
74 | 
75 | Here is an example of the conversion process for a pre-trained XLNet model, fine-tuned on STS-B using the TensorFlow script:
76 | 
77 | .. code-block:: shell
78 | 
79 |    export XLNET_CHECKPOINT_PATH=/path/to/xlnet/checkpoint
80 |    export XLNET_CONFIG_PATH=/path/to/xlnet/config
81 | 
82 |    pytorch_transformers xlnet \
83 |      $XLNET_CHECKPOINT_PATH \
84 |      $XLNET_CONFIG_PATH \
85 |      $PYTORCH_DUMP_OUTPUT \
86 |      STS-B \
87 | 
--------------------------------------------------------------------------------