├── Makefile ├── README.md ├── conf.py ├── index.rst ├── make.bat ├── recon.sh ├── requirements.txt ├── source ├── ARDemo │ ├── ARdemo.rst │ ├── ar_demo.png │ ├── capture_app.png │ ├── pipeline.png │ └── supported_devices.png ├── conf.py ├── index.rst └── introduction │ ├── intro.rst │ └── openxrlab_logo.png └── tag.pdf /Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line, and also 5 | # from the environment for the first two. 6 | SPHINXOPTS ?= 7 | SPHINXBUILD ?= sphinx-build 8 | SOURCEDIR = source 9 | BUILDDIR = build 10 | 11 | # Put it first so that "make" without argument is like "make help". 12 | help: 13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 14 | 15 | .PHONY: help Makefile 16 | 17 | # Catch-all target: route all unknown targets to Sphinx using the new 18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 19 | %: Makefile 20 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 21 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # xrdocument 2 | 3 | **Welcome to OpenXRLab!** 4 | 5 | ## Introduction 6 | xrdocument offers a summary introduction to the OpenXRLab group and an index of every repo. In addition, we provide an AR demo by connecting [XRSLAM](https://github.com/openxrlab/xrslam), [XRSfM](https://github.com/openxrlab/xrsfm) and [XRLocalization](https://github.com/openxrlab/xrlocalization). Refer to the [Doc](http://doc.openxrlab.org.cn/openxrlab_document/ARDemo/ARdemo.html) to build it and play around with the demo! 
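The docs in this repo are a standard Sphinx project (see the Makefile and requirements.txt above), so they can be built locally; a minimal sketch, assuming Python 3 and pip are available:

```shell
# Install the pinned documentation toolchain (Sphinx, theme, extensions)
pip install -r requirements.txt
# Build the HTML docs; output lands in build/html
# (SOURCEDIR/BUILDDIR are set in the Makefile)
make html
```

Open build/html/index.html in a browser to preview the result.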
8 | 9 | ## OpenXRLab Document Address 10 | http://doc.openxrlab.org.cn/openxrlab_document/ 11 | -------------------------------------------------------------------------------- /conf.py: -------------------------------------------------------------------------------- 1 | # Configuration file for the Sphinx documentation builder. 2 | # 3 | # This file only contains a selection of the most common options. For a full 4 | # list see the documentation: 5 | # https://www.sphinx-doc.org/en/master/usage/configuration.html 6 | 7 | # -- Path setup -------------------------------------------------------------- 8 | 9 | # If extensions (or modules to document with autodoc) are in another directory, 10 | # add these directories to sys.path here. If the directory is relative to the 11 | # documentation root, use os.path.abspath to make it absolute, like shown here. 12 | # 13 | # import os 14 | # import sys 15 | # sys.path.insert(0, os.path.abspath('.')) 16 | 17 | 18 | # -- Project information ----------------------------------------------------- 19 | 20 | project = 'openxrlab' 21 | copyright = '2022, Sensetime' 22 | author = 'Sensetime' 23 | 24 | # The full version, including alpha/beta/rc tags 25 | release = 'v0.1.0' 26 | 27 | 28 | # -- General configuration --------------------------------------------------- 29 | 30 | # Add any Sphinx extension module names here, as strings. They can be 31 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 32 | # ones. 33 | extensions = [ 34 | 'myst_parser', 35 | 'sphinx_markdown_tables', 36 | ] 37 | 38 | # Add any paths that contain templates here, relative to this directory. 39 | templates_path = ['_templates'] 40 | 41 | # The language for content autogenerated by Sphinx. Refer to documentation 42 | # for a list of supported languages. 43 | # 44 | # This is also used if you do content translation via gettext catalogs. 45 | # Usually you set "language" from the command line for these cases. 
46 | language = 'zh_CN' 47 | 48 | # List of patterns, relative to source directory, that match files and 49 | # directories to ignore when looking for source files. 50 | # This pattern also affects html_static_path and html_extra_path. 51 | exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] 52 | 53 | 54 | # -- Options for HTML output ------------------------------------------------- 55 | 56 | # The theme to use for HTML and HTML Help pages. See the documentation for 57 | # a list of builtin themes. 58 | # 59 | 60 | # Earlier theme choices, superseded by pytorch_sphinx_theme below. 61 | # html_theme = 'sphinx_rtd_theme' 62 | # html_theme = 'alabaster' 63 | 64 | import pytorch_sphinx_theme 65 | html_theme = 'pytorch_sphinx_theme' 66 | html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()] 67 | copybutton_prompt_text = r'>>> |\.\.\. ' 68 | copybutton_prompt_is_regexp = True 69 | 70 | # Add any paths that contain custom static files (such as style sheets) here, 71 | # relative to this directory. They are copied after the builtin static files, 72 | # so a file named "default.css" will overwrite the builtin "default.css". 73 | html_static_path = ['_static'] 74 | 75 | html_css_files = ['css/readthedocs.css'] 76 | 77 | -------------------------------------------------------------------------------- /index.rst: -------------------------------------------------------------------------------- 1 | .. OpenXRLabDoc master file, created by 2 | sphinx-quickstart on Wed Apr 20 11:45:54 2022. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | Welcome to OpenXRLab! 7 | ============================================ 8 | 9 | .. toctree:: 10 | :maxdepth: 2 11 | :caption: OpenXRLab Introduction 12 | 13 | introduction/intro 14 | 15 | .. 
toctree:: 16 | :maxdepth: 2 17 | :caption: OpenXRLab ARdemo 18 | 19 | ARDemo/ARdemo 20 | 21 | -------------------------------------------------------------------------------- /make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | pushd %~dp0 4 | 5 | REM Command file for Sphinx documentation 6 | 7 | if "%SPHINXBUILD%" == "" ( 8 | set SPHINXBUILD=sphinx-build 9 | ) 10 | set SOURCEDIR=source 11 | set BUILDDIR=build 12 | 13 | if "%1" == "" goto help 14 | 15 | %SPHINXBUILD% >NUL 2>NUL 16 | if errorlevel 9009 ( 17 | echo. 18 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 19 | echo.installed, then set the SPHINXBUILD environment variable to point 20 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 21 | echo.may add the Sphinx directory to PATH. 22 | echo. 23 | echo.If you don't have Sphinx installed, grab it from 24 | echo.http://sphinx-doc.org/ 25 | exit /b 1 26 | ) 27 | 28 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% 29 | goto end 30 | 31 | :help 32 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% 33 | 34 | :end 35 | popd 36 | -------------------------------------------------------------------------------- /recon.sh: -------------------------------------------------------------------------------- 1 | # Desired folder tree 2 | # └── XRARdemo 3 | # ├── XRLocalization 4 | # ├── XRSfm 5 | # ├── data -> DATA_DIR 6 | # ├── rawdata/data.bin -> RAW_DATA_PATH 7 | # └── save_reconstruction -> MODEL_DIR 8 | # └── xrdocument 9 | # ├── recon.sh 10 | 11 | DATA_DIR='' 12 | RAW_DATA_PATH='' 13 | MODEL_DIR='' 14 | 15 | SFM_DIR=${MODEL_DIR}/sfm/ 16 | TAG_SFM_DIR=${MODEL_DIR}/tag_sfm/ 17 | REFINE_DIR=${MODEL_DIR}/refine/ 18 | IMAGE_DIR=${DATA_DIR}/images/ 19 | CAMERA_TXT=${DATA_DIR}/camera.txt 20 | 21 | mkdir -p ${MODEL_DIR} 22 | mkdir -p ${SFM_DIR} 23 | mkdir -p ${TAG_SFM_DIR} 24 | mkdir -p ${REFINE_DIR} 25 | mkdir -p ${IMAGE_DIR} 26 | 27 | # 
unpacking data 28 | echo 'Step 0: Unpacking data' 29 | cd ../xrsfm 30 | [[ -f ${CAMERA_TXT} ]] || ./bin/unpack_collect_data ${RAW_DATA_PATH} ${DATA_DIR} || { echo 'unpacking data failed' ; exit 1; } 31 | 32 | # setup database 33 | echo 'Step 1: Setting up database' 34 | cd ../XRLocalization 35 | export PYTHONPATH=$PYTHONPATH:`pwd` 36 | 37 | [[ -f ${SFM_DIR}/database.db ]] || python tools/ir_create_database.py --image_dir ${IMAGE_DIR} --database_path ${SFM_DIR}/database.db || { echo 'create database failed' ; exit 1; } 38 | [[ -f ${SFM_DIR}/retrieval.txt ]] || python tools/ir_image_retrieve.py --retrieve_num 100 --database_path ${SFM_DIR}/database.db --save_path ${SFM_DIR}/retrieval.txt || { echo 'image retrieval failed' ; exit 1; } 39 | 40 | # run sfm 41 | echo 'Step 2: Running Sfm' 42 | cd ../xrsfm 43 | [[ -f ${SFM_DIR}/ftr.bin ]] || ./bin/run_matching ${IMAGE_DIR} ${SFM_DIR}/retrieval.txt sequential ${SFM_DIR} || { echo 'sift matching failed' ; exit 1; } 44 | [[ -f ${SFM_DIR}/images.bin ]] || ./bin/run_reconstruction ${SFM_DIR} ${CAMERA_TXT} ${SFM_DIR} || { echo 'sequential recon failed' ; exit 1; } 45 | [[ -f ${TAG_SFM_DIR}/cameras.bin ]] || ./bin/estimate_scale ${IMAGE_DIR} ${SFM_DIR} ${TAG_SFM_DIR} || { echo 'recover scale failed' ; exit 1; } 46 | 47 | # extract/match with superpoint 48 | echo 'Step 3: Re-extract/match with superpoint' 49 | cd ../XRLocalization 50 | [[ -f ${SFM_DIR}/features.bin ]] || python tools/recon_feature_extract.py --image_dir ${IMAGE_DIR} --image_bin_path ${SFM_DIR}/images.bin --feature_bin_path ${SFM_DIR}/features.bin || { echo 'extract superpoint failed' ; exit 1; } 51 | [[ -f ${SFM_DIR}/matching.bin ]] || python tools/recon_feature_match.py --recon_path ${SFM_DIR} --feature_bin_path ${SFM_DIR}/features.bin --match_bin_path ${SFM_DIR}/matching.bin || { echo 'matching superpoint failed' ; exit 1; } 52 | 53 | # re-triangulate 54 | echo 'Step 4: Re-triangulate' 55 | cd ../xrsfm 56 | [[ -f ${REFINE_DIR}/cameras.bin ]] || 
./bin/run_triangulation ${SFM_DIR} ${SFM_DIR}/features.bin ${SFM_DIR}/matching.bin ${REFINE_DIR} || { echo 'run triangulation failed' ; exit 1; } 57 | 58 | # prepare for localization service 59 | echo 'Step 5: Prepare for localization service' 60 | cd ../XRLocalization 61 | python tools/loc_convert_reconstruction.py --feature_path ${SFM_DIR}/features.bin --model_path ${REFINE_DIR} --output_path ${MODEL_DIR} || { echo 'convert recon failed' ; exit 1; } 62 | python tools/ir_create_database.py --image_dir ${IMAGE_DIR} --database_path ${MODEL_DIR}/database.bin --image_bin_path ${SFM_DIR}/images.bin || { echo 'create database for loc server failed' ; exit 1; } 63 | 64 | 65 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | sphinx==4.0.2 2 | -e git+https://github.com/open-mmlab/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme 3 | sphinx_copybutton 4 | sphinx_markdown_tables 5 | myst-parser 6 | 7 | -------------------------------------------------------------------------------- /source/ARDemo/ARdemo.rst: -------------------------------------------------------------------------------- 1 | 1. Introduction 2 | ----------------- 3 | The goal of VR/AR is the seamless integration of virtual scenes and real scenes. 4 | In order to achieve accurate registration of virtual and real scenes, 5 | we plan to develop a spatial positioning technology that couples local positioning with global positioning, that is, a Cloud-End combined high-precision real-time positioning scheme. 6 | The global positioning technology ensures the absolute accuracy of spatial positioning, while the local positioning technology ensures the real-time performance and local accuracy of positioning. 7 | The current ARDemo preliminarily implements this series of basic capabilities and can be used as a basic reference. 8 | 9 | .. 
image:: pipeline.png 10 | :width: 1000px 11 | 12 | 2. QuickStart 13 | ----------------- 14 | To see the AR demo on your iPhone, you need to perform the following steps: 15 | 16 | 1) Data capture on your phone 17 | >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 18 | 19 | + Get the capture application and install it on a supported phone 20 | Capture your own data with an iPhone. Download the capture application from TestFlight (https://testflight.apple.com/join/PBZAiZce). 21 | It is recommended to upgrade to iOS 16 or above. 22 | The iPhone models available for the capture application are listed below: 23 | 24 | .. image:: supported_devices.png 25 | :width: 1000px 26 | 27 | + Recover the real-world metric scale through markers 28 | Currently we recover the real-world metric scale through markers, so you need to post several (usually 2-3) AprilTags (https://github.com/openxrlab/xrdocument/blob/dev/tag.pdf) with different IDs in your room before recording data, as in the picture on the right of the figure below. 29 | The capture application looks like this. Choose a suitable fps (e.g., 3 fps) and click the red button to start recording. After you have finished, open the folder and send the data to your personal computer via AirDrop. The recorded data will be a binary file ending in '.bin'. 30 | 31 | .. image:: capture_app.png 32 | :width: 600px 33 | 34 | 2) Preparation 35 | >>>>>>>>>>>>>>>>> 36 | Before this step, make sure you have cloned XRSfM and XRLocalization successfully. See the installation pages of XRSfM and XRLocalization. 37 | 38 | + Build xrardemo_workspace 39 | 40 | .. 
code-block:: bash 41 | 42 | mkdir XRARdemo 43 | cd XRARdemo 44 | git clone https://github.com/openxrlab/xrsfm.git 45 | git clone https://github.com/openxrlab/xrlocalization.git 46 | git clone https://github.com/openxrlab/xrdocument.git 47 | mkdir rawdata # put RawData.bin here from your phone 48 | mkdir data # unpack_collect_data path 49 | mkdir save_reconstruction # SfM output (MODEL_DIR) 50 | 51 | The resulting layout is as below: 52 | └── XRARdemo 53 | ├── XRLocalization 54 | ├── XRSfm 55 | ├── data 56 | ├── rawdata 57 | ├── save_reconstruction 58 | └── xrdocument 59 | └── recon.sh 60 | 61 | + Fill in the paths in recon.sh and run it 62 | First configure the paths (DATA_DIR, RAW_DATA_PATH, MODEL_DIR) in recon.sh (https://github.com/openxrlab/xrdocument/blob/dev/recon.sh), then run: bash recon.sh 63 | 64 | + Deploy the positioning service and run it on your own server. 65 | 66 | .. code-block:: bash 67 | 68 | cd /path/to/XRLocalization 69 | python run_web_server.py --map_path /path/to/reconstruction_model --port 3000 70 | 71 | You can also use another port, as long as it is accessible and not blocked by a firewall. 72 | Also make sure you can connect to the server from your iPhone. 73 | One way is to deploy the visual positioning service on a server with a WAN IP. 74 | The other is to set up a LAN and connect both the server and your iPhone to it. 75 | To test if the connection is established, visit ``http://ip:port`` in the browser of your iPhone. 76 | You will see a 'Hello' if it is successful. 77 | 78 | + Build the SLAM AR app and install it on your iPhone, following the XRSLAM doc: https://github.com/openxrlab/xrslam 79 | 80 | 3) Check the demo result 81 | >>>>>>>>>>>>>>>>>>>>>>>> 82 | 83 | .. image:: ar_demo.png 84 | :width: 1000px 85 | 86 | 87 | .. raw:: html 88 | 89 | 90 | 91 | The GUI of the AR demo application looks like the picture above. You need to click the 'VLoc' button to switch to the mode with visual positioning service. To see the AR logo, click the start button. A toast labeling 'Step 1: Initializing the SLAM' will appear and you should move in a curved trajectory to make the SLAM initialize well. 
After that, another toast labeling 'Step 2' will replace the original one and remind you to keep your phone facing forward. In this step, the system will try to localize your 6 DoF pose through the visual positioning service. Generally, these two steps take no more than 3 seconds in total. After that, you can add new AR objects by tapping the screen. More details about the GUI can be found in XRSLAM. 92 | -------------------------------------------------------------------------------- /source/ARDemo/ar_demo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openxrlab/xrdocument/1adcd0b6e216213df670a5a9bf2abf0b83f7ae6a/source/ARDemo/ar_demo.png -------------------------------------------------------------------------------- /source/ARDemo/capture_app.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openxrlab/xrdocument/1adcd0b6e216213df670a5a9bf2abf0b83f7ae6a/source/ARDemo/capture_app.png -------------------------------------------------------------------------------- /source/ARDemo/pipeline.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openxrlab/xrdocument/1adcd0b6e216213df670a5a9bf2abf0b83f7ae6a/source/ARDemo/pipeline.png -------------------------------------------------------------------------------- /source/ARDemo/supported_devices.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openxrlab/xrdocument/1adcd0b6e216213df670a5a9bf2abf0b83f7ae6a/source/ARDemo/supported_devices.png -------------------------------------------------------------------------------- /source/conf.py: -------------------------------------------------------------------------------- 1 | # Configuration file for the Sphinx documentation builder. 
2 | # 3 | # This file only contains a selection of the most common options. For a full 4 | # list see the documentation: 5 | # https://www.sphinx-doc.org/en/master/usage/configuration.html 6 | 7 | # -- Path setup -------------------------------------------------------------- 8 | 9 | # If extensions (or modules to document with autodoc) are in another directory, 10 | # add these directories to sys.path here. If the directory is relative to the 11 | # documentation root, use os.path.abspath to make it absolute, like shown here. 12 | # 13 | # import os 14 | # import sys 15 | # sys.path.insert(0, os.path.abspath('.')) 16 | 17 | 18 | # -- Project information ----------------------------------------------------- 19 | 20 | project = 'OpenXRLab' 21 | copyright = '2022, openxrlab' 22 | author = 'openxrlab' 23 | 24 | 25 | # -- General configuration --------------------------------------------------- 26 | 27 | # Add any Sphinx extension module names here, as strings. They can be 28 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 29 | # ones. 30 | extensions = [ 31 | ] 32 | 33 | # Add any paths that contain templates here, relative to this directory. 34 | templates_path = ['_templates'] 35 | 36 | # List of patterns, relative to source directory, that match files and 37 | # directories to ignore when looking for source files. 38 | # This pattern also affects html_static_path and html_extra_path. 39 | exclude_patterns = [] 40 | 41 | 42 | # -- Options for HTML output ------------------------------------------------- 43 | 44 | # The theme to use for HTML and HTML Help pages. See the documentation for 45 | # a list of builtin themes. 46 | 47 | html_theme = 'sphinx_rtd_theme' 48 | 49 | # Add any paths that contain custom static files (such as style sheets) here, 50 | # relative to this directory. They are copied after the builtin static files, 51 | # so a file named "default.css" will overwrite the builtin "default.css". 
52 | html_static_path = ['_static'] 53 | -------------------------------------------------------------------------------- /source/index.rst: -------------------------------------------------------------------------------- 1 | .. image:: openxrlab_logo.png 2 | :width: 500px 3 | 4 | Welcome to OpenXRLab! 5 | ============================================ 6 | 7 | .. toctree:: 8 | :maxdepth: 2 9 | :caption: Introduction 10 | 11 | introduction/intro 12 | 13 | .. toctree:: 14 | :maxdepth: 2 15 | :caption: AR Demo 16 | 17 | ARDemo/ARdemo 18 | 19 | -------------------------------------------------------------------------------- /source/introduction/intro.rst: -------------------------------------------------------------------------------- 1 | .. image:: openxrlab_logo.png 2 | :width: 500px 3 | 4 | 5 | 1. Background 6 | --------------- 7 | With the rise of the Metaverse, XR is becoming a booming industry all around the world. However, at present, tools related to XR are relatively scattered, and compatibility problems are common in practice. In order to resolve this dilemma, a well-integrated toolchain in the spirit of OpenMMLab and HuggingFace was born on demand: **OpenXRLab**. 8 | OpenXRLab is an open source platform in the XR field that is built on a unified foundation and implements various application algorithms, which are easy to use both independently and jointly. 9 | 10 | 2. Infrastructure 11 | ------------------- 12 | We plan to sort out the commonly used third-party libraries (OpenCV, Open3D, PyTorch3D) and establish a unified basic computing library for content production. This basic function computing library is named XRPrimer; its code repository hosts the underlying basic computing library of OpenXRLab, and the sorting and interface encapsulation of the third-party libraries has been completed. 13 | Algorithm implementations are expected to use the classes and functions redefined in the XRPrimer library, which better supports data exchange and computation calls for XR-related algorithms, and remedies some of the shortcomings of current open source libraries. 14 | XRPrimer is a unified "back-end library" based on the OpenCV, Open3D and PyTorch3D third-party libraries: a middle layer between public open source libraries such as PyTorch/OpenCV and downstream algorithm applications. 15 | It provides more convenient access to third-party libraries for integration and targeted optimization. Meanwhile, it also provides more convenient tool libraries for development. At the same time, it keeps original usage habits as much as possible and provides convenient installation packages through precompilation. 16 | XRPrimer will not do too much function encapsulation or computational reimplementation (unless the investment will improve the actual effect and efficiency). 17 | 18 | 3. Algorithm application 19 | ------------------------- 20 | Just like the thriving ecosystem of CV, virtual reality not only needs the basic capability of the underlying computing, but also needs more types of downstream algorithm applications to plug in, so as to promote the continuous optimization of both the algorithms and the underlying computing. 21 | At present, we are actively promoting the migration of related applications such as NeRF (neural rendering), MoCap (motion capture) and SfM/SLAM (localization/reconstruction) into the ecosystem, so as to build a complete content-algorithm research and development ecosystem with interoperability of underlying computing protocols. 22 | 23 | We hope you all find this platform useful and helpful! 24 | 25 | 4. OpenXRLab Framework 26 | ------------------------ 27 | .. 
code-block:: bash 28 | 29 | +-------------------------------------+ +-----------------------------------------+ +-----------------------------+ 30 | | Spatial computing | | Multi-modal Human Computer Interaction | | Rendering | 31 | +-------------------------------------+ +-----------------------------------------+ +-----------------------------+ 32 | +--------+ +-------+ +----------------+ +-------------------+ +-------------------+ +-------------+ +-------------+ 33 | | XRSLAM | | XRSFM | | XRLocalization | | XRMocap | | XRMoGen | | XRNeRF | | ... | 34 | +--------+ +-------+ +----------------+ +-------------------+ +-------------------+ +-------------+ +-------------+ 35 | ----------------------------------------------======= BINARY RELEASE =======--------------------------------------- 36 | +-----++--------------------------------------------------------------------------++------------------------------+ 37 | | L || C++/Python API || C/C++ struct | 38 | | o |+--------------------------------------------------------------------------+| Python struct | 39 | | g |+-----------------------------++-------------------------------------------+| ... 
| 40 | | || XR tools || Basic libraries || | 41 | +-----++-----------------------------++-------------------------------------------++------------------------------+ 42 | +-----------------------------------------------------------------------------------------------------------------+ 43 | | XR Infrastructure Platform | 44 | +-----------------------------------------------------------------------------------------------------------------+ 45 | 46 | + XRPrimer : https://github.com/openxrlab/xrprimer 47 | + XRSLAM : https://github.com/openxrlab/xrslam 48 | + XRSfm : https://github.com/openxrlab/xrsfm 49 | + XRLocalization : https://github.com/openxrlab/xrlocalization 50 | + XRMocap : https://github.com/openxrlab/xrmocap 51 | + XRMoGen : https://github.com/openxrlab/xrmogen 52 | + XRNeRF : https://github.com/openxrlab/xrnerf -------------------------------------------------------------------------------- /source/introduction/openxrlab_logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openxrlab/xrdocument/1adcd0b6e216213df670a5a9bf2abf0b83f7ae6a/source/introduction/openxrlab_logo.png -------------------------------------------------------------------------------- /tag.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/openxrlab/xrdocument/1adcd0b6e216213df670a5a9bf2abf0b83f7ae6a/tag.pdf --------------------------------------------------------------------------------