├── .github
└── ISSUE_TEMPLATE
│ ├── bug_report.md
│ └── feature_request.md
├── .gitignore
├── LICENSE
├── config
└── extension.toml
├── data
├── icon.png
└── preview.png
├── demo
├── .thumbs
│ └── 256x256
│ │ └── Ogn_controller.usd.png
├── demo_depth.npy
├── demo_rgb.png
└── demo_waypoints.txt
├── docs
├── CHANGELOG.md
├── README.md
├── installation.md
└── subsections
│ ├── building_own_digital_twin.md
│ ├── contribution_guide.md
│ ├── installation.md
│ └── running_example.md
├── isaacsim
└── oceansim
│ ├── modules
│ ├── SensorExample_python
│ │ ├── __init__.py
│ │ ├── extension.py
│ │ ├── global_variables.py
│ │ ├── scenario.py
│ │ └── ui_builder.py
│ └── colorpicker_python
│ │ ├── __init__.py
│ │ ├── extension.py
│ │ ├── global_variables.py
│ │ ├── scenario.py
│ │ └── ui_builder.py
│ ├── sensors
│ ├── BarometerSensor.py
│ ├── DVLsensor.py
│ ├── ImagingSonarSensor.py
│ └── UW_Camera.py
│ └── utils
│ ├── ImagingSonar_kernels.py
│ ├── MultivariateNormal.py
│ ├── MultivariateUniform.py
│ ├── UWrenderer_utils.py
│ ├── assets_utils.py
│ └── keyboard_cmd.py
└── media
├── caustics.gif
├── oceansim_demo.gif
├── oceansim_digital_twin.gif
├── oceansim_overall_framework.svg
├── pitch.png
└── semantic_editor.gif
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Create a report to help us improve
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Describe the bug**
11 | A clear and concise description of what the bug is.
12 |
13 | **To Reproduce**
14 | Steps to reproduce the behavior:
15 | 1. Go to '...'
16 | 2. Click on '....'
17 | 3. Scroll down to '....'
18 | 4. See error
19 |
20 | **Expected behavior**
21 | A clear and concise description of what you expected to happen.
22 |
23 | **Screenshots**
24 | If applicable, add screenshots to help explain your problem.
25 |
26 | **Desktop (please complete the following information):**
27 | - OS: [e.g. Ubuntu, Mint]
28 | - Isaac Sim Version [e.g. 4.5.0]
29 |
30 | **Additional context**
31 | Add any other context about the problem here.
32 |
33 | **Attachments**
34 | Attachments including videos/screenshots of the bug are encouraged, along with any terminal logs output by Isaac Sim
35 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature request
3 | about: Suggest an idea for this project
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Is your feature request related to a problem? Please describe.**
11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
12 |
13 | **Describe the solution you'd like**
14 | A clear and concise description of what you want to happen.
15 |
16 | **Describe alternatives you've considered**
17 | A clear and concise description of any alternative solutions or features you've considered.
18 |
19 | **Additional context**
20 | Add any other context or screenshots about the feature request here.
21 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | share/python-wheels/
24 | *.egg-info/
25 | .installed.cfg
26 | *.egg
27 | MANIFEST
28 |
29 | # PyInstaller
30 | # Usually these files are written by a python script from a template
31 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
32 | *.manifest
33 | *.spec
34 |
35 | # Installer logs
36 | pip-log.txt
37 | pip-delete-this-directory.txt
38 |
39 | # Unit test / coverage reports
40 | htmlcov/
41 | .tox/
42 | .nox/
43 | .coverage
44 | .coverage.*
45 | .cache
46 | nosetests.xml
47 | coverage.xml
48 | *.cover
49 | *.py,cover
50 | .hypothesis/
51 | .pytest_cache/
52 | cover/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | .pybuilder/
76 | target/
77 |
78 | # Jupyter Notebook
79 | .ipynb_checkpoints
80 |
81 | # IPython
82 | profile_default/
83 | ipython_config.py
84 |
85 | # pyenv
86 | # For a library or package, you might want to ignore these files since the code is
87 | # intended to run in multiple environments; otherwise, check them in:
88 | # .python-version
89 |
90 | # pipenv
91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
94 | # install all needed dependencies.
95 | #Pipfile.lock
96 |
97 | # UV
98 | # Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
99 | # This is especially recommended for binary packages to ensure reproducibility, and is more
100 | # commonly ignored for libraries.
101 | #uv.lock
102 |
103 | # poetry
104 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
105 | # This is especially recommended for binary packages to ensure reproducibility, and is more
106 | # commonly ignored for libraries.
107 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
108 | #poetry.lock
109 |
110 | # pdm
111 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
112 | #pdm.lock
113 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
114 | # in version control.
115 | # https://pdm.fming.dev/latest/usage/project/#working-with-version-control
116 | .pdm.toml
117 | .pdm-python
118 | .pdm-build/
119 |
120 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
121 | __pypackages__/
122 |
123 | # Celery stuff
124 | celerybeat-schedule
125 | celerybeat.pid
126 |
127 | # SageMath parsed files
128 | *.sage.py
129 |
130 | # Environments
131 | .env
132 | .venv
133 | env/
134 | venv/
135 | ENV/
136 | env.bak/
137 | venv.bak/
138 |
139 | # Spyder project settings
140 | .spyderproject
141 | .spyproject
142 |
143 | # Rope project settings
144 | .ropeproject
145 |
146 | # mkdocs documentation
147 | /site
148 |
149 | # mypy
150 | .mypy_cache/
151 | .dmypy.json
152 | dmypy.json
153 |
154 | # Pyre type checker
155 | .pyre/
156 |
157 | # pytype static type analyzer
158 | .pytype/
159 |
160 | # Cython debug symbols
161 | cython_debug/
162 |
163 | # PyCharm
164 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
165 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
166 | # and can be added to the global gitignore or merged into this file. For a more nuclear
167 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
168 | #.idea/
169 |
170 | # PyPI configuration file
171 | .pypirc
172 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) 2024-2025, The OceanSim Project Developers.
2 |
3 | All rights reserved.
4 |
5 | SPDX-License-Identifier: BSD-3-Clause
6 |
7 | Redistribution and use in source and binary forms, with or without modification,
8 | are permitted provided that the following conditions are met:
9 |
10 | 1. Redistributions of source code must retain the above copyright notice,
11 | this list of conditions and the following disclaimer.
12 |
13 | 2. Redistributions in binary form must reproduce the above copyright notice,
14 | this list of conditions and the following disclaimer in the documentation
15 | and/or other materials provided with the distribution.
16 |
17 | 3. Neither the name of the copyright holder nor the names of its contributors
18 | may be used to endorse or promote products derived from this software without
19 | specific prior written permission.
20 |
21 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
22 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
23 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
24 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
25 | ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
26 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
27 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
28 | ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
29 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
30 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------------------
/config/extension.toml:
--------------------------------------------------------------------------------
1 | [core]
2 | reloadable = true
3 | order = 0
4 |
5 | [package]
6 | version = "1.0.0"
7 | category = "Simulation"
8 | title = "OceanSim"
9 | description = "Underwater simulation utilities"
10 | authors = ["FRoG--University of Michigan"]
11 | repository = ""
12 | keywords = ["Underwater","Sensor","Scene"]
13 | changelog = "docs/CHANGELOG.md"
14 | readme = "docs/README.md"
15 | preview_image = "data/preview.png"
16 | icon = "data/icon.png"
17 |
18 |
19 | [dependencies]
20 | "omni.kit.uiapp" = {}
21 | "omni.isaac.ui" = {}
22 | "omni.isaac.core" = {}
23 |
24 |
25 | [[python.module]]
26 | name = "isaacsim.oceansim.modules.SensorExample_python"
27 |
28 | [[python.module]]
29 | name = "isaacsim.oceansim.modules.colorpicker_python"
30 |
31 |
32 |
33 |
--------------------------------------------------------------------------------
/data/icon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/umfieldrobotics/OceanSim/ed3e592b4ef25fd16e665af1816ca282123c0c04/data/icon.png
--------------------------------------------------------------------------------
/data/preview.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/umfieldrobotics/OceanSim/ed3e592b4ef25fd16e665af1816ca282123c0c04/data/preview.png
--------------------------------------------------------------------------------
/demo/.thumbs/256x256/Ogn_controller.usd.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/umfieldrobotics/OceanSim/ed3e592b4ef25fd16e665af1816ca282123c0c04/demo/.thumbs/256x256/Ogn_controller.usd.png
--------------------------------------------------------------------------------
/demo/demo_depth.npy:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/umfieldrobotics/OceanSim/ed3e592b4ef25fd16e665af1816ca282123c0c04/demo/demo_depth.npy
--------------------------------------------------------------------------------
/demo/demo_rgb.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/umfieldrobotics/OceanSim/ed3e592b4ef25fd16e665af1816ca282123c0c04/demo/demo_rgb.png
--------------------------------------------------------------------------------
/docs/CHANGELOG.md:
--------------------------------------------------------------------------------
1 | # Changelog
2 |
3 | ## [0.1.0] - 2025-01-08
4 |
5 | ### Added
6 |
7 | - Initial version of OceanSim Extension
8 |
--------------------------------------------------------------------------------
/docs/README.md:
--------------------------------------------------------------------------------
1 | # OceanSim: A GPU-Accelerated Underwater Robot Perception Simulation Framework
2 |
3 |
4 |
5 |
6 | [](https://umfieldrobotics.github.io/OceanSim/)
7 | [](https://docs.google.com/forms/d/e/1FAIpQLSfKWMhE4L6R4jjvEw_bfMtLigXbv5WZeijDah5vk2SpQZW1hA/viewform)
8 | [](https://arxiv.org/abs/2503.01074)
9 | [](https://docs.isaacsim.omniverse.nvidia.com/latest/index.html)
10 |
11 |
12 |
13 |
14 |
15 |
16 | OceanSim is a high-fidelity underwater simulation framework designed to accelerate the development of robust underwater perception solutions. Leveraging GPU-accelerated rendering and advanced physics-based techniques, OceanSim accurately models both visual and acoustic sensors, significantly reducing the simulation-to-real gap.
17 |
18 | ## Highlights
19 | 
20 |
21 |
22 | 🚀 **GPU-accelerated**: OceanSim fully leverages the power of GPU-based parallel computing. OceanSim is built on top of [NVIDIA Isaac Sim](https://developer.nvidia.com/isaac/sim) and is part of the [NVIDIA Omniverse](https://www.nvidia.com/en-us/omniverse/) ecosystem, which together provide high-performance, real-time rendering. \
23 | 🌊 **Physics-based underwater sensor rendering**: Experience realistic simulations with advanced physics models that accurately replicate underwater sensor data under varied conditions. \
24 | 🎨 **Efficient 3D workflows**: Users of OceanSim can enjoy efficient 3D workflows empowered by [OpenUSD](https://openusd.org/release/index.html). \
25 | 🤝 **Built by the community, for the community**: OceanSim is an open-source project and we invite the community to join us to keep improving it!
26 |
27 | 
28 |
29 |
30 |
31 |
32 | ## Latest Updates
33 | - `[2025/4]` OceanSim is featured by [NVIDIA Robotics](https://www.linkedin.com/posts/nvidiarobotics_robotics-underwaterrobotics-simulation-activity-7313986055894880257-Dfmq?utm_source=share&utm_medium=member_desktop&rcm=ACoAACB8Y7sB7ikB6wVGPL5NrxYkNwk8RTEJ-3Y)!
34 | - `[2025/4]` 🔥 Beta version of OceanSim is released!
35 | - `[2025/3]` 🎉 OceanSim will be presented at [AQ²UASIM](https://sites.google.com/view/aq2uasim/home?authuser=0) and the late-breaking poster session at [ICRA 2025](https://2025.ieee-icra.org/)!
36 | - `[2025/3]` OceanSim paper is available on arXiv. Check it out [here](https://arxiv.org/abs/2503.01074).
37 |
38 | ## TODO
39 | - [x] Documentation for OceanSim provided example
40 | - [x] Build-your-own-digital-twin documentation
41 | - [x] Code release
42 | - [ ] ROS bridge release
43 |
44 | ## Documentation
45 |
46 | We divide the documentation into three parts:
47 | - [Installation](subsections/installation.md)
48 | - [Running OceanSim](subsections/running_example.md)
49 | - [Building Your Own Digital Twins with OceanSim](subsections/building_own_digital_twin.md)
50 |
51 | ## Support and Contributing
52 | We welcome contributions and discussions from the community!
53 | - Use [Discussions](https://github.com/umfieldrobotics/OceanSim/discussions) to share your ideas and discuss with other users.
54 | - Report bugs or request features by opening an issue in [Issues](https://github.com/umfieldrobotics/OceanSim/issues).
55 | - Submit a pull request if you want to contribute to the codebase. Please include the description of your changes and the motivation behind them in the pull request. You can check more details in [CONTRIBUTING.md](./subsections/contribution_guide.md).
56 |
57 | ## Contributors
58 | OceanSim is an open-source project initiated by the [Field Robotics Group](https://fieldrobotics.engin.umich.edu/) (FRoG) at the University of Michigan. We hope to build a vibrant community around OceanSim and invite contributions from researchers and developers around the world! A big shoutout to our contributors:
59 |
60 | [Jingyu Song](https://song-jingyu.github.io/), [Haoyu Ma](https://haoyuma2002814.github.io/), [Onur Bagoren](https://www.obagoren.com/), [Advaith V. Sethuraman](https://www.advaiths.com/), [Yiting Zhang](https://sites.google.com/umich.edu/yitingzhang/), and [Katherine A. Skinner](https://fieldrobotics.engin.umich.edu/).
61 |
67 |
68 |
69 |
70 | ## Citation
71 | If you find OceanSim useful for your research, we would appreciate it if you cite our paper:
72 | ```
73 | @misc{song2025oceansim,
74 | title={OceanSim: A GPU-Accelerated Underwater Robot Perception Simulation Framework},
75 | author={Jingyu Song and Haoyu Ma and Onur Bagoren and Advaith V. Sethuraman and Yiting Zhang and Katherine A. Skinner},
76 | year={2025},
77 | eprint={2503.01074},
78 | archivePrefix={arXiv},
79 | primaryClass={cs.RO},
80 | url={https://arxiv.org/abs/2503.01074},
81 | }
82 | ```
83 | If you use the sonar model in OceanSim, please also cite the HoloOcean paper, as the HoloOcean sonar model inspired our sonar model implementation:
84 | ```
85 | @inproceedings{Potokar22iros,
86 | author = {E. Potokar and K. Lay and K. Norman and D. Benham and T. Neilsen and M. Kaess and J. Mangelson},
87 | title = {Holo{O}cean: Realistic Sonar Simulation},
88 | booktitle = {Proc. IEEE/RSJ Intl. Conf. Intelligent Robots and Systems, IROS},
89 | address = {Kyoto, Japan},
90 | month = {Oct},
91 | year = {2022}
92 | }
93 | ```
94 |
95 | ---
96 |
97 | *OceanSim - A GPU-Accelerated Underwater Robot Perception Simulation Framework*
98 |
99 |
--------------------------------------------------------------------------------
/docs/installation.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/umfieldrobotics/OceanSim/ed3e592b4ef25fd16e665af1816ca282123c0c04/docs/installation.md
--------------------------------------------------------------------------------
/docs/subsections/building_own_digital_twin.md:
--------------------------------------------------------------------------------
1 | # Build Your Own Digital Twin in OceanSim
2 |
3 | 
4 |
5 | In this documentation, we show an example of building a digital twin for OceanSim. We hope this example can help you understand how to build your own digital twin in OceanSim.
6 |
7 | ## 3D Scan of the Environment
8 | We use a ZED stereo camera, which captures RGB images and depth information, to scan the environment. The scan produces a folder of RGB images that can be used to reconstruct a 3D model of the environment.
9 |
10 | The next step is to use the RGB images to create a 3D model of the environment. Among the various 3D reconstruction tools available, we use [Metashape](https://www.agisoft.com/). The process is as follows:
11 | 1. Import the RGB images into Metashape.
12 | 2. Align the images to create a sparse point cloud.
13 | 3. Build a dense point cloud from the sparse point cloud.
14 | 4. Build a mesh from the dense point cloud.
15 | 5. Build texture for the model.
16 |
17 | We recommend checking the online tutorials of Metashape for more details.
18 |
19 | Please note that the 3D model reconstructed by Metashape is not metric-scaled. To recover the correct metric scale, measure some reference distances between noticeable markers in the environment during the 3D scan. In Metashape, you can then use the `Scale Bar` tool to set the scale of the model. Please follow the steps in this [video](https://youtu.be/lp5eIOUJxCE?si=E3ZoLXriJAfuRdWU&t=359).
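Metashape's `Scale Bar` tool handles this internally, but the underlying arithmetic is just a ratio between real-world and model-space distances. A minimal sketch (all distance values below are illustrative, not from the OceanSim assets):

```python
# Recover a metric scale factor from reference measurements.
# Each pair is (distance measured in the real environment [m],
#               distance between the same markers in the unscaled model).
reference_pairs = [(1.20, 0.31), (2.00, 0.52), (0.75, 0.195)]

# Average the per-pair ratios to get a single scale factor.
scales = [real / model for real, model in reference_pairs]
scale_factor = sum(scales) / len(scales)

def to_metric(model_distance: float) -> float:
    """Convert a distance measured in the unscaled model to meters."""
    return model_distance * scale_factor

print(f"scale factor: {scale_factor:.3f}")
print(f"0.40 model units = {to_metric(0.40):.2f} m")
```

Measuring several reference distances and averaging the ratios, as above, reduces the impact of any single noisy measurement.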
20 |
21 | ## Importing the 3D Scan
22 | We tested exporting the 3D model from Metashape as a `.obj` file and importing it into NVIDIA Isaac Sim. If you prefer `.usd` format, we recommend doing the conversion in Isaac Sim for better compatibility.
23 |
24 | Then you should be able to use this 3D model in OceanSim. Please refer to the provided [example](running_example.md) and modify it to fit your needs.
--------------------------------------------------------------------------------
/docs/subsections/contribution_guide.md:
--------------------------------------------------------------------------------
1 | # Contribution Guide
2 | The OceanSim project is open-source and welcomes contributions from the community. We accept contributions in the form of pull requests (PRs) and issues.
3 |
4 | ## What is a PR?
5 | `PR` is the abbreviation of `Pull Request`. Here is the definition of a `PR` in the [official documentation](https://docs.github.com/en/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests) of GitHub.
6 |
7 | ## Submit a PR to OceanSim
8 | 1. Fork the repository
9 | 2. Clone the forked repository to your local machine
10 | 3. Create a new branch for your changes
11 | 4. Make your changes and commit them
12 | 5. Push your changes to your forked repository
13 | 6. Create a pull request to the original repository
14 |
15 | ## PR Guidelines
16 | - Please make sure your code is well-documented and follows the coding style of the project.
17 | - We would appreciate clear and meaningful commit messages.
18 | - Provide a clear and meaningful PR description.
19 | - Clarify the task in the PR title. The general format is: \[Prefix\] Short description of the PR (Suffix)
20 | - Prefix: new feature \[Feature\], bug fix \[Fix\], documentation \[Docs\]
21 | - Briefly describe the main changes, results, and effects on other modules in the short description
22 | - Associate related issues, discussions, and pull requests with a milestone
--------------------------------------------------------------------------------
/docs/subsections/installation.md:
--------------------------------------------------------------------------------
1 | # OceanSim Installation Documentation
2 | We designed OceanSim as an extension package for NVIDIA Isaac Sim. This design allows tighter integration with Isaac Sim and lets users pair OceanSim with other Isaac Sim extensions. This document provides a step-by-step guide to installing OceanSim.
3 |
4 | ## Prerequisites
5 | OceanSim does not enforce any additional prerequisites beyond those required by Isaac Sim. Please refer to the [official Isaac Sim documentation](https://docs.isaacsim.omniverse.nvidia.com/latest/installation/requirements.html#system-requirements) for the prerequisites.
6 |
7 | OceanSim is developed with Isaac Sim 4.5. Due to the changes in Isaac Sim 4.5 compared to previous versions, OceanSim may not work with older versions of Isaac Sim.
8 |
9 | We have tested OceanSim on Ubuntu 20.04, 22.04, and 24.04. We have also tested OceanSim using various GPUs, including NVIDIA RTX 3090, RTX A6000, and RTX 4080 Super.
10 |
11 | ## Installation
12 | Install NVIDIA Isaac Sim 4.5. We follow the official [workstation installation guide](https://docs.isaacsim.omniverse.nvidia.com/latest/installation/install_workstation.html) to install Isaac Sim.
13 |
14 | Clone this repository to your local machine. We recommend cloning the repository to the Isaac Sim workspace directory.
15 | ```bash
16 | cd /path/to/isaacsim/extsUser
17 | git clone https://github.com/umfieldrobotics/OceanSim.git
18 | ```
19 |
20 | Download `OceanSim_assets` from [Google Drive](https://drive.google.com/drive/folders/1qg4-Y_GMiybnLc1BFjx0DsWfR0AgeZzA?usp=sharing) which contains USD assets of robot and environment.
21 |
22 | Then change the function `get_oceansim_assets_path()` in [~/isaacsim/extsUser/OceanSim/isaacsim/oceansim/utils/assets_utils.py](../../isaacsim/oceansim/utils/assets_utils.py) so that it returns the path to the downloaded assets folder.
23 | ```python
24 | def get_oceansim_assets_path() -> str:
25 | return "/path/to/downloaded/assets/OceanSim_assets"
26 | ```
27 |
28 | Launch Isaac Sim following this [guide](https://docs.isaacsim.omniverse.nvidia.com/latest/installation/install_workstation.html#isaac-sim-short-app-selector).
29 |
30 | ## Launching OceanSim
31 | There is no separate build step needed for OceanSim, as it is an extension. To load OceanSim:
32 | - In Isaac Sim, navigate to `Window -> Extensions`
33 | - In the window that appears, remove the `@feature` filter that is applied by default
34 | - Activate `OCEANSIM`
35 | - You can now close the `Extensions` window, and OceanSim should appear as an option on the Isaac Sim panel
36 |
--------------------------------------------------------------------------------
/docs/subsections/running_example.md:
--------------------------------------------------------------------------------
1 | # Run Examples in OceanSim
2 | In this document, we provide simple guidelines for using existing features in OceanSim and NVIDIA Isaac Sim to facilitate building underwater digital twins in our framework. The examples you can run are explained below:
3 |
4 | ## Sensor Example
5 | OceanSim provides an example, itself packaged as an extension, that demonstrates the usage of the underwater sensors and lets you modify their parameters.
6 |
7 | Navigate to `OceanSim - Examples - Sensor Example` to open the module. Select the sensors you wish to simulate and point the "Path to USD" to your own USD scene or the example MHL scene in the `OceanSim_assets` directory.
8 |
9 | The module provides a self-explanatory UI in which you can choose which sensors to use; the corresponding data visualization becomes available automatically. Users may test this module in their own USD scenes; otherwise a default scene is used.
10 |
11 | We do not recommend performing digital-twin experiments with this extension. This example contains boilerplate code intended for demonstration purposes only.
12 | ### Instructions
13 | For more instructions on using this example, refer to the following, which you can also find in the [information panel](../../isaacsim/oceansim/modules/SensorExample_python/global_variables.py) in the extension UI:
14 | - This is a unified example that demonstrates various OceanSim UW sensors.
15 | - For users interested in editing the sensor parameters, click on the `Open Source Code` icon (which looks like an "edit" icon). This will bring up the source code, where many parameters can be edited and further guidance can be found, such as:
16 | - water surface height
17 | - sonar FOV
18 | - camera rendering parameters
19 | - guides on developing your own digital twins.
20 | - Users can test this demo in their own scene by copying the USD file path into `Path to USD`; otherwise a default scene is loaded.
21 | - For the DVL sensor, the scene must have static colliders enabled for beam interaction!
22 | - Manual control:
23 |
24 |
25 |
26 | | Key | Control |
27 | |----------|---------|
28 | | W/w | +x |
29 | | S/s | -x |
30 | | A/a | +y |
31 | | D/d | -y |
32 | | Up key | +z |
33 | | Down key | -z |
34 | | I/i | +pitch |
35 | | K/k | -pitch |
36 | | J/j | +yaw |
37 | | L/l | -yaw |
38 | | Left key | +roll |
39 | | Right key| -roll |
40 |
41 |
42 |
43 | - Automatic Control:
44 | - `Straight line`: The robot will travel along its local x direction at v = 0.5 m/s
45 | - No control: The robot will remain static.
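The static-collider requirement for the DVL can be satisfied through the Isaac Sim UI, or scripted. A minimal sketch, assuming you run it inside Isaac Sim's Script Editor; `/World/MyScene` is a placeholder prim path, not a path from the OceanSim assets:

```python
# Enable static colliders on every mesh under a scene root so that
# DVL beams have geometry to interact with.
import omni.usd
from pxr import Usd, UsdGeom, UsdPhysics

stage = omni.usd.get_context().get_stage()
scene_root = stage.GetPrimAtPath("/World/MyScene")  # placeholder path

for prim in Usd.PrimRange(scene_root):
    if prim.IsA(UsdGeom.Mesh):
        # CollisionAPI marks the mesh as a collider for the physics engine;
        # without a rigid-body API applied, the collider is static.
        UsdPhysics.CollisionAPI.Apply(prim)
```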
46 |
47 | ### Running the example
48 | To run the example after selecting the sensors and their configurations, first `Load`, and then click `Run`.
49 |
50 | ## Color Picker
51 | OceanSim provides a handy UI tool that accelerates recreating water-column effects similar to the robot's actual working environment by letting you select the appropriate image formation parameters ([Akkaynak, Derya, and Tali Treibitz. "A revised underwater image formation model"](https://ieeexplore.ieee.org/document/8578801)).
52 |
53 | Navigate to `OceanSim - Color Picker` to open the module.
54 |
55 | This widget allows the user to visualize the rendered result in any USD scene while tuning parameters in real time.
56 |
57 | For more instructions on using this example, refer to the [information panel](../../isaacsim/oceansim/modules/colorpicker_python/global_variables.py) in the extension UI.
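The parameters being tuned come from the revised underwater image formation model cited above, which forms the observed color per channel from an attenuated direct signal plus range-dependent backscatter. A NumPy sketch of the per-channel equation; all coefficient values below are illustrative, not OceanSim defaults:

```python
import numpy as np

def underwater_color(J, z, beta_D, beta_B, B_inf):
    """Revised underwater image formation model (Akkaynak & Treibitz, 2018):
    I = J * exp(-beta_D * z) + B_inf * (1 - exp(-beta_B * z)), per channel.
    J      : unattenuated scene radiance per channel, in [0, 1]
    z      : camera-to-object range in meters
    beta_D : direct-signal attenuation coefficients (1/m) per channel
    beta_B : backscatter coefficients (1/m) per channel
    B_inf  : veiling light (water color at infinite range) per channel
    """
    J, beta_D, beta_B, B_inf = map(np.asarray, (J, beta_D, beta_B, B_inf))
    direct = J * np.exp(-beta_D * z)                  # attenuated signal
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z)) # veiling-light term
    return direct + backscatter

# A gray object seen through 5 m of water: red attenuates fastest,
# so the result shifts toward the blue-green veiling light.
rgb = underwater_color(J=[0.5, 0.5, 0.5], z=5.0,
                       beta_D=[0.6, 0.25, 0.15],
                       beta_B=[0.4, 0.2, 0.15],
                       B_inf=[0.05, 0.35, 0.45])
print(rgb)
```

The Color Picker exposes exactly this kind of per-channel parameter tuning through the UI while rendering the result live.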
58 |
59 | ## Tuning Object Reflectivity for Imaging Sonar
60 | The user can adjust the reflectivity of objects as perceived by the sonar by adding a semantic label to the object.
61 |
62 | The semantic type must be the string `"reflectivity"`.
63 | The corresponding semantic data must be a float, e.g. `0.2`.
64 |
65 | Semantic configuration can be performed either in code during scene setup:
66 |
67 | ```python
68 | from isaacsim.core.utils.semantics import add_update_semantics
69 | add_update_semantics(prim=prim,  # prim: the target Usd.Prim
70 |                      type_label='reflectivity',
71 |                      semantic_label='1.0')
72 | ```
73 | Or via the UI provided by `semantics.schema.editor` (the [Semantic Schema Editor](https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/semantics_schema_editor.html) should be loaded automatically when Isaac Sim starts up).
74 |
75 | A simple tutorial is as follows:
76 |
77 | 
78 |
79 | As demonstrated by this workflow, developers are free to add more modeling parameters as new semantic types to improve sonar fidelity.
80 |
81 | ## Adding Water Caustics
82 | Note that the addition of water caustics to a USD scene is still under development and may therefore lead to performance issues or crashes during simulation.
83 |
84 | To turn on caustics rendering, set `Render Settings - Ray Tracing - Caustics` to `on`, and set `Enable Caustics` to `on` for any UsdLux light source that supports caustics.
85 |
86 | Next, we assign transparent materials (e.g. water, glass) to any mesh surface that we wish to [deflect photons](https://developer.nvidia.com/gpugems/gpugems/part-i-natural-effects/chapter-2-rendering-water-caustics) and create caustics.
87 |
88 | Lastly, to simulate water caustics, we deform the surface according to realistic water-surface motion.
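The idea of the surface deformation step can be sketched with a traveling sine wave displacing the vertices of a flat water-surface mesh. This NumPy version is only an illustration of the concept (the shipped asset does the deformation on the GPU), and every parameter value here is an assumption:

```python
import numpy as np

def deform_water_surface(points, t, amplitude=0.05, wavelength=1.5,
                         speed=0.8, direction=(1.0, 0.3)):
    """Displace a flat water-surface mesh with a traveling sine wave.
    points : (N, 3) array of mesh vertex positions (x, y, z)
    t      : simulation time in seconds
    All default parameter values are illustrative.
    """
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)                        # unit travel direction
    k = 2.0 * np.pi / wavelength                  # wave number
    phase = k * (points[:, 0] * d[0] + points[:, 1] * d[1]) - k * speed * t
    out = points.copy()
    out[:, 2] += amplitude * np.sin(phase)        # vertical displacement only
    return out

# Example: deform a 4x4 grid patch at t = 0.5 s.
x, y = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
grid = np.column_stack([x.ravel(), y.ravel(), np.zeros(16)])
deformed = deform_water_surface(grid, t=0.5)
print(deformed[:, 2].min(), deformed[:, 2].max())
```

Evaluating the same function each frame with increasing `t` makes the wave travel across the surface, which is what modulates the refracted light into moving caustic patterns.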
89 |
90 | A USD file containing the caustics settings and surface deformation, powered by a Warp kernel, is included in the published OceanSim assets at `~/OceanSim_assets/collected_MHL/mhl_water.usd`.
91 |
92 | The corresponding demo video is shown below:
93 |
94 |
95 | 
96 |
97 |
98 |
99 |
100 |
101 |
102 |
103 |
--------------------------------------------------------------------------------
/isaacsim/oceansim/modules/SensorExample_python/__init__.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
2 | #
3 | # NVIDIA CORPORATION and its licensors retain all intellectual property
4 | # and proprietary rights in and to this software, related documentation
5 | # and any modifications thereto. Any use, reproduction, disclosure or
6 | # distribution of this software and related documentation without an express
7 | # license agreement from NVIDIA CORPORATION is strictly prohibited.
8 | #
9 | from .extension import *
10 |
--------------------------------------------------------------------------------
/isaacsim/oceansim/modules/SensorExample_python/extension.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
2 | #
3 | # NVIDIA CORPORATION and its licensors retain all intellectual property
4 | # and proprietary rights in and to this software, related documentation
5 | # and any modifications thereto. Any use, reproduction, disclosure or
6 | # distribution of this software and related documentation without an express
7 | # license agreement from NVIDIA CORPORATION is strictly prohibited.
8 | #
9 |
10 | import asyncio
11 | import gc
12 | import weakref
13 |
14 |
15 | import omni
16 | import omni.kit.commands
17 | import omni.physx as _physx
18 | import omni.timeline
19 | import omni.ui as ui
20 | import omni.usd
21 | from omni.isaac.ui.element_wrappers import ScrollingWindow
22 | from omni.isaac.ui.menu import MenuItemDescription, make_menu_item_description
23 | from omni.kit.menu.utils import add_menu_items, remove_menu_items
24 | from omni.usd import StageEventType
25 |
26 | from .global_variables import EXTENSION_TITLE
27 | from .ui_builder import UIBuilder
28 |
29 | """
30 | This file serves as a basic template for the standard boilerplate operations
31 | that make a UI-based extension appear on the toolbar.
32 |
33 | This implementation is meant to cover most use-cases without modification.
34 | Various callbacks are hooked up to a separate class UIBuilder in .ui_builder.py
35 | Most users will be able to make their desired UI extension by interacting solely with
36 | UIBuilder.
37 |
38 | This class sets up standard useful callback functions in UIBuilder:
39 | on_menu_callback: Called when extension is opened
40 | on_timeline_event: Called when timeline is stopped, paused, or played
41 | on_physics_step: Called on every physics step
42 | on_stage_event: Called when stage is opened or closed
43 | cleanup: Called when resources such as physics subscriptions should be cleaned up
44 | build_ui: User function that creates the UI they want.
45 | """
46 |
47 |
48 | class Extension(omni.ext.IExt):
49 | def on_startup(self, ext_id: str):
50 | """Initialize extension and UI elements"""
51 |
52 | self.ext_id = ext_id
53 | self._usd_context = omni.usd.get_context()
54 |
55 | # Build Window
56 | self._window = ScrollingWindow(
57 | title=EXTENSION_TITLE, width=600, height=500, visible=False, dockPreference=ui.DockPreference.LEFT_BOTTOM
58 | )
59 | self._window.set_visibility_changed_fn(self._on_window)
60 |
61 | action_registry = omni.kit.actions.core.get_action_registry()
62 | action_registry.register_action(
63 | ext_id,
64 | f"CreateUIExtension:{EXTENSION_TITLE}",
65 | self._menu_callback,
66 | description=f"Add {EXTENSION_TITLE} Extension to UI toolbar",
67 | )
68 |
69 |
70 | self._menu_items = [
71 | MenuItemDescription(
72 | name="Examples",
73 | onclick_action=(ext_id, f"CreateUIExtension:{EXTENSION_TITLE}"),
74 | sub_menu=[
75 | make_menu_item_description(
76 | ext_id, "Sensor Example", lambda a=weakref.proxy(self): a._menu_callback()
77 | )
78 | ],
79 | )
80 | ]
81 |
82 | add_menu_items(self._menu_items, "OceanSim")
83 |
84 | # Filled in with User Functions
85 | self.ui_builder = UIBuilder()
86 |
87 | # Events
88 | self._usd_context = omni.usd.get_context()
89 | self._physxIFace = _physx.acquire_physx_interface()
90 | self._physx_subscription = None
91 | self._stage_event_sub = None
92 | self._timeline = omni.timeline.get_timeline_interface()
93 |
94 | def on_shutdown(self):
95 | self._models = {}
96 |         remove_menu_items(self._menu_items, "OceanSim")
97 |
98 | action_registry = omni.kit.actions.core.get_action_registry()
99 | action_registry.deregister_action(self.ext_id, f"CreateUIExtension:{EXTENSION_TITLE}")
100 |
101 | if self._window:
102 | self._window = None
103 | self.ui_builder.cleanup()
104 | gc.collect()
105 |
106 | def _on_window(self, visible):
107 | if self._window.visible:
108 | # Subscribe to Stage and Timeline Events
109 | self._usd_context = omni.usd.get_context()
110 | events = self._usd_context.get_stage_event_stream()
111 | self._stage_event_sub = events.create_subscription_to_pop(self._on_stage_event)
112 | stream = self._timeline.get_timeline_event_stream()
113 | self._timeline_event_sub = stream.create_subscription_to_pop(self._on_timeline_event)
114 |
115 | self._build_ui()
116 | else:
117 | self._usd_context = None
118 | self._stage_event_sub = None
119 | self._timeline_event_sub = None
120 | self.ui_builder.cleanup()
121 |
122 | def _build_ui(self):
123 | with self._window.frame:
124 | with ui.VStack(spacing=5, height=0):
125 | self._build_extension_ui()
126 |
127 | async def dock_window():
128 | await omni.kit.app.get_app().next_update_async()
129 |
130 | def dock(space, name, location, pos=0.5):
131 | window = omni.ui.Workspace.get_window(name)
132 | if window and space:
133 | window.dock_in(space, location, pos)
134 | return window
135 |
136 | tgt = ui.Workspace.get_window("Viewport")
137 | dock(tgt, EXTENSION_TITLE, omni.ui.DockPosition.LEFT, 0.33)
138 | await omni.kit.app.get_app().next_update_async()
139 |
140 | self._task = asyncio.ensure_future(dock_window())
141 |
142 | #################################################################
143 | # Functions below this point call user functions
144 | #################################################################
145 |
146 | def _menu_callback(self):
147 | self._window.visible = not self._window.visible
148 | self.ui_builder.on_menu_callback()
149 |
150 | def _on_timeline_event(self, event):
151 | if event.type == int(omni.timeline.TimelineEventType.PLAY):
152 | if not self._physx_subscription:
153 | self._physx_subscription = self._physxIFace.subscribe_physics_step_events(self._on_physics_step)
154 | elif event.type == int(omni.timeline.TimelineEventType.STOP):
155 | self._physx_subscription = None
156 |
157 | self.ui_builder.on_timeline_event(event)
158 |
159 | def _on_physics_step(self, step):
160 | self.ui_builder.on_physics_step(step)
161 |
162 | def _on_stage_event(self, event):
163 | if event.type == int(StageEventType.OPENED) or event.type == int(StageEventType.CLOSED):
164 | # stage was opened or closed, cleanup
165 | self._physx_subscription = None
166 | self.ui_builder.cleanup()
167 |
168 | self.ui_builder.on_stage_event(event)
169 |
170 | def _build_extension_ui(self):
171 | # Call user function for building UI
172 | self.ui_builder.build_ui()
173 |
--------------------------------------------------------------------------------
/isaacsim/oceansim/modules/SensorExample_python/global_variables.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2022-2025, NVIDIA CORPORATION. All rights reserved.
2 | #
3 | # NVIDIA CORPORATION and its licensors retain all intellectual property
4 | # and proprietary rights in and to this software, related documentation
5 | # and any modifications thereto. Any use, reproduction, disclosure or
6 | # distribution of this software and related documentation without an express
7 | # license agreement from NVIDIA CORPORATION is strictly prohibited.
8 | #
9 |
10 | EXTENSION_TITLE = "Sensor Example"
11 |
12 | EXTENSION_DESCRIPTION = "This is a unified example demonstrating the various OceanSim underwater (UW) sensors.\n" \
13 |     "Click the 'Open Source Code' icon on the extension title bar to tune any sensor parameter,\n" \
14 |     "e.g. water surface height, sonar FOV, camera rendering parameters, and as a reference for developing your own digital twins.\n" \
15 |     "You may test this demo on your own scene by copying its USD file path into 'Path to USD';\n" \
16 |     "otherwise a default scene is loaded.\n" \
17 |     "For the DVL sensor, the scene must have a static collider enabled for beam interaction!\n" \
18 |     "Manual control: w: x+, s: x-, a: y+, d: y-, up: z+, down: z-,\n" \
19 |     "i: pitch-, k: pitch+, j: yaw+, l: yaw-, left: roll-, right: roll+.\n" \
20 |     "Straight line: the robot travels along its local x direction at v = 0.5 m/s.\n" \
21 |     "No control: the robot remains static."
22 |
23 |
24 | EXTENSION_LINK = "https://umfieldrobotics.github.io/OceanSim/"
--------------------------------------------------------------------------------
/isaacsim/oceansim/modules/SensorExample_python/scenario.py:
--------------------------------------------------------------------------------
1 | # Omniverse import
2 | import numpy as np
3 | from pxr import Gf, PhysxSchema
4 |
5 | # Isaac sim import
6 | from isaacsim.core.prims import SingleRigidPrim
7 | from isaacsim.core.utils.prims import get_prim_path
8 |
9 |
10 | class MHL_Sensor_Example_Scenario():
11 | def __init__(self):
12 | self._rob = None
13 | self._sonar = None
14 | self._cam = None
15 | self._DVL = None
16 | self._baro = None
17 |
18 | self._ctrl_mode = None
19 |
20 | self._running_scenario = False
21 | self._time = 0.0
22 |
23 | def setup_scenario(self, rob, sonar, cam, DVL, baro, ctrl_mode):
24 | self._rob = rob
25 | self._sonar = sonar
26 | self._cam = cam
27 | self._DVL = DVL
28 | self._baro = baro
29 | self._ctrl_mode = ctrl_mode
30 | if self._sonar is not None:
31 | self._sonar.sonar_initialize(include_unlabelled=True)
32 | if self._cam is not None:
33 | self._cam.initialize()
34 | if self._DVL is not None:
35 | self._DVL_reading = [0.0, 0.0, 0.0]
36 | if self._baro is not None:
37 | self._baro_reading = 101325.0 # atmospheric pressure (Pa)
38 |
39 |
40 | # Apply the physx force schema if manual control
41 | if ctrl_mode == "Manual control":
42 | from ...utils.keyboard_cmd import keyboard_cmd
43 |
44 | self._rob_forceAPI = PhysxSchema.PhysxForceAPI.Apply(self._rob)
45 | self._force_cmd = keyboard_cmd(base_command=np.array([0.0, 0.0, 0.0]),
46 | input_keyboard_mapping={
47 | # forward command
48 | "W": [10.0, 0.0, 0.0],
49 | # backward command
50 | "S": [-10.0, 0.0, 0.0],
51 | # leftward command
52 | "A": [0.0, 10.0, 0.0],
53 | # rightward command
54 | "D": [0.0, -10.0, 0.0],
55 | # rise command
56 | "UP": [0.0, 0.0, 10.0],
57 | # sink command
58 | "DOWN": [0.0, 0.0, -10.0],
59 | })
60 | self._torque_cmd = keyboard_cmd(base_command=np.array([0.0, 0.0, 0.0]),
61 | input_keyboard_mapping={
62 | # yaw command (left)
63 | "J": [0.0, 0.0, 10.0],
64 | # yaw command (right)
65 | "L": [0.0, 0.0, -10.0],
66 | # pitch command (up)
67 | "I": [0.0, -10.0, 0.0],
68 | # pitch command (down)
69 | "K": [0.0, 10.0, 0.0],
70 |                                                # roll command (left)
71 | "LEFT": [-10.0, 0.0, 0.0],
72 |                                                # roll command (right)
73 | "RIGHT": [10.0, 0.0, 0.0],
74 | })
75 |
76 | self._running_scenario = True
77 |     # This function is only called when ctrl_mode == 'Waypoints' and the waypoints file changes
78 | def setup_waypoints(self, waypoint_path, default_waypoint_path):
79 | def read_data_from_file(file_path):
80 | # Initialize an empty list to store the floats
81 | data = []
82 |
83 | # Open the file in read mode
84 | with open(file_path, 'r') as file:
85 | # Read each line in the file
86 | for line in file:
87 | # Strip any leading/trailing whitespace and split the line by spaces
88 | float_strings = line.strip().split()
89 |
90 | # Convert the list of strings to a list of floats
91 | floats = [float(x) for x in float_strings]
92 |
93 | # Append the list of floats to the data list
94 | data.append(floats)
95 |
96 | return data
97 | try:
98 | self.waypoints = read_data_from_file(waypoint_path)
99 | print('Waypoints loaded successfully.')
100 | print(f'Waypoint[0]: {self.waypoints[0]}')
101 |         except (OSError, ValueError):
102 |             self.waypoints = read_data_from_file(default_waypoint_path)
103 |             print('Failed to load these waypoints. Falling back to the default waypoints.')
104 |
105 |
106 | def teardown_scenario(self):
107 |
108 |         # Because these two sensors allocate annotator caches on the GPU,
109 |         # close() detaches the annotator from the render product and clears the cache.
110 | if self._sonar is not None:
111 | self._sonar.close()
112 | if self._cam is not None:
113 | self._cam.close()
114 |
115 | # clear the keyboard subscription
116 | if self._ctrl_mode=="Manual control":
117 | self._force_cmd.cleanup()
118 | self._torque_cmd.cleanup()
119 |
120 | self._rob = None
121 | self._sonar = None
122 | self._cam = None
123 | self._DVL = None
124 | self._baro = None
125 | self._running_scenario = False
126 | self._time = 0.0
127 |
128 |
129 | def update_scenario(self, step: float):
130 |
131 |
132 | if not self._running_scenario:
133 | return
134 |
135 | self._time += step
136 |
137 | if self._sonar is not None:
138 | self._sonar.make_sonar_data()
139 | if self._cam is not None:
140 | self._cam.render()
141 | if self._DVL is not None:
142 | self._DVL_reading = self._DVL.get_linear_vel()
143 | if self._baro is not None:
144 | self._baro_reading = self._baro.get_pressure()
145 |
146 | if self._ctrl_mode=="Manual control":
147 | force_cmd = Gf.Vec3f(*self._force_cmd._base_command)
148 | torque_cmd = Gf.Vec3f(*self._torque_cmd._base_command)
149 | self._rob_forceAPI.CreateForceAttr().Set(force_cmd)
150 | self._rob_forceAPI.CreateTorqueAttr().Set(torque_cmd)
151 | elif self._ctrl_mode=="Waypoints":
152 | if len(self.waypoints) > 0:
153 | waypoints = self.waypoints[0]
154 | self._rob.GetAttribute('xformOp:translate').Set(Gf.Vec3f(waypoints[0], waypoints[1], waypoints[2]))
155 | self._rob.GetAttribute('xformOp:orient').Set(Gf.Quatd(waypoints[3], waypoints[4], waypoints[5], waypoints[6]))
156 | self.waypoints.pop(0)
157 | else:
158 | print('Waypoints finished')
159 | elif self._ctrl_mode=="Straight line":
160 | SingleRigidPrim(prim_path=get_prim_path(self._rob)).set_linear_velocity(np.array([0.5,0,0]))
161 |
162 |
163 |
164 |
165 |
166 |
167 |
168 |
169 |
170 |
--------------------------------------------------------------------------------
/isaacsim/oceansim/modules/SensorExample_python/ui_builder.py:
--------------------------------------------------------------------------------
1 | # Omniverse import
2 | import numpy as np
3 | import os
4 | import omni.timeline
5 | import omni.ui as ui
6 | from omni.usd import StageEventType
7 | from pxr import PhysxSchema
8 | import carb
9 |
10 | # Isaac sim import
11 | from isaacsim.core.prims import SingleRigidPrim, SingleGeometryPrim
12 | from isaacsim.core.utils.prims import get_prim_at_path
13 | from isaacsim.core.utils.stage import get_current_stage, add_reference_to_stage, create_new_stage, open_stage
14 | from isaacsim.core.utils.rotations import euler_angles_to_quat
15 | from isaacsim.core.utils.semantics import add_update_semantics
16 | from isaacsim.gui.components import CollapsableFrame, StateButton, get_style, setup_ui_headers, CheckBox, combo_cb_xyz_plot_builder, combo_cb_plot_builder, dropdown_builder, str_builder
17 | from isaacsim.core.utils.viewports import set_camera_view
18 | from isaacsim.examples.extension.core_connectors import LoadButton, ResetButton
19 | from isaacsim.core.utils.extensions import get_extension_path
20 |
21 | # Custom import
22 | from .scenario import MHL_Sensor_Example_Scenario
23 | from .global_variables import EXTENSION_DESCRIPTION, EXTENSION_TITLE, EXTENSION_LINK
24 | from isaacsim.oceansim.utils.assets_utils import get_oceansim_assets_path
25 |
26 | class UIBuilder():
27 | def __init__(self):
28 |
29 | self._ext_id = omni.kit.app.get_app().get_extension_manager().get_extension_id_by_module(__name__)
30 | self._file_path = os.path.abspath(__file__)
31 | self._title = EXTENSION_TITLE
32 | self._doc_link = EXTENSION_LINK
33 | self._overview = EXTENSION_DESCRIPTION
34 | self._extension_path = get_extension_path(self._ext_id)
35 |
36 | self._ctrl_mode = 'Manual control'
37 | self._waypoints_path = self._extension_path + '/demo/demo_waypoints.txt'
38 | # Get access to the timeline to control stop/pause/play programmatically
39 | self._timeline = omni.timeline.get_timeline_interface()
40 |
41 | # UI frames created
42 | self.frames = []
43 | # UI elements created using a UIElementWrapper instance
44 | self.wrapped_ui_elements = []
45 |
46 | # Run initialization for the provided example
47 | self._on_init()
48 |
49 | ###################################################################################
50 | # The Functions Below Are Called Automatically By extension.py
51 | ###################################################################################
52 |
53 | def on_menu_callback(self):
54 | """Callback for when the UI is opened from the toolbar.
55 | This is called directly after build_ui().
56 | """
57 | pass
58 |
59 | def on_timeline_event(self, event):
60 | """Callback for Timeline events (Play, Pause, Stop)
61 |
62 | Args:
63 | event (omni.timeline.TimelineEventType): Event Type
64 | """
65 | if event.type == int(omni.timeline.TimelineEventType.STOP):
66 | # When the user hits the stop button through the UI, they will inevitably discover edge cases where things break
67 | # For complete robustness, the user should resolve those edge cases here
68 | # In general, for extensions based off this template, there is no value to having the user click the play/stop
69 | # button instead of using the Load/Reset/Run buttons provided.
70 | self._scenario_state_btn.reset()
71 | self._scenario_state_btn.enabled = False
72 |
73 | def on_physics_step(self, step: float):
74 | """Callback for Physics Step.
75 | Physics steps only occur when the timeline is playing
76 |
77 | Args:
78 | step (float): Size of physics step
79 | """
80 | pass
81 |
82 | def on_stage_event(self, event):
83 | """Callback for Stage Events
84 |
85 | Args:
86 | event (omni.usd.StageEventType): Event Type
87 | """
88 | if event.type == int(StageEventType.OPENED):
89 | # If the user opens a new stage, the extension should completely reset
90 | self._reset_extension()
91 |
92 | def cleanup(self):
93 | """
94 | Called when the stage is closed or the extension is hot reloaded.
95 | Perform any necessary cleanup such as removing active callback functions
96 | Buttons imported from omni.isaac.ui.element_wrappers implement a cleanup function that should be called
97 | """
98 | self._DVL_event_sub = None
99 | self._baro_event_sub = None
100 | for ui_elem in self.wrapped_ui_elements:
101 | ui_elem.cleanup()
102 | for frame in self.frames:
103 | frame.cleanup()
104 |
105 | def build_ui(self):
106 | """
107 | Build a custom UI tool to run your extension.
108 | This function will be called any time the UI window is closed and reopened.
109 | """
110 |
111 | setup_ui_headers(
112 | ext_id=self._ext_id,
113 | file_path=self._file_path,
114 | title=self._title,
115 | doc_link=self._doc_link,
116 | overview=self._overview,
117 | info_collapsed=False
118 | )
119 |
120 | sensor_choosing_frame = CollapsableFrame('Sensors', collapsed=False)
121 | self.frames.append(sensor_choosing_frame)
122 | with sensor_choosing_frame:
123 | with ui.VStack(style=get_style(), spacing=5, height=0):
124 | sonar_check_box = CheckBox(
125 | "Imaging Sonar",
126 | default_value=False,
127 | tooltip=" Click this checkbox to activate imaging sonar",
128 | on_click_fn=self._on_sonar_checkbox_click_fn,
129 | )
130 | self._use_sonar = False
131 | self.wrapped_ui_elements.append(sonar_check_box)
132 | camera_check_box = CheckBox(
133 | "Underwater Camera",
134 | default_value=False,
135 | tooltip=" Click this checkbox to activate underwater camera",
136 | on_click_fn=self._on_camera_checkbox_click_fn,
137 | )
138 | self._use_camera = False
139 | self.wrapped_ui_elements.append(camera_check_box)
140 |
141 | DVL_check_box = CheckBox(
142 | 'DVL',
143 | default_value=False,
144 | tooltip=" Click this checkbox to activate DVL",
145 | on_click_fn=self._on_DVL_checkbox_click_fn
146 | )
147 | self._use_DVL = False
148 | self.wrapped_ui_elements.append(DVL_check_box)
149 |
150 | baro_check_box = CheckBox(
151 | "Barometer",
152 | default_value=False,
153 | tooltip='Click this checkbox to activate barometer',
154 | on_click_fn=self._on_baro_checkbox_click_fn
155 | )
156 | self._use_baro = False
157 | self.wrapped_ui_elements.append(baro_check_box)
158 |
159 |
160 | world_controls_frame = CollapsableFrame("World Controls", collapsed=False)
161 | self.frames.append(world_controls_frame)
162 | with world_controls_frame:
163 | with ui.VStack(style=get_style(), spacing=5, height=0):
164 |
165 | # self._build_USD_filepicker()
166 | self._USD_path_field = str_builder(
167 | label='Path to USD',
168 | default_val="",
169 | tooltip='Select the USD file for the scene',
170 | use_folder_picker=True,
171 | folder_button_title="Select USD",
172 | folder_dialog_title='Select the USD scene to test')
173 |
174 | self._ctrl_mode_model = dropdown_builder(
175 | label='Control Mode',
176 | default_val=3,
177 | items=['No control', 'Straight line', 'Waypoints', 'Manual control'],
178 | tooltip='Select preferred control mode',
179 | on_clicked_fn=self._on_ctrl_mode_dropdown_clicked
180 | )
181 |
182 | self._load_btn = LoadButton(
183 | "Load Button", "LOAD", setup_scene_fn=self._setup_scene, setup_post_load_fn=self._setup_scenario
184 | )
185 | # self._load_btn.set_world_settings(physics_dt=1 / 60.0, rendering_dt=1 / 60.0)
186 | self.wrapped_ui_elements.append(self._load_btn)
187 |
188 | self._reset_btn = ResetButton(
189 | "Reset Button", "RESET", pre_reset_fn=None, post_reset_fn=self._on_post_reset_btn
190 | )
191 | self._reset_btn.enabled = False
192 | self.wrapped_ui_elements.append(self._reset_btn)
193 |
194 | run_scenario_frame = CollapsableFrame("Run Scenario", collapsed=False)
195 | self.frames.append(run_scenario_frame)
196 | with run_scenario_frame:
197 | with ui.VStack(style=get_style(), spacing=5, height=0):
198 | self._scenario_state_btn = StateButton(
199 | "Run Scenario",
200 | "RUN",
201 | "STOP",
202 | on_a_click_fn=self._on_run_scenario_a_text,
203 | on_b_click_fn=self._on_run_scenario_b_text,
204 | physics_callback_fn=self._update_scenario,
205 | )
206 | self._scenario_state_btn.enabled = False
207 | self.wrapped_ui_elements.append(self._scenario_state_btn)
208 |
209 | self.sensor_reading_frame = CollapsableFrame('Sensor Reading', collapsed=False, visible=False)
210 | self.frames.append(self.sensor_reading_frame)
211 | self.waypoints_frame = CollapsableFrame('Waypoints',collapsed=False, visible=False)
212 | self.frames.append(self.waypoints_frame)
213 |
214 |
215 |
216 |
217 | ######################################################################################
218 | # Functions Below This Point Related to Scene Setup (USD\PhysX..)
219 | ######################################################################################
220 |
221 | def _on_init(self):
222 |
223 | # Robot parameters
224 | self._rob_mass = 5.0 # kg
225 | self._rob_angular_damping = 10.0
226 | self._rob_linear_damping = 10.0
227 |
228 | # Sensor
229 | self._sonar = None
230 | self._sonar_trans = np.array([0.3,0.0, 0.3])
231 | self._cam = None
232 | self._cam_trans = np.array([0.3,0.0, 0.1])
233 | self._cam_focal_length = 21
234 | self._DVL = None
235 | self._DVL_trans = np.array([0,0,-0.1])
236 | self._baro = None
237 | self._water_surface = 1.43389 # Arbitrary
238 |
239 | # Scenario
240 | self._scenario = MHL_Sensor_Example_Scenario()
241 |
242 |
243 | def _setup_scene(self):
244 | """
245 | This function is attached to the Load Button as the setup_scene_fn callback.
246 | On pressing the Load Button, a new instance of World() is created and then this function is called.
247 | The user should now load their assets onto the stage and add them to the World Scene.
248 | """
249 | create_new_stage()
250 | if self._USD_path_field.get_value_as_string() != "":
251 | scene_prim_path = '/World/scene'
252 | add_reference_to_stage(usd_path=self._USD_path_field.get_value_as_string(), prim_path=scene_prim_path)
253 | print('User USD scene is loaded.')
254 | else:
255 | print('USD path is empty. Default to example scene')
256 |
257 | # add MHL scene as reference
258 | MHL_prim_path = '/World/mhl'
259 | MHL_usd_path = get_oceansim_assets_path() + "/collected_MHL/mhl_scaled.usd"
260 | add_reference_to_stage(usd_path=MHL_usd_path, prim_path=MHL_prim_path)
261 | # Toggle MHL mesh's collider
262 | SingleGeometryPrim(prim_path=MHL_prim_path, collision=True)
263 | # apply a reflectivity of 1.0 to mesh of the scene for sonar simulation
264 | add_update_semantics(prim=get_prim_at_path(MHL_prim_path + "/Mesh/mesh"),
265 | type_label='reflectivity',
266 | semantic_label='1.0')
267 | # Load the rock
268 | rock_prim_path = '/World/rock'
269 | rock_usd_path = get_oceansim_assets_path() + "/collected_rock/rock.usd"
270 | rock_prim = add_reference_to_stage(usd_path=rock_usd_path, prim_path=rock_prim_path)
271 | # apply a reflectivity of 2.0 for sonar simulation
272 | add_update_semantics(prim=get_prim_at_path(rock_prim_path+ '/Mesh/mesh'),
273 | type_label='reflectivity',
274 | semantic_label='2.0')
275 | # Toggle collider for the rock
276 | rock_collider_prim = SingleGeometryPrim(prim_path=rock_prim_path,
277 | collision=True)
278 | # Set collision approximation using convexDecomposition to automatically compute inertia matrix
279 | rock_collider_prim.set_collision_approximation('convexDecomposition')
280 | # Toggle rigid body for the rock
281 | rock_rigid_prim = SingleRigidPrim(prim_path=rock_prim_path,
282 | translation=np.array([1.0, 0.1, -1.5]),
283 | orientation=euler_angles_to_quat(np.array([0.0,0.0,90]), degrees=True),
284 | )
285 |
286 | # add bluerov robot as reference
287 | robot_prim_path = "/World/rob"
288 | robot_usd_path = get_oceansim_assets_path() + "/Bluerov/BROV_low.usd"
289 | self._rob = add_reference_to_stage(usd_path=robot_usd_path, prim_path=robot_prim_path)
290 | # Toggle rigid body and collider preset for robot, and set zero gravity to mimic underwater environment
291 | rob_rigidBody_API = PhysxSchema.PhysxRigidBodyAPI.Apply(get_prim_at_path(robot_prim_path))
292 | rob_rigidBody_API.CreateDisableGravityAttr(True)
293 | # Set damping of the robot
294 | rob_rigidBody_API.GetLinearDampingAttr().Set(self._rob_linear_damping)
295 | rob_rigidBody_API.GetAngularDampingAttr().Set(self._rob_angular_damping)
296 | # Set the mass for the robot to suppress a warning from inertia autocomputation
297 | rob_collider_prim = SingleGeometryPrim(prim_path=robot_prim_path,
298 | collision=True)
299 | rob_collider_prim.set_collision_approximation('boundingCube')
300 | SingleRigidPrim(prim_path=robot_prim_path,
301 | mass=self._rob_mass,
302 | translation=np.array([-2.0, 0.0, -0.8]))
303 |
304 | set_camera_view(eye=np.array([5,0.6,0.4]), target=rob_collider_prim.get_world_pose()[0])
305 |
306 |
307 | if self._use_sonar:
308 | from isaacsim.oceansim.sensors.ImagingSonarSensor import ImagingSonarSensor
309 | self._sonar = ImagingSonarSensor(prim_path=robot_prim_path + '/sonar',
310 | translation=self._sonar_trans,
311 | orientation=euler_angles_to_quat(np.array([0.0, 45, 0.0]), degrees=True),
312 | range_res=0.005,
313 | angular_res=0.25,
314 | hori_res=4000
315 | )
316 |
317 | if self._use_camera:
318 | from isaacsim.oceansim.sensors.UW_Camera import UW_Camera
319 |
320 | self._cam = UW_Camera(prim_path=robot_prim_path + '/UW_camera',
321 | resolution=[1920,1080],
322 | translation=self._cam_trans)
323 | self._cam.set_focal_length(0.1 * self._cam_focal_length)
324 | self._cam.set_clipping_range(0.1, 100)
325 |
326 | if self._use_DVL:
327 | from isaacsim.oceansim.sensors.DVLsensor import DVLsensor
328 |
329 | self._DVL = DVLsensor(max_range=10)
330 | self._DVL.attachDVL(rigid_body_path=robot_prim_path,
331 | translation=self._DVL_trans)
332 | self._DVL.add_debug_lines()
333 |
334 | if self._use_baro:
335 | from isaacsim.oceansim.sensors.BarometerSensor import BarometerSensor
336 |
337 | self._baro = BarometerSensor(prim_path=robot_prim_path + '/Baro',
338 | water_surface_z=self._water_surface)
339 |
340 |
341 |
342 | def _setup_scenario(self):
343 | """
344 | This function is attached to the Load Button as the setup_post_load_fn callback.
345 |         The user may assume that their assets have been loaded by the setup_scene_fn callback, that
346 | their objects are properly initialized, and that the timeline is paused on timestep 0.
347 | """
348 | self._reset_scenario()
349 | self._add_extra_ui()
350 |
351 | # UI management
352 | self._scenario_state_btn.reset()
353 | self._scenario_state_btn.enabled = True
354 | self._reset_btn.enabled = True
355 |
356 | def _reset_scenario(self):
357 | self._scenario.teardown_scenario()
358 | self._scenario.setup_scenario(self._rob, self._sonar, self._cam, self._DVL, self._baro, self._ctrl_mode)
359 | def _on_post_reset_btn(self):
360 | """
361 | This function is attached to the Reset Button as the post_reset_fn callback.
362 | The user may assume that their objects are properly initialized, and that the timeline is paused on timestep 0.
363 |
364 | They may also assume that objects that were added to the World.Scene have been moved to their default positions.
365 |         I.e. the cube prim will move back to the position it was in when it was created in self._setup_scene().
366 | """
367 | self._reset_scenario()
368 |
369 | # UI management
370 | self._scenario_state_btn.reset()
371 | self._scenario_state_btn.enabled = True
372 |
373 | def _update_scenario(self, step: float):
374 | """This function is attached to the Run Scenario StateButton.
375 | This function was passed in as the physics_callback_fn argument.
376 | This means that when the a_text "RUN" is pressed, a subscription is made to call this function on every physics step.
377 | When the b_text "STOP" is pressed, the physics callback is removed.
378 |
379 | Args:
380 | step (float): The dt of the current physics step
381 | """
382 | self._scenario.update_scenario(step)
383 |
384 | def _on_run_scenario_a_text(self):
385 | """
386 | This function is attached to the Run Scenario StateButton.
387 | This function was passed in as the on_a_click_fn argument.
388 | It is called when the StateButton is clicked while saying a_text "RUN".
389 |
390 | This function simply plays the timeline, which means that physics steps will start happening. After the world is loaded or reset,
391 | the timeline is paused, which means that no physics steps will occur until the user makes it play either programmatically or
392 | through the left-hand UI toolbar.
393 | """
394 | self._timeline.play()
395 |
396 | def _on_run_scenario_b_text(self):
397 | """
398 | This function is attached to the Run Scenario StateButton.
399 | This function was passed in as the on_b_click_fn argument.
400 |         It is called when the StateButton is clicked while saying b_text "STOP".
401 |
402 | Pausing the timeline on b_text is not strictly necessary for this example to run.
403 | Clicking "STOP" will cancel the physics subscription that updates the scenario, which means that
404 | the robot will stop getting new commands and the cube will stop updating without needing to
405 | pause at all. The reason that the timeline is paused here is to prevent the robot being carried
406 | forward by momentum for a few frames after the physics subscription is canceled. Pausing here makes
407 | this example prettier, but if curious, the user should observe what happens when this line is removed.
408 | """
409 | self._timeline.pause()
410 |
411 | def _reset_extension(self):
412 | """This is called when the user opens a new stage from self.on_stage_event().
413 | All state should be reset.
414 | """
415 | self._on_init()
416 | self._reset_ui()
417 |
418 | def _reset_ui(self):
419 | self._scenario_state_btn.reset()
420 | self._scenario_state_btn.enabled = False
421 | self._reset_btn.enabled = False
422 |
423 |
424 | def _on_sonar_checkbox_click_fn(self, model):
425 | self._use_sonar = model
426 | print('Reload the scene for changes to take effect.')
427 |
428 | def _on_camera_checkbox_click_fn(self, model):
429 | self._use_camera = model
430 | print('Reload the scene for changes to take effect.')
431 |
432 | def _on_DVL_checkbox_click_fn(self, model):
433 | self._use_DVL = model
434 | print('Reload the scene for changes to take effect.')
435 |
436 | def _on_baro_checkbox_click_fn(self, model):
437 | self._use_baro = model
438 | print('Reload the scene for changes to take effect.')
439 |
440 | def _on_manual_ctrl_cb_click_fn(self, model):
441 | self._manual_ctrl = model
442 | print('Reload the scene for changes to take effect.')
443 |
444 | def _on_ctrl_mode_dropdown_clicked(self, model):
445 | self._ctrl_mode = model
446 | print(f'Ctrl mode: {model}. Reload the scene for changes to take effect.')
447 |
448 |
449 | def _add_extra_ui(self):
450 | with self.sensor_reading_frame:
451 | with ui.VStack(spacing=5, height=0):
452 | if self._use_DVL is True:
453 | self._build_DVL_plot()
454 | self.sensor_reading_frame.visible = True
455 | if self._use_baro is True:
456 | self._build_baro_plot()
457 | self.sensor_reading_frame.visible = True
458 | if not self._use_baro and not self._use_DVL:
459 | self.sensor_reading_frame.visible = False
460 | with self.waypoints_frame:
461 | if self._ctrl_mode == 'Waypoints':
462 | self._build_waypoints_filepicker()
463 | self.waypoints_frame.visible = True
464 | else:
465 | self.waypoints_frame.visible = False
466 |
467 |
468 | def _build_waypoints_filepicker(self):
469 | self._waypoints_path_field = str_builder(
470 | label='Path to waypoints',
471 | default_val=self._waypoints_path,
472 | tooltip='Select the txt files containing the waypoint data',
473 | use_folder_picker=True,
474 | folder_button_title='Select txt',
475 | folder_dialog_title='Select the txt file containing the waypoint'
476 | )
477 | self._scenario.setup_waypoints(
478 | waypoint_path=self._waypoints_path,
479 | default_waypoint_path=self._extension_path + '/demo/demo_waypoints.txt'
480 | )
481 | self._waypoints_path_field.add_value_changed_fn(self._on_waypoints_path_changed_fn)
482 |
483 | def _on_waypoints_path_changed_fn(self, model):
484 | self._waypoints_path = model.get_value_as_string()
485 | self._scenario.setup_waypoints(
486 | waypoint_path=model.get_value_as_string(),
487 | default_waypoint_path=self._extension_path + '/demo/demo_waypoints.txt'
488 | )
489 |
490 | def _build_DVL_plot(self):
491 | self._DVL_event_sub = None
492 | self._DVL_x_vel = []
493 | self._DVL_y_vel = []
494 | self._DVL_z_vel = []
495 |
496 | kwargs = {
497 | "label": "DVL reading xyz vel (m/s)",
498 | "on_clicked_fn": self.toggle_DVL_step,
499 | "data": [self._DVL_x_vel, self._DVL_y_vel, self._DVL_z_vel],
500 | }
501 | (
502 | self._DVL_plot,
503 | self._DVL_plot_value,
504 | ) = combo_cb_xyz_plot_builder(**kwargs)
505 |     def toggle_DVL_step(self, val=None):
506 |         print("DVL DAQ: ", val)
507 |         if val:
508 |             if not self._DVL_event_sub:
509 |                 self._DVL_event_sub = (
510 |                     omni.kit.app.get_app().get_update_event_stream().create_subscription_to_pop(self._on_DVL_step)
511 |                 )
512 |         else:
513 |             # Dropping the subscription handle unsubscribes the update callback.
514 |             self._DVL_event_sub = None
515 | 
516 |
517 | def _on_DVL_step(self, e: carb.events.IEvent):
518 |         # Casting np.float32 to Python float is necessary because ui.Plot expects a consistent data type
519 | x_vel = float(self._scenario._DVL_reading[0])
520 | y_vel = float(self._scenario._DVL_reading[1])
521 | z_vel = float(self._scenario._DVL_reading[2])
522 |
523 | self._DVL_plot_value[0].set_value(x_vel)
524 | self._DVL_plot_value[1].set_value(y_vel)
525 | self._DVL_plot_value[2].set_value(z_vel)
526 |
527 | self._DVL_x_vel.append(x_vel)
528 | self._DVL_y_vel.append(y_vel)
529 | self._DVL_z_vel.append(z_vel)
530 | if len(self._DVL_x_vel) > 50:
531 | self._DVL_x_vel.pop(0)
532 | self._DVL_y_vel.pop(0)
533 | self._DVL_z_vel.pop(0)
534 |
535 | self._DVL_plot[0].set_data(*self._DVL_x_vel)
536 | self._DVL_plot[1].set_data(*self._DVL_y_vel)
537 | self._DVL_plot[2].set_data(*self._DVL_z_vel)
538 |
539 | def _build_baro_plot(self):
540 | self._baro_event_sub = None
541 | self._baro_data = []
542 |
543 | kwargs = {
544 | "label": "Barometer reading (Pa)",
545 | "on_clicked_fn": self.toggle_baro_step,
546 | "data": self._baro_data,
547 | "min": 101325.0,
548 | 'max': 101325.0 + 50000,
549 | }
550 | self._baro_plot, self._baro_plot_value = combo_cb_plot_builder(**kwargs)
551 |
552 |
553 |     def toggle_baro_step(self, val=None):
554 |         print('Barometer DAQ: ', val)
555 |         if val:
556 |             if not self._baro_event_sub:
557 |                 self._baro_event_sub = (
558 |                     omni.kit.app.get_app().get_update_event_stream().create_subscription_to_pop(self._on_baro_step)
559 |                 )
560 |         else:
561 |             # Dropping the subscription handle unsubscribes the update callback.
562 |             self._baro_event_sub = None
563 | 
564 |
565 | def _on_baro_step(self, e: carb.events.IEvent):
566 | baro = float(self._scenario._baro_reading)
567 | self._baro_plot_value.set_value(baro)
568 | self._baro_data.append(baro)
569 | if len(self._baro_data) > 50:
570 | self._baro_data.pop(0)
571 | self._baro_plot.set_data(*self._baro_data)
572 |
573 |
574 |
--------------------------------------------------------------------------------
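The `_on_DVL_step` callback above keeps a rolling window of the last 50 samples by appending to plain lists and popping the oldest entry. The same behavior can be sketched with `collections.deque(maxlen=...)`, which drops old samples automatically; the `RollingPlotBuffer` class and the sample values below are illustrative, not part of the extension:

```python
from collections import deque

class RollingPlotBuffer:
    """Keep the most recent `size` samples per axis, like the DVL plot does."""
    def __init__(self, size=50):
        self.x = deque(maxlen=size)  # deque evicts the oldest sample automatically
        self.y = deque(maxlen=size)
        self.z = deque(maxlen=size)

    def push(self, vel):
        # Cast to float so the plot sees a consistent data type (cf. _on_DVL_step)
        self.x.append(float(vel[0]))
        self.y.append(float(vel[1]))
        self.z.append(float(vel[2]))

buf = RollingPlotBuffer(size=50)
for i in range(120):
    buf.push((i, -i, 0.5 * i))

print(len(buf.x))  # capped at 50, no manual pop(0) needed
print(buf.x[0])    # oldest retained sample
```

Compared to the list/`pop(0)` approach in `_on_DVL_step`, a bounded deque makes the window size explicit and avoids O(n) pops from the front.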
/isaacsim/oceansim/modules/colorpicker_python/__init__.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
2 | #
3 | # NVIDIA CORPORATION and its licensors retain all intellectual property
4 | # and proprietary rights in and to this software, related documentation
5 | # and any modifications thereto. Any use, reproduction, disclosure or
6 | # distribution of this software and related documentation without an express
7 | # license agreement from NVIDIA CORPORATION is strictly prohibited.
8 | #
9 | from .extension import *
10 |
--------------------------------------------------------------------------------
/isaacsim/oceansim/modules/colorpicker_python/extension.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
2 | #
3 | # NVIDIA CORPORATION and its licensors retain all intellectual property
4 | # and proprietary rights in and to this software, related documentation
5 | # and any modifications thereto. Any use, reproduction, disclosure or
6 | # distribution of this software and related documentation without an express
7 | # license agreement from NVIDIA CORPORATION is strictly prohibited.
8 | #
9 |
10 | import asyncio
11 | import gc
12 | import weakref
13 |
14 |
15 | import omni
16 | import omni.kit.commands
17 | import omni.physx as _physx
18 | import omni.timeline
19 | import omni.ui as ui
20 | import omni.usd
21 | from omni.isaac.ui.element_wrappers import ScrollingWindow
22 | from omni.isaac.ui.menu import MenuItemDescription, make_menu_item_description
23 | from omni.kit.menu.utils import add_menu_items, remove_menu_items
24 | from omni.usd import StageEventType
25 |
26 | from .global_variables import EXTENSION_TITLE
27 | from .ui_builder import UIBuilder
28 |
29 | """
30 | This file serves as a basic template for the standard boilerplate operations
31 | that make a UI-based extension appear on the toolbar.
32 |
33 | This implementation is meant to cover most use-cases without modification.
34 | Various callbacks are hooked up to a separate class UIBuilder in .ui_builder.py
35 | Most users will be able to make their desired UI extension by interacting solely with
36 | UIBuilder.
37 |
38 | This class sets up standard useful callback functions in UIBuilder:
39 | on_menu_callback: Called when extension is opened
40 | on_timeline_event: Called when timeline is stopped, paused, or played
41 | on_physics_step: Called on every physics step
42 | on_stage_event: Called when stage is opened or closed
43 | cleanup: Called when resources such as physics subscriptions should be cleaned up
44 | build_ui: User function that creates the UI they want.
45 | """
46 |
47 |
48 | class Extension(omni.ext.IExt):
49 | def on_startup(self, ext_id: str):
50 | """Initialize extension and UI elements"""
51 | self.ext_id = ext_id
52 | self._usd_context = omni.usd.get_context()
53 |
54 | # Build Window
55 | self._window = ScrollingWindow(
56 | title=EXTENSION_TITLE, width=600, height=500, visible=False, dockPreference=ui.DockPreference.LEFT_BOTTOM
57 | )
58 | self._window.set_visibility_changed_fn(self._on_window)
59 |
60 | action_registry = omni.kit.actions.core.get_action_registry()
61 | action_registry.register_action(
62 | ext_id,
63 | f"CreateUIExtension:{EXTENSION_TITLE}",
64 | self._menu_callback,
65 | description=f"Add {EXTENSION_TITLE} Extension to UI toolbar",
66 | )
67 |
68 |
69 | self._menu_items = [
70 | make_menu_item_description(ext_id, EXTENSION_TITLE, lambda a=weakref.proxy(self): a._menu_callback())
71 | ]
72 |
73 | add_menu_items(self._menu_items, "OceanSim")
74 |
75 | # Filled in with User Functions
76 | self.ui_builder = UIBuilder()
77 |
78 | # Events
79 | self._usd_context = omni.usd.get_context()
80 | self._physxIFace = _physx.acquire_physx_interface()
81 | self._physx_subscription = None
82 | self._stage_event_sub = None
83 | self._timeline = omni.timeline.get_timeline_interface()
84 |
85 | def on_shutdown(self):
86 | self._models = {}
87 |         remove_menu_items(self._menu_items, "OceanSim")
88 |
89 | action_registry = omni.kit.actions.core.get_action_registry()
90 | action_registry.deregister_action(self.ext_id, f"CreateUIExtension:{EXTENSION_TITLE}")
91 |
92 | if self._window:
93 | self._window = None
94 | self.ui_builder.cleanup()
95 | gc.collect()
96 |
97 | def _on_window(self, visible):
98 | if self._window.visible:
99 | # Subscribe to Stage and Timeline Events
100 | self._usd_context = omni.usd.get_context()
101 | events = self._usd_context.get_stage_event_stream()
102 | self._stage_event_sub = events.create_subscription_to_pop(self._on_stage_event)
103 | stream = self._timeline.get_timeline_event_stream()
104 | self._timeline_event_sub = stream.create_subscription_to_pop(self._on_timeline_event)
105 |
106 | self._build_ui()
107 | else:
108 | self._usd_context = None
109 | self._stage_event_sub = None
110 | self._timeline_event_sub = None
111 | self.ui_builder.cleanup()
112 |
113 | def _build_ui(self):
114 | with self._window.frame:
115 | with ui.VStack(spacing=5, height=0):
116 | self._build_extension_ui()
117 |
118 | async def dock_window():
119 | await omni.kit.app.get_app().next_update_async()
120 |
121 | def dock(space, name, location, pos=0.5):
122 | window = omni.ui.Workspace.get_window(name)
123 | if window and space:
124 | window.dock_in(space, location, pos)
125 | return window
126 |
127 | tgt = ui.Workspace.get_window("Viewport")
128 | dock(tgt, EXTENSION_TITLE, omni.ui.DockPosition.LEFT, 0.33)
129 | await omni.kit.app.get_app().next_update_async()
130 |
131 | self._task = asyncio.ensure_future(dock_window())
132 |
133 | #################################################################
134 | # Functions below this point call user functions
135 | #################################################################
136 |
137 | def _menu_callback(self):
138 | self._window.visible = not self._window.visible
139 | self.ui_builder.on_menu_callback()
140 |
141 | def _on_timeline_event(self, event):
142 | if event.type == int(omni.timeline.TimelineEventType.PLAY):
143 | if not self._physx_subscription:
144 | self._physx_subscription = self._physxIFace.subscribe_physics_step_events(self._on_physics_step)
145 | elif event.type == int(omni.timeline.TimelineEventType.STOP):
146 | self._physx_subscription = None
147 |
148 | self.ui_builder.on_timeline_event(event)
149 |
150 | def _on_physics_step(self, step):
151 | self.ui_builder.on_physics_step(step)
152 |
153 | def _on_stage_event(self, event):
154 | if event.type == int(StageEventType.OPENED) or event.type == int(StageEventType.CLOSED):
155 | # stage was opened or closed, cleanup
156 | self._physx_subscription = None
157 | self.ui_builder.cleanup()
158 |
159 | self.ui_builder.on_stage_event(event)
160 |
161 | def _build_extension_ui(self):
162 | # Call user function for building UI
163 | self.ui_builder.build_ui()
164 |
--------------------------------------------------------------------------------
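Both this extension and the SensorExample UI rely on the Kit convention that an event subscription stays alive only while its handle is referenced, which is why assigning `self._physx_subscription = None` (or the DAQ toggles' `self._DVL_event_sub = None`) unsubscribes. A plain-Python sketch of that handle-owned-subscription pattern; the `EventStream`/`Subscription` classes here are illustrative stand-ins, not the omni API:

```python
import weakref

class Subscription:
    """Handle object; the callback fires only while this handle is alive."""
    def __init__(self, stream, fn):
        self._fn = fn
        stream._subs.append(weakref.ref(self))

class EventStream:
    def __init__(self):
        self._subs = []

    def create_subscription_to_pop(self, fn):
        return Subscription(self, fn)  # caller must keep the returned handle

    def push(self, event):
        # Dead weakrefs mean the handle was dropped (e.g. set to None): prune them.
        self._subs = [r for r in self._subs if r() is not None]
        for r in self._subs:
            r()._fn(event)

stream = EventStream()
received = []
sub = stream.create_subscription_to_pop(received.append)  # subscribed
stream.push("tick-1")
sub = None                        # dropping the handle unsubscribes, as in the toggles
stream.push("tick-2")
print(received)
```

This is why the template never calls an explicit `unsubscribe()`: clearing the attribute is the unsubscribe.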
/isaacsim/oceansim/modules/colorpicker_python/global_variables.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
2 | #
3 | # NVIDIA CORPORATION and its licensors retain all intellectual property
4 | # and proprietary rights in and to this software, related documentation
5 | # and any modifications thereto. Any use, reproduction, disclosure or
6 | # distribution of this software and related documentation without an express
7 | # license agreement from NVIDIA CORPORATION is strictly prohibited.
8 | #
9 |
10 | EXTENSION_TITLE = "Color Picker"
11 |
12 | EXTENSION_DESCRIPTION = "This is a handy widget for adjusting the water-column effects \n" \
13 |                         "to mimic a realistic underwater environment based on an underwater image formation model. \n" \
14 |                         "To use this widget, the user should load the USD scene by specifying the 'Path to USD', \n" \
15 |                         "then click 'LOAD' and 'RUN'. \n" \
16 |                         "The user will observe the real-time rendered output broadcast from the viewport \n" \
17 |                         "and can adjust the rendering parameters as needed. \n" \
18 |                         "Parameters can be saved as a YAML file and read by the UW_Camera sensor for end-to-end deployment.\n"
19 | 
20 |
21 | EXTENSION_LINK = "https://umfieldrobotics.github.io/OceanSim/"
22 |
--------------------------------------------------------------------------------
/isaacsim/oceansim/modules/colorpicker_python/scenario.py:
--------------------------------------------------------------------------------
1 | # Omniverse import
2 | import numpy as np
3 | import omni.replicator.core as rep
4 | import omni.ui as ui
5 | import warp as wp
6 | from omni.kit.viewport.utility import get_active_viewport
7 |
8 | # Custom import
9 | from isaacsim.oceansim.utils.UWrenderer_utils import UW_render
10 |
11 | class Colorpicker_Scenario():
12 | def __init__(self):
13 |
14 | self.raw_rgba = None
15 | self.depth_image = None
16 | self._running_scenario = False
17 | self._time = 0.0
18 | self._id = 0
19 | self._device = wp.get_preferred_device()
20 |
21 | self._viewport = None
22 | self._viewport_rgba_annot = None
23 | self._viewport_depth_annot = None
24 |
25 |
26 |
27 | def setup_scenario(self):
28 |
29 |
30 | self._running_scenario = True
31 |
32 | self._viewport = get_active_viewport()
33 | self._viewport_rgba_annot = rep.AnnotatorRegistry.get_annotator(name='LdrColor', device=str(self._device))
34 | self._viewport_depth_annot = rep.AnnotatorRegistry.get_annotator(name="distance_to_camera", device=str(self._device))
35 | self._viewport_rgba_annot.attach(self._viewport.render_product_path)
36 | self._viewport_depth_annot.attach(self._viewport.render_product_path)
37 |
38 | self.make_window()
39 |
40 |
41 | def teardown_scenario(self):
42 | self._running_scenario = False
43 | self._time = 0.0
44 | self._id = 0
45 |
46 | if self._viewport is not None:
47 | self._viewport_rgba_annot.detach(self._viewport.render_product_path)
48 | self._viewport_depth_annot.detach(self._viewport.render_product_path)
49 | rep.AnnotatorCache.clear(self._viewport_rgba_annot)
50 | rep.AnnotatorCache.clear(self._viewport_depth_annot)
51 | self.ui_destroy()
52 |
53 | self._viewport = None
54 | self._viewport_rgba_annot = None
55 | self._viewport_depth_annot = None
56 |
57 |
58 | def update_scenario(self, step: float, render_param: np.ndarray):
59 |
60 |
61 | if not self._running_scenario:
62 | return
63 | self._time += step
64 | if self._viewport_rgba_annot.get_data().size == 0:
65 | return
66 | self.raw_rgba = self._viewport_rgba_annot.get_data()
67 | self.depth_image = self._viewport_depth_annot.get_data()
68 |
69 | self.update_render(render_param)
70 |
71 | self._id += 1
72 |
73 |
74 |
75 |
76 | def update_render(self, render_param: np.ndarray):
77 | if self.raw_rgba is not None:
78 | if self.raw_rgba.size !=0:
79 | backscatter_value = wp.vec3f(*render_param[0:3])
80 | atten_coeff = wp.vec3f(*render_param[6:9])
81 | backscatter_coeff = wp.vec3f(*render_param[3:6])
82 | self.uw_image = wp.zeros_like(self.raw_rgba)
83 | wp.launch(
84 | dim=(self.raw_rgba.shape[0], self.raw_rgba.shape[1]),
85 | kernel=UW_render,
86 | inputs=[
87 | self.raw_rgba,
88 | self.depth_image,
89 | backscatter_value,
90 | atten_coeff,
91 | backscatter_coeff
92 | ],
93 | outputs=[
94 | self.uw_image
95 | ]
96 | )
97 |
98 | self.image_provider.set_bytes_data_from_gpu(self.uw_image.ptr, [self.uw_image.shape[1], self.uw_image.shape[0]])
99 |
100 | def make_window(self):
101 |
102 | self.wrapped_ui_elements = []
103 | window = ui.Window("Render Result", width=1920, height=1080 + 40, visible=True)
104 | self.image_provider = ui.ByteImageProvider()
105 | with window.frame:
106 | with ui.ZStack(height=1080):
107 | ui.Rectangle(style={"background_color": 0xFF000000})
108 |                 ui.Label('Run the scenario for the image to be received',
109 | style={'font_size': 55,'alignment': ui.Alignment.CENTER},
110 | word_wrap=True)
111 | render_result = ui.ImageWithProvider(self.image_provider, width=1920, height=1080,
112 | style={'fill_policy': ui.FillPolicy.PRESERVE_ASPECT_FIT,
113 | 'alignment' :ui.Alignment.CENTER})
114 |
115 | self.wrapped_ui_elements.append(render_result)
116 | self.wrapped_ui_elements.append(window)
117 | self.wrapped_ui_elements.append(self.image_provider)
118 |
119 | def ui_destroy(self):
120 | for elem in self.wrapped_ui_elements:
121 | elem.destroy()
122 |
123 |
124 |
125 |
126 |
127 |
128 |
129 |
130 |
131 | # from omni.kit.viewport.utility import get_active_viewport
132 | # self.viewport_api = get_active_viewport()
133 | # capture = self.viewport_api.schedule_capture(ByteCapture(self.on_capture_completed, aov_name='LdrColor'))
134 | # def on_capture_completed(self, buffer, buffer_size, width, height, format):
135 | # '''
136 | # Example
137 | # buffer:
138 | # buffer_size: 3686400
139 | # width: 1280
140 | # height: 720
141 | # format: TextureFormat.RGBA8_UNORM
142 | # '''
143 | # self.image_provider.set_raw_bytes_data(raw_bytes=buffer,
144 | # sizes=[width, height],
145 | # format=format)
146 |
--------------------------------------------------------------------------------
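The `UW_render` warp kernel launched above lives in `utils/UWrenderer_utils.py` and is not shown in this chunk. Assuming it follows the standard underwater image formation model, I = J·exp(−β_att·z) + B∞·(1 − exp(−β_bs·z)), the per-pixel math would look like this NumPy sketch; the function name `uw_render_np` and the exact formula are assumptions, and the parameter values mirror `params_default` in the Color Picker UI:

```python
import numpy as np

def uw_render_np(rgb, depth, backscatter_value, atten_coeff, backscatter_coeff):
    """Underwater image formation model (vectorized NumPy sketch).

    rgb:   (H, W, 3) in-air image J, float in [0, 1]
    depth: (H, W) range to the camera in meters
    """
    z = depth[..., None]                                   # (H, W, 1), broadcasts over RGB
    direct = rgb * np.exp(-atten_coeff * z)                # attenuated direct signal
    veil = backscatter_value * (1.0 - np.exp(-backscatter_coeff * z))
    return np.clip(direct + veil, 0.0, 1.0)

# Parameter triplets assumed to match params_default in the Color Picker UI.
rgb = np.full((2, 2, 3), 0.8)
depth = np.full((2, 2), 5.0)
out = uw_render_np(
    rgb, depth,
    backscatter_value=np.array([0.0, 0.31, 0.24]),
    atten_coeff=np.array([0.05, 0.05, 0.05]),
    backscatter_coeff=np.array([0.05, 0.05, 0.2]),
)
print(out[0, 0])  # red is only attenuated; green/blue also gain a backscatter veil
```

The three `wp.vec3f` inputs built in `update_render` map onto `backscatter_value`, `atten_coeff`, and `backscatter_coeff` in this sketch; the GPU kernel simply evaluates the same expression per pixel.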
/isaacsim/oceansim/modules/colorpicker_python/ui_builder.py:
--------------------------------------------------------------------------------
1 | # Omniverse import
2 | import numpy as np
3 | import omni.timeline
4 | import omni.ui as ui
5 | from omni.usd import StageEventType
6 | import warp as wp
7 | import yaml
8 | from PIL import Image
9 | import carb
10 | import os
11 | # Isaac sim import
12 |
13 | from isaacsim.core.utils.stage import open_stage
14 | from isaacsim.gui.components import CollapsableFrame, StateButton, get_style, combo_floatfield_slider_builder, Button, StringField, setup_ui_headers, str_builder
15 | from isaacsim.examples.extension.core_connectors import LoadButton, ResetButton
16 | from isaacsim.core.utils.extensions import get_extension_path
17 |
18 |
19 | # Custom import
20 | from .scenario import Colorpicker_Scenario
21 | from isaacsim.oceansim.utils.UWrenderer_utils import UW_render
22 | from .global_variables import EXTENSION_DESCRIPTION, EXTENSION_TITLE, EXTENSION_LINK
23 |
24 |
25 | class UIBuilder:
26 | def __init__(self):
27 | self._ext_id = omni.kit.app.get_app().get_extension_manager().get_extension_id_by_module(__name__)
28 | self._file_path = os.path.abspath(__file__)
29 | self._title = EXTENSION_TITLE
30 | self._doc_link = EXTENSION_LINK
31 | self._overview = EXTENSION_DESCRIPTION
32 | self._extension_path = get_extension_path(self._ext_id)
33 |
34 | # UI frames created
35 | self.frames = []
36 | # UI elements created using a UIElementWrapper instance
37 | self.wrapped_ui_elements = []
38 |
39 | # Get access to the timeline to control stop/pause/play programmatically
40 | self._timeline = omni.timeline.get_timeline_interface()
41 | 
42 |
43 | # Run initialization for the provided example
44 | self._on_init()
45 |
46 | ###################################################################################
47 | # The Functions Below Are Called Automatically By extension.py
48 | ###################################################################################
49 |
50 | def on_menu_callback(self):
51 | """Callback for when the UI is opened from the toolbar.
52 | This is called directly after build_ui().
53 | """
54 | pass
55 |
56 | def on_timeline_event(self, event):
57 | """Callback for Timeline events (Play, Pause, Stop)
58 |
59 | Args:
60 | event (omni.timeline.TimelineEventType): Event Type
61 | """
62 | if event.type == int(omni.timeline.TimelineEventType.STOP):
63 | # When the user hits the stop button through the UI, they will inevitably discover edge cases where things break
64 | # For complete robustness, the user should resolve those edge cases here
65 | # In general, for extensions based off this template, there is no value to having the user click the play/stop
66 | # button instead of using the Load/Reset/Run buttons provided.
67 | self._scenario_state_btn.reset()
68 | self._scenario_state_btn.enabled = False
69 |
70 | def on_physics_step(self, step: float):
71 | """Callback for Physics Step.
72 | Physics steps only occur when the timeline is playing
73 |
74 | Args:
75 | step (float): Size of physics step
76 | """
77 | pass
78 |
79 | def on_stage_event(self, event):
80 | """Callback for Stage Events
81 |
82 | Args:
83 | event (omni.usd.StageEventType): Event Type
84 | """
85 | if event.type == int(StageEventType.OPENED):
86 | # If the user opens a new stage, the extension should completely reset
87 | self._reset_extension()
88 |
89 | def cleanup(self):
90 | """
91 | Called when the stage is closed or the extension is hot reloaded.
92 | Perform any necessary cleanup such as removing active callback functions
93 | Buttons imported from omni.isaac.ui.element_wrappers implement a cleanup function that should be called
94 | """
95 | for ui_elem in self.wrapped_ui_elements:
96 | ui_elem.cleanup()
97 | for frame in self.frames:
98 | frame.cleanup()
99 |
100 | def build_ui(self):
101 | """
102 | Build a custom UI tool to run your extension.
103 | This function will be called any time the UI window is closed and reopened.
104 | """
105 | setup_ui_headers(
106 | ext_id=self._ext_id,
107 | file_path=self._file_path,
108 | title=self._title,
109 | doc_link=self._doc_link,
110 | overview=self._overview,
111 | info_collapsed=False
112 | )
113 |
114 | demo_image_path = self._extension_path + "/demo/demo_rgb.png"
115 | demo_depth_path = self._extension_path + '/demo/demo_depth.npy'
116 | demo_image = Image.open(demo_image_path).convert('RGBA')
117 | self._demo_rgba = wp.array(data=np.array(demo_image), dtype=wp.uint8, ndim=3)
118 | self._demo_depth = wp.array(data=np.load(file=demo_depth_path),
119 | dtype=wp.float32,
120 | ndim=2)
121 | self._demo_res = [self._demo_rgba.shape[1], self._demo_rgba.shape[0]]
122 | self._demo_provider = ui.ByteImageProvider()
123 | self._demo_provider.set_bytes_data_from_gpu(self._demo_rgba.ptr, self._demo_res)
124 | self._uw_image = None
125 | self._param = np.zeros(9)
126 |
127 | world_controls_frame = CollapsableFrame("World Controls", collapsed=False)
128 | self.frames.append(world_controls_frame)
129 | with world_controls_frame:
130 | with ui.VStack(style=get_style(), spacing=5, height=0):
131 | self.scene_path_field = str_builder(
132 | label='Path to USD',
133 | tooltip='Input the path to your USD scene file',
134 | default_val="",
135 | use_folder_picker=True,
136 | folder_button_title='Select USD',
137 | folder_dialog_title='Select USD scene to import'
138 | )
139 |
140 | self._load_btn = LoadButton(
141 | "Load Button", "LOAD", setup_scene_fn=self._setup_scene, setup_post_load_fn=self._setup_scenario
142 | )
143 | self._load_btn.set_world_settings(physics_dt=1 / 60.0, rendering_dt=1 / 60.0)
144 | self.wrapped_ui_elements.append(self._load_btn)
145 |
146 | self._reset_btn = ResetButton(
147 | "Reset Button", "RESET", pre_reset_fn=None, post_reset_fn=self._on_post_reset_btn
148 | )
149 | self._reset_btn.enabled = False
150 | self.wrapped_ui_elements.append(self._reset_btn)
151 |
152 | run_scenario_frame = CollapsableFrame("Run Scenario", collapsed=False)
153 | self.frames.append(run_scenario_frame)
154 | with run_scenario_frame:
155 | with ui.VStack(style=get_style(), spacing=5, height=0):
156 | self._scenario_state_btn = StateButton(
157 | "Run Scenario",
158 | "RUN",
159 | "STOP",
160 | on_a_click_fn=self._on_run_scenario_a_text,
161 | on_b_click_fn=self._on_run_scenario_b_text,
162 | physics_callback_fn=self._update_scenario,
163 | )
164 | self._scenario_state_btn.enabled = False
165 | self.wrapped_ui_elements.append(self._scenario_state_btn)
166 |
167 |
168 | color_picker_frame = CollapsableFrame('Color Picker', collapsed=False)
169 | self.frames.append(color_picker_frame)
170 | self._param_models = []
171 | params_labels = [
172 | "Backscatter_R", "Backscatter_G","Backscatter_B",
173 | "Backscatter_coeff_R", "Backscatter_coeff_G", "Backscatter_coeff_B",
174 | "Attenuation_coeff_R", "Attenuation_coeff_G", "Attenuation_coeff_B",
175 | ]
176 | params_types = [
177 | 'float', 'float', 'float',
178 | 'float', 'float', 'float',
179 | 'float', 'float', 'float',
180 | ]
181 | params_default = [
182 | 0.0, 0.31, 0.24,
183 | 0.05, 0.05, 0.2,
184 | 0.05, 0.05, 0.05
185 | ]
186 | self._param = params_default
187 | with color_picker_frame:
188 | with ui.VStack(spacing=10):
189 |
190 | for i in range(9):
191 | param_model, param_slider = combo_floatfield_slider_builder(
192 | label=params_labels[i],
193 | type=params_types[i],
194 | default_val=params_default[i])
195 | self._param_models.append(param_model)
196 | param_model.add_value_changed_fn(self._on_color_param_changes)
197 | self._on_color_param_changes(param_model)
198 | with ui.ZStack(height=300):
199 | ui.Rectangle(style={"background_color": 0xFF000000})
200 | ui.ImageWithProvider(self._demo_provider,
201 | style={'alignment': ui.Alignment.CENTER,
202 | "fill_policy": ui.FillPolicy.PRESERVE_ASPECT_FIT})
203 | self.save_dir_field = StringField(
204 | label='YAML saving Path',
205 | tooltip='Save the render parameter and reference pic into this directory',
206 | use_folder_picker=True
207 | )
208 |
209 | self.wrapped_ui_elements.append(self.save_dir_field)
210 | self.file_name_field = StringField(
211 | label='File name',
212 | tooltip='Label your yaml file',
213 | default_value='render_param_0'
214 | )
215 | save_button = Button(
216 | text="Save param",
217 | label='Save render params',
218 | tooltip='Click this button to save the current render parameters',
219 | on_click_fn=self._on_save_param
220 | )
221 | save_viewport_button = Button(
222 | text='Save viewport',
223 | label='Save rendered image',
224 | tooltip="Click this button to capture the current raw/rendered/depth image from viewport",
225 | on_click_fn=self._on_save_viewport
226 | )
227 |
228 | self.wrapped_ui_elements.append(self.file_name_field)
229 | self.wrapped_ui_elements.append(save_button)
230 | self.wrapped_ui_elements.append(save_viewport_button)
231 |
232 | ######################################################################################
233 |     # Functions Below This Point Related to Scene Setup (USD, PhysX, ...)
234 | ######################################################################################
235 |
236 | def _on_init(self):
237 |
238 | # Robot parameters
239 |
240 | self._scenario = Colorpicker_Scenario()
241 |
242 |
243 | def _setup_scene(self):
244 | """
245 | This function is attached to the Load Button as the setup_scene_fn callback.
246 | On pressing the Load Button, a new instance of World() is created and then this function is called.
247 | The user should now load their assets onto the stage and add them to the World Scene.
248 | """
249 | try:
250 | open_stage(self.scene_path_field.get_value_as_string())
251 | print('USD scene is loaded.')
252 |         except Exception:
253 |             print('Path is not valid or the scene cannot be opened. Defaulting to the current stage.')
254 |
255 |
256 |
257 | def _setup_scenario(self):
258 | """
259 | This function is attached to the Load Button as the setup_post_load_fn callback.
260 | The user may assume that their assets have been loaded by their setup_scene_fn callback, that
261 | their objects are properly initialized, and that the timeline is paused on timestep 0.
262 | """
263 | self._reset_scenario()
264 |
265 | # UI management
266 | self._scenario_state_btn.reset()
267 | self._scenario_state_btn.enabled = True
268 | self._reset_btn.enabled = True
269 |
270 | def _reset_scenario(self):
271 | self._scenario.teardown_scenario()
272 | self._scenario.setup_scenario()
273 |
274 | def _on_post_reset_btn(self):
275 | """
276 | This function is attached to the Reset Button as the post_reset_fn callback.
277 | The user may assume that their objects are properly initialized, and that the timeline is paused on timestep 0.
278 |
279 | They may also assume that objects that were added to the World.Scene have been moved to their default positions.
280 | I.e. the cube prim will move back to the position it was in when it was created in self._setup_scene().
281 | """
282 | self._reset_scenario()
283 |
284 | # UI management
285 | self._scenario_state_btn.reset()
286 | self._scenario_state_btn.enabled = True
287 |
288 | def _update_scenario(self, step: float):
289 | """This function is attached to the Run Scenario StateButton.
290 | This function was passed in as the physics_callback_fn argument.
291 | This means that when the a_text "RUN" is pressed, a subscription is made to call this function on every physics step.
292 | When the b_text "STOP" is pressed, the physics callback is removed.
293 |
294 | Args:
295 | step (float): The dt of the current physics step
296 | """
297 | self._scenario.update_scenario(step, self._param)
298 |
299 | def _on_run_scenario_a_text(self):
300 | """
301 | This function is attached to the Run Scenario StateButton.
302 | This function was passed in as the on_a_click_fn argument.
303 | It is called when the StateButton is clicked while saying a_text "RUN".
304 |
305 | This function simply plays the timeline, which means that physics steps will start happening. After the world is loaded or reset,
306 | the timeline is paused, which means that no physics steps will occur until the user makes it play either programmatically or
307 | through the left-hand UI toolbar.
308 | """
309 | self._timeline.play()
310 |
311 | def _on_run_scenario_b_text(self):
312 | """
313 | This function is attached to the Run Scenario StateButton.
314 | This function was passed in as the on_b_click_fn argument.
315 |         It is called when the StateButton is clicked while saying b_text "STOP".
316 |
317 | Pausing the timeline on b_text is not strictly necessary for this example to run.
318 | Clicking "STOP" will cancel the physics subscription that updates the scenario, which means that
319 | the robot will stop getting new commands and the cube will stop updating without needing to
320 | pause at all. The reason that the timeline is paused here is to prevent the robot being carried
321 | forward by momentum for a few frames after the physics subscription is canceled. Pausing here makes
322 | this example prettier, but if curious, the user should observe what happens when this line is removed.
323 | """
324 | self._timeline.pause()
325 | # self._scenario.save()
326 |
327 | def _reset_extension(self):
328 | """This is called when the user opens a new stage from self.on_stage_event().
329 | All state should be reset.
330 | """
331 | self._on_init()
332 | self._reset_ui()
333 |
334 | def _reset_ui(self):
335 | self._scenario_state_btn.reset()
336 | self._scenario_state_btn.enabled = False
337 | self._reset_btn.enabled = False
338 |
339 |
340 |
341 |
342 | def _on_color_param_changes(self, model):
343 |         for i, param_model in enumerate(self._param_models):
344 | self._param[i] = param_model.get_value_as_float()
345 | self._update_demo_render()
346 |
347 |
348 |
349 | def _update_demo_render(self):
350 |
351 | self._uw_image = wp.zeros_like(self._demo_rgba)
352 | wp.launch(
353 | dim=np.flip(self._demo_res),
354 | kernel=UW_render,
355 | inputs=[
356 | self._demo_rgba,
357 | self._demo_depth,
358 | wp.vec3f(*self._param[0:3]),
359 | wp.vec3f(*self._param[6:9]),
360 | wp.vec3f(*self._param[3:6])
361 | ],
362 | outputs=[
363 | self._uw_image
364 | ]
365 | )
366 |
367 | self._demo_provider.set_bytes_data_from_gpu(self._uw_image.ptr, self._demo_res)
368 |
369 | def _on_save_param(self):
370 | if self.save_dir_field.get_value() != "":
371 | data = {
372 | "backscatter_value":self._param[0:3],
373 | 'atten_coeff': self._param[6:9],
374 | 'backscatter_coeff': self._param[3:6]
375 | }
376 | save_dir = self.save_dir_field.get_value()
377 |             yaml_path = save_dir + f"/{self.file_name_field.get_value()}.yaml"
378 |             png_path = save_dir + f"/{self.file_name_field.get_value()}.png"
379 | with open(yaml_path, 'w') as file:
380 | try:
381 | yaml.dump(data, file, sort_keys=False)
382 | output_demo_image = Image.fromarray(self._uw_image.numpy(), 'RGBA')
383 | output_demo_image.save(png_path)
384 | print(f"Underwater render parameters written to {yaml_path}")
385 | except yaml.YAMLError as e:
386 | print(f"Error writing YAML file: {e}")
387 | else:
388 | carb.log_error('Saving directory is empty.')
389 |
390 | def _on_save_viewport(self):
391 | if self._scenario_state_btn.enabled:
392 | if self.save_dir_field.get_value() != "":
393 | save_dir = self.save_dir_field.get_value()
394 | raw_rgba = self._scenario.raw_rgba.numpy()
395 | depth = self._scenario.depth_image.numpy()
396 | rendered_image = self._scenario.uw_image.numpy()
397 | np.save(file=save_dir + '/viewport_depth.npy', arr=depth)
398 | raw_image = Image.fromarray(raw_rgba, 'RGBA')
399 | uw_image = Image.fromarray(rendered_image, 'RGBA')
400 | raw_image.save(save_dir + '/viewport_raw_rgba.png')
401 | uw_image.save(save_dir + '/viewport_uw_rgba.png')
402 | print(f'viewport result written to {save_dir}.')
403 |             else:
404 |                 carb.log_error('Saving directory is empty.')
405 | 
406 |
407 | else:
408 | print('Load a scenario first.')
--------------------------------------------------------------------------------
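The `UW_render` kernel launched in `_update_demo_render` above takes an RGBA image, a depth map, and three vec3 parameter groups (backscatter value, attenuation coefficients, backscatter coefficients). The kernel source lives in `UWrenderer_utils.py` and is not shown in this chunk; below is a minimal NumPy sketch of a common single-scattering underwater image-formation model built from the same three parameter groups. The function name and exact formula are assumptions for illustration, not the verified kernel.

```python
import numpy as np

def uw_render_sketch(rgb, depth, backscatter_value, atten_coeff, backscatter_coeff):
    # Assumed model (illustrative, not the actual UW_render kernel):
    # the direct signal decays exponentially with depth, while the
    # backscatter "veiling light" saturates toward `backscatter_value`.
    d = depth[..., None]                                    # (H, W, 1), broadcast over channels
    direct = rgb * np.exp(-atten_coeff * d)                 # attenuated scene radiance
    veil = backscatter_value * (1.0 - np.exp(-backscatter_coeff * d))
    return np.clip(direct + veil, 0.0, 1.0)

# At zero depth the scene is unchanged; far away it converges to the veiling color.
rgb = np.full((2, 2, 3), 0.8)
bv = np.array([0.1, 0.3, 0.4])
att = np.array([0.8, 0.4, 0.2])
bsc = np.array([0.5, 0.5, 0.5])
near = uw_render_sketch(rgb, np.zeros((2, 2)), bv, att, bsc)
far = uw_render_sketch(rgb, np.full((2, 2), 100.0), bv, att, bsc)
```

This mirrors how `_on_color_param_changes` re-renders the demo image on every slider move: only the three parameter vectors change, while the RGB and depth inputs stay fixed.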
/isaacsim/oceansim/sensors/BarometerSensor.py:
--------------------------------------------------------------------------------
1 | # Omniverse import
2 | import numpy as np
3 | import carb
4 |
5 | # Isaac sim import
6 | from isaacsim.core.api.sensors import BaseSensor
7 | from isaacsim.core.api.physics_context import PhysicsContext
8 |
9 | # Custom import
10 | from isaacsim.oceansim.utils.MultivariateNormal import MultivariateNormal
11 |
12 |
13 | class BarometerSensor(BaseSensor):
14 | def __init__(self,
15 | prim_path,
16 | name = "baro",
17 | position = None,
18 | translation = None,
19 | orientation = None,
20 | scale = None,
21 | visible = None,
22 | water_density: float = 1000.0, # kg/m^3 (default for water)
23 | g: float = 9.81, # m/s^2, user-defined gravitational acceleration
24 | noise_cov: float = 0.0, # noise covariance for pressure measurement
25 | water_surface_z: float = 0.0, # z coordinate of the water surface
26 | atmosphere_pressure: float = 101325.0 # atmospheric pressure in Pascals
27 | ) -> None:
28 |
29 | """Initialize a barometer sensor with configurable physical properties and noise characteristics.
30 |
31 | .. note::
32 |
33 |             This class inherits from ``BaseSensor``.
34 |
35 | Args:
36 | prim_path (str): prim path of the Prim to encapsulate or create.
37 | name (str, optional): shortname to be used as a key by Scene class.
38 | Note: needs to be unique if the object is added to the Scene.
39 | Defaults to "baro".
40 | position (Optional[Sequence[float]], optional): position in the world frame of the prim. shape is (3, ).
41 | Defaults to None, which means left unchanged.
42 | translation (Optional[Sequence[float]], optional): translation in the local frame of the prim
43 | (with respect to its parent prim). shape is (3, ).
44 | Defaults to None, which means left unchanged.
45 | orientation (Optional[Sequence[float]], optional): quaternion orientation in the world/ local frame of the prim
46 | (depends if translation or position is specified).
47 | quaternion is scalar-first (w, x, y, z). shape is (4, ).
48 | Defaults to None, which means left unchanged.
49 | scale (Optional[Sequence[float]], optional): local scale to be applied to the prim's dimensions. shape is (3, ).
50 | Defaults to None, which means left unchanged.
51 | visible (bool, optional): set to false for an invisible prim in the stage while rendering. Defaults to True.
52 | water_density (float, optional): Fluid density in kg/m³. Defaults to 1000.0 (fresh water).
53 | g (float, optional): Gravitational acceleration in m/s². Defaults to 9.81.
54 | noise_cov (float, optional): Covariance for pressure measurement noise (0 = no noise). Defaults to 0.0.
55 | water_surface_z (float, optional): Z-coordinate of water surface in world frame. Defaults to 0.0.
56 | atmosphere_pressure (float, optional): Atmospheric pressure at surface in Pascals. Defaults to 101325.0 (1 atm).
57 |
58 | Raises:
59 | Exception: if translation and position defined at the same time
60 | """
61 |
62 | super().__init__(prim_path, name, position, translation, orientation, scale, visible)
63 | self._name = name
64 | self._prim_path = prim_path
65 | self._water_density = water_density
66 | self._g = g
67 | self._mvn_press = MultivariateNormal(1)
68 | self._mvn_press.init_cov(noise_cov)
69 | self._water_surface_z = water_surface_z
70 | self._atmosphere_pressure = atmosphere_pressure
71 |
72 |
73 |
74 | physics_context = PhysicsContext()
75 | g_dir, scene_g = physics_context.get_gravity()
76 | if np.abs(self._g - np.abs(scene_g)) > 0.1:
77 |             carb.log_warn(f'[{self._name}] Detected USD scene gravity differs from the user-defined value; using the user-defined value.')
78 |
79 |
80 |
81 | def get_pressure(self) -> float:
82 | """Calculate the total pressure at the sensor's current position, including hydrostatic pressure and noise.
83 |
84 | Returns:
85 | float: Total pressure in Pascals (Pa), composed of:
86 | - Atmospheric pressure (constant)
87 | - Hydrostatic pressure (if submerged, calculated as ρgh)
88 | - Gaussian noise (if noise_cov > 0)
89 |
90 | Note:
91 | The sensor returns only atmospheric pressure when above water surface (z-position ≥ water_surface_z).
92 | When submerged (z-position < water_surface_z), hydrostatic pressure is added based on depth.
93 | """
94 |
95 |         # Query the world z-position once; depth is clamped to zero above the surface.
96 |         sensor_z = self.get_world_pose()[0][2]
97 |         depth = max(self._water_surface_z - sensor_z, 0.0)
98 | 
99 |
100 | # Compute hydrostatic pressure.
101 | pressure = self._atmosphere_pressure + self._water_density * self._g * depth
102 |
103 | # Add noise if defined.
104 | if self._mvn_press.is_uncertain():
105 | # The noise sample is a one-element array since our sensor is 1D.
106 | noise = self._mvn_press.sample_array()[0]
107 | pressure += noise
108 |
109 | return pressure
--------------------------------------------------------------------------------
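`get_pressure()` above implements the standard hydrostatic model p = p_atm + ρ·g·depth, with depth clamped to zero above the surface. A minimal standalone sketch of the same equation and its inversion back to depth (the function names are illustrative, not part of the sensor API):

```python
def pressure_at(z, surface_z=0.0, rho=1000.0, g=9.81, p_atm=101325.0):
    # Same model as BarometerSensor.get_pressure(), without noise:
    # depth below the surface is clamped to zero when above water.
    depth = max(surface_z - z, 0.0)
    return p_atm + rho * g * depth

def depth_from_pressure(p, rho=1000.0, g=9.81, p_atm=101325.0):
    # Invert a noise-free reading back to depth below the surface.
    return max(p - p_atm, 0.0) / (rho * g)
```

At 10 m depth this adds roughly 98.1 kPa of hydrostatic pressure on top of the 1 atm surface offset, which is why pressure sensors double as depth sensors underwater.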
/isaacsim/oceansim/sensors/DVLsensor.py:
--------------------------------------------------------------------------------
1 | # Omniverse import
2 | import numpy as np
3 | from pxr import Gf
4 | import omni.kit.commands
5 | import omni.graph.core as og
6 | import carb
7 |
8 | # Isaac sim import
9 | from isaacsim.core.api.sensors import BaseSensor
10 | from isaacsim.core.utils.rotations import euler_angles_to_quat, quat_to_rot_matrix
11 | from isaacsim.core.prims import SingleXFormPrim, SingleRigidPrim
12 | from isaacsim.sensors.physx import _range_sensor
13 |
14 | # Custom import
15 | from isaacsim.oceansim.utils.MultivariateNormal import MultivariateNormal
16 |
17 |
18 | class DVLsensor:
19 | def __init__(self,
20 | name: str = "DVL",
21 | elevation:float = 22.5, # deg
22 | rotation: float = 45, # deg
23 | vel_cov = 0,
24 | depth_cov = 0,
25 | min_range: float = 0.1,
26 | max_range: float = 100,
27 | num_beams_out_range_threshold: int = 2,
28 | freq: int = None, # Hz
29 |                  freq_bound: tuple[int] = (5, 100), # Hz
30 |                  freq_dependenet_range_bound: tuple[float] = (7.5, 50.0), # m
31 | sound_speed: float = 1500, # m/s
32 | ):
33 | """Initialize a DVL sensor with configurable beam geometry and operating parameters.
34 |
35 | Args:
36 | name (str): Identifier for the sensor. Defaults to "DVL".
37 | elevation (float): Beam elevation angle from horizontal in degrees. Defaults to 22.5°.
38 | rotation (float): Beam rotation about Z-axis in degrees. Defaults to 45° (Janus configuration).
39 | vel_cov (float): Velocity measurement noise covariance. Defaults to 0 (no noise).
40 | depth_cov (float): Depth measurement noise covariance. Defaults to 0 (no noise).
41 | min_range (float): Minimum valid range in meters. Defaults to 0.1m.
42 | max_range (float): Maximum valid range in meters. Defaults to 100m.
43 | num_beams_out_range_threshold (int): Number of lost beams before declaring dropout. Defaults to 2.
44 | freq (int, optional): Fixed operating frequency in Hz. If None, uses adaptive frequency. Defaults to None.
45 | freq_bound (tuple[int]): (min_freq, max_freq) for adaptive operation. Defaults to (5, 100)Hz.
46 | freq_dependenet_range_bound (tuple[float]): (min_range, max_range) for frequency adaptation. Defaults to (7.5, 50.0)m.
47 | sound_speed (float): Speed of sound in water in m/s. Defaults to 1500m/s.
48 | """
49 |
50 |
51 | self._name = name
52 |
53 | # DVL configuration params
54 | self._elevation = elevation
55 | self._rotation = rotation
56 | self._min_range = min_range
57 | self._max_range = max_range
58 |
59 | # DVL noise params
60 | self._mvn_vel = MultivariateNormal(4)
61 | self._mvn_vel.init_cov(vel_cov)
62 | self._mvn_dep = MultivariateNormal(4)
63 | self._mvn_dep.init_cov(depth_cov)
64 |
65 | sinElev = np.sin(np.deg2rad(self._elevation))
66 | cosElev = np.cos(np.deg2rad(self._elevation))
67 | self._transform = np.array([[1/(2*sinElev), 0, -1/(2*sinElev), 0],
68 | [0, 1/(2*sinElev), 0, -1/(2*sinElev)],
69 | [1/(4*cosElev), 1/(4*cosElev), 1/(4*cosElev), 1/(4*cosElev)]
70 | ])
71 |
72 | # sensor dropout related params
73 | self._num_beams_out_range_threshold = num_beams_out_range_threshold
74 |
75 | # Realistic DVL frequency dependent params
76 | self._user_static_freq_flag = False
77 | if freq is not None:
78 | self._user_static_freq_flag = True
79 | self._dt = 1/freq
80 | else:
81 | self._freq_bound = freq_bound
82 | self._freq_dependent_range_bound = freq_dependenet_range_bound
83 | self._sound_speed = sound_speed
84 |
85 | # Initialization
86 | self._rigid_body_path = None
87 | self._beam_paths = []
88 | self._elapsed_time_vel = 0.0
89 | self._elapsed_time_depth = 0.0
90 |
91 |
92 |
93 |
94 | def attachDVL(self,
95 | rigid_body_path:str,
96 | position = None,
97 | translation = None,
98 | orientation = None
99 | ):
100 |
101 | """Attach the DVL sensor to a rigid body in the simulation.
102 |         .. note::
103 | This function will create a BaseSensor object under the parent rigid body prim and create 4 LightBeamSensors.
104 |
105 | Args:
106 | rigid_body_path (str): USD path to the parent rigid body prim.
107 | position (Optional[Sequence[float]], optional): position in the world frame of the prim. shape is (3, ).
108 | Defaults to None, which means left unchanged.
109 | translation (Optional[Sequence[float]], optional): translation in the local frame of the prim
110 | (with respect to its parent prim). shape is (3, ).
111 | Defaults to None, which means left unchanged.
112 | orientation (Optional[Sequence[float]], optional): quaternion orientation in the world/ local frame of the prim
113 | (depends if translation or position is specified).
114 | quaternion is scalar-first (w, x, y, z). shape is (4, ).
115 | Defaults to None, which means left unchanged.
116 | Raises:
117 | Exception: if translation and position defined at the same time
118 |
119 | """
120 | self._rigid_body_path = rigid_body_path
121 | self._rigid_body_prim = SingleRigidPrim(prim_path=self._rigid_body_path)
122 | sensor_prim_path = rigid_body_path + "/" + self._name
123 | self._DVL = BaseSensor(prim_path=sensor_prim_path,
124 | position=position,
125 | translation=translation,
126 | orientation=orientation)
127 |
128 | elevation = self._elevation
129 | rotation = self._rotation
130 | orients_euler = np.array([[elevation, 0.0, rotation],
131 | [0.0, elevation, rotation],
132 | [-elevation, 0.0, rotation],
133 | [0.0, -elevation, rotation]])
134 | orients_quat = []
135 | for i in range(orients_euler.shape[0]):
136 | orients_quat.append(euler_angles_to_quat(orients_euler[i,:], degrees=True))
137 | self._beam_paths.append(sensor_prim_path + f"/beam_{i}")
138 |
139 | result, sensor = omni.kit.commands.execute(
140 | "IsaacSensorCreateLightBeamSensor",
141 | path=self._beam_paths[i],
142 | min_range=self._min_range,
143 | max_range=self._max_range,
144 | forward_axis=Gf.Vec3d(0, 0, -1),
145 | num_rays=1,
146 | )
147 | SingleXFormPrim(prim_path=self._beam_paths[i]).set_local_pose(orientation=orients_quat[i])
148 | if result:
149 | self._DVL_interface = _range_sensor.acquire_lightbeam_sensor_interface()
150 | else:
151 | carb.log_error(f"[{self._name}] Beam Sensor fails to be loaded")
152 |
153 |     def add_single_beam(self):
154 |         """Add a single vertical beam to the DVL for simplified depth measurements.
155 | 
156 |         Creates an additional beam sensor oriented straight downward (along -Z axis)
157 |         at: <rigid_body_path>/<name>/SingleBeam
158 | 
159 |         Note:
160 |             Primarily used for debugging or when single-beam depth measurement is sufficient.
161 |             Uses the same min/max range settings as the main DVL beams.
162 |         """
163 |         self._single_beam_path = self._rigid_body_path + "/" + self._name + "/SingleBeam"
164 |         result, sensor = omni.kit.commands.execute(
165 |             "IsaacSensorCreateLightBeamSensor",
166 |             path=self._single_beam_path,
167 |             min_range=self._min_range,
168 |             max_range=self._max_range,
169 |             forward_axis=Gf.Vec3d(0, 0, -1),
170 |             num_rays=1,
171 |         )
172 |
173 | def get_single_beam_range(self):
174 |         """Get the depth measurement from the vertical single beam. Call this only after add_single_beam() has been called.
175 |
176 | Returns:
177 | float: Depth measurement in meters along the central beam.
178 | Returns 0 if no valid return (unlike main beams which return NaN).
179 |
180 | Note:
181 | This is a simpler alternative to get_depth() when only vertical range is needed.
182 |
183 | """
184 | return self._DVL_interface.get_linear_depth_data(self._single_beam_path)[0]
185 |
186 | def get_DVL_interface(self):
187 | """Get direct access to the underlying DVL sensor interface.
188 |
189 | Returns:
190 | _range_sensor.LightBeamSensorInterface: The raw physics sensor interface.
191 |
192 | Note:
193 | Advanced use only - provides low-level access to beam physics data.
194 | """
195 | return self._DVL_interface
196 |
197 | def get_baseSensor(self):
198 | """Get the core BaseSensor instance of the DVL.
199 |
200 | Returns:
201 | BaseSensor: The fundamental sensor prim wrapper.
202 |
203 | Note:
204 | Useful for modifying transform or visibility properties.
205 | """
206 | return self._DVL
207 |
208 | def get_beam_paths(self):
209 | """Get USD paths to all four DVL beam sensors.
210 |
211 | Returns:
212 | list[str]: List of four prim paths in the order:
213 | [beam_0, beam_1, beam_2, beam_3]
214 |
215 | Note:
216 |             Paths follow the pattern: <sensor_prim_path>/beam_<i>
217 | """
218 | return self._beam_paths
219 |
220 | def get_depth(self):
221 | """Get depth measurements from all four beams.
222 |
223 | Returns:
224 | list[float]: Four depth measurements in meters. Returns NaN for beams with no return.
225 |
226 | Note:
227 | - Applies Gaussian noise if depth_cov > 0
228 | - Logs warning if >= num_beams_out_range_threshold beams are lost
229 | """
230 | depth = []
231 | if_hit = []
232 | for beam_path in self._beam_paths:
233 | depth.append(self._DVL_interface.get_linear_depth_data(beam_path)[0])
234 | if_hit.append(self._DVL_interface.get_beam_hit_data(beam_path)[0])
235 | if (self._mvn_dep.is_uncertain()):
236 | for i in range(4):
237 | sample = self._mvn_dep.sample_array()
238 | depth[i] += sample[i]
239 | # check if the sensor is in dropout state
240 | if if_hit.count(False) >= self._num_beams_out_range_threshold:
241 | carb.log_warn(f'[{self._name}] Measurement is dropped out')
242 |
243 | # set the no hit depth to nan
244 | depth = [value if hit else float('nan') for value, hit in zip(depth, if_hit)]
245 | return depth
246 |
247 | def get_dt(self):
248 | """Get current sensor update period based on operating mode.
249 |
250 | Returns:
251 | float: Update period in seconds.
252 |
253 | Note:
254 | For adaptive frequency mode, calculates period based on:
255 | - Fixed maximum frequency at close range
256 | - Sound-speed limited frequency at long range
257 | - Linear transition between bounds
258 | """
259 | if self._user_static_freq_flag:
260 | return self._dt
261 | else:
262 | min_range = min(self.get_depth())
263 | if min_range <= self._freq_dependent_range_bound[0]:
264 | self._dt = 1 / self._freq_bound[1]
265 | elif self._freq_dependent_range_bound[0] < min_range < self._freq_dependent_range_bound[1]:
266 | # To avoid abrupt jumps at h_min and h_max, smooth the transitions with linear ramp
267 | freq = self._freq_bound[1] - (self._freq_bound[1] - self._sound_speed/(2 * min_range))/(self._freq_dependent_range_bound[1] - self._freq_dependent_range_bound[0]) * (min_range - self._freq_dependent_range_bound[0])
268 | self._dt = 1 / freq
269 | else:
270 | self._dt = 1 / self._freq_bound[0]
271 | return self._dt
272 |
273 | def get_beam_hit(self):
274 | """Get hit detection status for all four DVL beams.
275 |
276 | Returns:
277 | list[bool]: Boolean hit status for each beam in order [beam_0, beam_1, beam_2, beam_3]
278 | True indicates beam has valid return, False indicates no return detected.
279 |
280 | Note:
281 | - Useful for monitoring individual beam performance
282 | - Mirrors the hit detection used internally in get_depth() and get_linear_vel()
283 | - Return order matches get_beam_paths() indices
284 | """
285 | beam_hit = []
286 | for beam_path in self._beam_paths:
287 | beam_hit.append(self._DVL_interface.get_beam_hit_data(beam_path)[0].astype(bool))
288 | return beam_hit
289 |
290 | def get_linear_vel(self):
291 | """Get 3D velocity vector in body frame.
292 |
293 | Returns:
294 | np.ndarray: [vx, vy, vz] velocity in m/s. Returns zeros during dropout.
295 |
296 | Note:
297 | - Applies Gaussian noise if vel_cov > 0
298 | """
299 | if_hit = []
300 | for beam_path in self._beam_paths:
301 | if_hit.append(self._DVL_interface.get_beam_hit_data(beam_path)[0])
302 | if if_hit.count(False) >= self._num_beams_out_range_threshold:
303 | carb.log_warn(f'[{self._name}] Measurement is dropped out')
304 | return np.zeros(3)
305 |
306 | world_vel = self._rigid_body_prim.get_linear_velocity()
307 | _, world_orient = self._rigid_body_prim.get_world_pose()
308 | rot_m = quat_to_rot_matrix(world_orient)
309 | vel = rot_m.T @ world_vel
310 | if (self._mvn_vel.is_uncertain()):
311 | sample = self._mvn_vel.sample_array()
312 | for i in range(4):
313 | for j in range(3):
314 | vel[j] += self._transform[j][i] * sample[i]
315 |
316 | return vel
317 |
318 |
319 | def get_linear_vel_fd(self, physics_dt: float):
320 | """Frequency-dependent version of get_linear_vel() that respects sensor update rate.
321 |
322 | Args:
323 | physics_dt (float): Current physics timestep duration.
324 |
325 | Returns:
326 | Union[np.ndarray, float]: Velocity vector if update is due, otherwise NaN.
327 | """
328 | if self.get_dt() < physics_dt:
329 |             carb.log_warn(f'[{self._name}] Simulation physics_dt is larger than sensor dt; falling back to get_linear_vel().')
330 | self._elapsed_time_vel += physics_dt
331 | if self._elapsed_time_vel >= self.get_dt():
332 | self._elapsed_time_vel = 0.0
333 | return self.get_linear_vel()
334 | else:
335 | return float('nan')
336 |
337 | def get_depth_fd(self, physics_dt: float):
338 | """Frequency-dependent version of get_depth() that respects sensor update rate.
339 |
340 | Args:
341 | physics_dt (float): Current physics timestep duration.
342 |
343 | Returns:
344 | Union[list[float], float]: Depth measurements if update is due, otherwise NaN.
345 | """
346 | if self.get_dt() < physics_dt:
347 |             carb.log_warn(f'[{self._name}] Simulation physics_dt is larger than sensor dt; falling back to get_depth().')
348 | self._elapsed_time_depth += physics_dt
349 | if self._elapsed_time_depth >= self.get_dt():
350 | self._elapsed_time_depth = 0.0
351 | return self.get_depth()
352 | else:
353 | return float('nan')
354 |
355 | def set_freq(self, freq: float):
356 | """Set a fixed operating frequency for the DVL sensor.
357 |
358 | Args:
359 | freq (float): Desired operating frequency in Hz (must be > 0)
360 |
361 | Note:
362 | - Overrides any adaptive frequency behavior
363 | - Automatically calculates the corresponding period (dt = 1/freq)
364 | - Sets internal flag to maintain fixed frequency mode
365 | - To revert to adaptive frequency, create a new DVL instance
366 |
367 | Example:
368 | >>> dvl.set_freq(10) # Sets DVL to update at 10Hz
369 | """
370 | self._user_static_freq_flag = True
371 | self._dt = 1 / freq
372 |
373 | def add_debug_lines(self):
374 | """Visualize DVL beams in the viewport using debug drawing.
375 |
376 | Creates an action graph that continuously draws the beam paths.
377 | """
378 |
379 | (action_graph, new_nodes, _, _) = og.Controller.edit(
380 | {"graph_path": "/debugLines", "evaluator_name": "execution"},
381 | {
382 | og.Controller.Keys.CREATE_NODES: [
383 | ("OnPlaybackTick", "omni.graph.action.OnPlaybackTick"),
384 | ("IsaacReadLightBeam0", "isaacsim.sensors.physx.IsaacReadLightBeam"),
385 | ("IsaacReadLightBeam1", "isaacsim.sensors.physx.IsaacReadLightBeam"),
386 | ("IsaacReadLightBeam2", "isaacsim.sensors.physx.IsaacReadLightBeam"),
387 | ("IsaacReadLightBeam3", "isaacsim.sensors.physx.IsaacReadLightBeam"),
388 | ("DebugDrawRayCast0", "isaacsim.util.debug_draw.DebugDrawRayCast"),
389 | ("DebugDrawRayCast1", "isaacsim.util.debug_draw.DebugDrawRayCast"),
390 | ("DebugDrawRayCast2", "isaacsim.util.debug_draw.DebugDrawRayCast"),
391 | ("DebugDrawRayCast3", "isaacsim.util.debug_draw.DebugDrawRayCast"),
392 | ],
393 | og.Controller.Keys.SET_VALUES: [
394 | ("IsaacReadLightBeam0.inputs:lightbeamPrim", self._beam_paths[0]),
395 | ("IsaacReadLightBeam1.inputs:lightbeamPrim", self._beam_paths[1]),
396 | ("IsaacReadLightBeam2.inputs:lightbeamPrim", self._beam_paths[2]),
397 | ("IsaacReadLightBeam3.inputs:lightbeamPrim", self._beam_paths[3]),
398 |
399 | ],
400 | og.Controller.Keys.CONNECT: [
401 | ("OnPlaybackTick.outputs:tick", "IsaacReadLightBeam0.inputs:execIn"),
402 | ("IsaacReadLightBeam0.outputs:execOut", "DebugDrawRayCast0.inputs:exec"),
403 | ("IsaacReadLightBeam0.outputs:beamOrigins", "DebugDrawRayCast0.inputs:beamOrigins"),
404 | ("IsaacReadLightBeam0.outputs:beamEndPoints", "DebugDrawRayCast0.inputs:beamEndPoints"),
405 | ("IsaacReadLightBeam0.outputs:numRays", "DebugDrawRayCast0.inputs:numRays"),
406 |
407 | ("OnPlaybackTick.outputs:tick", "IsaacReadLightBeam1.inputs:execIn"),
408 | ("IsaacReadLightBeam1.outputs:execOut", "DebugDrawRayCast1.inputs:exec"),
409 | ("IsaacReadLightBeam1.outputs:beamOrigins", "DebugDrawRayCast1.inputs:beamOrigins"),
410 | ("IsaacReadLightBeam1.outputs:beamEndPoints", "DebugDrawRayCast1.inputs:beamEndPoints"),
411 | ("IsaacReadLightBeam1.outputs:numRays", "DebugDrawRayCast1.inputs:numRays"),
412 |
413 | ("OnPlaybackTick.outputs:tick", "IsaacReadLightBeam2.inputs:execIn"),
414 | ("IsaacReadLightBeam2.outputs:execOut", "DebugDrawRayCast2.inputs:exec"),
415 | ("IsaacReadLightBeam2.outputs:beamOrigins", "DebugDrawRayCast2.inputs:beamOrigins"),
416 | ("IsaacReadLightBeam2.outputs:beamEndPoints", "DebugDrawRayCast2.inputs:beamEndPoints"),
417 | ("IsaacReadLightBeam2.outputs:numRays", "DebugDrawRayCast2.inputs:numRays"),
418 |
419 | ("OnPlaybackTick.outputs:tick", "IsaacReadLightBeam3.inputs:execIn"),
420 | ("IsaacReadLightBeam3.outputs:execOut", "DebugDrawRayCast3.inputs:exec"),
421 | ("IsaacReadLightBeam3.outputs:beamOrigins", "DebugDrawRayCast3.inputs:beamOrigins"),
422 | ("IsaacReadLightBeam3.outputs:beamEndPoints", "DebugDrawRayCast3.inputs:beamEndPoints"),
423 | ("IsaacReadLightBeam3.outputs:numRays", "DebugDrawRayCast3.inputs:numRays"),
424 | ],
425 | },
426 | )
427 |
--------------------------------------------------------------------------------
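The `_transform` matrix built in `DVLsensor.__init__` above is the least-squares map from the four Janus-configuration beam radial velocities back to a 3D body-frame velocity, which `get_linear_vel()` also uses to propagate per-beam noise. The sketch below reconstructs the matrix and checks that it exactly inverts the implied beam geometry; the beam unit vectors here are derived from the elevation angle and are an assumption about the convention, not read from the sensor code.

```python
import numpy as np

elevation = 22.5  # deg, the DVLsensor default
s, c = np.sin(np.deg2rad(elevation)), np.cos(np.deg2rad(elevation))

# Same matrix as DVLsensor._transform
T = np.array([[1/(2*s), 0, -1/(2*s), 0],
              [0, 1/(2*s), 0, -1/(2*s)],
              [1/(4*c), 1/(4*c), 1/(4*c), 1/(4*c)]])

# Beam unit vectors consistent with that matrix: two opposed pairs in x and y,
# all tilted `elevation` degrees away from the vertical (assumed convention).
D = np.array([[ s,  0, c],
              [ 0,  s, c],
              [-s,  0, c],
              [ 0, -s, c]])

# Each beam measures the radial component b_i = d_i . v; T recovers v exactly.
v = np.array([0.7, -0.2, 0.1])
beam_radial = D @ v
v_recovered = T @ beam_radial
```

This also explains the noise loop in `get_linear_vel()`: a 4-vector of per-beam noise samples is pushed through `T`, so beam-space noise maps into correlated body-frame velocity noise.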
/isaacsim/oceansim/sensors/ImagingSonarSensor.py:
--------------------------------------------------------------------------------
1 | from isaacsim.sensors.camera import Camera
2 | import omni.replicator.core as rep
3 | import omni.ui as ui
4 | import numpy as np
5 | from omni.replicator.core.scripts.functional import write_np
6 | import warp as wp
7 | from isaacsim.oceansim.utils.ImagingSonar_kernels import *
8 |
9 |
10 | # Future TODO
11 | # In future release, wrap this class around RTX lidar
12 |
13 | class ImagingSonarSensor(Camera):
14 | def __init__(self,
15 | prim_path,
16 | name = "ImagingSonar",
17 | frequency = None,
18 | dt = None,
19 | position = None,
20 | orientation = None,
21 | translation = None,
22 | render_product_path = None,
23 | physics_sim_view = None,
24 | min_range: float = 0.2, # m
25 | max_range: float = 3.0, # m
26 |                  range_res: float = 0.008, # m
27 | hori_fov: float = 130.0, # deg
28 | vert_fov: float = 20.0, # deg
29 | angular_res: float = 0.5, # deg
30 |                  hori_res: int = 3000 # Isaac camera render products only accept square pixels;
31 |                                       # for now, vertical res is set automatically from the hori_fov/vert_fov ratio
32 | ):
33 |
34 |
35 | """Initialize an imaging sonar sensor with physical parameters.
36 |
37 | Args:
38 | prim_path (str): prim path of the Camera Prim to encapsulate or create.
39 | name (str, optional): shortname to be used as a key by Scene class.
40 | Note: needs to be unique if the object is added to the Scene.
41 | Defaults to "ImagingSonar".
42 | frequency (Optional[int], optional): Frequency of the sensor (i.e: how often is the data frame updated).
43 | Defaults to None.
44 |             dt (Optional[str], optional): dt of the sensor (i.e: period at which the data frame is updated). Defaults to None.
45 |                 Note: the camera resolution is not an argument; it is computed as [hori_res, vert_res] from the FOV aspect ratio.
46 | position (Optional[Sequence[float]], optional): position in the world frame of the prim. shape is (3, ).
47 | Defaults to None, which means left unchanged.
48 | translation (Optional[Sequence[float]], optional): translation in the local frame of the prim
49 | (with respect to its parent prim). shape is (3, ).
50 | Defaults to None, which means left unchanged.
51 | orientation (Optional[Sequence[float]], optional): quaternion orientation in the world/ local frame of the prim
52 | (depends if translation or position is specified).
53 | quaternion is scalar-first (w, x, y, z). shape is (4, ).
54 | Defaults to None, which means left unchanged.
55 | render_product_path (str): path to an existing render product, will be used instead of creating a new render product
56 | the resolution and camera attached to this render product will be set based on the input arguments.
57 | Note: Using same render product path on two Camera objects with different camera prims, resolutions is not supported
58 | Defaults to None
59 |
60 |             physics_sim_view (optional): physics simulation view passed to initialize(). Defaults to None.
61 | min_range (float, optional): Minimum detection range in meters. Defaults to 0.2.
62 | max_range (float, optional): Maximum detection range in meters. Defaults to 3.0.
63 | range_res (float, optional): Range resolution in meters. Defaults to 0.008.
64 | hori_fov (float, optional): Horizontal field of view in degrees. Defaults to 130.0.
65 | vert_fov (float, optional): Vertical field of view in degrees. Defaults to 20.0.
66 | angular_res (float, optional): Angular resolution in degrees. Defaults to 0.5.
67 | hori_res (int, optional): Horizontal pixel resolution. Defaults to 3000.
68 |
69 | Note:
70 | - Vertical resolution is automatically calculated to maintain aspect ratio
71 | - Uses Warp for GPU-accelerated sonar image generation
72 | - Creates polar coordinate meshgrid for sonar returns processing
73 | """
74 |
75 |
76 | self._name = name
77 |         # Raw parameters from Oculus M370s/MT370s/MD370s
78 | self.max_range = max_range # m (max is 200 m in datasheet )
79 | self.min_range = min_range # m (min is 0.2 m in datasheet)
80 | self.range_res = range_res # m (datasheet is 0.008 m)
81 | self.hori_fov = hori_fov # degree (hori_fov is 130 degrees in datasheet)
82 | self.vert_fov = vert_fov # degree (vert_fov is 20 degrees in datasheet)
83 | self.angular_res = angular_res # degree (datasheet is 2 deg)
84 | self.hori_res= hori_res
85 |
86 | # self.beam_separation = 0.5 # degree (Not USED FOR NOW)!!
87 | # self.num_beams = 256 # (max number of beams) (NOT USED FOR NOW)!!
88 | # self.update_rate = 40 # Hz (max update rate) (NOT USED FOR NOW)!!
89 |
90 |
91 | # Generate sonar map's r and z meshgrid
92 | self.min_azi = np.deg2rad(90-self.hori_fov/2)
93 | r, azi = np.meshgrid(np.arange(self.min_range,self.max_range,self.range_res),
94 | np.arange(np.deg2rad(90-self.hori_fov/2), np.deg2rad(90+self.hori_fov/2), np.deg2rad(self.angular_res)),
95 | indexing='ij')
96 | self.r = wp.array(r, shape=r.shape, dtype=wp.float32)
97 | self.azi = wp.array(azi, shape=r.shape, dtype=wp.float32)
98 |
99 |         # Preallocate arrays with fixed shapes on CUDA so memory is reused.
100 |         # Users can also immediately see whether they have set reasonable parameters
101 |         # for the sonar map bin size/resolution once the sensor is loaded.
102 | self.bin_sum = wp.zeros(shape=self.r.shape, dtype=wp.float32)
103 | self.bin_count = wp.zeros(shape=self.r.shape, dtype=wp.int32)
104 | self.binned_intensity = wp.zeros(shape=self.r.shape, dtype=wp.float32)
105 | self.sonar_map = wp.zeros(shape=self.r.shape, dtype=wp.vec3)
106 | self.sonar_image = wp.zeros(shape=(self.r.shape[0], self.r.shape[1], 4), dtype=wp.uint8)
107 | self.gau_noise = wp.zeros(shape=self.r.shape, dtype=wp.float32)
108 | self.range_dependent_ray_noise = wp.zeros(shape=self.r.shape, dtype=wp.float32)
109 |
110 | self.AR = self.hori_fov / self.vert_fov
111 | self.vert_res = int(self.hori_res / self.AR)
112 |         # By doing this, I am assuming the vertical beam separation
113 |         # is the same as the horizontal beam separation.
114 |         # This is because replicator raytracing is specified via resolutions,
115 |         # while non-square pixels are not supported in Isaac Sim. See details below.
116 |
117 | super().__init__(prim_path=prim_path,
118 | name=name,
119 | frequency=frequency,
120 | dt=dt,
121 | resolution=[self.hori_res, self.vert_res],
122 | position=position,
123 | orientation=orientation,
124 | translation=translation,
125 | render_product_path=render_product_path)
126 |
127 | self.set_clipping_range(
128 | near_distance=self.min_range,
129 | far_distance=self.max_range
130 | )
131 |         # Known bug workaround: initialize() must be called before changing the aperture.
132 |         # https://forums.developer.nvidia.com/t/error-when-setting-a-cameras-vertical-horizontal-aperture/271314
133 |         # This line initializes the camera.
134 | self.initialize(physics_sim_view)
135 |
136 |         # Assume the default focal length to compute the desired horizontal aperture.
137 |         # We do this because Isaac Sim fixes the vertical aperture
138 |         # from the aspect ratio to mandate square pixels.
139 | # https://forums.developer.nvidia.com/t/how-to-modify-the-cameras-field-of-view/278427/5
140 | self.focal_length = self.get_focal_length()
141 | horizontal_aper = 2 * self.focal_length * np.tan(np.deg2rad(self.hori_fov) / 2)
142 | self.set_horizontal_aperture(horizontal_aper)
143 |         # Note: if you observe the sonar view from a linked viewport,
144 |         # only the horizontal fov is displayed correctly; the vertical fov
145 |         # follows your viewport's aspect ratio settings.
146 |
147 |
148 | # Initialize the sensor so that annotator is
149 | # loaded on cuda and ready to acquire data
150 | # Data is generated per simulation tick
151 |
152 | # do_array_copy: If True, retrieve a copy of the data array.
153 | # This is recommended for workflows using asynchronous
154 | # backends to manage the data lifetime.
155 | # Can be set to False to gain performance if the data is
156 | # expected to be used immediately within the writer. Defaults to True.
157 |
158 | def sonar_initialize(self, output_dir : str = None, viewport: bool = True, include_unlabelled = False, if_array_copy: bool = True):
159 | """Initialize sonar data processing pipeline and annotators.
160 |
161 | Args:
162 | output_dir (str, optional): Directory to save sonar data. Defaults to None.
163 | If set to None, sonar will not write data.
164 | viewport (bool, optional): Enable viewport visualization. Defaults to True.
165 | Set to False to run the sonar without visualization.
166 | include_unlabelled (bool, optional): Whether to include unlabelled objects in the sonar scan. Defaults to False.
167 | if_array_copy (bool, optional): If True, retrieve a copy of the data array.
168 | This is recommended for workflows using asynchronous backends to manage the data lifetime.
169 | Can be set to False to gain performance if the data is expected to be used immediately within the writer.
170 | Defaults to True.
171 |
172 | Note:
173 | - Attaches pointcloud, camera params, and semantic segmentation annotators
174 | - Sets up Warp arrays for sonar image processing
175 | - Can optionally write data to disk if output_dir specified
176 | """
177 | self.writing = False
178 | self._viewport = viewport
179 | self._device = str(wp.get_preferred_device())
180 | self.scan_data = {}
181 | self.id = 0
182 |
183 | self.pointcloud_annot = rep.AnnotatorRegistry.get_annotator(
184 | name="pointcloud",
185 | init_params={"includeUnlabelled": include_unlabelled},
186 | do_array_copy=if_array_copy,
187 | device=self._device
188 | )
189 |
190 | self.cameraParams_annot = rep.AnnotatorRegistry.get_annotator(
191 | name="CameraParams",
192 | do_array_copy=if_array_copy,
193 | device=self._device
194 | )
195 |
196 | self.semanticSeg_annot = rep.AnnotatorRegistry.get_annotator(
197 | name='semantic_segmentation',
198 | init_params={"colorize": False},
199 | do_array_copy=if_array_copy,
200 | device=self._device
201 | )
202 |
203 | print(f'[{self._name}] Using {self._device}' )
204 | print(f'[{self._name}] Render query res: {self.hori_res} x {self.vert_res}. Binning res: {self.r.shape[0]} x {self.r.shape[1]}')
205 |
206 | self.pointcloud_annot.attach(self._render_product_path)
207 | self.cameraParams_annot.attach(self._render_product_path)
208 | self.semanticSeg_annot.attach(self._render_product_path)
209 |
210 | if output_dir is not None:
211 | self.writing = True
212 | self.backend = rep.BackendDispatch({"paths": {"out_dir": output_dir}})
213 | if self._viewport:
214 | self.make_sonar_viewport()
215 |
216 | print(f'[{self._name}] Initialized successfully. Data writing: {self.writing}')
217 |
218 | self.bin_sum.zero_()
219 | self.bin_count.zero_()
220 | self.binned_intensity.zero_()
221 | self.sonar_map.zero_()
222 | self.sonar_image.zero_()
223 | self.range_dependent_ray_noise.zero_()
224 | self.gau_noise.zero_()
225 |
226 |
227 |
228 | def scan(self):
229 |
230 | """Capture a single sonar scan frame and store the raw data.
231 |
232 | Returns:
233 | bool: True if scan was successful (valid data received), False otherwise
234 |
235 | Note:
236 | - Stores pointcloud, normals, semantics, and camera transform in scan_data dict
237 | - First few frames may be empty due to CUDA initialization
238 | - Automatically skips frames with no detected objects
239 | """
240 | # Because loading the annotators onto CUDA takes time, the first few simulation ticks give no annotation in memory.
241 | # An empty data stream also occurs when no mesh is within the sonar FOV.
242 | # Ignore scans that give an empty data stream.
243 | if len(self.semanticSeg_annot.get_data()['info']['idToLabels']) !=0:
244 | self.scan_data['pcl'] = self.pointcloud_annot.get_data(device=self._device)['data'][0] # shape: (N, 3)
245 | self.scan_data['normals'] = self.pointcloud_annot.get_data(device=self._device)['info']['pointNormals'][0] # shape: (N, 4)
246 | self.scan_data['semantics'] = self.pointcloud_annot.get_data(device=self._device)['info']['pointSemantic'][0] # shape: (N,)
247 | self.scan_data['viewTransform'] = self.cameraParams_annot.get_data()['cameraViewTransform'].reshape(4,4).T # 4 by 4 np.ndarray extrinsic matrix
248 | self.scan_data['idToLabels'] = self.semanticSeg_annot.get_data()['info']['idToLabels'] # dict
249 | return True
250 | else:
251 | return False
252 |
253 |
254 | def make_sonar_data(self,
255 | binning_method: str = "sum",
256 | normalizing_method: str = "range",
257 | query_prop: str ='reflectivity', # Do not modify this if not developing the sensor.
258 | attenuation: float = 0.1, # Controls intensity attenuation with distance
259 | gau_noise_param: float = 0.2, # multiplicative noise coefficient
260 | ray_noise_param: float = 0.05, # additive noise parameter
261 | intensity_offset: float = 0.0, # offset intensity after normalization
262 | intensity_gain: float = 1.0, # scale intensity after normalization
263 | central_peak: float = 2, # controls the strength of the central streak
264 | central_std: float = 0.001, # controls the spread of the central streak
265 | ):
266 | """Process raw scan data into a sonar image with configurable parameters.
267 |
268 | Args:
269 | binning_method (str): "sum" or "mean" for intensity accumulation
270 | Remember to adjust your noise scale accordingly after changing this.
271 | normalizing_method (str): "all" (global max) or "range" (per-range max)
272 | Remember to adjust your noise scale accordingly after changing this.
273 | query_prop (str): Material property to query (default 'reflectivity')
274 | Don't modify this if not for development.
275 | attenuation (float): Distance attenuation coefficient (0-1)
276 | gau_noise_param (float): Gaussian noise multiplier
277 | ray_noise_param (float): Rayleigh noise scale factor
278 | intensity_offset (float): Post-normalization intensity offset
279 | intensity_gain (float): Post-normalization intensity multiplier
280 | central_peak (float): Central beam streak intensity
281 | central_std (float): Central beam streak width
282 |
283 | """
284 |
285 |
286 |
287 | def make_indexToProp_array(idToLabels: dict, query_property: str):
288 | # Utility that converts idToLabels into an indexToProp array,
289 | # a layout that suits the Warp computation framework.
290 | # indexToProp is a 1-D array where the value associated with the query property
291 | # is placed at the index corresponding to the key; ids without the property
292 | # keep the default value 1.0. The first two ids are always
293 | # {'0': {'class': 'BACKGROUND'}, '1': {'class': 'UNLABELLED'}}, e.g. indexToProp = [1, 1, 0.1, ...]
294 | max_id = max((int(k) for k in idToLabels.keys()), default=-1) # cast keys to int: string comparison is lexicographic
295 | indexToProp_array = np.ones((max_id + 1,))
296 | for id in idToLabels.keys():
297 | for property in idToLabels.get(id):
298 | if property == query_property:
299 | indexToProp_array[int(id)] = idToLabels.get(id).get(property)
300 | return indexToProp_array
301 |
302 | if self.scan():
303 | num_points = self.scan_data['pcl'].shape[0]
304 | # Load these small numpy arrays to cuda
305 | indexToRefl = wp.array(make_indexToProp_array(idToLabels=self.scan_data['idToLabels'],
306 | query_property=query_prop),
307 | dtype=wp.float32)
308 | viewTransform = wp.mat44(self.scan_data['viewTransform'])
309 | # directly use warp array loaded on cuda
310 | pcl = self.scan_data['pcl']
311 | normals = self.scan_data['normals']
312 | semantics = self.scan_data['semantics']
313 | else:
314 | return
315 |
316 | # Compute intensity for each ray query
317 | intensity = wp.empty(shape=(num_points,), dtype=wp.float32)
318 | wp.launch(kernel=compute_intensity,
319 | dim=num_points,
320 | inputs=[
321 | pcl,
322 | normals,
323 | viewTransform,
324 | semantics,
325 | indexToRefl,
326 | attenuation,
327 | ],
328 | outputs=[
329 | intensity
330 | ]
331 | )
332 |
333 | # Transform pointcloud from world coordinates to the sonar local frame
334 | pcl_local = wp.empty(shape=(num_points,), dtype=wp.vec3)
335 | pcl_spher = wp.empty(shape=(num_points,), dtype=wp.vec3)
336 | wp.launch(kernel=world2local,
337 | dim=num_points,
338 | inputs=[
339 | viewTransform,
340 | pcl
341 | ],
342 | outputs=[
343 | pcl_local,
344 | pcl_spher
345 | ]
346 | )
347 |
348 | # Collapse three-dimensional intensity data to 2D:
349 | # sum the intensity returns and count the returns that fall into each bin
350 | self.bin_sum.zero_()
351 | self.bin_count.zero_()
352 | self.binned_intensity.zero_()
353 |
354 |
355 | wp.launch(kernel=bin_intensity,
356 | dim=num_points,
357 | inputs=[
358 | pcl_spher,
359 | intensity,
360 | self.min_range,
361 | self.min_azi,
362 | self.range_res,
363 | wp.radians(self.angular_res),
364 | ],
365 | outputs=[
366 | self.bin_sum,
367 | self.bin_count
368 | ]
369 | )
370 |
371 | # Process the binned intensity either by keeping the sum as-is or by averaging
372 | if binning_method == "mean":
373 | wp.launch(
374 | kernel=average,
375 | dim=self.bin_sum.shape,
376 | inputs=[
377 | self.bin_sum,
378 | self.bin_count
379 | ],
380 | outputs=[
381 | self.binned_intensity,
382 | ]
383 | )
384 |
385 | if binning_method == "sum":
386 | self.binned_intensity = self.bin_sum
387 |
388 |
389 | self.range_dependent_ray_noise.zero_()
390 | self.gau_noise.zero_()
391 | self.sonar_map.zero_()
392 |
393 | # Calculate multiplicative gaussian noise
394 |
395 | wp.launch(
396 | kernel=normal_2d,
397 | dim=self.bin_sum.shape,
398 | inputs=[
399 | self.id, # use frame num for RNG seed increment
400 | 0.0,
401 | gau_noise_param
402 | ],
403 | outputs=[
404 | self.gau_noise
405 | ]
406 | )
407 |
408 | # Calculate additive rayleigh noise (range dependent and mimic central beam)
409 |
410 | wp.launch(
411 | kernel=range_dependent_rayleigh_2d,
412 | dim=self.bin_sum.shape,
413 | inputs=[
414 | self.id, # use frame num for RNG seed increment
415 | self.r,
416 | self.azi,
417 | self.max_range,
418 | ray_noise_param,
419 | central_peak,
420 | central_std,
421 | ],
422 | outputs=[
423 | self.range_dependent_ray_noise
424 |
425 | ]
426 | )
427 |
428 |
429 |
430 | # Normalize the intensity at each bin by either the global maximum or the rangewise maximum
431 | # Compute global maximum
432 | if normalizing_method == "all":
433 | maximum = wp.zeros(shape=(1,), dtype=wp.float32)
434 | wp.launch(
435 | dim=self.bin_sum.shape,
436 | kernel=all_max,
437 | inputs=[
438 | self.binned_intensity,
439 | ],
440 | outputs=[
441 | maximum # wp.array of shape (1,), max value is stored at maximum[0]
442 | ]
443 | )
444 |
445 | # Apply noise, normalize by global maximum, and convert (r, azi) to (x,y) for plotting
446 | wp.launch(
447 | kernel=make_sonar_map_all,
448 | dim=self.sonar_map.shape,
449 | inputs=[
450 | self.r,
451 | self.azi,
452 | self.binned_intensity,
453 | maximum,
454 | self.gau_noise,
455 | self.range_dependent_ray_noise,
456 | intensity_offset,
457 | intensity_gain
458 | ],
459 | outputs=[
460 | self.sonar_map
461 | ]
462 | )
463 |
464 | if normalizing_method == "range":
465 | # Compute rangewise maximum
466 | maximum = wp.zeros(shape=(self.r.shape[0],), dtype=wp.float32)
467 | wp.launch(
468 | dim=self.bin_sum.shape,
469 | kernel=range_max,
470 | inputs=[
471 | self.binned_intensity,
472 | ],
473 | outputs=[
474 | maximum # wp.array of shape (number of range bins, )
475 | ]
476 | )
477 | # Apply noise, normalize by range maximum, and convert (r, azi) to (x,y) for plotting
478 | wp.launch(
479 | kernel=make_sonar_map_range,
480 | dim=self.sonar_map.shape,
481 | inputs=[
482 | self.r,
483 | self.azi,
484 | self.binned_intensity,
485 | maximum,
486 | self.gau_noise,
487 | self.range_dependent_ray_noise,
488 | intensity_offset,
489 | intensity_gain
490 | ],
491 | outputs=[
492 | self.sonar_map
493 | ]
494 | )
495 |
496 |
497 | # Write data to the output directory
498 | if self.writing:
499 | # self.backend.schedule(write_np, f"intensity_{self.id}.npy", data=intensity)
500 | # self.backend.schedule(write_np, f'pcl_local_{self.id}.npy', data=pcl_local)
501 | self.backend.schedule(write_np, f'sonar_data_{self.id}.npy', data=self.sonar_map)
502 | print(f"[{self._name}] [{self.id}] Writing sonar data to {self.backend.output_dir}")
503 |
504 | if self._viewport:
505 | self._sonar_provider.set_bytes_data_from_gpu(self.make_sonar_image().ptr,
506 | [self.sonar_map.shape[1], self.sonar_map.shape[0]])
507 | # self.backend.schedule(write_image, f'sonar_{self.id}.png', data = self.make_sonar_image())
508 |
509 | self.id += 1
510 |
511 |
512 | def make_sonar_image(self):
513 | """Convert processed sonar data to a viewable grayscale image.
514 |
515 | Returns:
516 | wp.array: GPU array containing the sonar image (RGBA format)
517 |
518 | Note:
519 | - Used internally for viewport display
520 | - Image dimensions match the sonar's polar binning resolution
521 | """
522 | self.sonar_image.zero_()
523 | wp.launch(
524 | dim=self.sonar_map.shape,
525 | kernel=make_sonar_image,
526 | inputs=[
527 | self.sonar_map
528 | ],
529 | outputs=[
530 | self.sonar_image
531 | ]
532 | )
533 | return self.sonar_image
534 |
535 |
536 | def make_sonar_viewport(self):
537 | """Create an interactive viewport window for real-time sonar visualization.
538 |
539 | Note:
540 | - Displays live sonar images when simulation is running
541 | - Range and azimuth tick-mark overlays are present but currently commented out
542 | - Window size is fixed at 800x800 pixels
543 | """
544 | self.wrapped_ui_elements = []
545 |
546 | range_tick_num = 10
547 | range_tick = np.round(np.linspace(self.min_range, self.max_range, range_tick_num), 2)
548 |
549 | azi_tick_num = 10
550 | azi_tick = np.round(np.linspace(90-self.hori_fov/2, 90+self.hori_fov/2, azi_tick_num))
551 | self._sonar_provider = ui.ByteImageProvider()
552 | self._window = ui.Window(self._name, width=800, height=800, visible=True)
553 |
554 | with self._window.frame:
555 | with ui.ZStack(height=720, width = 720):
556 | ui.Rectangle(style={"background_color": 0xFF000000})
557 | ui.Label('Run the scenario for image to be received',
558 | style={'font_size': 55,'alignment': ui.Alignment.CENTER},
559 | word_wrap=True)
560 | sonar_image_provider = ui.ImageWithProvider(self._sonar_provider,
561 | style={"width": 720,
562 | "height": 720,
563 | "fill_policy" : ui.FillPolicy.STRETCH,
564 | 'alignment': ui.Alignment.CENTER})
565 |
566 | # ui.Line(alignment=ui.Alignment.LEFT,
567 | # style={'border_width': 2,
568 | # 'color':ui.color.white })
569 | # with ui.VGrid(row_height = 720/(range_tick_num-1)):
570 | # for i in range(range_tick_num-1):
571 | # with ui.ZStack():
572 | # ui.Rectangle(style={'border_color': ui.color.white, 'background_color': ui.color.transparent,'border_width': 0.05, 'margin': 0})
573 | # ui.Label(str(range_tick[i]) + ' m',style={'font_size': 15,'alignment': ui.Alignment.LEFT, 'margin':2})
574 | # with ui.HGrid(column_width = 720/(azi_tick_num-1), direction=ui.Direction.RIGHT_TO_LEFT):
575 | # for i in range(azi_tick_num-1):
576 | # with ui.ZStack():
577 | # ui.Rectangle(style={'border_color': ui.color.white, 'background_color': ui.color.transparent,'border_width': 0.05, 'margin': 0})
578 | # ui.Label(str(azi_tick[i]) + "°",style={'font_size': 15,'alignment': ui.Alignment.RIGHT, 'margin':2})
579 | # ui.Label(str(range_tick[-1]) +" m", style={'font_size': 15, "alignment":ui.Alignment.LEFT_BOTTOM, 'margin':2})
580 |
581 | self.wrapped_ui_elements.append(sonar_image_provider)
582 | self.wrapped_ui_elements.append(self._sonar_provider)
583 | self.wrapped_ui_elements.append(self._window)
584 |
585 | def get_range(self) -> list[float]:
586 | """Get the configured operating range of the sonar.
587 |
588 | Returns:
589 | list[float]: [min_range, max_range] in meters
590 | """
591 | return [self.min_range, self.max_range]
592 |
593 | def get_fov(self) -> list[float]:
594 | """Get the configured field of view angles.
595 |
596 | Returns:
597 | list[float]: [horizontal_fov, vertical_fov] in degrees
598 | """
599 | return [self.hori_fov, self.vert_fov]
600 |
601 |
602 |
603 | def close(self):
604 | """Clean up resources by detaching annotators and clearing caches.
605 |
606 | Note:
607 | - Required for proper shutdown when done using the sensor
608 | - Also closes viewport window if one was created
609 | """
610 | self.pointcloud_annot.detach(self._render_product_path)
611 | self.cameraParams_annot.detach(self._render_product_path)
612 | self.semanticSeg_annot.detach(self._render_product_path)
613 |
614 | rep.AnnotatorCache.clear(self.pointcloud_annot)
615 | rep.AnnotatorCache.clear(self.cameraParams_annot)
616 | rep.AnnotatorCache.clear(self.semanticSeg_annot)
617 |
618 |
619 | print(f'[{self._name}] Annotator detached. AnnotatorCache cleaned.')
620 |
621 | if self._viewport:
622 | self.ui_destroy()
623 |
624 |
625 | def ui_destroy(self):
626 | """Explicitly destroy viewport UI elements.
627 |
628 | Note:
629 | - Called automatically by close()
630 | - Only needed if manually managing UI lifecycle
631 | """
632 | for elem in self.wrapped_ui_elements:
633 | elem.destroy()
--------------------------------------------------------------------------------
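The `bin_intensity` kernel used by `make_sonar_data` collapses per-point returns into a 2-D (range, azimuth) histogram: each point's bin indices come from subtracting the minimum range/azimuth and dividing by the bin resolution, then intensities and counts are accumulated atomically. A minimal NumPy sketch of the same binning (function and argument names here are illustrative, not part of the sensor API; `np.add.at` plays the role of `wp.atomic_add` for repeated indices):

```python
import numpy as np

def bin_intensity_np(r, azi, intensity, min_range, min_azi,
                     range_res, angular_res, shape):
    """Accumulate per-point intensities into a (range, azimuth) grid."""
    # Bin index = (value - offset) / resolution, truncated toward zero
    r_idx = ((r - min_range) / range_res).astype(int)
    a_idx = ((azi - min_azi) / angular_res).astype(int)
    bin_sum = np.zeros(shape, dtype=np.float32)
    bin_count = np.zeros(shape, dtype=np.int32)
    # np.add.at handles repeated indices like atomic adds do
    np.add.at(bin_sum, (r_idx, a_idx), intensity)
    np.add.at(bin_count, (r_idx, a_idx), 1)
    return bin_sum, bin_count
```

Dividing `bin_sum` by `bin_count` where the count is nonzero then reproduces the `"mean"` binning method.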
/isaacsim/oceansim/sensors/UW_Camera.py:
--------------------------------------------------------------------------------
1 | # Omniverse Import
2 | import omni.replicator.core as rep
3 | from omni.replicator.core.scripts.functional import write_image
4 | import omni.ui as ui
5 |
6 | # Isaac sim import
7 | from isaacsim.sensors.camera import Camera
8 | import numpy as np
9 | import warp as wp
10 | import yaml
11 | import carb
12 |
13 | # Custom import
14 | from isaacsim.oceansim.utils.UWrenderer_utils import UW_render
15 |
16 |
17 | class UW_Camera(Camera):
18 |
19 | def __init__(self,
20 | prim_path,
21 | name = "UW_Camera",
22 | frequency = None,
23 | dt = None,
24 | resolution = None,
25 | position = None,
26 | orientation = None,
27 | translation = None,
28 | render_product_path = None):
29 |
30 | """Initialize an underwater camera sensor.
31 |
32 | Args:
33 | prim_path (str): prim path of the Camera Prim to encapsulate or create.
34 | name (str, optional): shortname to be used as a key by Scene class.
35 | Note: needs to be unique if the object is added to the Scene.
36 | Defaults to "UW_Camera".
37 | frequency (Optional[int], optional): Frequency of the sensor (i.e: how often is the data frame updated).
38 | Defaults to None.
39 | dt (Optional[str], optional): dt of the sensor (i.e. the period at which the data frame is updated). Defaults to None.
40 | resolution (Optional[Tuple[int, int]], optional): resolution of the camera (width, height). Defaults to None.
41 | position (Optional[Sequence[float]], optional): position in the world frame of the prim. shape is (3, ).
42 | Defaults to None, which means left unchanged.
43 | translation (Optional[Sequence[float]], optional): translation in the local frame of the prim
44 | (with respect to its parent prim). shape is (3, ).
45 | Defaults to None, which means left unchanged.
46 | orientation (Optional[Sequence[float]], optional): quaternion orientation in the world/ local frame of the prim
47 | (depends if translation or position is specified).
48 | quaternion is scalar-first (w, x, y, z). shape is (4, ).
49 | Defaults to None, which means left unchanged.
50 | render_product_path (str): path to an existing render product, will be used instead of creating a new render product
51 | the resolution and camera attached to this render product will be set based on the input arguments.
52 | Note: Using same render product path on two Camera objects with different camera prims, resolutions is not supported
53 | Defaults to None
54 | """
55 | self._name = name
56 | self._prim_path = prim_path
57 | self._res = resolution
58 | self._writing = False
59 |
60 | super().__init__(prim_path, name, frequency, dt, resolution, position, orientation, translation, render_product_path)
61 |
62 | def initialize(self,
63 | UW_param: np.ndarray = np.array([0.0, 0.31, 0.24, 0.05, 0.05, 0.2, 0.05, 0.05, 0.05 ]),
64 | viewport: bool = True,
65 | writing_dir: str = None,
66 | UW_yaml_path: str = None,
67 | physics_sim_view=None):
68 |
69 | """Configure underwater rendering properties and initialize pipelines.
70 |
71 | Args:
72 | UW_param (np.ndarray, optional): Underwater parameters array:
73 | [0:3] - Backscatter value (RGB)
74 | [3:6] - Backscatter coefficients (RGB)
75 | [6:9] - Attenuation coefficients (RGB)
76 | Defaults to typical coastal water values.
77 | viewport (bool, optional): Enable viewport visualization. Defaults to True.
78 | writing_dir (str, optional): Directory to save rendered images. Defaults to None.
79 | UW_yaml_path (str, optional): Path to YAML file with water properties. Defaults to None.
80 | physics_sim_view (optional): Physics simulation view passed through to the base Camera class. Defaults to None.
81 |
82 | """
83 | self._id = 0
84 | self._viewport = viewport
85 | self._device = wp.get_preferred_device()
86 | super().initialize(physics_sim_view)
87 |
88 | if UW_yaml_path is not None:
89 | with open(UW_yaml_path, 'r') as file:
90 | try:
91 | # Load the YAML content
92 | yaml_content = yaml.safe_load(file)
93 | self._backscatter_value = wp.vec3f(*yaml_content['backscatter_value'])
94 | self._atten_coeff = wp.vec3f(*yaml_content['atten_coeff'])
95 | self._backscatter_coeff = wp.vec3f(*yaml_content['backscatter_coeff'])
96 | print(f"[{self._name}] On {str(self._device)}. Using loaded render parameters:")
97 | print(f"[{self._name}] Render parameters: {yaml_content}")
98 | except yaml.YAMLError as exc:
99 | carb.log_error(f"[{self._name}] Error reading YAML file: {exc}")
100 | else:
101 | self._backscatter_value = wp.vec3f(*UW_param[0:3])
102 | self._atten_coeff = wp.vec3f(*UW_param[6:9])
103 | self._backscatter_coeff = wp.vec3f(*UW_param[3:6])
104 | print(f'[{self._name}] On {str(self._device)}. Using default render parameters.')
105 |
106 |
107 | self._rgba_annot = rep.AnnotatorRegistry.get_annotator('LdrColor', device=str(self._device))
108 | self._depth_annot = rep.AnnotatorRegistry.get_annotator('distance_to_camera', device=str(self._device))
109 |
110 | self._rgba_annot.attach(self._render_product_path)
111 | self._depth_annot.attach(self._render_product_path)
112 |
113 | if self._viewport:
114 | self.make_viewport()
115 |
116 | if writing_dir is not None:
117 | self._writing = True
118 | self._writing_backend = rep.BackendDispatch({"paths": {"out_dir": writing_dir}})
119 |
120 | print(f'[{self._name}] Initialized successfully. Data writing: {self._writing}')
121 |
122 | def render(self):
123 | """Process and display a single frame with underwater effects.
124 |
125 | Note:
126 | - Updates viewport display if enabled
127 | - Saves image to disk if writing_dir was specified
128 | """
129 | raw_rgba = self._rgba_annot.get_data()
130 | depth = self._depth_annot.get_data()
131 | if raw_rgba.size !=0:
132 | uw_image = wp.zeros_like(raw_rgba)
133 | wp.launch(
134 | dim=np.flip(self.get_resolution()),
135 | kernel=UW_render,
136 | inputs=[
137 | raw_rgba,
138 | depth,
139 | self._backscatter_value,
140 | self._atten_coeff,
141 | self._backscatter_coeff
142 | ],
143 | outputs=[
144 | uw_image
145 | ]
146 | )
147 |
148 | if self._viewport:
149 | self._provider.set_bytes_data_from_gpu(uw_image.ptr, self.get_resolution())
150 | if self._writing:
151 | self._writing_backend.schedule(write_image, path=f'UW_image_{self._id}.png', data=uw_image)
152 | print(f'[{self._name}] [{self._id}] Rendered image saved to {self._writing_backend.output_dir}')
153 |
154 | self._id += 1
155 |
156 | def make_viewport(self):
157 | """Create a viewport window for real-time visualization.
158 |
159 | Note:
160 | - Window size fixed at 1280x760 pixels
161 | """
162 |
163 | self.wrapped_ui_elements = []
164 | self.window = ui.Window(self._name, width=1280, height=720 + 40, visible=True)
165 | self._provider = ui.ByteImageProvider()
166 | with self.window.frame:
167 | with ui.ZStack(height=720):
168 | ui.Rectangle(style={"background_color": 0xFF000000})
169 | ui.Label('Run the scenario for image to be received',
170 | style={'font_size': 55,'alignment': ui.Alignment.CENTER},
171 | word_wrap=True)
172 | image_provider = ui.ImageWithProvider(self._provider, width=1280, height=720,
173 | style={'fill_policy': ui.FillPolicy.PRESERVE_ASPECT_FIT,
174 | 'alignment' :ui.Alignment.CENTER})
175 |
176 | self.wrapped_ui_elements.append(image_provider)
177 | self.wrapped_ui_elements.append(self._provider)
178 | self.wrapped_ui_elements.append(self.window)
179 |
180 | # Detach the annotator from render product and clear the data cache
181 | def close(self):
182 | """Clean up resources by detaching annotators and clearing caches.
183 |
184 | Note:
185 | - Required for proper shutdown when done using the sensor
186 | - Also closes viewport window if one was created
187 | """
188 | self._rgba_annot.detach(self._render_product_path)
189 | self._depth_annot.detach(self._render_product_path)
190 |
191 | rep.AnnotatorCache.clear(self._rgba_annot)
192 | rep.AnnotatorCache.clear(self._depth_annot)
193 |
194 | if self._viewport:
195 | self.ui_destroy()
196 |
197 | print(f'[{self._name}] Annotator detached. AnnotatorCache cleaned.')
198 |
199 |
200 | def ui_destroy(self):
201 | """Explicitly destroy viewport UI elements.
202 |
203 | Note:
204 | - Called automatically by close()
205 | - Only needed if manually managing UI lifecycle
206 | """
207 | for elem in self.wrapped_ui_elements:
208 | elem.destroy()
209 |
210 |
211 |
--------------------------------------------------------------------------------
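`UW_Camera.render` launches the `UW_render` kernel (defined in `UWrenderer_utils.py`, not shown in this listing) with the backscatter value, attenuation coefficients, and backscatter coefficients. A NumPy sketch of the common attenuation-plus-backscatter image formation model this parameterization suggests — an assumption for illustration, since the actual kernel may differ:

```python
import numpy as np

def uw_render_np(rgb, depth, backscatter_value, atten_coeff, backscatter_coeff):
    """Attenuate the direct signal with distance and add depth-dependent
    backscatter that saturates toward the veiling-light color."""
    d = depth[..., None]  # (H, W) -> (H, W, 1), broadcast over RGB channels
    direct = rgb * np.exp(-np.asarray(atten_coeff) * d)
    backscatter = np.asarray(backscatter_value) * (1.0 - np.exp(-np.asarray(backscatter_coeff) * d))
    return direct + backscatter
```

At depth 0 the image is unchanged; as depth grows the pixel color converges to the backscatter (veiling-light) value, which is why the default `UW_param` starts with a greenish-blue RGB triple.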
/isaacsim/oceansim/utils/ImagingSonar_kernels.py:
--------------------------------------------------------------------------------
1 | import warp as wp
2 |
3 |
4 | @wp.func
5 | def cartesian_to_spherical(cart: wp.vec3) -> wp.vec3:
6 | r = wp.sqrt(cart[0]*cart[0] + cart[1]*cart[1] + cart[2]*cart[2])
7 | return wp.vec3(r,
8 | wp.atan2(cart[1], cart[0]),
9 | wp.acos(cart[2] / r)
10 | )
11 |
12 |
13 | @wp.kernel
14 | def compute_intensity(pcl: wp.array(ndim=2, dtype=wp.float32),
15 | normals: wp.array(ndim=2, dtype=wp.float32),
16 | viewTransform: wp.mat44,
17 | semantics: wp.array(ndim=1, dtype=wp.uint32),
18 | indexToRefl: wp.array(dtype=wp.float32),
19 | attenuation: float,
20 | intensity: wp.array(dtype=wp.float32)
21 | ):
22 | tid = wp.tid()
23 | pcl_vec = wp.vec3(pcl[tid,0], pcl[tid,1], pcl[tid,2])
24 | normal_vec = wp.vec3(normals[tid,0], normals[tid,1],normals[tid,2])
25 | R = wp.mat33(viewTransform[0,0], viewTransform[0,1], viewTransform[0,2],
26 | viewTransform[1,0], viewTransform[1,1], viewTransform[1,2],
27 | viewTransform[2,0], viewTransform[2,1], viewTransform[2,2])
28 | T = wp.vec3(viewTransform[0,3], viewTransform[1,3], viewTransform[2,3])
29 | sensor_loc = - (wp.transpose(R) @ T)
30 | incidence = pcl_vec - sensor_loc
31 | # Will use warp.math.norm_l2() in future release
32 | dist = wp.sqrt(incidence[0]*incidence[0] + incidence[1]*incidence[1] + incidence[2]*incidence[2])
33 | unit_directs = wp.normalize(pcl_vec - sensor_loc)
34 | cos_theta = wp.dot(-unit_directs, normal_vec)
35 | reflectivity = indexToRefl[semantics[tid]]
36 | intensity[tid] = reflectivity * cos_theta * wp.exp(-attenuation * dist)
37 |
38 | @wp.kernel
39 | def world2local(viewTransform: wp.mat44,
40 | pcl_world: wp.array(ndim=2, dtype=wp.float32),
41 | pcl_local: wp.array(dtype=wp.vec3),
42 | pcl_local_spher: wp.array(dtype=wp.vec3)):
43 | tid = wp.tid()
44 | pcl_world_homogeneous = wp.vec4(pcl_world[tid,0],
45 | pcl_world[tid,1],
46 | pcl_world[tid,2],
47 | wp.float32(1.0)
48 | )
49 | pcl_local_homogeneous = viewTransform @ pcl_world_homogeneous
50 | # Rotate axis such that y axis pointing forward for sonar data plotting
51 | pcl_local[tid] = wp.vec3(pcl_local_homogeneous[0], -pcl_local_homogeneous[2], pcl_local_homogeneous[1])
52 | pcl_local_spher[tid] = cartesian_to_spherical(pcl_local[tid])
53 |
54 |
55 | @wp.kernel
56 | def bin_intensity(pcl: wp.array(dtype=wp.vec3),
57 | intensity: wp.array(dtype=wp.float32),
58 | x_offset: wp.float32,
59 | y_offset: wp.float32,
60 | x_res: wp.float32,
61 | y_res: wp.float32,
62 | bin_sum: wp.array(ndim=2, dtype=wp.float32),
63 | bin_count: wp.array(ndim=2, dtype=wp.int32)
64 | ):
65 | tid = wp.tid()
66 |
67 | # Get the range, azimuth, and intensity of the point
68 | x = pcl[tid][0]
69 | y = pcl[tid][1]
70 |
71 | # Calculate the bin indices for range and azimuth
72 | x_bin_idx = wp.int32((x - x_offset) / x_res)
73 | y_bin_idx = wp.int32((y - y_offset) / y_res)
74 | wp.atomic_add(bin_sum, x_bin_idx, y_bin_idx, intensity[tid])
75 | wp.atomic_add(bin_count, x_bin_idx, y_bin_idx, 1)
76 |
77 | @wp.kernel
78 | def average(sum: wp.array(ndim=2, dtype=wp.float32),
79 | count: wp.array(ndim=2, dtype=wp.int32),
80 | avg: wp.array(ndim=2, dtype=wp.float32)):
81 | i, j = wp.tid()
82 | if count[i, j] > 0:
83 | avg[i, j] = sum[i, j] / wp.float32(count[i, j])
84 |
85 |
86 | @wp.kernel
87 | def all_max(array: wp.array(ndim=2, dtype=wp.float32),
88 | max_value: wp.array(dtype=wp.float32)):
89 | i,j = wp.tid()
90 | wp.atomic_max(max_value, 0, array[i, j])
91 |
92 | @wp.kernel
93 | def range_max(array: wp.array(ndim=2, dtype=wp.float32),
94 | max_value: wp.array(dtype=wp.float32)):
95 | i, j = wp.tid()
96 | wp.atomic_max(max_value, i, array[i,j])
97 |
98 |
99 |
100 | @wp.kernel
101 | def normal_2d(seed: int,
102 | mean: float,
103 | std: float,
104 | output: wp.array(ndim=2, dtype=wp.float32),
105 |
106 | ):
107 | i, j = wp.tid()
108 | state = wp.rand_init(seed, i * output.shape[1] + j)
109 |
110 | # Generate normal random variable
111 | output[i,j] = mean + std * wp.randn(state)
112 |
113 |
114 |
115 | @wp.kernel
116 | def range_dependent_rayleigh_2d(seed: int,
117 | r: wp.array(ndim=2, dtype=wp.float32),
118 | azi: wp.array(ndim=2, dtype=wp.float32),
119 | max_range: float,
120 | rayleigh_scale: float,
121 | central_peak: float,
122 | central_std: float,
123 | output: wp.array(ndim=2, dtype = wp.float32)
124 | ):
125 | i, j = wp.tid()
126 | state = wp.rand_init(seed, i * output.shape[1] + j)
127 |
128 | # Generate two independent standard normal samples
129 | n1 = wp.randn(state)
130 | n2 = wp.randn(state) # consecutive draws from the same state are independent
131 |
132 | # Transform to Rayleigh distribution
133 | rayleigh = rayleigh_scale * wp.sqrt(n1*n1 + n2*n2)
134 | # Apply range dependency
135 | output[i,j] = wp.pow(r[i,j]/max_range, 2.0) * (1.0 + central_peak * wp.exp(-wp.pow(azi[i,j] - wp.PI/2.0, 2.0) / central_std)) * rayleigh
136 |
137 |
138 |
139 |
140 | @wp.kernel
141 | def make_sonar_map_all(r: wp.array(ndim=2, dtype=wp.float32),
142 | azi: wp.array(ndim=2, dtype=wp.float32),
143 | intensity: wp.array(ndim=2, dtype=wp.float32),
144 | max_intensity: wp.array(ndim=1, dtype=wp.float32),
145 | gau_noise: wp.array(ndim=2, dtype=wp.float32),
146 | range_ray_noise: wp.array(ndim=2, dtype=wp.float32),
147 | offset: wp.float32,
148 | gain: wp.float32,
149 | result: wp.array(ndim=2, dtype=wp.vec3)):
150 | i, j = wp.tid()
151 | intensity[i,j] = intensity[i,j]/max_intensity[0]
152 | intensity[i,j] += offset
153 | intensity[i,j] *= gain
154 | intensity[i,j] *= (0.5 + gau_noise[i,j])
155 | intensity[i,j] += range_ray_noise[i,j]
156 | intensity[i,j] = wp.clamp(intensity[i,j], wp.float32(0.0), wp.float32(1.0))
157 |
158 | result[i,j] = wp.vec3(r[i,j] * wp.cos(azi[i,j]),
159 | r[i,j] * wp.sin(azi[i,j]),
160 | intensity[i,j])
161 |
162 | @wp.kernel
163 | def make_sonar_map_range(r: wp.array(ndim=2, dtype=wp.float32),
164 | azi: wp.array(ndim=2, dtype=wp.float32),
165 | intensity: wp.array(ndim=2, dtype=wp.float32),
166 | max_intensity: wp.array(ndim=1, dtype=wp.float32),
167 | gau_noise: wp.array(ndim=2, dtype=wp.float32),
168 | range_ray_noise: wp.array(ndim=2, dtype=wp.float32),
169 | offset: wp.float32,
170 | gain: wp.float32,
171 | result: wp.array(ndim=2, dtype=wp.vec3)):
172 | i, j = wp.tid()
173 |
174 | if max_intensity[i] !=0:
175 | intensity[i,j] = intensity[i,j]/max_intensity[i]
176 |
177 | intensity[i,j] *= (0.5 + gau_noise[i,j])
178 | intensity[i,j] += range_ray_noise[i,j]
179 | intensity[i,j] += offset
180 | intensity[i,j] *= gain
181 | intensity[i,j] = wp.clamp(intensity[i,j], wp.float32(0.0), wp.float32(1.0))
182 |
183 | result[i,j] = wp.vec3(r[i,j] * wp.cos(azi[i,j]),
184 | r[i,j] * wp.sin(azi[i,j]),
185 | intensity[i,j])
186 |
187 | @wp.kernel
188 | def make_sonar_image(sonar_data: wp.array(ndim=2, dtype=wp.vec3),
189 | sonar_image: wp.array(ndim=3, dtype=wp.uint8)):
190 | i, j = wp.tid()
191 | width = sonar_data.shape[1]
192 | sonar_rgb = wp.uint8(sonar_data[i,j][2] * wp.float32(255))
193 | # width-1-j flips the image horizontally; width-j would index out of bounds at j=0
194 | sonar_image[i,width-1-j,0] = sonar_rgb
195 | sonar_image[i,width-1-j,1] = sonar_rgb
196 | sonar_image[i,width-1-j,2] = sonar_rgb
197 | sonar_image[i,width-1-j,3] = wp.uint8(255)
197 |
--------------------------------------------------------------------------------
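The `cartesian_to_spherical` helper above uses the physics convention: (range, azimuth measured from +x, polar angle measured from +z). A plain-Python counterpart, useful for checking the kernels off-device (assumption noted in the docstring: like the Warp version, it does not guard against r = 0):

```python
import math

def cartesian_to_spherical(x, y, z):
    """Return (range, azimuth from +x, polar angle from +z).
    Assumes r > 0 -- like the wp.func, no guard against dividing by zero."""
    r = math.sqrt(x * x + y * y + z * z)
    return r, math.atan2(y, x), math.acos(z / r)
```

For example, a point on the +z axis has zero polar angle, while a point in the xy-plane has a polar angle of pi/2.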
/isaacsim/oceansim/utils/MultivariateNormal.py:
--------------------------------------------------------------------------------
1 | """
2 | Covariance Initialization:
3 | The init_sigma and init_cov methods handle both scalar and matrix inputs.
4 | Random Sampling:
5 | The sampling uses NumPy's random.default_rng() for reproducibility and performance.
6 | Cholesky Decomposition:
7 | Implemented directly in Python with an equivalent structure to the C++ version.
8 | Data Types:
9 | Unreal-specific types like TArray are replaced with Python lists or NumPy arrays.
10 |
11 | """
12 |
13 | import numpy as np
14 |
15 | class MultivariateNormal:
16 | def __init__(self, N):
17 | assert N > 0, "MVN: N must be > 0"
18 | self.N = N
19 | self.sqrt_cov = np.zeros((N, N))
20 | self.uncertain = False
21 | self.gen = np.random.default_rng()
22 |
23 | def init_sigma(self, sigma):
24 | """Initialize diagonal covariance using a single float or an array."""
25 | if isinstance(sigma, (float, int)):
26 | np.fill_diagonal(self.sqrt_cov, sigma)
27 | elif isinstance(sigma, (list, np.ndarray)):
28 | assert len(sigma) == self.N, f"Sigma has size {len(sigma)} and should be {self.N}"
29 | np.fill_diagonal(self.sqrt_cov, sigma)
30 | self.uncertain = True
31 |
32 | def init_cov(self, cov):
33 | """Initialize covariance."""
34 | if isinstance(cov, (float, int)):
35 | np.fill_diagonal(self.sqrt_cov, np.sqrt(cov))
36 | elif isinstance(cov, (list, np.ndarray)):
37 | cov = np.array(cov)
38 | if cov.ndim == 1: # Diagonal covariance
39 | np.fill_diagonal(self.sqrt_cov, np.sqrt(cov))
40 | elif cov.ndim == 2: # Full covariance matrix
41 | assert cov.shape == (self.N, self.N), f"Covariance matrix size {cov.shape} should be ({self.N}, {self.N})"
42 | self.sqrt_cov = cov.copy()
43 | success = self.cholesky(self.sqrt_cov)
44 | if not success:
45 | print("Warning: MVN encountered a non-positive definite covariance")
46 | else:
47 | raise ValueError("Invalid covariance input")
48 | self.uncertain = True
49 |
50 | def sample_array(self):
51 | """Generate a sample from the multivariate normal distribution."""
52 | if not self.uncertain:
53 | return np.zeros(self.N)
54 |
55 | # Sample from N(0,1)
56 | sam = self.gen.standard_normal(self.N)
57 | # Shift by the covariance
58 | result = self.sqrt_cov @ sam
59 | return result
60 |
61 | def sample_list(self):
62 | return self.sample_array().tolist()
63 |
64 | def sample_vector(self):
65 | assert self.N == 3, f"Can't use MVN size {self.N} with 3D vector samples"
66 | sample = self.sample_array()
67 | return sample.tolist()
68 |
69 | def sample_float(self):
70 | assert self.N == 1, f"Can't use MVN size {self.N} with float samples"
71 | return self.sample_array()[0]
72 |
73 | def sample_rayleigh(self):
74 | assert self.N == 1, f"Can't use MVN size {self.N} with Rayleigh Noise"
75 | x = self.sample_float()
76 | y = self.sample_float()
77 | return np.sqrt(x**2 + y**2)
78 |
79 | @staticmethod
80 | def cholesky(A):
81 | """Compute the Cholesky decomposition in place."""
82 | N = A.shape[0]
83 | for i in range(N):
84 | for j in range(i, N):
85 | sum_val = A[i, j]
86 | for k in range(i):
87 | sum_val -= A[i, k] * A[j, k]
88 | if i == j:
89 | if sum_val <= 0:
90 | return False # Not positive definite
91 | A[i, j] = np.sqrt(sum_val)
92 | else:
93 | A[j, i] = sum_val / A[i, i]
94 |
95 | # Zero out upper triangular part
96 | for i in range(N):
97 | for j in range(i + 1, N):
98 | A[i, j] = 0
99 | return True
100 |
101 | def get_sqrt_cov(self):
102 | return self.sqrt_cov
103 |
104 | def is_uncertain(self):
105 | return self.uncertain
--------------------------------------------------------------------------------
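For context, `sample_array()` uses the standard trick of coloring white noise with a Cholesky factor of the covariance. A self-contained NumPy sketch of the same idea, using `np.linalg.cholesky` instead of the class's hand-rolled decomposition:

```python
import numpy as np

# Target covariance: factor it once, then transform standard-normal draws.
cov = np.array([[1.0, 0.5],
                [0.5, 2.0]])
sqrt_cov = np.linalg.cholesky(cov)        # lower-triangular L with L @ L.T == cov

rng = np.random.default_rng(0)
z = rng.standard_normal((100000, 2))      # z ~ N(0, I)
samples = z @ sqrt_cov.T                  # samples ~ N(0, cov)

# The empirical covariance should recover the target.
emp_cov = np.cov(samples, rowvar=False)
print(np.round(emp_cov, 2))
```

This is exactly what `sample_array()` computes per draw as `self.sqrt_cov @ sam`; the in-place `cholesky` helper exists so the class has no dependency on `np.linalg`.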
/isaacsim/oceansim/utils/MultivariateUniform.py:
--------------------------------------------------------------------------------
1 | """
2 | Key Notes:
3 |
4 | Assertions and Error Handling:
5 | Used Python's assert and ValueError for input validation.
6 |
7 | Random Sampling:
8 | Used numpy.random.default_rng() for random number generation, analogous to std::mt19937 in C++.
9 |
10 | Exponential Sampling and PDF:
11 | Replicated the exponential sampling logic using numpy.log and numpy.exp.
12 |
13 | Data Types:
14 | Leveraged numpy.ndarray for arrays and ensured the implementation aligns with Python's dynamic typing.
15 | """
16 | import numpy as np
17 | from typing import List, Union
18 |
19 | class MultivariateUniform:
20 | def __init__(self, N: int):
21 | assert N > 0, "UNIFORM: N must be > 0"
22 | self.N = N
23 | self.uncertain = False
24 | self.max = np.zeros(N)
25 | self.rng = np.random.default_rng()
26 |
27 | def init_bounds(self, max_: Union[float, List[float]]):
28 | if isinstance(max_, (float, int)):
29 | self.max.fill(max_)
30 | elif isinstance(max_, (list, np.ndarray)) and len(max_) == self.N:
31 | self.max = np.asarray(max_, dtype=float)
32 | else:
33 | raise ValueError(f"Expected a float or list of size {self.N}, got {max_}")
34 | self.uncertain = np.any(self.max != 0)
35 |
36 | def sample_array(self) -> np.ndarray:
37 | if self.uncertain:
38 | return self.rng.uniform(0, 1, self.N) * self.max
39 | return np.zeros(self.N)
40 |
41 | def sample_list(self) -> List[float]:
42 | return self.sample_array().tolist()
43 |
44 | def sample_vector(self):
45 | if self.N != 3:
46 | raise ValueError(f"Can't use UNIFORM size {self.N} with vector samples")
47 | sample = self.sample_array()
48 | return tuple(sample)
49 |
50 | def sample_float(self) -> float:
51 | if self.N != 1:
52 | raise ValueError(f"Can't use UNIFORM size {self.N} with float samples")
53 | return self.sample_array()[0]
54 |
55 | def sample_exponential(self) -> float:
56 | if self.N != 1:
57 | raise ValueError(f"Can't use UNIFORM size {self.N} with exponential samples")
58 | if self.uncertain:
59 | x = self.rng.uniform(0, 1)
60 | return -self.max[0] * np.log(x)
61 | return 0.0
62 |
63 | def exponential_pdf(self, x: float) -> float:
64 | if self.uncertain:
65 | return np.exp(-x / self.max[0]) / self.max[0]
66 | return 1.0
67 |
68 | def exponential_scaled_pdf(self, x: float) -> float:
69 | if self.uncertain:
70 | return np.exp(-x / self.max[0])
71 | return 1.0
72 |
73 | def is_uncertain(self) -> bool:
74 | return self.uncertain
75 |
--------------------------------------------------------------------------------
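`sample_exponential()` above is inverse-transform sampling: if u ~ Uniform(0, 1), then -scale * log(u) is exponentially distributed with mean `scale`, which is consistent with `exponential_pdf(x) = exp(-x/scale) / scale`. A quick NumPy check of that claim:

```python
import numpy as np

rng = np.random.default_rng(42)
scale = 2.0  # corresponds to self.max[0] in MultivariateUniform

# Inverse-transform sampling: map uniforms through the inverse CDF.
# Use 1 - u so the argument to log lies in (0, 1], avoiding log(0).
u = rng.uniform(0.0, 1.0, 200000)
samples = -scale * np.log(1.0 - u)

print(round(samples.mean(), 2))  # close to scale, the exponential mean
```

The sample mean converging to `scale` confirms the distribution implied by `exponential_pdf`.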
/isaacsim/oceansim/utils/UWrenderer_utils.py:
--------------------------------------------------------------------------------
1 | import warp as wp
2 |
3 |
4 | @wp.func
5 | def vec3_exp(exponent: wp.vec3):
6 | return wp.vec3(wp.exp(exponent[0]), wp.exp(exponent[1]), wp.exp(exponent[2]))
7 |
8 | @wp.func
9 | def vec3_mul(vec_1: wp.vec3,
10 | vec_2: wp.vec3):
11 | return wp.vec3(vec_1[0] * vec_2[0], vec_1[1] * vec_2[1], vec_1[2] * vec_2[2])
12 |
13 | @wp.kernel
14 | def UW_render(raw_image: wp.array(ndim=3, dtype=wp.uint8),
15 | depth_image: wp.array(ndim=2, dtype=wp.float32),
16 | backscatter_value: wp.vec3,
17 | atten_coeff: wp.vec3,
18 | backscatter_coeff: wp.vec3,
19 | uw_image: wp.array(ndim=3, dtype=wp.uint8)):
20 | i,j = wp.tid()
21 | raw_RGB = wp.vec3(wp.float32(raw_image[i,j,0]), wp.float32(raw_image[i,j,1]), wp.float32(raw_image[i,j,2]), dtype=wp.float32)
22 | depth = depth_image[i,j]
23 | exp_atten = vec3_exp(- depth * atten_coeff)
24 | exp_back = vec3_exp(- depth * backscatter_coeff)
25 | UW_RGB = vec3_mul(raw_RGB, exp_atten) + vec3_mul(backscatter_value * wp.float32(255), (wp.vec3f(1.0,1.0,1.0) - exp_back) )
26 | uw_image[i,j,0] = wp.uint8(wp.clamp(UW_RGB[0], wp.float32(0), wp.float32(255)))
27 | uw_image[i,j,1] = wp.uint8(wp.clamp(UW_RGB[1], wp.float32(0), wp.float32(255)))
28 | uw_image[i,j,2] = wp.uint8(wp.clamp(UW_RGB[2], wp.float32(0), wp.float32(255)))
29 | uw_image[i,j,3] = raw_image[i,j,3]
30 |
31 |
--------------------------------------------------------------------------------
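`UW_render` applies a simplified underwater image-formation model: the direct signal decays as exp(-d * atten_coeff) with distance, while backscatter saturates toward the veiling color as (1 - exp(-d * backscatter_coeff)). A NumPy sketch of the same per-pixel math; the coefficient values below are illustrative, not calibrated water properties:

```python
import numpy as np

def uw_render_np(raw_rgb, depth, backscatter_value, atten_coeff, backscatter_coeff):
    # out = raw * exp(-d * atten) + 255 * B * (1 - exp(-d * backscatter)),
    # clamped to [0, 255], matching the body of the UW_render kernel
    d = depth[..., None]                               # (H, W, 1) for channel broadcast
    direct = raw_rgb * np.exp(-d * atten_coeff)        # attenuated scene radiance
    back = 255.0 * backscatter_value * (1.0 - np.exp(-d * backscatter_coeff))
    return np.clip(direct + back, 0.0, 255.0).astype(np.uint8)

raw = np.full((2, 2, 3), 200.0)                        # flat gray scene
depth = np.array([[0.0, 1.0], [5.0, 50.0]])            # meters per pixel
out = uw_render_np(raw, depth,
                   backscatter_value=np.array([0.1, 0.3, 0.4]),   # bluish veiling color
                   atten_coeff=np.array([0.8, 0.3, 0.2]),         # red attenuates fastest
                   backscatter_coeff=np.array([0.5, 0.4, 0.3]))
print(out[0, 0], out[1, 1])  # unattenuated at d = 0; veiling-color dominated far away
```

At zero depth the pixel passes through unchanged; at large depth the direct term vanishes and the output approaches the backscatter color, which is the behavior the kernel relies on for its haze effect.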
/isaacsim/oceansim/utils/assets_utils.py:
--------------------------------------------------------------------------------
1 | def get_oceansim_assets_path() -> str:
2 | # return "/home/haoyu/Desktop/OceanSim_assets"
3 | return "/home/haoyu-ma/Desktop/OceanSim_assets"
--------------------------------------------------------------------------------
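The hardcoded path above only resolves on the author's machine. One hedged way to make it configurable is an environment-variable override; `OCEANSIM_ASSETS_PATH` is a hypothetical variable name, not something the extension currently reads:

```python
import os

def get_oceansim_assets_path_configurable() -> str:
    # Sketch: prefer the (hypothetical) OCEANSIM_ASSETS_PATH environment
    # variable, falling back to a per-user default under the home directory.
    default = os.path.expanduser("~/Desktop/OceanSim_assets")
    return os.environ.get("OCEANSIM_ASSETS_PATH", default)

os.environ["OCEANSIM_ASSETS_PATH"] = "/tmp/OceanSim_assets"
print(get_oceansim_assets_path_configurable())  # /tmp/OceanSim_assets
```

This keeps the single-function interface intact while letting each user point the extension at their own asset checkout.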
/isaacsim/oceansim/utils/keyboard_cmd.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2020-2024, NVIDIA CORPORATION. All rights reserved.
2 | #
3 | # NVIDIA CORPORATION and its licensors retain all intellectual property
4 | # and proprietary rights in and to this software, related documentation
5 | # and any modifications thereto. Any use, reproduction, disclosure or
6 | # distribution of this software and related documentation without an express
7 | # license agreement from NVIDIA CORPORATION is strictly prohibited.
8 | #
9 |
10 | import carb
11 | import numpy as np
12 | import omni
13 | import omni.appwindow # Contains handle to keyboard
14 |
15 |
16 | # This can only be used after the scene has been loaded.
17 | class keyboard_cmd:
18 | def __init__(self,
19 | base_command: np.ndarray = np.array([0.0, 0.0, 0.0]),
20 | input_keyboard_mapping: dict = {
21 | # forward command
22 | "W": [1.0, 0.0, 0.0],
23 | # backward command
24 | "S": [-1.0, 0.0, 0.0],
25 | # leftward command
26 | "A": [0.0, 1.0, 0.0],
27 | # rightward command
28 | "D": [0.0, -1.0, 0.0],
29 | # rise command
30 | "UP": [0.0, 0.0, 1.0],
31 | # sink command
32 | "DOWN": [0.0, 0.0, -1.0],
33 | }
34 | ) -> None:
35 | self._base_command = np.array(base_command, dtype=float)  # copy: the in-place += below must not mutate the shared default argument
36 |
37 | self._input_keyboard_mapping = input_keyboard_mapping
38 |
39 | self._appwindow = omni.appwindow.get_default_app_window()
40 | self._input = carb.input.acquire_input_interface()
41 | self._keyboard = self._appwindow.get_keyboard()
42 | self._sub_keyboard = self._input.subscribe_to_keyboard_events(self._keyboard, self._sub_keyboard_event)
43 |
44 |
45 | def _sub_keyboard_event(self, event, *args, **kwargs) -> bool:
46 | """Subscriber callback to when kit is updated."""
47 | # when a key is pressed or released, the command is adjusted w.r.t. the key mapping
48 | if event.type == carb.input.KeyboardEventType.KEY_PRESS:
49 | # on pressing, the command is incremented
50 | if event.input.name in self._input_keyboard_mapping:
51 | self._base_command += np.array(self._input_keyboard_mapping[event.input.name])
52 |
53 | elif event.type == carb.input.KeyboardEventType.KEY_RELEASE:
54 | # on release, the command is decremented
55 | if event.input.name in self._input_keyboard_mapping:
56 | self._base_command -= np.array(self._input_keyboard_mapping[event.input.name])
57 | return True
58 |
59 |
60 | def cleanup(self):
61 | self._appwindow = None
62 | self._input = None
63 | self._keyboard = None
64 | self._sub_keyboard = None
65 |
--------------------------------------------------------------------------------
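The press/release bookkeeping in `_sub_keyboard_event` can be exercised without Isaac Sim. A pure-NumPy sketch of the same accumulation logic; `KeyboardCmdSketch` is illustrative, not part of the extension:

```python
import numpy as np

class KeyboardCmdSketch:
    # Pressing a key adds its mapped vector to the running command and
    # releasing subtracts it, so simultaneously held keys compose linearly,
    # mirroring keyboard_cmd._sub_keyboard_event without omni/carb.
    def __init__(self, mapping):
        self.mapping = {k: np.array(v) for k, v in mapping.items()}
        self.command = np.zeros(3)

    def on_key(self, key, pressed):
        if key in self.mapping:
            delta = self.mapping[key]
            self.command = self.command + delta if pressed else self.command - delta

cmd = KeyboardCmdSketch({"W": [1.0, 0.0, 0.0], "A": [0.0, 1.0, 0.0]})
cmd.on_key("W", True)   # hold W  -> forward
cmd.on_key("A", True)   # hold A  -> forward-left
cmd.on_key("W", False)  # drop W  -> left only
print(cmd.command)  # [0. 1. 0.]
```

Because every press is paired with a release of the same vector, the command always returns to zero when all keys are up, which is why the real class never needs to reset `_base_command` explicitly.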
/media/caustics.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/umfieldrobotics/OceanSim/ed3e592b4ef25fd16e665af1816ca282123c0c04/media/caustics.gif
--------------------------------------------------------------------------------
/media/oceansim_demo.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/umfieldrobotics/OceanSim/ed3e592b4ef25fd16e665af1816ca282123c0c04/media/oceansim_demo.gif
--------------------------------------------------------------------------------
/media/oceansim_digital_twin.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/umfieldrobotics/OceanSim/ed3e592b4ef25fd16e665af1816ca282123c0c04/media/oceansim_digital_twin.gif
--------------------------------------------------------------------------------
/media/pitch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/umfieldrobotics/OceanSim/ed3e592b4ef25fd16e665af1816ca282123c0c04/media/pitch.png
--------------------------------------------------------------------------------
/media/semantic_editor.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/umfieldrobotics/OceanSim/ed3e592b4ef25fd16e665af1816ca282123c0c04/media/semantic_editor.gif
--------------------------------------------------------------------------------