├── .github
│   └── workflows
│       └── deploy.yml
├── .gitignore
├── .pr-preview.json
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE.md
├── Makefile
├── README.md
├── img
│   └── timings-v3.jpg
├── plane-detection-explainer.md
├── plane-detection-security-privacy-questionnaire.md
├── plane-detection.bs
├── planes-notifying-about-removal.md
├── w3c.json
└── webxrmeshing-1.bs

/.github/workflows/deploy.yml:
--------------------------------------------------------------------------------
 1 | name: Build and Deploy
 2 | on:
 3 |   push:
 4 |     branches:
 5 |       - main
 6 |   workflow_dispatch:
 7 | 
 8 | jobs:
 9 |   build-and-deploy:
10 |     runs-on: ubuntu-latest
11 |     steps:
12 |       - name: Checkout 🛎️
13 |         uses: actions/checkout@v2.3.1 # If you're using actions/checkout@v2 you must set persist-credentials to false in most cases for the deployment to work correctly.
14 |         with:
15 |           persist-credentials: false
16 | 
17 |       - name: Install and Build 🔧 # This example project is built using npm and outputs the result to the 'build' folder. Replace with the commands required to build your project, or remove this step entirely if your site is pre-built.
18 |         run: make
19 |       - name: Deploy 🚀
20 |         uses: JamesIves/github-pages-deploy-action@4.0.0
21 |         with:
22 |           BRANCH: gh-pages # The branch the action should deploy to.
23 |           FOLDER: out # The folder the action should deploy.
24 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | out/
2 | env/

--------------------------------------------------------------------------------
/.pr-preview.json:
--------------------------------------------------------------------------------
1 | {
2 |   "src_file": "plane-detection.bs",
3 |   "type": "bikeshed",
4 |   "params": {
5 |     "force": 1
6 |   }
7 | }
8 | 
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Code of Conduct
2 | 
3 | All documentation, code and communication under this repository are covered by the [W3C Code of Ethics and Professional Conduct](https://www.w3.org/Consortium/cepc/).
4 | 
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
 1 | # Immersive Web Community Group
 2 | 
 3 | This repository is being used for work in the W3C Immersive Web Community Group, governed by the [W3C Community License
 4 | Agreement (CLA)](http://www.w3.org/community/about/agreements/cla/). To make substantive contributions,
 5 | you must join the CG.
 6 | 
 7 | If you are not the sole contributor to a contribution (pull request), please identify all
 8 | contributors in the pull request comment.
 9 | 
10 | To add a contributor (other than yourself, that's automatic), mark them one per line as follows:
11 | 
12 | ```
13 | +@github_username
14 | ```
15 | 
16 | If you added a contributor by mistake, you can remove them in a comment with:
17 | 
18 | ```
19 | -@github_username
20 | ```
21 | 
22 | If you are making a pull request on behalf of someone else but you had no part in designing the
23 | feature, you can remove yourself with the above syntax.
24 | 
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
 1 | All Reports in this Repository are licensed by Contributors
 2 | under the
 3 | [W3C Software and Document License](http://www.w3.org/Consortium/Legal/2015/copyright-software-and-document).
 4 | 
 5 | Contributions to Specifications are made under the
 6 | [W3C CLA](https://www.w3.org/community/about/agreements/cla/).
 7 | 
 8 | Contributions to Test Suites are made under the
 9 | [W3C 3-clause BSD License](https://www.w3.org/Consortium/Legal/2008/03-bsd-license.html).
10 | 
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
 1 | LOCAL_BIKESHED := $(shell command -v bikeshed 2> /dev/null)
 2 | 
 3 | .PHONY: dirs
 4 | 
 5 | all: dirs out/plane-detection.html out/webxrmeshing-1.html
 6 | 
 7 | dirs: out
 8 | 
 9 | out:
10 | 	mkdir -p out
11 | 
12 | out/plane-detection.html: plane-detection.bs
13 | ifndef LOCAL_BIKESHED
14 | 	curl https://api.csswg.org/bikeshed/ -F file=@plane-detection.bs -F output=err
15 | 	curl https://api.csswg.org/bikeshed/ -F file=@plane-detection.bs -F force=1 > out/plane-detection.html
16 | else
17 | 	bikeshed spec plane-detection.bs out/plane-detection.html
18 | endif
19 | 
20 | out/webxrmeshing-1.html: webxrmeshing-1.bs
21 | ifndef LOCAL_BIKESHED
22 | 	curl https://api.csswg.org/bikeshed/ -F file=@webxrmeshing-1.bs -F output=err
23 | 	curl https://api.csswg.org/bikeshed/ -F file=@webxrmeshing-1.bs -F force=1 > out/webxrmeshing-1.html
24 | else
25 | 	bikeshed spec webxrmeshing-1.bs out/webxrmeshing-1.html
26 | endif
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | # Real World Geometry
 2 | Feature lead: Alex Cooper (@alcooper91)
 3 | 
 4 | Status: Incubation
 5 | 
 6 | ## Introduction
 7 | The goal of this incubation is to develop APIs allowing access to real-world geometry (abbreviated RWG), such as planes and meshes.
 8 | 
 9 | Areas explored may include real-world plane and mesh detection, the fine-grained configuration of APIs related to such real-world geometry, the use of and/or interaction of anchors with such real-world geometry, and exposing detailed information about confidence in and quality of such real-world geometry. These explorations may also include the use of such RWG, including using it for hit testing and for occlusion/depth information to allow more realistic rendering of virtual objects in real-world scenes.
10 | 
11 | The initial focus is on surfacing plane detection API(s). The idea is that in order for some Augmented Reality scenarios to work nicely, developers need to know about surfaces present in the user’s environment (for example, to ensure that there is sufficient space to place a virtual object in the user’s environment). After surfacing a rudimentary API for plane detection to JavaScript, we will explore the use of the API and RWG, what’s possible in user space, and whether other APIs are necessary.
12 | 
13 | In addition to just exposing additional functionality, we should also consider allowing app developers to configure it. Example configuration options may include filtering for which types of planes are detected (e.g. only horizontal or vertical planes).
14 | 
15 | Moreover, it could be beneficial to application developers to be able to retrieve information about the confidence / quality of data returned by RWG APIs.
16 | 
17 | ## Privacy considerations
18 | One of the concerns with exposing real-world understanding information to web applications is the risk that the information will be abused. In order to mitigate that, the User Agent will have to ensure that the user gave the website consent to access RWG data. These topics are explored in the [privacy and security explainer](https://github.com/immersive-web/privacy-and-security/blob/master/EXPLAINER.md#accessing-real-world-data). Proposals in this repo will include mitigations from that repo that are relevant for the covered APIs.
19 | 
--------------------------------------------------------------------------------
/img/timings-v3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/immersive-web/real-world-geometry/8d9d65429df9a5f8718cd58699d51beecbe35425/img/timings-v3.jpg

--------------------------------------------------------------------------------
/plane-detection-explainer.md:
--------------------------------------------------------------------------------
 1 | # Plane detection explainer
 2 | ## Introduction
 3 | This document presents an overview of the plane detection API. The idea is that in order for some Augmented Reality scenarios to work nicely, developers need to know about surfaces present in the user’s environment. Example use cases:
 4 | - Placing virtual objects in the user’s environment. This use case is partially addressed by WebXR's hit-test API. The plane detection API extends the amount of information returned to the sites, allowing them not only to find where rays would intersect with the real world, but also to compute the approximate area of detected surfaces.
 5 | - Measuring the user’s environment. The sites could then leverage that knowledge and procedurally generate game arenas, or otherwise adapt as the system expands its knowledge of the environment.
 6 | - Real-world physics interactions. The sites could compute how virtual objects should behave when colliding with the user's environment. Other physical interactions are also possible (e.g. computing echo effects based on planes detected around the user).
 7 | It’s possible that access to detected plane information will be useful for occlusion as well, though the limitations of current RWG runtimes may limit its effectiveness.
 8 | 
 9 | Later sections of this document show how applications can enable plane detection and retrieve information about the planes. More advanced use shows how plane lifetime management could be performed in an application. Please note that this document is not supposed to serve as an API reference - it only shows by example how the API could be used.
10 | 
11 | ## Overview
12 | The API is synchronous and frame-based. If plane detection is enabled in an XRSession, each XRFrame will have a set populated with planes tracked by the session. The advantage of a synchronous API is that we can guarantee that the set of returned planes is valid only during that frame, and this allows the User Agent to change their properties as they get refined by the underlying AR SDKs.
13 | 
14 | The returned set will contain all planes tracked in the current frame. That may or may not include planes that are currently not visible to the device.
If the plane object was present in the XRFrame’s set in frame N and is not present in the XRFrame’s set in frame N+1, it means that tracking of that plane was lost / the plane is no longer present, and all properties on the object will throw exceptions if the application attempts to access them. If the same plane is detected both in frame N and in frame N+1, it will be represented by the same XRPlane object, with its attributes / state possibly updated.
15 | 
16 | The information stored in an XRPlane consists of the plane orientation (if known), a convex polygon approximating the detected plane, and the pose of the plane’s center that can be retrieved given a reference space. The center’s pose describes a new frame of reference in such a way that the Y axis is the plane’s normal vector, and the X & Z axes are the right and top vectors, respectively. The polygon vertices are specified in the reference space described by the plane center’s pose, and are returned in the form of a loop of points at the edges of the polygon. The information retrieved from a plane is valid and specified only when the XRFrame is `active`.
17 | 
18 | Plane tracking is enabled by creating an XRSession with an appropriate feature descriptor. The feature descriptor that applications can use to enable the feature is `"plane-detection"`.
19 | 
20 | ## Plane detection - quick start
21 | The steps below assume that you have already created a basic application using the [WebXR Device API](https://immersive-web.github.io/webxr/).
22 | 
23 | In order to use the plane detection API in a WebXR application, we first need to configure the session:
24 | ```javascript
25 | const options = {
26 |   requiredFeatures: [ "plane-detection" ]
27 | };
28 | 
29 | const xrSession = await navigator.xr.requestSession("immersive-ar", options);
30 | ```
31 | 
32 | Subsequently, when the scheduled `requestAnimationFrame()` callback fires, the received XRFrame will now contain plane information in its `detectedPlanes` attribute:
33 | ```javascript
34 | const xrReferenceSpace = ...; // XRReferenceSpace retrieved from a successful
35 |                               // call to xrSession.requestReferenceSpace().
36 | 
37 | // `requestAnimationFrame` callback:
38 | function onXRFrame(timestamp, frame) {
39 |   const detectedPlanes = frame.detectedPlanes;
40 |   detectedPlanes.forEach(plane => {
41 |     const planePose = frame.getPose(plane.planeSpace, xrReferenceSpace);
42 |     const planeVertices = plane.polygon; // plane.polygon is an array of objects
43 |                                          // containing x,y,z coordinates
44 | 
45 |     // ... draw planeVertices relative to planePose ...
46 |   });
47 | 
48 |   frame.session.requestAnimationFrame(onXRFrame);
49 | }
50 | ```
51 | 
52 | ## Plane detection - more advanced use
53 | In order to keep track of which planes have been added / removed, it’s possible to store the XRPlane objects and compare them against the set received in the latest frame:
54 | 
55 | ```javascript
56 | const planes = new Map();
57 | 
58 | function onXRFrame(timestamp, frame) {
59 |   const detectedPlanes = frame.detectedPlanes;
60 | 
61 |   // First, let's check if any of the planes we know about is no longer tracked:
62 |   for (const [plane, timestamp] of planes) {
63 |     if(!detectedPlanes.has(plane)) {
64 |       // Handle removed plane - `plane` was present in previous frame,
65 |       // but is no longer tracked.
66 | 
67 |       // We know the plane no longer exists, no need to maintain it in the map:
68 |       planes.delete(plane);
69 |     }
70 |   }
71 | 
72 |   // Then, let's handle all the planes that are still tracked.
73 |   // This consists both of tracked planes that we have previously seen, and new planes.
74 |   // The planes that we've previously seen may have been updated.
75 |   detectedPlanes.forEach(plane => {
76 |     if (planes.has(plane)) {
77 |       if(plane.lastChangedTime != planes.get(plane)) {
78 |         // Handle previously seen plane that was updated in current frame.
79 |         // This means that one of the plane's properties is different than
80 |         // it used to be - most likely, the polygon has changed.
81 | 
82 |         // Update the lastChangedTime:
83 |         planes.set(plane, plane.lastChangedTime);
84 |       } else {
85 |         // Handle previously seen plane that was not updated in current frame.
86 |         // Depending on the application, this could be a no-op.
87 |         // Note that plane's pose relative to some other space MAY have changed,
88 |         // because a pose can be seen as a property derived from 2 entities (XRSpaces).
89 |       }
90 |     } else {
91 |       // Handle new plane.
92 | 
93 |       // Update the lastChangedTime:
94 |       planes.set(plane, plane.lastChangedTime);
95 |     }
96 | 
97 |     // Irrespective of whether the plane is new or old, updated or not, its pose
98 |     // may have changed:
99 |     const planePose = frame.getPose(plane.planeSpace, xrReferenceSpace);
100 |   });
101 | 
102 |   frame.session.requestAnimationFrame(onXRFrame);
103 | }
104 | ```
105 | 
106 | As shown above, the application can check whether a plane object was updated in the current frame by comparing `plane.lastChangedTime` with the `lastChangedTime` from the previous time the plane was accessed. Note that a plane is only treated as updated when some of its attributes have changed. This means that a plane whose `planeSpace` has a different pose relative to some other space will **not** be considered updated, as the pose is a derived property of a pair of spaces, not of the plane object itself.
107 | 
108 | ## Subsumed planes
109 | It is possible that as the understanding of the user’s environment becomes more refined, some planes will be merged into other planes. In the model above, this situation will translate into the removal of a subsumed plane & adjustment of the properties of the subsuming plane.
110 | 
111 | ## Key points
112 | Some of the important takeaways that might not be immediately apparent from the above code snippets are as follows:
113 | - The entire API surface is synchronous.
114 | - Plane attributes are only well-defined as long as the frame is `active`.
115 | - Plane objects that compare as triple-equal (`===`) represent the same plane.
116 | - If a plane was detected in frame N and is still being detected in frame N+1, it will be represented by exactly the same object in the `detectedPlanes` set, potentially with updated attributes.
117 | - If a plane was detected in frame N and is no longer being detected in frame N+1, it will not be present in the `detectedPlanes` set. Although an application might still contain references to its plane object, any access to its properties will result in an exception.
118 | - If the application needs to access plane data from previous frames, it has to copy the properties of the planes it’s interested in.
119 | 
120 | ## Synchronous hit-test
121 | Exposing planes to the application also allows the application to implement a custom, synchronous hit test against those planes. A potential downside of this approach is the lack of access to the same data that the underlying AR frameworks use to perform hit tests - this can result in lower-quality hit-test results when they are computed purely in JavaScript.
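To make this concrete, below is a minimal, non-normative sketch of such a JavaScript-side hit test. It assumes a ray expressed in `refSpace` coordinates; the helper names and the epsilon threshold are illustrative choices, not part of the API:

```javascript
// Apply a column-major 4x4 matrix (as returned by XRRigidTransform.matrix)
// to a point (w = 1) or to a direction (w = 0):
function transformPoint(m, p) {
  return { x: m[0]*p.x + m[4]*p.y + m[8]*p.z + m[12],
           y: m[1]*p.x + m[5]*p.y + m[9]*p.z + m[13],
           z: m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14] };
}
function transformDirection(m, d) {
  return { x: m[0]*d.x + m[4]*d.y + m[8]*d.z,
           y: m[1]*d.x + m[5]*d.y + m[9]*d.z,
           z: m[2]*d.x + m[6]*d.y + m[10]*d.z };
}

// Returns the hit point in plane-space coordinates, or null if there is no hit.
function hitTestPlane(frame, plane, refSpace, rayOrigin, rayDirection) {
  const pose = frame.getPose(plane.planeSpace, refSpace);
  if (!pose) return null;

  // Move the ray into plane space, where the plane is y = 0 with a +Y normal:
  const inv = pose.transform.inverse.matrix;
  const o = transformPoint(inv, rayOrigin);
  const d = transformDirection(inv, rayDirection);

  if (Math.abs(d.y) < 1e-6) return null;  // Ray is parallel to the plane.
  const t = -o.y / d.y;
  if (t < 0) return null;                 // Intersection is behind the ray.
  const hit = { x: o.x + t * d.x, y: 0, z: o.z + t * d.z };

  // The hit must lie inside the convex polygon: the 2D cross product must
  // have a consistent sign for every edge of the vertex loop.
  let sign = 0;
  const poly = plane.polygon;
  for (let i = 0; i < poly.length; ++i) {
    const a = poly[i], b = poly[(i + 1) % poly.length];
    const s = Math.sign((b.x - a.x) * (hit.z - a.z) - (b.z - a.z) * (hit.x - a.x));
    if (s !== 0 && sign !== 0 && s !== sign) return null;  // Outside the polygon.
    if (s !== 0) sign = s;
  }
  return hit;
}
```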
122 | 
123 | ## Current limitations
124 | During a session that has enabled plane detection, the information about planes gets refined over time. This poses challenges to developers, as they cannot assume that the plane’s polygon or pose won’t change. One implication of plane information changing is that objects positioned relative to a plane might require adjustments to their positions. One possible solution to this problem would be introducing anchors and integrating them with plane detection.
125 | 
126 | ## Appendix: Discussion on notifying about plane changes
127 | 
128 | See [Planes - informing about removal](planes-notifying-about-removal.md).
129 | 
130 | ## Appendix: proposed Web IDL
131 | 
132 | ```webidl
133 | 
134 | enum XRPlaneOrientation {
135 |   "horizontal",
136 |   "vertical"
137 | };
138 | 
139 | interface XRPlane {
140 |   readonly attribute XRSpace planeSpace;
141 | 
142 |   readonly attribute FrozenArray<DOMPointReadOnly> polygon;
143 |   readonly attribute XRPlaneOrientation? orientation;
144 |   readonly attribute DOMHighResTimeStamp lastChangedTime;
145 | };
146 | 
147 | interface XRPlaneSet {
148 |   readonly setlike<XRPlane>;
149 | };
150 | 
151 | partial interface XRFrame {
152 |   readonly attribute XRPlaneSet detectedPlanes;
153 | };
154 | ```
155 | 
--------------------------------------------------------------------------------
/plane-detection-security-privacy-questionnaire.md:
--------------------------------------------------------------------------------
 1 | # Security and Privacy Questionnaire
 2 | 
 3 | This document answers the [W3C TAG's Security and Privacy
 4 | Questionnaire](https://w3ctag.github.io/security-questionnaire/) for the
 5 | WebXR Plane Detection Module.
 6 | 
 7 | 01. What information might this feature expose to Web sites or other parties,
 8 |     and for what purposes is that exposure necessary?
 9 | 
10 | WebXR's plane detection feature exposes information about flat surfaces
11 | detected in users' environments. This information allows WebXR-powered apps to
12 | provide a more immersive experience to their users, for example by computing how
13 | virtual objects interact with the real world around the users.
14 | 
15 | 02. Do features in your specification expose the minimum amount of information
16 |     necessary to enable their intended uses?
17 | 
18 | Yes. The plane detection feature leverages the capabilities of the underlying XR
19 | systems to surface only the information about planes detected in the environment
20 | around the user. Notably, the camera image (which could be used to attempt to
21 | compute similar information in JavaScript) is not exposed to the application.
22 | 
23 | 03. How do the features in your specification deal with personal information,
24 |     personally-identifiable information (PII), or information derived from
25 |     them?
26 | 
27 | The specification does not directly expose personal information. The users'
28 | environment could be used to attempt to infer information about the users
29 | (for example, if planes representing a desk, monitor and floor are detected,
30 | they could potentially be used to approximate the users' height). The specification
31 | allows the user agents to implement the feature in a more privacy-preserving way,
32 | for example by reducing the quality / level of detail of the returned data.
33 | Quantization of plane poses is also possible, but is not mandated.
34 | 
35 | 04. How do the features in your specification deal with sensitive information?
36 | 
37 | The feature can only be enabled during XR session creation if the WebXR-powered
38 | application asks for it. The user agents have to ask the user for consent prior
39 | to enabling the feature on a newly created session - this mechanism is part of the
40 | core [WebXR](https://immersive-web.github.io/webxr/) specification.
41 | 
42 | 05. Do the features in your specification introduce new state for an origin
43 |     that persists across browsing sessions?
44 | 
45 | No.
46 | 
47 | 06. Do the features in your specification expose information about the
48 |     underlying platform to origins?
49 | 
50 | Not directly - if the feature is not enabled on a session, the application could try
51 | to infer whether the user rejected it, or whether the user attempted to create an XR session
52 | on a platform that does not support the feature. The core WebXR spec does not directly
53 | expose this information, but it could potentially be computed based on how quickly
54 | the session creation promise got resolved.
55 | 
56 | 07. Do features in this specification allow an origin access to sensors on a user’s
57 |     device?
58 | 
59 | Not directly. The underlying XR system will most likely leverage sensors and cameras
60 | in order to provide the plane detection capabilities.
61 | 
62 | 08. What data do the features in this specification expose to an origin? Please
63 |     also document what data is identical to data exposed by other features, in the
64 |     same or different contexts.
65 | 
66 | This specification exposes data about flat surfaces detected in users' environment.
67 | It consists of the pose (position and orientation) of each detected plane, and a convex
68 | polygon representing the approximate shape of the detected plane. Both the pose and the
69 | planes' polygons may evolve over time, in response to the underlying XR system's
70 | evolving knowledge about users' environment.
71 | 
72 | 09. Do features in this specification enable new script execution/loading
73 |     mechanisms?
74 | 
75 | No.
76 | 
77 | 10. Do features in this specification allow an origin to access other devices?
78 | 
79 | No.
80 | 
81 | 11. Do features in this specification allow an origin some measure of control over
82 |     a user agent's native UI?
83 | 
84 | No.
85 | 
86 | 12. What temporary identifiers do the features in this specification create or
87 |     expose to the web?
88 | 
89 | None directly. The planes detected in users' environment can potentially be used
90 | to compute some kind of rough description of the users' environment. This could
91 | potentially be used as a spatial identifier, which could also be used to identify
92 | the user in case the feature was used in a location that is normally accessible
93 | only to that user (e.g. at home).
94 | 
95 | 13. How does this specification distinguish between behavior in first-party and
96 |     third-party contexts?
97 | 
98 | It is an extension to WebXR, which is by default blocked for third-party contexts
99 | and can be controlled via a Feature Policy flag.
100 | 
101 | 14. How do the features in this specification work in the context of a browser’s
102 |     Private Browsing or Incognito mode?
103 | 
104 | The specification does not mandate a different behaviour.
105 | 
106 | 15. Does this specification have both "Security Considerations" and "Privacy
107 |     Considerations" sections?
108 | 
109 | [Yes](https://immersive-web.github.io/real-world-geometry/plane-detection.html#privacy-security).
110 | 
111 | 16.
Do features in your specification enable origins to downgrade default 112 | security protections? 113 | 114 | No. 115 | 116 | 17. What should this questionnaire have asked? 117 | 118 | N/A. 119 | -------------------------------------------------------------------------------- /plane-detection.bs: -------------------------------------------------------------------------------- 1 | 20 | 21 | 23 | 24 |
 25 | spec: WebXR Device API - Level 1; urlPrefix: https://immersive-web.github.io/webxr/#
 26 |     for: XRFrame;
 27 |         type: dfn; text: active; url: xrframe-active
 28 |         type: dfn; text: session; url: dom-xrframe-session
 29 |         type: dfn; text: time; url: xrframe-time
 30 |     for: XRSession;
 31 |         type: dfn; text: list of frame updates; url: xrsession-list-of-frame-updates
 32 |         type: dfn; text: mode; url: xrsession-mode
 33 |         type: dfn; text: XR device; url: xrsession-xr-device
 34 |     for: XRSpace;
 35 |         type: dfn; text: effective origin; url: xrspace-effective-origin
 36 |         type: dfn; text: native origin; url: xrspace-native-origin
 37 |         type: dfn; text: origin offset; url: xrspace-origin-offset
 38 |         type: dfn; text: session; url: xrspace-session
 39 |     type: dfn; text: capable of supporting; url: capable-of-supporting
 40 |     type: dfn; text: feature descriptor; url: feature-descriptor
 41 |     type: dfn; text: identity transform; url: identity-transform
 42 |     type: dfn; text: inline XR device; url: inline-xr-device
 43 |     type: dfn; text: quantization; url: quantization
 44 |     type: dfn; text: rounding; url: rounding
 45 |     type: dfn; text: XR device; url: xr-device
 46 | spec: WebXR Anchors Module; urlPrefix: https://immersive-web.github.io/anchors/#
 47 |     type: dfn; text: create new anchor object; url: create-new-anchor-object
 48 |     type: dfn; text: update anchors; url: update-anchors
 49 |     type: interface; text: XRAnchor; url: xr-anchor
 50 | 
51 | 52 |
 53 | {
 54 |   "webxr-anchors-module": {
 55 |     "authors": [
 56 |       "Piotr Bialecki"
 57 |     ],
 58 |     "href": "https://immersive-web.github.io/anchors/",
 59 |     "title": "WebXR Anchors Module",
 60 |     "status": "DR"
 61 |   }
 62 | }
 63 | 
64 | 65 | 131 | 132 | Introduction {#intro} 133 | ============ 134 | 135 |
136 | 137 |
138 | 
139 | Initialization {#anchor-feature-initialization}
140 | ==================
141 | 
142 | Feature descriptor {#anchor-feature-descriptor}
143 | ------------------
144 | 
145 | In order for applications to signal their interest in using plane detection during a session, the session must be requested with an appropriate [=feature descriptor=]. The string plane-detection is introduced by this module as a new valid feature descriptor for the plane detection feature.
146 | 
147 | A device is [=capable of supporting=] the plane-detection feature if the device's tracking system exposes a [=native plane detection=] capability. The [=inline XR device=] MUST NOT be treated as [=capable of supporting=] the plane-detection feature.
148 | 
149 | When a session is created with the plane-detection feature enabled, the [=update planes=] algorithm MUST be added to the [=list of frame updates=] of that session.
150 | 
151 | 
152 | The following code demonstrates how a session that requires plane detection could be requested: 153 | 154 |
155 | const session = await navigator.xr.requestSession("immersive-ar", {
156 |   requiredFeatures: ["plane-detection"]
157 | });
158 | 
159 | 160 |
161 | 
162 | Planes {#planes-section}
163 | ======
164 | 
165 | XRPlaneOrientation {#plane-orientation}
166 | ------------------
167 | 
168 | 
169 | enum XRPlaneOrientation {
170 |     "horizontal",
171 |     "vertical"
172 | };
173 | 
174 | 
175 | - A plane orientation of "horizontal" indicates that the plane is primarily oriented horizontally (according to the conventions of the underlying platform).
176 | - A plane orientation of "vertical" indicates that the plane is primarily oriented vertically (according to the conventions of the underlying platform).
177 | 
178 | XRPlane {#plane}
179 | -------
180 | 
181 | 
182 | interface XRPlane {
183 |     readonly attribute XRSpace planeSpace;
184 | 
185 |     readonly attribute FrozenArray<DOMPointReadOnly> polygon;
186 |     readonly attribute XRPlaneOrientation? orientation;
187 |     readonly attribute DOMString semanticLabel;
188 |     readonly attribute DOMHighResTimeStamp lastChangedTime;
189 | };
190 | 
191 | 
192 | An {{XRPlane}} represents a single, flat surface detected by the underlying XR system.
193 | 
194 | The {{XRPlane/planeSpace}} is an {{XRSpace}} that establishes the coordinate system of the plane. The [=XRSpace/native origin=] of the {{XRPlane/planeSpace}} tracks the plane's center. The underlying XR system defines the exact meaning of the plane center. The Y axis of the coordinate system defined by {{XRPlane/planeSpace}} MUST represent the plane's normal vector.
195 | 
196 | Each {{XRPlane}} has an associated native entity.
197 | 
198 | Each {{XRPlane}} has an associated frame.
199 | 
200 | The {{XRPlane/polygon}} is an array of vertices that describe the shape of the plane. They are returned in the form of a loop of points at the edges of the polygon, expressed in the coordinate system defined by {{XRPlane/planeSpace}}. The Y coordinate of each vertex MUST be 0.0.
201 | 
202 | 
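Note: Because the polygon is a convex loop of vertices with Y = 0, applications can triangulate it with a simple fan for rendering. A non-normative sketch (the helper names are illustrative):

```js
// Fan-triangulate a convex polygon loop: (0,1,2), (0,2,3), ...
function triangulateConvexPolygon(polygon) {
  const indices = [];
  for (let i = 1; i < polygon.length - 1; ++i) {
    indices.push(0, i, i + 1);
  }
  return indices;
}

// Flatten the polygon's vertices into a Float32Array usable as a vertex buffer:
function polygonToVertexBuffer(polygon) {
  const out = new Float32Array(polygon.length * 3);
  polygon.forEach((p, i) => out.set([p.x, p.y, p.z], i * 3));
  return out;
}
```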
203 | 
204 | The {{XRPlane/semanticLabel}} attribute is a string that describes the [=semantic label=] of the polygon. This string is empty if there is no semantic information. The {{XRSystem}} SHOULD populate this with the [=semantic label=] it has knowledge of.
205 | 
206 | A semantic label is an ASCII lowercase DOMString that describes the real-world name of the {{XRPlane}} as known by the {{XRSystem}}.
207 | The list of semantic labels is defined in the semantic label registry.
208 | 
210 | 
211 | The {{XRPlane/orientation}} describes the orientation of the plane, as classified by the underlying XR system. In case the orientation cannot be classified into {{XRPlaneOrientation/"horizontal"}} or {{XRPlaneOrientation/"vertical"}} by the underlying XR system, this attribute will be set to null.
212 | 
213 | The {{XRPlane/lastChangedTime}} is the last time any of the plane's attributes were changed.
214 | 
215 | Note: The pose of a plane is not considered a plane attribute, and therefore updates to the plane pose will not cause the {{XRPlane/lastChangedTime}} to change. This is because the plane pose is a property that is derived from two different entities - {{XRPlane/planeSpace}} and the {{XRSpace}} relative to which the pose is to be computed via the {{XRFrame/getPose()}} function.
216 | 
217 | Obtaining detected planes {#obtaining-planes}
218 | =========================
219 | 
220 | XRPlaneSet {#plane-set}
221 | ----------
222 | 
223 | 
224 | interface XRPlaneSet {
225 |     readonly setlike<XRPlane>;
226 | };
227 | 
228 | 
229 | An {{XRPlaneSet}} is a collection of {{XRPlane}}s. It is the primary mechanism of obtaining the collection of planes detected in an {{XRFrame}}.
230 | 
231 | 
232 | partial interface XRFrame {
233 |     readonly attribute XRPlaneSet detectedPlanes;
234 | };
235 | 
236 | 
237 | {{XRFrame}} is extended to contain the {{XRFrame/detectedPlanes}} attribute which contains all planes that are still tracked in the frame. The set is initially empty and will be populated by the [=update planes=] algorithm. If this attribute is accessed when the frame is not [=XRFrame/active=], the user agent MUST throw an {{InvalidStateError}}.
238 | 
240 | 
241 | 
242 | partial interface XRSession {
243 |     Promise<undefined> initiateRoomCapture();
244 | };
245 | 
246 | 
247 | {{XRSession}} is extended to contain the {{XRSession/initiateRoomCapture}} method which, if supported, will ask the [=XR Compositor=] to capture the current room layout. It is up to the [=XR Compositor=] whether this will replace or augment the [=XRSession/set of tracked planes=]. The user agent MAY also ignore this call, for instance if it doesn't support a manual room capture mode or if it determines that the room is already set up.
248 | The {{XRSession/initiateRoomCapture}} method MUST only be able to be called once per {{XRSession}}.
249 | 
251 | 
252 | {{XRSession}} is also extended to contain an associated set of tracked planes, which is initially empty. The elements of the set will be of {{XRPlane}} type.
253 | 
255 | In order to update planes for |frame|, the user agent MUST run the following steps:
256 |     1. Let |session| be |frame|'s [=XRFrame/session=].
257 |     1. Let |device| be |session|'s [=XRSession/XR device=].
258 |     1. Let |trackedPlanes| be the result of calling into |device|'s [=native plane detection=] capability to obtain tracked planes at |frame|'s [=XRFrame/time=].
259 |     1. For each |native plane| in |trackedPlanes|, run:
260 |         1. If desired, treat the |native plane| as if it were not present in |trackedPlanes| and continue to the next entry. See [[#privacy-security]] for criteria that could be used to determine whether an entry should be ignored in this way.
261 |         1. If |session|'s [=XRSession/set of tracked planes=] contains an object |plane| that [=corresponds to=] |native plane|, invoke the [=update plane object=] algorithm with |plane|, |native plane|, and |frame|, and continue to the next entry.
262 |         1. Let |plane| be the result of invoking the [=create plane object=] algorithm with |native plane| and |frame|.
263 |         1. Add |plane| to |session|'s [=XRSession/set of tracked planes=].
264 |     1. Remove each object in |session|'s [=XRSession/set of tracked planes=] that was neither created nor updated during the invocation of this algorithm.
265 |     1. Set |frame|'s {{XRFrame/detectedPlanes}} to [=XRSession/set of tracked planes=].
266 | 
267 | 268 |
269 | In order to create plane object from a [=native plane object=] |native plane| and {{XRFrame}} |frame|, the user agent MUST run the following steps:
270 |     1. Let |result| be a new instance of {{XRPlane}}.
271 |     1. Set |result|'s [=XRPlane/native entity=] to |native plane|.
272 |     1. Set |result|'s {{XRPlane/planeSpace}} to a new {{XRSpace}} object created with [=XRSpace/session=] set to |frame|'s {{XRFrame/session}} and [=XRSpace/native origin=] set to track |native plane|'s native origin.
273 |     1. Invoke the [=update plane object=] algorithm with |result|, |native plane|, and |frame|.
274 |     1. Return |result|.
275 | 
276 | A plane object, |result|, created in this way is said to correspond to the passed in native plane object |native plane|.
277 | 
278 | 279 |
280 | In order to update plane object |plane| from a [=native plane object=] |native plane| and {{XRFrame}} |frame|, the user agent MUST run the following steps: 281 | 1. Set |plane|'s [=XRPlane/frame=] to |frame|. 282 | 1. If |native plane| is classified by the underlying system as vertical, set |plane|'s {{XRPlane/orientation}} to {{XRPlaneOrientation/"vertical"}}. Otherwise, if |native plane| is classified by the underlying system as horizontal, set |plane|'s {{XRPlane/orientation}} to {{XRPlaneOrientation/"horizontal"}}. Otherwise, set |plane|'s {{XRPlane/orientation}} to null. 283 | 1. Set |plane|'s {{XRPlane/polygon}} to the new array of vertices representing |native plane|'s polygon, performing all necessary conversions to account for differences in native plane polygon representation. 284 | 1.
Set |plane|'s {{XRPlane/semanticLabel}} to a new string containing the [=semantic label=] of |native plane|.
285 | 1. If desired, reduce the level of detail of the |plane|'s {{XRPlane/polygon}} as described in [[#privacy-security]]. 286 | 1. Set |plane|'s {{XRPlane/lastChangedTime}} to [=XRFrame/time=]. 287 |
288 | 289 |
290 | 291 | The following example demonstrates how an application could obtain information about detected planes and act on it. The code that can be used to render a graphical representation of the planes is not shown. 292 | 293 |
294 | // `planes` will track all detected planes that the application is aware of,
295 | // and at what timestamp they were updated. Initially, this is an empty map.
296 | const planes = new Map();
297 | 
298 | function onXRFrame(timestamp, frame) {
299 |   const detectedPlanes = frame.detectedPlanes;
300 | 
301 |   // First, let's check if any of the planes we knew about is no longer tracked:
302 |   for (const [plane, timestamp] of planes) {
303 |     if(!detectedPlanes.has(plane)) {
304 |       // Handle removed plane - `plane` was present in previous frame,
305 |       // but is no longer tracked.
306 | 
307 |       // We know the plane no longer exists, remove it from the map:
308 |       planes.delete(plane);
309 |     }
310 |   }
311 | 
312 |   // Then, let's handle all the planes that are still tracked.
313 |   // This consists both of tracked planes that we have previously seen (may have
314 |   // been updated), and new planes.
315 |   detectedPlanes.forEach(plane => {
316 |     if (planes.has(plane)) {
317 |       // Handle previously-seen plane:
318 | 
319 |       if(plane.lastChangedTime > planes.get(plane)) {
320 |         // Handle previously seen plane that was updated.
321 |         // It means that one of the plane's properties is different than
322 |         // it used to be - most likely, the polygon has changed.
323 | 
324 |         ... // Render / prepare the plane for rendering, etc.
325 | 
326 |         // Update the time when we have updated the plane:
327 |         planes.set(plane, plane.lastChangedTime);
328 |       } else {
329 |         // Handle previously seen plane that was not updated in current frame.
330 |         // Note that plane's pose relative to some other space MAY have changed.
331 |       }
332 |     } else {
333 |       // Handle new plane.
334 | 
335 |       // Set the time when we have updated the plane:
336 |       planes.set(plane, plane.lastChangedTime);
337 |     }
338 | 
339 |     // Irrespective of whether the plane was previously seen or not,
340 |     // & updated or not, its pose MAY have changed:
341 |     const planePose = frame.getPose(plane.planeSpace, xrReferenceSpace);
342 |   });
343 | 
344 |   frame.session.requestAnimationFrame(onXRFrame);
345 | }
346 | 
347 | 348 |
349 | 350 | Native device concepts {#native-device-concepts} 351 | ====================== 352 | 353 | Native plane detection {#native-plane-detection-section} 354 | ---------------------- 355 | 356 |
357 | 
358 | The plane detection API provides information about flat surfaces detected in users' environment. It is assumed in this specification that user agents can rely on native plane detection capabilities provided by the underlying platform for their implementation of plane-detection features. Specifically, the underlying XR device should provide a way to query all planes that are tracked at a time that corresponds to the [=XRFrame/time=] of a specific {{XRFrame}}.
359 | 
360 | Moreover, it is assumed that the tracked planes, known as native plane objects, maintain their identity across frames - that is, given a plane object P returned by the underlying system at time t0, and a plane object Q returned by the underlying system at time t1, it is possible for the user agent to query the underlying system about whether P and Q correspond to the same logical plane object. The underlying system is also expected to provide a [=native origin=] that can be used to query a plane's pose at time t, although it is not guaranteed that the plane pose will always be known (for example, for planes that are still tracked but not localizable at a given time). In addition, the native plane object should expose a polygon describing the approximate shape of the detected plane.
361 | 
362 | In addition, the underlying system should recognize native planes as native entities for the purposes of {{XRAnchor}} creation. For more information, see the [[webxr-anchors-module#native-anchor]] section.
363 | 
365 | 366 | Privacy & Security Considerations {#privacy-security} 367 | ================================= 368 | 369 |
370 | 
371 | The plane detection API exposes information about users' physical environment. The exposed plane information (such as a plane's polygon) may be limited if the user agent so chooses. Some of the ways in which the user agent can reduce the exposed information are: decreasing the level of detail of the plane's polygon in the [=update plane object=] algorithm (for example by decreasing the number of vertices, or by [=rounding=] / [=quantization|quantizing=] the coordinates of the vertices), or removing the plane altogether by behaving as if the plane object was not present in the trackedPlanes collection in the [=update planes=] algorithm (this could be done for example if the detected plane is deemed too small / too detailed to be surfaced and the mechanisms to reduce details exposed on planes are not implemented by the user agent). The poses of the planes (obtainable from {{XRPlane/planeSpace}}) could also be [=quantization|quantized=].
372 | 
373 | Since concepts from the plane detection API can be used in methods exposed by the [[webxr-anchors-module]] specification, some of the privacy & security considerations that are relevant to the WebXR Anchors Module also apply here. For details, see the [[webxr-anchors-module#privacy-security]] section.
374 | 
375 | Due to how the plane detection API extends the WebXR Device API, the section [[webxr#security]] is also applicable to the features exposed by the WebXR Plane Detection Module.
376 | 
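Note: As a non-normative illustration of the polygon mitigations above, a user agent could quantize and deduplicate vertices along these lines. The 5 cm grid size is an arbitrary example value, not a normative recommendation:

```js
const GRID = 0.05;  // Example grid size in meters; an implementation choice.

function quantizePolygon(vertices) {
  const snap = (v) => Math.round(v / GRID) * GRID;
  const result = [];
  for (const v of vertices) {
    const q = { x: snap(v.x), y: 0, z: snap(v.z) };  // Y is always 0 in plane space.
    const prev = result[result.length - 1];
    // Snapping can collapse neighboring vertices; drop consecutive duplicates.
    if (!prev || prev.x !== q.x || prev.z !== q.z) result.push(q);
  }
  return result;
}
```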
378 | 
379 | Acknowledgements {#ack}
380 | ================
381 | 
382 | The following individuals have contributed to the design of the WebXR Plane Detection specification:
383 | 
--------------------------------------------------------------------------------
/planes-notifying-about-removal.md:
--------------------------------------------------------------------------------
 1 | # Planes - informing about removal
 2 | 
 3 | ## Introduction
 4 | This document presents & discusses ways of notifying web applications about loss of tracking for previously detected planes, with the end goal of deciding on the API shape that should be implemented. Plane detection is described in more detail in the [explainer](https://github.com/immersive-web/real-world-geometry/blob/master/plane-detection-explainer.md). As shown in the [“a bit more advanced use”](https://github.com/immersive-web/real-world-geometry/blob/master/plane-detection-explainer.md#plane-detection---a-bit-more-advanced-use) section, currently there is no built-in way for the application to know which of the planes that have previously been detected are no longer being tracked. The applications that are interested in this information (for example for cleanup purposes) are forced to maintain their own set of planes detected in the previous frame and compare its contents with the planes detected in the current frame - old planes that have not been detected in the current frame are considered removed.
 5 | 
 6 | There are multiple possible ways of addressing this problem:
 7 | 1. Event-based approach.
 8 | 2. Promise-based approach.
 9 | 3. Attribute-based approach.
10 | 
11 | If needed, all of the approaches described below can be extended to inform about plane additions and modifications. The information about plane modifications is potentially more important, since it cannot be inferred based on the contents of the `detectedPlanes` field.
12 | 
13 | ## Event-based approach
14 | The application could be informed of the removal / tracking loss of a plane object by registering an event handler for the `onremoved` event (the name is not final). In the current design of the plane detection feature, the event handler could reasonably be exposed on either of the two interfaces:
15 | 
16 | - On XRPlane:
17 | 
18 | ```webidl
19 | partial interface XRPlane {
20 |   // Invoked when a plane is no longer being tracked.
21 |   attribute EventHandler onremoved;
22 | }
23 | ```
24 | 
25 | Since the registration for this event must happen for each plane in whose removal the app is interested, the event handler could simply create a closure that captures the plane object:
26 | 
27 | ```javascript
28 | let plane = ...; // Plane obtained from XRFrame's XRWorldInformation.
29 | plane.addEventListener('removed', (event) => {
30 |   console.log("Plane removed.", plane);
31 | });
32 | ```
33 | 
34 | - On XRSession:
35 | 
36 | ```webidl
37 | partial interface XRSession {
38 |   // Invoked when a plane is no longer being tracked.
39 |   attribute EventHandler onplaneremoved;
40 | }
41 | ```
42 | 
43 | In this case, since the event handler registration happens for the entire collection of planes, the plane object that was removed will have to be passed to the registered event listener through the event (see the sketch below). Alternatively, the event handler can be renamed to `onplanesremoved` and receive a batch of planes that were removed in the current frame.
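A usage sketch for the session-level variant follows; note that this document does not define the event interface, so the `plane` field on the event is hypothetical:

```javascript
// Hypothetical: assumes the removal event carries the removed XRPlane
// in a `plane` field (the event interface is not defined in this document).
xrSession.addEventListener('planeremoved', (event) => {
  console.log("Plane removed.", event.plane);
});
```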
44 | 
45 | ### Discussion
46 | With the event-based approach, the user agent needs to guarantee that all plane removal events are invoked prior to the invocation of the request animation frame callback for the frame in which the planes are no longer present. This means that once the user agent has all the information needed to invoke the application’s request animation frame callback, it must first notify the application about plane removals (see [Timings](#timings), duration number 2.). Plane objects that are being removed will have their properties inaccessible or undefined (with the exception of application-added properties) during the execution of event listener callbacks. If the application wants to act on the data during the request animation frame callback, it will have to use a custom mechanism of doing so (e.g. by adding removed planes to a list that’s also accessible from the request animation frame callback).
47 | 
48 | ## Promise-based approach
49 | This approach is similar to the event-based approach. The difference is that the application could be notified about plane removal by attaching a continuation to the promise returned by the XRPlane object:
50 | 
51 | ```webidl
52 | partial interface XRPlane {
53 |   // Resolved when a plane is no longer being tracked.
54 |   attribute Promise<undefined> removed;
55 | }
56 | ```
57 | 
58 | The usage of the promise is as follows:
59 | 
60 | ```javascript
61 | let plane = ...; // Plane obtained from XRFrame's XRWorldInformation.
62 | plane.removed.then(() => {
63 |   console.log("Plane removed.", plane);
64 | });
65 | ```
66 | 
67 | The application will likely attach a continuation to the promise every time a new plane gets detected.
68 | 
69 | ### Discussion
70 | Similarly to the event-based approach, the user agent must guarantee that the promise gets resolved prior to the request animation frame callback (see [Timings](#timings), duration number 2.). The plane attributes will not be accessible from the continuation. If the application wants to act on the data during the request animation frame callback, it will have to use a custom mechanism of doing so (e.g. by adding removed planes to a list that’s also accessible from the request animation frame callback).
71 | 
72 | It’s worth noting that unlike in the event-based approach, adding a promise to the `XRSession` interface is *not* proposed here. Having a promise on `XRSession` would be more problematic, as the application would have to keep re-attaching a continuation to the promise every time the previous promise got resolved - the User Agent would have to guarantee that just prior to resolving the previous promise, `XRSession.planesremoved` will return a different promise object than the one about to be resolved.
73 | 
74 | ## Attribute-based approach - difference lists
75 | In this approach, the `XRSession` interface will be extended with attributes that convey information about the planes that used to be present in the previous frame but are no longer present in the current frame.
76 | 
77 | ```webidl
78 | interface XRPlaneSet {
79 |   readonly setlike<XRPlane>;
80 | }
81 | 
82 | partial interface XRSession {
83 |   // (existing attribute) Set with planes detected in current frame.
84 |   readonly attribute XRPlaneSet? detectedPlanes;
85 |   // (new attribute) Set with planes that were detected in previous frame
86 |   // but are no longer detected in current frame.
87 |   readonly attribute XRPlaneSet? removedPlanes;
88 |   // (new attribute) Set with planes that were not detected before but
89 |   // have been detected in the current frame.
90 |   // This is a subset of `detectedPlanes`.
91 |   readonly attribute XRPlaneSet? addedPlanes;
92 |   // (new attribute) Set with planes that were detected before and have
93 |   // been modified in the current frame.
94 |   // This is a subset of `detectedPlanes`.
95 |   readonly attribute XRPlaneSet? modifiedPlanes;
96 | }
97 | ```
98 | 
99 | This approach is mentioned in github issue [#4](https://github.com/immersive-web/real-world-geometry/issues/4) as “difference list”.
100 | 
101 | ## Attribute-based approach - lastChangedTime attribute
102 | 
103 | In this approach, the `XRPlane` interface is extended with `lastChangedTime`, which holds the timestamp of the `XRFrame` in which the plane attributes were last modified. The applications could then inspect the attribute to determine whether they need to update some of their state.
104 | 
105 | ```webidl
106 | 
107 | partial interface XRPlane {
108 |   readonly attribute DOMHighResTimeStamp lastChangedTime;
109 | };
110 | 
111 | ```
112 | 
113 | This approach was first proposed in github issue [#4](https://github.com/immersive-web/real-world-geometry/issues/4#issuecomment-485595506).
114 | 
115 | ### Discussion
116 | The attribute-based approach is the simplest for the web application to deal with, and is consistent with the overall API shape, as it delivers all frame-related data to the application in the request animation frame callback & input source events through the `XRFrame` instance. It is sufficient to iterate over the list of planes at the beginning of the request animation frame callback and perform any additions / modifications / cleanups necessary due to the planes being newly detected / modified / removed. The attributes of planes present in `removedPlanes` will be accessible, but should not be relied upon as they'd represent stale state.
117 | 
118 | Another attribute-based approach, which in addition allows us to polyfill the other ones, is the addition of the `XRPlane.lastChangedTime` attribute. This approach would still expose all the necessary information to the apps, and allows us to defer the decision on what the final API shape should look like until developer feedback is gathered. One drawback is the API ergonomics, but in this particular case it could motivate its users to be more vocal about the desired API shape that we could then adopt.
119 | 
120 | ## Timings
121 | The above approaches make guarantees to app developers about when the event handlers will fire / when the promises will resolve or get rejected, and when the plane attributes can be queried by the application. The image below serves to clarify the possible timings.
122 | 
123 | ![image with possible durations for callbacks](https://github.com/immersive-web/real-world-geometry/raw/master/img/timings-v3.jpg)
124 | 
125 | Frame N is the frame where a hypothetical plane still exists. Frame N+1 is the frame in which that plane is no longer present.
126 | 
127 | The vertical line represents the first moment where all data required to invoke the application’s request animation frame callback is available to the user agent. The exact moment is implementation-dependent and might not be a single well-defined point in time - for example, there might exist a user-agent implementation where plane-related information is known earlier than the viewer’s pose, AR camera image, etc.
128 | 
129 | Description of the durations presented above:
130 | 1. Duration after the request animation frame callback but prior to all data related to frame N+1 being received by the User Agent from the AR system.
131 | 2.
Duration after the request animation frame callback and after all data related to frame N+1 has been received by the User Agent from the AR system.
132 | 3. Duration after the request animation frame callback for frame N but prior to the request animation frame callback for frame N+1.
133 | 4. Duration of the request animation frame callback for frame N.
134 | 5. Duration of the request animation frame callback for frame N+1.
135 | 
136 | ## Links
137 | - https://github.com/immersive-web/real-world-geometry/blob/master/plane-detection-explainer.md
138 | - https://github.com/immersive-web/real-world-geometry/blob/master/plane-detection-explainer.md#plane-detection---a-bit-more-advanced-use
139 | - https://github.com/immersive-web/real-world-geometry/issues/4

--------------------------------------------------------------------------------
/w3c.json:
--------------------------------------------------------------------------------
1 | {
2 |   "group": [87846]
3 |   , "contacts": ["dontcallmedom","himorin"]
4 |   , "repo-type": "cg-report"
5 | }

--------------------------------------------------------------------------------
/webxrmeshing-1.bs:
--------------------------------------------------------------------------------
 1 | 
17 | 
18 | 
22 | 
23 | 
 24 | 
25 | 26 | 27 | 28 | 29 | 83 | 84 | Introduction {#intro} 85 | ============ 86 | 87 |
88 | 
89 | Meshing is a technique that uses a device's sensors to build a 3-dimensional representation of the world.
90 | 
91 | The mesh consists of a large collection of geometry of medium complexity that changes slowly over time (aka the world mesh) and a very small collection of complex, quickly changing content that represents the geometry close to the user (aka the near mesh).
92 | 
93 | A user agent can have support for neither, one, or both types of geometry, and the website author can request one or both types of geometry data. Requesting the geometry data is an expensive process, so an author should only request it if they will do something with the information.
94 | 
95 | Typically, mesh data is not used to determine things like hand gestures or the recognition of real world objects. (However, a UA could choose to infer this through postprocessing.)
96 | 
97 | The most common use cases for the world mesh are:
98 | 1. physics: interaction of virtual objects with the real world
99 | 1. occlusion: blocking part or the whole of a virtual object with a real world one
100 | 
101 | The most common use cases for the near mesh are:
102 | 1. a visual representation of objects near the viewer.
103 | 1. occlusion
104 | 
105 | 
106 | 107 | Terminology {#terminology} 108 | ----------- 109 | 110 | Application flow {#applicationflow} 111 | ---------------- 112 | 113 |
114 | 
115 | Most applications using the WebXR Meshing API will follow a similar usage pattern:
116 | 
117 | * During the creation of a WebXR session, pass an XRFeatureInit object with parameters for the world and near mesh.
118 | * For each XRFrame, get the requested mesh data and apply it to the scene.
119 | 
120 | 
121 | 122 | Initialization {#initialization} 123 | ============== 124 | 125 | XRMeshQuality {#xr--mesh-quality} 126 | ------------- 127 | 128 |
129 | enum XRMeshQuality {
130 |    "low",
131 |    "medium",
132 |    "high"
133 | };
134 | 
135 | 136 | {{XRMeshQuality}} defines the quality of the mesh. A higher quality means that the mesh will be finer but also more resource intensive. It is up to the UA to define the quality level. 137 | 138 | XRWorldMeshFeature {#xr-world-mesh-feature} 139 | ------------------ 140 | 141 | The "worldmesh" feature is used to request world meshing. 142 | 143 |
144 | dictionary XRWorldMeshFeature: XRFeatureInit {
145 |     XRMeshQuality quality = "medium";
146 |     double width = 10.0;
147 |     double height = 10.0;
148 |     double breadth = 10.0;
149 | };
150 | 
151 | 
152 | The width, height and breadth attributes define, in meters, the width, height and breadth of the area around the observer that should be meshed.
153 | The quality attribute defines the UA-dependent quality of the mesh.
154 | 
155 | 
156 | The following code attempts to create an {{immersive-ar}} {{XRSession}} with world meshing. 157 | 158 |
159 | let xrSession;
160 | 
161 | navigator.xr.requestSession("immersive-ar", {
162 |                             requiredFeatures: [{
163 |                                 name: "worldmesh",
164 |                                 quality: "medium"}]}).then((session) => {
165 |     xrSession = session;
166 | });
167 | 
168 |
169 | 
170 | ISSUE: should the world mesh follow the observer, or should it stay relative to the original position, or should it be configurable?
171 | 
172 | ISSUE: should the world mesh have an XRSpace?
173 | 
174 | 
175 | XRNearMeshFeature {#xr-near-mesh-feature}
176 | -----------------
177 | 
178 | The "nearmesh" feature is used to request meshing near the observer. The scanned area for the near mesh is UA-dependent, but it MUST NOT overlap with the world mesh.
179 | 
180 | 
181 | dictionary XRNearMeshFeature: XRFeatureInit {
182 |     XRMeshQuality quality = "medium";
183 | };
184 | 
185 | 
186 | The quality attribute defines the UA-dependent quality of the mesh.
187 | 
188 | 
189 | The following code attempts to create an {{immersive-ar}} {{XRSession}} with near meshing. 190 | 191 |
192 | let xrSession;
193 | 
194 | navigator.xr.requestSession("immersive-ar", {
195 |                             requiredFeatures: [{
196 |                                 name: "nearmesh",
197 |                                 quality: "low"}]}).then((session) => {
198 |     xrSession = session;
199 | });
200 | 
201 |
202 | 203 | ISSUE: should the near mesh have an XRSpace? 204 | 205 | Frame Loop {#frame} 206 | ========== 207 | 208 | XRMesh structures {#xrframe-structures} 209 | ----------------- 210 | 211 |
212 | dictionary XRMeshBlock {
213 |     required Float32Array vertices;
214 |     required Uint16Array indices;
215 |     Float32Array normals;
216 | };
217 | 
218 | 
219 | An {{XRMeshBlock}} contains the geometry data of a single mesh instance.
220 | 
221 | {{XRMeshBlock/vertices}} contains a buffer with points in 3D space. Each point consists of 3 floats.
222 | 
223 | Each value in {{XRMeshBlock/indices}} points to a point inside {{XRMeshBlock/vertices}} and defines one corner of a triangle. The set of triangles creates a polygon mesh.
224 | NOTE: the offset of each point is found by taking the index value and multiplying it by 3.
225 | 
226 | An {{XRMeshBlock}} can contain an optional {{XRMeshBlock/normals}} buffer which defines the normal of each vertex. The size of {{XRMeshBlock/normals}} must be the same as {{XRMeshBlock/vertices}}.
227 | 
228 | Issue: is 'normals' needed? If so, should it be requested during initialisation?
229 | 
230 | 
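Note: As a non-normative illustration of the layout described above, the following sketch walks the triangles of a block (the helper name is illustrative):

```js
// Invoke `callback` with the three corner positions of every triangle
// described by an XRMeshBlock. Each index selects a point, and each
// point occupies 3 consecutive floats in `vertices`.
function forEachTriangle(block, callback) {
  const { vertices, indices } = block;
  for (let i = 0; i < indices.length; i += 3) {
    const [a, b, c] = [indices[i], indices[i + 1], indices[i + 2]]
      .map((idx) => vertices.subarray(idx * 3, idx * 3 + 3));
    callback(a, b, c);  // a, b, c are Float32Array views of length 3.
  }
}
```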
231 | interface XRNearMesh {
232 |     readonly setlike<XRMeshBlock>;
233 | };
234 | 
235 | interface XRWorldMesh {
236 |     readonly maplike<DOMString, XRMeshBlock>;
237 | };
238 | 
239 | dictionary XRMetadata {
240 |     XRWorldMesh worldMesh;
241 |     XRNearMesh nearMesh;
242 | };
243 | 
244 | 
245 | The worldMesh attribute contains updates to the world mesh. If any key in the dictionary was previously provided in an [=XRWorldMesh=], the new value must replace the previous [=XRMeshBlock=]. If the value of a new world XRMeshBlock contains no vertices, the existing XRMeshBlock must be deleted.
246 | 
247 | The nearMesh attribute contains a new near mesh which will replace the previous near mesh (if any). If nearMesh contains no new XRMeshBlock object, there is no near mesh.
248 | 
249 | 
250 | XRFrame {#xrframe-interface}
251 | -------
252 | 
253 | 
254 | partial interface XRFrame {
255 |     readonly attribute XRMetadata metaData;
256 | };
257 | 
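Note: A non-normative sketch of consuming these per-frame updates, following the replace / delete semantics described above (the bookkeeping structure and names are illustrative):

```js
// Persistent world mesh state, keyed by the DOMString keys of XRWorldMesh.
const worldMeshBlocks = new Map();

function onXRFrame(timestamp, frame) {
  const { worldMesh, nearMesh } = frame.metaData;

  if (worldMesh) {
    worldMesh.forEach((block, id) => {
      if (block.vertices.length === 0) {
        worldMeshBlocks.delete(id);      // An empty block deletes the entry.
      } else {
        worldMeshBlocks.set(id, block);  // New block, or replacement of an old one.
      }
    });
  }

  if (nearMesh) {
    // The near mesh replaces the previous near mesh wholesale:
    // ... rebuild near-mesh geometry from the XRMeshBlocks in nearMesh ...
  }

  frame.session.requestAnimationFrame(onXRFrame);
}
```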
258 | 
259 | Each XRFrame contains a metaData attribute with new or updated world or near mesh data.
260 | 
261 | Issue: should the mesh data persist per browser session or per xr session?
262 | 
263 | Security and Privacy Considerations {#security}
264 | =============================================
265 | 
266 | The WebXR Meshing API is a powerful feature that carries significant privacy risks.
267 | A UA MUST ask permission from the user during session creation before meshing data is returned to the page.
268 | 
269 | An 'inline' session MUST NOT have access to mesh data.
270 | 
271 | Additionally, mesh data MUST be constructed from the geometry of the real world. It MUST NOT reveal writing, colors, pictures or other visual content.
272 | 
273 | ISSUE: clarify this section
274 | 
275 | 
276 | 
277 | 
--------------------------------------------------------------------------------