├── CODE_OF_CONDUCT.md ├── README.md ├── WebXR API.md └── design docs ├── Coordinate Systems.md ├── From WebVR 2.0 to WebXR 2.1.md ├── WebVR Changes to Allow XR.md ├── persona ├── Erin Sims.md ├── Frank Moreno.md ├── Jan Morton.md ├── Liz Burks.md ├── Shane Riley.md └── Template.md └── scenario ├── City Walk.md ├── Graffiti.md ├── Morning Time.md ├── TV Time.md └── What Is This Bug.md /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Community Participation Guidelines 2 | 3 | This repository is governed by Mozilla's code of conduct and etiquette guidelines. 4 | For more details, please read the 5 | [Mozilla Community Participation Guidelines](https://www.mozilla.org/about/governance/policies/participation/). 6 | 7 | ## How to Report 8 | For more information on how to report violations of the Community Participation Guidelines, please read our '[How to Report](https://www.mozilla.org/about/governance/policies/participation/reporting/)' page. 9 | 10 | 16 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # (obsolete) webxr-api 2 | This repository contains a draft proposal and starting point for discussing WebXR that we created in the fall of 2017, to explore what it might mean to expand WebVR to include AR/MR capabilities. 3 | 4 | The WebVR community has shifted WebVR in this direction. The group is now called the [Immersive Web Community Group](https://github.com/immersive-web/) and the WebVR specification has now become the [WebXR Device API](https://github.com/immersive-web/webxr). 5 | 6 | We will not be updating this site any longer, although we will continue to experiment with the [webxr-polyfill](https://github.com/mozilla/webxr-polyfill) that we created alongside this API specification, until there is a complete WebXR polyfill.
At that point we expect to shift our experiments to the new polyfill. 7 | 8 | ## (old README, for historical purposes) 9 | 10 | In order to make progress on defining WebXR, we are creating a proposal for this capability. The API is intended to build on the concepts already included in the native WebVR implementation, or the WebVR polyfill, but extend them with AR capabilities appropriate for the underlying platform. 11 | 12 | The initial interface draft is in [WebXR API.md](https://github.com/mozilla/webxr-api/blob/master/WebXR%20API.md). 13 | 14 | A polyfill and example code using this draft WebXR API is available in the [webxr-polyfill repository](https://github.com/mozilla/webxr-polyfill). 15 | 16 | There is also a [primer on using the WebXR APIs](https://github.com/mozilla/webxr-polyfill/blob/master/CODING.md). 17 | 18 | We maintain a [list of changes we made to the WebVR 2.0 draft to create the WebXR draft](https://github.com/mozilla/webxr-api/blob/master/design%20docs/From%20WebVR%202.0%20to%20WebXR%202.1.md). 19 | 20 | Some of the concepts we believe are important to have in WebXR include: 21 | 22 | - The ability to control the rendering of reality _inside_ the browser, as this is essential for enabling user privacy (e.g. controlling camera and location data), easy cross-platform applications, and performance. 23 | 24 | - Making access to video frames and other "world knowledge" up to the user agent, so it may require permission from the user for access to these resources. 25 | 26 | - Supporting the potential for multiple simultaneous AR pages, where each page knows that it is rendering on top of reality and whether it has focus. Supporting this lines up with the ability to render reality inside the browser, since each application would not be responsible for rendering the view of reality, so their content could be composited. 27 | 28 | - Supporting some form of the idea of “custom, user-defined” representations of reality like fully virtual realities.
The critical feature is that the "reality" code can “filter” the view pose that is passed back into the rAF callback, both in the same page and in _other_ pages (if there is multi-page support). 29 | 30 | - Some ability to do high-performance, synchronous computer vision in a mix of native code and JavaScript. One approach is to have a synchronous vision worker that is executed before the rAF callback happens, but there are other approaches. 31 | -------------------------------------------------------------------------------- /WebXR API.md: -------------------------------------------------------------------------------- 1 | # WebXR draft (do not implement) 2 | 3 | This document is based on the [Editor's draft of WebVR](https://immersive-web.github.io/webxr/spec/latest/), modified to support both VR and AR. 4 | 5 | There is a [webxr-polyfill repo](https://github.com/mozilla/webxr-polyfill) with [example code](https://github.com/mozilla/webxr-polyfill/tree/master/examples) and a [primer on using the APIs](https://github.com/mozilla/webxr-polyfill/blob/master/CODING.md). 6 | 7 | For easy comparison, we maintain a [list of changes from WebVR 2.0 to WebXR](https://github.com/mozilla/webxr-api/blob/master/design%20docs/From%20WebVR%202.0%20to%20WebXR%202.1.md). 8 | 9 | The major concepts are: 10 | 11 | *XRDisplay*: a particular device and method for rendering XR layers (e.g. Daydream, Vive, Rift, Hololens, GearVR, Cardboard, or magic window) 12 | 13 | *Reality*: the rearmost information shown in a display (e.g. the real world in a passthrough display, a virtual reality in an HMD, a camera view in a magic window). It is the view of the world presented to the user that will be augmented. 14 | 15 | *XRSession*: an interface to a display for rendering onto layers and requesting changes to the current Reality 16 | 17 | *XRLayer*: Each XRSession has an XRLayer that exposes the particular context (e.g. a WebGL context) for rendering.
18 | 19 | *XRPresentationFrame*: Information needed to render a single graphics frame into a layer, including pose information, as well as sensor data like point clouds and anchors. 20 | 21 | The typical application will request an XRSession from an XRDisplay, request a change to the Reality if necessary, then repeatedly request an XRPresentationFrame with which to render into the XRSession's XRLayer. 22 | 23 | Applications that require a dedicated virtual reality can request one from the XRSession and then replace the session's default Reality. 24 | 25 | The UA will be in control of which Reality is active, the render order of the session layers, and which session will receive the user's input events. 26 | 27 | _"VR" in names has been changed to "XR" to indicate that the interfaces are used for both VR and AR._ 28 | 29 | ## XR 30 | 31 | interface XR { 32 | Promise<sequence<XRDisplay>> getDisplays(); 33 | 34 | attribute EventHandler ondisplayconnect; 35 | attribute EventHandler ondisplaydisconnect; 36 | }; 37 | 38 | ## XRDisplay 39 | 40 | interface XRDisplay : EventTarget { 41 | readonly attribute DOMString displayName; 42 | readonly attribute boolean isExternal; 43 | 44 | Promise<boolean> supportsSession(XRSessionCreateOptions parameters); 45 | Promise<XRSession> requestSession(XRSessionCreateOptions parameters); 46 | 47 | attribute EventHandler ondeactivate; 48 | }; 49 | 50 | Each XRDisplay represents a method of using a specific type of hardware to render AR or VR realities and layers. 51 | 52 | _The VRDevice interface was renamed XRDisplay to denote that it is specifically for graphical display types and not other types of devices._ 53 | 54 | A Pixel XL could expose several displays: a flat display, a magic window display, a Cardboard display, and a Daydream display. 55 | 56 | A PC with an attached HMD could expose a flat display and the HMD. 57 | 58 | A PC with no attached HMD could expose a single flat display. 59 | 60 | A Hololens could expose a single passthrough display.
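The typical application flow described above — get a display, check and request a session, then repeatedly request frames to render into the session's layer — might look like the following sketch. Note the hedges: this draft does not specify how the XR object is exposed to script, so the stub display below stands in for a real XRDisplay, and the session object is reduced to the few members the loop touches.

```javascript
// A stand-in for a real XRDisplay, so the flow can run anywhere.
// supportsSession/requestSession mirror the draft signatures; requestFrame
// schedules the callback asynchronously, like requestAnimationFrame would.
const stubDisplay = {
  displayName: 'Stub Magic Window',
  supportsSession(options) { return Promise.resolve(true); },
  requestSession(options) {
    return Promise.resolve({
      display: this,
      baseLayer: null, // a real app would attach an XRWebGLLayer here
      requestFrame(callback) {
        setImmediate(() => callback({ views: [] })); // minimal XRPresentationFrame
        return 1; // frame handle
      }
    });
  }
};

let framesRendered = 0;

async function startApp(display) {
  const options = { type: 'augmentation', exclusive: false };
  if (!(await display.supportsSession(options))) {
    throw new Error('display cannot create an augmentation session');
  }
  const session = await display.requestSession(options);
  session.requestFrame(function handleFrame(frame) {
    // render frame.views into session.baseLayer here
    framesRendered += 1;
    if (framesRendered < 3) session.requestFrame(handleFrame); // keep the loop going
  });
}
```

A real application would loop indefinitely rather than stopping after three frames; the cap is only there so the sketch terminates.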
61 | 62 | - How do we support calibration? 63 | - How do we support orientation reset? 64 | - What should we do about area description files? 65 | 66 | ## XRSession 67 | 68 | interface XRSession : EventTarget { 69 | readonly attribute XRDisplay display; 70 | 71 | readonly attribute boolean exclusive; 72 | readonly attribute XRPresentationContext outputContext; 73 | readonly attribute XRSessionType type; 74 | 75 | readonly attribute FrozenArray<Reality> realities; // All realities available to this session 76 | readonly attribute Reality reality; // For augmentation sessions, this defaults to the most recently used Reality. For reality sessions, this defaults to a new virtual Reality. 77 | 78 | attribute XRLayer baseLayer; 79 | attribute double depthNear; 80 | attribute double depthFar; 81 | 82 | long requestFrame(XRFrameRequestCallback callback); 83 | void cancelFrame(long handle); 84 | 85 | readonly attribute boolean hasStageBounds; 86 | readonly attribute XRStageBounds? stageBounds; 87 | 88 | Promise<void> end(); 89 | 90 | attribute EventHandler onblur; 91 | attribute EventHandler onfocus; 92 | attribute EventHandler onresetpose; 93 | attribute EventHandler onrealitychanged; 94 | attribute EventHandler onrealityconnect; 95 | attribute EventHandler onrealitydisconnect; 96 | attribute EventHandler onboundschange; 97 | attribute EventHandler onended; 98 | }; 99 | 100 | A script that wishes to make use of an XRDisplay can request an XRSession. This session provides a list of the available realities that the script may request, as well as a way to request animation frames. 101 | 102 | _The XRSession plays the same basic role as the VRSession, with the addition of reality and augmentation management. The initialization parameters indicate whether the session is for managing a Reality (e.g.
a virtual reality) or for augmenting an existing Reality._ 103 | 104 | enum XRSessionType { "reality", "augmentation" }; 105 | 106 | dictionary XRSessionCreateOptions { 107 | boolean exclusive; 108 | XRPresentationContext outputContext; 109 | XRSessionType type; 110 | }; 111 | 112 | [SecureContext, Exposed=Window] interface XRPresentationContext { 113 | readonly attribute HTMLCanvasElement canvas; 114 | }; 115 | 116 | - 'exclusive' needs to be rethought given the new use of XRDisplay for magic window. Do we still need sessions that just want sensor data? 117 | 118 | ## Reality 119 | 120 | interface Reality : EventTarget { 121 | readonly attribute DOMString name; 122 | readonly attribute boolean isShared; // True if sessions other than the creator can access this Reality 123 | readonly attribute boolean isPassthrough; // True if the Reality is a view of the outside world, not a full VR 124 | 125 | XRCoordinateSystem? getCoordinateSystem(...XRFrameOfReferenceType types); // Tries the types in order, returning the first match or null if none is found 126 | 127 | attribute EventHandler onchange; 128 | }; 129 | 130 | A Reality represents a view of the world, be it the real world via sensors or a virtual world that is rendered with WebGL or WebGPU. 131 | 132 | Realities can be shared among XRSessions, with multiple scripts rendering into their separate XRLayer contexts, which are then composited by the UA with the Reality rearmost. 133 | 134 | - How do we support configuration (e.g. change white balance on camera input, change options on map view)? 135 | 136 | ## XRPointCloud 137 | 138 | interface XRPointCloud { 139 | readonly attribute Float32Array points; // Each point is [x, y, z, confidence in range 0-1] 140 | } 141 | 142 | ## XRLightEstimate 143 | 144 | interface XRLightEstimate { 145 | readonly attribute double ambientIntensity; 146 | readonly attribute double ambientColorTemperature; 147 | } 148 | 149 | - Should we support point and directional light estimates?
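Since XRPointCloud packs each point into four consecutive floats — x, y, z, and a confidence value in the range 0-1 — consumers have to stride through the flat Float32Array. A minimal sketch of that access pattern; the `filterPoints` helper is ours for illustration and is not part of the draft API:

```javascript
// Each XRPointCloud point occupies 4 floats: x, y, z, confidence (0-1).
// filterPoints (hypothetical helper, not in the draft) strides through the
// flat array and keeps the positions of points at or above minConfidence.
function filterPoints(points, minConfidence) {
  const kept = [];
  for (let i = 0; i < points.length; i += 4) {
    if (points[i + 3] >= minConfidence) {
      kept.push([points[i], points[i + 1], points[i + 2]]);
    }
  }
  return kept;
}

// Two confident points and one noisy one:
const cloud = new Float32Array([
  0, 0, -1, 0.9,
  1, 0, -2, 0.2,
  0, 1, -3, 0.75
]);
const confident = filterPoints(cloud, 0.5); // [[0, 0, -1], [0, 1, -3]]
```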
150 | 151 | ## XRAnchor 152 | 153 | interface XRAnchor { 154 | readonly attribute DOMString uid; 155 | attribute XRCoordinates coordinates; 156 | } 157 | 158 | XRAnchors provide per-frame coordinates which the Reality attempts to pin "in place". In a virtual Reality these coordinates probably do not change. In a Reality based on environment-mapping sensors, the anchors may change coordinates on a per-frame basis as the system refines its map. 159 | 160 | ## XRPlaneAnchor 161 | 162 | interface XRPlaneAnchor : XRAnchor { 163 | readonly attribute double width; 164 | readonly attribute double length; 165 | } 166 | 167 | XRPlaneAnchors usually represent surfaces like floors, table tops, or walls. 168 | 169 | ## XRAnchorOffset 170 | 171 | interface XRAnchorOffset { 172 | readonly attribute DOMString anchorUID; // an XRAnchor.uid value 173 | attribute Float32Array poseMatrix; // the offset's transform relative to the XRAnchor.coordinates 174 | 175 | XRCoordinates? getTransformedCoordinates(XRAnchor anchor); // coordinates of this offset in the XRCoordinateSystem of the anchor parameter 176 | } 177 | 178 | XRAnchorOffset represents a position in relation to an anchor, returned from XRPresentationFrame.findAnchor. If the hit test intersects an XRPlaneAnchor, the returned XRAnchorOffset will contain that anchor and the position in the anchor's coordinate system where the intersection occurred. Otherwise, the Reality may, if possible, create a new XRAnchor for use in the XRAnchorOffset, or return null if the ray does not intersect anything in the reality or it is not possible to anchor at the intersection. 179 | 180 | ## XRManifold 181 | 182 | interface XRManifold { 183 | TBD 184 | } 185 | 186 | - How do we expose the manifold vertices and edges as well as its extent (FOV only, full sphere, etc)? 187 | 188 | ## XRStageBounds 189 | 190 | interface XRStageBounds { 191 | readonly attribute XRCoordinates center; 192 | readonly attribute FrozenArray<XRStageBoundsPoint>?
geometry; 193 | }; 194 | 195 | ## XRStageBoundsPoint 196 | 197 | interface XRStageBoundsPoint { 198 | readonly attribute double x; 199 | readonly attribute double z; 200 | }; 201 | 202 | ## XRPresentationFrame 203 | 204 | interface XRPresentationFrame { 205 | readonly attribute XRSession session; 206 | readonly attribute FrozenArray<XRView> views; 207 | 208 | readonly attribute boolean hasPointCloud; 209 | readonly attribute XRPointCloud? pointCloud; 210 | 211 | readonly attribute boolean hasManifold; 212 | readonly attribute XRManifold? manifold; 213 | 214 | readonly attribute boolean hasLightEstimate; 215 | readonly attribute XRLightEstimate? lightEstimate; 216 | 217 | readonly attribute sequence<XRAnchor> anchors; 218 | DOMString addAnchor(XRAnchor anchor); 219 | void removeAnchor(DOMString uid); 220 | XRAnchor? getAnchor(DOMString uid); 221 | Promise<XRAnchorOffset?> findAnchor(float32 normalizedScreenX, float32 normalizedScreenY); // cast a ray to find or create an anchor at the first intersection in the Reality. Screen coordinates are 0,0 at top left and 1,1 at bottom right. 222 | 223 | XRCoordinateSystem? getCoordinateSystem(...XRFrameOfReferenceType types); // Tries the types in order, returning the first match or null if none is found 224 | 225 | XRViewPose? getViewPose(XRCoordinateSystem coordinateSystem); 226 | }; 227 | 228 | _The XRPresentationFrame differs from the VRPresentationFrame with the addition of the point cloud, manifold, light estimates, and anchor management._ 229 | 230 | - How can we offer up a more generic ray-based equivalent to the screen-oriented findAnchor? 231 | - Should we fire an event when a marker or feature-based anchor (e.g. a wall, a table top) is detected? 232 | - How can we access camera image buffers or textures? 233 | 234 | ## XRView 235 | 236 | interface XRView { 237 | readonly attribute XREye? eye; // 'left', 'right', null 238 | attribute Float32Array projectionMatrix; 239 | 240 | XRViewport?
getViewport(XRLayer layer); 241 | }; 242 | 243 | ## XRViewport 244 | 245 | interface XRViewport { 246 | attribute long x; 247 | attribute long y; 248 | attribute long width; 249 | attribute long height; 250 | }; 251 | 252 | 253 | ## XRCartographicCoordinates 254 | 255 | enum XRCartographicCoordinatesGeodeticFrame { "WGS84" }; 256 | 257 | interface XRCartographicCoordinates { 258 | attribute XRCartographicCoordinatesGeodeticFrame? geodeticFrame; 259 | attribute double latitude; 260 | attribute double longitude; 261 | attribute double positionAccuracy; 262 | attribute double altitude; 263 | attribute double altitudeAccuracy; 264 | attribute Float32Array orientation; // quaternion x,y,z,w from 0,0,0,1 of East/Up/South 265 | } 266 | 267 | The XRCartographicCoordinates are used in conjunction with the XRCoordinateSystem to represent a frame of reference that may optionally be positioned in relation to a geodetic frame like WGS84 for Earth, otherwise a sphere is assumed. 268 | 269 | - We could find geodetic frames for other planets and moons in this solar system 270 | 271 | ## XRCoordinateSystem 272 | 273 | enum XRFrameOfReferenceType { "headModel", "eyeLevel", "stage", "geospatial" }; 274 | 275 | interface XRCoordinateSystem { 276 | readonly attribute XRCartographicCoordinates? cartographicCoordinates; 277 | readonly attribute XRFrameOfReferenceType type; 278 | 279 | Float32Array? getTransformTo(XRCoordinateSystem other); 280 | }; 281 | 282 | 283 | ## XRCoordinates 284 | 285 | interface XRCoordinates { 286 | readonly attribute XRCoordinateSystem coordinateSystem; 287 | attribute Float32Array poseMatrix; 288 | 289 | XRCoordinates? 
getTransformedCoordinates(XRCoordinateSystem otherCoordinateSystem) 290 | }; 291 | 292 | 293 | ## XRViewPose 294 | 295 | interface XRViewPose { 296 | readonly attribute Float32Array poseModelMatrix; 297 | 298 | Float32Array getViewMatrix(XRView view); 299 | }; 300 | 301 | - Do we need coordinate systems for the poseModelMatrix and getViewMatrix? 302 | 303 | ## XRLayer 304 | 305 | interface XRLayer : EventTarget { 306 | }; 307 | 308 | ## XRWebGLLayer 309 | 310 | typedef (WebGLRenderingContext or WebGL2RenderingContext) XRWebGLRenderingContext; 311 | 312 | [Constructor(XRSession session, XRWebGLRenderingContext context, optional XRWebGLLayerInit layerInit)] 313 | 314 | interface XRWebGLLayer : XRLayer { 315 | readonly attribute XRWebGLRenderingContext context; 316 | 317 | readonly attribute boolean antialias; 318 | readonly attribute boolean depth; 319 | readonly attribute boolean stencil; 320 | readonly attribute boolean alpha; 321 | readonly attribute boolean multiview; 322 | 323 | readonly attribute WebGLFramebuffer framebuffer; 324 | readonly attribute long framebufferWidth; 325 | readonly attribute long framebufferHeight; 326 | 327 | void requestViewportScaling(double viewportScaleFactor); 328 | }; 329 | 330 | 331 | -------------------------------------------------------------------------------- /design docs/Coordinate Systems.md: -------------------------------------------------------------------------------- 1 | 2 | ## Reality info 3 | 4 | - Reality: StageBounds 5 | - Anchor: Coordinates 6 | - PlaneAnchor(Anchor): orientation, width, height 7 | - PointCloud: Array of points [x,y,z] 8 | - Manifold: ? 
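A concrete way to read these notes: an XRCoordinates pairs a pose matrix with a coordinate system, and XRCoordinateSystem.getTransformTo yields the 4x4 matrix that maps one system into another. A minimal sketch of getTransformedCoordinates in those terms — the matrix helper and the stand-in objects are ours, and the column-major layout is an assumption carried over from WebGL convention, not something this draft pins down:

```javascript
// Multiply two 4x4 column-major matrices: out = a * b (WebGL-style layout,
// translation in elements 12, 13, 14).
function multiply4x4(a, b) {
  const out = new Float32Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) sum += a[k * 4 + row] * b[col * 4 + k];
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

// Sketch of XRCoordinates.getTransformedCoordinates: re-express a pose in
// another coordinate system by left-multiplying with the inter-system transform.
function getTransformedCoordinates(coords, otherSystem) {
  const transform = coords.coordinateSystem.getTransformTo(otherSystem);
  if (!transform) return null; // no known relationship between the systems
  return {
    coordinateSystem: otherSystem,
    poseMatrix: multiply4x4(transform, coords.poseMatrix)
  };
}

// Example: the stage floor sits 1.5m below eye level, so the stage-to-eyeLevel
// transform subtracts 1.5 from y. These two system objects are stand-ins.
const eyeLevel = {};
const stage = {
  getTransformTo: () => new Float32Array([1,0,0,0, 0,1,0,0, 0,0,1,0, 0,-1.5,0,1])
};
const anchorPose = new Float32Array([1,0,0,0, 0,1,0,0, 0,0,1,0, 2,0,-3,1]); // 2m right, 3m ahead
const moved = getTransformedCoordinates({ coordinateSystem: stage, poseMatrix: anchorPose }, eyeLevel);
// moved.poseMatrix now carries translation [2, -1.5, -3] in eye-level terms
```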
9 | 10 | ### Frame info 11 | 12 | - Frame: Views, PointCloud, Manifold, DisplayPose 13 | - DisplayPose: poseModelMatrix 14 | - View: projectionMatrix, ViewPort 15 | - Viewport: x, y, width, height 16 | 17 | ### Coordinate info 18 | 19 | - CartographicCoordinates: lat/lon/alt, accuracy, type (display, reality), geodeticFrame (WGS84) 20 | - CoordinateSystem: CartographicCoordinates?, type (headModel, eyeLevel, stage, spatial) 21 | - Coordinates: CoordinateSystem, x/y/z 22 | - StageBounds: XRCoordinate center, optional array of [x,y] points on stage plane 23 | 24 | ### Rendering 25 | 26 | - Layer: GL context, GL frame buffer with width and height 27 | 28 | ### Scene graph 29 | 30 | - Scene coordinates: local, world, camera, projection 31 | 32 | 33 | ### References 34 | 35 | - https://developer.microsoft.com/en-us/windows/mixed-reality/coordinate_systems 36 | - https://developer.microsoft.com/en-us/windows/mixed-reality/spatial_anchors 37 | 38 | - https://developers.google.com/tango/overview/motion-tracking 39 | - https://developers.google.com/tango/overview/frames-of-reference 40 | 41 | - https://developer.apple.com/documentation/arkit/ (no frames of reference or spatial coordinates?) 42 | 43 | - https://dev.w3.org/geo/api/spec-source.html#coordinates 44 | -------------------------------------------------------------------------------- /design docs/From WebVR 2.0 to WebXR 2.1.md: -------------------------------------------------------------------------------- 1 | ## WebVR 2.0 changes to *allow* AR 2 | 3 | These are the minimal changes to WebVR 2.0 needed so that at some point in the future we could support AR of the type described in [WebXR API.md](https://github.com/mozilla/webxr-api/blob/master/WebXR%20API.md) and implemented in [webxr-polyfill](https://github.com/mozilla/webxr-polyfill). 
4 | 5 | ### Rename 6 | 7 | - rename everything from VR* to XR* 8 | - rename VRDevice to XRDisplay 9 | - rename VRDevicePose to XRViewPose 10 | 11 | ### Coordinate system 12 | 13 | - add XRCoordinates (has a pose matrix and XRCoordinateSystem attribute) 14 | - update XRCoordinateSystem to include a XRFrameOfReferenceType attribute 15 | - remove XRFrameOfReference (handled by a XRFrameOfReferenceType attribute on XRCoordinateSystem) 16 | 17 | ### Stage bounds 18 | 19 | - move stage bounds from XRFrameOfReference (which is removed) to XRSession 20 | - change XRStageBounds to have a XRCoordinates `center` attribute (and thus a XRCoordinateSystem) 21 | 22 | ### Misc 23 | 24 | - add XRPresentationFrame.getCoordinateSystem 25 | 26 | 27 | 28 | ## WebVR 2.0 changes to *support* AR 29 | 30 | These are the changes to WebVR 2.0 required to actually use the AR features in WebXR. 31 | 32 | ### New interfaces 33 | 34 | - Reality 35 | - XRSessionRealityType 36 | - XRPointCloud 37 | - XRLightEstimate 38 | - XRAnchor, XRPlaneAnchor, and XRAnchorOffset 39 | - XRManifold 40 | - XRCartographicCoordinates 41 | 42 | ### Interface additions 43 | 44 | - add "geospatial" to existing XRFrameOfReferenceType types "headModel", "eyeLevel", and "stage" 45 | - update XRCoordinateSystem to include an optional XRCartographicCoordinates attribute 46 | 47 | - add XRSessionType attribute to XRSessionCreateOptions to support either a Reality session or an augmentation session 48 | - update XRLayer to include focus API 49 | - update XRPresentationFrame to add point cloud, light estimate, manifold, and anchor APIs 50 | 51 | -------------------------------------------------------------------------------- /design docs/WebVR Changes to Allow XR.md: -------------------------------------------------------------------------------- 1 | # WebVR 2.0 changes to allow XR 2 | 3 | This document represents the WebXR interfaces after using the minimal "WebVR 2.0 changes to *allow* AR" list in [From WebVR 2.0 to WebXR 
2.1.md](https://github.com/mozilla/webxr-api/blob/master/design%20docs/From%20WebVR%202.0%20to%20WebXR%202.1.md) to make the changes needed to eventually support AR. See the "WebVR 2.0 changes to *support* AR" list and the full [WebXR API.md](https://github.com/mozilla/webxr-api/blob/master/WebXR%20API.md) for API changes that would be required to actually implement AR applications. 4 | 5 | The major changes from WebVR "2.0" are: 6 | 7 | - naming changes like VR* to XR* and VRDevice to XRDisplay 8 | - introduce XRCoordinates and change XRCoordinateSystem to replace VRFrameOfReference 9 | - move stage bounds to the session and give them a center XRCoordinates (and thus a coordinate system) 10 | 11 | Changes that are not in this API but that would be backwards-compatible in a later version that supports AR: 12 | 13 | - Realities 14 | - Anchors 15 | - Environment info like point clouds, manifolds, and light estimates 16 | 17 | 18 | ## XR 19 | 20 | interface XR { 21 | Promise<sequence<XRDisplay>> getDisplays(); 22 | 23 | attribute EventHandler ondisplayconnect; 24 | attribute EventHandler ondisplaydisconnect; 25 | }; 26 | 27 | ## XRDisplay 28 | 29 | interface XRDisplay : EventTarget { 30 | readonly attribute DOMString displayName; 31 | readonly attribute boolean isExternal; 32 | 33 | Promise<boolean> supportsSession(XRSessionCreateOptions parameters); 34 | Promise<XRSession> requestSession(XRSessionCreateOptions parameters); 35 | 36 | attribute EventHandler ondeactivate; 37 | }; 38 | 39 | ## XRSession 40 | 41 | interface XRSession : EventTarget { 42 | readonly attribute XRDisplay display; 43 | 44 | readonly attribute boolean exclusive; 45 | readonly attribute XRPresentationContext outputContext; 46 | readonly attribute XRSessionType type; 47 | 48 | attribute XRLayer baseLayer; 49 | attribute double depthNear; 50 | attribute double depthFar; 51 | 52 | long requestFrame(XRFrameRequestCallback callback); 53 | void cancelFrame(long handle); 54 | 55 | readonly attribute boolean hasStageBounds;
56 | readonly attribute XRStageBounds? stageBounds; 57 | 58 | Promise<void> end(); 59 | 60 | attribute EventHandler onblur; 61 | attribute EventHandler onfocus; 62 | attribute EventHandler onresetpose; 63 | attribute EventHandler onboundschange; 64 | attribute EventHandler onended; 65 | }; 66 | 67 | enum XRSessionType { "reality" }; // eventually also 'augmentation' 68 | 69 | dictionary XRSessionCreateOptions { 70 | boolean exclusive; 71 | XRPresentationContext outputContext; 72 | XRSessionType type; 73 | }; 74 | 75 | [SecureContext, Exposed=Window] interface XRPresentationContext { 76 | readonly attribute HTMLCanvasElement canvas; 77 | }; 78 | 79 | ## XRStageBounds 80 | 81 | interface XRStageBounds { 82 | readonly attribute XRCoordinates center; 83 | readonly attribute FrozenArray<XRStageBoundsPoint>? geometry; 84 | }; 85 | 86 | ## XRStageBoundsPoint 87 | 88 | interface XRStageBoundsPoint { 89 | readonly attribute double x; 90 | readonly attribute double z; 91 | }; 92 | 93 | ## XRPresentationFrame 94 | 95 | interface XRPresentationFrame { 96 | readonly attribute XRSession session; 97 | readonly attribute FrozenArray<XRView> views; 98 | 99 | XRCoordinateSystem? getCoordinateSystem(...XRFrameOfReferenceType types); // Tries the types in order, returning the first match or null if none is found 100 | 101 | XRViewPose? getViewPose(XRCoordinateSystem coordinateSystem); 102 | }; 103 | 104 | ## XRView 105 | 106 | interface XRView { 107 | readonly attribute XREye? eye; // 'left', 'right', null 108 | attribute Float32Array projectionMatrix; 109 | 110 | XRViewport?
getViewport(XRLayer layer); 111 | }; 112 | 113 | ## XRViewport 114 | 115 | interface XRViewport { 116 | attribute long x; 117 | attribute long y; 118 | attribute long width; 119 | attribute long height; 120 | }; 121 | 122 | ## XRCoordinateSystem 123 | 124 | enum XRFrameOfReferenceType { "headModel", "eyeLevel", "stage" }; // eventually 'geospatial' 125 | 126 | interface XRCoordinateSystem { 127 | readonly attribute XRFrameOfReferenceType type; 128 | 129 | Float32Array? getTransformTo(XRCoordinateSystem other); 130 | }; 131 | 132 | 133 | ## XRCoordinates 134 | 135 | interface XRCoordinates { 136 | readonly attribute XRCoordinateSystem coordinateSystem; 137 | attribute Float32Array poseMatrix; 138 | 139 | XRCoordinates? getTransformedCoordinates(XRCoordinateSystem otherCoordinateSystem) 140 | }; 141 | 142 | 143 | ## XRViewPose 144 | 145 | interface XRViewPose { 146 | readonly attribute Float32Array poseModelMatrix; 147 | 148 | Float32Array getViewMatrix(XRView view); 149 | }; 150 | 151 | ## XRLayer 152 | 153 | interface XRLayer : EventTarget { 154 | }; 155 | 156 | ## XRWebGLLayer 157 | 158 | typedef (WebGLRenderingContext or WebGL2RenderingContext) XRWebGLRenderingContext; 159 | 160 | [Constructor(XRSession session, XRWebGLRenderingContext context, optional XRWebGLLayerInit layerInit)] 161 | 162 | interface XRWebGLLayer : XRLayer { 163 | readonly attribute XRWebGLRenderingContext context; 164 | 165 | readonly attribute boolean antialias; 166 | readonly attribute boolean depth; 167 | readonly attribute boolean stencil; 168 | readonly attribute boolean alpha; 169 | readonly attribute boolean multiview; 170 | 171 | readonly attribute WebGLFramebuffer framebuffer; 172 | readonly attribute long framebufferWidth; 173 | readonly attribute long framebufferHeight; 174 | 175 | void requestViewportScaling(double viewportScaleFactor); 176 | }; 177 | 178 | 179 | 180 | -------------------------------------------------------------------------------- /design docs/persona/Erin 
Sims.md: -------------------------------------------------------------------------------- 1 | 2 | # Erin Sims 3 | 4 | Title: Student 5 | 6 | Organization: Grenoble Public School 7 | 8 | Quote: Mom, what's that bug?! 9 | 10 | ## About 11 | 12 | Age: 11 13 | 14 | Education: Middle School 15 | 16 | Experience: 5 years of Lower School 17 | 18 | Responsibilities: Get good grades. House chores. Tends the family pets and plants. 19 | 20 | Initial touch-point: Her mom bought Erin a mobile phone when she started after-school sports so that they could coordinate drop-off and pick-up times. It came with an XR app for identifying plants, animals, and insects. 21 | 22 | ## Highlights 23 | 24 | - Uses her phone for everything, but occasionally will use her mother's laptop to write reports for school 25 | - Enjoys being around people at all times. Likes to read, but prefers to do it with others in the room 26 | 27 | ## Goals 28 | 29 | - Figure out how the world works 30 | - Avoid boring and angry adults 31 | - Fit into the increasingly complex social groups at school and running practice 32 | 33 | ## Frustrations 34 | 35 | - When her mom gets annoyed because she asks so many questions about insects and plants 36 | - She's not allowed to spend any money on the phone, either for apps or in them 37 | - Her father lives two states away so she rarely sees him in person 38 | 39 | ## User story 40 | 41 | Erin lives with her mother in the central district of Grenoble. Her block has lots of other kids so she's usually in a group of kids, either from school or from her street. All of the kids have phones or tablets that they use for messaging and for almost all of their homework. 42 | 43 | You'd pick out Erin because she's forever stopping to look at plants and bugs by the sidewalk. Her friends squeal, but Erin just watches the bugs go about their lives and wonders what they eat and how they live. Erin runs with a group after school and the coach had to talk to her about stopping during practice.
44 | 45 | Erin's father, when he calls, often tells her how important it is to get through high school. He didn't and regrets it, so Erin feels that it's important; since school is pretty easy for her, it's no big deal. Her mom lets her have a lot of plants and animals, and she spends about an hour each day tending them. On weekends, she and her mother work their way through the chore list, and then if there's a good kids' movie they'll bus downtown and watch it in one of the cinemas. 46 | 47 | -------------------------------------------------------------------------------- /design docs/persona/Frank Moreno.md: -------------------------------------------------------------------------------- 1 | 2 | # Frank Moreno 3 | 4 | Title: Transport Mechanic 5 | 6 | Organization: US Army, Fort Benning GA 7 | 8 | Quote: "If I can get the parts, I can fix your vehicle. If I can't get the parts, I can probably still fix it." 9 | 10 | ## About 11 | 12 | Age: 34 13 | 14 | Education: High school diploma, Army mechanical engineering certificate 15 | 16 | Experience: Worked on cars with his father growing up and in high school. Went through a two-year training program with the Army. 17 | 18 | Responsibilities: Work with the maintenance IT systems to track what vehicles need work and arrange with the users to bring in the vehicles. Perform maintenance. When a vehicle fails, determine and document the failure point, order parts, and perform fixes. 19 | 20 | Initial touch-point: The PCs in the garage and office run a military version of Windows, but all of the tools run their own embedded systems. The vehicles run a hardened Linux. 21 | 22 | ## Highlights 23 | 24 | - Has many years of experience with quirky shop computers and on-board systems. 25 | 26 | ## Goals 27 | 28 | - Avoid negative attention from people of higher rank. 29 | - Hit the target throughput numbers that the IT systems set for his team. 30 | - Keep the vehicles in top condition and ready for deployment.
31 | 32 | ## Frustrations 33 | 34 | - Ancient Windows OS and hardware on the office PCs. 35 | - Each version of the vehicle systems uses incompatible parts and software. 36 | 37 | ## User story 38 | 39 | Frank Moreno has seen it all and can no longer be shocked. He's seen Humvees towed in behind tanks. He's seen axles sheared while driving on flat roads. People at other bases send Frank descriptions of unusual damage, and he no longer raises an eyebrow. 40 | 41 | With the exception of the two years that he was being trained as a mechanical engineer, Frank has worked on cars and trucks pretty much every day since he was 9. He and his father restored cars in their garage to earn extra money for the family, and now he manages a base garage where hundreds of vehicles pass through each month. 42 | 43 | Most days, Frank rarely sits down long enough to get annoyed at the PC in his office. He's called to the garage floor so often that now he just works from one of the unused stations down there or from his handset. He works regular hours and has enough free time that he can live off base in the nearby town. 44 | 45 | Frank has watched as vehicles switched from analog radios to on-board digital navigation and communication systems. They don't necessarily work better; they just have different failure states. -------------------------------------------------------------------------------- /design docs/persona/Jan Morton.md: -------------------------------------------------------------------------------- 1 | 2 | # Jan Morton 3 | 4 | Title: Administrative Assistant to the Senior Vice President of Sales 5 | 6 | Organization: Consumer goods pricing model tech startup, Univoi.com 7 | 8 | Quote: "I sent that four days ago because I knew that you'd want it sent." 9 | 10 | ## About 11 | 12 | Age: 48 13 | 14 | Education: High School Diploma 15 | 16 | Division: Sales 17 | 18 | Experience: Full-time mother of three kids.
Then, eleven years as the assistant to the woman who is now an SVP. 19 | 20 | Responsibilities: Books travel arrangements, manages her boss's schedule, cheerfully fends off time wasters, oversees the younger assistants. 21 | 22 | Initial touch-point: Jan has always looked for technology to make her more efficient. When she saw a demo of the third-generation HoloLens and considered how many windows crowd her PC screen every day, she wanted to give it a try. 23 | 24 | ## Highlights 25 | 26 | - Manages dozens of active projects and supports large numbers of travelers 27 | - Works closely with her SVP, who mostly uses her handset for communication and scheduling 28 | 29 | 30 | ## Goals 31 | 32 | - Prevent surprises for her SVP by managing expectations and communicating clearly. 33 | - Support travelers in her department. 34 | - Train new and inexperienced assistants on how the company functions and how to support their bosses. 35 | 36 | ## Frustrations 37 | 38 | - Never enough real estate on her screens. 39 | - People who don't read their email. 40 | - Company IT takes forever to fix anything. 41 | 42 | ## User story 43 | 44 | Jan has this job buttoned up. She's trained her boss on how to communicate her needs and how to read the email that Jan sends her. 45 | 46 | After Jan's kids were old enough that they didn't need her at home full time, she started working for a new manager at Univoi. Over the last eleven years, both Jan and the manager have moved up through the org chart to the SVP level. Jan now manages several other administrative assistants in the sales division. 47 | 48 | Jan thinks of herself as a chaos filter for her boss and for the division as a whole. She spends most of her day managing a large number of ongoing projects using every form of IT that you can imagine.
She's never without her handset and tablet, and often she also has her laptop propped on a copy machine or on a counter in the office kitchen so that she can respond to the many requests that land in her in-box every day. 49 | 50 | A good day for Jan ends when all of the surprises that come her way have been translated into well-managed and scheduled events for her boss and her boss's direct reports. 51 | 52 | -------------------------------------------------------------------------------- /design docs/persona/Liz Burks.md: -------------------------------------------------------------------------------- 1 | 2 | # Liz Burks 3 | 4 | Title: Web Consultant 5 | 6 | Organization: The Web Forest, a Chicago-based one-person LLC 7 | 8 | Quote: "I have four sites due by Friday afternoon, can this wait?" 9 | 10 | ## About 11 | 12 | Age: 28 13 | 14 | Education: Bachelor's degree in English Literature 15 | 16 | Experience: Four years working in a bookstore, then took a coding camp and started taking short contract web work. 17 | 18 | Responsibilities: Helps small business owners figure out what their sites need and sets them up with Squarespace or WordPress sites. 19 | 20 | Initial touch-point: One of Liz's customers asked her to build an aug for their real estate business.
21 | 22 | ## Highlights 23 | 24 | - Often spends half of her billable time educating her clients 25 | - Has two e-readers but doesn't want to give up her library of paper books 26 | 27 | ## Goals 28 | 29 | - Streamline small-business site production so that she can take more jobs 30 | - Attract larger customers who need more specialized services so she can charge more 31 | - At some point have enough time for more of a social life 32 | 33 | ## Frustrations 34 | 35 | - Her current customers don't want to pay for service contracts but get angry when sites go stale 36 | - Clients often come in wanting a feature based on a technical buzzword they read about on-line 37 | - The web (especially JavaScript) is changing so quickly that she has trouble keeping up 38 | 39 | ## User story 40 | 41 | Liz has never had a problem working with clients, and that's why her customers are enthusiastic about recommending her to their friends. At a web developer conference she stands out, not just because she's tall and sometimes the only woman of color, but because she's usually the session organizer and knows half of the attendees. 42 | 43 | When the bookstore she managed went out of business in 2009, she decided to switch careers and attend a code camp. When she realized that it wasn't that hard to make Squarespace and WordPress sites, she filed incorporation papers halfway through the course and had clients lined up for the week after she graduated. 44 | 45 | Liz spends about half of her time connecting with potential and previous clients over coffee and the other half working from a corner of her living room. She makes time for yoga, but hasn't yet figured out how to date and see friends when she has so many projects in flight.
46 | 47 | -------------------------------------------------------------------------------- /design docs/persona/Shane Riley.md: -------------------------------------------------------------------------------- 1 | 2 | # Shane Riley 3 | 4 | Title: High School Student 5 | 6 | Organization: Atlanta Public Schools 7 | 8 | Quote: "Can you name every character in every season of Smallville? I can." 9 | 10 | ## About 11 | 12 | Age: 17 13 | 14 | Education: High School Junior 15 | 16 | Experience: Two years as an after-school assistant to the editor at Atlanta Magazine. 17 | 18 | Responsibilities: Keep a high enough grade point average to get into UGA, where he plans to study television production in the journalism school. 19 | 20 | Initial touch-point: Shane's parents gave him an old iPad when they bought a new one. 21 | 22 | ## Highlights 23 | 24 | - Doesn't want to "learn computers" for fun 25 | - Usually has a tablet, a laptop, and a handset in use at the same time 26 | 27 | ## Goals 28 | 29 | - Find the best on-line and television shows 30 | - Make funny videos with his friends 31 | - Get through high school and leave Atlanta as soon as possible 32 | 33 | ## Frustrations 34 | 35 | - Driving around Atlanta for work is often slow 36 | - He posts a lot of videos on-line, but few of them get many views or likes 37 | 38 | ## User story 39 | 40 | Shane is the young man who always seems to have more than one thing going on. Whether it's running errands for work or working on homework, he usually has a tablet playing Netflix or Hulu and is messaging friends on his handset. After homework, he's usually editing some of the footage he took during the day into videos that he posts on-line. 41 | 42 | He likes when on-line videos have additional information about the actors and the directors so that he can look up more about their careers and how they think about making videos.
He occasionally watches on-line courses about video production and storytelling with the idea that he'll get more views and likes when he posts. 43 | 44 | -------------------------------------------------------------------------------- /design docs/persona/Template.md: -------------------------------------------------------------------------------- 1 | 2 | # 3 | 4 | Title: 5 | 6 | Organization: 7 | 8 | Quote: 9 | 10 | ## About 11 | 12 | Age: 13 | Education: 14 | Division: 15 | Experience: 16 | 17 | Responsibilities: 18 | 19 | Initial touch-point: 20 | 21 | ## Highlights 22 | 23 | - 24 | - 25 | - 26 | 27 | ## Goals 28 | 29 | - 30 | - 31 | - 32 | 33 | ## Frustrations 34 | 35 | - 36 | - 37 | - 38 | 39 | ## User story 40 | 41 | First impressions 42 | 43 | Path to current position 44 | 45 | A day in the life 46 | 47 | Restatement of broad goals 48 | 49 | 50 | -------------------------------------------------------------------------------- /design docs/scenario/City Walk.md: -------------------------------------------------------------------------------- 1 | The doctor says that if Jan wants to avoid sciatica pain in the future she'll need to sit as little as possible and to walk at least 45 minutes every day. So, each day at lunch she dutifully steps back from her standing desk, puts on her jacket and glasses, takes the elevator down from her office, and walks out into downtown Seattle. 2 | 3 | Jan usually doesn't keep a lot of augs or apps running, but she's started to leave the Architectural Digest aug running so that it points out interesting bits of the Seattle skyline while she walks. 4 | 5 | Halfway through her route she wants a bit of music, so she tells her assistant to play one of the road trip streams offered by Spotify. The Spotify control app appears in her flock and follows her as she walks. When she pauses to wait for a crossing signal, her flock catches up and flows around her into easy-to-reach positions that won't distract her too much.
6 | 7 | On occasion one of the songs will annoy her, and she'll poke the Spotify app's skip control. When a great song comes on, she'll grab the app and tap it against the favorite-songs list that she keeps in a note stuck to her forearm. The title and artist, with a link, appear at the bottom of the list. -------------------------------------------------------------------------------- /design docs/scenario/Graffiti.md: -------------------------------------------------------------------------------- 1 | Lee is bored. School is boring. Teachers are boring. The other students are especially boring. Seems like hanging out down by the river is the only time he gets to think and not worry about what's happening at home or hear about stupid crap he doesn't care about. He likes to draw, though. When he gets to the river he puts on the glasses that they issued him at school and pulls out his mobile. He touch-taps open the drawing app, picks the brush and color, and then uses the mobile to draw big strokes around the path by the river's bank. He adds fantastic new flowers to tree limbs. He draws a short castle around the snake hole where he saw that long snake that time. He runs across some of his earlier drawings and erases them, because he's a lot better at drawing now. 2 | 3 | Lee takes a selfie of himself surrounded by his drawings and sends it to his mother. Maybe it will help her through today. Sometimes he skims other people's drawings, but mostly they're boring, so mostly he just makes things prettier than he found them. -------------------------------------------------------------------------------- /design docs/scenario/Morning Time.md: -------------------------------------------------------------------------------- 1 | Liz steps quietly out of her bedroom so that her partner can sleep in after a long night. She puts on her glasses as she walks downstairs and her flock comes alive.
She shoos her music and sculpture apps aside and pushes the web browser app to the front as she tells it to open the BBC news page. She pokes the page to play the World News podcast while she starts the coffee, but it's too intense for morning time so she tosses it away. As she starts the 4-minute timer, she notices that one of the search augs that she left in the kitchen has located a new vegetarian burrito stand a few blocks away. She tuts its map pin over to midday on her calendar and walks downstairs to the garden. 2 | 3 | The plant babies need some water, and her music app picks up a sound stickie that the kid left on the watering can. It begins to play one of their compositions, which lasts about as long as it takes her to water everyone. She pulls up her keyboard and pokes out a proud-parent emoji next to the watering can for the kid to find. 4 | 5 | Liz left a lot of augs running in the downstairs lounge last night when she was poking at a new project, so she tuts most of them away. It's still too early and she hasn't had coffee. The one aug that she leaves up is a sketch of a character arc for a re-imagining of an Appalachian folk hero that she intends to sell into the Korean film market. She stares at it for a few minutes, and just as she flops onto one of the big pillows, the coffee timer goes off. 6 | 7 | She sighs and heads upstairs to bring a cup of the sweet nectar of life to her partner.
He pulls forward the show app and pokes the play control on the latest episode thumbnail. He lowers the glasses' light shield so that he can see only the app. His flock fades away and the episode appears around him. Frank leans back and watches events unfold, occasionally tutting to change position or direction in the scene. 2 | 3 | The timer eventually rings and he pauses the episode, brings up his flock, lifts the light shield, and turns off the ringer. He pulls off the glasses and makes a bowl of food. When he sits at the table, he puts on the glasses and places a camera window over his food so that when he expands the episode and starts it again he can see to eat. -------------------------------------------------------------------------------- /design docs/scenario/What Is This Bug.md: -------------------------------------------------------------------------------- 1 | Erin sits at the bus stop and sees a little bug with too many legs. She wonders what it is, so she pulls out her mobile and browses over to the Tree of Life site, then clicks on the link to the identification aug. It prompts her to switch into a magic window view, which she accepts, and then she points the mobile's camera at the bug. It spends a few seconds watching the bug and figuring out that it's a millipede. Erin then switches back to flat view and reads about what this species of millipede eats and how it lives. --------------------------------------------------------------------------------