# Introduction to visionOS (1.0) and Spatial Computing


# Getting Started With visionOS

When building your app, start with a window and add elements as appropriate to help immerse people in your content. Add a volume to showcase 3D content, or increase the level of immersion using a Full Space. The mixed style configures the space to display passthrough, but you can apply the progressive or full style to increase immersion and minimize distractions.

* Add depth to your windows. Apply depth-based offsets to views to emphasize parts of your window, or to indicate a change in modality. Incorporate 3D objects directly into your view layouts to place them side by side with your 2D views.
* Add hover effects to custom views. Highlight custom elements when someone looks at them using hover effects. Customize the behavior of your hover effects to achieve the look you want.
* Implement menus and toolbars using ornaments. Place frequently used tools and commands on the outside edge of your windows using ornaments.


## RealityKit

RealityKit plays an important role in visionOS apps, and you use it to manage the creation and animation of 3D objects in your apps. Create RealityKit content programmatically, or use Reality Composer Pro to build entire scenes that contain all the objects, animations, sounds, and visual effects you need. Include those scenes in your windows, volumes, or spaces using a RealityView.


## **RealityView**

- RealityView is the SwiftUI view that hosts RealityKit entities
- RealityView lets you place RealityKit content into a SwiftUI view hierarchy
- Two closures: `make` and `update`
  - `make` loads and adds the entities:
    - `if let scene = try? await Entity(named: "Scene", in: …) { content.add(scene) }`
  - `update` is not called every frame; it is only called when SwiftUI state changes

------

- App icons on visionOS are special: three images (front, middle, and back layers) overlaid on top of each other that respond dynamically when viewed

- Not all APIs that are available on iPadOS / iOS / macOS are available on visionOS:
  - APIs that were deprecated prior to iOS 14
  - APIs that do not work well on visionOS: UIDeviceOrientation, UIScreen, UITabBar, and more

- The origin (0, 0, 0) of a Space is below the user, at their feet


## **visionOS Workflow**
- Use SwiftUI and UIKit for building UI
- RealityKit for presenting 3D content, animations, and VFX
- ARKit to understand the space around the user


## **Spaces**
- An app can contain one or more Spaces
- Only one Space can be open at a time


## **Model3D**

- Use Model3D to load simple scenes asynchronously

```
struct CustomView: View {
    var body: some View {
        // The named model file must be included in the project bundle.
        Model3D(named: "FileName") { phase in
            if let model = phase.model {
                model                  // display the loaded model
            } else {
                ProgressView()         // placeholder while the model loads
            }
        }
    }
}

// Standard SwiftUI modifiers also apply to 3D views:
// Model3D(...).overlay { … }                                      // overlay on SwiftUI 3D objects
// Model3D(...).rotation3DEffect(Rotation3D(angle: …, axis: …))    // apply rotation
```


## **Gestures**

- Since RealityView is a SwiftUI view containing multiple entities, a gesture targets every entity in the view. If you want a gesture to affect only a specific object, use targetedToEntity
- `.targetedToEntity(…)` // allow this gesture on a specific entity only
- An entity must have a Collision component and an Input Target component to receive gestures (they can be added in Reality Composer Pro or programmatically; see the sketch at the end of this section)

```
RealityView { content in
    // …
}
.gesture(
    SpatialTapGesture()
        .targetedToEntity(…)   // restrict the gesture to a specific entity
        .onEnded { value in
            // handle the tap
        }
)
```

```
// UIKit: adjust gesture recognizers for the visionOS (reality) idiom
if traitCollection.userInterfaceIdiom == .reality {
    gesture.numberOfTouchesRequired = 2
}
```
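A minimal sketch of adding those two components in code (the entity here is illustrative; any entity you want to receive gestures is configured the same way):

```
import RealityKit

// Illustrative entity; in practice this is the entity you want to make tappable.
let tappableEntity = ModelEntity(mesh: .generateBox(size: 0.2))

// Gestures require both an InputTargetComponent and a CollisionComponent.
tappableEntity.components.set(InputTargetComponent())
tappableEntity.generateCollisionShapes(recursive: true) // adds a CollisionComponent matching the mesh
```

With both components set, the `.targetedToEntity(tappableEntity)` gesture above delivers taps to that entity.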
----------------------------------------------------------------------------------------------


## **Design / UX**

The visionOS user experience is an intuitive blend of iPadOS and VR. Most default system UI controls and APIs are available on visionOS, along with many new features, and Apple has created an all-new immersive way to interact with and view UI in software applications.

- visionOS is missing several standard APIs found on iOS and iPadOS, but most existing apps are intended to work by default and are presented as a flat 2D window.
- System color shades vary slightly across platforms (iOS, visionOS, watchOS).
- Make use of system colors on labels for improved readability (UIColor.label, UIColor.secondaryLabel).
- Use `.borderStyle = .roundedRect` on text fields for a recessed background.
- Use materials in UI. Materials blend into the surroundings, adjusting contrast and color balance based on lighting conditions and the colors behind them.
- There is no distinction between dark and light mode on visionOS. All built-in controls use materials by default, so they adjust to the surroundings.


## **Eye Tracking / Hover**
- visionOS has eye tracking, but apps do not receive the exact eye position
- It is important UX to indicate when the user is looking at a specific UI element
- Built-in visionOS UI controls have hover indicators
- New UIView property of type `UIHoverStyle`: the effect can be highlight or lift, or the style can be set to nil to disable it
  - `self.hoverStyle = …`


## **Interacting With UI**
- Look and pinch = tap
- Pinch and move = pan
- When close enough to the screen, reach out and touch UI elements directly


## **Area Mapping**

visionOS builds a 3D model of the user's surroundings to enable realistic lighting, shadows, and spatial audio. Apps get access to this data without needing access to the cameras, which preserves user privacy.


## **Spatial Audio**

visionOS has a sophisticated understanding of the user's surroundings. Its spatial audio engine fuses acoustic sensing with 3D scene understanding to create a detailed model of the sonic characteristics of the space. Because visionOS understands the surroundings in detail, apps can simply specify where they want sounds to come from, and visionOS handles the audio mixing.


## **RealityView Hierarchy**

- View
  - RealityView
    - Entity: an element of a RealityKit scene to which you attach components that provide appearance and behavior characteristics for the entity


## **RealityKit Entity Base Class**

RealityKit defines a few concrete subclasses of `Entity` that provide commonly used functionality. Components are added to entities to represent a geometry or a behavior that you apply to an entity.

- AnchorEntity
- ModelEntity


## Entity Components (protocol)

A representation of a geometry or a behavior that you apply to an entity. You can add at most one component of a given type to an entity. RealityKit has a variety of predefined component types that you can use to add commonly needed characteristics. For example, `ModelComponent` specifies visual appearance with a mesh and materials, and `CollisionComponent` contains a shape and other information used to decide whether one entity collides with another.
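Components pair with systems in RealityKit's Entity Component System (see the ECS links in the documentation section below). A minimal sketch of a custom component and system, using illustrative names (`SpinComponent`, `SpinSystem`), that spins every tagged entity:

```
import RealityKit
import simd

// Illustrative custom component: tags an entity and stores per-entity data.
struct SpinComponent: Component {
    var speed: Float = 1.0   // radians per second
}

// Illustrative custom system: updates every entity that has a SpinComponent.
class SpinSystem: System {
    private static let query = EntityQuery(where: .has(SpinComponent.self))

    required init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            let angle = spin.speed * Float(context.deltaTime)
            entity.transform.rotation *= simd_quatf(angle: angle, axis: [0, 1, 0])
        }
    }
}

// Register once (for example at app launch), then attach the component to entities:
// SpinComponent.registerComponent()
// SpinSystem.registerSystem()
// entity.components.set(SpinComponent(speed: 2))
```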
## **Anchoring Component**

A description of how virtual content can be anchored to the real world; a short AnchorEntity sketch follows the list below.

- Target: the kinds of real-world objects to which an anchor entity can be tethered
  - Alignment
    - Horizontal
    - Vertical
    - Any
  - Classification
    - Wall
    - Floor
    - Ceiling
    - Table
    - Seat
    - Any
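As a concrete illustration, an AnchorEntity combines an alignment and a classification into a plane target (a minimal sketch; the 0.2 m minimum bounds are an assumed example value):

```
import RealityKit

// Anchor content to a horizontal surface classified as a table,
// requiring at least 0.2 m x 0.2 m of detected plane.
let tableAnchor = AnchorEntity(
    .plane(.horizontal, classification: .table, minimumBounds: [0.2, 0.2])
)

// Entities parented to the anchor follow the detected plane.
// tableAnchor.addChild(someModelEntity)
```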
## **Reduce draw calls**

Each model entity in your scene generates one draw call per material for noninstanced geometry. With instanced entities, by contrast, RealityKit generates only a single draw call for each material used on the original instanced entity, so using instanced entities can reduce the number of draw calls your app makes.

Sharing textures between entities by using texture atlases (textures that contain images for multiple materials) also reduces your app's draw calls, as can combining multiple models into a single entity with shared materials. When combining model entities, be careful not to make the combined entities too large. Entities that are too large won't be removed by frustum culling, the step that drops objects that are completely off camera (and so generate no draw calls at all). If any part of a combined entity is visible on screen, every material on that entity generates a draw call, even materials that are completely off screen.


----------------------------------------------------------------------------------------------


## **Useful visionOS Snippets**


Load and return a specific entity from a Reality Composer Pro scene
```
// Load and return a specific entity from a Reality Composer Pro scene.

@MainActor
func loadFromRealityComposerPro(named entityName: String, fromSceneNamed sceneName: String) async -> Entity? {
    var entity: Entity? = nil
    do {
        let scene = try await Entity.load(named: sceneName, in: …bundle…)  // pass your Reality Composer Pro package bundle
        entity = scene.findEntity(named: entityName)
    } catch {
        print("Error loading \(entityName) from scene \(sceneName): \(error.localizedDescription)")
    }
    return entity
}
```


Start ARKitSession Example
```
// Start an ARKitSession with plane detection.
let session = ARKitSession()
let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

Task {
    try await session.run([planeData])

    for await update in planeData.anchorUpdates {
        // Skip planes that are windows.
        if update.anchor.classification == .window { continue }

        switch update.event {
        case .added, .updated:
            updatePlane(update.anchor)
        case .removed:
            removePlane(update.anchor)
        }
    }
}
```


Custom Shader (Underwater Apple example project, Octopus.swift)
```
// Custom Shader (Underwater Apple example project, Octopus.swift)
do {
    let surfaceShader = CustomMaterial.SurfaceShader(
        named: "octopusSurface",
        in: MetalLibLoader.library
    )
    try octopusModel.modifyMaterials {
        var mat = try CustomMaterial(from: $0, surfaceShader: surfaceShader)
        mat.custom.texture = .init(mask)
        mat.emissiveColor.texture = .init(bc2)
        return mat
    }
} catch {
    assertionFailure("Failed to set a custom shader on the octopus \(error)")
}
```

```
// Generate a mesh for the plane (for occlusion).
var meshResource: MeshResource? = nil
do {
    let contents = MeshResource.Contents(planeGeometry: anchor.geometry)
    meshResource = try MeshResource.generate(from: contents)
} catch {
    print("Failed to create a mesh resource for a plane anchor: \(error).")
    return
}

if let meshResource {
    // Make this plane occlude virtual objects behind it.
    entity.components.set(ModelComponent(mesh: meshResource, materials: [OcclusionMaterial()]))
}

// Generate a collision shape for the plane (for object placement and physics).
var shape: ShapeResource? = nil
do {
    let vertices = anchor.geometry.meshVertices.asSIMD3(ofType: Float.self)
    shape = try await ShapeResource.generateStaticMesh(positions: vertices,
                                                       faceIndices: anchor.geometry.meshFaces.asUInt16Array())
} catch {
    print("Failed to create a static mesh for a plane anchor: \(error).")
    return
}

if let shape {
    var collisionGroup = PlaneAnchor.verticalCollisionGroup
    if anchor.alignment == .horizontal {
        collisionGroup = PlaneAnchor.horizontalCollisionGroup
    }

    entity.components.set(CollisionComponent(shapes: [shape], isStatic: true,
                                             filter: CollisionFilter(group: collisionGroup, mask: .all)))
    // The plane needs to be a static physics body so that objects come to rest on the plane.
    let physicsMaterial = PhysicsMaterialResource.generate()
    let physics = PhysicsBodyComponent(shapes: [shape], mass: 0.0, material: physicsMaterial, mode: .static)
    entity.components.set(physics)
}
```
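Open and dismiss a Full Space from SwiftUI (a minimal sketch; the "ImmersiveSpace" id is an assumed placeholder that must match an ImmersiveSpace scene declared in your App)
```
// Open and dismiss an immersive (Full) Space from SwiftUI.
// Assumes the App declares: ImmersiveSpace(id: "ImmersiveSpace") { … }
import SwiftUI

struct ImmersionToggle: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @State private var isImmersed = false

    var body: some View {
        Button(isImmersed ? "Exit Immersive Space" : "Enter Immersive Space") {
            Task {
                if isImmersed {
                    await dismissImmersiveSpace()
                    isImmersed = false
                } else {
                    let result = await openImmersiveSpace(id: "ImmersiveSpace")
                    if case .opened = result { isImmersed = true }
                }
            }
        }
    }
}
```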
-----------------------------------------------------------------


## **Useful Links / Documentation**

GeometryReader3D
- https://developer.apple.com/documentation/swiftui/geometryreader3d

Checking whether your existing app is compatible with visionOS
- https://developer.apple.com/documentation/visionos/checking-whether-your-app-is-compatible-with-visionos

Bringing your ARKit app to visionOS
- https://developer.apple.com/documentation/visionos/bringing-your-arkit-app-to-visionos

Explaining ECS / Entity Component System
- https://medium.com/macoclock/realitykit-911-entity-component-system-ecs-bfe0520e0e8e

Implementing ECS in RealityKit
- https://developer.apple.com/documentation/realitykit/implementing-systems-for-entities-in-a-scene

Augmented Reality - Apple webpage
- https://developer.apple.com/augmented-reality/

Understanding RealityKit’s modular architecture
- https://developer.apple.com/documentation/visionos/understanding-the-realitykit-modular-architecture

ARKit Scene Reconstruction / Scene Collision
- https://developer.apple.com/documentation/visionos/incorporating-surroundings-in-an-immersive-experience

AR Design / UI UX
- https://developer.apple.com/design/human-interface-guidelines/augmented-reality

More Apple visionOS Documentation
- https://developer.apple.com/documentation/visionos/incorporating-real-world-surroundings-in-an-immersive-experience
- https://developer.apple.com/documentation/visionos/tracking-points-in-world-space


--------------

## **WWDC 2023 visionOS Videos**

Meet ARKit for spatial computing
- https://developer.apple.com/videos/play/wwdc2023/10082/

Evolve your ARKit app for spatial experiences
- https://developer.apple.com/videos/play/wwdc2023/10091/

Get started with building apps for spatial computing
- https://developer.apple.com/videos/play/wwdc2023/10260/

Optimize app power and performance for spatial computing
- https://developer.apple.com/videos/play/wwdc2023/10100/

Meet UIKit for spatial computing
- https://developer.apple.com/videos/play/wwdc2023/111215/

Meet Core Location for spatial computing
- https://developer.apple.com/videos/play/wwdc2023/10146/

Elevate your windowed app for spatial computing
- https://developer.apple.com/videos/play/wwdc2023/10110/

Enhance your iPad and iPhone apps for the Shared Space
- https://developer.apple.com/videos/play/wwdc2023/10094/

Create a great spatial playback experience
- https://developer.apple.com/videos/play/wwdc2023/10070/

Build spatial SharePlay experiences
- https://developer.apple.com/videos/play/wwdc2023/10087/

Deliver video content for spatial experiences
- https://developer.apple.com/videos/play/wwdc2023/10071/

Explore rendering for spatial computing
- https://developer.apple.com/videos/play/wwdc2023/10095/

Building Immersive Apps
Develop your first immersive app
- https://developer.apple.com/videos/play/wwdc2023/10203/

Bring your Unity VR app to a fully immersive space
- https://developer.apple.com/videos/play/wwdc2023/10093/

Create immersive Unity apps
- https://developer.apple.com/videos/play/wwdc2023/10088/


----------------------------------------------------------------------------------------------

Apple Sample Projects WWDC (ARKit, RealityKit) 2021 - 2023
- https://developer.apple.com/sample-code/wwdc/2021/
- https://developer.apple.com/sample-code/wwdc/2022/
- https://developer.apple.com/sample-code/wwdc/2023/


## **Apple visionOS Sample Projects 2023-24**
- https://developer.apple.com/documentation/realitykit/construct-an-immersive-environment-for-visionos
- https://developer.apple.com/documentation/realitykit/simulating-particles-in-your-visionos-app
- https://developer.apple.com/documentation/realitykit/simulating-physics-with-collisions-in-your-visionos-app
- https://developer.apple.com/documentation/visionos/happybeam
- https://developer.apple.com/documentation/visionos/incorporating-real-world-surroundings-in-an-immersive-experience
- https://developer.apple.com/documentation/visionos/placing-content-on-detected-planes


Other Sample Resources
- https://github.com/satoshi0212/visionOS_30Days
- https://github.com/hunterh37/VisionOS_AnimatedModelEntityDemo
- https://github.com/hunterh37/VisionOS_BouncyBalls
- https://github.com/hunterh37/VisionOS_SceneReconstructionDemo


## **Current Vision Pro Simulator Limitations**

- ARKitSession
  - PlaneDetectionProvider is not available in the simulator (visionOS beta 1.1, 7/27/23)
  - SceneReconstructionProvider is not available in the simulator (visionOS beta 1.1, 7/27/23)


--------------------------------------------------------------------------------