├── README.md ├── SUMMARY.md ├── animation └── animation.md ├── assets ├── asset-management.md └── images.md ├── bin ├── bullet2header ├── greplace └── identifier2code ├── business-logic ├── async-programming.md ├── navigation.md ├── state-management.md └── testing.md ├── core ├── conventions.md ├── framework.md ├── messaging.md ├── platform-integration.md └── types.md ├── data-model ├── boxes.md ├── elements.md ├── render-objects.md └── widgets.md ├── get-involved.md ├── interaction ├── focus.md └── gestures.md ├── learning-path.md ├── rendering ├── compositing.md ├── layout.md ├── painting.md └── semantics.md ├── scrolling ├── scrollable.md ├── viewport-layout.md └── viewports.md ├── slivers ├── container-slivers.md ├── dynamic-slivers.md ├── persistent-headers.md └── sliver-model.md ├── text ├── text-editing.md ├── text-input.md └── text-rendering.md └── user-interface ├── containers.md ├── decoration.md ├── material.md ├── tables.md └── themes.md /README.md: -------------------------------------------------------------------------------- 1 | # Introduction 2 | 3 | Welcome to Flutter Internals, a **community-maintained** open source book providing a guided tour through Flutter's implementation. 4 | 5 | This book is very much a "work in progress"; in fact, we'd **love your help** with grammatical fixes, technical edits, and new content. 6 | 7 | [**Click here**](https://app.gitbook.com/invite/flutter-internals?invite=-Lz8eupmUYQGm6UH34Dq) to become a contributor. 8 | 9 | ## What is this book? 10 | 11 | * The goal of this book is to provide intuitive descriptions of Flutter’s internals in an easily digestible outline format. 12 | * Descriptions are intended to be comprehensive without becoming bogged down in implementation details or sacrificing clarity. 13 | * This book strives to provide a “hawk’s eye view” of Flutter’s implementation. 14 | 15 | ## Who is the audience of this book? 
16 | 17 | * These notes are most useful once you have a solid understanding of how to use Flutter \(since they describe how the interfaces actually work\). 18 | * This outline was written for developers looking to build intuition about Flutter internals and Flutter hackers looking to ramp up on the codebase. 19 | * We also hope that this book can inspire more thorough learning materials \(such as deep dive videos or long form articles\). 20 | 21 | ## Who can contribute to this book? 22 | 23 | * Anyone \(thank you\)! If there’s a corner of the framework that you find confusing, please consider [becoming a contributor](https://app.gitbook.com/invite/flutter-internals?invite=-Lz8eupmUYQGm6UH34Dq) and updating the book with the relevant details. 24 | * **Please read the "Get Involved" section.** 25 | 26 | 27 | 28 | -------------------------------------------------------------------------------- /SUMMARY.md: -------------------------------------------------------------------------------- 1 | # Table of contents 2 | 3 | * [Introduction](README.md) 4 | * [Get Involved ❗](get-involved.md) 5 | * [Learning Path](learning-path.md) 6 | 7 | ## 🏭Core 8 | 9 | * [Framework](core/framework.md) 10 | * [Types](core/types.md) 11 | * [Messaging](core/messaging.md) 12 | * [Platform Integration](core/platform-integration.md) 13 | * [Conventions](core/conventions.md) 14 | 15 | ## 🌳Data Model 16 | 17 | * [Widgets](data-model/widgets.md) 18 | * [Elements](data-model/elements.md) 19 | * [Render Objects](data-model/render-objects.md) 20 | * [Boxes](data-model/boxes.md) 21 | 22 | ## 🎨Rendering 23 | 24 | * [Layout](rendering/layout.md) 25 | * [Compositing](rendering/compositing.md) 26 | * [Painting](rendering/painting.md) 27 | * [Semantics](rendering/semantics.md) 28 | 29 | ## 👆Interaction 30 | 31 | * [Gestures](interaction/gestures.md) 32 | * [Focus](interaction/focus.md) 33 | 34 | ## 🎥Animation 35 | 36 | * [Animation](animation/animation.md) 37 | 38 | ## 🏙Assets 39 | 40 | * [Asset 
Management](assets/asset-management.md) 41 | * [Images](assets/images.md) 42 | 43 | ## 🔠Text 44 | 45 | * [Text Rendering](text/text-rendering.md) 46 | * [Text Input](text/text-input.md) 47 | * [Text Editing](text/text-editing.md) 48 | 49 | ## 📜Scrolling 50 | 51 | * [Scrollable](scrolling/scrollable.md) 52 | * [Viewports](scrolling/viewports.md) 53 | * [Viewport Layout](scrolling/viewport-layout.md) 54 | 55 | ## 🥒Slivers 56 | 57 | * [Sliver Model](slivers/sliver-model.md) 58 | * [Persistent Headers](slivers/persistent-headers.md) 59 | * [Container Slivers](slivers/container-slivers.md) 60 | * [Dynamic Slivers](slivers/dynamic-slivers.md) 61 | 62 | ## 📱User Interface 63 | 64 | * [Containers](user-interface/containers.md) 65 | * [Decoration](user-interface/decoration.md) 66 | * [Themes](user-interface/themes.md) 67 | * [Tables](user-interface/tables.md) 68 | * [Material](user-interface/material.md) 69 | 70 | ## 🧠Business Logic 71 | 72 | * [Navigation](business-logic/navigation.md) 73 | * [State Management](business-logic/state-management.md) 74 | * [Async Programming](business-logic/async-programming.md) 75 | * [Testing](business-logic/testing.md) 76 | 77 | -------------------------------------------------------------------------------- /animation/animation.md: -------------------------------------------------------------------------------- 1 | # Animation 2 | 3 | ## How are animations scheduled? 4 | 5 | * `Window.onBeginFrame` invokes `SchedulerBinding.handleBeginFrame` every frame, which runs all transient callbacks scheduled during the prior frame. Ticker instances utilize transient callbacks \(via `SchedulerBinding.scheduleFrameCallback`\), and are therefore evaluated at this point. All tickers update their measure of elapsed time using the same frame timestamp, ensuring that tickers tick in unison. 6 | * `AnimationController` utilizes an associated `Ticker` to track the passage of time. 
When the ticker ticks, the elapsed time is provided to an internal simulation which transforms real time into a decimal value. The simulation \(typically `_InterpolationSimulation`\) interpolates between `AnimationController.lowerBound` and `AnimationController.upperBound` \(if spanning the animation’s full range\), or `AnimationController.value` and `AnimationController.target` \(if traversing from the current value to a new one\), applying a curve if available. Listeners are notified once the simulation produces a new value. 7 | * The animation’s behavior \(playing forward, playing backward, animating toward a target, etc.\) is a consequence of how this internal simulation is configured \(i.e., by reversing the bounds, by altering the duration, by using a `_RepeatingSimulation` or `SpringSimulation`\). Typically, the simulation is responsible for mapping from real time to a value representing animation progress. 8 | * In the general case, an `_InterpolationSimulation` is configured in `AnimationController._animateToInternal`. 9 | * Next, the controller’s ticker is started, which advances the simulation once per frame. The simulation is advanced using the elapsed time reported by the ticker. 10 | * Listeners are notified whenever the simulation is queried or reaches an endpoint, potentially changing the animation’s status \(`AnimationStatus`\). 11 | * Composing animations \(e.g., via `_AnimatedEvaluation` or `_ChainedEvaluation`\) works by proxying the underlying listenable \(i.e., by delegating listener operations to the parent animation, which advances as described above\). 12 | 13 | ## What is an animation? 14 | 15 | * An animation, as represented by `Animation`, traverses from zero to one \(and vice versa\) over a user-defined interval \(this is typically facilitated by an `AnimationController`, a special `Animation` that advances in real time\). 
The resulting value represents the animation’s progress \(i.e., a timing value\) and is often fed into a chain of animatables or descendant animations. These are re-evaluated every time the animation advances \(and therefore notifies its listeners\). Some descendants \(e.g., curves\) transform the animation’s timing value into a new timing value; these affect the animation’s rate of change \(e.g., easing\) but not its duration. Others produce derived values that can be used to update the UI \(e.g., colors, shapes, and sizes\). Repeatedly updating the UI using these values is the basis of animation. 16 | 17 | ## What are the animation building blocks? 18 | 19 | * `Animation` couples `Listenable` with `AnimationStatus` and produces a sequence of values with a beginning and an end. The animation’s status is derived from the sequence’s directionality and whether values are currently being produced. In particular, the animation can be stopped at the sequence’s beginning or end \(`AnimationStatus.dismissed`, `AnimationStatus.completed`\), or actively producing values in a particular order \(`AnimationStatus.forward`, `AnimationStatus.reverse`\). `Animation` extends `ValueListenable` which produces a sequence of values but does not track status. 20 | * `Animation<double>` is the most common specialization of `Animation<T>` \(and the only specialization that can be used with `Animatable`\); as a convention, `Animation<double>` produces values from zero to one, though it may overshoot this range before completing. These values are typically interpreted as the animation’s progress \(referred to as timing values, below\). How this interval is traversed over time determines the behavior of any downstream animatables. 21 | * `Animation` may represent other sequences, as well. For instance, an `Animation` might describe a sequence of border radii or line thicknesses. 22 | * More broadly, `Animation<T>`, where `T` is not a timing value, is devoid of conventional significance. 
Such animations progress through their values as described earlier, and are typically driven by a preceding `Animation<double>` \(that does represent a timing value\). 23 | * `Animatable` describes an animatable value, mapping an `Animation<double>` \(which ranges from zero to one\) to a sequence of derived values \(via `Animatable.evaluate`, which forwards the animation’s value to `Animatable.transform` to produce a new value of type `T`\). The animatable may be driven by an animation \(i.e., repeatedly evaluated as the animation generates notifications, via `Animatable.animate`\). It may also be associated with a parent animatable to create an evaluation chain \(i.e., the parent evaluates the animation value, then the child evaluates the parent’s value, via `Animatable.chain`\). Unless the parent animatable is driven by an animation, however, chaining will not cause the animatable to animate; it only describes a sequence of transformations. 24 | * `Animatable.evaluate` always maps from a double to a value of type `T`. Conventionally, the provided double represents a timing value \(ranging from zero to one\), but this is not a requirement. 25 | * `Tween` is a subclass of `Animatable` that linearly interpolates between beginning and end values of type `T` \(via `Tween.lerp`\). By default, algebraic linear interpolation is used, though many types implement custom methods \(via `T.lerp`\) or provide overloaded operators. 26 | * `TweenSequence` is an animatable that allows an animation to drive a sequence of tweens, associating each with a portion of the animation’s duration \(via `TweenSequenceItem.weight`\). 27 | * `Simulation` models an object in one-dimensional space with a position \(`Simulation.x`\), velocity \(`Simulation.dx`\), and completion status \(`isDone`\) using logical units. Simulations are queried using a time value, also in logical units; as some simulations may be stateful, queries should generally use increasing values. 
A `Tolerance` instance specifies epsilon values for time, velocity, and position to determine when the simulation has settled. 28 | * `AnimationController` is an `Animation<double>` subclass introducing explicit control and frame-synchronized timing. When active, the animation controller is driven once per frame \(typically 60 Hz\). This is facilitated by a corresponding `Ticker` instance, a synchronized timer that triggers at the beginning of each frame; this instance may change over the course of the controller’s lifespan \(via `AnimationController.resync`\). The animation can be run backwards and forwards \(potentially without bounds\), toward and away from target values \(via `AnimationController.animateTo` and `AnimationController.animateBack`\), or cyclically \(via `AnimationController.repeat`\). Animations can also be driven by a custom `Simulation` \(via `AnimationController.animateWith`\) or a built-in spring simulation \(via `AnimationController.fling`\). The controller’s value \(`AnimationController.value`\) is advanced in real time, using a duration \(`AnimationController.duration`\) to interpolate between starting and ending values \(`AnimationController.upperBound`, `AnimationController.lowerBound`\), both doubles. An immutable view of the animation is also exposed \(`AnimationController.view`\). 29 | * `Ticker` invokes a callback once per frame \(via a transient callback scheduled using `SchedulerBinding.scheduleFrameCallback`\), passing a duration corresponding to how long the ticker has been ticking. This duration is measured using a timestamp set at the beginning of the frame \(`SchedulerBinding.handleBeginFrame`\). All tickers advance using the same timestamp and are therefore synchronized. When a ticker is enabled, a transient frame callback is registered via `SchedulerBinding.scheduleFrameCallback`; this schedules a frame via `Window.scheduleFrame`, ensuring that the ticker will begin ticking. 30 | * Tickers measure a duration from when they first tick. 
If a ticker is stopped, the duration is reset and progress is lost. Muting a ticker allows time \(the duration\) to continue advancing while suppressing ticker callbacks. The animation will not progress while muted and will appear to jump ahead when unmuted. A ticker can absorb another ticker so that animation progress is not lost; that is, the new ticker will retain the old ticker’s elapsed time. 31 | * `TickerFuture` exposes the ticker status as a `Future`. When stopped, this future resolves; in all other cases, the future is unresolved. A derivative future, `TickerFuture.orCancel`, extends this interface to throw an exception if the ticker is cancelled. 32 | * `TickerProvider` vends `Ticker` instances. `TickerProviderStateMixin` and `SingleTickerProviderStateMixin` fulfill the `TickerProvider` interface within the context of a `State` object \(the latter has less overhead since it only tracks a single ticker\). These mixins query an inherited `TickerMode` that can enable and disable all descendant tickers en masse; this allows tickers to be muted and unmuted within a subset of the widget tree efficiently. 33 | * `AnimationLocalListenersMixin` and `AnimationLocalStatusListenersMixin` provide implementations for the two listenable interfaces supported by animations: value listeners \(`Animation.addListener`, `Animation.removeListener`\), and status listeners \(`Animation.addStatusListener`, `Animation.removeStatusListener`\). Both store listeners in a local `ObserverList` and support hooks indicating when a listener is registered and unregistered \(`didRegisterListener` and `didUnregisterListener`, respectively\). A number of framework subclasses depend on these mixins \(e.g., `AnimationController`\) since `Animation` doesn’t provide a concrete implementation. 34 | * `AnimationLazyListenerMixin` uses the aforementioned hooks to notify the client when there are no more listeners. 
This allows resources to be released until a listener is once again added \(via `AnimationLazyListenerMixin.didStartListening` and `AnimationLazyListenerMixin.didStopListening`\). 35 | * `AnimationEagerListenerMixin` ignores these hooks, instead introducing a dispose protocol; resources will be retained through the animation’s lifespan and therefore must be disposed before the instance is released. 36 | 37 | ## How are animations curved? 38 | 39 | * `Curve` determines an animation’s rate of change by specifying a mapping from input to output timing values \(i.e., from \[0, 1\] to \[0, 1\], though some curves stretch this interval, e.g., `ElasticInCurve`\). `Animation<double>` produces suitable input values that may then be transformed \(via `Curve.transform`\) into new timing values. Later, these values may be used to drive downstream animatables \(or further transformed\), effectively altering the animation’s perceived rate of change. 40 | * Geometrically, a curve may be visualized as mapping an input timing value \(along the X-axis\) to an output timing value \(along the Y-axis\), with zero corresponding to `AnimationStatus.dismissed` and one corresponding to `AnimationStatus.completed`. 41 | * Curves cannot alter the overall duration of an animation, but will affect the rate at which an animation advances during that interval. Additionally, even if they overshoot the unit interval, curves must map zero and one to values that round to zero or one, respectively. 42 | * There are a number of built-in curve instances: 43 | * `Cubic` defines a curve as a cubic function. 44 | * `ElasticInCurve`, `ElasticOutCurve`, `ElasticInOutCurve` define a spring-like curve that overshoots as it grows, shrinks, or settles, respectively. 45 | * `Interval` maps a curve to a subinterval, clamping to 0 or 1 at either end. 46 | * `Threshold` is 0 until a threshold is reached, then 1 thereafter. 
47 | * `SawTooth` produces N linear intervals, with no interpolation at the edges. 48 | * `FlippedCurve` transforms an input curve, mirroring it both horizontally and vertically. 49 | * `Curves` exposes a large number of pre-defined curves. 50 | * `CurvedAnimation` is an `Animation<double>` subclass that applies a curve to a parent animation \(via `AnimationWithParentMixin`\). As such, `CurvedAnimation` proxies the parent animation, transforming each value before any consumers may read it \(via `Curve.transform`\). `CurvedAnimation` also allows different curves to be used for forward and reverse directions. 51 | * `CurveTween` is an `Animatable` subclass that is analogous to `CurvedAnimation`. As an animatable, `CurveTween` delegates its transform \(via `Animatable.transform`\) to the provided curve transform \(via `Curve.transform`\). Since `CurveTween` doesn’t perform interpolation, but instead represents an arbitrary mapping, it isn’t actually a tween. 52 | * `AnimationController` includes built-in curve support \(via `_InterpolationSimulation`\). When the simulation is advanced to transform elapsed wall time into a timing value \(by querying `_InterpolationSimulation.x`\), a curve, if available, is applied when computing the new value. As the resulting value is generally interpreted as a timing value, this influences the perceived rate of change of the animation. 53 | 54 | ## How are animations composed? 55 | 56 | * `AnimationWithParentMixin` provides support for building animations that delegate to a parent animation. The various listener methods \(`AnimationWithParentMixin.addListener`, `AnimationWithParentMixin.addStatusListener`\) are forwarded to the parent; all relevant state is also read from the parent. Clients provide a value accessor that constructs a derivative value based on the parent’s value. 57 | * Composition is managed via `Animatable.chain` or `Animatable.animate`; `Animation.drive` delegates to the provided animatable. 
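The chaining and driving relationships described above can be sketched in plain Dart. This is an illustrative miniature, not the framework's actual classes; `SimpleAnimatable`, `SimpleTween`, and `EaseIn` are invented names standing in for `Animatable`, `Tween`, and a curve-like mapping.

```dart
// Minimal sketch of the Animatable evaluation chain (illustrative only).
abstract class SimpleAnimatable<T> {
  T transform(double t);

  // evaluate: forward the animation's current value to transform
  // (mirrors Animatable.evaluate delegating to Animatable.transform).
  T evaluate(double animationValue) => transform(animationValue);

  // chain: the parent maps the timing value first, then this animatable
  // maps the parent's output (mirrors Animatable.chain).
  SimpleAnimatable<T> chain(SimpleAnimatable<double> parent) =>
      _Chained<T>(parent, this);
}

class _Chained<T> extends SimpleAnimatable<T> {
  _Chained(this.parent, this.child);
  final SimpleAnimatable<double> parent;
  final SimpleAnimatable<T> child;

  @override
  T transform(double t) => child.transform(parent.transform(t));
}

class SimpleTween extends SimpleAnimatable<double> {
  SimpleTween({required this.begin, required this.end});
  final double begin, end;

  @override
  double transform(double t) => begin + (end - begin) * t; // linear lerp
}

class EaseIn extends SimpleAnimatable<double> {
  @override
  double transform(double t) => t * t; // a toy ease-in "curve"
}

void main() {
  // Tween from 0 to 100, driven through the ease-in mapping first.
  final tween = SimpleTween(begin: 0, end: 100).chain(EaseIn());
  print(tween.evaluate(0.5)); // 0.5 is squared to 0.25, then lerped to 25.0
}
```

Note the evaluation order: the chained parent transforms the raw timing value before the child sees it, which is why chaining a curve ahead of a tween changes the animation's rate of change without changing its endpoints.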
58 | * `_AnimatedEvaluation` is an `Animation` that applies an animatable to a parent animation. All listenable methods delegate to the parent animation \(via `AnimationWithParentMixin`\); thus, the resulting animation is driven by the parent. The value accessor is overridden so that the parent’s value may be transformed by the animatable \(via `Animatable.evaluate`\). 59 | * `_ChainedEvaluation` is an `Animatable` that combines a parent animatable with a child animatable. In particular, `_ChainedEvaluation.transform` first evaluates the parent animatable \(via `Animatable.evaluate`\), then passes this value to the child animatable. 60 | * `CompoundAnimation` is an `Animation` subclass that combines two animations. `CompoundAnimation.value` is overridden to produce a final value using the first and second animation’s values \(via `CompoundAnimation.first`, `CompoundAnimation.next`\). Note that `CompoundAnimation` is driven by two animations \(i.e., it ticks when either animation ticks\), unlike earlier composition examples that drive an animatable using a single parent animation. 61 | 62 | ## What are the higher level animation building blocks? 63 | 64 | * `ProxyAnimation` provides a read-only view of a parent animation that will reflect any changes to the original animation. It does this by proxying the animation listener methods as well as the status and value accessors. Additionally, `ProxyAnimation` supports replacing the parent animation inline; the transition is seamless from the perspective of any listeners. 65 | * `TrainHoppingAnimation` monitors two animations, switching from the first to the second when the second emits the same value as the first \(e.g., because it is reversed or moving toward the value more quickly\). `TrainHoppingAnimation` utilizes `AnimationEagerListenerMixin` because it relies on the parent animations’ notifications to determine when to switch tracks, regardless of whether there are any external listeners. 
66 | * `CompoundAnimation` combines two animations, ticking when either animation ticks \(this differs from, e.g., `Animatable.animate`, which drives an animatable via an animation\). The status is that of the second animation \(if it’s running\), else the first. The values are combined by overriding the `Animation.value` accessor; the constituent animations are referenced as `CompoundAnimation.first` and `CompoundAnimation.next`, respectively. This animation is lazy -- it will only listen to the sub-animations when it has listeners, and will avoid generating useless notifications. 67 | * `CompoundAnimation` is the basis of `MaxAnimation`, `MinAnimation`, and `MeanAnimation`. 68 | * `AlwaysStoppedAnimation` exposes a constant value and never changes status or notifies listeners. 69 | * `ReverseAnimation` plays an animation in reverse, using the appropriate status and direction. That is, if the parent animation is played forward \(e.g., via `AnimationController.forward`\), the `ReverseAnimation`’s status will be reversed. Moreover, the value reported by `ReverseAnimation` will be the inverse of the parent’s value assuming a \[0, 1\] range \(thus, one minus the parent’s value\). Note that this differs from simply reversing a tween \(e.g., tweening from one to zero\); though the values would be reversed, the animation status would be unchanged. 70 | 71 | ## What are the highest level animation building blocks? 72 | 73 | * `AnimatedWidget` is an abstract stateful widget that rebuilds whenever the provided listenable notifies its clients. When this happens, the associated state instance is marked dirty \(via `_AnimatedState.setState`\) and rebuilt. `_AnimatedState.build` delegates to the widget’s build method, which subclasses must implement; these utilize the listenable \(typically an animation\) to update the UI. 
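A hypothetical `AnimatedWidget` subclass might look like the following; `SpinningLogo` is an invented name, and the driving `Animation<double>` would typically come from an `AnimationController` owned elsewhere.

```dart
import 'dart:math' as math;

import 'package:flutter/material.dart';

// Hypothetical AnimatedWidget subclass: the widget rebuilds whenever the
// provided listenable (here, an animation) notifies its listeners, so
// build() can simply read the animation's current value.
class SpinningLogo extends AnimatedWidget {
  const SpinningLogo({super.key, required Animation<double> animation})
      : super(listenable: animation);

  @override
  Widget build(BuildContext context) {
    final animation = listenable as Animation<double>;
    return Transform.rotate(
      // One full turn as the animation traverses [0, 1].
      angle: animation.value * 2 * math.pi,
      child: const FlutterLogo(size: 48),
    );
  }
}
```

The subclass holds no state of its own; marking the associated `_AnimatedState` dirty on every tick is what re-runs this build method with the animation's latest value.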
74 | * `AnimatedBuilder` extends `AnimatedWidget` to accept a build function \(`TransitionBuilder`\); this builder is invoked whenever the widget rebuilds \(via `AnimatedBuilder.build`\). This allows clients to utilize the `AnimatedWidget` flow without creating an explicit subclass. 75 | 76 | ## How does implicit animation work? 77 | 78 | * `ImplicitlyAnimatedWidget` provides support for widgets that animate in response to changes to selected properties; the initial value is not animated. Though descendant widgets are only able to customize the animation’s duration and curve, `ImplicitlyAnimatedWidget` subclasses are often convenient in that they fully manage the underlying `AnimationController`. 79 | * Subclasses must use a `State` instance that extends `ImplicitlyAnimatedWidgetState`. Those that should be rebuilt \(i.e., marked dirty\) whenever the animation ticks extend `AnimatedWidgetBaseState` instead. 80 | * `ImplicitlyAnimatedWidgetState.forEachTween` is the engine that drives implicit animation. Subclasses implement this method such that the provided visitor \(`TweenVisitor`\) is invoked once per implicitly animatable property. 81 | * The visitor function requires three arguments: the current tween instance \(constructed by the superclass but cached locally, e.g., `ExampleState._opacityTween`\), the target value \(typically read from the widget, e.g., `ExampleState.widget.opacityValue`\), and a constructor \(`TweenConstructor`\) that returns a new tween instance starting at the provided value. The visitor returns an updated tween; this value is typically assigned to the same field associated with the first argument. 82 | * Tweens are constructed during state initialization \(via `ImplicitlyAnimatedWidgetState._constructTweens`\) for all implicitly animatable properties with non-null target values \(via `ImplicitlyAnimatedWidgetState.forEachTween`\). Tweens may also be constructed outside of this context as they transition from null to non-null target values. 
83 | * When the widget is updated \(via `ImplicitlyAnimatedWidgetState.didUpdateWidget`\), `ImplicitlyAnimatedWidgetState.forEachTween` steps through the subclass’s animatable properties to update the tweens’ bounds \(via `ImplicitlyAnimatedWidgetState._updateTween`\). The tween’s start is set using the current animation value \(to avoid jumping\), with the tween’s end set to the target value. 84 | * Last, the animation is played forward if the tween wasn’t already animating toward the target value \(i.e., the tween’s previous endpoint didn’t match the target value, via `ImplicitlyAnimatedWidgetState._shouldAnimateTweens`\). 85 | * The subclass is responsible for using the animation \(`ImplicitlyAnimatedWidgetState.animation`\) and tween directly \(i.e., by evaluating the tween using the animation’s current value\). 86 | 87 | -------------------------------------------------------------------------------- /assets/asset-management.md: -------------------------------------------------------------------------------- 1 | # Asset Management 2 | 3 | ## How are assets managed? 4 | 5 | * `AssetBundle` is a container that provides asynchronous access to application resources \(e.g., images, strings, fonts\). Resources are associated with a string-based key and can be retrieved as bytes \(via `AssetBundle.load`\), a string \(via `AssetBundle.loadString`\), or structured data \(via `AssetBundle.loadStructuredData`\). A variety of subclasses support different methods for obtaining assets \(e.g., `PlatformAssetBundle`, `NetworkAssetBundle`\). Some bundles also support caching; if so, keys can be evicted from the bundle’s cache \(via `AssetBundle.evict`\). 6 | * `CachingAssetBundle` caches strings and structured data throughout the application’s lifetime \(unless explicitly evicted\). Binary data is not cached since the higher level methods are built atop `AssetBundle.load`, and the final representation is more efficient to store. 
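The key-based caching behavior described above can be sketched in plain Dart. This is an illustrative miniature under invented names (`SketchBundle`, `FakeBundle`), not Flutter's real `CachingAssetBundle`; the central idea is that the pending `Future` itself is cached, so concurrent callers share a single fetch.

```dart
import 'dart:async';

// Simplified sketch of string caching in an asset bundle (illustrative).
abstract class SketchBundle {
  final Map<String, Future<String>> _stringCache = {};

  // Subclasses fetch the raw data (e.g., from disk or the network).
  Future<String> load(String key);

  // Cache the Future by key for the bundle's lifetime.
  Future<String> loadString(String key) =>
      _stringCache.putIfAbsent(key, () => load(key));

  // Explicit eviction, analogous to AssetBundle.evict.
  void evict(String key) => _stringCache.remove(key);
}

class FakeBundle extends SketchBundle {
  int fetches = 0; // count real fetches to demonstrate caching

  @override
  Future<String> load(String key) async {
    fetches++;
    return 'contents of $key';
  }
}

Future<void> main() async {
  final bundle = FakeBundle();
  await bundle.loadString('a.txt');
  await bundle.loadString('a.txt'); // served from cache
  print(bundle.fetches); // 1
  bundle.evict('a.txt');
  await bundle.loadString('a.txt'); // refetched after eviction
  print(bundle.fetches); // 2
}
```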
7 | * Every application is associated with a `rootBundle`. This `AssetBundle` contains the resources that were packaged when the application was built \(i.e., as specified by `pubspec.yaml`\). Though this bundle can be queried directly, `DefaultAssetBundle` provides a layer of indirection so that different bundles can be substituted \(e.g., for testing or localization\). 8 | 9 | ## How are assets fetched? 10 | 11 | * `NetworkAssetBundle` loads resources over the network. It does not implement caching; presumably, this is provided by the network layer. It provides a thin wrapper around Dart’s `HttpClient`. 12 | * `PlatformAssetBundle` is a `CachingAssetBundle` subclass that fetches resources from a platform-specific application directory via platform messaging \(specifically, `Engine::HandleAssetPlatformMessage`\). 13 | 14 | -------------------------------------------------------------------------------- /assets/images.md: -------------------------------------------------------------------------------- 1 | # Images 2 | 3 | ## How are images represented? 4 | 5 | * At the lowest level, images are represented as a `Uint8List` \(i.e., an opaque list of unsigned bytes\). These bytes can be expressed in any number of image formats, and must be decoded to a common representation by a codec. 6 | * `instantiateImageCodec` accepts a list of bytes and returns the appropriate engine codec, already bound to the provided image data. This function accepts an optional width and height; if these do not match the image’s intrinsic size, the image is scaled accordingly. If only one dimension is provided, the other dimension remains the intrinsic dimension. `PaintingBinding.instantiateImageCodec` provides a thin wrapper around this function with the intention of eventually supporting additional processing. 7 | * `Codec` represents the application of a codec on a pre-specified image array. Codecs process both single frames and animated images. 
Once the `Codec` is retrieved via `instantiateImageCodec`, the decoded `FrameInfo` \(which contains the image\) may be requested via `Codec.getNextFrame`; this may be invoked repeatedly for animations, and will automatically wrap to the first frame. The `Codec` must be disposed when no longer needed \(the image data remains valid\). 8 | * `DecoderCallback` provides a layer of indirection between image decoding \(via the `Codec` returned by `instantiateImageCodec`\) and any additional decoding necessary for an image \(e.g., resizing\). It is primarily used with `ImageProvider` to encapsulate decoding-specific implementation details. 9 | * `FrameInfo` corresponds to a single frame in an animated image \(single images are considered one-frame animations\). Duration, if applicable, is exposed via `FrameInfo.duration`. Meanwhile, the decoded `Image` may be read as `FrameInfo.image`. 10 | * `Image` is an opaque handle to decoded image pixels managed by the engine, with a width and a height. The decoded bytes can be obtained via `Image.toByteData` which accepts an `ImageByteFormat` specifying the desired encoding \(e.g., `ImageByteFormat.rawRgba`, `ImageByteFormat.png`\). However, the raw bytes are often not required as the `Image` handle is sufficient to paint images to the screen. 11 | * `ImageInfo` associates an `Image` with a pixel density \(i.e., `ImageInfo.scale`\). Scale describes the number of image pixels per one side of a logical pixel \(e.g., a scale of `2.0` implies that each 1x1 logical pixel corresponds to 2x2 image pixels; that is, a 100x100 pixel image would be painted into a 50x50 logical pixel region, providing up to twice the detail depending on the display\). 12 | 13 | ## What are the building blocks for managing image data? 14 | 15 | * The image framework must account for a variety of cases that complicate image handling. 
Some images are obtained asynchronously; others are arranged into image sets so that an optimal variant can be selected at runtime \(e.g., for the current resolution\). Others correspond to animations that update at regular intervals. Any of these images may be cached to avoid unnecessary loading. 16 | * `ImageStream` provides a consistent handle to a potentially evolving image resource; changes may be due to loading, animation, or explicit mutation. Changes are driven by a single `ImageStreamCompleter`, which notifies the `ImageStream` whenever concrete image data is available or changes \(via `ImageInfo`\). The `ImageStream` forwards notifications to one or more listeners \(i.e., `ImageStreamListener` instances\), which may be invoked multiple times as the image loads or mutates. Each `ImageStream` is associated with a key that can be used to determine whether two `ImageStream` instances are backed by the same completer \[?\]. 17 | * `ImageStreamListener` encapsulates a set of callbacks for responding to image events. If the image is being loaded \(e.g., via the network\), an `ImageChunkListener` is invoked with an `ImageChunkEvent` describing overall progress. If an image has become available, an `ImageListener` is invoked with the final `ImageInfo` \(including a flag indicating whether the image was loaded synchronously\). Last, if the image has failed to load, an `ImageErrorListener` is invoked. 18 | * The chunk listener is only called when an image must be loaded \(e.g., via `NetworkImage`\). It may also be called after the `ImageListener` if the image is an animation \(i.e., another frame is being fetched\). 19 | * The `ImageListener` may be invoked multiple times if the associated image is an animation \(i.e., once per frame\). 20 | * `ImageStreamListener` instances are compared on the basis of the contained callbacks. 21 | * `ImageStreamCompleter` manages image loading for an `ImageStream` from an asynchronous source \(typically a `Codec`\). 
A list of `ImageStreamListener` instances is notified whenever image data becomes available \(i.e., the completer “completes”\), either in part \(via `ImageStreamListener.onImageChunk`\) or in whole \(via `ImageStreamListener.onImage`\). Listeners may be invoked multiple times \(e.g., as chunks are loaded or with multiple animation frames\). The completer notifies listeners when an image becomes available \(via `ImageStreamCompleter.setImage`\). Adding listeners after the image has been loaded will trigger synchronous notifications; this is how the `ImageCache` avoids refetching images unnecessarily. 22 | * The corresponding `Image` must be resolved to an `ImageInfo` \(i.e., by incorporating scale\); the scale is often provided explicitly. 23 | * `OneFrameImageStreamCompleter` handles one-frame \(i.e., single\) images. The corresponding `ImageInfo` is provided as a future; when this future resolves, `OneFrameImageStreamCompleter.setImage` is invoked, notifying listeners. 24 | * `MultiFrameImageStreamCompleter` handles multi-frame images \(e.g., animations or engine frames\), completing once per animation frame as long as there are listeners. If the image is only associated with a single frame, that frame is emitted immediately. An optional stream of `ImageChunkEvents` allows loading status to be conveyed to the attached listeners. Note that adding a new listener will attempt to decode the next frame; this is safe, if inefficient, as `Codec.getNextFrame` automatically cycles. 25 | * The next frame is eagerly decoded by the codec \(via `Codec.getNextFrame`\). Once available, a non-repeating callback is scheduled to emit the frame after the corresponding duration has elapsed \(via `FrameInfo.duration`\); the first frame is emitted immediately. If there are additional frames \(via `Codec.frameCount`\), or the animation cycles \(via `Codec.repetitionCount`\), this process is repeated.
Frames are emitted via `MultiFrameImageStreamCompleter.setImage`, notifying all subscribed listeners. 26 | * In this way, the next frame is decoded eagerly but only emitted during the first application frame after the duration has elapsed. If at any point there are no listeners, the process is paused; no frames are decoded or emitted until a listener is added. 27 | * A singleton `ImageCache` is created by the `PaintingBinding` during initialization \(via `PaintingBinding.createImageCache`\). The cache maps keys to `ImageStreamCompleters`, retaining only the most recently used entries. Once a maximum number of entries or bytes is reached, the least recently accessed entries are evicted. Note that any images actively retained by the application \(e.g., `Image`, `ImageInfo`, `ImageStream`, etc.\) cannot be invalidated by this cache; the cache is only useful when locating an `ImageStreamCompleter` for a given key. If a completer is found, and the image has already been loaded, the listener is notified with the image synchronously. 28 | * `ImageCache.putIfAbsent` serves as the main interface to the cache. If a key is found, the corresponding `ImageStreamCompleter` is returned. Otherwise, the completer is built using the provided closure. In both cases, the timestamp is updated. 29 | * Because images are loaded asynchronously, the cache policy can only be enforced once the image loads. Thus, the cache maintains two maps: `ImageCache._pendingImages` and `ImageCache._cache`. On a cache miss, the newly built completer is added to the pending map and assigned an `ImageStreamListener`; when the listener is notified, the final image size is calculated, the listener is removed, and the cache policy is applied. The completer is then moved to the cache map. 30 | * If an image fails to load, it does not contribute to cache size but it does consume an entry. If an image is too large for the cache, the cache is expanded to accommodate the image with some headroom.
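The two-map bookkeeping described above \(a pending map plus an LRU-ordered cache map\) can be sketched in a few lines. The following is an illustrative Python model, not Flutter's Dart implementation; the names are hypothetical, and the policy is simplified to an entry-count limit:

```python
from collections import OrderedDict

class ImageCacheModel:
    """Illustrative model of ImageCache.putIfAbsent: completers wait in a
    pending map until their size is known, then move to an LRU cache map."""

    def __init__(self, maximum_size=3):
        self.maximum_size = maximum_size
        self._pending = {}           # key -> completer awaiting its first image
        self._cache = OrderedDict()  # key -> completer, least recently used first

    def put_if_absent(self, key, loader):
        if key in self._cache:
            self._cache.move_to_end(key)  # cache hit: refresh the entry's timestamp
            return self._cache[key]
        if key in self._pending:
            return self._pending[key]     # already loading; reuse the completer
        completer = loader()              # cache miss: build via the provided closure
        self._pending[key] = completer
        return completer

    def on_image_loaded(self, key):
        # Once the image (and hence its size) is known, apply the cache policy.
        completer = self._pending.pop(key)
        self._cache[key] = completer
        while len(self._cache) > self.maximum_size:
            self._cache.popitem(last=False)  # evict the least recently used entry
        return completer
```

Here, `on_image_loaded` stands in for the internal `ImageStreamListener` that fires once the image is available; only then can the size-based policy of the real cache be enforced.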
31 | * `ImageConfiguration` describes the operating environment so that the best image can be selected from a set of alternatives \(e.g., a double-resolution image for a retina display\); this is the primary input to `ImageProvider`. A configuration can be extracted from the element tree via `createLocalImageConfiguration`. 32 | * `ImageProvider` identifies an image without committing to a specific asset. This allows the best variant to be selected according to the current `ImageConfiguration`. Any images managed via `ImageProvider` are passed through the global `ImageCache`. 33 | * `ImageProvider.obtainKey` produces a key that uniquely identifies a specific image \(including scale\) given an `ImageConfiguration` and the provider’s settings. 34 | * `ImageProvider.load` builds an `ImageStreamCompleter` for a given key. The completer begins fetching the image immediately and decodes the resulting bytes via the `DecoderCallback`. 35 | * `ImageProvider.resolve` wraps both methods to \(1\) obtain a key \(via `ImageProvider.obtainKey`\), \(2\) query the cache using the key, and \(3\) if no completer is found, create an `ImageStreamCompleter` \(via `ImageProvider.load`\) and update the cache. 36 | * `precacheImage` provides a convenient wrapper around `ImageProvider` so that a given image can be added to the `ImageCache`. So long as the same key is used for subsequent accesses, the image will be available immediately \(provided that it has fully loaded\). 37 | 38 | ## How are images provided and painted? 39 | 40 | * `ImageProvider` federates access to images, selecting the best image given the current environment \(i.e., `ImageConfiguration`\). The provider computes a key that uniquely identifies the asset to be loaded; this creates or retrieves an `ImageStreamCompleter` from the cache.
Various provider subclasses override `ImageProvider.load` to customize how the completer is configured; most also return a `SynchronousFuture` from `ImageProvider.obtainKey` so that, when possible, the image can be provided without waiting for the next frame. The `ImageStreamCompleter` is constructed with a future resolving to a bound codec \(i.e., associated with raw image bytes\). These bytes may be obtained in a variety of ways: from the network, from memory, from an `AssetBundle`, etc. The completer accepts an optional stream of `ImageChunkEvents` so that any listeners are notified as the image loads. Once the raw image has been read into memory, an appropriate codec is provided by the engine \(via a `DecoderCallback`, which generally delegates to `PaintingBinding.instantiateImageCodec`\). This codec is used to decode frames \(potentially multiple times for animated images\). As frames are decoded, listeners \(e.g., an image widget\) are notified with the finalized `ImageInfo` \(which includes decoded bytes and scale data\). The resulting image may be painted directly via `paintImage`. 41 | 42 | ## What image providers are available? 43 | 44 | * `FileImage` provides images from the file system. As its own key, `FileImage` overrides the equality operator to compare the target file name and scale. A `MultiFrameImageStreamCompleter` is configured with the provided scale, and a `Codec` instantiated using bytes loaded from the file \(via `File.readAsBytes`\). The completer will only notify listeners when the image is fully loaded. 45 | * `MemoryImage` provides images directly from an immutable array of bytes. As its own key, `MemoryImage` overrides the equality operator to compare scale as well as the actual bytes. A `MultiFrameImageStreamCompleter` is configured with the provided scale, and a `Codec` instantiated using the provided bytes. The completer will only notify listeners when the image is fully loaded.
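Since most providers act as their own keys, value equality is what makes the cache effective: two independently constructed but identical providers must resolve to the same completer. The following is a minimal Python sketch of that idea \(the names are hypothetical; Flutter's actual keys are provider-specific classes\):

```python
from dataclasses import dataclass

# Hypothetical key modelled on FileImage, which acts as its own cache key.
# Two keys constructed with the same path and scale compare equal, so
# resolving either one hits the same cache entry.
@dataclass(frozen=True)
class FileImageKey:
    file_path: str
    scale: float

cache = {}

def resolve(key, load):
    # Simplified stand-in for ImageProvider.resolve: consult the cache
    # first; only build a new completer (via load) on a miss.
    if key not in cache:
        cache[key] = load()
    return cache[key]
```

Because `FileImageKey` is a frozen dataclass, equality and hashing are value-based, mirroring the overridden equality operators described above.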
46 | * `NetworkImage` defines a thin interface to support different means of providing images from the network; it relies on instances of itself for a key. 47 | * `io.NetworkImage` implements this interface using `Dart`’s standard `HttpClient` to retrieve images. As its own key, `io.NetworkImage` overrides the equality operator to compare the target `URL` and scale. A `MultiFrameImageStreamCompleter` is configured with the provided scale, and a `Codec` instantiated using the consolidated bytes produced by `HttpClient.getUrl`. Unlike the other providers, `io.NetworkImage` will report loading status to its listeners via a stream of `ImageChunkEvents`. This relies on the “Content-Length” header being correctly reported by the remote server. 48 | * `AssetBundleImageProvider` provides images from an `AssetBundle` using `AssetBundleImageKey`. The key comprises a specific asset bundle, asset key, and image scale. A `MultiFrameImageStreamCompleter` is configured with the provided scale, and a `Codec` instantiated using bytes loaded from the bundle \(via `AssetBundle.load`\). The completer will only notify listeners when the image is fully loaded. 49 | * `ExactAssetImage` is a subclass that allows the bundle, asset, and image scale to be set explicitly, rather than read from an `ImageConfiguration`. 50 | * `AssetImage` is a subclass that resolves to the most appropriate asset given a set of alternatives and the current runtime environment. Primarily, this subclass selects assets optimized for the device’s pixel ratio using a simple naming convention. Assets are organized into logical directories within a given parent. Directories are named “Nx/”, where N corresponds to the image’s intended scale; the default asset \(with 1:1 scaling\) is rooted within the parent itself. The variant that most closely matches the current pixel ratio is selected.
51 | * The main difference from the superclass is the method by which keys are produced; all other functionality \(e.g., `AssetImage.load`, `AssetImage.resolve`\) is inherited. 52 | * A `JSON`-encoded asset manifest is produced from the pubspec file at build time. This manifest is parsed to locate variants of each asset according to the scheme described above; from this list, the variant nearest the current pixel ratio is identified. A key is produced using this asset’s scale \(which may not match the device’s pixel ratio\), its fully qualified name, and the bundle that was used. The completer is configured by the superclass. 53 | * The equality operator is overridden such that only the unresolved asset name and bundle are consulted; scale \(and the best-fitting asset name\) are excluded from the comparison. 54 | * `ResizeImage` wraps another `ImageProvider` to support size-aware caching. Ordinarily, images are decoded using their intrinsic dimensions \(via `instantiateImageCodec`\); consequently, the version of the image stored in the `ImageCache` corresponds to the full-size image. This is inefficient for images that are displayed at a different size. `ResizeImage` addresses this by augmenting the underlying key with the requested dimensions; it also applies a `DecoderCallback` that forwards these dimensions via `instantiateImageCodec`. 55 | * The first time an image is provided, it is loaded using the underlying provider \(via `ImageProvider.load`, which doesn’t update the cache\). The resulting `ImageStreamCompleter` is cached using the `ResizeImage`’s key \(i.e., `_SizeAwareCacheKey`\). 56 | * Subsequent accesses will hit the cache, which returns an image with the corresponding dimensions. Usages with different dimensions will result in additional entries being added to the cache. 57 | 58 | ## What are the building blocks for image rendering? 59 | 60 | * There are several auxiliary classes allowing image rendering to be customized.
`BlendMode` specifies how pixels from source and destination images are combined during compositing \(e.g., `BlendMode.multiply`, `BlendMode.overlay`, `BlendMode.difference`\). `ColorFilter` specifies a function combining two colors into an output color; this function is applied before any blending. `ImageFilter` provides a handle to an image filter applied during rendering \(e.g., Gaussian blur, scaling transforms\). `FilterQuality` allows the quality/performance of said filter to be broadly customized. 61 | * `Canvas` exposes the lowest-level API for painting images into layers. The principal methods include `Canvas.drawImage`, which paints an image at a particular offset, `Canvas.drawImageRect`, which copies pixels from a source rectangle to a destination rectangle, `Canvas.drawAtlas`, which does the same for a variety of rectangles using a “sprite atlas,” and `Canvas.drawImageNine`, which slices an image into a non-uniform 3x3 grid, scaling the cardinal and center boxes to fill a destination rectangle \(the corners are copied directly\). Each of these methods accepts a `Paint` instance to be used when compositing the image \(e.g., allowing a `BlendMode` to be specified\); each also calls directly into the engine to perform any actual painting. 62 | * `paintImage` wraps the canvas API to provide an imperative API for painting images in a variety of styles. It adds support for applying a box fit \(e.g., `BoxFit.cover` to ensure the image covers the destination\) and repeated painting \(e.g., `ImageRepeat.repeat` to tile an image to cover the destination\), managing layers as necessary. 63 | 64 | ## How are images integrated with the render tree? 65 | 66 | * `Image` encapsulates a variety of widgets, providing a high-level interface to the image rendering machinery. This widget configures an `ImageProvider` \(selected based on the named constructor, e.g., `Image.network`, `Image.asset`, `Image.memory`\) which it resolves to obtain an `ImageStream`.
Whenever this stream emits an `ImageInfo` instance, the widget is rebuilt and repainted. Likewise, if the widget is reconfigured, the `ImageProvider` is re-resolved, and the process repeated. From this flow, `Image` extracts the necessary data to fully configure a `RawImage` widget, which manages the actual `RenderImage`. 67 | * If a cache width or height is provided, the underlying `ImageProvider` is wrapped in a `ResizeImage` \(via `Image._resizeIfNeeded`\). This ensures that the image is decoded and cached using the provided dimensions, potentially limiting the amount of memory used. 68 | * `Image` adds support for image chrome \(e.g., a loading indicator\) and semantic annotations. 69 | * If animations are disabled by `TickerMode`, `Image` pauses rendering of any new animation frames provided by the `ImageStream` for consistency. 70 | * The `ImageConfiguration` passed to `ImageProvider` is retrieved from the widget environment via `createLocalImageConfiguration`. 71 | * `RawImage` is a `LeafRenderObjectWidget` wrapping a `RenderImage` and all necessary configuration data \(e.g., the `ui.Image`, scale, dimensions, blend mode\). 72 | * `RenderImage` is a `RenderBox` leaf node that paints a single image; as such, it relies on the widget system to repaint whenever the associated `ImageStream` emits a new frame. Painting is performed by `paintImage` using a destination rectangle sized by layout and positioned at the current offset. Alignment, box fit, and repetition determine how the image fills the available space. 73 | * There are two types of dimensions considered during layout: the image’s intrinsic dimensions \(i.e., the image’s pixel dimensions divided by its scale\) and the requested dimensions \(e.g., the value of width and height specified by the caller\).
74 | * During layout, the incoming constraints are applied to the requested dimensions \(via `RenderImage._sizeForConstraints`\): first, the requested dimensions are clamped to the constraints. Next, the result is adjusted to match the image’s intrinsic aspect ratio while remaining as large as possible. If there is no image associated with the render object, the smallest possible size is selected. 75 | * The intrinsic dimension methods apply the same logic. However, instead of using the incoming constraints, one dimension is fixed \(i.e., corresponding to the method’s parameter\) whereas the other is left unconstrained. 76 | 77 | -------------------------------------------------------------------------------- /bin/bullet2header: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | TMP=`mktemp` 3 | for f in $(find . -name '*.md'); do 4 | # Replace all top-level bullets with second-level headers, wrap with newlines, and promote nested bullets. 5 | sed -e $'/^\*/s/^\*\(.*\)$/\\\n##\\1\\\n/' -e '/\s*\*/s/^ //' "$f" > "$TMP" 6 | mv "$TMP" "$f" 7 | done 8 | -------------------------------------------------------------------------------- /bin/greplace: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | TMP=`mktemp` 4 | for f in $(find . -name '*.md' | xargs grep -lr "$1"); do 5 | # Replace all occurrences of the first argument (a regex) with the second in each matching file. 6 | sed -E "s/$1/$2/g" "$f" > "$TMP" 7 | mv "$TMP" "$f" 8 | done 9 | -------------------------------------------------------------------------------- /bin/identifier2code: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | TMP=`mktemp` 3 | for f in $(find . -name '*.md'); do 4 | # Find everything that looks like an identifier and mark it as such.
5 | sed -E -e 's/(\\_)[a-zA-Z.\\_]+|((\\_)|[a-zA-Z0-9])+((\\_)|[A-Z.])((\\_)|[a-zA-Z0-9]|\.([a-zA-Z0-9]|(\\_)))+/`&`/g' "$f" > "$TMP" 6 | mv "$TMP" "$f" 7 | done 8 | 9 | -------------------------------------------------------------------------------- /business-logic/async-programming.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: TODO 3 | --- 4 | 5 | # Async Programming 6 | 7 | -------------------------------------------------------------------------------- /business-logic/navigation.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: TODO 3 | --- 4 | 5 | # Navigation 6 | 7 | ## How does navigation work? 8 | 9 | ## How does navigation integrate with gestures? 10 | 11 | ## When and where are routes rendered? 12 | 13 | ## How does local history work? 14 | 15 | * `ModalRoute` 16 | 17 | ## How do overlays work? 18 | 19 | -------------------------------------------------------------------------------- /business-logic/state-management.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: WORK IN PROGRESS 3 | --- 4 | 5 | # State Management 6 | 7 | ## What is State Management? 8 | 9 | Let's start by talking about what State even IS. 10 | 11 | In the broadest possible sense, the state of an app is everything that exists in memory when the app is running. This includes the app’s assets, all the variables that the Flutter framework keeps about the UI, animation state, textures, fonts, and so on. While this broadest possible definition of state is valid, it’s not very useful for the architecture of an app. 12 | 13 | When we're discussing managing state, it's really a discussion about best practices for designing the code that changes your application's state. There are many libraries that provide techniques to help us manage state, many of which were around before Flutter.
As this guide is developed, we intend to talk about those different approaches. 14 | 15 | ### Why is State Management important? 16 | 17 | 18 | 19 | -------------------------------------------------------------------------------- /business-logic/testing.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: TODO 3 | --- 4 | 5 | # Testing 6 | 7 | -------------------------------------------------------------------------------- /core/conventions.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: TODO 3 | --- 4 | 5 | # Conventions 6 | 7 | * `performAction` is internal, `action` is external. 8 | * `computeValue` is internal, `getValue` is external. 9 | * Recursive calls should use the external variant. 10 | * `adoptChild` and `dropChild` from `AbstractNode` must be called after updating the actual child model to allow the framework to react accordingly. 11 | * Many widgets are composed of lower-level widgets, generally prefixed with `Raw` \(e.g., `GestureDetector` and `RawGestureDetector`, `Chip` and `RawChip`, `MaterialButton` and `RawMaterialButton`\). `Text` and `RichText` are an exception. 12 | 13 | -------------------------------------------------------------------------------- /core/framework.md: -------------------------------------------------------------------------------- 1 | # Framework 2 | 3 | ## How is the app bootstrapped? 4 | 5 | * `runApp` kicks off binding initialization by invoking the `WidgetsFlutterBinding`/`RenderingFlutterBinding.ensureInitialized` static method. This calls each binding’s `initInstances` method, allowing each to initialize in turn. 6 | * This flow is built using mixin chaining: each of the concrete bindings \(e.g., `WidgetsFlutterBinding`\) extends `BindingBase`, the superclass constraint shared by all binding mixins \(e.g., `GestureBinding`\).
Consequently, common methods \(like `BindingBase.initInstances`\) can be chained together via super invocations. These calls are linearized from left-to-right, starting with the superclass and proceeding sequentially through the mixins; this strict order allows later bindings to depend on earlier ones. 7 | * `RendererBinding.initInstances` creates the `RenderView`, passing an initial `ViewConfiguration` \(describing the size and density of the render surface\). It then prepares the first frame \(via `RenderView.prepareInitialFrame`\); this schedules the initial layout and initial paint \(via `RenderView.scheduleInitialLayout` and `RenderView.scheduleInitialPaint`; the latter creates the root layer, a `TransformLayer`\). This marks the `RenderView` as dirty for layout and painting but does not actually schedule a frame. 8 | * This is important since users may wish to begin interacting with the framework \(by initializing bindings via each concrete binding’s `ensureInitialized` static method\) before starting up the app \(via `runApp`\). For instance, a plugin may need to block on a backend service before it can be used. 9 | * Finally, the `RendererBinding` installs a persistent frame callback to actually draw the frame \(`WidgetsBinding` overrides the method invoked by this callback to add the build phase\). Note that nothing will invoke this callback until the `Window.onDrawFrame` handler is installed. This will only happen once a frame has actually been scheduled. 10 | * Returning to `runApp`, `WidgetsBinding.scheduleAttachRootWidget` asynchronously creates a `RenderObjectToWidgetAdapter`, a `RenderObjectWidget` that inserts its child \(i.e., the app’s root widget\) into the provided container \(i.e., the `RenderView`\). 11 | * This asynchronicity is necessary to avoid scheduling two builds back-to-back; while this isn’t strictly invalid, it is inefficient and may trigger asserts in the framework.
12 | * If the initial build weren’t asynchronous, it would be possible for intervening events to re-dirty the tree before the warm-up frame is scheduled. This would result in a second build \(without an intervening layout pass, etc.\) when rendering the warm-up frame. By ensuring that the initial build is scheduled asynchronously, there will be no render tree to dirty until the platform is initialized. 13 | * For example, the engine may report user settings changes during initialization \(via the `_updateUserSettingsData` hook\). This invokes callbacks on the window \(e.g., `Window.onTextScaleFactorChanged`\), which are forwarded to all `WidgetsBindingObservers` \(e.g., via `RendererBinding.handleTextScaleFactorChanged`\). As an observer, `WidgetsApp` reacts to the settings data by requesting a rebuild. 14 | * It then invokes `RenderObjectToWidgetAdapter.attachToRenderTree` to bootstrap and mount an element to serve as the root of the element hierarchy \(`RenderObjectToWidgetElement`, i.e., the element corresponding to the adapter\). If the element already exists, which will only happen if `runApp` is called again, its associated widget is updated \(`RenderObjectToWidgetElement._newWidget`\) and marked as needing to be built. 15 | * `RenderObjectToWidgetElement.updateChild` is invoked when this element is mounted or rebuilt, inflating or updating the child widget \(i.e., the app’s root widget\) accordingly. Once a descendant `RenderObjectWidget` is inflated, the corresponding render object \(which must be a `RenderBox`\) will be inserted into the `RenderView` \(via `RenderObjectToWidgetElement.insertChildRenderObject`\). The resulting render tree is managed in the usual way going forward. 16 | * A reference to this element is stored in `WidgetsBinding.renderViewElement`, serving as the root of the element tree. As a `RootRenderObjectElement`, this element establishes the `BuildOwner` for its descendants.
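The left-to-right linearization of `initInstances` described earlier can be illustrated outside Dart. The following Python analogy uses hypothetical names and mimics the super-chaining: because each override calls `super().init_instances()` before doing its own work, the base initializes first and later mixins may rely on earlier ones. Note that Python linearizes base classes right to left, so the Dart declaration `... with GestureBinding, RendererBinding` corresponds to listing the later mixin first here:

```python
# Python analogy (not Dart) of the initInstances super-chain.
order = []

class BindingBase:
    def init_instances(self):
        order.append('base')

class GestureBinding(BindingBase):
    def init_instances(self):
        super().init_instances()
        order.append('gestures')  # may assume the base is ready

class RendererBinding(BindingBase):
    def init_instances(self):
        super().init_instances()
        order.append('renderer')  # may assume earlier bindings are ready

# Listing the later mixin first yields the Dart-like order:
# base, then gestures, then renderer.
class FlutterBinding(RendererBinding, GestureBinding):
    pass
```

Calling `FlutterBinding().init_instances()` appends `'base'`, `'gestures'`, and `'renderer'` in that order, mirroring how `WidgetsBinding` can rely on `RendererBinding` having already initialized.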
17 | * Finally, after scheduling the first frame \(via `SchedulerBinding.instance.ensureVisualUpdate`, which will lazily install the frame callbacks\), `runApp` invokes `SchedulerBinding.scheduleWarmUpFrame`, manually pumping the rendering pipeline. This gives the initial frame extra time to render as it’s likely the most expensive. 18 | * `SchedulerBinding.ensureFrameCallbacksRegistered` lazily installs frame callbacks as part of `SchedulerBinding.scheduleFrame`. Frames are typically scheduled in response to `PipelineOwner.requestVisualUpdate` \(due to UI needing painting, layout, or a rebuild\). Once configured, these callbacks \(`Window.onBeginFrame`, `Window.onDrawFrame`\) are invoked once per frame by the engine, running transient and persistent processes, respectively. The former is generally responsible for ticking animations whereas the latter runs the actual building and rendering pipeline. 19 | 20 | ## How is a frame rendered? 21 | 22 | * Once a frame is scheduled and callbacks are registered \(via `SchedulerBinding.ensureFrameCallbacksRegistered`\), the engine begins requesting frames automatically. The frame callbacks invoke handlers in response to these requests. In particular, `SchedulerBinding.drawFrame` processes persistent frame callbacks which are used to implement Flutter’s rendering pipeline. `WidgetsBinding.drawFrame` overrides `RendererBinding.drawFrame` to add the build process to this pipeline. 23 | * The rendering pipeline builds widgets, performs layout, updates compositing bits, paints layers, and finally composites everything into a scene which it uploads to the engine \(via `RenderView.compositeFrame`\). Semantics are also updated by this process. 24 | * `RenderView.compositeFrame` retains a reference to the root layer \(a `TransformLayer`\) which it recursively composites using `Layer.buildScene`. This iterates through all layers, checking `needsAddToScene`. If true, the layer is freshly composited into the scene.
If false, previous invocations of `addToScene` will have stored an `EngineLayer` in `Layer.engineLayer`, which refers to a retained rendering of the layer subtree. A reference to this retained layer is added to the scene via `SceneBuilder.addRetained`. Once the `Scene` is built, it is uploaded to the engine via `Window.render`. 25 | 26 | ## How does the framework interact with the engine? 27 | 28 | * The framework primarily interacts via the `Window` class, a Dart interface with hooks into and out of the engine. 29 | * The majority of the framework’s flows are driven by frame callbacks invoked by the engine. Other entry points into the framework include gesture handling, platform messaging, and device messaging. 30 | * Each binding serves as the singleton root of a subsystem within the framework; in several cases, bindings are layered to augment more fundamental bindings \(e.g., `WidgetsBinding` adds support for building to `RendererBinding`\). All direct framework/engine interaction is managed via the bindings, with the sole exception of the `RenderView`, which uploads frames to the engine. 31 | 32 | ## What bindings are implemented? 33 | 34 | * `GestureBinding` facilitates gesture handling across the framework, maintaining the gesture arena and pointer routing table. 35 | * Handles `Window.onPointerDataPacket`. 36 | * `ServicesBinding` facilitates message passing between the framework and platform. 37 | * Handles `Window.onPlatformMessage`. 38 | * `SchedulerBinding` manages a variety of callbacks \(transient, persistent, post-frame, and non-rendering tasks\), tracking lifecycle states and scheduler phases. It is also responsible for explicitly scheduling frames when visual updates are needed. 39 | * Handles `Window.onDrawFrame`, `Window.onBeginFrame`. 40 | * Invokes `Window.scheduleFrame`. 41 | * `PaintingBinding` owns the image cache, which manages memory allocated to graphical assets used by the application.
It also performs shader warm-up to avoid stuttering during drawing \(via `ShaderWarmUp.execute` in `PaintingBinding.initInstances`\). This ensures that the corresponding shaders are compiled at a predictable time. 42 | * `SemanticsBinding` is intended to manage the semantics and accessibility subsystems \(at the moment, this binding mainly tracks accessibility changes emitted by the engine via `Window.onAccessibilityFeaturesChanged`\). 43 | * `RendererBinding` implements the rendering pipeline. Additionally, it retains the root of the render tree \(i.e., the `RenderView`\) as well as the `PipelineOwner`, an instance that tracks when layout, painting, and compositing need to be re-processed \(i.e., have become dirty\). The `RendererBinding` also responds to events that may affect the application’s rendering \(including semantic state, though this will eventually be moved to the `SemanticsBinding`\). 44 | * Handles `Window.onSemanticsAction`, `Window.onTextScaleFactorChanged`, `Window.onMetricsChanged`, `Window.onSemanticsEnabledChanged`. 45 | * Invokes `Window.render` via `RenderView`. 46 | * `WidgetsBinding` augments the renderer binding with support for widget building \(i.e., configuring the render tree based on immutable UI descriptions\). It also retains the `BuildOwner`, an instance that facilitates rebuilding the render tree when configuration changes \(e.g., a new widget is substituted\). The `WidgetsBinding` also responds to events that might require rebuilding related to accessibility and locale changes \(though these may be moved to the `SemanticsBinding` in the future\). 47 | * Handles `Window.onAccessibilityFeaturesChanged`, `Window.onLocaleChanged`. 48 | * `TestWidgetsFlutterBinding` supports the widget testing framework. 49 | 50 | ## How do global keys work? 51 | 52 | * `Element.inflateWidget` checks for a global key before inflating a widget.
If a global key is found, the corresponding element is returned instead \(preserving the associated element and render subtree\). 53 | * Global keys are cleaned up when the corresponding element is unmounted \(via `Element.unmount`\). 54 | 55 | -------------------------------------------------------------------------------- /core/messaging.md: -------------------------------------------------------------------------------- 1 | # Messaging 2 | 3 | ## How are messages passed between the framework and platform code? 4 | 5 | * `ServicesBinding.initInstances` sets the global message handler \(`Window.onPlatformMessage`\) to `ServicesBinding.defaultBinaryMessenger`. This instance processes messages from the platform \(via `BinaryMessenger.handlePlatformMessage`\) and allows other framework code to register message handlers \(via `BinaryMessenger.setMessageHandler`\). Handlers subscribe to a channel \(a name shared by the framework and the platform\) that is used to multiplex the single engine callback. 6 | 7 | ## What are the building blocks of messaging? 8 | 9 | * `BinaryMessenger` multiplexes the global message handler via channel names, supporting handler registration and bidirectional binary messaging. Sending a message produces a future that resolves to the raw response. 10 | * `MessageCodec` defines an interface to encode and decode byte data \(`MessageCodec.encodeMessage`, `MessageCodec.decodeMessage`\). A cross-platform binary codec is available \(`StandardMessageCodec`\) as well as a JSON-based codec \(`JSONMessageCodec`\). The platform must implement a corresponding codec natively. 11 | * `MethodCodec` is analogous to `MessageCodec` \(but otherwise independent\), encoding and decoding `MethodCall` instances that wrap a method name and a dynamic list of arguments. Method-based codecs pack and unpack results into envelopes to distinguish success and error outcomes.
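The envelope convention can be made concrete with a small sketch. The following Python model is illustrative only; it mirrors the shape used by the JSON-based codecs \(a one-element array for success, a three-element array for errors\) rather than the binary `StandardMethodCodec` wire format, and the function names are hypothetical:

```python
import json

# Illustrative JSON method codec: method calls are objects, replies are
# "envelopes" whose length distinguishes success from error.
def encode_method_call(method, args):
    return json.dumps({'method': method, 'args': args})

def encode_success_envelope(result):
    return json.dumps([result])                  # one element: success

def encode_error_envelope(code, message, details=None):
    return json.dumps([code, message, details])  # three elements: error

class PlatformError(Exception):
    """Local stand-in for PlatformException."""

def decode_envelope(data):
    envelope = json.loads(data)
    if len(envelope) == 1:
        return envelope[0]        # unpack the successful result
    code, message, details = envelope
    raise PlatformError(f'{code}: {message}')
```

This is why `MethodChannel` can return plain futures: a success envelope resolves the future with the unpacked result, while an error envelope rejects it with an exception.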
12 | * `BasicMessageChannel` provides a thin wrapper around `BinaryMessenger` that uses the provided codec to encode and decode messages to and from raw byte data. 13 | * `MethodChannel` provides a thin wrapper around `BinaryMessenger` that uses the provided method codec to encode and decode method invocations. Responses to incoming invocations are packed into envelopes indicating outcome; similarly, results from outgoing invocations are unpacked from their encoded envelope. These are returned as futures. 14 | * Success envelopes are unpacked and the result returned. 15 | * Error envelopes throw a `PlatformException`. 16 | * Unrecognized methods throw a `MissingPluginException` \(except when using an `OptionalMethodChannel`\). 17 | * `EventChannel` is a helper that exposes a remote stream as a local stream. The initial subscription is handled by invoking a remote method called `listen` \(via `MethodChannel.invokeMethod`\), which causes the platform to begin emitting a stream of envelope-encoded items. A top-level handler is installed \(via `ServicesBinding.defaultBinaryMessenger.setMessageHandler`\) to unpack and forward items to an output stream in the framework. If the stream ends \(for any reason\), a remote method called `cancel` is invoked and the global handler cleared. 18 | * `SystemChannels` provides static references to the messaging channels that are essential to the framework \(`SystemChannels.system`, `SystemChannels.keyEvent`, etc.\). 19 | 20 | -------------------------------------------------------------------------------- /core/platform-integration.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: TODO 3 | --- 4 | 5 | # Platform Integration 6 | 7 | ## How does `Flutter` interact with the system clipboard? 8 | 9 | * `Clipboard` and `ClipboardData` 10 | 11 | ## How does `Flutter` interact with the host system?
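Host-system integration is built on the messaging primitives from the previous chapter. A hedged sketch of a framework-side `MethodChannel` invocation \(the channel and method names here are hypothetical; they must match whatever the host-side plugin registers\):

```dart
import 'package:flutter/services.dart';

// Hypothetical channel name for illustration; the string must match the
// identifier registered by the host-side plugin code.
const MethodChannel _channel = MethodChannel('app.example/host');

Future<String?> readHostValue() async {
  try {
    // Encodes a MethodCall, sends it over the binary messenger, and
    // unpacks the platform's reply envelope into a result or an error.
    return await _channel.invokeMethod<String>('getValue');
  } on PlatformException {
    // The platform handler packed an error envelope.
    return null;
  } on MissingPluginException {
    // No handler was registered for this method on the platform side.
    return null;
  }
}
```

The two exception types correspond to the error and missing-handler envelope outcomes described in the messaging chapter.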
12 | 13 | ## How are device dimensions \(insets, padding, overlays\) conveyed? 14 | 15 | ## How are device configurations \(resolution, orientation\) conveyed? 16 | 17 | ## What are service extensions? 18 | 19 | ## What are `SystemChannels`? 20 | 21 | -------------------------------------------------------------------------------- /core/types.md: -------------------------------------------------------------------------------- 1 | # Types 2 | 3 | ## What types are used to describe positions? 4 | 5 | * `OffsetBase` represents a 2-dimensional \(2D\), axis-aligned vector. Subclasses are immutable and comparable using standard operators. 6 | * `Offset` is an `OffsetBase` subclass that may be understood as a point in Cartesian space or a vector. Offsets may be manipulated algebraically using standard operators; the `&` operator allows a `Rect` to be constructed by combining the offset with a `Size` \(the offset identifies the rectangle’s top left corner\). Offsets can be interpolated. 7 | * `Point` is a Dart class \(from `dart:math`\) for representing a 2D point on the Cartesian plane. 8 | 9 | ## What types are used to describe magnitudes? 10 | 11 | * `Size` is an `OffsetBase` subclass that represents a width and a height. Geometrically, `Size` describes a rectangle with its top left corner coincident with the origin. `Size` includes a number of methods describing a rectangle with dimensions matching the current instance and a top left corner coincident with a specified offset. Sizes may be manipulated algebraically using standard operators; the `+` operator expands the size according to a provided delta \(via `Offset`\). Sizes can be interpolated. 12 | * `Radius` describes either a circular or elliptical radius. The radius is expressed as distances along the x- and y-axes. Circular radii have identical values. Radii may be manipulated algebraically using standard operators and interpolated. 13 | 14 | ## What types are used to describe regions?
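The position and magnitude types compose into regions; a small sketch of the `&` operator and interpolation \(all from `dart:ui`, so this runs in a Flutter context\):

```dart
import 'dart:ui';

void main() {
  // The & operator combines an Offset (the top-left corner) with a Size
  // to produce a Rect.
  final Rect rect = const Offset(10, 20) & const Size(100, 50);
  print(rect); // Rect.fromLTRB(10.0, 20.0, 110.0, 70.0)

  // OffsetBase subclasses support algebraic manipulation and lerping.
  final Offset midpoint = Offset.lerp(Offset.zero, const Offset(8, 8), 0.5)!;
  print(midpoint); // Offset(4.0, 4.0)
}
```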
15 | 16 | * `Rect` is an immutable, 2D, axis-aligned, floating-point rectangle whose coordinates are relative to a given origin. A rectangle can be described in various ways \(e.g., by its center, by a bounding circle, by the position of its left, top, right, and bottom edges, etc.\) or constructed by combining an `Offset` and a `Size`. Rectangles can be inflated, deflated, combined, intersected, translated, queried, and more. Rectangles can be compared for equality and interpolated. 17 | * `RRect` augments a `Rect` with four independent radii \(via `Radius`\) corresponding to its corners. Rounded rectangles can be described in various ways \(e.g., by offsets to each of its sides and one or more radii, by a bounding box fully enclosing the rounded rectangle with one or more radii, etc.\). Rounded rectangles define a number of sub-rectangles: a bounding rectangle \(`RRect.outerRect`\), an inner rectangle with left and right edges matching the base rectangle and top and bottom edges inset to coincide with the rounded corners' centers \(`RRect.wideMiddleRect`\), a similar rectangle with the insets applied to the left and right edges instead \(`RRect.tallMiddleRect`\), and a rectangle that is the intersection of these two \(`RRect.middleRect`\). A rounded rectangle is said to describe a “stadium” if it possesses a side with no straight segment \(e.g., entirely drawn by the two rounded corners\). Rounded rectangles can be interpolated. 18 | 19 | ## What types are used to describe coordinate spaces? 20 | 21 | * `Axis` represents the X- or Y-axis \(horizontal or vertical, respectively\) relative to a coordinate space. The coordinate space can be arbitrarily transformed and therefore need not be parallel to the screen’s edges. 22 | * `AxisDirection` applies directionality to an axis.
The value represents the direction in which values increase along an associated axis, with the origin rooted at the opposite end \(e.g., `AxisDirection.down` positions the origin at the top with positive values growing downward\). 23 | * `GrowthDirection` is the direction of growth relative to the current axis direction \(e.g., how items are ordered along the axis\). `GrowthDirection.forward` implies an ordering consistent with the axis direction \(the first item is at the origin with subsequent items following\). `GrowthDirection.reverse` is exactly the opposite \(the last item is at the origin with preceding items following\). 24 | * Growth direction does not flip the meaning of “leading” and “trailing”; it merely determines how children are ordered along a specified axis. 25 | * For a viewport, the origin is positioned according to an axis direction \(e.g., `AxisDirection.down` positions the origin at the top of the screen, `AxisDirection.up` positions the origin at the bottom of the screen\), with the growth direction determining how children are ordered starting from the origin. As a result, both pieces of information are necessary to determine where a set of slivers should actually appear. 26 | * `ScrollDirection` represents the user’s scroll direction relative to the positive scroll offset direction \(i.e., the direction in which positive scroll offsets increase as determined by axis direction and growth direction\). Includes an idle state \(`ScrollDirection.idle`\). 27 | * This is typically subject to the growth direction \(e.g., the scroll direction is flipped when growth is reversed\). 28 | * Confusingly, this refers to the direction the content is moving on screen rather than where the user is scrolling \(e.g., scrolling down a web page causes the page’s contents to move upward; this would be classified as `ScrollDirection.reverse` since this motion is opposite the axis direction\). 29 | 30 | ## What types are used to describe graphics?
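The graphics types in this section are plain values from `dart:ui`; a small sketch of color construction and interpolation:

```dart
import 'dart:ui';

void main() {
  // Colors are 32-bit ARGB values; lerp blends each channel linearly.
  const Color black = Color(0xFF000000);
  const Color white = Color(0xFFFFFFFF);
  final Color? grey = Color.lerp(black, white, 0.5);
  print(grey); // Mid-grey; channels are interpolated independently.

  // Opacity maps the 0.0-1.0 range onto the 8-bit alpha channel.
  final Color faded = white.withOpacity(0.25);
  print(faded.alpha); // 64 (0.25 * 255, rounded)
}
```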
31 | 32 | * `Color` is a 32-bit immutable quantity describing alpha, red, green, and blue color channels. Alpha can be defined using an opacity value from zero to one. Colors can be interpolated and converted into a luminance value. 33 | * `Shadow` represents a single drop shadow with a color, an offset from the casting element, and a blur radius characterizing the Gaussian blur applied to the shadow. 34 | * `Gradient` describes one or more smooth color transitions. Gradients can be interpolated and scaled; gradients can also be used to obtain a reference to a corresponding shader. Linear, radial, and sweep gradients are supported \(via `LinearGradient`, `RadialGradient`, and `SweepGradient`, respectively\). `TileMode` determines how a gradient paints beyond its defined bounds. Gradients may be clamped \(e.g., hold their initial and final values\), repeated \(e.g., restarted at their bounds\), or mirrored \(e.g., restarted but with initial and final values alternating\). 35 | 36 | ## How are tree nodes modeled? 37 | 38 | * `AbstractNode` represents a node in a tree without specifying a particular child model \(i.e., the tree's actual structure is left as an implementation detail\). Concrete implementations must call `AbstractNode.adoptChild` and `AbstractNode.dropChild` whenever the child model changes. 39 | * `AbstractNode.owner` references an arbitrary object shared by all nodes in a subtree. 40 | * `AbstractNode.attach` assigns an owner to the node. Adopting children will attach them automatically. Used by the owner to attach the tree via its root node. 41 | * Subclasses should attach all children since the parent can change its attachment state at any time \(i.e., after the child is adopted\) and must keep its children in sync. 42 | * `AbstractNode.detach` clears a node's owner. Dropping children will detach them automatically. Used by the owner to detach the tree via its root node. 
43 | * Subclasses should detach all children since the parent can change its attachment state at any time \(i.e., after the child is adopted\) and must keep its children in sync. 44 | * `AbstractNode.attached` indicates whether the node is attached \(i.e., has an owner\). 45 | * `AbstractNode.parent` references the parent abstract node. 46 | * `AbstractNode.adoptChild` updates a child's parent and depth. The child is attached if the parent has an owner. 47 | * `AbstractNode.dropChild` clears the child's parent. The child is detached if the parent has an owner. 48 | * `AbstractNode.depth` is an integer that increases with depth. All depths below a given node will be greater than that node's depth. Note that values need not match the actual depth of the node. 49 | * `AbstractNode.redepthChild` updates a child's depth to be greater than its parent. 50 | * `AbstractNode.redepthChildren` uses the concrete child model to call `AbstractNode.redepthChild` on each child. 51 | 52 | -------------------------------------------------------------------------------- /data-model/boxes.md: -------------------------------------------------------------------------------- 1 | # Boxes 2 | 3 | ## What are the render box building blocks? 4 | 5 | * `RenderBox` models a two-dimensional box with a width, height, and position \(`RenderBox.size.width`, `RenderBox.size.height`, `RenderBox.parentData.offset`\). The box's top-left corner defines its origin, with the bottom-right corner corresponding to `(width, height)`. 6 | * `BoxParentData` stores the child's offset in the parent’s coordinate space \(`BoxParentData.offset`\). By convention, this data may not be accessed by the child. 7 | * `BoxConstraints` describes immutable constraints expressed as minimum and maximum width and height values ranging from zero to infinity, inclusive. Constraints are satisfied by concrete sizes that fall within this range.
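Constraint satisfaction can be checked directly; a minimal sketch using Flutter's rendering library:

```dart
import 'package:flutter/rendering.dart';

void main() {
  const BoxConstraints constraints = BoxConstraints(
      minWidth: 50, maxWidth: 100, minHeight: 0, maxHeight: double.infinity);

  // A size within [min, max] in both dimensions satisfies the constraints.
  print(constraints.isSatisfiedBy(const Size(75, 200))); // true

  // constrain clamps an arbitrary size into the satisfying range;
  // here only the width (10 < minWidth) needs clamping.
  print(constraints.constrain(const Size(10, 10))); // Size(50.0, 10.0)
}
```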
8 | * Box constraints are classified in several ways: 9 | * `BoxConstraints.isNormal`: minimum is greater than or equal to zero and less than or equal to the maximum in both dimensions. 10 | * `BoxConstraints.tight`, `BoxConstraints.isTight`: minimum and maximum values are equal in both dimensions. 11 | * `BoxConstraints.loose`: minimum value is zero, even if maximum is also zero \(i.e., loose and tight\). 12 | * `BoxConstraints.hasBoundedWidth`, `BoxConstraints.hasBoundedHeight`: the corresponding maximum is not infinite. 13 | * Constraints are unbounded when the maximum is infinite. 14 | * `BoxConstraints.expanding`: both maximum and minimum values are infinite in the same dimension \(i.e., tightly infinite\). 15 | * Expanding constraints imply that the corresponding dimension will be determined by other incoming constraints \(e.g., established by containing UI\). Dimensions must ultimately be finite. 16 | * `BoxConstraints.hasInfiniteWidth`, `BoxConstraints.hasInfiniteHeight`: the corresponding minimum is infinite \(thus, the maximum must also be infinite; this is the same as expanding\). 17 | * Box constraints can be evaluated \(`BoxConstraints.isSatisfiedBy`\), applied to a `Size` \(`BoxConstraints.constrain`\), tightened relative to constraints \(`BoxConstraints.tighten`\), loosened by setting minimums to zero \(`BoxConstraints.loosen`\), and intersected \(`BoxConstraints.enforce`\). Constraints can also be scaled using standard algebraic operators. 18 | * `BoxHitTestResult` is a `HitTestResult` subclass that captures each `RenderBox` \(a `HitTestTarget`\) that reported being hit in order of decreasing precedence. 19 | * Instances include box-specific helpers intended to help transform global coordinates to local coordinates \(e.g., `BoxHitTestResult.addWithPaintOffset`, `BoxHitTestResult.addWithPaintTransform`\). 20 | * `BoxHitTestEntry` represents a box that was hit during hit testing.
It captures the position of the collision in local coordinates \(`BoxHitTestEntry.localPosition`\). 21 | 22 | ## How do boxes model children? 23 | 24 | * `ContainerBoxParentData` extends `BoxParentData` with `ContainerParentDataMixin`. This combines a child offset \(`BoxParentData.offset`\) with next and previous pointers \(`ContainerParentData.previousSibling`, `ContainerParentData.nextSibling`\) to describe a doubly linked list of children. 25 | * `RenderObjectWithChildMixin` and `ContainerRenderObjectMixin` can both be used with a type argument of `RenderBox`. The `ContainerRenderObjectMixin` accepts a parent data type argument; `ContainerBoxParentData` is compatible and adds support for box children. 26 | * `RenderBoxContainerDefaultsMixin` adds useful defaults to `ContainerRenderObjectMixin` for render boxes with children. Type constraints require that children extend `RenderBox` and parent data extends `ContainerBoxParentData`. 27 | * This mixin provides support for hit testing \(`RenderBoxContainerDefaultsMixin.defaultHitTestChildren`\), painting \(`RenderBoxContainerDefaultsMixin.defaultPaint`\), and listing \(`RenderBoxContainerDefaultsMixin.getChildrenAsList`\) children. 28 | * `RenderProxyBox` delegates all methods to a single `RenderBox` child, adopting the child's size as its own size and positioning the child at its origin. Proxies are convenient for writing subclasses that selectively override some, but not all, of a box's behavior. This box's implementation is provided by `RenderProxyBoxMixin`, which provides an alternative to direct inheritance. Proxy boxes use `ParentData` rather than `BoxParentData` since the child's offset is never used. 29 | * There are numerous examples throughout the framework: 30 | * `RenderAbsorbPointer` overrides `RenderProxyBox.hitTest` to disable hit testing for a subtree \(i.e., by not testing its child\) when `RenderAbsorbPointer.absorbing` is enabled.
31 | * `RenderAnimatedOpacity` listens to an animation \(`RenderAnimatedOpacity.opacity`\) to render its child with varying opacity \(via `RenderAnimatedOpacity.paint`\). This box manages its listener by overriding `RenderProxyBox.attach` and `RenderProxyBox.detach`. It selectively inserts an `OpacityLayer` \(i.e., for translucent values only\), and manages `RenderBox.alwaysNeedsCompositing` to reflect whether a layer will be added \(via `RenderAnimatedOpacityMixin._updateOpacity`, called in response to the animation\). 32 | * `RenderIntrinsicWidth` overrides `RenderProxyBox.performLayout` to adopt a size that corresponds to its child's intrinsic width, subject to the incoming constraints. It also overrides intrinsic sizing methods since it provides size snapping \(`RenderIntrinsicWidth.stepWidth`\). 33 | * `RenderShiftedBox` delegates all methods to a single `RenderBox` child, but leaves layout undefined. Subclasses override `RenderShiftedBox.performLayout` to assign a position to the child \(via `BoxParentData.offset`\). Otherwise, this subclass is analogous to `RenderProxyBox`. 34 | * `RenderPadding` overrides `RenderShiftedBox.performLayout` to deflate the incoming constraints by the resolved padding amount. The child is laid out using the new, padded constraints and positioned within the padded region. 35 | * `RenderBaseline` overrides `RenderShiftedBox.performLayout` to align its child's baseline \(via `RenderBox.getDistanceToBaseline`\) with an offset from its top edge \(`RenderBaseline.baseline`\). It then sizes itself such that its bottom is coincident with the child's baseline \(this can potentially truncate the top of the child\). 36 | 37 | ## How do render boxes handle layout? 38 | 39 | * Boxes use the standard `RenderObject` layout protocol to map from incoming `BoxConstraints` to a concrete `Size` \(stored in `RenderBox.size`\). Layout also determines the child's offset relative to the parent \(`RenderBox.parentData.offset`\).
This information should not be read by the child during layout. 40 | * Boxes add support for intrinsic dimensions \(an ideal size, computed outside of the standard layout protocol\) as well as baselines \(a line to use for vertical alignment, typically used when laying out text\). `RenderBox` instances track changes to these values and whether the parent has queried them; if so, when that box is marked as needing layout, the parent is marked as well. 41 | * Intrinsic dimensions are cached \(`RenderBox._cachedIntrinsicDimensions`\) whenever they're computed \(via `RenderBox._computeIntrinsicDimension`\). The cache is disabled during debugging. 42 | * Baselines are cached \(`RenderBox._cachedBaselines`\) whenever they're computed \(via `RenderBox.getDistanceToActualBaseline`\). Different baselines are computed for alphabetic and ideographic text. 43 | * `RenderBox.markNeedsLayout` marks the parent as needing layout if the box's intrinsic dimension or baseline caches have been modified \(i.e., this implies that the parent has accessed the box's "out-of-band" geometry\). If so, both are cleared so that new values are computed after the next layout pass. 44 | * By default, boxes that are sized by their parents adopt the smallest size permitted by incoming constraints \(via `RenderBox.performResize`\). 45 | * Boxes can have non-box children. In this case, the constraints provided to children will need to be adapted from `BoxConstraints` to the appropriate type. 46 | 47 | ## How do render boxes handle painting? 48 | 49 | * Boxes use the standard `RenderObject` painting protocol to paint themselves to a provided canvas. The canvas's origin isn't necessarily coincident with the box's origin; the offset provided to `RenderBox.paint` describes where the box's origin falls on the canvas. The canvas and the box will always be axis-aligned. 50 | * `RenderBox.paintBounds` describes the region that will be painted by a box. 
This determines the size of the buffer used for painting and is expressed in local coordinates. It need not match `RenderBox.size`. 51 | * If the render box applies a transform when painting \(e.g., painting at a different offset than the one provided\), `RenderBox.applyPaintTransform` must apply the same transformation to the provided matrix. 52 | * `RenderBox.globalToLocal` and `RenderBox.localToGlobal` rely on this transformation to map from global coordinates to box coordinates and vice versa. 53 | * By default, `RenderBox.applyPaintTransform` applies the child’s offset \(via `child.parentData.offset`\) as a translation. 54 | 55 | ## How do render boxes handle hit testing? 56 | 57 | * Boxes support hit testing, a subscription-like mechanism for delegating and processing events. The box protocol implements a custom flow rather than extending the `HitTestable` interface \(though `RendererBinding`, the hit testing entry point, does implement this interface\). 58 | * `RendererBinding.hitTest` is invoked using mixin chaining \(via `GestureBinding._handlePointerEvent`\). The binding delegates to `RenderView.hitTest`, which tests its child \(via `RenderBox.hitTest`\). 59 | * `RenderBox.hitTest` determines whether an offset \(in local coordinates\) falls within its bounds. If so, each child is tested in sequence \(via `RenderBox.hitTestChildren`\) before the box tests itself \(via `RenderBox.hitTestSelf`\). By default, both methods return `false` \(i.e., boxes do not handle events or forward events to their children\). 60 | * Boxes subscribe to the related event stream \(i.e., receive `RenderBox.handleEvent` calls for events related to the interaction\) by adding themselves to `BoxHitTestResult`. Boxes added earlier take precedence. 61 | * All boxes in the `BoxHitTestResult` are notified of events \(via `RenderBox.handleEvent`\) in the order that they were added. 62 | 63 | ## What are intrinsic dimensions?
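As a running example for this section, here is a hedged sketch of a custom render box \(the hypothetical `RenderTwoToOne`, not a framework class\) that reports intrinsic dimensions derived from a 2:1 aspect ratio:

```dart
import 'package:flutter/rendering.dart';

// Hypothetical example box: width-in-height-out with a 2:1 aspect ratio.
class RenderTwoToOne extends RenderBox {
  // Natural width for a given height; the incoming argument may be
  // double.infinity, meaning the other dimension is unconstrained.
  @override
  double computeMinIntrinsicWidth(double height) =>
      height.isFinite ? height * 2 : 100;

  @override
  double computeMaxIntrinsicWidth(double height) =>
      computeMinIntrinsicWidth(height);

  @override
  double computeMinIntrinsicHeight(double width) =>
      width.isFinite ? width / 2 : 50;

  @override
  double computeMaxIntrinsicHeight(double width) =>
      computeMinIntrinsicHeight(width);

  @override
  void performLayout() {
    // Ordinary layout ignores intrinsics: adopt the preferred size,
    // clamped by the incoming constraints.
    size = constraints.constrain(const Size(100, 50));
  }
}
```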
64 | 65 | * Conceptually, the intrinsic dimensions of a box are its natural dimensions \(i.e., the size it “wants” to be\). The precise definition depends on the box's implementation and semantics \(i.e., what the box represents\). 66 | * Intrinsic dimensions are often defined in terms of the intrinsic dimensions of children and are therefore expensive to calculate \(typically traversing an entire subtree\). 67 | * Intrinsic dimensions often differ from the dimensions produced by layout \(except when using the `IntrinsicHeight` and `IntrinsicWidth` widgets, which attempt to lay out a child using its intrinsic dimensions\). 68 | * Intrinsic dimensions are generally ignored unless one of these widgets is used \(or another widget that explicitly incorporates intrinsic dimensions into its own layout, e.g., `RenderTable`\). 69 | * The box model describes intrinsic dimensions in terms of minimum and maximum values for width and height \(via `RenderBox.computeMinIntrinsicHeight`, `RenderBox.computeMaxIntrinsicHeight`, etc.\). Both receive a value for the opposite dimension \(if infinite, the other dimension is unconstrained\); this is useful for boxes that define one intrinsic dimension in terms of the other \(e.g., text\). 70 | * Minimum intrinsic width is the smallest width at which the box can still paint its contents correctly, without clipping. 71 | * Intuition: making the box thinner would clip its contents. 72 | * If width is determined by height according to the box's semantics, the incoming height \(which may be infinite, i.e., unconstrained\) should be used. Otherwise, ignore the height. 73 | * Minimum intrinsic height is the same concept for height. 74 | * Maximum intrinsic width is the smallest width such that further expansion would not reduce minimum intrinsic height \(for that width\). 75 | * Intuition: making the box wider won’t help fit more content.
76 | * If width is determined by height according to the box's semantics, the incoming height \(which may be infinite, i.e., unconstrained\) should be used. Otherwise, ignore the height. 77 | * Maximum intrinsic height is the same concept for height. 78 | * The specific meaning of intrinsic dimensions depends on the box's semantics. 79 | * Text is width-in-height-out. 80 | * Maximum intrinsic width: the width of the string without line breaks \(increasing the width would not shrink the preferred height\). 81 | * Minimum intrinsic width: the width of the widest word \(decreasing the width would clip the word or cause an invalid break\). 82 | * Intrinsic heights are computed by laying out text with the provided width. 83 | * Viewports ignore incoming constraints and aggregate child dimensions without clipping \(i.e., ideally, a viewport can render all of its children without clipping\). 84 | * Aspect ratio boxes use the incoming dimension to compute the queried dimension \(i.e., width to determine height and vice versa\). If the incoming dimension is unbounded, the child's intrinsic dimensions are used instead. 85 | * When intrinsic dimensions cannot be computed or are too expensive, return zero. 86 | 87 | ## What are baselines? 88 | 89 | * Baselines are a concept borrowed from text rendering to describe the line upon which all glyphs are placed. Portions of the glyph typically extend below this line \(e.g., descenders\) as it serves primarily to vertically align a sequence of glyphs in a visually pleasing way. Characters from different fonts can be visually aligned by positioning each span of text such that baselines are collinear. 90 | * Boxes that define a visual baseline can also be aligned in this way. 91 | * Boxes may specify a baseline by implementing `RenderBox.computeDistanceToActualBaseline`. The returned value represents a vertical offset from the top of the box. Values are cached until the box is marked as needing layout. 
92 | * Boxes typically return `null` \(i.e., they don't define a logical baseline\), return a value intrinsic to what they represent \(e.g., the baseline of a span of text\), delegate to a single child, or use `RenderBoxContainerDefaultsMixin` to produce a baseline from a set of children. 93 | * `RenderBoxContainerDefaultsMixin.defaultComputeDistanceToFirstActualBaseline` returns the first valid baseline reported by the set of children, adjusted to account for the child's offset. 94 | * `RenderBoxContainerDefaultsMixin.defaultComputeDistanceToHighestActualBaseline` returns the minimum baseline \(i.e., vertical offset\) amongst all children, adjusted to account for the child's offset. 95 | * `RenderBox.getDistanceToBaseline` returns the offset to the box's bottom edge \(`RenderBox.size.height`\) if an actual baseline isn't available \(i.e., `RenderBox.computeDistanceToActualBaseline` returns `null`\). 96 | * The baseline may only be queried by a box's parent and only after the box has been laid out \(typically during parent layout or painting\). 97 | 98 | -------------------------------------------------------------------------------- /data-model/elements.md: -------------------------------------------------------------------------------- 1 | # Elements 2 | 3 | ## What are elements? 4 | 5 | * The element tree is anchored in the `WidgetsBinding` and established via `runApp` / `RenderObjectToWidgetAdapter`. 6 | * `Widget` instances are immutable representations of UI configuration data that are “inflated” into `Element` instances \(via `Element.inflateWidget`\). Elements therefore serve as widgets' mutable counterparts and are responsible for modeling the relationship between widgets \(e.g., the widget tree\), storing state and inherited relationships, participating in the build process, etc. 7 | * All elements are associated with a `BuildOwner` singleton.
This instance is responsible for tracking dirty elements and, during `WidgetsBinding.drawFrame`, rebuilding the element tree as needed. This process triggers several lifecycle events \(e.g., `initState`, `didChangeDependencies`, `didUpdateWidget`\). 8 | * Elements are assembled into a tree \(via `Element.mount` and `Element.unmount`\). Whereas these operations are permanent, elements may also be temporarily removed and restored \(via `Element.deactivate` and `Element.activate`, respectively\). 9 | * Elements transition through several lifecycle states \(`_ElementLifecycle`\) in response to the following methods: `Element.mount` \(`initial` to `active`; `Element.activate` is only called when reactivating\), `Element.deactivate` \(`active` to `inactive`; can be reactivated via `Element.activate`\), and finally `Element.unmount` \(`inactive` to `defunct`\). 10 | * Note that deactivating or unmounting an element is a recursive process, generally facilitated by the build owner's inactive elements list \(`BuildOwner._inactiveElements`\). All descendant elements are affected \(via `_InactiveElements._deactivateRecursively` and `_InactiveElements._unmount`\). 11 | * Elements are attached \(i.e., mounted\) to the element tree when they're first created. They may then be updated \(via `Element.update`\) multiple times as they become dirty \(e.g., due to widget changes or notifications\). An element may also be deactivated; this removes any associated render objects from the render tree and adds the element to the build owner's list of inactive nodes. This list \(`_InactiveElements`\) automatically deactivates all nodes in the affected subtree and clears all dependencies \(e.g., from `InheritedElement`\). 12 | * Parents are generally responsible for deactivating their children \(via `Element.deactivateChild`\). Deactivation temporarily removes the element \(and any associated render objects\) from the element tree; unmounting makes this change permanent.
13 | * An element may be reactivated within the same frame \(e.g., due to tree grafting\); otherwise, the element will be permanently unmounted by the build owner \(via `BuildOwner.finalizeTree`, which calls `Element.unmount`\). 14 | * If the element is reactivated, the subtree will be restored and marked dirty, causing it to be rebuilt \(re-adopting any render objects which were previously dropped\). 15 | * `Element.updateChild` is used to update a child element when its configuration \(i.e., widget\) changes. If the new widget isn't compatible with the old one \(e.g., doesn't exist, has a different type, or has a different key\), a fresh element is inflated \(via `Element.inflateWidget`\). Once an element is retrieved or inflated, the new configuration is applied via `Element.update`; this might alter an associated render object, notify dependents of a state change, or mutate the element itself. 16 | * When an element is re-inflated, it has no access to any existing children; that is, children associated with the old element aren't passed to the new element. Thus, all descendants need to be re-inflated, too \(there are no old elements to synchronize\). 17 | * Global keys are one exception: any children associated with a global key can be restored without being re-inflated. 18 | 19 | ## What are the element building blocks? 20 | 21 | * Elements are mainly broken down into `RenderObjectElement` and `ComponentElement`. `RenderObjectElements` are responsible for configuring render objects and keeping the render object tree and widget tree in sync. `ComponentElements` don't directly manage render objects but instead produce intermediate nodes via mechanisms like `Widget.build`. Both processes are driven by `Element.performRebuild`, which is itself triggered by `BuildOwner.buildScope`. The latter is run as part of the build process every time the engine requests a frame.
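This split is visible at the widget layer; a hedged sketch contrasting the two \(both widget names here are hypothetical examples, not framework classes\):

```dart
import 'package:flutter/widgets.dart';

// A component: its ComponentElement produces intermediate nodes by
// calling build() during Element.performRebuild.
class Labeled extends StatelessWidget {
  const Labeled({super.key, required this.text});
  final String text;

  @override
  Widget build(BuildContext context) => Text(text);
}

// A render object widget: its RenderObjectElement creates and updates
// a render object instead of building child widgets.
class Gap extends LeafRenderObjectWidget {
  const Gap({super.key});

  @override
  RenderBox createRenderObject(BuildContext context) => RenderConstrainedBox(
      additionalConstraints: BoxConstraints.tight(const Size(8, 8)));
}
```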
22 | * `ProxyElement` forms a third category of elements that wrap a subtree of elements \(and are configured by `ProxyWidget`\). These generally augment the subtree in some way \(e.g., `InheritedElement` injects heritable state\). Proxy elements use notifications to inform subscribers when their configuration changes \(`ProxyElement.update` invokes `ProxyElement.updated` which, by default, calls `ProxyElement.notifyClients`\). Subclasses manage subscribers in an implementation-specific way. 23 | * `ParentDataElement` updates the parent data of all closest descendant render objects \(via `ParentDataElement._applyParentData`, which is called by `ParentDataElement.notifyClients`\). 24 | * `InheritedElement` notifies a set of dependents whenever its configuration is changed \(i.e., when `InheritedElement.update` is invoked\). `InheritedElement._dependents` is implemented as a mapping since each dependent can provide an arbitrary object to use when determining whether an update is applicable. Dependents are notified by invoking `Element.didChangeDependencies`. 25 | 26 | ## How is the render tree managed by `RenderObjectElement`? 27 | 28 | * Render object elements are responsible for managing an associated render object. `RenderObjectElement.update` applies updates to this render object to match a new configuration \(i.e., widget\). 29 | * The render object is created \(via `RenderObjectWidget.createRenderObject`\) when its element is first mounted. The render object is retained throughout the life of the element, even when the element is deactivated \(and the render object is detached\). 30 | * A new render object is created if an element is inflated and mounted \(e.g., because a new widget couldn't update the old one\); at this point, the old render object is destroyed. A slot token is used during this process so the render object can attach and detach itself from the render tree \(whose structure can differ from the element tree's\).
31 | * The render object is attached to the render tree when its element is first mounted \(via `RenderObjectElement.attachRenderObject`\). If the element is later deactivated \(due to tree grafting\), it will be re-attached when the graft is completed \(via `RenderObjectElement.inflateWidget`, which includes special logic for handling grafting by global key\). 32 | * The render object is updated \(via `RenderObjectWidget.updateRenderObject`\) when its element is updated \(via `Element.update`\) or rebuilt \(via `Element.rebuild`\). 33 | * The render object is detached from its parent \(via `RenderObjectElement.detachRenderObject`\) when the element is deactivated. This is generally managed by the parent \(via `Element.deactivateChild`\) and occurs when children are explicitly removed or reparented due to tree grafting. Deactivating a child calls `Element.detachRenderObject`, which recursively processes descendants until reaching the nearest render object element boundary. `RenderObjectElement` overrides this method to detach its render object, cutting off the recursive walk. 34 | * Render objects may have children. However, there may be several intermediate nodes \(i.e., component elements\) between a `RenderObjectElement` and the elements associated with its render object's children. That is, the element tree typically has many more nodes than the render tree. 35 | * Slot tokens are passed down the element tree so that these `RenderObjectElement` nodes can interact with their render object's parent \(via `RenderObjectElement.insertChildRenderObject`, `RenderObjectElement.moveChildRenderObject`, and `RenderObjectElement.removeChildRenderObject`\). Tokens are interpreted in an implementation-specific manner by the ancestor `RenderObjectElement` to distinguish render object children. 36 | * Elements generally use their widget's children as the source of truth \(e.g., `MultiChildRenderObjectWidget.children`\). 
When the element is first mounted, each child is inflated and stored in an internal list \(e.g., `MultiChildRenderObjectElement._children`\); this list is later used when updating the element. 37 | * Elements can be grafted from one part of the tree to another within a single frame. Such elements are “forgotten” by their parents \(via `RenderObjectElement.forgetChild`\) so that they are excluded from iteration and updating. The old parent removes the child when the element is added to its new parent \(this happens during inflation, since grafting requires that the widget tree be updated, too\). 38 | * Elements are responsible for updating their children. To avoid unnecessary inflation \(and potential loss of state\), the new and old child lists are synchronized using a linear reconciliation scheme optimized for empty lists, matched lists, and lists with one mismatched region: 39 | 40 | 1. The leading elements and widgets are matched by key and updated. 41 | 2. The trailing elements and widgets are matched by key with updates queued \(update order is significant\). 42 | 3. A mismatched region is identified in the old and new lists. 43 | 4. Old elements are indexed by key. 44 | 5. Old elements without a key are updated with null \(deleted\). 45 | 6. The index is consulted for each new, mismatched widget. 46 | 7. New widgets with keys in the index update the matching old elements \(reuse\). 47 | 8. New widgets without matches are updated with null \(inflated\). 48 | 9. Remaining elements in the index are updated with null \(deleted\). 49 | 50 | ## What are the render object element building blocks? 
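
Before enumerating the building blocks, a brief sketch may help show how a render object widget pairs with its element and render object. This is illustrative only; `Dim` and `RenderDim` are hypothetical names, not framework classes, and `RenderDim` is assumed to expose a settable `opacity` field:

```dart
// Sketch only: `RenderDim` is a hypothetical render object with a
// mutable `opacity` field; the overridden methods are real framework API.
class Dim extends SingleChildRenderObjectWidget {
  const Dim({Key key, this.opacity, Widget child})
      : super(key: key, child: child);

  final double opacity;

  // Called once, when the corresponding element is first mounted.
  @override
  RenderDim createRenderObject(BuildContext context) {
    return RenderDim(opacity: opacity);
  }

  // Called on rebuild; mutates the retained render object in place
  // rather than recreating it.
  @override
  void updateRenderObject(BuildContext context, RenderDim renderObject) {
    renderObject.opacity = opacity;
  }
}
```

Note that the widget never touches the render tree directly; the associated `SingleChildRenderObjectElement` decides when these two methods run.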
51 | 52 | * `LeafRenderObjectElement`, `SingleChildRenderObjectElement`, and `MultiChildRenderObjectElement` provide support for common use cases and correspond to the similarly named widget helpers \(`LeafRenderObjectWidget`, `SingleChildRenderObjectWidget`, and `MultiChildRenderObjectWidget`\). 53 | * The multi-child and single-child variants pair with `ContainerRenderObjectMixin` and `RenderObjectWithChildMixin` in the render tree, respectively. 54 | * The multi-child variant uses the previous sibling \(or null, for the first child\) as the slot identifier; this is convenient since `ContainerRenderObjectMixin` manages children using a linked list. 55 | 56 | ## How are elements managed by `ComponentElement`? 57 | 58 | * `ComponentElement` composes other elements. Rather than managing a render object itself, it produces descendant elements that manage their own render objects through building. 59 | * Building is an alternative to storing a static list of children. Components build a single child dynamically whenever they become dirty. 60 | * This process is driven by `Element.rebuild`, which is invoked by the build owner when an element is marked dirty \(via `BuildOwner.scheduleBuildFor`\). Component elements also rebuild when they're first mounted \(via `ComponentElement._firstBuild`\) and when their widget changes \(via `ComponentElement.update`\). For `StatefulElement`, a rebuild may be scheduled spontaneously via `State.setState`. In all cases, lifecycle methods are invoked in response to changes to the element tree \(for example, `StatefulElement.update` will invoke `State.didUpdateWidget`\). 61 | * The actual implementation is supplied by `Element.performRebuild`. Component elements override `Element.performRebuild` to invoke `ComponentElement.build`, whereas render object elements update their render object via `RenderObjectWidget.updateRenderObject`. 62 | * `ComponentElement.build` provides a hook for producing intermediate nodes in the element tree. 
`StatelessElement.build` invokes the widget’s build method, whereas `StatefulElement.build` invokes the state’s build method. `ProxyElement` simply returns its widget's child. 63 | * Note that if a component element rebuilds, the child element and the newly built widget will still be synchronized \(via `Element.updateChild`\). If the widget is compatible with the existing element, it'll be updated instead of re-inflated. This allows existing render objects to be mutated instead of being recreated. Depending on the mutation, this might involve any combination of layout, painting, and compositing. 64 | * Reassembly \(e.g., `Element.reassemble`\) marks the element as being dirty; most subclasses do not override this behavior. This causes the element tree to be rebuilt during the next frame. Render object elements update their render objects in response to `Element.performRebuild` and therefore also benefit from hot reload. 65 | 66 | ## How does building work? 67 | 68 | * Only widgets associated with `ComponentElement` \(e.g., `StatelessWidget`, `StatefulWidget`, `ProxyWidget`\) participate in the build process; `RenderObjectWidget` subclasses, generally associated with `RenderObjectElements`, do not; these simply update their render object when rebuilding. `ComponentElement` instances only have a single child, typically the one returned by their widget’s build method \(`ProxyElement` returns the child attached to its widget\). 69 | * When the element tree is first anchored to the render tree \(via `RenderObjectToWidgetAdapter.attachToRenderTree`\), the `RenderObjectToWidgetElement` \(a `RootRenderObjectElement`\) assigns a `BuildOwner` for the element tree. 
The `BuildOwner` is responsible for tracking dirty elements \(`BuildOwner.scheduleBuildFor`\), establishing build scopes wherein elements can be rebuilt and descendant elements can be marked dirty \(`BuildOwner.buildScope` / `BuildOwner.scheduleBuildFor`\), and unmounting inactive elements at the end of a frame \(`BuildOwner.finalizeTree`\). It also maintains a reference to the root `FocusManager` and triggers reassembly after a hot reload. 70 | * When a `ComponentElement` is mounted \(e.g., after being inflated\), an initial build is performed immediately \(via `ComponentElement._firstBuild`, which calls `ComponentElement.rebuild`\). 71 | * Later, elements can be marked dirty using `Element.markNeedsBuild`. This is invoked implicitly any time the UI might need to be updated \(or explicitly, in response to `State.setState`\). This method adds the element to the dirty list and, via `BuildOwner.onBuildScheduled`, schedules a frame via `SchedulerBinding.ensureVisualUpdate`. The actual build will take place when the next frame is processed. 72 | * Some operations trigger a rebuild directly \(i.e., without marking the tree dirty first\). These include `ProxyElement.update`, `StatelessElement.update`, `StatefulElement.update`, and `ComponentElement.mount`. In these cases, the intention is to update the element tree immediately. 73 | * Other operations schedule a build to occur during the next frame. These include `State.setState`, `Element.reassemble`, `Element.didChangeDependencies`, `StatefulElement.activate`, etc. 74 | * Proxy elements use notifications to indicate when underlying data has changed. In the case of `InheritedElement`, each dependent's `Element.didChangeDependencies` is invoked which, by default, marks that element as being dirty. This causes the descendant to rebuild when any of its dependencies change. 75 | * Once per frame, `BuildOwner.buildScope` will walk the element tree in depth-first order, only considering those nodes that have been marked dirty. 
By locking the tree and iterating in depth-first order, any nodes that become dirty while rebuilding must be lower in the tree; this is because building is a unidirectional process -- a child cannot mark its parent as being dirty. Thus, it is not possible for build cycles to be introduced, and it is not possible for elements that have been marked clean to become dirty again. 76 | * As the build progresses, `ComponentElement.performRebuild` delegates to the `ComponentElement.build` method to produce a new child widget for each dirty element. Next, `Element.updateChild` is invoked to efficiently reuse or recreate an element for the child. Crucially, if the child’s widget hasn’t changed, the build is immediately cut off. Note that if the child widget did change and `Element.update` is needed, that child will itself be marked dirty, and the build will continue down the tree. 77 | * Each `Element` maintains a map of all `InheritedElement` ancestors at its location. Thus, accessing dependencies from the build process is a constant-time operation. 78 | * If `Element.updateChild` invokes `Element.deactivateChild` because a child is removed or moved to another part of the tree, `BuildOwner.finalizeTree` will unmount the element if it isn’t reintegrated by the end of the frame. 79 | 80 | ## How does element inheritance work? 81 | 82 | * `InheritedElement` provides an efficient mechanism for publishing heritable state to a subset of the element tree. This mechanism depends on support provided by `Element` itself. 83 | * All elements maintain a set of dependencies \(`Element._dependencies`, e.g., elements higher in the tree that fill a dependency\) and a mapping of all `InheritedElement` instances between this element and the root \(`Element._inheritedWidgets`\). The dependencies set is mainly tracked for debugging purposes. 84 | * The map of inherited elements serves as an optimization to avoid repeatedly walking the tree. 
Each dependency is uniquely identified by its instantiated type; multiple dependencies sharing a type shadow one another \(in this case, shadowed dependencies may still be retrieved by walking the tree\). 85 | * This mapping is maintained by `Element._updateInheritance`. By default, elements copy the mapping from their parents. However, `InheritedElement` instances override this method to insert themselves into the mapping \(the mapping is always copied so that different branches of the tree are independent\). 86 | * This mapping is built on the fly \(via `Element._updateInheritance`\) when elements are first mounted \(via `Element.mount`\) or reactivated \(via `Element.activate`\). The mapping is cleared when elements are deactivated \(via `Element.deactivate`\); the element is also removed from each of its dependencies' dependent lists \(`InheritedElement._dependents`\). As a result, it's usually not necessary to manually walk an element's ancestors. 87 | * Inherited relationships are established via `Element.dependOnInheritedElement` \(`Element.inheritFromElement` is a simple wrapper\). In general, the inherited ancestor should be available in `Element._inheritedWidgets`. This process causes the inherited element to add the dependent element to its list of dependents \(via `InheritedElement.updateDependencies`\). 88 | * When an element is reactivated \(e.g., after grafting\), it is notified of dependency changes if it had existing or unsatisfied dependencies \(e.g., a dependency was added but a corresponding `InheritedElement` wasn't found in `Element._inheritedWidgets`\). 89 | * Elements are notified when their dependencies change via `Element.didChangeDependencies`. By default, this method marks the element as being dirty. 
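
This machinery is what powers the familiar `of` pattern. A hedged sketch \(`ThemeColor` and its `color` field are hypothetical; the overridden and called methods are real framework API\):

```dart
// Sketch only: a minimal inherited widget relying on the constant-time
// ancestor lookup described above. `ThemeColor` is a hypothetical example.
class ThemeColor extends InheritedWidget {
  const ThemeColor({Key key, this.color, Widget child})
      : super(key: key, child: child);

  final Color color;

  // Registers the calling element as a dependent; the ancestor is found
  // via the Element._inheritedWidgets map rather than a tree walk.
  static ThemeColor of(BuildContext context) {
    return context.dependOnInheritedWidgetOfExactType<ThemeColor>();
  }

  // Dependents are rebuilt (via Element.didChangeDependencies) only when
  // this returns true for a new configuration.
  @override
  bool updateShouldNotify(ThemeColor oldWidget) => color != oldWidget.color;
}
```

Calling `ThemeColor.of(context)` from a build method both returns the nearest instance and subscribes the caller to future changes.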
90 | 91 | -------------------------------------------------------------------------------- /data-model/widgets.md: -------------------------------------------------------------------------------- 1 | # Widgets 2 | 3 | ## What are the widget building blocks? 4 | 5 | * Widgets provide an immutable description of the user interface. Though widgets themselves are immutable, they may be freely replaced, removed, or rearranged \(note that updating a widget's child typically requires the parent widget to be replaced, too\). Creating and destroying widgets is efficient since widgets are lightweight, immutable instances that are, ideally, compile-time constants. 6 | * The immutable widget tree is used to create and configure \(i.e., inflate\) a mutable element tree which manages a separate render tree; this final tree is responsible for layout, painting, gestures, and compositing. The element tree is efficiently synchronized with widget changes, reusing and mutating elements where possible \(that is, though a widget may be replaced with a different instance, provided the two instances have the same runtime type and key, the original element will be updated and not recreated\). Modifying the element tree typically updates the render tree which, in turn, changes what appears on the screen. 7 | * The main widget types are `RenderObjectWidget`, `StatefulWidget`, and `StatelessWidget`. Widgets that export data to one or more descendant widgets \(via notifications or another mechanism\) utilize `ProxyWidget` or one of its subclasses \(e.g., `InheritedWidget` or `ParentDataWidget`\). 8 | * In general, widgets either directly or indirectly configure render objects by modifying the element tree. Most widgets created by application developers \(via `StatefulWidget` and `StatelessWidget`\) delegate to a constellation of descendant widgets, typically via a build method \(e.g., `StatelessWidget.build`\). 
Others \(e.g., `RenderObjectWidget`\) manage a render object directly \(creating it and updating it via `RenderObjectWidget.createRenderObject` and `RenderObjectWidget.updateRenderObject`, respectively\). 9 | * Certain widgets wrap an explicit child widget via `ProxyWidget`, introducing heritable state \(e.g., `InheritedWidget`, `InheritedModel`\) or configuring auxiliary data \(e.g., `ParentDataWidget`\). 10 | * `ProxyWidget` notifies clients \(via `ProxyElement.notifyClients`\) in response to widget changes \(via `ProxyElement.updated`, called by `ProxyElement.update`\). 11 | * `ParentDataWidget` updates the nearest descendant render objects' parent data \(via `ParentDataElement._applyParentData`, which calls `RenderObjectElement._updateParentData`\); this process is triggered any time the corresponding widget is updated. 12 | * There are also bespoke widget subclasses that support less common types of configuration. For instance, `PreferredSizeWidget` extends `Widget` to capture a preferred size, allowing subclasses \(e.g., `AppBar`, `TabBar`, `PreferredSize`\) to express sizing information to their containers \(e.g., `Scaffold`\). 13 | * `LeafRenderObjectWidget`, `SingleChildRenderObjectWidget`, and `MultiChildRenderObjectWidget` provide storage for render object widgets with zero or more children without constraining how the underlying render object is created or updated. These widgets correspond to `LeafRenderObjectElement`, `SingleChildRenderObjectElement`, and `MultiChildRenderObjectElement`, respectively, which manage the underlying child model in the element and render trees. 14 | * Anonymous widgets can be created using `Builder` and `StatefulBuilder`. 15 | 16 | ## How do stateless widgets work? 17 | 18 | * `StatelessWidget` is a trivial subclass of `Widget` that defines a `StatelessWidget.build` method and configures a `StatelessElement`. 
19 | * `StatelessElement` is a `ComponentElement` subclass that invokes `StatelessWidget.build` in response to `StatelessElement.build` \(i.e., it delegates building to its widget\). 20 | 21 | ## How do stateful widgets work? 22 | 23 | * `StatefulWidget` is associated with `StatefulElement`, a `ComponentElement` that is almost identical to `StatelessElement`. The key difference is that the `StatefulElement` retains a reference to the `State` of the corresponding `StatefulWidget`, invoking methods on that instance rather than the widget itself. For instance, when `StatefulElement.update` is invoked, the `State` instance is notified via `State.didUpdateWidget`. 24 | * `StatefulElement` creates the associated `State` instance when it is constructed \(i.e., in `StatefulWidget.createElement`\). Then, when the `StatefulElement` is built for the first time \(via `StatefulElement._firstBuild`, called by `StatefulElement.mount`\), `State.initState` is invoked. Crucially, the `State` instance and the `StatefulWidget` reference the same element. 25 | * Since `State` is associated with the underlying `StatefulElement`, if the widget changes, provided that `Element.updateChild` is able to reuse the same element \(because the widget’s runtime type and key both match\), the `State` will be preserved. Otherwise, the `State` will be recreated. 26 | 27 | ## Why is changing tree depth expensive? 28 | 29 | * Flutter doesn't have the ability to compare trees. That is, only an element's immediate children are considered when matching widgets and elements \(via `RenderObjectElement.updateChildren`\). 30 | * When increasing the tree depth \(i.e., inserting an intermediate node\), the existing parent will be configured with a child corresponding to the intermediate widget. In most cases, this widget will not correspond to a previous child \(i.e., `Widget.canUpdate` will return false\). Thus, the new element will be freshly inflated. 
Since the intermediate node is the new owner of its parent's children, each of those children will also be inflated \(the intermediate node doesn't have access to the existing elements\). This will proceed down the entire subtree. 31 | * When decreasing the tree depth, the parent will once again be assigned new children which likely won't sync with the old children. Thus, the new children will need to be inflated, cascading down the entire subtree. 32 | * Adding a `GlobalKey` to the previous child can mitigate this issue since `Element.updateChild` is able to reuse elements that are stored in the `GlobalKey` registry \(allowing that subtree to simply be reinserted instead of rebuilt\). 33 | 34 | ## How do notifications work? 35 | 36 | * Notification support is not built directly into the widget abstraction, but layered on top of it. 37 | * `Notification` is an abstract class that searches up the element tree, visiting each ancestor `NotificationListener` widget \(`Notification.dispatch` calls `Notification.visitAncestor`, which performs this walk\). 38 | * The notification invokes `NotificationListener._dispatch` on each suitable widget, comparing the notification's runtime type with the callback's type parameter. If there's a match \(i.e., the notification is a subtype of the callback's type parameter\), the listener is invoked. 39 | * If the listener returns true, the walk terminates. Otherwise, the notification continues to bubble up the tree. 40 | 41 | -------------------------------------------------------------------------------- /get-involved.md: -------------------------------------------------------------------------------- 1 | # Get Involved ❗ 2 | 3 | This book is a community effort that aims to explain how Flutter actually works in an intuitive, but brief, way. 4 | 5 | Given the breadth and scope of the framework, there's a lot of material to cover and a lot of material to review. 
Suffice to say: **we need your help!** 6 | 7 | Any and all contributions are much appreciated. Keep on reading for ways to get involved. 8 | 9 | ### Ways to contribute 10 | 11 | * Copy editing and structural improvements. 12 | * Fact checking, corrections, and technical revisions. 13 | * Expanding sections that are incomplete or outdated. 14 | * Adding new sections or topics. 15 | * ... or however you think would be helpful! 16 | 17 | ### How to contribute 18 | 19 | * Use our [**invite link**](https://app.gitbook.com/invite/flutter-internals?invite=-Lz8eupmUYQGm6UH34Dq) to join as a contributor. 20 | * Once you've joined, you'll be able to comment, edit, and review. 21 | * Start editing! \(If adding a new section, please be mindful to mark it as "Work in Progress"\). 22 | * See the "[Project status](get-involved.md#project-status)" section for pointers to areas needing attention. 23 | * Add your name to the "[Authors](get-involved.md#authors)" section so you get the credit you deserve! 24 | 25 | ## Project status 26 | 27 | * **Needs copy editing** 28 | * \(_Section_\) Core 29 | * \(_Section_\) Data Model 30 | * \(_Section_\) Rendering 31 | * \(_Section_\) Interaction 32 | * \(_Section_\) Scrolling 33 | * \(_Section_\) Slivers 34 | * \(_Section_\) Animation 35 | * \(_Section_\) Assets 36 | * \(_Section_\) Text 37 | * \(_Section_\) User Interface 38 | * \(_Section_\) Business Logic 39 | * **Needs expansion** 40 | * Gestures 41 | * **Needs writing** 42 | * Semantics 43 | * Themes 44 | * Navigation 45 | * Material 46 | * State Management 47 | * Async Programming 48 | * Testing 49 | 50 | ## Authors 51 | 52 | ### Maintainers 53 | 54 | * Brandon Diamond 55 | 56 | ### Contributors 57 | 58 | * Ian Hickson 59 | 60 | 61 | 62 | -------------------------------------------------------------------------------- /interaction/focus.md: -------------------------------------------------------------------------------- 1 | # Focus 2 | 3 | ## What are the focus building blocks? 
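
As an orienting example before the individual building blocks, the `Focus` widget wires most of these pieces together. A hedged sketch \(`buildFocusDemo` is an arbitrary hypothetical function name\):

```dart
// Sketch only: `Focus` manages its own FocusNode here; the Builder
// supplies a context below the Focus widget so Focus.of can find it.
Widget buildFocusDemo(BuildContext context) {
  return Focus(
    autofocus: true,
    child: Builder(
      builder: (BuildContext context) {
        // Focus.of establishes an inherited dependency, so this builder
        // re-runs whenever the node's focus state changes.
        final bool hasFocus = Focus.of(context).hasFocus;
        return Text(hasFocus ? 'Focused' : 'Not focused');
      },
    ),
  );
}
```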
4 | 5 | * `FocusManager`, stored in the `WidgetsBinding`, tracks the currently focused node and the most recent node to request focus. It handles updating focus to the new primary \(if any\) and maintains the consistency of the focus tree by sending appropriate notifications. The manager also bubbles raw keyboard events up from the focused node and tracks the current highlight mode. 6 | * `FocusTraversalPolicy` dictates how focus moves within a focus scope. Traversal can happen in a `TraversalDirection` \(e.g., `TraversalDirection.left`, `TraversalDirection.up`\) or to the next, previous, or first node \(i.e., the node to receive focus when nothing else is focused\). All traversal is limited to the node’s closest enclosing scope. The default traversal policy is `ReadingOrderTraversalPolicy`, but this can be overridden using the `DefaultFocusTraversal` inherited widget. `FocusNode.skipTraversal` can be used to allow a node to be focusable without being eligible for traversal. 7 | * `FocusAttachment` is a helper used to attach, detach, and reparent a focus node as it moves around the focus tree, typically in response to changes to the underlying widget tree. As such, this ensures that the node’s focus parent is associated with a context \(`BuildContext`\) that is an ancestor of its own context. The attachment instance delegates to the underlying focus node, using inherited widgets to set defaults -- i.e., when reparenting, `Focus.of` / `FocusScope.of` are used to find the node’s new parent. 8 | * `FocusNode` represents a discrete focusable element within the UI. Focus nodes can request focus, discard focus, respond to focus changes, and receive keyboard events when focused. Focus nodes are grouped into collections using scopes, which are themselves a type of focus node. Conceptually, focus nodes are entities in the UI that can receive focus whereas focus scopes allow focus to shift amongst a group of descendant nodes. 
As program state, focus nodes must be associated with a `State` instance. 9 | * `FocusScopeNode` is a focus node subclass that organizes focus nodes into a traversable group. Conceptually, a scope corresponds to the subtree \(including nested scopes\) rooted at the focus scope node. Descendant nodes add themselves to the nearest enclosing scope when receiving focus \(`FocusNode._reparent` calls `FocusNode._removeChild` to ensure scopes forget nodes that have moved\). Focus scopes maintain a history stack with the top corresponding to the most recently focused node. If a child is removed, the next most recently focused child becomes the focused child; this process does not update the primary focus. 10 | * `Focus` is a stateful widget that manages a focus node that is either provided \(i.e., so that an ancestor can control focus\) or created automatically. `Focus.of` establishes an inherited relationship causing the dependent widget to be rebuilt when focus changes. Autofocus allows the node to request focus within the enclosing scope if no other node is already focused. 11 | * `FocusScope` is the same as above, but manages a focus scope node. 12 | * `FocusHighlightMode` and `FocusHighlightStrategy` determine how focusable widgets respond to focus with respect to highlighting. By default, the highlight mode is updated automatically by tracking whether the last interaction was touch-based. The highlight mode is either `FocusHighlightMode.traditional`, indicating that all controls are eligible for highlighting, or `FocusHighlightMode.touch`, indicating that only those controls that summon the soft keyboard are eligible. 
Keyboard events will bubble along this path until they are handled. 19 | * Moving focus \(via traversal or by requesting focus\) does not call `FocusNode.unfocus`; unfocusing would remove the node from the enclosing scope’s stack of recently focused nodes and ensure that a pending update doesn’t refocus it. 20 | * Controls managing focus nodes may choose to render with a highlight in response to `FocusNode.highlightMode`. 21 | 22 | ## What is the focus tree? 23 | 24 | * The focus tree is a sparse representation of focusable elements in the UI consisting entirely of focus nodes. Focus nodes maintain a list of children, as well as accessors that return the descendants and ancestors of a node in furthest-first and closest-first order, respectively. 25 | * Some focus nodes represent scopes which demarcate a subtree rooted at that node. Conceptually, scopes serve as a container for focus traversal operations. 26 | * Both scopes and ordinary nodes can receive focus, but scopes pass focus to the first descendant node that isn’t a scope. Even if a scope is not along the path to the primary focus, it tracks a focused child \(`FocusScopeNode.focusedChild`\) that would receive focus if it were. 27 | * Scopes maintain a stack of previously focused children with the top of the stack being the first to receive focus if that scope receives focus \(the focused child\). When the focused child is removed, focus is shifted to the previous holder. If a focused child is a scope, it is called the scope’s first focus. 28 | * When a node is focused, all enclosing scopes ensure that their focused child is either the target node or a scope that’s one level closer to the target node. 29 | * As focus changes, affected nodes receive focus notifications that clients can use to update the UI. The focused node also receives raw keyboard events which bubble up the focus path. 30 | 31 | ## How is focus attached to the widget tree? 
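
The attachment lifecycle follows a standard shape in a `State` subclass. A hedged sketch \(`MyWidget` and the debug label are hypothetical; the focus APIs are real\):

```dart
// Sketch only: manually hosting a FocusNode in a State instance.
class _MyWidgetState extends State<MyWidget> {
  FocusNode _node;
  FocusAttachment _attachment;

  @override
  void initState() {
    super.initState();
    _node = FocusNode(debugLabel: 'MyWidget');
    // Anchor the node to this element's position in the tree.
    _attachment = _node.attach(context);
  }

  @override
  Widget build(BuildContext context) {
    // Keep the node's parent correct as the widget tree rebuilds.
    _attachment.reparent();
    return Text(_node.hasFocus ? 'Focused' : 'Not focused');
  }

  @override
  void dispose() {
    // Detaches the attachment and unfocuses as needed.
    _node.dispose();
    super.dispose();
  }
}
```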
32 | 33 | * A `FocusAttachment` ensures a `FocusNode` is anchored to the correct parent when the focus tree changes \(e.g., because the widget tree rebuilds\). This ensures that the associated build contexts are consistent with the actual widget tree. 34 | * As a form of program state, every focus node must be hosted by a `StatefulWidget`’s `State` instance. When state is initialized \(`State.initState`\), the focus node is attached to the current `BuildContext` \(via `FocusNode.attach`\), returning a `FocusAttachment` handle. 35 | * When the widget tree rebuilds \(`State.build`, `State.didChangeDependencies`\), the attachment must be updated via `FocusAttachment.reparent`. If a widget is configured to use a new focus node \(`State.didUpdateWidget`\), the previous attachment must be detached \(`FocusAttachment.detach`, which calls `FocusNode.unfocus` as needed\) before attaching the new node. Finally, the focus node must be disposed when the host state is itself disposed \(`FocusNode.dispose`, which calls `FocusAttachment.detach` as needed\). 36 | * Reparenting is delegated to `FocusNode._reparent`, using `Focus.of` / `FocusScope.of` to locate the nearest parent node. If the node being reparented previously had focus, focus is restored through the new path via `FocusNode._setAsFocusedChild`. 37 | * A focus node can have multiple attachments, but only one attachment may be active at a time. 38 | 39 | ## How is focus managed? 40 | 41 | * `FocusManager` tracks the root focus scope as well as the current \(primary\) and requesting \(next\) focus nodes. It also maintains a list of dirty nodes that require update notifications. 42 | * As nodes request focus \(`FocusNode.requestFocus`\), the manager is notified that the node is dirty \(`FocusNode._markAsDirty`\) and that a focus update is needed \(`FocusManager._markNeedsUpdate`\), passing the requesting node. This sets the next focus node and schedules a microtask to actually update focus. 
This can delay focus updates by up to one frame. 43 | * This microtask promotes the requesting node to primary \(`FocusManager._applyFocusChange`\), marking all nodes from the root to the incoming and outgoing nodes, inclusive, as being dirty. Then, all dirty nodes are notified that focus has changed \(`FocusNode._notify`\) and marked clean. 44 | * The new primary node updates all enclosing scopes such that they describe a path toward it via each scope’s focused child \(`FocusNode._setAsFocusedChild`\). 45 | 46 | ## How is focus requested? 47 | 48 | * When a node requests focus \(`FocusNode.requestFocus`\), every enclosing scope is updated to focus toward the requesting node directly or via an intermediate scope \(`FocusNode._setAsFocusedChild`\). The node is then marked dirty \(`FocusNode._markAsDirty`\), which updates the manager’s next node and requests an update. 49 | * When a scope requests focus \(`FocusScopeNode.requestFocus`\), the scope tree is traversed to find the first descendant node that isn’t a scope. Once found, that node is focused. Otherwise, the deepest scope is focused. 50 | * The focus manager resolves focus via `FocusManager._applyFocusChange`, promoting the next node to primary and sending notifications to all dirty nodes. 51 | 52 | ## How is focus relinquished? 53 | 54 | * When a node is unfocused, the enclosing scope “forgets” the node and the manager is notified \(via `FocusManager._willUnfocusNode`\). 55 | * If the node had been tracked as primary or next, the corresponding property is cleared and an update scheduled. If a next node is available, that node becomes primary. If there is no primary and no next node, the root scope becomes primary. 56 | * The deepest non-scope node will not be automatically focused. However, future traversal will attempt to identify the current first focus \(`FocusTraversalPolicy.findFirstFocus`\) when navigating. 57 | * When a node is detached, if it had been primary, it is unfocused as above. 
It is then removed from the focus tree. 58 | * When a node is disposed, the manager is notified \(via `FocusManager._willDisposeFocusNode`\), which calls `FocusManager._willUnfocusNode`, as above. Finally, it is detached. 59 | * Focus updates are scheduled once per frame. As a result, state will not stack but resolve to the most recent request. 60 | 61 | ## What is a keyboard token? 62 | 63 | * Some controls display the soft keyboard in response to focus. The keyboard token is a boolean tracking whether focus was requested explicitly \(via `FocusNode.requestFocus`\) or assigned automatically due to another node losing focus. 64 | * Controls that display the keyboard consume the token \(`FocusNode.consumeKeyboardToken`\), which returns its value and sets it to false. This ensures that the keyboard is shown exactly once in response to explicit user interaction. 65 | 66 | -------------------------------------------------------------------------------- /interaction/gestures.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: 'TODO: Expand.' 3 | --- 4 | 5 | # Gestures 6 | 7 | ## How are hit tests performed? 8 | 9 | * The `RendererBinding` is `HitTestable`, which implies the presence of a `hitTest` method. The default implementation defers to `RenderView`, which itself implements `hitTest` to visit the entire render tree. Each render object is given an opportunity to add itself to a shared `HitTestResult`. 10 | * `GestureBinding.dispatchEvent` \(via `HitTestDispatcher`\) uses the `PointerRouter` to pass the original event to all render objects that were hit \(`RenderObject` implements `HitTestTarget`, and therefore provides `handleEvent`\). If a `GestureRecognizer` is utilized, the event’s pointer is passed via `GestureRecognizer.addPointer`, which registers with the `PointerRouter` to receive future events.
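To make the dispatch path concrete, here is a minimal sketch of a render object that participates in hit testing and hands pointer-down events to a recognizer. The `RenderTapTarget` class is hypothetical; the `TapGestureRecognizer` calls are the framework APIs described above.

```dart
import 'package:flutter/gestures.dart';
import 'package:flutter/rendering.dart';

// Hypothetical render object: hitTestSelf adds it to the shared
// HitTestResult; handleEvent then hands each PointerDownEvent to a
// recognizer, which registers with the PointerRouter and enters the
// gesture arena via addPointer.
class RenderTapTarget extends RenderProxyBox {
  RenderTapTarget({GestureTapCallback? onTap}) {
    _tap = TapGestureRecognizer(debugOwner: this)..onTap = onTap;
  }

  late final TapGestureRecognizer _tap;

  @override
  bool hitTestSelf(Offset position) => true;

  @override
  void handleEvent(PointerEvent event, BoxHitTestEntry entry) {
    if (event is PointerDownEvent) {
      _tap.addPointer(event); // subscribe to future events for this pointer
    }
  }

  @override
  void detach() {
    // Sketch simplification: a production implementation would retain the
    // recognizer across detach/attach cycles rather than disposing it here.
    _tap.dispose();
    super.detach();
  }
}
```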
11 | * Related events are sent to each original `HitTestTarget` as well as any routes registered with the `PointerRouter`. `PointerDownEvent` will close the gesture arena, barring additional entries, whereas `PointerUpEvent` will sweep it, resolving competing gestures and preventing indecisive gestures from locking up input. 12 | 13 | ## How are gestures captured and propagated? 14 | 15 | * `Window.onPointerDataPacket` captures pointer updates from the engine and generates `PointerEvents` from the raw data. `PointerEventConverter` is utilized to map physical coordinates from the engine to logical coordinates, taking into account the device’s pixel ratio \(via `PointerEventConverter.expand`\). 16 | 17 | ## What does a gesture recognizer do? 18 | 19 | * A pointer is added to the gesture recognizer by client code on `PointerDownEvent`. Gesture recognizers determine whether a pointer is allowed by overriding `GestureRecognizer.isPointerAllowed`. If so, the recognizer subscribes to future pointer events via the `PointerRouter` and adds the pointer to the gesture arena via `GestureArenaManager.add`. 20 | * The recognizer will process incoming events, outcomes from the gesture arena \(`GestureArenaMember.acceptGesture`/`rejectGesture`\), spontaneous decisions about the gesture \(`GestureArenaEntry.resolve`\), and other externalities. Typically, recognizers watch the stream of `PointerEvents` via `HitTestTarget.handleEvent`, looking for terminating events like `PointerUpEvent`, or criteria that will cause acceptance / rejection. If the gesture is accepted, the recognizer will continue to process events to characterize the gesture, invoking user-provided callbacks at key moments. 21 | * The gesture recognizer must unsubscribe from the pointer when rejecting or done processing, removing itself from the `PointerRouter` \(`OneSequenceGestureRecognizer.stopTrackingPointer` does this\). 22 | 23 | ## What does the gesture arena do?
24 | 25 | * The arena disambiguates multiple gestures in a way that allows single gestures to resolve immediately if there is no competition. A recognizer “wins” if it declares itself the winner or if it’s the last/sole survivor. 26 | 27 | ## Can gesture recognizers be grouped together? 28 | 29 | * A `GestureArenaTeam` combines multiple recognizers into a group. 30 | * Captained teams cause the captain recognizer to win when all unaffiliated recognizers reject or a constituent accepts. 31 | * A non-captained team causes the first added recognizer to win when all unaffiliated recognizers reject. However, if a constituent accepts, that recognizer still takes the win. 32 | 33 | ## What auxiliary classes support gesture handling? 34 | 35 | * There are two major categories of gesture recognizers: multi-touch recognizers \(e.g., `MultiTapGestureRecognizer`\) that simultaneously process multiple pointers \(e.g., tapping with two fingers will register twice\), and single-touch recognizers \(e.g., `OneSequenceGestureRecognizer`\) that only consider events from a single pointer \(e.g., tapping with two fingers will register once\). 36 | * There is a helper “Drag” object that is used to communicate drag-related updates to other parts of the framework \(like `DragScrollActivity`\). 37 | * There’s a `VelocityTracker` that generates fairly accurate estimates about drag velocity using curve fitting. 38 | * There are local and global `PointerRoutes` in `PointerRouter`. Local routes are used as described above; global routes are used to react to any interaction \(e.g., to dismiss a tooltip\). 39 | 40 | -------------------------------------------------------------------------------- /learning-path.md: -------------------------------------------------------------------------------- 1 | # Learning Path 2 | 3 | This book is written in outline format and is fairly terse.
In order to get the most out of this book, we recommend the following learning path: 4 | 5 | * Read the [Dart Language Tour](https://dart.dev/guides/language/language-tour). 6 | * A basic understanding of Dart is, of course, essential. 7 | * Skim the official [Flutter Development Guide](https://flutter.dev/docs/development). 8 | * The official docs provide a great starting point for using Flutter as a developer. 9 | * Skim the official [Inside Flutter](https://flutter.dev/docs/resources/inside-flutter) article. 10 | * This article will give you a general idea of what's happening behind the scenes. 11 | * Read Didier Boelens' [Flutter Internals](https://www.didierboelens.com/2019/09/flutter-internals/) article. 12 | * Didier's article has outstanding illustrations and descriptions that will help you develop intuition about how Flutter works. 13 | * Read through this book, referencing the [code](https://github.com/flutter/flutter/tree/master/packages/flutter/lib) and [API docs](http://api.flutter.dev). 14 | * This book will help you integrate your conceptual understanding with how the framework actually works. 15 | 16 | Unfortunately, though methods and identifiers are frequently referenced, we haven't been able to provide deep linking to the relevant code just yet. 17 | 18 | If you're able, clone a copy of the Flutter repository and use a tool like `grep` to find the relevant sections. A source code browser with identifier linking is even better. 19 | 20 | Good luck! 21 | 22 | 23 | 24 | -------------------------------------------------------------------------------- /rendering/painting.md: -------------------------------------------------------------------------------- 1 | # Painting 2 | 3 | ## What are the painting building blocks? 4 | 5 | * `Path` describes a sequence of potentially disjoint movements on a plane. A path tracks a current point as well as one or more subpaths \(created via `Path.moveTo`\).
Subpaths may be closed \(i.e., the first and last points are coincident\), open \(i.e., the first and last points are distinct\), or self-intersecting \(i.e., movements within the path intersect\). Paths incorporate lines, arcs, Béziers, and more; each operation begins at the current point and, once complete, defines the new current point. The current point begins at the origin. Paths can be queried \(via `Path.contains`\), transformed \(via `Path.transform`\), and merged \(via `Path.combine`, which accepts a `PathOperation`\). 6 | * `PathFillType` defines the criteria determining whether a point is contained by the path. `PathFillType.evenOdd` casts a ray from the point outward, summing the number of edge crossings; an odd count indicates that the point is internal. `PathFillType.nonZero` considers the path’s directionality. Again casting a ray from the point outward, this method sums the number of clockwise and counterclockwise crossings. If the counts aren’t equal, the point is considered to be internal. 7 | * `Canvas` represents a graphical context supporting a number of drawing operations. These operations are captured by an associated `PictureRecorder` and, once finalized, transformed into a `Picture`. The `Canvas` is associated with a clip region \(i.e., an area within which painting will be visible\), and a current transform \(i.e., a matrix to be applied to any drawing\), both managed using a stack \(i.e., clip regions and transforms can be pushed and popped as drawing proceeds\). Any drawing outside of the canvas’s culling box \(“`cullRect`”\) may be discarded; by default, however, affected pixels are retained. Many operations accept a `Paint` parameter, which describes how the drawing will be composited \(e.g., the fill, stroke, blending, etc.\). 8 | * `Canvas` exposes a rich API for drawing. The majority of these operations are implemented within the engine. 9 | * `Canvas.clipPath`, `Canvas.clipRect`, etc., refine \(i.e., reduce\) the clip region.
These operations compute the intersection of the current clip region and the provided geometry to define a new clip region. The clip region can be anti-aliased to provide a gradual blending. 10 | * `Canvas.translate`, `Canvas.scale`, `Canvas.transform`, etc., alter the current transformation matrix \(i.e., by multiplying it by an additional transform\). The former methods apply standard transformations, whereas the latter applies an arbitrary 4x4 matrix \(specified in column-major order\). 11 | * `Canvas.drawRect`, `Canvas.drawLine`, `Canvas.drawPath`, etc., perform fundamental drawing operations. 12 | * `Canvas.drawImage`, `Canvas.drawAtlas`, `Canvas.drawPicture`, etc., copy pixels from a rendered image or recorded picture into the current canvas. 13 | * `Canvas.drawParagraph` paints text into the canvas \(via `Paragraph._paint`\). 14 | * `Canvas.drawVertices`, `Canvas.drawPoints`, etc., describe solids using a collection of points. The former constructs triangles from a set of vertices \(`Vertices`\) and a vertex mode \(`VertexMode`\); this mode describes how vertices are composed into triangles \(e.g., `VertexMode.triangles` specifies that each sequence of three points defines a new triangle\). The resulting triangles are filled and composited using the provided `Paint` and `BlendMode`. The latter paints a set of points using a `PointMode` describing how the collection of points is to be interpreted \(e.g., as defining line segments or disconnected points\). 15 | * The save stack tracks the current transformation and clip region. New entries can be pushed \(via `Canvas.save` or `Canvas.saveLayer`\) and popped \(via `Canvas.restore`\). The number of items in this stack can also be queried \(via `Canvas.getSaveCount`\); there is always at least one item on the stack. All drawing operations are subject to the transform and clip at the top of the stack.
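A short sketch of the save stack in action (the function and values are illustrative only):

```dart
import 'dart:ui';

// Record a Picture while pushing and popping the save stack. Everything
// drawn between save() and restore() is subject to the pushed transform
// and refined clip; drawing after restore() is not.
Picture recordExample() {
  final PictureRecorder recorder = PictureRecorder();
  final Canvas canvas = Canvas(recorder, const Rect.fromLTWH(0, 0, 200, 200));
  final Paint paint = Paint()..color = const Color(0xFF2196F3);

  canvas.save(); // push a new entry onto the save stack
  canvas.translate(50, 50); // compose into the current transform
  canvas.clipRect(const Rect.fromLTWH(0, 0, 100, 100)); // refine the clip
  canvas.drawRect(const Rect.fromLTWH(0, 0, 150, 150), paint); // clipped
  canvas.restore(); // pop: the translation and clip no longer apply

  canvas.drawCircle(const Offset(25, 25), 10, paint); // unaffected
  return recorder.endRecording();
}
```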
16 | * All drawing operations are performed sequentially \(by default or when using `Canvas.save`/`Canvas.restore`\). If the operation utilizes blending, it will be blended immediately after completing. 17 | * `Canvas.saveLayer` allows drawing operations to be grouped together and composited as a whole. Each individual operation will still be blended within the saved layer; however, once the layer is completed, the composite drawing will be blended as a whole using the provided `Paint` and bounds. 18 | * For example, an arbitrary drawing can be made consistently translucent by first painting it using an opaque fill, and then blending the resulting layer with the canvas. If instead each component of the drawing were individually blended, overlapping regions would appear darker. 19 | * This is particularly useful for antialiased clip regions \(i.e., regions that aren’t pixel aligned\). Without layers, any operations intersecting the clip would need to be antialiased \(i.e., blended with the background\). If a subsequent operation intersects the clip at this same point, it would be blended with both the background and the previous operation; this produces visual artifacts \(e.g., color bleed\). If both operations were combined into a layer and composited as a whole, only the final operation would be blended. 20 | * Note that though this doesn’t introduce a new framework layer, it does cause the engine to switch to a new rendering target. This is fairly expensive as it flushes the GPU’s command buffer and requires data to be shuffled. 21 | * `Paint` describes how a drawing operation is to be applied to the canvas.
In particular, it specifies a number of graphical parameters including the color to use when filling or stroking lines \(`Paint.color`, `Paint.colorFilter`, `Paint.shader`\), how new painting is to be blended with old painting \(`Paint.blendMode`, `Paint.isAntiAlias`, `Paint.maskFilter`\), and how edges are to be drawn \(`Paint.strokeWidth`, `Paint.strokeJoin`, `Paint.strokeCap`\). Fundamental to most drawing is whether the result is to be stroked \(e.g., drawn as an outline\) or filled; `Paint.style` exposes a `PaintingStyle` instance specifying the mode to be used. 22 | * If stroking, `Paint.strokeWidth` is measured in logical pixels orthogonal to the path being painted. A value of `0.0` will cause the line to be rendered as thin as possible \(“hairline rendering”\). 23 | * Any lines that are drawn will be capped at their endpoints according to a `StrokeCap` value \(via `Paint.strokeCap`; `StrokeCap.butt` is the default and does not paint a cap\). Caps extend the overall length of lines in proportion to the stroke width. 24 | * Discrete segments are joined according to a `StrokeJoin` value \(via `Paint.strokeJoin`; `StrokeJoin.miter` is the default and extends the original line such that the next can be drawn directly from it\). A limit may be specified to prevent the original line from extending too far \(via `Paint.strokeMiterLimit`; once exceeded, the join reverts to `StrokeJoin.bevel`\). 25 | * `ColorFilter` describes a function mapping from two input colors \(e.g., the paint’s color and the destination’s color\) to a final output color \(e.g., the final composited color\). If a `ColorFilter` is provided, it overrides both the paint color and shader; otherwise, the shader overrides the color. 26 | * `MaskFilter` applies a filter \(e.g., a blur\) to the drawing once it is complete but before it is composited. Currently, this is limited to a Gaussian blur. 27 | * `Shader` is a handle to a Skia shader utilized by the engine. 
Several are exposed within the framework, including `Gradient` and `ImageShader`. These are analogous, with the former generating pixels by smoothly blending colors and the latter reading them directly from an image. Both support tiling so that the original pixels can be extended beyond their bounds \(a different `TileMode` may be specified in either direction\); `ImageShader` also supports an arbitrary matrix to be applied to the source image. 28 | 29 | -------------------------------------------------------------------------------- /rendering/semantics.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: TODO 3 | --- 4 | 5 | # Semantics 6 | 7 | -------------------------------------------------------------------------------- /scrolling/scrollable.md: -------------------------------------------------------------------------------- 1 | # Scrollable 2 | 3 | ## What are the building blocks of scrolling? 4 | 5 | * `Scrollable` provides the interaction model for scrolling without specifying how the actual viewport is managed \(a `ViewportBuilder` must be provided\). UI concerns are customized directly or via an inherited `ScrollConfiguration` that exposes an immutable `ScrollBehavior` instance. This instance is used to build platform-specific chrome \(i.e., a scrolling indicator\) and provides ambient `ScrollPhysics`, a class that describes how scrolling UI will respond to user gestures. 6 | * `ScrollPhysics` is consulted throughout the framework to construct physics simulations for ballistic scrolling, to validate and adjust user interaction, to manage momentum across interactions, and to identify overscroll regions. 7 | * `ScrollableState` connects the `Scrollable` to a `ScrollPosition` via a `ScrollController`. This controller is responsible for producing the `ScrollPosition` from a given `ScrollContext` and `ScrollPhysics`; it also provides the `initialScrollOffset`. 
8 | * For example, `PageView` injects a page-based scrolling mechanism by having its `ScrollController` \(`PageController`\) return a custom scroll position subclass. 9 | * The `ScrollContext` exposes build contexts for notifications and storage, a ticker provider for animations, and methods to interact with the scrollable; it’s analogous to `BuildContext` for an entire scrollable widget. 10 | * `ScrollPosition` tracks scroll offset as pixels \(reporting changes via `Listenable`\), applies physics to interactions via `ScrollPhysics`, and through subclasses like `ScrollPositionWithSingleContext` \(which implements `ScrollActivityDelegate` and makes concrete much of the actual scrolling machinery\), starts and stops `ScrollActivity` instances to mutate the represented scroll position. 11 | * The actual pixel offset and mechanisms for reacting to changes in the associated viewport are introduced via the `ViewportOffset` superclass. 12 | * Viewport metrics are mixed in via `ScrollMetrics`, which redundantly defines pixel offset and defines a number of other useful metrics like the amount of content above and below the current viewport \(`extentBefore`, `extentAfter`\), the pixel offset corresponding to the top and bottom of the current viewport \(`minScrollExtent`, `maxScrollExtent`\), and the viewport size \(`viewportDimension`\). 13 | * The scroll position may need to be corrected \(via `ScrollPosition.correctPixels` \[replaces pixels outright\] / `ViewportOffset.correctBy` \[applies a delta to pixels\]\) when the viewport is resized, as triggered by shrink wrapping or relayout. Every time a viewport \(via `RenderViewport`\) is laid out, the new content extents are checked by `ViewportOffset.applyContentDimensions` to ensure the offset won’t change; if it does, layout must be repeated.
14 | * `ViewportOffset.applyViewportDimension` and `ViewportOffset.applyContentDimensions` are called to determine if this is the case; any extents provided represent viewport slack -- how far the viewport can be scrolled in either direction beyond what is already visible. Activities are notified via `ScrollActivity.applyNewDimensions`. 15 | * The original pixel value corresponds to certain children being visible. If the dimensions of the viewport change, the pixel offset required to maintain that same view may change. For example, consider a viewport sized to a single letter displaying “A,” “B,” and “C” in a column. When “B” is visible, pixels will correspond to “A”’s height. Suppose the viewport expands to fit the full column. Now, pixels will be zero \(no offset is needed\). \[?\] 16 | * The same is true if the viewport’s content changes size. Again, consider the aforementioned “A-B-C” scenario with “B” visible. Instead of the viewport changing size, suppose “A” is resized to be zero pixels tall. To keep “B” in view, the pixel offset must be updated \(from non-zero to zero\). \[?\] 17 | * `ScrollController` provides a convenient interface for interacting with one or more `ScrollPositions`; in effect, it calls the corresponding method in each of its positions. As a `Listenable`, the controller aggregates notifications from its positions. 18 | * `ScrollNotifications` are emitted by the scrollable \(by way of the active `ScrollActivity`\). As a `LayoutChangedNotification` subclass, these are emitted after build and layout have already occurred, thus only painting can be performed in response without introducing jank. 19 | * Listening to a scroll position directly avoids the delay, allowing layout to be performed in response to offset changes. It’s not clear why this is faster -- both paths seem to trigger at the same time. \[?\] 20 | 21 | ## How is the scroll position updated in general?
22 | 23 | * The `ScrollPositionWithSingleContext` starts and manages `ScrollActivity` instances via drag, `animateTo`, `jumpTo`, and more. 24 | * `ScrollActivity` instances update the scroll position via `ScrollActivityDelegate`; `ScrollPositionWithSingleContext` implements this interface and applies changes requested by the current activity \(`setPixels`, `applyUserOffset`\) and starts follow-on activities \(`goIdle`, `goBallistic`\). 25 | * Any changes applied by the activity are processed by the scroll position, then passed back to the activity, which generates scroll notifications \(e.g., `dispatchScrollUpdateNotification`\). 26 | * `DragScrollActivity`, `DrivenScrollActivity`, and `BallisticScrollActivity` apply user-driven scrolling, animation-driven scrolling, and physics-driven scrolling, respectively. 27 | * `ScrollPosition.beginActivity` starts activities and tracks all state changes. This is possible because the scroll position is always running an activity, even when idle \(`IdleScrollActivity`\). These state changes generate scroll notifications via the activity. 28 | 29 | ## How is the scroll position updated by dragging? 30 | 31 | * The underlying `Scrollable` uses a gesture recognizer to detect and track dragging if `ScrollPhysics.shouldAcceptUserOffset` allows. When a drag begins, the `Scrollable`’s scroll position is notified via `ScrollPosition.drag`. 32 | * `ScrollPositionWithSingleContext` implements this method to create a `ScrollDragController`, which serves as an integration point for the `Scrollable`, which receives drag events, and the activity, which manages scroll state / notifications. The controller is returned as a `Drag` instance, which provides a mechanism to update state as events arrive.
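This hand-off can be sketched as follows; the details objects are constructed by hand purely for illustration (a real `Scrollable` builds them inside its drag gesture callbacks):

```dart
import 'package:flutter/gestures.dart';
import 'package:flutter/widgets.dart';

// Sketch of the drag flow: ScrollPosition.drag starts a
// DragScrollActivity and returns a Drag (the ScrollDragController)
// through which subsequent events are funneled.
void simulateDrag(ScrollPosition position) {
  final Drag drag = position.drag(
    DragStartDetails(globalPosition: const Offset(0, 300)),
    () {/* onDragCanceled */},
  );

  // Each update flows to ScrollActivityDelegate.applyUserOffset, which
  // applies ScrollPhysics and, if valid, calls setPixels.
  drag.update(DragUpdateDetails(
    globalPosition: const Offset(0, 280),
    delta: const Offset(0, -20),
  ));

  // Ending the drag triggers ScrollActivityDelegate.goBallistic with the
  // release velocity.
  drag.end(DragEndDetails(velocity: Velocity.zero));
}
```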
33 | * As the user drags, the drag controller forwards a derived user offset back to `ScrollActivityDelegate.applyUserOffset` \(`ScrollPositionWithSingleContext`\), which applies `ScrollPhysics.applyPhysicsToUserOffset` and, if valid, invokes `ScrollActivityDelegate.setPixels`. This actually updates the scroll offset and generates scroll notifications. 34 | * When the drag completes, a ballistic simulation is started via `ScrollActivityDelegate.goBallistic`. This delegates to the scroll position’s `ScrollPhysics` instance to determine how to react. 35 | * Interestingly, the `DragScrollActivity` delegates most of its work to the drag controller and is mainly responsible for forwarding scroll notifications. 36 | 37 | ## How is the scroll position updated by `animateTo`? 38 | 39 | * The `DrivenScrollActivity` is much more straightforward. It starts an animation controller which, on every tick, updates the current pixel value via `setPixels`. When animating, if the container over-scrolls, an idle activity is started. If the animation completes successfully, a ballistic activity is started instead. 40 | 41 | ## How are scrolling behavior and state managed? 42 | 43 | * The `ScrollPosition` writes the current scroll offset to `PageStorage` if `ScrollPosition.keepScrollOffset` is true. 44 | 45 | ## How are the scrollable, the viewport, and any contained slivers associated? 46 | 47 | * `ScrollView` is a base class that builds a scrollable and a viewport, deferring to its subclass to specify how its slivers are constructed. The subclass overrides `buildSlivers` to do this \(`ScrollView.build` creates the `Scrollable`, which uses `ScrollView.buildViewport` as its `viewportBuilder`, which uses `ScrollView.buildSlivers` to obtain the sliver children\).
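A minimal (hypothetical) `ScrollView` subclass shows this division of labor: the base class builds the `Scrollable` and viewport, while the subclass only supplies slivers via `buildSlivers`:

```dart
import 'package:flutter/widgets.dart';

// Hypothetical subclass: ScrollView.build creates the Scrollable, whose
// viewportBuilder calls ScrollView.buildViewport, which in turn uses
// this buildSlivers override to obtain the sliver children.
class SingleSliverScrollView extends ScrollView {
  const SingleSliverScrollView({Key? key, required this.sliver})
      : super(key: key);

  final Widget sliver;

  @override
  List<Widget> buildSlivers(BuildContext context) => <Widget>[sliver];
}
```

For example, `SingleSliverScrollView(sliver: SliverToBoxAdapter(child: Text('Hello')))` scrolls a single box child.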
48 | 49 | -------------------------------------------------------------------------------- /scrolling/viewports.md: -------------------------------------------------------------------------------- 1 | # Viewports 2 | 3 | ## What are the general viewport building blocks? 4 | 5 | * `ViewportOffset` is an interface that tracks the current scroll offset \(`ViewportOffset.pixels`\) and direction \(`ViewportOffset.userScrollDirection`, which is relative to the positive axis direction, ignoring growth direction\); it also offers a variety of helpers \(e.g., `ViewportOffset.animateTo`\). The offset represents how much content has been scrolled off screen, or more precisely, the number of logical pixels by which all children have been shifted opposite the viewport’s scrolling direction \(`Viewport.axisDirection`\). For a web page, this would be how many pixels of content are above the browser’s viewport. This interface is implemented by `ScrollPosition`, tying together viewports and scrollables. Pixels can be negative when scrolling before the center sliver. \[?\] 6 | 7 | ## What are the viewport widget building blocks? 8 | 9 | * `Viewport` is a layout widget that is larger on the inside. The viewport is associated with a scroll offset \(`ViewportOffset`\), an interface that is implemented by `ScrollPosition` and typically fulfilled by `ScrollPositionWithSingleContext`. As the user scrolls, this offset is propagated to descendant slivers via layout. Finally, slivers are repainted at their new offsets, creating the visual effect of scrolling. 10 | * `ShrinkWrappingViewport` is a variant of viewport that sizes itself to match its children in the main axis \(instead of expanding to fill the main axis\). 11 | * `NestedScrollViewViewport` is a specialized viewport used by `NestedScrollView` to coordinate scrolling across two viewports \(supported by auxiliary widgets like `SliverOverlapAbsorberHandle`\). 
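Since `ViewportOffset` is an ordinary interface, a bare `Viewport` can even be driven without any `Scrollable`; this sketch pins the offset with the `ViewportOffset.fixed` constructor (the values are illustrative):

```dart
import 'package:flutter/rendering.dart' show ViewportOffset;
import 'package:flutter/widgets.dart';

// A Viewport whose scroll offset is pinned at 100 logical pixels; in a
// real scrolling widget, a ScrollPosition (which implements
// ViewportOffset) fills this role and animates the value instead.
Widget buildFixedViewport() {
  return Viewport(
    axisDirection: AxisDirection.down,
    offset: ViewportOffset.fixed(100),
    slivers: const <Widget>[
      SliverToBoxAdapter(child: SizedBox(height: 500)),
    ],
  );
}
```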
12 | * `ScrollView` couples a viewport with a scrollable, delegating to its subclass to provide slivers; as such, the `ScrollView` provides a foundation for building scrollable UI. `CustomScrollView` accepts an arbitrary sliver list whereas `BoxScrollView` -- and its subclasses `ListView` and `GridView` -- apply a single layout model \(e.g., list or grid\) to a collection of slivers. 13 | 14 | ## What are the viewport rendering building blocks? 15 | 16 | * `RenderAbstractViewport` provides a common interface for all viewport subtypes. This allows the framework to locate and interact with viewports in a generic way \(via `RenderAbstractViewport.of`\). It also provides a generic interface for determining offsets necessary to reveal certain children. 17 | * `RenderViewportBase` provides shared code for render objects that host slivers. By establishing an axis and axis direction, `RenderViewportBase` maps the offset-based coordinate space used by slivers into cartesian space according to a managed offset value \(`ViewportOffset.pixels`\). `RenderViewportBase.layoutChildSequence` serves as the foundation for sliver layout \(and is typically invoked by `performLayout` in subclasses\). `RenderViewportBase` also establishes the cache extent \(the area to either side of the viewport that is laid out but not visible\) as well as entry points for hit testing and painting. 18 | * `RenderViewport` displays a subset of its sliver children based on its current viewport offset. A center sliver \(`RenderViewport.center`\) is anchored at offset zero. 
Slivers before center \(“reverse children”\) grow opposite the axis direction \(`GrowthDirection.reverse`\) whereas the center along with subsequent slivers \(“forward children”\) grow forward \(`GrowthDirection.forward`\); both groups are anchored according to the same axis direction \(this is why both start from the same edge\), though conceptually reverse slivers are laid out in the opposite axis direction \(e.g., their “leading” and “trailing” edges are flipped\). 19 | * The anchor point can be adjusted, changing the visual position of offset zero \(`RenderViewport.anchor` is in the range \[0, 1\], with zero corresponding to the axis origin \[?\]\). 20 | * Conceptually, children are ordered: RN-R0, center, F0-FN. 21 | * `RenderShrinkWrappingViewport` is similar to `RenderViewport`, except it is sized to match the total extent of visible children within the bounds of incoming constraints. 22 | * `RenderNestedScrollViewViewport` backs `NestedScrollViewViewport`, additionally associating the viewport with the `SliverOverlapAbsorberHandle` used to coordinate overlap between the nested scroll views. 23 | 24 | ## What are the attributes of a viewport? 25 | 26 | * Throughout this section, words like “main extent,” “cross extent,” “before,” “after,” “leading,” and “trailing” are used to eliminate spatial bias from descriptions. This is because viewports can be oriented along either axis \(e.g., horizontal, vertical\) with varying directionality \(e.g., down, right\). Moreover, the ordering of children along the axis is subject to the current growth direction \(e.g., forward or reverse\). 27 | * Viewports have two sets of dimensions: outer and inner. The portion of the viewport that occupies space on screen has a main axis and cross axis extent \(e.g., height and width\); these are the viewport’s “outer”, “outside”, or “physical” dimensions. The inside of the viewport, which matches or exceeds the outer extent, includes all the content contained within the viewport; these are described using “inner”, “inside”, or “logical” dimensions.
The inner edges correspond to the edges of the viewport’s contents; the outer edges correspond to the edges of the visible content. When otherwise unspecified, the viewport’s leading / trailing edges generally refer to its outer \(i.e., physical\) edges. 28 | * The viewport is composed of a bidirectional list of slivers. “Forward slivers” include a “center sliver,” and are laid out in the default axis direction \(`GrowthDirection.forward`\). “Reverse slivers” immediately precede the center sliver and are laid out opposite the axis direction \(`GrowthDirection.reverse`\). The line between forward and reverse slivers, at the center sliver’s leading edge, is called the “centerline,” coincident with the zero scroll offset. Within this document, the region within the viewport composed of reverse slivers is called the “reverse region,” with its counterpart being the “forward region.” Note that the viewport’s inner edges fully encompass both regions. 29 | * The viewport positions the center sliver at scroll offset zero by definition. However, the viewport also tracks a current scroll offset \(`RenderViewport.offset`\) to determine which of its sliver children are in view and therefore should be rendered. This offset represents the distance between the center sliver’s leading edge \(i.e., scroll offset zero\) and the viewport’s outer leading edge, and increases opposite the axis direction. 30 | * Equivalently, this represents the number of pixels by which the viewport’s contents have been shifted opposite its axis direction. 31 | * For example, if the center sliver’s leading edge is aligned with the viewport’s leading edge, the offset would be zero. If its trailing edge is aligned with the viewport’s leading edge, the offset would be the sliver’s extent. If its leading edge is aligned with the viewport’s trailing edge, the offset would be the viewport’s extent, negated. 32 | 33 | ## How do viewports manage parent data?
34 | 35 | * There are two major classes of parent data used by slivers: `SliverPhysicalParentData` \(used by `RenderViewport`\) and `SliverLogicalParentData` \(used by `RenderShrinkWrappingViewport`\). These differ in how they represent the child’s position. The former stores absolute coordinates from the parent’s visible top left corner, whereas the latter stores the distance from the parent’s zero scroll offset to the child’s nearest edge. Physical coordinates are more efficient for children that must repaint often but incur a cost during layout. Logical coordinates optimize layout at the expense of added cost during painting. 36 | * Viewports use two subclasses to support multiple sliver children: `SliverPhysicalContainerParentData` and `SliverLogicalContainerParentData`. These are identical to their superclasses \(where the “parent” isn’t a sliver but the viewport itself\), mixing in `ContainerParentDataMixin<RenderSliver>`. 37 | 38 | ## When might the center sliver not appear at the leading edge? 39 | 40 | * The center sliver may be offset by `RenderSliver.centerOffsetAdjustment` \(added to the current `ViewportOffset.pixels` value\). This effectively shifts the zero scroll offset \(e.g., to visually center the center sliver\). 41 | * The zero scroll offset can itself be shifted by a proportion of the viewport’s main extent via `RenderViewport.anchor`. Zero positions the zero offset at the viewport’s leading edge; one positions the offset at the trailing edge \(and `0.5` would position it at the midpoint\). 42 | * These adjustments are mixed into the calculation early on \(see `RenderViewport.performLayout` and `RenderViewport._attemptLayout`\). Conceptually, it is easiest to ignore them other than to know that they shift the centerline’s visual position. 43 | * The center sliver may also paint itself at an arbitrary offset via `SliverGeometry.paintOrigin`, though this won’t actually move the zero offset.
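The `centerOffsetAdjustment` hook can be sketched with a hypothetical sliver that shifts itself by half its own extent (the class and the choice of value are illustrative, assuming the goal is to visually center the center sliver):

```dart
import 'package:flutter/rendering.dart';

// Hypothetical render object: the value returned from
// centerOffsetAdjustment is added to ViewportOffset.pixels, shifting the
// effective zero scroll offset by half of this sliver's extent.
class RenderCenteredSliver extends RenderSliverToBoxAdapter {
  RenderCenteredSliver({RenderBox? child}) : super(child: child);

  @override
  double get centerOffsetAdjustment {
    // geometry is null before the first layout pass.
    return (geometry?.scrollExtent ?? 0.0) / 2.0;
  }
}
```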
44 | 45 | ## What are some of the quirks of viewport layout? 46 | 47 | * Forward and reverse slivers are laid out separately and are generally isolated from one another \[?\]. Reverse slivers are laid out first, then forward slivers. Reverse slivers and forward slivers share the same axis direction \(i.e., the generated constraints reference the same `SliverConstraints.axisDirection`\), though reverse sliver calculations \(e.g., for painting or layout offset\) effectively flip this direction. Thus, it is most intuitive to think of reverse slivers as having their leading and trailing edges flipped, etc. 48 | * Layout offset is effectively measured from the viewport’s outer leading edge to the nearest edge of the sliver \(i.e., the offset is relative to the viewport’s current view\). 49 | * More accurately, a sliver’s layout offset is measured from the zero scroll offset of its parent which, for a viewport, coincides with the centerline. However, since layout offset is iteratively computed by summing layout extents \(in `RenderViewportBase.layoutChildSequence`\) and these extents are zero unless a sliver is visible, this formal definition boils down to the practical definition described above. \[?\] 50 | * This property explains why `RenderViewportBase.computeAbsolutePaintOffset` is able to produce paint offsets trivially from layout offsets \(this is surprising since layout offsets are ostensibly measured from the zero scroll offset whereas paint offsets are measured from the box’s top left corner\). 51 | * Even though layout offsets after the trailing edge are approximate \(due to an implementation detail of `RenderViewportBase.layoutChildSequence`\), this direct mapping remains safe as out-of-bounds painting will be clipped. 52 | * Scroll offset can be interpreted in two ways. 
53 | * When considering a sliver or viewport in isolation, scroll offset refers to a one dimensional coordinate space anchored at the object’s leading edge and extending toward its trailing edge. 54 | * When considering a sliver in relation to a parent container, scroll offset represents the first offset in the sliver’s coordinate space that would be visible in its parent \(e.g., offset zero implies that the leading edge of the sliver is visible; offset N implies that all except the leading N pixels are visible -- if N is greater than the sliver’s extent, some of those pixels are empty space\). Conceptually, a scroll offset represents how far a sliver’s leading edge precedes the viewport’s leading edge. 55 | * When zero, the sliver lies entirely at or after the leading edge \(and possibly entirely after the trailing edge\). 56 | * When less than the sliver’s scroll extent, a portion of the sliver precedes the leading edge. 57 | * When greater, the sliver is entirely before the leading edge. 58 | * Scroll extent represents how much space a sliver might consume; it need only be accurate when the constraints would permit the sliver to fully paint \(i.e., the desired paint extent fits within the remaining paintable space\). 59 | * Slivers preceding the leading edge or appearing within the viewport must provide valid scroll extents if they might conceivably be painted. 60 | * Slivers beyond the trailing edge may approximate their scroll extents since no pixels remain for painting. 61 | * Overlap is the pixel offset \(in the main axis direction\) necessary to fully “escape” any earlier sliver’s painting. More formally, this is the distance from the sliver’s current position to the first pixel that hasn’t been painted on by an earlier sliver. Typically, this pixel is after the sliver’s offset \(e.g., because a preceding sliver painted beyond its layout extent\). 
However, in some cases, the overlap can be negative, indicating that the first such pixel precedes the sliver’s offset. 62 | * All slivers preceding the viewport’s trailing edge receive unclamped values for the remaining paintable and cacheable extents, even if those slivers are located far offscreen. Slivers implement a variety of layout effects and therefore may consume visible \(or cacheable\) pixels at their discretion. 63 | 64 | ## What other services does the viewport provide? 65 | 66 | * Viewports support the concept of maximum scroll obstruction \(`RenderViewportBase.maxScrollObstructionExtentBefore`\), a region of the viewport that is covered by “pinned” slivers and that effectively reduces the viewport’s scrollable bounds. This is a secondary concept used only when computing the scroll offset to reveal a certain sliver \(`RenderAbstractViewport.getOffsetToReveal`\). 67 | * Viewports provide a mechanism for querying the paint and \(approximate\) scroll offsets of their children. The implementation depends on the type of viewport; efficiency may also be affected by the parent model \(e.g., `RenderViewport` uses `SliverPhysicalContainerParentData`, allowing paint offsets to be returned immediately\). 68 | 69 | ## How are viewport children ordered? 70 | 71 | * A logical index is assigned to all children. The center child is assigned zero; subsequent children \(forward slivers\) are assigned increasing indices \(e.g., 1, 2, 3\) whereas preceding children \(reverse slivers\) are assigned decreasing indices \(e.g., -1, -2, -3\). 72 | * Children are stored sequentially \(R reverse slivers + the center sliver + F forward slivers\), starting with the “last” reverse sliver \(-R\), proceeding toward the “first” reverse sliver \(-1\), then the center sliver \(0\), then ending on the last forward sliver \(F\). 73 | * The first child \(`RenderViewport.firstChild`\) is the “last” reverse sliver. 
74 | * Viewports define a painting order and a hit-test order. Reverse slivers are painted from last to first \(-1\), then forward slivers are painted from last to first \(center\). Hit-testing proceeds in the opposite direction: forward slivers are tested from the first \(center\) to the last, then reverse slivers are tested from the first \(-1\) to the last. 75 | 76 | ## What do shrink-wrapping viewports do differently? 77 | 78 | * Whereas an ordinary viewport expands to fill the main axis, shrink-wrapping viewports are sized to minimally contain their children. Consequently, as the viewport is scrolled, its size will change to accommodate the visible children, which may require layout to be repeated. Shrink-wrapping viewports do not support reverse children. 79 | * The shrink-wrapping viewport uses logical coordinates instead of physical coordinates since it performs layout frequently. 80 | 81 | ## What is a scroll offset correction? 82 | 83 | * A pixel offset directly applied to the `ViewportOffset.pixels` value allowing descendant slivers to adjust the overall scrolling position. This is done to account for errors when estimating overall scroll extent for slivers that build content dynamically. 84 | * Such slivers often cannot measure their actual extent without building and laying out completely first. Doing this continuously would be prohibitively slow and thus relative positions are used \(i.e., position is reckoned based on where neighbors are without necessarily laying out all children\). 85 | * The scroll offset correction immediately halts layout, propagating to the nearest enclosing viewport. The value is added directly to the viewport’s current offset \(e.g., a positive correction increases `ViewportOffset.pixels`, translating content opposite the viewport’s axis direction -- i.e., scrolling forward\). 
86 | * Conceptually, this allows slivers to address logical inconsistencies that arise due to, e.g., estimating child positions by determining a scroll offset that would have avoided the problem, then reattempting layout using this new offset. 87 | * Adjusting the scroll offset will not reset other child state \(e.g., the position of children in a `SliverList`\); thus, when such a sliver requests a scroll offset correction, the offset selected is one that would cause any existing, unaltered state to be consistent. 88 | * For example, a `SliverList` may not have enough room for newly revealed children when scrolling backwards \(e.g., because the incoming scroll offset indicates an inadequate number of pixels preceding the viewport’s leading edge\). The `SliverList` calculates the scroll offset correction that would have avoided this logical inconsistency, adjusting it \(and any affected layout offsets\) to ensure that children appear at the same visual location in the viewport. 89 | * If a chain of corrections occurs, layout will eventually fail. 90 | 91 | -------------------------------------------------------------------------------- /slivers/dynamic-slivers.md: -------------------------------------------------------------------------------- 1 | # Dynamic Slivers 2 | 3 | ## What are a few common types of dynamic slivers? 4 | 5 | * `RenderSliverList` positions children of varying extents in a linear array along the main axis. All positioning is relative to adjacent children in the list. Since only visible children are materialized, and earlier children may change extent, positions are occasionally corrected to maintain a consistent state. 6 | * `RenderSliverGrid` positions children in a two dimensional arrangement determined during layout by a `SliverGridDelegate`. The positioning, spacing, and size of each item is generally static, though the delegate is free to compute an arbitrarily complex layout. The current layout strategy does not support non-fixed extents. 
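The contrast with fixed-extent layout can be made concrete. When every item shares the same extent, the visible index range falls out of pure arithmetic, with no need to build or lay out children; this is exactly what `RenderSliverList` cannot do, which is why it resorts to relative positioning. A minimal Python sketch of the fixed-extent case \(illustrative only, not Flutter code\):

```python
def fixed_extent_range(scroll_offset, remaining_paint_extent, item_extent, child_count):
    """Visible [first, last] index range for a list of equally sized items.

    Because every item has the same extent, no child needs to be materialized
    to discover which indices are visible."""
    # Index of the item intersecting the viewport's leading edge.
    first = int(scroll_offset // item_extent)
    # Index of the item intersecting the viewport's trailing edge.
    last = int((scroll_offset + remaining_paint_extent) // item_extent)
    return max(0, first), min(child_count - 1, last)

# 100px items, scrolled 250px into a 600px viewport: items 2 through 8 are visible.
assert fixed_extent_range(250, 600, 100, 1000) == (2, 8)
# With only 3 children, the range is clamped to the children that exist.
assert fixed_extent_range(0, 600, 100, 3) == (0, 2)
```

With variable extents, `first` cannot be computed this way; it must be discovered by walking previously laid out children, and any error discovered during that walk is repaired with a scroll offset correction.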
7 | 8 | ## How do lists without fixed item extent perform layout? 9 | 10 | * When extent is not fixed, position cannot be directly computed from a child’s index. Instead, position must be measured by laying out all preceding children. Since this would be quite expensive and eliminate the benefit of viewport culling, an optimization is used whereby previously laid out children serve to anchor newly revealed children \(i.e., positions are determined relative to previously laid out children\). 11 | * Note that this can lead to inconsistencies if off-screen children change size or are removed; scroll offset corrections are applied to address these errors. 12 | * Layout proceeds by establishing the range of children currently visible in the viewport; that is, the range of children beginning from the child starting at or intersecting the viewport’s leading edge to the child ending at or intersecting its trailing edge. Unlike a fixed extent list, this range cannot be computed directly but must be measured by traversing and building children starting from the first known child. Layout is computed as follows: 13 | * Ensure that there’s a first known child \(since, as mentioned, layout is performed relative to previously laid out children\). 14 | * If there isn’t, create \(but do not lay out\) this initial child with an offset and index of zero. If this fails, the list has zero extent and layout is complete. 15 | * If the first known child does not precede or start at the viewport’s leading edge, build children toward the leading edge until such a child is identified. 16 | * Leading children must be built as the walk progresses since they must not have existed \(else, there would have been a different first known child\). As part of building, the item is inserted into the list’s child model. 17 | * If there are no more children to build, the last successfully processed child is positioned at offset zero \(i.e., bumped up to the top of the list\). 
18 | * If the incoming scroll offset is already zero, then the walk ends; this child satisfies the search criteria. 19 | * If the scroll offset is non-zero, a scroll offset correction is needed. 20 | * A non-zero offset implies that a portion of the list precedes the leading edge. Given that there aren’t even enough children to reach the leading edge, this cannot be the case. 21 | * The correction ensures that the incoming scroll offset will be zero when layout is reattempted. Since the earliest child is now positioned at offset zero, the inconsistency is corrected. 22 | * The newly built child’s scroll offset is computed by subtracting its paint extent from the last child’s scroll offset. 23 | * If the resulting offset is negative, a scroll offset correction is needed. 24 | * A negative offset implies that there is insufficient room for the child. The last child’s offset represents the number of pixels available before that child; if this is less than the new child’s extent, the list is too small and the incoming scroll offset must be invalid. 25 | * All preceding children that do not fit \(including the one that triggered this case\) are built and measured to determine their total extent. The earliest such child is positioned at offset zero. 26 | * A scroll offset correction is calculated to allow sufficient room for the overflowing children while ensuring that the last processed child appears at the same visual location. 27 | * This quantity is the total extent needed minus the last walked item’s scroll offset. 28 | * Position the child at the calculated scroll offset. 29 | * The first child in the list must now precede or start at the viewport’s leading edge. Ensure that it has been laid out \(e.g., if the preceding walk wasn’t necessary\). 30 | * Advance to find the child starting at or intersecting with the viewport’s leading edge \(there may have already been several such children in the child list\). 
Then, advance to find the child ending at or intersecting with the viewport’s trailing edge. 31 | * Advance by identifying the next child in the list while incrementing an index counter to detect gaps. 32 | * If the next child hasn’t been built or a gap is detected, build and lay out the child at the current index. 33 | * If no more children can be built, report failure. 34 | * If the next child hasn’t been laid out, lay it out now. 35 | * While advancing, position each child directly after the preceding child. 36 | * If advancing fails before the leading edge is reached, remove all but the latest such child \(via garbage collection\). Maintain this child as it captures the list’s total dimensions \(i.e., its position plus its paint extent corresponds to the list’s scroll extent\). 37 | * Complete layout with zero paint extent, using the last item to compute overall scroll and maximum paint extents. 38 | * Count the children that wholly precede the viewport’s leading edge. Once the trailing child is found, count all children following it. Remove the corresponding children via garbage collection since they are no longer visible. 39 | * Return the resulting geometry: 40 | * `SliverGeometry.scrollExtent`: estimated maximum extent \(this is correct for fixed extent lists\). 41 | * `SliverGeometry.maxPaintExtent`: estimated maximum extent \(this is the most that can be painted\). 42 | * `SliverGeometry.paintExtent`: the portion of reified children that are actually visible \(via `RenderSliver.calculatePaintOffset`\). 43 | * `SliverGeometry.hasVisualOverflow`: true if the trailing child extends beyond the viewport’s trailing edge, or the list precedes the viewport’s leading edge \(i.e., incoming scroll offset is greater than zero\). 44 | * If the list was fully scrolled, it will not have had an opportunity to lay out children. However, it is still necessary to report underflow to the manager. 45 | 46 | ## What are the building blocks of grid layout? 
47 | 48 | * `SliverGridGeometry` captures the geometry of an item within a grid. This encompasses a child’s scroll offset, cross axis offset, main axis extent, and cross axis extent. 49 | * `SliverGridParentData` extends `SliverMultiBoxAdaptorParentData` to include the child’s cross axis offset. This is necessary since multiple children within a grid can share the same scroll offset while appearing at different cross axis offsets. 50 | * `SliverGridLayout` encapsulates positioning, sizing, and spacing logic. It is consulted during layout to determine the position and size of each grid item. This information is provided by returning the minimum and maximum index for a given scroll offset \(via `SliverGridLayout.getMinChildIndexForScrollOffset` and `SliverGridLayout.getMaxChildIndexForScrollOffset`\), as well as the grid geometry for a given index \(via `SliverGridLayout.getGeometryForChildIndex`\). 51 | * `SliverGridDelegate` builds a `SliverGridLayout` subclass on demand \(i.e., during grid layout\). This allows the calculated layout to adjust to incoming sliver constraints. 52 | * `SliverGridRegularTileLayout` calculates a layout wherein children are equally sized and spaced. As such, all aspects of layout are computed directly \(i.e., without measuring adjacent children\). 53 | * `SliverGridDelegateWithFixedCrossAxisCount` configures a `SliverGridRegularTileLayout` such that the same number of children appear at a given scroll offset. Children are sized to share the available cross axis extent equally. 54 | * `SliverGridDelegateWithMaxCrossAxisExtent` configures a `SliverGridRegularTileLayout` such that tiles are no larger than the provided maximum cross axis extent. A candidate extent is calculated that divides the available space evenly \(i.e., without a remainder\) and that is as large as possible. 55 | 56 | ## How do grids perform layout? 57 | 58 | * Grids are laid out similarly to fixed extent lists. 
Once a layout is computed \(via `SliverGridDelegate.getLayout`\), it is treated as a static description of the grid \(i.e., positioning is absolute\). As a result, children do not need to be reified to measure extent and position. Layout proceeds as follows: 59 | * Compute a new `SliverGridLayout` via `SliverGridDelegate`, providing the incoming constraints. 60 | * Target first and last indices are computed based on the children that would be visible given the scroll offset and the remaining paint extent \(via `SliverGridLayout.getMinChildIndexForScrollOffset` and `SliverGridLayout.getMaxChildIndexForScrollOffset`, respectively\). 61 | * Shrink-wrapping viewports have infinite extent. In this case, there is no last index. 62 | * Any children that were visible but are now outside of the target index range are garbage collected \(via `RenderSliverMultiBoxAdaptor.collectGarbage`\). This also cleans up any expired keep alive children. 63 | * If there are no children attached to the grid, insert \(but do not lay out\) an initial child at the first index. 64 | * If this child cannot be built, layout is completed with scroll extent and maximum paint extent set to the calculated max scroll offset \(via `SliverGridLayout.computeMaxScrollOffset`\); all other geometry remains zero. 65 | * All children still attached to the grid fall in the visible index range and there is at least one such child. 66 | * If indices have become visible that precede the first child’s index, the corresponding children are built and laid out \(via `RenderSliverMultiBoxAdaptor.insertAndLayoutLeadingChild`\). 67 | * These children will need to be built since they could not have been attached to the grid by assumption. 68 | * If one of these children cannot be built, layout will fail. This is likely a bug. 69 | * Identify the child with the largest index that has been built and laid out so far. This is the trailing child. 
70 | * If there were leading children, this will be the leading child adjacent to the initial child. If not, this is the initial child itself \(which is now laid out if necessary\). 71 | * Lay out every remaining child until there is no more room \(i.e., the target index is reached\) or no more children \(i.e., a child cannot be built\). Update the trailing child as layout progresses. 72 | * The trailing child serves as the “after” argument when inserting children \(via `RenderSliverMultiBoxAdaptor.insertAndLayoutChild`\). 73 | * The children may have already been attached to the grid. If so, the child is laid out without being rebuilt. 74 | * Layout offsets for both the main and cross axes are assigned according to the geometry reported by the `SliverGridLayout` \(via `SliverGridLayout.getGeometryForChildIndex`\). 75 | * Compute the estimated maximum extent using the first and last index that were actually reified as well as the enclosing leading and trailing scroll offsets. 76 | * Return the resulting geometry: 77 | * `SliverGeometry.scrollExtent`: estimated maximum extent \(this is correct for grids with fixed extent\). 78 | * `SliverGeometry.maxPaintExtent`: estimated maximum extent \(this is the most that can be painted\). 79 | * `SliverGeometry.paintExtent`: the visible portion of the range defined by the leading and trailing scroll offsets. 80 | * `SliverGeometry.hasVisualOverflow`: always true, unfortunately. 81 | * If the grid was fully scrolled, it will not have had an opportunity to lay out children. However, it is still necessary to report underflow to the manager. 82 | 83 | -------------------------------------------------------------------------------- /slivers/persistent-headers.md: -------------------------------------------------------------------------------- 1 | # Persistent Headers 2 | 3 | ## How do persistent headers work? 
4 | 5 | * `RenderSliverPersistentHeader` provides a base class for implementing persistent headers within a viewport, adding support for varying between a minimum and maximum extent during scrolling. Some subclasses introduce pinning behavior, whereas others allow the header to scroll into and out of view. `SliverPersistentHeader` encapsulates this behavior into a stateless widget, delegating configuration \(and child building\) to a `SliverPersistentHeaderDelegate` instance. 6 | * Persistent headers can be any combination of pinnable and floating \(or neither\); those that float, however, can also play a snapping animation. All persistent headers expand and contract in response to scrolling; only floating headers do so in response to user scrolling anywhere in the viewport. 7 | * A floating header reappears whenever the user scrolls in its direction. The header expands to its maximum extent as the user scrolls toward it, and shrinks as the user scrolls away. 8 | * A pinned header remains at the top of the viewport. Unless floating is also enabled, the header will only expand when approaching its actual position \(e.g., the top of the viewport\). 9 | * Snapping causes a floating header to animate to its expanded or contracted state when the user stops scrolling, regardless of scroll extent. 10 | * Persistent headers contain a single box child and track a maximum and minimum extent. The minimum extent is typically based on the box’s intrinsic dimensions \(i.e., the object’s natural size\). As per the render box protocol, by reading the child’s intrinsic size, the persistent header will be re-laid out if this changes. 11 | * `RenderSliverPersistentHeader` doesn’t configure parent data or determine how to position its child along the main axis. It does, however, provide support for hit testing children \(which requires subclasses to override `RenderSliverPersistentHeader.childMainAxisPosition`\). 
It also paints its child without using parent data, computing the child’s offset in the same way as `RenderSliverSingleBoxAdapter` \(i.e., to determine whether its viewport-naive box is offset on the canvas if partially scrolled out of view\). 12 | * Two major hooks are exposed: 13 | * `RenderSliverPersistentHeader.updateChild`: supports updating the contained box whenever the persistent header lays out or the box itself changes size. It is provided with two pieces of layout information. 14 | * Shrink offset is the delta between the current and maximum extents \(i.e., how much more room there is to grow\). Always non-negative. 15 | * Overlaps content is true if the header’s leading edge is not at its layout position in the viewport. 16 | * `RenderSliverPersistentHeader.layoutChild`: invoked by subclasses to update and then lay out the child within the largest visible portion of the header \(between maximum and minimum extent\). Subclasses provide a scroll offset, maximum extent, and overlap flag, all of which may differ from the incoming constraints. 17 | * Shrink offset is set to the scroll offset \(how far the header is before the viewport’s leading edge\) and is capped at the max extent. Conceptually, this represents how much of the header is off screen. 18 | * If the child has changed size or the shrink offset has changed, the box is given an opportunity to update its appearance \(via `RenderSliverPersistentHeader.updateChild`\). This is done within a layout callback since, in the common case, the child is built dynamically by the delegate. This is an example of interleaving build and layout. 19 | * This flow is facilitated by the persistent header widgets \(e.g., `_SliverPersistentHeaderRenderObjectWidget`\), which utilize a custom render object element \(`_SliverPersistentHeaderElement`\). 
20 | * When the child is updated \(via `RenderSliverPersistentHeader.updateChild`\), the specialized element updates its child’s element \(via `Element.updateChild`\) using the widget produced by the delegate’s build method. 21 | * Finally, the child is laid out with its main extent loosely constrained to the portion of the header that’s visible -- with the minimum extent as a lower bound \(i.e., so that the box is never forced to be smaller than the minimum extent\). 22 | 23 | ## How is the expanding / contracting scrolling effect implemented? 24 | 25 | * `RenderSliverScrollingPersistentHeader` expands to its maximum extent when scrolled into view, and shrinks to its minimum extent before being scrolled out of view. 26 | * Lays out the child in the largest visible portion of the header \(up to the maximum extent\), then returns its own geometry. 27 | * `SliverGeometry.scrollExtent`: always the max extent since this is how much scrolling is needed to scroll past the header. 28 | * `SliverGeometry.paintOrigin`: paints before itself if there’s a negative overlap \(e.g., to fill empty space in the viewport from overscroll\). 29 | * `SliverGeometry.paintExtent`, `SliverGeometry.layoutExtent`: the largest visible portion of the header, clamped to the remaining paint extent. 30 | * `SliverGeometry.maxPaintExtent`: the maximum extent; it’s not possible to provide more content. 31 | * Tracks the child’s position across the scroll \(i.e., the distance from the header’s leading edge to the child’s leading edge\). The child is aligned with the trailing edge of the header to ensure it scrolls into view first. 32 | * Calculated as the portion of the maximum extent in view \(which is negative when scrolled before the leading edge\), minus the child’s extent after layout. 33 | 34 | ## How does pinning work? 35 | 36 | * `RenderSliverPinnedPersistentHeader` is similar to its scrolling sibling, but remains pinned at the top of the viewport regardless of offset. 
It also avoids overlapping earlier slivers \(e.g., useful for building stacking section labels\). 37 | * The pinned header will generally have a layout offset of zero when scrolled before the viewport’s leading edge. As a consequence, any painting that it performs will be coincident with the viewport’s leading edge. This is how pinning is achieved. 38 | * Recall that layout offset within a viewport only increases when slivers report a non-zero layout extent. Slivers generally report a zero layout extent when they precede the viewport’s leading edge \(i.e., when viewport scroll offset exceeds their extent\); thus, when the header precedes the viewport’s leading edge, it will likely have a zero layout offset. Additionally, recall that layout offset is used when the viewport paints its children. Since the header will have a zero layout offset, at least in the case of `RenderViewport`, the sliver will be painted at the viewport’s painting origin. 39 | * If an earlier, off-screen sliver consumes layout, this will bump out where the header paints. This might cause strange behavior. 40 | * If another pinned header precedes this one, it will not consume layout. However, by utilizing the overlap offset, the current header is able to avoid painting on top of the preceding header. 41 | * Next, it lays out the child in the largest visible portion of the header \(up to the maximum extent\), then returns its own geometry. 42 | * `SliverGeometry.scrollExtent`: always the max extent since this is how much scrolling is needed to scroll past the header \(though, practically speaking, it can never truly be scrolled out of view\). 43 | * `SliverGeometry.paintOrigin`: always paints on the first clean pixel to avoid overlapping earlier slivers. 44 | * `SliverGeometry.paintExtent`: always paints the entire child, even when scrolled out of view. Clamped to remaining paint extent. 
45 | * `SliverGeometry.layoutExtent`: the pinned header will consume the portion of its maximum extent that is actually visible \(i.e., it only takes up space when its true layout position falls within the viewport\). Otherwise \(e.g., when pinned\), it occupies zero layout space and therefore may overlap any subsequent slivers. Clamped to remaining paint extent. 46 | * `SliverGeometry.maxScrollObstructionExtent`: the minimum extent, since this is the most that the viewport can be obscured when pinned. 47 | * The child is always positioned at zero as this sliver always paints at the viewport’s leading edge. 48 | 49 | ## How does floating work? 50 | 51 | * `RenderSliverFloatingPersistentHeader` is similar to its scrolling sibling, but reattaches to the viewport’s leading edge as soon as the user scrolls in its direction. It then shrinks and detaches if the user scrolls away. 52 | * The floating header tracks scroll offset, detecting when the user begins scrolling toward the header. The header maintains an effective scroll offset that matches the real scroll offset when scrolling away, but that enables floating otherwise. 53 | * It does this by jumping ahead such that the sliver’s trailing edge \(as measured using the effective offset and maximum extent\) is coincident with the viewport’s leading edge. This is the “floating threshold.” All subsequent scrolling deltas are applied to the effective offset until the user scrolls the header before the floating threshold. At this point, normal behavior is resumed. 54 | * `RenderSliverFloatingPersistentHeader.performLayout` detects the user’s scroll direction and manages an effective scroll offset. The effective scroll offset is used for updating geometry and painting, allowing the header to float above other slivers. 55 | * The effective scroll offset matches the actual scroll offset the first time layout is attempted or whenever the user is scrolling away from the header and the header isn’t currently floating. 
56 | * Otherwise, the header is considered to be floating. This occurs when the user scrolls toward the header, or the header’s actual or effective scroll offset is less than its maximum extent \(i.e., the header’s effective trailing edge is at or after the viewport’s leading edge\). 57 | * When floating first begins \(i.e., because the user scrolled toward the sliver\), the effective scroll offset jumps to the header’s maximum extent. This effectively positions its trailing edge at the viewport’s leading edge. 58 | * As scrolling continues, the delta \(i.e., the change in actual offset\) is applied to the effective offset. As a result, geometry is updated as though the sliver were truly in this location. 59 | * The effective scroll offset is permitted to become smaller, allowing the header to reach its maximum extent as it scrolls into view. 60 | * The effective scroll offset is also permitted to become larger such that the header shrinks. Once the header is no longer visible, the effective scroll offset jumps back to the real scroll offset, and the header is no longer floating. 61 | * Once the effective scroll offset has been updated, the child is laid out using the effective scroll offset, the maximum extent, and an overlap flag. 62 | * The header may overlap content whenever it is floating \(i.e., its effective scroll offset is less than its actual scroll offset\). 63 | * Finally, the header’s geometry is computed and the child is positioned. 64 | * `RenderSliverFloatingPersistentHeader.updateGeometry` computes the header’s geometry as well as the child’s final position using the effective and actual scroll offsets. 65 | * `SliverGeometry.scrollExtent`: always the max extent since this is how much scrolling is needed to scroll past the header. 66 | * `SliverGeometry.paintOrigin`: always paints on the first clean pixel to avoid overlapping earlier slivers. 
67 | * `SliverGeometry.paintExtent`: the largest visible portion of the header \(using its effective offset\), clamped to the remaining paint extent.
68 | * `SliverGeometry.layoutExtent`: the floating header will consume the portion of its maximum extent that is actually visible \(i.e., it only takes up space when its true layout position falls within the viewport\). Otherwise \(e.g., when floating\), it occupies zero layout space and therefore may overlap any subsequent slivers. Clamped to remaining paint extent.
69 | * `SliverGeometry.maxScrollObstructionExtent`: the maximum extent, since this is the most that the viewport can be obscured when floating.
70 | * The child’s position is calculated using the sliver’s effective trailing edge \(i.e., as measured using the sliver’s effective scroll offset\). When the sliver’s actual position precedes the viewport’s leading edge, its layout offset will typically be zero, and thus the sliver will paint at the viewport’s leading edge.
71 |
72 | ## How do pinning and floating work together?
73 |
74 | * `RenderSliverFloatingPinnedPersistentHeader` is identical to its parent \(`RenderSliverFloatingPersistentHeader`\) other than in how it calculates its geometry and child’s position. Like its parent, the header will reappear whenever the user scrolls toward it. However, the header will remain pinned at the viewport’s leading edge even when the user scrolls away.
75 | * The calculated geometry is almost identical to its parent’s. The key difference is that the pinned, floating header always paints at least its minimum extent \(room permitting\). Additionally, its child is always positioned at zero since painting always occurs at the viewport’s leading edge.
76 | * `SliverGeometry.scrollExtent`: always the max extent since this is how much scrolling is needed to scroll past the header \(though it will continue to paint itself\).
77 | * `SliverGeometry.paintOrigin`: always paints on the first clean pixel to avoid overlapping earlier slivers. 78 | * `SliverGeometry.paintExtent`: the largest visible portion of the header \(using its effective offset\). Never less than the minimum extent \(i.e., the child will always be fully painted\), never more than the remaining paint extent. Unlike the non-floating pinned header, this will vary so that the header visibly grows to its maximum extent. 79 | * `SliverGeometry.layoutExtent`: the pinned, floating header will consume the portion of its maximum extent that is actually visible \(i.e., it only takes up space when its true layout position falls within the viewport\). Otherwise \(e.g., when floating\), it occupies zero layout space and therefore may overlap any subsequent slivers. Clamped to the paint extent, which may be less than the remaining paint extent if still growing between minimum and maximum extent. 80 | * `SliverGeometry.maxScrollObstructionExtent`: the maximum extent, since this is the most that the viewport can be obscured when floating. 81 | 82 | -------------------------------------------------------------------------------- /text/text-editing.md: -------------------------------------------------------------------------------- 1 | # Text Editing 2 | 3 | ## What data structures support editable text? 4 | 5 | * `TextRange` represents a range using \[start, end\) character indices: start is the index of the first character and end is the index after the last character. If both are -1, the range is collapsed \(empty\) and outside of the text \(invalid\). If both are equal, the range is collapsed but potentially within the text \(i.e., an insertion point\). If start <= end, the range is said to be normal. 6 | * `TextSelection` expands on range to represent a selection of text. A range is specified as a \[`baseOffset`, `extentOffset`\) using character indices. 
The base offset marks where the selection began, while the extent offset marks where it currently ends; the normalized `start` and `end` getters return the lesser and greater of the two, respectively. If both offsets are the same, the selection is collapsed, representing an insertion point; selections have a concept of directionality \(`TextSelection.isDirectional`\) which may be left ambiguous until the selection is uncollapsed. Both offsets are resolved to positions using a provided affinity. This ensures that the selection is unambiguous before and after rendering \(e.g., due to automatic line breaks\).
7 | * `TextSelectionPoint` pairs an offset with the text direction at that offset; this is helpful for determining how to render a selection handle at a given position.
8 | * `TextEditingValue` captures the current editing state. It exposes the full text in the editor \(`TextEditingValue.text`\), a range in that text that is still being composed \(`TextEditingValue.composing`\), and any selection present in the UI \(`TextEditingValue.selection`\). Note that an affinity isn’t necessary for the composing range since it indexes unrendered text.
9 |
10 | ## How are editing and selection overlays built?
11 |
12 | * `TextSelectionDelegate` supports reading and writing the selection \(via `TextSelectionDelegate.textEditingValue`\), configures any associated selection actions \(e.g., `TextSelectionDelegate.canCopy`\), and provides helpers to manage selection UI \(e.g., `TextSelectionDelegate.bringIntoView`, `TextSelectionDelegate.hideToolbar`\). This delegate is utilized primarily by `TextSelectionControls` to implement the toolbar and selection handles.
13 | * `ToolbarOptions` is a helper bundling all options that determine toolbar behavior within an `EditableText` -- that is, how the overridden `TextSelectionDelegate` methods behave.
14 | * `TextSelectionControls` is an abstract class that builds and manages selection-related UI including the toolbar and selection handles.
This class also implements toolbar behaviors \(e.g., `TextSelectionControls.handleCopy`\) and eligibility checks \(e.g., `TextSelectionControls.canCopy`\), deferring to the delegate where appropriate \(e.g., `TextSelectionDelegate.bringIntoView` to scroll the selection into view\). These checks are mainly used by `TextSelectionControls`’ build methods \(e.g., `TextSelectionControls.buildHandle`, `TextSelectionControls.buildToolbar`\), which construct the actual UI. Concrete implementations are provided for `Cupertino` and `Material` \(`_CupertinoTextSelectionControls` and `_MaterialTextSelectionControls`, respectively\), producing idiomatic UI for the corresponding platform. The build process is initiated by `TextSelectionOverlay`.
15 | * `TextSelectionOverlay` is the visual engine underpinning selection UI. It integrates `TextSelectionControls` and `TextSelectionDelegate` to build and configure the text selection handles and toolbar, and `TextEditingValue` to track the current editing state; the editing state may be updated at any point \(via `TextSelectionOverlay.update`\). Updates are made build-safe by scheduling a post-frame callback if in the midst of a persistent frame callback \(building, layout, etc.; this avoids infinite recursion in the build method\).
16 | * The UI is inserted into the enclosing `Overlay` and hidden and shown as needed \(via `TextSelectionOverlay.hide`, `TextSelectionOverlay.showToolbar`, etc.\).
17 | * The toolbar and selection handles are positioned using leader/follower layers \(via `CompositedTransformTarget` and `CompositedTransformFollower`\). A `LayerLink` instance for each type of UI is anchored to a region within the editable text so that the two layers are identically transformed \(e.g., to efficiently scroll together\). When the editable text scrolls, `TextSelectionOverlay.updateForScroll` marks the overlay as needing to be rebuilt so that the UI can adjust to its new position.
18 | * The toolbar is built directly \(via `TextSelectionControls.buildToolbar`\), whereas each selection handle corresponds to a `_TextSelectionHandleOverlay` widget. These widgets invoke a handler when the selection range changes to update the `TextEditingValue` \(via `TextSelectionOverlay._handleSelectionHandleChanged`\). 19 | * `TextSelectionGestureDetector` is a stateful widget that recognizes a sequence of selection-related gestures \(e.g., a tap followed by a double tap\), unlike a typical detector which recognizes just one. The text field \(e.g., `TextField`\) incorporates the gesture detector when building the corresponding UI. 20 | * `_TextSelectionGestureDetectorState` coordinates the text editing gesture detectors, multiplexing them as described above. A map of recognizer factories is assembled and assigned callbacks \(via `GestureRecognizerFactoryWithHandlers`\) given the widget’s configuration. These are passed to a `RawGestureDetector` widget which constructs the recognizers as needed. 21 | * `_TransparentTapGestureRecognizer` is a `TapGestureRecognizer` capable of recognizing while ceding to other recognizers in the arena. Thus, the same tap may be handled by multiple recognizers. This is particularly useful since selection handles tend to overlap editable text; a single tap in the overlap region is generally processed by the selection handle, whereas a double tap is processed by the editable text. 22 | * `TextSelectionGestureDetectorBuilderDelegate` provides a hook for customizing the interaction model \(typically implemented by the text field, e.g., `_CupertinoTextFieldState`, `_TextFieldState`\). The delegate also exposes the `GlobalKey` associated with the underlying `EditableTextState`. 23 | * `TextSelectionGestureDetectorBuilder` configures a `TextSelectionGestureDetector` with sensible defaults for text editing. The delegate is used to obtain a reference to the editable text and to customize portions of the interaction model. 
24 | * Platform-specific text fields extend `TextSelectionGestureDetectorBuilder` to provide idiomatic interaction models \(e.g., `_TextFieldSelectionGestureDetectorBuilder`\).
25 |
26 | ## How can editable behavior be customized?
27 |
28 | * `TextInputFormatter` provides a hook to transform text just before `EditableText.onChanged` is invoked \(i.e., when a change is committed -- not as the user types\). Blocklisting, allowlisting, and length-limiting formatters are available \(`BlacklistingTextInputFormatter`, `WhitelistingTextInputFormatter`, and `LengthLimitingTextInputFormatter`, respectively\).
29 | * `TextEditingController` provides a bidirectional interface for interacting with an `EditableText` or subclass thereof; as a `ValueNotifier`, the controller will notify whenever state changes, including as the user types. The text \(`TextEditingController.text`\), selection \(`TextEditingController.selection`\), and underlying `TextEditingValue` \(`TextEditingController.value`\) can be read and written, even in response to notifications. The controller may also be used to produce a `TextSpan`, an immutable span of styled text that can be painted to a layer.
30 |
31 | ## How is editable text implemented?
32 |
33 | * `EditableText` is the fundamental text input widget, integrating the other editable building blocks \(e.g., `TextSelectionControls`, `TextSelectionOverlay`, etc.\) with keyboard interaction \(via `TextInput`\), scrolling \(via `Scrollable`\), and text rendering to implement a basic input field. `EditableText` also supports basic gestures \(tapping, long pressing, force pressing\) for cursor and selection management and `IME` interaction. A variety of properties allow editing behavior and text appearance to be customized, though the actual work is performed by `EditableTextState`. When `EditableText` receives focus but is not fully visible, it will be scrolled into view \(via `RenderObject.showOnScreen`\).
34 | * The resulting text is styled and structured \(via `TextStyle` and `StrutStyle`\), aligned \(via `TextAlign`\), and localized \(via `TextDirection` and `Locale`\). `EditableText` also supports a text scale factor.
35 | * `EditableText` layout behavior is dependent on the maximum and minimum number of lines \(`EditableText.maxLines`, `EditableText.minLines`\) and whether expansion is enabled \(`EditableText.expands`\).
36 | * If maximum lines is one \(the default\), the field will scroll horizontally on one line.
37 | * If maximum lines is null, the field will be laid out for the minimum number of lines, and grow vertically.
38 | * If maximum lines is greater than one, the field will be laid out for the minimum number of lines, and grow vertically until the maximum number of lines is reached.
39 | * If a multiline field reaches its maximum height, it will scroll vertically.
40 | * If a field is expanding, it is sized to the incoming constraints.
41 | * `EditableText` follows a simple editing flow to allow the application to react to text changes and handle keyboard actions \(via `EditableTextState._finalizeEditing`\).
42 | * `EditableText.onChanged` is invoked as the field’s contents are changed \(i.e., as characters are explicitly typed\).
43 | * `EditableText.onEditingComplete` \(by default\) submits changes, clearing the controller’s composing bit, and relinquishes focus. If a non-completion action was selected \(e.g., “next”\), focus is retained to allow the submit handler to manage focus itself. A custom handler can be provided to alter the latter behavior.
44 | * `EditableText.onSubmitted` is invoked last, when the user has indicated that editing is complete \(e.g., by hitting “done”\).
45 | * `EditableTextState` applies the configuration described by `EditableText` to implement a text field; it also manages the flow of information with the platform `TextInput` service.
Additionally, the state object exposes a simplified, top-level interface for interacting with editable text. The editing value can be updated \(via `EditableTextState.updateEditingValue`\), the toolbar toggled \(via `EditableTextState.toggleToolbar`\), the `IME` displayed \(via `EditableTextState.requestKeyboard`\), editing actions performed \(via `EditableTextState.performAction`\), text scrolled into view \(via `EditableTextState.bringIntoView`\) and prepared for rendering \(via `EditableTextState.buildTextSpan`\). In this respect, `EditableTextState` is the glue binding many of the editing components together.
46 | * `EditableTextState` is a `TextInputClient` and a `TextSelectionDelegate`.
47 | * The platform client updates the state object, and the state object notifies the platform of local changes, keeping the two sides in sync.
48 | * `updateEditingValue` is invoked by the platform when the user types on the keyboard \(the same applies to `performAction` and floating cursor updates\).
49 | * `EditableTextState` participates in the keep alive protocol \(via `AutomaticKeepAliveClientMixin`\) to ensure that it isn’t prematurely destroyed, losing editing state \(e.g., when scrolled out of view\).
50 | * When text is specified programmatically \(via `EditableTextState.textEditingValue`, `EditableTextState.updateEditingValue`\), the underlying `TextInput` service must be notified \(via `EditableTextState._didChangeTextEditingValue`\) so that platform state remains in sync \(applying any text formatters beforehand\).
51 | * `RenderEditable`
52 |
53 | ## How are platform-specific text fields implemented?
54 |
55 | * `TextField`
56 | * `TextFieldState`
57 | * `CupertinoTextField`
58 |
59 | ## How is the toolbar rendered?
60 |
61 | * The toolbar UI is built by `TextSelectionControls.buildToolbar` using the line height, a bounding rectangle for the input \(in global, logical coordinates\), an anchor position and, if necessary, a tuple of `TextSelectionPoints`.
62 | * `EditableText` triggers the toolbar in response to gestures; the overlay does the actual work.
63 |
64 | ## How are selection handles rendered?
65 |
66 | * Selection handles are visual handles rendered just before and just after a selection. Handles need not be symmetric; `TextSelectionHandleType` characterizes which variation of the handle is to be rendered \(left, right, or collapsed\).
67 | * Each handle is built by `TextSelectionControls.buildHandle` which requires a type and a line height.
68 | * The handle’s size is computed by `TextSelectionControls.getHandleSize`, typically using the associated render editable’s line height \(`RenderEditable.preferredLineHeight`\), which is derived from the text painter’s line height \(`TextPainter.preferredLineHeight`\). The painter calculates this height through direct measurement.
69 | * The handle’s anchor point is computed by `TextSelectionControls.getHandleAnchor` using the type of handle being rendered and the associated render editable’s line height, as above.
70 | * `EditableText` triggers handle display when the selection or cursor position changes; the overlay does the actual work.
71 | * `EditableText` uses the handle’s size and anchor to ensure that selection handles are fully visible on screen \(via `RenderObject.showOnScreen`\).
72 |
73 | ## How does the editable retain state in response to platform lifecycle events?
74 |
75 | ## What is the best way to manage input via forms?
76 |
77 | ## `IME` \(input method editor\)?
78 |
79 |
--------------------------------------------------------------------------------
/text/text-input.md:
--------------------------------------------------------------------------------
1 | # Text Input
2 |
3 | ## How are key events sent from the keyboard?
4 |
5 | * `SystemChannels.keyEvent` exposes a messaging channel that receives raw key data whenever the platform produces keyboard events.
6 | * `RawKeyboard` subscribes to this channel and forwards incoming messages as `RawKeyEvent` instances \(which encapsulate `RawKeyEventData`\). Physical and logical interpretations of the event are exposed via `RawKeyEvent.physicalKey` and `RawKeyEvent.logicalKey`, respectively.
The character produced is available as `RawKeyEvent.character`, but only for `RawKeyDownEvent` events. This field accounts for modifier keys and past keystrokes, producing null for invalid combinations and a Dart string otherwise.
7 | * The physical key identifies the actual position of the key that was struck, expressed as the equivalent key on a standard `QWERTY` keyboard. The logical key ignores position, taking into account any mappings or layout changes to produce the actual key the user intended.
8 | * Subclasses of `RawKeyEventData` interpret platform-specific data to categorize the keystroke in a portable way \(e.g., `RawKeyEventDataAndroid`, `RawKeyEventDataMacOs`\).
9 |
10 | ## What is an `IME`?
11 |
12 | * `IME` stands for “input method editor,” which corresponds to any sort of on-screen text editing interface, such as the software keyboard. There can only be one active `IME` at a time.
13 |
14 | ## How does `Flutter` interact with `IMEs`?
15 |
16 | * `SystemChannels.textInput` exposes a method channel that implements a transactional interface for interacting with an `IME`. Operations are scoped to a given transaction \(client\), which is implicit once created. Outbound methods support configuring the `IME`, showing/hiding UI, and updating editing state \(including selections\); inbound methods handle `IME` actions and editing changes. Convenient wrappers for this protocol make much of this seamless.
17 |
18 | ## What are the building blocks for interacting with an `IME`?
19 |
20 | * `TextInput.attach` federates access to the `IME`, setting the current client \(transaction\) that can interact with the keyboard.
21 | * `TextInputClient` is an interface to receive information from the `IME`. Once attached, clients are notified via method invocation when actions are invoked, the editing value is updated, or the cursor is moved.
22 | * `TextInputConnection` is returned by `TextInput.attach` and allows the `IME` to be altered.
In particular, the editing state can be changed, the `IME` shown, and the connection closed. Once closed, if no other client attaches within the current animation frame, the `IME` will also be hidden. 23 | * `TextInputConfiguration` encapsulates configuration data sent to the `IME` when a client attaches. This includes the desired input type \(e.g., “datetime”, “`emailAddress`”, “phone”\) for which to optimize the `IME`, whether to enable autocorrect, whether to obscure input, the default action, capitalization mode \(`TextCapitalization`\), and more. 24 | * `TextInputAction` enumerates the set of special actions supported on all platforms \(e.g., “`emergencyCall`”, “done”, “next”\). Actions may only be used on platforms that support them. Actions have no intrinsic meaning; developers determine how to respond to actions themselves. 25 | * `TextEditingValue` represents the current text, selection, and composing state \(range being edited\) for a run of text. 26 | * `RawFloatingCursorPoint` represents the position of the “floating cursor” on `iOS`, a special cursor that appears when the user force presses the keyboard. Its position is reported via the client, including state changes \(`RawFloatingCursorDragState`\). 27 | 28 | -------------------------------------------------------------------------------- /user-interface/containers.md: -------------------------------------------------------------------------------- 1 | # Containers 2 | 3 | ## What are the container building blocks? 4 | 5 | * `Flex` is the base class for `Row` and `Column`. It implements the flex layout protocol in an axis-agnostic manner. 6 | * `Row` is identical to `Flex` with a default axis of `Axis.horizontal`. 7 | * `Column` is identical to `Flex` with a default axis of `Axis.vertical`. 8 | * `Flexible` is the base class for `Expanded`. It is a parent data widget that alters its child’s flex value. 
Its default fit is `FlexFit.loose`, which causes its child to be laid out with loose constraints.
9 | * `Expanded` is identical to `Flexible` with a default fit of `FlexFit.tight`. Consequently, it passes tight constraints to its child, requiring it to fill all available space.
10 |
11 | ## How are flex-based containers laid out?
12 |
13 | * All flexible containers follow the same protocol.
14 | * Lay out children without flex factors using unbounded main axis constraints and the incoming cross axis constraints \(if stretching, cross constraints are tight\).
15 | * Apportion remaining space among flex children using flex factors.
16 | * Main axis size = `myFlex * (freeSpace / totalFlex)`
17 | * Lay out each child as above, with the resulting size as the main axis constraint. Use tight constraints for `FlexFit.tight`; else, use loose.
18 | * The cross extent is the max of all child cross extents.
19 | * If using `MainAxisSize.max`, the main extent is the incoming max constraint. Else, the main extent is the sum of all child extents in that dimension \(subject to constraints\).
20 | * Children are positioned according to `MainAxisAlignment` and `CrossAxisAlignment`.
21 |
22 | ## How are containers laid out?
23 |
24 | * In short, containers size to their child plus any padding; in so doing, they respect any additional constraints provided directly or via a width or height. Decorations may be painted over this entire region. Next, a margin is added around the resulting box and, if specified, a transformation is applied to the entire container.
25 | * If there is no child and no explicit size, the container shrinks in unbounded environments and expands in bounded ones.
26 | * The container widget delegates to a number of sub-widgets based on its configuration. Each behavior is layered atop all previous layers \(thus, child refers to the accumulation of widgets\). If a width or height is provided, these are transformed into extra constraints.
27 | * If there is no child and no explicit size:
28 | * Shrink when the incoming constraints are unbounded \(via `LimitedBox`\); else, expand \(via `ConstrainedBox`\).
29 | * If there is an alignment:
30 | * Align the child within the parent \(via `Align`\).
31 | * If there is padding or the decoration has padding:
32 | * Apply the total padding to the child \(via `Padding`\).
33 | * If there is a decoration:
34 | * Wrap the child in the decoration \(via `DecoratedBox`\).
35 | * If there is a foreground decoration:
36 | * Wrap the child in the foreground decoration \(via `DecoratedBox`, using `DecorationPosition.foreground`\).
37 | * If there are extra constraints:
38 | * Apply the extra constraints to the incoming constraints \(via `ConstrainedBox`\).
39 | * If there is a margin:
40 | * Apply the margin to the child \(via `Padding`\).
41 | * If there is a transform:
42 | * Transform the child accordingly \(via `Transform`\).
43 |
--------------------------------------------------------------------------------
/user-interface/decoration.md:
--------------------------------------------------------------------------------
1 | # Decoration
2 |
3 | ## What are decorations?
4 |
5 | * A decoration is a high-level description of graphics to be painted onto a canvas; it generally corresponds to a box, but other shapes can be described as well. In addition to being paintable, decorations support interaction and interpolation.
6 |
7 | ## What are the common components of a decoration?
8 |
9 | * `DecorationImage` describes an image \(as obtained via `ImageProvider`\) to be inscribed within a decoration, accepting many of the same arguments as `paintImage`. The alignment, repetition, and box fit determine how the image is laid out within the decoration and, if enabled, horizontal reflection will be applied for right-to-left locales.
10 | * `DecorationImagePainter` \(obtained via `DecorationImage.createPainter`\) performs the actual painting; this is a thin wrapper around `paintImage` that resolves the `ImageProvider` and applies any clipping and horizontal reflection. 11 | * `BoxShadow` is a `Shadow` subclass that additionally describes spread distance \(i.e., the amount of dilation to apply to the casting element’s mask before computing the shadow\). Shadows are typically arranged into a list to support a single decoration casting multiple shadows. 12 | * `BorderRadiusGeometry` describes the border radii of a particular box \(via `BorderRadius` or `BorderRadiusDirectional` depending on text direction sensitivity\). `BorderRadiusGeometry` is composed of four immutable `Radius` instances. 13 | * `BorderSide` describes a single side of a border; the precise interpretation is determined by the enclosing `ShapeBorder` subclass. Each side has a color, a style \(via `BorderStyle`\), and a width. A width of `0.0` will enable hairline rendering; that is, the border will be 1 physical pixel wide \(`BorderStyle.none` is necessary to prevent the border from rendering\). When hairline rendering is utilized, pixels may appear darker if they are painted multiple times by the given path. Border sides may be merged provided that they share a common style and color. Doing so produces a new `BorderSide` having a width equal to the sum of its constituents. 14 | 15 | ## What are the components of a shape decoration? 16 | 17 | * `ShapeBorder` is the base class of all shape outlines, including those used by box decorations; in essence, it describes a single shape with edges of defined width \(typically via `BorderSide`\). Shape borders can be interpolated and combined \(via the addition operator or `ShapeBorder.add`\). 
Additionally, borders may be scaled \(affecting properties like border width and radii\) and painted directly to a canvas \(via `ShapeBorder.paint`\); painting may be adjusted based on text direction. Paths describing the shape’s inner and outer edges may also be queried \(via `ShapeBorder.getInnerPath` and `ShapeBorder.getOuterPath`\).
19 |
20 | ## What are the components of a box decoration?
21 |
22 | * `BoxBorder` is a subclass of `ShapeBorder` that is further specialized by `Border` and `BorderDirectional` \(the latter adding text direction sensitivity\). These instances describe a set of four borders corresponding to the cardinal directions; their precise arrangement is left undefined until rendering. Borders may be combined \(via `Border.merge`\) provided that all associated sides share a style and color. If so, the corresponding widths are added together.
23 | * Borders must be made concrete by providing a rectangle and, optionally, a `BoxShape`. The provided rectangle determines how the borders are actually rendered; uniform borders are more efficient to paint.
A size is provided so that the decoration may be scaled to a particular box. The given offset describes a position within this box relative to its top-left corner. An optional `TextDirection` supports containers that are sensitive to this parameter.
29 | * Decorations support linear interpolation \(via `Decoration.lerp`, `Decoration.lerpFrom`, and `Decoration.lerpTo`\). The “t” parameter represents a position on a timeline, with 0 corresponding to 0% \(i.e., the pre-state\) and 1 corresponding to 100% \(i.e., the post-state\); note that values outside of this range are possible. If the source or destination value is null \(indicating that a true interpolation isn’t possible\), a default interpolation should be computed that reasonably approximates a true interpolation.
30 | * `BoxDecoration` is a `Decoration` subclass that describes the appearance of a graphical box. Boxes are composed of a number of elements, including a border, a drop shadow, and a background. The background is itself composed of color, gradient, and image layers. While typically rectangular, boxes may be given rounded corners or even a circular shape \(via `BoxDecoration.shape`\). `BoxDecorations` provide a `BoxPainter` subclass capable of rendering the described box given different `ImageConfigurations`.
31 | * `ShapeDecoration` is analogous to `BoxDecoration` but supports rendering into any shape \(via `ShapeBorder`\). Rendering occurs in layers: first a fill color is painted, then a gradient, and finally an image. Next, the `ShapeBorder` is painted \(clipping the previous layers\); the border also serves as the casting element for all associated shadows. `ShapeDecoration` also uses a `BoxPainter` subclass for rendering.
32 | * Shape decorations may be obtained from box decorations \(via `ShapeDecoration.fromBoxDecoration`\), since any box decoration can be expressed as a shape decoration.
In general, box decorations are more efficient since they do not need to represent arbitrary shapes; however, shapes support a wider range of interpolation \(e.g., rectangle to circle\).
33 | * `DecoratedBox` incorporates a decoration into the widget hierarchy. Decorations can be painted in the foreground or background via `DecorationPosition` \(i.e., in front of or behind the child, respectively\). Generally, `Container` is used to incorporate a `DecoratedBox` into the UI.
34 | * `NotchedShape` describes the difference of two shapes \(i.e., a guest shape is subtracted from a host shape\). A path describing this shape is obtained by specifying two bounding rectangles \(i.e., the host and the guest\) sharing a coordinate space. The `AutomaticNotchedShape` subclass uses these bounds to determine the concrete dimensions of `ShapeBorder` instances before computing their difference.
35 |
36 | ## How are decorations painted?
37 |
38 | * `BoxPainter` provides a base class for instances capable of rendering a `Decoration` to a canvas given an `ImageConfiguration`. The configuration specifies the final size, scale, and locale to be used when rendering; this information allows an otherwise abstract decoration to be made concrete. Since decorations may rely on asynchronous image providers, `BoxPainter.onChanged` notifies client code when the associated resources have changed \(i.e., so that painting may be repeated\).
39 |
40 |
--------------------------------------------------------------------------------
/user-interface/material.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: TODO
3 | ---
4 |
5 | # Material
6 |
7 |
--------------------------------------------------------------------------------
/user-interface/tables.md:
--------------------------------------------------------------------------------
1 | # Tables
2 |
3 | ## How is table layout described?
4 | 5 | * `TableColumnWidth` describes the width of a single column in a `RenderTable`. Implementations can produce a flex factor for the column \(via `TableColumnWidth.flex`, which may iterate over every cell\) as well as a maximum and minimum intrinsic width \(via `TableColumnWidth.maxIntrinsicWidth` and `TableColumnWidth.minIntrinsicWidth`, which also have access to the incoming maximum width constraint\). Intrinsic dimensions are expensive to compute since they typically visit the entire subtree for each cell in the column. Subclasses implement a subset of these methods to provide different mechanisms for sizing columns. 6 | * `FixedColumnWidth` produces tight intrinsic dimensions, returning the provided constant without additional computation. 7 | * `FractionColumnWidth` applies a fraction to the incoming maximum width constraint to produce tight intrinsic dimensions. If the incoming constraint is unbounded, the resulting width will be zero. 8 | * `MaxColumnWidth` and `MinColumnWidth` encapsulate two `TableColumnWidth` instances, returning the greater or lesser value produced by each method, respectively. 9 | * `FlexColumnWidth` returns the specified flex value which corresponds to the portion of free space to be utilized by the column \(i.e., free space is distributed according to the ratio of the column’s flex factor to the total flex factor\). The intrinsic width is set to zero so that the column does not consume any inflexible space. 10 | * `IntrinsicColumnWidth` is the most expensive strategy, sizing the column according to the contained cells’ intrinsic widths. A flex factor allows columns to expand even further by incorporating a portion of any unclaimed space. The minimum and maximum intrinsic widths are defined as the maximum value reported by all contained render boxes \(via `RenderBox.getMinIntrinsicWidth` and `RenderBox.getMaxIntrinsicWidth` with unbounded height\). 
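The sizing strategies above can be sketched as follows. This is a Python sketch \(not Flutter’s actual Dart implementation\); the class names mirror the framework’s, but the method signatures are simplified assumptions, and the `cells` argument stands in for the column’s render boxes.

```python
# Sketch of TableColumnWidth-style sizing strategies (illustrative only).
# Each strategy answers: min/max intrinsic width (given the incoming maximum
# width constraint) and an optional flex factor.

class FixedColumnWidth:
    """Tight intrinsic dimensions: a constant, with no extra computation."""
    def __init__(self, value): self.value = value
    def min_intrinsic(self, cells, max_width): return self.value
    def max_intrinsic(self, cells, max_width): return self.value
    def flex(self, cells): return None

class FractionColumnWidth:
    """A fraction of the incoming maximum width constraint."""
    def __init__(self, fraction): self.fraction = fraction
    def min_intrinsic(self, cells, max_width):
        # An unbounded incoming constraint yields a zero width.
        return 0.0 if max_width == float("inf") else self.fraction * max_width
    def max_intrinsic(self, cells, max_width):
        return self.min_intrinsic(cells, max_width)
    def flex(self, cells): return None

class FlexColumnWidth:
    """Consumes a share of free space; zero intrinsic (inflexible) width."""
    def __init__(self, flex_value=1.0): self.flex_value = flex_value
    def min_intrinsic(self, cells, max_width): return 0.0
    def max_intrinsic(self, cells, max_width): return 0.0
    def flex(self, cells): return self.flex_value

class MaxColumnWidth:
    """Combines two strategies, taking the greater result of each method
    (MinColumnWidth would be symmetric, taking the lesser result)."""
    def __init__(self, a, b): self.a, self.b = a, b
    def min_intrinsic(self, cells, max_width):
        return max(self.a.min_intrinsic(cells, max_width),
                   self.b.min_intrinsic(cells, max_width))
    def max_intrinsic(self, cells, max_width):
        return max(self.a.max_intrinsic(cells, max_width),
                   self.b.max_intrinsic(cells, max_width))
    def flex(self, cells):
        fa, fb = self.a.flex(cells), self.b.flex(cells)
        if fa is None: return fb
        if fb is None: return fa
        return max(fa, fb)
```

Note how `MaxColumnWidth` composes two strategies rather than defining its own sizing rule; this is what lets a column be, say, at least a fixed width but otherwise intrinsically sized.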
11 | * `TableCellVerticalAlignment` specifies how a cell is positioned within a row. Top and bottom ensure that the corresponding side of the cell and row are coincident, middle vertically centers the cell, baseline aligns cells such that all baselines are coincident \(cells lacking a baseline are top aligned\), and fill sizes cells to the height of the row \(if all cells fill, the row will have zero height\). 12 | 13 | ## How is table appearance described? 14 | 15 | * `TableBorder` describes the appearance of borders around and within a table. Similar to `Border`, `TableBorder` exposes `BorderSide` instances for each of the cardinal directions \(`TableBorder.top`, `TableBorder.bottom`, etc.\). In addition, `TableBorder` describes the seams between rows \(`TableBorder.horizontalInside`\) and columns \(`TableBorder.verticalInside`\). Borders are painted via `TableBorder.paint` using row and column offsets determined by layout \(e.g., the number of pixels from the bounding rectangle’s top and left edges for horizontal and vertical borders; there will be one entry for each interior seam\). 16 | * `RenderTable` accepts a list of decorations to be applied to each row in order. These decorations span the full extent of each row, unlike any cell-based decorations \(which would be limited to the dimensions of the cell; cells may be offset within the row due to `TableCellVerticalAlignment`\). 17 | 18 | ## How are tables rendered? 19 | 20 | * `TableCellParentData` extends `BoxParentData` to include the cell’s vertical alignment \(via `TableCellVerticalAlignment`\) as well as the most recent zero-indexed row and column numbers \(via `TableCellParentData.y` and `TableCellParentData.x`, respectively\). The cell’s coordinates are set during `RenderTable` layout whereas the vertical alignment is set by `TableCell`, a `ParentDataWidget` subclass. 21 | * `RenderTable` is a render box implementing table layout and painting.
Columns may be associated with a sizing strategy via a mapping from index to `TableColumnWidth` \(`columnWidths`\); a default strategy \(`defaultColumnWidth`\) and default vertical alignment \(`defaultVerticalAlignment`\) round out layout. The table is painted with a border \(`TableBorder`\) and each row’s full extent may be decorated \(`rowDecorations`, via `Decoration`\). `RenderBox` children are passed as a list of rows; internally, children are stored in row-major order using a single list. The number of columns and rows can be inferred from the child list, or set explicitly. If these values are subsequently altered, children that no longer fit in the table will be dropped. 22 | 23 | ## How does a table manage its children? 24 | 25 | * Children are stored in row-major order using a single list. Tables accept a flat list of children \(via `RenderTable.setFlatChildren`\), using a column count to divide cells into rows. New children are adopted \(via `RenderBox.adoptChild`\) and missing children are dropped \(via `RenderBox.dropChild`\); children that are moved are neither adopted nor dropped. Children may also be added using a list of rows \(via `RenderTable.setChildren`\); this clears all children before adding each row incrementally \(via `RenderTable.addRow`\). Note that this may unnecessarily drop children, unlike `RenderTable.setFlatChildren`. 26 | * Children are visited in row-major order. That is, the first row is iterated in order, then the second row, and so on. This is the order used when painting; hit testing uses the opposite order \(i.e., starting from the last item in the last row\). 27 | 28 | ## How are column widths calculated? 29 | 30 | * A collection of `TableColumnWidth` instances describes how each column consumes space in the table. During layout, these instances are used to produce concrete widths given the incoming constraints \(via `RenderTable._computeColumnWidths`\).
31 | * Intrinsic widths and flex factors are computed for each column by locating the appropriate `TableColumnWidth` and passing the maximum width constraint as well as all contained cells. 32 | * The column’s width is initially defined as its maximum intrinsic width \(flex factor only increases this width\). Later, column widths may be reduced to satisfy incoming constraints. 33 | * Table width is therefore computed by summing the maximum intrinsic width of all columns. 34 | * Flex factors are summed for all flexible columns; maximum intrinsic widths are summed for all inflexible columns. These values are used to identify and distribute free space. 35 | * If there are flexible columns and room for expansion given the incoming constraints, free space is divided between all such columns. That is, if the table width \(i.e., total maximum intrinsic width\) is less than the incoming maximum width \(or, if unbounded, the minimum width\), there is room for flexible columns to expand. 36 | * Remaining space is defined as the relevant width constraint minus the maximum intrinsic widths of all inflexible columns. 37 | * This space is distributed in proportion to the ratio of the column’s flex factor to the sum of all flex factors. 38 | * If this would expand the column, the delta is computed and applied both to the column’s width and the table’s width. 39 | * If there are no flexible columns, ensure that the table is at least as wide as the minimum width constraint. 40 | * The difference between the table width \(i.e., total maximum intrinsic width\) and the minimum width is evenly distributed between all columns. 41 | * Ensure that the table does not exceed the maximum width constraint. 42 | * Columns may be sized using an arbitrary combination of intrinsic widths and flexible space. Some columns also specify a minimum intrinsic width. As a result, it’s not possible to resize a table using flex alone.
An iterative approach is necessary to resize columns to respect the maximum width constraint without violating their other layout characteristics. The amount by which the table exceeds the maximum width constraint is the deficit. 43 | * Flexible columns are repeatedly shrunk until they’ve all reached their minimum intrinsic widths \(i.e., no flexible columns remain\) or the deficit has been eliminated. 44 | * The deficit is divided according to each column’s flex factor \(in relation to the total flex factor, which may change as noted below\). 45 | * If this amount would shrink the column below its minimum width, the column is clamped to this width and the deficit reduced by the corresponding delta. The column is no longer considered flexible \(reducing the total flex factor for subsequent calculations\). Otherwise, the deficit is reduced by the full amount. 46 | * This process is iterative because some columns cannot be shrunk by the full amount. 47 | * Any remaining deficit must be addressed using inflexible columns \(all flexible space has been consumed\). Columns are considered “available” if they haven’t reached their minimum width. Available columns are repeatedly shrunk until the deficit is eliminated or there are no more available columns. 48 | * The deficit is divided evenly between available columns. 49 | * If this amount would shrink the column below its minimum width, the column is clamped to this width and the deficit reduced by the corresponding delta \(this reduces the number of available columns\). Otherwise, the deficit is reduced by the full amount. 50 | * This process is iterative because some columns cannot be shrunk by the full amount. 51 | 52 | ## What are a table’s intrinsic dimensions? 53 | 54 | * The table’s intrinsic widths are calculated as the sum of each column’s largest intrinsic width \(using maximum or minimum dimensions with no height constraint, via `TableColumnWidth`\). 
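The iterative deficit elimination described above can be sketched as follows. This is a simplified Python sketch, not Flutter’s Dart code; `shrink_to_fit` is a hypothetical helper name, and the tolerance handling is an assumption.

```python
def shrink_to_fit(widths, min_widths, flexes, max_width):
    """Shrink provisional column widths to fit max_width. `min_widths` holds
    the minimum intrinsic widths; `flexes` holds per-column flex factors
    (0.0 for inflexible columns). Illustrative sketch only."""
    widths = list(widths)
    deficit = sum(widths) - max_width  # amount by which the table overflows

    # Pass 1: shrink flexible columns in proportion to their flex factors.
    flexible = {i for i, f in enumerate(flexes) if f > 0}
    while deficit > 1e-9 and flexible:
        total_flex = sum(flexes[i] for i in flexible)
        new_deficit = deficit
        for i in list(flexible):
            delta = deficit * flexes[i] / total_flex
            if widths[i] - delta <= min_widths[i]:
                # Clamp to the minimum; the column is no longer flexible,
                # reducing the total flex factor for subsequent rounds.
                new_deficit -= widths[i] - min_widths[i]
                widths[i] = min_widths[i]
                flexible.discard(i)
            else:
                widths[i] -= delta
                new_deficit -= delta
        deficit = new_deficit  # iterate: some columns could not fully shrink

    # Pass 2: divide any remaining deficit evenly among "available" columns
    # (those still above their minimum width).
    available = {i for i in range(len(widths)) if widths[i] > min_widths[i]}
    while deficit > 1e-9 and available:
        share = deficit / len(available)
        new_deficit = deficit
        for i in list(available):
            if widths[i] - share <= min_widths[i]:
                new_deficit -= widths[i] - min_widths[i]
                widths[i] = min_widths[i]
                available.discard(i)
            else:
                widths[i] -= share
                new_deficit -= share
        deficit = new_deficit
    return widths
```

Both passes loop because a clamped column absorbs less than its share, leaving a residual deficit for the remaining columns.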
55 | * The table’s minimum and maximum intrinsic heights are equivalent, representing the sum of the largest intrinsic height found in each row \(using the calculated column width as input\). That is, each row is as tall as its tallest child, with the table’s total height corresponding to the sum of all such heights. 56 | * Concrete column widths are computed \(via `RenderTable._computeColumnWidths`\) using the width argument as a tight constraint. 57 | * Next, the largest maximum intrinsic height for each row is calculated \(via `RenderBox.getMaxIntrinsicHeight`\) using the calculated column width. The maximum row heights are summed to produce the table’s intrinsic height. 58 | 59 | ## How does a table lay out its children? 60 | 61 | * If the table has zero columns or rows, it’s as small as possible given the incoming constraints. 62 | * First, concrete column widths are calculated. These widths are incrementally summed to produce a list of x-coordinates describing the left edge of each column \(`RenderTable._columnLefts`\). The copy of this list used by layout is flipped for right-to-left locales. The overall table width is defined as the sum of all column widths \(e.g., the last column x-coordinate plus the last column’s width\). 63 | * Next, a list of y-coordinates describing the top edge of each row is calculated incrementally \(`RenderTable._rowTops`\). Child layout proceeds as this list is calculated \(i.e., row-by-row\). 64 | * The list of row tops \(`RenderTable._rowTops`\) is cleared and seeded with an initial y-coordinate of zero \(i.e., layout starts from the origin along the y-axis\). The current row height is zeroed as are before- and after-baseline distances. These values track the maximum dimensions produced as cells within the row are laid out. The before-baseline distance is the maximum distance from a child’s top to its baseline; the after-baseline distance is the maximum distance from a child’s baseline to its bottom.
65 | * Layout pass: iterate over all non-null children within the row, updating parent data \(i.e., x- and y-coordinates within the table\) and performing layout based on the child’s vertical alignment \(read from parent data and set by `TableCell`, a `ParentDataWidget` subclass\). 66 | * Children with top, middle, or bottom alignment are laid out with unbounded height and a tight width constraint corresponding to the column’s width. 67 | * Children with baseline alignment are also laid out with unbounded height and a tight width constraint. 68 | * Children with a baseline \(via `RenderBox.getDistanceToBaseline`\) update the baseline distances to be at least as large as the child’s values. 69 | * Children without a baseline update the row’s height to be at least as large as the child’s height. These children are positioned at the column’s left edge and the row’s top edge \(this is the only position set during the first pass\). 70 | * Children with fill alignment are an exception; these are laid out during the second pass, once row height is known. 71 | * If a baseline is produced during the first pass, row height is updated to be at least as large as the total baseline distance \(i.e., the sum of before- and after-baseline distances\). 72 | * The table’s baseline distance is defined as the first row’s before-baseline distance. 73 | * Positioning pass: iterate over all non-null children within the row, positioning them based on vertical alignment. 74 | * Children with top, middle, and bottom alignment are positioned at the column’s left edge and the row’s top, middle, or bottom edges, respectively. 75 | * Children with baseline alignment and an actual baseline are positioned such that all baselines align \(i.e., each child’s baseline is coincident with the maximum before-baseline distance\). Those without baselines have already been positioned.
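The baseline bookkeeping in the two passes above can be sketched as follows. This is a simplified Python sketch, not Flutter’s Dart code; `layout_baseline_row` is a hypothetical helper, and children are plain dictionaries rather than render boxes.

```python
def layout_baseline_row(children):
    """Compute a row's height and each child's vertical offset within the row.
    Each child is a dict with 'height' and an optional 'baseline' (the
    distance from the child's top to its baseline). Illustrative sketch."""
    before = 0.0  # max distance from any child's top to its baseline
    after = 0.0   # max distance from any child's baseline to its bottom
    row_height = 0.0
    # Layout pass: accumulate baseline distances and row height.
    for child in children:
        b = child.get("baseline")
        if b is not None:
            before = max(before, b)
            after = max(after, child["height"] - b)
        else:
            # Children without a baseline are top aligned and only
            # contribute their height.
            row_height = max(row_height, child["height"])
    # Row height must be at least the total baseline distance.
    row_height = max(row_height, before + after)
    # Positioning pass: align baselines at `before` from the row's top;
    # children without a baseline stay at the row's top edge.
    tops = [0.0 if child.get("baseline") is None else before - child["baseline"]
            for child in children]
    return row_height, tops
```

Note that the third child below, lacking a baseline, pins the row height while the first two are shifted so that their baselines coincide.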
76 | * Children with fill alignment are now laid out with tight constraints matching the row’s height and the column’s width; children are positioned at the column’s left edge and the row’s top edge. 77 | * Proceed to the next row by calculating the next row’s top using the row height \(and adding it to `RenderTable._rowTops`\). 78 | * The table’s width and height \(i.e., size\) are defined as the sums of column widths and row heights, respectively. 79 | 80 | ## How does a table paint its children? 81 | 82 | * If the table has zero columns or rows, its border \(if defined\) is painted into a zero-height rectangle matching the table’s width. 83 | * Each non-null decoration \(`RenderTable._rowDecorations`\) is painted via `Decoration.createBoxPainter`. Decorations are positioned using the incoming offset and the list of row tops \(`RenderTable._rowTops`\). 84 | * Each non-null child is painted at the position calculated during layout, adjusted by the incoming offset. 85 | * Finally, the table’s border is painted using the list of row and column edges \(these lists are filtered such that only interior edges are passed to `TableBorder.paint`\). 86 | * The border will be sized to match the total width consumed by columns and total height consumed by rows. 87 | * The painted height may fall short of the render object’s actual height \(i.e., if the total row height is less than the minimum height constraint\). In this case, there will be empty space below the table. 88 | * Table layout always satisfies the minimum width constraint, so there will never be empty horizontal space. 89 | 90 | -------------------------------------------------------------------------------- /user-interface/themes.md: -------------------------------------------------------------------------------- 1 | --- 2 | description: TODO 3 | --- 4 | 5 | # Themes 6 | 7 | ## How do themes work? 8 | 9 | ## How are colors managed? 10 | 11 | * The common `Color` type is generally used throughout the framework.
These may be organized into swatches \(via `ColorSwatch`\) with a single primary ARGB value and a mapping from arbitrary keys to `Color` instances. Material provides a specialization called `MaterialColor`, which uses an index value as its key and limits the map to ten entries \(50, 100, 200, ... 900\), with larger indices being associated with darker shades. These are further organized into a standard set of colors and swatches within the `Colors` class. 12 | 13 | --------------------------------------------------------------------------------
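The swatch structure described above can be sketched as follows. This is a Python sketch, not Flutter’s actual `ColorSwatch`/`MaterialColor` classes; the concrete shade values here should be treated as illustrative.

```python
# Sketch of a swatch: a primary ARGB color plus a shade map keyed by index.

class Swatch:
    def __init__(self, primary, shades):
        self.primary = primary      # 32-bit ARGB value, e.g. 0xFF2196F3
        self.shades = dict(shades)  # index -> ARGB value
    def __getitem__(self, index):
        # Indexing a swatch returns the shade for that key.
        return self.shades[index]

# A blue swatch with Material-style indices (50, 100, ..., 900); larger
# indices map to darker shades. Values are illustrative.
blue = Swatch(0xFF2196F3, {
    50: 0xFFE3F2FD, 100: 0xFFBBDEFB, 200: 0xFF90CAF9,
    300: 0xFF64B5F6, 400: 0xFF42A5F5, 500: 0xFF2196F3,
    600: 0xFF1E88E5, 700: 0xFF1976D2, 800: 0xFF1565C0,
    900: 0xFF0D47A1,
})
```

By convention, the primary color matches the 500 shade, which is why `Colors.blue` and `Colors.blue[500]` render identically.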