├── 2013 ├── 10 │ └── Web Animations.md ├── 11 │ ├── WebCrypto_examples.js │ ├── WebCrypto.md │ └── WebCrypto.idl ├── 07 │ ├── OrientationLock.md │ ├── webaudio.idl │ ├── webaudio_examples.js │ └── WebAudio.md └── 08 │ ├── Push_API_initial_comments.md │ └── Push API.md ├── 2014 ├── 10 │ └── eme.md ├── 02 │ └── quota-management-api.md └── 04 │ └── http-209.md └── README.md /README.md: -------------------------------------------------------------------------------- 1 | # W3C TAG Specification Reviews 2 | 3 | This repository is for specification reviews by the [W3C Technical Architecture Group](http://www.w3.org/2001/tag/); usually (but not always) from W3C Working Groups. 4 | 5 | We use the [issues list](https://github.com/w3ctag/spec-reviews/issues) to track our reviews; you can request a review for your spec by adding a new issue. Please include a link to the specification. 6 | 7 | Note that we do **not** usually use the issue itself to perform the review; we prefer to engage with the specification authors and/or Working Group (as appropriate) directly, using their feedback mechanisms (e.g., issues list, mailing list, WG meetings). 8 | 9 | In some cases, we might prepare a review document, which lives in this repo. Old review documents are moved to the [Archive folder](https://github.com/w3ctag/spec-reviews/tree/master/Archive). 10 | -------------------------------------------------------------------------------- /2013/07/OrientationLock.md: -------------------------------------------------------------------------------- 1 | # Screen Orientation Lock - Draft Feedback 2 | 3 | Spec: https://dvcs.w3.org/hg/screen-orientation/raw-file/tip/Overview.html 4 | 5 | * the somewhat liberal use of SHOULD in that spec is going to lead to user 6 | agents doing bad things. It basically says that UAs are allowed to return 7 | bogus orientation values. 
This will lead to Device Orientation all over again
8 | (where UAs returned laughably different values depending on which way one
9 | rotates their phone).
10 | 
11 | * The overloaded `lockOrientation()` method could instead be specified using
12 | the "or" operator (a union type) in WebIDL.
13 | 
14 | * The `unlockOrientation()` and `lockOrientation()` methods do the same thing.
15 | The methods should be merged into a `setOrientation()` method. Setting the
16 | orientation to `null` or the empty string just returns it to its default.
17 | Could also add an "auto" orientation keyword.
18 | 
19 | * If the spec can be changed to
20 | `setOrientation([TreatEmptyStringAs=null] Orientation value)`,
21 | then it should vend a Promise. So then, for example:
22 | 
23 | ```
24 | setOrientation("portrait").then(showSplashScreen, fail);
25 | setOrientation("landscape").then(showGameMenu, fail);
26 | // unlock it
27 | setOrientation(null).then(whatever);
28 | ```
29 | 
30 | * The allowed orientation values need to be defined as an enum.
31 | 
32 | * The spec does not define which task queue to use.
33 | 
34 | * The spec treats Screen as extending EventTarget, but Screen is not an
35 | EventTarget. Either the spec needs to make Screen an EventTarget, or CSSOM
36 | View needs to be updated to make it one.
37 | 
--------------------------------------------------------------------------------
/2013/08/Push_API_initial_comments.md:
--------------------------------------------------------------------------------
1 | Dear Editors & Authors,
2 | 
3 | The TAG has been looking to review the [Push API draft](https://dvcs.w3.org/hg/push/raw-file/tip/index.html) but, having reviewed the current spec, feels that there are issues with the draft that probably need discussion before we're able to proceed in providing detailed technical feedback.
4 | 
5 | A few examples.
6 | 7 | First, while the introduction of the draft outlines a few underlying technologies that might generate push messages, it's not clear that there's a strong method for binding documents/URL space to these message types in any meaningful way. The API leaves the entire concept of routing of messages as an exercise for developers to navigate. We'd like to understand why the API was developed this way. 8 | 9 | Next, the idea of a message queue is useful, but the choice to use `hasPendingMessages()` and `setMessageHandler()` instead of the well-worn DOM event model or the message APIs that exist in the platform (Message Ports / `postMessage`/`onmessage`) is, at best, head-scratching. We'd like to understand the thinking behind it. 10 | 11 | As a final example, it's unclear from the code examples how web apps are meant to be invoked (or if they are at all) when push messages are delivered to users. If the system provides UI for interacting with the message, how does it relate to the web app? And if the system doesn't create a mapping between the two at registration time, does that mean that messages are only ever delivered to apps when users open them? Doesn't that make web apps permanently second-class? We'd like to understand the thinking there. 12 | 13 | We're optimistic that this is going to be an incredibly helpful API for apps and users and want to do what we can to help improve it. Thanks for your time and consideration. 14 | 15 | Regards, 16 | 17 | The TAG -------------------------------------------------------------------------------- /2013/11/WebCrypto_examples.js: -------------------------------------------------------------------------------- 1 | /*********************************** 2 | * 19.1. 
Generate a signing key pair, sign some data
3 |  **/
4 | // Algorithm Object
5 | var algorithmKeyGen = {
6 |   name: "RSASSA-PKCS1-v1_5",
7 |   // RsaKeyGenParams
8 |   modulusLength: 2048,
9 |   publicExponent: new Uint8Array([0x01, 0x00, 0x01]), // Equivalent to 65537
10 | };
11 | 
12 | var algorithmSign = {
13 |   name: "RSASSA-PKCS1-v1_5",
14 |   // RsaSsaParams
15 |   hash: {
16 |     name: "SHA-256",
17 |   }
18 | };
19 | 
20 | window.crypto.subtle.generateKey(algorithmKeyGen, false, ["sign"]).then(
21 |   function(key) {
22 |     var dataPart1 = convertPlainTextToArrayBufferView("hello,");
23 |     var dataPart2 = convertPlainTextToArrayBufferView(" world!");
24 |     // TODO: create example utility function that converts text -> ArrayBufferView
25 | 
26 |     return window.crypto.subtle.sign(algorithmSign, key.privateKey, [dataPart1, dataPart2]);
27 |   },
28 |   console.error.bind(console, "Unable to generate a key")
29 | ).then(
30 |   console.log.bind(console, "The signature is: "),
31 |   console.error.bind(console, "Unable to sign")
32 | );
33 | 
34 | 
35 | /***********************************
36 |  * 19.2.
Symmetric Encryption
37 |  **/
38 | var clearDataArrayBufferView = convertPlainTextToArrayBufferView("Plain Text Data");
39 | // TODO: create example utility function that converts text -> ArrayBufferView
40 | 
41 | var aesAlgorithmKeyGen = {
42 |   name: "AES-CBC",
43 |   // AesKeyGenParams
44 |   length: 128
45 | };
46 | 
47 | var aesAlgorithmEncrypt = {
48 |   name: "AES-CBC",
49 |   // AesCbcParams
50 |   iv: window.crypto.getRandomValues(new Uint8Array(16))
51 | };
52 | 
53 | // Create a key generator to produce a one-time-use AES key to encrypt some data
54 | window.crypto.subtle.generateKey(aesAlgorithmKeyGen, false, ["encrypt"]).then(
55 |   function(aesKey) {
56 |     return window.crypto.subtle.encrypt(aesAlgorithmEncrypt, aesKey, [ clearDataArrayBufferView ]);
57 |   }
58 | ).then(console.log.bind(console, "The ciphertext is: "),
59 |        console.error.bind(console, "Unable to encrypt"));
60 | 
--------------------------------------------------------------------------------
/2013/08/Push API.md:
--------------------------------------------------------------------------------
1 | # Push API Draft Feedback
2 | 
3 | [Draft under discussion](https://dvcs.w3.org/hg/push/raw-file/tip/index.html)
4 | 
5 | ## Open questions
6 | 
7 | ### Registration persistence
8 | 
9 | In its present state, all acting parts (user agent, push server, app server) operate using formal identifiers (URLs, registration ids), and every connection can be re-established. The only major exception is the "last mile" between the user agent and app code: message routing there is bound to a temporary entity (a function callback) which exists only at runtime. If the runtime context is lost, the connection cannot be re-established.
10 | 
11 | We suggest binding push messages to Service Workers. That would also resolve other questions; for example, what should happen when a webapp calls `window.location.reload()`.
12 | 
13 | ### Push server <-> App server protocol
14 | 
15 | The spec neither specifies nor references this protocol.
From a developer's point of view, it's unclear how the app server should communicate with the push service given just an endpoint URL and a registration id.
16 | 
17 | ### Private push servers
18 | 
19 | Do we allow the user to change the browser's default push server? If the answer is "yes", which spec covers the push service interface?
20 | 
21 | ### Managing registrations
22 | 
23 | Do we allow the user to manage push registrations, i.e. view or delete registrations via the user agent's UI? In our view, the spec should recommend implementing such a capability, since currently we have to rely on the webapp providing some interface for unsubscribing.
24 | 
25 | ## API Objects' Responsibilities
26 | 
27 | ### ISSUE: Access granting and push registration functionality are lumped together
28 | 
29 | `PushManager.register` provides two heterogeneous functions:
30 | 
31 | * asking user permission,
32 | * registering for push notifications.
33 | 
34 | This mixing leads to odd user agent behavior, e.g. a push service failure forces the webapp to ask for permission again.
35 | 
36 | Suggestion: split the functionality and provide separate methods to ask for user permission (and to check whether permission has already been granted) and to register for push notifications. Provide different sets of errors for those purposes.
37 | 
38 | ## API Objects' Interfaces
39 | 
40 | ### ISSUE: Bad names
41 | 
42 | `PushManager` is limited to dealing with registrations. Suggestion: `PushRegistrationManager`.
43 | 
44 | `navigator.push` as an object of PushManager type seems misleading per se and is inconsistent with the `Array.prototype.push` method. Suggestion: `navigator.pushRegistrationManager`.
45 | 
46 | The `PushManager.registrations` method name seems inconsistent. Suggestion: `PushManager.getRegistrations`.
47 | 
48 | `AbortError` is raised when the user doesn't grant permission, not when a registration is aborted. Suggestion: `PermissionDeniedError`.
49 | 
50 | The `PushRegisterMessage` name is confusing, as it really occurs when the push service has failed, not when a new registration is granted.
Suggestion: `PushServiceFailure`.
51 | 
52 | `NoModificationAllowedError` is in fact a technical (network) error, not a disallowance.
53 | 
--------------------------------------------------------------------------------
/2013/11/WebCrypto.md:
--------------------------------------------------------------------------------
1 | # Web Crypto Draft Feedback
2 | 
3 | [Draft under discussion.](https://dvcs.w3.org/hg/webcrypto-api/file/dffe14c6052a/spec/Overview.html)
4 | 
5 | We extracted the IDL and example code from the draft in question using a snippet run at the developer tool command line:
6 | 
7 | ```js
8 | Array.prototype.slice.call(
9 |   document.querySelectorAll(".idl,.es-code")
10 | ).map(function(n) { return n.innerText; }).join("\n\n\n");
11 | ```
12 | 
13 | This is visible in [`WebCrypto.idl`](WebCrypto.idl) and [`WebCrypto_examples.js`](WebCrypto_examples.js) in the current directory.
14 | 
15 | ## General Discussion
16 | 
17 | The Web Crypto API provides a set of low-level interfaces to block ciphers, hash functions, and the use of key material. This review is a follow-up to collaborative work with the Web Crypto WG over the past year, including prototyping of Promise-based APIs and direct engagement regarding API style and idioms.
18 | 
19 | The API in question is explicitly designed to be low-level. Other drafts are aimed at [higher-level use-cases](https://dvcs.w3.org/hg/webcrypto-highlevel/raw-file/tip/Overview.html). The difficulty of correctly and securely using low-level cryptographic primitives is acknowledged in the API, largely through the addition of the `crypto.subtle.*` family of methods. These primitives allow direct encryption, decryption, signing, and key operations using a family of recommended (but not required) algorithms.
20 | 
21 | The explicit choice to avoid requiring specific algorithms, or creating a closed set of them, is motivated by the reality that the API is likely to survive longer than the algorithms.
In the past decade, many weaknesses have been found in previously-considered-secure algorithms and modes, in addition to the relentless increase in available CPU and GPU computation power per marginal dollar. Further, the advent of quantum computers makes changes in algorithm recommendations all the more likely.
22 | 
23 | A further challenge exists for secure usage: currently known-insecure algorithms are inputs to believed-secure modes and composite algorithms. This necessitates exposing believed-insecure algorithms in a low-level API. Further, it's likely that clients will need to continue to support insecure modes during transition periods while content moves to secure modes. All of this argues in favor of flexibility in the set of algorithms that the spec recommends and which conforming clients implement.
24 | 
25 | The WG faces considerable challenges in defining scope, particularly around provisioning and ownership of key material. This will become apparent as we discuss which APIs are and aren't made available later on.
26 | 
27 | ## API Hygiene
28 | 
29 | ### ISSUE: ...
30 | 
31 | ## Layering Considerations
32 | 
33 | Crypto algorithms are "just math". As such, nearly all commonly-used ciphers and modes can be implemented (perhaps inefficiently) in JavaScript. [asm.js](http://asmjs.org/spec/latest/) even brings JavaScript performance into line with other commonly used languages, enabling a straight de-sugaring of the recommended algorithms.
34 | 
35 | Such a reference implementation of the recommended algorithms is absent from the Web Crypto API. This seems a distinct oversight and something we recommend the WG look at for a next version.
36 | 
37 | The execution context for algorithm execution is unspecified. This goes hand-in-hand with a lack of constructors/methods for individual algorithms, but points to a larger concern: should users of the API expect that crypto operations are happening in secure memory? If not, are there situations or use-cases that demand these constraints?
If so, how can an API consumer know whether or not they're being accommodated?
38 | 
39 | Doing the exercise of defining how crypto operations actually work seems useful. Options might include specifying new options on Web Workers that would enable the promise-based API to be de-sugared.
40 | 
41 | 
42 | ## Other Considerations
43 | 
44 | ...
45 | 
46 | ## End Notes
47 | 
48 | ...
49 | 
--------------------------------------------------------------------------------
/2013/10/Web Animations.md:
--------------------------------------------------------------------------------
1 | # Web Animations 1.0 Draft Feedback
2 | 
3 | [Draft under discussion](http://dev.w3.org/fxtf/web-animations/)
4 | 
5 | ## API Field of Application
6 | 
7 | ### REQUEST: Backwards Compatibility
8 | The specification says that CSS3 Transitions, CSS3 Animations and SVG animations "can be defined in terms of this model without any observable change", but does not provide any further information. Having an algorithm for describing those animations in terms of Web Animations would be very helpful.
9 | 
10 | We understand that the Chrome team implemented those animations on top of Web Animations without breaking backwards compatibility, but that doesn't automatically imply that other teams will be able to do so, since Chrome's model may differ from other browsers' models and, indeed, from the actual specs.
11 | 
12 | ### REQUEST: Computed Values
13 | Though the specification describes the computation of animation values in detail, the API has no method of accessing computed values, except for calling getComputedStyle or accessing `.value` attributes; there is also no way to get the current time fraction unless you use `effectCallback`. This seems wrong, since a low-level spec should provide such a basic thing.
14 | 
15 | ### ISSUE: Statement needs clarification
16 |
Changes to specified style, specified attribute values, and the state of the Web Animations model made within the same execution block must be synchronized when rendering such that the whole set of changes is rendered together.
17 | The exact meaning of that phrase isn't clear. The rendering process isn't exposed in JavaScript, so it seems meaningless to impose such a restriction in the spec. Furthermore, current implementations *do* render changes to certain properties (e.g. width, margin) immediately after the change is made, and some webapps rely on that fact.
18 | Also, the term "execution block" needs to be defined (probably in terms of execution contexts).
19 | 
20 | ### ISSUE: Web Animations and requestAnimationFrame
21 | The specification doesn't mention the requestAnimationFrame API at all, so it's totally unclear how the Web Animations sampling algorithm relates to the requestAnimationFrame one, and how to use both of them in one webapp. It seems that, basically, Web Animations sampling does the same thing as rAF, i.e. executes a callback at the proper points in time. It looks like the rAF API could be built on top of the Web Animations model.
22 | 
23 | ### ISSUE: Constructibility, cloneability, serializability
24 | Some basic objects (notably `AnimationPlayer` and `TimedItem`) are marked as non-constructible for no visible reason.
25 | Some objects (notably `Animation`) have a "clone" method while others do not; the reasons remain unclear.
26 | There are no serialize/deserialize methods at all, though they seem useful in some cases.
27 | 
28 | ## API Levels of Abstraction
29 | 
30 | ### ISSUE: Wrong leveling
31 | The relations between the `AnimationPlayer`, `TimedItem` and `Timing` interfaces exhibit very uncommon patterns: copying writable attributes from `Timing` to the associated `TimedItem` as read-only attributes, a setter on `AnimationPlayer.source` which calls methods, etc.
32 | 
33 | This looks like a design problem. Representing "computed" timing should be separated from the grouping functionality and delegated to some kind of dependent entity (much as `getComputedStyle` in CSS is separated from the `.style` property).
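The split we have in mind can be sketched in plain JavaScript. This is purely illustrative — the `TimedItem` constructor signature and the `getComputedTiming` function below are hypothetical names, not part of the draft; the point is only that writable inputs and derived, read-only "computed" values live behind separate entities:

```js
// Illustrative sketch only; these names are not from the spec.
class TimedItem {
  constructor(timing) {
    // Writable timing inputs, analogous to element.style.
    this.timing = Object.assign({ delay: 0, duration: 0, iterations: 1 }, timing);
  }
}

// A dependent, read-only "computed" view, analogous to getComputedStyle.
function getComputedTiming(item) {
  const { delay, duration, iterations } = item.timing;
  return Object.freeze({
    duration,
    activeDuration: duration * iterations,
    endTime: delay + duration * iterations
  });
}

const item = new TimedItem({ duration: 3000, iterations: 2 });
console.log(getComputedTiming(item).activeDuration); // 6000
item.timing.duration = 1000;                         // mutate the inputs...
console.log(getComputedTiming(item).endTime);        // 2000: the computed view tracks them
```

With this shape, no read-only attributes need to be copied onto `TimedItem` itself, and no setters need to call methods behind the scenes.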
34 | 
35 | ## API Objects' Responsibilities
36 | 
37 | ### ISSUE: `play` method
38 | The `AnimationTimeline.play` method does two separate things: (a) creates an `AnimationPlayer` instance, and (b) plays that instance. Furthermore, the very name `play` is misleading, since `timeline.play` doesn't start playing the `timeline` (compare with `element.animate`, which actually animates the `element`). The method should probably be renamed to `startAnimationPlayer`, and `AnimationPlayer` should be made constructible.
39 | 
40 | ## API Objects' Interfaces
41 | 
42 | ### ISSUE: Inconsistent naming
43 | `TimedItem` and `TimedGroup` provide some methods similar to DOM ones, but the names don't fully match:
44 | * DOM ParentNode: firstElementChild, lastElementChild, childElementCount
45 | * WA AnimationGroup: firstChild, lastChild, no "count" property
46 | 
47 | ### REQUEST: A `finished` promise for `AnimationPlayer`
48 | 
49 | The `"finish"` event for `AnimationPlayer` is a perfect candidate for our newly-minted [use promises for state transitions](https://github.com/w3ctag/promises-guide#more-general-state-transitions) guidance. The TAG is interested in encouraging the introduction of promises for such cases across a wide variety of specs, as it has many authoring benefits, and as more and more async operations transition to using promises, network effects will multiply their usefulness.
But even restricting ourselves to the web animations API alone, it would enable code like: 50 | 51 | ```js 52 | ap1.play(); 53 | ap2.play(); 54 | ap3.play(); 55 | Promise.all([ap1.finished, ap2.finished]).then(() => { 56 | setUpUINowThat1And2HaveStoppedMovingAround(); 57 | ap3.finished.then(() => { 58 | completeAllUISetupNowThatEverythingIsStill(); 59 | }); 60 | }); 61 | ``` 62 | 63 | This could be made even more convenient if `play` returned that same promise, so that the code could become 64 | 65 | ```js 66 | ap3.play(); 67 | Promise.all([ap1.play(), ap2.play()]).then(() => { 68 | setUpUINowThat1And2HaveStoppedMovingAround(); 69 | ap3.finished.then(() => { 70 | completeAllUISetupNowThatEverythingIsStill(); 71 | }); 72 | }); 73 | ``` 74 | 75 | We realize the proposed name of `finished` clashes with the existing boolean property, and are happy to discuss alternatives. (Our first thought is to coalesce the booleans `paused` and `finished` into a single `state` enumeration.) Additionally, we are of mixed opinions as to whether the `"finish"` event should be kept alongside the promise. 76 | 77 | If this sounds intriguing, we will be happy to lend our help on promisifying the API, as we have done in the past [for web audio](https://github.com/WebAudio/web-audio-api/issues/252). 78 | -------------------------------------------------------------------------------- /2014/10/eme.md: -------------------------------------------------------------------------------- 1 | # W3C TAG EME Spec Review 2 | 3 | [Draft under discussion](https://dvcs.w3.org/hg/html-media/raw-file/tip/encrypted-media/encrypted-media.html) 4 | 5 | Above all else, we believe EME should be a web platform API, and embody all that this means. It should not just be an API that exposes existing DRM systems to applications, similarly to how the media capture API is not simply an API that exposes existing webcam drivers to applications, or the Media Source Extensions does not simply expose existing video codecs. 
As a consequence, it should focus strongly on interoperability and standardized behavior, in the same fashion as all other existing web APIs do.
6 | 
7 | We understand that the historical way of designing a DRM system has involved requirements that are unlike those of most web APIs, and that these cause tension with the usual way specifications are developed openly. EME has chosen to address this via the idea of a CDM, which encapsulates unspecified behavior necessary for robustness. But the fact that the CDM's behavior is undefined does not mean that EME as a whole becomes a free-for-all that can ignore how the web platform works.
8 | 
9 | Our concerns break down into three areas.
10 | 
11 | ## Author-Facing Interoperability Between Key Systems
12 | 
13 | It should be possible for a single web application to support multiple key systems, without writing code specific to each one, in a similar fashion to a web application supporting multiple video or audio codecs. Like codecs, the details of implementing a given key system may be hidden or proprietary. But this remains an implementation detail from an author-facing perspective. Authors should not care about which key system is in use, and after selecting one, all code they write should be the same no matter which key system was selected.
14 | 
15 | As a consequence of this, the capabilities of a pre-existing DRM system are not useful for guiding discussion. The goal of EME should be a common-denominator API that can be used to interface with all DRM systems equally.
16 | 
17 | Although CDMs are necessarily underspecified for robustness reasons, this should not be used as a loophole to drive through vendor-specific behavior. Out-of-band communication (via the CDM or otherwise) should not be possible, nor should vendor-specific extensions or extension points be blessed into the spec.
If a vendor is in favor of adding a given capability to EME (perhaps a feature their CDM supports), then they should not do so through a side-channel or extension point of the EME API, but instead through the normal standardization process for web platform features.
18 | 
19 | In general, given that CDMs are underspecified, their author-facing scope should be normatively limited as much as is possible while still giving the desired robustness guarantees.
20 | 
21 | ## User-Facing Concerns
22 | 
23 | As part of interoperability, EME should not provide APIs that are designed to allow restriction of content to one platform and/or key system. A content provider may ask for a certain level of robustness and/or hardware support, but not for a concrete platform or CDM provider. The web's platform-independence is its greatest strength, and EME should not provide a means for either content providers or authors to bypass it. While certain key systems may only be supported on certain platforms, and certain content may only be available with certain key systems, such restrictions should simply be features of the ecosystem and not sanctioned by EME. Similarly, when introducing features to EME (or to any web API), the extent to which they allow platform discrimination should be carefully considered.
24 | 
25 | As an analogy, different user agents support different cryptographic algorithms for TLS. However, from the perspective of most users, authors, and server administrators, the exact algorithm used for a given secure connection is unimportant. The procedure of negotiating which algorithm to use is completely hidden, and the high-level API for making secure web requests (viz. `XMLHttpRequest`) does not force authors to consider these details.
26 | 
27 | We are also deeply concerned about the security and privacy implications of EME. The ability of the CDM to potentially run arbitrary code is a hole in the web platform's security model.
To the extent that privacy-invasive or security-compromising features can be normatively disallowed, EME should do so. To the extent that they cannot be, e.g. for robustness reasons, we should restrict access to those features such that they can only be used from secure origins, so as to make them less accessible to attackers.
28 | 
29 | Speaking to both of these areas, the way in which the CDM currently provides a potentially-permanent cryptographic identifier to identify the client or user is troubling. It can serve as an unspoofable user-agent string, with all of the attendant risks, and is a glaring privacy hole. Again, if nothing else, this kind of power should not be given to arbitrary coffee-shop attackers, and thus should be limited to secure origins.
30 | 
31 | Another attack we are concerned about is the possibility of proprietary/encrypted license formats being used to deceive the user. The user agent should be able to read the license response and present that information to the user.
32 | 
33 | Finally, we are concerned with ensuring that encrypted media achieves the same level of accessibility as other media. This is partially a quality-of-implementation concern, but there may be ways the spec could normatively encourage it. Specific examples include ensuring the correct interaction with high-contrast mode and captioning systems.
34 | 
35 | ## Platform Segmentation
36 | 
37 | We are concerned about segmentation of the platform between different user agents, content providers, and authors. No existing web API relies upon those using it or implementing it to negotiate business deals, and we would prefer that EME be no different. To meet the normal bar for a web platform API:
38 | 
39 | - Independent content providers should be able to use EME to protect their content just as easily as large media companies.
40 | - New browser vendors should be able to add EME support to their browser based on open standards, without needing to license technology.
This should ideally be possible both for new desktop browsers and for new mobile browsers or new devices.
41 | - New key systems should be able to join the EME ecosystem by implementing an open standard.
42 | - Content providers should be able to implement and interface with multiple key systems via the same code, without dealing with inconsistency in feature sets or behaviors.
43 | 
44 | ## Security
45 | 
46 | Although this is tangential to our technical review, we are concerned about the way that DRM systems are protected by anti-circumvention laws, which prohibit probing and penetration testing of DRM features. This principle imposes considerable risks on web security specialists, and upon users, who must accept APIs that receive less than the usual level of scrutiny. We do not expect the specification or implementers to solve this, but we have reached out to the Advisory Board to discuss ways the TAG and AB can collaborate to solve this problem, which is both technical and legal in nature.
47 | 
48 | ## Closing Words
49 | 
50 | We recognize that many of our concerns are in conflict with conventional DRM systems. However, we are hopeful that they are not in conflict with EME, as an API that desires to bring DRM to the web platform. That is, we recognize that resolving them might not be possible as-is, but we still see them as goals to strive for.
51 | 
52 | The benefits of EME to content providers, user agents, and authors are clear: it allows a plugin-free way of distributing protected content to the wide audience of web users. But like all other APIs that allow us to reach web users, that reach comes at a cost: the API must be interoperable, open, and both platform-independent and content-provider-independent. Any DRM system that wants to be part of the open web platform needs to play by these rules, and the fact that existing solutions deployed on the web (e.g. NPAPI plugins) fail to do so is not an excuse for EME to hide behind.
53 | -------------------------------------------------------------------------------- /2014/02/quota-management-api.md: -------------------------------------------------------------------------------- 1 | # Quota Management API Review 2 | 3 | [Draft under discussion](https://dvcs.w3.org/hg/quota/raw-file/tip/Overview.html) — 2014-02-04 version 4 | 5 | In our view, this specification does not meet its goals or satisfy any compelling use cases. The overall model proposed is underspecified with regard to current storage models, and does not provide appropriate future extensibility. Furthermore, it continues down the path of infobar fatigue, which we want to avoid. 6 | 7 | ## The Overall Model 8 | 9 | The model presented here is an incremental evolution of the platform's existing capabilities, but is not a forward-thinking way to deal with the platform's storage APIs, and does not even provide that much value for those that already exist. 10 | 11 | For example, the distinction between "temporary" and "persistent" is underspecified. What is the actionable difference between temporary and persistent storage? Can apps rely on persistent storage being there forever, no matter how much space they use up? How ephemeral is temporary storage—could it be evicted at any time, without warning? Besides which, the only possibly-persistent storage on the platform currently is the filesystem API, which is only implemented in one browser engine. It seems strange that there's no way for an app to say "this IndexedDB database is really important." As a consequence, this distinction isn't useful to web authors. 12 | 13 | The current model provides ways to query the total amount of space used, in aggregate, by each storage type—but no way of knowing how much space the app is using on any given data. What happens if an app is denied access to more storage? As-is, it must simply guess at what items are most profitable to delete, in order to free up space, and then try again. 
And there's no way to know how much space something will take up before storing it. In summary, the ability to make decisions about what to store and what to delete must be largely based on trial and error. 14 | 15 | The idea of requesting more storage is yet another example of "infobar fatigue," asking users questions which they may not be able to answer intelligently. Boris Smus describes this problem in his post, ["Installable Webapps: Extend the Sandbox"](http://smus.com/installable-webapps/). Modern specifications need to empower the user agent to make more intelligent decisions on behalf of the user, but this specification's model of simply requesting more space almost inevitably requires infobars or similar UI. 16 | 17 | ## Proposed Use Cases, Requirements, and Constraints 18 | 19 | Given the above critique, we can ask, what are the base-level assumptions that a quota management API should be considering? Some are implicit in the current spec which might not be the most accurate; the above feedback, coming from a different direction, contains a few others. 20 | 21 | We'd love to work with the editors on identifying these in detail to help drive future revisions of the API. Off the cuff, a few come to mind: 22 | 23 | - **Use case**: React to space pressure in the environment to delete caches or other unnecessary data 24 | - **Use case**: Ensure enough space will be available to store the result of a potentially-expensive computation or download, before performing that computation or download 25 | - **Requirement**: Be able to measure the space taken up by various storage artifacts 26 | - **Requirement**: Allow applications to choose the relative importance of keeping storage artifacts without regard for their storage medium (i.e. allow keeping important data in IndexedDB or unimportant caches in the filesystem) 27 | - **Requirement**: Give apps enough information to notify the user about evicted storage artifacts (e.g. 
removing the "downloaded" checkmark from a video that was removed in response to space pressure) 28 | - **Constraint**: Minimize user interaction ("infobars") except when unavoidable, e.g. for resolving a conflict between space pressure and an app's desire to keep a certain storage artifact inviolate 29 | - **Constraint**: Any APIs specified must be flexible enough that it is clear how to extend them to future storage types, e.g. ServiceWorker caches 30 | 31 | ## Idiomatic JavaScript Critiques 32 | 33 | Although we recognize that the above discussion of the overall model might make much of the below obsolete, we hope this can still be useful feedback for how to build more idiomatic JavaScript APIs. 34 | 35 | ### Promises should be rejected with `Error` instances 36 | 37 | As per our [guidance on writing promise-using specifications](https://github.com/w3ctag/promises-guide), promises should always be rejected with objects that are `instanceof Error`. Notably, `DOMError` does *not* meet this criterion, whereas `DOMException` does. 38 | 39 | ### Time intervals should be in milliseconds, not seconds 40 | 41 | The `StorageWatcher`'s `rate` parameter is currently given in seconds, but most times in JavaScript—as seen e.g. in `setTimeout`, `setInterval`, or `Date.now()`—are represented in milliseconds. 42 | 43 | ### `supportedTypes` should be a frozen array 44 | 45 | The specification doesn't make it precisely clear, but we assume that `supportedTypes` cannot change during runtime. If that is the case, it should be represented as a *frozen array object*. (Furthermore, it should be the same frozen array object each time, so that `navigator.storageQuota.supportedTypes === navigator.storageQuota.supportedTypes`.) 
In JavaScript, the most idiomatic representation would be as 46 | 47 | ```js 48 | Object.defineProperty(navigator.storageQuota, "supportedTypes", { 49 | configurable: true, 50 | value: Object.freeze(["temporary", "persistent"]) 51 | }); 52 | ``` 53 | 54 | Note here that `navigator.storageQuota.supportedTypes` is a non-writable data property. WebIDL does not allow expressing such things, so at the cost of some idiomaticness, it would have to probably be a getter: 55 | 56 | ```js 57 | var supportedTypes = Object.freeze(["temporary", "persistent"]); 58 | Object.defineProperty(navigator.storageQuota, "supportedTypes", { 59 | configurable: true, 60 | get: function () { return supportedTypes; } 61 | }); 62 | ``` 63 | 64 | which, using the terminology of [bug 23682](https://www.w3.org/Bugs/Public/show_bug.cgi?id=23682), translates to the WebIDL of 65 | 66 | ```webidl 67 | [SameObject] readonly attribute frozen array supportedTypes; 68 | ``` 69 | 70 | ### Use dictionaries instead of non-constructible classes 71 | 72 | `StorageInfo` is specified as a `[NoInterfaceObject]` class (WebIDL "interface"), with no constructor. The idea of a constructor-less class in JavaScript is fairly nonsensical (in JS a class is literally the same thing as a constructor). In this particular case, it makes no sense for `StorageInfo` to be a class, with the `StorageInfo.prototype` that comes along with it, containing the two getters `usage` and `quota`. 73 | 74 | In JavaScript, we would instead represent such an object as simply an object literal, e.g. `{ usage: 5, quota: 10 }`. This is not an instance of any class, and especially not of a non-constructible one that somehow springs into life without ever being `new`ed. It has `Object.prototype` as its prototype, and has no getters, simply properties. 
In WebIDL, this would be represented with a dictionary type: 75 | 76 | ```webidl 77 | dictionary StorageInfo { 78 | unsigned long long usage; 79 | unsigned long long quota; 80 | }; 81 | ``` 82 | 83 | ### Don't use non-constructible classes as namespaces 84 | 85 | This is essentially the same issue as the previous one, but in this case we are discussing `StorageQuota`. Again, `navigator.storageQuota` has somehow sprung into being as the only instance of a class `StorageQuota`, which it is not possible to actually construct an instance of since it has no constructor. In JavaScript, you would set up `navigator.storageQuota` as a simple "namespace object," again with no specially-crafted prototype chain, simply through something like 86 | 87 | ```js 88 | navigator.storageQuota = { 89 | supportedTypes: Object.freeze(["temporary", "persistent"]), 90 | queryInfo: function (type) { ... }, 91 | requestPersistentQuota: function (newQuota) { ... } 92 | }; 93 | 94 | // Now correct the access modifiers 95 | Object.defineProperties(navigator.storageQuota, { 96 | supportedTypes: { writable: false }, 97 | queryInfo: { enumerable: false }, 98 | requestPersistentQuota: { enumerable: false } 99 | }); 100 | ``` 101 | 102 | I think that a WebIDL dictionary would probably again be the best way to express this in that language. 103 | -------------------------------------------------------------------------------- /2013/07/webaudio.idl: -------------------------------------------------------------------------------- 1 | /*********************************** 2 | * Extracted via the web inspector from: 3 | * 4 | * https://dvcs.w3.org/hg/audio/raw-file/28a38310adae/webaudio/specification.html 5 | * 6 | * By running: 7 | * 8 | * Array.prototype.slice.call( 9 | * document.querySelectorAll(".idl,.idl-code,.es-code") 10 | * ).map(function(n) { return n.innerText; }).join("\n\n\n"); 11 | * 12 | * Reformatted for readability.
13 | * 14 | **/ 15 | 16 | 17 | /*********************************** 18 | * 4.1 19 | **/ 20 | callback DecodeSuccessCallback = void (AudioBuffer decodedData); 21 | callback DecodeErrorCallback = void (); 22 | 23 | [Constructor] 24 | interface AudioContext : EventTarget { 25 | 26 | readonly attribute AudioDestinationNode destination; 27 | readonly attribute float sampleRate; 28 | readonly attribute double currentTime; 29 | readonly attribute AudioListener listener; 30 | 31 | AudioBuffer createBuffer(unsigned long numberOfChannels, unsigned long length, float sampleRate); 32 | 33 | void decodeAudioData(ArrayBuffer audioData, 34 | DecodeSuccessCallback successCallback, 35 | optional DecodeErrorCallback errorCallback); 36 | 37 | 38 | // AudioNode creation 39 | AudioBufferSourceNode createBufferSource(); 40 | 41 | MediaElementAudioSourceNode createMediaElementSource(HTMLMediaElement mediaElement); 42 | 43 | MediaStreamAudioSourceNode createMediaStreamSource(MediaStream mediaStream); 44 | MediaStreamAudioDestinationNode createMediaStreamDestination(); 45 | 46 | ScriptProcessorNode createScriptProcessor(optional unsigned long bufferSize = 0, 47 | optional unsigned long numberOfInputChannels = 2, 48 | optional unsigned long numberOfOutputChannels = 2); 49 | 50 | AnalyserNode createAnalyser(); 51 | GainNode createGain(); 52 | DelayNode createDelay(optional double maxDelayTime = 1.0); 53 | BiquadFilterNode createBiquadFilter(); 54 | WaveShaperNode createWaveShaper(); 55 | PannerNode createPanner(); 56 | ConvolverNode createConvolver(); 57 | 58 | ChannelSplitterNode createChannelSplitter(optional unsigned long numberOfOutputs = 6); 59 | ChannelMergerNode createChannelMerger(optional unsigned long numberOfInputs = 6); 60 | 61 | DynamicsCompressorNode createDynamicsCompressor(); 62 | 63 | OscillatorNode createOscillator(); 64 | PeriodicWave createPeriodicWave(Float32Array real, Float32Array imag); 65 | 66 | }; 67 | 68 | 69 | /*********************************** 70 | * 4.1b 71 | 
**/ 72 | [Constructor(unsigned long numberOfChannels, unsigned long length, float sampleRate)] 73 | interface OfflineAudioContext : AudioContext { 74 | 75 | void startRendering(); 76 | 77 | attribute EventHandler oncomplete; 78 | 79 | }; 80 | 81 | 82 | /*********************************** 83 | * 4.1c 84 | **/ 85 | interface OfflineAudioCompletionEvent : Event { 86 | 87 | readonly attribute AudioBuffer renderedBuffer; 88 | 89 | }; 90 | 91 | 92 | /*********************************** 93 | * 4.2 94 | **/ 95 | enum ChannelCountMode { 96 | "max", 97 | "clamped-max", 98 | "explicit" 99 | }; 100 | 101 | enum ChannelInterpretation { 102 | "speakers", 103 | "discrete" 104 | }; 105 | 106 | interface AudioNode : EventTarget { 107 | 108 | void connect(AudioNode destination, optional unsigned long output = 0, optional unsigned long input = 0); 109 | void connect(AudioParam destination, optional unsigned long output = 0); 110 | void disconnect(optional unsigned long output = 0); 111 | 112 | readonly attribute AudioContext context; 113 | readonly attribute unsigned long numberOfInputs; 114 | readonly attribute unsigned long numberOfOutputs; 115 | 116 | // Channel up-mixing and down-mixing rules for all inputs. 117 | attribute unsigned long channelCount; 118 | attribute ChannelCountMode channelCountMode; 119 | attribute ChannelInterpretation channelInterpretation; 120 | 121 | }; 122 | 123 | 124 | /*********************************** 125 | * 4.4 126 | **/ 127 | interface AudioDestinationNode : AudioNode { 128 | 129 | readonly attribute unsigned long maxChannelCount; 130 | 131 | }; 132 | 133 | 134 | /*********************************** 135 | * 4.5 136 | **/ 137 | interface AudioParam { 138 | 139 | attribute float value; 140 | readonly attribute float defaultValue; 141 | 142 | // Parameter automation. 
143 | void setValueAtTime(float value, double startTime); 144 | void linearRampToValueAtTime(float value, double endTime); 145 | void exponentialRampToValueAtTime(float value, double endTime); 146 | 147 | // Exponentially approach the target value with a rate having the given time constant. 148 | void setTargetAtTime(float target, double startTime, double timeConstant); 149 | 150 | // Sets an array of arbitrary parameter values starting at time for the given duration. 151 | // The number of values will be scaled to fit into the desired duration. 152 | void setValueCurveAtTime(Float32Array values, double startTime, double duration); 153 | 154 | // Cancels all scheduled parameter changes with times greater than or equal to startTime. 155 | void cancelScheduledValues(double startTime); 156 | 157 | }; 158 | 159 | 160 | /*********************************** 161 | * 4.7 162 | **/ 163 | interface GainNode : AudioNode { 164 | 165 | readonly attribute AudioParam gain; 166 | 167 | }; 168 | 169 | 170 | /*********************************** 171 | * 4.8 172 | **/ 173 | interface DelayNode : AudioNode { 174 | 175 | readonly attribute AudioParam delayTime; 176 | 177 | }; 178 | 179 | /*********************************** 180 | * 4.9 181 | **/ 182 | interface AudioBuffer { 183 | 184 | readonly attribute float sampleRate; 185 | readonly attribute long length; 186 | 187 | // in seconds 188 | readonly attribute double duration; 189 | 190 | readonly attribute long numberOfChannels; 191 | 192 | Float32Array getChannelData(unsigned long channel); 193 | 194 | }; 195 | 196 | 197 | /*********************************** 198 | * 4.10 199 | **/ 200 | interface AudioBufferSourceNode : AudioNode { 201 | 202 | attribute AudioBuffer? 
buffer; 203 | 204 | readonly attribute AudioParam playbackRate; 205 | 206 | attribute boolean loop; 207 | attribute double loopStart; 208 | attribute double loopEnd; 209 | 210 | void start(optional double when = 0, optional double offset = 0, optional double duration); 211 | void stop(optional double when = 0); 212 | 213 | attribute EventHandler onended; 214 | 215 | }; 216 | 217 | 218 | /*********************************** 219 | * 4.11 220 | **/ 221 | interface MediaElementAudioSourceNode : AudioNode { 222 | 223 | }; 224 | 225 | 226 | /*********************************** 227 | * 4.12 228 | **/ 229 | interface ScriptProcessorNode : AudioNode { 230 | 231 | attribute EventHandler onaudioprocess; 232 | 233 | readonly attribute long bufferSize; 234 | 235 | }; 236 | 237 | 238 | /*********************************** 239 | * 4.13 240 | **/ 241 | interface AudioProcessingEvent : Event { 242 | 243 | readonly attribute double playbackTime; 244 | readonly attribute AudioBuffer inputBuffer; 245 | readonly attribute AudioBuffer outputBuffer; 246 | 247 | }; 248 | 249 | /*********************************** 250 | * 4.13 251 | **/ 252 | enum PanningModelType { 253 | "equalpower", 254 | "HRTF" 255 | }; 256 | 257 | enum DistanceModelType { 258 | "linear", 259 | "inverse", 260 | "exponential" 261 | }; 262 | 263 | interface PannerNode : AudioNode { 264 | 265 | // Default for stereo is HRTF 266 | attribute PanningModelType panningModel; 267 | 268 | // Uses a 3D cartesian coordinate system 269 | void setPosition(double x, double y, double z); 270 | void setOrientation(double x, double y, double z); 271 | void setVelocity(double x, double y, double z); 272 | 273 | // Distance model and attributes 274 | attribute DistanceModelType distanceModel; 275 | attribute double refDistance; 276 | attribute double maxDistance; 277 | attribute double rolloffFactor; 278 | 279 | // Directional sound cone 280 | attribute double coneInnerAngle; 281 | attribute double coneOuterAngle; 282 | attribute double 
coneOuterGain; 283 | 284 | }; 285 | 286 | 287 | /*********************************** 288 | * 4.15 289 | **/ 290 | interface AudioListener { 291 | 292 | attribute double dopplerFactor; 293 | attribute double speedOfSound; 294 | 295 | // Uses a 3D cartesian coordinate system 296 | void setPosition(double x, double y, double z); 297 | void setOrientation(double x, double y, double z, double xUp, double yUp, double zUp); 298 | void setVelocity(double x, double y, double z); 299 | 300 | }; 301 | 302 | 303 | /*********************************** 304 | * 4.16 305 | **/ 306 | interface ConvolverNode : AudioNode { 307 | 308 | attribute AudioBuffer? buffer; 309 | attribute boolean normalize; 310 | 311 | }; 312 | 313 | 314 | /*********************************** 315 | * 4.17 316 | **/ 317 | interface AnalyserNode : AudioNode { 318 | 319 | // Real-time frequency-domain data 320 | void getFloatFrequencyData(Float32Array array); 321 | void getByteFrequencyData(Uint8Array array); 322 | 323 | // Real-time waveform data 324 | void getByteTimeDomainData(Uint8Array array); 325 | 326 | attribute unsigned long fftSize; 327 | readonly attribute unsigned long frequencyBinCount; 328 | 329 | attribute double minDecibels; 330 | attribute double maxDecibels; 331 | 332 | attribute double smoothingTimeConstant; 333 | 334 | }; 335 | 336 | 337 | /*********************************** 338 | * 4.18 339 | **/ 340 | interface ChannelSplitterNode : AudioNode { 341 | 342 | }; 343 | 344 | 345 | /*********************************** 346 | * 4.19 347 | **/ 348 | interface ChannelMergerNode : AudioNode { 349 | 350 | }; 351 | 352 | 353 | /*********************************** 354 | * 4.20 355 | **/ 356 | interface DynamicsCompressorNode : AudioNode { 357 | 358 | readonly attribute AudioParam threshold; // in Decibels 359 | readonly attribute AudioParam knee; // in Decibels 360 | readonly attribute AudioParam ratio; // unit-less 361 | readonly attribute AudioParam reduction; // in Decibels 362 | readonly attribute 
AudioParam attack; // in Seconds 363 | readonly attribute AudioParam release; // in Seconds 364 | 365 | }; 366 | 367 | 368 | /*********************************** 369 | * 4.21 370 | **/ 371 | enum BiquadFilterType { 372 | "lowpass", 373 | "highpass", 374 | "bandpass", 375 | "lowshelf", 376 | "highshelf", 377 | "peaking", 378 | "notch", 379 | "allpass" 380 | }; 381 | 382 | interface BiquadFilterNode : AudioNode { 383 | 384 | attribute BiquadFilterType type; 385 | readonly attribute AudioParam frequency; // in Hertz 386 | readonly attribute AudioParam detune; // in Cents 387 | readonly attribute AudioParam Q; // Quality factor 388 | readonly attribute AudioParam gain; // in Decibels 389 | 390 | void getFrequencyResponse(Float32Array frequencyHz, 391 | Float32Array magResponse, 392 | Float32Array phaseResponse); 393 | 394 | }; 395 | 396 | 397 | /*********************************** 398 | * 4.22 399 | **/ 400 | enum OverSampleType { 401 | "none", 402 | "2x", 403 | "4x" 404 | }; 405 | 406 | interface WaveShaperNode : AudioNode { 407 | 408 | attribute Float32Array? 
curve; 409 | attribute OverSampleType oversample; 410 | 411 | }; 412 | 413 | 414 | /*********************************** 415 | * 4.23 416 | **/ 417 | enum OscillatorType { 418 | "sine", 419 | "square", 420 | "sawtooth", 421 | "triangle", 422 | "custom" 423 | }; 424 | 425 | interface OscillatorNode : AudioNode { 426 | 427 | attribute OscillatorType type; 428 | 429 | readonly attribute AudioParam frequency; // in Hertz 430 | readonly attribute AudioParam detune; // in Cents 431 | 432 | void start(double when); 433 | void stop(double when); 434 | void setPeriodicWave(PeriodicWave periodicWave); 435 | 436 | attribute EventHandler onended; 437 | 438 | }; 439 | 440 | 441 | /*********************************** 442 | * 4.24 443 | **/ 444 | interface PeriodicWave { 445 | 446 | }; 447 | 448 | 449 | /*********************************** 450 | * 4.25 451 | **/ 452 | interface MediaStreamAudioSourceNode : AudioNode { 453 | 454 | }; 455 | 456 | 457 | /*********************************** 458 | * 4.26 459 | **/ 460 | interface MediaStreamAudioDestinationNode : AudioNode { 461 | 462 | readonly attribute MediaStream stream; 463 | 464 | }; 465 | -------------------------------------------------------------------------------- /2013/11/WebCrypto.idl: -------------------------------------------------------------------------------- 1 | /*********************************** 2 | * Extracted via the web inspector from: 3 | * 4 | * https://dvcs.w3.org/hg/webcrypto-api/file/dffe14c6052a/spec/Overview.html 5 | * 6 | * By running: 7 | * 8 | * Array.prototype.slice.call( 9 | * document.querySelectorAll(".idl,.idl-code,.es-code") 10 | * ).map(function(n) { return n.innerText; }).join("\n\n\n"); 11 | * 12 | * Reformatted for readability. 
13 | * 14 | **/ 15 | 16 | /*********************************** 17 | * 9 18 | **/ 19 | [NoInterfaceObject] 20 | interface RandomSource { 21 | ArrayBufferView getRandomValues(ArrayBufferView array); 22 | }; 23 | 24 | 25 | 26 | /*********************************** 27 | * 10 28 | **/ 29 | // TBD: ISSUE-28 30 | typedef (Algorithm or DOMString) AlgorithmIdentifier; 31 | 32 | dictionary Algorithm { 33 | DOMString name; 34 | }; 35 | 36 | 37 | 38 | /*********************************** 39 | * 11 40 | **/ 41 | enum KeyType { 42 | "secret", 43 | "public", 44 | "private" 45 | }; 46 | 47 | enum KeyUsage { 48 | "encrypt", 49 | "decrypt", 50 | "sign", 51 | "verify", 52 | "deriveKey", 53 | "deriveBits", 54 | "wrapKey", 55 | "unwrapKey" 56 | }; 57 | 58 | interface Key { 59 | readonly attribute KeyType type; 60 | readonly attribute boolean extractable; 61 | readonly attribute Algorithm algorithm; 62 | readonly attribute KeyUsage[] usages; 63 | }; 64 | 65 | 66 | 67 | /*********************************** 68 | * 12 69 | **/ 70 | interface Crypto { 71 | readonly attribute SubtleCrypto subtle; 72 | }; 73 | 74 | Crypto implements RandomSource; 75 | 76 | partial interface Window { 77 | readonly attribute Crypto crypto; 78 | }; 79 | 80 | 81 | 82 | /*********************************** 83 | * 13. SubtleCrypto interface 84 | **/ 85 | enum KeyFormat { 86 | // An unformatted sequence of bytes. Intended for secret keys. 87 | "raw", 88 | // The DER encoding of the PrivateKeyInfo structure from RFC 5208. 89 | "pkcs8", 90 | // The DER encoding of the SubjectPublicKeyInfo structure from RFC 5280. 91 | "spki", 92 | // The key is represented as JSON according to the JSON Web Key format. 
93 | "jwk", 94 | }; 95 | 96 | typedef (ArrayBuffer or ArrayBufferView) CryptoOperationData; 97 | 98 | interface SubtleCrypto { 99 | Promise encrypt(AlgorithmIdentifier algorithm, 100 | Key key, 101 | sequence data); 102 | Promise decrypt(AlgorithmIdentifier algorithm, 103 | Key key, 104 | sequence data); 105 | Promise sign(AlgorithmIdentifier algorithm, 106 | Key key, 107 | sequence data); 108 | Promise verify(AlgorithmIdentifier algorithm, 109 | Key key, 110 | CryptoOperationData signature, 111 | sequence data); 112 | Promise digest(AlgorithmIdentifier algorithm, 113 | sequence data); 114 | 115 | Promise generateKey(AlgorithmIdentifier algorithm, 116 | optional boolean extractable = false, 117 | optional KeyUsage[] keyUsages = []); 118 | Promise deriveKey(AlgorithmIdentifier algorithm, 119 | Key baseKey, 120 | AlgorithmIdentifier? derivedKeyType, 121 | optional boolean extractable = false, 122 | optional KeyUsage[] keyUsages = []); 123 | Promise deriveBits(AlgorithmIdentifier algorithm, 124 | Key baseKey, 125 | unsigned long length); 126 | 127 | // TBD: ISSUE-35 128 | Promise importKey(KeyFormat format, 129 | CryptoOperationData keyData, 130 | AlgorithmIdentifier? algorithm, 131 | optional boolean extractable = false, 132 | optional KeyUsage[] keyUsages = []); 133 | Promise exportKey(KeyFormat format, Key key); 134 | 135 | // Note: wrapKey and unwrapKey remain "Features at Risk" 136 | Promise wrapKey(KeyFormat format, 137 | Key key, 138 | Key wrappingKey, 139 | AlgorithmIdentifier wrapAlgorithm); 140 | Promise unwrapKey(KeyFormat format, 141 | CryptoOperationData wrappedKey, 142 | Key unwrappingKey, 143 | AlgorithmIdentifier unwrapAlgorithm, 144 | AlgorithmIdentifier? unwrappedKeyAlgorithm, 145 | optional boolean extractable = false, 146 | optional KeyUsage[] keyUsages = []); 147 | }; 148 | 149 | 150 | 151 | 152 | /*********************************** 153 | * 14. 
WorkerCrypto interface 154 | **/ 155 | interface WorkerCrypto { 156 | }; 157 | 158 | WorkerCrypto implements RandomSource; 159 | 160 | partial interface WorkerGlobalScope { 161 | readonly attribute WorkerCrypto crypto; 162 | }; 163 | 164 | 165 | 166 | /*********************************** 167 | * 15. BigInteger 168 | **/ 169 | typedef Uint8Array BigInteger; 170 | 171 | 172 | 173 | /*********************************** 174 | * 16. KeyPair 175 | **/ 176 | interface KeyPair { 177 | Key publicKey; 178 | Key privateKey; 179 | }; 180 | 181 | 182 | 183 | /*********************************** 184 | * 17.4.3. RsaKeyGenParams dictionary 185 | **/ 186 | dictionary RsaKeyGenParams : Algorithm { 187 | // The length, in bits, of the RSA modulus 188 | unsigned long modulusLength; 189 | // The RSA public exponent 190 | BigInteger publicExponent; 191 | }; 192 | 193 | 194 | 195 | /*********************************** 196 | * 17.5.3. RsaSsaParams dictionary 197 | **/ 198 | dictionary RsaSsaParams : Algorithm { 199 | // The hash algorithm to use 200 | AlgorithmIdentifier hash; 201 | }; 202 | 203 | 204 | 205 | /*********************************** 206 | * 17.6.3. RsaPssParams dictionary 207 | **/ 208 | 209 | 210 | dictionary RsaPssParams : Algorithm { 211 | // The hash function to apply to the message 212 | AlgorithmIdentifier hash; 213 | // The desired length of the random salt 214 | unsigned long saltLength; 215 | }; 216 | 217 | 218 | 219 | /*********************************** 220 | * 17.7.3. RsaOaepParams dictionary 221 | **/ 222 | dictionary RsaOaepParams : Algorithm { 223 | // The hash function to apply to the message 224 | AlgorithmIdentifier hash; 225 | // The optional label/application data to associate with the message 226 | CryptoOperationData? label; 227 | }; 228 | 229 | 230 | 231 | /*********************************** 232 | * 17.8.3.
EcdsaParams dictionary 233 | **/ 234 | dictionary EcdsaParams : Algorithm { 235 | // The hash algorithm to use 236 | AlgorithmIdentifier hash; 237 | }; 238 | 239 | 240 | 241 | /*********************************** 242 | * 17.8.4. EcKeyGenParams dictionary 243 | **/ 244 | enum NamedCurve { 245 | // NIST recommended curve P-256, also known as secp256r1. 246 | "P-256", 247 | // NIST recommended curve P-384, also known as secp384r1. 248 | "P-384", 249 | // NIST recommended curve P-521, also known as secp521r1. 250 | "P-521" 251 | }; 252 | 253 | dictionary EcKeyGenParams : Algorithm { 254 | // A named curve 255 | NamedCurve namedCurve; 256 | }; 257 | 258 | 259 | 260 | /*********************************** 261 | * 17.9.3. EcdhKeyDeriveParams dictionary 262 | **/ 263 | typedef Uint8Array ECPoint; 264 | 265 | dictionary EcdhKeyDeriveParams : Algorithm { 266 | // The peer's EC public key. 267 | ECPoint public; 268 | }; 269 | 270 | 271 | 272 | /*********************************** 273 | * 17.10.3. AesCtrParams dictionary 274 | **/ 275 | dictionary AesCtrParams : Algorithm { 276 | // The initial value of the counter block. counter MUST be 16 bytes 277 | // (the AES block size). The counter bits are the rightmost length 278 | // bits of the counter block. The rest of the counter block is for 279 | // the nonce. The counter bits are incremented using the standard 280 | // incrementing function specified in NIST SP 800-38A Appendix B.1: 281 | // the counter bits are interpreted as a big-endian integer and 282 | // incremented by one. 283 | CryptoOperationData counter; 284 | // The length, in bits, of the rightmost part of the counter block 285 | // that is incremented. 286 | [EnforceRange] octet length; 287 | }; 288 | 289 | 290 | 291 | /*********************************** 292 | * 17.10.4. AesKeyGenParams dictionary 293 | **/ 294 | dictionary AesKeyGenParams : Algorithm { 295 | // The length, in bits, of the key. 
296 | [EnforceRange] unsigned short length; 297 | }; 298 | 299 | 300 | 301 | /*********************************** 302 | * 17.11.3. AesCbcParams dictionary 303 | **/ 304 | dictionary AesCbcParams : Algorithm { 305 | // The initialization vector. MUST be 16 bytes. 306 | CryptoOperationData iv; 307 | }; 308 | 309 | 310 | 311 | /*********************************** 312 | * 17.13.3. AesGcmParams dictionary 313 | **/ 314 | dictionary AesGcmParams : Algorithm { 315 | // The initialization vector to use. May be up to 2^56 bytes long. 316 | CryptoOperationData iv; 317 | // The additional authentication data to include. 318 | CryptoOperationData? additionalData; 319 | // The desired length of the authentication tag. May be 0 - 128. 320 | [EnforceRange] octet? tagLength; 321 | }; 322 | 323 | 324 | 325 | /*********************************** 326 | * 17.14.3. AesCfbParams dictionary 327 | **/ 328 | dictionary AesCfbParams : Algorithm { 329 | // The initialization vector. MUST be 16 bytes. 330 | CryptoOperationData iv; 331 | }; 332 | 333 | 334 | 335 | /*********************************** 336 | * 17.15.3. HmacParams dictionary 337 | **/ 338 | dictionary HmacParams : Algorithm { 339 | // The inner hash function to use. 340 | AlgorithmIdentifier hash; 341 | }; 342 | 343 | 344 | 345 | /*********************************** 346 | * 17.15.4. HmacKeyParams dictionary 347 | **/ 348 | dictionary HmacKeyParams : Algorithm { 349 | // The inner hash function to use. 350 | AlgorithmIdentifier hash; 351 | // The length (in bytes) of the key to generate. If unspecified, the 352 | // recommended length will be used, which is the size of the associated hash function's block 353 | // size. 354 | unsigned long length; 355 | }; 356 | 357 | 358 | 359 | /*********************************** 360 | * 17.16.3. DhKeyGenParams dictionary 361 | **/ 362 | dictionary DhKeyGenParams : Algorithm { 363 | // The prime p. 364 | BigInteger prime; 365 | // The base g.
366 | BigInteger generator; 367 | }; 368 | 369 | 370 | 371 | /*********************************** 372 | * 17.16.4. DhKeyDeriveParams dictionary 373 | **/ 374 | dictionary DhKeyDeriveParams : Algorithm { 375 | // The peer's public value. 376 | BigInteger public; 377 | }; 378 | 379 | 380 | 381 | /*********************************** 382 | * 17.18.3. ConcatParams dictionary 383 | **/ 384 | dictionary ConcatParams : Algorithm { 385 | // The digest method to use to derive the keying material. 386 | AlgorithmIdentifier hash; 387 | 388 | // A bit string corresponding to the AlgorithmId field of the OtherInfo parameter. 389 | // The AlgorithmId indicates how the derived keying material will be parsed and for which 390 | // algorithm(s) the derived secret keying material will be used. 391 | CryptoOperationData algorithmId; 392 | 393 | // A bit string that corresponds to the PartyUInfo field of the OtherInfo parameter. 394 | CryptoOperationData partyUInfo; 395 | // A bit string that corresponds to the PartyVInfo field of the OtherInfo parameter. 396 | CryptoOperationData partyVInfo; 397 | // An optional bit string that corresponds to the SuppPubInfo field of the OtherInfo parameter. 398 | CryptoOperationData? publicInfo; 399 | // An optional bit string that corresponds to the SuppPrivInfo field of the OtherInfo parameter. 400 | CryptoOperationData? privateInfo; 401 | }; 402 | 403 | 404 | 405 | /*********************************** 406 | * 17.19.3. HkdfCtrParams dictionary 407 | **/ 408 | dictionary HkdfCtrParams : Algorithm { 409 | // The algorithm to use with HMAC (eg: SHA-256) 410 | AlgorithmIdentifier hash; 411 | // A bit string that corresponds to the label that identifies the purpose for the derived keying material. 
412 | CryptoOperationData label; 413 | // A bit string that corresponds to the context of the key derivation, as described in Section 5 of NIST SP 800-108 [SP800-108] 414 | CryptoOperationData context; 415 | }; 416 | 417 | 418 | 419 | /*********************************** 420 | * 17.20.3. Pbkdf2Params dictionary 421 | **/ 422 | dictionary Pbkdf2Params : Algorithm { 423 | CryptoOperationData salt; 424 | [Clamp] unsigned long iterations; 425 | AlgorithmIdentifier prf; 426 | CryptoOperationData? password; 427 | }; 428 | -------------------------------------------------------------------------------- /2014/04/http-209.md: -------------------------------------------------------------------------------- 1 | # HTTP 209 2 | 3 | *This is an unofficial draft review, written by Jeni Tennison after discussion at the W3C TAG April 2014 F2F, of "[The Hypertext Transfer Protocol (HTTP) Status Code 209 (Contents of Related)](http://www.w3.org/2014/02/2xx/draft-prudhommeaux-http-status-209)" draft.* 4 | 5 | ## Overview 6 | 7 | The proposed `2NN Contents of Related` status code indicates that the server is providing a representation of a related resource rather than a representation of the requested resource. It can be described as shorthand for a `303 See Other` response followed by a `200 OK` response on the redirection target. 8 | 9 | The main advantages of introducing this status code are: 10 | 11 | * that it avoids at least one additional HTTP request and response, reducing latency 12 | * that it retains the original URL in the browser's address bar 13 | 14 | ## Suggested Improvements 15 | 16 | 1. The current draft is sadly not visible in some browsers (notably Chrome). While not a bug in the draft itself, we recommend providing an HTML version of the draft, rather than relying on XML+XSLT, to increase its general accessibility. 17 | 18 | 1. The current draft states that the status code is `209`.
The [HTTPbis specification states](http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-8.2) 19 | 20 | > Proposals for new status codes that are not yet widely deployed ought 21 | to avoid allocating a specific number for the code until there is 22 | clear consensus that it will be registered; instead, early drafts can 23 | use a notation such as "4NN", or "3N0" .. "3N9", to indicate the 24 | class of the proposed status code(s) without consuming a number 25 | prematurely. 26 | 27 | We suggest that this convention is followed in the draft to avoid premature implementation. 28 | 29 | 1. The current draft states that a `2NN Contents of Related` response should use a `Location` header to indicate the location of the related resource for which the contents are being provided. We think it would be better to use the `Content-Location` header for this purpose. The `Location` header is used for redirection whereas the `Content-Location` header is used to indicate a location that would provide the entity body provided in the response, which is the correct semantics in this case. 30 | 31 | 1. We have discussed another use case, not mentioned by the current draft, that we think fits into the general pattern supported by `2NN Contents of Related`, namely the use of packages of content. When a web application is requested, we think it would be useful for a `2NN Contents of Related` response to be able to point to a packaged version of that web application. For example, a request for `http://app.example.com/` would respond with a `2NN Contents of Related` whose content was a package including the HTML, images, CSS, Javascript and data for the web application. We are currently working on defining this behaviour, and would like to ensure that there is consistency in approach across that definition and the definition of the behaviour of `2NN Contents of Related`. 32 | 33 | 1. 
The specification of the `2NN Contents of Related` status code should make clear how the HTTP headers in the response are interpreted. We recommend that they be interpreted as specified in the HTTPbis Internet Draft and related specifications, so that these do not have to be changed. For some headers this entails that they relate to the effective request URL. (This includes, for example, the default context IRI for the `Link` header as specified in [RFC 5988](http://tools.ietf.org/html/rfc5988#section-5.2).) Other headers, specifically those that provide [representation metadata](http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-3.1), obviously relate to the content. It would be good to make this explicit, possibly with an example, in particular with respect to caching and the interpretation of `Link` headers. 34 | 35 | 1. Given that the response's content is that of a related resource, it would be good to include a recommendation that the response headers include a `Link` header that indicates the relationship between the resource at the effective request URL (which is the default context IRI for the `Link` header) and the resource at the related URL (ie that given in the `Content-Location` header). For example, if the representation provided is one that *describes* the resource at the effective request URL, then the response might look like: 36 | 37 | HTTP/1.1 2NN Contents of Related 38 | Content-Location: http://example.com/meta 39 | Link: <http://example.com/meta>; rel=describedby 40 | 41 | 1. 
Regarding caching the provided content against the related URL, the [definition of `Content-Location` in HTTPbis](http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-3.1.4.2) says (our emphasis): 42 | 43 | > If Content-Location is included in a 2xx (Successful) response message and its field-value refers to a URI that differs from the effective request URI, then the origin server claims that the URI is an identifier for a different resource corresponding to the enclosed representation. **Such a claim can only be trusted if both identifiers share the same resource owner, which cannot be programmatically determined via HTTP.** 44 | 45 | Whether or not the `Content-Location` header is used, the same argument about the cachability of the content at the related URL applies here. There is no viable scope (eg same-origin) that in fact guarantees that the effective request URL has the same resource owner as the related URL. Therefore we do not think it is advisable to extend the caching semantics to include the caching of the response content to be associated with the related URL. As well as being a security risk, it would be inappropriate to apply the same cache-controlling HTTP headers to both cache entries, and there is no mechanism to separate those headers. 46 | 47 | We do not think that the lack of cachability of the response associated with the `Content-Location` URL undermines the value of a `2NN Contents of Related` status code. 48 | 49 | 1. The current draft for `2NN Contents of Related` says: 50 | 51 | > However, some conventional clients may not be specifically programmed to accept content accompanying a 2xx response other than 200. Therefore, initial use of status code 2NN will be restricted to cases where the server has sufficient confidence in the clients understanding the new code. 52 | 53 | We do not think it is a good idea for servers to guess about whether clients can understand a `2NN Contents of Related` response. 
We think it would be better for servers to only respond with a `2NN Contents of Related` if the client has explicitly indicated that it can accept a related response. 54 | 55 | We therefore suggest, alongside the definition of the `2NN Contents of Related` status code, also defining an `Accept-Related` header that enables clients to indicate which relations they are prepared to accept. For example, a client that understood how to properly handle `describedby` and `provenance` related content might use: 56 | 57 | Accept-Related: describedby, provenance 58 | 59 | The server could use this to determine whether it should respond with a `2NN Contents of Related` response or a `303 See Other` response, using the other `Accept-*` headers to perform other types of content negotiation (around formats and languages) as usual. We note that this should be specified with attention to the [HTTPbis specification's requirements for new HTTP headers](http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-8.3.1). 60 | 61 | 1. It would be useful to explicitly state, at some point in the draft, how the `2NN Contents of Related` response applies in the context of different HTTP methods. 62 | 63 | 1. We note that the references into HTTPbis are pointing to old versions of the Internet Drafts, and it would be a good idea to point to the most recent versions and update the text accordingly. 64 | 65 | ## Other Notes 66 | 67 | ### Should this be a new status code? 68 | 69 | We discussed whether it is appropriate to introduce a new status code for this behaviour. We are in agreement that there should be a high bar for the creation of new HTTP status codes, and that other mechanisms, such as headers, may be an appropriate alternative in some cases. 70 | 71 | We do think that `2NN Contents of Related` is semantically distinct from `200 OK`, where a `GET` on the URL is defined as: 72 | 73 | GET a representation of the target resource 74 | 75 | which is plainly not the case here. 
We also think that `2NN Contents of Related` satisfies the conditions described for new status codes in [the HTTPbis messaging semantics Internet Draft](http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-8.2), namely: 76 | 77 | * it is potentially applicable to any type of resource, as any type of resource can be related to other resources in any type of way 78 | * it falls under an existing category for HTTP responses 79 | * it allows for a payload 80 | 81 | However, our opinion was that if the `2NN Contents of Related` response could be interpreted as a `200 OK` without consequence, then there was no need for the separate status code. The statement in the draft that this was an acceptable fallback therefore caused us to question its requirement. 82 | 83 | This is the underlying reason for suggesting the definition of an `Accept-Related` header to flag to the server that a `2NN Contents of Related` response will be correctly understood by the client, avoiding the situation where such a response is incorrectly interpreted as a `200 OK`. 84 | 85 | ### Should this be a 3NN status code? 86 | 87 | We discussed whether a `3NN Contents of Related` response with a `Location` header providing the related URL would be more appropriate than a `2NN Contents of Related` response with a `Content-Location` header. This would cause fallback to `300 Multiple Choices` which would give a more appropriate fallback behaviour than `200 OK` (ie displaying the content of the response and potentially redirecting to the related URL). 88 | 89 | This option is moot if our recommendation to use an `Accept-Related` header is followed. We note it purely in case there are unforeseen consequences and objections to introducing that header. 90 | 91 | ### Shouldn't people just link to the related URL directly? 
92 | 93 | We note [Mark Nottingham's point](http://lists.w3.org/Archives/Public/www-tag/2014Mar/0021.html), specifically in the context of a paging scenario, that websites may/should link to the first page of a listing rather than to a URL for "the entire list" that then needs to be distinguished from "the first page". 94 | 95 | We note that in practice, GitHub (as used in the example given in the email referenced above) links to "the entire list" of issues, eg: 96 | 97 | https://github.com/w3c/web-platform-tests/issues 98 | 99 | which in fact provides the content for only a filtered version of the first page of issues, which is also available at: 100 | 101 | https://github.com/w3c/web-platform-tests/issues?page=1&state=open 102 | 103 | This appears to be common practice on the web for the pagination scenario. It is usual in Atom, for example, for the URL for a "logical feed" (in [RFC 5005 terminology](https://tools.ietf.org/html/rfc5005#section-1.2)) to be the same as the URL for the "current" feed document. 104 | 105 | So we do not agree that people do or should point to the first page of a longer listing within their web pages. 106 | 107 | ### Can't you just use a `200 OK` response with a `Content-Location` header? 108 | 109 | In the [same email](http://lists.w3.org/Archives/Public/www-tag/2014Mar/0021.html), Mark Nottingham also discusses simply providing a `200 OK` response with a `Content-Location` header. 
We agree with Mark that in a paging scenario we would usually expect a request like: 110 | 111 | GET /w3c/web-platform-tests/issues HTTP/1.1 112 | Host: github.com 113 | 114 | to have a response like: 115 | 116 | HTTP/1.1 200 OK 117 | Content-Location: /w3c/web-platform-tests/issues?page=1&state=open 118 | Link: </w3c/web-platform-tests/issues?page=1&state=open>; rel=first 119 | 120 | (We note that `rel=first` does not have the correct semantics for "the first page of this collection" as it is [defined in RFC 5005](https://tools.ietf.org/html/rfc5005#section-3) as "A URI that refers to the furthest preceding document in a series of documents.", but we are not aware of a more appropriate link relation between a collection and the first page of a collection.) 121 | 122 | We think this is acceptable practice because the `200 OK` response is providing "a representation of the target resource". It is common for representations of a target resource to be partial representations — to not include all the properties of the resource that could possibly be included — so this pattern seems both semantically acceptable and common practice. 123 | 124 | However, this line of argument does not diminish the value of `2NN Contents of Related` in other cases, where the provided content is not a partial representation of the requested resource. In particular, the same argument does not apply when the relationship is `rel=describedby` or our soon-to-be-proposed `rel=package`, where (as discussed above) the `200 OK` semantic is not applicable. 125 | 126 | ### Can't you just wait for HTTP/2? 127 | 128 | A final point that Mark makes in [his email](http://lists.w3.org/Archives/Public/www-tag/2014Mar/0021.html) is that if the sole reason for `2NN Contents of Related` is to reduce the number of round trips for performance reasons, then HTTP/2 Server Push satisfies that requirement. 129 | 130 | We agree that HTTP/2 Server Push addresses the same issues. 
However, we are not convinced that HTTP/2 will see widespread adoption in the long tail of websites. HTTP/2 Server Push will remain outside the reach of many publishers for a long time. 131 | 132 | We accept that the ability to change an HTTP status code is similarly outside the reach of many publishers, but we think it is more attainable, in both the short and medium term, than using HTTP/2. In particular we note that there exists a community of developers of Linked Data Platform servers and clients who could implement and take advantage of this status code quickly. 133 | 134 | As a general point, we would encourage developers of web-application-level protocols that operate over HTTP (such as the Linked Data Platform) to provide mechanisms that enable the protocol to operate in environments where publishers cannot change HTTP status codes or HTTP headers (such as publication through GitHub Pages or on a locked-down Apache server). -------------------------------------------------------------------------------- /2013/07/webaudio_examples.js: -------------------------------------------------------------------------------- 1 | /*********************************** 2 | * 1.2: Modular Routing 3 | * playing a single sound 4 | **/ 5 | var context = new AudioContext(); 6 | 7 | function playSound() { 8 | var source = context.createBufferSource(); 9 | source.buffer = dogBarkingBuffer; 10 | source.connect(context.destination); 11 | source.start(0); 12 | } 13 | 14 | 15 | /*********************************** 16 | * 1.2: Modular Routing 17 | * three sources and a convolution reverb send with a dynamics compressor at 18 | * the final output stage 19 | **/ 20 | var context = 0; 21 | var compressor = 0; 22 | var reverb = 0; 23 | 24 | var source1 = 0; 25 | var source2 = 0; 26 | var source3 = 0; 27 | 28 | var lowpassFilter = 0; 29 | var waveShaper = 0; 30 | var panner = 0; 31 | 32 | var dry1 = 0; 33 | var dry2 = 0; 34 | var dry3 = 0; 35 | 36 | var wet1 = 0; 37 | var wet2 = 0; 38 | 
var wet3 = 0; 39 | 40 | var masterDry = 0; 41 | var masterWet = 0; 42 | 43 | function setupRoutingGraph () { 44 | context = new AudioContext(); 45 | 46 | // Create the effects nodes. 47 | lowpassFilter = context.createBiquadFilter(); 48 | waveShaper = context.createWaveShaper(); 49 | panner = context.createPanner(); 50 | compressor = context.createDynamicsCompressor(); 51 | reverb = context.createConvolver(); 52 | 53 | // Create master wet and dry. 54 | masterDry = context.createGain(); 55 | masterWet = context.createGain(); 56 | 57 | // Connect final compressor to final destination. 58 | compressor.connect(context.destination); 59 | 60 | // Connect master dry and wet to compressor. 61 | masterDry.connect(compressor); 62 | masterWet.connect(compressor); 63 | 64 | // Connect reverb to master wet. 65 | reverb.connect(masterWet); 66 | 67 | // Create a few sources. 68 | source1 = context.createBufferSource(); 69 | source2 = context.createBufferSource(); 70 | source3 = context.createOscillator(); 71 | 72 | source1.buffer = manTalkingBuffer; 73 | source2.buffer = footstepsBuffer; 74 | source3.frequency.value = 440; 75 | 76 | // Connect source1 77 | dry1 = context.createGain(); 78 | wet1 = context.createGain(); 79 | source1.connect(lowpassFilter); 80 | lowpassFilter.connect(dry1); 81 | lowpassFilter.connect(wet1); 82 | dry1.connect(masterDry); 83 | wet1.connect(reverb); 84 | 85 | // Connect source2 86 | dry2 = context.createGain(); 87 | wet2 = context.createGain(); 88 | source2.connect(waveShaper); 89 | waveShaper.connect(dry2); 90 | waveShaper.connect(wet2); 91 | dry2.connect(masterDry); 92 | wet2.connect(reverb); 93 | 94 | // Connect source3 95 | dry3 = context.createGain(); 96 | wet3 = context.createGain(); 97 | source3.connect(panner); 98 | panner.connect(dry3); 99 | panner.connect(wet3); 100 | dry3.connect(masterDry); 101 | wet3.connect(reverb); 102 | 103 | // Start the sources now. 
104 | source1.start(0); 105 | source2.start(0); 106 | source3.start(0); 107 | } 108 | 109 | 110 | /*********************************** 111 | * 4.5.4 AudioParam Automation Example 112 | **/ 113 | var t0 = 0; 114 | var t1 = 0.1; 115 | var t2 = 0.2; 116 | var t3 = 0.3; 117 | var t4 = 0.4; 118 | var t5 = 0.6; 119 | var t6 = 0.7; 120 | var t7 = 1.0; 121 | 122 | var curveLength = 44100; 123 | var curve = new Float32Array(curveLength); 124 | for (var i = 0; i < curveLength; ++i) 125 | curve[i] = Math.sin(Math.PI * i / curveLength); 126 | 127 | param.setValueAtTime(0.2, t0); 128 | param.setValueAtTime(0.3, t1); 129 | param.setValueAtTime(0.4, t2); 130 | param.linearRampToValueAtTime(1, t3); 131 | param.linearRampToValueAtTime(0.15, t4); 132 | param.exponentialRampToValueAtTime(0.75, t5); 133 | param.exponentialRampToValueAtTime(0.05, t6); 134 | param.setValueCurveAtTime(curve, t6, t7 - t6); 135 | 136 | 137 | /*********************************** 138 | * 4.11 The MediaElementAudioSourceNode Interface 139 | **/ 140 | var mediaElement = document.getElementById('mediaElementID'); 141 | var sourceNode = context.createMediaElementSource(mediaElement); 142 | sourceNode.connect(filterNode); 143 | 144 | 145 | /*********************************** 146 | * 4.16.1 Attributes 147 | **/ 148 | float calculateNormalizationScale(buffer) 149 | { 150 | const float GainCalibration = 0.00125; 151 | const float GainCalibrationSampleRate = 44100; 152 | const float MinPower = 0.000125; 153 | 154 | // Normalize by RMS power. 
155 | size_t numberOfChannels = buffer->numberOfChannels(); 156 | size_t length = buffer->length(); 157 | 158 | float power = 0; 159 | 160 | for (size_t i = 0; i < numberOfChannels; ++i) { 161 | float* sourceP = buffer->channel(i)->data(); 162 | float channelPower = 0; 163 | 164 | int n = length; 165 | while (n--) { 166 | float sample = *sourceP++; 167 | channelPower += sample * sample; 168 | } 169 | 170 | power += channelPower; 171 | } 172 | 173 | power = sqrt(power / (numberOfChannels * length)); 174 | 175 | // Protect against accidental overload. 176 | if (isinf(power) || isnan(power) || power < MinPower) 177 | power = MinPower; 178 | 179 | float scale = 1 / power; 180 | 181 | // Calibrate to make perceived volume same as unprocessed. 182 | scale *= GainCalibration; 183 | 184 | // Scale depends on sample-rate. 185 | if (buffer->sampleRate()) 186 | scale *= GainCalibrationSampleRate / buffer->sampleRate(); 187 | 188 | // True-stereo compensation. 189 | if (buffer->numberOfChannels() == 4) 190 | scale *= 0.5; 191 | 192 | return scale; 193 | } 194 | 195 | 196 | /*********************************** 197 | * 6. 
Mixer Gain Structure 198 | * Example: Mixer with Send Busses 199 | **/ 200 | var context = 0; 201 | var compressor = 0; 202 | var reverb = 0; 203 | var delay = 0; 204 | var s1 = 0; 205 | var s2 = 0; 206 | 207 | var source1 = 0; 208 | var source2 = 0; 209 | var g1_1 = 0; 210 | var g2_1 = 0; 211 | var g3_1 = 0; 212 | var g1_2 = 0; 213 | var g2_2 = 0; 214 | var g3_2 = 0; 215 | 216 | // Setup routing graph 217 | function setupRoutingGraph() { 218 | context = new AudioContext(); 219 | 220 | compressor = context.createDynamicsCompressor(); 221 | 222 | // Send1 effect 223 | reverb = context.createConvolver(); 224 | // Convolver impulse response may be set here or later 225 | 226 | // Send2 effect 227 | delay = context.createDelay(); 228 | 229 | // Connect final compressor to final destination 230 | compressor.connect(context.destination); 231 | 232 | // Connect sends 1 & 2 through effects to main mixer 233 | s1 = context.createGain(); 234 | reverb.connect(s1); 235 | s1.connect(compressor); 236 | 237 | s2 = context.createGain(); 238 | delay.connect(s2); 239 | s2.connect(compressor); 240 | 241 | // Create a couple of sources 242 | source1 = context.createBufferSource(); 243 | source2 = context.createBufferSource(); 244 | source1.buffer = manTalkingBuffer; 245 | source2.buffer = footstepsBuffer; 246 | 247 | // Connect source1 248 | g1_1 = context.createGain(); 249 | g2_1 = context.createGain(); 250 | g3_1 = context.createGain(); 251 | source1.connect(g1_1); 252 | source1.connect(g2_1); 253 | source1.connect(g3_1); 254 | g1_1.connect(compressor); 255 | g2_1.connect(reverb); 256 | g3_1.connect(delay); 257 | 258 | // Connect source2 259 | g1_2 = context.createGain(); 260 | g2_2 = context.createGain(); 261 | g3_2 = context.createGain(); 262 | source2.connect(g1_2); 263 | source2.connect(g2_2); 264 | source2.connect(g3_2); 265 | g1_2.connect(compressor); 266 | g2_2.connect(reverb); 267 | g3_2.connect(delay); 268 | 269 | // We now have explicit control over all the volumes g1_1, 
g2_1, ..., s1, s2 270 | g2_1.gain.value = 0.2; // For example, set source1 reverb gain 271 | 272 | // Because g2_1.gain is an "AudioParam", 273 | // an automation curve could also be attached to it. 274 | // A "mixing board" UI could be created in canvas or WebGL controlling these gains. 275 | } 276 | 277 | 278 | /*********************************** 279 | * 7. Dynamic Lifetime 280 | **/ 281 | 282 | var context = 0; 283 | var compressor = 0; 284 | var gainNode1 = 0; 285 | var streamingAudioSource = 0; 286 | 287 | // Initial setup of the "long-lived" part of the routing graph 288 | function setupAudioContext() { 289 | context = new AudioContext(); 290 | 291 | compressor = context.createDynamicsCompressor(); 292 | gainNode1 = context.createGain(); 293 | 294 | // Create a streaming audio source. 295 | var audioElement = document.getElementById('audioTagID'); 296 | streamingAudioSource = context.createMediaElementSource(audioElement); 297 | streamingAudioSource.connect(gainNode1); 298 | 299 | gainNode1.connect(compressor); 300 | compressor.connect(context.destination); 301 | } 302 | 303 | // Later in response to some user action (typically mouse or key event) 304 | // a one-shot sound can be played. 305 | function playSound() { 306 | var oneShotSound = context.createBufferSource(); 307 | oneShotSound.buffer = dogBarkingBuffer; 308 | 309 | // Create a filter, panner, and gain node. 310 | var lowpass = context.createBiquadFilter(); 311 | var panner = context.createPanner(); 312 | var gainNode2 = context.createGain(); 313 | 314 | // Make connections 315 | oneShotSound.connect(lowpass); 316 | lowpass.connect(panner); 317 | panner.connect(gainNode2); 318 | gainNode2.connect(compressor); 319 | 320 | // Play 0.75 seconds from now (to play immediately pass in 0) 321 | oneShotSound.start(context.currentTime + 0.75); 322 | } 323 | 324 | 325 | /*********************************** 326 | * 11. Spatialization / Panning 327 | **/ 328 | // Calculate the source-listener vector. 
329 | vec3 sourceListener = source.position - listener.position; 330 | 331 | if (sourceListener.isZero()) { 332 | // Handle degenerate case if source and listener are at the same point. 333 | azimuth = 0; 334 | elevation = 0; 335 | return; 336 | } 337 | 338 | sourceListener.normalize(); 339 | 340 | // Align axes. 341 | vec3 listenerFront = listener.orientation; 342 | vec3 listenerUp = listener.up; 343 | vec3 listenerRight = listenerFront.cross(listenerUp); 344 | listenerRight.normalize(); 345 | 346 | vec3 listenerFrontNorm = listenerFront; 347 | listenerFrontNorm.normalize(); 348 | 349 | vec3 up = listenerRight.cross(listenerFrontNorm); 350 | 351 | float upProjection = sourceListener.dot(up); 352 | 353 | vec3 projectedSource = sourceListener - upProjection * up; 354 | projectedSource.normalize(); 355 | 356 | azimuth = 180 * acos(projectedSource.dot(listenerRight)) / PI; 357 | 358 | // Source in front or behind the listener. 359 | double frontBack = projectedSource.dot(listenerFrontNorm); 360 | if (frontBack < 0) 361 | azimuth = 360 - azimuth; 362 | 363 | // Make azimuth relative to "front" and not "right" listener vector. 364 | if ((azimuth >= 0) && (azimuth <= 270)) 365 | azimuth = 90 - azimuth; 366 | else 367 | azimuth = 450 - azimuth; 368 | 369 | elevation = 90 - 180 * acos(sourceListener.dot(up)) / PI; 370 | 371 | if (elevation > 90) 372 | elevation = 180 - elevation; 373 | else if (elevation < -90) 374 | elevation = -180 - elevation; 375 | 376 | 377 | /*********************************** 378 | * 9. Channel up-mixing and down-mixing 379 | * 9.2 Channel Rules Examples 380 | **/ 381 | 382 | // Set gain node to explicit 2-channels (stereo). 383 | gain.channelCount = 2; 384 | gain.channelCountMode = "explicit"; 385 | gain.channelInterpretation = "speakers"; 386 | 387 | // Set "hardware output" to 4-channels for DJ-app with two stereo output busses. 
388 | context.destination.channelCount = 4; 389 | context.destination.channelCountMode = "explicit"; 390 | context.destination.channelInterpretation = "discrete"; 391 | 392 | // Set "hardware output" to 8-channels for custom multi-channel speaker array 393 | // with custom matrix mixing. 394 | context.destination.channelCount = 8; 395 | context.destination.channelCountMode = "explicit"; 396 | context.destination.channelInterpretation = "discrete"; 397 | 398 | // Set "hardware output" to 5.1 to play an HTMLAudioElement. 399 | context.destination.channelCount = 6; 400 | context.destination.channelCountMode = "explicit"; 401 | context.destination.channelInterpretation = "speakers"; 402 | 403 | // Explicitly down-mix to mono. 404 | gain.channelCount = 1; 405 | gain.channelCountMode = "explicit"; 406 | gain.channelInterpretation = "speakers"; 407 | 408 | 409 | /*********************************** 410 | * 11. Sound Cones 411 | **/ 412 | if (source.orientation.isZero() || ((source.coneInnerAngle == 360) && (source.coneOuterAngle == 360))) 413 | return 1; // no cone specified - unity gain 414 | 415 | // Normalized source-listener vector 416 | vec3 sourceToListener = listener.position - source.position; 417 | sourceToListener.normalize(); 418 | 419 | vec3 normalizedSourceOrientation = source.orientation; 420 | normalizedSourceOrientation.normalize(); 421 | 422 | // Angle between the source orientation vector and the source-listener vector 423 | double dotProduct = sourceToListener.dot(normalizedSourceOrientation); 424 | double angle = 180 * acos(dotProduct) / PI; 425 | double absAngle = fabs(angle); 426 | 427 | // Divide by 2 here since API is entire angle (not half-angle) 428 | double absInnerAngle = fabs(source.coneInnerAngle) / 2; 429 | double absOuterAngle = fabs(source.coneOuterAngle) / 2; 430 | double gain = 1; 431 | 432 | if (absAngle <= absInnerAngle) 433 | // No attenuation 434 | gain = 1; 435 | else if (absAngle >= absOuterAngle) 436 | // Max attenuation 437 | 
gain = source.coneOuterGain; 438 | else { 439 | // Between inner and outer cones 440 | // inner -> outer, x goes from 0 -> 1 441 | double x = (absAngle - absInnerAngle) / (absOuterAngle - absInnerAngle); 442 | gain = (1 - x) + source.coneOuterGain * x; 443 | } 444 | 445 | return gain; 446 | 447 | 448 | /*********************************** 449 | * 11. Doppler Shift 450 | **/ 451 | double dopplerShift = 1; // Initialize to default value 452 | double dopplerFactor = listener.dopplerFactor; 453 | 454 | if (dopplerFactor > 0) { 455 | double speedOfSound = listener.speedOfSound; 456 | 457 | // Don't bother if both source and listener have no velocity. 458 | if (!source.velocity.isZero() || !listener.velocity.isZero()) { 459 | // Calculate the source to listener vector. 460 | vec3 sourceToListener = source.position - listener.position; 461 | 462 | double sourceListenerMagnitude = sourceToListener.length(); 463 | 464 | double listenerProjection = sourceToListener.dot(listener.velocity) / sourceListenerMagnitude; 465 | double sourceProjection = sourceToListener.dot(source.velocity) / sourceListenerMagnitude; 466 | 467 | listenerProjection = -listenerProjection; 468 | sourceProjection = -sourceProjection; 469 | 470 | double scaledSpeedOfSound = speedOfSound / dopplerFactor; 471 | listenerProjection = min(listenerProjection, scaledSpeedOfSound); 472 | sourceProjection = min(sourceProjection, scaledSpeedOfSound); 473 | 474 | dopplerShift = ((speedOfSound - dopplerFactor * listenerProjection) / (speedOfSound - dopplerFactor * sourceProjection)); 475 | fixNANs(dopplerShift); // Avoid illegal values 476 | 477 | // Limit the pitch shifting to 4 octaves up and 3 octaves down. 
478 | dopplerShift = min(dopplerShift, 16); 479 | dopplerShift = max(dopplerShift, 0.125); 480 | } 481 | } 482 | 483 | -------------------------------------------------------------------------------- /2013/07/WebAudio.md: -------------------------------------------------------------------------------- 1 | # Web Audio Draft Feedback 2 | 3 | [Draft under discussion.](https://dvcs.w3.org/hg/audio/raw-file/28a38310adae/webaudio/specification.html) 4 | 5 | We extracted the IDL and example code from the draft in question using a snippet run at the developer tool command line: 6 | 7 | ```js 8 | Array.prototype.slice.call( 9 | document.querySelectorAll(".idl,.es-code") 10 | ).map(function(n) { return n.innerText; }).join("\n\n\n"); 11 | ``` 12 | 13 | This is visible in [`webaudio.idl`](webaudio.idl) and [`webaudio_examples.js`](webaudio_examples.js) in the current directory. 14 | 15 | ## General Discussion 16 | 17 | The Web Audio API presents an architecture for low-latency audio processing and playback. It does this by creating _sources_ and _destinations_. Processing nodes may be placed between these sources and destinations to modify the audio. Many of the most commonly used types of processing elements in audio engineering are required by the spec. This rich library of audio processing primitives combined with an architecture that ensures they can be run off of the main thread ensures the architecture presented by the API can map efficiently to many forms of hardware and, in general, that it will pose few hazards to portions of the web platform which are in contention for main-thread (JS, rendering) resources. 18 | 19 | In addition to the library of "canned" node types, Web Audio specifies a `ScriptProcessorNode` type that enables processing of samples in script. 20 | 21 | We note that the detailed attention given to low-latency processing and performance in a contended environment is emblematic of skilled design sense. 
We hold great hope that this API will bring the best of what modern audio hardware has to offer to the web platform and, if we may editorialize a bit, could not be more elated to see it progress. The following feedback is not meant to detract from the positive accomplishments that this design has achieved. It is offered in a spirit of collaboration and from a perspective of experience with the particulars of web platform APIs. 22 | 23 | ## API Hygiene 24 | 25 | ### ISSUE: Duplicate Normative API 26 | 27 | *RESOLVED* by [changes that moved the duplicate names](https://dvcs.w3.org/hg/audio/rev/51bdf2d4e69c) to an [_informative_ section of the spec](https://dvcs.w3.org/hg/audio/raw-file/28a38310adae/webaudio/specification.html#OldNames)! 28 | 29 | Only listed here for completeness as it was raised in our [F2F meeting](http://www.w3.org/2001/tag/2013/05/30-minutes#item05) with Olivier Thereaux. Massive thanks to the authors and editors for re-considering their previous stance and bravely taking the opportunity to attempt to sunset cruft. Nobody knows if it'll work in practice, but we want to be on record as vocally supporting your efforts here, and should it not work, in taking the appropriate share of the blame. 
30 | 31 | ### ISSUE: Constructibility & De-sugaring of static methods 32 | 33 | The current API defines 26 interfaces: 34 | 35 | ``` 36 | $ cat webaudio.idl | grep -i interface | cut -d " " -f 2 37 | AudioContext 38 | OfflineAudioContext 39 | OfflineAudioCompletionEvent 40 | AudioNode 41 | AudioDestinationNode 42 | AudioParam 43 | GainNode 44 | DelayNode 45 | AudioBuffer 46 | AudioBufferSourceNode 47 | MediaElementAudioSourceNode 48 | ScriptProcessorNode 49 | AudioProcessingEvent 50 | PannerNode 51 | AudioListener 52 | ConvolverNode 53 | AnalyserNode 54 | ChannelSplitterNode 55 | ChannelMergerNode 56 | DynamicsCompressorNode 57 | BiquadFilterNode 58 | WaveShaperNode 59 | OscillatorNode 60 | PeriodicWave 61 | MediaStreamAudioSourceNode 62 | MediaStreamAudioDestinationNode 63 | ``` 64 | 65 | Of these, only two are marked constructible: 66 | 67 | ``` 68 | $ cat webaudio.idl | grep -A 1 -i constructor | grep interface | cut -d " " -f 2 69 | AudioContext 70 | OfflineAudioContext 71 | ``` 72 | 73 | Most of the types represented by the non-constructable interfaces _are_ visible in the API through normal use. For instance, to get a `PannerNode` instance a developer currently uses: 74 | 75 | ``` 76 | var panner = context.createPanner(); 77 | ``` 78 | 79 | Where `context` is an instance of `AudioContext` (or one of its subclasses). Prickly questions arise from this arrangement: 80 | 81 | 1. Assuming that the static methods on the `context` are desirable shortcuts for wiring up the context of a `Node` instance to the `context` against which it runs, _how_ does that context get set in a way that would allow pure JS objects to describe it? 82 | 2. By what privileged mechanism does the system create instances of these types if they do not have constructors? 83 | 3. Are these types in any way subclassable? If not, why not? 84 | 4. 
If the intent is to mirror other DOM APIs, it's curious to have `create*()` methods but no factory (e.g.: `createElement("tagname")`) 85 | 86 | Adding constructors and context-setting methods (or constructor params) for most of the interfaces that lack them would answer #'s 1 and 2 and largely obviate 4. E.g.: 87 | 88 | ```js 89 | // A possible de-sugaring for createPanner() when ctors are defined: 90 | AudioContext.prototype.createPanner = function() { 91 | var p = new PannerNode(); 92 | p.context = this; 93 | return p; 94 | }; 95 | 96 | // An alternative that provides the context via the PannerNode ctor: 97 | AudioContext.prototype.createPanner = function() { 98 | return new PannerNode({ context: this }); 99 | }; 100 | 101 | // An alternative that uses a positional context param: 102 | AudioContext.prototype.createPanner = function(attributes) { 103 | return new PannerNode(this, attributes); 104 | }; 105 | ``` 106 | 107 | Either constructor style allows these `AudioNode` types to conceptually be modeled more cleanly as JS objects which could self-host. 108 | 109 | Of course, this requires answering the follow-on questions "what happens if the context is invalid, changes, or is never set?", but those are reasonable to ask and their answers don't need to be complicated (certainly not for v1). 110 | 111 | An alternative design might locate the constructors on the context directly, but this seems to create as many problems as it solves. 112 | 113 | Using the constructor style from the last variant, we can re-work one of the examples from Section 7: 114 | 115 | ```js 116 | ... 117 | var context = new AudioContext(); 118 | ... 119 | 120 | function playSound() { 121 | var oneShotSound = context.createBufferSource(); 122 | oneShotSound.buffer = dogBarkingBuffer; 123 | 124 | // Create a filter, panner, and gain node. 
125 | 
126 |   var lowpass = context.createBiquadFilter();
127 | 
128 |   var panner = context.createPanner();
129 |   panner.panningModel = "equalpower";
130 |   panner.distanceModel = "linear";
131 | 
132 |   var gainNode2 = context.createGain();
133 | 
134 | 
135 |   // Make connections
136 |   oneShotSound.connect(lowpass);
137 |   lowpass.connect(panner);
138 |   panner.connect(gainNode2);
139 |   gainNode2.connect(compressor);
140 | 
141 |   oneShotSound.start(context.currentTime + 0.75);
142 | }
143 | ```
144 | 
145 | to:
146 | 
147 | ```js
148 | ...
149 | var context = new AudioContext();
150 | ...
151 | 
152 | function playSound() {
153 |   var oneShotSound = new AudioBufferSourceNode(context, { buffer: dogBarkingBuffer });
154 | 
155 |   // Create a filter, panner, and gain node.
156 |   var lowpass = new BiquadFilterNode(context);
157 |   var panner = new PannerNode(context, {
158 |     panningModel: "equalpower",
159 |     distanceModel: "linear"
160 |   });
161 |   var gainNode2 = new GainNode(context);
162 | 
163 |   // Make connections
164 |   oneShotSound.connect(lowpass);
165 |   lowpass.connect(panner);
166 |   panner.connect(gainNode2);
167 |   gainNode2.connect(compressor);
168 | 
169 |   oneShotSound.start(context.currentTime + 0.75);
170 | }
171 | ```
172 | 
173 | ### ISSUE: Subclassing
174 | 
175 | Related to the lack of constructors, but worth calling out independently: it's not currently possible to meaningfully compose node types, either through mixins or through subclassing. In JS, this sort of "is a" relationship is usually set up through the subclassing pattern:
176 | 
177 | ```js
178 | var SubNodeType = function() {
179 |   SuperNodeType.call(this);
180 | };
181 | SubNodeType.prototype = Object.create(SuperNodeType.prototype);
182 | SubNodeType.prototype.constructor = SubNodeType;
183 | // ...
184 | ```
185 | 
186 | There doesn't seem to be any way in the current design to enable this sort of composition. This is deeply unfortunate.
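To make the gap concrete, here is a sketch of what constructibility would buy. The `PannerNode` below is a plain-JS stand-in (an assumption -- the real interface currently exposes no constructor), so this only illustrates the kind of composition the current design rules out:

```js
// Stand-in for a constructible node type (hypothetical; the current spec
// exposes no such constructor):
function PannerNode(context) {
  this.context = context;
}
PannerNode.prototype.connect = function(destination) {
  this.destination = destination;
  return destination;
};

// A subclass layering behavior over the base type -- exactly the
// composition that is impossible against the spec'd interfaces:
function LoggingPannerNode(context) {
  PannerNode.call(this, context);
  this.log = [];
}
LoggingPannerNode.prototype = Object.create(PannerNode.prototype);
LoggingPannerNode.prototype.constructor = LoggingPannerNode;
LoggingPannerNode.prototype.connect = function(destination) {
  this.log.push("connect");
  return PannerNode.prototype.connect.call(this, destination);
};

var node = new LoggingPannerNode({ sampleRate: 44100 });
node.connect({ id: 1 });
// node instanceof PannerNode -> true; node.log records the call
```

Nothing here is exotic; the pattern works for every other constructible platform object, which is what makes its absence from Web Audio conspicuous.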
187 | 
188 | ### ISSUE: Callbacks without Promises
189 | 
190 | A few of the APIs specified use a callback style that can be made Promise-compatible in a straightforward way. For instance, the current definition of `AudioContext::decodeAudioData` is:
191 | 
192 | ```
193 | void decodeAudioData(ArrayBuffer audioData,
194 |                      DecodeSuccessCallback successCallback,
195 |                      optional DecodeErrorCallback errorCallback);
196 | 
197 | ```
198 | 
199 | This can be made Promise-compatible simply by changing the return type:
200 | 
201 | ```
202 | Promise decodeAudioData(ArrayBuffer audioData,
203 |                         optional DecodeSuccessCallback successCallback,
204 |                         optional DecodeErrorCallback errorCallback);
205 | 
206 | ```
207 | 
208 | This will allow users of these APIs to rely on the same uniform promise interface across this API and many others, without changes to the callback style currently employed. As an implementation detail, the existing success and error callbacks can be recast as though an internal method generated a promise and added the callbacks as listeners:
209 | 
210 | ```
211 | AudioContext.prototype.decodeAudioData = function(data, success, error) {
212 |   var p = __internalDecode(data);
213 |   if (success) {
214 |     p.then(success, error);
215 |   }
216 |   return p;
217 | };
218 | ```
219 | 
220 | Note the `successCallback` parameter is now optional.
221 | 
222 | What's the effect of this? 
Code can now be written like:
223 | 
224 | ```js
225 | // Wait for 3 samples to decode and then play them simultaneously:
226 | Promise.all([
227 |   ctx.decodeAudioData(buffer1),
228 |   ctx.decodeAudioData(buffer2),
229 |   ctx.decodeAudioData(buffer3)
230 | ]).then(function(samples) {
231 |   samples.forEach(function(buffer) {
232 |     (new AudioBufferSourceNode(ctx, { buffer: buffer })).start();
233 |   });
234 | });
235 | ```
236 | 
237 | `OfflineAudioContext` can be similarly improved by vending a `Promise` from `startRendering()` that resolves when rendering completes (i.e., when `oncomplete` fires today):
238 | 
239 | ```js
240 | offlineContext.startRendering().then(function(renderedBuffer) {
241 |   // ...
242 | });
243 | ```
244 | 
245 | This is both terser and more composable than the current system.
246 | 
247 | ### ISSUE: `ScriptProcessorNode` is Unfit For Purpose (Section 15)
248 | 
249 | We harbor deep concerns about `ScriptProcessorNode` as currently defined. Notably:
250 | 
251 | * As currently defined, these nodes _must_ run on the UI thread -- giving rise, it seems, to many of the admonitions in Section 15.3.* not to use them. This is deeply at odds with the rest of the architecture, which seeks to keep processing _off_ the main thread to ensure low latency.
252 | * Since these functions run on the main thread, it's impossible to set execution constraints that might help reduce jitter; e.g., a different GC strategy or tight constraints on execution time slicing.
253 | 
254 | This can be repaired. 
Here's a stab at it:
255 | 
256 | ```
257 | [Constructor(DOMString scriptURL, optional unsigned long bufferSize)]
258 | interface AudioWorker : Worker {
259 | 
260 | };
261 | 
262 | interface AudioProcessingEvent : Event {
263 | 
264 |   readonly attribute double playbackTime;
265 | 
266 |   transferable attribute AudioBuffer buffer;
267 | 
268 | };
269 | 
270 | interface AudioWorkerGlobalScope : DedicatedWorkerGlobalScope {
271 | 
272 |   attribute EventHandler onaudioprocess;
273 | 
274 | };
275 | 
276 | interface ScriptProcessorNode : AudioNode {
277 | 
278 |   attribute EventHandler onaudioprocess;
279 | 
280 |   readonly attribute long bufferSize;
281 | 
282 | };
283 | 
284 | partial interface AudioContext {
285 | 
286 |   ScriptProcessorNode createScriptProcessor(
287 |       DOMString scriptURL,
288 |       optional unsigned long bufferSize = 0,
289 |       optional unsigned long numberOfInputChannels = 2,
290 |       optional unsigned long numberOfOutputChannels = 2);
291 | 
292 | };
293 | ```
294 | 
295 | The idea here is that to ensure low-latency processing, no copying of the resulting buffers is done (using the Worker's [Transferable](https://plus.sandbox.google.com/114636678211810483111/posts/isYunS2B6os) mechanism).
296 | 
297 | Scripts are loaded from external URLs and can control their inbound/outbound buffering via constructor arguments.
298 | 
299 | Under this arrangement it becomes possible for the system to change the constraints these scripts run under: GC can be turned off, run time can be tightly bounded, and the scripts can even be run on the (higher-priority) audio-processing thread.
300 | 
301 | All of this is necessary to ensure that scripts are not second-class citizens in the architecture: attractive nuisances that can't actually be used in the real world due to their predictable downsides.
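For concreteness, a worker script under this sketch might look like the following. The event shape (`buffer`, `numberOfChannels`, `getChannelData()`) mirrors today's `AudioProcessingEvent`; the wiring shown in comments is an assumption drawn from the proposed IDL above, not settled design:

```js
// processor.js -- hypothetical script for the proposed AudioWorkerGlobalScope.
// Because the buffer arrives via ownership transfer, it can be mutated
// in place with no copying.
function applyGain(event, gain) {
  for (var channel = 0; channel < event.buffer.numberOfChannels; channel++) {
    var samples = event.buffer.getChannelData(channel);
    for (var i = 0; i < samples.length; i++) {
      samples[i] *= gain;
    }
  }
}

// In the worker, this would be hooked up as:
//   onaudioprocess = function(e) { applyGain(e, 0.5); };
//
// And on the main thread (per the partial interface above):
//   var node = context.createScriptProcessor("processor.js", 512);
```

Because the processing function is plain script over a buffer it owns, the host is free to schedule it on the audio thread and bound its execution, which is the whole point of the redesign.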
302 | 
303 | ### ISSUE: Explaining ("De-sugaring") the `*Node` Types
304 | 
305 | On the back of a repaired `ScriptProcessorNode` definition, the spec should contain tight descriptions of the built-in library of `AudioNode` subclasses, preferably in the form of scripts that could be executed in a `ScriptProcessorNode`.
306 | 
307 | *Obviously* we do not recommend that implementations lean on this sort of self-hosting for production systems. It is, however, clear that without such a detailed description of the expected algorithms, compatibility between implementations cannot be guaranteed, nor can conformance with the spec be meaningfully measured. This de-sugaring-as-spec exercise would aid testing of the system and help users who later want to know _exactly_ what the system is expected to be doing for them.
308 | 
309 | We imagine an appendix to the current spec that includes such de-sugarings, with test suites built on them.
310 | 
311 | ### ISSUE: Visible [Data Races](http://blog.regehr.org/archives/490)
312 | 
313 | This topic has been debated on the [`public-audio` mailing list](http://lists.w3.org/Archives/Public/public-audio/2013JulSep/0162.html) and in [blogs](http://robert.ocallahan.org/2013/07/avoiding-copies-in-web-apis.html).
314 | 
315 | It's reasonable to suggest that de-sugaring into a style that transfers ownership of buffers to nodes during processing is sufficient to close off much of the problem, but whatever the solution, we wish to be on the record as saying clearly that it is impermissible for Web Audio to unilaterally add visible data races to the JavaScript execution model.
316 | 
317 | The Web Audio group is best suited to solve this issue, but we insist that no API be allowed to create data races visible from the perspective of linearly-executing JavaScript code.
318 | 
319 | ### ISSUE: Lack of Serialization Primitives & Introspection
320 | 
321 | The IDL doesn't define a serialization for the node graph of an `AudioContext`, and there's no other easy way of cloning or walking an `AudioNode` graph. This lack of a reflection API makes it unlikely that systems will be able to share graphs easily, which seems like a real drag on developer productivity. It also makes it difficult to clone a processing graph from an `OfflineAudioContext` to a real-time context or vice versa.
322 | 
323 | A `toString()` or `toJSON()` method on `AudioContext`, combined with a reflection API over the node graph, would solve the issue.
324 | 
325 | ## Layering Considerations
326 | 
327 | Web Audio is very low-level, and this is a virtue. By describing a graph that operates on buffers of samples, it enables developers to tightly control the behavior of processing and ensure low-latency delivery of results.
328 | 
329 | Today's Web Audio spec is an island: connected to its surroundings via loose ties, not integrated into the fabric of the platform as the natural basis and explanation of all audio processing -- despite being incredibly fit for that purpose.
330 | 
331 | Perhaps the most striking example of this comes from the presence in the platform of both Web Audio and the `