--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/administrative-issues.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Administrative Issues
3 | about: Issues with the repository itself
4 | title: ''
5 | labels: Admin
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Describe the Issue**
11 | Briefly describe the issue with this site. This covers everything not related to the spec itself, such as the issue tracker not working, incorrect wiki pages, and so on.
12 |
13 | **Expected Result**
14 | Describe what should have happened
15 |
16 | **Actual Result**
17 | Describe what actually happened and why that is wrong
18 |
--------------------------------------------------------------------------------
/audioparam.include:
--------------------------------------------------------------------------------
1 | <table class=simple>
2 |   <thead>
3 |     <tr>
4 |       <th>Parameter</th>
5 |       <th>Value</th>
6 |       <th>Notes</th>
7 |     </tr>
8 |   </thead>
9 |   <tbody>
10 |     <tr>
11 |       <td>{{AudioParam/defaultValue}}</td>
12 |       <td>[DEFAULT]</td>
13 |       <td>[DEFAULT-NOTES?]</td>
14 |     </tr>
15 |     <tr>
16 |       <td>{{AudioParam/minValue}}</td>
17 |       <td>[MIN]</td>
18 |       <td>[MIN-NOTES?]</td>
19 |     </tr>
20 |     <tr>
21 |       <td>{{AudioParam/maxValue}}</td>
22 |       <td>[MAX]</td>
23 |       <td>[MAX-NOTES?]</td>
24 |     </tr>
25 |     <tr>
26 |       <td>{{AudioParam/automationRate}}</td>
27 |       <td>[RATE]</td>
28 |       <td>[RATE-NOTES?]</td>
29 |     </tr>
30 |   </tbody>
31 | </table>
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature Request
3 | about: Suggest additions for WebAudio
4 | title: ''
5 | labels: feature,Needs WG Review
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Describe the feature**
11 | Briefly describe the feature you would like WebAudio to have.
12 |
13 | **Is there a prototype?**
14 | If you have a prototype (possibly using an AudioWorkletNode), provide links to illustrate this addition. This is the best way to propose a new feature.
15 |
16 | **Describe the feature in more detail**
17 | Provide more detail about what the feature does and how it works.
18 |
--------------------------------------------------------------------------------
/create-cr.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Create a zip file containing everything we need to publish a CR.
4 | # This assumes you've updated index.bs to add/change:
5 | # Status: CR
6 | # Date:
7 | # Deadline:
8 | # Prepare for TR: yes
9 |
10 | # Compile the spec, exit immediately if compile.sh fails
11 | set -e
12 | ./compile.sh
13 |
14 | # Create zip file of the things we need. First the basic stuff.
15 | rm -f cr.zip
16 | zip cr index.html style.css implementation-report.html test-report.html favicon.png
17 | # Now add all the images, but we don't need the graffle sources.
18 | zip cr `find images | grep -v graffle`
19 |
--------------------------------------------------------------------------------
/audionode.include:
--------------------------------------------------------------------------------
1 |
35 |
--------------------------------------------------------------------------------
/.github/workflows/auto-publish.yml:
--------------------------------------------------------------------------------
1 | on:
2 | pull_request:
3 | branches:
4 | - main
5 | push:
6 | branches:
7 | - main
8 |
9 | jobs:
10 | main:
11 | name: Setup, build, and Deploy to gh-pages branch
12 | runs-on: ubuntu-latest
13 | steps:
14 | - name: Checking out the repository
15 | uses: actions/checkout@v4
16 | - name: Setting up Python 3.9
17 | uses: actions/setup-python@v5
18 | with:
19 | python-version: '3.9'
20 | - name: Installing and updating Bikeshed
21 | run: pip3 install bikeshed && bikeshed update
22 | shell: bash
23 | - name: Building index.html from index.bs
24 | run: bash ./compile.sh
25 | shell: bash
26 | - name: Deploying to gh-pages branch
27 | if: github.event_name == 'push'
28 | uses: peaceiris/actions-gh-pages@v3
29 | with:
30 | github_token: ${{ secrets.GITHUB_TOKEN }}
31 | publish_dir: ./
32 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Web Audio API 1.0 Spec
2 |
4 |
5 |
6 |
7 | This repository contains the latest editor's draft of the [W3C AudioWG](https://www.w3.org/2011/audio/)'s Web Audio API v1.0.
8 |
9 | You can preview the current version of the `main` branch [here](https://webaudio.github.io/web-audio-api/).
10 |
11 | # Tests
12 |
13 | For normative changes, a corresponding
14 | [web-platform-tests](https://github.com/web-platform-tests/wpt) PR is highly appreciated. Typically,
15 | both PRs will be merged at the same time. Note that a test change that contradicts the spec should
16 | not be merged before the corresponding spec change. If testing is not practical, please explain why
17 | and if appropriate [file an issue](https://github.com/web-platform-tests/wpt/issues/new) to follow
18 | up later. Add the `type:untestable` or `type:missing-coverage` label as appropriate.
19 |
20 | We also have an [implementation report](https://webaudio.github.io/web-audio-api/implementation-report.html).
21 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug Report
3 | about: Create a report to fix the specification.
4 | labels: Needs WG review, Untriaged
5 | assignees: ''
6 |
7 | ---
8 |
9 | **Describe the issue**
10 |
11 | A clear and concise description of what the problem is in the spec.
12 |
13 | If you need help on how to use WebAudio, ask on your favorite forum or consider visiting
14 | the [WebAudio Slack Channel](https://web-audio.slack.com/) (register [here](https://web-audio-slackin.herokuapp.com/)) or
15 | [StackOverflow](https://stackoverflow.com/).
16 |
17 | If it's really an implementation bug, consider filing an issue for your browser at
18 |
19 | * Safari (WebKit): https://bugs.webkit.org/enter_bug.cgi?product=WebKit&component=Web%20Audio (WebKit bugzilla account needed).
20 | * Chrome (Blink): https://new.crbug.com/ (Google account needed).
21 | * Firefox (Gecko): https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&component=Web%20Audio (GitHub or Mozilla Bugzilla account needed).
22 |
23 |
24 | **Where Is It**
25 |
26 | Provide a link to where the problem is in the spec
27 |
28 | **Additional Information**
29 |
30 | Provide any additional information
31 |
--------------------------------------------------------------------------------
/compile.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | usage () {
4 | echo "compile.sh [-Kh?]"
5 | echo " -K Keep the actual-errs.txt. Useful for updating"
6 | echo " expected-errs.txt. (Otherwise removed when done.)"
7 | echo " -h This help"
8 | echo " -? This help"
9 | exit 0
10 | }
11 |
12 | KEEP=no
13 | while getopts "Kh?" arg
14 | do
15 | case $arg in
16 | K) KEEP=yes ;;
17 | h) usage ;;
18 | \?) usage ;;
19 | esac
20 | done
21 |
22 | # So we can see what we're doing
23 | set -x
24 |
25 | # Output from bikeshed is logged here, along with a version with line
26 | # numbers stripped out.
27 | BSLOG="bs.log"
28 | ERRLOG="actual-errs.txt"
29 |
30 | # Remove ERRLOG when we're done with this script, but keep BSLOG so we
31 | # can update the expected errors.
32 | if [ "$KEEP" = "no" ]; then
33 | trap "rm $ERRLOG" 0
34 | fi
35 |
36 | # Run bikeshed and save the output. You can use this output as is
37 | # to update expected-errs.txt.
38 | bikeshed --print=plain -f spec 2>&1 | tee $BSLOG
39 |
40 | # Remove the line numbers from the log, and make sure it ends with a newline.
41 | # Also remove any lines that start "cannot identify image file" because the path
42 | # is based on the machine doing the build, so we don't want that in the results.
43 | sed 's;^LINE [0-9]*:[0-9]*:;LINE:;' $BSLOG |
44 | sed '/^cannot identify image file/d' |
45 | sed -e '$a\' > $ERRLOG
46 |
47 | # Do the same for the expected errors and compare the two. Any
48 | # differences need to be fixed. Exit with a non-zero exit code if
49 | # there are any differences.
50 | (sed 's;^LINE [0-9]*:[0-9]*:;LINE:;' expected-errs.txt |
51 | sed '/^cannot identify image file/d' |
52 | sed -e '$a\' |
53 | diff -u - $ERRLOG) || exit 1
54 |
55 |
--------------------------------------------------------------------------------
/images/webaudio-js.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
57 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # How to contribute
2 |
3 | This repository contains the current Web Audio API _standard document_, developed by the Audio Working Group at the W3C. There are _multiple implementations_ of the Web Audio API standard.
4 |
5 | If you're about to open an issue about a bug in a particular implementation, please file a ticket in the corresponding implementation bug tracker instead:
6 |
7 | - WebKit (Safari): https://bugs.webkit.org/enter_bug.cgi?product=WebKit&component=Web%20Audio (WebKit bugzilla account needed).
8 | - Blink (Chrome, Chromium, Electron apps, etc.): https://new.crbug.com/ (Google account needed).
9 | - Gecko (Firefox): https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&component=Web%20Audio (GitHub or Mozilla Bugzilla account needed).
10 |
11 | Testing the same Web Audio API code in multiple implementations can be a quick way to check whether an implementation has a bug. However, it has happened that different implementations share the same bug, so it's best to read the [standard document](https://webaudio.github.io/web-audio-api/) and check what _should_ happen.
12 |
13 | Feature requests are welcome, but please do a quick search through the [open and closed issues](https://github.com/WebAudio/web-audio-api/issues) first to avoid duplicates.
14 |
15 | Pull requests are also welcome, but any change to the standard (barring typos and the like) will have to be discussed in an issue (and possibly during a call, accessible only to W3C Audio Working Group members).
16 |
17 | [Bikeshed](https://github.com/tabatkins/bikeshed) is the tool used to write this
18 | specification. It can either be used via an HTTP API, or by running it locally.
19 |
20 | To use it via the HTTP API, run:
21 |
22 | ```
23 | curl https://api.csswg.org/bikeshed/ -F file=@index.bs -F force=1 -F output=err -o err && cat err | sed -E 's/\\033\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g' | diff -u - expected-errs.txt
24 | ```
25 |
26 | and preview your changes by doing:
27 | ```
28 | curl https://api.csswg.org/bikeshed/ -F file=@index.bs -F force=1 > index.html
29 | ```
30 |
31 | and opening `index.html` in your favorite browser.
32 |
33 | To run it locally, follow the [installation
34 | instructions](https://tabatkins.github.io/bikeshed/#installing).
35 |
36 | Then, `bikeshed serve` will run a web server [locally on port
37 | 8000](http://localhost:8000).
38 |
39 |
--------------------------------------------------------------------------------
/images/cancel-linear.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
--------------------------------------------------------------------------------
/images/cancel-setTarget.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
--------------------------------------------------------------------------------
/images/cancel-setValueCurve.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
--------------------------------------------------------------------------------
/expected-errs.txt:
--------------------------------------------------------------------------------
1 | LINE: Couldn't determine width and height of this image: 'images/cancel-linear.svg'
2 | LINE: Couldn't determine width and height of this image: 'images/cancel-setTarget.svg'
3 | LINE: Couldn't determine width and height of this image: 'images/cancel-setValueCurve.svg'
4 | LINE: Couldn't determine width and height of this image: 'images/channel-merger.svg'
5 | LINE: Couldn't determine width and height of this image: 'images/compression-curve.svg'
6 | LINE: Couldn't determine width and height of this image: 'images/dynamicscompressor-internal-graph.svg'
7 | LINE: Couldn't determine width and height of this image: 'images/panner-coord.svg'
8 | LINE: Couldn't determine width and height of this image: 'images/cone-diagram.svg'
9 | LINE: Can't find the 'contextOptions' argument of method 'OfflineAudioContext/constructor(numberOfChannels, length, sampleRate)' in the argumentdef block.
10 | LINE: Can't find the 'destinationNode' argument of method 'AudioNode/connect(destinationParam, output)' in the argumentdef block.
11 | LINE: Can't find the 'input' argument of method 'AudioNode/connect(destinationParam, output)' in the argumentdef block.
12 | LINE: Can't find the 'destinationNode' argument of method 'AudioNode/disconnect(destinationParam, output)' in the argumentdef block.
13 | LINE: Can't find the 'destinationNode' argument of method 'AudioNode/disconnect(destinationParam, output)' in the argumentdef block.
14 | LINE: Can't find the 'destinationNode' argument of method 'AudioNode/disconnect(destinationParam, output)' in the argumentdef block.
15 | LINE: Can't find the 'input' argument of method 'AudioNode/disconnect(destinationParam, output)' in the argumentdef block.
16 | LINE: Multiple possible 'audio' element refs.
17 | Arbitrarily chose https://html.spec.whatwg.org/multipage/media.html#audio
18 | To auto-select one of the following refs, insert one of these lines into a <pre class=metadata> block:
19 | spec:html; type:element; text:audio
20 | spec:epub-34; type:element; text:audio
21 | <{audio}>
22 | LINE: Multiple possible 'audio' element refs.
23 | Arbitrarily chose https://html.spec.whatwg.org/multipage/media.html#audio
24 | To auto-select one of the following refs, insert one of these lines into a <pre class=metadata> block:
25 | spec:html; type:element; text:audio
26 | spec:epub-34; type:element; text:audio
27 | <{audio}>
28 | LINE: Multiple possible 'audio' element refs.
29 | Arbitrarily chose https://html.spec.whatwg.org/multipage/media.html#audio
30 | To auto-select one of the following refs, insert one of these lines into a <pre class=metadata> block:
31 | spec:html; type:element; text:audio
32 | spec:epub-34; type:element; text:audio
33 | <{audio}>
34 | LINE: Multiple possible 'audio' element refs.
35 | Arbitrarily chose https://html.spec.whatwg.org/multipage/media.html#audio
36 | To auto-select one of the following refs, insert one of these lines into a <pre class=metadata> block:
37 | spec:html; type:element; text:audio
38 | spec:epub-34; type:element; text:audio
39 | <{audio}>
40 | LINE: Multiple possible 'audio' element refs.
41 | Arbitrarily chose https://html.spec.whatwg.org/multipage/media.html#audio
42 | To auto-select one of the following refs, insert one of these lines into a <pre class=metadata> block:
43 | spec:html; type:element; text:audio
44 | spec:epub-34; type:element; text:audio
45 | <{audio}>
46 | LINE: No 'idl' refs found for '[CC-MODE]'.
47 | {{ChannelCountMode/[CC-MODE]}}
48 | LINE: No 'idl' refs found for '[CC-INTERP]'.
49 | {{ChannelInterpretation/[CC-INTERP]}}
50 | LINE: Multiple possible 'audio' element refs.
51 | Arbitrarily chose https://html.spec.whatwg.org/multipage/media.html#audio
52 | To auto-select one of the following refs, insert one of these lines into a <pre class=metadata> block:
53 | spec:html; type:element; text:audio
54 | spec:epub-34; type:element; text:audio
55 | <{audio}>
56 | LINE: W3C policy requires Privacy Considerations and Security Considerations to be separate sections, but you appear to have them combined into one.
57 | ✔ Successfully generated, but fatal errors were suppressed
58 |
--------------------------------------------------------------------------------
/images/cone-diagram.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
99 |
--------------------------------------------------------------------------------
/style.css:
--------------------------------------------------------------------------------
1 | .nt, pre, .terminal, code, .prop, .esstring, .javavalue, .idlident, .idlstring, .xattr, .regex, .prod-number, .prod-lines, .prod-mid {
2 | font-size: 14px;
3 | }
4 | pre code, .prod-lines .nt {
5 | font-size: 14px !important;
6 | }
7 | .ednote, .terminal, code, .prop, .esstring, .javavalue, .idlident, .idlstring, .example, .note, blockquote {
8 | background: #d9e8ff;
9 | }
10 |
11 |
12 | td code {
13 | background: inherit;
14 | }
15 | .example blockquote {
16 | background: #f0f6ff;
17 | }
18 | table.grammar {
19 | background: #eee;
20 | }
21 | .ednote {
22 | border-top: 3px solid red;
23 | border-bottom: 3px solid red;
24 | margin: 1em 2em;
25 | padding: 0 1em 0 1em;
26 | background: #f8eeee;
27 | }
28 |
29 |
30 | .ednoteHeader {
31 | font-weight: bold;
32 | display: block;
33 | padding-top: 0.5em;
34 | }
35 | .toc ul li {
36 | list-style-type: none;
37 | margin-top: 0;
38 | margin-bottom: 0;
39 | }
40 | .toc ul {
41 | margin-bottom: 0.5em;
42 | }
43 | .terminal, code, .prop, .esstring, .javavalue, .idlident, .idlstring, .input {
44 | font-family: /*Consolas, Monaco,*/ monospace !important;
45 | }
46 | pre.code code {
47 | background: inherit;
48 | }
49 | .propattrset {
50 | }
51 | /*.prop {
52 | font-family: Consolas, Monaco, monospace;
53 | }*/
54 |
55 | .xattr {
56 | font-family: /*Consolas, Monaco,*/ monospace;
57 | }
58 |
59 | table { border-collapse:collapse; border-style:hidden hidden none hidden }
60 | table thead { border-bottom:solid }
61 | table tbody th:first-child { border-left:solid }
62 | table td, table th { border-left:solid; border-right:solid; border-bottom:solid thin; vertical-align:top; padding:0.2em }
63 |
64 | .nt, .prod-lines {
65 | font-family: /*Consolas, Monaco,*/ monospace;
66 | white-space: nowrap;
67 | }
68 | .idltype, .idlvalue {
69 | font-weight: bold;
70 | }
71 | .idlop {
72 | font-weight: bold;
73 | }
74 | .esvalue, .estype {
75 | font-weight: bold;
76 | }
77 | .javatype, .javapkg {
78 | font-weight: bold;
79 | }
80 | .regex {
81 | font-family: /*Consolas, Monaco,*/ monospace;
82 | white-space: nowrap;
83 | }
84 | .typevar {
85 | font-style: italic;
86 | }
87 | .example, .note {
88 | border-top: 3px solid #005a9c;
89 | border-bottom: 3px solid #005a9c;
90 | margin: 1em 2em;
91 | padding: 0 1em 0 1em;
92 | }
93 | .exampleHeader, .noteHeader {
94 | font-weight: bold;
95 | display: block;
96 | color: #005a9c;
97 | color: black;
98 | padding-top: 0.5em;
99 | }
100 | pre {
101 | overflow: auto;
102 | margin: 0;
103 | font-family: /*Consolas, Monaco,*/ monospace;
104 | }
105 | pre.code {
106 | padding: 0 1em 0 6em;
107 | margin: 0;
108 | margin-bottom: 1em;
109 | }
110 | .block {
111 | border: 1px solid #90b8de;
112 | border-left: 3px double #90b8de;
113 | border-left: none;
114 | border-right: none;
115 | background: #f0f6ff;
116 | margin: 2em;
117 | margin-top: 1em;
118 | margin-bottom: 1em;
119 | padding: 0 0.5em;
120 | padding-bottom: 0.5em;
121 | }
122 | .blockTitleDiv {
123 | text-align: left;
124 | }
125 | .blockTitle {
126 | position: relative;
127 | top: -0.75em;
128 | left: -1.5em;
129 | /*border: 1px solid #90b8de;
130 | border-left: none;
131 | border-right: none;*/
132 | background: #90b8de;
133 | color: white;
134 | padding: 0.25em 1em 0.25em 1em;
135 | font-weight: bold;
136 | font-size: 80%;
137 | }
138 | dfn {
139 | font-weight: bold;
140 | font-style: italic;
141 | }
142 | .dfnref {
143 | }
144 | li {
145 | margin-top: 0.5em;
146 | margin-bottom: 0.5em;
147 | }
148 | ul > li {
149 | list-style-type: disc;
150 | }
151 | .norm {
152 | font-style: italic;
153 | }
154 | .rfc2119 {
155 | text-transform: lowercase;
156 | font-variant: small-caps;
157 | }
158 | dfn var {
159 | font-style: normal;
160 | }
161 | blockquote {
162 | padding: 1px 1em;
163 | margin-left: 2em;
164 | margin-right: 2em;
165 | }
166 | a.placeholder {
167 | color: #00e;
168 | }
169 | dl.changes > dd {
170 | margin-left: 0;
171 | }
172 | dd > :first-child {
173 | margin-top: 0;
174 | }
175 | caption {
176 | caption-side: bottom;
177 | margin-top: 1em;
178 | font-weight: bold;
179 | }
180 | body {
181 | line-height: 1.3;
182 | }
183 | @media print {
184 | .section-link {
185 | display: none;
186 | }
187 | }
188 | .section-link {
189 | visibility: hidden;
190 | width: 1px;
191 | height: 1px;
192 | overflow: visible;
193 | font-size: 10pt;
194 | font-style: normal;
195 | }
196 | .section-link a {
197 | color: #666;
198 | font-weight: bold;
199 | text-decoration: none;
200 | }
201 | .section-link a:hover {
202 | color: #c00;
203 | }
204 | .section > *:hover > .section-link {
205 | visibility: visible;
206 | }
207 | div.set {
208 | margin-left: 3em;
209 | text-indent: -1em;
210 | }
211 | ol.algorithm ol {
212 | border-left: 1px solid #90b8de;
213 | margin-left: 1em;
214 | }
215 | dl.switch > dd > ol.only {
216 | margin-left: 0;
217 | }
218 | dl.switch {
219 | padding-left: 2em;
220 | }
221 | dl.switch > dt {
222 | text-indent: -1.5em;
223 | margin-top: 1em;
224 | }
225 | dl.switch > dt + dt {
226 | margin-top: 0;
227 | }
228 | dl.switch > dt:before {
229 | content: '\21AA';
230 | padding: 0 0.5em 0 0;
231 | display: inline-block;
232 | width: 1em;
233 | text-align: right;
234 | line-height: 0.5em;
235 | }
236 | .diagram {
237 | text-align: center;
238 | }
239 | iframe {
240 | border: 0;
241 | }
242 | .ignore {
243 | opacity: 0.5;
244 | }
245 | .comment {
246 | color: #005a9c;
247 | }
248 |
249 |
250 |
251 | .matrix {
252 | border-collapse: collapse;
253 | margin-left: auto;
254 | margin-right: auto;
255 | }
256 | .matrix th {
257 | background: #d9e8ff;
258 | text-align: right;
259 | }
260 | .matrix td, .matrix th {
261 | border: 1px solid #90b8de;
262 | padding: 4px;
263 | }
264 | .matrix th.corner {
265 | border: 0;
266 | background: none;
267 | }
268 | .matrix td {
269 | text-align: center;
270 | background: #f0f6ff;
271 | }
272 | .matrix .belowdiagonal {
273 | background: #ddd;
274 | }
275 |
276 | ul.notes { font-size: 90%; padding-left: 0 }
277 | ul.notes li { list-style-type: none }
278 | ul.notes .note-link { vertical-align: super }
279 | .note-link { font-size: 90% }
280 |
281 | .code var { color: #f44; }
282 |
283 | /* For dfn.js */
284 | body.dfnEnabled dfn { cursor: pointer; }
285 | .dfnPanel {
286 | display: inline;
287 | position: absolute;
288 | height: auto;
289 | width: auto;
290 | padding: 0.5em 0.75em;
291 | font: small sans-serif;
292 | background: #DDDDDD;
293 | color: black;
294 | border: outset 0.2em;
295 | cursor: default;
296 | }
297 | .dfnPanel * { margin: 0; padding: 0; font: inherit; text-indent: 0; }
298 | .dfnPanel :link, .dfnPanel :visited { color: black; }
299 | .dfnPanel p { font-weight: bolder; }
300 | .dfnPanel li { list-style-position: inside; }
301 |
--------------------------------------------------------------------------------
/webaudio-CR-transition.md:
--------------------------------------------------------------------------------
1 | # Transition request, Web Audio API to Candidate Recommendation
2 |
3 | see https://github.com/w3c/transitions/issues/89
4 |
5 | ## Document title, URLs, estimated publication date
6 |
7 | Web Audio API
8 |
9 | Latest published version:
10 | http://www.w3.org/TR/webaudio/
11 |
12 | Latest editor's draft:
13 | https://webaudio.github.io/web-audio-api/
14 |
15 | Date:
16 | first Tuesday or Thursday after a successful transition meeting
17 |
18 | ## Abstract
19 |
20 | See https://www.w3.org/TR/webaudio/#abstract
21 |
22 | ## Status
23 |
24 | See https://www.w3.org/TR/webaudio/#sotd (as WD)
25 |
26 | ## Link to group's decision to request transition
27 |
28 | [12:12] Agreement on transition request https://github.com/WebAudio/web-audio-api/blob/master/webaudio-CR-transition.md
29 |
30 | [12:13] All in favour
31 |
32 | [6 Sept 2018 telcon](https://www.w3.org/2018/09/06-audio-irc#T16-13-12-53).
33 |
34 |
35 | ## Changes
36 |
37 | The Working Draft of 8 December 2015 introduced a substantive
38 | change: the deprecation of the old extensibility point, ScriptProcessorNode and its replacement
39 | with AudioWorker.
40 |
41 | Since then, substantial effort has gone into fleshing out details of the AudioWorklet interface
42 | https://webaudio.github.io/web-audio-api/#AudioWorklet
43 |
44 | This was done in coordination with the Houdini taskforce, which specified Worklet and also PaintWorklet.
45 |
46 | An updated draft was published on 19 June 2018 for wide review.
47 |
48 | Changes between the 8 December 2015 Working Draft and the 19 June 2018 draft are listed in the changes section
49 | https://webaudio.github.io/web-audio-api/#changestart
50 |
51 | More recent changes are listed here
52 | https://github.com/WebAudio/web-audio-api/commits/main/
53 | and will be added to a separate changes subsection
54 |
55 | The remainder of Web Audio API is stable, has multiple implementations, and is widely used.
56 |
57 | ## Requirements satisfied
58 | There is a requirements document
59 |
60 | https://www.w3.org/TR/webaudio-usecases/
61 |
62 | The majority of the use cases listed there are met by Web Audio API. Speed-change algorithms are complex and were not added in v.1, but they could be implemented via AudioWorklet. We eliminated built-in Doppler shifting from the API because of complexity and CPU-consumption concerns; however, the effect can still be achieved within a game through continuous playbackRate control of an AudioBufferSourceNode generating the sound to be shifted. Some use cases would require substantial CPU power for professional results, but this is a limitation of physics, not an inherent limitation of the specification.
63 |
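As a non-normative sketch of the playbackRate approach mentioned above (the `dopplerShift` helper and its parameters are illustrative, not part of the API):

```javascript
// Classic Doppler factor: frequencies scale by (c + vListener) / (c - vSource),
// with velocities measured positively when moving toward the other party.
function dopplerShift(speedOfSound, listenerVelocity, sourceVelocity) {
  return (speedOfSound + listenerVelocity) / (speedOfSound - sourceVelocity);
}

// In a browser, the factor drives an AudioBufferSourceNode's playbackRate.
// (Guarded so the sketch is harmless outside a browser environment.)
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const source = ctx.createBufferSource();
  // source.buffer = someDecodedAudioBuffer; // the sound to be shifted
  source.playbackRate.value = dopplerShift(343, 0, 30); // source approaching at 30 m/s
  source.connect(ctx.destination);
  source.start();
}
```

A game engine would recompute the factor each frame from the simulated source and listener velocities and update `playbackRate` accordingly.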
64 | ## Dependencies met (or not)
65 |
66 | This specification depends on the [Worklets Working Draft](https://www.w3.org/TR/worklets-1/) and [WebRTC](https://www.w3.org/TR/webrtc/).
67 | Worklets is being implemented in Blink and in Firefox, and is believed stable.
68 | Web Audio tests for AudioWorklet will help test Worklet as well.
69 |
70 | ## Wide Review
71 |
72 | The most recent Working Draft was published on 19 June 2018
73 | https://www.w3.org/TR/2018/WD-webaudio-20180619/
74 |
75 | The specification has received wide review, including presentation and active discussion
76 | at three successful Web Audio conferences (a fourth conference to take place in September 2018)
77 | and uptake by an enthusiastic developer community.
78 | These conferences have included public plenary sessions with the working group.
79 |
80 | [WAC 2015](http://wac.ircam.fr/)
81 |
82 | [WAC 2016](http://webaudio.gatech.edu/)
83 |
84 | [WAC 2017](http://wac.eecs.qmul.ac.uk/)
85 |
86 | [WAC 2018](https://webaudioconf.com/)
87 |
88 |
89 |
90 | The specification was developed on GitHub, see the issues list
91 |
92 | https://github.com/WebAudio/web-audio-api/issues
93 |
94 |
95 | ### Security and Privacy:
96 |
97 | There is a security and privacy appendix
98 | https://webaudio.github.io/web-audio-api/#Security-Privacy-Considerations
99 | This benefitted from review and contributions of the Privacy Interest Group.
100 |
101 | ### Accessibility:
102 |
103 | This JavaScript API does not expose any user media controls, so although the constraints in Media Accessibility User Requirements
104 | https://www.w3.org/WAI/PF/media-a11y-reqs/
105 |
106 | were considered, they do not apply to this API.
107 |
108 | ### Internationalization:
109 |
110 | Web Audio API was discussed at TPAC 2016 with the I18n Core chair, who confirmed that JavaScript APIs
111 | which do not expose human-readable text strings or take natural-language input do not constitute
112 | a problem for Internationalization.
113 |
114 | ### TAG:
115 |
116 | This specification benefitted from extensive review by the TAG. Domenic Denicola was the lead
117 | reviewer. Changes were made in response to this review, and the TAG appeared satisfied.
118 |
119 | To verify this, a [second round of TAG review](https://github.com/w3ctag/design-reviews/issues/212)
120 | was initiated in Nov 2017. The spec was discussed with the TAG at their London f2f, and TAG confirmed they were satisfied.
121 |
122 | ## Issues addressed
123 |
124 | The issues list is on GitHub:
125 |
126 | https://github.com/WebAudio/web-audio-api/issues
127 |
128 | There are currently 80 open issues and 978 closed.
129 |
130 | Of those, 95 were feature requests that were [deferred to the next version](https://github.com/WebAudio/web-audio-api/milestone/2)
131 |
132 | WebAudio v.1 has [13 open issues and 433 closed](https://github.com/WebAudio/web-audio-api/milestone/1); these are primarily editorial clarifications.
133 |
134 | ## Formal Objections
135 |
136 | None
137 |
138 | ## Implementation
139 |
140 | There are implementations of Web Audio API in Safari, in Blink-based browsers such as Chrome, in
141 | Firefox, and in Microsoft Edge. This includes mobile implementations in Safari for iOS and
142 | Chrome and Firefox for Android.
143 |
144 | A draft implementation report is available:
145 |
146 | https://webaudio.github.io/web-audio-api/implementation-report.html
147 |
148 | There are no features at risk.
149 |
150 | A test suite is in progress and available at
151 | https://github.com/web-platform-tests/wpt/tree/master/webaudio
152 |
153 | wpt.fyi results are available (with the usual caveats regarding browser versions, etc.)
154 | https://wpt.fyi/results/webaudio/the-audio-api
155 |
156 | Mochitests from Mozilla are being converted to WPT format and will be pushed upstream to the WPT repo; all of these are reviewed upstream.
157 | Google also has extensive tests; these have now all been converted to WPT format and pushed to WPT, where they were reviewed upstream. The tracking issue for test migration is
158 | https://github.com/WebAudio/web-audio-api/issues/1388
159 |
160 | During the CR period, the WG expects to remove any test duplication and look for any untested areas.
161 |
162 | The Working Group expects to demonstrate 2 implementations of the
163 | features listed in this specification by the end of the Candidate
164 | Recommendation phase.
165 |
166 | ## Patent disclosures
167 |
168 | https://www.w3.org/2004/01/pp-impl/46884/status
169 |
170 |
--------------------------------------------------------------------------------
/images/channel-merger.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
105 |
--------------------------------------------------------------------------------
/convolution.html:
--------------------------------------------------------------------------------
1 |
2 |
4 |
5 |
6 |
7 | Web Audio API - Convolution Architecture
8 |
9 |
10 |
11 |
12 |
13 |
Convolution Reverb
14 |
15 | This section is informative and may be helpful to implementors.
16 |
17 |
18 |
19 | A convolution reverb can be used to simulate an acoustic space with very high quality.
20 | It can also be used as the basis for creating a vast number of unique and interesting special effects. This technique is widely used
21 | in modern professional audio and motion picture production, and is an excellent choice to create room effects in a game engine.
22 |
23 |
24 | Creating a well-optimized real-time convolution engine is one of the more challenging parts of the Web Audio API implementation.
25 | When convolving an input audio stream of unknown (or theoretically infinite) length, the overlap-add approach is used, chopping the
26 | input stream into pieces of length L, performing the convolution on each piece, then re-constructing the output signal by delaying each result and summing.
27 |
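The overlap-add scheme just described can be sketched in pure (non-real-time) Python; the function names here are illustrative only, not part of any implementation:

```python
def direct_convolve(block, kernel):
    """Directly convolve one input block with the impulse response."""
    out = [0.0] * (len(block) + len(kernel) - 1)
    for i, x in enumerate(block):
        for j, h in enumerate(kernel):
            out[i + j] += x * h
    return out

def overlap_add(signal, kernel, L):
    """Chop the input into pieces of length L, convolve each piece,
    then delay each partial result by its block offset and sum."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for start in range(0, len(signal), L):
        for k, y in enumerate(direct_convolve(signal[start:start + L], kernel)):
            out[start + k] += y
    return out
```

A real engine replaces `direct_convolve` with FFT-based block convolution, but the delay-and-sum reconstruction of the output is the same.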
28 |
29 |
Overlap-Add Convolution
30 |
31 |
32 |
33 |
34 |
35 |
36 | Direct convolution is far too computationally expensive due to the extremely long impulse responses typically used, so an FFT-based approach must be used instead. But naively doing a standard overlap-add FFT convolution, using an FFT of size N with L = N/2, where N is chosen to be at least twice the length of the convolution kernel (zero-padding the kernel), would incur a substantial input-to-output pipeline latency on the order of L samples for each convolution operation in the diagram above. Because of the enormous audible delay, this simple method cannot be used. Aside from the delay, the size N of the FFT could be extremely large. For example, with an impulse response of 10 seconds at 44.1 kHz, N would equal 1048576 (2^20).
37 | Such an FFT would take a very long time to evaluate. Furthermore, such large FFTs are not practical due to substantial phase errors.
38 |
39 |
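The FFT size quoted above can be checked with a small sketch (the helper name is illustrative only):

```python
def fft_size_for_kernel(kernel_length):
    """Smallest power-of-two FFT size at least twice the kernel length,
    leaving room to zero-pad the kernel for overlap-add."""
    n = 1
    while n < 2 * kernel_length:
        n *= 2
    return n

# The example from the text: a 10-second impulse response at 44.1 kHz.
print(fft_size_for_kernel(10 * 44100))  # 1048576 == 2**20
```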
40 |
41 |
42 |
43 |
Optimizations and Tricks
44 |
45 |
46 |
47 | There exist several clever tricks which break the impulse response into smaller pieces, performing separate convolutions, then
48 | combining the results (exploiting the property of linearity). The best ones use a divide and conquer approach using different size FFTs and a direct
49 | convolution for the initial (leading) portion of the impulse response to achieve a zero-latency output. There are additional optimizations which can be done exploiting the
50 | fact that the tail of the reverb typically contains very little or no high-frequency energy. For this part, the convolution may be done at a lower sample-rate...
51 |
52 |
53 |
54 | Performance can be quite good: convolution can easily be
55 | done in real-time without placing undue stress on modern mid-range CPUs. A multi-threaded implementation is effectively required if low (or zero) latency is needed, because of the way the buffering / processing chunking works. Achieving good performance requires a highly optimized FFT algorithm.
56 |
57 |
58 |
Multi-channel convolution
59 |
60 | It should be noted that a convolution reverb typically involves two convolution operations, with separate impulse responses for the left and right channels in
61 | the stereo case. For 5.1 surround, at least five separate convolution operations are necessary to generate output for each of the five channels.
62 |
63 |
64 |
Impulse Responses
65 |
66 | Similar to other assets such as JPEG images, WAV sound files, MP4 videos, shaders, and geometry, impulse responses can be considered as multi-media assets. As with these other
67 | assets, they require work to produce, and the high-quality ones are considered valuable. For example, a company called Audio Ease makes a fairly expensive ($500 - $1000)
68 | product called Altiverb
69 | containing several nicely recorded impulse responses along with a convolution reverb engine.
70 |
71 |
72 |
73 |
Convolution Engine Implementation
74 |
75 |
76 |
FFTConvolver (short convolutions)
77 |
78 |
79 | The FFTConvolver is able to do short convolutions with the FFT size N being at least twice as large as the
80 | length of the short impulse response. It incurs a latency of N/2 sample-frames. Because of this latency and performance considerations,
81 | it is not suitable for long convolutions. Multiple instances of this building block can be used to perform extremely long convolutions.
82 |
83 |
84 |
85 |
86 |
87 |
ReverbConvolver (long convolutions)
88 | The ReverbConvolver is able to perform extremely long real-time convolutions on a single audio channel.
89 | It uses multiple FFTConvolver objects as well as an input buffer and an accumulation buffer. Note that it's
90 | possible to get a multi-threaded implementation by exploiting the parallelism. Also note that the leading sections of the long
91 | impulse response are processed in the real-time thread for minimum latency. In theory it's possible to get zero latency if the
92 | very first FFTConvolver is replaced with a DirectConvolver (not using an FFT).
93 |
94 |
95 |
96 |
97 |
98 |
99 |
Reverb Effect (with matrixing)
100 |
101 |
102 |
103 |
104 |
105 |
106 |
Recording Impulse Responses
107 |
108 |
109 |
110 |
111 |
112 |
The most modern
113 | and accurate way to record the impulse response of a real acoustic space is to use
114 | a long exponential sine sweep. The test-tone can be as long as 20 or 30 seconds, or longer.
115 |
116 |
117 |
118 |
119 |
Several recordings of the
120 | test tone played through a speaker can be made with microphones placed and oriented at various positions in the room. It's important
121 | to document speaker placement/orientation, the types of microphones, their settings, placement, and orientations for each recording taken.
122 |
123 |
124 | Post-processing is required for each of these recordings by performing an inverse-convolution with the test tone,
125 | yielding the impulse response of the room with the corresponding microphone placement. These impulse responses are then
126 | ready to be loaded into the convolution reverb engine to re-create the sound of being in the room.
127 |
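As an informative sketch, an exponential sine sweep of this kind can be generated with the standard Farina formulation (this is illustrative Python, not the actual tool described below):

```python
import math

def exponential_sweep(f1, f2, duration, sample_rate):
    """Exponential sine sweep from f1 to f2 Hz over `duration` seconds:
        s(t) = sin(2*pi*f1*T/ln(f2/f1) * (exp(t/T * ln(f2/f1)) - 1))
    evaluated at each sample instant t = i / sample_rate."""
    R = math.log(f2 / f1)
    n = int(duration * sample_rate)
    return [math.sin(2 * math.pi * f1 * duration / R * (math.exp(i / n * R) - 1))
            for i in range(n)]
```

Deconvolving the room recording with the matching inverse filter then yields the impulse response, as described below.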
128 |
129 |
Tools
130 |
131 | Two command-line tools have been written:
132 |
133 |
134 | generate_testtones generates an exponential sine-sweep test-tone and its inverse. Another
135 | tool convolve was written for post-processing. With these tools, anybody with recording equipment can record their own impulse responses.
136 | To test the tools in practice, several recordings were made in a warehouse space with interesting
137 | acoustics. These were later post-processed with the command-line tools.
138 |
139 |
140 |
141 |
142 | % generate_testtones -h
143 | Usage: generate_testtone
144 | [-o /Path/To/File/To/Create] Two files will be created: .tone and .inverse
145 | [-rate <sample rate>] sample rate of the generated test tones
146 | [-duration <duration>] The duration, in seconds, of the generated files
147 | [-min_freq <min_freq>] The minimum frequency, in hertz, for the sine sweep
148 |
149 | % convolve -h
150 | Usage: convolve input_file impulse_response_file output_file
151 |
183 |
184 |
185 |
186 |
187 |
--------------------------------------------------------------------------------
/images/compression-curve.svg:
--------------------------------------------------------------------------------
1 |
2 |
193 |
--------------------------------------------------------------------------------
/explainer/user-selectable-render-size.md:
--------------------------------------------------------------------------------
1 | # User-Selectable Render Size
2 | - Hongchan Choi (hongchan@chromium.org) - current contact
3 | - Raymond Toy (rtoy@chromium.org) - original explainer author
4 |
5 | ## Background
6 | Historically, WebAudio has always rendered the graph in chunks of 128 frames,
7 | called a [render
8 | quantum](https://webaudio.github.io/web-audio-api/#render-quantum) in the
9 | specification. This was a trade-off between function-call overhead and
10 | latency. A smaller number would reduce latency, but the function call overhead
11 | would increase. With a larger value, the overhead is reduced, but the latency
12 | increases because any change takes more audio frames to reach the output. In
13 | addition, Mac OS probably processed 128 frames at a time anyway.
14 |
15 | ## Issues
16 | This has worked well over time, especially on desktop, but is particularly bad
17 | on Android where 128 may not fit in well with Android's audio processing. The
18 | main problem is illustrated very well from the example from Paul Adenot in a
19 | [comment to issue #13](https://github.com/WebAudio/web-audio-api-v2/issues/13#issuecomment-572469654),
20 | reproduced below.
21 |
22 | The example is an Android phone that has a native processing buffer size
23 | of 192. Since WebAudio processes 128 frames we have the following behavior:
24 |
25 | |iteration | frames to render | render quanta needed | leftover frames|
26 | |---|---|---|---|
27 | |0| 192| 2| 64|
28 | |1| 192| 1| 0|
29 | |2| 192| 2| 64|
30 | |3| 192| 1| 0|
31 | |4| 192| 2| 64|
32 | |5| 192| 1| 0|
33 |
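The schedule in the table above can be reproduced with a short simulation (`callback_schedule` is a hypothetical helper for illustration, not part of any API):

```python
def callback_schedule(native_size, quantum, iterations):
    """Simulate how many render quanta must be produced per audio
    callback when the native buffer size is not a multiple of the
    render quantum, and how many leftover frames carry over."""
    rows = []
    leftover = 0
    for i in range(iterations):
        needed = native_size - leftover
        quanta = -(-needed // quantum)          # ceiling division
        leftover = quanta * quantum - needed    # frames carried to next callback
        rows.append((i, native_size, quanta, leftover))
    return rows
```

With `native_size=192` and `quantum=128`, the output alternates between rendering 2 quanta (64 leftover) and 1 quantum (0 leftover), exactly as in the table.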
34 | At a sample rate of 48 kHz, 128 frames take 2.666 ms to render. The net result
35 | is that the **peak** CPU usage is twice as high as might be expected since,
36 | every other iteration, the graph must be rendered twice in that 2.666 ms window instead of once.
37 | The maximum complexity of the graph is therefore unexpectedly limited.
38 |
39 | However, if WebAudio rendered 192 frames at a time, the CPU usage would
40 | remain constant, and more complex graphs could be rendered because the peak CPU
41 | would be same as the average. This does increase latency compared to a native
42 | size of 128, but since
43 | Android is already using a size of 192, there is no actual additional latency.
44 |
45 | Finally, some applications do not need these low latency requirements, and may
46 | also want AudioWorklets to process larger blocks to reduce function call
47 | overhead. In this case allowing render sizes of 1024 or 2048 could be
48 | appropriate.
49 |
50 | ## The API
51 | To allow user-selectable render size, we propose the following API:
52 |
53 | ```idl
54 | // New enum
55 | enum AudioContextRenderSizeCategory {
56 | "default",
57 | "hardware"
58 | };
59 |
60 | dictionary AudioContextOptions {
61 | (AudioContextLatencyCategory or double) latencyHint = "interactive";
62 | float sampleRate;
63 |
64 | // New addition
65 | (AudioContextRenderSizeCategory or unsigned long) renderSizeHint = "default";
66 | };
67 |
68 | dictionary OfflineAudioContextOptions {
69 | unsigned long numberOfChannels = 1;
70 | required unsigned long length;
71 | required float sampleRate;
72 |
73 | // New addition
74 | (AudioContextRenderSizeCategory or unsigned long) renderSizeHint = "default";
75 | };
76 |
77 | partial interface BaseAudioContext {
78 |   readonly attribute unsigned long renderSize;
79 | };
80 | ```
81 |
82 | ### Enumeration Description
83 | |Value|Description|
84 | |--|--|
85 | |"default" | Default rendering size of 128 frames |
86 | |"hardware" | Use an appropriate value for the hardware for an AudioContext or the default for an OfflineAudioContext|
87 |
88 | #### AudioContext
89 | For an AudioContext, the "hardware" category means a size is chosen that is
90 | appropriate to the current output device. For example, selecting "hardware" may
91 | result in a size of 192 frames for the Android phone used in the example.
92 |
93 | #### OfflineAudioContext
94 | For an OfflineAudioContext, there's no concept of "hardware", so using
95 | "hardware" is the same as "default".
96 |
97 | ### New Dictionary Members
98 | #### AudioContextOptions
99 |
100 |
renderSizeHint, of type (AudioContextRenderSizeCategory or unsigned long), defaulting to "default"
101 |
102 |
103 |
104 | Identifies the render size for the context. The preferred value of the
105 | renderSizeHint is one of the values from the
106 | AudioContextRenderSizeCategory. However, an unsigned long value may be
107 | given to request an exact number of frames to use for rendering. Powers of
108 | two between 64 and 2048, inclusive, MUST be supported. It is recommended that
109 | UAs also support values that are not a power of two.
110 |
111 | If the requested value is not supported by the UA, the UA MUST round the value
112 | up to the smallest supported value that is greater than or equal to the request.
113 | If the request exceeds the maximum supported value, it is clamped to the maximum.
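The rounding rule can be sketched as follows (`resolve_render_size` and the supported-size list are illustrative; the actual set of supported sizes is up to the UA):

```python
def resolve_render_size(requested, supported_sizes):
    """Round a requested render size up to the smallest supported value
    that is >= the request, clamping to the maximum supported size."""
    sizes = sorted(supported_sizes)
    for s in sizes:
        if s >= requested:
            return s
    return sizes[-1]

# The mandatory power-of-two sizes between 64 and 2048, inclusive.
MANDATORY_SIZES = [64, 128, 256, 512, 1024, 2048]
```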
114 |
renderSizeHint, of type (AudioContextRenderSizeCategory or unsigned long), defaulting to "default"
121 |
122 |
123 |
124 | This has exactly the same meaning, behavior, and constraints as for an
125 | `AudioContext`. However, for an OfflineAudioContext, the value of "hardware" is
126 | the same as "default".
127 |
138 | This is the actual number of frames used to render the graph. This may be
139 | different from the value requested by renderSizeHint.
140 |
141 |
142 | We explicitly do NOT support selecting a render size when using the
143 | 3-arg constructor for the OfflineAudioContext.
144 |
145 |
146 |
147 |
148 | ## Requirements
149 | ### Supported Sizes
150 | All UAs must support a `renderSize` that is a power of two between 64 and 2048,
151 | inclusive.
152 |
153 | It is highly recommended that other sizes that are not a power of two be
154 | supported. This is particularly important on Android where sizes of 96, 144,
155 | 192, and 240 are quite common. The problem isn't limited to Android.
156 | Windows generally wants 10 ms buffers, so sizes of 441 or 480 for 44.1
157 | kHz and 48 kHz, respectively, should be supported.
158 |
159 | ## Interaction with `latencyHint`
160 | The
161 | [`latencyHint`](https://www.w3.org/TR/webaudio/#dom-audiocontextoptions-latencyhint)
162 | for an AudioContext can interact with the `renderSize`. The
163 | exact interaction between these is up to the UA to implement in a meaningful
164 | way. In particular, for UAs that don't double buffer WebAudio's output, the
165 | `latencyHint` value can be 0, independent of the `renderSize`.
166 |
167 | However, for UAs that do double buffer, then, roughly, the `renderSize` chooses
168 | the minimum possible latency, and the `latencyHint` can increase this
169 | appropriately if needed.
170 |
171 | * If `renderSize` is "default", then 128 frames is used to render the graph and
172 | `latencyHint` behaves as before.
173 | * If `renderSize` is "hardware", then the graph is rendered using the hardware
174 | size. The latency value is chosen appropriately but the resulting latency
175 | value cannot be smaller than the hardware size.
176 | * If `renderSize` is a number, the graph is rendered using the appropriate
177 | UA-supported value. The `latencyHint` cannot produce latencies less than
178 | this.
179 |
180 | #### Non-normative Note:
181 | For example, suppose the "hardware" render size is 192 frames with a context
182 | whose sample rate is 48 kHz. Also assume that a "balanced" latency implies a
183 | latency of 20 ms, and "playback" implies a latency of 200 ms.
184 |
185 | Then, for
186 |
187 | * "interactive", the latency will be `192*n` where `n` is 1, 2, 3,..., and is
188 | chosen by the UA
189 | * "balanced", the latency will be 20 ms. This means the graph is rendered 5
190 | times (`5*192` = `20 ms * 48 kHz`) per callback.
191 | * "playback", the latency will be 200 ms. The graph is rendered 50 times per
192 | callback.
193 |
194 | If, however, a render size of 1024 is selected, we have:
195 |
196 | * "interactive", the latency will be `1024*n` where `n` is 1, 2, 3,..., and is
197 | chosen by the UA
198 | * "balanced", a latency of 20 ms is 960 frames, which is smaller than 1024. The
199 | resulting latency will be 1024 frames (21.33 msec).
200 | * "playback", the latency will be 200 ms since 200 ms is 9600 frames which is
201 | larger than 1024.
202 |
203 | Finally, if the render size is 2048, we have:
204 | * "interactive", the latency will be `2048*n` where `n` is 1, 2, 3,..., and is
205 | chosen by the UA
206 | * "balanced", a latency of 20 ms is 960 frames, which is smaller than 2048. The
207 | resulting latency will be 2048 frames (42.67 msec).
208 | * "playback", the latency will be 200 ms since 200 ms is 9600 frames which is
209 | larger than 2048.
210 |
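One plausible model of the double-buffered behavior in the note above (illustrative only, not normative): round the target latency up to a whole number of render quanta, with a floor of one quantum.

```python
def achieved_latency_frames(render_size, target_latency_ms, sample_rate):
    """Round the target latency up to a whole number of render quanta,
    never going below one render quantum."""
    target_frames = int(target_latency_ms * sample_rate / 1000)
    quanta = max(1, -(-target_frames // render_size))  # ceiling division
    return quanta * render_size
```

For example, "balanced" (20 ms) at 48 kHz yields 960 frames with a 192-frame quantum, but 1024 frames with a 1024-frame quantum and 2048 frames with a 2048-frame quantum, matching the note.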
211 |
212 |
213 |
214 | # Security and Privacy Issues
215 | When the user requests "hardware" for the render size, the user's hardware
216 | capability can be exposed and can be used to fingerprint the user. While no UA
217 | is required to return exactly the hardware value, it is most beneficial if it
218 | actually does, as explained in the Issues section above. But for privacy reasons, a UA may return a
219 | different value.
220 |
221 | # Implementation Issues
222 | Conceptually this change is relatively simple, but some nodes may have
223 | additional complexities. It is up to the UA to handle these appropriately.
224 |
225 | ## AnalyserNode Implementation
226 | The `AnalyserNode` currently specifies powers of two both for the size of the
227 | returned time-domain data and for the size of the frequency domain data. This
228 | is probably ok.
229 |
230 | ## ConvolverNode Implementation
231 | For efficiency, the `ConvolverNode` is often implemented using FFTs. Typically,
232 | only power-of-two FFTs have been used because the render size was 128. To
233 | support user-selectable sizes, either more complex algorithms are needed to
234 | buffer the data appropriately, or more general FFTs are required to support
235 | sizes that are not a power of two. It is up to the discretion of the UA to
236 | implement this appropriately for all the supported render sizes.
237 |
238 | ## DelayNode Implementation
239 | Since render size may be different from 128 frames, the minimum delay for a
240 | `DelayNode` in a loop is adjusted to match the selected render size. Thus, if
241 | the render size is 256, loops containing a delay node must have a delay greater
242 | than or equal to this. See step 4.2.6.1 in the processing algorithm in
243 | [Sec. 2.4. Rendering an Audio Graph](https://webaudio.github.io/web-audio-api/#rendering-loop).
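In other words, the minimum delay for a `DelayNode` in a cycle is one render quantum, expressed in seconds (a trivial sketch; the function name is illustrative):

```python
def min_loop_delay_seconds(render_size, sample_rate):
    """Minimum usable delayTime for a DelayNode inside a cycle:
    one render quantum, expressed in seconds."""
    return render_size / sample_rate
```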
244 |
245 | ### ScriptProcessorNode
246 | The [construction of a
247 | `ScriptProcessorNode`](https://webaudio.github.io/web-audio-api/#dom-baseaudiocontext-createscriptprocessor)
248 | requires a
249 | [`bufferSize`](https://webaudio.github.io/web-audio-api/#dom-baseaudiocontext-createscriptprocessor-buffersize-numberofinputchannels-numberofoutputchannels-buffersize)
250 | argument that must be a power of two. This is fine with appropriate buffering,
251 | but perhaps it would be better if the buffer sizes are defined to be a power of
252 | two times the render size. So, while the currently allowed sizes are 0 and
253 | `128*2^n` for `n` = 1, 2, ..., 7, we may want to specify the sizes as 0 and
254 | `r*2^n`, where `r` is the `renderSize`.
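The proposed set of buffer sizes can be written out directly (illustrative sketch; 0 keeps its existing meaning of letting the UA choose):

```python
def allowed_buffer_sizes(render_size, max_n=7):
    """Proposed ScriptProcessorNode buffer sizes: 0 ('UA chooses')
    plus render_size * 2**n for n = 1..max_n."""
    return [0] + [render_size * 2 ** n for n in range(1, max_n + 1)]
```

With `render_size=128` this reproduces today's allowed sizes of 256 through 16384.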
255 |
256 |
--------------------------------------------------------------------------------
/images/panner-coord.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
190 |
--------------------------------------------------------------------------------
/implementation-report.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 | Web Audio 1.0 Implementation report
7 |
8 |
38 |
40 |
41 |
42 |
43 |
45 |
46 | Web Audio API 1.0 Implementation Report
47 |
48 |
49 | 2021-Feb-16
50 |
51 |
52 |
53 |
54 |
55 | This report documents the overall implementation status and
56 | detailed test results for Web Audio API 1.0. [CanIUse WebAudio].
58 |
74 | WebKit was one of the earliest implementations,
75 | initially behind a vendor prefix.
76 | From September 2020, Safari
77 | Technical Preview 113 included improved Web Audio support and these improvements have now
78 | taken the implementation to an excellent level.
79 |
80 |
81 | Blink
82 |
83 |
84 | Chrome supports Web Audio API from version 10 on, with -webkit
85 | prefix, and from version 34 onwards
87 | unprefixed. Chrome for Android supports Web Audio API from
88 | version 49 on.
89 |
90 |
91 | Opera supports Web Audio API from version 15 on, with -webkit prefix,
92 | and from version 22 onwards unprefixed.
93 |
94 |
95 | The Blink implementation postdates the WebKit/Blink split, and should
96 | be considered a separate implementation. Blink has a
98 | bugtracker. The overall implementation level is excellent.
99 |
100 |
101 | Firefox
102 |
103 |
104 | Firefox supports Web Audio API from version 15 on, with -webkit
105 | prefix, and from version 23 onwards
107 | unprefixed. Gecko has a
109 | bugtracker. The overall implementation level is very good.
110 |