├── .gitignore ├── ui-example.png ├── w3c.json ├── .pr-preview.json ├── .github ├── PULL_REQUEST_TEMPLATE.md └── workflows │ └── auto-publish.yml ├── .travis.yml ├── package.json ├── README.md ├── Makefile ├── explainers ├── contextual-biasing.md └── on-device-speech-recognition.md └── index.bs /.gitignore: -------------------------------------------------------------------------------- 1 | index.html 2 | node_modules/ 3 | .DS_Store 4 | .idea/ 5 | 6 | -------------------------------------------------------------------------------- /ui-example.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/WebAudio/web-speech-api/HEAD/ui-example.png -------------------------------------------------------------------------------- /w3c.json: -------------------------------------------------------------------------------- 1 | { 2 | "group": "cg/audio-comgp", 3 | "contacts": ["svgeesus"], 4 | "repo-type": "cg-report" 5 | } 6 | -------------------------------------------------------------------------------- /.pr-preview.json: -------------------------------------------------------------------------------- 1 | { 2 | "src_file": "index.bs", 3 | "type": "bikeshed", 4 | "params": { 5 | "force": 1 6 | } 7 | } 8 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | Closes #??? 2 | 3 | The following tasks have been completed: 4 | 5 | * [ ] Updated web-platform-tests: (link to pull request) 6 | 7 | Implementation commitment: 8 | 9 | * [ ] Blink: (link to issue) 10 | * [ ] Gecko: (link to issue) 11 | * [ ] WebKit: (link to issue) 12 | 13 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | branches: 2 | only: 3 | - master 4 | language: python 5 | python: 6 | - "3.8" 7 | install: 8 | - pip install bikeshed 9 | - bikeshed update 10 | script: 11 | - bikeshed spec 12 | before_deploy: 13 | - mkdir out 14 | - mv *.html *.png out/ 15 | deploy: 16 | local-dir: out 17 | provider: pages 18 | skip-cleanup: true 19 | github-token: $GITHUB_TOKEN 20 | keep-history: true 21 | on: 22 | branch: master 23 | -------------------------------------------------------------------------------- /.github/workflows/auto-publish.yml: -------------------------------------------------------------------------------- 1 | name: Auto-publish 2 | on: 3 | pull_request: {} 4 | push: 5 | paths: 6 | - index.bs 7 | branches: [main] 8 | 9 | jobs: 10 | main: 11 | name: Build, Validate and Deploy 12 | runs-on: ubuntu-latest 13 | steps: 14 | - uses: actions/checkout@v4 15 | - uses: w3c/spec-prod@v2 16 | with: 17 | GH_PAGES_BRANCH: gh-pages 18 | BUILD_FAIL_ON: link-error 19 | -------------------------------------------------------------------------------- /package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "web-speech-api", 3 | "version": "1.0.0", 4 | "description": "This is the source for the [Web Speech API](https://webaudio.github.io/web-speech-api/) spec.", 5 | "main": "index.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1" 8 | }, 9 | "repository": { 10 | "type": "git", 11 | "url": "git+https://github.com/WebAudio/web-speech-api.git" 12 | }, 13 | "bugs": { 14 | "url": "https://github.com/WebAudio/web-speech-api/issues" 15 | }, 
16 | "homepage": "https://github.com/WebAudio/web-speech-api#readme", 17 | "devDependencies": { 18 | "vnu-jar": "^24.10.17" 19 | } 20 | } 21 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Web Speech API 2 | 3 | This is the source for the [Web Speech API](https://webaudio.github.io/web-speech-api/) spec. 4 | 5 | ## Tests 6 | 7 | For normative changes, a corresponding 8 | [web-platform-tests](https://github.com/w3c/web-platform-tests) PR is highly appreciated. Typically, 9 | both PRs will be merged at the same time. Note that a test change that contradicts the spec should 10 | not be merged before the corresponding spec change. If testing is not practical, please explain why 11 | and if appropriate [file a web-platform-tests issue](https://github.com/w3c/web-platform-tests/issues/new) 12 | to follow up later. Add the `type:untestable` or `type:missing-coverage` label as appropriate. 13 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | SHELL := /bin/bash 2 | 3 | DST := $(patsubst %.bs,%.html,$(wildcard *.bs)) 4 | REMOTE := $(filter remote,$(MAKECMDGOALS)) 5 | 6 | all: $(DST) 7 | @ echo "All done" 8 | 9 | %.html : %.bs node_modules/vnu-jar/build/dist/vnu.jar 10 | ifndef REMOTE 11 | @ echo "Building $@" 12 | bikeshed --die-on=warning spec $< $@ 13 | java -jar node_modules/vnu-jar/build/dist/vnu.jar --also-check-css $@ 14 | else 15 | @ echo "Building $@ remotely" 16 | @ (HTTP_STATUS=$$(curl https://api.csswg.org/bikeshed/ \ 17 | --output $@ \ 18 | --write-out "%{http_code}" \ 19 | --header "Accept: text/plain, text/html" \ 20 | -F die-on=warning \ 21 | -F file=@$<) && \ 22 | [[ "$$HTTP_STATUS" -eq "200" ]]) || ( \ 23 | echo ""; cat $@; echo ""; \ 24 | rm -f index.html; \ 25 | exit 22 \ 26 | ); 27 | endif 28 | 29 | node_modules/vnu-jar/build/dist/vnu.jar: 30 | npm install vnu-jar 31 | 32 | remote: all 33 | 34 | -------------------------------------------------------------------------------- /explainers/contextual-biasing.md: -------------------------------------------------------------------------------- 1 | # Explainer: Contextual Biasing for the Web Speech API 2 | 3 | ## Introduction 4 | 5 | The Web Speech API provides powerful speech recognition capabilities to web applications. However, general-purpose speech recognition models can sometimes struggle with domain-specific terminology, proper nouns, or other words that are unlikely to appear in general conversation. This can lead to a frustrating user experience where the user's intent is frequently misrecognized. 6 | 7 | To address this, we introduce **contextual biasing** to the Web Speech API. This feature allows developers to provide "hints" to the speech recognition engine in the form of a list of phrases and boost values. By biasing the recognizer towards these phrases, applications can significantly improve the accuracy for vocabulary that is important in their specific context. 8 | 9 | ## Why Use Contextual Biasing? 10 | 11 | ### 1. **Improved Accuracy** 12 | By providing a list of likely phrases, developers can dramatically increase the probability of those phrases being recognized correctly. This is especially useful for words that are acoustically similar to more common words. 13 | 14 | ### 2. 
**Enhanced User Experience** 15 | When speech recognition "just works" for the user's context, it leads to a smoother, faster, and less frustrating interaction. Users don't have to repeat themselves or manually correct transcription errors. 16 | 17 | ### 3. **Enabling Specialized Applications** 18 | Contextual biasing makes the Web Speech API a more viable option for specialized applications in fields like medicine, law, science, or gaming, where precise and often uncommon terminology is essential. 19 | 20 | ## Example Use Cases 21 | 22 | ### 1. Voice-controlled Video Game 23 | A video game might have characters with unique names like "Zoltan," "Xylia," or "Grog." Without contextual biasing, a command like "Attack Zoltan" might be misheard as "Attack Sultan." By providing a list of character and location names, the game can ensure commands are understood reliably. 24 | 25 | ### 2. E-commerce Product Search 26 | An online store can bias the speech recognizer towards its product catalog. When a user says "Show me Fujifilm cameras," the recognizer is more likely to correctly identify "Fujifilm" instead of a more common but similar-sounding word. 27 | 28 | ### 3. Medical Dictation 29 | A web-based application for doctors could be biased towards recognizing complex medical terms, drug names, and procedures. This allows for accurate and efficient voice-based note-taking. 30 | 31 | ## New API Components 32 | 33 | Contextual biasing is implemented through a new `phrases` attribute on the `SpeechRecognition` interface, which uses the new `SpeechRecognitionPhrase` interface. 34 | 35 | ### 1. `SpeechRecognition.phrases` attribute 36 | This attribute is an `ObservableArray` that allows developers to provide contextual hints for the recognition session. It can be modified like a JavaScript `Array`. 37 | 38 | ### 2. `SpeechRecognitionPhrase` interface 39 | Represents a single phrase and its associated boost value. 40 | 41 | - `constructor(DOMString phrase, optional float boost = 1.0)`: Creates a new phrase object. 42 | - `phrase`: The text string to be boosted. 43 | - `boost`: A float between 0.0 and 10.0. Higher values make the phrase more likely to be recognized. 44 | 45 | 46 | ### Example Usage 47 | 48 | ```javascript 49 | // A list of phrases relevant to our application's context. 50 | const phraseData = [ 51 | { phrase: 'Zoltan', boost: 3.0 }, 52 | { phrase: 'Grog', boost: 2.0 }, 53 | ]; 54 | 55 | // Create SpeechRecognitionPhrase objects. 56 | const phraseObjects = phraseData.map(p => new SpeechRecognitionPhrase(p.phrase, p.boost)); 57 | 58 | const recognition = new SpeechRecognition(); 59 | 60 | // Assign the phrase objects to the recognition instance. 61 | // The attribute is an ObservableArray, so we can assign an array to it. 62 | recognition.phrases = phraseObjects; 63 | 64 | // We can also dynamically add/remove phrases. 65 | recognition.phrases.push(new SpeechRecognitionPhrase('Xylia', 2.5)); 66 | 67 | // Some user agents (e.g. Chrome) might only support on-device contextual biasing. 68 | recognition.processLocally = true; 69 | 70 | recognition.onresult = (event) => { 71 | const transcript = event.results[0][0].transcript; 72 | console.log(`Result: ${transcript}`); 73 | }; 74 | 75 | recognition.onerror = (event) => { 76 | if (event.error === 'phrases-not-supported') { 77 | console.warn('Contextual biasing is not supported by this browser/service.'); 78 | } 79 | }; 80 | 81 | // Start recognition when the user clicks a button. 
82 | document.getElementById('speak-button').onclick = () => { 83 | recognition.start(); 84 | }; 85 | ``` 86 | 87 | ## Conclusion 88 | 89 | Contextual biasing is a powerful enhancement to the Web Speech API that gives developers finer control over the speech recognition process. By allowing applications to provide context-specific hints, this feature improves accuracy, creates a better user experience, and makes voice-enabled web applications more practical for a wider range of specialized use cases. -------------------------------------------------------------------------------- /explainers/on-device-speech-recognition.md: -------------------------------------------------------------------------------- 1 | # Explainer: On-Device Speech Recognition for the Web Speech API 2 | 3 | ## Introduction 4 | 5 | The Web Speech API is a powerful browser feature that enables applications to perform speech recognition. Traditionally, this functionality relies on sending audio data to cloud-based services for recognition. While this approach is effective, it has certain drawbacks: 6 | 7 | - **Privacy concerns:** Raw and transcribed audio is transmitted over the network. 8 | - **Latency issues:** Users may experience delays due to network communication. 9 | - **Offline limitations:** Speech recognition does not work without an internet connection. 10 | 11 | To address these issues, we introduce **on-device speech recognition capabilities** as part of the Web Speech API. This enhancement allows speech recognition to run locally on user devices, providing a faster, more private, and offline-compatible experience. 12 | 13 | ## Why Use On-Device Speech Recognition? 14 | 15 | ### 1. **Privacy** 16 | On-device processing ensures that neither raw audio nor transcriptions leave the user's device, enhancing data security and user trust. 17 | 18 | ### 2. **Performance** 19 | Local processing reduces latency, providing a smoother and faster user experience. 20 | 21 | ### 3. **Offline Functionality** 22 | Applications can offer speech recognition capabilities even without an active internet connection, increasing their utility in remote or low-connectivity environments. 23 | ## New API Members 24 | 25 | This enhancement introduces new members to the Web Speech API to support on-device recognition: a dictionary for configuration, an instance attribute, and static methods for managing capabilities. 26 | 27 | ### `SpeechRecognitionOptions` Dictionary 28 | 29 | This dictionary is used to configure speech recognition preferences, both for individual sessions and for querying or installing capabilities. 30 | 31 | It includes the following members: 32 | 33 | - `langs`: A required sequence of `DOMString` representing BCP-47 language tags (e.g., `['en-US']`). 34 | - `processLocally`: A boolean that, if `true`, instructs the recognition to be performed on-device. If `false` (the default), any available recognition method (cloud-based or on-device) may be used. 35 | 36 | 37 | ```idl 38 | dictionary SpeechRecognitionOptions { 39 | required sequence langs; // BCP-47 language tags 40 | boolean processLocally = false; // Instructs the recognition to be performed on-device. If `false` (default), any available recognition method may be used. 41 | }; 42 | ``` 43 | 44 | #### Example Usage 45 | ```javascript 46 | const recognition = new SpeechRecognition(); 47 | recognition.options = { 48 | langs: ['en-US'], 49 | processLocally: true 50 | }; 51 | recognition.start(); 52 | ``` 53 | 54 | ## Example use cases 55 | ### 1. 
Company with data residency requirements 56 | Websites with strict data residency requirements (i.e., regulatory, legal, or company policy) can ensure that audio data remains on the user's device and is not sent over the network for processing. This is particularly crucial for compliance with regulations like GDPR, which considers voice as personally identifiable information (PII) as voice recordings can reveal information about an individual's gender, ethnic origin, or even potential health conditions. On-device processing significantly enhances user privacy by minimizing the exposure of sensitive voice data. 57 | 58 | ### 2. Video conferencing service with strict performance requirements (e.g. meet.google.com) 59 | Some websites would only adopt the Web Speech API if it meets strict performance requirements. On-device speech recognition may provide better accuracy and latency as well as provide additional features (e.g. contextual biasing) that may not be available by the cloud-based service used by the user agent. In the event on-device speech recognition is not available, these websites may elect to use an alternative cloud-based speech recognition provider that meet these requirements instead of the default one provided by the user agent. 60 | 61 | ### 3. Educational website (e.g. khanacademy.org) 62 | Applications that need to function in unreliable or offline network conditions—such as voice-based productivity tools, educational software, or accessibility features—benefit from on-device speech recognition. This enables uninterrupted functionality during flights, remote travel, or in areas with limited connectivity. When on-device recognition is unavailable, a website can choose to hide the UI or gracefully degrade functionality to maintain a coherent user experience. 63 | 64 | ## New API Components 65 | 66 | ### 1. `static Promise SpeechRecognition.available(SpeechRecognitionOptions options)` 67 | This static method checks the availability of speech recognition capabilities matching the provided `SpeechRecognitionOptions`. 68 | 69 | The method returns a `Promise` that resolves to an `AvailabilityStatus` enum string: 70 | - `"available"`: Ready to use according to the specified options. 71 | - `"downloadable"`: Not currently available, but resources (e.g., language packs for on-device) can be downloaded. 72 | - `"downloading"`: Resources are currently being downloaded. 73 | - `"unavailable"`: Not available and not downloadable. 74 | 75 | #### Example Usage 76 | ```javascript 77 | // Check availability for on-device English (US) 78 | const options = { langs: ['en-US'], processLocally: true }; 79 | 80 | SpeechRecognition.available(options).then((status) => { 81 | console.log(`Speech recognition status for ${options.langs.join(', ')} (on-device): ${status}.`); 82 | if (status === 'available') { 83 | console.log('Ready to use on-device speech recognition.'); 84 | } else if (status === 'downloadable') { 85 | console.log('Resources are downloadable. Call install() if needed.'); 86 | } else if (status === 'downloading') { 87 | console.log('Resources are currently downloading.'); 88 | } else { 89 | console.log('Not available for on-device speech recognition.'); 90 | } 91 | }); 92 | ``` 93 | 94 | ### 2. `Promise install(SpeechRecognitionOptions options)` 95 | This method installs the resources required for speech recognition matching the provided `SpeechRecognitionOptions`. The installation process may download and configure necessary language models. 
96 | 97 | #### Example Usage 98 | ```javascript 99 | // Install on-device resources for English (US) 100 | const options = { langs: ['en-US'], processLocally: true }; 101 | SpeechRecognition.install(options).then((success) => { 102 | if (success) { 103 | console.log(`On-device speech recognition resources for ${options.langs.join(', ')} installed successfully.`); 104 | } else { 105 | console.error(`Unable to install on-device speech recognition resources for ${options.langs.join(', ')}. This could be due to unsupported languages or download issues.`); 106 | } 107 | }); 108 | ``` 109 | 110 | ## Supported languages 111 | The availability of on-device speech recognition languages is user-agent dependent. As an example, Google Chrome supports the following languages for on-device recognition: 112 | * de-DE (German, Germany) 113 | * en-US (English, United States) 114 | * es-ES (Spanish, Spain) 115 | * fr-FR (French, France) 116 | * hi-IN (Hindi, India) 117 | * id-ID (Indonesian, Indonesia) 118 | * it-IT (Italian, Italy) 119 | * ja-JP (Japanese, Japan) 120 | * ko-KR (Korean, South Korea) 121 | * pl-PL (Polish, Poland) 122 | * pt-BR (Portuguese, Brazil) 123 | * ru-RU (Russian, Russia) 124 | * th-TH (Thai, Thailand) 125 | * tr-TR (Turkish, Turkey) 126 | * vi-VN (Vietnamese, Vietnam) 127 | * zh-CN (Chinese, Mandarin, Simplified) 128 | * zh-TW (Chinese, Mandarin, Traditional) 129 | 130 | ## Privacy considerations 131 | To reduce the risk of fingerprinting, user agents must implement privacy-preserving countermeasures. The Web Speech API will employ the same masking techniques used by the [Web Translation API](https://github.com/webmachinelearning/writing-assistance-apis/pull/47). 132 | 133 | ## Conclusion 134 | The addition of on-device speech recognition capabilities to the Web Speech API marks a significant step forward in creating more private, performant, and accessible web applications. By leveraging these new methods, developers can enhance user experiences while addressing key concerns around privacy and connectivity. -------------------------------------------------------------------------------- /index.bs: -------------------------------------------------------------------------------- 1 | 22 | 23 |
  24 | {
  25 |   "HTMLSPEECH": {
  26 |     "authors": [
  27 |       "Michael Bodell",
  28 |       "Björn Bringert",
  29 |       "Robert Brown",
  30 |       "Daniel C. Burnett",
  31 |       "Deborah Dahl",
  32 |       "Dan Druta",
  33 |       "Patrick Ehlen",
  34 |       "Charles Hemphill",
  35 |       "Michael Johnston",
  36 |       "Olli Pettay",
  37 |       "Satish Sampath",
  38 |       "Marc Schröder",
  39 |       "Glen Shires",
  40 |       "Raj Tumuluri",
  41 |       "Milan Young"
  42 |     ],
  43 |     "href": "https://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/",
  44 |     "title": "HTML Speech Incubator Group Final Report"
  45 |   }
  46 | }
  47 | 
48 | 49 |

Introduction

50 | 51 |

This section is non-normative.

52 | 53 |

The Web Speech API aims to enable web developers to provide, in a web browser, speech-input and text-to-speech output features that are typically not available when using standard speech-recognition or screen-reader software. 54 | The API itself is agnostic of the underlying speech recognition and synthesis implementation and can support both server-based and client-based/embedded recognition and synthesis. 55 | The API is designed to enable both brief (one-shot) speech input and continuous speech input. 56 | Speech recognition results are provided to the web page as a list of hypotheses, along with other relevant information for each hypothesis.

57 | 58 |

This specification is a subset of the API defined in the [[HTMLSPEECH|HTML Speech Incubator Group Final Report]]. 59 | That report is entirely informative since it is not a standards track document. 60 | All portions of that report may be considered informative with regards to this document, and provide an informative background to this document. 61 | This specification is a fully-functional subset of that report. 62 | Specifically, this subset excludes the underlying transport protocol, the proposed additions to HTML markup, and it defines a simplified subset of the JavaScript API. 63 | This subset supports the majority of use-cases and sample code in the Incubator Group Final Report. 64 | This subset does not preclude future standardization of additions to the markup, API or underlying transport protocols, and indeed the Incubator Report defines a potential roadmap for such future work.

65 | 66 | 67 |

Use Cases

68 | 69 |

This section is non-normative.

70 | 71 |

This specification supports the following use cases, as defined in [[HTMLSPEECH#use-cases|Section 4 of the Incubator Report]].

72 | 73 |
    74 |
  • Voice Web Search
  • 75 |
  • Speech Command Interface
  • 76 |
  • Continuous Recognition of Open Dialog
  • 77 |
  • Speech UI present when no visible UI need be present
  • 78 |
  • Voice Activity Detection
  • 79 |
  • Temporal Structure of Synthesis to Provide Visual Feedback
  • 80 |
  • Hello World
  • 81 |
  • Speech Translation
  • 82 |
  • Speech Enabled Email Client
  • 83 |
  • Dialog Systems
  • 84 |
  • Multimodal Interaction
  • 85 |
  • Speech Driving Directions
  • 86 |
  • Multimodal Video Game
  • 87 |
  • Multimodal Search
  • 88 |
89 | 90 |

To keep the API to a minimum, this specification does not directly support the following use case. 91 | This does not preclude adding support for this as a future API enhancement, and indeed the Incubator report provides a roadmap for doing so.

92 | 93 |
    94 |
  • Rerecognition
  • 95 |
96 | 97 |

Security and privacy considerations

98 | 99 |
    100 |
  1. User agents must only start speech input sessions with explicit, informed user consent. 101 | User consent can include, for example: 102 |
      103 |
    • User click on a visible speech input element which has an obvious graphical representation showing that it will start speech input.
    • 104 |
    • Accepting a permission prompt shown as the result of a call to {{SpeechRecognition/start()}}.
    • 105 |
    • Consent previously granted to always allow speech input for this web page.
    • 106 |
    107 |
  2. 108 | 109 |
  3. User agents must give the user an obvious indication when audio is being recorded. 110 |
      111 |
    • In a graphical user agent, this could be a mandatory notification displayed by the user agent as part of its chrome and not accessible by the web page. 112 | This could for example be a pulsating/blinking record icon as part of the browser chrome/address bar, an indication in the status bar, an audible notification, or anything else relevant and accessible to the user. 113 | This UI element must also allow the user to stop recording.
      114 | Example UI recording notification.
    • 115 | 116 |
    • In a speech-only user agent, the indication may for example take the form of the system speaking the label of the speech input element, followed by a short beep.
    • 117 |
    118 |
  4. 119 | 120 |
  5. The user agent may also give the user a longer explanation the first time speech input is used, to let the user know what it is and how they can tune their privacy settings to disable speech recording if required.
  6. 121 | 122 |
  7. To mitigate the risk of fingerprinting, user agents MUST NOT personalize speech recognition when performing speech recognition on a {{MediaStreamTrack}}.
  8. 123 |
124 | 125 |

Implementation considerations

126 | 127 |

This section is non-normative.

128 | 129 |
    130 |
  1. Spoken password inputs can be problematic from a security perspective, but it is up to the user to decide if they want to speak their password.
  2. 131 | 132 |
3. Speech input could potentially be used to eavesdrop on users. 133 | Malicious webpages could use tricks such as hiding the input element or otherwise making the user believe that it has stopped recording speech while continuing to do so. 134 | They could also potentially style the input element to appear as something else and trick the user into clicking it. 135 | An example of styling the file input element can be seen at https://www.quirksmode.org/dom/inputfile.html. 136 | The above recommendations are intended to reduce the risk of such attacks.
  4. 137 |
138 | 139 |

API Description

140 | 141 |

This section is normative.

142 | 143 |

The SpeechRecognition Interface

144 | 145 |

The speech recognition interface is the scripted web API for controlling a given recognition.

146 | The term "final result" indicates a {{SpeechRecognitionResult}} in which the {{SpeechRecognitionResult/isFinal}} attribute is true. 147 | The term "interim result" indicates a {{SpeechRecognitionResult}} in which the {{SpeechRecognitionResult/isFinal}} attribute is false. 148 | 149 | {{SpeechRecognition}} has the following internal slots: 150 | 151 |
152 | : [[started]] 153 | :: 154 | A boolean flag representing whether the speech recognition started. The initial value is false. 155 |
156 | 157 |
158 | : [[processLocally]] 159 | :: 160 | A boolean flag indicating whether recognition MUST be performed locally. The initial value is false. 161 |
162 | 163 |
164 | : [[phrases]] 165 | :: 166 | An {{ObservableArray}} of {{SpeechRecognitionPhrase}} objects representing a list of phrases for contextual biasing. The initial value is a new empty {{ObservableArray}}. 167 |
168 | 169 | 170 | [SecureContext, Exposed=Window] 171 | interface SpeechRecognition : EventTarget { 172 | constructor(); 173 | 174 | // recognition parameters 175 | attribute SpeechGrammarList grammars; 176 | attribute DOMString lang; 177 | attribute boolean continuous; 178 | attribute boolean interimResults; 179 | attribute unsigned long maxAlternatives; 180 | attribute boolean processLocally; 181 | attribute ObservableArray<SpeechRecognitionPhrase> phrases; 182 | 183 | // methods to drive the speech interaction 184 | undefined start(); 185 | undefined start(MediaStreamTrack audioTrack); 186 | undefined stop(); 187 | undefined abort(); 188 | static Promise<AvailabilityStatus> available(SpeechRecognitionOptions options); 189 | static Promise<boolean> install(SpeechRecognitionOptions options); 190 | 191 | // event methods 192 | attribute EventHandler onaudiostart; 193 | attribute EventHandler onsoundstart; 194 | attribute EventHandler onspeechstart; 195 | attribute EventHandler onspeechend; 196 | attribute EventHandler onsoundend; 197 | attribute EventHandler onaudioend; 198 | attribute EventHandler onresult; 199 | attribute EventHandler onnomatch; 200 | attribute EventHandler onerror; 201 | attribute EventHandler onstart; 202 | attribute EventHandler onend; 203 | }; 204 | 205 | dictionary SpeechRecognitionOptions { 206 | required sequence<DOMString> langs; 207 | boolean processLocally = false; 208 | }; 209 | 210 | enum SpeechRecognitionErrorCode { 211 | "no-speech", 212 | "aborted", 213 | "audio-capture", 214 | "network", 215 | "not-allowed", 216 | "service-not-allowed", 217 | "language-not-supported", 218 | "phrases-not-supported" 219 | }; 220 | 221 | enum AvailabilityStatus { 222 | "unavailable", 223 | "downloadable", 224 | "downloading", 225 | "available" 226 | }; 227 | 228 | [SecureContext, Exposed=Window] 229 | interface SpeechRecognitionErrorEvent : Event { 230 | constructor(DOMString type, SpeechRecognitionErrorEventInit eventInitDict); 231 | readonly attribute SpeechRecognitionErrorCode error; 232 | readonly attribute DOMString message; 233 | }; 234 | 235 | dictionary SpeechRecognitionErrorEventInit : EventInit { 236 | required SpeechRecognitionErrorCode error; 237 | DOMString message = ""; 238 | }; 239 | 240 | // Item in N-best list 241 | [SecureContext, Exposed=Window] 242 | interface SpeechRecognitionAlternative { 243 | readonly attribute DOMString transcript; 244 | readonly attribute float confidence; 245 | }; 246 | 247 | // A complete one-shot simple response 248 | [SecureContext, Exposed=Window] 249 | interface SpeechRecognitionResult { 250 | readonly attribute unsigned long length; 251 | getter SpeechRecognitionAlternative item(unsigned long index); 252 | readonly attribute boolean isFinal; 253 | }; 254 | 255 | // A collection of responses (used in continuous mode) 256 | [SecureContext, Exposed=Window] 257 | interface SpeechRecognitionResultList { 258 | readonly attribute unsigned long length; 259 | getter SpeechRecognitionResult item(unsigned long index); 260 | }; 261 | 262 | // A full response, which could be interim or final, part of a continuous response or not 263 | [SecureContext, Exposed=Window] 264 | interface SpeechRecognitionEvent : Event { 265 | constructor(DOMString type, SpeechRecognitionEventInit eventInitDict); 266 | readonly attribute unsigned long resultIndex; 267 | readonly attribute SpeechRecognitionResultList results; 268 | }; 269 | 270 | dictionary SpeechRecognitionEventInit : EventInit { 271 | unsigned long resultIndex = 0; 272 | required 
SpeechRecognitionResultList results; 273 | }; 274 | 275 | // The object representing a speech grammar. This interface has been deprecated and exists in this spec for the sole purpose of maintaining backwards compatibility. 276 | [Exposed=Window] 277 | interface SpeechGrammar { 278 | attribute DOMString src; 279 | attribute float weight; 280 | }; 281 | 282 | // The object representing a speech grammar collection. This interface has been deprecated and exists in this spec for the sole purpose of maintaining backwards compatibility. 283 | [Exposed=Window] 284 | interface SpeechGrammarList { 285 | constructor(); 286 | readonly attribute unsigned long length; 287 | getter SpeechGrammar item(unsigned long index); 288 | undefined addFromURI(DOMString src, 289 | optional float weight = 1.0); 290 | undefined addFromString(DOMString string, 291 | optional float weight = 1.0); 292 | }; 293 | 294 | // The object representing a phrase for contextual biasing. 295 | [SecureContext, Exposed=Window] 296 | interface SpeechRecognitionPhrase { 297 | constructor(DOMString phrase, optional float boost = 1.0); 298 | readonly attribute DOMString phrase; 299 | readonly attribute float boost; 300 | }; 301 | 302 | 303 |

SpeechRecognition Attributes

304 | 305 |
306 |
grammars attribute
307 |
The grammars attribute stores the collection of SpeechGrammar objects which represent the grammars that are active for this recognition. 308 | This attribute does nothing and exists in this spec for the sole purpose of maintaining backwards compatibility.
309 | 310 |
lang attribute
311 |
This attribute will set the language of the recognition for the request, using a valid BCP 47 language tag. [[!BCP47]] 312 | If unset, it remains unset when read in script, but recognition will default to the language of the html document root element and associated hierarchy. 313 | This default value is computed and used when the input request opens a connection to the recognition service.
314 | 315 |
continuous attribute
316 |
When the continuous attribute is set to false, the user agent must return no more than one final result in response to starting recognition, 317 | for example a single turn pattern of interaction. 318 | When the continuous attribute is set to true, the user agent must return zero or more final results representing multiple consecutive recognitions in response to starting recognition, 319 | for example a dictation. 320 | The default value must be false. Note, this attribute setting does not affect interim results.
321 | 322 |
interimResults attribute
323 |
Controls whether interim results are returned. 324 | When set to true, interim results should be returned. 325 | When set to false, interim results must not be returned. 326 | The default value must be false. Note, this attribute setting does not affect final results.
327 | 328 |
maxAlternatives attribute
329 |
This attribute will set the maximum number of {{SpeechRecognitionAlternative}}s per result. 330 | The default value is 1.
331 | 332 |
processLocally attribute
333 |
This attribute, when set to true, indicates a requirement that the speech recognition process MUST be performed locally on the user's device. 334 | If set to false, the user agent can choose between local and remote processing. 335 | The default value is false. 336 |
337 | 338 |
phrases attribute
339 |
340 | The `phrases` attribute provides a list of {{SpeechRecognitionPhrase}} objects to be used for contextual biasing. This is an {{ObservableArray}}, which can be modified like a JavaScript `Array` (e.g., using `push()`). 341 |
342 |
343 | The getter steps are to return the value of {{SpeechRecognition/[[phrases]]}}. 344 |
345 |
346 | 347 |
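The following non-normative sketch shows these recognition parameters being configured from script; the specific language tag, phrase text, and boost value are illustrative only.

```javascript
// Sketch: configuring a SpeechRecognition instance via the attributes above.
const recognition = new SpeechRecognition();

recognition.lang = 'en-US';           // BCP 47 language tag; defaults to the document language when unset
recognition.continuous = true;        // allow multiple final results (dictation-style interaction)
recognition.interimResults = true;    // deliver interim results as they become available
recognition.maxAlternatives = 3;      // up to 3 SpeechRecognitionAlternative objects per result
recognition.processLocally = false;   // let the user agent choose local or remote recognition

// phrases is an ObservableArray of SpeechRecognitionPhrase objects used for contextual biasing.
recognition.phrases.push(new SpeechRecognitionPhrase('Xylia', 2.5));
```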

The group has discussed whether WebRTC might be used to specify selection of audio sources and remote recognizers. 348 | See Interacting with WebRTC, the Web Audio API and other external sources thread on public-speech-api@w3.org.

349 | 350 |

SpeechRecognition Methods

351 | 352 |
353 |
start() method
354 |
355 | Start the speech recognition process directly from a microphone on the device. 356 | When invoked, run the following steps: 357 | 358 | 1. Let |requestMicrophonePermission| be a boolean variable set to `true`. 359 | 1. Run the [=start session algorithm=] with |requestMicrophonePermission|. 360 |
361 | 362 |
start({{MediaStreamTrack}} audioTrack) method
363 |
364 | Start the speech recognition process using a {{MediaStreamTrack}}. 365 | When invoked, run the following steps: 366 | 367 | 1. Let |audioTrack| be the first argument. 368 | 1. If |audioTrack|'s {{MediaStreamTrack/kind}} attribute is NOT `"audio"`, 369 | throw an {{InvalidStateError}} and abort these steps. 370 | 1. If |audioTrack|'s {{MediaStreamTrack/readyState}} attribute is NOT 371 | `"live"`, throw an {{InvalidStateError}} and abort these steps. 372 | 1. Let |requestMicrophonePermission| be `false`. 373 | 1. Run the [=start session algorithm=] with |requestMicrophonePermission|. 374 |
375 | 376 |
stop() method
377 |
The stop method represents an instruction to the recognition service to stop listening to more audio, and to try and return a result using just the audio that it has already received for this recognition. 378 | A typical use of the stop method might be for a web application where the end user is doing the end pointing, similar to a walkie-talkie. 379 | The end user might press and hold the space bar to talk to the system and on the space down press the start call would have occurred and when the space bar is released the stop method is called to ensure that the system is no longer listening to the user. 380 | Once the stop method is called the speech service must not collect additional audio and must not continue to listen to the user. 381 | The speech service must attempt to return a recognition result (or a nomatch) based on the audio that it has already collected for this recognition. 382 | If the stop method is called on an object which is already stopped or being stopped (that is, start was never called on it, the end or error event has fired on it, or stop was previously called on it), the user agent must ignore the call.
383 | 384 |
abort() method
385 |
The abort method is a request to immediately stop listening and stop recognizing and do not return any information but that the system is done. 386 | When the abort method is called, the speech service must stop recognizing. 387 | The user agent must raise an end event once the speech service is no longer connected. 388 | If the abort method is called on an object which is already stopped or aborting (that is, start was never called on it, the end or error event has fired on it, or abort was previously called on it), the user agent must ignore the call.
389 | 390 |
available({{SpeechRecognitionOptions}} options) method
391 |
392 | The {{SpeechRecognition/available}} method returns a {{Promise}} that resolves to an {{AvailabilityStatus}} indicating the recognition availability matching the {{SpeechRecognitionOptions}} argument. 393 | Access to this method is gated behind the [=policy-controlled feature=] "on-device-speech-recognition", which has a [=policy-controlled feature/default allowlist=] of [=default allowlist/'self'=]. 394 | 395 | When invoked, run these steps: 396 | 1. Let promise be a new promise. 397 | 1. Run the availability algorithm with options and promise. If it returns an exception, throw it and abort these steps. 398 | 1. Return promise. 399 |
400 | 401 |
install({{SpeechRecognitionOptions}} options) method
402 |
403 | The {{SpeechRecognition/install}} method attempts to install speech recognition language packs for all languages specified in `options.langs`. 404 | It returns a {{Promise}} that resolves to a {{boolean}}. 405 | The promise resolves to `true` when all installation attempts for requested and supported languages succeed (or the languages were already installed). 406 | The promise resolves to `false` if `options.langs` is empty, if not all of the requested languages are supported, or if any installation attempt for a supported language fails. 407 | Access to this method is gated behind the [=policy-controlled feature=] "on-device-speech-recognition", which has a [=policy-controlled feature/default allowlist=] of [=default allowlist/'self'=]. 408 | 409 | When invoked, run these steps: 410 | 1. If the [=current settings object=]'s [=relevant global object=]'s [=associated Document=] is NOT [=fully active=], throw an {{InvalidStateError}} and abort these steps. 411 | 1. If any lang in {{SpeechRecognitionOptions/langs}} of options is not a valid [[!BCP47]] language tag, throw a {{SyntaxError}} and abort these steps. 412 | 1. If the on-device speech recognition language pack for any lang in {{SpeechRecognitionOptions/langs}} of options is unsupported, return a resolved {{Promise}} with false and skip the rest of these steps. 413 | 1. Let promise be a new promise. 414 | 1. For each lang in {{SpeechRecognitionOptions/langs}} of options, initiate the download of the on-device speech recognition language for lang. 415 |

416 | Note: The user agent can prompt the user for explicit permission to download the on-device speech recognition language pack. 417 |

418 | 1. [=Queue a task=] on the [=relevant global object=]'s [=task queue=] to run the following step: 419 | - When the download of all languages specified by {{SpeechRecognitionOptions/langs}} of options succeeds, resolve promise with true, otherwise resolve it with false. 420 |

421 | Note: The false resolution of the Promise does not indicate the specific cause of failure. User agents are encouraged to provide more detailed information about the failure in developer tools console messages. However, this detailed error information is not exposed to the script. 422 |

423 | 1. Return promise. 424 |

425 | {{SpeechRecognitionOptions/processLocally}} of options is not used in this algorithm. 426 |

427 |
428 | 429 |
430 | 431 |

AvailabilityStatus Enum Values

432 |

The {{AvailabilityStatus}} enum indicates the availability of speech recognition capabilities. Its values are:

433 |
434 |
"unavailable"
435 |
Indicates that speech recognition is not available for the specified language(s) and processing preference. 436 | If {{SpeechRecognitionOptions/processLocally}} of options is `true`, this means on-device recognition for the language is not supported by the user agent. 437 | If {{SpeechRecognitionOptions/processLocally}} of options is `false`, it means neither local nor remote recognition is available for at least one of the specified languages.
438 | 439 |
"downloadable"
440 |
Indicates that on-device speech recognition for the specified language(s) is supported by the user agent but not yet installed. It can potentially be installed using the {{SpeechRecognition/install()}} method. This status is primarily relevant when {{SpeechRecognitionOptions/processLocally}} of options is true.
441 | 442 |
"downloading"
443 |
Indicates that on-device speech recognition for the specified language(s) is currently in the process of being downloaded. This status is primarily relevant when {{SpeechRecognitionOptions/processLocally}} of options is true.
444 | 445 |
"available"
446 |
Indicates that speech recognition is available for all specified language(s) and the given processing preference. 447 | If {{SpeechRecognitionOptions/processLocally}} of options is true, this means on-device recognition is installed and ready. 448 | If {{SpeechRecognitionOptions/processLocally}} of options is false, it means recognition (which could be local or remote) is available.
449 |
450 | 451 |

When the availability algorithm with options and promise is invoked, the user agent MUST run the following steps: 452 | 1. If the [=current settings object=]'s [=relevant global object=]'s [=associated Document=] is NOT [=fully active=], throw an {{InvalidStateError}} and abort these steps. 453 | 1. Let langs be {{SpeechRecognitionOptions/langs}} of options. 454 | 1. If any lang in langs is not a valid [[!BCP47]] language tag, throw a {{SyntaxError}} and abort these steps. 455 | 1. If {{SpeechRecognitionOptions/processLocally}} of options is `false`: 456 | 1. If langs is an empty sequence, let status be {{AvailabilityStatus/unavailable}}. 457 | 1. Else if speech recognition (which may be remote) is available for all languages in langs, let status be {{AvailabilityStatus/available}}. 458 | 1. Else, let status be {{AvailabilityStatus/unavailable}}. 459 | 1. If {{SpeechRecognitionOptions/processLocally}} of options is `true`: 460 |

    1. If langs is an empty sequence, let status be {{AvailabilityStatus/unavailable}}.
    1. Else:
        1. Let finalStatus be {{AvailabilityStatus/available}}.
        1. For each language in langs:
            1. Let currentLanguageStatus be a new variable to hold the status for language.
            1. If on-device speech recognition for language is installed, set currentLanguageStatus to {{AvailabilityStatus/available}}.
            1. Else if on-device speech recognition for language is currently being downloaded, set currentLanguageStatus to {{AvailabilityStatus/downloading}}.
            1. Else if on-device speech recognition for language is supported by the user agent but not yet installed, set currentLanguageStatus to {{AvailabilityStatus/downloadable}}.
            1. Else (on-device speech recognition for language is not supported), set currentLanguageStatus to {{AvailabilityStatus/unavailable}}.
            1. If currentLanguageStatus comes after finalStatus in the ordered list `[{{AvailabilityStatus/available}}, {{AvailabilityStatus/downloading}}, {{AvailabilityStatus/downloadable}}, {{AvailabilityStatus/unavailable}}]`, set finalStatus to currentLanguageStatus.
        1. Let status be finalStatus.
479 | 1. [=Queue a task=] on the [=relevant global object=]'s [=task queue=] to run the following step: 480 | - Resolve promise with status. 481 | 482 | When the start session algorithm with 483 | |requestMicrophonePermission| is invoked, the user agent MUST run the 484 | following steps: 485 | 486 | 1. If the [=current settings object=]'s [=relevant global object=]'s 487 | [=associated Document=] is NOT [=fully active=], throw an {{InvalidStateError}} 488 | and abort these steps. 489 | 1. If {{SpeechRecognition/[[started]]}} is `true` and no error event or end event 491 | has fired on it, throw an {{InvalidStateError}} and abort these steps. 492 | 1. If this.{{SpeechRecognition/phrases}}'s `length` is greater than 0 and the user agent does not support contextual biasing: 493 | 1. [=Queue a task=] to [=fire an event=] named error at [=this=] using {{SpeechRecognitionErrorEvent}} with its {{SpeechRecognitionErrorEvent/error}} attribute initialized to `phrases-not-supported` and its {{SpeechRecognitionErrorEvent/message}} attribute set to an implementation-defined string detailing the reason. 494 | 1. Abort these steps. 495 | 1. If this.{{SpeechRecognition/[[processLocally]]}} is `true`: 496 | 1. If the user agent determines that local speech recognition is not available for this.{{SpeechRecognition/lang}}, or if it cannot fulfill the local processing requirement for other reasons: 497 | 1. [=Queue a task=] to [=fire an event=] named error at [=this=] using {{SpeechRecognitionErrorEvent}} with its {{SpeechRecognitionErrorEvent/error}} attribute initialized to {{SpeechRecognitionErrorCode/service-not-allowed}} and its {{SpeechRecognitionErrorEvent/message}} attribute set to an implementation-defined string detailing the reason. 498 | 1. Abort these steps. 499 | 1. Set {{[[started]]}} to `true`. 500 | 1. If |requestMicrophonePermission| is `true` and [=request 501 | permission to use=] "`microphone`" is [=permission/"denied"=]: 502 | 1. [=Queue a task=] to [=fire an event=] named error at [=this=] using {{SpeechRecognitionErrorEvent}} with its {{SpeechRecognitionErrorEvent/error}} attribute initialized to {{SpeechRecognitionErrorCode/not-allowed}} and its {{SpeechRecognitionErrorEvent/message}} attribute set to an implementation-defined string detailing the reason. 503 | 1. Abort these steps. 504 | 1. Once the system is successfully listening to the recognition, queue a task to 505 | [=fire an event=] named start at [=this=]. 506 | 507 |
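A non-normative sketch of reacting to the error events that the start session algorithm above can fire; the fallback choices shown are illustrative only.

```javascript
// Sketch: handling errors fired by the start session algorithm.
const recognition = new SpeechRecognition();
recognition.processLocally = true;
recognition.phrases.push(new SpeechRecognitionPhrase('Xylia', 2.5));

recognition.onerror = (event) => {
  switch (event.error) {
    case 'phrases-not-supported':
      recognition.phrases = [];           // contextual biasing unsupported; retry without phrases
      recognition.start();
      break;
    case 'service-not-allowed':
      recognition.processLocally = false; // local processing unavailable; allow any service
      recognition.start();
      break;
    case 'not-allowed':
      console.warn('Speech input is not allowed:', event.message);
      break;
  }
};

recognition.onstart = () => console.log('Listening...');
recognition.start();
```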

SpeechRecognition Events

508 | 509 |

The DOM Level 2 Event Model is used for speech recognition events. 510 | The methods in the EventTarget interface should be used for registering event listeners. 511 | The SpeechRecognition interface also contains convenience attributes for registering a single event handler for each event type. 512 | These events do not bubble and are not cancelable.

513 | 514 |

For all these events, the timeStamp attribute defined in the DOM Level 2 Event interface must be set to the best possible estimate of when the real-world event which the event object represents occurred. 515 | This timestamp must be represented in the user agent's view of time, even for events where the timestamps in question could be raised on a different machine like a remote recognition service (i.e., in a speechend event with a remote speech endpointer).

516 | 517 |

Unless specified below, the ordering of the different events is undefined. 518 | For example, some implementations may fire audioend before speechstart or speechend if the audio detector is client-side and the speech detector is server-side.

519 | 520 |
521 |
audiostart event
522 |
Fired when the user agent has started to capture audio.
523 | 524 |
soundstart event
525 |
Fired when some sound, possibly speech, has been detected. 526 | This must be fired with low latency, e.g. by using a client-side energy detector. 527 | The audiostart event must always have been fired before the soundstart event.
528 | 529 |
speechstart event
530 |
Fired when the speech that will be used for speech recognition has started. 531 | The audiostart event must always have been fired before the speechstart event.
532 | 533 |
speechend event
534 |
Fired when the speech that will be used for speech recognition has ended. 535 | The speechstart event must always have been fired before speechend.
536 | 537 |
soundend event
538 |
Fired when some sound is no longer detected. 539 | This must be fired with low latency, e.g. by using a client-side energy detector. 540 | The soundstart event must always have been fired before soundend.
541 | 542 |
audioend event
543 |
Fired when the user agent has finished capturing audio. 544 | The audiostart event must always have been fired before audioend.
545 | 546 |
result event
547 |
Fired when the speech recognizer returns a result. 548 | The event must use the {{SpeechRecognitionEvent}} interface. 549 | The audiostart event must always have been fired before the result event.
550 | 551 |
nomatch event
552 |
Fired when the speech recognizer returns a final result with no recognition hypothesis that meets or exceeds the confidence threshold. 553 | The event must use the {{SpeechRecognitionEvent}} interface. 554 | The {{SpeechRecognitionEvent/results}} attribute in the event may contain speech recognition results that are below the confidence threshold or may be null. 555 | The {{audiostart}} event must always have been fired before the nomatch event.
556 | 557 |
error event
558 |
Fired when a speech recognition error occurs. 559 | The event must use the {{SpeechRecognitionErrorEvent}} interface.
560 | 561 |
start event
562 |
Fired when the recognition service has begun to listen to the audio with the intention of recognizing. 563 | 564 |
end event
565 |
Fired when the service has disconnected. 566 | The event must always be generated when the session ends no matter the reason for the end.
567 |
568 | 569 |
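A non-normative sketch that registers listeners for these events through the EventTarget interface, as recommended above; the logging is illustrative only.

```javascript
// Sketch: observing the recognition lifecycle events described above.
const recognition = new SpeechRecognition();

['audiostart', 'soundstart', 'speechstart', 'speechend',
 'soundend', 'audioend', 'start', 'end'].forEach((type) => {
  recognition.addEventListener(type, (event) => {
    console.log(`${event.type} at ${event.timeStamp}`);
  });
});

recognition.addEventListener('error', (event) => {
  // event is a SpeechRecognitionErrorEvent; message is intended for debugging only.
  console.warn(`error: ${event.error}`, event.message);
});

recognition.start();
```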

SpeechRecognitionErrorEvent

570 | 571 |

The {{SpeechRecognitionErrorEvent}} interface is used for the error event.

572 |
573 |
error attribute
574 |
The error attribute is an enumeration indicating what has gone wrong. 575 | The values are: 576 |
577 |
"no-speech"
578 |
No speech was detected.
579 | 580 |
"aborted"
581 |
Speech input was aborted somehow, maybe by some user-agent-specific behavior such as UI that lets the user cancel speech input.
582 | 583 |
"audio-capture"
584 |
Audio capture failed.
585 | 586 |
"network"
587 |
Some network communication that was required to complete the recognition failed.
588 | 589 |
"not-allowed"
590 |
The user agent is not allowing any speech input to occur for reasons of security, privacy or user preference.
591 | 592 |
"service-not-allowed"
593 |
The user agent is not allowing the speech service requested by the web application to be used (though it would allow some speech service), either because the user agent doesn't support the selected one or for reasons of security, privacy or user preference.
594 | 595 |
"language-not-supported"
596 |
The language was not supported.
597 | 598 |
"phrases-not-supported"
599 |
The speech recognition model does not support phrases for contextual biasing.
600 |
601 |
602 | 603 |
message attribute
604 |
The message content is implementation specific. 605 | This attribute is primarily intended for debugging and developers should not use it directly in their application user interface.
606 |
607 | 608 |

SpeechRecognitionAlternative

609 | 610 |

The SpeechRecognitionAlternative represents a simple view of the response that gets used in an n-best list. 611 | 612 |

613 |
transcript attribute
614 |
The transcript string represents the raw words that the user spoke. 615 | For continuous recognition, leading or trailing whitespace MUST be included where necessary such that concatenation of consecutive SpeechRecognitionResults produces a proper transcript of the session.
616 | 617 |
confidence attribute
618 |
The confidence represents a numeric estimate between 0 and 1 of how confident the recognition system is that the recognition is correct. 619 | A higher number means the system is more confident. 620 |

The group has discussed whether confidence can be specified in a speech-recognition-engine-independent manner and whether confidence threshold and nomatch should be included, because this is not a dialog API. 621 | See Confidence property thread on public-speech-api@w3.org.

622 |
623 | 624 |

SpeechRecognitionResult

625 | 626 |

The SpeechRecognitionResult object represents a single one-shot recognition match, either as one small part of a continuous recognition or as the complete return result of a non-continuous recognition.

627 | 628 |
629 |
length attribute
630 |
The length attribute represents how many n-best alternatives are represented in the item array.
631 | 632 |
item(index) getter
633 |
The item getter returns a SpeechRecognitionAlternative from the index into an array of n-best values. 634 | If index is greater than or equal to length, this returns null. 635 | The user agent must ensure that the length attribute is set to the number of elements in the array. 636 | The user agent must ensure that the n-best list is sorted in non-increasing confidence order (the confidence of each element must be less than or equal to that of the preceding elements).
637 | 638 |
isFinal attribute
639 |
The isFinal boolean must be set to true if this is the final time the speech service will return this particular index value. 640 | If the value is false, then this represents an interim result that could still be changed.
641 |
642 | 643 |

SpeechRecognitionResultList

644 | 645 |

The SpeechRecognitionResultList object holds a sequence of recognition results representing the complete return result of a continuous recognition. 646 | For a non-continuous recognition it will hold only a single value.

647 | 648 |
649 |
length attribute
650 |
The length attribute indicates how many results are represented in the item array.
651 | 652 |
item(index) getter
653 |
The item getter returns a SpeechRecognitionResult from the index into an array of result values. 654 | If index is greater than or equal to length, this returns null. 655 | The user agent must ensure that the length attribute is set to the number of elements in the array.
656 |
657 | 658 |

SpeechRecognitionEvent

659 | 660 |

The SpeechRecognitionEvent is the event that is raised each time there are any changes to interim or final results.

661 | 662 |
663 |
resultIndex attribute
664 |
The resultIndex must be set to the lowest index in the "results" array that has changed.
665 | 666 |
results attribute
667 |
The array of all current recognition results for this session. 668 | Specifically, all final results that have been returned, followed by the current best hypothesis for all interim results. 669 | It must consist of zero or more final results followed by zero or more interim results. 670 | On subsequent SpeechRecognitionEvent events, interim results may be overwritten by a newer interim result or by a final result or may be removed (when at the end of the "results" array and the array length decreases). 671 | Final results must not be overwritten or removed. 672 | All entries for indexes less than resultIndex must be identical to the array that was present when the last SpeechRecognitionEvent was raised. 673 | All array entries (if any) for indexes equal to or greater than resultIndex that were present in the array when the last SpeechRecognitionEvent was raised are removed and overwritten with new results. 674 | The length of the "results" array may increase or decrease, but must not be less than resultIndex. 675 | Note that when resultIndex equals results.length, no new results are returned; this may occur when the array length decreases to remove one or more interim results.
676 |
677 | 678 |
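A non-normative sketch of consuming the results array delivered by this event, separating final from interim results; the transcript assembly is illustrative only.

```javascript
// Sketch: handling SpeechRecognitionEvent. Entries before resultIndex are unchanged;
// entries at or after it are new or updated.
const recognition = new SpeechRecognition();
recognition.continuous = true;
recognition.interimResults = true;

recognition.onresult = (event) => {
  let finalTranscript = '';
  let interimTranscript = '';
  for (let i = 0; i < event.results.length; i++) {
    const result = event.results[i];        // a SpeechRecognitionResult
    const best = result[0];                 // highest-confidence SpeechRecognitionAlternative
    if (result.isFinal) {
      finalTranscript += best.transcript;   // final results are never overwritten or removed
    } else {
      interimTranscript += best.transcript; // interim results may still change
    }
  }
  console.log('final:', finalTranscript, '| interim:', interimTranscript);
};

recognition.start();
```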

SpeechRecognitionPhrase

679 | 680 |

The SpeechRecognitionPhrase object represents a phrase for contextual biasing and has the following internal slots:

681 | 682 |
683 | : [[phrase]] 684 | :: 685 | A {{DOMString}} representing the text string to be boosted. The initial value is null. 686 | An empty value is allowed but should be ignored by the speech recognition model. 687 |
688 | 689 |
690 | : [[boost]] 691 | :: 692 | A float representing approximately the natural log of the number of times more likely the website thinks this phrase is 693 | than what the speech recognition model knows. 694 | A valid boost must be a float value inside the range [0.0, 10.0], with a default value of 1.0 if not specified. 695 | A boost of 0.0 means the phrase is not boosted at all, and a higher boost means the phrase is more likely to appear. 696 | A boost of 10.0 means the phrase is extremely likely to appear and should be rarely set. 697 |
698 | 699 |
700 |
SpeechRecognitionPhrase(|phrase|, |boost|) constructor
701 |
702 | When this constructor is invoked, run the following steps: 703 | 1. If |boost| is smaller than 0.0 or greater than 10.0, throw a {{SyntaxError}} and abort these steps. 704 | 1. Let |phr| be a new object of type {{SpeechRecognitionPhrase}}. 705 | 1. Set |phr|.{{[[phrase]]}} to be the value of |phrase|. 706 | 1. Set |phr|.{{[[boost]]}} to be the value of |boost|. 707 | 1. Return |phr|. 708 |
709 | 710 |
phrase attribute
711 |
This attribute returns the value of {{[[phrase]]}}.
712 | 713 |
boost attribute
714 |
This attribute returns the value of {{[[boost]]}}.
715 |
716 | 717 |
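A non-normative sketch of constructing biasing phrases follows. It assumes the phrases are handed to the recognizer through a phrases list on the SpeechRecognition object, as described for contextual biasing elsewhere in this specification; the property name and sample phrases below are illustrative assumptions.

    var recognition = new SpeechRecognition();

    // Boost is roughly the natural log of how much more likely the phrase is.
    var rareName = new SpeechRecognitionPhrase("Kagoshima wagyu", 5.0);
    var commonPhrase = new SpeechRecognitionPhrase("add to cart"); // default boost 1.0

    // Assumed property for passing the biasing phrases to the recognizer.
    recognition.phrases = [rareName, commonPhrase];

    // A boost outside [0.0, 10.0] throws a SyntaxError.
    try {
      new SpeechRecognitionPhrase("too strong", 11.0);
    } catch (e) {
      console.log(e.name); // "SyntaxError"
    }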

SpeechGrammar

718 | 719 |

The SpeechGrammar object represents a container for a grammar.

720 |

Grammar support has been deprecated and removed. The grammar objects remain in the spec for backwards compatibility purposes only and do not affect speech recognition.

721 |

This structure has the following attributes:

722 | 723 |
724 |
src attribute
725 |
The required src attribute is the URI for the grammar.
726 | 727 |
weight attribute
728 |
The optional weight attribute controls the weight that the speech recognition service should use with this grammar. By default, a grammar has a weight of 1. Larger values make the grammar more strongly weighted, while smaller values make it less strongly weighted.
731 |
732 | 733 |

SpeechGrammarList

734 | 735 |

The SpeechGrammarList object represents a collection of SpeechGrammar objects. 736 | This structure has the following attributes:

737 |

Grammar support has been deprecated and removed. The grammar objects remain in the spec for backwards compatibility purposes only and do not affect speech recognition.

738 | 739 |
740 |
length attribute
741 |
The length attribute represents how many grammars are currently in the array.
742 | 743 |
item(index) getter
744 |
The item getter returns a SpeechGrammar from the index into an array of grammars. 745 | The user agent must ensure that the length attribute is set to the number of elements in the array. 746 | The user agent must ensure that the index order from smallest to largest matches the order in which grammars were added to the array.
747 | 748 |
addFromURI(src, weight) method
749 |
This method appends a grammar to the grammars array based on a URI. The URI for the grammar is specified by the src parameter. Note that some services may support builtin grammars that can be specified by URI. The weight parameter represents this grammar's weight relative to the other grammars in the list.
addFromString(string, weight) method
755 |
This method appends a grammar to the grammars array based on text. The content of the grammar is specified by the string parameter. This content should be encoded into a data: URI when the SpeechGrammar object is created. The weight parameter represents this grammar's weight relative to the other grammars in the list.
760 | 761 |

The SpeechSynthesis Interface

762 | 763 |

The SpeechSynthesis interface is the scripted web API for controlling a text-to-speech output.

764 | 765 |
 766 | [Exposed=Window]
 767 | interface SpeechSynthesis : EventTarget {
 768 |     readonly attribute boolean pending;
 769 |     readonly attribute boolean speaking;
 770 |     readonly attribute boolean paused;
 771 | 
 772 |     attribute EventHandler onvoiceschanged;
 773 | 
 774 |     undefined speak(SpeechSynthesisUtterance utterance);
 775 |     undefined cancel();
 776 |     undefined pause();
 777 |     undefined resume();
 778 |     sequence<SpeechSynthesisVoice> getVoices();
 779 | };
 780 | 
 781 | partial interface Window {
 782 |     [SameObject] readonly attribute SpeechSynthesis speechSynthesis;
 783 | };
 784 | 
 785 | [Exposed=Window]
 786 | interface SpeechSynthesisUtterance : EventTarget {
 787 |     constructor(optional DOMString text);
 788 | 
 789 |     attribute DOMString text;
 790 |     attribute DOMString lang;
 791 |     attribute SpeechSynthesisVoice? voice;
 792 |     attribute float volume;
 793 |     attribute float rate;
 794 |     attribute float pitch;
 795 | 
 796 |     attribute EventHandler onstart;
 797 |     attribute EventHandler onend;
 798 |     attribute EventHandler onerror;
 799 |     attribute EventHandler onpause;
 800 |     attribute EventHandler onresume;
 801 |     attribute EventHandler onmark;
 802 |     attribute EventHandler onboundary;
 803 | };
 804 | 
 805 | [Exposed=Window]
 806 | interface SpeechSynthesisEvent : Event {
 807 |     constructor(DOMString type, SpeechSynthesisEventInit eventInitDict);
 808 |     readonly attribute SpeechSynthesisUtterance utterance;
 809 |     readonly attribute unsigned long charIndex;
 810 |     readonly attribute unsigned long charLength;
 811 |     readonly attribute float elapsedTime;
 812 |     readonly attribute DOMString name;
 813 | };
 814 | 
 815 | dictionary SpeechSynthesisEventInit : EventInit {
 816 |     required SpeechSynthesisUtterance utterance;
 817 |     unsigned long charIndex = 0;
 818 |     unsigned long charLength = 0;
 819 |     float elapsedTime = 0;
 820 |     DOMString name = "";
 821 | };
 822 | 
 823 | enum SpeechSynthesisErrorCode {
 824 |     "canceled",
 825 |     "interrupted",
 826 |     "audio-busy",
 827 |     "audio-hardware",
 828 |     "network",
 829 |     "synthesis-unavailable",
 830 |     "synthesis-failed",
 831 |     "language-unavailable",
 832 |     "voice-unavailable",
 833 |     "text-too-long",
 834 |     "invalid-argument",
 835 |     "not-allowed",
 836 | };
 837 | 
 838 | [Exposed=Window]
 839 | interface SpeechSynthesisErrorEvent : SpeechSynthesisEvent {
 840 |     constructor(DOMString type, SpeechSynthesisErrorEventInit eventInitDict);
 841 |     readonly attribute SpeechSynthesisErrorCode error;
 842 | };
 843 | 
 844 | dictionary SpeechSynthesisErrorEventInit : SpeechSynthesisEventInit {
 845 |     required SpeechSynthesisErrorCode error;
 846 | };
 847 | 
 848 | [Exposed=Window]
 849 | interface SpeechSynthesisVoice {
 850 |     readonly attribute DOMString voiceURI;
 851 |     readonly attribute DOMString name;
 852 |     readonly attribute DOMString lang;
 853 |     readonly attribute boolean localService;
 854 |     readonly attribute boolean default;
 855 | };
 856 | 
857 | 858 |

SpeechSynthesis Attributes

859 | 860 |
861 |
pending attribute
862 |
This attribute is true if the queue for the global SpeechSynthesis instance contains any utterances which have not started speaking.
863 | 864 |
speaking attribute
865 |
This attribute is true if an utterance is being spoken. 866 | Specifically if an utterance has begun being spoken and has not completed being spoken. 867 | This is independent of whether the global SpeechSynthesis instance is in the paused state.
868 | 869 |
paused attribute
870 |
This attribute is true when the global SpeechSynthesis instance is in the paused state. This state is independent of whether anything is in the queue. The default state of the global SpeechSynthesis instance for a new window is the non-paused state.
873 |
874 | 875 |

SpeechSynthesis Methods

876 | 877 |
878 |
speak(utterance) method
879 |
This method appends the SpeechSynthesisUtterance object utterance to the end of the queue for the global SpeechSynthesis instance. 880 | It does not change the paused state of the SpeechSynthesis instance. 881 | If the SpeechSynthesis instance is paused, it remains paused. 882 | If it is not paused and no other utterances are in the queue, then this utterance is spoken immediately, 883 | else this utterance is queued to begin speaking after the other utterances in the queue have been spoken. 884 | If changes are made to the SpeechSynthesisUtterance object after calling this method and prior to the corresponding end or error event, 885 | it is not defined whether those changes will affect what is spoken, and those changes may cause an error to be returned. 886 | The SpeechSynthesis object takes exclusive ownership of the SpeechSynthesisUtterance object. 887 | Passing it as a speak() argument to another SpeechSynthesis object should throw an exception. 888 | (For example, two frames may have the same origin and each will contain a SpeechSynthesis object.)
889 | 890 |
cancel() method
891 |
This method removes all utterances from the queue. 892 | If an utterance is being spoken, speaking ceases immediately. 893 | This method does not change the paused state of the global SpeechSynthesis instance.
894 | 895 |
pause() method
896 |
This method puts the global SpeechSynthesis instance into the paused state. 897 | If an utterance was being spoken, it pauses mid-utterance. 898 | (If called when the SpeechSynthesis instance was already in the paused state, it does nothing.)
899 | 900 |
resume() method
901 |
This method puts the global SpeechSynthesis instance into the non-paused state. 902 | If an utterance was speaking, it continues speaking the utterance at the point at which it was paused, else it begins speaking the next utterance in the queue (if any). 903 | (If called when the SpeechSynthesis instance was already in the non-paused state, it does nothing.)
904 | 905 |
getVoices() method
906 |
This method returns the available voices. It is user agent dependent which voices are available. If there are no voices available, or if the list of available voices is not yet known (for example: server-side synthesis where the list is determined asynchronously), then this method must return an empty sequence of SpeechSynthesisVoice objects.
910 |
911 | 912 |
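The following non-normative sketch illustrates the queueing behaviour described above: speak() appends to the queue without changing the paused state, and pause()/resume() affect whatever is currently speaking or queued.

    var synth = window.speechSynthesis;

    var first = new SpeechSynthesisUtterance("First in the queue.");
    var second = new SpeechSynthesisUtterance("Second in the queue.");

    synth.speak(first);   // starts immediately if nothing else is queued
    synth.speak(second);  // queued behind the first utterance

    console.log(synth.speaking, synth.pending); // typically true, true

    synth.pause();        // pauses mid-utterance; the queue is preserved
    synth.resume();       // continues from the point at which it was paused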

SpeechSynthesis Events

913 | 914 |
915 |
voiceschanged event
916 |
Fired when the list of SpeechSynthesisVoice objects that the getVoices method will return has changed. Examples include: server-side synthesis where the list is determined asynchronously, or when client-side voices are installed/uninstalled.
918 |
919 | 920 |
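A non-normative sketch of coping with asynchronously determined voices: getVoices() may initially return an empty list, so the page re-queries when voiceschanged fires.

    function pickVoice(lang) {
      var voices = speechSynthesis.getVoices(); // may be empty at first
      return voices.find(function(v) { return v.lang === lang; }) || null;
    }

    var chosen = pickVoice("en-US");
    if (!chosen) {
      speechSynthesis.onvoiceschanged = function() {
        chosen = pickVoice("en-US");
      };
    }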

SpeechSynthesisUtterance Attributes

921 | 922 |
923 |
text attribute
924 |
This attribute specifies the text to be synthesized and spoken for this utterance. This may be either plain text or a complete, well-formed SSML document. [[!SSML]] For speech synthesis engines that do not support SSML, or only support certain tags, the user agent or speech engine must strip away the tags they do not support and speak the text. There may be a maximum length of the text; for example, it may be limited to 32,767 characters.
928 | 929 |
lang attribute
930 |
This attribute specifies the language of the speech synthesis for the utterance, using a valid BCP 47 language tag. [[!BCP47]] If unset it remains unset for getting in script, but will default to use the language of the html document root element and associated hierarchy. This default value is computed and used when the utterance is passed to the speech synthesis service.
933 | 934 |
voice attribute
935 |
This attribute specifies the speech synthesis voice that the web application wishes to use. 936 | When a {{SpeechSynthesisUtterance}} object is created this attribute must be initialized to null. 937 | If, at the time of the {{speak()}} method call, this attribute has been set to one of the {{SpeechSynthesisVoice}} objects returned by {{getVoices()}}, then the user agent must use that voice. 938 | If this attribute is unset or null at the time of the {{speak()}} method call, then the user agent must use a user agent default voice. 939 | The user agent default voice should support the current language (see {{SpeechSynthesisUtterance/lang}}) and can be a local or remote speech service and can incorporate end user choices via interfaces provided by the user agent such as browser configuration parameters. 940 |
941 | 942 |
volume attribute
943 |
This attribute specifies the speaking volume for the utterance. 944 | It ranges between 0 and 1 inclusive, with 0 being the lowest volume and 1 the highest volume, with a default of 1. 945 | If SSML is used, this value will be overridden by prosody tags in the markup.
946 | 947 |
rate attribute
948 |
This attribute specifies the speaking rate for the utterance. 949 | It is relative to the default rate for this voice. 950 | 1 is the default rate supported by the speech synthesis engine or specific voice (which should correspond to a normal speaking rate). 951 | 2 is twice as fast, and 0.5 is half as fast. 952 | Values below 0.1 or above 10 are strictly disallowed, but speech synthesis engines or specific voices may constrain the minimum and maximum rates further, for example, a particular voice may not actually speak faster than 3 times normal even if you specify a value larger than 3. 953 | If SSML is used, this value will be overridden by prosody tags in the markup.
954 | 955 |
pitch attribute
956 |
This attribute specifies the speaking pitch for the utterance. It ranges between 0 and 2 inclusive, with 0 being the lowest pitch and 2 the highest pitch. 1 corresponds to the default pitch of the speech synthesis engine or specific voice. Speech synthesis engines or voices may constrain the minimum and maximum pitch further. If SSML is used, this value will be overridden by prosody tags in the markup.
961 |
962 | 963 |
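A non-normative sketch setting the attributes described above, staying within their defined ranges:

    var u = new SpeechSynthesisUtterance("Welcome back");
    u.lang = "en-GB";  // BCP 47 language tag
    u.volume = 0.8;    // 0..1, default 1
    u.rate = 1.5;      // relative to the voice's default rate, 0.1..10
    u.pitch = 1.2;     // 0..2, default 1
    speechSynthesis.speak(u);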

SpeechSynthesisUtterance Events

964 | 965 | Each of these events must use the {{SpeechSynthesisEvent}} interface, 966 | except the error event which must use the {{SpeechSynthesisErrorEvent}} interface. 967 | These events do not bubble and are not cancelable. 968 | 969 |
970 |
start event
971 |
Fired when this utterance has begun to be spoken.
972 | 973 |
end event
974 |
Fired when this utterance has completed being spoken. 975 | If this event fires, the error event must not be fired for this utterance.
976 | 977 |
error event
978 |
Fired if there was an error that prevented successful speaking of this utterance. 979 | If this event fires, the end event must not be fired for this utterance.
980 | 981 |
pause event
982 |
Fired when and if this utterance is paused mid-utterance.
983 | 984 |
resume event
985 |
Fired when and if this utterance is resumed after being paused mid-utterance. Adding the utterance to the queue while the global SpeechSynthesis instance is in the paused state, and then calling the resume method, does not cause the resume event to be fired; in this case the utterance's start event will be fired when the utterance starts.
988 | 989 |
mark event
990 |
Fired when the spoken utterance reaches a named "mark" tag in SSML. [[!SSML]] 991 | The user agent must fire this event if the speech synthesis engine provides the event.
992 | 993 |
boundary event
994 |
Fired when the spoken utterance reaches a word or sentence boundary. 995 | The user agent must fire this event if the speech synthesis engine provides the event.
996 |
997 | 998 |

SpeechSynthesisEvent Attributes

999 | 1000 |
1001 |
utterance attribute
1002 |
This attribute contains the SpeechSynthesisUtterance that triggered this event.
1003 | 1004 |
charIndex attribute
1005 |
This attribute indicates the zero-based character index into the original utterance string that most closely approximates the current speaking position of the speech engine. 1006 | No guarantee is given as to where charIndex will be with respect to word boundaries (such as at the end of the previous word or the beginning of the next word), only that all text before charIndex has already been spoken, and all text after charIndex has not yet been spoken. 1007 | The user agent must return this value if the speech synthesis engine supports it, otherwise the user agent must return 0.
1008 | 1009 |
charLength attribute
1010 |
This attribute indicates the length of the text (word or sentence) that will be spoken corresponding to this event. 1011 | This attribute is the length, in characters, starting from this event's {{SpeechSynthesisEvent/charIndex}}. 1012 | The user agent must return this value if the speech synthesis engine supports it or the user agent can otherwise determine it, otherwise the user agent must return 0.
1013 | 1014 |
elapsedTime attribute
1015 |
This attribute indicates the time, in seconds, at which this event was triggered, relative to when this utterance began to be spoken. The user agent must return this value if the speech synthesis engine supports it or the user agent can otherwise determine it, otherwise the user agent must return 0.
1017 | 1018 |
name attribute
1019 |
For mark events, this attribute indicates the name of the marker, as defined in SSML as the name attribute of a mark element. [[!SSML]] 1020 | For boundary events, this attribute indicates the type of boundary that caused the event: "word" or "sentence". 1021 | For all other events, this value should return "".
1022 |
1023 | 1024 |
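A non-normative sketch using charIndex and charLength from boundary events to track the word currently being spoken; since charLength may be 0 when the engine cannot determine it, the sketch falls back to scanning for the next whitespace character.

    var text = "The quick brown fox jumps over the lazy dog";
    var u = new SpeechSynthesisUtterance(text);

    u.onboundary = function(event) {
      if (event.name === "word") {
        var length = event.charLength ||
            text.slice(event.charIndex).search(/\s|$/); // fallback when charLength is 0
        var word = text.substring(event.charIndex, event.charIndex + length);
        console.log("Speaking: " + word);
      }
    };

    speechSynthesis.speak(u);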

SpeechSynthesisErrorEvent Attributes

1025 | 1026 |

The SpeechSynthesisErrorEvent is the interface used for the SpeechSynthesisUtterance error event.

1027 |
1028 |
error attribute
1029 |
The error attribute is an enumeration indicating what has gone wrong. The values are:
1032 |
"canceled"
1033 |
A cancel method call caused the SpeechSynthesisUtterance to be removed from the queue before it had begun being spoken.
1034 | 1035 |
"interrupted"
1036 |
A cancel method call caused the SpeechSynthesisUtterance to be interrupted after it has begun being spoken and before it completed.
1037 | 1038 |
"audio-busy"
1039 |
The operation cannot be completed at this time because the user-agent cannot access the audio output device. 1040 | (For example, the user may need to correct this by closing another application.)
1041 | 1042 |
"audio-hardware"
1043 |
The operation cannot be completed at this time because the user-agent cannot identify an audio output device. 1044 | (For example, the user may need to connect a speaker or configure system settings.)
1045 | 1046 |
"network"
1047 |
The operation cannot be completed at this time because some required network communication failed.
1048 | 1049 |
"synthesis-unavailable"
1050 |
The operation cannot be completed at this time because no synthesis engine is available. 1051 | (For example, the user may need to install or configure a synthesis engine.)
1052 | 1053 |
"synthesis-failed"
1054 |
The operation failed because the synthesis engine had an error.
1055 | 1056 |
"language-unavailable"
1057 |
No appropriate voice is available for the language designated in the SpeechSynthesisUtterance lang attribute.
1058 | 1059 |
"voice-unavailable"
1060 |
The voice designated in the SpeechSynthesisUtterance voice attribute is not available.
1061 | 1062 |
"text-too-long"
1063 |
The content of the SpeechSynthesisUtterance text attribute is too long to synthesize.
1064 | 1065 |
"invalid-argument"
1066 |
The content of the SpeechSynthesisUtterance rate, pitch or volume attribute is not supported by the synthesizer.
1067 | 1068 |
"not-allowed"
1069 |
Synthesis was not allowed to start by the user agent or system in the current context.
1070 |
1071 |
1072 |
1073 | 1074 |
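A non-normative sketch of handling the error event and distinguishing a few of the codes above (the console messages are illustrative only):

    var u = new SpeechSynthesisUtterance("Hello");

    u.onerror = function(event) {
      switch (event.error) {
        case "canceled":
        case "interrupted":
          // The page itself called cancel(); usually nothing to do.
          break;
        case "language-unavailable":
        case "voice-unavailable":
          console.warn("No suitable voice; consider falling back to the default voice.");
          break;
        case "not-allowed":
          console.warn("Synthesis was blocked in this context.");
          break;
        default:
          console.error("Synthesis failed: " + event.error);
      }
    };

    speechSynthesis.speak(u);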

SpeechSynthesisVoice Attributes

1075 | 1076 |
1077 |
voiceURI attribute
1078 |
The voiceURI attribute specifies the speech synthesis voice and the location of the speech synthesis service for this voice. 1079 | Note that the voiceURI is a generic URI and can thus point to local or remote services, either through use of a URN with meaning to the user agent or by specifying a URL that the user agent recognizes as a local service.
1080 | 1081 |
name attribute
1082 |
This attribute is a human-readable name that represents the voice. 1083 | There is no guarantee that all names returned are unique.
1084 | 1085 |
lang attribute
1086 |
This attribute is a BCP 47 language tag indicating the language of the voice. [[!BCP47]]
1087 | 1088 |
localService attribute
1089 |
This attribute is true for voices supplied by a local speech synthesizer, and is false for voices supplied by a remote speech synthesizer service. (This may be useful because remote services may imply additional latency, bandwidth or cost, whereas local voices may imply lower quality; however, there is no guarantee that any of these implications are true.)
1091 | 1092 |
default attribute
1093 |
This attribute is true for at most one voice per language. 1094 | There may be a different default for each language. 1095 | It is user agent dependent how default voices are determined.
1096 |
1097 | 1098 |
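A non-normative sketch of choosing a voice using the attributes above, preferring the per-language default, then any local voice:

    function chooseVoice(lang) {
      var voices = speechSynthesis.getVoices().filter(function(v) {
        return v.lang === lang;
      });
      return voices.find(function(v) { return v.default; }) ||
             voices.find(function(v) { return v.localService; }) ||
             voices[0] || null;
    }

    var u = new SpeechSynthesisUtterance("Bonjour");
    u.lang = "fr-FR";
    u.voice = chooseVoice("fr-FR");
    speechSynthesis.speak(u);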

Examples

1099 | 1100 |

This section is non-normative.

1101 | 1102 |

Speech Recognition Examples

1103 | 1104 |
1105 |

Using speech recognition to fill an input-field and perform a web search.

1106 | 1107 |
1108 |     <script type="text/javascript">
1109 |       var recognition = new SpeechRecognition();
1110 |       recognition.onresult = function(event) {
1111 |         if (event.results.length > 0) {
1112 |           q.value = event.results[0][0].transcript;
1113 |           q.form.submit();
1114 |         }
1115 |       }
1116 |     </script>
1117 | 
1118 |     <form action="https://www.example.com/search">
1119 |       <input type="search" id="q" name="q" size=60>
1120 |       <input type="button" value="Click to Speak" onclick="recognition.start()">
1121 |     </form>
1122 |   
1123 |
1124 | 1125 |
1126 |

Using speech recognition to fill an options list with alternative speech results.

1127 | 1128 |
1129 |     <script type="text/javascript">
1130 |       var recognition = new SpeechRecognition();
1131 |       recognition.maxAlternatives = 10;
1132 |       recognition.onresult = function(event) {
1133 |         if (event.results.length > 0) {
1134 |           var result = event.results[0];
1135 |           for (var i = 0; i < result.length; ++i) {
1136 |             var text = result[i].transcript;
1137 |             select.options[i] = new Option(text, text);
1138 |           }
1139 |         }
1140 |       }
1141 | 
1142 |       function start() {
1143 |         select.options.length = 0;
1144 |         recognition.start();
1145 |       }
1146 |     </script>
1147 | 
1148 |     <select id="select"></select>
1149 |     <button onclick="start()">Click to Speak</button>
1150 |   
1151 |
1152 | 1153 |
1154 |

Using continuous speech recognition to fill a textarea.

1155 | 1156 |
1157 |     <textarea id="textarea" rows=10 cols=80></textarea>
1158 |     <button id="button" onclick="toggleStartStop()"></button>
1159 | 
1160 |     <script type="text/javascript">
1161 |       var recognizing;
1162 |       var recognition = new SpeechRecognition();
1163 |       recognition.continuous = true;
1164 |       reset();
1165 |       recognition.onend = reset;
1166 | 
1167 |       recognition.onresult = function (event) {
1168 |         for (var i = event.resultIndex; i < event.results.length; ++i) {
1169 |           if (event.results[i].isFinal) {
1170 |             textarea.value += event.results[i][0].transcript;
1171 |           }
1172 |         }
1173 |       }
1174 | 
1175 |       function reset() {
1176 |         recognizing = false;
1177 |         button.innerHTML = "Click to Speak";
1178 |       }
1179 | 
1180 |       function toggleStartStop() {
1181 |         if (recognizing) {
1182 |           recognition.stop();
1183 |           reset();
1184 |         } else {
1185 |           recognition.start();
1186 |           recognizing = true;
1187 |           button.innerHTML = "Click to Stop";
1188 |         }
1189 |       }
1190 |     </script>
1191 |   
1192 |
1193 | 1194 |
1195 |

Using continuous speech recognition, showing final results in black and interim results in grey.

1196 | 1197 |
1198 |     <button id="button" onclick="toggleStartStop()"></button>
1199 |     <div style="border:dotted;padding:10px">
1200 |       <span id="final_span"></span>
1201 |       <span id="interim_span" style="color:grey"></span>
1202 |     </div>
1203 | 
1204 |     <script type="text/javascript">
1205 |       var recognizing;
1206 |       var recognition = new SpeechRecognition();
1207 |       recognition.continuous = true;
1208 |       recognition.interimResults = true;
1209 |       reset();
1210 |       recognition.onend = reset;
1211 | 
1212 |       recognition.onresult = function (event) {
1213 |         var final = "";
1214 |         var interim = "";
1215 |         for (var i = 0; i < event.results.length; ++i) {
1216 |           if (event.results[i].isFinal) {
1217 |             final += event.results[i][0].transcript;
1218 |           } else {
1219 |             interim += event.results[i][0].transcript;
1220 |           }
1221 |         }
1222 |         final_span.innerHTML = final;
1223 |         interim_span.innerHTML = interim;
1224 |       }
1225 | 
1226 |       function reset() {
1227 |         recognizing = false;
1228 |         button.innerHTML = "Click to Speak";
1229 |       }
1230 | 
1231 |       function toggleStartStop() {
1232 |         if (recognizing) {
1233 |           recognition.stop();
1234 |           reset();
1235 |         } else {
1236 |           recognition.start();
1237 |           recognizing = true;
1238 |           button.innerHTML = "Click to Stop";
1239 |           final_span.innerHTML = "";
1240 |           interim_span.innerHTML = "";
1241 |         }
1242 |       }
1243 |     </script>
1244 |   
1245 |
1246 | 1247 |

Speech Synthesis Examples

1248 | 1249 |
1250 |

Spoken text.

1251 | 1252 |
1253 |     <script type="text/javascript">
1254 |       speechSynthesis.speak(new SpeechSynthesisUtterance('Hello World'));
1255 |     </script>
1256 |   
1257 |
1258 | 1259 |
1260 |

Spoken text with attributes and events.

1261 | 1262 |
1263 |     <script type="text/javascript">
1264 |       var u = new SpeechSynthesisUtterance();
1265 |       u.text = 'Hello World';
1266 |       u.lang = 'en-US';
1267 |       u.rate = 1.2;
1268 |       u.onend = function(event) { alert('Finished in ' + event.elapsedTime + ' seconds.'); }
1269 |       speechSynthesis.speak(u);
1270 |     </script>
1271 |   
1272 |
1273 | 1274 |

Acknowledgments

1275 | 1276 |

Adam Sobieski (Phoster)
Björn Bringert (Google)
Charles Pritchard
Dominic Mazzoni (Google)
Gerardo Capiel (Benetech)
Jerry Carter
Kagami Sascha Rosylight
Marcos Cáceres (Mozilla)
Nagesh Kharidi (Openstream)
Olli Pettay (Mozilla)
Peter Beverloo (Google)
Raj Tumuluri (Openstream)
Satish Sampath (Google)

1291 | 1292 |

Also, the members of the HTML Speech Incubator Group, and the corresponding [[HTMLSPEECH|Final Report]], which created the basis for this specification.

1293 | --------------------------------------------------------------------------------