├── README.md └── web-speech-api ├── CODE_OF_CONDUCT.md ├── LICENSE ├── README.md ├── index.html ├── phrase-matcher ├── index.html ├── script.js └── style.css ├── speak-easy-synthesis ├── img │ ├── ws128.png │ └── ws512.png ├── index.html ├── manifest.webapp ├── script.js └── style.css └── speech-color-changer ├── img ├── ws128.png └── ws512.png ├── index.html ├── manifest.webapp ├── script.js └── style.css /README.md: -------------------------------------------------------------------------------- 1 | # web-speech-api 2 | 3 | NOTE: This repository is archived and moved into the MDN Web Docs [dom-examples repository](https://github.com/mdn/dom-examples) under the web-speech-api folder. 4 | -------------------------------------------------------------------------------- /web-speech-api/CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Community Participation Guidelines 2 | 3 | This repository is governed by Mozilla's code of conduct and etiquette guidelines. 4 | For more details, please read the 5 | [Mozilla Community Participation Guidelines](https://www.mozilla.org/about/governance/policies/participation/). 6 | 7 | ## How to Report 8 | For more information on how to report violations of the Community Participation Guidelines, please read our [How to Report](https://www.mozilla.org/about/governance/policies/participation/reporting/) page. 9 | 10 | 16 | -------------------------------------------------------------------------------- /web-speech-api/LICENSE: -------------------------------------------------------------------------------- 1 | CC0 1.0 Universal 2 | 3 | Statement of Purpose 4 | 5 | The laws of most jurisdictions throughout the world automatically confer 6 | exclusive Copyright and Related Rights (defined below) upon the creator and 7 | subsequent owner(s) (each and all, an "owner") of an original work of 8 | authorship and/or a database (each, a "Work"). 9 | 10 | Certain owners wish to permanently relinquish those rights to a Work for the 11 | purpose of contributing to a commons of creative, cultural and scientific 12 | works ("Commons") that the public can reliably and without fear of later 13 | claims of infringement build upon, modify, incorporate in other works, reuse 14 | and redistribute as freely as possible in any form whatsoever and for any 15 | purposes, including without limitation commercial purposes. These owners may 16 | contribute to the Commons to promote the ideal of a free culture and the 17 | further production of creative, cultural and scientific works, or to gain 18 | reputation or greater distribution for their Work in part through the use and 19 | efforts of others. 20 | 21 | For these and/or other purposes and motivations, and without any expectation 22 | of additional consideration or compensation, the person associating CC0 with a 23 | Work (the "Affirmer"), to the extent that he or she is an owner of Copyright 24 | and Related Rights in the Work, voluntarily elects to apply CC0 to the Work 25 | and publicly distribute the Work under its terms, with knowledge of his or her 26 | Copyright and Related Rights in the Work and the meaning and intended legal 27 | effect of CC0 on those rights. 28 | 29 | 1. Copyright and Related Rights. A Work made available under CC0 may be 30 | protected by copyright and related or neighboring rights ("Copyright and 31 | Related Rights"). Copyright and Related Rights include, but are not limited 32 | to, the following: 33 | 34 | i. 
the right to reproduce, adapt, distribute, perform, display, communicate, 35 | and translate a Work; 36 | 37 | ii. moral rights retained by the original author(s) and/or performer(s); 38 | 39 | iii. publicity and privacy rights pertaining to a person's image or likeness 40 | depicted in a Work; 41 | 42 | iv. rights protecting against unfair competition in regards to a Work, 43 | subject to the limitations in paragraph 4(a), below; 44 | 45 | v. rights protecting the extraction, dissemination, use and reuse of data in 46 | a Work; 47 | 48 | vi. database rights (such as those arising under Directive 96/9/EC of the 49 | European Parliament and of the Council of 11 March 1996 on the legal 50 | protection of databases, and under any national implementation thereof, 51 | including any amended or successor version of such directive); and 52 | 53 | vii. other similar, equivalent or corresponding rights throughout the world 54 | based on applicable law or treaty, and any national implementations thereof. 55 | 56 | 2. Waiver. To the greatest extent permitted by, but not in contravention of, 57 | applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and 58 | unconditionally waives, abandons, and surrenders all of Affirmer's Copyright 59 | and Related Rights and associated claims and causes of action, whether now 60 | known or unknown (including existing as well as future claims and causes of 61 | action), in the Work (i) in all territories worldwide, (ii) for the maximum 62 | duration provided by applicable law or treaty (including future time 63 | extensions), (iii) in any current or future medium and for any number of 64 | copies, and (iv) for any purpose whatsoever, including without limitation 65 | commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes 66 | the Waiver for the benefit of each member of the public at large and to the 67 | detriment of Affirmer's heirs and successors, fully intending that such Waiver 68 | shall not be subject to revocation, rescission, cancellation, termination, or 69 | any other legal or equitable action to disrupt the quiet enjoyment of the Work 70 | by the public as contemplated by Affirmer's express Statement of Purpose. 71 | 72 | 3. Public License Fallback. Should any part of the Waiver for any reason be 73 | judged legally invalid or ineffective under applicable law, then the Waiver 74 | shall be preserved to the maximum extent permitted taking into account 75 | Affirmer's express Statement of Purpose. In addition, to the extent the Waiver 76 | is so judged Affirmer hereby grants to each affected person a royalty-free, 77 | non transferable, non sublicensable, non exclusive, irrevocable and 78 | unconditional license to exercise Affirmer's Copyright and Related Rights in 79 | the Work (i) in all territories worldwide, (ii) for the maximum duration 80 | provided by applicable law or treaty (including future time extensions), (iii) 81 | in any current or future medium and for any number of copies, and (iv) for any 82 | purpose whatsoever, including without limitation commercial, advertising or 83 | promotional purposes (the "License"). The License shall be deemed effective as 84 | of the date CC0 was applied by Affirmer to the Work. 
Should any part of the 85 | License for any reason be judged legally invalid or ineffective under 86 | applicable law, such partial invalidity or ineffectiveness shall not 87 | invalidate the remainder of the License, and in such case Affirmer hereby 88 | affirms that he or she will not (i) exercise any of his or her remaining 89 | Copyright and Related Rights in the Work or (ii) assert any associated claims 90 | and causes of action with respect to the Work, in either case contrary to 91 | Affirmer's express Statement of Purpose. 92 | 93 | 4. Limitations and Disclaimers. 94 | 95 | a. No trademark or patent rights held by Affirmer are waived, abandoned, 96 | surrendered, licensed or otherwise affected by this document. 97 | 98 | b. Affirmer offers the Work as-is and makes no representations or warranties 99 | of any kind concerning the Work, express, implied, statutory or otherwise, 100 | including without limitation warranties of title, merchantability, fitness 101 | for a particular purpose, non infringement, or the absence of latent or 102 | other defects, accuracy, or the present or absence of errors, whether or not 103 | discoverable, all to the greatest extent permissible under applicable law. 104 | 105 | c. Affirmer disclaims responsibility for clearing rights of other persons 106 | that may apply to the Work or any use thereof, including without limitation 107 | any person's Copyright and Related Rights in the Work. Further, Affirmer 108 | disclaims responsibility for obtaining any necessary consents, permissions 109 | or other rights required for any use of the Work. 110 | 111 | d. Affirmer understands and acknowledges that Creative Commons is not a 112 | party to this document and has no duty or obligation with respect to this 113 | CC0 or use of the Work. 114 | 115 | For more information, please see 116 | 117 | -------------------------------------------------------------------------------- /web-speech-api/README.md: -------------------------------------------------------------------------------- 1 | # web-speech-api 2 | 3 | NOTE: This repository is archived and moved into the MDN Web Docs [dom-examples repository](https://github.com/mdn/dom-examples) under the web-speech-api folder. 4 | -------------------------------------------------------------------------------- /web-speech-api/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Web Speech API 9 | 10 | 11 | 14 | 15 | 16 | 17 |
Pick your test

More information
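Only the two headings above survive from this landing page; its markup and link text were stripped when the dump was generated. A minimal reconstruction, assuming the first heading is followed by links to the three demo folders shown in the tree and the second by a link to the MDN page already referenced in the manifests (heading level, link text and list structure are assumptions, not recovered content):

<!-- link text and targets are inferred from the repository layout, not from the original file -->
<h1>Pick your test</h1>
<ul>
  <li><a href="speech-color-changer/index.html">Speech color changer</a></li>
  <li><a href="speak-easy-synthesis/index.html">Speak easy synthesis</a></li>
  <li><a href="phrase-matcher/index.html">Phrase matcher</a></li>
</ul>

<h1>More information</h1>
<ul>
  <li><a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API">MDN Web Speech API documentation</a></li>
</ul>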
24 | 28 | 29 | 30 | -------------------------------------------------------------------------------- /web-speech-api/phrase-matcher/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Phrase matcher 9 | 10 | 11 | 14 | 15 | 16 | 17 |
Phrase matcher

Press the button then say the phrase to test the recognition.

Phrase...
Right or wrong?
...diagnostic messages
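The tags of this page were also stripped; only its text content remains above. A plausible minimal body, inferred from phrase-matcher/script.js (which queries a lone button plus paragraphs with the classes phrase, result and output) and from phrase-matcher/style.css; the initial button label is assumed to match the "Start new test" text the script restores after each run:

<h1>Phrase matcher</h1>

<p>Press the button then say the phrase to test the recognition.</p>

<!-- label assumed; script.js resets it to "Start new test" after each test -->
<button>Start new test</button>

<div>
  <p class="phrase">Phrase...</p>
  <p class="result">Right or wrong?</p>
  <p class="output">...diagnostic messages</p>
</div>

<!-- script.js is assumed to be loaded at the end of the body -->
<script src="script.js"></script>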
27 | 28 | 29 | 30 | -------------------------------------------------------------------------------- /web-speech-api/phrase-matcher/script.js: -------------------------------------------------------------------------------- 1 | var SpeechRecognition = SpeechRecognition || webkitSpeechRecognition; 2 | var SpeechGrammarList = SpeechGrammarList || webkitSpeechGrammarList; 3 | var SpeechRecognitionEvent = SpeechRecognitionEvent || webkitSpeechRecognitionEvent; 4 | 5 | var phrases = [ 6 | 'I love to sing because it\'s fun', 7 | 'where are you going', 8 | 'can I call you tomorrow', 9 | 'why did you talk while I was talking', 10 | 'she enjoys reading books and playing games', 11 | 'where are you going', 12 | 'have a great day', 13 | 'she sells seashells on the seashore' 14 | ]; 15 | 16 | var phrasePara = document.querySelector('.phrase'); 17 | var resultPara = document.querySelector('.result'); 18 | var diagnosticPara = document.querySelector('.output'); 19 | 20 | var testBtn = document.querySelector('button'); 21 | 22 | function randomPhrase() { 23 | var number = Math.floor(Math.random() * phrases.length); 24 | return number; 25 | } 26 | 27 | function testSpeech() { 28 | testBtn.disabled = true; 29 | testBtn.textContent = 'Test in progress'; 30 | 31 | var phrase = phrases[randomPhrase()]; 32 | // To ensure case consistency while checking with the returned output text 33 | phrase = phrase.toLowerCase(); 34 | phrasePara.textContent = phrase; 35 | resultPara.textContent = 'Right or wrong?'; 36 | resultPara.style.background = 'rgba(0,0,0,0.2)'; 37 | diagnosticPara.textContent = '...diagnostic messages'; 38 | 39 | var grammar = '#JSGF V1.0; grammar phrase; public = ' + phrase +';'; 40 | var recognition = new SpeechRecognition(); 41 | var speechRecognitionList = new SpeechGrammarList(); 42 | speechRecognitionList.addFromString(grammar, 1); 43 | recognition.grammars = speechRecognitionList; 44 | recognition.lang = 'en-US'; 45 | recognition.interimResults = false; 46 | recognition.maxAlternatives = 1; 47 | 48 | recognition.start(); 49 | 50 | recognition.onresult = function(event) { 51 | // The SpeechRecognitionEvent results property returns a SpeechRecognitionResultList object 52 | // The SpeechRecognitionResultList object contains SpeechRecognitionResult objects. 53 | // It has a getter so it can be accessed like an array 54 | // The first [0] returns the SpeechRecognitionResult at position 0. 55 | // Each SpeechRecognitionResult object contains SpeechRecognitionAlternative objects that contain individual results. 56 | // These also have getters so they can be accessed like arrays. 57 | // The second [0] returns the SpeechRecognitionAlternative at position 0. 
58 | // We then return the transcript property of the SpeechRecognitionAlternative object 59 | var speechResult = event.results[0][0].transcript.toLowerCase(); 60 | diagnosticPara.textContent = 'Speech received: ' + speechResult + '.'; 61 | if(speechResult === phrase) { 62 | resultPara.textContent = 'I heard the correct phrase!'; 63 | resultPara.style.background = 'lime'; 64 | } else { 65 | resultPara.textContent = 'That didn\'t sound right.'; 66 | resultPara.style.background = 'red'; 67 | } 68 | 69 | console.log('Confidence: ' + event.results[0][0].confidence); 70 | } 71 | 72 | recognition.onspeechend = function() { 73 | recognition.stop(); 74 | testBtn.disabled = false; 75 | testBtn.textContent = 'Start new test'; 76 | } 77 | 78 | recognition.onerror = function(event) { 79 | testBtn.disabled = false; 80 | testBtn.textContent = 'Start new test'; 81 | diagnosticPara.textContent = 'Error occurred in recognition: ' + event.error; 82 | } 83 | 84 | recognition.onaudiostart = function(event) { 85 | //Fired when the user agent has started to capture audio. 86 | console.log('SpeechRecognition.onaudiostart'); 87 | } 88 | 89 | recognition.onaudioend = function(event) { 90 | //Fired when the user agent has finished capturing audio. 91 | console.log('SpeechRecognition.onaudioend'); 92 | } 93 | 94 | recognition.onend = function(event) { 95 | //Fired when the speech recognition service has disconnected. 96 | console.log('SpeechRecognition.onend'); 97 | } 98 | 99 | recognition.onnomatch = function(event) { 100 | //Fired when the speech recognition service returns a final result with no significant recognition. This may involve some degree of recognition, which doesn't meet or exceed the confidence threshold. 101 | console.log('SpeechRecognition.onnomatch'); 102 | } 103 | 104 | recognition.onsoundstart = function(event) { 105 | //Fired when any sound — recognisable speech or not — has been detected. 106 | console.log('SpeechRecognition.onsoundstart'); 107 | } 108 | 109 | recognition.onsoundend = function(event) { 110 | //Fired when any sound — recognisable speech or not — has stopped being detected. 111 | console.log('SpeechRecognition.onsoundend'); 112 | } 113 | 114 | recognition.onspeechstart = function (event) { 115 | //Fired when sound that is recognised by the speech recognition service as speech has been detected. 116 | console.log('SpeechRecognition.onspeechstart'); 117 | } 118 | recognition.onstart = function(event) { 119 | //Fired when the speech recognition service has begun listening to incoming audio with intent to recognize grammars associated with the current SpeechRecognition. 
120 | console.log('SpeechRecognition.onstart'); 121 | } 122 | } 123 | 124 | testBtn.addEventListener('click', testSpeech); 125 | -------------------------------------------------------------------------------- /web-speech-api/phrase-matcher/style.css: -------------------------------------------------------------------------------- 1 | body, html { 2 | margin: 0; 3 | } 4 | 5 | html { 6 | height: 100%; 7 | background-color: teal; 8 | } 9 | 10 | body { 11 | height: inherit; 12 | overflow: hidden; 13 | } 14 | 15 | h1, p { 16 | font-family: sans-serif; 17 | text-align: center; 18 | } 19 | 20 | div p { 21 | padding: 20px; 22 | background-color: rgba(0,0,0,0.2); 23 | } 24 | 25 | div { 26 | overflow: auto; 27 | position: absolute; 28 | bottom: 0; 29 | right: 0; 30 | left: 0; 31 | } 32 | 33 | button { 34 | margin: 0 auto; 35 | display: block; 36 | font-size: 1.1rem; 37 | width: 170px; 38 | line-height: 2; 39 | margin-top: 30px; 40 | } 41 | 42 | @media all and (max-height: 410px) { 43 | div { 44 | position: static; 45 | } 46 | } 47 | 48 | .phrase { 49 | font-weight: bold; 50 | } 51 | 52 | .output { 53 | font-style: italic; 54 | } -------------------------------------------------------------------------------- /web-speech-api/speak-easy-synthesis/img/ws128.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mdn/web-speech-api/7ad8f2b13947c3a0ccad48477f41db62f78d8bd9/web-speech-api/speak-easy-synthesis/img/ws128.png -------------------------------------------------------------------------------- /web-speech-api/speak-easy-synthesis/img/ws512.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mdn/web-speech-api/7ad8f2b13947c3a0ccad48477f41db62f78d8bd9/web-speech-api/speak-easy-synthesis/img/ws512.png -------------------------------------------------------------------------------- /web-speech-api/speak-easy-synthesis/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Speech synthesiser 9 | 10 | 11 | 14 | 15 | 16 | 17 |
Speech synthesiser

Enter some text in the input below and press return or the "play" button to hear it. Change voices using the dropdown menu.

1
1
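The synthesis form was likewise reduced to its text content. A sketch consistent with speak-easy-synthesis/script.js and style.css, which expect a form containing a text input with class txt, range inputs with ids rate and pitch whose values are mirrored into .rate-value and .pitch-value (both start at 1, matching the stray "1" readouts above), a select for the voice list, and a submit button inside .controls; the range min/max/step attributes are assumptions:

<h1>Speech synthesiser</h1>

<p>Enter some text in the input below and press return or the "play" button
  to hear it. Change voices using the dropdown menu.</p>

<form>
  <input class="txt" type="text" />

  <div>
    <label for="rate">Rate</label>
    <!-- min/max/step values are assumed, not recovered from the dump -->
    <input type="range" min="0.5" max="2" value="1" step="0.1" id="rate" />
    <div class="rate-value">1</div>
  </div>

  <div>
    <label for="pitch">Pitch</label>
    <input type="range" min="0" max="2" value="1" step="0.1" id="pitch" />
    <div class="pitch-value">1</div>
  </div>

  <!-- populated by populateVoiceList() in script.js -->
  <select></select>

  <div class="controls">
    <button type="submit">Play</button>
  </div>
</form>

<!-- script.js is assumed to be loaded at the end of the body -->
<script src="script.js"></script>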
40 | 41 | 42 | 43 | -------------------------------------------------------------------------------- /web-speech-api/speak-easy-synthesis/manifest.webapp: -------------------------------------------------------------------------------- 1 | { 2 | "name": "SpeechSyn", 3 | "description": "Web Speech API speech synthesis demo", 4 | "launch_path": "/index.html", 5 | "icons": { 6 | "512": "/img/ws512.png", 7 | "128": "/img/ws128.png" 8 | }, 9 | "developer": { 10 | "name": "Chris Mills", 11 | "url": "https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API" 12 | }, 13 | "default_locale": "en" 14 | } -------------------------------------------------------------------------------- /web-speech-api/speak-easy-synthesis/script.js: -------------------------------------------------------------------------------- 1 | const synth = window.speechSynthesis; 2 | 3 | const inputForm = document.querySelector("form"); 4 | const inputTxt = document.querySelector(".txt"); 5 | const voiceSelect = document.querySelector("select"); 6 | 7 | const pitch = document.querySelector("#pitch"); 8 | const pitchValue = document.querySelector(".pitch-value"); 9 | const rate = document.querySelector("#rate"); 10 | const rateValue = document.querySelector(".rate-value"); 11 | 12 | let voices = []; 13 | 14 | function populateVoiceList() { 15 | voices = synth.getVoices().sort(function (a, b) { 16 | const aname = a.name.toUpperCase(); 17 | const bname = b.name.toUpperCase(); 18 | 19 | if (aname < bname) { 20 | return -1; 21 | } else if (aname == bname) { 22 | return 0; 23 | } else { 24 | return +1; 25 | } 26 | }); 27 | const selectedIndex = 28 | voiceSelect.selectedIndex < 0 ? 0 : voiceSelect.selectedIndex; 29 | voiceSelect.innerHTML = ""; 30 | 31 | for (let i = 0; i < voices.length; i++) { 32 | const option = document.createElement("option"); 33 | option.textContent = `${voices[i].name} (${voices[i].lang})`; 34 | 35 | if (voices[i].default) { 36 | option.textContent += " -- DEFAULT"; 37 | } 38 | 39 | option.setAttribute("data-lang", voices[i].lang); 40 | option.setAttribute("data-name", voices[i].name); 41 | voiceSelect.appendChild(option); 42 | } 43 | voiceSelect.selectedIndex = selectedIndex; 44 | } 45 | 46 | populateVoiceList(); 47 | 48 | if (speechSynthesis.onvoiceschanged !== undefined) { 49 | speechSynthesis.onvoiceschanged = populateVoiceList; 50 | } 51 | 52 | function speak() { 53 | if (synth.speaking) { 54 | console.error("speechSynthesis.speaking"); 55 | return; 56 | } 57 | 58 | if (inputTxt.value !== "") { 59 | const utterThis = new SpeechSynthesisUtterance(inputTxt.value); 60 | 61 | utterThis.onend = function (event) { 62 | console.log("SpeechSynthesisUtterance.onend"); 63 | }; 64 | 65 | utterThis.onerror = function (event) { 66 | console.error("SpeechSynthesisUtterance.onerror"); 67 | }; 68 | 69 | const selectedOption = 70 | voiceSelect.selectedOptions[0].getAttribute("data-name"); 71 | 72 | for (let i = 0; i < voices.length; i++) { 73 | if (voices[i].name === selectedOption) { 74 | utterThis.voice = voices[i]; 75 | break; 76 | } 77 | } 78 | utterThis.pitch = pitch.value; 79 | utterThis.rate = rate.value; 80 | synth.speak(utterThis); 81 | } 82 | } 83 | 84 | inputForm.onsubmit = function (event) { 85 | event.preventDefault(); 86 | 87 | speak(); 88 | 89 | inputTxt.blur(); 90 | }; 91 | 92 | pitch.onchange = function () { 93 | pitchValue.textContent = pitch.value; 94 | }; 95 | 96 | rate.onchange = function () { 97 | rateValue.textContent = rate.value; 98 | }; 99 | 100 | voiceSelect.onchange = function () { 101 | 
speak(); 102 | }; 103 | -------------------------------------------------------------------------------- /web-speech-api/speak-easy-synthesis/style.css: -------------------------------------------------------------------------------- 1 | body, html { 2 | margin: 0; 3 | } 4 | 5 | html { 6 | height: 100%; 7 | } 8 | 9 | body { 10 | height: 90%; 11 | max-width: 800px; 12 | margin: 0 auto; 13 | } 14 | 15 | h1, p { 16 | font-family: sans-serif; 17 | text-align: center; 18 | padding: 20px; 19 | } 20 | 21 | .txt, select, form > div { 22 | display: block; 23 | margin: 0 auto; 24 | font-family: sans-serif; 25 | font-size: 16px; 26 | padding: 5px; 27 | } 28 | 29 | .txt { 30 | width: 80%; 31 | } 32 | 33 | select { 34 | width: 83%; 35 | } 36 | 37 | form > div { 38 | width: 81%; 39 | } 40 | 41 | .txt, form > div { 42 | margin-bottom: 10px; 43 | overflow: auto; 44 | } 45 | 46 | .clearfix { 47 | clear: both; 48 | } 49 | 50 | label { 51 | float: left; 52 | width: 10%; 53 | line-height: 1.5; 54 | } 55 | 56 | .rate-value, .pitch-value { 57 | float: right; 58 | width: 5%; 59 | line-height: 1.5; 60 | } 61 | 62 | #rate, #pitch { 63 | float: right; 64 | width: 81%; 65 | } 66 | 67 | .controls { 68 | text-align: center; 69 | margin-top: 10px; 70 | } 71 | 72 | .controls button { 73 | padding: 10px; 74 | } -------------------------------------------------------------------------------- /web-speech-api/speech-color-changer/img/ws128.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mdn/web-speech-api/7ad8f2b13947c3a0ccad48477f41db62f78d8bd9/web-speech-api/speech-color-changer/img/ws128.png -------------------------------------------------------------------------------- /web-speech-api/speech-color-changer/img/ws512.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mdn/web-speech-api/7ad8f2b13947c3a0ccad48477f41db62f78d8bd9/web-speech-api/speech-color-changer/img/ws512.png -------------------------------------------------------------------------------- /web-speech-api/speech-color-changer/index.html: -------------------------------------------------------------------------------- 1 |  2 | 3 | 4 | 5 | 6 | 7 | 8 | Speech color changer 9 | 10 | 11 | 14 | 15 | 16 | 17 |
Speech color changer

...diagnostic messages
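Only the heading and the diagnostic placeholder survive here. A minimal body consistent with speech-color-changer/script.js, which fills a paragraph with class hints with the colour suggestions and writes results into a paragraph with class output:

<h1>Speech color changer</h1>

<!-- filled in by script.js with the "Tap/click then say a color..." hint text -->
<p class="hints"></p>

<div>
  <p class="output">...diagnostic messages</p>
</div>

<!-- script.js is assumed to be loaded at the end of the body -->
<script src="script.js"></script>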
23 | 24 | 25 | 26 | 27 | -------------------------------------------------------------------------------- /web-speech-api/speech-color-changer/manifest.webapp: -------------------------------------------------------------------------------- 1 | { 2 | "name": "SpeechRec", 3 | "description": "Web Speech API speech recognition demo", 4 | "type": "privileged", 5 | "launch_path": "/index.html", 6 | "icons": { 7 | "512": "/img/ws512.png", 8 | "128": "/img/ws128.png" 9 | }, 10 | "developer": { 11 | "name": "Chris Mills", 12 | "url": "https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API" 13 | }, 14 | "default_locale": "en", 15 | "permissions": { 16 | "audio-capture" : { 17 | "description" : "Audio capture" 18 | }, 19 | "speech-recognition" : { 20 | "description" : "Speech recognition" 21 | } 22 | } 23 | } -------------------------------------------------------------------------------- /web-speech-api/speech-color-changer/script.js: -------------------------------------------------------------------------------- 1 | var SpeechRecognition = SpeechRecognition || webkitSpeechRecognition 2 | var SpeechGrammarList = SpeechGrammarList || window.webkitSpeechGrammarList 3 | var SpeechRecognitionEvent = SpeechRecognitionEvent || webkitSpeechRecognitionEvent 4 | 5 | var colors = [ 'aqua' , 'azure' , 'beige', 'bisque', 'black', 'blue', 'brown', 'chocolate', 'coral', 'crimson', 'cyan', 'fuchsia', 'ghostwhite', 'gold', 'goldenrod', 'gray', 'green', 'indigo', 'ivory', 'khaki', 'lavender', 'lime', 'linen', 'magenta', 'maroon', 'moccasin', 'navy', 'olive', 'orange', 'orchid', 'peru', 'pink', 'plum', 'purple', 'red', 'salmon', 'sienna', 'silver', 'snow', 'tan', 'teal', 'thistle', 'tomato', 'turquoise', 'violet', 'white', 'yellow']; 6 | 7 | var recognition = new SpeechRecognition(); 8 | if (SpeechGrammarList) { 9 | // SpeechGrammarList is not currently available in Safari, and does not have any effect in any other browser. 10 | // This code is provided as a demonstration of possible capability. You may choose not to use it. 11 | var speechRecognitionList = new SpeechGrammarList(); 12 | var grammar = '#JSGF V1.0; grammar colors; public = ' + colors.join(' | ') + ' ;' 13 | speechRecognitionList.addFromString(grammar, 1); 14 | recognition.grammars = speechRecognitionList; 15 | } 16 | recognition.continuous = false; 17 | recognition.lang = 'en-US'; 18 | recognition.interimResults = false; 19 | recognition.maxAlternatives = 1; 20 | 21 | var diagnostic = document.querySelector('.output'); 22 | var bg = document.querySelector('html'); 23 | var hints = document.querySelector('.hints'); 24 | 25 | var colorHTML= ''; 26 | colors.forEach(function(v, i, a){ 27 | console.log(v, i); 28 | colorHTML += ' ' + v + ' '; 29 | }); 30 | hints.innerHTML = 'Tap/click then say a color to change the background color of the app. Try ' + colorHTML + '.'; 31 | 32 | document.body.onclick = function() { 33 | recognition.start(); 34 | console.log('Ready to receive a color command.'); 35 | } 36 | 37 | recognition.onresult = function(event) { 38 | // The SpeechRecognitionEvent results property returns a SpeechRecognitionResultList object 39 | // The SpeechRecognitionResultList object contains SpeechRecognitionResult objects. 40 | // It has a getter so it can be accessed like an array 41 | // The first [0] returns the SpeechRecognitionResult at the last position. 42 | // Each SpeechRecognitionResult object contains SpeechRecognitionAlternative objects that contain individual results. 
43 | // These also have getters so they can be accessed like arrays. 44 | // The second [0] returns the SpeechRecognitionAlternative at position 0. 45 | // We then return the transcript property of the SpeechRecognitionAlternative object 46 | var color = event.results[0][0].transcript; 47 | diagnostic.textContent = 'Result received: ' + color + '.'; 48 | bg.style.backgroundColor = color; 49 | console.log('Confidence: ' + event.results[0][0].confidence); 50 | } 51 | 52 | recognition.onspeechend = function() { 53 | recognition.stop(); 54 | } 55 | 56 | recognition.onnomatch = function(event) { 57 | diagnostic.textContent = "I didn't recognise that color."; 58 | } 59 | 60 | recognition.onerror = function(event) { 61 | diagnostic.textContent = 'Error occurred in recognition: ' + event.error; 62 | } 63 | -------------------------------------------------------------------------------- /web-speech-api/speech-color-changer/style.css: -------------------------------------------------------------------------------- 1 | body, html { 2 | margin: 0; 3 | } 4 | 5 | html { 6 | height: 100%; 7 | } 8 | 9 | body { 10 | height: inherit; 11 | overflow: hidden; 12 | max-width: 800px; 13 | margin: 0 auto; 14 | } 15 | 16 | h1, p { 17 | font-family: sans-serif; 18 | text-align: center; 19 | padding: 20px; 20 | } 21 | 22 | div { 23 | height: 100px; 24 | overflow: auto; 25 | position: absolute; 26 | bottom: 0px; 27 | right: 0; 28 | left: 0; 29 | background-color: rgba(255,255,255,0.2); 30 | } 31 | 32 | ul { 33 | margin: 0; 34 | } 35 | 36 | .hints span { 37 | text-shadow: 0px 0px 6px rgba(255,255,255,0.7); 38 | } 39 | --------------------------------------------------------------------------------