├── .DS_Store ├── .gitignore ├── use-cases ├── .DS_Store ├── respec-config.js ├── draft.md └── index.html ├── gap-analysis ├── .DS_Store ├── respec-config.js └── index.html ├── user-scenarios ├── .DS_Store ├── respec-config.js ├── draft.md └── index.html ├── samples ├── audio │ ├── case-sub.mp3 │ ├── case-audio.mp3 │ ├── case-break.mp3 │ ├── case-phoneme.mp3 │ ├── case-prosody.mp3 │ ├── case-raven.mp3 │ ├── case-say-as.mp3 │ └── case-emphasis.mp3 ├── index.html ├── respec-config.js ├── w3cptf-singleattr-tests.html └── w3cptf-multiattr-tests.html ├── w3c.json ├── docs ├── bestpractices.md └── explainer.md ├── LICENSE.md ├── CODE_OF_CONDUCT.md ├── scripts ├── DICTIONARY └── proof.js ├── CONTRIBUTING.md ├── index.html ├── common └── acknowledgements.html ├── .github └── workflows │ └── auto-publish.yml ├── README.md ├── presentations └── template.html ├── technical-approach ├── respec-config.js ├── ssml-json-schema-w3cptf.json └── appendixJSON.html ├── explainer ├── respec-config.js └── index.html └── gap-analysis_and_use-case └── respec-config.js /.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/.DS_Store -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | node_modules 3 | docs/expainer.md 4 | .vscode -------------------------------------------------------------------------------- /use-cases/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/use-cases/.DS_Store -------------------------------------------------------------------------------- /gap-analysis/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/gap-analysis/.DS_Store -------------------------------------------------------------------------------- /user-scenarios/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/user-scenarios/.DS_Store -------------------------------------------------------------------------------- /samples/audio/case-sub.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/samples/audio/case-sub.mp3 -------------------------------------------------------------------------------- /samples/audio/case-audio.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/samples/audio/case-audio.mp3 -------------------------------------------------------------------------------- /samples/audio/case-break.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/samples/audio/case-break.mp3 -------------------------------------------------------------------------------- /samples/audio/case-phoneme.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/samples/audio/case-phoneme.mp3 -------------------------------------------------------------------------------- /samples/audio/case-prosody.mp3: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/samples/audio/case-prosody.mp3 -------------------------------------------------------------------------------- /samples/audio/case-raven.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/samples/audio/case-raven.mp3 -------------------------------------------------------------------------------- /samples/audio/case-say-as.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/samples/audio/case-say-as.mp3 -------------------------------------------------------------------------------- /samples/audio/case-emphasis.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/w3c/pronunciation/HEAD/samples/audio/case-emphasis.mp3 -------------------------------------------------------------------------------- /w3c.json: -------------------------------------------------------------------------------- 1 | { 2 | "group": [83907] 3 | , "contacts": ["michael-n-cooper"] 4 | , "repo-type": "rec-track" 5 | } 6 | -------------------------------------------------------------------------------- /docs/bestpractices.md: -------------------------------------------------------------------------------- 1 | #Best Practices for Implementing (or authoring?) Spoken Presentation and Pronunciation of Web Content (DRAFT) 2 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | All documents in this Repository are licensed by contributors 2 | under the 3 | [W3C Document License](https://www.w3.org/Consortium/Legal/copyright-documents). 4 | 5 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Code of Conduct 2 | 3 | All documentation, code and communication under this repository are covered by the [W3C Code of Ethics and Professional Conduct](https://www.w3.org/Consortium/cepc/). 4 | -------------------------------------------------------------------------------- /scripts/DICTIONARY: -------------------------------------------------------------------------------- 1 | Alexa 2 | DOM 3 | Grenier 4 | HTML 5 | HTML5 6 | JSON 7 | JSON-LD 8 | Léamh 9 | MathML 10 | microdata 11 | SSML 12 | SVG 13 | SaaS 14 | TTS 15 | UI 16 | VoiceXML 17 | W3C 18 | aria-ssml 19 | data-ssml 20 | heteronyms 21 | namespace 22 | namespaces 23 | romaji -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Accessible Platform Architectures Working Group 2 | 3 | Contributions to this repository are intended to become part of Recommendation-track documents governed by the 4 | [W3C Patent Policy](https://www.w3.org/Consortium/Patent-Policy-20040205/) and 5 | [Document License](https://www.w3.org/Consortium/Legal/copyright-documents). To make substantive contributions to specifications, you must either participate 6 | in the relevant W3C Working Group or make a non-member patent licensing commitment. 
7 | 8 | If you are not the sole contributor to a contribution (pull request), please identify all 9 | contributors in the pull request comment. 10 | 11 | To add a contributor (other than yourself, that's automatic), mark them one per line as follows: 12 | 13 | ``` 14 | +@github_username 15 | ``` 16 | 17 | If you added a contributor by mistake, you can remove them in a comment with: 18 | 19 | ``` 20 | -@github_username 21 | ``` 22 | 23 | If you are making a pull request on behalf of someone else but you had no part in designing the 24 | feature, you can remove yourself with the above syntax. 25 | -------------------------------------------------------------------------------- /index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Spec proposal 6 | 7 | 17 | 18 | 19 |
20 |

21 | This specification does neat stuff. 22 |

23 |
24 |
25 |

26 | This is an unofficial proposal. 27 |

28 |
29 | 30 |
31 |

Introduction

32 |

33 | See ReSpec's user guide 34 | for how to get started! 

36 |
37 | 38 | 39 | -------------------------------------------------------------------------------- /common/acknowledgements.html: -------------------------------------------------------------------------------- 1 |
2 |

Acknowledgments

3 |

The following people contributed to the development of this document.

4 |
5 |

Participants active in the Pronunciation TF at the time of publication

6 | 26 |
27 |
28 | -------------------------------------------------------------------------------- /.github/workflows/auto-publish.yml: -------------------------------------------------------------------------------- 1 | name: CI 2 | on: 3 | pull_request: {} 4 | push: 5 | branches: [main] 6 | jobs: 7 | main: 8 | name: Build, Validate and Deploy 9 | runs-on: ubuntu-latest 10 | steps: 11 | - uses: actions/checkout@v2 12 | - uses: w3c/spec-prod@v2 13 | with: 14 | SOURCE: explainer/index.html 15 | DESTINATION: explainer/index.html 16 | TOOLCHAIN: respec 17 | GH_PAGES_BRANCH: gh-pages 18 | VALIDATE_WEBIDL: false 19 | VALIDATE_MARKUP: false 20 | - uses: w3c/spec-prod@v2 21 | with: 22 | SOURCE: gap-analysis_and_use-case/index.html 23 | DESTINATION: gap-analysis_and_use-case/index.html 24 | TOOLCHAIN: respec 25 | GH_PAGES_BRANCH: gh-pages 26 | VALIDATE_WEBIDL: false 27 | VALIDATE_MARKUP: false 28 | - uses: w3c/spec-prod@v2 29 | with: 30 | SOURCE: technical-approach/index.html 31 | DESTINATION: technical-approach/index.html 32 | TOOLCHAIN: respec 33 | GH_PAGES_BRANCH: gh-pages 34 | VALIDATE_WEBIDL: false 35 | VALIDATE_MARKUP: false 36 | - uses: w3c/spec-prod@v2 37 | with: 38 | SOURCE: user-scenarios/index.html 39 | DESTINATION: user-scenarios/index.html 40 | TOOLCHAIN: respec 41 | GH_PAGES_BRANCH: gh-pages 42 | VALIDATE_WEBIDL: false 43 | VALIDATE_MARKUP: false 44 | 45 | -------------------------------------------------------------------------------- /samples/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | W3C Pronunciation Task Force Samples Page 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
15 |

This document provides samples of Pronunciation.

16 |
17 |

W3C Pronunciation Task Force SSML in HTML Sample Content

18 |

This sample content has been developed by the W3C Accessible Platform Architecture Pronunciation Task Force to allow developers to 19 | evaluate and test their implementation of the proposed Attribute Model of SSML in HTML. The samples are presented in each of the two proposed 20 | approaches for representing SSML via attributes. There are currently 9 test cases, each incorporating 21 | one or more of the SSML functions defined in the Pronunciation Technical Approach Document. Each test case includes an audio sample of the expected 22 | spoken presentation. 
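For orientation, the two attribute styles differ roughly as sketched below (illustrative fragment only; the data-ssml attribute name and JSON value shape follow the task force explainer and draft JSON schema, and the namespaced pair echoes the EPUB3 precedent cited there — neither is the exact markup of the sample pages):
   Single-attribute style: <span data-ssml='{"say-as": {"interpret-as": "characters"}}'>90274</span>
   Multi-attribute style: one attribute per SSML property, as in the EPUB3 pair <span ssml:ph="ˈdrɪəri" ssml:alphabet="ipa">dreary</span>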

24 |

Developers of read aloud tools, screen readers, and voice assistants should use these test cases to evaluate and test their implementation. Questions and test results 25 | should be filed as issues in the Pronunciation github repo.

26 |

Sample Content

27 | 31 | 32 | 33 | 34 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | - [Calendar link](https://www.w3.org/groups/tf/pronunciation-tf/calendar/) 2 | 3 | # Spoken Presentation Task Force current work 4 | 5 | Editors' drafts are available on: 6 | 7 | * Explainer: Improving Spoken Presentation on the Web - https://w3c.github.io/pronunciation/explainer 8 | 9 | * Pronunciation User Scenarios - https://w3c.github.io/pronunciation/user-scenarios/ 10 | 11 | * Pronunciation Gap Analysis and Use Cases - https://w3c.github.io/pronunciation/gap-analysis_and_use-case 12 | 13 | * Pronunciation Technical Approach - https://w3c.github.io/pronunciation/technical-approach/ 14 | 15 | Markdown URL for Use-cases 16 | 17 | * https://github.com/w3c/pronunciation/blob/master/use-cases/draft.md 18 | 19 | ## Timeline 20 | 21 | 22 | Spoken Presentation TF want to make a publication every three months if possible. 23 | Timeline: https://github.com/w3c/pronunciation/wiki/Timeline 24 | 25 | # Implementations 26 | 27 | ## Multi-attribute Approach 28 | 29 | none 30 | 31 | ## Single-attribute Approach 32 | 33 | Note: need specific URL to confirm from each vendor 34 | 35 | Texthelp SpeechStream : https://www.texthelp.com/en-us/products/speechstream/ 36 | 37 | Pearson TestNav : https://home.testnav.com/ 38 | 39 | # For Authors 40 | 41 | ## Proofing Markdown 42 | 43 | Have node installed. Then, from this folder, run `npm install` from the command line (only the first time). 44 | 45 | To proof all markdown files in this repository, `node ./scripts/proof.js`. To proof specific files, use "glob" syntax like `node ./scripts/proof.js */draft.md` or a specific files `node ./scripts/proof.js contributing.md readme.md` 46 | 47 | If you introduce proper nouns, technical terms, or acronyms, you can add them to the `./scripts/DICTIONARY` file which will no longer trigger the `retext-spell` rule. 
48 | -------------------------------------------------------------------------------- /scripts/proof.js: -------------------------------------------------------------------------------- 1 | const vfile = require('to-vfile'); 2 | const unified = require('unified'); 3 | const markdown = require('remark-parse'); 4 | const lint = require('remark-lint'); 5 | const stringify = require('remark-stringify'); 6 | const report = require('vfile-reporter'); 7 | const glob = require('fast-glob'); 8 | const remark2retext = require('remark-retext'); 9 | const fs = require('fs'); 10 | const path = require('path'); 11 | const personal = fs.readFileSync(path.join(__dirname, 'DICTIONARY')); 12 | 13 | // Plugins 14 | // @see https://github.com/retextjs/retext/blob/master/doc/plugins.md 15 | const english = require('retext-english'); 16 | const equality = require('retext-equality'); 17 | const passive = require('retext-passive'); 18 | const simplify = require('retext-simplify'); 19 | const readability = require('retext-readability'); 20 | const spell = require('retext-spell'); 21 | const dictionary = require('dictionary-en-us'); 22 | const urls = require('retext-syntax-urls'); 23 | const acronyms = require('retext-redundant-acronyms'); 24 | const repeated = require('retext-repeated-words'); 25 | 26 | // Take command line file glob 27 | const files = process.argv.slice(2); 28 | 29 | // Set default if no glob passed 30 | if (!files.length) { 31 | files.push('**/*.md'); 32 | } 33 | 34 | // Always avoid node_modules 35 | glob(['!node_modules', ...files]).then(files => 36 | files.map(file => processAST(vfile.readSync(file))) 37 | ); 38 | 39 | function processAST(ast) { 40 | unified() 41 | .use(markdown) 42 | .use(lint) 43 | .use( 44 | remark2retext, 45 | unified() 46 | .use(english) 47 | .use(urls) 48 | .use(acronyms) 49 | .use(repeated) 50 | .use(equality) 51 | .use(passive) 52 | .use(simplify) 53 | .use(readability) 54 | .use(spell, { 55 | dictionary, 56 | personal 57 | }) 58 | ) 59 | .use(stringify) 60 | .process(ast, function(err, file) { 61 | console.error(report(err || file)); 62 | }); 63 | } 64 | -------------------------------------------------------------------------------- /presentations/template.html: -------------------------------------------------------------------------------- 1 | 2 | 4 | 5 | 6 | 7 | Slide Shows in XHTML 8 | 9 | 11 | 13 | 15 | 17 | 20 | 21 | 22 | 23 |
24 | graphic with four colored squares 26 | 32 |
33 |
34 |
35 |

Specification for Spoken Presentation in HTML

36 | https://www.w3.org/TR/spoken-html/ 38 |

39 |
40 |
41 |

New Slide

42 | 61 |
62 | 63 | 64 | -------------------------------------------------------------------------------- /samples/respec-config.js: -------------------------------------------------------------------------------- 1 | var respecConfig = { 2 | // embed RDFa data in the output 3 | trace: true, 4 | doRDFa: '1.0', 5 | includePermalinks: true, 6 | permalinkEdge: true, 7 | permalinkHide: false, 8 | tocIntroductory: true, 9 | // specification status (e.g., WD, LC, NOTE, etc.). If in doubt use ED. 10 | specStatus: "unofficial", 11 | //noRecTrack: false, 12 | //crEnd: "2012-04-30", 13 | //perEnd: "2013-07-23", 14 | //publishDate: "2013-08-22", 15 | //diffTool: "http://www.aptest.com/standards/htmldiff/htmldiff.pl", 16 | 17 | // the specifications short name, as in http://www.w3.org/TR/short-name/ 18 | shortName: "pronunciation-samples", 19 | noTOC: true, 20 | 21 | 22 | // if you wish the publication date to be other than today, set this 23 | //publishDate: "2017-05-09", 24 | copyrightStart: "2021", 25 | license: "w3c-software", 26 | editors: [ 27 | 28 | { 29 | name: "", 30 | url: '', 31 | mailto: "", 32 | company: "", 33 | companyURI: "", 34 | 35 | } 36 | ], 37 | 38 | // if there is a previously published draft, uncomment this and set its YYYY-MM-DD date 39 | // and its maturity status 40 | //previousPublishDate: "", 41 | //previousMaturity: "", 42 | //prevRecURI: "", 43 | //previousDiffURI: "", 44 | 45 | // if there a publicly available Editors Draft, this is the link 46 | //edDraftURI: "https://w3c.github.io/pronunciation/technical-approach", 47 | 48 | // if this is a LCWD, uncomment and set the end of its review period 49 | // lcEnd: "2012-02-21", 50 | 51 | // editors, add as many as you like 52 | // only "name" is required 53 | 54 | 55 | // authors, add as many as you like. 56 | // This is optional, uncomment if you have authors as well as editors. 57 | // only "name" is required. Same format as editors. 58 | 59 | //authors: [ 60 | // { name: "Your Name", url: "http://example.org/", 61 | // company: "Your Company", companyURI: "http://example.com/" }, 62 | //], 63 | 64 | /* 65 | alternateFormats: [ 66 | { uri: 'aria-diff.html', label: "Diff from Previous Recommendation" } , 67 | { uri: 'aria.ps', label: "PostScript version" }, 68 | { uri: 'aria.pdf', label: "PDF version" } 69 | ], 70 | */ 71 | 72 | // errata: 'http://www.w3.org/2010/02/rdfa/errata.html', 73 | 74 | 75 | 76 | // WG info 77 | group: "apa" 78 | 79 | // Spec URLs 80 | 81 | 82 | }; 83 | -------------------------------------------------------------------------------- /technical-approach/respec-config.js: -------------------------------------------------------------------------------- 1 | var respecConfig = { 2 | // embed RDFa data in the output 3 | trace: true, 4 | doRDFa: '1.0', 5 | includePermalinks: true, 6 | permalinkEdge: true, 7 | permalinkHide: false, 8 | tocIntroductory: true, 9 | // specification status (e.g., WD, LC, NOTE, etc.). If in doubt use ED. 
10 | specStatus: "ED", 11 | noRecTrack: false, 12 | //crEnd: "2012-04-30", 13 | //perEnd: "2013-07-23", 14 | //publishDate: "2013-08-22", 15 | //diffTool: "http://www.aptest.com/standards/htmldiff/htmldiff.pl", 16 | 17 | // the specifications short name, as in http://www.w3.org/TR/short-name/ 18 | shortName: "spoken-html", 19 | 20 | 21 | // if you wish the publication date to be other than today, set this 22 | //publishDate: "2017-05-09", 23 | copyrightStart: "2021", 24 | license: "w3c-software", 25 | 26 | // if there is a previously published draft, uncomment this and set its YYYY-MM-DD date 27 | // and its maturity status 28 | //previousPublishDate: "", 29 | //previousMaturity: "", 30 | //prevRecURI: "", 31 | //previousDiffURI: "", 32 | 33 | // if there a publicly available Editors Draft, this is the link 34 | edDraftURI: "https://w3c.github.io/pronunciation/technical-approach", 35 | 36 | // if this is a LCWD, uncomment and set the end of its review period 37 | // lcEnd: "2012-02-21", 38 | 39 | // editors, add as many as you like 40 | // only "name" is required 41 | editors: [ 42 | { 43 | name: "Irfan Ali", 44 | url: 'https://www.w3.org/users/98332', 45 | mailto: "iali@ets.org", 46 | company: "Educational Testing Service", 47 | companyURI: "https://www.ets.org/", 48 | w3cid: 98332 49 | }, 50 | { 51 | name: "Markku Hakkinen", 52 | url: 'https://www.w3.org/users/35712', 53 | mailto: "mhakkinen@ets.org", 54 | company: "Educational Testing Service", 55 | companyURI: "https://www.ets.org/", 56 | w3cid: 35712 57 | }, 58 | { 59 | name: "Paul Grenier", 60 | url: 'https://www.w3.org/users/', 61 | mailto: "pgrenier@gmail.com", 62 | company: "Invited Expert", 63 | w3cid: 111500 64 | 65 | }, 66 | { 67 | name: "Ruoxi Ran", 68 | url: 'https://www.w3.org', 69 | mailto: "ran@w3.org", 70 | company: "W3C", 71 | companyURI: "http://www.w3.org", 72 | w3cid: 100586 73 | } 74 | ], 75 | 76 | // authors, add as many as you like. 77 | // This is optional, uncomment if you have authors as well as editors. 78 | // only "name" is required. Same format as editors. 79 | 80 | //authors: [ 81 | // { name: "Your Name", url: "http://example.org/", 82 | // company: "Your Company", companyURI: "http://example.com/" }, 83 | //], 84 | 85 | /* 86 | alternateFormats: [ 87 | { uri: 'aria-diff.html', label: "Diff from Previous Recommendation" } , 88 | { uri: 'aria.ps', label: "PostScript version" }, 89 | { uri: 'aria.pdf', label: "PDF version" } 90 | ], 91 | */ 92 | 93 | // errata: 'http://www.w3.org/2010/02/rdfa/errata.html', 94 | 95 | maxTocLevel: 3, 96 | 97 | // WG info 98 | //group: 83907 99 | group: "apa", 100 | github: "https://github.com/w3c/pronunciation/" 101 | 102 | // Spec URLs 103 | 104 | 105 | }; 106 | -------------------------------------------------------------------------------- /explainer/respec-config.js: -------------------------------------------------------------------------------- 1 | var respecConfig = { 2 | // embed RDFa data in the output 3 | trace: true, 4 | doRDFa: '1.0', 5 | includePermalinks: true, 6 | permalinkEdge: true, 7 | permalinkHide: false, 8 | tocIntroductory: true, 9 | // specification status (e.g., WD, LC, NOTE, etc.). If in doubt use ED. 
10 | specStatus: "ED", 11 | noRecTrack: true, 12 | //crEnd: "2012-04-30", 13 | //perEnd: "2013-07-23", 14 | //publishDate: "2013-08-22", 15 | //diffTool: "http://www.aptest.com/standards/htmldiff/htmldiff.pl", 16 | 17 | // the specifications short name, as in http://www.w3.org/TR/short-name/ 18 | shortName: "pronunciation-explainer", 19 | 20 | 21 | // if you wish the publication date to be other than today, set this 22 | //publishDate: "2017-05-09", 23 | copyrightStart: "2019", 24 | 25 | 26 | // if there is a previously published draft, uncomment this and set its YYYY-MM-DD date 27 | // and its maturity status 28 | //previousPublishDate: "", 29 | //previousMaturity: "", 30 | //prevRecURI: "", 31 | //previousDiffURI: "", 32 | 33 | // if there a publicly available Editors Draft, this is the link 34 | edDraftURI: "https://w3c.github.io/pronunciation/explainer", 35 | 36 | // if this is a LCWD, uncomment and set the end of its review period 37 | // lcEnd: "2012-02-21", 38 | 39 | // editors, add as many as you like 40 | // only "name" is required 41 | editors: [ 42 | { 43 | name: "Markku Hakkinen", 44 | url: 'https://www.w3.org/users/35712', 45 | mailto: "mhakkinen@ets.org", 46 | company: "Educational Testing Service", 47 | companyURI: "https://www.ets.org/", 48 | w3cid: 35712 49 | }, 50 | 51 | { 52 | name: "Irfan Ali", 53 | url: 'https://www.w3.org/users/98332', 54 | mailto: "iali@ets.org", 55 | company: "Educational Testing Service", 56 | companyURI: "https://www.ets.org/", 57 | w3cid: 98332 58 | } 59 | 60 | ], 61 | 62 | // authors, add as many as you like. 63 | // This is optional, uncomment if you have authors as well as editors. 64 | // only "name" is required. Same format as editors. 65 | 66 | //authors: [ 67 | // { name: "Your Name", url: "http://example.org/", 68 | // company: "Your Company", companyURI: "http://example.com/" }, 69 | //], 70 | 71 | /* 72 | alternateFormats: [ 73 | { uri: 'aria-diff.html', label: "Diff from Previous Recommendation" } , 74 | { uri: 'aria.ps', label: "PostScript version" }, 75 | { uri: 'aria.pdf', label: "PDF version" } 76 | ], 77 | */ 78 | 79 | // errata: 'http://www.w3.org/2010/02/rdfa/errata.html', 80 | 81 | // name of the WG 82 | //wg: "Accessible Platform Architectures Working Group", 83 | 84 | // URI of the public WG page 85 | //wgURI: "https://www.w3.org/WAI/APA/", 86 | 87 | // name (with the @w3c.org) of the public mailing to which comments are due 88 | //wgPublicList: "public-pronunciation", 89 | 90 | // URI of the patent status for this WG, for Rec-track documents 91 | // !!!! IMPORTANT !!!! 92 | // This is important for Rec-track documents, do not copy a patent URI from a random 93 | // document unless you know what you're doing. If in doubt ask your friendly neighbourhood 94 | // Team Contact. 95 | //wgPatentURI: "https://www.w3.org/2004/01/pp-impl/83907/status", 96 | //group: 83907 97 | //maxTocLevel: 2, 98 | group: "apa", 99 | github: "https://github.com/w3c/pronunciation/" 100 | 101 | 102 | 103 | // Spec URLs 104 | 105 | 106 | }; 107 | -------------------------------------------------------------------------------- /use-cases/respec-config.js: -------------------------------------------------------------------------------- 1 | var respecConfig = { 2 | // embed RDFa data in the output 3 | trace: true, 4 | doRDFa: '1.0', 5 | includePermalinks: true, 6 | permalinkEdge: true, 7 | permalinkHide: false, 8 | tocIntroductory: true, 9 | // specification status (e.g., WD, LC, NOTE, etc.). If in doubt use ED. 
10 | specStatus: "ED", 11 | noRecTrack: true, 12 | //crEnd: "2012-04-30", 13 | //perEnd: "2013-07-23", 14 | //publishDate: "2013-08-22", 15 | //diffTool: "http://www.aptest.com/standards/htmldiff/htmldiff.pl", 16 | 17 | // the specifications short name, as in http://www.w3.org/TR/short-name/ 18 | shortName: "pronunciation-use-case", 19 | 20 | 21 | // if you wish the publication date to be other than today, set this 22 | //publishDate: "2017-05-09", 23 | copyrightStart: "2019", 24 | license: "w3c-software", 25 | 26 | // if there is a previously published draft, uncomment this and set its YYYY-MM-DD date 27 | // and its maturity status 28 | //previousPublishDate: "", 29 | //previousMaturity: "", 30 | //prevRecURI: "", 31 | //previousDiffURI: "", 32 | 33 | // if there a publicly available Editors Draft, this is the link 34 | edDraftURI: "https://w3c.github.io/pronunciation/user-scenarios", 35 | 36 | // if this is a LCWD, uncomment and set the end of its review period 37 | // lcEnd: "2012-02-21", 38 | 39 | // editors, add as many as you like 40 | // only "name" is required 41 | editors: [ 42 | { 43 | name: "Irfan Ali", 44 | url: 'https://www.w3.org/users/98332', 45 | mailto: "irfan.ali@blackrock.com", 46 | company: "BlackRock", 47 | companyURI: "https://www.blackrock.com/", 48 | w3cid: 98332 49 | }, 50 | { 51 | name: "Paul Grenier", 52 | mailto: "paul.grenier@deque.com", 53 | company: "Deque System" 54 | }, 55 | { 56 | name: "Markku Hakkinen", 57 | url: 'https://www.w3.org/users/35712', 58 | mailto: "mhakkinen@ets.org", 59 | company: "Educational Testing Service", 60 | companyURI: "https://www.ets.org/", 61 | w3cid: 35712 62 | }, 63 | { 64 | name: "Roy Ran", 65 | url: 'https://www.w3.org', 66 | mailto: "ran@w3.org", 67 | company: "W3C", 68 | companyURI: "http://www.w3.org", 69 | w3cid: 100586 70 | } 71 | ], 72 | 73 | // authors, add as many as you like. 74 | // This is optional, uncomment if you have authors as well as editors. 75 | // only "name" is required. Same format as editors. 76 | 77 | //authors: [ 78 | // { name: "Your Name", url: "http://example.org/", 79 | // company: "Your Company", companyURI: "http://example.com/" }, 80 | //], 81 | 82 | /* 83 | alternateFormats: [ 84 | { uri: 'aria-diff.html', label: "Diff from Previous Recommendation" } , 85 | { uri: 'aria.ps', label: "PostScript version" }, 86 | { uri: 'aria.pdf', label: "PDF version" } 87 | ], 88 | */ 89 | 90 | // errata: 'http://www.w3.org/2010/02/rdfa/errata.html', 91 | 92 | // name of the WG 93 | wg: "Accessible Platform Architectures Working Group", 94 | 95 | // URI of the public WG page 96 | wgURI: "https://www.w3.org/WAI/APA/", 97 | 98 | // name (with the @w3c.org) of the public mailing to which comments are due 99 | wgPublicList: "public-pronunciation", 100 | 101 | // URI of the patent status for this WG, for Rec-track documents 102 | // !!!! IMPORTANT !!!! 103 | // This is important for Rec-track documents, do not copy a patent URI from a random 104 | // document unless you know what you're doing. If in doubt ask your friendly neighbourhood 105 | // Team Contact. 
106 | wgPatentURI: "https://www.w3.org/2004/01/pp-impl/83907/status", 107 | //maxTocLevel: 2, 108 | 109 | 110 | 111 | // Spec URLs 112 | 113 | 114 | }; 115 | -------------------------------------------------------------------------------- /samples/w3cptf-singleattr-tests.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | W3C Pronunciation Task Force Single Attribute Sample Page 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |

W3C Pronunciation Task Force Single Attribute SSML in HTML Test Cases

15 | 16 |

Developers of read aloud tools, screen readers, and voice assistants should use these test cases to evaluate and test their implementation. Questions and test results 17 | should be filed as issues in the Pronunciation github repo.
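As a rough sketch of what a single-attribute case is expected to look like (hypothetical markup; the data-ssml attribute name and JSON value shape are taken from the task force explainer and JSON schema, not from this page's source):
   <p>According to the 2010 US Census, the population of
     <span data-ssml='{"say-as": {"interpret-as": "characters"}}'>90274</span>
     increased to 25209 from 24976 over the past 10 years.</p>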

18 |

Case 1: Say-as

19 |

According to the 2010 US Census, the population of 90274 22 | increased to 25209 from 24976 over the past 10 years.

23 |

Case 1: Audio rendering

Case 2: Phoneme

25 |

Once upon a midnight dreary

26 |

Case 2: Audio rendering

Case 3: sub

28 |

NaCl

29 |

Case 3: Audio rendering

Case 4: Voice

31 |

She said, "My name is Marie", to which, he replied, "I am Tom."

32 |

Case 4: Audio rendering (no sample)

33 |

Case 5: Emphasis

34 |

Please use extreme caution.

35 |

Case 5: Audio rendering

Case 6: Break

37 |

Take a deep breath, and exhale.

38 |

Case 6: Audio rendering

Case 7: Prosody

40 |

The tortoise, said (slowly) "I am almost at the finish line."

41 |

Case 7: Audio rendering

Case 8: Audio

43 |

You will hear a brief chime when your time is up.

44 |

Case 8: Audio rendering

Case 9: Extended Text - The Raven

46 |

47 | Once upon a midnight dreary 48 | , 49 | while I pondered, weak 50 | and weary,
51 | Over many a quaint and curious volume of forgotton 52 | lore—
53 | While I nodded, nearly napping, suddenly there came a tapping, 54 |
55 | As of some one gently rapping, 56 |
57 | rapping at my chamber door. 58 | 59 |
60 | "'Tis some visitor," 61 | I muttered, 62 | "tapping 63 | at my chamber door—
64 | Only this 65 | and nothing 66 | more." 67 |

68 |

Case 9: Audio rendering

W3C Pronunciation Task Force Multi-Attribute SSML in HTML Test Cases

16 | 17 |

Developers of read aloud tools, screen readers, and voice assistants should use these test cases to evaluate and test their implementation. Questions and test results 18 | should be filed as issues in the Pronunciation github repo.
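As a rough sketch of the multi-attribute style (hypothetical markup; the ssml:ph/ssml:alphabet pair mirrors the EPUB3 precedent cited in the task force explainer and may be spelled differently in this page's source):
   <span ssml:ph="ˈdrɪəri" ssml:alphabet="ipa">dreary</span>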

19 |

Case 1: Say-as

20 |

According to the 2010 US Census, the population of 90274 22 | increased to 25209 from 24976 over the past 10 years.

23 |

Case 1: Audio rendering

Case 2: Phoneme

25 |

Once upon a midnight dreary

26 |

Case 2: Audio rendering

Case 3: sub

28 |

NaCl

29 |

Case 3: Audio rendering

Case 4: Voice

31 |

She said, "My name is Marie", to which, he replied, "I am Tom."

32 |

Case 4: Audio rendering (no sample)

33 |

Case 5: Emphasis

34 |

Please use extreme caution.

35 |

Case 5: Audio rendering

Case 6: Break

37 |

Take a deep breath, and exhale.

38 |

Case 6: Audio rendering

Case 7: Prosody

40 |

The tortoise, said (slowly) " 41 | I am almost at the finish line."

42 |

Case 7: Audio rendering

Case 8: Audio

44 |

You will hear a brief chime when your time is up.

45 |

Case 8: Audio rendering

Case 9: Extended Text - The Raven

47 |

48 | Once upon a midnight dreary
49 | , 50 | while I pondered, weak 51 | and weary,
52 | Over many a quaint and curious volume of forgotten lore—
54 | While I nodded, nearly napping, suddenly there came a tapping,
55 |
56 | As of some one gently rapping, 57 | 58 | rapping at my chamber door. 59 |
60 |
61 | 62 | "'Tis some visitor,"
63 | I muttered,
"tapping at my chamber door—


66 | Only this and nothing 67 |
more." 68 |

69 | 70 |

Case 9: Audio rendering

52 |

53 | The Pronunciation Task Force develops specifications for hypertext markup language (HTML) author control of 54 | text-to-speech (TTS) presentation. 55 |

56 |
57 |
58 |
59 |

Appendix A. SSML JSON Schema

60 |

The JSON schema defines the specific SSML functions, properties, and values recommended for implementation in 61 | this proposal.

62 | 63 |
64 |
 65 | 
 66 | {
 67 | "$schema": "http://json-schema.org/draft-07/schema#",
 68 | "$id": "http://ets-research.org/ia11ylab/ia/json/ssml-json-schema-w3cptf.json",
 69 | "title": "SSML as a single attribute for inclusion in HTML",
 70 | "description": "JSON structure representing each SSML element as a JSON object. The SSML properties are derived 
71 | from https://www.w3.org/TR/speech-synthesis11/. Several elements are excluded: mark, speak, p, w and the desc attribute.
72 | Author: M. Hakkinen - ETS", 73 | "type": "object", 74 | "properties": { 75 | "say-as": { 76 | "description": "The unique identifier for a product", 77 | "type": "object", 78 | "properties": { 79 | "interpret-as": { "type": "string", 80 | "enum": ["date","time","telephone","characters","cardinal","ordinal"]}, 81 | "format": { "type": "string" }, 82 | "detail": {"type": "string"} 83 | } 84 | }, 85 | "phoneme": { 86 | "description": "The Phoneme Function", 87 | "type": "object", 88 | "properties": { 89 | "ph": { "type": "string"}, 90 | "alphabet": {"type": "string", "enum": ["ipa", "x-sampa"]}} 91 | }, 92 | "sub": { 93 | "description": "sub function", "type": "object", 94 | "properties": { 95 | "alias": {"type":"string"}} 96 | }, 97 | "voice":{"description": "voice function", "type":"object", 98 | "properties": { 99 | "gender": {"type":"string", 100 | "enum": ["female","male","neutral"]}, 101 | "age": {"type":"integer"}, 102 | "variant":{"type":"string"}, 103 | "name": {"type":"string"}, 104 | "languages": {"type":"string"} 105 | } 106 | }, 107 | "emphasis":{ 108 | "description": "speech emphasis level", 109 | "type":"object", 110 | "properties": { 111 | "level": {"type":"string", 112 | "enum": ["none","x-weak","weak","medium","strong","x-strong"]}, 113 | "time": {"type":"string", 114 | "pattern":"^(-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)ms|s)$"} 115 | } 116 | }, 117 | "prosody": { 118 | "description": "speech prosody", 119 | "type":"object", 120 | "properties": { 121 | "pitch": {"type":"string", 122 | "pattern":"^x-low|low|medium|high|x-high|default|(-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)Hz)$"}, 123 | "contour": {"type":"string"}, 124 | "range": {"type":"string", 125 | "pattern":"^x-low|low|medium|high|x-high|default|(-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)Hz)$"}, 126 | "rate": {"type":"string", 127 | "pattern":"^x-slow|slow|medium|fast|x-xfast|default|(-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)%)$"}, 128 | "duration": {"type": "string", 129 | "pattern":"^(-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)ms|s)$"}, 130 | "volume": {"type":"string", 131 | "pattern":"^silent|x-soft|soft|medium|loud|x-loud|default|(+|-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)dB)$"} 132 | } 133 | }, 134 | "break": { 135 | "description": "break - insert a timed pause", 136 | "type":"object", 137 | "properties": { 138 | "strength": {"type":"string", 139 | "enum": ["none","x-weak","weak","medium","strong","x-strong"]}, 140 | "time": {"type":"string", 141 | "pattern":"^(-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)ms|s)$"} 142 | } 143 | }, 144 | "audio": { 145 | "description":"audio element used to insert audio file into speech stream", 146 | "type":"object", 147 | "properties":{ 148 | "src": {"type":"uri"}, 149 | "fetchtimeout":{"type":"string", 150 | "pattern":"^(-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)ms|s)$"}, 151 | "fetchint":{"type":"string", 152 | "enum": ["safe","prefetch"]}, 153 | "maxage":{"type":"string"}, 154 | "maxstale":{"type":"string"}, 155 | "clipBegin":{"type": "string", 156 | "pattern":"^(-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)ms|s)$"}, 157 | "clipEnd":{"type": "string", 158 | "pattern":"^(-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)ms|s)$"}, 159 | "repeatCount":{"type":"integer" 160 | "repeatDur":{"type": "string", 161 | "pattern":"^(-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)ms|s)$"}, 162 | "soundLevel":{"type":"string", 163 | "pattern":"^(+|-?(0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)dB)$"}, 164 | "speed":{ 165 | "type":"string", 166 | "pattern":"^((0|[1-9]\\d*)?(\\.\\d+)?(?<=\\d)%)$"} 167 | } 168 | } 169 | } 170 | } 171 |
172 |
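As a non-normative illustration, a value conforming to this schema might be carried on a single HTML attribute as discussed in the explainer (the data-ssml attribute name and the host element are assumptions for illustration only):
   <span data-ssml='{"emphasis": {"level": "strong"}}'>extreme caution</span>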
173 |
174 |
175 | 176 | 177 | 178 | 179 | -------------------------------------------------------------------------------- /explainer/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Explainer: Improving Spoken Presentation on the Web 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
14 | 15 | 16 |

The objective of the Pronunciation Task Force is to develop normative specifications and best practices guidance, collaborating with other W3C groups as appropriate, to provide for proper pronunciation in HTML content when using text to speech (TTS) synthesis. This document defines a standard mechanism to allow 17 | content authors to include spoken presentation guidance in HTML content.

18 | 19 |
20 | 21 |
22 | 23 |
24 | 25 |

Introduction

26 |

Accurate, consistent pronunciation and presentation of content spoken by text-to-speech (TTS) synthesis is an essential requirement in education, 27 | communication, entertainment, and other domains. 28 | From helping to teach spelling and pronunciation in different languages, to reading learning materials or new stories, 29 | TTS has become a vital technology for providing access to digital content on the web, through mobile devices, and now via voice-based assistants. 30 | Organizations such as educational publishers and assessment vendors are looking for a standards-based solution to 31 | enable authoring of spoken presentation guidance in HTML which can then be consumed by assistive technologies (AT) and other 32 | applications that utilize text to speech synthesis (TTS) for rendering of content. 33 | Historically, efforts at standardization (e.g. SSML or CSS Speech) have not led to broad adoption of any standard by user agents, authors, or AT; 34 | what has arisen are a variety of non-interoperable approaches that meet specific needs for some applications. This explainer document presents the case for 35 | improving spoken presentation on the Web and how a standards-based approach can address the requirements.

36 | 37 | 38 |
39 |
40 |

What is this?

41 |

This is a proposal for a mechanism to allow content authors to include spoken presentation guidance in HTML content. Such guidance can be used by AT (including screen readers and read aloud tools) and voice assistants to control TTS synthesis. A key requirement is to ensure the spoken presentation content matches the author's intent and user expectations.

42 |

The challenge is integrating pronunciation content into HTML so that it is easy to author, does not "break" content, and is straightforward for consumption by AT, voice assistants, and other tools that produce spoken presentation of content.
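As a rough sketch of the kind of authoring this proposal targets (the data-ssml attribute name and JSON value are drawn from the companion explainer and draft JSON schema; they are illustrative, not a settled syntax):
   The airport code is <span data-ssml='{"say-as": {"interpret-as": "characters"}}'>YOW</span>.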

43 |
44 |
45 |

Why do we care?

46 |

Several classes of AT users depend upon spoken rendering of web content by TTS synthesis. In contexts such as education, there are specific expectations for accuracy of spoken presentation in terms of pronunciation, emphasis, prosody, pausing, etc.

47 |

Correct pronunciation is also important in the context of language learning, where incorrect pronunciation can confuse learners.

48 |

In practice, the ecosystem of devices used in classrooms is broad, and each vendor generally provides their own TTS engines for their platforms. Ensuring consistent spoken presentation across devices is a very real problem, and challenge. For many educational assessment vendors, the problem necessitates non-interoperable hacks to tune pronunciation and other presentation features, such as pausing, which itself can introduce new problems through inconsistent representation of text across speech and braille.

49 |

It could be argued that continual advances in machine learning will improve the quality of synthesized speech, reducing the need for this proposal. Waiting for a robust solution that will likely still not fully address our needs is risky, especially when an authorable, declarative approach may be within reach (and wouldn't preclude or conflict with continual improvement in TTS technology).

50 |

The current situation:

51 |
    52 |
  • Is an authoring challenge for content developers that wish to support spoken presentation
  • 53 |
  • Limits interoperability and exchange of content between vendors and platforms
  • 54 |
  • Is an implementation challenge for developers creating AT and read aloud capabilities
  • 55 |
  • Presents an inconsistent, potentially confusing user experience for listeners of TTS
  • 56 |
57 |

With the growing consumer adoption of voice assistants, user expectations for high quality spoken presentation are growing. Google and Amazon both encourage application developers to utilize SSML to enhance the user experience on their platforms, yet Web content authors do not have the same opportunity to enhance the spoken presentation of their content.
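For comparison, the guidance those platforms accept is plain SSML, for example (standard SSML 1.1 elements, shown only to illustrate what HTML authors currently cannot express declaratively):
   <speak>
     Please use <emphasis level="strong">extreme</emphasis> caution.
     Take a deep breath, <break time="600ms"/> and exhale.
   </speak>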

58 |

Finding a solution to this need can have broader benefit in allowing authors to create web content that presents a better user experience if the content is presented by voice assistants.

59 |
60 |
61 |

Goals

62 |
    63 |
  • Define a standard mechanism that enables spoken presentation guidance to be authored in HTML
  • 64 |
  • Leverage existing specifications, if possible
  • 65 |
  • The mechanism must be consumable by AT such as screen readers
  • 66 |
  • Address use cases with features, not a unified specification that attempts to cover all scenarios
  • 67 |
68 |
69 |
70 |

Open Questions

71 |
    72 |
  1. What is the best approach to authoring spoken presentation guidance in HTML? Possible approaches include: 73 |
      74 |
    • Using existing elements and attributes in HTML (e.g. <span> with ARIA attributes)
    • 75 |
    • Defining new elements and/or attributes in HTML
    • 76 |
    • Using an existing specification such as SSML or CSS Speech within HTML
    • 77 |
    78 |
  2. 79 |
  3. What spoken presentation features should be supported in the initial version of the specification?
  4. 80 |
  5. How to ensure that the mechanism is easy to author, does not "break" content, and is straightforward for consumption by AT and other tools that produce spoken presentation of content?
  6. 81 |
82 |
83 | 84 | 85 | 86 | 87 | 88 | 89 |
Acknowledgements placeholder
90 | 91 | 92 | 93 | 94 | 95 | -------------------------------------------------------------------------------- /docs/explainer.md: -------------------------------------------------------------------------------- 1 | # Improving Spoken Presentation on the Web (DRAFT) 2 | 3 | ## What is this? 4 | 5 | This is a proposal for a mechanism to allow content authors to include spoken presentation 6 | guidance in HTML content. Such guidance can be used by assistive technologies (including screen readers and read aloud tools) and voice assistants to control text to speech synthesis. A key requirement is to ensure the spoken presentation content matches the author's intent and user expectations. 7 | 8 | Currently, the W3C SSML standard is seen as an important piece of a solution. The challenge is 9 | how to integrate SSML into HTML that is both easy to author, does not "break" content, and is straightforward for consumption by assistive technologies, voice assistants, and other tools 10 | that produce spoken presentation of content. 11 | 12 | This proposal has emerged from the work of the Accessible Platform Architecture Pronunciation 13 | Task Force and represents a decision point arising from two differing approaches for integrating SSML (or SSML-like characteristics) into HTML. Each of the approaches differs in authoring and consumption models (specifically for assistive technologies). 14 | 15 | ## Why do we care? 16 | 17 | Several classes of assistive technology users depend upon spoken rendering of web content by 18 | text to speech synthesis (TTS). In contexts such as education, there are specific expectations for 19 | accuracy of spoken presentation in terms of pronunciation, emphasis, prosody, pausing, etc. 20 | 21 | Correct pronunciation is also important in the context of language learning, where incorrect pronunciation can confuse learners. 22 | 23 | In practice, the ecosystem of devices used in classrooms is broad, and each vendor generally provides their own text to speech engines for their platforms. Ensuring consistent spoken presentation across devices is a very real problem, and challenge. For many educational assessment vendors, the problem necessitates non-interoperable hacks to tune pronunciation and other presentation features, such as pausing, which itself can introduce new problems through inconsistent representation of text across speech and braille. 24 | 25 | It could be argued that continual advances in machine learning will improve the quality of synthesized speech, reducing the need for this proposal. Waiting for a robust solution that will likely still not fully address our needs is risky, especially when an authorable, declarative approach may be within reach (and wouldn't preclude or conflict with continual improvement in TTS technology). 26 | 27 | The current situation: 28 | 29 | * Is an authoring challenge for content developers that wish to support spoken presentation 30 | * Limits interoperability and exchange of content between vendors and platforms 31 | * Is an implementation challenge for developers creating assistive technologies and read aloud capabilities 32 | * Presents an inconsistent, potentially confusing user experience for listeners of TTS 33 | 34 | With the growing consumer adoption of voice assistants, user expectations for high quality spoken presentation is growing. 
Google and Amazon both encourage application developers to utilize SSML to enhance the user experience on their platforms, yet Web content authors do not have the same opportunity to enhance the spoken presentation of their content. 35 | 36 | Finding a solution to this need can have broader benefit in allowing authors to create web content that presents a better user experience if the content is presented by voice assistants. 37 | 38 | ## Goals 39 | 40 | * Define a standard mechanism that enables spoken presentation guidance to be authored in HTML 41 | * Leverage SSML, if possible, as it is an existing standard that meets all identified requirements, and is supported by many speech synthesis platforms 42 | * The mechanism must be consumable by assistive technologies such as screen readers 43 | 44 | ## Non-Goals 45 | 46 | * Not trying to create a new speech presentation standard 47 | * Not trying to resurrect CSS Speech (incomplete solution in any case) 48 | 49 | ## Approaches considered 50 | 51 | A variety of approaches have been identified thus far by the Task Force, but two are considered front runners: 52 | 53 | 1. In-line SSML within Web Content 54 | 2. Attribute-based Model of SSML 55 | 56 | Both approaches have advantages and disadvantages and these are briefly summarized below. 57 | 58 | ### In-line SSML 59 | 60 | Advantages are that SSML is an existent standard directly consumable by many speech synthesizers, and there is precedent for in-lining non-HTML markup such as SVG and MathML. This approach may be more easily consumed by Voice Assistants. 61 | 62 | A key disadvantage is that inline SSML appears to be more difficult for Assistive Technologies to implement, specifically for screen readers. 63 | 64 | A simple example of in-line SSML in an HTML fragment is shown below: 65 | 66 | ``` HTML 67 |

According to the 2010 US Census, the population 68 | of 90274 69 | increased to 25209 from 24976 over the past 10 years. 70 |

71 | ``` 72 | 73 | ### Attribute-based Model of SSML 74 | 75 | Advantages are that variants of the attribute model are currently used by educational assessment vendors, these variants are supported by custom read aloud tools, and it appears that the attribute model may be more easily implementable by screen reader vendors. The EPUB3 standard includes the SSML phoneme element implemented as a pair of namespaced attributes and is used by publishers in Japan. 76 | 77 | Disadvantages may include adding a level of complexity to authoring through the introduction of JSON, which could be mitigated by authoring tools. This approach requires transforming the attribute content represented in JSON into SSML by the consumer (screen reader, read aloud tool, voice assistant, etc.). Possible security concerns exist with the JSON approach. The EPUB approach would lead to a large number of attributes if all the SSML elements were to be implemented in that manner. 78 | 79 | No other standard uses string JSON values for attributes in HTML. This may cause problems for implementers 80 | who must parse the JSON values before processing. The browser, which normally attempts to address malformed 81 | HTML, can make no guarantees about the JSON strings. Implementers must decide how to handle malformed 82 | JSON. 83 | 84 | The schema for `data-ssml` values has not been set. Competing standards for this format, like 85 | [SpeakableSpecification](http://webschemas.org/SpeakableSpecification), as well as any issues converting 86 | SSML to a proper JSON schema could cause confusion for implementors and authors. Often such conversions 87 | are "...not exactly 1:1 transformation, but very very close". 88 | 89 | A simple example of the attribute based model of SSML is shown below: 90 | 91 | ``` HTML 92 |

According to the 2010 US Census, the population 93 | of 90274 94 | increased to 25209 from 24976 over the past 10 years. 95 |

96 | ``` 97 | 98 | ## Open Questions 99 | 100 | 1. From the TAG/WHATWG perspective, what disadvantages/challenges have we missed with either approach? 101 | 2. Whichever approach makes sense from the web standards perspective, will/can it be adopted by assistive technologies? Particularly for screen readers, does it fit the accessibility API model? 102 | 103 | 104 | 105 | 106 | -------------------------------------------------------------------------------- /user-scenarios/draft.md: -------------------------------------------------------------------------------- 1 | # Introduction 2 | 3 | As part of the Accessible Platform Architectures (APA) Working Group, the Pronunciation Task Force (PTF) is a collaboration of subject matter experts working to identify and specify the optimal approach which can deliver reliably accurate pronunciation across browser and operating environments. With the introduction of the Kurzweil reading aid in 1976, to the more sophisticated synthetic speech currently used to assist communication as reading aids for the visually impaired and those with reading disabilities, the technology has multiple applications in education, communication, entertainment, etc. From helping to teach spelling and pronunciation in different languages, Text-to-Speech (TTS) has become a vital technology for providing access to digital content on the web and through mobile devices. 4 | 5 | The challenges that TTS presents include but are not limited to: the inability to accommodate regional variation and presentation of every phoneme present throughout the world; the incorrect determination by TTS of the pronunciation of content in context, and; the current inability to influence other pronunciation characteristics such as prosody and emphasis. 6 | 7 | # User Scenarios 8 | The purpose of developing user scenarios is to facilitate discussion and further requirements definition for pronunciation standards developed within the PTF prior to review of the APA. There are numerous interpretations of what form user scenarios adopt. Within the user experience research (UXR) body of practice, a user scenario is a written narrative related to the use of a service from the perspective of a user or user group. Importantly, the context of use is emphasized as is the desired outcome of use. There are potentially thousands of user scenarios for a technology such as TTS, however, the focus for the PTF is on the core scenarios that relate to the kinds of users who will engage with TTS. 9 | 10 | User scenarios, like Personas, represent a composite of real-world experiences. In the case of the PTF, the scenarios were derived from interviews of people who were end-consumers of TTS, as well as submitted narratives and industry examples from practitioners. There are several formats of scenarios. Several are general goal or task-oriented scenarios. Others elaborate on richer context, for example, educational assessment. 11 | 12 | The following user scenarios are organized on the three perspectives of TTS use derived from analysis of the qualitative data collected from the discovery work: 13 | 14 | + **End-Consumers of TTS:** Encompasses those with a visual disability or other need to have TTS operational when using assistive technologies (ATs). 15 | + **Digital Content Managers:** Addresses activities related to those responsible for producing content that needs to be accessible to ATs and W3C-WAI Guidelines. 
16 | + **Software Engineers:** Includes developers and architects required to put TTS into an application or service. 17 | 18 | ## End-Consumers of TTS 19 | Ultimately, the quality and variation of TTS rendering by assistive technologies vary widely according to a user's context. The following user scenarios reinforce the necessity for accurate pronunciation from the perspective of those who consume digitally generated content. 20 | 21 | A. As a traveler who uses assistive technology (AT) with TTS to help navigate through websites, I need to hear arrival and destination codes pronounced accurately so I can select the desired travel itinerary. For example, a user with a visual impairment attempts to book a flight to Ottawa, Canada and so goes to a travel website. The user already knows the airport code and enters "YOW". The site produces the result in a drop-down list as "Ottawa, CA" but the AT does not pronounce the text accurately to help the user make the correct association between their data entry and the list item. 22 | 23 | B. As a test taker (tester) with a visual impairment who may use assistive technology to access the test content with speech software, screen reader or refreshable braille device, I want the content to be presented as intended, with accurate pronunciation and articulation, so that my assessment accurately reflects my knowledge of the content. 24 | 25 | C. As a student/learner with auditory and cognitive processing issues, it is difficult to distinguish sounds, inflections, and variations in pronunciation as rendered through synthetic voice, such as text-to-speech or screen reader technologies. Consistent and accurate pronunciation whether human-provided, external, or embedded is needed to support working executive processing, auditory processing and memory that facilitates comprehension in literacy and numeracy for learning and for assessments. 26 | 27 | D. As an English Learner (EL) or a visually impaired early learner using speech synthesis for reading comprehension that includes decoding words from letters as part of the learning construct (intent of measurement), pronunciation accuracy is vital to successful comprehension, as it allows the learner to distinguish sounds at the sentence, word, syllable, and phoneme level. 28 | 29 | ## Digital Content Management for TTS 30 | The advent of graphical user interfaces (GUIs) for the management and editing of text content has given rise to content creators not requiring technical expertise beyond the ability to operate a text editing application such as Microsoft Word. The following scenario summarizes the general use, accompanied by a hypothetical application. 31 | 32 | A. As a content creator, I want to create content that can readily be delivered through assistive technology, can convey the correct meaning, and ensure that screen readers render the right pronunciation based on the surrounding context. 33 | 34 | B. As a content producer for a global commercial site that is inclusive, I need to be able to provide accessible culture-specific content for different geographic regions. 35 | 36 | ### Educational Assessment 37 | In the educational assessment field, providing accurate and concise pronunciation for students with auditory accommodations, such as text-to-speech (TTS) or students with screen readers, is vital for ensuring content validity and alignment with the intended construct, which objectively measures a test takers knowledge and skills. 
For test administrators/educators, pronunciations must be consistent across instruction and assessment in order to avoid test bias or impact effects for students. Some additional requirements for test administrators include, but are not limited to, the following scenarios: 38 | 39 | A. As a test administrator, I want to ensure that students with the read-aloud accommodation, who are using assistive technology or speech synthesis as an alternative to a human reader, have the same speech quality (e.g., intonation, expression, pronunciation, and pace) as natural spoken language. 40 | 41 | B. As a math educator, I want to ensure that mathematical expressions, including numbers, fractions, and operations, are pronounced accurately for those who rely on TTS. Some mathematical expressions require special pronunciations to ensure accurate interpretation while maintaining test validity and construct. Specific examples include: 42 | 43 | + Mathematical formulas written in simple text with special formatting should convey the correct meaning of the expression to identify changes from normal text to super- or sub-script text. For example, without the proper formatting, the equation: 44 | a³ − b³ = (a − b)(a² + ab + b²) may incorrectly render through some technologies and applications as a3-b3=(a-b)(a2+ab+b2). 45 | + Distinctions made in writing are often not made explicit in speech. For example, “fx” may be interpreted as fₓ, f(x), f·x, or the letters “F X”. The distinction depends on the context, requiring the author to provide consistent and accurate semantic markup. 46 | + For math equations with Greek letters, it is important that the speech synthesizer be able to distinguish the phonetic differences between them, whether in the natural language or phonetic equivalents. For example, ε (epsilon), υ (upsilon), φ (phi), χ (chi), ξ (xi) 47 | 48 | C. As a test administrator/educator, pronunciations must be consistent across instruction and assessment, in order to avoid test bias and pronunciation effects on performance for students with disabilities (SWD) in comparison to students without disabilities (SWOD). Examples include: 49 | 50 | + If a test question is measuring rhyming of words or sounds of words, the speech synthesis should not read aloud the words, but rather spell out the words in the answer options. 51 | + If a test question is measuring spelling and the student needs to consider spelling correctness/incorrectness, the speech synthesis should not read aloud the misspelt words, especially for words, such as: 52 | 53 |       i. *Heteronyms/homographs*: same spelling, different pronunciation, different meanings, such as lead (to go in front of) or lead (a metal); wind (to follow a course that is not straight) or wind (a gust of air); bass (low, deep sound) or bass (a type of fish), etc. 54 | 55 |       ii. *Homophone*: words that sound alike, such as, to/two/too; there/their/they're; pray/prey; etc. 56 | 57 |       iii. *Homonyms*: multiple meaning words, such as scale (measure) or scale (climb, mount); fair (reasonable) or fair (carnival); suit (outfit) or suit (harmonize); etc. 58 | 59 | ### Academic and Linguistic Practitioners 60 | Content management in TTS also extends to serving as a means of encoding and preserving spoken text for academic analyses, irrespective of discipline, subject domain, or research methodology. 61 | 62 | A. As a linguist, I want to represent all the pronunciation variations of a given word in any language, for future analyses. 63 | 64 | B.
As a speech-language pathologist or speech therapist, I want TTS functionality to cover components of speech and language that encompass dialectal and individual differences in pronunciation; identify differences in intonation, syntax, and semantics; and allow for enhanced comprehension, language processing, and support for phonological awareness. 65 | 66 | ## Software Application Development 67 | Technical standards for software development assist organizations and individuals to provide accessible experiences for users with disabilities. The final user scenarios in this document are considered from the perspective of those who design and develop software. 68 | 69 | A. As a Product Owner for a web content management system (CMS), I want the next software product release to have the capability of pronouncing speech "just like Alexa can". 70 | 71 | B. As a client-side user interface developer, I need a way to render text content so that it is spoken accurately by assistive technologies. 72 | 73 | 74 | 75 | 76 | 77 | -------------------------------------------------------------------------------- /user-scenarios/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Pronunciation User Scenarios 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
14 | 15 | 16 |

The objective of the Pronunciation Task Force is to develop normative specifications and best practices guidance, collaborating with other W3C groups as appropriate, to provide for proper pronunciation in HTML content when using text to speech (TTS) synthesis. This document provides various user scenarios highlighting the need for standardization of pronunciation markup, to ensure consistent and accurate representation of the content. The requirements that come from the user scenarios provide the basis for the technical requirements/specifications.

17 |
18 | 19 |
20 | 21 |
22 | 23 |

Introduction

24 | 25 |

As part of the Accessible Platform Architectures (APA) Working Group, the Pronunciation Task Force (PTF) is a collaboration of subject matter experts working to identify and specify the optimal approach which can deliver reliably accurate pronunciation across browser and operating environments. With the introduction of the Kurzweil reading aid in 1976, to the more sophisticated synthetic speech currently used to assist communication as reading aids for the visually impaired and those with reading disabilities, the technology has multiple applications in education, communication, entertainment, etc. From helping to teach spelling and pronunciation in different languages, Text-to-Speech (TTS) has become a vital technology for providing access to digital content on the web and through mobile devices. 26 |

27 |

The challenges that TTS presents include but are not limited to: the inability to accommodate regional variation and presentation of every phoneme present throughout the world; the incorrect determination by TTS of the pronunciation of content in context, and; the current inability to influence other pronunciation characteristics such as prosody and emphasis.

28 | 29 | 30 | 31 |
32 | 33 |
34 |

User Scenarios

35 |

The purpose of developing user scenarios is to facilitate discussion and further requirements definition for pronunciation standards developed within the PTF prior to review of the APA. There are numerous interpretations of what form user scenarios adopt. Within the user experience research (UXR) body of practice, a user scenario is a written narrative related to the use of a service from the perspective of a user or user group. Importantly, the context of use is emphasized as is the desired outcome of use. There are potentially thousands of user scenarios for a technology such as TTS, however, the focus for the PTF is on the core scenarios that relate to the kinds of users who will engage with TTS.

36 |

User scenarios, like Personas, represent a composite of real-world experiences. In the case of the PTF, the scenarios were derived from interviews of people who were end-consumers of TTS, as well as submitted narratives and industry examples from practitioners. There are several formats of scenarios. Several are general goal or task-oriented scenarios. Others elaborate on richer context, for example, educational assessment.

37 |

The following user scenarios are organized on the three perspectives of TTS use derived from analysis of the qualitative data collected from the discovery work:

38 |
    39 |
  • End-Consumers of TTS: Encompasses those with a visual disability or other need to have TTS operational when using assistive technologies (ATs).
  • 40 |
  • Digital Content Managers: Addresses activities related to those responsible for producing content that needs to be accessible to ATs and W3C-WAI Guidelines.
  • 41 |
  • Software Engineers: Includes developers and architects required to put TTS into an application or service.
  • 42 |
43 |
44 |

End-Consumers of TTS

45 |

Ultimately, the quality and variation of TTS rendering by assistive technologies vary widely according to a user's context. The following user scenarios reinforce the necessity for accurate pronunciation from the perspective of those who consume digitally generated content.

46 |
    47 |
  • A. As a traveler who uses assistive technology (AT) with TTS to help navigate through websites, I need to hear arrival and destination codes pronounced accurately so I can select the desired travel itinerary. For example, a user with a visual impairment attempts to book a flight to Ottawa, Canada and so goes to a travel website. The user already knows the airport code and enters "YOW". The site produces the result in a drop-down list as "Ottawa, CA" but the AT does not pronounce the text accurately to help the user make the correct association between their data entry and the list item.
  • 48 |
  • B. As a test taker (tester) with a visual impairment who may use assistive technology to access the test content with speech software, screen reader or refreshable braille device, I want the content to be presented as intended, with accurate pronunciation and articulation, so that my assessment accurately reflects my knowledge of the content.
  • 49 |
  • C. As a student/learner with auditory and cognitive processing issues, it is difficult to distinguish sounds, inflections, and variations in pronunciation as rendered through synthetic voice, such as text-to-speech or screen reader technologies. Consistent and accurate pronunciation whether human-provided, external, or embedded is needed to support working executive processing, auditory processing and memory that facilitates comprehension in literacy and numeracy for learning and for assessments.
  • 50 |
  • D. As an English Learner (EL) or a visually impaired early learner using speech synthesis for reading comprehension that includes decoding words from letters as part of the learning construct (intent of measurement), pronunciation accuracy is vital to successful comprehension, as it allows the learner to distinguish sounds at the sentence, word, syllable, and phoneme level.
  • 51 |
52 | 53 |
54 |
55 |

Digital Content Management for TTS

56 |

The advent of graphical user interfaces (GUIs) for the management and editing of text content has given rise to content creators not requiring technical expertise beyond the ability to operate a text editing application such as Microsoft Word. The following scenario summarizes the general use, accompanied by a hypothetical application.

57 |
    58 |
  • A. As a content creator, I want to create content that can readily be delivered through assistive technology, can convey the correct meaning, and ensure that screen readers render the right pronunciation based on the surrounding context.
  • 59 |
  • B. As a content producer for a global commercial site that is inclusive, I need to be able to provide accessible culture-specific content for different geographic regions.
  • 60 |
61 | 62 |
63 |

Educational Assessment

64 |

In the educational assessment field, providing accurate and concise pronunciation for students with auditory accommodations, such as text-to-speech (TTS), or for students using screen readers, is vital for ensuring content validity and alignment with the intended construct, which objectively measures a test taker's knowledge and skills. For test administrators/educators, pronunciations must be consistent across instruction and assessment in order to avoid test bias or impact effects for students. Some additional requirements for test administrators include, but are not limited to, the following scenarios:

65 | 66 |
    67 |
  • A. As a test administrator, I want to ensure that students with the read-aloud accommodation, who are using assistive technology or speech synthesis as an alternative to a human reader, have the same speech quality (e.g., intonation, expression, pronunciation, and pace) as natural spoken language.
  • 68 |
  • B. As a math educator, I want to ensure that mathematical expressions, including numbers, fractions, and operations, are pronounced accurately for those who rely on TTS. Some mathematical expressions require special pronunciations to ensure accurate interpretation while maintaining test validity and construct. Specific examples include: 69 |
      70 |
    • Mathematical formulas written in simple text with special formatting should convey the correct meaning of the expression to identify changes from normal text to super- or sub-script text. For example, without the proper formatting, the equation a³ − b³ = (a − b)(a² + ab + b²) may incorrectly render through some technologies and applications as a3-b3=(a-b)(a2+ab+b2).
    • 71 |
    • Distinctions made in writing are often not made explicit in speech. For example, “fx” may be interpreted as fₓ, f(x), f·x, or the letters “F X”. The distinction depends on the context, requiring the author to provide consistent and accurate semantic markup.
    • 72 |
    • For math equations with Greek letters, it is important that the speech synthesizer be able to distinguish the phonetic differences between them, whether in the natural language or phonetic equivalents. For example, ε (epsilon), υ (upsilon), φ (phi), χ (chi), ξ (xi).
    • 73 |
    74 | 75 |
  • 76 |
  • C. As a test administrator/educator, pronunciations must be consistent across instruction and assessment, in order to avoid test bias and pronunciation effects on performance for students with disabilities (SWD) in comparison to students without disabilities (SWOD). Examples include: 77 |
      78 |
    • If a test question is measuring rhyming of words or sounds of words, the speech synthesis should not read aloud the words, but rather spell out the words in the answer options.
    • 79 |
    • If a test question is measuring spelling and the student needs to consider spelling correctness/incorrectness, the speech synthesis should not read aloud the misspelt words, especially for words, such as: 80 |
        81 |
      • Heteronyms/homographs: same spelling, different pronunciation, different meanings, such as lead (to go in front of) or lead (a metal); wind (to follow a course that is not straight) or wind (a gust of air); bass (low, deep sound) or bass (a type of fish), etc.
      • 82 |
      • Homophone: words that sound alike, such as, to/two/too; there/their/they're; pray/prey; etc.
      • 83 |
      • Homonyms: multiple meaning words, such as scale (measure) or scale (climb, mount); fair (reasonable) or fair (carnival); suit (outfit) or suit (harmonize); etc.
      • 84 |
      85 | 86 |
    • 87 |
    88 | 89 |
  • 90 |
91 | 92 |
93 |
94 |

Academic and Linguistic Practitioners

95 |

Content management in TTS also extends to serving as a means of encoding and preserving spoken text for academic analyses, irrespective of discipline, subject domain, or research methodology.

96 |
    97 |
  • A. As a linguist, I want to represent all the pronunciation variations of a given word in any language, for future analyses.
  • 98 |
  • B. As a speech-language pathologist or speech therapist, I want TTS functionality to cover components of speech and language that encompass dialectal and individual differences in pronunciation; identify differences in intonation, syntax, and semantics; and allow for enhanced comprehension, language processing, and support for phonological awareness.
  • 99 |
100 |
101 | 102 |
103 |
104 |

Software Application Development

105 |

Technical standards for software development assist organizations and individuals to provide accessible experiences for users with disabilities. The final user scenarios in this document are considered from the perspective of those who design and develop software.

106 |
    107 |
  • A. As a Product Owner for a web content management system (CMS), I want the next software product release to have the capability of pronouncing speech "just like Alexa can".
  • 108 |
  • B. As a client-side user interface developer, I need a way to render text content so that it is spoken accurately by assistive technologies.
  • 109 |
110 |
111 | 112 | 113 | 114 |
115 | 116 | 117 | 118 | 119 | 120 |
Acknowledgements placeholder
121 | 122 | 123 | 124 | 125 | 126 | -------------------------------------------------------------------------------- /use-cases/draft.md: -------------------------------------------------------------------------------- 1 | # Use Cases 2 | 3 | - [aria-ssml](#use-case-aria-ssml) 4 | - [data-ssml](#use-case-data-ssml) 5 | - [HTML5](#use-case-html5) 6 | - [Custom Element](#use-case-custom-element) 7 | - [JSON-LD](#use-case-json-ld) 8 | - [Ruby](#use-case-ruby) 9 | 10 | ## Abstract 11 | 12 | The objective of the Pronunciation Task Force is to develop normative specifications and best practices guidance collaborating with other W3C groups as appropriate, to provide for proper pronunciation in HTML content when using text to speech (TTS) synthesis. This document provides various use cases highlighting the need for standardization of pronunciation markup, to ensure that consistent and accurate representation of the content. The requirements from the user scenarios provide the basis for these technical requirements/specifications. 13 | 14 | ## Use Case `aria-ssml` 15 | 16 | ### Name 17 | `aria-ssml` 18 | 19 | ### Owner 20 | Paul Grenier 21 | 22 | ### Background and Current Practice 23 | A new `aria` attribute could be used to include pronunciation content. 24 | 25 | ### Goal 26 | - Embed SSML in an HTML document. 27 | 28 | ### Target Audience 29 | 30 | - Assistive Technology 31 | - Browser Extensions 32 | - Search Engines 33 | 34 | ### Implementation Options 35 | 36 | - `aria-ssml` as embedded JSON 37 | 38 | When AT encounters an element with `aria-ssml`, the AT should enhance the UI by processing the pronunciation content and passing it to the [Web Speech API](https://w3c.github.io/speech-api/) or an external API (e.g., [Google's Text to Speech API](https://cloud.google.com/text-to-speech/)). 39 | 40 | ```html 41 | I say pecan. 42 | You say pecan. 43 | ``` 44 | 45 | Client will convert JSON to SSML and pass the XML string a speech API. 46 | 47 | ```js 48 | var msg = new SpeechSynthesisUtterance(); 49 | msg.text = convertJSONtoSSML(element.dataset.ssml); 50 | speechSynthesis.speak(msg); 51 | ``` 52 | 53 | - `aria-ssml` referencing XML by template ID 54 | 55 | ```html 56 | 57 | 69 | 70 |

<p aria-ssml="#pecan">You say, pecan. I say, pecan.</p>

71 | ``` 72 | 73 | Client will parse XML and serialize it before passing to a speech API: 74 | 75 | ```js 76 | var msg = new SpeechSynthesisUtterance(); 77 | var xml = document.getElementById('pecan').content.firstElementChild; 78 | msg.text = serialize(xml); 79 | speechSynthesis.speak(msg); 80 | ``` 81 | 82 | - `aria-ssml` referencing an XML string as script tag 83 | 84 | ```html 85 | 96 | 97 |

<p aria-ssml="#pecan">You say, pecan. I say, pecan.</p>

98 | ``` 99 | 100 | Client will pass the XML string raw to a speech API. 101 | 102 | ```js 103 | var msg = new SpeechSynthesisUtterance(); 104 | msg.text = document.getElementById('pecan').textContent; 105 | speechSynthesis.speak(msg); 106 | ``` 107 | 108 | - `aria-ssml` referencing an external XML document by URL 109 | 110 | ```html 111 |

<p aria-ssml="http://example.com/pronounce.ssml#pecan">You say, pecan. I say, pecan.</p>

112 | ``` 113 | 114 | Client will pass the string payload to a speech API. 115 | 116 | ```js 117 | var msg = new SpeechSynthesisUtterance(); 118 | var response = await fetch(el.dataset.ssml) 119 | msg.txt = await response.text(); 120 | speechSynthesis.speak(msg); 121 | ``` 122 | 123 | ### Existing Work 124 | 125 | - [`aria-ssml` proposal](https://github.com/alia11y/SSMLinHTMLproposal) 126 | - [SSML](https://www.w3.org/TR/speech-synthesis11/) 127 | - [Web Speech API](https://w3c.github.io/speech-api/) 128 | 129 | ### Problems and Limitations 130 | 131 | - `aria-ssml` is not a valid `aria-*` attribute. 132 | - OS/Browsers combinations that do not support the serialized XML usage of the Web Speech API. 133 | 134 | ## Use Case `data-ssml` 135 | 136 | ### Name 137 | `data-ssml` 138 | 139 | ### Owner 140 | Paul Grenier 141 | 142 | ### Background and Current Practice 143 | As an existing attribute, [`data-*`](https://html.spec.whatwg.org/multipage/dom.html#embedding-custom-non-visible-data-with-the-data-*-attributes) could be used, with some conventions, to include pronunciation content. 144 | 145 | ### Goal 146 | 147 | - Support repeated use within the page context 148 | - Support external file references 149 | - Reuse existing techniques without expanding specifications 150 | 151 | ### Target Audience 152 | 153 | - Hearing users 154 | 155 | ### Implementation Options 156 | 157 | - `data-ssml` as embedded JSON 158 | 159 | When an element with `data-ssml` is encountered by an SSML-aware AT, the AT should enhance the user interface by processing the referenced SSML content and passing it to the [Web Speech API](https://w3c.github.io/speech-api/) or an external API (e.g., [Google's Text to Speech API](https://cloud.google.com/text-to-speech/)). 160 | 161 | 162 | ```html 163 | I say pecan. 164 | You say pecan. 165 | ``` 166 | 167 | Client will convert JSON to SSML and pass the XML string a speech API. 168 | 169 | ```js 170 | var msg = new SpeechSynthesisUtterance(); 171 | msg.text = convertJSONtoSSML(element.dataset.ssml); 172 | speechSynthesis.speak(msg); 173 | ``` 174 | 175 | - `data-ssml` referencing XML by template ID 176 | 177 | ```html 178 | 179 | 191 | 192 |

<p data-ssml="#pecan">You say, pecan. I say, pecan.</p>

193 | ``` 194 | 195 | Client will parse XML and serialize it before passing to a speech API: 196 | 197 | ```js 198 | var msg = new SpeechSynthesisUtterance(); 199 | var xml = document.getElementById('pecan').content.firstElementChild; 200 | msg.text = serialize(xml); 201 | speechSynthesis.speak(msg); 202 | ``` 203 | 204 | - `data-ssml` referencing an XML string as script tag 205 | 206 | ```html 207 | 218 | 219 |

<p data-ssml="#pecan">You say, pecan. I say, pecan.</p>

220 | ``` 221 | 222 | Client will pass the XML string raw to a speech API. 223 | 224 | ```js 225 | var msg = new SpeechSynthesisUtterance(); 226 | msg.text = document.getElementById('pecan').textContent; 227 | speechSynthesis.speak(msg); 228 | ``` 229 | 230 | - `data-ssml` referencing an external XML document by URL 231 | 232 | ```html 233 |

<p data-ssml="http://example.com/pronounce.ssml#pecan">You say, pecan. I say, pecan.</p>

234 | ``` 235 | 236 | Client will pass the string payload to a speech API. 237 | 238 | ```js 239 | var msg = new SpeechSynthesisUtterance(); 240 | var response = await fetch(el.dataset.ssml) 241 | msg.txt = await response.text(); 242 | speechSynthesis.speak(msg); 243 | ``` 244 | 245 | ### Existing Work 246 | 247 | - [`aria-ssml` proposal](https://github.com/alia11y/SSMLinHTMLproposal) 248 | - [SSML](https://www.w3.org/TR/speech-synthesis11/) 249 | - [Web Speech API](https://w3c.github.io/speech-api/) 250 | 251 | ### Problems and Limitations 252 | 253 | - Does not assume or suggest visual pronunciation help for deaf or hard of hearing 254 | - Use of `data-*` requires input from AT vendors 255 | - XML data is not indexed by search engines 256 | 257 | ## Use Case HTML5 258 | 259 | ### Name 260 | HTML5 261 | 262 | ### Owner 263 | Paul Grenier 264 | 265 | ### Background and Current Practice 266 | HTML5 includes the XML [namespaces](https://www.w3.org/TR/html5/infrastructure.html#namespaces) for MathML and SVG. So, using either's elements in an HTML5 document is valid. Because SSML's implementation is non-visual in nature, browser implementation could be slow or non-existent without affecting how authors use SSML in HTML. Expansion of HTML5 to include SSML namespace would allow valid use of SSML in the HTML5 document. Browsers would treat the element like any other unknown element, as [`HTMLUnknownElement`](https://www.w3.org/TR/html50/dom.html#htmlunknownelement). 267 | 268 | ### Goal 269 | 270 | - Support valid use of SSML in HTML5 documents 271 | - Allow visual pronunciation support 272 | 273 | ### Target Audience 274 | 275 | - SSML-aware technologies and browser extensions 276 | - Search indexers 277 | 278 | ### Implementation Options 279 | 280 | - SSML 281 | 282 | When an element with [`data-ssml`](https://www.w3.org/TR/wai-aria-1.1/#aria-details) is encountered by an [SSML](https://www.w3.org/TR/speech-synthesis11/)-aware AT, the AT should enhance the user interface by processing the referenced SSML content and passing it to the [Web Speech API](https://w3c.github.io/speech-api/) or an external API (e.g., [Google's Text to Speech API](https://cloud.google.com/text-to-speech/)). 283 | 284 | ```html 285 | 286 | You say, pecan. 287 | I say, pecan. 288 | 289 | ``` 290 | 291 | ### Existing Work 292 | 293 | - [VoiceXML 2.1](https://www.w3.org/TR/voicexml21/) 294 | - [SMIL - Synchronized Multimedia Integration Language](https://www.w3.org/TR/REC-smil/smil-extended-linking.html#SMILLinking-Relationship-to-XLink) 295 | - [PLS - Pronunciation Lexicon](https://www.w3.org/TR/pronunciation-lexicon/#AppB) 296 | 297 | ### Problems and Limitations 298 | 299 | - SSML is not valid HTML5 300 | 301 | ## Use Case Custom Element 302 | 303 | ### Name 304 | Custom Element 305 | 306 | ### Owner 307 | Paul Grenier 308 | 309 | ### Background and Current Practice 310 | Embed valid SSML in HTML using custom elements registered as `ssml-*` where `*` is the actual SSML tag name (except for `p` which expects the same treatment as an HTML `p` in HTML layout). 311 | 312 | ### Goal 313 | 314 | - Support use of SSML in HTML documents. 315 | 316 | ### Target Audience 317 | 318 | - SSML-aware technologies and browser extensions 319 | - Search indexers 320 | 321 | ### Implementation Options 322 | 323 | - `ssml-speak`: see [demo](https://ssml-components.glitch.me/) 324 | 325 | Only the `` component requires registration. 
The component code lifts the SSML by getting the `innerHTML`, removing the `ssml-` prefix from the interior tags, and passing the result to the Web Speech API. The `p` tag from SSML is not given the prefix because we still want to start a semantic paragraph within the content. The other tags used in the example have no semantic meaning. Tags like `em` in HTML could be converted to `emphasis` in SSML. In that case, CSS styles will come from the browser's default styles or the page author. 326 | 327 | ```html 328 | 329 | Here are SSML samples. 330 | I can pause. 331 | I can speak in cardinals. 332 | Your number is 10. 333 | Or I can speak in ordinals. 334 | You are 10 in line. 335 | Or I can even speak in digits. 336 | The digits for ten are 10. 337 | I can also substitute phrases, like the W3C. 338 | Finally, I can speak a paragraph with two sentences. 339 |

340 | You say, pecan. 341 | I say, pecan. 342 |

343 |
344 | 361 | ``` 362 | 363 | ```js 364 | class SSMLSpeak extends HTMLElement { 365 | constructor() { 366 | super(); 367 | const template = document.getElementById('ssml-controls'); 368 | const templateContent = template.content; 369 | this.attachShadow({mode: 'open'}) 370 | .appendChild(templateContent.cloneNode(true)); 371 | } 372 | connectedCallback() { 373 | const button = this.shadowRoot.querySelector('[role="switch"][aria-labelledby="play"]') 374 | const ssml = this.innerHTML.replace(/ssml-/gm, '') 375 | const msg = new SpeechSynthesisUtterance(); 376 | msg.lang = document.documentElement.lang; 377 | msg.text = ` 383 | ${ssml} 384 | `; 385 | msg.voice = speechSynthesis.getVoices().find(voice => voice.lang.startsWith(msg.lang)); 386 | msg.onstart = () => button.setAttribute('aria-checked', 'true'); 387 | msg.onend = () => button.setAttribute('aria-checked', 'false'); 388 | button.addEventListener('click', () => speechSynthesis[speechSynthesis.speaking ? 'cancel' : 'speak'](msg)) 389 | } 390 | } 391 | 392 | customElements.define('ssml-speak', SSMLSpeak); 393 | ``` 394 | 395 | ### Existing Work 396 | 397 | - [DOM Living Standard](https://dom.spec.whatwg.org/#concept-element-custom) 398 | - [Web Speech API](https://w3c.github.io/speech-api/) 399 | 400 | ### Problems and Limitations 401 | 402 | - OS/Browsers combinations that do not support the serialized XML usage of the Web Speech API. 403 | - Browsers may need to map SSML tags with CSS styles for default user agent styles. 404 | - Without an extension or AT, only user interaction can start the Web Speech API. 405 | - Authors or parsing may need to remove HTML content with unintended SSML semantics before serialization. 406 | 407 | ## Use Case JSON-LD 408 | 409 | ### Name 410 | JSON-LD 411 | 412 | ### Owner 413 | Paul Grenier 414 | 415 | ### Background and Current Practice 416 | [JSON-LD](https://www.w3.org/2018/jsonld-cg-reports/json-ld/) provides an established standard for embedding data in HTML. Unlike other microdata approaches, JSON-LD helps to reuse standardized annotations through external references. 417 | 418 | ### Goal 419 | 420 | - Support use of SSML in HTML documents. 421 | 422 | ### Target Audience 423 | 424 | - SSML-aware technologies and browser extensions 425 | - Search indexers 426 | 427 | ### Implementation Options 428 | 429 | - `JSON-LD` 430 | 431 | ```html 432 | 443 |

444 | Do you listen to WKRP? 447 |

448 | ``` 449 | 450 | ### Existing Work 451 | 452 | - [Web of Things Working Group](https://www.w3.org/WoT/WG/) 453 | - [Schema.org](https://schema.org/) 454 | 455 | ### Problems and Limitations 456 | 457 | - not an established "type"/published schema 458 | 459 | 460 | ## Use Case Ruby 461 | 462 | ### Name 463 | Ruby 464 | 465 | ### Owner 466 | Paul Grenier 467 | 468 | ### Background and Current Practice 469 | > [``](https://www.w3.org/TR/html5/textlevel-semantics.html#the-ruby-element) annotations are short runs of text presented alongside base text, primarily used in East Asian typography as a guide for pronunciation or to include other annotations. 470 | 471 | `ruby` guides pronunciation visually. This seems like a natural fit for text-to-speech. 472 | 473 | ### Goal 474 | 475 | - Support use of SSML in HTML documents. 476 | - Offer visual pronunciation support. 477 | 478 | ### Target Audience 479 | 480 | - AT and browser extensions 481 | - Search indexers 482 | 483 | ### Implementation Options 484 | 485 | - `ruby` with microdata 486 | 487 | Microdata can augment the `ruby` element and its descendants. 488 | 489 | ```html 490 |

491 | You say, 492 | 493 | 494 | pecan 495 | pɪˈkɑːn 496 | 497 | . 498 | 499 | I say, 500 | 501 | 502 | pe 503 | ˈpi 504 | can 505 | kæn 506 | 507 | . 508 | 509 |

510 | ``` 511 | 512 | ### Existing Work 513 | 514 | - [HTML Living Standard](https://html.spec.whatwg.org/multipage/text-level-semantics.html#the-ruby-element) 515 | - [Schema.org](https://schema.org/) 516 | 517 | ### Problems and Limitations 518 | 519 | - AT may process annotations as content 520 | - AT "double reading" words instead of choosing either the content or the annotation 521 | - Only offers for a few SSML expressions 522 | - Difficult to reuse by reference 523 | 524 | -------------------------------------------------------------------------------- /gap-analysis/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Pronunciation Gap Analysis 6 | 7 | 8 | 9 | 10 | 11 | 12 |
13 |

This document is the Gap Analysis Review which presents required features of Spoken Text Pronunciation and 14 | Presentation and existing standards or specifications that may support (or enable support of) those features. Gaps 15 | are defined when a 16 | required feature does not have a corresponding method by which it can be authored in HTML.

17 |
18 |
19 |
20 |

Introduction

21 |

Accurate, consistent pronunciation and presentation of content spoken by text to speech synthesis (TTS) is an 22 | essential requirement in education and other domains. Organizations such as educational publishers and assessment 23 | vendors are 24 | looking for a standards-based solution to enable authoring of spoken presentation guidance in HTML which can then 25 | be consumed by assistive technologies and other applications which utilize text to speech synthesis for rendering 26 | of content.

27 |

W3C has developed two standards pertaining to the presentation of speech synthesis which have reached 28 | recommendation status: Speech Synthesis Markup Language 29 | (SSML) and the Pronunciation Lexicon Specification 30 | (PLS). Both standards are directly consumed by a speech synthesis engine supporting those standards. While a PLS 31 | file may be referenced in an HTML 32 | page using link rel, there is no known uptake of PLS using this method by assistive technologies. 33 | While there are technically methods to allow authors to inline SSML within HTML (using namespaces), such an 34 | approach has not been 35 | adopted, and anecdotal comments from browser and assistive technology vendors have suggested this is not a viable 36 | approach.

37 |

The CSS Speech Module is a retired W3C Working Group Note that 38 | describes a mechanism by which content authors may apply a variety of speech styling and presentation properties to 39 | HTML. This approach 40 | has a variety of advantages but does not implement the full set of features required for pronunciation. Section 16 of the Note specifically references the 42 | issue of pronunciation:

43 |
44 | CSS does not specify how to define the pronunciation (expressed using a well-defined phonetic alphabet) of a 45 | particular piece of text within the markup document. A "phonemes" property was described in earlier drafts of this 46 | specification, but 47 | objections were raised due to breaking the principle of separation between content and presentation (the 48 | "phonemes" authored within aural CSS stylesheets would have needed to be updated each time text changed within the 49 | markup document). The 50 | "phonemes" functionality is therefore considered out-of-scope in CSS (the presentation layer) and should be 51 | addressed in the markup / content layer. 52 |
53 |

While a portion of CSS Speech was demonstrated by Apple in 2011 on iOS with Safari and VoiceOver, 55 | it is not presently supported on any platform with any Assistive Technology, 56 | and work on the standard has itself been stopped by the CSS working group.

57 |

Efforts to address this need have been considered by both assessment technology vendors and the publishing 58 | community. Citing the need for pronunciation and presentation controls, the IMS Global Learning Consortium added 59 | the ability to author SSML 60 | markup, specify PLS files, and reference CSS Speech properties to the Question Test Interoperability (QTI) Accessible Portable Item Protocol (APIP). 62 | In practice, QTI/APIP 63 | authored content is transformed into HTML for rendering in web browsers. This led to the dilemma that there is no 64 | standardized (and supported) method for inlining SSML in HTML, nor is there support for CSS Speech. This has led 65 | to the situation 66 | where SSML is the primary authoring model, with assessment vendors implementing a custom method for adding the 67 | SSML (or SSML-like) features to HTML using non-standard or data attributes, with customized Read Aloud software 68 | consuming those 69 | attributes for text to speech synthesis. Given the need to deliver accurate spoken presentation, non-standard 70 | approaches often include mis-use of WAI-ARIA, and novel or contextually non-valid attributes (e.g., 71 | label). A 72 | particular problem occurs when custom pronunciation is applied via a misuse of the aria-label 73 | attribute, which results in an issue for screen reader users who also rely upon refreshable braille, and in which 74 | a hinted pronunciation 75 | intended only for a text to speech synthesizer also appears on the braille display. 76 |

77 |

The attribute model for adding pronunciation and presentation guidance for assistive technologies and text to 78 | speech synthesis has demonstrated traction by vendors trying to solve this need. It should be noted that many of 79 | the required 80 | features are not well supported by a single attribute, as most follow the form of a presentation property / value 81 | pairing. Using multiple attributes to provide guidance to assistive technologies is not novel, as seen with 82 | WAI-ARIA where multiple 83 | attributes may be applied to a single element, for example, role and aria-checked. The 84 | EPUB 85 | standard for 86 | digital publishing introduced a namespaced version of the SSML phoneme and alphabet 87 | attributes enabling content authors to provide pronunciation guidance. Uptake by the publishing community has been 88 | limited, reportedly 89 | due to the lack of support in reading systems and assistive technologies. 90 |

91 |
92 |
93 |

Core Features for Pronunciation and Spoken Presentation

94 |

The common spoken pronunciation requirements from the education domain serve as a primary source for these 95 | features. These requirements can be broken down into the following main functions that would support authoring and 96 | spoken presentation 97 | needs.

98 |
99 |

Language

100 |

When content is authored in mixed language, a mechanism is needed to allow authors to indicate both the base 101 | language of the content as well as the language of individual words and phrases. The expectation is that 102 | assistive technologies and 103 | other tools that utilize text to speech synthesis would detect and apply the language requested when presenting 104 | the text.

105 |
106 | 107 |

Voice Family / Gender

108 |

Content authors may elect to adjust voice family and gender parameters to control the spoken presentation for 109 | purposes such as providing a gender-specific voice to reflect that of the author, or for a character (or 110 | characters) in a theatrical 111 | presentation of a story. Many assistive technologies already provide user selection of voice family and gender 112 | independent of any authored intent.

113 |
114 |
115 |

Phonetic Pronunciation of String Values

116 |

In some cases words may need to have their phonetic pronunciation prescribed by the content author. This may 117 | occur with uncommon words (not supported by text to speech synthesizers), or in cases where word pronunciation 118 | will vary based on 119 | context, and that context may not be correctly described.

120 |
121 |
122 |

String Substitution

123 |

There are cases where content that is visually presented may require replacement (substitution) with an 124 | alternate textual form to ensure correct pronunciation by text to speech synthesizers. In some cases phonetic 125 | pronunciation may be a 126 | solution to this need.

127 |
128 |
129 |

Rate / Pitch / Volume

130 |

While end users should have full control over spoken presentation parameters such as speaking rate, pitch, and 131 | volume (e.g., WCAG 1.4.2), content authors may elect to make adjustments of those parameters to control the 132 | spoken presentation for 133 | purposes such as a theatrical presentation of a story. Many assistive technologies already provide user control 134 | of speaking rate, pitch, and volume independent of any authored intent.

135 |
136 |
137 |

Emphasis

138 |

In written text, an author may find it necessary to add emphasis to an important word or phrase. HTML supports 139 | both semantic elements (e.g., em) and CSS properties which, through a variety of style options, 140 | make programmatic 141 | detection of authored emphasis difficult (e.g., font-weight: bold). While the emphasis element has 142 | existed since HTML 2.0, there is currently no uptake by assistive technology or read aloud tools to speak text 143 | semantically tagged 144 | for emphasis with actual emphasis.

145 |
146 |
147 |

Say As

148 |

While text to speech engines continue to improve in their ability to process text and provide accurate spoken 149 | rendering of acronyms and numeric values, there can be instances where uncommon terms or alphanumeric constructs 150 | pose challenges. 151 | Further, some educators may have specific requirements as to how a numeric value should be spoken, which may differ from 152 | a TTS engine's default rendering. For example, the Smarter Balanced Assessment Consortium has developed Read Aloud Guidelines to be 154 | followed by human readers used by students who may require a spoken presentation of an educational test, which 155 | includes specific examples 156 | of how numeric values should be read aloud.

157 |
158 |

Presentation of Numeric Values

159 |

How numeric values should be spoken may not always be correctly determined by text to 160 | speech engines from context. Examples include speaking a number as individual digits, correct reading of 161 | year values, and 162 | the correct speaking of ordinal and cardinal numbers.

163 |
164 |
165 |

Presentation of String Values

166 |

Precise control is needed over how string values should be spoken, which may not be determined correctly by text to 167 | speech synthesizers.

168 |
169 |
170 |
171 |

Pausing

172 |

Specific spoken presentation requirements exist in the Accessibility Guidelines from PARCC, and 173 | include requirements such as inserting pauses in the spoken presentation, before and after emphasized words and 174 | mathematical 175 | terms. In practice, content authors may find it necessary to insert pauses between numeric values to limit the 176 | chance of hearing multiple numbers as a single value. One common technique to achieve pausing to date has 177 | involved inserting 178 | non-visible commas before or after a text string requiring a pause. While this may work in practice for a read 179 | aloud TTS tool, it is problematic for screen reader users who may, based on verbosity settings, hear the 180 | multiple commas announced, 181 | and for refreshable braille users who will have the commas visible in braille.

182 |
183 |
184 | 185 |
186 |

Gap Analysis

187 |

Based on the features and use cases described in the prior sections, the following table presents existing speech 188 | presentation standards, HTML features, and WAI-ARIA attributes that may offer a method to achieve the requirement 189 | for HTML authors. A blank cell for any approach represents a gap in support.

190 | 191 | 192 | 193 | 194 | 195 | 196 | 197 | 198 | 199 | 200 | 201 | 202 | 203 | 204 | 205 | 206 | 207 | 208 | 209 | 210 | 211 | 212 | 213 | 214 | 215 | 216 | 217 | 218 | 219 | 220 | 221 | 222 | 223 | 224 | 225 | 226 | 227 | 228 | 229 | 230 | 231 | 232 | 233 | 234 | 235 | 236 | 237 | 238 | 239 | 240 | 241 | 242 | 243 | 244 | 245 | 246 | 247 | 248 | 249 | 250 | 251 | 252 | 253 | 254 | 255 | 256 | 257 | 258 | 259 | 260 | 261 | 262 | 263 | 264 | 265 |
Requirement | HTML | WAI-ARIA | PLS | CSS Speech | SSML
Language | Yes | | | | Yes
Voice Family/Gender | | | | Yes | Yes
Phonetic Pronunciation | | | Yes | | Yes
Substitution | | Partial | | | Yes
Rate/Pitch/Volume | | | | Yes | Yes
Emphasis | Yes | | | Yes | Yes
Say As | | | | | Yes
Pausing | | | | Yes | Yes
266 |

The following sections describe how each of the required features may be met by the use of existing approaches. A 267 | key consideration in the analysis is whether a means exists to directly author (or annotate) HTML content to 268 | incorporate the 269 | spoken presentation and pronunciation feature.

270 |
271 |

Language

272 |

Allow content authors to specify the language of text contained within an element so that the TTS used for 273 | rendering will select the appropriate language for synthesis.

274 | 275 |

HTML

276 |

lang attribute can be applied at the document level or to individual elements. (WCAG) (AT 277 | Supported: some)

278 |
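The following fragment is an assumed, illustrative example (element text and structure invented here, not part of the original analysis) showing how an author can set a base language and override it for an individual phrase; a TTS engine that honors lang can then switch synthesis languages accordingly:
Example:
<!-- base language is English; the span overrides it for one word -->
<p lang="en">The French pronunciation of <span lang="fr">Paris</span> differs from the English one.</p>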

SSML

279 |

Example: <speak> In Paris, they pronounce it <lang xml:lang="fr-FR">Paris</lang> 280 | </speak>

281 |
282 |
283 |

Voice Family/Gender

284 |

Allow content authors to specify a specific TTS voice to be used to render text. For example, for content that 285 | presents a dialog between two people, a woman and a man, the author may specify that a female voice be used for 286 | the woman's text and a 287 | male voice be used for the man's text. Some platform TTS services may support a variety of voices, identified by 288 | a name, gender, or even age.

289 |

CSS

290 |

voice-family property can be used to specify the gender of the voice.

291 |

Example: { voice-family: male; }

292 |

SSML

293 |

Using the <voice> element, the gender of the speaker, if supported by the TTS engine, can be 294 | specified.

295 |

Example: <voice gender="female" >Mary had a little lamb,</voice>

296 | 297 |
298 |
299 |

Phonetic Pronunciation

300 |

Allow content authors to precisely specify the phonetic pronunciation of a word or phrase.

301 |

PLS

302 |

Using PLS, all the pronunciations can be factored out into an external PLS document which is referenced by the 303 | <lexicon> element of SSML 304 |

305 |

306 |

Example: <speak> <lexicon uri="http://www.example.com/movie_lexicon.pls"/>
307 |           The title of the movie is: "La vita è bella" (Life is beautiful),
308 |           which is directed by Roberto Benigni.</speak>
309 |       
310 |

311 | 312 |

SSML

313 |

The following is a simple example of an SSML document. It includes an Italian movie title and the name of the 314 | director to be read in US English.

315 |

316 |

Example: The title of the movie is:
317 |         <speak> <phoneme alphabet="ipa" ph="ˈlɑ ˈviːɾə ˈʔeɪ ˈbɛlə">
318 |           "La vita è bella"</phoneme> (Life is beautiful),
319 |           which is directed by
320 |           <phoneme alphabet="ipa" ph="ɹəˈbɛːɹɾoʊ bɛˈniːnji">
321 |           Roberto Benigni </phoneme>.</speak>
322 |         
323 |       
324 |

325 | 326 |
327 |
328 |

Substitution

329 |

Allow content authors to substitute a text string to be rendered by TTS instead of the actual text contained in 330 | an element.

331 |

WAI-ARIA

332 |

The aria-label and aria-labelledby attributes can be used 334 | by an author to supply a text string 335 | that will become the accessible name for the element upon which it is applied. This usage effectively 336 | provides a mechanism for performing text substitution that is supported by a screen reader. However, it is 337 | problematic for one significant reason: for users who utilize screen readers and refreshable Braille, the 338 | content that is voiced will not match the content that is sent to the refreshable Braille device. This mismatch 339 | would not be acceptable for some content, particularly for assessment content.

340 |
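As a short, assumed illustration of this technique (mirroring the SSML sub example below, and not taken from any published guidance), the hinted pronunciation is placed in aria-label while sighted users continue to see the abbreviation; the same substituted string is also what reaches a refreshable braille display:
Example:
<!-- screen readers announce "aluminum"; braille users receive "aluminum" instead of the visible "Al" -->
<p>My favorite chemical element is <span aria-label="aluminum">Al</span>.</p>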

SSML

341 |

Pronounce the specified word or phrase as a different word or phrase. Specify the pronunciation to substitute 342 | with the alias attribute.

343 |

344 |

345 |         
346 |           <speak>
347 |           My favorite chemical element is <sub alias="aluminum">Al</sub>,
348 |           but Al prefers <sub alias="magnesium">Mg</sub>.
349 |           </speak>
350 |         
351 |       
352 |

353 |
354 |
355 |

Rate/Pitch/Volume

356 |

Allow content authors to specify characteristics, such as rate, pitch, and/or volume of the TTS rendering of 357 | the text.

358 |

CSS

359 |

360 |

361 |
voice-rate
362 |
The ‘voice-rate’ property manipulates the rate of generated synthetic speech in terms of words 363 | per minute.
364 |
365 |

366 | 367 |

368 |

369 |
voice-pitch
370 |
The ‘voice-pitch’ property specifies the "baseline" pitch of the generated speech output, which 371 | depends on the used ‘voice-family’ instance, and varies across speech synthesis processors (it 372 | approximately corresponds to the average pitch of the output). For example, the common pitch for a male voice 373 | is around 120Hz, whereas it is around 210Hz for a female voice.
374 |
375 |

376 |

377 |

378 |
voice-range
379 |
The ‘voice-range’ property specifies the variability in the "baseline" pitch, i.e. how much the 380 | fundamental frequency may deviate from the average pitch of the speech output. The dynamic pitch range of the 381 | generated speech generally increases for a highly animated voice, for example when variations in inflection 382 | are used to convey meaning and emphasis in speech. Typically, a low range produces a flat, monotonic voice, 383 | whereas a high range produces an animated voice.
384 |
385 |

386 |
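As an illustration only (CSS Speech is not currently supported by assistive technologies, as noted earlier; the class name and text are invented for this sketch), these properties would be authored against HTML elements like any other CSS:
Example:
<style>
  /* slower, lower, louder rendering for narration passages */
  .narration { voice-rate: slow; voice-pitch: low; voice-volume: loud; }
</style>
<p class="narration">Once upon a midnight dreary, while I pondered, weak and weary.</p>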

SSML

387 |

prosody modifies the volume, pitch, and rate of the tagged speech.

388 |

389 |

390 |         
391 |           <speak>
392 |           Normal volume for the first sentence.
393 |           <prosody volume="x-loud">Louder volume for the second sentence</prosody>.
394 |           When I wake up, <prosody rate="x-slow">I speak quite slowly</prosody>.
395 |           I can speak with my normal pitch,
396 |           <prosody pitch="x-high"> but also with a much higher pitch </prosody>,
397 |           and also <prosody pitch="low">with a lower pitch</prosody>.
398 |           </speak>
399 |         
400 |       
401 |

402 |
403 |
404 |

Emphasis

405 |

Allow content authors to specify that text content be spoken with emphasis, for example, louder and more 406 | slowly. This can be viewed as a simplification of the Rate/Pitch/Volume controls to reduce authoring complexity. 407 |

408 |

HTML

409 |

410 | The HTML <em> element marks text that has stress emphasis. The <em> 411 | element can be nested, with each level of nesting indicating a greater degree of emphasis. 412 |

413 |

414 | The <em> element is for words that have a stressed emphasis compared to surrounding text, 415 | which is often limited to a word or words of a sentence and affects the meaning of the sentence itself. 416 | 417 | Typically this element is displayed in italic type. However, it should not be used simply to apply italic 418 | styling; use the CSS font-style property for that purpose. Use the <cite> 419 | element to mark the title of a work (book, play, song, etc.). Use the <i> element to mark 420 | text that is in an alternate tone or mood, which covers many common situations for italics such as scientific 421 | names or words in other languages. Use the <strong> element to mark text that has greater 422 | importance than surrounding text. 423 |

424 | 425 |
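For illustration (an assumed fragment, not from the analysis), semantic emphasis is authored as follows, although, as noted above, current assistive technologies do not change their spoken rendering for it:
Example:
<!-- em conveys stress emphasis semantically, not merely italic styling -->
<p>You <em>must</em> answer <em>all</em> questions before submitting the test.</p>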

CSS

426 |

427 |

428 |
voice-stress
429 |
The ‘voice-stress’ property manipulates the strength of emphasis, which is normally applied 430 | using a combination of pitch change, timing changes, loudness and other acoustic differences. The precise 431 | meaning of the values therefore depend on the language being spoken.
432 |
433 |

434 | 435 |
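An assumed, illustrative pairing of the HTML em element with this CSS Speech property, so that semantic emphasis would also be rendered with stronger spoken stress if CSS Speech were supported:
Example:
<style>
  /* spoken stress follows the semantic emphasis markup */
  em { voice-stress: strong; }
</style>
<p>I <em>really like</em> that person.</p>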

SSML

436 |

Emphasize the tagged words or phrases. Emphasis changes rate and volume of the speech. More emphasis is spoken 437 | louder and slower. Less emphasis is quieter and faster.

438 |

439 |

440 |         
441 |           <speak>
442 |           I already told you I
443 |           <emphasis level="strong">really like</emphasis> that person.
444 |           </speak>
445 |         
446 |       
447 |

448 |
449 |
450 |

Say As

451 |

Allow content authors to specify how text is spoken. For example, content authors would be able to indicate 452 | that a series of four digits should be spoken as a year rather than as a cardinal number.

453 |

CSS

454 |

The ‘speak-as’ property determines in what manner text gets rendered aurally, based upon a 455 | predefined list of possibilities.

456 |

457 | Speech synthesizers are knowledgeable about what a number is. The ‘speak-as’ property enables some 458 | level of control on how user agents render numbers, and may be implemented as a preprocessing step before 459 | passing the text to the actual speech synthesizer. 460 |

461 |
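An assumed, illustrative example (class name and content invented here) of the speak-as property applied to HTML content, so that a numeric string would be read digit by digit rather than as a single cardinal number:
Example:
<style>
  /* read each character separately, e.g. "three one four one five" */
  .confirmation-code { speak-as: digits; }
</style>
<p>Your confirmation code is <span class="confirmation-code">31415</span>.</p>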

SSML

462 |

463 | Describes how the text should be interpreted. This lets you provide additional context to the text and eliminate 464 | any ambiguity on how Alexa should render the text. Indicate how Alexa should interpret the text with the 465 | interpret-as attribute. 466 |

467 |

468 |

469 |         
470 |           <speak>
471 |           Here is a number spoken as a cardinal number:
472 |           <say-as interpret-as="cardinal">12345</say-as>.
473 |           Here is the same number with each digit spoken separately:
474 |           <say-as interpret-as="digits">12345</say-as>.
475 |           Here is a word spelled out: <say-as interpret-as="spell-out">hello</say-as>
476 |           </speak>
477 |         
478 |       
479 |

480 | 481 |
482 |
483 |

Pausing

484 |

Allow content authors to specify pauses before or after content to ensure the desired prosody of the 485 | presentation, which can affect the pronunciation of content that precedes or follows the 486 | pause.

487 |

CSS

488 |

489 | The ‘pause-before’ and ‘pause-after’ properties specify a prosodic boundary (silence 490 | with a specific duration) that occurs before (or after) the speech synthesis rendition of the selected element, 491 | or if any ‘cue-before’ (or ‘cue-after’) is specified, before (or after) the cue within 492 | the aural box model. 493 |

494 |

495 | 496 | Note that although the functionality provided by this property is similar to the break element from 497 | the SSML markup language [SSML], the application of ‘pause’ prosodic boundaries within the aural box model of 498 | CSS Speech requires special considerations (e.g. "collapsed" pauses). 499 | 500 |

501 |
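An assumed, illustrative example (class name invented here) of authoring pauses around a mathematical term with CSS Speech, as an alternative to the non-visible comma technique described in the requirements above:
Example:
<style>
  /* prosodic boundaries before and after the selected element */
  .math-term { pause-before: strong; pause-after: strong; }
</style>
<p>Solve for <span class="math-term">x</span> in the equation below.</p>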

SSML

502 |

503 | break represents a pause in the speech. Set the length of the pause with the strength 504 | or time attributes. 505 |

506 |

507 |

508 |         
509 |           <speak>
510 |           There is a three second pause here <break time="3s"/>
511 |           then the speech continues.
512 |           </speak>
513 |         
514 |       
515 |

516 | 517 |
518 |
519 |
520 | Acknowledgements placeholder 521 |
522 | 523 | 524 | 525 | -------------------------------------------------------------------------------- /use-cases/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Pronunciation Use Cases 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
14 | 15 | 16 |

The objective of the Pronunciation Task Force is to develop normative specifications and best practices guidance, collaborating with other W3C groups as appropriate, to provide for proper pronunciation in HTML content when using text to speech (TTS) synthesis. This document provides various use cases highlighting the need for standardization of pronunciation markup, to ensure consistent and accurate representation of the content. The requirements from the user scenarios provide the basis for these technical requirements/specifications.

17 |
18 | 19 |
20 | 21 |
22 | 23 |

Introduction

24 | 25 |

This document provides use cases which describe specific implementation approaches for introducing pronunciation 26 | and spoken presentation authoring markup into HTML5. These approaches are based on the two primary approaches 27 | that have evolved from the Pronunciation Task Force members. Other approaches may appear in subsequent working drafts. 28 |

29 |

Successful use cases will be those that provide ease of authoring and consumption by assistive technologies and user 30 | agents that utilize synthetic speech for spoken presentation of web content. The most challenging aspect of consumption may 31 | be alignment of the markup approach with the standard mechanisms by which assistive technologies, specifically screen 32 | readers, obtain content via platform accessibility APIs. 33 |

34 | 35 | 36 |
37 |
38 |

Use Case aria-ssml

39 |
40 |

Background and Current Practice

41 |

A new aria attribute could be used to include pronunciation content.

42 |
43 |
44 |

Goal

45 |

Embed SSML in an HTML document.

46 |
47 |
48 |

Target Audience

49 |
    50 |
  • Assistive Technology
  • 51 |
  • Browser Extensions
  • 52 |
  • Search Engines
  • 53 |
54 |
55 |
56 |

Implementation Options

57 |

aria-ssml as embedded JSON

58 |

When AT encounters an element with aria-ssml, the AT should enhance the UI by processing the pronunciation content and passing it to the Web Speech API or an external API (e.g., Google's Text to Speech API).

59 |
I say <span aria-ssml='{"phoneme":{"ph":"pɪˈkɑːn","alphabet":"ipa"}}'>pecan</span>.
 60 | You say <span aria-ssml='{"phoneme":{"ph":"ˈpi.kæn","alphabet":"ipa"}}'>pecan</span>.
61 |

Client will convert JSON to SSML and pass the XML string to a speech API.

62 |
var msg = new SpeechSynthesisUtterance();
 63 | msg.text = convertJSONtoSSML(element.getAttribute('aria-ssml'));
 64 | speechSynthesis.speak(msg);
65 |
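convertJSONtoSSML is left undefined in this use case; the following is a minimal sketch of what such a helper might do for the phoneme annotation shown above. The two-argument signature (taking the annotated element's text as well) is an assumption, not part of the use case.

// Hypothetical helper: build an SSML document from the embedded JSON annotation.
function convertJSONtoSSML(json, text) {
  const data = JSON.parse(json);
  let body = text;
  if (data.phoneme) {
    body = `<phoneme alphabet="${data.phoneme.alphabet}" ph="${data.phoneme.ph}">${text}</phoneme>`;
  }
  return `<speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis">${body}</speak>`;
}

// Example usage with the markup above:
// msg.text = convertJSONtoSSML(span.getAttribute('aria-ssml'), span.textContent);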

aria-ssml referencing XML by template ID

66 |
<!-- ssml must appear inside a template to be valid -->
 67 | <template id="pecan">
 68 | <?xml version="1.0"?>
 69 | <speak version="1.1"
 70 |        xmlns="http://www.w3.org/2001/10/synthesis"
 71 |        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 72 |        xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
 73 |                    http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
 74 |        xml:lang="en-US">
 75 |     You say, <phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</phoneme>.
 76 |     I say, <phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</phoneme>.
 77 | </speak>
 78 | </template>
 79 | 
 80 | <p aria-ssml="#pecan">You say, pecan. I say, pecan.</p>
81 |

Client will parse XML and serialize it before passing to a speech API:

82 |
var msg = new SpeechSynthesisUtterance();
 83 | var xml = document.getElementById('pecan').content.firstElementChild;
 84 | msg.text = serialize(xml);
 85 | speechSynthesis.speak(msg);
86 | 87 |
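serialize is likewise not defined by this use case; in a browser it could be a thin wrapper around the built-in XMLSerializer (a sketch, not a requirement of this approach):

// Hypothetical implementation of the serialize() call used above.
function serialize(node) {
  return new XMLSerializer().serializeToString(node);
}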

aria-ssml referencing an XML string as script tag

88 |
<script id="pecan" type="application/ssml+xml">
 89 | <speak version="1.1"
 90 |        xmlns="http://www.w3.org/2001/10/synthesis"
 91 |        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 92 |        xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
 93 |                    http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
 94 |        xml:lang="en-US">
 95 |     You say, <phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</phoneme>.
 96 |     I say, <phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</phoneme>.
 97 | </speak>
 98 | </script>
 99 | 
100 | <p aria-ssml="#pecan">You say, pecan. I say, pecan.</p>
101 |

Client will pass the XML string raw to a speech API.

102 |
var msg = new SpeechSynthesisUtterance();
103 | msg.text = document.getElementById('pecan').textContent;
104 | speechSynthesis.speak(msg);
105 |

aria-ssml referencing an external XML document by URL

106 |
<p aria-ssml="http://example.com/pronounce.ssml#pecan">You say, pecan. I say, pecan.</p>
107 |

Client will pass the string payload to a speech API.

108 |
var msg = new SpeechSynthesisUtterance();
109 | var response = await fetch(el.getAttribute('aria-ssml'));
110 | msg.text = await response.text();
111 | speechSynthesis.speak(msg);
112 |
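The snippet above speaks the entire fetched document. Because the aria-ssml value includes a fragment identifier, a fuller sketch (hypothetical; fragment resolution is not specified by this use case) might select the referenced element before speaking:

// Hypothetical: fetch external SSML, select the fragment (if any), and speak it.
async function speakExternalSSML(el) {
  const url = new URL(el.getAttribute('aria-ssml'), location.href);
  const response = await fetch(url);
  const doc = new DOMParser().parseFromString(await response.text(), 'application/xml');
  const target = url.hash
    ? doc.querySelector(`[id="${url.hash.slice(1)}"]`)
    : doc.documentElement;
  const msg = new SpeechSynthesisUtterance();
  msg.text = new XMLSerializer().serializeToString(target);
  speechSynthesis.speak(msg);
}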
113 |
114 |

Existing Work

115 | 120 |
121 |
122 |

Problems and Limitations

123 |
    124 |
  • aria-ssml is not a valid aria-* attribute.
  • 125 |
  • OS/browser combinations that do not support the serialized XML usage of the Web Speech API.
  • 126 |
127 |
128 | 129 |
130 | 131 |
132 |

Use Case data-ssml

133 |
134 |

Background and Current Practice

135 |

As an existing attribute, data-* could be used, with some conventions, to include pronunciation content.

136 |
137 |
138 |

Goal

139 |
    140 |
  • Support repeated use within the page context
  • 141 |
  • Support external file references
  • 142 |
  • Reuse existing techniques without expanding specifications
  • 143 |
144 |
145 |
146 |

Target Audience

147 |

Hearing users

148 |
149 |
150 |

Implementation Options

151 |

data-ssml as embedded JSON

152 |

When an element with data-ssml is encountered by an SSML-aware AT, the AT should enhance the user interface by processing the referenced SSML content and passing it to the Web Speech API or an external API (e.g., Google's Text to Speech API).

153 |
<h2>The Pronunciation of Pecan</h2>
154 | <p><speak>
155 | I say <span data-ssml='{"phoneme":{"ph":"pɪˈkɑːn","alphabet":"ipa"}}'>pecan</span>.
156 | You say <span data-ssml='{"phoneme":{"ph":"ˈpi.kæn","alphabet":"ipa"}}'>pecan</span>.
157 |

Client will convert JSON to SSML and pass the XML string to a speech API.

158 |
var msg = new SpeechSynthesisUtterance();
159 | msg.text = convertJSONtoSSML(element.dataset.ssml);
160 | speechSynthesis.speak(msg);
161 | 162 |

data-ssml referencing XML by template ID

163 |
<!-- ssml must appear inside a template to be valid -->
164 | <template id="pecan">
165 | <?xml version="1.0"?>
166 | <speak version="1.1"
167 |        xmlns="http://www.w3.org/2001/10/synthesis"
168 |        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
169 |        xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
170 |                    http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
171 |        xml:lang="en-US">
172 |     You say, <phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</phoneme>.
173 |     I say, <phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</phoneme>.
174 | </speak>
175 | </template>
176 | 
177 | <p data-ssml="#pecan">You say, pecan. I say, pecan.</p>
178 |

Client will parse XML and serialize it before passing to a speech API:

179 |
var msg = new SpeechSynthesisUtterance();
180 | var xml = document.getElementById('pecan').content.firstElementChild;
181 | msg.text = serialize(xml);
182 | speechSynthesis.speak(msg);
183 | 184 |

data-ssml referencing an XML string as script tag

185 |
<script id="pecan" type="application/ssml+xml">
186 | <speak version="1.1"
187 |        xmlns="http://www.w3.org/2001/10/synthesis"
188 |        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
189 |        xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
190 |                    http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
191 |        xml:lang="en-US">
192 |     You say, <phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</phoneme>.
193 |     I say, <phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</phoneme>.
194 | </speak>
195 | </script>
196 | 
197 | <p data-ssml="#pecan">You say, pecan. I say, pecan.</p>
198 |

Client will pass the XML string raw to a speech API.

199 |
var msg = new SpeechSynthesisUtterance();
200 | msg.text = document.getElementById('pecan').textContent;
201 | speechSynthesis.speak(msg);
202 |

data-ssml referencing an external XML document by URL

203 |
<p data-ssml="http://example.com/pronounce.ssml#pecan">You say, pecan. I say, pecan.</p>
204 |

Client will pass the string payload to a speech API.

205 |
var msg = new SpeechSynthesisUtterance();
206 | var response = await fetch(el.dataset.ssml);
207 | msg.text = await response.text();
208 | speechSynthesis.speak(msg);
209 |
210 |
211 |

Existing Work

212 | 217 |
218 |
219 |

Problems and Limitations

220 |
    221 |
  • Does not assume or suggest visual pronunciation help for deaf or hard of hearing users
  • 222 |
  • Use of data-* requires input from AT vendors
  • 223 |
  • XML data is not indexed by search engines
  • 224 |
225 |
226 |
227 | 228 |
229 |

Use Case HTML5

230 |
231 |

Background and Current Practice

232 |

HTML5 includes the XML namespaces for MathML and SVG. So, using either's elements in an HTML5 document is valid. Because SSML's implementation is non-visual in nature, browser implementation could be slow or non-existent without affecting how authors use SSML in HTML. Expanding HTML5 to include the SSML namespace would allow valid use of SSML in HTML5 documents. Browsers would treat the element like any other unknown element, as HTMLUnknownElement.

233 |
234 |
235 |

Goal

236 |
    237 |
  • Support valid use of SSML in HTML5 documents
  • 238 |
  • Allow visual pronunciation support
  • 239 |
240 |
241 |
242 |

Target Audience

243 |
    244 |
  • SSML-aware technologies and browser extensions
  • 245 |
  • Search indexers
  • 246 |
247 |
248 |
249 |

Implementation Options

250 |

SSML

251 |

When inline SSML content is encountered by an SSML-aware AT, the AT should enhance the user interface by processing that content and passing it to the Web Speech API or an external API (e.g., Google's Text to Speech API).

252 |
<h2>The Pronunciation of Pecan</h2>
253 |   <p><speak>
254 |   You say, <phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</phoneme>.
255 |   I say, <phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</phoneme>.
256 | </speak></p>
257 |
258 |
259 |

Existing Work

260 | 265 |
266 |
267 |

Problems and Limitations

268 |

SSML is not valid HTML5

269 |
270 |
271 | 272 |
273 |

Use Case Custom Element

274 |
275 |

Background and Current Practice

276 |

Embed valid SSML in HTML using custom elements registered as ssml-* where * is the actual SSML tag name (except for p which expects the same treatment as an HTML p in HTML layout).

277 |
278 |
279 |

Goal

280 |

Support use of SSML in HTML documents.

281 |
282 |
283 |

Target Audience

284 |
    285 |
  • SSML-aware technologies and browser extensions
  • 286 |
  • Search indexers
  • 287 |
288 |
289 |
290 |

Implementation Options

291 |

ssml-speak: see demo

292 |

Only the <ssml-speak> component requires registration. The component code lifts the SSML by getting the innerHTML, removing the ssml- prefix from the interior tags, and passing the result to the Web Speech API. The <p> tag from SSML is not given the prefix because we still want to start a semantic paragraph within the content. The other tags used in the example have no semantic meaning. Tags like <em> in HTML could be converted to <emphasis> in SSML, as sketched after the code below. In that case, CSS styles will come from the browser's default styles or the page author.

293 |
<ssml-speak>
294 |   Here are <ssml-say-as interpret-as="characters">SSML</ssml-say-as> samples.
295 |   I can pause<ssml-break time="3s"></ssml-break>.
296 |   I can speak in cardinals.
297 |   Your number is <ssml-say-as interpret-as="cardinal">10</ssml-say-as>.
298 |   Or I can speak in ordinals.
299 |   You are <ssml-say-as interpret-as="ordinal">10</ssml-say-as> in line.
300 |   Or I can even speak in digits.
301 |   The digits for ten are <ssml-say-as interpret-as="characters">10</ssml-say-as>.
302 |   I can also substitute phrases, like the <ssml-sub alias="World Wide Web Consortium">W3C</ssml-sub>.
303 |   Finally, I can speak a paragraph with two sentences.
304 |   <p>
305 |     <ssml-s>You say, <ssml-phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</ssml-phoneme>.</ssml-s>
306 |     <ssml-s>I say, <ssml-phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</ssml-phoneme>.</ssml-s>
307 |   </p>
308 | </ssml-speak>
309 | <template id="ssml-controls">
310 |   <style>
311 |     [role="switch"][aria-checked="true"] :first-child,
312 |     [role="switch"][aria-checked="false"] :last-child {
313 |       background: #000;
314 |       color: #fff;
315 |     }
316 |   </style>
317 |   <slot></slot>
318 |   <p>
319 |     <span id="play">Speak</span>
320 |     <button role="switch" aria-checked="false" aria-labelledby="play">
321 |       <span>on</span>
322 |       <span>off</span>
323 |     </button>
324 |   </p>
325 | </template>
326 |
class SSMLSpeak extends HTMLElement {
327 |   constructor() {
328 |     super();
329 |     const template = document.getElementById('ssml-controls');
330 |     const templateContent = template.content;
331 |     this.attachShadow({mode: 'open'})
332 |       .appendChild(templateContent.cloneNode(true));
333 |   }
334 |   connectedCallback() {
335 |     const button = this.shadowRoot.querySelector('[role="switch"][aria-labelledby="play"]')
336 |     const ssml = this.innerHTML.replace(/ssml-/gm, '')
337 |     const msg = new SpeechSynthesisUtterance();
338 |     msg.lang = document.documentElement.lang;
339 |     msg.text = `<speak version="1.1"
340 |       xmlns="http://www.w3.org/2001/10/synthesis"
341 |       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
342 |       xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
343 |         http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
344 |       xml:lang="${msg.lang}">
345 |     ${ssml}
346 |     </speak>`;
347 |     msg.voice = speechSynthesis.getVoices().find(voice => voice.lang.startsWith(msg.lang));
348 |     msg.onstart = () => button.setAttribute('aria-checked', 'true');
349 |     msg.onend = () => button.setAttribute('aria-checked', 'false');
350 |     button.addEventListener('click', () => speechSynthesis[speechSynthesis.speaking ? 'cancel' : 'speak'](msg))
351 |   }
352 | }
353 | 
354 | customElements.define('ssml-speak', SSMLSpeak);
355 |
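The <em>-to-<emphasis> conversion mentioned above is not implemented in the demo; one possible variant of the lifting step inside connectedCallback() is sketched here (an assumption, not part of the registered component):

// Map HTML emphasis to SSML emphasis before stripping the ssml- prefix.
const ssml = this.innerHTML
  .replace(/<(\/?)em>/g, '<$1emphasis>')
  .replace(/ssml-/gm, '');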
356 |
357 |

Existing Work

358 | 362 |
363 |
364 |

Problems and Limitations

365 |
    366 |
  • OS/browser combinations that do not support the serialized XML usage of the Web Speech API.
  • 367 |
  • Browsers may need to map SSML tags with CSS styles for default user agent styles.
  • 368 |
  • Without an extension or AT, only user interaction can start the Web Speech API.
  • 369 |
  • Authors or parsers may need to remove HTML content with unintended SSML semantics before serialization.
  • 370 |
371 |
372 |
373 | 374 |
375 |

Use Case JSON-LD

376 |
377 |

Background and Current Practice

378 |

JSON-LD provides an established standard for embedding data in HTML. Unlike other microdata approaches, JSON-LD helps to reuse standardized annotations through external references.

379 |
380 |
381 |

Goal

382 |

Support use of SSML in HTML documents.

383 |
384 |
385 |

Target Audience

386 |
    387 |
  • SSML-aware technologies and browser extensions
  • 388 |
  • Search indexers
  • 389 |
390 |
391 |
392 |

Implementation Options

393 |

JSON-LD

394 |
<script type="application/ld+json">
395 | {
396 |   "@context": "http://schema.org/",
397 |   "@id": "/pronunciation#WKRP",
398 |   "@type": "RadioStation",
399 |   "name": ["WKRP",
400 |     "@type": "PronounceableText",
401 |     "textValue": "WKRP",
402 |     "speechToTextMarkup": "SSML",
403 |     "phoneticText": "<speak><say-as interpret-as=\"characters\">WKRP</say-as>"
404 |   ]
405 | }
406 | </script>
407 | <p>
408 |   Do you listen to <span itemscope
409 |     itemtype="http://schema.org/PronounceableText"
410 |     itemid="/pronunciation#WKRP">WKRP</span>?
411 | </p>
412 |
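A hedged sketch of how an SSML-aware client might resolve the itemid reference against the embedded JSON-LD and speak the phoneticText. The property and type names are taken from the example above; the lookup logic itself is an assumption:

// Read the JSON-LD block and find the node whose @id matches the element's itemid.
const data = JSON.parse(
  document.querySelector('script[type="application/ld+json"]').textContent
);
const el = document.querySelector('[itemtype="http://schema.org/PronounceableText"]');
const node = [].concat(data).find(n => n['@id'] === el.getAttribute('itemid'));

// The name property mixes a plain string with a PronounceableText object; pick the latter.
const pronounceable = node.name.find(
  v => typeof v === 'object' && v['@type'] === 'PronounceableText'
);
const msg = new SpeechSynthesisUtterance();
msg.text = pronounceable.phoneticText; // raw SSML string
speechSynthesis.speak(msg);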
413 |
414 |

Existing Work

415 | 423 |
424 |
425 |

Problems and Limitations

426 |

not an established "type"/published schema

427 |
428 |
429 |
430 |

Use Case Ruby

431 |
432 |

Background and Current Practice

433 |
Ruby annotations are short runs of text presented alongside base text, primarily used in East Asian typography as a guide for pronunciation or to include other annotations. 434 |
434 |

ruby guides pronunciation visually. This seems like a natural fit for text-to-speech.

435 |
436 |
437 |

Goal

438 |
    439 |
  • Support use of SSML in HTML documents.
  • 440 |
  • Offer visual pronunciation support.
  • 441 |
442 |
443 |
444 |

Target Audience

445 |
    446 |
  • AT and browser extensions
  • 447 |
  • Search indexers
  • 448 |
449 |
450 |
451 |

Implementation Options

452 |

ruby with microdata

453 |

Microdata can augment the ruby element and its descendants.

454 |
<p>
455 |   You say,
456 |   <span itemscope="" itemtype="http://example.org/Pronunciation">
457 |     <ruby itemprop="phoneme" content="pecan">
458 |       pecan
459 |       <rt itemprop="ph">pɪˈkɑːn</rt>
460 |       <meta itemprop="alphabet" content="ipa">
461 |     </ruby>.
462 |   </span>
463 |   I say,
464 |   <span itemscope="" itemtype="http://example.org/Pronunciation">
465 |     <ruby itemprop="phoneme" content="pecan">
466 |       pe
467 |       <rt itemprop="ph">ˈpi</rt>
468 |       can
469 |       <rt itemprop="ph">kæn</rt>
470 |       <meta itemprop="alphabet" content="ipa">
471 |     </ruby>.
472 |   </span>
473 | </p>
474 |
475 |
476 |

Existing Work

477 | 481 |
482 |
483 |

Problems and Limitations

484 |
    485 |
  • AT may process annotations as content
  • 486 |
  • AT "double reading" words instead of choosing either the content or the annotation
  • 487 |
  • Only covers a few SSML expressions
  • 488 |
  • Difficult to reuse by reference
  • 489 |
490 |
491 |
492 | 493 |
494 |

User Scenarios

495 |

The purpose of developing user scenarios is to facilitate discussion and further requirements definition for pronunciation standards developed within the PTF prior to review by the APA. There are numerous interpretations of what form user scenarios adopt. Within the user experience research (UXR) body of practice, a user scenario is a written narrative related to the use of a service from the perspective of a user or user group. Importantly, the context of use is emphasized, as is the desired outcome of use. There are potentially thousands of user scenarios for a technology such as TTS; however, the focus for the PTF is on the core scenarios that relate to the kinds of users who will engage with TTS.

496 |

User scenarios, like personas, represent a composite of real-world experiences. In the case of the PTF, the scenarios were derived from interviews with people who were end-consumers of TTS, as well as submitted narratives and industry examples from practitioners. The scenarios take several formats: some are general goal- or task-oriented scenarios, while others elaborate on richer context, for example, educational assessment.

497 |

The following user scenarios are organized on the three perspectives of TTS use derived from analysis of the qualitative data collected from the discovery work:

498 |
    499 |
  • End-Consumers of TTS: Encompasses those with a visual disability or other need to have TTS operational when using assistive technologies (ATs).
  • 500 |
  • Digital Content Managers: Addresses activities related to those responsible for producing content that needs to be accessible to ATs and W3C-WAI Guidelines.
  • 501 |
  • Software Engineers: Includes developers and architects required to put TTS into an application or service.
  • 502 |
503 |

Need to add the other categories, or remove the list above and just rely on the ToC.

504 |
505 | 506 |
507 |

Augmentative and Alternative Communication (AAC)

508 |
509 |

Names

510 |

As an AAC user, I want my name to be pronounced correctly, and I want to pronounce others' names correctly using my AAC device.

511 |
512 |

Storing others' names

513 |

As an AAC user, I want to be able to input and store the correct pronunciation of others’ names, so I can address people respectfully and build meaningful relationships.

514 |

For instance, when meeting someone named “Nguyễn,” the AAC user wants to ensure their device pronounces the name correctly, using IPA or SSML markup, to foster respectful communication and avoid embarrassment.

515 |
516 |
517 |

Pronouncing my name

518 |

As an AAC user, I want my name to be pronounced correctly by my device, so that I can confidently introduce myself in social, educational, and professional settings.

519 |

For example, a user named “Siobhán” may find that default TTS engines mispronounce her name. She wants to input a phonetic or SSML-based pronunciation so that her name is spoken accurately every time.

520 |
521 |
522 |
523 | 524 |
525 |

End-Consumer of TTS

526 |

Ultimately, the quality and variation of TTS rendering by assistive technologies vary widely according to a user's context. The following user scenarios reinforce the necessity for accurate pronunciation from the perspective of those who consume digitally generated content.

527 |
528 |

Traveller

529 |

As a traveler who uses assistive technology (AT) with TTS to help navigate through websites, I need to hear arrival and destination codes pronounced accurately so I can select the desired travel itinerary. For example, a user with a visual impairment attempts to book a flight to Ottawa, Canada and so goes to a travel website. The user already knows the airport code and enters "YOW". The site produces the result in a drop-down list as "Ottawa, CA" but the AT does not pronounce the text accurately to help the user make the correct association between their data entry and the list item.

530 |
531 |
532 |

Test Taker

533 |

As a test taker (tester) with a visual impairment who may use assistive technology to access the test content with speech software, a screen reader, or a refreshable braille device, I want the content to be presented as intended, with accurate pronunciation and articulation, so that my assessment accurately reflects my knowledge of the content.

534 |
535 |
536 |

Student

537 |

As a student/learner with auditory and cognitive processing issues, I find it difficult to distinguish sounds, inflections, and variations in pronunciation as rendered through synthetic voice, such as text-to-speech or screen reader technologies. Consistent and accurate pronunciation, whether human-provided, external, or embedded, is needed to support executive processing, auditory processing, and working memory, which together facilitate comprehension in literacy and numeracy for learning and for assessments.

538 |
539 |
540 |

English learner

541 |

For an English Learner (EL) or a visually impaired early learner using speech synthesis for reading comprehension that includes decoding words from letters as part of the learning construct (intent of measurement), pronunciation accuracy is vital to successful comprehension, as it allows the learner to distinguish sounds at the sentence, word, syllable, and phoneme level.

542 |
543 |
544 | 545 |
546 |

Digital Content Management for TTS

547 |

The advent of graphical user interfaces (GUIs) for managing and editing text content means that content creators no longer require technical expertise beyond the ability to operate a text editing application such as Microsoft Word. The following scenario summarizes the general use, accompanied by a hypothetical application.

548 |
    549 |
  • As a content creator, I want to create content that can readily be delivered through assistive technology, can convey the correct meaning, and can ensure that screen readers render the right pronunciation based on the surrounding context.
  • 550 |
  • As a content producer for a global commercial site that is inclusive, I need to be able to provide accessible culture-specific content for different geographic regions.
  • 551 |
552 | 553 |
554 |

Educational Assessment

555 |

In the educational assessment field, providing accurate and concise pronunciation for students with auditory accommodations, such as text-to-speech (TTS) or screen readers, is vital for ensuring content validity and alignment with the intended construct, which objectively measures a test taker's knowledge and skills. For test administrators/educators, pronunciations must be consistent across instruction and assessment in order to avoid test bias or impact effects for students. Some additional requirements for test administrators include, but are not limited to, the following scenarios:

556 |
557 |

Test Administrator—Read-aloud intonation, expression

558 |

As a test administrator, I want to ensure that students with the read-aloud accommodation, who are using assistive technology or speech synthesis as an alternative to a human reader, receive the same speech quality (e.g., intonation, expression, pronunciation, and pace) as spoken language.

559 |

This may be similar to the other Test Administrator case below?

560 |
561 |
562 |

Math educator

563 |

As a math educator, I want to ensure that mathematical expressions, including numbers, fractions, and operations, are pronounced accurately for those who rely on TTS. Some mathematical expressions require special pronunciations to ensure accurate interpretation while maintaining test validity and construct. Specific examples include:

564 |
565 |

Formulas

566 |

Mathematical formulas written in simple text rely on special formatting to convey the correct meaning of the expression, identifying changes from normal text to superscript or subscript text. For example, without the proper formatting, the equation a³ − b³ = (a − b)(a² + ab + b²) may incorrectly render through some technologies and applications as a3-b3=(a-b)(a2+ab+b2).

567 |
568 |
569 |

Distinctions in writing

570 |

Distinctions made in writing are often not made explicit in speech. For example, “fx” may be intended as fₓ (f subscript x), f(x) (f of x), the product f·x, or the spoken letters “F X”. The distinction depends on the context, requiring the author to provide consistent and accurate semantic markup.

571 |
572 |
573 |

Greek letters

574 |

For math equations with Greek letters, it is important that the speech synthesizer be able to distinguish the phonetic differences between them, whether in the natural language or phonetic equivalents. For example, ε (epsilon), υ (upsilon), φ (phi), χ (chi), and ξ (xi).

575 |
576 |
577 |
578 |

Test Administrator—consistent pronunciation

579 |

As a test administrator/educator, I need pronunciations to be consistent across instruction and assessment, in order to avoid test bias and pronunciation effects on performance for students with disabilities (SWD) in comparison to students without disabilities (SWOD). Examples include:

580 |
581 |

Spelling out rhyming words

582 |

If a test question is measuring rhyming of words or sounds of words, the speech synthesis should not read aloud the words, but rather spell out the words in the answer options.

583 |
584 |
585 |

Questions measuring spelling

586 |

If a test question is measuring spelling and the student needs to consider spelling correctness/incorrectness, the speech synthesis should not read aloud the misspelt words, especially for words such as:

587 |
    588 |
  • Heteronyms/homographs: same spelling, different pronunciation, different meanings, such as lead (to go in front of) or lead (a metal); wind (to follow a course that is not straight) or wind (a gust of air); bass (low, deep sound) or bass (a type of fish), etc.
  • 589 |
  • Homophone: words that sound alike, such as, to/two/too; there/their/they're; pray/prey; etc.
  • 590 |
  • Homonyms: multiple meaning words, such as scale (measure) or scale (climb, mount); fair (reasonable) or fair (carnival); suit (outfit) or suit (harmonize); etc.
  • 591 |
592 |
593 |
594 |
595 | 596 |
597 |

Academic and Linguistic Practitioners

598 |

Content management for TTS also extends to encoding and preserving spoken text for academic analyses, irrespective of discipline, subject domain, or research methodology.

599 |
600 |

Linguist

601 |

As a linguist, I want to represent all the pronunciation variations of a given word in any language, for future analyses.

602 |
603 |
604 |

Speech Language Pathologist, Speech Therapist

605 |

As a speech-language pathologist or speech therapist, I want TTS functionality to include components of speech and language that reflect dialectal and individual differences in pronunciation; identify differences in intonation, syntax, and semantics; and allow for enhanced comprehension and language processing and support phonological awareness.

606 |
607 |
608 |
609 | 610 |
611 |

Software Application Development

612 |

Technical standards for software development assist organizations and individuals in providing accessible experiences for users with disabilities. The final user scenarios in this document are considered from the perspective of those who design and develop software.

613 |

Probably shouldn't use "final" here, as we may re-order.

614 |
615 |

Product owner

616 |

As a Product Owner for a web content management system (CMS), I want the next software product release to have the capability of pronouncing speech "just like Alexa can".

617 |
618 |
619 |

Client-side User Interface Developer

620 |

As a client-side user interface developer, I need a way to render text content, so it is spoken accurately with assistive technologies.

621 |
622 |
623 | 624 |
Acknowledgements placeholder
625 | 626 | 627 | 628 | --------------------------------------------------------------------------------